AI Primer for F&B Execs

As a Food & Beverage (F&B) technology executive, what do you need to know about Artificial Intelligence (AI) to ensure your business remains competitive? This article aims to provide an exec-level overview of AI. Once you have a baseline understanding of AI, you can start assessing how it applies to F&B.

As I began exploring AI, I found that there wasn’t a consistent way of organizing and describing the many overlapping disciplines, technologies, approaches, and applications related to AI. After digging through the details from many sources, I did my best to set aside the sheer number of AI applications and focus on extracting, organizing, and summarizing the types and disciplines of AI.

I’m using the content in this article to organize and structure my own understanding of AI into a framework for continued learning. My hope is that F&B technology executives find value in this framework for structuring their own learning. I encourage you to evolve the framework yourself, since AI is a rapidly moving target and parts of it may already be out of date by the time you read this.

What is Artificial Intelligence?

There are many different definitions of Artificial Intelligence (AI) but I like to generalize AI as anything related to computers exhibiting human-like intelligence. From the perspective of a business executive, AI can be applied to improve and automate decision-making in any area of your business where data is being captured.

Levels of AI

AI research and development efforts are applied to accomplish goals within one or more of these three levels of intelligence:

  • Artificial Narrow Intelligence (ANI): considered “weak” AI (whereas AGI and ASI are considered “strong” AI); has the ability to complete a very specific task.
  • Artificial General Intelligence (AGI): would perform on par with a human’s intelligence and ability.
  • Artificial Super Intelligence (ASI): would surpass a human’s intelligence and ability.

The AI space evolves and innovates continuously but here are the primary disciplines at the time of this writing.

AI Disciplines

In researching AI, it became clear that there is no consistent way of organizing all the different disciplines, so here’s the organization that made the most sense to me:

  • Machine Learning (learning and problem-solving)
  • Machine Perception (seeing, hearing, touching, smelling)
  • Natural Language Processing (communicating)
  • Robotics (physical applications)
  • Social Intelligence (emotions)

There are other disciplines, but these five cover most of the AI research and development taking place today. Here is a map of these five disciplines and their sub-disciplines that I will summarize below:

As we continue to explore each discipline, I recommend looking at each in the context of how you would create a robot to mimic a human. Which capabilities would need to be replicated using technology? Using this context will help you organize all the different technologies in your mind. Then later we can look at which of these AI technologies can be applied in the F&B space.

Machine Learning

Machine Learning (ML) is the ability of a system to learn and improve from experience automatically, without being explicitly programmed. This is achieved using algorithms that discover patterns and generate insights from the data they are exposed to. Many ML applications are built on a neural network (a network of algorithmic calculations attempting to mimic the perception and thought processes of the human brain). A basic neural network consists of the following layers:

  • Input Layer
  • Hidden Layer
  • Output Layer
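As a concrete (if toy) illustration, here is a minimal sketch of those three layers in Python with NumPy. The layer sizes (3 inputs, 4 hidden units, 1 output), the random weights, and the sigmoid activation are all illustrative assumptions, not part of any real system:

```python
import numpy as np

# A toy forward pass through the three layers of a basic neural network.
# Layer sizes and the random weights are illustrative assumptions only.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Input layer: one sample with 3 features
x = np.array([0.5, -0.2, 0.1])

# Hidden layer: a weighted sum of the inputs, passed through a nonlinearity
W_hidden = rng.normal(size=(3, 4))
hidden = sigmoid(x @ W_hidden)

# Output layer: a weighted sum of the hidden activations -> one prediction
W_out = rng.normal(size=(4, 1))
output = sigmoid(hidden @ W_out)

print(output.shape)  # (1,)
```

In a real system the weights are not random; they are learned from data, which is where the training process described next comes in.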

The two primary disciplines within Machine Learning are Deep Learning and Statistical Learning:

Deep Learning

Deep Learning (DL) is a subset of Machine Learning focused on a system’s ability to mimic the neural networks of the human brain. It can make sense of patterns, noise, and sources of confusion in the data. These neural networks consist of multiple hidden layers, each of which further refines the conclusions of the previous layer:

As data moves through a model’s hidden layers, the calculations of each layer are weighted and refined by increasingly complex algorithms to produce the output. This forward movement through the layers is called forward propagation. A complementary process called backpropagation measures the error in the output and works backward through the layers, adjusting each layer’s weights and biases to reduce that error and train the model. As a result, the system becomes more efficient and accurate over time as it processes large amounts of data.
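The training loop itself can be sketched in a few lines. The example below assumes a tiny 2-2-1 sigmoid network learning the logical AND function with plain gradient descent and mean squared error; real deep learning systems use far larger networks and more sophisticated optimizers, but the forward/backward rhythm is the same:

```python
import numpy as np

# A minimal sketch of forward propagation and backpropagation,
# assuming a toy 2-2-1 sigmoid network and the AND truth table.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [0], [0], [1]], dtype=float)  # AND labels

W1 = rng.normal(scale=0.5, size=(2, 2))
b1 = np.zeros((1, 2))
W2 = rng.normal(scale=0.5, size=(2, 1))
b2 = np.zeros((1, 1))
lr = 0.5  # learning rate

def forward(X):
    h = sigmoid(X @ W1 + b1)    # hidden layer activations
    out = sigmoid(h @ W2 + b2)  # output layer prediction
    return h, out

_, out0 = forward(X)
initial_loss = float(np.mean((out0 - y) ** 2))

for _ in range(2000):
    # Forward propagation: input -> hidden -> output
    h, out = forward(X)
    # Backpropagation: push the output error back through each layer
    d_out = (out - y) * out * (1 - out)   # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer
    # Gradient-descent updates to weights and biases
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

_, out_final = forward(X)
final_loss = float(np.mean((out_final - y) ** 2))
print(final_loss)  # noticeably smaller than initial_loss
```

Each pass refines the weights slightly, which is exactly the “learns and gets more accurate over time” behavior described above, just at toy scale.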

Within the discipline of Deep Learning, there are two primary types of neural networks:

  • Convolutional Neural Network (CNN): used primarily in computer vision applications to recognize features, patterns, and objects in complex images.
  • Recurrent Neural Network (RNN): works with sequential or time-series data (ex: language translation, natural language processing (NLP), speech recognition, and image captioning). RNNs differ from CNNs in that they leverage a “memory” of prior inputs and outputs.

This is a very high-level overview of Deep Learning.

Statistical Learning

Another subset of ML is Statistical Learning, a set of tools that use statistics and functional analysis to build predictive models based on the data. Learning falls into many categories, including:

  • Supervised: a type of ML that learns from a training set of human-labelled data, where each input is mapped to a known output (the most common and best-understood category).
  • Unsupervised: a type of ML that identifies previously undetected patterns in a data set with no pre-existing labels and a minimum of human supervision.
  • Online: a type of ML in which the model learns incrementally from a continuous stream of data rather than from a fixed training set.
  • Reinforcement: a type of ML in which agents take actions in an environment set up to maximize a cumulative system of rewards.
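To make the first two categories concrete, here is a small sketch contrasting supervised and unsupervised learning on toy data. The data values, the least-squares line fit, and the two-cluster split are all illustrative assumptions:

```python
import numpy as np

# --- Supervised: labelled pairs, so the input-to-output mapping is learned
# directly (here, a simple least-squares line fit). ---
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # labels: roughly y = 2x
slope, intercept = np.polyfit(x, y, deg=1)
prediction = slope * 6.0 + intercept        # predict an unseen input

# --- Unsupervised: no labels; structure is discovered from the data
# itself (a one-dimensional k-means with k=2). ---
points = np.array([1.0, 1.2, 0.9, 8.0, 8.3, 7.9])
centers = np.array([points.min(), points.max()])
for _ in range(10):
    # Assign each point to its nearest center, then move the centers
    labels = np.abs(points[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([points[labels == k].mean() for k in (0, 1)])

print(round(float(slope), 2))   # close to 2.0
print(sorted(labels.tolist()))  # two groups of three points each
```

The supervised model needed the answer key (`y`); the unsupervised one found the two groups on its own, which is the essential distinction between the categories.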

Machine Perception

In order for machines to be more human-like, they will need the ability to perceive their environment using the senses of vision, hearing, touch, and smell:

  • Vision: focused on how computers can gain high-level understanding from digital images or videos. Computer vision has many applications, including facial recognition, object recognition, geographic modeling, and even aesthetic judgment (see Social Intelligence below).
  • Hearing: focused on how computers can take in and process sound data such as music or speech.
  • Touch: focused on a machine’s perception of tactile information, including perception of surface properties, dexterity, reflexes, and interaction with the environment.
  • Smell: focused on a machine’s ability to recognize and measure smells.

These AI technologies, especially vision and hearing, will be very important in the F&B arena.

Natural Language Processing (NLP)

Computers have been communicating with humans in one way or another since their inception. The discipline of Natural Language Processing (NLP) started in the 1950s and is focused on a machine’s ability to read and understand human languages. There are two subtopics within NLP:

  • Natural Language Understanding (NLU): focuses on a machine’s comprehension of grammar and context to determine the meaning of a sentence.
  • Natural Language Generation (NLG): focuses on a machine’s ability to plan and construct text, sentences, and paragraphs with proper grammar in a specific language.

The NLP discipline leverages machine learning and deep learning models to extract, classify, and label elements of text and voice data, assign a statistical likelihood to each possible meaning of those elements, and apply deep learning neural networks to improve accuracy over time.
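The likelihood-scoring idea can be sketched with a deliberately simple, hand-built example. The word profiles and intent labels below are invented for illustration; real NLP systems learn these statistics from large volumes of data rather than from hard-coded phrases:

```python
from collections import Counter

# Toy "intent" classifier: score a sentence against two hand-built
# word-frequency profiles and pick the more likely intent. The profiles
# and labels are invented assumptions for illustration only.
ORDER_WORDS = Counter("i would like to order a pizza and a drink please".split())
COMPLAINT_WORDS = Counter("my order was cold and late i want a refund".split())

def score(sentence, profile):
    # Likelihood proxy: sum how often each word of the sentence
    # appears in the profile (unknown words score zero).
    return sum(profile[w] for w in sentence.lower().split())

def classify(sentence):
    s_order = score(sentence, ORDER_WORDS)
    s_complaint = score(sentence, COMPLAINT_WORDS)
    return "order" if s_order >= s_complaint else "complaint"

print(classify("I want to order a pizza"))  # prints "order"
```

Swap the hand-counted profiles for statistics learned by a deep learning model and you have, in miniature, the extract-classify-score loop described above.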


Robotics

Research and development in robotics is primarily focused on developing machines that can substitute for humans and replicate human actions. Here are the primary components of robotics:

  • Power: focused on sources to power robots, including pneumatic (compressed gases), solar power (converting the sun’s energy to electricity), hydraulics (liquids), flywheel energy storage, organic garbage (through anaerobic digestion), and nuclear.
  • Actuators: focused on the “muscles” of a robot, converting stored energy into movement, including electric motors, linear actuators (in/out movements), series elastic actuators (shock absorption), air muscles (expand with air), muscle wire (contracts when electricity is applied), electroactive polymers (plastics that contract substantially when electricity is applied), ultrasonic motors (linear or rotary motion), and elastic nanotubes (compact filaments that deform elastically by several percent).
  • Sensors: focused on receiving information about the external environment or internal components, including touch, vision, lidar, radar, and sonar.
  • Manipulators: focused on a robot’s control of its environment through selective contact, including mechanical grippers (hands, jaws, etc) and suction end-effectors (attach/grip with suction).
  • Mobility: focused on a robot’s ability to move, including rolling, two-wheeled balancing, one-wheeled balancing, spherical orb, six-wheeled, tracked, walking, hopping, flying, snaking, skating, climbing, swimming, and sailing robots.
  • Navigation: focused on the ability for a robot to operate autonomously in a dynamic environment (ex: self-driving cars, drones, etc).
  • Interaction: focused on how robots can interact with humans (home and other non-industrial environments), including speech recognition, talking, gestures, facial expressions, artificial emotions, personality, and social intelligence (see next discipline).

Robotics will have considerable impact in the F&B space given the challenges with labor supply, turnover, and continually rising costs.

Social Intelligence

Applying social intelligence to computers includes both detecting and recognizing emotional information as well as adding emotion to machines. Here are some of the technologies focused on social intelligence in machines:

  • Emotional Speech: focused on detecting emotions in speech (ex: fear, anger, joy).
  • Facial Expression: focused on detecting emotional state based on facial expressions, in conjunction with gestures and speech.
  • Gestures: focused on detecting emotional state based on gestures, in conjunction with facial expression and speech.
  • Physiological Monitoring: focused on detecting emotional state by monitoring and analyzing physiological signs (ex: heart rate, skin conductance, minute contractions of facial muscles, and changes in facial blood flow).
  • Visual Aesthetics: focused on applying subjective discrimination between aesthetically pleasing and displeasing images.

These technologies will definitely have applications in the F&B space, especially in the area of customer service.

There you have it, an exec-level overview of Artificial Intelligence. Here is an image of the categories covered above to help structure your continued learning about AI:

My hope is that this article will give F&B executives a base-level understanding of AI. Hopefully this framework will help as we continue exploring each discipline and begin looking at how AI is and will be impacting the F&B space.

Sources: most of the information in this article is summarized from two primary resources:
– Wikipedia: Artificial Intelligence, Machine Learning, Deep Learning, Machine Perception, Natural Language Processing, Robotics, Affective Computing
– IBM: Artificial Intelligence, Machine Learning, Deep Learning, Natural Language Processing