AI, data and foodservice: risk and reward

New technologies bring new challenges and artificial intelligence is no exception. Amelia Levin outlines the potential risks that foodservice needs to watch out for on new AI platforms

Until recently, artificial intelligence (AI) in the foodservice industry has centered primarily on the use of robots and robotic arms that can flip burgers, make fries, mix salads and pour coffee drinks. But as “generative AI” gains traction in the media and creative worlds, expect to see expanded capabilities for its use in menu writing, website design, pricing, accounting, customer service and even everyday foodservice operations.

This next generation of AI, centered on platforms such as OpenAI’s ChatGPT, has led to a crop of new lawsuits and legal questions around copyright, rights and infringement. The National Restaurant Association’s Restaurant Law Center recently hosted a webinar during which James Gatto, partner at Sheppard Mullin LLP, addressed the legal implications and best practices for minimizing the risks associated with AI technology as more restaurants and foodservice organizations turn to AI to help with a wide range of functions, including intelligent ordering assistants, computer vision-based quality control, restocking kitchen ingredients, cloud-based cashiers and automatic responses to online reviews.

AI goes beyond machine learning, where the more data the processor receives, the better it can provide answers, Gatto explained in the webinar. Generative AI, on the other hand, involves processors that “appear to mimic human intelligence and understand what someone is saying, not just see the words.” Once “trained,” generative AI can be used to produce content in many forms, including blog posts, articles, web copy, images, videos and audio programming.

“If you want to train a model to generate images of animals, you’ll show it a lot of images [of animals] and it will learn the difference between a monkey and a dog,” Gatto said in the webinar.

The source of the legal issues, however, centers on what programmers use to train generative AI platforms. So many book authors have filed lawsuits alleging their works were used as training data that a reporter at The Atlantic built a database allowing authors to check whether their books were used in training.

Somewhat shockingly, Gatto says, “There are a number of booksellers on Amazon that are creating books using AI and putting author names on them without their knowledge.”

In August, a ruling in a Washington, D.C. federal court set a new precedent, indicating that art generated from simple prompts is likely uncopyrightable. Future cases and rulings will be needed for judges to weigh how much human input is required to claim copyright in AI-generated artwork and other creative material.

Pay attention

While seemingly limited to the media and art worlds, the legal issues around generative AI platforms are slowly making their way into other industries, including foodservice, chiefly among larger restaurant chains and foodservice companies investing more heavily in AI to do their own model training and coding. There are several ways restaurants and foodservice companies are using AI currently, Gatto pointed out. Those uses include “menu development, ordering and order displays, voice recognition and facial recognition for ordering, order predictions, prompting (‘If you like that, you might like this’), order tracking (systems that figure out how long orders will take to be completed), and there are even uses for prepping the food, using imaging for quality control and consistency, and uses for customer retention, loyalty and marketing.” There are also AI code generators used for website design, including content and imagery.

Companies in all industries should pay attention to current regulatory actions; OpenAI and ChatGPT are currently under investigation by the Federal Trade Commission (FTC). “There’s a demand letter requesting information about how [these platforms] train models and manage ‘hallucinations,’” which Gatto says happen when the systems go a little “haywire” and produce faulty information.

“If you want a roadmap, look at that 20-page letter and the questions FTC is asking; companies using AI will need to be prepared to answer those questions,” he said. “You have to make sure you have the right data; using copyright material off the web may be infringement, and there’s also the question of what’s ‘fair use.’”

Material deemed fair use could include books scanned by libraries to create digital collections, but not books used to generate new content. “If your company is using AI in generative ways, you need to have a policy on use of AI so employees understand what the risks are, and you have mitigation throughout the organization,” Gatto said.

Now, that’s certainly food for [artificial] thought.

Amelia Levin
