Sunday, December 22, 2024
Business, Food + Hospitality

Go Ahead and Make Your AI Recipe. It Won’t Be Good.

Lille Allen

Keep your computer away from my food

In the mid-2010s, the scientist and engineer Janelle Shane made a name for herself by exposing the ridiculousness of the neural net. Her blog, AI Weirdness, chronicles what happens when she trains neural networks on everything from paint colors to animal names. Multiple times, Shane has tried to feed neural networks databases of recipes, only for them to spit out complete nonsense. A recipe for “small sandwiches” from 2017 included the measurement “1 salad dressing.” Another from that year was given the name “BAKED OTHER LIE 1993 CAKE,” and instructed, “if on the side, as becomes warmed, carefully frost them with a sauce.” Shane uses her blog to showcase what neural networks can and cannot do, and readers walk away understanding that these tools, while impressive, do not have any semblance of what we know as intelligence or critical thought. They simply regurgitate patterns.

Of course, AI has come a long way from Shane’s experiments in the 2010s. It can now create recipes that can actually be followed, with obligatory stunt blogs following in their wake, all trying to answer the question of whether AI-generated recipes are any good. While that question is far from settled, it hasn’t stopped tech optimists and venture capitalists with a foodie bent from throwing all their hopes into the technology. Last month, BuzzFeed launched “Botatouille,” a chatbot that recommends recipes from the company’s food vertical, Tasty. Startup CloudChef is claiming to use AI to digitize not just recipes but a chef’s techniques, to guide staff so that “someone who doesn’t know a scallion from a shallot will cook up a Michelin Guide-worthy plate of chicken pulao,” as Eater SF editor Lauren Saria put it.

Despite the enthusiasm from deep-pocketed investors, by most accounts AI-generated recipes are still not very good. Priya Krishna wrote that an AI-generated menu supposedly designed precisely to her tastes gave her mushy chaat and dry turkey (it called for no butter or oil). Chef Donald Mullikin had to make his own adjustments to recipes because ChatGPT kept suggesting the wrong kind of pepper, and didn’t include salt. Recently, I attended a chili cookoff in which one contestant raved that their bone marrow chili came from typing “bone-marrow chili” into ChatGPT. The result was bland and mealy, with barely a sign of the alluring bone marrow. And my attempts to use Botatouille resulted in disappointment; requests to use non-Western ingredients like hing powder and ong choy were met with recipes that didn’t include them, and an ask for low-FODMAP Mexican recipes brought up three options with high-FODMAP ingredients. Simply asking for a recipe that uses both cabbage and tomato summoned three tomato-heavy recipes with no cabbage in sight.

At the core of any technology is the promise that it will fix a problem. There is sunscreen for when your problem is getting sunburned, and the printing press for when your problem is the church keeping the masses illiterate. But the goal of any capitalist venture is to tell you which problems you need fixed, and mostly, to convince you that your biggest problem is not having the thing it’s offering you.

Unfortunately, these tools as they currently exist don’t solve any problems in the kitchen. If the problem is not having a recipe for pasta salad in front of you, search engines can produce that. If the problem is ensuring a recipe is from a trusted and reliable source, the amalgamation of information coming from these language models doesn’t actually give you anything more trustworthy, and in fact obscures that knowledge. If the problem is that you don’t know how to scan a recipe and tell whether it will turn out well, AI can’t teach you.

On some level, I understand the person who made the bone marrow chili. It’s easy to picture ChatGPT as some sort of mega brain. What if you could take all the recipes in the world for something, mash them together, and from that come up with one uber recipe? Surely it would be the best one, right?

This is not how ChatGPT or any other neural network works. “AI platforms recover patterns and relationships, which they then use to create rules, and then make judgments and predictions, when responding to a prompt,” writes the Harvard Business Review. In the New Yorker, Ted Chiang compares ChatGPT to a blurry, lossy JPEG — it can mimic the original, but “if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation.” It doesn’t operate all that differently from a more traditional search engine like Google, but while those may give you direct quotes or primary sources, ChatGPT gives you a summary of that information, based on what it thinks you’re looking for, without the ability to check the sources it’s pulling from.
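
To make the pattern-regurgitation point concrete, here is a deliberately crude sketch, assuming nothing about how ChatGPT is actually built: a toy word-level Markov chain trained on a few invented recipe lines. The recipes, function names, and everything else below are made up for illustration; the point is only that a model like this learns which words tend to follow which, so it can produce fluent-sounding instructions while citing nothing and understanding nothing.

```python
import random
from collections import defaultdict

# Toy illustration only: a word-level Markov chain over a few made-up recipe
# lines. Real language models are vastly more sophisticated, but the output
# is still a remix of patterns in the training text, with no source, no
# author, and no intent behind it.

TRAINING_RECIPES = [
    "whisk the eggs with sugar and fold in the flour",
    "fold the flour into the butter and bake until golden",
    "simmer the tomatoes with garlic and season with salt",
    "season the chicken with salt and simmer until tender",
]

def train(lines):
    """Record which words were observed to follow each word."""
    table = defaultdict(list)
    for line in lines:
        words = line.split()
        for current, following in zip(words, words[1:]):
            table[current].append(following)
    return table

def generate(table, start, max_words=12, seed=None):
    """Walk the table, choosing a statistically plausible next word each time."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(max_words - 1):
        followers = table.get(word)
        if not followers:
            break
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    # Fluent-looking, uncredited, and quite possibly nonsense (bake the tomatoes?).
    print(generate(train(TRAINING_RECIPES), "simmer"))
```

Run it a few times and you get grammatical-looking steps that no cook wrote and no cook would vouch for, which is roughly the experience of asking a chatbot for bone marrow chili.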

One’s ability to use ChatGPT to, say, suggest a week’s worth of meals using chicken thighs, or a recipe for Korean-influenced cacio e pepe, is contingent on both the language model presenting the information it’s been fed in a coherent way (no “1 salad dressing” measurements) and the recipient’s existing knowledge of food and cooking. You have to know what a muffin recipe looks like already to know if ChatGPT has given you one that could produce a somewhat successful muffin. And while Mullikin claims he was able to “collaborate” with ChatGPT, what he described was basically correcting the algorithm until it gave him the ingredients, like kimchi juice and chile sauce, that he already knew he wanted to use.

So while it doesn’t seem AI is solving problems related to actual cooking, could it still improve the way we approach cooking and eating? One popular application is meal planning, especially for people who have dietary restrictions that complicate grocery shopping. But the Washington Post notes that ChatGPT’s training data ends in 2021, meaning it can’t provide up-to-date information. It’s also trained mostly on English-language recipes, says Nik Sharma, which favor Western flavors and diets, a disadvantage for someone who wants a diet that is both gluten-free and full of Chinese food. And it just gets things wrong. The paper still advises people to double-check everything they’re given, which defeats the purpose of the convenience. Olivia Scholes, who used ChatGPT to create a meal plan to help with polycystic ovary syndrome, told the Post, “Our world is full of biases and full of stuff that isn’t true. I kind of worry about the ethics of AI and what it’s built on.”

One of the biggest concerns around current AI tools is that they generate content out of someone else’s IP. It’s one of the main issues the Writers Guild of America is striking over, and artists have already taken AI developers to court over it. Essays and cartoons and photographs and songs are being used to train these language models without creators’ knowledge or consent, and without any way to cite these influences.

But proper citation has long been a problem in recipes, which can’t be copyrighted, as they are considered lists of ingredients and instructions. A language model being trained on just instructions isn’t legally violating anyone’s rights.

This may seem like a point in AI’s favor. But legality and morality have never completely overlapped. While recipes can’t be copyrighted, cookbooks and the writing around recipes can. Language models strip away that context, and with it the ability to pay someone fairly for their creative efforts. If a cache of recipes is informing what a language model tells you to cook, it is bad that the creators are not just uncompensated but unacknowledged. Language models also strip recipes of the stuff that could actually teach one to be a better cook. “Cooking is the sum of every bite we’ve ever taken informing our palates,” writes Alicia Kennedy, who notes that you couldn’t properly cite any recipe even if you tried. Which is why recipes need context: an explanation of their history, a point of view, a reason why a choice was made. When ChatGPT gives you a recipe, it doesn’t say who came up with it, what they were trying to accomplish, why they chose to use more of one spice or swap out a common ingredient. It is instructions empty of the thing it is trying to instruct you on.

In the Financial Times, Rebecca May Johnson asked herself what would happen if she treated cooking like thinking — that is, if she were present in the moment of cooking, not just following instructions. “When I cook, I am using the knowledge produced through the work of generations of cooks in kitchens all over the world,” she says. “It is only because of this thinking that it is possible for me to understand what will happen when I add salt, or cover the pan, or leave a sauce to rest.”

I can’t force you to care about the origins of a recipe, or accept that reading and thinking and giving attention to how a recipe was created are things that should be valued. There will always be people who just want to make the pasta salad. And as much as I personally think that’s robbing you of an amazing experience, that’s fine. Sometimes you just need pasta salad.

No one is stopping you from opening up Bard or ChatGPT and asking it to give you a recipe. Language models are tools, meant to be used however we find them helpful. But these tools as they exist right now, and as they are being marketed by the corporations invested in you using them, do not solve your cooking problems. They don’t make the process easier, faster, or more intuitive. They can’t provide options that don’t already exist. They make the task more confusing, more opaque, and more likely to fail. And a future in which they might be better, in which they actually might solve some problems in the kitchen, relies on a mountain of knowledge and creativity that, as of now, these tools will not acknowledge or credit. We need to solve that problem first.
