I used Hello Fresh for a year; I'm not sure recipe variety was my problem. I think there are a lot of other possibilities for using LLMs in that product though, e.g. helping users express complex preferences, or building the best menus and schedules out of existing recipes. I'd say tampering with recipes would be the worst part of an AI integration. I think it's unfortunate that businesses look to AI for something essential when there are a lot of easy, smaller pickings around their product where it could actually be useful.
Tbh I understand this, it did feel like a lot of plastic waste being left around. But I think it'd be an impossible logistical problem to avoid this in the foreseeable future
…and then hand-packing to include or remove individual ingredients on a per-order basis? As someone who works on a significant ERP & warehouse implementation, that sounds very expensive and error-prone. Unfortunately :(
An aunt signed me up for a few free meals through some promotion. I tried it with an open mind but I found the entire experience infuriating.
Most of the meals came with poor quality produce (e.g. tiny unripe avocado — something I never would have grabbed at the grocer) and a silly amount of waste for items I had already.
My least favourite part was the recipes. I’m not sure if it was just the small selection of recipes we got, but it seemed like each one was designed around a template with six photos. As if they had a spreadsheet of all their recipes and required each one to fill six columns so they could just print it out in a predictable way. This meant that if you followed the recipe you would wind up doing several redundant tasks and creating unnecessary dishes. It drove me wild. By the last meal I just read the recipe, looked at the ingredients, and improvised.
A high quality recipe should not only result in a delicious meal but also be efficient and thoughtful about order of operations and clean-up. Having full control over the ingredients, HF has no excuse for providing low-quality recipes, never mind low-quality ingredients.
I would recommend a NYT Cooking subscription and using a grocery delivery app instead.
I agree about the issues - (sometimes) poor quality produce, lots of tiny bits of waste, sometimes template-y feeling meals. Also very expensive, and sometimes not enough food.
But overall I really like it. I actually learned a lot about cooking. The meals aren't complicated, and they follow similar patterns, so you start picking up on things. Like seasoning in layers (they tell you to salt & pepper at multiple steps), how to improve flavors with stock, spices, how to make presentation nice. And they're all pretty quick. After a while I have a nice library of recipe cards, and now I just pick a few of them when I go to the grocery store and get the ingredients myself. And even though sometimes it doesn't feel like enough food, it almost always comes out perfect. Sometimes I'll augment the amount of rice or something like that.
So I like HelloFresh, and maybe eventually I'll graduate to more interesting meals like NYT Cooking. But HelloFresh is quick, easy, and usually good.
What do you mean about "creating unnecessary dishes"?
I don’t remember the recipe but one of them told me to sauté one thing, and put it in a bowl to cool. Then in another bowl soak something else. Then drain and combine.
Instead I soaked one ingredient, while sautéing the other ingredient. When I drained the first ingredient, I just added the sautéed stuff to that bowl, saving one bowl from washing.
Another thing is the order you cut stuff. If you cut your produce first and then the meat, you can use the same cutting board. But if you cut meat before the veggies (which will be eaten fresh), you need a new board, or at least a flip.
I recently did a six-month stint with Hello Fresh, and have now cancelled. I find it's really useful for getting out of a rut, and for teaching you some small tricks. This recent stint has me:
* Using much more citrus, juice & zest
* Chickpeas + spice mix + touch of liquid in a pan is an easy go-to (we're veggie)
* Broth / onions in rice.
* Template of flatbread + stuff on top, which they had 3 or 4 in their rotation. Easy to riff on w/ whatever you have in the fridge.
None of these are earth-shattering revelations, but the repeated practice has ingrained some of these much more, and got me cooking a little differently.
We do this every few years, it seems worth it for a while, but I can't stick with it because I hate the waste, and the recipes get pretty repetitive after a bit, doubly so w/ vegetarian meals which have fewer options.
This was my experience with Hello Fresh as well. Subpar produce and absolutely useless recipes. It feels like the ingredients and recipes are the things this company focuses on the least. I'm never trying them again, and I actively dissuade people from using them whenever it comes up in conversation.
I'm not who you're replying to, but have experienced the same sadness when using meal kits. Why?
- This company and its supply chain went to the trouble of packaging a single serving of mayonnaise for me to personally discard as my token contribution to waste. Sure, an extra mayo packet won't make a difference, but simply not caring feels bad, you know?
- I have a jar of mayo in the fridge. I didn't even need this.
- Come to think of it, everything in this meal kit is wrapped in packaging that may well outlive me, depending on where it ends up. All for one meal's worth of food that I could have obtained more cheaply, less wastefully, and with more control over customization had I mustered the effort to plan ahead.
As someone with a few allergies, I effectively had to throw away and replace a third to half of the ingredients in every Hello Fresh meal. That's the #1 reason I stopped using it. An AI should be able to know what to substitute to make the whole kit edible for me.
Are you thinking "Rather than having a cook compose recipes with 1000 variations based on permutations of allergies, just produce one recipe with intended outcome, and let an AI figure out the variants"?
How do you train an AI on taste?
Would a chef state what's allowable for a recipe? "This one you can sub the soy out, but this one falls apart without it"
You keep shoveling piles of recipes at it, and as long as most of the recipes actually taste good (as opposed to being randomly generated, or worse, engineered to suck out of spite), the AI should eventually pick up on taste in general.
Having done a lot of user research, sometimes the most important things to a person aren’t stated, or even consciously known.
If I'm shopping for chocolate chip cookies, they're a proxy for another need: hunger, yes, but also emotions: wanting to feel taken care of, missing a place, etc. If chocolate chip cookies are unavailable, I'm not now considering snickerdoodles or shopping for chocolate bars; I'm shopping for the feel: maybe it's pie, maybe soup.
I'm interested to see whether the AIs we build will have the ability to identify this, so that when someone says "I want steak frites" the AI doesn't strictly recommend steak recipes ("they're asking for steak") but recognizes the constellation of what the asker is actually after: a recipe that reminds them of when they lived in that apartment in that city X years ago.
I feel like generating recipe ideas/titles is the easy part, and advanced AI techniques are not necessary. Even the article notes that there is not really any difference between 1 round of training and 40 rounds of training with their model.
Actually generating a recipe is obviously much harder, and should be effectively impossible unless you include a humanoid robot that can cook and taste. I think the best you could do is generate a possible recipe that highlights ingredients and techniques you'll need to experiment with.
Also some of their final ideas don't really make sense:
- "Carb smart mexican beef and capsicum stuffed peppers with fries , avo and sour cream". Carb smart but it comes with fries? And "capsicum stuffed peppers" is a strange way to phrase it.
- "One - pan beef meatloaf italiano with green peas and spinach". You can cook this in one pan if you really want to but it will not be good.
- " little ears " pasta serves as balsamic tomatoes. Strange phrasing.
I do think a different model could solve these problems, but coming up with generic recipe ideas is not really a problem anyone has.
There was a really nice post on doing this kind of thing with conditional random fields (CRFs) back in 2015 [0]. Open-source data, and code on GitHub. Also a nice tutorial on structured prediction using CRF-type models.
Would be interesting if you could prompt, LoRA distill, or use modern LLM tricks against a well-labeled and curated set, similar to how other tagging problems are handled with modern pretrained models.
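As a point of reference for what that tagging problem looks like in code, here is a toy rule-based baseline in Python; the unit list, field names, and regex are my own simplification, not anything from the CRF post or a real library:

```python
import re

# Toy baseline for ingredient-phrase tagging: split a free-text
# ingredient line into quantity, unit, and name. A trained sequence
# model (a CRF or a fine-tuned pretrained model) would handle the
# many cases this regex sketch misses.
UNITS = {"cup", "cups", "tbsp", "tsp", "g", "kg", "oz", "lb",
         "clove", "cloves"}

def parse_ingredient(line):
    qty, unit, name = None, None, []
    for tok in line.lower().split():
        if qty is None and re.fullmatch(r"[\d/.]+", tok):
            qty = tok            # first numeric-looking token
        elif unit is None and tok in UNITS:
            unit = tok           # first known unit word
        else:
            name.append(tok)     # everything else is the name
    return {"qty": qty, "unit": unit, "name": " ".join(name)}
```

Anything this misparses is exactly where a learned tagger earns its keep.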
One problem I have is recipe clustering by ingredients. I want to minimize the number of different ingredients I need to buy so I can get some economies of scale while shopping. I don't know of any service that does this. I've been thinking about building it for my website, but I haven't figured out a good UI for it.
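A greedy sketch of that clustering idea is simple enough to write down; the recipe names and ingredients below are invented for illustration, and a real service would also weigh package sizes, prices, and perishability:

```python
def pick_recipes(recipes, k):
    """Greedily choose k recipes while minimizing the number of
    distinct ingredients to buy. recipes: dict of name -> set of
    ingredient names."""
    chosen, pantry = [], set()
    pool = dict(recipes)
    for _ in range(k):
        # pick the recipe that adds the fewest new ingredients
        name = min(pool, key=lambda r: len(pool[r] - pantry))
        chosen.append(name)
        pantry |= pool.pop(name)
    return chosen
```

After each pick, everything already in the basket counts as free, so recipes overlapping with what you've chosen float to the top of the next pick.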
The Better Homes and Gardens cookbook with the red-and-white checkered cover. The basics are all there, and the ingredient lookup is in the index. Say I look up potato: all the potato recipes are in the index. It also offers substitutions. I have an older version and a newer version.
I get all of my recipes from GPT-4 these days. Granted, most of them are just regurgitating a common recipe, but I find that it's able to tweak recipes appropriately if I say "I'm out of X, can I use Y or Z instead?" - I've never actually had a bad recipe come of it.
Yeah I've found GPT-4 to be surprisingly good at recipes. I think that's because it's effectively serving up the "average" recipe for anything you ask it for, based on everything it's scraped from the internet... which usually ends up being a good recipe.
I love being able to challenge it with "and now make it vegan" etc to see what it comes up with.
This is also another example of the thing where having a randomly unreliable teacher is actually quite useful if you're trying to learn to cook - because it forces you to question what it tells you, apply your own intuition and think carefully about what worked and what didn't.
If you think about how it works, it's going to have a strong representation of oft-published recipes and oft-published substitutions and will be able to mash those together into plausible original recipes that are adjacent to them. Making a satisfying meal for a few people with comparatively mainstream tastes should be pretty reliable.
But the more esoteric you go, aiming for something really novel or something unusually historic/traditional/regional, or the more technical and delicate, the more it'll turn to hallucinating naive text continuations that will lead you astray. An experienced cook will know to spot the weird stuff and revise on the fly; an inexperienced one is going to end up with some... odd dishes.
It's the same as many of us see when it's used for code assistance. It's great at inventing "original" boilerplate whose inspiration has been exhaustively covered in the source material, but gets really wacky and unreliable the further you move away from that.
I mean if you can cook already you can do whatever you want. I could make a decent meal from a recipe in a language I don't speak just off a handful of cognates. Doesn't mean I followed the recipe in a meaningful sense.
When I've looked at gpt recipes they've had some pretty deranged proportions and you'd be in trouble following them precisely. If you just need a list of ingredients and a loose technique though eg "tofu & cashew stir fry with soy sauce and garlic" then ya sure I mean absolutely.
>Actually generating a recipe is obviously much harder, and should be effectively impossible unless you include a humanoid robot that can cook and taste.
It's not, because the necessary information is already implicit in the sum of all the data it is trained on.
I can't imagine any cook or chef who believes that all the necessary information for all good recipes has been captured in text, let alone in the text that's been digitized and made available for training.
That's not to say that LLMs can't mash together reasonable new recipes for certain audiences, especially for variations on popular modern standards, but the idea that their working space of recipes is exhaustive across all cuisine and tastes is absurd.
Language models are predictors. If the text you give them is the shadow in Plato's cave, they don't try to draw more shadows; they try to reconstruct the objects casting them. They are trying to reverse engineer the computation that must have led to that output. They will not stop at "surface-level similarity" or "plausible" by choice. They will train until they have completely succeeded or until the architecture or data fail them.
With a capable enough architecture and sufficient data, there is no recipe a predictor couldn't divine with enough training.
For the perfect predictor of recipes, is the architecture capable enough? Is the data sufficient (in both variance and quantity)?
I'm not sure, but this is not a question a cook can answer.
The OP claimed
>Actually generating a recipe is obviously much harder, and should be effectively impossible unless you include a humanoid robot that can cook and taste.
This is just false. And you don't need the hypothetical perfect predictor (just a capable one) to see it.
This is the fantasy of GenAI boosters stated concisely - that these statistical methods present a new objectivity that avoids the shortcomings of human subjectivity.
Of course, these web-scraped models are distilled subjectivity in aggregate, subjectivity at scale.
ChatGPT is made of people!
I don't agree, even a human chef with decades of experience in all the tools and ingredients they're using is not going to get every recipe right on the first try. I'm not saying an AI recipe will never be good, but it is going to make mistakes sometimes.
If you're wondering whether a task is theoretically possible for an AI, I think a good rule of thumb is to ask "could a human domain expert do this given a few days and as much research material as they need". If the task here is "develop a novel recipe without trying it", the answer is no. You can't predict the interactions of every possible combination of ingredients and processes no matter how much time and experience you have.
You misunderstand what a language model is. What a human can or cannot do is frankly irrelevant.
A GPT (Generative pretrained Transformer) is not a simulator nor an imitator. It is a predictor.
Predictors try to reverse engineer the computation that could have led to a certain output, so they grok not just what is explicitly stated in a dataset but also what is implied by its structure.
As an example, in a language model trained on protein sequences alone and nothing else, you will find biological structure and function emerge in the inner layers.
In fact, with a capable enough architecture and sufficient data (quantity, quality, variance), there is nothing a predictor couldn't divine with enough training.
You assert
>Actually generating a recipe is obviously much harder, and should be effectively impossible unless you include a humanoid robot that can cook and taste.
In the pursuit of recipe prediction, a predictor will learn to internally model taste because taste is implicit in the data.
You don't need the hypothetical perfect predictor to demonstrate your assertion as false because a capable predictor already does.
>If you're wondering whether a task is theoretically possible for an AI, I think a good rule of thumb is to ask "could a human domain expert do this given a few days and as much research material as they need".
You are wrong. Train a language model on descriptions of the functions of proteins paired with the equivalent protein sequences, and you get a language model that can predict novel functioning sequences from function descriptions alone.
Not only is this not something a human expert can achieve in a couple of days, it's not something a human expert can achieve at all. It is a super-human ability.
I understand what you're saying, I just think we have a fundamental disagreement over "taste is implicit in the data" that I'm not sure we can resolve here. Intuitively it makes sense to me that if you know the entire sequence of a protein's fundamental building block then you can deduce its higher-level properties, but it does not make sense that if you know a lot of high-level descriptions of an ingredient and how it tastes then you can deduce behaviors like how it will react to a certain application of heat or another ingredient.
I appreciate your links though, I think this is the first I've heard of an LLM doing something that a human can't do. I can't claim to understand a lot of the first paper and I can't access the second, but assuming your descriptions are right I agree that my rule of thumb was wrong.
First let me explain what I mean by implicit incentive with our biology example.
You have the protein sequence, G46AKT5778FAG4
The predictor is given "G46A____"
The only way to predict this correctly and consistently across multiple proteins is to learn the underlying biological structure.
Now, I pulled this snippet of a recipe randomly from the web.
"Add the tomato purée and turn up the heat slightly, cooking until it has darkened, about 2-3 mins."
Block out some information.
"Add the tomato purée and turn up the heat slightly, cooking until it has ____"
The only way a predictor gets this kind of completion consistently correct across multiple unseen recipes and food items is if it has some model of the visible effects of heat on food in general. The predictor will be forced to learn what kinds of foods darken when heated and group them together internally so it doesn't have to memorize every single instance (which would take more effort than simply learning it).
Now let's put back darkened and block out the time.
"Add the tomato purée and turn up the heat slightly, cooking until it has darkened, about____."
The only way a predictor is consistently correct on what time interval this should be across multiple recipes is if it has some model of the effects of heat and time on whatever it is you're cooking.
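A deliberately tiny toy makes the fill-in-the-blank framing concrete. This bigram counter in Python (with an invented three-line corpus) can only parrot its data, which is exactly the contrast with a model large enough to generalize across unseen recipes:

```python
from collections import Counter, defaultdict

# Three made-up recipe fragments standing in for a training corpus.
corpus = [
    "cook until it has darkened",
    "simmer until it has thickened",
    "cook until it has darkened slightly",
]

# Count which word follows which (a bigram model).
bigrams = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def predict(prev_word):
    """Most likely continuation after prev_word, or None if unseen."""
    nexts = bigrams.get(prev_word)
    return nexts.most_common(1)[0][0] if nexts else None
```

`predict("has")` returns "darkened" because that continuation dominates the counts; on a word the corpus never saw, it has nothing to say, which is the gap a model with an internal notion of heat and food has to fill.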
I don't think the commenter's understanding of LLMs is relevant here. It's what the authors of TFA think LLMs are for and, arguably more importantly, what kinds of problems (nails) they think about when presented with the idea of applying LLMs (hammer).
It's a bit ridiculous that this is the best they can come up with, and I don't really think the idea is worth defending, especially by telling other people that they just don't get it.
His/her primary assertion is that it is impossible for an LLM to generate novel recipes without physically tasting them first. That assertion is wrong, and one easily corrected by understanding what an LLM is and does.
It's not a perfect predictor, so it's not impeccable in all conceivable situations, but GPT-4 can already generate novel recipes that taste nice. This is not some far-flung science-fiction ability.
Salt to taste, cook until brown, knead until the dough is no longer sticky: everything about cooking is tasting and feeling. Moreover, everybody has differences in their tastes and consistency preferences. Good recipe books teach you how to use ingredients, how to apply that knowledge to recipes, and how to modify those recipes to your personal preferences. Sure, an LLM can generate novel recipes, but without taste, texture, and food pairing tuned to personal preference, the novel recipe is a crapshoot in the dark. The reality is, the more you learn to cook, the less you need recipes, and a good cookbook will teach you a lot more than a pile of recipes.
This is true. On the other hand, the recipes aren't typically good, and generating novel ones isn't really what people mostly want to do... so the value seems limited.
There are a lot of bad recipes floating around out there. Recipe books are often geared toward teaching you how to cook, not just what to cook. There are techniques to cooking that affect how recipes come out. A good recipe book will teach you how to treat ingredients and how they interact with each other, and then give you ideas on how to put them together.
My friend has a restrictive diet for health reasons (can't eat certain greens, no gluten, etc) and decided to use ChatGPT to come up with a dessert for her dinner party. It was honestly one of the worst things I've ever tried. She verified that the recipe wasn't regurgitated from another source so it was all ChatGPT's doing.
That just means you used a predictor that wasn't good enough (was this at least 4? or 3.5?), not that a GPT-X would need to physically taste recipes to generate novel recipes that taste good.
GPT-2 was mostly an incoherent babbling mess but that didn't mean a better predictor couldn't be coherent.
GPT-3 could not play chess at all but that didn't mean a better predictor couldn't play chess (3.5-turbo-instruct)
Taste is implicit in recipes so a good enough predictor has to model it somehow to succeed, no physical experimentation necessary.
This isn't really surprising. The source material it's trained on isn't particularly good, and interpolating between them at all generally doesn't work well.
Seems like there's high potential for the AI to hallucinate and get confused with ingredient measurements... last I checked, LLMs struggle with numbers.
> Hosting and fine-tuning LLMs for such a specific and simple machine learning problem may be overkill, especially in terms of costs and time for experimentation.
Yes, thank you! I think a lot of companies have lost the plot, jumping to LLMs when they aren't necessary. Recipes are a pretty small search space. You don't really need an LLM to understand the questions being asked.
I like the idea of generating recipes with AI. I dislike the idea of using text as a medium for it.
I want this AI to be considering whether what I ate this morning had enough protein, whether this ingredient is on sale and in stock nearby, or whether it's in my fridge and what's the expiration date on that. Is there a plan for the other half of the cauliflower? etc.
The last thing I need in a meal is appealing marketing copy for it.
This is exactly how I feel. I want it to actually be helpful.
It needs to be able to:
- track what I have (if I have input that data) and how soon anything may expire
- track what I made but haven't finished eating (just a few quick pictures in the fridge should update that as needed, or maybe it's time we all get that Alton Brown in-fridge camera to talk to)
- can I use this in something else? How long should it still be good? Etc...
- track my health/nutrition goals
- track what I've already eaten (again, the onus is on me to make sure that data is there) so it can help me follow my health/nutrition goals
- track local grocery store fliers for deals (I do this on my own with the app Flipp and save a ton of money, but if the AI could handle that... that'd be great)
- tracking local grocery store prices would be great too, but unlikely. They need us to come into the store for hopeful impulsive purchasing
- I'm sure there are more things
Not saying this is remotely simple to pull off... just what I would want in an AI that is helping me with recipes and food. I can't wait until we can utilize it for things like that.
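For the sake of concreteness, the first couple of wishlist items could start as small as this Python sketch; the class name, API, and dates are all invented, and the genuinely hard part (getting the data in without burdening the user) is untouched:

```python
from datetime import date, timedelta

class Pantry:
    """Minimal expiry tracker: knows what you have and what is
    about to go off. Real input would come from receipts or the
    fridge-photo idea above, not manual calls."""
    def __init__(self):
        self.items = {}  # name -> expiry date

    def add(self, name, expires):
        self.items[name] = expires

    def expiring_within(self, days, today=None):
        """Names of items expiring within `days` of `today`."""
        today = today or date.today()
        cutoff = today + timedelta(days=days)
        return sorted(n for n, d in self.items.items() if d <= cutoff)
```

An AI assistant would sit on top of data like this, turning "what should I cook?" into "what should I cook that uses up the milk first?".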
As complex as it might be, the "intelligence" would probably be the easy part. Aggregating that data in a way that didn't become a burden for the user--either because it's tedious, or because they don't trust you--would be quite a trick.
Then another trick will be surviving the attempts of stores and food manufacturers to screw with this system as it becomes popular, because individuals empowered to make good choices for themselves hurt everyone's bottom line.
Because it would have to crawl into my refrigerator to see when my milk expires and into... I don't know where... to know that I didn't eat enough vitamin A yesterday. Data that's not on the web.
Unrelated to the article, but I have had great success using LLMs to generate recipe. Here's how it goes:
"I have a ton of onions, what can I make"
(LLM provides like 4 options)
"Ok, give me a recipe for French Onion Soup"
(LLM provides recipe)
"I don't want to use my oven for the bread, can I use my air fryer?"
(LLM provides an alternative approach using the air fryer instead)
Thanks LLM! That was way better than some awful google recipe SEO spam nonsense.
Computer generated recipes and images would make me discount the service entirely, unless their tasters tried it first, in which case they could simply take a picture of the dish.
You can't use a language model to predict how something is going to taste.
Less tritely - I do find it fascinating how problems that would've been independent research endeavors can now be subsumed by large language models. Rather than building a big dataset of protein, calories, etc., just ask ChatGPT
I'm the founder of foodforecast, and we help the food industry (i.e. bakeries, gastronomy, supermarkets) with demand predictions, so they only need to produce what they will sell. Our AI uses historical sales data, weather and holiday data, resulting in much better prediction quality (>95% accuracy) than simple statistical approaches.
This way we can reduce food waste by 30% on average.
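For contrast, the simplest baseline such systems get compared against can be sketched in a few lines; this per-weekday average is my own toy stand-in, nowhere near a model that folds in weather and holidays:

```python
from collections import defaultdict
from statistics import mean

def weekday_baseline(history):
    """history: list of (weekday, units_sold) pairs, weekday 0-6.
    Returns average units sold per weekday: a naive forecast for
    next week that ignores weather, holidays, and trends."""
    by_day = defaultdict(list)
    for weekday, sold in history:
        by_day[weekday].append(sold)
    return {d: mean(v) for d, v in by_day.items()}
```

The interesting part of a product like foodforecast is how far a weather- and holiday-aware model can beat a baseline like this.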
I think something like the recipe generator that suggested mixing dangerous chemicals into cocktails is going to be a problem for any AI application in this space:
It's important to caveat here that the language model was prompted into the suggestions explicitly; it's not that it spontaneously suggested dangerous recipes.
I'd also note that it's getting harder to do that with current ChatGPT (this article uses GPT-3.5), and I suspect "alignment" research on a 5-year time frame will make these sorts of things pretty hard to trick the models into doing.
I run a recipe management website for my friends and family as a personal passion project: https://letscooktime.com. People eat with their eyes. You *need* good images. Traditionally this meant expensive food photography, but now you can use AI to generate images. The challenge is that the image may not look like the real meal, but 90% of the people scrolling by won't make the recipe anyway, so that's probably acceptable!
Fake food images are literally fraud. I’m also having trouble believing that this is a passion project for friends and family based on your 90% comment.
When you search for recipes, how many recipes do you view before you decide to make one?
It's a stretch to call it fraud when there's no guarantee what you make at home will look like what's made in a test kitchen. I challenge you to try to do food photography well at home without thousands in photography equipment, props, the actual dish, and expertise in making it look enticing on camera.
People absolutely expect the photos in a recipe to show the thing actually being made, and therefore to be an additional source of information. Adorning a recipe with photos of something similar, but not quite the described thing, is, IMO, a form of gaslighting.
It's kind of funny-- I'm a technical visual artist, a classically trained (former) chef, and I spent a long time as a web developer in an environment where people breathlessly spoke about the possibilities of LLMs. (Auto-correcting OCR was a pet wish of mine that I never fulfilled.) It's been... interesting seeing laymen's usually naive and mistaken takes on generative AI performing these functions.
The difficult part of recipe development is testing, and professional recipe authors pay a lot for it. (It was regularly available pickup work when I was in culinary school.) Professionals don't use recipes like home cooks do: ours are much simpler, tend to combine ratios and amounts with known techniques, and assume knowledge that most home cooks don't have. Home cook recipes tolerate people who don't really know what "4 lbs turned radishes glace w/sherry" means, where nearly any professional cook familiar with classical european cooking could grab the half dozen ingredients and equipment without clarification and perform the technique almost identically to another cook across the world. It's not rocket science, but even with clear instructions, there's a lot of technique there that you just have to do at least a few times to get a sense of it in a really basic way.
Writing recipes for home cooks poses the same challenges developers have writing documentation for people who aren't familiar with the codebase being documented. Home cooks consistently execute things like "sauté" much differently than a chef might assume, use seasoning very differently, measure doneness by the amount of time cooked rather than using internal temperatures, textures and smells, need measurements for "pinches" of things, use volumetric measurements instead of weights, and all sorts of other things. For a home cook recipe, the professional instruction "hard sear, glaze, and brown in the broiler" would likely need specific times, settings, pans, and things like that... and the temperature of home cooking equipment, the initial temperature of the ingredients, and slight variations in salt or sugar content all have a significant chance of making that recipe fail. I guarantee you that when, say, Gordon Ramsay writes a cookbook, rather than writing the recipes himself, he goes into an R&D kitchen, cooks the dishes with cooks who know how home cooks do things, and they write the actual recipes that go in the books.
So like almost every other "hey, let's replace some creative/technical person with this generative AI" initiative I've seen, this would do the easiest 95% that takes 5% of the time while not addressing the most difficult 5% that takes 95% of the time. When someone asks Midjourney to make, oh, say, a sexy elf in the style of Thomas Kinkade or whatever Midjourney users want these days, they might not be a professional artist, but since they're the consumer, they can judge whether the elf looks appropriately sexy or stylistically enough like Thomas Kinkade. Getting a recipe spit out like this requires technical judgement that any user who'd rely on such a device would almost certainly lack.
Fortunately/Unfortunately it seems pretty difficult to devalue professional cooking as a skill any more than it already has been, and your average pro doesn't have much exposure to this sort of market anyway, so I'm not really worried about industry impact compared to, say, concept artists for video games... Though I think workaday utility developers are more squarely on the chopping block than most. However, I feel for the home cooks who'll faithfully follow these recipes expecting similar results to what they get from foodnetwork.com, simply recipes, the new york times food section, or whatever other source they get recipes from.
> and use volumetric measurements instead of weights
Sorry to pick up on one sentence of an interesting long comment, but this is a pet peeve of mine when having to rely on American recipes. There seems to be an obsession with using volumetric measurements for things which really should be measured in mass units: flour, sugar, sometimes even ingredients like grated cheese.
European recipes in my experience tend to use the most 'natural' units (mass for solids, volume for liquids), but you still get the silly culturally specific units sometimes: '1 medium onion'... well, how big is that? Would two small onions be too much? Or half a large onion?
Yeah it's definitely a big thing with recipes designed for American home cooks. Just like using SAE measurements, it's a chicken and egg thing. Maybe one in ten US home kitchens has a food scale because our recipes don't generally use weight measurements, and recipes don't generally use weight measurements because one in ten US home kitchens has a food scale. If you're making an investment in publishing a cookbook, it's probably not a defensible principled stand to sacrifice the sales and piss off your potential readers because they don't have the equipment to make your recipes. Many professionally-published recipes use both, but that's an added expense and increases your recipe testing costs.
As far as the medium onion is concerned, some of that is unavoidable because onions are a natural product with size variations, and unlike sugar, salt, flour, water, etc that serve important chemical or mechanical purposes during cooking, something being 10-20 percent more or less oniony in the grand scheme of all the flavors you have is not likely consequential. Most professional recipes wouldn't be that much more specific in that regard.
With respect to ingredients such as onions, I am concerned that what I think of as a medium onion may not be a medium onion in other countries/cultures. Like, I generally consider an onion of about 5cm (2 inches) medium-sized, but if I am off by 1cm from the author's perception, that's roughly 50% less or 70% more onion by volume!
If a recipe could say '125g chopped onion (1 medium onion)' that would be so useful - catering to the geeks like me and normal home cooks!
I get it... you're anxious that it's going to be the wrong amount, so you want a very specific number... but that specificity doesn't yield commensurate accuracy. Even if everyone put in exactly 125g of plain yellow onion, the variance in age, when it was harvested, soil conditions, etc. would yield more variation than changing the weight... it just usually doesn't matter that much with onions. This is most apparent with strong ingredients like garlic and herbs: one strong small clove of garlic will easily overpower two larger, mellower cloves if used raw, but will often have less comparative influence when cooked... not too many people outside the restaurant world would even know how to judge whether a small piece of raw garlic was stronger or mellower than average. If you're using a flour with a standardized grind and protein level, on the other hand, using weight for a loaf really matters.
The most common manifestation of this disconnect is in cooking times. Unless you've got completely standardized ingredients, preparations, storage temperatures, etc., which is very difficult outside of large food service organizations (and why so many restaurants are willing to pay so much to huge restaurant suppliers like Sysco for mediocre food: it will always cook the same way), cooking times will almost always be the wrong answer. People always ask questions like "how long do I cook a thick ribeye steak?" and I always say "until it's done." There's a general perception that very accurate cooking times will yield very accurate results, but that's just not true. What's the size and shape of the steak? What's the water content of the steak? Has that changed since you pre-salted it too far in advance? Refrigerator temps vary tremendously: what's yours at, and how long has the steak been out?
People like cooking times and precise measurements for imprecise ingredients because it gives them a sense of control, but it's a false sense of control, when the only real way to get that control is learning to tell for yourself when the food is done.
A random thing that came to mind related to that: back in 2015-2017, when the Tesla P85D came out and then the P100D, lots and lots of car "experts" had their money taken in drag races and rolling street races. It was a simple Google search away that it did a 10.5-second quarter mile, yet owners of slower cars that had no chance would legitimately bet hundreds on a quick race just to lose. Moral of the story: these people understood their street racing and drag racing worlds very well, but it took a few years for them to put their biases aside while breakthroughs were being made every year.