
I’ve used Google Lens as “if it says unsafe, definitely unsafe”. I would not rely on it for “oh yeah, I can eat this” :).

I doubt your “few dozen years” though. Humans are only so good at it themselves. Computing has improved a lot since 1984 (3 dozen years ago), and so I’d wager that by 2050 we can be better than human at “Eat or not?” for fungi. Up for a longbets.org wager? :)




I mean, the thing about fungi is that the caps can look the same, and you need to do a spore print to positively distinguish one species from another. There may be some mushrooms that are simply impossible to tell apart by outward appearance. In one of the fast.ai lectures, Jeremy shows how to distinguish different breeds of cats. Then he shows how to look at the confusion matrix, and he found one pair of breeds the network really struggled with. It turns out they look really similar to him too, and when he researched further he found they're simply hard to tell apart. Perhaps with an enormous data set a network could detect small differences, but the confidence might still be low.
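The confusion-matrix check described above can be sketched in plain Python. The breed names, labels, and predictions below are made up for illustration; the point is just finding the off-diagonal cell with the highest count, i.e. the pair the model confuses most:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Count how often each true label is predicted as each label."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

def most_confused_pair(matrix, labels):
    """Return the off-diagonal cell with the highest count."""
    return max(
        ((labels[i], labels[j], matrix[i][j])
         for i in range(len(labels))
         for j in range(len(labels)) if i != j),
        key=lambda cell: cell[2],
    )

# Toy data: the model mixes up birmans and ragdolls.
labels = ["birman", "ragdoll", "siamese"]
y_true = ["birman", "birman", "ragdoll", "ragdoll", "ragdoll", "siamese"]
y_pred = ["birman", "ragdoll", "birman", "ragdoll", "ragdoll", "siamese"]
m = confusion_matrix(y_true, y_pred, labels)
print(most_confused_pair(m, labels))  # ('birman', 'ragdoll', 1)
```

A similar pass over a mushroom classifier would surface which species pairs are visually ambiguous, and whether they match the pairs human foragers also struggle with.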

And given that mushrooms can kill you, it may simply never be advisable to rely on any photo based identification.


I don’t consider it against the rules of the bet to allow multiple pictures, including the underside, and perhaps even “here, smush the mushroom on a piece of paper and take a picture of that”. My question is whether a vision-based AI can outperform humans within another thirty years, not whether any one particular mechanism is what gets it there.

For all the myco folks here: Do you have a sense of whether the multiple hours mentioned are “required”, or whether they “just” make it easier to get a strong signal? (That is, how much of the signal boost is due to our inability as humans to see fine detail?)

[1] https://en.m.wikipedia.org/wiki/Spore_print


We foragers and amateur mycologists use smell; touch (slimy, dry, etc.); sometimes taste (bitter, acrid, ...); habitat (on wood or on the ground, type of wood, whether there’s a bulb below ground or a root-like structure); time of year; spore prints; sometimes color change from drops of chemicals (especially on boletes); sometimes even microscopes to view spores; and more.

All of these variables could of course be coded as inputs to a good classification algorithm.

just saying, it's often more than simply visual.
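As a rough sketch of what “coding those variables” might look like: the category lists below (substrates, odors) are illustrative placeholders, not a real mycological feature set. The idea is to one-hot the categorical cues and keep numeric cues as scaled values, producing a feature vector any tabular classifier could consume:

```python
# Hypothetical encoding of non-visual foraging cues as tabular features.
# The categories are invented for illustration, not a real taxonomy.
SUBSTRATES = ["wood", "ground", "dung"]
ODORS = ["none", "anise", "phenolic", "farinaceous"]

def encode_sample(substrate, odor, has_bulb, month, spore_print_rgb):
    """One-hot the categorical cues; scale the numeric ones to [0, 1]."""
    features = [1.0 if substrate == s else 0.0 for s in SUBSTRATES]
    features += [1.0 if odor == o else 0.0 for o in ODORS]
    features.append(1.0 if has_bulb else 0.0)
    features.append(month / 12.0)                   # crude seasonality
    features += [c / 255.0 for c in spore_print_rgb]  # spore print color
    return features

x = encode_sample("wood", "anise", False, 10, (120, 90, 60))
print(len(x))  # 12
```

These could then be concatenated with image features, so the model sees the same mix of cues a forager uses.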


It's mostly visual. I would add location and recent weather to any software trying to recognize mushrooms. The mushrooms people commonly forage for are not that numerous, so the algorithm only needs to know a dozen or two species. Chanterelles are easy to tell from images. You could probably have an AI chanterelle identifier coded right now, and one for most edible mushrooms very soon if not now.
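One simple way to fold location and weather in, as the parent suggests, is to treat them as a prior over species and fuse it with the image classifier's scores, Bayes-style. All the species names and numbers here are invented for illustration:

```python
# Sketch: combine image-classifier probabilities with a habitat/season prior.
# Both distributions below are made-up example values.
def combine(image_probs, prior_probs):
    """Multiply image evidence by the prior, then renormalize."""
    fused = {k: image_probs[k] * prior_probs.get(k, 1e-6) for k in image_probs}
    z = sum(fused.values())
    return {k: v / z for k, v in fused.items()}

image_probs = {"chanterelle": 0.55, "false_chanterelle": 0.40, "jack_o_lantern": 0.05}
# Prior from site and season, e.g. what fruits on this substrate this month.
prior = {"chanterelle": 0.6, "false_chanterelle": 0.3, "jack_o_lantern": 0.1}

posterior = combine(image_probs, prior)
print(max(posterior, key=posterior.get))  # chanterelle
```

The appeal of this split is that the vision model stays generic while the prior is cheap to update per region and per week of the year.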


Other sensors combined with photo would likely be the solution and the results might not be instant for some samples.



