MLCopilot: Human Expertise Meets Machine Intelligence for Efficient ML Solutions (arxiv.org)
60 points by mercat on May 2, 2023 | 11 comments



I've been working on ML ad bid algorithms for my ecommerce business. GPT-4 has been indispensable, teaching me all these tools and the heavy math. I'm in over my head but working through it. I do wish there was something better. It's good at high-level brainstorming discussions, and at writing small functions and showing me how the tools work. It is not very good at composing larger pieces, and it hallucinates methods sometimes. I imagine the future will have more domain AIs with targeted knowledge and tuning. Still, GPT-4 feels like a huge leap forward. It's like having a friend who is a math and coding genius who will work side by side with me for free.


Almost free at $20/month


You can get a lot of use out of an API key (pay for what you use) in a month before hitting the $20 mark, especially if you swap between gpt-4 and gpt-3.5-turbo.


How expensive does GPT-4 end up being via the API key, in your experience?


A 20-minute API conversation cost about $2, but it gets around ChatGPT's 25-messages-per-3-hours limit.
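
For a rough sense of the arithmetic, here's a back-of-the-envelope sketch. The per-1K-token prices below are the published rates from around spring 2023 (check current pricing), and the token counts are illustrative assumptions, not measurements:

    # Rough cost estimate for chat API usage.
    # Prices are $ per 1K tokens (spring 2023 published rates; may have changed).
    PRICES = {
        "gpt-4":         {"prompt": 0.03,  "completion": 0.06},
        "gpt-3.5-turbo": {"prompt": 0.002, "completion": 0.002},
    }

    def estimate_cost(model, prompt_tokens, completion_tokens):
        p = PRICES[model]
        return (prompt_tokens / 1000) * p["prompt"] \
             + (completion_tokens / 1000) * p["completion"]

    # Hypothetical 20-minute chat: ~30K prompt tokens (the whole conversation
    # gets resent on every turn) and ~5K completion tokens.
    print(estimate_cost("gpt-4", 30_000, 5_000))          # ~$1.20
    print(estimate_cost("gpt-3.5-turbo", 30_000, 5_000))  # ~$0.07

The prompt side dominates in long chats because the full history goes back with every request, so costs grow faster than linearly with conversation length.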


Can you share your prompts?


I can share some conversations if you would like. Send me an email (see profile).


Some notes:

- Based on GPT-3.5.
- Essentially, the test was “how well can GPT produce ML code” (tune hyperparameters, base off of case studies; see the sketch below).
- It did not compare to the human case, only to other ML models (unless “human” is considered perfect, in which case GPT got 86%, although I don’t think a human would perform at 100% of the benchmark).
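
For anyone curious what “base off of case studies” means in practice, here is a minimal sketch of the general idea only (not the paper's actual prompt or code; the case studies and config numbers are made up), using the pre-1.0 openai Python client:

    # Sketch: ask an LLM for a hyperparameter config, conditioned on a task
    # description plus a few past "case studies". Everything below is
    # illustrative, not from the paper.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    case_studies = [
        "Dataset: tabular, 50K rows, 30 features. Model: XGBoost. "
        "Best config: max_depth=6, eta=0.05, n_estimators=800. AUC=0.91.",
        "Dataset: tabular, 5K rows, 120 features. Model: XGBoost. "
        "Best config: max_depth=3, eta=0.1, n_estimators=300. AUC=0.84.",
    ]

    task = "Dataset: tabular, 20K rows, 60 features. Model: XGBoost. Metric: AUC."

    prompt = (
        "Past experiments:\n" + "\n".join(case_studies)
        + "\n\nNew task:\n" + task
        + "\nSuggest a hyperparameter configuration and briefly justify it."
    )

    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp["choices"][0]["message"]["content"])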


The main criticism of AutoML-type frameworks was that they generate models that cannot be understood by a human. It seems that GPT-based AutoML will solve that problem to a large extent. Model building/selection is almost certainly going to be fully automated away in the next few years.


I don't think this makes much sense at all. How can a GPT-based machine learning solution arrive at a model that can be explained? Explanation is not simply understanding what the model knows about a system. If that were true, then SHAP values and partial dependence would be all we need. We also need to understand how a model arrived at a given solution: not simply which neurons fired, but what the actual structure of the system under study is.

You could have a full 3D view of a flowing fluid with an infinite number of trackable particles and a perfect computer to train a neural network to predict where the particles will go. That model will probably perform very well, even with a very high amount of turbulence in the system. However, you will be no closer to producing the Navier-Stokes equations than you were when you started. The model cannot tell you what those equations are, even though it can approximate them to high precision.

Why would adding an LLM to this process suddenly produce these equations? Because the LLM scraped the internet and will more or less figure out it's a fluid and assume Navier-Stokes applies? What if we replace the system of study with the singularity in a black hole, where we don't understand the physics? How will the LLM explain that?
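
To make the point about SHAP/partial dependence concrete, here's a toy sketch (the data and the hidden formula are made up): a boosted tree fit to samples of a known function can predict well, and partial dependence describes how the fitted model responds to each input, but nothing in that output hands you back the generating equation:

    # Fit a black-box model to data generated by a known "physics" formula,
    # then look at partial dependence: it describes the model's behaviour,
    # not the underlying equation.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import partial_dependence

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(5000, 2))
    y = np.sin(3 * X[:, 0]) * np.exp(-X[:, 1] ** 2)   # the hidden formula

    model = GradientBoostingRegressor().fit(X, y)

    # Average response of the fitted model along feature 0: a numeric curve,
    # not a symbolic expression like sin(3x).
    pd_result = partial_dependence(model, X, features=[0])
    print(pd_result["average"][0][:5])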


> models that cannot be understood by a human

I think that boat has sailed with the dominance of ANN models.

We sort of ascribe these ex post hypotheses to how they work, but we don't really understand them.



