Show HN: Magic Loops – Combine LLMs and code to create simple automations (magicloops.dev)
248 points by jumploops on Aug 1, 2023 | 49 comments
Howdy! We built this as an experiment in personal programming, combining the best of LLMs and code to help automate tasks around you. I personally use it to track the tides and get notified when certain conditions are met, something that pure LLMs had trouble with and pure code was often too brittle for.

We created it after getting frustrated with the inability of LLMs to deal with numbers and the various hoops we had to jump through to make ChatGPT output repeatable.

At the core, Magic Loops are just a series of "blocks" (JSON) that can be triggered with different inputs (email, time, webhook), then operate on those inputs using a combination of LLMs and code, and then output those results (email, text, webhook). Under the hood, the LLM calls are using GPT-4 via OpenAI and the code is run in sandboxed (no internet) Docker containers in AWS.
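
For a rough picture, a loop definition might look something like the object below. This is a hedged sketch only; the field names and structure are guesses for illustration, not the actual Magic Loops schema.

    // Hypothetical sketch of a loop definition; every field name here is
    // illustrative, not the actual Magic Loops schema.
    const emailSummaryLoop = {
      name: "Summarize and forward",
      // Trigger block: email, time, or webhook, per the description above
      trigger: { type: "email", address: "inbox@example.com" },
      blocks: [
        // LLM block: GPT-4 condenses the incoming email into a fixed format
        { type: "llm", prompt: "Summarize this email as three bullet points." },
        // Code block: runs in a sandboxed (no-internet) container; BLOCK_INPUT
        // holding the previous block's output is an assumption
        { type: "code", language: "javascript", source: "console.log(BLOCK_INPUT.trim());" },
      ],
      // Output block: forward the result
      output: { type: "email", to: "me@example.com" },
    };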

You have full control over each step of the loop, but you can also create (or attempt to create) a Magic Loop by simply describing what you want. We use GPT-4 to break that request into feasible steps, and then create a Magic Loop scaffold. Of course, you should still validate the loop before publishing it!

We've seen some neat use cases already:

- "Text me when the tide is less than 1ft between 7am and 7pm at Fort Funston"

- "Summarize an email using this format and forward it to this address"

- "Text me every time our store does more than $1000/day in volume on Shopify"

- "Take specific data from Cloudflare, format it, and send it to Mixpanel every hour"

We hope you enjoy what's essentially an experiment at this point. If folks like the concept, we're thinking about open sourcing it so you can run the loops locally with the code runtimes you wish (rather than in our code runners).

Let us know what you think, and more importantly, what you wish to build or automate!

Cheers, Adam & Mihai




This looks really cool! I really like the idea of a proper UI tool for chaining different tools together. https://flowiseai.com/ seems to be the most similar, but it's quite a bit more technical, and doesn't seem to be focused on doing the whole thing by itself - rather, you can create chains that can then be used through APIs.

I really hope you open-source this :)


Nice work. I'm trying to build something similar, yet different: FlowFlow.AI. Automate your core business workflows by defining step-by-step tasks and follow-up actions for the AI agent to execute.

- The AI agent decides the appropriate next step in the workflow based on one of the predefined conditions.

- Keeps the AI in control within the boundaries of your complex workflows.

- Use the FlowFlow API or Python/JS packages to interact with the workflows in your code.

FlowFlow.AI is almost ready for the beta launch. :-)


Found one problem - when I get a really big json (e.g. https://a.4cdn.org/g/catalog.json, sorry for a 4chan example), I can't pass it into the code block because I get "{ "error": "limits.read.size", "message": "Failed while reading stream: Max output size exceeded (100000 bytes)" }"

Is there no way to bypass this currently?


I've just increased the limit to 1MB on our test box, which should be sufficient for your file (424KB), but it will take a while to get the code runners switched over to the new image.

Update: You should be good to go now, all the instances have been replaced.


I do like the way your UI strips out some of the ambiguity in how your initial prompt is interpreted by an LLM, and allows for editing, parameterisation, and IFTTT outputs. Especially since exactly how LLMs parse complex instructions is hard to test. There's definitely a sweet spot between "type instruction and hope for magical understanding" and "write the program yourself".

(though as it still involves LLMs I'd probably want to use it for stuff that sounds less mission critical than some of your examples!)


Completely agree -- LLMs are super powerful, but code is predictable. We found the best loops are ones that are almost entirely code (built with the LLM's help of course!) and only use LLM blocks for very specific and repeatable tasks, if at all.

We see a future where the code can be "self-healing" as well, with user approval of course.


+1, I love how the blocks provide transparency. I have been mulling over the UX for a somewhat similar app that I have been working on, and I may borrow some of how this works.


Happy to chat about some of our decisions (or lack thereof!)


Would be good to see the open-source code for sure. LLMs still really struggle with Plan-and-Execute agents, and hopping from tool to tool is a drag. This kind of architecture is less magical but more reliable.


Agreed, nobody wants to use a tool that stops working in a month!

Most of the code is actually pretty simple; the complexity comes from the code runners and the infra needed to support the various integrations.

To open source it, we'll need to create some local code-runner alternative, as not everyone wants to spin up AWS resources. With that said, we're excited about running this locally ourselves (e.g. scripting various things on-device).


First of all, this is amazing/fantastic/cool as hell!

Second, this is what I remember Retool looking like long before it became a huge tool. I feel that this is a step towards building bespoke tools using LLM tech. (Of course, I'm sure others started in similar ways, but this is one I remember.)


This is very cool. I like how simple it is, and I think this is a nice first step towards making LLMs long-term useful for the layperson.


Thanks! We're pretty excited about the potential for LLMs to democratize the ability to program computers (and increasingly the world around us).


This all looks very cool! The generic API element could be really useful as well, and it seems like the LLM is pretty good at knowing some common APIs. Would be really cool to run locally, but I could definitely see it as a business case as well. Way to go!

Will definitely play around with it more


I asked it to make a shit post on r/askReddit every minute and it got an error

File "/home/glot/main.py", line 1 var BLOCK_INPUT = `"If you could instantly become an expert in any one subject, what would it be and why?"`; ^ SyntaxError: invalid syntax


Strange, that line appears to be correct JS. Have you tried regenerating the code?

Each block is editable, even if it doesn't seem so at first (including output!)

It's worth noting that this is not AutoGPT -- the initial generation is just a starting point, and the actual loop configuration and validation are entirely within your control.


Isn't GPT-4 very costly? Plus you're running a scraper. How do you plan to keep this free, or is this just a teaser?

This may be useful but would love some clarity on how long you're gonna keep it like this.


Great question. Running GPT-4, the scraper, SMS, and email isn't free, so to keep the experiment going forever we would need to charge at some point.

For those finding the service useful already: we're also thinking about open sourcing it so you can just run the loops yourself to keep costs down.


Pretty cool tool, could turn into something really powerful.


Wow this is so cool, feels like it could be a better Zapier


I know Zapier has new LLM features, but I haven't done a detailed comparison. I would be curious if the Magic Loops creators could speak to what their product does that is different. At first glance, it seems to do less than Zapier, which isn't necessarily a bad thing; it might just excel at different use cases.


Our goal with Magic Loops was to leverage an LLM-first approach to programming, allowing more people to write code that does useful things.

It turns out that when you write simple programs, they often resemble what you'd find on Zapier or IFTTT :)


Wouldn’t it be possible to leverage Zapier for their integrations instead of trying to imitate that?


Definitely! Because it's just code at the end of the day, as soon as our code-runners have internet access you can integrate to your heart's content.


Zapier has a pretty big barrier to entry: it already provides countless integrations, and that would be the actual tough part to catch up with.


This is great. Please consider a low-cost tier for non-business users; I think many HNers would be willing to pay for such automations.


Thanks. The biggest spend is definitely from GPT-4 and the scrapers, so we're exploring a "bring your own keys" approach for those who don't want to run it locally but still want to keep costs down.


Hi! What a great personal project! I was trying to scrape a page and use GPT to analyze it, and then send me an email IF there was a certain type of (fuzzily defined) content on the page. How would I add the conditional check/step on the email block? Thanks!


Great question! Right now the solution is pretty hacky, but if your code block outputs to stderr then we stop the loop execution.

As an example in JavaScript, simply calling `throw new Error('break!')` will stop the run from continuing.
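
For the scrape-then-email case above, that hack might look roughly like this gate block sitting between the LLM block and the email block (a sketch only; the `BLOCK_INPUT` name and the strict YES/NO prompt are assumptions):

    // Hypothetical gate block. Assumes the preceding LLM block was prompted to
    // answer strictly "YES" or "NO" about whether the fuzzy content is present,
    // and that its answer arrives in BLOCK_INPUT.
    const answer = BLOCK_INPUT.trim().toUpperCase();

    if (answer !== "YES") {
      // Uncaught errors go to stderr, which stops the loop, so the email
      // block never runs.
      throw new Error("content not found, skipping email");
    }

    console.log(answer); // loop continues on to the email block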

We plan to add conditionals, branching, etc. Baby steps :)


heh, I was half-working on something like this. The ideal is to have one stack that can pipe

whisper voice input ("hey, could you tell me what alarms I have tomorrow?") ->

LLM command interpretation ("match input query to pre-defined commands and arguments", "LISTALARMS, , , 8, 2, 2023") ->

underlying script ("{'23:00', 'Doctor's Appointment'}") ->

natural re-statement ("re-word this data as an answer to the question xxx", "you have a doctor's appointment tomorrow at 11 o'clock PM") ->

so-vits/tortoise-tts/some TTS plugin

nodes could be fabbed out of single-use .py scripts, and the whole thing could be compiled into a headless script. endless possibilities.
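
The chain above is really just function composition; a toy sketch (in JavaScript for consistency with the rest of the thread, with every function below a placeholder stub rather than a real library call) could look like:

    // Toy sketch of the proposed voice pipeline. Each step is a placeholder
    // stub; swap in whisper, your LLM of choice, local scripts, and a TTS
    // engine (so-vits, tortoise-tts, etc.) as desired.
    async function transcribe(audio) { /* whisper: audio -> question text */ }
    async function interpret(text) { /* LLM: text -> { command: "LISTALARMS", date: "2023-08-02" } */ }
    async function runCommand(cmd) { /* local script: command -> [{ time: "23:00", label: "Doctor's Appointment" }] */ }
    async function restate(data, question) { /* LLM: data -> natural-language answer */ }
    async function speak(text) { /* TTS plugin */ }

    async function handleUtterance(audio) {
      const question = await transcribe(audio);
      const command = await interpret(question);
      const result = await runCommand(command);
      const answer = await restate(result, question);
      await speak(answer);
    }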


Very cool usage of LLMs! Are you planning to build a business around this? Open source?


We were working on another product called ChatSeed (ChatGPT for teams) but found that many of our users wanted something with a bit more control. Building Magic Loops has been on our back burner for a while, so we decided to dogfood ChatSeed to build this.

Our current plan is to keep Magic Loops running as long as people seem to like it (and especially because I track the tides with it!), but we also think it could be super powerful to run locally, so we're hoping to open-source it at some point as well.


This is pretty fantastic, well presented and easy to understand. You definitely need to get in front of enterprise customers with this. It hits a lot of key asks from my enterprise customers: simple, AI-driven insights and automation.


Pretty cool concept. ChatGPT meets IFTTT. I could see myself using it for some automations in my life. However, I am not ready to trust you with my API keys just yet. ;)


I suppose a webhook could be set up once their instances can access the internet. No trust required.


You can access the internet using the API blocks, just not from within the code blocks today.


Every few years, somebody reinvents Yahoo! Pipes.


It's Unix all the way down...


Damn, this could get incredibly powerful. Nice work!


Thanks, we're pretty excited about the potential of Magic Loops.

Right now the code blocks can't access the internet, but the possibilities will be endless when that lands.

Would love any integration requests!


Looks like a great tool. :)

Suggestion: please add the ability to use credentials for social accounts, to be able to perform social tasks.


Great suggestion! With internet-accessible code blocks this should be doable soon, but we love the idea of a "first class" block for social APIs.


This is dope. I wonder if you can automate a lot of online web scraping using this UI.


How do you plan to get around the GPT-4 rate limiting if this thing scales?


Fantastic question. Our goal is to use LLMs to help create the loops, but to rely on code for the majority of (predictable) automation. LLMs make for useful blocks, and oftentimes you can fall back to GPT-3.5 for most structured tasks. With that said, this launch definitely hit some GPT-4 rate limits :)
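
As a sketch of what that fallback could look like (not their actual implementation; the retry logic and the use of the OpenAI Node SDK here are assumptions):

    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    // Hypothetical per-block fallback: try GPT-4 first, drop to GPT-3.5 Turbo
    // if we get rate limited. Illustrative only, not Magic Loops' actual code.
    async function runLlmBlock(prompt) {
      const models = ["gpt-4", "gpt-3.5-turbo"];
      for (const model of models) {
        try {
          const res = await client.chat.completions.create({
            model,
            messages: [{ role: "user", content: prompt }],
          });
          return res.choices[0].message.content;
        } catch (err) {
          const isLast = model === models[models.length - 1];
          if (err.status === 429 && !isLast) continue; // rate limited, try the cheaper model
          throw err;
        }
      }
    }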

Also, we hope that an open-source version will allow folks to do more with their own keys.


Are you willing to open source this?


We’re definitely thinking about it — running loops using a local runtime could enable some neat use-cases.


What are you using to scrape?


We’re using Apify to scrape, with their Web Content Scraper actor. It’s nice because they handle proxying for us, which decreases the likelihood of us getting blocked. Their “to markdown” functionality is also a lot better than the libs you find on npm. They are expensive, though.
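
For anyone curious, calling an Apify actor from Node looks roughly like the snippet below (a sketch; the actor ID and input fields are illustrative and should be checked against Apify's docs):

    import { ApifyClient } from "apify-client";

    const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

    // Sketch of scraping a page via an Apify actor. The actor ID and input
    // fields are illustrative; check Apify's docs for the exact schema.
    const run = await client.actor("apify/website-content-crawler").call({
      startUrls: [{ url: "https://example.com/some-page" }],
    });

    // Pull the scraped items (e.g. page content converted to markdown)
    const { items } = await client.dataset(run.defaultDatasetId).listItems();
    console.log(items);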




