Hacker News
We built an exceedingly polite AI dog that answers questions about your APIs (akitasoftware.com)
78 points by eob on March 16, 2023 | 61 comments



I know we're all suffering exhaustion from all things GPT, all the time, but the new possibilities that keep popping up continue to amaze me.

I would far rather ask AWS: "Which ALB threw the error for the request to /foo/bar?" than figure out how to connect the dots and discover it myself.


I’m not exhausted at all.

Unlike the NFT craze, there’s actually something here, haha. I can use it right now and get real work done with it.


Right, every time I see the AI hype compared to crypto my brain breaks a little.

As far as I'm aware crypto never saved me a minute of time, but even in a pretty conservative accounting, ChatGPT has shaved a few hours of tedium off various coding tasks for me since its release. Not to mention just the sheer entertainment value I've gotten out of it.

I definitely understand some of the concern and share some apprehension about certain aspects of this tech, but I can't understand how some folks can't seem to find even a _little_ to be excited about here.


> As far as I'm aware crypto never saved me a minute of time

It's saved me days and weeks of international wires. Basically, for the last half-decade, things that would have been international wires, subject to crazy scrutiny, missing deposits, and "which institution has the funds" back-and-forth, have been reduced to domestic transfers.

The international vendor would just receive crypto, or vice versa. And we would use our local methods to have local money in our bank account.

Oh right, and at unlimited amounts. That's important; a lot of people never move enough money to know this is a problem.

Fees were not percentage-based, and the commerce could work 24/7. This is very different from whatever arbitrary limitation is placed on the other method.

This is funny to explain because it assumes we even wanted our state's currency (USD for me, I'm American) after the fact to begin with, since we could buy goods, services, investments, and entertainment with the crypto immediately. But going back and forth is fine. I want to pay fiat people or pay off a credit card? It's in the bank account the same day. I want to stake in a crypto service? I can do that immediately.

I don't think there is any metric on how large this use case is: curiosity-inducing international wires replaced by low-risk domestic transactions. I don't think it is possible to quantify, because they just look like address-to-address transfers and domestic withdrawals. I think it's common enough.


Yeah, I'm aware there are some legitimate uses for crypto, apologies that I came off overly dismissive.

My point is mainly that the use cases for GPT/LLMs are pretty glaringly obvious to anyone with an imagination, and should be proportionally more difficult to dismiss, though dismissing them seems to be a somewhat popular minority opinion here on HN.


Crypto has been equally revolutionary, but the value lies in a different stratum that’s harder to relate to than the immediate gratification of AI-enhanced chat.

Things are going to get really interesting where these worlds collide. I look forward to truly open models built on blockchains. I know I’ll give off mad scientist vibes for saying this, but a highly distributed model that can’t be shut down, blocked, centrally programmed, or constrained is the future I’m excited to see.


> a highly distributed model that can’t be shut down, blocked, centrally programmed, or constrained is the future I’m excited to see

Brushing over the idea that we’d run an ML model on a blockchain like it’s a general-purpose computer… we already have a distributed model that can’t be shut down. LLaMA leaked over BitTorrent, and that was used to build Alpaca - and that’s a small model (~4GB) that basically anyone can download and run on any device.

As long as people keep seeding it, it’ll stay distributed and resistant to being blocked, etc. And as long as we distribute info like we do today, anyone will be able to train the model (like they did from LLaMA -> Alpaca).


> Brushing over the idea that we’d run an ML model on a blockchain like it’s a general purpose computer

Glad you were able to brush that straw man out of the way for us.

> LLaMA leaked over BitTorrent . . . small model . . . anyone can download and run

I’m aware of that, but I’m referring to models with parameter counts in the trillions; that is, the kinds of models that require supercomputers (or globally distributed commodity infrastructure and ASICs) to run. As the state of the art advances, what leaked years or even weeks ago will not matter; people want the latest and greatest, and they’ll demand that it serves their interests, not OpenAI’s.


The dev ops potential is inspiring. Log diving can be so frustrating/cumbersome/tedious.


You get paid to log dive.


You get paid for outcomes.

It’s only a matter of time before DataDog, among other logging vendors, builds this into their product.


That was my first thought! This friendly dog is the personification of that company's name. Does Alexis Lê-Quôc read Hacker News?


I got paid to make a simple tool and some standard conventions to make log wading easier. Between gateway events with errors in BPMN and correlation IDs auto-logged in every message, it’s pretty easy to figure out where in 40 services something blew up that failed our process.
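
As a rough sketch of the correlation-ID half of that (the filter and field names here are illustrative, not our actual setup):

  # Hypothetical sketch: stamp every log line with a correlation id so one
  # request can be traced across services with a single grep.
  import logging
  import uuid

  class CorrelationFilter(logging.Filter):
      def __init__(self, correlation_id):
          super().__init__()
          self.correlation_id = correlation_id

      def filter(self, record):
          # Attach the id to every record so the formatter can print it.
          record.correlation_id = self.correlation_id
          return True

  logging.basicConfig(format="%(asctime)s %(correlation_id)s %(levelname)s %(message)s")
  log = logging.getLogger("orders")
  log.addFilter(CorrelationFilter(str(uuid.uuid4())))
  log.warning("payment gateway returned an error")  # grep that id across all 40 services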


Yes, because it's both useful and tedious. If it becomes just useful but not tedious, the price goes down, but I'm also free to take on other tasks. If this happens to enough tasks, perhaps there will be need for fewer of us.


I'm not sure what you're suggesting but it adds no value to the discussion.


I'm suggesting that the boring and menial work making up the majority of your day is, in fact, your job, and if it's automated to any meaningful degree, you will be unemployed rather than freed up to do more interesting things.


I suppose we have different jobs. My team and I go out of our way to automate menial work so that we can focus our time more directly on adding value for the customer.


So if a fake dog can log dive, does that mean the current log divers are... live dogs?

Now I finally understand why people say SRE is a dog-eat-dog world.


I’m tired of all the threads of interesting and viable-looking use cases being bombarded by the same contrarian armchair philosophers offering an unsolicited lecture about how LLMs don’t provide value because they aren’t conscious, or whatever.

Keep this stuff coming!


I had the impression that people didn't really care about consciousness of LLMs. After all, they're just glorified Markov chains.

The main criticism seems to focus on the fact that they "hallucinate" a bit too often.


This may be a matter of personal preference, but I find the style of the replies annoying.

When I'm trying to get something done I don't want a bot telling me what a splendid day it is. I could be 7 layers deep into some rabbit hole with an entire stack of things in my head. Just give me a concise, accurate response.


I agree a lot with this. I’ve probably spent more time than I’d like crafting system prompts to keep it from doing this. As far as I can tell, the best way so far has been to say something like “you are a terse, helpful AI who responds in 100 words or less”. The lack of extra words to work with forces the bot to focus hard on exactly the answer I asked for, because that’s the part that is useful to me and therefore rewarded.
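
Roughly, as a minimal sketch using the openai Python package (the model name and exact wording are just placeholders, not a recommendation):

  # Hypothetical sketch of a terse system prompt via the OpenAI chat API.
  import openai

  response = openai.ChatCompletion.create(
      model="gpt-3.5-turbo",  # placeholder model name
      messages=[
          {"role": "system",
           "content": "You are a terse, helpful AI. Respond in 100 words or less. "
                      "No greetings, no pleasantries, just the answer."},
          {"role": "user",
           "content": "Why did requests to /foo/bar start returning 502s?"},
      ],
  )
  print(response["choices"][0]["message"]["content"])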


Absolutely.

In general, cutesy UIs are fun only the very first time you use them. But they’re a liability in the long term because once the novelty of the cuteness wears off, you risk really annoying your users, especially in high-stress scenarios.


Somewhat off-topic, but this is what turns me off about Jarvis/Friday during rewatches of any MCU properties. It's so unrealistic to have a quippy AI be useful in the midst of combat.

Also: I just realised that the Avengers would never happen if Stark just realised that launching Jarvis as a product would take the world by storm :D


Seconded! If there’s anywhere we can cut the superfluous social fakery it’s with an AI that obviously doesn’t give two hoots about us or how our day’s been going so far. I found its replies stylistically tedious.


Absolutely agree. It reminds me of all the cute copy I'd read in web apps in 2008. No, a fatal error message is not the time to be cute.


2008? This is still a thing, maybe more than ever. Recent HN thread: https://news.ycombinator.com/item?id=34812857


Sometimes my wife asks me to edit correspondence for her, and I spend a good chunk of time deleting pleasantries.

Personally I try to think of the "signal-to-noise ratio" and also respect the recipient's time when communicating with them.

Of course it's not good to be terse to the point of being rude.


Depends on style, really. Curate to your audience.


If you wanted a concise, accurate response you could look at the URL and the chart, rather than the AI's guess at what they mean.


4XX errors typically indicate client issues, e.g. requesting a non-existent resource. The AI dog should be more concerned about 5XX errors, which indicate server issues, e.g. the service being unavailable.


But why is the client asking for a missing resource? Commonly you can find some asset of yours linking to that missing item. These days you can have a bot chase that down and give you a damned good idea of a proper fix.
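
A minimal sketch of doing that chase-down by hand, assuming a combined-format access log (the log path and regex here are illustrative):

  # Hypothetical sketch: group 404s by the referrer that linked to the
  # missing resource, using a combined-format access log.
  import re
  from collections import Counter

  LOG_LINE = re.compile(
      r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)"'
  )

  missing = Counter()
  with open("/var/log/nginx/access.log") as f:  # illustrative path
      for line in f:
          m = LOG_LINE.search(line)
          if m and m.group("status") == "404":
              missing[(m.group("path"), m.group("referrer"))] += 1

  for (path, referrer), count in missing.most_common(10):
      print(f"{count:5d}  {path}  <- linked from {referrer}")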


This is so cool. I can imagine a very fun future of semi-reliable digital assistants attached to every service or tool.


It's probably better to have your own that can work across services. Most queries and problems involve multiple sites or services, so having your AI assistant work above them makes sense.

I suspect this is where Google and Microsoft are heading.


From Microsoft's recent Copilot announcement, that seems correct.


I'm really enjoying using ChatGPT as-is but I fear that all these jailbreakers are just going to force them to turn it off.


People always ruin everything just for the fun of it. Use it while you can. I doubt ChatGPT is going to be around for much longer.


Um, I'd rather they break a 'rather powerful' piece of software now than wait and try to break and control a super-powerful one later.


I can’t help feeling like this is sand running through my fingers. All the low-hanging-fruit applications for GPT are getting taken by companies fortuitously in place to take advantage of them.

It feels like the stock market: by the time you hear something about a company it’s already been priced in.

It feels like the only way to be ahead is to get in on the action early, make your millions and then not have to worry about getting displaced out of the workforce in 1-15 years.


The majority of work we do now, even in skilled professions, is busy work. How much of the work you do each day is unique and novel? How much of your work could have either been automated or made unnecessary long before ChatGPT? Probably most of it. We’ve long since been able to do less work and yet we continue to do the same things over and over again. Even if ChatGPT can magically do everything for us, what evidence is there that we would take advantage of it? World hunger is a trivial problem to solve given the resources we have today and yet we apparently can’t be bothered to fix that extremely low hanging fruit — why would we buck that trend by radically rethinking knowledge work… because of an LLM?

If LLMs were enough to radically change knowledge work, we already wouldn’t be wasting our lives grinding out 40 hours a week so we can retire at 65.


  > World hunger is a trivial problem to solve given the resources we have today and yet we apparently can’t be bothered to fix that extremely low hanging fruit

Except world hunger isn't a resource problem at all, which sort of goes against your point. Solving the real problems is in fact quite hard, as you need to solve for human greed and mental illness.


The best way to get ahead of big companies is to pursue an angle that seems too silly, obvious, pointless or worthless for them to entertain devoting resources to.

Every billion dollar company around today started in an environment where the big boys in the industry they disrupted called it "stupid".


So the best way to compete... is to not compete.

Not a criticism (I agree entirely), just want to highlight the irony of how anti-competitive markets really are.


Something similar happened with mobile. Established businesses in banking, social media and e-commerce were already well positioned on day 0 to make billions.


Mobile was a huge boon for indie development too, across all categories. AI will be the same.


A lot of these companies won't be able to ship a high-quality GPT product. It'll feel like a shitty add-on grafted onto an existing product (Word, etc.). Just as existing players failed to think "mobile-first", "GPT-first" thinking will be difficult to adopt and execute well.

Additionally, a bunch of these products will reduce existing revenue sources, which most companies will resist until it's too late.


Have you tried asking ChatGPT if there are any novel applications using ChatGPT that you could develop?


One option I think could be interesting, provided by GPT-4:

> AI for Experimental Recipe Generation: Build a platform that generates unique and experimental recipes based on users' ingredient preferences and dietary restrictions. Target home cooks, food enthusiasts, and culinary professionals.


Is this Clippy for Grafana?


There seem to be literally 0 pics of dogs on that webpage


Did they tell it to write in Victorian style so it would be more polite?


This would be great to visualize the carbon footprint of your infrastructure.


Just don’t think too hard about the carbon footprint of that ChatGPT bot


In the context of benefit to society, this should be a tremendously smaller amount than what we emit by driving.


For now, while we only have a few small models, sure.

But models are inefficient and always will be by the nature of being general purpose.

You don’t need a language model to tell you about your carbon footprint report. You could just use some of your human brain time to understand instead.

Not saying that AI is always inferior to human intelligence, but it's much like driving a car vs. walking. One is much faster, but has a much higher impact on the environment.


You can run a ChatGPT-like model now using LLaMA + Alpaca on a regular CPU device in around 4GB of RAM. So if you want a low-carbon ChatGPT-style model, there are great options now.


If the prices reflect cost, and the cost is electricity, not hardware, and the electricity is $0.05/kWh, then every 1,000 tokens (750 words) of ChatGPT is $0.002 = 1/25 kWh = 40 Wh.

Average human reading speed varies by language but one average over many languages is 184±29 wpm, so those 750 words will take you about 4 minutes (4.07) to read.

If you're an American, your average electric consumption in that time is 1,387 W * 4.07 minutes ≈ 94 Wh.
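
Back-of-envelope, in code (all inputs are the assumptions above, not measurements):

  # Hypothetical back-of-envelope check of the figures above.
  price_per_1k_tokens = 0.002      # USD, assumed ChatGPT price for 750 words
  electricity_price = 0.05         # USD per kWh, assumed
  chatgpt_wh = price_per_1k_tokens / electricity_price * 1000   # ~40 Wh

  words = 750
  reading_speed_wpm = 184
  reading_minutes = words / reading_speed_wpm                   # ~4.07 minutes

  us_average_watts = 1387          # per-capita US electric consumption
  human_wh = us_average_watts * reading_minutes / 60            # ~94 Wh

  print(f"ChatGPT: ~{chatgpt_wh:.0f} Wh; reader: ~{human_wh:.0f} Wh over {reading_minutes:.2f} min")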


But the human still exists.

(You compute your carbon footprint per minute by adding up all the things you do over a long period of time and dividing by the duration. This ChatGPT footprint should be added to the numerator; it doesn't replace the whole thing.)


But the human requires oxygen and water and lots of climate control.

If AI is as useful as we think it is, then having huge compute farms in space would be the best long-term goal.


Sure, I'm just saying ChatGPT is low energy even compared to the opportunity cost of reading its output.


To put it in context, ChatGPT's operators would have to be forthcoming about its footprint, which they are not.



