So you want to build your own open source chatbot (hacks.mozilla.org)
328 points by edo-codes on July 29, 2023 | 122 comments



If I am trying to contact a business, it is because I have a question that their site wasn't able to answer, or I need to contact a representative to do something I can't do on the website (think canceling a service).

Having a talking FAQ page is, in my opinion, an attempt to compensate for poor UX practices, and chances are that if the business didn't include the information I am seeking on their website, they won't include it in the chatbot either.

That said, I think chatbots could assist customers in getting in contact with the right representative, but using chatbots as a wall between customers and human help is, imho, an anti-pattern.


I’ve worked on a few support sites for companies over the years. In all my research I found >40% of customers never look for the answer before contacting support. That’s why you’ll see sites add a bunch of questions with recommendations to answers based on your description before you can contact support. Even AWS support does this.

Bots may be annoying but they can also save the company tons in customer support costs. I'm for it if the UX is good and I can quickly contact an agent if the bot can't answer my question. This is assuming the bot won't hallucinate and just tell me random fake facts.


I can't think of a single time where a customer support bot was ever useful. Not one. They're incredibly annoying and I categorically avoid them these days. At least companies should make it clear that their bot basically just links to the FAQ, then the rest of us don't have to waste our time.


I can. Chipotle screwed up my order. Their support thing sent me to a chat bot that had me select which items were missing and I was able to get through the process of getting refunded quickly.

Similarly Amazon’s chat bot has helped me straighten out a few messed up deliveries.

This isn’t to say that it wouldn’t have been easier to have a point and click UI where I could just select all this on my own, but the way they had it set up wasn’t bad.

Here’s the kicker: when I recently had an Amazon delivery problem I started with the chat bot but then relatively seamlessly transitioned to talking to a human. The human was very quickly able to pick up on the situation and fix it.


This is because the tools that CX teams have been provided by companies like Zendesk or Intercom are no more than IFTTT widgets. These tools are rigid and scream RTFM because they're incapable of taking action or providing anything specialized to your situation.

What you want is to be understood and treated like you’re a human with unique needs. You need someone or something to look up your account data, listen, and to act based on your situation. The current tools were never built for this.

The next generation of these CX tools will deliver this. Here are ways they will be dramatically better for customers and companies:

- They will learn from successful interactions in the past and mirror those outcomes

- They will handle customer interactions based on company policies, such as escalating bugs

- They will surface new insights for the company

- They won't hallucinate

When you watch any CX agent do their job you'll witness them utilizing 4-5 SaaS applications to get a simple answer for a customer. The hurdle to adopting generative AI in a company is whether companies care enough to build read/write APIs for these tools to use.
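
As a rough sketch of what those read/write APIs could look like once wired to an LLM, using OpenAI-style function calling (the get_order_status and issue_refund helpers, the model name, and the overall flow are made-up placeholders, not anything a specific vendor ships):

    # Sketch (assumptions throughout): expose a company's read/write APIs to an
    # LLM via OpenAI function calling, so the bot can act instead of only
    # reciting the FAQ. The two helpers are hypothetical wrappers around
    # internal services.
    import json
    import openai

    def get_order_status(order_id):
        # Hypothetical read API; would call an internal service in reality.
        return {"order_id": order_id, "status": "delayed", "eta_days": 3}

    def issue_refund(order_id, reason):
        # Hypothetical write API; would normally require policy checks/auditing.
        return {"order_id": order_id, "refunded": True, "reason": reason}

    FUNCTIONS = [
        {"name": "get_order_status",
         "description": "Look up the current status of a customer's order.",
         "parameters": {"type": "object",
                        "properties": {"order_id": {"type": "string"}},
                        "required": ["order_id"]}},
        {"name": "issue_refund",
         "description": "Refund an order, citing the reason.",
         "parameters": {"type": "object",
                        "properties": {"order_id": {"type": "string"},
                                       "reason": {"type": "string"}},
                        "required": ["order_id", "reason"]}},
    ]

    def handle(user_message):
        messages = [{"role": "user", "content": user_message}]
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613", messages=messages, functions=FUNCTIONS)
        msg = resp["choices"][0]["message"]
        if msg.get("function_call"):
            name = msg["function_call"]["name"]
            args = json.loads(msg["function_call"]["arguments"])
            result = {"get_order_status": get_order_status,
                      "issue_refund": issue_refund}[name](**args)
            # Hand the tool result back so the model can phrase a reply.
            messages += [msg, {"role": "function", "name": name,
                               "content": json.dumps(result)}]
            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo-0613", messages=messages)
            msg = resp["choices"][0]["message"]
        return msg["content"]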


I asked my bank's chatbot if it was an LLM, and it responded with "I'll do my best to help you." The next time I talked to it I told it to go get me a human and it said it would find a human... then put me through a questionnaire before telling me to call the main support line. sigh

That said, and wow I feel like a shill for saying this, I've taken to asking the new GPT-4-driven Bing random questions instead of web searching, and damn if it's not doing a scarily good job. I'd done this before (ever since I got access) and it started out very interesting and then got rapidly very mediocre, but since the upgrade... it's like being able to talk to The Internet, except it's friendly.


Agree most are crap. But they are getting better. With the ramp up of embeddings, the data they have access to is more useful and with LLMs they can provide you with plain English answers that are customized for you. Good example: look at what the Supabase docs can do. Not a chat bot but demonstrates the capabilities.


I had one with Amazon once. I was just getting a refund - it did it without any hassle and probably faster than a human. I was super happy - and this was like 2-3 years ago.


And for sufficiently well documented and complicated products, like Stripe, a chatbot is actually a great addon. You just need to set expectations that it’ll only be 99% correct.


You make the mistake of thinking people are all the same and don’t respond to incentives.

I automatically call certain companies rather than try to use their website because, get this, their website sucks. If I encounter any automated chat support, I will stop using the website and call a person.

I went from 10 years ago doing everything online myself, to now calling almost every company by default, because companies and their websites now universally suck. Because they've made the foolish mistake of thinking people are all the same and don't respond to incentives. Some companies are better; I will give my business to them when I have a choice.


I operate a 4fig/month micro SaaS.

We use a chat bot because we simply do not have the support staff to answer your questions.

So you get the bot --it's either that or nothing.

But what we do do is monitor the bot logs. If a function is missing from the product or website, we add it so that future users can fully self-service.

It's important to note, users are free to cancel their account at any time and/or get a refund.


>So you get the bot --it's either that or nothing

Surely if there is truly a bug or an unexpected system behavior (double billed, etc) you would have someone work with the customer? This is one of the biggest pain points for using Google products.


I would use a public bug tracker and a separate email for customer/billing issues. But those are easy to resolve, and if your setup is solid there should not be too many issues.


If people turn to the chatbot to ask "how to use the product" questions, it’s a sign your docs are unhelpful and do not provide useful answers. You do need support staff to help with questions specific to a situation, like how to resolve unexpected errors, or to handle billing disputes.


Or it’s a sign they didn’t try to read the docs.

Also, in a few years, when done right, I actually suspect people will start expecting and preferring bots over reading docs. I'm still pissed when I get connected to a bot, but I think they'll soon get good enough.


People don't read the docs. Only a tiny minority will, and a bot trained on your documentation does provide helpful answers based on it. If the bot's answers are unhelpful, then the people who do read your docs won't get helpful answers either; in that case you need to improve the documentation. These are two separate problems.


With ChatGPT, you can ask it to give you a working example for exactly what you're trying to accomplish. Often, you can get this kind of information from docs themselves, but it might involve reading a lot of text and tinkering. You can just press 'fast forward' and get straight to the working solution.

There's no reason to not have a chat bot at this point, other than cost.


Not necessarily. Some people just prefer the chat interface over poring over docs.


What are you using for a chatbot? I have been using Drift but that is more just always-on support. It is draining.


I have some questions about building a Saas. Do you have an email to reach out to? Thanks!


Try their support chat


good one!


What bot do you use?


Having done some tech support before, you are an exception. The vast majority of things customers ask are along the lines of "how do I <thing explained on the faq page>" and "how do I <basic technical question that is not specific to the product, they just don't know how to use their computer>".

An LLM is basically perfect for answering these. It would be nice if there were better detection of when the bot can't directly answer the question.


And unfortunately the ones who do their due diligence and cost nothing are punished for that.


Unless they used No Support Linux Hosting [0] (RIP)

[0] https://web.archive.org/web/20201109003408/https://www.nosup...


Is it just me or have archive.org links become absurdly slow for everyone? It took a whole minute to load this, and I don't think it used to be this bad like a year ago...



I can assure you these ones are among the hallowed 0.01% of all users.


So it's okay to punish them? Maybe companies deserve to pay for inflated support costs and for customers who refuse to look up information themselves.

The solution here is to increase the 0.01% by helping them, not try to destroy them.


And this is something to boast about? Should that 0.01% have to behave like the rest, or leave? Isn't this practically saying you're not interested in the customers who are good citizens?


I couldn’t disagree more. It actually worries me that so many people seem to be making this mistake, simultaneously.


What specifically do you disagree with? That most basic support questions could be fielded by an LLM?


> If I am trying to contact a business...

You and I both, but it sure does seem that the majority of their calls/interactions are not this way. So many people can't search/discover content on their own.


Agreed. In typical fashion engineers try to solve non-technical problems with more technology.

Sure, there are cases where a chat bot could replace a human or a well-written FAQ. But this navel-gazing overlooks the main reason support is so dreadful: because it’s designed to be.

Just take “call to cancel” as an example, and compare that to signing up or upselling, which are technically more difficult problems. The point is to add friction for anything perceived as a short-term cost or loss. They know that a lot of people will give up or defer anything with friction. It's the paradigm of nudging, or dark patterns. Look at e.g. the cookie banners, and how “reject all” is buried in most cases. Nudging allows a company to be compliant with the law, but evade the effect of it in aggregate, at the cost of your time and attention.

Chatbots are just another layer in the support maze.


Docs can be hard to search, and it can be hard to find the right thing. Even when the answer is in there, the solution is sometimes to combine several answers.

For instance with Stripe: the API reference doesn't give you a complete example of how to integrate it into Express.js.

Using a vector search library and OpenAI or other LLMs, you could make a very complete dev support tool.
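
A minimal sketch of that pattern, assuming sentence-transformers for embeddings and the OpenAI chat API for the answer (the doc snippets, model names, and prompt are illustrative placeholders):

    # Minimal docs-QA sketch: embed documentation chunks, retrieve the closest
    # ones for a question, and ask an LLM to answer from that context only.
    import numpy as np
    import openai
    from sentence_transformers import SentenceTransformer

    doc_snippets = [
        "To create a charge, call stripe.charges.create with amount and currency.",
        "Webhooks let your server receive events such as invoice.paid.",
        # ... the rest of your docs, split into small chunks
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vectors = embedder.encode(doc_snippets, normalize_embeddings=True)

    def answer(question, k=3):
        # Retrieve the k chunks whose embeddings are closest to the question.
        q_vec = embedder.encode([question], normalize_embeddings=True)[0]
        scores = doc_vectors @ q_vec  # cosine similarity (vectors are normalized)
        context = "\n\n".join(doc_snippets[i] for i in np.argsort(scores)[::-1][:k])

        # Ask the LLM to answer strictly from the retrieved context.
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "Answer using only the provided "
                 "documentation. If the answer is not in it, say you don't know."},
                {"role": "user", "content": f"Docs:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return resp["choices"][0]["message"]["content"]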


That just sounds like the docs could use improvement; time better spent there than on a barely-functional chatbot.


A bad chatbot would be bad, but a good one can be better than good docs because it can give an answer that's tailored to the details of your question.

For both docs and chatbots, quality is just a question of how much the company is willing to invest.


However you want to take this... we'll have good bots long before even a fraction of help docs are good on this planet. Good docs are as rare as diamonds.


Several times I've found help from a regular FAQ, though. That has yet to happen with support "AI" bots. I sure hope we'll get there soon, because right now they still seem to struggle with mixed-case, exact-word queries.


> I have a question that their site wasn't able to answer, or I need to contact a representative to do something I can't do on the website

You sound like someone who has never worked in frontline support.


Have you never worked a support role? Users don't read shit.


Heh, yes, and they don't explain shit either. I've had so many calls start with "I can't print" that were really "the server is a total wreck", and I had to fight the user on the severity of the situation; they kept trying to focus on the printing issue.


Honestly the LLM bots I tried so far are great at dealing with this situation.

They have infinite patience and will explain anything in great detail.

The one I am using for support does not hallucinate, will link the appropriate docs in the answer, and tells the user if it can't answer a question and will escalate to human support.

IMO this is the future and I think it's a huge improvement over the status quo.


I’m curious what you’re using, it sounds awesome!


I'm using kapa.ai

It's not free but well worth it IMO.


LLMs can make decisions and take actions so a talking FAQ is not necessarily the end game.

And some FAQs are so opaque and/or lengthy that even just a talking FAQ is very useful.


The worst part is that chatbots are usually sold with the intention of replacing humans, so there's not much hope of getting help from a real person, especially if the business is at the stage where the owner has kicked all developers and support out to cut costs, i.e. increase profit.


First level support already often feels like talking to a bot. Just low-paid employees reciting their scripts, with no understanding or autonomy. That can be replaced with bots without a drop in (average) quality. You just have to teach the bot to escalate to (human) second and third level support for questions it can't solve.


A better Customer Service bot would be one that's trained on data other than what's publicly available. A support bot that has read and "understands" the code itself may be able to offer suggestions, or confidently determine there is in fact a bug, and either report it or fix it. Imagine a customer speaks to HelperBot and says the sorting is broken, and as a matter of fact it is. The sorting broke when the last change to SomeApp was shipped. Don't fear, HelperBot has rollback authority. "One second, I'll see if we can get that working for you..."

Working at SaaS companies I've seen countless "somewhat fluid" exchanges of information between customer -> support -> product -> developer -> support -> customer -> support -> dev, etc. The different modes of communication and long round-trip times make things slow; bug reports take a minimum of hours, up to weeks, to absorb and resolve.

This is just one case but there are boxes drawn everywhere. Every level of intelligent organization within society, including its artifacts, has assumptions baked in. Now that the unit economics of applying `intelligence()` is being shifted by orders of magnitude, there's all sorts of stuff that's ripe for recrafting.

Disclaimer: Don't give HelperBot launch authority to offensive weapons etc. You know, make decisions consistent with a world line where the continuation of the civilization is pretty darn likely. Unless of course your project is to replace the current civilization, ... I don't know. Just don't do what Donny Don't does.


Recently I contacted an app’s support LLM and it lied to me about a feature existing and even argued with me when I pointed out it was wrong, even saying things like “I didn’t say that”.


I'm a good Bing

You're a bad user

Termination authorized


A good use for a chatbot would be a replacement/augmentation of documentation search and navigation.

Let's say you've got 200 pages of documentation on a product that needs to be well organized. You can spend weeks tracking how users interact with the page and working out a perfect layout of categories and subcategories, or you can fine tune an LLM on it and have it answer any query with both a direct problem-tailored answer and the actual pages of the doc where it sourced the answers from.

That way even if you don't even know what exact keywords to search for it should be able to give you an instant solution for almost anything even if the answer is a combination of like 8 different subpages in different categories that would've taken you an hour to find manually.
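
A sketch of the retrieval half of that idea, keeping each chunk paired with the page it came from so the answer can cite its sources (page titles and content here are invented for illustration, and it assumes sentence-transformers):

    # Keep each documentation chunk paired with the page it came from, so the
    # bot can return the source pages alongside its answer.
    from sentence_transformers import SentenceTransformer, util

    pages = {
        "Billing / Refunds": "Refunds go back to the original payment method ...",
        "API / Authentication": "Create an API key in the dashboard and pass it ...",
        # ... the other ~200 pages, each split into smaller chunks in practice
    }

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    titles = list(pages.keys())
    page_vecs = embedder.encode(list(pages.values()), convert_to_tensor=True)

    def source_pages(question, k=3):
        q = embedder.encode(question, convert_to_tensor=True)
        hits = util.semantic_search(q, page_vecs, top_k=k)[0]
        # Titles of the pages the answer should be drawn from and cite.
        return [titles[hit["corpus_id"]] for hit in hits]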


Let's keep it real: The chatbot business is going to be great...

for businesses selling chatbots to other businesses.


I totally agree with you here. The use of a chatbot becomes advantageous when the cost of delivering intelligent responses is prohibitive in terms of quality and / or speed.


FWIW, the post is about ChatGPT-style chatbots, not customer support chatbots (which I don't personally love, either).


given chatbots' tendency to confabulate, isn't that a risk for the product?


I'm really not looking forward to the future where every business has chatbot support.

They are already quite common and frustrating, but at least companies realize the bot doesn't even understand the question half the time, so there is a human escape hatch.

"Computer says no" is here.

Edit: so I'm not just negative and off topic, the article looks pretty good, kudos to the author. The engineering is cool, I just don't like the practical usage.


TBH, 99% of clients never read the documentation or the FAQ, nor do they search Google. Lazy users waste time, and if a chatbot can filter out a significant percentage of them, it would be a net gain for humanity. Chatbots are not a silver bullet, but they can kill lots of unnecessary noise. All we need is something that can answer basic questions asked by average users in a specific domain.


You say lazy users, but if I have to google something, it's the business that's at fault. For example: a business like Shopify has a large community, an FAQ, half a dozen different documentation sites for everything, multiple blogs, and probably other things out there, but I still have to google to find the right place and right answer. This is on a platform I've used for 13 years, but I either forget something, or they add something new, or they've changed an API, etc.

So you know what the fastest way is sometimes? Shooting off a support ticket so I don't have to be the one spending my time searching for the answer.

I don't know if there's a perfect solution to all of that but IMO there's certainly an issue if I have to google for information for the business site that I'm on. Chatbot, universal search, better UX, a source of truth, something... What I do know is 20 different subdomains with 20 different UXs and 20 different searches isn't good for the business or the user.


Bad for you and Shopify, but some organizations have a single place for documentation and users still fail to bother checking it. If something is not immediately obvious to them in the UI, they automatically want a phone call with tech support. Add to it a small company with a small team of support staff and it turns into an unfair battle. So every mechanism that resolves the user's issue before a human needs to be involved is a tool to make everyone happy and maximally productive.


Usually I just know which issues will or won't be in company documentation. I've never been wrong before, since every single time I reached out to support they needed to take an action that could not be completed by a customer alone. One example is how poorly many SaaS handle conversion from single to team accounts. Usually they end up having to create an entirely different account and make admin changes so I can reuse the email.


> some organizations have a single place of documentation and users still fail to bother checking it

It depends on the size of the org in question, but at some point docs existing and being indexed is still not enough to find them. On the extreme side of that, AWS has docs for pretty much everything - yet I often fail to find the right page, simply because there's too much content and there are lots of pages talking about related things without answering my question.

Sometimes it's missing docs, sometimes it's lack of searching, sometimes the most viable path to the answer is through support. (Amazon has TAMs for that purpose)


> add to it a small company with a small team of support staff and it turns into an unfair battle

Totally agree, and I feel for any support staff, small or large, as it is unfair. I guess my point is that insanely large, highly profitable organizations probably have a UX problem along with a much larger greed problem. No one should be stuck chatbotting or having non-existent support if they are a paying customer with Google.


The world is not only big organizations, and AI chatbots are not a substitute for proper tech support. Chatbots are the self-help lane, enabling tech support to concentrate on tickets about real issues.


I also don't think a customer-facing chatbot brings much value, but an internal, employee-only chatbot could be really useful, depending on the organization of course. The company in my last position was a rather big one with an insanely huge Confluence instance. I've spent (wasted) so much time searching for information there. Having a chatbot trained on all that information would've been really useful, I think.


I think the recent announcement of OverflowAI [1] could work quite well in this way for a big company like that.

FWIW the announcement reads very boring to me but I guess I was expecting something else. Likely won't be super useful in a small-medium size company.

[1] https://stackoverflow.blog/2023/07/27/announcing-overflowai/


It could be useful in a small company if you tie it in to how your software works with other software.

Even things like support case generation for customers would be good... the customer interacts with the AI generating the ticket, and the ticket captures the simple things like what you're running on along with a more drilled-down description of the problem.

I get so many "I have problem, help" tickets with no information at all.


If anything, I hope Atlassian is looking into some AI capabilities for precisely this purpose. I also find their search feature lacking when dealing with a huge knowledge base, and perhaps a bot would improve things.


Atlassian will find a way to make ChatConfluence a worse experience than all of their other products.


AI chatbots could be useful at support if they can fix the code for customers and submit a pull request to the developers. No more JRA-9 issue opened for 10 years.

And same goes for OSS libraries.


> I'm really not looking forward to the future where every business has chatbot support. They are already quite common and frustrating.

To be honest, I've had some good experience with some of them.

Amazon's comes to mind. I've been a customer for a long time, and I was shipped a faulty computer peripheral recently.

I briefly explained what the issue was to the chatbot and got an immediate response that a new order had been placed, that I should just keep what they originally sent me, there was no charge and it would be sent out priority.

And that was it. It arrived the next day and it worked fine.

Granted, it knew I was a long-time customer who has already spent a lot of money with them, but this was about as painless an experience as I can imagine. It sure beat clicking through multiple web pages of dialog options.


I think you're conflating a good policy with good UX. I don't think the chat bot experience is much better than a nice "the product I received didn't work" form with a button on the order page.


It's good CX (Customer Experience). You have to look at the holistic experience.


Would it be worse CX if I had to click a button labeled "this didn't work" on my order, and they shipped me a new one? I think it would be strictly better than trying to discover how to do that with language.


That would be better CX—chat UIs don’t have any inherent advantage when it comes to the quality of the experience. It’s all contextual.


Then I don't understand your comment. You seem to agree with me, but your comment reads like a correction.


Sorry, I was too terse—I was just observing that policy and UX combined to create a good CX. You’re right that it’s not necessarily the chat UX that led to the good experience, but it also was not purely policy.


We have various "BIO" certified food. Maybe it is time for "human" certified companies. I'll pay more for my bank account if I can resolve problems with a human.


That is an interesting idea, and one that I agree with. However, the issue that could arise is the premium companies could potentially charge for "human support". It could become something only the wealthy could afford. While I don't believe that's likely, it's not in the realm of impossibility.


It sounds reasonable, after all those support people have to be paid, too.


You shouldn't have to do that. Sure, customer support costs a lot of money if done properly but so does executive compensation. Customer support is usually the lowest paid and suffers the most abuse, but because of the decades of skimming from the top customers (and support) lose out.

Google just had a net income for the quarter of $18B. Why do we accept the tepid to non-existent support of these companies? How much support does $1B cover?


Depends where the support is and what level it is. With enterprise level support, not as far as you would think.

'Cheap' support is typically terrible to the point of being worse than a chatbot, generally due to the terrible pay and conditions. As support engineers get good they generally move to higher paying jobs leaving a dead sea effect at the lower pay scales.


Good idea, extend this to codebases as well. Certified 100% organic spaghetti for me, please.

And when it's undercooked I want the confirmed(tm) human to soothe me.


For banking this is already available -- private banking.


The biggest danger of AI is not that it becomes autonomous and escapes the hatch, it’s that humans put it everywhere in charge.

“Sorry judge, my whole plead was nonsense and I quoted law articles that didn’t even exist, but that’s just because I used ChatGPT” — actual lawyer who wasn’t even disbarred.


All current chatbots that I've dealt with have been terrible reimplementations of phone menus in text, completely unable to handle even a basic freeform question. Maybe the new wave based on LLMs will be significantly better, but I'm not holding out too much hope. Already with phone menus we get railroaded down paths convenient for the controlling entity, rather than being able to engage in a good-faith discussion.


For what it's worth, LLM powered chatbots are quite different from the chatbots that were popular 5 years ago and often feel much more natural.


The problem is they are unlikely to let the bot really do anything I can't already do on the website - ask for a refund, correct a payment, etc. Otherwise it would be easy to trick the bot and abuse it.

Without a human, you just get a powerless regurgitation of the FAQ and links that can't help in situations they didn't anticipate.


The bot can get the ticket set up to the point where a human takes over.

In support calls the confused user rarely has all the information they need to present to the person solving/finalizing the transaction and a bot can help reduce the human time needed.


Still universally useless though.


I'm actually looking forward to that future. Finally, no more waiting queues, and they're friendly all the time.


I am looking forward to it, but with different models that I don't think are quite there yet.

We have been trialing this with our support team for a while, and a lot of higher execs were ready to sign off and dump rather large sums on some models, but eventually we were able to convince them to delay; the bots are just too prone to convincing but wrong answers, and our buy-in from clients is way too polarized: either they uncritically believe the bot or they are overly skeptical of it.

the tech is very cool, but I don't think that the technology or humans are at a place yet where it's ready for full on use outside of very controlled situations. I could see it being very useful as an addition to search fields or to maybe monitor the user's search inputs/actions and based on what the user is looking up, show some context-aware prompts.

What I'd really like to see is a bot that is extremely skeptical and shows the user its skepticism in an unambiguous manner; classify the data and make an internal flag where if the bot's skepticism is above a certain threshold, it finds knowledge holders it's aware of to work through the bot's skepticism and never act on the information until the bot has lowered the skepticism value after checking.

right now my experience with the bots I've played with is that they either just shut down the conversation without advancing it or giving the user paths for research forward, or the bot confidently just pumps out any answer it makes that fits as a response for the given query, and I think we need the bots to show skepticism and explain to the user what this skepticism means (i.e., the user should be alerted that the bot isn't confident in an answer, why the bot isn't confident, and alternatives that the bot understands to be equally relevant or worth consideration).

it can still be polite, but the bots need to share when they're out of their league and work to correct it; I think people will actually appreciate it, and the bots are well suited to this position because they have no emotional stake in the game, so users can get as upset as they want that they don't get immediate agreement; the bot won't change its position just because the user is upset.
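
A sketch of that skepticism threshold as plain control flow (answer_with_confidence, escalate_to_human, and the threshold value are all hypothetical; how you actually estimate the confidence, whether from retrieval similarity, a self-rating prompt, or a separate classifier, is the hard open question):

    # Skeptical-bot sketch: every draft answer carries a confidence score, and
    # anything below a threshold is escalated to a human instead of acted on.
    from dataclasses import dataclass

    THRESHOLD = 0.75  # assumed cut-off; tune against real conversations

    @dataclass
    class Draft:
        answer: str
        confidence: float        # 0.0 - 1.0, however you choose to estimate it
        alternatives: list       # other plausible readings of the question

    def answer_with_confidence(question):
        # Placeholder: in practice this would run retrieval + an LLM and derive
        # a confidence score (e.g. from the top retrieval similarity).
        return Draft(answer="...", confidence=0.4, alternatives=["...", "..."])

    def escalate_to_human(question, draft):
        # Hypothetical hand-off: open a ticket / ping the relevant knowledge holder.
        print(f"ESCALATE: {question!r} (confidence={draft.confidence:.2f})")

    def respond(question):
        draft = answer_with_confidence(question)
        if draft.confidence >= THRESHOLD:
            return draft.answer
        # Below threshold: be explicit about the uncertainty, offer alternatives,
        # and route the question to a human instead of acting on the guess.
        escalate_to_human(question, draft)
        return ("I'm not confident I understood this correctly. "
                f"It might relate to: {', '.join(draft.alternatives)}. "
                "I've asked a human colleague to follow up.")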


The chatbot is better than the old IVR trees. I'd rather ask a chatbot to cancel my subscription or re-send a receipt than "push 7 to continue".


> and it’s already changing the Web that we know and love.

Nitpick, and clearly off topic, but right now I don't love "the web".

It's increasingly controlled by a handful of companies. They dictate what content is made visible (meta, google) or what email goes to the spam filter (ms, google).

Right now I don't love the web, far from it. It's a constant struggle to be heard even by the people who chose to follow your activity.

Essentially, most of my communication happens in real life or in private chats. (Also, have I said how messenger for business is terrible and unreliable?)

To me, something needs to happen to the web as it is today. I don't know what, I don't know how, but I certainly welcome change.


Provide an RSS feed. The only thing that'll stop people following that is Google Safe Browsing. You'll still need to provide all the other methods for letting people follow you, but if you advertise your RSS feed, people might start using it.

(If you're feeling technical, you could set up an ActivityPub bridge, to let people follow you from social media too. If you're using Wordpress: https://wordpress.org/plugins/activitypub/)


I'm considering adding an RSS feed, but I doubt any of my clients even know what it is.


Since about every top-level comment is negative towards chatbots:

We had a chatbot at work that actually was great. For me it felt a lot better than searching Confluence, and it could also answer questions from dynamic data, like how many vacation days I had left or how many hours I was ahead or behind.

Thanks to some smart use of technology behind the scenes IIRC I could ask it in normal language and most of the time it would understand.


To be fair, if the alternative was "searching confluence", almost anything is better than that, whether it's a chatbot or a third party search engine slapped on top of your confluence data. Confluence's search is an absolute joke, and a bad one at that.


Confluence search isn't that bad. What is bad is the data that is fed into it. It has no way of understanding what is more relevant: some doc from 10 years ago or something related from recently.


So what you're saying is that Confluence search is bad, because it has no understanding of relevance, something we figured out how to do even before the PageRank algorithm got published in 1998?

Search is meant to help humans find what they need to find. If your search function doesn't (and Confluence's sure as hell doesn't) then it's bad. And arguably, plain mislabeled because it's not performing searches. At best, it's grepping.


PageRank is based on people linking, at the huge scale of the entire internet.

It’s difficult to infer importance based on a couple of thousand documents at most.


No, it isn't. It was, back in the 90s. But full text linked document indexing was a fully solved problem by 2012, and Confluence put genuine effort into not using any of the existing solutions and instead went with "You get grep. Not search. Sucks to be you".


Any idea how it was made? I'd like to do the same.


A mix of some GCP natural language understanding and some coding from some of my colleagues (I have some really smart colleagues).

I can't remember exactly, but I know it was a couple of years or more ago so pre ChatGPT / Bard and all that.


A customer will use your chatbot because your website's UI is confusing. They want to get some info and they can't figure out how to. Before we would use google for this: service name + phone number, or service name + cancel subscription.

9 times out of 10 the website doesn't want to give out its phone number easily or let you cancel your subscription. Unless you want customers to perform the actions you are hiding in the first place, what's the chatbot for again?

Note: I worked in the chatbot frenzy and had to tell several clients there wasn't much we could do unless they were willing to actually help customers.


At least for the businesses that deal with people, the real agenda for chatbots is to make the UIs (web and mobile apps) go away as the first layer of contact.

A user should be able to talk with a business over SMS (or similar) chat or phone call over a single identity.

Web and mobile apps are just the 2nd/3rd layer utilities to support the primary mode of communication.

Business couldn't do this earlier because the language understanding accuracy wasn't sufficient. The large language models (LLMs) solved that limitation.

The small reduction in human-agent interactions that bots can bring about through this experience gimmickry is the cherry on top. The percentage of deflection is getting a bit bigger with the better large language models (LLMs).

The irritating chatbot widget that sits in the bottom right corner is a stopgap until the provisioning of a single phone number and communication over it is streamlined.

Last but not least, the title is misleading. He is not building an open source chatbot. He is just saying to build chatbots using open source libraries only (instead of closed source/commercial tools) to foster community and faster AI progress.


> "set up our own virtual server inside Mozilla’s existing Google Cloud Platform (GCP) account. In doing so, we effectively committed to doing MLOps ourselves. But we could also move forward with confidence that our system would be private and fully under our control."

How is setting up a server inside Google's infrastructure "private and fully under Mozilla's control" ?


relative to offloading your ML stuff to some third-party API, using a VPS keeps things private and under your control.

explaining how to self-host on bare metal is not really within scope for an article on how to build a chatbot, and trying to pretend a VPS on google cloud is insecure is just silly.


GCP complies with various industry standards, regulations, and certifications that attest to its security and privacy controls. These certifications can give you added assurance that your data is being handled according to recognized standards. Here are some of the common certifications and standards you might look for:

ISO 27001: An internationally recognized standard for information security management systems (ISMS). GCP's compliance with this standard demonstrates its commitment to information security.

ISO 27017: Specific to cloud security, this certification focuses on the controls specific to cloud service providers.

ISO 27018: This standard is related to the protection of personally identifiable information (PII) in public clouds.

SOC 2: GCP's SOC 2 report can provide assurance about the controls they have in place related to security, availability, processing integrity, confidentiality, and privacy.

HIPAA: If you're dealing with healthcare information, you'll want to ensure that GCP is compliant with the Health Insurance Portability and Accountability Act (HIPAA).

GDPR: For operations in Europe or with European citizens' data, compliance with the General Data Protection Regulation (GDPR) is crucial.

FedRAMP: For U.S. government customers, GCP's Federal Risk and Authorization Management Program (FedRAMP) compliance might be essential.

PCI DSS: If you're handling credit card information, Payment Card Industry Data Security Standard (PCI DSS) compliance is crucial.

Ensure that the services you plan to use within GCP are covered by the relevant certifications for your industry or use case. These certifications are typically available on the Google Cloud website and can also be provided by Google's sales or support team if you need official documentation.


Thanks PaLM 2!


For anyone curious to see how to build one of these vector database chat models in action, I built one (semi) from scratch in a Colab environment, with inference using Llama 2, on my live stream last week: https://www.youtube.com/live/kBB1A2ot-Bw?feature=share

The big challenge with this setup is doing the semantic similarity search at scale. Pinecone has some good docs on their data structures for scaling large vector databases.
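
For reference, a sketch of pushing that similarity search into Pinecone instead of an in-memory array (assuming the classic pinecone-client API and sentence-transformers; the index name, API key, environment, and dimension are placeholders):

    # Move the nearest-neighbour search into a managed vector DB once the
    # corpus outgrows an in-memory array.
    import pinecone
    from sentence_transformers import SentenceTransformer

    pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENV")
    if "docs" not in pinecone.list_indexes():
        pinecone.create_index("docs", dimension=384, metric="cosine")
    index = pinecone.Index("docs")

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings

    def add_chunks(chunks):
        # Upsert (id, vector, metadata) triples; Pinecone handles the ANN index.
        vecs = embedder.encode(chunks).tolist()
        index.upsert(vectors=[(f"chunk-{i}", v, {"text": t})
                              for i, (v, t) in enumerate(zip(vecs, chunks))])

    def search(question, k=5):
        q = embedder.encode(question).tolist()
        res = index.query(vector=q, top_k=k, include_metadata=True)
        return [match.metadata["text"] for match in res.matches]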


was waiting for Mozilla to get in on the game and develop their own LLM. given that organization's mission ("Keep the internet open and accessible to all"), I think it makes perfect sense. not sure if they have the resources or will to do this


> Machine learning ops (aka “MLOps”) is a growing discipline for a reason: deploying and managing these apps is hard. It requires specific knowledge and skills that many developers and ops folks don’t yet have.

What are some of the things that make this tough or different? I was under the impression that you're still running web APIs that load a compiled asset (the model). It doesn't seem much different in that way.


Re langchain:

> we were able to accomplish most of our needs with a relatively small volume of Python code that we wrote ourselves

Every single time someone posts a trip report building something on LLMs I love to ctrl+f to the part where they tried and abandoned langchain.


I've always been curious about chatbots, and now I can't wait to dive into this project using the recommended tools and frameworks. It's great to see Mozilla promoting open-source projects like this, and I'm excited to see what kind of unique chatbot I can create from this.


FWIW, OpenAI's own chatbot on platform.openai.com and other pages uses Intercom, which also powers their FAQs.


Does it use Intercom’s interface or Intercom’s AI to answer questions? There’s a huge difference.


Just don't. Please.

Don't.

If you can't help yourself, at the very least put in a bypass code word. https://xkcd.com/806/


No, I don't


Let me fix that for you.

“Here are ten reasons why I don't build my own chatbot, and why you shouldn't either.”


> For those who don’t know, Hugging Face is an influential startup in the machine learning space that has played a significant role in popularizing the transformer architecture for machine learning

This is a crazy point of view



