Hacker News | bsdpython's comments

I did not get into it in the write-up (it's very short), but in general there is a never-ending need to build new software and systems. Those efficiencies will simply allow more to be done and more software to be built. That has been the history of software so far.


Google Cloud offers many different options to build Retrieval Augmented Generation (RAG) powered applications. This includes discrete components that comprise RAG solutions (embeddings, vector search, LLMs) but also includes options that combine multiple steps or even the entire RAG application in a single service. The best option for you will depend on factors such as your use case, engineering expertise, existing tech stack and future needs.

Let's start with a set of use cases and design a solution architecture using the most appropriate options. After that, we'll go through a detailed breakdown of the full list of services, with pros, cons and recommendations for when to use each.
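To make those discrete components concrete, here's a toy sketch of the core RAG loop — embed, retrieve by similarity, then ground the prompt in the retrieved context. The bag-of-characters embedding and the function names are purely illustrative stand-ins for a real embedding model and vector search service, not any Google Cloud API:

```python
import math

def embed(text):
    # Toy bag-of-characters vector; a real system would call an
    # embedding model endpoint instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query embedding;
    # stands in for a vector database query.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # Ground the LLM prompt in the retrieved context.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"
```

In production you would swap `embed` for an embeddings API and `retrieve` for a vector database, and send the built prompt to an LLM — but the control flow stays the same regardless of which managed service handles each step.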


Building this agent provides a nice lesson in why AI agents are so exciting but also so difficult to scale.

Besides the write-up, you can try out the live demo (https://tinyurl.com/trendr-bot) and view the source code (https://github.com/brettdidonato/trendr-bot/tree/main).


I spent most of the past year presenting AI topics to executives, engineers and data scientists from companies that are Google Cloud customers. Last week was a new type of challenging audience: 2nd graders.

I spoke to three different 2nd grade classes at my 7-year-old daughter’s elementary school. For the event I created a presentation and an AI-powered website for the kids to create their own stories in a safe manner. The kids seemed to have a great time, with one kid even shouting out “I love AI!”

In the end, I directed students interested in AI to first focus on education and computer fundamentals: reading, writing, math, critical thinking and using computers to create things (but without using AI). This, along with getting the kids excited about technology in general, is the most important takeaway.

I have included a link to the presentation and code for the kids' storytelling website within the article.


A short write-up highlighting a few overhyped AI products. I’m starting to see a bit more thoughtful pushback on the unrestrained hype, which is a good sign that this AI cycle is maturing.


I was just talking about this with a friend of mine. I posed the question "What tech should I learn next?" and he said "Well, you should short AI, it's in a huge bubble... and sooner or later people are gonna realize it only does a somewhat half-assed job of what they claim it can do." I think he's onto something.


How do you know which LLM is the best option for your particular use case? I published an open-source repo to evaluate models against your own set of prompts across Anthropic, Google and OpenAI. Besides model evaluation, it can also be useful for prompt engineering, API response time benchmarking and production application monitoring.
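The core of such a harness can be sketched in a few lines. This is my own illustrative simplification, not the repo's actual interface: providers are passed in as plain callables, and the stub lambdas in the usage example stand in for real Anthropic, Google and OpenAI client calls.

```python
import time
from typing import Callable, Dict, List

def evaluate(prompts: List[str],
             providers: Dict[str, Callable[[str], str]]) -> List[dict]:
    """Run each prompt against each provider callable, recording the
    response text and wall-clock latency for side-by-side comparison."""
    results = []
    for name, ask in providers.items():
        for prompt in prompts:
            start = time.perf_counter()
            response = ask(prompt)
            latency = time.perf_counter() - start
            results.append({
                "provider": name,
                "prompt": prompt,
                "response": response,
                "latency_s": latency,
            })
    return results

if __name__ == "__main__":
    # Stub providers; replace with real API client calls in practice.
    providers = {
        "anthropic": lambda p: f"[claude-stub] {p}",
        "openai": lambda p: f"[gpt-stub] {p}",
    }
    for row in evaluate(["What is RAG?"], providers):
        print(row["provider"], f"{row['latency_s']:.5f}s")
```

Because each provider is just a callable returning a string, the same loop covers model comparison, prompt iteration and latency benchmarking; scoring the responses is left to you.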


Sometimes I feel like I'm seeing something completely different from what is described in the popular narrative. This is a good example. I wrote a post detailing:

* What does Sora actually do?

* What does it not do?

* What will it likely be useful for?

* And finally, what will be needed to actually replace the majority of video generation use cases?

https://shorturl.at/auIK0


Can We Prevent LLMs From Hallucinating? And if not, what implications does this have for the future of AI? Let's talk about it.


I pay for YouTube premium to turn off the ads. It's worth every penny.


Amazon also refuses to give feedback, and then spams you with emails asking you to provide feedback on the interview process. That tells you a lot about what they're like as a company.

