gitroom's comments | Hacker News

thank you so much!


yay


It's the same as the cloud offering! :)


How will you distinguish Laminar as "the Supabase for LLMOps" from the many LLM observability platforms already claiming similar aims? Is the integration of text analytics into execution traces your secret sauce? Or, could this perceived advantage just add complexity for developers who like their systems simple and their setups minimal?


Hey there! Good question. Our main distinguishing features are:

* Ingestion of OTel traces (rough setup sketch below)

* Semantic events-based analytics

* Semantically searchable traces

* High performance, reliability and efficiency out of the box, thanks to our stack

* High-quality frontend, which is fully open-source

* LLM pipeline manager, first of its kind, highly customizable and optimized for performance

* Ability to track the progression of locally run evals, combining the full flexibility of running code locally with no need to manage data infra

* Very generous free-tier plan. Our infra is so efficient that we can accommodate a large number of free-tier users without scaling it much.

And many more to come in the coming weeks! One of our biggest next priorities is high-quality docs.

All of these features can be used as standalone products, similar to Supabase. So, devs who prefer to keep things lightweight might just use our tracing solution and be very happy with it.
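
Since ingestion is standard OTel, you can point any OTLP exporter at Laminar without a proprietary SDK. A rough sketch using the OpenTelemetry JS SDK (the endpoint URL and auth header below are illustrative placeholders, not documented values):

    import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
    import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
    import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

    // Point a standard OTLP/HTTP exporter at the trace ingestion
    // endpoint. URL and header are placeholders, not documented values.
    const exporter = new OTLPTraceExporter({
      url: "https://<laminar-host>/v1/traces",
      headers: { Authorization: "Bearer <project-api-key>" },
    });

    const provider = new NodeTracerProvider();
    provider.addSpanProcessor(new BatchSpanProcessor(exporter));
    provider.register();

    // From here, instrumented code exports spans as usual.
    const tracer = provider.getTracer("llm-app");
    const span = tracer.startSpan("llm.call");
    span.setAttribute("llm.model", "gpt-4o");
    span.end();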


What were the primary reasons that made students who used ChatGPT do poorly on math assessments, even though they had worked correctly through a greater number of practice problems?


They didn’t have to figure out how to solve the problem. Instead of struggling a bit, which is where the learning happens, they would likely go to ChatGPT for the answer. When the answer bot was taken away, they weren’t prepared to think about how to solve the problem and work it out.

I’ve noticed this even using Copilot in VS Code. I rarely use it, but when I do pull it out, at the first hint of actually having to think about how to do something, my brain wants to ask Copilot instead. It’s like an off switch gets flicked when there’s an easy button available. If I figure it out on my own, I’ll know what to do next time I run into a problem like that… if I use Copilot, I’ve learned nothing, other than: next time I run into this, use Copilot.

It’s a crutch when it comes to learning.


Exactly. I stopped using it entirely, and for a while after quitting I noticed that I would write something and then pause, expecting Copilot to take over. It felt like my brain wasn't really engaged.

It's anecdotal, but I feel a lot better after giving it up, and I think I can do a lot more when I can fully reason about the problem after poking at it from different angles.


> they had worked correctly

ChatGPT worked correctly for them but they learned nothing. It should be pretty obvious that you don’t learn by copying answers.


This is an impressive innovation! Mem0 directly addresses a big problem that many of us have had with present large language models. It seems to me that the addition of a stateful memory layer potentially allows for LLMs that are not only more intelligent but also more efficient and user-friendly, because they can be tailored to individual users. And your design for an open-source, hybrid memory system also seems like a big step forward for the developer community both for the inventiveness of the system itself and for the potential it has for serving as a model for whatever comes next after LLMs.


Thank you!


Thank you so much!


It was Gitroom before; it will be renamed to Postiz soon!


What about external libraries like dayjs?


I guess the point is that you won’t/shouldn’t need an external lib for basic date stuff.
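
For "basic date stuff", the built-ins already cover a lot. A quick sketch (plain Date plus Intl, available in any modern JS runtime):

    // Basic arithmetic and formatting with built-ins only.
    const now = new Date();
    const nextWeek = new Date(now.getTime() + 7 * 24 * 60 * 60 * 1000);

    const fmt = new Intl.DateTimeFormat("en-US", {
      year: "numeric", month: "long", day: "numeric",
    });
    console.log(fmt.format(nextWeek)); // e.g. "May 12, 2025"

    // Human-friendly relative times, no library needed.
    const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });
    console.log(rtf.format(-1, "day")); // "yesterday"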

