
what sorta moat do you have now that openai’s got eyes?


We expected OpenAI's announcement of the multi-modal models and built our platform to complement it. In the modern LLM/agent space, even OpenAI and Google have trouble defining their moat, so as a bootstrapped startup we are in no position to postulate one.

Instead, we aim to build a growing advantage through broad user adoption, which exposes the agents and the platform to the ever-growing long tail of unexpected edge cases. And boy, does web design worldwide have weird edge cases. But this is good and helps build robustness. We have one agent per critical workflow to accommodate that. Even so, our sign-in agent struggles with icon depictions; we purposefully didn't debug it or invest in computer vision, since we knew multi-modal was around the corner.

Instead, we invested in traditional engineering and ensured the table stakes and workflows were robust and scaled well. We use it ourselves multiple times per day on every pull request to ensure it feels natural.

But the most important thing is to find the users who help you build the leading tooling. Feedback is our oxygen, and we make sure there is a feedback button two clicks away.

Imho, this accumulation of smaller gains is as close to an advantage as we can get, but what's your opinion on it?


So the number of lines of code is what makes a library a joke? One of the most commonly used Haskell libraries is only 25 lines.

http://hackage.haskell.org/packages/archive/forkable-monad/0...

Also, this doesn't reimplement parts of gevent; it implements things on top of, and using, gevent.


Yes, that's exactly what I'm saying. If external code is less than a few hundred lines, it would be better suited as a tutorial, and people would be better off implementing those lines themselves. And yes, that's only my opinion.


Do you have any reasoning behind that opinion, or is it just unfounded religious zeal?

Why should somebody who wants to use threading have to know how to write a preemptive scheduler? If you want a tutorial in addition to the library, that's fine, but many good tutorials also release their code as libraries. In the open-source world, releasing code as a library does not mean keeping people from understanding how that code works. Why should people HAVE to learn how the code works in order to use it, though?
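
For a sense of scale, here is a toy sketch, and explicitly not this library's actual code, of what even a minimal cooperative round-robin scheduler over raw greenlets looks like; a genuinely preemptive one has to add timed interruption on top of something like this:

    # A toy cooperative round-robin scheduler over raw greenlets -- a hedged
    # sketch of the machinery a user shouldn't have to write just to run two
    # tasks "at once". Assumes the greenlet package is installed; a real
    # preemptive scheduler also needs timed interruption layered on top.
    from collections import deque
    import greenlet

    class Scheduler:
        def __init__(self):
            self.ready = deque()

        def spawn(self, fn, *args):
            # each task runs in its own greenlet; its parent is the main greenlet
            self.ready.append(greenlet.greenlet(lambda: fn(self, *args)))

        def yield_(self):
            # a task voluntarily hands control back to run()
            greenlet.getcurrent().parent.switch()

        def run(self):
            while self.ready:
                g = self.ready.popleft()
                g.switch()
                if not g.dead:          # re-queue tasks that haven't finished
                    self.ready.append(g)

    def task(sched, name):
        for i in range(3):
            print(name, i)
            sched.yield_()

    sched = Scheduler()
    sched.spawn(task, "a")
    sched.spawn(task, "b")
    sched.run()                          # a 0, b 0, a 1, b 1, a 2, b 2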


This is a library that can be used to supplement any implementation.

If I'd had the option to switch us to stackless easily, and I could guarantee it was as fast, worked with all the libraries, and was as stable, I probably wouldn't have written this. I imagine that there are a lot of people in the same boat, where switching interpreters isn't really an option.


Technically, no: you can't have two things happening concurrently but not finishing in the same time period.

But what counts as a "thing" and how long the period is are both up for grabs. If we choose the period to be anything longer than a millisecond, then this library will finish executing both things within it.

http://stackoverflow.com/questions/1050222/concurrency-vs-pa...
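
To make the time-period point concrete, here is a minimal sketch (assuming gevent is installed; not code from this library) of two tasks interleaving on a single thread and both finishing well inside a small wall-clock window:

    import gevent

    def task(name):
        for i in range(3):
            print(name, i)
            gevent.sleep(0)   # yield so the other greenlet can run

    # both greenlets interleave on one OS thread and finish almost
    # instantly -- concurrent within the chosen period, not parallel
    gevent.joinall([gevent.spawn(task, "a"), gevent.spawn(task, "b")])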


No offense taken - both of these are obviously quite delicate.

There is a version in the history that used Greenlet instead of gevent, which was potentially a bit less delicate, but it required wrapping the main file and didn't work with time.sleep, and I didn't feel it was worth writing my own locks, semaphores, mutexes, pipes, and whatnot.
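
For context, the thing that makes gevent the easier base is its monkey patching: after patch_all(), blocking primitives such as time.sleep, sockets, and locks cooperate with gevent's scheduler instead of freezing every greenlet. A minimal sketch of gevent's behavior (not this library's):

    from gevent import monkey
    monkey.patch_all()        # patches time.sleep, sockets, locks, etc.

    import time
    import gevent

    def worker(name):
        time.sleep(0.1)       # now yields to other greenlets instead of blocking them
        print(name, "done")

    gevent.joinall([gevent.spawn(worker, "a"), gevent.spawn(worker, "b")])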


waitAll is gone. Before I put it in any package managers, it will get a style-guide pass. This is currently on version -1.0.0.


> version -1.0.0.

Version negative one? I don't think I've ever seen that before. Usually, the very earliest versions of software are numbered like 0.0.1 or something like that.


The "-" here actually negates the ordering of the numbers in the list: -1.0.0 === 0.0.1.


> The "-" here actually negates the ordering of the numbers in the list: -1.0.0 === 0.0.1.

Well, I've certainly never seen that before.


It should be written 0.0.1[::-1]


The lack of options for parallel (and non-parallel) concurrency is, along with other issues, feeding the high defection rate to many other languages.


We've had processes, threads, and greenlets for a while now... if anything, the problem isn't that there are no options, but that there are too many options that require understanding to choose and apply.

Many of the people complaining about this issue don't have a demonstrated problem and could try any simple approach first (if the point was not, from the beginning, just to slam Python in favor of something else).
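
For concreteness, a rough sketch of that menu of options, all doing the same IO-bound job; the point is not that the options are missing, but that choosing one means understanding the GIL, pickling rules, and monkey patching (the gevent variant assumes gevent is installed):

    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def fetch(n):
        time.sleep(0.1)       # stand-in for an IO-bound call
        return n

    if __name__ == "__main__":
        # 1. threads: fine for IO-bound work; the GIL blocks CPU-bound parallelism
        with ThreadPoolExecutor(max_workers=4) as pool:
            print(list(pool.map(fetch, range(4))))

        # 2. processes: real parallelism, but arguments and results must be picklable
        with ProcessPoolExecutor(max_workers=4) as pool:
            print(list(pool.map(fetch, range(4))))

        # 3. greenlets via gevent: cooperative, needs monkey patching for IO libraries
        # from gevent import monkey; monkey.patch_all()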


This type of response is why I gave up on Python entirely. Not to pile on you, pekk; it isn't your fault, but it is a tone... defensive apologist... "first of all there is no problem, and if there was a problem... it is that Python is too awesome"

As someone who has had to ship stuff using multiprocess & gevent to actually meet real-world scaling needs -- and integrate them with C code while communicating with a C++ application via ZMQ (in-process, by sharing the object) -- the sad fact is that once we started to tackle really hard problems in Python that weren't pre-solved by a nice library, all those early advantages fell away and we craved the blessed simplicity of C++ (note: sarcasm, C++ isn't simple, but it was far simpler than the Frankenstein's monster we built).


MetaCosm made the point quite eloquently, but let me juxtapose "too many options that require understanding to choose and apply" with Go, which has exactly one option, which requires no special understanding to choose and apply, and gives one exactly what one wants in basically any situation.

I am not really a Go proponent. I'm a Haskell user, personally, and Haskell, like Python, has three or four options that require understanding to choose and apply. The difference is that in Haskell, each of them actually gets you real parallelism, no fine print necessary. I bring it up to point out that the situation with Python is not a good example of what you might call "intrinsic complexity" (as you seem to be implying), or the Go solution would not be so much simpler; nor is it really an example of there being many better higher-level abstractions, or more of them would resemble Haskell's high-level options. It is simply a bad situation that produces many poor kludges, and the mentality that everything is fine is (in my opinion) feeding a substantial defection rate to Go.
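
To spell out the "fine print" in Python's case: for CPU-bound work, threads don't buy any parallelism because of the GIL, while processes do. A small, hedged sketch of the contrast on stock CPython:

    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def burn(n):
        # CPU-bound busy work
        total = 0
        for i in range(n):
            total += i * i
        return total

    def timed(executor_cls):
        start = time.time()
        with executor_cls(max_workers=4) as pool:
            list(pool.map(burn, [2_000_000] * 4))
        return time.time() - start

    if __name__ == "__main__":
        print("threads:  ", timed(ThreadPoolExecutor))    # roughly serial under the GIL
        print("processes:", timed(ProcessPoolExecutor))   # genuinely parallel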


Fewer explicit yields & no monkey patching necessary for IO-performing libraries.


As a former Python user, I wish this library had existed 18 months ago -- it would have helped with some of the nasty cases you can get caught on with gevent.

