Hacker News | mirkodrummer's comments

I’m going to put a memo on the calendar for 2029, and will then email Dr Kurzweil because I want my best years back… Seriously, I can’t believe the madness that’s going on, and it seems it’s getting worse day by day.

“I've launched 42 dev tools on Product Hunt in the last 2 years — or maybe 55? To be honest, I've lost count.” That is a valid reason NOT to launch a product there. Imagine the bloat level and how difficult it would be to stand out.

That's the point of the story: keep it simple, keep launching.

What have we solved? Layout? I still need to go back and re-read some shady aspects of flex and grid. Sticky positioning is one of the most useful features ever; yet if for any reason it doesn't work, it's very hard, almost impossible, to debug and understand why (does any parent up the tree have an overflow?), and good luck finding any good documentation explaining the corner cases. Without dev tools and a lot of wasted clicks it's impossible to debug CSS from source code alone; you have to inspect the element at runtime. Most native inputs still have very limited styling capabilities (please don't tell me that's a good thing), and we waited years for native modals and popovers. Last time I checked, you still can't animate from display: none to block together with height. I can go on for days; my point is, how did we become so accustomed to all the problems CSS has and lose any bit of criticism?
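For the sticky case specifically, a runtime check can save some of those wasted clicks. A minimal sketch (the `findStickyBlockers` name and the injectable `getStyle` parameter are mine, for illustration) that walks the ancestors of a `position: sticky` element looking for scroll containers, which are the usual reason sticky silently fails:

```javascript
// Walk the ancestors of a sticky element and collect any that create a
// scrolling context (overflow other than "visible") — the common culprit
// when position: sticky has no visible effect.
function findStickyBlockers(el, getStyle = (n) => getComputedStyle(n)) {
  const blockers = [];
  for (let p = el.parentElement; p; p = p.parentElement) {
    const { overflow, overflowX, overflowY } = getStyle(p);
    if ([overflow, overflowX, overflowY].some((v) => v && v !== "visible")) {
      blockers.push(p);
    }
  }
  return blockers;
}
```

In a browser console you would call something like `findStickyBlockers(document.querySelector(".my-sticky"))` and inspect the returned elements.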


We can now transition to height: auto: https://css-tricks.com/transitioning-to-auto-height/
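One widely used technique in this space (not necessarily the exact one the linked article describes; class names here are illustrative) is animating a grid track from 0fr to 1fr, which effectively transitions to the content's intrinsic height without knowing it in pixels:

```css
/* Wrap the collapsible content in a one-row grid and animate the track. */
.collapser {
  display: grid;
  grid-template-rows: 0fr;
  transition: grid-template-rows 300ms ease;
}
.collapser.open {
  grid-template-rows: 1fr;
}
.collapser > .inner {
  overflow: hidden; /* clips the content while the row is at 0fr */
}
```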


What’s the incentive for websites to let Kagi and others index their content if LLMs in search show the relevant information right away? Wouldn’t something like Perplexity AI make more sense then? Or perhaps a better application of LLMs to search?


I haven't used Kagi's Quick Answer very often yet, but when I do, it always cites its sources and I often end up clicking into at least one of the sources to look for more detail or context.


Bingo. I nearly always use the quick answer, and then will use the cited sources to click onto the page to either read more or verify that the summary was accurate (and it always has been in the 300+ times I've used it).


I've seen it cite sources, and then say the opposite of what the citations say.

If you're going to use this feature, always check the citations.


If you publish to make information known, then your incentive is that it helps spread that information to people who may not otherwise visit your site. If you are trying to make money off of search engine traffic then you might not like this much at all. I think most people would rather not be pointed to those sites in the first place, so it’s a win-win if they block crawlers.


This is a bad take. I don't make money off search engine traffic (beyond the occasional donated dollar), and yet I would much rather[0] that AI not visit my site.

Imagine for a second a world where instead of publishing directly to your own website (home), you publish into the LLM's knowledge base. Nobody comes to you for information, they come to the LLM. Nobody cares about you, or the work you put in. Your labour is just a means to their end. To some extent, you could argue that Wikipedia works in this way, but it really doesn't. The work you put in is reproduced verbatim, and the work is collaborative. You get the joy of seeing your writing being used to help other people in a very direct sort of way, as opposed to being aggregated, misinterpreted and generally warped by a non-intelligent LLM.

In other words, you cannot possibly expect others to want to work in a sweat shop, toiling away to provide you with instant gratification. We must leave room for human expression.

[0]: https://boehs.org/llms.txt


Okay, I’ve imagined the world you described. I don’t care for it. I also don’t think that’s a likely outcome of LLMs. Why would somebody continue “toiling away to provide you with instant gratification” if there’s nothing in it for them?


If you are selling something on your website, you are just as happy for people to find your information whether it is through an LLM or a search engine.

If you are publishing information for free, you don't care how people access it.

If you are publishing information without selling anything but want to make a living from it, you should paywall it or sign with a publisher.


Swift is not macOS-only; it’s actually cross-platform. Maybe you’re referring to the macOS/iOS Swift SDKs, which are platform-specific.


How do you read text out of a bunch of pixels rendered on a canvas? Yes, the DOM is eventually rendered by Skia, but it still exposes an API for querying the underlying structure.


Flutter also exposes the semantics tree via the DOM, so you can actually still target elements. But yeah, if you want to target the WebGPU elements, there needs to be a new API for that, which I believe there will be in the future, because a lot of WebGPU renderers are coming out, in Dart, Rust, C#, etc.


There certainly is a need for an alternative to Google Search, maybe one not ads-based. It’s something that’s often discussed here on HN and elsewhere, whether it be Brave Search, DDG, Kagi Search, or whatever. The thing is, I doubt the next “Google Search killer” will be a traditional search engine at all, like this Mojeek, and I can’t see something like Kagi attracting the masses (who never paid for search) with a paid subscription; it’s very clear to me that theirs is a niche business model. I don’t want to use today’s inflated buzzword, but my feeling is the only way to kill Google Search is not to compete with it directly but to build something disruptive, the way an LLM is. Direct competition is a dirty and expensive business, and sooner or later you must turn profitable; whom do you think the company will turn against in order to make a profit? The funny thing is, I believe Google knows this and is jeopardizing its own business by cramming Gemini into search.


> I can’t see something like Kagi attracting the masses

For me it’s less about paying and more about having to log in and attach a payment profile. I’m sure the creators of Kagi are nice people, I just don’t trust whoever they may someday sell it to.


I pay for their highest package on one account, which hardly has a search run with it, and just make new trial accounts in VMs for my searching generally. If they ever plug the hole, I stop paying and move on. I am paying the max to tip them, maintain privacy, and not be a mooching degenerate.


Although niche, and not aiming to be a Google killer, Kagi is profitable (according to a blog post from May this year: https://blog.kagi.com/what-is-next-for-kagi#1 ). So, who knows. Other search engines may actually live in a market owned by Google...


They also mention that “there are cases where browser APIs are backed by optimized native implementations that are difficult to compete with using Wasm … the team saw nearly a 100 times speedup of regular expression operations when switching from re2j to the RegExp browser API in Chrome”. If so, how do they call RegExp from WasmGC, or vice versa? A 100x speedup for native web APIs is not something you can ignore; that’s the important lesson here: not everything compiled to Wasm will result in a speed gain.


You can call from Wasm back into JS; I believe some WebGL games do that.


Yep it's a bi-directional communication layer. You can even have shared memory.
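As a concrete sketch of that boundary (the module, the import name `sq`, and the export name `run` are all invented for illustration): a tiny hand-assembled Wasm module that imports a JS function and calls back into it from its exported function, the same mechanism a toolchain could use to reach Chrome's native RegExp from Wasm:

```javascript
// Hand-assembled binary for the equivalent of:
//   (module
//     (import "env" "sq" (func $sq (param i32) (result i32)))
//     (func (export "run") (param i32) (result i32)
//       local.get 0
//       call $sq))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f,       // type: (i32) -> i32
  0x02, 0x0a, 0x01, 0x03, 0x65, 0x6e, 0x76, 0x02,       // import "env" "sq"
  0x73, 0x71, 0x00, 0x00,
  0x03, 0x02, 0x01, 0x00,                               // 1 func, type 0
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01, // export "run"
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x20, 0x00, 0x10,       // body: local.get 0;
  0x00, 0x0b,                                           //       call $sq; end
]);

// The JS side supplies the import; the Wasm export calls back into it.
const instance = new WebAssembly.Instance(
  new WebAssembly.Module(bytes),
  { env: { sq: (x) => x * x } }
);

console.log(instance.exports.run(7)); // Wasm calls the JS sq -> 49
```

Real string-heavy APIs like RegExp additionally need the strings copied or decoded across the boundary (or a proposal like JS string builtins), which is where some of the overhead lives.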


I'd be really curious to see the same benchmark applied to re2j on a native jvm, contrasted with those same optimized chrome internals. 100x is in the right ballpark for just a core algorithmic difference (e.g., worst-case linear instead of exponential but with a big constant factor), VM overhead (excessive boxing, virtual function calls for single-op workloads, ...), or questionable serialization choices masking the benchmarking (getting the data into and out of WASM).


The difference is primarily algorithmic. Java on WasmGC is about 2x slower than Java on the JVM. The remaining 50x is just Chrome's regex impl being awesome.


Perhaps we should have more optimized web APIs to use? Maybe for creating optimized data structures with strong typing? Not just the typed array types, but also typed maps.


Don't forget that the author of core-js (a library, not a service like polyfill, but with almost the same goal) is tired of open source as well: https://github.com/zloirock/core-js/blob/master/docs/2023-02...


To me, Microsoft, Apple, Google, and Facebook all seem, to one degree or another, totally out of control. They seem so careless of any short- or long-term consequence that I wonder whether there is any accountability at all, or if it's just a shitshow of who performs better and gets bonuses.


They’re all monopolies. We just need to formally redefine the terminology around antitrust to punish the anti-competitive nature of all these tech giants.


Remember the MS case from the '90s? It didn't bring the proposed split; instead, end users got an option to "hide" various software. The company kept its dominant position and is abusing it as we speak. Even the browser-choice window enforced by the EU didn't change much, IMO. Sure, Firefox managed to cut out a slice of market share, but it lost it to Chrome, which Google heavily "promoted". In the end, we're living in a Chromium-dominated reality where even Microsoft uses it.

I'd like to believe that we can still find politicians and government officials eager to work for the public interest and against the wealth of the corporations and their monopolies. But it seems almost impossible, considering the lack of antitrust proceedings significant and damaging to the monopolies in the last 20 years. I'd assume there are deals behind the public eye where governments turn, well, a blind eye, perhaps getting something valuable in return.

