As a kagi early adopter… why would I bug report on a feature I actively avoid using?
I can totally recommend search to anyone, but I agree with others in this chat that most toys feel beta. I’m glad to have them but can’t recommend them.
For maps, your goal of being ad free goes against what I need from maps search. 90% of the time I search for restaurants, museums, businesses, opening hours, phone numbers of various local shops. People add that data to Google, and not to many other maps services :(. That is where they advertise how to be contacted. Addresses and directions are really secondary to a maps search.
"enabling frame pointers is a 1-2% performance loss, which translates to the loss of about 1 or 2 years of compiler improvements"
Wait, are we really so close to the maximum of what a compiler can optimize that we're getting barely 1% performance improvement per year from new versions?
As a part-time compiler author I'm extremely skeptical we're getting a global 1–2%/yr. I'd have thought more like a tenth to half that? I've not seen any numbers, so I'm just making shit up.
However, for sure, if compiler optimizations disappeared, HW would pick up the slack in a few years.
There’s likely a lot of performance still on the table if compilers were permitted to change data structure layout, but I think doing this effectively is an open problem.
Current compilers could do a lot better with vectorization, but it will often be limited by the data structure layout.
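To make the layout point concrete, a minimal C++ sketch (the types and names are made up for illustration). The compiler can't legally turn the first layout into the second on its own, because layout is observable through pointers, sizeof and the ABI:

    #include <cstddef>
    #include <vector>

    // Array-of-structs: each x value sits 16 bytes from the next one.
    struct ParticleAoS { float x, y, z, mass; };

    // Struct-of-arrays: all x values are contiguous in memory.
    struct ParticlesSoA {
        std::vector<float> x, y, z, mass;
    };

    float sum_x_aos(const std::vector<ParticleAoS>& ps) {
        float s = 0.0f;
        for (std::size_t i = 0; i < ps.size(); ++i)
            s += ps[i].x;   // strided loads; hard to vectorize well
        return s;
    }

    float sum_x_soa(const ParticlesSoA& ps) {
        float s = 0.0f;
        for (std::size_t i = 0; i < ps.x.size(); ++i)
            s += ps.x[i];   // contiguous loads; auto-vectorizes (with -ffast-math)
        return s;
    }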
Clearly this isn't the case. Plenty of neat C++ "reference implementation" code ends up 5x faster when hand optimized, parallelized, vectorized, etc.
There are some transformations that compilers are really bad at: rearranging data structures, switching out algorithms for equivalent ones with better big-O complexity, generating & using lookup tables, bit-packing things, using caches, hash tables and bloom filters for time/memory trade-offs, etc.
The spec doesn't prevent such optimizations, but current compilers aren't smart enough to find them.
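To make the lookup-table case concrete, a small C++ sketch (function names made up). The table version is clearly allowed by the spec, yet no mainstream compiler will derive it from the loop:

    #include <array>
    #include <cstdint>

    // Straightforward version: reverse the bits of a byte with a loop.
    std::uint8_t reverse_bits_naive(std::uint8_t b) {
        std::uint8_t r = 0;
        for (int i = 0; i < 8; ++i)
            r = static_cast<std::uint8_t>((r << 1) | ((b >> i) & 1));
        return r;
    }

    // Hand-optimized version: precompute all 256 answers at compile time
    // and replace the loop with a single table load.
    constexpr std::array<std::uint8_t, 256> make_table() {
        std::array<std::uint8_t, 256> t{};
        for (int b = 0; b < 256; ++b) {
            std::uint8_t r = 0;
            for (int i = 0; i < 8; ++i)
                r = static_cast<std::uint8_t>((r << 1) | ((b >> i) & 1));
            t[b] = r;
        }
        return t;
    }
    constexpr auto kReversed = make_table();

    std::uint8_t reverse_bits_lut(std::uint8_t b) { return kReversed[b]; }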
Imagine the outcry if compilers switched algorithms. How can the compiler know my input size and input distribution? Maybe my dumb algorithm is optimal for my data.
Compilers can easily runtime-detect the size and shape of the problem, and run different code for different problem sizes. Many already do for loop unrolling, e.g. if you memcpy 2 bytes, they won't even branch into the fancy SIMD version.
This would just be an extension of that. If the code creates and uses a linked list, yet the list is 1M items long and being accessed entirely by index, branch to a different version of the code which uses an array, etc.
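A sketch of that kind of dispatch in C++ (the names and the threshold are made up; real memcpy implementations do a more refined version of the same thing):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Plain byte loop: no setup cost, ideal for tiny copies.
    static void copy_small(std::uint8_t* dst, const std::uint8_t* src,
                           std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            dst[i] = src[i];
    }

    // Stand-in for a wide/SIMD path: move 8 bytes at a time.
    static void copy_wide(std::uint8_t* dst, const std::uint8_t* src,
                          std::size_t n) {
        std::size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            std::uint64_t w;
            std::memcpy(&w, src + i, 8);   // unaligned-safe 8-byte load
            std::memcpy(dst + i, &w, 8);
        }
        for (; i < n; ++i)
            dst[i] = src[i];
    }

    void copy(std::uint8_t* dst, const std::uint8_t* src, std::size_t n) {
        if (n < 32)
            copy_small(dst, src, n);   // a 2-byte copy never sees the fancy path
        else
            copy_wide(dst, src, n);
    }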
If I know my input shape in advance and write the correct algorithm for it, I don't want any runtime checking of the input and the associated costs for branching and code size inflation.
That's my question. I'm also under the impression that the optimizations CAN be made manually, but I find it surprising that "current compilers aren't smart enough to find them" isn't improving.
Having a few eink readers/tablets, I can say that I almost never care about battery life. You end up charging them once every week or two anyway. I never look at battery stats for them because it almost never ends up mattering in practice. I don't even look at the battery stat when deciding which to buy or when I recommend one to a friend.
I "love" that almost everything I want to see in a browser (content, shops) wants me to install an app, and almost everything I want to have as an app (tools, editors, email... MS Office is the biggest offender here in my books) wants me to be in a browser or is just a bloated packaged website with subpar UX.
Can you even win a war without winning some battles? I joke there… but let’s assume you have done the research on the topic: then how do you answer if you want to fight a battle and actually give out an answer?
The way I read the article, it doesn’t talk about producing a better argument. It talks about being a better listener/reader, such that the other party is more willing to listen to the argument you already have.
I do see it worth spending the 1000x effort at times, but not to convince someone else about topic A. I would spend that if I’m unsure of my standing on topic A.
Certain classes of programs should be built as ReleaseSafe rather than ReleaseFast to keep many of the runtime checks. It's perfectly reasonable to write a database and build as ReleaseSafe, but also make a game and build it as ReleaseFast.
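For anyone who hasn't used Zig: ReleaseSafe keeps safety checks such as bounds and overflow checks at runtime, while ReleaseFast compiles them out. A rough C++ analogy of the trade-off (the BUILD_SAFE macro is made up for illustration):

    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    int get(const std::vector<int>& v, std::size_t i) {
    #ifdef BUILD_SAFE
        // Kept in the "safe" build: crash loudly instead of reading out of
        // bounds and silently corrupting state.
        if (i >= v.size()) {
            std::fprintf(stderr, "index %zu out of bounds\n", i);
            std::abort();
        }
    #endif
        return v[i];   // the "fast" build performs only the unchecked access
    }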
It feels like the album art could make use of some cool dithering algorithm instead of a simple black/white filter. Something in the style of Return of the Obra Dinn.
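For reference, ordered (Bayer) dithering is only a few lines; a C++ sketch for 8-bit grayscale input (Obra Dinn, as I understand it, uses dithers from roughly this family):

    #include <array>
    #include <cstdint>

    // Classic 4x4 Bayer threshold matrix.
    constexpr std::array<std::array<int, 4>, 4> kBayer = {{
        { 0,  8,  2, 10},
        {12,  4, 14,  6},
        { 3, 11,  1,  9},
        {15,  7, 13,  5},
    }};

    // Map an 8-bit gray value to pure black or white using a threshold
    // that varies with pixel position; that variation is what creates the
    // dither pattern instead of flat black/white regions.
    std::uint8_t dither_pixel(std::uint8_t gray, int x, int y) {
        int threshold = kBayer[y % 4][x % 4] * 255 / 16;
        return gray > threshold ? 255 : 0;
    }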
I’d wager the furniture industry is currently responsible for a significant % of annual deforestation, and as far as I know those forests aren’t regrowing fast enough.
An approach like this could benefit from crops which are not otherwise productive for humanity, but which grow much faster and absorb CO2 more cheaply than trees.
Does that mean “stop replanting forests?” Absolutely not.
I’m skimming through this and it feels like a well-thought-out research proposal with concrete next steps. My thermodynamics is too bad to comment on the approach, but it looks cool. As long as setting up experiments for it is reasonable in cost, showing results doesn’t take too long (before it’s too late for the planet), and it can be shown that enough CO2 can be captured at long-term costs that make sense, then it sounds great! I hope some of the proposed next steps get funding.
Commenting “wouldn’t Z be better instead” feels counterproductive to the discussion here.
The idea proposed is so incredibly cheap; the only real costs are land and unskilled labor. This screams "government experiment" but we are doing less and less of that.
If carbon credits actually become a thing, this might be a way to cheaply sink carbon. But there is so much graft and corruption in that space at the moment.
Something I never considered: I wonder how clicking to prove you’re human works for people with disabilities. There’s gotta be accessibility features there, and I bet bots are abusing them.
At least for Cloudflare "captchas", you don't have to solve anything, only click a button, so they're pretty accessible. My guess is that they care less about whether you're a human or not, and more about imposing resource costs on any attacker, because solving those challenges requires a full browser runtime (i.e. hundreds of megs of memory + some non-trivial amount of CPU time). That's significantly more expensive than spamming requests.post() on a thousand threads.