
One possibility is to replace part of the battery. The smaller battery can be designed to lie about its charge, or you can replace it with a higher energy-density battery and use the space saved for a detonation system (perhaps even incorporating the battery itself into this) and a small quantity of high explosive. Contrary to popular belief, high explosives are relatively stable and safe until deliberately detonated; some merely burn or are hard to ignite at all. Package it up into something that looks identical to an unmodified battery. Modify the device firmware and battery control circuitry to detonate it on receipt of a specific signal and... boom.


It's also at least as expensive as more advanced machines you can buy already-assembled.

If this was half the price, I might be interested. But if I wanted a coffee maker with open source control, I'd probably just hack an existing cheaper product. And I'm someone who absolutely loves assembling stuff from kits!

Heck, I'd be surprised if someone hasn't already got Doom running on a Sage.


> It's also at least as expensive as more advanced machines you can buy already-assembled.

Exactly. The parent post mentions this being "the wrong solution to the problem", but I don't know what problem this product is addressing. E61 machines are well understood and diagrammed, with (somewhat) interchangeable parts. If this product appeals to you, you can buy a cheaper or similarly priced machine, take it apart, and put it back together yourself.


Oh, don't get me started on what the problem is :D

* Espresso machine electronics are very proprietary. There's basically one manufacturer, no published schematics, and very closed firmware.

* That one manufacturer's hardware occasionally breaks and needs replacement, and they charge up the wazoo for it.

* Firmware updates are not a thing. Buying a new controller with the new firmware is your only option.

* Espresso machine electronics hardware is pretty firmly stuck in the past. If you're lucky you have a 128x64 px OLED, but more likely you have LED indicators, 7 segment displays, or graphical LCDs.

There are absolutely exceptions to this, but for 95% of the espresso machines out there, you're definitely not getting the full potential of the hardware.


> * Espresso machine electronics hardware is pretty firmly stuck in the past. If you're lucky you have a 128x64 px OLED, but more likely you have LED indicators, 7 segment displays, or graphical LCDs.

I don't know if I'd consider adding a screen to an espresso machine to be an improvement. What would it be useful for?


I have a no-display machine and I wish it had a few things that a screen would facilitate:

(1) Automatic shot timer.

(2) Shot volume display (my machine is a volumetric one, but I have to measure the weight of the output to calculate what volume it dispensed)

(3) Ability to configure other parameters, such as pre-infusion time, where I'm guessing the manufacturer just left this out because it would complicate an already kind of painful button + LED UI.

I also wish it had a group head temperature sensor, but that would add more hardware to the machine than just the screen.


All this exists from Decent Espresso, but they are the 1% of the market that is the exception.


I think we’re coming full circle back to identifying what this company is trying to solve for.


I don't think any of these issues are solved by selling you a pile of parts and having you build the machine yourself. An open/more flexible PID is a great idea, but that's just one piece of this product and could be built into an already-assembled machine. There are some machines (Decent, San Remo You) that give you a lot more control, but even this level of control probably goes unused by a lot of their users, if the h-b forums are to be believed.

More control than that, or a totally open PID, might be a hard sell for safety reasons. That alone is a nonstarter, but even as you approach that level of openness, it would be pretty hard to support and really isn't needed if you just want a good shot of espresso and aren't taking a niche academic approach to an already niche process. This is why you likely won't see it in more commercial machines.


Which one manufacturer do you have in mind?

Perhaps you're thinking of the commercial market and I'm thinking of consumer, but there have been a lot of really interesting developments in the last few years at the intersection of affordability and high-quality output -- I own a Bambino Plus now, for example, which is simply a delightful machine, though I too wish I could modify the firmware.


Gicar. If you're looking at home machines from manufacturers that also do commercial (e.g. Profitec, ECM, Lelit, Rancilio, La Marzocco etc), they almost exclusively use Gicar electronics.


This is like building your own keyboards. It's for people that like putting things together.

I ordered a DIY Framework laptop not because it was cheaper, but because it was fun to build.


The last sentence of my post addresses this.


Also, a branded machine gets QC before coming to my home; I wouldn't trust myself to do anything related to the boilers.


Not my Baratza Vario grinder. First two units had line-hot shorted to ground.


Now the big question - was Baratza cool to deal with? I haven’t had too much interaction with them, but I replaced (they sold me) a controller board for my Vario-W for a reasonable price, as well as burrs and the drive. The machines are good for what they are, but their service (what they pride themselves on) is exceptional in my experience.


Yes, response was good.


Do you have a list of recommendations of kits that you enjoyed assembling?


72057594037927936 (2^56) addresses ought to be enough for anybody... ;)


The problem is, those addresses are completely interchangeable: nothing stops e.g. malloc() from allocating addresses near the very top of the legal range instead of starting near the end of .data. In fact, mmap(2) seems to do pretty much that by default, so reusing a pointer's top bits is inherently unreliable: you don't know how many of them are actually unused, which is precisely why x64 made addresses effectively sign-extended integers.
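For concreteness, here's the naive scheme being criticized, sketched in Python with addresses as plain 64-bit integers (the helper names are mine, purely for illustration):

    TAG_SHIFT = 56
    ADDR_MASK = (1 << TAG_SHIFT) - 1

    def tag_ptr(addr: int, tag: int) -> int:
        # Stash an 8-bit tag in the top byte, *assuming* it's unused.
        return (addr & ADDR_MASK) | (tag << TAG_SHIFT)

    def untag_ptr(tagged: int) -> int:
        # Recover the address -- only correct if the real address never
        # had bits set above bit 55, which mmap() is free to violate.
        return tagged & ADDR_MASK

If the allocator ever hands back an address with bits above 55 set, tag_ptr silently corrupts it -- which is exactly the unreliability described above.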


You opt in to any of the top-byte masking schemes via prctl on Linux. It's fully forward compatible, in that programs that don't enable it continue to work as normal. Additionally, Linux won't map memory at addresses higher than 2^48 by default either, because non-hardware-accelerated top-bit pointer tagging would otherwise have the same problem. I don't think either of your complaints is valid here.
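For reference, a minimal sketch of that opt-in (constants copied from linux/prctl.h; this is the arm64 Top Byte Ignore path on Linux 5.4+, and availability depends on kernel and hardware):

    import ctypes

    PR_SET_TAGGED_ADDR_CTRL = 55  # from linux/prctl.h
    PR_TAGGED_ADDR_ENABLE = 1

    libc = ctypes.CDLL(None, use_errno=True)
    # Ask the kernel to accept syscall pointers with a non-zero top byte.
    if libc.prctl(PR_SET_TAGGED_ADDR_CTRL, PR_TAGGED_ADDR_ENABLE, 0, 0, 0) != 0:
        print("tagged addresses not supported on this kernel/arch")

Programs that never make this call keep the old strict behaviour, which is the forward compatibility mentioned above.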


The models do a pretty good job at rendering plausible global illumination, radiosity, reflections, caustics, etc. in a whole bunch of scenarios. It's not necessarily physically accurate (usually not in fact), but usually good enough to trick the human brain unless you start paying very close attention to details, angles, etc.

This fascinated me when SD was first released, so I tested a whole bunch of scenarios. While it's quite easy to find situations that don't provide accurate results and produce all manner of glitches (some of which you can use to detect some SD-produced images), the results are nearly always convincing at a quick glance.


One thing they don't yet do is maintain consistent perspective and vanishing points.

https://arxiv.org/abs/2311.17138


As well as light and shadows, yes. It can be fixed explicitly during training, as the paper you linked suggests, by adding a classifier, but it will probably also keep getting better in new models on its own, simply as a result of better training sets, lower compression ratios, and models gaining a better understanding of the real world.


No, it doesn't.

The phone doesn't need to broadcast anything to control the drone directly. The phone talks to the remote control unit, which is what broadcasts signals to control the drone. You don't need wifi or mobile internet, or even bluetooth to fly a DJI drone (the phone connects by cable to the remote control unit).

(Actually, that's not 100% true -- if you're in a locked zone that requires permission to fly (such as near airfields or other protected sites), you will need internet access to start your flight and unlock the zone using your DJI account. Otherwise the drone may refuse to fly into restricted zones.)

You don't even need the phone at all -- the remote unit is quite capable of controlling the drone in flight with the phone switched off.


Also worth noting that, unlike physical objects, images are not bound by the speed of light. Patterns of light and shadow can move across a sensor at unrestricted speeds.


I'm confused what this means. Are patterns of light and shadow not also light, and bound by the speed of light (on the upper end)? How can patterns consisting of light (or the absence of it) move faster than light?


https://physics.stackexchange.com/a/48329

In other words, the speed of a projection of light from 3D space onto 2D space may be higher than the original speed in 3D. (Because one dimension gets squished to 0, movement along that dimension is perceived as instant.)

It's like a diagonal of a cube 1x1x1 has length sqrt(3), but if you apply orthogonal projection onto R^2, its image will be a diagonal of a square and it will have length sqrt(2). Shorter distance -> shorter time to travel.


> It's like a diagonal of a cube 1x1x1 has length sqrt(3), but if you apply orthogonal projection onto R^2, its image will be a diagonal of a square and it will have length sqrt(2). Shorter distance -> shorter time to travel.

This example doesn't make sense to me. In that analogy, wouldn't anything on that diagonal appear to move more slowly in 2D than the same thing moving along the diagonal of a face? The cube diagonal would make it move farther than it does in 2D space.

I remember seeing a simulator in my optics class that combined multiple wavelengths of light. The interference pattern moved faster than the speed of light, but that was fine because information wasn't moving faster. That was just the result of adding them together.


But when you move the laser emitter in your hand, you're controlling the speed in that 2D space, not in 3D. You never affect the position of photons in the Z dimension, so you're not constrained by a 3D speed that would later be slowed down by projection.

Say you move your laser emitter along the diagonal of a face with velocity v. The perceived light, projected onto the plane, has to match the position of the emitter on the face. That creates the illusion that light travelled along the longer 3D diagonal faster than v (in order to match the projection, which is how you, or a camera sensor, see the light). But in reality no light ever travelled along that longer diagonal. It's only an illusion, and it's this illusion whose speed we're measuring. The photons on that diagonal arrived straight from the emitter, i.e. each of them appeared at only one point of the diagonal throughout its entire history. In other words, the photon at the beginning of the perceived movement is a different photon than the one at the end. They travelled along different paths, and while some photons were on the diagonal, others were still on their way there.


Shine a laser into space, and the image of your laser spot can move much faster than the speed of light. Nothing actually moved faster than light, though.


What do you mean by "image faster than light"?

How is an image not light?

Or do you mean a captured image may show items from different points in time?

But that's only relevant after the photo has been created, not during the window of time that a sensor is capturing light.


Stand a meter away from a wall and wave a laser pointer such that the spot travels back and forth between two points a meter apart in one second. Move two meters away, but keep your movement exactly the same; the spot now moves two meters in one second.

Move two light-seconds away and do the same movement. The spot now moves two light-seconds in one second: twice the speed of light. Of course it takes two seconds from when you turn the laser on to when an observer at the wall would see it, and four seconds before you see the spot on the wall, but the spot itself moves faster than light.
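The arithmetic, if you want to check it (my numbers, matching the example above; the sweep rate is approximated as 1 rad/s for a 1 m arc at 1 m distance):

    c = 299_792_458               # speed of light, m/s
    omega = 1.0                   # angular sweep rate of the wrist, rad/s
    for d in (1.0, 2.0, 2 * c):   # wall distance: 1 m, 2 m, 2 light-seconds
        v_spot = omega * d        # spot speed grows linearly with distance
        print(f"d = {d:g} m: spot moves at {v_spot / c:g} c")

The spot speed scales linearly with distance while the wrist motion stays fixed, so at two light-seconds it comes out at 2c -- yet every individual photon still travels at exactly c.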


Ah, so for the sake of capturing conceptual/perceived "objects", a global shutter at least can do a better job of matching what would be perceived during the short window in which the shutter opens and captures every pixel at once.

A rolling shutter might capture points along the way but leave gaps by comparison. In the laser-pointer example you'd probably want a longer exposure, but the global shutter would still give you a uniform capture, better matching what your eyes/brain perceived.


Possible evidence for this sort of thing in Peru too: "doorways" carved into rock faces, etc., at local spiritual sites ("huacas"), although there's little solid evidence of what they were actually used for, or exactly how old they are.


In general no, but the provided example depends on parallel memory accesses at the cache level, so cache effects can indeed come into play with instruction-level parallelism. Did you just miss this detail in the article, or are you suggesting it's wrong?


Superscalar execution has nothing to do with caches. You could do it on an architecture with no caches at all.


My hypothesis: it's a kind of performance art... All AI responses, with your score given randomly with a win probability of two-thirds.


Article should be called "Totally Expected Downsides..."

If you want temporal locality, use ULIDs instead.


The unexpected part for me was not the lack of temporal locality, but which cache it thrashed. The whole dataset fits into buffer cache so one might think that the lack of locality is not that important...


I was going to use UUID with the time portion at the start (also known as UUIDv7) but this looks better.


I've been thinking about converting a legacy application that uses UUIDv4 to use ULIDs going forward, but representing those ULIDs in a format that is compliant with UUIDv4. I haven't thought through the possible downsides, but I think it should be a pretty straightforward change. Of course old records will remain truly random UUIDv4s, but at least new records will be time-ordered and as such will put less stress on the B-tree index when writing them.
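A minimal sketch of that idea using only the Python stdlib (my own helper, not from any ULID library). One caveat: the result won't have valid v4 version/variant bits, so anything that strictly validates UUIDv4 will reject it:

    import os, time, uuid

    def time_ordered_uuid() -> uuid.UUID:
        # ULID bit layout: 48-bit big-endian millisecond timestamp
        # followed by 80 random bits, rendered in UUID notation.
        ts = int(time.time() * 1000).to_bytes(6, "big")
        return uuid.UUID(bytes=ts + os.urandom(10))

    print(time_ordered_uuid())  # sorts after any ID generated earlier

Because the timestamp leads, new keys land at the right-hand edge of the B-tree index rather than at random pages.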


One potential downside is that ULID does not have an RFC, unlike UUIDv7.



It's only mentioned there in 'motivation', a study of trends?


Good point. It has libraries in most languages and that's good enough for me, but might be a problem if you're interfacing with other projects.

