Ask HN: What under-the-radar technology are you excited about?
347 points by ilmoi on April 12, 2021 | 535 comments
Something that the world hasn't yet noticed but you think will be huge.



1) Fiducials. These make it incredibly easy to map the physical world to the digital world, and vice versa. I guess it's not under the radar, but it just feels like such a superpower: sub-pixel-accurate 6DoF tracking and unique identification for anything you want. It's like a cheat code for computer vision, as it makes the whole problem trivial. I half expect fiducials on all public streets within 10 years, on every consumer product, all over the place, and some of them maybe invisible.

2) Custom silicon. Open-source tools and the decentralization of fab tech (driven by countries not wanting to be subject to international trade problems... as well as Moore's Law slowing) are gonna make this as routine as making a PCB.

3) Electric airplanes. With wings. "Drones" as VTOL electric helicopters have been big for a decade, but there are physical limitations to that approach. If you want to see the future, look at drone delivery in Rwanda by Zipline. But I also think we'll see passenger electric flight start small and from underserved regional airports sooner than you think, doing routes and speeds comparable to high-speed rail (and with comparable efficiency) but much more granular and without massive infrastructure.

4) Small tunnels by The Boring Company, and probably competitor companies with similar ideas. When you can build tunnels one or two orders of magnitude cheaper than today, cheaper than an urban surface lane, then why NOT put it underground? And they'll serve pedestrians, bikes, and small subways like the London Underground uses. It gets a lot of hate from the Twitter Urbanist crowd, but what TBC has done for tunneling cost (on the order of $10 million per mile) is insanely important for how we think about infrastructure in the US.

5) Reusable launch technology. The world still doesn’t grok this. They will. It (long term) enables access to orbit comparable to really long distance airfare or air freight.

6) Geoengineering. I’m not happy about it, but it’s insanely cheap so... it’ll probably happen.


> Fiducials

If you mean what I think you mean (tiny marks on everything that encode information to help computers figure out what they're looking at), I agree. In particular, I've long been waiting for someone in the self-driving sphere to give up on trying to crack the problem by just imaging the world as it is. In a saner world, countries would already be standardizing machine-readable markers on roads and posts and traffic signs. I'm still hoping someone will wake up and make use of this "cheat code".


I worked for a number of years on making invisible fiducials that appear only in infrared, and am one of the chief inventors on 3M's smart code. The idea was that on any retroreflective surface you could place this fiducial sticker and relay some information. We were originally looking at approximately 64 bits of information. That isn't a ton, but we thought it a good balance of error correction and module (the pixels in the code) size.
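
To make that trade-off concrete, here's a rough sketch using the Python reedsolo package (my illustration, not 3M's actual scheme; the parity count is an assumption):

    from reedsolo import RSCodec  # pip install reedsolo

    payload = (0x0123456789ABCDEF).to_bytes(8, "big")  # a 64-bit code

    # 4 parity bytes correct up to 2 corrupted bytes, at the cost of
    # growing the marker from 64 to 96 modules (at 1 bit per module).
    rsc = RSCodec(4)
    encoded = bytearray(rsc.encode(payload))

    encoded[3] ^= 0xFF  # simulate one byte lost to dirt or occlusion
    decoded = rsc.decode(bytes(encoded))[0]  # recent reedsolo returns (msg, msg+ecc, errata)
    assert bytes(decoded) == payload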

At least from an IR perspective, it was generally loved by nearly everyone we spoke to, but there were no takers in the end, due to the IR requirement and such cameras not being everywhere yet.

Here is a link to some of our first work. The picture of a stop sign is our first prototype, with hand-cut special film.

https://www.businessinsider.com/3m-hides-tech-in-sides-to-he...


Nice! Thanks for sharing! Didn't realize 3M was working in this space (but then again, whenever I ask or check if 3M is working on $randomThing, it turns out they do; it's a real-world ACME).

> but there were no takers in the end, due to the IR requirement and such cameras not being everywhere yet

I wonder why that's the case. What kind of IR camera did it require? I was under the impression that turning a regular camera into an IR one was just a matter of digging out its IR filter and replacing it with a visible-light filter. Or did it require the kind of cameras used in thermal imaging? Unfortunately, I don't really know what the technical and business challenges are for ubiquitous IR. Could you share some information about it?


We could tune the material to whatever wavelength we wanted, but for ours we targeted above 850nm, as that was a common filter. Regarding converting a regular camera to IR, you are right. Even the front-facing camera on an iPhone is sufficient to read them.

Many cars use front-facing cameras that, with minimal adjustment, could read at the proper wavelengths, but one issue for a lot of vehicles right now is that the windshield has an IR filter to minimize heat and interior damage. For cameras behind the rear-view mirror, the standard windshield creates an issue. Windshields with a small cutout in the film would be sufficient, but they are not manufactured, to my knowledge.

For general fiducials not related to these, I had hoped to put them everywhere. Think hidden everywhere, and read by phones. But at least for a while I think the rear-facing cameras on phones will continue to have the filter, and using the phone backwards with the front-facing camera is awkward.


And punk kids would be covering them with stickers that machine-read as something entirely different. :P Not as easy a thing to crack as it could be. You could cryptographically sign them (sign your signs!) and maybe encode approximate GPS coordinates to prevent them from being moved too far from their original position, but that's a lot of data to encode in a marker.
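
To put numbers on "a lot of data", here's a sketch using the Python cryptography package (the key, payload, and position grid are all invented; nothing like this is standardized). An Ed25519 signature alone is 64 bytes, dwarfing the payload of a small marker:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    road_authority = Ed25519PrivateKey.generate()  # hypothetical signing authority

    payload = b"\x01\x02\x03\x04\x05\x06\x07\x08"  # a 64-bit sign payload
    # Bake a coarse position (~100 m grid) into the signed message, so a
    # sign moved elsewhere fails verification.
    position = int(45.5017 * 1000).to_bytes(4, "big", signed=True) \
             + int(-73.5673 * 1000).to_bytes(4, "big", signed=True)
    signature = road_authority.sign(payload + position)

    print(len(payload), len(signature))  # 8 bytes of payload vs 64 bytes of signature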


That's an overblown worry. Just like existing road signs, which also get tampered with, these would be a redundant source of information on top of the other self-driving tech, reducing the instances where a human assistant or driver has to intervene.


Pretty easy: make it a felony. Punk kids (usually) don't go around killing people.


Because killing people feels bad, but destroying objects feels like a prank (especially to those who don't understand the consequences). I don't think laws hold back punks from murder; customs do.


Forgive me, genuine question. What happens when it snows?


Same thing that happens to humans. You see parts of the signage. Or none of it, and drive on memory (and follow the behavior of other cars).

The idea behind adding special markings for computers is that you can waste a lot of time and effort trying to perfect algorithms interpreting signage optimized for human consumption, or... you can spend much less time and effort, to much better effect, setting up additional signage that's easy for computers to consume and provides information relevant to computers (which is not necessarily the same as what humans need).

Note that roads are controlled, artificial environments. There's always someone responsible for maintaining a road and the signage on it. The infrastructure to deploy additional markings dedicated to self-driving cars already exists.


Yup. Or have signage at an angle that deflects snow. Or has a heater or wiper. Or as others have said, use non-visible-light fiducials.

Humans rely on road lines and signs as well and when those are covered, human capability is reduced and they have to drive slower.


For the fiducials we previously worked on, the goal was to maximize error correction for the types of occlusions commonly seen on signs. Because most signs are vertical, there is a lower likelihood of a ton of snow on them. The more common occlusion is an edge occlusion from either a natural road feature or other vehicles.

For snow that genuinely disrupts the sign fiducial, we had a few solutions. The first is that if fiducials are dense enough, then dead reckoning may be sufficient until the next fiducial is observed. The second is to build different layers of data with different error-correction capabilities. One system we developed could relay a low number of bits from a far distance with reasonable error-correction capability. The remaining bits were then much smaller and readable only up close. The thought being that if you are able to fully resolve a fiducial in one area, then, assuming an a priori map of fiducials, the first 16 to 24 bits of a 64-bit code are likely enough to accurately resolve location.
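
A toy version of that long-range/short-range idea (all codes and coordinates invented): the coarse prefix narrows the a priori map to a few candidates, and dead reckoning picks among them.

    # A priori map: full 64-bit code -> surveyed (lat, lon). Values invented.
    FIDUCIAL_MAP = {
        0xA1B2C3D4E5F60708: (44.970, -93.260),
        0xA1B2C3D4AA110000: (44.980, -93.250),
        0x1122334455667788: (44.970, -93.270),
    }

    def resolve(prefix, approx_lat, approx_lon, prefix_bits=16):
        """Resolve a marker from its long-range-readable prefix plus a
        dead-reckoned position estimate."""
        shift = 64 - prefix_bits
        candidates = [(code, pos) for code, pos in FIDUCIAL_MAP.items()
                      if code >> shift == prefix]
        if not candidates:
            return None
        # Pick the surveyed marker nearest the position estimate.
        return min(candidates, key=lambda c: (c[1][0] - approx_lat) ** 2
                                             + (c[1][1] - approx_lon) ** 2)

    # At distance, only the top 16 bits (0xA1B2) resolve; two markers share
    # that prefix, but dead reckoning disambiguates them.
    print(resolve(0xA1B2, 44.971, -93.262))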


Forgive me, genuine question: even in the richest countries we apparently don't have enough budget to repaint the yellow lines in the middle of roads when they wear away. How are we going to afford something like this?

The vision of self-driving going much beyond the driver assists we have today is going to die a slow death as more and more people realize it just isn't worth the CAPEX and OPEX. Human brains are cheap in comparison.


Fiducials don’t need to be ON the road. They can be on signage or whatever. But fiducials are cheap. Just paint. (And you can, in fact, draw them by hand using a ruler, a square, and patience.)

They are error-detecting and error-correcting digital signage that gives you precise 6DoF orientation "for free." (And you can do even better... you can stick a bunch of them all over a deformable object to map back its shape and deformation... without lengthy registration and with only a single camera... unlike Vicon.) And they're easy to implement in software. Detection can be extremely fast (<10ms... no real limit, as you can implement it in an FPGA), fairly lightweight, and works with basically any kind of digital camera (good or bad).


Are there such fiducials+libraries available today?


You can use almost any 2D matrix barcode as a fiducial, and those include error correction/detection already. Signing it would be easy.


I'm familiar with ArUco markers, but in my experience they tend to be somewhat finicky (e.g., misidentified with a little bit of noise or unfavorable lighting), and accurate poses tend to require a lot more fiducial markers than might be considered "reasonable".


The standard ArUco library in OpenCV is not the most stable fiducial tracker, but there are lots to choose from.
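
For anyone who wants to experiment, a minimal OpenCV sketch (the ArUco API moved around between versions; this is the pre-4.7 function style, and the image path is a placeholder):

    import cv2

    frame = cv2.imread("scene.jpg")  # any image containing markers
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    # OpenCV >= 4.7 wraps this in a cv2.aruco.ArucoDetector object instead.
    corners, ids, _rejected = cv2.aruco.detectMarkers(frame, dictionary)

    print(ids)      # decoded marker IDs, or None if nothing was found
    print(corners)  # sub-pixel corner estimates; these feed pose estimation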


Hello from Montreal, a booming city where you can't go a block without hitting a giant hole in the road or a bridge that's about to collapse. Having proper lines painted is pretty far up our hierarchy of needs. That said, I can easily imagine machine-readable markers leapfrogging most of the existing infrastructure deficit.

It's easier to get money for a futuristic sounding project than for routine shoring up of infrastructure so it doesn't keep falling apart. (Just like many codebases)


Machine-readable can involve more than the visible spectrum: radar and light-beam reflectors, non-visible-spectrum strobes, radio beacons, swarm data. Volvo was working on this years back, before "autonomous": cars sharing data via wireless comms, using navigation markers installed by roads, and more. There are ways, just no central will.

Google, among others, is using private (external, not owned or controlled by Google) Wi-Fi access points as radio beacons for navigation today. Good old wardriving Google. This is one factor facilitating navigation in cities where GPS is spotty. And then you have cell towers.

Centralized, standardized navigation-facilitation solutions would be better and more reliable, and wouldn't require mobile internet access for alignment with beacons, as the beacons would live in a database stored in the vehicle, etc.


Or what happens if some kids prank a fiducial into pointing towards a cliff?


What if someone redraws road lines (or fakes the speed limit signs or whatever) to confuse drivers today?

With machine-readable signage based on fiducials, not only can you error-correct the signage (making it highly resistant to simple alterations and ambiguous readings), you can also encrypt it and get 6DoF vehicle-versus-signage pose information, all from a single camera view. That makes it MUCH more resistant to abuse than machine-learning from human-readable signage... arguably more resistant to abuse than human-readable signage is for humans.

(Also, I’ve thought of additional ways that fiducials could be resistant to such measures.)


What happens today when some kids move road cones so they point to oncoming traffic as a prank? You can have a car react the same way I did in that situation - slow down, take in the entirety of the situation ("cones say X, but that leads you into oncoming traffic") and ask the operator for directions or maneuver accordingly.

Just because some Teslas are dumb enough to drive into barriers today doesn't mean we can't continue improving the technology.


You’re actively paying huge dollar figures to experiment with your survival. Which I don’t judge even if I find it too risky for my taste. But your giant expensive experimental hunk of alloys and minerals is also a death sentence for others when it messes up.


Which is why we should use something less experimental and black-boxy than machine learning and more verifiable and robust like fiducials. (Regardless of whether we're talking about using machine vision for attempts at "self-driving," autopilot, or just driver safety features like lane-keeping or auto-braking or automatic speed limit reading.)


I mean you’d hope that self driving cars had a certain amount of error correction and weren’t just looking at one data point...


Hope is not a great thing to depend on when it comes to multi-ton, fast-moving objects.


Natural feature tracking is cooler. I mean... little black boxes are alright, but using a bespoke 3D shape or a hierarchy of shapes to orient your tracking looks less crappy and has a lot more interesting potential.

One time, many many years ago, I went to Kinko's and printed giant fiducial squares, like 48"x48", and wheat-pasted them all over Austin, TX to do large-scale AR with markers. Just putting giant dongs on downtown condos and stuff. Good times.


I did something similar in Europe about ten years ago but lost access to the domain I used as part of the process. That happened through no fault of my own, but it really took the wind out of my sails to run into such bad luck with one of the critical components. However it was a lot of fun working on it, and I've been thinking about this and things like it but rather in the context of "street art" that only machines can see and interpret.


I always thought coded signs that also emit some sort of matching signal would make the most sense... along with some sort of consortium to set standards.

Then you could zone off areas that are designated safe for self-driving, among other things.


It would seem like the attack vector for this technology is SUPER simple to pull off. How do you trust self-driving cars that use this in a world where graffiti exists?


But this is actually the advantage of this approach vs. the alternative. Self-driving (or "autopilot") cars today try to use machine learning to read human-readable signage, but they don't have any good way to verify the values with a checksum, and so are really vulnerable to graffiti (as are human drivers, to some extent). It's easy to checksum a fiducial. So machine-readable signage is actually much more resistant to that kind of attack than human-readable signage.

Error detecting and correcting signage would be able to correct or at least detect the error, and it’s also possible to encrypt or, um, sign the signage. None of those are terribly feasible with human readable signage.

A human can take a crowbar to a railroad track. A human can drop a brick from an overpass. But modifying a signed and error corrected fiducial is gonna be pretty tough.


What good does encryption do? All someone has to do is copy or move the sign.


I think your question was rhetorical, but there's an answer:

Because it means that modifying the sign is not useful. (Unlike, say, modifying a speed limit sign.) Also, the fiducial can encode its orientation or position (perhaps in relation to other nearby signs... and this could be a hash or checksum of its position, to save space), so the vehicle would be able to detect the mismatch and mark the sign as suspect/unreliable if it were in any other spot or orientation.
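
A sketch of that position-checksum idea (the grid size and bit counts are my assumptions, not anyone's spec): the marker carries a truncated hash of its surveyed pose, and the vehicle recomputes it from its own estimate.

    import hashlib
    import struct

    def position_tag(lat, lon, heading_deg, bits=24):
        """Truncated hash of a sign's quantized pose, small enough to fit in
        a marker payload. A real system would also check the neighboring
        quantization cells to avoid boundary misses."""
        key = struct.pack("<iii", int(lat * 1e4), int(lon * 1e4),
                          int(heading_deg // 5))
        digest = hashlib.sha256(key).digest()
        return int.from_bytes(digest[:4], "little") & ((1 << bits) - 1)

    def sign_is_plausible(decoded_tag, est_lat, est_lon, est_heading):
        # A mismatch means the sign was likely moved or re-oriented.
        return decoded_tag == position_tag(est_lat, est_lon, est_heading)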

There are other solutions to these. But the same problem you describe occurs if someone moves a human-readable sign (but without any way to checksum it).

I think fiducials are not a panacea. They are just one additional data source in what needs to be a robust sensor-fusion approach. But they make a whole bunch of stuff in machine vision a LOT easier to solve. Machine-learning approaches have the same problems but with less opportunity to address them, less robustness, and more overhead.


I suppose location and information could be added to a sign along with a certification or checksum. As long as it's done in a way that doesn't require exact placement, signs found outside their location are ignored.

...something like, "Forest St. Northbound CLOSED (lat,lon) <encrypted checksum>"?


The sign could easily be tied to location, no?


FYI - This is literally no different than the current state of affairs.

https://www.extremetech.com/extreme/306346-researchers-tape-...


Yeah, in fact it'd be a LOT harder to modify a fiducial that has been checksummed/signed/encrypted.

Human readable signage is potentially much easier to attack than machine-readable signage.


There are so many simpler attacks against human-driven cars that could be pulled off even easier today, and yet, they're not pulled off.

The physical world has a much different threat model than the Internet.


It could complement but not replace a general purpose system, and anyway the bigger problem is not navigation but avoiding other cars, pedestrians and obstacles on the road.

Since your solution is based on sending someone to mark the roads, why not use the same someone to simply build a map of the road, Google Maps style with higher accuracy? I think there are a few startups doing just that now.


This won't work.

Because it's incredibly vulnerable in an adversarial world. (Human-readable signage has a human in the loop, and so can adjust for the worst adversarial attacks. Machine-readable signage can't.)


What about equipping normal cars with "beacons" that ping every 5ms? Cheaper?


Not the same. It could complement but not replace a general purpose system.

Scania has a self-driving truck convoy where a manned truck is followed by several driverless trucks using such beacons. It is not a trivial solution once you dig into it: normal WiFi is too slow at highway speeds, and once the air is congested and re-transmissions occur, it is even slower. Scania is using 802.11p [1] and still has backups for when the connection is lost or delayed.

[1] https://www.nxp.com/products/wireless/dsrc-safety-modem/road...


You'd need, what, 50 years of enforcement before you'd have enough cars on the road to trust it. And then, of course, what happens if it's broken? What about pedestrians?


Well, yes, of course object detection will still be a feature, no doubt. The vehicle must have context for where it is. But overhauling infrastructure to be machine-readable is an excellent idea to implement in high-risk situations like suburbia and cities, where pedestrians and dogs are actual factors. The vast majority of roads and infrastructure have no pedestrians; it's just highway and freeway. There it would make sense for the car to follow other cars and calibrate its position accordingly, without necessarily "seeing".


Wouldn't super-high-resolution longitude/latitude coordinates (accurate to about a foot) help do this without any external tool?


True, that gives position but not context. Lat/long gives no opinion on traffic speed, how to overtake, or when to switch lanes. My idea is that the pings are the other cars saying "I'm here, I'm here..." so the car can compute its position relative to the other cars without having to see them. This is obviously not the main system, more of an auxiliary measure so the car can function in any condition.


> And they’ll be for pedestrians, bikes, and small subways like the London Underground uses.

Assuming such tunneling is in fact practical, I vote for burying the vehicles and letting the pedestrians have the surface...


I completely agree. That really is the ideal scenario and we should indeed make the surface safe for kids to walk around with no fear of getting run over... Banish the cars underground. However there are cases where that isn’t practical or where there are weather constraints that make tunneling desirable (for instance, in Minnesota).


For a real-world example of this, check out Madrid's city centre. This is exactly what happened there in the last few years, and the result is amazing.


Regarding the Boring Company: I have seen at least one engineer's breakdown of their tech, and it appears they are using off-the-shelf tunneling equipment, with costs in line with (and sometimes higher than) several other established companies that dig smaller tunnels like the Boring Company does. It seems the only competitive advantage they have is the bully pulpit that is Elon Musk and his blind followers, which can popularize the idea of small tunnels.


Except it doesn't matter if their only "innovation" is normalizing the use of sewer-tunneling-type equipment at sewer-tunneling-type costs. That's a MASSIVE advantage over the insane costs we have for tunneling projects in the US.


To me, the real problem seems to be: why are tunnels built in the US orders of magnitude more expensive than tunnels built literally anywhere else? It's a people problem, not a technical one. There are plenty of tunnel projects around the world that are, in my opinion, far more valuable than shuffling around private cars (the scale of public transit through such tunnels is many orders of magnitude more impactful). But until the US gets its labor costs in check, building infrastructure there has a poor cost-value ratio.


Fiducials are amazing and way underutilized. I've worked in the space, and there's a ton of innovation still to be done.

https://austingwalters.com/chromatags/

Imagine encoding virtual objects or NPCs into a fiducial without a database... basically a 3D model + actions into a piece of paper you can attach anywhere.


So I imagined it, and I came to the conclusion it's not more than a QR code.


From what I understand (which is limited), a QR code can function as a fiducial, but the cutting edge of fiducials far exceeds QR codes in recognition speed, and their relative 3D orientation to the scanner can be understood much more accurately and efficiently.

The docs page for AprilTag[0] makes for an explanatory example,

"AprilTags are conceptually similar to QR Codes, in that they are a type of two-dimensional bar code. However, they are designed to encode far smaller data payloads (between 4 and 12 bits), allowing them to be detected more robustly and from longer ranges. Further, they are designed for high localization accuracy— you can compute the precise 3D position of the AprilTag with respect to the camera."

[0]https://april.eecs.umich.edu/software/apriltag

Something with the recognition speed and 3D orientation understanding of an AprilTag and with the information density of a QR code seems like it could be quite a useful innovation.


QR codes are a form of fiducial marker. However, detection is far different from orientation. Also, for things like AR you need quicker detection and orientation, which QR codes don't afford.


Yup. And actually, you need to know the corners of the QR code pretty precisely to de-skew the tag and pull out the code, and it's an easy transformation to get 6DoF orientation and translation from that (although you need some idea of the camera optics, but that's easy to calibrate with similar techniques beforehand). You're right about speed, though, and some fiducials are much more resistant to motion blur than QR codes (and work with much lower-resolution cameras).
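
That corner-to-pose transformation is just a PnP solve. A sketch with OpenCV (the corner pixels and intrinsics here are made up; calibrate a real camera first):

    import cv2
    import numpy as np

    # Corners of a 10 cm square tag in its own coordinate frame (meters).
    obj_pts = np.array([[-0.05, 0.05, 0], [0.05, 0.05, 0],
                        [0.05, -0.05, 0], [-0.05, -0.05, 0]], dtype=np.float32)

    # The four detected corner pixels, in the same order (invented values).
    img_pts = np.array([[310, 200], [420, 215], [405, 330], [300, 310]],
                       dtype=np.float32)

    # Placeholder intrinsics; get real ones from cv2.calibrateCamera.
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, np.zeros(5))
    print(rvec.ravel(), tvec.ravel())  # the tag's 6DoF pose relative to the camera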


Yes, I thought this too: how does a QR code not already accomplish this need?


A QR code IS a type of fiducial.

There’s an entire spectrum of fiducial types based on 2D bar code matrices. QR code is the most well known.

But 2D matrices are not as resistant to motion blur, so there are advantages to some of the other types.


You're not going to be able to encode that much data on a little stamp. It's way more reliable to use a WebVR link.


3 kilobytes (what you can store in a little fiducial QR-code stamp) is a pretty large amount of data. But you don't need that much. A 64-bit identifier plus 48 bits encoding a bounding box in half-precision floating point totals 112 bits (14 bytes). Or maybe 1 byte of precision in each dimension and about 100 points defining the shape, for a total of 300 bytes (plus another 8 for identification).

You really can store plenty of info in a fiducial. And you can use multiple fiducials to store more if you like. This is still a pretty rich vein to mine.
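
That 112-bit layout packs neatly; a quick check using struct's half-precision "e" format (Python 3.6+; the field layout is just the one proposed above):

    import struct

    def pack_marker_payload(object_id, w, h, d):
        """64-bit identifier plus a bounding box as three half-precision
        floats: 64 + 3*16 = 112 bits = 14 bytes."""
        return struct.pack("<Q3e", object_id, w, h, d)

    payload = pack_marker_payload(0xDEADBEEFCAFEF00D, 0.30, 0.20, 0.15)
    assert len(payload) == 14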


I haven't seen any formats that give you that much space in a reliable way, but even then, you won't get any texture mapping, normal mapping, animation, etc. Why bother when you can reliably store a link and have anything you want?


Links have implied costs far greater than a few KB.


Color is tricky because it varies way too much depending on the lighting conditions, right?


> 4) ... When you can build tunnels for one or two orders of magnitude cheaper than today, cheaper than an urban surface lane ...

Didn't the Las Vegas 1.5 mi tunnel cost ~$50MM? I don't think 1.5 mi of surface road (not considering right-of-way costs) costs that much.


It was about 1.7 miles two-way, so about 3.4 tunnel-miles, or ~$15 million per mile including the stations and "rolling stock."

The tunnel in Hawthorne is about $10 million for a mile.

The advantage, of course, is in the very thing you're not considering: right-of-way costs. Of course it's cheaper not to tunnel at all, but there are buildings and people on the surface. And NIMBYs. So including the right-of-way costs, tunneling is cheaper than an urban road per lane-mile.

Tunneling projects (for larger tunnels) run about $900 million per mile in the US (and can be even more), versus about $300 million per mile overseas. So $10-15 million per mile is indeed one to two orders of magnitude cheaper. Making do with smaller tunnels is a big advantage, one that I think other companies and transit agencies in the US should strongly consider as a way to address the insane infrastructure costs we have. If we have to use smaller subway trains like the London Underground uses, so what? At least we could actually get them built. We're unlikely to saturate their capacity anyway, and I'd much rather have 4 times the routes than twice the subway train diameter.


Roads cost a ridiculous amount of money. They start at a million dollars per mile and go up from there. Think about how many miles of road there are just in your neighborhood.

The people who complain about how much of our cities are paved have a substantial point.


But this LV tunnel is one lane, so it should be compared to a one-lane surface road, which, like you said, is ~$1MM a mile.


No, not in urban areas.


Why wouldn't you consider right-of-way costs?


> 5) Reusable launch technology. The world still doesn’t grok this. They will. It (long term) enables access to orbit comparable to really long distance airfare or air freight.

I worked at ULA for a year, about five years ago. At that time they were arguing that it wasn't going to be cost-effective. Back then, though, there may have been only one or two SpaceX landings.

Since I left I haven't kept up with this debate at all. Do you know if ULA changed their stance after all the successful launches?


Not that I'm aware of. But I do know others in the industry have changed their minds, for instance RocketLab. Peter Beck, who said that if they ever made a reusable rocket he'd eat his hat, actually did announce a reusable rocket and later ate (part of) his hat: https://www.youtube.com/watch?v=agqxJw5ISdk&t=188s

(Love that he actually did it haha!)


So did the ESA (European Space Agency) with their Themis project [1]

[1] https://www.esa.int/ESA_Multimedia/Images/2020/12/Themis


Yup, I’m a fan although disappointed they have only announced the testbed instead of an orbital rocket.


> I worked at ULA for a year, about five years ago. At that time they were arguing that it wasn't going to be cost-effective.

When you sell the rocket and not the launch on cost plus contract, it's indeed not cost-effective.


ULA did and has done mostly fixed-price contracts. (This is different from, say, the SLS contract.)

But ULA was stuck with their approach for several reasons, only a couple of which were in their control.


It is cost effective if you charge the federal government 3-4 times more than you charge private interests.


> passenger electric flight

There are unsolved physical limitations to that, with no solution on the near horizon AFAIK; the energy density of batteries is simply too low (energy per kg) for electric airplanes to be efficient.

> from underserved regional airports sooner than you think, doing routes and speeds comparable to High Speed Rail (and with comparable efficiency) but much more granular and without massive infrastructure

I didn't understand that. Is there a limitation on using normal jet-fuel planes from regional airports?


> the energy density of batteries is simply too low (energy per kg) for electric airplanes to be efficient

Technically, hydrogen airplanes with fuel cells are electric and much more feasible, but I don't think that's what the post meant.


Fiducials. Yes, good. The more the better. But we humans cannot read them. So use UV or near-infrared instead of the visible spectrum. Same low-cost sensors, less territorial conflict.


It's also possible to use fiducials that have a human-readable component (like QR codes with a logo in the middle).


> 2) Custom silicon

I know about LibreSilicon [1]; are there any others in this space?

[1]: https://libresilicon.com/


Old tech.

New software development is 99% aimless churn.

We don’t need more new tech. We need better applications of old tech. There is so much software that works perfectly fine already. What’s missing is connecting it to real world problems.


As I've gotten more experience in tech, I came up with a saying -

"Everything great was created in the '80s, and we've been rediscovering those things every ten years since."

I'm not firm on "the '80s" - maybe this stuff is older than I think - but I think the principle still holds. If it's a problem today, somebody probably thought about it before, and then others came around and wrapped things differently.

It's not BAD to wrap things differently, but the old stuff had more of the sharp corners sanded off, and sometimes we lose that battle-hardened aspect when we rewrite code.

Except for garbage collection/whatever is happening with memory safety today. That's the good stuff.


I tend to find that if there is a software idea, there are good odds that someone once had it before and probably made some code/prototype/paper/blog post/newspaper article on it.

However, these systems usually didn't take off because they were "before their time". There were cloud services in the 80s, but PCs got faster and cheaper than internet speeds could keep up with. Client-side apps looked better than cloud apps. Similarly, modern data centers and cloud-computing primitives didn't exist, so reliability was more miss than hit.

Now the economics have turned, and people need data shared across multiple devices. Cloud services are the de facto method of developing applications.


Even as a different perspective on that: a lot of what AWS/GCP/Azure are doing today, IBM was doing in the 80's with COBOL and DB2 and a bunch of tech that people today are just not interested in.

There's a post I saw on HN a couple weeks ago [1] talking about what an AWS Lambda service would look like in COBOL, and I was blown away. This is the stuff my dad used to work on when he was fresh out of college, and I'm not exactly a spring chicken (as evidenced by the fact that I used the phrase "spring chicken")!

1: https://news.ycombinator.com/item?id=25989454, link to article submitted for the lazy: https://developers.slashdot.org/comments.pl?sid=18156250&cid...


The problem was, undergrads in top 50 CS programs were not using IBM mainframes during the 80's. They were using UNIX.

Same thing today (plus a few PCs here and there).

Getting access to a mainframe was, and still is, very expensive. That's why Google, Facebook, and basically everyone post-2000 got started on commodity hardware: it's cheap, it's what the founders knew, and it works. It's also what the top-50 alumni know, and there's no vendor lock-in.


Yup. It's so shocking when something "new", or at least something that feels new, is actually created. I don't care about the countless companies that will make it slightly easier for me to get something from point A to point B.

I care a lot about companies that actually make something new, or popularize something that already existed but didn't have widespread appeal.


Yeah. I remember looking at the Slack IPO and saying to myself, "You're f'ing kidding me. They went public for doing IRC channels in a web browser with emojis?!"


This isn't _as_ short-sighted as the famous "Dropbox comment"[1], but it's pretty close.

a) Slack _clearly_ offers a lot of meaningful functionality over-and-above IRC channels. "Searching" - and, implicitly, persistence - is so fundamental to the offering that it's (apocryphally) part of the acronymic name. Threading, bot support, and channel discovery are all useful features. Sure, all of those things _can_ be implemented on an IRC server, but they're not out-of-the-box.

b) Setting up and supporting an IRC server is non-trivial for a non-technical person. Sure, it's easy for you and me - but any system that allows customers to get access to that functionality _without_ needing a dedicated I.T. team is going to be more attractive to decision-makers.

[1] https://news.ycombinator.com/item?id=8863


Sadly, Slack is also missing a whole lot of IRC functionality - starting with a proper desktop client. The logging it provides is a joke, even when you actually pay for it. Its search capabilities are nowhere near grep(1).


> Its search capabilities are nowhere near grep(1).

This is again missing the point. Yes, the statement "Slack's search isn't as powerful as grep's" is true - but many of the prospective users of Slack (and, crucially - most of those who make the decisions about corporate IT) are incapable or unwilling to use grep _anyway_. You are judging a tool by how well it suits your needs, without realizing that you are not its only target audience.


I think you're both missing the point, though. And it surprises me how many people miss this point even on this incredibly smart forum. The idea doesn't matter as much as the execution. The business side matters as much or more than the technology side.


Fair enough. And I'll admit the above statement is exactly why I'm not a business person:)


Big same. I appreciate that they exist, so that I don't have to worry about it :)


Well, I agree only in part. Their biggest feat wasn't doing IRC in a browser with emojis; it was convincing many companies they actually need it. They also managed to somehow win over the geeks by saying "look, you can always use IRC gateways", only to kill the gateways once they secured their position.


I'm still shocked by how many companies basically pay for Slack and use it the exact same way they used email before.


One could say that memory safety is exactly the kind of "sanding off" we need on the traditional coding paradigms that have governed the software industry since there was a software industry (which, conveniently, is something that got started in the 70's and came of age in the 80's). One long period of structural debt and organizational consolidation, most of it piled on the conceptual framework of Unix, C, and core Internet technologies. There's a shift in the cosmology of computing taking place now, where the base layers are getting reexamined.

And now, a period of reinvestment in the bottom layers, and signs of a diasporic divergence emerging: movements that are ideologically different from yesteryear's FOSS, a tightening of SV's grip on events that increasingly causes sand to pour through its fingers, new purposings of old tech, and roads previously untaken. It's like Alan Kay put it: the future is the past AND the present.


Security and privacy are another big thing. In the 90s and 2000s I installed pretty much anything as a native app and didn't worry about it, as only "bad" people made viruses/trojans, and legit companies had no incentive.

That all changed with big data and marketing. Now every native-app company, and every library those native apps use, has an incentive to mine your machine for data and then use and/or sell that data. Further, the vectors for exploits of various native apps have increased as well, and the always-connected nature of our devices has increased the incentives.

Many people complain about macOS's new security features. Me, I love them, and I don't think they go far enough. Sure, I want control of my machine; I don't want to cede control to Apple. But that to me is what macOS (not iOS) is delivering (or attempting to deliver): stop every app from doing anything without permission, and give me a way to grant that permission if I really want to. I wish Windows would do the same. I wish all Steam games were sandboxed.

In other words, getting all that cool tech from the 80s to be secure and privacy-respecting is a ton of work.


Your post is about software, but it applies pretty well to hardware, too.

Like looms.

Robotics is actually generally pretty slow. Regular (serial) robot arms are usually significantly slower than a human arm. Some parallel robots (i.e., where the motors are mostly stationary and don't have to be waved around by other motors), like SCARA or delta robots, can go about 2-3x the speed of a human, but the difference isn't massive (60 vs. 150 picks per minute?).

But looms are insane. Their task is simpler, but they can do over 2,000 picks per minute (!). The yarn in air-jet looms can be moving at over 200 mph. And even mechanical looms like rapier looms or projectile looms are super fast. The mechanisms are also super advanced and hard to wrap your mind around. Centuries of optimizing the first really good instance of industrial automation will do that, I suppose.

It makes me think we haven't reached a flat plateau in mechanical development. Our robots today are actually pretty primitive compared to where they could be... where they really should be. It also shows just how badly a lot of futurists have underestimated human mechanical capability. Human dexterity and force density are crazy impressive. Humans are actually super strong, fast, AND precise.

And hard automation like looms is also underestimated vs. "robot arms." Hard automation is so much more effective if you can do it. Plain robot arms aren't that great vs. people.


The potential of arms and other humanoid robotics is being able to plug them into existing processes without massive structural changes. Robot arms in car factories look like an evolution from a human production line rather than a completely redesigned one (or a mashup of the two). Still, redesigning production around the robot makes good sense: consider a dishwashing android vs. a regular dishwashing machine. We have had dishwasher tech for a long time, but we are still very far from a dishwashing android. It's just that maybe that humanoid robot could also hang up the laundry, look after the kids, drive you to the supermarket, etc. There is so much room to grow in the "hard automation" space, where scale makes specialized machines doable - as opposed to a household having to buy a dishwasher, washing machine, Roomba, etc.


Didn't Tesla try this for the Model 3 (a fully automated factory), and it turned out to be a total disaster?

Were they too far ahead of the curve? Or is it just rare that your task remains identical enough (as with textiles) for the decades it takes to optimize the hardware?


A bit of both, I think. Doing this is really hard, and it requires some changes to how the car is made. Some of the lessons learned are presumably being introduced into the Model Y, like the single-piece rear casting.

There's also a ton of engineering needed to go into how to make better robots.


Tesla also patented a wiring harness rigid enough to be robotically manipulated.


Indeed. That's another thing I was thinking of. Robots are terrible at manipulating floppy objects like wiring harnesses.


I searched the page for "robots" and got to your comment.

It's also appropriate to put this comment under the others mentioning old tech, because robots have become steampunk: very dear to the first sci-fi writers, now démodé.

As soon as NLP makes another leap, we'll start seeing a comeback.


Where do I read more about looms?


Do you mean a better SMTP server, or a better* email in general? (As an example.)

* Better in this case would be a fundamental redesign to prevent spoofing, provide S2S encryption and maybe E2E encryption, fix MIME-typing issues, fix rich text/HTML display, etc. Basically a good-faith replacement of email instead of a vendor co-opting it.


There's TMTP, "a sane network protocol for email, to end attacks and promote productivity."

https://mnmnotmail.org/

https://twitter.com/mnmnotmail


Serious money may be required.

Support for MNM by way of Patreon is requested on the home page.

There's also JMAP, with credible standards people involved:

https://datatracker.ietf.org/wg/jmap/photos/

Serious industry involvement however appears limited to FastMail.


JMAP is an extension of the email protocol stack; it only replaces IMAP.

TMTP is an alternative to the entire email protocol stack. It will be standardized after it's been proven in a range of real-world scenarios.

The mnm client & server, which implement TMTP, work well today, and have nearly all the features most users need. See docs menu in the online demo.

https://mnmnotmail.org/demo.html

Re "serious money", could you elaborate?


... that also doesn’t rely on DNS!


So why post in a thread that's implicitly about little-known or emerging tech?

"The best under-the-radar car? It's a horse and buggy I tells ya!" Every one of these posts on HN has to have a hot take that's contrarian.


I’m completely serious. There’s a ton of very old boring software that is still somehow little-known to some developers. It’s under the radar the way soil is under the radar - hiding in plain sight, ignored by the folk looking to the sky when the problem they actually have is how to grow some plants.

I’m also genuinely excited that there is growing momentum away from software churn, because we’re not going to solve the complexity crisis with another framework.


Some tech appears before its time has come. It gets some popularity but stagnates because some complementary development isn't there yet.

Some form of tablets and smartphones were there years before the iPhone or iPad.


I think you just won this thread. Seriously, I agree very much with your sentiment. The question isn't so much about the very latest-and-greatest new technologies per-se, but rather about the applications of technologies that are quite often fairly old. That's not to say that there isn't really cool stuff just over the horizon, but as you say, we really need better applications more than more new tech.

And sometimes the tech we need is "out there" and has been for a while, but just hasn't hit "critical mass" yet.

Take @kroltan's answer. I am also extremely bullish on RDF, Wikidata, and the like. But most of this stuff is pretty old now, especially in "Internet years". Which leads, of course, to the question of where the line is between incremental refinement of "old tech" and actual "new tech" as a discrete thing.


I kind of agree but I think I can be more specific.

We need to find more/better ways of integrating people with tech.

Tech on its own is 10% of the solution. Integrating humans with technology is underrated.

(I say this as the owner of various enterprise SaaS businesses but I'm sure it applies in all aspects of software)


This reminds of Nintendo’s philosophy of lateral thinking with withered technology.

https://medium.com/@adamagb/nintendo-s-little-known-product-...


Not just old tech, but old tech made new again. Think about how much server-side web hosting de-complicates websites. If you had to start with a blank Linux box, PHP is still one of the fastest ways to get something with a shopping cart functional, to this very day. If you are into new-fangled shiny shit and can't be bothered with old, proven tech, look no further than Blazor Server for 99% of the same idea - but also way better, because you have C# 8.0 and one of the best IDEs plus first-class debugger support at your disposal.

In some circles, you might even be accused of being a boomer for using SQL. I think a lot of developers are missing out on just how much runway you can get out of SQL and libraries like SQLite. They're also missing out on one of the greatest breakthroughs in the history of computer science with regard to our ability to model problem domains and perform inhuman queries against them with millisecond execution times. But hey, maybe machine learning and MongoDB are working for your shop.
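
As a tiny illustration of that runway, using nothing but the standard library (the schema is invented):

    import sqlite3

    db = sqlite3.connect("app.db")  # one file is the entire backend
    db.executescript("""
        CREATE TABLE IF NOT EXISTS orders (
            id INTEGER PRIMARY KEY,
            customer TEXT NOT NULL,
            total_cents INTEGER NOT NULL,
            created_at TEXT DEFAULT CURRENT_TIMESTAMP
        );
        CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders(customer);
    """)
    db.execute("INSERT INTO orders (customer, total_cents) VALUES (?, ?)",
               ("alice", 4200))
    db.commit()

    # A declarative aggregate over an indexed table; at MVP scale this stays
    # comfortably in millisecond territory.
    top = db.execute("""
        SELECT customer, SUM(total_cents) AS revenue
        FROM orders GROUP BY customer
        ORDER BY revenue DESC LIMIT 10
    """).fetchall()
    print(top)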

The final thing a lot of people miss is old ideas. Put your entire application on a single server somewhere, with all of its dependencies living in the same box. Optimize the vertical before you rewrite for the horizontal, because 99% of the time you will go bankrupt before you get as big as Netflix, so it won't matter anyway. Plus, you would go bankrupt even faster chasing delusions of web-scale grandeur, when you could have had the MVP done 3 years ago with just a simple SQLite back-end and a t3a.micro. More likely than not, you would have discovered it was a bad idea to start with, and you could have moved on more quickly to the actual thing you should have been focusing on.


> Put your entire application on a single server somewhere, with all of its dependencies living in the same box. Optimize the vertical before you rewrite for the horizontal, because 99% of the time you will go bankrupt before you get as big as Netflix, so it won't matter anyway.

Worst case, you'll get bottlenecked by the DB and move it to a different (bigger) server.


I've come to appreciate so many old FOSS tools that are incredibly well built and thoughtfully designed.

Emacs and org-mode (and many things GNU) have started to make more and more sense to me in this day and age.


Amen. We need to walk back from convoluted and bloated software to an era when software was simple. https://collapseos.org


Such a sad perspective on innovation... new ideas don't always result in better technology, but they're certainly not aimless.

I suppose you're writing this on a 1982 Commodore 64...


A lot depends on what you value in "innovation". Better, more useful things? The industry has been falling short here. Perpetually reinventing solutions to the same problems, to secure a money flow? Yeah, we're good at it.


I agree with the concept that many industries still rely on old technology that, with a 10% improvement, could already surpass current "advanced" implementations. A good example is the Besides Tool Import Wizard, which can scrape PDF data with far more ease and simplicity than a manual implementation in, say, Python. And that tool has been around for 20+ years!!


Everything gets shittier and more complicated for no obvious benefit. I used to write code. Now I just search Stack Overflow and try to get pieces to work together.


> We don’t need more new tech. We need better applications of old tech.

“Better application” + “old tech” ≡ “new tech”.


Can you give some examples of where this is happening today?


The real problem seems to be experience transfer. People reinvent because they don't even know what already exists. They run into the same problems because they don't know they've already been solved. They come up with a flawed solution because they don't know someone else already had the same idea, took their hits, and learned from it.


Indeed. The amount of "new tech" around auth leaves me scratching my head. So many of the problems these startup types are trying to solve were solved by something like a well-maintained Active Directory 10+ years ago.

Group permissions, PAM, SSO, etc. It's like these developers have never been exposed to Active Directory in their lives...


The other day I finally got around to learning some COM/DCOM, and realized Windows has had microservices built in for 2+ decades, with better protocols and security...


Recently I ported a suite of 30-year-old C/C++ programs; each one was deployable and runnable on its own and communicated via sockets - microservices from the early 90's. Likewise, FaaS doesn't add much that couldn't be done via Apache CGI in the 90's.


Was the communication encrypted?


How long ago did you learn that? Because back in the day I didn't know even one developer who wouldn't curse when working with DCOM/CORBA implementations, because of the convoluted complexity, bugs, and problems with debugging.


Less than a year ago. I've been using it a little here and there for almost two decades - DirectX is accessed and operated through a COM API - but I never bothered to actually learn it in depth until I had to jump straight into debugging obscure errors in an application that used a DCOM-based service.

> Because back in the day I didn't know even one developer who wouldn't curse when working with DCOM/CORBA implementations, because of the convoluted complexity, bugs, and problems with debugging.

Yeah, I got that impression about COM through osmosis over the years. But come to think of it, isn't the exact same thing happening with the current trend of microservices, and super-convoluted stacks of Docker containers and Kubernetes? So perhaps the problem ultimately wasn't with COM per se :).


I'm not sure I understand your comment. Active Directory is Microsoft's proprietary tech that is a modification of the Kerberos standard, reimplemented most notably in Samba, but client solutions (open and proprietary) also exist.

Group permissions predate AD by decades, PAM by a few years. SSO in today's form is a web phenomenon, so a web-oriented solution makes more sense, and there is a lot of work being done in this direction.


>Active Directory is Microsoft's proprietary tech that is a modification of the Kerberos standard

And LDAP, and some DNS, and certs, etc. It's not just Kerberos; you're sounding ignorant here.

>Group permissions predate AD by decades

No one said otherwise, but it goes beyond basic POSIX groups with things like nested groups, delegated group permissions, etc.

>SSO in today's form is a web phenomenon

So is AD's...

Basically, you're proving my point.


Windows isn't used in academia.

So you get new grads that are re-inventing the wheel.


1. GNU Name System to replace the DNS in a backwards-compatible manner, with delegation to cryptographic public keys (instead of IP addresses) and with strong guarantees against known attacks against the DNS (DNSSEC doesn't solve everything). https://gnunet.org/en/gns.html

2. Semantic sysadmin to declare your intent with regard to your infrastructure, no matter how it is implemented (i.e. with a standard specification, interoperability/migration becomes possible) https://ttm.sh/dVy.md

3. GUI/WebUI CMS for contributing to a shared versioned repository. Sort of what Netlify is doing, but using a standard, so you can use the client of your choice: we tech folks can hold onto our CLI while our less techie friends enjoy a great UI/UX for publishing articles to collective websites.

4. Structured shell for the masses. PowerShell isn't the worst, but in my view nushell has a bright future ahead. For people who don't need portability, it may well entirely replace Bash, Python, and Perl for writing more maintainable and user-friendly shell scripts. https://nushell.sh/

5. A desktop environment toolkit that focuses on empowering people to build more opinionated desktops while mutualizing the maintenance burden of core components. Most desktop environments should share a common base/library (freedesktop?) where features/bugs can be dealt with once and for all, so we don't have to reinvent the wheel every single time. Last week I learnt some DE folks want to fork the whole of GTK because it's becoming too opinionated for their usage, and GNOME is nowadays really bloated and buggy thanks to JavaScript hell. Can't we have a user-friendly desktop with solid foundations and customizability?


PM lead for PowerShell here, thanks for the callout! I'll take "isn't the worst". ;)

I'd love to get more of your thoughts around how PowerShell might be more useful for the kinds of scenarios you're thinking about. We see a lot of folks writing portable CI/CD build/test/deploy scripts for cross-platform apps (or to support cross-platform development), but we're always looking to lower the barrier of entry to get into PowerShell, as it can be quite jarring to someone who's used Bash their whole life (myself included).

Structured shells have so much potential outside of that, though. I find myself using PowerShell to "explore" REST APIs, and then it's easy to translate that into something scripted and portable. But I'd love to get to a place one day where we could treat arbitrary datasets like that, sort of like a generalized SQL/JSON/whatever REPL.

Plus, PS enables me to Google regex less :D


Stop shipping your org chart!

Microsoft has always had this problem, but with PowerShell -- which is supposed to be this unified interface to all things Microsoft -- it is glaringly obvious that teams at Microsoft do not talk to each other.

To this day, the ActiveDirectory commands throw exceptions instead of returning Errors. Are you not allowed to talk to them?

The Exchange "Set" commands, if they fail to match the provided user name, helpfully overwrite the first 1,000 users instead, because... admins don't need weekends, am I right? Who doesn't enjoy a disaster recovery instead of going to the beach?

I'm what you'd categorise as a power user of PS 5.1, having written many PS1 modules and several C# modules for customers to use at scale. I've barely touched PowerShell Core because support for it within Microsoft is more miss than hit.

For example, .NET Core has caused serious issues. PowerShell needs dynamic DLL loading to work, but .NET Core hasn't prioritised that, because web apps don't need it. The runtime introduced EXE-level flags that should have been DLL-level, making certain categories of PowerShell modules impossible to develop. I gave up. I no longer develop for PowerShell at all. It's just too hard.

It's nice that Out-GridView and Show-Command are back, but they launch under the shell window, which makes them hard to find at the best of times and very irritating when the shell is embedded (E.g.: in VS Code)

The Azure cmdlets are generally a pain to work with, so I've switched to ARM templates for most things, because PowerShell resource-provisioning scripts cannot be re-run, unlike scripts based on the "az" command line or templates. Graph is a monstrosity, and most of my customers are still using MSOnline and are firmly tied to PS 5.1 for the foreseeable future.

Heaven help you if you need to manage a full suite of Hybrid Office 365 backoffice applications. The connection time alone is a solid 2 minutes. Commands fail regularly due to network or throttling reasons, and scripts in general aren't retry-able as mentioned above. This is a usability disaster.

Last, but not least: Who thought it was a good idea to strip the help content out and force users to jump through hoops to install it? There ought to be a guild of programmers so people like him can be summarily ejected from it!


Thanks for the thoughtful response! Many of these are totally legitimate: in particular, we're making steady progress to centralize module design, release, documentation, and modernization, or at least to bring many teams closer together. In many cases, we're at a transition point between moving from traditional PS remoting modules and filling out PS coverage for newer OAuth / REST API flows.

I don't know how recently you've tried PS7, but the back-compat (particularly on Windows) is much, much better[1]. And for those places where compatibility isn't there yet, if you're running on Windows, you can just `Import-Module -UseWindowsPowerShell FooModule` and it'll secretly load out-of-proc in Windows PS.

Unfortunately, the .NET problems are outside my area. I'm definitely not the expert, but I believe many of the decisions around the default assembly load context are integral to the refactoring of .NET Core/5+. We are looking into building a generalized assembly load context that allows for "module isolation", and I'd love to get a sense, in the issue tracking that[2], of whether or not fixing it would help solve some of the difficulties you're having in building modules.

For Azure, you should check out the PSArm[3] module that we just started shipping experimentally. It's basically a PS DSL around ARM templates; as someone who uses PS and writes the Azure JSON, you sound like the ideal target for it.

As for the help content, that's a very funny story for another time :D

[1]: https://aka.ms/psmodulecompat

[2]: https://github.com/PowerShell/PowerShell/issues/2083

[3]: https://github.com/powershell/psarm


It looks like the main problem people have with PowerShell is slow startup. You should probably work on making it snappy as the main priority.

As far as the module problems go, this is IMO not really fair - you can't expect every team to have the same standards for how modules should work, whether the team is from Microsoft or not. The best you could do is perhaps form a consulting / standards-enforcement team for Microsoft-grown modules.

I love PowerShell; it's really the poster child for how projects should be done on GH.

And I agree with you about REST APIs - I never use anything else to explore them (including Postman and friends); I am simply more productive in pwsh. We love it at our company so much that we always create a PowerShell REST API client for our services by hand (although some generators are available) in order to stay in the spirit of the language; all automated tests are done with it, using the awesome Pester 5.

Thanks for all the great work. PowerShell makes what I do a joy, to the point that I am always in it.


> we're always looking to lower the barrier of entry to get into PowerShell

I've used PowerShell regularly since way back when (it was still called Monad when I first tried it).

I'm extremely comfortable in the Windows environment, but even yesterday I found it easiest to shell out to cmd.exe to pipe the output of git fast-export, to stop PowerShell from messing with stdout (line feeds).

I really like the idea of a pipeline that can pass more than text streams but it absolutely has to be zero friction to pipe the output of jq, git (and awk, sed etc for oldies like me) without breaking things.
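(For anyone hitting the same thing: the trick is to let cmd.exe own the pipe and redirection end-to-end, so PowerShell never gets a chance to re-encode the byte stream. A sketch - the path is made up:)

  # PowerShell only launches cmd.exe; the raw bytes never cross a PS pipeline
  cmd /c "git fast-export --all > C:\temp\repo.dump"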


We've fixed a ton of these in PowerShell 7 (pwsh.exe, as opposed to Windows PowerShell / powershell.exe), particularly because we needed to support Linux and more of its semantics.

If you're seeing issues within PowerShell 7, please file issues against us at github.com/powershell/powershell


The inability to handle simple text is my #1 annoyance. For the rest of them, see jiggawatts's comment


In case you haven't seen it already, I found https://news.ycombinator.com/item?id=26779580 to be a pretty succinct list of the biggest stumbling points (latency, telemetry and documentation).

A couple of more specific points I'd like to add after experience writing non-trivial PS scripts:

- Tooling is still spotty. Last I used the VS Code extension, it was flaky and provided little in the way of formatting, autocomplete or linting. AIUI PowerShell scripts should be easier to statically analyze than an average bash script, so something as rigorous as ShellCheck would be nice to have too (see the sketch after this list).

- Docs around .NET interop still appear to be few and far between. I recall having to do quite a bit of guesswork around type conversions, calling conventions and the like.
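For what it's worth, PSScriptAnalyzer already covers some of the ShellCheck ground, though I can't speak to how rigorous it is by comparison. A minimal sketch (`./deploy.ps1` is a hypothetical script):

  Install-Module PSScriptAnalyzer -Scope CurrentUser
  # Each finding comes back as a structured object rather than a line of text
  Invoke-ScriptAnalyzer -Path ./deploy.ps1 | Select-Object RuleName, Severity, Line, Message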

It's nice to see the docs have had a major overhaul since I last dug into them though :)


Tooling?? I write powershell scripts in notepad.


Notepad? Amateurs and your IDEs. REAL developers work without even single-level undo or the conveniences of vi or emacs.

Real developers use `edlin`.


I thought we just used a magnetized needle with a steady hand?


The real pros use butterflies.


> we're always looking to lower the barrier of entry to get into PowerShell, as it can be quite jarring to someone who's used Bash their whole life (myself included).

apt search powershell returns no meaningful result on Debian unstable. I think that's a big barrier to entry, at least for me and people who deploy using docker images based on Debian and Ubuntu.


Good to know! I've generally understood that the bar for package inclusion for both Debian and Ubuntu is fairly high (where Debian wants you to push to them and Ubuntu will pull from you).

Our setup today is simply to add an apt repo[1] (of which there is just one domain for all Microsoft Linux packages), and then you can `apt install`.

We also ship pretty minimal Ubuntu and Debian (and Alpine and a bunch of other) container images here.[2]

Oh, and we ship on the Snap Store if you're using Snapcraft stuff.

[1]: http://aka.ms/install-pslinux

[2]: https://hub.docker.com/_/microsoft-powershell


Don't return everything; return what I specifically returned (yeah, I know about objects - talking about everywhere else). I know it will never happen, but one can dream. Pain points aside, you and your team are doing an excellent job. Thank you

Edit: unless you are also responsible for DSC, in which case I take it back. It's terrible.


Unfortunately, we can't ever change that one, or the whole world of existing stuff will break.

It's intended as a shell semantic where anything bare on the command line just gets run, no matter your scope.

However, when we introduced classes, we thought they warranted a more "dev-oriented" semantic, so we changed return there.

This will only return 'this will return':

  class foo {
    [string] ReturnTest() {
      'this will not return'      # discarded: class methods don't emit pipeline output
      return 'this will return'   # only an explicit return leaves the method
    }
  }

  ([foo]::new()).ReturnTest()
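For contrast, the shell semantic in an ordinary function - a minimal sketch (the function name is made up):

  function Test-Output {
    'this IS emitted'          # bare output goes straight to the pipeline
    return 'and so is this'    # return just emits its argument and exits
  }

  Test-Output                  # outputs both strings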


Please bump the priority of https://github.com/PowerShell/PowerShell/issues/3415 - it makes scripts ugly and hard to convert. It's also a source of bugs when users add lines to scripts.


The behaviour with UTF-8 is still so strange to me. I get unpredictable behaviour when piping commands because UTF-8 still isn't the default for everything.
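(The workaround I've seen most often - assuming Windows PowerShell 5.1, where the defaults aren't UTF-8 - is to force both directions of native-command piping to UTF-8 at the top of the session:)

  # $OutputEncoding controls what PS sends TO native programs;
  # [Console]::OutputEncoding controls how PS decodes their output
  $OutputEncoding = [Console]::OutputEncoding = [System.Text.UTF8Encoding]::new()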


> I'll take "isn't the worst". ;)

You should! It was definitely a compliment.

> I'd love to get more of your thoughts

On a technical level, i would say PowerShell is a breakthrough because it democratized the concept of a structured-data REPL as a shell. This pattern was well-known to GNU (and other LISP) hackers but not very popular otherwise, so thank you very much for that. Despite that, having telemetry in a shell is a serious problem in my view. That, and the other technical criticisms others have mentioned (see previous HN discussions about PowerShell), are why i don't use PowerShell more.

On a more meta level, i'd say the biggest missing feature of the software is self-organization (or democracy, if you'd rather call it that). The idea is great but the realization is far from perfect. Like most products pushed by a company, PowerShell is being developed by a team who have their own agenda/way and don't take the time/energy to gather community feedback on language design. I believe no single group of humans can figure out the best solutions for everyone else, and that's why community involvement/criticism is important. For this reason, despite being much more modest in its current implementation, i believe NuShell, being the child of many minds, has more potential to evolve into a more consistent and user-friendly design in the future.

Beyond that, i have a strong political criticism of Microsoft as part of the military-industrial complex, as a long-standing enemy of free software (still no GitHub or Windows XP source code in sight despite all the ongoing openwashing) and of user-controlled hardware (remember when MS tried to push for SecureBoot to not be removable in BIOS settings?), as an unfair commercial actor abusing its monopoly (the forced sale of Windows with computers is NOT ok, and is by law illegal in many countries), and more generally as one among many corporations in this capitalist nightmare profiting from the misery of others and contributing its fair share to the destruction of our environment.

This is not a personal criticism (i don't even know you yet! :)) so please don't take it personally. We all make questionable ethical choices at some point in life to make a living (myself included), and i'm no judge of any kind (i'll let you be your own judge if you let me be mine). In my personal reflection about my own life, I found some really good points in this talk by Nabil Hassein called "Computing, Climate Change, and All our Relationships", about the human/political consequences of our trade as global-north technologists. I strongly recommend anyone to watch it: https://nabilhassein.github.io/blog/computing-climate-change...

> how PowerShell might be more useful for the kinds of scenarios you're thinking about

I don't think i've seen any form of doctests in PowerShell. I think that would be a great addition for many people. A test suite in separate files is fine when you're cloning a repo, but scripts are great precisely because they're single files that can be passed around as needed.
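as a toy illustration, a doctest runner doesn't need to be much more than this sketch (the '# > expr => expected' comment convention is entirely made up):

  Get-Content ./script.ps1 | ForEach-Object {
    if ($_ -match '^\s*#\s*>\s*(.+?)\s*=>\s*(.+)$') {
      $expr, $expected = $Matches[1], $Matches[2]
      $actual = [string](Invoke-Expression $expr)   # run the example inline
      if ($actual -eq $expected) { "ok:   $expr" }
      else { "FAIL: $expr gave '$actual', expected '$expected'" }
    }
  }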

> Structured shells have so much potential outside of that, though.

Indeed! If they're portable enough, have some notion of permissions/capabilities, and have a good type system, they'd make good candidates as scripting languages to embed in other applications: such applications usually expose structured data and some form of DSL, so having a whole shell ecosystem to develop/debug scripts would be amazing.

I sometimes wonder what a modern, lightweight and consistent Excel/PowerShell frankensteinish child would look like. Both tools are excellent for less experienced users and very functional from a language perspective. From a spreadsheet perspective, a structured shell would for example enable better integration with other data sources (at a cost of security/reproducibility but the tradeoff is worthwhile in many cases i think). From a structured shell perspective, having spreadsheet features to lay data around (for later reuse, instead of linear command history) and graph it easily would be priceless.

> I'd love to get to a place one day where we could treat arbitrary datasets like that, sort of like a generalized SQL/JSON/whatever REPL.

Well that's precisely what nushell's "from" command is doing, supporting CSV, JSON, YAML, and many more! https://www.nushell.sh/book/command_reference.html no SQL there yet ;-)
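PowerShell can already approximate that by normalizing everything into objects first - a sketch (the filenames are made up):

  # Two wire formats, one object model once parsed
  $fromCsv  = Import-Csv ./users.csv
  $fromJson = Get-Content ./users.json -Raw | ConvertFrom-Json
  @($fromCsv) + @($fromJson) | Where-Object { [int]$_.age -gt 30 } | Sort-Object name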

PS: I wish you the best and hope you can find some time to reflect on your role/status in this world. And i hope i don't sound too condescending, because if you'd asked me yesterday what i would tell a microsoft higher-up given the occasion, it would have been full of expletives :o... so here's me trying to be friendly and constructive as much as possible, hoping we can build a better future for the next generation. Long live the Commune (150th birthday this year)!


I read all of it, as well as some more of your writings that I found, and I very much appreciate your thoughtfulness. I don't agree with everything you've said here, but you raise some very good points. Thanks, friend. :)


In case the nushell link doesn't work for anyone else, consider prefixing it with www: https://www.nushell.sh/

Admittedly, i have no idea why we even need to do that nowadays, but that seemed to work.


It's because the bare apex domain isn't set up to accept requests, but the `www` subdomain is (it's DNSed to a different IP):

  $ curl -v --head https://nushell.sh/
  *   Trying 162.255.119.254...
  * TCP_NODELAY set
  * Connection failed
  * connect to 162.255.119.254 port 443 failed: Connection refused

  $ curl -v --head https://www.nushell.sh/
  *   Trying 185.199.108.153...
  * TCP_NODELAY set
  * Connected to www.nushell.sh (185.199.108.153) port 443 (#0)

Most hosts will alias or redirect away the www subdomain, but that's just a convenience. Of course, technically foo.com and www.foo.com can have different DNS entries.


My first IT gig, 25 (sigh) years ago, I tried to set up <ourdomain> as an alias for www.<ourdomain>. Seemed to work ok, but somehow I noticed that I had broken email delivery through our firewall, so I reverted the change. Couldn't figure out exactly what was going on, and set it aside.

A few months after I left, I sent an email to a friend who still worked there, and it bounced exactly the same way. Called up my friend in a hurry, and sure enough, they had just finished deploying the same change.


Why even have www.<ourdomain>.<tld> in the first place then, if <ourdomain>.<tld> is entirely sufficient on its own?

It does appear that it's mostly done for historical reasons, and sometimes you need CNAME records[1], but overall it feels like it probably introduces unnecessary complexity, because the www. prefix doesn't really seem to be all that useful apart from the mentioned situation with CNAMEs.

That's kind of why i asked the question above - maybe someone can comment on additional use cases or reasons for sticking to the www convention, which aren't covered in the linked page.

When i last asked the question of a company whose website was only available with www but not without, i got an unsatisfactory and non-specific answer where spam was mentioned. I'm not sure whether there's any truth to that.

[1] https://en.wikipedia.org/wiki/World_Wide_Web#WWW_prefix


It depends on the setup. Some cloud load balancers like AWS ELB require a CNAME, and DNS (RFC 1912) doesn't allow other records at a name that has a CNAME.

So you can't put a CNAME on the apex, which probably also has MX records. I think in some cases, like Exchange, if it sees a CNAME it doesn't bother looking at the MX.

Back in the day, "CNAME flattening" or aliases (RFC draft draft-ietf-dnsop-aname) weren't a common thing, so the only real way was to redirect the domain apex to www and then use a CNAME on the www. You'd probably need a separate service/servers to handle that redirect (at least DNS round robin would work in this case). So yes, extra complexity in that case, due to the requirements. Or, give them DNS authority (e.g., AWS Route 53).

Then there's the whole era of TV/radio commercials telling people "www dot <name>", so a lot of people type it anyway. You can redirect www to apex, which some sites do for a "clean brand", but now browsers are dropping the www in the UI anyway.

I've also run into plenty of situations where www worked but apex didn't. Relatedly, it's a little surprising that browsers don't default to trying www first when you type the apex. And recently we're getting SVCB and HTTPS DNS RRs along with A/AAAA (and maybe ANAME). Indeed, lots of complexity.
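You can see the apex/www split directly in the records. A sketch with PowerShell's Resolve-DnsName (same idea as dig on other platforms):

  Resolve-DnsName nushell.sh -Type A    # apex: plain A record(s); a CNAME isn't allowed here
  Resolve-DnsName www.nushell.sh        # www: free to be a CNAME to a CDN or static host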


While there are plenty of domains which only exist to serve a website, quite a few others have more than that.

With a website, if you want to push it onto a Content Delivery Network (CDN) it is easy to change www.example.com to point (via a CNAME record) to the right place.

If, however, you want to do that with just example.com and also want to run things like mail, you cannot use a CNAME record.

The why is long and boring, but that is the situation right now.


Is it long and boring? I thought it was just that if you declare a CNAME, you can't declare any other record types. Full stop.


> 3. GUI/WebUI CMS for contributing to a shared versioned repository.

Did you have some specific tool in mind? Because I completely agree that this is a great way of working with content. We have been doing that for a couple of months with our own tool. It uses Git and stores content in a pretty-printed JSON file. Techies can update that directly and push it manually. Content editors can use our tool to edit and commit to Git with a simple Web UI. Would that go in the direction you were thinking of?


The closest i can think of is NetlifyCMS, but it's terrible because it's a bunch of JavaScript both server side and client side, and is really not intended to be selfhosted despite being open-source.

If NetlifyCMS were a robust library for abstracting over the versioning system and static site generator, for building WebUI/TUI/GUI clients with, that would fit what i have in mind. I don't know of any such program yet; please let me know if you find something :)


OK, I see. The tool we are working on is similar but also not quite what you are looking for. In case you want to have a look, it is available on <my-name> .io


NetlifyCMS comes to mind!


I am a lifelong sysadmin, and have thought about #2 frequently. I am thinking seriously about making it a research project. Who is behind this document you linked to? Is it tildeverse? Parts of that document are pretty up to date, so it does not seem very old. I am not really familiar with the null pointer site, or even how to find things on it (as in, every now and then I am surprised). I am surprised I have not seen this before.


Hello, i'm the author of this document although it's just a draft for the moment (lots of TODOs in there) which is why it's not on my blog yet.

nullpointer is just a file upload system, of which ttm.sh is an instance residing in the tildeverse. I sometimes use it to publish drafts to collect thoughts/feedback on ideas i have. I'm also part of the tildeverse. I reside on thunix.net and do sysadmin for fr.tild3.org. I'm often around on #thunix, #selfhosting (etc) on tilde.chat in case you're also around :)

> I am a lifelong sysadmin, and have thought about #2 frequently. I am thinking seriously about making it a research project.

I think a lot of us have been obsessed with this idea for a while, but nobody to my knowledge has done it yet. If you feel like exploring this idea, amazing! It is in my view a complex-yet-solvable problem that many projects have failed to deal with because they've been too focused on narrow use-cases and not on the broader conception of a standardized specification ecosystem for selfhosting. If you feel like exploring this idea collectively (for example by cooperating on a standard for which you would contribute a specific implementation), count me in. I think a lot of brilliant people will be glad to board the ship once it's sailing!

If you'd like to see where this idea has taken me so far, take a look at the joinjabber.org project. The entire infrastructure is described in a single, (hopefully) human-meaningful configuration file, with Ansible roles implementing it: https://codeberg.org/joinjabber/infra

Wish you the best; please keep me updated if you have more thoughts on this topic or would like to actively start such a project


Declarative is awesome for bootstrapping an infrastructure from nothing, but once it’s running and people are depending on it, it actually matters a lot which operations are done to it, in what order, and with what dispersion in time. Doubly so if it’s stateful. In the real world we see either hints in the “declarative” config to tweak execution order, or procedural workflows expressed ad hoc as a sequence of goal states. I think the future of sysadmin will be more explicitly procedural. Services managing services through the normal building blocks of services: endpoints, workers, databases, etc.


> I think the future of sysadmin will be more explicitly procedural.

I think that's true for the lower levels of abstraction because sysadmin is a finite list of steps. Being able to program your infrastructure in a type-safe, fail-safe way is very important. In the grandparent comment, i was arguing for building higher-level abstractions (semantic sysadmin) on top of that to make it easier to understand how your infrastructure is configured, and make it reproducible/forkable.

Think of it like there are two kinds of state stored on your system: useful state (eg user data) and infrastructure byproducts (eg TLS certificates). The former must be included in backups; the latter can be regenerated on the go from server config. The kind of declarativeness i'm interested in is the kind that enables any member/customer to fork the entire infrastructure by editing a single config file if they're not happy with the provided services, and from there they can import their own useful state (eg a website/mailbox backup). Hypothetically, the scenario would be like: "is Google Reader closing down? Let's just fork their infra, edit the config file, and import our feeds - all we have to do is use a different URL to access the very same services".


> 4. Structured shell for the masses.

I wonder if shell stuff would work better in a notebook like environment.

Edit: At least one exists: https://shellnotebook.com/


There was a time I did all my shell commands just directly in Perl. If you have it all in your head you can do crazy stuff pretty quickly. Especially by having libraries that you can import and invoke in one line.


That's really nice. I never really got into perl, but i was moved by some sysadmin friends back in the day playing crazy tricks right in front of my eyes... doing the same in bash would have taken me minutes or even hours, so i always felt like this was some kind of great wizardry.

But perl is not really the most user-friendly language to learn in my opinion. I think raku is much better in this regard, but unfortunately is not deployed widely (yet?).


> GNU Name System

Really? Can you elaborate a bit on the why? As far as I can tell, GNS has been around as a proposal for years and has gained no traction.


> Really? Can you elaborate a bit on the why?

It's the only serious proposal i've seen to replace the DNS protocol. It's backwards-compatible with existing zones (same RR types plus a few new ones like PKEY), with existing governance (the protocol has a notion of global root, which is very likely to be ICANN), and replacing traditional DNS recursive resolution with GNS recursive resolution is not such a challenge.. and in any case most devices don't even do recursive resolution but act as a stub for a recursive resolver further on the network (a somewhat-worrying trend).

So, the fact that GNS can live alongside the DNS for as long as people need is for me a very appealing argument. Also, GNS uses proven p2p schemes (DHT) in a clever way instead of reinventing the wheel or yet another crypto-ponzi-scheme. "crypto" in GNS equals cryptographic properties to ensure security/privacy of the system, not tech-startup cryptocoin bullshit, and i truly appreciate that.

Then, looking at the security properties of GNS, it looks great on paper. I'm no cryptographer so i can't vet the design of it, but ensuring query privacy and preventing zone enumeration at the same time sounds like a useful property to build a global decentralized database, which is what the DNS protocol is about despite having many problems.

Building on such properties, re:claimID is a clever proposal for decentralized identity. I'm very much not a fan of digital identity systems, but that one resembles something that would be respectful of its users as persons. There's FOSDEM/CCC talks about that if you'd like to know more.

> GNS has been around as a proposal for years and has gained no traction

I wouldn't exactly say that. I'm not involved in the GNUNet project so i don't know the specifics, but GNS was at least presented at an ICANN panel as an "emerging identifier", that is one of the possible future replacements for DNS. I'd consider being recognized by ICANN as a potential DNS replacement some "traction" (more than i expected). If you can find the video from that ICANN session where both GNS and Handshake are introduced, that'll probably do a better job than me explaining why GNS is very fitted to replace DNS. URL used to be https://icann.zoom.us/rec/play/tJErIuCs-mg3E4GXtgSDB_UqW464f... but now that's not loading for me anymore.


Thanks for the very comprehensive reply. It does sound like GNS is gaining more traction than I thought, and I hadn't heard of re:claimID, which sounds pretty interesting in its own right.


Serious question: Who is the target audience of Nushell? I know for a fact that I couldn't get someone like my parents or my girlfriend to use something like that day to day, and if you need a shell to work more efficiently, why not just learn bash? The learning curve for the two seems pretty much exactly the same, and bash has the benefit of being installed on almost every Linux/Unix system, even Macs (I think they actually use zsh now, but still).


> I know for a fact that I couldn't get someone (...) to use something like that day to day

I know for a fact the opposite is true for me. A simple shell syntax with an amazing documentation is all it takes for people to write useful scripts.

I'm confident i can teach basic programming to a total newbie using a structured shell in a few minutes. Explaining quirks of conditionals and loops in usual shells is trickier: should i use "-eq" or "=" or "=="? why am i iterating over a non-matching globset? etc.

> why not just learn bash?

I have a love-hate relationship with bash. It's good, but full of inconsistencies, and writing reliable, fail-safe scripts is really hard. I much prefer a language in which i'm less productive, but doesn't take me hours of debugging every time i'm reaching an edge case.

Also, bash has very little embedded tooling, compared to nushell. In many cases, you have to learn more tools (languages) like awk, jq. In nushell, such features are built-in.

> being installed on almost every Linux/Unix system

Well, bash is definitely very portable. But at this game, nothing can beat a standard POSIX /bin/sh. Who knows? It may outlive us all :)


> Who is the target audience of Nushell?

People who a) are trying to escape from the insanity of traditional shells and use something that works with structured data, and b) want something other than PowerShell.


Allergy immunotherapy. I started a company on the backs of this treatment.

The way we treat allergies today, with Zyrtec and Claritin, is medieval medicine. It doesn't solve the underlying problem; it just tries to cover it up.

Allergy immunotherapy is the future. Most people don't realize that allergies are now a curable disease. In the future, taking Claritin for allergies is going to seem like taking Tylenol for an ear infection. Why would you treat the symptoms when you could just cure the disease?

I started Wyndly (https://www.wyndly.com) to bring immunotherapy for pollen, pets, and dust across the USA. But we'll expand into food allergies soon, too.


That's really cool that this is a thing. I remember that my friend tried to do this to himself by eating fish he was allergic to and then epi-penning himself until his body started to ignore it (took years!). Glad this appears to be a way safer method


Surprised to hear this is uncommon, I received allergy shots >20 years ago and it was the only allergy treatment (after many attempted medications) that had a sizable impact.

Besides the delivery vehicle, what are the differences between allergy shots and these droplets?


Well, the delivery vehicle of the sublingual mucosal membrane means there's a much better safety profile! But otherwise, the mechanism of changing the immune response is the same.


This looks interesting; it took some digging to get to what the actual treatment is: it gets your body used to the allergen to build up tolerance. What happens when you have multiple pollen allergies - how does it track all of them? Also, it seems like many pollens will be region-specific: different trees, etc.


We use an allergy test to diagnose your allergy profile, then we treat you for exactly what you're allergic to!


I've had allergy immunotherapy shots going on 30 years now. I'd say there has been an improvement, but... is the technology going to get significantly better anytime soon? I'm skeptical - but happy to be convinced otherwise.


Tons of data that the drops are an improvement over shots, if only for the patient experience!

“Patients who have done [allergy drops] and finished a course are now free of taking any allergy medications and they don’t have symptoms anymore.” - Dr. Sandra Lin, video interview, "Sublingual Immunotherapy (SLIT) for Allergy Treatment: Johns Hopkins | Q&A"; author of "Efficacy and Safety of Subcutaneous and Sublingual Immunotherapy for Allergic Rhinoconjunctivitis and Asthma" (2017)

Video: https://www.youtube.com/watch?v=dpWomI4iPLY

Paper: https://pubmed.ncbi.nlm.nih.gov/28964530/

Aqueous allergy drops are both safe and effective for environmental allergies (aka allergic rhinitis):

- This has been proven through 30 years of data published by leading allergists in key journals, and confirmed in a 2011 independent Cochrane review. Learn more: https://pubmed.ncbi.nlm.nih.gov/21154351/

Aqueous allergy drops have a better safety profile than allergy shots:

- Unlike shots, there has never been a documented fatality from allergy drops in 30 years of use.

- The risk of a systemic reaction is thought to be 1 per 100 million doses, or 1 per 526,000 treatment years. Learn more: https://pubmed.ncbi.nlm.nih.gov/22150126/

Allergy drops are equally as effective as allergy shots:

- They have widespread use in Europe (up to 80% of the immunotherapy market in some countries).

- Comparison studies by leading allergists have shown both allergy shots and allergy drops to be effective, with no clear superiority of one mode over the other. Learn more: https://pubmed.ncbi.nlm.nih.gov/23557834/ and https://pubmed.ncbi.nlm.nih.gov/26853126/


Wow, I never knew drops were even an option!


Speaking as a very rational multiple-allergy sufferer (before 30? None. Then, like a dozen things!!) who has gotten allergy shots for years now...

Have any data to back up this substitute?

Also, Zyrtec and Claritin did nothing for me, I’m an Allegra guy


Tons of data!

“Patients who have done [allergy drops] and finished a course are now free of taking any allergy medications and they don’t have symptoms anymore.” - Dr. Sandra Lin, video interview, "Sublingual Immunotherapy (SLIT) for Allergy Treatment: Johns Hopkins | Q&A"; author of "Efficacy and Safety of Subcutaneous and Sublingual Immunotherapy for Allergic Rhinoconjunctivitis and Asthma" (2017)

Video: https://www.youtube.com/watch?v=dpWomI4iPLY

Paper: https://pubmed.ncbi.nlm.nih.gov/28964530/

Aqueous allergy drops are both safe and effective for environmental allergies (aka allergic rhinitis):

- This has been proven through 30 years of data published by leading allergists in key journals, and confirmed in a 2011 independent Cochrane review. Learn more: https://pubmed.ncbi.nlm.nih.gov/21154351/

Aqueous allergy drops have a better safety profile than allergy shots:

- Unlike shots, there has never been a documented fatality from allergy drops in 30 years of use.

- The risk of a systemic reaction is thought to be 1 per 100 million doses, or 1 per 526,000 treatment years. Learn more: https://pubmed.ncbi.nlm.nih.gov/22150126/

Allergy drops are equally as effective as allergy shots:

- They have widespread use in Europe (up to 80% of the immunotherapy market in some countries).

- Comparison studies by leading allergists have shown both allergy shots and allergy drops to be effective, with no clear superiority of one mode over the other. Learn more: https://pubmed.ncbi.nlm.nih.gov/23557834/ and https://pubmed.ncbi.nlm.nih.gov/26853126/


Nice! I'm going to review this and see what my allergist says


Many years ago, I had allergy problems, and normal allergy shots (increasing-dose injections of small quantities of allergens) would not work. They gave me the scratch test and I was allergic to many things; then later I got the prick test and I was allergic to the rest.

What my doctor told me was: you're going to get increasing doses a few times a week, it will take a lot of time and be hard, and at some point you'll bump into an adverse reaction while you're in the waiting room after the shot.

The medical system was kind of broken wrt my plight.

After consulting a number of folks, I finally found EPD and went to treatment.

https://en.wikipedia.org/wiki/Enzyme_potentiated_desensitiza...

It was really helping, then they stopped offering it in my area. I was pretty bummed they did away with it, because it helped me without side effects. My symptoms decreased in severity and eventually I felt fine. Apparently it was from the UK and worked well there.


I understand that each person is different, but how long does the therapy go for until someone becomes "immune"? I see other comments mentioning 20-30 years, which sounds a lot more expensive than having to take the medieval equivalent for just as long.

I hope I'm wrong!


Hey if someone said by the time I’m 40 I wouldn’t have to live in fear of peanuts anymore, I’d pay whatever they asked.


Good feedback! We suggest patients try out the treatment for 6 months to make sure their body accepts the immunotherapy, and then another 4.5 years to lock in lifelong relief.


Great to know, thank you!


I've received allergy shots in the past. Seems to be a relatively common treatment in the south. It's just inconvenient due to office visits.

Looks like the innovation here is moving the serum from intradermal injections to a liquid, oral treatment?


That's a big leap for me. I spent 1 hour twice a week commuting to get allergy shots.


Definitely interested...but I couldn't find from your site exactly for how long one is to use the drops. Is this 99/mo forever? For 6 months? The answer to that question pretty much determines my interest.


Good feedback! We suggest patients try out the treatment for 6 months to make sure their body accepts the immunotherapy, and then another 4.5 years to lock in lifelong relief.


Immunotherapy of all kinds is poised to become the revolutionary medicine of our generation. I'm excited to see what Wyndly does in the future!


I ran a quick search, and apparently it works for stuff like bees and mosquitoes too. Interesting. If I had a couple thousand spare - which seems to be the going rate in AU - it could be worth trying for my wife, who gets giant mosquito bumps that itch for days (they love her blood).


This is super interesting, thank you for sharing! I am curious - how do you handle symptom management? Are there antihistamines in the drops, or perhaps some other form of system/method for controlling the response?


Our treatment can be taken alongside traditional allergy care like antihistamines and corticosteroids to manage allergy symptoms until immunotolerance is established.


Here in the UK it's incredible how little immunotherapy is prescribed despite the fact that it's been available for ages and known to work. Anecdotally I have heard one problem is that allergy medicine is just not something that GPs are kept well informed about. My GP told me that it was 'unwise to mess with the immune system like this'. Uhh...vaccines?

(source: I have had immunotherapy)


Won't ever work for most severe food allergies (e.g. peanuts) right? From what I understand, immunotherapy has not been found to be safe/effective for these allergies.


Interested to know as well. Does comment OP have an answer?


How is this different than Staloral? (besides subscription vs regular purchase)


Wikidata, SPARQL, and RDF in general. And I guess semi-relatedly things like Prolog? I recently decided to fiddle with Wikidata, and it is fascinating to be able to query for any part of a statement, not just the value!

In SPARQL you write statements in the form

    <thing> <relation> <thing>

But the cool part is that any of those three parts can be extracted, so you can ask things like "what are the cities in <country>", or "what country harbors the city <city>", but most importantly, "how does <city> relate to <country>".

For example, if you wanted to find out all the historical monuments in state capitals of a country (using my home country as an example, also pseudocode for your time's sake):

    fetch ?monumentName, ?cityName given
    ?monument "is called" ?monumentName.
    ?monument "is located within" ?city.
    ?city "is capital of" ?state.
    ?city "is called" ?cityName.
    ?city "is located within" "Brazil".


I've built a few solutions with graph databases (Cypher is my query language preference, oddly) and use an RDF/OWL ontology for some personal documentation projects, and what I think holds it back in production is that it is too powerful.

"Too powerful" doesn't seem like a thing until you realize it undermines DBAs' skill investments, means business-level people have to learn something and solve their own problems instead of managing them, disrupts the analyst-level conversations that exist in PowerBI and Excel, seems like an extravagant performance hit with an unclear value prop to devops people, and gives unmanageable godlike powers to the person who operates it. (This unmanageability aspect might be what holds graph products back too.)

If you don't believe me: the companies that use them also get a rep for having uncanny powers because of their graphs - FB, Twitter, Palantir, Uber, etc.

Using ML to parse and normalize data to fit categories in RDF graphs is singularity-level tech, imo, and where that exists today, I'd bet it's mostly secret.


It's a very fascinating field with a lot of potential, but we're still far from that singularity, unfortunately - we're still in its early dark ages.

When it comes to knowledge representation and reasoning, there's too much emphasis on the representation part and less on the reasoning part, but even the representation part is not a solved problem.


I'm super excited about RDF as well. It's going to be the next big thing as we finally start connecting our machines and data sources together in semantically meaningful ways. I added relationships to CE because of this: https://concise-encoding.org/#relationships

    c1
    {
        // Marked base resource identifiers used for concatenation.
        "resources" = [
            &people:@"https://springfield.gov/people#"
            &mp:@"https://mypredicates.org/"
            &mo:@"https://myobjects.org/"
        ]

        // Map-encoded relationships (the map is the subject)
        $people:"homer_simpson" = {

            /* $mp refers to @"https://mypredicates.org/"
             * $mp:"wife" concatenates to @"https://mypredicates.org/wife"
             */
            $mp:"wife" = $people:"marge_simpson"

            // Multiple relationship objects
            $mp:"regrets" = [
                $firing
                $forgotten_birthday
            ]
        }

        "relationship statements" = [
            &marge_birthday:($people:"marge_simpson" $mp:"birthday" 1956-10-01)
            &forgotten_birthday:($people:"homer_simpson" $mp:"forgot" $marge_birthday)
            &firing:($people:"montgomery_burns" $mp:"fired" $people:"homer_simpson")

            // Multiple relationship subjects
            ([$firing $forgotten_birthday] $mp:"contribute" $mo:"marital_strife")
        ]
    }

RDF is gonna be so awesome when it finally hits the mainstream!


Time travelers unite! Is JSON-LD the modern incarnation of semantic web? Which organizations are contributing to, or using, concise-encoding?


Did you forget the /s? This sounds like you just read the first chapter of a book on the semantic web (SW) from the 90s and how great ontologies are and how they'll CHANGE EVERYTHING. The SW folks have been hyping this stuff for years. It sounds great until you begin to see the practical realities around it and it starts to look a little less, shall we say, "magical" and more like a huge pain in the ass.


Wow... haven't seen such a mean spirited and openly hostile reply in some time. You don't seem to engage much, so please take some time to familiarize yourself with the guidelines: https://news.ycombinator.com/newsguidelines.html


FWIW I didn't read it as mean and hostile. It's just the reality - the semantic web has been here for 20+ years with very little of it penetrating the mainstream, and huge amounts of effort have been spent on the technology, filling the databases, and creating tooling. What do you think is needed so that people actually use it and it solves some problems for them?


Yes, much like AI was here for 40+ years with very little penetration into the mainstream.

Technologies like this remain stagnant until the landscape is ready for them. In this case, the advent of AI and big data is what will make relationship data important. "Semantic web" as in human edited XML/HTML with semantic data embedded was never going to happen and was silly from the get-go. But RDF-style semantic data transferred between machines that infer meaning is an absolute certainty.

It's one of those things that's forever a joke until suddenly it's not. There's a fortune of oil out there, but we have to get past the steam age first (and that'll come sooner than you think).


> Technologies like this remain stagnant until the landscape is ready for them.

So true it should have a name, like kstenerud's Law or something.


I agree with you.

Not that I have looked recently, but what I see missing is a 'Northwind' or 'Contoso' database, as well as some MOOC with a gentle ramping-up of skills.

If you know of a good MOOC on RDF I would love to know about it.


I am a huge fan of triples to represent things.

I've even written an engine that takes triples and renders web apps.

This is effectively a todo MVC as triples:

  var template = {
    "predicates": [
      "NewTodo leftOf insertButton",
      "Todos below insertButton",
      "Todos backedBy todos",
      "Todos mappedTo todos",
      "Todos key .description",
      "Todos editable $item.description",
      "insertButton on:click insert-new-item",
      "insert-new-item 0.pushes {\"description\": \"$item.NewTodo.description\"}",
      "insert-new-item 0.pushTo $item.todos",
      "NewTodo backedBy NewTodo",
      "NewTodo mappedTo editBox",
      "NewTodo editable $item.description",
      "NewTodo key .description"
    ],
    "widgets": {
      "todos": {
        "predicates": [
          "label hasContent .description"
        ]
      },
      "editBox": {
        "predicates": [
          "NewItemField hasContent .description"
        ]
      }
    },
    "data": {
      "NewTodo": {
        "description": "Hello world"
      },
      "todos": [
        { "description": "todo one" },
        { "description": "todo two" },
        { "description": "todo three" }
      ]
    }
  }

See https://elaeis.cloud-angle.com/?p=71 and https://github.com/samsquire/additive-guis


> Wikidata, SPARQL, and RDF in general. And I guess semi-relatedly things like Prolog?

I couldn't agree more. I know a lot of this kind of "semantic web" stuff has some pretty vocal detractors and that adoption seems limited, but I still think there is a ton of "meat on this bone". There's just too much potential awesomeness here for this stuff to not be used. I think this is an example of where incremental refinement is the name of the game. As computers get faster, as we get more data, as algorithms improve, etc. we'll get closer and closer to the tipping point where these technologies really start to reveal their potential.


A small addendum because I forgot to actually show something cool:

Another example, demonstrating querying for the relation part, would be to find Leonardo DaVinci's family members (again in pseudocode so you don't need to dwell on the syntax):

    fetch ?kinName, ?linkName given
    ?link "is called" ?linkName.
    ?kin "is called" ?kinName.
    "Leonardo DaVinci" ?link ?kin.
    ?link "is" "familial".

The third statement ("Leonardo DaVinci" ?link ?kin) was the "mindblow" moment for me: you can ask how two objects are related without knowing either one! (though I did know one of them in this example, Leonardo)


I've been considering writing a graph database on top of SPARQL and RDF. Beyond the official docs (which are pretty good), can you recommend any other resources for easily getting the hang of SPARQL?


I worked at a semweb company ~10 years ago. As a general starting point, https://jena.apache.org/ is a useful library. I distinctly remember OWLIM (https://www.w3.org/2001/sw/wiki/Owlim) as a great triple store.


I cannot! I have just recently learned about it and am doing so for leisure, not for skill. I'm sorry I couldn't be of much use.

In fact I would love to know if someone else does have any other resources too :)


> can you recommend any other resources for easily getting the hang of SPARQL?

I'll second the recommendation of Jena (and associated sub-project Fuseki). If you know Java (or any JVM language) you can use the Jena API directly for manipulating the triplestore, and submitting SPARQL queries. If you don't want to do that, Fuseki exposes an HTTP based API that you can interact with from any environment you prefer.


I studied SPARQL in university, about 10 years ago. I checked on it recently and found out nothing has changed. Needless to say, I am not hopeful about its future.


Datomic is the only database I know of that does this.


Similar-but-different recent threads:

Ask HN: What novel tools are you using to write web sites/apps? - https://news.ycombinator.com/item?id=26693959 - April 2021 (320 comments)

Ask HN: What startup/technology is on your 'to watch' list? - https://news.ycombinator.com/item?id=25540583 - Dec 2020 (248 comments)


Heat pumps and better insulation. It's magic how heat pumps pull heat out of thin air. I honestly think part of the reason they are not adopted more is that people don't understand them and so don't trust them. Insulation seems pretty boring until you start to realize how much energy modern insulation can save.


They are good.

But not in very cold environments. I have one and when there is more than two degrees of frost it struggles.

So for a lot of continental areas they are almost useless since they do not function when you really need them.

For temperate climates and coastal regions they are wonderful.


I think you’ve stumbled on the REAL reason there’s hesitation with heat pumps.

There's a massive difference in the capability, efficiency and usability of heat pumps. The crappy ones don't even work below freezing. The good ones can operate efficiently even down to -20F, even air source. And ground source ones don't have a hard limit at all (although they show a similarly wide range in capability).

Low effort cheap heat pumps are gonna do more harm than good in that they’ll convince people that heat pumps suck.

It’s like the difference between a Tesla and a lead acid golf cart. Both are “electric” “cars”, but there’s vastly different capability.


Yeah, my parents are an hour south of the Canadian border in New England, and they still heat the house with a wood stove during winter for that very reason. But tell ya what, that wood stove sure is amazing - keeps it very warm, and it's not a small house.


I have a buddy who put in a fireplace insert and pellet stove. Besides the fact that he picked up very modern implementations of these two devices, he lives on a 10-acre property that's mostly cherry trees, so an electric log splitter was purchased alongside the fireplace insert.

He put them in after discovering that heating his house on LP (the only fuel source besides electric available where he lives) cost about $650/mo to keep his home as warm as he wanted it. IIRC, he was basically keeping his house at 85 on those two devices alone. He added geothermal a few years later.

His home is in the middle of the windiest part of the Thumb -- very, very cold in the winter with a lot of snow.


I have one for my 20m² shed; it works down to -5C at least.

Granted my shed has quite good insulation, but still.

Worst comes to worst, you could use a ground source heat pump.


Yes to ground source. Obvious thing to do in a cold climate.

I am very sceptical of a Heat Pump working efficiently down to -20F (-28C in modern units)

I found -3C was the lowest. It is a new heat pump obtained with the advice of experts.

Physics. What is the operating fluid that will evaporate at -28C? As it stands I think it is untrue.


> What is the operating fluid that will evaporate at -28C?

remember that it's operating at a much higher pressure than normal, plus there is a pressure differential between the "hot" side and the "cold" side.

my refrigerant is propane, which is a gas down to about -42C at atmospheric pressure.

The main issue is not the refrigerant, but ice build-up on the outdoor coil. This (I assume) blocks the airflow over the coils and generally stops the coil from absorbing heat (ice might also be less conductive).


Here in Sweden they are very common; my old and beaten-up heat pump works down to -20C (-4F), although less efficiently.


> But not in very cold environments.

This depends on what you mean by "cold" and there are several other factors as well.

For one, they work great below 28 degrees. For the area between 28-34, there can be issues. In this range, water will more readily condense out of the air and form ice on the outdoor unit. And if it is raining and it is 34 out, you'll really have some ice.

But below 28 degrees... any water in the air is already "frozen" and you aren't going to have as big of an issue of ice spontaneously forming on a colder surface.

As long as you can get air flow across the coils on the outdoor unit you are fine, in one sense, "the colder, the better".

But that leads to the next issue: what to do when you do have ice blocking air flow? And this is when price comes into play. To my knowledge, all of the lower-tier brand names (Goodman, Payne, Bryant, maybe even Ruud and Rheem) will use a timer-based defrost control. Basically, once the outdoor coil goes below 32 degrees, a switch is tripped and a defrost cycle will be forced after 60 minutes, whether there is ice on the unit or not. When temps are below freezing, a defrost cycle could easily take 20 minutes of extra runtime to recover the temperature. Even worse, if snow is drifted up against the outdoor unit, a defrost cycle will cause it to melt into the coils, where it will turn into ice once the defrost cycle is over. So an unnecessary defrost could take a completely ice-free outdoor unit and leave it with one side caked in ice.

To combat this issue, most of your top line brands (Trane, American Standard, Lennox, Carrier) will have "on demand" defrost so you might very well go 4+ hours of runtime and never see a defrost. However, each brand has their own quirks and can still end up with unnecessary defrost cycles if the air flow through the indoor unit (dirty filter) is poor or if the system refrigeration charge is not 100% perfect.

The other thing that seems to get people is run times. In the south, in 100+ degree temps, you can expect your A/C to run for 12 hours in a 24-hour period. Yet for some reason, when a heat pump runs for 2+ hours straight when it is 20 degrees outside, people flip out that it's running too long and going to blow the electric bill up, so they flip it over to electric-only/emergency mode.

Let's do the math... your heat pump is running for hours on end, drawing 3 kW (3 kWh every hour). You freak out and flip it to emergency mode, which turns on a 20 kW electric heater, and the unit now runs for 30 minutes followed by a 30-minute off cycle. You think it is only using "half" the electricity because it is running half as much. But the reality is, at a 50% duty cycle a 20 kW heater uses 10 kWh every hour - more than three times what the heat pump was drawing.
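Spelled out with the same assumed figures:

  $heatPumpKwhPerHour  = 3  * 1.0   # 3 kW, running continuously
  $emergencyKwhPerHour = 20 * 0.5   # 20 kW at a 50% duty cycle
  "heat pump: $heatPumpKwhPerHour kWh/h vs emergency: $emergencyKwhPerHour kWh/h"   # 3 vs 10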

Call me crazy, but I don't think it was politics or frozen natural gas that led to the 2021 Texas blizzard power outage, but people with heat pumps that have no idea what they are doing. Even before temps were freezing, several local community groups on Facebook had people spamming "it's going to get below 32 tonight, so for you heat pump users, make sure you put it in emergency heat mode!". And then to make matters worse, local HVAC companies, with the large influx of people complaining their heat pump had been running for "hours on end", started chiming in saying "it's too cold, go to emergency mode"...

Meanwhile, I'm somewhat new to heat pumps myself, but I had forced mine to only use electric heat when in defrost mode. It ran flawlessly; I was very impressed. The vents were blowing 90 degrees all the way until it was 20 degrees out. Once it was 6 degrees out, it was blowing 81 out the vents, but still enough to hold temperature in the house (66). On the worst day, I had a combined runtime of 16 hours with 7 defrost cycles. My bill for the whole month was $140 (the highest ever), while all my neighbors that tried to "save" money by going to emergency mode had bills of $300+.


Is there some reason why heat pumps don't spray some antifreeze/salt-brine on the evaporator to clear/prevent ice? Either alone, or in conjunction with heating.

You seem like the right person to ask, and this idea has been stuck in my head since seeing the Technology Connections video.


Any kind of antifreeze or salt would be an environmental issue, never mind the issue of it corroding metal.

Ice build up and "defrost cycles" are not really an issue if managed right. But the issue is people buying the cheapest option and expecting the greatest performance.

HVAC tech knowledge is also an issue. Most techs are going to do the bare minimum in order to close out a job and move on to the next project/service call. There is really no incentive for educating end users on how their "system" works.


Related Technology Connections video: https://www.youtube.com/watch?v=7J52mDjZzto


Almost all new houses are built with heat pumps where I live because it's kind of a no-brainer in those situations. And I see one old house after the other being wrapped in new thick isolation.

But heat pumps in old homes just aren't a thing because it's expensive and a lot of work to adapt the house's existing heating. People do understand it, they just don't want to bother with that.


What is "isolation"? Is that a typo of "insulation"?


Force of habit. In my native language we use the same word for isolation and insulation.


I was trying to narrow it down, but it turns out a lot of languages do that.


For new homes absolutely. In the UK gas boilers are the norm and every new home should have a heat pump instead.

But retrofitting requires a lot of work - replacing radiators with much larger ones, maybe ripping out pipes, and for ground source digging up the garden / street.


> For new homes absolutely. In the UK gas boilers are the norm and every new home should have a heat pump instead.

This is already happening. Redrow for example are focusing on new heating technology for new homes in preparation for the boiler ban


My condo uses a water source heat pump. It took me more than a few minutes to understand, but it’s so sensible. The building runs water in a loop through all the units. An individual unit’s heat pump can either add energy to or take energy from the water. If the water is too hot it can be cooled in a cooling tower on the roof, and if too cold it can be heated in a central water heater. The unit lives in an interior closet and there is no outdoor component, yet it’s central A/C.


In the UK there will be a gas boiler ban on new builds in 2025. That’s when we will start to see a market for these.

> I honestly think part of the reason they are not adopted as much is people can't understand them and don't trust them because of their ignorance.

I think there’s genuine reason right now.

For existing houses it only makes sense if you have really good insulation, which rules a lot of people out. That's why they're only being focused on for new builds, so the insulation can be guaranteed to be adequate.

They’re expensive (for existing housing) and the returns aren’t quite there yet. Parts and expertise are also quite limited compared to boilers (the market isn’t really there yet).

It also depends what climate you live in for how useful they will be


Energy efficiency technology has been an unsung hero for decades now. It's probably done more to mitigate carbon emissions than any other effort to date.


It's cost for me. Electricity isn't cheap where I live and when your house is poorly insulated you require even more energy for a heat pump.


Me too. I've been obsessing over geothermal systems for a while and I think there's a lot of opportunity there. I wrote this essay to clear up my thinking: https://www.greennewdealio.com/heating/robot-mole/


Geothermal is not uncommon here in Sweden. I think some installations go deep instead of wide, maybe because we have a lot of rocky ground, which is easier to drill into.

But like your article says, the installation is too expensive for the expected savings. There is another problem you didn't bring up: scalability. Geothermal works fine for detached houses, but not so well in cities, which is where most of the energy goes.

District heating is a great solution for denser areas; here we burn trash, solving two problems at once, and in theory it can be combined with geothermal heating.


Along the same line of thinking, tankless water heating [1] seems like a no-brainer. It surprises me that places like the US have not used it as the default from the start.

[1]: https://en.wikipedia.org/wiki/Tankless_water_heating


Why not heat pump water heaters? Those are even more energy-efficient.


There have got to be better ways to optimize the design of boilers; surely this is a reinforcement-learning problem against the heat equations.


I'm really excited, to the point of distraction, by the RISC-V ISA.

Some people say that there's nothing new in it, but to my mind, they're missing the point: the Berkeley Four took what couldn't be appropriated for profit, and built a statement about what computing is... They revealed the Wizard of Oz to everyone, so that anyone with some computing background can build a processor, freely.

And now this freed wizard is working his magic, and will change the computing landscape irrevocably.


> "...so that anyone with some computing background can build a processor, freely."

They could already do that. I designed and laid out in silicon a 32 bit processor as part of my undergraduate studies in computer engineering.


Can I run Linux on it? I almost feel like being able to completely rebuild computer technology from raw materials should be encased in a book somewhere. Hypothetically, if we dropped you and 100 other engineers in the woods with metalworking equipment, could you get something resembling a computer working in 5 years? Or in this post-apocalyptic landscape, would it be more practical to dig in landfills for old GameCubes to retrofit?


On RISC-V? Yes... Well sort of. There are people working on ports of Fedora, Debian and OpenSuse (I think) and I've seen some version of BSD working on SiFive's boards. Debian is said to be 95% complete. Ubuntu have also announced that they're working on a port, and SkiffOS is said to be ready to run on RISC-V too. There was also a demo of AOSP working on 3 XuanTie 910 cores a few weeks ago.

Of course, then you need apps to be ported over, and that might take longer.


Yes. OpenSPARC has been free/available since 2005. There are a multitude of others of similar vintage that can also boot Linux.

It's a solution looking for a problem that doesn't exist.


You do realize that RISC-V processors aren't free, right? Making an open-source ISA is no small feat, but all they really did was help save some megacorporations a little extra money.

Perhaps it will lead to a processor startup, but follow that to its logical conclusion: it takes a huge, profitable company to sustain processor delivery for years. There's a very good reason why only a handful of companies make the top 6 CPU architectures. There are still Synopsys ARC, Renesas, Atmel, and PIC, just to name a few of RISC-V's competitors.

In reality, the Berkeley Four just made a handful of semi companies richer. WDC, NXP, NVIDIA, Microchip, etc. don't have to pay Arm for IP if they use RISC-V. Did that really help anything? Meh.


You could argue that the availability of Linux saved countless megacorporations from having to pay Microsoft, IBM, or Sun. Yet the availability of Linux has been a boon to people across the board.

While I agree there is something not right about cutting into ARM’s profits for the benefit of megacorporations, I think that a royalty-free ISA might genuinely be good for civilization despite that in the same way Linux is. It’s tough though, I’m still not fully sold on that opinion.


I hear what you are saying, and that is a good point, and I do agree with the: "Why isn't there an open ISA?"

I think the key difference between an OS like Linux and an ISA like RISC-V is that Linux helped literally millions of smart, curious kids get exposed to Unix and to tinkering in an OS that wasn't a locked-down behemoth like Windows. It changed lives. I don't think it's possible to overstate the democratization Linux brought about.

However, I don't see RISC-V touching the world in an analogous way. No kid is gonna tinker with an ISA and fab a CPU. Maybe they will stick it on an FPGA? But it seems less accessible.

Plus RISC-V's biggest proponent, Patterson, isn't really giving the world anything like Linus Torvalds did. Linus busted his butt (and still does!) whereas Patterson flies around collecting big speaker fees to pump up RISC-V while delegating. Just seems a bit... off? Or it's just me.


You raise good points. Linus is a one-in-a-million person... At the same time I do think kids, hobbyists, small startups, and people who can't afford expensive computers will make use of RISC-V on commoditized programmable logic chips like FPGAs. Look at projects like MiSTer: simply not something that was available to hobbyists and free-culture advocates 20 years ago. I can see a similar development happening in the computer space. Actually, I'm surprised homebrew computers architected around an FPGA haven't yet become more popular. Maybe as the popularity of RISC-V grows we'll see more of that. It would be great to see a project that implements a RISC-V based computer on a DE10-Nano board as a starting point.


I spent most of my career as a microarchitect at Intel, from the late 80's till the mid-90's on the 486 & Pentium and then again in the late 2000's on Xeon. So the idea of tinkering with an ISA is something that gives me the cold-sweats today! It's a wee bit more complicated than hacking the RTL/VHDL, but I am not saying curious kids should not have the chance to have a go at it! RISC-V is more akin to if I handed a kid a Boeing 747 and said, hey, go nuts! :)

However, I taught programming in an afterschool program in the mid-80's, and the Apple //e had an amazing graphical program that stepped through an assembly program, showing the data moving through the buses from register to memory, etc. It was instrumental to kids' learning. There are a few more wires in a 32-bit RISC compared to an 8-bit 6502, but someone will benefit!


Interesting perspective. It's true the jump from 8-bit to 32-bit architectures involved a lot more than just increasing the data bus width; it came with advances in processor architecture, so I understand where you are coming from. That said, I think the hobbyist community will find a happy medium in terms of complexity / hackability. Establishing a base platform seems to me the most important first step.


Does somebody know the name of this Apple //e program?


> "At the same time I do think kids, hobbiests, small startups, and people who can’t afford expensive computers will make use of RISCV on commoditized programmable logic chips like FPGAs."

No, they won't. By definition, FPGAs always give you less performance per dollar than custom silicon. If they can't afford an expensive computer, then FPGAs and the tooling to use them is even more out of reach by a long way.


By the working analogy here, Linux is also at a disadvantage because it doesn’t run the majority of commercially supported consumer software yet many people still use it. It has shown that people are willing to choose free options even if it’s technically less advanced than closed options. A fully featured de10-nano board costs ~$150 [1] so it’s readily accessible, at least in comparison to desktop pcs.

[1] https://www.amazon.com/Terasic-Technologies-P0496-DE10-Nano-...


Steam's system survey shows Linux desktop usage at ~0.85%, and general consumer surveys show less than 3% usage, even after thirty years of development and in spite of the fact that it costs nothing. Moreover, a $45 Raspberry Pi 4, costing a third as much, will have vastly better performance as a desktop than any logic it's possible to fit on the $150 DE10 board that was mentioned. The examples provided therefore demonstrate the exact opposite of the points you are trying to make.


<=1% usage does not contradict any of the points I made. I never claimed >1% of people would make the freedom-over-convenience trade-off, just that there exist people who would, and that it would be important to them.

I don’t see anywhere near 1% of people running a RISC-V softcore based computer on a de10-nano board but I would and many other hackers like me would.


I for one would love to.

Or to at least have a FPGA embedded in the machine Novena(*) laptop style.

(*) the open source laptop by Andrew Huang.


To be fair, interest around ISAs has never been so high. RISC-V workshops ("build your own core") are now kind of everywhere, from Udemy to most universities, and some are even targeted at teenagers.

The resulting RISC-V cores are mostly emulated, but are expanding knowledge of FPGAs immensely as a side effect.


There are so many hardware hackers and makers these days, on sites such as Hackster.io [0], that they are transforming the world around us. We need more of them, and RISC-V is well positioned in this space.

[0] http://hackster.io


I think you're underestimating the number of curious kids who are more on the electrical engineering side of things. Sure, they probably won't get to play with a CPU fab line, but then again that's somewhat like saying kernel contributions are the only way of "tinkering with Linux".

It really is just a question of any level of hobbyist accessibility (think photo-etching 2 PCBs of less sophistication vs. making a run of 100 perfect 3-layered PCBs in a factory) - and I've seen quite a lot of projects popping up around FPGAs lately that seem to indicate they're starting to approach the Arduino-ish level of approachability (though it's still obviously very far off).

RISC-V just might be one of the things on the way to a whole "silicon-tinkering" scene, so I'm pretty hopeful.


Arm deserve all that's coming to them.

A generation ago, their forerunners - Acorn - were happy to take, use and make a profit from the work of American universities, yet when Berkeley asked to be able to use Arm's ISA for research purposes, they got short shrift.

So Berkeley cobbling together a next-generation RISC ISA, and Foundation-ing it out of reach of the same thing happening again, is smart retribution.


Sure, RISC-V designs can be open or not... And of course there's always the cost of fabbing.

There're already designs freely available to use though, either as they are, or to build upon.

And there are also now many other companies designing using the ISA; decentralising the production of chips.

But - over and above the revolutionary economics of it - it's being recognised as a good ISA, and RISC-V cores are already being incorporated into consumer electronics.


Actually... What they did, ten years ago, was build a much better Computer Engineering course!


Maybe not exactly under the radar, but when I was last doing professional EE/hardware/embedded work around the turn of the millennium, the software (EDA, compilers, CAD) and hardware (instrumentation, evaluation boards, PCB fabrication) were all super expensive, to the extent that they weren't really accessible to non financially-independent bootstrappers.

Sometime in the last two decades (and again, I'm probably super late to the party on this) it's become extremely affordable to dip one's toe into electronic hardware and embedded software. And not just at the "breadboarding something with an Arduino" level, but at the level of building small production runs of a product that people would actually pay money for.

In a way it reminds me of the mid-2000s era of web technology, where over the course of a few years you went from "putting expensive servers in a data center" to "filling out a web form" in order to host an app in a reasonably high-availability environment.

Or another way of looking at it, a lot of things that maybe you previously had to fund-raise for are now things you can bootstrap, and many things are cheap even by hobby standards.

That means for a lot of projects (for technical folks) you don't need to convince anyone else that your idea has merit, you can just build it and find out.


https://ipfs.io/ seems pretty cool. I've been meaning to check it out, but not seen any uptake yet.


Same here. IPFS seems to have been lumped into the whole "crypto" scene because of filecoin. In my opinion, IPFS is much more interesting and potentially beneficial to individual freedom than crypto. The problem is the only people really talking about it are people focused on crypto, also forcing dapp conversations to focus on quick money vs. long term usability/functionality. I don't hold the money conversations against anyone...just not interested in that side of it. More interested in the preservation of knowledge long term and how/if that gets built. IPFS seems to be an interesting potential step towards those goals.


There's still a bunch of questions around Filecoin, but if you set aside the token and its speculative nature, it's the first realistic proof-of-storage system and represents (1) a departure from proof of work, which is sucking up the power output of several countries for no good reason, (2) a useful base for IPFS to store its bits persistently, and (3) perhaps funding for Protocol Labs so they can continue to advance the state of the art.


I find it really cool that Brave implemented IPFS.


Libgen added a feature to download files with ipfs. It is ridiculously faster than a normal download.


Played around with it a few weeks ago. It was super easy to get going with. I just dunno what benefits it gives over traditional storage, besides the classic crypto one: "censorship resistance". (Also hello HN, this is the first time I've posted anything :p )


I agree with you. And we also need technology like dApps that aren't billed per opcode by their VM, but priced hourly like cloud-based hosting, so your software can run affordably without throwing away decentralized capabilities. That would bring the decentralized economy to the masses.


I want to see a variation of Signal that adds a blog-post-like interface.

It could then use IPFS to host "Public Facing" posts. People could pin - or pay for pinning - their posts.

IPFS is what I hope will lead to further democratization of the internet.


Could you expand on what you mean by a variation of Signal? To me, Signal’s main feature is just that it’s a reliable E2E encrypted messaging app, implying two ends to communication. A blog post interface implies one-to-anyone communication, so I’m confused what part of Signal you want to emulate.


Yeah, I haven't communicated it well.

I am aiming for a decentralised, FB-like experience on Signal. And I honestly think it is feasible.

On FB, you tag a post with intended recipients (default: all "Friends"); people log in and get fed these posts, with FB fiddling along the way.

Signal with a redesigned UX could show a doomscroll of posts that friends send you. Not that I think this is great, but familiarity helps user adoption.

Signal E2E delivers the message; the new UX displays the posts. You apply heuristics as you want, all without ads or data gathering by a MITM.

But... public posting is a thing. I am hoping that IPFS combined with cryptographic signing for authentication, could fill that role.

Hence my comment that IPFS could hopefully lead to democratization. Although I am also concerned that IPFS will lead to "undeletable" content, that seems to be almost a thing already anyway.
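
As a sketch of the signing half, this is roughly what I have in mind, using the Python `cryptography` package (the IPFS publishing step is left out, and the key management is hand-waved):

    # Sign a public post so readers can authenticate the author before
    # it gets published to IPFS. Minimal sketch, not a full design.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    author_key = Ed25519PrivateKey.generate()  # would live inside the Signal app
    post = b"my first public post"
    signature = author_key.sign(post)

    # Anyone holding the author's public key can verify the post;
    # verify() raises InvalidSignature if it was tampered with.
    author_key.public_key().verify(signature, post)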


Scuttlebutt is the software you're looking for. Check out the patchwork client. Posts are public by default but you can make them encrypted so only your friends can read them.

There's also cabal chat which is based on the same underlying technology but is chat focused rather than Facebook like.


This is very interesting! Thank you!


I keep seeing this pop up, but honestly I still don't get it. I'm also weirded out by the filecoin aspect of it. Could you maybe explain it better for the dumb folks like me?


Think of it as a distributed filesystem with content addressable by hash values. If you want to publish some content, instead of posting a torrent file and seeding it, you announce the content on IPFS to your IPFS peers. Anyone else can now find it by hash (or by name if you use IPNS). If anyone else downloads it, like with a torrent, they'll begin sharing it back out, creating redundancy and potentially reducing access time for others as more people share it (imagine being on a 10Mbps connection as the only one sharing something, versus having 100 others sharing it; even if they're all on 10Mbps connections too, it will be faster).

Content can be "pinned", which ensures it remains on your IPFS node; anything unpinned that you download will eventually disappear (basically an LRU cache). So if you download the entire run of Dungeon Magazine but don't pin it, it will disappear once the cache you've set aside for IPFS eventually fills up. But if you pin it, the content will remain hosted by your node indefinitely, even if no one ever accesses it or pins it again and the original disappears.

Filecoin is a separate thing (mostly), and can (kind of) be thought of as pinning-as-a-service. It's built on a private IPFS network, not the main public one most people use or are directed to. So it's using IPFS, but it is not IPFS.
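
If you want to see it concretely, the add/pin/fetch cycle against a local daemon's HTTP API looks roughly like this (a sketch assuming the daemon is running on its default API port, 5001):

    # Add, pin, and fetch content through a local IPFS daemon's HTTP API.
    import requests

    API = "http://127.0.0.1:5001/api/v0"

    # Add (and announce) a file; the response includes its content hash (CID).
    with open("dungeon_magazine_001.pdf", "rb") as f:
        cid = requests.post(API + "/add", files={"file": f}).json()["Hash"]

    # Pin it so it stays on this node instead of aging out of the cache.
    requests.post(API + "/pin/add", params={"arg": cid})

    # Anyone can now fetch the same bytes by hash from any peer that has them.
    data = requests.post(API + "/cat", params={"arg": cid}).content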


Is there a client that I can run and point at an existing file structure to make it all available via IPFS? Perhaps with limiting on outgoing bandwidth?


Muon-Catalyzed Fusion [1].

Briefly it's a genuine, and scientifically uncontroversial, form of 'cold' fusion enabled by muons — a more massive relative of the electron that was in the news recently thanks to the potentially interesting results coming out of the Muon g-2 experiment at Fermilab [2].

Like conventional 'hot' plasma fusion, in all experiments to date the energy input needed to sustain the process has exceeded the output, but it may be possible to use it to generate power. Unlike conventional fusion though, it receives relatively little attention, and there is no well-funded international effort to tackle the associated technical challenges. As with conventional fusion the technical challenges seem formidable, but it could be an interesting technology if a way could be found to make it work.

Listeners of Lex Fridman's podcast may recall that it was briefly mentioned in the episode he made last year with his father, Alexander Fridman, who is a plasma physicist [3]. As someone who has been interested in the idea for years and barely hears any mention of it, I was pleasantly surprised it came up.

It was also covered on the MinutePhysics YouTube channel in 2018 [4].

[1] https://en.wikipedia.org/wiki/Muon_catalyzed_fusion

[2] https://www.youtube.com/watch?v=O4Ko7NW2yQo

[3] https://www.youtube.com/watch?v=hNCz-8QIWuI

[4] https://www.youtube.com/watch?v=aDfB3gnxRhc

Bonus fact: Muon-Catalyzed fusion was first demonstrated by the Nobel laureate Luis Alvarez, who, with his geologist son Walter Alvarez, later proposed the 'Alvarez hypothesis' for the extinction of the dinosaurs by asteroid impact.


Yay, something that wasn't software related!

-A Software Engineer


I thought research on it had stalled (I looked it up for a high-school project… 20-odd years ago). I'm glad to see otherwise (or that at least more people see it as interesting), I always thought it looked promising.


There's an ongoing project right now with ARPA-E's "BETHE" program to experimentally study the potential for MCF in higher-density plasmas. IMO, this is one of the better choices that ARPA-E's fusion program has made: it's low-cost, high-risk, high-reward.

https://www.arpa-e.energy.gov/technologies/projects/conditio...


The MinutePhysics video you linked seemed pretty firm in saying we had no clear path to reducing the energy input required to create muons or increasing the energy output of the fusion, thus never generating net energy.

Are there recent developments in this field that change that?



It's not clear if increased energy efficiency is one of the benefits of this technology. My guess is no.


It's disappointing that MCF doesn't get more attention.


Homomorphic encryption, which enables you to process data without decrypting it. Would solve privacy / data security issues around sending data to be processed in the cloud


The extra cost is worrying. You're talking about a 4 to 6 order-of-magnitude increase in resource usage for the same computation.

Unless we figure out some awesome hardware acceleration for it, it's not practical except for a few niche applications.

It also has the problem that you can use computation results to derive the data, if you have enough control over the computation (e.g. a reporting application that allows aggregate reports).


Modern homomorphic encryption schemes only have a ~100x performance hit. That sounds bad, until you remember that computers are really fast and spend the majority of time doing nothing, and that the difference between writing a program in Python or C is already something like 10x.


> Modern homomorphic encryption schemes only have a ~100x performance hit.

Really?! Now I'm curious. If I have a simple program for an 8-bit CPU @ 1 MHz, can I run it in a virtual machine under homomorphic encryption with a reasonable runtime yet? If the performance hit is only 100x, it should not run much slower than the actual chip. But the last time I checked, the hypothetical runtime still seemed impractical.


8-bit is definitely doable today, fast.

There are basically 2 strategies:

- do fast operations, with a limit on how many you can do. This is called Leveled Homomorphic Encryption, with CKKS being the most popular scheme. Microsoft open sourced a lib called SEAL for it.

- do unlimited operations, but with extra overhead. This is called Fully Homomorphic Encryption, with TFHE being the fastest implementation. My company Zama has open sourced a library in Rust called Concrete.

Reminds me a lot of deep learning in 2010, just before it took off!
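
If you just want to see the homomorphic idea itself, a toy additively homomorphic scheme (Paillier, which is much older and simpler than CKKS or TFHE, and utterly insecure at this size) fits in a few lines of Python:

    # Toy Paillier: additively homomorphic encryption. Tiny primes for
    # illustration only -- do not use for anything real.
    import math, random

    p, q = 293, 433                          # toy primes
    n, lam = p * q, math.lcm(p - 1, q - 1)   # math.lcm needs Python 3.9+
    n2, g = n * n, n + 1                     # standard choice g = n + 1

    def encrypt(m):
        r = random.randrange(2, n)
        while math.gcd(r, n) != 1:           # randomness must be a unit mod n
            r = random.randrange(2, n)
        return pow(g, m, n2) * pow(r, n, n2) % n2

    def decrypt(c):
        L = lambda u: (u - 1) // n
        mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse, Python 3.8+
        return L(pow(c, lam, n2)) * mu % n

    a, b = encrypt(17), encrypt(25)
    assert decrypt(a * b % n2) == 42         # multiplying ciphertexts adds plaintexts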


Thanks.


Still, 100x is a lot. And I'd bet it depends on the complexity of the workload.


Zero knowledge proofs for the win! This is one of the things I need to see in a cryptocurrency before I believe it will succeed at scale.

1 Zero-knowledge proofs,

2 shielded ledgers,

3 democratized and energy-efficient mining,

4 inflationary control, and

5 wallet recovery.

No one has all of these yet, but ZKP is a big part of it.
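
For anyone wondering what a ZKP looks like mechanically, the classic toy example is Schnorr's protocol: prove you know the discrete log x of y = g^x without revealing x. A Python sketch with toy parameters (nothing like production crypto):

    # One round of Schnorr's identification protocol: prove knowledge of x
    # such that y = g^x mod p, without revealing x. Toy parameters only.
    import random

    p = 2**127 - 1                       # a Mersenne prime; toy-sized group
    g = 3

    x = random.randrange(2, p - 1)       # prover's secret
    y = pow(g, x, p)                     # public: y = g^x mod p

    # Prover commits, verifier challenges, prover responds.
    r = random.randrange(2, p - 1)
    t = pow(g, r, p)                     # commitment
    c = random.randrange(2, p - 1)       # verifier's random challenge
    s = (r + c * x) % (p - 1)            # response; r masks the secret x

    # Verifier is convinced the prover knows x, yet never learns it.
    assert pow(g, s, p) == t * pow(y, c, p) % p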


Can you explain what you mean by shielded ledgers?


The fact that wallet IDs are visible in the blockchain breaks it completely for me. I do NOT want an immortal record of every penny I spent and to whom I gave it. Fuuuuuuuck that. Today I have a choice, but with many cryptocurrencies (ETH, BTC) that is a "feature" not a "bug".

Monero and Ravencoin have transparent and shielded entries. I believe the entry is encrypted with an ECDH shared secret, so the payer and the payee know each other's wallets, but no one else does.
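
The key-agreement part is easy to sketch with the Python `cryptography` package (X25519 here is just a stand-in to show the idea, not Monero's actual construction):

    # Payer and payee each derive the same secret; an observer who only
    # sees the public keys cannot. X25519 stand-in for the ECDH step.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    payer_priv = X25519PrivateKey.generate()
    payee_priv = X25519PrivateKey.generate()

    secret_1 = payer_priv.exchange(payee_priv.public_key())
    secret_2 = payee_priv.exchange(payer_priv.public_key())
    assert secret_1 == secret_2   # both ends can decrypt the entry; no one else can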


If by inflationary control you mean non limited supply, grin (mimblewimble protocol) has all those attributes.


Yes, I did mean that. Also pegging it to a real currency, like USD Coin. I had not heard of Mimblewimble; thanks for the reference, I'll add it to my list of coins to study.


I am also curious what you mean.


See my peer comment.


zk-SNARKs are a practical application of ZKPs in crypto.


3 - proof of stake (Cardano etc) solves this, no?


Like I said, lots of coins touch some of these, but I don't know of any that has all of them.


You should look into Monero.


Yep. I have. RandomX and ECDH ledgers are a big plus.


Monero is proof of work? Is there a proof-of-stake alternative to Monero?


Advances in computational chemistry and computational materials science, especially using ML to speed up computation. Computational chemistry is already a patchwork of different heuristic approaches; graph neural networks and even some language models seem like very promising additions.

If we could simulate and observe what happens with complex chemistry accurately, it would completely change the biology, medicine and materials science.


> If we could simulate and observe what happens with complex chemistry accurately, it would completely change the biology, medicine and materials science.

That's probably far less true than you imagine. See Derek Lowe's take on it: https://blogs.sciencemag.org/pipeline/archives/2021/03/19/ai...

The rate-limiting steps in drug discovery are figuring out a) what you need to muck up to improve health, or b) how to muck it up without mucking up other things badly enough to kill you. Computational chemistry has generally focused more on solving problems c) how to muck up this target more effectively, or d) how to make the mucker-upper in the first place, which, while not useless, is not going to be a revolutionary change by any stretch of the imagination.


>If we could simulate and observe what happens with complex chemistry accurately

There is the rub. Biological simulations have been written for 40 years now. It's an extremely difficult problem considering how many latent variables are at play, and people have been working on it for a very long time now.


I suspect that the recent breakthrough in protein structure prediction is just a start.

ML techniques will be used to cut that latent variable space in both quantum chemistry and molecular mechanics based methods.


A lot of my work revolves around human physiology. I think a lot of that will be improved with very accurate, high-resolution, and continuous sensors. When that happens (with BioMEMS of course for most cases), AI will be able to fill in the gaps in a lot of different applications.

In the early 2010s, I was an undergraduate electrical engineering student with type 1 diabetes playing around with such models by reprogramming stuff that had been presented in peer-reviewed journal articles. I eventually programmed a closed-loop control system (also known as an "Artificial Pancreas System") as a spring-break project to inform my insulin dosages. Mostly, it was a soul-searching project, since engineering school was physically gruelling for me, as I have serious health problems. I had found a paper about sliding mode control with respect to type 1 diabetes that looked solvable to me, but I did not know if it actually was. I decided to see what I could do with it, and I was successful, in 2011 and barely old enough to legally drink!

Anyways, I can assure you that while research on control systems is drying up, including for physiological systems, the excitement for what you mention is just about to begin, starting in about 5 years.


Yes! This would be fantastic, I'd love to see this happen.

If anyone is working on this, and is looking to hire a computational organic chemist-turned-ai engineer, let me know!



Wonderful, thanks!


Contact info?


Thanks, I realized I never filled in my profile. Mail can go to anything at rombouts dot email.


I have a relative who is a PhD chemist who I talked to about some of this stuff. He is generally skeptical of it because "the interesting chemicals are the ones that have extreme properties and predicting extreme properties is hard". It's a lot easier to model normal behavior than unusual behavior. He wasn't totally against the idea, he just thought it would be much harder than many people assume.


The GraphBLAS[1] is an amazing API and mathematical framework for building parallel graph algorithms using sparse matrix multiplication in the language of Linear Algebra.

There are Python and MATLAB bindings; in fact, MATLAB R2021a now uses SuiteSparse:GraphBLAS for built-in sparse matrix multiplication (A*B).

[1] https://graphblas.github.io/
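
If "graph algorithms in the language of linear algebra" sounds abstract, the core trick is easy to sketch with scipy.sparse standing in for a real GraphBLAS binding: one BFS frontier expansion is just a sparse matrix-vector product.

    # The GraphBLAS idea in miniature (scipy.sparse as a stand-in):
    # expanding a BFS frontier by one hop is a sparse mat-vec.
    import numpy as np
    from scipy.sparse import csr_matrix

    # Adjacency matrix of a tiny directed graph: edges 0->1, 0->2, 2->3.
    A = csr_matrix(np.array([[0, 1, 1, 0],
                             [0, 0, 0, 0],
                             [0, 0, 0, 1],
                             [0, 0, 0, 0]]))

    frontier = np.array([1, 0, 0, 0])      # BFS starting at node 0
    next_frontier = (A.T @ frontier) > 0   # one hop = one sparse mat-vec
    print(next_frontier)                   # [False  True  True False]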


I am working on GraphBLAS-based implementations right now. While the API itself is great, using it full-time I notice the documentation still lags behind and there are not too many features yet. Also, there is almost no community outside academia. Crazy efficient though, and very solid overall.


Absorbent panels for aircraft and angular design that prevents reflection of emitted waves.

Oh wait, that's under-the-radar technology.


angular? i thought we are all using vue now.


It's not very stealth tech if it's vue


Is that a startup in stealth mode?


Just sprayed my coffee out my nose... thanks for the morning laugh


https://paddle.com - it's like Stripe, but their "Reseller of record" means that they handle all sales tax / VAT issues. For most SaaS businesses it seems to reduce headaches a lot.


Former Paddle customer, don't believe the hype. Everything is duct tape and half assed with the API, the checkout process is happy path or bust, and you're going to have a hard time migrating to a different processor.

Stripe + TaxJar was cheaper and easier to implement and maintain.


Another former Paddle customer: I agree with Mike here. Additionally, issuing anything other than the most trivial coupon was a huge hassle, to the point that it distracted from other work (small team).


https://www.fonoa.com/ These guys also have an API for dealing with different tax laws around the world. Founded by some ex-Uber people.


Ex-Uber isn't really a selling point for a service where you face large fines if they get fiddly little legal details wrong.


Not sure if I'm missing something, but there have been companies operating the same way for decades, such as FastSpring. These existing companies all charge a pretty penny though - is Paddle different in that regard?


I'm a customer, and it handles a lot of headaches for accounting. Basically you have one client - Paddle - as far as the law is concerned. Plus it has an easy inline checkout feature for collecting payment info that never touches your server, so no headaches securing that info.


Can you share an approximate percentage they take from a transaction for offering that service? Their pricing information seems to be hidden behind an email form.


Around the internet, I saw 5% transaction fee. But, like Plaid, they probably customize the fee based on the product sold and volume.


Did you compare them to others in this space, like FastSpring? Would be really interested to see how it stacks up.


ReBCO tape is interesting stuff. It's a high-temperature superconductor (high-temperature meaning still really cold, but you don't need liquid helium to get there) that can carry a lot of current before it stops being superconducting.

It's being used in new tokamak fusion reactor designs, like SPARC.

https://en.wikipedia.org/wiki/Rare-earth_barium_copper_oxide


An excellent video on this from one of the best Youtube channels in existence (IMHO): https://www.youtube.com/watch?v=zcpDGKH9_SE


You're right, that's a great video.


I recently came across Polymarket[1] and am really excited about what crypto has in store for prediction markets and insurance. I've dabbled in prediction markets over the years. I think they have a lot of promise, but execution is always an issue. Using ETH to create a two-sided marketplace feels much better than having a prediction-market operator decide what one can bet on.

More broadly, decentralizing insurance in this way would be very cool too... There's little difference, in my mind, between a prediction market predicting weather changes or elections, and insurance contracts around risk.

... and what's even cooler is: can we build bots and models to actually get an edge on these predictions? Imagine applying HFT strategies from stocks to predicting real-world events... Now it sounds like we can actually get good at forecasting difficult-to-predict human events, rather than just stock prices.

[1] https://polymarket.com/


What is Polymarket doing in the insurance space? Didn't see anything along those lines on their site. I've previously encountered https://etherisc.com in passing, but it's not clear that they have a plan to address the hard parts, particularly leverage/capitalization. I'll also mention/disclose that my current employer, https://ledgerinvesting.com, is doing some cool stuff in the direction of decentralized insurance.


They focus only on prediction markets. I just meant that insurance and prediction markets are two sides of the same coin. Thanks for sharing those links though -- I will check them out.


Fees are borderline insane if you are a small volume trader.

If you’re in the US there is a regulated prediction market set to launch soon.


Is there really? I worked at Intrade when I was in college. It was messy internally but a great concept. Pity the US Gov smashed it. It had a lot of potential.


Agreed. Very expensive if you're throwing in a bit of cash.

Do you have more info about the regulated prediction market? I'd love to learn more.



Thanks!!


Doesn't seem much different from Augur.


When I played with Augur, the community was very quiet and not a lot was going on. That was a big challenge for me. I also didn't like that you had to use their crypto to actually participate -- in Polymarket's case, I understand that they are using ETH smart contracts.

There are definitely a few players in this space and I'm excited to see where it goes.


Interesting, and yeah I will always favor platforms that use ETH vs creating their own token so I'll check this out.


- Native File System support coming to the browser: https://web.dev/file-system-access/

- jamstack.wtf

- federation-based networks

- CRDTs: https://josephg.com/blog/crdts-are-the-future/

- data-oriented programming paradigm (https://rugpullindex.com/ shameless plug)

- web components: https://docs.ficusjs.org/index.html


> federation-based networks

e.g. lemmy.ml


Temporal Stores/Databases like Datomic (https://www.datomic.com/) and Crux (https://opencrux.com/). I can't stress enough how much pain they can lift from organizations building teams to "scale" in domains that by nature are tied to requirements about past knowledge like finance, healthcare, governance, audit systems, etc.


How mature is performance optimisation in Datalog? I've joined a place that uses SQL Server, and brought my open source perception of SQL Server as an expensive lumbering behemoth. I'm astonished by how fast SQL Server is, and how mature the tools around it are. We rely on them to tune queries, diagnose contention, etc. My big concern about adopting something like Datomic or Crux is that we'd lose the insight that lets us increase performance without increasing hardware spend.


I work on Crux so can share a few details about our implementation of Datalog. The query is compiled into a kind of Worst-Case Optimal Join algorithm [0] which means that certain types of queries (e.g. cyclic graph-analytical queries, like counting triangles) are generally more efficient than what is possible with a non-WCOJ query execution strategy. However, the potency of this approach relies on the query planner calculating a good ordering of variables for the join order, and this is a hard problem in itself.

Crux is usually quite competent at selecting a sensible variable ordering but when it makes a bad choice your query will take an unnecessary performance hit. The workaround for these situations is to break your query into smaller queries (since we don't wish to support any kind of hinting). Over the longer term we will be continuing to build more intelligent heuristics that make use of advanced population statistics. For instance we are about to merge a PR that uses HyperLogLog to inform attribute selectivity: https://github.com/juxt/crux/pull/1472

EDIT: it's also worth pointing out that the workaround of splitting queries apart is only plausible because of the "database as a value" semantics that make it possible to query repeatedly against a fixed transaction-time snapshot. This is useful for composition more generally and makes it much simpler to write compile-to-Datalog query translation layers on top, such as for SQL: https://opencrux.com/blog/crux-sql.html

[0] https://cs.stanford.edu/people/chrismre/papers/paper49.Ngo.p...


Hyperspectral imaging.

It reminds me of a Star Trek tricorder. Imagine having a camera where you can see easily ID greenhouse gases, quantify water/fat content in food, identify plant diseases, verify drug components, identify tumours, and measure blood oxygenation. On the machine vision side of things: it could probably outperform any conventional imaging + DNN combination, and you'd probably get pixel-wise segmentation for free while you're there.

There's been a lot of academic progress going on - it shouldn't be long until hyperspectral imaging makes its way into our lives.
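
A lot of those applications boil down to per-pixel spectral matching. The classic spectral angle mapper, for instance, scores each pixel by the angle between its spectrum and a reference spectrum; here's a numpy sketch with made-up data (a real cube would come from the sensor, the reference from a spectral library):

    # Spectral Angle Mapper: classify each pixel of a hyperspectral cube by
    # the angle between its spectrum and a reference spectrum. Toy data.
    import numpy as np

    bands = 120
    cube = np.random.rand(64, 64, bands)   # H x W x bands, stand-in for sensor data
    reference = np.random.rand(bands)      # e.g. a known material's spectrum

    flat = cube.reshape(-1, bands)
    cos = flat @ reference / (np.linalg.norm(flat, axis=1) * np.linalg.norm(reference))
    angle = np.arccos(np.clip(cos, -1.0, 1.0)).reshape(64, 64)

    matches = angle < 0.1                  # small angle = close spectral match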


It is very much a part of your life if you are a member of the military or any number of government agencies. It has been for a long time.


Yeah for sure, the current ones are disgustingly expensive and atrociously hard to use though!


Out of curiosity, do you work in this space? Would be interested in finding out more about the state of the field.


Yeah! Ping me anytime, email in profile.


https://darklang.com/

https://news.ycombinator.com/item?id=20985429

https://news.ycombinator.com/item?id=20394166

I think it's inevitable that darklang's vision will be achieved eventually, at least in part, whether by darklang or by other projects. We are already at the stage where you can define your infrastructure in code, and execute functions on managed "serverless" runtimes. It's not too much further to the point that cloud providers will build tightly integrated developer experiences that allow a developer to "just code" while handling all of the complexity that comes after. Within some large software companies, there is something close to this experience, but it hasn't yet been wrapped up and sold to the public.


Developments in the AR/VR space. The haptics stuff coming out is really cool and may be consumer friendly in the near future. Meanwhile headset tech is seeing major investment and light field technology is going to do amazing things for this space.


I have a Valve Index. It is the most mind blowing thing to me. I’m really excited for AR to get going. I’ve only seen it for shopping (see what item x would look like in your house). I think it might be cool for collaboration.


I work full time in VR, using virtual screens (ImmersedVR). I was never really all that excited about AR/VR outside of entertainment until the first time I accidentally spent 8 hours of focused work.


The Oculus Quest 2 is so cheap for a very high quality and high resolution experience. And even more importantly, it's extremely simple to set up.

Wireless VR really is a necessity for it to become more than just a tiny niche.

Not yet mainstream, but it’s actually a joy to use and I think we’ll have significant marginal improvements over time which will keep making it more and more worthwhile.

I think the real thing will be commercial applications of VR, where companies use it because it’s the best way to get certain kinds of work done. And NOT desk work, either. We’re maybe a decade or two from that being mainstream, but it’s going to be a significant improvement.


I can't believe nobody has mentioned regrowing natural teeth.

No more ceramic implants, no more root canals. Grow new shiny and healthy teeth.


A link, please? I've never heard of this before and it sounds very interesting.



* FPGAs, and their associated FOSS stack to use them.

I haven't been able to really get into FPGAs but I'm optimistic about them. They're pretty clunky right now but I'm hoping they'll just get easier and more accessible.

If we want computation that we're able to verify as being secure, FPGAs are the closest I see us getting to it. There's also applications in terms of parallel computation that they can do over more traditional approaches.

This might go the way of Transmeta or always remain niche, but it seems like they have a lot of potential.

* Open Source Hardware, specifically relating to robotics

Electronics production is becoming cheaper and easier. Open source hardware has the potential to become as ubiquitous as open source software.

Electronics hardware is still way harder than it needs to be, so progress is slow, but if we get within range of having an iteration cycle in electronics that's as fast as software development, we'll see spectacular innovation. Robotics especially, as that's a kind of straightforward physical manifestation of electronics that has a potentially large market.

There's a $5k fiber laser that can ablate copper. This could potentially fuel the next round of cottage-industry board fab houses (in the US and other non-Chinese countries) and enable rapid turnaround times. I wish I could justify the $5k to play around with one, but it's just outside my price range.

* Solar

I'm not really sure if this is 'under-the-radar' but for the first time, solar has become cheaper than coal. This means besides giving a moral incentive for people to use solar, there's now an economic one, which means the transition will most likely be broad.

Coupled with battery technology advances, this could have drastic impacts on the ubiquity and cost of power. I wonder if we'll see a kind of "energy internet" where people will create their own distributed electrified infrastructure.


OSHWA board member [1] and developer advocate at Open Robotics [2] (the people who maintain ROS) here.

May I suggest you take a look at microROS[3]?

I am also super excited about OSHWA certified open hardware [4].

[1] https://certification.oshwa.org/ [2] https://www.openrobotics.org/ [3] https://micro.ros.org/ [4] https://certification.oshwa.org/


I follow you on Twitter! I'm sure you don't remember me at all but I spoke to you briefly at one of the Maker Fairs nearly a decade ago and I've seen you at a few of the Open Hardware Summits. Not that you need to hear it from me but keep up the good work.

I'm also excited about the OSHWA certification. I've found a bunch of great projects through it.

I've been passively watching ROS but it's always seemed a bit heavy weight for a lot of the things I'd want to do or for what's available cheaply right now. I'm sure this will get easier as full Linux systems will become cheaper and more ubiquitous for embedded applications.

I haven't seen microROS, though, so thanks for the link, I'll check it out.


Uncertainty quantification and OOD detection in machine learning. It's on some people's radar, but has the potential to get ML adopted much more widely as people understand what it is actually really good at, and stop giving it things to do that it's bad at.

For a great recent example that get at some of this, see "Does Your Dermatology Classifier Know What It Doesn't Know? Detecting the Long-Tail of Unseen Conditions" - https://arxiv.org/abs/2104.03829

I'm not affiliated with this work but I am building a company in this area (because I'm excited). Company is in my profile.
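
The simplest baseline in this space is a decent mental model and fits in a few lines: score each input by the classifier's max softmax probability and flag unusually unconfident inputs as possibly out-of-distribution (after Hendrycks & Gimpel's baseline; numpy sketch, logits and threshold made up):

    # Max-softmax-probability OOD baseline: flag inputs the classifier
    # is unusually unconfident about. Toy logits for illustration.
    import numpy as np

    def softmax(logits):
        z = np.exp(logits - logits.max(axis=-1, keepdims=True))
        return z / z.sum(axis=-1, keepdims=True)

    logits = np.array([[9.1, 0.2, 0.3],    # confident -> likely in-distribution
                       [1.1, 1.0, 0.9]])   # diffuse   -> possibly OOD
    confidence = softmax(logits).max(axis=-1)
    is_ood = confidence < 0.75             # threshold tuned on a validation set
    print(is_ood)                          # [False  True]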


Training AI on CPUs faster than on GPUs (SLIDE):

https://techxplore.com/news/2021-04-rice-intel-optimize-ai-c...


How does it work exactly? Since it is a change in the algorithm, could it be put on a GPU to go even faster afterwards, or is it too sequential for that?


The same algorithm can be put on a GPU. It works by approximating the non-linear activations of a layer: you store which neurons get activated for a given input, and then only calculate those neurons for similar inputs. This costs a lot of memory. It involves traversing a hash table, but that can be done efficiently on a GPU. This entire publication is a red herring for the reason you gave.

If you (or anybody else) happen to be a graphics programmer with experience implementing the bounding-box tree traversal common in CUDA ray tracers, please get in touch with me: there is a good chance a rusty old 1080 can defeat the most expensive Xeon on the market, since it has more memory bandwidth than Intel's part. If you ever wanted to piss on Intel's leg with a multi-weekend project, please get in touch; email in profile. There is a slim chance that this will actually help deep learning.
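
To make the trick concrete, here's a toy version of the hashing idea as I read the paper (SimHash buckets over the weight vectors; this is my sketch, not the SLIDE code):

    # Toy LSH-based neuron selection: bucket neurons by the sign pattern of
    # a few random projections; per input, only evaluate the matching bucket.
    import numpy as np
    from collections import defaultdict

    d, n_neurons, n_planes = 128, 4096, 12
    rng = np.random.default_rng(0)
    W = rng.standard_normal((n_neurons, d))      # the layer's weight matrix
    planes = rng.standard_normal((n_planes, d))  # random hyperplanes for SimHash

    def simhash(v):
        return tuple(planes @ v > 0)             # sign pattern = bucket key

    buckets = defaultdict(list)                  # the memory-hungry index, built once
    for i in range(n_neurons):
        buckets[simhash(W[i])].append(i)

    x = rng.standard_normal(d)
    active = buckets[simhash(x)]                 # may be empty; real systems use several tables
    out = W[active] @ x                          # only compute the selected neurons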


1. Cost per kg for orbital payloads. Soon every company will be able to send payloads to space. 10 years from now we will have 50 launches every day. The sky will contain at least 100 times more satellites than today, every single square cm on earth will have connectivity to the web, and not a single part of it will be invisible or unmonitored.

2. VR and AR. 10 years from now, when hardware is capable of displaying 16k per eye in casual, lightweight devices no bigger than regular sunglasses, everyone will be wearing one, making mobile phones obsolete. Every object, animal or human you look at will be augmented. It will be the greatest new technological impact since the rise of the mobile phone, changing human life dramatically.


1) Capability Based Security - Makes it possible to have actually secure computers, and return the fun to computing by allowing experimentation without fear. (There's a toy sketch of the idea after this list.)

2) Reconfigurable computing - The power of the FPGA without the hassle, a homogeneous lattice of computing nodes as small as single bits allows for fault tolerance, almost unlimited scaling, and perfect security. It offers the power of custom silicon without the grief.

3) Magnetic core logic, initially realized when transistors still weren't reliable enough to build computers out of, may be making a comeback for extreme environments, such as that on Venus.

4) Reversible compilation - being able to work from source --> abstract syntax tree --> source in any language (with comments intact) will be a quite powerful way to refactor legacy codebases in relative safety.

5) Rich source / Literate Programming - embedding content in the program instead of having a ton of "resources" helps reduce the cognitive load of programming.
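
Toy sketch for (1): in object-capability style, code can only touch what it has explicitly been handed, instead of wielding ambient authority. Python can only illustrate the discipline, not enforce it, but the flavor is:

    # Object-capability style in miniature. (Python can't truly enforce
    # this; real ocap systems bake it into the platform.)
    class ReadOnlyFile:
        def __init__(self, path):
            self._path = path

        def read(self):
            with open(self._path) as f:
                return f.read()
        # No write(), no delete(), no path getter: holding this object
        # grants the ability to read one file and nothing more.

    def untrusted_plugin(doc):
        return len(doc.read())          # can use the capability it was given...

    open("/tmp/notes.txt", "w").write("hello")         # set up a file for the demo
    untrusted_plugin(ReadOnlyFile("/tmp/notes.txt"))   # ...but nothing beyond it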



what do you like about it? (disclaimer, i work at temporal, but just genuinely interested in how you'd describe it in your words, good or bad, let's hear it)


I would say that it allows one to write stateful, long-lasting workflows or processes in a durable and persistent way using all the niceties of regular programming languages such as Go/Java, like control structures, abstraction, etc. In a similar way that generators in Python/JS allow one to write even complicated iterators much more easily than manually keeping track of state yourself, Temporal allows one to easily define long-lived stateful processes (on the order of years, even) with the simplicity of roughly:

    while(true) {
      Thread.sleep(1, MONTH);
      sendReminderEmail(user);
    }
...which would normally require one to manually keep track of state in queues and key-value stores and idempotent updates, but with temporal the developer can just focus on the simple business rules. The runtime of Temporal takes care of compiling that down into proper state-machine logic.


yeah nice, exactly. i like calling Temporal a "framework" for that reason - you'd have to code + provision all this stuff anyway if you're doing anything async, and you're probably not testing it properly/thinking through scalability needs or lacking debugging tools to investigate issues.

I'll volunteer that we don't have Python or JS SDKs right now but are working on it (https://github.com/temporalio/sdk-node).

i think `Thread.sleep(1, MONTH);` is a profound paradigm change in programming that I have been struggling to find the words to describe. It's like, where you used to need to write "multiuser" code, you can now write code as if you simply have one machine per user and it idled or did work as long as your user's activity is still going, whether we're talking the end-to-end journey of a food delivery startup or machine learning data pipelines or infrastructure provisioning at a cloud startup (you can even run a workflow for forever, with some careful segmentation https://docs.temporal.io/docs/concept-workflows#faq).

this is useful market research for us, thank you for taking up the suggestion :)


I may be able to answer your question. I only watched the video on the landing page and glanced at the docs, but it reminds me of what a saga system would do (like redux-saga perhaps), meaning that many of the side effects, such as networking, are abstracted away from the business logic, and there is the concept of compensation when things don't go as planned. Very neat!


yeah, "saga for the backend" is an appealing angle to some folks (eg our Coinbase testimonial has that) but i'm not sure how many developers know about sagas so I've been hesitant to push that too hard (bc then we'd have to teach people an extra concept before they get to us).

i'd say something that is maybe hard to appreciate until you really get into it is just how much goes into making this ultra scalable with distributed transactions. If you have ~20 mins for it, I wrote this up recently: https://docs.temporal.io/blog/workflow-engine-principles


Not hard at all to appreciate. I know the consistency woes of juggling thousands of message queues with millions of messages and containers with enormous databases. The problem though, as I found out ironically, is that most dilbert bosses don't appreciate solutions like temporal, unless a tech "guru" tells them about it. For example, I get laughed at when I tell other developers that databases are an antiquated idea and should be avoided in general.

What I've been working on in recent years is actually not too far off in terms of how to approach making truly simple code, but is far off in the sense that it'll take me (alone) years to make a pragmatic implementation, probably as a language in its own right. Rather than attempt to make a system partition tolerant, I started thinking: what if network partitions were assumed as the default? To answer that question you have to do things like measure the information entropy of data, use knot theory, representation theory such as Young lattices, symmetric group mapping, character groups, prime number theory, among other goodies, to represent event-states, workflows, etc. Most of all (this is where things get weird), rather than code programs, the emphasis becomes how to easily and declaratively build "homotopic" multiplexed protocols instead (while keeping the business logic familiar), that way SDKs and integrations are a thing of the past. All this of course has to use existing web standards like HTTP, otherwise it won't be adopted. My friend always laughs at me when I try to explain it to him, so I apologize, ha. But that's all the more reason to appreciate technology like Temporal, because it's something a developer can use today.


jeez, you had me until knot theory lol.

sounds good, you get it. if you are interested in working with us... we're hiring haha. or happy to help promote if you want to write up your thoughts


I'll check out your hiring page.


Narrowband IoT data service. Very low power, low bandwidth, cheap, with increasingly great coverage. It’s like an offshoot of LTE but classified as a 5G technology for massive numbers of low power internet connected devices.


I have to admit, the idea of a run-of-the-mill security-hole-ridden IoT device being exposed to the bare internet over a wireless connection that's out of my control gives me the heebie-jeebies... and that's ignoring the possibilities for malicious use (e.g. using a low powered wireless connection for uncontrollable surveillance/data collection).


The operators would argue this is where their "managed service" offering kicks in and gives you value.

Most operators haven't figured out their business model for NB-IoT quite yet (at least in Europe) - they're still dabbling. Some seem likely to try to pair it with enterprise "private APN" type solutions. Under such a setup, you can actually get quite an interesting system in place - the operator locks the SIM to a custom APN, and that APN only allows comms to a managed, operator-provided IoT backend.

Then the operator's enterprise services team turns that into dashboards and other things the customer can use and access. In a sense, they're using "extra slack capacity" on their FDD cellular networks (as an NB-IoT carrier is only 200 kHz wide and can sit inside or adjacent to a regular 4G or 5G carrier), and delivering higher margin managed enterprise services.

Some other comments point out the potential to use LoRa - indeed, although if you can use LoRa, you probably aren't the target market for NB-IoT. If you want to deploy 50 million smart meters, a nationwide existing network and managed service from the operator starts to get appealing, as does them handling security, isolating the devices onto a private APN, and helping you update devices in the field.

If you are using LoRa, you need to handle this and deploy the infrastructure. To date though, I've seen lots of "unmanaged" NB-IoT testing taking place, but not a whole lot of the "full service managed offering".

Otherwise I would agree entirely with your point about connecting modern IoT devices to the internet, but in this case I think it will end up for enterprise type deployments where they're restricting that for you.


200% this. Pretty soon I'd have to live in a Faraday cage, or put all my devices in a Faraday case/jacket.


I've implemented some use-cases for T-Mobile; they basically peddle our products as a white-label solution to municipalities. One of my biggest beefs with NB-IoT is that 90% of the use-cases I designed would have been better off using LoRaWAN instead. The reason NB-IoT is chosen is obviously that it provides the operator recurring revenue via a data SIM, but apart from cost there are other massive drawbacks, such as the energy draw of an NB-IoT enabled sensor compared to LoRa.

NB-IoT is justified if I know that the data volume of my "solution" might increase due to feature/scope creep (and replacing the battery/sensor isn't going to become an annoyance in 2-3 years, at end of life).

For most use-cases LoRaWAN makes more sense, but it doesn't have the marketing budget that is available to T-Mobile, Vodafone and co.


I work adjacent to this space and LoRa is great (really great, the specs look impossible on paper) but everyone has been peddling proprietary cloud-based SaaS solutions that come and go every year, for about a decade now. For non-cloud/SaaS solutions, nothing seemed to be able to compete against the branding of wifi and bluetooth. I think this is the main reason it didn't take off. 802.11ah looked like it was going to finally become a good option, with wifi branding, but it somehow was never really released into the market.


I am really surprised this isn't already more of a 'thing'.

When I did my postgrad research project back in 2016, I was using LoRaWAN and thought it was so obviously going to be huge in e.g. AgriTech. Surprised not that much has happened with it, tbh.


Cheap LoRa chips have finally been on the market for long enough that ecosystems can grow around them. As an example, for Radio Control vehicles (planes, drones, cars) there have been 3 different systems released over the past year or so, and now an open source system called ExpressLRS is gaining traction.


Most importantly for adoption, it is backed by the major Telcos.


United Therapeutics is trying to "3D print" human kidneys. I pray that they succeed at this venture. It will be yet another world-changing endeavor for Rothblatt. If you aren't familiar with Martine Rothblatt, listen to this interview with Tim Ferriss: https://tim.blog/2020/12/16/martine-rothblatt/


Data Hubs.

It's queryable like a database but it doesn't store your data - it proxies all your other databases and data lakes and stuff, and lets you join between them.

Trino is a great example.


I'm not too deep into the data side of things, but this is interesting to think about.

Aren't things like data lakes and warehouses supposed to address the need for a centralized datastore?

Outside of perhaps an easy-to-apply interface, what benefit would a data hub provide over just streaming duplicates from all of your databases into a single data lake like Snowflake?


Want to cross-ref the ERP database with that stuff team X has in a lake and join it with what team Z has in a DWH? You don’t need pipelines and jobs and working out where to store that data... you just need a hub! And you can query it ad hoc too.

We used to have to copy and shuffle data into centralized systems so we can query and join it.

Data hubs do away with all that. Stop needing to think about storage. Instead, it’s just about access.

There have always been fringe tools, e.g. I once did some cool stuff with MySQL's SPIDER storage engine. But modern systems like Trino (formerly PrestoSQL, a fork of Presto) make it easy. And, I predict, they will hit the mainstream soon.
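To make this concrete, here's a rough sketch using the `trino` Python client: one query, two systems, no copies. The catalog and table names are made up for illustration; each catalog is just a connector configured on the Trino cluster.

    # Sketch of an ad hoc cross-system join via Trino. Assumes a running
    # Trino cluster with a `postgresql` catalog (the ERP database) and a
    # `hive` catalog (the data lake) configured; all names are illustrative.
    import trino

    conn = trino.dbapi.connect(
        host="trino.example.com", port=8080, user="analyst",
        catalog="postgresql", schema="public",
    )
    cur = conn.cursor()
    cur.execute("""
        SELECT o.customer_id, o.total, e.event_count
        FROM postgresql.public.orders AS o
        JOIN hive.lake.customer_events AS e
          ON o.customer_id = e.customer_id
    """)
    for row in cur.fetchall():
        print(row)

Nothing lands in a third datastore; Trino plans the join across both sources and streams the result back.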


> Stop needing to think about storage. Instead, it’s just about access.

100% resonates when you put it that way. Thanks for the explanation!


I don't think these are really under the radar. We have Hue. We also have other apps that act as data hubs, but are slightly more constrained.


Postgres Foreign Data Wrappers
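Same federation idea, built into Postgres. A minimal sketch, driven from Python for convenience; hosts, credentials, and table names are all placeholders:

    # Expose a remote database's tables as local foreign tables via
    # postgres_fdw, then join across them in ordinary SQL.
    import psycopg2

    conn = psycopg2.connect("dbname=local_db")
    conn.autocommit = True
    cur = conn.cursor()
    cur.execute("""
        CREATE EXTENSION IF NOT EXISTS postgres_fdw;
        CREATE SERVER erp FOREIGN DATA WRAPPER postgres_fdw
            OPTIONS (host 'erp.internal', dbname 'erp');
        CREATE USER MAPPING FOR CURRENT_USER SERVER erp
            OPTIONS (user 'reader', password 'secret');
        CREATE SCHEMA erp_remote;
        IMPORT FOREIGN SCHEMA public FROM SERVER erp INTO erp_remote;
    """)
    cur.execute("""
        SELECT l.user_id, e.invoice_total
        FROM local_events AS l
        JOIN erp_remote.invoices AS e USING (user_id)
    """)
    print(cur.fetchall())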


like DB2 Federation?


is that stuff like Presto?


Trino is the community open-source fork of Presto (they were recently required to change the name from PrestoSQL, which they had been using to make the Presto lineage clear).

(not affiliated in any way). https://trino.io/


I've used Presto within Hue. I'm not sure if Presto is, but I know Hue is.


Sounds like GraphQL


Any grid energy storage technology. Whether it's small home units (smart batteries, vehicle to grid, hydrogen packs...) or huge grid scale (liquid air, gravity, liquid metal batteries...)


IPFS and WASM - Combined they let you build apps that don't need a cloud provider, and can effectively be immortal.
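A rough sketch of the distribution half, assuming a local IPFS daemon and the `ipfshttpclient` package; "app.wasm" stands in for your compiled module:

    # Publish a compiled WASM module to IPFS so any peer can fetch and
    # run it by content hash. Assumes a local IPFS daemon on the default
    # API port; "app.wasm" is a placeholder artifact.
    import ipfshttpclient

    client = ipfshttpclient.connect()  # default: /ip4/127.0.0.1/tcp/5001
    res = client.add("app.wasm")
    cid = res["Hash"]
    print("immutable app address: ipfs://" + cid)

The address is derived from the content itself, so the app stays reachable for as long as anyone, anywhere pins that CID - no cloud provider involved.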


Huh, I've looked at IPFS and WASM a lot but never really considered them together as a way to distribute apps in a decentralized way. That is a really exciting possibility. I wonder if IPFS will ever gain enough momentum to make it usable by enough "normal" people (not just devs who have it installed)


Materialize (https://materialize.com/). Although a bit known here, my coworkers never heard about it. I think it's going to be a game changer.


Materialize is so cool. I think these two blog posts have great wow factor for anyone who's worked with streaming data before.

https://materialize.com/lateral-joins-and-demand-driven-quer...

https://materialize.com/temporal-filters/
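If you haven't tried it: Materialize speaks the Postgres wire protocol, so the incremental magic looks like plain SQL. A minimal sketch; connection details and the `orders` source are illustrative, and the exact DDL varies by version:

    # Define a view once; Materialize keeps the result incrementally up to
    # date as new data streams in, so reads are always fresh and cheap.
    import psycopg2

    conn = psycopg2.connect("host=localhost port=6875 user=materialize")
    conn.autocommit = True
    cur = conn.cursor()
    cur.execute("""
        CREATE MATERIALIZED VIEW revenue_by_region AS
        SELECT region, sum(amount) AS revenue
        FROM orders GROUP BY region
    """)
    cur.execute("SELECT * FROM revenue_by_region")
    print(cur.fetchall())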


Any idea how Materialize compares to Apache Flink?


JavaScript and TypeScript replacements.

Started learning Clojure / ClojureScript and keeping an eye on ML languages like ReScript and ReasonML.

I hope that soon I'll never have to write JS/TS code again.


Take a look at Haxe[0] for a compile-to-JS, ML-inspired language. It's relatively mature these days and integrates with existing TypeScript definitions via dts2hx[1].

I talk about this more here: https://news.ycombinator.com/item?id=26084187

[0] https://haxe.org/

[1] https://github.com/haxiomic/dts2hx


In a similar vein, Fable (an F#-to-JS compiler) is gaining a lot of momentum in the F# community. https://fable.io/


Dependent types and homotopy type theory for general purpose programming. I am definitely excited about it, but not really sure whether those hopes will ever materialize as useful technology (even in a limited scope).


Fusion power.

I think traditional tokamaks are 5-10 years from positive power due to better superconductor tech. There is finally private investment in the space and it has been growing at an absolutely crazy pace.

I think in about 5 years there is going to be a fusion power gold rush.


Not exactly under the radar. Also, you have to keep in mind that demonstrating positive power isn't sufficient. SPARC is aiming for Q>2, ITER for Q>10, but Q>20 or 30 is needed for a powerplant. Also, once sufficient gain has been demonstrated, there are many serious nuclear engineering challenges that need to be solved.


Electric cars and planes are going to be huge, of course. Increasingly, batteries will be replaced by fuel cells, and the hydrogen that powers them will be produced by sunlight-driven electrolysis of water.

https://phys.org/news/2021-04-hydrogen-fuel-machine-ultimate...

https://newatlas.com/energy/osu-turro-solar-spectrum-hydroge...

https://uh.edu/news-events/stories/2017/April/05152017Ren-Wa...


The goalposts are actually moving in the other direction. Batteries keep eating fuel cell's target applications.

And there is currently a very large PR effort by fossil fuel companies to promote hydrogen. I'd suggest extreme skepticism about any "news" promoting it at present. Always ask where the hydrogen is actually coming from in the present, not 30 years down the road.


Is this under the radar?


For electric planes, it is. Up until a year or two ago, I had fairly intelligent people still saying electric flight is “essentially impossible.”


AFAICT intelligent people are still saying that apart from short-hop 10 seater planes, electric flight is impossible.


Yeah, that’s what they say if you press them on it today. They’ll keep moving the goalposts as soon as it becomes too obviously wrong, and then never admit to having been wrong in the first place. ;)


Could you show me anyone making substantiated claims that 100-ish-passenger, 1-2 hour flight is doable with battery electric?


Fair enough. Musk was on Joe Rogan talking about it last month so a lot of people have heard of it.


Graphene -- generally has a bunch of uses that are not fully explored and I think it has a lot of potential.

Example company creating anti-COVID solutions with it: https://www.zengraphene.com/


Yosys and open source FPGA toolchains. I think FPGA are vastly underrated and hard to use because of tooling and this is promising in changing it.


Indeed. It seems like we're finally seeing a smidgen of movement in this direction lately as well. Lattice have been at least semi-friendly to people building OSS support for some of their FPGAs, and now QuickLogic have come out with some explicit support[1] for OSS as well. And I have to admit, given that AMD have been at least a nugget more friendly to the OSS world over the years, I'm harboring a faint kernel of hope that Xilinx may move in this direction a bit as well, under their stewardship[2].

[1]: https://www.quicklogic.com/qorc/

[2]: https://www.amd.com/en/corporate/xilinx-acquisition


I work on FPGAs professionally and it is hard for me to come up with many commercial applications for them outside of what they're currently used for.


What if FPGAs were $1 each, would that change it for you?


The problem is getting data to/from the FPGA, which imposes unavoidable latency. If you want to do this fast it can't be done cheaply because it requires too much silicon. Aside from simulation type tasks, the tasks best suited for FPGAs are streaming tasks for this reason since once you've started streaming data through the FPGA you don't need to worry about latency too much.


High-accuracy long-read DNA sequencing. The new chemistry for Oxford Nanopore is getting close to where you can use it exclusively without polishing (down to ~1 in 100 base errors), and the price per gigabase has become competitive with Illumina (but with reads in the 10s or 100s of kilobases in length). On the other hand, PacBio HiFi reads are really reliably high accuracy and still long enough to resolve complex variants missed by Illumina, and only ~2x more expensive than ONT.

Both technologies totally redefine what we mean by "sequencing a genome" and open up broad categories of mutations that are completely invisible to more common forms of genotyping or sequencing.


I’m more excited by a mindset shift that might happen, where people take their own security and privacy seriously, reject entities like Facebook, and companies that provide convenient privacy and security for reasonable fees do well. Think social networks that people pay for making a comeback, like Friends Reunited of yore.

Also anything that is multipurpose. Rope + tarp = shelter, sunshade, awning, hammock, sail, etc. One gadget cooks and chops etc.

Oh and household robots. Already have vacuum-mopping and pool robots. Considering a lawn robot. Clothes folding can’t be far off, right?

Siri is underrated in my circles, I hardly see anyone use it. Social anxiety of yelling at your phone?


This new crypto project just launched its mainnet a few weeks ago: https://chia.net

It's a novel "Proof of" algorithm (Proof of Space and Time) that front loads the resource needs into a Plotting phase, with a very efficient Farming phase after that to process blocks with transactions. Seems like a much more fair, sustainable model for having a secure digital currency.

It also has an interesting Lisp-based programming language on it.

But what excites me is that it's led by Bram Cohen, the dude who invented BitTorrent, one of the best pieces of tech I've used nearly my whole tech life.
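For intuition, here's a toy model of the plot-then-farm split. To be clear: this is just an illustration of the concept, not Chia's actual construction, which is carefully designed so the lookup table can't be faked or recomputed on the fly.

    # Toy proof-of-space: front-load work into a "plot" of precomputed
    # hashes, then answer challenges with cheap lookups.
    import hashlib, os, secrets

    def H(data: bytes) -> int:
        # First 8 bytes of SHA-256, as an integer.
        return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

    # "Plotting": the expensive, one-time phase. More disk = more entries.
    seed = os.urandom(16)
    plot = {H(seed + i.to_bytes(4, "big")): i for i in range(1_000_000)}

    # "Farming": each challenge is answered by a cheap lookup; whoever
    # holds the stored hash closest to the challenge wins the block.
    challenge = H(secrets.token_bytes(32))
    best = min(plot, key=lambda h: h ^ challenge)
    print("best nonce:", plot[best], "distance:", best ^ challenge)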


I think you mean https://www.chia.net


Thank you very much! The idea of having all the computations already in place and just playing bingo/lotto is awesome. This is the true spirit of DeFi.


yep, you're right. I'll fix.


The people who have the most hard drive space will mine the most Chia. How is that any different to just paying with USD to get more? In terms of it being a "much more" fair model.


Like you, I'm struggling to see how this is substantially different from proof of work, besides trading one limited resource (GPUs and power) for another (disks). It surely still comes down to real-world money - whoever can afford to buy the most disks can mine the most crypto.


I think the big part is that farming with hard drives is a much more approachable thing to do than trying to mine BTC/ETH, considering both of those at a minimum require crazy GPU hardware, if not full-blown custom chips, that no normal people will buy. Also, insane energy usage, which Chia doesn't really have.

That being said, yes, with the right amount of investment, someone could try to take over the network. Then again, look how many full nodes are already in the network...

https://www.chiaexplorer.com/charts/nodes


Not just the most hard drive space, but also the very specific, fast kinds of drives you need to plot efficiently.

The idea that you can do a sustainable cryptocurrency that remains sustainable no matter how valuable the tokens become in real money terms is self-evidently ridiculous. There's always some limiting resource you'll hit first, and if the cryptotokens are worth real money, that resource will get scarce.

But Chia is a good example of a brilliant person being so seduced by a challenging technical problem that they lose any ability to see foundational problems that people with a tenth of the brampower would be able to spot instantly.


"Good artists copy, great artists steal".

https://en.m.wikipedia.org/wiki/Proof_of_space



> DNA digital data storage is the process of encoding and decoding binary data to and from synthesized strands of DNA.

If each "bit" of DNA can be either A, C, G, or T, why call that binary?


The data is binary; the encoding doesn't have to be. Most modulation schemes in high-speed digital communication use more than 2 states, but that doesn't make them less "digital" (in the end, everything is analog). Gigabit Ethernet's PAM-5 uses 5 different voltage levels, but one would not call it "quinary computing". Similarly, Wi-Fi's QAM-64 uses 64 possible combinations of phases and amplitudes, and Base64 uses a 64-symbol alphabet, but the data represented by all of them is still binary.
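For instance, a textbook mapping packs two bits per base. Real schemes add error correction and avoid problematic runs like "AAAA"; this sketch ignores all of that:

    # 2 bits per base: quaternary storage alphabet, binary data.
    ENC = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
    DEC = {v: k for k, v in ENC.items()}

    def to_dna(data: bytes) -> str:
        return "".join(ENC[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

    def from_dna(dna: str) -> bytes:
        chunks = [dna[i:i + 4] for i in range(0, len(dna), 4)]
        return bytes(sum(DEC[c] << s for c, s in zip(ch, (6, 4, 2, 0)))
                     for ch in chunks)

    assert from_dna(to_dna(b"hello")) == b"hello"
    print(to_dna(b"hi"))  # CGGACGGC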


I think it’s kinda like modern SSDs which also have more states than just on or off per storage cell, but it’s still holding data that can be written/interpreted in binary.


1) Dr. Rafael G. González-Acuña's analytical solution to the Wasserman-Wolf problem

TLDR: We can now make lenses in any shape we please, not just with parabolas and circles (kinda).

Should have implications for anything that has to do with light: Telescopes, lasers, com-sats, AR/VR, etc.

https://gizmodo.com/a-mexican-physicist-solved-a-2-000-year-...

2) Memristors. We've not found a cheap and stable little component yet like the rest of the 2-lead elements, but it seems to be on the way (says every futurist)

TLDR: We'll be forced to re-do a lot of computer HW, as the memristive ones will (likely) be much faster and cheaper on power. Think coin-cell batteries powering very good image recognition systems as cheap as a dollar-store watch.

https://en.wikipedia.org/wiki/Memristor


Qubes OS, the desktop OS providing strong security through isolation: https://qubes-os.org.


I'm not sure this is exactly under the radar but I'm excited about wearable AR. No it doesn't look real but once you get past that we have some amazing tech on the horizon.

Microsoft and Oculus have hands free controls that actually work. Inside out tracking is progressing quickly. New UX patterns are getting forged.

I'm excited to see what we'll have in a few years' time. In my mind it's far more exciting than something like crypto, but gets much less press.


Inside out tracking is crazy good. I have a Quest 1, and this technology is seriously impressive. Quest 2 is significantly better and cheaper.


Any specific devices? Or just generally the brands of oculus & microsoft?


The Quest 2 packs a ton of power into a very comfortable headset. While not designed for passthrough, it's still quite functional.

The Hololens opens up an entirely new UX flow with accurate enough hand tracking and the ability to keep your hands free to operate a keyboard or any other kind of device.


The GET (Guaranteed Entrance Token) Protocol https://get-protocol.io/

The protocol offers blockchain-based smart ticketing which eliminates fraud and prevents scalping. This has the potential to get huge when events start coming back post-covid.


Soil-science-influenced agriculture; watch 'Kiss the Ground' on Netflix.


This stuff seems so obviously correct that it shocks me that all farmers aren't already doing it. It makes me wonder if there is some catch that the proponents of regenerative ag, etc. avoid talking about.



Why are you interested in this?

If it is for anything other than very LOW power (microwatts), you're going to be disappointed.

It is essentially a beta emitter hooked to a capacitor via some electronics to handle voltage conversion. The thing is, the beta source is use-it-or-lose-it, and very low power. If you scaled this up to power a Tesla, it would be a nightmare, as it would need to dissipate the full power the car requires, all the time, or it would melt down (aka Fukushima)

For a longer debunk - https://www.youtube.com/watch?v=uzV_uzSTCTM



Database applications of category theory!


USDC and other cryptocurrencies that track real currencies. For the first time, it allows anyone to be their own bank, with programmable money. You can have it on your phone for direct-to-merchant payments. You can use software to set up auto-payment systems for any bills or rent, or you could connect it to a service that gives loans on demand against collateral.

I think a lot of really cool innovation is going to come out of easily transmissible programmable money.
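As a sketch of what "programmable money" means in practice, here is roughly what an automated rent payment could look like with web3.py (v6-era method names; the RPC URL, key, and recipient are placeholders, and the USDC contract address should be double-checked before use):

    from web3 import Web3

    w3 = Web3(Web3.HTTPProvider("https://eth-rpc.example.com"))

    # USDC's Ethereum mainnet contract; ABI trimmed to the one call we need.
    USDC = Web3.to_checksum_address("0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48")
    ERC20_ABI = [{"name": "transfer", "type": "function",
                  "stateMutability": "nonpayable",
                  "inputs": [{"name": "to", "type": "address"},
                             {"name": "value", "type": "uint256"}],
                  "outputs": [{"name": "", "type": "bool"}]}]
    usdc = w3.eth.contract(address=USDC, abi=ERC20_ABI)

    me = w3.eth.account.from_key("0x...")                  # placeholder key
    landlord = Web3.to_checksum_address("0x" + "11" * 20)  # placeholder addr

    # USDC has 6 decimals, so $1,500 rent = 1_500 * 10**6 units.
    tx = usdc.functions.transfer(landlord, 1_500 * 10**6).build_transaction({
        "from": me.address,
        "nonce": w3.eth.get_transaction_count(me.address),
    })
    w3.eth.send_raw_transaction(me.sign_transaction(tx).rawTransaction)

Run that on a schedule and you have rent auto-pay with no bank in the loop.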


Automated green farms - https://www.optimal.ag


Decentralized Autonomous Organization (DAO) - I think future societies will benefit a lot from such organization.


The application of computer science and uncertainty logic to create general human programming frameworks, possibly through applying category theory to neuroscience, psychology, daily life.

I'm the only one I know trying to do this. It's changed my life. I'm now applying my ideas in how to choose to lovingly coevolve with my partner and the 2.5-year-old we conceived. The results from this experiment are getting to the point that people are noticing. There exists a unifying spiritual path through my (mis?)application of category theory in my daily life. I am a noob at hacking the human, and I'm saying that after having had major successes within this human body. I also recognize I'm doing way more than many people, so if I'm a noob, so are most-if-not-all people. The Buddha is an example of an elite hacker of the human.

Still waiting for people willing to take the first step, which is cultivating an ideal learning environment within oneself. This means learning to abandon judgments by default.


Can you be a bit more specific? What you say seems very abstract, and probably would only be understandable to someone who already groks the context.


Trustchain.

This will replace Facebook, YouTube, Cloud, Google, Android, everything. (In a millennium or so.)

https://dl.acm.org/doi/pdf/10.1145/3428662.3429744


Maybe it's been a bit more noticed, but I am enjoying the Fediverse so far, especially PeerTube. I think there's a lot of potential there.

Now... the race is on to see if we can fill it with normal stuff instead of letting conspiracy theorists and racists flood it.


I'm bullish on Fusion energy.


OK, to add my own answer, instead of just commenting.

Ummm... to start with, "what everybody else has already said." If I have anything to add, it might be the following (and somebody might have said this already as well, and I just missed it):

Synthetic Biology - this entire field fascinates me, and I expect big things to come in the future when we can customize DNA and grow items we need, that are tailor made to various parameters. This is also the beginning plot-line to many horror novels and movies though, so "everything isn't rainbows and sunshine" as they say.

Nanotech - related to above, but as with synthetic biology, it fascinates me to think what we can do when we have atomic scale self-assembling machines.

AR/VR - maybe not "under the radar" anymore, but I think there's a ton of untapped potential in this space.

Semantic Web - Yes, I'm still a believer in the idea of RDF / SPARQL / etc. I've said enough about this in the past, so I'm not going to drill any deeper here.

AI - maybe more "AGI" than the ML stuff we have today that gets labeled "AI". And saying that is not an attempt to denigrate ML or any of the radical stuff going today. It's just that for as much as contemporary "AI" can do, I think there's a lot it still can't do, and I like to daydream about the potential of AI's that get closer and closer to (and exceed?) human abilities. See above about horror movie plots though. :-(

Fusion: this has definitely been mentioned already, but add me to the list of people who are hopeful/excited about the prospects.

Time Travel: Actually no. I kinda hope this is impossible. I have a feeling that if unrestricted "Doctor Who" like time travel was possible, causality would collapse and all of reality would just become a big, jumbled mess, incapable of supporting life.


Nuclear fusion energy.

Green mobility concepts like:

- self-driving shared cars for the first and last mile
- self-driving trains and buses for urban transportation
- self-driving high-speed trains and self-flying airplanes for longer distances


Elixir.

Yes, most of you have heard of it, but I think it is still very underrated.

Especially interesting are the ML libraries that have come out recently, and OTP 24, whose new JIT compiler gives a ~2x speed improvement to Elixir code, depending on the task.


Neurotech - Sure there is all the stuff about BCI, but we're barely in control of our own brains ATM, so why are we using them to try to control a computer?

We've learned to optimize our bodies through nutrition and physical fitness (even if not everyone does it, we have the know how), but our brains are the next frontier.

I've seen lots of snake oil in this space so far. I was going to link to Halo Neuro, but they've been acquired by a tDCS company - from what I understand, the technology isn't ready yet.

We're building a sleep headband that monitors your sleep state and uses sound to improve your Sleep Performance - https://soundmind.co

Others in this space are Emotiv, Muse, and Dreem.


Passivhaus. Rust (performance == energy efficiency). EV conversions (hybrid drivetrain to replace an ICE). Electromobility in a wider sense. Heat pumps (not so much under the radar anymore tho).



Alright, I'll bite.

We're probably going to see a wave of disruptions from technologies like GPT3.

For example, we might see something like this in the long term:

A) Someone will create a model to accurately convert low-level source code to higher-level source code that does the same thing when compiled down. Think assembly / machine code to high-level code, or even English descriptions of the underlying semantics.

B) At this point, why not pipe some DNA/RNA into the model from A) to get high-level insights

C) Give it a couple iterations and it might be possible to create a compiler. For example... C to RNA

D) Finally... solve problems by creating sequences from scratch instead of re-using bits from mother nature

If we ever do get to D), I sure hope no country tries to use this in a terrible way...


RDFox, a semantic/graph database with incremental reasoning.


- oracles for blockchain -

Blockchains are secure systems because they're isolated systems. But smart contracts aren't very exciting without data from the real world. Oracles are the bridges to supply data from the real world to the blockchain world.

However, a system is only as strong as its weakest link. You'd want the same security guarantees as the blockchain itself provides. So the blockchain needs to "trust" oracles to deliver correct data that's immune to manipulation.

With the rise of smart contracts and full automations, I think oracles will play a huge role in all of this.

The leading project that's working on this is Chainlink.
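The core trust-minimization move, in miniature: never believe a single reporter; aggregate many. This toy sketch is nothing like Chainlink's actual on-chain aggregation contracts, but it shows why manipulating the answer gets expensive:

    # Median-of-many oracle aggregation: a minority of dishonest or broken
    # feeds can't move the accepted value.
    from statistics import median

    reports = {
        "source_a": 2012.50,
        "source_b": 2011.75,
        "source_c": 2013.00,
        "source_d": 999999.0,  # manipulated or broken feed
    }

    price = median(reports.values())
    print("accepted ETH/USD price:", price)  # outliers get ignored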


> oracles

Oh no, not another term the "crypto" people have taken from the "cryptography" people and re-interpreted in a completely different way. :(


E-Waste Recycling, Biorefining precious metals > https://www.mint.bio/


Biotechnology like Elysium Basis and Matter. Aging healthily for longer makes a big difference for things like investments and working longer while healthy.


Gaze tracking. There are retail solutions that are easy to use and highly accurate. It's a very low-friction way to do hands free interactivity.


https://tryfinch.com

It's an API for Payroll. The number of use cases is pretty amazing!


3D Printing.

I believe 3D printing will change the world in less than twenty years. We're currently in the hobbyist stage - think home computers in the late 70's and early 80's. It took a company to see the potential and package it up for anyone to use. I think there will be a breakthrough home 3D printer that will start whole new industries. You will be able to buy physical products direct from anyone. Anyone can design and sell a vase or a bowl or a boomerang, because manufacturing and distribution are no longer barriers to market. Think of when music and video moved from being possible only with studios, to being able to record at a studio, to home recording. Anyways... I'm super excited.


Can I say "I agree" but also add that there may be more to the story than 3D printing? That is, I'd call out "personal fabrication" in an even more general sense, to include both 3D printing and things like inexpensive, personal CNC milling machines and other "personal scale" fabricating devices. Even going as far as more highly specialized devices, like at-home reflow ovens for SMD soldering, and laser cutters. Heck, even items as exotic as plasma cutters are getting cheap enough that almost anybody can own one. See, for example, https://www.amazon.com/Reboot-Portable-Digital-Frequency-Inv...

Edit: I should also add something about the 3rd-party fabrication / assembly services that are bringing a lot of capabilities to bear for the average person, who would not have been able to afford them a few years ago. Look at OSH Park, OSH Cut, JLCPCB and the like. Need PCBs? Done. Need laser-cut metal parts? Done. There are similar services for injection molding, etc., etc., etc.

I'm pretty excited about this space. I just bought my first desktop mill (so new I haven't even unboxed it yet) and my first 3D printer. And picked up a cheap Black and Decker convection oven to convert into a reflow oven about the same time. Definitely excited to start exploring the intersection of all of these tools for "building things" without needing a full fledged machine shop, wood shop, yadda, yadda, yadda.


Starlink has the potential to revolutionize and enable a number of new technologies. Let's say urban electric planes for one.


IMHO, it is Restya Core. (Disclosure: I'm on their private beta.) It can eat the market of Jira, Slack and Asana.


Perovskite for cheaper/more-efficient solar cells. Liquid metal batteries (a la Ambri) for grid-scale storage.


Atmospheric Water Generators


RADAR-based home security, coordinated cameras zooming in & stuff.


Holographic displays

Alternative data, especially for investment decisions


Hydrogen cars and autonomous delivery.


IPFS


A better cryptocurrency: Chia


Neuromorphic computing


For me, it's https://qloppi.com


What am I looking at?


seems to be a parody of those newfangled non-fungible-token markets.


NativeScript ;-)


helium.com


dapr.io


Geometric algebra, which I implemented in software in Julia: Grassmann.jl https://github.com/chakravala/Grassmann.jl

In the future, geometric algebra will likely be part of everything we do, but it is still very unknown as of now.
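For a taste of the machinery, here's a minimal pure-Python sketch of the 2D geometric product and a rotor rotation. This has nothing to do with Grassmann.jl's actual API; it just shows the algebra:

    # 2D multivectors as (scalar, e1, e2, e12) with e1*e1 = e2*e2 = 1,
    # e1*e2 = -e2*e1 = e12, and e12*e12 = -1.
    import math

    def gp(a, b):
        """Geometric product of two 2D multivectors."""
        a0, a1, a2, a3 = a
        b0, b1, b2, b3 = b
        return (a0*b0 + a1*b1 + a2*b2 - a3*b3,   # scalar
                a0*b1 + a1*b0 - a2*b3 + a3*b2,   # e1
                a0*b2 + a2*b0 + a1*b3 - a3*b1,   # e2
                a0*b3 + a3*b0 + a1*b2 - a2*b1)   # e12

    def rotate(v, theta):
        """Rotate the vector v = (x, y) by theta via R v ~R."""
        R  = (math.cos(theta/2), 0, 0, -math.sin(theta/2))  # exp(-theta/2 e12)
        Rr = (math.cos(theta/2), 0, 0,  math.sin(theta/2))  # reverse of R
        out = gp(gp(R, (0, v[0], v[1], 0)), Rr)
        return out[1], out[2]

    print(rotate((1.0, 0.0), math.pi / 2))  # ~(0.0, 1.0)

The same sandwich pattern R v ~R generalizes unchanged to rotations in 3D and higher, which is a big part of the appeal.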


Cheap, 4K, bigger-than-24-inch monitors.

I think a lot of people are only allocated one monitor in most industries. This will change the way they work.

It will also change design.


IoT smart spoons.


Put a chip in it!


www.store-dot.com perhaps.


Digital vaccine passports, as in the HealthIT that revolves around that space


blockchain internet addresses?


Elon Musk's Neuralink. The monkey demo is absolutely insane. The possibilities and the speed with which Neuralink has progressed are breathtaking. This could be bigger than any of the other miracles Musk has performed.


BCI - Brain-Computer Interface. Sure, it has received a lot of attention thanks to the work of Elon and Neuralink, but I still think it hasn't received enough interest, especially for how it'll affect all the different parts of our lives.


I think Bitcoin is genuinely exciting. I know it's not exactly "under-the-radar", but a lot of the discussion on HN is unfortunately only related to the price. It's a bit of a shame, but I can understand it's hard to see through the noise sometimes.

The fact that there isn't more discussion around the cryptography, networking, etc. suggests to me that many are still unfamiliar with the power of the underlying technology.


Yeah, I am doing more research on P2P and things like Merkle trees. https://www.codementor.io/blog/merkle-trees-5h9arzd3n8

I am pretty excited about Ethereum and the related ecosystem.
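The Merkle root idea fits in a few lines. A plain sketch, not Bitcoin's exact rules (Bitcoin double-SHA256s the nodes, for one):

    # Pair up leaf hashes, hash each pair, repeat until one root remains.
    # Changing any leaf changes the root, so one hash commits to all data.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves: list[bytes]) -> bytes:
        level = [h(x) for x in leaves]
        while len(level) > 1:
            if len(level) % 2:           # odd count: duplicate the
                level.append(level[-1])  # last hash (Bitcoin-style)
            level = [h(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    txs = [b"alice->bob:5", b"bob->carol:2", b"carol->dave:1"]
    print(merkle_root(txs).hex())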


Don't get too distracted by the shiny. The Ethereum ecosystem is a train wreck. What we see today is the reinvention of the same old scam tricks that Wall Street has played on people for decades: more printing of funny money, tricks on yield, relending and rehypothecation. It is unscalable, and PoS is an insecure trick. I don't think the value proposition of ether can exist in a PoS world. I also think some elements are interesting, but I am far more excited about Bitcoin! Bitcoin cannot be inflated, takes massive energy to create (this is a good thing!), is censorship-resistant, and is not under any kind of central control. As long as we are careful, this technology can be scaled responsibly.



