Unconventional Uses of FPGAs (voltagedivide.com)
323 points by voxadam 3 months ago | 115 comments



> A ring oscillator in an FPGA can be used as a strain gauge. If you implement a ring oscillator on an FPGA and then bend the FPGA, the frequency of the ring oscillator will shift.

That's horrifying. I love it.


A game controller I worked on was accidentally sensitive to physical strain.

We discovered this after the first prototypes/consoles were sent to early reviewers. They kept seeing weird in-game behavior no one could replicate, even when affected reviewers tried to demonstrate it. Turns out they were gripping hard enough to minutely bend the PCB during intense moments in games, which created voltage spikes the software mistook for button presses. You needed to be angry to replicate it.


Stories like these are my catnip. The 500 mile email is another great one.

How did you end up debugging/reproducing? I conjured a picture in my head of an engineer comically exasperated and squeezing the controller followed by an “Aha!”


There was a bare PCB prototype running on my desk that was in the way. I started cleaning my desk to work on something less frustrating, and it got knocked off by accident. The logger went haywire immediately. That was the hint I needed.


Accident-driven development.


Aka my career


This is definitely one of the reasons I wrote this article. Being able to see cause and effect when a problem is very convoluted is difficult to learn and difficult to teach. You just have to consume a lot of second-hand stories and experience a few yourself.

A story I have heard several times in different instances is light sensitivity; I think this happened to the Raspberry Pi team (a PMIC transient induced by camera flash).


I think one of the Raspberry Pis was resetting on flash because of an exposed/uncapped chip.

Yes. It’s the Raspberry Pi 2: https://hackaday.com/2015/02/08/photonic-reset-of-the-raspbe...


If you work on hardware long enough you run into all kinds of things that make you realize just how little you know.

I had an interesting case around fifteen years ago. I had designed a real time image/video processor using a large Xilinx FPGA at the center of it. The system worked very well, except that it would have problems when flown on a helicopter. Actually, only certain kinds of helicopters, with certain types of cameras.

I scratched my head for months trying to solve this riddle.

Months later, while working on a completely unrelated problem, I touched the FPGA and burned my finger. It was surprisingly hot. This thing normally ran cool as can be, even without a heatsink.

What happened? At first I thought I had a bad board, so I swapped it out. Sure enough, it got very hot, very quickly.

Huh?

That's when I realized I was using footage from one of the helos. For some reason my brain made the connection. I instantly knew why we were having problems with helicopter-mounted cameras and why I burned my finger: Vibration.

CMOS consumes power on state transitions, not so much once a node reaches its final state (low or high). More transitions per unit of time means more power. If the transitions are high-speed edges (which is the case for real-time image processing), it's even worse.

The helicopters in question did not have motion-stabilized cameras. They were hard-mounted to the verticals of the skids and that was that. Which means that every bit of vibration from the helo made it onto the camera. Add that to the helo travelling and you have a perfect storm: every pixel was changing on every frame at 120 frames per second. That translated into a massive increase in power consumption and heat dissipation. Hence my burned index finger and the problems we had with helo-mounted systems.

So, yeah, the power consumed by an imaging system is a direct function of the rate of change per unit of time of pixel data. A still frame consumes nothing when compared to a busy sequence where every pixel is changing. Logical, yet not obvious when you are not thinking in those terms.
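
To put rough numbers on that, here's a toy back-of-the-envelope sketch using the standard dynamic-power relation P = alpha * C * V^2 * f. Every constant below is invented for illustration; only the scaling matters:

    # Toy CMOS dynamic power estimate: P = alpha * C * V^2 * f,
    # where alpha is the average fraction of nodes toggling per cycle.
    C_NODE = 10e-15      # effective switched capacitance per node (F), made up
    N_NODES = 2_000_000  # toggling-capable nodes in the datapath, made up
    VDD = 1.0            # core voltage (V)
    F_CLK = 200e6        # clock frequency (Hz)

    def dynamic_power(activity: float) -> float:
        """Watts dissipated at a given average toggle rate (0..1)."""
        return activity * N_NODES * C_NODE * VDD**2 * F_CLK

    print(f"still frame (~2% activity):  {dynamic_power(0.02):.2f} W")
    print(f"every pixel changing (~50%): {dynamic_power(0.50):.2f} W")

Same chip, same clock; only the data activity differs, and the dissipation moves by 25x.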

You have to burn your finger (or knock stuff off your desk) to learn this kind of thing.


>If you work on hardware long enough you run into all kinds of things that make you realize just how little you know.

I was part of a team commissioning all of the process analyzers in a new chemical plant. We had an oxygen analyzer (measures oxygen content in a process stream) that kept giving off problematic readings. Every single flange and tubing fitting upstream of the analyzer, as well as the analyzer itself, was leak checked repeatedly to no avail.

One of the chemical engineers recommended we start leak checking downstream. Problem solved. Ambient oxygen was getting into the system downstream, diffusing upstream against the process flow, and being detected by the analyzer. All the non-chem-E's were mind-blown.


Funny you should use that example. I have a system where that could be an issue we hadn't anticipated. Now I have to check, thanks.


Don't burn your finger!

These days I almost always have a FLIR camera pointed at any new design I am working on. All it takes is getting burned (literally) once.


Humans can generally tolerate skin contact up to 50 °C. Anything higher and you will pull your hand/finger away.

Keeping with the "unconventional" theme, lips are more sensitive to temperature than fingers, but make sure you don't burn your fingers before you try kissing the chip. With practice, your lips can tell which 0402 passive is getting hotter than others.

Or you can buy a thermal camera.


A coworker once got a 25 W @ 400 MHz RF burn on their hand by accidentally setting a transmitter to high power with a small whip antenna connected, then grasping it without thinking about it.


Before I understood how voltage regulators worked, I thought you could just attach an LM7805 to a 22 volt lithium source for running the electronics on a high-power robot. I learned that a failing/overheating LM7805 will pass the full voltage to your microcontroller, and that the microcontroller core will still be "oven hot" a full minute or two after it releases its innards.


I work in the radio group of an industrial electronics company. Arcane magic, unexplained gremlins, and qualitative standard recipes that happen to work are par for the course.


Very interesting story :) Was the fix to add a beefy heat sink, or..?


Yes! Well. Once I knew what the issue was, I ran FEA thermal simulations and was able to move the heat away from the FPGA efficiently while keeping weight down. It was beefy in the sense that it moved heat well while remaining as light as possible.


Was there not an idea ahead of time, based on the components in the design, of how much power might possibly pass through the image processor or FPGA?


While there are tools that help you compute expected power consumption, nothing beats contact with reality. I just didn't think about the "everything changing at the same time" scenario.


Sorry if I came across as critical, I was mostly curious.

In the last project I worked on with an FPGA (Xilinx Zynq) and cameras, the embedded system had a fairly strict power budget; IIRC the whole design could dissipate something like 45 W, which led to a rather impressive-looking enclosure that functioned as a heatsink. There was of course a margin, but the thermal design was driven by how much power could be run through the system.


No worries, I didn't take it as criticism at all.

Tools have gotten better, of course. However, I think the fundamentals haven't really changed in the sense that accurate modelling is impossible outside of certain domains. The risk is that you can under- or over-design thermal management. This is where experience with a design can be invaluable. As the design goes through iterations, one notes performance, compares to assumptions and estimates, and makes adjustments.

We tend to work on projects that are highly constrained in terms of such things as allowable mass. And so, I can't just liberally apply a multiplier to estimates and call it good. We need to know. With time you develop test suites that stress things enough to achieve pretty decent test-based thermal data.


The power consumption of an FPGA strongly depends on the design you upload to it.


A helicopter should be a pretty good heatsink. No need to add extra mass.


Aside from being impractical, nobody is going to let you even attempt that without going through certification. If every random equipment provider did as they wished, aircraft safety would be compromised.


Fun fact: integrated circuit packages put stress on the die. It’s called “package stress”, and when I worked in flash memory testing we would see measurable shifts in analog circuit behavior after packaging.


Just when you think you've found all the possible side channel attacks, along comes a new one: The mole can Morse-Code-tap their secret messages on your Cisco edge router and abscond with your secrets.


Eagerly awaiting Mordechai's paper on this one.


This does remind me of the Raspberry Pi 2 SoC, which was sensitive to light: https://forums.raspberrypi.com/viewtopic.php?f=28&t=99042


The same thing happens with PIC microcontrollers:

https://www.youtube.com/watch?v=UCXCLR3xq8U


How do you think a solid state strain gauge works?


The most interesting unconventional FPGA application I ever came across was Adrian Thompson's foray into evolutionary algorithms in the '90s. Implementing a "survival of the fittest" approach, he taught a board to discern the difference between a 1 kHz and a 10 kHz tone.

What's fascinating is that the final generation of the circuit was more compact than anything a human engineer would ever come up with (a mere 37 logic gates), and utilized all kinds of physical nuances specific to the chip it evolved on - including feedback loops and EMI effects between unconnected logic units.

Article: https://www.damninteresting.com/on-the-origin-of-circuits/

Paper: https://www.researchgate.net/publication/2737441_An_Evolved_...

Reddit: https://www.reddit.com/r/MachineLearning/comments/2t5ozk/wha...
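
The evolutionary loop itself is conceptually tiny. Here's a toy Python skeleton of the idea; in the real experiment the genome was an FPGA configuration and fitness was measured on the physical chip, so both are stand-ins here:

    import random

    GENOME_BITS, POP, GENERATIONS, MUT_RATE = 64, 50, 200, 0.02
    random.seed(0)

    def fitness(genome):
        # Stand-in objective (count of set bits). Thompson's fitness was
        # how well the physically configured chip separated the two tones.
        return sum(genome)

    def mutate(genome):
        return [bit ^ (random.random() < MUT_RATE) for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
                  for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        elite = population[:POP // 2]              # truncation selection
        population = elite + [mutate(random.choice(elite)) for _ in elite]

    print("best fitness:", max(map(fitness, population)))

The magic in Thompson's result wasn't the loop; it was that fitness was evaluated on real silicon, so evolution was free to exploit analog quirks no designer ever modeled.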

If you think about how intelligence evolved on earth, we had a rich environment of physics for a variety of causes and effects to interact in all kinds of diverse ways (not just boolean 1's and 0's or a limited set of instructions). Makes me wonder if, to achieve true AGI, you need to offer evolution a fairly rich set of primitives on which to flourish.

Edit: Just realized harwoodr beat me to this example.


The strain gauge is an interesting one that I hadn't seen before, and theoretically should work. Very creative!

A few more fun things you can do with FPGA fabric:

* A temperature sensor - timing some ring oscillators against a reference clock generated off-chip can give you a measure of the temperature of the chip because transistor speed is temperature dependent.

* ADC/power supply monitor - Launching a signal down a TDC at a known interval gives you a time measurement that correlates with power supply voltage (EDIT: to be clear, this is a MUCH stronger correlation than with temperature). Also, ring oscillator speed correlates with power supply voltage. This is also a possible attack vector for side-channel attacks on multi-tenant FPGAs and multi-tenant systems that involve programmable FPGAs.

* An oven/thermostat - Switching transistors on and off creates heat by burning power. This can be hooked up to a temperature sensor (either a synthetic one in FPGA fabric or the one in the device ADC/system monitor) with a PID loop to create a temperature control system. This is actually kind of useful if you want to do analog stuff with reasonable stability.
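
Here's a minimal toy simulation of that thermostat idea, assuming a first-order thermal model for the die and a "heater" that is just a bank of toggling LUTs; all constants are invented:

    # Toy closed-loop "FPGA as its own oven". In fabric, temperature
    # would come from a ring-oscillator count against a reference clock,
    # and duty would set how many heater LUTs are toggling.
    AMBIENT_C, SETPOINT_C = 25.0, 45.0
    HEATER_RISE_C = 40.0   # die temp rise at 100% duty (made up)
    TAU_S = 5.0            # thermal time constant (made up)
    DT = 0.1
    KP, KI = 0.2, 0.05     # illustrative PI gains

    temp, integral = AMBIENT_C, 0.0
    for _ in range(600):   # simulate 60 seconds
        error = SETPOINT_C - temp
        u = KP * error + KI * integral
        duty = max(0.0, min(1.0, u))
        if u == duty:      # crude anti-windup: freeze integral when saturated
            integral += error * DT
        # first-order plant: die relaxes toward ambient plus heater rise
        temp += DT / TAU_S * (AMBIENT_C + HEATER_RISE_C * duty - temp)

    print(f"settled at {temp:.1f} C with heater duty {duty:.2f}")

On real fabric you'd trade the floats for fixed-point math, but the loop structure is the same.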

Also, if the author is here, I would appreciate a link to https://arbitrand.com in the "TRNG" section.


It's actually difficult to build something that isn't a temperature sensor!


Usually one builds in a better temperature sensor and calculates a correction as a function of temperature.


I forgot about self-heating FPGAs! I am definitely going to add that to the article at some point. This is a paper I read on this a few years ago. https://ieeexplore.ieee.org/document/6818753


It wasn't clear to me, unless I'm missing something, whether Arbitrand uses hardware external to the FPGA or not. If it does, what type of RNG does it use?

If it's purely FPGA based, is it ring oscillator based?


Yes, FPGA-based, and no ring oscillators. AWS doesn't allow you to use latches or combinational loops at all, but our design has the same area efficiency as the best RO-based designs. We'll publish eventually.

For the FPGA users out there - we will be packaging a few shapes of RNGs as IP in the near future, but this kind of chip/FPGA IP needs a LOT of very subtle verification, which made a lot more sense to do for full cards first.


That bit about stabilizing the temperature was super interesting. I could see that being useful for, like, a synthesizer or something.


It's not uncommon in sensitive analog systems to do something like this, ~but I haven't seen anyone doing it with an FPGA yet~. Very old analog systems (that were a lot more temperature-dependent than modern analog circuits) often were contained in small ovens, and an intermediate step between a normal oscillator and an atomic clock today is to use an oven-controlled oscillator. In both cases, they keep the ambient temperature on the analog devices at 40-50 C to make sure that they operate at a stable temperature regardless of ambient temperature.

Edit: See the sibling comment - someone has found a reasonable circumstance to do it!


My favourite example of this type of thing isn't mentioned in the article - using evolutionary computing to do some of the things mentioned in said article:

https://www.researchgate.net/publication/2737441_An_Evolved_...


I read that paper about 20 years ago - at the time I was very interested in ML, and was torn between neural networks and evolutionary algorithms. This paper caused me to go all in on evolutionary algorithms, and eventually burn out because I could never get any kind of promising result out of it. Wrong choice I guess!


Yeah, I tried my hand at some evolutionary algorithms and found it's all about the fitness function. I'd spend hours and hours refining it and eventually realized that you're really just optimizing the function to fit what you want to see. Kinda took the magic out of it for me.


> 20 years ago... This paper caused me to go all in on evolutionary algorithms, and eventually burn out because I could never get any kind of promising result out of it. Wrong choice I guess!

Wow. 36 years ago, I was super excited about neural networks, but lacked the skill to get good results fast enough; meanwhile, a very talented colleague was getting really interesting results from evolutionary algorithms.

It can be hard to move forward all alone.


Further on this - there's a more modern project that is trying to get the original experiments working on iCE40 FPGAs now:

https://evolvablehardware.org/


more: On the Origin of Circuits (2007): https://news.ycombinator.com/item?id=9885558


I like the strain gauge application best!

I do personally feel that the #1 application of FPGAs is to make short run and high-end hardware fantastically more expensive for business and government (+military) customers.

As it has nothing to do with electronics, I'll put it to the wisdom of the crowd to decide if this application is conventional or unconventional!


> I do personally feel that the #1 application of FPGAs is to make short run and high-end hardware fantastically more expensive for business and government (+military) customers.

Not sure I understand. Take for example applications in communications where you need real-time signal processing. The alternatives (custom ASICs) are several orders of magnitude more expensive. I would actually say the opposite: FPGAs make short-run, high-end hardware affordable to those that are not governments or large corporations.


I kind of empathize with where they are coming from; my electronics version of "this meeting could have been an email" is "this fpga could have been an arduino".

I feel like we're constantly spoiled by the clock speed of computers; they are good enough for so many things. FPGAs can make simple things a bear to implement, and with a microcontroller you will just get there faster.

But there's a bunch of things where FPGAs are like discovering fire, all the discrete digital logic you ever wanted in the palm of your hand (until you have to meet timing).


Apart from my overengineered audio stuff (which is intentionally overengineered, as it is mostly a hobby project/research vehicle), I'm guilty of designing what was essentially a simple scintillation detector around a fast ADC, a somewhat beefy FPGA, and a bunch of synchronous SRAM. The final shipped hardware had an MSP430, a bunch of discrete 74ACT logic, and a bunch of discrete analog comparators instead of the ~100 MSps ADC…

You should always do the simplest thing that could possibly work…


> "this fpga could have been an arduino"

That's giving a lot of credit! I see way more of "this FPGA could have been two logic chips and an EEPROM"


The FPGA in my scope probe that is dutifully implementing the functionality of an EEPROM while loading its configuration from an even larger EEPROM exists in a state of extreme disagreement with your comment.

Yeah, obviously FPGAs are unquestionably awesome devices, but IMO the real problem with them is they have no long tail or economies of scale compared to other device types. Having worked with them personally, I mostly believe this is because there aren't really any great cross-vendor standards analogous to a CPU's ISA. The lack of interoperability absolutely destroys the ability of competition to benefit customers. I'd like to be able to buy a modern $5 part with capabilities equivalent to something like a Cyclone IV, but there's nothing even remotely close.


For a slowly increasing number of use cases, Lattice is starting to change that. Not quite Cyclone IV, but the ECP5 can be pretty decent for the price. Over time I think their new CertusPro-NX line is going to become interesting.


That is good! I am hopeful as well! However I don't think that the parts that you are talking about are fundamentally solving the problems I discussed. You have always been able to go to a vendor outside of the top few and get cheaper parts -- for the small price of having to rework basically everything. Something has to break the vendor lock-in a little more completely before this can fundamentally change.


One project I had to learn to use an FPGA for was to make a working sensor system for the HTC Vive.

It involves timing laser pulses very accurately and quickly and potentially concurrently. A regular microcontroller with interrupts wouldn't work well with 20+ sensors as each interrupt could potentially block another one happening at the same time.

So with an FPGA you define a "block" of hardware that handles one sensor, then copy it multiple times and have each block send the final timing data to an overarching hardware block. That way each sensor has its own dedicated hardware block and they can work in parallel.


For military use, FPGAs with encrypted bitstreams are a pretty interesting application.

The arming process provides the key, and once the single-use weapon hits the target, there's no way for the enemy to get to the original bitstream.

So the enemy can't reverse engineer any secrets that are part of the logic.

Can't do same with ASICs.


> Can't do same with ASICs.

shrug I can't really say anything except that's not the only way to accomplish that sort of thing. And if it's a common enough use case that you need to build it into millions of pieces of ammunition, it doesn't explain why economies of scale never kicked in or a more purposeful custom part was never designed.

Limitless military budgets, by contrast, explain quite well why this /hasn't/ happened.

The cost of making a chip is proportional to the size of the chip and the process node, and that's the end of the story. An FPGA and a CPU quite literally cost the same to manufacture; that doesn't explain why the one is categorically priced 10 times higher than the other by die area, even in the long tail of the supply chain.


> Can't do same with ASICs.

You can make latches with an ASIC perfectly well. Or even more complex memory elements.


Sure, but the point was that nothing of the original logic will be recoverable, once the FPGA loses power.

Unless you make an ASIC that's effectively an FPGA.


FPGAs are ASICs. :)

The benefit being highlighted is that the "executable code" of the system is stored in volatile memory. The same can be done for a more traditional software system.


To nitpick, FPGAs are COTS (Commercial Off The Shelf) components, since a single FPGA can be bought to be used in a huge number of different products. An Application Specific Integrated Circuit is designed for use in a single product, as the name says. A classic example is the two big chips Apple designed for the Apple //e to replace the large number of TTLs used by the Apple ][+.

To be fair, more people use the term ASIC wrong than right.


I think that's a reasonable distinction to make. I also think it's more like "firmware" insofar as it's a squishy, environment-dependent usage.

It seems unreasonable to me to cite the name "application specific" and conflate that with "single product." Nobody would debate that an Nvidia Tegra T20, for example, is an ASIC, and yet it is contained within numerous products. An even more prevalent example would be any TI buck converter, used in a likely unknowable number of products.

I expect the level of application specificity needed to satisfy a particular view on ASIC varies by exposure and domain usage, which is why I think it's more like "firmware" than like "program" in a taxonomy of names. I also think that ASIC is a kind of parent group, members of which are things like microcontrollers, microprocessors, memory devices, FPGAs, and CPLDs. To draw such a hard contrast between "ASIC" and "FPGA" seems to not appreciate that an FPGA necessarily has fixed design elements (components, peripherals, etc.) to effect its function and provide tighter-tolerance functionality like high-speed signaling, which are limited resources by locality (only assignable within some bank of pins, for example).


On FPGAs for military applications:

- IP being ephemeral is a feature, preventing duplication and extraction by an adversary.

- IP doesn't leave your organization: by deploying an FPGA, no ASIC fab has access to it.


FPGAs aren't inherently secure. I'd assume that a nation-state-level actor would have the ability to extract the bitstream from an operating FPGA, which would invalidate both of your statements if additional protections and security considerations are not met in the hardware design. But you are correct that an FPGA readout is a far more difficult attack than, say, bypassing the read protection on a microcontroller's flash. At the same time, it might be easier to attack an FPGA than a modern CPU's secure area/secure enclave.

That is all to say the security is not coming from the FPGA but from a systems design process that prioritizes security. There are a number of solutions to the problem which may or may not involve FPGAs, but when you scramble it all up along with the perverse incentives driving the pricing/costing of these contracts, it's pretty clear that good+expensive solutions are chosen more often than good+inexpensive solutions.


What are you doing with a Xilinx bitstream even if you have it besides producing more of the device without authorization?

How exactly are you analyzing it for any meaningful information?

It's absolutely not "easier" to attack an FPGA in this manner because the information yielded isn't something a smart fellow can just pop into IDA and glean every secret from: it's a highly optimized gate configuration with massive amounts of redundant register, LUT, etc. insertion. All structures resembling "code" are gone.

You'll never see RTL from this information and there's nothing like assembler for it.

All of this is assuming that the nation state actor can keep the device powered while doing this extraction, because once that voltage drops even a couple hundred mV below the allowed value, that bitstream is gone. Tamper switch, timer, batteries dead, a 30 cent microcontroller analyzing environmental conditions, velocity, etc. and cutting the power: it's gone forever.


I'm not suggesting that FPGAs are easy to exploit; I'm saying that the premise that they are fundamentally secure and/or the only way to implement this requirement is wrong. Having a hypothetical discussion about how an FPGA might be attacked is not the point. Your entire premise is a handwaving argument reliant on security by obscurity, and furthermore it's not even correct. Once you can reconstruct the netlist, it's game over.


Isn't that how those Fortezza crypto cards worked? The chips leave the factory unclassified, then the NSA loads their classified Suite A algorithms onto them?


I think you meant more affordable or less expensive? Or even viable/possible at all.


Are you saying that FPGAs increase the production cost of low rate (<1000 units) hardware? Compared to ASIC fabrication?


Probably compared to microcontrollers, which are getting fast enough to do a lot of what used to take custom hardware or FPGAs.


Author here, thanks for reading, I just scaled the VPS to hopefully handle the traffic better, hahaha


Well, joke's on you: your website worked fine for me; however, HN itself was down after I read your article :p.

Good job!


My favorite unconventional use of an FPGA was when Costis used clock glitching and precise undervolting to extract the boot ROM out of the Game Boy Color. There was an FPGA involved at some point in the process.

http://web.archive.org/web/20091001114207/www.fpgb.org/?page...


This may be unconventional in the sense that most FPGAs aren't used for it, but it's quite common in general, and you can buy commercial products that use FPGAs in this way (e.g. ChipWhisperer)


I've actually been working on a personal project using asynchronous ring oscillators on FPGAs to perform meaningful computation. If you make the oscillators interact in the right way, you can leverage them to solve certain graph problems!

Once I finish the write-up I'm planning on posting it on HN :)

https://github.com/zbelateche/digial-ising
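
For anyone curious what "oscillators solving graph problems" means, here's a toy software analogue (not this project's actual FPGA implementation): Kuramoto-style phase oscillators with anti-ferromagnetic coupling repel their neighbors' phases, a second-harmonic injection term snaps phases to 0 or pi, and thresholding the settled phases yields a heuristic max-cut:

    import math, random

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # small example graph
    n = 4
    nbrs = {i: [] for i in range(n)}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)

    random.seed(1)
    phase = [random.uniform(0, 2 * math.pi) for _ in range(n)]
    K, SHIL, DT = -1.0, 0.5, 0.05  # coupling, binarizing injection, time step

    for _ in range(2000):
        phase = [p + DT * (sum(K * math.sin(phase[j] - p) for j in nbrs[i])
                           - SHIL * math.sin(2 * p))
                 for i, p in enumerate(phase)]

    spins = [1 if math.cos(p) > 0 else -1 for p in phase]
    print("spins:", spins,
          "cut edges:", sum(spins[a] != spins[b] for a, b in edges))

Like any Ising-machine heuristic it can settle into a local optimum; the interesting part is that the "compute" is just oscillators pulling on each other.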


I promise I'm not trying to rain on your parade, and I blame the academics for this, but:

1. NP-complete problems are (today) global search problems (3-SAT algos search for a satisfying assignment...);

2. Combinatorial search spaces are not only non-convex, they don't have gradients (sub-gradients don't count) and even if they did, without convexity, it's still hopeless.

Thus, you cannot (today) appreciably parallelize NP-complete algos, you can only draw more lottery tickets (for where to seed your search), i.e., constant factor improvement.

I'm just putting this out there in case someone gets tantalized by the "fancy math/physics/fpga thing solves NP-complete problems" hook. Yes it can but no better than your computer (despite what paper authors claim in order to get published). To wit: if this were a compelling solution to NP-complete problems it would either a) already be a proof that NP=P or b) be implemented in absolutely every single SAT solver (using GPU or whatever rather than FPGA). In contrast, every single SAT solver uses CDCL which is ~roughly DFS with clever backtracking.

And I say this as someone that has investigated solving ILPs on FPGA - i.e., I wish it were true but it ain't.


This is a personal project aiming to replicate analog coupling dynamics with asynchronous digital circuits on FPGAs, not a claim that we can solve NP-complete problems with a fundamental speedup.

Also, these sorts of Ising architectures get their speed-up from oscillator interaction, not parallelism. It's relevant to this article because it leverages oscillators on FPGAs in a nifty way :)


> This is a personal project aiming to replicate analog coupling dynamics with asynchronous digital circuits on FPGAs, not a claim that we can solve NP-complete problems with a fundamental speedup.

Then you shouldn't have led with that? It's like academic clickbait to say "here's this thing I'm working on, it uses XYZ technique to solve REALLY HARD PROBLEM" but omit the "...very poorly" part. Like I said I don't entirely blame you, I blame the academics that have an entire cottage industry and culture around these kinds of "results".

> These sorts of Ising architectures get their speed-up from oscillator interaction, not parallelism.

Not sure what you're trying to say - it's quantum annealing "brought to life" using simple 2-state (up/down) particles/phonons/whatever you want to call them. The quantum in quantum annealing means superposition, i.e. parallelism; your implementation is basically a D-Wave machine without the qubits.


I dunno man, I think saying "this is a personal project of mine" and linking to an open source github repo implies that it's a casual side project. I never made any performance claims, it's just for fun. I'm not trying to put up clickbait.

And yeah, I think the fact that you can build a classical, analog, D-Wave-sans-qubits machine on an FPGA is cool!


The digital Ising thing was a very interesting line of research that Fujitsu has been trying to sell for 10 years or so: https://www.fujitsu.com/global/services/business-services/di...

They presented their v1 chip architecture at ISSCC in 2015 much to the consternation of a few quantum computing people who (rightly) objected to the characterization of it as an "Ising" chip, because it cannot represent superposition.

It's a pretty cool sort of system that should be able to solve pseudo-energy-minimization problems, but they are only theoretically sound when energy minimization corresponds to minimizing a Lyapunov function. Sadly, it's not clear that there is a way to generically construct a Lyapunov function with a minimum that corresponds to an NP-complete problem. The Fujitsu device supposedly is good at "easy" versions of problems, but so is a SAT solver.


The Fujitsu device is a synchronous digital architecture. I'm using asynchronous oscillators, more like Lo et al. (https://www.nature.com/articles/s41593-024-01607-5)

I personally think it's super cool that we can replicate analog coupling dynamics using asynchronous digital logic!


I hate to burst your bubble, but when you do the dynamic systems math, they are sadly equivalent. You are working in the continuous time domain while the synchronous devices are working in the discrete time domain.


In practice, though, asynchronous systems can run a lot faster than their synchronous counterparts! Also, I'm mostly working on this project because it's cool and interesting to build oscillator-based computing systems on FPGAs, not because I'm trying to build a practical system.

That's why I posted it here: it's a fun use-case for asynchronous stuff you can do on FPGAs.


Yeah, it's nice to go faster, but "faster" will not outrace P != NP.

In all seriousness, I'm hoping that "Lyapunov computing" will get some more theoretical attention, because it is very easy to simulate extremely complicated dynamic systems (not constraining yourself to linked oscillator systems), but quantum computing has sort of sucked the air out of the "alternative computing technologies" ecosystem. In any case, the circuits have far outpaced the development of theory in this area.


I just skimmed it. Quite over my head in electronics. Is it stuff like this that you’re doing and looking for?

https://www.dcs.gla.ac.uk/~wpc/reports/HwRelaxParadigm-submi...


I started playing with Clash recently, which is basically a Haskell-to-FPGA compiler.

It's very cool and I'm slowly making progress, but man it feels like I'm learning to program all over again. I didn't take any electronics or computer engineering courses in college (focused a lot more on the discrete-math section of computer science), and the closest I've ever been to this is GPIO programming on an Arduino.

It's really rewarding though, and it's clear that it's a skill that might come in handy some day in the future. I can think of a few cases where I'd benefit from dedicated hardware for some stuff.


One of my favorite FPGA gadgets is the Mojo 2 headphone amplifier by Chord Electronics. The amplifier is nearly flawless. Coming from the old days of tube amplifiers and amazing analogue devices, it is nice to see how far digital circuitry has come with analog applications.


https://www.audiosciencereview.com/forum/index.php?threads/c...

It's not bad, but it's not the best, and pretty much everything that measures better is using a more conventional architecture and in many cases is significantly cheaper. Chord's use of FPGAs gets points for novelty, but I don't think it has any practical merit.

https://www.audiosciencereview.com/forum/index.php?threads/c...

Even their $14,000 monster desktop DAC/Amp gets beaten by units a fraction of the price in objective measurements. The audiophile market has taken a fascinating turn since ASR arrived on the scene, the snake oil salesmen are still on their bullshit but there has been an uptick in companies doing legitimately good engineering and making products with phenomenal performance for dirt cheap. There are DAC/Amp combos under $200 which are objectively perfectly transparent well beyond the theoretical limits of lossless CD audio now.


I will give Chord some credit for the idea of running the DAC super fast. I play music and occasionally record myself, and I can tell you that when you look at the waveforms, there is definitely a component of the human ear that can distinguish time intervals faster than 1/(22 kHz) despite being unable to hear frequencies over about 15-20 kHz (e.g. very tight flams in drum parts are audible). DSD had this idea, and I am 100% on board with it. Despite being a gold standard because of how it's produced, lossless CD audio cuts off this "high-frequency" component.
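
For scale, a quick bit of arithmetic (just the numbers, not a claim about any particular DAC):

    # One half-cycle at the 22 kHz band edge, in microseconds.
    print(1 / 22_000 * 1e6)  # ~45.5 us

    # For comparison, the psychoacoustics literature puts binaural
    # timing discrimination on the order of 10 us, well below that.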

However, the actual analog parts of Chord's system around the fast DAC and the upscaling circuit seemed to be sort of suspect to me, from the handling of reference clocks to the nature of the output amplifiers. I'm glad ASR has confirmed those suspicions before I bought one.

At some point, I started a project making a multibit DAC running at 1 Msps with the same kind of FPGA upscaling (although ADI's SHARC DSPs are amazing) but a much more conventional frontend and better handling of clocks, but I realized it was too complicated to do while also doing all the other things I was doing at the time (and way too complicated now). The idea is still on the shelf for later (although it's probably already obsolete as a concept - delta-sigma is really good).


As a musician, I absolutely love my "so small/light I forgot I had it in my backpack" 500 W bass amp, compared to a 36 kg tube one. Class D is the shit.


Fellow bass player here, I concur. Get a transparent class D and pair with a tube preamp of your choice. I love my Ampeg, but it lives in the basement now and doesn't leave.


Wow, just looked that up. That's a chunk of change for a headphone amplifier.


Agreed. I personally think it is the ultimate piece of equipment for sound reproduction from computers or personal recordings, unless one has a set of very specialized cans that it couldn't even drive. But of course sound sensitivity is subjective and there are limits to what some people care about, so there may exist cheaper options with some comparable quality, but at some point I gave up the quest with that little gadget.


Suppose you had a sensor whose analog signal you wanted to modify in real time; for example, you did an engine swap and the new engine's sensors weren't compatible with the original ECU. Say they used a different voltage range or frequency. Would an FPGA be overkill for this application? What would be the correct topic to google?


If it's a digital sensor and you're modifying a digital bus, then yes*. If it's an analog (voltage-based) sensor, then no, but if you know the input and output ranges you can probably build a gain stage out of much cheaper and simpler components.

*"Real time" is complex though, the best you can probably do is slightly longer than the time it takes to receive plus transmit the packet, best case slightly longer than the longer of the two processes.


Thank you very much. Do you have general suggestions of where to look? Do I "just" need to learn analog circuit design?


Can you give me a better idea of what the two signals are? I can probably point you in the right direction.


Kind of related to this post: https://news.ycombinator.com/item?id=39735665

> The solution to these problems came from Khuri-Yakub’s lab at Stanford University. In experiments in the early 2000s, the researchers found that increasing the voltage on CMUT-like structures caused the electrostatic forces to overcome the restoring forces of the membrane. As a result, the center of the membrane collapses onto the substrate. A collapsed membrane seemed disastrous at first but turned out to be a way of making CMUTs both more efficient and more tunable to different frequencies[...]


A more extreme version of the ADC/DAC projects mentioned here is my (old) project that built a Bluetooth transceiver using just an FPGA and an antenna (and a low pass filter if you don't want the FCC coming after you): https://github.com/newhouseb/onebitbt

Tl;dr: if you blast in/out the right 5GHz binary digital signal from a SERDES block on an FPGA connected to a short wire you can talk to your phone!
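
In toy form, the core trick is one-bit quantization of an oversampled carrier; this sketch makes a bare unmodulated tone (the real project adds the actual modulation and a receive path, and these numbers are only illustrative):

    import math

    F_BIT = 5.0e9        # serializer bit rate (illustrative)
    F_CARRIER = 2.402e9  # BLE advertising channel 37
    # Sign of the carrier sampled at the bit rate: the bitstream's
    # fundamental is the carrier, and the filter removes the
    # out-of-band quantization energy.
    bits = [1 if math.sin(2 * math.pi * F_CARRIER * n / F_BIT) >= 0 else 0
            for n in range(64)]
    print("".join(map(str, bits)))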


> Bluetooth transceiver

> if you blast in/out the right 5GHz binary digital signal

These two things are not related.


Very cool idea :)


For anyone less technical or unfamiliar, here's a decent visual explanation of how FPGAs are constructed. [0]

Generally, there are 5 types of resources:

1. Clocking and clock distribution

2. Interconnect and bus logic

3. Sequential logic (state)

4. Combinational logic (functions)

5. Specialized blocks that implement complex functions

It's the responsibility of the EDA toolchain to simulate and validate that an FPGA "program" will satisfy timing requirements. As the article mentions, it is sometimes possible to violate these constraints purposely when undefined or nondeterministic behavior is desired.

Notes:

0. (PDF) https://inst.eecs.berkeley.edu/~eecs151/sp20/files/lec5-FPGA...


I think I have done most of these at one time or another. For example: strain gauge, random number generator, glitch generator, antenna.

Unfortunately, always unintentionally. Which is much, much less fun.


The silicon strain-gauge effect is well-known, and has to be accounted for when mounting precision chips like voltage references. Here's one where it's clearly visible as two slots in the PCB on either side of the precision chip, to prevent board strain from being transferred up into the chip:

https://voltagestandard.com/01%25-voltage-references


Further down the line... that chip has a plastic package. Plastic is hygroscopic, i.e. it will take up water. Even the minute amounts of water from the humidity in the air. This deforms the plastic, and adds additional strain to the IC.

The chip you linked is in the 3-8 ppm/°C range. Stuff in the 0.5 or 0.05 ppm range like the LM399/LTZ1000 is all in metal cans for that reason.

(another fun fact... the LTZ1000 ultra precision reference has an output voltage of 7.0 to 7.5 Volt. It's precise, but not accurate)


After the 1958 Taiwan Strait crisis, when mainland China recovered an unexploded Sidewinder and reverse engineered it: execute+write-only missile guidance memory.


What specifically are you referring to?


I edited my comment, but pre-FPGA, much of our missile guidance electronics was entirely analog, and thus observable if your buddy, say, misplaced one in the side of a MiG.

Now the bitstream can be downloaded as needed, and it bit-rots quickly after power loss.


Offtopic, but I love unconventional uses for things. I once saw a book about how to build unconventional toys with model airplane engines, however, I have never been able to find it again... Does anyone recall the title of this book?


Self-destruct circuitry to blow keys if the tamper signal bit is set? I'm not sure these can totally hide what the key is; I suspect there are recovery options from delidding the chip, but... maybe not?


Very fun. I particularly like the FOSS TRNG in 100 LEs. It's great that I can get open source HDL and compile it on any hardware supported by yosys.


I sometimes wonder if something as changeable as an FPGA can be made for asynchronous circuits...


Yes, quite easily; we made a few different versions. There are many more benefits than just runtime-programmable asynchronous circuits.


Links/pointers? I know only of Achronix' original Speedster series, which was alas abandoned for a conventional architecture ("Power" was the reason I was told).


Reconfigurable asynchronous logic automata: (RALA) https://dl.acm.org/doi/abs/10.1145/1707801.1706301

Morphle Logic https://github.com/fiberhood/MorphleLogic/blob/main/README_M...

https://core.ac.uk/download/pdf/276277602.pdf

With Google Scholar you can search for 'asynchronous programmable logic'.

For example, asynchronous logic with standard FPGAs:

https://www.researchgate.net/profile/Jerome-Quartana/publica...

http://vlsi.csl.cornell.edu/~rajit/ps/fpga.pdf



