Most of the attacks I see on Nordic devices are power-based, where cutting the power for a brief instant causes protection instructions not to run.
This one is entirely different, and attacks the initialization code directly. This code has no restrictions on its ability to access memory, allowing a full dump.
Unless I misunderstood the article [1], it describes this very same HW-based method of power-glitching the chip at a crucial time, using an external STM32F0 to control the Vcore going to the nRF, basically preventing it from checking at boot whether it's correctly locked.
After managing to connect through glitching, they dump the FW, then turn off APPROTECT, reflash, and have open debug access.
> Our attack setup thus consists out of 1) a transistor connected to the CPU core supply voltage and ground, 2) a dev board to control the glitch timing, and 3) a debug probe to try to access the debug interface.
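As a rough illustration of what the dev board does (a sketch only, not the authors' code; the pin assignments, timing ranges, and the swd_probe() helper are all hypothetical), the control loop amounts to sweeping the delay between releasing reset and pulsing the transistor:

    # MicroPython-style sketch of a glitch controller: power-cycle the
    # target, wait a candidate delay, short Vcore briefly, then check
    # whether the debug port answers. All names and timings are made up.
    from machine import Pin
    import time

    reset = Pin(0, Pin.OUT)    # target reset line
    glitch = Pin(1, Pin.OUT)   # drives the transistor across Vcore

    def swd_probe():
        # Placeholder: in practice a debug probe (e.g. driven by
        # OpenOCD) would test whether SWD now responds.
        return False

    def try_glitch(delay_us, width_us):
        reset.off()
        time.sleep_ms(10)
        reset.on()                  # release reset
        time.sleep_us(delay_us)     # wait for the protection check window
        glitch.on()
        time.sleep_us(width_us)     # brief Vcore dip
        glitch.off()
        return swd_probe()

    for delay in range(1000, 5000):
        if try_glitch(delay, 1):
            print("debug unlocked at delay", delay)
            break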
This really isn't viable in somewhere like the UK.
At our current electricity prices, if that machine ran at 100W, you'd be spending the equivalent of what you paid for it EVERY month in electricity.
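Back-of-envelope, with the tariff as an assumption (~£0.30/kWh, roughly recent UK rates):

    # 100 W drawn continuously at an assumed ~£0.30/kWh tariff
    watts, price_per_kwh = 100, 0.30
    kwh_per_month = watts / 1000 * 24 * 30                   # ~72 kWh
    print("~£%.2f/month" % (kwh_per_month * price_per_kwh))  # ~£21.60

That's in the same ballpark as the £30-40 such machines typically sell for.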
I've always seen Americans, who typically have more space and cheaper energy and fuel, suggest that people grab 1U servers for $40 and the like, and then respond just as you did when people ask them _not_ to.
My car also does 50mpg, and I still pay more per mile for fuel.
In Europe, we need to spend more to buy efficient tech because the running costs require it.
It's perfectly viable in the UK. The Optiplex 7050 is available for under a hundred pounds, and the PSU tops out at 65W. It idles under eight watts, which is comparable to any other router capable of handling gigabit traffic.
I think the misunderstanding is the assumption about form factor. The 7050 series is available in a case the size of a paperback book.
I'm willing to accept your point if you, or they, can confirm that they are in fact talking about an Optiplex that idles at 8W for 43 dollars, which I doubt.
Despite Dell's willingness to slap the same series number on everything from an ATX tower to a "paperback book", they are far from the same thing.
I'm willing to accept I may be wrong, so let's see.
First page, the one on the right. I don't see why you'd doubt an 8-watt idle, since it's basically a laptop processor and chipset. Please note that these models are available with both 35W and 65W TDP processors; I'm talking about the (generally cheaper) 35W models, of course.
I doubt the original poster is talking about the small "book"-sized micro form factor at ~$40.
I expect that one to cost at least double, if not more (a quick eBay search puts the micro form factor at closer to a £200 average), and the tower form factor, with its 240W PSU, is likely to be drawing much more than ~8W in a typical configuration.
My point stands: if you want to spend £35 on an x86 machine, you'll pay for it in electricity costs; spend ~£200+ and you'll likely save money overall if it lasts a decent period.
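A rough break-even sketch; every number here is an assumption for illustration, since idle draw varies a lot between configurations:

    # £35 tower assumed to idle at 40 W vs £200 micro at 8 W, £0.30/kWh
    cheap_price, cheap_watts = 35, 40
    micro_price, micro_watts = 200, 8
    price_per_kwh = 0.30
    monthly_delta = (cheap_watts - micro_watts) / 1000 * 24 * 30 * price_per_kwh
    months = (micro_price - cheap_price) / monthly_delta
    print("extra ~£%.2f/month; break-even after ~%.0f months"
          % (monthly_delta, months))   # ~£6.91/month, ~24 months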
It probably doesn't run at 100W, although I'd be quite curious to know whether someone actually measured the consumption, since I have the HP version of this.
The reason I don't expect it to draw that much is that I have an 8-core Xeon with 2×10k RPM + 4×7200 RPM drives, a dedicated RAID card, and an integrated BMC, and it reports ~100W power draw when booting up. When sitting around doing nothing, but with the drives spinning, it reports a draw of about 80W.
edit: The BMC alone draws 11W, judging by the reported power consumption when the server is off, but plugged in.
The main issues I see with that option are space and power; the micro-form-factor (NUC-sized) clone systems that people are discussing in here are quite nice for that. Functionally, an old Dell tower should be fine.
Sure, but those tend to be much more expensive, so depending on the actual cost of electricity in your area, it may take a long time for you to break even.
However, if you want to have a really small appliance-like pc, then yeah, the NUC-sized ones are much better.
The low-power AMD-based systems everyone recommended a couple of years ago were indeed great, until the eBay resellers realized what was going on and started slapping "opnsense pfsense AES-NI" etc. on their listings and doubled their prices. The systems went from selling at $40-50 to over $100.
Also, the AMD processor in those systems is getting extremely long in the tooth.
Yes, and if it's about the most minimal non-empty quine, I'm pretty sure it would be this (%r re-inserts the string with its quotes via repr(), and %% collapses to a single %, so the output reproduces the source):
s='s=%r;print(s%%s)';print(s%s)
Anyway, let's make it a bit less boring and turn it into a C/Python polyglot (not relay) quine:
#include<stdio.h>
#define len int main(){char*
#define zip return 0;}
#if 0
def printf(f,*a):print(f%a,end=str())
#endif
len
s="#include<stdio.h>%c#define len int main(){char*%c#define zip return 0;}%c#if 0%cdef printf(f,*a):print(f%%a,end=str())%c#endif%clen%cs=%c%s%c;printf(s,10,10,10,10,10,10,10,34,s,34,10);zip%c";printf(s,10,10,10,10,10,10,10,34,s,34,10);zip
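A quick way to sanity-check it (a sketch; assumes the source is saved as quine.c with gcc and python3 on the PATH):

    # Verify the polyglot reproduces itself under both toolchains.
    import subprocess
    src = open("quine.c").read()
    subprocess.run(["gcc", "quine.c", "-o", "quine"], check=True)
    out_c = subprocess.run(["./quine"], capture_output=True, text=True).stdout
    out_py = subprocess.run(["python3", "quine.c"], capture_output=True, text=True).stdout
    assert out_c == src and out_py == src
    print("both outputs match the source")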
I don't agree. A subset of high performance applications in a datacenter can use a new protocol while still supporting TCP for other applications. I'm not saying this is easy, or even worth it, but it isn't all or nothing.
I think the parent commenter would be pretty surprised how much code would really need a rewrite. Google and Amazon do not have hundreds of thousands of engineers messing around in the mud with sockets and connections and IP addresses and DNS. There's a service mesh and a standardised RPC framework. You say "get me the X service" and you make RPCs to it. Whether they're transported over HTTP, UDP, unix domain sockets, or local function calls is fully abstracted.
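To make that concrete, here's a toy sketch (all names are hypothetical, not any real framework's API): application code asks for a service by name and issues calls through a stub; the transport is chosen in exactly one place.

    from typing import Callable

    Transport = Callable[[bytes], bytes]

    def tcp_transport(req: bytes) -> bytes:
        return b"via-tcp:" + req   # stand-in; a real mesh opens sockets here

    class Stub:
        def __init__(self, transport: Transport):
            self._transport = transport
        def call(self, method: str, body: bytes) -> bytes:
            return self._transport(method.encode() + b"|" + body)

    def get_service(name: str) -> Stub:
        # Resolution and transport selection live here; swapping TCP
        # for a new protocol touches this, not thousands of call sites.
        return Stub(tcp_transport)

    orders = get_service("orders")            # "get me the X service"
    print(orders.call("GetOrder", b"id=42"))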
If you need to go that fast why not implement a layer 2 protocol?
The point of these abstractions is that they are insurance. We pay taxes on best case scenarios all the time in order to avoid or clamp worst case scenarios. When industries start chasing that last 5% by gambling on removing resiliency, that usually ends poorly for the rest of us. See also train lines in the US.
> If you need to go that fast why not implement a layer 2 protocol?
How? Datacenters aren't a single link; you need to route packets.
> The point of these abstractions is that they are insurance. We pay taxes on best case scenarios all the time in order to avoid or clamp worst case scenarios. When industries start chasing that last 5% by gambling on removing resiliency, that usually ends poorly for the rest of us.
I'm not sure what this means. There are other transport level protocols already, UDP is in fairly regular use. Is your argument that TCP offers us some insurance that Homa will not?
This took me 14:02 on the first try. Definitely got faster as time went on but I found myself getting stuck on a few squares for no good reason. Starting from the destination and working toward the source helped.
Dating has a large emotional aspect. Using heavyweight management software for this kind of thing is ridiculous. If you need a CRM to remind you of the emotional impact you had on a date, then some part of the process has gone horribly wrong.
Honestly, this sounds like the gulf between men's and women's experiences of dating, and between neurotypical and neurodivergent people. Some ND people like helpers like this; and women get so many suitors that a system to help manage them may actually be beneficial.
Cloud services do go down and it's out of your control when they do. If something must work, you need redundancy.
Kafka is often used for financial applications that must not miss events, so having a backup buffer is a reasonable strategy for those use cases. Things like tracking data are likely not worth backing up, given the high data volume and the low external visibility when data is dropped.
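As a sketch of what a backup buffer can look like (assuming the confluent_kafka Python client; a real system would also batch, fsync, and deduplicate on replay):

    import json
    from confluent_kafka import Producer

    SPOOL = "spool.jsonl"   # local fallback buffer
    producer = Producer({"bootstrap.servers": "localhost:9092"})

    def _on_delivery(err, msg):
        if err is not None:
            # Delivery failed: keep the event on local disk for replay.
            with open(SPOOL, "a") as f:
                f.write(json.dumps({"topic": msg.topic(),
                                    "value": msg.value().decode()}) + "\n")

    def send(topic, event):
        producer.produce(topic, json.dumps(event).encode(),
                         callback=_on_delivery)

    send("payments", {"id": 42, "amount": "9.99"})
    producer.flush()   # delivery callbacks fire here (or on poll())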
Great method.