The curious case of the Raspberry Pi in the network closet (2019) (blog.haschek.at)
799 points by BayAreaEscapee on Jan 17, 2022 | hide | past | favorite | 262 comments



Author of the article here. Since I first published this blog post I've been getting messages from people asking how it ended.

Sadly it's pretty anticlimactic: the owner of the place had a meeting with the guy who put the Pi there (without me, as he didn't want the Pi-dropper to feel ambushed) and in the end decided not to escalate it legally, instead basically telling him to pack his things and get out.

So no legal aftermath and just a slap on the wrist


Seems pertinent to at least get an affidavit from the ex-employee detailing what he has done, agree to hold on to the hardware as evidence, put liability on the employee for any time-bombs that might have been planted, explicitly ask him to put in writing all the activities he performed, etc.

Just to have a thread to pull on, in the future, when something might go wrong.


We did get a hand written statement from him and the original evidence (hardware) is still untouched and locked away.

In his statement he wrote that the Pi logged to the SD card, but there was no data on the SD card (well, not on the data partition). I'm pretty sure that was a lie and it just logged to Balena.

But even though we could never decipher what the nodejs program actually did (because it was so heavily obfuscated), our internal working theory is that he was tracking the movement data of the boss to avoid him whenever possible.


>he was tracking the movement data of the boss to avoid him whenever possible.

Wow, imagine hating your boss so much that you go to such creative and illegal lengths (that can backfire on you) to track him, instead of using the same skills legally to find a better job.

I just don't get it; something doesn't feel right about this being the true reason. To me it looks more like he wanted a covert backdoor into the company network for IP theft, blackmail or other such data exfiltration purposes.

If only he knew that in a year he could avoid his boss all the time thanks to covid-WFH.


> Wow, imagine hating your boss so much that you go to such creative and illegal lengths (that can backfire on you) to track him, instead of using the same skills legally to find a better job.

I’ve mentored a lot of juniors. It’s not uncommon for young people, especially those with less developed social skills, to have an undeserved fear of their boss or anyone else with authority. It’s common with young people who have debilitating anxiety and a tendency toward rumination. They think that as long as they avoid the authority figure, they can avoid any negative social interactions (which are largely imagined).

It’s possible that the boss was bad, of course, but I kind of doubt it given that his response to this situation was to let the person off easy.


>I’ve mentored a lot of juniors. It’s not uncommon for young people, especially those with less developed social skills.

Sure, but even as a junior employee, we're still talking about mature adults here, not kindergarten kiddies. They can vote, pay taxes and are held accountable for their actions before the law, so they should be aware that deliberately backdooring their employer to surveil their boss not only most likely violates the employment contract they signed, but can also bring serious legal backlash, both from the company and from the person whose privacy they were trying to invade.

>It’s common with young people who have debilitating anxiety and a tendency toward rumination.

Yeah, I get that, but how is this an excuse for hacking your employer/boss? Why not seek therapy from professionals, and either quit the toxic workplace or report the abusive boss and find a workplace that accommodates your personality and emotional type, rather than hack and backdoor your employer's network to keep tabs on your boss?

There is no workplace in the world and no work colleagues that will tolerate you hacking their network and invading their privacy because you have anxiety and a tendency toward rumination.


> Why not seek therapy from professionals

No disagreement here, but to answer your question: If someone is struggling with social anxiety, they actually have to somehow overcome their anxiety enough to seek that help. It can be a real catch-22. (Not a justification for this person's actions by any means. Just explaining motivation.)


> Sure, but even as a junior employee, we're still talking about mature adults here

It’s a wider range than you’d think. Juniors range from seasoned employees who have had various jobs over the years to completely green employees who have never had to work a day in their lives. The latter group can allow a lot of people to avoid dealing with their problems and maturing for a long time.

> Yeah, I get that, but how is this an excuse for hacking your employer/boss?

It’s not, and I never said it was. I was only replying to the insistence that the boss must be a terrible person.

This behavior is never acceptable.


Doesn't anxiety tend to make you not want to sprinkle boxes of malware in network closets?

Like, I would be absolutely terrified to even accidentally overhear someone talking about this and possibly be dragged into it that way.


The author of this piece didn't work at the company. It sounds like the company wasn't really full of technical people. The perpetrator probably thought they were so much smarter than everyone else that they'd never be caught.


I think this is probably a fair assessment.


That, or the person found leverage.


I think it depends on the company. Larger corporations like banks tend to have management types who are sociopaths or giant egos who only care about making themselves look good to their own boss. They expect their reports to work unpaid overtime and don't recognize their efforts.


> Wow, imagine hating your boss so much that you go to such creative and illegal lengths (that can backfire on you) to track him, instead of using the same skills legally to find a better job.

I once worked at a place where one of the founders would too often get the shits with someone or some team, and become a micro-managing asshole for a few weeks. I wrote a Python script to run on the wifi router to monitor for MAC addresses connecting and disconnecting; ostensibly this was to publish a webpage with an "Is manager X in the building?" dashboard. Which also just happened to have filterable notification subscriptions and a Slack integration. Pretty soon, everybody he was micromanaging ended up getting 90 seconds or so of notice of him arriving, as his phone connected to the wifi while he walked in from the car park.
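For the curious, the core of such a script is tiny. This is a hedged sketch, not the actual code from that job: the `get_associated_macs` and `notify` callables are assumptions standing in for whatever the router query and Slack webhook would actually look like.

```python
import time

# Hypothetical mapping of known MAC addresses to people (example values).
WATCHED_MACS = {"aa:bb:cc:dd:ee:ff": "Manager X"}

def diff_presence(previous, current):
    """Compare two snapshots of associated MACs; return (arrived, departed)
    as sets of watched people's names."""
    arrived = {WATCHED_MACS[m] for m in current - previous if m in WATCHED_MACS}
    departed = {WATCHED_MACS[m] for m in previous - current if m in WATCHED_MACS}
    return arrived, departed

def poll_loop(get_associated_macs, notify, interval=5):
    """get_associated_macs() -> set of MACs currently on the AP (router-specific,
    e.g. parsed from the AP's station list); notify(person, event) pushes to the
    dashboard/Slack (not shown here)."""
    previous = set()
    while True:
        current = set(get_associated_macs())
        arrived, departed = diff_presence(previous, current)
        for person in arrived:
            notify(person, "arrived")
        for person in departed:
            notify(person, "left")
        previous = current
        time.sleep(interval)
```

The 90-second head start falls out naturally: phones associate with the wifi well before their owner reaches the office floor.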

The other managers and PMs all loved the dashboard, and I got a bonus for it at performance review time.


How did that founder feel about it?

And what did the employees do with the information? Leave the building?


Not sure the micromanaging founder ever found out people were using it to alert them when he arrived. His asshole tendencies extended beyond micromanaging staff, and he ended up in a fight with the other two founders that resulted in him leaving the country with a warrant for his arrest on fraud charges within about a year.

People in his firing line would mostly use it to make sure that they were at their desk and had Jira open while waiting for something to compile, instead of HN or Reddit…

The managers-in-the-building website dashboard stayed running for at least several years after that; it was still in regular use when I left. People liked being able to say things like "Hey, we've got the PM, the Account Manager, and the CEO all in the office right now; let's grab the tech lead and the security guys, and set up a 3 minute corridor meeting to make this decision."


That's pretty cool and funny too, but AFAIK tracking people at work without their explicit consent was illegal in most of the EU even before GDPR.


Realistically, how is this different from logging login attempts? If the device is configured to attach to the company network isn't it within the company's rights to know that a device is logged on at any given time even under GDPR? Or would it be the publication of that information - even internally - that would be the issue?


>If the device is configured to attach to the company network isn't it within the company's rights to know that a device is logged on at any given time even under GDPR? Or would it be the publication of that information - even internally - that would be the issue?

Logging anonymized MAC addresses is one thing, but converting MAC addresses to employee names and sharing their on-premises location with everyone in the organization without their consent is a completely different thing, and is illegal under most EU privacy laws (at least in Austria and Germany).
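To make that distinction concrete, one (assumed, not legally vetted) way to log device presence without a stable link to a person is a salted one-way hash with a rotating salt; whether that actually counts as anonymized under the GDPR is a separate legal question.

```python
import hashlib
import datetime

def pseudonymize_mac(mac, salt):
    """One-way pseudonym for a MAC address. With a rotating salt, the same
    device gets a different pseudonym each salt period, so pseudonyms can't
    be correlated over time back to one employee."""
    digest = hashlib.sha256((salt + mac.lower()).encode()).hexdigest()
    return digest[:12]  # truncate: enough to count devices, less to reverse

# Rotating the salt daily limits how long any pseudonym stays linkable.
daily_salt = datetime.date.today().isoformat()
print(pseudonymize_mac("AA:BB:CC:DD:EE:FF", daily_salt))
```

Note that a fixed salt would defeat the purpose: the pseudonym would then be a stable identifier for the device, which is exactly the "converting MACs to people" problem again.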

Sure, in theory the company could already know when I come in to work from the logs of me swiping my access badge at the main security entrance, but any such logs are kept private and can only be accessed by security and upper management if some act of theft or gross misconduct has occurred which warrants an investigation.

Sharing this information publicly with everyone in the org would be a privacy breach. If you want to know if I'm "at work" just look at my Slack/$CHAT_APP notification color.


They sell laser trip-wires that act as usb keyboards and can hide windows, lock your computer, or run scripts.

https://www.tindie.com/products/dekuNukem/daytripper-hide-my...


What if this guy is just a hell of an introvert who is more comfortable rigging something like this up than interacting with his boss? If this kid was in his early 20s I'd probably slap his wrist and impress on him the dangers of screwing with the company network closet. If he is an adult he really ought to know better.


Just came here to say that while I understand the sentiment, people in their 20s can vote and should be considered adults, not kids.


Legally that's true but I think you know as well as anyone that people don't just suddenly mature on the day of their 18th birthday.


I wonder what effect treating legal adults as children has on their maturity trajectory.


It’s super complex. There are cases where the person “gets it” and just getting caught is enough to cause growth. Accountability in the form of punishment may be a waste of time or even harmful to growth because the experience is too painful to integrate. On the other hand, someone who is always let off the hook may never develop a true sense of responsibility and things only get worse. There’s no single factor to tell what’s the right thing to do all the time.

But within the theme of this thread, I strongly doubt the optimum solution is “full punishment in every case for everyone the moment they cross the age of majority.”


Well the effect of applying draconian computer intrusion laws is extremely damaging to anybody's trajectory, so it's understandable to want to find some empathizeable reason to soften the blow. "Kids" get punished by paying damages and a stern "don't do that again", whereas for adults it's like here's your ten year federal prison sentence for being a witch.



What an intelligent way to look at the world. If the law says something, that is the exact truth. No room for any nuance.

The day before your 18th birthday, you're a kid, the day after you're an adult. Makes perfect sense.

Clearly someone who looks at the world this way must be under 18.


At one point you wrote "It is beyond me why a co-founder of a company would distribute these devices around town but well.." I take it, however, that the installer turned out to be someone else. Now I am curious as to whether this company advertises itself as a supplier of such things, and if so, what it claims about their capabilities. Given that the code has not been reverse engineered, can you be sure its capabilities are limited to data exfiltration? I'm also wondering what the perpetrator was up to, if the device's purpose was indeed to help him avoid the boss.


This is what I was thinking, except that I started wondering what weird shit this company or its owner are up to. Maybe a slap on the wrist is just a solution to a mutually assured destruction situation. We all love conspiracy theories, so if I were the author of this article I'd quickly quash this one and provide some more deets.


How hard can you obfuscate Node.js? I'm pretty sure if you drop the code in some infosec channels they will happily take the challenge and tell you what it does ;)


An easier solution might be to look at the packets the nodejs program is sending over the network (if you can configure a MITM)


Its package.json and / or node_modules might also give some clues


For what it's worth, I'm pretty sure it's quite likely there is a disproportionate number of people Out There™ that would be very happy to sign an NDA and have a look at the nodejs program for free.

You might even be able to find someone local. Maybe wander over to the next in-person security conference vaguely nearby?

Sadly you have no contact info in your profile so I can't even suggest to people seeing this to cold-email you.

(I objectively don't think I would be very successful myself, given that you've mentioned everyone in the office looked at it; I don't have a lot of relevant experience, which sounds reasonably necessary to be successful here.)


Any plans to release the code? I would love to take a look.


The license.md does not say it is open source :)


The person who has the device never agreed to that license…


Finding a book on a sidewalk doesn't mean you can scan it and legally distribute it.


Yes but if said book was used in the commission of a crime there is a certain level where it doesn't matter.

Don't plug shit into private networks unless you want it reverse engineered. This falls under the fair use exceptions (learning what software is doing / was doing to your network).

The copyright holder can take it up with whoever they licensed it to, there is a reason a lot of them read "not to be used in the commission of a crime".


Yeah, it'd be a pretty brazen or stupid hacker who tried to sue you for copyright infringement for code that if they claimed ownership of, provides proof of their illegal activity.


If the author was not the person who planted the device, they'd have a decent case. If person A throws person B's cellphone through your window, does that permit you to post person B's nudes online?


...which means that by default they're basically not allowed to do anything with it.


> So no legal aftermath and just a slap on the wrist

The problem with this is you have no idea what harm the guy actually may have caused; nor what other RPis he may have set up around the company or around town. Next time he may be more careful with his username, set up the disk to be encrypted w/ a network key, &c, making future exploits more difficult to track down.


This, truly, is the thing to worry about: if it happened here, it likely happened at other companies. Turning a blind eye is a blank check to do it again.


The issue here is that this isn't just "one bad apple" which, once removed, makes everything OK. That idea is what motivates the belief that punishing this bad actor will make everything better.

There is a systematic issue at the heart of the way we do network security.

You can buy a Lightning/USB cable that can do all of these things and more for $120; if he'd used that he'd never have gotten caught.

We treat network security like physical security at our peril.


> The issue here is that this isn't just "one bad apple" which, once removed, makes everything OK. That idea is what motivates the belief that punishing this bad actor will make everything better.

I think they are talking about this particular, singular, bad apple and the other companies that bad apple is also attacking right now and stopping that harm as opposed to "sending a message" to other bad apples.


That feels like a choice for the victim.

If after the business owner sat down with the perpetrator they decided it is just some script kiddie playing at being a spy then that's up to them.

The wider issue remains that some script kiddie with $120 could have done this and got away with it for ever.


Do you have a suggestion for a change to treating network security?


1) 802.1x certificate-based network security (the MDM configures each approved network device with a certificate so rogue devices can't get on the network)

2) Periodic security review (look at attached network devices and determine an owner and purpose for each one)

3) Configure the SIEM to alert on long-lived outbound connections
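The third point can be prototyped as a simple filter over connection records before reaching for a full SIEM. This is a sketch under assumptions: the `(remote_ip, established_since, is_outbound)` record shape and the one-hour threshold are made up for illustration; in practice the data would come from firewall, NetFlow, or Zeek logs.

```python
import datetime

def long_lived_outbound(connections, now, threshold=datetime.timedelta(hours=1)):
    """connections: iterable of (remote_ip, established_since, is_outbound).
    Returns the (remote_ip, established_since) pairs that have been up outbound
    longer than threshold -- the pattern typical of reverse tunnels and C2
    beacons holding a persistent socket open."""
    return [
        (ip, since)
        for ip, since, outbound in connections
        if outbound and now - since > threshold
    ]
```

A device like the Pi in the article, holding a persistent connection to Balena's cloud, is exactly the kind of thing such an alert would surface.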


Can (1) be done with Windows/Mac clients?


Answering myself: yes, it's industry standard; it's definitely a little odd not to have it configured on a corporate network past a handful of employees.


There's a decent amount of infrastructure involved in getting 802.1x authentication up and running in an efficient manner. While it does provide very good security, it's not widely used because of that.


Any idea on a good, at-home or small network alternative?


There really isn't one. 802.1x is the wired security standard, and almost never worth the hassle for home or small business networks unless you are really interested in learning the ins and outs.


Having a list of allowed MAC addresses, enforced per-port by a managed switch (or at least by the DHCP server and router), is a first step, though naturally it's easy to spoof a MAC address.
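The DHCP-server half of that first step is little more than a set lookup. A minimal sketch, assuming a hypothetical `leases` feed of `(mac, ip, hostname)` tuples pulled from the DHCP server's lease table:

```python
# Hypothetical inventory of approved devices (example values).
ALLOWED_MACS = {"aa:bb:cc:dd:ee:ff", "11:22:33:44:55:66"}

def unknown_devices(leases):
    """leases: iterable of (mac, ip, hostname) from the DHCP server.
    Returns leases whose MAC isn't in the inventory -- candidates to
    alert on or to refuse at the switch port."""
    return [(mac, ip, host) for mac, ip, host in leases
            if mac.lower() not in ALLOWED_MACS]
```

As the parent notes, this only catches the lazy case: anyone who spoofs an allowed MAC sails straight through, which is why it's a first step and not a substitute for 802.1x.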


> though naturally it's easy to spoof a MAC address.

I used to live in an apartment that was within high gain antenna range of the local McDonalds Free WiFi. I had an antenna/wifi adaptor set up in promiscuous mode to listen to all traffic on their network, looking for MAC addresses that connected for a while, then stopped connecting. It'd then switch to that MAC address and BitTorrent until the 500MB daily cap per device ran out, then go back to monitoring mode looking for someone else who'd agreed to the captive portal T&Cs, had their MAC address whitelisted, and then left. I think I got pretty much all of Game of Thrones that way...

For a little while, I was monitoring my own home network, and one thing I tried was running nmap against any reconnection of a known/allowed MAC address, to try and confirm it at least looked like the same device. A Raspberry Pi connecting using the MAC address of a phone or a MacBook stood out like a sore thumb. That never turned out useful enough for me to bother wrapping it up into a project I kept running or would have shared.
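Something along those lines could be structured like this. The `scan_os` wrapper and the `EXPECTED_OS` inventory are assumptions for illustration; `nmap -O` needs root and its OS guess is fuzzy, so a mismatch is a flag to investigate, not proof of spoofing.

```python
import subprocess

# Expected OS family per known MAC, built up from earlier scans (example data).
EXPECTED_OS = {"aa:bb:cc:dd:ee:ff": "Apple"}

def scan_os(ip):
    """Run nmap OS detection and return its OS guess text (requires root;
    output format varies between nmap versions, so this is best-effort)."""
    out = subprocess.run(["nmap", "-O", ip],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.startswith(("OS details:", "Aggressive OS guesses:")):
            return line.partition(":")[2].strip()
    return ""

def looks_spoofed(mac, os_guess):
    """Flag when a known MAC reconnects looking like a different OS family,
    e.g. a 'MacBook' MAC suddenly fingerprinting as Linux on a Pi."""
    expected = EXPECTED_OS.get(mac.lower())
    return bool(expected) and expected.lower() not in os_guess.lower()
```

Hooked up to DHCP lease events, `looks_spoofed(mac, scan_os(ip))` gives roughly the check described above.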


MAC address filtering isn't a first step towards 802.1x, precisely because of the reason you mentioned. It's damn near pointless for all but the most basic security scenarios.


Obviously it's not a first step toward certificates, but it is a first step away from "anyone can casually plug in a hidden Pi."


Anyone who knows how to set up that RPi to do anything meaningful knows how to spoof a MAC address.


But would they be able to figure out a MAC to spoof without significant amounts of time in the data center/switch?


Treat every computer like it's connected to the internet.

Probably by actually connecting it to the internet. Since the idea that you can keep people out of your network is probably more dangerous in the long term.


Yep. Anything that can connect _out_ to the internet can be misused to connect _in from_ the internet. All it takes is a human or technology flaw on the inside to "breach" your outbound-connections-only security policy, as a whole bunch of unwitting Log4j users recently found out. Reverse SSH tunnels aren't all that different from Remote Access Trojans.


Without changing things so radically that we might not even be able to continue having a "do everything in software with one click" society, social deterrence is going to be important.

Unless people are manually verifying GPG keys in person all the time, you're gonna need to trust someone. Even with a two man rule you need some degree of trust. Trust is easier when people know they might go to jail if they break it.


File that under "not this company's problem"


It is possible the perpetrator acquired some embarrassing evidence about the company owner and was blackmailed. We’ll never know.


In the extremely unlikely chance he did, so what? He can face legal issues then

Most private and embarrassing stuff rarely ever matters anyways. This isn't a movie


An encrypted disk would be kind of useless in such a device as it would require the user to login every time the device reboots, unless they intend for it to never be rebooted. I’m not sure what you mean by network key in this case.


> An encrypted disk would be kind of useless in such a device as it would require the user to login every time the device reboots

There is actually a solution for that (shameless plug): https://www.recompile.se/mandos


While I'm sure I could configure this on a system, the level of understanding required to actually create it is, honestly, fantastic.

Is this something you created yourself, or was it a community project?


Initial idea and C++ implementation (using TLS with X.509 certificates and explicit UDP broadcasts) was done in 2007 by another person. Redesign of the protocol (to TLS with OpenPGP keys¹ and DNS Service Discovery²), and re-implementation in Python and C, I did in collaboration with that person. In addition to ongoing maintenance, the relatively recent switch from TLS with OpenPGP keys to TLS with Raw Public Keys³ was done by me.

The level of understanding required is something I would think that all system administrators worth their salt had at the time. I would think that the best way to acquire such knowledge is doing the Linux From Scratch⁴ exercise, even though I have not done it myself.

1. RFC 6091

2. http://www.dns-sd.org/, RFC 6763

3. RFC 7250

4. https://www.linuxfromscratch.org/lfs/


Looks like a neat project but the intro/faq should probably be a bit more self-critical to point out weaknesses. The “nope, it’s protected by TLS” answers ignore the fact that anyone attacking this could also have attacked the PKI. If someone gets the client cert and key, they can probably fake the request to get the decryption password. I’m assuming that client key isn’t protected by a password, since then that would be the thing a user has to provide at boot time. And what about the vector where someone attacks the CA that issued the certs? Where is that stored? Can fake roots be injected by someone in possession of both machines? This may be moot if you are using self-signed certs, but of course those introduce their own management issues.

Also, I don’t really see any discussion of availability concerns. This is a system with a pretty gnarly fail-closed kill switch that could trigger on a simple network outage. That doesn’t really seem to be acknowledged, and there’s no discussion of the inherent balance between security and availability. You really need to be able to guarantee a certain level of availability or things basically self-destruct. Presumably there’s a mechanism that allows a self-destructed pair or cluster of these Mandos'd servers to go back to a normal operating mode?

Anyway, I don’t mean to be too critical. It’s a really cool project. A little Byzantine but with a stated reason for that. Would just like to see more focus on the weaknesses and potential critical operational issues. A section called “reasons you may not want to use this” that is very up front about those seems appropriate.


(The other coauthor here.)

Teddyh's answer describes some of the technical aspects, though I would like to add the security scenarios that Mandos works to address. Any security measure is in one way or another designed around known threats, assets and costs/outcomes.

If one operates a bunch of servers with FDE in a server room, getting there every time there is a need to reboot is a significant problem. To mention a few causes: redundant nodes going up and down in the middle of the night, updates to the operating system and kernel, and misbehaving hardware. At the same time, those servers are likely to hold a lot of data that is sensitive to companies or persons, especially email, which puts the administrator in conflict over whether to use full disk encryption. In my experience, unless there are regulations that dictate otherwise, servers are not encrypted, because of the hassle and downsides of manual reboots or needing to attend reboots in person. This was the initial reason Mandos was created many years ago. If the server hall loses both primary and backup power, there is a real risk that the administrator does need to travel there to bring the machines back up. That would be one of the major trade-offs, though I would still recommend administrators do that, compared to the risk of an unencrypted disk getting lost or stolen, or someone coming in and taking all the servers.

There are naturally other scenarios one can use Mandos for, but like any tool it's good to know what it is designed for. It is not intended to replace setups where one is already using FDE, types in the passwords manually at the terminal, and is happy with it. If one does not need the unattended aspect but wants to remotely reboot the server, there are things like Dropbear or IPMI/remote KVMs, in which case the security will rely on those components' security. In my experience, IPMI should not be exposed to the internet, which means one first needs a secure entry point to the local network. Dropbear uses ssh, which means one should use client certificates and verify the signature before use. Depending on the use case and what risks one wants to take there are benefits and drawbacks, but the key point I want to come back to is that people really should use full disk encryption, and Mandos alleviates the primary reason people don't use FDE.


> If the server hall loses both primary and backup power, there is a real risk that the administrator does need to travel there to bring the machines back up. That would be one of the major trade-offs, though I would still recommend administrators do that, compared to the risk of an unencrypted disk getting lost or stolen, or someone coming in and taking all the servers.

Yeah, I agree with all of this. My nitpick is just basically requesting the doc talk about this being a conscious tradeoff where your infra availability and your lack of tolerance for frequent fail-closed events might lead you to intentionally weaken the security guarantees by lengthening the timeouts. In other words, you set the timeouts as short as you can tolerate, based on your infra.


> If someone gets the client cert and key, they can probably fake the request to get the decryption password.

Yes, that is a weakness, which is openly addressed in the FAQ: https://www.recompile.se/mandos/man/intro.8mandos#quick TLDR: It only works if an attacker is pretty quick about it. See also here: https://www.recompile.se/mandos/man/intro.8mandos#security

> And what about the vector where someone attacks the CA that issued the certs?

There is no CA involved, nor any X.509 keys. The keys used in TLS are ed25519 raw keys, and the server has a list of, and checks, individual key fingerprints.

> This may be moot if you are using self-signed certs, but of course those introduce their own management issues.

Yes, you have to generate and transport keys out-of-band (i.e. by hand) as part of the initial setup. The instructions on exactly how to do this are shown as part of installation and configuration.

> a pretty gnarly fail-closed kill switch

That’s a feature. A security system should fail closed.

> Presumably there’s a mechanism that allows a self-destructed pair or cluster of these [mandos]’d servers to go back to a normal operating mode?

Yes. You either type in a password on the console on one of the servers, or use dropbear to ssh in remotely to do it.

> A section called “reasons you may not want to use this” that is very up front about those seems appropriate.

The project is mostly intended for those people who have already decided that full-disk encryption is a requirement, and Mandos is meant to alleviate some of the pain which they have already accepted. But sure, I see your point.


> That’s a feature. A security system should fail closed.

Of course, but there should at least be a mention of the fact that you need to tune the fail-closed parameters to take your availability into consideration. I appreciate that the various attacks would have to be done "pretty quick" according to the FAQ but the definition of "pretty quick" is necessarily countered by what kind of guarantees you can make about your availability (of the server(s) and the client), and this isn't mentioned. If a 30 second network failure causes the server to refuse keys to the client from that point on, but you can't guarantee that level of network availability (taking into account things like replacement of network switches and other types of maintenance), the definition of "pretty quick" may be too quick. It's a very direct and explicit tradeoff between security and availability and that concept is absent from the intro/FAQ. As a mental exercise, consider how you'd answer the FAQ "So I should set my timeouts super low for better security?"

Again, I'm not trying to be a picky ass, and I think the project is cool. I just think this is a topic that non-security-folks don't necessarily think about automatically, and this is the opportunity to make them think about it. The entire doc sounds like "faster timeout == better" and it would be very unfortunate for someone to configure and deploy this based on that understanding.

PS your other responses to my nitpicks were great, and somehow I missed the entry about stealing the client key being possible but having to be done very quickly. Thumbs up. I'm curious about what you mean by saying you aren't using "x509 keys" though. You must be generating a self-signed x509 cert containing the client's pubkey in order to do TLS. The packaging of the key itself isn't really relevant, is it? The "cert validation" on either side doesn't really care much about the contents of the cert other than the pubkey encoded therein, but you still do actually have to create x509 certs using those keys unless you've completely butchered the TLS stack. Right?


The timeout can’t realistically be set very short, since it needs to allow for a normal reboot of a server. Servers are, in my experience, notoriously slow to reboot. Therefore, a typical network hiccup is assumed to be shorter than that. The default timeout value of 5 minutes reflects this.

Also, you could add the Mandos server status to your alerting system, and if anything goes wrong with your network and the Mandos server times out for a client, you can be alerted to this fact, so you can fix it before the next time that client happens to reboot.

> consider how you'd answer the FAQ "So I should set my timeouts super low for better security?"

Fair; the text could be clearer about this.

> I'm curious about what you mean by saying you aren't using "x509 keys" though.

Well, we aren’t using TLS with X.509 keys. We are using TLS with Raw Public Keys, as specified by RFC 7250 (https://www.rfc-editor.org/rfc/rfc7250.html) and supported by GnuTLS: https://www.gnutls.org/manual/gnutls.html#Raw-public_002dkey...


That looks like an awesome project, but I'm not sure building an LFS system would help in developing a system like that. Possibly in understanding and configuring it.

I still recall how to build a Linux system from scratch. Coding up what you're working on in Python/C would take a large amount of unrelated knowledge.


The knowledge about how to write a program comes naturally when you know, in fine enough detail, the problem which the program should solve, how to solve it, and the environment in which the program should run. In this case, writing a Python server program to respond to requests was relatively simple; Python provides built-in modules which make writing servers easy. And when you know what the client program (i.e. the program running on the currently locked host) should do, and you know what environment the program has to operate in, the program more or less writes itself.

The first version of the program used a simple UDP broadcasting method to a hard-coded port to find servers, which required some rudimentary networking knowledge, but only basic TCP/IP stuff.

Later, both the server and client parts have gone through numerous refactorings which brought in many features (like a plugin system on the client side, and a D-Bus interface on the server side), but those were manageable chunks to add to an already mature and working system.

But sure, in addition to the knowledge one could acquire from LFS, I also had some high-level knowledge of how TLS and its handshake worked, I knew that there was some way to use OpenPGP keys instead of X.509 certificates in TLS, and I knew a little about how DNS-SD worked. The rest I needed I read up on as I wrote the code.


Hardware security modules are no cakewalk either. For webservers I think most people consider them overkill. They mostly IME get used to handle code signing.

And at one company they were worried about the devices getting stolen, so they had HSMs and still couldn’t reboot unattended (though most of the signing keys were held by humans rather than automated)


No, you can have the initrd boot to a dropbear sshd that allows the operator to ssh in on reboots and provide the key.
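On Debian-derived systems this is commonly done with the dropbear-initramfs package. A rough sketch, assuming a LUKS-encrypted root; the package name and file paths are Debian-specific and vary by release (older releases use /etc/dropbear-initramfs/ instead):

```shell
# Install an SSH server into the initramfs
apt install dropbear-initramfs

# Authorize the operator's key for the pre-boot environment
cat ~/.ssh/id_ed25519.pub >> /etc/dropbear/initramfs/authorized_keys

# Rebuild the initramfs so the key is baked in
update-initramfs -u
```

On the next reboot the operator can `ssh root@<host>` into the initramfs and run `cryptroot-unlock` to supply the passphrase.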


There are also options like the Zymkey[1] which is essentially an add-on TPM which can auto-decrypt the disk if it detects that the Raspberry Pi and SD card it is connected to have not changed. Not sure how difficult that would really be to defeat given enough effort though.

[1] https://www.zymbit.com/zymkey/


Something like dnscat2 would ultimately be better in my opinion. Have it connect once to get the disk key, decrypt, and end the process. Then have your device do its thing, and once a condition is met, spin it back up, transmit the data (using small packet sizes and very large delays to possibly avoid IDS) and exfiltrate what's needed.


If you count on the device running "forever", or at least until you pick it up again, you could also just store the key on the device and delete/destroy it (the key) on boot.
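A minimal sketch of that approach, assuming a LUKS volume with a key file on a small unencrypted partition (the device names and paths here are hypothetical):

```shell
# Unlock the data volume with the locally stored key...
cryptsetup open /dev/mmcblk0p3 data --key-file /boot/disk.key

# ...then destroy the key so the device can't be unlocked again
# after a power cycle (e.g. if it's seized)
shred -u /boot/disk.key
```

The trade-off is that the device can never reboot back into a usable state; the data is only reachable while it stays powered.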


Why in god's name would you pick it up later? Installing it in the first place was a huge risk. Removing it is just doubling down


Wouldn’t that part of the disk then need to be unencrypted?


Yes, it does. It's pretty small, though, on the order of 100MB.


There is a case to be made for using the legal system as a deterrent. But there is also the case to made to not do that as in the case of Aaron Swartz.


This is a lot more localized and malicious. I do think people deserve second chances, but the context of all this rubs me the wrong way. Maybe the building owner was right to not make it a legal matter, but this feels like more than a harmless experiment. The malicious person's operational security is obviously terrible.

As someone who has done security research for over 15 years, I take the ethics of this sort of thing seriously. I fully expect repercussions of the legal sort if I did something like this without permission. The key detail being that this was done secretively in a private office.


Given the relative sophistication of it, it feels more like practice. In that case, not even a slap on the wrist very well could be seen as encouragement.


Seems like the decision becomes a matter of whether you agree with the motive then.

But in this case.... the motive seems to be unknown.


He did get fired though, not as if he just got his raspberry back and went about whistling happily


Are all parents who run "gifted children" blogs scammers of some sort? It sure seems like it.


Did you ever find out what it did there exactly? Like, what it collected and what the "gifted person" wanted to do with that data?

edit: Thanks for the write-up btw. Was a nice read, although a bit short (which is the story's fault I guess)


Is “gifted person” code for something? Are they from some sort of enrichment program?


It's in the article: The author found information about the presumed attacker on a site where parents write about their gifted (= highly talented) children.


Thanks! I couldn’t handle the tension and jumped to the end of the article to see how it unfolded.


“Gifted” individuals are selected at early ages to run through rigorous education programs that greatly push them ahead of their peers. It is a pipeline to create intellectual elites and captains of industry. Gifted kids are widely accepted as the most intelligent kids of a school and held up as the finest examples of the school’s educational abilities.


Wow, that's a warped description if I ever heard one. I always felt like "gifted" was a label given to kids who were out-of-place in a normal classroom, to justify having special education so they were less likely to disrupt class or kill themselves out of boredom.


Hm, for me it meant I mostly stuck with the same student peer group throughout grade school, I think we got to skip some standardized tests, and I was able to get a school bus to the bigger schools even though I was way out in the sticks. I had to go through an aptitude test and even though I was only like 7 I still remember sitting in the car after and being mad at myself for missing a question about "another word for water" being H2O.


Yours is the warped one. Gifted student programs are very common, and while they are sometimes used for what you say, it's not the designated purpose.


However, there doesn't seem to be a correlation between membership in gifted programs and success later in life.


Do kids in gifted programs go on to become intellectual elites and “captains of industry” at higher rates than their peers?


Not by much, I'd bet. If at all.

The poster seems to have confused top-tier private schools and gifted programs. Read enough politician and C-suite and such bios and it's very clear what's going on. You practically never see "attended a pretty decent public high school—but was in the gifted program!" Private college prep secondary schools (at the very least—often it's private schools all the way) on the other hand are overwhelmingly the norm in that set.

It's kinda depressing as a parent. If you haven't scraped together 25+k/yr for elite prep school tuition (and, probably, boarding) all your "you can be anything you want if you try really hard!" is kinda a lie. Like, that's still much better than not trying hard and will likely improve your life outcomes, but, looking at the actual world, realistically... nah, sorry, you're probably locked out of a lot of options. There are de facto requirements, and we couldn't afford them. Sorry kid.

Similar story with The Arts. You start looking at the backgrounds of very high-paid artists of all kinds (actors, musicians, even authors a lot of the time if they're considered good and not "merely" popular) and you're likely screwed if you weren't at least one of: 1) born to a family that's already successful at that, or 2) had an expensive and very focused education starting before college. Lots of the successful folks had both of those things. Again: there are counter examples, and it's technically possible to get in if your parents weren't in the arts and you didn't start gigging/acting/attending-an-artsy-private-school by the time you were 12, but realistically you're looking at a serious uphill battle.


> Private college prep secondary schools (at the very least—often it's private schools all the way) on the other hand are overwhelmingly the norm in that set.

To which data set are you referring? Data from 2019 found that 80% of Fortune 100 CEOs hold undergraduate degrees from public institutions[0].

[0]: https://www.forbes.com/sites/kimberlywhitler/2019/09/07/a-ne...


I think in most cases supporting kids with money and professional experience is family merit. The family spent money and effort to help its next generation. Maybe they are not rich, just education focused and ready to sacrifice a lot to achieve it. On the other hand having too much family wealth correlates negatively with academic accomplishments.

The complexity of art and math doesn't change depending on how you learn or how rich your father is. Even with support a kid has to gain the same useful skills. What matters is ability, not how the kid got there. They are just kids; everything that shaped society into what it is happened before they were grown enough to have any say in it.


I was in the 80's gifted program in elementary school (for grades 3 through 6), but went to private schools for jr high and high school. I learned more from public gifted education.

FYI, $25K/year won't get you an elite prep school these days. For that, you'll need at least $60K+.


Good question. The programs themselves are generally good, as far as I’ve experienced, but the culture around them is often quite toxic. Many kids are treated like race horses. I’m not sure how effective they are on net. Most highly successful people seem like autodidacts that end up finding the resources they need one way or another. Would guess the best way to create more of those people is just to keep a lot of doors open and hope someone like that walks through.


Culture overrun by rich overachievers gaming the selection system?


That’s a fairly blunt description, but I think it’s roughly accurate. I think there are plenty of middle income and low income overachievers in there as well. Recent immigrants can be incredibly demanding and hard on kids who might not be naturally inclined to pursue that kind of thing without external pressure, as can competitive suburbanites.

But by trying to mitigate the risk of toxicity you can go too far in the other direction and end up not pushing smart kids to reach their full potential, which is also bad. Striking the right balance is hard.


I was put through multiple gifted programs in both middle school and high school (Southern US). I loved the challenging course work from dealing with college level science classes as early as the 7th grade. The main problem with gifted programs is it really makes normal public schooling extra miserable once you are back with the general population. Uncaring teachers, scantron tests, and large class sizes left me depressed with schooling quality.

Once I got to college after graduating from a boarding school for gifted teens it was like a culture shock back to the world of horrible professors. I nearly failed out of college due to being completely uninterested with the lack of engaging materials in first semester classes.

Ended up with a degree in broadcast journalism because it was an easy path to graduating in less than 3 years. Especially because I was graduating during the 2008 financial crisis and just wanted to be done with school and find whatever job I could to get a start in the real world.

It's a nice piece of paper for HR to nod at and let me pass the degree hurdle.

My favorite moment was working a shit retail job in 2010 and running into another graduate of the same gifted high school working a fast food job just to survive.

EDIT// I did have some classmates go to found companies, work for NASA, etc. They were driven people who could have prospered in any scenario honestly.


Nope, it's a dick shaking title that can give kids issues in life.

Someone I know was called gifted at some point, he didn't end up in any accelerated programs but he did end up in higher education... which he only finished after many years, meanwhile he was eating, drinking and smoking his student loans + job income away, he ended up broke and in debt, and to date - 10, 15 years later - is still unemployed.


Anecdotes are not data, but I was in the Gifted And Talented program in high school and I sure did not become either of those. I'm eking out a living as an obscure freelance artist. A lot of my friends are former G/T kids who did not live up to their supposed promise, too.

It got me some interesting opportunities here and there but I am fundamentally kind of a slacker :)


This one didn't:)


haha... no, but their parents feel special. NY public schools used their gifted program as a way to keep white kids in majority non-white schools.


> told him to pack his things and get out

I thought the suspects were an ex-employee, and some guy that didn't work there (the part-owner), so was an actual current employee implicated in the end?


An ex-employee who still had a key to the office so they could move some stuff they had there. Presumably that courtesy was immediately terminated and the key was returned.


Having a key to the office and having a key to the network closet are not the same thing. The article said only four people had access to the network closet. So did this guy break into the closet to plant the pi?

I think he got off way too easy.


You'd have to ask the person who wrote the story. It's possible they said "a key" and meant "a set of keys" or something. Either way you're right the person who planted the RPi was quite lucky to get away with only a stern talking to.


oooh, I didn't realise they still had the key at that point. OK, I wouldn't have even said that - I'd have asked for the key back and boxed the remaining stuff myself. TBH, I'm surprised to what extent the employee would have had a bunch of stuff there - did they have furniture there or something?!


Yeah it sounds like the person was on good terms with the company and was trusted enough, must have stung for whoever made the decision to trust the ex-employee to be sorta betrayed like that. The blog author is somewhere in the comments here, I don't know if they're willing to share much more info but let's see what they say.


So the article mentions:

> It was registered (or first deployed or set up?) on May 13th 2018

and the post itself is dated 2019-01-16

Since it says:

> he could still have a key for a few months

I assumed that by then the employee had given back the key, but I guess I was making a few assumptions about when this happened, and when the device had been installed - they don't actually say what date the RADIUS logs revealed they had accessed the network.


My understanding is: ex-employee bought/acquired the device from the "gifted guy"/part-owner, and deployed it in the network cabinet by using the key he still had.


Would have been interesting to see what they were doing - nRF52832-MDK doesn't have wifi - perhaps the person was scanning/logging bluetooth devices.


As I was reading this I was hoping for modern day Cuckoo's Egg. But it was not to be.

Great write up. Thanks for sharing.


For anybody wondering, the Cuckoo's Egg (written in 1989 by Cliff Stoll) is a wonderful read about tracking an early hacker. I highly recommend it.


Thanks for the recommendation


omg, that guy got off the hook easy. he should play the lottery considering how lucky this was.


Shoot, with the info you got I'd have least called his parents and tattled on him. If you can't put him in jail at least embarrass the shit out of him.


post the nodejs in a git repo so we can see what he was doing.


> cat config.json | jq

cries in UUoC
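For anyone unfamiliar with the Useless Use of Cat: jq can open the file itself, so the extra cat process is unnecessary (config.json here is the file from the quoted command):

```shell
# UUoC: an extra cat process just to feed stdin
cat config.json | jq .

# Same result without cat: pass the file as an argument...
jq . config.json

# ...or redirect stdin
jq . < config.json
```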


Reminds me of this[1] good old quote from the IRC days

<erno> hm. I've lost a machine.. literally _lost_. it responds to ping, it works completely, I just can't figure out where in my apartment it is.

[1]: http://bash.org/?5273


We had a prod case where a server was being flooded with requests, and a downstream server kept falling over. We figured it was an attack of some sort and investigated, eventually traced it back to a computer inside our own network (we're a big company, five floors of computers).

It had an open file share, containing some Delphi books and from which we got the computer name too. So we walked over to the Delphi team's side, and kept yelling the computer name until some dude said "Hey, that's me!"

Turns out he was running a test-case, in an infinite loop until it worked (because that's how test cases worked), and he thought he was pointed at QA, but he somehow had it set up to target Prod.

Our job was done at that point, we left the rest to management (who made sure he didn't get fired but didn't do it again).


Doesn't sound like a management failure to me. It sounds like there should be separate vlans for QA/test and Production to prevent this very thing (or potentially something more malicious like the spread of ransomware).


I'd like to say "Yeah, this was a long time ago"... but this could probably still happen.


I'm surprised employees have sufficient access to prod to make this mistake.


I've done security reviews for a dozen companies. This sort of thing is startlingly common. Every single company I've reviewed is doing something that in retrospect should have been obvious.

I try to tell people: "You don't need AI security, you need a checklist." Colonial Pipeline reused passwords, shared passwords, used the same password for all VPN users, failed to rotate it when people left. (that's 4 insanely basic violations of password security). ANY human who did a security review would have caught that. Even an intern who knew nothing and furiously googled "information security review" on the bus on the way in to kick off the review. (no disrespect to interns in over their heads, my point is they didn't prioritize security so they didn't get security)

Capital One used an admin privileged instance profile attached to a publicly accessible admin interface for a security tool (which tool, by the way, had no need of admin credentials). They were hit by an SSRF vuln and leaked their admin credentials. They also failed to alert on unexpected use of those credentials (try it, use of admin credentials is rare enough you won't have a lot of noise) and failed to alert on large outbound connections (this one is subtle, but worth doing if you can figure it out)

Equifax failed to apply security updates regularly (just turn on automatic security updates. People suck at chores) Failed to deploy a SIEM, failed to conduct periodic security reviews, failed to put capable security people in place.

The above are not my clients, just public reports to illustrate that everyone can benefit from a security review to catch the obvious errors.


They shouldn't have, a lot went wrong here.


One of the issues with Knight Capital was that they forgot about a server running an old bit of code; they shut down all the new ones, which sent all the data to the old server that was causing all the problems. Not keeping track of that server was very expensive.


I've also had this problem once, on a university campus though.

"net send <host> 'If you can read this, please call IT SUPPORT at ... and tell us'".

It worked :)


For a while the easiest and fastest way to identify a 1U server in a rack of 40 was to SSH in and type:

  eject


Wait, was it common for 1U servers to have optical drives back in the day?


If by back in the day you mean a decade ago then yes.


Back in the day? Mine still do!


Yes, the thin laptop style usually.


Did that with a printer that came up on an audit at a hospital once. IT director told me to go to X site to find it based on its IP in the schema and I just cranked out a job to it that said to call my extension. Two minutes later the phone rang...


Sysadm 101: "Don't try to solve administrative tasks with technical means"


Ah “net send” - I remember getting a friend in trouble in high school for telling him how to use it.

He sent one to “*” saying something about the FBI or some such, and evidently it ended up reaching computers across the entire local school system (not just our public school).

He was called out of class days later after they looked up the IP and library computer access logs.


This should really only ever happen with wireless connections. You should always be able to tell what switchport a computer is connected to and work from there.


Back in the 90s, I recall something similar. Due to cost of hardware, networking wasn't as hi-tech as it is now. So it would be common for medium sized office buildings to have CAT3/5 cables trunked from everywhere in the building down to a central patch room, in which there would be 1000s of patch ports and patch cables stringing everywhere into discrete hubs that had no onboard management. To trace a connection you'd have to start with the wall or floor port number that the end device was plugged into, hope it's mapped correctly to a patch panel/port number in the patch room, and then manually trace any patch cable from there onwards to the hub etc.

The whole system falls apart when you have no idea where in the building the end device is, if you are lucky there may be a managed switch on the network route somewhere that may help you narrow down the location somewhat.

So yes, it did happen sometimes that the only way to find a box was to send a desktop alert and hope the admin of that box contacted you.


And then? The cable disappears into a wall together with 100s of other cables (which most likely are not labeled or not correctly, otherwise you wouldn't have lost the machine in the first place)


It is completely irresponsible and inexcusable for any network operator/owner not to be aware of what each and every cable connected to a switch/router does. If the owner refuses to determine this, they are responsible for any nefarious device on the network until they do. Wireless makes this much more complicated, so any responsible admin will ensure the wireless network is completely isolated from the physical network and privileged only to access the internet or separate devices.


I've seen bundles of cat 5 cabling the girth of a 100 year old oak tree. No chance anyone knows every cable in such a data center.


This is a silly take. People and orgs have a million reasons why their cables might be unlabeled. Shame on you for binary thinking without considering real world confounding factors.


It’s the thinking of someone who has only worked at places that are three years old and where the person who built out the network still works.

If you’re hired because the old person didn’t follow basic maintenance procedures, you’re still ignorant until you rewire or trace the whole company’s network.


What I am hearing is that it is not practical to expect network admins to be in control of their networks, and consequently that it is not practical to ensure no malicious devices are plugged into enterprise networks. Just because it’s difficult to do doesn’t mean it shouldn’t be done.


What you should be hearing is that it’s not necessarily irresponsible for somebody not to know something when they are inheriting a system, and that it’s totally reasonable to expect to encounter poorly done systems in the real world that need someone to fix them.

It’s often the case that somebody slapped something together in an area that wasn’t their expertise, it’s been noticed that it’s a real problem, and someone has been hired to fix that problem. The “not knowing” is often the reason they’ve been hired. Trying to sort out a real world scenario (while also handling other needs of the org) is almost definitionally Taking Responsibility. So let’s not shit on people trying to cleanup a bad situation by calling them irresponsible for not knowing.


Suif, you have a lot to learn my friend. First is speaking in such absolutes.

The more senior I get, the more I realize there are often a multitude of reasons things are the way they are, and many times those are valid reasons, when seeing something that is broken.

Taking a beat before pontificating and making a fool of yourself will save a ton of heartache in your career.

When you see something so broken, ask yourself why? Then ask somebody else. Some highlights from my career:

1) Last guy got cancer in the middle of a build.

2) Last guy worked his way up from one man help desk to Linux guru over 15 years all on his own, but was so busy putting out fires, he never had the chance to improve things.

3) Project started out as a proof of concept and was intended to be torn down.

4) Due to government contracts, the system has to be maintained exactly as delivered, no labels even allowed, and obviously no IT staff(?!) to make spreadsheets. Everything was paper notes by operators.

5) Pure laziness and incompetence as you alluded to.

All this to say, more often than not there is a good reason something is fucked up, finding out why may help you fix it (like in the case of politics, budget issues, firefighting, priorities, etc..)


Customer site, big insurance company. They started documenting cables and labeling them to get rid of old faulty documentation. Half way through their security department forced them to stop. Why? If an attacker gains access to the documentation he would have all the information he needed. So they had three types of cables: old ones with faulty labels, cables with right labels, and unlabeled cables. And then there was me, in the server room at 3 a.m. tracing a cable by pulling up floor tiles because the cable was handmade and the RJ45 plug wouldn’t fit into the new switch we installed that night.


> Half way through their security department forced them to stop. Why? If an attacker gains access to the documentation he would have all the information he needed.

Some IT security departments have very confused ideas.


We moved into a building where the drop-ceiling had pretty much every generation of cable, going back to Twinax used by IBM 5250 terminals. Previous tenants had cut the connectors off and just shoved them up there when they moved out.

Network documentation in this case? No way. The only option is to pull it all out for recycling, and start over.


One of the many reasons that I dislike the push towards wifi/wireless for everything. It makes my hair stand on end to see people using wireless keyboards (which people usually have for at least 5 years). People seem so disgusted when you even suggest that these things are inherently bad ideas which will inevitably lead to consequences and immediately push you into a naysayer/antiprogressive category verbally or silently.


Can you explain in clear ways how the person you're telling this to will directly be harmed?


I just recently learned that Logitech unifying receivers were susceptible to “mousejacking”[1] for years before a firmware update fixed it in 2016. There’s still probably many non-updated receivers out there.

[1] https://www.theverge.com/2019/7/14/20692471/logitech-mouseja...


To a mildly capable and somewhat determined attacker (who can get relatively close to you) this means your keyboard is probably readable from the radio signals.

A physical keystroke logger, if you want to think of it that way.


I have a few commercial-grade WAPs, but they are about four years old and do not do MIMO. I wonder if any of the current hardware records RTT to sufficient accuracy so the distance from the antenna to the client is recorded/available. I also wonder if the phased-array antenna processor records the vector to the client. Such information is available from the hardware, but can anyone tell me if ANY WAP vendors are providing it via their management interface?

Such features could alleviate some of the parent poster's concerns.


Have you considered that using a wireless keyboard and other tech is OK under their threat model? I use one at home and I honestly can not see any downside to it.


Explain to me exactly how wireless keyboards are “inherently bad ideas”, and not something that can be fixed with a robust technical solution?


Some wireless keyboards don't bother with any kind of protection to the data stream between the keyboard and the wireless receiver. That's the most obvious instance of bad keyboards. However, these days most wireless keyboards do use some kind of encryption on the pairing between the keyboard and the receiver, so that is a bit of a moot point.

Even if the data stream itself is encrypted there's still a little bit of data leakage. Your keyboard isn't constantly sending data, it really only chirps when there's an actual keypress event. So if you look at the actual physical RF, you'll notice patterns related to the user's typing. There is some research in trying to guess key presses based on typing cadence, although I'm not sure exactly how effective it really is.

I say all of this typing on a Logitech Unifying keyboard and routinely use bluetooth keyboards. As others have mentioned it really depends on your threat profile, and in the case of wireless keyboards you probably aren't near the level where this paranoia is justified. Are you typing state secrets that a foreign government body really wants in a public place? Probably want to have a wired keyboard...or maybe just not type such things in such places. Are you typing out a comment on Hacker News in a private space? Probably have nothing to worry about with a wireless keyboard.


These problems could be fixed with a robust technical solution.


Switch port? Jump back a few decades and try combined kilometers of shared coax runs that effectively become embedded into a building over years of redecoration...


This was roughly 19 years ago and my department was not in any way involved with the networking.

Sure, in an ideal world that would be possible - but we didn't even have access to the switches. So either it's trying to hunt down the other department in another building who /might/ solve that riddle in an unspecified amount of time... or just do it :)


I've had something similar happen to me. I was freaking out that there was something I did not know on my network, as I was going through some router configurations. Searched my office, Bride's office, asked my kid - nothing. Had a Pi connected to the back of a TV, drawing power and connected to my network. It bothered me for months that _something_ was there, in my house - that I had completely forgotten was mine. Christmas time rolls around and we try to plug the kid's new console into the wall mounted TV... and there it is taped to the back of the monitor.


I work for a company with around 50k machines globally... one time we discovered a machine that was supposed to have been decommissioned five years prior still sitting on the network, just waiting to do its job. We ended up scanning our entire IP space and finding 10-20 other machines in the same state.

We now have a process that routinely scans our entire IP space for machines that somehow get lost from our inventory system.
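The reconciliation step can be sketched as a simple set difference, once the sweep results and the inventory export are in hand (the function name and data here are hypothetical):

```python
def find_unmanaged(discovered, inventory):
    """Hosts that answered the network sweep but are absent from the inventory system."""
    return sorted(set(discovered) - set(inventory))

# Hypothetical data: addresses a ping sweep found vs. what the inventory knows about
discovered = ["10.0.0.5", "10.0.0.9", "10.0.0.12"]
inventory = ["10.0.0.5", "10.0.0.12"]
print(find_unmanaged(discovered, inventory))  # ['10.0.0.9']
```

Anything the sweep sees that the inventory doesn't know about gets flagged for a human to chase down.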


When I first read that back in the day I thought how absurd and improbable it sounded because of how big computers were at the time. Now that raspberry pis and arduinos with wifi are a thing it seems almost inevitable.


I was looking at my network today and I realized I didn't know what one of the devices on my network was. I knew its IP, but it had no hostname and a randomized MAC. And for the life of me I couldn't remember what it was, even though I knew which room it was in! (by the AP/signal strength)

I had to use my firewall to monitor the network traffic of the IP to determine what the device was. It turned out to be a long-forgotten smartwatch collecting dust on a charger tucked away somewhere.


It's even worse with virtual machines and containers. Those things can be left over anywhere and still appear as a machine on the network.


This is surely pretty commonplace now, with all the wireless devices we have.


I think the modern version of this is forgetting where a script, cron, lambda or whatever is running from.

I have something that sends me an occasional email. I haven't needed it in years, but it's not in any of the AWS regions I remember ever using. Nor in the obvious places I might have put it playing around with Azure or Google Cloud or whatever. I'm sure I could find it if I really tried, but it only emails me once or twice a year so I just let it be.
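For the AWS case specifically, a brute-force sweep of every enabled region is one way to hunt it down. A hedged CLI fragment (assumes configured AWS CLI credentials; what it prints depends entirely on your account):

```shell
#!/bin/sh
# Hypothetical sweep for forgotten Lambdas across every enabled region.
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  echo "== $region =="
  aws lambda list-functions --region "$region" \
      --query 'Functions[].FunctionName' --output text
done
```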



> they identified the dongle as a microprocessor, almost as powerful as the Raspberry Pi itself

Well, it's more like an order of magnitude slower than the Pi (and with a lot less RAM as well).

> A very powerful wifi, bluetooth and RFID reader.

It's 2.4GHz, but only BLE and custom protocols (2 Mbit max, GFSK modulation). The SoC can do RFID, but you have to connect a transmitter coil to use it, which doesn't seem to be the case from the photo.

I'd guess this was just used as a remote control backup connection if LAN is not working?


That puzzled me too. I didn't remember the 52832 having WiFi, but I figured it was just faulty memory.

I think the dongle might just be Nordic's cheap evaluation board.


Maybe a 6lowpan interface for maintenance. This way he could interact with it from inside the room without having to access the closet.


I was confused by a screenshot in the article, with the caption:

> Not the actual site but a similar one

Looks like the article, when speaking of tracing down a wrongdoing suspect, used a screenshot of a Web page of an uninvolved Web site. The screenshot included photos of actual people presumably uninvolved, and a name, phone number, and email address also presumably uninvolved.

While I'd guess this probably reduces Internet vigilantism and accusations of libel (at least involving the actual suspect), I suspect that a journalism professor, editor, or lawyer would advise not to do it that way.


Was working with a NOC technician who was responsible (along with some others) for a pretty large EMEA mobile network, with many millions of subscribers. There was an RFP to update their SMS/MMS system and a certain Israeli company came in to do a site survey, or installation or something in the network data center.

Anyway, the long and the short of it was that one of their technicians was caught with the previous vendor's SMS-C prised open and some USB device inserted into it. Similar response to this: a lot of hollering and hair pulling, but ultimately no contractual or legal implications.

I guess it happens higher up the food chain too.


PR makes it possible.

I have personally identified more than a handful of employees who'd use their work computers for... let's say "access to inappropriate content". All of them were invited by HR & legal and let go with a more than decent deal.

Absolutely everything was done to prevent the company being associated with anything nasty.


That's a very obvious and very obviously bad way of planting a network exploit. Very rookie and rather sad.

In entirely unrelated news, this guide details how to set up an encrypted boot process on a raspberry pi, with it waiting for you(r forked login agent) to ssh in and provide the LUKS password: https://github.com/ViRb3/pi-encrypted-boot-ssh
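For reference, the stock Debian approach to the same idea looks roughly like the fragment below (not the linked guide's exact method, and the authorized_keys path varies between releases):

```shell
# Debian/Raspberry Pi OS flavor of SSH-unlockable encrypted root:
sudo apt install dropbear-initramfs cryptsetup-initramfs
echo "ssh-ed25519 AAAA... you@laptop" | \
    sudo tee -a /etc/dropbear/initramfs/authorized_keys
sudo update-initramfs -u
# At boot, ssh into the initramfs and unlock the LUKS root:
#   ssh root@<pi-address> cryptroot-unlock
```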


The whole part with it being tracked back to a site for G/T kids makes it sound like this was a young person somewhere in the range between "script kiddie" and "beginner hacker", so "rookie" sounds about right. Bored teen or twentysomething with time to kill and an interest in computers.


It was the parent of the child who planted the bug.


Without reverse ssh wouldn’t you need to be directly on the same network to do so?


I was setting up an encrypted-root system with ssh access to pass the passphrase, and got to reading. It looks like an initrd image can connect to a VPN or set up a Tor hidden service these days. I didn't try it, though.


Reminds me of the time our head of networking came into the lab (early 2000's) asking about why our lab had '70% of the company's total outbound traffic'.

Turns out that one of our sysadmins was running a porn server in the DMZ


>And what do we do, when we want to find out a location associated with a wifi name? We go to wigle.net, enter the SSID (=wifi name) and it tells us where on the world it is found.

I've always enjoyed having unique/personal SSIDs, but had never seriously considered this consequence. I wonder what the world's generic SSIDs are.


There's a good chance he could have also recovered a MAC from logs etc.

What's more important is that you don't set your SSID to hidden: Someone needs to broadcast the SSID for the connection to work, and if it isn't the AP, it will be your mobile device broadcasting it everywhere you go!


A little browsing around wigle.net brings me to a page listing SSIDs and manufacturers: https://wigle.net/stats#ssidstats

xfinitywifi is the top, with 2% of the routers seen having that name; it's followed by XFINITY (.73%), BTWiFi-with-FON (.38%), linksys (.37%), BTWifi-X (.35%), <no ssid> (.31%). The next one is AndroidAP at .28% and that feels like a good place to stop copying data, go look at the page if you wanna see more of the world's generic SSIDs. Basically "manufacturer name" and "internet provider name" dominate.


If you're ok with people's devices making attempts at connecting, eduroam or some variant of Starbucks Wi-Fi might be good options. There'll be APs broadcasting those SSIDs all over the world.


"Home" returns quite a lot of results in my area on Wigle.net despite the fact that English isn't an official language here. You can probably pick and choose any generic Wi-Fi router manufacturer name. "Linksys" paints the map pretty well.


A consequence of a generic SSID is that your device will try to connect to any instance of that SSID and re-prompt for a password when it fails to do so.


This feels like trading one kind of "security by obscurity" for another. Would you rather be identifiable because your SSID is unique but immune to precomputed tables, or blend in because you have the same SSID as everyone else - so no one knows which network is yours, but it's easier to crack what's going on inside the network?

One comes with more easily identifying you/your network while the other comes with being more easily hacked by readily available rainbow tables (I think, but am not sure, that WPA3 fixed this, but WPA1/WPA2 use the SSID as a salt for the password)
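That SSID-as-salt detail is easy to check: WPA/WPA2-PSK derives the pairwise master key with PBKDF2-HMAC-SHA1, using the SSID as the salt over 4096 iterations - which is why precomputed rainbow tables exist for popular SSIDs like "linksys":

```python
import hashlib

def wpa2_pmk(passphrase, ssid):
    # WPA/WPA2-PSK: PBKDF2-HMAC-SHA1, SSID as salt, 4096 iterations, 256-bit key
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# IEEE 802.11i test vector: passphrase "password", SSID "IEEE"
print(wpa2_pmk("password", "IEEE").hex())
# f42c6fc52df0ebef9ebb4b90b38a5f902e83fe1b135a70e23aed762e9710a12e
```

A table built for one SSID is useless against another - but covers every network sharing that SSID.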



original discussion, 154 comments: https://news.ycombinator.com/item?id=18919129


I do wonder when the first "smart SFP" with embedded wi-fi will appear - an unlabeled RPi in a junction box raises alarms, but an SFP module that's just a bit longer than the rest? Many would rather assume at first glance that accounting bought some cheaper crap due to supply chain issues.

(For those OOTL, see https://blog.benjojo.co.uk/post/smart-sfp-linux-inside - it made the rounds on Twitter and HN a couple days ago)


OOTL?

This reminds me of a discussion I've seen... when the Pi first came out, I think? About how we could soon make whole electric kettles or even keyboards (and Pi recently did it!) with entire spying (wireless) computers built into them, unbeknownst to people unaware of that "extra functionality".

(IIRC in the context of potential Chinese spying? The current reality is a bit more prosaic: the USA can likely just use the backdoors (they likely have) in Intel CPUs (or Windows), and the Chinese - in Huawei's networking gear.)


Ah damn, I didn't want the story to be over. That was a good read!


Gripping! Would love to read more articles in this “genre”.

I’m wondering if there was an easy way for the attacker to encrypt or obfuscate some of these configuration files, so that defenders can’t extract settings even when physically connected to the device.


Read The Cuckoo's Egg by Cliff Stoll. An oldie but a goldie.


There’s a PBS made for TV movie about this story too, don’t know if it can still be found on streaming:

https://imdb.com/title/tt0308449/


I’ve owned a copy for a while now. This might just be the push I needed to pick it up.


I read the whole book over a long weekend, I just couldn't put it down.

Make sure you don't have any work deadlines in the few days after you start it.


The first time I read it I could not put it down. Incredible book.


Some malware will store the executable and all configuration encrypted on the disk and will only decrypt in memory with a key downloaded from the internet.

Of course you can still defeat this if you dump the memory or reverse engineer the process to get the key yourself. It makes things a bit harder, but still not impossible.
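The pattern is easy to illustrate. A toy stdlib-only sketch (a hash-based XOR stream stands in for real AES, and the hard-coded key stands in for one fetched from a server at runtime):

```python
import hashlib

def keystream_xor(key, data):
    """Toy symmetric cipher: XOR with a SHA-256(key || counter) keystream.
    The same call encrypts and decrypts. Stands in for real AES here."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# On disk: ciphertext only. The key would be fetched at runtime, so a seized
# device yields nothing without the key server (or a RAM dump).
key = b"key-downloaded-from-c2"                    # made-up stand-in
blob = keystream_xor(key, b'{"target":"boss-phone"}')
config = keystream_xor(key, blob)                  # decrypted only in memory
print(config)  # b'{"target":"boss-phone"}'
```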


Unless the disk has some way of checking the hash sum of its own file structure before execution, additional debug/logging scripts can be added which load at boot time and record the entire process. It's a cat and mouse game.


The investigative work in that piece reminds me of this old case: https://www.youtube.com/watch?v=OAI8S2houW4


This sounds like one of the classic stories by SecurityMonkey a.k.a. Chief: https://web.archive.org/web/20191006220253/https://it.toolbo...

The individual stories seem to be still available on the non-archived web here: https://www.toolbox.com/user/about/ChiefMonkey/ but not, from what I can find, the convenient story index, which I linked to above.

He seems to have planned a rewrite of all the stories and put them on… Medium.com: https://medium.com/@chiefsecuritymonkey However, the last update is from May, 2020.


Thanks OP - great read. Seems like a very sloppy network logger - I mean, there's a whole Raspberry Pi for physical evidence! True, there's probably a lot of other network hardware, so it could hide in plain sight. Either way, fascinating that they thought they could get away with it.


While the device itself is sloppy, for many organizations it's probably easier to install and less likely to be detected than a software-based attack.

How frequently does IT run scans of what software is running on the server vs how often does IT physically inspect the server? Remember, one of those things means I have to get up out of this chair and the other does not.


You have to wonder why they didn’t rather create a transparent bridge on the network whose traffic they were trying to log; such a device could’ve even been hidden along a network cable.



> [...] I got a message from my dad [...] I asked him to unplug it, [...] and to make an image from the SD card [...]

What a technical dad you have!


> What a technical dad you have!

Working for over 35 years at IBM and introducing one's child to BASIC/REXX may do the trick -> https://blog.haschek.at/about/


As for deobfuscating JS, I've often had good experiences using http://jsnice.org/ ("Statistical renaming, Type inference and Deobfuscation")


Reminder (from a security guy): what the author did is risky. If you are really worried about a compromised server or a suspicious device call security consultant / forensic experts.


What are the potential risks around what he did?


Malware triggered by its absence? If the device disappears, it's likely because it was found and removed, so malware that starts erasing data or otherwise causing confusion or covering their tracks is a plausible next step (though not a good one in this case, given that the device itself led straight to the person who planted it).


- Being suspected of or charged with destruction of evidence. It has happened.

- Losing access to forensic data by not capturing the contents of the device's RAM. Pretty common.

- Becoming a witness to a crime and getting personally targeted by some criminal organization in retaliation. This one should be obvious.

- Wasting the opportunity to keep the device running in order to monitor the intruder's activity.


I would literally read one of these stories every day before going to sleep.

I will never have enough. Amazing read!


I honestly think that if an email address had been found and published instead of the username, the guy would be receiving so many offers for work from Silicon Valley companies. There aren't that many talented engineers even in SV who could pull something like this off. Sad to see amoral behavior from otherwise smart, creative people who're stuck in shitty jobs with shittier bosses.


Are you serious?

Monitor BLE traffic, filter it to a known device (his boss') and update an IoT server with that information when it changes?

On an RPi, that's not even an afternoon of work. I mean, it's cool and I would definitely want to interview someone who did this, but it's hardly "hire this person now!!!" material.
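Indeed, once the radio side is handled, the core is just a filter over scan events. A hardware-free sketch of that bookkeeping (the MACs and threshold are invented):

```python
def presence_updates(events, target_mac, rssi_threshold=-80):
    """Yield 'seen'/'lost' transitions for one target device, given a
    stream of (mac, rssi) advertisement events - radio not included."""
    present = False
    for mac, rssi in events:
        if mac != target_mac:
            continue
        nearby = rssi > rssi_threshold
        if nearby != present:
            present = nearby
            yield "seen" if nearby else "lost"

events = [("aa:bb:cc:dd:ee:ff", -60),   # target device, strong signal
          ("11:22:33:44:55:66", -50),   # unrelated device, ignored
          ("aa:bb:cc:dd:ee:ff", -95)]   # target walking away
print(list(presence_updates(events, "aa:bb:cc:dd:ee:ff")))  # ['seen', 'lost']
```

Each transition would then be pushed to the IoT backend; the rest is plumbing.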


I'm rather curious, why can't the RPi have soldered flash memory? How much would it cost to add 2, 4, or 8GB of flash memory on it? Because I would gladly pay for a Rpi with such memory if it added 10 dollars.

I suspect it would require them to make a new SoC, breaking compatibility?


>>Because I would gladly pay for a Rpi with such memory if it added 10 dollars.

That's the problem with the entire RPi ecosystem - there are a lot of things people want "even if it only adds another few dollars". Another ethernet port, a proper M.2 port, better audio, a SO-DIMM slot, etc. etc.

The RPi is meant to be cheap. Yes, that means it might not include the feature that you want. And no, "just making it a little bit more expensive" is not the solution here. It's already gotten way too expensive for what it was meant to be originally.

And if you really want a Pi with built in flash, then the compute module has that:

https://www.raspberrypi.com/products/compute-module-4/?varia...


You can have this today. Raspberry Pi sells the wonderful Compute Module 4 with the normal Pi CPU on it, and it optionally comes with built in EMMC memory. You can plop it on a carrier that gives it a normal raspberry pi form factor. I use the CM4 in my projects and it’s lovely.

Sorry these are two different distributors, but the CM4 is hard to find right now and the PiTray mini is cool, just couldn’t find them at the same place. PiTray mini is also at Digi-Key I think.

https://www.seeedstudio.com/Raspberry-Pi-Compute-Module-CM41...

https://www.dfrobot.com/product-2196.html


Using an SD card means you can reset the Pi to factory settings by swapping the card for another; and undo the reset by swapping the cards back.

This is substantially simpler for beginners than using network boot, or messing around with a bootloader via serial console.


Having an 8GB eMMC does not preclude having an SD slot. Any beginner can plug in an auto-installer on the SD card and use the same SD for different devices. Simpler and cheaper.

If that's not enough, the eMMC could even come preinstalled with an OS.


Having soldered eMMC also means that you have complicated the effort required to securely wipe the device. It doesn’t get any easier than ejecting an SD card.


Additionally, a split root storage setup (because the boot partition is small) is a lot more complicated than simply buying a 64GB+ SD card and (usually) having no storage problems.


The Compute Module has eMMC, and they haven't been excessively costly because of it, or reportedly unreliable in the way SDs are. Either way, I suspect the issue is the Foundation's power circuit design rather than SD cards being unfit or people throwing in cheap ones.


Well, the issue with older Pis was that they were powered over (micro) USB 2.0, which officially tops out at 2.5 W - while IIRC the Pi 3 could try to draw up to 15 W, SIX TIMES more! No wonder SD cards got destroyed in the process!

But AFAIK this shouldn't be an issue any more (assuming a non-counterfeit charger) with USB-C (RPi 4 and later?), which starts at 15 W.


I don't know, but they did have brownout issues with 5.0 V supplies, reluctance issues or something with the official PoE HAT, and I'm not seeing PC-like multi-phase MOSFET bridges on the Pi to this day. It could be that some spike noise or something is getting into the SD and killing it (emphasis on could be - it's just my hunch).


I'd rather have onboard USB serial. No more trying to find a USB serial cable laying around, or enabling SSH and hunting down the IP address.

It already has the USB port for power, surely they could have gotten Broadcom to include USB serial in the SoC for negligible cost by now?


If your goal is to avoid using an sd card, have you considered a Beaglebone?


Thus tripling the cost of the cheapest Pi - which costs $5.


This is a marketing price; unless you buy in bulk, it costs $15.


What a missed opportunity here. By publishing that obfuscated code, top-notch specialists would have untangled it for you just for the sake of satisfying their curiosity, speedrunning their way until it was crystal clear what the device's purpose was. And completely for free.


That one really felt like a written-version of a Mr. Robot episode.

Lovely!


This[0] is probably what you had in mind:

[0] https://youtu.be/XTN_-pRZjoU?t=415


Yup. Exactly this scene. Thanks for reminding the great memories.


The nRF52832-MDK has neither WiFi nor RFID capabilities


The chip has 13.56MHz RFID capabilities but obviously needs to be attached to an appropriate antenna which this dongle does not have.



Because you can use the 2.4 GHz chip antenna for anything you want to, including WiFi?


Once found a Linksys Wifi router under a desk, the employee was using it to check their Hotmail. I was pretty impressed they knew to switch their network connection to wireless, but it WAS still on our network.


Something like this is less likely to be noticed: https://arstechnica.com/information-technology/2012/03/the-p...


Same category as those keylogger USB plugs.


So, not just a Pi-Hole as I immediately first assumed.


Now I guess a smaller Pi Zero could do this with a much smaller footprint.


Pi Zero doesn't have an ethernet port, so you have the size of the pi+ethernet adapter then.


Technically, I believe Pi-hole works over Wi-Fi as well: that is, you can have the Pi Zero running Pi-hole connect to your router via Wi-Fi. Then all your devices use the Pi Zero as their DNS server.

I could be mistaken though; I've only ever installed it on a Pi 3.


You're right. I have a Pi Zero W which runs Pi-Hole over wifi. My mobile devices use it as a DNS server.


I've been playing around with orangepi zero for when I just need ethernet, wifi, and USB. It fits in an Altoids tin with room for some cable management.


Heard a story about some ethernet device cemented into a wall, perhaps on HN. Good luck finding that ...


Once upon a time, when Zigbee was the latest hype, a friend worked on a project to cast cheap hygrometer sensors into concrete and have them report via a mesh network. Apparently the sensors were predicted to be cheaper than having an engineer walk the site taking readings to ensure it's ok to start covering it up.


This story sounds so familiar; did this get posted 3-4 years ago? Good story and good sleuthing tho.


This article was shared here before, and since then I'd been failing to find it again. Thanks for reposting!



