There are a large number of similar projects out there.
I implemented login credential extraction for both Chrom* and FF-based browsers in the somewhat shambolic but generally-useful `browser_cookie3` Python module last year:
Love the brutal honesty! Even as a macOS user I admit that it's difficult to justify learning the Apple-proprietary tech needed to port apps to it (and automated testing is indeed a nightmare). Curious as to your thoughts on Android?
> > Literally China/Russia are more trustworthy.
>
> Only in the sense that, as a US citizen who has no desire to travel to China or Russia, I don't feel all that worried that either country is going to do anything bad to me directly.
I sort of get this PoV, but on the other hand…
If China had any information about you that was valuable for any purpose whatsoever (trading an intelligence tip to a corrupt businessman in a mafia state?), its government could use it with no legal or political safeguards.
The US government has legal safeguards against this, and would face _massive_ potential political risk for doing so against one of its own citizens.
>The US government has legal safeguards against this, and would face _massive_ potential political risk for doing so against one of its own citizens.
The US government literally steals cash from its citizens and faces no repercussions whatsoever. If you carry cash with you in the US, you're in real danger of having it confiscated by the police as "drug money" under civil asset forfeiture and never seeing it again. You can claim the US has "legal safeguards", but until they're actually tested, that's just a supposition.
The US has a very large voting bloc composed of people who want their state and municipality to be free from restrictions imposed by the federal government. In practice, this leads to many places with a significantly larger amount of actually-experienced tyranny than you get in more uniformly governed countries. Ideally, this is coupled with freedom of movement, so that it's easy to get a job and housing in a state or city with more liberal governance.
They actually have been tested. The Supreme Court issued a recent ruling, and Congress has passed laws to try and restrict the practice (to the extent that federal rules can affect state and local ones). The distinction definitely is important, but if you have an ideological bone to pick, it's easier to ignore it.
If you actually believe that then you are amazingly ignorant about the legal structure of the USA and its dual sovereignty system. You should ask your civics teacher for a refund.
You think China and Russia don't do a hundred times worse to their citizens? The US is far from perfect, but it is drastically better than China and Russia.
Whataboutism. I never claimed they were better (and you're right, they're much worse). But the US is the one that claims to be the world leader in defending freedom and individual liberty. China never made any such claim that I know of.
> If China had any information about you that was valuable for any purpose whatsoever (trade an intelligence tip to a corrupt businessman in a mafia state?) its government could do so with no legal or political safeguards.
You should look up how Israel actively fishes LGBT palestinian people using fake dating site accounts, and threatens to out them in order to force them to contribute intelligence.
The US is clearly not that compromised. But they're not exactly clean either, considering some of the stuff that happened in central America.
It's quite intriguing that x86-64 processors of different microarchitectures and from different vendors basically all have similar non-determinism in their retired-instruction counters, but — at least in the authors' brief review — IA64 (Itanium), POWER, and SPARC appear to be fully deterministic.
I can't really see any good explanation for why this would be hard to get right specifically on x86-64. Can anyone else?
(And are there any more recent or thorough results on other archs, particularly arm64?)
> There is no guiding hand, metaphorical or literal, choosing how a quantum system evolves.
Indeed, nicely put.
To be even more specific about why not: Bell's theorem (https://en.wikipedia.org/wiki/Bell%27s_theorem) shows that, with some reasonable assumptions about locality, quantum mechanics cannot be explained away by a set of hidden variables that guide an "underlying" deterministic/non-random system.
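For the quantitative version (this states the standard CHSH form of the inequality, not anything specific to the linked article): correlations between measurements at settings $a, a'$ on one side and $b, b'$ on the other are bounded for any local hidden-variable theory, and quantum mechanics demonstrably exceeds that bound.

```latex
% CHSH combination of correlators for settings a, a' and b, b'
S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
% Any local hidden-variable theory must satisfy
|S| \le 2
% whereas entangled quantum states can reach the Tsirelson bound
|S| = 2\sqrt{2} \approx 2.83
```

Experiments consistently measure values above 2, which is what rules out the local "guiding hand" picture.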
I think what you're saying might be construed as the opposite of what you're intending, so just to clarify: Bell's inequality implies that IF there is some sort of underlying force guiding quantum phenomena, then it must be non-local (AKA faster-than-light). For deep technical reasons this is such a hard pill to swallow that a physicist would choose almost any other theory over this. It's effectively positing an infinite, all-knowing god, as anything less would not be able to consistently control the selection of these quantum choices.
It's an added reason to be dubious though. The primary and most fundamental reason to reject this idea of "quantum selection" is that nothing is actually being selected. In a system with two possible outcomes, both happen. "We" (the current in-this-moment "we") end up in one of those paths with some probability, but both outcomes actually do happen. This is the standard, accepted model of physics today.
Usually I can grok the significance of almost any item on HN that catches my eye, but here I'm at a loss. Can someone explain why this matters?
As far as I can tell, someone has figured out how to send Ethernet packets at a relatively high rate using hardware with a very limited CPU. Cool, but what can you _do with that_? If the RPi Pico has the juice to run interesting network _application-level traffic_ at line rate, that would be more intriguing, but I doubt that anyone's going to claim this device can serve web traffic at line rate, for example.
It's quite popular in the retro-computing scene, for example, to bring these old machines into the 21st century by using modern microcontrollers to add peripheral support.
For example, the Oric-1/Atmos computers recently got a project called "LOCI" which adds USB support to the 40-year-old computer[1], using an RP2040's PIO capabilities to interface the 8-bit data bus with a microcontroller capable of acting as the 'gateway' to all of the devices on the USB peripheral bus.
This is amazing, frankly.
And now, being able to do Ethernet in such a simple way means that hundreds of retro-computing platforms could be put on the Internet with relative ease.
RP2040/2350 are IO monsters. You could for example make a logic analyzer that transfers logic data through ethernet.
This "very limited" microcontroller has two cores. Both of them can execute about 25 instructions per byte for generating "application-level traffic". You could definitely saturate a 100 Mbps connection with just one core.
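As a rough sanity check of that budget (assuming an RP2350-class part with two cores at 150 MHz and roughly one instruction per cycle — the clock figure is my assumption, not from the parent comment):

```python
# Back-of-the-envelope instruction budget for saturating 100 Mbps
# on an assumed dual-core, 150 MHz microcontroller (~1 instr/cycle).
CLOCK_HZ = 150_000_000               # assumed core clock
CORES = 2
LINK_BYTES_PER_S = 100_000_000 // 8  # 100 Mbps -> 12.5 MB/s of payload

budget = (CLOCK_HZ * CORES) / LINK_BYTES_PER_S
print(f"{budget:.0f} instructions per byte")  # -> 24 instructions per byte
```

Even halving that for protocol overhead and DMA stalls leaves a workable budget per byte, which is why a single core plausibly suffices.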
Now that you mention it, I think I would like to see a logic analyzer that does just that. No buffering, just straight up shovel the data to a mac address, or even IP address, and be done with it (maybe lose a few frames here and there). Let the PC worry about what to do with it, like triggers etc.
Should be cheap, right? Though 1Gbit version might still be expensive..
How is this different from the cheap Saleae clones available now? Just sub out Ethernet for USB and that's how they already work: a cheap IC with nothing but an ADC and a USB PHY samples and sends as fast as it can.
I suppose the persistence of IPv4 has broken all of our brains, but with IPv6 you can just Not Have NAT, and just have normal end-to-end connectivity to any random box in your home from outside.
IPv6 can get rid of NAT which is one of the most annoying hurdles. It unlocks the type of use case where technical people can host something from home for fun, although many can’t access it because both parties need ipv6.
But if you set your sights higher and want to build true p2p apps for non-techies, or if you want “roaming” servers (say an FTP server on your laptop), there are more obstacles than NAT, in practice:
- Opening up ports in both a residential router and sometimes the OS or a third-party firewall. Most people don’t know what a port is.
- DNS & certs which require a domain name and a fixed connection (if the peer moves around across networks, eg a laptop or phone, DNS is not responsive enough)
> IPv6 can get rid of NAT which is one of the most annoying hurdles.
Right.
> It unlocks the type of use case where technical people can host something from home for fun, although many can’t access it because both parties need ipv6.
At this point, that's an obstacle, but at some future point hopefully IPv6 will hit a critical mass and network effects will take off because there'll be enough stuff that _doesn't work without IPv6_, so customers will demand it.
> if you set your sights higher and want to build true p2p apps for non-techies
Definitely.
Restoring universal endpoint-to-endpoint connectivity on the IP network overcomes a _major hurdle_, one so big and longstanding that people have come to just assume its existence and fear its removal… but it certainly doesn't solve all the problems.
> or if you want “roaming” servers (say an FTP server on your laptop)
> - Opening up ports in both a residential router and sometimes the OS or 3p firewall. Most people don’t know what a port is.
I mean, UPnP makes big improvements in this area, but a lot of devices stupidly don't handle it, or block it for alleged security reasons. Frustrating.
> - DNS & certs which require a domain name and a fixed connection (if the peer moves around across networks, eg a laptop or phone, DNS is not responsive enough)
There's no real reason why TLS clients _must_ only trust certs when they see that the CN or SAN matches the _domain name_ through which they looked up the IP address. I think that with a better issuing infrastructure and UX, a TOFU-based (https://en.wikipedia.org/wiki/Trust_on_first_use) approach to self-signed certs for peer-to-peer services could be both comprehensible for non-techies and highly secure.
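Mechanically, TOFU pinning is simple; here's a minimal sketch of the idea (the pin-store format, file name, and function name are all mine for illustration, not from any existing library):

```python
import hashlib
import json
from pathlib import Path

PIN_FILE = Path("known_peers.json")  # hypothetical local pin store

def check_tofu(peer_id: str, cert_der: bytes) -> bool:
    """Trust a peer's cert on first contact; afterwards require an exact match."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
    if peer_id not in pins:          # first use: remember the fingerprint
        pins[peer_id] = fingerprint
        PIN_FILE.write_text(json.dumps(pins))
        return True
    return pins[peer_id] == fingerprint  # later uses: must match the pin

# First contact pins the cert; a changed cert is then rejected.
print(check_tofu("laptop.local", b"fake-cert-1"))  # True (pinned)
print(check_tofu("laptop.local", b"fake-cert-1"))  # True (matches pin)
print(check_tofu("laptop.local", b"fake-cert-2"))  # False (mismatch)
```

The hard part isn't the check itself but the UX around the mismatch case, which is exactly where SSH's TOFU model has historically trained users to click through warnings.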
Would you be willing to share a few details on how you do this? And how do you prevent someone spamming your devices, or is the risk so low you don't care?
Unfortunately most ISPs in my area don't dish out IPv6 addresses without ridiculous monthly charges. I hope one day it becomes more commonplace.
> Unfortunately most ISP's in my area don't dish out IPv6 addresses without ridiculous monthly charges
If you've got an IPv4 address that responds to ICMP, HE's https://tunnelbroker.net/ offers IPv6 ranges (a bunch of /64s and a /48) for free. You can configure a tunnel to work through many routers, but with some setup you could also have something like a Raspberry Pi announce itself as an IPv6 router.
Sites like Netflix treat HE tunnels as VPNs, though, so if you run into weird playback errors, consider configuring your device's DNS server/network not to use IPv6 for that.
As for your questions:
> how you do this
Open port 8888 to (prefix):abcd:ef01:2345:56, or whatever IP your device obtains, in your firewall. It's the same process as with IPv4, except you can use the same port on multiple devices.
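On a Linux-based router running nftables, for instance, that pinhole might look something like this (the table/chain names, interface, and address are placeholders — the address uses the 2001:db8:: documentation prefix):

```shell
# Allow inbound TCP to port 8888 on one specific device; everything
# else stays subject to the chain's existing drop policy.
nft add rule inet filter forward \
    ip6 daddr 2001:db8:1234:5678::56 tcp dport 8888 accept
```

Consumer routers expose the same idea through a "IPv6 firewall rules" page rather than a CLI.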
> And how do you prevent someone spamming your devices or is the risk so low you don't care?
While some services have started scanning IPv6, a home network from a semi-competent ISP will contain _at least_ 2^64 IPv6 addresses. Scanning that entire space is infeasible for most automated scanners.
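The arithmetic behind that: even at a generous probe rate (the rate below is my assumption, and already optimistic for an internet-wide scanner), exhaustively sweeping one /64 takes geological time.

```python
# Time to exhaustively scan a single /64 at one million probes per second.
ADDRESSES = 2 ** 64
PROBES_PER_SECOND = 1_000_000          # assumed, already optimistic
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years = ADDRESSES / (PROBES_PER_SECOND * SECONDS_PER_YEAR)
print(f"~{years:,.0f} years")  # -> ~584,942 years
```

In practice IPv6 scanners rely on hitlists (DNS, certificate transparency logs, predictable low addresses like ::1) rather than brute force.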
You just plug a device into your network. The device acquires an address. You can type that address into another device on the Internet to attempt a connection to your device. If your device is running a web server that allows access from the whole Internet, this brings up the home page. If you have a firewall, tell the firewall to enable connections to that web server from the whole internet.
What do you mean by spamming? People are scanning the Internet the whole time to see what's there, and it isn't a threat unless you are doing something terribly insecure. Scanning IPv6 is impossible in practice anyway, due to the high number of available addresses.
Thanks for your response. Spamming was a poor choice of words on my part. I really meant DDoS, or just generally people sending erroneous requests or being a nuisance wasting data/resources once they know the address, say if it was leaked.
> There's something nice about being anonymous behind a communal v4 gateway.
IPv6 lets you do this -- nearly every client will use privacy addressing, so your (default) source address rotates daily. However you can still connect to the machine on its main (non-privacy protected) IPv6 address.
Tangentially, these “privacy” addresses are such an IPv6-ism: small theoretical value at the expense of extra complexity and noise. If IPv6 had simply been “IPv4 but with four times the bits”, I suspect we would have come a lot further in global deployment.
IPv6 literally is that, plus a few pretty minor changes.
SLAAC? Literally a hack that accidentally caught on because some vendor implemented it sooner than DHCPv6 for some reason. It was intended that everyone would use DHCP just like before. And that's the biggest difference from v4 other than the address format.
They might sound minor but in practice they violate assumptions that are really crucial for implementations. Everyone who deals with addresses must make decisions about how and what to do in face of these quirks.
Another example is the zone identifier string. So how do you store them efficiently in memory or a db? Golang did a really clever thing with netip but the implementation was not easy. Oh well maybe we can always ignore and strip it? Maybe, depends on the use case.
The point is that going from exactly 32 bits to 128 bits plus, sometimes, a variable-length string (max length? encoding? allowed characters?) is not a small change for something as important and ubiquitous as IP.
In most cases, you don't. Zone identifiers are OS-specific, contain whatever the OS says they do, and may be only valid in the short term (e.g. Linux interface number). You only need them if you are doing lower level networking things. As a web app you just don't, because they aren't part of an internet address. As a web browser you pass whatever the user typed through to the operating system.
Okay, but that's not a minor change. Regardless of why it caught on, SLAAC completely changes how addresses are handed out, and is in many/most environments a requirement if for no other reason than that Android explicitly refuses to implement DHCPv6 ( https://issuetracker.google.com/issues/36949085 ). And once SLAAC is in play, suddenly privacy problems come up and you kind of need to jump through the extra hoops to avoid, y'know, putting your MAC address in every single packet you send over the public internet.
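Concretely, unmodified SLAAC derives the interface identifier from the MAC address via the Modified EUI-64 scheme (flip the universal/local bit of the first octet and insert `ff:fe` in the middle), which is why the MAC ends up visible on the wire:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the SLAAC (Modified EUI-64) interface identifier from a MAC."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                  # flip the universal/local bit
    b[3:3] = b"\xff\xfe"          # insert ff:fe between OUI and NIC parts
    return ":".join(f"{b[i] << 8 | b[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:11:22:33:44:55"))  # -> 211:22ff:fe33:4455
```

RFC 4941 privacy extensions replace this stable, MAC-derived identifier with periodically regenerated random ones, which is exactly the extra hoop being described.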
> SLAAC? Literally a hack that accidentally caught on because some vendor implemented it sooner than DHCPv6 for some reason.
The history I recall is very much academics designing theoretical standards vs. operators who actually implement the damn things.
The academics designed the standards around the use of SLAAC deliberately and intentionally. DHCPv6 was the ‘hack’ that operators implemented after the fact.
I’m sure there’s an RFC somewhere that’ll prove this one way or another, if anyone else reading this cares enough to determine for sure (Duty Calls).
With IPv6 you can realistically match IPs to a single household across multiple sites.
Access porn.com from 12.34.56.78 and you are one of dozens of households. Just because Bob Bobson who logged into Netflix is on the same IP it doesn’t mean that house was on porn.com.
Access from 2100:1234:5678:abcd:: and you are accessing from one specific household, even if the lower 64 bits differ. You’ll likely keep the same IP for longer than under CGNAT anyway (and by design you keep it until your ISP changes it).
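Extracting the household-identifying prefix is a one-liner with Python's `ipaddress` module (the addresses below are made-up examples using the documentation prefix):

```python
import ipaddress

def household_prefix(addr: str) -> str:
    """Collapse an IPv6 address to its /64 prefix, the usual per-site unit."""
    return str(ipaddress.ip_network(f"{addr}/64", strict=False))

# Two different privacy addresses in the same home collapse to one prefix.
print(household_prefix("2001:db8:5678:abcd:1:2:3:4"))     # 2001:db8:5678:abcd::/64
print(household_prefix("2001:db8:5678:abcd:aaaa::beef"))  # 2001:db8:5678:abcd::/64
```

So privacy addresses hide the individual device, but the /64 still functions as a stable household identifier.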
I think they mean CGNAT. My mobile phone connection goes through CGNAT so it's impossible to identify my individual phone by its IPv4 address, whereas my home address uniquely identifies my home, at least for a limited period of time. Sometimes this is good and sometimes this is bad. Sometimes you want to be anonymous and sometimes you want to be delineated from the people who are being anonymous.
Really?! The Linux kernel is a _pretty enormous_ counterexample, as are many of the userland tools of most desktop Linux distros.
I am also a key developer of an entirely-written-in-C tool which I'd venture [a large fraction of desktop Linux users in corporate environments use on a regular basis](https://gitlab.com/openconnect/openconnect).
The refusal to use C++ in Linux isn't entirely rational. Nobody else makes that decision. Other kernels are a mix of C and C++ (macOS/iOS, Windows, even hobby operating systems like SerenityOS).
Then you get into stuff that's not kernels and the user-spaces are again mostly all C++. The few exceptions that exist are coming out of the 90s UNIX culture, stuff like Apache or nginx. Beyond that it's all C++ or managed languages.
What do you mean by "freebooting"?
We added domain fronting support to the OpenConnect TLS-VPN client _in 2022_ because it is still working and useful for many people working in censored countries and environments. https://gitlab.com/openconnect/openconnect/-/merge_requests/...