iOS never supported this configuration regardless; a change in SSL certificate does not cause any kind of notification to the user.
Also, you're basically objecting to the entire idea of PKI for use in IMAP which is incredibly hard to justify. Perhaps you wish to use a different model for your own personal reasons but the default being PKI should not be controversial, and if you want to use your own model you should use a different mail client.
It supported using self-signed certs, but if the server suddenly switched from a self-signed to a trusted CA-signed certificate, no prompt would be given. So the idea that self-signed certificates are somehow more secure for this specific purpose is incorrect.
It was a complex trust relationship, and Apple's "it just works" approach was onerous to work around. When security is the top priority I would always go with self-signed certificates.
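Since we're talking about what that trust model looks like in practice, here's a rough sketch of manual fingerprint pinning. Nothing here is from the thread: the host and fingerprint are placeholders, and it uses Node's `tls` module purely for illustration.

```typescript
// pin-check.ts -- illustrative only: trust a specific (possibly self-signed)
// certificate by its SHA-256 fingerprint instead of relying on the CA chain.
import * as tls from "node:tls";

// Fingerprint recorded out of band; placeholder value, not a real server's.
const PINNED_SHA256 = "PASTE-YOUR-RECORDED-FINGERPRINT-HERE";

const socket = tls.connect(
  { host: "imap.example.com", port: 993, rejectUnauthorized: false },
  () => {
    const cert = socket.getPeerCertificate();
    if (cert.fingerprint256 !== PINNED_SHA256) {
      // A changed fingerprint is exactly the event the parent wants surfaced.
      socket.destroy(new Error("server certificate changed -- refusing to trust"));
      return;
    }
    console.log("pinned certificate matched; proceeding");
    socket.end();
  },
);
```

The trade-off is the one described above: you get an explicit signal when the certificate changes, but every legitimate rotation also needs a manual update of the pinned fingerprint.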
I think the last few paragraphs go off the rails. If creators didn't own (at least in some way) some controlling stake in Nebula, why would they publicly say that they do? Moreover, why would creators join Nebula if the terms were not beneficial to them in the first place?
I find it funny that the author writes
> It’s equally possible, however, that the system was set up in order to keep any meaningful power away from the creators.
Does the author really think that the chance that all of these creators are lying to their audiences is just as likely as them all telling the truth?
Also, the author even admits
> As I mentioned previously, some ownership of Standard has since been offered to other creators through stock options, but it’s unclear how much or what type of stock those options represent.
Owning equity (and thus voting power) in Standard also means that the creator has the ability to vote on how Standard operates Nebula, so the conclusion that the creators have no control over Nebula literally cannot be true. The statement that "the creators own 0 percent of Nebula" is just misleading, and yet this is somehow the important conclusion that the author wants readers to take away...?
The author’s thesis is that the creators are being tricked. They own some complex bespoke right in Nebula (“an entirely new kind of cooperative corporate governance”), which they’re told and believe is equivalent to ownership, but actually it’s a sham that will break down if Standard ever wants to do something that isn’t in the creators’ interests.
The author doesn't go through this in nearly enough detail to make that argument convincing. Rather than spending the entire time trying to find out what the "real" ownership amount is, the author should've spent the time contextualizing the situation.
The author basically spends the entirety of one sentence dismissing the idea that there could exist a corporate governance model that gives creators a meaningful way to direct the company's decision-making process, and spends the rest of the time on a wild goose chase to figure out the "actual" ownership percentages.
It was pretty obvious from the beginning given the repeated mentions of complex ownership models that the "real" numbers were not going to mean that creators owned "real" equity in the company. An investigation about what this actually means would've been a much better way to write this kind of essay.
Instead all we got was a long article with a conclusion that was reasonably obvious in hindsight, and no real evidence to support the thesis that "it's all just smoke and mirrors".
I don’t agree that was obvious in hindsight. I was familiar with Nebula before this article and I had always understood it to be something like a co-op where creators and only creators had genuine equity. When reading the first bit, I assumed as the author did that it must be something where the co-op owns a controlling share in some underlying company.
The conclusion that there’s nothing like a co-op at all is not what I would have expected and I really think does suggest that it’s all smoke and mirrors. If this “ownership” doesn’t consist of anything more than a right for creators to be paid based on their view counts, isn’t it just a YouTube contract with extra steps?
"The author’s thesis is that the creators are being tricked."
The author says: "Unfortunately, without access to one of their contracts, we can’t know for sure what power the broader group of creators actually has."
While the accusation of the creators being tricked might be between the lines[1], I think the more direct accusation is the subscribers being tricked.
The subscribers are made to believe:
1. the creators get 50% of Nebula's profit
2. their money goes into a co-op of creators
3. Nebula hasn't taken VC money
My reading is that the author claims that only the first point is true.
[1] To the best of my knowledge, none of them has come forward with any accusations. On the other hand, we probably only should expect this to happen once Nebula gets in trouble or is actually sold.
It's unlikely that the company itself makes these tablets; they probably buy them in bulk off Alibaba or something, and they all probably fail at the same time because they were all made at the same time.
The real problem is that the quoted replacement price is so high, given that we know the tablets themselves are like $30 each.
They could control the whole system over PoE with a bog-standard USB-to-Ethernet adapter, make the app easily run on any Android device, and charge the customer less for a better, more reliable product. But rather than do that, they rigged up some janky interface and built a custom enclosure, farmed out to an overseas manufacturer who bought the parts for $20 and charged the middleman $100 for them, who then charged the dealer $250 for them, who then charged the installer $600 for them, who then charged the customer $1600 for them. (Got to get that 2.5x margin on hardware every step of the way, after all!)
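Just to spell out the arithmetic in that chain (using only the prices stated above, nothing sourced beyond the comment itself):

```typescript
// The quoted handoff prices and the markup taken at each step.
const handoffs: Array<[string, number]> = [
  ["manufacturer", 20],
  ["middleman", 100],
  ["dealer", 250],
  ["installer", 600],
  ["customer", 1600],
];

for (let i = 1; i < handoffs.length; i++) {
  const [, prev] = handoffs[i - 1];
  const [who, price] = handoffs[i];
  console.log(`${who}: $${price} (${(price / prev).toFixed(1)}x markup)`);
}
// End to end: $1600 / $20 = 80x the original part cost.
```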
If they had gone with a PoE system, wiring would have been cheap, replacement parts plentiful, and customer satisfaction would be sky high. Sure, you would sell fewer full systems, but to me that is a small price to pay for having the most useful and interesting systems on the market, with fans creating all sorts of mods and integrations for your equipment and becoming customers for life.
The problem with these is that bugdoors require you to target way more valuable stuff compared to backdoors. With a backdoor you can target practically any library or binary that is being run with root privileges, while with a bugdoor you can only really target code that is directly interacting with a network connection.
Direct network facing code is much more likely to have stringent testing and code review for all changes, so as of now it seems a bit easier to target codebases with very little support and maintenance compared to an attack that would target higher value code like OpenSSH or zlib.
Most of these proposals would probably make the internet a worse place rather than a better one.
Complete anonymity on L3 would result in all tracking being on L7 instead. Right now at least most people can use Google/YouTube/most other websites without creating an account. With complete anonymity, it's all but certain that all of these would need to be gated by account creation to prevent abuse.
This would actively increase the ability for websites to track you, or else they'd need to be able to somehow handle abuse with exactly 0 information about where any given connection is coming from.
I don't think these proposals were seriously thought out by the OP.
> Most of these proposals would probably make the internet a worse place rather than a better one.
Nice try, Google.
But more seriously:
> Complete anonymity on L3 would result in all tracking being on L7 instead
Good. Then we the users will have more control over it, and outright shut any tracking down. Even using a PiHole might become a thing of the past in this new reality, while also preserving anonymity and being able to pick and choose which traffic is desirable (at the client).
> With complete anonymity, it's all but certain that all of these would need to be gated by account creation to prevent abuse.
"Abuse" is such a nebulous term so as to be nearly meaningless these days. YouTube, Twitch and many others have claimed "abuse" for practically every single thing they don't like. Even today they are trying to shut down downloaders like yt-dlp by trying to obfuscate sources of the videos, adding short-lived tokens for access, and introducing ever more complex JS snippets for the official players to parse and run before being able to stream the video.
> This would actively increase the ability for websites to track you, or else they'd need to be able to somehow handle abuse with exactly 0 information about where any given connection is coming from.
Well, I for one will not weep for at least 80% of today's internet if it went down tomorrow because tracking no longer exists and those "businesses" are no longer solvent and able to sustain themselves.
As for flooding, maybe it should not be the websites' problem then; ISPs should handle it. "User X just sent 1 million packets in the last 5 seconds! Shut him down!" and what do you know, suddenly DoS attacks nearly cease to exist overnight. That includes shutting down an entire internet cafe from which somebody decided to play hacker from the movies. Let the internet cafe figure it out. Let them buy a better router or install software that enforces packets-per-second limits. This software will quickly get commoditized in this new era and it will be mostly trivially easy to install.
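The policy being described ("a million packets in five seconds, cut them off") is simple enough to sketch. This is purely illustrative with made-up thresholds; in reality it would live in the router's data plane rather than in application code:

```typescript
// Per-subscriber fixed-window packet counter, as a toy model of the rule.
const WINDOW_MS = 5_000;
const MAX_PACKETS_PER_WINDOW = 1_000_000;

const counters = new Map<string, { windowStart: number; packets: number }>();

// Called per packet (or, more realistically, per sampled flow record).
function shouldThrottle(subscriberId: string, now = Date.now()): boolean {
  const c = counters.get(subscriberId) ?? { windowStart: now, packets: 0 };
  if (now - c.windowStart >= WINDOW_MS) {
    c.windowStart = now;
    c.packets = 0;
  }
  c.packets += 1;
  counters.set(subscriberId, c);
  return c.packets > MAX_PACKETS_PER_WINDOW;
}
```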
There are possibilities.
...I'll grant you that DDoS is still a problem though. But with enough encryption and going through several hops it might become impractical -- or at least less practical than it is right now, because these two factors increase your latency towards the attacked target, meaning that the attacked server(s) should absorb the attack(s) easier than before. And, again, individual ISPs should firmly say "NOPE" to any bad actor.
And even if this new routing and encryption get so commoditized that our current levels of DDoS become feasible again, I'll say again and again that ISPs should learn to quickly throttle misbehaving users.
Finally, how do we address malicious state actors owning their own ISPs or even entire peerings between several of them? No idea, but the next-ish ISP in the chain could still severely throttle packets per second if the bad actor ISP starts spamming. But here I am truly not sure if this can actually be solved.
Is anything I said feasible, or even making a lot of sense? Likely not much, granted, but I am not seeing "abuse" as an excuse to last much longer. Git gud, corporations!
Finally, we have so much modern tech that we could start modernizing the internet tomorrow. Of course we can't just swap out tech that uses old protocols, but putting payloads on top of TCP or UDP is not a problem; part of the desired anonymity guarantees will disappear, sure, but I find it weird that we in general wouldn't take even a partial win.
As long as we're redesigning the entire internet, make it so that a computer can request from its upstream that it no longer receive packets from a source. That upstream can request the same from its upstream and so on. I'm surprised this doesn't already exist honestly.
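For what it's worth, the mechanism is easy to sketch. Nothing like this exists as a standard end-user protocol today (BGP blackholing/Flowspec between networks is the closest real-world analogue), and all names here are made up:

```typescript
// A block request that each hop honors locally and then forwards upstream.
interface BlockRequest {
  source: string;   // prefix the victim no longer wants to receive from
  ttlHops: number;  // how far upstream the request may propagate
}

class Router {
  private blocked = new Set<string>();
  constructor(private upstream?: Router) {}

  requestBlock(req: BlockRequest): void {
    this.blocked.add(req.source);            // stop forwarding it ourselves
    if (this.upstream && req.ttlHops > 0) {  // ask the next hop to drop it earlier
      this.upstream.requestBlock({ ...req, ttlHops: req.ttlHops - 1 });
    }
  }

  accepts(source: string): boolean {
    return !this.blocked.has(source);
  }
}
```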
A sort of blacklist that propagates upstream, progressing through DNS to final IP ranges. A preponderance of evidence gets a range banned until compliance is evident. Sounds good!
I agree. Complete anonymity is bullshit. The internet should be pseudonymous. I mean, users should have a static IP and that's it. Getting additional information about an IP should be limited to law-enforcement government organizations (crime fighting).
Users should build their reputation on the internet. If someone is an asshole, then OK, expect to be banned in most places. Right now people do NOT care, because they are quite anonymous.
As for all the other points he mentions, they are absolutely bad for the internet. He specified something more like Tor (which he mentions), which is OK. That's the point; maybe it's time to treat the internet more like a transport network and build small internets on top of it. The infra is already there; there are a shitload of VPN providers, so people are kinda aware of that layer.
> Getting additional information about an IP should be limited to law-enforcement government organizations (crime fighting).
How would that work in practice? Wouldn't companies like Google and Facebook still have so much user data as to effectively know everything they need about user IPs?
> Users should build their reputation on the internet. If someone is an asshole, then OK, expect to be banned in most places. Right now people do NOT care, because they are quite anonymous.
Combined with legal restrictions on IPs, how would this work? We would need some central authority for universal identity, and if we look to the government for that as well, they'd have an easy path to censor whomever they want online.
They only know more because people are careless, providing them all the data.
That is out of scope. If you are careless, bad things can happen.
As for censorship, they can already do that easily. They can, for example, block the domain of the site where you publish. Just try it yourself: set up a VPS with a website that is very out of alignment with your government. :)
> They only know more because people are careless, providing them all the data. That is out of scope. If you are careless, bad things can happen.
Anyone seriously interested in not providing any data likely isn't using the internet at all, though; they're already fine regardless of IP tracking. There are very few types of online service that don't require some kind of data to be useful, whether it's a user login, email address, or search queries.
> As for censorship, they can already do that easily. They can, for example, block the domain of the site where you publish. Just try it yourself: set up a VPS with a website that is very out of alignment with your government. :)
That's a single point of attack though. The government can censor my website by forcing infrastructure companies to block it. Unless I misunderstood your earlier message, though, a central authority gatekeeping IP data would almost certainly lead to a single entity having the power to block me from the internet entirely.
I myself am not interested in providing any data to those companies and I am still using the internet. Yeah, I avoid FAANG and related sites though. Not that I feel I'm losing anything important really...
I've noticed that it's all the chip companies that have CEOs who understand, at a deep level, the stuff that the company is actually doing.
Compare and contrast the big software / services tech firms...
It feels like companies like Intel, AMD, Nvidia and TSMC are much more about delivering good technical solutions to hard problems than software companies like Google or Microsoft that try to deliver "concepts" or "ideas".
I do sometimes wonder though why decision making at a company like AMD benefits so much from having a highly competent engineer as the CEO compared to let's say Oracle...
You don't want to hear this, but it's because what we programmers do is the easy part of software. In the hardware business the goal is well-defined (just be faster+) and all the magic is in how to do it. In software the difficult part is figuring out what the customer actually wants and then getting your organization to actually align on building that. The coding itself is easy.
+There is obviously some nuance to what faster means. Faster at what? But finding some popular workloads and defining benchmarks is a lot easier than what software people have to do
Who doesn't like hearing that we (programmers) get all the fun, easy work while the other suckers in the economy have to put in the hard work for our benefit?
Agreed, most hard coding problems are due to requirements thrashing and business vision churn. All things related to figuring out what the customer wants.
Calling all the “ad-ware” web companies “Tech companies” is what leads to this apparent contradiction. It is largely identity appropriation across the board.
The Topics API doesn't seem to have any abuse opportunity since it's entirely enforced by the browser itself. There's nothing the JavaScript API can do that could give Google an advantage here, given that, as far as I know, there's only a single function that can be called in the first place.
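For reference, calling it from page script looks roughly like this (a sketch assuming the current `document.browsingTopics()` shape; field names may differ by version):

```typescript
// The page can only ask the browser which coarse topics it has decided to
// expose; there is nothing to feed back in or enumerate beyond this call.
async function logTopics(): Promise<void> {
  if (!("browsingTopics" in document)) return; // unsupported browser

  const topics = await (document as any).browsingTopics();
  for (const t of topics) {
    console.log(t.topic, t.taxonomyVersion);
  }
}
```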
I think more research would need to be conducted to see whether this change is actually anti-competitive or not.
I don't play Roblox, but I'm aware of their history of exploiting children.
Based on my preliminary research it appears as if these websites actually just ask you for either your Roblox username/password or ask you for your Roblox cookie to authenticate.
I really doubt they would actually be apathetic about removing these, since even though they do get richer off of it... if discovery finds evidence they're trying to cover this up, the damages will be endless...
The actual answer is much more complicated. For example, Google Cloud offers two different bandwidth tiers: premium and standard. The calculation in the OP assumes premium since that's the default option, but obviously it's much more expensive.
Google Cloud's "premium" bandwidth is much akin to AWS Global Accelerator, since it utilizes Google's own backbone network for as long as possible before exiting at the nearest peering point between Google and whatever ISP your end user is on. AWS Global Accelerator has some other options available that make it fundamentally a different product, but the routing characteristics are much more similar to GCP premium bandwidth than anything else AWS offers.
Ingress is free because it helps them balance their pipes, and it would be really shitty to charge for DDoS attacks. As far as I can tell, with the exception of some really expensive network environments (e.g. China), nobody has ever charged for ingress.
With the exception of OVH, none of the cheap providers the article lists has any kind of backbone network. They all rely fully on transit providers. Turns out backbone networks are expensive to operate!
OVH is the sole example of a provider that has a backbone network, and admittedly, it's pretty good. However, it's nowhere near as expansive as the big three's, and it falls flat in Asia (which is the hardest region to route traffic in). Also, OVH has to build datacenters so cheaply that one of them burnt to the ground in recent years...
(Cloudflare has a backbone too, but you have to pay a lot extra to use it. Linode uses the Akamai backbone now but that's a very recent acquisition and it's expected that Akamai will eventually raise costs significantly)
Yes, bandwidth is way too expensive on cloud providers. AWS Lightsail is proof of that. However, I see no reason to believe that this is purely for vendor lock-in, and nobody has been able to give any evidence of causation between the two beyond "well it's so expensive!!!"
It is very simple. If you move your data off cloud provider X, cloud provider X is losing revenue because you are doing things with your data off their platform.
They therefore charge high fees to move your data off the platform to discourage this behavior. Meaning you now need to use cloud provider X’s services to do anything with the data.
Attempts at vendor lock-in have been core to software service companies since they were born.
> It is very simple. If you move your data off cloud provider X, cloud provider X is losing revenue because you are doing things with your data off their platform.
Right but if this were the case then why does the Bandwidth Alliance allow you to move data at a much lower cost for 2/3 of the major cloud providers? If they _really_ cared so much about not allowing you to do processing with a third party, the Bandwidth Alliance wouldn't exist!
AWS is the sole hold-out here, and I think the way that Cloudflare worded this makes it pretty clear that the Bandwidth Alliance is basically more of a middle finger to AWS than anything else, but it also seems clear that the cloud companies aren't actively trying to make it costly to do data processing on a third party.
Egress toll is not about preventing migration off-platform. It's about preventing operation off-platform. They don't want you to come to GCP for a single product like Spanner or BigQuery or some high-tech ML/AI offering while most of your infra runs in big dumb baremetals at the Hetzner or OVH datacenter down the street. If you're coming for Spanner, you also have to buy their overpriced VMs, object storage, log storage and whatever else you need. That's where the real money is made.
Bandwidth alliance looks to be a political tool for cloud providers to save face. Not dissimilar to public companies paying token tribute to ESG which is all the rage these days.
> Ingress is free because it helps them balance their pipes
That isn't how business works. Companies maximize their profits and "balance" isn't a profit center. If it didn't benefit them in some customer leveraging way, they would charge for ingress.
You pay for everything. Either directly or indirectly. Indirectly often turns out to be much more expensive.
> That isn't how business works. Companies maximize their profits and "balance" isn't a profit center. If it didn't benefit them in some customer leveraging way, they would charge for ingress.
What I'm referring to is the practice of balancing peering ratios. That is, when you make transit/peering arrangements with other ISPs, some ISPs will charge more if the amount of data you're sending to them vs the amount of data you're receiving from them is not balanced. It is in Google's best financial interest to at least try to balance their pipes in this way.
Google's "standard" bandwidth pricing is about 15%-45% cheaper than "premium", which is admittedly a significant discount, but it's still an order of magnitude more expensive than some of the other options on the list.
Nothing in your comment rejects or disproves the claim that egress costs are vendor lock-in.
Your link to the Bandwidth Alliance explicitly states that their justification for network costs is unloading infrastructure costs onto end users as data fees. That's their only and best justification. This is clearly a business decision that has no bearing on operational costs.
Some cloud providers charge nothing, others only start charging after hitting a high threshold from a single instance. Do they not operate infrastructure?
It's their business, it's their business model. Some restaurants charge you for a glass of tap water too. Let's not pretend they do it because of infrastructure costs.
> Some cloud providers charge nothing, others only start charging after hitting a high threshold from a single instance. Do they not operate infrastructure?
Yes you do pay for the rest of their infrastructure when you rent servers from them...
I'm not saying that the fees aren't extremely overpriced. I know what a gigabit port costs. But saying it's to maintain vendor lock-in is just not true, and nobody has offered any actual proof of it being true.
The bandwidth alliance exists to try to cut into AWS’ business. They could always have unilaterally cut rates closer to their cost but that margin was appealing, until they realized that they were never going to catch up with AWS without being cheaper.
High egress makes it expensive both to leave and to use other services: if you use S3, you're probably putting processing and analysis in AWS because using someone else's service would incur hefty egress charges.
> For example, Google Cloud offers two different bandwidth tiers: premium and standard. The calculation in the OP assumes premium since that's the default option, but obviously it's much more expensive.
Of course, the non-premium tier is IPv4 only, and only available in some locations.
Deno wasn't originally designed to be node compatible, but I think they realized nobody would want to switch to it because node is so prevalent already...
I think the main appeal of projects like Bun and Deno is the built-in tooling for building/bundling modern TypeScript applications without requiring dozens of dependencies for even a basic hello-world app.
If node.js decided to include functionality similar to what is available on Bun/Deno, both projects would probably lose traction quickly.
that feels like a really weak value prop to me. how often do you have to install that stuff? how hard is it actually? can you really not use, e.g. for react, the typical vite starter and it's done?
The other side of it is if you want to distribute your code not as a server. If you write a CLI in Node + TS + ... then it might be pretty fiddly for someone to clone that repo and get it running locally. You'll certainly have to document exactly what's needed.
Whereas with Deno you can compile to a single binary and let them install that if they trust you. Or they can `deno install https://raw.githubusercontent.com/.../cli.ts`, or clone the repo and just run `deno task install` or `deno task run`. For those they need to install Deno, but nothing else.
> then it might be pretty fiddly for someone to clone that repo and get it running locally
with node + TS, it is straightforward (and common) to generate JS output at publish time for distribution. then, using the CLI tool or whatever is only a `npm install -g <pkg>` away, no extra steps.
sure it's not a single binary, but I'd argue _most_ users of a general CLI utility don't necessarily care about this.
So Deno is better at small scripts written in Typescript than Node. Then, the question becomes, if you're going to have Deno installed and if it works well enough to replace Node, why keep Node?
then you have to define "works well enough to replace Node"
i was excited about bun too, until v1's "drop-in node replacement"
that was in no way a drop-in node replacement. using that would be the fastest way to kill a business with its terrible bugs and rough edges.
i used to be really excited about deno, but now i think the tradeoffs aren't going to be worth it for mass adoption. i sometimes write servers in go. now that i have go installed, should i use it for all my servers? no, it's just another tool with different trade-offs. most times, node will suit my project better.
> can you really not use, e.g. for react, the typical vite starter and it's done?
I have to install that stuff every time I'm starting a new project, switching to a new project, or creating a one-off script.
It's hard when creating a new project because there's always at least one flag that needs to be found and set differently from a previous project for some random reason, every single time.
It's hard when switching to a new project because you have to figure out which version of node you're supposed to be running, since dependencies can behave differently across node versions and across machines. It might even silently work for you without being on the right version, meaning you continue working on it, and then your commits don't work for yourself later or for others now or later. This leads to one of two possibilities:
1. A long job of unwinding everything to figure out what the versions should have been the whole time.
2. A lot of trial and error with version numbers in the package and lock files trying to figure out which set of dependencies work for you, work for others, and don't break the current project.
We also can't use the typical community templates because they always become unmaintained after 2 years or so.
---------------------------
Why I like Deno:
- Stupid easy installation (single binary) with included updater
- Secure by default
- TS out of the box (including in the REPL, making one-off scripts super easy to get started; see the small sketch after this list)
- Settings are already correct by default.
-- and if you ever need to touch settings for massive projects, they all sit in one file, so no more: tsconfig/package.json/package-lock/yarnlock/prettier/babel/eslintrc/webpack/etc... And since the settings are already sensible by default, you only need to provide overrides, not the entire files, so the end result for a complex project is usually small (example link: https://docs.deno.com/runtime/manual/getting_started/configu...)
- Comes with a builtin std library, meaning I don't need to mess around with dependencies
- Builtin utilities are actually good, so I don't need to mess around with the rest of the ecosystem. No jest/vitest, no webpack/rollup, no eslint/prettier/biome (but you can keep using your editor versions just fine).
- Since it came after the require -> import transition, basically everything you're going to be doing is already using the more sensible es modules.
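As a concrete (hypothetical) example of the one-off-script point, this is the whole project: no tsconfig, no bundler, no npm install step, assuming current Deno/JSR conventions:

```typescript
// greet.ts -- run directly with `deno run greet.ts --name=HN`.
import { parseArgs } from "jsr:@std/cli/parse-args";

export function greet(name: string): string {
  return `Hello, ${name}!`;
}

if (import.meta.main) {
  const args = parseArgs(Deno.args);
  console.log(greet(String(args.name ?? "world")));
}
```

A matching `greet_test.ts` using `Deno.test` and `jsr:@std/assert` would cover the "no jest/vitest" point the same way, with `deno test` as the only command needed.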
Except they are playing catch-up with what Microsoft says TypeScript is supposed to mean.
I'd rather have pure JavaScript, or use TypeScript from source, without having to figure out whether a type-analysis bug is from me or from a tool that is catching up to the latest TypeScript.
Not sure what modern TypeScript means, but you only need one or two dependencies (esbuild and tsc) unless you are doing something more involved, in which case Deno alone might not work either.
OTOH the ways that you can improve upon node's shortcomings while staying compatible with it are limited. Bun is taking the pragmatic approach of providing fast drop-in replacements for node, npm and other standard tools, while Deno was the original creator of node going "if I started node today, what would I do differently?". So, different approaches...