My router is blocking this domain at the DNS level in OpenWRT, and techcrunch.com is redirecting me there, so I can't visit this page.
Edit: I literally cannot visit any techcrunch.com article; all of them redirect me to this dodgy ads+tracking domain. It doesn't matter if I came from Google, DDG, Reddit or HN.
Same here. I’ve learned to ignore Techcrunch and other similar websites from Yahoo/Verizon.
The redirect you’re getting is to a “consent” page that’s not actually GDPR-compliant: it takes a good 5 minutes to opt out of all the bullshit tracking, and even then I don’t trust it to actually opt me out and not track me.
Same here, and this isn't the first TechCrunch article on HN that redirects me. To be honest, it's kinda pissing me off to click HN articles just to get redirected to spam/ads or, in my case, a warning page notifying me of the redirect...
I get the same, and after much investigation and trying to configure uBlock, I decided to just allow it at the DNS level. The redirect happens via HTTP redirects, so you can't just block JS.
Huh, sorry for that then. I used to get this all the time with TC, Engadget and a few other sites, but since turning JS off, it's stopped. Something else must have changed around the same time I tinkered with it.
If I block all JS, it still redirects, and then uMatrix shows a page informing me that the ads page is blocked. The very first scripts I could allow are those ad scripts. Simply an unusable page. "techcrunch" ...
In practice most non-Internet PKIs don't actually want to do proof of control which is ACME's secret sauce. As you'll see much of the document is about the proof of control, because that's both the hardest problem and the only reason for ACME to exist at all.
In private PKIs either proof of control over the name is considered an out-of-band problem or it's elided altogether. I wrote about this when Peter Gutmann was being angry about ACME years ago, Gutmann saw ACME as redundant because of existing protocols like SCEP. If you want certificate issuance automation but you don't need proof of control over the names then you don't need ACME - SCEP (or half a dozen others) are fine for this purpose and you should use those.
Now, of course if you don't want proof of control but you do want automation it's worth taking a moment to reconsider exactly what your goals really are: What is a certificate doing for me here? What exactly am I certifying, if anything? But if the technical requirement is there, regardless of whether it makes philosophical sense, SCEP delivers and ACME is overkill.
As I wrote in my thread with Peter, it's usual with things like SCEP to provide a default implementation in which the part where you only give certificates to the Right People is left marked /* TODO */. Thus the result is often pure security theatre: certificates are issued to bad guys and good guys alike without distinction, and nothing at all is really being certified.
Manual processes may or may not be better in terms of actually certifying anything.
Sure, distribute your own root CA, but then how do people get signed certificates? I tend to work with large companies, where getting a signed certificate involves opening a ticket and waiting for someone on the other side to respond.
ACME would be ideal, but the official response of Let's Encrypt is that ACME is overkill for corporate environments and you should roll your own certificate automation.
Delegating enrollment permissions is a solved problem, technically at least, in a Windows domain. At that point it's an org-policy problem, and ACME won't help.
In an enterprise, you use the API that a CA provides and build it into the ticketing system. I've helped build systems that take care of this with ServiceNow a few times now.
At one place we aggressively policed external facing certificates. Don’t follow the process, your service gets whacked.
It’s a process you should look into, because the compliance regimes will start paying attention to it someday soon.
Unless your DNS is public. That still allows Let's Encrypt to verify you, and is actually how RavenDB provides SSL certificates to internal instances.
Automated renewal doesn’t mean that you don’t need monitoring. I’ve seen many letsencrypt-backed servers generating certificate errors because the automated renewal system broke (misconfigurations, borked updates, corporate-level firewalls/proxies blocking requests, etc.).
Even with letsencrypt you still MUST have a monitoring system, notifications going to the right person, and in general an organization that can act on this. The problem is often not technical: your organization must be structured in a way that those notifications are acted upon. If anything, letsencrypt lessens the frequency of those notifications so I posit that it’s even worse from the point of view of validating your organization: with regular certs, you get notified once every year or two.
Anyway, letsencrypt is a good thing: it’s just not a solution to this problem.
Let's Encrypt does exactly the right thing, which is to email you if a certificate is expiring without being renewed. Pretty much the behaviour you'd want from any CA, no?
But in a non-technical organisation, who should those messages go to?
Often the initial LetsEncrypt setup will be handled, correctly, by some IT staff.
Then it might break several months or years later for some odd reason.
The organisational challenge is to get the message through to someone who understands it and will act on it.
Yes, and: fix bugs so the setup doesn’t break. I’m constantly babysitting LetsEncrypt. It’s always failing in some stupid way, and all it can do is email me with: “I’ve been silently failing for the last couple of months and now your certificate is going to expire if you don’t drop everything and comb through my logs now LOL!”
This time the problem was LE all of a sudden decided to start storing my certificate in a directory called mydomain.com-0001 instead of mydomain.com, breaking the rest of the setup that relies on things being in the right directory. Automation is only useful when the software behaves predictably and consistently.
I'm looking for a program that is going to connect to all my SSL sites, and report back a problem if the cert is within 30 days of expiration. It is easy enough to write a client that will fail after the cert has expired, but I want one that will warn me ahead of time. And I don't want to mess with the system clock or something like that.
It shouldn't be a big program, I may just write it myself.
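It really doesn't need to be big. A minimal sketch of that kind of checker in Python, using only the standard library (the host list, port, and 30-day threshold are all placeholders to adapt):

```python
import datetime
import socket
import ssl

def days_remaining(not_after, now=None):
    """Parse the notAfter string from ssl.getpeercert() and count days left."""
    expiry = datetime.datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    now = now or datetime.datetime.utcnow()
    return (expiry - now).days

def check_host(host, port=443, timeout=10):
    """Do a real TLS handshake and return days until the served cert expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_remaining(cert["notAfter"])

def warn_expiring(hosts, threshold=30):
    """Return the hosts whose certificates expire within `threshold` days."""
    return [h for h in hosts if check_host(h) < threshold]
```

Run it from cron and mail yourself the output. One caveat: an already-expired or otherwise invalid cert makes the handshake itself fail, so you'd want to catch that exception and report it too.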
A bigger challenge is getting a complete list of your https websites, and an even bigger one is finding and monitoring all those non-https certificates, eg payment gateway certificates.
Also my employer (well, until the end of this week) Kynd does this as part of its broader "Check cyber-security stuff" offering for non-technical people. https://www.kynd.io/
Hi, I'm a co-founder of Amixr and we've developed curler.amixr.io, which monitors a website and delivers email notifications for free and also has an integration with our flagship product Alert Mixer
Depending on your use case, environment, etc there are existing tools like Monit that can do this for free. Or paid services like Uptime Robot and StatusCake.
updown.io https://updown.io/ has a pretty generous free tier and (in addition to checking if your service is up) sends you a warning when a cert is expiring within 14 days
But as a person who rolls LE certs across a very non-happy-path environment (many SAN domains, geo-balanced edge nodes), I have a lot of issues automating this process. Right now I have an HTTPd which reverse-proxies the .well-known requests back to a central place where I run certbot, and then I push the cert out to the nodes. However, sometimes one of our SAN domains needs to be removed, and the whole universe comes crashing down.
The DNS-01 challenge is "nice" (although it doesn't feel super well supported), but it requires domains registered with some kind of DNS server that accepts API calls to change records, such as Amazon Route 53 - it's exceptionally hard to roll your own DNS in this case. :\
1. HTTP-01 challenges have a "correct" answer for a given Let's Encrypt account which depends only on the challenge (i.e. the part in the HTTP GET request) and on knowing which account you want to use.
Certbot silently creates an account for you, with a private authentication key and so on, for the Let's Encrypt service. When it gives you a file to prove control by placing it in /.well-known/acme-challenge/, the content of the file is always the same as the filename, plus a suffix that depends on your key.
So long as you use the same account you can thus bake this suffix into the web server, essentially causing it to answer any request from anybody: "Hey, who is allowed to issue for somename.example?" "dijit is allowed to do that." Bad guys can't use this because they don't know your private account key, but for you everything is now magically authorised, since your server will answer "dijit is allowed to do that" to any question it's asked.
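For the curious, that file content ("key authorization") is specified in RFC 8555 as the token, a dot, and the RFC 7638 thumbprint of your account's public key. A rough Python sketch (the JWK values in the test are made up, and a real JWK passed in must contain exactly the key's required members):

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk):
    """RFC 7638 thumbprint: SHA-256 over the canonical JSON encoding of the
    key's required members, base64url-encoded without padding."""
    canonical = json.dumps(jwk, separators=(",", ":"), sort_keys=True).encode()
    digest = hashlib.sha256(canonical).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def key_authorization(token, account_jwk):
    """HTTP-01 response body: the token (which is also the filename under
    /.well-known/acme-challenge/) plus a suffix derived from the account key."""
    return token + "." + jwk_thumbprint(account_jwk)
```

So "bake it into the web server" amounts to rewriting any request for /.well-known/acme-challenge/&lt;token&gt; into the response &lt;token&gt;.&lt;thumbprint&gt;.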
2. DNS-01 can be redirected using CNAME. Add a CNAME, once, manually if necessary, to redirect the DNS-01 checks to a DNS server you've set up for this specific purpose.
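For example (zone names here are invented), a single record in the production zone hands the challenge off to a zone you can safely automate:

```
; in the example.com zone, added once by hand:
_acme-challenge.www.example.com. IN CNAME _acme-challenge.acme-updates.example.net.
```

Let's Encrypt follows the CNAME when it looks up the TXT record, so the ACME client only ever needs update access to acme-updates.example.net.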
Besides the multiple commercial DNS providers which certbot has plugins for, there is also a DNS plugin supporting RFC 2136 updates. This means it can be integrated with a large number of self-hosted DNS servers: BIND, PowerDNS and Microsoft DNS are just some of the servers with RFC 2136 DNS update support.
HTTPS/Letsencrypt would be a good thing if it was optional. But now every 2-bit static website is forced to implement HTTPS or face complaints from users whose browsers have locked them out. What a total fsck-up!
AppViewX and Venafi are the two main competitors in this space of certificate lifecycle automation - many Fortune 1000s are customers of one or the other to prevent exactly this type of problem - Venafi is the larger incumbent but AppViewX has the newer platform and is replacing Venafi in some key accounts
Challenge is discovering all the certs that exist across the org (10k+ often) and then having a fully automated renewal process requires a pretty advanced / complex platform - don’t know anyone who has built this tech successfully in house - curious if anyone has though in the HN community?
I am going to seize this opportunity and rant out my angst against Microsoft’s worst product to date.
Has anyone else felt that Teams is a heavy app that takes a long time to come alive?
Even during calls, the quality is so horrible that I don’t even want to describe the pain I go through. There’s strong distortion, and voices are never heard clearly.
I use Teams every day (mainly at my desktop, but sometimes via the Android app if I'm outside) because I work from home and I have found it to work perfectly fine. I text chat with teammates or others at the company and have regular meetings (voice/ screensharing) with no issues.
Whenever Teams comes up on HN, there are people complaining about how much infinitely better some cooler product is. I don't doubt that you see a huge difference, but I feel like I'm listening to a wine expert explain how some $1000 bottle of wine is better than a $20 bottle, when to me they taste the same.
No, you're not alone. I don't hold out a great deal of hope for the future, either, since the roadmap looks like it's being handled on UserVoice and the top-voted features are mostly "duplicate existing Office functionality instead of fixing basic features (like search) that are actually broken."
Worked for Microsoft for 4 years. It’s a classic big company problem. The execs and the chain downward want their fat bonuses. You get fat bonuses by shipping features.
There’s a famous saying “no one gets promoted for fixing bugs”. I imagine it’s why Google has 10 messaging products that are half assed too.
It’s really hard to say No and focus on making the best essentials. It’s something that even Apple is having trouble with.
I think this is where Startups sometimes shine and why Slack is here to stay if they don’t also fall in the same trap.
I find it so bad compared to Slack. We used Slack internally across all of IT for a couple of years (the company was using Skype for Business and whatever it was called before that). It was decided that we would go with Teams because it's cheaper/free (we've been on Office 365 and the Microsoft stack since the beginning), and we were told to kill Slack. We didn't, and still use it for my team, and the difference is insane. On macOS our conference room equipment crashes Teams (Slack and others have no issues with it); notifications are not native, and sometimes they show up, sometimes they don't. You unplug your headphones, Teams rings, but no one is calling. It crashes on both Windows and Mac for no reason from time to time. Today we had a meeting and half the people could not join because of this issue.
It seems like it's a clone made just to get into this space and everything looks like a generic attempt to throw money into an app to take on Slack.
You are not alone but MS has market share and easy reach with 365. Slack (as an example) is a much better product but MS can afford to be lazy. Hopefully when Slack releases SIP support it will be forced to improve.
If you're a decent sized organization, you can pay $12.50 per user per month to Slack for a messaging service, or you can pay $20 per user per month to Microsoft for Outlook, PowerPoint, Word, Excel, Access, Publisher, Exchange, Sharepoint, and a messaging service. If you just want messaging then you can pay $8 per user per month, and they'll throw in Exchange, Sharepoint, and a terabyte of cloud storage per user on top of it.
I don't think it's possible for Slack to be better enough to justify that kind of a price difference.
That's probably true for cheap labor, but ~50% of developers I know will try to switch to other teams when they are forced to use Teams, because it's that annoying. When you pay somebody thousands of dollars each month, paying an extra $12.50 to keep them from looking for a different job isn't really something you should need to think about a lot.
On the other hand, a knowledge worker like a mid-level software engineer can easily cost a business $100 an hour in salary and costs. If Slack can save each employee 8 minutes a month, then it's a net win.
I don't find Teams to be perfect, but I find Slack to be overpriced at $15/user. Happy to use Teams as part of my $15/user license that comes with email, Excel, PowerAutomate, etc.
People who don’t work with many people love it. For all of its faults, it’s not as awful as Skype for Business.
I interface with too many people; it has brought the dumpster fire of SharePoint to a chat client. I’m now enrolled in 96 teams, 94 of which have no activity.
I think there is a universal law of chat client entropy. All chat clients only get worse over time. The category hit peak UX in 1999 with AIM/Yahoo, and gets worse every day. Eventually, Teams will merge into exchange, and your chats will be specially formatted emails, stored in Sharepoint.
Calls work very well I think. The chat/group chat is a bit clunky but still miles better than the product it replaces for the majority of its users (Skype and Skype for business/lync).
My main worry was about voice and video quality as I’m full time remote, but it’s more or less perfect I think. Haven’t had a disconnect or poor audio/video.
The screen sharing does disconnect some times which is annoying to say the least.
Works great for me... It's also nice that virtually every external person I've worked with has Teams somehow. Even ex-colleagues moving on to other companies reconnect with me on Teams right away, haha, because the new company is using it too.
I just don't like the fact that I can't control my notifications and chat history well, probably because it's enterprise. There's also a creepy 'get notified when a colleague is online' feature that I can't turn off in my privacy settings. Otherwise quite nice.
> I would think at this point they would be their own major certificate authority and maybe domain registrar.
From experience this probably wouldn't fix things.
What often happens is that somebody creates a system that uses a certificate, doesn’t automate renewal, and then the person responsible for renewing it changes teams or leaves the company. Email reminders only go so far—they not only need to go into the right inbox, but the person watching that inbox has to care.
My last domain expiration outage happened like that.
If it's in production, just buy a 10 year cert. This virtually guarantees an outage after 10 years but virtually guarantees it won't be your fault when it happens...
New certificates in the Web PKI ("SSL certificates") have a maximum lifespan of 825 days. This is enforced (if a CA were to issue a certificate with a longer lifespan, Chrome for example would just treat that certificate as invalid). The commercial CAs mostly offer one year or two years, using the 825-day limit to offer an overlap at renewal: e.g. you buy two years in June 2018, and in April 2020 you can pay for a two-year renewal whose new certificate expires in June 2022, not April 2022.
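A quick sanity check on that overlap (mid-month dates assumed, since the comment only gives months):

```python
import datetime

# Bought two years in June 2018; renew early in April 2020 and the new cert
# runs to the original June anniversary in 2022. Its lifespan is measured
# from the renewal date, so it must fit under the 825-day cap:
renewal = datetime.date(2020, 4, 15)
expiry = datetime.date(2022, 6, 15)
lifespan = (expiry - renewal).days
print(lifespan)  # 791 days, comfortably under 825
```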
If you're using certificates in your own PKI (as it's likely Microsoft actually was in this particular incident) then there's no need to buy them and it's up to you what your appetite for risk is on when they expire.
My approach is the opposite. For production certs, I buy them with the minimum length (usually 1 year). This exposes problems in our automation sooner, and keeps the process more fresh in our minds.
I really like that Let’s Encrypt certificates last only 90 days.
That said, the CYA aspect of “10 years, somebody else’s problem” is really appealing. If only I believed that it wouldn’t be my ass on the line 10 years from now!
This has happened to me enough times to be embarrassing. It seems to happen to other people who you'd think have some sensible way to avoid it.
Is there a reminder service out there that specializes in your long-term expiring things? I'm not sure what would be different about it than a regular calendar, but it seems like many of us need something that makes this easier.
At larger companies a lot of the issue isn't literally generating reminders, it is making sure they're sent to the correct people/departments and are actioned by anyone.
For example you sometimes have reminders sent to ex-employees, or sent to a mailing list and everyone assuming everyone else is going to action it. Or the reminder gets ping-ponged between multiple managers via email with nobody either able or willing to deal with it.
None of these are tech issues, and as a consequence they don't have technology solutions. So whenever I see an embarrassing expired cert, I don't assume technical malfunction; I assume political malfunction.
> It would be nice if they prompted you to 'add to calendar' when you are creating certs.
That would actually be a really _awesome_ integration of ACME client software with CalDAV [0] and other ticketing software APIs like Jira: set up calendar events and/or tickets with increasing priority to verify that the certificate is updated.
If you use Nagios for monitoring, you can use the check_http plugin for this, e.g. check_http -H <hostname> -C 30,14. That will return a warning status if the cert expires within 30 days and return a critical status if the cert expires within 14 days.
Yeah, Cert Spotter does this (https://sslmate.com/certspotter/). It's a huge help both to be able to monitor cert issuance for a domain and all of its subs, and to be reminded when a cert is expiring.
It's a pretty simple little python script, though obviously you then run the risk that it's no longer running in x years time when you actually need it.
They said they're doing a post-mortem, but how much can a public post mortem really say about "forgot to renew a thing"?
I appreciate the sentiment but I think it's fine for them to just say "we examined our processes, found out what led to the issue, and have modified procedures". I worry a detailed post mortem would just throw specific folks under the bus (most likely some low level employee who isn't actually the root cause)
It is an interesting side effect of the short tenure of software developers these days that any process that requires action on a >2 year interval is likely to fail; if the cycle is 5 years or more, it will always fail.
The turnover ensures that nobody in the department was there when the process was started or last interacted with, and so it falls off the collective organizational radar, so to speak.
Not really. The short (90 day) Let's Encrypt expiry is intended to promote automation because it's annoying to do so many renewals by hand, and is also a reflection of the relatively short lifetimes of most Internet names.
Historically it was common to issue 3 year certs, and five year certs weren't rare (until 2015). But whilst it's reasonable to expect microsoft.com or bbc.co.uk to still belong to the same outfit in five years, it's hard to be as sure about, say, jsnes.org (currently a Javascript NES emulator) or catandgirl.com (a web comic by Dorothy Gambrell), which might well entertain offers from somebody else who wanted those names.
The underlying domain name is typically on an annual renewal cycle with perhaps just 14 days grace if you stop paying, and individual FQDNs might have even shorter turnaround. With a five year certificate this means you could buy a certificate the day before your renewal payment is due, and then still have an apparently good, working certificate for that name five years later when it's owned by somebody else entirely who has no idea you once owned that name. Not great. Let's Encrypt's renewal cycle closes this gap considerably. The BRs were also amended, the limit is now 825 days instead of 39 months or (originally) 60 months.
I'm actually curious: Is there a market for a SaaS which simply keeps track of certificates and when they expire? (Perhaps even with an auto-Deploy new certificate mechanism?)
Perhaps, but I call it doing my job. I run an SSL cert check in Icinga for each system as needed. It is quite trivial to roll your own script, or find one, that can be run from cron. It would probably be more work maintaining an account with a SaaS.
It's a very, very simple alert to add to Prometheus to monitor that, and alert if the cert is within so many days of expiring. You need the 'blackbox exporter' and a simple rule such as:
  - alert: TlsCertExpiringSoon
    expr: (probe_ssl_earliest_cert_expiry - time()) < (86400 * 14)
    for: 10m
    labels:
      product: Name_of_Product
      severity: page
    annotations:
      description: the TLS cert for the URL {{ $labels.instance }} expires in less than 14 days!
      summary: TLS cert for {{ $labels.instance }} expiring
I run a SaaS where certificate expiration monitoring is one of the features. But that's more of a nice-to-have feature rather than a primary thing that brings in customers.
I think most monitoring services will let you know if your certificate is about to expire. For example, I use https://checklyhq.com and it lets you configure how far in advance it will alert you.
They tweeted that it was an authentication certificate. I.e., probably not a regular TLS domain certificate or similar (still could be TLS client cert though), but probably more like a certificate/key that one service used to log into another. A lot of microservice/container/kubernetes setups use them for all kinds of stuff, which is really a big step forward over password logins.
Not like it matters, but it kinda does, because those tend to be private and internally generated, and not necessarily signed by an external certificate authority.
I have this problem, and it concerns me, because there's no external polling check that's going to spot that.
You have to rely on either the code itself checking each time it uses the certificate and alerting, or (taken from elsewhere in this thread) testing it during your build and hoping that someone is still building the code by the time the cert comes up for renewal.
I'm going to shamelessly plug my project, Certera, here. It handles monitoring/tracking, cert issuance and renewals and helps larger organizations manage their certificate needs more consistently.
It's a good example of the difficulty of getting TLS perfectly right.
In theory this setup is fine; the default behavior of all browsers when you type "www.certera.io" is to interpret it as a request for http://www.certera.io.
But if the client has anything in place that automatically upgrades http to https before submitting the request, you're going to need a valid cert for the www subdomain in place or you'll throw a cert error before reaching the redirect.
Even if your site omits the www subdomain in production (as certera does), a lot of users will just type it in anyway. So you'd better be ready to handle that request via https.
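One way to handle that (a minimal nginx sketch; the hostnames and paths are placeholders, and the certificate must list both the bare and www names as SANs):

```nginx
# Serve the www host over TLS too, so a client that upgrades http -> https
# before following redirects still gets a valid cert, then bounce it to the
# bare domain.
server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;  # must cover www
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    return 301 https://example.com$request_uri;
}
```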
Looks cool. I'm going to try it out for my home lab setup. I like the docs layout. What is that?
Any thoughts on the license? How is it working? Why did you pick that? I like that type of license, but it's not very common. Drone does it too, but I haven't seen many others. You don't have to answer if you don't want to, but it's nice to see people deviating from standard licenses like GPL and MIT since I feel like those make it too easy for large businesses to take advantage of small projects.
Your licensing and attribution pages look like a lot of thought went into it, so you probably have some decent insight.
GPL is great for certain types of products, however, infrastructure types of projects aren't one of them.
I haven't been marketing at all, and I just recently finished the first stable release, so the jury is still out on whether this is all a good idea or not!
The docs are based on ReadTheDocs, but settled on a single file layout instead of having multiple pages.
Amazing. A company like Microsoft could afford to hire an entire department to do nothing but make sure certificates don't expire, but this still happens.
Jokes aside, I don't understand how this problem hasn't been solved in the general case.
I worked on a code signing system years ago. It was amazing to me how much of an air of mystery there was around keygen. Generating keys is just a little bit of memorization or note-taking. Protecting them once you have them was the hard part (which is why I won't make you some keys and send them to you, buddy).
Initially, I made light of a requirement that we send out automated nag emails starting a couple months before the keys expired. With a bit more time and observation it became pretty clear this was a valid concern. I eventually came around, and while I didn't implement the feature, I did create the integration tests.
Making certs is stupid simple. Maintaining them takes some support and we don't always have it at the ready. Remembering to do something once a year or two years isn't something we're particularly built for. We have a habit of forgetting these duties when we hand projects over to others.
Microsoft are not even close to the only company this happens to. The problem is a human one - as people change positions within a company, or leave a company, responsibility for this kind of thing can fall through the cracks.
Happens all the time, even if ACME is employed, and it's unlikely to ever stop happening.
All I'm saying is, that downtime probably cost them a lot of money, and resulted from something that's extremely preventable. Even if it is a human problem, one would think they'd allocate the resources necessary to solve it.
All certificates in the Web PKI are obliged to use SAN (Subject Alternative Name). Although it is often mistaken for some sort of aliasing feature, SAN is the Internet's agreed alternative way to name things. X.509 is intended as part of the X.500 directory system, a global directory system which obviously was never actually built, and so its built-in name scheme doesn't resemble anything that actually exists.
When Netscape invented SSL back in the 1990s they hijacked X.509's Common Name field to write DNS names in, but this field is just defined as arbitrary human readable text. "news.ycombinator.com" is text, but so is "News·YCombinator,coM " and only one of those is a DNS name. So, when the IETF standardised PKIX it designed a dedicated schema for the various types of Internet names to put into certificates: Subject Alternative Names (SANs).
Unlike the Common Name a SAN dnsName literally can't be anything but a DNS name using A-labels ("punycode"). So the opportunity for confusion is removed. Since it doesn't need to address universal human text it doesn't need to support weird encodings or anything else.
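Python's standard library can show what that A-label form looks like (the hostname here is just an example):

```python
# A SAN dnsName carries internationalized names as A-labels ("punycode"),
# never as raw Unicode text, so there is exactly one valid spelling.
name = "münchen.example"
a_label = name.encode("idna").decode("ascii")
print(a_label)  # xn--mnchen-3ya.example
```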
When PKIX was standardised the existing abuse of Common Name was grandfathered in. PKIX says all certs should list one or more SANs but they may continue to have a DNS name in the Common Name too. The Baseline Requirements, years later, explicitly tell CAs to only use a Common Name which matches one of the SANs they've also baked into the certificate, if there are no SANs they're doing it wrong. Alas, as so often, commercial priorities beat security and so even last decade it was common to find people trying to get away without SANs. However the roll out of Certificate Transparency allowed us to have a clear view, without waiting for incident reports from affected users, of non-compliant certificates still being issued, so we could address it as it happened. A few years ago Firefox and Chrome (and Safari I think?) were able to remove their code for trying to process the human readable Common Name as if it might be important, and so if anybody were to issue such a certificate with no SANs today it wouldn't "work" in popular browsers.
Just listening to the number of complex shenanigans experienced sysadmins have to employ to keep up with the demands of managing HTTPS makes me wonder how on earth your average non-technical DIY static site developer has a chance in hell of keeping his site from failing modern browsers' requirements. Universal HTTPS is a bad joke.
As a sysadmin myself, the number of non-trivial ways in which certbot can fail never ceases to amaze me. Running Apache? Watch your certbot renew fail to bind to port 80 because your server is running. Now your renewal cron task needs to stop and restart the server, which the standard cron task does not do. What made the web great was that it didn't keep out the DIY developer. Now it does exactly that... via universal compulsory HTTPS on trivial, static sites.
To be honest, even before Let's Encrypt I could buy a PositiveSSL from Namecheap for about $10 a year and set it up with Apache or Nginx in about 20-30 minutes. Had to renew every X years, which was annoying, but c'mon. There were step-by-step tutorials.
Now, with Let's Encrypt, there's no excuse to not SSL.
But on static sites ... why? I'm not arguing against HTTPS/SSL where it makes sense, just the way it's been imposed indiscriminately on the whole damned www.
Because a hostile MITM agent can deliver exploit code by MITMing your static site.
Even if your site is some static video game guide for Club Penguin, an attacker can inject some 0day, some privacy-invading analytics code, or even a dumb alert("Your windows is out of date").
Our HTTPS overlords have much to answer for. How many static sites, for example, really need HTTPS and the non-trivial maintenance involved in the average Apache/Letsencrypt/certbot setup? Talk about a sledgehammer to crack a nut. And renewals every 3 months?! Don't get me started. Sure, the likes of Microsoft should be able to do better, but isn't there a message here? Beyond secure sites such as finance, government, logins and ecommerce, the whole HTTPS certificate nonsense is a giant burden/cost with no benefit.
> How many static sites, for example, really need HTTPS
All of them. Script jacking, ad insertion, redirection, tracking insertion, etc. are all done at scale by everything from national ISPs to coffee shop routers.
HTTPS provides authenticity of all transmitted data; this is more important than confidentiality, because without authenticity you can’t tell whether you are talking in secret with an attacker.
When getting a site/API on its feet, enabling HTTPS and the cert is usually the last thing to get done, an afterthought. Certs are easy to forget about, but when they expire they shut.down.everything.
Teams is a great improvement over Skype for Business, which was a great improvement over Lync, but it's still garbage. Interesting how the same company also made the awesome VSCode.
Is it? I feel like every iteration is worse than the last. For Christ's sake, Teams can't even scroll a few messages up without having to load more, and then it immediately forgets the latest messages. It's an IM program that has trouble remembering more than 20 messages at once?? Also, if you send messages quickly they appear in reverse order; again, it's an IM program that can't even do the "messaging" part of IM right.
Microsoft has ~ 150,000 employees. I'm not sure why it's interesting that some of those employees put out a subjectively different quality product than others.
Most corporations are systems which create products, and those systems have QC, feedback loops, etc. that tend to provide homogeneity of quality.
That's how brands are supposed to work, "I've had other things from this company, they were good, I'll get more".
Companies are usually designed to prevent single shitty employees from ruining things (and usually to prevent goods being "too good" as well!).
I've heard that MS is structured as highly separated departments, like separate single-product companies. Which goes someway to explain things - if they're bad at sharing best practice and senior management can't/won't control quality.
Microsoft had a great system until the mid 2010s. Testers did an amazing job preventing crap from making it to customers. Developers simply don't make good testers. Unfortunately they decided to follow everyone else and destroy this, along with replacing offices with open space.
Skype gets worse and worse, I don't understand how. You're not allowed to choose not to autostart it. My only explanation is they somehow get more out of making it crap (it's a user tracking app primarily now AFAICT) than they would out of it being good.
https://gfycat.com/fortunateyawningcopperhead