
Fortunately not a whole lot of data, and with so little of it there surely wasn't anything important, confidential or embarrassing in there. Looking forward to Microsoft's itemised list of what was taken, as well as their GDPR-related filing.


Pentests where people actually get out of bed to do stuff (read code, read API docs, etc.) and then try to really hack your system are rare. Pentests where people go through the motions and send you a report with a few unimportant bits highlighted, while patting you on the back for your exemplary security so you can check the box on whatever audit you're going through, are common.


If you're a large company that's actually serious about security, you'll have a Red Team that is intimately familiar with your tech stacks, procedures, business model, etc. This team will be far better at emulating motivated attackers (as well as providing bespoke mitigation advice, vetting and testing solutions, etc.).

Unfortunately, compliance/customer requirements often stipulate having penetration tests performed by third parties. So for business reasons, these same companies will also hire low-quality pen-tests from "check-box pen-test" firms.

So when you see that $10K "complete pen-test" being advertised as used by [INSERT BIG SERIOUS NAME HERE], there's a good chance this is why.


Ugh, in the work I do I run into so much of this kind of stuff.

Customer: "We had a pentest/security scan/whatever find this issue in your software"

Me: "And they realized that mitigations are in place as per the CVE that keep that issue from being an exploitable issue, right"

Customer: "Uhhhh"

Testing group: "Use smaller words please, we only click some buttons and this is the report that gets generated"


Let me tell you about the laptop connected to our network with a cellular antenna we found in a locked filing cabinet after getting a much-delayed forced-door alert. This, after some social engineering attempts that displayed unnerving familiarity with employees and a lot of virtual doorknob-rattling.

They may be rare, but "real" pentests are still a thing.


Ouch. How did that end up?


Yep, most pentests go through the OWASP list and call it done.


The problem is that that's what most companies want. They don't want to spend the money, nor get feedback beyond "best-case standards". It's a calculated risk.


Honestly, the OWASP Top Ten is generic enough that most vulnerabilities fit in it: "injection", "security misconfiguration", "insecure design".

The problem is

1. knowing the gazillion web vulnerabilities and technologies,

2. being good enough to test them,

3. kicking yourself into going through the laborious process of understanding and testing every key feature of the target.


It's great if it's done exhaustively


From my understanding as a non-security expert:

Pentesting comes across more as checking that the common attack vectors don't exist.

Getting out of bed to do the so-called “real stuff” is typically called a bug bounty program or security research.

Both exist and I don’t see why most companies couldn’t start a bug bounty program if they really cared a lot about the “real stuff”


I work as a pentester (freelance nowadays).

Getting out of bed and "real stuff" is supposed to be part of a pentest.

The problem is more the sheer amount of stuff you are supposed to know to be a pentester. Most pentesters come into the field knowing a bit of XSS, a few things about PHP, and SQL injections.

Then you start to work, and the clients need you to test things like:

- compromise a full Windows network and take control of the Active Directory server because of a misconfiguration of Active Directory Certificate Services, while dealing with Windows Defender

- test a web application that uses websockets, React, Node.js, and GraphQL

- test a WinDev application with a Java backend on an AIX server

- check the security of an architecture with multiple services that use Single Sign-On and Kubernetes

- exploit multiple memory corruption issues, ranging from buffer overflows to heap and kernel exploitation

- evaluate the security of an IoT device, with a firmware OTA update and secure boot.

- be familiar with cloud tokens, and compliance with European data protection law.

- Mobile Security, with iOS and Android

- Network: RADIUS, ARP cache poisoning, writing a Scapy layer for a custom protocol (see the sketch below), etc.

- Cryptography, you might need it

Most of this is actual stuff I had to work on at some point.
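
To give a flavour of that Scapy item, here's a minimal sketch of a custom layer; the protocol, its fields, and the port are all made up for illustration:

    # Minimal custom Scapy layer -- protocol, fields and port are invented
    from scapy.packet import Packet, bind_layers
    from scapy.fields import ByteField, ShortField, StrLenField
    from scapy.layers.inet import UDP

    class MyProto(Packet):
        name = "MyProto"
        fields_desc = [
            ByteField("version", 1),
            ShortField("length", 0),
            StrLenField("data", b"", length_from=lambda pkt: pkt.length),
        ]

    # Tell Scapy that UDP port 9999 carries this protocol, so captured
    # packets get dissected into named fields instead of raw bytes.
    bind_layers(UDP, MyProto, dport=9999)

    pkt = MyProto(version=2, length=5, data=b"hello")
    pkt.show()

Once the layer is bound, sniffing, fuzzing and replaying a proprietary protocol become tractable, which is the whole point of writing one.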

Even if you just do web, you should be able to detect and exploit all those vulnerabilities: https://portswigger.net/web-security/all-labs

Nobody knows everything. Being a pentester is a journey.

So in the end, most pentesters fall short on a lot of this. Even with an OSCP certification, you don't know most of what you should know. I heard that at some companies, people don't even try and just give you the results of a Nessus scan. But even if you are competent, sooner or later you will run into something that you don't understand. And you have two weeks max to get familiar with it and test it. You can't test something that you don't understand.

The scanner always gives you a few things that are wrong (looking at you, TLS ciphers), even if you suck or the system is really secure, so you can always put a few things into your report. As a junior pentester, my biggest fear was always handing in an empty report. What would people think of you if you worked for a week and didn't find anything?


>As a junior pentester, my biggest fear was always handing in an empty report.

I'm trying to remember the rule where you leave something intentionally misconfigured/wrong for the compliance people to find, something you can fix so they don't look deeper into the system. A fun one with web servers is to get them to report that they're some ancient version running on a different operating system, like your IIS server claiming it's Apache 2.2, or vice versa.
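
As a toy illustration of the banner trick (plain stdlib Python here, not how you'd configure a real IIS or Apache box), it's just a matter of controlling the Server response header:

    # Toy sketch: a server that claims to be an ancient Apache
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class SpoofedHandler(BaseHTTPRequestHandler):
        server_version = "Apache/2.2.3 (Unix)"  # the lie in the banner
        sys_version = ""  # suppress the usual "Python/x.y" suffix

        def do_GET(self):
            self.send_response(200)  # also emits the spoofed Server header
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"nothing to see here\n")

    HTTPServer(("", 8080), SpoofedHandler).serve_forever()

Scanners that fingerprint purely on the banner will happily report Apache 2.2 findings against it.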

But at least from your description it sounds like you're actually attempting to pentest. So many of these pentesting firms are the click-a-button, run-a-script, send-a-report, move-on-to-the-five-other-tickets-you-have-that-day type of firm.


Thanks for your honest reply. This part was my favourite:

    Nobody knows everything. Being a pentester is a journey.
I recommend that you add some contact details to your HN bio page. You might get some good leads after these posts.


I think the concern is more about the theatre of most modern pen-testing rather than expecting deep bug-bounty work. I'm not a security expert either, but I've had to refute "security expert" consultations from pen-test companies, and the reports are absolutely asinine half the time and filled with so many false positives due to very weak signature matching that they're more or less useless and give a false sense of security.

For example, I'm dealing with a "legal threat" situation with the product I work on because a client got hit by ransomware and they blame our product: "we just got a security assessment saying everything was fine, and your product is the only other thing on the servers". I checked the report; basically it just ran some extremely basic port checks and Windows config checks that haven't been relevant for years and didn't even apply to the Windows versions they had. In the end, the actual attack came from someone in their company opening a malicious email, plus a .txt file full of passwords.

I don't doubt there are proper security firms out there, but I rarely encounter them.


That’s interesting. I thought maybe it’s a resource constraint issue, where companies prioritise investment in other areas and do the minimum to “get certified” but it sounds like finding a good provider can be extremely difficult.


Not really.

The real stuff should always be a pentest: a penetration test, where one is actively trying to exploit vulnerabilities, so the person who orders it gets a report with !!exploitable vulnerabilities!!.

Checking all the common attack vectors is vulnerability scanning, which is mostly running a scanner and weeding out false positives without trying to exploit anything. Unfortunately most companies/people call that a penetration test, while it cannot be, because there is no attempt at penetration. While automated scanning tools might do some magic to confirm a vulnerability, it still is not a penetration test.

In the end, a bug bounty program is different in one way: you never know if any security researcher will even be interested in testing your system. So in reality you want to order a penetration test. There is usually also a difference in scope: a bug bounty program is limited to what is available publicly. Where a company's systems don't allow non-business users to create an account, a security researcher will never have an authenticated account to do the real work. A bounty program has other limitations too, because a pentesting company gets a contract and can get much more access, like doing a white-box test where they know the code and can work through it to prove there is an exploitable issue.


As in every industry there are cheapskates, and especially in pentesting it is often hard for the customer to tell the good ones from the bad ones. Nevertheless, I think that you have never worked with a credible pentesting vendor. I am doing these tests for a living and would be ashamed to deliver anything coming near your description :-)


Bug bounty programs are a nightmare to run. For every real bug reported you’ll get thousands of nikto pdfs with CRITICAL in big red scare letters all over them. Then you’ll get dragged on twitter constantly for not being serious about security. Narrowing the field to vetted experts will similarly get you roasted for either having something to hide or not caring about inclusion. And god help you if you have to explain that you already knew about a bug reported by anyone with more than 30 followers…

There are as many taxonomies of security services as there are companies selling them. You have to be very specific about what you want and then read the contract carefully.


The checkbox form exists because crooked vendors are catering to organizations who are intentionally lazy about their cybersecurity.

Real penetration tests provide valuable insight that a bug bounty program won't.


Pentest means penetration testing, which means one needs to put on the attacker's hat and try to enter your network or the app infrastructure and get as much data as they can, be it institutional or customer data. It can be through technical means as well as social engineering practices. And then report back.

This is in no way related to a bug bounty program.


Counterpoint: most of the top-rated bug bounty hunters have a background in penetration testing.

I think it's more accurate to say Bug Bounty only covers a small subset of penetration testing (mainly in that escalation and internal pivoting are against the BB policy of most companies).


> From my understanding as a non-security expert:

That certainly helps.


What a shame, HackerNews typically has more insightful comments than garbage like this.

Edit: thanks to everyone who wrote some insightful responses, and there are indeed many. Faith in HackerNews restored!


People are going to chit-chat about things only tangentially related to their areas of expertise; it is good when we’re honest about our limitations.

If nothing else, an obviously wrong take is a nice setup for a correction.


What I always want to know when people talk about this is: what reputable companies can I actually pay to do a real pentest (without it costing hundreds of thousands of dollars)?


The problem is security is a "Market for lemons" https://en.wikipedia.org/wiki/The_Market_for_Lemons. Just like when trying to buy a used car, you need someone who is basically an expert in selling used cars.

In order to purchase a reputable pentest, you basically have to have a security team that is mature enough to have just done it themselves.

I can throw out some names for some reputable firms, but you are still going to need to do some leg work vetting the people they will staff your project with, and who knows if those firms will be any good next year or the year after.

Here are a couple of generic tips from an old pentester:

* Do not try and schedule your pentest in Q4, everyone is too busy. Go for late Q1 or Q2. Also say you are willing to wait for the best fit testers to be available.

* Ask to review resumes of the testing team. They should have some experience with your tech and at least one of them needs to have at least 2 years experience pen-testing.

* Make sure your testing environment is set up, as production-like as possible, and has data in it already. Test the external access. Test all the credentials, once after you generate them, and again the night before the test starts. The most common way to lose your good pentest team and get juniors swapped in who have no idea what they are doing is to delay the project by not being ready on day 1.


thank you!


I think hiring a security specialist is the way to go.


I've invented plenty of stuff that 100's of millions of people use every day. Whether I get credit or not doesn't really matter to me. It paid for more than half my life (and those of a lot of people around me) and it made the world a little bit better (and sometimes a little bit worse).

Inventions are a dime a dozen, if you care that much about the credit or the money then you should go try to patent your invention, and if you don't then that's fine too but there is no such thing as CC-BY for inventions (though, technically you could patent something and then put the patent in the public domain 'Manfred' style).


If you are aware of SAP breaking the GDPR and it's being swept under the carpet or if enforcement is lackluster given the scope of the problem then please supply some evidence. That SAP is large doesn't matter, what matters is if they are breaking the law.


Root cause analysis is complex for a reason. Separating out contributing causes from root causes is difficult and sometimes even impossible.

Would this dam have failed eventually? Probably yes, on a long enough time-scale. Would it have failed now if not for that storm? Probably no.

Note that even in the developed West there are plenty of pieces of infrastructure at risk because of climate-driven extremes.


Would it have failed if it was properly maintained?

I bet somebody knows this. And that is the question that imposes culpability, not the ones you asked.

But if you are not going for culpability and just assume everybody is honestly trying to improve, the question to ask is "was it properly maintained?"


Infrastructure needs maintenance; culpability only makes sense if there are enough funds for such maintenance and if there is a level of organization compatible with the kind and scope of the work. I wouldn't make any assumptions about that in the case of Libya.


Notably, a large number of bridges on the German autobahn were recently found to be dangerously unmaintained, to the point where the only recommended recourse was to tear them down and rebuild them. Luckily Germany doesn't normally see significant earthquakes and the storms apparently haven't been severe enough to destroy any of them.

In my part of Germany we had a severe rain storm this week that caused local flooding in part because the sewage treatment plant could not process the intake quickly enough, resulting in the sewers getting backlogged while rain was still pouring down.

Even without outright neglect a lot of infrastructure simply can't handle situations significantly outside the standard range of operation.


> Luckily Germany doesn't normally see significant earthquakes and the storms apparently haven't been severe enough to destroy any of them.

It isn't luck; if Germany had earthquakes they would have built the bridges differently. You don't build things to handle situations that don't happen.


Well, yes, but the lucky part is that unlike severe weather events, the frequency and severity of earthquakes isn't likely to significantly change in Germany.

Our forests died because the trees we planted can't cope with the kind of heat waves we're now seeing. Our towns get flooded because our infrastructure wasn't built to handle this much downpour. But luckily earthquakes aren't something we have to worry about as much, climate change or not.


> Note that even in the developed West there are plenty of pieces of infrastructure at risk because of climate-driven extremes.

Wasn't there a failure in Scandinavia recently?


"Dam failing" and "dam failing unexpectedly" are not comparable situations.

Floods happen all the time in the West; people dying from floods, however, is rare, since when they happen we know why, how and when, so we can evacuate people.

Libya failed to warn the people here; that is why so many died. That is the most significant failure, and it should have been the easiest part to get right; it shows severe problems with their management of the dam. They should have known how much stress the dam could handle, known that this storm was likely to make it fail, and evacuated the people when failure was close.

In the West we would just say "the dam will likely fail due to bad maintenance"; the destruction would be costly but lives won't be lost, and in many cases we can preemptively destroy parts of the dam to avoid destroying downstream infrastructure and that way come out of it almost scot-free.


The price for that will be felt decades from now. It also tells you something about how the anti-EU lobby was financed.


But the people who supported Brexit will be dead or close to it when that price will be paid, so they don't care.


It bloody well should though, 32G is massive.


Sort of - but completely normal for a developer's desktop circa 2013, even without paying the Xeon tax.

Of course, a high school at that time might have purchased poverty-spec chromebooks with 4GB of RAM for understandable reasons of price.

So when samtheprogram talks about "devices from early to mid 2010's" that could mean a great many things.


If you aggressively (and I mean really aggressively) block any and all ads you'll find that you can use that 15+ year old machine just fine on the web. The bloat is mostly in the marketing and advertising part of the web, not in the content part.


As of a couple of years ago, I was using a mid-2000s iMac with Linux installed for a number of projects. It was fine for basic scripting, data analysis, and shell-based Web access.

A heavily adblocked Firefox struggled to handle one or two tabs, and became utterly unresponsive beyond a half dozen or so.

(My typical sessions run ... well into the 100s of tabs. Yes, I'm aware I have a problem, but browser state management otherwise entirely sucks.)

That machine's replacement is now also beginning to struggle under what I've considered typical and generally reasonable Web loads.

Until browsers start heavily penalising heavy sites, this will continue to be a problem.

And on a tablet, I find that my web browser uses battery 10x faster than my book-reading software. This for documents that tend to run 10s to 100s of times longer than a typical webpage's actual text, though not their overall memory footprint.


Interesting. I routinely have hundreds of tabs open on a 10+ year old ThinkPad with 16G in it (they were only sold with 8 at the time, but replacing the two 4G modules with 8G modules worked; going beyond 16G does not seem to).

Firefox has an about:performance gizmo that can tell you which tabs are misbehaving; this has already led to me blacklisting some sites completely and closing others when not in use. Image carousels especially can be very resource-hungry.


Modern browsers don't really keep hundreds of tabs open; they just keep a placeholder and return the memory back to the system as resources get low. I'm not sure why people think a browser can keep hundreds of webpages open when modern webpages (tabs) often use 100MB-1GB of memory.


Browser memory management has been improving, but when that system was last running, let's just say that things were bad.


SSD or HDD?

16 GB would be about 8-16x what the system I'm describing had.

How are you blacklisting sites? Pihole/DNS, or something within Firefox itself?


1 or 2 GB of RAM? Yeah, that would've been the problem. Old CPUs can handle more than people think but there's absolutely no getting away from modern memory demands.


Yeah. 8GB of ram is pretty much the minimum usable amount these days. If you can get 8GB into an older machine (with an SSD), then it should work fine.

But 4GB? Pass.


I have a Lenovo Chromebook with 4GB and GalliumOS and it is still very capable with the latest Firefox.


With the right distro, 4GB with just a HDD is quite fine for Chromebook style usage.


This is why AMP HTML was forced by Google - to make websites faster and more lightweight.

One solution is to use uBlock Origin, disable JS on most websites and whitelist it only for those which really need it.

Another is to use a textual web browser in the cloud, such as Carbonyl Terminal: https://github.com/niutech/carbonyl-terminal


Yeah, something that old is gonna be heavily memory-restricted; JavaScript chews through memory like nobody's business, then you start swapping and it's game over.


Chrome is also very resource-hungry. It's great; it lives on the principle that if a machine has 16GB of RAM, it'd be stupid not to use it. Some other browsers limit themselves to the minimum but lose performance.

This is best for most people, but not for those with older devices. In that case I usually find Firefox and a mandatory adblock a lot more enjoyable than chrome+ublock. Bonus points if you have a pihole somewhere on your network.


To aggressively speed up the internet, just block JavaScript. I use this extension, which makes it easy and convenient:

https://addons.mozilla.org/en-US/firefox/addon/disable-javas...

Keep JS off by default and just whitelist websites.


Never before in the history of mankind was a group so absolutely besotted with the idea of putting themselves out of a job.


That’s just one perspective… Another perspective is that LLMs enable programmers to skip a lot of the routine and boring aspects of coding - looking up stuff, essentially - so they can focus on the fun parts that engage creativity.


But it won't stop there. Why would it stop at some arbitrarily defined boundary? The savings associated with no longer having to pay programmers the amounts of money that they believe they are worth (high enough to result in collusion between employers) are just too tempting.


Some form of AI will eventually take over almost all existing jobs. Whether those jobs evolve or not somehow and new jobs replace them, we will see.

But it's definitely not just programmers. And it will take time.

Society needs to adjust. Stopping progress would not be a solution and is not possible.

However, hopefully we can pause before we create digital animals with hyperspeed reasoning and typical animal instincts like self-preservation. Researchers like LeCun are already moving on from things like LLMs and working on approaches that really imitate animal cognition (like humans) and will eventually blow all existing techniques out of the water.

The path that we are on seems to make humans obsolete within three generations or so.

So the long term concern is not jobs, but for humans to lose control of the planet in less than a century.

On the way there we might be able to manage a new golden age -- a crescendo for human civilization.


Continuing your aside…

Humans don’t become obsolete, we become bored. This tech will make us bored. When humans get too bored and need shit to stir up, we’ll start a war. Take the US and China: global prosperity is not enough, right? We need to stoke the flames of war over Taiwan.

In the next 300 years we’ll wipe out most of each other in some ridiculous war, and then rebuild.


I agree that WWIII is a concern but I don't think it will be brought about by boredom.

"Global prosperity" might be true in a very long-term historical sense, but it's misleading to apply it to the immediate situation.

Taiwan is not just a talking point. Control over Taiwan is critical for maintaining hegemony. When that is no longer assured, there will likely be a bloody battle before China is given the free rein that it desires.

WWIII is likely to fully break out within the next 3-30 years. We don't really have the facilities to imagine what 300 years from now will look like, but it will likely be posthuman.


I’ll go with the 30-year mark. Countries like Russia or China don’t get humbled by a loss (just as Germany wasn’t after WW1). Russia will negotiate some terms for Ukraine (or maintain perpetual war), but I believe it will become a military state that will funnel all money into the defense sector. The same with Iran, and the same with China.

Iran supplies Russia with drones. I can promise you Russia will help Iran enrich their uranium. They are both pariah states, what do they have to lose? Nuclear Iran, here enters Israel.

Everyone’s arming up, there’s a gun fight coming.


Okay, think about it this way. This thing helps generate tons and tons of code. The more code people (or this thing) writes, the more shit there is to debug. More and more code, each calling each other means more and more insane bugs.

We’re going to move from debugging some crap the last developer wrote to debugging an order of magnitude more code the last developer generated.

It’s going to be wonderful for job prospects really.


Until AI figures out debugging


The answer to AI stealing your job is to go ahead and start a company, solve a hard problem, sell the solution and leverage AI to do this.


The only thing that takes anyone's job is demand shortfalls. Productivity increases certainly don't do it. It's like saying getting a raise makes you poorer.


Actually, in my country, Portugal, if your salary is the minimum wage you are exempt from paying taxes, but if you get a raise of as little as €15 or so, you move one category up and start to pay taxes, and you end up receiving less money than you did before the raise.
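
To make the cliff concrete, here's a quick sketch; the threshold and rate are invented for illustration and are not the actual Portuguese brackets:

    # Illustrative numbers only -- NOT real Portuguese tax brackets
    THRESHOLD = 760.0  # hypothetical monthly minimum wage
    RATE = 0.10        # hypothetical rate once you cross the threshold

    def net_pay(gross):
        # below the threshold: exempt; above it: the whole salary is taxed
        return gross if gross <= THRESHOLD else gross * (1 - RATE)

    print(net_pay(760.0))  # 760.0 -- at minimum wage, no tax
    print(net_pay(775.0))  # 697.5 -- a 15 euro raise, lower take-home pay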


One coachman to the other: "Another perspective about this car thing, you can skip all the routine and boring trips - they are done with cars. You can focus on the nice trips that make you feel good".


This should be the only goal of mankind so we can smell the flowers instead of wasting our years in some cubicle. Some people will always want to work, but it shouldn't be the norm. What's the point really unless we're doing something we're passionate about? The economy?


Is automation not what every engineer strives for when possible? Especially software developers.

From my experience with github copilot and GPT4 - developers are NOT going anywhere anytime soon. You'll certainly be faster though.


The best interpretation of this is you mean eventually ML/AI will put programmers out of a job, and not Code LLama specifically.

However, it is hard to tell how that might pan out. Can such an ML/AI do all the parts of the job effectively? A lot of non-coding skills bleed into the coder's job. For example, talking to people who need an input to the task and finding out what they are really asking for, and beyond that, what the best solution is that solves the underlying problem of what they ask for, while meeting nonfunctional requirements such as performance, reliability, code complexity, and being a good fit for the business.

On the other hand eventually the end users of a lot of services might be bots. You are more likely to have a pricing.json than a pricing.html page, and bots discover the services they need from searches, negotiate deals, read contracts and sue each other etc.

Once the programming job (which is really a "technical problem solver" job) is replaced either it will just be same-but-different (like how most programmers use high level languages not C) or we have invented AGI that will take many other jobs.

In which case the "job" aspect of it is almost moot. Since we will be living in post-scarcity and you would need to figure out the "power" aspect and what it means to even be sentient/human.


Do you really want to spend your days writing Redux reducers?


I understand the fear of losing your job or becoming less relevant, but many of us love this work because we're passionate about technology, programming, science, and the whole world of possibilities that this makes... possible.

That's why we're so excited to see these extraordinary advances that I personally didn't think I'd see in my lifetime.

The fear is legitimate and I respect the opinions of those who oppose these advances because they have children to provide for and have worked a lifetime to get where they are. But at least in my case, the curiosity and excitement to see what will happen is far greater than my little personal garden. Damn, we are living what we used to read in the most entertaining sci-fi literature!

(And that's not to say that I don't see the risks in all of this... in fact, I think there will be consequences far more serious than just "losing a job," but I could be wrong)


When mechanized textile machinery was invented, the weavers that had jobs after their introduction were those that learned how to use them.


If we get to the point where these large language models can create complete applications and software solutions from design specs alone, then there's no reason to believe that this would be limited to merely replacing software devs.

It would likely impact a far larger swath of the engineering / design industry.


You can't get promoted unless you put yourself out of a job.


We're not looking at a product that's putting anyone out of a job though, we're looking at a product that frees up a lot of time, and time is great.


Well, since Brexit anyway.


If only Python would really solve its dependency and backwards compatibility issues; those are really holding adoption back. Though there is a good chance that even if they fixed those, people burned in the past would never come back to it.


> [python] backwards compatibility issues

What issues? A lot of the problems with Python are due to keeping compatibility with Python 2.0. Implicit string concat bites me fairly often, for example, and it has never been useful.
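
For anyone who hasn't been bitten by it: a missing comma silently merges two adjacent string literals instead of raising an error.

    colors = [
        "red",
        "green"  # <- missing comma, no warning
        "blue",
    ]
    print(colors)       # ['red', 'greenblue']
    print(len(colors))  # 2, not 3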


2->3 has been a complete disaster; anything older than a few weeks tends to randomly break with some kind of dependency issue, sometimes requiring multiple installations of python on the same machine which will bite each other in hard to predict ways. Python is a wonderful idea, but I've yet to be able to write something in python and call it 'finished' because it never ever continues to work in the longer term. Highly frustrating and, in my opinion, unnecessary.


> 2->3 has been a complete disaster

It WAS a long slog, yes. But now it's pretty much done. And there really was no way to fix the Unicode issue without a big painful transition.

> anything older than a few weeks tends to randomly break with some kind of dependency issue

I absolutely do not have this issue. Maybe you're using libraries very different from what I do? But I do think I have pretty wide interests/projects...

> sometimes requiring multiple installations of python on the same machine which will bite each other in hard to predict ways

I don't know what you're talking about here. Do you have an example?

> I've yet to be able to write something in python and call it 'finished' because it never ever continues to work in the longer term.

I don't have this experience.


What dependency and backwards compatibility issues does Python have?

That other languages don't?


Python software simply rots while you're not watching it. Either you make it a full time occupation, or every time some library gets an 'upgrade' (with a ton of breaking changes) you get to rewrite your code, sometimes in non-obvious and intrusive ways. And every time the language changes in some breaking way, you get to spend (lots of) time debugging hard-to-track-down problems, because the codebases they occur in are large enough to mask problems that would have surfaced if the same situation had occurred during development.

And that's before we get into the various ways in which Python versions and library versions can interfere with each other. You couldn't have made a much bigger mess if you tried. And I actually like the core language. But so many projects I wrote in Python just stopped working. I remember a pretty critical chunk of pygame code written for a CAD system that just stopped working after an upgrade, and there was no way it was ever going to run again without rewriting it. That's the sort of thing that really puts me off, and I remember it long after. Machine learning code is still so much in flux that it doesn't matter. But hardware abstraction layers such as pygame should be long-lived and stable between versions. And that really is just one example.

Anyway, I think asking 'that other languages don't?' doesn't really matter. But Haskell (see TFA) is one language that always tried hard not to be successful so that breaking changes would be permitted (which is fair). Python tries both to be popular and to allow major stuff to be broken every now and then, and that is very rough on those who have large codebases in production.

By contrast, COBOL, FORTRAN, LISP, Erlang, C and C++ code from ages ago still compiles; maybe you'll have to set a flag or two, but they're pretty careful about stuff like that.


Are you pinning your dependency versions? If so, things should all still work later.

If you upgrade libs then sometimes you need to do some work. I’ve found python libs to be pretty stable though so it’s never too bad.
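
For what it's worth, pinning can be as simple as freezing exact versions in a requirements.txt (version numbers here are purely illustrative):

    # requirements.txt -- exact versions frozen
    pygame==2.1.2
    numpy==1.24.4

    # reproduce the same environment later with:
    #   pip install -r requirements.txt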


> Either you make it a full time occupation

LOL Javascript enters the room.

