> Anybody who’s seen the systems inside a major tech company knows this is true. Or a minor tech company. Or the insides of any product with a software component.
Anybody who's seen the sun rising east and setting west knows that this is true: the Sun rotates around the Earth.
The other, less-obvious alternative is that software is NOT crap; we just like to complain a lot. And it's just so easy & fun to blame everything on the software we work with.
Most software that doesn't immediately die does its job fairly well. In fact, better than whatever alternative existed before said software.
Except maybe for OSX, none of those fits the definition of "barely maintained" (I'm kidding of course - obviously OSX is maintained too!). I suspect none of it is "developed by multiple contractors" either (though why would that automatically imply "crap"?). None of it is "barely running" - they might have some issues, but they run. Some of them, impressively well (take Google Search or Gmail: they have outstanding uptime; Slack isn't too bad either; and both IntelliJ and OSX run rather well too).
One of my former teachers used to say: "a program is like an airplane; it either works or it doesn't. You can't say that an airplane mostly flies". If you take that worldview, sure, all software is crap. But that worldview is deeply flawed IMO, in a very practical sense.
I think the flaw in your argument here is that you're only considering software products. A large amount of software does not have a name (beyond internal denomination), and that's where most of the crap is. A lot of the products with names are crap, too, but those are more likely to be worth anything because they are governed by evolutionary processes (bad products are more likely to fail on the market, whereas an in-house accounting solution has no option for failure, just for death march).
> I suspect none of it is "developed by multiple contractors" either
> None of it is "barely running"
You do realize that Android is on your list, right? "Barely running" fits the bill for most Android devices out there, and "developed by multiple contractors" is the primary reason.
> You do realize that Android is on your list, right?
I added it later (realized I use it quite a lot and it's unfair to not put it there), but you do get to an interesting distinction: Android runs quite fine on my phone. So when it's barely running, it's a software/hardware mismatch. And products, as long as they're software products, tend to be rather good. When they are part of other products the quality may be more of a "hit and miss".
But is that a fault of the software? Take cars: I'm sure the software in my car is imperfect, "crap" by some standards... but I haven't had it fail. I did have a sensor failure, though. And on the previous car, I had lots of other hardware failures, and no observable software failures. Would a pacemaker with a crappy battery (say, an exploding one) be any better than a pacemaker with crappy software? And how often is the software actually used to mask/work around the crappier parts of the product? That must be quite a lot, and increasing.
Sure, the software in a product might be crappy - but products are crappy all the time, so why should the software in some of them make any exception? You buy a cheap pair of sport shoes, you don't expect them to be the same quality as a pair of Nike shoes (and even Nike ones are imperfect)... why do you expect good software on crappy cheap security cameras?
* When it finds a wireless network that it recognises, the audio it outputs (my .mp3s) starts becoming choppy.
* Software is presently unable to read the status of the battery properly (probably a hardware failure).
* Many apps struggle with staying persistently connected over my mobile data connection and require occasional reboots to function.
* Video apps appear to take some sort of exclusive lock somewhere so if YouTube struggles with a video and I close it down and open up TwitchTv then Twitch will have exactly the same problem or even completely fail to render.
* Mine has none of those issues, so they may be at least partially hardware-related
* There's a difference in my book between e.g. "occasionally annoying" and "crap"; e.g. try listing the good things Android does for you, and compare the lists.
"The bridge has collapsed and these 10 people are dead." - "Okay, but let's try listing all the people who used this bridge everyday without being killed by it, and compare the lists."
While a harsh analogy, this illustrates that the difference here is in the definition of "crap". Your definition of "crap" is "does not work", whereas the submission's author's definition is more like "unsound design".
If our metric for "good" is based solely on what people choose to use, then one is forced to admit that we also have generally good security and privacy safeguards.
Actually, I kinda agree with that assessment. "Good enough" is never good enough for academics, but practical reality is different.
> then one is forced to admit that we also have generally good security and privacy safeguards.
It's a tremendous conceit of the software community to actually believe these are purely software problems that can be solved through software. Sure, software goes a long way, but I argue that it's a fool's errand to hope they can or should be solved in software exclusively. Hardware will always be a factor - at least that should be obvious. But culture/society/laws/etc. also have important practical implications. The very fact that we have lots of people scrutinizing these aspects suggests that the state of software security & privacy is really not "crap" (far from perfect, but definitely not "crap").
To believe that article is true, one must have a fairly black & white view of the world. 99% (probably much more) of humans can't really bypass those "crap" security safeguards... it's good that we have high standards (especially in areas like this), but come on. When in human history was security (in general) better? What are we comparing it with to postulate that it's "crap"?
I think the point of the article is that something that does a 95% job for 10% of the cost works for a lot of things, but for some things that trade-off doesn't work.
Maybe. And I could get behind a viewpoint that says, "some things are so dangerous, that a 99.999% job is not good enough". That's definitely true. But that is much less of a rant, and it requires the writer to do the hard work of defining those dangerous things, and arguing the benefits of adding the 4th, 5th, 6th nine against the cost. Saying "everything is crap" makes me just not take the article seriously, because the author only did half of the thinking job (not even half, maybe).
> Anybody who's seen the sun rising east and setting west knows that this is true: the Sun rotates around the Earth.
Just a small remark: technically it's not due east and west; it changes over the year.
Well, according to basic physics: also yes. Yes because it depends on the observer. No, because it depends on the observer. Copernicus's book was about a theoretical model with an observer outside the planetary system (he didn't even claim that it was true, "it was just a theory" - that's why he had no problems with the funny Catholic guys who burnt people for less).
But back to the point: both claims are true. The Sun travels through space around the Earth... and at the same time the Earth does the same around the Sun. It all depends on the observer.
technically I only talked about a subset of people :)
> Yes because it depends on the observer. No, because it depends on the observer.
This gets metaphysical. If the observer's viewpoint is all that matters, the earth is flat (maybe not for you, but for me, and by definition you can't argue with me on that!). An argument that "I find software to be crap" is completely uninteresting and not worth engaging. If that's the only thing he meant... yeah, sure, more power to him, for all I care he may find all news to be fake too.
“The reason this stuff is crap is far more basic. It’s because better-than-crap costs a lot more, and crap is usually sufficient.
...
Every dollar put into making software less crappy can’t be spent on other things we might also want, the list of which is basically endless.”
Software is crap because humans on the whole have an extremely difficult time reasoning through all the possible logic flows.
Software is also crap because in order to design good software a lot of insight, experience and empathy is required, and as it turns out, people writing crap don’t have any of those prerequisites.
Finally, most software is crap because most people writing it don’t care about getting it 100% correct, since it turns out that getting the last 5% correctly functioning is exponentially difficult, and those people would rather collect a paycheck than feel proud and content about what they wrote.
All other industries manage to deliver complete, working products; we are the only industry which doesn’t; computers and software never work 100% correctly. What does that make us? It makes us jerk-offs, that’s what it makes us. I understand now why Keith Wesolowski ditched computers and went to work on a ranch. And damn it, he is right!
> All other industries manage to deliver complete, working products;
Ha. If you were more heavily involved in those industries, I believe you would find that they all have similar problems, because the article is right: we make tradeoffs all the time, in everything, so very little is ever as good as it could be. You just don't notice it because you don't have a lot of knowledge about them; their products are "good enough" for you, and you don't think about it beyond that. Sound familiar?
No, it does not sound familiar, because I’m a hardcore do-it-yourself guy, and one of the reasons that is so is because I seek masters in other professions to teach me. That’s how I know what trade-offs they do or do not make, and following an insight by a family member that we (as a profession) can never get these damn computers to work correctly, I’m becoming more and more disgusted by the truth of it.
Do all other industries manage it, though? If they did we wouldn't have product recalls or any need for "consumer safety watchdogs". The problem is arguably worse in software, but software can also be (relatively) easily patched later, so a somewhat more lax attitude to quality on systems that are not safety-critical makes some degree of sense.
> All other industries manage to deliver complete, working products; we are the only industry which doesn’t; computers and software never work 100% correctly.
Have you ever worked in other industries? This... is not true. We're not snowflakes.
> Software is crap because humans on the whole have an extremely difficult time reasoning through all the possible logic flows.
But we have tools and techniques that make this easier. Why don't we use them?
Also, I don't think 100% correctness is the ultimate goal of this article. Firstly, software is probably nowhere even near that correctness level, and secondly, software is also partly crap because it doesn't run any faster than it did 30 years ago despite tremendous performance advances in our hardware.
> What does that make us? It makes us jerk-offs, that’s what it makes us.
Let's be honest, software as it currently is written is considerably more complex than most other engineered systems that we take for granted.
That said, we've had tools and techniques to mitigate a lot of this complexity, and to have computers double-check our work (eg. type systems and model checking), but we don't really use them as much as we should. What's worse, we even use tools that are intentionally intractable to any kind of work checking/validation (like dynamically typed languages).
For 20 years I’ve been writing entire applications in AWK, a dynamically typed language, and not once were any of the bugs due to typing. And yet, we’re such a hopeless industry when most of us believe that a strongly typed system is essential for high quality software. It makes me despair even further about this profession.
I had Ada as part of the core curriculum, just so we're clear.
And being part of the cracking / demo scene, I grew up writing in MOS 6502 and MC68000 moving on to UltraSPARC assembler, so I just might know a thing or two about typing and what it actually translates to at the machine level.
But, hey, barring suddenly landing a job working on SmartOS / illumos, that's also why I want to exit the computer industry: someone always isn't convinced and they think they know it better than I do. If I get out, anybody can not be convinced and know better for all I care at that point.
Machine-level translation of language abstractions isn't relevant. Ada is an OK typed language, though still limited in many ways, but that's neither here nor there.
The reason I can confidently state that you don't know what typing is, is because a type system is simply a means to prove propositions about your program. A type is a proposition. Any proposition, really. Most type systems employ propositions about simple equalities, inequalities and subsets, but there exist type systems that prove the absence of data races, deadlocks, the conformance to protocols, and even the time and space complexity of programs.
So when you claim that none of your program bugs were a result of poor typing, you're literally claiming that none of these bugs resulted from treating a false proposition as if it were true. That's just nonsense. Literally every bug results from treating a false proposition as true.
Even if we just stick with type systems dealing with simple equalities, like ML, claiming that none of your bugs would have been caught by types means that you never mistakenly considered two values as equal when they were not. Which is by itself exceedingly unbelievable.
And even if you are that perfect, you clearly don't have a perfect understanding of your fellow, imperfect human beings who need tools to actually help them avoid mistakes.
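To make the "mistakenly considered two values as equal" case concrete, here is a minimal sketch in Python with type hints (the UserId/OrderId names are made up for illustration); a static checker such as mypy rejects the bad call before the program ever runs:

from typing import NewType

UserId = NewType("UserId", int)
OrderId = NewType("OrderId", int)

def owns_order(user: UserId, order_owner: UserId) -> bool:
    # We believe we are comparing two values of the same kind.
    return user == order_owner

uid = UserId(42)
oid = OrderId(42)

# Bug: an OrderId is passed where a UserId is expected. At runtime this
# "works" (both are plain ints equal to 42), silently treating two different
# kinds of value as comparable. A static checker flags the call instead.
print(owns_order(uid, oid))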
"Even if we just stick with type systems dealing with simple equalities, like ML, claiming that none of your bugs would have been caught by types means that you never mistakenly considered two values as equal when they were not."
How could I mistake two values as equal when they were not? If I'm doing a cmp.b d0, #$31, either d0 will contain #$31 and it will be a byte or it won't. It can't be any other way (unless your central processing unit is designed by intel corporation)!
In the line of work I do I don't normally design systems with locking, but I have designed them in the past and they worked fine by the way of empirical testing (purposely trying to cause a deadlock). I didn't need a formal system to prove it, nor would I ever trust such a system to ensure that my code or logic were correct.
The rest is pure philosophy about provable code, and I've little appreciation for that: even Donald Knuth famously wrote
beware of bugs in the above code; I have only proved it correct, not tried it.
That's very instructive for one seeking insight, and I urge you to rethink your position on strong typing, because it's not the panacea you appear to believe it is. It doesn't make a difference if one is not a good programmer: in that case, nothing will save one. Not even being able to formally prove one's program logic is correct.
Finally, I leave you with a theme from my childhood when I was learning to program:
natural born coders
I recommend meditating on it if you intend to continue working in the computer industry, because it's a real thing. They are few, but they do exist, and their work is unbelievable to behold. Even without them having used a strongly typed system to formally prove their code correct.
> How could I mistake two values as equal when they were not? If I'm doing a cmp.b d0, #$31, either d0 will contain #$31 and it will be a byte or it won't.
Testing equality with a constant is not the only equality test you make in programs. Firstly, what assurances do you even have that d0 actually is a byte? d0 could be a pointer so a byte comparison is invalid, unless you're doing some low-order bit pointer tagging.
Secondly, comparisons where both operands are variables are where it starts getting murkier.
> In the line of work I do I don't normally design systems with locking, but I have designed them in the past and they worked fine by the way of empirical testing (purposely trying to cause a deadlock). I didn't need a formal system to prove it, nor would I ever trust such a system to ensure that my code or logic were correct.
You certainly need a formal system to prove it if you want to guarantee absence of deadlocks. You literally just said that you couldn't empirically trigger a deadlock, but that doesn't entail deadlocks don't exist.
As for your distrust, I'm not interested in non-factual religious arguments.
As for Knuth, you do realize he thinks that typical testing practices are often stupid, and that quote is him poking fun right? He informally proves most of the code he works with, and only does testing when he's still exploring an idea and he doesn't yet know what he wants.
> It doesn't make a difference if one is not a good programmer: in that case, nothing will save one. Not even being able to formally prove one's program logic is correct.
What kind of absurdity is this? If you can prove your program's logic is correct, your program's logic is correct even if you're a bad programmer. It's a literal tautology.
Finally, I have no idea why "natural born coders" has anything to do with what this thread is about. We're talking about what should be the standards and norms across a whole industry, not a select few who might not need them. Typing is not a panacea, but results are undeniably better with a decent, modern type system than without.
“Firstly, what assurances do you even have that d0 actually is a byte?”
Because of .b in cmp.b; in assembler you control exactly what goes in and what goes out. If you are comparing bytes, then bytes it must be; the rest of the register is ignored. And if you’re cmp.l d0, #$31 comparing longwords, then you already know what you’re expecting and the very nature of the way that works is that it either works or doesn’t. cmp.b d0,d1, same deal.
You’re trying to elevate something trivial (formal proof) to something more valuable and important than it really is.
And one more thing: a bad programmer will never make an effort to formally prove the logic in his program or use a strongly typed system for that matter. And a good programmer? Well, considering that Rust is utter overcomplicated garbage mishmash, that pretty much only leaves Ada as the only realistic tool of choice, but Ada is also not a very good tool: one thing that remains seared in my mind was the acute lack of documentation on the language, especially on how to solve real world problems, which makes Ada impractical. Where does that leave us? It leaves us with our own brain, knowledge, experience and insight to not write crappy software. Sorry that you feel that strong typing is a valuable tool, but practice and reality disagree with you, as is often the case in life. Better concentrate on how to write high quality software without it, or else you won’t get too far, and what benefits you do claim aren’t worth it, no matter how well it sounds in theory.
Is being able to formally prove one’s program logic better than not being able to do so? Of course. But the programming languages, the tools to do that, are utter overcomplicated garbage. You’d have an easier time and be more productive coding in assembler where you control everything, then using a piece of overcomplicated trash fire that is Rust, for example.
The entire typing thing is really just a made up crutch necessitated by artificial abstractions by high level programming languages: when I look at the data you’re sending me from the high level language in my assembler code, it’s all bits, the only thing the hardware understands, the reality. So do yourself a favor, don’t make it more complex than it needs to be. Computers are already an overcomplicated pain in the ass, let’s not try to turn it into rocket science, because it can never be that.
> I understand now why Keith Wesolowski ditched computers and went to work on a ranch
Ha ha! Can't help but think of the Roman emperor Diocletian, who after two decades of a "moderately successful career" decided to leave office and move to a ranch to grow divine cabbages.
Yep, and he kicked it lovely in Solin. Smart guy, unfortunately I’m not there yet, or else I’d ditch too. Sad thing is, I used to live for this profession.
Other industries, for example the automobile industry?
Have a look at https://en.wikipedia.org/wiki/Ford_Pinto . And have you received a recall notice for your airbag yet?
For example, a car body repair guy will weld and sand the tin and will have something tangible and beautiful to show for it at the end of the day; I’ll work on a piece of code, and even if my program is 100% correct and works great... it’s all electrons on pieces of rust at the end of the day, transforming one non-existent thing into another non-existent thing. Anybody hiring race car drivers?
And the only recall on my car is the isolator on the supercharger. Even that’s a maybe.
This is cultural.
The Walmart-ification of the world's products.
Planned Obsolescence.
The short term profits to get the stock prices up by the end of the quarter mentality.
The credit card mindset.
How fast society moves, and how cutthroat capitalism is becoming... it's all about short-term gain because the future feels chaotic and unstable to most of the world.
I personally think it's the fact that money is getting more and more difficult to obtain, because more and more is being hoarded in the upper strata of society.
So short-term mindsets on financial gain take over, because financial opportunities don't come easily.
Well, we would expect no different from medicine, or home construction, or automobiles, without some basic guarantees:
1. High visibility of problems (people die, buildings collapse, cars crash)
2. Regulation and inspection (some standards for evaluation and lines that should not be crossed)
3. Certification (you wouldn't let an uncertified random person do heart surgery on you, or build your house).
But for this to be possible, we as a whole would have to agree these are priorities and impose these limits. We're still in the era of "surgery without anaesthesia" or "no building codes" or "just let the factories emit whatever they want". Understandable, given the short time that software has been really "important" in day-to-day life, but now it's time to slow down and make sure we do it right.
We have some formal verification; it's called type systems. Very limited, I know. But some people are still praising dynamically typed languages for their "speed" and "lack of compiler errors".
>We have some formal verification; it's called type systems. Very limited, I know.
Very limited.
In my many years of doing software professionally, the serious bugs - that is, the ones that took us more than one day of debugging (once such a bug could be reproduced) - were the ones that had nothing to do with types but with
* bad understanding of the business rules
* bad fundamental implementation of the problem domain
* misuse / incorrect use of an API or libraries
* API/library behaviour different from what the documentation says.
No type system will save you from those. Type systems don't verify that the code does the correct thing; they only verify that the types moved between functions/modules satisfy certain conditions. Big deal.
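For instance (a made-up example), this passes any type checker and still implements the business rule backwards:

from decimal import Decimal

def apply_discount(price: Decimal, percent: Decimal) -> Decimal:
    # The types are all correct; the logic adds the discount instead of
    # subtracting it. No type system will catch that.
    return price * (1 + percent / 100)

print(apply_discount(Decimal("100"), Decimal("20")))  # 120.0, should be 80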
Where code reusability implies that your functions should apply to the widest range of circumstances (thus increasing the scope in which they can be reused), type systems go fundamentally against this goal, forcing your functions to be very specific about what they're able to accept and return.
Dynamically typed systems allow for faster coding, and also allow for interactive development, which makes for faster, easier testing of individual components as they are developed. Also, the very best dynamically typed systems allow for redefining / uploading updated function definitions while the code is running, which greatly increases development & testing speed.
This is not just a claim; this is my personal experience after about 23 years of programming, 90% of which were spent using statically typed systems.
As for speed, for highly optimized code Java, Haskell, and F# are basically in the same ballpark. Which is, very good speed. Well, Common Lisp (a dynamically typed language) is not only in the same ballpark, but faster than them in some cases.
You know what I'm going to choose.
Or, a different take on this: if I were going for absolute speed I'd be hacking in C, where no Hindley-Milner type checker gets in the way of the clever tricks needed to gain that extra bit of performance on a tight loop.
> In my many years of experience doing software professionally, the serious bugs...
> This is not just a claim, this is my personal experience after about 23 years of programming where 90% of those years were spent using statically typed systems.
If most of your experience was with statically typed systems, it's only logical that you would have very little experience with the bugs that statically typed systems prevent - because that prevention would be at work. Looks like the opposite of the survivorship fallacy, really.
>If most of your experience was with statically typed systems, it's just logical you would have very little experience with bugs that are prevented by statically typed systems
And then, when I used a dynamically typed language (Python) for serious stuff for the first time, I didn't miss those checks. Honestly, the great majority of those "bugs that are prevented by statically typed systems" are bugs that only a novice programmer would make.
Moreover, a dynamically typed language with strong typing (Python and many others) would prevent those bugs anyway; the only difference is that the check is done at runtime, not at compile time. You need to test the system at runtime anyway (no matter what language), so it isn't a big deal.
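A quick illustration of that runtime check, in Python (in Javascript, "2" + 1 silently gives "21"):

def total(count):
    return count + 1

# Python is dynamically but strongly typed: a type mix-up still surfaces as
# a TypeError, just at runtime rather than at compile time.
try:
    total("2")
except TypeError as err:
    print(err)  # e.g. can only concatenate str (not "int") to str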
It's only dynamically typed languages with WEAK typing that give "dynamically typed" a bad name. (In)famous examples: Javascript and PHP. Perhaps your experience of dynamically typed languages has been with Javascript?
> It's only dynamically typed languages with WEAK typing that give "dynamically typed" a bad name. (In)famous examples: Javascript and PHP. Perhaps your experience of dynamically typed languages has been with Javascript?
Javascript, exactly.
I have to admit, Python has already been mentioned here a couple of times, so it seems that I have to try using it in some big project to get more experience on the matter.
Thanks for replying, because I didn't mean to be rude to you.
>Javascript, exactly.
Well, then consider that you've had experience with only one dynamically typed language, and one with a notoriously poor type system. Thus, you wrote: "But some people are still praising dynamically typed languages for their "speed" and "lack of compiler errors"."
If we consider "dynamically typed languages" == "Javascript", then your assertion is correct: Javascript isn't particularly fast (although it's decent in speed), and no, it doesn't help you prevent errors. But this is not because it is "dynamically typed". It is because of many other things: weak typing, the lack of a good exception handling mechanism, the lack of a good module/encapsulation system, and so on.
>Python has already been mentioned here a couple of times, so it seems that I have to try using it in some big project to get more experience on the matter.
Python is easy to learn and one of my favorites (I've used it extensively), but it wouldn't be my candidate for "really good dynamically typed language". Python isn't too fast, it has some limitations for concurrency, it has some awkward limitations for functional programming, its OOP system isn't particularly great (but still useful), and so on. But Python brings something that neither Java nor C nor Javascript has, and it helps a lot in preventing errors: named arguments (named function parameters).
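For example (resize is a made-up function):

def resize(width, height, keep_aspect=True):
    return (width, height, keep_aspect)

# Positional call: nothing stops you from quietly swapping width and height.
print(resize(1080, 1920))

# Keyword call: the intent is explicit, and a misspelled or misplaced name
# fails immediately with a TypeError instead of producing a silent bug.
print(resize(width=1920, height=1080, keep_aspect=False))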
I like Python because it's fun/comfortable to use.
But if you want to explore really good dynamically typed languages, you should take a look at Common Lisp and Julia. Both of them have:
* Very strong typing
* A flexible type system
* Lots of data types (e.g. complex numbers, fractions, arbitrary-length numbers)
* A very, very powerful OOP system (particularly Common Lisp... far more powerful than what you'd get on Java, C++, Objective-C or even Smalltalk)
* A decent package/module system
* Speed that can be tailored to get close to C
* Easy parallel and distributed computing (particularly Julia)
* Metaprogramming (particularly Common Lisp)
* Easy interop with C
Julia is modern and was inspired by Common Lisp. Common Lisp started in the mid '80s and it's still one of the most advanced languages you can find today. I'd say Julia is probably easier to learn and focused on scientific computing, and that CL is probably a bit more powerful and more general purpose.
Common Lisp also has an exception handling mechanism called "conditions and restarts" that is simply exemplary, because it's not only intended for catching exceptions but also for overcoming them.
You would be surprised by the safety provided by Common Lisp -- the runtime will complain if anything looks suspicious or if some error is caught; it will then give you a very explicit explanation of what's wrong and offer you alternative courses of action. You could even go to the source code, correct the function that has the mistake, recompile that specific function while the code is running, and watch your program continue running, this time correctly.
Well, I've gone through SICP (book and exercises), so I might say that I have some familiarity with Lisp, as well as with Python - but it's one thing to do toy or personal projects and another altogether to work on a full-scale project with several collaborators and a complicated history. There's a whole class of issues you experience in such a project that really sheds new light on a language as a tool of communication between developers.
By the way, the speed I mentioned is speed of development. Both Javascript and Python are used nowadays in speed-critical applications, with NodeJS on the servers and Python in various roles in big data and machine learning, because the critical path is actually done at the C level (I/O for NodeJS, GPU/math stuff for Python), while these languages are in charge of the stuff that is not that critical in terms of speed but much more complicated in terms of logic. This brings me to think that if you use a language in an appropriate domain and separate concerns between domains in a good way, it doesn't matter much if the language itself isn't that fast.
An unsound type system doesn't prove anything about type systems. Doubly so if you don't actually use types and you're just exploiting that unsoundness with pervasive casting.
There are different type systems (think C vs Rust), and having one is always better than having none. C still prevents some issues you can have with assembly.
When you talk about static typing as providing some form of formal verification, you really can only talk about languages with advanced type systems like Haskell. Type systems like those in Java and C# give you very little formal verification (and I say this as someone who programs daily in C# for work and Python and Common Lisp for fun). What Java-like type systems give you is confidence in refactoring, which is not nothing, but it is not formal verification.
(I'll also say that type inference and generic types in C# give me about 80% of what dynamic types in Python give me. The main thing I miss is heterogeneous tuples, which in C# have to be of type Object, or defined as structs/classes, which is too verbose for the job.)
I work with C# full-time and its type system constantly helps me. Of course, it's not Haskell or Rust, but it is still so much better than my Javascript experience. Anonymous types, generic containers and interfaces are usually enough for my needs.
Can you please go into some detail about heterogeneous tuples in Python? I've never used it to build big, long-lasting systems, so I may not be aware of some of its type capabilities.
Tuples are just two or more values with separate types lumped together, e.g. (Int, String).
For example, in Haskell, to get the elements of a map, there's this function:
assocs :: Map k v -> [(k, v)]
which gives you a list of tuples, the first element of which has type k (the map's key type) and the second type v (the map's value type).
I asked the question because I assumed that I might not know something about it, since C# has the generic Tuple<T1, T2> class just for that, and if that were all, Python would have nothing on C# in that regard. But your comment is exactly what I was thinking about.
So, is there something else in Python that is not present in C#?
I doubt it, other than syntactic sugar, which makes tuples really easy to use, like destructuring and construction. E.g. in Python, if you have a function that returns a tuple:
a, b = f(xxx)
To create a tuple you just do (a, b). In Haskell, you get some other benefits, like tuples being functors (i.e. "things that can be mapped over"):
fmap show ("tuple", 5) == ("tuple", "5")
P.S. Not a Python expert, but I think tuples are iterable in Python, which works because Python is dynamically typed.
There is also a variant of a tuple which assigns names to its positions - it's also iterable. They are also allowed to have any length. Other than that, I'm not aware of any superpowers of Python tuples.
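(For reference, that named variant is presumably collections.namedtuple, with typing.NamedTuple as the type-annotated flavour.) A tiny sketch:

from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])
p = Point(x=2, y=3)

print(p.x, p.y)  # access by name
x, y = p         # still unpacks like a plain tuple
print(list(p))   # and iterates like one: [2, 3]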
> Type systems like those in Java and C# give you very little formal verification [...] What Java-like type systems give you is confidence in refactoring, which is not nothing, but it is not formal verification.
You might be surprised to hear that you can prove a large class of programs correct using plain ol' Java types [1].
A more precise understanding of faults and how to deal with them can go a long way before you even get to formal verification. A bug is not a fault, but it may cause faults. You can try to find the bug beforehand, or you can deal with the faults at runtime. In practice it's not just bugs that cause faults, but also hardware failures, natural disasters, human mistakes and so on, so you have to deal with faults either way.
> But we can certainly arrange to radically limit the scope of damage available to any particular piece of crap, which should vastly reduce systemic crappiness.
Now, this is the big idea behind supervision trees, where you split everything into the smallest possible isolated processes, plus supervisors that watch over them, so that when any process fails it can simply be restarted, limiting the scope of the problem to that one tiny process for the shortest possible time. This idea might even reduce the cost of software development compared to some more popular software development practices. But it does require an easy-to-use actor model in the language.
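A toy sketch of the restart idea in Python (not a real supervision tree a la Erlang/OTP, which adds restart strategies, links and nested supervisors):

import multiprocessing as mp
import time

def worker():
    # Stand-in for a small, isolated unit of work that sometimes crashes.
    time.sleep(1)
    raise RuntimeError("simulated fault")

def supervise(target, max_restarts=3):
    # Restart the child whenever it dies so the damage stays contained to
    # this one process; give up (escalate) after a few failed restarts.
    for attempt in range(1, max_restarts + 1):
        child = mp.Process(target=target)
        child.start()
        child.join()
        if child.exitcode == 0:
            return
        print(f"worker died (exit {child.exitcode}), restart #{attempt}")
    print("restart limit reached; escalate to the supervisor above")

if __name__ == "__main__":
    supervise(worker)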
Caja is only getting security fixes for now, one of the GitHub issues from March 2017 states: "New features for Caja are pretty much on hold for the time being. However, if you'd like to write a patch to add this one we'd be happy to review and incorporate it."
>And still, this wouldn’t be so bad, if the crap wasn’t starting to seep into things that actually matter.
>A leading indicator of what’s to come is the state of computer security. We’re seeing an alarming rise in big security breaches, each more epic than the next, to the point where they’re hardly news any more. Target has 70 million customers’ credit card and identity information stolen. Oh no! Security clearance and personal data for 21.5 million federal employees is taken from the Office of Personnel Management. ...
I’m not a fan of this type of article, the article where we find a group of bad things and then go find something to blame for it. It’s easy to say all this stuff was caused because “we just don’t make stuff like we used to”, but I think it’s wrong-headed.
Or more probably it’s used in the wrong sense. It’s true that we don’t make things like we used to. We have made giant leaps past anything we’ve ever done before. We have hundreds of thousands of companies online and millions of devices online, a level of interconnectedness that the planet has never seen before. We also have more accessible software development tools and many more people writing software than ever before.
Increasing numbers of security breaches are a sign of our progress not a sign of our failures. It’s just a sign that there is a lot more stuff attached to the internet and that unfortunately includes people with the intention to commit crimes.
Should we tolerate security breaches? No. Should we celebrate them? Of course not.
But let’s see them for what they are and search for a solution instead of something to vilify.
I didn’t get the impression this post meant to say we just don’t make stuff like we used to, but that the crap we’ve been taking as normal is going to have terrible consequences if we don’t mitigate it somehow soon.
I agree that common practices are gradually getting better, but I don’t think that’s enough. Re: solutions, the author wrote “What are capabilities?” which got discussed here the other day.