Things to hate about OOP (jot.fm)
84 points by knieveltech on Dec 10, 2010 | 73 comments



For a more clever criticism of OOP:

I find OOP technically unsound. It attempts to decompose the world in terms of interfaces that vary on a single type. To deal with the real problems you need multisorted algebras - families of interfaces that span multiple types. I find OOP philosophically unsound. It claims that everything is an object. Even if it is true it is not very interesting - saying that everything is an object is saying nothing at all. I find OOP methodologically wrong. It starts with classes. It is as if mathematicians would start with axioms. You do not start with axioms - you start with proofs. Only when you have found a bunch of related proofs, can you come up with axioms. You end with axioms. The same thing is true in programming: you have to start with interesting algorithms. Only when you understand them well, can you come up with an interface that will let them work.

-Alexander Stepanov

edit : interview here http://www.stlport.org/resources/StepanovUSA.html


I think this is pretty well stated:

There is no value in code being some kind of model or map of an imaginary world. I don't know why this one is so compelling for some programmers, but it is extremely popular. If there's a rocket in the game, rest assured that there is a "Rocket" class (Assuming the code is C++) which contains data for exactly one rocket and does rockety stuff. With no regard at all for what data transformation is really being done, or for the layout of the data. Or for that matter, without the basic understanding that where there's one thing, there's probably more than one.

Though there are a lot of performance penalties for this kind of design, the most significant one is that it doesn't scale. At all. One hundred rockets costs one hundred times as much as one rocket. And it's extremely likely it costs even more than that! Even to a non-programmer, that shouldn't make any sense. Economy of scale. If you have more of something, it should get cheaper, not more expensive. And the way to do that is to design the data properly and group things by similar transformations.

It's from this blog post called "3 Big Lies": http://cellperformance.beyond3d.com/articles/2008/03/three-b...

When I read it I thought: "That's a pretty common understanding isn't it? Not really a big lie." Then a few days ago I saw this: http://www.quora.com/What-is-the-best-mental-model-for-objec...

Facepalm. Note the "M.S. degree in computer science." WTF? To me that describes exactly how not to think about OOP.

I have Stepanov's book. It's only 240 pages, but 50% math. I haven't been able to get through it yet.


I've seen that quote in a number of threads criticizing OO, and I don't think I fully understand it. Could someone explain what the practical edge of his argument is here? Or is it just a theoretician's complaint, taking a (perhaps simplistic) definition of OO and declaring it unsuitable? Everyone agrees in theory that proofs and generic code are great, but in practice it seems that most programs (at least, the ones that I've seen) aren't conglomerations of mathematically reasonable abstractions, but fairly monomorphic domain-specific actions that would benefit more from what OO offers than from proofs and generic regular types.


Great observation about starting with proofs.

This is especially true when you step outside of the code and look at the problems being solved: if you don't already have a proof of concept, and even before that, if you don't have a sound problem to solve, you will not be able to produce intelligible proofs, and axioms will be difficult if not impossible to arrive at.

Mechanically, I'll prove to myself that something works the way I want or need it to in a REPL, and then implement the axiom from that... which is sorta in the right direction :)


I agree, especially about the mathematics.

Only the part about starting with algorithms bugs me. A focus on data structures seems much more natural. (On the other hand, data structures are intimately tied to algorithms. What would a red-black tree be without insert, delete, look-up?)


Seriously? This is more like, "I hate Java and teams with managers that like making diagrams instead of writing code".

Most procedural code is actually object-oriented. If you write:

    foo_t foo;
    init_foo(&foo);
    frobnicate_foo(&foo, "with a bar");
    cleanup_foo(&foo);
Guess what, that's object-oriented programming. You are encapsulating state and behavior in a data structure and a set of procedures. Calling that a "class" is just syntax sugar that a lot of people find useful.
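
For what it's worth, the bodies behind that snippet are nothing exotic; here's a minimal sketch (the struct contents are hypothetical, just enough to make it compile):

    #include <stdio.h>
    #include <string.h>

    typedef struct {
        char label[64];
        int  frob_count;
    } foo_t;

    void init_foo(foo_t *f)
    {
        memset(f, 0, sizeof *f);       /* the "constructor" */
    }

    void frobnicate_foo(foo_t *f, const char *how)
    {
        snprintf(f->label, sizeof f->label, "frobnicated %s", how);
        f->frob_count++;               /* state lives in the struct */
    }

    void cleanup_foo(foo_t *f)
    {
        (void)f;                       /* nothing heap-allocated here */
    }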

Even the most die-hard anti-OO advocates write code like this. Why? Because it's a great way to manage complexity, and that's all programming is. Doing stuff, but also doing it in a way that is easy to change and easy to improve.

I'm not even going to comment on the rest of the article. It's clear that the author has never used an OO language other than C++ or Java, or a statically-typed language other than ... C++ or Java. Shockingly, there is a lot more to the world than C++ and Java.


I'm not directly disagreeing, just prodding in the direction of disagreement... But..

In that example, where is the "encapsulation" you mentioned? The way I'm reading it, the only thing connecting the data with the behaviors is the fact that you just happen to call the functions and pass-in the data.

IOW, the is-somethings and does-somethings are separate.


All that ever happens is that you "call the functions and pass-in the data", except that OOP languages know to pass this for you. The difference between OOP-style C and an actual OOP language in this regard is syntactic, not conceptual.


This remark interests me: "except that OOP languages know to pass this for you".

In practice, I'm not seeing this. In that procedural example, you can pass in any input you'd like. Suppose the function in that example accepts an array. Now, inside the function, I have to devote a non-trivial amount of effort to determine if the array contains the type of data I'm expecting. And if I want that function to change the array for possible use later, I'll need it to return the array and I'll have to store that return.

Now, that array, implemented as an Object, is structured. Strongly typed. Even private properties with setters to enforce consistency.

And that procedure, implemented as a method on that object, needs none of the consistency and validation code at the top of the procedure. And it can make changes to the properties themselves without the need to pass around variables.

Sure, most of the benefit of OO is in the structure and organization it brings. But no, it's far more than, as you're making it sound, the compiler passing around an implicit argument on your behalf.


I think what you're talking about is type-safety, not necessarily OOP. C supports types, but is not completely type-safe. It will tell the developer that they shouldn't be doing something, but won't enforce it.

The argument above is that most developers write object-oriented code in procedural languages. They have a set of functions that perform actions on a specific type of data structure. To encapsulate functionality and keep the codebase clean, all reads and writes to these data structures should go through these functions. This pattern is ubiquitous in well-written C code. OOP itself simply provides a standard framework and syntactic sugar to make this type of code more consistent and clear.
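
For concreteness, the pattern being described usually has this shape (hypothetical names, just to show the outline of such a module's interface):

    #include <stddef.h>

    /* One data structure ... */
    typedef struct queue {
        int   *items;
        size_t head, tail, capacity;
    } queue_t;

    /* ... and a family of functions that are the only sanctioned way
       to read or write it. */
    queue_t *queue_create(size_t capacity);
    void     queue_destroy(queue_t *q);
    int      queue_push(queue_t *q, int value);
    int      queue_pop(queue_t *q, int *out);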

EDIT: Here's a real-world example: (came up for the first search of hash.c) http://www.ks.uiuc.edu/Research/vmd/doxygen/hash_8c-source.h...


No, not entirely. A strongly typed language will help this a little -- at least I can know I have, say, an array of strings.

Suppose I have 2 properties in my class, 'start_date' and 'end_date'. They are private and have setters.

In a method that uses those 2 properties, I can trust that they are of the right type, that start_date is < end_date, that they represent a reasonable range based on my logic defined in the setters. All of this can be taken for granted.

In this procedural example, imagine some sort of hash or struct or associative array (whatever you want to call it in your given language) is supplied with those 2 keys.

The procedure cannot trust that the inputs are valid. It has to check everything. If you can rely on type safety, congrats, that's one less check. But there are plenty of others.


So you create a typedef struct of date_range to store the start_date and end_date and use a set of date_range_* functions to manipulate that struct. Most OOP languages allow direct access to even private variables if the developer really wants to, so the guarantees are pretty weak. C++, Java, C#, Ruby, Python... all of those allow indirect access to "private" methods and variables. You must still rely on other developers to do the right thing and not screw things up.
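
For instance, a rough sketch of what that might look like (hypothetical; using time_t for the dates and letting the "setter" enforce the ordering invariant):

    #include <time.h>

    typedef struct {
        time_t start_date;
        time_t end_date;
    } date_range_t;

    /* The "setter" enforces the invariant, just as an OO setter would. */
    int date_range_set(date_range_t *r, time_t start, time_t end)
    {
        if (start >= end)
            return -1;                 /* reject invalid ranges */
        r->start_date = start;
        r->end_date   = end;
        return 0;
    }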


I think the argument that a developer can access and write to private members, bypassing the setters, and that encapsulation is therefore broken, is pretty weak.

On one hand you have an option where you have to trust the developer will chain the appropriate method calls to build an unrelated data structure that gets passed-in to your target method.

On the other, you have the possibility that a developer will knowingly and willingly subvert the encapsulation by writing directly to private variables while ignoring the business logic that's being enforced in the setters.

Do you mind me asking... what exactly are you arguing there? In the first example it's easier to do it wrong than it is to do it right. In the second example it's easier to do it right than it is to do it wrong -- in fact you have to specifically write to protected members to break it.

How are we still discussing this?


I believe your conclusion is rooted in a lack of experience in writing C code. You seem to think that developers are going to go out of their way to allocate and initialize the struct manually. In almost every codebase I've worked in, struct types always have an allocation/initializer function, so I look for that first, and generally find a set of other functions that are used to work on that struct type, organized into a single .c file usually named something similar to that struct type. It is a convention, not syntactic sugar. In C++ or C#, you have syntactic sugar that makes it more difficult to violate the rule, but does not actually enforce anything.
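
For what it's worth, C can even make the fields genuinely inaccessible by keeping the struct definition out of the public header, which is arguably stricter than what "private" gives you. A minimal sketch (hypothetical names):

    /* foo.h -- callers see only an opaque type and its functions */
    typedef struct foo foo_t;

    foo_t *foo_create(void);
    void   foo_frobnicate(foo_t *f, const char *how);
    void   foo_destroy(foo_t *f);

    /* foo.c -- the layout is visible only inside this file,
       so callers cannot poke at the fields directly. */
    struct foo {
        int state;
    };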


Rbranson:

Well if you personally want to use a procedural language for OO development, that would explain a lot about this conversation.

Anyway, good talk. Appreciate the back and forth. I wouldn't hire somebody with a mindset like yours but I certainly like chatting on HN.


> Sure, most of the benefit of OO is in the structure and organization it brings. But no, it's far more than, as you're making it sound, the compiler passing around an implicit argument on your behalf.

I agree with his point that the difference between OOP-style C and code written in an object-oriented language is language support. I have read some very OOP-style C, and it is extremely centered around chunks of data (objects.) The idea that code should be organized around data structures goes way back. The terminology in this Fred Brooks quote shows how old the idea is:

Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious.

In object-oriented C, it is even true that most functions are closely associated with a single structure, as if they are methods, though of course it isn't as cut and dried as it is in languages where that concept is rigidly enforced.

Possibly unrelated, but I've also seen procedural C that is verging on OO, showing an interesting state of evolution: pieces of data are implicitly grouped together, in that they always appear together in function signatures, but not grouped into structures. As a trivial example, if a point had x, y, and z coordinates in space, any function needing one of those coordinates would take all three throughout the entire codebase. Instead of

  float altitude_difference(float p1_z, float p2_z);
you would see

  float altitude_difference(float p1_x, float p1_y, float p1_z,
                            float p2_x, float p2_y, float p2_z);
because points were regarded as aggregates that should be passed around together. This made it much simpler to remember function signatures, and it communicated the intent to treat certain chunks of data as aggregates. (I think this technique was used in Fortran before C even existed.) I imagine that from there it was a very short step to structs.
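
That short step would presumably look something like this (a sketch, not taken from the codebase in question):

  typedef struct { float x, y, z; } point_t;

  float altitude_difference(point_t p1, point_t p2)
  {
      return p1.z - p2.z;
  }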

Almost everything in the history of OO language features has boiled down to support for practices that were first developed and used without language support, often in C. If I remember correctly (remember reading, not being there) people attempted to differentiate between public and private data and methods in C (using macros and/or something similar to the C++ pimpl trick).


I agree. Most OO implementations are simply the foo instance data combined with a pointer to a table of foo function/method pointers that keep the foo functions in a nice tidy location. Throw in some scoping rules enforced by the compiler as to who/what can access the foo instance data, and that's really all there is to it. The type-safety is basically the same as with procedural code - you can't pass a reference/pointer to foo to a function/method that expects a bar reference/pointer.
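
Roughly, in C terms (a sketch with hypothetical names):

    struct foo;                              /* forward declaration */

    /* One shared table of function pointers per type ... */
    typedef struct foo_vtable {
        void (*frobnicate)(struct foo *self, const char *how);
        void (*cleanup)(struct foo *self);
    } foo_vtable_t;

    /* ... and each instance is its data plus a pointer to that table. */
    typedef struct foo {
        const foo_vtable_t *vtable;
        int state;
    } foo_t;

    /* Conceptually, obj.frobnicate("with a bar") dispatches as:
           obj->vtable->frobnicate(obj, "with a bar");               */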


I agree with your complaints about the article, but the code example you give could be demonstrating programming with abstract data types in C just as well as it could be demonstrating programming with objects. In fact, unless frobnicate_foo is just a convenience function to look up a frobnicate function pointer in the foo_t structure and invoke it, I'd have to say that you're clearly demonstrating an ADT here.


Your code is object-based, not object-oriented, because of the lack of inheritance.


His example is a little small to see how he would implement inheritance.

Also, inheritance is not a core requirement of OOP. Before you hit that downvote button, consider this: Inheritance allows your class to share attributes and behaviors with another class, and it allows code to recognize that instances of your class have those attributes and behaviors. In other words, when we say "is a", we really mean "does things that match the interface and meaning of". You can accomplish the same thing with "aspects" instead of "inheritance". Following that idea, it is simply a matter of introducing this simple API to add your behavior to an instance:

    add_aspect(&foo, "Some Aspect Identifier");
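
A very rough sketch of how that call could be backed in C (hypothetical; a real aspect system would attach behavior as well, whereas this only tags the instance so other code can test for the interface):

    #include <string.h>

    #define MAX_ASPECTS 8

    typedef struct {
        const char *aspects[MAX_ASPECTS];
        int         n_aspects;
        /* ... the object's own data ... */
    } foo_t;

    void add_aspect(foo_t *f, const char *aspect)
    {
        if (f->n_aspects < MAX_ASPECTS)
            f->aspects[f->n_aspects++] = aspect;
    }

    /* "is a" becomes "has the aspect": */
    int has_aspect(const foo_t *f, const char *aspect)
    {
        for (int i = 0; i < f->n_aspects; i++)
            if (strcmp(f->aspects[i], aspect) == 0)
                return 1;
        return 0;
    }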

And that's just the tip of the iceberg. Check out Art of the Metaobject Protocol (http://www.amazon.com/Art-Metaobject-Protocol-Gregor-Kiczale...) for some huge eye-openers.


>Also, inheritance is not a core requirement of OOP.

The definition I learned was that, if you don't have inheritance, it's object-based. Of course, that was almost 20 years ago; usage might have shifted. I think the example I was given of an object-based language was Ada.


This is a roast, so picking it apart is to spoil the fun a bit :)

But: "If we go back to the origins of Smalltalk, we encounter the mantra, “Everything is an object”. Except variables. And packages. And primitives. And numbers and classes are also not really objects, and so on. Clearly “Everything is an object” cannot be the essence of the paradigm."

Even in Smalltalk, packages, numbers and classes are all objects. Primitives are methods, which are objects.

In Self, variables (in Selfspeak 'slots') are also objects.


People always focus on the "everything is an object" bit; I remember reading Alan Kay saying that the idea of messaging between parts of the software was much more central than inheritance or even the notion of objects.

http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-...

But I agree that the fact that so much is an object makes Smalltalk so much more flexible. I have not done any real work in Self, so I won't comment.


Yeah, I thought maybe he was confusing ST with Java. In Java, those things are not objects.


Btw, for context, here is his publications list: http://scg.unibe.ch/staff/oscar/onpubs

He's not an anti-OOP guy, he's a pro-OOP guy who's giving a roast as the banquet speech at ECOOP.


He mentions that in Smalltalk, everything is an object, "Except variables. And packages. And primitives. And numbers and classes are also not really objects, and so on." This statement isn't quite correct; for example, both classes and numbers are objects in Smalltalk, as are all the primitive types such as booleans. In fact, Smalltalk's if-then statement is achieved by sending boolean objects the ifTrue:ifFalse: message with code blocks as arguments, e.g.

    (5 < 6) ifTrue: ['yes'] ifFalse: ['no']
which returns the string 'yes' if the 5 object returns true after being sent the < message with 6 as an argument.

The problem—as Paul Graham pointed out—is one of terminology. "Object oriented" means different things to different people, and what would be appropriate for an all-you-can-do-is-send-a-message system like Smalltalk is not going to be appropriate in a weakly statically typed, early-binding system like Java, which makes reasoning about all of OOP at the same time a difficult proposition.


If there is anything I hate about OO, it is the triumphalism of its proponents. Anyway, here's what I've come to think of the main programming styles:

Imperative style - programming with a global context.

OO style - programming with a local, implicit context

Functional - programming with no context. Or explicit context.

And yes, Virginia, complex problems will stay complex no matter which style you use.


This is spot on! I would only change that from 'context' to 'state', because it is the state that matters.


I think those are reasonable descriptions, but they're not quite how I see them. I think OO is orthogonal to imperative vs functional. You can program OO in a pure functional language, and you can do pretty typical C programming with OO.


I love this description, it gets right to the core of the issue. Well done sir!


100% free of original content, and much of what's there wouldn't qualify for an anti-OOP "best of" list. I am really confused that he is an OOP guy who gave this talk at an OOP conference, because he seems quite sincere (not ironic) yet sounds like someone who is looking at OOP from the outside, or is being forced to do OOP against his will.

1: Nobody agrees on a definition for OOP, so it isn't clear what it actually is. Right, except that you have a clear enough idea of it to hate it, so...?

2: Hatin' on Blub. Many of those quotes came from fans of OOP, so I guess one of the things he hates about OOP is that OOP fans don't speak with proper reverence and/or community solidarity. What, should Bjarne Stroustrup consult the OOP marketing department before he makes a funny remark about C++? And the final quote from Stroustrup undermines his whole blog post. After reading that quote, I had to reread the whole post twice to see if it was tongue in cheek, but I don't see any irony at all.

3, 4, 5: It's legitimate to complain about classes, methods, and restrictive type systems, but within OOP you are able to embrace or eschew these things as you wish. Everyone knows this, right?

6, 7, 8, 9, 10: Not specifically aimed at OOP. These could all appear in a list of "10 things to hate about FP" if FP were popular in corporate software development.

It's especially ironic that he complains in #1 about the lack of a specific definition of OOP, but in 2-10 he paints a very restrictive, extremely overly specific picture of OOP. Programming with Java, UML, and design patterns and saying "I hate OOP" is like living in Los Angeles and saying "I hate California." LA is popular, but if you don't like LA, there's a hell of a lot more to see in California before you badmouth the whole state.


Incidentally, #1 is sufficient to hate "OOP" (the term), because using it will never lead to useful questions or answers.


     Classes exist only in our minds. Can you give me a single real-world 
     example of class that is a true, physical entity?
Sure I can ... before building a car, you need to have a blueprint first.

I know that OOP is no panacea and that it has problems and so on. But personally I haven't learned anything from reading this article.


>before building a car, you need to have a blueprint first.

But the blueprint is not the class; it is a copy of the instructions on how to instantiate the class.


In the case of a car, not really; the assembly is an entirely different matter, dealt with by a different team. The car's blueprint may have to change to accommodate assembly issues, but the car's blueprint is really describing the car, not the assembly process.


OK, but the blueprint is still not the class; it's a piece of paper which carries a copy of information which describes the instances of the class.


A class describes how objects will behave once instantiated. Sometimes it does contain specs about the assembly process, but not necessarily. And the assembly process is described in "static methods" which are nothing more than methods on a "singleton" that's instantiated once when the application starts.

A blueprint in the real world, while just a piece of paper, is an important physical object nonetheless. The analogy can be extended: sometimes you need to modify that piece of paper in real time based on new requirements that arise. And the medium (e.g. piece of paper / source code) used to express the blueprint doesn't really matter. What matters is what the blueprint represents: instructions for how the built object will behave, what interfaces it has, and (for physical objects) what it looks like.


The original assertion, from the article, is that classes have no physical existence. The upstream comment claimed that a blueprint was an example of a physical object which was a class; I claim that the blueprint is not the class.


All this claiming would certainly benefit from being more specific on the motivations for the claims, otherwise, this is just yet another mindless debate.

My understanding is that the original article was making this claim because one reason used to push OO is that it is a more "natural" way to represent problems, because it fits what we experience every day; the point being that the widespread implementation of OO relies on classes, which have little to do with the real world.

To refer back to the blueprint example, modifying the car you purchased in ways the manufacturer hadn't intended doesn't require you to modify its blueprint. Also, you may repurpose the car for something else; naming this thing a car is just a matter of convention about describing its use, not something fundamental about the object itself. These limitations still apply to procedural or functional programming; the point is just that it wasn't a selling point for them.


The poster writes: "Have you ever considered why it is so much harder to understand OO programs than procedural ones?"

It is? Seriously? I mean, there are badly-written OO programs, but there are badly-written programs of any variety, and assuming even a moderately good OO breakdown of the problem, the code in an OO program will fall into natural segments that are easy to see and easy to mentally file into compartments---much easier to read.

Furthermore, "written in C" and "OO program" are not mutually exclusive.


> OOP is about taming complexity through modeling […]

But, what kind of programming isn't?


The question is: how effective?


Pretty effective.

Yes, 99.99% of Java programmers do it wrong. That's because they are not programming, they are typing stuff into a computer to make sure next week's paycheck shows up on time.

Or, they think "programming is fun", and want to solve a problem. So they solve the problem in a way they know how, publish their solution to teh intarwebs, and are done. This results in a solution to a problem, not easy-to-maintain code.

Writing good code requires practice, discipline, skill, taste, continuous re-evaluation of the design, extensive thinking, and extensive learning. Not many people care enough to do it right. The program works today, and if something needs to be changed tomorrow, well, they'll change it tomorrow.

The problem is not OO, the problem is that good software is never demanded (because people think it's impossible).


OO, actually, is a big part of the problem.

OO is inherently stateful, so reasoning about OO programs is just slightly easier than reasoning about assembler programs.

OO was introduced as a way to control program behaviour - encapsulate effects, hide information about effects, abstract effects away. All while controlling them.

All of this using a single notion of object, with inheritance as the single method of type derivation (i.e., derivation of information to reason about).

This is hard to do even now, with all the power of Coq's type system; why should it be possible within a typical OO type system of the mid-90s, or even the current .NET type system?


Perhaps the reason OO has gained such a following is because it seems (to some) like the solution. Businesses have found they are bad at hiring programmers, because middle managers don't know how to spot the bad ones. So the best solution, in their minds, is to limit the amount of damage a bad programmer can do. In an imperative language, a bad programmer can wreak havoc. In OO, so the thinking goes, the disease is quarantined to a predefined set of functionality. Good programmers complain because OO ties the programmer's hands. But maybe that's the point.


No it isn't. Not yet. You have to define "OO" first. You have to point to a cluster in program-space worth defining, call that "OO", and call the rest "not OO". There could be fuzzy limits, but some programs have to clearly be OO, and some have to clearly not be.

I tried, and I hit two little snags:

(1) There is no agreed-upon "OO" cluster. Ask Alan Kay and Bjarne Stroustrup. Most programmers even make a purely syntactical (and meaningless) distinction, calling `foo.bar(x)` OO while calling `bar(foo, x)` not OO; or calling C++ classes OO, and C structs + functions not OO, even when they don't use inheritance in C++.

(2) Actually, "OO" is now meaningless. It doesn't have an interesting predictive power. No cluster in program space worth categorizing can reasonably be called "OO", because other existing terms will always be preferable. So we should stop using that term.


You know you've hit a quality post when he's arguing with Chuck Norris jokes.


Chuck Norris argues with Stallman jokes.


I hate when I get trolled by drunk me.


Object oriented programming has vastly simplified and improved programming. Come on! I would hate to develop an average GUI app without OO!


Yes, but UI is a typical use case where OOP works well. Everything can be neatly divided into classes: buttons, combo boxes, etc.

But look at numerical software implemented as OOP. Quite often it is absolutely horrible. I wrote such dreadful and unmaintainable code as well, because it was taught in school that OOP everywhere is the way to go.

OOP and inheritance are often mistaken for the path to genericity. Parametric polymorphism with constraints (e.g. typeclasses) is far more useful to that end.

OOP is great in some domains, but the 'OOP everywhere' philosophy needs to go. We have already tainted enough generations of students with such beliefs.


The one thing I hate about OOP is that it is so ubiquitous that many manager-types think everything must be done in OOP.


That's a pretty negative article. Would be much more useful if he'd put suggestions on how to make it better in each point, maybe with some proof of concept.

As for the topic at hand. OOP is certainly no silver bullet, and not suited for everything (an example would be an OS kernel, see the attempt of Linux).

Generally though I don't enjoy the freedom to do systems programming all the time, and static procedural compiled languages are not a solution to everything.

At times when I want to bring something nice to the screen, I actually need a framework, a set of libraries that save me from having to reinvent the wheel. The best stuff I've seen in this category is written in C++/OOP.

Other times I just want to do some data processing, and a scripting language like bash or python does wonders in time efficiency for quick and dirty stuff, and OOP is only of minor importance for these things.

To add to the spirit of the article, and be a bit negative: the main problem with OOP I know of is that it's harder to get an overview of a larger software project, as you have to familiarize yourself with a huge object dependency tree instead of the core data model and flow.


Linux attempted to make an object-oriented kernel and failed? Can you give a link?

Small correction, btw: in Python everything is an object and you have classes n' stuff, making it just as 'OOP' as C++, Java.


Python is clearly described as an OO programming language, yet, interestingly, it does not have key features often attached to OOP: it has no enforced encapsulation, and inheritance plays a very minor role in designing Python programs.

As Alan Kay described it, OOP should be about very late binding, messaging and state hiding/protection (http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay...). That's completely different from most of what people describe as OOP (by that definition, a statically typed language cannot be OOP). The notion of multiple algebras associated with objects mentioned by Stepanov was already considered before, so I don't buy that argument much. Also, as much as I admire in some perverse sense what the STL can do with C++, I don't think it is an example of a beautiful design - more advanced languages (dynamically and statically typed) provide much nicer solutions to the same issue.


And that, in essence, is what makes Python awesome.


Making it several orders of magnitude more 'OOP' than C++ and Java.


>Another thing I hate is the way that everybody loves to hate the other guy’s programming language.

Oh, yeah, that never happens with non-OOP languages.


Should rename the article to "10 Things I Don't Understand About OOP"


Insightful satire is one of the most illuminating forms of criticism.


Hating OOP is a little old. It's not hard to understand that nothing is a silver bullet, except for silver bullets.


Is it just me, or does this sound like an old, bitter man raging and reminiscing about old times, when things were simple and procedural?

I'm not saying OOP is perfect. It's not. But I simply have to disagree with the claim that procedural programs are easier to understand than OOP ones (based on the real-world cases I have seen).


Most OO programs ARE procedural. (e.g. anything written in Java/C#/C++)

OO does not eliminate procedural code, it merely encapsulates it.


Yes I meant "procedural-only, pre-OO".

A lot of C software (one example: glib, as used by GNOME) these days is also OO, without any language support. OO is about encapsulation, separation of concerns, and so on, generally resulting in more readable code.

Of course it can be overdone. I've also seen software more like OO-extremism, in that it was almost impossible to see what called what, or how the loosely-knit bunch of objects, interfaces, abstractions managed to do anything at all. Eclipse has a few projects guilty of this.


As with all discussions of this type it abounds with false dichotomies - as you say all OO programs contain an element of procedural code and it is pretty difficult to avoid objects and classes completely if you use a language where they are the primary means of data and procedural abstraction.


> Just because your Java program compiles does not mean it has no errors (even type errors).

This is because

a) Java allows arbitrary subtype casting of object reference types

    Object s = "string"; 
    Integer foo(Integer i) {
       return i.intValue() + 2;
    }
    foo((Integer) s); //runtime error
b) null is a valid value for all object reference variables.

    foo(null); //runtime error

Java should not be used to point out the faults of static typing.


    foo := 2 * a + b;

No type checking will ever catch that this is simply the wrong formula: it computes (2 * a) + b when it was supposed to be 2 * (a + b).


Just like no car will ever be able to transfer hundreds of people across an ocean.

That is not the purpose of static types, instead they offer a basic guarantee on correctness, a minimal correctness proof on the structures of the program and in particular a sanity check on your logic to see if the structures implied by your types are in fact inhabited (excluding bottom type).

Or, say, whether the type diagrams on a series of functions you are trying to use commute.

See http://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspond... for some interesting stuff.


If your static type system includes dimensions / physical units, you can spot a lot of typos in formulas. (Of course, you will also be required to add some conversion factors when you want to express something like "Open as many connections as the sum of the number of hard disks plus the number of users.")
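
Even without language-level unit support you can approximate part of this with distinct wrapper types; a minimal sketch in C (hypothetical, and much weaker than real dimensional analysis, since derived units have to be spelled out by hand):

    /* Distinct wrapper types: mixing them up becomes a compile error. */
    typedef struct { double value; } meters_t;
    typedef struct { double value; } seconds_t;
    typedef struct { double value; } meters_per_second_t;

    meters_per_second_t speed(meters_t distance, seconds_t elapsed)
    {
        meters_per_second_t v = { distance.value / elapsed.value };
        return v;
    }

    /* speed(elapsed, distance) no longer compiles, unlike the same
       typo made with plain doubles. */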


You're right, I hadn't considered this. Have you tried a language with units?

I've tried it in F# and seen it in Frink. I found it cool but haven't found a need for it. I suspect that is at least partially due to being used to not having it in a language.


A friend of mine hacked up his Lisp, with lots of macros, of course, to check units at compile time. I haven't really used his system, but he was quite keen on it.


Yep, someone here added a type check to Ruby; there are some functions throughout the codebase that check each argument, but I've not seen any real-life benefit.


First, this is taken out of context. Second, you're trying to disprove the wrong thesis.

Let's put this back into context first.

In the proper context, our wrong formula will be caught by functional tests run by a field expert (or even earlier, by the programmer). Functional tests, as opposed to unit tests, check the validity of the program as a whole in relation to the problem it should solve. So they are much fewer and quite often entirely external.

So this is actually a non-problem.

Let me reconstruct that implicit thesis you're trying to disprove.

As I can guess from my experience and your nickname, you're trying to disprove the thesis that "types solve all problems".

But real people use types not to solve all their programming-related problems, but to solve as many of them as they see fit for the task at hand.

All software processes revolve around a simple thesis: the cost of defect elimination is proportional to the time between defect introduction and defect discovery. PSP/TSP, all Agile processes, etc.

Type systems greatly reduce that time. So do REPLs and unit tests. But, compared to unit tests, they require much less effort from the programmer.

So, yes, we all need testing. Type systems, though, reduce the scope of testing to where it naturally belongs.

PS: In my opinion, a programmer should seek a strongly typed language with a REPL.



