Static Typing: Give me a break (scientopia.org)
49 points by lygaret on Jan 21, 2013 | 60 comments



The article presents the usual dichotomy: static typing helps with correctness but makes programs harder to write--your language is less expressive. I used to believe this too. Now I think this is a false dichotomy. In this rare instance, you can have your cake and eat it too.

I used to mostly use dynamically typed languages: Python, JavaScript, Perl, Scheme. I also used Java and C++, but did not think they were worth the sacrifice in productivity. However, recently, I've been using Haskell. I really like it for reasons of correctness; however, I was also pleasantly surprised to find I was more productive in Haskell than in any dynamically typed language--even Racket (a Scheme variant).

There is one very simple feature in Haskell that makes it concretely more expressive than a dynamically typed language: typeclasses. You can have values polymorphic on their return type, which makes things like overloaded string and numeric literals possible.

However, it's more than this. Types help me think about my problem and constrain the solution space. I often overcome logical problems by figuring out how the types should match.

There are a bunch of factors like that, but they really are a matter of preference. Typeclasses aren't: you simply can't do certain things as easily without them. QuickCheck is another great example; it's far easier to use in Haskell than in, say, Erlang.

For my most recent project, I was able to use 18-bit words very easily. Moreover, it would be trivial to make my code generic over the exact type of number used! I'm thinking of passing in probability distributions instead of numbers per se. That's also easy thanks to typeclasses.

In summary: I've found static typing can be more expressive, partly out of preference but largely thanks to typeclasses. Somebody once asked me what I thought the most important feature of Haskell was. I said typeclasses, and I still stand by that.


Whether or not people should use static typing is not really a question for type theory, but a question of human factors and target area. Programming languages address a range of needs: collaboration or maintenance, performance, ease of expression in problem domain, cost of error, etc.

Programming languages sit at the mind/machine interface; I don't think it is very useful to evaluate them solely from a PLT standpoint.


It's always struck me as curious that PLT does not overlap much with human factors. What is a programming language but the human interface to an automaton?


You'd think that a self-styled "programming junkie" wouldn't confuse strong and static typing, or weak and dynamic typing.


Making the "strong/weak" versus "static/dynamic" distinction is much more common in dynamic typing circles than static typing circles. And it makes total sense. Static typing people think of types as being syntactic. "Dynamic" typing seems like a misnomer--either the syntactic terms have strong types or they don't.


Common usage has definitely varied over the time I've used it. I remember one professor back in school even referring to C as strongly typed and Scheme as weakly typed.

I think some of the confusion is that the nature of the two sets of labels is commonly misunderstood. They're supposed to represent two orthogonal dichotomies about how languages are designed. In truth there is an orthogonal pair of dichotomies in there, but it's not captured by that set of labels. And the labels aren't really orthogonal.

The real dichotomies concern two different places where you can track type information: It can be associated with variables, or it can be associated with values. Some languages that represent the possibilities well are:

  - C: Variables, but not values
  - Ruby: Values, but not variables
  - C#: Both values and variables
  - FORTH: Neither variables nor values.
As for how they're commonly described: C is the classic example of static and weak, and C# is a good example of static and strong (though admittedly it has facilities for switching both off in a controlled manner). Ruby is dynamic and strong.

But nobody wants to call FORTH dynamic and weak, and for good reason: The very phrase "dynamically typed" implies that types are tracked, just dynamically. But FORTH doesn't track types at all; it's much more correct to describe it as untyped.

So that's where it breaks down: While strong vs. weak corresponds well to the question of whether types are associated with values, dynamic vs. static is not actually an orthogonal dichotomy, because the word dynamic describes a decision about both ways of tracking type: Dynamic languages associate type with values but not with variables. And my earlier description of Ruby as "dynamic and strong" is redundant.
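The values-vs-variables split above can be made concrete; a small Python sketch (my illustration, not the commenter's) shows type information riding on values while the variables carry none:

```python
# In Python, type information lives on values, not variables:
# the same name can be rebound to values of different types,
# and each value always knows its own type.
x = 1
print(type(x).__name__)   # int
x = "one"
print(type(x).__name__)   # str

# Contrast with C, where the *variable* has the type and the
# bits stored in it carry no type tag at runtime.
```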


I don't see the error he's making, would you mind pointing it out?


From Wikipedia:

- Static Typing: "A programming language is said to use static typing when type checking is performed during compile-time as opposed to run-time." [1]

- Dynamic Typing: "A programming language is said to be dynamically typed when the majority of its type checking is performed at run-time as opposed to at compile-time." [2]

- Strong Typing: "A type system is said to feature strong typing when it specifies one or more restrictions on how operations involving values of different data types can be intermixed. The opposite of strong typing is weak typing." [3]

- Weak Typing: "One claimed advantage of weak typing over strong typing is that it requires less effort on the part of the programmer because the compiler or interpreter implicitly performs certain kinds of conversions. However, one claimed disadvantage is that weakly typed programming systems catch fewer errors at compile time and some of these might still remain after testing has been completed." [4]

Links:

[1] https://en.wikipedia.org/wiki/Type_system#Static_typing

[2] https://en.wikipedia.org/wiki/Type_system#Dynamic_typing

[3] https://en.wikipedia.org/wiki/Strong_typing

[4] https://en.wikipedia.org/wiki/Weak_typing


Not sure why everyone's being so cryptic, so it's probably me that's the confused one. Let me try to guess:

"There is a wikiality driven split in the English language.

Camp A says that a strongly typed language prohibits immoral implicit conversions, such as "1" + 1 => 2 or even more grotesquely, "1000" == "1e3" => true.

Camp B says that a strongly typed language gives each value a type, and that all operations which do not have well-defined semantics will signal an error rather than allow operations to execute which assume incorrect typing.

Camp A currently owns the articles "weak typing" and most of the article "strong typing". Camp B is settling for teaching the controversy in "Strong vs Weak typing" and putting passive-aggressive little notes on all the articles that Camp A's view is mistaken. "

Did I get it right? If so, camp B is correct and camp A are being dicks about terminology that they're getting wrong.


Everything before 'Did I get it right?' is correct.

I have no opinion on the bit after that. :-)


We know what the definitions are; the confusion is that I don't see where exactly in the article he is confounding them. I'll read it again...


I agree. I'm not noticing a mistake unless he has since corrected it.


The OP's statements are about type declaration, which is not necessarily a feature of static typing but of strong typing.

It is possible for a language to not have these declarations and still be statically typed (i.e., via compile-time type inference).


Although there is some degree of preference involved with programming languages, I find it much more common to see people using hammers on screws because that's the tool they KNOW.


That informed people may have preferences different than one's own does not preclude the possibility that some people's preferences are based on their being uninformed. But it is hubris (and not the virtuous kind) to presume that's any particular person's reason for disagreement, or that it's the most common.


It is also worth pointing out that there are various flavors: there is a range of type strengths, maybe from something like Haskell down to, I don't know, JavaScript with stuff like [1,10,4,21].sort() = [1,10,21,4].

There is strong typing vs weak typing as well, and either can appear in a dynamic language. There is also the question of allowing nulls, and of having or not having algebraic types.
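To make the strong-dynamic combination concrete (a sketch of mine, not part of the original comment), Python refuses the implicit coercions that JavaScript performs:

```python
# Python is dynamically but strongly typed: an operation with no
# well-defined meaning across types raises a TypeError rather than
# silently coercing (as "1" + 1 would in JavaScript).
try:
    result = "1" + 1
except TypeError as e:
    print("refused:", e)

# Explicit conversion is required to state intent:
assert int("1") + 1 == 2
assert "1" + str(1) == "11"
```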

> That's part of a more general preference, for preserving information. When I'm programming, I know what kind of thing I expect in a parameter, and I know what I expect to return.

Doesn't a functional single-assignment language like Clojure or Erlang then make the most sense? If your goal is to preserve information, and to know that no funny business has been happening behind your back (when you are not "figuratively" looking), isn't that what you want? This is even more important if this is a concurrent program.

You also have to ask yourself what the overall goal is here. OK, maybe it is just aesthetics; maybe there is a small (or not so small) amount of pleasure derived from figuring out the types and specifying them. Maybe it helps larger teams work better. Or it helps the compiler make faster code (V8 might disagree, though).

Most people would perhaps arrive at "I want my program to not crash as often." One way to do that is to have a provably correct program. For avionics there is some degree of that, but it is very expensive. Another way is to assume your program will crash, no matter how strong your types are; then fault isolation is a better strategy.


There are also two different (popular) typing approaches: inheritance and structural typing. Each gives you different ways of reasoning about functions and data.

Most of the hangups with static typing seem to come from three problems: Excessive boilerplate, excessive "type purity", and excessive compilation time. I would also argue that a lot of the hangups with static typing come from Java.

Java requires a lot of boilerplate to manage class constructors, etc. However, this is greatly reduced in new languages thanks to type inference.

Java is supposed to be about "pure" OOP, making it very difficult (and slow) to invoke any sort of dynamic method. It can't even type arbitrary methods, once again due to its object-centric focus. However, newer languages are offering more ways of dealing with different type systems, and/or dodging typing altogether in certain cases.

The last problem is with compilation time, which can eat into development time the same way that testing does.

There are newer statically typed languages that are making big improvements across all of these problems. Scala, Haxe, Dart, and TypeScript are ways of making development easier for pre-existing platforms... either by relaxing type issues on the JVM, or providing type features for dynamic runtimes (JS).

On the other hand, dynamic languages are starting to get very sophisticated linters that are catching classes of errors that are typically found by static typing.
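As one illustration of this trend (my example, not the commenter's): Python's optional annotations are ignored at run time, but an external checker such as mypy can use them to flag misuse before the program runs:

```python
# Annotations document intent but are not enforced by CPython;
# a checker (e.g. mypy) reads them to catch errors statically.
def pad(text: str, width: int) -> str:
    return text.ljust(width)

print(pad("hi", 5))   # fine both at run time and for the checker
# pad(5, "hi") would only fail at run time (AttributeError: int has
# no ljust); a static checker flags it before the program ever runs.
```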

So, in a way, both camps are addressing problems in their own way. I'm primarily in the static camp, but I develop using a lot of dynamic languages, and have been impressed with the tooling and resources available there. However, I see more improvements coming from the new static languages.


> The last problem is with compilation time, which can eat into development time the same way that testing does.

Good point. More modern languages fix some of the flaws found in Java. Among which:

Scala (not really "modern" since it's ten years old, but more recent than Java):

* No boilerplate: check

* Type inference: check

* Compilation times: er... horrendous. C++-template-linking horrendous

My sweet spot these days is Kotlin, which checks on all three points (I use both Scala and Kotlin in IDEA and it's not even close in compilation times).


"The last problem is with compilation time, which can eat into development time the same way that testing does."

The Haxe->Neko compiler is fast beyond any expectation; when I started using it I simply could not believe how fast it went.


bias alert: I'm a huge Haxe fan.


Here is the bottom line: static typing is getting much closer to the sweet spot by increasingly gaining dynamic typing advantages (e.g. type inference, which means less verbose sources) than dynamically typed languages are going in the other direction.

It's becoming increasingly hard to justify using a dynamically typed language these days.


There are far more compelling advantages to dynamic typing than fewer keystrokes.

Static type checking attempts to use compile time information to make claims about the runtime behavior of a program. This technique obviously fails to the extent that the runtime environment diverges from the compile time environment. Dynamic type checking, on the other hand, has no such limitation: it has perfect fidelity to the runtime environment because it occurs within that runtime environment. Even if the runtime environment itself changes at runtime!

Here's a concrete example. You compile a Haskell program that dynamically links a package. You then install a newer version of the package, which has an incompatible API change. Your program, which reported no type errors, now has a type error (a static typist may not agree this is a type error, but a dynamic typist would assert it is). And if your program starts at all, it will likely fall over and die, because GHC's codegen is brittle against such changes.

Fortunately, we do not suffer this fate on platforms like OS X or iOS, where binaries routinely run on multiple prior and future releases of the OS, with their accompanying shared libraries, while taking advantage of newer features whenever available. ObjC's dynamic typing is a big help here: it's simple to express "call this method, if it exists." And if you call a method that does not exist, you get a readable exception instead of being sent wildly through a random slot in a vtable.

(And oh yeah, having a stable ABI helps too.)

So this is one of the big strengths of dynamic typing: it works even if you don't know where exactly your program will run.
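The "call this method, if it exists" idiom carries over to other dynamic languages as well; here is a hypothetical Python sketch (the class and method names are mine, for illustration) of probing for a newer API and falling back gracefully:

```python
# Dynamic lookup lets code degrade gracefully when run against an
# older or newer library version: probe for the method at run time
# instead of baking the API into the binary at compile time.
class OldWidget:
    def draw(self):
        return "flat"

class NewWidget(OldWidget):
    def draw_retina(self):   # method added in a "newer release"
        return "retina"

def render(widget):
    # Call draw_retina if it exists; otherwise fall back to draw.
    method = getattr(widget, "draw_retina", None)
    return method() if callable(method) else widget.draw()

print(render(OldWidget()))   # flat
print(render(NewWidget()))   # retina
```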


Sorry but claiming that executables written in a dynamically typed languages are resistant to invalid dynamic linking is not just bizarre, it's nonsense.

Either the code you are writing is 100% in the dynamically typed language and it couldn't care less about dynamic linking, or it's invoking functions in a dynamically linked library through an FFI, and it will fail like any other language.


What constitutes "invalid dynamic linking" is ABI and therefore language (and toolset) dependent. In C++, adding or removing a variable or virtual function from a class is an incompatible change: it will break binary compatibility with subclasses. In Haskell, adding a new entry to an algebraic data type also breaks binary compatibility. But in dynamically typed languages, these changes are usually benign.

The binary brittleness of C++ and Haskell is not a direct consequence of their lack of dynamic typing, but it is a consequence of the underlying assumption: that the compile time environment fully describes the runtime environment, and so no provisions need be made for their divergence.

I don't understand your second paragraph. Maybe you think that code written in dynamically typed languages cannot be compiled into shared objects? I work on a large Objective-C dynamic library, which is compiled to machine code and loaded by the dynamic linker, just like a C library. The dynamic aspects of ObjC pay us big dividends.


I've made language choices on quite a few projects. To be honest, I've never specifically evaluated static vs. dynamic typing as one of the major factors in the decision, and I've never seen static vs. dynamic typing become a major point of debate. The discussion typically revolves around pragmatic issues related to existing standards, compatibility with existing software, skills of the team, availability of programmers, etc...

Do you really think the use of dynamically typed languages is an important enough factor to require justification? I don't want to imply that the distinction isn't significant. I'm sure it is. I just haven't had any experiences where it came into play when making a language choice, and I'm curious whether you can elaborate on the need for justification.


> Do you really think the use of dynamically typed languages is an important enough factor to require justification?

Yes, because when you pick a dynamically typed language, you are giving up on important advantages that become crucial as your code base grows, among which:

- Type annotations, which make it easier for newcomers to understand the existing code base (and obviously, catching early errors by the compiler).

- Automatic refactorings, which guarantee that your technical debt remains at a reasonable level since it's so easy to evolve your code base as it grows. Even the simplest refactorings require human supervision in dynamically typed languages (here is a good explanation why: http://goo.gl/SKaos )


Those are consequences of not having static types; they are not consequences of having dynamic types. I have to stick up for ObjC, which has dynamic types, but enjoys the advantages you listed because it also has static types.


I upvoted this comment only because the two best comments on this article are replies to this comment and deserve to be higher up in the thread.


For me, the strongest overlooked argument for strong static type systems (like Haskell's) is security. With them, you can make the compiler prove that your code is free of certain commonly exploited classes of security problems (e.g., XSS and other injections).

Without them, you're on your own. And, for almost all values of you, that means you are pretty much guaranteed to have serious holes in your software. It's not like you can test these holes away. If you don't realize that you need to escape string X into the language of string Y before combining X with Y, do you think you're going to know to write the test that checks for whether you failed to escape X?

EDIT: minor tweak for clarity.


This is a good use of types; one of my favorites, perhaps.

It's also one I implemented with relative ease in Python. The error is deferred, but other than that the implementation works equally well. Once again the question is whether up-front errors are worth static typing, and once again the answer is a preference.
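For concreteness, here is a minimal sketch of what such a Python implementation might look like (my reconstruction; the type and function names are hypothetical, not the commenter's actual code). The check happens when the values meet at run time, rather than at compile time:

```python
# Tag strings with their provenance; misuse surfaces as a run-time
# error when the values meet, instead of a compile-time type error.
class UserInput(str):
    pass

class Html(str):
    pass

def escape_for_html(s: UserInput) -> Html:
    return Html(s.replace("&", "&amp;")
                 .replace("<", "&lt;")
                 .replace(">", "&gt;"))

def insert_into_output_html(fragment):
    if not isinstance(fragment, Html):
        raise TypeError("refusing to emit unescaped text")
    return "<p>" + fragment + "</p>"

raw = UserInput("<script>alert(1)</script>")
print(insert_into_output_html(escape_for_html(raw)))  # safe
# insert_into_output_html(raw) would raise TypeError at run time.
```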


But deferring the error means the implementation does not work equally well: it sacrifices safety. When a programmer tries to combine strings X and Y that are incompatible, it's because he misunderstands what X or Y actually represents. This misunderstanding is detected when X and Y are brought together, but that doesn't mean it can't also have corrupted upstream code that also uses X or Y, just not together. This upstream code never gets a chance to run if the downstream problem is detected at compile time. But, if you only catch the problem at run time, after the upstream code has already executed, that corrupted upstream code could have launched the missiles. Run time is too late.


Maybe Haskell is the exception, but I find it hard to believe that a compiler can do that for you. IMO, the only way to have a decent shot at being safe from XSS is to sanitize your inputs with a whitelist.


The compiler alone cannot do this, but the compiler combined with judicious use of the type system can. This is really the big value-add of Haskell/ML-style type systems; you can fit so many useful constraints into the type system that you can enlist the compiler into proving that you haven't made various common mistakes. If you can formalize an application logic error into a type error, you can use the compiler to enforce a wide variety of application logic error checking at compile time.

A simple example: suppose that every web form input is mapped into an "UnsafeUserInput rawtype" (that is, an algebraic data type parametrized over the raw input type). By simply ensuring that your output code only works on "SanitizedUserInput" types, you enforce that at some point in the code you apply a sanitization function to any unsafe input. Now the compiler is working for you, checking that your overall data flow respects the constraints you've encoded into the type system. In this style of programming, the type system becomes a way of encoding your desired integrity constraints inline with the functions and data types rather than outside the application logic in test suites.


http://blog.moertel.com/articles/2006/10/18/a-type-based-sol... is a very good read on the subject. Basically, the compiler can have different types of strings, depending on where they come from and how they are created. It can then make sure that different types of strings are never mixed without an explicit conversion operation. It is up to the programmer, of course, to see to it that that conversion operation involves sanitising, but the compiler will make sure you don't do

    string a = get_input_from_user();
    string b = sql_lookup(database, a);
    insert_into_output_html(b);
requiring instead

    UserInputString a = get_input_from_user();
    SQLResultString b = sql_lookup(database, ConvertUserInputToSQLQueryInput(a));
    insert_into_output_html(ConvertSQLResultToHtml(b));


Yes, a compiler can do that for you. One example:

http://blog.moertel.com/articles/2006/10/18/a-type-based-sol...

(That was written over six years ago. Today, there are even better solutions that don't just apply static types to strings but to the language fragments within them.)


A compiler could check that all inputs that are possibly from a user are sanitized or explicitly marked as being unclean / clean.


It's important to realize that safety is not a property of a string but a property of the relationship between a string and a use context. Thus all "solutions" that rely upon marking strings as clean work for only one kind of context. If you want a general solution, one that works for all injection problems, you have to be able to encode the full relationship model into the type system.


The author again falls into the strong vs. weak, and static vs. dynamic trap without a solid understanding of the terminology. As per usual, the widespread misuse of the terms has made them useless. It is worth reading from a more rigorous source, such as Luca Cardelli: http://lucacardelli.name/Papers/OnUnderstanding.A4.pdf. Or ignoring them, as Benjamin Pierce says: "I spent a few weeks... trying to sort out the terminology of "strongly typed," "statically typed," "safe," etc., and found it amazingly difficult.... The usage of these terms is so various as to render them almost useless."


Incredibly awesome read. Thanks for sharing this!


I never got that impression from that comic. It's not funny because some people are idiots, it's funny because it's revealing a language barrier. The punchline is about typing discipline wars, not about camp B's fragile ego.


Agreed. I remember i read that comic a while ago and i didn't get the impression that the author was trying to mock dynamic typing proponents at all. The interpretation i got from the comic was more on the lines of "as most of the people who know about type category prefer static typing, discussions on that subject tend to be very uninteresting".

And another possible interpretation is that the comic just expresses the author's preference (pun intended) of type category as a topic of discussion and thus finds the "dynamic vs static language" discussions uninteresting (because, let's face it, they are almost never about type category).

My interpretations might be completely biased though, as the first comic from CCC that i read was this one (http://ro-che.info/ccc/18.html), which could have caused a personal bias because of the good first impression :)


My suspicions only grew when I read the first comment thread on that post. It's a bit disturbing, but the actual slights all seem to be coming from the guy imagining being slighted.


This is, to me, a really poignant post. I think there's a level of civility that sometimes can be lacking in our daily lives as engineers, and markcc does a good service by writing well about it.


The inability to distinguish static/dynamic from strong/weak isn't a minor mistake like fluffing up some grammar; it betrays serious ignorance of the subject material.

He's very opinionated and doesn't seem to know the subject material very well. He's just expressing his opinion slightly more loudly and eruditely than others.

I don't see the redeeming value here.


I don't think he's confused about the distinction between them. There's one sentence in the beginning where he probably meant "dynamically typed" rather than "weakly typed", but for the rest of his post, his terms are chosen correctly. I'd rather give him the benefit of the doubt, than dismiss his opinion because of what might just have been a typo.


That's because, as engineers, we believe there is One True Answer to Everything(TM). Since there is one true answer, there is no room for shades of gray. This is one of the things I've noticed changes most as engineers age: the world becomes less black and white and much more full of color.

I don't miss the "I know the right answer" egotism I had in my 20s!


Still, there does exist one true answer to everything. The problem is that truly understanding the question and working toward the correct answer is far harder than anyone can anticipate, for any task above super-easy.

I dislike the "shades of gray/full of color" vs. "black & white" framing because, IMHO, it creates the idea that there exist several competing and contradictory correct answers to things. I think there is still one pure solution, but several *approximations* of it that look different because of our limitations and/or limitations in the tools we use (in computers, the final solution is expressed in assembler/bits; languages are an illusion!).

But in the end, and with age, I think it becomes clear that some questions beg for better answers that our current approaches are poor at providing. Working all the time with Python, Obj-C, SQL, Delphi/Pascal, JavaScript, CoffeeScript, FoxPro, and HTML, I have the constant grief of "what if I could do this, instead of what I must do, because I can't with this tool!"

However, I firmly think it is super important to use several, opposite languages, to truly discover better ways to do things, to learn more effective, faster, more expressive approaches to a solution, and to broaden the selection of tools for solving a problem. For example, when I use C# or Obj-C, I tend to look for the solution in Python (my personal experience has taught me that Python folks have the super-easy-to-understand solution to anything). I'm grateful to have learned FoxPro first; it's my secret sauce for being better with SQL databases. I would kill to have LINQ in other languages. How I wish I had learned Haskell earlier. How I hate the verbosity of anything except Python. Why does Python have null exceptions? And so on.

It is not, I think, the true realization many claim of "shades of gray", but instead: look, understanding life, the universe and everything else becomes simpler when we have the telescope, the microscope, the radioscope, math, physics, art, music, literature, science fiction, and the other boring but useful sciences...


This is analogous to saying that customary units aren't better in an absolute sense than metric units, which is obviously true--driving through Jersey, I'm not going to insist that the gas station attendant give me 10 liters of gasoline, because the metric system is "just better." OTOH, it's stupid to encourage people to keep using customary units because "everyone should decide which units are best for themselves."

Getting literal again, if you think static typing is a PITA, you probably haven't used a modern language with type inference. There is really no downside to it anymore.


While I think the analysis of the comic is probably correct, both in terms of how it is intended and how it is most frequently read, I think the described relationship (horrible abuse of Venn diagrams notwithstanding) would hold without any negative implication about dynamic typing proponents: most programmers don't know much type theory - those motivated to learn will be those with a preference for static typing.


Static Typing: Hurry Up and Compile!

Dynamic Typing: Crap, did I really miss that?


Hurry up and compile?

I think a lot of the time, features of languages are highlighted because they make it easier or faster to write code. And so it should be. But a huge factor in this is IDE support. Those people sticking to their various flavor-of-the-month text editors don't understand this.

Here are some examples from my own work. I used a lot of Java, and it's annoying that everything needs to be defined; if you change a definition, there is a large amount of work to do. And it takes time to compile. This would be terrible except if you use an IDE, for example Eclipse: Eclipse has an incremental compiler; it keeps your entire project compiled at all times. If you change a line of code, the incremental compiler analyzes what must be re-compiled and does that in the background. This is so fast you won't ever notice it. The effect is that there is no compile step, ever. Compiling is a non-issue.

Same for strong static typing: the IDE allows one to just write code, like a = "foo", and deal with the consequences later. It will ask you whether you want to define a as a local or instance variable, or, if it already exists, whether you want to change the type to string. Then it makes all necessary adjustments across the entire project. De facto you have the advantages of both static and dynamic typing.

The IDE therefore is very effective at eliminating perceived weaknesses of languages, with code generation, inference, refactoring, intention guessing etc.

Another example is Objective-C. I was about to lose my mind with the sheer verbosity and redundancy of this language, then along came AppCode IDE and removed all the problems.

It's like there is a pure programming language out there--Ruby is probably closest--which allows one to express code most succinctly. The most expressive pure language. But all the others can be made to come very close to that with IDE support. It's not a coincidence: computers can automate all mindless, repetitive tasks, and if you take those away from any language, what is left is pure logic.


Well, we can agree that Java is not in any way expressive; as a result, there are few intermediary steps that would slow down the compilation process.

Which is interesting; there are actually 3 issues at play: 1) dynamic languages: slow runtime cost, but expressive and fun to work with; 2) static (simple) languages: fast compilation, but soul-draining; 3) static (complex) languages: slow compilation, but expressive and fun to work with.

Here are examples of 1, 2, and 3: Ruby, Java, and Scala.

I'm in the latter camp, thus the hurry up and compile! ;-)


The compilers of some statically typed languages (Turbo Pascal, D, Go, OCaml) are extremely fast.


... and in general, type-checking is simply not something that slows compilers down.

[C++ is something of a red-herring, I think: although it's one of the most famous statically-typed languages, and has a reputation for slow compilation, that has much more to do with its include-based model than it does with static typing.]


I think dynamic typing is better for explorative programming, for getting something working quickly, and then you iterate. Static typing requires a little more upfront design - i.e. understanding what you're doing before you start.

I've thought that present web development is in constant flux, as businesses figure out how to get their data online; frameworks etc are also in flux. And so dynamic typing is beneficial. Then, once we've established the best way to do things, static typing will again become important...

But it could be that development - change - has accelerated, and will only get faster. In which case, whatever gets you to working, to iterating and adapting to change, will be favoured. Which is a pity, because I like static typing.


Like the OP you are confusing static with strong and dynamic with weak; static typing has nothing to do with declaring types but with the compiler doing type-checking at compile-time, whereas dynamic only implies this type-checking is done at run-time.


A simple counter-argument:

Why is Smalltalk (an archetypal 'dynamic' language) so darned easy to work with, and (very importantly) get right?

Is it coincidence that much research in modern programming languages and concepts is being conducted using Smalltalk-like environments? (Examples: http://scg.unibe.ch/research/helvetia, http://www.viewpointsresearch.org/)


Arguments like this keep getting rehashed purely due to personal bias. There is no universal answer, use the right tool for the job.


Don't "throw away" your type information, preserve it in unit tests (which are typically far easier to write in a dynamically typed language).


I also know who drew the linked comic: Simon Peyton Jones, the only person on planet Earth who still makes use of the Comic Sans font in 2013.



