JSON Schema (json-schema.org)
153 points by based2 on Feb 18, 2018 | hide | past | favorite | 98 comments



I was unaware of the usefulness of JSON Schema until recently, when I was writing some Vega-Lite JSON specs in VS Code.

I don't have any relevant JSON extensions, but VS Code started auto-suggesting values for options (e.g. a drop-down for whether the chart type is 'bar', 'line', etc.) and pointing out mistakes like 'the size should be an integer, not a percent'.

This was the lightbulb moment for me - as if by magic, my code editor was double-checking that my JSON document was valid and helping me write it. Amazing, and super useful.

It took me a while to work out how it was doing it, but all Vega-Lite examples start with the magic:

"$schema": "https://vega.github.io/schema/vega-lite/v2.json".


This light bulb is exactly why many people prefer statically-typed programming languages, especially above a certain codebase size/complexity and team size.

Make my tools do the grunt-work for me, thanks, so I can focus on the actual problem domain.


> This light bulb is exactly why many people prefer statically-typed programming languages, especially above a certain codebase size/complexity and team size.

I agree, with the caveat that sometimes you need an escape valve.

Basically I want static typing 99% of the time and I don't want guff from the compiler (or fanboys) in that 1%.


C# and Typescript are great at this, and a great combo together.


That’s what (unchecked) exceptions are for.

They’re supported by all general purpose programming languages because we — as developers — are smarter than the compiler.


1. GP is talking about type safety in general, not just exceptions.

2. Not “all general purpose programming languages” support exceptions. To name a few that don’t: C, Rust, Go.


Go has the empty interface, if you really need it.


The empty interface is not an unchecked exception, but panic() is. https://golang.org/pkg/builtin/#panic


I don’t actually know Go, but can panic() be caught? If not, it’s not an exception.


With recover(), yes.


But this is about application or user specific config.

In my experience, those tend to be JSON, YAML, or plain terminal args, regardless of whether the language is typed or not.

Am I missing something?


https://jsonschema.net may help you get started without the pain of writing boilerplate code.

Unfortunately it doesn't perform validation, nor implement all validation annotations - but it definitely helped me get started.


I'm trying to find time, among many blockers that I have, to take a look at Vega Lite - as another alternative/addition to Kibana or Grafana. Last night I also found DejaVu - https://github.com/appbaseio/dejavu - and yes, JSON Schema comes up often enough, since we moved a lot of our configuration there. Hated or not, XML had the sanity of XSD...



Is there something wrong with just using TypeScript's type system for expressing the required format of JSON data that would make me want to use the significantly more verbose JSON schema format instead? I've written a couple of code generators (e.g. [1]) in the past which use the former approach and found it a very convenient way to express my models.

For example this:

    {
        "title": "Person",
        "type": "object",
        "properties": {
            "firstName": {
                "type": "string"
            },
            "lastName": {
                "type": "string"
            },
            "age": {
                "description": "Age in years",
                "type": "integer",
                "minimum": 0
            }
        },
        "required": ["firstName", "lastName"]
    }
can be expressed like this:

    interface Person {
        firstName: string;
        lastName: string;
        age?: number; // Age in years
    }
I can see that there are additional constraints expressible, such as the fact that age has to be an integer and has a minimum value. However those could all conceivably comprise a superset of TypeScript's interface definitions. JSON Schema just feels to me like XML Schema all over again.

[1] https://www.npmjs.com/package/jsonidator


TypeScript provides, primarily, a type verification system.

JSON-schema provides a type assertion system.

Both serve as type documentation systems.

TS says "if you create objects in my world, I will use my type system to verify that you're creating the right kind of objects, and you can look up their definitions to see what's in them". JSON-schema says "if you give me external, un-annotated objects, I can tell you whether they comply with a type definition, and you can look at that definition to see what's in them".

TS has a limited assertion facility, but it's not nearly as capable as JSON-schema. Projects like io-ts (https://github.com/gcanti/io-ts) add more functionality to those assertions, but they're still much less easy to use, in my opinion, than JSON-schema for this purpose. If you're using TS types to assert, though, you can definitely describe more complex constraints, so it may make sense as an alternative to JSON-schema for some people.

As others have pointed out, JSON-schema is also much easier to use on non-TypeScript platforms than TS assertions, which, even if you use an assertion library like io-ts, will still be limited in use to JavaScript (and JS-to-$other_language runtimes).


A dependent type system can do both.


...yes it can?

Would you expand on how dependent type systems are relevant to the differences between TypeScript/JSON-Schema, or how the use of a DTS could help ameliorate the issues people have in those areas?


Not in an HN comment, but here's a paper called Power of Pi https://cs.ru.nl/~wouters/Publications/ThePowerOfPi.pdf that probably explains it the best. It shows the usage of it for deriving a parser and generator from a single specification, and also crypto protocols and database usage. It uses Agda for its examples, but it can be translated to Idris as well.


Because json-schema is language agnostic. You can have a schema that can be shared between servers/clients with different languages. If you want to do it the TypeScript way, then you can generate the types from json-schema (probably via an automated tool)


json is also based on javascript, but language agnostic. You could also specify a subset of typescript, just like json is a subset of javascript.


I built a tool to generate JSON Schema from typescript interfaces: https://www.npmjs.com/package/ts-json-schema-generator

The reasons for having the schema are that it is language agnostic and there is wide tool support. For example, VS Code supports autocomplete and tooltips when you use JSON Schema.


There is nothing wrong with Typescript definitions if that works for your use case. Json schema is more flexible:

- it can do checks at runtime (to validate untrusted input, for example)

- there are libraries for many other programming languages

- it can express constraints on values as well as on types (size of arrays, string regex matching, integer value ranges, ...)
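To make the runtime-check point concrete, here is a minimal hand-rolled sketch of what a JSON Schema validator does with untrusted input. It covers only a tiny subset of keywords (type, required, minimum, pattern); real libraries such as python-jsonschema or ajv handle far more and report richer errors.

```python
import json
import re

# Tiny subset of JSON Schema type keywords, for illustration only.
# (Note: bool is a subclass of int in Python; a real validator special-cases this.)
TYPE_MAP = {"object": dict, "string": str, "integer": int, "number": (int, float)}

def validate(instance, schema):
    """Return a list of error strings (an empty list means valid)."""
    errors = []
    expected = schema.get("type")
    if expected and not isinstance(instance, TYPE_MAP[expected]):
        return ["expected type %r" % expected]
    if expected == "object":
        for key in schema.get("required", []):
            if key not in instance:
                errors.append("missing required property %r" % key)
        for key, subschema in schema.get("properties", {}).items():
            if key in instance:
                errors += ["%s: %s" % (key, e) for e in validate(instance[key], subschema)]
    if "minimum" in schema and isinstance(instance, (int, float)):
        if instance < schema["minimum"]:
            errors.append("value below minimum %s" % schema["minimum"])
    if "pattern" in schema and isinstance(instance, str):
        # JSON Schema patterns are unanchored, hence re.search rather than re.match.
        if not re.search(schema["pattern"], instance):
            errors.append("does not match pattern %r" % schema["pattern"])
    return errors

schema = {
    "type": "object",
    "required": ["firstName", "lastName"],
    "properties": {
        "firstName": {"type": "string"},
        "lastName": {"type": "string"},
        "age": {"type": "integer", "minimum": 0},
    },
}

# Untrusted input arrives as a JSON string, e.g. from an HTTP request body.
ok = json.loads('{"firstName": "Ada", "lastName": "Lovelace", "age": 36}')
bad = json.loads('{"firstName": "Ada", "age": -1}')
print(validate(ok, schema))   # → []
print(validate(bad, schema))  # missing lastName, plus age below minimum
```

A statically typed language can't give you this for free, because the data only shows up at runtime.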


No, there's nothing wrong with that and it's usually significantly more readable than JSON Schema. Good idea!

Of course there are use cases that JSON Schema works better for (eg most other replies you got) but I fully agree that TypeScript type definitions are a great way to specify the structure of JSON data.

To make it feel more language-agnostic, you could consider using type aliases instead:

    type Person = {
        firstName: string,
        lastName: string,
        age?: number // Age in years
    }
This is equivalent to the interface you described, but potentially less confusing to readers who don't know what an "interface" is (eg because they mostly used Ruby or C++). You don't even need to tell people that it's TypeScript - this could be a perfectly sensible schema language for JSON. A bit like what RELAX NG[0] is for XML.

At my company we actually use typescript-json-schema[1] so we can specify the structure of our API payloads in TS instead of JSON Schema and still benefit from JSON Schema's better tool support.

[0] https://en.wikipedia.org/wiki/RELAX_NG#Compact_syntax

[1] https://github.com/YousefED/typescript-json-schema


So, in typescript a number is unsigned by default? If not, your typescript representation has lost some information compared to the JSON schema example, and would theoretically allow for negative ages.

Additionally, what you've chosen is a very simple example, and even in that case you're having trouble correctly mapping the types to what is expected. What about a slightly more complex one[1]?

    {
        "$schema": "http://json-schema.org/draft-06/schema#",
        "type": "object",
        "properties": {
            "/": {}
        },
        "patternProperties": {
            "^(/[^/]+)+$": {}
        },
        "additionalProperties": false,
        "required": [ "/" ]
    }

 1: http://json-schema.org/example2.html


AFAIK typescript interfaces don't exist at runtime, so you can't validate incoming JSON with type any.

Also you can't really hand a third party a typescript interface and tell them to conform with that.

And finally, I would imagine a good schema could enforce more things like the length of identifiers, whether a number can be negative, etc.


> AFAIK typescript interfaces don't exist at runtime, so you can't validate incoming JSON with type any.

That's mostly true, but some projects exist that allow a subset of interfaces to be used at runtime:

- https://github.com/gcanti/io-ts

- https://github.com/fabiandev/ts-runtime

- https://github.com/codemix/flow-runtime

- Or just manual assertions: https://gist.github.com/JohnWeisz/beb7b4dadc512be30ce6c7c1e4...


It is considered essential that JSON schema is expressible as JSON. There are advantages to that, but it could be only a wire representation instead, with some more concise syntax alternative.


> It is considered essential that JSON schema is expressible as JSON

Why? It's not a particularly difficult task to write a parser that could convert between the two representations. The TypeScript compiler even provides an API that lets you access its AST.


Well, here's the thing, you can write a parser to translate that TS into JSON schema format and that works.

At my job, we're currently running JS on the back-end and TS on the front-end. JSON-Schema can be used in both. We're also rebuilding our core functionality in Elixir, and guess what, JSON Schema works there, too, out of the box.


JSON schema can be the representation used to exchange information between two or more systems, languages, projects, etc. It's a form of lowest common denominator, something that can (somewhat) fully describe your interface but in a static, commonly available, easily parsed, easily generated way.


For a long time, json-schema was stuck at draft-04, and it looks like in the last few months under new authorship it's up to draft-07. The mapping of these names to IETF RFC revisions is more complicated, and explained on the site [1].

Json-schema is useful for describing JSON data structures in JSON, so you can bring a codegen and generate corresponding structures in your language, or perform some payload validation at an API gateway; and many higher-level API specs -- like OpenAPI/Swagger -- make use of Json-schema underneath.

However, I'm not sure I fully understand why they branched out into also becoming a hypermedia description language [2] when there's already half a dozen others in this space [3], and reading some of the discussion about this [4] makes my head hurt.

[1] http://json-schema.org/specification.html [2] http://json-schema.org/latest/json-schema-hypermedia.html [3] https://sookocheff.com/post/api/on-choosing-a-hypermedia-for... [4] https://github.com/json-schema-org/json-schema-spec/issues/4...


This is what I'm using for schema validation in FormAPI [1]. The best part about using a standard is that there's a lot of open source libraries you can use. It's really easy to validate a schema in Ruby [2]. I use json-schema-faker [3] to fill out PDFs with test data. I also use AlpacaJS [4] to generate HTML forms based on the JSON schema.

Anyway, I'm really glad I didn't try to roll my own thing for schema validation. Alpaca is insanely powerful, and it would have taken me weeks or months to build the online forms from scratch.

[1] http://formapi.io

[2] https://github.com/ruby-json-schema/json-schema

[3] https://github.com/json-schema-faker/json-schema-faker

[4] http://www.alpacajs.org/


We have a similar setup for extending our app with custom fields / forms using ruby-json-schema and react-jsonschema-form [1].

ruby-json-schema has worked out quite well, we added a custom schema reader so that we could generate enums dynamically based on records in the database via custom refs eg. app://schemas/organizations?type_eq=education&tags=one, any of these internal refs are inlined at the API layer.

One issue we did run into with ruby-json-schema is that the schema caching is not thread safe. We initially opted to clear the cache after each validation, but this ended up causing race-condition issues. In the end we had to use a mutex on write and cache the validation result (not ideal, but our app isn't that write heavy).

[1] https://github.com/mozilla-services/react-jsonschema-form


Creating a JSON schema for your data and committing it is a great way to codify assumptions about the data consumed or produced by your program. Once you have a JSON schema for your data, you can generate models and serializers in many languages with quicktype: https://app.quicktype.io#s=coordinate


Reminds me of XSD - the XML schema definition language that itself is XML. Any takeaways (good/bad/ugly) from that experience that are applicable here?


My first thought on seeing the headline was "continuing to reinvent the XML ecosystem...in JSON"


Do you consider that a negative?

I personally think it's great. The faster the ecosystem is reinvented the sooner we can stop wasting our lives reading it.


It just strikes me as a circular development pattern.


XSD is only part of the value-offering... XSD + XSLT is the value provider for format heavy, robust, declarative, document transformations.

JSON-schema solves many of the validation issues of JSON in the absence of an XSD-for-JSON solution, to allow message processing and other validation assertions. From that perspective it's 'duplicated effort', but only to the extent that JSON messaging applications are maturing to the point they need the kinds of guarantees that were so clear during the development of XML/SGML. It's a whole different language supporting a range of JSON-powered clients that don't necessarily require transformation support or other guarantees.

This is about JSONs ecosystem maturing to the baseline, not about reinventing the wheel :)


It's not circular, it's what maturation looks like. Yeah, Json's great - lots of people are using it for lots of things. You could argue it's lack of being schema-bound is a great advantage, and you'd be right, and wrong, depending on your application.

This just serves to provide a way to say "yes, we can have strongly typed schemas as well as our nice succinct format" to those who need it.


XSD gets thumbs up from me.

Generate useful docs. Easy to read to get a handle on data structure. Also can be used as runtime contracts. somewhat similar but v. limited compared to Racket Contracts and Clojure spec.

Not to mention super tooling for interacting with schema (like XMLSpy)

In conclusion, having schemas for data structures(especially at boundaries) is absolutely necessary to maintain control & visibility over data.


Find a few applications where XSLTs are used for mission critical stuff... XSDs are a Good Thing (even if working in JSON is nicer than XML).

Data schemas are the first step in powering transformations, which is where most of the Enterprise is bound up.


Hi all! I'm one of the new team that has worked on JSON Schema since draft-4.

I could spend time addressing each issue raised here, but that could take some time...

@niftich We've (the new authorship) actually been working on JSON Schema since late 2014!

If you're using JSON Schema, or are interested in finding out more, we DO now have a Slack server; invite link at the bottom of json-schema.org. Come talk with us.

We're looking for companies who are using it in production to weigh in on new developments and releases. Our current release plan means we will have an RC with time for feedback. We want to hear your issues, comments, and suggestions!

One common problem at the moment is the level of support. Because draft-4 has been around for a LONG time now, it became the de facto standard, and has the most support. Developers come along and use an older library which only supports draft-4, but want to use new functionality found in draft-7. Many implementations support draft-7, and some even implement new functionality before it's finalised. So remember to check which version of JSON Schema the library you're using supports.

HyperSchema already existed at that point, so we inherited it. It focuses on providing a specification for correct HTTP usage and HATEOAS-driven APIs, whereas other higher-level API specification languages allow you to describe any type of API.

It looks like a good number of you have good questions, ideas, or issues. Thanks for all those replying supporting JSON Schema and those showing a balanced viewpoint.

Come chat with us!


This is what mongodb picked up for schema validation in its last release: https://docs.mongodb.com/manual/core/schema-validation/


This was incredibly handy when I was building a Slack bot that helped my colleagues figure out what microservice did what, since they often had puntastic names and there were enough that it was hard to keep track of (https://github.com/leemachin/wtf-is). It made it really easy to validate the structure of the metadata file and return errors that made at least some sense to the person trying to integrate it.

It's not particularly friendly to the maintainer compared to switching to a statically typed language, but it would have taken a fair bit more effort to wire that up without this.


Does anyone actually use JSON Schema for real?


I used it a few years back when designing a customer-facing integration API; it was definitely good to be able to tell customers "here's the JSON schema, you can use these libraries to work with it, and use it to validate your requests for correctness before sending them to us". This said, I felt expressing non-trivial constraints was quite verbose (things like: if field X is set to 123, then fields Y and Z are mandatory, otherwise Y is not allowed; if X is set to 456, then no other field is allowed) and hard to extend once written. I definitely felt the need for a "JSON schema generator" at times, where I could've created a table of some sort and had it output the schema.

In another much more generic customer facing API I instead used a custom schema, as I needed to do other things that would not have been possible in it like embedding the API documentation in the schema so the front-end could display help on-the-fly and use the schema to dynamically generate forms. The advantage was also that being fully in control of the syntax I could create ways to express constraints like field X is valid only in searches, field Y is valid only in PUT/PATCH but not POST, etc. etc.

This said especially for straightforward APIs I feel JSON schemas are quite useful, being able to have a single source of truth on what your messages look like that you can share with front-end, back-end and customers is quite powerful. One thing is though that validation errors are not very user-friendly unfortunately. This was all around draft-04 time IIRC.


Yes, Snowplow uses JSON Schema extensively for event definition and validation (https://github.com/snowplow/snowplow/).


We used to, the idea being we could reproduce validation in JavaScript and Python. It kind of worked but it wasn't that clean. Ended up switching the server to Marshmallow schemas and just having more custom logic on the client side. It's a classic DRY ... this is where "duplicate" code was better for more control and less shoehorning.


We use it for validation for data that goes into our Postgres JSONB fields. It's a nice middle ground from forcing some unstructured data to be serialized/deserialized from protobuf to enforce its schema and some half-baked server side validation that lives in the application code.


Heroku does, and has built some nice open-source tools around it: https://blog.heroku.com/json_schema_for_heroku_platform_api


Yep, I use it heavily in FormAPI (https://formapi.io). One of the core features is schema validation, to make sure that PDFs can only be generated with valid data.


Yes, found it highly useful for a JSON-based configuration with a GUI for editing (towards not so technical users). The GUI was fully based upon the schema (i.e., dynamic by using things like the descriptions, types, and constraints for displaying / input validation).

The only thing that is counter-productive is the different schema iterations; at the time v3 and v4 have been different in some key areas. Haven't looked into v7 yet, but I hope its not "breaking as much" as v3 vs v4 did.


We did at one company I was at to do some code generation (specifically in Java[0]). Since we never ended up using it to generate client libraries the gains weren't as significant as originally planned.

[0]: https://github.com/joelittlejohn/jsonschema2pojo


Yes. A library I write (Keras-RCNN) uses it to validate input before starting the expensive augmentation process.


Yes, but many of the implementations are a nightmare to use if you have a complex schema and you want to return a meaningful, useful error to API users. To be fair, there can be ambiguity making it difficult for the library to return the correct error such as using "oneOf". Two things make a difference.

1. if/then/else which was added in Draft-07.

2. Custom errors. For Node, I use ajv and ajv-errors.

With the combination of if/then/else and custom errors, I have found that I can always return a single, useful error to the API user regardless of the schema complexity.
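For readers who haven't seen it, the draft-07 if/then/else construct looks roughly like this (a sketch of the pattern; the property names are made up for illustration):

```json
{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "properties": {
        "paymentMethod": { "enum": ["card", "invoice"] }
    },
    "if": {
        "properties": { "paymentMethod": { "const": "card" } },
        "required": ["paymentMethod"]
    },
    "then": {
        "required": ["cardNumber"]
    },
    "else": {
        "required": ["billingAddress"]
    }
}
```

Because the if branch tells you exactly which subschema applies, the validator can point at the one requirement that failed instead of a pile of oneOf mismatch errors.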


It's used to great effect by Vega and Vega Lite - see e.g. here: https://vega.github.io/vega-lite/examples/bar.html

See also my comment here: https://news.ycombinator.com/item?id=16407707


Yes, I use it to trigger actions on certain messages in one of our APIs instead of hardcoding them.

Incoming data -> is it a valid message at all? -> if it is a message of type x, hand it over to class X -> if it is a message of type y, hand it over to class Y -> etc

I found that to be a very easy way to set up a general-purpose processing pipeline.
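That kind of pipeline can be sketched in a few lines. Here the full JSON Schema check is replaced by a minimal stand-in, and the message/handler names are hypothetical; the point is the shape of the dispatch, not the validator:

```python
# Each "schema" pins the message's "type" tag to a constant and lists required
# keys; matching one selects the handler. A real setup would run a full JSON
# Schema validation here instead.
SCHEMAS = {
    "order":  {"required": ["type", "items"],    "tag": "order"},
    "cancel": {"required": ["type", "order_id"], "tag": "cancel"},
}

def handle_order(msg):
    return "processed order"

def handle_cancel(msg):
    return "cancelled %s" % msg["order_id"]

HANDLERS = {"order": handle_order, "cancel": handle_cancel}

def matches(msg, schema):
    """Minimal stand-in for validating msg against a JSON Schema."""
    return (isinstance(msg, dict)
            and all(k in msg for k in schema["required"])
            and msg.get("type") == schema["tag"])

def dispatch(msg):
    for name, schema in SCHEMAS.items():
        if matches(msg, schema):
            return HANDLERS[name](msg)
    raise ValueError("not a valid message")

print(dispatch({"type": "cancel", "order_id": 7}))  # → cancelled 7
```

Adding a new message type is then just a new schema plus a new handler, with no routing code changes.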


Yep, I used it to design and document a schema for a recent client of mine. What was really nice was the backend developer directly used the schemas to validate requests to our flask http server just by adding decorators to the routing functions. It was a great tool for us to coordinate API changes across front and back end teams.


Shameless self plug here, few months back I created a project https://www.contenttagger.com that gives you hints for Schema tags based on a parent Schema attribute. You can also validate it with Google devtools + Facebook graph.


Yes, we use it in Nakadi (https://zalando.github.io/nakadi/). It's an event broker platform based on Kafka, and we use json-schema for event validation.


I have done it a bit lately. To document to other companies what they can send or expect to receive. Trying to avoid having too much business rules in it, though, mostly just fields and their type. Compared to xsd for xml I find the tooling lacking.


Absolutely. When you have more than one team in different locations and cultures working on stuff in parallel you end up specifying your contracts. Many ways to do so, but nothing as binding as a JSON/XML Schema including samples.


Not since I discovered GraphQL. It's hard to imagine any other abstraction on top of REST APIs that packs as much robust functionality in something so easy to use.


I'm just curious - GraphQL is more of a query standard, while JSON Schema is structure validation. How could one use GraphQL for, say, validating input?


We export massive amount of JSON downloads for our customers, with often different features enabled. So for each customer we generate a different schema for validation.


Yes because it documents what should go in the corresponding JSON, and some editors will automatically add completion.


jsonmerge is heavily based on JSON Schema.

https://github.com/avian2/jsonmerge


My startup built an extended version of JSON Schema that also supports conditional logic and data references to power a customer-configurable Workflow system. Basically a lightweight way to script a series of tasks with more advanced validations and data pre-population. Works great, customers love it.


Could you provide a link to your startup's page? I'd like to check this out.


I built this same thing in the past month for automated questionnaire rendering in react, saving and merging the responses into word documents.


whoa what's up man?! Wickedfire paths cross in the most random ways haha


Hah, seriously man! I’ve seen you here a couple times, would love to catch up. Ping me? Email in my profile


This stuff is great. I used it to design an API in Python.

A request comes in and the wrapper checks for a schema. If it is not a GET request and no schema exists, it errors out even before the method is called. If one exists, the schema is validated automatically. If the target method is called, you can be safe assuming that the input is valid (in terms of type and whether it exists or not).

Same goes for output. Server will error out before it lets incorrect output be returned to the user. Great way to present what the application accepts to a developer. This is what it will accept and return, because this is what we use to validate input and output!
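A decorator-based setup like the one described might look like this sketch. The check helper is a drastically simplified stand-in for a real validator (e.g. the jsonschema package), and all names here are illustrative:

```python
import functools

def check(instance, schema):
    """Stand-in for a real JSON Schema validator; only checks object/required."""
    if schema.get("type") == "object":
        if not isinstance(instance, dict):
            raise ValueError("expected object")
        for key in schema.get("required", []):
            if key not in instance:
                raise ValueError("missing required property %r" % key)

def expects(input_schema, output_schema):
    """Validate a handler's input before the call and its output after."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(payload):
            check(payload, input_schema)    # reject bad requests before the handler runs
            result = fn(payload)
            check(result, output_schema)    # never return a malformed response
            return result
        return wrapper
    return decorator

@expects(
    input_schema={"type": "object", "required": ["name"]},
    output_schema={"type": "object", "required": ["greeting"]},
)
def greet(payload):
    return {"greeting": "hello " + payload["name"]}

print(greet({"name": "world"}))  # → {'greeting': 'hello world'}
```

The handler body stays free of validation noise, and the schemas double as documentation of what each endpoint accepts and returns.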


Anybody using this in production?

We (reluctantly) went with XML for an external facing B2B API, because JSON schema was stuck in draft and did not seem to be in widespread use. JSON without schemas would have been a nightmare.


We "use" this in the form of Swagger, i.e. a web UI that can fetch a specification of how an HTTP API works and generate a minimal interface for using that API from the specification. While the specification of the API is in "Swagger", the actual documentation of types used by the API is JSON-Schema.

The UI is okay; however, we've had two big downsides:

* The UI needs to be able to fetch the JSON schema document from the API; that means that it needs to correctly respond with CORS headers if you host the web interface on a separate domain. (We do, s.t. each API service doesn't need to repeat the mundane task of hosting it and keeping it up to date.)

* The schema needs to be useful. Far too often, people will document an API endpoint as taking "JSON" and returning "JSON"; this is true and correct, but not helpful. What keys exist? What are the valid values for those keys? etc. The author of the schema needs to take the time and thoroughness to document the API, and all too often either they do not (the workmanship is sloppy) or they can not (management provided deadlines that were too tight to get the job done right).

Neither of those are really problems with JSON-schema per se. You also don't need JSON schema to document a JSON API: you can always do so with prose, by using the type system of a language your devs are familiar with and providing a set of rules to map those structures/types to JSON object, etc.

My biggest nit with JSON schema: it's painful for humans to interpret. (And we actually do it in Swagger, which is in YAML, so it's slightly easier to parse and we get comments. But it's considerably verbose compared to most type systems; the upside is that it can encode more information such as descriptions, or valid ranges/values if the raw type like "number" or "string" doesn't suffice.)


I use that in production code for checking input for correctness.

I wouldn't say JSON without schemas is necessarily too problematic, I've used that a few years ago. Now I prefer JSON Schema for extra confidence and more convenience in pointing out problems.


It's such a shame that many of these standards ignore or are ignorant of the elementary and fundamental sum type. Instead I am left needing to employ some ad-hoc encoding using string tags or similar.


I agree wholeheartedly (I have so many use cases for a sum type!), but I think that opportunity was lost when JSON itself was created; JSON-schema, built on top of JSON, would have to make some assumptions about how to map any sum type onto JSON, since JSON doesn't natively support them. JSON-schema — reasonably, I think — doesn't.


Does oneOf not cover it?


oneOf looks like it could be used to encode sum types together with a custom string tag, but that's not the first class treatment that records get.
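The tag-plus-oneOf encoding being described looks something like this (a sketch; the "kind" tag and shape names are arbitrary):

```json
{
    "oneOf": [
        {
            "type": "object",
            "properties": {
                "kind": { "const": "circle" },
                "radius": { "type": "number" }
            },
            "required": ["kind", "radius"]
        },
        {
            "type": "object",
            "properties": {
                "kind": { "const": "rect" },
                "width": { "type": "number" },
                "height": { "type": "number" }
            },
            "required": ["kind", "width", "height"]
        }
    ]
}
```

It works, but the schema author has to invent the tag convention and keep the alternatives mutually exclusive by hand, which is exactly the "ad-hoc encoding" complaint above.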


Mark (https://mark.js.org) is a new notation that extends JSON with a type-name, which can be a more structured way to encode type info, instead of using custom string tag.


I'm just now becoming familiar with JSON Schema. It's very interesting. However, before I or my team knew it existed, we created this for filtering and validation, which has its own sort of schema: https://spireteam.github.io/whitelister/


Your site's CSS makes it unreadable on wide screens: https://imgur.com/a/WlYOA


Seems like a lot of work that a Google search would have obviated...


I can imagine uses, but I was looking for some kind of faq that answered what problem this was solving. Aren’t there other serialization specification formats that come with type validation built in (like protocol buffers and thrift), that can be serialized to json as well? Maybe I am confused about what this does.


These guys - https://cswr.github.io/JsonSchema/ - analyse JSON Schema and propose some mostly non-breaking adjustments.


JSON Schema is useful. Here is a post with a Python example of using it to validate some data:

Using JSON Schema with Python to validate JSON data:

https://jugad2.blogspot.in/2015/12/using-json-schema-with-py...


Techies are so f'ing useless. We keep reinventing every frikking wheel. I should get out of tech, it's depressing


What, JSON shouldn't have schemas because they exist for other formats? Or JSON shouldn't be used because XML is good enough?

While I agree that tech should reevaluate its sisyphean upgrade treadmill, I'm not sure how that relates here.


> Or JSON shouldn't be used because XML is good enough?

Mu. Neither JSON nor XML should be used, because S-expressions are good enough, and more attractive to boot.



It's Groundhog Day all over, and the cycles are getting shorter and shorter.

You'll love this talk by Robert C. Martin ("The Future Of Programming"): https://www.youtube.com/watch?v=ecIWPzGEbFc


I was looking for a comment that explained how json schemas are similar or not to xsd but your rant was the closest I could find amongst all the hype that this is great.

Did you have other technologies in mind or were you also thinking of xsd?


This is really the vapid comment you chose and not any of the comments that didn't immediately write off the technology but actually had a business problem to solve?


Please comment civilly and substantively or not at all.

https://news.ycombinator.com/newsguidelines.html


Alright. Just got frustrated. I've been programming since 5, but ever since I did 'proper' computer tech, it's been this back and forth game of:

* thin client vs fat client

* clientside / serverside

* all the same document formats

* layers upon layers upon layers of abstraction

* static vs dynamic languages

etc etc etc

my 286 was more responsive and would compile my games faster than most other things on my i7. It is plain insanity.



