John Resig: Introducing the GraphQL Guide (graphql.guide)
506 points by sbr464 on June 11, 2018 | 171 comments



We deployed our GraphQL API on July 4th, 2016, so it's been almost two years. Documentation sucked back then (and Apollo was non-existent), so we had to dig a lot up by ourselves.

I hope this book covers some topics that were real head-scratchers for us:

- DataLoaders

- Authentication example

- Unions and Interfaces are your friend. Use them early.

- Try to define your custom Scalars early (especially DateTime format)

- Return Connections (edges/node) instead of Lists, because you'll probably want to paginate at some point

- Folder structure (we redid ours 4 times lol)

- Naming convention (we redid ours 3 times lol)

- Subscriptions

After 2 years of using it and hacking on it, we're still impressed. Once you get past the learning curve and have set conventions, writing GraphQL is a lot faster and better. Define your types and some custom root queries, and you're done.
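To make the Connections bullet above concrete, here's a sketch of the edges/node/pageInfo shape with opaque base64 cursors. `toConnection`, `encodeCursor`, and `decodeCursor` are hypothetical helpers, not from any library:

```javascript
// Wraps an array slice in the Relay-style Connection structure
// (edges/node/pageInfo) using opaque base64 cursors, so clients can
// paginate instead of receiving a flat List.
function encodeCursor(offset) {
  return Buffer.from(`cursor:${offset}`).toString('base64');
}

function decodeCursor(cursor) {
  const raw = Buffer.from(cursor, 'base64').toString('utf8');
  return Number(raw.replace('cursor:', ''));
}

function toConnection(items, { first = 10, after } = {}) {
  const start = after ? decodeCursor(after) + 1 : 0;
  const slice = items.slice(start, start + first);
  const edges = slice.map((node, i) => ({
    node,
    cursor: encodeCursor(start + i),
  }));
  return {
    edges,
    pageInfo: {
      hasNextPage: start + first < items.length,
      endCursor: edges.length ? edges[edges.length - 1].cursor : null,
    },
  };
}
```

A client fetches the next page by passing the previous `pageInfo.endCursor` as the `after` argument, which is why switching a List field to a Connection later is a breaking change and worth doing early.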

The neatest thing is the schema validator we made: it compiles all the GraphQL queries from the frontend and mobile apps and validates them against the server schema. It really helped when we changed folder structure and naming conventions, to see if we'd break something on the frontend.

I can't vouch for this book yet, but I'll swear my life on GQL. It's been a real game-changer.
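For readers who haven't met the DataLoader pattern from the list above: the core idea is batching per-request key lookups so resolvers stop issuing N+1 queries. The real `dataloader` package is promise-based and flushes once per event-loop tick; this simplified, synchronous sketch only illustrates the batching idea:

```javascript
// Minimal sketch of the DataLoader batching idea: collect keys,
// de-duplicate them, and resolve them all with ONE backend call.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn; // (keys) => values, called once per flush
    this.queue = new Map(); // key -> list of callbacks waiting on it
  }
  load(key, callback) {
    if (!this.queue.has(key)) this.queue.set(key, []);
    this.queue.get(key).push(callback); // repeated keys share one fetch
  }
  flush() {
    const keys = [...this.queue.keys()];
    const values = this.batchFn(keys); // one call for all queued keys
    keys.forEach((key, i) => {
      for (const cb of this.queue.get(key)) cb(values[i]);
    });
    this.queue.clear();
  }
}
```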


For what it's worth, if your stack is Elixir/Phoenix (or if you're considering that stack), I can't recommend the following book enough (it covers the topics you mention).

https://pragprog.com/book/wwgraphql/craft-graphql-apis-in-el...

A nice thing about this stack is it supports the GraphQL Subscriptions protocol out of the box without the need to set up a separate pubsub server.


+1 for this book. This tutorial by one of the authors is a great introduction to the Elixir gql stack: https://www.howtographql.com/graphql-elixir/0-introduction/

Worth a try if you'd like to learn more about the stack or the author's pedagogical style before buying the book.


I'd be curious to hear more on your schema validator. We've been using GQL for a little over a year and have felt some serious pain when doing big shifts in our GQL layer and even just when we're trying to clean out dead code / assess which resources are being used in which FE apps.


Sure!

Our implementation is a little bit naive - it doesn't "clean out" or see which resources are being used.

On the frontend:

There's a script that globs all `.graphql` files and compiles them into a list of {query, variables}. This could probably be made better as a webpack plugin that compiles at build time, but this was a Friday project so our time was pretty limited. It was honestly good enough, since it caught a shitload of bad queries =).

Query contains the graphql query. Variables contains test arguments.

It saves them into a gigantic file (example: https://gist.github.com/maktouch/074339517a8da5128d62869356b...)

On the GQL server:

We take that file, parse it, loop over it, and pass each query through the validate function (https://graphql.org/graphql-js/validation/#validate)

(truncated example: https://gist.github.com/maktouch/e1a2955dfcca42541a41665a361...)

Ops:

The frontend and GraphQL server are each their own Docker image. We build both in parallel. When both succeed (in building and running their own tests), there's an extra build step that runs this specific test: it takes the gql-queries.json file from the frontend container, adds it to the GraphQL image, and runs the validate command.

You can do this by doing multi-stage builds in Docker (https://docs.docker.com/develop/develop-images/multistage-bu...). We don't push the resulting image - it's just there for testing.


We use apollo-codegen to generate flow types for queries.


Regarding folder structure, we developed saturn-gql [1] to help out with that. Have a look if you're still battling this.

[1]: https://github.com/electric-it/saturn-gql


Hey, that's cool! Thanks for that, will try it out on a new service/pet-project one day.

In our current service, I think we're pretty happy with the way our structure is. It's similar to saturn-gql, but with raw JavaScript instead of parsing with `graphql-tools`.


I'd love to read more about this. How did naming conventions and folder structure change?


As I often do with new-ish things these days, I've been quietly weighing what degree of investment I want to put into GraphQL for a year or so.

John's vote of confidence means a lot to me. The software industry has lots of abstractions that are different ways of arranging the chairs on the deck, shuffling the work around while providing marginal productivity magnifiers.

jQuery's been somewhat superseded because it isn't an application-organizing framework, but it's still one of the best-conceived and best-fitted abstractions for the problems it was meant to solve that I've encountered in my entire time working in software. If the person who conceived it thinks GraphQL yields productivity benefits, then I want to check it out.


I agree. I think it's good for the overall GraphQL community to have John's support. I purchased the book out of support for his work mainly, but I know it will be a good read as well.


I personally strongly believe so: http://artsy.github.io/blog/2018/05/08/is-graphql-the-future...

Don't get me wrong, some aspects of today's GraphQL are still a little rough around the edges, but I think it's a strong pointer towards the future of service integration.


Same here. John Resig's vote means it's worth investing the time in GraphQL.


It makes strategic sense. We've seen a slow process of JavaScript getting more and more responsibility, and effectively direct control over the data returned from services. So a specification that allows control down to property-level selection is a natural extension of this pattern.


>John's vote of confidence means a lot to me.

by selling a $80 book?


Considering the street cred John has earned for creating jQuery, it says a lot for him to endorse GraphQL and to even write a book about it.


It starts at $39. So just like a book?


Hate to complain about GraphQL in general and not specifically this guide, but I ran into two major issues with it. I really wanted to like it, but ultimately gave up on it until they fix some core issues.

First, I don't see how I can implement complex object hierarchies that are not rigid. Only List is supported as an unbounded data type; for some reason there is no unbounded map/dictionary data type, even with simple restrictions like requiring string keys. There is not even good advice on what the key/value pair naming convention should be across languages.

The other issue is that there is very poor support for more general include/exclude/where filters: the idea being arbitrarily complex join/filter clauses that function like a SQL statement with multiple joins and where clauses. You have to rely on implementation-specific conventions, like what Sequelize does, to get that, but not for arbitrary object graphs. Solve some of this and it might actually become as powerful as it is frequently claimed to be.


I'm a huge fan of GraphQL, and when I see negative feedback that lists some of the points you make, I can't help but think that you are using GraphQL in a way for which it was not designed.

The most common issue I see is when folks list "what about arbitrary complex join/filter clauses to function like sql statement with multiple joins and where clauses" which, in my use of GraphQL, is literally never something I would want. GraphQL is not SQL or designed to be an arbitrary data access layer.

One of the driving principles of GraphQL, taken straight from the spec, is: "Product‐centric: GraphQL is unapologetically driven by the requirements of views and the front‐end engineers that write them. GraphQL starts with their way of thinking and requirements and builds the language and runtime necessary to enable that."

IMO GraphQL is designed to start with the UI, and develop models that conform to the needs of the UI, not to start with the data and develop arbitrary joining and fetching strategies for that data.


Maybe I'm just not that well versed in GraphQL, but reading your comment left me confused.

What would the UI data requirements have to be? Aren't they often arbitrary data fetching driven by the user? Aren't the requirements of views what data the user wants to see and handle? Depending on what user chooses, the fetching of data will have to go deeper and deeper into the data store.

I seem to remember, perhaps mistakenly, that GraphQL came as a solution to fetch arbitrary data structures that the view needed instead of having to mix and match multiple RESTful endpoints.


Do your users construct their own UI? Or do they use a set of features provided by whomever built the app/website? The arbitrary data fetching is to allow rapid feature development, bypassing the need to explicitly code up per-feature endpoints. Instead the responsibility is passed to front-end developers to compose per-feature queries.


> the fetching of data will have to go deeper and deeper into the data store.

Facebook long ago mastered server-side performance for this sort of use case [1]. Minimizing the number of network requests a mobile app has to make (enter GraphQL) is more important than mitigating costly requests, especially on slow or unreliable networks where the bulk of time is spent waiting on bytes. Entire pages/screens can now be fetched in a single HTTPS request vs. multiple.

As an added bonus, it allows front-end engineers more autonomy and flexibility without having to involve back-end API changes to support ever-changing product updates.

[1] https://en.wikipedia.org/wiki/Facebook_Query_Language


I suspect that GraphQL corresponds more to Erik Meijer's "coSQL" formalism (see A Co-Relational Model of Data for Large Shared Data Banks). The advantages with respect to SQL would be an open-world assumption (easier distribution, no normalisation), the disadvantage would be the lack of arbitrary joins (which requires normalisation and indexes).


> I can't help but think that you are using GraphQL in a way for which it was not designed.

The confusion exists because nothing even remotely formal appears to have been written down or documented. I can only find references like "a query language", which tells all of us very little.


Also, since GraphQL is not meant to solve complex relationships and queries, a lot of that complexity will still remain in resolvers.

If GraphQL solved all querying problems between the front and the back end, it would be amazing, but it only solves the most naive problems.


There is an extremely detailed, formal spec: http://facebook.github.io/graphql/


Hmm, our API is pretty big and we've never had an issue that GraphQL couldn't solve (https://player.me/api/graphiql)

> I don't see how I can implement complex object hierarchies that are not rigid.

I'm not sure I understand what you're trying to do, but did you try the JSON scalar type? https://github.com/taion/graphql-type-json

With that, it just returns JSON, but you can't pick and choose which keys get returned.

We never had to use that though. We prefer to use interfaces or unions.


My understanding is that the JSON type stuff is a language-specific workaround and not part of the standard. I have .NET, Go, Node.js, and others, and while I could make any one of those languages work with specific changes, they would not necessarily interoperate for free or per the standard. I would hate to build heavily on workarounds for things that I feel should be built in. At some point we're just reinventing the wheel, but in a more fragile way. I don't believe the arbitrary JSON type was in the .NET version I was using, which is what would expose data to be consumed via Node.js (and other .NET clients). Frankly, it was easier to just implement a simple REST API that did exactly what was expected.

What I sort of want is to be able to craft a query against a large set of objects, like a customer/location/asset hierarchy, and filter on different levels like a SQL query, without it being in a database. I might have 10 different orthogonal dimensions with foreign keys that can be returned and filtered against. I'm trying to avoid pulling the complete object hierarchy only to filter out some of the data after the fact and then finally return the expected JSON format. Accessing some objects might be very expensive, so being able to filter data as part of the query request, rather than afterwards, would speed things up. These actions are usually quite hard to do in the libraries I've used. Maybe it's easier if you only use JavaScript/Node.js; I don't know, it's not what I needed to start with. I may have misinterpreted what GraphQL was for, if it is not for querying nearly arbitrary object graphs and returning the JSON that the client wants.


> My understanding is the json type stuff is a language specific workaround and not part of the standard.

The standard is made to be extended (https://www.apollographql.com/docs/graphql-tools/scalars.htm...)

For example, you should start with making your own DateTime scalar.
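As an illustration of what that scalar has to do, here's the serialize/parseValue pair as plain functions (with graphql-js you'd pass these to `new GraphQLScalarType({...})`; the function names are hypothetical):

```javascript
// The contract of a custom DateTime scalar: the wire format is always
// an ISO 8601 string, whatever the server language's native date type.
function serializeDateTime(value) {
  const date = value instanceof Date ? value : new Date(value);
  if (Number.isNaN(date.getTime())) {
    throw new TypeError(`DateTime cannot serialize: ${value}`);
  }
  return date.toISOString(); // e.g. "2016-07-04T00:00:00.000Z"
}

function parseDateTime(value) {
  if (typeof value !== 'string') {
    throw new TypeError('DateTime must be an ISO 8601 string');
  }
  const date = new Date(value);
  if (Number.isNaN(date.getTime())) {
    throw new TypeError(`DateTime cannot parse: ${value}`);
  }
  return date;
}
```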

> What I sort of want is to be able to craft a query against a large set of objects like a customer/location/asset hierarchy and filter on different levels like a sql query without it being in a database.

You can totally do this without raw json scalar. Actually, it would be better without it.

If you could go into more detail, I could make you a PoC.


The problem is that not every language implements DateTime the same; I had the same problem with protobuf and Thrift. I stuck to the ISO 8601 string format for standardizing time. I could circumvent the type system via an arbitrary JSON type, but I'd rather not. I use Dictionary a lot in .NET and would like those to be more available: in .NET, Dictionary objects become KeyValuePair arrays in GraphQL, which then assumes objects of that form, while in Python I think that would be a tuple. These are general language serialization interop problems, and the more the standard implementations help here, the better.

If I want to go Customer/Physical Site/Asset Level 1/Asset Level 2 versus Physical Location/Customer/Asset Level 2/Asset Level 1 as a general navigation through the data objects as a query, then I have to have both of those paths available in the schema or as custom queries. I'm sure I'm not explaining my issue properly, but I find it's related to how the graph traversal works; as the number of dimensions expands, this gets harder.

We looked at some of the Sequelize-generated queries via a Node implementation and were generally concerned about the ORM queries used. You can get by maybe with code generation or dynamic schemas, but then you start to lose ease of use, and some performance if you dump the whole schema that way. I'm sure I can work around each of these, but as a whole nothing really coalesced into something simpler to build and maintain overall. GraphQL as a standard is still useful, I think, for simple schemas that are broad and not deep; that's perhaps what I'm trying to say.


I'm starting to understand what you mean.

You are right: GraphQL, even though it has the word "graph" in it, is pretty weak at complex graph traversal. Actually, I don't think it was made with complex graph traversal in mind.

https://docs.dgraph.io/master/query-language/ uses graphql, but they really modified the language to make it work for complex graph traversal, to a point where the simplicity of it is gone.


> I stuck to ISO 8601 format string for standardizing time.

In my app, I have a scalar called ISO8601String, which is a custom scalar representing a date-time. The server outputs the correct serialized value, and clients are expected to deserialize it as per the documentation.

It's not automatic though. Every client app has to implement the custom scalar serialize/deserialize logic or else it isn't conforming to the schema that the server declares.

GraphQL isn't a global schema standard, inclusive of implementation details. It's specific to your application's servers/clients.

Someone else is free to have a scalar called ISO8601String, which demands totally different serialization.

Implementing a client/server pretty much requires reading the documentation of the schema. You can make a Map restrict itself to only string keys, but that conformance will have to be enforced by a custom type implemented on your server that rejects nonconforming objects.
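A sketch of what that server-side check could look like, as the parseValue of a hypothetical "StringMap" scalar (this is an illustration, not any library's built-in behaviour):

```javascript
// Accept only plain objects with scalar values; reject arrays, nulls,
// and nested objects. In JS, object keys are already strings, so the
// key restriction matters mainly for clients in languages with richer
// dictionary keys (.NET Dictionary, Python tuples, etc.).
function parseStringMap(value) {
  if (value === null || typeof value !== 'object' || Array.isArray(value)) {
    throw new TypeError('StringMap must be a plain object');
  }
  for (const [key, v] of Object.entries(value)) {
    const t = typeof v;
    if (t !== 'string' && t !== 'number' && t !== 'boolean') {
      throw new TypeError(`StringMap value for "${key}" must be a scalar`);
    }
  }
  return value;
}
```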


If you made a custom scalar, wouldn't you have control over how it's implemented across the different languages you choose? Truly curious. I haven't attempted to work with a broad range of languages for my own services.


Not the OP but I would appreciate the PoC anyhow - as I’m sure others would.


http://prisma.io solves that second problem quite well. It implements and handles the issues of complex queries for you with minimal effort


If you want an unbounded dictionary, define your own scalar type.


I'm skeptical that this book will ever be finished. The last time I bought an unreleased book authored by John Resig it took two and a half years before it was published. If I remember correctly another author had to take over in order to complete it. I'm super interested in GraphQL, but I wouldn't consider purchasing this ebook until it is 100% complete.


Anyone know what framework the book uses (if any)?

I tried doing a GraphQL project in Python/Django recently, using Graphene for the GraphQL bits. I didn't end up getting that far because the documentation was so lacking, as were examples generally (for graphene-python I mean; I could find stuff on GraphQL in the abstract).

I'm thinking about giving it another shot, maybe on node.js this time, largely because I get the impression that the supporting tech for it is more mature there—but I'd love to hear opinions on what the most mature lib for using it is at the moment (whatever language)!


Don't give up on Python! The eco-system is young, but it is there. Saturday just gone, I gave a presentation at PyLondinium [1] on "GraphQL in Python" (Slides: [2], video to follow). For the purpose of the presentation, I put together a simple server example [3].

[1]: https://pylondinium.org/ [2]: https://alexchamberlain.github.io/presentation-pylondinium-g... [3]: https://github.com/alexchamberlain/city-api/


The ecosystem is young?

Python is older than HTML, CSS, JavaScript, PHP, Java, C#, etc.


He means the Python+GraphQL ecosystem – and he's right, Graphene-Python and graphql-core are still evolving and changing their interfaces, and there's essentially no useful documentation beyond trivial examples.


It's node: `apollo-server-express`

https://www.apollographql.com/docs/apollo-server/


I've felt the same about the documentation for Graphene. It's very basic. I've also had a struggle using subscriptions with the graphql-ws library: no explanations of how to actually hook subscriptions up to some sort of pubsub system or anything like that. I figured it out eventually on my own, but it did feel like I was fighting against the grain by not just using Node.js for the server component and proxying calls to my Python services.

We have a significant investment in Python, so it was a question of spending time training backend devs on JS for one service, or working out the kinks and having one of us become the expert to help others. I think we got to a point where we are OK with Graphene et al., but I can't direct my team towards any decent documentation. I'm basically going to have to write my own cookbook-style docs for them. Not a huge deal, but I'd say if you have the option, I'm not convinced you should go the Python route today, as the Node.js ecosystem for GraphQL is much more mature (yet still has a long way to go).

I'm pretty sure the GraphQL gains will be worth it in the end but we are all early adopters right now, regardless of stack.


From what I've seen by glancing through, it's mostly JS -- Node.js (express-graphql), React, and packages from the Apollo GraphQL team (react-apollo). I've been using these over the last year and can recommend checking them out.


It says right there in the post:

"We’ll be looking at the core fundamentals of GraphQL along with strategies for how to implement it (client-side with Apollo and server-side in Node.js)"


Actually that's missing the piece of data I was looking for: which Node.js libraries/frameworks (for interacting with GraphQL) it is discussing.


The specific import of the server lib is the non-Apollo express-graphql, which is under the graphql org on GitHub. I only looked briefly, not sure if there’s another example. I know the book is still a work in progress.


I believe the book is all in NodeJS. As for the most mature lib, I’d say Apollo.


Apollo is client-side though, no? I mean for integrating with server code.


The Apollo team has apollo-server and apollo-engine-js. I've had good results with both of those.


Why spend time building an API at all? Just point PostgREST at your PostgreSQL database and you get a REST API for free. There are similar projects for GraphQL and other DBs.


Totally agree. Sprinkle in some row level security and you've got an app server with data isolated at the DB level, and get the power of SQL for building out endpoints. Compared to this graphql feels a little like engineering theatre.


Which leaks a lot of implementation details through the public API, preventing changes without breaking existing clients. Which in practice means that you can't change anything anymore once you have more than a handful of customers. At least if you don't have the market power of a facebook who can say "Adapt your client code or it'll stop working. We don't give a fuck."

Another big issue is that it forces the client to understand a lot about how your application works. While an API can abstract that and conveniently offer various computed properties which output what the client needs.


You can do the same with GraphQL and Postgraphile.

The added benefit is that you can use the same schema for your client and server.

There's a nice guide to setting up a whole system here: https://www.graphile.org/postgraphile/postgresql-schema-desi...


Isn't GraphQL intended to be used directly by the frontend app (with direct access by any potential malicious user), whereas Postgrest is more for a backend app, that can be trusted with more control?


There's nothing in PostgREST that stops you from limiting control so that even anonymous users can use it safely. I've used PostgREST for user-facing APIs with success, but it requires some knowledge about the postgres access control model.

EDIT: And "Just point Postgrest at your PostreSQL database" is rarely a good idea in my experience, I usually have (versioned) API-schemas containing views, so that I can change my underlying data schema at will without borking the API.


Anonymous seems easier, since you can treat them as a single user. But could you do something like HN as a frontend app talking directly to a Postgrest API?



I have a GraphQL field that converts tweet-like content from the database into token lists so that individual clients don't need to implement their own tweet-parsing logic. I have many other examples.

If it was really feasible to bind our databases directly to our APIs, we wouldn't need to hire anyone except DBAs and front-end developers. But backend business logic is a thing. You may not need it, but most people do.


The database is more than just tables, for your particular example you could use a computed column https://postgrest.org/en/v5.0/api.html#computed-columns.

In general, custom business logic can be done through stored procedures/views/computed columns.


"Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should."

You can do a lot in the database, a lot more than most developers realise, that doesn't mean it's a good idea.


Anyone from the GraphQL camp care to weigh in against this argument?


If your endpoints are backed 100% by a db, then you can get by with the postgrest approach.

However, GraphQL resolvers can do much more than a simple DB query. E.g. you can call other services and combine their responses. Or calculate the n-th digit of Pi. Or whatever you want to do.
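Since a resolver is just a function, one field can fan out to several backends and merge the results. A sketch, where the field name and both service objects are made-up stubs standing in for real network clients (kept synchronous for clarity; real resolvers would be async):

```javascript
// Stub services standing in for real backends (DB, billing API, ...).
const accountService = { get: (id) => ({ id, name: `user-${id}` }) };
const billingService = { balanceFor: (id) => id.length * 100 };

const resolvers = {
  Query: {
    // Hypothetical field: one GraphQL query hits two backends and the
    // client receives a single merged object.
    accountSummary: (_root, { id }) => ({
      ...accountService.get(id),
      balance: billingService.balanceFor(id),
    }),
  },
};
```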


The DB can do much more than simple queries, you can use stored procedures for any custom business logic, you can even use PL/Python in them.

See https://postgrest.org/en/v5.0/api.html#stored-procedures for more details.


I know, but stored procedures are a PITA in general. Last time I checked, there was no widespread, standard way of version controlling stored procedures, other than DROP PROCEDURE followed by CREATE PROCEDURE, which makes blue/green deployment impossible/awkward.


You can treat SQL code like any other code and version your schema with git. Migrations can be handled with https://sqitch.org, and you can also reduce some of the work with https://www.apgdiff.com, which will generate CREATE OR REPLACE FUNCTION statements for you.

Also have to mention that when using PostgREST, we encourage you to decouple your data schema (where your tables are) from your API schema (only views, stored procedures, computed columns); that way you can version your schemas (having a "v1" schema, "v2", etc.) and prevent breaking changes.


Call me old school, but I like my books printed, and with on-demand printing services it's not complicated for a publisher. I can't read hundreds of pages on a screen...

But without seeing the content, the pricing looks insane. It has to be worlds better than online resources for $289 (!).


The $39 version would be closest to an actual book. You get every chapter once the book is finished, and that's that.

The more expensive versions include updates and additional features.


Every chapter except the extra chapters included in the more expensive options.


Once you pay $39, you have to authenticate via GitHub OAuth to receive the book. Currently, there are no other authentication options.

Just an FYI as I didn't know that before I had paid.


I want to buy it but feel annoyed by these payment options.

  Tier 1: $39 - Full book when released and early access
  Tier 2: $89 - Tier 1 + updates for 2 years + extra chapters
  Tier 3: $289 - Tier 2 + free updates for life + more extra chapters
This is an oversimplification with only the things I care about; the actual packages include more stuff. What really annoys me is: why would you call tier 1 "the full book" and then say tiers 2/3 have "extra chapters"?

I just want all the chapters + early access + updates (2 years is fine). And I want to pay less than $100. Am I being too entitled? Buying books used to be simpler.


Yes, it's like some horrible IAP concept for books.

Normally a technical book costs around 40-60 EUR and one gets all the chapters. Updates are called "editions" and are published when there's demand and the information has become outdated.


Also "free updates for life" usually means free updates for several years and then "new product" is released with entirely new pricing and "old product" is deprecated. New and old here being a different revisions of the same thing.


Usually yes. There are some exceptions with technologies that evolve a lot. The free update feature of the ng-book 2 for instance really paid off for me. But that was included in all the "Tiers" for free, it was a $39 book.


Amazon/Google Books: this page not available for preview, unlock for $5 per page. Only pay for what you need! PAYG works out cheaper!†

Limited-time treasure-chest special: unlock all pages for $500! Footnotes and references extra; internet connection required.


On the other hand, this is the first time I've ever thought "Huh, that feels like too much for a book" (with the exception of textbooks), and I think part of the usual charge-more thinking is that if nobody is unhappy with the price, it's too low.

It's also certainly the case that me understanding GraphQL better will easily save my employer much more than $90 of my time, so I might try to expense it and see what happens....


I'm convinced most of this pricing is aimed at businesses.


With all due respect to John Resig, $39 does not get you the full book.

$39 "Upon completion The full book in ebook and HTML formats"

$89 "Extra chapters: Server-side rendering Offline data and mutations Serverless Stripe integration"

Rather misleading then.

Also there should be a dead tree option (even though content gets outdated fast). Personally I'd pay $49 for a printed full book on GraphQL by John Resig.


Creating a technical book is pretty hard (source: I've done it, as well as many others here). You're not just writing the book, but attempting to lead the reader from a point of little/no knowledge, so each step in your narrative needs to build completely on the last. Added to that, you're attempting to write idiomatic, clean and bug-free code to go alongside the writing.

Right now, he's clocking in at 350+ pages with his co-author. A chapter would take me a week to write, basically, so that's about three months of effort already. That's before you've done review, editing, etc., so you're looking at raw material costs of about $40-50k just to bring the text together with code in a reasonable package.

Even on the $89 package, they're adding in interview videos, interactive exercises(!), and a bunch of extra chapters. I doubt that doubles the cost, but it's probably a chunk extra (unless the book revolves around "designing software for interactive exercises", that's more code they need to write). Then on the higher package, even more chapters and videos, plus technical support (which I think is nuts).

They're pitching it as a book, which probably isn't helping, but I actually think that's decent value for what they're offering. Are they going to make a profit? Hopefully, but that's going to come down to how many people are interested in what is really a niche technical topic.

A lot of content is free; good-quality content tends not to be, and I think the world could use more like this. The market will decide, I guess.


I don't think the price matters as much as the fact that this comes across as nickel-and-diming customers.

I'd probably pay (or expense) $100-$150 for a really good book, with videos, interactive examples, etc. But this pricing just puts me off the whole book as you can buy the "full" book, the "fuller" book, the "fuller" book with support for bugs in it, etc.

I also expect a book to be reference material that I can share with colleagues as needed (we have an office library), so team pricing per seat comes across as quite money-grabbing.

To be clear, I'm not at all criticising the quality of the content, and I'd be willing to pay good money for good technical content. I just don't think this pricing is very respectful of customers.


Some 10 years ago I bought a huge book about Ruby on Rails for $50. It had at least 400 pages.

This last week I ordered "App Architecture", also for some $50, and I was amazed by what I got - a tiny book that fit in a small envelope...

And now $89 for a PDF?? I love books, but no thank you.


If you considered it an online training course vs a book, would you consider it a fair price?


No, because an "online training course" in full text is a book.


Which of those two books was better, though? If I'm getting a voluminous tech book with a generic "name of technology" title, it's a pretty safe bet that a third of it is useless to me.


Agreed, the pricing has put me off.

Another thing I feel is odd is the charging for technical support on the code samples. Either the code doesn't work, in which case that's a bug in the _book_ and should be fixed without paying for that, or it's a failure on the reader to set up the right environment, in which case the issue is likely to be minor and could probably be answered on Stack Overflow.

If it was support implementing the ideas in real world projects that would be worth a lot of money, but that doesn't appear to be what they are selling.


I think it's fair to charge for technical support. Of course you could turn to Stack Overflow, but some people can and will pay someone else for assistance.


I feel the same. I actually had to take time to understand the different tiers, which should be immediately clear, in my opinion.

Also, I'm a sucker for having all the chapters, and I'm interested in, for example, the chapter about Redis caching. Do I have to pay $289? Way too much...


> The end result is a vastly simplified backend

The GraphQL spec offers none of the following:

- caching

- authentication, authorization and handling data access

- protections against ad-hoc arbitrarily nested and complex queries

- routing

How does this “vastly simplify the backend”? If anything, it makes the backend hideously complex (IIRC Apollo’s approach to caching is to parse requests and responses every time, do some magic, and handle caching manually).
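Protection against deep nesting, for example, is something you have to bolt on yourself. A crude sketch of the idea in plain JS (real guards, like the graphql-depth-limit package, walk the parsed AST rather than counting braces):

```javascript
// Crude depth guard: reject queries nested past a limit.
// Illustration only -- a real server should inspect the parsed AST.
function queryDepth(query) {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === '{') max = Math.max(max, ++depth);
    if (ch === '}') depth--;
  }
  return max;
}

const query = '{ user { comments { votes { up down } } } }';
console.log(queryDepth(query)); // 4
if (queryDepth(query) > 10) {
  throw new Error('Query too deeply nested');
}
```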


The author and his introduction definitely have a compelling story to tell. It is making me re-think our recent strategy shift in which we evaluated gRPC and GraphQL. We chose gRPC but I'm starting to see that it's not really any better than REST + Swagger. I think I might need to actually just go ahead and buy this book and re-think if the gRPC route is worth it. Our investment in gRPC at this point has been slow - mostly because I figured the gRPC-Web project would be moving a lot faster by now. It's not, and I don't see the team doubling down, whereas it looks like Apollo and GraphQL seem... ready now.

Edit: Does anyone know if he goes over typescript?


When I interviewed at Improbable (authors of gRPC Web) last year, we spoke a little about gRPC and GraphQL. The interviewer didn't really explain their choice beyond the fact that half the company came from Google and were already fans of gRPC.


Would definitely recommend GraphQL over GRPC!

We'll go over how to generate types or add them manually, but the rest of the book is in ecmascript.


No TypeScript on basic search currently, but book is not finished, can’t speak for the author.


If nothing else, the conversation around this has shown that applying a subscription-based, tiered pricing model to a book is not appreciated by the consumers.


I think it's against the usual expectation for a book. If they had split it into volumes then it would have matched people's expectations better. Also, the videos and extras should be part of a course package. The messaging just makes it sound like you are getting short-changed.


I think he is making a mistake selling this as a book. These prices just don’t work for a book.

He should be selling it as a knowledge community. Members get access to the resources, conversations, training, etc.

Add levels of membership that have to be earned through merit somehow and membership at a higher level becomes valuable on the CV.


Why is rate limiting only part of the $289 package?

Edit: And wasn't it cheaper earlier? I was pretty sure it was $29 for the basic version; maybe that rounding-up marketing trick got me.


Good on him to be so flexible - I don't know that much about John Resig, but he was clearly a pioneer with jQuery + REST, so it's great to see him being equally enthusiastic about a project based on graph query languages and React, arguably replacements for this earlier tech!


He always said jQuery was only a temporary solution.


REST was Roy Fielding's, but I agree. Much like the concept of Apache Cordova, the best jQuery would be no jQuery at all.


John didn't invent REST (nor AJAX); however, jQuery made them feasible and easy for people to use.

Yes, you're right, the best jQuery would be no jQuery. jQuery was basically a prototype for the next generation of browser features.


It didn't encourage REST either. It provided $.post but not $.del (since delete is a keyword in JavaScript), so one could even say it went against REST. I think it was orthogonal to REST, though.

Here's the docs for jQuery 1.3 from when REST was starting to take off, with jQuery.post but no delete method: https://api.jquery.com/category/ajax/shorthand-methods/


It provided $.ajax({method: "DELETE"}) however... so...


Making a DELETE XHR was never the issue, browser support was. It's an issue that was impossible to paper over with a JS library alone, you needed backend integration. That's why Rails and similar frameworks at the time spoofed DELETE and PUT requests by creating a POST request with a special param set.
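The spoofing trick itself is simple: the form always POSTs, and a hidden param carries the intended verb for the framework to honor on the server. A sketch of the server-side half (the `_method` param name is what Rails used; the request shape here is simplified):

```javascript
// Sketch of Rails-style verb spoofing: the browser sends a POST,
// and a hidden _method param tells the framework which verb was meant.
function effectiveVerb(req) {
  if (req.method === 'POST' && req.body && req.body._method) {
    return req.body._method.toUpperCase();
  }
  return req.method;
}

console.log(effectiveVerb({ method: 'POST', body: { _method: 'delete' } })); // 'DELETE'
console.log(effectiveVerb({ method: 'GET', body: {} }));                     // 'GET'
```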


All browsers supported the verbs for years. IE7 had mostly full support (not PATCH though, annoyingly).

The problem was mostly backend related: most servers didn't support them. DELETE for example has been supported for over 12 years in all major browsers.


Ah, interesting, my memory is dusty. Now that I think about it, the limitations of <FORM> were front and center too: A Rails app wanted a "delete $X" page that didn't rely on JS, and the only way that was going to happen was with a <FORM>, which only supports GET and POST methods.


Ah well yes. That’s quite intentional of course but yeah, an architectural limitation of that Rails app.

Of course until ~v3 Rails didn’t fully support all verbs so...


It was also an issue on backends. Backbone had an emulateHTTP option: http://backbonejs.org/#Sync-emulateHTTP


REST had taken off before jQuery existed.


> GraphQL offers a way to push all of the logic for specifying data requirements onto the client, and in return, the server will execute the (highly structured) data query against a known schema. The end result is a vastly simplified backend that also empowers the client to execute whichever queries they need.

Why is empowering the client a good thing?

Why is moving complexity from the backend to the front end a good thing?


Because you can have one server efficiently servicing multiple clients with different, unanticipated needs.


So, reducing server load?

OTOH if the complexity is offset to the client, and you have multiple clients, then there is more complexity. No?

(assuming you have multiples types of clients)


But the client already had to consider what data it needed.

The only difference is that in a REST world it needed to make several calls to fetch all the data (often waiting for resource references in the nth response before being able to build the (n+1)th query) whereas now it can fetch them all in a single query.

So if anything it reduces the client complexity, as well as making it much easier to extend the API to support new client requirements without having to resort to REST-style versioning.
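Against a hypothetical schema, the waterfall collapses into one request; the nth response no longer gates the (n+1)th query:

```graphql
# Hypothetical schema: one round trip instead of
# /users/6, then /users/6/posts, then comments per post.
query {
  user(id: 6) {
    name
    posts {
      title
      comments { author { name } }
    }
  }
}
```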


It allows a server produced by one organization to provide an API that remains efficient for future clients produced by other organizations without either changing the API to meet the needs of the other clients or anticipating every client's needs.

It's more about being economical in API design than just about server load and bandwidth.


because the client knows exactly what it needs and it makes abstracting the data-fetching pretty easy


Why is all the GraphQL documentation so tightly coupled with either Apollo or Relay? Apollo doesn't play nicely with a non-React setup (in my experience), not to mention Relay itself. All the Relay server-side stuff makes the client really boilerplate-heavy too. I don't think I'd buy this book, since it's also Apollo-stained.

Does someone have good resources on how to use vanilla GraphQL without the bloated Apollo/Relay stuff?

I'm currently working on a GraphQL server (using Graphene), and thankfully I'm able to not use Relay (with which it tightly integrates). So I'm able to implement my own pagination and filters.
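For what it's worth, the wire protocol needs no client library at all: a GraphQL call is just an HTTP POST whose JSON body has `query` and `variables` fields. A minimal sketch (the endpoint path and schema here are hypothetical):

```javascript
// Build a plain GraphQL-over-HTTP request -- no Apollo/Relay needed.
// Any fetch/XHR can send this to a spec-compliant endpoint like /graphql.
function graphqlRequest(query, variables) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  };
}

const req = graphqlRequest(
  'query ($id: ID!) { user(id: $id) { name } }',
  { id: '6' }
);
console.log(req.method);                        // 'POST'
console.log(JSON.parse(req.body).variables.id); // '6'
```

The response comes back as `{ data, errors }`, which you can handle however you like -- putting it in your own store included.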


Relay (Modern) doesn't require any of the Relay-specific schema patterns. But most of them (connections, the node interface) are useful enough to want to use anyway.

That said, I agree that too much of the documentation/articles/tutorials about how to use GraphQL assumes using a particular implementation. I'm far more interested in talking about higher-level concerns like best practices and data modelling.


Also if it's the new REST then it absolutely cannot be language specific, let alone library specific.


Chapters 1-5 are apollo-free! And the queries and mutations used in the apollo client chapters are applicable outside apollo.


Is Apollo the be all end all GraphQL client these days? I tried it roughly 1.5 years ago and was turned off by all the component wrapping.

Is it bad form to just fetch data via GraphQL and put it in Redux (or other store)?

I'd like to buy in, but I'd like to keep my data fetching separate from my view layer if possible.


While transitioning away from Redux, we actually fetched data via GraphQL and put it in stores.

Honestly, it was a pain, but it's because we abused Redux.

We progressed by doing the HOC and bypassing Redux but it still was a pain, because HOC.

We're now at a stage where we updated Apollo versions and started converting to the Query components. It feels and works a lot better.

When we started, we thought the same thing -- keeping data fetching separate from the view layer if possible. We realised that it didn't make that much sense. Keeping everything in React components is a lot more maintainable and easier to comprehend.


Fellow redux-abandoner here; be sure to check out Apollo Client v2 and its various middleware options like link-state and link-rest. Those essentially let you pull in data via GraphQL and REST APIs and manage it via an in-memory store (just like the redux store).

Also, the subscription-via-websocket story there is really sweet; you basically get real-time updates for nothing ;)

In short, just as John stated, it’s a huge boost of productivity on both front and backend, we verified it ruthlessly and love it.


Apollo has been in the lead for a while—it currently has 125k weekly downloads on npm, vs Relay's 20k.

You can use the new render prop API instead of the HOC API if you don't like wrapping.

The data fetching itself is separate from the view layer—it's all done by Apollo. From a maintenance perspective, listing data needs is great to have in the view layer, colocated with the components that use them.

You can manually fetch data via GraphQL and put it in Redux, but there are a ton of benefits to having a library like Apollo manage the store for you: automatic normalization, querying from either store or server (or both), optimistic mutation updates, etc.


Adopters of GraphQL are quick to call it game-changing, but is it a game worth playing? It has to add value above the standards already in place. Is it accomplishing that? How is it more valuable than whatever it is that it replaces (REST, whatever)?


I think I came to this conclusion the other day: GraphQL truly shines when the variety of ways you want to access data is significantly more complex than the complexity of the stored data. In other words, if you want to look at 10 tables in twenty or forty or more different views (counting the times when a view is composed into another), then I believe using GraphQL will begin to show order-of-magnitude efficiency rewards.

If the number of views is only slightly more than the number of tables then the rewards are more a matter of taste. It will give you a nice typed interface to your data and an interactive query GUI. But other tools provide that as well.


> if you want to look at 10 tables in twenty or forty or more different views (counting the times when a view is composed into another), then I believe using GraphQL will begin to show order-of-magnitude efficiency rewards.

How will it show order of magnitude efficiency?!!

It means that you will have 20-40 views towards a database that only contains 10 tables. Which will inevitably result in highly inefficient queries towards said database. Especially if a view is composed into another.

There's no magic in GraphQL that suddenly makes that go away.


Imagine you have a cross-platform app with multiple teams. You have a lot of data-driven components, like different feed types. You want to avoid situations where a change in the backend API needs to be synced with every team/app. You want to give app developers the power to define efficient queries.

Unless you develop multiple apps or a data-driven application there is very little reason to use GraphQL. Personally, I am in favor of building two APIs: a public/third-party REST one and JSON-RPC for the frontend. Getting REST right is difficult, and after you create your resources your frontend needs to work around incomplete/superfluous data.


This doesn't answer my question :) You still have 10 tables, and you still have 40 inefficient ad-hoc views into that database.

Why is it that every text, post, and comment about GraphQL focuses on the client side only and completely ignores any questions about the server side?


Because your source data will be like 10 tables, but your frontend needs to access this data in different ways (views). There is no way to design your database to support all possible queries in an efficient way. With complex queries, even caching becomes tricky.

  user(id = 6) -> with ('comments') -> with ('votes')
An ORM would create a complex query that hits three tables. There is no way to represent this as a proper REST resource (/userNameWithCommentsVotes). You end up with REST resources that accept a lot of query params. Each endpoint creates coupling to both the data storage and the consumer (front-end).

  User (id = 6) {
    name
    comments {
      votes: {
         up
         down
       }
     }
  }
GraphQL shifts data access towards the client. It will make multiple DB queries, but they will be trivially cached using the Dataloader pattern. You hit the DB more with simple queries, but you don't muddle your data schema with ad-hoc views/queries.

People start with pretty REST and a normalized database, then frontend and third-party requirements demolish this into another "Paypal API". I am not saying that GraphQL is a silver bullet; you need to structure your frontend application graph as well, and complex conditional queries are difficult to express.


> An ORM would create a complex query that hits three tables.

1. ORM isn't a requirement for REST

2. A REST endpoint will execute a highly specialised query that will hit all the right database indices and will return just the dataset required in one roundtrip to the database.

And since it's a known quantity, it will benefit from: hot db indices and caches, intermediate caches, and even HTTP caches (because GET requests are both idempotent and cacheable, for example).

Meanwhile with GraphQL your server will have to execute what's essentially `SELECT <all>` three times (otherwise your "dataloader" won't be able to cache data for more complex queries) and do all the filtering and joins in-memory, in code.

> GraphQL shifts data access towards the client.

Hahahha wat? The client has no access to data. The only thing it does is send a query request to the server. The server will parse the query. The server will request data from the database (multiple times). The server will end up joining, filtering out, caching, figuring out proper auth access to, etc. etc. to data.

And only then will that data be returned to the client in the form the client requested. The client has no access to data. The only thing the client can do is make ad-hoc, potentially non-performant queries to the server.

> You hit the DB more with simple queries, but you don't muddle your data schema with ad-hoc views/queries.

Yup. You only muddle your code with those queries (the code needs to find a way to compose/filter out/etc. etc. etc. the data for the ad-hoc queries from the client). And you only do multiple redundant and expensive trips to the database (what happens when the DB is sharded, and some data required for the query lies in a different shard? What happens when connections are slow/interrupted? etc. etc.)

> it will be trivially cached using Dataloader pattern.

No it won't. Dataloader only caches some data during one request. On the next request you will do the same: expensive multiple roundtrips to the database.

Oh. By the way. Remember how you dismissed ORM? Well, your dataloaders and data resolvers (and whatever other new lingo GraphQL came up with) are nothing but a very limited and inefficient ORM.


> A REST endpoint will execute a highly specialised query that will hit all the right database indices and will return just the dataset required in one roundtrip to the database.

Each time you need a new highly specialized query you create a REST resource. That's why I prefer to be honest and just make an RPC call.

> Meanwhile with GraphQL your server will have to execute what's essentially `SELECT <all>` three times (otherwise your "dataloader" won't be able to cache data for more complex queries) and do all the filtering and joins in-memory, in code.

That's fine, because easy cache invalidation is worth it. You will find that complex REST endpoints will cache the same data multiple times.

> Hahahha wat? The client has no access to data. The only thing it does is send a query request to the server. The server will parse the query. The server will request data from the database (multiple times). The server will end up joining, filtering out, caching, figuring out proper auth access to, etc. etc. to data.

I said data access, not data. The benefit of GraphQL materializes when you have multiple client applications that target the same backend data store. You describe your data schema, and clients can figure out what data they need.

> Yup. You only muddle your code with those queries (the code needs to find a way to compose/filter out/etc. etc. etc. the data for the ad-hoc queries from the client). And you only do multiple redundant and expensive trips to the database (what happens when the DB is sharded, and some data required for the query lies in a different shard? What happens when connections are slow/interrupted? etc. etc.)

You have a cache. Having only one copy of a user in the cache is super important. GraphQL is not creating expensive queries. With GraphQL it is easier to shard your DB because you will have fewer joins.

> No it won't. Dataloader only caches some data during one request. On the next request you will do the same: expensive multiple roundtrips to the database.

Dataloader supports any cache backend. In production you will use something like Redis. The whole point of Dataloader is to cache between requests. It supports cache invalidation as well.

> Oh. By the way. Remember how you dismissed ORM? Well, your dataloaders and data resolvers (and whatever other new lingo GraphQL came up with) are nothing but a very limited and inefficient ORM.

No, they are a query interface. They expose a DSL for accessing data. Mapping is something that can be done in Relay.


> Each time you need a new highly specialized query you create a REST resource.

I love how you meander. First you were complaining about the ORM creating complex queries. When I countered with the simple fact that you don't need an ORM, and that "complex queries" are highly specialised, efficient queries that take full advantage of DB capabilities, you immediately went off on a tangent about RPCs.

:-\

> That's fine, because easy cache invalidation is worth it.

No, it's not fine. Because instead of retrieving a single, highly optimised dataset in one go, you do multiple inefficient roundtrips to the database.

> I said data access, not data.

access. /ˈaksɛs/ 2. obtain or retrieve (computer data or a file).

All data access happens on the server through inefficient database queries and in-memory juggling of data. Clients have no access to data, they send queries.

> The whole point of Dataloader is to cache between requests.

Dataloader (in its original form and specification) doesn't cache between requests.

DataLoader provides a memoization cache for all loads which occur in a single request to your application.

DataLoader caching does not replace Redis, Memcache, or any other shared application-level cache. DataLoader is first and foremost a data loading mechanism, and its cache only serves the purpose of not repeatedly loading the same data in the context of a single request to your Application.
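That per-request memoization is easy to see in a toy version (a sketch, not the real dataloader package): loading the same key twice returns the very same promise, so the batch function runs at most once per key, and the cache is simply thrown away with the request.

```javascript
// Toy DataLoader: batches loads scheduled in the same tick and
// memoizes per key for the lifetime of one request.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn; // async (keys) => values, in the same order
    this.cache = new Map(); // key -> Promise; discarded with the request
    this.queue = [];
  }
  load(key) {
    if (this.cache.has(key)) return this.cache.get(key); // dedupe within the request
    const promise = new Promise((resolve) => this.queue.push({ key, resolve }));
    this.cache.set(key, promise);
    queueMicrotask(() => this.flush());
    return promise;
  }
  flush() {
    if (this.queue.length === 0) return;
    const batch = this.queue.splice(0);
    this.batchFn(batch.map((item) => item.key)).then((values) => {
      batch.forEach((item, i) => item.resolve(values[i]));
    });
  }
}

const loader = new TinyLoader(async (ids) => ids.map((id) => ({ id })));
const first = loader.load(1);
const second = loader.load(1);
console.log(first === second); // true: one DB hit for this key, this request
```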

If any other Dataloader implementation adds caching between requests, it is just reinventing the wheel.

> No, they are a query interface. They expose a DSL for accessing data.

An ORM is a DSL at its core. And that's what "dataloaders" and "resolvers" in essence are: ORMs.

To quote from Apollo:

In order to respond to queries, a schema needs to have resolve functions for all fields.

    const resolverMap = {
      Query: {
        author(obj, args, context, info) {
          return find(authors, { id: args.id });
        },
      },
      Author: {
        posts(author) {
          return filter(posts, { authorId: author.id });
        },
      },
    };
Oh look. A wild ORM appears! Oh look how it quickly devolves into multiple DB roundtrips for any non-trivial query.


Hmm, are you talking about performance here? I'm talking about developer efficiency and code complexity. Sorry, should have been more clear about that.


Of course we are talking about performance. When someone carelessly says "40 unforeseen composable views towards 10 tables" or (as a recent article on HN mentioned [1]) "You should find yourself being able to build a query that delves 4+ relations deep without much trouble", I find myself asking: at what cost?

There's no magic in this world. And yet, no one seems to address the question of the server. If you have 10 tables and 40 composable views, most of those views will end up highly inefficient queries towards the database (possibly with multiple roundtrips).

And that's on top of many other concerns: https://news.ycombinator.com/item?id=17293337

[1] https://news.ycombinator.com/item?id=17269028


"we are talking about performance" - Well, I was not. My comment was not about performance.


Well, that's the problem, isn't it? You can always carelessly say "ah, it magically gives you the magical capability of magically making 40 composable views that magically make everything shiny."

But then the hard questions come. It's no surprise that there are so few GraphQL resources, blogs, docs, posts, proponents that talk about performance on the server.


From reading around, it _seems_ that caching in particular is left as an exercise for the developer, vs putting your REST API behind nginx, for example, when you need to scale out.


I hope John will divide up the book later.

I am interested, but as a guy working mostly on backend systems, the frontend and mobile chapters will not help me very much. I don't want to pay $40 for a majority of content I cannot make use of.


It isn't $40 for content you don't use; it's $40 for the content you do use.


Smaller computer books tend not to be much cheaper. So the majority of the cost is for the minority of the content you want. Maybe $10 of it is for the rest of the content you may want someday.


While GraphQL has its uses, I can't see how putting the query logic in your frontend is a good thing:

How do you control scaling and write performant queries? When you make an API endpoint, you have control over the queries you make and how you make them.

With GraphQL you lose that control, right? Maybe it's good for internal projects, where you own the client and server. But just exposing the internal DB directly through GraphQL is probably not a good idea. Maybe writing the DataLoaders yourself is better, but then you are already close to the effort of writing a simple endpoint.

For simple APIs, GraphQL seems like overkill.


With GraphQL you have the flexibility to utilize both REST and RPC patterns, with the added benefit of a type system and the ability to choose the fields returned on the client. You can write common CRUD queries/mutations similar to how you would with REST (allUsers/allProjects). If you have a query that you know ahead of time will be inefficient, you can add a more RPC-style or dedicated-endpoint query that you can optimize however you see fit on the backend (getAllCrazyThings(filters), etc.).
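In schema form, that mix might look like this (the field names are the ones from the comment; the types are made up):

```graphql
type Query {
  # CRUD-style, analogous to REST collection endpoints
  allUsers: [User!]!
  allProjects: [Project!]!

  # RPC-style escape hatch for a known-expensive query,
  # hand-optimized on the backend
  getAllCrazyThings(filters: CrazyThingFilters): [CrazyThing!]!
}
```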

For the performance aspect, check out Apollo Engine, which allows you to track the latency and implications of using different fields within your queries and mutations over time:

https://www.apollographql.com/docs/engine/performance.html


GraphQL endpoints are API endpoints. You have to write the resolver yourself, which will do things like query the DB.

GraphQL is just a schema specification like Swagger. It doesn't come with a server or a client. Those are implementation details left up to you, just like in REST.


One of the problems I've noticed while using GraphQL (late last year) was that although there are 'officially supported' libraries in a number of languages, there isn't really feature parity between them. In particular, I remember that the Ruby implementation we were using required manually parsing the AST to do fairly basic operations described in the spec. I wonder if this situation has improved in the last few months.


Looks nice but why would I use GraphQL over JSON queries to a simple document database directly?

I guess it comes down to how important schemas are to you. I know that suggesting they might not be important is, for some people, a sin punishable by death. But for me that's a belief system that doesn't hold up for every application.


Why should I use Facebook's GraphQL over OData which is an ISO standard? I've found a few comparisons but I'm curious what the HN crowd has to say.


Feels like GraphQL has more momentum behind it now than OData. OData seems to have suffered from the fact that it originated in the .NET world, which is pretty much ignored by those outside of it.


So let me get this straight.

For the low price of $39 I will get the full book in ebook and HTML formats when it is done.

But for the price of $89 I will get extra chapters on Server-side rendering, Offline data and mutations, Serverless and Stripe integration.

So I am clearly not getting the full book for $39.

I just recently refused to buy a game for employing the same dishonest sales tactics, so this is an easy no thanks from me.


Honestly, I think people will disagree with me on this (due to some vague "you don't have to buy it" argument), but it's just too dishonest to me. This is possibly the biggest rip-off in pricing of any technical book I've ever seen.

If you pay $39 you get 8 fewer chapters than the actual full book. And it's online only, so no paper printing or publisher overhead. For $89 you get a video and some extra chapters (that really should be in the book you paid for in the first place). If you pay $289 (which is an absolute absurd price for any book) you get exactly what you used to get with books 10 years ago: a book with all its intended chapters, a mailing list, a glorified IRC channel and a code repo. Except, again, it's still not even a physical book (so why so expensive?).

If you pay the "Training" package ($749) you supposedly get superior in-person mentoring. Except it's not really done by the authors. It all sounds like some Tai Lopez scam.

I don't want to diminish the amount of work it took to make this book, or the quality of the authors. I'm sure it's incredible and it seems a pretty massive book. But I just don't understand these sort of things when it comes to professional programmers who write books. Supposedly they want to spread this technology they care about and want people to use it. We as programmers get paid very well, we're not in need of money. Book revenue is probably not their main source of income. So why price it out of everyone's range? You're not gonna reach nearly as many people as just selling it at a reasonable price.


I'll preface this with the fact that I completely agree with your sentiment.

> it's still not even a physical book (so why so expensive?)

I have never written or published a book, but my understanding is that the cost of writing the book far outweighs the cost of actually printing it. Human time and knowledge is way more costly than some paper and the effort of a machine that spits out books by the second.

> If you pay $289 (which is an absolute absurd price for any book)

This offer and especially the in-person one seem like they are targeting businesses. A lot of places will pay orders of magnitude more to have someone work directly with their employees.

Honestly, about $93/hour is on the cheap side for in-person training. (I'm not saying it's good training or even close to worth it, just what the landscape looks like)

Edit: actually, it would be cheaper than $93/hour since I'm not taking the other things into account. Let's say the book and all the other stuff realistically make up about $150 of the price (I am guessing the $289 is just trying to get you to buy one of the surrounding tiers). That's actually $75/hour for the training.


> I have never written or published a book, but my understanding is that the cost of writing the book far outweighs the cost of actually printing it. Human time and knowledge is way more costly than some paper and the effort of a machine that spits out books by the second.

Usually physical books are said to be expensive because publishers are the ones holding the rights to them. They pay the authors in advance and try to recoup the cost later, with prices as high as the market will bear. The publishers have some fixed overhead in payroll (for editing, typography, art), printing, marketing, etc. Add to that the risk of the book not being finished by the author and the publisher's profit margin and it goes some ways to justify their pricing (or at least it makes it understandable). The authors then get only a slim percentage of the sales (if at all).

In this case the book isn't even finished, the full proceeds go to the authors (or so it seems) and the readers are expected to advance the money themselves to offset the costs of writing it. The overhead is vastly diminished, there's no publisher, the authors incur no risk, and yet the price is much higher. Makes no sense.


> We as programmers get paid very well, we're not in need of money.

Speak for yourself.


Would you feel better if they wrote 2 smaller books instead of 1 whole book, and the premium package included both? WTF is a "book" anyway? It's a web publication.

The cost of a book is in the IP, not the printing costs.

What is the maximum non-absurd price for a book?

Why do you think the authors are the only experts who can lead a useful training?

Which cost-effective GraphQL book should we buy instead?


You know, HN could really do without the person who constantly tells everyone how well-off we all are. Learn economics. And then hold your tongue next time.


Day-0 DLC has arrived to technical literature. John really is ahead of the curve.


Errata can only be accessed via microtransactions. And loot boxes have a chance to drop full code samples... the F2P version leaves everything as an exercise for the reader.


Oh, oh no. There is a missed opportunity here. Common lootboxes drop a line of code, uncommon an entire method, rare, a full object, and legendary are working samples.


Hmm, you know, people want to forge their own legendary objects. Double the drop rate on working samples, but they’re obfuscated until you collect 3 of each code block. And you can sacrifice 10 useless one liners to be able to discover a block of your choice.


And if he takes as much time as he took with "Secrets of the JavaScript Ninja", we may never even see DLC


I remember waiting like 2 years for the Manning 'JavaScript Ninja' book that he never finished. I think someone else took over and finished it.

https://www.manning.com/books/secrets-of-the-javascript-ninj...


Some people had to wait four years.

The MEAP was released in 2008. Three years later Manning brought in a second author to finish the book. It was released in 2012.

A couple of quotes from https://johnresig.com/blog/secret-omens/:

> ...I got caught up in coding and stopped focusing on writing. I had to prioritize my time and I chose to prioritize doing more development and focusing on my personal life.

> ...I would absolutely not write a technical book again.


I remember this, and I have both books, which are excellent. Well, here I am again, wondering which version of the book to get???


Maybe this is why he has a co-author this time around?


I bought that book solely based on the author's fame. I got a book written by someone else. He offered no explanation as to why he stopped writing that book.


And in the end, it ended up being an excellent book. I don't think this complaint fits here.


It fits because I had moved on from doing javascript; the book was no use to me by the time all the chapters were finished. There was no complaint about the quality; yes, it was excellent for the chapters I read.


But you knew you were paying for an unfinished book ahead of time, so why did you base your decision to "do javascript" on unfinished work?


Yes, I knew, but I wasn't expecting to wait 4 years and receive a book written by a completely different person.

He is now offering another unfinished book for sale, so my comment is relevant in this context.

I didn't base my decision to write javascript on the book; I'm merely pointing out that the book was no use to me after 4 years.


You're not even getting the full book for $89, the extra chapters stop at $289.


So would you be happier if only the $39 price point was available, without any of the extra chapters? Then you wouldn't have the other price point to make you feel like you're missing out on something?


How about he releases the entire book at a reasonable price point that isn't $250. It's unfortunate that John is playing games with pricing. This entire pricing scheme feels like something EA would cook up and that is NOT good optics.


I would be happier if they didn't lie to me and claim that $39 will get me the full book. I would be even happier if they didn't go the video game route and add several purchase tiers including day-0 DLC. It's manipulative and will make me feel like I missed out if I opt for one of the cheaper options.

