tRPC – Build and consume typesafe APIs without schemas or code generation (trpc.io)
316 points by olalonde on Aug 12, 2023 | 217 comments



I'm currently in the process of removing tRPC from our codebase. It's been a nightmare of tight coupling. I've also found that it encourages more junior developers to not think about interfaces / data access patterns (there's a mapping from Prisma straight through to the component). It's fantastic for rapid prototyping but really paints you into a corner if you ever want to decouple the codebase.


This is the overlooked advantage of a schema (e.g. in GraphQL): it forces you to think about the data types and contract, and serves as a good way to align people working on different parts of the code. It also scales to other languages besides TypeScript which helps if you ever want to migrate your backend to something else or have clients in other languages (e.g. native mobile apps in Swift, Kotlin, etc).


TypeScript is actually a great schema language and fixes a number of problems in GraphQL's SDL, especially the lack of generics.

I think if you're defining a JSON API, TypeScript is a natural fit since its types line up with JSON - ie, number is the same in both, and if you want something special like an integer with more precision, then you have to model it the same way in your JSON as in your TypeScript interfaces (ie, a string or array). This makes TS good even for backends and frontends in other languages. You can also convert TS interfaces to JSON Schema for more interoperability.
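
For instance, a minimal sketch of modeling a higher-precision value the same way on both sides (hypothetical example):

  // JSON numbers are IEEE 754 doubles, so a 64-bit id is modeled as a
  // string in both the payload and the interface:
  interface Payment {
    id: string;          // e.g. "9007199254740993" - too big for a JS number
    amountCents: number; // within the safe integer range, a plain number is fine
  }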


It’s not a perfect mapping with JSON. Everyone knows that stuff like functions and dates can’t go over JSON, but there are also subtler things, like the fact that undefined can’t exist as a value in JSON. I’ve seen codebases get pretty mixed up about the semantics of things like properties being optional versus T | undefined.


> Everyone knows that stuff like functions and dates can’t go over JSON, but there are also subtler things, like the fact that undefined can’t exist as a value in JSON

I use `null` for that purpose, and it's been pretty reliable. What are the situations where that falls down?


I also like null | T for required properties, but for whatever reason I have seen that undefined | T is a much more common convention in the TypeScript world. Maybe the reason is the semantics of how object access returns undefined for missing properties, but that's precisely the source of ambiguity between "object has property X with value undefined" and "object does not have property X".
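
A minimal illustration of the ambiguity, and how JSON flattens it:

  interface A { x?: number }             // "x" may be absent entirely
  interface B { x: number | undefined }  // "x" must be present, may be undefined

  // JSON cannot represent the difference: undefined values are dropped
  JSON.stringify({ x: undefined }); // '{}'
  JSON.stringify({ x: null });      // '{"x":null}'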


TS is a pain with JSON Schema or OpenAPI because it doesn't directly support things like integer or precision. TS does not easily support things like `"exclusiveMinimum": 5`, `"type": "integer"` or patterned (regex) fields.

So if you want to convert your TS interfaces to JSON Schema, you may need to provide additional constraints via JSDoc and use a generator that understands those annotations. But your TS interfaces cannot express those constraints directly.
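
For example, with a generator in the style of typescript-json-schema, the constraints live in JSDoc annotations rather than in the type itself (a sketch; the exact tags depend on the generator):

  interface Product {
    /**
     * @TJS-type integer
     * @exclusiveMinimum 5
     */
    quantity: number; // to the TS compiler this is still just "number"

    /** @pattern ^[A-Z]{3}-\d{4}$ */
    sku: string;
  }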

There are a number of other related complications surrounding these more expressive schema definitions - like building types from them and interacting with them at runtime.


You can easily express constraints like these with Zod though (created by the same person who created tRPC v1)
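
e.g. the constraints mentioned above map pretty directly (sketch):

  import { z } from "zod";

  const Product = z.object({
    quantity: z.number().int().gt(5),          // "type": "integer", "exclusiveMinimum": 5
    sku: z.string().regex(/^[A-Z]{3}-\d{4}$/), // patterned (regex) field
  });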


Sure, if you have TS in your backend. There are OpenAPI to zod generators that can help get you started, even if they don’t give you perfect zod schemas out the gate.


Jesus, if you're making a JSON API just use JSONSchema, which while not perfect, is quite nice for language interop (and more powerful than typescript)


> just use JSONSchema

I'll "just" use the type system built into my programming language until the pain of supporting multiple languages is more expensive than installing JSONSchema tooling and messing with code generation.


We implemented tRPC at work and use all the other things that would have been 'tightly coupled' within our code base had we not planned a tiny bit ahead. tRPC is incredible but it's still just the transport layer between your back-end and your front-end. Allowing the internals of tRPC to be used deep within your business logic is just as bad as not having a clear 'controller' or 'router' layer where you can cleanly define inputs, schemas, and keep things separated. In this sense, if we ever decided to move from tRPC it would be relatively straightforward. Lifting an entire sub-system and running it over a queue for example would be trivial.


Your problem isn’t tRPC, your problem is that you have engineers who type things for typing’s sake. They’ll have the same problem in any tool.

There’s a learning curve to these things. It always starts with type FunctionIWroteTodayArgs = …, which is useless and tells you nothing.

After a few iterations (this takes years) people gradually realize that the goal is to describe your domain and create types/interfaces/apis that are reusable and informative, not just a duplication of your code. You then get more useful types and things start really flying.

I guess what I’m saying is work on that with your team, not on ripping out tRPC.


+1. Almost every time I actually write out a type it's because I want to communicate some domain knowledge. For everything else I use inferred types. IMO this is The Way.


Eh? Useless? Maybe you’ve not written generic Javascript before but “type FunctionIWroteTodayArgs” has eliminated an entire class of problems we used to face with JS code.

If you’re talking about decoupled services, that’s about business domain composition more than type description. And those types benefit from a higher level description/reusability/transportability.


Fair, it’s not literally useless, it does help with typos and autocomplete. But you can get so much more with just 10% extra care in how you design your interfaces that the lazy approach feels almost useless by comparison.

It’s similar to the problems you run into by writing the wrong kind of tests – the ones that essentially just duplicate your code instead of validating input/output at boundaries.


Could you expand on the nightmare of coupling?

I don't see how declaring an http client server side and consuming it client-side can be a worse thing.

We use the same pattern of creating services that then every consumer can use (a web interface, a cli, etc) and the fact that those things never get to break is a massive improvement over anything I've seen in the past.


If you only ever use Typescript and are sure you’ll never need to interact with the code in any other language or service in a different repo it’s fine. But as soon as you need to reuse that backend for anything else you’re stuck building something new.


You can make calls to tRPC endpoints from anything that can send an HTTP request. The RPC format for requests might not be your cup of tea, but it works.
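
For instance, a query over plain fetch looks roughly like this (the exact input shape depends on your transformer config; the procedure name is made up):

  // tRPC queries go over GET with the input as URI-encoded JSON
  const input = encodeURIComponent(JSON.stringify({ name: "world" }));
  const res = await fetch(`/trpc/greeting?input=${input}`);
  const data = await res.json();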


Ostensibly, the product isn't as useful as the existing go-tos (json schemas, shared libraries, graphql, etc) if you cannot create a shareable schema for validation. The ability to form arbitrary requests is already assumed. If your messages are very complex, you need some tooling.


If you’re in TS world then you can export the Zod schema that your tRPC queries/mutations are using.
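
Something like this (a tRPC v10-style sketch with hypothetical names):

  import { initTRPC } from "@trpc/server";
  import { z } from "zod";

  const t = initTRPC.create();

  // Exported so any other TS consumer can validate against the same contract
  export const createUserInput = z.object({
    name: z.string(),
    email: z.string().email(),
  });

  export const appRouter = t.router({
    createUser: t.procedure
      .input(createUserInput)
      .mutation(({ input }) => ({ id: "u_1", ...input })),
  });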


Having the opposite experience with it as a small team, and I can see how it would work great in my past large teams. I bet you're gonna have the same complaints about any API you use not just tRPC (junior developers not thinking about interfaces).


I’m willing to admit that poor usage can make any tool a problem. But, tRPC is set up to make it easy to directly expose your backend for use in a component. For new projects that’s fantastic, for larger projects and teams having the ‘friction’ of defining a gRPC, GraphQL or REST endpoint is leading to more thoughtful API design and ability to keep isolation between layers.


> having the ‘friction’ of defining a gRPC, GraphQL or REST endpoint is leading to more thoughtful API design

Make devs slower so they code smarter?

More friction is just that, it just frustrates devs it doesn't make them code better.

tRPC enables good teams to ship faster. Less friction means doing what the team was already going to do anyway, but faster.


It's a dangerous anti-pattern to pass your db types through to your api handlers. Those are always different models and it's important to have an intermediary representation for the rest of your domain.


Agreed. Although, sometimes it feels like this is a losing battle. I've worked with too many people who see that level of separation as "unneeded duplication", with constant complaints about having to update a bunch of different layers "just to add a new field".

IMO, at a minimum, you have your API layer model, your internal model, and your database model with a mapping layer at each boundary.

I rarely have problems when I structure it that way, but when working on applications that pass the same model from their API down to their database, or vice-versa, it's always full of the same types of problems that comes with tight coupling.
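
A minimal sketch of the boundary mapping (hypothetical models):

  // Database model - whatever the ORM hands back
  type UserRow = { id: number; email: string; password_hash: string };

  // API model - password_hash never crosses this boundary
  type UserDto = { id: number; email: string };

  function toDto(row: UserRow): UserDto {
    return { id: row.id, email: row.email };
  }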


I think there are certain types of boilerplate that used to be truly onerous that are now much less so because of types. If there are 3 different files that need to be updated to add a field, and forgetting one is only going to error at runtime (maybe even only if you're exercising that new field), that's horrible. But 3 files that require updating where the code won't compile until you've added all three is way way less of a problem.


> But 3 files that require updating where the code won't compile until you've added all three is way way less of a problem.

I think this is only really possible in a relatively small subset of programming languages, even among those with static typing. At a minimum it seems like it would require a type system that didn't allow optional properties (by default) and did distinguish between nullable and non-nullable types.

Unless I'm missing something. Would love to see some examples of this done right. Or at least examples of languages you've done this in.


Optional types can get you for sure. I've been doing this with typescript and it's been alright. Prisma -> my domain model -> ApolloServer


This is exactly my experience. Glad to see I'm not alone, sad to hear that it's so common out there.


our problem with tRPC is that we don't have an easy way to test the endpoint in say, curl or postman.

there is a “REST Wrapper” project out there, but learning that was even needed was … fun

I don't mind it, we found other ways to test

Out of curiosity, how do you add a memcache to tRPC if you don't want to write directly to the prisma database?


I use trpc playground. It takes a few lines to set up.


There are libs like ts-rest which I found to be less magical and easier to test.


We're building a tool which lets you track schema changes over time and also gives you an approval workflow for schema changes.

Would that help your team?

Happy to give you a demo if you reach out on Twitter dms or email (alex@trpc.io)


is it git?


Care to elaborate further? I've been building on top of trpc and we even use it for inter-service communication.


We've used an API style similar to tRPC at Notion, although our API predated tRPC by 4 years or so.

You can build this kind of thing yourself easily using Typescript’s mapped types, by building an object type where the keys are your API names, and the values are { request, response } types. Structure your “server” bits to define each API handler as a function taking APIs[“addUser”][“request”] and returning Promise<APIs[“addUser”][“response”]>. Then build a client that exposes each API as an async function with those same args and return type.
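
A rough sketch of that shape (hypothetical API and transport):

  type APIs = {
    addUser: {
      request: { name: string; email: string };
      response: { id: string };
    };
  };

  // Server: one handler per key, typed against the map
  const handlers: {
    [K in keyof APIs]: (req: APIs[K]["request"]) => Promise<APIs[K]["response"]>;
  } = {
    addUser: async (req) => ({ id: "u_" + req.email }),
  };

  // Client: one typed async function per API name
  async function call<K extends keyof APIs & string>(
    api: K,
    req: APIs[K]["request"]
  ): Promise<APIs[K]["response"]> {
    const res = await fetch("/api/" + api, {
      method: "POST",
      body: JSON.stringify(req),
    });
    return res.json();
  }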

We use this strategy for our HTTPS internal API (transport over POST w/ JSON bodies), real-time APIs (transport over Websockets), Electron<>Webview APIs (transport over electron IPC), and Native (iOS, Android)<>Webview APIs (transport over OS webview ipc).

For native APIs, the “server” side of things is Swift or Kotlin, and in those cases we rewrite the request and response types from Typescript by hand. I’m sure at some point we’ll switch to a binary based format with its own IDL, but for a single cross-language API that grows slowly the developer experience overhead of Protobuf or similar hasn’t seemed worth it.


In our current project with a TS frontend and Python backend, we use an OpenAPI schema as the source of truth and openapi-typescript-codegen [0] to interface with it on the client side. While not perfect, it provides a very nice interface to our API with request/response typings.

I also wrote a 10-line mock API wrapper that you can call as mockApi<SomeService["operationName"]>((request) => response), and it will type-check that your mock function implements the API correctly and return a function that looks exactly like the real API function.

[0]: https://github.com/ferdikoomen/openapi-typescript-codegen


Can second this approach. At a past job we did the same, except to connect the frontend to a Go backend.

I really like that the OpenAPI approach is language agnostic, and makes it relatively simple to support SDKs for many other languages if needed. For any company where the API itself is a product, OpenAPI is great.


We use OpenAPI internally for communication between systems. It eliminates an entire category of bugs, and we can offload testing of schema conformance entirely to third-party tools. We've never had a bug due to a mistake in calling an internal API that I know of. And we get both internal documentation and a web UI for free via SwaggerUI and Redoc.

Good zero-config OpenAPI support is one of the best features of the FastAPI framework in Python. The "fast" part refers to the speed of getting a basic product up and running.


Another great library to generate TS types from OpenAPI is https://github.com/drwpow/openapi-typescript . It provides the types as single objects you access via indexing, which is pretty nice. There's a partner library to generate a typed fetch client.


Do you know if there's something similar for Python? Or what approach would you use for Python?


Like generating pydantic models or dataclasses from an OpenAPI schema? I haven't needed to go in that direction myself, but this[0] looks promising!

Apologies if I've misunderstood your comment

[0]: https://koxudaxi.github.io/datamodel-code-generator/


This looks interesting, thank you!

I'm basically wondering what would help us produce (automatically?) a Python HTTP client based on an OpenAPI spec, and achieve a development experience similar to using drwpow/openapi-typescript.

drwpow/openapi-typescript will generate the whole schema, along with paths, request and response models, as well as path and query parameters. In the IDE, all client methods will be type-safe and autocompleted.

Example:

  client = createClient<GeneratedSchema>();
  
  client.get("/my/autocompleted/endpoint/{some_param}", { params: { path: { some_param: "foobar" } } })

koxudaxi/datamodel-code-generator seems to generate the response models, but not much else. It seems we would have to manually write wrapper methods for each endpoint, manually specify the parameters and request models, and use a type hint for the return value of each method with the generated response model.

Example:

  def my_wrapped_endpoint(...) -> ResponseModel:
    pass

  client.my_wrapped_endpoint(some_param="foobar")

I'd like to avoid any manual wrapping or at least minimize the amount of it. Basically: to achieve a similar experience to what we have in TypeScript.

I hope my question is a bit clearer now. I'm not that familiar with Python, so I will appreciate any guidance regarding this problem. :)


Gotcha. I’m not sure what options are out there. Haven’t needed to do any codegen to python - only from python.

For all its issues, the JS world typically provides an excellent developer experience and takes types far more seriously than python.

Hope you find something that helps, and if you do I’d love to hear about it.


Thanks! I'll keep looking for a solution and keep you in the loop once we figure something out.


This is the way. tRPC adds unnecessary complexity over simply inferring types. My theory is that no well maintained and promoted library adopting this approach has emerged, and that’s why you don’t see it discussed very often.


Is there some trick to doing validation of request data using this process? That's a valuable part of using something like tRPC, JSON Schema + type generation, zod, etc.


We use an internal validator library that we infer request types from. It’s similar to Zod (but also predates it by a year).

I’ve also spent some time on a Typescript type to X compiler. My first prototype is open source and targets Thrift, Proto3, Python, and JSON schema: https://github.com/justjake/ts-simple-type/tree/main/src/com...

I'm not happy with the design decision in that codebase to try to "simplify" Typescript types before compiling, and probably won't continue that implementation, but we have a few internal code generators that consume TS types and output test data builders and model classes we use in production.

I want to open source some of those bits but haven’t found the time.


Deepkit is a fantastic solution for this. It uses a compilation step to inject metadata about types into plain JS.

https://deepkit.io/


Deepkit looks really cool, but it’s so complex on the inside and leverages a forked/patched Typescript and requires full typecheck before emit.

What happens if the Deepkit guy retires? What if I want to run my code without waiting for 11 minutes of typechecking? What if there’s a bug somewhere in there?

There’s way too much risk for me to consider Deepkit for production.


Yeah, I’m reading their sample code and wondering if this is just type-imbuing wrappers on top of XHR calls. It even asks you to provide the generic argument in the invocation (this sucks for trying to keep your dependency tree in order).



That seems to be completely missing validation. Typescript types are at their worst when they are lies, and the actual shape of the data is something completely different.


It's copied and adapted directly from their RPC docs, validation would just be busy work.

https://github.com/transmission/transmission/blob/main/docs/...


For Transmission, maybe. In general, validation is very much not "busywork".


On top of automatically generated types (eg. openapi, graphql), I would say it is.

Seems like you're just inserting yourself and ranting about typescript where it's not really applicable.


Lovely attitude, you have a nice day too.

There's others expressing the same concerns, for example https://news.ycombinator.com/item?id=37101393


What's your solution though?

To me, if the server has updated its schema and the client has old code, the server responds with an error, the user sees "something went wrong".

And if the validation fails on the client side, the user sees "something went wrong"

And if the client isn't doing validation of what's returned, and an error gets thrown because it tried to access a now missing field, the user sees ... "something went wrong"

Intentionally maintaining multiple versions of an API, or making sure your API changes are backwards compatible (like adding new fields and marking old ones as deprecated), those are solutions but definitely require organizational effort.

All totally expressible via Typescript though!


With an explicit error you get from validation (like Zod, as used in tRPC), you could notice that your client is out of date and either refresh automatically or prompt the user to refresh[1].

The benefit from validation is that the error is easily recognizable & can be handled nicely, even in just one place. Without validation, you get things that Typescript claims are impossible, for example exceptions on code paths that look like they cannot throw, and you can't easily tell why any of that happened -- that's a recipe for miserable debugging. Of course, with less busy, less critical, sites you might not notice or care.

[1]: Refresh could cause loss of user input (e.g. text just entered), depending on the app that might matter and need browser-side storage or something along the lines of the "local first" manifesto. Easy kludge for simple apps with smart users is to make the user copy the text, click refresh, paste.
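
Concretely, the "easily recognizable" part might look like this (a sketch; ResponseSchema and promptRefresh are hypothetical):

  const parsed = ResponseSchema.safeParse(await res.json());
  if (!parsed.success) {
    // One recognizable failure mode instead of a random TypeError later;
    // handle it in one place, e.g. prompt the user to refresh.
    promptRefresh(parsed.error);
  }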


Isn't that only for one case of errors though (server not matching expected response)?

One solution I thought was clever: a friend's single-page app would hard-reload on navigation if there was an update to the assets. Not sure how feasible that would be for schema errors though.


Between that and catching 400 Bad Requests coming back from the server, you're pretty much catching any communication errors resulting from version skew.

SvelteKit can poll to check server version and refresh, both on timer and if assets are 404. But as you implied, that doesn't help with API evolution and e.g. users interacting with a form (apart from increasing the likelihood the page will refresh soonish due to navigation).


For cross-language, I can recommend Fern, which works with OpenAPI

http://buildwithfern.com


You can recommend it in what context, from openapi (as they claim https://github.com/fern-api/fern#starting-from-openapi ) or from their ... special ... definition schema?

For those wanting less talk, moar code: https://github.com/fern-api/fern-java/blob/0.4.2-rc3/example... -> https://github.com/fern-api/fern-java/blob/0.4.2-rc3/example...

Regrettably, I wasn't able to readily find a matching openapi example, likely cause they really seem to believe their kid-gloves format is the future (or lock in, depending on how bitter one is feeling)

---

confusingly, their repo has a "YCombinator 2023" badge on it that just links to the badge itself. Some Algolia for Launch HN didn't cough up anything, but there was a Show HN I found: https://news.ycombinator.com/item?id=34346428


Disclosure: I'm a contributor to Fern.

This is good feedback that we need to provide more examples starting with OpenAPI instead of with the Fern Definition. For some context, we convert the OpenAPI spec into a Fern definition and then pass that into the code generators.

If you want to see some real world examples, check out these links:

- https://github.com/vellum-ai/vellum-client-generator/tree/ma... -> https://github.com/vellum-ai/vellum-client-python

- https://github.com/Squidex/sdk-fern/tree/main/fern/api/opena... -> https://github.com/Squidex/sdk-node

Worth calling out that you can go from the Fern Definition -> OpenAPI anytime to prevent lock in.


Similarly, we built a typed-client for inter-service communication using lambdas at Robocorp.

The requests and responses are inferred from the interface (published following semver) defined in Zod (including the errors that are following HTTP-like conventions: 401, 409…).

It also includes “hooks” for pre and post processing on the server side (effectively middlewares)


I built something that sounds very similar at my last company called crosswalk. Open sourced here: https://github.com/danvk/crosswalk

One thing that worked surprisingly well: codegen TypeScript types from your database and use those in your API schema.


I'm a big fan of tRPC. It's amazing how it pushed TypeScript-only stacks to the limit in terms of DX. Additionally, it made the GraphQL community aware of the limitations and tradeoffs of the query language. At the same time, I think tRPC went through a really fast hype cycle, and it doesn't look like we're seeing a massive move away from REST and GraphQL to RPC.

That said, we see a lot of interest in RPC these days, as we've adopted some ideas from tRPC and the old NextJS. In our BFF framework (https://wundergraph.com/) we've combined file-based routing with RPC. In addition to tRPC, we're automatically generating a JSON Schema for each operation and an OpenAPI spec for the whole set of operations. People quite like this approach because you can easily share a set of RPC endpoints as an OpenAPI spec or postman collection. In addition, there are no discussions around HTTP verbs and such; there's only really queries, mutations and subscriptions.

I'm curious what other people's experiences are with GraphQL, REST and RPC style APIs. What are you using these days, and how many people/teams are involved in/using your APIs?


Garph is like tRPC but for GraphQL: https://garph.dev

For REST APIs there's ts-rest (https://ts-rest.com), zodios (https://www.zodios.org) and Hono (https://hono.dev)

If your team uses multiple languages, there's Fern: https://www.buildwithfern.com


+1 for ts-rest

The people on the Discord are very helpful

https://ts-rest.com/docs/comparisons/rpc-comparison


> it made the GraphQL community aware of the limitations and tradeoffs of the Query language

Could you expand on that? Our graphql types are generated from our federated schema whenever it changes, and response types for queries / mutations are generated whenever you save a file.


With tRPC and similar frameworks, you infer client types from server definitions without including actual server code into the client. As much as I like GraphQL, one thing that IDEs really suck at is recognizing when generated code changes. E.g. with Jetbrains IDEs it sometimes takes forever until a change to the generated code is actually picked up by intellisense. VSCode is a little bit better regarding this. When you infer types in tRPC style, this whole problem is gone. You can even jump between the client usage and the server implementation. That said, this is not without a cost. Large Typescript codebases can slow down the typescript autocompletion and this approach only works best when both client and server are written in typescript and ideally in the same codebase.


Hm, yeah I don't think we had similar experiences with GraphQL then

> you infer client types from server definitions without including actual server code into the client

Our types are generated into their own package, so there's no chance of importing server code.


The thing that always strikes me is that verbs and paths are pretty tiny details (and also easy to abstract away using endpoint consumer generation libs) - then what else do you really get compared to your normal average web api? You still need to perform everything in each call to the backend anyway.


I love tRPC, it's by far the best fullstack DX I've ever seen and has such a brilliant API especially when combined with Zod.

Zod and tRPC are some of the most important projects in the future of TS imho, I think we're gonna see a beautiful bloom of tRPC inspired DX across the TS space in coming years.

Two projects that already clearly have tRPC DNA, attacking different use cases, are Ping's UploadThing (https://github.com/pingdotgg/uploadthing) and our Lusat (https://github.com/lusatai/lusat).


I don't really like Zod at all; it has very finicky and verbose types/generic params, and its object generic won't allow you to pass a data type. Instead you must pass Zod's own schema types as properties, which are very messy and unintuitive.

Another annoyance is that the type of validation errors varies depending on what kind of schema you’re checking against. This makes it unpredictable to handle errors and there’s always too many edge cases.

Lots of room for improvement.


I don't disagree (not that I feel the same level of annoyance, on balance I love Zod for the time it saves me) but what would you recommend as an alternative?

The new valibot.dev looks cool but I haven't tried it yet.


Just took a quick glance at the source[0] and the generic type for an object schema is a record of other schemas too, similar to Zod. At least that irk won't be solved by this, and for me at least it's the biggest one, because it makes typing a function with generic data that includes a schema that acts upon it unnecessarily difficult.

[0] https://github.com/fabian-hiller/valibot/blob/e6c53c8a4b0033...


I also recently did a personal project that uses tRPC and Zod and I agree that it was a fantastic experience. It also makes writing unit tests way easier.


Do you know of anything in this region for MongoDB? The two big problems there seem to be type-safe data CRUD, but also migration of data. I see people referencing Mongoose a lot, but that seems like it's a big downgrade from Zod/TS in terms of type safety.


I've never used it for Mongo, but Prisma is a pretty great type-safe ORM that supports Mongo: https://www.prisma.io/docs/getting-started.


Zod is...so freaking great. I'm feeling kind of hyperbolic at the moment, so I'll even go so far as: for a certain style of programming, adding Zod to a codebase is as big of a win as adding TypeScript.


Absolutely. Zod is to the runtime what TypeScript is to the IDE.


I would also recommend effect/core + effect/schema; the pattern of creating typed services starting from schemas is perfect there for people who are more functional-programming leaning.


I saw that a while back, but it seems like a large learning curve and I don't immediately feel much drawing me towards it. Will be keeping an eye on it anyhow.


I just checked out the Zod intro pages but I still don’t know what problem it’s trying to solve. What is it adding to TypeScript (which I am not conversant in either)?

Thanks for any insight!


runtime


"Runtime" is not a problem.


TypeScript-first schema validation with static type inference


There you go.

But schemas for what?


> Schema validation provides assurance that data is strictly similar to a set of patterns, structures, and data types you have provided. It helps identify quality issues earlier in your codebase and prevents errors that arise from incomplete or incorrect data types.

From: https://blog.logrocket.com/schema-validation-typescript-zod/

Most common use case I’ve seen is using schema validation for user input (like forms) so you don’t send junk off to the api, you can validate data at runtime (whereas typescript checks your types statically when compiled).
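
A tiny example of that use case (sketch):

  import { z } from "zod";

  const SignupForm = z.object({
    email: z.string().email(),
    age: z.number().int().min(18),
  });

  // TypeScript checks types at compile time; Zod checks the actual data at runtime
  const result = SignupForm.safeParse({ email: "a@b.co", age: 17 });
  if (!result.success) console.log(result.error.issues); // age below minimum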


Thanks for the reply! I know what schema validation is, but didn't see what kind of schemas Zod was intended for. In other words, what is it parsing? That post you linked to looks very informative.


I wrote a reply but then I found this and it’s a perfect explainer: https://www.totaltypescript.com/when-should-you-use-zod

Matt is awesome! If you have any more questions I’ll do my best to answer them.


I'm wondering how they handle version skew and migration.

Fields have a lifecycle. When they're introduced, no clients or servers know about them. Clients and servers aren't restarted all at once. They won't have the same version of the types. Data will be written using one version of a type and read using a different version.

If you can guarantee all binaries, running programs, and data gets upgraded (no persistent data exists) then it might not be a problem. As soon as you have multiple organizations involved, guaranteeing all apps and servers sync to the latest version of a library, rebuild, and redeploy will be difficult.

Static type checking assumes no version skew. All the code in the binary got built with the same version of the library defining the types. It's a closed world.


Pretty much "you do it". Zod can accept unrecognized fields, and then you move to optional fields, and finally once the dust settles you can make the field required.

Of course, the demographic that most desires what tRPC does is the least likely to think about that sort of stuff.


That's why it's a tool for web apps


Web apps can still have the problem of the server and client getting out of sync.

1. Plenty of people deploy their frontend and their backend separately.

2. Even if frontend and backend are deployed at the same time, there's still the case where the user has the old version of the client loaded in their browser as a new backend gets deployed. I've seen some web apps tackle this by periodically checking if a new version is available and either refreshing the page or prompting the user to refresh the page.


Even with webapps, during a binary rollout you can have clients reaching different versions of the app on each request (that's basic version skew).


SPA web-apps still have long-lived sessions


Of course you don’t need schemas or code generation if you are only targeting one language.


I have used tRPC for two ~50k loc web applications and love it. The DX is incredible. But I feel like tRPC had its hype time some time ago, so RSC's hype quickly caught up. These days everyone speaks about whether or not to use RSC while there is such a great stable solution like tRPC out there. I am not against RSC, but there is way too much discussion about it. tRPC is a super pragmatic way to create applications these days!

Edit: RSC = React Server Components - which come with their own read/write data philosophy.


For what it’s worth I have never heard of RSC and it’s not googleable. Have a link?


React Server Components. Sorry, I edited the comment above.


I believe the hype of RSC is just a natural part of it being a NextJS/React core feature from those teams, and not a 3rd party package like tRPC.

React and Angular won by being backed by FAANG first; people love to choose FOSS backed by big names, and for some reason anything else gives them pause.


Believe it or not, not everyone is even using a flavour of React.

One nice side effect of using tRPC is that you can pack up and move to a different framework on the front-end if desired, and a lot of the tRPC work can be used directly


What does the Royal Shakespeare Company have to do with RPC?

(or please define/link acronyms folks may not know!)


I'm guessing React Server Components which is apple to oranges with tRPC.


it is. But RSC makes many library authors and maintainers question whether their library is still needed in an RSC world and, if so, how they can support it. Same story with tRPC https://github.com/trpc/trpc/issues/3297


I think it makes people wonder whether rsc are needed, not the libraries


Heya! Creator of tRPC here, just wanted to drop by and say thanks for all the love ♥


Hey Alex, I had the luck to learn about tRPC directly from you when working at an open source company that used it for a while.

I quite remember tRPC being the only unknown part of the stack when I started there, and I was a bit scared, but tbh it was pretty easy to pick up and it's an awesome abstraction for building the server side of fullstack codebases in typescript.

I always liked GraphQL but if you're working solo it doesn't make that much sense to have the GraphQL api as contract. With external devs a little bit more.

+1 to OpenAPI and being able to generate code, SDK's, docs, automagically.


Hey dude! We met in the prisma conf in Berlin a couple of years ago and had some beers after with an Aussie dude, since then I’ve used nothing but tRPC in all of my projects and couldn’t be happier, thanks for the hard work!


This is the first time I’ve even seen an emoji in an HN comment. What?!

Unicode Character “♥” (U+2665)


How about emoji in a HN submission?

https://news.ycombinator.com/item?id=36159443


Hadn’t seen this. Neat!


Thank YOU Alex!

This project has done incredible things for TypeScript


you are awesome!


I've used tRPC and Next.js for a couple of personal projects and it's been a great experience. Hard to beat on iteration speed, especially when used with a pre-configured template like Create T3 App: https://create.t3.gg/.


Does it now work properly with Next 13 and server components?


I think patterns and best practices are still yet to be figured out. I believe you can create a caller in a server component and it should work (https://trpc.io/docs/server/server-side-calls#create-caller), but the pages router appears to be a battle-tested solution.


In server components you can just await in the component itself so you don't need a solution like tRPC.


Could you expand on this?

You don't need it, but you certainly can prefer to use it regardless?

Hoping react-server-components <> trpc gets solved soon


If you use Server Actions, you can just call a server function from your frontend directly, no need for middleware like tRPC

https://nextjs.org/docs/app/building-your-application/data-f...


Thank you I only briefly read about them before and didn't consider them properly.

Will try out calling the database directly there, but I kinda liked how with tRPC I can add validation/auth checks etc. Will have to see how I develop my own strategy for that w server actions.


You can use zact for that

https://github.com/pingdotgg/zact


I'm using it with Next 13, but not using the new experimental app folder yet. No issues yet.


I hadn't heard of tRPC until a developer I was working with raved about it and it just seemed like such an obvious good after I learned about it.

We built a T3 app (tRPC, Next.js, Tailwind, TypeScript, Prisma) together (if you'd like to check it out https://github.com/stytchauth/stytch-t3-example).

Type safe APIs while working in TypeScript is just so helpful.


Dropping in a tRPC use case that I've really got a lot of mileage out of: communication between the Electron main and renderer processes using https://github.com/jsonnull/electron-trpc. Traditional Electron IPC is hard to do type safely, which electron-trpc solves, and the react-query integration (meaning you get automatic type-safe hooks to issue the requests) is really nice.


Hey there, author of electron-trpc here. I'm always happy to hear that folks get value out of the integration and have a good time using it. Thanks for the mention!


So what do you do when you decide that you don't want to use JavaScript anymore on either side of tRPC? (switch to something else on the server, or write a native mobile app, etc)


Use TypeScript instead? (semi-joking)

The way I've been approaching this lately is TypeScript on the frontend, then a thin TypeScript layer on the backend (via Deno), with those two pieces connected with tRPC. The real backend guts are in Rust (or whatever), and the backend tRPC layer talks to the Rust stuff with gRPC.

So something like this:

    [(TS web client) <--tRPC--> (TS thin backend)] <--gRPC--> (Rust service)

This is a bit awkward, but honestly worth it for what you get with tRPC. One thing that took some getting used to is with tRPC the line between "client" and "server" gets blurry, which makes me uncomfortable for all sorts of reasons but in practice works well enough to make it not worth worrying about for now.


I'm honestly pretty happy with TypeGraphQL. TypeGraphQL works code-first and lets you integrate request-scoped DI for resolvers, which makes writing more complex resolvers significantly more pleasant.

Admittedly for the web front end I couldn't find a satisfactory tool so I built typed-graphql-builder (https://typed-graphql-builder.spion.dev/). You do have to run it if your backend schema changes, but not when your queries change as the queries are written in typescript and inferred on the fly. (I should probably write a watch mode for the cli, that should largely take care of the rest of the toil when quickly prototyping)


What's the point of the middle layer? Seems like an extra step and a language change for no reason. Why not just expose HTTP from your Rust stuff?


Because you lose all the stuff that’s nice about tRPC.

The experience of building a tRPC app is very different from building an app that talks to a traditional REST API. The front and back end with tRPC are very tightly bound. In a way the backend part of your tRPC app becomes the real consumer of your actual API.


Okay. I see 3 main benefits of the tRPC experience.

1. You don't need to manually write HTTP routes on the server side.

2. You don't need to manually write request code on the client side.

3. There's a type contract between both.

I don't see how this "thin server" approach helps with any of these.

1. You're no longer writing HTTP routes, but instead you're writing gRPC. You haven't eliminated the work, you've just changed the technology. If you happen to be integrating with a pre-existing gRPC deployment, why re-invent the wheel with custom TS code instead of using one of the many gRPC->HTTP transcoders?

2. Modern web frameworks have so many abstractions on this pattern that it's a non-issue at this point.

3. Your thin server is effectively a mapper from gRPC to HTTP endpoints. That's the type of thing you can build a spec from. If you've used a transcoder and have a spec, you can codegen your client library with the correct types.

I think tRPC works best for making highly cohesive full stack apps. Using it as a middleman for your backend seems weird to me.


Why not just expose grpc via grpc-web with the envoy proxy?


There are more important problems to solve when trying to boot up a self-sustaining startup or project than rewriting your app in a different technology for no apparent reason.


There are very good reasons to go native for a mobile app. React-native can be a huge time sink depending on what you want to do with the app.


Surprised this thread isn’t full of scary RPC stories and SOAP CORBA under your bed at night. Either the wave is gone or it’s just a little early.


I'm not too surprised. Most people for whom TypeScript is an option won't have been exposed to generations of RPCs that fail to deliver simplicity. So it isn't going to be on their radar.

I've been using gRPC and REST'ish+JSON APIs for years now and what I find puzzling is that REST+JSON tends to mean a lot more work, less pleasing code than gRPC, and yet people prefer it. Not because it leads to simpler, better, faster less error prone code (it doesn't. Quite the opposite), but, I think, because people feel they can understand it.

The people over at https://buf.build have been doing a great job trying to tame gRPC btw. The protoc toolchain from Google has been uniquely horrible to work with, but the 'buf' tool from said company has been a real life-saver. Not to mention their Buf Schema Registry, which admittedly I have only used on a few projects so far, but should migrate more projects to.

Though in general, I feel that RPC mechanisms that are too closely tied to a given language are a waste of time. But that's me. tRPC isn't something I'd get into even if I was doing server side TS. It just doesn't seem like a good long term choice.


"I feel that RPC mechanisms that are too closely tied to a given language are a waste of time" - THIS RIGHT HERE! - The whole point is to allow people using different languages to collaborate - your analysts using python/R should be able to talk to your Java/C++/Rust/C#/Go folks and web-frontends (and server sometimes) Typescript/Javascript/etc.

One of the worst examples (in the past) was the python pickle. People have overused it, and it's not even compatible between some python releases. There are many other examples where something works really neatly, but only for that language, or even only for a specific major or minor release of that language.

There is protobuf, cap'n'proto, flat buffers, fidl, thrift, etc. - many better choices than just one that works only for your language.


People sticking with REST+Json usually don't want to have a compilation step for the API exchanges layer. That means more validation and worse tooling, but also better legacy and potentially future compatibility, and way easier debugging at any stage.

I feel this is the same debate between scripting languages and compiled languages, both provide different trade-offs and I don't think we'll see one completely disappear at any point in time.


> People sticking with REST+Json usually don't want to have a compilation step for the API exchanges layer.

Hmm. I'm not sure what you are saying.

The way people tend to try to consume REST APIs is by generating client (and/or server code) from a spec. For instance OpenAPI 2.0. At least on Go the tooling for that leaves something to be desired, and the generated code isn't exactly beautiful. So you either depend on a library that someone generates from the spec, and then shares, or you generate it yourself.

(We do this for a bunch of languages for our APIs, and the clients are ... not uniformly beautiful :-))

If we're talking about writing the REST client by hand...well...I'm not sure why one would want to do that. (Nor am I sure that's what you meant, so I'm not accusing you of that).

My current workflow (in Go) for consuming gRPC interfaces is to use the buf.build package proxy. Which means I just add an import, run go mod tidy, and I'm good. I don't even need the tooling installed. As I think it should be.

I suppose this could be done for OpenAPI 2.0 too. You stuff OpenAPI 2.0 specs in and you get code for a bunch of languages out. (Someone must surely have built this already?)


I agree that sticking to just a library may be not a good choice. Some standard would be better. But the last few times I discussed it, some folks just slapped the RPC stigma on everything, even if it was just your regular REST-y-ish calls underneath. That's absurd. POST json, receive json back is okay. Wrap it into an `await server.<method>(<args>)` call and now it's an RPC mudball. The current top commenter is removing tRPC for tight coupling. I wonder what prevents them from tight coupling over http, or over just function calls when on the same "side". It's a mindset that they are removing, not a specific technology.


> Some standard would be better

Careful what you wish for. Standards that aren't quite good enough can be worse than not having standards. Because then at least then people will keep looking for something that might be better.


I've just started using tRPC, and I'm in love with it so far! I use GraphQL at work, but I always felt like the boilerplate is too much for my hobby projects, and I wasn't really happy with the code-first frameworks that I could find (especially with the ones that support proper Relay integration). Thanks a lot for the creators! Also if anyone is enticed by this, I highly recommend trying it out!


Garph has no boilerplate and comes with Relay out of the box: https://garph.dev


Oh yeah, I've heard about it but by that point I was using tRPC which fits my needs for now. I'll be looking at it later though when I'll need a GraphQL api! Thanks for letting me know!


I've personally been using ts-rest for my projects, which is traditional REST but allows one to autogenerate types and API docs from a zod schema.

tRPC seems cool, but seems riskier to go with an alternative to REST or GraphQL, especially if you intend to make your API public and/or have multiple consumers of your API.


I’m also singing ts-rest.com’s praise these days. Admittedly a small project with <25 endpoints atm, but holy crap, I am saving SO MUCH TIME on implementations, documentation, and bug hunting.

I have separate packages (in a monorepo) for the “contract definition” and the “API implementation” (which is currently NextJS, but likely Express in the future — and the migration path looks like very smooth sailing)

Request/response validation, OpenAPI spec, fetch & react-query clients without any additional effort. Mocks for tests are auto-generated from the contract definition via MSW. It almost feels too good to be true.
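
For anyone curious, the contract package is roughly this shape (a ts-rest-style sketch; names are made up):

  import { initContract } from "@ts-rest/core";
  import { z } from "zod";

  const c = initContract();

  export const contract = c.router({
    getPost: {
      method: "GET",
      path: "/posts/:id",
      responses: { 200: z.object({ id: z.string(), title: z.string() }) },
    },
  });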


tRPC is not at all intended to ever be a public API, the way REST or GraphQL is. It's meant to be a tight, private connection between your client and your server.

If you use tRPC in your stack but want a public API, you will almost certainly need to build that out separately, via traditional REST, or GraphQL, or maybe gRPC. This feels all kinds of wrong initially, and there are some obvious and not so obvious disadvantages to this, but honestly it's not all bad when you get down to it.


At an open source company where I worked briefly, we used tRPC, and I was tasked with making the Enterprise API.

We went with a nextjs app, abstracted our tRPC routers into a package in our monorepo, and used the different tRPC routers from both our webapp and the API apps.

This works great!

You end up not repeating your logic all over the place, and make thin wrappers on your nextjs api endpoints or whatever to handle the differences between implementations


I wish tRPC could interact with a Go app, but it's limited to TypeScript.



I'm using oto on a personal project and really enjoying it.

In my case, we generate a Typescript API client for a React app. Very nice companion to Golang.


Can highly recommend over plain REST as an alternative to graphql. The iteration speed with instant feedback is incredible. Perfect pair with NextJS.


So is it just TS to TS ?

If it's the same language, I just had a common library implementing all the types for server and client; no need to get any more fancy than that. Crossing the language barrier is the problem for type safety.


I was really hyped about tRPC until I started using Remix. I still think tRPC is great, and for most projects you don't need more (specifically GraphQL).


RPC IDL keeps being reinvented by those that fail to understand where we came from.

Pop fashion industry.


OpenAPI, JSON-RPC, and JSONSchema have existed for a pretty long time now. The industry has more or less standardized on JSON for data interchange and parsers/generators for JSON abound in every programming language, so it makes sense to do all this stuff with JSON.

I've never seen IDL before but I looked up some examples and it does seem useful for RPC calls. But I'm not exactly eager to switch, it looks like it's meant for a much more powerful form of RPC than I'm willing to touch, and no tooling I know of supports it.

The JS ecosystem of course is still affected by hipster disease (everyone seems to think their own wacky idea is groundbreaking). But the existence of a framework like this, built on well-established JSON-based standards shouldn't be surprising.


IDL in this context is a generic term that stands for Interface Description Language.

I think you kind of made his point for him.


> I've never seen IDL before but I looked up some examples...

Proving my point.

Search for Sun RPC, DCE RPC, XDR.

Take note when they came up, and everything in between until today.


Yes, but those systems have been obscure and uncommon for longer than a lot of today's programmers were even in the field. In the intervening time we all standardized on JSON and rediscovered or reinvented similar concepts all using JSON. Fine. It's great to understand and learn from prior inventions, but it's not like we're all going to switch back to IDL.

Consider that JSON being ubiquitous immediately makes it easier to adopt compared to a custom description language. If people actively used IDL today, there would probably be a lot of demand for a JSON variant or subset.

I'd make similar arguments about using JSON vs S-expressions for data interchange, but JSON works well with both Javascript and HTTP and everyone standardized on those, and maps cleanly to basic data structures in just about every modern programming language.

These JSON-based tools are actually very much like Lisp in that both the interface specification and the data are expressed in exactly the same format/language. This is not true of a lot of these older standards, and seems to have been the failed promise of XML.

IDL does look like it maps nicely to typed function calls in most languages, but it lacks the advantage of being expressed in a standard format/language that is already well-supported for other tasks, and seemingly doesn't impose any requirements on how the data itself is transmitted.

For an example of why language/format matters, consider the tool c2ffi (https://github.com/rpav/c2ffi). It generates a JSON description of a C header file. The header file itself is a pain to parse and extract information from. But once you have a tool to do it and put that information in a standard format, you can now build an FFI wrapper in just about any other language in at least semi-automated fashion. It makes the easy parts easier, compared to other systems like SWIG and GObject where the interface format is totally custom and you're mostly reliant on a single implementation to make it all work.

If anything, let's be grateful that the good ideas of the past are being rediscovered and reinvented in a way that might grant them more longevity and broad usefulness than they had in their first life. Did you use IDL? What was your experience like? How would you compare it to something like gRPC?


That’s an interesting statement, because from my vantage point “longevity” seems to be way, way down the priority list for pretty much any technology in JavaScript world. (The major exception being pure JSON.) It feels like if you open any JS codebase from more than 18 months ago, half the libraries will be deprecated or abandoned (not just the version, the entire library). Major patterns and frameworks reinvent themselves incompatibly on a biannual basis.

The purpose of an RPC IDL (protobuf being a “modern” example) is that you define the interface and encoding in a way that will still be functional when your “standard language” is long forgotten or unrecognizably different.


So what's wrong with using something based on JSON as the IDL?


Only obscure to developers living in the present, without caring about learning where we come from, or presenting novel ideas without proper research.

Android Binder, XPC, COM and gRPC are yet again another set of IDL quite present in current times.

JSON? I thought everyone was using YAML now. /s


I am upvoting this because it's a good idea and you have a really nice web site.

I wrapped up something like this myself a few weeks ago, so I am in a position to offer some suggestions other people may not think about.

The most common mistake I see with RPC, especially WebSockets, using Node.js is the assumption that they must be based on HTTP. My attempt is based upon WebSockets, RFC 6455, where RPC is a generic term for socket-based communication streams not specified against any single frame definition scheme. Since these technologies are raw sockets that include their own conventions for handshakes, optional frame header definitions, and possible security conventions, there is no need for HTTP. In the OSI model HTTP is layer 7 where TCP sockets are layer 4. In Node that means just using the net/tls libraries as opposed to the http-based descendant libraries. Since, in Node, the http library descends from the net library and https descends from tls, which descends from net, http always imposes overhead that TCP-based sockets do not require.
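
For example, a bare TCP listener in Node with no HTTP layer involved (a minimal sketch):

  import net from "node:net";

  const server = net.createServer((socket) => {
    // Define your own handshake/frame convention here - no HTTP required
    socket.on("data", (chunk) => {
      socket.write(chunk); // echo back, as a placeholder
    });
  });

  server.listen(9000);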

There are three benefits for executing a socket server over HTTP:

1) You are only using port 443

2) Simple and familiar implementation from Node

3) HTTP 1.1 is session-less, which allows anonymous untrusted connections. That is how the web works, but it's less ideal for a security focused implementation.

The reasons to not do this include CPU cost. Running sockets over HTTP increases processing overhead which reduces the number of concurrent streams you can offer and substantially slows down processing of incoming frames. In order to reduce your execution to a single port without sacrificing performance you could run HTTP over your socket implementation which allows you to customize your approach to security, but that would also require writing your own HTTP libraries.


My issue with websockets is, they are still being blocked in corporate proxies. I particularly see this in banks or research places, where sockets are blocked to prevent data leaking out of the network.


I used to work at a giant bank.

The reason they might be blocked by big banks is due to deep packet inspection. How that works is that the bank intercepts certificates on TLS socket establishment for normal web traffic so that the bank proxy becomes a formal "man in the middle". They do this to provide deep packet inspection on all encrypted traffic that goes out and comes in. That level of packet inspection is more challenging with something like RPC/WebSockets because it's a binary stream, where HTTP just uses plain text for its header data.

I can remember using Reddit, when I still used Reddit, when starting at the bank and I remember Reddit making heavy use of WebSockets that worked just fine from within the bank even though I was behind the bank's proxy. This was more than 6 years ago, but I believe the WebSocket traffic at that mega bank was just relayed through proxy just like the HTTP traffic and I also want to say Reddit served the WebSocket traffic different from the HTTP traffic, but I cannot remember for sure.


You can design the RPC layer to be transport-agnostic, then add various transport types: sockets, HTTP, in-memory (for testing)…
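A rough sketch of what that separation could look like (the interface and names are illustrative, not from any particular library):

    // Minimal message-oriented transport abstraction.
    interface Transport {
      send(msg: Uint8Array): void;
      onMessage(handler: (msg: Uint8Array) => void): void;
    }

    // In-memory transport for tests: two endpoints wired back to back.
    function createInMemoryPair(): [Transport, Transport] {
      const handlers: Array<(msg: Uint8Array) => void> = [() => {}, () => {}];
      const make = (self: number, peer: number): Transport => ({
        send: (msg) => handlers[peer](msg),
        onMessage: (h) => { handlers[self] = h; },
      });
      return [make(0, 1), make(1, 0)];
    }

The RPC layer only ever talks to Transport, so WebSocket, raw TCP, or HTTP implementations can be swapped in behind the same interface.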


I don't think that is correct. HTTP is managed by applications, but TCP packets are managed in the OS kernel. That distinction determines which approaches are available to you and how the layers interact when handling transport negotiation.


Telefunc is another alternative without the boilerplate where on the frontend you can just import and execute the backend functions remotely. https://telefunc.com/



If anyone wants to split, make backend and frontend repositories separate, take a look - https://github.com/mkosir/trpc-api-boilerplate


If you wanna take the concept up a notch: https://rakkasjs.org/guide/use-server-side-query


> When it runs on the client, variables from the surrounding scope that you use in the server-side function are captured, serialized, and sent to the server. Since anyone can send any request to the server, you have to validate everything that comes from the surrounding scope.

This is mildly horrifying to consider without good tooling support.


On the next iteration, we're planning on providing a `createServerQuery` function which will _not_ capture the closure; instead it will return a function that can take an arbitrary number of arguments to be serialized. `createServerQuery` will have a required `validator` option that will also run on the server to validate those arguments.
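Purely as illustration, usage might look something like the sketch below (every name here is hypothetical; the API isn't final):

    // Hypothetical: nothing is captured from the surrounding scope,
    // only the explicit arguments cross the wire.
    const getGreeting = createServerQuery({
      // Required: runs on the server against the deserialized arguments.
      validator: (name: unknown): string => {
        if (typeof name !== 'string') throw new Error('expected a string');
        return name;
      },
      // Server-side body; only ever sees validated arguments.
      query: (name: string) => `Hello, ${name}!`,
    });

    // On the client:
    const greeting = await getGreeting('Ada');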


Love tRPC. I have built two side projects [1][2] with it now, and it's just so smooth. When I introduced the tool to some other people I worked with, they were amazed at how fast it is. It usually takes some time to create frontend schemas from backend endpoints, but tRPC is just so fast.

Thanks to the creator. Literally made me a more efficient developer!

[1] https://hackathon.camp/ [2] https://sheetsinterview.com/


Does anyone know if it's possible to build the ergonomics and DX of tRPC on top of gRPC-Web? tRPC is TypeScript/Node.js-based while gRPC-Web backends can be built in most languages.


DX for front or back end? The beauty of tRPC is that the types are derived/inferred from the backend runtime code (like, as you type). It would be nigh impossible to do that with grpc(-web) using proto files as the source of truth.

It's possible there's a project out there which could automatically produce proto files from something like zod, json-schema, etc. which could be directly interpreted by TS to provide similar (as you type) DX while still allowing some other language backend to consume the derived proto files (though the DX there would be less than ideal).

If you're just looking for similar TS clients/interfaces for grpc-web then I'd recommend https://github.com/timostamm/protobuf-ts which operates on plain JS objects. There's no new MyMessage().serialize(); instead the code generator mostly produces TS interfaces for you to work against:

    const myMessage: MyMessage = pojoConformingToInterface;
    const binary = MyMessage.toBinary(myMessage);


Can I just say ... sigh ... I miss

1) Java remoting itself

2) GWT's "emulation" of java remoting to the client side

It was SO very, very fast (and safe) to add a new backend interaction.

Plus I loved that you could be so evil as to just serialize a Java class and send it over RPC to be executed on the other side. A security nightmare (even though it was fixable if you really wanted to), but damn, you could do absolutely anything with that.


Is there some way to reduce message size in WebSocket scenarios? It looks like it uses JSON-RPC under the hood with strings for message names etc. I'm looking for ways to improve the throughput of an embedded WS server. One idea I had was for clients to negotiate a number-based encoding for these parts to reduce all unnecessary traffic. Is there any way to implement this with tRPC?


Why not just gzip the data?

It might not beat a bespoke hand-crafted protocol that minimizes message size, but your use of the word "negotiate" makes me think that that's not what you're talking about...
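For WebSockets specifically there's also permessage-deflate, which compresses frames transparently once both sides negotiate the extension. A sketch with the ws package, assuming that's the server library in play:

    import { WebSocketServer } from 'ws';

    // Clients that don't support the extension fall back to plain frames.
    const wss = new WebSocketServer({
      port: 8080,
      perMessageDeflate: {
        threshold: 1024, // skip compressing tiny messages
      },
    });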


The embedded device the server is running on has a very weak CPU. Since I'm trying to optimize throughput this would have quite an impact. It's fine if the first few messages are slow, the speed just has to pick up after initialization is done, which is why negotiation is okay for me. So e.g. the client requesting IDs for every method on first connection would be fine, and would keep complexity down.


I imagine you could write a custom tRPC link to send the data over something like MQTT which may be better for an embedded client - there's prior art in the likes of electron-trpc, showing how tRPC could be adapted to non-HTTP transports. Not sure how that interacts with the JSON-RPC bits of the subscription protocol though, if there's no way around it then plain MQTT may be better suited to your use case.


Thank you for the suggestions! I am specifically looking to optimize frontend-to-server communication, so WebSocket is a must. Is there some way I could inject a custom transform function between tRPC and the WebSocket? I could copy the existing WebSocket link and add my transform there, but it would of course be easier if I could just do my own wire handling with the existing code.

Essentially I'd like to inject a custom respond[0] encoder and a custom parseMessage[1] decoder, and same for the client.

[0]: https://github.com/trpc/trpc/blob/main/packages/server/src/a...

[1]: https://github.com/trpc/trpc/blob/main/packages/server/src/a...


Update: I was able to get this working by wrapping WebSocket and WebSocketServer! The API surface for the wrapper seems to be pretty minimal. If I do use tRPC in the rewrite of the embedded server, I'll implement a custom message scheme as an alternative and benchmark them.

Here's a gist with the wrappers (transporting the data as a JSON array instead of a dict): https://gist.github.com/TimonLukas/7c757a3b234344ad71e6bd5a6...

It's not ideal insofar as I have to parse the stringified JSON again. Some kind of real "encoder/decoder" parameter would be an amazing help.


Update: I've now implemented a small PoC with ID negotiation; you can see it here: https://gist.github.com/TimonLukas/c0bb7e8f9bde9d3d74d6b776b...

This will reduce message size quite a bit, since the data isn't sent as:

    { "id": 3, "method": "subscription.stop" }
but instead as:

    [3,1]
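The core of the trick (a sketch of the idea, not the gist's actual code) is interning method names into a numeric table that both peers agree on, e.g. by the client requesting the server's mapping once on connect:

    const idForMethod = new Map<string, number>();
    const methodForId: string[] = [];

    function internMethod(method: string): number {
      let id = idForMethod.get(method);
      if (id === undefined) {
        id = methodForId.push(method) - 1;
        idForMethod.set(method, id);
      }
      return id;
    }

    // { "id": 3, "method": "subscription.stop" }  ->  [3,1]
    const encode = (msg: { id: number; method: string }): string =>
      JSON.stringify([msg.id, internMethod(msg.method)]);

    const decode = (wire: string) => {
      const [id, methodId] = JSON.parse(wire) as [number, number];
      return { id, method: methodForId[methodId] };
    };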


Update: I've implemented a simple TS library around this: https://github.com/TimonLukas/ts-websocket-compressor

I will add wrappers for WebSocket/WebSocketServer in the future to allow using this without the library having to support it, like in the PoC with tRPC.


We're using tRPC at work, and it's quickly becoming the default for gateway APIs built for specific frontends. Everyone seems to love it, and combined with Zod it lets us iterate super quickly, as we pretty much only have to think about business logic and schema properties can easily be shared. My favourite feature, though, is Zod transforms: an ISO date string can be validated and transformed into a DateTime object before it reaches the business logic.
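For example, a minimal version of such a transform might look like this (using a plain Date here rather than a Luxon-style DateTime):

    import { z } from 'zod';

    // Validates an ISO-8601 string, then hands the business logic a real Date.
    const eventInput = z.object({
      startsAt: z.string().datetime().transform((iso) => new Date(iso)),
    });

    type EventInput = z.infer<typeof eventInput>; // { startsAt: Date }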

It also means we can remove the npm library of types that was previously used to sync types between FE/BE (for this reason alone, I really love it).

I found the docs somewhat lacking when I was trying to get the first project going, but after using it for 6 months I no longer need to look at them. It's been great. (Remember to have exactly the same versions installed on the FE/BE; using the same tsconfig makes things easier too. I spent too long on these obvious fixes... :P)


I assume the tight coupling would mean it can't be used for Capacitor mobile apps?



The title is misleading: there is a schema (defined with Zod, specifically).


I’m 99% convinced this is one of those things that will quickly go out of fashion and leave behind thousands of projects using “that legacy fox thing the previous devs wanted to use”


I find it really sad that these efforts always stop at working in language X, with no formal specification that would make it possible to create an interoperable version in a different language. I love TypeScript, but for various reasons one may need a different language on the server side. Yes, OpenAPI is a thing, but I have yet to see an OpenAPI spec + code generator that works out of the box and doesn't need a whole lot of fiddling and workarounds. I also understand that it's hard to create something truly interoperable - my previous job was building a use-case-specific typing system - but adding yet more single-language ?RPC implementations isn't really helping. Obligatory XKCD reference: https://xkcd.com/927/


I think part of why tRPC shines is because it's tightly coupled to TypeScript (and especially Zod, its schema validation library of choice - many of its features map 1:1 onto TypeScript concepts that don't exist in many other languages), which means it can avoid many of the issues that OpenAPI generators have. I'd also like to see a good TS-first OpenAPI client - Fern [0] is probably the closest I've seen.

In general in my experience, when you take away the constraint of inter-language interop, you get much smoother DX because you can take advantage of language features. A good example would be the lack of native date/time types in JSON - valuable for interop because different languages handle dates differently, but painful when you're going to/from the same language. Web applications are a special case, because the client-side is effectively constrained to either JavaScript or WebAssembly (except you'd still need at least some JS in the latter case), so it follows that you'll get the best DX if you have JS or TS on both sides of the stack, especially if you can set up your project in a way that lets you share code between both. Not always an option, but I've always felt more productive (as a full-stack dev) when I've been using TS on both the client and server, compared to TS on the client and another language on the backend.

[0]: http://buildwithfern.com/


:wave: Hey Mark -- I'm one of the primary contributors to Fern.

+1 to your comment about how needing to support multiple languages means you can't leverage certain language-specific features. We've tried our best to manage the trade-offs here, but there's a limit to what you can do.

If you have feedback on how we can improve the TypeScript client, feel free to comment here or create an issue on our repo (https://github.com/fern-api/fern)!


I prefer building an OpenAPI-compatible API over tRPC to avoid being stuck on TypeScript. I don't fault the creators of tRPC for their decisions. It's their project, and they don't have to build interoperability. You can always use gRPC for that.

I would prefer they focus on doing their one thing well than trying to please everyone only to end up pleasing no one.


end-to-end (as long as the backend is Node.js)


Does end-to-end have to be backend-agnostic to you? Do you feel the same way about end-to-end encryption? Would you say "end-to-end encryption, as long as the backend is Rust/Go/C"? Why not, and what's the difference? Most projects exist in an ecosystem, and in this case using the same language on both ends makes total sense, don't you think? They pull from the same repository, use the same language servers, and run mostly the same code.

Why is that a problem?


Until very recently, single-language frontend + backend stacks were not the norm.

As tRPC competes with solutions like GraphQL or REST APIs, where the backend can be implemented in a wide range of languages, I thought that limitation was worth pointing out.


"without schemas". How do you define the types that you have to submit on the client and server calls?


Lazy imports are not supported with tRPC, which makes serverless functions heavy and results in long cold starts.


This site used up 2gb of memory when I opened it, and apparently I was near the brink.


Is it just me, or is this over engineered? Brings me back to SOAP services and WSDLs.


     .input(z.object({ name: z.string() }))
What the heck is "z"?



Is there something like this but for JavaScript?


Not sure if you’re trolling. The whole point of this is type safety which requires…types.


It’s not the whole point. Making async calls, handling errors as usual, and subscribing to channels has better ergonomics (DX) than bare requests, HTTP streams, or WebSockets.


I've had lots of bugs, and types would only very rarely have caught them.


Okay…go fix your non-type-related bugs.


They're things that pop up during production usage, primarily from logic errors...


OK, looks great but as a manager and founder of a company that is looking into sponsoring OSS, how does this make me more money?


I'll bite.

With tRPC, changing an attribute of a class in your backend immediately tells you what broke in the frontend in your IDE. No need to recompile or regenerate anything.

Also, when developing your frontend, you'll have immediate autocomplete. Again without code generation or compilation.

That alone should save you dev time and prevent a ton of frontend bugs.
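As a concrete sketch of what that looks like (tRPC v10-style API; the router, names, and URL here are illustrative):

    // server.ts
    import { initTRPC } from '@trpc/server';
    import { z } from 'zod';

    const t = initTRPC.create();

    export const appRouter = t.router({
      greet: t.procedure
        .input(z.object({ name: z.string() }))
        .query(({ input }) => `Hello, ${input.name}`),
    });

    export type AppRouter = typeof appRouter;

    // client.ts
    import { createTRPCProxyClient, httpBatchLink } from '@trpc/client';
    import type { AppRouter } from './server';

    const client = createTRPCProxyClient<AppRouter>({
      links: [httpBatchLink({ url: 'http://localhost:3000/trpc' })],
    });

    // Inferred as string. Rename `name` on the server and this call
    // becomes a type error in your editor, with no codegen step.
    const greeting = await client.greet.query({ name: 'Ada' });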


Wouldn't optional code generation be rather desirable? Then you wouldn't need a monorepo, which a lot of people don't use.


It slows down the iteration cycle. When you make a change to your API definitions, you need to push the change to either a schema repository or into production, then regenerate the API client. At that point, if you've accidentally introduced a breaking change, your frontend is already broken. You can work around this with CI that detects breaking changes, but it requires a fair amount of work. Having both sides in a monorepo means that breaking API changes can fail CI and never make it into production.



