Hacker News
Building single-page-apps with PostgREST (polyglot.network)
161 points by nileshtrivedi on Jan 30, 2022 | 84 comments



This is kind of crazy but really interesting as a sort of extreme manifestation of data-centric applications. I say it's crazy, but startups like Supabase are essentially building the professional version of your idea.

The share of discourse on HN about PostgreSQL, SQL, and Databases is growing noticeably. [0] I, for one, hope we are seeing the pendulum swing back from code-centric architectures and NoSQL to a new golden age of data-centric architectures.

Next you need push and pub/sub semantics in PostgreSQL so you can build something like this [1]

[0] https://toddwschneider.com/dashboards/hacker-news-trends/?q=... [1] https://tonsky.me/blog/the-web-after-tomorrow/


I wrote postgresql-replicant [0] in order to do this. It’s not a novel idea. It lets you read the logical replication stream.

On top of this library you can do all sorts of things: replicate data to Kafka or SQS, write a web socket server to publish changes to clients, etc.

[0] https://github.com/agentultra/postgresql-replicant


> Next you need push and pub/sub semantics in PostgreSQL

It's fairly basic (then again, does it need to be more?), but NOTIFY exists: https://www.postgresql.org/docs/current/sql-notify.html
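
For reference, a minimal pub/sub sketch with NOTIFY; the table name, channel name, and function name here are all made up for illustration:

```sql
-- Hypothetical todos table; broadcast every change as JSON on a channel.
CREATE OR REPLACE FUNCTION notify_todo_change() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('todos_changed', row_to_json(NEW)::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER todos_notify
  AFTER INSERT OR UPDATE ON todos
  FOR EACH ROW EXECUTE FUNCTION notify_todo_change();

-- Any session that has run `LISTEN todos_changed;` receives the payloads.
```

Note the usual caveats: payloads are capped at 8000 bytes, and notifications are not queued for clients that are disconnected when they fire.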


If you consider PostgreSQL as a cloud platform, it has some useful properties: a well-understood API that is consistent across different products; built-in authorisation and authentication; reliable data storage without having to think too much about the underlying hardware; triggers that let activity in one product initiate a process in another; and support for running sandboxed code in several languages without needing a "server". You even get some extra benefits: your entire infrastructure can be dumped to a file, the infrastructure can be altered in a transaction that can be rolled back, and external services can be integrated as first-class citizens (more or less).


That's interesting - have a look at https://colossal.gitbook.io/microprocess/a-totally-new-conce...

All you need to do is map JWT to a Phantom DB User in PostgreSQL, then you can query SQL directly from the browser.
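
As a sketch of how that mapping can look (role, table, and policy names are invented; PostgREST switches to the database role named in the JWT's `role` claim):

```sql
-- A phantom role that browser sessions act as; it can never log in directly.
CREATE ROLE web_user NOLOGIN;
GRANT SELECT, INSERT ON todos TO web_user;

-- Row-level security keyed off a JWT claim PostgREST exposes as a setting.
ALTER TABLE todos ENABLE ROW LEVEL SECURITY;
CREATE POLICY own_rows ON todos FOR ALL TO web_user
  USING (owner = current_setting('request.jwt.claims', true)::json->>'sub');
```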


"Reversible SQL" is such a great idea. PostgreSQL does have pub/sub semantics via LISTEN and NOTIFY. At some level, we should think about removing the centralized database completely, and let all of user's devices synchronize with each other somehow. Perhaps SQLite database files synchronized with syncthing? That would really be fantastic!


CouchDB has a javascript client called PouchDB that can do multi-"master" replication and streaming. I wouldn't use it as a primary database, but the syncing story is pretty much top notch.


Postgres has logical replication[0], which the Supabase client uses for real-time data subscriptions.

[0]: https://www.postgresql.org/docs/10/logical-replication.html
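
A rough sketch of what that looks like at the SQL level (`wal_level = logical` assumed; the `test_decoding` plugin is used here only because its output can be read with plain SQL, while real consumers like Supabase use `pgoutput` over the replication protocol):

```sql
-- Publish changes from a table and create a slot to consume them.
CREATE PUBLICATION app_changes FOR TABLE todos;
SELECT pg_create_logical_replication_slot('app_slot', 'test_decoding');

-- Poll pending changes; each row describes an INSERT/UPDATE/DELETE.
SELECT * FROM pg_logical_slot_get_changes('app_slot', NULL, NULL);
```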


Yep, I actually pulled out the Supabase realtime code into an individual library so that you can listen to Postgres changes right in Elixir:

https://github.com/cpursley/walex


I am a huge fan of this stack. I love that Supabase is spending millions of VC money to popularise this stack: solving all the issues, improving PostgreSQL and PostgREST, writing documentation, giving conference talks, etc. It kind of sucks that the stack's success requires Supabase to die (the technology becoming a commodity, which it already is; the Supabase people did make everything open source, true to their promises).

Supabase has zero differentiation. They have a nice "PGAdmin", if you will, but no one will pay for entire database hosting because of an admin interface. When it comes to hosting databases, AWS, Azure etc. have a massive advantage.

Supabase will simply popularise the stack, and then when the time comes to actually host the database, people will move from Supabase to AWS etc.

Also, the best language for UI is not JS but something like FTD, if we continue to make the progress we are making. Hopefully FTD + SQL is all you need to know soon. Check out this design we are working towards to make this a reality: https://fpm.dev/journal/#explanation-of-dynamic-documents-an...


{supabase ceo}

I'm glad you like the stack

> kind of sucks the stack success requires Supabase to die

this is a zero-sum view of the world. I'm sure that as the stack becomes popular, Supabase will too (and vice-versa).

> when time to actually host the database comes people will move from Supabase to AWS etc.

Databases are inherently sticky - it's the reason we started with the "Jamstack" crowd rather than enterprise use-cases. If you start on Supabase, and we handle all your scaling, there should be no reason to migrate to a different cloud.

> When it comes to hosting DB AWS, Azure etc have massive advantage.

given the (in)stability of cloud providers in 2021, I think we will see a trend toward multi-cloud in the next few years. I'm not sure they have such a large advantage (we use AWS, so it's not like they can offer superior hardware).


IMO a disclaimer is lacking in your post. The product you recommend in the last paragraph seems to be made by you, and you're the founder of the company that makes it. Sure, the "we" implies you're involved, but it doesn't disclose the extent. I believe this is important when recommending the "best language" and when hyping technology in general.


Point taken. But I am not hyping the specific technology here; I would be glad if FTD ends up being the language, but I believe an FTD-like language is definitely needed. We need a programming language for human beings, one that we can hope all humans can write. Unfortunately, JS is not that language. You can do very little knowing just JS; you have to learn a lot of other technologies.

FTD is my proposal for one such language. https://ftd.dev/philosophy/:

> If the vision is for every human to own their data, one must ask, data in what format? XML? JSON? YML? Markdown? FTD hopes to be that format.


You should see how nice and powerful Svelte in Markdown is- https://www.mdsvex.com


Link gives a 404, you might want to replace it with https://mdsvex.com


MDX and Svelte in Markdown etc. are definitely cool, but unfortunately they are not at all adequate for the job. You cannot define a new component in MDX/MDsveX; you can only use existing Svelte/React components in prose.

Further, what about the arguments of those components: can they themselves contain markdown? What about markdown that itself contains a React or Svelte component? Would you really say, with a straight face, that this is powerful:

    <Penguin text="Some text, which itself contains <Another />" />
Just try nesting it or writing multiline text; it's complete torture.


The example that shows up when you click "try" shows how to use markdown as slots inside components.

I'd suggest at least doing some research before trying to put down a potential "competitor"/alternative to your own tech. This whole exchange is a tad too unprofessional IMO.


The example covers the case where there is one body that contains markdown/Svelte. But I am talking about components that take more than one attribute. Say you want to model a person object; a person has many fields. You would have to write something like:

    <person>
       <name>markdown is okay here</name>
       <bio>markdown is okay here</bio>
    </person>
If you do this, it renders in the order you wrote things down; <person> cannot decide to show bio before name, for example.

If you write:

    <person name="markdown difficult here" bio="markdown difficult here" />
Now `person` has better control over things, but it's not ergonomic to write. All the examples are "simplistic"; when you have complicated data modelling, this approach of leaving floating tags, e.g. `<name>`, is not good modelling. It's the name of what? A person's name? A project's name? And can you get this data out of these files? You would have to write a full-blown parser; any data in such a file would be lost.

Also, if you want to create a new component, you have to jump into the JavaScript world. And you have to learn CSS, along with Markdown and the front matter (YAML). Each has a non-trivial learning curve.

FTD is being designed as a unified whole. All components of FTD are written in FTD; there is no boundary between "author" code and "component" code. A reasonable programmer can pick up FTD in an hour (check out the first video on https://ftd.dev); a non-developer can pick it up in a week (check out the other three videos).

Svelte etc. are cool, but haven't you heard of the huge number of backend developers who can't get into frontend? The current state of frontend is extremely complex. You cannot tell me billions of people can learn the Svelte stack when even backend developers are struggling. To use MDsveX effectively you have to write components, which means the entire webpack/bundling toolchain. Do not pretend these are so easy that anybody, even a non-programmer, can use them.

FTD is being designed to be easy to teach. The audience for FTD is people at large, not developers (and yes, they can learn some programming; Excel is used by hundreds of millions). I do not believe you are saying Svelte is the best possible option and that no project like FTD should exist. I believe there is scope for improvement, and we are working on it; help us improve.


Sorry if I haven't made myself clear: I'm not questioning the technical merit of any tech. I'm just saying that I don't think this is a proper venue for advertising your product, as good as it may be. It's off-topic. You should try a "Show HN" post. That's just a friendly recommendation; I'm neither interested in nor the target for your product.


That looks really cool, bet it'd be a great way to write interactive docs and things like that too!


FTD = Frontotemporal dementia?


FTD: https://ftd.dev/language/. A DSL for creating "programmable prose": Markdown with data declarations, event handling and, soon, arbitrary computation.


There are many rendering issues on their landing pages.

Things like unreadable elements with near-zero contrast between font and background, and a dark mode with pieces of widgets on a white background, do not fill me with confidence in a UI-focused technology.


Yeah, this has to be some sort of joke. I can't even read 80% of the sidebar on mobile, and it crashes my phone every other page.


Hey, can you tell me what phone you are using and what exactly you mean by "crashes my phone"?

Thanks for the design feedback. The theme is new, the language itself is new, and the static generator is new. We are aware of the sidebar issue on mobile and the contrast issue, and we are working on them. Markdown styling is not a trivial issue to fix, as we do not allow authors or theme builders to access CSS directly; [1] is the design we have come up with.

[1]: https://fpm.dev/journal/#markdown-styling


If anyone from FTD is here, the site has some text issues on dark mode:

https://i.imgur.com/CfI5lPC.jpg

It looks fine on light mode:

https://i.imgur.com/oJPMkUZ.jpg


Thanks vsareto for reporting the issue, we have made some improvements. Appreciate it.


Oh, THAT FTD. Right.


I have a love-and-hate relationship with this stack. On one hand, it makes sense to do everything in SQL and generate the data access patterns from it. On the other hand, pushing business logic into the database has its disadvantages:

- Scaling databases is hard. It's easier to scale a stateless application server than a stateful database.

- SQL is still hard to maintain. Adding features via Postgres functions means you have an additional language to maintain, the code might be spread across different migration files, and maintaining these functions long term has its own issues.


I'm in a similar boat. I really, really like the idea of this stack. You are going to have the pains of a database (migrations and scaling) anyway, and all your data is in a database, so why not put everything there, right? And a lot of business logic fits SQL and transactions very well, like finance stuff: relying on SERIALIZABLE is probably less work than writing your own locking for moving money around.
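
For instance, a money transfer under SERIALIZABLE needs no explicit locks; the `accounts` table here is invented for illustration:

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
-- If two such transactions conflict, one fails with SQLSTATE 40001
-- (serialization_failure) and the client simply retries it.
```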

But I think the tooling is not really there, and developer experience directly translates into both productivity and team attrition, so that's a really big problem. Also, a lot of logic does not fit into SQL very well; "give me a PDF", for example.

Most importantly, most developers are used to thinking procedurally, and SQL is not natural to them. And to let my inner corporate manager speak: Java, Python or Node backend developers are everywhere, while for something like this you need specialized people.


You can write functions in PL/Python or whatever other languages are available. It's trivial to keep well-structured files of functions and auto-generate migrations from them.


Hell you can even write Postgres functions in Javascript with PL/V8.
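
A small sketch of that, assuming the plv8 extension is installed (the function name and logic are made up):

```sql
CREATE EXTENSION IF NOT EXISTS plv8;

-- The function body is plain JavaScript, run inside Postgres.
CREATE OR REPLACE FUNCTION slugify(title text) RETURNS text AS $$
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-|-$/g, '');
$$ LANGUAGE plv8 IMMUTABLE;

-- SELECT slugify('Hello, PostgREST!');
```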


I agree but I would still argue there are types of apps where this approach can be nice: prototypes, internal tools or small apps that have modest backend needs. I took this approach via Supabase and I love that I basically don't work on the backend at all.


How about adding a tool like dbt there?


dbt is for analytics. It expects you to have some "raw data" and then build a bunch of models and reports off of it.

I do think dbt is the answer for most analytics, but for CRUD stuff with business logic it's not going to help you.


First, it's a pretty cool PoC. Kudos on that.

Back in the 90s we used to call direct access from the client a "fat client", and we stopped doing that. We moved to n-tier with a server in the middle. It takes more work, but we still made that move. Here's why:

* Writing logic in server code is easier

* SQL servers are more powerful and harder to secure

* Caching and scale: direct DB access just couldn't handle even 90s traffic

* Vendor lock-in: pretty much everything here is Postgres specific

I haven't kept up on DB progress as much as I should, but honestly, most of what you have here can be done in a few lines of Python, and maybe a bit more in Java/Spring Boot/Node.js etc.

I'm trying to think of a use case for this other than "we can do that cool thing". I can't come up with one.


The use case is what I mentioned in the article: Easier development and deployment of open-source, self-hosted apps. This paradigm makes things a lot easier both for the user and the volunteer developers.

It simplifies the 3-tier architecture (DB, server, client) into effectively 2-tier, because the server environment is replaced by a standard binary (PostgREST) with zero hand-written code. That's a 33% reduction in language/framework complexity, if nothing else.

If needed, one can also write PostgreSQL functions/triggers in JavaScript/Python etc. for productivity, although this doesn't seem to have taken off yet.


In a "typical" 3-tier app, you need a specific, explicitly written handler for each combination of data type, filtering, aggregation and output. That might not be a big deal if the number of endpoint variants roughly corresponds to the number of custom views in your app, but not every app is like that. Quite a few make do with just a few general views, like a table, a form and maybe a map. Having to write explicit server-side handlers for every query variant means abstracting these views on the frontend is not viable, so apps with some 100 tables and little table-specific functionality take much longer to make and are much less flexible.
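
To make that concrete: a single table exposed through PostgREST answers many query variants with no handler code at all (the table and columns below are invented for illustration):

```sql
CREATE TABLE people (
  id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  name text NOT NULL,
  city text
);

-- PostgREST derives filtering, ordering and column selection from the URL:
--   GET /people?city=eq.Berlin&select=name
--   GET /people?name=like.A*&order=name.asc&limit=10
-- In a classic 3-tier app, each of these would be an explicit endpoint.
```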

It might just be personal bias, but I think these kinds of projects are common and important. They're near invisible, but even small-ish companies need somewhere to store their domain-specific data, and we should be thinking of ways to make that kind of software better.


This is amazing. I've been wondering more and more myself why it's always "proper" to run your backend code on separate servers from your database, as if this were a must-have for even the most basic architectures. No cloud provider or PaaS has real support for running your app code and your database on the same machine, but that should be the default for the huge majority of apps out there that don't need scalability beyond a single server.


Absolutely. Also, somehow, the ability to write PostgreSQL triggers and functions in standard languages (Python, JavaScript) seems to be vastly underutilized.
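
For example, a trigger body written in Python, assuming the plpython3u extension is available (table and column names are invented):

```sql
CREATE EXTENSION IF NOT EXISTS plpython3u;

-- Stamp rows with an update time from Python before they are written.
CREATE OR REPLACE FUNCTION touch_updated_at() RETURNS trigger AS $$
  import datetime
  TD["new"]["updated_at"] = datetime.datetime.utcnow().isoformat()
  return "MODIFY"
$$ LANGUAGE plpython3u;

CREATE TRIGGER docs_touch
  BEFORE UPDATE ON docs
  FOR EACH ROW EXECUTE FUNCTION touch_updated_at();
```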

The times are changing though. If there's anybody here from Supabase team, would love to chat about these topics.


Supabase CEO here.

Your article is very cool.

> write postgresql triggers and functions in standard languages (Python, Javascript) seems to be vastly underutilized.

This is definitely something we will provide tooling for over time. We have plv8 installed on all databases, so it should be simple enough.


I’m confused. You can always run your backend code on the same machine as your database. It’s a good idea to keep the code decoupled (so when you get popular, you could move the database to a bigger box). But you have to maintain/install those things yourself. If you’re using a PaaS vendor, they will use specific instances or installations for your database to help make your setup more scalable. In reality, it is to make your code and all of their other customers code more scalable as a whole.

But, the first step in developing an MVP should probably be running all of the code on a single server. But again, that assumes you’re the one setting up the server.


> You can always run your backend code on the same machine as your database.

What I am saying is that nobody ever, ever does this. It’s considered a strict antipattern for any app, even dopey internal MVPs. And I think it should be the default pattern until you need to scale.


Sure people do. Trivially, you have to do this when using SQLite; and for a small number of users, Postgres, an application server and a reverse proxy will run fine on a small VM.


Where is this magical land? I remember trying to ship an internal tool with SQLite several years ago and getting admonished by a senior dev for it. Haven't seen it since, across 4 different companies and an additional 8 years of coding.


"Proper" or "best practice" is shorthand: think through these 10 points, or apply this best practice.


Author here.

I really think there's a lot of value in this model for users self-hosting apps. PostgreSQL can run cron jobs, operate on external data with foreign data wrappers, and even do some number-crunching with MADlib right within SQL.

So, the user only has to manage a PostgreSQL DB, retains control over their data, and can easily get upgrades of web apps via CDN and mobile apps via app stores. Developers benefit because (a) there is no server-side programming language to deploy/maintain and (b) the PostgREST API is very well thought out and very predictable, leading to easier development across frontend/mobile frameworks.
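
The cron-job part can look like this, assuming the pg_cron extension is installed (the job and table names are made up):

```sql
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Purge expired sessions every night at 03:00, entirely inside the database.
SELECT cron.schedule('purge-sessions', '0 3 * * *',
                     $$DELETE FROM sessions WHERE expires_at < now()$$);
```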


Nice! In the past I've also rendered an OSM map[1] with this approach and as you mention caching/CDN should make it suitable for production.

[1]: https://gist.github.com/steve-chavez/c1435a8c9583d2524e87e4f...


Pretty cool! Did the function being named "index.html" make it available on /rpc instead of /rpc/index.html?


Ah, no: PostgREST doesn't have that kind of convention built in (defaulting to an index.html on a path). I'll update the gist and change the function to `map.html` to avoid any confusion.


FYI: on Android/Firefox the site menu hovers and takes up over a third of my screen. It's pretty annoying.


The site is made with Gitbook which seems to have gone through major changes recently. I'll see what I can do about it.


Seconded.


Meta: your code blocks have the CSS `overflow: scroll`, which is wrong and causes scrollbars to appear unnecessarily; 9999 times out of 10000, `overflow: auto` is what you want instead.

(macOS is the only significant desktop platform that uses overlay scrollbars by default, where the difference is not generally visible.)


I am using Gitbook. They seem to have changed things in a big way, and a number of issues have cropped up. It's unlikely I can fix these myself.


Recently started a small project with a limited number of developers and, in the end, a limited number of users. But the project has an important function.

Decided on PostgREST as the backend, with a thin Vue.js frontend. We get a backend and API for free, and our developers know SQL very well.

I think it's a brilliant choice for our use case, although admittedly not something I would use for some other projects where I'd need more scaling on DB level, etc.


"There is no backend code!" No... there is; you've just moved it into the database. That doesn't mean you don't have to deal with it, maintain it, and change it over time.


Perhaps a more apt description would be "there is no middleware code".


This.


This looks very good, especially avoiding lots of repetitive backend code. Does anyone know if Supabase can manage something like Firebase's offline data capabilities? E.g. https://firebase.google.com/docs/firestore/manage-data/enabl...

I'm pretty tempted by this stack for business applications that really are mainly storing stuff in a database.


We don't have offline capabilities yet; it's something we will probably investigate in the latter half of this year.

In the meantime you can use something like WatermelonDb: https://nozbe.github.io/WatermelonDB/


Interesting. Thanks for pointing me to Watermelon. I'll take a look.


I can’t actually imagine the niche for this tech.

Pet projects with simplest logic? ToDo app examples?

It won't work in large-scale projects because this approach doesn't scale well.

On a small scale it adds a very uneven complexity curve. Until some point everything is quite easy, but one day you need to write a stored function in PL/pgSQL, and that's not an easy task. Python/JS won't help in this case, because they won't reach the maturity of Node.js/Django for decades.

Why should anyone prefer this thing to Rails?


Open-source apps meant for personal self-hosting as the article mentions?

- Scaling is not a worry.

- Simplicity is essential because developers are volunteering their time.

- Users also benefit because they only need to manage a database, not six different server language runtimes. They can even use managed services like Supabase or RDS.


It scales fine for 99.9% of projects. You can write functions in Python, JS or whatever other language you fancy. Why do you need Rails for the REST API?


Because I need production-ready auth and CSRF protection. I want to add a cache layer with Redis/Memcached. I want control over the fields I pass to/from the frontend. I want to be able to handle multipart requests. And I want a decent logging system and a debugger.


I don't see any issues with anything you just outlined.


>Checking your browser before accessing blog.polyglot.network.

Anyone stuck at that page trying to follow the link?

Edit: it finally worked for me… I had to click the link half a dozen times before I stopped getting the refreshing Cloudflare page.


I could only get it to work by switching from Firefox to Safari (on my phone).


Stuck here on iPhone/Firefox… the irony.


The SQL/JS syntax kind of reminds me of ColdFusion for some reason.


I thought that too.

I still work on an old ColdFusion application that works fine / makes money.

There's something super fast, in some cases, about writing your SQL and markup in one file and being done.


Awesome job! I want to know how JavaScript in the browser can access the PostgREST API given the CORS limits.


Have used a stack like that on one application, a rather simple "knowledge vault", a few years back. It worked very well, and at the time I liked it very much. But these days, I'm no longer fond of client-side scripting (because it gets very complex very quickly, and npm+webpack is a monster). So my next stack would be closer to BCHS [1], only with Linux, Golang, and nginx instead of BSD, httpd, and C. I might add in a tiny bit of client-side scripting such as [2], but no more npm, webpack, etc. for me.

[1] https://learnbchs.org/index.html [2] https://htmx.org/ https://unpoly.com/ https://hotwired.dev/


Can't do much about npm, but webpack is legacy now; no reason to use it for new projects. You can use esbuild if you just want compiling, or Vite (which uses esbuild and Rollup) if you want compiling + bundling + frontend assets. It Just Works™. It's very fast, minimal effort, and vastly less complicated than webpack ever was. This is the first time since we started transpiling that I have found JavaScript tooling to be actually maintainable, production quality, and not a brittle, hacky configuration-management nightmare.


Thanks for the tip! I guess I only used webpack because vue-cli set up my project that way.

Whenever I return to the project, hell breaks loose: "npm install" (whoops, 51 vulnerabilities!), "npm audit fix" (what now? 72 vulnerabilities? I thought this was supposed to go down, not up), initializing a fresh project with the CLI to get the latest scaffolding and then migrating stuff over from the old project, etc. This really appears brittle to me.


I used Preact+HTM for the same reason. The latter uses tagged templates, which are now well supported in browsers, instead of JSX, which would have required a compiler of some kind. Do have a look at the frontend.js mentioned in the article.


Is there a good solution for rendering HTML with PostgREST? Without starting an MPA vs. SPA flamewar, it'd be useful to be able to do.


What do you think is missing? Do you have any suggestions?

In the article, HTML is just treated as a `text` type inside the SQL function; you could do the same for rendering different pages: create functions that return HTML as text.
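
A minimal sketch of that idea (the function name is invented, and serving the result as text/html rather than JSON depends on PostgREST's media-type configuration, e.g. raw-media-types):

```sql
-- PostgREST exposes this at /rpc/page.
CREATE OR REPLACE FUNCTION page() RETURNS text AS $$
  SELECT '<!doctype html><html><body><h1>Hello from SQL</h1></body></html>';
$$ LANGUAGE sql STABLE;

--   GET /rpc/page   with header   Accept: text/html
```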


A bit off topic, but you can convert simple text to a database script here (PostgreSQL is also supported):

https://text2db.com/

You can find a tutorial here: https://www.youtube.com/watch?v=8hYtZ38wuz4


I feel very uneasy about this, and I am not sure why. I feel like the "meat and potatoes" of any application is its data and backend code, not the frontend. Frontends can change among clients, or even be given up altogether (just use a CLI/API interface for example) but the backend business logic is what defines a program. An app is what it _does_, not what it _looks like_.

This approach has been tried several times over the past few years: I first used DreamFactory[1] several years ago for a small production app (less than a few thousand users) and now I see Supabase [2] trying the same. But in my experience these don't go past very limited internal enterprise use-cases, MVPs and toy frontend apps.

[1] https://www.dreamfactory.com [2] https://supabase.com


I must be missing something. What is the advantage of this approach versus hosting the index.html on a CDN as well?


The first page gives users control over what code ends up being loaded with access to their data over the API. The first page coming from a CDN would not just be an issue for cross-origin resource sharing; it also removes the ability to use an SRI hash for the subsequent JS.

Also, an HTML page on a CDN wouldn't know what endpoint (domain, port) the user has deployed their API on; that's one more thing for the user to configure. Here, the HTML sets window.postgrest_url, which the JS code can use.


In the example, the SRI hash is not computed by the database; it probably comes from a build phase. That build phase can (and often does) generate the HTML at the same time it computes the JS file.

As for configuration, I see it as a minor improvement over manually deploying one (or several) index.html with the API URL conf.

The CORS issue is the biggest benefit, I think, but it's easy to handle on a CDN with custom headers, or with a custom domain.

So I think it's smart, but it brings no benefit over existing approaches, and it opens a whole new class of problems (like sizing your PostgREST instance for when the site is under attack or visited by a bot).



