Still not much, realistically 4096 bytes or less.

Browsers aren’t as much of an issue as they’ve been in the past, but I’ve hit snags with proxies, old servers, etc.






How does pagination in URLs work nowadays? You'd need ~3 bytes to index a reasonable number of pages naively, no?

But I'm curious what the current state of the art is re: performance optimizations between frontend and backend. Or is it simply page indices?


If you do limit/offset on the database side, a page number is enough, though this doesn't work well for larger page numbers. There are other ways to do pagination, e.g. with "cursors", where the cursor is simply the id of the last record on the previous page. The SQL query is very efficient, but jumping to page X is impossible, so in that scenario you need to store cursors for past pages.
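
A minimal sketch of the difference, using Python and SQLite purely for illustration (the items table and its columns are made up for this example, not anything from the thread):

    import sqlite3

    # Throwaway in-memory table; "items" and its columns are illustrative only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO items (name) VALUES (?)",
                     [(f"item {i}",) for i in range(1, 101)])

    PAGE_SIZE = 10

    def page_by_offset(page_number):
        """Classic limit/offset: the URL only needs the page number,
        but the database still walks and discards the skipped rows."""
        offset = (page_number - 1) * PAGE_SIZE
        return conn.execute(
            "SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?",
            (PAGE_SIZE, offset),
        ).fetchall()

    def page_by_cursor(after_id=0):
        """Keyset/cursor pagination: the URL carries the id of the last row
        of the previous page; the seek stays cheap even deep into the data,
        but you can only step forward, not jump to an arbitrary page."""
        return conn.execute(
            "SELECT id, name FROM items WHERE id > ? ORDER BY id LIMIT ?",
            (after_id, PAGE_SIZE),
        ).fetchall()

    rows = page_by_cursor()             # first page
    rows = page_by_cursor(rows[-1][0])  # next page, cursor = last id seen
    print(rows[0])                      # (11, 'item 11')

The offset query gets slower the deeper you page, because the database still has to scan past all the skipped rows; the keyset query stays fast as long as the cursor column is indexed.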

There are libraries for Ecto that help with this.

https://github.com/duffelhq/paginator


Careful: the last time I used this library (2020 or so) it used a particularly insecure encoding of the cursor that basically allows remote code execution. Not sure if they ever addressed it.

Here's the fork I created at the time to work around some of these issues: https://github.com/1player/paginator
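
I don't know how either repo encodes cursors today, but the usual way to avoid this class of bug is to keep the cursor a plain value and sign it, instead of deserializing arbitrary client-supplied data on the server. A rough Python sketch, with made-up names (SECRET, after_id), not the actual API of either library:

    import base64, hashlib, hmac, json

    SECRET = b"server-side secret"  # hypothetical; keep it out of the client and source control

    def encode_cursor(last_id: int) -> str:
        """Serialize the cursor as plain JSON and sign it, so the client can't
        hand back arbitrary serialized objects for the server to decode."""
        payload = json.dumps({"after_id": last_id}).encode()
        sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
        return base64.urlsafe_b64encode(sig + payload).decode()

    def decode_cursor(token: str) -> int:
        raw = base64.urlsafe_b64decode(token.encode())
        sig, payload = raw[:32], raw[32:]
        expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(sig, expected):
            raise ValueError("tampered cursor")
        return json.loads(payload)["after_id"]

    token = encode_cursor(42)
    assert decode_cursor(token) == 42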



Thanks to the thread for the informative responses! (And for the useful README on your fork / the upstream.)

I try to assume someone has already thought of something better than the best I could come up with, or at least learned the hard way which edge cases need to be handled.


Thanks for pointing that out.


