
Hi! I'm the dev here! I built this on a whim after seeing someone ask for it on Twitter. It was 12:30 at night, but I couldn't pass up the opportunity to build it.

The code is very simple; there's actually no backend at all, since Wikipedia's API is permissive enough that you can just make the requests from the frontend. So you simply request random articles, grab some snippets, and get the attached image!
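Roughly, the frontend-only fetch looks like this (a sketch; I don't know which endpoint the repo actually hits, but the REST random-summary endpoint returns everything needed and serves permissive CORS headers):

    // Shape of the fields we care about from /page/random/summary.
    interface RandomSummary {
      title: string;
      extract: string;                        // plain-text snippet of the lead section
      thumbnail?: { source: string };         // attached image, when the article has one
      content_urls: { desktop: { page: string } };
    }

    async function fetchRandomArticle(): Promise<RandomSummary> {
      const res = await fetch("https://en.wikipedia.org/api/rest_v1/page/random/summary");
      if (!res.ok) throw new Error(`Wikipedia API returned ${res.status}`);
      return res.json();
    }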

I used Claude and Cursor to do 90% of the heavy lifting, so I'm positive there's plenty of room for optimization. But as it stands right now, it's quite fun to play with, even without anything very sophisticated.

Here is the source code: https://github.com/IsaacGemal/wikitok






Shoutout to APIs that do not enforce CORS, so requests can be made from the FE without needing a BE. There are so many toy apps I started building that would have just worked if this were more common, but they have CORS restrictions that require me to spin up a BE, which for many one-off and personal tools just isn't worth building and maintaining. Same with OAuth.

nit: same-origin policy is the restriction. CORS isn't the restriction, it's the thing that helps you. CORS is the solution, not the problem.

Yes, exactly. People who want to "disable" it have no idea how the web works. Developers have all kinds of misconceptions about what it is; I even heard someone say it prevents backends from calling their API.

And in particular, CORS is the reason you can read the Wikipedia API cross-origin (unless you're doing JSONP, but hopefully they're using CORS, because it's better in every way).
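For anyone curious, here's a sketch of an anonymous cross-origin call to the Action API: passing origin=* opts into a CORS response instead of a JSONP callback (the extract parameters assume the TextExtracts module, which en.wikipedia has as far as I know):

    async function fetchIntro(title: string): Promise<string> {
      const params = new URLSearchParams({
        action: "query",
        format: "json",
        origin: "*",        // ask for an anonymous CORS response rather than a JSONP callback
        prop: "extracts",
        exintro: "1",       // only the lead section
        explaintext: "1",   // plain text instead of HTML
        titles: title,
      });
      const res = await fetch(`https://en.wikipedia.org/w/api.php?${params}`);
      const data = await res.json();
      // query.pages is keyed by page id; grab the single page we asked for.
      const page = Object.values(data.query.pages)[0] as { extract: string };
      return page.extract;
    }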

There are many services that solve this pain point. I've used https://allorigins.win/ in the past.

These services are called CORS proxies! I recently made an updated list of the currently working free ones here: https://gist.github.com/reynaldichernando/eab9c4e31e30677f17...

Do note that these proxies are for testing only, and they are heavily rate limited.

For a production use case, you might consider Corsfix (https://corsfix.com).

(I am affiliated with Corsfix)
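For reference, the usage pattern with any of these proxies is basically just prefixing the target URL with the proxy's endpoint. A sketch below, assuming allorigins' "raw" endpoint (the exact query format varies by provider, so check their docs), and only for public, unauthenticated requests:

    // Hypothetical public API that sends no CORS headers of its own.
    const target = "https://example.com/some/public/api";
    // Route the request through the proxy, which adds Access-Control-Allow-Origin for us.
    const proxied = `https://api.allorigins.win/raw?url=${encodeURIComponent(target)}`;
    const data = await fetch(proxied).then(r => r.json());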



Oh this looks neat!

Neat, if you want to leak your users' credentials in an XSS attack.

I would only use something like this for things that require absolutely no authentication. For example, I had a one-page app that showed me instantly when the next shuttle(s) were scheduled for my stop. Instead of having to click through multiple steps, it let me see it in one step. As far as I know, I was the only user of this thing I built and put up on GitLab Pages. I don't know exactly, because I didn't bother to track who visited the page.

This is the way to go; you wouldn't want to use a CORS proxy for anything authenticated/with credentials (e.g. an API key). But for public, unauthenticated requests, they work just fine.

Oh that explains why it's not a popular architecture.

I kind of miss the era of JSON-P supported APIs. Feels like such a weird little moment in time.

The only caveat is that the API's speed is definitely not comparable to something more purpose-built for this kind of scale, but overall I'm happy; it works well enough that I don't have to think about it too hard.

I think GitHub Actions could be used for scheduled builds, so that the initial load would have random articles baked right in. Further requests could then be made in advance, so users wouldn't notice any delay from the API.

Do you have any examples of that I can look at as a reference? I'm used to GitHub Actions just being my CI/CD build-step-checking tool.

I attempted to implement the schedule trigger [1] on GitHub Actions as an example, but it is not being triggered as I expected. It needs more digging if you're so inclined.

Aside from that, the whole gist is that the initial data can be injected into the static files during the build step, or even saved as separate JSON files that the app can load instead of reaching out to the API. As long as you're willing to refresh the static data from time to time, of course.
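A hedged sketch of such a build step (the file path, output name, and batch size are made up; a scheduled workflow would rerun it periodically and redeploy the output):

    // scripts/prefetch-articles.ts (hypothetical): grab a batch of random summaries at
    // build time and write them to a static JSON file the app loads before touching the API.
    import { writeFile } from "node:fs/promises";

    async function main() {
      const batch = await Promise.all(
        Array.from({ length: 20 }, () =>
          fetch("https://en.wikipedia.org/api/rest_v1/page/random/summary").then(r => r.json())
        )
      );
      await writeFile("public/initial-articles.json", JSON.stringify(batch, null, 2));
    }

    main();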

I created a basic example at https://schedbuild.pages.dev/ with a rough, manual implementation of a build step. Frameworks like Next.js offer a more sophisticated approach that can render the entire HTML, allowing users to load the static page with the initial data already rendered without JavaScript, with subsequent interactions taking over from there more seamlessly.

If the GitHub Actions schedule feature is ever sorted out, in my opinion it's a reasonable alternative to setting up a backend just for this.

[1] https://docs.github.com/en/actions/writing-workflows/choosin...


In lieu of a cron server, I use scheduled jobs without any issues for a few production workloads on Azure DevOps (a.k.a. GH Actions 0.1).

You're right. I just checked the example project now and it's been updated hourly since then. It's just slightly delayed.

Edit to the other comment: the cron job wasn't being triggered at first, but turns out it's just slightly delayed. The example has been updated hourly since then.



Could you just preload the next few entries before the user swipes?
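Something like a small prefetch buffer would probably do it. A sketch, reusing the fetchRandomArticle idea from earlier (names and buffer size are made up):

    const buffer: RandomSummary[] = [];
    const TARGET = 5;                 // how many articles to keep ready ahead of the user

    async function topUp(): Promise<void> {
      while (buffer.length < TARGET) {
        buffer.push(await fetchRandomArticle());
      }
    }

    async function nextArticle(): Promise<RandomSummary> {
      // Serve from the buffer when possible, then refill in the background.
      const article = buffer.shift() ?? (await fetchRandomArticle());
      void topUp();
      return article;
    }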

Shameless plug for Magic Loops -- we run code in isolated MicroVMs and students love our lack of CORS enforcement, as the APIs they build can be easily integrated into their hackathon projects :)


That’s it!

Tell me more?

We built an LLM-based no-code "all-code" tool for non-developers to automate their daily tasks.

Counterintuitively, it's been picking up steam among student developers and professional devs due to how fast you can spin up API endpoints.

We're currently working to build on this momentum, and are now shifting focus to existing devs.

tl;dr - we use LLMs to create APIs that are run in Firecracker-based MicroVMs


Many platforms can enable proxying through their service to avoid CORS issues: https://pico.sh/pgs#proxy-to-another-service

Using Next.js with a serverless function acting as a proxy is pretty simple.
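A sketch of what that could look like with a pages-router API route (the file path and upstream endpoint are just for illustration):

    // pages/api/random-article.ts (hypothetical): a tiny same-origin proxy so the
    // browser never has to talk to the third-party API directly.
    import type { NextApiRequest, NextApiResponse } from "next";

    export default async function handler(_req: NextApiRequest, res: NextApiResponse) {
      const upstream = await fetch("https://en.wikipedia.org/api/rest_v1/page/random/summary");
      res.status(upstream.status).json(await upstream.json());
    }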

Don't you mean Node.js? I don't see why you would use a full Next.js framework for just a reverse proxy.

A great way to get around this is with an edge function on Deno Deploy.

Use the Firebase Cloud Functions free tier.

This is awesome. I imagine you're likely not interested in building one, but this site could hugely benefit from a recommendation algorithm.

For example, an algorithm could gauge how much a user really enjoys a certain article and then start sending them down a rabbit hole of similar and tangential content. Designing, building, and maintaining an algorithm like this is no small feat, though.
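As a rough starting point, the search API's morelike: feature already returns "similar and tangential" articles. A sketch (the parameters are real Action API ones as far as I know, but the surrounding ranking/feedback loop is the hard part and isn't shown):

    async function relatedTo(title: string): Promise<string[]> {
      const params = new URLSearchParams({
        action: "query",
        list: "search",
        srsearch: `morelike:${title}`,   // CirrusSearch "more like this" query
        srlimit: "10",
        format: "json",
        origin: "*",
      });
      const data = await fetch(`https://en.wikipedia.org/w/api.php?${params}`).then(r => r.json());
      return data.query.search.map((hit: { title: string }) => hit.title);
    }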


I would not be surprised if Claude plus OpenAI's reasoning models could develop a rudimentary algorithm that works. Of course it wouldn't be as sophisticated as something like TikTok's and would require a lot of fine-tuning, but it's definitely possible.

Or even use an LLM directly: "the user liked articles with these titles; what others might they like?"

+ keeping it in the front end with local storage

You are joking right?

why would it be a joke?

echo chambers bad

Why is it bad if it learns that I like to read about medieval fortresses, and that it should skip showing me rocket ships, for instance?

Personalization is not inherently bad and I believe personal feeds can be engineered in a way that doesn’t result in echo chambers but in communities.

The problem is capital incentives are not aligned with making these interfaces which is why we have the feeds we have today on Facebook, Twitter, etc. I look forward to the innovations happening on bring-your-own-algo Bluesky.

As a side note, I’m currently building a personalized Hacker News service. I might throw it at Show HN once it gets closer to completion.


Not to plug myself too shamelessly, but here's my resume if anyone is interested :) https://www.aizk.sh/Isaac's%20Resume.pdf

This is far from a shameless plug; you built the thing! Very nice, love the idea.

I think tomorrow I'm going to write a detailed blog post about exactly what I did building out WikiTok, as one last little bit of info on this subject (and at the end of it I'll say that I'm open to work or whatever).

I remember seeing that tweet; I thought it was the craziest coincidence ever when I saw this on the front page. I guess it's not, haha.

As soon as I saw the tweet, I realized the opportunity was there waiting for me. Also, Twitter's algorithm is REALLY good at pairing the right tweets with one another, so many people saw those two tweets side by side, which added to the humor.

What was the tweet?


Next steps: ingest these offline and process them into quick 30-second videos with the most salient facts. TTS narration, additional images. Generate stock video using the images from the API and perhaps text-to-video. That would be a killer app.

Bonus: come up with a heuristic or model to filter out or de-rank universally uninteresting articles.


Kudos for the anthropological experiment. It does make you wonder what it is about the sliding that makes it so entertaining.

I suggest you add some sort of summary that flows, so as to add a certain level of animation. Some articles actually have sound and animations attached to them.

Great inspiration!


> It does make you wonder what it is about the sliding that makes it so entertaining.

Check out "skinner box" - the fact that you may get something interesting or may not is more exciting than just getting something good. They're lootboxes of information/entertainment.


It’s a lottery indeed, you are right.

This is freaking cool. I’m at work browsing HN instead of doing actual work, so I’ll look into it more later, but the “killer feature” I think would be to add audio narration to this, or a quick summary. I would scroll that all day…

Awesome job!


This is super super cool. I’ll tell you the single barrier to me actually using this regularly—it needs an algorithm. It would be so cool (and a good learning project) to even build the simplest of recommendation algorithms behind it based on my likes, dislikes, bookmarks, and whether I click “read more”.

Wow this is surprisingly addictive :)

Nice job whipping up something so simple, yet elegant, so fast.

This is what I love about LLM tools like Cursor: they make the effort of just trying to build something so low that you can try it in one night and make cool things that might not have been built otherwise.


It would be cool to add an algorithm that learns my interests and suggests relevant articles

Very nice! I wonder if it would be possible to see articles related to a topic. Maybe using the hubs as a starting point and then following the related links in each article?

Nice! I made WikTok[1] in the past, but your version looks much better. :-)

[1] https://wiktok.org/


I just wanna say I love it. So simple, but so cool. You've taken a popular idea and made something interesting and intellectual out of it. Good job.

Love it!

One of the rare websites I've added to my Android home screen. Maybe someone has a good idea for a nice favicon.


I added a dark Wikipedia logo as the favicon, which should be good for now.

Congrats! Tomie here, this is absolutely great!

This is great! Thanks for sharing it.

Great job!


