Hacker News

Would it be possible to use this to target specific domains rather than IP blocks? I have been looking for a way to break my instant gratification browsing habits (twitter, reddit etc.) by introducing random delays into those websites, essentially making them barely usable, which works a lot better than blocking them outright. There are some existing browser extensions with a similar idea, but they usually only delay the initial load, so once you get through that barrier you are allowed to browse freely.

I could manually do a DNS lookup and plug the IP address in, but I don't know enough about internet protocols to know whether that would work with Cloudflare etc.




TCP/IP (unsurprisingly) works over IP addresses, not domains; domain names are resolved to addresses via DNS, so there is one extra step involved before actually making the requests (simplified, obviously).

With that said, you could try to limit things based on the IP range of the resolved IP of a domain. Other services the same company runs might be a casualty in this cross-fire, but maybe that's not a problem.

Make this:

    $ comcast --device=eth0 --latency=250 --target-bw=1000 --default-bw=1000000 --packet-loss=10% --target-addr=8.8.8.8,10.0.0.0/24 --target-proto=tcp,udp,icmp 
Into this:

    $ comcast --device=eth0 --latency=250 --target-bw=1000 --default-bw=1000000 --packet-loss=10% --target-addr=$(whois $(dig +short google.com a) | grep -i cidr | cut -d ':' -f 2 | xargs) --target-proto=tcp,udp,icmp 
The `whois ... dig ... grep cidr` pipeline gets a CIDR block for the currently resolved IP address from a DNS query. So you probably want to run this as a systemd service or something, restarted every five minutes or so (since what the dig command returns will change over time), and you probably want to add multiple domains (like the domains they use for APIs, CDNs and so on) as well.
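A minimal sketch of that systemd setup, with hypothetical unit names and `twitter.com` as the example domain (the `--stop` flag to clear the previous rules before re-applying, and the `head -1` to cope with multiple A records, are my assumptions, not from the one-liner above):

```ini
# /etc/systemd/system/throttle-distractions.service  (hypothetical name)
[Unit]
Description=Re-apply comcast throttling for the currently resolved CIDR

[Service]
Type=oneshot
# Clear any previous rules first (assuming comcast's --stop flag).
ExecStartPre=-/usr/bin/env sh -c 'comcast --device=eth0 --stop'
ExecStart=/usr/bin/env sh -c 'comcast --device=eth0 --latency=250 --target-bw=1000 --default-bw=1000000 --packet-loss=10% --target-addr="$(whois $(dig +short twitter.com a | head -1) | grep -i cidr | cut -d: -f2 | xargs)" --target-proto=tcp,udp'

# /etc/systemd/system/throttle-distractions.timer
[Unit]
Description=Refresh the throttle every five minutes

[Timer]
OnBootSec=1min
OnUnitActiveSec=5min

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now throttle-distractions.timer`; add more `dig` lookups to the `ExecStart` line for the API/CDN domains.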


Just gave it a quick test run - this works exactly as I hoped for, thank you so much!


I'm glad it helped you :)


At the kernel layer, all it sees is IP addresses. Your best bet would be to do the IP lookup, but that can be problematic if they use multiple IPs, share IPs with other sites, or change IPs at some point.

Luckily most of these big sites tend to use a low number of anycast IP addresses so it may be pretty effective. Sometimes they will even publish IP ranges.
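Since `dig +short` can return several A records, one way to handle the multiple-IP case is to pass them all to `--target-addr`, which already accepts a comma-separated list (see the `8.8.8.8,10.0.0.0/24` example above). A small sketch, using a canned resolver output in place of a live `dig` so it runs offline:

```shell
# Hypothetical resolver output; in practice: ips=$(dig +short reddit.com a)
ips='151.101.1.140
151.101.65.140'

# --target-addr takes a comma-separated list, so join the lines.
target=$(printf '%s\n' "$ips" | paste -sd, -)
echo "$target"   # 151.101.1.140,151.101.65.140

# comcast --device=eth0 --latency=250 --target-addr="$target" --target-proto=tcp
```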


Nitter instance hosted on an island on the opposite side of the world from you?

Teddit?

piped.kavin.rocks but through a proxy in a different country?

There are some ways to build in some latency. :)

I like Twitter a lot, but when browsing or doomscrolling I force myself to use Nitter on one of the public instances that gets rate limited a lot, which keeps me from using it all the time.


You could set up toxiproxy or something similar (a slow squid cache, maybe?) and then set up a DNS server that overrides the domains of some of the big names and redirects them to it. MikroTik routers have an easy way to do this; it will probably throw SSL cert errors, but you could solve that with some self-signed trusted certs.
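As a concrete sketch of the DNS-override half, assuming dnsmasq rather than a MikroTik box, with a made-up proxy address of 192.168.1.50 and a hypothetical config file name:

```ini
# /etc/dnsmasq.d/slow-sites.conf  (hypothetical file name)
# Answer lookups for these domains (and their subdomains) with the
# address of the box running the slow proxy instead of the real IPs.
address=/reddit.com/192.168.1.50
address=/twitter.com/192.168.1.50
```

Point your clients (or your router's DHCP) at this dnsmasq instance; everything else resolves normally.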


You cannot solve a thoroughly human problem (compulsion, addiction) with a technical solution. I suggest you find a way to build up your willpower to resist using those sites.



