Comcast: Simulating shitty network connections so you can build better systems (github.com/tylertreat)
370 points by olalonde on July 16, 2022 | hide | past | favorite | 138 comments



The greatest part of this project is that if you ever get sued for trademark infringement, you can force their lawyers to explain why anyone could possibly confuse the product of Comcast Corp. with a tool for making your internet as flaky and slow as possible.


To be honest, when I saw this title, I thought Comcast had made and publicized a tool for simulating suboptimal networks...


This is the type of confusion trademark law seeks to protect against.


Not a lawyer, but:

https://www.uspto.gov/trademarks/basics/what-trademark

"A common misconception is that having a trademark means you legally own a particular word or phrase and can prevent others from using it. However, you don’t have rights to the word or phrase in general, only to how that word or phrase is used with your specific goods or services.

For example, let's say you use a logo as a trademark for your small woodworking business to identify and distinguish your goods or services from others in the woodworking field. This doesn't mean you can stop others from using a similar logo for non-woodworking related goods or services."

Comcast the shitty-network simulator should not be confused with Comcast the Internet Service Provider. The joke really is on them if they ever file a lawsuit.


Follow the link on that page to “strong trademarks.” Comcast is a made-up, fanciful word, which is a much easier mark to defend.


A network simulator and an ISP are easily confused according to legal standards. Even The Beatles' Apple Records had a case against Apple Computer in the early days of the company.


Trademarks are registered under specific categories of commerce. It's within those categories that trademark confusion can be argued.

Comcast has something like 50 trademarks (look them up on TESS), covering everything from cable to internet services to hats and coffee mugs, basically anything the company makes or does. Let's take COMCAST BUSINESS as a random example. This trademark is registered for a variety of categories for which confusion can easily occur, but one stands out (italics mine):

> "IC 009. US 021 023 026 036 038. G & S: Downloadable cloud-based software used to back up business data; downloadable cloud-based software used to share business data; downloadable cloud-based software used to send and manage secure documents; *downloadable cloud-based software used to manage, secure and protect computer networks;* downloadable cloud-based software used to create and transmit documents with electronic signatures; downloadable cloud-based software used to manage telephone systems; downloadable cloud-based software used to manage advanced telephone features; downloadable cloud-based software used to provide web conferencing; downloadable cloud-based software used to monitor business security systems; *downloadable cloud-based software used to detect and remediate computer viruses, worms, trojans, spyware, adware, malware and unauthorized data and programs;* high-definition multimedia interface cables; and Ethernet and wireless networking hardware and computer hardware and components thereof."

Then there's this one:

> "IC 038. US 100 101 104. G & S: *Providing high-speed access to the Internet, mobile networks, wired and wireless networks and other electronic communications networks;* telecommunication access services; telecommunications gateway services; wireless broadband communication services; wireless broadband telecommunications services; telecommunications services, namely, voice, video, and data transmissions provided through the use of cable television distribution facilities; telecommunication services, namely, transmission of voice, data, graphics, images, audio and video by means of wireless communication networks; *high-speed electronic data interchange services provided via modems, hybrid fiber coaxial cable networks, routers, and servers;* electronic, local and long distance transmission of voice, data, and graphics by means of cable, telephone, wireless, ISDN, and technologies; telephone communication services; providing voice communications services via cable, fiber optics, the internet, wired and wireless networks, mobile networks and other electronic communications networks; providing fiber optic network services; Voice over Internet Protocol (VoIP) services; cable television services; cable television broadcasting services; cable television transmission services; providing access to telecommunication networks; provision of telecommunication access to video and audio content via video-on-demand, interactive television, pay per view, and pay television subscription services; video-on-demand transmission services; video conferencing services; wireless PBX services; cloud-based PBX services; streaming of video and audio material via the Internet, wired and wireless networks, mobile networks and other electronic communications networks; telecommunications services, namely, providing advanced calling features and leasing or rental of telecommunications equipment; leasing or rental of telecommunications equipment; computer network access services by means of a 
metro Ethernet; electronic transmission and streaming of digital media content for others via the Internet, wired and wireless networks, mobile networks and other electronic communications networks; cloud-based communications services in the nature of instant messaging, telephony, audio and video conferencing, screen sharing, file transfer, call control and management, voice mail, e-mail, SMS and facsimile services; and providing wireless hot spots."

And that's just the first thing I picked out. It goes on like this. So, yeah. Comcast the network degradation simulation software almost unquestionably overlaps with Comcast's trademarks.


Interesting... Then the fun is all but guaranteed.


Maybe he can rename it to “comcrap”? Jokes aside it’s probably better to use a less controversial name.


> Maybe he can rename it to “comcrap”? Jokes aside it’s probably better to use a less controversial name.

Perhaps he should rebrand it "Xfinity." That seems like a perfectly harmless name that is in no way associated with poor products and crap customer service.


Since it is designed to simulate a poor network, Xfinite would be a funny name.


comcaste


To begin with, you've got a very narrow set of people who would even discover this project. Now intersect that with the people who will reliably miss context clues ("shitty network connections") to get a joke, and the set becomes even narrower.


Yes, but once the lawsuit does come, which, let's face it, will come (at least a C&D), the dev(s) can have all of the tech trade rags/blogs/forums/etc. paste it all over as news. Then they'll become internet famous. At the end of the day, they probably know they'll have to change the name and are probably hoping for it, just for the publicity. Let it live as long as it can, then ride the wave.


> which, let's face it, will come

I've had this project starred for at least 7 years, so perhaps Comcast legal has more of a sense of humor than we're giving them credit for.


Or they just don't know about it and/or they have bigger fish to fry first.

Trademarks (unlike copyright) tend to require historical evidence of enforcement to remain enforceable in the future. If they fail to enforce this and it's significantly in the public sphere, it could be used as evidence the trademark no longer applies, depending on the judge.


Perhaps. I wouldn't have named my project this, either, just for the avoidance of an eventual headache. But I find it unlikely this hasn't been flagged to their legal by now.


Instead of the typical HN hug of death, maybe it'll be the HN Eye of Sauron that is the ultimate reaction to making the front page. <shrugs>

However, if you do get the eventual C&D, you might as well tweet about it and get another 15mins of fame!

*Edit: if you think a corporate legal team is allowed to display a sense of humor, you'd be very mistaken. Sure, some might be cognizant of their corporate overlords, but believing they would let something like this slide once made aware of it could be an expensive mistake.


Jeez. I'm starting to think the people on this thread want it to get a C&D.


There's a difference between wanting something to happen and understanding the world well enough to recognize the inevitabilities that follow when certain conditions are met. It's like wanting the snowman you made in the front yard to survive the summer: it is inevitable that it will melt.


Legally, it doesn’t matter. Trademark law (not copyright) requires active enforcement to keep your mark. So even if it’s a joke, making a networking-related thing while Comcast has a trademark in that category is enough to wake their lawyers up.


Comcast has to actively enforce against ISPs or network providers.

Comcast the ISP doesn't have to actively enforce against Comcast brand tires or Comcast brand coffee.

A lot of people are confused about this, and big companies use "we have to actively enforce" as an excuse to crush companies in unrelated fields. But you don't.


"Duty to police" seems up there with "requirement to maximize profits" in the realm of excuses to indulge in the sort of bad behavior that execs are fond of.


Perhaps being on the front page of HN will be the catalyst which leads to the lawyers being informed.


Me too!

And my second brief thought was: maybe they forgot to turn it off for me?

Wish I was kidding.


Nah, this is for people who don't already have Comcast.


There's an old paper on how networking, by design, is not reliable or efficient, which is more cogent than ever in large-scale distributed systems. That's likely what Comcast would cite, so I doubt a lawsuit would reveal the quiet part said out loud about the reliability of Comcast's services.

A couple citations I liked:

https://queue.acm.org/detail.cfm?id=2655736

https://blog.acolyer.org/2014/12/18/the-network-is-reliable/


> cogent

No, comcast. :)

Jokes aside, that paper is very worth reading. And HN readers will recognize one of the authors as the creator of Jepsen, of distributed database testing fame.


On the other hand, projects named this way are not easily googleable.


Currently it's the top result for "comcast network tool", ranked above comcast's own github page.

I recall at least one other project with this kind of naming sense: https://github.com/auchenberg/volkswagen

which was named after https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal

and is currently the top result for "volkswagen test library".


Not really, they just need to point at the trademark registration. There is nothing to explain or be embarrassed about.


Trademarks are supposed to be consumer protection regulation; they aren't there to protect the corporations. If I make some brown goo and call it "Pepsi", PepsiCo can sue me because I confuse their customers, not because this will cause losses for PepsiCo.

At least, this was the original intention. In today's more corporation-friendly world, more emphasis is being put on the corporations interests unfortunately.



In the 1995 case of Qualitex Co. v. Jacobson Products Co., the Supreme Court described trademark law as "preventing others from copying a source-identifying mark" and assisting the customer in making purchasing decisions.


This is a reddit-type urban legend that has no basis in reality.


Citation? Genuinely never heard this argument before.


That's because it isn't true. Trademarks since ancient Egypt have simply been a way to specify origin of a product. Laws regulating trademark are an attempt to stop consumer confusion, either by accident or through fraud, but have always been written with the interests of the producer in mind, not the consumer.


Does this mean we can never rebase toward the consumer again?


Did they trademark the use of the mark for "simulating a bad network"? I don't think so.


That's not a product category. They have registered the trademark for computer networking.


I don't know why this was dead; this is how it works: trademark protection extends to whole classes of products, the so-called Nice classes.


Simple, they already know Comcast is synonymous with flakey slow Internet and shitty customer service. But they already paid off congress and know there is nothing you can do about it.


https://www.youtube.com/watch?v=CHgUN_95UAw

"So the next time you complain about your phone service, why don't you try using two Dixie Cups with a string? We don't care. We don't have to. We're the phone company." -Lily Tomlin as Ernestine


Well Canadian Tire Corp tried to claim ownership of the domain crappytire.com and lost. Looks like the domain was eventually bought out.


If only Comcast customers had a way of simulating non-shitty network connections...


From docs it looks like you can turn off the shitty network connection by executing

comcast --stop


You could also rebrand this program to comcast-simulator lmao


Grinning from ear to ear on that point...!


Sounds expensive.


Tangent but I was always annoyed at how poorly google products work offline. One time I was using google maps to navigate to a remote address. Once I got close to the address google maps crashed and I re-opened the app. I had no cellular service. The app remembered the actual map data but did not remember the address I had just typed in to the program 15 minutes earlier. I had to guess what street address I was heading to! Why they designed the app to store all search history exclusively in the cloud and not keep a local copy is beyond me, but I presume when working at google they always have an excellent connection and hadn’t thought of what would happen if a user lost service. Being able to at least see the search history would have saved me some grief!


I've experienced that same behavior too. But I also happen to know that, at least in the earlier days of android, they had sophisticated labs available for emulating poor network conditions and software for spoofing GPS on-device in the lab.

Which just makes it more frustrating.


> Why they designed the app to store all search history exclusively in the cloud and not keep a local copy is beyond me

It used to. It was intentionally removed. I say this as a customer who read the update notes, not as an employee of Google.

I'll let you guess why they did. I have my opinion why and it is not flattering to the company, to say the least.


https://www.cnn.com/2019/07/22/tech/google-street-view-priva...

Hey, who remembers Street View cars collecting SSIDs...


For all the hate it gets, I can confirm that Apple Maps does not exhibit this behaviour, and I also believe Waze doesn’t, as well; though my memory is hazy on that one.

Driving around in Northern Canada, as I am often wont to do with my partner; you’re without cell service at least half the time; and even when you do have cell service - data is very uncommon.

My girlfriend and I ran into this issue twice with Google Maps, tested if Apple Maps exhibited the same behaviour, and since it hadn’t - neither of us have actually used Google Maps since.

This is - of course - iOS-specific advice.


Making a tool that retains its usefulness in isolation from Google and the cloud is antithetical to the SaaS philosophy.


If you are on macOS you can use the "Network Link Conditioner" System Preferences pane that is included with Xcode for the same effect.

You can find that in the Additional Tools for Xcode download, in the Hardware folder.


I used to work with someone who would frequently explode on group video calls, usually with the anger directed at a single victim.

I worked out that if it happened to me, I could use the OS X link conditioner (built into Xcode) to degrade the connection realistically. Coming back after he'd finished his tirade with a 'sorry, my internet capped out, could you repeat that' took a lot of the heat out of the situation.


Sidenote, but I have never ever witnessed someone getting angry at another person in a meeting, remote or in person. I don't even know how I'd react except being flabbergasted at the unprofessionalism.


I haven't otherwise, this was quite the experience. Quite soon after the first incident I started planning my exit. Attrition was very high at that company...


Oh boy, I have. It was the president of the company. I lined up a job and quit so fast


I wish more companies cared about making their product accessible with a poor connection, or even no connection at all.

Google Maps is probably the most egregious example of that: you are likely to need it outside of your usual Wifi, and possibly where cellular coverage is poor. Why not offer reasonable off-line support? The Apple TV app is confusing too: they offer the option to download shows (like Netflix does) but the feature is essentially unusable.


It's a hard problem.

Offline means you're dealing with conflict detection, resolution, and caching strategies. Do you do multi-master? Have a single remote master but cache commands as a fallback? How do you test your solution?

And if you're dealing with the web, you have to deal with IndexedDB, which can sometimes be nuked by the browser or have show-stopping bugs (looking at you, Safari). There's also limited capacity compared to server-side storage. Even if you use a library, under the hood they all use IndexedDB; there is no getting away from its limitations.

If anyone's interested in this stuff, I'd love to chat. I'm between contracts and trying to make an old app of mine work offline, lots of fun but also challenging.


> Offline means you're dealing with conflict detection, resolution, and caching strategies

users of google maps / apple TV don't have to send 'write' operations except for a few features (set-favorite, etc)


one could argue that users of google maps don't have to send 'write' operations at all to save favourites locally, but if you don't allow location history and GPS you barely get autocomplete and can't even keep a local list :)


Read-only offline is not a hard problem. Regardless, lots of things are hard problems _the first time it’s solved_ but become easy problems with the right tools and design.


A naive approach is to have an integer named "version" and a bool named "synced" beside each post/record/row. Then increment version and set synced=false when updating, and synced=true once the change has been uploaded. Or make the data/state immutable and just have the "synced" variable.

A more complicated approach is to have transform operations for state changes, each carrying a version number; the version is incremented on a central server for each change, and the server decides in which order the transform operations should be applied.
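A minimal Python sketch of that naive version/synced approach; the record shape and the fake send_to_server callback here are illustrative, not from any particular sync library:

```python
class Record:
    """One locally stored row, tracking whether it still needs uploading."""

    def __init__(self, data):
        self.data = data
        self.version = 0      # bumped on every local edit
        self.synced = True    # False while a change is waiting to upload

    def update(self, data):
        self.data = data
        self.version += 1
        self.synced = False   # mark dirty until the next successful push

    def push(self, send_to_server):
        # Only upload records with unsynced local changes.
        if not self.synced:
            send_to_server(self.data, self.version)
            self.synced = True

# Usage: edit offline, then push once connectivity returns.
sent = []
r = Record({"title": "draft"})
r.update({"title": "final"})
r.push(lambda data, version: sent.append((data, version)))
```

After the push, `sent` holds the single dirty change and the record is marked clean, so a second push sends nothing.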


Nothing against you personally. I just would like to point out this awesome display of jumping into specific solutioning and show of technical prowess instead of staying on the abstract level of the parent and discussing the pros and cons and challenges of offline operation.

None of the approaches you are talking about here are new. There are many ways to implement this. The parent knows this and simply abstractly said you need to "do conflict detection". And yes that is an issue. A solvable one which has implications though and the level of challenge depends on the domain and the number of operations (potentially dependent on each other operations) that you made while offline. In your domain, do multiple users usually edit the same 'thing' simultaneously or at least with a frequency higher than the amount of time you are offline? If so you have a challenge on your hand.

A simple example of this is something we all do a lot: coding. Say you use git, work offline 8 hours a day, and push at the end of the day, and 10 other developers are working on the same code base. If the work is structured so that all of you need to change the same part of the code base, you are going to be in a world of hurt resolving conflicts.

Luckily code structure and work breakdown can be done in a way where you will only have minimal conflicts. Git also chose a specific resolution strategy: last one to the table has to deal with conflict resolution.

Depending on your domain and how many dependent and complicated operations you allowed the user to make this can be really frustrating for users and impossible to resolve in an automated way.

Of course simply providing an offline mode for your email program in a mailbox only you have access to is a much simpler domain with easily automatable resolution strategies that are easy for users to understand as well.


That only works until the user has TWO offline devices that both make conflicting changes. A better solution would be to use something like Simperium (https://simperium.com/).

Disclosure: I work for Automattic.


I’m real confused about you calling out Google Maps when it has fantastic offline support (which I use all the time).


He’s probably talking about the web version. Under any other interpretation that comment makes no sense.


I'm not so sure. If I load a saved map in Google Maps (e.g. someone has saved a route with markers and shared it with me), and then I go offline whilst viewing it, Google Maps on Android will show an error after a while and I'll lose the route and markers entirely. This occurs even if I've marked the area as an offline map. I guess it's just a case that they treat the saved routes in a different way. But it's really annoying for my use case (which is finding crewing points for long distance running events in the middle of nowhere with no mobile coverage).


I was talking about Google Maps on iOS. There are countless instances where it will refuse to work without a connection; there's even a modal pop-up to scold you for not having a connection, rather than trigger a dedicated mode like other Google apps. You can't lock in a destination and have the routing work consistently until you are there. The journey will disappear at some point without a warning. You can't find it back without an active connection. Simply having the option to lock a journey and only delete it when expressly told so would be a major improvement.

Sure, you can download local maps, but that just saves some bandwidth. Neither Search nor Routing works offline. Without Routing, the app loses most of its uses. Not having Search wouldn't be a problem if you could pick a location on a map and ask what's there, or even flag it, but you can't. You have to find the name some other way and search for that, often zooming ridiculously close to get the location you are interested in to appear.


Besides being unable to find the names of nearby stores on the map, I haven’t experienced any of that with Google Maps on iOS. I saved an offline map many years ago and the app automatically uses it if I turn on airplane mode or the internet drops when I’m driving. Navigation still works and I can change destinations while offline. Maybe you drew the short stick on an AB test?


Driving generally works well offline, but biking, walking, and transit don’t. Some random features like search history or shared lists also seem to not work or work badly offline. Maybe you and the parent comment are using it differently?


Sounds like it, but I'm not sure why they would run it for so long.


Maps does let you download areas for offline use. I'd agree that it's non-ideal for long trips, though in practice this has never been a huge problem for me.


Offline support on Android is excellent. I'd guess most people don't use it offline on a laptop.


Google Maps is literally free and here you are complaining it won't work when you don't have internet... But it will: https://support.google.com/maps/answer/6291838?hl=en&co=GENI...


He was talking about it crashing while navigating and not coming back up properly with no connection. That's a valid concern.

Offline maps are nice too, but this wouldn't have helped him get his navigation route back.

I've experienced many of these issues through the years too. And a dedicated offline navigation program is probably the way to go.

I do think your tone is somewhat unnecessarily dismissive though.


The best way to do this is to minimize the amount of data going back and forth.

In today's age, with multi-megabyte 4K background videos and enormous dependency chains, that's a hard sell.

I dealt with it by writing my own backend with a fairly optimized API, and consulting it in JIT "bursts."

That’s considered “square,” these days.


I use Organic Maps exactly for this reason. Google Maps is too unreliable when cycling and I need an offline solution (also OpenStreetMaps have better coverage of cycling routes than Google does).

https://organicmaps.app/


HERE maps are downloadable.


HERE is wonderful for international travel - you can download entire countries or subsets thereof, and turn off data roaming and just use GPS. Then you can stick to free wifi.


Love the project, likely to use it, and I wish more developers were compelled to use Southeast-Asian-island levels of latency, jitter, and bandwidth, at least once a week while writing websites.

The name is choice, I almost hope you get a nasty letter so you can share it with the rest of us. Hard to search for though...

Might I offer.... Concast?


There's nothing more annoying than discovering that some developer wrote their own "timeout logic" that fires while the socket is still transferring data, just very slowly. Pro tip: don't implement your own timeout logic; just set a timeout on the socket.
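A quick Python illustration of the distinction: a socket-level timeout only fires when no bytes arrive at all, whereas a home-grown wall-clock timer tends to kill connections that are slow but still making progress. (`socketpair` here just stands in for a real network connection.)

```python
import socket

# A connected pair of sockets standing in for a real client/server.
server, client = socket.socketpair()
client.settimeout(0.1)  # recv() raises socket.timeout after 100 ms of silence

server.sendall(b"slow but alive")
assert client.recv(64) == b"slow but alive"  # data arrived: no timeout fires

try:
    client.recv(64)  # nothing more is coming this time
    timed_out = False
except socket.timeout:
    timed_out = True
```

The timeout clock effectively restarts on every successful `recv`, which is exactly the behavior you want for a trickling-but-live connection.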


There's a bandwidth and latency simulator in the Chrome devtools, no extra tools needed.


This tool is at the socket level, and used for developing distributed systems. The Chrome dev tool is just to demo the user experience of poor internet. They're very different use cases.


ClownCast


There's also toxiproxy - https://github.com/Shopify/toxiproxy


Toxiproxy is fantastic. I wish they supported a full configuration file in JSON or TOML or something but other than that it has been a lifesaver testing websockets.


About 15 years or so ago, we looked at buying a hardware appliance that would let you build and simulate virtual networks with all sorts of different characteristics. Packet loss, latency, etc.

We developed software for retail point of sale systems, and some of the stores we had to support had all sorts of awful infrastructure, both inside the store and between the store and the “rest of the world.”

I can’t remember the name, but I believe it was an Israeli company, and the device was programmed/configured through Visio (iirc).


Often with amazingly crappy internet what you want are "one packet wonders" - do everything possible in single packets, so that if one gets through you win.


That was Shunra, I worked there for a brief time. It was sold to HP at some point and I don't know what happened to the product afterwards.


It was probably incorporated into HP's consumer printers


These products were typically known as "WAN emulators" and were pretty commonly used in the exact situation you describe: developing large scale distributed systems that were going to be used over unreliable connections. This functionality is now available in the Linux kernel via netem and is accessible via the `tc` command.


The 2.6 Linux kernel includes the ability to create this sort of havoc on your network interfaces via netem[1]. Using the `tc` tool you can add latency, jitter, packet loss, packet reordering, corruption, duplication, or rate limiting. You can apply this to an entire interface or only to single IPs or ranges of IPs, as it works through the existing QoS framework in Linux. Combined with a routing interface, you can leverage this to create a WAN emulator for external hosts without much trouble.

This is incredibly useful when smoke testing large scale distributed systems or for test/development of protocols intended to be used over dicey connections.

[1] https://www.linux.org/docs/man8/tc-netem.html
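As a sketch, here is a small Python helper that builds (but does not run) the corresponding `tc qdisc` invocation. Actually applying it requires root; the flag syntax follows the tc-netem man page linked above.

```python
def netem_command(iface, latency_ms=0, jitter_ms=0, loss_pct=0.0):
    """Build (without executing) a tc command applying netem impairments."""
    cmd = ["tc", "qdisc", "add", "dev", iface, "root", "netem"]
    if latency_ms:
        cmd += ["delay", f"{latency_ms}ms"]
        if jitter_ms:                 # jitter is given as a second delay value
            cmd += [f"{jitter_ms}ms"]
    if loss_pct:
        cmd += ["loss", f"{loss_pct}%"]
    return cmd

# e.g. 250 ms +/- 50 ms of delay and 1% packet loss on eth0:
cmd = netem_command("eth0", latency_ms=250, jitter_ms=50, loss_pct=1.0)
# subprocess.run(cmd, check=True)  # would need root to actually apply
# ...and "tc qdisc del dev eth0 root" to undo it afterwards.
```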


That's pretty much what this tool does, except that it also supports alternative command lines available on BSDs and macOS. The README even shows how to use tc and iptables to simulate a bad network!


> Here's a list of network conditions with values that you can plug into Comcast

Here's an idea: take this list and embed it into the program so that you can execute it like this:

  comcast -use gprs
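A Python sketch of that preset idea; the numbers below are illustrative guesses, and the real values would come from the README's table of network conditions:

```python
# Illustrative preset values only; real ones belong in the README's table.
PRESETS = {
    "gprs":   {"latency": 500, "target-bw": 50,  "packet-loss": "2%"},
    "edge":   {"latency": 300, "target-bw": 250, "packet-loss": "1.5%"},
    "3g":     {"latency": 250, "target-bw": 750, "packet-loss": "1.5%"},
    "dialup": {"latency": 185, "target-bw": 40,  "packet-loss": "2%"},
}

def comcast_command(preset, device="eth0"):
    """Expand a named preset into a full comcast command line."""
    opts = PRESETS[preset]
    flags = " ".join(f"--{k}={v}" for k, v in opts.items())
    return f"comcast --device={device} {flags}"

print(comcast_command("gprs"))
# comcast --device=eth0 --latency=500 --target-bw=50 --packet-loss=2%
```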


I don't need this software, because Comcast is my ISP.


I remember there is some kind of provision in copyright law for satire in performance art. There was a cafe set up once for a few days called “Dumb Starbucks”, and that was not deemed copyright infringement. Not a lawyer; this is not legal advice.


If anything, it would have been trademark infringement.


Comcast is pretty good compared to Frontier. People with DSL don’t want to hear you bitch about your cable provider, particularly one that (relative to others) charges a premium price for a premium product.


Generally when a product doesn't work for half an hour every day, with no acknowledgement from support or fix, I would hesitate to use the word "premium." I've also lived with 12 Mbps DSL; at least it was rock solid 24/7.


That's the big problem with cable / DOCSIS: it's totally inconsistent from one day to the next. The last time I had trouble, they indicated there was "upstream ingress" in the neighborhood. This caused significant packet loss and took literally months for them to track down. In my experience, unless it's a complete outage, cable companies do not give it a priority.


I bought a wireless home internet subscription and a router with WAN failover capabilities because I was tired of 10 minute outages 3 times a week. Great for you if Comcast provides you with premium product. They simply do not here.

I'm not completely unsympathetic to people stuck with DSL since I grew up with dial-up and that's still all my parents can get (apart from wireless), but I'm absolutely entitled to and will continue to bitch about Comcast's service when relevant.


There’s another tool called “clumsy” that I used some years ago.


Yeah, that’s the one I use. Unlike Comcast here it works on Windows. http://jagt.github.io/clumsy/


I enjoy this tool.


Comcast rebranded themselves as Xfinity to try to shake their reputation. Maybe this project should be renamed too.


They only rebranded the consumer side, and that was, as I recall, coincident with their move from being a 'cable company' to pushing bundled IP services. The business side is still branded Comcast. I don't think they much care about their reputation, at least not enough on its own to rebrand. In many of their markets there isn't much in the way of an alternative other than the at-least-as-shitty AT&T.


Would it be possible to use this to target specific domains rather than IP blocks? I have been looking for a way to break my instant gratification browsing habits (twitter, reddit etc.) by introducing random delays into those websites, essentially making them barely usable, which works a lot better than blocking them outright. There are some existing browser extensions with a similar idea, but they usually only delay the initial load, so once you get through that barrier you are allowed to browse freely.

I could manually do a DNS lookup and plug the IP address in, but I don't know enough about internet protocols to know whether that would work with Cloudflare etc.


TCP/IP (unsurprisingly) works over IPs, not domains; domains are resolved via DNS, so there is one extra step involved before actually making the requests (simplified, obviously).

With that said, you could try to limit things based on the IP range of the resolved IP of a domain. Other services the same company runs might be a casualty in this cross-fire, but maybe that's not a problem.

Make this:

    $ comcast --device=eth0 --latency=250 --target-bw=1000 --default-bw=1000000 --packet-loss=10% --target-addr=8.8.8.8,10.0.0.0/24 --target-proto=tcp,udp,icmp 
Into this:

    $ comcast --device=eth0 --latency=250 --target-bw=1000 --default-bw=1000000 --packet-loss=10% --target-addr=$(whois $(dig +short google.com a) | grep -i cidr | cut -d ':' -f 2 | xargs) --target-proto=tcp,udp,icmp 
The `whois ... dig ... grep cidr` pipeline derives a CIDR block from the IP address currently returned by the DNS query. So you'll probably want to run this as a systemd service or something, re-running every five minutes or so (since what the dig command returns will change over time), and you'll probably want to add multiple domains as well (the ones they use for APIs, CDNs and so on).
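For the "re-run it every five minutes" part, a systemd timer is one option. A minimal sketch, assuming you've wrapped the comcast invocation above in a script at a hypothetical path `/usr/local/bin/throttle-domains.sh` (unit and script names are made up for illustration):

```ini
# /etc/systemd/system/throttle-domains.service
[Unit]
Description=Re-apply comcast throttling with freshly resolved IPs

[Service]
Type=oneshot
ExecStart=/usr/local/bin/throttle-domains.sh

# /etc/systemd/system/throttle-domains.timer
[Unit]
Description=Refresh throttled IP ranges every 5 minutes

[Timer]
OnCalendar=*:0/5

[Install]
WantedBy=timers.target
```

Then `systemctl enable --now throttle-domains.timer`. The script itself would need to tear down the previous rules (`comcast --stop`) before applying new ones, or you'll stack them.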


Just gave it a quick test run - this works exactly as I hoped for, thank you so much!


I'm glad it helped you :)


At the kernel layer all it sees is IP. Your best bet would be to do the IP lookup, but that can be problematic if they use multiple IPs, share IPs with other sites, or change IPs at some point.

Luckily most of these big sites tend to use a low number of anycast IP addresses so it may be pretty effective. Sometimes they will even publish IP ranges.


Nitter instance hosted on an island on the opposite side of the world from you?

Teddit?

piped.kavin.rocks but through a proxy in a different country?

There are some ways to build in some latency. :)

I like Twitter a lot, but when browsing or doomscrolling I force myself to use Nitter on one of the public instances that gets rate limited a lot, which keeps me from using it all the time.


You could set up toxiproxy or something similar (a slow squid cache maybe?) and then set up a DNS server that overrides the domains of some of the big names and redirects to it. MikroTik routers have an easy way to do this; it will probably throw SSL cert errors, but you could solve that with some self-signed trusted certs.


You cannot solve a thoroughly human problem (compulsion, addiction) with a technical solution. I suggest you find a way to build up your willpower to resist using those sites.


Another tool that is built into Linux is tc[1]

[1]https://man7.org/linux/man-pages/man8/tc.8.html
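For the simple cases, tc's netem qdisc covers the same ground as comcast's latency and loss flags. A minimal sketch (requires root; interface name and numbers are illustrative):

```shell
# add 250ms of latency and 10% packet loss on eth0
tc qdisc add dev eth0 root netem delay 250ms loss 10%

# inspect what's active
tc qdisc show dev eth0

# remove it when you're done
tc qdisc del dev eth0 root
```

This is roughly what comcast generates under the hood on Linux, minus the iptables rules it adds for per-target filtering.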


This is unnecessary. You can use tc qdisc as described in the readme.


This tool provides a common interface for multiple OSes, which might be useful for someone who frequently switches between Linux and OS X or does development on a Mac but uses Linux in their pipelines.


Not sure why you're getting downvoted, because you're right. This tool is literally a wrapper around that.


It's a wrapper with a much less obtuse command line interface, unified across different architectures, covering iptables+tc, ipfw, and pfctl as much as it can. It's a helper tool to make the obscure qdisc syntax readable for when you want to intentionally slow down your network (rather than provide traffic shaping to make your crappy network more stable).


This reminds me of an Amiga utility someone made called 'viacom' that when run would create a nice semi-transparent pattern of static interference all over the screen.


This is the first search result if you google “github comcast”, which is amazing since Comcast also has its own open source repos on GitHub.


For any frontend folks looking to use packet-level throttling over simulation with lab testing tools such as Lighthouse, I highly recommend throttle from the sitespeed.io folks. Inspired by comcast, but installable in Node.

https://github.com/sitespeedio/throttle


Many people are talking about suing or C&Ds. I wonder why such projects are not hosted in countries where it does not matter (say, China).

I even wonder what would happen if it were hosted in Europe. What would Comcast do? Hire lawyers in Europe to represent them? On the basis of US law?


Now we need a tool to simulate a long time to resolution after you've proven and reported an issue to your service provider, who also happens to be the underlying LEC for the other service provider in your area, so when there's an upstream issue both go down (AT&T).


pfSense/OPNsense routers offer this feature in the settings; you can limit speed, add packet loss, etc.


"This feature" exists in a lot of hardware/software, but the context a tool is built for matters too. Doing this on the router would impact all devices connected to it, right? Changing configuration on your router in order to test something locally on your computer also feels slightly overkill.


On Android we use Throttly to mess with the network: https://play.google.com/store/apps/details?id=me.twocities.t...


The solution to drop random packets using iptables could be used to prank your Linux-using friends.


It's also fun against portscanners. A random split between reject and drop will cause almost all scanners to either assume packet loss or provide spammy output.
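A rough sketch of such a split using the iptables `statistic` match (requires root; the probability and port range are illustrative, and you'd normally scope this to unused ports rather than all traffic):

```shell
# on unused ports, randomly REJECT half the probes with a TCP reset...
iptables -A INPUT -p tcp --dport 1024:65535 \
    -m statistic --mode random --probability 0.5 \
    -j REJECT --reject-with tcp-reset

# ...and silently DROP the rest, so the scanner sees an
# inconsistent mix of "closed" and "filtered"
iptables -A INPUT -p tcp --dport 1024:65535 -j DROP
```

For the prank variant mentioned upthread, a single rule with `-j DROP` and a low probability (say 0.05) produces maddening intermittent packet loss.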


I would love to hear from any regular user of Comcast (or tools similar to it) what their actual workflow looks like for using it to improve their systems?


Loving the concept and the name. I wish more people used these tools to test their products outside of lab-test-like CI environments.


Great name!


Just live in Berlin instead.


i love comcast!



