
Well, the first year of the Iraq War cost the US $54 billion, according to Congress's budget[0]. This doesn't include the total cost of the supporting infrastructure needed to deploy troops in Iraq quickly, but we can estimate that using the increase in the defence budget from 2002 to 2003, or $94 billion ($132B in 2020 dollars)[1].

According to Wikipedia, Minuteman III ICBMs have a 2020 unit cost of $20 million[2], so for that $132B the US could have fired about 6,600 missiles. Considering the invasion toppled the Iraqi government, it's pretty unlikely that firing 6,600 missiles with conventional payloads would have been anywhere near as effective.

[0]: https://en.wikipedia.org/wiki/Financial_cost_of_the_Iraq_War...

[1]: https://en.wikipedia.org/wiki/Military_budget_of_the_United_...

[2]: https://en.wikipedia.org/wiki/LGM-30_Minuteman#Counterforce


The comparison we're making is whether precision attacks, presumably on roughly building-sized targets, would be cheaper to do from long range via ICBMs (with conventional warheads), or via much cheaper but shorter-range missiles. My guess is that neither ICBMs nor shorter-range missiles could have accomplished what the U.S. military accomplished in Iraq. Presumably missiles alone were responsible for a small portion of that $54 billion.


If I can trust https://en.wikipedia.org/wiki/LGM-30_Minuteman, a Minuteman III (the ICBM design currently fielded by the US) will land within 800 ft (240 m) of its intended target 50% of the time, and outside that circle the other 50% (that radius is the circular error probable, or CEP).

In other words, you can't really target a "building-size" target with these (with maybe exceptions like the Pentagon).
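As a rough sanity check (assuming a circularly symmetric normal error distribution, and a generously "building-sized" target of about 50 m radius; both assumptions are mine):

    # Probability that a shot with a given CEP lands within radius r of the
    # aim point, assuming circular normal errors:
    # P(R <= r) = 1 - 2 ** (-(r / CEP) ** 2)
    # (this gives exactly 0.5 when r == CEP, matching the definition of CEP).
    def hit_probability(target_radius_m, cep_m):
        return 1 - 2 ** (-(target_radius_m / cep_m) ** 2)

    print(f"{hit_probability(50, 240):.1%}")  # roughly 3% per missile

So a conventional warhead with a 240 m CEP has only a few percent chance of actually landing on a target that size.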

For nuclear payloads, a few hundred meters of error is much less of an issue, of course.


In the first Iraq war, "surgical strike" was a euphemism for indiscriminate carpet bombing. Was the second Iraq war any different?


I didn't think typosquatting actually worked. I wonder if there's a general way to figure out the most common misspellings of a given domain name...
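Enumerating candidates seems easy enough; something like this rough sketch would do it (picking out the actually common misspellings would need extra signals like keyboard adjacency or traffic data, and the helper name here is just made up for illustration):

    import string

    # Enumerate plausible single-edit typos of a domain label:
    # omissions, adjacent transpositions, doublings, and substitutions.
    def typo_candidates(name):
        candidates = set()
        for i in range(len(name)):
            candidates.add(name[:i] + name[i + 1:])        # omission: "gmail" -> "gail"
            if i + 1 < len(name):                          # transposition: "gmail" -> "mgail"
                candidates.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
            candidates.add(name[:i] + name[i] + name[i:])  # doubling: "gmail" -> "ggmail"
            for c in string.ascii_lowercase:               # substitution
                candidates.add(name[:i] + c + name[i + 1:])
        candidates.discard(name)
        return candidates

    print(sorted(typo_candidates("google"))[:10])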


Easier to just do bitsquatting: register all the domains that are one cosmic ray induced bit flip away from a common domain name, e.g. https://www.bleepingcomputer.com/news/security/hijacking-tra...
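Generating them is simple enough; a minimal sketch (the helper below is only illustrative, and real tools also check which candidates are actually unregistered):

    import string

    VALID = set(string.ascii_lowercase + string.digits + "-")

    # Flip each bit of each character in the label and keep results that are
    # still valid (case-insensitive) hostname characters.
    def bitsquats(domain):
        label, _, tld = domain.partition(".")
        results = set()
        for i, ch in enumerate(label):
            for bit in range(8):
                flipped = chr(ord(ch) ^ (1 << bit)).lower()
                if flipped in VALID and flipped != ch:
                    results.add(label[:i] + flipped + label[i + 1:] + "." + tld)
        return sorted(results)

    print(bitsquats("facebook.com"))  # includes e.g. "bacebook.com", "dacebook.com"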


We did this for a customer to see what leaks. It's very surprising, and sometimes very bad from a security perspective, on popular, high-traffic domains of service providers.


I remember when this hit HN a few months(?) back; it was the first time I'd learned about this, and I assumed it might be an obscure thing.

I ran the Python script against my (very large) employer's domain name and was pleasantly surprised to see we already owned all the bitsquatted versions (there were maybe 10?).


I recall reading a story about someone who became legendary among squatters because he somehow managed to negotiate the rights to commercialize Colombia's TLD (.co), meaning he positioned himself to take a cut of every .com -> .co typosquat ever.

Here's the guy himself talking about it in an NYT article[0].

[0] https://www.nytimes.com/2012/07/01/jobs/from-dot-com-to-dot-...


Oh god, this sounds like it could be an interview question


Put each misspelling on top of a round hole and see if it falls in.


I could see this being a leetcode medium/hard level backtracking question.


It worked for that person because gmail.com is a hugely popular domain and they had gail.com before Gmail was even created. Nowadays it's much more competitive.


The nudity exception is probably more along the lines of the usual carve-outs for 'scientific, educational, or artistic' content.


Somewhat disappointed "securities fraud" wasn't mentioned, although the SEC is ever-present.


Oh yes, good point! It currently only has "entities". There's one other graph with "insider trading" but I can definitely add one with "securities fraud".


I have now added a plot for "fraud". There weren't as many mentions for specifically "securities fraud".


Elixir and Phoenix are a great option. As a functional language, Elixir focuses on pipelines of functions that progressively transform data structures, which is how Phoenix processes requests (through "plugs" transforming a Plug.Conn struct) and how Ecto, the database library, performs validations. Pattern matching and assertive programming, the use of keyword lists for optional arguments, behaviours (i.e. interfaces), and an ergonomic macro system make the language (almost) as productive as something like Ruby, and typespecs, along with Dialyzer, allow you to claw back some type safety. The architecture that Phoenix promotes, particularly the use of contexts, encourages you to write maintainable, well-organised and encapsulated code.

LiveView is really Phoenix's killer app. It allows you to write responsive SPA-like applications without writing any JavaScript: it sends user interaction to the server over websockets and ships an optimised diff back to the client to patch the DOM, without your having to deal with client-side state management and JSON deserialisation. It's built on top of OTP GenServers, which make it easy to write performant, stateful "processes" that scale well and are monitored by "Supervisors", which restart (or otherwise handle) them in the event of crashes, reducing the need for defensive programming.

The ecosystem is also quite good, and I've found well-designed libraries for most common problems, such as auth, persistent background jobs, serialisation, and wrapping common APIs. Documentation for every library is available on hexdocs.pm and is usually decently comprehensive. For Phoenix itself, pick up Programming Phoenix, which does a better job of giving a big-picture understanding than the docs, but only has a short chapter on LiveView.

By way of criticism, I'd say that Phoenix should work on improving its directory structure (to something similar to [0]) and its deployment story (it's relatively difficult to find a nice solution that deals with things like migrations).

[0]: https://elixirforum.com/t/best-practice-for-directory-struct...


Isn't the Messenger app a "messenger only app that does not include a feed for me to angrily browse"? They also seem to have standalone web and desktop versions at messenger.com.


It's not clear that the government is legally able to censor all types of spam. If the spam is commercial, the Central Hudson four-pronged test on commercial speech restrictions applies,[1] which requires, among other things, that the restrictions be "no more extensive than necessary to serve [the government's] interest". Non-commercial forms of spam would probably be protected from removal by the First Amendment, unless, perhaps, they overloaded the government's servers.

Trolling categorised as "obscene" would be removable, but otherwise may be protected by the First Amendment. Also, the fact that every content moderation decision may be subject to litigation would make it extremely difficult for the government to meaningfully moderate content.

Private platforms, which are not bound by the First Amendment and are allowed by Section 230 to moderate without assuming liability, wouldn't suffer these problems.

[1]: Via White Buffalo v. UT Austin


The difference is that they could require proof of identity and American citizenship to access the site, and the site itself could have enforceable rules to limit human-generated spam without violating the First Amendment. They would not be helpless to moderate the site just because of the First Amendment; they would just need to pass moderation clauses for the site before going live with it.

In a similar vein to the fact that it's illegal to yell "FIRE!" in a crowded theater unless there is actually a fire, they could make it illegal to advertise private businesses and make posters directly legally responsible for what they post.

If this does come about though, I hope they prohibit businesses and "Media Groups" or whatever from creating accounts or at least lock them into a walled garden that you have to voluntarily enter, segregating them from the general populace.


Is the 1.2% intended as a contrast to the "<1%" claim?


I believe the concepts in the first paper are unrelated to those of modern lenses (i.e. functional references). See: https://stackoverflow.com/questions/17198072/how-is-anamorph...


I doubt this explanation really captures the gist of the phenomenon. Many current and upcoming trends (containers/Kubernetes, machine learning, and Rust, for instance) have suffered to varying degrees from either of those problems, yet their adoption among companies keeps increasing. I suspect they suffer from an issue also shared by the likes of cryptocurrencies: they don't have a "killer benefit", a concise, persuasive reason that using them is a discrete improvement over current practice (like "high-level systems programming!" or "self-driving cars!").

