> I don't see how microbes degrading plastics is any better than putting plastic bottles in a recycling bin so they can be melted down to make new ones.

You don't have to waste money and energy sorting bottles out of the trash if you can just dump a bunch of these bacteria into your landfill.


The comment you are responding to was what the scientists in the article concluded, not something I was saying.


RPMFusion isn't considered part of Fedora. Yes, it would be nice if RPMFusion served hashes securely.


Latency is a real problem. Distances to satellites in orbit are very long. Also, there is a scalability problem. How many ground targets can a satellite simultaneously provide service to? My guess is, not a lot.

This could be an incremental bandwidth (but not latency) upgrade over existing satellite internet service to remote areas (by transmitting to a single ground receiver that serves a local area), but that's about it.


"Distances to satellites in orbit are very long."

That's not a fundamental limit. Existing satellites have high latency because they're sited at insanely high altitude -- ~36,000 km (about 6 Earth radii; 120 light-milliseconds one way, so a 240 ms minimum round trip). This is for engineering and economic reasons which aren't fundamental: one, geostationary [0] orbits allow dumb dishes that can't track moving objects; and two, they allow small satellite networks -- i.e. one satellite covering a whole continent -- commensurate with the small size of the market.

If instead you had a network of satellites at, say, 500-1,000 km (unjustified guess), the latencies could be no worse than direct optical fiber -- light in fiber only travels at ~2/3 c, so a vacuum hop can even come out ahead.
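A quick back-of-the-envelope in GHCi (straight-line distance at vacuum light speed; real paths, queuing, etc. only add to this):

    ghci> let roundTripMs altKm = 2 * altKm / 299792 * 1000
    ghci> roundTripMs 35786   -- geostationary: ~238.7 ms
    ghci> roundTripMs 1000    -- hypothetical LEO shell: ~6.7 ms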

edit: Here's a sophisticated diagram, https://i.imgur.com/t1SOVpZ.png

[0] https://en.wikipedia.org/wiki/Geosynchronous_orbit#Geostatio...


What you're describing is essentially a data version of what the Iridium network provides. Iridium is a constellation of 66 satellites at 450 miles up. Unfortunately, when Iridium was launched, forward thinking wasn't part of the plan -- data is stuck at around ancient dialup-modem speeds.

There were plans for similar services, such as Teledesic, which went nowhere. I guess that enough land-based internet covers the majority of the target market, so there isn't enough market left over to justify the cost of a high-speed satellite data provider. Remember, in LEO the satellites have to be replaced after about 5 years (atmospheric drag, and they run out of station-keeping fuel).

Lower launch costs via SpaceX's reusable rockets may change the cost equation, though.


Solar powered boosters and spaceodynamic lifting bodies.


If you're talking lasers though, don't forget to factor in the time needed to reacquire a new satellite once the existing one goes over the horizon. The lower the orbit, the more often you'd have to do this.
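For a rough sense of the handoff cadence, a sketch using Kepler's third law for a circular orbit (the constants are standard values for Earth):

    -- orbital period in minutes for a circular orbit at altitude altKm
    periodMinutes :: Double -> Double
    periodMinutes altKm = 2 * pi * sqrt (a ** 3 / mu) / 60
      where
        a  = 6371 + altKm   -- orbital radius = Earth radius + altitude, km
        mu = 398600         -- Earth's GM, km^3/s^2

periodMinutes 500 comes out around 95 minutes, and any one satellite is overhead for only a fraction of each pass, so you'd be reacquiring every few minutes.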


Fuck everything, we're doing five (simultaneous) lasers.


and we are using drones to deliver them, whether you like it or not


I actually have zero idea how such a thing could work, which is why I'm asking here. :)


If they had a way to have almost unlimited ground targets (maybe using a rotating mirror, or something like https://www.kickstarter.com/projects/117421627/the-peachy-pr...), I could deal with the not-so-bad ~240 ms lag for most of my communications (if you aren't gaming), especially if it's faster than my slow cable connection....

Speed of light: ~299,792 km/s
Geostationary orbit distance from Earth: ~35,786 km
That works out to ~119 ms each way, so ~239 ms up and back.


A grid of sats designed for blanketing the earth in comms is going to be at most 400 miles up; most of my packets go way further than 800 miles, so no, latency won't be an issue.


How far out is servo from being the mainline Firefox renderer? A year? More? Pie-in-the-sky?


Servo is a prototype – closest to your "Pie-in-the-sky" option.

I'm not affiliated with Mozilla or Servo, but from the information on the Servo page and the number of Acid2 and other issues they're tracking, it's clear that there's still plenty of work to do. Throw in the fact that the Rust language itself is targeting the end of the year to hit version 1.0, and you'd have to guess that Servo as a separate project likely has a minimum of 6 months (more likely a year) before they examine whether it's successful enough to start integrating with Firefox.

Then they'd need to integrate and test.

Think about how long the new "Australis" UI was in development (more than 2 years). And that was just a user-interface change, with no change of programming language or other dev tools.

The renderer is the core of the program. And integrating Servo would involve integrating and testing a new language along with the new code. I doubt a large, capable team could perform that much integration and testing in under 12 months – even after Servo itself was considered "complete" (which it isn't).

My prediction: a release of Firefox with Servo code is 2 years away or more (assuming Servo is considered a "success" in 6-12 months).


My guess is not so optimistic. The number of real Rust programmers is low. If integrating Firefox with Servo means having one small part of Firefox integrated with Servo, then maybe a month of work? But replacing the entire Firefox render engine with Servo is probably at least 5 years away. While 5 years seems like a long time, don't forget that time flies and blocker bugs come up.


It's been stated before that Servo may get a Webkit-compatible interface, meaning you should be able to build a Chromium with Servo pretty easily, and far sooner than you could integrate Servo into Firefox.

I'm not sure if that's still the plan, however.


Does Webkit have a very large API/ABI surface? This seems like a nightmare to implement correctly and maintain.


I have no idea.


The idea is to start being dogfoodable for the team by the end of the year, but that only covers a tiny subset of sites that are used by the team -- maybe Etherpad (which is used for meetings) and /r/rust. The main focus with Servo so far has been to look at the real performance bottlenecks that other engines face during layout and to parallelize them. The plethora of other features that need to be supported will come later, after these problems have been solved (they mostly have been). That said, replacing Gecko is a looong way off.


80s and 90s Unix.


It happened several years ago.


Go west, young man!


The bad part is bs=1024. That block size means a separate read/write per kilobyte, which will make it take much longer; a bigger block size (e.g. bs=1M with GNU dd) is the usual fix.

Often, a sparse file doesn't really emulate a real file well, and you do want a big file full of zeroes...


Not on a copy-on-write file system.


What does CoW have to do with anything?


Why Haskell?


Ah, my favorite question.

We previously had a custom DSL and it outgrew its DSL-ness. The DSL was really good at one thing (implicit concurrency and scheduling IO), and bad at everything else (CPU, memory, debugging, tooling). The predecessor was wildly successful and created new problems. Once all those secondary concerns became first order, we didn't want to start building all this ecosystem stuff for our homemade DSL. We needed to go from DSL to, ya know, an L. So the question is which...

If you understand the central idea of Haxl, I don't know of any other language that would let you do what Haxl in Haskell does. The built in language support for building DSLs (hijacking the operators including applicative/monadic operations) -really- shines in this case. I would -love- to see haxl-like implicit concurrency in other languages that feel as natural and concise. Consider that a challenge. I thought about trying to do it in C++ for edification/pedagogical purposes but it's an absolutely brutal mess of templates and hackery. There may be a better way, though.
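To make that concrete, here's a toy sketch in the spirit of the friends example elsewhere in this thread (Id, Haxl, and friendsOf are illustrative names, not the real API). Because <*> gets to inspect both sides before running either, the two fetches are collected and issued together:

    import Data.List (intersect)

    numCommonFriends :: Id -> Id -> Haxl Int
    numCommonFriends x y =
      length <$> (intersect <$> friendsOf x <*> friendsOf y)
      -- both friendsOf calls are batched into one round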


Did you have a problem with namespace collisions of identical field names in records? Is that much of a problem in Haskell? How did you deal with them? Thanks.


To be completely honest, the namespace/module situation with Haskell could certainly be a _ton_ better, but after 8 years of it I can't remember a time when it was ever at the top of my mind as game-breaking. Occasionally quite annoying? Yes, most definitely. But I'd say there are many more annoying things day to day, and in any case, it is certainly a tradeoff I'll put up with for the returns.

That said, GHC 7.10 will ship with a new extension called `OverloadedRecordFields` that will allow you to have identical record field names for different datatypes. Yay!
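Something like this should then compile (an illustrative sketch, not tested against a real GHC 7.10 build):

    {-# LANGUAGE OverloadedRecordFields #-}

    data Person  = Person  { name :: String }
    data Company = Company { name :: String }  -- same field name, no clash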


Thanks! If it's not a pain in regular Haskell work then I'm relieved. Perhaps other data types are more prevalent in Haskell than records?


I agree with the parent that it is a pain, but not very much of one. My pains in lacking other features when I move away from Haskell are far greater than my pains in name collision in Haskell.


I find records are used most heavily in web development, where you are pretty much just shuffling data from browsers to databases and the other way around. But even there the field-name thing doesn't pose much of a problem: I prefer defining my records in the module that handles the functions for them, so there's no issue with conflicting names anyway. I found the fact that 'id' is a standard library function to be a bigger minor annoyance.


I only skimmed the tutorial, but in Scala/Finagle:

  val (friendsX, friendsY) = (friendsOf(x), friendsOf(y))
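  // note: both Futures above are already executing at this point;
  // inlining the friendsOf calls into the for-comprehension below
  // would run them sequentially instead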
  for {
    fx <- friendsX
    fy <- friendsY
  } yield (fx intersect fy).size
really liking the look of this, thanks for open-sourcing!


As I understand this, this is an easy but still explicit concurrency construction, which is not what haxl does. It's easy to imagine how something like the toy example of friend-list-intersection could be converted to explicit concurrency in languages with good concurrency support. This is an easy optimization to make by hand in the cases where concurrency is obvious at some layer or within some abstraction. The power of haxl comes when the two (or whatever) requests come from wildly different places in the AST. For example, if you combine the result of friend-list-intersection with some other numeric quantity that's the result of some other fetches. Haxl essentially performs this optimization automatically, over the entire program.

Something like...

    renderPage :: Haxl Html
    renderPage = mconcat <$> sequenceA   -- assuming Html is a Monoid
      [renderHeader, renderBody, renderRightPane, renderFooter]
This will travel as far as possible through all paths in the AST collecting all IO to be performed in the first round (which, once fetched, will unblock more of the AST, and the process repeats until we have an answer).


How about Scala?


Anybody that's made a serious go of FP in Scala (applicative, monad, etc) has found that it's not worth the hassle. It fights you the whole way.


Scalaz?


I hang out with the Scalazzi and authors of FP in Scala on IRC. Most of them, including the original creator, would rather write Haskell and some are doing just that.


I think Scalaz is a pathway to Haskell. When people start loving Scalaz, they just happen to migrate to Haskell.


You would only choose Scala over Haskell if the pros (having the java library and running under the jvm) outweigh the cons (having the java library and running under the jvm) :-)


I would choose OCaml, but you made a good point. 10 internets for you, sir.


I'd only change it to "... blaming drivers on your toll bridge when you're the one ..."

The drivers (customers) are paying for the bridge!


Sure, but they're only using the bridge to drive to McDonalds. It's McDonalds generating all that traffic -- they should pay the bridge operator too! /sarcasm.


If we had usage-paid highways (which we couldn't have earlier due to missing technology, but such things are now starting to pop up), I can imagine some remotely located mega-store advertising "come to us and we'll pay your road toll - both directions!".


Usage-paid highways have been standard in some countries for ages - they just put toll booths at every entrance and exit and deduce your price from the entrance and exit positions.


Subsidized parking is a similar sort of thing.


It's like asking Walmart to pay for the cost of the welfare that its employees take.

Wait!


Was that sarcasm? If not, and it was a serious idea, then keep in mind Walmart is not the "parent" of its employees - it's not responsible for their upkeep.

Hopefully that was just sarcasm and I'm replying for no reason.


They do. Property taxes.


Seeing as ISPs also receive money from the government and Netflix pays taxes, you could make the same pointless observation about Netflix and Verizon.

But you shouldn't. This discussion is about a direct company-to-company attempted shakedown, not inefficient procurement.


I'm not sure you mean to argue for a publicly funded Internet, but that's basically what this implies.


You've already got that, as the government already pays subsidies. I'm all for basic infrastructure being collectively owned, be it co-ops, collectives, not-for-profits, or even the government (local, national, etc.). This is the way it is for roads, rubbish, sewerage, etc. If you want a premium/luxury service then feel free to pay extra. Contract out support, maintenance, and sales for the infrastructure by all means, but the ownership should be a collective of the users.


Property taxes tied to the level of traffic destined for a particular commercial location? This is the first I've heard of something like this, can you tell me more?


Property taxes for a given location are determined by the size and nature of the business.


Property taxes are based on a poor approximation of what a property would be expected to sell for. The relationship between that and the size or nature of the business operating there is highly attenuated, and it's almost entirely decoupled from how much traffic a business generates on any particular road.


They don't pay taxes to toll road operators.


I'd say it's like blaming Toyota for the traffic jams when you close lanes on the Bay Bridge.

Or alternatively like blaming the SF Giants or AT&T Park for the traffic jams when you close lanes on the Bay Bridge.


Isn't that true with all bridges though? You don't have to pay for each use, but you sure paid for the construction and potentially the upkeep (or are likely to have paid).


This is going very tangential, but no, not all bridges are directly paid for only by people driving on them.

To make public bridges into an ISP analogy, imagine: municipal and state taxes as well as tax revenue from other states (via Federal highway dollars) pay for a municipal broadband network. You may or may not actually use the service, and you pay for it either way. You likely don't have any other choice of ISP. And it's not-for-profit, and the general public and lawmakers constantly clamor and legislate for better service at lower prices.


Although it's not true, I always liked to think that the taxes on my vehicle, fuel, and tolls paid for the infrastructure I drive on. I have no idea whether those are sufficient, but I'll bet that in my state, with its high taxes and crappy infrastructure, they are more than enough.


Generally, the taxes and tolls levied on drivers only pay for ~50% of US road spending - the balance comes out of the general tax pool on both drivers and non-drivers, so it's (arguably) heavily subsidized. (But one could argue that non-drivers benefit from the road system in terms of things delivered to them by truck, etc.)


> non-drivers benefit from the road system in terms of things delivered to them by truck

Perhaps, but they pay a delivery company for that, which pays taxes for its trucks to use those roads.


Yes, once again, at about 50% of the cost of maintaining the provided road.


Some bridges, such as the Seattle Lake Washington bridge, require a toll to cross even though they're on a toll-free highway.

