
Then if the team works hard (say, they're superhuman and can work 24 hours a day, 7 days a week) and does in one month what other teams do in three, will they get the other two months off?

If not, you're just pushing responsibility up.


Not exactly, but a similar situation happened with my team.

There was a period where we simply accomplished a lot more than other teams. It wasn't superhuman effort, which isn't sustainable. Instead, it was better tool selection and streamlined processes (automation and continuous delivery, which shed a lot of the baggage that comes with large releases) that accounted for most of the increase in productivity. In fact, my team worked fewer hours and had fewer production issues than other teams.

The organization (eventually) reacted the way that healthy organizations should...they promoted most of the team into leadership roles in other teams that weren't performing as well. The reward for our success was more money and responsibility and the entire organization benefited from what we'd learned.

I say eventually, because I had to fight a pretty big PR battle before management above me saw the virtue of the way that my team operated. Before that, there was a tendency to give star performer awards and kudos to people putting in long hours to make release deadlines or putting in heroic efforts to keep error-prone production environments up and running. Management saw my team leaving at a normal time every day and didn't think that was worth rewarding. It took a mountain of data for me to show that a boring, predictable production environment that allowed us to push out more features was better serving customers.


The problem is that software engineering is hard.

Immensely so.

On a scale of engineering "hardness" (meaning, how reliably we can predict all side effects of an action), software engineering is closer to medicine than to, say, civil engineering.

We know stresses, materials, and how they interact. We can predict what will happen, and how to avoid edge cases.

Software? Is there any commonly used secure software? Forget about Windows and Linux. What about OpenBSD?

Did it ever have a security hole?

And that's just the OS. What about the applications on top?

There are just too many variables.

So what will happen?

There will be "best practices" enshrined in law. Most will be security theater. Most will remove our rights, and most will actually make things less safe.

Right now, the number one problem in IoT security is fragmentation. Samsung puts out an S6, stops updating it three years later, a hole is found, too bad. Game over.

The problem is that "locking firmware" is common "security theater", which, if there'll ever be a legal security requirement on IoT, it'll require locked bootloader and firmware.

And you can't make a requirement to "keep code secure", because then the question becomes: for how long? Five years? Ten years?


> On a scale of engineering "hardness" (meaning, how reliably we can predict all side effects of an action), software engineering is closer to medicine than to, say, civil engineering.

This level of hubris is pretty revolting. Software engineering is easy. Writing secure software is easy. The difference between civil engineering or medicine and software engineering is that practitioners of the former are held responsible for their work, and software engineers are not and never have been.

Nothing will improve until there are consequences for failure. It's that simple.


It's not hubris. Software really is hard - that's why it looks more like voodoo than a respectable engineering discipline. It has too many degrees of freedom; most programmers are only aware of a tiny subspace of the states their program can be in.

I agree lack of consequences is a big part of the problem. But this only hints at a solution strategy; it doesn't describe the problem itself. The problem is that software is so internally complex that it's beyond the comprehension of a human mind. To ultimately solve it and turn programming into a profession[0], we'd need to rein in the complexity - and that would involve actually developing detailed "industry best practices"[1] and sticking to them. This would require seriously dumbing down the whole discipline.

--

[0] - which I'm not sure I want; I like that I can do whatever the fuck I want with my general-purpose computer, and I would hate it if my children couldn't play with a Turing-complete language before they graduate with an engineering degree.

[1] - which we basically don't have now.


Software really is hard - that's why it looks more like voodoo than a respectable engineering discipline. It has too many degrees of freedom;

No, sorry, software does not inherently have more degrees of freedom than e.g. building a bridge has. The reason other engineering fields are perceived as "limiting" is exactly because they have standards: they have models about what works and what doesn't, and liability for failing to adhere to those standards.

I would argue that the lack of standards is exactly what makes software engineering look like voodoo -- but that is because of the immaturity of the field; it's not an inherent property. Part of the reason software is so complex is exactly that engineers afford themselves too many degrees of freedom.

And I disagree that establishing standards constitutes a dumbing down of the discipline; in fact, the opposite: software engineering isn't a real profession exactly because every nitwit can write their own shoddy software and sell it, mostly without repercussions. That lack of accountability is part of what keeps software immature and dumbs down the profession. As an example, compare Microsoft's API documentation with Intel's x86 Reference Manual: one of the two is concise, complete, and has published errata. The other isn't of professional quality.


I push engineering methods for software. It really is hard for systems of significant complexity. Exhaustively testing just one operand of a 32-bit adder takes over 4 billion tests; covering both operands takes 2^64. The kind of formal methods that can show heap safety took a few decades to develop. They were first applied to an OS kernel and a basic app only a few years ago. Each project took significant resources for developing the methods and then applying them. Many failed where the new methods could handle some things but not others. Hardware is a cautionary tale: it has fewer states plus automatable logic, and there are still errata in CPUs despite tons of verification.
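
To put numbers on that state-space point, here's a back-of-the-envelope sketch (the test rate is an assumption, not a benchmark):

    # State-space math for exhaustively testing a 32-bit adder.
    one_operand = 2 ** 32       # all values of a single 32-bit input
    both_operands = 2 ** 64     # all input pairs

    tests_per_second = 10 ** 9  # assume a generous billion tests/second
    seconds_per_year = 60 * 60 * 24 * 365

    years = both_operands / tests_per_second / seconds_per_year
    print(f"one operand: {one_operand:,} tests")
    print(f"both operands: {both_operands:,} tests (~{years:.0f} years at 1e9/s)")

Covering one operand is the "4 billion tests" figure; covering both is centuries of work, which is why proofs beat testing here.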

So, it's definitely not easy. The people that pull it off are usually quite bright, well paid, have at least one specialist, and are given time to complete the task. The introduction of regulations might make this a baseline with lots of reusable solutions. We'd lose a lot of functionality that's too complex for full verification, and accept slower development and equipment, though. The market would fight that.


Agreed, I never meant to imply that it was easy. I just meant that a "professional" software engineering discipline is neither a pipe dream, nor undesirable.


> Nothing will improve until there are consequences for failure. It's that simple.

Of course it's not that simple. Clearly you've never written much, if any, real software.

You want to make an SSL connection to another web site in your backend. You use a library. If that library is found to contain a vulnerability that allows your site to be used in a DDoS, where do the "consequences for failure" lie? You used a library.

Do you think people will write free libraries if the "consequences" fall back on them? If not, have you even the slightest understanding of how much less secure, less interoperable and more expensive things will be if every developer needs to implement every line themselves to cover their backs? Say goodbye to anyone except MegaCorps being able to write any software.

Where does this end? Would we need to each write our own OSes to cover ourselves against these "consequences", our own languages?


The same could be said for any industry.

Anyone can practise carpentry, but if someone is going to do so professionally and build structures that can cause injury or damage if they fail, then they should be accountable for the consequences. This is why indemnity insurance exists.

In software, a lack of rigour is fine for toy applications, but when livelihoods and safety become involved, we need to be mindful of the consequences and prepared to take responsibility, just like everyone else in society is expected to do.


The problem is identifying potential risks. It's obvious if I build a building it might fall down. It's not obvious if you sell web cams they might be used to take part in massive DDoS attacks.


Well now it is obvious, and honestly it has been so for a while. The reason we have shitty security is not because the risks are unknown.


Here are some risks:

1. Your system might be hacked if connected to a hostile network. Avoid that by default.

2. If connected, use a VPN and/or deterministic protocols for the connections. Include ability to update these. No insecure protocols listening by default. Sane configuration.

3. Certain languages or tools allow easy code injection. Avoid them where possible.

4. Hackers like to rootkit the firmware, OS, or application to maintain persistence. Use an architecture that prevents that, or just boot from ROM w/ signed firmware if you can't.

5. DDoS detection, rate-limiting, and/or shutdown at the ISP level. Penalties for customers that let it happen too often, like insurance does with wrecks. (A sketch of the rate-limiting idea follows below.)

That's not a big list, even though it covers quite a lot of hacks. I'm with the other commenter in thinking that unknown risks may not be what's causing our current problems.
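
A minimal sketch of that rate-limiting idea, assuming a simple token-bucket policy (the class name and thresholds are invented for illustration, not taken from any real ISP stack):

    import time

    class TokenBucket:
        """Allow at most `rate` events/second, with bursts up to `capacity`."""

        def __init__(self, rate: float, capacity: float):
            self.rate = rate
            self.capacity = capacity
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill tokens for elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # over the limit: drop, delay, or flag the device

    bucket = TokenBucket(rate=10, capacity=20)  # 10 packets/s, bursts of 20
    if not bucket.allow():
        print("rate limit exceeded")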


You use a library.

On what basis did you choose that library? Did robustness of the software come into your evaluation? Did you request a sample from the supplier and perform stress testing on it? Did you check for certifications/audits of the code you were including in your project?

If that library is found to contain a vulnerability that allows your site to be used in a DDoS, where do the "consequences for failure" lie?

With you, unless you have a contract with your supplier stating otherwise.


> On what basis did you choose that library? Did robustness of the software come into your evaluation? Did you request a sample from the supplier and perform stress testing on it? Did you check for certifications/audits of the code you were including in your project?

Even if you did everything on this list, you could still get a library with a latent bug, because software is just that complex. Microsoft puts millions of dollars into security, and vulnerabilities are still regularly discovered in its products.

And even if you implement a rigorous audit of the code, that means you can't update, because you have to go through the same audit rigamarole each time a bug is found. By the time you finish auditing your software, a new vulnerability will probably have been discovered.

Not to mention this essentially makes open source software nonviable.


There's a finite number of error classes that lead to the code injection causing our biggest problems. Some languages prevent them by default, some tools prove their absence, some methods react when they happen, and some OS strategies contain the damage. There are also CPU designs for each of these. Under regulations, companies could just use stuff like that to vastly simplify their production and maintenance of software with stronger security.


I disagree that there's a finite number of error classes that lead to attackers disrupting your software/hardware. Code injection is just one of many possible ways to gain control of your computer.


If you have no interpreters and sane defaults in the config, then there aren't many ways to take over your computer (see the sketch below). Attackers have basically always exploited a vulnerability in an application that let them run code. That application either was the privileged one they wanted to be in or was a step toward one. Blocking code injection in apps would knock out the vast majority of severe CVEs I've seen that relate to apps.

As far as the finite amount goes, incoming vulnerabilities fall into similar enough patterns that people are making taxonomies of them.

https://cwe.mitre.org/documents/sources/SevenPerniciousKingd...
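
To make the interpreter point concrete, a Python illustration of the same bug class (the hostile string here is hypothetical):

    import ast

    user_input = '__import__("os").system("rm -rf /")'  # hostile "data"

    # eval() hands the attacker a full interpreter - classic code injection:
    # value = eval(user_input)  # DON'T: executes arbitrary code

    # ast.literal_eval accepts only literals (numbers, strings, lists, ...):
    try:
        value = ast.literal_eval(user_input)
    except (ValueError, SyntaxError):
        value = None  # hostile input is rejected, never executed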


Writing secure software is far from easy. It's super, super hard. The fact that you are saying this makes me wonder if you have ever attempted to write secure software.


Do you write code?


Regarding secure software, there are at least some efforts to make writing formally verified software more approachable.

The seL4 project has produced a formally verified microkernel, open sourced along with end-to-end proofs of correctness [0].

On the web front, Project Everest [1] is attempting to produce a full, verified HTTPS stack. The miTLS sub-project has made good headway in providing development and reference implementations of 'safe' TLS [2].

These are only a few projects, but imo they're a huge step in the right direction for producing software solutions that have a higher level of engineering rigor.

[0] https://wiki.sel4.systems/FrequentlyAskedQuestions

[1] https://project-everest.github.io

[2] n.b. I'm not crypto-savvy, so I can't comment on what is or isn't 'safe' as any more than an interested layperson.


I don't really think the main problem is that software engineering in general is hard. I think the problem we're facing right now is that writing secure software using the tools we have available now isn't realistically feasible.

We need to ruthlessly eradicate undefined behavior at all levels of our software stacks. That means we need new operating systems. We need new programming languages. We need well-thought-out programming models for concurrency that don't allow the programmer to introduce race conditions accidentally. We need carefully designed APIs that are hard or impossible to misuse.
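
As a toy illustration of that last point (my sketch, not anything from a real API): design so that cleanup is automatic and misuse fails loudly instead of silently corrupting state.

    from contextlib import contextmanager

    class Connection:
        def __init__(self):
            self._open = True

        def send(self, data: bytes):
            if not self._open:
                raise RuntimeError("use after close")  # misuse fails loudly
            # a real implementation would transmit `data` here

        def close(self):
            self._open = False

    @contextmanager
    def connect():
        conn = Connection()
        try:
            yield conn
        finally:
            conn.close()  # cleanup cannot be forgotten

    with connect() as conn:
        conn.send(b"hello")
    # conn is closed here; calling send() now raises instead of misbehaving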

Rust is promising. It's not the final word when it comes to safety, but it's a good start.

An interesting thought experiment is what would we have left if we threw out all the C and C++ code and tried to build a usable system without those languages? For me, it's hard to imagine. It eliminates most of the tools I use every day. Maybe those aren't all security critical and don't all need to be re-written, but many of them do if we want our systems to be trustworthy and secure. That's a huge undertaking, and there's not a lot of money in that kind of work so I don't know how it's going to get done.


Can we remove undefined behavior? We can get rid of the GCC optimizations which rely on the premise of undefined behavior to break code and win a speed prize or something, but undefined behavior exists for a reason:

It depends on the CPU.

The problem is that C was designed to be as close as possible to the hardware, and in some places (an RTOS? a kernel?) speed is critical.


We can abstract the CPU away. However, undefined behavior is just the tip of the iceberg. You can fix it all you want, but we'll still be stuck with logic bugs, side-channel attacks, info leaks, bad permissions and misconfigured servers, poor passwords, outdated and broken crypto schemes, poor access control schemes and policies, human error or negligence, etcetera.

There is a huge number of ways security can go haywire even with perfectly defined behavior. Make no mistake, I love watching undefined behavior slowly getting fixed, but I think language nerds are too fixated on UB to see that it's not the big deal and won't get rid of our problems.

Another problem language nerds miss is that we can adapt existing code and tools (in "unsafe" languages) to weed out problems with undefined behavior. It's just that people aren't interested enough for it to be mainstream practice. Yet the bar is much lower than asking everybody to rewrite everything in a whole new programming language. So why do they keep proposing that a new programming language is going to be the solution? And if people just don't care about security, well, we would have all the "defined behavior" security flaws in the new code written in the new shiny programming language.


I don't think that better languages will fix all the security problems. (One can, after all, create a CPU simulator to execute compiled C programs in any reasonably powerful "safe" language.) I just think that C and C++ are specifically unsuitable for building secure systems, and we won't make much meaningful progress as long as we're dependent on enormously complex software written in languages that don't at least have some degree of memory safety as a basic feature.


This is only partially right. Software engineering is hard. But trust is harder. Much, much harder. And most things you have to trust people with just don't matter.

However, in the future where software can do everything, there is no such thing as "limited trust." If you trust someone to operate on your car, you are trusting them with everything the car interacts with. Which... quickly explodes to everything.


software itself isn't intractable, it's that the field is young, and we are stuck with choices made when nothing was understood, and it's gonna take a while to turn the ship. but i think we have a pretty good idea of where we are trying to go wrt writing secure software.


> it's that the field is young

The opposite. When the field was in its infancy, one was able to keep whole stacks in his head.

How complicated were CPUs in the 1960s?

How many lines of assembler were in the LM?

How many lines is the Linux or FreeBSD kernel? Now add libc.

Now you have a 1970s C compiler.

Now take into account all the optimizations any modern C compiler does. Now make sure there's no bugs _there_.

Now add a Python stack.

Now you can have decent, "safe" code. Most hacks don't target this part. The low-hanging fruit is lower.

You need a math library. OK, import that. You need some other library. OK, import that.

Oops, there's a bug in one module. Or the admin setup wasn't done right. Or something blew up.

Bam. You have the keys to the kingdom.

And this is all deterministic. Someone _could_ verify that there are no bugs here.

But what about Neural Networks? The whole point of training is that the programmers _can't_ write a deterministic algorithm to self drive, and have to have a huge NN do the heavy lifting.

And that's not verifiable.

_This_ is what's going to be running your self-driving car.

That's why I compared software engineering to biology, where we "test" a lot, hope for the best, and have it blow up in our face a generation later.


The need to hold whole stacks in the head is the problem. That's not abstraction. That's not how math works. The mouse doesn't escape the wheel by running faster.


I'd say the main problem is developers' carelessness and incompetence.

New SQL injection vulnerabilities are being introduced every day. Passwords stored as plain MD5 hashes. Array boundaries sourced from client data. I mean, there are perhaps 5 to 10 coding errors that generate most of the vulnerabilities.
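
For concreteness, a minimal sketch of avoiding two of those errors in Python; the table and password are made up, and a real system would lean on a maintained password-hashing library:

    import hashlib, os, sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, pw_hash BLOB, salt BLOB)")

    name = "alice'; DROP TABLE users; --"  # hostile input

    # DON'T: string interpolation invites SQL injection.
    # conn.execute(f"SELECT * FROM users WHERE name = '{name}'")

    # DO: parameterized query - the driver treats `name` strictly as data.
    conn.execute("SELECT * FROM users WHERE name = ?", (name,))

    # DON'T store plain MD5. DO use a salted, slow KDF such as PBKDF2.
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)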

That's not the only problem. We also need to trust the users, who are either careless or malicious. But I'd like at the very least to be able to trust our systems.


> Cheap (and preferably clean) energy

This is the difficulty.

Right now, the only cheap and clean energy somewhat on the horizon is fusion, which has been 50 years in the future for the past 50 years.


Nuclear is clean. Cleaner than solar, at least.


The waste is not simply the spent nuclear fuel, but much of the machinery and systems around it, plus the discarded items used daily in the management of a plant (clothes etc). This low-level nuclear waste, while 'only' dangerous for 100 to 500 years, is huge - vastly bigger than the 76,000 metric tons of spent fuel: http://www.nei.org/Knowledge-Center/Nuclear-Statistics/On-Si.... I don't have the tonnage of the low-level waste to hand, but it is certainly much larger than the high-level waste.


Sorry, copied the link incorrectly: http://www.nei.org/Knowledge-Center/Nuclear-Statistics/On-Si...

A rough estimate of the low-level waste is 360,000 tons.


It normally is! But. It has catastrophic failure modes. (Yes, pebble beds. Let's talk about realities, not if-onlys.)

Because of those catastrophic failure modes, nobody except governments want to assume the risk. And governments only do it because they have sovereign immunity from those whose interests they're supposedly representing.

I firmly believe that any interested parties who want to go nuclear, should, and reap those rewards. If you can't find an insurer, go find wealthy people who believe in your design to indemnify you.

Just don't pick my pocket to build it and then poison me.


> Because of those catastrophic failure modes, nobody except governments want to assume the risk.

That would be fine if it were a level playing field.

Lots of things have catastrophic failure modes. Dams[1], supertankers[2], chemical plants[3], oil drilling rigs[4], coal mines[5], etc. To say nothing of climate change.

[1] https://en.wikipedia.org/wiki/Banqiao_Dam
[2] https://en.wikipedia.org/wiki/Exxon_Valdez_oil_spill
[3] https://en.wikipedia.org/wiki/Bhopal_disaster
[4] https://en.wikipedia.org/wiki/Deepwater_Horizon_oil_spill
[5] https://en.wikipedia.org/wiki/Benxihu_Colliery

The reason those things have no trouble being built anyway is that they aren't required to carry billions of dollars in insurance coverage to begin with.

And maybe they should, but you can't single out nuclear for a requirement to carry that much insurance and then fault it for not being able to satisfy that requirement when its competitors don't.


> Just don't pick my pocket to build it and then poison me.

Yet that's exactly what we're doing with coal and petroleum.

Let's remove the subsidies for those and then you can get back to me about the cost effectiveness of things like nuclear.


>Yet that's exactly what we're doing with coal and petroleum.

Only coal and petroleum are mined thousands of miles away from me, and even if e.g. there's an oil spill, it won't kill thousands within a hundreds-of-miles radius.


Coal is mined where I grew up and they rip entire mountains apart, throw the dross back, and leak sulfur into the streams for the next 50 years.

In addition, in California, the radioactive sulfur in pollution from the coal burning plants in China SWAMPS the radiation release from Fukushima by several orders of magnitude.

I'm going to stop here, because any relevant adjective I would use to describe people like you would just get me banned.


>I'm going to stop here, because any relevant adjective I would use to describe people like you would just get me banned.

The main word I'd use to describe "people like you", given the above, is rude.

The ad-hominem doesn't add anything to the case. And who would "people like me" be? Anybody that has concerns or might be against nuclear power? Because they are necessarily ignorant luddites, and only those for it are the level-headed ones, right?

Well, nuclear reactors and energy production are not science (the science part is done at the academic level); they are applied technology. And technology mingles with private interests, politics, and bad actors all the time (e.g. constructors who don't install enough safety measures, governments who don't give a shit about global environmental treaties, loonies who might want to blow up reactors or get their hands on the byproducts, human errors, political ass-saving, tons of money to be made, higher profit margins from not properly taking care of byproducts, etc.).

Now to answer the specific points:

"Leak[ing] sulfur into the streams for the next 50 years" doesn't even compare to having to take care of radioactive materials for the next millennia, neither in the extend of time, nor to the potential impact.

Your answer also seems to imply that e.g. uranium mining doesn't have an environmental impact, and it's only coal that "rips entire mountains apart"...

>In addition, in California, the radioactive sulfur in pollution from the coal burning plants in China SWAMPS the radiation release from Fukushima by several orders of magnitude.

All caps "swamps" aside, this would be only relevant if Fukushima was the epitome of nuclear disaster and the "radiation release from Fukushima" was the highest level of tradition release possible (or close).


> this would be only relevant if Fukushima was the epitome of nuclear disaster

Isn't it? I thought what happened at Fukushima was the worst case scenario for a nuclear power plant of its type. What is the worst that could have happened?

Aren't new designs even safer than that?



Clarify please. Are you referring to construction, mining, installation, storage, or just trolling?


Nuclear is very clean. What most people came to hate is the waste, which is surprisingly little (per kW). Also the danger in case of failure, which is massive.


Also, there's literally no way to clean it up in case of an accident, and you have to keep it out of harm's way for thousands of years.

Theoretically, we can put CO2 back in the bottle. Practically, we can do it now, just not with enough efficiency and scale to make it worth it.


We have no practical way to clean CO2 from the atmosphere and we don't need catastrophe for it to be a problem. We're poisoning the planet with real carbon dioxide while we fret about the hypothetical risks of hypothetical nuclear waste.


>hypothetical nuclear waste.

Chernobyl isn't hypothetical. And neither is Fukushima. And those weren't as bad as they could have been.

And we still don't know what to do with all the waste we have.


We could replace coal with nuclear yesterday if the anti-nuclear activists would stand down. Instead, they have fought to continue a status quo whose death toll we start counting at 25,000 people per year lost to black lung [0]. No serial killer or terrorist could dream of effecting mass casualties as efficiently as the proponents of this viewpoint do when they take action that results in the continued and expanding operation of coal power generation, despite an alternative which is actually viable in every respect but their opposition.

Yes, nuclear power has problems. But even if it killed 24,000 people per year, blocking the replacement of coal by nuclear would still be a willful choice to cause the deaths of 1,000 people (it's getting really hard not to say murder).

[0] https://en.wikipedia.org/wiki/Coalworker's_pneumoconiosis


Of course we know what to do with the "waste": Keep it close, it's precious fuel for breeder reactors.


The waste isn't just the spent fuel though - see my other comments...


The total volume of which would fit in an apartment block. That block could be dropped into the Mariana Trench if you really are that paranoid, at a total cost of a few million dollars. (Rough numbers below.)

There is no conceivable way of removing even daily worldwide human carbon waste from the atmosphere for that kind of money.
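
A rough plausibility check on "fits in an apartment block", assuming the spent fuel is stored at roughly the density of uranium dioxide (about 11 t/m^3):

    # Volume of the US spent-fuel inventory quoted upthread.
    spent_fuel_tons = 76_000
    density_t_per_m3 = 10  # UO2 is ~11 t/m3; round down to be conservative

    volume_m3 = spent_fuel_tons / density_t_per_m3
    side_m = volume_m3 ** (1 / 3)
    print(f"{volume_m3:,.0f} m^3 (a cube ~{side_m:.0f} m on a side)")
    # ~7,600 m^3: building-sized, not city-sized (packaging overhead aside)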


72000 metric tons - would that be safe to place together in a single container? http://www.nei.org/Knowledge-Center/Nuclear-Statistics/On-Si...

Sure the physical size of a spent fuel might be 'small' but this is not the only issue.

And the low-level waste, of which there is approx 360,000 tons? Would the Mariana Trench be a good/safe place to put this? Based on what reckoning?


No, you can dump low-level waste under 1m of topsoil and build a hospital or school or kindergarten on top of it. That's why it's called low-level waste.

>> And the low-level waste, of which there is approx 360,000 tons? Would the Mariana Trench be a good/safe place to put this? Based on what reckoning?

https://en.wikipedia.org/wiki/Ocean_disposal_of_radioactive_...


Solar panel production. Currently, panels are mostly produced from scraps of the semiconductor industry, and that source is already close to capacity. The process is energy-inefficient and environmentally unfriendly (silicon tetrachloride is an intermediate stage).

Pushing for more silicon solar panels beyond what's possible as a semiconductor industry byproduct is unsustainable, both economically and environmentally.


Most high purity silicon is now destined for solar PV; PV's "scraps of semiconductor industry" phase was a decade or more ago.

http://www.upi.com/Business_News/Energy-Industry/2007/05/22/...

"In 2006, for the first time, more than half the world's polysilicon was used to produce solar PV cells."

It's true that the intermediates in silicon refining are quite hazardous, but in a well-run production facility those intermediates don't get released to the biosphere. They affect the toxicity of the end product no more than the intermediate use of acetic anhydride in aspirin production, or the intermediate use of uranium hexafluoride in nuclear fuel rod production. There was a famous story in 2008 about a Chinese silicon production facility that was illegally dumping SiCl4, but if you're going to pick the most horrifying Chinese examples you'd think that nothing at all can be made safely.


It is not that it is impossible to run photovoltaic panel production de novo (and it is done at large scale these days, as you have correctly stated). The problem is that it is economically unsustainable and has to be financed by government subsidies (or, alternatively, moved to cheap Chinese factories that disregard environmental costs completely).

Photovoltaics: scalable, green, economically sustainable (choose two).


There was a famous story in 2008 about a Chinese silicon production facility that was illegally dumping SiCl4, but if you're going to pick the most horrifying Chinese examples you'd think that nothing at all can be made safely.

There are some interesting moral gymnastics required there, no? The Chinese lead the world in PV manufacturing right now. Is this PV-revolution necessarily built on dirty manufacturing? Would we still have a PV-revolution if we weren't so accepting of an environmental disaster that takes place in a distant country?


My guess is that there would still be a PV revolution even without the Chinese factories, though the cost drops might have come a bit slower. Costs were dropping at about the same year-over-year rate for decades before China leapt to the top of the PV manufacturing ranks.

Silicon refiners in the US actually had lower production costs than Chinese refiners even with the extra labor and environmental costs in the US. Unfortunately, a few years ago China imposed punitive trade barriers against silicon imported from the US. It was in retaliation for trade barriers the US put up against imports of Chinese solar modules. Until both sides erected their dueling trade barriers, the value of US-to-China silicon exports just about balanced the value of China-to-US modules. It was like a textbook example of comparative advantage. Now Chinese manufacturers get higher priced silicon made with fewer environmental protections, and US buyers get higher priced modules :-/


My impression is that most large-scale clean energy projects go for wind power, indirect solar energy (use sunlight to heat up some medium which then drives turbines or rotors) or bio fuel - with more exotic stuff like geothermic energy where it's applicable.

Solar panels seem to be one of the most inefficient clean energy solutions - so are they actually relevant here?


There is no reason to believe fusion will be cheap. It has the same problem as nuclear power plants: building things is expensive.


Haha what? It doesn't have nearly the same issues. Building a nuke plant is easy, it's the safety measures that make it expensive.


You're talking about Arizona. Hello, solar...


His biggest (current) problem regarding SpaceX is that satellites aren't mass-produced.

If they were, buying a spot on a Falcon or on a Delta would be simple:

TCOFalcon = cost of launch + self-insurance-markup.

TCODelta = cost of launch + self-insurance-markup.

Falcon costs $1233/lb

Delta IV costs $8694/lb.

To compare apples to apples: launching 50,000 lbs (a full Falcon FT payload) would cost $61,650,000 on a Falcon and $434,700,000 on a Delta IV.

A Delta IV failed once out of 33, and the Falcon 9 failed three out of 29.

Therefore, TotalCostOfLaunch = CostOfLaunch + CostOfSatellite * FailureRate.

For TCODelta = TCOFalcon

CostOfLaunchDelta + CostOfSatellite * FailureRateDelta = CostOfLaunchFalcon + CostOfSatellite * FailureRateFalcon.

Plugging in Numbers,

434,700,000 + c * (1/33) = 61,650,000 + c * (3/29)

c * (3/29 - 1/33) = 434,700,000 - 61,650,000

c = 5,100,126,428

Any satellite worth less than five billion dollars (!! that's an _insanely_ expensive single mission) would be cheaper to launch on a Falcon, despite its higher failure rate.

The only problem is that you have to wait for a new satellite.
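
A quick check of that arithmetic, using the quoted prices and failure rates:

    # Break-even satellite value between Delta IV and Falcon 9.
    launch_delta, launch_falcon = 434_700_000, 61_650_000  # $ per 50,000 lb
    fail_delta, fail_falcon = 1 / 33, 3 / 29               # quoted rates

    # launch_delta + c * fail_delta == launch_falcon + c * fail_falcon
    c = (launch_delta - launch_falcon) / (fail_falcon - fail_delta)
    print(f"break-even satellite value: ${c:,.0f}")  # ~$5,100,126,429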


I see it as somewhat of a catch-22, which I think is the point of dogfooding the boosters with SpaceX's own launch demand.

Nobody builds commodity satellites because there's no cost efficient way to launch them (microsats aside), and nobody tries to pioneer more cost efficient but lower reliability launch systems because there's no proven demand.

Your reasoning shows the edge under current economics. But I think the real money will be made once we get to "Well, I could build a second satellite for lower unit cost and have two in orbit." Because when demand shifts to that, suddenly anyone without a cost-efficient launch system to offer gets priced out of that chunk of the market.


There were only 2 failures. There was one partial failure, but the primary mission was successful and the secondary mission was partially successful.

I also think that the Falcon 9 has changed far more during its lifetime compared to the Delta IV, so you would expect some more failures.


My question is how would this affect the market?

Apple will stay Apple. I don't think they'll go anywhere.

The question is Google. If this had happened in 2008, I don't think Android would have taken off anywhere close to the way it did.

But now? On one hand, Android has millions of apps already on the market. On the other hand, Microsoft now has potentially millions of old, existing applications.

I don't think it will make a dent in the phone market. A phone is too commonly used as a handheld rather than as a docked station, and Windows apps are useless there.

On the other hand, it can tank the Android tablet market


I agree on the phone side. How many of those millions of apps are usable from a phone screen, using a phone interface? This sounds like cool technology, but this particular use case sounds very limited use on phones.

However, on the tablet side, it may allow Microsoft to bring down the price of the Surface a bit while still maintaining legacy app compatibility.

My understanding is that Intel caught up with ARM a couple years ago on performance-per-Watt, but how's the idle power consumption of 64-bit Atom processors these days compared to 64-bit ARM offerings? For many consumer use cases, idle power consumption has a bigger impact on battery life than does performance-per-Watt.

I'm a bit sad this news came out after SoftBank bought up my ARM shares, but I'm glad to see more evidence we may yet get the x86 monkey off our back.


A lot of WP fans seem to think that having desktop apps will help it gain a ton of popularity, but I think at this point people would probably like the idea, but it would still be a very niche choice.


My skepticism says it'll benefit HP's competition but, most of all, Microsoft, since whatever they do will be part of Windows and become available to every other Windows OEM.


Google killed its golden egg.

Google lives off the open web. Two out of four of its main products depend on the open web:

1. Search - the less open web there is, the less is searchable by Google. Facebook is a classic example.

2. AdWords/AdSense - Facebook, CNN, and the BBC don't need it. They have their own networks, and they're big enough to offer a "take it or leave it" approach.

(The only two main products not relying on the open web are Google Cloud and Android.)

Google actually had everything set up: a social network (Blogger), a wall (Reader), IM (Google Voice, email).

But they decided that FB was taking over. What did they do? They made "their own FB", which solved no one's problems. Google+ had nothing over FB (except circles, which FB promptly copied), and it killed their old social apps.

Now they're running around as a chicken without a head.


That's a great point. Google should want the blogosphere to exist as a matter of competitive advantage. The higher ups in Google need to listen to this. They can still bring the blog back, and make the internet great again!


As I pointed out in other comments, something like FB can never be truly open for the simple reason that most people really don't want their posts to be searchable by everybody. Facebook is Facebook because you can post silly things on it without worrying if some future employer could find them if they wanted to. Sometimes an information silo can actually be a good thing, I suppose. But perhaps more research into federated social networks could take away this concern.


Theoretically, Google could let you "hide" certain posts if you're not logged in.

The thing that killed blogs is that they're too serious. When signing up you need a title and a subtitle, and the input is optimized for long essays. Facebook is optimized for: sign up, write your name, find friends, and post pictures, videos, and sometimes text.

Really, if Google+ had been a "Blogger Basic" where you sign up, slap on AdSense, and post pics, it could have taken off.


There are/were microblogging platforms that let you write a line or post an image.


Like Twitter?


> Now they're running around as a chicken without a head.

And a massive stampede of advertisers and small businesses blowing their whole marketing spend on said chicken

No worries


>And a massive stampede of advertisers and small businesses blowing their whole marketing spend on said chicken

That's what newspaper execs said in the 90s


well yes, I would be excited to see everyone simultaneously realize they won't get an ROI on their ad spend.

But that isn't the sentiment right now. Discoverability is really bad, and one of the only solutions people can come up with is creepy targeted advertisements.


The fact that, even after applying all this technology and know-how, the ROI on ad spend is a black box with no meaningful metrics, and the spending vectors are based on sentiment tells me that advertising is not a safe long term foundation for revenue. At some point, someone is going to have to prove that the money is well spent.


"I think if we could manage to analyze that expenditure of money we would find that a vast percentage of it, probably one-half, is entirely wasted." - Robert Ogden, 1898.

Yet it's still going, more than a century later.


I don't disagree with your comment, but wouldn't you agree that the last thirty years have introduced an unprecedented technology environment for tracking ROI on advertising? It seems clear that a lack of incentives, not abilities, is what keeps this racket afloat. For that reason, I'm not so certain that it will be allowed to continue forever. Once advertisers get a taste of real ROI metrics, they'll drop the hand-waving platforms overnight. Biggest market opportunity of our age?


As someone from outside the ad industry, what are you referring to? Just TV brand advertising, or something else?


Or in other words, "Nobody was ever fired for suggesting they buy from IBM."

In the absence of alternatives, marketing money goes to advertising.


Mathematically I don't know if it is possible to eliminate all returns from ad spend market wide. Then again, negative interest rates should have been impossible.

Every trend I see points to greater ad reach. AR + automated transportation means the web expands from your phone to your surrounding. Regulatory intervention could just firewall behavioral targeting and e-commerce in to existing platforms, making Google, Facebook, and Amazon far more powerful.


More and more people are self-selecting into Facebook, and the advertisers follow. Google could fight it, but they would probably lose. There are an order of magnitude more people capable of and willing to use FB and Twitter compared to Blogger.


More and more are self-selecting into Facebook because it's where everyone is.

Everyone was on blogs ten years ago. Had Google kept up their product, people wouldn't need to leave.


I think that most of the people that are on facebook now were not "on the internet" at all ten years ago. Maybe as consumers, but not as content producers.

One could of course argue the value added by a typical facebook post, but let's not pretend that tens of millions of people were setting up blogs to keep their grandparents updated with pictures of their grandchildren.

Blogs are still here, I follow many of them and RSS is alive and kicking. Most people that had blogs before and produced long-form content are still there. The only problem is that the advertising well might go dry.


Interesting comments but you seem to have an outdated view of the company. Google is an incredible advertising machine that is far from running around "without a head."

They have an immense amount of data, easily matching and beating Facebook, with control at every layer including Android, Chrome, Google Analytics, Maps, Gmail and more, while also running the adtech infrastructure for 90% of the web. Publishers like CNN are not big enough to have their own networks and are constantly fighting a losing battle where Google is controlling ever more of the adtech supply chain. Even NYTimes runs Google's ad stack.

Sites today are really only left with custom executions using their production talents which is seen in the rise of sponsored/branded content, and Google is already making inroads there.

Search will always exist and always be massive, just as the open web will always exist and continues to grow. There will not be a consolidation into a single walled-garden, what you're seeing with Facebook is just another cycle that was repeated in the past with AOL and others. And search is still one of the best performing channels and will continue to get a majority of ad dollars online.

Google is also ramping up their Cloud Platform and have already overtaken AWS in some areas. Cloud computing stands to be an even bigger revenue source than their entire current ad business so they are well poised for the future.

I wouldn't underestimate this company anytime soon. They might have missed social (although not as much as you think, see Youtube) but there is plenty of opportunity out there.


I don't understand how Blogger was a social network in the way FB is?


That's the idea here. If all your friends had blogs, they could easily create/share content there the way they do on FB now. You could follow them in Google Reader, or any other RSS reader. Moreover, you could add content from news sites, most of which still offer RSS feeds.


But what if you want to share stuff only with friends, and not with the rest of the world?


True, that is not really covered in this scenario. Except Blogger gives you the option of whether or not to list your blog on search engines. I have a private blog from before the FB era that is not listed, and it does not show up in Google searches for my name. Although of course, I would not put any confidential information on there.


Well, of course it should allow my friends to search my blog. Just not the rest of the world.

But I guess that if Blogger were "open", so that other search engines could also index those blogs, then this whole scenario would not work (malicious search engines could expose everything to everybody). So my suspicion is that it would actually be quite hard (or at least require more research) to make an "open" version of Facebook.


The original post misses the visibility point.

To me, Twitter benefits from being public discourse and an open forum for replies. On the other side of the coin, Facebook benefits from being a private forum (as most people use it).

Solutions that don't facilitate these use cases aren't going to be successful. Understand why people chose the services they did (discounting network effect) and then try and build something to compete.


And what if what you really wanted to share was a photo, or a comment, or an article from NYT or Vogue, and not a long piece of writing? Isn't that what most of Facebook is used for?


On Facebook, one at least has some protection against outsiders searching for their name and finding one's "silly" (careless) postings. If all posts were open, no such protection would exist. I think this could be a real barrier for widespread adoption of this approach.


A blog is a personal log on the web ("web log") with comments, and the core functionality FB offers is just that, plus a simple-to-set-up homepage? Blogs have a blogroll instead of a friend list - does that make a big difference? Seriously asking, I don't use the latter.


All it lacked was a built-in phpBB forum (FB groups).


Well the interface and way of sharing and communicating with people is totally different, so I am surprised to see them compared.


>the way of communicating with people is totally different

lol u srs? j/k <3


At one point, Blogger was as much a social network as Tumblr is now. At least, I think I remember that; it was about a decade ago now.


I think you mean that Google killed the goose that laid the golden eggs. But then it turns into a chicken with no head. Make your mind up, man!


>as in, bring in designers

That's the mistake. You bring in UX experts (who also happen to understand your primary users).

Graphics design != UX


> who also happen to understand your primary users

That would be the hard part...


And in Java, and in any language with operator overloading


Operator overloading and metaprogramming are very nice, but they can add a lot of complexity.


Yes, but you can have both static typing and ease of use.


The problem is that it doesn't help.

Commercial open source tends to be done for two reasons:

1. Common-goal development. Several companies need a UNIX. Rather than each doing all the development themselves, they work together. That's how Linux works.

2. A backup in case the original company's strategic goals drift away from yours.

Here, neither works, as you can't cooperate with others due to licensing issues.


If the license doesn't allow applying or distributing third party patches I would be annoyed. If it allows it provided each party has paid for a license I would be happy.


It says right here, https://supportedsource.org/definition "Modifications. The license must permit modifications to the source code."

But you also can't just take parts of the code and redistribute it for free, because "Restricted Uses. The license should include restricted uses, such as disallowing resale or sublicensing."

The point isn't to prevent people from customizing their use of the software. The point is to create a system where developers get paid and projects are financially sustainable, instead of being abandoned or barely maintained.


> The point isn't to prevent people from customizing their use of the software. The point is to create a system where developers get paid and projects are financially sustainable, instead of being abandoned or barely maintained.

The problem is that's actually one of the main powers of Open Source - the ability to fork it past the desires of upstream.

For example, I use Windows, which is "supported source". OK. Microsoft decides to put in spyware. Now I need to rip it out. OK. Did so.

Now, one day later, Windows gets updated.

I have to go through the code again.

If it were "Open Source", I'd just fork it. But now, I can't even share my modifications (is it a derivative work?). Now every user would have to go through the code, find the privacy violations, and re-compile it.

Also, what happens when Microsoft gets fed up and fully closes the source (no more updates to "Supported Source", and they drop out of the program)? Each user has to keep up his own version of Windows?

Not too useful.

If you think about it, how are there commercial communities around Apache/MIT/BSD?

Because no one wants to maintain his own fork, so they contribute code back so others can help maintain it.

That freedom can only be maintained by the ability to fork.


They're not copying Google. They're taking the worst of both worlds.

Windows isn't free. It just happens to come with the cost of your PC, but the PC manufacturer pays for it (and passes the cost to you).

They just noticed that no one upgrades Windows (and why should they? When was the last time you were excited about a Windows release? XP? 95? 3.1? Maybe 7, if you were coming from Vista?)


I have been using Linux for 13 years.

I fail to see the worst?

I personally was VERY excited for Windows 7 and it proved to be a very solid edition. VERY VERY excited for Ubuntu bash in Windows 10 in the preview edition.

> They're not copying Google. They're taking the worst of both worlds.

Google's model is advertising

Apple's is Hardware

Microsoft is selling services more.

I find it really awesome that Microsoft has turned into a company I have a positive view of, and I'm glad that Steve Ballmer is gone.


> Microsoft is selling services more.

Maybe, but what I'm saying is that:

Apple costs, is closed source, but (somewhat) cares about privacy. Google is free, (somewhat) open source, but actively doesn't care about privacy. Windows costs, is closed source, and actively doesn't care about privacy.

In other words, with Google you're not the customer, you're the product. With Microsoft you're the product and you have to pay for the privilege to be a product.


I have a strong negative bias vs Apple and everything they make.

One thing: Apple is not a closed-source company. Though it is a mixed bag, they have made some good contributions to the open source world. Though their Walled Garden is YUGE.

https://developer.apple.com/opensource/

* Bonjour

* Webkit

* Swift


I think they are the biggest driver behind LLVM projects (LLVM/Clang/LLDB) as well which is a huge value to OSS too.


Though those contributions aren't strategic.

As of now, most of Android's APIs are in AOSP. How many of Apple's APIs are?


Microsoft is a lot more than just Windows.


Microsoft doesn't care about privacy? They stood with Apple against the FBI, you know. They are more reliably data-protective than Amazon or Google in their agreements. They don't know what your data on the cloud is, for example. Amazon stole Target's data and used it against them.


> I personally was VERY excited for Windows 7 and it proved to be a very solid edition. VERY VERY excited for Ubuntu bash in Windows 10 in the preview edition.

You can't compare the need for Windows 7 vs XP with the need for 95 after 3.1. There was a huge amount of software which simply required 95.


Windows 98SE was loved and slowly dropped for XP.

One word about Windows 7

Vista

