
I think the explanation is satisfactory, but how often, if at all, do other components of such projects change, up until liftoff? I imagine engines and other propulsion parts won't be upgraded much after the original spec, even if such components remotely followed the same improvement curve as digital chips do. But what about other types of sensors used by the rover? What is the most recent piece of tech used by the rover?

In any case, it's kind of surprising that by 2004, NASA engineers wouldn't have proposed a solution that anticipated vast improvements in digital sensor technology, so that something, in 2009-2010, could be "dropped in" (relatively speaking, not literally) as a replacement.

Of course such a design feature is going to take way more planning and resources than it would for the holiday consumer camera lineup...but a) this is NASA, some of the best of the best engineers. And b) while panning-and-stitching is always a solution, doesn't that have additional operational risk of its own? Additional panning requires additional mechanical movement and attention to moving parts.




Last-minute changes are made. Or at the very least I know of one such change in the case of Curiosity.

The dust covers over the hazcams (the hazard avoidance cameras on the belly of Curiosity) were added at the last minute. Here is the engineer who implemented them writing about the covers: http://forum.nasaspaceflight.com/index.php?topic=29612.msg93...

Basically, the Phoenix lander (which landed on May 25th, 2008) kicked up more dust than expected. They were worried and did a review for Curiosity, but found that only the hazcams (because of their location on the belly) were in danger, so they decided to add dust covers to them (and also kept the covers transparent to see whether there really was a dust problem – as you could see from the first photos with the covers still attached, there very clearly was).

My guess is that stuff gets changed and updated a) if there is money and resources left or b) if the mission is in danger if you don’t change something.

The 2MP sensor is very clearly good enough. Any update in resolution would give you diminishing returns – so something like that gets pushed back.


It's all about risk. It's not about "leftover money".

Cost, schedule, risk. They are the fundamental resources for a project like this. Cost and schedule are more familiar than risk.


So if it’s about cost it’s about leftover money. That’s one and the same. (One just doesn’t sound as professional as the other.)


You offered two possible explanations for the use of a 2MP camera, (a) money, (b) risk. I was saying that, in this case, it did not really have to do with money.

My second sentence above was trying to point out that many people underestimate how important risk is to a space mission.


Oh, no, I wasn’t claiming that this change had anything to do with money. In this case it was clearly related to risk. I was just trying to make a general point.


Thanks, understood.


Yeesh, I don't even remember the Phoenix lander. Kinda crazy that there have been enough to start losing track of them.


It's the one that found water ice on Mars.


Personally I'd be very wary of that. As I said in another comment, this is a planned two-year mission. So if they drop in a compatible component and later find out there was a manufacturing problem that shortens its life, that's not acceptable. I think NASA's very failure-avoidant, and I really can't blame them, because their mistakes are extremely expensive and occasionally deadly.

The alternative they chose, it sounds like, was extreme caution, using well-tested components at the possible expense of image quality. It's Good Enough.

And like others have said, macroscopic images aren't the sole or even primary purpose of this mission. At this point, it's just whiz-bang PR, since Spirit and Opportunity got enough pictures to last a lifetime. The secret sauce here is the spectrum analyzers, close-up camera, rock-vaporizing laser, etc. THAT'S the important stuff, scientifically.


As for panning and stitching, don't they already want to be able to point the camera in different directions?

And sensors -- they're not just looking for fancy new imaging sensors, they're looking for well-tested, radiation-tolerant sensors that can handle a range of temperatures. And then you need to redesign the rest of the circuit around it to handle more data -- all the chips driving the fancy imaging chip would have to be well-tested, radiation-tolerant chips that handle a range of temperatures.

The risk here is "use tech that's 8 years old" or "increase the chance that something goes wrong on a $2.5 billion project".


Based on history, NASA tends to improve missions via software rather than hardware. For example, stitching lower-resolution images together rather than developing a higher-pixel-count camera.
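To make the software-mosaicking idea concrete, here is a minimal sketch using OpenCV's high-level Stitcher API in Python. It's purely illustrative – the filenames and frame count are made up, and JPL's actual pipeline does its own calibrated projection and radiometric work – but it shows how several overlapping low-resolution frames from a pan can be combined into one much larger image:

    # Minimal mosaic sketch (illustrative only, not the JPL pipeline).
    import cv2

    # Hypothetical filenames for a set of overlapping frames from a camera pan.
    frames = [cv2.imread(f"frame_{i:02d}.png") for i in range(8)]

    # OpenCV's high-level stitcher handles feature matching, alignment and blending.
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, mosaic = stitcher.stitch(frames)

    if status == cv2.Stitcher_OK:
        # One large image assembled from many small (e.g. 2MP) frames.
        cv2.imwrite("mosaic.png", mosaic)
    else:
        print("stitching failed, status:", status)

The point is that the extra resolution comes from taking more exposures and doing the work on the ground, rather than flying a bigger, less-proven sensor.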

Getting a camera there is far more important than its spec sheet, and given that the lifetime of nuclear-powered instruments can exceed 30 years (e.g. Pioneer and Voyager), any over-achieving mission is going to be dependent on obsolescent hardware for a very long time. When Curiosity was developed, even the choice of a four-megapixel camera would still seem quaint by today's consumer expectations.


They tried this idea with the zoom lenses:

"In early 2010, NASA reconsidered the VFL [zoom lens] cameras and work resumed on assembling these cameras, which will replace the FFL cameras described here if the work is completed in time and the instruments meet their requirements."

http://msl-scicorner.jpl.nasa.gov/Instruments/Mastcam/



