There's a lot of this sort of discussion in the strong artificial intelligence/transhumanism crowd. The reasoning goes:
* The threat (and possible upside) of strong AI is too great to ignore.
* We should be spending as much effort on understanding and developing friendly strong AI as we can, to outcompete any efforts that will produce a malevolent strong AI (whether intentionally or not).
* Ergo, everyone should maximise their gross income by becoming a high-flying lawyer, derivatives trader, or similar, live off a minimal income and give the rest to a strong AI research group.
Which is fine, except that the economy is, by nature, a huge web of networked interactions. The ways in which those interactions are linked are probably very hard to discern.
Which is to say that I think the kind of redistribution of wealth to good causes that people are calling for is worthwhile, but only to an extent. It's the problem that utilitarianism has - it requires perfect knowledge of the outcomes of the various choices you could make.
As others have said, the thing you develop might prove useful in treating some disease. Or the skills you pick up while developing it might lead you down a more 'socially responsible' path. Or the thing you make might produce utility for someone else and lead them to contribute to an important cause (e.g. the entrepreneur who sets up a coffee shop in town isn't curing cancer, but the people who now have a place to gather and collaborate might).