> These kinds of technologies, like AI models, are fundamentally “dual use”.
It is certainly true that technologies can be used for good and evil. But that doesn’t mean good and evil benefit equally in practice. “Dual use” implies a more or less even split, but what about a 10/90 or even 1/99 good/bad split? Technology, at its core, makes certain tasks easier or harder to accomplish, and beyond asserting dual use, the article doesn’t really justify the claim that AI models cut equally both ways.
In the Soviet Union, a large fraction of the population was enlisted in surveillance. The U.S. surveilled its citizens too, but far less: technological limitations made surveilling every person prohibitively expensive. Police simply couldn’t watch everyone.
Today, surveillance is not only ubiquitous but more effective: it is possible to track millions of people in near real time. Technology has driven down the cost of mass surveillance and driven up its scalability, which, in conjunction with the third-party doctrine (read: loophole), has the emergent effect of neutering the Fourth Amendment.
What makes regulation hard, perhaps impossible, is anticipating a technology’s likely applications in advance, which is why I lean towards not regulating. However, we should recognize the moral hazard here: by shielding industry from certain consequences of its actions, we may make those consequences more likely in the future.
> The general purpose computation capabilities of AI models, like these other technologies, is not amenable to control.
Sure. And we can’t stop people from posting copyrighted material online, but we can hold them accountable for distributing it. The question in my mind is whether we will get something like Section 230 for these models, shielding large distributors from first-pass liability. I don’t know how that would work, though.