It's not going to happen if we just sit around and hope for it, but since the current model for supporting creators is failing so badly, it seems likely that if somebody can get it even halfway right, their system would have a huge advantage. Halfway right, in my view, would avoid most of these:
- Incentivize the creation of technology that does more harm than good (e.g. DRM).
- Create legal constructs that are later used for censorship.
- Require that artists share profits with lawyers.
- Require artists to focus mostly on stuff that's not their art.
And would achieve some of these:
- Citing sources is impactful: the same graph structure that determines trustworthiness also determines payment, or credit, or warm fuzzy feelings, or whatever the relevant good thing is.
- Has a culture of rewarding (and scrutinizing) curators, such that successful curators only endorse content that is fair about how it defers to its sources.
- Supports inheritance such that making derivative works that credit their parent is easy.
- Treats transport and attribution separately so that I can work with the data via whatever tool scratches the itch (e.g. rsync, and not some janky website).
So yes, I do think it's possible. I'm working on tooling for this imagined ecosystem. I want to use CTPH hashes (context-triggered piecewise hashing, the fuzzy-hashing tech used by virus scanners) to annotate bitstreams with metadata about trustworthiness. What I don't think is possible is to take an AI's output and map it backwards to annotations of this type in the training data, but I'm hoping that some AI wizard comes along and shows me that I'm wrong about this.
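To make the CTPH idea concrete, here's a rough sketch of the kind of tooling I mean, using the Python `ssdeep` bindings. The metadata schema, field names, file names, and similarity threshold are just illustrative assumptions, not a settled design:

```python
# Sketch: annotate a bitstream with a CTPH (fuzzy) hash plus attribution
# metadata, then match unknown data against those annotations by similarity.
# Assumes the `ssdeep` Python bindings; the record format is hypothetical.
import json
import ssdeep


def annotate(data: bytes, source: str, license: str) -> dict:
    """Produce a metadata record keyed by the data's fuzzy hash."""
    return {
        "ctph": ssdeep.hash(data),  # context-triggered piecewise hash of the bytes
        "source": source,           # who/what this bitstream derives from
        "license": license,
    }


def match(data: bytes, annotations: list[dict], threshold: int = 60) -> list[dict]:
    """Return annotations whose fuzzy hash is similar to this data.

    ssdeep.compare() yields a 0-100 score; lightly edited or partial
    copies typically still score well above zero."""
    h = ssdeep.hash(data)
    return [a for a in annotations if ssdeep.compare(h, a["ctph"]) >= threshold]


if __name__ == "__main__":
    original = open("track.flac", "rb").read()
    records = [annotate(original, source="alice/track.flac", license="CC-BY-4.0")]
    # A remix or partial copy should still match the original's annotation.
    remix = original[: len(original) // 2] + b"new material"
    print(json.dumps(match(remix, records), indent=2))
```

The point of the fuzzy hash (rather than, say, SHA-256) is that derivative works which share most of their bytes with a parent still match its annotation, which is what makes the inheritance and attribution story above plausible at all.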
Do you believe the reason the current model is failing creators is a lack of good technological tools? Because I personally believe the issues are more anthropological than technological, which is precisely why I don't have much hope.
Better tools can probably improve the situation; I can't say for certain because I've never dived into this space, but I don't think they'll solve all the issues.
Yes. We're tool-using primates; that quip about having only a hammer and seeing only nails is really descriptive of the human condition. There's an excellent Radiolab episode which argues that cultures don't develop a word for the color blue until they can make blue dye (and the implication is that they don't even perceive it before that).
Abstractions like "value" or "property" arose organically, and if we didn't have this fixation on tools they'd likely have kept changing organically... But we made tools for working with those abstractions, and now we live in a world shaped by those tools, and that has created a sort of inertia for the old way of doing things.
It's kind of like how English spellings stopped changing when the printing press was invented, so now we have wacky spellings like "through" that we would've moved on from had that not happened at that particular time (see: the Great Vowel Shift).
The historical circumstances around the creation of the printing press are what gave us our notion of intellectual property to begin with, and I think it'll remain more or less unchanged until some other technology forces it to change.
It's incredibly difficult to visualize a different way in our current setting, and it's especially difficult to get paid to work towards it, but I think that pretty much any change is possible given some MVP toolset that makes it doable and some critical mass of people willing to give it a try.