In theory that makes the language models auditable and tamper-proof. I'm not so sure about the supposed benefit of that, though. Yes, it means the model itself can't be tampered with (to introduce bias into the summaries, for instance), but as long as the algorithm remains closed source you could still skew the results, for example by boosting some values while weighting others less.
Simply publishing both the algorithm and the model as open source, along with an SHA-2 hash to verify that neither has been tampered with, would achieve far more in terms of reproducibility and trustworthiness.
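A minimal sketch of what that check could look like, assuming the model and algorithm are distributed as ordinary files next to their published hashes (the file name and published digest below are hypothetical):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: compare against the digest the authors published.
# published = "..."  # taken from the project's release notes
# assert sha256_of_file("model.bin") == published, "model was tampered with"
```

Anyone can rerun this against the released files, which is exactly the kind of independent verification a closed-source setup can't offer.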
Then again, they would've had one less buzzword in that case ...
Right, but you could also commit changes to your model to a Git repo and rely on the "blockchain" that Git already provides.
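Git really is a hash chain: every object is named by the hash of its contents, and every commit embeds its parent's hash, so rewriting history changes all downstream IDs. A small sketch reproducing how classic (SHA-1 based) Git names a file, i.e. what `git hash-object` computes for a blob:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    """Reproduce `git hash-object`: SHA-1 over a 'blob <size>\\0' header
    followed by the file's raw bytes."""
    header = b"blob " + str(len(content)).encode() + b"\x00"
    return hashlib.sha1(header + content).hexdigest()

# The empty file's well-known Git object ID:
print(git_blob_hash(b""))  # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

Because the commit hash transitively covers every file in the tree, publishing a commit ID already pins the model's exact bytes, no extra blockchain required.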
When people say "blockchain", distributed consensus comes into the picture. In this case there is no need for distributed consensus on ordering or anything else.
But if you're suggesting that many text parsers train the model and a central model is held by the network state, sure. I don't see the benefit of that, though, as I don't think simply training on more text will allow this bot to produce better summaries.