The hash chain doesn't contain the data, only a hash of the data. So the original article can still be altered, and the hash chain would only prove that it had been changed. I believe nothing in these "right to be forgotten" laws forbids noting that an article has been edited to remove names.
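For what it's worth, a minimal sketch of what such an entry might look like (Python, with made-up function and field names, and SHA-256 assumed as the hash): the chain stores only digests, so a later edit is detectable, but the original text is never exposed or recoverable from the chain.

```python
import hashlib

def chain_append(prev_entry_hash: str, article_text: str) -> dict:
    """Record a chain entry holding only a digest of the article.

    Re-hashing today's copy of the article and comparing it to
    content_hash proves whether the article changed since this entry
    was made; nothing in the entry reveals the original text.
    """
    content_hash = hashlib.sha256(article_text.encode("utf-8")).hexdigest()
    entry_hash = hashlib.sha256(
        (prev_entry_hash + content_hash).encode("utf-8")
    ).hexdigest()
    return {
        "prev": prev_entry_hash,        # links this entry to the chain
        "content_hash": content_hash,   # digest of the article only
        "entry_hash": entry_hash,       # digest over the entry itself
    }
```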
An interesting alternative would be to hash "chunks" of the original article so that future verification could be applied to particular parts of the content. Imagine you hashed every 32 bytes: you could then determine which chunks changed at what times, without revealing the plaintext content.
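Something like this rough sketch (Python, SHA-256 and the helper names are just assumptions for illustration):

```python
import hashlib

CHUNK_SIZE = 32  # bytes, per the example above

def chunk_hashes(article: bytes, chunk_size: int = CHUNK_SIZE) -> list[str]:
    """Hash the article in fixed-size chunks.

    Comparing two lists of digests shows which byte ranges differ
    between versions, without exposing the plaintext of either one.
    """
    return [
        hashlib.sha256(article[i:i + chunk_size]).hexdigest()
        for i in range(0, len(article), chunk_size)
    ]

def changed_chunks(old: list[str], new: list[str]) -> list[int]:
    """Indices of chunks whose digests differ, or that were added/removed."""
    length = max(len(old), len(new))
    return [
        i for i in range(length)
        if i >= len(old) or i >= len(new) or old[i] != new[i]
    ]
```

One catch with fixed-size chunks: an insertion early in the article shifts every later chunk boundary, so everything downstream reads as "changed" — which is partly why hashing structural units, as suggested below, might make more sense.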
The question of how to identify large, complex works of potentially variable form (markup or editing format, HTML, PDF, ePub, .mobi, etc.), such that changes and correspondences can be noted, is something I've been kicking around.
Chunk-based hashing could work. Mapping the chunks to document structure (paragraphs, sentences, chapters, ...) might make more sense.
Yeah, that's an interesting question: how to parse the content into meaningful pieces and then hash them in such a way that the content itself is not revealed, but each hash can be mapped to where it sat in the document at an earlier time.
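A rough sketch of that idea, assuming paragraphs as the structural unit and SHA-256 as the hash (splitting on blank lines is just a stand-in for real structural parsing):

```python
import hashlib

def paragraph_index(article: str) -> dict[str, list[int]]:
    """Hash each paragraph and map the digest back to its position(s).

    The published index contains only digests, but anyone holding an
    earlier copy can recompute a paragraph's digest and look up where
    it sat in that version of the document.
    """
    paragraphs = [p.strip() for p in article.split("\n\n") if p.strip()]
    index: dict[str, list[int]] = {}
    for pos, para in enumerate(paragraphs):
        digest = hashlib.sha256(para.encode("utf-8")).hexdigest()
        index.setdefault(digest, []).append(pos)
    return index
```

Comparing the indexes of two versions would then show which paragraphs were removed, added, or merely moved, without either version's text being published.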
Keep in mind that at the scale of a large work, some level of looseness may be sufficient. Identity and integrity are distinct: the former is largely based on metadata (author, title, publication date, assigned identifiers such as ISBN, OCLC, DOI, etc.). Integrity is largely presumed unless challenged.
As it pertains to private citizens, I would not recommend something like this for archiving or verifying their personal data. But for government records, campaign records, etc., I would think those laws do not apply to that information.
They will be as effective as anti-piracy law would be if pirates were paid to seed. At best they will prevent respectable publications from directly using distributed archives as a source.