
Wishing that the landing page had a clearer explanation of how Solid lets the owner of the data control who - both people and applications - can access it, as well as what the process is for moving one's data from one store to another.

This project looks like it has huge potential, I just want to be able to understand at a glance what privacy controls are in place without reading the source code before committing my personal information.

EDIT:

Found the specifications documentation linked to from some of the example apps[0][1]. It would be great to have more of this information on the landing page for the project.

[0] https://github.com/solid/solid-spec

[1] https://github.com/solid/solid/blob/master/README.md




Reading the landing page left me scratching my head too. Great lofty goals and all, but zero information on _how._ The fact that you have to follow links from the examples to find it kind of makes it a failure.


Exactly. "Solid (derived from "social linked data") is a proposed set of conventions and tools for building decentralized social applications based on Linked Data principles. Solid is modular and extensible and it relies as much as possible on existing W3C standards and protocols." That sounds like a pitch from some ICO. The site has the buzzwords. It's got the neckbeards. What it doesn't have is a convincing use case. It comes across as some really complicated scheme for address book synchronization.

They have three sample applications, yet all you can click on is somebody's blog entry. Clicking on the "publishing" app gets you a screenshot of the abstract of someone's paper. It's a single page web site, like all the cool kids have now. Clicking on the top menu items just scrolls the page.

It has MIT and Berners-Lee behind it, so it can't be totally bogus. If those names weren't on this, I'd assume it was from someone either clueless or crooked.

There's a decent description of Solid on Github.[1] From there, you can see the real problem. It's only useful if the big players adopt it. Which they won't, because it breaks their walled gardens.

This looks like another try at Berners-Lee's "semantic web" - hammer as much content as possible into standard formats so it can be machine processed. It's an old idea, and it tends to break down once you get beyond contact lists and library catalogs.

There have been major efforts to make that work in the business sector, where it's called "electronic data interchange", and parties want to exchange purchase orders, invoices, and bills of lading.[2] It's worth looking at that area to see how hard this is for even simple-seeming problems like that. And they have cooperation - buyer, seller, and shipper all want that data to flow smoothly between the parties. Trying to do this in today's world of competing closed web empires is much tougher.
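
To get a feel for it, here's roughly the shape of a minimal X12 850 purchase order (values invented, details simplified):

    ST*850*0001~
    BEG*00*NE*PO-1001**20180306~
    PO1*1*48*EA*9.50**VP*WIDGET-7~
    CTT*1~
    SE*5*0001~

That's a header, the order metadata, one line item (48 units at $9.50, identified by vendor part number), a totals check, and a trailer. Every element position is fixed by the standard, and trading partners still exchange thick "companion guides" to pin down the details.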

The medical data records people have it even worse, I hear.

[1] https://github.com/solid/solid-tutorial-intro

[2] https://www.edibasics.com/what-is-edi/


My experience in companies is that data exchange, data reuse, and reference data are a nightmare. My mental model of a company is an insane amount of customized ETL on top of web services, obscure IDs in each data/service silo representing the same thing, and poorly (un-)documented exchange formats that require massive boilerplate code. I think of it as the middle ages. I have been a huge fan of Linked Open Data, and its lack of adoption profoundly disturbs me. I still don't understand how people can use Google's Knowledge Graph daily, and yet discard the need for something similar inside companies, or widely available as open data.
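
To make the silo problem concrete (IDs invented): the same customer might be cust_000482 in the CRM, ACCT-88213 in billing, and u:9f3a7c in the support tool, with a hand-written ETL job for every pair of systems. The Linked Data answer is a single shared, dereferenceable identifier, e.g. in Turtle:

    @prefix schema: <https://schema.org/> .

    <https://data.example.com/customer/482> a schema:Person ;
        schema:name "Alice Example" .

so every silo points at one URI instead of re-minting its own ID.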


If machines can read, why does data need to be machine-readable? Seems like a transient problem.


For the same reason an organization will ask you to fill in a standard form page (even on paper) rather than writing a long form essay.


Why do they do that, again? I'm not arguing against summaries. Just saying that if machines can read as well or better than humans, then machine-readable in the sense of using a simplified ontology, grammar, or alphabet is unnecessary.


My point is that even humans make fewer mistakes when they're reading an established form structure with pre-defined fields over long-form text. Hence you'd expect a machine to process them better as well, even if it could read like a human.


Because even if you can read you need context, or you need to be able to identify what a new word means. Reusing a shared vocabulary is useful.
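
For example, a minimal JSON-LD sketch reusing the schema.org vocabulary (values invented):

    {
      "@context": "https://schema.org",
      "@type": "Person",
      "name": "Alice Example",
      "knows": { "@id": "https://bob.example/profile#me" }
    }

Any consumer that already knows what schema.org means by "Person" and "knows" can process this without guessing at the fields.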


I don't understand. If you already agree on a shared vocabulary, then you're not using new words. Context is more present in the long-form un-pre-processed text than in the short-form text.


SOLID is a spec that TBL & co. (like Dmitri Zagidulin) have worked on. It is quite good and based on Linked Data. Most of the work happened a year or so back, but I think it is only now starting to be marketed - which means TBL thinks it is ready for people to implement.

We wound up implementing something very similar, about the same time (I wish I had known about SOLID earlier), except using CRDTs and graphs (which can support Linked Data). Our implementation is live and functioning already, including:

- Realtime updates across a decentralized network.

- End-to-end encryption with P2P identities.

- Backs up to localStorage/disk (if you Electron-ify it, etc.).

- Can also be backed up by remote storage services.

It is as if IPFS and Firebase had a love child, and we've tried really hard to reduce the API to just a couple lines of code to get fully working P2P social networking dApps in place.
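
To give a flavor (hypothetical names, not our actual API - see the links below for the real thing):

    // join the P2P network via any reachable peer
    const db = connect(["https://some-peer.example/relay"]);

    // write; replicates to subscribed peers
    db.get("profile").put({ name: "alice" });

    // subscribe to realtime updates from the network
    db.get("profile").on((data) => render(data));

where connect() and render() are placeholders.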

Check out:

- Intro http://hackernoon.com/so-you-want-to-build-a-p2p-twitter-wit...

- 4min interactive coding tutorial https://scrimba.com/c/c2gBgt4


I know nothing about the subject: how does file removal work in decentralized networks? I understand that once something is on the internet it's out of your control, but what if I accidentally post a naked picture while trying to sell something on decentralized-bay and try to delete it immediately, or a few minutes after? What about illegal content that I end up redistributing without my knowledge?


Um... it is hard. Really hard. Let us just hope you don't have WiFi turned on, and can revert your changes first!

In our case, we have a tombstone delete method. So as long as every peer that saved your data (and people aren't gonna just save it for free) comes online at some point, after you've nulled/tombstoned the data, then it will be deleted.

This is only possible because our CRDT system lets you update/mutate data. It is not a general property of other decentralized systems, though.
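
A minimal sketch of the tombstone idea (not our actual code): deletion is just another update whose value is a tombstone, so the normal CRDT merge rule propagates it like any other write.

    // last-write-wins register: the entry with the highest clock wins
    type Entry = { value: unknown; clock: number };
    const store = new Map<string, Entry>();

    function merge(key: string, incoming: Entry): void {
      const current = store.get(key);
      if (!current || incoming.clock > current.clock) {
        store.set(key, incoming);
      }
    }

    // "delete" = write a null tombstone; any peer that later syncs
    // this entry converges on null, overwriting the old data
    function tombstone(key: string, clock: number): void {
      merge(key, { value: null, clock });
    }

The catch is exactly the one above: a peer that never syncs again keeps its old copy forever.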


We have to learn to treat all the content we post like the words we speak publicly. If it's out, it can't be taken back.



> huge potential

I was about to say "but there's no information" but then you added the Github links. Thanks!

I don't see why that's not referenced on the main page. I mean, it might be a marketing tool to get funding, but it still needs links to some prototypes for the technical audience (even a little GitHub cat logo pointing to the repo would have been good).

Even a non-tech audience would understand a link to "see the spec" and know what GitHub is, even if they wouldn't be able to understand any of what's there.


Personal robots.txt-style disallow rules that platforms can choose to honor?
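
Something like this, maybe (format entirely made up):

    # /.well-known/reuse-policy.txt
    User-agent: *
    Disallow: /photos/
    Disallow: /posts/private/
    Allow: /public/

Like robots.txt itself, it would be purely advisory: honest platforms honor it, bad actors ignore it.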


The fact that they're still using RSA makes me want to avoid it: https://github.com/solid/solid/blob/master/proposals/auth-we...


Last Updated Feb 2016.

I'm sure when they come back to this they'll change the SHA1 to SHA256 (I'd hope)... I might just do that myself and submit a pull request.

What's wrong with RSA? DSA has been deprecated in most tools. Do you think they should use ECDSA?


I think they should use EdDSA for signatures and X25519 or X448 for ECDH. RFC 7748 / RFC 8032.
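
For what it's worth, recent Node ships both natively, so a sketch of the suggested primitives is just (diffieHellman needs Node >= 13.9):

    import { generateKeyPairSync, sign, verify, diffieHellman } from "crypto";

    // Ed25519 signatures (RFC 8032); the algorithm arg must be null for EdDSA
    const { publicKey, privateKey } = generateKeyPairSync("ed25519");
    const msg = Buffer.from("hello");
    const sig = sign(null, msg, privateKey);
    verify(null, msg, publicKey, sig); // true

    // X25519 key agreement (RFC 7748): both sides derive the same secret
    const alice = generateKeyPairSync("x25519");
    const bob = generateKeyPairSync("x25519");
    const shared = diffieHellman({
      privateKey: alice.privateKey,
      publicKey: bob.publicKey,
    });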



