Tribler is O.G. legit. Created in 2005, originally as part of a research project (I believe) around trying to create "fairer" allocation of bandwidth in BitTorrent (itself from 2001). Words like "incentivization" and "reputation" have dotted their work for many years.
Tribler has continued to be one of the most cutting-edge pieces of p2p software on the planet. It went on to pioneer p2p search, streaming of partially completed videos, live-streaming, a range of privacy/security enhancements, tagging, and moderation.
In 2012 they dubbed their efforts "4th Generation P2P", encompassing several of these goals, many of which were already underway. I believe they've succeeded on all these fronts, & have only continued pushing further since. https://www.tribler.org/4thGenerationP2P/
The team has been doing cutting-edge research for a long time. Their p2p search capability is still basically unparalleled in this world.
Tribler founder here, great to see people still care about cute old P2P file sharing.
My lab has been trying to get sharing, searching, and crowdsourcing to scale towards millions for 16 years now.
The fundamental science is making progress toward re-decentralising the Internet. Decades of work left, obviously. Lots of progress on creating non-profit versions of all the Big Tech services (well, in principle that is). Even Amazon can be decentralised; see a recent PhD thesis from the lab: https://repository.tudelft.nl/islandora/object/uuid%3Aa4f750...
The European Commission might come in and allow open source clients for Facebook, Instagram, Twitter, Amazon, etc. Our trustchain technology is specifically designed to be the superstructure for this. So the EU might break down the defensive moats around 10 T$ of market cap. The legal foundation is already there: the new eIDAS Regulation is designed to _enforce_ login using the upcoming EU digital passport for all of Big Tech. So an open source EU Metaverse, connecting all Big Tech protocols into a single repo and identity management solution with true privacy protection.
> The legal foundation is already there: the new eIDAS Regulation is designed to _enforce_ login using the upcoming EU digital passport for all of Big Tech. So an open source EU Metaverse, connecting all Big Tech protocols into a single repo and identity management solution with true privacy protection.
Legally mandating a single user-identity system seems like the worst possible scenario I could ever have imagined for cyberspace. Trusting such a system is enormously difficult to imagine. But more than that, to let the mold set on cyberspace, to create a single way the internet has to work, & deny new frontiers, new possibilities, new creativities... that seems monstrous. Beyond imagining.
Apologies for the poor formulation. It's another login system you are required to support. People can create personas, and if you try to de-anonymize them across services you're in gross violation of EU law. So more like a privacy-friendly must-have, right?
It seems to be easier and less prone to flame wars than asking:
* how easy is it for a newcomer to discover content on the privacy overlay?
* what metadata gets leaked when someone searches, downloads, or hosts content?
* how does the privacy overlay perform compared to the regular old web/internet?
* what's the user experience like (e.g., do the devs even care whether non-technical people use their software)?
* what happens when some mildly interested party attacks the network?
* probably other bullets I'm forgetting
It's way easier to just know whether Sci-Hub is there. Because if it is, then a) they've solved the most important problems of a privacy overlay, and b) they are probably actively being attacked, so nobody has to speculate about their defenses.
Malicious relays have the option of giving you bogus content or real content. The block-level hashes of BitTorrent prevent anything bad from getting through. So just avoid relays which are not giving you good blocks.
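A minimal sketch of that block-level check, assuming the expected SHA-1 piece hashes have already been read from the trusted .torrent metainfo; the function names here (including `drop_relay`) are illustrative, not Tribler's actual API:

```python
import hashlib

def verify_piece(index: int, data: bytes, piece_hashes: list) -> bool:
    """Return True if a downloaded piece matches its metainfo SHA-1 hash."""
    return hashlib.sha1(data).digest() == piece_hashes[index]

def on_piece_received(index, data, piece_hashes, relay, drop_relay):
    # A relay that keeps serving pieces failing this check can simply be
    # dropped; bogus data never reaches the file on disk.
    if not verify_piece(index, data, piece_hashes):
        drop_relay(relay)
```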
At another level, if relays are honest-but-curious (i.e. spying), you need to randomly select several relays and use them all in a long relay path. Pioneered by the Tor team; it results in excessive bandwidth usage.
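A toy illustration of the path-building idea, assuming `known_relays` is a list of candidate relay addresses (this is not Tribler's actual tunnel code):

```python
import random

def build_path(known_relays: list, hops: int = 3) -> list:
    """Pick `hops` distinct relays uniformly at random, Tor-style."""
    if len(known_relays) < hops:
        raise ValueError("not enough known relays to build a path")
    return random.sample(known_relays, hops)

# Every payload byte is forwarded by each hop, so a 3-hop path roughly
# triples the total traffic for the same download; hence the bandwidth cost.
```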
Any kind of telemetry service inside a program purporting to care about avoiding centralised avenues of control should be an indicator to never, ever use it. Either the developers don't understand what they're talking about or they are being disingenuous. It doesn't really matter to the end user; both mean that your use-case is not being served :)
It tries to scrub passwords and secret keys based on some text filters by default, but it can be configured to scrub arbitrary data (via a hook in the SDK).
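For reference, a hedged sketch of such a hook using the Sentry Python SDK's documented `before_send` option; the marker list and DSN here are made up, not Tribler's actual configuration:

```python
import sentry_sdk

SECRET_MARKERS = ("password", "secret", "api_key")  # illustrative filters

def scrub(event, hint):
    # Drop any extra-context keys that look sensitive before the event
    # leaves the machine; returning None instead would suppress it entirely.
    extra = event.get("extra", {})
    event["extra"] = {k: v for k, v in extra.items()
                      if not any(m in k.lower() for m in SECRET_MARKERS)}
    return event

sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
                before_send=scrub)
```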
The crash reporter uses Sentry. We don't want to know anything else about our users.
When Tribler crashes, the crash report (e.g. the Python traceback) can be submitted as a GitHub issue, using Sentry. This requires the user to opt in by clicking "send" in the crash reporter.
I am old enough to remember when all file sharing was decentralized (eDonkey, iMesh, Napster and a million others). Then, all of a sudden, everyone switched to torrents, which rely on [centralized] web sites for search. I guess convenience always beats privacy.
Napster, KaZaA and most of the initial wave of p2p sharing had centralized servers which were required for coordination. That's why they got shut down: the record companies sued the companies behind them and won, forcing them to shut down their networks.
The second wave of Gnutella and all of the various BitTorrent clients was decentralized. No single entity controlled the network, so there was no single point of failure. Record companies and movie studios came after individual nodes, but were never able to shut down the network.
However, no one ever invented a great way to do search and discovery on top of BitTorrent, so we've always had centralized servers for that piece.
It doesn't (strictly) need to have centralized search; you can passively collect info from the DHT swarm and build up an index over time of torrents people are sharing.
> you can passively collect info from the DHT swarm and build up an index over time
For anyone wanting to pursue this, I feel like I can share: I used https://github.com/boramalper/magnetico recently, and people share database dumps regularly. I found that ~4 DB dumps and the merging scripts are all you need to get up and running.
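For the curious, a rough sketch of the passive harvesting idea behind tools like magnetico, assuming the third-party `bencodepy` package for bencode decoding; a real crawler also joins the DHT and answers queries so peers keep talking to it, which this deliberately omits:

```python
import socket
import bencodepy  # third-party bencode codec, assumed here for brevity

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 6881))  # common DHT port; any free port works

seen = set()
while True:
    data, addr = sock.recvfrom(65536)
    try:
        msg = bencodepy.decode(data)
    except Exception:
        continue  # not a well-formed DHT message
    if not isinstance(msg, dict):
        continue
    # get_peers/announce_peer queries carry the info-hash being looked up.
    args = msg.get(b"a")
    info_hash = args.get(b"info_hash") if isinstance(args, dict) else None
    if isinstance(info_hash, bytes) and info_hash not in seen:
        seen.add(info_hash)
        print(addr, info_hash.hex())  # feed into your index instead of printing
```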
Because we needed curation. A very large proportion of the p2p content was viruses/malware etc., or fake (you thought you were downloading A, but in fact you were downloading B).
With links on centralized websites and fora we at least had a little more confidence in the safety/quality of the content.
To make decentralised search work you need to solve the trust problem: strangers sharing content with you without spam, decoys, and malware. You need to create trust without any central coordinator. Crawling the DHT for content is not going to get you Google-quality search. The Tribler lab has been working on the web-of-trust problem and decentralised (ledger) accounting since pre-Bitcoin days. It's a stubborn problem, but we're making consistent progress. Latest peer-reviewed science: https://www.ifaamas.org/Proceedings/aamas2021/pdfs/p1263.pdf
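To give a flavour of what "decentralised (ledger) accounting" means here, a toy sketch of hash-linked pairwise bandwidth records; real systems such as the lab's TrustChain additionally have both parties sign every block, which this unsigned version leaves out:

```python
import hashlib
import json

def make_record(prev_hash: str, peer: str, up: int, down: int) -> dict:
    """Append-only record of bytes uploaded to / downloaded from a peer."""
    body = {"prev": prev_hash, "peer": peer, "up": up, "down": down}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

genesis = make_record("0" * 64, peer="peer_b", up=1024, down=0)
second = make_record(genesis["hash"], peer="peer_b", up=0, down=2048)
# Tampering with `genesis` after the fact breaks the chain, which is what
# lets strangers audit each other's claimed contribution history.
```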
You generally can't catch viruses from mp3 or video files. As for games, I very much doubt a torrent web site can check hundreds of gigabyte-sized .exe files for trojans, so I wouldn't recommend torrenting games on the same computer you store your Bitcoin wallet on.
eDonkey and Napster required bootstrap nodes, I think? And sending your search terms out to other peers on the network could be considered less private, depending on your view.
You can search BitTorrent via the DHT in a similar manner (I'm unsure whether the bootstrap nodes there are strictly required), or search an indexer website from one IP and then use the magnet link to download from another IP.
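As a small illustration of the second approach, pulling the info-hash out of a magnet link with only the standard library, so the search and the actual download can happen from different IPs (the example link is made up):

```python
from urllib.parse import urlparse, parse_qs

def magnet_info_hash(magnet: str) -> str:
    """Extract the hex info-hash from a magnet URI's xt parameter."""
    qs = parse_qs(urlparse(magnet).query)
    xt = qs["xt"][0]  # e.g. "urn:btih:<40 hex chars>"
    if not xt.startswith("urn:btih:"):
        raise ValueError("not a BitTorrent magnet link")
    return xt[len("urn:btih:"):]

print(magnet_info_hash("magnet:?xt=urn:btih:" + "ab" * 20 + "&dn=example"))
```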
Privacy is not the only drawback of centralization. A centralized network is also easier to take down, by simply disabling the central component. Although not used as much as during the '90s, some decentralized networks like Gnutella and Kademlia are still around to this day.
Using this opportunity to ask: has anyone here used torrentpanda before? IMO it was the best torrent search site, with a big database of crawled torrents. They seem to have disappeared a few months ago and I haven't found any working domains so far.
The old site is torrentpanda.org, but going to it now gives an error page and redirects to softcore porn.
There's a pretty big caveat in the anonymity section on the website: seeding is not anonymous by default (although there is an option for hidden seeding as well, which apparently slows speeds quite a bit). But that aside, does this mean that downloading is anonymous to ISPs by default? That's pretty big; weird that I've never seen this before.
Very interesting. On Debian the Ubuntu .deb package installs fine without asking for other dependencies.
I wonder if it could be split into a core+GUI pair for those of us who like to have p2p software run on small boxes (RasPi, other ARM boards, etc.). I'm so used to running Transmission as a service on my NAS and then using the GUI from PCs when they're turned on, which is extremely convenient.
Nothing. Copyright enforcement bootlickers briefly join the swarm to grab all the IPs and that's it. They (in theory) only have claims on hashes they are payrolled to monitor. Also, they need the tracker to cooperate, so that rules out private sites with competent opsec.
Edit: the moment I hit send I realized I fell for a common innuendo. Feeling a bit dumb now.
I know the guys from TU Delft who worked on it (the Parallel and Distributed Systems group).
It is one of the respected groups in CS in the Netherlands. With your comment you are actually insulting people who have worked for years on this. With your one-liner you are discrediting many theses and much hard work, without any credentials or proof on your side.
> With your comment you are actually insulting people
It's not a personal insult to say a paper that someone produced was of poor quality (I don't know whether or not it was; I have not read it). Someone can produce something bad without it saying anything about their character.
I am not talking about personal insults either. It is insulting their work and profession. Producing papers is their work. Producing papers and delivering a project that makes an impact is rare and should be appreciated.
It is like having a successful open-source project and somebody saying you are a really bad programmer, without any context.
It's not an insult to say that work was done badly. Nobody deserves appreciation just because they did something, and nobody deserves to be told to essentially shut up when they give a value judgement you do not agree with.
Thanks for the link, I will read it.
Edit: Given that the ideas presented in the paper are somewhat novel (and there is more than one such idea), I would guess the original post was complaining about the grammatical errors in it. I've definitely read papers that purported to be more "important" but eventually (after many words saying essentially nothing of value) turned out to have far less novel information in them, so I would say that in the realm of compsci papers that invent new relevant (as opposed to purely academic) algorithms, this one is above average.