
Click on the link they provided.

We can't though. This is what the marketing department keeps shouting from the rooftops, this is what the futurists keep prophesying, but in practice these things seem to be giant liability machines with no common sense, self doubt, or ability to say "I don't know, let me ask someone else."

They're neat, they have uses, but they're not replacements for anything. Even Whisper cannot replace a human transcriptionist as it will just make up and insert random lines that are not present in the source audio.


Google speech-to-text, Siri, Zoom subtitles, YouTube subtitles, etc. insert almost only things I didn't say into the transcriptions. Whisper understands exactly what I say, even if I mumble, use abbreviations, etc., and at speed too. Maybe it does something wrong sometimes; the old way is infinitely worse. It's almost a joke between me and my colleagues to switch on speech-to-text when doing team calls (even 1 on 1); it gets 99% completely wrong; we talk about programming TypeScript; it transcribes about robots and sex and rocks in the purple water; it's funny to read. If you turned off the audio, you, as a third party, would think we were drunk or on acid, while with sound you would follow everything fine.

I assume native English speakers do better (?) but we speak English (with accents) and Whisper has no issues at all.


Again, it's seriously impressive tech, and it has its uses. But the failure modes are wildly different and severe, made even more severe by how impressive it is in the usual case. Medical transcriptions find themselves containing cancer diagnoses that were never uttered in reality, for instance.

A failing traditional speech-to-text system can be spotted by glancing through a transcript. A failing Whisper can only be identified by thorough comparison, with the failures being far more impactful and important to spot.


I believe it bounces through accounts.youtube.com just to establish cookies on that domain. Everything else is under google.com, so the rest of the services are already good to go without a bounce.

This seems to get brought up at least once in the comments for every one of these articles that pops up.

The IA has tried distributing their stores, but nowhere near enough people actually put their storage where their mouths are.


Nearly every entry in the library has a torrent file (BitTorrent being a form of distributed storage), but with the index pages down, they're not accessible.


I can hardly find a healthy torrent for an obscure feature film that I care about. How am I supposed to find a healthy torrent for a random web page from the aughts?



If we want it to be distributed across laymen, we need something easier than opening torrent files (or inputting magnet URIs) over a thousand times. Perhaps https://github.com/ipfs/in-web-browsers?


You're correct, but even then you've still got the problem of storage - the torrents are only useful (and there are a lot of them) if a sustainable number of seeds remain available.


How about torrenting a bunch of websites in one collection?

You could distribute less popular websites alongside more used ones to avoid losing them? And torrents are good at transferring large files, in my experience.


> You could distribute less popular websites alongside more used ones to avoid losing them?

So long as this distributed protocol has the concept of individual files, there _will_ be clients out there that allow the user to select `popular-site.archive.tar.gz` and not `less-popular.tar.gz` for download.

And what one person doesn't download... they can't seed back. Distributed stuff is really good for low cost, high scale distribution of in-demand content. It's _terrible_ for long term reliability/availability, though.


That is fundamentally the problem, no one wants to donate storage to host stuff they're not interested in.


More concretely, nobody wants to donate anything. They just want it to exist. Charity has never been a functional solution to normal coordination problems. We have centuries of evidence of this.


Maybe there needs to be a torrentable, offline-first HTML file (it only goes online to tell you whether there's a newer torrent with more files) that lets you look through it for more torrents (magnet links are really tiny).

I miss when TPB used to have a CSV of all their magnet links; their new UI is trash. I can't even find anything like I could in the old days - TPB is pretty much a dying old relic.


The problem with their torrents is that they are usually broken. Lots of complaints about them being broken, but no one fixing it.


It's not abnormal for their torrents to be missing most of the files available as direct downloads on the same page.


They're not using DHT?


They're not talking about peer discovery, they're talking about .torrent file discovery.


Perhaps one idea is to let people choose what they want to protect. This way people wanting to support it can have their mission.


I want it to protect all sorts of random obscure documents, mostly kind of crappy, that I can't predict in advance, so I can pursue my hobby of answering random obscure questions. For instance:

* What is a "bird famine", and did one happen in 1880?

* Did any astrologer ever claim that the constellations "remember" the areas of the sky, and hence zodiac signs, that they belonged to in ancient times before precession shifted them around?

* Who first said "psychology is pulling habits out of rats", and in what context? (That one's on Wikiquote now, but only because I put it there after research on IA.)

Or consider the recently rediscovered Bram Stoker short story. That was found in an actual library, but only because the library kept copies of old Irish newspapers instead of lining cupboards with them.

The necessary documents to answer highly specific questions are very boring, and nobody has any reason to like them.


You could let users choose what to mirror, and one of those choices could be a big bucket of all the least available stuff, for pure preservationists who don't want to focus on particular segments of the data.

Sort of like the bittorrent algorithm that favors retrieving and sharing the least-available chunks if you haven't assigned any priority to certain parts.
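That's the "rarest first" piece-selection strategy. Here is a minimal sketch of the idea (hypothetical function and variable names, not any client's actual code):

    from collections import Counter

    def rarest_first_order(peer_bitfields, my_pieces):
        # peer_bitfields: one set of piece indices per connected peer
        # my_pieces: piece indices we already have
        availability = Counter()
        for bitfield in peer_bitfields:
            availability.update(bitfield)
        missing = set(availability) - my_pieces
        # Request the scarcest pieces first so they gain more copies in the swarm.
        return sorted(missing, key=lambda piece: availability[piece])

    # Piece 2 is held by only one peer, so it gets requested first: [2, 1]
    print(rarest_first_order([{0, 1, 2}, {0, 1}, {1}], my_pieces={0}))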


My favorite question is: whether or not Bowser took the princess to another castle.


Since the IA had a collection of emulators (some of them running online*), and old ROMs and floppies and such, it could probably help with that one too.

* Strictly speaking, running in-browser, but that sounded like "Bowser" so I wrote online instead.


You already can, they have torrents for everything.


> they have torrents for everything

Including the index itself? That would be awesome.


Their torrents suck and IME don’t update to changes in the archive.


Aren't torrents terrible at handling updates in general? If you want to make a change to the data, or even just add or remove data, you have to create a new torrent and somehow get people to update their torrent and data as well.


There's a mutable torrent extension (BEP-46) but unfortunately I don't think it's widely supported. I think IPFS/IPNS is the more likely direction.


Which IA has moved into and hasn’t found much luck in, unfortunately.


How come?


Torrents are immutable in principle, which is good for preserving things. A new version of a set of files should be a new torrent.


> Torrents are immutable in principle

In practice, that's mostly how they're being used.

But the protocol does support mutation. The BEP describing the behavior even has archive.org as an example...

> The intention is to allow publishers to serve content that might change over time in a more decentralized fashion. Consumers interested in the publisher's content only need to know their public key + optional salt. For instance, entities like Archive.org could publish their database dumps, and benefit from not having to maintain a central HTTP feed server to notify consumers about updates.

http://www.bittorrent.org/beps/bep_0046.html
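Roughly how the lookup side works, as I understand BEP-44/46 (a sketch, worth double-checking against the spec): the publisher signs a small mutable item with an ed25519 key, and clients fetch it from the DHT at a fixed target derived from that key, then follow whatever infohash it currently points to.

    import hashlib

    def mutable_target(public_key: bytes, salt: bytes = b"") -> bytes:
        # BEP-44: the DHT target of a mutable item is SHA-1 over the
        # public key plus the optional salt, so it never changes.
        return hashlib.sha1(public_key + salt).digest()

    # Hypothetical 32-byte ed25519 public key; real ones come from the publisher.
    pubkey = bytes.fromhex("aa" * 32)
    print(mutable_target(pubkey, salt=b"archive-dump").hex())
    # The signed item stored at that target carries the current infohash ("ih")
    # and a sequence number ("seq"); clients switch to the new infohash whenever
    # "seq" increases. The magnet link only needs the public key (urn:btpk:...).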


Specs are nice but does any client actually implement this?


How would preservationists go about automatically updating the torrent and data they seed? Or would they need to manually regularly check, if they are still seeding the up-to-date content?


This is accurate, their torrent-generating system is basically broken to the point of being useless.


Perhaps a naïve question, but hasn't this problem been solved by the FreeNet Project (now HyphaNet) [0]? (the re-write — current FreeNet — was previously called Locutus, IIRC [1]).

Side note: As an outsider, and someone who hasn't tried either version of FreeNet in almost two decades, was this kind of a schism like the Python 2 vs. Python 3 kerfuffle? Is there more to it?

[0]: https://www.hyphanet.org/

[1]: https://freenet.org/


Hi, Freenet's FAQ explains the renaming/rebranding here: [1]

Neither version of Freenet is designed for long-term archiving of large amounts of data, so it probably isn't ideally suited to replacing archive.org, but we are planning to build decentralized alternatives to services like Wikipedia on top of Freenet.

[1] https://freenet.org/faq/#why-was-freenet-rearchitected-and-r...


Thanks for pointing it out and for correcting me!


And it's guaranteed not to happen if the efforts don't continue.


You could say the same thing about perpetual motion. Being realistic about why past efforts have failed is key to doing better in the future: for example, people won’t mirror content which could get them in trouble and most people want to feel some kind of benefit or thanks. People should be thinking about how to change dynamics like those rather than burning out volunteers trying more ideas which don’t change the underlying game.


There are certainly research questions and cost questions and practicality and subsetting and whatnot. Addressed by some ideas and not by others.

What there isn't is a currently maintained and advertised client and plan. That I can find. Clunky or not, incomplete or not.

There are other systems that have a rough plan for duplication and local copy and backup. You can easily contribute to them, run them, or make local copies. But not IA. (I mean you can try and cook up your own duplication method. And you can use a personal solution to mirror locally everything you visit and such.) No duplication or backup client or plan. No sister mirrored institution that you might fund. Nothing.


> nowhere near enough people actually put their storage where their mouths are.

Typically because most people who have the upload don't know that they can. And if they come to the notion on their own, they won't know how.

If they put the notion to a search engine, the keywords they come up with probably don't return the needed ELI5 page.

As in: How do I [?] for the Internet Archive?, most folks won't know what [?] needs to be.


This is literally torrents. Just give up


> This is literally torrents. Just give up

Most casual visitors to IA don't know that. Which is the point.

Giving up is for others.


The problem with torrents is they have a bad reputation since people use it to steal and redistribute other people’s content without their consent.


Torrents have a bad reputation due to malicious executables; I have never met someone who genuinely saw piracy as stealing, only as dangerous. In fact, stealing as a definition cannot cover digital piracy, as stealing is to take something away, and to take is to possess something physically. The correct term is copying, because you are duplicating files. And that's not even getting into the cultural protection piracy affords in today's DRM- and license-filled world.


What does this have to do with torrents? It is widely known that you shouldn't execute an untrusted executable from the internet. You can get malicious executables from websites too.

If this is what people think, we need to work on education...


Piracy also is not unique to torrents, and yet that was what GP used.

The average person, in my experience, can barely work a non-cellphone filesystem and actively stresses when a terminal is in front of them, even for a brief moment. Education went out the window a decade ago.


The problem with websites is they have a bad reputation since people use it to steal and redistribute other people’s content without their consent.


The problem with file transfer is they have a bad reputation since people use it to [insert illegal or immoral activity here].

Then rename it from "torrent" to something else.


I'm not sure what the argumentative line is here. But file uploading and downloading needs to have accountability for hosting, which p2p obscures.

The bad reputation is inherent to the tech, not a random quirk.


It doesn't really, you can host a server off a raw IP.

Downloading from example.com is just peer to peer with someone big. There's lots of hosting providers and DNS providers that are happy to host illegal-in-some-places content.


Incorrect.

The protocols for downloading from example.com are asymmetrical client-server architectures, not symmetrical decentralized peer-to-peer.


Is there any form of torrent where you can do a full text search? That, to me, is the more important problem with torrents.


But the Internet Archive doesn't do this? It's a key-based search (URL keys).


The Internet Archive allows full-text search of books, newspapers, etc. Or anyway it did, before being breached.


It does transcribe books (through imperfect OCR) so I guess that's possible. Never relied on it as I search by title and author.

But anyway, that's not the case for the Wayback product, which is the unique core of IA.


That's not unique, not a product, and not the part I use most.

Well, OK, maybe other webpage archives don't work as well, I haven't tried them, but there are others. And they're newer, so don't have such extensive historical pages.

Large numbers of Wikipedia references (which relied on IA to prevent link rot) must be completely broken now.


To me this is like saying you shouldn't use a knife because they are also used by criminals.


This kind of talk is simply modern politik-speak. I can't stand it, or the people who fall for the deception. Stretch the truth to disarm the constituents.


In what way? Torrents are used all over for content delivery. Battle.net uses a proprietary version of BitTorrent. It’s now owned by Microsoft. There’s many more legitimate uses as commented by many others.

Criminals using tools does not make the tools criminal.


It's a matter of numbers: if tens of thousands of criminals use tech X, and it has few genuine uses, it's going to be restricted.

This has precedent in illegal drug categorization: it's not just about the damage, but about the ratio of noxious to helpful use.


This precedent is problematic, I think. It seems like the populist way of addressing issues: just following the biggest outcry rather than the underlying problem. Just because there are currently more illegitimate uses for a thing, we shouldn't prevent legitimate uses, I think. The ratio might just be skewed because in the legitimate world you grow your audience with marketing, investing tons of capital, while for illegitimate use cases the marketing is often just word of mouth, because of the features.


That's how most countries work: if enough people raise a big enough fuss, it's restricted like dangerous drugs.


That precedent was and still is legally used to federally regulate marijuana more harshly than fentanyl, a precedent I strongly disagree with, so you'll have to forgive me for believing that the degree to which something causes harm matters more than the amount of misuse.


Marijuana ruins millions of young minds.


Literally millions of people use it (whether they know it or not).

Societies should criminalize behavior and then (shocker!) enforce the laws! Let tools be tools.


Give it a good reputation then.

What are some legal torrent trackers?


What is your definition of a legal torrent tracker? I was not aware there were even any illegal ones.


A tracker that only tracks legal torrents, e.g. free software, OCRemix content, etc.



How would you keep the definition of legality without a centralized authority?


A tracker is a centralized authority.


But legality doesn't have a central authority. What is illegal in one jurisdiction is ok in another


Just track things that are legal everywhere or in most jurisdictions then.


I don't see how that would be enforceable. Policy perhaps, but it would be impossible to absolutely prevent it from being used for that purpose IMO.


> I was not aware there were even any illegal ones.

Depends on the jurisdiction. Remember what happened in The Pirate Bay trial?


My understanding is that that court case did not show that operating a torrent tracker is illegal, but specifically operating a (any) service with the explicit intent of violating copyright... huge difference IMO.

To me that's not even related to it being a torrent tracker, just that they were "aiding and abetting" copyright infringement.


OK. But what is the case law on hosting illegal content? Sure, you may operate a torrent, but if your client is distributing child porn, in my view, you bear responsibility.


I'm backing ranger_danger here.

In Law the technicalities matter.

Trackers generally do not host any content, just hashcodes and (sometimes) metadata descriptions of content.

If "your" (ie let's say _you_ TZubiri) client is distributing child pornography content because you have a partially downloaded CP file then that's on _you_ and not on the tracker.

The "tracker" has unique hashcode signatures of tens of millions of torrents - it literaly just puts clients (such as the one that you might be running yourself on your machine in the example above) in touch with other clients who are "just asking" about the same unique hashcode signature.

Some tracker-affiliated websites (eg: TPB) might host searchable indexes of metadata associated with specific torrents (and still not host the torrents themselves) but "pure" trackers can literally operate with zero knowledge of any content - just arrange handshakes between clients looking for matching hashes - whether that's UbuntuLatest or DonkeyNotKong


We agree in that if my client distributes illegal content, I am responsible, at least in part.

On the other hand, I also believe that a tracker that hosts hashes of illegal content, provides search facilities for it, and facilitates its download is responsible, in a big way. That's my personal opinion, and I think it's backed by cases like The Pirate Bay and Sci-Hub.

That zero-knowledge tracker is interesting; my first reaction is that it's going to end up in very nasty places like Tor, onion services, etc.


> That zero-knowledge tracker is interesting,

Most actual trackers are zero knowledge.

A tracker (bit of central software that handles 100+ thousand connections/second) is not a "torrent site" such as TPB, EZTV, etc.

A tracker handshakes torrent clients and introduces peers to each other; it has no idea, nor needs any idea, that "SomeName 1080p DSPN" maps to D23F5C5AAE3D5C361476108C97557F200327718A

All it needs is to store IP addresses that are interested in that hash and to pass handfuls of interested IP addresses to other interested parties (and some other bookkeeping).

From an actual tracker's PoV, the content is irrelevant and there's no means of telling one thing from another other than size - it's how trackers have operated for 20+ years now.
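A toy sketch of that bookkeeping (not any real tracker's code, just the shape of the state): a map from infohash to peer addresses, and nothing else.

    from collections import defaultdict

    swarms = defaultdict(set)  # infohash (hex) -> set of (ip, port) peers

    def announce(info_hash, ip, port, max_peers=50):
        # Hand back some peers already interested in this hash, then record
        # the announcing peer. No file data is ever seen or stored here.
        peers = list(swarms[info_hash])[:max_peers]
        swarms[info_hash].add((ip, port))
        return peers

    announce("d23f5c5aae3d5c361476108c97557f200327718a", "198.51.100.7", 6881)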

Here are some actual tracker addresses and ports

    udp://tracker.opentrackr.org:1337/announce
    udp://p4p.arenabg.com:1337/announce
    udp://tracker.torrent.eu.org:451/announce
    udp://tracker.dler.org:6969/announce
    udp://open.stealth.si:80/announce
    udp://ipv4.tracker.harry.lu:80/announce
    https://opentracker.i2p.rocks:443/announce
Here's the bittorrent protocol: http://bittorrent.org/beps/bep_0052.html

Trackers can hand out .torrent files if asked (bencoded dictionaries that describe filenames, sizes, checksums, directory structures of a torrent's contents) but they don't have to; mostly they hand out peer lists of other clients .. peers can also answer requests for .torrent files.
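Roughly what such a manifest contains, sketched as a Python dict before bencoding (field names per BEP-3; the values here are invented):

    metainfo = {
        "announce": "udp://tracker.opentrackr.org:1337/announce",
        "info": {
            "name": "BeautifulSunset.mkv",  # just a label chosen by the uploader
            "length": 734003200,            # total size in bytes
            "piece length": 262144,         # bytes per piece
            "pieces": b"<20-byte SHA-1 of each piece, concatenated>",
        },
    }
    # Only names, sizes and checksums - nothing here reveals what the bytes are.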

A .torrent file isn't enough to determine illegal content.

Pornography can be contained in files labelled "BeautifulSunset.mkv" and Rick Astley parody videos can frequently be found in files labelled "DirtyFilthyRepubicanFootTappingNudeAfrica.avi"

Given that, it's not clear how trackers could effectively filter by content that never actually traverses their servers.


Oh ok, it seems to be a misconception of mine then.

Mathematically, a tracker would offer a function that, given a hash, returns a list of peers with that file.

While a "torrent site" like TPB or SH, would offer a search mechanism, whereby they would host an index, content hashes and english descriptors, along with a search engine.

A user would then need to first use the "torrent site" to enter their search terms, and find the hash, then they would need to give the hash to a tracker, which would return the list of peers?

Is that right?

In any case, each party in the transaction shares liability. If we were analyzing a drug case or a people-trafficking case, each distributor, wholesaler or retailer would bear liability and face criminal charges. A legal defense of the type "I just connected buyers with sellers, I never exchanged the drug" would not have much chance of succeeding, although it is a common method to obstruct justice by complicating evidence gathering. (One member collects the money, the other gives the drugs.)


> A user would then need to first use the "torrent site" to enter their search terms, and find the hash, then they would need to give the hash to a tracker, which would return the list of peers?

> Is that right?

More or less.

> In any case, each party in the transaction shares liability.

That's exactly right Bob. Just as a telephone exchange shares liability for connecting drug sellers to drug buyers when given a phone number.

Clearly the telephone exchange should know by the number that the parties intend to discuss sharing child pornography rather than public access to free to air documentaries.

How do you propose that a telephone exchange vet phone numbers to ensure drugs are not discussed?

Bear in mind that in the case of a tracker the 'call' is NOT routed through the exchange.

With a proper telephone exchange the call data (voices) pass through the exchange equipment; with a tracker, no actual file content passes through the tracker's hardware.

The tracker, given a number, tells interested parties about each other .. they then talk directly to each other; be it about The Sky at Night -s2024e07- 2024-10-07 Question Time or about Debbie Does Donkeys.

Also keep in mind that trackers juggle a vast volume of connections of which a very small amount would be (say) child abuse related.


Interesting. That's a good point.

I'll restate the principle of the good-usage to bad-usage ratio: telephone providers are a well-established service with millions of legitimate users and uses. Furthermore, they are a recognized service in law, they are regulated, and they can comply with law enforcement.

They are closer to the ISP, which according to my theory has some liability as well.

It's just a matter of the liability being small and the service to society being useful and necessary.

To take a spin to a similar but newer tech, consider crypto. My position is that its legality, and its liability for the illegal usage of users (considering that of exchanges and online wallets, since the network is often not a legal entity), will depend on the ratio of legitimate to illegitimate use that will be given to it.

There's definitely a second-system effect, where undesirables go to the second system, so it might be a semantic difference unrelated to the technical protocols. Maybe if one system came first, or if by chance it were the most popular, the tables would be turned.

But I feel more strongly that there are design features that make law compliance, traceability and accountability difficult. In the case of trackers, perhaps the microservice/object is a simple key-value store, but it is semantically associated with other protocols which have the 'noxious' features described above AND are semantically associated with illegal material.


> I'll restate the principle of good usage to bad usage ratio, telephone providers are a well established service with millions of legitimate users and uses

Ditto trackers.

Have a look at the graphs here: https://opentrackr.org/

Over 10 million torrents tracked daily, on the order of 300 thousand connections per second, handshaking between some 200 million peers per week.

That's material from the Internet Archive, software releases, pooled filesharing, legitimate content sharing via embedded clients that use torrents to share load, and a lot of TV and movies that have variable copyright status.

( One of the largest TV|Movie sharing sites for decades recently closed down after the sole operator stopped bearing the cost and didn't want to take on dubious revenue sources; that was housed in a country that had no copyright agreements with the US or UK and was entirely legal on its home soil.

Another "club" MVGroup only rip documentaries that are "free to air" in the US, the UK, Japan, Australia, etc. and in 20 years of publicaly sharing publicaly funded content haven't had any real issues )

> the ISP, which according to my theory has some liability as well.

The world's a big place.

The US MPA (Motion Picture Association - the big five) backed an Australian mini-me group, AFACT (Australian Federation Against Copyright Theft), to establish ISP liability in a G20 country as a beachhead bit of legislation.

That did not go well: Roadshow Films Pty Ltd v iiNet Ltd decided in the High Court of Australia (2012) https://en.wikipedia.org/wiki/Roadshow_Films_Pty_Ltd_v_iiNet...

    The alliance of 34 companies unsuccessfully claimed that iiNet authorised primary copyright infringement by failing to take reasonable steps to prevent its customers from downloading and sharing infringing copies of films and television programs using BitTorrent.
That was a three strikes total face plant:

    The trial court delivered judgment on 4 February 2010, dismissing the application and awarding costs to iiNet.

    An appeal to the Full Court of the Federal Court was dismissed.

    A subsequent appeal to the High Court was unanimously dismissed on 20 April 2012.
It set a legal precedent:

    This case is important in copyright law of Australia because it tests copyright law changes required in the Australia–United States Free Trade Agreement, and set a precedent for future law suits about the responsibility of Australian Internet service providers with regards to copyright infringement via their services.
It's also now part of Crown Law .. ie. not directly part of the core British Law body, but a recognised bit of Commonwealth High Court Law that can be referenced for consideration in the UK, Canada, etc.

> but it is semantically associated with other protocols which have 'noxious' features described above AND are semantically associates with illegal material.

Gosh, semantics hey. Some people feel in their waters that this is a protocol used by criminals and must therefore be banned or policed into non-existence?

Is that a legal argument?


Are you sure open.stealth.si is a zero knowledge tracker? Some trackers reject unregistered torrents.


The list I gave was of some public trackers, I made no claim that they were zero knowledge trackers, I simply made a statement that trackers needn't be aware of .torrent file manifests in order to share peer lists.

I also indicated above that having knowledge of .torrent manifests is problematic, as that doesn't provide real actual knowledge of file contents, just knowledge of file names ... LatestActionMovie.mkv might be a rootkit virus and HappyBunnyRabbits.avi might be the worst, most exploitative underage pornography you can think of.

Some trackers are also private and require membership keys to access.

I was skating over a lot, as TZubiri seems unaware of many of the actual details and legitimate use cases, existing law, etc.


I don't think TPB ever hosted any copyrighted content, even indirectly by its users. Torrent peers do not ever send any file contents through the tracker.


Humble Bundle. Various Linux ISOs.


archive.org, to name one.


That's debatable. Most of their torrents are for things under copyright, though any other decentralized archive would have the same problem.


That’s a copyright problem. 99% of things made in the last 100 years fall under copyright.


Except when their own employees publicly tell people not to worry about copyright and just upload stuff anyway, they make it their own problem.


and a good number of things that were about to pass out of copyright had their terms further extended, to 2053.


Keep in mind the IA archives a lot of garbage. If it could be more focused it would be more likely to work.


The IA only works because it archives everything. You don't know what you need until you need it.


Archives generally purposefully don’t have a strong editorial streak. My trash is your treasure.


They have to if they don't want to use infinite space.


The attempts have actually been focused on specific types of content, such as historical videos.


personally I love all the random crap on IA!


I'm a high school teacher. Not for very long, but for a few years now.

Never have I ever seen a student reading on their phone. I'm not saying it doesn't happen, but it must be a vanishingly small fraction that I have not yet encountered. When my students are on their phones, it's games, or it's (primarily video-based) social media. A smaller but notable fraction is background media consumption, either music or movies.

That's not to say I don't have kids who read, though they're much rarer than the music listeners, just that the readers seem to prefer physical books.

So at least in my experience, I wouldn't expect that metric to be vulnerable to this particular flavor of distortion.


Thanks for the reality check. I was worried about how I could be conflating my own personal view as a parent with the popular narrative of "kids these days and their Instagram/TikTok." It probably says a lot more about me, but I vastly prefer the reading experience of a thick book on a phone to a physical copy. And I have since I was a teenager (back when it was just PDAs and clever TI-89 hackery).


Most people don’t have the patience and attention span to read thick books in the first place. That’s something that you have to develop with practice, and kids who have access to TikTok aren’t going to get that practice.


I used to read on a PDA and then later on a Nokia Internet Tablet. But never in school, even if I had one with me. At those times it was just games (Bejeweled, Space Trader, DopeWars) or graphing calculator software.


> games(Bejewelled, Space Trader, DopeWars) or graphic calculator software

Snake!


I know I’m an autist and Wikipedia addictions are not the norm, but how can people not enjoy reading?

I got in trouble all the time in school for reading (real books) during class time when the teacher was lecturing about things I already knew. Do kids like this not exist anymore? I thought autism rates were going up!

Seriously, no one reads? I thought kids were getting all political and woke, presumably from reading progressive things? I guess it’s all TikTok mind control?

I categorically oppose phone bans on the grounds that they harm the "brilliant lazy", and that these forces are exactly the kind you want to cultivate. (Insert the famous Bill Gates quote about the four types of German officers here.) But if this class of people has evaporated from the school systems, then who am I even defending?


I do have readers, they're just not as common as the other types. And like you, they prefer physical books.


I read a lot on my phone as a student.


It's advised in the implementation documentation to add a page explaining it. Shift is also used naturally when inputting information, with the visual feedback inside the button giving an opportunity for discoverability.


Time is of the essence when you're hitting an escape shortcut. That's why this component blanks the page immediately, then loads the decoy; there can be no delay, not even for the browser to tear down the page as it fetches the next. If you have enough time to just go and open Solitaire, you have no need for an escape button.

If you are with someone who cannot know what you are doing, who has appeared suddenly, you are quickly closing what you're doing and, yes, you will be looking at a blank page without some sort of escape mechanism like this. And if it's sudden and unexpected, you might not have been anticipating needing to pop open some decoys.

This seems like a complete misunderstanding of the situation.


If time is of the essence, why are you wasting it requiring 3 key presses and a site load? It takes longer to do that vs. a single shortcut, and is more visible (pages don't load immediately).

> If you have enough time to just go and open Solitaire, you have no need for an escape button.

You don't have enough time to complete that, you do that not to appear just staring at a blank screen. Activity of opening Solitaire is enough in itself.

> who cannot know what you are doing,

which is easier achieved when the browser is closed vs. when a browser is opened, since in the latter case it's easier to think about checking "previous" browser history

> This seems like a complete misunderstanding of the situation.

Indeed, so much so that this overengineered-but-underthought solution has none of the supposed benefits under the conditions people come up with to defend it


At this point I have to assume that this is willful. You are continuing to ignore things that have been addressed by both myself in my last comment and the article. I invite you to read the article more deeply and look into the actual research backing these UI patterns if you are genuinely struggling to understand.


Likewise I assume you have no arguments left, so "have to" resort to the meta "read more" and imaginary research


And I'd advise against thinking that you have thought of things within 5 minutes that, inexplicably, researched, data-backed experts have missed so easily. That's a mindset that does not lend itself to intellectual growth.

In almost all cases, it's not just so obvious that the experts in a field are so misguided. It's that there is complexity and depth that is not perceivable at a glance.


The army of "researched, data-backed experts" behind you is imaginary. So instead of repeating the same appeal to an imaginary authority, I'd advise you cite a single good UI research study where a slower, site-specific shortcut is better than a more common, faster one when "time is of the essence" (and whatever else you think is based on "expert research").


Escape might be more intuitive, but it's not more discoverable. Shift is used often when inputting information, and the mentioned visual feedback gives this behavior an opportunity to be discovered.

Having said that, regardless of the key the guidelines on using this pattern say that you should explicitly inform the user of the feature before they first encounter it.

https://design-system.service.gov.uk/patterns/exit-a-page-qu...


You're misunderstanding hx-swap-oob. Each element with that attribute will go and replace the element with the matching ID, keeping them all in sync with one response from the server.
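A small sketch of what such a response can look like from the server side (Flask and the element ids are purely illustrative):

    from flask import Flask

    app = Flask(__name__)

    @app.post("/add-to-cart")
    def add_to_cart():
        # One response: the first element lands in the request's normal hx-target;
        # the second carries hx-swap-oob and replaces whatever element on the
        # page already has id="cart-count", keeping both in sync.
        return (
            '<div id="cart-row">Item added.</div>'
            '<span id="cart-count" hx-swap-oob="true">3</span>'
        )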


> 1) you dont care about keeping UI state on the parts that are replaced (incl any stateful children of the UI element that happen to be there in your DOM structure), and 2) when you dont have to update the app in any other places when your state changes

Htmx does have tools for both of these cases. Out of the box, htmx throws out state, but there are plugins such as morphdom-swap for merging the new DOM fragment into the old while keeping state. I have some client-only state that holds references to DOM elements in JavaScript, and by default, yes, htmx breaks all those references, as those elements no longer exist. Link in morphdom-swap, and my references live on across reloads, even across attribute and content changes.

And for #2, htmx also allows you to swap in elements that are not the target element, just by specifying that that's what you want.

IMO these are pretty basic tools of htmx. Like you said, without them about the most complex thing you can create is a to-do list, and sometimes not even that.


> morphdom-swap

this https://github.com/bigskysoftware/htmx-extensions/tree/main/... with 159 stars is a basic tool of htmx? is this the community consensus?


Parts of htmx were yoinked out of the core library into an extensions repo relatively recently, as part of htmx 2.0. That might explain the relatively lower number of stars. More important than github stars is that it is indeed part of the htmx project and is documented here https://htmx.org/extensions/


No. Morph swaps[0] are the basic tool of htmx. Morphdom-swap is simply the one that works for my usecase.

[0] https://htmx.org/docs/#morphing


The htmx author maintains his own swapping library which has more stars (although I wouldn't judge a project solely based on Github stars): https://github.com/bigskysoftware/idiomorph?tab=readme-ov-fi...


Just step back for a second and think about programming without modeling the states. Framework or not, no amount of hacking/tooling can help you with that.

