Double-clicking on the Web (ma.ttias.be)
106 points by Mojah on April 18, 2015 | 105 comments



HTTP already handles this just fine if you have sensitive forms: your form can include a one-time token which your server validates. If the token has already been used, you don't process the second request.
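A minimal sketch of that token check, assuming an Express-style app with body parsing and an in-memory token store (all names here are illustrative, not from the article):

    var crypto = require('crypto');
    var issuedTokens = new Set(); // in a real app this would live in the session or a shared store

    app.get('/order', function (req, res) {
      var token = crypto.randomBytes(16).toString('hex');
      issuedTokens.add(token);
      res.render('order-form', { formToken: token }); // rendered into a hidden <input>
    });

    app.post('/order', function (req, res) {
      // Set#delete returns false if the token was never issued or was already consumed
      if (!issuedTokens.delete(req.body.formToken)) {
        return res.status(409).send('This form has already been submitted.');
      }
      processOrder(req.body); // hypothetical business logic
      res.redirect('/order/confirmation');
    });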

What we definitely shouldn't do (as the author suggests) is disable form submissions on subsequent clicks. What if the first response fails? You'll have to enter the entire form all over again, instead of being able to hit the back button. Madness!

The author's proposal makes things marginally easier for developers and shifts all of the pain to users. That's not really acceptable, at least not to me.


While I agree that you should include a one-time token with all sensitive forms, I don't think that disabling links/submit buttons on the first click is a bad idea in general. It's an easy fix that will immediately prevent most accidental redundant server requests, but as it's a client-side fix it's inherently unreliable.

What's wrong though is the author's solution of disabling _all_ links and submit buttons on the page indefinitely. Here's an alternative solution that only disables whatever has been clicked on for half a second:

    (function() {
      function stopClickEvent (ev) {
        ev.preventDefault();
        ev.stopPropagation();
      }
      document.body.addEventListener('click', function (ev) {
        if (ev.target.tagName === 'A' || (ev.target.getAttribute('type') || '').toLowerCase() === 'submit') {
          setTimeout(function () {
            // Needs to happen _after_ the request goes through, hence the timeout
            ev.target.addEventListener('click', stopClickEvent);
          }, 0);
          setTimeout(function () {
            ev.target.removeEventListener('click', stopClickEvent);
          }, 500);
        }
      });
    })();
Unlike the author's solution, this really works for links (they don't support the `disabled` attribute), it doesn't change the appearance of the button, it only disables one element, and it will work for elements that have been added to the document after the DOM was loaded. You can try it here: http://codepen.io/anon/pen/xGKdLX


I like the 500ms timeout as a quick solution.

Of course, a common source of duplicate form submissions is when the submission request is taking long (>1000ms) and the user gives the button another click a little while later.


I know, and you could set the timeout higher to prevent these (the button becomes a placebo button that's still clickable but doesn't do anything), but it gets hard to draw a line at which the request really might have failed and the button should be re-enabled. I feel like this is a different problem that needs to be addressed server-side, both with quicker response times and single-use tokens.


This gets seriously annoying if you have an intermittent connection. I know the first click didn't succeed, so I have the choice of refreshing and losing all my typed content or manually using the web inspector to re-enable the button. It's shitty user experience.


Sorry, you might want to re-read my comment. My solution will only disable the button or link that has been clicked on for half a second, which ought to be enough to both prevent accidental and allow intentional resubmissions.


Ah! Yes I jumped to an assumption there. That's a much better solution.


I wish I could upvote this more than once. In the company I worked for for 10+ years (2002-2013) we had a huge web application used heavily by 500+ users daily. I have seen a lot, and the one-time-token approach is the only bulletproof way to solve this once and forever.

It's not only the double-click that causes problems. For example, a user might click, and if the server does not respond right away it might seem that the click was not accepted, so the user will try to click again 3-4 seconds later. The one-time token solves this problem as well.

Then you might have AJAX stuff that gives no feedback until the server does its job (which might take a while if it's under load at the time), and if you disable the button it prevents the user from retrying. Sure, you can add indeterminate progress bars and animations, but users quickly learn to ignore those if they don't get any response from the server in the expected time.

What really should be done is educating developers to implement this pattern. Any beginner book that talks about HTML and HTTP should include this. Unfortunately, it's so easy to enter web development today that many people have to learn from their own mistakes, and reinvent the wheel over and over again.

Maybe I'm too pessimistic, but I just don't see it happening.


Forms should probably have a one-time token generated when the form is rendered. This will also help prevent CSRF and DDOS via hogging of resources.

In fact, the same technique should apply to session tokens in general. They should be signed by the server, which removes the need to do I/O to filter out unauthorized sessions and mitigate DDOS attacks.
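A sketch of what "signed by the server" could look like, assuming Node's crypto module and an HMAC over the session id (the helper names are made up):

    var crypto = require('crypto');
    var SECRET = process.env.SESSION_SECRET; // server-side secret, never sent to clients

    function signSessionId(sessionId) {
      var mac = crypto.createHmac('sha256', SECRET).update(sessionId).digest('hex');
      return sessionId + '.' + mac;
    }

    function verifySessionToken(token) {
      var parts = token.split('.');
      if (parts.length !== 2) return null;
      var expected = crypto.createHmac('sha256', SECRET).update(parts[0]).digest('hex');
      if (parts[1].length !== expected.length) return null;
      // constant-time comparison so the MAC can't be guessed byte by byte
      if (!crypto.timingSafeEqual(Buffer.from(parts[1]), Buffer.from(expected))) return null;
      return parts[0]; // verified without touching storage
    }

No database lookup is needed to reject a forged session id; only tokens that verify ever reach storage.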

So yes, this is the correct way to do it. But additionally, I don't see why it's so bad for the client to prevent quick submissions in succession... maybe disable the button until the response comes back, with a 1 second timeout, and alternate suggestions on repeated failures.


Signing is slow; if anything, signing the CSRF tokens just became your bottleneck, and if not, you just inflated the request size by kilobytes for no good reason.


Signing doesn't require I/O and can be massively parallelized.

And also the CSRF token doesn't need signing, it comes from the session data stored on the server. The session id is what needs signing.


So the DDoS attacker can't retrieve said tokens?


Well, presumably creating an account will be expensive, otherwise an attacker can just execute a http://en.wikipedia.org/wiki/Sybil_attack

Once an account is created, throttling resources is straightforward. If you want, you can allow a person to have one X per Y (time, points, whatever).
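A toy version of that "one X per Y" throttle, assuming the account id is already authenticated (names illustrative):

    var WINDOW_MS = 60 * 1000;      // one action per minute, for example
    var lastActionAt = new Map();   // accountId -> timestamp of last allowed action

    function allowAction(accountId, now) {
      var last = lastActionAt.get(accountId) || 0;
      if (now - last < WINDOW_MS) return false; // still inside the window: reject
      lastActionAt.set(accountId, now);
      return true;
    }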


I hate it when people do this. I'd say >30% of my HTTP requests fail on my phone (I have to go into airplane mode and go out of airplane mode about 10 times a day to keep my connection up), so it's really, really annoying when people disable submission because often I do have to try 2-3 times.


This just moves the concurrency issue to the client, particularly for XHR requests.

Say you double-click on some action with a one-time token: the first request gets to the server and succeeds while the second request fails due to the token being used. But the response to the first request is delayed and the second response (failure) gets to the client first. The client can either 'obey' this failure response or wait until all in-flight requests return (or maybe until one comes through with success). The solutions to dealing with this look a lot like debouncing/throttling the requests in the first place.

Using a one-time token is very useful for avoiding multiple executions of non-idempotent operations, but it's really a separate issue from dealing with multiple requests/responses on the client, i.e. the UX.
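One way to sidestep the out-of-order responses on the client is to only act on the most recent in-flight request; a sketch, assuming fetch and made-up endpoint/handler names:

    var latestRequestId = 0;

    function submitAction(payload) {
      var requestId = ++latestRequestId;
      return fetch('/action', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload)
      }).then(function (response) {
        // A newer submission superseded this one: ignore the stale response
        if (requestId !== latestRequestId) return;
        return handleResult(response); // hypothetical UI update
      });
    }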


Subsequent requests to the same token don't have to fail - they just have to silently pass.


That doesn't make the situation any better.

Request 1 takes the 'token'

Request 2 sees the token gone, returns 'silent success'

Request 2 response reaches client, client assumes success.

Request 1 fails somewhere internally

Request 1 returns failure to client, which has already acted on the success of request 2

User thinks the action was successful, when it wasn't.

Now, you can build yourself a lovely two-phase commit system around the token and the operations performed by the action to deal with a UI issue, or you can just throttle the requests.


Why not treat this the way a REST web service would?

Request 1 submits token

Request 2 submits token

Response 2 returns first with a 409 conflict

Response 1 returns with a 200 OK

It's a question of handling the error codes properly. 409 means resubmit with a new If-Match. Other 400-series errors indicate some other problem with the client's submission.

Actually, if I were reviewing the original author's proposal, I'd say that double-click should be disabled only on <form method="post">
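A sketch of how a client could handle those two outcomes, assuming fetch (the endpoint and helpers are invented for illustration):

    fetch('/orders', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'If-Match': formToken }, // token baked into the rendered form
      body: JSON.stringify(order)
    }).then(function (response) {
      if (response.status === 409) {
        // The token was already consumed: treat it as "already submitted"
        return showAlreadySubmittedNotice();
      }
      if (!response.ok) throw new Error('Submission failed: ' + response.status);
      return showConfirmation();
    });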


This has nothing to do with REST, the same issues exist.


What do you mean it has nothing to do with REST? It's an application built on HTTP.

The same issues do not exist. Response 2 does not return "silent success" it returns 409 Conflict. The client can distinguish between conflict & success.


Neither the error codes nor the transport are important here.

You can make an arbitrary number of requests that can respond with various responses (or fail) in arbitrary order. Who do you listen to? Which response is 'correct'? Sure you can build a service on the client to track the requests, the order they are issued in, what responses supersede other responses, how long to wait for all responses, etc, etc. But why build such a complicated solution to such a simple problem?


If the first request errored, you can error on the second one as well. I don't see the problem.


If there is no throttling, there could be a dozen requests, with different responses arriving in arbitrary order. Which one do you believe to determine the outcome of the operation? How long do you wait for responses? etc. etc. If you're going to build all the machinery in the client to deal with this situation, it's much easier to just throttle.


I'm not saying "don't throttle". I'm just saying tokens can work pretty well too...


Even before I got into web dev, I assumed something like that one-time-use token could be implemented to solve the problem. But back in the day, every online store had explicit instructions to "only hit submit once". Why didn't they just use the token solution?


Because back in the day everyone was a novice.


Firefox seems to remove the second click sometimes. If I click a link, and then, while the page is loading but the old page is still visible change my mind and click another link, it sometimes ignores the second click. However it doesn't seem to happen all the time.

As an aside, the distinction between double click and single click used to be much clearer (in Windows 3.1/95 times!):

- If it's on a white background, maybe in a well, then it is selectable. Click on it to select it, right mouse button to do something with it, and double-click to do the default action with it (which is bold in the context menu).

- If it looks like a physical button (3D and raised), then you can perform an action by clicking on it.

Links are a bit odd, as they look selectable (and in fact they are, by clicking and dragging from outside in), but one click performs an action. But by now, they are so ubiquitous that everybody knows that one click activates them (well, at least one click. Two clicks do no harm in most situations.)

What I hate about modern flat design is that it often removes these hints towards what kind of control something is (the "affordance" in hip UX speak). If I click/tap on something, does it get selected, opened, or does nothing happen because it is a label? No way to know without trying.


> Firefox seems to remove the second click sometimes. If I click a link, and then, while the page is loading but the old page is still visible change my mind and click another link, it sometimes ignores the second click. However it doesn't seem to happen all the time.

To get better response time (and be a good citizen and save bandwidth), if you try to load two different pages in quick succession, the first request is aborted. If you're too late, the content of the first request has already been downloaded and is being processed, resulting in that page being loaded.


I don't disagree with the premise that double clicks are a broken thing on the web. There's too much of a cognitive shift between "the web" and "everything else" where double clicks are either a bad thing to do or a required action to complete tasks. Like the author said, even tech savvy people will double click things that they wouldn't need to.

But saying that idempotency is hard and thus we should build features to get around it.. that seems wrong to me. The web is inherently a distributed system and you're going to have concurrent requests and weird edge cases that arise in those situations. The right way to handle that isn't to bandaid it with disabling client-side behavior. That just hides the problem. Your server is the place to handle it.[1]

And, it really isn't that hard either. It's just a couple extra validations on your input before you do a write. There are places where it is more difficult but still not overly complicated to implement.

[1] Dogmatism alert. Of course, there are exceptions to everything that's a 'best practice'. Here, I'm just talking about disabling double-click as a general solution for the web to a concurrency problem.


Because never trust user input and race conditions.

The author says "Server-side, this is a much harder problem to solve." and he is correct. But that doesn't mean that if the browsers did solve this problem that you wouldn't still need to solve it server side. For the same reason you still need to implement server side validation even though you have client side validation (never trust user input).

Say you had an order form only allowing one order per customer (some special offer) and on the server you have a check to ensure order.count==1 you might assume that because the browser prevents double clicks on the form you are now safe. Except you aren't. It's pretty trivial to still send that form submission multiple times simultaneously (curl, ab) and trigger a race condition where each order.count check returns 0.
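Which is why the check has to be enforced atomically by the database rather than by an application-level count; a sketch, assuming a node-postgres-style client and a UNIQUE constraint on orders.customer_id (illustrative schema):

    function placeOrder(db, customerId, items) {
      // The UNIQUE constraint makes concurrent duplicate submissions fail, no matter
      // how many arrive at the same time or which application server handles them.
      return db.query(
        'INSERT INTO orders (customer_id, items) VALUES ($1, $2)',
        [customerId, JSON.stringify(items)]
      ).then(function () {
        return { ok: true };
      }, function (err) {
        if (err.code === '23505') return { ok: false, reason: 'already-ordered' }; // Postgres unique_violation
        throw err;
      });
    }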


Agreed, this should be solved server-side.

Indeed it is possible to implement idempotent behavior of POST forms server-side. Even if a bit tedious to implement at first, it meshes very well with multi-user web apps, when several people may be changing the same underlying data simultaneously.

One possible approach is carry in POST both old (original) state and new (user input) state, and apply it as a diff to underlying storage.

Used this approach with 100% success on WWW-based kiosks used by tradesmen, some of whom were habitual double-clickers. Since the form was very simple -- one or two fields -- the old state was carried in the action url. In case of double clicks, the second ended up changing nothing in backend storage, and returning correct data.
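A rough sketch of that old-state/new-state pattern, assuming an Express-style handler and a hypothetical storage object:

    app.post('/field/:id', function (req, res) {
      var current = storage.get(req.params.id);
      if (req.body.newValue === current) {
        // Duplicate submit (e.g. a double click): nothing changes, just show the correct data
        return res.render('field', { value: current });
      }
      if (req.body.oldValue !== current) {
        // Someone else changed it since the form was rendered: surface the conflict
        return res.status(409).render('conflict', { value: current });
      }
      storage.set(req.params.id, req.body.newValue);
      res.render('field', { value: req.body.newValue });
    });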


The problem isn't idempotency as that just solves the problem of multiple synchronous submits (If order submitted what happens if order is submitted again). The real problem is the asynchronous nature of the web where both submissions could arrive at exactly the same time. And this is solved generally by using some kind of global lock (often at the database level).


coarse locking is only a possibility at fairly small scales, unless your users like waiting. CSRF protection is the solution here as well. this entire article could have one comment that says nothing more and it would be perfect.


Who said anything about coarse locking? Most databases support row-level locking (think: lock the user row so each user cannot submit multiple orders at the same time).

And I think you are misunderstanding CSRF. As the name implies it is protection against Cross Site attacks and will do nothing to help you prevent race conditions.


That reminds me of the way my mother is using a computer. She clicks EVERYTHING twice. No matter if online or offline. That often causes problems. For example if you double-click an icon in the windows taskbar to open an application it opens twice.


Exactly, a lot of real-world (older) users do this and I cringe a little every time I see it. They don't even know that you can single click. This isn't a huge issue, but it's much more common than the people here seem to think.


"They don't even know that you can single click."

I see a lot of younger people who are like this about some things. (but not single-clicking obviously.)

How about raking your scrollwheel through fifteen pages to get somewhere? Analog scrolling, I call it. Because that's what it is, a skeuomorph of scrolling through a microfilm... I would've thought the advantage of having a computer was that you can go directly to page n, to line n, whatever.

In terms of efficiency, I would liken the scroll wheel to the arrow keys: it's alright for small distances, but not more.


Mine is the opposite, she never double clicks despite numerous attempts to teach her. Not sure about online but offline she "right-click + open"s everything, which is painful to watch.


Just set her desktop to single click mode?


I suggest teaching "click once at first and wait a moment; if nothing desired happens, then double-click." It's what I do when I explore the UIs of unfamiliar apps, and seems to work well enough.


Completely agree - I see the same behaviour.


All I can think of is that someone is using the mouse (and keyboard) quite a bit different than I do.

I can understand that single vs double clicking can be an issue with non-technical users, but disabling double click won't really change that issue, because they usually don't understand that these are two different actions. Of course one can and should try to make things easier for them, but once it starts to affect the usability for others, it's not really worth it anymore. These people should learn to adjust their behavior.

Now that this is out of the way, I really wonder what he meant with:

> For techies like us, a double-click happens by accident.

What kind of techies is he talking about? Because I'm using the PC many hours each day and I don't just accidentally double-click stuff. Maybe it's the environment they're working with, but for me I'd say I do a larger portion of single-clicks than double-clicks per day, and if a significant percentage were accidental double-clicks I would have more issues than just a site that opens twice.

And last but not least I'd say a big percentage of my clicks on the web are middle clicks anyways, i.e. open link in a new tab. Something that wasn't even looked at, which again makes me wonder how the author is using the web.

What we can learn from this, however, is that websites should add checks to prevent double submissions of forms. And provide proper feedback if e.g. an AJAX submission failed or if it's still sending data.


This makes no sense. The examples he gives of "double-clicks everywhere" are all instances of the same thing: items that are selected with single-click and opened with double-click.

We're not "trained to double-click anything", unless some particular trainer is extremely misguided.

Double-clicking is used when there is some action available for an item in addition to the single-click action, where the double-click usually leads to both actions taking place: we single-click to select files and double-click to open them, single-click the title bar to activate the window and double-click to maximize, single-click the window menu to view the menu and double-click to close the window (which is one of the options in the menu).

Some examples of things we only single-click: buttons, scrollbars, tabs, menus, dropdowns, text (for selection).

Single-click is everywhere. Everything that can be double-clicked can also be single-clicked. Single-click is the default. If a single-click only leads to an item being selected, that's a pretty good hint to try double-click if you also wanted it to be opened.

(Incidentally, what do those who double-click everything do when they want to select a file? Do they just open it, close it again (leaving it selected) and accept that as normal?)


Self-guided experience "trains" many people, and some of them infer a rule that double clicking is for opening things. Older people especially can fail to notice that many widgets respond to their first click and continue with the same action on the second click, so their behavior doesn't stand much chance of correction.


Are double click actions still a thing? Windows has had an option for single-click mode since forever, and most users I know use it – mainly because they're used to single click from the web. The remainder and most Mac/Linux users don't even bother and navigate with the keyboard.

I can't remember the last time I had to double-click anything, except for text selection.


There's something particularly ugly about people who feign ignorance of the way people less technical than them do things, in order to make themselves sound more elite.


I get annoyed with this sort of unawareness too ("don't these people ever hang out with their families??"), but you're reading uncharitably. I see it as an unawareness of "how the other half lives", which can be overcome through self-awareness and active observation, rather than feigned ignorance to sound elite.


It's not being less technical: using double click simply means you did desktop computing in the mid nineties on Windows. Anyone younger simply doesn't have that memory.


Or Macs... the "single click to select, double-click to apply default action" occurs both in Windows and MacOS.


OK I'm wrong. I use a Mac every day and thought they didn't do double click. Apparently it's a somewhat subconscious action.


Wasn't that the whole point of the article? How many times did he point that out? More than two:

>Everywhere in the Operating System, whether it's Windows or Mac OSX, the default behaviour to navigate between directories is by double-clicking them. We're trained to double-click anything.

>Want to open an application? Double-click the icon. Want to open an e-mail in your mail client? Double-click the subject. Double-clicks everywhere.

>We know we should only single-click a link. We know we should only click a form submit once. But sometimes, we double-click. Not because we do so intentionally, but because our brains are just hardwired to double-click everything.

>For techies like us, a double-click happens by accident. It's an automated double-click, one we don't really think about. One we didn't mean to do.


Yep, I read that and thought it was wrong. Double clicks require two clicks within a matter of milliseconds and they're really hard for novice users. I'd read somewhere that Macs didn't use them - evidently that's wrong, and I'm double clicking constantly but not realising.


Following a link and opening a directory are distinct enough in most people's minds to not confuse the two. That's why hyperlinks are normally underlined, colored, and give you a different mouse cursor.


I disagree that you can make such a sweeping statement without evidence, having worked on HyperTIES [1] [2], an early hypermedia browser and authoring system with Ben Shneiderman, who invented and published the idea of underlining links, and who has performed and published empirical studies evaluating browsing strategies, single and double clicking, touch screen tracking, and other user interaction techniques.

Hyperlinks do not necessarily have to be triggered by single clicks. In HyperTIES, single clicking on a hyperlink (either inline text or embedded graphical menus) would display a description of the link destination at the bottom of the screen, and double clicking would follow the link. That gave users an easy way to get more information on a link without losing their context and navigating away from the page they were reading. Clicking on the background would highlight all links on the page (which was convenient for discovering embedded graphical links in pictures). [3] [4]

The most recent anecdotal evidence close at hand (in the sibling and grandparent comments to yours) that it's confusing is that nailer did indeed confuse double clicking with single clicking in his memory, not remembering that he subconsciously double clicks on Macs all the time.

I would argue that much in the same way the Windows desktop gives users an option to enable single-click navigation like web browsers, web browsers should also give users an option to enable double-click link navigation like HyperTIES, so a single click can display more information and actions related to the link without taking you away from your current context, and a double click navigates the link. (Of course in the real world, scripted pages and AJAX apps probably wouldn't seamlessly support both styles of interface, but double click navigation could be built into higher level toolkits, and dynamically applied to normal links by a browser extension.)

In order to make a sweeping statement like "Following a link and opening a directory are distinct enough in most people's minds to not confuse the two" you would have to perform user testing -- you can't just make up statements like that without any supporting evidence. Can you at least refer me to some empirical studies that support your claim, please?

[1] http://www.cs.umd.edu/hcil/hyperties/

Starting in 1982, HCIL developed an early hypertext system on the IBM PC computers. Ben Shneiderman invented the idea of having the text itself be the link marker, a concept that came to be called embedded menus or illuminated links. Earlier systems used typed-in codes, numbered menus or link icons. Embedded menus were first implemented by Dan Ostroff in 1983 and then applied and tested by Larry Koved (Koved and Shneiderman, 1986). In 1984-85 the work was supported by a contract from the US Department of Interior in connection with the U.S. Holocaust Memorial Museum and Education Center. Originally called The Interactive Encyclopedia Systems (TIES), we ran into trademark conflicts and in 1986 changed the name to HyperTIES as we moved toward commercial licensing with Cognetics Corporation. We conducted approximately 20 empirical studies of many design variables which were reported at the Hypertext 1987 conference and in array of journals and books. Issues such as the use of light blue highlighting as the default color for links, the inclusion of a history stack, easy access to a BACK button, article length, and global string search were all studied empirically. We used Hyperties in the widely circulated ACM-published disk Hypertext on Hypertext which contained the full text of the 8 papers in the July 1988 Communications of the ACM.

[...]

Today, the World Wide Web uses hypertext to link tens of millions of documents together. The basic highlighted text link can be traced back to a key innovation, developed in 1983, as part of TIES (The Interactive Encyclopedia System, the research predecessor to Hyperties). The original concept was to eliminate menus by embedding highlighted link phrases directly in the text (Koved and Shneiderman, 1986). Earlier designs required typing codes, selecting from menu lists, or clicking on visually distracting markers in the text. The embedded text link idea was adopted by others and became a user interface component of the World Wide Web (Berners-Lee, 1994).

[2] http://www.donhopkins.com/home/ties/LookBackAtHyperTIES.html

Designing to facilitate browsing: A look back at the Hyperties workstation browser

Ben Shneiderman, Catherine Plaisant, Rodrigo Botafogo, Don Hopkins, William Weiland

Human-Computer Interaction Laboratory, A.V. Williams Bldg., University of Maryland, College Park MD 20742, U.S.A.

[3] https://www.youtube.com/watch?v=fZi4gUjaGAM

University of Maryland Human Computer Interaction Lab HyperTIES Demo. Research performed under the direction of Ben Shneiderman. HyperTIES hypermedia browser developed by Ben Shneiderman, Bill Weiland, Catherine Plaisant and Don Hopkins. Demonstrated by Don Hopkins.

[4] https://www.youtube.com/watch?v=hhmU2B79EDU

Demo of UniPress Emacs based HyperTIES authoring tool, by Don Hopkins, at the University of Maryland Human Computer Interaction Lab.


You can't just request informed studies without providing funding.


Of course I can. I said "Can you at least refer me to some empirical studies that support your claim, please?", and I provided links to informed studies that I and other people published.

But I'll humor you: How much funding would you suggest that I should offer him to look up some proof of what he said on google or wikipedia? And how much money should I have asked him to pay me for the information I gave him for free?

I didn't realize it was customary to pay people for supporting their statements with evidence on Hacker News. Can you please refer me to the section of the FAQ about that? Or do you have Hacker News confused with Kickstarter or experiment.com?


I'm a GNU/Linux user for almost a decade now, sysadmin, programmer. I still double click things every time I'm using a Windows desktop (not on the web of course), as does everyone I know here in Brazil.


"Are double click actions still a thing?" Yes, and he knows it. He's feigning ignorance because we all know the Mac and Windows desktops both require double clicking by default.


Really? I would be surprised if anyone I know is even aware of the fact that Windows ever had a single click mode let alone how to turn it on. And the mac users I know certainly don't navigate with the keyboard.


We single click links just like we single click buttons. Form buttons on a browser should behave like regular buttons in the operating system: single click.

Clicking an item in a list should select the item. Double clicking an item in a list should open it. This is the part that might be broken in the web browser because I see check boxes and all sorts of other UI gymnastics to enable the functionality of how a list works in file browsers, and email.

There are a few other UI things in the web browser that I think are slightly broken. These UI things are a legacy from the original web browser. For example, check boxes and radio buttons separate the text from the check/radio box, but I think you should be able to click on the text to toggle the box -- yes, I know there are workarounds using JavaScript, but basic UI should be consistent.


There's a tag, `label`, made for connecting text and input elements. Any correct form should use labels, and it'll work in any browser just as you described: clicking on the text label will trigger input activation (toggle a checkbox or focus a text input).
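For example (minimal markup, field names made up):

    <!-- wrap the input in the label... -->
    <label><input type="checkbox" name="newsletter"> Subscribe to the newsletter</label>

    <!-- ...or associate them explicitly via for/id -->
    <input type="checkbox" id="newsletter" name="newsletter">
    <label for="newsletter">Subscribe to the newsletter</label>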


I work in tech support and I have literally never come across a user using single click mode in windows, nor does anyone I know use it.


Double-clicks everywhere.

I disagree. By far the most actions are performed with a single-click, double-clicks are mostly for one very specific use case - having a container control and you want to select one of the children and perform the default action on it.


Interesting problem and valid questioning, but flawed reasoning.

"because our brains are just hardwired to double-click everything." That might be true for people who formed muscle memory in the golden days of the mouse and heavy OS use (as opposed to heavy internet use).

An acceptable solution has to anticipate the muscle memory we are generating in legions of mobile users: tapping.

If we apply the author's central argument to today's mobile influence, then who will have their brain wired to double-tap instead of just tapping?


Browsers do not provide immediate visual feedback on some things for which they should. When you click on a link which exits the page, something visual should happen immediately, long before the new page loads. The browser knows you're leaving. Dimming the page, or some other visual transition, would be a good start. That would make a clear distinction between page-exiting actions and ones which keep the page active.


There used to be much better feedback with the "throbber", i.e. the big animated Mosaic/Netscape/Mozilla logo. But that seems to have gone away with the decrease in size of the toolbar buttons.


Here's an example of what happens when you try to treat single- and double-click as the same thing (which may or may not have been the intent in this particular case).

In Android 5.0, swiping from the top brings up the notification menu and a two-finger swipe brings up the settings shortcuts. From the shortcuts screen, the back button brings you back to the notification menu. Another press of the back button brings you all the way back to where you were.

So to quickly go back after bringing up the shortcuts screen, you would naturally press the back button twice in quick succession. But that's a lot like a double-click, and if we want to treat those as single-clicks ... it has to ignore one press and just bring you back to the notifications screen instead. Which is exactly what happens.

So to actually go back twice, which is a very common and natural thing in this case, you have to go back ... then wait ... wait ... and then finally go back again.

> If the same form submit has been registered by the browser in less than 2 seconds, surely that must have been a mistake and would count as an accidental double-click?

Or maybe they're using POST requests to control some process which they get feedback from through another channel. They could be rotating pieces in a Tetris game – should I have to wait for 2 seconds between my two identical commands for rotating a piece 180 degrees to the right?


Backward compatibility? Some click targets are meant to be clicked more than once, and can you guarantee that none of those targets may inadvertently be swept up in a change like this?

I'd flip the proposal around, and go for:

  <form action="/something" onlyOnce>
  ...
  </form>
The problem then is for the hung responses which still happen frequently enough; the typical hack to unfreeze a request is to mash the link/button a second time. Now, would the user have to hunt for the "Cancel" button (an increasingly diminishing target) before being able to click Submit again?

Since you can't guarantee a client will have this behavior, you have to plan for it anyway on the server. I think that's why features like this tend not to be deployed. Features that might help most of the time tend to be beaten to death by the people who keep insisting that they don't completely and entirely solve a problem, so somehow that means it's better not to have them at all, i.e. false hope.


never a good idea to camelCase html attributes :) (in case you didn't know, if your attribute is written as onlyOnce, this won't work: $('form').attr('onlyOnce'), but this will: $('form').attr('onlyonce'))


Mixed case in attribute names is perfectly valid HTML, and tag and attribute names are case insensitive. If jQuery attribute selection isn't, then maybe it should be considered a bug in jQuery.

That's not to say that it isn't a bad idea, but I don't think the inconsistent behavior in jQuery should be considered in arriving at that conclusion.


no it's not valid html.. that's like saying writing gibberish is valid HTML. You can't read mixed-case attribute names, neither in jQuery nor in JavaScript. It's always converted to lowercase without you knowing.


Yes, it's valid HTML. It's not at all like saying that writing gibberish is valid HTML.

Why would you say these things without looking it up in the w3 reference (http://www.w3.org/TR/html-markup/documents.html#case-insensi...) or just trying it out (http://5ccf7f9c97075d1b.paste.se)


Is it me, or why is a setTimeout needed in this example?

    $(document).ready(function () {
      $("form").submit(function () {
        setTimeout(function () {
          $('input').attr('disabled', 'disabled');
          $('a').attr('disabled', 'disabled');
        }, 50);
      });
    });


50 should be a constant called NEXT_TICK with a value of 0. This makes it run on the next event loop tick, i.e. the default form submit event fires first, and on the next tick afterwards the form is disabled.


You are looking for setImmediate[1].

EDIT: Oh, never knew that this method was a Node-ism only.

[1] https://developer.mozilla.org/en-US/docs/Web/API/Window/setI...


Actually it's common enough that

    var setNextTick = function (nextTickFunction) { window.setTimeout(nextTickFunction, 0); };
...sounds like a damn good idea.

Hate the name 'setImmediate' though, immediate implies the current tick and people would wonder why you're using it.


Some browsers will not submit the form if you disable the submit input inside the submit event.


Was asking myself the same thing. This would make sense if your submit button will actually cause an ajax request that you might want to repeat. But actually submitting the form will cause a page reload either way, or not?


Only one click via the dock; double click is only for when you want to browse with your keyboard, so you can click once to select a starting position, or for selecting if needed. But you don't really need that in a browser; for the odd occasion you have a file browser, it's currently easy enough to implement if wanted.


I allow multiple form submits on my sites. What I don't do is allow the same form to be submitted twice if the checksum of values it is submitting is the same as a prior submit.

This allows for the best balance between preventing double-submission errors, and a good user experience if a user has made an error.

I still de-dupe check on the server too (in case JS is disabled), and in a distributed environment the server submission de-dupe check isn't guaranteed (what if two submissions went to different servers?).

Perhaps if anything should be added to browsers it's simply a checksum of all of the values to be submitted and an option to prevent submission if a prior checksum has been submitted. But even then... this is trivial to do in JS.
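A sketch of the JS side of that, comparing the serialized form values of consecutive submits (using the raw serialization in place of a hash; selectors and behavior are illustrative):

    var lastSubmittedValues = null;

    document.querySelectorAll('form').forEach(function (form) {
      form.addEventListener('submit', function (ev) {
        // Serialize the current field values into a comparable string
        var values = Array.from(new FormData(form).entries())
          .map(function (pair) { return pair[0] + '=' + pair[1]; })
          .join('&');
        if (values === lastSubmittedValues) {
          ev.preventDefault(); // identical values were already submitted: drop the duplicate
          return;
        }
        lastSubmittedValues = values;
      });
    });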

The above only applies to POST; for GET I don't care... it's cached, and perhaps the second request is because the first one stalled (bad mobile network, etc).


Please don't disable links permanently! (as in the included jQuery snippet)

If submission gets stuck and doesn't go through (because mobile networks, public wifi, and "cloud" servers have more failure modes than we'd like…) the user is stuck with permanently-disabled buttons and can't retry submitting.


As an interesting data point, about a month ago Google Docs introduced double-clicking as a gesture.


I hate that. I have single click everywhere, all around in my usual OS... but not in Google Docs because there's no way to configure it.


I don't see the problem. Contrary to the post, double clicking a link doesn't open it twice, and double clicking a submit button doesn't submit the form twice; at least, not on Chrome or Firefox on Linux.


That's true so I assume that the author is talking about ajax requests. I have a rule to always disable buttons that trigger ajax requests until they finish. Not only does this reduce double-submissions, it makes it clear to the user that their click was successful and to wait for the action to complete.


Don't go breaking the word selector.


I don't think this is a problem at all.


I love double clicking in the new google drive file browser!


Browser makers like Mozilla think it's appropriate that holding down F5 should repeatedly pummel the server in accord with keyboard repeat rates. It's like the browser makers are being intentionally hostile towards web servers.

E.g. https://bugzilla.mozilla.org/show_bug.cgi?id=224026 https://bugzilla.mozilla.org/show_bug.cgi?id=873045

There's no excuse, but they don't fix it.


> There's no excuse, but they don't fix it.

Browsers exist for my convenience, not yours.


Is a denial-of-service tool so convenient for you?


That's a poor argument. Literally no one gets DoS'ed by a few guys F5'ing in coordination. If your server is so poorly set up as to allow any small number of IPs to impact it in any way then you are doing it wrong.

When I was getting into nodejs a few years back I wrote a DoS script to kill a site that was scraping content from one of my sites and passing it off as their own. I made it just for shits n giggles in about 5 minutes and I was surprised when it actually worked; their website just went down.

DoS is and always will be easy.


That is a poor deflection of the underlying point. It's absurd to conclude that an obviously undesirable behavior -- however unlikely to pose a problem in reality -- should not even be considered let alone addressed.

It could be as simple as a modest global rate limit on repeated GET requests to the same URL. We could start with 250 msec and see how that goes.

Or it could be as simple as limiting F5 reloads to once per keydown. Let users work for their accidental DoS attacks. :-)


what's absurd is protecting the server from a vanishingly rare accident by changing the client. if you feel you need to be protected, put that protection where it belongs, on the server, where it works against more likely things as well.


It protects the end user as much as it does the server.


So rather than fix your one server to be more robust, every one of the thousands of browsers that exist should change?


Rather than tweak the top four web browsers to be more intelligent towards actual user intent, every one of the millions of web applications should change?


How do you know what user intent is? If I'm holding the F5 key down, what do you think I want to happen? What if I'm running a plugin that does that for me? What if I'm writing my own web crawler or something?

Yes, I expect everyone who writes a web application to make it able to handle server requests reliably. There is no such thing as making a robust request -- a browser is just a browser. It doesn't know enough about your server to know what it doesn't like or can't handle. It also doesn't know enough about the user to know what their intent is.

You are blaming the pipeline for end-point issues.


> How do you know what user intent is?

I can tell you what it's unlikely to be. It's unlikely to be to fire HTTP requests at the arbitrary keyboard repeat of their particular computer. It's unlikely to be to machine gun off HTTP requests so rapidly that no pages ever get rendered.

Or perhaps I'm wrong. Perhaps there's a reasonable user behavior I'm forgetting. Please, enlighten me.

> If I'm holding the F5 key down, what do you think I want to happen?

At a guess, it would be to refresh the page repeatedly. In which case the ideal browser behavior might be to refresh the page at the maximum possible speed that would still allow some kind of visual feedback. Perhaps it could repeat as soon as the previous request hits first paint. That would make sense.


double clicking must die. my parents still continue to double click on almost everything they see — all because their first computer experiences were on my Mac SE. (Mac OS 5 or 6 or 3?)

plus... double click means double RSI no?


Why is this so high on HN? While the premise of the post sounds interesting, the whole story says nothing new and provides no real solution.


I was using that Google 3D plot viewer thing the other day and assumed right-click-dragging would do something for me. It didn't, and it occurred to me that the reason we have this is that Microsoft, Apple, or some other microcomputer vendor once decided people were too dumb to know they have more than one finger.

And now on the subject of double-clicking, it occurs to me that if the one-finger thing hadn't been the way, everything that was ever a double-click action could've just been a single click on the next button over. Which would be simpler.


I think you'd be less dismissive if you were better informed about your topic.


If we're being objective, I think we can agree that having multiple buttons is more basic and more self-evident knowledge than being able to click a button in multiple ways.


?



