
This is probably going to send this whole thread on a wild tangent, but here goes...

I've posted this before, but check out this link:

http://www.elevatesoft.com:8081/maxgridtest/maxgridtest.html

That app was created with our web development product (check the site if you want to know more), and has the following characteristics:

Two files, one HTML loader and one monolithic JS app, so latency during loading is minimal. The HTML loader is ~189K and the JS app is ~462K, and includes the entire runtime and UI layer, and a lot of the control/component library. Both the HTML and JS are aggressively compressed/obfuscated by a compiler, and the coding is done in a statically-typed, OO/procedural language with RTTI and other nice things. The UI was designed using a WYSIWYG designer with two-way tools (code-behind).

So, there are products/tools out there that will do something along the lines of what you want. And the existing JS engines are very good in terms of performance, so all that developers like us need to do is some quick compilation to JS and we're all set.

However, I do agree with you on two points:

1) JS, by itself, just isn't structured enough for large-scale applications.

2) The push towards libraries and away from frameworks was misguided because JS, by itself, doesn't have the means to allow for this approach to be successful. Instead, what we have now is every single small library reproducing the same functionality over and over again. Case in point: I was looking at writing an external interface (tells our compiler how to type-check external JS code) to ChartJS this week (great little library), and started looking at the code. 80-90% of the "common" code in the library was code that was already present, in some form, in our UI/runtime layer, and that was around 70K right there. Multiply this by the number of small libraries, and you end up with a lot of duplication of functionality that is, essentially, dead weight. I don't know if it's 10MB of dead weight, but it's pretty significant.




190KB of maxgridtest.html took 1.44 s to download

462KB of maxgridtest.js took 2.63 s to download

5.2MB of datasets?method=rows&dataset=IPCountry&Country=%27United%20States%27 took 21.22 s to load

"39246 rows load in 40948 msecs" - 41secs total just to see something!

This is not an awesome presentation, and it misses some of the stuff the previous commenters mention about "windows 3.1", like "don't load an entire dataset in memory, but show a small slice of it at a time". At the time, memory was very limited, so developers were forced to be efficient, and it was good for users. Even on my old 486 DOS machine, I could load up a spreadsheet application and navigate through it (with way more than 40k rows) at nearly instant speed, because it was smart about what it was doing and what its limitations were (memory, disk access speed, etc.).


Our product is for developing applications, not consumer-facing web sites. It is expected that the first hit on the HTML/JS is going to take more time than normal (we offer progress options for this). After the initial hit, the browser will cache both, and the load is instantaneous until it is updated again.

Also, the total size/download time that you're seeing isn't for just a combo box and a grid, it's for the entire UI layer and a lot of the component library. IOW, as you add more and more to the application, you're only going to see very minor increments in the total size of the application because most of the code is already baked in. Most extensive client-side JS UI frameworks are at least 300K or so, minified.

In retrospect, it was probably a bad idea to post that particular example without an explanation of why that app was created. The whole point of that particular example is to show that you can handle large numbers of rows in JS apps without killing the browser if your UI framework does smart things like implement virtual grids. Many other frameworks don't handle things very well when the number of rows grow beyond 1000 or so, and memory consumption gets out of hand very quickly.
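
To illustrate what I mean by a virtual grid: only the rows in the visible viewport get DOM elements, and they're re-rendered as the user scrolls, so DOM size and memory stay roughly constant regardless of row count. A rough, framework-agnostic sketch (the helper names are illustrative, not our actual API):

    // Minimal windowing sketch: render only the rows visible in the viewport.
    // rowHeight, renderRow, and the .grid-body element are hypothetical helpers.
    function renderVisibleRows(container, rows, rowHeight, renderRow) {
      var first = Math.floor(container.scrollTop / rowHeight);
      var visible = Math.ceil(container.clientHeight / rowHeight) + 1;
      var html = [];
      for (var i = first; i < Math.min(first + visible, rows.length); i++) {
        html.push(renderRow(rows[i]));
      }
      var body = container.querySelector('.grid-body');
      body.style.transform = 'translateY(' + (first * rowHeight) + 'px)';
      body.innerHTML = html.join('');
      // A spacer element sized to rows.length * rowHeight keeps the scrollbar honest.
    }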


After the initial hit, the browser will cache both, and the load is instantaneous until it is updated again.

Ok. That's pretty much the definition of normal HTTP caching policy though.

Many other frameworks don't handle things very well when the number of rows grow beyond 1000 or so, and memory consumption gets out of hand very quickly.

Fair enough. Obviously you've got the UI down as it's butter smooth once everything is loaded.

But many of the popular JavaScript grid systems already have stuff like virtual grids, and have had them for a while (Scroller[0][1] for DataTables was released in 2012, for example). They may not be quite as smooth as yours, but the DataTables example[0] (timing from the moment each has started loading the data until there is some visual data on the screen) is less than 300ms, whereas yours is still over 10 seconds (not a 100% fair comparison, as yours is 5MB of data). I'd definitely rather have people working on the actual product than waiting 30 seconds to do something (and each time the dropdown is switched to another country and back, it's another 5MB download and 30-second wait).
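
For reference, enabling Scroller on a DataTable looks roughly like this (option names from memory, so check the DataTables docs; the endpoint is just a placeholder):

    // Approximate DataTables + Scroller setup; assumes jQuery and the Scroller
    // extension are loaded. '/data-source' is a hypothetical endpoint.
    $('#example').DataTable({
      serverSide: true,        // rows fetched from the server on demand
      ajax: '/data-source',
      deferRender: true,
      scroller: true,          // enables the Scroller (virtual scrolling) extension
      scrollY: '400px'
    });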

we offer progress options for this

Then please, please show this. Showing people a demo of good code doing smart things is a great first impression. For example, a demo of 5 million rows of data via server-side processing with that butter-smooth animation would be amazing. Then, when people are hooked and start asking about doing stuff like loading 100% of their giant files into browser memory, you can impress them more with how there is no slowdown other than the pain they cause themselves by not loading data piecemeal.

But ultimately, any demo that takes 40 seconds to load is just going to be unsavory without a lot of context.

[0] - http://datatables.net/extensions/scroller/examples/initialis... - client side scrolling through 50k records (generated inline rather than downloaded, making for an unfair comparison)

[1] - http://datatables.net/extensions/scroller/examples/initialis... - infinite scroll on server side processing of 5M records


Thanks for the links.

Just to clarify: I would never recommend that someone actually try to load 30K+ rows into a grid in a web application. So, most of the countries not called "United States" are probably more representative of the typical usage of such a grid. The majority of the time spent is not the server request, but rather the loading of the incoming JSON data into what we call a "dataset".

Having said that, we've had a lot of requests for more incremental row loading, so that will definitely be in the works at some point in the future. Initially, we tried to make the "datasets" as dumb as possible, but information about primary/unique keys is available from the back-end, and it is possible to progressively load the rows as necessary (without requiring manual pagination). But, again, loading that many rows isn't a typical use case, and we normally advise against it.
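
To sketch the kind of incremental loading I mean (purely illustrative; the endpoint and parameter names are made up, not part of our product): fetch a page at a time keyed off the last primary key seen, and append the rows as the user scrolls.

    // Illustrative keyset-style incremental loading; '/datasets', 'after' and
    // 'limit' are hypothetical names, not our actual API.
    function loadMoreRows(lastKey, pageSize, onRows) {
      var xhr = new XMLHttpRequest();
      var url = '/datasets?method=rows&after=' + encodeURIComponent(lastKey) +
                '&limit=' + pageSize;
      xhr.open('GET', url);
      xhr.onload = function () {
        // The caller appends these rows to the grid and remembers the new last key.
        onRows(JSON.parse(xhr.responseText));
      };
      xhr.send();
    }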


It might be great for the developers, but frankly, as a user, I find that demo very disagreeable. It breaks multiple useful UI conventions that work fine in native browser components, and it still took 5s to download a dropdown and an empty table (for comparison, the HN frontpage takes <1s here).


Please see my reply above about the loading time.

Also, which UI conventions are you referring to, specifically ?


Also, which UI conventions are you referring to, specifically ?

I'm from Portugal, so I click on the dropdown and press P. The standard behaviour is to jump to the first result, but this one automatically selects it, closing the dropdown from under me and starting an update.

There's no horizontal scrollbar even when the content doesn't fit, so if I reduce the window horizontally, it becomes unusable.

When I click on a row of the table, nothing changes; I must move the cursor to see that it has in fact been selected.

And the text is not selectable.


Thanks for the feedback, it is very much appreciated.

The near-search is just how it was coded (it's responding to a selection by loading the rows). It is a bit off-putting, though, so I'll change that.

The horizontal scrollbar is the same thing. You can specify how you want the surface (body element) to behave with respect to scrolling, and I turned off the scrolling.

As for the clicking, that's the "hot" state over "focus" state preference in the UI layer. We're also thinking of changing that: it used to be "focus" over "hot", but it was changed, and I'm not sure that it was a good decision because of what you describe.


I've updated the example to fix the issues that you mentioned.

Also, the text isn't selectable on purpose. There's an option for our grid control to put it into "row select" mode where it is only used for navigation/selection (similar to a listview control under Windows).


It's definitely much better now, though it still has a weird behavior when pressing a letter to jump in the countries list: if I press P repeatedly to get to Portugal, only 1 in 5 presses actually registers; the others are ignored. It seems there's some "cool-off period" after a keypress.

Still, on a larger point, it's not just important that one can configure the software to behave "well"; programmers are lazy, so the defaults are crucial, and as bad as browsers can be, the basics are pretty much solid nowadays, so it's hard for me to be confident that these will be handled well by new frameworks.

By the way, how is the software on accessibility? Can a blind user understand that there's a dropdown and how to select an option?


Yeah, there's a near-search timeout involved in order to allow multiple keystrokes to be used, and it was set a little high as the default (it's configurable as a property). I've bumped the default from 500ms down to 200ms and that seems to be a better fit.
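
For anyone curious, the near-search is essentially the usual "accumulate keystrokes until a quiet period" pattern; a generic sketch (not our actual property or method names):

    // Generic type-ahead accumulator: keystrokes within the timeout extend the
    // search prefix; after the timeout the prefix resets. Names are illustrative.
    function createNearSearch(timeoutMs, onSearch) {
      var prefix = '';
      var timer = null;
      return function handleKey(ch) {
        prefix += ch;
        onSearch(prefix);               // e.g. highlight the first item starting with prefix
        clearTimeout(timer);
        timer = setTimeout(function () { prefix = ''; }, timeoutMs);
      };
    }

    var nearSearch = createNearSearch(200, highlightFirstMatch); // 200ms default; callback is hypothetical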

Accessibility is so-so. We use text blocks for any labels, grid headers, grid cells, etc., but I still need to audit the UI completely with the ADA helper tools that are available. As for the combo box, the answer there is "I don't know" until I complete that audit, but it probably won't be as usable as our other combo box options that are actual edit controls (you can select text, etc.). The reason for the combo box that you see here is touch environments, specifically those that automatically pop up a virtual keyboard. It's used for situations where you don't want the keyboard constantly popping up, but you need a button-style drop-down list control.

The problem with the current HTML incarnation, and why stuff like the above is done, is that it's just not flexible enough to handle real-world applications (equivalent to desktop apps) without some serious compromises on controls, or by subverting the HTML semantics. We chose the latter. It's the only way to do things like virtual list controls (the combo box has a virtual list control associated with it). If the browser offered a way to define semantics for custom controls, it would help immensely.
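
For what it's worth, ARIA roles and states are the closest thing the browser currently offers for attaching semantics to custom controls; a rough sketch of that wiring for a combo box built from generic elements (the element ids are hypothetical):

    // Illustrative ARIA wiring for a custom combo box built from divs.
    var combo = document.getElementById('countryCombo'); // hypothetical element
    combo.setAttribute('role', 'combobox');
    combo.setAttribute('aria-haspopup', 'listbox');
    combo.setAttribute('aria-expanded', 'false');

    var list = document.getElementById('countryList');   // hypothetical element
    list.setAttribute('role', 'listbox');
    // Each option element would get role="option" and aria-selected as appropriate.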


A few other minor ones: 1. If I click on the background with the mouse so the input loses focus, I cannot use the Tab key to bring the focus back to any of the input elements (in Firefox).

2. Cannot use Ctrl + mouse wheel to zoom the page, but Ctrl + "+" works.

3. The dropdown list opens with any of the mouse buttons, not just the left one (it should not open on right-click or middle-click).

4. Cannot select the data in the list with the mouse (but can select the whole page's text with Ctrl + A).

Also, it's strange that you are talking about fast load times when I see the opposite: 660KB loaded in 4.5 seconds for the initial page load.

Some performance tips:

It should be obvious: use HTTP compression. A quick check shows the total size could be reduced to 27% of the current one! Maybe your custom server ("Elevate Web Builder Web Server") doesn't support it yet? This could help with the HTML and JS load times.
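
If the bundled server can't compress yet, putting an off-the-shelf server or proxy in front of it works too; for example, a minimal Node/Express front serving the static files (assumes the 'express' and 'compression' npm packages; just a sketch):

    // Sketch: serving the static HTML/JS through Express with gzip compression.
    const express = require('express');
    const compression = require('compression');

    const app = express();
    app.use(compression());              // gzip responses when the client accepts it
    app.use(express.static('public'));   // 'public' is a hypothetical directory holding the app files
    app.listen(8081);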

In the case of the countries JSON, it's not the transfer size that is slow but the response time of the server: 600ms for an 8KB JSON is not that fast, especially if the data is not changing and is easily cacheable.

If it's a single-page app, then you could embed the JS in the HTML so all the data comes in one continuous request, and if even the initial JSON data (for the countries) is included, then more than an additional second can be saved; that time is currently spent between the page loading and starting to load that JSON.
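
To illustrate the single-request idea (the structure below is a generic sketch, not your actual loader; names are made up):

    <!-- Single-request loader sketch: the compiled app JS and the initial
         countries JSON are embedded directly, so no extra round trips. -->
    <html>
      <body>
        <script>
          /* ...entire compiled application JS inlined here... */
          window.initialCountries = [{"Country": "Portugal"}, {"Country": "United States"}];
          startApplication(window.initialCountries); // hypothetical entry point
        </script>
      </body>
    </html>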

Also, loading the big dataset by selecting United States from the dropdown freezes the browser, and Firefox shows the unresponsive script warning (maxgridtest.js). There could be some processing in the JS that may not be necessary for the whole dataset; it could be done only on the visible part of it, or on the server.
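
A common way to avoid the unresponsive-script warning is to process the incoming rows in chunks and yield back to the browser between chunks; a generic sketch:

    // Process a large array in slices so the UI thread can repaint and handle
    // input between slices. processRow and done are caller-supplied callbacks.
    function processInChunks(rows, chunkSize, processRow, done) {
      var index = 0;
      function step() {
        var end = Math.min(index + chunkSize, rows.length);
        for (; index < end; index++) {
          processRow(rows[index]);
        }
        if (index < rows.length) {
          setTimeout(step, 0);   // yield to the browser before the next slice
        } else if (done) {
          done();
        }
      }
      step();
    }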

I'm posting this because I can see you care about UX and performance, and I hope it helps you a little.


Thanks for the feedback. I fixed the focus issue, and I'll look into the keystroke/mouse handling.

As for load times, I never stated that the initial load time was fast - I said that the latency was low. As you say, combining the JS into the actual HTML file would improve it even further.

Yes, the server doesn't support gzip yet, but will soon. It's basically our "here's a web server to get you started" web server that we include with the product, but you can use any web server that you want. We actually didn't even plan on including one, but you know how plans go..... :-)

The JSON isn't cacheable, exactly, because it's the result of a database query. So, the response time you're seeing includes setup/teardown time for the database connection, etc. Again, this isn't production-level stuff here, just a one-off coded to show something in particular.

As for the loading of the "United States": the server request is actually very fast, but the loading of the JSON takes some time. We do custom JSON parsing in order to validate the JSON properly and allow for missing column values. We investigated the built-in JSON parsing, but we are still left with the same issues in terms of finding whether certain column values exist in the resultant JS objects, and would end up using almost twice as much memory because the resultant JS objects would still need to be copied into different target structures.
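
To give an idea of what "allowing for missing column values" amounts to, here's a very simplified sketch (the column names and defaults are made up for illustration; this isn't our actual dataset code):

    // Map parsed JSON rows onto a fixed column layout, filling in defaults for
    // any columns the server omitted. Column names/defaults are hypothetical.
    var columns = [
      { name: 'Country', defaultValue: '' },
      { name: 'IPStart', defaultValue: 0 },
      { name: 'IPEnd',   defaultValue: 0 }
    ];

    function loadRows(jsonText) {
      var raw = JSON.parse(jsonText);
      return raw.map(function (obj) {
        var row = {};
        for (var i = 0; i < columns.length; i++) {
          var col = columns[i];
          row[col.name] = obj.hasOwnProperty(col.name) ? obj[col.name] : col.defaultValue;
        }
        return row;
      });
    }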



