I enjoy seeing the latest fad being burned at the stake as much as the next guy, but I don't think we should blow this out of proportion.
Atom is a text editor, and text editors have an insanely high bar to clear in terms of performance and responsiveness. Users will abandon a text editor if the cursor takes a bit too long to move. On top of that, Atom has been criticized for its slowness since the very first announcement. They don't have any margin for error there (and to be honest, I think their technical choice of going for Javascript will be their ultimate downfall, but that's a discussion for another day).
Also, as it was pointed out, Atom didn't really embrace much of React to start with (which is to their credit: always be very conservative when you're adopting a bleeding edge, unproven technology).
I think React has potential. It's at about the same stage of maturity that Angular was five years ago, and if it's as successful, we can expect it to enjoy five years of being the new darling in the Javascript world, until the Next Big Framework comes around.
I'm really enjoying how fast Javascript frameworks and practices are churning; it makes me feel like I'm witnessing the birth of a brand new software field before my very eyes.
React is so different from other frameworks, I feel.
Someone who doesn't know React can basically come in and start working on a large app from day one. It is so much less frustrating than Angular, Backbone, Ember + Handlebars, etc. You can continue to add features to a React application and not slow down.
Also, React isn't unproven. Over 1 billion people use a React application every day (Facebook + Instagram web). Unlike Google with Angular, React is Facebook's baby; the FB team is constantly churning out great additions to React.
I feel, though, that these JS editors should just basically give up for the next 5 years or so. Switching between Atom or Brackets and Sublime or Vim is extremely painful; I can't stand how slow they are. I love adopting new technologies, but I do not have faith in JS applications outside of a browser; they are too slow and lack features.
>Over 1 Billion people use a React application everyday (Facebook + Instagram web)
It's fun to actually see which parts of public websites are written with react by running this in the browser console:
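(A sketch of the kind of one-liner meant - it just outlines every element that carries React's data-reactid attribute:)

    [].forEach.call(document.querySelectorAll('[data-reactid]'), function (el) {
      el.style.outline = '1px solid red'; // highlight React-rendered elements
    });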
Fake test. React components don't always return a DOM element, and not every element shows up all the time; instead, you should download the React dev tools.
There are at least 30-40 top level components on Facebook. Nice try though.
You are of course correct that some React components will not be caught by this single-line test. It is impossible to truly know unless you are the author of the app, and that's the case even with React tools (e.g. you would not be able to identify my server-rendered components that are not mounted on the front-end).
The intent was not to provide a debugging tool - it's just to provide people with limited exposure to React a simple one-liner to help them understand what is meant when someone says Facebook/Instagram use React, and see some components in the wild. To that goal, using a simple test for data-reactid attribute will catch the overwhelming majority of in-the-wild use cases.
I had the opposite impression when looking at the React docs. JSX was intimidating, and getting the quick examples to work resulted in several errors. JSX is of course optional, but the way that Angular just worked, without any nonsense of installing anything, is what really drew me into the framework.
The thing with React, though, is that whilst it's initially a bit weird, once you get the hang of the component lifecycle there's very little conceptually left to learn, whereas with Angular it just keeps getting more complicated.
JSX is simple; it is just HTML-like syntax embedded in JavaScript. I think you may want to give it another try and sharpen your skills with it. It is not difficult to work with if you try.
And on installing things, it is basically what any developer does these days. We install tools for almost anything. Just think of the JSX tools as another tool in your belt to help with the translation. The paradigm shift opens up your palette to new technologies. For instance, you can go from using just JSX to using ES6 with JSX (babeljs) as well, with no added cost, all because you already have your build process set up.
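To make that concrete, here's a minimal sketch (a hypothetical component; assumes React is loaded and a JSX compile step like Babel is in the build) of what the tooling buys you - the JSX gets compiled into plain React.createElement calls:

    var Greeting = React.createClass({
      render: function () {
        // compiles to: React.createElement('h1', null, 'Hello, ', this.props.name)
        return <h1>Hello, {this.props.name}</h1>;
      }
    });
    React.render(<Greeting name="world" />, document.body);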
I will say though that when it comes to abstracting elements, this behavior seems to be implicit across the board (angular directive templates require the same, though they include a somewhat-informative error state).
As someone who has only begun to feel out React/JSX after developing some large, semi-complex enterprise UIs in Angular (with the respective back-end, usually an abstracted REST API), I'm finding the React model to be a lot more intuitive and scalable.
Angular is fantastic until you run up against problems with complex tasks that either must run outside of the digest loop or require manual control of painting/rendering to compensate for the abysmal binding performance.
I agree - I find React has a higher initial overhead than Angular. Frontend web development is complex, so in the long run, the complexities will still show if you make a misstep.
That said, I'm happy with my time so far with React. I haven't gone into nitty-gritty engineering with it yet, due to not having the time to start really coding on my project with it, but it has been fairly simple, relatively speaking.
I'm excited for Angular 2 though - the amount of planning going into it is excellent, and I believe it will also turn heads with its simplicity & performance.
Frontend web development is seeing some exciting changes that should ultimately make our development ecosystem much better to work with.
Intimidating was probably a poor choice of words. I guess odd would be better? Idk. It was just a deterrent for me. I can get over the syntax, but the installation errors coupled with it being weird were just enough to make me drop the tutorial I was working on.
I do hope to start working with React again. It took me a few times to really get into angular so I'm sure once a guide comes out that really gets the details right I'll be fine. Something like this[1] for React would be amazing.
"and text editors have an insanely high bar to clear in terms of performance and responsiveness"
For some reason that sentence really bothers me. Not because our standards are so high for text editors. But because they're so low for damn near everything else.
My first text editor was usable on a machine 300,000 times slower with a million times less RAM. It bothers me that text editor performance is ever considered challenging at all.
For this sort of thing, with a fixed amount of developer time:
- Performance
- Stability
- Actually working
you can pick 2. Also, if you pick performance, 50% of the time you can only pick 1.
Everything is a tradeoff but it's a lot easier to sell something slow than to sell something that doesn't work, and a lot of the performance costs are due to general techniques encoded in these frameworks that reduce bugs.
Another thing is that things do get faster. I think V8 has gotten 10x faster (at least) on real code since its inception. And that's despite JS not having changed.
Cloud9 manages to pull off all 3, and in javascript to boot. I honestly don't notice any difference in performance between it and ST3 (other than initial loading, which is a given seeing as it's web based).
Vim itself is fine, but the plethora of plugins that a lot of people install can drag it down. Firefox has the same problem with extensions. And it's really not the fault of the base package. It's the plugin developers not doing due diligence, and it's the sheer number of plugins making it impossible to test all combinations.
I'm not sure why you would open a log file with vim, but that's beside the point. The issue is with 500-line files. Open one up in Sublime and one in Vim. If you only use Vim you'll never discover what it means to have a performant editor (I've used Vim for ~10yrs and still regularly do).
I have no idea what you are talking about. I opened a 26,000 line file (no I don't want to talk about it). In vim it opened instantly and my scrolling was only limited by my key repeat rate. The scrolling isn't as smooth looking because vim doesn't animate the scrolling between lines. Sublime took 2 seconds to open the same file, and scrolling was smooth.
Maybe you are using too many vim plugins? I'll admit vim plugins are a real problem; the GUI rendering desperately needs to be moved to its own thread.
I think there may be a possibility of the terminal itself being slow. Years ago when I used semi transparent terminals in something like enlightenment it was seriously noticeable when you were trying to scroll through text and it'd take a while to appear. Then I switched to using a simple terminal like rxvt and no transparency effects under ion2 and wow was it fast.
So, perhaps the answer is that it's not the editor nor the plugins that's slow in that scenario. I think the line numbering even causes some slight slowdown. Anything that actually parses text will obviously cause a performance hit too. I notice slowdowns when I use visual selection modes combined with tricky combos of commands. Even without any of that, it might be possible that the version of vim is doing something stupid, like what happens when you open up a 1GB log file in less and hit G. I haven't run an strace to understand the fseek calls being made, but that's been painful without exception in my experience. Perhaps that's what the poster was recalling.
You're right. When editing a large file with syntax coloring, Vim slows to a crawl. I just can't get away from Vim though, as it works so well in terminals.
This is mostly to do with the syntax matching, the regexes used for that, and thus ultimately also the implementation of the regex engine (which was more or less swapped out in Vim 7.4). It bothers me a bit too (but not too much). Perhaps it can be mended; we'll see :).
I love ST; I happily bought a v2 license a while ago, and will buy v3 when it's an option. I just get nervous about the resources being allocated to its development, so I've run Atom almost exclusively for a few months as a hedge.
It's worth noting that if you haven't checked in recently, ST3 development appears to be back on track and proceeding at a pretty good clip. (Having said that, I'm keeping my eye on Atom myself. My big concern with ST3 is that the combination of closed source and single developer gives it a bus factor of 1. I'm hoping Jon follows through on his vague plan to get another developer in this year.)
That is definitely not one of those "pick two" situations. The relationship between those 3 things is quite different; in fact, having one normally implies a better-than-average chance of having the others.
You can choose not to release a feature until it is performant, stable, and working - you just won't release a lot of features. If you have the testing and release procedures that keep things stable and working, testing for performance regressions is easier, not harder.
The classic "pick 2" situation is time, cost, features. Picking any one of those makes the other two harder to do, not easier.
It might have something to do with the demographics. Text editor users are developers, power users, or at the very least very tech-savvy (there is also the other extreme of the spectrum, people who know nothing better, but you get my point). With that demographic, we tend to be more picky (at least I know I am).
The majority of people are content with coffee-making loading times for their OS, and the slug that is Microsoft Word or Adobe Reader.
I'm not a fan, but Microsoft Word 2013 is massively more performant than Atom, at any basic task that both programs do (launching, opening files, find/replace, etc).
> I think React has potential. It's at about the same stage of maturity that Angular was five years ago, and if it's as successful, we can expect it to enjoy five years of being the new darling in the Javascript world, until the Next Big Framework comes around.
React stands astride the line that divides a framework from a library. I believe that, like all good libraries out there which do one thing and do it well, React might stay around for a longer period of time. At first look it might seem heavy and bloated, but it professes a far simpler view of building UIs.
The best part is, just like jQuery, it didn't take me more than a few hours to get comfortable with React and incorporate it into building something, something that could have taken a few weeks with other frameworks. I am not saying there won't be anything better than this but I am sure it isn't just another hyped fad.
> I enjoy seeing the latest fad being burned at the stake as much as the next guy
You're talking about a project developed in Coffeescript.
> I think their technical choice of going for Javascript will be their ultimate downfall,
This is debatable. Editors like Cloud9 have acceptable performance, and web tech allows one to distribute software right in the browser, which fits a huge number of use cases.
> Atom is a text editor, and text editors have an insanely high bar to clear in terms of performance and responsiveness.
This "insanely high bar" has been easily cleared by text editors running on humble home computers for 30+ years. I'm not sure it qualifies as being insanely high.
I think the point is not necessarily that "text editors intrinsically require gigabytes of RAM and gigahertz CPUs", but that they have "insane" latency requirements.
On the one hand, yes, our computers ought to be able to handle text editing. On the other hand, as long as you meet the latency requirements one way or another it doesn't much matter whether you barely met them or utterly blew them out of the water by four orders of magnitude... instant is instant, to a human.
A browser is a tough place to build a text editor. You're running a huge, complicated, sophisticated text rendering and typesetting environment, which you're using only a vanishing fraction of but the browser doesn't know that and can't much optimize for it. (Some, sure, I'm sure it's got special routines for monospaced text, but it still can't know you won't just stick a picture in the middle of it a moment from now.) You're running on top of a fairly slow language, even after all the optimization work on it [1]. You're running in an environment that is deeply structured to be synchronous so if you accidentally write a for loop over anything that turns out to be larger than you expected, you've frozen your environment until you're done, to say nothing of accidentally handing the browser a big rendering job ("did you just open a directory with 20,000 files? here, let me render that for you..."). Any of these things individually might be overcome, but the combination is quite challenging. It's nifty that the browser lets you run "anywhere" but it is also in a lot of ways a crazy place to try to build a programmer-quality text editor.
[1]: I have to justify this every time I say it since somehow the idea that "Javascript is fast!" has sunk into the community, but it's not true. The easiest proof is asm.js... if Javascript was already "C-fast" or even close, it would not even exist. It exists precisely because Javascript is not a fast language. Javascript is much faster than it started out as, but it started out as an extraordinarily slow afterthought meant to run a statement or 4 in response to a mouse click. It has still stabilized on "much slower than C" and appears to have essentially plateaued. The result of speeding up something miserably slow by quite a bit can still result in something slow in the end.
> I think the point is not necessarily that "text editors intrinsically require gigabytes of RAM and gigahertz CPUs", but that they have "insane" latency requirements.
What about the latency requirements is insane, then? I can't think of what "insane" could possibly mean in this context other than technically challenging or infeasible.
> A browser is a tough place to build a text editor.
I agree. To some extent I'd call that in itself an "insane requirement". The merge request proves, however, that you can attain reasonable performance by avoiding additional abstractions on top of the already very abstracted platform. Looking at the call graphs, Javascript itself or even the DOM obviously were far from being the bottlenecks.
"I can't think of what "insane" could possibly mean in this context other than technically challenging or infeasible."
Apparently, responding to keystrokes by putting a character on the screen in less than a couple hundred milliseconds is still technically challenging, or at least, doing everything we want to do within that time frame is still technically challenging for a high-powered editor. Which is less silly if you think about it. A single keystroke in "notepad" is one thing, a single keystroke in a programmer editor is quite another.
"Looking at the call graphs, Javascript itself or even the DOM obviously were far from being the bottlenecks."
If Javascript really was a fast language on par with C++ or something, that amount of abstraction would not have been a problem. The fact that Javascript is slow really is a problem. It is not necessarily the problem, because many other things are contributing, but it is a very significant part of the problem.
You can bring a Java or C++ program to its knees with too much abstraction too, but it takes a lot, lot more work than that. (Many have managed to leap this bar and more, though!)
It's one of the reasons I'm really trying to get the 1990s-style dynamic scripting languages out of my professional life. It really isn't that hard to reach a point where you simply can not have both of "a good design" and "sufficient response time" because you literally want to do more work than the language can get done on one core in 100ms unless you essentially manually inline everything, but that's not practical for its own reasons.
In fact, on that note, note that this is it for Atom. This is the fastest they can go with this layer. Should they end up pushing the editor a bit farther and should they end up needing a bit more performance to do something else properly, they won't be able to, because they just tapped out this well.
Does it really need to happen on one core? The platform supports WebWorkers, and serializing certain events across channels wouldn't take much... if you look at, for example, some of the cross-channel rendering with React that has been experimented with, there's merit there.
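A rough sketch of what I mean (file names and the tokenize step are made up) - the heavy work moves off the UI thread, and you pay only the postMessage serialization cost:

    // worker.js - runs off the UI thread
    self.onmessage = function (e) {
      var lines = e.data.split('\n');
      // ...expensive tokenization/highlighting work here...
      self.postMessage({ lineCount: lines.length });
    };

    // main.js
    var worker = new Worker('worker.js');
    worker.onmessage = function (e) {
      console.log('processed', e.data.lineCount, 'lines without blocking the UI');
    };
    worker.postMessage(bufferText); // bufferText: the editor contents (assumed)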
For this use case, I suspect you wouldn't be able to win much with WebWorkers. You'd have to profile, but having done a lot of profiling with multithreading in several environments I've learned to expect disappointment relative to my initial lofty hopes. It feels like it ought to be easy to get linear speedups and that it ought to take something really unusual and special to slow it down; in practice you're lucky to even get "good" speedups.
Games in JS have had similar requirements, or even tougher ones. Expectations of games have been increasing for a long while, and there are games being built in JS (even full-fledged games using asm.js and WebGL).
So I don't think the latency requirement is that insane; you just need good discipline, which game developers have had for decades.
An asm.js game using WebGL drops the slow language problem and drops the vast bulk of the browser layout problems, leaving only the problem that you're still sorta stuck on synchronousness, but games have dealt with that for years. The fact that you reached for those details to justify your point is further evidence of my point, not denial of it.
It's not javascript. Modern javascript engines (like v8 which powers Atom) are well beyond fast enough. GC can cause occasional latency if the programmer is lackadaisical with allocations, but with care it's a non-issue.
But Atom simply will not achieve performance competitive with Sublime while they are using the DOM. The DOM is too general-purpose for what is almost always just a grid of monospaced text. The overhead introduced by allowing plugins to render entire webpages inline with the text is just too much.
I want so very much to like Atom --- a Free text editor that isn't vim or emacs and is actually powerful enough to replace them, but the barely perceptible latency manifests as a gradual accumulation of stress.
Why not? The browser is something like a mechanism for delivering text, images, layouts, and interactive scripts. It's highly cross-platform. Quite perfect for many uses, and the popularity of web apps is almost evidence in itself.
CSS is a bit annoying, but with evergreen browsers and flexbox it's getting very good—and even as annoying as it is, it's still mostly easier to work with than most GUI toolkits.
I often think of the browser as something like an X11 server. It doesn't matter how old you are; why don't you think it makes sense?
> Quite perfect for many uses, and the popularity of web apps is almost evidence in itself.
Mobile and micro-services will change that.
> why don't you think it makes sense?
Because I went through the evolution of using text based GUIs, Amiga, Atari and PC GUI toolkits, drawing UIs with the likes of Visual Basic, Turbo Pascal, Delphi and C++ Builder. To the modern ones of XAML, Cocoa, QML and so on.
The list is just a bit long to post here. Most of them provided a saner developer experience than HTML/CSS/JavaScript will ever do. Unless we throw DOM away and replace it by something more tailored to applications instead of rendering documents.
Even if the DOM gets replaced, the browser can hardly provide an integrated experience in terms of what any native desktop/mobile environment offers to its users in terms of immersion and interactivity between applications.
It is just no different than using Java with AWT 1.0 in the early years without native widgets and integration to the respective desktop APIs.
Hmm, interesting. I've used a few GUI toolkits, written bindings for GTK+, worked with Java and Swing, etc. Modern web development, especially using React, is in many ways the sanest thing I've come across.
It reminds me of stuff like Emacs, Lisp systems, Smalltalk, etc. Inspectable environments using dynamic languages... But React's model for components and DOM elements seems to me in many ways easier and better than Emacs buffers or Smalltalk widgets.
I use browser developer tools a lot for changing scripts and styles interactively, and while it's nowhere near perfect, it's often extremely useful and convenient, and I think the model has lots of potential.
> It reminds me of stuff like Emacs, Lisp systems, Smalltalk, etc. Inspectable environments using dynamic languages
In a way yes, but only if we can get rid of DOM and have 1:1 parity with desktop APIs.
One ends up piling up div elements with CSS and JavaScript to imitate something that looks like native widgets, but doesn't behave like them, nor can it interact with the desktop environment.
I find Qt5's QML to be a million times easier to use, read, debug, maintain. And that is because it is designed as a markup for applications. HTML was never designed for that. Neither was CSS. This is the reason for all this incredibly inefficient HTML webapp madness.
The DOM's main advantage is supposed to be accessibility and portability to different form factors; otherwise, just roll your own UI using Canvas (like I do using WPF canvas anyways for my editor work). It's not clear to me how accessibility could be preserved while allowing for more flexibility.
Mobile and micro-services will change what? Even then, microservices are an architectural pattern; they're not inherently tied to a platform, whether it runs native or in the browser.
Many native applications in the mobile space are using microservices with a native UI, as a means to provide an integrated user experience with the respective platform features while keeping the business logic portable.
Without having installers that include adware (Java) or that seem borderline-unsupported (X11/Quartz) or that require funny multi-stage compiling (Qt).
> or that seem borderline-unsupported (X11/Quartz)
Since when is Quartz unsupported?
> or that require funny multi-stage compiling (Qt).
Since when do users compile GUI frameworks?
Cross platform UI is nothing new, there are much more options to choose from than those you listed, and browsers can hardly offer more than a 90's GUI experience.
I don't like that it's this way, but the web is a (mostly) open standard for GUIs that is supported virtually everywhere that has a GUI at all.
Cocoa and .NET produce good interfaces but are proprietary and only work on one platform. GTK+ is Free and "works" on all desktop platforms, but anyone who's used a GTK app on Mac can tell you how awful it is (really the only place it's great is on GNOME-based Linux DEs).
Qt is Free, and it has fairly good support on all platforms (even mobile), but making the interfaces look and work great on all platforms requires the developers to put a lot of work into custom UI elements (see how Qt programs look on GNOME or OS X when they use the default UI elements). It also requires developers to use C++, offensive to programmers who prefer hosted languages (although arguably better for those planning to port to mobile). But the C++ it uses is so far removed from non-Qt C++, even requiring compiler plugins, that many C++ developers have qualms with the language it uses.
The web is programmer-friendly, cross-platform and Free. Writing a desktop app using web tech means it's relatively easy to port it to ChromeOS (or FirefoxOS if that becomes popular ever) or an in-browser app. It has the lowest performance of any of the mentioned technologies, which is only relevant for especially performance-sensitive apps (text editors) and mobile.
Really, Qt wins out on technological merit, but the web comes in a close second, and there's actually a large supply of developers who know what they're doing with web tech.
With Qt5, you write your interfaces (and optionally can even write all or most of your app) in QML. I tried it out, and it is such a breath of fresh air, compared to HTML/CSS. You can achieve the exact same thing with a small fraction of the resource consumption.
Most native applications use a language runtime and most GUI ones use a general-purpose declarative layout engine. Put those together and you basically have a browser.
Except their GUI (if well made) is much better at interop, handles tons of shortcuts that are forgotten in webapps, and native apps don't have to bother with crazy amount of limits browsers have "because security". Like Chrome not letting you bind to half of the Emacs shortcuts because some evil actor could use them on a webpage to phish you.
It's the closest thing we have to a general purpose, cross platform application engine that pretty much runs anywhere with a GUI interface (for anything updated in the past 3-5 years).
Browsers are more easily available on every major, and most minor platforms than any GUI toolkit out there... Things like atom, brackets and similar make sense. They translate well between standalone app in an OS, or a platform app for the likes of ChromeOS, or as a SaaS app in a browser.
No, they aren't the fastest, lightest or best performing. They don't even have the smallest codebases... what they are is broadly available, with minimal variance and a lot of developers with most of the knowledge needed to maintain them.
Using CSS magic tricks and JavaScript to make an unordered list appear like a menu or a toolbar just feels wrong, just as one example from many.
I liked the way XHTML was going to rework the semantics and pave the way for proper components; sadly, it never happened that way.
Anyone experienced with XAML and similar layout engines can see how much better the browsers could be, if people wouldn't insist into binding a document model into an application engine.
Even HTML 5, which is still a problem to support properly across multiple devices without a "debug everywhere" attitude, has less widget support than a 90's GUI.
Completely agree. The fundamental problem is that HTML and CSS are the wrong tools for webapps. They are designed for documents. This is why all these CSS rules are so hard to use for applications. It is built on the model of a document, not a user interface.
The situation is immensely worse on machines like the Raspberry Pi or the BeagleBoard. People have drunk the HTML5 kool-aid and now want to run HTML5 on such machines. When you see how these boards choke when running stuff like an asteroids clone in Firefox or Chrome, you weep. Write the same thing in C++, or in something like XAML/XUL/QML/Enlightenment Edje, and it is super smooth.
My favourite would be something like QML, but with Lua instead of Javascript (but a Lua variant where the array indices start with 0, not with 1). Lua is much easier to accelerate properly, as LuaJIT has shown.
So use a component adapter, like polymer... That said, I wouldn't mind something closer to XAML or Flash/Flex that was an open specification, with a cleaner language implementation (though, I like JS)...
The trouble is actually getting a cross-platform rendering engine working everywhere that HTML/HTTP already works... So, you go build it, get it working at LEAST on Windows, OSX, Linux, Android and iOS... then we'll see how adoption goes. More likely you'll see React* bindings take off with generators to whatever platform is targeted, with JS as the language in use.
I don't want to spend money on the side projects I'm just playing with, but it would be nice to have them cross-platform (including mobile). Do any of those options still apply? IIRC Xamarin is expensive and there's no free way to run Java on iOS? All the SDL-based GUIs I've seen have looked awful.
Qt would be lovely - what does using it from Android Java look like? I remember messing with the Qt Java bindings a while ago and finding them a bit ropey.
So much this. The DOM is absolutely going to be the bottleneck more than JavaScript. I can't really provide any concrete benchmarks to support this because it's a very complex topic, and you can really only come to that conclusion after trying to optimize the DOM for something as critical as a text editor.
It can be strategic even if limited to quick edits and to some subsets of the repository for now. Anyway, I can think of a GitHub augmented with an IDE and integrated with a CI and CD system, with deployment to a Heroku. It could let you write code from pretty much everywhere, occasionally even on a tablet or a large phone. Somebody will use it.
GitHub has been slowly tricking me into doing more and more of those quick edits on their site. If they can work Atom in as seamlessly as their other updates I'm sold. Somehow I'm still circumspect.
I use web-mode to edit JSX. It's not bad at all actually - the indentation and highlighting are there. I've started using web-mode for its intended purpose, too (when editing templates that have CSS and JS snippets).
As someone who never really jumped into code until very recently, I find Atom far and away the most accessible text editor. Familiar interface (to a browser) and not overwhelming to set up. I've tried emacs, vim, and sublime. I'm sure that I could come to love emacs or vim, but it just seemed like too much messing with it before I would really enjoy it. Sublime is better, but I just prefer Atom. Guess that makes me some kind of radical minority here :)
Actually Sublime was the first one I tried. And I liked it quite a bit more than vim or emacs, perhaps due to being a noob. But I still preferred Atom to Sublime.
> I'm really enjoying how fast Javascript frameworks and practices are churning, it makes me feel like I'm witnessing the birth of a brand new software field with my very eyes.
I totally agree. A language or technology that doesn't change is dead. The JS world can get a little crazy sometimes, but we need to understand WHY certain frameworks/patterns may be better and when to apply them.
Kudos to the Atom team on making their product better.
The problem with React is that it doesn't improve the computational complexity of the problem. In fact, it makes the situation worse, since every little update is now O(n).
Modifying the DOM is usually the bottleneck in web apps. To get a fast app (extreme simplification), you need to apply only the minimum set of mutations. It turns out that in a large codebase this is extremely hard to do. React asks the developer for a virtual representation and computes the diff against the previous one. In most situations, the time it takes to compute the set of mutations is negligible compared to the cost of the unneeded DOM mutations it removed.
But if you need extremely high performance and your problem is small (e.g. the text editor part of Atom), you can write specialized code that computes the minimum set of mutations yourself and not pay the cost of React. That's also why you see so many small benchmarks beating React whose wins don't translate to real applications.
Now, it doesn't yet mean that you should drop React. The great thing is that you can make a React component <TextEditor> that itself uses manual DOM operations to be super fast, and the rest of your app uses React for its wins.
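A minimal sketch of that hybrid (the buffer API here is made up): React renders the component once, then the component opts out of diffing and applies its own minimal mutations:

    var TextEditor = React.createClass({
      shouldComponentUpdate: function () {
        return false; // opt out: React never re-renders this subtree
      },
      componentDidMount: function () {
        this.node = this.getDOMNode();
        this.props.buffer.onDidChange(this.applyEdit); // hypothetical buffer API
      },
      applyEdit: function (edit) {
        // compute and apply the minimum DOM mutation for this edit by hand
        this.node.children[edit.row].textContent = edit.newText;
      },
      render: function () {
        // the initial lines are rendered via React exactly once
        var lines = this.props.buffer.getLines().map(function (text, i) {
          return <div key={i}>{text}</div>;
        });
        return <div className="text-editor">{lines}</div>;
      }
    });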
In Atom's case, they also want to support people writing plugins. Now it's not only technical but becomes political. Do you want to force people to use React for writing plugins? What if they want to use jQuery or Ember or Angular?
You also get into dependency issues. React's requirement today is that there can only be one version loaded at a time; otherwise everything breaks. If you update React in Atom core, you run the risk of breaking all the plugins that were written for a different version of React.
Given those, it makes sense to remove React as a dependency from the core. Fortunately, it's still totally possible to write Atom plugins using React.
> React's requirement today is that there can only be one version loaded at a time; otherwise everything breaks. If you update React in Atom core, you run the risk of breaking all the plugins that were written for a different version of React.
Is this still an issue when you override `ID_ATTRIBUTE_NAME`?
I see it as similar to the C vs ASM discussion - usually it's faster to trust the compiler to optimise, but for very specific situations, it's faster to code your own assembly.
> It turns out that in a large codebase this is extremely hard to do.
Underestimating the difficulty of that task (or a structurally equivalent one) is remarkably common. I don't know if it's because devs all think we know better than our neighbor or because we don't accurately gauge the cost of adding new features. In any case, it's depressing.
It looks like they weren't actually using React for much. The diff is +230 lines, and there's close to that much of just new tests. Most of the actual changes are trivial (e.g. @isMounted() to @mounted) and there's not all that much DOM manipulation logic in the end result.
Overall it looks like React guided them in the right direction for how to design their view code, but they don't actually need most of React's functionality.
I haven't looked at the code, but since you have, why were they experiencing so much overhead if it wasn't being used for much?
Was it just a relatively constant amount of overhead that everything using React will experience regardless of how much of React is "used"? Why is the overhead so high?
I haven't read the code, but I can make a guess as to why (at least one reason): Atom is using one web renderer. React supports many versions of many renderers/browsers. Drop all that cruft and just go directly to the 'native' code.
This change doesn't seem to be about removing boilerplate to support multiple browsers, though. They're not implementing their own virtual DOM diff for their "one web renderer", but moving away from this technology.
So React taught them how to update the DOM manually ;) I suspect the case of a text editor is systematic enough so that they can have a specialized and minimal DOM update algo, without too much maintenance cost (which would be high if you try to implement specific minimal DOM update in a random app).
This is really interesting. In the summer of last year, I was looking into various JS libraries to use for an upcoming project at work when I saw the story that Atom was moving to React for their UI, so I decided to take a look. The philosophy really clicked with me and that's what we ended up going with. I don't regret that choice - it's worked out really well for us so far - but it's interesting that it hasn't for Atom.
I suspect that Atom editor is a bit of a pathological case for something like React - a very flat hierarchy with lots of children can result in lots of expensive React renders, then subsequent virtual DOM diffing, for what effectively amounts to appending a character to the text area.
This is absolutely likely to be the case (for now). I admit I haven't looked into the technical issues very much because I've been spending 100% of my time on ReactNative, which seeks to resolve the deepest issues with the browser environment for React development - it's certainly a different kind of performance work.
For this kind of stuff, most people create a highly custom React base class that "cuts right to the chase" as far as updating small pieces that change in large lists. Immutable data structures are often the most helpful tool in accomplishing that. I'm sure they have totally legitimate reasons to go with this approach in the mean time, and most of all I want their project Atom to succeed because it's such a great idea. I hope we can help the Atom team soon to resolve these other issues though.
> Immutable data structures are often the most helpful tool in accomplishing that.
This is said often but without much qualification. I've found mutable data structures with change propagation (via observable or what not) to work much better given that the whole diffing thing can be avoided altogether since you know exactly what has changed.
It is my understanding that the DOM is broken in how it handles invalidation/re-rendering (doing it for each modification rather than batching), but again, I don't see how immutability helps fix that problem any better than just doing it the right way with a mutable virtual DOM?
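For comparison, a bare-bones sketch of the change-propagation style I mean (hand-rolled observable; lineNode is assumed to be the one DOM node affected):

    function observable(value) {
      var listeners = [];
      return {
        get: function () { return value; },
        set: function (next) {
          if (next === value) return;
          value = next;
          listeners.forEach(function (fn) { fn(next); }); // push the exact change
        },
        subscribe: function (fn) { listeners.push(fn); }
      };
    }

    var lineText = observable('foo');
    lineText.subscribe(function (next) {
      lineNode.textContent = next; // the single affected node - no diffing
    });
    lineText.set('bar'); // O(1) update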
Most of the people that don't believe that immutable, persistent data structures are effective tools to increase performance of reconciliation, only put one foot in (that's my personal experience). That kind of reservation often compels people to "just try immutability on a part of their tree". It doesn't work like that and you often should go all in before you begin seeing the benefits. It's a leap of faith, admittedly.
Almost every application (ever) is a list of lists of lists (and so on). Even text can be broken up into paragraphs/code-blocks etc which form the lists. If these structures form a tree, and that tree is somewhat well balanced, then small changes can be found in log(n) time by comparing reference identity without developer intervention (by either a mixin or Om-like system). log(n) ends up being extremely fast for n in the range of graphical nodes in most UI applications. For everything else, a windowing infrastructure can be used (usually baked into a very sophisticated <ListView> component - our ReactNative mobile applications use this approach (special <ListView> component that exploits immutability without developer intervention)).
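In React terms, that log(n) walk falls out of a reference check in shouldComponentUpdate - a PureRenderMixin-style sketch (names assumed):

    var ListItem = React.createClass({
      shouldComponentUpdate: function (nextProps) {
        // with immutable data, an identical reference proves the subtree
        // is unchanged, so whole branches are skipped in O(1)
        return nextProps.item !== this.props.item;
      },
      render: function () {
        return <li>{this.props.item.text}</li>;
      }
    });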
I work in this field and have written lots of code, both mutable and immutable, declarative, OO and functional, to solve a variety UI problems. I've also written my own language-enabled editors using multiple techniques (see comment https://news.ycombinator.com/item?id=9117234 for the one I'm working on right now, but you can see https://www.youtube.com/watch?v=SAoRWmjl1i4 for an Eclipse-based one I did in 2007), so I've definitely got multiple feet in the game.
I don't work in Web, most of my UI code was written for Swing, SWT (Eclipse), and these days WPF in immediate mode. It seems like React is solving a bunch of JS/DOM problems, so maybe my experience doesn't transfer, but I've found that for my work, it is much easier to just go with mutable data structures that support change propagation, so you make a change that affects a line in a block (my current editor architecture, block - lines - more blocks - more lines - etc...), the change is just...O(1) because the line can be damaged/repaired directly! So why would I give that up for O(log(n))?
It's a great question. All of this (immutability) only makes sense to even attempt if you believe that immutability is easier to reason about than mutability. If we don't agree there, then I have nothing more to add really. But assuming we agree, then there is the question of performance.
In most applications, we have three tiers of time durations.
Tier One: During an interaction/animation you must update the screen every 16ms. Code may not run longer than 16ms (less in practice).
Tier Two: You are not interacting, but may begin interacting with something at some unknown time. Code may not block the UI thread for longer than about 50ms so that there is a guarantee of not introducing perceivable delays into the interface.
Tier Three: Long running tasks which should be executed in parallel with the UI thread/process.
If going from O(1) to O(log(n)) still allows you to meet your deadlines for whichever latency tier you are targeting in whichever supported device classes you want to support, then it's worth it in order to program with better abstractions. Blocking for 1ms is as good as blocking for 13ms in Tier 1. Blocking for 25.5ms is as good as blocking for 40ms in Tier 2. (This is helped by a decent event loop/task scheduler, etc.)
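As a concrete illustration of staying inside a Tier Two budget (function and variable names made up):

    function processInChunks(items, handleItem, budgetMs) {
      var i = 0;
      function step() {
        var start = Date.now();
        while (i < items.length && Date.now() - start < budgetMs) {
          handleItem(items[i++]);
        }
        if (i < items.length) setTimeout(step, 0); // yield so the UI thread stays responsive
      }
      step();
    }

    // e.g. processInChunks(bufferLines, retokenizeLine, 50); // ~50ms Tier Two budget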
Again, all of this assumes you genuinely value immutability as a programming paradigm over mutability. If you'd really rather mutate, then you should just be using mutations/observables. I would not rather. Sometimes, I still perform mutations for the most critical parts of a UI abstraction (such as a scroller animation etc) - but I am up front about it being a compromise of what I'd rather do.
> All of this (immutability) only makes sense to even attempt if you believe that immutability is easier to reason about than mutability. If we don't agree there, then I have nothing more to add really.
Ok, that makes a lot of sense. I spend a lot of time trying to make mutability easy to reason about (my research), so I agree on the problems but disagree on solution strategies (fixing mutability with managed time vs. avoiding it). There are plenty of differing and evolving opinions in this field (e.g. back in the early 90s, constraints were going to save us).
I see the main benefit of React as being that there is no need for change propagation - just do a fast diff at the end. That's interesting, and could be an overall win given the constant overhead of change propagation (O(1) but with lots of bookkeeping), but I'm not sure how that would scale in practice. One of the reasons I moved on from declarative UIs (I did my dissertation on one called SuperGlue) was because the paradigm is difficult to scale in expressiveness (let alone performance) beyond simple examples; e.g. I couldn't build interactive compilers bolted seamlessly onto editors for richer programming experiences.
Ah, it's good to know we have an ally against a common foe, and I respect that you're taking a different approach (I suspect you'd be interested in Mezzo). I'd really encourage you to try out React though. It's different from other FRP-type systems because the granularity of "reactivity" is much larger (at the component level), which means the majority of your code executes on plain data structures such as arrays and objects. There's no need to continuously be lifting your data into some "reactive" data container. For example, if you have two numbers that are changing over time and you want to add them, you use the + operator instead of creating some kind of "ReactivePlus" operator.
Also, in some types of apps, when you have many of these point to point bindings wired up, the bookkeeping of them can start to add up too, especially when everything ends up changing any time anything small changes. This hurts two cases predominantly:
1. Your initial rendering of the UI. You usually have to set up these point to point bindings. It would be faster to not have to when blocking the initial user experience (which is critical).
2. When small changes end up rerendering the entire page anyways. This is the worst time to be slow because you already have so much to do! If some small imperceivable delay becomes a medium imperceivable delay, it's not so bad. But when the entire scene changes all the time, a framework like React that anticipates this can cut right to the chase and do the rerendering without also having to do the bookkeeping along the way. Different apps will have different sweet spots in different paradigms. I will say that it's worked out well for us at Facebook/Instagram and many other serious production apps (not just simple examples). I'd encourage you to try it out and give us more feedback since you've done so much research in this area.
and it would compile that into a continuous data binding expression (auto-lifting was also useful for defining shaders in C#; they use similar techniques in JS/WebGL). But you are absolutely right: the bookkeeping was too much, and if you had, say, a chart with 100x100 cells each individually bound, your startup time would really suck. Glitch (my current work) has a much lower cost per state read (where you install listeners), and, like React, does not require lifted operations: you read some state into values, operate on some values, and store the values in some state somewhere else. Each read is traced as it happens, each write is logged as it occurs, and everything in between is just normal code; no lifting is necessary (I think React is like that for reads, and doesn't allow for state writes outside of DOM updates that appear immutable).
The only question now is about granularity, and that is completely tweakable. There are some issues with state cycles, which have to be rejected (otherwise, non-monotonic changes might not work correctly in an incremental context), and keeping state separate helps prevent "false" cycles from occurring, but I'm looking at ways to separate cycle detection with state update granularity, and anyways, none of that applies to React since writes aren't handled in the framework.
What I'm interested in is expressiveness; as a PL person I have a cliche benchmark: can you implement a compiler in React? A compiler, after all, is just a "view" of flat text into some other form (e.g. a typed AST that can be used as a model in a language-aware editor). For React, I think the symbol table would stop you (everything else in a compiler is pretty much functional), but I might be wrong.
I'm definitely interested in React and will keep looking at it. Unfortunately, I don't do any web work so finding a proper context is hard.
Your argument seems to be that immutability is easier to reason about, and therefore you are willing to give up performance to accommodate it.
You stated that in some cases it doesn't matter if you use 1% or 100% of the processing time in the current frame, which I would argue is risky.
The more complex your application, the more expensive handling wholesale changes becomes; unless you have something like React managing change detection, you are going to suffer. From my point of view this is the interesting part of React: you hand off the task of change detection to generic, tested code. However, it's still going to be slower than the alternative for fine-grained changes.
It's also worth noting that in larger applications where you modularise common components, you typically push complexity down the chain. Take a data grid, for example: do you process the data for every cell up front, or only when the cell is rendered? If it's the latter, mutability of that data structure is really important; otherwise one little change is going to cost you far too much.
Holy cow this may be the most salient explanation of these various concerns I've seen.
> Blocking for 1ms is as good as blocking for 13ms in Tier 1.
+1. Let's use all the resources we have available, and also make sure we understand where those resources are coming from. If one of our resources is user perception time we need to manage that just as we manage memory and CPU usage.
If the DOM itself is mutable, a primary philosophical problem I have with layering an immutable abstraction on top of it is that the inevitable leaks from things that lie outside of the immutable abstraction are hard to deal with.
It certainly makes sense to me to use immutable approaches in pure abstractions, but the world is mutable, so you have to be extremely clever to abstract that away.
Also as an application gets more complex, it may be harder to keep the juggling act of mutable structures going, and keeping them as fast as they were before.
Mutable structures have a tax that you often must pay with complexity. Immutable structures and React's declarative approach have more of a flat tax, which means complexity and performance are much more predictable.
> Immutable data unlocks powerful memoization techniques and prohibits accidental coupling via shared mutable state.
Memoization is still more expensive than just tracking changes and avoiding unnecessary recomputations directly (it only starts winning when doing dynamic programming). There is a point on accidental coupling but this is more of a correctness rather than performance issue.
In the high-performance computing field, use of immutable data structures is suicidal; even tries are an order of magnitude slower than in-place mutable data structures. And he only compares against naked shared mutable state, not against managed mutable state with change propagation.
And that gets to the end of the talk: the real reason is they want to avoid using frameworks that already solve this problem by tracking changes, and React somehow avoids that since you can do just the diff post facto. Ok, I get that.
Yes, but immutability means that reference checks are enough to spot unchanged areas. And we're not in HPC land right now, we're writing JavaScript. In that world, Om (immutable) is vastly faster than Backbone, Knockout, Angular, Ember &c (mutable). Partially because, when you can reason more clearly, it's easier to do something about the performance.
Reference checks are conservative in spotting unchanged areas (if true, definitely no change, if false, maybe no change) unless all values are internalized (a bit expensive to do that). Also, diffing is only needed at all when you need to compare values anyways; dirty bits are otherwise sufficient to mark changes.
I'm not really familiar with the web ecosystem, but why would it differ so much from say C#/WPF? or a system based on change propagation instead of diffing?
If everything is immutable then a reference check is all you need. The key is to avoid deep copies. Observables are problematic when changes can ripple through your model/view-model, resulting in multiple DOM changes. This applies equally to WPF.
WPF has a retained scene graph, so no diffs are necessary, all changes are just O(1).
Reference equalities only work to tell you what really hasn't changed; they of course can't tell you that two values are still equal even if their references are different (unless completely internalized, of course). For React that's fine: it's just some extra work if a false inequality is encountered; there are other applications where it's not OK.
> This is said often but without much qualification. I've found mutable data structures with change propagation (via observable or what not) to work much better given that the whole diffing thing can be avoided altogether since you know exactly what has changed.
The problem with this though, is that the entire framework would have to be written around this idea, and everyone who uses the framework would have to use these datastructures correctly. That being said, it will probably be both more efficient and relatively easy to use once Object.observe lands in Ecmascript 7.
In the meantime React works with every datastructure out there, which is a big plus.
Immutable datastructures, while more expensive to change, could be very cheap to diff, because you know if two objects have the same pointer, they're equal.
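For what it's worth, the Object.observe model mentioned above looks roughly like this (already implemented in Chrome; a sketch, not a React API):

    var model = { text: 'foo' };
    Object.observe(model, function (changes) {
      changes.forEach(function (c) {
        // exact change records are delivered - no diffing required
        console.log(c.type, c.name, '->', model[c.name]);
      });
    });
    model.text = 'bar'; // logs "update text -> bar" (asynchronously)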
> The problem with this though, is that the entire framework would have to be written around this idea, and everyone who uses the framework would have to use these datastructures correctly.
That is the problem, and one of the reasons why I haven't ported Glitch (works in C#) to Javascript yet (instead, opting to wrap it up in a new language).
> Immutable datastructures, while more expensive to change, could be very cheap to diff, because you know if two objects have the same pointer, they're equal.
Yes, I use this property a lot in my own code; I'm pragmatic and use both mutable/immutable data structures. That being said, I find it easy to trace changes and do change propagation on a mutable data structure (that can change) vs. an immutable one (which obviously require diffing since you can't know exactly what changed). Using immutable data structures actually make that problem much harder from my point of view, but I see its utility as an easy way to integrate with existing code and programming practices (something I can't offer).
The true speed of using an immutable library for your data in React comes from shouldComponentUpdate. Using immutable structures is really fast, as the deepest comparisons are always O(1) because all you're doing is comparing memory addresses; thus a component can very cheaply work out whether the data given to it has changed, and as little as possible is recalculated.
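A small sketch with Immutable.js showing why those deep comparisons bottom out in pointer checks:

    var Immutable = require('immutable');

    var state1 = Immutable.fromJS({ left: { text: 'foo' }, right: { text: 'bar' } });
    var state2 = state1.setIn(['right', 'text'], 'baz');

    // untouched branches keep their identity, so a component rendering
    // 'left' can bail out of re-rendering with a single reference check:
    state1.get('left') === state2.get('left');   // true
    state1.get('right') === state2.get('right'); // false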
On a funny note. My coworker called it. He said a month or so ago -- In couple of months you'll start seeing articles about "Why we moved away from React".
Wonder what's next. Maybe it wraps around back to jQuery...
Atom is very different from other things. The editor component is something that revolves almost entirely about state, and a huge amount of state at that.
I think the future is, like many abstractions, one where your tighter loops escape the abstraction (like numpy's C bindings). There are still advantages on a big-picture scale to using declarative frameworks like React.
EDIT: one thing is that a text editor can know a lot better how to edit the DOM after an event (like hitting a character) than React's general algorithm
Note that all the live editor examples in the essay are written in Glitch. Think of Glitch as a react like framework that focuses on fixing mutable state through time management rather than avoiding it.
Hi Sean, having skimmed the presentation you linked to, Glitch's programming model seems similar to VHDL and Verilog, in that the "tick" is an explicit construct and statements are not guaranteed to execute sequentially.
I'm not experienced in VHDL, so I might be completely wrong here.
Did you take any design cues or inspiration from hardware design languages?
There is an inspiration from synchronous reactive languages, which are in turn inspired by hardware design (not sure if they predate or postdate VHDL); you can read the related work section of the essay-linked conference paper if you are interested in the lineage.
Glitch is a bit weirder in that all statements execute at the same time within a tick, their order isn't just unfixed: they are guaranteed to see all of each other's effects (except event handlers, which execute more hardware-like discretely to do state transitions).
After having watched so many frameworks come and go, the pattern I've noticed is that it's not just about hotness. It's that the new ideas that the frameworks provide push everyone else into better directions.
Everything becomes familiar. Then someone tries something new. Sometimes it works, sometimes it doesn't, sometimes it promises more than it can deliver. But all the folks who stick with familiar will take the reasons that people leave, and roll their own solutions.
Sometimes a fad framework really is a bust, but more often than not it's a stepping stone and motivation for every other project out there.
They're using an approach that is absolutely inspired by React, for a use case that is inhospitable, pathological even, to a "generic web" approach to performance.
Choosing to not use React itself to implement the editing component of a text editor? Not a big deal at all.
This is not a solid "everyone moving away from React" example at all.
At least until something better comes along, I am going with React. React makes it a lot easier and cleaner to build apps. I built a stock ticker app with changing values that highlights each change. Initially there were a few performance issues, but they were resolved with immutable data and shouldComponentUpdate.
Atom already wraps jQuery - they wrote their own DSL for mutating the DOM called space-pen[1]. They tried to get rid of it later (and rightfully so), but it's too late now.
Given the huge feature gains in vanilla JS over the past few years (and in particular, the past year via ES6), I think we'll see more people shedding frameworks altogether. At least for the common tasks that were accomplished / made simpler by jQuery.
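For instance (the selectors here are hypothetical), much of what people reached for jQuery to do is now a line or two of standard DOM API in modern browsers:

    // Like $('.item').addClass('active'):
    var items = document.querySelectorAll('.item');
    Array.prototype.forEach.call(items, function(el) {
      el.classList.add('active');
    });

    // Event delegation, like $(document).on('click', '.item', fn):
    document.addEventListener('click', function(e) {
      if (e.target.matches('.item')) {
        console.log('clicked', e.target);
      }
    });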
I'm seeing a lot of criticism of Atom's speed and responsiveness, and not much support. I wonder how many of these people are actually using it, and what computers they are using it on?
I tried Atom early, and repeatedly every month or three for a while, and the issue that prevented me from giving it a good trial was poor font rendering (on Windows at least). This issue has been fixed for about a month or more.
I've been using Atom full-time for just about a month now (having previously used Sublime Text 3 and Notepad++) and have no problems. As a programmer the quality of my workstation is fairly important to me, but I think my computer is not a powerhouse: Core i5-4570S 2.9GHz and 16GB RAM. I am fairly sensitive to editor responsiveness - I've tried dozens of editors over the years, and discount most of them for issues that some might consider to be minor, but are important to me - autocomplete responsiveness is a big one.
Atom may not load as quickly as Sublime, but once it's running I haven't had any issues with its performance. I use it for JavaScript and TypeScript programming, and for those purposes it is excellent.
> but I think my computer is not a powerhouse: Core i5-4570S 2.9GHz and 16GB RAM.
An SSD would be more important for editor evaluation, but I'm willing to bet you're in the top 1% as far as personal workstation performance goes. That's an absolute beast of a machine. If you can't make a responsive editor with those specs, just give up.
I know a lot of programmers, however, including some who use Atom happily on specs much humbler than yours. They seem to manage alright. I wouldn't, but startup time and lag matter more to me than to others. As evidenced by the multitude of Atom users, performance isn't that big a deal to a lot of people.
Speaking for myself, of course. TRAMP (especially with eshell and grep), IDO, magit, effortless buffer splits, org-mode, inline execution of elisp.
I am also rather used to the command set now. To the point that it just feels natural to jump around a file with emacs. ace-jump-mode is also good. Even something as simple as subword-mode is awesome.
Really, TRAMP has been the killer feature for me lately. The way it enhances grep results is ridiculously useful.
I like indent-region and the simplicity of C-x b to switch buffers.
I do think Atom has the potential to really go beyond what Emacs has accomplished, because more people know JS/CoffeeScript than elisp, but Emacs is also a moving target and has become a lot better than it was 10 years ago.
Meanwhile, a new Sublime Text 3 Dev build today added enhancements to its minihtml module.
It looks like a race of whether Atom can become ST3 faster than ST3 can become Atom.
The main competitive advantage that Atom has over ST3, IMO, is that it's open source. If Sublime Text 3 were to become open source, that would be a huge win.
Also, that open source ST3 clone limetext [1] written in Go seems to be making progress.
I came here wondering the same thing. Based on the comments above, what I know of the two editors, and the fact that there were some improvements to minihtml and the Sublime Text changelog suddenly popped up in HTML format today, I would guess that minihtml is a lightweight HTML renderer that plugins can use inside Sublime Text. Atom already has this ability by design, so it can do things like draw a usage graph in a side panel, whereas previously Sublime Text could only have text-based panels produced by plugins. If I am correct, minihtml should allow plugins to display more complex user interfaces and data through a minihtml pane.
Do you have a link? I have not heard of this, but I'd be curious to see. If I knew about a ST3-related Kickstarter, there's a good chance I'd give it money (depending on the details).
The problem with anything that has to be compatible with everything is that the overhead necessarily grows to a point where you can't reasonably use it in production code. Not to say people won't still do it anyway, just that if you're looking for a bottleneck, you don't have to look very far. Until frameworks start being broken up into modules with individual functionality (kind of like how everyone tells you to write your own code), there will always be churn like this.
It's kind of sad to me, because we'd be a lot farther along if there were one library that created DOM elements faster than any other and one library that diffed faster than any other. Instead we're stuck with these monsters of frameworks that black-box so hard for the sake of syntax that they completely give up on solving the problems they were meant to.
React and Mithril regenerate the entire virtual DOM of a component from scratch every time; the dirty-checking / diffing then happens in the virtual DOM.
Angular, on the other hand, does the dirty-checking / diffing in the ViewModel after a digest cycle, then does the calculations and updates the DOM elements through markers linked to directives that say they need to be updated ({{ interpolation }} is also done then). Sometimes it just re-sets the innerHTML, but rarely.
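Conceptually, the virtual-DOM approach looks something like this sketch (illustrative only, not React's actual implementation; diff and applyPatches are hypothetical helpers):

    function update(component, newState) {
      var oldTree = component.tree;
      var newTree = component.render(newState);      // regenerate the whole virtual tree
      var patches = diff(oldTree, newTree);          // hypothetical: compare the two trees
      applyPatches(component.rootElement, patches);  // hypothetical: touch only changed DOM nodes
      component.tree = newTree;
    }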
You don't need any of this stuff. Most of your components know how to redraw themselves without re-generating the whole DOM. Your components can just expose a function that you call when you've modified their state atomically.
The question is really about batching all your DOM reads/writes on the next animation frame. For that, use GSAP or FastDOM and be done with it!
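A minimal sketch of that batching idea, assuming FastDOM's read/write API (the element and measurement here are made up): reads are grouped together and writes are deferred to the next animation frame, so interleaved reads and writes can't thrash layout.

    var el = document.querySelector('.panel');  // hypothetical element
    fastdom.read(function() {
      var width = el.clientWidth;               // layout read, batched with other reads
      fastdom.write(function() {
        el.style.width = (width + 10) + 'px';   // layout write, runs on the next frame
      });
    });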
Just chiming in. We switched from Angular to React and saw massive improvements on all fronts: maintainability, speed of development, performance, etc.
The big caveat is that we're using it on PhoneGap, where JavaScript operations are much more expensive - something that takes 5ms on my laptop takes 50-100ms on the phone. So, unfortunately, the "pure React" approach didn't work on some pages because there were too many comparisons. 95% of the app uses pure React and is much faster than before, but in a few very specific cases we have to mutate the DOM manually. I think that's a totally fair tradeoff for all the benefits we got from React - similar to using Python but dropping to C for optimization where necessary.
In 20/20 hindsight, using React for a text editor isn't a very good choice. Text editors, especially ones geared towards fast typists / programmers, are a double whammy in terms of being latency sensitive and needing to open large files.
Atom was already at a disadvantage on both counts, on account of being a Javascript app running inside chromeless Chrome. Making a browser do large things fast is difficult, and React's paradigm of diffing a now-huge virtual DOM on every single keystroke can't really work.
React still works great for smaller, less demanding sites - but it does hit limits on large, complex DOM diffs at high frequencies with low-latency demands.
It seems like they never really gave React a real chance. Although React is just a view renderer, really getting redline performance out of it requires going all in. I think it would have been worth it: a unidirectional data flow is worth the pain. It's so much easier to design something that only ever has to "re-render".
React isn't a magic bullet without shouldComponentUpdate, and shouldComponentUpdate really works best with immutable data, and immutable data works best all in... see where this "all in" keeps going? Basically, Atom didn't want React to dictate their entire app and how all the plugins would have to be rendered.
A question from someone who is not a front-end developer: Would it be possible (and faster) to avoid the DOM and use Canvas to make a text editor (assuming we are happy with one font and basic syntax highlighting)?
Text rendering in Canvas2D is slow. For example, the Canvas-based charting library Flot uses HTML div overlays instead of Canvas font rendering: http://www.flotcharts.org/flot/examples/ (last link).
I think developers are still a little shy about using the Canvas for much. In my own toying with a text editor, I found the Canvas to be pretty nice. The advantage is that you only need to render the visible lines.
With the way developers use the DOM for things like text editors, it's no wonder they hit the limits of browser performance. If we add too much to the DOM, the browser doesn't have the flexibility to render just the visible lines like we can with the Canvas; it has to recalculate everything.
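A rough sketch of what I mean (the canvas element and metrics are made up): the render cost is bounded by the viewport height rather than the file size, because only the lines in view are ever drawn.

    var canvas = document.getElementById('editor');  // hypothetical <canvas>
    var ctx = canvas.getContext('2d');
    var lineHeight = 16;

    function render(lines, scrollTop) {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.font = '14px monospace';
      var first = Math.floor(scrollTop / lineHeight);
      var count = Math.ceil(canvas.height / lineHeight) + 1;
      var last = Math.min(first + count, lines.length);
      for (var i = first; i < last; i++) {
        // draw each visible line at its on-screen baseline
        ctx.fillText(lines[i], 4, (i - first + 1) * lineHeight);
      }
    }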
I was recently surprised when I added 100k table rows with 2 columns in a sample: when I clicked a button to toggle the visibility of some list items next to the table, the browser took a few seconds to finish processing. Since it was a sample test using React, I tried it without React to see whether the slowdown was somehow React's fault, but even without React it performed about the same.
If you notice, Atom has a hard limit of 2 MB on buffers. I just created a test loading almost 2 MB into an Atom tab, and it indeed became unresponsive. It took forever to load and then had problems handling text editing at the end of the buffer.
I think Atom would be better with Canvas. But many JavaScript editors use the DOM, and for 1,000 lines they tend to work very well, which is generally enough for running samples and such. Another advantage of the DOM is that it can more easily embed images and mix text sizes, I guess. So WYSIWYG HTML editors may be better served by a DOM-based editor.
I guess in the future Atom could also add a Canvas editor and keep both.
That's actually the approach Mozilla's Bespin/Skywriter project took. It was shuttered when Cloud9/Ace (which renders to the DOM) supplanted most of its goals. I would be surprised if the performance delta from switching to canvas were actually that large, given that using the DOM lets you harness GPU acceleration for things like translation.
Doesn't canvas have its own GPU-accelerated batched draw operations? All that really needs to be accelerated is font rendering, and I would be very surprised if it wasn't.
I think this is a very good question, and the short answer in my opinion is yes: canvas rendering would make it much faster (as the folks at Flipboard have shown [1]). The downside is that you would have to implement your own rendering engine from scratch, which is quite challenging.
It certainly would be possible - take a look at what is being done with canvas out there! But it would be a LOT of work to get the rendering right; most people who implement "fast" text editors these days are not super-intelligent 10x hyper-programmers, but simply people who prefer to embed native OS text-rendering views instead of inventing odd GUI junk on top of them.
From a user-experience standpoint, I would guess this optimization has little noticeable effect. When you type, 2ms isn't perceptible. But it's a sign of good engineering and a focus on details.
I really don't like the idea of running an editor written in JS - I just tried the latest stable build and it's still very much slower than Sublime Text 3.
I get that they have a lot of JS devs and a big community, but if they had gone with Ruby I would have been much more inclined to stick with it. Performance-wise, perhaps it would not have been any better. Emacs it is for me.
I have a feeling that that's correct, but to the same degree that every great Perl, Ruby, or Python developer can write reasonable code in any of the three given good reference docs. The languages aren't really all that different if you take a macro view (and that includes lisps, statically typed languages, etc.).
Just in case you don't know about it: if you are looking for an editor written, and extensible, in (J)Ruby, there is always http://redcareditor.com/
The project is dead, but the program is a pretty fully-fledged text editor from what I have heard.
Some of the React devs were using Atom during the React Conf. I assumed they did so to have more interaction with apps that use their library to find more opportunities for improvement.
I wonder if they'll be switching back to whatever editor they used before Atom.
Nope, sticking with Atom :) It's awesome because it lets us write plugins with web technologies. Even though they pulled React out of the core, it's still possible to use React to write plugins :)
(1) Atom wants to support a world in which every Atom package can install whatever version of a dependency it wants, including React. This is very common in Node (incidentally, this causes problems if you want to use singletons or instanceof in Node), but fairly uncommon on the Web (where React is primarily used). That is, it's rare that you inadvertently load multiple versions of React in your single-page application. If you did, you would likely get an error when adding one React component as a child of another React component because of the way React vends ids. (Solutions are possible, but none is employed today.)
From Atom's perspective, that is a problem. The only way they can work around it is by providing "one true version of React" with Atom core that all packages must use, but then Atom is forcing all packages to use a particular version of React. That would violate Atom's philosophy of letting each package choose its own dependencies.
(2) This is not just an issue for React, but for any UI toolkit whose components are not guaranteed to interoperate across versions. To sidestep this issue, Atom has currently decided to use the DOM API/HTML custom elements. I would call this the "least common denominator approach," which satisfies Atom's design goals, but fails to provide a higher-level abstraction for building UI in Atom. It's a tradeoff.
(3) React does not currently support the shadow DOM or custom attributes, which is the new direction that Atom has chosen. As React has not yet been evicted from Atom core, I recently upstreamed a change (https://github.com/atom/react/pull/1) to add one-off support for Atom's primary custom elements, <atom-text-editor mini> and <atom-panel>, in the fork of React bundled with Atom. As I develop Atom packages using babel (formerly 6to5) http://blog.atom.io/2015/02/04/built-in-6to5.html, which has default support for JSX, building UI in React has been a lot of fun. However, the lack of support for custom attributes makes it difficult to do things like add an appropriate onChange handler to an <atom-text-editor> to update the React component's state, as shown in http://facebook.github.io/react/docs/forms.html (see the sketch at the end of this comment).
(4) React is still version 0.x.x, which means it has not yet committed to a stable API. This makes choosing a version of React to bundle with Atom an even more uncomfortable decision for the Atom team, assuming they were willing to do so in the first place.
None of these items implies that there is something fundamentally broken about React's model. It just means that the React team has some work to do in order to support Atom's use case. The performance graphs cited in the original post are also significant (and of interest to the React team), but even if the performance problems were fixed tomorrow, that alone would probably not be enough for Atom to pull React back into core right now.
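To make point (3) concrete, here is a sketch of the controlled-component wiring one would like to write against <atom-text-editor> (the component name and event payload are assumptions for illustration); stock React 0.x strips attributes and event handlers it doesn't recognize on custom elements, which is exactly what makes this awkward without the one-off support mentioned above:

    var MiniEditor = React.createClass({
      getInitialState: function() {
        return { text: '' };
      },
      handleChange: function(event) {
        // keep component state in sync with the editor's contents
        this.setState({ text: event.target.value });
      },
      render: function() {
        // <atom-text-editor> is Atom's custom element; without the patched
        // fork, React 0.x would not wire up this onChange handler.
        return <atom-text-editor mini onChange={this.handleChange} />;
      }
    });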
It's pretty common for post titles to be changed (by HN admins). The most frequent change I see is they change the title to match the title of the original linked article/story/whatever, which seems to be the case here.
Why? I can't speak to the reasons in this case, but I suppose when you allow any user to make up a title, they sometimes inject their own opinion or otherwise make the title not properly reflect the intent of the original. Not saying that's what happened here.
Of course it can go the other way. Sometimes original titles are not clear, and the HN title can get preserved or changed for clarity or neutrality.
One editor barely using React doesn't mean it's going to swing back the other way now. React actually has good ideas and breathes fresh air into the JS community.
I remember when Atom first came out; I tried it and left, because I couldn't handle the clunkiness and the performance. I find that with React and my vim editor, I'm happy coding again. If React could just borrow a few more ideas from Angular, I think it would be on the right track to gain even more widespread momentum.
It's not really a surprise that hand-coding something (with someone who knows what they are doing) beats a framework. Frameworks are there to help eliminate duplication (DRY), to help people who are new to coding advance quicker than if they learned on their own, and to add consistency to results. I'd say doing something well without a framework will almost always be faster than using one; it's just hard to find good coders.
As others have said, this is a unique situation for the text editor component.
In a lot of more conventional situations, it's very likely that React's diffing code, tuned for performance over years, will do as good a job as JS you write by hand, while saving you a ton of time by not having to constantly worry about the DOM.
It's really easy to dismiss React. My suggestion would be to give it a try: it's very easy to pick up (unlike Angular) and suits most of the use cases where you would use Angular. A state-heavy code editor is definitely not a best-case use of React.
The latter is more explicit and also declarative. It also gives you a lot more flexibility and is much more efficient than dirty-checking.
React kind of encourages building components the second way. It eschews two-way binding in favor of a code-based rendering approach. The IoC pattern isn't the same as embracing a declarative style where static data contains instructions.
So React is more efficient, and it mitigates the code-based rendering by introducing JSX inside JS. Basically, the templates are declared inside your JS files, much as in Angular you'd have directives.
But then the question becomes: why do you need all the fancy DOM diffing at all? Just fire events when an attribute of some ViewModel changes. Attach event listeners to recompute derived values, then do DOM updates using a library like FastDOM. I guess React could help with the diffing by storing copies of your DOM snippets, but that's usually not the biggest bottleneck. You usually ALREADY store the previous state in JS and can see what changed before rendering. With libraries like FastDOM or GSAP, updates are plenty fast, to the point of powering 60fps animations!
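A minimal sketch of that event-driven alternative (this ViewModel is hypothetical, not any particular library): setting a property fires a change event, and a listener patches exactly the DOM node that depends on it, with no tree diffing anywhere.

    function ViewModel() {
      this._data = {};
      this._listeners = {};
    }
    ViewModel.prototype.on = function(key, fn) {
      (this._listeners[key] = this._listeners[key] || []).push(fn);
    };
    ViewModel.prototype.set = function(key, value) {
      if (this._data[key] === value) return;  // no change, no event
      this._data[key] = value;
      (this._listeners[key] || []).forEach(function(fn) { fn(value); });
    };

    var vm = new ViewModel();
    vm.on('price', function(price) {
      // targeted update: patch the one node that shows the price
      document.getElementById('price').textContent = price;
    });
    vm.set('price', '42.00');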
When one has 500+ pieces of state to manage, it's not just a matter of saving a few keystrokes anymore.
Now, if you're really smart, you'll write a compiler that desugars and inlines everything so you get the best of both worlds: a declarative syntax and production speed.