I don't know if anyone from the team is reading this, but I'll tell you what I want from a JavaScript debugger:
I want to be able to modify/add code, try it out, then rewind, and try something else. Then save my changes once they work perfectly.
Simply setting breakpoints and watching variables is nice, but I can do basically the same thing by echoing/console.log'ing things. If you are going to go to the effort of making a debugger, then make it do something I can't do another way.
Firefox has done awesome work here, so I apologize for stepping in as a "competitor", but Chrome DevTools can do what you're asking for with the combination of JavaScript LiveEdit and Workspaces for persistence:
That's quite nice, but it's only part of what I was asking for - it doesn't let you rewind. You have to reload the page each time if your edit was wrong.
It's not necessary to save state by each instruction, it's enough that it would be able to restart the current function.
Ideally it could walk back up the stack and restart other functions, but even if it was limited to just the function with the breakpoint that would be huge.
> You have to reload the page each time if your edit was wrong.
Nope. :) Whenever you hit Ctrl-S, the VM is patched with the new definition of the function (well, the entire file's functions, really), and then execution automatically restarts at the top of the call frame. Also, at any time, you can right-click any call frame in the "Call Stack" pane and select "Restart Frame".
> You have to reload the page each time if your edit was wrong.
You'd have to do that anyway. If code is wrong, it corrupts state. Unless you're saying you want it to keep track of diffs of the heap, which might be interesting. But how would it handle, say, a Socket.IO connection that a bad edit terminates? There'd be no way to rewind that.
Maybe it doesn't matter for most work, since most work doesn't interact with persistent objects (sockets, file handles...)
> Unless you're saying you want it to keep track of diffs of the heap
Yup, that's exactly what I'm saying. Keep track of (store) the full browser state before entering a function, and when you rewind restore that exactly.
Local I/O is no problem, remote I/O will simply be restarted. Or it will error. It doesn't have to be perfect to be useful.
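The snapshot-and-restore idea can be sketched in userland for plain application state, with the same caveat the thread raises: only structured-cloneable data can be rewound, while sockets and file handles are out of scope. This is a minimal illustrative sketch, not any debugger's actual API; `runWithRewind` and `rewind` are made-up names.

```javascript
// Hypothetical sketch: snapshot the app state we care about before running a
// function, and hand the snapshot back on "rewind". Works only for
// structured-cloneable state -- sockets, file handles, and DOM nodes are out.
const history = [];

function runWithRewind(state, fn) {
  history.push(structuredClone(state)); // deep copy taken before the call
  return fn(state);
}

function rewind() {
  return history.pop(); // the pre-call snapshot, untouched by the bad edit
}
```

A real implementation would restore the live heap in place rather than return a copy, but even this shape shows why remote I/O can't participate: there is nothing in the snapshot to restore a terminated connection from.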
I've worked with this model a bit (I made a tool to watch the file system and automatically sync changed code into Chrome - plug: https://github.com/aidos/chromesync).
It turns out that corrupting state isn't always such a big issue. A load of what you end up doing in js is event handlers, and by being able to replace those in realtime you can save a load of dev time.
(As ever, when this conversation comes up I complain bitterly about how Chrome only allows a single websocket connection to the debugger so you need to do weird ws proxy stuff to use live reloading and devtools at the same time....)
I also use a similar thing to have live reloading in IPython for my backend work. Honestly, I'm not sure how I ever lived without being able to swap out the code in my functions without having to rebuild all the state. It's not without issues, and you _can_ corrupt your internal state, but in practice it's fine 98% of the time.
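The handler-swapping approach described above can be sketched with one level of indirection, so that synced-in code replaces a function body without re-attaching listeners or rebuilding accumulated page state. `handlers` and `dispatch` are illustrative names, not part of chromesync or any other tool.

```javascript
// Register handlers through a lookup table; listeners call dispatch(), so
// replacing the table entry hot-swaps behavior without touching the DOM.
const handlers = new Map();

function dispatch(name, event) {
  const fn = handlers.get(name);
  if (fn) return fn(event);
}

// In a page you would attach once and never again:
//   button.addEventListener('click', e => dispatch('click', e));

handlers.set('click', e => 'v1'); // initial version
handlers.set('click', e => 'v2'); // later, synced-in code swaps the body
```

The existing listener registration and whatever state the page holds survive the swap, which is why, as noted above, corrupting state turns out to be less of an issue than it sounds.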
Live edit is great. It has its limits, though. As a developer, what I want is to mark some files or libraries for 'MRI execution': whenever a line in those files is hit, it records the variable states, so I can see the callers, the callees, and their states. I could go back in time and visualize call stacks as a flame graph with the parameters that were invoked.
Yes, but I want to do that in my editor, not in the browser.
> try it out, then rewind, and try something else. Then save my changes once they work perfectly.
Certainly I do not want to copy'n'paste the code then back from browser to my editor.
Let me all do this in my editor of choice. The browser is only where I execute stuff, a debugger belongs into the editor/ide.
FWIW, I think it's planned by debugger.html project that the debugger can be used by different editors/IDEs. See the presentation at React Rally (relevant snippet is at https://www.youtube.com/watch?v=Fk--XUEorvc&feature=youtu.be...) showing the integration into emacs or even using debugger.html as CLI tool.
That is, you can already integrate quite well with some editors. I think IDEA has something similar, as well. Most of this is not in creating all new protocols and whatnot, but simply setting up reasonable defaults in tools you are already using.
WebReplay! One of the coolest projects at Mozilla right now, IMO. You can record and rewind the entire state of the browser and do reverse-stepping etc like with rr.
I agree that "live edit" is an exciting feature. It supports a tighter feedback loop that lets you iterate in the correct context.
There are two challenges:
1. Transpiling, which would require a tighter integration with the project's build tools. Not impossible, but something that would require the right UX. Chrome DevTools supports live edit now, but only for non-source-mapped sources.
2. Mapping to project sources. VS Code has done a lot of great work using source maps to map back to the on-disk sources. In theory this should be easy, but there are many build tools and some edge cases with nested source maps -- imagine an Ember app that pulls in Ember libraries which are themselves source-mapped.
With that said, this is definitely exciting, and with the new front-end we can begin to work on ambitious projects like this!
It would be great if 'save' from the devtools could just post the updated version back to server where it came from. Let my server handle getting it onto disk- it knows where it came from in the first place!
That's something the community will simply have to deal with. Transpiling already creates issues when you're debugging code since what you debug is different than what you wrote. Honestly this is one of the biggest issues I have with transpiling everything; you don't have to transpile even though our industry seems to be standardizing on it.
That's because integrated debuggers are a rather crass design error.
Windows doesn't come with a debugger. Linux doesn't come with a debugger. The JVM has no right-click launched debugger. CPython doesn't jump you into an interactive debug console. But all of those have a debug interface.
Leave building debuggers to others who have a clear picture of how they are using your platform.
Linux and Windows come with multiple different debuggers.
Linux comes with strace, kdb, kgdb. You also have gdb by virtue of having a C compiler.
Only with tight integration with the VM/compiler can you get decent debuggers. For example, Smalltalk debuggers are a class above what JS can offer today. The crass design error is ignoring debugging needs until it is too late. Adding the needed hooks and state managers is expensive - just look at how much effort it is now: https://bugzilla.mozilla.org/show_bug.cgi?id=1207696
I'm just looking for an accurate symbol table to be exported in something like ctags format for code navigation. I've seen tools that capture somewhere between about 1% and 9% of large projects. [1]
I know it's a hard problem that can't be solved with static analysis like it can in C. I know people could do:
var x = "do", y = "something", z = {};
z[x + y] = function() {};
...
z[x + y] = (some other function now)
Contextual reassignments, various forms of composability, yep!
This thing is still important. The goal is to reduce the "Wtf is going on here" phase of diving into existing code. I have had to honestly abandon projects and back out of jobs with good friends because I couldn't get to that answer fast enough.
You can do so many cryptic, layered, indirect things with JavaScript that unraveling it is really quite hard - especially if there's a language barrier (as there was in the case I'm thinking of, where I had to resign after 6 weeks, in shame, from working for a guy I had known for 2 years at a previous company).
I tried in vain to build such an introspection tool myself over the course of about 2 weeks, but diminishing returns hit and things got pretty hard. JavaScript can be beautiful and simple - but it can also have cascading chains of action-at-a-distance, with layers of framework scaffolding obscuring the business logic so that the caller and the callee are not 1 frame but 10 or 15 frames apart. [2]
My campaign to encourage people to write code in the plain in a way that hopefully any future junior developer will understand is a lifetime project.
Until I get there, tools like this would be truly invaluable. Back in the C/C++ days, it saved my butt daily.
---
[1] Excuse me for not finger pointing to specific projects. I don't find it courteous or productive. I aggressively surveyed [probably] all of them, including those offered by IDEs, in 2015.
[2] This leads to two easier tools which are
(1) the "stack and step collapser" - essentially a way to tell the debugger to ignore entire directories of code when stepping through code or printing stackdumps. Having to go through stack traces that can be literally over 100 levels with only 1 or 2 being your code is certainly a system we can improve on.
(2) a stack of stacks ... so that if an error is caught, the stack isn't smashed and replaced, but preserved in a second dimensional stack so when an error is just "some error occurred in these 1,000 lines" you can still see the previous stack.
I've often seen stack dumps that are just the framework's error handler (or the error handler erroring), with no indication of the actual source of the error. It's crazy how useless that can be. A stack of stacks would solve this (as would, again, not coding silly things like this in the first place).
I can't imagine I'm the only person who has to deal with these problems on a daily basis. They waste so much time and leave me so flabbergasted as to how to address problems sometimes.
I'm pretty sure the older Firefox debugger had both of your easier tools, and the new one probably does too. "Stack and step collapser" was called "blackboxing". "Stack of stacks", also known as long stacks or async stacks, is definitely implemented at the lower levels (at least for non-primitive exceptions); I'm not sure whether or how they were surfaced in the UI.
Is this expected to be faster than the XUL based stuff? Currently "React" to me makes me think poor performance, simply due to bad experiences I've had with web applications and 'native applications' (read: web browser wrappers).
Surely there's going to be overhead using the remote debug protocol, it looks like it's built on JSON so every response is going to need to be deserialised? I can understand that it's nice to have the official tools using the same APIs that external tools would use, though (and supporting multiple targets is a nice benefit).
Further (and this is perhaps less on-topic), why do all UIs need to be written in HTML/CSS these days to be considered "modern"? More broadly speaking, is a GTK or Qt interface going to be less performant than something using a browser engine? The motivation I've seen at a lot of companies seems to be that they already have web designers who can design web pages, and so these people are put to work designing desktop applications too. That's all well and good, but it seems to often result in a poor user experience and be visually different from the rest of the OS.
I don't think the plan is for it to be faster necessarily, but rather to move the existing tools off of a technology they are (slowly) moving away from. While there will be some overhead due to the use of React I doubt it will be anything noticeable. In fact, I just checked it out in the Nightly version of Firefox and it doesn't feel any slower than the previous version of the debugger.
As for the remote debugging protocol, I don't think that will really cost much of anything. The debugger itself is written in JavaScript, so the deserialization cost of JSON is pretty much nonexistent on that end.
As for everything needing to be written in HTML/CSS, this seems like an odd place to bring that up, as this is quite literally a tool found inside of a web browser. As a whole, though, I think HTML/CSS "apps" are cropping up largely because there are so many people familiar and comfortable with the technology, not because it's seen as more performant. And now that things like Electron and NW.js exist, it's trivial to leverage that knowledge to make a UI which uses it versus having to learn a new language/framework.
> Further (and this is perhaps less on-topic), why do all UIs need to be written in HTML/CSS these days to be considered "modern"?
Because there are a lot of web developers who want to make a desktop app without having to learn C++. Not to mention there are a lot more web developers than C++/Qt/any-other-technology developers in general.
Writing in HTML/CSS gets you some of the benefits of the Web, eg cross-platform, portability, backwards compatibility, an active base on which to develop (new web standards are being developed all the time). It's also familiar to a larger pool of developers, which I agree isn't the greatest argument if your goal is quality.
Don't underestimate the power of working on top of a platform that is being actively worked on and improved by a large pool of developers. Even if the platform has large deficiencies compared to some other platform, the rate of progress is going to make the Web win eventually, as long as you don't look at very specific metrics that happen to be Web pain points. (cf "Always bet on JavaScript")
As for fitting in with an OS's look and feel, that is indeed a current weakness of HTML/CSS, but there is energy behind fixing that (eg with Mozilla's desire to move away from XUL). So it's one of those things that will happen slower than you want in the near term, but faster than you expect in the long term.
At least, it sort of makes sense to do inside a browser, as you don't need to load in an entire web browser as a library.
Also just in general, the long-term plan for Firefox is to phase out XUL and to replace it with HTML/CSS/JS, so this is not just someone choosing to write their application in web technologies for no reason, but means that they will actually have less maintenance work.
The debugger landed in Nightly today; it's not quite ready for prime time yet, but it's on by default and getting updated regularly. https://nightly.mozilla.org/
I think the comparison is that most of the Firefox UI has traditionally been built with XUL components, whereas this part is being built with HTML elements instead.
Right, just saying that the underlying render target is HTML rather than XUL. The "CONTRIBUTING" file actually calls that out:
> The name debugger.html was chosen because this debugger interface is being written using modern web technologies, whereas the previous Firefox debugger was written in XUL.
The underlying renderer targets DOM nodes. I'm not sure I would say DOM nodes === HTML. I think of HTML as the thing I write in .html files. But whatever, it's mostly semantics here.
I would disagree that debugger.html was written using modern web technologies. It was written using one web technology but rejects many others, like html written in html, the `<template>` tag, web components, etc.
You're basically saying no site/app is ever written using modern web technologies unless it forgoes any framework usage, using every "vanilla" browser feature where possible.
The HTML5 spec even says: "The DOM is not just an API; the conformance criteria of HTML implementations are defined, in this specification, in terms of operations on the DOM."
and: "[HTML] implementations must support DOM and the events defined in DOM Events, because this specification is defined in terms of the DOM, and some of the features are defined as extensions to the DOM interfaces."
This debugger is 100% web technology and describing it as "HTML", the rendering platform it targets, is 100% accurate. The fact that it's written using React does not change this in the slightest.
Yes, canvas is an HTML element and so would be an HTML app. You seem to be caught up on how much serialized HTML there is. The serialization is one tiny, tiny part of the HTML spec.
Ok, you consider a canvas app to be an html app. We simply don't define html the same way; you seem to define it as anything that runs in a web browser. I define it as a markup language.
Too far apart on this one, time for me to move on.
I think you've missed the point being made here. What's being said seems to be that regardless of how much scripting/API calls there are, if the base is HTML then it is HTML.
To paint this another way, let's say you have UI toolkit "X" which defines elements using YAML but also provides a programmatic API in your favorite programming language to add, remove, or otherwise modify elements. Now let's say there is another UI toolkit "Y" which defines elements using XML but, like toolkit X, also provides a similar API. Regardless of how many YAML elements you explicitly create, you've used X and not Y - in fact, you've used YAML and not XML. It doesn't really matter whether >50% of your code goes through the API; what matters is that you've used some amount of YAML.
That's exactly the situation here: the existing debugger interface is using XUL and this new version is using HTML. Even if there wasn't a single line of HTML code, it would still be using the DOM to generate HTML and not XUL--and that's the distinction being made.
Web browsers are HTML engines, so almost everything that runs in [the document frame of] a web browser is indeed HTML. You cannot under any normal circumstances get a browser to execute JavaScript outside the context of a document – even in the Developer Console. You cannot create a Canvas (which is an element, as in HTML element) outside the context of a document.
That is why to create one you do:
document.createElement('canvas')
and there is no other way. `document`, by the way, being none other than:
> document.constructor
function HTMLDocument() { ... }
It's amusing because there's hardly any html. Instead of using platform features like template it uses a meta-framework that seeks to make html an implementation detail.
This could just as well have been written to a canvas. This is a JS project, not an html project.
I see the point - I thought the same when I reached "npm install". It's like gcc -o a.out something.html.
But there are teams dedicated to the "image" of open-source projects, and people who need to collaborate in those ways, creating wordings... like:
> The debugger.html project is hosted on GitHub and uses modern frameworks and toolchains, making it readily available and attractive to a wide audience of...
Of developers.
We as developers, at the end of the day, are humans.
And image branding helps project adoption. Not always, and not only that. But it helps.
From other comments, that ".html" sounds like those ".io" or ".me" domains... a way of bringing an IP response to you in 2016.
JavaScript and CSS are what 90% of the codebase of those HTML browsers we use is aimed at (and 90% of the binary size).
It could have been worse... usually we need a .html to load many .js (and moar stufz)... hey, it's related.
They could have called it an unrelated extension like .ninja, or something more confusing that's related to computing but unrelated to HTML/JS, like SOMETH~1.TXT.
I find it amusing that this is the second-longest comment subthread in this page, proof of the bikeshed theory of discussions: because it's about superficial naming stuff, everyone is happy to chime in and have an opinion.
I've seen the word 'React' showing up a lot since Facebook released their React JavaScript library. Is that library somehow related to what you are referring to?
I don't do any web programming, so can you explain simply what 'react' or 'react-like' means? What is reactive programming?
I'm sure it's been considered already, but I'm immediately concerned about the security implications of running a node server when debugging. Perhaps that is only the case when debugging in a stand-alone setup?
Think if you RDP to Firefox or Chrome and forget that the server is running. Does that mean that if I browse to http://your.machine:8000 I can control your browser?
This is only for development of the debugger itself and shouldn't be used for a production release. Within Firefox there is no Node process; it runs purely as a web application.
Ouch.. yeah. If there is the slightest degree of sanity in the world, that server is bound to 127.0.0.1 and not the external interface (with appropriate hoops to jump through if you want to change it)
Another feature I was trying to find was a way to see a history log of changes to a variable, and the lines where they happened, between two program states.
I like that idea. I think we can do several things when it comes down to tracking changes:
* make it easier to pause when a reference is set/get
* make conditional breakpoints more of an every-day experience
* better symbol search so that it is easier to see the different variable references and add the right breakpoints
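The "pause when a reference is set" idea can already be approximated in userland by wrapping an object in a Proxy that records every write together with its call site. This is an illustrative sketch only - a real debugger would break instead of logging, and `watch` is a made-up helper name, not a DevTools API.

```javascript
// Wrap `target` so every property write is recorded with the assignment site.
// new Error().stack is used to capture where the write came from; the exact
// stack-line index is V8-specific, so treat `at` as best-effort.
function watch(target, log = []) {
  const proxy = new Proxy(target, {
    set(obj, prop, value) {
      log.push({ prop, value, at: new Error().stack.split('\n')[2] });
      obj[prop] = value; // forward the write to the real object
      return true;
    },
  });
  return { proxy, log };
}

// Usage: mutate via the proxy, then inspect the change history.
const { proxy, log } = watch({ x: 0 });
proxy.x = 1;
proxy.x = 2;
```

A debugger-integrated version could replace the `log.push` with a pause, which is roughly what the first bullet above is asking for.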
Of course it's trackable with traditional breakpoints, but in big applications with a lot of dependencies the step-by-step approach (through different scopes where the variable changes names) can take a long time. But yes, the good practices you mention make it easier. It's just something I would like to see automated.
I think it's a naming convention for their new product offerings, based on their new technology stack. Servo's UI is named "browser.html"(https://github.com/browserhtml/browserhtml)
Much of the Firefox UI is and has been written in JS for a long, long time. What's happening now is that Mozilla is moving some components away from the XUL renderer -- an old, crusty piece of tech that is only still maintained because Firefox depends on it -- to the HTML renderer. Having more things on one stack means development resources can be spent more effectively.
I'd say just look at the Chrome dev tools and copy them at this point. Debugging in Firefox is a mediocre experience IMHO. As for the name, the .html is unnecessary.
So, what does Chrome dev tools provide that current Firefox does not? I use Firefox because it's what I've always used (and I prefer the browser) and what I'm familiar with. What are the good reasons for me to learn the Chrome dev tools?
1) Speed. Chrome is blinking fast, Firefox, especially when fiddling with margins on CSS classes that are heavily used, is dog slow.
2) Compatibility with Webkit-based mobile browsers for debugging. I'm so not having two browsers with hundreds of MB of RAM usage open.
Firefox's advantage, though, is the mouse inspector - it shows rulers at the selected element's borders. Invaluable when trying to align text across columns or rows!
1. I haven't found Firefox to be "dog slow". I may not be doing things that stress it, but overall, speed is not something I have had complaints about.
2. Ummm...don't you have to test against non-WebKit browsers, anyway? Or, do you really expect all of your users to only use WebKit based browsers? That seems problematic.
> 2. Ummm...don't you have to test against non-WebKit browsers, anyway?
Testing in Chrome on OS X takes care of Chrome, Windows, Safari, Android, iOS (even though the iOS WebKit has some quirks) and Firefox. Compatibility between browsers in rendering is not an issue these days any more; the only notorious exception is Internet Explorer, but even there the margin is getting smaller and smaller.
I usually develop and full-test in Chrome first, and do a quick "does everything look and behave correctly in other browsers" at the end, so far I have not encountered any major differences between the platforms.
Because Firefox is dog slow (I hope that at least the frequent UI lockups due to synchronous scripts in the page are soon history), and cannot speak to anything WebKit.