Software Development at 1 Hz (medium.com/martincracauer)
180 points by akkartik on Sept 25, 2016 | 118 comments



For a while I wrote numerical code that simulated a magnetic material. It would take hours to run a simulation long enough to verify that it was correct. Make a change, wait five hours, check if the change worked.

I eventually started keeping a journal of every code change I made, along with the hash of the binary it created. I could use this to make several independent changes and run them all at the same time. When one run finished, I would verify its behavior, look at the binary's hash, and then know that my code change was safe. It was a very slow process, but effective. After doing this kind of development for a while I'm wary of claims like this that imply that feedback with a latency <10s is actually a good thing. High latency feedback forces you to be more methodical in your development and think about the changes you're making.
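
Concretely, the journaling looked roughly like this (a sketch; the build command and binary name here are made up):

    import datetime
    import hashlib
    import subprocess

    def binary_hash(path):
        # SHA-256 of the compiled binary, used as a stable ID for each build.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Rebuild, then record the change description next to the resulting hash.
    subprocess.run(["make"], check=True)
    with open("journal.log", "a") as journal:
        journal.write("%s %s %s\n" % (
            datetime.datetime.now().isoformat(),
            binary_hash("./simulate"),          # hypothetical binary name
            "tightened convergence tolerance",  # what this change was
        ))

When a five-hour run finished, hashing the binary that produced it pointed straight at the journal entry, and therefore the change, it belonged to.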


Being methodical is good when your problem is intricate and well defined. Fast feedback won't be helpful in implementing a complex algorithm like a compiler or a numerical simulation (although I would argue it will help you debug it).

When what you're doing is simple but error prone, or not well defined, fast feedback can make you much more efficient. If you're using an underdocumented API/dataset, the best way to understand it is to probe it with code, quickly iterating.

I used to write a lot of LaTeX and make lots of simple mistakes (typically missing backslashes). I found that an environment which flagged my mistakes as soon as I made them was more efficient than searching through the output for all the mistakes, then finding the corresponding source for each one and correcting it.
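
Even a tiny watcher gives that immediacy (a sketch; assumes pdflatex on the PATH and a hypothetical notes.tex):

    import os
    import subprocess
    import time

    SRC = "notes.tex"   # hypothetical file name
    last = 0.0
    while True:
        mtime = os.stat(SRC).st_mtime
        if mtime != last:   # the file was saved since the last build
            last = mtime
            result = subprocess.run(
                ["pdflatex", "-interaction=nonstopmode", SRC],
                capture_output=True, text=True)
            # LaTeX error lines start with "!"; surface the first one.
            errors = [l for l in result.stdout.splitlines()
                      if l.startswith("!")]
            print(errors[0] if errors else "OK")
        time.sleep(0.5)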


The problem is that fast feedback means you tend to create more bugs. It's faster in the short term, but very quickly produces unmaintainable code.

In the end you can write code to be read (aka maintainable), to be run (aka fast), or to be written (aka cheap).

PS: Sure, in theory it's simply better. However, people get lazy, make some change, and assume that if it's passing their tests it works.


> When what you're doing is simple but error prone

That just means the problem is deeper than your understanding of it.


Or deeper than you >can< understand it without probing, because documentation is simplistic, wrong or non-existent.


In that case you have a much bigger problem :P


> High latency feedback forces you to be more methodical in your development and think about the changes you're making.

This! People rely on their fancy REPLs and super fast feedback loops and 1000 unit tests too much these days. What do you actually do when you can't run the code? What if you have to debug it just by reading it?

There's a lot to be said about being efficient with trivial changes vs being methodical and able to solve much more complicated problems when they arise.


This is exactly the reason why I actually quite strongly discourage teaching programming by starting with IDEs. Far too often I see beginners fall into what I call "programming tunnel vision", where they repeatedly make very tiny and often random changes to a piece of code in an attempt to get it to compile or produce the right result, seeming to completely abandon any thought of the overall goal. Lower-latency feedback only encourages this behaviour more. The same phenomenon also happens if you give them a debugger --- they spend plenty of time just stepping through the code without any good sense of the bigger picture. Maybe it feels productive, but it's not. Their attention is so preoccupied with the feedback that they do not think deeply enough about their solution, and as a result, overall code quality often also suffers.

Instead, I believe in thinking carefully about the problem. Close your eyes and visualise the program and its data and control flow in your mind, then write the code. Use a whiteboard or even pencil and paper to collect your thoughts and get a good mental model of what you're trying to accomplish. Block out all other distractions and focus on the problem.

Many others I've talked to are in disbelief when I tell them I can spend an hour writing several hundred lines of code that compiles and works flawlessly the first time, but this is what careful thought will allow. Even with a very fast feedback loop you may spend several times longer fiddling with the code until you get something that seems to work, but actually doesn't in all cases precisely because you did not ever think about those cases while you were fiddling with it and had your attention focused on getting that next dose of feedback.


I'm glad I found someone who shares my point of view. You're right about IDEs and debuggers.

> Instead, I believe in thinking carefully about the problem. Close your eyes and visualise the program and its data and control flow in your mind, then write the code. Use a whiteboard or even pencil and paper to collect your thoughts and get a good mental model of what you're trying to accomplish. Block out all other distractions and focus on the problem.

It's funny how many problems I've solved by writing code on paper/whiteboard when I got stuck doing actual programming. It's so much easier to focus on the problem when there's no code to run.

Another thing I found useful is just reading the code outside of an editor. Either by printing it out and scribbling over it with a pencil, or just reading it on a phone/tablet that can't run the code.

> Many others I've talked to are in disbelief when I tell them I can spend an hour writing several hundred lines of code that compiles and works flawlessly the first time, but this is what careful thought will allow.

I've been having the same experience. Recently at a uni we were given an assignment to write an interpreter for a rather simple imperative language (conditionals, loops, simple recursive functions and stack depth checking). We were given 3 hours to write a program that could interpret a sample program.

Most people struggled to get anything working at all during that time, since each and every one of them I talked to didn't have a clear picture of what they were trying to build.

It took me a little over an hour to write the whole thing in almost a single pass, in a modular fashion with a separate tokenizer, parser and evaluator, plus the necessary checks. There was no need to run the code; most of it was rather trivial, implementing simple state machines. It was quite a bit of code (over 1000 lines), but there was almost no thinking required if you knew how the parse tree should look.

In situations like this I'd even say it's hard to make the program not work if you're methodical, working step by step and checking if you've covered all the cases.
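
To illustrate the split, here is a toy sketch for arithmetic expressions - nothing like the full assignment, but the same three-stage shape:

    import re

    def tokenize(text):
        # Numbers, operators and parentheses; whitespace falls away.
        return re.findall(r"\d+|[-+*/()]", text)

    def parse(tokens):
        # Recursive descent producing a nested-tuple parse tree.
        def expr(i):
            node, i = term(i)
            while i < len(tokens) and tokens[i] in "+-":
                op, (right, i) = tokens[i], term(i + 1)
                node = (op, node, right)
            return node, i
        def term(i):
            node, i = atom(i)
            while i < len(tokens) and tokens[i] in "*/":
                op, (right, i) = tokens[i], atom(i + 1)
                node = (op, node, right)
            return node, i
        def atom(i):
            if tokens[i] == "(":
                node, i = expr(i + 1)
                return node, i + 1   # skip the closing ")"
            return int(tokens[i]), i + 1
        return expr(0)[0]

    def evaluate(node):
        if isinstance(node, int):
            return node
        op, left, right = node
        l, r = evaluate(left), evaluate(right)
        if op == "+": return l + r
        if op == "-": return l - r
        if op == "*": return l * r
        return l // r   # integer division keeps the sketch int-only

    print(evaluate(parse(tokenize("2 * (3 + 4)"))))   # 14

Each stage is a small state machine you can check by reading it, which is what makes the single-pass style workable.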


>>"I quite strongly discourage teaching programming by starting with IDEs..."

>>"...also happens if you give them a debugger..."

1). I assume you can cite no research supporting the idea that new programmers are better off with your recommendations?

2). Your idea doesn't seem to take into account that different people think in different ways. I believe this approach was good for you. But as far as we know, you could be in the minority, right?

For these reasons I don't think there is enough data to make blanket recommendations against IDEs and debuggers.


Are you genuinely attempting to argue that thinking ahead and fully understanding the problem isn't preferable to tweaking one's way to a solution?


"Thinking ahead" is often a great excuse to design an overengineered mess of a solution that can't be tweaked and doesn't really properly solve the problem either. To be pithy - see Java.

Sometimes, exploring the problem space can give you a fuller understanding of a problem faster by forcing you to confront pitfalls that may not be obvious until you try a solution. We use all kinds of wonderful terms for this - "Agile", "Prototyping", etc.

Both extremes - fetishizing planning and up front design, or fetishizing short term iteration and poking things without deeper thought - have their problems, and occur too often. Neither tool is a panacea, but both have their place.


There's a difference between "thinking ahead into the next problem", i.e. premature generalisation, and "thinking ahead into the details of the current problem".

It's good that you mentioned Java, because it is a language which I find extremely IDE-centric, and I suspect that's also what causes easy premature generalisation --- creating new classes with tons of boilerplate automatically generated by the IDE is so easy that it encourages programmers to do so. That doesn't help one bit with the details of the algorithm, unfortunately; it often gets "OOP-ified" into a dozen classes and much-too-short methods created as a result of the "fiddle with it until it works" mentality.


'Fiddle with it until it works' has to be done when you are working with a product that isn't documented well enough. If that mentality is used for programming in general it is bad, but there are some situations where experimentation has to be done to work out how parts of the product work.


Java as a language is producing more value to actual businesses than most other popular languages. Where you see an over engineered mess, others see valuable abstractions, extensibility, compatibility and self documentation. Unfortunately, understanding this so called mess requires knowledge of the lingua franca of object oriented design, which has fallen out of favour with the new generation.

I'm not saying that there are no unjustifiably over engineered Java libraries, but the current hype cycle of web frameworks seems to indicate the burden of proof of good design should lie with current technologies as well as previous ones.


> Java as a language is producing more value to actual businesses than most other popular languages. Where you see an over engineered mess, others see valuable abstractions, extensibility, compatibility and self documentation.

"Everyone uses it" or "it's producing value" doesn't mean it's not an overengineered mess that everyone recognizes as such - it just means that imperfect code still trumps no code. Switching languages usually means tossing out your old codebase, leaving you at "no code".

I have worked on such messes, created such messes (oops!), and cleaned up such messes.

That said, I'm sure there is a Java project out there which actually benefits from stereotypical levels of Java abstraction and patterns - and I'm sure there's a few codebases out there where "my" and "others" opinions differ exactly as you say.

> I'm not saying that there are no unjustifiably over engineered Java libraries, but the current hype cycle of web frameworks seems to indicate the burden of proof of good design should lie with current technologies as well as previous ones.

100% agreed - not that I'm qualified enough at web dev to have much of an opinion on this. If anything, the churn of web frameworks smacks of being both overengineered (do you really need a whole framework for that?) and underengineered (wait why are we replacing things yet again?) simultaneously.


Spring comes to mind as a widely used framework that benefits from those "stereotypical levels of Java abstraction and patterns."

But it's the exception rather than the rule. Once you have something like Spring in your codebase, to take care of modularity and reuse, everything else should be coded with as little "abstraction and patterns" as possible.


WhitneyLand is probably not arguing against the claim you make in the large, but against the unsubstantiated argument that using IDEs and debuggers is more likely to lead you to that style of thinking than high latency variants.

One could just as easily hypothesize that these tools let you avoid thinking in the small, and help you form a big picture overview that would otherwise be difficult to understand.


Those were not my words. I said that discouraging all students from using IDEs and debuggers doesn't make sense.

I went quite a while with no tools other than a hex editor to type in op codes. I don't think it did anything except hurt productivity.

Maybe you learn or work better that way. I don't. And I don't see how you justify assuming all new programmers would.


Discouraging someone from using an IDE is more a symptom of the target language's shortcomings. Xcode provided me with beautiful compile-time errors for both Objective-C and Swift, and forced me to really think about what I was doing. Incidentally, I learned both languages from the IDE.

Would I recommend an IDE for a low level language like C? Probably not, because it forces a kind of laziness on the programmer.

Maybe an IDE isn't the solution, but a starting point to build upon. Something that's an interactive environment like LightTable has, where you can quickly eval blocks of code and see the end result without having to re-compile your entire program. Certain languages are better suited to this, and certain paradigms (reactive programming comes to mind).


If they won't, I will. Working code (in a good language) is the best way to work on the problem, far better than a whiteboard where you have no undo, no VCS tagging, no ability to come up with reusable components, and so on. Trying to do it all in your head would be even worse.

Of course it's possible to push code around on the page until it seems to work, just as it's possible to push symbols around the page until it seems to work when answering a mathematical question on paper. (Unfortunately some languages/compilers will run code that doesn't make any sense, but that's more true on paper, not less)


No, that was your interpretation.


I think the "compleat" programmer can move freely between the two extremes. I have one piece of code -- in Common Lisp! -- that I've been working on for a couple of months of weekends and still haven't tried to run (except for an occasional one-line experiment to check that I have the correct syntax for a macro).

But I can also adopt a much more interactive, experimental approach in situations where experiments are cheap and easy.

It all depends on the nature of the task.


oh, those 1000 lines of C...

If you think small code/functions are just the result of "short term decisions", that is wrong. Small and short doesn't imply ill-developed, short-sighted, or lacking the whole picture.

Small and well-thought-out code often contains an abstraction. And compiling such abstract-level code takes even less time, while still doing some syntax checking and (optionally) type checking. If you fix the abstract layer, then you proceed to the details.

"Close your eyes and visualize the program"? Why not just draw the image as an abstract program on the screen? You know, Hackers and Painters is a real thing. Good abstract code describes itself on the screen, you don't have to imagine the behavior in your head. Code is much better than your volatile image in the head that may go away if you go to sleep.


I think that when you have to resort to using a debugger, it is probably better to discard the code entirely and rethink the solution.


I've only used a debugger a few times, when the program did not seem to behave according to the source code. That was invariably due to a third-party bug (compiler, library, OS) or to me failing to understand a subtle point about the language or library I was using.

But in general I agree with you.


If you have to break out the multimeter, it's probably just better to throw out that radio and make a new one.


I don't think this is the right analogy, because code can't "go bad" on its own one day (as if some capacitor had gone dry), unless you modify it to do so. Maybe when you don't have access to the source code and you want to see what is going on, use of a debugger could make sense.


One thing that helps is having short, self-contained, composable pieces of code that are easy to run in a repl or compile. This also helps testing and general understanding.
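
For example, with Python's doctest the REPL-style checks can live right next to the code (a minimal sketch):

    def clamp(value, low, high):
        """Constrain value to the range [low, high].

        >>> clamp(5, 0, 10)
        5
        >>> clamp(-3, 0, 10)
        0
        >>> clamp(42, 0, 10)
        10
        """
        return max(low, min(high, value))

    if __name__ == "__main__":
        import doctest
        doctest.testmod()   # replays the examples above in well under a second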


> High latency feedback forces you to be more methodical in your development and think about the changes you're making.

There is also an attitude in computer music that low latency audio and realtime systems are simply part of the march of historical progress and ultimately represent a step forward from the days when composers could wait a week to hear 15 seconds of music rendered.

I found that I like the think-write-render-listen-repeat loop that comes from a compiled music workflow. (I discovered this when writing a -- initially very slow -- nonrealtime python computer music system which in the early days would sometimes have me waiting a few hours to render a couple minutes of music.)

Realtime computer systems enabled all sorts of new forms of live improvisation, interactive algorithmic systems, etc but sometimes it's very nice to have to sit in a quiet room with a text editor or notebook and think your way through the next long render.


I'm in a similar situation -- I write a lot of long-running hadoop jobs where I won't know the results for an hour or more. I try to "think twice, run once".

When I'm debugging, I'll kick off several variants to test different ideas, and write myself notes on what I expect to learn from each one when it finishes. Then I can go off and work on something else until it's done, and use the notes to get myself back into the debugging groove. Without the notes, I'd find myself an hour later struggling to remember why I even ran a particular variant.
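
Roughly like this (the launcher script and variant names are hypothetical; the point is writing the expectation down before the run starts):

    import subprocess

    # Each variant pairs a run with a note on what it should teach us.
    variants = [
        ("fix_join_key", "expect: duplicate rows disappear from the output"),
        ("baseline_rerun", "expect: no change; rules out flaky input data"),
    ]

    for name, note in variants:
        with open(name + ".note", "w") as f:
            f.write(note + "\n")
        # Hypothetical launcher; in practice, a hadoop job submission.
        subprocess.Popen(["./run_job.sh", name])

An hour later, the .note file sitting next to each result answers "why did I even run this variant?"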


> High latency feedback forces you to be more methodical in your development and think about the changes you're making.

So does not having tests.


Oddly... The most tested code bases I have worked in had similar problems. They had tested and designed themselves into a solution that did not lend itself to changes. To the point that making a change required reasoning about several micro services and integration test packages.

Again, there are no panaceas.


Right. If I were to work on this again, the first thing I'd do is come up with some way of automatically testing the code.


Sure, but my point is that being forced to be more methodical isn't a good thing. Not needing to be methodical is better (because your tools will catch your mistakes for you).


Doesn't that also remove the need for discipline? So the last vestiges of 'engineering' that so many programmers aspire to vanish, they can blame their tools, and the only development challenges are found in the tools themselves (perhaps already true to some extent).

This sounds depressing.


See it another way: you can spend the mental effort instead on solving harder and more interesting problems. That is, if you can find harder problems to solve...


> the only development challenges are found in the tools themselves (perhaps already true to some extent).

Definitely true on the web side of things. Bootstrap+React+Angular+jQuery+whatever else, minify, deploy on docker, done.


The best method is to automate your methods.


This is... What?

I'm not all in on what I call "Poirot's doctrines," but there is almost always room for method. I grant that some methods aren't needed and speed is always of high value.

However, the speed of bouncing from beep to beep in your tools is not always faster than stepping back and thinking.

As an example, no amount of fast feedback will help you complete all of the Euler problems.


> no amount of fast feedback will help you complete all of the Euler problems

I find this one almost a perfect counterexample. The Euler problems are almost ideally suited to high levels of interactivity, in order to tease out patterns and solutions.

By no means am I a massive Euler user, but when I was solving #555 I was sure glad to be using Python. If I had a >1m recompile time I probably would have just given up.


I contend that if you have solved 500 ish of those problems, you are a bit beyond the majority of programmers.

Also... I don't see how any single Euler problem could have minute plus compiles. No matter the language. :(


I haven't solved 555 problems, just problem #555 [1]. Sorry if I was unclear.

I also wasn't saying that any language would get long compile times on Euler questions -- they're far too short for that -- but that theoretically if they did it would cause problems.

[1]: https://projecteuler.net/problem=555


That misunderstanding is on me. What you put was perfectly clear, so my apologies.

It is worth noting that many of those problems originate from people that did them by hand. :)


I am a huge believer in a <10s dev loop. I quit my previous team because compile times were greater than 2 mins and I wasn't in a position to influence making them shorter.

In my new team I got it from 30s to 5s and the effects have been amazing. I have to thank VSCode and Gulp for making this possible. With TypeScript, VSCode does fast parsing of your code on every keystroke and gives squigglies instantly. On every save, Gulp does its magic and VSCode runs the problem matcher and shows more squigglies in my editor.

The browser knows when a file has changed and refreshes immediately. Vscode has a chrome-debug plugin so my breakpoints hit immediately in the IDE.

It's been an amazing experience so far. We've also revved up our CI systems to run thousands of tests in parallel.

You actually enjoy work when you don't spend forever waiting for things to compile.


> In my new team I got it from 30s to 5s and the effects have been amazing.

I was proud of being part of the team that got the compile time for the core of our product from about 2 hours down to about 30 minutes (the same amount of code, loaded into a RAMdisk and built on a 16-core machine builds in about 2-3 minutes today, but most of our full builds take from 20 minutes to 2 hours now).

After you build the core, a fairly simple plug-in on a reasonable development machine will compile+link in about 5 seconds, maybe. How long it takes to test the fix depends on how far into the program's operation a failure is expected to occur...anywhere from a couple seconds in, if it can't contact the server, to an unbounded length of time, if there's a rare bug triggered by some weird pattern in the customer's data or use of the system.

I enjoy hearing about what other people expect from their dev environments. I've worked almost exclusively in large C++ systems.


This is a problem I'm having at the moment, 20s webpack builds with ts-loader, plus other build steps running in VS 2015. Seems I need to drop ts-loader, and build more directly. Are your build files available to look at somewhere?


That sounds... high. No guarantees that I can help with your current stack, but my email is in my profile if you'd like to run your setup by me.


Do you run webpack on Windows? It's much slower there than on *nix platforms


Web engineers are waiting for stuff to compile these days? I thought things were supposed to be moving forward, not backward.


It's a two steps forward, two steps back kind of situation. For a while we just had HTML+PHP... save it, alt-tab, reload. Then we had servers that have to boot, and that took a step back. Then we had hot-reloading servers and everything was fast again. Then we added asset compilation and things took 30 seconds to process again. Now we're working on incremental building of assets to get that speed back again.


There's always been a segment of web engineers who have had to wait for stuff to compile. That's not a bad thing, in and of itself. It's the really long compile times due to bad tooling that sucks. Those are avoidable, though.


It's simultaneously frustrating and entertaining to watch the younger generation toil under the impression that they're inventing something new.


deleted


No it doesn't.


Moving from Grunt to Broccoli helped a lot. 30 sec went down to a few hundred milliseconds. Broccoli is just insane.


Interesting. I may have to give that setup another shot. I found TypeScript compilation to be really slow. Was recently amazed by Bucklescript's lightning-fast compile times and killer repl, but its integration with the JavaScript ecosystem was nowhere near as smooth.


A 20s turnover is not even that slow. Debugging undecidable timing issues in an FPGA may have turnover counted in hours given how slow HW synthesizers are.

While the need for better tooling is clear, the OP does not mention one important point: faster machines have created a sort of fast-food approach to programming, where programs are built on the go with the help of the appropriate tools (a sort of computer-aided programming).

Back in the 60s, when programs were being shipped by snail mail to some data center somewhere in the country, to be entered by a random assistant and scheduled in a long queue of jobs, turnover was counted in days. Yet we walked on the moon.

So beyond better tooling, maybe the key to maintaining that attention span is simply to spend more time at the design board.


Possibly. All this talk about getting antsy if things take time to build seems to fit a worry I have about our current web-focused world.

Observe Google and their management of Android.

Again and again they have pushed out some half-baked X.0 version with the unstated intent that it will be sorted in a near future update. A clear example of that was the introduction of the Storage Access Framework.

This is the web mentality seeping into the development of firmware and physical products, and it is leading to a massive culture clash and crap products.

Basically you can't just code, push, run, repeat on devices in people's pockets and bags like you can with a web site.


Seriously, the reason I can't rely on my phone is mostly Google Play Services. I never know when it will update behind my back, and suddenly I'll feel a warm spot on my leg, and less than halfway into my day my phone's battery is at 5% because of Google Play Services running infinite loops.

Or how at least every two days, the phone's wifi and LTE connections just stop working, for no apparent reason. But if I Force Stop on Google Play Services, they suddenly work again. This has been going on for several months now.

Of course, these issues didn't used to happen with older versions of Play Services. And if I downgrade the Play Services app to the version installed with my ROM, I never have these problems--but then I can't use any current Google apps, like Gmail, Maps, etc.

The AOSP software is mostly fine. But the Play Services side of Google simply cannot be trusted. Their CADT/web-style of development is almost enough to push me back to iOS.

I want a phone that just runs Debian. :(


I would say that it is a balance of consequences. When the consequence of a mistyped variable is two days, you better believe I'm going to triple check everything. When it is the ten seconds for the compiler to run, I tend not to worry as much.

Reducing the turnaround is a good thing, because that time that would have gone into triple-checking variable names can now go into triple-checking the algorithm on a whiteboard.


I wouldn't say FPGA synthesizers are slow. They take a long time, but that's not the same thing.


A design board is just another tool, a workaround for not being able to run the program quickly enough (compare e.g. "presentation compilers" in languages that take a long time to fully compile).


This is one reason why I'm so happy about the Raspberry Pi. It is not that the Pi as such added anything to the world that wasn't there before. But it is small enough to do the little things I always wanted while still running a normal OS, and it is attractive and widespread enough that SBCL was ported to it. I can now have the whole stack of SBCL with SLIME and all the other Lisp toys on the Pi.


I don't think it's hard to give credence to the thought that there perhaps is a natural boundary for responsiveness that, when crossed, results in vastly increased productivity. That's the idea here. This works for people with short or long attention span. The paucity of short term memory is perhaps made worse by attention span issues, but that's not the core issue.


When I write code, I have this mental model of how the world (application) works. I write some lines of code based on it, and then test the code. More often than not, something happens that I hadn't thought about, and my mental model gets updated.

This summer I worked on an application where it took 8 minutes to see the effect of changed code. Between each iteration, I had forgotten most of the assumptions I had made. So when things didn't work, I had a hard time figuring out why my model of the world didn't work. I basically started from scratch each iteration.

Possibly very obvious, but it was interesting to learn how I attack a problem. And scary to see that I didn't just lose the 8 minutes between each run of the program, but much, much more.


If 8 minutes sounds like a lot... turnaround times when you work with FPGA code can be hours (and months if you're taping out an ASIC). After having to cope with that for a while, I've found that even with software I think a lot more before writing any code, and now I find myself stepping into the debugger much less often.

(My point is: debugging first in your head is yet another skill that should really be taught to everybody but isn't).


> debugging first in your head is yet another skill that should really be taught to everybody but isn't

Along with smelting iron. I don't know why people advocate skills one doesn't need, saying "but they make you better".

Skills you don't need will atrophy, because you don't need them. Conversely, skills you need will strengthen. Why would I debug first in my inaccurate head when I have a perfectly accurate debugger right here, and I can run my test suite in ten seconds?

Debugging first in your head is a good skill if actual debugging will take hours. If feedback is instant, you're probably faster writing the first thing that comes to you and iterating on it.


This is as silly as asking why people advocate exercise. I mean, you clearly don't need to be able to lift heavy things. Even if it is sometimes helpful.

Similarly, thinking about things before you do them will almost always be something you could just skip out on. But... It can be very helpful. And exercise is a great way to get better at work. "Practise makes perfect" and all.


No, people advocate exercise because it's necessary for long-term health.


Some exercise certainly helps long-term health. Calling it necessary greatly overstates the importance, though.

And, I should be clear, I was saying "exercise" to refer to gym style exercise.


Being unable to run through code in your head and fully conceptualise a codebase is a great way to introduce subtle integration-level bugs that escape unit testing.


That's why we have things called "integration tests".

https://en.wikipedia.org/wiki/Integration_testing


And of course, there are never any bugs in software, ever, because we have integration testing!


The really interesting thing is that while I'm completely happy to spend 30 mins writing line after line of code to implement some algorithm, winding up with >100 lines and only compiling at the end, if it takes more than around 30 seconds to compile I'll lose my train of thought entirely.

So I understand the concept from the point of view of task-switching.

But a lot of people in the comments allude to needing a feedback loop for each and every line of code, which sounds absolutely horrifying to me - is that really what development is like in some environments? you can't really be sure of what a single line of code is going to do?


> At the same time I cannot use toy languages that have no compile time type checking

This guy seems and sounds like a serious developer, so I'm totally confused by this statement.

Dynamic languages that don't do compile time type checking are not toys.

I used to write only in Java or C++, but I think it's a stage of maturity as a developer to realize that you can write code that takes arguments on the assumption that the objects passed in are of types that will have the behavior you need to work with them.

If you argue that the code is faster when it is compiled- that's fine, and I agree, and that's good, if it matters.

If you argue that you need types because otherwise you can't be safe, I'm sorry, but that's like being a helicopter-parent. Sometimes maybe you can't trust what is calling your code even when you give it trust, and that's valid; just like as a parent, sometimes the child really needs that level of micromanagement. But, for a lot of if not most of practical web development, you can use dynamic typing, and most children do not need that level of micromanagement.

There's nothing wrong with languages that provide type checking, but it isn't necessarily a deficiency when it's not there.


> But, for a lot of if not most of practical web development, you can use dynamic typing, and most children do not need that level of micromanagement.

I use TypeScript because that level of "micromanagement" saves me more time in avoiding bugs than I spend adding static annotation. Hell, the improved intellisense alone means I no longer need to read docs in a lot of cases, meaning writing code is faster for me too - even ignoring bug rates. (Of course, it's sometimes prudent to check the docs for things like edge cases regardless.)
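
An analogous sketch in Python, since the same trade appears there with type hints plus a checker such as mypy:

    def apply_discount(price: float, factor: float) -> float:
        # Return the discounted price.
        return price * factor

    # A value that arrives as a string (say, from a parsed form) slips
    # through at runtime: "9.99" * 3 is string repetition, not arithmetic.
    print(apply_discount("9.99", 3))   # prints 9.999.999.99

    # A static checker rejects that call before it ever runs, because
    # "9.99" is a str where a float was declared.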

In C++-land, I've started using e.g. clang's threading annotations to good effect in catching some of the most heinous bugs to debug - incorrect multithreaded code that forgets to do simple things like lock mutexes meant to protect data structures.

I've dabbled in a toy project in Rust-land. The ability to catch and prevent data races is fascinating, and the ability to stem the tide of null dereferences at runtime seems pretty handy. Have you never had a hell-to-reproduce null deref that only occurs in your release builds? It's pretty bad when it ships to a large number of customers.

I see SQL injection vulnerabilities, and wish APIs properly segregated SQL Data from SQL Commands - two entirely different types of things.

And yet for a while I was a lot more forgiving of dynamically typed languages. Until I was able to compare JavaScript vs TypeScript - which I'd argue started as basically JavaScript with static typing tacked on as, effectively, an afterthought.

> If you argue that you need types because otherwise you can't be safe, I'm sorry, but that's like being a helicopter-parent.

If helicopter parents were as beneficial as static typing, I'd have a lot less against them.


> If you argue that you need types because otherwise you can't be safe, I'm sorry, but that's like being a helicopter-parent. Sometimes maybe you can't trust what is calling your code even when you give it trust, and that's valid; just like as a parent, sometimes the child really needs that level of micromanagement. But, for a lot of if not most of practical web development, you can use dynamic typing, and most children do not need that level of micromanagement.

This is backwards. Working without types is like walking around with your eyes closed: sure, you can do it, most of the time; if you're not doing anything particularly dangerous you can even do it reasonably safely. But it makes everything a lot slower.

The arguments against types usually boil down to either, as the saying goes "The belief that you can't explain to a computer why your code works, but you can keep track of it all in your head", or having only used languages where the explanation to the computer is so cumbersome as to not be worth doing (valid, but only in the scope of those languages, and the correct response is almost always to get a better language).

Try using a language with a decent type system some time (something along the lines of OCaml, Haskell, F# or Scala). Back when I'd only written Java and C++ I also thought type checking wasn't worth it.


I'm a huge fan of static type systems and their ever helpful checkers.

For me the most difficult argument against static types is that the sweet spot remains elusive: some type systems are too simplistic (e.g. the difficulty of writing generic print in OCaml) while some are too fancy and difficult (e.g. how many people understand even most of GHC Haskell's type system?).

There's also some real problems with compiler error messages. A great type checker needs to be able to explain problems understandably, or decoding the type errors will be more difficult than tracking down a null pointer in an interactive debugger.

I wonder about the possibility of making type checkers more interactive. It can be hard to understand them because they build up lots of implicit understanding that's not apparent.


I apologize for the swipe at "toy languages". I tried using more common languages like the usual scripting crowd, but it wasn't successful, mostly for performance reasons. Python in particular is frustrating since its own spec prevents pretty much all optimizations and you can't even call a function without stirring the heap.

I don't advocate a fully type-safe language. In Common Lisp I generally don't have to declare any types. The compiler points out obvious, unavoidable type problems that it can deduce, e.g. inside a function that has some typed things such as literals. I can then add types as I like, to both variables and function interfaces, and the compiler points out more.

Another property of Common Lisp is that if you add type declarations they speed up your code if you compile with speed==high and safety==low. But you can also compile your code with safety set higher than speed. In that case a compiler like SBCL turns your declarations into runtime type assertions.

Then you run your automated tests in both modes and you have higher confidence in the code.


I once had

- a super-fast assembler

- a super-fast way to get the assembled code over to a target system

... and my turnaround time was on the order of five seconds: Edit, hit a button making the target ready to receive the code, assemble: Running.

It almost didn't matter that I was writing 6502 assembly; things just fell together and it was magic.

Years later I was in a place where it was common to have half-day builds (30 minutes if you arranged things well). In fact, my last three weeks on that job I never got a working build at all, despite our group having an entire source control team whose job it was to make builds work.

Current big project, it's about 30 seconds of tool churning, then another 30 of startup time. Could be better.


I always laugh thinking that Turbo Pascal was so quick I didn't understand the difference between "build" and "build then run". And that was statically typed code on a Pentium-class computer.


Turbo Pascal was blazingly fast even on a 286 or 386! Such a great IDE.


I think Pascal was specifically designed in a way that allowed a compiler to go straight from the source code to machine code in just one pass (although the output wouldn't be particularly well optimised).


True, I've read that they both made the syntax easier to parse and kept the compiler relatively simple to avoid intermediate structures/allocations etc.

I took a look at Wirth's first Pascal compiler too; it's just two files of acceptable length (a lot of coupling, though), but it's easy to see that it's an imperative style that brings a lot of mechanical sympathy.


Can you go into more details? Why did something take 30 minutes or longer to build? Was the environment being cleaned every time? Was an artifact server not being used?


I've had long turnarounds (other than the artificial punchcard hell that is really just economics surfaced by IT budgets) in many projects. The ones that come to mind:

- Burning EPROMs for the hardware bring-up of a consumer 68000 machine. Download and burn time for six EPROMs was well over 45 minutes. Things got better when we wrote a downloader, but initial bring-up was kind of painful for a couple of months.

- An unbelievably crappy and development-hostile environment for set-top boxes. Getting 900K of code to a device could take 20-30 minutes, and the transfers often just abjectly failed. "Working as designed" said the people running the head-end. If you wonder why set-top-box software sucks so hard, this is one of the reasons. The whole TV industry is a fractal of shitty practices. Not that I'm bitter. [I did a work-around that let us do dynamic code loading over a different pipe that we theoretically weren't supposed to use "for network stability purposes" and got a biiig bonus after our team's productivity went way up]

- Game development using a cross-assembler on a minicomputer. The mini was also used as the department's email and office memo system. 45 minutes during the day turned into less than 5 minutes at night, so you can imagine the hours I kept.

- Working on some Windows internals, the less said about this, the better.

The common thread: When bad decisions and designs were institutionalized, things never got better. When it was possible for individuals to improve things, they did.


You can get this experience in most interpreted languages fairly easily (if an interactive debugger like pry or a JS console isn't good enough), and compiled languages with sufficiently good tooling for dynamic code replacement under a debugger.

For dynamic languages, use the `watch` command combined with a host script that loads the code you're developing dynamically and executes it with some interesting parameters, and dumps the output. As you edit the source, you can see the effects immediately, character by character.
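
A minimal host script along those lines (the module name is hypothetical; importlib.reload swaps in the edited code without restarting the process):

    import importlib
    import os
    import time

    import handler   # hypothetical module under development, defines process()

    last = 0.0
    while True:
        mtime = os.stat("handler.py").st_mtime
        if mtime != last:      # the source was saved; pick up the new version
            last = mtime
            importlib.reload(handler)
            print(handler.process({"user": "alice"}))   # interesting input
        time.sleep(0.2)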

You can get the same effect in something like Java in an IDE if you can structure the interesting code like a game loop (ideally it is a game loop): it's being continuously evaluated, so every change has an immediate effect. You can see Notch use this technique in this video here:

https://youtu.be/rhN35bGvM8c?t=5757

As he edits code in the IDE, the application responds dynamically.


I've been working on/with an environment that gives me feedback on every keystroke. The effect has been amazing, and it's always painful to go back to regular environments. Even traditional Smalltalks or REPLs where I have to do something to execute the current command are jarring. Regular compile/link/debug is painful. Xcode is excruciating.


I agree. That's the experience I have doing web development in Lisp. Most other languages have at least a deployment step, even if it's a scripting language that doesn't require compilation.

I'm getting this with a language that has a very high performance native compiler. And it's sad that more people don't use this.


What are you using now?


My own Objective-Smalltalk.

Here with a more traditionally interactive environment (so updates are applied when you save the method): https://www.youtube.com/watch?v=ArcClqt2vTc

Here in a really live environment I call CodeDraw, with updates computed on every keystroke (the "Run" button is just left-over UI, it doesn't do anything): https://www.youtube.com/watch?v=sypkOhE-ufs

Most interpreted or incrementally compiled systems should be able to do this.


Why wouldn't you include the name of said environment?


This is why interview code tests are... badly misguided. Most "fizzbuzz" screening is grossly artificial, denying one the feedback loops which are a critical element of productivity, and without which one is relegated to spending time manually checking what automation does almost instantly.

I rely heavily on the IDE reminding me of things, and quick compile/run cycles verifying correctness, rather than trying to think thru a myriad of special cases. Relegated to "whiteboard coding" interview questions, I'm left looking a whole lot worse than I am - not because I don't know it, but because I know enough that thoroughness is painfully slow (without leveraging tools multiplying my skills & speed).


I would argue that if you can't write a correct FizzBuzz without needing to compile and run it, or even worse, an IDE to hold your hand, you don't actually understand what's happening and are just leaning on a crutch.

> I rely heavily on the IDE reminding me of things, and quick compile/run cycles verifying correctness, rather than trying to think thru a myriad of special cases.

The peril of that sort of workflow is that you often won't realise the importance of those special cases until it's too late to change things easily. "It looks like it works, it compiles and runs" --- I've heard this sentiment from such IDE users many times, and yet when I inspect their code, they inevitably missed something important.

Someone once gave me a phrase I like to keep in mind when programming: "How can you tell the machine what to do if you're not even sure how to do it yourself?"


All tools are crutches. I mean a good carpenter probably should be able to bash a nail in with a rock rather than a hammer, but that would be a crazy way to do interviews.


> This is why interview code tests are... badly misguided. Most "fizzbuzz" screening is grossly artificial, denying one the feedback loops which are a critical element of productivity, and without which one is relegated to spending time manual checking what automation does almost instantly.

Do you really need to compile/run something like fizzbuzz? What if you're writing code that can't be run, such as when modifying a larger piece of code that won't run until all modifications are made?

Isn't there some value in being able to verify correctness of code just by looking at it for a few seconds? Surely you can overlook some things, but with practice probably a lot less than people think.


"fizzbuzz" was the wrong example, picked for familiarity but understating intended complexity. Point was that forms of "whiteboard coding" completely overlook the point of the article.


This is an enormous deal to me, although I'm only messing about with Node and Meteor.

Productivity plummets if a change takes long enough to become visible that I can justify looking at Twitter.


I'm reminded of this very amusing C++ compilation speedup trick: cat * > everything.cpp - http://stackoverflow.com/a/318495/3229684

---

Also, I tend to do all my work using a realtime-feedback model like this.

I discovered inotifywait a few years ago, and consider it the coolest thing I've come across. Using a loop like:

    while true; do clear; ./program; inotifywait -qq -e delete_self program; done
I run my program, then sit waiting until I re-save it, at which point I run it again. This approach is very flexible; the example above works for shell scripts, or I can do "gcc -o file file.c && ./file", or I can do "node file.js", or whatever. Switching the sequence around, I can have inotifywait pause before the first execution too, but I prefer the method above. I occasionally substitute "tput reset" for "clear" so I can wipe my scrollback and shift+pgup stops at the top of the current execution.

There are some caveats though.

The biggest is that this approach doesn't work too well for things that need to be killed to be restarted, like socket servers; that's fixable but nontrivial and likely project-specific too.

The second problem is the race conditions that will likely arise between your editor's slow file-save process and inotify's fast response time, producing irritating "File is in use" errors 50% of the time (since inotifywait exited, bash looped, and ./program is trying to be read by your compiler or interpreter while still locked by your editor).

The DELETE_SELF inotify event is specific to the Geany text editor; when Geany saves a file it does quite a few operations, and DELETE_SELF is amongst the last (but inotify doesn't see any of the following events, since DELETE_SELF marks that the file inotify was watching got deleted; this is thankfully coincidental with Geany being done with the file). You'll need to do something like "inotifywait -m program" and watch what events occur (without -m, inotifywait exits on the first event received); hopefully there's a lone unique event at the end of the sequence. Worst-case scenario you might have to add in "sleep 0.05"; I have not tested the success of prefixing the compilation step with something like "while [ ! -r program ]; do true; done".


I would really like to see someone write enough code to do anything (or even compile), compile it, and interpret the results in a useful manner in less than a second. Continually, over a significant period of time. The tooling is certainly available, but the biological part of that process would seem to be superhuman.


The point is that you want to use the high frequency to get the simpler tools you will need out of the way, preserving your energy and attention for the complicated parts.

If you used up your energy by the time you made it through the simple stuff you are looking at a bad time later.


It's really interesting how little time it takes before things become disjointed. Recently, working with some legacy code, I noticed this. Usually I have a continuous unit testing process which normally gives feedback in < 4 seconds. I had to wrap some tests around some legacy code, which required some expensive setup that added an extra 10 seconds. I really feel pained working with that code and the extra 10 seconds. It is just enough time for the brain to drift off somewhere else.

Ironic when in the past I have worked with C++ code bases that would take 20 minutes to compile, though incremental builds and parallel builds brought that number down.

Also in embedded systems, I've got 3 minute cycles to try something out. However most of the coding is done on a PC with a fast unit testing cycle which dramatically improves feedback loops.


I now work with Phoenix + live-reload and the nearly instant dev cycle is the best thing that has happened to me since I started web dev.

I know rails has something similar but I haven't worked with rails for nearly 2 years now so I can't tell.

As a side note, having a short dev loop is really good for education.


From this perspective, the worst things I meet are setting up or tuning complex CI pipelines and developing infrastructure automation: a cycle can easily last more than half an hour and there doesn't seem to be an easy way to speed up things, really.


With a modular TDD approach, you could in theory test individual components very rapidly. I haven't looked into it too much, but I assume such systems exist.


I have also often complained about this pain of CI systems.

One of the problems is that the things you need for a proper CI system conflict with fast build times. Proper CI requires a clean checkout and a total compile from scratch. If you're doing something in Docker, you ought to start your Docker process from scratch. If you're in a VM, you really ought to revert to a snapshot to make sure you're not accidentally accumulating un-CI'ed state. And so on.

While in normal development you may have a very fast turnaround, a properly configured CI system needs to assume the worst, start from scratch, and build everything, in every combination you support. (You may also want a less accurate CI build that trades speed for accuracy and just does an incremental build of some particular aspect of the system. But that should be supported by the full CI I describe here.)

Consequently, something that fails only 97% into that build process, and only fails on the CI server, can be very annoying to fix. But you don't really have a choice, because any hacky alternative is too risky. A CI system that has human intervention isn't a CI system.


Similarly, when in development I'll hack around slow CI times by storing the output of the most cycle-heavy steps in the CI and on subsequent runs determine if I can just use the old stored values instead of running through everything

That said, I agree that there should be effort, or at least a roadmap, to get CI compile times down to near native

To play devels[sic] advocate, we've had lengthy compile times on systems and native apps since time immemorial; it just seems that browser work has caught up in complexity and ubiquity.. even if some of that complexity can be blamed on negligent or lazy development


This isn't relevant only for development. Applications that take 10+ seconds to process an action are annoying and tiring as well.

At work a lot of stuff is like this. Enterprisey applications that take 10, 15, 20 seconds to process each entry. Time enough to get distracted. Time enough to tab over to email or some other application and lose focus. Just overall mentally fatiguing.

Developers who are annoyed by 10 second delays in their development process should remember that their users will be just as annoyed by slow response time in applications. Cut the weight. Make it faster. I'm more and more convinced that nothing else will go as far in making users happy.


For Ruby, pry comes close, but he's right about the inability to have the same kind of turnaround cycle when you're developing a C extension. I don't think anyone is gonna beat SBCL in that regard any time soon. There is no way to take Ruby and then generate machine code that ends up being callable from the current address space. Although I guess there is some way to do it with cffi and a compiler constantly running in the background.


Today I was thinking about the idea that LISP and Smalltalk are probably at the top for instant feedback during development. And Smalltalk dev tools might rank even higher because you can dig into any instance to any depth and evaluate code "talking to it", getting instant answers right there.


My dev loop currently takes ~10sec. But I've worked with worse...

Yes, I prefer faster loops, but I've never had a job where I could influence the build pipeline...


I just try and fix the build pipeline when it annoys me.

I was running into some 1-minute link times per project on Android, with several projects. Investigated and found out there was an alternative linker named "gold" - a few minutes to figure out how to reconfigure our builds and our link times were down to ~10 seconds / project or something. Nobody complained when I checked that in ;)

Dealing with C++ build pipelines for large projects, there's only so low I can get build times for the whole project - better to try and make the data hot reloadable in a lot of cases. Similar goal: Fast iteration loops. Even if you can't fix them for the entire project, you may be able to fix them for whatever you're working on right now.


Writing django and JavaScript apps with a hot reloader has spoiled me. The dev loop for other cases now feels almost painful.


The aversion towards upfront thinking and tendency to glorify the TDD slot machine is a bit unsettling.


That's somewhat unfair don't you think? It's perfectly possible to incrementally evolve code using TDD towards a particular destination determined by upfront thinking.


There's enough "upfront thinking" to be done on the problem you actually try to solve...


Relevant XKCD: http://xkcd.com/303/




