I mostly agree with this - working in bite-sized chunks really is the main ingredient for success with complex code base reformations.
FWIW, if you want to have a look at a reasonably complex code base being broken up into maintainable modules of modernized code, I rewrote Knockout.js with a view to creating version 4.0 with modern tooling. It is now in alpha, maintained as a monorepo of ES6 packages at https://github.com/knockout/tko
In retrospect it would've been much faster to just rewrite Knockout from scratch. That said, we've kept almost all the unit tests, so there's a reasonable expectation of backwards compatibility with KO 3.x.
> In retrospect it would've been much faster to just rewrite Knockout from scratch.
That's most likely not true, but looking backwards it often feels that way. The problem is that you're now a lot wiser about that codebase than you were at the beginning, and if you had done that rewrite there could easily have been fatalities.
But of course it feels as if the rewrite would be faster and cleaner. How bad could it be, right? ;)
And then you suddenly have two systems to maintain, one that is not yet feature complete and broken in unexpected ways and one that is servicing real users who can't wait until you're done with your big-bang effort. And then you start missing deadlines and so on.
It's funny in a way that even after a successful incremental project that itch still will not go away.
And I'm sure Netscape is far from alone in that category ;-)
But (disclaimer) as someone who has advocated for big-bang rewrites before, I'm still under the impression that there are situations where they can be net-better.
Factors may include:
- there is no database involved, just code. Even more helpful if the existing code is "pure".
- a single developer can hold the functionality in their head.
- there are few bugs-as-features, tricky edge cases that must be kept backwards-compatible, etc.
- as stated above, it's the primary author.
- much of the existing functionality is poor, and the path for building, launching, and shifting to a "replacement product" is relatively clear.
Advocating to never rewrite can be harmful, and make things harder for people for whom that actually would be the best approach.
Yes, but those are special cases. For every rule there is an exception, and of course if the parts above apply you are fully in control and are well able to judge whether you should rewrite or not.
But the situation that I'm describing is not ticking any of those boxes, and I think I made that quite clear in the preamble.
One thing that bothers me is that people tend to expect miracles. I usually tell them it will take as long to fix it as it took to fuck it up. But that doesn't mean you can't have some initial results to point the way in a short time. It's more about establishing a process and showing that there is a way out of the swamp than it is about something super tricky or difficult. Just follow the recipe, don't let yourself be distracted (this can be really hard, some management just can't seem to get out of the way) and keep moving.
> In retrospect it would've been much faster to just rewrite Knockout from scratch.
You're getting a bit of pushback on this sentiment, so I'll play devil's advocate a bit here.
I've tried gradual refactors in the past, with poor results, because unfocused technical teams and employee turnover can really kill velocity on long-term goals that take gradual but detailed work.
That is, replacing all those v1 API calls with the v2 API calls over five months seems fine, but there's risk that it actually takes several years after unexpected bugs and/or "urgent" feature releases come into play. And by that time, you might have employee turnover costs, retraining costs, etc.
I'm just saying the risk equation isn't as cut and dried as it seems. There's survivorship bias in play in both the "rewrite it" and the "gradually migrate it" camps.
The rewrite only works - in my experience, YMMV - if the team is already 100% familiar with the codebase as it is, the task is relatively simple, and there is a nice set of tests and docs to go with the whole package.
The one caveat is that there are times when the business realizes that their old workflows and features aren't what they now need. The rewrite becomes a new project competing with the old rather than a functional rewrite.
This is also fraught with peril. However, it is a different set of problems. In an ideal world, you have engineers who can make reasoned decisions.
However, if the company culture allowed one application to devolve into chaos, what will make the second application better?
You raise an excellent point and usually in tandem we educate management (not the tech people) on how they failed in their oversight and guidance role.
The real problem of course is letting things slide this far in the first place. But that's an entirely different subject; for sure the two go hand-in-hand, and often what you touch on is the major reason the original talent left the company long ago. By the time we get called in it is 11:58 or thereabouts.
Assuming something off the shelf is available, yes. In fact, if something off the shelf is available we'll be happy to make that recommendation; too many companies that aren't software houses suddenly feel that they need to write everything from the ground up. And even companies that are software houses suffer from NIH more often than not. (Though I have to say that in my experience this has been improving over the last couple of years; it used to be that every company had its own in-house framework, but now we see more and more standardization.)
I agree about the YMMV part. The same caveats, small scope and developers with expertise, apply in the gradual migration plans as well in my experience. It's clearly true in the extreme cases (python2 -> python3) and I've seen the same patterns happen inside companies as well.
Looks like your goal was too ambitious. A rewrite would have suffered from even more unexpected bugs, and from the same "urgent" feature releases, but worse, because you would have to fix them in two different systems. When your organization won't help you, you have to do less.
Is anyone else flabbergasted by the amount of effort required to mock a function call in Go, as described by this talk?
Like, when at 3:20 the presenter says there's a thing you can do that makes it utterly trivial to test this feature, I immediately assumed she'd just have to write some mocks for the `comm` package and plug those in. Cool, I figured, she'll talk about a nice mocking library or something, or there's some business complexity involved where the comm package is particularly stateful and so difficult to mock.
But no. The big difficulty seems to be that the language doesn't allow you to mock package-level functions; and so before you can mock anything you have to introduce an indirection - add an interface through which the notify package has to call things, move the code in the comm package into methods on that interface, correct all code to pass around this interface and call methods on it.
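For anyone who hasn't watched the talk, here's roughly what that indirection looks like - a minimal sketch where the package layout and names (`comm.Send`, `notify.Alert`, `Sender`) are hypothetical stand-ins for whatever the talk actually uses. Before the refactor, there is no seam a test can use:

```go
// Before: notify calls a package-level function in comm directly.
// The call is hard-wired, so a test cannot substitute a fake.
package notify

import "example.com/app/comm"

func Alert(user, msg string) error {
	return comm.Send(user, msg) // cannot be mocked without monkey patching
}
```

And after introducing the interface:

```go
// After: notify depends on an interface, and the caller injects
// the real comm-backed implementation (or, in tests, a mock).
package notify

// Sender is the seam: anything that can send a message.
type Sender interface {
	Send(user, msg string) error
}

type Notifier struct {
	sender Sender
}

func New(sender Sender) *Notifier {
	return &Notifier{sender: sender}
}

func (n *Notifier) Alert(user, msg string) error {
	return n.sender.Send(user, msg)
}
```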
Why would you choose to work in a language that makes the most common testing action so painful?
It shouldn't be 'the most common testing action'. In my mind, the number of mocks required for a test is usually inversely proportional to the quality of the code; if you need to mock out 20 random implementations to test something, you've either got an integration test masquerading as a unit test, or you've got very tightly coupled code. Mocks that need to be injected via monkey patching are worse than 'normal', dependency-injected mocks. `quality = mock_count^-1 + monkey_patched_mock_count^-2`
Monkey patching is a sign of bad code in 99% of cases. In that 1% of cases where it might be justified, you can restructure your code to use indirection and dependency injection, and avoid having to use monkey patching. It might not be as nice as monkey patching in that 1% of cases. But I'd rather work in a language without monkey patching, precisely because it makes it incredibly obvious when you've coupled your shit.
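To make that concrete: continuing the hypothetical `notify`/`Sender` sketch from above, a dependency-injected mock in Go is just another type satisfying the interface - no monkey patching involved, and the coupling is visible right in the constructor signature:

```go
package notify_test

import (
	"testing"

	"example.com/app/notify"
)

// fakeSender is a hand-rolled mock satisfying notify.Sender.
// It records its arguments so the test can assert on them.
type fakeSender struct {
	gotUser, gotMsg string
}

func (f *fakeSender) Send(user, msg string) error {
	f.gotUser, f.gotMsg = user, msg
	return nil
}

func TestAlert(t *testing.T) {
	fake := &fakeSender{}
	n := notify.New(fake) // the mock is injected, not patched in

	if err := n.Alert("alice", "hi"); err != nil {
		t.Fatalf("Alert: %v", err)
	}
	if fake.gotUser != "alice" || fake.gotMsg != "hi" {
		t.Errorf("Send got (%q, %q), want (alice, hi)", fake.gotUser, fake.gotMsg)
	}
}
```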
Working in Go changed how I write my JS code. I don't know if you write much JS, but to my mind, `sinon` is mocking, while `proxyquire` and `rewire` are monkey patching - monkey patching with the aim of helping mocking, but monkey patching nonetheless. My JS tests now don't use proxyquire or rewire, though they might use sinon. I find this produces easier-to-read code.
Well, every external call is 'coupling' in your code. Whether it happens on an interface passed as an argument or by resolving the name in some other fashion doesn't really change how tightly coupled your code is.
To me, having to change a function into a method on a singleton interface just to be able to mock it for tests seems like working around inadequacies of the language. And I'm not sure why `module.Interface.method` is easier to read than `module.function`.
> In retrospect it would've been much faster to just rewrite Knockout from scratch.
Why do you say that? The idea that one could get it right writing from scratch is one of those seductive thoughts, but in my experience it never works out that way.
> Why do you say that? The idea that one could get it right writing from scratch is one of those seductive thoughts, but in my experience it never works out that way.
Of course the alternate route - rewriting - is just a hypothetical, so we can only suppose how it would've turned out.
That said, rewriting from scratch would've been fairly straightforward, since the design is pretty much set.
The real value of the existing code resides in the unit tests that Steve Sanderson, Ryan Niemeyer, and Michael Best created – since they illuminated a lot of weird and deceptive edge cases that would've likely been missed if we had rewritten from scratch.
So I suspect you are right, that it's just a seductive thought.
KnockoutJS is hands down my favourite JS library of all time - it's a large part of why I now build things in a more structured way (I was/am primarily a backend dev). It's awesome to see that it has a modern future, since I have quite a few projects using it and 'porting' will be a lot easier. Thanks for the amazing work you are doing :)
Can I still install and use it via a NuGet package? It looks like it's integrated with all those crazy npm tools now, but I'm not sure if that's just for development or for usage as well.
You can see the rough transition strategy here: https://github.com/knockout/tko/issues/1