
I've run across this situation many times (I'm a "senior" team lead, meaning I've been working for 25+ years). I've witnessed companies overcome tech debt, and I've seen companies fail because of it. There are basically three approaches people can take.

One is the "big re-write". They start on a new code base, and try to develop it in parallel. It takes a very long time, and the teams have to work on two solutions for some time. It's a big bang approach, and it often fails, or drags on for years.

The second is massive refactoring in place. It requires intensive testing and disciplined practices, but often the testing culture is not there, which is why the code became unmanageable in the first place. In that sense it's kind of like starting over: the new focus and discipline on testing is hard for teams to build without strong leadership, training, or new talent.

The last, and most effective in my opinion, is to go with a service-based, incremental approach. If the code base is not already using services, APIs must be built. Frontends/apps must be de-coupled from the legacy components. A clear domain model has to be agreed upon, and then parts of the legacy codebase are put behind APIs and de-coupled from the rest. Over time, sections are refactored independently, and the APIs hide the legacy away. Maybe the legacy parts are refactored and replaced, or maybe they stay around for a while. But the key is that this approach allows multiple people or teams to work in parallel and focus on their own areas. This is domain-driven design in action, and it works. New features can actually be developed sooner, even though the legacy is not yet replaced.
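
To make that concrete, here's a minimal sketch (Python, with purely illustrative names) of what "hiding legacy behind an API" can look like: callers depend only on the thin facade and its contract, never on the legacy routine itself.

    from dataclasses import dataclass

    # Stand-in for a hypothetical legacy routine full of accumulated rules.
    def legacy_calculate_invoice(record, line_items):
        return {"total": sum(i["cents"] for i in line_items)}

    # The public contract that callers (and other teams) depend on.
    @dataclass
    class Invoice:
        customer_id: str
        total_cents: int

    # Thin facade: the legacy routine is an implementation detail behind it,
    # so it can be refactored or replaced later without touching any callers.
    def create_invoice(customer_id: str, items: list) -> Invoice:
        raw = legacy_calculate_invoice({"id": customer_id}, items)
        return Invoice(customer_id=customer_id, total_cents=raw["total"])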

In the end, overcoming tech debt is about people. And on larger code bases, it's more of an organizational problem than a code problem. Developers need to be able to move forward without having to navigate too much code or too many different people.




Do you subscribe to the microservice philosophy? I ask because you come from the era of shared objects, and I personally still consider libraries with well-defined APIs to be much simpler than dealing with multiple processes possibly running across different hardware.

I do break my infrastructure apart, but far less aggressively than some advocates.

I'm curious what 25 years has led you to believe.


I'm not the OP and I don't have 25 years' experience, but I'm a manager/architect with 18 years behind me. My take:

I very much believe in microservices. I've repeatedly seen library-based approaches fail, because one of the key and near-universal truths of tech debt IME is that code ends up tightly coupled. When you force an API interaction to happen via an outside protocol, you force a clean contract and a culture of coding to a contract. Decoupling the code allows each team to move faster and more independently.
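
As a rough sketch of what "coding to a contract" means here (Python; the names and endpoint are hypothetical): the only thing the two teams share is the wire format, so neither side can quietly reach into the other's internals.

    import json
    import urllib.request
    from dataclasses import dataclass, asdict

    # The shared wire contract; nothing else crosses the team boundary.
    @dataclass
    class PriceRequest:
        sku: str
        quantity: int

    @dataclass
    class PriceResponse:
        total_cents: int

    def get_price(base_url: str, req: PriceRequest) -> PriceResponse:
        # Everything crosses the boundary as JSON over HTTP.
        data = json.dumps(asdict(req)).encode()
        http_req = urllib.request.Request(
            base_url + "/price", data=data,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(http_req) as resp:
            return PriceResponse(**json.loads(resp.read()))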


>The last, and most effective in my opinion, is to go with a service-based, incremental approach.

I was once handed a project where the original architect drank the microservices Kool-Aid. It didn't stop the different services from being tightly coupled to one another - it just made the pain of that coupling worse.

It made testing a pain - you needed to set up and run 11 different services on your machine to test anything.

It made debugging a pain - you had to trace calls over multiple different services with code often written in different languages. Debugging became more and more like detective work.

It created a multiplicity of irritating edge cases. The 'calculation server' could time out if it took too long - and it sometimes did. Serialization/deserialization was also an area rife with bugs.

The code quality got worse due to this approach, exacerbated by the team lead at one point handing out 'responsibility' for different services to different people.

I think where microservices have "worked", it has typically been as a path of least resistance to realizing Conway's law - a tacit acknowledgement that different corporate fiefdoms want to write and deploy code in their own way and won't communicate effectively with one another. In that respect it's effective because it's easier to draw up REST API contracts between disparate, often different-language-speaking teams using microservices than it is to draw up library API contracts.

Surrounding the legacy code with integration tests and incrementally refactoring (decoupling modules, deduplicating code, and adding assertions) is the only way to approach technical debt.
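
Something like the following is what I mean (Python, with a made-up legacy function): pin down today's behaviour with characterization tests first, then refactor underneath them.

    import unittest

    # Hypothetical legacy function whose behaviour we want to pin down
    # before touching it.
    def legacy_discount(order_total_cents, is_member):
        if is_member and order_total_cents > 10000:
            return int(order_total_cents * 0.9)
        return order_total_cents

    class CharacterizationTests(unittest.TestCase):
        # These tests record what the code does today, so any refactoring
        # that changes observable behaviour fails immediately.
        def test_member_over_threshold_gets_discount(self):
            self.assertEqual(legacy_discount(20000, True), 18000)

        def test_non_member_pays_full_price(self):
            self.assertEqual(legacy_discount(20000, False), 20000)

    if __name__ == "__main__":
        unittest.main()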

>In the end, overcoming tech debt is about people.

No. It's a technical problem. People problems definitely exacerbate it - deadlines, politics, etc. - but it's still a technical problem in the end.


> It made testing a pain - you needed to set up and run 11 different services on your machine to test anything.

That is explicitly not the "microservices kool aid." The first thing a microservice needs to be is independently testable, so it can be independently developed.

All you had there was one monolithic service.
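
"Independently testable" in practice means the service's collaborators are injected and can be stubbed, so nothing else has to be running. A minimal sketch (Python, hypothetical names):

    import unittest
    from unittest.mock import Mock

    # Hypothetical order service; its only collaborator is injected,
    # so it can be tested without spinning up any other service.
    class OrderService:
        def __init__(self, pricing_client):
            self.pricing = pricing_client

        def order_total(self, sku, quantity):
            return self.pricing.unit_price(sku) * quantity

    class OrderServiceTests(unittest.TestCase):
        def test_total_uses_pricing_service(self):
            pricing = Mock()
            pricing.unit_price.return_value = 250
            self.assertEqual(OrderService(pricing).order_total("sku-1", 4), 1000)

    if __name__ == "__main__":
        unittest.main()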


That's what we do, and it's working reasonably well. Using microservices instead of libs makes working in independent teams easier and faster, and we can decouple better (different internal domain models, each managing its own persistence).

It definitely is a drag, but there's just no way around that.


I look at services as being an admission that we haven't really evolved language design into the Eli Whitney era. It looked for a time like we were moving that way with VBX, workflow engines, and mobile agent style divisions. Everything we do seems external to the languages we use.


The last scenario you outline has a lot in common with the Strangler Application pattern: http://www.martinfowler.com/bliki/StranglerApplication.html
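
The core of that pattern is just a routing facade. A tiny sketch (Python; the prefixes are illustrative) of how migrated endpoints peel off to the new code while everything else still hits the legacy app:

    # Endpoints that have already been migrated to the new implementation.
    MIGRATED_PREFIXES = ("/invoices", "/customers")

    def route(path, new_handler, legacy_handler):
        # str.startswith accepts a tuple of prefixes.
        if path.startswith(MIGRATED_PREFIXES):
            return new_handler(path)
        return legacy_handler(path)

    # Over time more prefixes move into MIGRATED_PREFIXES, until the
    # legacy handler receives no traffic and can be retired.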


I'm finding your service/API suggestion interesting, in light of Steve Yegge's somewhat infamous post about Amazon's initiative to accomplish this some years back.



