
I'll throw another one down your way. An organization I worked with had about 5 million lines of COBOL in one system (they had several more systems, and this one was only about 15% of their total transactional workload). It used a proprietary pre-relational database that let users both run queries (of a sort) and do things like read the value at the query result + 1500 bytes.
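For anyone who hasn't worked with that style of store, here's a minimal, purely hypothetical sketch of what "the value at the query result + 1500 bytes" means in practice - fixed-layout records read by hard-coded offsets (Java only for readability; the offsets and field names are invented):

    import java.nio.charset.StandardCharsets;

    // Purely hypothetical sketch: a fixed-layout record where the "query"
    // hands back raw bytes and every caller reads fields by hard-coded offsets.
    class OffsetAccessSketch {
        public static void main(String[] args) {
            byte[] record = new byte[2000];          // pretend this came back from the store
            String name   = field(record, 0, 30);    // bytes 0-29: e.g. a customer name
            String region = field(record, 1500, 4);  // "the value at the query result + 1500 bytes"
            System.out.println(name + " / " + region);
        }

        static String field(byte[] rec, int offset, int len) {
            return new String(rec, offset, len, StandardCharsets.US_ASCII).trim();
        }
    }

Once thousands of programs share hard-coded offsets like that, the storage layout can't change without touching all of them, which is part of why the data layer couldn't simply be swapped out.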

They tried re-writing pieces in Java at a cost of tens of millions of dollars. Java was the new hotness. In addition, they built out a Java hosting environment on expensive, proprietary Unix hardware to reach the same production volume as the mainframe. However, it was grossly under-utilized because the Java code couldn't do much more than ask the COBOL code, over message queues, what the answer to a question was. More millions of dollars went to keeping up licenses and support contracts on essentially idle hardware.
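Roughly, the Java side amounted to a thin request/reply over the queues, something like this sketch (assuming a JMS 2.0 client; the queue name, timeout, and message format are invented for illustration):

    import javax.jms.*;

    // Hypothetical sketch: the Java "service" forwards the question to the
    // mainframe over a queue and blocks until the COBOL side answers.
    class AskCobolSketch {
        String ask(ConnectionFactory factory, String question) throws JMSException {
            try (Connection conn = factory.createConnection()) {
                conn.start();
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue requests = session.createQueue("COBOL.REQUESTS");   // invented name
                TemporaryQueue replies = session.createTemporaryQueue();

                TextMessage msg = session.createTextMessage(question);
                msg.setJMSReplyTo(replies);
                session.createProducer(requests).send(msg);

                // All the real logic still runs in COBOL; Java just waits for the reply.
                Message reply = session.createConsumer(replies).receive(30_000);
                return reply == null ? null : ((TextMessage) reply).getText();
            }
        }
    }

When that's all the new tier does, it's no surprise the Unix farm sat mostly idle.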

They tried moving it to Windows, using .NET and Micro Focus COBOL. But the problem was they would still be tied to COBOL, even though they (conceptually) had a path to introduce .NET components or to wrap the green-screen pieces in more modern UIs. That in itself was a problem, because all their people knew the green-screen UI so well it was all muscle memory. Several workers complained because the new GUI actually made them slower at their jobs.

They were stuck because they had no way to reverse engineer the requirements from the COBOL code, some of it going back 25+ years. Of course it wasn't documented, or if it was, the documentation was long gone. For the most part they were tied to that COBOL code because no one understood everything that it did, and there were only a handful of COBOL programmers left in their shop (I think 6), busy making emergency fixes on that system plus several million more lines of code in other systems.

They were, however, looking for an argument to retire COBOL and retire the mainframes. The cheapest solution would have been to stick with COBOL. Hire programmers. Teach them COBOL (because it was painfully difficult to find any new COBOL people and for various reasons they could not off-shore the project). Continue to develop and fix in COBOL (especially before the last remaining COBOL programmers died or retired). If you cleaned up or fixed a module, maybe move it to Java when possible.

Long story short: the decision to introduce a new technology, even in the face of an ancient, largely proprietary (since it's really IBM COBOL on mainframes), over-priced solution, can actually lead to a worse outcome. Had they stayed with boring technology and in-sourced more of their COBOL workforce, they might not have felt happy, but they would have been in a much stronger position. Instead they were paying for a mainframe, and a proprietary Unix server farm, and software licenses on both Unix and z/OS.

When I was last there they were buying a new solution from Oracle which was supposed to arrive racked up and ready to go. Several weeks in, they essentially said it would take months before the first of the new Oracle servers would be ready for an internal cloud deployment on which to try to re-host some software. I'm not even sure what they thought they would be re-hosting, but they talked about automatic translation of COBOL to Java.




> They were stuck because they had no way to reverse engineer the requirements from the COBOL code, some of it going back 25+ years. Of course it wasn't documented, or if it was, the documentation was long gone.

Can you explain, for people who have never been close to such an environment, how this can happen, and why they still care about upholding requirements they don't know about?


Let's say you have a business process, like if a shipping manifest goes through any one of the following 3 cities, then you need to file form XYZ, unless the shipper is one of the following official government agencies and they've filed forms ABC and DEF. That was the original requirement in 1980. It was documented, put in a series of binders, and placed on a shelf.

1982: Another port is added to the list of special port cities, but only for goods of type JKL or MNO. That change was documented in an inter-office memo and filed away. Except the only place the type-of-goods information exists is a different module - so even though it pertains to the original business process, that piece of the rule lives in the module that prints the ship's manifest to (physically) mail it to the insurer.

1989: The original requirements binders are moved to a storage facility.

1992: The memo is also sent to an archive facility. The original manuals have been destroyed because the records retention policy is 10 years.

1994: There's a change in the law; an emergency fix is put in, and the only explanation goes into comments in the source code.

1995: The source code with those comments is lost, so an older version of the source code is recovered with just the code change and none of the comments.

And so on and so on

Until 2015. You have 5,000 to 10,000 lines of code that deal with the original requirement. They're split across multiple modules. They reside in a source code base of 5,000,000 lines. The people who use your software work with a combination of the software plus a whole bunch of unwritten rules, like: "If it's this country, and this port, and this port of origin - PF10 to get to the override screen and approve the shipment. Add 'As per J. Randal' in the comments."
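To make that concrete, here's a purely invented sketch of what such an accreted rule tends to look like once it exists only in code. The ports, goods types, and field names are made up, and the real thing would be COBOL spread across modules rather than one tidy function:

    // Purely invented example of a rule that, by 2015, exists only in code.
    // Nobody remembers that the JKL/MNO clause came from a 1982 memo.
    class FilingRuleSketch {
        static class Manifest {
            String port, goodsType;
            boolean shipperIsGovAgency, filedFormABC, filedFormDEF;
        }

        // 1980: special ports require form XYZ, unless the shipper is an
        // exempt government agency that has already filed ABC and DEF.
        // 1982: one more port counts as special, but only for goods JKL/MNO.
        static boolean needsFormXYZ(Manifest m) {
            boolean specialPort = m.port.equals("BALTIMORE")
                    || m.port.equals("SAVANNAH")
                    || m.port.equals("NEW ORLEANS")
                    || (m.port.equals("HOUSTON")                      // the 1982 addition
                        && (m.goodsType.equals("JKL") || m.goodsType.equals("MNO")));
            boolean exemptShipper = m.shipperIsGovAgency && m.filedFormABC && m.filedFormDEF;
            return specialPort && !exemptShipper;
        }
    }

Multiply that by hundreds of rules, scatter the pieces across modules, and strip the comments, and you have the reverse-engineering problem described above.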


That whole thing sounds way too familiar to me. It literally could be the same company.





