Can you expound on the data modeling they missed out on? Is the UML code generator a bad way to go in your opinion or did they just not do due diligence with data modeling?
(Asking because I'm starting a new data-heavy project and I'm considering generating code from UML.)
At the core, they sort of missed the problem. With the advent of the ACA, you could no longer use medical history to set rates. You had age, sex, smoker -- and the killer: location. What they actually needed was a bit of a rules engine, since one location may have 30k+ offerings and another none. Think actuarial tables at the zip code level. The data model itself was... modeled around people and missed the rate part. I think they had about 2 years and spent much of that time modeling. The cracks were not evident until way too late.
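To make that concrete, here's a minimal sketch of the shape of the problem as I'm describing it -- rates keyed on location plus a couple of demographic inputs, where one zip code maps to many plan/rate rows and another maps to none. All names and numbers are made up; this is not their model, just an illustration of why a people-centric model misses the point.

    import java.util.List;
    import java.util.Map;

    public class RateLookup {

        // Rating inputs under ACA rules: location, age, tobacco use.
        record RateKey(String zipCode, int age, boolean tobacco) {}
        record PlanRate(String planId, double monthlyPremium) {}

        // "Actuarial tables at the zip code level": one zip may map to
        // thousands of plan/rate rows, another to none at all.
        static final Map<RateKey, List<PlanRate>> RATE_TABLE = Map.of(
                new RateKey("94103", 40, false),
                List.of(new PlanRate("CA-SILVER-01", 412.50),
                        new PlanRate("CA-BRONZE-07", 318.20)),
                new RateKey("59001", 40, false),
                List.of()  // nothing offered in this rating area
        );

        static List<PlanRate> quote(String zip, int age, boolean tobacco) {
            return RATE_TABLE.getOrDefault(new RateKey(zip, age, tobacco), List.of());
        }

        public static void main(String[] args) {
            System.out.println(quote("94103", 40, false)); // two offerings
            System.out.println(quote("59001", 40, false)); // none
        }
    }

The real thing is a rules engine over tables like this, not an object graph hanging off a Person.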
One of my favorites was the way they serialized the POJOs. A data object got turned into XML, sent to a process that added more fields, then sent to another process that deserialized it back to the base type and dropped everything that wasn't in the base model. Lots of data corruption. Because the model was wrong, they kept trying to tack on all sorts of extra stuff... but the framework really did not support it.
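The round-trip loss looked roughly like this. A hypothetical sketch -- the real system used XML, here a Map stands in for the serialized document just to keep it self-contained:

    import java.util.HashMap;
    import java.util.Map;

    public class RoundTripLoss {

        // The "base" schema every process agrees on.
        static class BasePerson {
            String name;
            int age;

            Map<String, String> serialize() {
                Map<String, String> doc = new HashMap<>();
                doc.put("name", name);
                doc.put("age", Integer.toString(age));
                return doc;  // only base fields are written out
            }

            static BasePerson deserialize(Map<String, String> doc) {
                BasePerson p = new BasePerson();
                p.name = doc.get("name");
                p.age = Integer.parseInt(doc.get("age"));
                return p;    // extra keys are silently ignored
            }
        }

        public static void main(String[] args) {
            // Process A enriches the document with tacked-on rating data...
            BasePerson p = new BasePerson();
            p.name = "Jane";
            p.age = 40;
            Map<String, String> doc = p.serialize();
            doc.put("tobaccoUse", "false");
            doc.put("ratingArea", "94103");

            // ...process B deserializes to the base type and re-serializes,
            // silently dropping everything it didn't know about.
            Map<String, String> roundTripped = BasePerson.deserialize(doc).serialize();
            System.out.println(roundTripped); // {name=Jane, age=40} -- enrichment lost
        }
    }

No error, no warning -- the data just quietly disappears somewhere in the pipeline.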
They tried to pair a handful of A players with a bunch of C-grade developers, then pulled all the A players into never-ending meetings. I saw little to no code review of what was actually going on. Folks literally copy-pasted switch blocks because the code worked, and left the old case statements in. Exceptions were eaten. Textbook example after example of the kind of code you'd expect to see on The Daily WTF.
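If you've never seen that combination in the wild, a hypothetical flavor of it (names invented, not their code):

    public class CopiedSwitch {

        static double rate(String planTier) {
            try {
                switch (planTier) {
                    case "BRONZE": return 0.60;
                    case "SILVER": return 0.70;
                    // cases below were copied from an unrelated switch and can
                    // never match a plan tier, but "the code worked" so they stayed
                    case "PENDING": return 0.0;
                    case "CANCELLED": return 0.0;
                    default: throw new IllegalArgumentException("unknown tier " + planTier);
                }
            } catch (Exception e) {
                // exception eaten: bad input silently becomes a zero rate
                return 0.0;
            }
        }

        public static void main(String[] args) {
            System.out.println(rate("GOLD")); // prints 0.0 instead of failing loudly
        }
    }

Multiply that by hundreds of files with nobody reviewing, and the wrong data model becomes the least of your problems.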