This was a multi-tenant, centrally hosted application serving 2,000 sites, each with kiosk PCs and some associated special-purpose hardware.
The actual application code ran in just four virtual machines in two data centres.
No templates, no Terraform, no microservices, etc…
Just vanilla ASP.NET on IIS with SQL Server as the back end.
The efficiency stemmed from having a single consolidated schema for all tenants, with the tenant ID as the leading component of every primary key.
Shared tables (reference data) simply didn’t have a prefix.
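The schema layout described above can be sketched roughly as follows. This is a minimal illustration, not the actual schema: the table and column names are invented, and it uses SQLite for portability where the real system used SQL Server. The key idea is the same, though: tenant-owned tables lead their primary key with the tenant ID, while shared reference tables carry no tenant column at all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Shared reference data: one copy serves every tenant.
    CREATE TABLE product (
        product_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL
    );

    -- Tenant-owned data: the tenant ID leads the primary key,
    -- so one schema in one database holds all tenants side by side.
    CREATE TABLE sale (
        tenant_id  INTEGER NOT NULL,
        sale_id    INTEGER NOT NULL,
        product_id INTEGER NOT NULL REFERENCES product(product_id),
        qty        INTEGER NOT NULL,
        PRIMARY KEY (tenant_id, sale_id)
    );
""")

conn.execute("INSERT INTO product VALUES (1, 'Ticket')")
# Two different tenants can reuse the same sale_id; the composite
# key keeps their rows distinct.
conn.executemany("INSERT INTO sale VALUES (?, ?, ?, ?)",
                 [(1, 1, 1, 2), (2, 1, 1, 5)])

# Every tenant-scoped query filters on the key prefix, so the
# primary-key index serves it directly.
rows = conn.execute(
    "SELECT sale_id, qty FROM sale WHERE tenant_id = ?", (2,)
).fetchall()
print(rows)  # [(1, 5)]
```

Because the reference data exists exactly once rather than once per tenant, the whole thing stays small — which is where the 50 GB vs. terabytes difference below comes from.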
The vendor product that replaced this was not multi-tenant in this sense. They deployed a database-per-tenant, and lots of application servers. Not one per tenant, but something like one per ten, so two hundred large virtual machines running twenty instances of their app.
Multiply the above for HA and non-production. The end result was something like a thousand virtual machines that took several racks of tin to host.
Management of the new system took serious automation, template disk image builds, etc…
The repetition of the reference data bloated the database from 50GB to terabytes.
It “worked” but it was very expensive, slow, and difficult to maintain. It took them several years to upgrade the database engine, for example.
That task for my version was a single after-hours change. Backup or rollback was about an hour, simply because the data volume was so much lower.
The simplicity in my solution stemmed from a type of mechanical sympathy. I tailored the app to the customer’s specific style of multi-tenant central hosting, which made it very efficient.
Of course, it is hard to say without knowing more about it, but it seems that jiggawatts' solution is closer to optimal than the vendor's. The 50 GB database could fit on a USB drive, after all, and we know empirically that a single SQL Server database was able to handle the load, since the old system worked.
Also, the fact that a consulting company was able to turn a part-time gig for one person into a $100M+ project at the taxpayer's expense is very frustrating.