In a previous job I trained the analysts to be more technical and write the T part of ELT in dbt. They effectively became what are known as "Analytics Engineers": they owned the T and then wrote their analyses on top of the models they had created.
That works for ELT, especially if you have documentation around the raw data being loaded in, but it sounds like it adds overhead to the analysts' jobs, which may or may not be more than just having the engineering team own it and document it well (something they already have to do for the analysts to write transformation code). I'm curious how you handle the upstream data schema changing: loading in raw data means handling compatibility in another place, outside the application.
Not if it's just a part of those engineers' jobs. They're already familiar with the underlying application data, so owning the transformation is just a matter of understanding what the data needs to look like and documenting it. They're going to need to document the raw data anyway to avoid those analysts asking them a million questions. Might as well skip requiring analysts to also learn the transformation bit and just give them good data.
I think we've worked in very different jobs. In my case the analysts had a good idea of the underlying application data and often worked closely with both data engineering and regular engineering to understand it, so they could make better analyses. They were quite competent in their own right; otherwise I wouldn't have given them control over the T, which was a net benefit to me in reduced work.
Exactly. Analysts are always a step behind engineers when it comes to really understanding what the data means and what changes are coming down the line. This always results in delays, broken pipelines, etc.
Modern tools like dbt make it easy for data-producing teams to also own the T part.
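To make that concrete, a minimal sketch of what "owning the T" looks like in dbt (the table and column names here are hypothetical): a staging model that pins down the raw schema, plus the schema documentation that spares analysts the "million questions".

```sql
-- models/staging/stg_orders.sql
-- Hypothetical staging model: the data-producing team renames and types
-- the raw app table here, so downstream analyst models are insulated
-- from upstream schema changes.
select
    id                                  as order_id,
    customer_id,
    cast(amount_cents as numeric) / 100 as amount,
    created_at                          as ordered_at
from {{ source('app_db', 'orders') }}
```

```yaml
# models/staging/schema.yml
# Documents the raw source and the staging model in one place.
version: 2
sources:
  - name: app_db
    tables:
      - name: orders
models:
  - name: stg_orders
    description: "One row per order, renamed and typed from the raw app table."
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
```

When the upstream schema changes, only this staging layer needs to be updated; everything the analysts built on top of `stg_orders` keeps working.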