Those resources won't really help OP. What OP is describing is better handled by a bespoke ETL architecture plus workflow-orchestration tooling (like Airflow or Prefect) to handle versioning and deployment of modeling and ingestion services in production.
The orchestration part handles the workflows that comprise your ingestion and ETL processes. These are like managed cron jobs specific to data engineering lifecycles. The bespoke part of the architecture is what you'd compose together to handle all of the other requirements; for example, what applications do you build, and how do you design your data warehouse, such that the architecture can be used by both data science and marketing teams?
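To make the "managed cron jobs" idea concrete, here's a stdlib-only sketch of the extract → transform → load steps an orchestrator would schedule and chain. All names and data here are hypothetical; in Airflow these functions would become tasks in a DAG, and in Prefect they'd be `@task` functions composed inside a `@flow`:

```python
# Hypothetical ETL pipeline sketch. Each function below is the kind of
# unit an orchestrator (Airflow task, Prefect @task) would schedule,
# retry, and chain -- stubbed here with in-memory data.

def extract():
    # Pull raw records from a source system (stubbed with literals).
    return [{"user": "a", "clicks": "3"}, {"user": "b", "clicks": "5"}]

def transform(rows):
    # Clean and cast fields so downstream teams (data science,
    # marketing) can query consistent types from the warehouse.
    return [{"user": r["user"], "clicks": int(r["clicks"])} for r in rows]

def load(rows, warehouse):
    # Append into the "warehouse" -- a dict standing in for a real table.
    warehouse.setdefault("clicks_daily", []).extend(rows)
    return len(rows)

def run_pipeline():
    # The orchestrator's job: run these in dependency order on a
    # schedule, with retries and alerting around each step.
    warehouse = {}
    loaded = load(transform(extract()), warehouse)
    return warehouse, loaded
```

The bespoke part is everything around this skeleton: which source systems `extract` talks to, how the warehouse schema is modeled, and which applications sit on top of it.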