DM250: Data Engineering & DataOps NEW
Participants learn to implement scalable and operable data products with a DataOps mindset—from lakehouse/streaming and ELT/ETL to pipeline design and orchestration, through CI/CD, testing, and observability. Focus: principles rather than tool training (version control, data contracts, monitoring, incident handling), plus governance integration (metadata/lineage, access models, quality rules, responsibilities). Capstone: an end-to-end scenario including runbooks, alerts, SLOs, and a release checklist.
Unfortunately there are currently no available appointments.
Agenda:
- Modern data architectures: warehouse vs. lakehouse, medallion/layering, domain interfaces
- Batch & streaming: event models, basic concepts of windowing, late data, exactly-once vs. at-least-once
- Pipeline design: ingestion patterns, transformation, partitioning, backfills, idempotency
- Orchestration: DAG design, dependencies, parameterization, scheduling, retries, SLAs
- Analytics engineering (e.g., dbt thinking): modeling, documentation, tests, semantic layer concepts
- CI/CD for data: Git flow, build/deploy pipelines, environments, secrets, artifacting
- Automated testing: unit/integration, schema tests, data quality checks, contract tests, regression strategies
- Observability: logs/metrics/traces mindset, pipeline health, freshness, volume, distribution, cost
- Operations & incident response: alert design, on-call basics, runbooks, postmortems
- Security & governance touchpoints: access patterns, masking, lineage integration, catalog integration
- Capstone: end-to-end blueprint + operations and quality concept
Objectives:
- Select modern data architectures and outline them with clear justification (including batch/streaming classification)
- Design data pipelines so they are robust (idempotent), scalable, and maintainable
- Structure orchestration cleanly (DAG design, dependencies, backfills, SLAs)
- Build a CI/CD-capable delivery structure for data projects (repo structure, environments, releases)
- Implement a testing strategy for data (schema, contracts, data quality checks, regression)
- Define an observability concept (SLOs, alerts, monitoring metrics, cost indicators)
- Ensure operability (runbooks, incident procedures, postmortem improvements)
- Build governance requirements in technically (metadata/lineage, access, DQ as code)
Target audience:
- Data Engineers, Analytics Engineers, DataOps roles, platform/cloud engineers with a data focus
- MLOps/ML Engineers responsible for data feeds, tech leads for data platforms
Prerequisites:
- To follow the course content and learning pace of the DM250 Data Engineering & DataOps training, we recommend having attended the following courses beforehand (or having equivalent foundational knowledge):
- DM100 Fundamentals of Data Management & Data Governance
- DM200 Data Governance & Data Asset Management in Practice
- Solid foundational knowledge of SQL
- Basic knowledge of Python (or a comparable language) is very helpful
- Basic understanding of data pipelines/ETL/ELT as well as Git/version control is recommended
Description:
The DM250 Data Engineering & DataOps course addresses the technical implementation of reliable, scalable, and operable data products—using the mindset and practices of DataOps. Participants work across the entire delivery chain: from modern data architectures (lakehouse approaches, streaming/event-driven, ELT/ETL patterns) through pipeline design and orchestration (e.g., Airflow principles, dbt workflows, or comparable tools) to CI/CD, automated testing, and observability. The focus is not on tool training, but on robust, transferable principles: version control, environment strategy, a data testing pyramid, data contracts, deployments, monitoring, incident handling, and continuous improvement.

A key outcome of the DM250 Data Engineering & DataOps workshop is the ability to deliver data pipelines like product software: reproducible, measurable, secure, and cost-efficient. This also includes alignment with governance (DM100 Fundamentals of Data Management & Data Governance / DM200 Data Governance & Data Asset Management in Practice): metadata/lineage, access models, quality rules, and documented responsibilities are treated as integral parts of engineering practice—not as afterthoughts. In a capstone scenario (e.g., “Batch + Streaming Ingestion → Transformation → Serving Layer”), teams design an end-to-end solution including operational artifacts (runbooks, alerts, SLOs, release checklist).
Guaranteed to run:
from 2 attendees
Booking information:
Duration:
4 Days
Price:
EUR 2,950.00 plus VAT
(including lunch & drinks for in-person participation on-site)
Authorized training partner
Memberships