Astro/Airflow Engineer
Tata Consultancy Services - Pennington, NJ
Job Description
Must Have Technical/Functional Skills
• 5-8+ years building/operating data or platform systems; 3+ years running Airflow in production at scale (hundreds to thousands of DAGs and high task throughput)
• Deep Airflow expertise: DAG design and testing, idempotency, deferrable operators/sensors, dynamic task mapping, task groups, datasets, pools/queues, SLAs, retries/backfills, cross-DAG dependencies
• Strong Kubernetes experience running Airflow and supporting services: Helm, autoscaling, node/pod tuning, topology spread, network policies, PDBs, and blue/green or canary deployment strategies
• Observability and SRE practices: Prometheus/Grafana/StatsD, centralized logging, alert design, capacity/throughput modeling, performance tuning
• Security/compliance: SSO/OIDC, RBAC, secrets management (Vault/Secrets Manager), auditing, least-privilege connection management, and change control
• Proven incident leadership, runbook creation, and platform roadmap execution; excellent cross-functional communication
• Experience operating and leading migrations to/from Airflow
• OpenLineage/Marquez adoption; Great Expectations or other data quality frameworks; data contracts
• Cost optimization and capacity planning for schedulers and workers; spot-instance strategies
• Multi-region HA/DR for the Airflow metadata DB; backup/restore and disaster drills
• Building internal developer platforms/portals (e.g., Backstage) for self-service pipelines
• Contributions to Apache Airflow or provider packages; familiarity with recent AIPs and Airflow 2.7+ features

Roles & Responsibilities
• Architect, deploy, and operate production-grade Airflow on Kubernetes, including all components and user application dependencies, with a focus on upgrades, capacity planning, HA, security, and performance tuning
• Operate a multi-scheduler ecosystem: determine when to use Airflow, distributed compute schedulers, or lightweight task runners based on workload requirements; provide a unified developer experience across schedulers
• Build automation infrastructure: Terraform modules and Helm charts with GitOps-driven CI/CD for environment provisioning, upgrades, and zero-downtime rollouts
• Standardize the developer experience: DAG repo templates, shared operator libraries, connection and secrets management, dependency packaging, code ownership, linting, unit testing, and pre-commit hooks
• Implement comprehensive observability: metrics collection, dashboards, distributed tracing, SLA/latency monitoring, intelligent alerting, and runbook automation
• Enable resilient workflow patterns: build idempotency frameworks, retry/backoff strategies, deferrable operators and sensors, dynamic task mapping, and data-aware scheduling
• Ensure reliability at enterprise scale: architect and tune resource allocation (pools, queues, concurrency limits) to support high-throughput workloads; optimize large-scale backfill strategies; develop comprehensive runbooks and lead incident response/postmortems
• Partner with teams across the organization to provide enablement, documentation, and self-service tooling

Salary Range: $110,000-$120,000 a year
Created: 2026-03-10