
Staff Engineer - DataOps Engineer at Nagarro

Senior · Posted about 3 hours ago · RemoteFirstJobs · Product Engineering

AI summary: Staff-level DataOps engineer manages data pipelines, ETL reliability, cloud infrastructure, and monitoring across analytics platforms using Python, SQL, and DevOps tools.


Company Description

We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (18000+ experts across 36 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That is where you come in!

Job Description

We are seeking a DataOps Engineer to join our Tech Delivery and Infrastructure Operations teams, playing a key role in ensuring the reliability, automation, and performance of our analytics and data platforms.

This role is primarily DataOps-focused, combining elements of DevOps and SRE to sustain and optimize data-driven environments across global business units. You will manage end-to-end data operations, from SQL diagnostics and data pipeline reliability to automation, monitoring, and deployment of analytics workloads on cloud platforms.

Key Responsibilities

  • Manage and support data pipelines, ETL processes, and analytics platforms, ensuring reliability, accuracy, and accessibility
  • Execute data validation, quality checks, and performance tuning using SQL and Python/Shell scripting
  • Implement monitoring and observability using Datadog, Grafana, and Prometheus to track system health and performance
  • Collaborate with DevOps and Infra teams to integrate data deployments within CI/CD pipelines (Jenkins, Azure DevOps, Git)
  • Apply infrastructure-as-code principles (Terraform, Ansible) for provisioning and automation of data environments
  • Support incident and request management via ServiceNow, ensuring SLA adherence and root cause analysis
  • Work closely with security and compliance teams to maintain data governance and protection standards
  • Participate in Agile ceremonies within Scrum/Kanban models to align with cross-functional delivery squads

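To give candidates a flavor of the data validation and quality-check work described above, here is a minimal sketch of a post-load check on a pipeline output table. The table, column names, and thresholds are hypothetical examples, not part of the role definition; SQLite stands in for whatever warehouse the actual pipelines target.

```python
import sqlite3

def check_table_quality(conn, table, key_column, min_rows=1, max_null_rate=0.01):
    """Basic post-load validation: row count and null rate on a key column.

    Illustrative only -- real pipelines would run similar checks against a
    production warehouse and feed failures into the monitoring/alerting stack.
    """
    cur = conn.cursor()
    total = cur.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    nulls = cur.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {key_column} IS NULL"
    ).fetchone()[0]
    null_rate = nulls / total if total else 1.0
    return {
        "rows": total,
        "null_rate": null_rate,
        "passed": total >= min_rows and null_rate <= max_null_rate,
    }

# Demo against an in-memory table with made-up data
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (2, 25.5), (3, 7.2)])
result = check_table_quality(conn, "orders", "order_id")
print(result)  # {'rows': 3, 'null_rate': 0.0, 'passed': True}
```

In practice, checks like this would run as a step in the CI/CD or orchestration pipeline, with results exported to the observability tooling mentioned above.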
Required Skills & Experience

  • 8+ years in DataOps, Data Engineering Operations, or Analytics Platform Support, with good exposure to DevOps/SRE practices
  • Proficiency in SQL and Python/Shell scripting for automation and data diagnostics
  • Experience with cloud platforms (AWS mandatory; exposure to Azure/GCP a plus)
  • Familiarity with CI/CD tools (Jenkins, Azure DevOps), version control (Git), and IaC frameworks (Terraform, Ansible)
  • Working knowledge of monitoring tools (Datadog, Grafana, Prometheus)
  • Understanding of containerization (Docker, Kubernetes) concepts
  • Strong grasp of data governance, observability, and quality frameworks
  • Experience in incident management and operational metrics tracking (MTTR, uptime, latency)
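For context on the operational metrics mentioned in the last bullet, MTTR (mean time to resolve) is simply the average elapsed time from incident open to resolution. A minimal sketch, using made-up incident timestamps (in practice these would come from a ticketing system such as ServiceNow):

```python
from datetime import datetime, timedelta

def mean_time_to_resolve(incidents):
    """MTTR: average elapsed time from open to resolution.

    `incidents` is a list of (opened_at, resolved_at) datetime pairs.
    """
    durations = [resolved - opened for opened, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)

# Hypothetical incident log: one 90-minute and one 30-minute incident
incidents = [
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 10, 30)),
    (datetime(2024, 1, 2, 14, 0), datetime(2024, 1, 2, 14, 30)),
]
print(mean_time_to_resolve(incidents))  # 1:00:00, i.e. 60 minutes
```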

Qualifications

Must-have skills: Python (strong), SQL (strong), DevOps - AWS (strong), DevOps - Azure (strong), Datadog