Principal Engineer

Dun & Bradstreet unlocks the power of data through analytics, helping clients turn uncertainty into confidence, risk into opportunity, and potential into prosperity.
Data
Principal Software Engineer
Hybrid
5,000+ Employees
10+ years of experience
Enterprise SaaS

Job Description

Dun & Bradstreet, a global leader in data analytics with over 6,000 team members worldwide, is seeking a Principal Engineer to join their technology team in Hyderabad, India. This role combines big data engineering with cloud infrastructure expertise, focusing on building and maintaining scalable data pipelines using cutting-edge technologies like Apache Spark and Airflow.

The position requires a seasoned professional with 10+ years of experience in big data technologies, who will be responsible for architecting and implementing distributed data processing solutions. You'll work with cloud platforms (particularly GCP and AWS), manage complex data pipelines, and optimize resource allocation across multiple tenants.

This is an excellent opportunity for an experienced engineer who is passionate about data architecture and wants to work with a company that values innovation and continuous learning. The role offers a hybrid work environment and the chance to work with a global team on solutions that help businesses worldwide make better decisions.

The ideal candidate will bring deep expertise in Python programming, cloud infrastructure management, and big data technologies, along with the ability to create detailed designs and proof-of-concepts for new technical capabilities. You'll be part of a diverse, global community that values being data-inspired, relentlessly curious, and inherently generous.

Working at Dun & Bradstreet means joining a company with a rich history in data analytics that continues to evolve and innovate. The company culture promotes creativity and growth, offering opportunities to work on challenging problems while contributing to solutions that help businesses worldwide transform uncertainty into confidence.

Responsibilities For Principal Engineer

  • Design and develop scalable data pipelines using Apache Spark and Apache Airflow
  • Design and implement distributed data processing solutions
  • Document pipelines and datasets for clarity and maintainability
  • Manage and optimize cloud-based data infrastructure
  • Develop and manage workflows using Apache Airflow
  • Create detailed designs and proof-of-concepts for new workloads
  • Manage workloads and optimize resource allocation
  • Collaborate with data science teams

Requirements For Principal Engineer

Python
  • 10+ years of hands-on experience in Big Data technologies
  • Minimum 3 years of experience working with Spark and PySpark
  • Experience with Google Cloud Platform (GCP), particularly with Dataproc
  • 6 years of experience in cloud environments
  • Hands-on experience in managing cloud-deployed solutions
  • Experience with NoSQL and Graph databases
  • Expert-level programming skills in Python