
PySpark Azure Synapse Data Engineer

Capco, a Wipro company, is a global technology and management consulting firm supporting 100+ clients across the banking, financial services, and energy sectors.
Data
Mid-Level Software Engineer
Hybrid
1,000 - 5,000 Employees
3+ years of experience
Finance

Description For PySpark Azure Synapse Data Engineer

Capco, a Wipro company, is seeking a PySpark Azure Synapse Data Engineer to join its team. As a Data Engineer, you'll be responsible for designing, building, and maintaining data pipelines and infrastructure on Microsoft Azure. You'll work on transformative projects for major international and local banks, insurance companies, and payment service providers.

Key responsibilities include:

  • Extracting, transforming, and loading data from various sources
  • Developing and maintaining data models for warehousing and data lakes
  • Implementing security best practices for data storage and access
  • Monitoring and troubleshooting data pipelines and quality issues

Required skills:

  • Strong experience with Microsoft Azure Data Services (Data Factory, Data Lake Storage, Synapse Analytics, Azure SQL Database)
  • Proficiency in Python, PySpark, Scala, or SQL
  • Knowledge of data warehousing, modeling, and ETL/ELT principles
  • Experience with CI/CD processes (Maven, Git, Jenkins)
  • Familiarity with Airflow and Elastic
  • Understanding of SDLC and Agile processes
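
As one illustration of the data-quality side of the role, here is a minimal rule-based quality check in plain Python; the rules, field names, and sample records are assumptions for the example, not from the posting.

```python
# Minimal data-quality check sketch: flag rows that violate simple rules.
# Field names, rules, and sample data are illustrative assumptions.
rows = [
    {"id": 1, "amount": 100.0, "currency": "USD"},
    {"id": 2, "amount": -5.0,  "currency": "USD"},   # negative amount
    {"id": 3, "amount": 50.0,  "currency": None},    # missing currency
]

def quality_issues(row):
    """Return a list of rule violations for one record."""
    issues = []
    if row["amount"] is None or row["amount"] < 0:
        issues.append("negative or missing amount")
    if not row["currency"]:
        issues.append("missing currency")
    return issues

# Keep only records that failed at least one rule
bad = {row["id"]: quality_issues(row) for row in rows
       if quality_issues(row)}
print(bad)
```

In a real pipeline, checks like these would typically run as a validation step inside the pipeline (or as an Airflow task) and raise an alert when the failure rate crosses a threshold.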

Capco offers a tolerant, open culture that values diversity and creativity. It provides opportunities for career advancement and has been recognized as a top company for women in India. Join Capco to make an impact on transformative projects that are reshaping the energy and financial services landscape.

Last updated 10 months ago

Benefits For PySpark Azure Synapse Data Engineer

  • Career advancement opportunities
  • Diverse and inclusive work environment
