Cloud Data Engineer (AWS/Azure)

Rackspace Technology is a multicloud solutions expert, combining its expertise with the world's leading technologies across applications, data, and security to deliver end-to-end solutions.

Description For Cloud Data Engineer (AWS/Azure)

Rackspace Technology is seeking a skilled Cloud Data Engineer (AWS/Azure) for its Cloud Data Services Team in Bangalore, India. This customer-facing role involves working with cutting-edge technologies to build, optimize, and maintain large-scale data processing frameworks in the cloud. The ideal candidate will collaborate with customers to design and deliver efficient, scalable data solutions.

Key responsibilities include:

  • Leveraging AWS services (S3, EMR, Athena, Glue) or equivalent cloud platforms for data storage and processing
  • Demonstrating proficiency with the Ab Initio product suite, including PDL, Meta-programming, Conduct>It, Express>It, and GDE-based development
  • Implementing and maintaining CI/CD pipelines using tools like Jenkins and GitHub
  • Working with schedulers such as Apache Airflow to automate workflows and data pipelines (a minimal sketch follows this list)
  • Automating tasks and processes using Python, shell scripting, and other relevant programming languages
  • Developing, troubleshooting, and optimizing PL/SQL queries for complex data workflows
  • Working extensively on Big Data platforms like Hadoop and Hive to process large datasets
  • Driving the development of large-scale, self-service enterprise data frameworks
  • Working with open table formats such as Apache Iceberg (prior experience highly desirable)
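
To make the Airflow and AWS items above more concrete, here is a minimal sketch (not Rackspace's actual code) of a daily DAG that triggers a Glue job and then refreshes Athena partitions via boto3. The job, database, and bucket names are hypothetical and used purely for illustration.

    from datetime import datetime

    import boto3
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Hypothetical names for illustration only.
    GLUE_JOB_NAME = "daily-sales-transform"
    ATHENA_OUTPUT = "s3://example-bucket/athena-results/"


    def start_glue_job():
        """Kick off a Glue ETL job run that writes curated data back to S3."""
        glue = boto3.client("glue")
        run = glue.start_job_run(JobName=GLUE_JOB_NAME)
        return run["JobRunId"]


    def refresh_athena_partitions():
        """Make newly written S3 partitions queryable from Athena."""
        athena = boto3.client("athena")
        athena.start_query_execution(
            QueryString="MSCK REPAIR TABLE daily_sales",
            QueryExecutionContext={"Database": "curated"},
            ResultConfiguration={"OutputLocation": ATHENA_OUTPUT},
        )


    with DAG(
        dag_id="daily_sales_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        transform = PythonOperator(task_id="run_glue_job", python_callable=start_glue_job)
        repair = PythonOperator(
            task_id="refresh_partitions", python_callable=refresh_athena_partitions
        )

        transform >> repair

In practice the second task would typically poll the Glue job run until completion (or use a dedicated sensor); the fire-and-forget version above is kept short to show the scheduling pattern only.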

Rackspace Technology offers a collaborative work environment and has been recognized as a best place to work by Fortune, Forbes, and Glassdoor. The company is committed to embracing technology, empowering customers, and delivering the future; it values diversity and unique perspectives, and welcomes applications from individuals of all backgrounds.


Responsibilities For Cloud Data Engineer (AWS/Azure)

  • Build, optimize, and maintain large-scale data processing frameworks in the cloud
  • Collaborate with customers to design and deliver efficient, scalable data solutions
  • Implement and maintain CI/CD pipelines
  • Automate workflows and data pipelines
  • Develop and optimize complex data workflows
  • Process large datasets using Big Data platforms
  • Drive development of enterprise data frameworks

Requirements For Cloud Data Engineer (AWS/Azure)

  • Proficiency with AWS services (S3, EMR, Athena, Glue) or equivalent cloud platforms
  • Experience with the Ab Initio product suite (PDL, Meta-programming, Conduct>It, Express>It, GDE-based development)
  • Knowledge of CI/CD pipelines using tools like Jenkins and GitHub
  • Familiarity with schedulers such as Apache Airflow
  • Python and shell scripting skills
  • PL/SQL query development and optimization
  • Experience with Big Data platforms like Hadoop and Hive
  • Knowledge of enterprise data frameworks
  • Exposure to open table formats like Apache Iceberg (desirable; see the sketch after this list)
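
As one way to picture the Iceberg requirement, here is a minimal PySpark sketch, assuming a Spark cluster (for example on EMR) with the Iceberg runtime already on the classpath. The catalog, bucket, and table names are hypothetical, and real deployments would normally use the Glue catalog rather than the simple Hadoop catalog shown here.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import current_timestamp

    # Hypothetical catalog "lake" backed by an S3 warehouse path.
    spark = (
        SparkSession.builder.appName("iceberg-sketch")
        .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.lake.type", "hadoop")
        .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse/")
        .getOrCreate()
    )

    # Create an Iceberg table and append a couple of rows.
    spark.sql(
        "CREATE TABLE IF NOT EXISTS lake.analytics.events "
        "(event_id BIGINT, event_type STRING, ts TIMESTAMP) USING iceberg"
    )

    df = (
        spark.createDataFrame([(1, "click"), (2, "view")], ["event_id", "event_type"])
        .withColumn("ts", current_timestamp())
    )
    df.writeTo("lake.analytics.events").append()

    spark.sql(
        "SELECT event_type, count(*) AS n FROM lake.analytics.events GROUP BY event_type"
    ).show()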
