Data Engineer

Digital platform focused on improving lives in Latin America through auto, real estate, and corporate benefits ecosystems.
Data
Mid-Level Software Engineer
Hybrid
1,000 - 5,000 Employees
3+ years of experience
Finance

Description For Data Engineer

Creditas is a leading Latin American fintech company that's revolutionizing financial services through its digital platform focused on auto, real estate, and corporate benefits. Founded in 2012 and backed by over $829 million in venture capital, the company has grown to over 2,500 employees across offices in São Paulo, Valencia, and Mexico City.

As a Data Engineer at Creditas, you'll join the Product Technology team, which plays a central role in scaling the business, optimizing processes, and delivering exceptional customer experiences. You'll work with modern technologies including Python, Spark, and AWS services to build and maintain scalable data solutions that power the company's financial products.
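
To make the stack above concrete, here is a minimal, hypothetical PySpark sketch of the kind of batch job such a role typically involves: reading raw events from S3 and writing partitioned Parquet for downstream querying. The bucket names, paths, and columns are illustrative placeholders, not actual Creditas resources.

```python
# Minimal sketch, assuming Spark with S3 access configured (e.g. via hadoop-aws).
# All bucket, path, and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("loan_events_daily").getOrCreate()

# Read raw JSON events landed by an upstream ingestion process.
events = spark.read.json("s3a://example-raw-bucket/loan_events/2024-01-01/")

# Light cleanup: drop malformed records and derive a partition column.
daily = (
    events
    .where(F.col("event_id").isNotNull())
    .withColumn("event_date", F.to_date("event_timestamp"))
)

# Write partitioned Parquet for downstream consumers (e.g. Athena or Redshift Spectrum).
(
    daily.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-curated-bucket/loan_events/")
)

spark.stop()
```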

The role offers an exciting opportunity to work with distributed systems and modern data architecture, implementing solutions that directly impact millions of customers across Latin America. You'll be responsible for designing and maintaining data pipelines, ensuring data governance, and optimizing AWS-based solutions.

The ideal candidate should have a strong foundation in Python and SQL, with an understanding of distributed systems. Experience with tools like Docker, Airflow, AWS services, and the Hadoop ecosystem would be advantageous. Creditas offers a comprehensive benefits package and promotes a diverse, inclusive work environment where innovation and problem-solving are highly valued.

This position provides an excellent opportunity to join one of the world's most promising fintechs, recognized by KPMG, Business Insider, and Glassdoor, while working on meaningful problems that improve people's financial lives across Latin America.


Responsibilities For Data Engineer

  • Design, build, and maintain scalable and reliable data pipelines and database systems using tools like Spark, Airflow, Redash, EMR, and Redshift (a minimal orchestration sketch follows this list)
  • Design, build, maintain, and optimize AWS-based solutions for scalable data processing
  • Implement CI/CD processes for data solutions
  • Implement data solutions using cutting-edge technologies like Spark, Python, and various AWS services (S3, Redshift, EMR, Athena, Glue)
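
As a hedged illustration of the orchestration pattern in the bullets above, the sketch below shows a small Airflow DAG (assuming Airflow 2.4+ with the Amazon provider) that submits a Spark step to an existing EMR cluster and waits for it to complete. The DAG id, cluster id, script path, and connection id are hypothetical placeholders, not a description of Creditas's actual pipelines.

```python
# Minimal sketch, assuming Airflow 2.4+ and apache-airflow-providers-amazon installed.
# Cluster ID, S3 script path, and connection ID are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.emr import EmrAddStepsOperator
from airflow.providers.amazon.aws.sensors.emr import EmrStepSensor

SPARK_STEP = [
    {
        "Name": "loan_events_daily",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-bucket/jobs/loan_events_daily.py"],
        },
    }
]

with DAG(
    dag_id="loan_events_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Submit the Spark step to an already-running EMR cluster.
    submit_spark_job = EmrAddStepsOperator(
        task_id="submit_spark_job",
        job_flow_id="j-EXAMPLECLUSTERID",  # hypothetical EMR cluster ID
        steps=SPARK_STEP,
        aws_conn_id="aws_default",
    )

    # Block until the submitted step finishes (or fails).
    wait_for_spark_job = EmrStepSensor(
        task_id="wait_for_spark_job",
        job_flow_id="j-EXAMPLECLUSTERID",
        step_id="{{ task_instance.xcom_pull(task_ids='submit_spark_job')[0] }}",
        aws_conn_id="aws_default",
    )

    submit_spark_job >> wait_for_spark_job
```

In practice, a DAG like this would typically be deployed through the CI/CD process mentioned above, so pipeline changes are tested and reviewed before reaching production.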

Requirements For Data Engineer

Python
PostgreSQL
  • Interest in the data field
  • Basic Python programming skills
  • SQL knowledge
  • Understanding of distributed systems
  • Commitment to quality

Benefits For Data Engineer

Medical Insurance
Dental Insurance
Mental Health Assistance
Parental Leave
Education Budget
  • Meal Allowance (Creditas Card)
  • Group Life Insurance
  • Health Insurance (SulAmérica)
  • Dental Insurance (SulAmérica)
  • Corporate University (Academy)
  • Mental Health Support (Wellz)
  • Gym Access (Wellhub)
  • Birthday Day Off
  • Childcare Assistance
  • Parental Leave (6 months for the gestating parent, 35 days for the other parent)
  • Transportation Allowance or Parking
  • Pharmacy Benefits
  • Salary Advance Option
  • Financial Education Program
