
Data Engineer

Company: Global cloud-based software company specializing in customer relationship management (CRM) services
Compensation: $137,100 - $227,700
Category: Data
Level: Mid-Level Software Engineer
Work arrangement: Hybrid
Experience: 2+ years
Industry: Enterprise SaaS

Description For Data Engineer

Salesforce's Cloud Economics and Capacity Management (CECM) team is seeking a Data and Backend Engineer. This role focuses on building breakthrough features for internal customers while keeping applications stable and scalable. The team develops intelligent, data-driven tools for strategic decision-making in infrastructure expenditure and capacity management. You'll work with petabytes of data, applying advanced machine learning techniques to create actionable predictions and business insights.

The position offers a unique opportunity to work directly with customers while developing distributed systems with company-wide visibility. You'll be part of a modern, lean, self-governing product engineering team where versatility is key, from coding to requirements gathering and quality testing. The platform provides near real-time monitoring of infrastructure cost and capacity utilization, helping teams optimize prioritization and minimize costs.

As a Data Engineer, you'll work with cutting-edge technologies in big data processing, including Spark, Trino, Hive, and Airflow. The role requires strong expertise in data architecture, ETL processes, and SQL, with opportunities to work on automated data pipelines. You'll collaborate with engineers, architects, and product managers across global teams in AMER and APAC.
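
For a concrete feel for this stack, the sketch below shows a minimal, hypothetical Airflow DAG of the kind such a team might run: an hourly extract-transform-load sequence over usage data. Every name in it (the DAG id, task ids, and callables) is an illustrative assumption, not Salesforce's actual pipeline.

    # A minimal, hypothetical Airflow DAG: an hourly pipeline that extracts raw
    # usage records, builds rollups, and loads a summary table. All names are
    # illustrative assumptions, not the CECM team's real pipeline.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract_usage(**context):
        # Pull raw capacity/cost records from an upstream source (stubbed here).
        print("extracting raw usage records")


    def transform_usage(**context):
        # Aggregate raw records into per-service cost and utilization rollups.
        print("building hourly rollups")


    def load_rollups(**context):
        # Write the rollups to the analytics store queried by Spark/Trino.
        print("loading rollup table")


    with DAG(
        dag_id="capacity_cost_rollup",
        start_date=datetime(2024, 1, 1),
        schedule="@hourly",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_usage)
        transform = PythonOperator(task_id="transform", python_callable=transform_usage)
        load = PythonOperator(task_id="load", python_callable=load_rollups)

        extract >> transform >> load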

This position is ideal for someone passionate about building scalable, resilient distributed systems and turning massive volumes of operational data into valuable insights. The hybrid work environment in either San Francisco or Bellevue offers flexibility while maintaining team collaboration. Salesforce's strong company culture and focus on innovation make this an excellent opportunity for career growth in data engineering.


Responsibilities For Data Engineer

  • Develop, automate, enhance, and maintain scalable ETL pipelines (see the sketch after this list)
  • Independently design and develop resilient, reusable data automation frameworks
  • Own end-to-end delivery, including performance tuning, application monitoring, log analysis, and system operations
  • Investigate production issues, determine root causes, and resolve them
  • Work with internal team members and external partners to support data collection and analysis and to understand reporting needs
  • Collaborate with global teams across AMER and APAC
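
As referenced in the first bullet above, here is a minimal PySpark sketch of the kind of ETL transform these responsibilities describe: read raw usage events, aggregate cost and utilization per service per day, and write a partitioned rollup. The paths and column names are assumptions for illustration, not the team's actual schema.

    # A minimal, hypothetical PySpark ETL step: raw usage events in, a daily
    # per-service cost/utilization rollup out. Paths and columns are assumed.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("usage-rollup").getOrCreate()

    # Assumed input location; substitute the real raw-events source.
    raw = spark.read.parquet("s3://example-bucket/raw/usage_events/")

    rollup = (
        raw.groupBy("service", F.to_date("event_ts").alias("day"))
        .agg(
            F.sum("cost_usd").alias("total_cost_usd"),
            F.avg("cpu_utilization").alias("avg_cpu_utilization"),
        )
    )

    # Partitioning by day keeps downstream Spark/Trino scans cheap.
    rollup.write.mode("overwrite").partitionBy("day").parquet(
        "s3://example-bucket/rollups/daily_service_cost/"
    )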

Requirements For Data Engineer

  • Proficiency in Python
  • A related technical degree required
  • 2+ years of experience in data engineering, data modeling, automation, and analytics
  • Understanding of the data engineering tech stack, database designs, associated tools, system components, internal processes, and architecture
  • Ability to communicate status strategically and identify risks
  • Self-starter: highly motivated, able to shift direction quickly when priorities change
  • Hands-on expertise in building scalable data pipelines using standard methodologies in data modeling and ETL processes
  • Hands-on experience with distributed SQL analytics engines and big data technologies such as Spark, Trino, Hive, and Airflow (a brief query sketch follows this list)
  • Experience working with Agile and Scrum methodologies, incremental delivery, and CI/CD
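
As a small illustration of the distributed SQL requirement noted above, the following query (submitted here through PySpark; near-identical SQL would run on Trino) computes a monthly cost leaderboard over the hypothetical rollup table from the earlier sketch. It assumes that table has been written and can be registered as a view.

    # Hypothetical analytical query over the assumed daily_service_cost rollup.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("cost-trend").getOrCreate()

    # Register the earlier sketch's output so SQL can reference it by name.
    spark.read.parquet("s3://example-bucket/rollups/daily_service_cost/") \
        .createOrReplaceTempView("daily_service_cost")

    top_services = spark.sql("""
        SELECT service,
               SUM(total_cost_usd) AS month_cost_usd
        FROM daily_service_cost
        WHERE day >= DATE '2024-01-01' AND day < DATE '2024-02-01'
        GROUP BY service
        ORDER BY month_cost_usd DESC
        LIMIT 10
    """)
    top_services.show()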
