
American Express Big Data Engineer Interview Experience - Phoenix, Arizona

September 1, 2025
Positive Experience · No Offer

Process

Screening call followed by an interview via Teams. The interview lasted 40 minutes. It started with questions about my skills and tech stack, then my current project responsibilities. The discussion covered the following:

I have worked extensively with reporting and ETL tools across different environments. My experience includes working with:

  • dbt (Data Build Tool): Mainly for transformation-focused ELT processes on cloud data warehouses.
  • Matillion: Cloud-native ELT for data platforms like Snowflake, Redshift, and BigQuery.
  • Informatica PowerCenter: For advanced data governance and handling very large datasets in enterprise data warehouses.

To optimize ETL/ELT processes, I have applied various performance-tuning and governance techniques, improving both pipeline speed and data quality.


Example Project: Ford Motors (Data Assessment & Migration)

One of the major projects I worked on was with Ford Motors, where I was responsible for data assessment, migration planning, and ETL pipeline development.

Scope & Responsibilities:

  • Analyzed the SQL Server database schema and identified critical tables for migration.
  • Performed data assessment and migration planning to move data from SQL Server to Snowflake.
  • Developed Azure Data Factory (ADF) pipelines for batch and streaming data processing.
  • Handled deployment, optimization, and version control for ETL pipelines.
  • Managed the end-to-end lifecycle: development, testing, and production deployments.

Key Achievements:

  • Successfully migrated 50 TB of data from SQL Server to Snowflake.
  • Achieved 30% cost reduction in storage and compute by optimizing ELT processes.
  • Improved query performance by 40% through optimized star schema modeling.

Technical Strengths

  • Strong experience integrating and implementing ETL on cloud platforms using SQL and Snowflake.
  • Set up GCP and Snowflake environments to handle extraction, transformation, and loading of data.
  • Automated ETL processes using GCP Cloud Functions, scheduling, and monitoring.
  • Advanced SQL skills (self-rated 4 out of 5).
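The Cloud Functions automation mentioned above can be sketched roughly as follows. All names here (the `raw_sales` table, the `etl_stage` stage, the helper functions) are hypothetical illustrations, and the Snowflake connection is stubbed out, so this is a minimal sketch of the pattern rather than the actual implementation:

```python
# Minimal sketch of a storage-triggered ETL step in the style of a GCP
# Cloud Function: a file lands in a bucket, and the function builds the
# Snowflake COPY INTO statement to load it. Names are hypothetical.

def build_copy_statement(table: str, stage_path: str) -> str:
    """Build a Snowflake COPY INTO statement for a newly landed CSV file."""
    return (
        f"COPY INTO {table} "
        f"FROM '@etl_stage/{stage_path}' "
        f"FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )

def load_file(event: dict) -> str:
    """Entry point for a storage-finalize event.

    `event` carries the uploaded object's metadata; in production the
    statement would be executed through a Snowflake connector rather
    than returned.
    """
    sql = build_copy_statement("raw_sales", event["name"])
    # snowflake_cursor.execute(sql)  # omitted: connection setup is env-specific
    return sql
```

Calling `load_file({"name": "2025/09/01/sales.csv"})` would return the COPY statement for that day's file; scheduling and monitoring would sit around this in Cloud Scheduler and Cloud Logging.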

Questions

SQL Interview Scenarios

  1. Finding Duplicate Records

  2. Running Total Calculation

  3. Aggregations with ROLLUP

  4. Handling NULL Values in Aggregations
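The four scenarios can be sketched with a small self-contained SQLite example. The `sales` table and its columns are hypothetical sample data; note that SQLite lacks `GROUP BY ROLLUP`, so the subtotal query emulates it with `UNION ALL` (on Snowflake or SQL Server you would write `GROUP BY ROLLUP(region)` directly):

```python
import sqlite3

# Hypothetical sample data for the four interview scenarios.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (id INTEGER, region TEXT, product TEXT, amount REAL);
INSERT INTO sales VALUES
  (1, 'West', 'A', 100), (2, 'West', 'A', 100),  -- duplicate (region, product, amount)
  (3, 'West', 'B', 200), (4, 'East', 'A', 150),
  (5, 'East', 'B', NULL);                        -- NULL amount
""")

# 1. Finding duplicate records: GROUP BY the candidate key, keep groups > 1.
dups = conn.execute("""
    SELECT region, product, amount, COUNT(*) AS cnt
    FROM sales
    GROUP BY region, product, amount
    HAVING COUNT(*) > 1
""").fetchall()

# 2. Running total: SUM() over an ordered window.
running = conn.execute("""
    SELECT id, amount,
           SUM(amount) OVER (ORDER BY id) AS running_total
    FROM sales
    ORDER BY id
""").fetchall()

# 3. ROLLUP-style subtotals: per-region totals plus a grand-total row,
#    emulated with UNION ALL because SQLite has no GROUP BY ROLLUP.
rollup = conn.execute("""
    SELECT region, SUM(amount) AS total FROM sales GROUP BY region
    UNION ALL
    SELECT NULL, SUM(amount) FROM sales
""").fetchall()

# 4. NULLs in aggregations: aggregates skip NULLs, so COUNT(amount)
#    differs from COUNT(*); COALESCE substitutes a default where needed.
nulls = conn.execute("""
    SELECT COUNT(*) AS all_rows,
           COUNT(amount) AS non_null_amounts,
           SUM(COALESCE(amount, 0)) AS total
    FROM sales
""").fetchone()
```

Here `dups` surfaces only the `('West', 'A', 100)` pair, the running total simply carries forward over the NULL row, and `nulls` shows `COUNT(*)` returning 5 while `COUNT(amount)` returns 4.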


Interview Statistics

The following metrics were computed from 1 interview experience for the American Express Big Data Engineer role in Phoenix, Arizona.

Success Rate

0% pass rate

Based on this single report, American Express's interview process for their Big Data Engineer roles in Phoenix, Arizona appears highly selective.

Experience Rating

  • Positive: 100%
  • Neutral: 0%
  • Negative: 0%

Candidates reported a very positive impression of American Express's Big Data Engineer interview process in Phoenix, Arizona.
