The first round was telephonic; they asked basic questions on Spark, Scala, Hive, Sqoop, Oozie, and cluster configuration.
The next day, a Friday, I was called for a face-to-face interview at the Lowe's office. The interview process took 2 hours.
- What is combineByKey?
- Explain SCD Type 1 logic.
- Difference between an edge node and a data node.
- Where will the code be deployed: on the edge node or in the cluster?
- YARN architecture.
- Which versions of Spark have you worked with?
- Difference between SchemaRDD and DataFrame.
- Different ways to create a DataFrame.
- What is a bundle in Oozie?
- What is a fork action in Oozie?
- The distcp command.
- How do you decide the number of mappers in a Sqoop job?
- What is the optimal number of mappers, provided there is no restriction on establishing connections to the database?
- How do you pull CLOB and BLOB data types from Oracle to HDFS?
- Semi-join and anti-join in Scala.
- Difference between a logical plan and a physical plan. Where can we see the logical plan?
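For the combineByKey question, a minimal sketch may help. Spark's `RDD.combineByKey` takes three functions: `createCombiner` (build an accumulator from the first value seen for a key), `mergeValue` (fold another value into an accumulator within a partition), and `mergeCombiners` (merge accumulators across partitions). The sketch below simulates those semantics on a plain Scala collection (no cluster needed) for the classic per-key average; the object and helper names are illustrative, not from the interview.

```scala
object CombineByKeyDemo {
  // createCombiner: first value for a key -> (sum, count) accumulator
  val createCombiner: Double => (Double, Int) = v => (v, 1)
  // mergeValue: fold another value into an existing accumulator (within a partition)
  val mergeValue: ((Double, Int), Double) => (Double, Int) = {
    case ((sum, cnt), v) => (sum + v, cnt + 1)
  }
  // mergeCombiners: merge accumulators produced on different partitions
  val mergeCombiners: ((Double, Int), (Double, Int)) => (Double, Int) = {
    case ((s1, c1), (s2, c2)) => (s1 + s2, c1 + c2)
  }

  // Simulates rdd.combineByKey(createCombiner, mergeValue, mergeCombiners)
  // followed by mapValues { case (sum, cnt) => sum / cnt }.
  def avgByKey(pairs: Seq[(String, Double)]): Map[String, Double] =
    pairs.groupBy(_._1).map { case (k, kvs) =>
      val (sum, cnt) = kvs.map(_._2).foldLeft(Option.empty[(Double, Int)]) {
        case (None, v)      => Some(createCombiner(v))
        case (Some(acc), v) => Some(mergeValue(acc, v))
      }.get
      k -> sum / cnt
    }

  def main(args: Array[String]): Unit = {
    val scores = Seq(("math", 90.0), ("math", 80.0), ("sci", 70.0))
    println(avgByKey(scores)) // math -> 85.0, sci -> 70.0
  }
}
```

On a real RDD the same three functions would be passed directly to `combineByKey`; the point of the three-function split is that `mergeCombiners` lets partial results from different partitions be combined without shuffling raw values.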
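For the semi-join/anti-join question, the semantics can be stated in plain Scala: a left semi-join keeps left-side rows whose key exists on the right (right columns are never returned), and a left anti-join keeps left-side rows whose key does not. This is a hedged sketch of the semantics only; the object and method names are illustrative.

```scala
object JoinDemo {
  // Left semi-join: keep left rows whose key appears on the right.
  // Roughly what Spark's df1.join(df2, Seq("key"), "left_semi") returns.
  def semiJoin[K, V](left: Seq[(K, V)], rightKeys: Set[K]): Seq[(K, V)] =
    left.filter { case (k, _) => rightKeys.contains(k) }

  // Left anti-join: keep left rows whose key does NOT appear on the right.
  // Roughly what Spark's df1.join(df2, Seq("key"), "left_anti") returns.
  def antiJoin[K, V](left: Seq[(K, V)], rightKeys: Set[K]): Seq[(K, V)] =
    left.filterNot { case (k, _) => rightKeys.contains(k) }

  def main(args: Array[String]): Unit = {
    val orders    = Seq(("c1", "order-1"), ("c2", "order-2"), ("c3", "order-3"))
    val customers = Set("c1", "c3")
    println(semiJoin(orders, customers)) // orders for known customers: c1, c3
    println(antiJoin(orders, customers)) // orders for unknown customers: c2
  }
}
```

As for the logical-plan question: in Spark SQL, calling `explain(true)` on a DataFrame prints the parsed, analyzed, and optimized logical plans along with the physical plan.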
The following metrics were computed from 2 interview experiences for the Lowe's Big Data Engineer role in Bengaluru, Karnataka.
Lowe's interview process for its Big Data Engineer roles in Bengaluru, Karnataka is fairly selective, rejecting a large portion of the engineers who go through it.
Candidates reported a very positive impression of Lowe's Big Data Engineer interview process in Bengaluru, Karnataka.