In this part of the course, we focus on feature engineering, a critical area where interviewers differentiate between average and exceptional ML candidates. This is our opportunity to show not just technical skill, but creativity and judgment.
- Mediocre feature engineering relies on basic, mechanical steps—like one-hot encoding categoricals or normalizing numericals—without much customization.
- Stellar candidates go further, creating new features from raw data (e.g., extracting time-of-day from timestamps, or combining variables into meaningful interactions); see the first sketch after this list.
- We show thoughtfulness by dropping irrelevant features, checking for multicollinearity, and aligning features closely with the problem's objectives (second sketch below).
- We demonstrate versatility by handling complex data types, such as transforming freeform text, working with embeddings, or aggregating time series effectively (third sketch below).
- Ultimately, we recognize that feature engineering can matter more than the model itself, and our job is to shape the data so that the model can truly learn from it.
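First, a minimal pandas sketch of the difference between mechanical and derived features: one-hot encoding a categorical column, then pulling time-of-day signal out of a timestamp and building an interaction feature. The DataFrame and its column names (`signup_ts`, `price`, `quantity`, `plan`) are hypothetical, invented only to illustrate the pattern.

```python
import pandas as pd

# Hypothetical raw data; column names are made up for illustration.
df = pd.DataFrame({
    "signup_ts": pd.to_datetime(["2024-01-05 08:30", "2024-01-06 22:15"]),
    "price": [9.99, 49.99],
    "quantity": [3, 1],
    "plan": ["free", "pro"],
})

# Mechanical step: one-hot encode a categorical column.
df = pd.get_dummies(df, columns=["plan"], prefix="plan")

# Derived features: extract signal from the raw timestamp.
df["signup_hour"] = df["signup_ts"].dt.hour              # time of day
df["signup_dayofweek"] = df["signup_ts"].dt.dayofweek    # weekday pattern
df["is_weekend"] = (df["signup_dayofweek"] >= 5).astype(int)

# Interaction feature: combine two variables into one meaningful signal.
df["order_value"] = df["price"] * df["quantity"]

print(df.head())
```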
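Second, one lightweight way to act on the "drop irrelevant features, check multicollinearity" point is a pairwise correlation screen (variance inflation factors are a common alternative). The 0.9 threshold, the helper name `drop_highly_correlated`, and the example columns are assumptions for this sketch, not a prescribed recipe.

```python
import numpy as np
import pandas as pd

def drop_highly_correlated(X: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    """Drop one column from each pair whose absolute correlation exceeds threshold."""
    corr = X.corr().abs()
    # Keep only the upper triangle so each pair is considered once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return X.drop(columns=to_drop)

# Hypothetical usage: "user_id" carries no signal, and "height_cm"/"height_in"
# are nearly perfectly correlated, so one of them should go.
X = pd.DataFrame({
    "user_id": [101, 102, 103, 104],
    "height_cm": [170.0, 180.0, 165.0, 175.0],
    "height_in": [66.9, 70.9, 65.0, 68.9],
    "age": [25, 32, 41, 29],
})
X = X.drop(columns=["user_id"])   # irrelevant identifier
X = drop_highly_correlated(X)     # removes one of the redundant height columns
print(X.columns.tolist())
```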
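Third, a sketch of two of the "complex data type" patterns: turning freeform text into numeric features with TF-IDF (a simple stand-in for learned embeddings) and aggregating event-level time series into per-user features. The `events` DataFrame, its columns, and the chosen aggregates are illustrative assumptions.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Freeform text -> numeric features (TF-IDF as a simple stand-in for embeddings).
reviews = ["great product, fast shipping", "slow shipping and poor quality"]
vectorizer = TfidfVectorizer(max_features=1000)
text_features = vectorizer.fit_transform(reviews)  # sparse matrix, one row per review

# Event-level time series -> per-user aggregate features.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "event_ts": pd.to_datetime(
        ["2024-03-01", "2024-03-05", "2024-03-20", "2024-03-02", "2024-03-03"]),
    "amount": [20.0, 35.0, 15.0, 5.0, 12.0],
})
agg = events.groupby("user_id").agg(
    n_events=("event_ts", "count"),
    total_amount=("amount", "sum"),
    mean_amount=("amount", "mean"),
    days_active=("event_ts", lambda s: (s.max() - s.min()).days),
)
print(agg)
```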
If you want to learn even more from Yayun: