Code review is one of the most common and powerful ways for software engineers to learn. On top of increasing code quality, it builds engineering culture within a team.
I have been seeking to develop habits and follow a system to increase code quality, both in terms of readability and in reducing the number of functional issues. I have come up with a few ideas, listed below, but I am curious what other people have been trying out! Have any of the below worked for you? What specific steps do you take to ensure code quality?
What’s the best way to handle a situation as a tech lead when you are under pressure to deliver your own tasks and also ensure that the other team members are on track? If one developer slacks off and doesn’t meet the deadline, is it automatically the lead's fault, especially if the situation was raised well in advance? If the person continues to under-deliver with poorly written code, is it okay to spend time pair programming to get the work done just to meet deadlines? Keep in mind that the developer is a junior engineer and requires a fair bit of hand-holding.
I had to approve a PR even though I was not 100% confident in it; it wasn’t breaking code and wouldn’t have caused any backward compatibility issues. I received extremely harsh feedback from a peer/colleague for approving a substandard PR. Had I not approved the PR, my manager would have given me feedback about the delayed project. What’s the best way to navigate between two leads or managers who are not aligned and expect different outcomes from the same situation?
How do you keep up with everything that is happening on the team across different projects, design docs, and code reviews?
Hi guys, I found an interesting point in another thread, and I don't know what the correct response to it should be.
Here it is:
"You spent 12 hours of work on an issue and got a working solution. You submit a PR with only a day left before the sprint ends. One of your coworkers looks at your code, thinks it's good enough, but suggests a better solution you haven't thought of. What do you do?"
In particular, what if it's a bigger feature: I spent one week on it and would need another 3 days if I changed my code. What would you do?
When working on a new task, I often find myself asked to estimate how long it may take. Luckily, my team is pretty forgiving, but a critical next step for me is to start estimating tasks more accurately.
How do you get better at breaking down tasks to understand what needs to get done, and then giving proper estimations for how long those tasks will take?
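One common technique for turning a gut feel into a number (not mentioned in the post above; the function name and sample numbers are my own) is three-point, or PERT, estimation: for each broken-down subtask, estimate the best case, the most likely case, and the worst case, then take a weighted average.

```python
# Three-point (PERT) estimate: weight the most likely case 4x, so one scary
# worst case doesn't dominate, but it still pulls the estimate up.
# The weights below are the standard PERT ones.

def pert_estimate(optimistic: float, likely: float, pessimistic: float) -> float:
    """Weighted average of the three scenario estimates, in any time unit."""
    return (optimistic + 4 * likely + pessimistic) / 6

# Example: a subtask that takes 2 days if everything goes well, probably 4,
# and 8 if everything goes wrong:
print(pert_estimate(2, 4, 8))  # (2 + 16 + 8) / 6 ≈ 4.33 days
```

Summing the per-subtask estimates also forces the breakdown the question asks about: if you can't name the subtasks, you can't estimate them.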
Despite declining these requests and suggesting alternative solutions, such as recommending another engineer with more familiarity with the feature, the web team still approves these large Pull Requests. How can I effectively communicate my concerns about the code quality and potential issues or bad coding practices within these large PRs, and what steps can I take to ensure that the appropriate team members are reviewing and approving them?
Context about me & the external team: I'm a senior engineer and one of the code owners. I'm responsible for backend API changes, but I am often asked to review Pull Requests for other areas as well, including web (React, TypeScript). I get a lot of Pull Requests since our codebase is one large monolithic application that powers web, mobile, and backend platform services, all stored in one GitHub repo.
The external team is made up of people I do not work with directly, and they are new to our codebase, but they are pushing out new features as a major initiative from another team, and the stakes are high for their project to roll out. Lately, multiple bugs introduced by the external team have caused production issues, and I'm getting buy-in from my directors that we need to lay down some rules for changes from the external team.
I understand the idea and the purpose behind an atomic diff, but what would that look like in practice if we were using GitHub? My understanding of this is sort of vague.
I'll lay out two different approaches that I've been thinking about along with their pros/cons. Please let me know if I'm thinking about this correctly.
EDIT: After using GPT, I found an extremely useful resource from Google's documentation that outlines this problem very well and resolved my question (links below):
Tl;dr -- it's better to have smaller and more focused PRs (approach 2) since smaller PRs are easier to fix compared to larger ones. If there are merge conflicts when rebasing, the impact on subsequent PRs is mitigated by their smaller sizes. Other advantages include (1) faster feedback from peers -- especially if you're wasting time solving the wrong problem(s) or need feedback on your approach, and (2) allowing concurrent engineering while some PRs are in their review cycle. One feature can be broken up into multiple atomic changes (PRs).
With this, I would recommend creating a thread of PRs linked together, each describing its changes made during the development cycle. Each PR should serve its atomic purpose, and all of the changes combined should fulfill some story description (which gives additional context to the reviewer).
General principle: reduce the number of changes in each PR by making them smaller. Otherwise, each PR may take longer to be reviewed and accepted for submission, which can cause a chain effect on subsequent PRs as well.
Approach 1 (1 big PR with atomic commits):
Say we have a feature branch that we're working on. We keep each commit atomic. We provide a summary description of our PR, a test plan section, and a commit (diff) section in our PR outlining each commit hash and a short summary for each change (in bullet point fashion).
The Files Changed section on GitHub could be extremely long and contain 800-1000 lines of code, although we could keep each commit atomic to control the complexity and make it somewhat easier to review, with each commit containing 50-200 lines of code. Another pro is that we keep 1 PR as opposed to having multiple PRs, but the con is that changes might take longer to approve because of how many different commits there are to review...?
Meanwhile, there's approach 2, which is to keep several PRs (each 50-250 lines of code) and a master PR that aggregates all of the smaller PRs to outline a story of all the changes that are necessary for some feature we're working on. The pro is that smaller PRs are easier to review and easier to merge. The con is that we have to pay the cost of rebasing subsequent branches, especially if there are stacked dependencies from one PR to another.
I'm not really sure what the "best approach" would be. Any thoughts?
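To make the stacking in approach 2 concrete, here is a sketch (branch names are made up) of the base/head chain for a PR stack. Each dict mirrors the request body shape of GitHub's "create a pull request" REST endpoint (`POST /repos/{owner}/{repo}/pulls`): the key idea is that each PR's `base` is the previous branch in the stack, not trunk, so every diff stays small.

```python
# Sketch: build the PR payloads for a stack of branches. Only the bottom PR
# targets trunk; each later PR targets the branch below it, so GitHub shows
# just that PR's incremental diff. Branch names here are hypothetical.

def stacked_pr_payloads(branches: list, trunk: str = "main") -> list:
    """Return create-PR payloads where each PR is based on the one below it."""
    payloads = []
    base = trunk
    for i, head in enumerate(branches, start=1):
        payloads.append({
            "title": f"[{i}/{len(branches)}] {head}",
            "head": head,
            "base": base,  # previous branch in the stack, not trunk
        })
        base = head
    return payloads

for p in stacked_pr_payloads(["feat-db-schema", "feat-api", "feat-ui"]):
    print(p["head"], "->", p["base"])
# feat-db-schema -> main
# feat-api -> feat-db-schema
# feat-ui -> feat-api
```

When the bottom PR merges, the next PR gets retargeted onto trunk and rebased (GitHub can retarget open PRs automatically when their base branch is deleted on merge); this retarget-and-rebase step is the rebasing cost mentioned as the con of approach 2.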
I'm an E5 mobile engineer at a Big Tech company. Some engineers have a habit of leaving "drive-by comments" in code reviews that are only tangential to my PR. How should I push back on these?
I'm sure Alex and Rahul talk about this, but I'm wondering how to write clean code, particularly as someone who doesn't have technical people available to review my pull requests at work or in side projects.
I'm sure this skill is almost entirely developed by working with talented people and writing a lot of code, but is there a place for resources like the book "Clean Code" or courses on Udemy?
I am working on my P0 goals for this half; however, the XFN team is not communicating well. I am done with my changes and have been blocked on them for days. They don't review code as fast as I need and keep me waiting. How should I approach them? Also, I want to build a good relationship with them for good performance cycle feedback. I think they don't take me seriously. What is the right approach here?
Over the past year, we've focused on developing our product quickly, which has led to a significant amount of technical debt in our codebase. As a result, we've shipped applications without proper architecture and planning. However, when I suggest refactoring or re-architecting to my team, many of the senior developers and the engineering manager are skeptical. In my opinion, I am not suggesting refactoring just for the sake of refactoring; it's necessary because our current codebase lacks abstraction and separation of concerns and has slow development times.
In those situations, how can I reduce my team's skepticism and bring more productivity, better code quality, and best practices to my team?
Even though starting to work for a big company like Meta, Amazon, or Google is, I believe, hard to achieve (I haven't worked for one), somehow the path looks pretty straightforward: study for the interview, get the job, level up. Yes, I am sure it's hard and not many will do it, but you still know what should be done (even if you may not know how). But let me tell you a different story:
I work in a not-that-famous country in the EU, and none of the top tech companies are there. Actually, 90+% of the companies are outsourcing companies. As an SE with 10 years of experience in the outsourcing world, I can tell you how it works: you work on legacy code which is so old and so bad (hundreds of people have tried to write code there) that you can't see good practices at all; there are no code reviews (sometimes there are bad ones, but that is very rare) and no unit tests; performance reviews are only about the client's feedback; and so on. You get the point. It's about the money only, and nobody cares if you are good or not as long as the client is happy. In very rare cases I have started something from scratch, but all of my colleagues were programmers as bad as myself, so we messed it all up. It's a deadlock. After 10 years I realized I am a bad programmer, and I've seen so many bad practices that I have no passion to do anything anymore. Now to the questions:
The ultimate goal of my career (and maybe my life) is to fill the gap not only in my own skills but to create a company (product-based or outsourcing) where everyone who joins has a chance to become a great programmer. But before helping others, I need to help myself. This is how I found Taro.
Currently, we have tech debt mainly discoverable via a custom annotation in the code.
Some examples are:
The core tech debt is easy to address and manageable via project boards, but the problem is the rest (the product-related debt), which often gets out of date and never gets resolved, for reasons like:
I believe the product-related tech debt should be owned and tracked by the product team, but often some tech debt is left behind in favor of speed, like breaking up complex screens and classes, finding better approaches, etc.
What's the best way to promote ownership and address such tech debt, so that it is handled during code reviews rather than left in the code as comments indefinitely, or as a ticket on some project board that never gets addressed?
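For reference, the "custom annotation" pattern the question describes might look something like this in Python (all names here are hypothetical, not from the original post). The point is that debt is declared next to the code it affects and collected into a registry, so a CI job can report on it instead of letting it rot as a comment:

```python
# Hypothetical tech-debt annotation: each piece of debt is recorded in a
# registry at import time, so tooling can report on it (or fail CI when debt
# has no tracking ticket). Names and data shape are illustrative only.

TECH_DEBT_REGISTRY = []

def tech_debt(reason, owner, ticket=None):
    """Decorator that records a piece of debt against a function."""
    def wrap(fn):
        TECH_DEBT_REGISTRY.append({
            "where": f"{fn.__module__}.{fn.__qualname__}",
            "reason": reason,
            "owner": owner,
            "ticket": ticket,  # None = exactly the "never tracked" case
        })
        return fn  # behavior is unchanged; this only records metadata
    return wrap

@tech_debt(reason="screen does too much; split into components",
           owner="product-team")
def render_checkout_screen():
    ...

# A CI job or pre-release check can then nag (or fail) on unticketed debt:
untracked = [d for d in TECH_DEBT_REGISTRY if d["ticket"] is None]
print(len(untracked), "debt item(s) nobody has filed a ticket for")
```

A check like this makes ownership enforceable in review: the reviewer only has to verify that new debt carries an owner and a ticket, rather than remembering to follow up on a comment.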
Context: Working in a small startup where there are a very small number of wiser senior engineers
Everyone in the development team contributes to code reviews
Often I’m unsure about the quality of the feedback and when to act upon it
Sometimes people even change their minds after you have acted on their feedback, which is frustrating and can contribute to delays in finishing a task
Does this also happen in big tech? How do you handle these situations?
I have had issues with some of my teammates where code or docs don't get reviewed early and a ton of feedback is left the day before the deadline. This is after sending multiple reminders almost every day leading up to the deadline, and I don't like escalating because it feels like I am snitching on them. I do give reminders in our weekly standup and our common channel.
I got tired of this happening and set up a code bot to send reminders to everyone in the code channel, and I have considered sending out email reminders (nobody else really does this) or booking a meeting in advance before I even start coding. I tried a concept similar to an RFC, but I still don't get feedback in time. I get it, people are busy, but they just completely ignore me; sometimes I don't even get a Slack message saying they are busy.
I am an SDE 1 if that helps
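For what it's worth, the core of a reminder bot like the one mentioned above can stay very simple: filter out reviews that have been idle too long and escalate only those, so the reminders stay targeted rather than becoming noise. A sketch (the PR data shape is made up; in practice it would come from the GitHub API or internal tooling):

```python
from datetime import datetime, timedelta, timezone

# Core of a review-reminder bot: pick out PRs that have waited too long.
# The dict shape for a PR is hypothetical, for illustration only.

def stale_reviews(prs, max_idle, now=None):
    """Return PRs still awaiting review after max_idle, oldest first."""
    now = now or datetime.now(timezone.utc)
    stale = [p for p in prs
             if p["state"] == "awaiting_review"
             and now - p["requested_at"] > max_idle]
    return sorted(stale, key=lambda p: p["requested_at"])

now = datetime(2024, 5, 3, tzinfo=timezone.utc)
prs = [
    {"id": 101, "state": "awaiting_review",
     "requested_at": datetime(2024, 5, 1, tzinfo=timezone.utc)},
    {"id": 102, "state": "approved",
     "requested_at": datetime(2024, 4, 28, tzinfo=timezone.utc)},
]
for p in stale_reviews(prs, timedelta(hours=24), now=now):
    print(f"Reminder: PR #{p['id']} has waited "
          f"{(now - p['requested_at']).days}+ days for review")
```

Only PR #101 triggers a reminder here; escalating just the genuinely stale items (and only after a stated SLA) also gives you an objective record if you do eventually need to raise it with a manager.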
When a colleague sends me a pretty large PR to review, I usually have no idea what’s going on. I know it’s not expected to pull their branch down but how do you usually go through it and add constructive feedback? I usually just wait until someone else has approved it (after their given feedback is incorporated) and then I hit approve too 😬
We often talk about how, when getting feedback during a PR or design review, there are mistakes that creep up repeatedly, at least in my case. Any tips to ensure that the learnings stick?
At my company, I've been receiving much slower code reviews than usual. I think this is because I'm on a platform team where each individual member is very specialized in their focus area. We're encouraged to tag a specific person (and I've been assigned a specific person for my code reviews) as well as the team for broader visibility.
However, I'm often stuck waiting 24-48 hours before receiving anything and always need to ping the person who has been assigned to my reviews. It felt pretty awkward needing to do that all the time, so I brought it to my manager. My manager essentially let me know that we don't have a good process around code reviews yet and are still figuring it out, and that it could be a good opportunity for me to bring a good process to the table. He recommends pinging people in the meantime and not viewing it as bothering them. I am open to doing that, but I'm still wondering if there is a better way.
I have 2 questions:
I was hired to be an Android developer for a startup. Other engineers on my team bounce around between back-end, web, iOS, and Android. I make sure to review every Android PR that comes in, but at the same time I have strived to review code from other projects like back-end and iOS. My primary language is Kotlin, so I can offer advice on that, but since the other projects are written in TypeScript/Swift, I am not sure how to offer reviews, although I want to. I mostly end up asking questions.
My team owns a really wide set of features spanning across multiple repos and programming languages (C++, React, Angular). On top of that, each person works on pretty different areas (telemetry/data, UI components, client side api, etc.). We have a v1 of the app being maintained and a v2 currently being developed with some people working on v1 and some working on v2. It's impossible to know all the context behind every pull request.
How can I give quality code reviews? At most I can only provide feedback on simple code logic and really well-written PR descriptions. Should I be privately messaging the developer for each PR to understand more, or is this just something that comes with time?
My work will often times span across multiple services within Uber, which means that I need to write code in codebases outside of my team's. Sometimes when I do this, it takes a long time for my commit to land as there's many rounds of follow-up questions, leading to a lot of rebases, revisions, and time spent on my end. I understand that they're trying to be the best stewards of their codebase, but this can be a bit frustrating sometimes. How can I make this process smoother and land these kinds of changes more quickly?
Since the engineer implementing the task went in a different direction from what I had in mind, I now feel like I have to dive into the code, explain the approaches I wrote up, and get involved in the code much more than I would like. How can I overcome this?
Since our company is so small (< 10 engineers), we do code reviews across platforms – so I review iOS and web code even though I’m an Android engineer.
I haven’t gotten any negative feedback on my code reviews yet, but I want to make sure I’m doing them properly. How can I suggest improvements to other engineers’ code, even if they’re on a different platform, or if they’re more senior than me?
A good code review process is needed to catch bugs, increase maintainability, and guarantee high velocity for future contributors.