My team made a ton of changes (e.g., different UI, different SDKs) when rearchitecting a large feature last year. When metrics were impacted during the rollout, we couldn’t tell which change caused the degradation, so we had to ramp down the experiment repeatedly last quarter. We’ll likely run into lots of unanticipated issues when ramping back up this quarter as well. Given this uncertainty, what should our OKR for this feature be? Committing to a 100% rollout seems like setting the team up for failure. My team is also working on another project this quarter, but we want to make some progress on this feature. Since it’s unlikely we can hit an OKR for 100% rollout, would it make sense to have an OKR for, say, committing X% of our time to feature Y? We’ll likely ramp up, discover some issues, ramp down, resolve them, then ramp back up, repeatedly. Unfortunately we don’t know what issues we’ll run into ahead of time.
You want concrete progress. It may be better to state the goal in terms of testing, e.g.: automate testing for all issues discovered during the Q4 rollout attempt, do small-population testing to identify new issues, and establish a full deployment plan targeting end of Q2’23.
Saying you’ll be X% rolled out isn’t something you can take concrete actions to achieve: you don’t know what issues you’ll uncover, whether you’ll have to roll back, etc. See if shadow testing (e.g., doing the work of the feature on a segregated fleet, in a hidden place, whatever) can help you gain confidence without user impact. Also work to set acceptable regression thresholds and time frames to address them. If zero regression is the bar, you will find yourselves correcting or optimizing things far outside of scope to compensate, and it will snowball.
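To make the shadow-testing idea concrete, here is a minimal sketch under the assumption that the feature handles request/response traffic; all names (`old_feature`, `new_feature`, `handle`) are hypothetical. Users are always served by the stable path, while the new path runs on the same input in the dark and mismatches are logged for offline analysis:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow")

def old_feature(request: dict) -> dict:
    # Existing, trusted implementation (placeholder logic).
    return {"items": sorted(request["items"])}

def new_feature(request: dict) -> dict:
    # Rearchitected implementation under evaluation (placeholder logic).
    return {"items": sorted(request["items"])}

def handle(request: dict) -> dict:
    served = old_feature(request)           # users always get this result
    try:
        shadow = new_feature(request)       # new path runs in the shadow
        if shadow != served:
            log.warning("shadow mismatch: %s vs %s", shadow, served)
    except Exception:
        log.exception("shadow path failed")  # never impacts the user
    return served
```

Because the new path's output and failures are only logged, you can collect mismatch rates and error counts as your confidence metric instead of tying the OKR to a rollout percentage.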