For high level decisions such as designing a database schema, system design, deciding tech stack, etc., I have a tendency to spend a lot of time figuring out what the "best" design is. For example, today I spent hours figuring out what the best data schema would be for some structs I plan to store in a database. Another time, I spent hours figuring out what websocket library to use.
For these high-level tasks, I spend most of my time browsing online to figure out what the tradeoffs between different libraries are. For example, with the websocket libraries, the language I work in has 3 well-known ones. I spent hours combing through Reddit and Hacker News to see what people's experiences were with the latency/throughput of each library, what the developer experience is like, and how easily each integrates into existing systems.
I think the issue is that switching libraries midway through development costs a lot of time when you discover the library isn't what you wanted. So I spend a lot of time trying to get it right the first time. This is unlike code where I write a lot of code, get it working, then refactor as necessary.
I wonder how people get over analysis paralysis for these high level decisions, and what are some strategies to mitigate risks when you make a mistake (choose wrong architecture, wrong database, etc.)?
For side projects my go-to is to ask friends (or people in Taro) what the status quo is and do that. Don't overthink it. Use Perplexity to speed up research: give it your details and ask it to make a suggestion.
I feel like frameworks follow the 90/10 rule: 90% of the features will be similar and only 10% differ. For side projects you'll almost certainly not hit these niche requirements.
Also, in terms of databases, schemas are supposed to evolve. Most libraries have some migration handler to deal with changing schemas, so you definitely shouldn't spend more than a few minutes worrying about the schema.
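To make that concrete, here's a minimal sketch (using SQLite and a made-up `users` table, purely for illustration) of an additive migration when a new requirement shows up after v1 shipped:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# v1 schema: just enough for today's queries.
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada')")

# v2 migration: a new requirement appears, so add a column.
# Additive changes like this are cheap; existing rows get the default.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT DEFAULT ''")

row = conn.execute("SELECT name, email FROM users").fetchone()
print(row)  # ('ada', '')
```

Migration frameworks are essentially sequences of steps like this, applied in order, which is why getting the schema perfect on day one matters less than it feels like it does.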
In general, most learning happens when you can no longer do what you did before because of a new constraint. When you naturally hit these constraints, your learning 10x's.
"I think the issue is that switching libraries midway through development costs a lot of time when you discover the library isn't what you wanted"
There's
1.) Switching libraries/architectures because the requirements were poorly defined or a use case popped up that the team didn't know about, and
2.) Switching libraries/architectures because as a company/organization matures, the business use case changes.
Point number 2 above will happen at some point (it's called technical debt), so there's no point trying to optimize for a use case that hasn't happened yet.
Regarding trying to optimize for point number 1, I echo Sai's point in that you typically want to pick the most stable dependency with the most community backing, OR the dependency that your company typically uses, IMO. This is because a battle-tested dependency with a large community is more likely to already cover the use cases you haven't discovered yet, and you'll find answers more easily when you do hit them.
This is an incredible question!
Today, you're making decisions in software design, tomorrow they will be about large system architectures, and some day perhaps business decisions. Not to mention the countless life decisions we need to make along the way.
Making decisions that are almost always 'right' is a key skill and it can play a big role in determining our life trajectory. It comes with experience, but you can fast track it by asking on forums like these.
On the tactical front, I'd identify the desired outcome and use that to reverse-engineer the properties of the desired system.
For example, for the database schema, write down the queries you need to run and let that inform the schema.
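As a sketch of that queries-first approach (the `messages` table and its columns here are hypothetical, just to show the direction of the derivation):

```python
import sqlite3

# Step 1: write down the queries you actually need to run.
QUERIES = [
    "SELECT body FROM messages WHERE channel = ? ORDER BY sent_at DESC LIMIT 50",
    "SELECT COUNT(*) FROM messages WHERE channel = ?",
]

# Step 2: the queries dictate the columns and the index, not the other way around.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE messages (
        id      INTEGER PRIMARY KEY,
        channel TEXT NOT NULL,
        body    TEXT NOT NULL,
        sent_at REAL NOT NULL
    )
""")
# The "WHERE channel ... ORDER BY sent_at" pattern suggests this compound index.
conn.execute("CREATE INDEX idx_channel_time ON messages (channel, sent_at)")

conn.execute(
    "INSERT INTO messages (channel, body, sent_at) VALUES ('general', 'hi', 1.0)"
)
results = [conn.execute(q, ("general",)).fetchall() for q in QUERIES]
print(results)
```

If a column or index isn't needed by any query on the list, it doesn't go in the schema yet.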
For websocket libraries, identify your current and projected need for latency/throughput and devex to make a list of the properties the library must have. Then it becomes a matter of sifting through the docs to identify the right candidates quickly.
This approach will keep you from going down rabbit holes to identify the "best" candidates that might have features you don't care for.
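The websocket sifting step can be reduced to a simple checklist filter. A toy sketch (the library names and property flags below are entirely made up, just to show the mechanic):

```python
# Properties your use case actually requires, from the working-backwards step.
MUST_HAVE = {"backpressure", "tls", "auto_reconnect"}

# Hypothetical candidates with the properties you found in their docs.
candidates = {
    "lib_a": {"backpressure", "tls", "auto_reconnect", "compression"},
    "lib_b": {"tls", "auto_reconnect"},
    "lib_c": {"backpressure", "tls", "auto_reconnect"},
}

# Keep only libraries covering every must-have; extra features don't matter.
shortlist = sorted(name for name, props in candidates.items() if MUST_HAVE <= props)
print(shortlist)  # ['lib_a', 'lib_c']
```

Anything that survives the filter is "good enough," and you can pick among the survivors on soft criteria (devex, familiarity) without another research spiral.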
The deeper answer is to have an easier relationship with "best." :)
By nature, software requirements are never fully understood ahead of time. So we will be wrong from time to time, and that's not a personal failure or a lack of diligence. Researching too deeply also carries the cost of indecision and burnout, and yields diminishing returns with time spent.
Working backwards from the identified properties will enable you to balance "best" with "good enough" or "right for now" and make progress quickly while feeling reasonably confident about your choice.
Does this answer your questions?
It's definitely good that you are doing as much research as possible. Doing hours of research is pretty typical, especially if you consider the impact of the decision: depending on how mature your company is, your choice could influence thousands of engineers and millions of users.
One thing to keep in mind is that everyone's use case is different, so make sure you get the full context around what you read on the internet. It's possible that someone is using the library for a pet project that will never need to scale. Here are some other things to consider: