
How to stay relevant in 10 years?

Staff Software Engineer at Taro Community
17 days ago

I recently stumbled upon a post on LinkedIn claiming that some experts do not anticipate engineers writing code by 2030.

While this prediction may not materialize exactly as stated, let's assume that DSA and coding may no longer be the most significant skills.

How would you suggest we develop skills for relevance in the future?

Please share inputs focused on the tech domain. While I agree that business impact would be an important metric, I am more curious to hear the group's thoughts on tech.

PS: I am someone whose expertise is strictly in backend development.


Discussion

(3 comments)
    Tech Lead/Manager at Meta, Pinterest, Kosei
    17 days ago

    I honestly don't have a great idea of how the tech landscape will change in 10 years. That's a long time!

    I find planning at that timescale is unproductive since it's so ambiguous. Who would have predicted in 2014 that 2024 would look like it does today?

    My advice (mostly copied from YC) is to work with smart people who are living a few years into the future. Figure out the companies or open source projects they're involved in, and try to contribute there.

    Tech Lead @ Robinhood, Meta, Course Hero
    15 days ago

    Honestly, just learn to write extremely good code and be adaptable.

    Code Quality

    GenAI tools have been around for a couple of years now, and their code quality is largely garbage. This is a fundamental problem I don't see being solved, as these tools are trained on internet code samples, and those are mostly terrible, focusing purely on getting stuff to work. I think if you can truly master the concepts from here, you'll be fine: [Course] Level Up Your Code Quality As A Software Engineer

    Adaptability

    There will always be garbage legacy code. In fact, even FAANG is filled with garbage legacy code (I would know, I wrote 1,000+ commits cleaning it up). There's no way AI will rewrite those entire codebases to be clean (even if it could, companies wouldn't take that risk). This means that if you're someone who can dive into any codebase, no matter how giant, old, or messy, and hit the ground running quickly, you are extremely valuable. This is why at Meta, very high adaptability was effectively a requirement for E6 (Staff).

    Here's a good thread about valuable non-technical skills in the age of AI as well: "What are product skills and how to develop them in the age of ChatGpt and CopilotX?"

      Thoughtful Tarodactyl
      Taro Community
      15 days ago

      I would push back on the code quality claim.

      1. Yes, it is trained on the internet, but all LLMs go through "post-training" where they are "aligned" with human expectations. That's what made ChatGPT so powerful compared to GPT-3, and why it doesn't spew hate comments like what you see on Reddit. The same logic can 100% be applied to coding problems.
      2. We are seeing a shift from pre-training to inference-time compute. The main "aha" moment of OpenAI's o1 is that it's a fundamental shift from large-scale pre-training to letting the model "think". The key takeaway is that the longer you let the model think, the better its responses are. Quite frankly, o1 is nowhere close to having real utility yet, but I see it as a proof of concept.
      3. I think the current large flaw with LLMs is that they struggle with:
        1. large codebases
        2. understanding how different components link together
        3. having actual domain knowledge of the edge cases. For example, if we move from Google auth sign-up to magic links, what edge cases might come up in the product? Maybe the magic link returns different info (e.g., we might not get the user's name or a profile picture).
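      That last edge case can be made concrete with a small sketch. This is a hypothetical illustration (the provider names and payload fields are assumptions, not any real API): the point is that switching auth providers changes which profile fields you can rely on, so downstream code has to treat them as optional.

      ```python
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class UserProfile:
          email: str
          # Google auth typically supplies these; a magic link only proves
          # email ownership, so they may simply be absent.
          name: Optional[str] = None
          picture_url: Optional[str] = None

      def normalize_profile(provider: str, payload: dict) -> UserProfile:
          """Map provider-specific sign-up payloads onto one internal shape."""
          if provider == "google":
              return UserProfile(
                  email=payload["email"],
                  name=payload.get("name"),          # may still be missing
                  picture_url=payload.get("picture"),
              )
          if provider == "magic_link":
              # No identity payload beyond the verified email address.
              return UserProfile(email=payload["email"])
          raise ValueError(f"unknown provider: {provider}")
      ```

      Every consumer of `UserProfile` (display names, avatars, emails) now has to handle the `None` case, which is exactly the kind of product-level edge case an LLM without domain context tends to miss.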