
Why Taro if I can use ChatGPT for mentorship?

Anonymous User at Taro Community · 10 months ago

First of all, I'm a huge fan of Alex & Rahul and I want to grow in this community together with mentors, seniors & humans.

But lately, in meetings, I've stopped asking my seniors as many questions because they come with limited knowledge, whereas GPT can follow up with me and guide me expertly. It can even give me expert advice on tech career growth and many other areas.

I don't even open Stack Overflow anymore.

Discussion

(2 comments)
  • Head of Engineering at Capgemini · 10 months ago

    I'll give my response based on the main sources of value from human mentorship and coaching. I'll preface it by saying that Taro is good at bringing together people who are good at this.

    • Finding the right question to ask, or reframing the situation when the initial question isn't the real problem at hand. From my experience messing around with ChatGPT, it doesn't "zoom out" well and ask you whether there's something deeper behind what you're asking. The best mentors/coaches I've observed deliver most of their value from these observations, which leads to a reframe, saving me from going down a rabbit hole.
    • Continuity and carrying context over time. I've gotten the best results from people I've worked or interacted with over an extended period of time. You could leverage prompts to set context or reuse sessions with ChatGPT, but it's pretty hard to replicate a long-term human relationship. If it's purely transactional Q&A, you could get a pretty solid result from ChatGPT, but that's just a Google search on steroids.
    • Giving feedback beyond what was asked. ChatGPT won't tell you that there's a better way to frame the question you're asking, or give suggestions beyond the question itself (e.g., "it would be great if you talked to person X about this as well" or "you should document the learnings from this conversation and share them as a valuable artifact with your colleagues").
  • Tech Lead @ Robinhood, Meta, Course Hero · 10 months ago

    Great question! I think about it a lot, especially because the Taro team has spent so long figuring out how to integrate ChatGPT and other AI into our product to add more value.

    On top of the awesome points that Casey shared, here's what I've learned during this process, particularly with the value (and lack thereof) of AI:

    • It can't tell stories to deepen concepts - One of the most powerful ways to teach someone something is to back it up with a real-life anecdote from your own experience as an example. Humans are social animals, so we love stories. Since AI is this weird amalgamation of other people's knowledge, it has no personal experiences it can draw upon to make the principles it's teaching really resonate.
    • It doesn't give real answers to hard questions - Working any job is rarely sunshine and roses, and you're often going to run into these darker situations (which a lot of the questions on Taro are about). For example, if someone clearly has a toxic manager, people, especially those on Taro, will give the advice of "You should probably just leave the team ASAP, no job is worth your mental health". But something I've noticed about ChatGPT in particular is that it will always play it safe instead and give these extremely long, generic, non-opinionated responses about tactics you can try to improve the situation, even though the situation is almost certainly too far gone to repair. In short, ChatGPT rarely gives it to you straight.
    • It's just wrong a lot - This is especially true in the niche space Taro occupies - The software industry breaks a lot of rules and is filled with industry-specific jargon. For example, I literally just asked ChatGPT what to focus on for the L3 -> L4 promo at a Big Tech company (junior -> mid-level), and here's how it responded:
      • It gave 7 things - It's hard to focus on 7 things at once, especially as a junior engineer. This response was just overwhelming.
      • It talked about L5+ behavior instead - For example, the response said that you should try to lead teams, especially across XFN boundaries. It also talked about how you should develop business and product sense to choose the best projects. These are solidly L5/L6 behaviors, and it's dangerous advice to give an L3 (this is a prime recipe for a PIP). It barely talked about sharpening code quality, which is easily the most important skill to improve for the L3 -> L4 promo.

    In a nutshell, I think AI is excellent at solving relatively straightforward problems but struggles a lot when any form of nuance is introduced. If you need to write the most concise for loop in Python or write a clean project update email, AI is great - By all means use it. But for anything fuzzier, reputable humans are still far superior (at least for now).
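
    To make the "straightforward" end of that spectrum concrete, here's a rough, hypothetical sketch of the kind of for-loop cleanup AI handles reliably (a toy illustration, not from the actual ChatGPT response above):

        # Hypothetical illustration: a verbose loop that an AI assistant can
        # reliably collapse into something more concise.
        squares = []
        for n in range(10):
            if n % 2 == 0:
                squares.append(n * n)

        # The concise equivalent it would typically suggest:
        squares = [n * n for n in range(10) if n % 2 == 0]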

    If you want to go deeper into leveraging AI to supercharge your workflow, check out the in-depth case study we gave on it: [Case Study] How Engineers Unlock Huge Impact With Artificial Intelligence (AI)

    Here are some other great discussions about the intersection of AI and engineering as well: