
Using ChatGPT Like a Junior Dev: Productive, But Needs Checking

2025/09/24 14:12

AI coding assistants like ChatGPT are everywhere now. They can scaffold components, generate test cases, and even debug code. But here’s the catch: they’re not senior engineers. They don’t know your project’s history, and they won’t automatically spot when the tests themselves are wrong.

In other words: treat ChatGPT like a junior dev on your team — helpful, but always needing review.

My Experience: Fixing Legacy Code Against Broken Tests

I was recently working on a legacy React form validation feature. The requirements were simple:

  • Validate name, email, employee ID, and joining date.
  • Show error messages until inputs are valid.
  • Enable submit only when everything passes.
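
For context, here is a rough sketch of the kind of component those requirements describe: plain React with controlled inputs. The names (EmployeeForm, the validators map) and most of the rules are illustrative assumptions, not the actual legacy code; only the future-date error message comes from the tests quoted below.

// Illustrative sketch only; the real legacy component differs.
import { useState } from "react";

// Rules are assumptions for the sketch, except the future-date message,
// which matches what the test suite expects.
const validators = {
  name: (v) => (v.trim() ? "" : "Name is required"),
  email: (v) => (/^\S+@\S+\.\S+$/.test(v) ? "" : "Email is invalid"),
  employeeId: (v) => (/^\d+$/.test(v) ? "" : "Employee ID must be numeric"),
  joiningDate: (v) =>
    new Date(v) > new Date() ? "Joining Date cannot be in the future" : "",
};

export default function EmployeeForm({ onSubmit }) {
  const [values, setValues] = useState({
    name: "",
    email: "",
    employeeId: "",
    joiningDate: "",
  });

  // Recompute every error on each render; submit stays disabled until clean.
  const errors = Object.fromEntries(
    Object.entries(values).map(([field, value]) => [field, validators[field](value)])
  );
  const isValid = Object.values(errors).every((msg) => msg === "");

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        onSubmit(values);
      }}
    >
      {Object.keys(values).map((field) => (
        <label key={field}>
          {field}
          <input
            name={field}
            value={values[field]}
            onChange={(e) => setValues({ ...values, [field]: e.target.value })}
          />
          {errors[field] && <span role="alert">{errors[field]}</span>}
        </label>
      ))}
      <button type="submit" disabled={!isValid}>
        Submit
      </button>
    </form>
  );
}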

The tricky part? I didn’t just have to implement the form — I had to make it pass an existing test suite that had been written years ago.

I turned to ChatGPT for help, thinking it could quickly draft a working component. It generated a solution — but when I ran the tests, they kept failing.

At first, I thought maybe I had misunderstood the requirements, so I asked ChatGPT to debug. We went back and forth multiple times. I provided more context, clarified each input validation rule, and even explained what the error messages should be. ChatGPT suggested fixes each time, but none of them worked.

It wasn’t until I dug into the test suite myself that I realized the real problem: the tests were wrong.

\


The Test That Broke Everything

One test hard-coded "2025-04-12" as a “future date”:

changeInputFields("UserA", "user@email.com", 123456, "2025-04-12");
expect(inputJoiningDate.children[1])
  .toHaveTextContent("Joining Date cannot be in the future");

The problem? We’re already past April 2025. That date is no longer in the future, so the expected error message would never appear. The component was fine — the tests were broken.

I had to dig through the logic, analyze the assumptions, and rewrite the test with relative dates, like so:

// Corrected test using relative dates
const futureDate = new Date();
futureDate.setDate(futureDate.getDate() + 30); // always 30 days ahead
const futureDateStr = futureDate.toISOString().slice(0, 10);

changeInputFields("UserA", "user@email.com", 123456, futureDateStr);
expect(
  screen.getByText("Joining Date cannot be in the future")
).toBeInTheDocument();

This small change makes the test time-proof: it will pass regardless of when it runs.
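
A different way to get the same safety, if your suite runs on Jest 26+ with modern fake timers, is to pin the clock so hard-coded fixture dates keep their meaning. This is a sketch under that assumption; changeInputFields is the suite’s own helper from the examples above.

// Alternative sketch: freeze "now" with Jest's modern fake timers so a
// hard-coded date like "2025-04-12" stays in the fixture's future.
beforeEach(() => {
  jest.useFakeTimers();
  jest.setSystemTime(new Date("2025-01-01")); // the test's notion of "today"
});

afterEach(() => {
  jest.useRealTimers();
});

test("rejects a joining date in the future", () => {
  changeInputFields("UserA", "user@email.com", 123456, "2025-04-12");
  expect(
    screen.getByText("Joining Date cannot be in the future")
  ).toBeInTheDocument();
});

Fake timers can clash with async Testing Library utilities unless configured carefully, so in this case the relative-date fix was the lower-friction option.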

Lessons Learned

  1. AI will follow broken requirements blindly - ChatGPT can’t tell that a test is logically invalid. It will try to satisfy the failing test, even if the test itself makes no sense.

  2. Treat the output like a junior dev’s PR - ChatGPT’s suggestions were useful as scaffolding, but it never found the root cause. I had to step in, dig through the legacy code, and analyze the tests myself.

  3. Tests can rot too - Hard-coded dates, magic numbers, or outdated assumptions make test suites brittle. If the tests are wrong, no amount of component fixes will help.

  4. Relative values keep tests reliable - Replace absolute dates or values with calculations relative to today, as in the helper sketched below. This keeps your tests valid no matter when they run.
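
To make point 4 concrete, a tiny helper keeps the intent visible at the call site. daysFromToday is a hypothetical name, and changeInputFields is the suite’s helper from earlier:

// Hypothetical helper: a "YYYY-MM-DD" string a given number of days from today.
function daysFromToday(offset) {
  const d = new Date();
  d.setDate(d.getDate() + offset);
  return d.toISOString().slice(0, 10);
}

// Usage: whether a date is future or past is obvious at a glance.
changeInputFields("UserA", "user@email.com", 123456, daysFromToday(30));  // future date
changeInputFields("UserB", "user2@email.com", 654321, daysFromToday(-30)); // safely in the past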

How to Work Effectively With AI Tools

  • Give context, but don’t rely on it to reason like a senior dev.

  • Ask “why”, and inspect its explanations carefully.

  • Validate everything yourself — especially when working with legacy code.

  • Iteratively refine — use AI as scaffolding, but you own the fix.

Closing Thoughts

My experience taught me a simple truth: AI can accelerate coding, but it cannot replace human judgment, especially when dealing with messy, legacy code and outdated tests.

Treat ChatGPT like a junior teammate:

  • Helpful, eager to please, fast.
  • Sometimes confidently wrong.
  • Needs review, oversight, and occasionally, a reality check.

If you keep that mindset, you’ll get the productivity boost without blindly following bad guidance — and you’ll know when to dig in yourself.


💡 Takeaway: When working with code, the human developer is still the ultimate problem-solver. AI is there to assist, not to replace your reasoning.
