Research: AI May Help You Code, But It Won’t Help You Master the Skills


AI may help you finish certain kinds of work, like coding, more quickly. But that speed may come at the cost of improving at what you do and mastering new skills along the way, new research suggests.

The AI company Anthropic, which makes the ChatGPT competitor Claude, conducted the study with 52 junior software engineers. Participants completed a short warm-up, worked through a series of Python-based coding tasks, and were then quizzed on the skills they had picked up. The whole process took about an hour and 15 minutes.

The researchers found that the AI-assisted group completed the tasks two minutes faster than the non-AI group but performed significantly worse on the quiz afterward. The AI group scored an average of 50% on the post-task quiz, compared with 67% for the non-AI coding group. The largest gap between the two groups was on debugging questions, which asked programmers how to fix broken code.

“Cognitive effort – and even getting painfully stuck – is likely important in promoting mastery,” the researchers said. “It’s also a lesson that applies to how individuals choose to work with AI and the tools they use.”

The study also found that it was not just whether programmers used AI that affected skill acquisition, but how they used it. The researchers identified several common patterns among participants who scored high and low on the post-task test.

The lowest-performing participants had either delegated all of their coding to the AI outright or had started coding manually before handing the work over to the AI. Participants who had the AI debug their code directly, rather than asking it about the errors, also tended to perform poorly on the subsequent test.

Meanwhile, programmers who asked the AI why the generated code worked (and then followed up with additional questions) performed much better. Those who took a hybrid approach, asking the AI to explain the code as it generated it, saw even better results. And participants who asked only "conceptual" questions, requesting explanations of concepts and problems rather than letting the AI do the work directly, performed by far the best on the test.


These results come as companies like Google and Microsoft set ambitious goals for integrating AI into their codebases, with Meta saying it plans to have more than 50% of its code written by AI. Even NASA's cutting-edge missions aren't immune to AI-written code: in December, commands generated by Anthropic's Claude, under human supervision, were sent to NASA's Perseverance rover on Mars.

And even though the coders in this study finished faster when using AI, whether AI-assisted coding actually saves time remains a matter of debate. A study from Metr, an AI research nonprofit, found earlier this year that AI actually slowed down the programmers it tested, because the time they spent prodding the AI equaled or exceeded the time its assistance saved.
