🐱🐉 Will the Very Last Developer Be Kept in a Museum?
Look, AI, let’s be real: Where do you see yourself in 5 years?
We’ve crossed 4500 subscribers 🥳 Refer your friends and let’s send it to 5000!
Generative AI and large language models (LLMs) took center stage in 2023. Transformer-based systems like ChatGPT proved capable of turning casual, readable descriptions into functional code, stirring enthusiasm in the tech community.
The prospect that generative AI could transform anyone with a flair for writing into a proficient programmer is thrilling. But here’s the burning question: Will AI render developers obsolete?
New research on GitHub Copilot compares AI-assisted code to what a human would produce in terms of quality and maintainability: Is it similar to the contributions of a senior developer, or does it lean toward the disjointed work of a short-term contractor?
Nick Scialli, a senior software engineer at Microsoft, looks at what lies beyond code generation and shares his perspective on the truly difficult parts of being a software engineer.
Is there more to the story? Come, let’s seek insights into AI coding!
Question for 2024: Who’s on the Hook to Clean up the Mess?
In an era when AI has surged in popularity, and code lines are being added faster than ever before, the Coding on Copilot whitepaper from GitClear raises a pivotal question for 2024: “Who's on the hook to clean up the mess afterward?”
The new study asks whether AI-generated code is “more similar to the careful, refined contributions of a Senior Developer, or more akin to the disjointed work of a short-term contractor.” It comes as a counterpoint to a GitHub study from 2022 noting that developers using GitHub Copilot completed tasks 55% faster, with positive effects on productivity, satisfaction, and mental energy.
Looks like the need for speed ain't the whole story when it comes to coding.
For its recent study, GitClear collected and analyzed 153 million changed lines of code authored between January 2020 and December 2023. It projects that code churn – the percentage of lines reverted or updated less than two weeks after being written – will double in 2024 compared to its 2021 baseline.
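To make the metric concrete, here is a minimal sketch of how churn could be computed from per-line change records. The `LineRecord` shape and field names are invented for illustration – GitClear's actual pipeline is not public in this form – but the definition (a line counts as churned if it is reverted or updated within two weeks of being written) follows the study's wording.

```typescript
// Hypothetical record for one authored line of code (illustrative shape).
interface LineRecord {
  writtenAt: number;   // timestamp (ms) when the line was authored
  changedAt?: number;  // timestamp (ms) when it was updated/reverted, if ever
}

const TWO_WEEKS_MS = 14 * 24 * 60 * 60 * 1000;

// Churn rate: percent of lines reverted or updated < 2 weeks after authoring.
function churnRate(lines: LineRecord[]): number {
  if (lines.length === 0) return 0;
  const churned = lines.filter(
    (l) => l.changedAt !== undefined && l.changedAt - l.writtenAt < TWO_WEEKS_MS
  ).length;
  return (churned / lines.length) * 100;
}
```

For example, one line rewritten five days after authoring plus one line that was never touched again would give a churn rate of 50%.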
Plus, the rise of “added” and “copy-pasted” code relative to “updated,” “deleted,” and “moved” code suggests AI-generated code resembles the work of an itinerant contributor, possibly violating the DRY (Don't Repeat Yourself) principle.
So, what are the 3 significant changes since Copilot's rise?
Burgeoning Churn: There’s a strong correlation between Copilot use and the introduction of erroneous code. When developers wrote all the code themselves, the churn rate was negligible. With higher churn, there’s a greater chance of faulty code making it to production.
Less Moved Code Means Less Refactoring, Less Reuse: AI assistants discourage code reuse – that is, incorporating code that has already been tested and proven stable in production. Instead of refactoring and working to DRY code, AI assistants offer to repeat existing code.
More Copy-Pasted Code Implies Future Headaches: Few things threaten the long-term maintainability of code more than copy-pasted segments. When a non-keyword line of code is replicated, it implies the original implementation wasn’t evaluated for reuse, likely due to time constraints.
Opting to re-add code rather than reusing it creates a burden for future maintainers, who are tasked with the challenge of consolidating parallel code paths implementing frequently required functionalities. This not only complicates the codebase but also places the onus on subsequent developers to find and streamline redundant implementations.
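The DRY violation described above can be shown in a few lines. This is an invented example – the function names and the normalization rule are illustrative, not from the study – but it captures the pattern: two pasted copies of the same logic mean every future fix must land twice, while one shared helper gives maintainers a single code path.

```typescript
// Before: the same logic copy-pasted into two call sites.
// A future change (say, also stripping internal whitespace) must be made twice.
function normalizeEmailForSignup(email: string): string {
  return email.trim().toLowerCase();
}
function normalizeEmailForLogin(email: string): string {
  return email.trim().toLowerCase(); // duplicate of the line above
}

// After: one tested, reusable implementation (DRY).
function normalizeEmail(email: string): string {
  return email.trim().toLowerCase();
}
```

An AI assistant that only “adds” code tends to produce the first shape; consolidating into the second is exactly the refactoring work the study finds is declining.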
The Perks of Being a (Good) Software Engineer
The truth is, generating code was never the hard part. Nick Scialli believes the true value of a developer is largely in the work that happens before code generation.
While coding is certainly part of the equation, a developer’s skillset goes far beyond it. The crux of a developer’s professional value lies in tasks preceding it: requirements, clarification, negotiation, technical design, and tradeoff analysis.
A lot of a developer’s work is about quickly noticing shortcomings in requirements and determining how best to integrate a given function into a large codebase.
For an AI model to write something complex that integrates into an existing, extensive codebase, it should be asking clarifying questions and raising technical design considerations: Do the requirements fall short? Are they underspecified? Or is the proposed solution overly prescriptive?
Picture this: you open ChatGPT and prompt it to write a humble adder function in TypeScript. The task seems elementary. Yet the model should be asking the real questions: "How many numbers are we adding?" or "What if no input, or an invalid input, is used?"
Prior to that, the model should have understood the use case, assessing the necessity for an adder function. Once it did identify the need for the adder function and devised an implementation strategy, the next step would involve determining the optimal method for incorporating this function into an extensive codebase.
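For the sake of illustration, here is one possible adder that bakes those clarifying questions into the implementation – a sketch, not the one true answer; another team might legitimately decide that an empty call is an error rather than 0.

```typescript
// A "humble adder" that answers the clarifying questions in code.
function add(...numbers: number[]): number {
  // "What if no input is used?" — here we define the empty sum as 0.
  if (numbers.length === 0) return 0;
  // "What if an invalid input is used?" — reject NaN instead of silently
  // propagating it through the sum.
  if (numbers.some((n) => Number.isNaN(n))) {
    throw new TypeError("add() received a non-numeric value");
  }
  // "How many numbers are we adding?" — any number, via a rest parameter.
  return numbers.reduce((sum, n) => sum + n, 0);
}
```

So `add(1, 2, 3)` returns 6, `add()` returns 0, and `add(1, NaN)` throws. The point is that each of these behaviors is a design decision a human would have surfaced before writing a line.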
Sure, sure, in fairness to ChatGPT, it is designed as merely a generative AI model rather than a software engineer.
But as Scialli concluded, “as impressive as it may be to see code being generated, I have yet to see any AI that can do these other things—the truly hard parts of being a software engineer.”
Want to become a truly valuable programmer and solve problems better? Improve your thinking. No way AI can one-up you then!