Python: The One Language LLMs Can’t Resist
🎻 No way Lacrimosa makes a reference to AI’s 10X productivity boost
Before we get in - meet RaiseFunding.io.
YC? Greylock Edge? HF0? Discover, compare, and apply to accelerators that match your vision.
Welcome to HackerPulse Dispatch! From high-profile corporate shifts to subtle yet critical technical vulnerabilities, staying informed has never been more important. This week, we examine major moves like Microsoft’s full integration of GitHub and the unexpected consequences of AI coding tools favoring certain languages.
Plus, the practical challenges developers face when lofty productivity promises clash with real-world implementation. We also explore emerging security concerns, highlighting how protocols like MCP and HTTP/2 introduce new attack surfaces that could compromise systems if overlooked.
Here’s what new:
😏 GitHub Folds Into Microsoft Following CEO Resignation — Once Independent Programming Site Now Part of ‘CoreAI’ Team: GitHub’s independence has ended as Microsoft fully absorbs the platform into its CoreAI division, with CEO Thomas Dohmke stepping down to pursue a new startup.
🐍 AI’s Serious Python Bias: Concerns of LLMs Preferring One Language: AI coding tools show a strong bias toward Python, raising concerns that devs may rely too heavily on the language at the expense of better-suited alternatives.
🎶 Requiem for a 10X Engineer Dream: AI coding tools promise huge productivity gains, but real-world use shows they often demand micromanagement, deliver partial results, and leave devs with growing LLM fatigue.
🛡️ MCP Vulnerabilities Every Developer Should Know: MCP is rapidly becoming the standard HTTP for AI, but serious security flaws, including prompt injection, broken authentication, and supply chain vulnerabilities, are leaving devs at significant risk
❗ HTTP/2: The Sequel Is Always Worse: HTTP/2’s complexity and widespread downgrading have created new, serious security risks that can be exploited through desync, request tunnelling, and header parsing vulnerabilities.
GitHub Folds Into Microsoft Following CEO Resignation — Once Independent Programming Site Now Part of ‘CoreAI’ Team (Read Paper)
GitHub’s independence is officially over. The platform, long celebrated as the backbone of modern coding collaboration, is now being folded directly into Microsoft’s CoreAI team. Former CEO Thomas Dohmke announced he will step down, choosing to pursue his own startup ambitions after overseeing GitHub’s explosive growth under Microsoft ownership.
While GitHub has thrived since its $7.5 billion acquisition in 2018, this transition marks a major shift in how the site will be managed going forward. Developers are left wondering whether GitHub’s AI-powered future will strengthen the platform—or dilute its original spirit.
Key Points
Leadership shift: Thomas Dohmke is stepping down as CEO after three years at the helm, remaining until year’s end to oversee GitHub’s integration into Microsoft. He has hinted at starting a new venture, potentially building a GitHub successor.
AI-first direction: By placing GitHub under its CoreAI division, Microsoft is doubling down on AI coding tools like Copilot. The move suggests that AI-assisted development will become GitHub’s central focus.
Uncertain future: While Microsoft is unlikely to discontinue GitHub, skepticism lingers. Past acquisitions like Skype serve as cautionary tales, leaving developers wary about how deeply GitHub’s culture and independence will survive.
AI’s Serious Python Bias: Concerns of LLMs Preferring One Language (🔗 Read Paper)
AI is reshaping software development, but not without its quirks. A new study from King’s College London highlights how large language models show a heavy bias toward Python, often generating it even when another language would be a better fit.
In fact, Python appeared in 90–97% of benchmark tasks, and Rust wasn’t selected once.
This raises questions about how much developers should trust AI to guide language choices in real-world projects. While Python’s dominance makes sense given its role in machine learning and vast open-source base, the bias could reinforce itself in ways that slow innovation.
Key Points
Language bias: Research shows LLMs overwhelmingly default to Python, using it even when alternatives like Rust or Java might be more suitable.
Feedback loop: Because AI is trained on open-source Python, it generates more Python, which devs then publish, feeding the cycle. Over time, this could crowd out languages that excel in domains like safety or performance.
Developer responsibility: Experts suggest devs should take control by explicitly prompting for other languages, comparing multiple implementations, and practicing beyond Python.
Requiem for a 10X Engineer Dream (🔗 Read Paper)
The hype around AI coding tools often claims productivity boosts of 10x, but the reality looks far less dramatic. Recent hands-on experiments with Claude Code show that while these tools can help with repetitive tasks, they often demand such detailed prompts that devs end up programming in Markdown instead of code.
Worse, the process can feel like micromanaging an erratic teammate—watching the tool swing between sensible and nonsensical solutions. Instead of freeing up time, many devs find themselves spending hours fixing or steering AI outputs. The result is not a revolution in productivity but a creeping sense of LLM fatigue.
Key Points
Micromanagement trap: To make AI coding tools effective, devs must write overly detailed specifications, essentially doing the hard work upfront. This recreates the old waterfall development problem, only now in the form of prompts.
False productivity: While AI can generate partial solutions, devs often spend as much or more time fixing errors and polishing results. The promised 10x boost rarely materializes outside of hype.
Fatigue factor: Constant prompting and monitoring leads to exhaustion rather than flow. Instead of enhancing creativity, AI risks turning coding into a draining cycle of micromanagement.
MCP Vulnerabilities Every Developer Should Know (🔗 Read Paper)
MCP adoption is accelerating across the AI ecosystem, with major players like Microsoft, Google, and GitHub integrating the protocol into their tools. But as adoption grows, so do the risks; security researchers are already uncovering vulnerabilities that could lead to data theft, code execution, and cross-tenant leaks.
The new MCP v2025-06-18 spec attempts to fix authentication and token handling, but many servers and tools in the wild still ignore best practices. Real-world incidents, from GitHub repository leaks to Asana’s cross-tenant data breach, show that MCP weaknesses are being actively exploited. Unless devs and companies take protocol security seriously, the HTTP for AI could end up repeating the worst mistakes of early web infrastructure.
Key Points
Tool description injection: Malicious metadata in tool descriptions can silently inject harmful instructions, creating invisible prompt injection attacks that users never see.
Authentication failures: Despite spec requirements, many MCP servers ship with broken or missing OAuth, leaving thousands of endpoints exposed to the internet without protection.
Supply chain poisoning: Unvetted MCP packages and Docker images introduce critical vulnerabilities; a single compromised library update can expose entire systems.
HTTP/2: The Sequel Is Always Worse (🔗 Read Paper)
HTTP/2 is often mistaken for a simple transport-layer upgrade, but its adoption has exposed critical new security risks. Researchers have identified multiple HTTP/2-exclusive desync and request smuggling vulnerabilities that target both high-profile websites and complex infrastructure stacks.
These attacks exploit protocol downgrading, pseudo-header handling, and message-length ambiguities to hijack clients, poison caches, and steal credentials. Case studies include major platforms like Netflix, Atlassian, and Amazon’s Application Load Balancer, revealing severe real-world consequences.
Novel tooling and techniques are now being used to detect and exploit these vulnerabilities, providing unprecedented insight into HTTP/2’s security landscape.
Key Points
HTTP/2 desync attacks: Downgrading HTTP/2 to HTTP/1.1 introduces H2.CL and H2.TE vulnerabilities, enabling request smuggling and allowing attackers to hijack client requests or manipulate server behavior.
Request tunnelling exploits: HTTP/2’s pseudo-headers and header parsing quirks let attackers bypass front-end controls, leak internal headers, and perform cache poisoning using desync-powered tunnels.
Tooling and detection: Custom HTTP/2 stacks and updated scanners like HTTP Request Smuggler and Burp Suite Inspector enable researchers to reliably detect and exploit these previously hidden vulnerabilities.
🎬 And that's a wrap! Keep an eye out for our next edition of tech updates.