Before we jump in, let’s thank Superflex. Superflex is an extension for VS Code, Cursor, and Windsurf that converts Figma designs into production-ready code matching your coding style and design system, and reuses your UI components.
Get 30% off Superflex for 1 month. Use code HACKERPULSE.
Welcome to HackerPulse Dispatch! This week’s newsletter explores the rapidly shifting landscape of modern software development. From bold predictions by GitHub’s CEO to a critical security investigation of GitHub itself, we take a closer look at the ways AI is making devs indispensable, not replacing them.
While automation accelerates the pace of code generation, it also introduces new challenges around quality, security, and human oversight. We’re seeing both the promise and the pitfalls: AI tools are creating new opportunities for creators, while also exposing systemic weaknesses in the platforms and devices we rely on.
Whether it’s leaked credentials on GitHub, flawed content in production systems, or vulnerable consumer hardware, the need for human engineering has never been greater.
Here’s what’s new:
🐮 GitHub CEO Says the “Smartest” Companies Will Hire More Software Engineers Not Less as AI Develops: Thomas Dohmke is bullish on the future of devs, arguing that AI is supercharging rather than replacing them, and that this is the most exciting and accessible era yet for software creation.
🕵️ Guest Post: How I Scanned All of GitHub’s “Oops Commits” for Leaked Secrets: Security researcher Sharon Brizinov uncovered thousands of live secrets from deleted GitHub commits, including one with admin access to all Istio repos. This proved that once a secret is committed, it must be treated as compromised, even if deleted.
🛠️ “I’m Being Paid to Fix Issues Caused by AI”: AI is creating more work and income for skilled writers like Sarah Skidd, as businesses increasingly rely on humans to fix the often flawed content and code generated by AI tools.
🍾 Writing Code Was Never the Bottleneck: While LLMs make writing code faster, the true bottleneck remains the human effort required to understand, review, and maintain that code collaboratively.
👀 Exploiting the Ikko Activebuds “AI Powered” Earbuds, Running Doom, Stealing Their OpenAI API Key and Customer Data: The IKKO Activebuds earbuds expose serious security flaws, including enabled ADB access, direct OpenAI API connections with embedded keys, and insecure app endpoints that leak user data despite partial fixes.
GitHub CEO Says the “Smartest” Companies Will Hire More Software Engineers Not Less as AI Develops (🔗 Read More)
Despite layoffs and industry-wide uncertainty, GitHub CEO Thomas Dohmke is bullish on the future of software development. Speaking live at VivaTech in Paris, he called this “the most exciting time to be a developer,” thanks to AI tools that accelerate rather than replace coding.
From GitHub Copilot to no-code experiments, Dohmke argues that AI expands possibilities while increasing the demand for skilled engineers. He sees AI as a “supercharger” that fuels creativity, lowers entry barriers, and amplifies output — not a shortcut to eliminate talent. For those worried about being left behind, his advice is simple: learn the tools and lead the change.
Key Points
AI is multiplying dev impact, not reducing headcount: Dohmke believes companies will hire more developers, not fewer, to harness AI’s ability to amplify productivity. The smartest teams will see AI as a way to 10x output, not cut costs.
Coding is more accessible, but skill still matters: While AI lowers the entry barrier, complex software still requires deep engineering knowledge. Prompts can help you start, but scaling systems still demands real code fluency.
The next-gen devs will grow up with AI: Kids today will learn to build with AI the way Gen Z learned to swipe. From casual creators to infrastructure builders, the developer spectrum is expanding — and AI is powering the shift.
Guest Post: How I Scanned All of GitHub’s “Oops Commits” for Leaked Secrets (🔗 Read More)
White-hat hacker Sharon Brizinov has returned with a powerful follow-up to his viral “How I Made 64k From Deleted Files” post, this time targeting secrets hidden in deleted GitHub commits. Published via Truffle Security’s Research CFP, his latest project reveals how GitHub’s force-pushed commits, often assumed to be gone forever, are actually archived and retrievable.
Using the GitHub Event API and GH Archive data, Sharon built an automation pipeline that scans deleted commits for sensitive credentials. His open-source tool, Force Push Scanner, makes this process accessible to any organization looking to audit its GitHub footprint.
The results? Thousands of leaked secrets, including one that could’ve led to a catastrophic supply chain compromise.
Key Points
Secrets never die: Deleted commits don’t actually disappear. Sharon’s automation scanned zero-commit force pushes across GH Archive, recovered the deleted commits, and used TruffleHog to detect secrets, surfacing thousands of active credentials, some live for years.
From JSON to jackpot: Manual triage and a homegrown “vibe-coded” review tool helped surface the highest-value credentials — from GitHub PATs to AWS keys. Combined with filters, open-source analyzers, and a simple frontend, Sharon earned ~$25K in bug bounties and spotlighted .env files as the most dangerous offenders.
Almost nuking Istio: One discovered PAT had admin access to all of Istio’s repos — a project used by giants like Google and Red Hat. Sharon responsibly disclosed it, potentially preventing a devastating supply-chain attack. His takeaway: if it’s ever committed, even briefly, treat it as compromised.
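The pipeline described above reduces to one filter over GitHub’s public event stream: a force push that rewrites history shows up as a PushEvent carrying zero commits, and the old head SHA may still be fetchable. Here is a minimal sketch of that detection step, assuming simplified stand-in events; the field names mirror GitHub’s PushEvent payload, but the sample records are invented, not real GH Archive data:

```python
def is_zero_commit_push(event: dict) -> bool:
    """A PushEvent whose payload reports zero commits suggests a force push
    that discarded history; the 'before' SHA points at the dangling commit."""
    return (
        event.get("type") == "PushEvent"
        and event.get("payload", {}).get("size", 0) == 0
    )

# Simplified sample events (illustrative data, not real GH Archive records).
events = [
    {"type": "PushEvent", "repo": {"name": "acme/app"},
     "payload": {"size": 0, "before": "deadbeef"}},
    {"type": "PushEvent", "repo": {"name": "acme/lib"},
     "payload": {"size": 2, "before": "cafebabe"}},
    {"type": "WatchEvent", "repo": {"name": "acme/app"}, "payload": {}},
]

suspects = [e for e in events if is_zero_commit_push(e)]
for e in suspects:
    print(e["repo"]["name"], e["payload"]["before"])
```

In the real pipeline, each flagged `before` SHA would then be fetched via the GitHub API and fed to a secret scanner such as TruffleHog; this sketch stops at the detection step.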
“I’m Being Paid to Fix Issues Caused by AI” (🔗 Read More)
Product marketing manager Sarah Skidd is finding AI’s rise in content creation to be a source of extra income rather than a threat. After being asked to urgently rewrite bland AI-generated copy for a hospitality client, she spent 20 hours reworking it entirely, charging a premium rate for her expertise. Skidd believes skilled writers won’t be replaced but instead will increasingly fix AI’s missteps—a trend echoed by others in the industry.
Plus, digital marketing agency co-owner Sophie Warner warns that AI-driven shortcuts often backfire, causing website errors and costly downtime. Experts emphasize that while AI offers promise, human oversight remains essential to avoid poor quality and reputational damage.
Key Points
AI creates more work for writers: Skidd and many other copywriters now spend a significant part of their time fixing generic or flawed AI-generated content, turning what was supposed to be a cost-saving measure into a lucrative opportunity.
AI-generated mistakes cost businesses: Warner highlights how clients relying on AI for coding or content often face crashes, security vulnerabilities, and expensive downtime—sometimes needing professional help to fix problems that could have been avoided.
Human expertise remains crucial: Experts like Prof. Feng Li stress that AI tools have limitations and can hallucinate, so human skill and context are still needed to ensure quality and brand alignment and to avoid costly errors, despite AI’s growing popularity.
Writing Code Was Never the Bottleneck (🔗 Read More)
For years, software engineering bottlenecks have rarely been about writing code itself but rather the human processes around it—code reviews, mentoring, testing, and communication. Now, large language models (LLMs) can generate code faster than ever, sparking a new belief that coding speed has been the main problem.
However, this overlooks the greater challenge: understanding, testing, and trusting the generated code. LLMs shift the workload rather than eliminate it, often increasing pressure on teams to verify and integrate more code at scale. Ultimately, the human factors of collaboration and shared understanding remain the true constraints in software development.
Key Points
LLMs shift the workload, not remove it: AI tools speed up code generation, but teams still face challenges verifying unfamiliar patterns and edge cases, which can slow down integration and maintenance.
Understanding code remains the hardest part: While LLMs reduce coding time, reasoning about behavior, debugging subtle issues, and ensuring long-term maintainability still require substantial effort.
Collaboration and trust are still essential: Faster code production risks bypassing review and discussion, putting strain on mentors and reviewers and underscoring that clear thinking and shared context remain fundamental.
Exploiting the Ikko Activebuds “AI Powered” Earbuds, Running Doom, Stealing Their OpenAI API Key and Customer Data (🔗 Read More)
The IKKO Activebuds “AI powered” earbuds have drawn attention recently, especially after a deep security dive revealed serious vulnerabilities. These earbuds run Android with ADB enabled by default, making them an easy target for exploration.
The device connects directly to OpenAI’s API, exposing the embedded ChatGPT key and multiple unsecured endpoints. A companion app stores chat histories with minimal authentication, allowing potential access to sensitive user data.
Despite some fixes after responsible disclosure, several risks remain, raising questions about privacy and security in AI-integrated consumer devices.
Key Points
ADB enabled and direct OpenAI connection: The earbuds come with ADB enabled, allowing easy access to the device internals and direct communication with OpenAI’s API via an embedded ChatGPT key.
Insecure API endpoints and data exposure: The companion app’s APIs lack proper authentication, exposing user chat histories and personal information through predictable device IDs and QR code bindings.
Partial fixes but lingering vulnerabilities: After disclosure, IKKO implemented measures like API key rotation and added request signatures, yet some endpoints still allow unauthorized QR code generation and message injection.
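To see why “predictable device IDs with no authentication” is so damaging, consider a toy model of such an endpoint. Everything here (the store, the ID scheme, the function name) is invented for illustration and is not IKKO’s actual API:

```python
# Toy backend store: device ID -> chat history, with no auth layer at all.
chat_db = {
    "DEV-0001": ["user: remind me to buy milk"],
    "DEV-0002": ["user: draft an email to my doctor"],
}

def get_chat_history(device_id: str):
    """Stand-in for an endpoint that returns user data without checking
    any credential; knowing the ID alone is enough to read the data."""
    return chat_db.get(device_id)

# An attacker who guesses the sequential ID scheme can walk the keyspace.
leaked = {}
for i in range(1, 10):
    dev_id = f"DEV-{i:04d}"
    history = get_chat_history(dev_id)
    if history is not None:
        leaked[dev_id] = history

print(f"recovered {len(leaked)} users' chats without any credentials")
```

Because nothing ties a request to a credential, enumerating IDs recovers every user’s data; per-device tokens or request signatures (the kind of measure IKKO added after disclosure) are what break this enumeration.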
🎬 And that's a wrap! Stay tuned to be the first to learn about happenings in tech.