It’s Not About Replacing Hackers. It’s About Making Us Better.
Let’s be honest: pentesting can be a slog. You spend most of your time buried in scanner results, chasing down false positives, and trying to find that one critical vulnerability in a mountain of digital noise. It feels less like elite hacking and more like being an overworked data analyst. For a long time, it’s felt like the attackers have the advantage, and we’re just trying to keep up.
The automated tools we’ve relied on for years are great for finding the easy stuff — the forgotten default password or the server that missed a patch. But they’re dumb. They don’t get the big picture, they can’t think outside the box, and they definitely can’t spot a clever attack path that weaves together a few minor issues into a major breach. That’s always been our job, the human part of the equation. But there are only so many hours in the day, and the digital world we’re supposed to protect just keeps growing.
I got tired of feeling like I was always a step behind, which is why I started Project Cortex. I wasn’t trying to build a robot to take my job. I wanted a partner, an AI that could handle the grunt work so I could focus on what really matters: thinking like a creative attacker.
Cortex isn’t your typical AI. It’s a mix of a few different technologies. At its core is a Large Language Model (LLM) that I fed a steady diet of cybersecurity knowledge — everything from technical manuals and the MITRE ATT&CK framework to my own notes from years of pentesting. The goal was to teach it not just how to attack, but how to understand the why behind a system’s design.
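I’m not going to dump Cortex’s actual training pipeline here, but the underlying pattern is simple enough to sketch. Whether you fine-tune or retrieve knowledge at query time, the goal is the same: ground the model in security-specific context before it answers. Here’s a toy retrieval-style version in Python; the corpus entries, function names, and scoring are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeDoc:
    source: str  # where the snippet came from, e.g. "MITRE ATT&CK"
    text: str    # the snippet itself

# Tiny stand-in corpus; the real diet was manuals, ATT&CK entries, and my notes.
CORPUS = [
    KnowledgeDoc("MITRE ATT&CK", "T1190 Exploit Public-Facing Application: "
                 "adversaries exploit internet-facing software for initial access."),
    KnowledgeDoc("pentest notes", "Verbose error messages that echo file paths "
                 "often reveal internal hostnames and naming conventions."),
]

def retrieve(query: str, k: int = 2) -> list[KnowledgeDoc]:
    """Rank docs by naive keyword overlap (a stand-in for real embedding search)."""
    q = set(query.lower().split())
    return sorted(CORPUS, key=lambda d: -len(q & set(d.text.lower().split())))[:k]

def build_prompt(question: str) -> str:
    """Stitch the retrieved context onto the question before it goes to the model."""
    context = "\n".join(f"[{d.source}] {d.text}" for d in retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("why does a leaked file path matter during recon?"))
```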
I then connected that “brain” to a Graph Neural Network (GNN). You can picture the GNN as a massive, interactive map of the target’s entire network. Every server, every employee account, every piece of data is a dot on this map. The LLM’s job is to read all the public information it can find — think company blog posts, developer notes on GitHub, you name it — and then draw lines between the dots, showing me the attack paths I might have missed.
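To make that concrete, here’s a toy version of the asset graph in Python using networkx. Every node name and relationship below is made up; the real map is built automatically and is far larger, but the core idea is the same: assets become nodes, inferred relationships become edges, and candidate attack paths are just paths through the graph.

```python
import networkx as nx

# Toy asset graph: nodes are assets, edges are relationships the "brain"
# infers from public data. Every name here is invented for illustration.
G = nx.DiGraph()
G.add_edge("upload API", "image library v1.2", relation="hands user input to")
G.add_edge("image library v1.2", "verbose error", relation="leaks on crafted input")
G.add_edge("verbose error", "internal file path", relation="contains")
G.add_edge("internal file path", "internal server names", relation="hints at")
G.add_edge("public S3 bucket", "internal server names",
           relation="shares naming scheme with")

# Candidate attack paths are just paths through the graph.
for path in nx.all_simple_paths(G, "upload API", "internal server names"):
    print(" -> ".join(path))
```

Running it prints the one chain from the upload API to the internal server names, which is exactly the kind of path I want surfaced for me instead of digging it out by hand.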
The moment I knew I was onto something wasn’t some dramatic, “I’m in” movie scene. It was much quieter. During a test against a pretty secure cloud setup, Cortex flagged a series of seemingly unrelated, low-level issues:
- First, it pointed out a public S3 bucket with some old marketing files. Nothing to worry about, right?
- Next, it noted an API endpoint that let users upload files, though the endpoint itself had solid security checks in place.
- Then came the interesting part. It had scanned a lead developer’s public code and found that he habitually pinned a specific, outdated version of a Java image-processing library.
This is where it got brilliant. Cortex connected the dots. It figured out that a specially crafted image file, uploaded through that secure API, could trigger a tiny bug in that outdated Java library. The bug wouldn’t crash anything, but it would spit back an error message containing an internal file path. That leaked path was the key: it exposed the internal naming scheme, which could be used to guess the names of other servers, turning that “harmless” marketing bucket into the perfect launchpad for a much more serious attack.
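If you’re wondering what “spit back an internal file path” actually buys an attacker, here’s a stripped-down sketch. The service, error text, and every name in it are invented; the point is just that an absolute path leaked in an error message carries naming hints you can pivot on.

```python
import re

# Hypothetical error body from the upload API after the crafted image trips
# the library bug. Everything here is invented for illustration.
error_response = (
    "500 Internal Server Error: ImageCodec failed while reading "
    "/srv/acme-prod-east/uploads/tmp/img_8841.jpg"
)

# Unix-style absolute paths embedded in error text are a classic infoleak.
match = re.search(r"(?:/[\w.\-]+)+", error_response)
if match:
    leaked_path = match.group(0)
    # Hyphenated segments often mirror internal host or cluster names.
    hints = [seg for seg in leaked_path.split("/") if "-" in seg]
    print(f"Leaked path:  {leaked_path}")
    print(f"Naming hints: {hints}")  # e.g. ['acme-prod-east']
```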
Sure, I could have found that. Eventually. After days of caffeine-fueled digging. Cortex did it in minutes. It didn’t launch the attack; it just handed me the strategy on a silver platter.
Building Cortex taught me that AI isn’t here to replace us. It’s here to make us better. It’s the ultimate assistant, cutting through the noise and showing us where to focus our skills. My job is no longer about digging for clues; it’s about taking the intelligence the AI gives me and using my human creativity to land the final blow.
This isn’t sci-fi anymore. You can bet the bad guys are building tools just like this. The only way to stay ahead is to understand what’s coming, and that means building the future of hacking ourselves.