Over the past four weeks, I have personally observed compelling evidence that "AI" will most likely cause massive shocks to employment patterns in IT, software development, systems administration, and cybersecurity. I know some of you have already experienced minor shocks. They are nothing compared to what most likely awaits us.
Nobody wants to hear this, but you absolutely need to take time this year to identify what you can do that AI can't, and to build some of those capabilities if your list is short or empty.
Weavers in the 1800s resorted to violence to win a pseudo-20-year reprieve from obsolescence. We have about 18 months left. I'm as hesitant about this "AI" as it makes sense to be. I wish the bubble would burst. Even if it did, our clicktatorship leaders would only fuel a rapid rebuild.
Four exclusively human security capabilities
In my (broad) field, I think there are some things that make humans 110% necessary. Here's my list, and it would be great if folks working in very specific sub-domains of cyber shared similar ones. I try to stay in my lane.
1. Judgment under uncertainty with real consequences
These new "AI" systems can use tools to analyze billions of sessions and cluster payloads, but they don't (or definitely shouldn't) take responsibility for the "we're taking production down" decision at 3 a.m. That weight of consequences shapes human expertise in ways that inform intuition, risk tolerance, and the ability to act decisively with incomplete information.
Organizations will continue to need people who can take ownership of results, not just produce analysis.
2. Adversarial creativity and novel problem formulation
Newer "AI" systems are actually very good at matching inputs against known patterns and recombining existing approaches. They are absolutely rubbish at the truly novel: the attack vector no one has documented, the defensive technique that requires understanding how a specific organization actually works versus how it should work.
The best security practitioners think like attackers in a way that goes beyond “here are common TTPs.”
3. Institutional knowledge and relational capital
A big one.
Knowing that the finance team, especially Dave, always ignores security warnings when closing the quarter. That the legacy SCADA system cannot be patched because the vendor went bankrupt in 2019. That the CISO and the CTO have a long-standing disagreement over cloud migration.
This context determines which recommendations are actually actionable. Many technically correct analyses are organizationally useless.
4. The ability to establish and maintain trust
The biggest.
When a breach occurs, executives don’t want a report from an “AI.” They want someone who can look them in the eye, explain what happened, and take ownership of the path forward. The human element of security leadership is absolutely not going away.
How to develop these abilities
Go deep in areas that require your physical presence or legal accountability: disciplines such as incident response, compliance attestation, or security architecture for air-gapped or classified environments. These carry regulatory and practical barriers to full automation.
Develop expertise in bridging systems. Understanding how a given combination of legacy mainframes, cloud services, and OT environments actually interconnects requires the kind of institutional archeology (or the powers of a sexton) that doesn't exist in training data.
Get comfortable being the human in the loop. I know saying this will often get me muted or blocked, but you will need to be comfortable as the human in the loop of "AI"-augmented workflows. The analyst who can effectively direct the tools, validate the outputs (because these things will always make things up), and translate the results for different audiences is doing different work than before, but work that is still necessary. A rough sketch of what that loop can look like follows below.
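To make that concrete, here is a minimal sketch, in Python, of the shape such a loop can take: the "AI" proposes a verdict, a human validates it against the actual evidence, and nothing escalates without human sign-off. Every name here (the Triage fields, the analyst, the alert ID) is hypothetical; this illustrates the pattern, not anyone's production pipeline.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Triage:
    alert_id: str
    model_verdict: str       # what the "AI" claims: "benign" or "malicious"
    model_rationale: str     # its stated reasoning; treat it as a claim, not a fact
    human_verdict: Optional[str] = None  # unset until a person signs off

def validate(triage: Triage, analyst: str, verdict: str, notes: str) -> Triage:
    """The human gate: nothing moves forward on model output alone."""
    # Disagreement is signal, not noise; record it for tuning and for training juniors.
    if verdict != triage.model_verdict:
        print(f"[{analyst}] overrode the model on {triage.alert_id}: {notes}")
    triage.human_verdict = verdict
    return triage

def escalate(triage: Triage) -> None:
    # Fail closed: a missing human verdict is a bug, not a license to trust the model.
    if triage.human_verdict is None:
        raise RuntimeError(f"{triage.alert_id}: no human sign-off; refusing to escalate")
    if triage.human_verdict == "malicious":
        print(f"Escalating {triage.alert_id} to incident response")

# Hypothetical run: the model says "benign"; the analyst checks the actual evidence.
t = Triage("ALRT-0042", "benign", "matches known-good update traffic")
t = validate(t, "analyst_kim", "malicious", "payload beacons to an unregistered domain")
escalate(t)
```

The design choice that matters is failing closed: if the human step gets skipped, the workflow errors out rather than quietly deferring to the model.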
Learn to ask better questions. Bring your assumptions and your domain expertise, and know which threads are worth pulling. This editorial judgment about what matters is undervalued and will take a long time to percolate into "AI" systems.
We’re all John Henry now
A year ago, even with lingering Covid-related brain fog, I could out-"John Henry" every commercial AI model at programming, cybersecurity, and writing tasks, in both speed and quality.
Now that the fog has cleared, I'm probably about 3 months away from being slower than the "AI" at a meaningful number of basic tasks it absolutely can do. I've seen it. I've validated the outputs. It's real. This really, really sucks. And it's not because I'm weak or have some other undisclosed brain disease (unlike 47). These systems are designed to do exactly that: out-John Henry all of us.
The people who thrive will be those who can identify which "AI" capabilities are not pure garbage and pair them with uniquely human judgment, rather than competing on tasks where "AI" has obvious advantages.
The pipeline problem
The very uncomfortable truth: there will be fewer entry-level positions that consist primarily of reviewing and escalating alerts. That pipeline into the field is narrowing at a frightening rate.
What concerns me most is not the senior practitioners. We will adapt and probably become even more effective. It's the newcomers who won't get the years of exposure to the patterns that built our intuition in the first place.
This is a pipeline problem the industry has yet to seriously confront, and one that is unlikely to be resolved, given the hot, rarefied air in the offices and boardrooms of myopic and greedy senior executives.