[Image: a rice cooker with networking cables, lights and microchips in a muted golden colour on black; white 'Seeline Cyber' logo centred at the base.]

AI Is Not A Rice Cooker: Kona Grit And The Best Security Language

Earlier this week, a friend told me about an opportunity he may have in the next two years to compete at Kona. He is a beast, a world champion in multiple disciplines, but not in endurance sports, and as an adaptive athlete this would be an entirely new challenge on so many levels. There are technical considerations, like riding a bike for the first time in decades, finding the correct prosthetics, and locking in the mindset required of an endurance athlete, but I could see the fire in his eyes and have no doubt he'll take the opportunity when it's presented.

Later that evening, I was up at zero dark thirty as usual, skimming through the NIST Cybersecurity AI Profile: IR 8596 Initial Preliminary Draft – a playbook with contributions from government, industry and academic experts. It's the first that leans heavily into the AI–cyber Venn diagram and addresses AI-specific cyber risks. It was shared in one of the GRC communities I'm part of and generated a bit of discussion. It struck me immediately as interesting, and in working through its various levels, I couldn't help but think back to my earlier conversation.

The Triple Threat

Kona, for those unfamiliar, is widely considered the ultimate Ironman event. Held annually in Hawaii, it requires athletes to swim 3.8 km (2.4 miles), bike 180 km (112 miles) and run 42.2 km (26.2 miles), for a total distance of 226 km (140.6 miles) – no stops, no breaks. Kona is the spiritual home of the sport, and the event is iconic for its 'brutal but beautiful' reputation: think fierce winds, banned wetsuits (no buoyancy benefit), lava fields and a segment known as a 'heat bowl' in the marathon leg – this event does not miss.

Levels, Objectives and Environments

As in the Kona ecosystem, the AI ecosystem presented by NIST in the Cyber AI Profile playbook has key components:

Three interconnected focus areas:

  • Securing AI System Components (Secure)
  • Conducting AI-Enabled Cyber Defense (Defense)
  • Thwarting AI-Enabled Cyber Attacks (Thwart)

Before we continue, let us pause for a moment to admire the magic of the word 'thwart'.

I digress. The focus areas are intended to do the most integral work of any framework – improve an organisation's posture. They range from considering the attack surface expanded by integrating AI and working to protect models from adversarial inputs, to identifying ways to leverage AI in defence capabilities (which could include predictive analysis or streamlining compliance), and finally to thwarting AI-enabled cyber attacks, building resilience against adversaries using AI to conduct faster, larger, more sophisticated attacks.

Unpredictability, Integrity and Scale

As with the trade winds and cross-winds that make Kona's bike leg so difficult, AI offers up its own set of unique challenges, which are detailed in the Cyber AI Profile:

Unpredictability: One of the bread-and-butter terms in AI safety is 'human-in-the-loop' (HITL). HITL is oversight to manage hallucinations or input errors. Given that AI systems are typically opaque and much harder to predict than traditional software, HITL is the obvious counterbalance.
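To make HITL concrete, here is a minimal illustrative sketch (my own assumptions, not from the NIST draft): a gate that releases a model's output only when its confidence clears a threshold, and otherwise holds it in a queue for a human to review. The threshold value and queue mechanism are hypothetical.

```python
# Hypothetical HITL gate: auto-release confident outputs, queue the rest
# for human review. Threshold is an assumed value, not a standard.
REVIEW_THRESHOLD = 0.85

def route_output(prediction: str, confidence: float, review_queue: list):
    """Return the prediction if confident enough; otherwise queue it and return None."""
    if confidence >= REVIEW_THRESHOLD:
        return prediction  # released without human intervention
    review_queue.append((prediction, confidence))  # held for a human decision
    return None

queue = []
route_output("benign traffic", 0.97, queue)   # released automatically
route_output("possible exfil?", 0.41, queue)  # lands in the review queue
```

The design choice here is the essence of HITL: the human is not reviewing everything (which doesn't scale), only the cases the system itself is least sure about.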

Data Integrity: In sports, the output is often only as good as the input (there are of course genetic outliers, but typically the quality of the training will equate to the quality of the performance), and this is increasingly important in the training data used for AI, where the risk of data poisoning presents a significant supply chain risk.

Speed and Scale: Human-speed countermeasures are relatively ineffective at containing AI-enabled attacks due to the velocity at which they move. If we put this through the Kona lens, it’s like giving one athlete a motorbike when every other athlete is stuck with a pedal bike.

Terminology of the Future

NIST has opened up this draft for comment (until the 30th of January 2026 if you’re keen to play along at home), and is also developing Control Overlays for Securing AI Systems to provide implementation-focused guidelines.

I found it helpful to consider this new NIST playbook alongside the ISO/IEC 42001 standard I've been studying recently, particularly the playbook's attempt to unify terminology so that AI-cyber stakeholders – from engineers to leadership – can begin to speak the same language.

Key takeaways: both have risk-centric DNA – there is no one-size-fits-all checklist, and each requires context and controls proportional to the risk – and both are governance-first. Another common theme is attention to lifecycle: AI should be managed from data acquisition and model design right through to deployment, monitoring and eventual decommissioning. This cycle is often referred to as 'cradle to grave', which is a little anthropomorphic in the context of AI, so expect a future blog post on this.

AI is not a rice cooker

Here’s the thing: ongoing monitoring is not just a ‘nice-to-have’, it is essential. AI should not be viewed as a simple ‘set and forget’ tool; it is not a rice cooker. It has the capacity to drift and degrade over time, which means that all stakeholders should work towards implementing a strategy for the effective management of AI tools. It won’t be easy; it will demand new skills, new budgets and, above all, a renewed security commitment.
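What might 'watching for drift' actually look like? One common, simple approach (an illustrative sketch, not something prescribed by the NIST draft) is to compare the distribution of a model's recent scores against a baseline using the Population Stability Index; the bin count and alert thresholds below are conventional rules of thumb, not standards.

```python
import math

def psi(baseline: list, live: list, bins: int = 10) -> float:
    """Population Stability Index between two score samples.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate."""
    lo, hi = min(baseline + live), max(baseline + live)
    width = (hi - lo) / bins or 1.0  # avoid zero width if all scores are equal

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # small floor keeps log() defined when a bin is empty
        return [max(c / len(xs), 1e-6) for c in counts]

    b, l = hist(baseline), hist(live)
    return sum((li - bi) * math.log(li / bi) for bi, li in zip(b, l))
```

An unchanged model scores near zero; a model whose outputs have shifted produces a large PSI, which is the signal to pull a human back into the loop. The rice cooker, notably, needs no such dashboard.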

