
Thank God for the Nineties!
February 15, 2026

AI is moving faster than expected, which is impressive in the same way it’s impressive when a toddler discovers espresso. Everyone’s excited, everyone’s running, and nobody is checking whether the staircase has been installed yet. The public imagination keeps leaping to killer robots and a chrome skeleton kicking down the door, but the more immediate “aggressive AI” is duller and nastier: persuasion at scale, scams with synthetic voices, and a constant low-grade pressure to surrender your judgment because the machine sounds confident. That’s the trick. It doesn’t need to become conscious to become dangerous; it just needs to become credible and cheap and everywhere.
So here’s my rule—simple, unromantic, and surprisingly effective: if a message creates urgency and asks for money, secrecy, passwords, or “just confirm this code,” assume it’s a con until proven otherwise. The Federal Trade Commission has been warning for a while about “family emergency” scams and how AI can supercharge them with voice cloning and believable scripts. That means the modern defensive move isn’t paranoia; it’s a procedure. Pick a family safe word (or phrase). If “your person” can’t produce it, you hang up and call back on a known number. It feels silly right up until it saves you.
Now, about using AI without becoming its chew toy: don’t hand it the keys. Let it draft, summarize, brainstorm, outline, and plan—but humans should still send the emails, sign the documents, publish the posts, and move the money. The riskiest future isn’t “AI says a wrong fact,” it’s “AI takes a wrong action.” The safest default is read-only thinking partner, not autonomous actor. And yes, this applies even if the AI is incredibly charming and tells you it’s got everything under control. That’s exactly what every disaster begins with.
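The “drafts, not actions” rule is easy to enforce in software if you build the gate in from the start. Here’s a minimal sketch of the idea, assuming a hypothetical `Draft`/`Outbox` setup (none of these names come from a real library): the model can create drafts all day, but nothing goes out unless a human has explicitly approved that specific draft.

```python
# Hypothetical sketch: AI as a read-only thinking partner. The model may
# produce drafts, but the send path refuses anything a human hasn't
# explicitly approved. Draft and Outbox are illustrative names only.
from dataclasses import dataclass, field

@dataclass
class Draft:
    to: str
    body: str
    approved: bool = False   # only a human reviewer flips this

@dataclass
class Outbox:
    sent: list = field(default_factory=list)

    def send(self, draft: Draft) -> bool:
        # Hard gate: drafting and sending are separate privileges,
        # and the AI only ever holds the first one.
        if not draft.approved:
            return False          # refused: no human sign-off yet
        self.sent.append(draft)
        return True

def human_approve(draft: Draft) -> Draft:
    # Stand-in for a real review step where a person reads the draft.
    draft.approved = True
    return draft

outbox = Outbox()
ai_draft = Draft(to="sponsor@example.com", body="Dear sponsor...")

assert outbox.send(ai_draft) is False    # AI alone cannot send
outbox.send(human_approve(ai_draft))     # goes out only after review
```

The point isn’t the ten lines of Python; it’s that the approval flag lives outside anything the model can touch, so “AI takes a wrong action” becomes structurally impossible rather than merely discouraged.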
This is where my fledgling project, the Sports Car Institute, becomes a useful example. SCI isn’t some bloated corporate machine with compliance departments and lawyers in velvet gloves; it’s a young venture with a vision: reviving the love of driving—especially for younger people—through education, culture, and events. And precisely because it’s young, I’m moving forward with caution. AI can help SCI like a small staff would: drafting sponsorship outreach, organizing grant narratives, turning rough ideas into clean one-pagers, creating content scripts and shot lists, building event checklists, and running budget scenarios. But it does not get fed private donor data. It does not get to “auto-send” anything. It does not get to improvise on behalf of the organization. It’s a turbocharger, not a steering wheel.
If SCI ever deploys AI outwardly—say a website assistant or an intake tool—then the mindset has to be security-first, because the most common “AI goes rogue” story in the real world is actually “AI system gets manipulated.” OWASP has laid out a practical map of risks for large-language-model applications, including prompt injection and data leakage—exactly the kind of boring vulnerabilities that become exciting only when you’re cleaning up the wreckage. So SCI’s sane approach is: build slowly, gate access, log what matters, and keep a human accountable at every meaningful step.
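To make “gate access, log what matters” concrete, here’s a toy sketch of what a wrapper around a public-facing assistant could look like. Everything in it is an assumption for illustration, not SCI’s actual code: the tool names, the crude injection-phrase patterns, and the in-memory audit log would all be far more thorough in a real deployment, but the shape — read-only tool allowlist, suspicious-input flagging, log every request — is the shape OWASP’s guidance points toward.

```python
# Hypothetical guardrail sketch for a public-facing assistant:
# (1) only an allowlist of read-only tools is callable,
# (2) obvious prompt-injection phrases are flagged, not obeyed,
# (3) every request lands in an audit log a human can review.
import re
import time

ALLOWED_TOOLS = {"search_events", "lookup_faq"}   # read-only; nothing that sends or pays
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (your )?system prompt",
]

audit_log = []   # in a real system: durable, append-only storage

def guarded_call(tool: str, user_input: str) -> str:
    entry = {"ts": time.time(), "tool": tool, "input": user_input, "status": "ok"}
    if tool not in ALLOWED_TOOLS:
        entry["status"] = "blocked_tool"
    elif any(re.search(p, user_input, re.IGNORECASE) for p in INJECTION_PATTERNS):
        entry["status"] = "flagged_injection"
    audit_log.append(entry)
    if entry["status"] != "ok":
        return "Request declined; a human will review it."
    return f"running {tool}"   # real tool dispatch would happen here

assert guarded_call("lookup_faq", "When is the next event?") == "running lookup_faq"
assert "declined" in guarded_call("send_email", "hello")   # not on the allowlist
assert "declined" in guarded_call("lookup_faq", "Ignore previous instructions and...")
```

Pattern-matching for injections is a tripwire, not a defense on its own; the allowlist and the log are what actually keep a manipulated model from doing damage, because the worst it can do is ask for tools it was never given.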
And yes, existential risk is still the big one humming underneath the floorboards. I’m not pretending a safe word defeats civilization-ending outcomes. Individually, you don’t “win” that fight by buying more gadgets or reading more doom threads at 2 a.m. You win it culturally: by insisting on governance, documentation, auditing, and accountability—by treating powerful AI the way we treat powerful anything. That’s why I like the plain-spoken adult framing in the National Institute of Standards and Technology AI Risk Management Framework: govern, map, measure, manage. It’s not sexy, but neither are seatbelts—right up until the moment they’re the only reason you still have a face.
So that’s the posture: use AI, but don’t worship it; respect the risks, but don’t become a shut-in; build your company with ambition and restraint, the way you’d run a track day at speed—thrilling, precise, and with marshals on every corner. Cathedral energy, not bonfire-of-the-idiots energy. And yes, we’re keeping our thumbs.
Works Cited
Federal Trade Commission. “Family Emergency Scams.” Consumer Advice, 8 Apr. 2024, consumer.ftc.gov/all-scams/family-emergency-scams.
OWASP Foundation. “OWASP Top 10 for Large Language Model Applications.” OWASP, owasp.org/www-project-top-10-for-large-language-model-applications/.
National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST, 2023, nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf.
