KSG Executive Brief: Uncert(AI)nty Reigns
Nobody knows how to effectively integrate AI and manage risk
Uncert(AI)nty Reigns in Enterprise AI Adoption and Risk Management (you see what we did there...?)
Business columns, monthly publications, and your Twitter feed (Threads, anyone?) are full of Generative AI How-Tos and Risk Frameworks du jour. But who’s actually doing what, and why? (TL;DR: Everybody is winging it and nobody has it figured out).
From Hype to How…
At the moment, there is a lot of peering over the proverbial cubicle: corporate leadership and boards are anxiously asking their teams “How do we catch the AI wave?” in hopes of impressing investors on the next quarterly earnings call. A piece in The Economist last week noted that companies referencing AI have outperformed the S&P 500 this year. The pressure is on, but as our grade-school teachers taught us: “haste makes waste”…
The Generative AI Value Chain is complex and rapidly evolving: from Hardware Manufacturers (NVIDIA, AMD, TSMC, Intel, but mostly NVIDIA…), Infrastructure & Platforms (Microsoft, Google, AWS, and IBM Cloud are jockeying for pole position), and Foundation Model Developers (OpenAI, Anthropic, DeepMind, Meta, and the open-source hordes at Hugging Face) to Application Developers (SaaS incumbents, industry innovators, and startups sprouting like weeds) and End-Users (soon, everyone).
All this is occurring while development and security tools, governance, and risk frameworks are a work-in-progress and as the regulatory Leviathans (in DC, Brussels, and Beijing) wake up to the economic, social, political, and geopolitical implications. Frankly, it’s a confusing mess, and will remain so for the foreseeable future.
Amidst this swirl of hype and hope, our clients have adopted one of three stances: Don’t Go, Go Slow, and YOLO.
Don’t Go: These folks are prohibiting personnel by policy (backed by limited technical visibility and patchwork security controls) from using third-party AI tools like ChatGPT and Bard. Many are curious about (and even eager to develop or deploy) these capabilities, but are waiting until a proven, fully vetted, and compliant solution comes to market that they can run either on-prem or in a trusted cloud instance, with strong data/access controls and application support. Others are abstaining completely, for now, given their role in critical infrastructure, operational environments, and other regulated arenas where AI product risks impinge on life safety and homeland security.
Go Slow: The responsible innovators of the bunch—far from Schumpeterian “disrupters”—are dipping their toes in the water, standing up AI centers of excellence, convening tiger teams, and spinning up pilot projects to explore model and application development (in a controlled and limited manner). Here, corporate leadership has put up hard-to-enforce but easy-to-reference policy and governance guardrails on external tools (you did read that letter from the CEO last week, right Dennis?). Eventually a few budding engineers and ambitious managers will figure out an application that saves admin costs or launches new services with some clever custom-tuned kit (entirely home-brewed, externally sourced, or a re-mixed concoction). In-house development teams will try to strap together a competitive offering via open-source Hugging Face projects, but progress may be stunted as the suits in the C-suite don’t trust it enough to scale.
YOLO: Bitten by the entrepreneurial spirit, some adventurous firms are going all-in, trying to beat their competitors, impress investors, and outmaneuver regulators. Moving fast and breaking things is the mantra of the silicon society that birthed these tools, and many from those cloistered hills are smitten by utopian dreams (or nightmares, depending on who you ask) of the exponential potential that may shortly unfurl. Those in the broader policy space are not yet sure whether to snicker, sneer, or shudder at these “takeover” scenarios, but we know they are being taken increasingly seriously in major capitals…
AI apocalypse aside, for most of our clients and friends, the risks from AI integration in the near-future business environment will arise when incentives to be responsible break down—when the Don’t Gos or Go Slows suddenly lurch to rapidly deploy a pilot project that looks like it will save them oodles on overhead, or when the YOLOers’ stock price is mooning and anxious investors start repeating their questions on shareholder calls.
For it’s not speed itself that will wreck you… it’s the tight corner you didn’t see coming, and the under/over steer that sends you into the tires…
For more information or assistance on these issues, please reach out to firstname.lastname@example.org.
China Introduces Export Curbs on Key Minerals: Beijing imposed tight controls on the supply of germanium and gallium, critical elements for fiber optics and a range of other industrial uses, for which China is the dominant producer. The move comes immediately after Dutch restrictions on chipmaking equipment exports to China, and threatens to spark a trade row with the broader EU.
How the Ukraine War Changed Business Risk Calculus: For many multinationals, political risk has overtaken efficiency as the starting point when considering whether to continue or expand operations abroad.
China Introduces its First Indigenous Computer Operating System: Beijing takes a major step toward cutting reliance on Western tech with OpenKylin—a Linux-based system already used in its aerospace, finance, and energy sectors.
Companies have tried for more than a decade to create an “independently made and controllable” operating system for the PRC market. OpenKylin has some government supporters, but is far from assured of becoming China’s standard OS.
Ransomware Gang Using ‘Malvertising’ to Infect Corporate Networks: BlackCat seeded Google and Bing search results for a popular Windows tool with malicious ads in order to drop payloads and gain administrator-level access.
FBI Digital Sting Shows Promise and Limits of Hacking Hackers: The takedown of the Hive cybercrime ring earlier this year marks a shift in US strategy—from pursuit of prosecution to prioritizing disruption.
Strategic and Emerging Technology
Deep-Sea Mining Eyed for Battery Metals: The International Seabed Authority is planning to resume accepting mining permit applications, as demand for renewables shifts focus to seabed deposits of copper, nickel, aluminum, manganese, zinc, lithium, and cobalt.
Meeting the Demand for Cement by Making it More Efficient: Boston startup Sublime Systems uses electro-chemical processing to transform minerals into cement at a fraction of the energy costs.
BP Backs California-Based Biofuel Company: The oil giant invested $10 million in WasteFuel, which converts municipal and agricultural waste into bio-methanol. Many global shippers are already converting to such lower-carbon fuels.
Hong Kong and Beijing Settle Data-Transfer Dispute: The breakthrough deal comes two years after Beijing tightened controls on the collection, processing, storage, and use of data generated on the mainland. New draft rules will allow for the full flow of data within the Greater Bay Area.
Google Says Everything Online is Fair Game for AI Training: The company’s updated policy poses complex legal and ethical questions about copyright and privacy enforcement. “It’s no longer a question of who can see the information [online], but how it could be used” in later applications like Bard and ChatGPT.
Center for European Policy Analysis: Losing the Transatlantic Battle for Critical Minerals
Sandra Joyce, VP of Mandiant Intelligence: Google Cloud CISO Perspectives, Late June 2023
The Economist, Special Report: Battlefield Lessons from the War in Ukraine