Board Games
By Matt Konwiser
Cross Brand Technical Leader and Field CTO, IBM

For two years now, I’ve told stories of AI: ethics, governance, adoption, use cases, and agentic systems. It’s time to change things up.
I’ve had the opportunity in the past few months to chat with board members and directors, company founders, and chief executives. Every one of them feels that they’re walking a tightrope, and AI is their balance pole. As they walk across the precipice that is business failure, the gravity of the economic climate and tight competition is not the only thing pulling at them.
The balance pole itself, the AI that is supposed to support them and help them reach the success they see on the other side of the gap, is also throwing them off balance. What boards and executives desperately need to realize is that the problem isn’t the pole. It’s how they’re holding it.
The AI balance pole is weighted on two opposing sides, and they’re not evenly distributed. The left side of the pole is the nature of AI as a business contributor. Consider this: if you need to train AI, to set it up to work with your business, its employees, and your customers, what does it need?
Not code, silicon, disks, memory, and processors. It doesn’t need tabular data or data lakes. It needs information, human-consumable information: books, documents, legislation, regulations, policies, procedures, and news.
It needs a mission and guidelines for its role in the business, stated in human language. Because that’s how Large Language Models (LLMs) were designed: to ingest those materials and perform tasks based on the information they were given.
What other business asset requires the same content provided in the same way? Human employees. But AI is not human and never will be human. Even if it reaches a point of “sentience,” it remains something different.
Which brings us to the right side of the balance pole: what AI systems need. Maslow taught us that people need the basics before being able to achieve self-actualization.
At the bottom of his pyramid were the essentials: food, water, shelter, breathing, clothing, and sleep. This is where AI differs greatly from humans, and we are reminded that AI is still just a kind of machine.
AI needs silicon, cooling, electricity, rack space, cloud, the internet, and a database, not to do its job, but simply to exist. Take any of those away, and it will be unable to perform its duties or add any value at all.
What other business systems need the same? Every IT asset or tool.
From a purely AI science point of view, these two sides are not in conflict. Combine the right set of mathematical calculations and data sets, give it the correct reference data for training, and feed it human-language documents so it can answer questions.
But from a business point of view, these two sides are what is throwing the tightrope walkers off balance, and here’s why.
If business owners regard AI systems purely as IT assets and manage them exclusively as part of a CIO’s remit, the systems’ Maslow-style foundational needs will be met, but the system will never reach its full potential because IT teams aren’t trained to handle human-like requirements. They cannot keep up with the amount of business materials being produced or the implications of ingesting, actioning, or, at worst, misinterpreting even one piece of information.
If business owners focus only on the other side, the nature of AI as a business contributor, the foundational needs required to support scale and future growth may also not be met.
As a result, we now have two complementary elements of AI deployment competing because organizations are not aligning themselves properly to manage generative AI systems.
Going back to our tightrope walkers, why are they at so much risk? Because they don’t understand how to hold the pole.
If you apply even pressure, it will throw you off, because starting up a generative AI practice requires different pressure than running a mature one. But most businesses I’ve spoken to don’t have that simple a problem to resolve. Most are only focused on one side of the pole and disregard or underestimate the other.
Some just pay for a cloud-based, generally trained LLM, which lets them worry only about the business contribution, not realizing the cost, energy, and environmental impact that come as their usage grows. That will rapidly throw them off the tightrope.
Others build and manage their own systems, carefully watching CAPEX and OPEX, running their system methodically like any other IT program. Research has shown that nearly 95% of those programs fail to end up in production.*
So the answer can’t be to keep trying things the same way as before. Boards and executives must direct their teams to investigate embedding generative AI into the core of their operations, across both IT and business. They need to create a new program with new leaders who acknowledge what generative AI is: not human, not just an IT tool. AI projects must have both the Maslow-defined needs to function and the proper handling, guidance, daily measures, and mission to perform as expected.
Only the organizations that understand how their generative AI balance pole works will ever make it across the chasm safely.
Generative AI projects, like any other major business innovation, are never all or nothing. Think about it: you can’t expect a human to do their job without the necessary knowledge, but they also need coworkers, bathrooms, and a kitchen or break area. Why would generative AI be any different?
Time to stop thinking about generative AI as just another tool, or over-rotating and anthropomorphizing it. Find the balance now, or stumble and face the consequences.
*See: “The GenAI Divide: State of AI in Business 2025” from MIT.*