80% of AI Projects Crash and Burn—Here’s How to Make Sure Yours Isn’t One of Them
Prevention is the best medicine.
There’s a reason project managers wince when they’re handed an AI project.
The think tank RAND reports that over 80% of AI projects fail, almost twice the failure rate of non-AI corporate IT projects. AI projects also cost more than traditional ones, so millions of dollars are going up in flames. AI is proving to be a challenge for even the best project managers.
Project managers need to be highly cautious when starting an AI project. Knowing the signs of trouble can help prevent your project from failing.
Let’s explore why some AI projects never make it to the finish line.
AI projects miss the big picture
As a project manager, you're always the middleman between upper management and your team.
A common problem in AI projects is poor communication from leadership. The team doesn't understand the true goals it needs to achieve, so the AI models solve the wrong problem and stakeholders deem the project unsuccessful.
Communication and clarity on the business goal are vital for an AI project to succeed. But companies want to rush ahead, and that haste leads to confusion and the eventual failure of the project. A business that nails down the project's basics first will get a far more targeted result.
That's why everyone needs to be on the same page to ensure the project succeeds.
Don’t fall for the hype-laden sales pitch
Salespeople promise you the world, and with AI they now have a new solution to sell you.
They rarely understand what the tech does, but their company insists this plug-and-play solution will make AI effortless. The problem is that no app will instantly package up AI and turn your business into a tech powerhouse. Yet even large Fortune 500 companies chase the shiny new thing, which leads to disappointment.
IT directors and project managers believe the hype. The media's spell is powerful enough to draw them in, and sales pitches pile on buzzwords like "machine learning," "neural networks," and "predictive analytics." Buyers have no idea whether the solutions will work for their business. After the sale, the staff struggles to implement the new awesomeness, and the project fails because it's a poor fit for the company.
These companies failed to question vendors or hire consultants to evaluate the software, so the AI project was canceled before implementation.
AI needs its own playbook, and it’s not Agile
A big reason for AI project failure is trying to shoehorn AI into Agile's toolbox.
If you try to manage this type of project like just another Agile development sprint, you're in for a world of hurt. AI is a data-driven project, so data-driven methodologies like CRISP-DM or CPMAI are a better fit.
Why isn't Agile a workable solution? The core of Agile is its ability to let teams iterate and adapt quickly. But we're not developing an app or an ordinary piece of software. AI needs more work up front because of its data demands: the underlying data is always changing, and the system keeps generating new data of its own.
This data requirement hampers Agile teams up front when they plan resources, costs, and timelines.
The biggest thing teams forget is the time each stage of an AI project demands. You need time to collect data, clean it, and organize it, then more time to train and test the models. A sprint goal on a normal cadence may not hold up when unforeseen issues appear, and if you rush, your team may take shortcuts that produce a model that's useless in the real world.
Agile delivers iterative value, and a data-driven format still allows improvement loops once the model is deployed. But an AI project needs a long-term vision, driven by the data and by the ROI it must deliver.
Don’t be confused between a PoC and a pilot
Repeat after me — a proof of concept (PoC) is much different from a pilot.
The AI projects that fail are the ones whose teams think the two are the same thing. There's a massive difference: it's the difference between a lab rat and one you see out in the wild rooting through trash cans.
A PoC is just that: a concept that should work in theory. PoCs are one-off tests in a tightly controlled environment, never meant for production. The simulation isn't running in the real world, and it will fail the moment it leaves its sterile environment. The numbers look great in a PowerPoint to management, but that's the only place they'll look good. An AI project that tries to launch a PoC as the product is headed for failure.
A pilot is where the rubber hits the road: running in the real world and getting feedback in a protected environment. Note the difference between the two environments: "controlled" is for the PoC, "protected" is for the pilot. The pilot usually runs in the actual business environment. The pilot is the prototype; the PoC is the rough sketch on the chalkboard.
AI projects are more successful when they skip the PoC phase and go straight to a pilot.
Your AI model will probably fail in the real world
We just discussed the higher success rate you may get with a pilot. The problem is that your pilot still runs in a safe, protected environment. Once you take the training wheels off, it's bound to fall over; the only question is how hard. AI projects fail when they can't recover from that transition.
The best example is self-driving cars. The AI performed well in protected environments, but what happened when it hit the real world? You'll find plenty of YouTube videos of the mistakes: encounters with jaywalkers, overturned semis, and construction sites. The AI isn't broken; the problem is that the real world doesn't match the training data.
This example didn't stop the quest for autonomous cars, but it set back the field significantly. Do you remember companies like Embark, Argo AI, Phantom Auto, and Ghost Autonomy? Probably not, since they shut down after their self-driving car projects failed.
Waymo and Tesla know there will be failures, so they prepare their AI to handle them. That's a big reason they're among the few still standing in the industry.
Data can kill your AI projects faster than RAID
AI is like a pig in slop when it comes to data - it loves to roll around in it and get down in the data mud.
But there's a tired old phrase: "Garbage in, garbage out." Your project will be in trouble if you aren't feeding your models enough high-quality data. As a project manager, you must work with your team to ensure the data is the right size, of high quality, and from reliable sources. If not, your AI project will start producing undesirable results.
We've discussed the four "V"s of data: volume, velocity, variety, and veracity. Of these, volume is the most important when evaluating your data sources. If your team thinks it already knows how much data it takes to train your models, it's probably underestimating. Another problem is that many workplace systems can't supply the data you need in a usable form.
You also need high-quality data for your models to make accurate predictions. Incomplete or unstructured data is hard to use and could kill your project before implementation, and inconsistent or irrelevant data can derail it just as quickly.
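To make those checks concrete, here is a minimal sketch of a pre-training data audit your team might run before committing a dataset to model training. The function name, thresholds, and checks are illustrative assumptions, not anything prescribed in this article; real audits would cover far more (schema, label balance, source lineage).

```python
# Hypothetical pre-training data audit. Names and thresholds are
# illustrative assumptions, not a standard.
import pandas as pd


def audit_training_data(df: pd.DataFrame, max_missing_pct: float = 5.0) -> list[str]:
    """Return a list of data-quality warnings found before model training."""
    warnings = []

    # Completeness: flag columns with too many missing values
    missing_pct = df.isna().mean() * 100
    for col, pct in missing_pct.items():
        if pct > max_missing_pct:
            warnings.append(f"{col}: {pct:.1f}% missing (limit {max_missing_pct}%)")

    # Consistency: flag exact duplicate rows
    dupes = int(df.duplicated().sum())
    if dupes:
        warnings.append(f"{dupes} duplicate rows")

    # Volume: flag datasets that are likely too small to train on
    if len(df) < 1000:
        warnings.append(f"only {len(df)} rows, likely too few to train on")

    return warnings
```

An empty returned list is a green light to proceed; anything else is a conversation between the PM and the data team before training starts.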
Given data's importance, I'd guess over 50% of AI projects fail due to data issues.
Budget for the long haul with an AI project
The world is constantly moving toward entropy. Why should your AI model or its data be any different? As a project manager, you may have launched the AI project, but it will degrade over time. It needs constant supervision, and you'll need money to sustain it. AI projects get canceled when management gets its first glimpse of the cost of maintaining the models.
After deployment, the model starts to degrade and the data starts to drift. The model needs regular retuning, and that maintenance gets expensive. Your AI project is also collecting more and more data that must be managed and stored, and keeping up with it pulls data analysts and scientists away from other urgent projects.
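Drift like this can be watched with cheap statistics rather than discovered through failing predictions. One common approach, sketched below under my own assumptions (it is not a method the article prescribes), is the Population Stability Index (PSI), which compares a feature's production distribution against its training baseline. The thresholds in the comment are widely used rules of thumb.

```python
# Minimal drift-monitoring sketch using the Population Stability Index.
# The function and thresholds are illustrative, not from the article.
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a production feature's distribution to its training baseline."""
    # Bin edges come from the training (expected) distribution
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Clip empty bins so the log ratio stays finite
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)

    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))


# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 watch closely, > 0.25 retrain
```

A scheduled job computing PSI per feature gives the team an early, budgetable signal that retraining is due, instead of a surprise line item after the model has already gone stale.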
If you've deployed the project across your organization worldwide, it may need more staff, computing power, and data storage. If that hasn't been planned and budgeted for, the project could end up on the scrap pile.
Closing thoughts
Project managers should always look for red flags that a project is in trouble and be especially vigilant about AI projects.
AI projects require more upfront planning before you reach the pilot phase. You may feel you're spending too much time in the planning phase, but that time is well spent. Your phases should also be gated: if a phase doesn't get a green light, you repeat it before moving on. This prevents you from reaching the end of the project only to discover the AI model isn't doing what you wanted.
The best analogy for a project manager working on AI projects and mitigating the risk of failure is this poor soul who has to feed the cobras on a Monday morning.
AI-Driven Tools for PMs
PlusDocs - Generate AI presentations and edit slides with AI from Google Slides.
Sparky - All-in-one AI journal app for personal development and management.
AI News PMs Can Use
Deloitte’s State of Generative AI in the Enterprise 3rd Quarter Report
ABBYY Survey Reveals FOMO Drives AI Adoption in 60% of Businesses, but Raises Trust Issues
Cool ChatGPT Prompt for PMs
AI for Idea Generation
Can you generate a list of 5 potential user stories we might consider for Project [X]? I'm looking for a broad range of ideas to serve as a starting point for our team discussion. We aim to use this initial list to kickstart our brainstorming session in the upcoming team meeting.
I have started a new Newsletter called Gen X Retirement.
This Newsletter was built to solve a pain point: the lack of easy-to-understand retirement information for the "forgotten" generation (Gen X), which is rapidly approaching its golden years.
If this sounds like something that interests you, you’re welcome to sign up!