This post originally appeared on Forbes.
Thanks to the explosive growth of generative AI over the last year, companies with little experience in using artificial intelligence (AI) to drive business value are scrambling to catch up.
Increasing pressures from boards of directors to quickly develop a corporate AI narrative have many chief data officers (CDOs) taking stock of their AI strategies and enabling capabilities. What these CDOs are finding are significant shortfalls in traditional data management disciplines — such as data quality and data governance — that they believe stand in the way of their AI-enabled future. This is leading many CDOs to question if they are “ready” for AI and what’s needed to right the ship.
In my conversations with CDOs across the globe this year, I’ve heard this theme of AI readiness repeated across every industry and company size, especially from CDOs at the 45% to 65% of companies (depending on which survey you reference) that have yet to widely embrace artificial intelligence.
These concerns are placing many CDOs in a defensive posture, forcing them to re-evaluate their data and AI strategies at a time when companies are expecting quick action. For many CDOs hired within the last three years to deliver on digital transformation mandates, this perceived lack of readiness is problematic, if not existential.
Three Factors Inhibiting AI Readiness Today
This perceived lack of AI readiness among CDOs is heavily influenced by three limiting factors:
- Knowledge gaps in how AI works — especially large language models (LLMs).
- Outdated mindsets rooted in legacy approaches to data.
- A lack of understanding of the disciplines of AI and data science.
1. Knowledge Gaps in AI
A lack of understanding of how AI technologies, particularly LLMs, differ from more traditional operational and analytical solutions is common. A stark example of this is a widespread belief among CDOs that they’ll build custom LLMs trained on their internal structured data. The money, infrastructure and expertise needed to train an LLM mean that the vast majority of companies aren’t building them — even those with significant investments in AI and data science. Trained on text data gleaned from the internet and fine-tuned on hundreds of thousands of natural language questions and answers across a wide spectrum of topics, custom LLMs are an innovation that most companies will likely never need to be “ready” to build. In time, “small” language models may become more common, but for now, I believe the capabilities of both open-source and commercially available LLMs are sufficient to drive significant value for companies seeking to leverage this transformational technology.
2. Mindsets Rooted in Legacy Approaches
Many CDOs are prone to make “apples-to-oranges” comparisons between legacy, operational analytics and more advanced analytics such as AI. Legacy analytics lack dynamism and are vulnerable to low-quality inputs, while AI-based systems are highly adaptable, tuned to support a specific outcome and able to learn over time. One is deterministic and rules-based, while the other is probabilistic and outcome-based. One is highly dependent on human oversight and direction, and the other is infinitely scalable and increasingly autonomous. These fundamental differences mean that many of the data axioms of the past — like “garbage in, garbage out” — will have less predictive value in an AI-enabled future. Yet many data leaders continue to judge their ability to support AI by their inability to sufficiently support legacy patterns. Assuming that the limitations of existing AI-based solutions (like the hallucinations of LLMs) negate their ability to provide value to an organization is an example of a deterministic, binary mindset that CDOs would be well served to evolve beyond.
3. Lack of Data Science Awareness
A third barrier to AI readiness among CDOs is low awareness of the data science function itself. At the roughly half of all companies that have not widely embraced artificial intelligence, CDOs tend to believe that inferior approaches to data management and low data quality will have an immediate and material impact on their ability to realize value from AI at scale. This is true for situations where internal data is used to generate more consistent and accurate responses from commercial LLMs through complex prompting, but it’s less true when data scientists are building customized AI solutions. In the former, data is typically an output of traditional data management processes (like MDM and data quality), while in the latter, data is accessed straight from the source. Data scientists avoid using the output of traditional data pipelines because data science is a highly iterative, experimental process that requires flexibility and freedom to access and manipulate data. These are two requirements ill-suited to infrastructure specifically designed to enable analytics consumption at scale. So, while CDOs must invest in traditional data management solutions to become more AI-ready, the reality is that the people building bespoke AI solutions for companies would see limited benefit from those investments. The primary beneficiaries remain end users consuming analytics or interacting with LLMs.
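To make the first pattern concrete, here is a minimal sketch of how governed internal data might be injected into a prompt sent to a commercial LLM. The record source, field names and the call_llm helper are illustrative assumptions rather than any specific product’s API; the point is simply that the quality of the curated record flows directly into the model’s answer, which is why data management matters most on this path.

```python
# Minimal sketch: grounding a commercial LLM's answer in governed internal data.
# The data source, field names and call_llm() helper are illustrative assumptions.

def fetch_customer_record(customer_id: str) -> dict:
    # Stand-in for a lookup against a governed source (e.g., an MDM hub or a
    # curated warehouse table). Low-quality data here flows straight into the prompt.
    return {
        "customer_id": customer_id,
        "name": "Acme Industrial Supply",
        "segment": "Manufacturing",
        "open_orders": 3,
        "support_tier": "Gold",
    }

def build_prompt(question: str, record: dict) -> str:
    # "Complex prompting" in its simplest form: pair the user's question with
    # trusted internal context so the model answers from facts, not guesses.
    context = "\n".join(f"{key}: {value}" for key, value in record.items())
    return (
        "Answer the question using only the customer record below.\n\n"
        f"Customer record:\n{context}\n\n"
        f"Question: {question}"
    )

def call_llm(prompt: str) -> str:
    # Placeholder for a call to any commercial LLM API; swap in your provider's client.
    raise NotImplementedError("Wire this to your LLM provider of choice.")

if __name__ == "__main__":
    record = fetch_customer_record("C-1042")
    prompt = build_prompt("What support tier is this customer on?", record)
    print(prompt)  # In practice, pass this to call_llm(prompt)
```

By contrast, a data scientist building a bespoke model would typically bypass this curated layer entirely and pull raw data from the source system, which is why investments in pipelines like this one benefit LLM consumers far more than model builders.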
Why AI Necessitates a Paradigm Shift
There is most certainly a lack of AI readiness at many companies today, and its three primary drivers all trace their origins to a lack of knowledge of how AI works and how it is typically operationalized in companies. Many CDOs who claim a lack of AI readiness are swift to point fingers at the state of their corporate data as a primary roadblock, yet concerns about data quality have bedeviled all organizations for decades — even those using AI to generate significant business value. Data leaders willing and able to shift their mindsets and view AI as an entirely new paradigm that does not fully comport with legacy approaches to data management are best positioned to become AI-ready. To become more AI-ready and address these three critical gaps, data leaders must:
- Educate themselves and their teams on how the technologies of AI (particularly LLMs) work.
- Find ways to “learn by doing” by working with business stakeholders eager to automate natural-language-based business processes.
- Encourage team members to integrate the use of LLMs into their daily processes.
- Act as an evangelist and advocate for the benefits of AI and LLMs within the organization.
Malcolm Hawker
Malcolm Hawker is a former Gartner analyst and the Chief Data Officer at Profisee.