
The CDO Matters Podcast Episode 67

Is Your Data Organization AI Ready? with Katharine Shaw-Paffett


Episode Overview:

Data leaders have a massive opportunity to drive transformational value with AI, but many are running on outdated operating models. On this episode of the CDO Matters Podcast, Katharine Shaw-Paffett, the Cross-Solution AI Lead for UK and Ireland at Avanade, shares insights on how CDOs can re-envision their organizations to be more AI ready.

Katharine is at the forefront of the early adoption of GenAI for many large organizations, and her guidance for any company seeking to implement AI is not to be missed.   


Good morning, good afternoon, or good evening, good whatever time it is, wherever you happen to be in this amazing planet of ours. I’m Malcolm Hawker. I’m the host of the CDO Matters podcast.

Thank you everybody for joining us today. Happy New Year. We're still recording this in twenty twenty four, but by the time you see this, it will be twenty twenty five. I hope the best for you and your friends and your family in this year before us. I think it's gonna be an exciting year. I am extremely optimistic.

I can’t wait to see how things unfold in twenty twenty five, and I wish you and your family the best.

Today on the podcast, we are gonna talk about how to re-envision your organization for success with AI. And I can't think of a person better suited to join me in this conversation. Today, I'm joined by Katharine Shaw-Paffett. She is the Cross-Solution Regional Lead for UK and Ireland with Avanade.

If you don't know Avanade, they are a global consulting partner. They're a joint venture between Microsoft and Accenture, so two pretty well-known names in the space. Katharine has deep experience across a number of consulting firms, including Accenture and EY, and was with Microsoft for four years. So she comes by her knowledge honestly in this space, and I couldn't be more thrilled.

Katharine, thank you for being here today.

Such a pleasure, Malcolm. And I do listen to CDO Matters quite often, on the elliptical when I'm at the gym. So it's just fantastic to actually be part of this story today. Thank you very much.

Well, I'm tickled that you are here. I knew that you had done that; you had told me when we met in London the first time, and, you know, I was rather humbled in hearing that you listen to me on your elliptical.

So tell us a little bit about your role. What does it mean to be a cross-solution AI lead for UK and Ireland with Avanade? What do you do, and how are you interacting with Avanade's clients?

Fantastic question. So, I was brought in with a very, very clear mandate, to be very much across the solution areas or across the Microsoft stack and really create value for our clients right through all of the technological capabilities that Avanade can offer.

We’re obviously very much aligned to Microsoft. So we’re twenty percent owned by Microsoft, and that means we’re also aligned to Microsoft in terms of our technology stack. So we have infrastructure, we have application development, we have modern work, so enter Copilot there. We have business applications, so the whole plethora of power platform capabilities, which is also very, very exciting right now, in terms of what you can now do as a citizen analyst or citizen developer.

And we have data and AI as well. And I work across all of these capabilities, together with my colleagues every single day, to create solutions that can, we hope, help our clients' business, and that often involve AI. So I'm doing that on a day-to-day basis. Obviously, one of the, you know, advantages of AI right now is that you can really use it to leverage your proprietary data, which can release even more of a competitive advantage.

So I work on a day-to-day basis probably the most with our data and AI team, and then with our modern work team on, for instance, custom copilots for our clients. So a lot of our clients wanted to first kind of familiarize themselves with generative AI with an off-the-shelf solution like M365 Copilot. Then they decided to connect it to their processes, like Jira, ServiceNow, connect it more to their own data to see those initial synergies, and then create a custom copilot, maybe for a particular process or a particular domain-specific or industry-specific use case.

So that's what I'm doing on a very high level day to day. And I'm industry agnostic right now, though my background is financial services.

So I’m working a lot with our financial services team, but also, more recently with our with our health care team, which I find, absolutely fascinating.

So why I'm particularly excited to talk to you is because of all those other areas outside of the data world that you mentioned.

Data folks like us tend to be, I wish we weren’t, but we tend to be a little insular.

We tend to be a little insular in our worlds. And, yes, we work with our business customers. Of course we do. But having that kind of cross-enterprise view, particularly what you called the modern workplace, which, I assume, is kind of, you know, the organizational impacts, more of the HR-centric impacts.

How do you have a well-trained workforce? How do you have an engaged, motivated, and productive workforce? I'm interested to talk about that. Also interested to talk about how these copilots are plugging into business applications, like old-school CRMs and ERPs and the Dynamics of the world.

Love to get into that. So this is gonna be a great conversation. I'm excited about this.

You know, in the last year, well, two years, since ChatGPT exploded, I think the number one theme that I've been hearing from the CDOs that I talk to every day is this idea of being more AI ready.

Now, of course, you know, I come at this a little bit more from the data and AI side, but what are you seeing in the market in terms of organizations becoming and getting more AI ready? Are organizations, like, standing up kind of AI task forces, or are data and analytics teams driving these, or are they more cross-functional? What are you seeing around this idea of AI readiness?

That's a fantastic question. I think that will be a theme kind of throughout our call, as we dive deeper later on into topics as well, like responsible AI.

I was at the DataIQ Leadership Summit this week. And one of the takeaways for me was, yeah, we've got such an opportunity right now to create competitive advantage in a totally different way.

And what what is competitive advantage in the data world?

It’s, I guess, very much in the proprietary data because it’s your unique insights that you know about your industry that no one else knows.

And with GenAI, obviously, you can really conjure up those unique insights a lot faster.

You've got better access to them. Obviously, GenAI has got a fantastic affinity for unstructured data. So I read a statistic a while ago that before we actually had that shift to more transformative technology like generative AI a couple of years ago, we could really only leverage ten percent of our data, because that was the structured data. And now, when we talk a little bit more about multimodality later on, we can use AI to actually interpret a plethora of unstructured data, which obviously opens up huge amounts of possibilities just to query our proprietary data in a much faster way. We have insights about our clients a lot faster, and we can shift from that minimum viable product much more to that minimum lovable product, which I think a lot of our consumers are gonna be expecting going forward.

Key considerations: I mean, obviously, you'll be able to tell me better than most that data quality is really one of the blockers. I think it was forty six percent of CDOs who say it's really blocking them from realizing their AI ambition, as Gartner would call it. And you're only really gonna be able to leverage the power of LLMs and SLMs, you know, if your organization can actually train those models on proprietary data sets, before actually, I guess, tailoring and using kind of targeted prompt engineering to really maximize that.

Key considerations, think about enterprise grade infrastructure and security. So I think right now it’s really a case of shifting left with security right across the software development lifecycle.

Security should really be in from the start, because as we've seen these advances in technology with AI that really exceed Moore's law, we've also seen advances in attacks, because, yeah, that gives attackers a whole different toolset as well. So, I would say, considerations: yeah, enterprise-grade infrastructure and security.

So obviously, consider migrating to the cloud if you’re not there.

Data readiness.

A lot of our clients, you know, they’ve they’ve got a kind of, yeah, they’ve got to think build versus buy now. And, you know, we all have relatively flat headcount, so we can’t necessarily have a master data management department, for instance. We can’t necessarily, have a whole department that’s labeling the data. I think, we need to think very strategically about independent software vendors like yourself, like other ISVs who can actually help us get that data house in order so that we’re in the best possible position to really maximize the competitive advantage that we have in our data.

Skilling. So we have our School of AI program, for instance, that we've also deployed at quite a number of our clients. So, you know, the learnings that we've learned ourselves, we're now branching out to our clients with. Organizational culture, obviously, and then ethical and legal considerations. So those are kind of top of mind for me, I guess, when it comes to getting your organization to be AI ready. I think also, certainly quite a few of our clients have quite large SharePoint repositories. And it's also now a case of, how can I use, for instance, retrieval-augmented generation to actually put that plethora of unstructured data into something a lot more structured, so I can actually start pulling insights in a much more refined way?
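To make the retrieval-augmented generation idea concrete: in a RAG pipeline you retrieve the document chunks most similar to a question and feed them to the model as grounding context. This is a toy Python sketch, not Avanade's or Microsoft's implementation; the bag-of-words similarity stands in for a real embedding model, and the document snippets are invented for illustration.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call a vector model.
    return Counter(text.lower().replace(".", "").replace("?", "").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank document chunks by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the model's answer in retrieved context rather than training data.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Invented policy snippets standing in for a SharePoint repository.
docs = [
    "Expense claims over 500 GBP require director approval.",
    "The cafeteria opens at 8am on weekdays.",
    "Travel bookings must go through the approved vendor portal.",
]
prompt = build_prompt("Who approves large expense claims?", docs)
```

The grounding prompt is the key design choice: the model is asked to answer only from the retrieved snippets, which is what keeps the answer tied to your proprietary data.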

Wow. Okay. So there’s a lot to pull out there.

The first the first nugget for me as as a data person was something that I’ve shared often in this podcast, which is the idea that there’s kind of a Pareto principle in here in that, you know, eighty percent of the data out there floating around is largely unstructured.

And Oh, yeah. Unmonetized, unvalued, unactioned, just sitting collecting digital dust. And that is one idea of how to become more AI ready: how do we start to automatically classify or put that data into specific taxonomies, so that we can understand what's there and apply things like data quality rules to it, so that we avoid what happened to Air Canada. Everybody knows the Air Canada case, where a chatbot gave the wrong information; they got sued, it was a mess.

So that’s one, is getting our arms around, unstructured data. I heard, you know, a lot there around infrastructure and tools. Couldn’t agree more.

One thing though, before we go off this topic: you know, the three kind of areas where these initiatives could be getting initiated. Right? These larger scale AI programs or initiatives, are they being driven out of the data and AI team, or is it more on the business application side, where people want to make their ERP more efficient? Are these cross-functional teams?

Or are you seeing this more maybe even at a board level, where a board is saying, hey, go do this, go figure out AI and make it happen? Where's the genesis for these programs happening in organizations?

It depends a little bit, and it depends on the size of the organization. I hate to give such a diplomatic answer, but it's very much dependent on the organization. I met with a board this week, and we were talking very, very specifically about use cases that we thought could benefit from transformative technologies and then would actually benefit the business. If it's a slightly larger company, then it might be coming much more from, as you say, the, you know, data side of the house.

But it's a mix. And I think also that's partly the point since we've had this shift, two years ago nearly now. There was an executive from Vodafone at Big Data London, where we both were, and he made a point. He just said, GenAI has really given us a carrot for data management.

So he said that GDPR was kind of a push, but GenAI is really that carrot, which has opened up those board-level discussions and really put it at board level as well.

And well, so another message that I've been giving here through previous podcasts and through LinkedIn posts and you name it, and this is my editorial perspective: if you're a CDO and you're kind of waiting for somebody to ask you for an AI-driven solution, I'm not sure that's the right approach. I would be more on the offensive here as a CDO. I would be taking the bull by the horns and saying, okay, what can I do in order to start understanding this technology, integrating this technology, operationalizing it across the organization? I would be going on the offensive with AI, because sitting back and waiting is, I would suggest, not a great career choice. Anyway, let's dovetail into more of the technology stuff, because this is, I think, an area where you and I could get really excited and fall down a couple of really deep rabbit holes. Go ahead.

I had one other point: a lot of our clients are now utilizing Microsoft Fabric, which, again, I think if you can simplify your data integration across clouds, that is really just a fantastic foundation for your AI journey.

I am I am a believer. I have been a believer since late twenty nineteen when we started talking about the fabric at Gartner.

I think Microsoft is head and shoulders above others, but I would argue it's very much a v one product. But you gotta start somewhere, and it is still transformative even as a v one, I would argue. So I'm with you. Totally bullish on the fabric, totally bullish on the ability to use a single kind of operating pane, as it were, to initiate any sort of jobs, whether it's a Spark job, whether it's just a classic pipeline, whether it's a SQL statement, and being able to hit data anywhere. I mean, that in and of itself, one place to access data across multiple infrastructures, including AWS and GCP, yeah, I think it's a game changer. Totally, totally with you on the fabric.

Okay. So that's a good segue into technology. You know, where do you kind of see things heading for the next six months to a year? We're now into twenty twenty five, where it's, you know, two-plus years since, you know, ChatGPT took the market by storm. Where do you see things heading? Are we doing more RAG, retrieval-augmented generation? Are we doing more small model development?

You mentioned leveraging internal data sources within some of these models. What what do you see for the next few months and years?

So, just to put a disclaimer on this, this is before Microsoft Ignite, so let’s hope that the world hasn’t totally changed when the episode gets released.

But I would still argue that there are some underlying trends that I think will just get stronger and more refined and more developed going into twenty twenty five. We are obviously in a market that outpaces Moore's Law. So Moore said that technological progression would double every couple of years. Satya Nadella said a while ago that AI's progression doubles every six months.

So I would say that a strategic trajectory for me right now is six months in tech.

Got it.

Okay. Although it would normally be three to five years, for me, strategic could also be just six months right now. What do I see coming?

I think AI will be increasingly more multimodal.

We'll have a plethora of different models to explore, I think, going into twenty twenty five. Obviously, agentic AI, advanced connectivity, and then even more of a stringent emphasis on, I guess, you know, the devil being in the data: data management. So that's good news for CDOs everywhere. Just to touch on multimodality first of all: obviously, we mentioned the kind of affinity that generative AI has for unstructured data.

And now you can feed ChatGPT not just with text. You can feed it with sounds. You can feed it with videos. You can feed it with pictures.

I think those multimodal capabilities, particularly now that we've got the o models, so GPT-4o for instance, will advance. I saw a use case at one of our clients the other day. It was a manufacturer, actually, and we made a chat interface for them. And they can actually feed this interface the sound the machine is making. And then our agents, so we've created an agentic workflow there.

Obviously, with agentic AI, they can actually, you know, act autonomously now. They’ve all been trained on different data sets, so they can come up with an action and a judgment. So we fed in the sound the machine was making, I don’t know, click, click, click. And then all of the agents were working away, figuring out what was wrong there.

Is it rusty? Why is it making that noise? And I think multimodality will just continue to absolutely amaze us next year. Beyond the multimodal front, what am I excited about?

Small language models. I'm really excited. We just had an announcement from Microsoft a couple of days ago about industry-specific small language models. But I'm very excited about small language models, particularly because some of our clients are small or mid-market.

You don't have to use GPT-4 just to summarize or classify now; you can use one of the Phi-3 models.

So we can make quite strategic cost decisions now going forward based on which model we choose. So maybe we use GPT-4 for a much more complex reasoning task, but then we can utilize the small language models for something a lot simpler.

Their code bases are smaller, so they’re supposed to be less vulnerable to attack. You can use them on devices with limited processing power.
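The strategic cost decision described here, route simple tasks to a small model and reserve the large model for complex reasoning, can be sketched as a tiny router. The model names, per-token prices, and task list below are illustrative placeholders, not real SKUs or price sheets.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float       # illustrative prices, not a real price sheet
    handles_complex_reasoning: bool

# Hypothetical two-model catalog: one SLM, one LLM.
CATALOG = [
    Model("small-lm", 0.0002, False),
    Model("large-lm", 0.0100, True),
]

# Task categories treated as "simple" for routing purposes (an assumption).
SIMPLE_TASKS = {"summarize", "classify", "extract"}

def pick_model(task: str) -> Model:
    # Simple tasks go to the cheapest model; anything else is restricted to
    # models that can handle complex reasoning.
    candidates = (CATALOG if task in SIMPLE_TASKS
                  else [m for m in CATALOG if m.handles_complex_reasoning])
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)
```

In practice the routing signal might come from a classifier or from the request's metadata, but the shape of the decision, cheapest model that is capable enough, stays the same.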

I recently started a series called Imagine What You Will Do with AI, hosted by Avanade UK and Ireland. I interviewed you on one of the episodes, but I also interviewed an AI Global Black Belt at Microsoft. And he was saying how, potentially, you could leverage small language models in Formula One, for instance, very effectively, because they're very responsive. And you don't really have the luxury of having that latency to upload everything onto the cloud when they're going so fast, but you still need to pull the data from the cars and know what's happening.

So, potentially, you could leverage a small language model there, for instance. So that was one of the most interesting use cases I've seen so far. But as I said, for much simpler tasks like just classification or summarization, I think we'll just be a lot more deliberate about which model we use next year. Obviously, we've got a plethora of open-source models, which opens up a lot of freedom.

And we've got the o1 models now that, because of the reinforcement learning, have cognitive capabilities. So I think we will just have a plethora of different models to play with next year. I touched upon agentic AI already.

We are, for instance, working with Mangata, a satellite communications company.

And, this is also my next point around advanced connectivity.

We're working on agentic AI with them, for instance.

If we go on to the next trend that I'm seeing: advanced connectivity. I was reading the other day that only seventy to eighty percent of use cases are actually possible with the technology that we have now. So we need to think a little bit more about our new digital backbone. So not just leveraging our traditional Wi-Fi, but leveraging 5G, leveraging low Earth orbit satellites, for instance, edge computing, satellite connectivity in general.

As I said, we've partnered up with Mangata, and we're building edge-based solutions with them, for instance, for locations, you know, where they would have bandwidth issues. And then, potentially, if you build an agentic workflow into that, an agent could autonomously, because it's an agent, make the decision: which device should I give the bandwidth to if I need to prioritize bandwidth?
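That bandwidth-prioritization decision could reduce to something as simple as a greedy policy: serve the highest-priority device first until capacity runs out. The device names, priorities, and numbers below are invented for illustration; a real agent would weigh far more context before acting.

```python
def allocate_bandwidth(requests, capacity_mbps):
    # requests: list of (device, priority, demand_mbps); higher priority first.
    grants, remaining = {}, capacity_mbps
    for device, _, demand in sorted(requests, key=lambda r: -r[1]):
        grant = min(demand, remaining)  # give as much as capacity allows
        grants[device] = grant
        remaining -= grant
    return grants

# Made-up devices at a remote site sharing an 8 Mbps satellite link.
requests = [("sensor-hub", 3, 4), ("crew-tablet", 1, 10), ("telemetry", 2, 6)]
grants = allocate_bandwidth(requests, 8)
```

With 8 Mbps available, the sensor hub is fully served, telemetry gets what remains, and the lowest-priority tablet gets nothing until capacity frees up.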

So I think agentic AI and advanced connectivity will also play a huge role next year.

And then lastly, just data, data, data. I think it's so important. And, yeah, I see it every day: we still haven't done everything that we need to do, you know, with our data in our organizations to really get AI ready. So I think that will be paramount.

And, as our executive from Vodafone said at Big Data World, we've now got a carrot to actually get our data house in order.

Yeah. And it's a massive carrot. There are people out there talking about, you know, the AI bubble bursting, and I find that kind of humorous. Right? And I don't mean that in a negative way or a judgmental way, but, you know, there's a lot of people saying that this is very similar to, you know, two thousand, what, eight, nine. Right?

Or, I'm getting my decades mixed up, I'm old. But around the kind of Internet dot-com, you know, implosion, which would have been actually nineteen ninety nine, two thousand, two thousand one. And they're saying, well, this is the same situation. And I just have a hard time believing that, for a lot of different reasons.

Now, sure, a lot of startups are gonna go broke. Of course a lot of startups go broke, because startups go broke. That's what they do.

They're high risk, high return, and, you know, a lot of them will be culled out of the market.

But I have a hard time thinking that there isn't meat on this bone. Right? That there isn't, like, incredible transformative value. And what you're seeing in the market, Katharine, is that, yeah, companies are running towards this stuff and are making strategic investments in this. So what do you think when you hear people, you know, what's your response to somebody saying, well, the AI bubble is gonna burst? How do you respond to that?

So, first point from my side: you were mentioning startups and startups going bust. I think Sam Altman came up with this phrase about startups wearing a t-shirt: OpenAI killed my startup.

And he said the startups that base their technology on what is available now will, you know, be wearing that t-shirt in six months' time. The startups that are really looking around the corner and, you know, building their solutions around what will be available are the ones that are definitely not gonna be wearing that t-shirt.

I don't think the bubble is going to burst.

I think we've got a fantastic opportunity here. And I also think we've got to a point, I remember being on a panel a couple of years ago at the Mobile World Congress, and the kind of essay question was: can anything be connected?

And should anything, or everything, be connected? And one of my kind of, I guess, arguments during this panel was that we've got to a point, in terms of megatrends, in terms of climate change, in terms of chronic disease, in terms of technological progression, in terms of digital debt too, that, you know, we can't actually deal with the amount of traffic that we get. It's just surpassed our ability to actually deal with it. I would argue this technology, and the paradigm shift as well as platform shift that we've seen over the last couple of years, is actually needed from an evolutionary perspective. So, no, I don't think it's going to burst, and I think it comes at the right time.

Totally agree. There's a little bit of an irony embedded in what you just shared, in that I think there is so much untapped value out there. For example, in the eighty percent of data that is, you know, completely unstructured, like all this data sitting in data centers that, by the way, is consuming scarce energy.

Yeah. But that but that aside, you’ve got all this data out there that is largely just collecting dust.

And we've been unable to monetize it, unable to govern it, unable to analyze it, because our systems just don't operate at that scale. Well, historically, they have not. But along comes AI, and AI does scale really well. Yes.

It consumes energy to do it, but the idea of using AI to drive value that AI can then operationalize, that's kind of the irony. Right? To kind of solve some of these bigger problems, I think we're gonna need to use AI. Here's an example: to be able to govern AI. And we're gonna get into this with responsible AI next.

But to be able to effectively govern it, you're going to have to use AI, because that's the only way that you're gonna be able to execute any of these programs at scale. So I find a little bit of an irony there, but I think it's okay, because the value is going to be truly transformative.

I touched on responsible AI. Yeah.

In all of the conferences that I’ve gone to over the last two years since the explosion of ChatGPT, one of the bigger themes has been the idea of what data people like me call AI governance.

Right? Historically, governance defined the policies and procedures of what you can and can't do with data, what you should and what you shouldn't do with data. It touches kind of on the idea of bias. We'll talk about that a little bit more.

But what I’m hearing a lot in the data world at the very least is a lot of platitudes and a lot of frameworks. And don’t get me wrong, I like frameworks. So there are people saying, Hey, here are the things you need to do in in order to do AI the right way. And one of those things you need to do is you need to be responsible.

But I'll be honest and tell you, Katharine, I'm not entirely sure what that means. I mean, I know what it means from a definitional perspective, but when it comes to operationalizing this, I see a lot of challenges here, because, I don't know, how do you quantify the idea of responsibility? What are you seeing your clients do? What are you seeing your clients implement? And how are you guiding them in this journey?

Fantastic question. And I definitely don't have all of the answers here.

I think, first of all: what does responsible AI mean for me, and why is it so important?

We've just talked about the capabilities of this technology and, you know, the paradigm shift we've seen since it was created.

And I think when you're dealing with a technology at this scale, with these capabilities, it's got the ability to exclude, and it's also got the ability to be very, very harmful.

So it's very, very important that we get this right. And you did mention frameworks, but I do think it's important to have a kind of responsible AI framework with principles that you can live through tooling and actions from the start. Before I quickly dive into that, I actually wrote this down. I saw a very interesting quote from Stephen Hawking.

He said, and I wrote this down: the real risk with AI isn't malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we are in trouble. So particularly when we think about next year and the trends I just alluded to with regard to agentic AI, when AI can actually be very much autonomous and act autonomously in that kind of agentic framework and take an action for us, it's really paramount that the goals are aligned.

A lot of our clients have done this. I was talking to a client the other day, and they built a responsible AI framework up from the start.

Very much like Microsoft has, for instance: in their responsible AI principles they've got inclusiveness, privacy and security, and transparency, to name a few. And this client actually approached it very similarly: they built up their principles in a framework and then actually started to utilize some of our tooling. So we've got our responsible AI tooling, our AI control tower, for instance. You feed through every single use case that includes AI that you're working on.

It will grade it in terms of low, medium, and high risk. It will look for the danger signs, for instance, and it will also totally align with your organizational reporting. So it's constantly reporting on the potential risks that it sees in the use cases we're producing. And that, we use it at Avanade as well, allows us to have a very, I guess, actionable governance and also allows us to really live our principles.
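A minimal sketch of that kind of low/medium/high triage, assuming a few illustrative risk signals; the actual criteria inside the AI control tower aren't described in the episode, so the signals and thresholds below are hypothetical placeholders.

```python
# Hypothetical risk signals; a real control tower would use far richer criteria.
RISK_SIGNALS = ("uses_personal_data", "acts_autonomously", "customer_facing")

def grade_use_case(use_case: dict) -> str:
    # Count which risk signals apply and map the count to a grade:
    # 0 -> low, 1 -> medium, 2 or more -> high.
    score = sum(bool(use_case.get(s)) for s in RISK_SIGNALS)
    if score >= 2:
        return "high"
    return "medium" if score == 1 else "low"
```

The point of a scheme like this is that every AI use case gets a consistent, reportable grade that can feed organizational reporting, rather than relying on ad hoc judgment per project.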

I can just touch upon this a little bit as well, because we're very much aligned also to Microsoft's responsible AI principles. Do you know how they came to be?

No. So, Microsoft, I think the first draft came out in twenty nineteen for their responsible AI principles, but they actually went to Rome. Oh. So, yeah, just before the pandemic kicked off, they went to Rome. They visited the pope, together with the FAO and IBM, and actually three Abrahamic religious leaders, and signed a covenant called the Rome Call for AI Ethics, and actually talked through with the pope and the other religious leaders about AI, its ramifications for society, and actually how to govern it.

So that, I think, leads on to our next topic of AI being a social cement. But have I answered your questions with regard to responsible AI?

Yes.

At a high level, and I think for where we are as an industry, that's totally and completely appropriate, because I don't think anybody's got all the answers yet.

By the way, I didn't mean to cast aspersions on frameworks. I love frameworks. They're needed and they're necessary.

My challenge to the industry, and this is to everybody, is to go to the next step, which is the actual: how do I implement this as a data person? How do I implement these rules? And you started to touch on them.

Step one is to actually define what the rules would be. Define the utility functions, as it were, of what we expect these systems to do. I'm fascinated by the whole pope thing and the Rome thing. I need to do a little bit more research there.

But we're on a journey here. I don't think anybody has all the answers at day one. The Hawking quote was really interesting because, you know, I think what he touches on is the idea of unintended consequences.

Right? I mean, we see unintended consequences every day. An example would be microplastics.

Right? Plastics have revolutionized how we store and manage even things like food and water. I think you could argue that plastics have been a huge driver of limiting starvation and of helping developing economies prosper. But a consequence of that is garbage.

Right? And I think with AI, we will find unintended consequences much faster.

Right? Because the AI will be out there doing its thing. It’ll be solving these problems that we tell it to go solve. But what we’ll learn along the way is, oh, wow.

We didn't program in that it shouldn't, I don't know, do these things, because we never even considered them; we're merely human. So I think a lot of responsible AI will be reactionary, unfortunately.

I think so too. And if I'm putting on my CDO hat and thinking about Microsoft's responsible AI principles and some of the frameworks we've built up at our clients, that inclusiveness pillar really sticks out. CDOs have a huge imperative to make sure the applications we're building are trained on a wide range of datasets. I heard something the other day about a car manufacturer, for instance.

They built a solution where you could unlock your car door just by putting your hand in front of the sensor, but they'd only trained those models in a particular part of the world, on a very particular demographic. So the only people who could open those car doors were white men who were forty-five years old. I wouldn't have been able to open the car door, for instance, because they'd thought about who was in the room, but they hadn't thought about who was not in the room. The CDO's responsibility and mandate to make sure we have very inclusive data when we're building applications and training our models will be paramount in terms of responsible AI. I think they have such a pivotal role there.
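The car-door story comes down to checking who is, and isn't, represented in the training data. A CDO's team could start with a check as simple as comparing each group's share of the training data against its share of the target population. This is a minimal sketch; the group labels, reference shares, and tolerance threshold are assumptions invented for the example:

```python
from collections import Counter

def representation_gaps(samples, expected, tol=0.5):
    """Flag any group whose share of the training data falls below
    tol * its expected share of the target population."""
    counts = Counter(samples)
    total = len(samples)
    flagged = []
    for group, expected_share in expected.items():
        actual_share = counts.get(group, 0) / total
        if actual_share < tol * expected_share:
            flagged.append(group)
    return flagged

# Example: a hand-image dataset dominated by one demographic.
data = ["group_a"] * 90 + ["group_b"] * 8 + ["group_c"] * 2
target = {"group_a": 0.40, "group_b": 0.35, "group_c": 0.25}
print(representation_gaps(data, target))  # ['group_b', 'group_c']
```

A check like this only catches who was "not in the room" for the attributes you thought to record, which is why the multidisciplinary review discussed later in the conversation still matters.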

Yep. That's gonna be an interesting one, because I think often we will be tasked to try to create data that may not be there. Right? For whatever reason, we just don't have the data.

In the case you mentioned, I wouldn't say malpractice; that's too strong a word. That's an error of omission. Right?

Mhmm. But in situations where we just don't have data, but we probably should in order to ensure things like inclusivity, I think we'll probably have to make it. Right? We'll have to synthesize the data, which is interesting.

Maybe a separate podcast just to talk about synthetic data and how to actually make it. And how would you make data for smaller hands, or hands of a different demographic? It's just such a fascinating topic.

I mean, look at Big Data World.

Look how many independent software vendors there were at that trade fair who were producing synthetic data, for instance.

Yeah. Exactly. But even synthetic data needs something to be synthesized from.

Yeah.

Right? It's anonymized and it's genericized, but it's still being synthesized from something. So here we go again; we get to a point where, well, maybe we could use AI to do that.

Right? We could use AI to synthesize the data of a different demographic hand.
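Synthesizing "from something" is in fact how common oversampling techniques work: new samples for an underrepresented group are interpolated between existing real ones, SMOTE-style. A minimal sketch, assuming numeric feature vectors (the measurements here are invented; real libraries such as imbalanced-learn do this more carefully, interpolating between nearest neighbors rather than arbitrary pairs):

```python
import random

def synthesize(samples, n_new, seed=0):
    """SMOTE-style interpolation: each synthetic point lies on the line
    segment between two real samples from the underrepresented group."""
    rng = random.Random(seed)
    new_points = []
    for _ in range(n_new):
        a, b = rng.sample(samples, 2)  # pick two distinct real samples
        t = rng.random()               # interpolation factor in [0, 1]
        new_points.append([x + t * (y - x) for x, y in zip(a, b)])
    return new_points

# Example: augment a tiny set of 2-D hand measurements (invented values).
real = [[7.1, 18.2], [6.8, 17.5], [7.4, 19.0]]
extra = synthesize(real, n_new=5)
print(len(extra))  # 5
```

The limitation is exactly the one raised above: every synthetic point stays inside the range of the real samples, so if a demographic was never measured at all, interpolation cannot conjure it.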

Oh, boy. There are such big problems here. Boy.

It's a good time to be in data though, I'll tell you, because there's certainly no shortage of work to do.

We've got our hands full, hands of all shapes and sizes.

So let's talk about our last topic here. In our conversation doing the prep for the podcast, you raised this concept of something you called a social cement, and how technology can be used as a social cement. I'd never heard that before. I think it actually may be yours.

That was from me. Yeah. Yeah. Yay. Alright. Yeah. That’s smart.

Social cement. Well, tell me what that means to you. What is technology as a social cement? And how does it relate to AI?

Yes. So that's a fantastic question. A social cement is a term to describe elements in a society that bring people together. I've forgotten his name, but there was a philosopher who said pop music, for instance, brings people together.

I'm sure a lot of people in society are brought together because of sports, for instance. They've got a common set of values, and they're brought together on that basis.

I'm very much a people person. I remember doing my CliftonStrengths test when I joined Accenture, and I had WOO, winning others over, as one of my strengths. That's very much about getting excited about people and making new connections with people.

So I do go to a lot of tech events. And what always astounds me is that you've got a plethora of different stakeholders. Look at the Mobile World Congress, for instance. You've got maybe seventy thousand people over three to four days, from every country, from every background, from a range of disciplines, but they're all coming together with a common goal or a common set of values to achieve something amazing with technology.

So I do believe technology is very much a social cement. I gave the example of the Rome Call. When Microsoft, the FAO, and IBM met the pope, with leaders of the three Abrahamic religions and a couple of other religious leaders there too. When else would you have that?

When else would you have that group of stakeholders coming together to align on a common set of values? You probably wouldn't, if it weren't for tech. So that gets me very excited about tech.

I remember having a bit of a moment, which probably brings me on to the next thread, which is multidisciplinary teams, and that you really need a multidisciplinary team if you're engaging with technologies like generative AI now.

I remember, it was when I was at Microsoft leading the data and AI partner ecosystem.

And we'd just had that platform shift. A lot of partners were beginning to use OpenAI on Azure, for instance.

They were doing hackathons to explore those initial use cases with their clients and even create them. One of the partners came up to me and said, Katharine, we've just done our first GenAI hackathon. And I said, how did it go? And he said, it went really well.

And we had a lawyer in the room. So they had a lawyer sitting in on that hackathon. It's very much now about a multidisciplinary team. I see that every day at Avanade.

I'm not in one solution area; I'm across a huge technology stack now. And I think that platform shift, and now paradigm shift, has very much opened up this multidisciplinary way of working, as well as this social cement that I like to call technology, I guess. I also think, going forward, what can CDOs take from this?

I think if we're gonna really excel and scale in this era, we need to work with multidisciplinary teams. We also need to be very inclusive about design. There's a lot of out-of-the-box thinking required to really excel, I would say, in the age of GenAI. That means also thinking about inclusive office space, so that neurodivergent colleagues, for instance, can really bring their best to work and come together to build something exceptional.

Well, I can't think of a better way to end our conversation than on that note of optimism. I love it. I'm a believer. I think AI will do good things for society, will help transform things, and will solve really, really difficult problems. And that means it's a great time to be in data and analytics. It's a great time to be a CDO. It's a great time to be partnered with Avanade.

Thank you, Katharine, for taking an hour out of your busy day to share your knowledge with our listeners. It’s been a thrill to have you. Thank you.

It’s my pleasure. Thank you very much, Malcolm.

With that, happy New Year. Happy twenty twenty five, everybody. If you've made it this far and you haven't subscribed to the podcast, if you're not connected with me on LinkedIn or with Katharine on LinkedIn, please make the connection. I would happily get into discussions with you on LinkedIn.

I’m posting almost every day content related to CDOs, data leaders, data stewards, data governors, you name it. If it’s data related, I’m talking about it, and I would love to hear from you. With that, we will leave you for now. Please tune in to another episode of CDO Matters podcast sometime very, very soon.

Happy New Year. We will see you all very soon. Thanks. Bye for now.

ABOUT THE SHOW

How can today’s Chief Data Officers help their organizations become more data-driven? Join former Gartner analyst Malcolm Hawker as he interviews thought leaders on all things data management – ranging from data fabrics to blockchain and more — and learns why they matter to today’s CDOs. If you want to dig deep into the CDO Matters that are top-of-mind for today’s modern data leaders, this show is for you.

Malcolm Hawker

Malcolm Hawker is an experienced thought leader in data management and governance and has consulted on thousands of software implementations in his years as a Gartner analyst, architect at Dun & Bradstreet and more. Now as an evangelist for helping companies become truly data-driven, he’s here to help CDOs understand how data can be a competitive advantage.
