
The CDO Matters Podcast Episode 93

Professional Development in Data with Eric Overby


Episode Overview:

In Episode 93 of the CDO Matters podcast, host Malcolm Hawker sits down with Eric Overby, Faculty Director at Georgia Tech, for a thoughtful conversation on professional development in data. From building lasting skills to staying relevant as the field evolves, they explore what it takes to grow a meaningful, resilient career in today’s data-driven world.


Good morning. Good afternoon. Good evening. Good. Whatever time it is, wherever you are in this amazing blue floating planet of ours, I am Malcolm Hawker.

I am the host of the CDO Matters podcast. Thanks everybody for joining. And, hey, just looking ahead of my schedule, perhaps Merry Christmas. Happy holidays.

I was just looking to see if this episode would be released on or near Christmas day. So if this is Christmas day, which I doubt, we probably won't publish on Christmas day, but we may publish the day after, which Canadians like me call Boxing Day.

Why it's called Boxing Day could be a separate conversation, but in Canada, just an FYI, that is like Canada's Black Friday. So the big retail day in Canada is the day after Christmas. That's when everybody goes shopping for whatever reason. I don't know why, but happy Boxing Day if it happens to be Boxing Day. Okay. Enough. Happy holidays.

Enough with that. Today, we're gonna talk about professional development. We're gonna talk about learning and development specifically. We're gonna talk with Eric Overby today. Eric is a professor of business and technology. Did I get that right? Technology and management.

Right. Right. That's part of my full title. It's the Steven A. Denning Professor of Technology and Management.

Okay. My apologies. I missed the first part, which I assume is an endowment of some sort. Yes?

That is right. Yes.

Okay. Okay. So to make a long story short, you are a professor at the Georgia Institute of Technology, also known as Georgia Tech. So, like, you don't say Georgia Institute of Technology anymore? Is that, like... they still say it on all the sports channels, I think.

It's mostly Georgia Tech.

Formally Georgia Institute of Technology, but everybody just says Georgia Tech. Go, Jackets.

Okay.

Well, alright. One and the same, the Yellow Jackets. You are a professor. We're gonna talk today about a program that Scheller has launched. We're also gonna talk writ large about professional development, learning and development. Hey, should you go back to school?

What other forms of going back to school are there? Just learning. We live in a really fast-paced environment. I'm a huge believer in learning and ongoing training.

So that's gonna be the topic of today's discussion. So let's start with a high-level overview of the program that you're offering at Georgia Tech, and then we can take the conversation from there. So what do you guys do at Georgia Tech, Eric?

Great. Yeah. Thank you, Malcolm.

A host of programs. The one that I am directing as of the past few months is called the Chief AI and Data Officer program, and we're using a blended approach. There's a brief residency at the beginning, the bulk of the program is synchronous and online, and then a capstone residency at the end. The format's there to provide the flexibility needed for working professionals to be a part of this, including people that may not be able to get to the campus in Atlanta. And then the residencies, and the program as a whole, help build the network; you have a chance to, you know, rub elbows with the other participants and the faculty as well.

So tell me a little bit about your average student. Are these early-career professionals, mid- to late-career professionals? And then tell me, like, why? What are you hearing? What are some of the motivations? Why are people going back to get additional learning from you?

Yeah. So this is one of our executive education programs. And the target is the person who's in the role, chief AI officer, chief data officer, or a combination of both, and people that are a step or two away from that role, whose next career move will be that role or a move close to it.

And the idea is that there's a technical aspect to the role, clearly: understanding the capabilities and separating the reality from the hype with what's happening in AI. But there's also a big managerial component as far as identifying what projects should be implemented, how to identify those projects, what sort of leadership needs to be there as a C-level executive, and some of the legal and ethical considerations that come along with data handling. So all those things kind of meld together, and the goal of the program is to provide depth in all of those, because all of them are important for the role, whether it's the AI officer, data officer, or some sort of blend.

So where are you getting your faculty for this program from? Are these established tenured professors that are on staff at Georgia Tech? Are these local professionals?

Are they people like me, maybe, that you're bringing in

to teach specific courses? What does the faculty look like?

Right. So the faculty for this program is predominantly from within the Scheller College, across all of our areas and disciplines. So you can think of it as, we've got the area we call the information technology management area bringing a lot of the technology and business pieces: what is happening with AI, how does it affect the workflows and practices you'd have in your organization. Some of our finance faculty will talk about valuing projects and determining which ones make sense to move forward with. Our organizational behavior faculty will talk about leadership and communication, for example.

Operations management folks will look at process work and how AI works there. Law and ethics, we have a strong area there, which we'll look at as well. And this is a big issue with AI now, and also data handling. What are the legal requirements? What are the regulatory considerations? And how should you do it ethically?

And I've left out a couple of areas there that I didn't mean to, but that gives you a flavor of the breadth of the program. It's touching across all the different functions of business. And that's important, because the role is gonna touch on all those things. You gotta be able to manage people, identify and value projects, manage those projects, report up and down about the success or lack thereof of those projects, and know what's going on technologically.

So are you aware of many, you know, formal graduate-type programs? I'm aware of a couple, but are you aware of many in this field? It seems to me that from a degree program perspective, particularly postgraduate, there's not a ton out there. What are your perspectives on this? Are you aware of any?

Right. There are some. They're not a lot. I think it's partly a function of the newness of the role. The chief data officer has been around longer than the chief AI officer, but they're both fairly new.

So there are some programs, but it's a growing area. And given the way the technology is improving and its impact on business, it will continue to be a growing area.

There's gonna be more need for this competency.

For sure. So I have worked in the past with Dr. John Talburt. He runs a program at the University of Arkansas at Little Rock. I know he's got a degree program for CDOs.

I believe there's one through Carnegie Mellon, a degree program there that's focused specifically on CDOs and AI officers as well. And speaking of Pittsburgh and Carnegie Mellon, I think Pitt may have one too, but there's not very many of them out there. I'd love to see more of these programs in academia.

And I guess what you're doing is at least a very, very good starting point. Getting back to the average student. So, you know, these are people in the job. They're doing the work.

When you talk to some of these students, are you getting a flavor for why they're joining your program? Are they, you know, feeling like they just don't know enough about the technology and the technology is moving too fast? Are they coming specifically for one of those perspectives that you talked about, like law and ethics, or some of those frameworks or evaluation? Is there any one common theme among your students?

It's a mix, but the predominant path is the person who has the technological depth and wants to expand into the managerial realm. That's common across many of our programs. A lot of double jackets and triple jackets who have mechanical engineering degrees, other computing degrees, physics degrees from Georgia Tech. And they have a lot of technical depth, and they wanna do more in HR, more in finance, more in the broader business side.

So that's one key profile of the student: how do I expand beyond my technical silo into this leadership role, because I'm gonna need a lot more than just my technical skills.

Got it. So you had talked about the capstone aspect here being in person. But given the virtual nature of this, are you seeing folks from, I would assume, pretty much all over the Southeast, if not the entire United States?

That's the goal. I don't have the data on the geographic distribution.

Okay.

But the idea behind having it be this hybrid structure is to allow for that.

And that's a trend everywhere, regardless of the program. Whether it's the full-time flagship MBA program or programs with additional flexibility through online offerings, you're seeing that everywhere, just to allow for that greater reach and greater flexibility for the students. And we're getting a good sense now, kind of post-lockdown, post-COVID, of what works well in the classroom and what works well on an online synchronous platform: you know, what do you need to be together for, and what can you do otherwise? And so we're trying to strike that balance here, to program the right things at the right moments. Because not everything requires, you know, all students and the instructor to be co-located.

Got it. Well, if you ever do reach beyond your in-house faculty and are looking for local experts, I happen to know a guy. He runs a podcast.

He's actually a chief data officer. So I'm just saying, if you ever... yeah. I might know a guy. I might know a guy that could help you out.

So maybe I'll just hand it off.

That'd be great. We can talk.

Hey. Give me a microphone. I can talk about anything for an hour.

Alright. So you had talked about one interesting field, and we had discussed this in our planning session. One of the areas you said that you focus on is prioritization: what to focus on, what not to focus on. Now I know you may not necessarily be teaching this, but given you're the program director, I assume you have some insight into what that actually looks like.

Describe for me, what are some of the tools or methodologies that you may be looking at from a prioritization perspective? The reason I ask is because this is a particular challenge for a lot of data leaders. If you ask me, the number one challenge facing a lot of data leaders is an inability or lack of desire to quantify some of the things they're doing.

So what sort of insights can you provide there, from a business valuation, ROI kind of perspective?

It's a great question. There are two pieces to it. There's what I'll call the valuation piece, and then the storytelling piece. So the valuation piece: think spreadsheet.

You've got your input costs, which maybe are quantifiable in the sense that you can determine: what is my software license fee? What people am I gonna allocate to this, and what do they cost on a fully loaded basis? Is it one FTE, three FTEs?

What are my costs there?

And, you know, you make that investment and then you're trying to model out: will it pay off? And that's the harder part, in the sense of what the return on the investment's gonna be. So what are you going to measure as the benefit? Are you increasing sales, and by how much?

And what is your margin on those additional units sold? That can go into the spreadsheet. Are you expecting to reduce headcount? And what do those people cost?

And that can go into the spreadsheet. Maybe it's a combination of both. Maybe it's some measure of how satisfied your customers are, which you quantify as the percentage of repeat business you would get. Maybe it nudges from sixty percent to sixty-two percent if you do this thing, and you can put that in.

And then you just do your net present value assessment. Like, will this pay off if these assumptions hold? So that's the first part of it. The second part, which is the storytelling piece, is that none of these things are necessarily true.

Like, you don't really know precisely what the benefit's gonna be. You don't necessarily know what your costs are gonna be. So you do some sensitivity analysis in the spreadsheet that gives you a range of outcomes. And then you have to tell the story.

Then you have to say, look, here's what I think I've got. This is why I think this is an NPV-positive investment to make. And here's why. Here's what could go wrong.

Here are the risks. And if these things go south, then we're gonna lose money. But, you know, maybe sixty, seventy, eighty percent likely we're gonna be okay. And then you move ahead.

You get your approval from the CFO, the CEO, what have you, and you do it or not. But the storytelling part's a big piece of it. Because in the spreadsheet, you know, you can put any numbers you want in there. You have to present the story the spreadsheet's telling in a way that seems credible and that lays out all the risk factors, and you make a reasoned decision.
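The spreadsheet-plus-sensitivity approach Eric describes can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the program's curriculum, and every figure in it (license fee, FTE costs, customer counts, margins, discount rate) is an invented assumption for the example:

```python
# A back-of-the-envelope NPV model with a simple sensitivity sweep,
# in the spirit of the "spreadsheet" described above.
# Every number here is a made-up assumption, not real data.

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the year-0 (upfront) flow."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def project_npv(repeat_rate_uplift, years=3, rate=0.10):
    """NPV of a hypothetical AI project whose benefit is a nudge in the
    repeat-business rate (e.g. 0.02 means 60% -> 62%)."""
    upfront_build = 250_000        # one-time implementation cost (assumed)
    license_fee = 100_000          # annual software license (assumed)
    ftes = 2                       # people allocated to the project
    fully_loaded_cost = 150_000    # per FTE, per year (assumed)
    customers = 120_000            # customer base (assumed)
    margin_per_repeat = 250        # margin per additional repeat customer (assumed)

    annual_cost = license_fee + ftes * fully_loaded_cost
    annual_benefit = customers * repeat_rate_uplift * margin_per_repeat
    cashflows = [-upfront_build] + [annual_benefit - annual_cost] * years
    return npv(rate, cashflows)

if __name__ == "__main__":
    # Sensitivity analysis: does the investment still pay off if the
    # repeat-rate nudge is smaller or larger than the base case?
    for uplift in (0.00, 0.01, 0.02, 0.03):
        print(f"uplift {uplift:.0%}: NPV = {project_npv(uplift):>12,.0f}")
```

Under these made-up assumptions the project only clears zero if the repeat-rate nudge actually materializes at roughly the two-point level, which is exactly the range-of-outcomes story a data leader would then have to tell.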

So the last bit there is something you touched on that I think is really important. It was all important, don't get me wrong. But the last bit there was really important, and I think it will help data leaders tell a better story. That is the risk aspect of this. Right? So what could go wrong?

What are the costs of this? What's the cost of not doing this? So more of an opportunity cost perspective. Adding that little bit of spice to your analysis will go a long way because, generally, to me, that's how CEOs think.

CEOs are thinking, okay. Here’s an opportunity. Here’s how much it costs. Here’s how much it’s gonna benefit me.

But then what are the potential costs? Right? What are the potential risks of doing this? How should I potentially be marking this down, as it were, in order to do a full analysis?

So that last little bit you recommended is absolutely incredible, I think.

I think one of the best ways to be credible when you're telling the story and presenting the scenario is to be very transparent about what the challenges are gonna be and where it might go south. And you can say, look, here are two critical things where I don't know what's gonna happen. If they go badly, then things aren't gonna work out. But this is how I foresee overcoming those hurdles. So I'm going to tell you there's a challenge: maybe this thing's going to hallucinate too much, and we're going to end up with a huge liability from a nonexistent policy that it makes up, or what have you.

But in order to overcome that, to mitigate that risk, we should do steps one, two, and three.

And then being honest and transparent and candid about what the problems are gonna be, but then having a plan for how those problems are gonna get solved, I think makes it much more credible.

Assuming that the plan you have is credible. And if you can’t come up with a credible plan, then pick another project.

Totally agree. And one way to make that plan credible, from my perspective, and I would love your perspective on this as well.

One way to make the plan credible is to have it not be yours. And what I mean by that is to go work with your chief procurement officer, your chief revenue officer, your chief marketing officer, your whatever officer, and sit down and say, hey, what problem are you trying to solve? Then work with them to develop that business case. Work with their spreadsheet people. Work with their finance people. Work with all the people that are intimately running that line of business or that domain or that function.

Work with them to come up with that plan, and then tell their story on their behalf. To me, that's the best way to make it credible. Is that something you guys talk about as well, how to interface with business units and build these plans?

That's a great point. And that's part of it. There's a change management piece to all of this.

Yep.

And an organizational effectiveness piece to all of this. And some of our outstanding organizational behavior faculty are really good at that. And so that's programmed in, in the sense of: you've got the idea, maybe you've got a preliminary model. Then how do you get buy-in for it? And as you put it, getting your peers that are gonna be directly affected to comment on it, to collaborate with you on it, to help you message it, is the best practice.

Got it. Totally agree. So let's pivot a little bit to the legal and ethical conversations that are happening in your program, and that are happening in companies around the globe, related to AI. What are some of the things you're talking about within your program related to these ethical and legal challenges of AI? What are some of the things we should be thinking about?

Yeah. You know, I'll borrow a framework from a very nice Harvard Business Review article by Segalla and Rouziès from a couple of years ago. They talk about things like the provenance of the data and the purpose of the data. So where's the data come from?

And was it meant to be used the way you're planning to use it? So if you're training an AI model, if you're doing supervised learning with these images that are labeled and you're coming up with an application based on that: do you have the rights to use that data? And is that the purpose the data was collected for? And if the answer to either of those is no, then you run into at least an ethical challenge, if not a full-on legal problem.

The classic case from a couple of years back is Clearview AI. Right? They built this really robust facial recognition system by taking people's Facebook profile pictures.

Your pictures, your names, and now you're in the database. And probably no one meant for their name, image, and likeness to be used that way. So there have been lawsuits about that, and there's class action stuff happening. But it boils down to provenance, where did the data come from, and purpose, what was it collected for. Those are critical legal, regulatory, and ethical considerations. And again, my colleagues in the law and ethics group, you know, know this more deeply than I do. So they're doing that module and not me, but those are some of the issues that come up.

So I think that's really fascinating, because in the world of marketing, when you're, you know, gleaning data from people, asking permission to use it in a certain way is well known. Right? Marketers have been doing that a while. Can I use your name and your data to market to you? And "to build or improve our products" is always the loosey-goosey one that, you know, is probably a little more squishy.

But there's a whole other world out there of internal data that we generate within our organizations related to how our businesses run. Right? Like, who knows? We're out there gathering information on everything these days.

And that issue of intent, I think, is a really interesting one. I think we can kind of figure out provenance. We're reasonably good at that. But the what-was-it-built-for question?

Right? I suspect what we'll find is the data was often built to optimize a workflow or a process in a way that may not necessarily be that great for AI, and it may not have been built for AI from an intent perspective. Are these some of the conversations you're having? It sounds like they may be.

Well, there's the idea of: let's capture these things, let's see what they're useful for in ways we may not know. And in fact, one of the great potential use cases of AI is coming up with ideas that we would never have thought of. Finding patterns in the data that are hidden, that are correlated with the outcomes you want, that you would never have picked up on your own. And so this is why these legal and regulatory and ethical things are on sort of a continuum. Nothing is black and white.

Because even on the ethical side, maybe something shouldn't have been done that way, but if the benefits are just so huge, then maybe it nets out okay. So these are the key questions to think about.

When you get into personal information, you do have some more clear bright lines that you can and can’t cross about divulging information. In our world, it’s FERPA, which is a student information sort of thing. Healthcare, it’s HIPAA. You have these sorts of constraints that exist that are much, much clearer about what you can and can’t do.

So that gives some clarity in certain realms. But a lot of it, you don't have those equivalents, and you're just left with, well, let's try to apply principles. There aren't rules, but there are principles for making what we think is the right decision. And then you live with that and hope it was a good decision.

Well, something I'm seeing is, you know, a lot of state legislatures seem to be taking up the cause of enforcing, or trying to enforce, specific regulations on AI where those limitations already exist for human beings. Right? So, for example, don't build a bot that is discriminatory. Right?

If you're going to build a bot that is being used to assess resumes or potential candidates, don't build one that is biased or unethical or stereotypes, or all of the things that we wanna try to avoid. Something that I find a bit interesting is that that makes complete sense to me. But at the same time, those laws already exist for humans.

But now, it seems like we're writing them specifically for robots, AI slash algorithms, because those laws today only apply to humans. They don't pertain to algorithms.

So I find this to be a rather interesting field. I think keeping an eye on how things evolve from a state perspective within the US, in the absence of any sort of broader, federal-based mandate, some of these state-driven initiatives when it comes to legal frameworks for AI are really interesting. Are you seeing the same?

So one of the state-level things that California has is a transparency act, and it's got, like, five words in the name. I forget the full name.

But it's an interesting approach, because it doesn't actually really regulate anything. It just says you have to tell us what you did. It has to be open so that it can be inspected and then critiqued, and perhaps litigated. But if you are a frontier developer, quote unquote, and there's a definition of what that means, but it's basically who you would expect.

It's the OpenAIs and the Googles and so forth. Then you have to say something about what it is your models do. And that then allows for community comment, and maybe it's a way to strike a balance between over-regulating, you can and can't do this, and just saying, do whatever you want, but tell us what it was. And then we can go from there.

There was a second piece, but I... well, something that comes to mind is explainability.

Right? It seems to me like a good way, as a data and AI leader, to potentially harden yourself against future legislation that you can't necessarily anticipate would be explainability. Right? Can you explain how something works in a logical way that would pass muster? To me, that seems to be a good way to maybe future-proof our investment in some of these systems. What do you think?

Yes. And that actually reminded me of the other point, which is the laws that exist being adapted or expanded, maybe, to include AI agents, for example.

They may be okay, because what we've seen so far in some of the case law is that if the AI messed up, there's still some human behind it who's responsible, or the company is still liable. No matter what the AI said, it's your AI. So a good case is the Air Canada one, you know, the fake bereavement airfare reimbursement that didn't exist.

The AI made it up. And the airline's like, well, it wasn't us. The AI made it up. Well, no, it is you.

The tribunal ruled that pretty clearly. So the ability to have the human in the loop there, I think, helps. Well, the idea that these are assistants to us and not autonomous means that the existing frameworks still apply. We just have tools.

Like we have word processors, and we have fax machines, and we now have chatbots. So all that stuff is still there. So that helps.

That's interesting to me. Right? Particularly when we start to get even more reliant, right, from a self-driving car perspective or a robot perspective.

And in terms of liability, I mean, there is increasing case law about the liability of data officers for data quality and data accuracy. I wonder, if we keep peeling that onion, you know, how far things will go in terms of, is it this one person who coded something, or is it the corporation? I don't think we necessarily know the answer to that.

Yeah.

But these are the types of conversations it sounds like you're having in these classrooms, and they're absolutely necessary conversations to be thinking about.

And the liability and the explainability, they connect. You mentioned explainable models: if you understand how the model got to that decision, then you can hopefully eliminate some of the problems that would lead to liability later on. Like, what's actually happening here? And how can we address this?

And one of the roles for the human interacting with the AI is going to be this kind of systems analyst, fine-tuning sort of job, where you look for exceptions and figure out how to fix them. Just the fine-tuning of the models to make sure that they help us, versus creating unlimited liability.

Yeah. Well, there are two key aspects here. One is the underlying data. Right? Understanding that, understanding the provenance, understanding the lineage, understanding the quality of all the underlying data. Then there's the model behavior itself.

And I think I can even argue the data plus the model could be a unique combination, given that these systems are highly adaptive and can change over time based on new inputs. I guess that would still be the data. But as a data practitioner, I think you're gonna need to understand both of these, right, in a way that may be new to you, at least from the perspective of being more deeply plugged into the engineering side of things. Because if you've got people using these models, and then those models turn around and use your data, you're gonna wanna know how those things interact. The data plus the model equals some new behavior, because it's gonna affect you. Yeah. So let's transition into leadership briefly.

You know, obviously, that's a big part of your program. It's a big part of some of the things that you're teaching. And you've kind of already alluded to this based on some of the conversations we've had: storytelling, being able to build a business case. What are some of the leadership characteristics, the leadership behaviors, that you see as reasonably unique to this role? What should people be thinking about from an "I need to build my leadership skills" perspective?

Right. I think that a lot of classic stuff applies, but the one specific thing that I would call out goes back to leveraging that technical depth. Knowing the tools and how they work, being able to distinguish reality from hype, being able to distinguish the technology from the magic, is important, because it's gonna affect what projects you identify. It's gonna help you communicate down the chain with the people that are building this stuff and working with it.

You know, the risk of believing this stuff can do anything is that you just don't know what's feasible, and you need that to manage your team.

So that is a differentiating skill needed for this role.

And it's something that we've preached at Georgia Tech forever: this technical stuff is not going away. You need to know it at some level of sophistication to be able to use it effectively and to manage effectively.

Yeah. And something that I've referred to over the years, and this is not a new phrase at all, is being more T-shaped, like the letter T. Right? Being kind of broad and wide on all technologies, understanding how all these things work at a high level.

Do you need to be able to code Python as well as write Java? Probably not, but you need to know how those things work. You need to know how all the pieces fit together. And then, bonus, I think it is extremely helpful to be able to go deep on one subject matter area.

That's something that I've seen in my own career. People who are reasonably T-shaped, an inch deep and a mile wide when it comes to technology, seem to have an advantage here. Do you agree?

I think I would apply that to the C-suite in general. You don't want everybody to have the exact same profile. You need this person to have this depth, and that person to have that depth, in order to provide a holistic view of the organization. And that's why you've got different types of C's: to have that depth in finance, that depth in IT, that depth in marketing, what have you.

Got it. Well, one thing I'd like to ask before we wrap up: you were in business. I believe you were a consultant at Arthur Andersen. Correct?

Yes. That's right.

Yeah. And you made a pivot. You had a consulting career going, and you made a pivot into academia. Tell us a little bit about your journey, maybe to get others interested in this. I could see more of us getting into academia. I've certainly thought about that. Tell us about your journey, and what's been the best part of it?

You know, I'm glad I did both.

I got a good background in consulting. I understand how that works, and I worked with a lot of fascinating clients, doing interesting projects and working with great people. The thing about academia, at least on the research side, is the ability and the incentive to go super deep.

So you have a topic that interests you, and you can spend years working on that topic, really analyzing how it came to be and what all the contextual factors are. That's pretty cool if you have that kind of intellectual itch, so to speak. And then the student side of it is very gratifying: working with students, helping them think through problems, seeing them succeed. It's very fun that way.

And to your point, this is empirically true: the number of academic jobs has increased over time as people have more time to pursue higher education. Instead of spending your whole life working in the field, you go to college, maybe get a master's degree. That trend will keep going, so there's going to be additional need for faculty, and that's a good thing. It's progress for the human race: we become better educated, we think about things more deeply, and we solve harder problems.

Got it. Doctor Eric Overby, thank you so much for your time today.

What's a good way for somebody to get more information on this program offered through the Scheller College of Business at Georgia Tech?

Right. So it's called the CAIDO Chief AI and Data Officer Program, and a quick search will surface it.

C-A-T-O?

C-A-I-D-O.

C-A-I. CAIDO. Yeah.

Chief AI and Data Officer.

CAIDO. Kato. Not like Kato from, who's that? Kato.

Anyway Right.

Inspector Clouseau. That's correct. I forgot. Anyway, thank you. I've just fallen down a rabbit hole there.

Thank you so much for joining. I hope people check out your program, and check out other programs too. There are lots of ways for us to grow professionally: learn new things, learn to be better leaders, learn technologies. There are plenty of opportunities out there.

I would invite all of you to check it out. Lots of different venues, whether it's through Georgia Tech, through DATAVERSITY, or through a formal degree program. All of these are extremely good things, and I would recommend all of them.

So with that, happy holidays once again to all of our listeners. Please take a moment to subscribe to the CDO Matters podcast if you haven’t done that already. Check me out on LinkedIn. I do a live event every third Friday of the month, CDO Matters live.

It's like an ask-me-anything. You can hop online and drop me questions about data strategy, data management, how to become a CDO. You name it, nothing is off-limits.

I hope you join this growing community of CDOs and people who want to be CDOs. With that, Eric, again, thank you. Really appreciate your time.

Thank you, Malcolm. Great talking with you. I appreciate the opportunity.

Likewise. Alright. That's it for this episode of CDO Matters. We will see you at another episode sometime very soon. Bye for now.

ABOUT THE SHOW

How can today’s Chief Data Officers help their organizations become more data-driven? Join former Gartner analyst Malcolm Hawker as he interviews thought leaders on all things data management, ranging from data fabrics to blockchain and more, and learns why they matter to today’s CDOs. If you want to dig deep into the CDO Matters that are top-of-mind for today’s modern data leaders, this show is for you.

Malcolm Hawker

Malcolm Hawker is an experienced thought leader in data management and governance and has consulted on thousands of software implementations in his years as a Gartner analyst, architect at Dun & Bradstreet and more. Now as an evangelist for helping companies become truly data-driven, he’s here to help CDOs understand how data can be a competitive advantage.