CDO MATTERS WITH MALCOLM HAWKER

CDO Matters Ep. 30 | Bill Schmarzo On AI & Data Literacy

August 10, 2023

Episode Overview:

In this 30th episode of the CDO Matters Podcast, Malcolm has an inspired exchange with the ‘Dean of Big Data,’ Bill Schmarzo. Bill is an accomplished author, teacher, and C-level executive with a depth of knowledge in building and running large-scale data operations in service of some of the biggest companies in the world.

In the discussion of his most recent book, “AI & Data Literacy,” Bill shares his insights into the six key responsibilities that all consumers of AI must fulfill to optimize their relationship with this transformative technology. In becoming more AI literate, we increase the likelihood that the utility of AI will be maximized for the good of society and not just the corporate bottom line. A critical component of this includes quantifying the concept of ‘ethics,’ which Bill strongly advocates is not only possible but absolutely necessary.

Beyond the topic of AI, Bill and Malcolm touch on some of the bigger challenges CDOs face daily, including the importance of connecting data investments to business outcomes and how to potentially solve the ‘value’ equation inherent to every data estate. For CDOs seeking inspiration about the future of data management in an AI-enabled future, look no further than this episode of CDO Matters.

Episode Links & Resources:

Hi. I’m Malcolm Hawker, and this is the CDO Matters Podcast.

The show where I dig deep into the strategic insights, best practices, and practical recommendations that modern data leaders need to help their organizations become truly data-driven.

Tune in for thought-provoking discussions with data, IT, and business leaders to learn about the CDO matters that are top of mind for today’s chief data officers.

Good morning, evening, or afternoon, whatever time it is where you are. I am Malcolm Hawker, the host of the CDO Matters Podcast. Today, I’m thrilled to be joined by the Dean of Big Data, Bill Schmarzo.

Bill, thank you for joining. It’s a pleasure to have you. Thanks, Malcolm. I’ve been very much looking forward to this conversation.

We’ve been conversing on LinkedIn across a number of different topics for what seems like ages. And so I am thrilled that I finally get a chance to do something live, you know, in person.

Me too. Your posts are always provocative, and you’re one of the people where, when I read something, I want to invest time to have a cogent response.

I don’t know why, but the nature of your posts is such that the level of contribution I want to make, it feels like I need to invest in it, because you’ve got a community on LinkedIn

of really smart people. I’m blown away by the quality of the interaction and the depth of the insights on pretty much every one of your posts. So when I’m reading your stuff and when I’m responding, it’s like, okay, I better pick myself up, dust myself off, put on some makeup; I’ve got to look good for this post, because I don’t want to look like a big dummy when you’re talking to some of the smartest people on the planet. So I appreciate the compliment and the comment, because it certainly works both ways. I was excited to have you as well.

You are a prolific author. I’m going to plug you a little bit here for our audience.

Just going back through your library, it is just amazing how much you’ve written over the last decade. Several books. I’ll just read off a few of them: a book called “Big Data: Understanding How Data Powers Big Business,” “The Big Data MBA,”

“The Economics of Data, Analytics, and Digital Transformation,” that’s a great one, “The Art of Thinking Like a Data Scientist,” and most recently, the one we’re going to talk a little bit about today, “AI and Data Literacy.” Now, I have not read the book.

I did read the preface of the book, so I have a decent understanding of the tone and some of the things you’re trying to touch on, and I want to drill down into that first. It is available for pre-order on Amazon right now. So the latest book by Bill is out there, and you can go grab it on Amazon. I just checked that.

In addition to being a prolific writer, Bill’s professional career is truly remarkable.

I looked through your resume and I was like, holy cow, I didn’t even know all of this stuff before we talked. So you were the CDO at EMC.

Was that during the acquisition by Dell?

That was right before. I joined EMC about that same time. Actually, the same day they announced the Greenplum acquisition.

And so EMC was making a big move. You know, they were a very large, probably the world’s leading storage company, and they were making a move into big data.

I was brought on board as the CDO of the big data consulting practice, because we knew the big data conversation was about more than just storage. And credit to EMC for seeing that. We created a very robust, probably two-hundred-person consulting group that focused on big data and how you get value from it. A wonderful experience; EMC was a wonderful company.

It got acquired by Dell. I left Dell for Hitachi Vantara for a few years, where I was the chief innovation officer, but I came back to Dell because I was intrigued by their vision with respect to what they’re trying to do with data management and such. We can spend more time talking about that later, but, yes.

Well, so during your time at Hitachi Vantara, you would have overlapped briefly with Renee Lahti, yes? Yes, quite a bit. Well, she was my most recent guest. We just published an episode of the podcast last week, which would have been the first week of July; we’re recording this in July, and it will probably publish in August. My conversation with Renee was fantastic.

And again, holy cow, talk about a smart person, someone who knows it from a CIO perspective. And I saw that on the resume and was like, oh, wow, serendipity that you and Renee worked together. Well, she’s probably my best student. So there’s a story here. Okay.

We were at Hitachi, where as the chief innovation officer I had three distinct groups. I had a data science team, most of whom were people I had brought with me when I left EMC.

I had a co-design team that drove value engineering engagements, which also kind of came with me from EMC. And then I had a design thinking team. And we used all that to try to transform how we were engaging with our customers. We had this value engineering approach.

And so I was at sales kickoff, trying to educate our sales teams about how we needed to have a different conversation with our customers: not just about storing data, but about how you create value from data. I walked through the methodology we were using, and when I got done, I hadn’t even gotten off the stage. I was walking off the stage, and Renee was kind of in the front row.

It was a big, big audience, a mixed crowd, a lot of salespeople and marketing people, but she comes running up, literally onto the stage, and grabs me, and she’s shaking me. She goes, you’ve got to talk to me. We’ve got to work together. And she was so fired up about what she had done. I don’t mean to go off on a tangent here, but she had made the classic mistake.

She brought in, you know, a bunch of analysts. She brought in a bunch of consultants. And they said, build a data lake and load all your data into it. And she did that.

She built a giant data lake, put thirty-five data sets in there, and then opened the doors for the world to come be fruitful and make money.

And all she got were crickets.

Right? And so we took a different approach, the value engineering approach we had developed. We picked a use case.

We found a friendly in Jonathan Martin, who was the head of marketing, and we built a use case that delivered twenty-two million dollars in additional revenue the first year it was out. That first year paid for the big giant data lake that was sitting there, you know, doing nothing. After we did that, she became a believer, and I’ve seen her run a number of different projects since leaving Hitachi, where she’s helped companies really unleash the value of their data.

Well, so she will highlight that, the data lake field of dreams.

She’ll highlight it as one of her greatest professional failures, a.k.a. learning opportunities.

Because she’s a big believer in growth through missteps, as am I; I got to where I am by failing multiple times.

But, yeah, I really enjoyed hearing that story from her.

And now just some more reinforcement from you, because that’s something I talked about all the time at Gartner: avoid the field of dreams. Right? Don’t build stuff just because.

Honestly, you know, that’s kind of on the tombstone of big data, and I’d love your response to this. Right? On the tombstone of big data, to the degree that there is one, I think you could make a case that it should say something to the effect of: it was the field of dreams. Right? Because I know so many companies that invested so much in Hadoop to answer a bunch of questions that nobody was asking.

Amen. Okay, you’re not going to get an argument from me. Okay. No. There will be a tombstone for it.

And I think you’re exactly right. The field of dreams thing, build it and they will get value from it, did not work. It was a colossal failure.

And we could see this coming, though. We saw it with data warehouses.

We saw the same trend with data warehouses. I have had in my life, Malcolm, a lot of failures, but I’ve also been very lucky. A lot of Forrest Gump moments: right place, right time. Right? Not because I’m smart or good-looking.

Yeah. Sometimes in life, you just get lucky. And I was very fortunate to spend twenty-some years of my life working very much in the BI space, working closely with Ralph Kimball.

And Margy Ross. When I was at Procter and Gamble, working with them in the late eighties, we built what was probably one of the very first data warehouses, in our collaboration with Walmart on trailing point-of-sale data. And it became very clear that if you just threw data into a repository and didn’t understand what decisions you were trying to drive from it, or how the organization created value, your data warehouse was going to fail.

And we saw data warehouses fail massively. So it’s no surprise that this idea of just gathering data, throwing it into one place, and expecting to be successful is flawed. And, not to get myself or you into too much trouble, but we see the same thing now with, you know, data lakehouses and data fabrics and data meshes. They’re all so focused on, well, we’ve got to decentralize governance.

We’ve got to do this. Right? And I guess it’s all really good.

But how do you create value?

What is the value?

You need to start the data valuation conversation. You know, creating value from data doesn’t start with data; it starts with value. And we don’t do that. It drives me freaking nuts.

I just made a post about this yesterday that I know you saw. In my three years as a Gartner analyst, I was given the amazing, incredible honor of being able to talk to fifteen hundred senior data leaders. Sometimes CDOs, sometimes CIOs, sometimes VPs of data and analytics, sometimes senior managers; it doesn’t matter. Talking about their challenges.

And easily, easily seventy percent of the time, my estimate would be, you could trace those challenges back to an inability or a lack of desire. I don’t know if it’s ability or desire; I’d like to talk about that. And we will get to the book, by the way.

Whether it’s a lack of ability or a lack of desire or whatever it is to actually understand the business value of the things you were investing in, the data and analytics solutions you were providing, the infrastructure you were turning up: an inability to quantify that value had all of these downstream impacts. The list of downstream impacts is as long as my arm: inability to prioritize, inability to control scope, inability to secure stakeholder funding, inability to protect your budgets year over year. I could keep going. And they always traced back to that. And one of the reasons I left Gartner, Bill, was because for three years I was telling people, you’ve got to build business cases, you’ve got to build business cases, you’ve got to build business cases, and they didn’t.

And I came to the conclusion that, okay, it’s not the message. It can’t be the message, because the message is sound. Maybe it’s the messenger, and I’m open to that. Maybe it’s me. Or maybe it was the medium.

I landed kind of on the medium and said, okay, well, you know what? Maybe at Gartner it’s just a checkbox, and maybe there’s a better way for me to impart this message, because the medium I was using, I think, could be part of the problem. Which leads me to podcasts, and to LinkedIn, and to these other areas. But what have we got to do to get over this hump? So, you raised a really interesting point.

Is the problem ability or desire? I’m going to add a third one. I think the real problem is comfort level. I think these are conversations that these folks have not been trained to have, and therefore they’re uncomfortable having them.

It’s like the problem we see with data scientists. You know, in all the LinkedIn conversations, there’s a handful of people who really blast data scientists because of the tyranny of precision, I think the term is. They’re so focused on building the right models that they don’t even know what they’re building the model for.

And so I think part of it is comfort level. It’s interesting, because the reason behind this book (this one is “The Economics of Data, Analytics, and Digital Transformation”), the frustration behind this book... I cranked it out in less than six months. I just blasted.

Right? And I was so pissed at the beginning of the pandemic about what we were doing to make decisions around COVID and lockdowns.

And, you know, the decisions we were making were so awful and so wrong. We made decisions based on averages, and some of the averages weren’t even based on facts. It was just a travesty of errors. You mean we didn’t follow the science?

Yeah. We didn’t. We ignored the science. And here’s the reason why: we need to stop treating data as a technology thing.

It scares off all the business people; it’s just another tech thing. We need to talk about it as an economic asset. And so that was the motivation. Hey, if you’re a business stakeholder, okay, you don’t want to talk technology.

We don’t want to talk large language models or, you know, generative AI. We understand that. Do you want to talk economics?

Because fundamentally, what is economics? Economics is the study of the creation and distribution of wealth, or value. It’s a value conversation.

And so I wrote that book because if you want to change the game, you have to change the frame, and I wanted to change the frame of the conversation. Because you’re right, I don’t think it’s ability or desire. I think it’s comfort level. And if I can turn this into a conversation that they feel like they can drive... You know, most business stakeholders don’t feel like they can drive a technology conversation.

They’re not prepared for it. You know, somebody might throw out something about large language models and transformers, and they’re like, what am I going to do now? But they can hold a conversation on economics, and talk about how we’re building assets that can be reused over and over again and reapplied across the organization to drive quantifiable value. And so, getting back to your point, I don’t think it’s the messenger.

I don’t think it’s the message.

I don’t think it’s desire. And maybe ability relates to their comfort level. If we can change this conversation, let’s stop making it about data and technology, and let’s make it about economics and value creation.

I think I agree. And I think there’s something there also about mindset.

I’m starting to think that mindset plays a bigger role in our relationship, meaning data provisioners, data leaders. I’m starting to think mindset plays a bigger role than we may otherwise acknowledge.

In my experience at Gartner, I saw what could only be described as, and this may not be the right word, but it’s a starting point,

a lot of animosity

towards customers. And that would never fly if your customer were an external paying customer, somebody buying your stuff. A level of animosity that would never fly with real customers, where it’s like, oh, well, data quality: you can lead a horse to water.

Oh, data literacy? They just don’t get it. Right? And this could be a good segue into the book, by the way. So I’m going to explore a little bit more of that mindset. I think what you just said, about having the conversation and being comfortable with a conversation around the economics of the business, cost-benefit, building business cases, there’s certainly something there.

You could also make a case about incentives, I think. Oh my gosh, yes. Yes. What do you think?

So, one of the things that John Smale, who was a CEO at Procter and Gamble back in the eighties, used to say.

And this was: you are what you measure, and you measure what you reward. Which was his way of saying, we are as a company what we pay our people to do. So if we say we’re all about environmental issues and diversity and social good and all this stuff, and we pay our executives based on quarterly profits, your message is a lie. Your charter is a lie.

Right? And that, by the way, has huge ramifications in the AI space.

Huge. We can get into the AI utility function and how, if you’re optimizing on short-term lagging indicators, your AI models are just going to get wiped out fast. Yet that’s what we do as a society: we optimize the pay of our executives, and some of our comp plans, on short-term lagging indicators.

Right? And it’s disastrous in the long term. So you talked about this mindset.

I think that’s key. That’s key. My experience in working with companies is that the companies most successful at getting value out of data, for the most part, are not the large organizations; they’re the medium-sized organizations where you have a CEO or some leader who says, we’re in it for the long run.

I’m not here to build a business and flip it. I’m here to build a community. I’m here to build, you know, a living entity that supports the broader community and my customers and my employees, and that has a broad view. And when you have that kind of mindset, that broader economic perspective, you’ve got a much better chance of leveraging data and analytics and AI and such to really impact the business, because now you’re thinking more about the second and third and fourth level ramifications that fall out in the process.

Well, that’s a potential indictment, not a full and complete indictment, but an indictment of public companies in general and the short-term, quarter-by-quarter profit motive, because I agree.

The longer-term perspective is absolutely, positively needed here. And what you express, at least from a corporate perspective, would kind of align to, you know, privately held companies, where the long-term perspective is the guiding principle. It certainly seems like we could absolutely use a little bit more of that. But there were multiple tie-backs into the world of AI, so let’s go there, because we could literally talk for hours about how to empower CDOs and data leaders

to be more value-centric. Because I think that’s key; maybe that’s a separate conversation track. But we did just touch on something that struck me in rereading the preface to your book again this morning, in advance of our conversation. At the beginning, you kind of talk about the idea that AI is being built for the betterment of humankind.

Do you feel that way? Or is that something that we as a society need to feel in order to progress, in order to get past some of the FUD that is kind of paralyzing a lot of us out there today?

So AI can certainly be built for the benefit of humankind.

AI to date... and I use the term “AI to date” because there are a lot of things that get thrown into the AI category that really aren’t AI, you know, regression analysis, association rules; it’s just not AI... but a lot of those tools, as built, suffer from confirmation bias. They suffer from all kinds of problems.

So the way we’ve done things to date certainly hasn’t benefited all of society. In fact, some of it has been very, very dangerous. Cathy O’Neil, I think her name is, wrote a book called “Weapons of Math Destruction.” Yeah.

It’s a marvelous book. I love the title. I wish I could have stolen it. What a great title.

Right? Anyway, there’s every reason to believe that AI can certainly be used to benefit everybody.

But for it to benefit everybody, everybody needs to be involved.

That’s the challenge, and it isn’t just sitting on the sidelines and hoping the government or some institution over here or some organization over there does what’s right for me. No.

The full title of the book is “AI and Data Literacy: Empowering Citizens of Data Science,” and the most important words are “empowering” and “citizens,” because being a citizen, citizenship, is a proactive effort. Right? To be a citizen, you have to get involved. You have to vote.

You have to make sure your voice is heard. And my favorite chapter in the book, I think, is chapter nine, at the very end of chapter nine. I was just so fired up about what we mean by empowerment, because chapter nine is all about empowerment.

We go through everything, and here’s what empowerment is: I literally wrote down, in about ten minutes, five or six pages on what empowerment is, and you’ll see it at the end of that chapter. Because as a citizen, you must be empowered, and you must hold yourself accountable for being empowered. You need to step up.

So if we want AI to benefit all of us, all of us need to understand how it works and how we make certain our voices are heard. That we become, you know, like Tom Hanks in the movie “Big,” and we raise our hands and say, I don’t get it. And not be afraid to be wrong, not be embarrassed or shy about it, because this is no time to be shy.

So I really believe that AI has this great potential, but it’s going to require everybody to become a citizen and to make sure they’re involved, so that the AI models, and in particular the AI utility function, which is the beating heart of the AI model, have a diverse set of variables and metrics that represent the good we’re trying to achieve across all of society and all of humankind.

I love the idea of citizenship.

Right? Because I think a lot of people assume citizenship is an endowment, when in reality it’s a responsibility.

Well said. And it is about being informed. It’s about being active. It’s about being part of a community. It’s not just something you are given as a birthright; it’s something you earn by being a responsible resident somewhere. And as a responsible resident of AI land, what do I need to do? What do I need to do to become more empowered and, you know, a productive citizen of AI land?

Well, you’re really leading into the book here, Malcolm, and I appreciate that. What I felt was that there are six things we need everybody to be comfortable with. I use that term, comfortable. Right?

So I can feel like I have enough insight into these variables that I can be an intelligent citizen in AI land. So, number one, you need to understand what data is: how your data is being collected, especially in the realm of big data, all the ways data is being collected about you, and all the ways that data is being used to influence your decisions, your actions, and your beliefs, both good and bad. We also need to be very aware, in that same area of data, that a lot of the data privacy laws, GDPR, all these things, the privacy statements written on websites: they’re not written to protect you.

They’re written to protect companies and organizations. So there needs to be, first off, just an awareness of how everything you do is captured. You walk around with your smartphone constantly transmitting to the world where you are, what you’re doing, what you like, all your preferences, what you eat, what car you drive, who you date. All this stuff is there. So data is the foundation.

Number two, we need to talk about analytics. We need a very introductory understanding of analytics, and this is where I’ve got one chapter that talks about traditional analytics, regression and, you know, association rules and clustering, and how that’s been used historically. But I have a whole chapter dedicated to AI. And the reason why is what’s happened in the world of analytics.

Regression analysis and clustering and all those kinds of traditional supervised and unsupervised machine learning techniques were optimization techniques. They all sought to optimize a problem: you know, inventory or marketing campaigns or customer retention.

AI, especially as we see it being used for reinforcement learning, is a learning technology.

It doesn’t just optimize; it learns and continuously adapts.

And it does this with minimal human intervention, billions if not trillions of times faster than humans. This opens up a whole new way to think. This is not your father’s analytics.

This is something new. And so we need to be aware that AI is a learning vehicle, and how it learns is dictated by the variables and metrics around which we tell it what to learn. So we, as citizens of data science, are empowered to make certain that, again, the AI utility function represents the variables and metrics that we think, holistically, are important for society. So not just financial, not just operational, not just customer, but employee, stakeholder, ecosystem, society, diversity, environmental, and even ethical.

Right? Those variables need to find a way inside there. So the AI utility function isn’t going to have fifteen or twenty metrics. It’s going to have hundreds of metrics.
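To make that idea concrete, here is a minimal sketch of a multi-metric utility function of the kind Bill describes; the metric names, weights, and scores below are illustrative assumptions, not taken from the book:

```python
# Illustrative sketch: scoring a candidate AI action across many
# weighted metrics instead of a single financial KPI.
# All metric names, weights, and scores here are hypothetical.

def utility(metric_scores: dict, weights: dict) -> float:
    """Weighted sum of normalized metric scores (each in [0, 1])."""
    return sum(weights[name] * metric_scores.get(name, 0.0) for name in weights)

weights = {
    "financial": 0.25,
    "customer": 0.20,
    "employee": 0.15,
    "societal": 0.15,
    "environmental": 0.15,
    "ethical": 0.10,
}

action = {
    "financial": 0.9,   # strong short-term profit...
    "customer": 0.6,
    "employee": 0.4,
    "societal": 0.3,    # ...but weaker broader outcomes
    "environmental": 0.5,
    "ethical": 0.7,
}

print(round(utility(action, weights), 3))  # 0.595
```

The point of the sketch is the shape, not the numbers: a real utility function would carry hundreds of such terms, and the weighting of the non-financial ones is exactly the citizenship conversation Bill is arguing for.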

And we can talk about how it resolves conflicts among those, etcetera, etcetera. But, again, it’s understanding that. Then the next section: if we have data and we have analytics, the next thing is, how do we make more effective decisions? Right.

How do we make informed decisions? How do we improve the odds, whether it’s for us individually or for us as a society? Right? Take the reason why, you know, you should wear a seat belt, which, by the way, eleven percent of people in America don’t. Eleven percent.

Even though the facts show that wearing a seat belt doubles the probability that you will survive a car accident, and improves by seventy-five percent the likelihood that you won’t have a serious injury in one. Right?

Eleven percent still don’t see the facts. Right? And so we need to understand how to help people leverage data and analytics in the situation they’re in, given all the biases in their minds (confirmation bias, recency bias, organizational biases, all these sorts of things), to make informed decisions that help them as well as society.
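As a back-of-the-envelope illustration of the seat-belt figures above: the baseline probabilities below are invented for illustration, and only the two multipliers ("doubles" and "seventy-five percent better") come from the conversation:

```python
# Illustrative arithmetic for the seat-belt claims above.
# Baseline probabilities are hypothetical; the multipliers
# (2x survival, +75% chance of avoiding serious injury) are
# the figures quoted in the conversation.

p_survive_unbelted = 0.40  # hypothetical baseline
p_survive_belted = min(1.0, 2.0 * p_survive_unbelted)  # "doubles"

p_no_serious_injury_unbelted = 0.50  # hypothetical baseline
p_no_serious_injury_belted = min(1.0, 1.75 * p_no_serious_injury_unbelted)  # "+75%"

print(p_survive_belted)            # 0.8
print(p_no_serious_injury_belted)  # 0.875
```

Even with made-up baselines, the multipliers make the decision asymmetry obvious, which is the kind of simple statistical reasoning the book's "predictions" section is arguing everyone should be able to do.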

Number four, we’ve got to talk about predictions. You’ve got to understand some real basics about how statistics work, because understanding statistics helps us make the more informed decisions we’re using analytics for. The fifth one is about value engineering.

You know, how do you create value from data? How do I use analytics to create value? And if you want everybody to participate, they need to see a direct line from getting involved in this AI land, this AI world, to creating value for themselves and for whoever they’re working for. And, of course, the underlying component to all of this is ethics.

We need to be able to talk about ethics, the golden rule, and how that impacts things. We need to find a way to integrate ethics, to codify ethics and put it into a utility function, which can be done, by the way. But it takes everybody understanding what we mean by ethics. How do we define ethical and responsible behaviors?

So those are the six parts of this book. There are a couple of bonus chapters. There’s a chapter about empowerment that I think is by far my favorite chapter; it’s chapter nine.

And the last chapter is actually a chapter on ChatGPT and generative AI. I explain what it is and provide a little primer. And then I use ChatGPT and generative AI as a vehicle to say, okay,

let’s see how applying these six AI and data literacy components factors into how we use ChatGPT.

So it became a way for me to test the framework against a real-world situation, one that, by the way, showed up about halfway through writing the book. It was like, oh, there’s a new technology out there, and it’s taking over the world; I’d better find a way to make sure the book is relevant with respect to how we’re going to use it.

Anyway, you don’t need to buy the book now; I’ve got all the answers right there. Isn’t it amazing that twice in our lives there’s been this transformative, game-changing-level stuff: the internet, and now this?

To me, I find that amazing. We live in amazing times. But backing up: what I heard you just describe, yes, it was a summary of your book, but what I heard you describe could arguably be a solid framework for an AI governance model, or an AI governance operating model.

So there’s a lot here to talk about at a societal level, and I could talk all day, at kind of a societal level, about whether government is even prepared to be leading these discussions or driving these policies. But let’s take it down a level, into an individual corporation.

Okay? Because this podcast is for CDOs. And I’m a CDO, and I’m trying to wrestle with all this stuff and figure things out. What I just heard you describe, these six things, sounds to me like a solid framework, a solid lens, for me as a CDO to start asking some of these questions. And I’m not prepared to answer these questions. Right? Do I have a governance framework in place that would even allow these conversations to happen?

Right? And if I’m not prepared across these six vectors that you describe in your book, it sounds like I would probably want to be if I were a CDO making these decisions and driving these policies for a corporation. What do you think? Totally agree.

In fact, at the end of one of the chapters, I think chapter one, which introduces the whole framework, I created a spider chart, a kind of a radar chart, across these six vectors. I like that term vector, by the way. Where do you sit today across these vectors? Now, I do it from an individual perspective, but it can be very easy for a CDO to say, across our organization, where do we sit? And then I reintroduce that thing at the end of the book and say, you know, we’ve gone through the book.

How well have you done? How well has the book done in trying to move you across each of these vectors in this radar or spider chart? So I do think it works very well from a CDO perspective, and surprisingly from a board of directors perspective, knowing what the board of directors has to know, what kind of questions they should be asking of their senior leadership.
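Bill’s before-and-after self-assessment could be sketched as a simple scoring exercise. In this minimal Python sketch, the vector labels and 0–5 scores are illustrative placeholders, not the book’s exact framework:

```python
# Hypothetical self-assessment across six AI and data literacy "vectors".
# The labels and 0-5 scores are illustrative only, not the book's framework.
def literacy_gaps(before, after):
    """Per-vector improvement (after minus before) on a 0-5 scale."""
    return {vector: after[vector] - before[vector] for vector in before}

before = {"awareness": 2, "statistics": 1, "analytics": 2,
          "value_engineering": 1, "ethics": 2, "decision_making": 3}
after = {"awareness": 4, "statistics": 3, "analytics": 4,
         "value_engineering": 3, "ethics": 4, "decision_making": 4}

gaps = literacy_gaps(before, after)
# These same dicts could feed a matplotlib polar plot to draw the radar chart.
```

The same before/after scores, plotted on a polar axis, give the radar (spider) chart Bill describes, whether for one individual or rolled up across an organization.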

And I think this gives you the six vectors around which you need to explore. But the book, while a CDO can get benefit from it, it’s not written for CDOs. I really wanted to write something for everybody.

Yeah. And this was part of it. I took the publisher Packt, P-A-C-K-T, who did my previous book, and I went with them because I had two demands.

One, I wanted the cover to look like it looks, and it’s not a conventional cover for most textbooks.

It’s very simple. Two-tone colors, the dark blue, light blue, the title, my name, and that’s about it. Because I wanted the book to be simple looking, because this is a simple conversation.

Once we have people’s awareness, the conversations along those six vectors can be very simple. So I wanted the cover of the book to look very simple, to not be imposing. The other thing is, I wanted a price point where everybody can get this.

So we negotiated for this. The ebook price is, you know, nine dollars. I think even the hard copy price is under twenty bucks, because I didn’t want... Nineteen ninety-nine.

Yep. Right? Okay. Good. Good. Because I didn’t want... so that was part of my negotiations with the publisher.

I said, now, I’m not doing this to make money. I mean, maybe I’ll get a few more toys for my background here from the royalties. This is about giving back. This is about saying thank you, God, for everything that you have given me, all these Forrest Gump moments, all these great people who have popped up in my life. It’s on me to share that.

And so this book is written for everybody. That was my goal. It’s probably not gonna be relevant for... I mean, I’m not sure my mother-in-law could understand all of it. But maybe there’s a couple of chapters where she would go, well, I get that.

I understand that. But I want everybody to feel comfortable and empowered to have these conversations, so that we make sure, as a society, that AI is working for everybody in AI land.

So one of the things that I’m intrigued by, and this was touched on again in the preface of the book, is this idea that AI will do what we tell it to do.

Is that going to be a true statement in a world of AGI, artificial general intelligence? Because this is the Skynet scenario for most people. Right? This is when the machines take over the world. We can have a discussion about what artificial general intelligence means, but to me, it means novel problem solving. It means solving a problem that has never been solved before, going beyond what was learned simply out of a training set. So first of all, do you agree with that really super-high-level definition?

So I gotta be honest, Malcolm. I don’t know. Okay. I don’t know. All I know today is that the AI model, even for generative AI, is driven by the AI utility function. And we want to make certain that we take an economics approach. We look for a wide variety of leading indicators.

Some of them conflict.

We should have conflicting variables in there, because the AI models need to make trade-off decisions in the same way that we humans have to make trade-off decisions. It’s not optimized once; it’s learning that, based on the situation, this is the right decision. Now maybe a little bit later, it’s not the right decision. Right?

Think about an autonomous car. Right? Oh, it’s a good time to pass this car now. Wait.

Now it’s not a good time to pass the car. So it’s constantly making those trade-off decisions based on all the variables it’s looking at. And so, from a simple perspective of today’s AI, AI is only going to take whatever variables you give it. It’s gonna look at what your intentions are, what your desired outcomes are, what decisions you’re trying to drive, and it’s going to take those variables and change their weights. Sometimes it’s gonna weight this variable higher.

So it’s gonna do that continuously, and change based on how the environment changes and based on what your intentions are. That’s how it works today. How it works tomorrow?
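What Bill describes, a utility function over conflicting variables whose outcome shifts as the environment changes, can be sketched in a few lines. This is a toy illustration of his passing-the-car example; the variable names, weights, and numbers are invented for the sketch, not anything from the book:

```python
# Toy AI utility function for the autonomous-car passing decision.
# Variables, weights, and values are invented for illustration only.
def utility(time_saved, collision_risk, weights):
    """Trade a benefit off against a conflicting cost; higher is better."""
    return weights["time"] * time_saved - weights["risk"] * collision_risk

weights = {"time": 1.0, "risk": 2.0}  # risk is penalized more heavily

# Clear road: low collision risk, so passing scores positively.
clear = utility(time_saved=0.8, collision_risk=0.1, weights=weights)
# Moments later, oncoming traffic: same intent, but the environment changed.
oncoming = utility(time_saved=0.8, collision_risk=0.7, weights=weights)

should_pass_clear = clear > 0       # good time to pass
should_pass_oncoming = oncoming > 0 # now it's not
```

The same intent (save time) yields opposite decisions purely because one input variable changed, which is the continuous re-weighing of trade-offs Bill is pointing at.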

I have no idea. I don’t know. Okay.

Yeah. I mean, I don’t know if we’ll build something that’ll say, no, I’m gonna start using this variable that says, you know, whack off human heads.

And we do get Terminators, but I just don’t think that’s where it’s gonna go. I mean, one thing about these AI models, at least today: they don’t collaborate. And a lot of the fears we have around deep fakes? We’re already building AI products that can flag deep fakes. We’re gonna have, just like the old Mad magazine comic, you know, Spy vs. Spy.

We’re gonna have AI tools monitoring AI that tell you. And think about the benefits if we’d used something like that for social media. Social media was totally untethered, unregulated, and anybody could post any sort of BS they wanted out there, and there’d be somebody who’d believe what you said. What if you had AI in there monitoring that thing, saying, you know, the probability of that being true is this kind of percentage?

So that we had some sort of intelligence helping us to make informed decisions by making sure we had vetted, quality data.

Well, you you just touched on something. You and I were both around for the birth of the internet.

You were at Yahoo and I was at AOL. So arguably, we were both doing our part to lay the series of plastic tubes by which all of this amazing information is racing around in a superhighway, which is what we called it back in the day. But one could argue that the internet became what it became because of a lack of regulation, or at the very least because of how it was regulated, in that carriers like Yahoo and AOL were in essence common carriers. They weren’t responsible for the stuff that was put on the wire.

Which allowed them, you know, to get to the point where they are today.

My question really is kind of around regulation.

I think you could make a case. Well, you could.

I wouldn’t, but I think one could make a case that AI could be viewed as a public good, or should potentially be regulated as a public good, if it has all of this amazing opportunity to create for so many.

Where do you see regulation going here?

And I know we’re crystal-balling this, which is always dangerous. But when you’ve got folks like, you know, Elon Musk saying it should be regulated, I mean, that’s something we need to listen to. What do you think? Yeah. So first off, the internet-versus-AI comparison, and we were very fortunate to be there for both.

The difference is, in the internet, we were building connectivity and pipes. Right? Yep.

In AI, we’re building agents that can continue to learn and adapt with minimal, if any, human intervention. So we’ve got a different sort of, you know, techno-based creature. Not a Homo sapiens, a techno sapien. They can learn and adapt. And that’s very different from what we’ve faced before.

You know, bad things happened on the internet too. Technology is a double-edged sword, but the thing that scares me most, which is also its most powerful potential, is the fact that it can learn and adapt. Right?

Think about how your GPS system works and how marvelous it is to get you from here to there. You know, any sort of traffic accident comes up, and it’s learning and adapting and changing things and such.

The need to ensure that we are using AI ethically is more related to what we learned about the unregulated use of social media than it is about the unregulated use of the internet.

Social media had all kinds of undesirable, unintended consequences. Without getting into any sort of political bent here, I personally believe that it’s driven a divide between brothers and friends.

It’s very easy in a faceless encounter to radicalize your statements to other people. I’ve done it. I’ve said things on social media to a person that I would never say to them in person. And it’s cost me friends. And that’s why I’ve stopped using, for example, Facebook, because I found it was like an evil devil tempting me to reply to somebody’s response in a way that I knew was not good for me and was not good for them.

And so I think what we learned about social media and the unregulated use of that has a lot of bearing on AI, because AI could be a lot more troublesome.

So I do believe we should have regulations.

I’m thinking, you know, guardrails, not railroad tracks. These are variables we need to reflect on. I think we need to make sure we’re respectful of protected classes and how we treat people. I’m actually writing a series of blogs now on the golden rule and how you integrate the golden rule. The golden rule is fundamental, and it’s not just Christianity.

Every religion has its version of the golden rule: you know, treat others as you would have them treat you. I do think that’s something we need to put in. I mean, I wish we would have put that into social media, so it would have prevented me from saying something stupid to somebody who was once a good friend of mine, who now thinks I’m a total jerk because I was a total jerk.

And so I would have loved to have had those guardrails in place that say, hey, Schmarzo, don’t write that. Don’t write that. In fact, call the person and talk to them. Take the time to understand their rationale for their belief.

And then learn from them instead of just, you know, canceling them. So, it’s interesting.

I’m having flashbacks to a previous conversation I had on the podcast with Chris Wiggins, who’s the chief data scientist at the New York Times and recently wrote a book on the evolution of how we got to where we are. It’s called How Data Happened. I certainly recommend it to others, and a lot of what you just described, I would argue, is a function of the business model that was applied to the internet.

That, in many ways, is driving some of the polarization, as a means to more effectively segment people and sell them stuff and advertise to them. Another world which you and I know well.

But tying off, we’re running short on time here, and I do wanna touch on one thing. You mentioned the golden rule. Another thing that you have been talking about often on social media, at least on LinkedIn, I should say, and let’s draw a distinction there, because I still see LinkedIn as someplace that can be incredibly productive, a wonderful exchange of ideas between professionals.

One of the other things you’ve been touching on there is the idea of quantifying ethics. Yes. Right? As a necessary dependency for this informed citizenry, figuring out a way to quantify and actually put math behind ethics. How do we do that, Bill?

So, if we take ethics and start breaking it down into what comprises ethics, I think we can find variables and metrics that can help us. There’s actually a worksheet in the book.

It’s in the ethics chapter, where I propose a worksheet for how you codify ethics, how you start thinking about them. You know, as an organization, for example, you could put in measures regarding societal give-back. How much time do you volunteer?

You know, diversity of workforce. There are ways to measure it. But here’s the fun part about that, Malcolm: it takes human creativity and curiosity to start thinking about not only what metrics we have, but what metrics we should have, to think about whether we’re actually doing things ethically.

Are we treating others as we want them to treat us? Right? And start thinking about, well, how do I measure that? What are the variables and metrics against which I would measure?

My effectiveness as a human? And there are ways. There’s no one single metric.

That’s both the challenge and the beauty. There’s a number of metrics that go together that really define what ethical behaviors are. It can be done, but it’s gonna require us to be human.
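One way to read Bill’s “no single metric” point is as a scorecard: several normalized measures combined with weights. Here is a hypothetical sketch; the metric names, weights, and values are assumptions for illustration, not the book’s actual worksheet:

```python
# Hypothetical ethics scorecard: several metrics, each pre-normalized to [0, 1]
# against a target, combined with weights. Names and numbers are illustrative.
def ethics_score(metrics, weights):
    """Weighted average of normalized metrics; returns a value in [0, 1]."""
    total_weight = sum(weights.values())
    return sum(weights[name] * value for name, value in metrics.items()) / total_weight

metrics = {
    "volunteer_hours": 0.6,      # e.g., hours volunteered vs. an annual target
    "workforce_diversity": 0.7,  # e.g., representation vs. a community baseline
    "community_giveback": 0.5,   # e.g., giving as a share of revenue vs. a goal
}
weights = {"volunteer_hours": 1.0, "workforce_diversity": 2.0, "community_giveback": 1.0}

score = ethics_score(metrics, weights)  # a composite, never any single metric
```

The human work Bill describes is deciding which metrics belong in that dictionary and how to weight them, not the arithmetic itself.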

And maybe the way to end this whole wonderful engagement, this podcast, and by the way, I’m always willing to come out and do another one. Awesome. Is that I think what’s interesting about the conversations we’re having is that AI may actually force us to become more human.

That we’ll start to realize what distinctly makes us human: our ability to forgive, to love, our capacity for tolerance. Right? And start to not only realize that those are what we do, but actually start thinking about the variables and metrics that go into an AI model, so it models what we think are ethical behaviors.

I love it. Right? Like, as a parent, you try to model; as a society, we should try to model. And ultimately, what we’re talking about here is a modeling exercise. That’s true, by the way, getting back to our first conversation about value.

And as I was listening to you speak, I was thinking in my mind, okay, this is value engineering. It’s just a different twist on the definition of value, where value could be a societal value. It could be an environmental value, tying back also to that previous thing you were talking about, which is a different perspective on value.

Instead of just a single quarter, maybe it’s more of a generation, or maybe it’s even a lifetime, or maybe it’s even the life of an entire planet, potentially. So, different ways of thinking about things. This idea of being more human, being more creative, thinking more holistically, thinking beyond just the initial quarter or the initial reaction to something, your initial post on the social media platform. I think those are just good words to live by.

Right? And I think what I’m hearing you say is that AI is going to force us to start asking and answering more of the questions that maybe we’ve been ignoring for too long. And that’s where the role of regulation, I think, comes in. Because a good leader, whether at a presidential or congressional level, or a corporate level, or at a parent level, would start asking these hard questions about what our ethical behaviors are.

What are the ethical outcomes we seek to drive, and what are the variables and metrics against which we’re gonna measure the effectiveness of those ethical outcomes? That’s what leaders need to do. And that’s why we all need to become empowered as citizens of data science, so we have a voice at that table, to make certain that our view of ethics is being incorporated into this conversation.

What’s true for AI is true for our country. So, I love it. Coming off the heels of July fourth, that’s fantastic.

We all need to be more empowered citizens. Love it. Love it. Bill, thank you so much for carving time out of your incredibly busy schedule.

You’re probably working on your next four or five books already. I look forward to seeing one of them. I really hope one day, in the very near future, we can meet in person and have some of these additional conversations, because I think we could just chat for hours. I really, really enjoyed it. To all of our listeners, to our guests, to our viewers, thank you so much for tuning in to another episode of the CDO Matters podcast.

Again, Bill, thank you. We will talk to you all very, very soon. Thanks, all. Bye.

Thanks, everybody. Bye bye.

ABOUT THE SHOW

How can today’s Chief Data Officers help their organizations become more data-driven? Join former Gartner analyst Malcolm Hawker as he interviews thought leaders on all things data management – ranging from data fabrics to blockchain and more — and learns why they matter to today’s CDOs. If you want to dig deep into the CDO Matters that are top-of-mind for today’s modern data leaders, this show is for you.

Malcolm Hawker
Malcolm Hawker is an experienced thought leader in data management and governance and has consulted on thousands of software implementations in his years as a Gartner analyst, architect at Dun & Bradstreet and more. Now as an evangelist for helping companies become truly data-driven, he’s here to help CDOs understand how data can be a competitive advantage.
