In part 2 of my “Ask Me Anything”, I answer questions about digital transformation in a very challenging sector, about chasing KPIs vs. defining strategy, about managing bias in AI, and more.
Many thanks to Clark Boyd, co-founder of Novela, for joining me for this conversation!
If you missed part one, you can see it here:
Your Questions (Part 2):
QUESTION 1: DX in Government [00:00]
Why is digital transformation (DX) in government so hard?
How can we make it work better?
What examples can we learn from around the world? (with cases from Estonia, Dubai, and the U.S. Marine Corps)
QUESTION 2: Mistaking KPIs for Strategy [08:24]
“ROI” is not a strategy. “Grow market share” is not a strategy.
How to rethink strategy to unlock growth and value.
(With a look at the Parmenides’ Fallacy, and why change is often harder when your EBITDA is good.)
QUESTION 3: Avoiding the Traps of A.I. Bias with Human Expertise [15:05]
Spotting the AI opportunity in your own industry.
The “jagged frontier” of AI, and the danger of over-extrapolating from success.
Why expert humans are needed “in the loop” to give oversight to AI’s output.
LIGHTNING ROUND ⚡
[27:46] “If you had one thing to say about the shift from when you wrote the DX Playbook to the DX Roadmap, what would it be?”
[30:04] “Do you think the term ‘digital transformation’ has lost its meaning?”
Watch & Read
Click to watch the video above ☝…
…and read the transcript below. 👇
And please let me know in the comments if you’d like to see another edition of “Ask Me Anything” later in 2024!
Transcript:
[00:00:00] Clark Boyd: Fantastic. Now, in the second part of our discussion, we're going to start by looking at digital transformation in a very specific setting, but also look at what we can learn more broadly from that, too. And this question is from Brian O'Donoghue. Brian asks,
“Given barriers such as bureaucracy, legacy systems, and budget constraints, and given the reference in The Digital Transformation Playbook to changing a culture like Coca-Cola's being like turning an aircraft carrier, can public sector or government agencies really implement digital transformation?”
[00:00:40] David Rogers: Yes, this is one of those common challenges, or questions, I hear. And the answer is: yes, it can be done. It's just hard, but it is possible. Governments do face additional challenges, even compared with companies, where digital transformations already have a 70 percent failure rate.
First of all, if you're talking about federal governments, they can be quite large, especially in larger countries like the U.S. And you see many of the same barriers you see elsewhere. Yes, there are regulations, just as there are regulations if you're in health care, finance, and so forth.
But you lack, for the most part, the market mechanism that gives you a quick feedback loop when something's not working: the financial indicators start going down, you're losing customers, your revenue is drying up, your costs are spiraling out of control. You usually get a pretty strong signal when something's really not working well. In government, the signal is there, but it's less direct when an agency is struggling or failing to achieve its mission or goals.
Still, we can see examples of digital transformation having a real impact in governments. It's earlier days there; the effort has been underway for fewer years. At the scale of a national government, the bigger success stories are mostly smaller countries. Estonia, for example, is a very small country, but it has done some really amazing things in terms of the digital engagement and democratic participation of its citizens.
Singapore has been very progressive and innovative in this area. Also, some of the countries in the Middle East. I was recently in the United Arab Emirates. I was actually in Dubai, but I was meeting with the government of the whole UAE, because digital transformation across government has been a big priority for them.
I was meeting with folks from the pension fund to the education ministry. I even met with the Minister of the Future. That's pretty cool. I wish we had a minister of the future, a cabinet-level position, in my country. They've been really focused on trying to deliver against their mandate to serve their citizens and support the economy.
And they have very clear goals, and they've been pursuing this for some time. One of the big agendas they're focusing on right now is asking: okay, what's our next stage of really pushing ourselves in terms of what they call digitally transforming government services?
I think their slogan is "zero bureaucracy." It's a very ambitious goal, but the idea is: let's take everything we currently do that touches the stakeholders we serve and simplify it, with fewer steps, less time, less complexity. Let's push that as an organizing principle and see how much of an impact we can have across the different ministries of the government.
This requires resources, it requires people, and it requires a real mindset shift. That's actually what I spent a lot of my time there talking about: digital culture, and how do we define the mindset shift that we want in folks? I'll give another example.
This one is from the U.S. Marine Corps. A really important part of the Marine Corps is their depot command, which serves all of their field operations, basically managing and maintaining all of their equipment: everything from a firearm or a rifle to a tank, a vehicle, a watercraft, et cetera.
It's an incredibly complicated, huge operational machine, constantly working. And this is something you get in government: the needs are often very mission critical. If you take longer getting something done, it could put someone's life at risk, whether you're in the health ministry or in military affairs, et cetera.
And they're doing some really amazing work, thinking about how to start to transform this organization (again, not the entire government or even the entire military, but this particular piece) so that it delivers much better against its mission by becoming a more data-driven, data-enabled organization. That means new technologies, but also new ways of working.
What's great is they really see that as the challenge, and we focused on how. For governments in general, I think this is a really important lesson, because they tend to have such broad mandates and constituencies: when you're trying to drive digital change, don't try to fix everything at once.
There's this phrase from startup land: don't boil the ocean. What that means here is, don't catalogue all the different problems with the commerce ministry, or the Department of State, or whatever part of government (and there are lots of them, and they're interconnected),
and then say we have to fix it all at once, together. Because what happens then? You wind up doing a huge planning process. That's the instinct in governments: all right, we'll create this huge, involved plan for how we're going to build a whole new IT system that's going to rebuild everything in the back end of the Department of State or the Agriculture Department. And then, finally, all these things are going to work better. So what do you wind up doing? You fall into the trap of a long planning process, and then RFPs and vendors. And it's years before the thing gets built. And by the time it's built, the needs have changed.
Even if you did a good job and built something that solved the needs as they were when you started four or five years ago, it's no longer right. So instead, focus on the customer and pick specific problems. To bring it back to the Marine Corps: what are the specific issues you could most improve?
Is it about reducing turnaround time? Is it about better predictability? Better cost management? Is it about creating data so you can say: not only are we helping repair equipment, we're using that data to help you think about what the next equipment we buy should be, given the record of what's happened with this tool versus that tool?
So: what are the customer problems? Which ones can you solve? Which ones are most urgent? And then, how do we set up small teams to move fast, not lacking oversight, but with a leaner form of oversight before scaling? In those early stages of testing out a new solution, you can move more quickly than you're maybe accustomed to, if teams don't go in with the mindset that everything has to pass through a hundred checkboxes of regulations and procedures and forms and so forth.
They're focused on the outcome: let's just build something that improves this outcome, and then move. Sure, before we roll it out at scale, we're going to go back through additional levels of due diligence and scrutiny. But it's about empowering teams to move faster and giving them the permission to do so.
[00:08:24] Clark Boyd: Yeah, I'd love to see zero bureaucracy brought to life at government level. It's an ongoing battle in many countries. A lot of things that could be happening simply are not. Now, our next question is from Moe Hillel. Hi, Moe. Moe's question is about aligning with leadership, and this is an eternal question.
“Aligning with leadership is very difficult, since the main driver of ROI is still short-term strategy targeting market share and revenue. Just really short-term thinking. I feel when the numbers are great, there is space for discussion on transformation. If not, leadership will just go back to the basics. So how do we align on a consistent approach to digital transformation?”
[00:09:14] David Rogers: Great question. Thank you, Moe. I like this question in particular, because this is a point I've been stressing with a group of companies I met with in Asia recently. To repeat, he says the main driver of ROI is short-term strategy targeting market share and revenue.
What I was trying to stress with those leaders, as we talked about defining your strategic focus, is that ROI is not a strategy. "Improve our market share" or "grow our top-line revenue" is not a strategy. Those are KPIs, and they have their place. But look at companies that are actually driving change and driving growth.
Honestly, I find they spend much less time talking about something like a market share target. What they mostly focus on is: how are we going to get there? That's what strategy is. Strategy asks: what are the problems we need to solve, for the customer or in our internal operations, that we believe, if we can fix or improve them, will unlock
that greater revenue or greater market share, or reduce our operating costs, or add more resilience to our operations, and ultimately lead to that ROI? So, strategy is a theory about how we achieve the goal. It's not just naming the metric that tells us when we're done.
So, with that in mind, how do we align the leadership? It starts again with step one, the shared vision. In particular, there's a question in the shared vision chapter: what if we do nothing? This is the Parmenides' Fallacy: leaders compare a possible course of action with the present, rather than with the future that will unfold if they stand still.
So they say: if we do that, it could cost this much, and we don't know if it will work, versus right now, where we seem to have a safe position. Instead, you need to ask yourself at the very beginning (this is one of the first parts of the shared vision): where is our world going, such that if we just stay the course, hunker down, and keep doing what we're doing today, what is going to happen over time?
What if we do nothing? I have to say, these companies I met with in Asia did a really good job, each within its own particular sector, of identifying why that was not a tenable position for the long term: why they would face ever-increasing problems, and why their current situation, which looked pretty good, would clearly worsen over time if they just stayed the current course.
I find that focusing on that question, what will happen if we do nothing?, is a really important way to build alignment in leadership teams. Then, as you bring the whole shared vision together, you start to focus on step two, which I call picking the problems that matter most.
This is where strategy gets much more specific, and you say: these are the critical things we have to address. To go back to the example of The New York Times: when they were facing a crisis in their business model, they didn't just say, our goal is to have more subscribers and more revenue.
They defined, in a document ("Our Path Forward," I think it was called), five key priorities: the things they thought would get them there. One of them was international expansion, getting more and more readers outside the U.S.
Another was about changing the product experience so that the subscription felt like your Amazon Prime or Netflix subscription: something you wouldn't consider giving up. And they had a few others. Some were about the user experience; some were more about changing the way they operate and do their work inside the organization.
Others were about revenue sources: advertising versus subscription, and geography. That's picking the problems that matter most to your business and to the customer. And this is where we really need to think about aligning leadership, not just on "digital matters" or "we're doing digital transformation."
It's got to go a lot further than that. You've got to align on: what are the stakes? What if we do nothing? What's the impact we're trying to achieve? And what do we think are the biggest levers, the few things that, if we get them right, will get us where we want to go?
Lastly, Moe remarks that when the profits are great, it seems like there's more space for considering transformation. It always depends on the organization and the culture, but I have actually heard more of the opposite from corporate leaders: when your earnings, your EBITDA, your ROI are high, it's often harder to change, because there's a certain complacency and a feeling of don't touch the numbers.
They looked really good when we reported them to investors last quarter; if it ain't broke, don't fix it. It's actually often easier to rally people to really push for change when things are a little shaky, when the numbers are not doing so well. There's never a perfect time, but whatever situation you're in, you've got to align the leadership
around the case for change, agreeing on a few primary levers that you think are going to drive that strategy.
[00:15:05] Clark Boyd: And speaking of that notion, we could be asking: what if we do nothing? How will that change things? We're well into our final five or ten minutes, and we're finally going to address AI, because I think a lot of people will be using that thought exercise at the minute to say: okay, we actually need to start doing something, because if we don't, the world could look very different for us in six or twelve months' time, maybe even weeks. Our questions here, from Carlos, are about taking a critical view of AI, and I really like this, because we know there's lots and lots of hype, and AI is now so accessible and so powerful that saying "we are AI-powered" isn't even a differentiator anymore; everything is, to some extent. Carlos is asking,
“Why are people led to believe that artificial intelligence will always provide the best answers and solutions, considering that answers can be biased or manipulated? What are the skills and competencies necessary for people to have a critical view of the results generated by artificial intelligence?”
[00:16:17] David Rogers: Great questions. Thank you, Carlos. I'll start by saying that, overall, the view we want to have is, of course, to fall in love with the problem, not the solution. Start from the customer and the business first, not from the technology. I've written a lot about this. That being said, there are a lot of rapid advances going on in AI right now, particularly in the area of large language models and what's called generative AI more generally.
So it does seem like there's a lot of opportunity, and it depends very much on what industry you're in. For some, it's "oh my gosh, this has already changed how many people we're hiring, because we don't need people for certain jobs," or it has dramatically changed the way work gets done in some departments of the company.
For others, it's still much more: "we think this is going to be significant, but there's some lack of clarity about where, or which types of AI, will actually be most impactful for us in the near term, given our business model." Say you're an industrial manufacturer, or you're repairing those tanks for the Marine Corps.
I'm not sure that a chatbot that writes language incredibly fluently is the most important machine learning model for your business at this particular moment. You might do much better with a predictive model on something like turnaround time. But anyway, with that point of view, let me zero in on what Carlos asked.
Why are people led to believe that artificial intelligence will always provide the best answers, considering that the answers can be biased or manipulated? I guess the "why" is human nature: hype, excitement, falling in love with the technology. There's a gap between the demos, where you find something it does amazingly well, and the other tasks where it doesn't.
That's a big part of it. We have right now what Ethan Mollick refers to as “the jagged frontier.” It's very much a characteristic of this new wave of AI that the bleeding edge of what it can and can't do is very uneven. The same tool, whether you're talking about Gemini or Claude or ChatGPT, may perform stunningly well on one task, and then you turn around and give it another task that seems relatively similar to you, and it does shockingly poorly.
So that's one reason: we see an example where it does really great, and we think, oh, this can do anything! We over-extrapolate. There are also a lot of sources of bias, and I think that's still an important area of education, because people don't recognize them. Carlos mentioned that the answers can be biased or manipulated. Broadly speaking, the biggest source of errors is bias in the training data.
One broad area is human bias filtering through the training data into the algorithms, which then actually amplify it. A classic example is in HR and hiring, where there's a well-known, very hard problem: if you're operating human resources for a big organization, you're trying to fill jobs, and you get lots and lots of applications. How do we deal with the top of the funnel, with all these applications, and pick who we should even be looking at?
It seems like a great application of AI, but the problem is: what do you train the model on? The easy trap to fall into is to show it all the resumes of the people you wound up hiring in the past. But if you're pulling in large amounts of historical data, you may have a lot of human bias in it.
In other words, depending on your organization and where you are in the world, those hiring choices in the past may have been marked by bias against certain genders, ethnicities, educational backgrounds, and so forth. That doesn't really reflect what you needed in the past, and it may not reflect what you need going forward.
If you just feed that data in, not only will the algorithm perpetuate that past bias of human judgment without you realizing it, it will very often actually amplify it. It will notice that there was a slight bias toward hiring men over women, conclude that men are better choices than women, ramp up that factor, and just discount all the women.
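To make that concrete, here's a minimal sketch in Python, using entirely made-up data and a generic scikit-learn classifier (not any real hiring system), of how a screening model trained on a biased hiring history learns gender as a "signal":

```python
# Hypothetical illustration: a screening model trained on biased hiring history.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.uniform(0, 1, n)      # true qualification, identical distribution for everyone
is_male = rng.integers(0, 2, n)   # protected attribute (1 = male)

# Historical labels: past recruiters applied a lower bar to male candidates.
hired = np.where(is_male == 1, skill > 0.5, skill > 0.7).astype(int)

# Naive model trained directly on the biased history, with gender as a feature.
model = LogisticRegression().fit(np.column_stack([skill, is_male]), hired)
print("learned weight on is_male:", model.coef_[0][1])  # positive = gender used as a signal

# Screen a fresh applicant pool with identical skill distributions.
skill_new = rng.uniform(0, 1, n)
men = model.predict(np.column_stack([skill_new, np.ones(n)]))
women = model.predict(np.column_stack([skill_new, np.zeros(n)]))
print(f"selection rate, men:   {men.mean():.2f}")
print(f"selection rate, women: {women.mean():.2f}")
```

Note that simply dropping the gender column wouldn't fix this, since bias can leak back in through correlated features; the point is that outcomes like these selection rates have to be measured and audited directly.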
So human bias is one sort, but others are less obvious, like collection bias. There's a famous example from medical imaging, which has been a great application of AI. One of the early problems involved images of, I think, tumors or melanomas, something like that, where the size and shape of the spot are key factors.
It turned out that, just because of the collection process, some of the pictures had a ruler visible at the edge of the image. Someone had been using a ruler to visually measure the size of the tumor or the spot on the skin, and by happenstance, the ruler showed up more often in the examples where there was a problem with the spot (or where there wasn't, I forget which).
So the algorithm assumed that the ruler was a key predictor of malignancy, which it obviously is not. The point is, collection bias, arising just from the process of pulling in data, can bring in hidden biases.
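Here's an equally hypothetical sketch (synthetic numbers standing in for real images) of how you might catch that kind of artifact: a permutation-importance check shows the model leaning on a "ruler present" feature that should carry no medical meaning:

```python
# Hypothetical illustration: auditing a model for a spurious "ruler" feature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 4000
lesion_size = rng.normal(5, 2, n)                                # genuine signal
malignant = (lesion_size + rng.normal(0, 1, n) > 6).astype(int)
# Collection artifact: clinicians photographed a ruler mostly with worrying lesions.
ruler_present = (rng.uniform(0, 1, n) < np.where(malignant == 1, 0.9, 0.1)).astype(int)

X = np.column_stack([lesion_size, ruler_present])
model = RandomForestClassifier(random_state=0).fit(X, malignant)

result = permutation_importance(model, X, malignant, n_repeats=10, random_state=0)
for name, score in zip(["lesion_size", "ruler_present"], result.importances_mean):
    print(f"{name}: {score:.3f}")  # a large score on ruler_present is the red flag
```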
Of course, we're all aware now, because of generative AI and these large language models, that their nature is to learn from vast stores of human-created language and produce text that is very often natural, engaging, and helpful. But although the system writes in a style that is persuasive and seems confident and accurate, it is not actually designed or optimized, fundamentally, around predictiveness. This is where these generative models are very different from predictive models of AI.
They do a kind of linguistic pattern matching rather than truthful pattern matching. So we get what are called hallucinations. All of these errors happen, and there's never going to be a perfect solution in which none of them happen. That's why it's so critical, as Carlos said: how do we build the skills, the competencies, to spot them?
You're never going to have a technical solution that prevents all of these errors. Obviously, you want technical solutions to reduce the hallucinations, the collection bias, the human bias, and so forth. But you are also always going to need some degree of humans in the loop.
There are more closed-loop models, where what you're predicting is very objective. It's something like the temperature: you have a thermostat, you measure whether your prediction was right, and you feed that back into the model, so most of the training can be done on an automated basis.
That's the heart of machine learning in many cases. But in other cases, it's not. With generative AI, really the only judge of whether what it gave you was useful or helpful is a person. There's always got to be some degree of humans in the loop, even in the most automated systems.
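In practice, a human-in-the-loop gate can be as simple as routing anything the model is unsure about to an expert reviewer. A minimal sketch (the threshold and labels here are illustrative, not from any production system):

```python
# Hypothetical illustration: auto-accept only high-confidence output;
# everything else is queued for expert human review.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

def triage(pred: Prediction, threshold: float = 0.9) -> str:
    if pred.confidence >= threshold:
        return f"auto-accepted: {pred.label}"
    return f"queued for human review: {pred.label} ({pred.confidence:.0%} confidence)"

print(triage(Prediction("benign", 0.97)))     # auto-accepted
print(triage(Prediction("malignant", 0.62)))  # routed to an expert
```

The hard part, of course, is having reviewers with enough expertise to catch what the model gets wrong.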
So the thing is, you need expertise to be able to spot: wait, something's wrong. Whenever the ruler's in there, it keeps overcompensating; or, I've noticed this bias in our hiring practices; or, these are hallucinations. This is one of the tricky things about generative AI, because it writes in such a persuasive way.
If you don't know the subject really well, and you're using the chatbot because you're thinking, "I don't know how to write a paper on this, you write it for me," then you aren't going to have any ability to judge whether that paper is making stuff up or is accurate. You need an expert.
I think this is one of the most interesting challenges about where AI is going right now: we are seeing, if anything, an ever greater need for more subtle expertise, to fine-tune these models and even to fine-tune the use cases, the specific applications in your business, your industry, your organization.
Is this the right place to apply it? What do we need to make sure of? What kinds of errors crop up? How do we spot them? How do we make sure they're the kind of errors that don't really matter to our customers or business, versus "we can't apply it here, because even though it's only a 1 percent error rate, it's a really bad error when it happens"?
Or: "without this system, we've already got our error rate down to 0.1 percent, using a sophisticated process." All of that requires expertise. So the question for the future is this: the trend of the current wave of AI seems to be that, at least in some areas, we'll need fewer people, or fewer interns and new hires and junior recruits who don't have a lot of expertise, because that's the stuff these tools do really well.
But where are the experts of the future going to come from? Who will be the folks testing, validating, tweaking, and keeping an eye on these models ten years from now in your organization, if you aren't hiring them today, bringing them up through the ranks, and giving them the rich real-world experience that will let them catch those subtle mistakes, or the misalignment between the business goals and what the AI is producing?
So, it's a really important question and one that is, I think, going to continue to be front and center for businesses in years ahead.
[00:26:24] Clark Boyd: Yeah, no doubt. As soon as I saw the question, I thought of the tools we're building at Novela. We've been amazed by just how quickly people believe what they're told by the AI, in a way that isn't good.
They wait for direction from the AI. I've had to explain to them sometimes: it's not real, you're not talking to real people, these are AI personas, their word isn't gospel. But it happens very quickly. So I think Carlos's question about what's necessary to build these skills and competencies runs deep.
We're probably going to see an education curriculum change quickly to help people spot these things, to think more critically. And it's long overdue.
[00:27:08] David Rogers: Yeah. It's interesting: for years, many of us have been arguing, particularly with the growth of the internet, the web, and social media, for the increasing importance of what's called media literacy. That means teaching, from a very early age, that for everything you read or look at, you ask the questions: where is this from?
Who made it? What do I think about the source? Not just, what is it saying? But I think you're right: going forward, it's going to be thinking about that from the AI perspective. Not who made this, but what algorithm made this, and how did it make it? And on what basis can I confirm whether or not this is reliable?
[00:27:46] Clark Boyd: Exactly. We'll round off with another little lightning round to see us through. Quick-fire questions. One from Tim:
“If you had one thing to say about the shift in thinking from the Playbook to the Roadmap, what would it be?”
[00:28:04] David Rogers: I would say that when I wrote The Digital Transformation Playbook, the blue book, as I call it, companies were really grappling with the question: do we really need to change? Do we need to transform? The challenge was getting companies, and their leaders, to stop thinking about their business through the lens of the past, to get past their blind spots, and to really recognize what was changing in the world around them and why they were going to have to rethink and reimagine the future of their business.
So again: do we need to change? Now, that's not the challenge. Every company recognizes that the digital era, and all these waves of technological change, impact them. There's no safe space; it doesn't matter what industry you're in. Instead, the question now is really: how do we do this right?
"We've been trying, and it's hard, and we're struggling. We did a really expensive IT-based project, and it hasn't delivered the results we wanted. So how do we do better?" I guess the other way I'd answer Tim's question is this: when I wrote that book, I went out and started talking with companies, and I asked them what they thought would be challenging.
What did they see as the barriers, the hard part of digital transformation? The most common answers were money and technology. Everyone thought: oh, it's going to be hard because it's expensive, or because our technology has to be replaced, and that's always really complicated.
Now, because companies by and large have been trying something, there's been some kind of digital transformation effort, and they've tried to actually do this in the real world, nobody answers that the biggest problems are money or technology. When I ask what the biggest barriers are, what the hardest part of digital transformation is, it's always about culture and mindset, and about process and the way the organization itself runs.
[00:30:04] Clark Boyd: Perfect. And our final one from Ben. A big one.
“Do you think the term ‘digital transformation’ has lost its meaning? Should we try to reclaim it, or should we find a better word or phrase?”
[00:30:17] David Rogers: I'm not super wedded to particular language. I focus on clarity: whatever phrases or words you're using, define them, be clear, and again, align on what we are actually talking about and what matters here.
So I wouldn't be opposed to some new phrase coming along. But let me make a case: please, just don't call it "AI transformation" or "AI business" or something like that. Let's not put the technology first; that's the point. It's got to be the technology in service of the business and the customer. Let's not put the technology on a pedestal and think it's a magical oracle that will tell us everything and always be truthful. But sure, since I keep telling companies that digital transformation has to be led by the business, has to be business-first, maybe we could call it "digital business transformation" or "continuous business transformation."
The key point, whatever term we're using, that I keep trying to stress to companies is this: the challenge is how organizations change not just once, but get better at changing over and over, faster and faster, because in this digital era we're in, all these waves of technologies just keep accelerating the change around our organizations.
That is the real challenge: this continuous process of change, and getting better and better at it. So we can call it digital transformation, or you can give it another name. I'm open to it.
[00:31:59] Clark Boyd: Brilliant. I think that's us just about on time. Thanks everyone for joining. And of course, for your fantastic questions and more than anything, thanks to you, David, for having me along.
It's been a blast as always. We'd love to do it again.
[00:32:12] David Rogers: Absolutely. Thank you so much for joining us, Clark. This was great. I always love these conversations we have. So when I thought about this AMA session, of course you were the first one who came to mind. Thanks, everyone, for your questions.
And my best wishes to all our readers. Keep the questions coming. You can use the comments button at the bottom of any issue of David Rogers on Digital, or just hit reply on the email that hits your inbox and share a question there. We look forward to hearing from you.
Thanks very much. Take care. Thanks, Clark!
NEW BOOK:
“THE DIGITAL TRANSFORMATION ROADMAP: Rebuild Your Organization for Continuous Change”
ORDER NOW:
Hardcover: https://amzn.to/41U85dl
Kindle: https://amzn.to/3OWD437
Audiobook: https://bit.ly/DXR-Audiobook
Bulk orders up to 60% off: https://bit.ly/DXR-bulk-orders