Fearless Thinking

🎙️EP 70: Ethical AI and Responsible Innovation for Entrepreneurs

Michael D Devous Jr | Season 3, Episode 70

In this episode of Fearless Thinking, Michael Devous tackles the crucial topic of ethical AI and responsible innovation for entrepreneurs. As AI becomes increasingly integrated into various aspects of business, it's essential for entrepreneurs to consider the ethical implications and ensure their AI systems are fair, unbiased, and transparent. Michael discusses key ethical considerations, such as data privacy, bias mitigation, and explainability, and provides actionable steps for building a culture of responsible AI innovation within your company.

Segment 1: The Importance of Ethical AI

  • AI's Impact: AI is transforming the business landscape, offering immense opportunities but also potential pitfalls. Michael Devous notes, "AI is no longer a futuristic fantasy; it's here, it's now, and it's changing the game for entrepreneurs" (02:54 - 02:56).
  • Ethical AI: Building AI systems that are fair, unbiased, and transparent, ensuring AI serves humanity rather than the other way around (03:13 - 03:19).
  • Case Study: A startup faced legal and reputational issues due to biased AI in job applicant screening, highlighting the need for ethical AI from the outset (04:17 - 04:22).

Segment 2: Key Ethical Considerations in AI Development

  • Privacy and Data Protection: Ensuring responsible data collection, storage, and use. Michael Devous emphasizes, "AI thrives on data, but that data often belongs to real people" (05:42 - 05:43).
  • Transparency and Explainability: Making AI decisions understandable and explainable. Devous notes, "AI shouldn't be a magic box. Can you explain how your AI system works and why it makes certain decisions?" (06:10 - 06:13).
  • Fairness and Bias Mitigation: Identifying and addressing biases to ensure fairness (06:56 - 06:57).
  • Accountability and Responsibility: Establishing clear accountability for AI decisions (07:35 - 07:36); a checklist sketch covering all four considerations follows this list.
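
One way to make these four considerations operational is a lightweight release-gate checklist. A minimal Python sketch, where the question wording and the all-or-nothing gating rule are illustrative assumptions rather than anything prescribed in the episode:

```python
# Hedged sketch: the four considerations encoded as a release-gate checklist.
ETHICS_CHECKLIST = {
    "privacy": "Is all personal data collected, stored, and used responsibly?",
    "transparency": "Can we explain how the system makes its decisions?",
    "fairness": "Have we audited the model and its data for bias?",
    "accountability": "Is a named owner responsible when the system errs?",
}

def release_gate(answers: dict[str, bool]) -> bool:
    """Block a launch unless every consideration is explicitly signed off."""
    missing = [key for key in ETHICS_CHECKLIST if not answers.get(key, False)]
    for key in missing:
        print(f"BLOCKED on {key}: {ETHICS_CHECKLIST[key]}")
    return not missing

# Example sign-off: accountability is still open, so the gate fails.
release_gate({"privacy": True, "transparency": True, "fairness": True})
```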

Segment 3: Bias in AI and Mitigation Strategies

  • Types of Bias: Algorithmic bias and data bias can affect AI systems.
  • Mitigation Strategies: Using diverse and representative training data, auditing for bias, and building inclusive development teams (07:27 - 07:31); a minimal audit sketch follows this list.
  • Example: A company improved its AI-powered medical diagnosis tool by diversifying its training data.
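
A minimal sketch of what that kind of training-data audit could look like, assuming a pandas DataFrame with hypothetical "group" and "label" columns (the column names, toy data, and 5% threshold are illustrative, not from the episode):

```python
# Hedged sketch: check how well each demographic group is represented in the
# training data and how its outcome rate compares across groups.
import pandas as pd

def audit_representation(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Report each group's share of the data and its positive-label rate."""
    summary = df.groupby(group_col).agg(
        rows=("label", "size"),
        positive_rate=("label", "mean"),
    )
    summary["share"] = summary["rows"] / len(df)
    return summary.sort_values("share", ascending=False)

# Toy data standing in for historical screening outcomes.
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "C"],
    "label": [1, 0, 1, 1, 0, 0, 1],
})
report = audit_representation(df, "group")
print(report)
print("Underrepresented:", list(report.index[report["share"] < 0.05]))
```

Run as a recurring audit rather than a one-time check, a report like this is one concrete way to catch the skewed training data discussed in this segment before it reaches production.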

Segment 4: Transparency and Explainability in AI Systems

  • Importance of Transparency: Builds trust with users and empowers informed decision-making. Devous notes, "It's not enough for AI systems to be accurate; they also need to be understandable" (06:28 - 06:32).
  • Techniques for Explainability: Using interpretable models, AI explanation tools, and clear communication; a short sketch follows this list.
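
A hedged sketch of the "interpretable models" route: a deliberately shallow scikit-learn decision tree whose fitted rules can be printed verbatim. The features, labels, and toy data are assumptions for illustration:

```python
# Hedged sketch: a shallow decision tree is a model whose decisions can be
# rendered as plain if/then rules instead of a "magic box."
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[5, 1], [1, 0], [7, 0], [2, 1]]  # assumed features: [years_experience, has_certification]
y = [1, 0, 1, 0]                      # 1 = advanced by the screening tool

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the fitted tree as human-readable rules, one way to
# answer "why did your AI make that decision?" for a given applicant.
print(export_text(tree, feature_names=["years_experience", "has_certification"]))
```

The trade-off mirrors the point above: a small interpretable model may give up some accuracy in exchange for decisions you can actually explain to users.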

Segment 5: Building a Culture of Responsible Innovation

  • AI Ethics Committee: Guides and oversees AI development.
  • Ethical Guidelines: Ensures AI aligns with company values.
  • Open Discussions: Encourages honest conversations about AI ethics.
  • Impactful Quote: "By being an ethical AI leader, you're not just doing what's right; you're also building a stronger, more sustainable business" (18:48 - 18:58).

Key Takeaways

  • Prioritize ethical considerations in your AI development process to build trust and avoid costly mistakes.
  • Ensure data privacy, transparency, fairness, and accountability in your AI systems.
  • Actively work to identify and mitigate bias in your AI algorithms and data sets (a fairness check sketch follows this list).
  • Foster a culture of responsible AI innovation within your company by establishing ethical guidelines and encouraging open discussions.
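
As a starting point for the bias takeaway above, here is a hedged sketch of one common fairness spot-check: the "four-fifths" disparate-impact ratio between two groups' selection rates. The group data are made up, and the 0.8 threshold follows a common EEOC rule of thumb rather than anything stated in the episode:

```python
# Hedged sketch: compare selection rates between a protected group and a
# reference group; ratios below ~0.8 are a conventional red flag.
def selection_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def disparate_impact(protected: list[int], reference: list[int]) -> float:
    """Ratio of the two groups' selection rates."""
    return selection_rate(protected) / selection_rate(reference)

women = [1, 0, 0, 1, 0]  # 1 = selected by the screening AI (toy data)
men   = [1, 1, 0, 1, 1]
ratio = disparate_impact(women, men)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Audit flag: possible adverse impact against the protected group.")
```

A check like this would have flagged the biased job-screening tool from Segment 1 long before lawsuits and reputational damage did.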

1

00:00:08,608 --> 00:00:12,379

Hey there, everybody, and welcome to the

Fearless Thinking podcast designed to

 

2

00:00:12,412 --> 00:00:13,113

help you navigate

 

3

00:00:13,213 --> 00:00:15,682

authentic leadership and the

entrepreneurial journey.

 

4

00:00:16,182 --> 00:00:19,219

I'm your host, Michael Devous, and these

are my thoughts, lessons, and insights

 

5

00:00:19,352 --> 00:00:19,853

from my

 

6

00:00:20,53 --> 00:00:24,290

entrepreneurial journey into coaching,

workshops, motivational speaking, and of

 

7

00:00:24,290 --> 00:00:24,591

course, what

 

8

00:00:24,591 --> 00:00:27,60

I picked up from interviews with

inspiring leaders and entrepreneurs

 

9

00:00:27,394 --> 00:00:28,328

along the way.

 

10

00:00:28,962 --> 00:00:33,99

My mission is to help people unlock

their untapped potential by using fear

 

11

00:00:33,233 --> 00:00:33,767

as a catalyst

 

12

00:00:33,833 --> 00:00:37,337

for powerful change and growth so they

can step into the greatest version of

 

13

00:00:37,404 --> 00:00:37,804

their most

 

14

00:00:37,904 --> 00:00:38,872

authentic selves.

 

15

00:00:39,639 --> 00:00:42,342

All right, let's get into some fearless

thinking.

 

16

00:00:57,123 --> 00:00:57,924

Hey everybody.

 

17

00:00:58,425 --> 00:00:59,926

Welcome back to Fearless Thinking.

 

18

00:01:00,93 --> 00:01:01,294

I'm your host, Michael Devous.

 

19

00:01:01,795 --> 00:01:03,530

This is the episode about

 

20

00:01:03,697 --> 00:01:07,0

Ethics and AI, Responsible Innovation

for Entrepreneurs.

 

21

00:01:07,200 --> 00:01:08,134

Now, we know we were getting

 

22

00:01:08,134 --> 00:01:08,435

here.

 

23

00:01:08,568 --> 00:01:14,207

We knew we were getting here this past

month as we were examining all the different

 

24

00:01:14,274 --> 00:01:19,713

types of AI systems, innovations,

opportunities, frameworks, you name it,

 

25

00:01:20,246 --> 00:01:21,414

that different companies,

 

26

00:01:21,514 --> 00:01:26,986

organizations, and industries are using

currently to build out their tech stack,

 

27

00:01:27,253 --> 00:01:27,754

to build out

 

28

00:01:27,754 --> 00:01:31,691

their systems to navigate the world of

using AI, whether that's, you know,

 

29

00:01:31,825 --> 00:01:33,93

company-wide or individually.

 

30

00:01:33,593 --> 00:01:35,795

And ethics is falling into the big part

of it.

 

31

00:01:35,829 --> 00:01:36,29

You

 

32

00:01:36,29 --> 00:01:39,699

can see a lot of consulting, ethics

consulting jobs out there that are

 

33

00:01:39,733 --> 00:01:40,867

popping up on the radars.

 

34

00:01:41,968 --> 00:01:44,337

So yeah, let's dive into this, right?

 

35

00:01:45,805 --> 00:01:47,540

This year, what's

 

36

00:01:47,607 --> 00:01:52,779

funny, because almost all the scripts

that I have, that I put through my AI and

 

37

00:01:53,513 --> 00:01:58,852

put them together and stuff, AI almost

always says "by 2025." It's almost

 

38

00:01:58,852 --> 00:02:04,190

as if AI still thinks 2025 is further

away and off in the distance, when in

 

39

00:02:04,224 --> 00:02:10,96

fact it's here, it's happening today. So

no, it's not "by 2025." By the end of 2025,

 

40

00:02:12,198 --> 00:02:15,135

AI ethicists, and that's a hard one to

say, I think we have to come up with a

 

41

00:02:15,168 --> 00:02:21,408

new term, will be in high demand across

almost all of the market industries.

 

42

00:02:22,575 --> 00:02:26,46

Ethical considerations become paramount

with artificial intelligence,

 

43

00:02:26,513 --> 00:02:31,151

implementing those locally if you're

doing it large businesses on their

 

44

00:02:31,351 --> 00:02:34,754

servers and ingesting certain data sets

that you've had over the years, right?

 

45

00:02:34,754 --> 00:02:34,954

So

 

46

00:02:34,954 --> 00:02:38,825

are we ready to lead the charge in

responsible AI innovation

 

47

00:02:39,59 --> 00:02:41,695

both as entrepreneurs, solopreneurs, or

even as a

 

48

00:02:41,728 --> 00:02:42,128

business?

 

49

00:02:42,562 --> 00:02:44,664

What does it look like to start building

out the frameworks

 

50

00:02:45,65 --> 00:02:50,36

for an ethical AI system or a set of

processes and policies

 

51

00:02:50,603 --> 00:02:52,405

at your organization or your company?

 

52

00:02:52,572 --> 00:02:53,373

And what should we consider?

 

53

00:02:53,873 --> 00:02:54,541

AI is no longer,

 

54

00:02:54,874 --> 00:02:56,309

you know, it's just no longer the

future.

 

55

00:02:56,743 --> 00:02:57,610

It's not a fantasy.

 

56

00:02:57,811 --> 00:02:58,11

It's here.

 

57

00:02:58,111 --> 00:02:58,578

It's here now,

 

58

00:02:58,812 --> 00:03:01,214

and it's happening to us everywhere,

right?

 

59

00:03:01,314 --> 00:03:04,617

And it is a game changer for

entrepreneurs.

 

60

00:03:04,984 --> 00:03:10,123

But as we embrace the power of AI, we do

need to be mindful of its potential pitfalls.

 

61

00:03:10,256 --> 00:03:10,490

And that's

 

62

00:03:10,623 --> 00:03:13,26

where ethical AI comes in.

 

63

00:03:13,727 --> 00:03:19,566

Ethical AI is about building AI systems

that are fair, unbiased,

 

64

00:03:19,632 --> 00:03:20,600

and transparent.

 

65

00:03:20,934 --> 00:03:24,738

A few trigger words in this day and

age, right?

 

66

00:03:25,405 --> 00:03:27,474

With our political climate and

 

67

00:03:27,674 --> 00:03:31,111

businesses and stuff ditching their DEI

programs, how are they going to balance

 

68

00:03:31,411 --> 00:03:32,746

the need for fair,

 

69

00:03:32,912 --> 00:03:38,318

unbiased, and transparent AI systems as

well as ensuring that the data sets that

 

70

00:03:38,318 --> 00:03:39,219

they put into it

 

71

00:03:39,352 --> 00:03:42,555

are also fair and unbiased and

transparent, right?

 

72

00:03:42,856 --> 00:03:45,625

It's about ensuring that AI serves us,

 

73

00:03:45,792 --> 00:03:47,494

humanity and not the other way around.

 

74

00:03:48,28 --> 00:03:49,396

And by the way, for entrepreneurs and

 

75

00:03:49,529 --> 00:03:53,933

solopreneurs, it's not just about doing

the right thing, but it's about doing the

 

76

00:03:54,67 --> 00:03:55,402

smart thing for your business.

 

77

00:03:55,502 --> 00:03:57,370

Because remember, if the world out

there, if your

 

78

00:03:57,470 --> 00:04:03,543

customers and your consumers know that

you are fair and unbiased and

 

79

00:04:03,710 --> 00:04:07,781

transparent in your use of AI, you can

leverage that to help connect people

 

80

00:04:07,847 --> 00:04:10,617

with your brand and your message and

your mission, right?

 

81

00:04:10,817 --> 00:04:11,551

Getting even more

 

82

00:04:11,584 --> 00:04:12,85

authentic.

 

83

00:04:12,485 --> 00:04:16,956

So remember, the story about that

startup, by the way, you've probably

 

84

00:04:17,157 --> 00:04:18,358

heard of it,

 

85

00:04:18,658 --> 00:04:22,796

that used AI to screen job applicants

only to find out that it was biased

 

86

00:04:22,996 --> 00:04:23,530

against women,

 

87

00:04:23,630 --> 00:04:29,736

and they got sued, huge lawsuits, and

bad damage to their

 

88

00:04:29,769 --> 00:04:31,104

reputation, by the way.

 

89

00:04:31,304 --> 00:04:36,242

They lost valuable time and money and

resources simply because they didn't do

 

90

00:04:36,309 --> 00:04:37,10

their due diligence

 

91

00:04:37,310 --> 00:04:40,580

to ensure that their AI data set, that

the data sets they were putting into

 

92

00:04:40,847 --> 00:04:41,481

their AI

 

93

00:04:42,515 --> 00:04:44,584

was fair, was unbiased, right?

 

94

00:04:44,617 --> 00:04:46,586

And sometimes we don't even know this

about ourselves, you know,

 

95

00:04:46,753 --> 00:04:48,955

these are implicit biases, unknown

biases.

 

96

00:04:50,90 --> 00:04:52,992

So I think it's important that you

create a, when you're

 

97

00:04:53,193 --> 00:04:55,962

creating a framework to start out with

this, you understand what questions

 

98

00:04:56,129 --> 00:04:56,830

you're asking and why

 

99

00:04:56,963 --> 00:05:02,68

you're asking them, and then you provide

the right data sets so that your AI can

 

100

00:05:02,235 --> 00:05:03,236

perform to the

 

101

00:05:03,236 --> 00:05:05,805

standards that you expect it to for

your business model.

 

102

00:05:05,972 --> 00:05:06,406

By

 

103

00:05:06,573 --> 00:05:11,778

embedding ethical considerations into

your AI strategy from

 

104

00:05:11,911 --> 00:05:14,481

day one, you can avoid a lot of these

costly mistakes.

 

105

00:05:15,181 --> 00:05:15,382

And you

 

106

00:05:15,415 --> 00:05:18,485

can build a business that is not only

innovative, but it's

 

107

00:05:18,651 --> 00:05:19,185

responsible.

 

108

00:05:19,786 --> 00:05:22,689

So some key ethical considerations for

AI.

 

109

00:05:23,390 --> 00:05:23,990

What do

 

110

00:05:24,24 --> 00:05:24,991

we want to look for?

 

111

00:05:25,325 --> 00:05:27,227

One, privacy and data protection.

 

112

00:05:27,394 --> 00:05:27,594

We

 

113

00:05:27,594 --> 00:05:28,828

know that this is important.

 

114

00:05:29,295 --> 00:05:30,497

It's everywhere that we go.

 

115

00:05:31,264 --> 00:05:31,731

All

 

116

00:05:31,731 --> 00:05:32,565

of our data is everywhere.

 

117

00:05:32,565 --> 00:05:34,334

You've agreed to let people use some of

it.

 

118

00:05:34,434 --> 00:05:35,835

You've agreed to let people do whatever

they

 

119

00:05:35,969 --> 00:05:37,470

want to with it right across the board.

 

120

00:05:37,937 --> 00:05:42,175

However, when we're talking about AI, AI

thrives on data.

 

121

00:05:42,175 --> 00:05:43,143

It needs that

 

122

00:05:43,309 --> 00:05:44,844

data in order to make decisions.

 

123

00:05:45,345 --> 00:05:47,347

Your other types of data, they don't

make decisions.

 

124

00:05:47,547 --> 00:05:48,448

They go out to people that

 

125

00:05:48,515 --> 00:05:49,49

make decisions.

 

126

00:05:49,849 --> 00:05:52,85

But that data belongs to real people.

 

127

00:05:52,318 --> 00:05:53,53

That's valuable.

 

128

00:05:53,253 --> 00:05:54,387

That's a valuable resource.

 

129

00:05:54,587 --> 00:05:55,622

So are you as a

 

130

00:05:55,622 --> 00:05:59,292

business collecting and storing and

using data responsibly?

 

131

00:05:59,826 --> 00:06:00,827

And if so, how are

 

132

00:06:00,894 --> 00:06:04,564

you communicating that forward to the

public, to your audience, to your

 

133

00:06:04,664 --> 00:06:04,998

consumers?

 

134

00:06:05,932 --> 00:06:09,736

Two is transparency and explainability.

 

135

00:06:10,437 --> 00:06:10,637

AI

 

136

00:06:10,637 --> 00:06:13,440

shouldn't be, you know, a magic box.

 

137

00:06:13,606 --> 00:06:16,209

Can you actually explain how your AI

systems

 

138

00:06:16,843 --> 00:06:19,646

work and why they make the decisions

that they make?

 

139

00:06:19,746 --> 00:06:20,714

You will see some of this

 

140

00:06:20,714 --> 00:06:27,620

in new AI platforms such as DeepSeek and

Perplexity and I think Claude now, where

 

141

00:06:27,620 --> 00:06:28,588

when you put

 

142

00:06:28,655 --> 00:06:32,292

the query in or the prompt in, you can

see it actually going through its decision-making

 

143

00:06:32,292 --> 00:06:36,229

process and asking the questions and you

can see the steps it's taking.

 

144

00:06:36,563 --> 00:06:37,430

So you can go back

 

145

00:06:37,464 --> 00:06:40,834

and look at when did it make the

decision it made and why did it make

 

146

00:06:40,834 --> 00:06:41,334

that decision

 

147

00:06:41,634 --> 00:06:46,506

based on the previous data

sets that it may have had to make that

 

148

00:06:46,573 --> 00:06:46,973

choice or

 

149

00:06:46,973 --> 00:06:47,774

make that decision.

 

150

00:06:47,907 --> 00:06:53,246

I think it's important to know what and

how your AI is making those choices.

 

151

00:06:54,314 --> 00:06:54,514

Right?

 

152

00:06:55,181 --> 00:06:56,216

Three is fairness

 

153

00:06:56,483 --> 00:06:57,350

and bias mitigation.

 

154

00:06:57,917 --> 00:06:59,853

It is your AI system.

 

155

00:07:00,453 --> 00:07:04,90

Is it treating everyone fairly

regardless of background?

 

156

00:07:04,824 --> 00:07:05,425

Or what are the

 

157

00:07:05,525 --> 00:07:05,825

standards?

 

158

00:07:06,59 --> 00:07:09,429

You know, how do you want it to treat

people based on their what?

 

159

00:07:10,30 --> 00:07:11,965

Color, race, creed, all those things you

 

160

00:07:11,965 --> 00:07:13,333

put into your employment contracts?

 

161

00:07:13,633 --> 00:07:14,567

Can you ensure that

 

162

00:07:14,834 --> 00:07:19,939

your AI system is ingesting that

information and acting in

 

163

00:07:20,373 --> 00:07:23,76

alignment with those standards and

policies and procedures?

 

164

00:07:23,209 --> 00:07:23,410

Are

 

165

00:07:23,410 --> 00:07:27,180

you actively working, by the way, to

identify and mitigate

 

166

00:07:27,313 --> 00:07:31,151

bias in your organization or in your

copy or your content or

 

167

00:07:31,518 --> 00:07:32,385

your data sets?

 

168

00:07:32,652 --> 00:07:32,852

Right?

 

169

00:07:33,86 --> 00:07:35,255

And the last one is accountability and

 

170

00:07:35,355 --> 00:07:36,56

responsibility.

 

171

00:07:36,690 --> 00:07:41,528

If your AI system makes a mistake, who

is

 

172

00:07:41,528 --> 00:07:41,895

accountable?

 

173

00:07:42,629 --> 00:07:45,231

Have you even established clear lines of

 

174

00:07:45,365 --> 00:07:48,668

responsibility when something like that

might occur, if

 

175

00:07:48,735 --> 00:07:49,269

something like that?

 

176

00:07:49,436 --> 00:07:51,237

And it will, by the way, it's going to

 

177

00:07:51,271 --> 00:07:51,538

occur.

 

178

00:07:52,105 --> 00:07:53,206

So where does the buck stop?

 

179

00:07:53,973 --> 00:07:55,942

Have you decided how you

 

180

00:07:55,975 --> 00:07:57,43

want that to be managed?

 

181

00:07:57,143 --> 00:07:59,112

And who's the first to step up and

 

182

00:07:59,179 --> 00:08:03,616

own that situation and then fix it

accordingly, right?

 

183

00:08:03,650 --> 00:08:04,84

Repair it

 

184

00:08:04,150 --> 00:08:04,517

accordingly.

 

185

00:08:04,851 --> 00:08:07,287

So these are ethical dilemmas, right?

 

186

00:08:07,487 --> 00:08:07,687

They're

 

187

00:08:07,687 --> 00:08:09,856

not just hypotheticals anymore.

 

188

00:08:10,123 --> 00:08:11,191

They're actually playing out in

 

189

00:08:11,191 --> 00:08:12,492

real world today.

 

190

00:08:13,393 --> 00:08:21,134

So if you think about Facebook's facial

recognition software challenge that they

 

191

00:08:21,267 --> 00:08:26,6

had, the technology was being used in

surveillance and it's also being used in

 

192

00:08:26,39 --> 00:08:27,374

AI-powered loan

 

193

00:08:27,507 --> 00:08:30,910

applications where they were

discriminating against certain demographics.

 

194

00:08:31,845 --> 00:08:32,78

Facebook's

 

195

00:08:36,249 --> 00:08:41,21

face recognition was discriminating

against people of color.

 

196

00:08:42,88 --> 00:08:43,656

These are big problems, right?

 

197

00:08:43,823 --> 00:08:47,560

And while you might be able to do it

one-to-one individually, where you might

 

198

00:08:47,560 --> 00:08:48,128

have a manager

 

199

00:08:48,294 --> 00:08:52,832

that's biased or bigoted or prejudiced,

it's not going to work in the bigger,

 

200

00:08:52,966 --> 00:08:54,567

larger scheme of

 

201

00:08:54,834 --> 00:08:55,35

things.

 

202

00:08:55,68 --> 00:08:58,805

You're going to have systems in place

where people are going to know when they walk

 

203

00:08:58,972 --> 00:09:02,342

through that door, interact with your

AI, whether or not they're accepted.

 

204

00:09:02,676 --> 00:09:03,343

And then that's

 

205

00:09:03,343 --> 00:09:05,712

going to tell people where to spend

their money.

 

206

00:09:06,12 --> 00:09:06,980

Maybe you want that.

 

207

00:09:07,47 --> 00:09:07,914

Maybe you do want that as a

 

208

00:09:08,81 --> 00:09:08,948

filter for your business.

 

209

00:09:09,716 --> 00:09:12,719

But if you don't and you really want to

be transparent, I think it's

 

210

00:09:12,886 --> 00:09:14,387

important to think about these things.

 

211

00:09:14,554 --> 00:09:17,357

As entrepreneurs, we have a

responsibility to be

 

212

00:09:17,524 --> 00:09:22,28

aware of these issues and then address

them head on in our development of not

 

213

00:09:22,62 --> 00:09:23,196

only our policies and

 

214

00:09:23,363 --> 00:09:25,398

procedures but our AI processes.

 

215

00:09:26,566 --> 00:09:30,603

So how do we mitigate bias with AI?

 

216

00:09:30,970 --> 00:09:32,5

What kind of strategies

 

217

00:09:32,5 --> 00:09:36,910

can we put together and can we use to

mitigate these things, right?

 

218

00:09:37,344 --> 00:09:39,979

So biases can sneak up

 

219

00:09:40,380 --> 00:09:43,550

into the systems, into AI systems,

right, in many different ways, from the

 

220

00:09:43,550 --> 00:09:44,484

data sets that we use to

 

221

00:09:44,584 --> 00:09:48,21

train on to the very algorithms that

they're built on.

 

222

00:09:48,688 --> 00:09:50,790

For instance, if you have data sets that

you

 

223

00:09:50,990 --> 00:09:55,595

want to use from the past 10 years, how

do you know that your employees, your

 

224

00:09:55,595 --> 00:09:56,329

staff creating those

 

225

00:09:56,329 --> 00:09:59,933

data sets and the information that was

going into them didn't place unfair

 

226

00:10:00,633 --> 00:10:02,102

biases, prejudices,

 

227

00:10:02,235 --> 00:10:06,172

or opinions in the data set, how would

you know?

 

228

00:10:06,406 --> 00:10:08,475

We've never had the ability to really go

back and

 

229

00:10:08,708 --> 00:10:10,777

scrape and scrub these things to find

these things out, right?

 

230

00:10:11,44 --> 00:10:13,113

So one is you've got to know the types

 

231

00:10:13,179 --> 00:10:14,14

of biases, right?

 

232

00:10:14,14 --> 00:10:20,387

There's algorithmic bias, right, where

the AI system itself is biased, and the data

 

233

00:10:20,387 --> 00:10:20,854

is biased.

 

234

00:10:21,54 --> 00:10:22,889

Where did the data come from?

 

235

00:10:23,556 --> 00:10:24,724

Where the data that

 

236

00:10:24,791 --> 00:10:28,995

was used to train the AI system is

actually biased data, right?

 

237

00:10:29,229 --> 00:10:31,631

And if you don't have data, and you're

starting out, you have a

 

238

00:10:31,698 --> 00:10:34,367

new startup, and you want to create an

AI system, and you

 

239

00:10:34,401 --> 00:10:38,271

want to check those things, but you need

datasets, do you go and

 

240

00:10:38,304 --> 00:10:38,938

lease datasets?

 

241

00:10:39,139 --> 00:10:42,475

Do you purchase datasets from McKinsey

and other

 

242

00:10:42,509 --> 00:10:46,46

big institutions, with focus groups and

all of that, that

 

243

00:10:46,146 --> 00:10:48,548

data, do you purchase it from them and

ingest it into the

 

244

00:10:48,548 --> 00:10:48,848

system?

 

245

00:10:48,915 --> 00:10:52,185

And how do you know, when you do so,

that that data

 

246

00:10:52,352 --> 00:10:54,287

set doesn't include biases?

 

247

00:10:55,755 --> 00:10:57,290

Mitigation strategies, if you're

 

248

00:10:57,357 --> 00:11:05,198

going to tackle this, we need to ensure

that our training data is

 

249

00:11:05,231 --> 00:11:08,335

as diverse and representative of all the

people that are in

 

250

00:11:08,368 --> 00:11:10,737

our company, and of the consumers that

are our target

 

251

00:11:10,804 --> 00:11:11,604

market, right?

 

252

00:11:11,971 --> 00:11:15,75

We also need to regularly audit our

systems, our

 

253

00:11:15,141 --> 00:11:18,311

both AI systems, as well as our policies

and procedures, for

 

254

00:11:18,311 --> 00:11:24,17

bias as it might occur, and build

inclusive development teams that can

 

255

00:11:24,17 --> 00:11:26,453

spot this and address these potential

biases

 

256

00:11:27,554 --> 00:11:27,821

quickly.

 

257

00:11:28,655 --> 00:11:32,759

So there's a company recently that

developed an AI-powered medical

 

258

00:11:33,326 --> 00:11:34,527

diagnosis tool.

 

259

00:11:36,396 --> 00:11:36,596

I don't know if you've

 

260

00:11:36,596 --> 00:11:44,838

AI is helping a lot of medical

facilities diagnose patients, provide

 

261

00:11:45,472 --> 00:11:47,907

expert advice and

 

262

00:11:48,8 --> 00:11:53,913

additional deep learning advice on

potential cases or situations for patients.

 

263

00:11:54,781 --> 00:12:03,23

They discovered that in their initial

training data, this data set was

 

264

00:12:03,56 --> 00:12:05,91

mostly consisting

 

265

00:12:05,191 --> 00:12:06,793

of only white patients.

 

266

00:12:07,394 --> 00:12:08,361

And this led to

 

267

00:12:08,628 --> 00:12:10,497

inaccurate diagnoses for people of

color.

 

268

00:12:10,964 --> 00:12:13,133

So you can imagine if your data set,

like I

 

269

00:12:13,133 --> 00:12:15,869

was saying, if it only includes one

section of

 

270

00:12:15,935 --> 00:12:19,239

humanity, then you're clearly not going

to get a

 

271

00:12:19,305 --> 00:12:21,875

diverse set of responses and answers

from AI.

 

272

00:12:21,975 --> 00:12:22,175

It

 

273

00:12:22,175 --> 00:12:24,444

has no ability to do so because the only

data

 

274

00:12:24,544 --> 00:12:28,214

set it has to use to base those

decisions on is a

 

275

00:12:28,214 --> 00:12:30,83

real limited data set.

 

276

00:12:30,583 --> 00:12:32,652

Now, if I told you I was

 

277

00:12:32,652 --> 00:12:34,788

going to give you a limited data set to

operate

 

278

00:12:34,788 --> 00:12:37,991

and function for your business and your life,

and good luck to you.

 

279

00:12:38,191 --> 00:12:39,125

Would you accept that?

 

280

00:12:39,325 --> 00:12:40,260

Would you want that?

 

281

00:12:40,994 --> 00:12:41,928

No, you wouldn't.

 

282

00:12:42,62 --> 00:12:43,730

You'd be like, I don't want a limited

data set.

 

283

00:12:43,830 --> 00:12:48,468

I want data sets that are specific to my

business, specific to my target market

 

284

00:12:48,735 --> 00:12:52,439

as wide and varied as possible so that I

can include as many individuals as I can.

 

285

00:12:52,605 --> 00:12:54,741

Because it's about revenue at the end of

the day.

 

286

00:12:55,275 --> 00:12:56,843

Honestly, you know, we do want to serve.

 

287

00:12:56,943 --> 00:12:59,746

We want to create impact, but guys, if

you're not making money and you're

 

288

00:12:59,946 --> 00:13:02,716

impacting your ability to make money

because you're making bad choices about

 

289

00:13:02,749 --> 00:13:05,485

data sets going into your AI systems,

that's on you.

 

290

00:13:05,585 --> 00:13:05,885

And

 

291

00:13:05,952 --> 00:13:07,387

that's going to crush your business.

 

292

00:13:07,520 --> 00:13:08,455

And I don't see why I

 

293

00:13:08,488 --> 00:13:10,256

don't understand why anybody would do

that.

 

294

00:13:11,57 --> 00:13:12,258

But I can

 

295

00:13:12,292 --> 00:13:14,627

understand there are circumstances under

which you

 

296

00:13:14,627 --> 00:13:18,98

would want very specific types of data

sets being put in,

 

297

00:13:18,198 --> 00:13:19,566

especially when you're doing case

studies.

 

298

00:13:20,533 --> 00:13:20,767

Right?

 

299

00:13:21,134 --> 00:13:21,768

So by

 

300

00:13:21,868 --> 00:13:25,772

diversifying our training data, we are

able to significantly

 

301

00:13:25,872 --> 00:13:29,476

improve accuracy and fairness within the

AI system, right?

 

302

00:13:29,809 --> 00:13:30,76

So

 

303

00:13:30,443 --> 00:13:33,313

Transparency and explainability in AI

systems.

 

304

00:13:33,580 --> 00:13:34,247

This is another one.

 

305

00:13:34,347 --> 00:13:38,218

We all know people want us to be

transparent in our business model, our

 

306

00:13:38,218 --> 00:13:40,687

policies and procedures, but also with

our data sets, right?

 

307

00:13:41,388 --> 00:13:43,823

How do we provide transparency?

 

308

00:13:44,324 --> 00:13:48,528

And by the way, explainability, meaning

explaining why and how our systems are

 

309

00:13:48,528 --> 00:13:49,429

doing what they're doing.

 

310

00:13:49,729 --> 00:13:54,34

When customers and clients and people

want to know, why is your data doing

 

311

00:13:54,167 --> 00:13:54,734

what it's doing?

 

312

00:13:55,168 --> 00:13:56,903

How did you train it?

 

313

00:13:57,237 --> 00:14:00,6

And they're going to begin to ask these

questions, because they are going to

 

314

00:14:00,6 --> 00:14:00,440

want to know,

 

315

00:14:00,640 --> 00:14:01,775

who they're working with.

 

316

00:14:01,875 --> 00:14:04,177

They're going to want to know who

they're buying from,

 

317

00:14:04,444 --> 00:14:04,711

right?

 

318

00:14:04,844 --> 00:14:08,615

And it's very telling when they

understand that these systems, you can't

 

319

00:14:08,648 --> 00:14:11,951

answer those questions clear and

transparently, and you can't provide

 

320

00:14:12,85 --> 00:14:12,786

explainability.

 

321

00:14:13,486 --> 00:14:14,688

That's going to be tough.

 

322

00:14:16,22 --> 00:14:18,892

Transparency, obviously, you know, in

this

 

323

00:14:19,25 --> 00:14:22,896

world, it's tough because you're going

to be transparent with the data, but you

 

324

00:14:22,929 --> 00:14:25,98

also want to be transparent with how you're

using the data.

 

325

00:14:25,432 --> 00:14:26,766

Transparent AI systems build trust

 

326

00:14:26,866 --> 00:14:31,338

with the users who use them and they

empower us to make informed decisions so

 

327

00:14:31,338 --> 00:14:35,342

we want to make sure that we are also

being transparent with that to our

 

328

00:14:35,475 --> 00:14:37,344

customers, our clients, and our

community.

 

329

00:14:38,244 --> 00:14:40,747

Techniques for explainability.

 

330

00:14:40,914 --> 00:14:41,114

This

 

331

00:14:41,214 --> 00:14:46,86

is when we use interpretable machine

 

332

00:14:46,152 --> 00:14:46,853

learning models.

 

333

00:14:47,153 --> 00:14:51,524

We can use them in this fashion to

implement AI

 

334

00:14:52,92 --> 00:14:53,193

explanation tools.

 

335

00:14:53,693 --> 00:14:56,830

This way they are clearly communicating

not only their

 

336

00:14:57,30 --> 00:15:00,934

capabilities and their limitations, but

also how they came to their

 

337

00:15:01,67 --> 00:15:01,634

decisions.

 

338

00:15:02,535 --> 00:15:06,873

So if some AI made a choice at a certain

juncture during a

 

339

00:15:06,906 --> 00:15:12,712

customer's journey and they were looking

for help and the result was not what the

 

340

00:15:12,779 --> 00:15:17,684

customer wanted, we need to be able to

go back and figure out in that system the

 

341

00:15:17,684 --> 00:15:22,589

learning model, what tools do we have to

explain why, and how we can clearly

 

342

00:15:22,756 --> 00:15:27,127

communicate why this choice was made at

this juncture and then adjust the

 

343

00:15:27,293 --> 00:15:32,132

algorithm or apply new data sets in

order for the algorithm to function

 

344

00:15:32,265 --> 00:15:33,366

better, right?

 

345

00:15:34,34 --> 00:15:37,103

So a prime example, and we're getting

very close to these, is

 

346

00:15:37,103 --> 00:15:38,672

these self-driving car situations.

 

347

00:15:38,872 --> 00:15:41,374

You know, a self-driving car suddenly

makes

 

348

00:15:41,374 --> 00:15:43,243

a stop, you're the passenger.

 

349

00:15:44,878 --> 00:15:50,83

AI transparency would be useful here,

because without it,

 

350

00:15:50,850 --> 00:15:54,187

the passenger could be confused and

scared, not understanding why the car

 

351

00:15:54,220 --> 00:15:55,155

just suddenly stopped.

 

352

00:15:55,288 --> 00:15:58,24

No, you're not in the front seat, you're

not driving, you're not paying attention.

 

353

00:15:58,491 --> 00:16:02,629

But if the car were able to explain to

the passenger that it stopped, perhaps because

 

354

00:16:02,962 --> 00:16:07,500

it detected a pedestrian on the

sidewalk, or stepping off the curb, or

 

355

00:16:07,600 --> 00:16:09,235

any number of different

 

356

00:16:09,869 --> 00:16:13,206

obstacles that might have been or

appeared in the road, then the passenger

 

357

00:16:13,306 --> 00:16:14,307

would feel safer

 

358

00:16:14,774 --> 00:16:17,410

and more confident in the AI's

abilities.

 

359

00:16:18,44 --> 00:16:20,714

Having an automation system in your

cars,

 

360

00:16:20,947 --> 00:16:24,951

these cars, I think the ones that

explain the most and share with you the

 

361

00:16:24,951 --> 00:16:27,53

most about what

 

362

00:16:27,287 --> 00:16:30,123

they're doing to make you feel

comfortable are going to be the first

 

363

00:16:30,256 --> 00:16:31,491

and early adopters,

 

364

00:16:31,825 --> 00:16:33,993

and then eventually we'll get more

comfortable that we won't need them.

 

365

00:16:34,27 --> 00:16:34,894

We'll be able to change

 

366

00:16:34,894 --> 00:16:39,132

the sharing parameters, I would say,

where you're like, I don't need you to

 

367

00:16:39,132 --> 00:16:40,333

share so much, I trust

 

368

00:16:40,467 --> 00:16:41,735

you kind of a situation.

 

369

00:16:42,969 --> 00:16:47,40

Next is building a culture of

responsible innovation.

 

370

00:16:47,307 --> 00:16:48,375

This whole process,

 

371

00:16:48,708 --> 00:16:54,14

right, of creating these workplaces and

building AI ethics into things and these

 

372

00:16:54,14 --> 00:16:54,647

systems that we're

 

373

00:16:54,848 --> 00:16:57,450

creating is going to ultimately create a

culture.

 

374

00:16:58,284 --> 00:17:00,553

And how do we want to build a culture of

responsible

 

375

00:17:00,587 --> 00:17:03,223

innovation within our companies and our

communities, right?

 

376

00:17:03,256 --> 00:17:07,527

It starts with AI ethics committees,

perhaps. You

 

377

00:17:08,795 --> 00:17:14,300

can consider establishing an AI ethics

committee for your company to guide and

 

378

00:17:14,634 --> 00:17:16,670

oversee your AI development efforts.

 

379

00:17:17,170 --> 00:17:17,370

That

 

380

00:17:17,404 --> 00:17:22,676

it could include a psychologist, a

therapist, a philosopher, it could

 

381

00:17:22,676 --> 00:17:27,647

include HR, but also a very good sect,

subsection of

 

382

00:17:27,647 --> 00:17:31,351

your employees across multiple

departments, right?

 

383

00:17:31,384 --> 00:17:31,584

So you're

 

384

00:17:31,618 --> 00:17:34,454

getting diverse perspectives and

experiences and feedback and

 

385

00:17:34,521 --> 00:17:34,754

input.

 

386

00:17:35,288 --> 00:17:38,591

Next is an ethical set of guidelines and

frameworks.

 

387

00:17:39,292 --> 00:17:41,628

Implementing clear ethical guidelines

and frameworks to

 

388

00:17:41,661 --> 00:17:44,431

ensure that your AI development aligns

with your company values

 

389

00:17:44,431 --> 00:17:45,98

is crucial.

 

390

00:17:45,498 --> 00:17:47,967

So check the purpose statement, mission

statement,

 

391

00:17:48,68 --> 00:17:48,702

value statement.

 

392

00:17:49,69 --> 00:17:50,770

Do these actually align not only with

 

393

00:17:50,837 --> 00:17:51,838

your messaging and marketing?

 

394

00:17:52,305 --> 00:17:53,807

Do these align with what's going

 

395

00:17:53,840 --> 00:17:54,174

out there?

 

396

00:17:54,441 --> 00:17:57,444

Do your customers see this as aligning

with who you

 

397

00:17:57,444 --> 00:17:57,711

are?

 

398

00:17:58,11 --> 00:18:02,615

Do your, does your staff and your

employees think that these align?

 

399

00:18:02,716 --> 00:18:03,983

And then finally, when you're putting it

 

400

00:18:04,117 --> 00:18:10,523

into your AI systems, do the algorithms,

the measurements, the criteria, do they

 

401

00:18:10,557 --> 00:18:11,725

align with your values and your

 

402

00:18:11,825 --> 00:18:13,660

mission as an organization and a

company?

 

403

00:18:14,294 --> 00:18:15,729

Next is having open discussions.

 

404

00:18:16,29 --> 00:18:18,365

In order to get feedback, in order to

 

405

00:18:18,565 --> 00:18:21,935

know if it's working or not, you need to

encourage open and honest communication

 

406

00:18:22,535 --> 00:18:25,238

and conversations about AI ethics

 

407

00:18:25,305 --> 00:18:26,706

within your company, right?

 

408

00:18:26,773 --> 00:18:28,108

You got to create that space that

 

409

00:18:28,274 --> 00:18:30,710

feels safe for employees to feel

comfortable raising their

 

410

00:18:30,844 --> 00:18:32,345

concerns and sharing their ideas.

 

411

00:18:32,512 --> 00:18:33,813

With that feedback, you

 

412

00:18:33,847 --> 00:18:36,149

can almost assuredly make sure that

whatever you're

 

413

00:18:36,249 --> 00:18:38,518

implementing, you're doing it not only

with their approval,

 

414

00:18:38,818 --> 00:18:41,221

and their understanding, but you're also

creating an

 

415

00:18:41,287 --> 00:18:44,624

environment and a culture where they

feel heard, and it's

 

416

00:18:44,657 --> 00:18:47,327

important and valued, they feel

important and valued, right?

 

417

00:18:47,861 --> 00:18:48,161

So

 

418

00:18:48,328 --> 00:18:51,564

by being an ethical AI leader, you're

not just doing what's

 

419

00:18:51,631 --> 00:18:54,67

right, you're doing what's better and

stronger for your

 

420

00:18:54,67 --> 00:18:58,304

company, building more sustainable

business practices, consumers are

 

421

00:18:59,5 --> 00:19:03,543

increasingly demanding more ethical AI,

more transparency.

 

422

00:19:04,244 --> 00:19:05,245

And the companies

 

423

00:19:05,311 --> 00:19:08,815

that prioritize this will be rewarded,

by the way, with the trust and loyalty

 

424

00:19:08,882 --> 00:19:12,852

and ultimately, the purchasing power of

those customers.

 

425

00:19:13,953 --> 00:19:14,154

Right.

 

426

00:19:14,521 --> 00:19:17,424

So if we have

 

427

00:19:17,624 --> 00:19:23,430

fearless entrepreneurs, and we're

stepping out, by the way, because we're

 

428

00:19:23,430 --> 00:19:26,900

the early adopters onto this path of

innovation and

 

429

00:19:27,33 --> 00:19:27,233

learning.

 

430

00:19:27,901 --> 00:19:29,469

And while, by the way, you know, this

was

 

431

00:19:29,569 --> 00:19:32,706

tech adoption, this was model adoption,

this was a

 

432

00:19:32,772 --> 00:19:33,406

lot of different things.

 

433

00:19:33,473 --> 00:19:34,941

But up until now, when we're

 

434

00:19:35,8 --> 00:19:39,579

looking at AI being part of our

development process

 

435

00:19:39,579 --> 00:19:41,214

and part of our systems and part of our

business

 

436

00:19:41,314 --> 00:19:42,749

models, right?

 

437

00:19:43,316 --> 00:19:44,851

We are the ones out at the front,

 

438

00:19:45,18 --> 00:19:45,218

right?

 

439

00:19:45,218 --> 00:19:47,454

So here's our challenge this week.

 

440

00:19:48,521 --> 00:19:48,822

Create

 

441

00:19:48,822 --> 00:19:51,991

an ethical audit of your business and

your systems.

 

442

00:19:52,58 --> 00:19:55,829

Take a close look at your current

systems or your planned AI

 

443

00:19:55,995 --> 00:19:57,731

implementations if you have them.

 

444

00:19:58,231 --> 00:20:02,936

And have you considered, by the way, are

there any ethical concerns that you need

 

445

00:20:03,103 --> 00:20:05,71

to address? Two, an AI ethics policy.

 

446

00:20:05,238 --> 00:20:09,609

Create a policy that matches, like I

said, that's in alignment with mission,

 

447

00:20:09,809 --> 00:20:14,14

purpose, value, messaging, marketing,

audience, employees.

 

448

00:20:14,814 --> 00:20:15,749

All of them have to

 

449

00:20:15,749 --> 00:20:16,282

be in alignment.

 

450

00:20:16,516 --> 00:20:18,685

If they're not and they're incongruent,

if they're out of

 

451

00:20:18,685 --> 00:20:22,255

alignment at any point during the way,

you are going to run into problems and

 

452

00:20:22,355 --> 00:20:26,626

your AI is most likely going to put a

highlight on it because it will make the

 

453

00:20:26,626 --> 00:20:31,64

mistake very quickly and very easily by

making assumptions based on your lack of

 

454

00:20:31,197 --> 00:20:35,1

judgment or on inefficient AI ethics

policies.

 

455

00:20:35,702 --> 00:20:37,203

And then finally diverse and

 

456

00:20:37,203 --> 00:20:38,71

inclusive approach.

 

457

00:20:38,171 --> 00:20:40,106

We're never going to get rid of DEI.

 

458

00:20:40,306 --> 00:20:40,674

I'm sorry.

 

459

00:20:41,374 --> 00:20:44,511

Diversity, equity, and inclusivity is

not going away,

 

460

00:20:44,711 --> 00:20:44,911

people.

 

461

00:20:45,11 --> 00:20:48,48

And I don't care how much you want to

jettison it, walk away from it, forget

 

462

00:20:48,81 --> 00:20:49,416

about it, whatever it is you've got

 

463

00:20:49,416 --> 00:20:49,749

to do.

 

464

00:20:50,483 --> 00:20:54,854

If you want to be successful in today's

market, and you want to start using AI

 

465

00:20:55,155 --> 00:20:56,790

to get ahead, to have that

 

466

00:20:56,923 --> 00:20:58,224

advantage, guess what?

 

467

00:20:58,558 --> 00:21:05,98

You're going to need to teach and train

and provide processes and data

 

468

00:21:05,265 --> 00:21:05,865

sets, by the

 

469

00:21:05,899 --> 00:21:07,667

way, that are both diverse and inclusive.

 

470

00:21:08,501 --> 00:21:09,769

That's tough to do.

 

471

00:21:10,203 --> 00:21:14,541

How do you balance those two when

they're in conflict with

 

472

00:21:14,708 --> 00:21:15,8

each other?

 

473

00:21:15,575 --> 00:21:18,445

I'd be curious to see what people have to

say about that.

 

474

00:21:18,812 --> 00:21:21,448

I think we're going to see more

opportunities for

 

475

00:21:21,614 --> 00:21:25,285

dialogues around this particular thing

coming up for sure.

 

476

00:21:25,919 --> 00:21:28,421

So make sure that you're putting in when

you're developing

 

477

00:21:28,455 --> 00:21:32,525

your AI that you have testing processes

to ensure diverse and inclusive data

 

478

00:21:32,792 --> 00:21:36,96

and reactions and responses, right?

 

479

00:21:36,329 --> 00:21:36,529

Not

 

480

00:21:36,629 --> 00:21:38,898

only for your AI, but also for your

employees and your staff.

 

481

00:21:39,966 --> 00:21:44,471

Now, remember that focusing on the

positive impact for

 

482

00:21:44,571 --> 00:21:50,76

your organization and your company using

AI ethical practices, this can help you

 

483

00:21:50,143 --> 00:21:51,845

navigate complex issues with

 

484

00:21:52,345 --> 00:21:53,446

confidence, right?

 

485

00:21:53,646 --> 00:21:55,815

Stay true to your values, obviously, and

 

486

00:21:55,982 --> 00:21:59,386

engage in an open dialogue consistently

with your, with

 

487

00:21:59,452 --> 00:22:01,54

your people, with your teams, right?

 

488

00:22:01,187 --> 00:22:02,188

And then build that AI

 

489

00:22:02,288 --> 00:22:05,558

powered future that benefits everyone,

everyone in your

 

490

00:22:05,692 --> 00:22:08,461

company, and even your customers, right?

 

491

00:22:09,229 --> 00:22:09,629

So that's a

 

492

00:22:09,729 --> 00:22:12,766

wrap this week on Fearless Thinking: AI

ethics.

 

493

00:22:13,133 --> 00:22:14,134

Ooh, who

 

494

00:22:14,200 --> 00:22:15,135

knew we'd get this far?

 

495

00:22:15,168 --> 00:22:17,904

Who knew that AI would be, you know,

that

 

496

00:22:17,904 --> 00:22:22,976

we would be talking about it in so many

different facets and so many different areas.

 

497

00:22:23,309 --> 00:22:24,10

I think it's

 

498

00:22:24,210 --> 00:22:26,946

fascinating and I still think there's so

much more to come.

 

499

00:22:27,80 --> 00:22:28,14

There's so much more that's going

 

500

00:22:28,148 --> 00:22:32,218

to happen, and we just need to be on top

of it and ahead of it and be opening a

 

501

00:22:32,252 --> 00:22:33,53

dialogue about it.

 

502

00:22:33,353 --> 00:22:33,553

So

 

503

00:22:33,687 --> 00:22:35,889

yeah, have a fantastic week everybody.

 

504

00:22:36,289 --> 00:22:38,625

Enjoy yourselves and do a little AI

ethical thinking

 

505

00:22:39,592 --> 00:22:42,28

this week and get back to me on, you

know, do you have a strategy?

 

506

00:22:42,128 --> 00:22:42,662

Do you have a plan?

 

507

00:22:43,96 --> 00:22:46,66

And oh, if you don't have data sets,

where are you going to get those?

 

508

00:22:46,332 --> 00:22:47,200

I'd be curious to find out.

 

509

00:22:47,267 --> 00:22:49,169

That would be very interesting for us to

know.

 

510

00:22:49,336 --> 00:22:49,602

All right.

 

511

00:22:49,869 --> 00:22:50,970

Well, have a wonderful week.

 

512

00:22:51,4 --> 00:22:51,571

I'm Michael Devous.

 

513

00:22:51,671 --> 00:22:53,807

This is Fearless, Fearless Thinking

 

514

00:22:55,108 --> 00:22:56,576

Okay, have a wonderful day.

 

515

00:22:56,609 --> 00:22:57,644

We'll see you next week.

 

516

00:22:57,744 --> 00:22:57,944

Bye.

 

517

00:22:57,944 --> 00:22:58,144

Bye

 

518

00:23:04,984 --> 00:23:08,922

This has been a Fearless Road Network

Productions, brought to you by DeVous

 

519

00:23:09,556 --> 00:23:11,291

Media Holdings, LLC

 
