On this episode of the AI For All Podcast, Frits Bussemaker, Chair of the Institute for Accountability in the Digital Age, joins Ryan Chacon and Neil Sahota to discuss accountability in the age of AI. They talk about accountability in the digital age, how accountability has evolved, using technology to govern technology, the challenges in establishing global governance for AI, the need for localized governance, AI ethics, and balancing innovation and governance.
About Frits Bussemaker
Frits Bussemaker is the Chair of the Institute for Accountability in the Digital Age, an association instigated by UNESCO in 2017 to help define tools for and manage accountability in the digital age. Bussemaker has been working in the international ICT industry since the 1980s. He began with a Dutch IT startup and has held various marketing and alliance positions, including at CIONET, where he represented a community of over 7,000 CIOs to international institutes such as the European Commission. He is the initiator of the Global Digital Leader Alliance, linking over 20,000 digital leaders from China, Europe, India, Japan, South America, and the US.
Interested in connecting with Frits? Reach out on LinkedIn!
About I4ADA
The Institute for Accountability in the Digital Age (I4ADA) was founded in 2017 with the mission to ensure that issues and concerns do not undermine the Internet’s potential for increasing access to knowledge, spreading global tolerance and understanding, and promoting sustainable prosperity. In pursuit of its mission of helping the world derive maximum benefit from the internet, the Institute is dedicated to helping create a fair and balanced framework of best practices and, where necessary, regulation. The Institute’s main activities are to:
- Create awareness for accountability in the digital age
- Reach out to and connect with a global multi-stakeholder community
- Provide a platform for knowledge sharing on accountability in the digital age
- Promote the various legal instruments, frameworks, and technical solutions being developed for accountability
Key Questions and Topics from This Episode:
(00:30) Introduction to Frits Bussemaker and I4ADA
(00:54) Understanding accountability in the digital age
(03:23) Evolution of accountability over time
(04:09) The impact of digital technology on accountability
(07:21) Using technology to govern technology
(08:56) The importance of accountability in AI
(10:28) Challenges in establishing global governance for AI
(13:37) The role of ethics in AI governance
(18:34) The need for localized governance
(23:17) The urgency of accountability in the digital age
(25:48) Balancing innovation and governance
(26:32) Learn more and follow up
Transcript:
- [Ryan] Welcome everybody to another episode of the AI For All Podcast. I'm Ryan Chacon and with me today is my co-host, Neil Sahota, the AI Advisor to the UN and one of the founders of AI for Good. Neil, how's it going?
- [Neil] Hey, I'm doing well. How about yourself, Ryan?
- [Ryan] Not bad. Not bad. We also have Nikolai, our producer with us on the call.
- [Nikolai] Hello everyone.
- [Ryan] All right, so today's episode, we have a very good conversation planned, with some exciting topics around accountability in the digital age and how to govern digital technology. To discuss this, we have Frits Bussemaker, the Chair of the Institute for Accountability in the Digital Age, whose mission is to ensure that issues and concerns do not undermine the internet's potential for increasing access to knowledge, spreading global tolerance and understanding, and promoting sustainable prosperity.
Frits, thanks for being on the podcast.
- [Frits] Hey Ryan, thank you. Great to be here. And hopefully I can share a couple of our insights.
- [Ryan] So I wanted to kick this off, Frits, and just ask you when it comes to accountability in the digital age that we're in now, what does that mean to you? Can you high level that for our audience?
- [Frits] Yeah, sure. And I'll put it in the context of how I got introduced to the topic. Back in 2016, 2017, I was one of the people managing a very large CIO community, CIONET, and I got a call from somebody at UNESCO I happened to know, Indrajit Banerjee, who was the Director of the Knowledge Societies Division. He said, hey, we have a growing concern among our member states about the fact that digital technologies are growing so fast that the legal and regulatory framework cannot keep up. We need to have a discussion on how we're going to resolve that. So back in 2016, 17, we started that conversation, and the European GDPR actually kick started the discussion, because you saw that technology in the social media space had an issue with the legislation at the time.
Soon we added cyber security to the discussion. You won't believe it today, but AI was on the back bench. We knew it was coming, but we didn't see the significance. It's really only the last two or three years that AI has been at the forefront of how we're going to govern technology. So for me, accountability applies to anybody involved in supplying technology, passing through technology, or using technology. My default is AI for good. Just like Neil, I am involved in the AI for Good initiative. So my default is, let's see how we can put it to good use. But you will have situations where it's going to be used for something bad, where people are going to abuse and misuse it. In any of those situations where, I would say, the current legislation cannot cope with the governance, that is where we come in, to see how we can provide tools and technologies to resolve the issues we come across.
- [Ryan] There's a lot I want to expand on there, but before we get too deep into this, the first thing I wanted to ask is: how has the concept of accountability evolved over time, in your experience? I think it's a really interesting thing to elaborate on a little bit.
- [Frits] It started with a lawsuit, I believe in Spain, which triggered this whole discussion on GDPR: a man was falsely accused of something, and he wanted Google to take the information down. The lawyer at the time said, hey, you cannot just take it off google.es, the Spanish website, you have to take it down from everything. That in itself is, I would say, a big nuisance in an individual's life. We wanted to hold the big corporates accountable. But over the years, once you see how you can use digital technology, how you can use AI to nudge people into making decisions, Brexit is a good example, the American elections are a good example, then all of a sudden it becomes something that affects people's lives. I would also like to link this to COVID. Pre-COVID, digital was very important but not essential. During COVID, we realized digital is essential to our daily life, and therefore making certain that the governance of technology is organized also became more pressing. So I think COVID helped us realize the real impact of the technology we're using day to day.
- [Neil] I think that's actually a really good analogy, Frits, and a good reference to your earlier comment about use and misuse, right? As technologists, we're building towards a specific outcome. I think about Zoom. I've actually been using Zoom for almost a decade for video conferencing and such, but during the pandemic you saw Zoom bombing, right? People would break into things, especially Zoom classes, and everyone's like, whoa, wait a second, why? I never thought of that actually happening. Where's the bridge here when it comes to accountability?
- [Frits] Coming back, I just realized I associated accountability with my direct involvement in the accountability discussion. But I'll give you an anecdote. I joined the IT industry over 35 years ago, when I started working for a Dutch startup, and I needed to install software somewhere in Germany. I live in the Netherlands, so I was on a plane to Germany, and I was confronted with a very big VAX PDP-11 system. I had a big tape, and we had a Sun terminal, brand new, just installed. So I asked the system operator, hey, do you have a password? Because I need to install software. Nope, don't have it. Okay, so what are we going to do? Hey, let's phone the local HQ of Sun in London and say, hey, I'm Frits, I'm at this terminal at this oil company, I want to install software, can you help? And the guy just said, oh yeah, hold on, don't worry. This is pre internet, by the way, and I think that's significant. So he said, could you turn the machine around? Could you read the number on the back? Oh yeah, that's the number. Okay, yeah, I recognize that machine. Here's the password. He gave me the root password. I could do anything I wanted on that system. At the time, we were gullible. We did not think that this could be abused. We were just helping each other out. If a stranger called and needed a password, you gave it to them. From the accountability side, that's a hard no today. It's really hilarious if you think about it. We've learned to grow with the involvement of digital technology and realize that times have changed.
- [Ryan] What are the challenges with trying to maintain that accountability as things change rapidly?
- [Frits] Actually, I want to refer to a comment from Michael Rogers that we often cite when we talk about accountability in the digital age. A couple of years ago, he said we're in a position today, in the digital age, where technology has outstripped our legal and standards framework. In other words, and everybody knows this, digital technology is moving faster than the legal and regulatory framework can keep up with. So the question is, should we try to catch up the way we've always organized governance, by speeding up the process of building a legal and regulatory framework?
The answer is: in your dreams, it's not going to happen. So we need to think about other ways to organize the governance, and therefore the accountability, of technology. And that's where you see interesting discussions happening. Can we, for instance, change the law from rule based to principle based, which gives you a bit more flexibility? Can you use technology to govern technology? What I've seen over time is that we are slowly rethinking how we're going to govern technology, and not just the way we've always done regulation, through the legal framework. That, for me, is a change which is happening.
- [Ryan] Neil, let me ask you this question from your experience, since I know you and Frits have worked together in different ways. How has the importance of accountability really ramped up with the explosion of AI in particular? There have been lots of discussions over the last number of months about how governments are going to handle regulating and creating governance for AI, how countries are going to work together, and who's responsible for all this governance. I'm curious, from your perspective, and then Frits, please jump in, what's really unique and happening as it relates to AI in particular?
- [Neil] Unfortunately, alluding to what Frits is talking about, what's happening today is happening very organically. You have what we call hard and soft regulations, policies, and guidelines being put out there. Everybody is suddenly trying to work on this, which is not necessarily a bad thing, but it's very uncoordinated. There was some institute in Switzerland, and another one in the UK, that came out with their guidelines, but they have no actual authority on this.
So it's just what we call soft things. Whereas in California, the legislature is actually working on hard legislation right now, but that's one small area around healthcare. And you've got some people in Michigan working on an area around manufacturing and supply chain. So it's very piecemeal, and that's why everyone's turning around saying, who can actually take the lead on this?
Someone has to be the leader, and who's the right entity or organization for that? That's where the United Nations comes in: should they play a role? I think at this point, the consensus is yes. And should that role be leadership? That's why you now hear Secretary-General António Guterres talking about more strongly supporting the formation of a new UN agency just on the governance of science and technology.
- [Frits] On one hand, I fully support the idea that they take it seriously, and you refer to an IPCC type of initiative. But there's also a downside to having the UN do this, and here I come back to what Michael Rogers said: the time it takes an average UN agency to actually make a decision, because they need consensus of all members. I think that's one of the big issues we're facing. How do you reach that? Having a global governing body, I think that's good, but it's going to be very interesting to see what kind of mandate they're going to give themselves or be given.
- [Neil] Frits probably knows this, but for the general audience, to stand up a new UN agency, you have to call a convention and have a treaty. It'll probably take four to six years to create a new agency. In the meantime, you still have to do things. That's the question: what's that parallel track going to be? And again, who is going to have the authority to own and enforce that?
- [Frits] Earlier this year, I had a discussion with Kay Firth-Butterfield, who is actually the world's first AI Ethics Officer and was at the World Economic Forum at the time. We discussed her perspective on how we're going to organize this, and she said we cannot expect the same ethics parameters to be adopted by everybody around the world. We cannot enforce our ethics parameters on each other. And that, I think, is why a global standard is going to be an issue, because we're going to have different ethics. Her point of view is, as long as you just adopt one, I'm happy, because at least you're making a decision. I also want to point the audience to the OECD website, oecd.ai. Believe it or not, we have almost 600 frameworks and standards in the world, and counting, for how we can regulate AI. Choose whatever you like. There are too many standards. So having standards, having a regulatory system, that's not the question. We already have that. We have way too much. But what I like about accountability, and the definition of accountability, is that the best way to go about organizing it is to look in the mirror, because this is about personal responsibility. What are you going to do as an individual? What are you going to do as an organization? All the finger pointing, hey, we need a government, we need a framework, that in itself is not going to solve accountability. It may help you as an instrument, but it's not going to make you actually accountable.
- [Ryan] Speaking of the ethical side of things, aside from the challenges you mentioned, when you look globally at different countries and cultures, the ethical standards are going to change what they're looking for, right? Are there any general ethical considerations that you feel will be most important to build all the governance around, or is it really going to be a challenge because of how different areas of the world are?
- [Frits] I do believe, and this almost becomes a philosophical discussion, that there are a couple of universal norms and values across the cultures and religions you come across, where everybody agrees, yes, we should do that, something like do good. But the devil's in the detail here as well. I remember calling Gary Shapiro, the CEO of the Consumer Technology Association, which organizes CES in Las Vegas. I said, hey, we want to organize a summit in the Netherlands, do you want to come over and talk about your perspective on accountability? And he said, well, then I'm coming to Europe, I'm coming to a government led legislation mindset, because that's the culture, the ethics there: let the government take care of it. And we are much more industry led. We want self governance, self control. I think that's related to the ethical side, because how you control things also, you could say, boils down to what your ethics are. I did point out to him, you're coming to a UN city, The Hague, so your point of view should be just as relevant as the point of view of somebody from Europe. But I think that's where the issue is. The concepts we have in common around the globe, do good, that's almost an easy one. But the how and the who, that's where people are going to have lots of discussions, because that's where they start to disagree with each other.
- [Neil] These are hard conversations, right? Each one of us has our own personal moral code of conduct. How do you really reconcile that? In 2017, I was asked to give a speech at GSR, the big summit for all the regulators, and I was willing to say what nobody wanted to talk about, which is exactly what Frits was talking about: until we come up with essentially a baseline set of ethics and morals, we can't do any of these things. That's the honest truth. Who's to say what's a right versus wrong use of this technology? And we live in a digital age where there are no boundaries. I will tell you, when I was done, because I threw the hand grenade everyone was thinking about, no one clapped. Nobody clapped. People looked very angry. And I remember the DSG, the Deputy Secretary-General, ran up and said, that was a very brave thing to say, it was the right thing to say. I don't think you made any friends, though. You should probably get out of here. But I was surprised the next year they invited me back, and I heard some of the hallway conversations, and these people were talking to each other about some of these things. So at least the conversation has started.
- [Frits] And that, unfortunately, is the kind of milestone we have to be very happy with. Because when I started in this game, again, it was about the social side, about cyber security. AI was, I would say, a non-issue. It was hardly discussed at the very first AI for Good Summit we had. I just came back from the World Conference on Innovation and Technology in Kuching, Malaysia, where I was asked to speak, and I was, I'd say, pleasantly surprised to find that one of the top people in the Malaysian government said that the discussion we had, and I'm biased because it was on a panel I moderated, was the most important discussion of the whole conference: that we talk about governance of technology. For me, it's already a win that the discussion is main stage, that we're having the discussion, getting the right people together, and listening to what the concerns and issues are. That's where we start to think about possible solutions.
- [Nikolai] I think the reaction Neil got is interesting, because it's a very contemporary view that we shouldn't criticize or judge different moral systems. But technology, and particularly AI, and I'm not the first person to word it this way, is forcing everyone to do philosophy on a deadline. We have to admit that there are right and wrong answers to questions of this nature. And that's a very unpopular view, I would say, in certain circles, for reasons I won't go into. You used to be able to have your software constrained locally, so if you think of government as software, it was all constrained locally. But AI is a global technology, and software scales globally, so any governance you have over software is going to have to apply everywhere, or else it's not going to work.
- [Frits] I do believe that part of the solution is going to be in localizing this, as we've done with regular software, because for a lot of local situations, we already have existing legislation that can be used to govern the technology. It's going to be different from one country to another, but sometimes I think we're spending too much time on the exception, when it crosses borders, when we start to use it internationally. Let's first fix it on a local level, but be mindful that we also have that international and global level to deal with. A one size fits all global solution, I think, is not going to work.
- [Nikolai] Yeah, there was an old, old movie called The Day the Earth Stood Still, about a robot alien that lands on Earth, and essentially the message of the movie is, isn't it so terrible that we don't have a global government to deal with the coordination problem of dealing with this alien.
So that idea goes back that long, decades ago. But like you're saying, I agree, local control is probably right, because you could take it in the other direction, to the very extreme, down to the individual: just let everyone do what they want with AI, and that'll raise everyone to the same playing field, so nothing will have actually changed. That idea might not work either.
- [Frits] I think, depending on which level you're operating at, you're going to have different speeds. I'm not against having a UN agency. I endorse and support what António Guterres is calling for. But we should not just focus on that solution. It's also, hey, what can we do today? And what I like about this discussion is that the saying, if you're not at the table, you're on the menu, very much applies in this AI space. You have to be part of the discussion, and as an institute, we're a very small institute. We were set up by UNESCO, deliberately outside it, so we can speed up the process, speed up the discussion. If you're a very small organization, anything which takes time is frustrating, but looking back, what I now see is, as mentioned, that accountability and governance are becoming a mainstream discussion. That means I feel we have done our part to get the world where it should be, which is having that discussion. I'm not claiming that we'll have solutions. I'm not claiming that you will find any answers when you talk to us. But I do want to make certain that the discussion takes place. That is at least what we are capable of providing: making certain that the powers that be start to think about this.
- [Ryan] When we talk about localizing some of this, how does that impact companies and individuals who interact with different technology all over the place? How does that play into the idea of bringing this to a more local level, to make the decisions you're saying need to happen?
- [Frits] Just like you localize technology, starting with the language the technology uses, you might need to localize the technology's accountability rules and regulation. I believe Elon Musk threatened, okay, I'm going to take Twitter, or X, whatever it's called, out of the UK because of the current laws. It's his right to do that, and a lot of people are going to suffer, but I think the institutions, the governments, should at least take the lead and steer where the discussion should go, because they represent the voice of the people, the democracy, and should not let it be dictated by what a couple of big organizations feel is the right way.
- [Neil] Is that the real challenge? That people don't want to accept the consequences?
- [Frits] Just look at five, ten years ago. Okay, hey, all this technology is free. I'm using it for free. No, you're not using it for free. You're giving away your data. You're giving away your privacy. If you accept that trade off, hey, no problem. If you don't accept the trade off, as an individual or as a government representing individuals, okay, are you going to accept the consequences of shutting things down?
- [Neil] I liken it to the trolley problem, right, with autonomous vehicles. It's not so much that we're worried about that scenario happening. Before, you reacted in the moment, and so you had some air cover. Now you have to make these life or death decisions beforehand, and evaluating a life, we're uncomfortable with that, right? That's the real struggle we have.
- [Frits] I realize we're preaching to the choir. We're talking to people who either have an AI t-shirt on or AI in their name. I actually like that, Nikolai, you're the first person I've seen with AI in his name. But how do we get the general public, the politicians, the policymakers to realize where we really are? Even for an audience biased towards understanding technology, when I talk about, for instance, a robot being granted citizenship in Saudi Arabia, a robot being able to own a patent in South Africa, robots writing theses that are peer reviewed; that robots, technology, AI have, I'll say, human characteristics. People are scared when they realize that's already happening today, because we talk as if, okay, we've got time to fix it. So I think the issue is not making people aware of what the issues are, but making people aware of the time and the urgency.
- [Neil] We're going to experience a hundred years of change in the next decade, and we're not moving fast enough from a policy and regulation standpoint. That's handcuffed business to a degree, because they don't want to commit funding and resources to something that might have to go away in a couple of years.
- [Frits] But I'm optimistic, Neil. I'm optimistic. And I want to refer to something called En L'An 2000, French for In the Year 2000. Over a hundred years ago, a French cigar manufacturer said, hey, soon we'll reach the magical year 2000. And to sell his cigars, he asked an artist to draw what the year 2000 would look like, to visualize it. And it's amazing to see, if you look at those pictures, that they actually had quite a good idea of what the world would look like a hundred years later.
So I have faith in our ability to take a long term perspective. We are in a situation of hyper change, I realize that, but at the same time, when I look at our track record as a human species, I'm optimistic that we can eventually find solutions for how we're going to deal with this.
- [Ryan] That's a great way, I think, to wrap this up, because I wanted to touch on innovation and governance at the same time. Any last words on how you feel we can balance innovation and adequate governance as things rapidly change going forward?
- [Frits] The balancing act, as you describe it, assumes that governance would slow down innovation, and I think governance should not be seen as hitting the brakes. I see it as giving innovation guidelines.
- [Neil] I'm in agreement, right? What's that old adage? If you don't have a destination in mind, then any road will get you there. I think that's what governance really is: helping us figure out what destinations we actually want to get to.
- [Ryan] For our audience out there who wants to learn more, follow up, or just engage with these types of topics, what's the best way? What resources should they be looking at, and how can they follow up with you?
- [Frits] I think the best way is just to reach out on LinkedIn, either to me personally or to the Institute for Accountability in the Digital Age. If you're curious, if you want to follow up, I definitely won't have all the answers, but I think I'm capable of at least pointing you in the right direction and helping you a step further.
- [Ryan] Yeah, that's great. All right, everyone, thank you all, and I appreciate the time, Frits.
- [Frits] Yeah. Enjoyed it. Thank you.
Special Guest
Frits Bussemaker
- Chair, I4ADA
Hosted By
AI For All
Subscribe to Our Podcast
YouTube
Apple Podcasts
Google Podcasts
Spotify
Amazon Music
Overcast