Technology and Trust: GETS 2023 Plenary Session 1
Good morning and welcome. It is so lovely to see so many familiar faces in the room today, and to be back in person at the scale that we are. I have had the pleasure of working with Gary on this conference for over ten years, and I think it's fair to say that this is my favorite conference of the year. And I know that for so many of us, we look forward to it.
So, Gary, thank you for your leadership. And what a wonderful way to start the morning off. Thank you. So we are going to talk about trust and technology this morning. It's probably not the topic that you think of first when you think of governing emerging technologies. But even today, with the Supreme Court coming out with two decisions associated with Google and Twitter, we are reminded of just how important trust is when it comes to technology.
And I think Gary's decision to put this session first shows that this is how we should frame the next two days. Trust is central and pivotal to the adoption of new technologies in our society, and I am honored and privileged to get to moderate this panel with four amazing individuals who have been at the forefront of thinking about trust in technology. Our session is going to start with a brief introduction from me; if we went through their CVs and all that they have achieved, it would take up, oh, 90 minutes. So I'll be short and sharp.
Each of our speakers will then come to the podium and speak for about 5 minutes. We will then have a moderated Q&A, and then we'll have an opportunity to open up the session for you to ask our four experts some questions. So with no further ado, let me introduce our four phenomenal speakers. First, I'll begin with Hilary Sutcliffe, who was the brains behind this panel.
I've had the pleasure of working with Hilary for 16, nearly 17 years, with which I think I just dated myself. Hilary currently runs the London-based not-for-profit SocietyInside, through which she explores trustworthiness and trust in the governance of technologies, supported by the World Economic Forum. Among many other things, Hilary sits on the advisory board at the Carnegie Council for Ethics in International Affairs, is an advisor to the International Congress for the Governance of Artificial Intelligence, and was named one of the 100 Brilliant Women in AI Ethics in 2019 and 2021. Hilary, welcome, and thank you for coming from across the pond to do so again.
Michelle is the Deputy Director of Analysis and Insight with the Science team at the Food Standards Agency, the regulator for food in England, Wales and Northern Ireland. With more than 20 years in government and a background in communications and research, she is an endlessly curious scholar of public attitudes and behaviors. She leads a multidisciplinary team of analysts and has access to leading experts and a wide range of research tools to understand how and why people do what they do, and what the future looks like when it comes to food. Again, welcome, and thank you for coming across the pond. And I love this next one.
Helen, founder of Autonomous Futures Tech Strategies, is an internationally recognized futurist, consultant, speaker and author. She looks at trustworthy technology and is developing the IEEE standard for organizational governance of AI. Helen is best known for starting and leading Intel's efforts in automated driving, which resulted in the more than $15 billion acquisition of Mobileye, a business now bringing in more than $2 billion a year. Her passion is making the world a better place and making our autonomous future more cool and less creepy. Our fourth panelist today is Mary Engle, who is Executive Vice President, Policy at BBB National Programs, where she oversees and develops advertising, privacy and other self-regulatory and dispute-resolution programs. Mary joined BBB National Programs in 2020 after a 30-year career with the Federal Trade Commission, where she played a leading role in developing the FTC's policies on, among many things, green marketing, influencer marketing, child-directed marketing and video game marketing.
Mary was awarded the Presidential Rank Award for Meritorious Executive in 2012. I would now like to invite Hilary to the floor. Thank you. Thank you very much, and good morning. It is such an honor to be here, and on your 10th anniversary as well.
And so, my research on trust in governance began with the observation that we talk a lot about trust in technology, but we didn't really talk about how regulation in its own right needs to consider and take seriously the earning of trust of the public, but also of its other stakeholders. I have spent since 2017 looking at this and researching this. And it's quite interesting.
Trust is really good to have. Whether you're an academic institution, an individual, a regulator or a government, if you are trusted, you are left to get on with things. Things are smoother for you. People want to work with you. They want to collaborate with you. You're given the benefit of the doubt when things go wrong.
Trust gives you freedom to act. Being distrusted does the opposite. When you're distrusted, people don't listen to you in the same way. Just think about when you don't trust someone.
Whatever they say, you are skeptical. You don't really want to work with them. You don't really trust what they're doing.
So you put blocks and things in the way. If you're distrusted, you don't have the freedom to operate, and you incur more costs, more time, more effort. Whatever organization or individual you are, if you are distrusted, life is much more difficult. And I have the observation, actually, that regulation is based on that distrust. Regulation says: we know companies won't act in the public interest. We know they'll really act in their own self-interest. So we're just going to have to make them.
I just thought, you know what? If organizations were trustworthy and could be trusted, how would regulation look then? But trust in regulators is important. I'm looking particularly at public trust, and I've been talking about this importance of regulators earning trust for a while. And everyone was going, yeah, yeah, yeah, right, right. Until COVID. And then suddenly trust in the approvals process of the vaccine was as important for public confidence as trust in the vaccine itself.
And everyone suddenly knew what I was on about, which was nice. And then we found the same for trust in technology. The Centre for Data Ethics and Innovation, which is a government body in the UK, did some research, and the biggest predictor of whether people felt the tech used in COVID had a role to play was trust in the rules and the regulations surrounding that technology. That was more predictive than what they thought about COVID, what they thought about the technology, even whether they thought it was going to work or not. The fact that the regulation was on the case, and was good regulation in the public interest, was the single thing that made them believe that technology was valuable. And similarly, too, in terms of a lot of technologies. I sort of started this because I'm 112.
I started doing technology, really, in 1996, when it wasn't really technology; it was sort of governance of science and a bit of technology. And across that time, I'd been working in nanotech and biotech, I was on the Quantum Technologies Public Dialogue Oversight Group, and the same for AI and robotics.
And I did an analysis of the public dialogues in the UK and Europe, about 20 different public dialogues across all sorts of different technologies. And contrary to our belief that the public is very risk-averse, publics are quite supportive of technologies; they're quite intrigued and interested to see what will happen. But it was when it goes wrong, and that was the thing — they were quite sanguine: you get a cock-up. When it goes wrong, how are the regulations going to deal with this? Have you thought about it in advance, and what is the governance like to make this safe for me? The regulation loomed really large in their minds, and I hadn't appreciated until I did that work that that would happen.
And so, in this work that I was doing on trust, I remember I looked at trust in psychology and political science and social science, and it was really, really confusing. I actually do remember laying my head on my desk and crying at one point, because it was so confusing. No one agreed with each other. And then I realized — in fact, the head of a regulator in the UK told me — no, forget it, the trust academics don't agree with each other. They're all fighting for their own little corner: trust is this, trust is that.
So I was just so relieved. It's not me; it's actually that it's really complicated, and no one really agrees with each other. But what happened when I got over that was I realized there were things that were common across all these different disciplines of trust, which were actually the drivers of trust — what I'm now calling the signals of trustworthiness.
And these are: an intent — a public-interest intent, not a particularly self-serving intent; competence — you can have the greatest intent, but if you're useless, then you can't be trusted; honesty — a big driver of trust is honesty — and integrity, fairness and openness, inclusion and respect. And when I distilled all these, I was really disappointed, because I wanted some really new, sexy trust things that I could talk about. But actually these trust values are so commonplace, even across all sorts of different cultures. When you see people that you don't trust — and I'm in the UK at the moment, we have a government — those trust drivers are absent.
When you see people you don't consider trustworthy, and when you trust people, these values are absolutely clear to see. So part of the point, when you ask what is important, what actually makes trustworthiness: these values are the things that make trustworthiness. And these are the things that I've now spent the last few years looking at — how they then manifest themselves with tech and with governance. And we'll talk about that a bit later on. Thank you very much. Thank you for framing this morning's discussion so perfectly. I saw many people smiling and nodding away as you were talking, about both the academics and also what happens when you do not trust.
And that is a perfect segue to Michelle, coming from a regulatory agency. Please. So good morning. Thank you so much, Hilary. And thank you, Diana.
And thank you to you all for having me here. I am here to talk to you about trust. Well, I've worked in regulators for most of my career. I've headed up comms teams, I've transferred into science teams, and now I work in the UK's most trusted regulator, which is the Food Standards Agency. We do food, and we are well-nigh obsessed with trust — and I'm quite obsessed with it, because it's my area of great interest.
I spent ages on the really sharp end of what happens when things go wrong, when you're trying to argue your point, when you're trying to say something is safe and you're being phoned up at 2 a.m. by the Daily Mail because something has fallen on the floor. And I wanted to start today — and I'm really glad that this is the first session; it's great framing for all of the other really fascinating sessions — with some of the things I've learned about doing this in real life. Because we were set up in the wake of a breach of trust, right? The BSE crisis — you must all have heard about that. Science said it was fine, politicians didn't want to own up to it, people died. And so, as a result of that, they set up an independent, non-ministerial department, grounded in science and evidence, which could call it out when there was something wrong with the food. And the reason that's so important for food — and hopefully I'll be able to make the link to all the other new things — is that, as Slovic tells us, risk is a feeling, right?
It is an emotional transaction that people make thousands of times a day: to trust a thing, to trust what they're putting in their mouth, to trust the car they're getting into without a driver. And they trust us; they make that deal with us, and we tell them food is safe. And so we look at this a lot, and we are the most trusted regulator, I think, because we really do track this. I mean, I track this monthly, six-monthly, and those figures are really, really high.
I won't read them all out; you can see them. It's an amazingly big screen. But what's really interesting is that we are also really trusted by the businesses that we regulate.
There are about 500,000 small and medium food businesses in the UK, but ten really, really big ones. And in the latest wave of our small and micro-business tracker, we found that 95% of the businesses we spoke to — and we didn't, you know, go in like the regulator; we sent independent researchers — thought that we were influential in maintaining the standards of food. 93% said that we worked really hard to do that. 90% said that we were good at identifying where there were poor standards, and 87% said that we were good at understanding the needs of business, which, from a regulator's point of view, is incredibly high. Now, I wanted to show you this because it was a piece of work that I looked at in about 2018 on the relationship of trustworthiness in food: the food regulator, the food system.
And we did this through a piece of deliberative work. And what was interesting is that, on top of understanding loads about that, which I'll come to in a minute, we learned about how you can get a person from a state of what I like to call blind faith through to a state of informed trust. And this happens so often in the deliberative work that we do — and we do deliberative work because it's a complex system. We ask people to really think about how this Kit Kat got onto their table, and they freak out, and they freak out in the same way every time. And this is the journey they go on.
And I think this is a really interesting one, because it's a big ask to get a regulator, a company, an innovator to take the people they're asking to trust them on that journey from blind faith — everything's fine — through to: I understand this a bit, and I'm willing to make that transaction with you. But it is worth it, because that situation at the end is far more resilient than the fragile blind faith of people who are not fully aware of what's going on. So I just wanted to share this journey with you. Caitlin Connors, one of the researchers I've worked with a lot, calls this the deliberative dip, and we've experienced it when we're trying to introduce new technology — gene editing, synthetic biology — to the public and see where they're going with it all. I'd echo what Hilary said about what builds trust. And I'm going to say it's trustworthiness. It's social trust: do I believe that you're acting in line with my values, not just your profit? And it's cognitive trust: do I believe that you're competent enough to deliver on the promises you've made? And I'm going to argue that actually, when you're doing it in real life, there's a third factor, which is visibility.
So people have to see this. You have to be proactive about this. You have to demonstrate this and earn this. There is no way of faking it. People have asked me for years, you know, what is the quick way to build trust? There isn't a quick way: be trustworthy, make the right call every time. And that's it.
It's actually really hard, because there's a pace mismatch between the innovation that you're being asked to get to market and the risk perceptions of the public that you're asking to trust you. Because when things go wrong — and they do — honesty is tough. The environment you're operating in is hostile, and the people that you're asking to go and explain this, to get people through that deliberative dip and earn their trust, aren't really trained for it.
So when the rubber hits the road, as it were, lots of scientists get a bit afraid. They won't stand up, they won't go on Twitter, they won't explain, and some of them can't explain in terms that are clear enough for anyone to really trust. And then sometimes you can't offer what everybody wants. Public opinion and the public interest are not always the same thing. And as a regulator, you have to weigh up really carefully what the public interest is. What is the decision that you need to make that is in the best interests of everyone, and how do we demonstrate that the decision we've made is in the best interests of everyone? I'll give you an example: gene editing in the UK. A Bill has just gone through. It is very, very clear that what the public want is transparent information.
They do want labels. We can't give them labels, so we can offer them transparent information, we can offer them confidence, but we can't offer them choice, which is what they said they wanted. The decision was taken out of our hands by ministers; that was the way it was going to go. So sometimes it's a very, very complex business, and you've got people cross with us on all sides about that one. But there we are. And sometimes it's hard because it looks like it's going wrong.
Those headlines are from 2017. There was pesticide in the eggs. We found out about it. We told everyone about it.
I had a real fight with Downing Street to be able to be as upfront about this as we thought we had to be, because otherwise, in the long run, we would be eroding our social trust. People expect us to call it out when there's something in the food. In fact, our reputation, our trust scores, went up and up and up because of how proactive we were in how we handled this. But when you look at those headlines, you can imagine they were not best pleased from day one. And what does it mean in real life? Well, it means that you've got to focus on demonstrating that social trust as well as being good at it.
You've got to demonstrate that you have people's interests at heart. And you don't do that by just telling people; show people. Be open, be transparent with the information, explain why you're making the decisions, explain the trade-offs you've had to make.
This is hard. Demonstrate your independence because that's part of the values that people expect from a regulator. You've got to be resilient enough to go out there and tell your story, and that is terrifying.
But you know, you can influence the coverage of an incident, for example, but you can never control it. Never be empty-chaired. Be aware of who it is that's telling the story. Going back to what Hilary was saying: if you don't trust the person who's telling you, you'll never believe what they say. So think about who is delivering that message. And in order to do that, you have to develop the people from the start.
You have to hire people who can do this. I mean, here's the problem: most scientists aren't trained for this. So, having spent my life at the interface between communication and science — and having been listened to differently, let's put it that way, depending on which department I found myself in — I think either you're going to have to develop those communication skills, or you're going to have to start respecting the people that have them, and really set your mind to it.
Now, you know, you can't really fatten a pig by measuring the pig, but you can certainly demonstrate the importance of the pig by having open metrics about it. And that's why we are so keen to measure trust — because when it is all up in the air, what looks like a PR disaster can actually be the thing that saves you, if you respond to it properly. So that's the last thing I would say: have the measures, and think about them not just in terms of, have we heard of the brand, what do we think of it, but what are the things behind it? What are the things that drive that trust? What perceptions of social and cognitive trustworthiness are you really looking for here? So hopefully those are some good opening thoughts to frame the conversation. Well, congratulations on being the most trusted regulator in the UK. That is no small feat, and the fact that there is clearly such competition between the regulatory agencies says a lot about the political environment and the landscape that you are operating in, especially when it's something so intimate.
Food is such an intimate experience and thing in our lives, and that you remain the most trusted says a lot about the work you're doing. So thank you for sharing your story. I would now like to welcome Helen to the podium. Thank you. Thank you, Diana.
And it's my honor and privilege to be here today. First, I have some explaining to do. I have a teenage son.
And my teenage son said: Mom, your parents gave you cool AF initials. You should use them. So that's what I'm doing.
And also because there's a doppelganger, Helen Gould, who advises the tech industry, who grew up five houses from my house — we went to the same pediatrician. And then there's another Helen Gould, who is a proctologist and a prolific publisher. So I need to differentiate. All right, so a little bit more about me. Thank you, Diana, for the introduction.
I was that little girl who loved robots and science fiction, and I grew up in a house filled to the rafters with technology of all kinds, from vintage vacuum tubes all the way to the latest and greatest computer technology. And I chose a career focused on humans and autonomous systems, because I knew I didn't want to program robots or build them — but I was fascinated by how humans and robots and autonomous systems could work together. And I am on the optimistic side, I confess. So, as was mentioned, I started Intel's automated driving business over a decade ago, and one of my jobs was to figure out what could possibly go wrong with a computer driving your car.
So I led the corporate risk assessments. I made the business case for us to enter the business, and for how to do so as a trustworthy, competent and great supplier of this new technology, in this new space that's going to revolutionize the way we move from point A to point B. First: how many people have driver-assistance systems on their car? Okay, a good number — maybe some on the balcony, too. Over 70% of those are made by Mobileye and Intel. So a lot of you have Intel inside, whether you knew it or not. In addition, I had the opportunity during my long career at Intel to work on highly automated factories, and with Moore's Law there was a technology treadmill, and it disrupted people: factories would close and factories would open.
And there was a constant increase in technology, but also in automation. And so part of my job was to help the workforce transition into this new automated world that they faced. I worked on the digital home. I worked on the autonomous data center — at what point do data centers start to make their own decisions? It's fascinating.
And then I had the opportunity to co-lead Intel's efforts in the autonomous aerial revolution: everything from flying taxis and delivery drones to aerial ride-sharing and overhauling the world's air traffic control system — because it was designed by humans 70 years ago for humans on the ground to talk to humans in the air, and that doesn't work at all for unmanned aerial vehicles. So, a fascinating challenge to solve. I retired in January 2020, just before the pandemic, and I'm not here speaking on behalf of Intel — I'm kind of on my own — and I'm working with Gary and team on the standard for organizational governance of AI. And I started my own consulting firm, and my primary focus is on sharing some of my experience, sharing some of my thoughts and ideas.
And COVID was a real wake-up call. You know, we're only here for so long. So I'm here to share my ideas, and the book I'm writing. The first one is Creating Trustworthy Technology: A Practitioner's Toolkit.
So the intent is not only how we can make technology, but a little bit of nudging towards how we can do it in a trustworthy manner. Okay, so I want to start by echoing what was said earlier about trust and distrust, and add one in the middle. Trust is about: can and should we trust the technology? I think we all intrinsically understand that. But what's also important is to have a language that we can use.
So on the side of distrust, I have intent to deceive. Just as we have misinformation and disinformation, I think the prefix here is very, very important: the distrust side is how technology can intentionally be misused or abused. Then what's fascinating is the middle, where we have mistrust — I know it's hard to see the slides — incorrect or inappropriate use of technology. And here you have over-trust in technology, under-trust in technology, and the potential for unintentional misuse. So intentionality is a very important concept here.
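(Editor's sketch: the trust / mistrust / distrust spectrum above is essentially a small classification scheme in which intentionality is the pivot. The illustration below is not from Helen's slides; the names and logic are assumptions added for clarity.)

```python
# Illustrative sketch only: modeling the trust / mistrust / distrust
# spectrum, with intentionality as the deciding factor. Names and logic
# are editorial assumptions, not definitions from the talk.
from enum import Enum

class TrustState(Enum):
    TRUST = "can and should rely on the technology"
    MISTRUST = "incorrect or inappropriate use: over-trust, under-trust, unintentional misuse"
    DISTRUST = "intent to deceive, intentional misuse or abuse"

def classify(misused: bool, intentional: bool) -> TrustState:
    """Deliberate misuse signals distrust; accidental misuse signals
    mistrust; no observed misuse leaves room for trust."""
    if misused and intentional:
        return TrustState.DISTRUST
    if misused:
        return TrustState.MISTRUST
    return TrustState.TRUST

# Example: unintentional over-reliance on a driver-assistance system
print(classify(misused=True, intentional=False).name)  # MISTRUST
```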
And then I wanted to introduce the concept of inferred trust. One of my favorite examples is how I got here. I boarded an airplane yesterday morning in Portland, Oregon. I decided to leave terra firma, get into a metal cylinder in the air, leave the ground and, you know, magically appear in Arizona two and a half hours later. It's incredible that humans adapted and were able to make that leap of faith — that it was not only okay, but a fabulous experience, to travel by air. And it's only been in the last hundred years that we have really made that leap of faith as a civilization.
So what's also interesting is where we're placing trust when we do that. I, as a passenger, placed my trust that I would safely reach my destination — and yes, I did. I also placed trust in the airline: that the pilot was trained,
that the staff knew what they were doing, that the plane was well maintained; in the regulators, that they had done what they needed to do to preserve airworthiness. And it was a Boeing. Okay.
So, in what Boeing did to design and build a safe and reliable aircraft, and then in their suppliers. So it's fascinating — not only direct trust, but also how we infer trust. And now I put the two together: inferred trust, inferred mistrust and inferred distrust. This is where it gets fascinating. Michelle did an outstanding job of explaining how her agency has inferred trust, both from the public and, I assume, from other regulators and from the businesses that are being regulated. So that's a super example of inferred trust.
Then I was at a conference a few weeks ago — Urbanism Next, a fabulous conference whose theme was automated driving and the implications for public policy and urban planning. And I got into a really interesting discussion with one of the colleagues there, and he was talking about how he had heard that Company X had a fatal crash involving an autonomous vehicle. So in his mind this technology was not safe: he would never ride in an autonomous vehicle, he would never buy one, and it didn't matter which company it was from. So this is an example of under-trust, and of how what happens in one situation can be inferred to apply to an industry, to other companies, and more broadly. So I thought that was interesting.
And then inferred distrust — I had to go there. Okay. So, you know, in 2016 there was the big Cambridge Analytica and Facebook scandal, and more than half a — not century, decade — later, okay, there was a Digital Trust Benchmark survey done by Insider Intelligence, and Facebook was still the least trusted of the nine major platforms.
So once you have broken that trust and you have distrust, it's very hard to repair, and it can take a long, long time. So there's that.
And then one of the things I've also been working on is how to foster greater trust. What mechanisms are we using today that we may not have put a label on, and how can those help to create greater trust? So the first one is advisory trust, and that's typically based on standards or subject-matter experts, and it's not legally binding.
Okay, that's kind of "advise me." And then there's protective trust, which tends to have rules and regs and laws, and is typically enforceable. And then explanatory trust, which comes through education, building understanding of how things work and why, and things of that nature. And then demonstrative trust, which is earned when you have proof, or you have certification, or you have independent verification.
And what was fascinating to me as I was joining this panel is I realized that we map perfectly. Hilary and SocietyInside map perfectly to advisory trust, and what Michelle is doing with food safety maps perfectly to protective trust. What Diana and I are doing is kind of in the explanatory trust area. And then what we'll hear about next from Mary is all about demonstrative trust: show me. So this was really interesting, that all of us mapped, and it is my hope that the book I'm writing will help in the explanatory trust mechanism.
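(Editor's sketch: the four mechanisms and the panel mapping Helen describes, laid out as data. The structure and field names below are editorial assumptions, not from her slides.)

```python
# Hypothetical sketch of the four trust-fostering mechanisms described
# above, with the panel mapping; structure and field names are editorial.
TRUST_MECHANISMS = {
    "advisory":      {"basis": "standards, subject-matter experts",
                      "enforceable": False, "panelist": "Hilary (SocietyInside)"},
    "protective":    {"basis": "rules, regulations, laws",
                      "enforceable": True,  "panelist": "Michelle (Food Standards Agency)"},
    "explanatory":   {"basis": "education, building understanding",
                      "enforceable": False, "panelist": "Helen and Diana"},
    "demonstrative": {"basis": "proof, certification, independent verification",
                      "enforceable": False, "panelist": "Mary (BBB National Programs)"},
}

for name, m in TRUST_MECHANISMS.items():
    print(f"{name:>13}: {m['basis']} | enforceable: {m['enforceable']} | {m['panelist']}")
```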
Okay. And with that, I would like to say thank you, and have Mary join us. Congratulations on the book. I think you might be the one person that we can ask: when are we getting the Jetsons cars? Oh, yes. It's coming.
Okay. We're making excellent progress, and there are a few companies that have been working on airworthiness certifications. And it's fast. It's coming.
I've always wanted to find somebody who could actually answer that question. Well, that's the next question. What we'll do first is establish trust with ground vehicles; then we infer that trust to aerial vehicles. Thank you.
Now I'd like to welcome Mary to the floor. Thank you. Okay, good morning, everybody.
It's a delight to be here, although a little bit hard to follow the Jetsons car, I have to say. I wasn't necessarily a fan of robots per se growing up, but I was a huge Star Trek fan, so I'm still looking for the ability to beam somewhere — that's even more advanced. So this morning I will talk a little bit,
and I think it follows very nicely from the discussions we've just heard, about what businesses can do to increase trust, both on their own and collectively when working together. So first, I'd like to give a little bit of background. Let's see, where is — okay, here we are: background on my organization, BBB National Programs.
So in 2019, we spun off from the Better Business Bureau. We have the Better Business Bureau, BBB, which runs the local BBBs and accredits companies. And then my organization is a separate nonprofit organization that develops and operates over a dozen different industry self-regulatory and accountability programs, primarily in the areas of advertising, privacy and dispute resolution. Our mission is to create marketplace trust and to serve both businesses and consumers.
And so I think it makes sense to talk a little bit about what industry self-regulation is. For our organization, what we saw is different companies facing marketplace problems and coming together to try to solve those problems, by creating programs that have various different elements. Usually, in fact, it's a whole pyramid of different types of accountability measures that can be included in a self-regulatory program: there are usually some sort of agreed-upon standards, some sort of monitoring and compliance, some dispute-resolution procedures, and maybe an appellate process. It's also important to have a good, close relationship with whatever regulator oversees the industry. For most of our programs that is the Federal Trade Commission, which, as Diana mentioned, I worked at for 30 years before I joined BBB National Programs.
So, for example, we have six different programs in the area of advertising, and our oldest programs were actually founded back in 1971 — two of them, the National Advertising Division and the National Advertising Review Board — and they were the result of the ad industry wanting to increase trust in advertising. There was a lot of concern about what was going on on TV, and so they created this program. We open investigations based on our own monitoring, or competitors file complaints.
We render a decision, and that decision can be appealed to the National Advertising Review Board, which is a body made up of advertising industry members. So it's kind of a peer-review process, rather than being heard by attorneys. Even though it's a voluntary process, we have excellent compliance — 90 to 95% compliance. In the case of non-compliance or non-participation, the company gets referred to the appropriate regulator; usually that is the Federal Trade Commission.
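(Editor's sketch: the workflow Mary describes — an investigation opened by monitoring or a competitor complaint, a decision, an optional appeal to the NARB, and referral to a regulator on non-compliance — modeled as a simple state sequence. The states and transitions are editorial assumptions, not NAD's formal procedure.)

```python
# Editorial sketch of the self-regulatory workflow described above;
# states and transitions are illustrative assumptions only.
from enum import Enum, auto

class Stage(Enum):
    INVESTIGATION = auto()  # opened via NAD monitoring or a competitor complaint
    DECISION = auto()       # NAD renders a decision
    APPEAL = auto()         # optional peer review by the NARB
    COMPLIED = auto()       # the 90-95% compliance outcome
    REFERRED = auto()       # non-compliance/non-participation -> regulator (usually the FTC)

def resolve(complies: bool, appeals: bool) -> list[str]:
    """Trace one matter through the voluntary process."""
    path = [Stage.INVESTIGATION, Stage.DECISION]
    if appeals:
        path.append(Stage.APPEAL)
    path.append(Stage.COMPLIED if complies else Stage.REFERRED)
    return [stage.name for stage in path]

print(resolve(complies=False, appeals=True))
# ['INVESTIGATION', 'DECISION', 'APPEAL', 'REFERRED']
```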
We also have programs in the area of privacy. So, for example, our Children's Advertising Review Unit actually issued guidelines on children's online privacy in 1996, really at the dawn of the Internet and the World Wide Web, even before Congress passed the Children's Online Privacy Protection Act, or COPPA. And now we run a Safe Harbor program under COPPA; we were the first ones to be approved by the FTC to do that. So, being in the business of self-regulation, we got very alarmed when we started hearing regulators say the time for self-regulation is over. The headlines that are on the screen here came from a Senate hearing on big tech social media platforms, where the senators were concerned and upset about the lack of self-regulation by tech companies with respect to social media in particular, and all the harms that were befalling consumers, especially young people.
But it's really important in this context to distinguish between a company's own self-policing efforts and independent industry self-regulation done by a third party. It's obviously very important for industries and companies to self-police, but their self-policing efforts will always be viewed with some suspicion.
It'll be looked upon as a little bit of the fox guarding the henhouse: the processes aren't sufficiently transparent, there's not enough accountability, and they're not really meeting all of the trust factors that Hilary set out. I was actually very interested to see the trust factors, because they align pretty closely with what is needed for trustworthy self-regulation. And so on the screen here are some characteristics of successful industry self-regulation that the FTC has highlighted, and they include clear standards, widespread industry adoption, active monitoring programs, effective enforcement mechanisms, procedures to resolve conflicts, a transparent process, and sufficient independence from industry itself.
So the FTC, you know, has put out these types of standards or characteristics, and we work hard at BBB National Programs to make sure that our programs actually meet and follow them, so that they do have credibility. And as well, the FTC has pointed to the value of independent industry self-regulation.
The factors they have listed are up on the screen, looking at the value of self-regulation and what it can bring to a marketplace. We have been thinking about the problems and perils posed by emerging technologies, whether it's social media or AI — especially generative AI — and all of the other technologies. And it seems to us that there is a possibility here for independent industry self-regulation to take a stab at trying to resolve some of these problems. A couple of years ago, Gary and his team here at ASU surveyed the landscape of what was going on with ethical principles and found hundreds of different frameworks and standards and codes of conduct and the like. But they found that very few of them actually had third-party independent verification. And I think that is kind of the missing link of what needs to be done here.
And even since that survey of the landscape, and especially with the release of ChatGPT and the apparent arms race in generative AI, more and more proposals have come out; there are now legislative proposals as well. And so it really does raise the question of whether this is an area that makes sense for self-regulation. You know, the government is looking at this space — and I am a former regulator in law enforcement, so I have much love for the government —
but I do also recognize its limitations, and the ability of the government to regulate in this area is questionable. Look at things like online privacy: we've been concerned about it for decades, and Congress has yet to pass a comprehensive privacy bill. As I mentioned, Congress has held a ton of hearings on social media platforms and all the threats of big tech. And they rant and they rave and they, you know, name and shame, essentially. But in the end, they haven't been able to pass a bill, and I doubt that one will pass that can survive First Amendment scrutiny.
And I even just saw the headlines this morning about the Supreme Court's latest ruling — which I haven't had a chance to read or digest at all — but that actually just supports the idea that it's very difficult for the government to regulate content moderation and those sorts of decisions in this space without running afoul of the First Amendment. So we have a situation here, I think, where business leaders could show their comparative advantage.
They know what restrictions and guardrails would work and what wouldn't. They can move more quickly and nimbly. They can adapt to changing technology more easily. They have the expertise and experience to bring to bear on solving these problems. And they can even choose to limit their own speech and their own conduct in a way that the government cannot do without running afoul of the First Amendment.
So, thinking about this, we see an opportunity to create trust with the public for these emerging technologies. Tech leaders could convene to create guidelines, to hold themselves accountable through greater monitoring and verification and public reporting. Many people hopefully remember that years ago, Ronald Reagan said, "trust but verify." That was in the context of nuclear arms control, and I remember thinking at the time: well, that doesn't really make sense to me; it seems contradictory. If you trust, you don't need to verify.
But when the stakes are high, it does make sense to have verification. And, Hilary, I think you mentioned that there's a difference between blind faith and trust. Right. So the verification will really enhance that trust, and it could be what's needed here to help address some of these problems. Oops — I had another slide that I didn't show, but that's okay.
Thank you. As I was just listening to you, thinking of the recent congressional hearings and the Supreme Court rulings this morning, I think there's one thing we can be very clear on: we actually don't trust government to be able to regulate this well. I think that if we did a quick show of hands, it would be overwhelming in that direction.
So thank you for four amazing presentations. There are so many ways to go, but I think, Hilary, I might start with you, if that's okay. This is definitely a question I'd love everybody to jump in on: why is trust important for governance? Well, I think, as I was trying to say in the talk, think about when it goes wrong.
So I've been doing more work on this trust, and on the sort of costs of distrust, and I'm very taken with Obama's foreign policy mantra, which is: don't do stupid shit. When it goes wrong, it's just so obvious. So obvious. We see it coming — people see it coming a mile away.
And, you know, the biggest finding of my trust project, actually, is to listen to and take seriously the people you don't take seriously, because they are the ones who are telling you, all along, the thing that's going to get you into trouble. So, in terms of why trust is important: when it's gone, it is really difficult for everybody. And in fact, on trust in regulation, when we look at the implications, particularly of new technologies, for society and societal confidence: there's a great report from the Inter-American Development Bank, which looked at the impact of distrust on the whole of society.
And it really unravels society. People don't trust each other, people don't trust politicians, people don't trust regulators. And therefore we all shrink into ourselves, do our own thing, and won't share, and it's to the detriment of society.
So I would go as far as to say — not necessarily tech in its own right, but the implications, for example, of the problems we're seeing with social media, where we can't trust anything, have and will have dramatic implications for social cohesion, and for all sorts of things. So I think we have to take trust seriously, because, as the Inter-American Development Bank showed, society can actually unravel from lack of trust. Would anybody else like to jump in? If you press the button on the microphone and the green light goes on, that shows the microphone is on. Over to you.
So I just thought that was very interesting when I was thinking about vaccine take-up in the UK, because one of the two determinants of vaccine take-up was not just trust in science; it was trust in government and in the institutions behind things like vaccines. Because people were not worried about the competence of the vaccine; they were worried about the intent of the people pushing it. And if your whole life the intent of those people has demonstrably not been in your interests at all — and we see this with various ethnic groups, different social groups — why would you trust at that point? That's where, you know, we were asking people to take a huge leap on the word of these people.
And it was really interesting to see how the demographics broke down, and why people weren't taking the vaccine. It wasn't trust in science; it was trust in the intention of people. And also, just to build on that: people don't trust politicians themselves, but they do trust regulators. And actually it was trust in the process — and the regulators around the world that were more open and more communicative about what they were doing were more trusted. And there are really interesting stats, which I don't have to hand, about trust and vaccine take-up being associated with regulation, and the ability of the regulator to communicate, and the efficiency of their processes.
All right, thank you. So, building on that: we're already touching upon different industries, different tech applications. And Mary, I'd love to ask you — do different industries have different trust relationships with the public? Okay.
Yeah, I think that's right. I mean, it goes to what experience people have had with different industries. Right now there's very interesting data: the Trust Barometer shows that individual consumers have more trust in businesses right now than they do in government or in the media. It's not clear exactly why this is, but it's thought to have something to do with how companies responded to the pandemic. We all ended up relying on tech companies in a big way. So even while, at the same time, there's a lot of concern, especially in the privacy arena, about what tech companies are doing, people also saw that, when push came to shove, they were able to do some really remarkable things to keep us all working and living and doing whatever else.
But, you know, I think it's just a function of what the companies and businesses have done over time. And I think Helen pointed out how long it has taken Facebook to regain trust after the Cambridge Analytica scandal — and one wonders; I don't know if that always happens, though.
It's been fascinating to me to see to what extent it's a function of how much consumers really need a certain product or company. And so maybe some get away with more than others. Fantastic.
Thank you. So, Helen, I'd like to pick up a little bit on your background and bio: science fiction obviously played a big role in your childhood and even into adulthood. How do you think science fiction influences trust in technology? That's a very interesting question, so thank you. Is Andrew here? I saw him — yes, he's out the back.
Okay, Andrew — feel free to jump in if you want. So what's fascinating to me is that humans have been wrestling with trust since the dawn of human civilization — back to caveman days and warring tribes — and all through human history there's been a lot about human trust of other humans.
Now, what's fascinating to me is that it's largely been in the last century that technology has become such a huge part of our lives that we now have to grapple not just with trust of other humans, but with trust of technology itself. And coincident with that, roughly the same century, we have science fiction. And science fiction has brought dystopian views of the future, as well as utopian views of the future, as well as a really interesting hybrid between the two. So our thoughts and perceptions of technology are shaped by experience, but they're also shaped by the books we've read, the shows we've watched, the media we've consumed, the movies. And many of us have visceral memories of the voice of HAL from 2001, or The Jetsons, as was mentioned earlier, or the Back to the Future hoverboards and, you know, flying cars. So this has now become part of our culture, part of our view of the world.
And we can look at it in a robophilic way, where we have a positive view from science fiction, or we can look at it in a robophobic way, where we go to more of that dystopian side and we think of Skynet. So it's just fascinating how much science fiction has shaped our views of technology, including for the good, in terms of what's possible. Look at it now: the Star Trek replicators and 3D printing — that's interesting — and the cell phone, and tasers and phasers set on stun from Star Trek. So the whole thing has really shaped our views and our thoughts and our perceptions of technology — technology for good, as well as technology for not good. Thank you.
So, Michelle, something I picked up in your presentation was the egg scandal, and you alluded very briefly to an interaction that you had with Downing Street. Clearly trust was central to this. Can you walk us through what happened? Yeah. So, I mean, Downing Street is where the Prime Minister lives; it's the head office of the Cabinet. And most government departments have a minister, and they answer to a minister.
We don't, for a really good reason. And the reason is that sometimes you have to say stuff that is politically unpalatable, like: there's something in the eggs, and it's been there for three weeks, and we don't know how many of them there are. And while we're pretty sure it won't do you any harm unless you eat a really inordinate amount of eggs — and even then it won't kill you — it shouldn't be there, and we don't quite know yet how it got there.
And you can pretend that didn't happen, or you can tell people — and we have one job: to tell people. But that isn't necessarily something that is in line with the political agenda, and without going into too much detail, we were under some pressure to not do that. And we had to go all the way up to the head of the independent Civil Service — he's passed on now — Jeremy Heywood, who was the head of the Civil Service. He's one of the only people old enough to remember why we were set up in the first place, and we put the case to say: they don't get to sign off my press release. They just don't; there's an exception made when it comes to food.
Because we got it wrong before, we have to do this — and we don't do it very often. And he turned around and said: she's right. And we put it out.
Now, what was really interesting about that one is that I had all of the leadership of the organization doing jazz hands around me, going: you know, this is going to be terrible for our reputation; we've gone out there and we've basically committed hara-kiri in the middle of the media; we have to do 20 interviews back to back.
We had an amazing director who could put the issues out there, honestly tell people what we didn't know, tell people what we were doing about it. And that's when I started to really think about trustworthiness and the drivers of it, and how you demonstrate it — not tell people about it, but demonstrate it, do it. Because while the headlines were saying there's a breach of competence here, and people were worried about that, the way it was responded to was reinforcing that actually there was no breach of values. So: this has happened; it happens.
This is how we are going to tell you about it. We're going to tell you honestly what we don't know. We're going to tell you what we're doing about it. And we're trading on, you know, 20 years of having done that. And what I started to see was those measures going up and up and up. And so that's why I did this piece of work, to understand and unpick what was behind trustworthiness — because you shouldn't really just do the post-mortem when it goes wrong.
You should do it when it goes right, so you get it right again next time. But the pressure to not do the right thing was substantial. And if it hadn't been for the particular legislative framework that set up our organization — where I could say: no, look at the Food Standards Act, you don't get to do that, which is career-limiting, possibly — if it hadn't been for that set-up, we wouldn't have been able to do the right thing.
So the independence really mattered, massively. Which brings me to another question — and I'll start with Hilary, but I would love to hear from everybody. Does trust in tech, and trust in regulators, differ between the UK and the US,
in your view? I don't really know. I don't really know, because there's so much going on in politics that influences regulation, which influences trust; I think they're so intertwined. And also, you know, all the tech's coming from you guys, so you can say what you want to me. I'm Australian. Yeah, that's right. So I think — and obviously, Wendell, you know, is looking at this — this is an international problem, and we've got national regulators, and we've got national self-interest, and we've got geopolitics.
And all of this weighs in on, you know, the delivery of efficient regulation. I think that we could be regulating earlier and better with a more collaborative approach. I think the biggest cause of distrust in regulation is the fact that people believe that money is being put before people. And lo and behold, you know, the tech companies are putting money before people, and the regulators are not doing anything about it — and then wondering why trust is falling. So I think it's equal, actually, between both of our countries. I think there's a philosophical difference, really, between the two, in terms of how regulation is conceived of.
So in Europe, you can't put anything onto the market until it's been proven to be safe. Here, it's a completely different legislative framework, a totally different culture: you can put it on the market until someone proves in law that it's not safe. I mean, that's a brute characterization, and lawyers will have a better way of expressing it. But I think we've got that basic difference, where I'm imagining that, from this side of the Atlantic, what we do looks extremely precautionary, possibly to a fault.
Okay, I'm going next. So my hypothesis has been that in the US it's more of an innocent-until-proven-guilty approach.
And I'm sitting here in a law school with a lot of people who know a whole lot more about this than I do. But in other parts of the world, it's more guilty until proven innocent. So it's taking what Michelle was saying and kind of exploring it a little bit more.
That's more a hypothesis, but I think there's some truth to it. And what's interesting to me is that where self-regulation comes in is also more on the innocent-until-proven-guilty side. For me, one of the interesting examples is the approach to airworthiness. So in the UK and in Europe, EASA does a lot of the regulation of the aviation industry and of whether a plane is safe or not safe.
In the US, you have a delegation of some responsibility from the FAA to people who have been identified to act on behalf of the FAA and do airworthiness certifications. I've personally witnessed two airworthiness certifications, and it is fascinating: they pick a person, they train them, and then that person does the right thing in terms of making sure that the aircraft is safe and reliable and ready to be used. So I think there really is a philosophical difference, and I think Michelle did a good job of explaining it. Yeah, I would agree with that. And I think it actually extends even to self-regulation, because various countries around the world have self-regulatory organizations akin to mine.
In the UK it's the Advertising Standards Authority, and we meet and get together from time to time. And even there I see much more of a willingness and ability to take action, and to kind of assume guilt as opposed to innocence. And it's maybe, I don't know, the culture of the US where we come from: there's much, much more suspicion about government action, suspicion about whether that's the right thing to do, and more willingness to give leeway to the companies themselves.
So it's been interesting for me to see that, even on a self-regulatory basis, we are way more hands-off than the others. So I just wanted to say one more thing. What's interesting is the hybrid: if you can clear the hurdle in the EU, for example, and you can clear the hurdle in the US, then you're even more confident that, yes, these planes are okay. Isn't that a really interesting framing? Because the way you framed it — I mean, I am not a lawyer, so forgive me — sounds like criminal law to me.
Innocent until proven guilty, proof — those are not the terms in which we often deal. We do prosecute, but I think, at the basis of it,
you're probably right: there's a real difference in the appetite for, and the envelope for, government intervention in the first place on anything. And there's a slightly different balance, I guess, between the interests of industry and innovation and the interests of public health and public welfare, that you see seeded throughout society in different countries. So I think there is something quite profound about that that makes it different. Fantastic. So I think this is a great point to open up questions to the floor.
We did have this mic, but I got told by Gary that I'm not to throw it, because apparently somebody was going to sue us when it fell on their head. I don't want to get sued today. Exactly — only in America. So rather than try to throw this across the hall, if you just stand up and use a loud voice, we should be able to hear you. When you do have a question, please indicate and I will point to you, but please introduce yourself and say where you're from. I knew Jake was going to be the first question.
So, Jake, we'll start with you. Thank you. My name's Jake; I'm the founder of the Desert Blockchain community. And I have a question for Hilary — and you kind of touched upon this in the context of the COVID-19 pandemic. What grade would you give the regulatory regime overall in terms of its handling of this, number one, and what grade would you give the regulatory regime overall in terms of the public's trust in the regulatory machinery, on a scale of 1 to 10? I think that is a great question.
I have no idea. I would say it's different in different countries — countries with different relationships to government and regulation, different beliefs in the effectiveness of governance. I know from the UK that we had a government that was distrusted, so perhaps that's on a sort of two, but we had an NHS and a regulatory system that scored rather higher, and that