Emerging U.S. Policies, Legislation, and Executive Orders on AI | Intel Technology

(upbeat music) - [Narrator] You're watching "In Technology," a video cast where you can get smarter about cybersecurity, sustainability, and technology. - I'm Camille Morhardt, host of "In Technology" podcast, and today I have with me Chloe Autio. She is an Independent Advisor on AI Policy and Governance based in DC and we are gonna cover AI policy, AI legislation, AI regulations, AI executive orders, everything somebody might wanna know about it. We're gonna focus on United States policies, but we're gonna have a little bit of a glance or a window into global policies as well.

Welcome to the podcast, Chloe. - Thanks so much, Camille, it's great to be back with you and I'm excited to chat today. - I've heard, probably like everybody else, the news about what's coming, and should we be worried? We hear Elon Musk tell us we should all be very worried and we need regulation and checks for AI, and then we hear from other people that it's really just a tool and we're over-worrying and we don't wanna stifle innovation. So can you give us like a little bit of a framework or a lens to even approach the conversation and then we'll walk through various topics? - Absolutely, and I think that grounding is really important. The US and governments around the world have been talking about AI regulation for almost, I don't know, 5 to 10 years now. The first discussions that happened in the US started in about 2016 and 2017 in Congress and actually in the Obama White House.

A little bit before that there was a white paper on what artificial intelligence was and what it wasn't. And the discussion about the need for oversight of these AI tools, particularly increasingly powerful ones, has built over the last five years or so. The floodgates of the discussion have opened this year with the advent and uptake of really popular generative AI tools like ChatGPT, DALL-E, Midjourney, and Stable Diffusion that really sort of put this technology in the hands of consumers and policymakers in a way that I think felt a lot more real. And so the focus from policymakers on needing to oversee, control, and really better understand this technology has been really heightened in the last six months. - Are there any laws or legislation on the books, or are we still in the guidance and policy or executive order space? - There is so much out there right now on AI regulation, AI policymaking, AI investment by government.

And to spare you a lengthy recited memo of the contours of the AI policy world, which I do a lot of for the work that I do, I'll sort of focus on trends and new happenings, particularly in the US. But really, really quickly, you know, obviously around the world, we have the EU's AI Act, which has also been in development for three to four years. The European Union is currently in what's called a trilogue, an intergovernmental discussion, really, about the final issues in the EU AI Act. And from there, the law will be adopted and likely go into effect sometime next year. Organizations and governments across the world will be, and are, really paying attention and reacting to that. Other countries obviously are looking at where they can step into the AI regulatory debate.

I won't go into all of them. I bring up the EU AI Act because, while it's not the most important regulation, it's definitely the fastest moving and most well-known sort of broad regulation on AI that's actually going to come into force relatively soon, and so it has driven a lot of the discussions. But to bring it back to the US, there's a lot happening, right? The Biden administration and the White House Office of Science and Technology Policy and other White House offices have really, really focused on AI as a primary policy issue, particularly with how, to the point I made earlier, consumers have really latched onto and started to play with these tools in a way that I think we haven't really seen before. In just two months, ChatGPT became the fastest-growing consumer application, with 100 million users, and that's crazy, right? Before that we were talking, you know, TikTok and Meta. And so having these kinds of technologies in the hands of people also raises questions about how they can be used by bad actors, right, malicious actors.

And so I think that, you know, a lot of offices within the White House and across the executive branch have sort of leaned into this discussion and said, hey, what do we need to be doing to control these technologies? So earlier this year, the White House got about 15 companies to agree to a number of commitments on the security, the safety, and trust in AI systems, particularly focused on these really powerful models, foundation models that form sort of the basis for large language models and the chatbots and really powerful models that have captured public attention lately. The administration is also working on an executive order, which I'm sure you've heard about. It's been delayed quite a few times, but I think it's finally expected to come out around October 30th or 31st. And that will really focus on more investment and understanding among federal agencies and the US government on these LLMs and how they can actually be used. It'll also focus on some workforce implications: how can we get more talent into the US government? We really desperately need more AI talent, and more tech talent generally, in federal agencies.

And so the EO will also focus on that. There are a few more things that it will cover. Many of them have been sort of teased or floated in the news, but broadly it'll also focus on really helping different agencies understand how they can control and oversee these technologies at a high level, and on working really closely with NIST, the National Institute of Standards and Technology, to sort of create testing and guidelines around using these really powerful models. - Could you back up to the cadre of tech companies that came together, this was in the news, and announced a commitment that they were making?

Can you remind us who they were, or who several of them were, and what does this mean? And the fact that they're coming and stating what they're willing to do, you know, versus being told what to do by legislation or by the executive branch. Give us a sense of what it means that these companies are coming together. - I think what it really means, Camille, is that these companies understand that they need to do something, that they need to demonstrate some sort of responsibility and willingness to work with government on controlling these issues, and not just manage the perception of the concerns around generative AI, but also show what they're actually doing to govern and control these really powerful models. And I think it's really important that in addition to the big players that are chronically in the news about generative AI, you know, Microsoft, OpenAI, DeepMind, Google, Anthropic, in the second round of commitments, some of these smaller companies, Cohere, Stability AI, Nvidia, Salesforce, also came together, right? And said, we're also gonna commit to test and look into what we can do around watermarking.

We're gonna commit to setting up third party or external red teams, internal and external so we're bringing sort of outside stakeholders in to evaluate and interrogate our models and really get feedback from them. And also invest in better cybersecurity infrastructure, which is something that underlies all of this, and up until sort of recently has not been as much a part of the discussion. So, you know, really I think what this means and what we'll see is these companies working together more to really build and develop some of these standards. We'll see that too, where one of the outcomes of these commitments, at least the initial round, was the formation of something called the Frontier Model Forum, which will be sort of, it's sort of an industry coalition, but not quite, made up of the four largest players in this space, Microsoft, OpenAI, Google, Anthropic, and they're gonna be coming together and really trying to develop best practices related to all of these things, related to watermarking, related to red teaming, related to model evaluations, and kind of working together in a way to demonstrate that, you know, they're taking these issues really seriously. - [Camille] Can you pause and just tell us what watermarking is? I think you explained red teaming well, but.

- Yeah, so watermarking is a technique that involves embedding a signal in a piece of data or a piece of content, essentially to identify its provenance: where it came from, who made it, have any changes been made to it, was it manipulated? And this is really important in the context of AI because one of the greatest concerns right now with these generative tools, in addition to copyright and IP (have these tools or works been created using copyrighted material?), is misinformation. Are we looking at deepfakes? You know, a video of the president tripping and falling, made to look worse than it was, can have implications for how people view him and how people think about the information that they're getting. It also has impacts for trust in technology generally.

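For readers who want a concrete picture of the provenance idea Chloe describes, here is a minimal, purely illustrative Python sketch: it signs a piece of content and its metadata so a downstream user can check who generated it and whether it was altered. Real AI watermarking systems such as SynthID or C2PA manifests embed an imperceptible signal in the content itself and are designed to survive edits; this toy HMAC example, with invented names and keys, only illustrates the verify-origin-and-integrity concept.

import hashlib
import hmac
import json

# Hypothetical signing key held by the generator of the content (illustration only).
SECRET_KEY = b"example-generator-signing-key"

def sign_content(content: bytes, metadata: dict) -> dict:
    """Attach a provenance record: metadata plus an HMAC over content + metadata."""
    payload = content + json.dumps(metadata, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "signature": tag}

def verify_content(content: bytes, record: dict) -> bool:
    """Return True only if the content and metadata match the original signature."""
    payload = content + json.dumps(record["metadata"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

image = b"...generated image bytes..."
record = sign_content(image, {"generator": "example-model", "ai_generated": True})
print(verify_content(image, record))         # True: provenance intact
print(verify_content(image + b"!", record))  # False: content was manipulated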
And so inserting a watermark on a piece of content can give either developers or users a better sense of where that content came from, including whether it was AI-generated and not actually real. - Are there any noticeable misses in the companies that are coming to the table? - That's also a really good and super important question right now. And it's really important because so much of the focus right now in policymaking, whether it's in Congress with sort of Schumer's AI Insight Forums or even these White House commitments, has shifted to these really powerful models. The reality is that most AI development is not happening at that level. It's in and across the enterprise.

It's not foundation models, it's not models trained with billions of parameters, like these foundation models are. It's what I maybe would call clunky AI, computer vision models, reference implementations. And not to say that these technologies aren't advanced, but they are the technologies and sort of intensive data workflows and applications that are really being used and adopted right now that are creating real harms, right? Like biased algorithms used in hiring contexts, algorithms used to make decisions about loan eligibility, that sort of thing. I can't say that one company or one organization is missing per se, but I think what's missing out of this conversation, particularly in policy circles generally, is that focus on, you know, how can we address the harms and the concerns with AI that are happening today and how can we sort of shift the focus back, or at least maintain focus on the AI that was being used and that needs governance and regulation before ChatGPT and foundation models entered the room? - One other question I have is when we see larger companies come to the table, granted there are a few smaller ones, but they are often the very companies that are benefiting and collecting the most information and have the resources to process it and use the models, train the models, and then implement whatever it is they're gonna implement for their benefit and potentially other people's benefit. But is there any space given or thought given or consideration at this point to making the information that's collected by large aggregation companies available to, let's just say, the public or developers more broadly so that the concentration of information is more distributed or accessible? - Yeah, it's a good point and I think that this has been the focus of policymakers, but more so in the context of sharing sort of compute and resources and data. One of the big policy initiatives right now, there's actually a bill up on the Hill, is the creation of a National AI Research Resource, the NAIRR for short.

We're talking about a big AI computing cloud full of a lot of data and GPUs and CPUs, a place basically where researchers from the public sector, from academia, from smaller companies, even students can come together and sort of work with and play with data and AI models, and leverage public compute basically to broaden access. As you know, compute is extremely expensive. Training these models, particularly these large language models and foundation models, can take millions of dollars, billions of dollars sometimes, right? That's why we're seeing a lot of these big companies sort of pair up with large infrastructure clouds. So creating this space through the NSF and the Office of Science and Technology Policy for different researchers from the public sector, and not just the private sector, to sort of come together and share resources has been a big focus of a lot of policymakers.

I will say unfortunately though it has not yet been funded, so there's been a lot of work to sort of study what such an infrastructure, such a NAIRR would look like, but we need some bills to pass in Congress to really authorize and appropriate that funding next. - Let's move into legislation 'cause that was sort of the executive branch we covered. So what is going on with legislation both in the US and more broadly? - Yeah, so believe it or not, members of Congress are still in a very like fact-finding stage with AI. This may feel surprising to folks who have testified in AI hearings or helped develop AI hearings or, you know, have sort of been in touch or keeping track of AI policy proposals like the Algorithmic Accountability Act or the AI in Government Act, which are bills and proposals that have been introduced in Congress in the last several years. But again, with this new focus on generative AI, a lot of the members in Congress have said, okay, hang on, we know we need to do something, but we really wanna understand what's going on.

And I think they want help sort of placing this moment of AI development in context. And so the biggest sort of convening that people are talking about is that Chuck Schumer, the Senate Majority Leader from New York, has convened these AI Insight Forums, as he calls them, and the goal really is to sort of bring together experts from across the AI field and other disciplines to educate members of Congress, really, and their staff on all sorts of things relating to AI: risks, benefits, harms, open source, different types of proposals around licensing. I think toward the end of the year we'll expect to see some sort of proposal from the majority leader and his co-sponsors, co-collaborators I should say, on AI issues in a bipartisan way, on an AI framework or a framework for AI safety that also promotes innovation. - Are there multiple areas of focus that we know of so far that legislators are concerned with, or is there a predominant one? - I think it really varies depending on the member, right, and their priorities and their politics, honestly. But most of the bills I'm seeing are really focused on striking the right balance, threading the needle between protecting rights, values, and civil liberties, and also fostering innovation in the technology, or at least not limiting innovation too much.

And an underlying theme that really supports that is, as you mentioned, national security and competition with China. That is definitely one of the themes that gets a lot of attention on the Hill. China is our global competitor, and they're also a major competitor in developing AI research and cranking out AI journals and citations and papers and making contributions to the AI space. And so there's a lot of concern from US lawmakers about what's been dubbed the AI race, though I'm not enamored with that term, to really make sure that the US stays competitive as a government and as an industry in developing really powerful AI models, and also to prevent China from getting too powerful with AI development. - So we talked about the US and the EU, but are there other laws or policies or regulations or standards or guidelines being developed in other parts of the world or other regions? - One of the things that's getting a lot of attention, and in fact will be coming up in a few weeks here, is the UK's AI Summit.

And interestingly, the UK has positioned itself in between the US and the EU in terms of its approach to AI oversight. You know, if you're sort of thinking about this on a spectrum, the US hasn't exactly come out with an omnibus or broad AI regulation. The EU, on the other hand, is in the process of doing that as we speak. And so the UK has said, we're gonna take an innovation-forward approach while also thinking about a lot of these concerns and risks. And they put out a great white paper earlier this year. But they will actually be hosting an AI Safety Summit on the 2nd of November.

And the prime minister Rishi Sunak has said this is a really important priority for him, if not one of the largest priorities in his government. And that summit will bring together a lot of the AI model developers that we just talked about, governments from around the world, and they'll look to create some principles and guidelines around AI safety and work in a really multi-stakeholder fashion to put forth some kind of agreement on AI safety, the details of which have been pretty broad for most of us who have worked in this area and field for a while. But I'll be interested to see what tangible outcomes come from that.

In addition to the UK Summit, countries around the world have been participating in the G7 discussions around AI governance principles and the Hiroshima Process, which explicitly focuses on, you know, creating some sorts of agreements on AI development and use. Lots of multi-stakeholder, multilateral international coordination on these issues. And we're also seeing different countries sort of stepping up to kind of lead and say, you know, we're gonna be the convener of these issues and we wanna play that role. - Are there any major disagreements at this point that are cropping up between different countries or geographies? - It's how international policymaking and standards work generally, right? Every country, every citizenry has a different approach to values related to technology governance, based on norms and cultural values that are, you know, independent and context-specific within their country, right, or communities. And so I think there's a lot of focus and desire, particularly from industry, to have more international coordination on global governance.

But I think that these discussions will remain pretty high level in nature and that's how a lot of these processes go, right? Internet governance, privacy principles, they can help set a standard. I'm thinking, for example, of the OECD privacy principles, right? That was really a leading body of work that helped set a value-aligned framework for thinking about privacy regulation, but it was up to each individual country to take that and make it their own and adopt it and run with it in the context of use in their country and sort of cultural norms. And so I think we'll probably see something similar, but at this stage, not a lot of broad or fraught disagreement around what AI development should look like. Rather, a broad sort of consensus that we want to be developing these technologies for good and the way to do that is to acknowledge and manage the risks and harms, while also not totally shutting it down or pulling the plug in a way that will prevent one country from being competitive. - I'm wondering if things like regulations and ideas and standards, et cetera, or agreements around topics like privacy or cybersecurity, if those map over into AI, like, "well, whatever we said about data with respect to privacy before applies now to AI," or whether AI and large language models are fundamentally different in the way that they collect information and then generate insights based on an aggregation of that data, and now who owns those insights, right? As opposed to here's the piece of data and I can track what it is and who owns it always.

If I'm generating an insight from a bunch of data, now who owns it, and now what does privacy mean? - One of the things that I think about a lot, and that I think a lot of policymakers are focused on, is figuring out how we can leverage old frameworks, and not necessarily old frameworks, but existing frameworks for cybersecurity risk management, you know, thinking about the security of critical infrastructure, privacy laws, good consent practices, good data mapping and governance practices. These are things that industry and governments have been doing and thinking about for a long time. And to say we need to reinvent the wheel for AI, I don't think is the right approach. At the same time, I think that AI, particularly these general AI models, does raise a lot of new and novel concerns about the things that you mentioned, right? If I'm building a data set that's not for an LLM, I can, you know, look at my consents, and obviously this is an oversimplification, but, you know, I have my data sources, I know if it's PII, I understand my consents and have maps of where that data came from, and then I'm gonna look at my use cases and say, you know, what regulatory environment, what industry-specific regulations apply to the ways in which I'm gonna use this data? The problem with foundation models is that the data used to train them has come from millions, billions of sources sometimes, off the public domain, from Wikipedia, from posts that may not be public, private Facebook posts that, you know, may have just been out there for years, where people didn't really understand what they were putting on the internet, and where, and who had access to it. And all of that information, all of those probabilities and weights, has been baked into these foundation models, which are now being inserted and used and sort of implemented into different tools.

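To make the data-mapping idea above a bit more concrete, here is a minimal, hypothetical Python sketch of the kind of eligibility check a team building its own (non-LLM) data set might run before training: confirm each record's source, consent status, and whether it contains PII. The field names and the approval policy are invented for illustration, not a description of any particular company's or agency's practice.

from dataclasses import dataclass

@dataclass
class Record:
    source: str          # where the data came from, e.g. "licensed" or "scraped_forum"
    contains_pii: bool   # does the record include personally identifiable information?
    has_consent: bool    # did the data subject consent to this use?

# Hypothetical policy: only these provenance categories are acceptable for training.
APPROVED_SOURCES = {"licensed", "first_party", "public_domain"}

def eligible_for_training(rec: Record) -> bool:
    """Apply a simple data-governance gate before a record enters the training set."""
    if rec.source not in APPROVED_SOURCES:
        return False                      # unknown or unapproved provenance: exclude
    if rec.contains_pii and not rec.has_consent:
        return False                      # PII without consent: exclude
    return True

dataset = [
    Record("licensed", contains_pii=False, has_consent=True),
    Record("scraped_forum", contains_pii=True, has_consent=False),  # the foundation-model problem
]
training_set = [r for r in dataset if eligible_for_training(r)]
print(len(training_set))  # 1: only the licensed, consented record survives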
And so that is something that we really do need to think about and consider, particularly in the really obvious context of intellectual property and copyright. And obviously I'm sure you're well aware of the writers' and actors' strikes, where there are a lot of concerns about the use of my body or my image or my retinal gaze, right? How do I control those things and how should I be able to control those things? They're all really important questions that I think have been really centered by this discussion. But when it comes to general technology governance, right? We've been doing this for decades, and so where we can sort of lean on and learn from technology governance frameworks and technology standards, either privacy standards or sort of general risk management standards, we really should think about doing that. And many people are, but it's just to say that I think there's this tendency to think about AI like this big flashy thing, as if it's an entirely new problem. The reality is that, you know, we've managed technology solutions for a while, obviously not always perfectly, and it does come with risks, but we have frameworks to think about that and we should be using them.

- So if you're a business, and I'll just say if there is a difference between small or large, maybe you can qualify your answer, but what are the main things that you should be, like, aware of and tracking right now if you're working with AI? - The first thing that I always advise organizations to understand, and this may sound basic or sort of like a step back, is just really understanding AI use in your business and what you want to use AI for, whether you're developing it, whether you're procuring it, whether you're co-developing it. A lot of government folks and public servants that I talk to, you know, are sort of chronically bombarded by new flashy AI solutions and tools, and obviously this is an issue in industry too. And some of these tools, you know, purport to solve a problem that we may not actually need AI to solve.

We always need to be asking ourselves, you know, what is the problem that I'm trying to solve right now? What is the business case? What is the business issue? And is AI the right solution? And then get to the question of what kind of AI, right? Am I building it, am I buying it? Am I working with a supplier or a customer? Because those discussions will inform the next set of questions that are really important for risk management, right? Where did the data come from? What data do I want to use? Do I need to buy that data? Do I need to get a license for it? Do I need to obtain new consents? Am I going to be building a data set myself? If I'm bringing a model into my data environment, what do I need to be thinking about in terms of outputs? Who owns the outputs of the model combined with my data and external data? These are all questions that, at least in a lot of government contexts, are still being sorted out because there aren't a lot of rules around procurement, AI procurement specifically, in this country. So thinking about where data is coming from and what you're going to be using it for, and then, you know, obviously the AI use case, and just getting really clear and precise about what that is so that you can really understand the risks, is really important. That's where I would start. - What else should I have asked or should I be asking that I haven't asked yet on this topic? - The one thing that's captured a lot of attention right now, particularly in Washington, is this debate around open source and how we should be controlling open source data and models specifically versus controlling them through something like a licensing regime, which a number of these big AI labs have come out and supported. And it's turning into, you know, sort of a big thorny debate, as a lot of these things do in Washington, but I think we'll see some more discussion on that topic certainly in the coming months. And I think the licensing argument, and whether or not the US needs to set up an independent agency to oversee that licensing regime and these risks, will be part of the conversation for sure.

- What are the main components of that thorny debate? Is that the main component, or what are the other ones? - There is a concern with a lot of the development of these models that, particularly if they're open sourced, if the technology itself, the techniques, the access to the models is open, they will fall into the hands of or be used by malicious actors, who can use them to do things like develop a bioweapon, or use them to manipulate somebody who has access or control over those types of things, or use them to manipulate systems or infrastructure to get access to, you know, powerful codes or something like that. You've probably heard a lot about this sort of existential risk topic, which basically describes sort of these catastrophic threats to humanity or human existence that could be created by AI models. The concerns around that and around access to these models have sparked this debate around how we control them, right? And one of the primary solutions proposed by Sam Altman from OpenAI, and somewhat supported by Microsoft, although a little bit more delicately, has been the creation of a licensing regime administered by a new federal agency to say, what are the thresholds for compute or model performance above which we need companies to obtain a license to build or to sell? And obviously, you know, this has implications for access, because only really powerful companies with a lot of resources, not just for compute but for compliance, will be able to participate in such a licensing regime, but it will continue to be sort of an issue. And of course, you know, the argument on the open source side is really that these models should be available for use, people should be able to use them. Currently, there's a huge power imbalance in access, in who can actually build and develop these models, highly concentrated in these, you know, big labs backed by large compute providers.

And, you know, a lot of these capabilities should be open so that people in the broader community can learn from them and build things that are good too. So it'll be interesting to see how that one plays out. But personally, I don't think that the US is ready for a new agency to administer or create such a licensing regime, or could even make that happen in Congress. - Chloe, you personally, are you optimistic or pessimistic about the future of all of this policy around AI? - I'm pessimistic.

I hate to say it, too. I mean, these technologies are really powerful, but Congress is so broken, and they're just not really able, I think, to focus on what's going on in the broader context of technology governance and development. We've been talking about responsible AI for almost 10 years.

I helped build the Responsible AI program at Intel; 2017 was when this all kicked off, and there's still really no incentive for companies and organizations to take this work seriously. And the fact that Chuck Schumer, arguably the most powerful, or one of the most powerful people in government, period, is spending his time bringing like Mark Andreessen and Elon Musk and all of these, effectively, effective altruists, which is important, but not the bulk of the AI debate, you know, into these Insight Forums to kind of talk about things like catastrophic risk to civilization, instead of like, you know, not getting a loan because biased data has been used to train a sort of dumb algorithm being used by a defense or, you know, a government contractor, has just really kind of distorted this debate in DC, and so it makes me feel kind of generally pessimistic. I try not to be so pessimistic on podcasts and things like that because I think it would be a real doozy. - It's important to have that perspective though, like you point out. I mean, there are sort of the very practical things that we have to get to if we want to, you know, regulate things that are being used on kind of a daily basis, these smaller things that make a big difference in individual lives, versus the gray goo and what do we do about that? Because even if we regulate that within one country, there's the rest of the world.

- Yeah, that's exactly it. And hopefully I tried to kind of cover that, which is like, you know, we could talk about foundation models a long time, but really what we also need to be focused on is just like general risk management and data management of these technologies that are the ones being used right now. So yeah, it's a strange time in policy and just generally, and I think with the election coming up, we didn't get into this much, but I think we're really in for one with misinformation and artificially-generated content and tools. And watermarking is a solution, but if something is fake with a watermark on it, it doesn't really do anything for anyone.

So I think, yeah, these video editing, I guess it is video editing and then sort of generation tools, are getting really powerful and robust. - And I feel sometimes like the misinformation topic has multiple sides, right, or it has multiple perspectives. One is that somebody is going to generate something that's not accurate, and so we better have some stops in place so that people can understand whether something is true or not, you know, actually happened or didn't actually happen. But the other side of it is, who now is getting to decide what's true, or, you know, saying this is misinformation or this is not misinformation? And so are we gonna put the power in the hands of the few to determine that or, you know, to regulate that? And so do you think about that side of it also? - The technology, at least to my understanding, and I'm writing a paper on this right now, isn't actually good enough to be able to provide an ecosystem-wide solution to any of this. But I think what we need to do is give consumers, who are already so politically disengaged, a better sense of where to get trusted information.

But back to the point I was making about the technology innovation: DeepMind created this thing called SynthID, where they're basically embedding like a watermark in photos generated on their developer tool Vertex, which is great for content from the developer tool, but, you know, if something's not coming from there, it's not really helpful. And so, I don't know, I feel like the industry almost needs to come together. I know they're doing some of this in the C2PA, that organization that Intel's been a part of for a while actually, with Adobe and others, to try to develop some tools for provenance and watermarking, but I don't think it's moving fast enough.

And even then, not to get too political, but I think that the way that politics and elections have gone down in this country in the last five years has created much bigger problems that technology won't be able to solve. People are more focused on these technology fixes than on these big issues, but those aren't the things that people want to talk about because they get too political. - Wow, Chloe Autio, Independent Advisor on AI Policy, Regulations and Laws, thank you so much for joining us today. I feel a little bit smarter right now, and also there is so much to go read up on to get smart on this topic. - You know where to find me if you need any help.

And thank you so much. It was so fun to join you again, Camille. And yeah, we'll see what happens this year and beyond.

(upbeat music) - [Narrator] Never miss an episode of "In Technology" by following us here on YouTube or wherever you get your audio podcasts. - [Announcer] The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation. (upbeat music)
