Innovation in Neurotechnology, Innovation in Governance?

-Hello and welcome, everybody, to Innovation in Neurotechnology, Innovation in Governance. I am Jozef Sulik. I am the Assistant Director of the Center for Science and Society and the Presidential Scholars in Society and Neuroscience at Columbia University. Normally, our chair, Pamela Smith, or our current interim chair, Carol Mason, do these introductions, but unfortunately, they couldn't be here today, so I am doing this instead.

I will give you a very quick rundown of the Presidential Scholars Program, and it'll be very quick, I promise. Then I will introduce you to our moderator, Dr. Madi Whitman. The Presidential Scholars in Society and Neuroscience Program brings together postdoctoral researchers with renowned faculty across neuroscience and other disciplines. The program promotes cross-disciplinary research and events, supports postdocs and mentors, and funds research projects. The program is housed at the Center for Science and Society, which is a dynamic hub that bridges traditional disciplines and advances public understanding of science. If you want to stay in touch with us, please do so either by signing up for our newsletter or through our social media channels.

You can follow us on Facebook, Instagram, or Twitter. You can also, if you are joining us on Zoom, join the discussion anytime by submitting your questions in the Q&A, and we hope you will. The speakers are really looking forward to your questions. For anybody in the room, just raise your hand at the end, and we'll come up to you with the microphone. The PSSN Program owes its success to Columbia's leadership, our dedicated steering committee, and faculty from across the university who serve on the advisory committee, mentor our scholars, and review applications, and we are grateful for all of the support.

Now without delay, I just want to introduce today's moderator and event organizer. Dr. Madi Whitman is a postdoctoral research scholar and assistant director of curriculum development in the Center for Science and Society at Columbia University. As a sociocultural anthropologist and science and technology studies researcher, Dr. Whitman studies data collection and surveillance technologies on college campuses. Dr. Whitman received a PhD in anthropology from Purdue University

and was previously a visiting research fellow at the Harvard Kennedy School of Government. At Columbia, Dr. Whitman is currently developing curricula in science, technology, and society. Please join me in welcoming Dr. Madi Whitman. [applause] -Thank you, Jozef. Hello, and thanks so much for joining us for what I trust will be a generative set of presentations and conversations.

Before I introduce the speakers, I want to say a bit about what inspired this event and how we've been thinking about it. Dr. Frahm will do some framing work momentarily, but here I want to use this image as an organizational device to trace a few of the questions and concerns that have structured this event. When we chose this image, which you've probably seen by now on the promotional materials, we sorted through a lot of images. iStock is a rabbit hole, let me tell you.

When we searched for terms like neurotechnology or brain data or innovation or even technology and ethics, our searches yielded all sorts of results such as what you see here. You'll perhaps notice a prominent blue aesthetic with floating dots, and there's a pervasive trope of the brain as machine, the brain as data, innovation as tied to technology, and innovation as a terrain to map and traverse. Moreover, these images are laden with implications about how people do and ought to interact with the brain via technology. We wanted something a little less overt, and so we kept looking and stumbled across this image you see here. What's immediately clear to me is what it isn't. It's not explicitly about the brain, data, technology, innovation, ethics, governance, et cetera, and the rest is unclear.

It looks like it could be organic material or it could be synthetic. It could be both. A reminder perhaps that these boundaries are not so rigid. Most of all, it reminds me of a shimmery piece of fabric, threads woven together, sharp and reflective in places and blurrier in others, folded some places and elsewhere entirely undefined. What does this have to do with neurotechnology and governance? One point of connection, for me anyway, is that this image, more than others, is a little more open to how we might relate to it and it comes into focus through the lenses we might apply according to our own points of view. In turn, the seminar approaches a massive question, that of how we can and ought to relate to humans and the brain in particular, vis-a-vis neurotechnologies.

Which points of view are urgent and which need more attention? What might we learn from the spaces that seem more and less defined? Which approaches to governance and by whom and for whom will steer how issues related to neurotechnology and its implications come into focus? We hope this seminar will be illuminating. Before I introduce the speakers in the order that they will present, I want to first thank the Center for Science and Society and the Presidential Scholars in Society and Neuroscience, and in particular thank Jozef Sulik and Caroline Surman for all of their work in making it happen. As stated, I'll present or introduce speakers in the order that they're going to present. First of all, we will have Dr. Nina Frahm, who is a postdoctoral researcher at the Department of Digital Design and Information Studies at Aarhus University, currently pursuing a research project on the making of AI ethics in the European Union.

As a scholar in the Social Studies of Science, Technology, and Society, STS, she has been working on differences in governance approaches to emerging technologies across countries and institutions and in a variety of domains, including biotechnology and neurotechnology. Previously, she's been a doctoral researcher at the Technical University of Munich, a research fellow at the Harvard Kennedy School's STS program, and a junior researcher at the University of Quilmes, Argentina. She regularly translates her work for policy and wider audiences and has served, among others, as an internal consultant for the OECD's Working Party on Bio, Nano, and Convergent Technologies. Next we'll have Dr. Riki Banerjee,

who is the Vice President of Research and Development at Synchron, a neurotechnology startup with a mission to commercialize the first implantable brain-computer interface, BCI, for millions of people with paralysis to reconnect with the world. Dr. Banerjee came to Synchron with experience commercializing world-class neurotech products that treated many different conditions such as Parkinson's disease, chronic pain, and urinary incontinence. She's passionate about bringing together people with deep expertise and creating team culture that drives impactful neurotech innovation and makes a difference in people's lives. Before joining Synchron, Dr. Banerjee held engineering and management positions

at 3M company and Medtronic in Minnesota. She earned a PhD and a master's in electrical engineering from the University of Minnesota in 2005 and 2003 and her bachelor's degree in electrical engineering from the University of Wisconsin-Madison in 2000. Third, we will have Dr. Khara Ramos who is a neuroscientist with extensive scientific management and communications expertise.

Dr. Ramos oversees the Dana Foundation's strategy and programs in that area. Before joining the Dana Foundation, Dr. Ramos was the inaugural director of the neuroethics program

and led the neuroscience content and strategy branch in the communications office at the National Institute of Neurological Disorders and Stroke at the National Institutes of Health. Dr. Ramos was instrumental in establishing the rapid growth of neuroethics efforts for the NIH BRAIN Initiative, positioning NIH as a global leader in the emerging field of neuroethics. Finally, we will hear from Dr. Rafael Yuste,

who is a neuroscientist who studies the cerebral cortex at Columbia University, where he holds the positions of Professor of Biological Sciences and Director of the Neurotechnology Center. He has pioneered the development of imaging technologies such as calcium imaging of neuronal circuits, two-photon imaging of spines and circuits, photo-stimulation using inorganic caged compounds, two-photon optogenetics, and holographic microscopy. He led the team that proposed the US BRAIN Initiative, launched the International BRAIN Initiative, and led the Morningside group of 25 researchers and clinicians who proposed novel human rights, neurorights, to protect citizens from neurotechnologies. He has received awards for his research from the mayor of New York City, the Society for Neuroscience, the director of the US National Institutes of Health, and the Tallberg Eliasson Global Leadership Prize. We're so excited to have you all with us, and at this point I will ask Dr. Frahm to come up to the stage.

[applause] -Thank you also from my side for being here and taking the time. Thanks to all my co-panelists for making it in time. Thanks to Dr. Whitman for gathering us all here and guiding our discussions, and to Jozef Sulik for actually organizing all of the background work that has led to this event.

What I want to do as a kind of framing or introduction to today's event is to give you a little bit of an overview on the different experiments that have been done across countries and institutions in the last couple of years when it comes to the interface of neurotech innovation and governance. My work has looked closely into these differences and commonalities, how countries, institutions, and different actors have dealt with the various issues that come up when we think about neurotech innovation. I won't spend too much time on my own work, but more try to give you a little bit of an overview of this growing and rapidly transforming landscape not only in the technological field but also in the different policy approaches that actors and institutions are coming up with.

I think that my co-panelists are much better equipped to talk about the different neurotechnologies that are rising at the moment. I am not the expert on this, but I'm the expert on the policy side. I would just say that I understand neurotechnologies as a range of technologies concerned with monitoring and/or intervening in the human brain and the nervous system. These technologies have been increasingly developed, researched, and funded since the 1990s, which were declared the Decade of the Brain by President George H. W. Bush. Since then, we have seen all kinds of promises in innovation in neurotechnology, and with these promises, increased investments from both the private sector and the public sector, the latter through initiatives such as the BRAIN Initiative, the Human Brain Project, and other publicly-led science research and development projects across the world.

Neurotechnologies are not only promised to fix all kinds of things related to neurological and psychiatric disorders such as Parkinson's, depression, schizophrenia, PTSD, and related issues, but are also increasingly promised to fix problems in educational systems and workplace environments, and to inspire increasingly intelligent machines such as AI. These promises are also accompanied by all kinds of questions and possible scenarios about the risks that these neurotechnologies might raise. Issues related to neurotechnology innovation have been discussed at all kinds of levels, but have mostly revolved around questions such as: How can we draw the line between treatment and enhancement? How can we anticipate the use of neurotechnologies beyond medical and clinical purposes? Who will have access to these technologies and these treatments, and what will they cost? What is the public and what is the private benefit? For what purposes could neurotechnologies be used in military and security contexts? And particularly right now, and I think Rafa can talk about this extensively, how are we to protect the data that we are increasingly deriving from the brain? There is really no overarching or encompassing regulatory framework for all of these possible scenarios that neurotechnologies raise for the future. One of the actors I interviewed for my work described the situation like this: "We're pushing the frontier in terms of regulation, ethics, standards and things like these. These scenarios currently have no frame and are open to all kinds of abuse. We understand this is something special."

The idea that this is something special, that neurotechnology and the very different governance issues it raises require new responses, is taken up quite differently across the countries and institutions invested in pushing innovation in neurotechnology forward. Experiments have been particularly prominent in the countries putting the most funding into neurotech innovation, including the US, where we have seen a new form of ethical expertise emerge that Khara, I think, can speak to at length. This new form of expertise is called neuroethics, and it is a further development of the broader governance framework of bioethics. It has been led particularly by neuroscientists who are increasingly concerned with the use of neuroscience and neurotechnology in society. These neuroethicists have by now been integrated, to a greater or lesser extent, into initiatives such as the BRAIN Initiative, and they have traditionally followed a principlist ethics approach, meaning that really high-level principles such as privacy, autonomy, agency, and so on are declared as key for the development and use of neuroscience in society.

What is characteristic of the neuroethical approach is that it traditionally delegates questions of governance to neuroscientists and the users of neurotechnologies themselves, with the belief that by following these high-level principles, such as agency and autonomy, innovation in neurotechnology can actually be delivered in ethical ways. This is quite different in the European Union. The European Union has focused deliberations around issues in neuroscience and neurotechnology through all kinds of exercises in public engagement. Very early on, we have seen that citizens in the EU have been put somehow in the driver's seat of deliberating not only what the issues are but also who should be responsible for governing neurotechnologies. It was already in 2006 that we had the big so-called Meeting of Minds, where hundreds of citizens from all across Europe were brought together to deliberate, in a months-long process, about what the issues in neurotechnology innovation could be and who should actually take care of them.

The 37 recommendations that came out of this process actually emphasized not so much the ethical or moral dimensions of neurotechnology innovation, but more the political dimensions that underlie these innovations, and have, amongst others, called for an active role of the public sector and political institutions in steering and regulating these technologies. Rather than leaving these questions to scientists and the users of these technologies, they have really called for politicians and policymakers to take an active role in regulation. For example, when it comes to the possible dual use of brain research, citizens in the EU have been quite clear in saying that they prefer that public funds for neurotechnology innovation should only be used for civilian purposes and not for military and security ones.

This is a case that I think Rafa can talk about better than me, but in Chile, a country that has been pioneering a very different approach to governance in neurotechnology, we have seen the rise of so-called neurorights. In a recent reform process of the Chilean constitution, Chile tried to amend its constitution with an article that emphasized, as I will read to you, the protection of the physical and mental integrity of individuals, the privacy of neuronal data, the right to autonomy or liberty of individual decision-making, and the right to fair access to those neurotechnologies that enhance mental capabilities. Although this constitutional amendment was eventually voted down last year, Chile continues to play a pioneering role in the neurorights-based approach. Particularly striking has been a case that was actually resolved last month, I think in October, in which the Supreme Court in Chile found the neurotech company Emotiv in breach of the protection of brain data derived from its customers. The idea that neurotech should be governed through a particular set of new human rights, or at least through the human rights frameworks we already have, is being discussed at various levels, including UNESCO, the United Nations, and the EU, but has not yet led to an international agreement or a really new set of frameworks with legally binding character.

Rather, we have seen agreement at the international level through so-called soft law, that is, legally non-binding norms, particularly pioneered through the Organisation for Economic Co-operation and Development, or OECD for short. They have developed a Recommendation on Responsible Innovation in Neurotechnology, which really tries to bridge the neuroethics and public engagement approaches of the US and the EU, but as these are legally non-binding norms, it is very hard to say how the implementation of this recommendation is going, and whether countries are really engaging in mutual learning processes through this recommendation or not. Interestingly, all of these initiatives have been led and driven by the public sector, which has been, as I said, a big funder of neurotech science and innovation. What is currently missing is a more active role for the private sector, and I'm really interested to see what Riki has to say to this. Although there have been efforts to gather neurotech companies, and particularly startups, to discuss governance challenges and to come up with solutions, or at least contributions to the development of frameworks for governance, large questions remain when it comes to how to mobilize the private sector for responsible innovation in neurotechnology. We need better insights into how the companies and startups at the forefront of neurotech innovation actually envision what an ethical, more responsible future for these technologies might look like, and also what challenges they see for themselves in putting different governance approaches into practice.

If we compare all of these different experiments, which I have tried to give you a quick run-through of, I think we can see how these approaches and instruments are situated in different contexts, not only political cultures but also institutional contexts; how there are large differences in how neuroscience and neurotechnology are being framed, that is, what they are good for, what kinds of futures they can lead to, who can benefit from them, et cetera; but also in what the challenges and issues are. These framings, in turn, define what justifies governance intervention, at which level, and through which kinds of measures, whether legally non-binding principles, soft law, or a really new set of human rights. Eventually, these differences in governance approaches come down to asking who really should enjoy authority in deliberating on the future of neurotechnology, who has the power to design different approaches, and who will actually do the decision-making in the future.

As I've shown you, there are large discrepancies between thinking that the scientific community, for example, should be leading these discussions, or that scientists themselves should be in the driver's seat when it comes to addressing these questions. With that, I'm looking forward to the next presentations, and thank you very much. [applause] -Thank you. I'd like to invite Dr. Banerjee to come up. -Thank you. Thanks for the invitation. All right. I don't have slides prepared,

but I thought I would just share a little bit about myself and how I came into this field. I was very interested, certainly, in doing something meaningful, but I also really wanted my work to be used by people. I came from a research background. I did a PhD in electrical engineering

and started my career in corporate technology, which is sometimes much more applied than university technology. Even so, after four years in a corporate technology group, I just wanted to create something that I could see used by lots of people. That's what drove me to where I ended up, at Medtronic neuromodulation. It's kind of interesting in the context of what we're talking about today because Medtronic has been around for a long time. I think it was founded in 1949, and its founder was the inventor of the pacemaker. Building on the pacemaker, about 40 years ago that technology was transitioned to start stimulating other areas of the body, and that became neuromodulation.

Deep brain stimulation, or brain modulation as it's sometimes called, has been around for that long. That technology was brought over to treat movement disorders. If you're not familiar, and I'm sure a lot of people in this room are familiar, it's a pacemaker-like device: the lead wire comes up the neck and is implanted deep in the brain, in an area called the subthalamic nucleus, where it introduces stimulation that can dramatically change someone's tremor response. If you look online, you'll see patients with severe tremor who are then calm with that stimulation. In addition to that, there are now devices for spinal cord stimulation.

I worked in the urinary incontinence area. I worked in gastroparesis, which is a terrible disorder of intractable vomiting, and got that experience of having my work go out to people. I think at this point, the work that I have done is being used in tens of thousands of people, certainly, if not more.

It definitely fostered in me a love of this neurotechnology space. I came over to Synchron a couple of years ago. When Tom Oxley, the founder of Synchron, reached out to me, I was like, "What is this about?" Then I was like, "Wow, this is an opportunity to do something truly, truly very impactful for a population that is very much in need." If you're not familiar with Synchron, Synchron is an implantable brain-computer interface company, and as I was describing, everything that I worked on is implantable neurotechnology, which is a little bit different, because if we look at the broad spectrum of neurotechnology, implantable is just one slice, and it is very heavily regulated. Maybe there's more to be covered there, but it is very heavily regulated.

We operate with a lot of regulations in mind. Anyway, so when Synchron contacted me-- Oh, maybe I'll just describe what Synchron does a little bit. In the brain-computer interface domain for implantables, one of the challenges, of course, is the cranium, which introduces a big barrier to accessing those brain signals.

A lot of brain-computer interface research involves a craniotomy: you remove a portion of the cranium and you actually apply electrodes to the surface of the brain. When you look at the research, you often see an image of a patient, often severely paralyzed, so they're in a wheelchair, but right next to them is a huge rack of mounted equipment. A lot of data processing is needed to do something with that information, and you often need scientists sitting with that patient. What Synchron is striving to do is, first of all, make that procedure more accessible and amenable, and then also make it so that the patient can go home and use this device and open up their ability to communicate. Our utility is fairly simple.

Our device is introduced through the vasculature. It's building upon neurovascular technology that's used for strokes. The device is introduced through the jugular vein and it sits at the superior sagittal sinus at the top of the head, basically. That's where it's kind of capturing some high-level motor function.

What are we doing? We're just creating a mouse. If somebody's paralyzed, they can't actually move their ankles or tap their fingers, but if they're thinking about it, they can still activate those neurons, and our device can pick that up and bring it out to a computer. If you have a mouse, now the world unlocks. You think about interfacing that with an iPad, and suddenly you can turn on the lights. You have this autonomy restored to you of interacting with your environment. You can change the temperature, especially with the internet of things these days and so many things that you can control through an app.

You've restored autonomy to this population of patients who are often becoming more and more locked in and losing their ability to communicate, even to tell their loved ones that they love them. It's a very, very meaningful opportunity. Coming from where I was, I really saw this technology as having the possibility of being scalable and accessible for use by a significant population. How do I think about governance and neurotechnology? I hope that the headlines and the fear, aspects of which are real, don't inhibit our innovation for these people who are in need. Some of you saw me kind of rushing in with my suitcase. Where was I this week? I was with two of our patients.

I could share a little bit of my experience there. We've now implanted this device in 10 people across the world. Six of them are located nearby in this New York and Pennsylvania area. You see that there just aren't very many options for these people.

We're trying to figure out what that care pathway looks like, and we're like, "Who are the doctors that manage these people that have severe needs? What does insurance look like for these people?" They're in a situation where there aren't a lot of opportunities. These people are severely paralyzed, locked in the bed, and insurance won't pay for caregivers for them. We don't have a recognition of what these people need.

They don't have a voice in some fashion. From a technology standpoint, we think, oh, you go through rehab and all these different things. A lot of the infrastructure there for this severely affected patient population really doesn't exist.

We're in a new domain where if we can take some amount of neural signaling and restore that function, restore that capability, we can really help a patient group that's in dire need. I hope we can find a pathway that still enables us to keep innovating for those in need. [applause] -Thank you very much, Dr. Banerjee. I'd like to invite Dr. Ramos to come up. -All right. Good evening, everybody. It's great to be here. Thank you to the organizers

for the invitation to be part of the panel. My name's Khara Ramos. I'm Vice President for Neuroscience and Society at the Dana Foundation. I am going to share my bottom line up front. I think that effective governance of what I would say is neuroscience and neurotechnology should include these three principles, namely that such governance would be proactive, inclusive, and practical.

I'll say a little bit more about each of these, but first, I just wanted to make the note, and this is related to the excellent overview that Nina gave earlier, that governance is not just formal regulation. I think that's a really important point, and this framing from the OECD is really helpful. They wrote: "In the broadest sense, technology governance represents the many ways in which individuals and organizations shape technology and how technology shapes social order."

With that said, I am going to just step through each of these three principles and say a little bit about how my work and perspectives intersect with each of them. First, this idea of being proactive. I think that this is something that I really lived and breathed when I worked at NIH between 2012 and 2021. You heard a little bit about the BRAIN Initiative already, but for those who aren't familiar with it, it is a flagship neurotechnology effort funded by the NIH that's aimed at revolutionizing our understanding of the human brain.

It's a really big effort. The budget this year was $680 million, and it's focused on developing tools and technologies for recording and modulating brain function at the level of cells and circuits with really unprecedented spatial and temporal resolution. There's a strong emphasis in the initiative on human neuroscience, both basic neuroscience that's done largely with neurosurgery patients, as well as translational and clinical neuroscience that focuses especially on neural devices along the lines of what Riki was just talking about. In the original blueprint for the BRAIN Initiative, this statement was included: "Because the brain gives rise to consciousness, our thoughts, our most basic human needs, mechanistic studies of the brain have already resulted in new social and ethical questions."

I think what you're seeing here is that even before the initiative had started, there was this preemptive, prospective, proactive raising of this issue of social and ethical questions. In fact, there was a directive from President Obama, under whose administration the BRAIN Initiative launched, to do exactly that: to proactively engage with the social and ethical issues that this work would raise. That is essentially the frame for the work that I led at the BRAIN Initiative around neuroethics. I was involved in this proactive neuroethics work, wherein we stood up a neuroethics working group and we established a robust neuroethics research portfolio.

We worked to facilitate collaborations between neuroscientists and ethicists so that as discovery was unfolding at the bench, if you will, there were ethicists there to be able to discuss with those researchers where that science might go and, again, proactively engage with those broader implications. We also did an annual scan of the neuroscience research portfolio to preemptively identify ethical questions that were coming. We funded research on those questions. We organized workshops and wrote papers on these topics, and the images here just show some of the types of issues we were engaged with, including some of the points that Riki was making. How do we think about insurance payers covering these technologies? Who's responsible for care? This is something that [?], who's in the audience, who was one of my grantees and is now part of my team at Dana, has worked on.

What do we owe patients who are in neurotechnology trials when the trial ends, when often their insurance won't cover maintenance of the device, explantation of the device, or continued care for that patient? Again, just this idea that we need to preemptively, proactively engage with these governance issues. The second principle I want to talk about is this idea of being inclusive in governance. I think that there's this persistent myth around something called the deficit model, which is the idea that public audiences just don't know enough about science, and if they did, they would agree with the outcomes that we are seeking as scientists. In this context, the frame would be that you might encounter public resistance to technology advancement, and that it stems from a lack of information and understanding among those public audiences about what technology could do.

I call it a myth because social scientists have shown that, in fact, public resistance or even skepticism about advances in science and technology typically stems instead from things like value conflicts, concerns about equitable distribution of and access to technology, and often distrust of regulatory institutions that are seen as untrustworthy. The upshot of this is that you have a disconnect between those who are doing the science and technology and those who stand to benefit from it. The Dana Foundation's mission is very connected with this point in that we're trying to help bridge that gap, that disconnect. Just to say a little bit about the foundation: we are a private grant-making foundation based here in Manhattan.

Our vision is brain science for a better future. That comes from the recognition that you can find so many examples in history where science has produced both salutary, or positive, effects and negative effects on human well-being and flourishing. Our mission is to advance neuroscience that benefits society and reflects the aspirations of all people. We believe that neuroscience is advancing rapidly, often outpacing public dialogue about both the potential harms that could come from neuroscience and neurotechnology, and, I think more interestingly and more hopefully, the potential benefits that could come from this science and technology.

I'm really excited about the idea of co-creating and imagining together what kind of futures we want to build with the power that is latent in neuroscience and neurotech. To help support that, at the foundation we do grantmaking in three areas: the Dana Frontiers program, the Dana NextGen program, and the Dana Education Program.

These focus on education, training, and public engagement on neuroscience and society issues, as we put it. I won't go into detail now for the sake of time, but you can certainly read more about this on our website. The point that I really want to underscore here is that we believe public engagement is a critical component of neurotechnology governance. You heard about this a bit from Nina earlier; this is something that has been emphasized in the EU.

Turning to the last point, the idea that neurotechnology governance ought to have a practical frame, I'll explain what I mean by this. When you think about neuroscience and its ethical or societal implications, these may be the kinds of headlines you think of: Neuralink, mind reading, scientists growing mini-brains in the lab that might be conscious (they're not).

The point that I want to make here is that governance issues encompass so much more than those emerging neurotechnologies and the hype often associated with them. I've shared here a different set of headlines that I think speak to this point about the practical governance issues we need to engage with, relating to things like, again, distribution of and access to the fruits of neuroscience and neurotechnology. The last thing I wanted to share is just another example of what I mean by this practical frame.

This is a focus area that is of particular interest to the Dana Foundation: the intersection of neuroscience and the law. I've included here a few questions related to this intersection. For instance, what are the ways that neuroscience is utilized in the courtroom? What can neuroscience tell us about human behavior? What can it not tell us about human behavior? What challenges do our emerging understanding of the brain and emerging neurotechnologies pose for judicial processes? What training should members of the judiciary have in neuroscience? This is something I've been aware of for a number of years, but recent opportunities to speak directly with folks in the judicial system have really driven home for me how these questions have such present-moment impact on people's lives, in terms of who's getting incarcerated versus who's getting diversion programs that give people a chance to recover through treatment, for instance, for mental illness or substance use disorders. Again, this is just an example of a really important practical neuroscience and neurotechnology governance issue. That's it. [applause] -Thanks so much, Dr. Ramos.

Our final presentation will be from Dr. Yuste. -Thank you. Thank you for inviting me. Maybe we should define what neurotechnology is, what we are talking about. You can define neurotechnology as methods, tools, or devices that do one of two things: either record the activity of neurons, or of the central or peripheral nervous system, or change the activity of neurons. That's it. These devices, these tools, can be of many types.

They could be electrical, they could be optical, they could be acoustical, they could be magnetic. They could be based on nanophysics. But at the end of the day, there are only two things you do: either record brain activity or change it. Why is neurotechnology important? Well, it turns out that the brain is not just another organ of the body; it happens to be the organ that generates all of our mental and cognitive abilities.

I'm talking about our thoughts, our perception, our memories, our emotions, our imagination, our decision-making, even our subconscious. None of it comes out of thin air. It comes from the activity of these neural circuits that we have in the nervous system. If you have a technology that lets you record brain activity and change brain activity, by definition, that technology enables you to record mental activity and change mental activity. This has three major implications. First of all, the use of this neurotechnology has a major benefit in science.

Neuroscientists like ourselves and many people around the world are trying to understand how the brain works, curiosity-driven basic research. This technology actually lets you get into the brain and finally measure what's going on and be able to change it and manipulate it. It's fantastic. Neurotechnology is the driving force of neuroscience. The second major benefit has to do with the clinic. As you know from personal experiences with family members or friends, brain diseases have essentially no cure.

I'm talking about both neurological and mental diseases. There is very little that we can do as doctors to help these patients, because we don't understand the function of the organ, what we call the physiology. We cannot figure out the pathophysiology of the disease. We're essentially flying blind, and in spite of the heroic efforts of psychiatrists and neurologists, there's very little we can do for these patients. Obviously, neurotechnology can enable us to get into these diseases, the dark corner of medicine that is brain diseases, and understand the pathophysiology in order to figure out better ways to diagnose and hopefully treat them.

The third reason why this neurotechnology is important has to do with the economy: the ability to build neurotechnological devices that connect the brain directly with computers. Such a device, a brain-computer interface, is widely viewed as the next step to replace these gadgets that we have in our pockets with some sort of headgear that we'll carry through our lives, that will enable us to connect to the internet, and that will very likely lead to a change in the human species going forward, where some of our brain processing will actually happen outside our bodies. This is a vision of technology that is, I would say, commonly sponsored by all the major tech companies in the world. Because of these benefits, scientific, medical, and economic, President Obama launched the US BRAIN Initiative.

That led to a revolution in neurotechnology, not just because the US is injecting an estimated $6 billion into the BRAIN Initiative over a period of 12 years, currently supporting about 550 labs around the country and the world working on developing these devices, but because the US BRAIN Initiative led to similar initiatives in other countries: in China, Japan, South Korea, Australia, Canada, Israel, and the European Union. They all came up with similar large-scale projects to develop neurotechnology for these same reasons, scientific, medical, economic, and in some cases also military, as with the Chinese BRAIN Initiative, which is run by the army. In principle, this is all good. We have BRAIN Initiatives going on all over the world.

We have these great benefits of neurotechnology. We should all be happy. The problem, as I was saying earlier, is that this is not just any organ of the body. This happens to be the organ that generates the human mind. In experiments in laboratories like our own, for about 15 to 20 years, neuroscientists have been developing new technology with laboratory animals to decode and manipulate brain activity with increasing precision. This is not science fiction.

We do this all the time in the lab. In fact, our laboratory is one of the places that is well known for developing this type of technology, even implanting images into the heads of mice and making them behave as if they were seeing something that they're not seeing. We don't do this because we want to play with mice, but because we want to understand how the brain works and how to help patients. Because of these concerns, six years ago, 25 experts representing all the BRAIN Initiatives from all over the world, from all the countries that I told you about, met here at Columbia, in the Northwest Corner Building next door. There were experts from the clinic, neurologists and neurosurgeons, experts from ethics, philosophers, and also representatives of Silicon Valley, of neurotech companies and technology companies, to study the ethical and societal issues of neurotechnology. We call ourselves the Morningside Group because we met here at the Morningside campus.

We met for three days, and we were inspired, by the way, by Pupin Hall, which is literally next to the building we're talking in. The reason we were inspired by that building is that in the basement of Pupin Hall they built the first atomic reactor in history. That was the beginning of the Manhattan Project; that's why it's called Manhattan. It started right here, literally. What you should know is that the same physicists who built the atomic bomb were the first ones to ask the UN for international regulation of atomic energy.

Through their lobbying, the UN, driven by President Eisenhower, who was also the president of Columbia, created the International Atomic Energy Agency in Vienna, which has controlled and regulated atomic energy since then, fingers crossed, without any mistakes. Inspired by the history that happened here, we in the Morningside Group thought that we also had a responsibility, as developers and users of this technology, to alert society to the potential risks and the ethical and societal issues. We came to the conclusion that this is a human rights issue.

In fact, if this is not a human rights issue, what is? Because you have a technology that lets you get to the core of what makes us human, which is the human mind. This is the first time that humanity has a technology that lets you get into the brain, decode this information, and manipulate it with increasing precision. We coined the term neurorights, or brain rights, and proposed a series of five areas of concern that should be protected. Mental privacy, so that the data from your brain activity cannot be decoded without your consent. Our own personal identity, our mental identity, because the self, who you are, is actually, guess what, generated by the brain, and with neurotechnology you can actually change that. You can change the personalities of people.

There are already cases of that, and we thought that this should be a basic human right, that no one should mess with your own personality. We also argued for protection of our agency, that this should have human-rights-level protection. This is the same thing that people have called cognitive liberty, or our liberty, our free will.

In different countries, people use different terms, but the idea is that your decision-making should happen based on your own brain, not based on external influences through neurotechnology. We also proposed equal access to neural augmentation, so that when the time comes for humans to augment ourselves mentally, this is ushered into the world under the universal principle of justice, and also protection from biases in the information fed by neurotechnology into the brain, because that information goes right into the middle of the organ that makes you who you are. That injection of potentially biased information is much more serious than the biased information that we can pick up from reading the newspapers or the newsfeeds on the internet. That was the idea of the neurorights. We wrote a paper that was published in Nature in 2017.

Since then, we've been working to promote that agenda of neuroprotection, of neurorights. We created a foundation right here, called the NeuroRights Foundation. This foundation has three goals: to do research, advocacy, and outreach with respect to neurorights. In terms of research, together with our colleagues in the foundation who have backgrounds in law and international human rights, we've generated three reports.

One is a gap analysis of the eight major existing human rights treaties in the world with respect to these five areas of concern of neurorights. The conclusion of this gap analysis, which was actually commissioned by the secretary general of the UN and which we published last year, was that the current human rights treaties are insufficient to cover these areas of concern; they have to be remodeled, or beefed up, or something has to be done to bring human-rights-level protection to brain activity and brain data. We also generated a second report, a market analysis of current investments in neurotechnology, and our conclusion is that last year the global market for neurotechnology was $23 billion. That outspends by far the combined neurotechnology investment of all the BRAIN Initiatives that I discussed earlier.

Now we're living in a time where neurotechnology is starting to be developed more and more by the private sector rather than the public sector. We are also finishing and publishing an analysis of the consumer user agreements of 30 neurotechnology companies in the US and around the world. A consumer user agreement is the fine print that nobody reads, where you have to click "I agree" before you turn on a device or download the software. Our legal team actually went through every word of every consumer user agreement, and we found that, out of the 30 companies we investigated, every one of them took possession of all the brain data of the user.

In fact, the majority of the companies also authorized themselves to sell this brain data to third parties. The bottom line: brain data could not be less protected than it is, and the reason is that there's no regulation for neurotechnology anywhere in the world except Chile. Let me tell you about our advocacy work. We've worked with different countries and international organizations, and I should mention the Chilean case, and I have to correct something that Nina said. Two years ago, Chile unanimously approved a constitutional amendment to Article 19 of the existing, valid Chilean constitution. That amendment protects brain activity, and the information that comes from it, as a basic human right of Chilean citizens.

This constitutional amendment was approved unanimously by the Senate of the republic and by the House, and signed into law by President Piñera exactly two years ago. That amendment was, until last week, the only piece of hard law in the world that protects brain activity. That amendment has not been recalled. It's valid. In fact, that was the reason that in August this year we had the first jurisprudence case on neurotechnology, in which a citizen of Chile sued an American neurotechnology company because they took his brain data, and he argued he was protected by the Chilean constitution. The Supreme Court in Chile unanimously ruled that the citizen was right, and it forced the company to delete the brain data and to subject itself to inspection by the medical authorities.

This is the first case in the world where the brain data of a person is actually being protected by some sort of hard law, with the court's agreement. Besides Chile, we've been working with different countries. There's a very similar constitutional amendment in progress in Brazil, sponsored by the Senate of the republic, which should be voted on within the next few months. This will provide similar protection.

In this case, it's the famous Article 5 of the Brazilian constitution, which is the most important article of that constitution. In Brazil two weeks ago, I was just there, I heard the good news that the state of Rio Grande do Sul, which is one of the most progressive states in Brazil, unanimously passed through its legislature a law that requires every future law of that state to comply with the principle of protecting brain activity and brain data. This state of Brazil is the second place in the world with hard law already active, where citizens have protection of their brain data and brain activity through the legislature. For that piece of legislation, there was unanimous agreement across 16 different political parties in the state of Rio Grande do Sul. This has never happened before. Just like in Chile, they never agreed on anything, except for this protection of brain activity.

I should also mention there's a similar constitutional amendment in progress in Mexico. In terms of other hard law, we're working with the European Union on the interpretation of the GDPR, which is, as you know, the most famous data protection law in the world, and probably the most rigorous one. You cannot amend the GDPR, it's just not technically feasible, but we're working on the possibility of extending the interpretation of the GDPR to cover brain data. This would be one way in which the European Union could actually lead, by updating its data protection law to include brain data.

Finally, I should mention that in the US, we're working with the state of Colorado, which has a bill being crafted that would extend to brain data the protection afforded by the Colorado Data Protection Act, which is already in place. It's a strategy similar to the European GDPR one: take a law that protects data and include brain data as a specific case, as part of the interpretation of that law. This Colorado bill will be voted on in February, so it's possible that Colorado will be the first state of the union where there is protection for brain activity. Finally, I should mention we're also working with the UN. You've heard from Nina's talk about some of these initiatives; the one I would mention in particular is the Human Rights Council, the group of the 64 countries that have signed the most important human rights treaty in the world, the International Covenant on Civil and Political Rights.

This is the human rights treaty that protects freedom of thought, and they are considering a special opinion on that treaty to include mental privacy among the areas within its purview. This is something that will be voted on, I think, also in February next year. Finally, in terms of outreach, besides a series of international meetings that we've organized here since 2016 on this topic, we've also worked with the German director Werner Herzog, the documentary filmmaker, to make a documentary on neurotechnology and the ethical issues of neurotechnology. This documentary is called The Theater of Thought. You cannot yet stream it.

We're trying to sell it to a studio so that you can watch it in the comfort of your living room. We got authorization from Herzog to screen it for organizations that are involved in scientific, medical, or governance issues related to neurotechnology, so if any of you is interested, or belongs to one of these organizations that would be interested in screening this documentary, we can facilitate that. With that, I finish. Again, I'm delighted to be here and, as I said, to be talking in the building across the courtyard from where it all happened with the Morningside Group. Thank you. -Wonderful. Thank you all so much

for sharing your perspectives and approaches. I'd like to ask you all to come up now so that we might commence a bit of a panel discussion before we move to some questions from the audience. Great. First I would like to give you an opportunity

to respond or react to each other, if that might kick off a discussion, or I most definitely have some questions prepared for you that I hope play really nicely off all you've offered to us. -Thank you. I will try to talk as loud as I can, but I don't want to scream at you. -The audience? -Yes. I was also addressing my co-panelists, I'm sorry. Really, the people on this panel have been pioneering different approaches when it comes to this really new field, but also, of course, a field with a history, as you've heard.

Now I'm getting feedback because I'm talking so loud. [laughs] I think it's wonderful to hear from them. I would love to hear more about not only how successful they were in pushing through all of these exciting approaches, but also what the challenges are, because this is the most difficult part, not only in terms of bringing new technologies to market that can eventually help people, but also in terms of bringing new governance approaches into a landscape that is highly competitive and, in particular, highly driven by economic competitiveness. Maybe we want to start the discussion off with that, if you don't mind. -Sure. I can go first.

I think I am very much in the nitty-gritty of what it takes to create scalable neurotechnology, and it's very challenging. I mentioned that a lot of technologies are outside of the brain, and there's a cranium there that mutes the brain activity. Just getting that access and getting sufficient signal is really challenging, and then there's decoding it: what does it mean? I think a lot of times we see someone using a brain-computer interface and doing something amazing, and you think about it very literally. Even Eddie Chang's work: he is decoding speech, and there's a lot of press about that, but what he's decoding is your motor function. When you formulate words, there's motor activity happening, and he has figured out how to decode that motor activity.

When you're decoding motor activity, you're decoding something that's very visible. Even with the Neuralink technology, it's all pretty much on the cortex; a lot of this stuff is motor function, and so it tends to be very visible. If somebody's thinking, oh, I'm going to move my ankle, or I'm going to tap my finger, if they were able-bodied, you'd see it. That's a thing that you would see. I do think it's an important topic to talk about and think about, but from a very practical, pragmatic standpoint, one of the challenges is that the heartbeat is also so loud, it just swamps out your brain signal all the time, and creating a device that can manage that is one of the biggest challenges.

Then I mentioned the data processing. If you're really going to get every neuronal action potential out, you need the ability to process that data. The world will get better at processing data and whatnot, but there are still some practical realities to that. -If I can follow up.

One good piece of news is the realization that the brain is heavily interconnected; in a few synapses you can go from any neuron in the brain to any other neuron in the brain. Traditionally, we thought that different parts of the brain do different things and that function is essentially localized. More and more, through the development of new technology, we're finding that you can decode pretty much anything from anywhere. A former student, who actually started right here, built a company in Manhattan called CTRL-labs, where they've built a bracelet that indirectly records your brain activity from the cortex through the information that gets relayed to the spinal cord. They can even pick up cognitive information from your wrist. I think this is wonderful because, again, it has fantastic applications for medicine and can now enable non-invasive devices.

Just like atomic energy, the technology is neutral. You can use it to decode the thoughts of a person who has Alzheimer's and cannot speak, or is paralyzed, or you can use it to decode the intentions and thoughts of someone you want to monitor. You need to provide both the tools and the governance model. One thing I would highlight is that in the last year there's been a revolution in neurotechnology, not just because the devices are getting better and better thanks to the BRAIN Initiative and all the other investments going into the field, but particularly through the application of generative AI models. This has taken everyone by surprise.
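The claim just made, that decoding gains can come from algorithms rather than hardware, can be illustrated with a toy sketch. The data and both decoders below are illustrative assumptions (nothing like the generative models under discussion): the same simulated multichannel recording is decoded twice, once with a naive single-channel threshold and once with a least-squares decoder over all channels, and only the algorithm change improves accuracy.

```python
import numpy as np

# Toy sketch only: data and decoders are illustrative assumptions.
# The simulated "recording" is fixed; only the algorithm changes.
rng = np.random.default_rng(1)
n, d = 400, 8                             # trials, channels
w_true = np.linspace(0.2, 1.0, d)         # how intent spreads across channels
y = rng.integers(0, 2, n)                 # intended action per trial: 0 or 1
X = np.outer(y - 0.5, w_true) + 0.8 * rng.standard_normal((n, d))

# Naive decoder: threshold a single channel.
acc_naive = ((X[:, 0] > 0).astype(int) == y).mean()

# Better algorithm on the same data: least squares over all channels.
w, *_ = np.linalg.lstsq(X, y - 0.5, rcond=None)
acc_linear = ((X @ w > 0).astype(int) == y).mean()
```

The real-world analogue is stronger still, generative sequence models bring priors over language and movement that linear decoders lack, but the structure of the point is the same: better decoding from the same electrodes.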

Now, with generative AI models, you can decode information that you couldn't decode a year ago. A lot of the key papers have been published in the last few years, including Eddie Chang's; the only difference is not the hardware, it's actually the algorithms. Again, this is an opportunity for research, for the clinic, for the economy, but also a risk, because it potentially exposes more of our mental privacy to unintended uses of neurotechnology. -Pass the microphone here.

I think a question that maybe follows up from that is about other forms of emergent technologies outside the domain of neurotechnology, such as artificial intelligence or biotech; you've just pointed out that there isn't a firm boundary between these areas. What governance challenges in the context of neurotechnology are different from, or similar to, governance challenges in relation to biotech or AI, and what lessons might we learn from the way the governance of AI in particular is playing out right now? What kinds of implications does this have for the governance of neurotechnologies? -Actually, if you let me, I have an anecdote that speaks to this question. This is from the last time we went to the White House, about a year and a half ago.

