Dr. Hannah Zeavin on Auto-Intimacy



» DR. ALEX KETCHUM: Bienvenue, and welcome to the Publishing, Communications, and Technologies Speaker and Workshop Series! I'm Dr. Alex Ketchum, a professor of feminist and social justice studies at McGill and the organizer of the series, which seeks to bring together scholars, creators, and people in industry working at the intersections of digital humanities, computer science, feminist studies, disability studies, communications studies, LGBTQ studies, history, and critical race theory. I'm so excited to welcome you all. This is currently the last scheduled

event for this semester. However, please keep checking back on our website or follow us on Eventbrite, Facebook, or Twitter for more updates. You can find our full schedule as well as video recordings of our past events at disruptingdisruptions.com. So that's the redirect URL,

disruptingdisruptions.com. The other URL is way too long to remember. You can also find our list of sponsors including SSHRC, Milieux, MILA, ReQEF, and more. Tonight we have a Q&A option available. Throughout the

event you may type your questions in the question and answer box, and there will be some time during the second part of the event for Dr. Hannah Zeavin to answer them. We're thankful for the discussion you generate. Thank you to our captioner for today, Sarah. As we welcome you into our homes and offices through Zoom and you welcome us into yours, let us be mindful of space and place.

Past series speakers Suzanne Kite and Jess McLean have pointed to the physical and material impacts of the digital world. While many of our events this semester are virtual or hybrid, everything that we do is tied to the land and the space that we're on. We must always be mindful of the lands that the servers enabling our virtual events are on. Furthermore, as the series seeks to draw attention to power relations that have been invisibilized, it's important to draw attention to and acknowledge Canada's history. McGill is located in Tiohtià:ke/Montréal. Furthermore, the ongoing organizing efforts by Indigenous communities, Water Protectors, and people involved in the Land Back movements make clear the ever-present and ongoing colonial violence in Canada. Interwoven with this history of colonization is one of slavery: it was in part from the money James McGill acquired through these violent acts that McGill University was founded. These histories

are here with us in the space and inform the conversations we have today. I encourage you to learn more about the lands that you're on. Native-land.ca is a fantastic resource for beginning. Now for today's event. Dr. Hannah Zeavin is a scholar, writer, and editor whose work centers on the history of the human sciences, psychoanalysis, psychiatry, and psychology, the history of technology, feminist STS, and media theory. She is an assistant professor at Indiana University in the Luddy School of Informatics. Additionally, she is a visiting fellow at the Columbia University Center for the Study of Social Difference. Her first book is The Distance Cure: A History of Teletherapy. Her

second book, Mother's Little Helpers: Technology in the American Family, is forthcoming from MIT Press. Please welcome Hannah. Thanks so much for coming. I'm so excited for the presentation today. » DR. HANNAH ZEAVIN: Thank you

so much, Alex and Sarah and Sarah and everyone who's here. This series is just incredible. Alex, all of the work that you do is so incredible. But I just really admire the way that you persistently are working to make community across and in our field. So thank you for letting me be part of it. I'm going to give this talk, Auto-Intimacy: Algorithmic Therapies and Care of the Self. It will really draw on the work of my first book, but I'm happy to talk about any of the things Alex mentioned in the Q&A as well.

So I'm going to, as I said, be drawing on a set of cases from this book, The Distance Cure: A History of Teletherapy, which came out in the summer of 2021. The book takes under consideration the relationship between therapists, quite broadly defined, and patients. The Distance Cure makes several interventions by examining the therapist and their patient working at a distance from one another. Globally, it retells the history of clinical psychology via its shadow form, teletherapy. Instead of this being a recent concern, whether in the age of the pandemic or of the smartphone, the book argues that teletherapy, and I'll add telemedicine, have always been about to make their grand debut, for 100 years or more. Teletherapy is, as it turns out, as old as therapy itself, if it doesn't actually predate it. Then the

book takes this extraordinary and vulnerable and intimate relationship between therapist and patient and it recasts that relationship, not as a dyad between two people, as it's traditionally been conceived of, but rather as a triad, always already mediated, and therefore, very excitingly, susceptible to bringing in forms of technology. I argue that ever since Freud stopped laying hands on his patients as part of hypnosis, which was probably a good idea, some intervening distance has always been present between patient and therapist. Then I proceed to look at how patients and therapists have bridged that distance in order for communication to happen. Mediated, networked, and teletherapeutic relationships physically literalize that separation, even as they work to overcome it and diminish it. As I conducted my research

in order to make this critical history of teletherapy across the 20th and into the 21st century, it turned out that teletherapy has almost always attended crisis, whether that's the Blitz in London in World War II, the War for Liberation in Algeria, a suicide epidemic in San Francisco, and of course our current pandemic unfolding right now. And I'm going to return to crisis and mental health crises in this talk, but for now, while each of these cases is obviously quite different from one another, quite different in terms of their sociopolitical and geographical locations and also their temporalities, I unite them by making the claim that distance isn't the opposite of presence. Absence is. So if tele isn't an absence, as we've traditionally misunderstood it, what is it? The book elaborates various forms of what I call distanced intimacy, one of which I'll be speaking about today: auto-intimacy. So I'll be speaking about some of the concerns driving AI therapy, which of course has been recently in the news again, along with any other field touched by ChatGPT, and I want to talk about how we arrived at our present moment, with the increasingly pervasive presence of algorithmic AI and virtual humans in mental healthcare. To do so, I'm going to trace how we got from a tape recorder as therapy to the most sophisticated attempts at automating and scripting therapeutic interaction. I'm going to first provide some background for how I think about algorithmic therapies, and then I'll take us pretty briefly through the history and culture of algorithmic care in the 1960s and '70s, to analyze the models of mind at midcentury that emerged and were immediately very friendly to being turned into computational models, look at the phenomenon of algorithmic care in our present, and conclude with some brief thoughts about how this history can help us at the level of policy decisions, about who should receive this kind of care and when, if at all. So in the most cutting-edge contemporary efforts to

build algorithmic models for mental healthcare, and in the very earliest moments of work related to automated therapies, the dream has remained the same: to build a functional, universalizable program that simulates therapy. Whether these attempts fail or succeed, and I will argue they mostly fail, such a program seemingly, although not actually, codes out the acting human expert. It is the move from a simulation of

an interpersonal relationship in service of psychological growth to an intrapersonal, self-sufficient regimen. In these, the therapist stand-ins return us to that triad from my book of patient, therapist, and media, by combining two of its terms, the technology and the expert, leaving the feeling, the appearance, of a technology alone with the patient. And that contraction to just a device and user, or a script and a patient, would, the argument goes, allow us to crucially batch-process patients and patients-in-waiting, especially when large groups of people need therapeutic treatments suddenly and all at once. As I said in the opening

, teletherapy almost always is innovated in, adopted in, and generally attends crisis and other forms of crisis communication. This is the humanistic, ethical reason to deploy these treatments, and frequently it is the argument made in their favor. Our current moment is no exception. The dream of automated therapy is ostensibly a democratizing one, and we in the United States and Canada, but indeed worldwide, are in need of mental healthcare. The WHO stated in the years before the COVID-19 pandemic that we were already in a pandemic of anxiety and depression. Algorithmic care then might be posited as one way forward, precisely because one doesn't need to create an army of therapists, and automated therapies are relatively cheap and rely on both individuals and their habitual media as a delivery mechanism. The psychiatrist Isaac Marks argues

that, quote, refining care delivery to the point where self-care becomes possible is often the product of the most sophisticated stage of a science, end quote. So here Marks describes the end stage of care delivery as self-care, rather than care itself. And he is not working or thinking within the Black feminist tradition of self-care, but rather arguing that self-care is the ultimate form of on-demand access, meeting the patient not only where and when they are, but as their mediated self. Autonomy is the aim of care here, and of automation. Automation then becomes the dream of autonomy. So in the pandemic, almost every single form of teletherapy is exploding; it's gaining new users exponentially. And if

previously teletherapy was, as I argue in my book, therapy's shadow, now almost all therapy is teletherapy at least some of the time. So in order to reach as many patients as possible, many have turned to mental healthcare apps, whether funded by governments or by Silicon Valley start-ups, to deliver this essential care. But these apps aren't just a neutral good, as they claim. They do not simply make good on or fulfill this democratizing promise. The reality is of course much more complicated. Algorithmic care can quickly become dispatched even when it provides a very narrow form of care, say, only addresses anxiety but not depression; is unusable for a variety of reasons by many, whether because of digital redlining or because of language barriers; or it can become a pernicious backdoor for surveillance and disrupt ethical standards important to the therapeutic relationship; or, as tends to happen, we can see a mix of all of these concerns at once.

And sometimes algorithmic care is derided as or scientifically proven to be lesser, depending on what it's being compared to in the first place, which can then slow or altogether prevent patients from seeking other forms of care that indeed might work better for them. But how does this automated autonomy function? What I'm calling auto-intimacy. And what is it doing to the patient we now call a user? Well, a machine listens and a digital interface provides the therapeutic setting and experience. The only explicit human involved in these 1:1 interactions is the doctorless patient. In algorithmic

therapies, which produce a closed circuit of self, automation takes on the role of the computational other. I call what happens in this circuit auto-intimacy. The auto here is doing double work: the auto of self and the auto of automation. The two combine in this circuit. I take auto-intimacy to be a state in which one addresses oneself through the medium of a non-human, here an automated therapist, both the piece of hardware and the code it's running, with the aim of self-knowing, and even perhaps care of the self, much like one may come to know oneself better by writing in a diary. And unlike human therapists who provide

ostensibly more, we hope, than just an embodied receptacle or place of storage for their patients' speech, automated and/or algorithmic therapies listen or read solely in order to respond. They can't not; they can't help themselves. They listen via many different varieties of mechanisms, including retrieval-based decision trees, automatic scripts, and increasingly paralinguistic vocal monitoring, and if you're interested in that, I would highly suggest looking at Beth Semel's work. In general, these therapies, taken together, are inflexible and they are necessarily rule-bound. Yet algorithmic therapies also rely on the most intimate computing, in which a rich set of relationships is present between the user and the therapeutic apparatus. Or, as I argue, they rely on auto-intimacy.
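A minimal, purely illustrative sketch of such a retrieval-based decision tree, in Python; the nodes, prompts, and branches here are invented for illustration and are not drawn from any actual product:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Node:
    prompt: str
    # Maps a user's menu choice to the next pre-scripted node.
    branches: dict[str, Node] = field(default_factory=dict)

# Leaf scripts: the fixed endpoints of every possible "conversation."
breathing = Node("Let's try a one-minute breathing exercise together.")
journaling = Node("Try writing the worry down and noting one fact against it.")

# The whole tree exists before the user says anything: rule-bound and inflexible.
root = Node(
    "How are you feeling right now? (anxious / low)",
    branches={
        "anxious": Node(
            "Is the worry about something happening today? (yes / no)",
            branches={"yes": breathing, "no": journaling},
        ),
        "low": journaling,
    },
)

def run(node: Node) -> None:
    """Walk the tree until a leaf is reached; unrecognized input repeats the node."""
    while node.branches:
        reply = input(node.prompt + " ").strip().lower()
        node = node.branches.get(reply, node)
    print(node.prompt)

if __name__ == "__main__":
    run(root)
```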

In the case of computer-based therapies, it is a specifically therapeutic relationship to self, and not to another, that is mediated by a program and its processes. Historically, auto-intimacy in the service of therapy has been driven by the desire to automate a treatment and then increase consistent engagement with that treatment via pleasure, enjoyment, or what we would now call gamification. And these two intertwined desires have complicated implications. And therefore we'll see today that automated therapies are going to raise new, particular questions concerning how they mediate mental health, the types of therapeutic models they harness, and the models of relationality they instruct; who is included and excluded from use; and what the ramifications might be of putting the burden of care on the person who needed to receive it in the first place. And what are the foreseeable by-products, if not overt intentions, of these programs as they slip from care to control to capture? But there's a surprisingly long history to these questions, intentions, and a culture of automated therapies born out of the very first attempts to simulate a therapist that continue to inflect our work on this pressing question today. And

it all started with a tape recorder. So during that wild psychedelic era of Timothy Leary and friends in the late 1950s, Dr. Charles Slack, a psychologist working in the Harvard Psychology Department, set about to make one of the earliest experiments with a self-managed, technologized therapy, testing the benefits of soliloquy, or talking aloud to oneself as if there's another present when there isn't. First, Slack fabricated these tape recorders that are on your screen, which produced a series of clicks in response to sound stimulus while keeping track of how many clicks the recorders made in response to those sonic inputs. So Slack gave these to

quote, teenaged gang members from Cambridge, end quote, and paid them to be his experimental subjects. The subjects were to speak into the tape recorder without any human witness. As they spoke, they could see the tally of clicks growing, and when they stopped talking, the tally stopped increasing. The subjects were paid according

to how high their tally went. I would be doing a very good job right now; I'm soliloquizing. The subjects were then sent out into Cambridge with this automated ticker, and the scaled payment plus the ticker were enough stimulus and response to incentivize the subjects to have a conversation with themselves. The outcome was twofold. First, the subjects produced recordings that sounded like one side of an interview. Not that interesting for our purposes. But moreover, and secondly, quote, some of the participants said they felt better for having talked this way, end quote. Dr. Charles Slack had inadvertently built

a speech-based, self-soothing device that made use of human-machine interaction, and of course the influence of cash. So soliloquy before a non-human was not therapy per se; it accessed instead a cathartic function, and it gave onto this palliative care via the gamification of speech. Just some seven years later, across Cambridge at MIT, Joseph Weizenbaum would debut Eliza, one of the most written-about therapist artifacts, even though it, or she, was not designed to perform and arguably did not provide therapy. But the story of AI therapy is incomplete without her. Her. So the Eliza experiment was intended to demonstrate that, quote, communication between man and machine was superficial. To achieve his aim, Weizenbaum programmed Eliza to parody a client-centered therapist doing a preliminary intake with a new client. So a Rogerian

therapist is one who asks questions that are empathetic, with positive regard for the client. One way this is accomplished is by reflecting back what the client has just said. As a side note, I don't have any sisters. Stereotypically, if a patient were to say, I hate my sister, the therapist might respond, you hate your sister? Or, can you elaborate on hating your sister? Tell me more about hating your sister. This was what Eliza was framed to do. The term of art for this is lexical entrainment. It's used everywhere today in

customer service, chat bots, et cetera. It originates with Eliza; it's the first time it appeared. For Weizenbaum, the point of choosing the Rogerian script was that it would allow him to show that all the machine was doing was this kind of mimicry. It was going to solve his problem.
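As an illustration of the kind of keyword reflection and lexical entrainment described here, a toy sketch in Python; this is a reconstruction of the pattern, not Weizenbaum's actual program:

```python
import random
import re

# Pronoun swaps that mirror the user's words back from the therapist's position.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "am": "are", "i'm": "you're", "myself": "yourself",
}

# Rogerian-style templates; {0} is filled with the reflected statement.
TEMPLATES = [
    "You {0}?",
    "Can you elaborate on why you {0}?",
    "Tell me more about how you {0}.",
]

def reflect(statement: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    words = re.findall(r"[\w']+", statement.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    """Mirror a client statement back as a question, Eliza-style."""
    reflected = reflect(statement)
    # Drop a leading "you" (from a reflected "I") so the template reads naturally:
    # "I hate my sister" -> "hate your sister" -> "You hate your sister?"
    if reflected.startswith("you "):
        reflected = reflected[len("you "):]
    return random.choice(TEMPLATES).format(reflected)

if __name__ == "__main__":
    print(respond("I hate my sister"))
```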

It wasn't about making an AI therapist or having anyone manage their feelings or disclose anything personal. That was very much an unintended set of consequences. And that is indeed how people used the script. In demonstrating Eliza and putting various folks into conversation with her at MIT, Weizenbaum was shocked by the responses, which he called misinterpretations, and which inspired him to write Computer Power and Human Reason. He had hoped to demonstrate that the communication was, quote, superficial, end quote; it wasn't so. Users liked Eliza. They enjoyed

speaking with her. They wanted to do it again and again. Instead, Weizenbaum was presented with damning evidence that users turned themselves into patients and used her as a site of self-talk, or what I call auto-intimacy. She hosted a kind of self-therapeutic activity, notably unlike the experience of traditional therapy, if anyone in the audience has been in it. Auto-intimate work is not typically experienced as work. This was crucial to the success of Eliza. It was a

self-therapy the user experienced as pleasurable, endlessly reporting enjoying it, wanting to do it again. And part of that was that this kind of interaction with the computer was truly novel in its moment, but part of it was that it was experienced as a kind of emotional release, again, a catharsis if not an outright therapy. Eliza was a very early chatbot and one of the first programs understood as ready to pass the Turing test. And put another way, as long

as we've had chat bots, we've been trying specifically to automate therapy, and automating therapy has become a litmus test for various kinds of human-computer interaction success, even if its very first attempt was accidental. So if Weizenbaum was shocked that anyone wanted to talk to Eliza, he had a second surprise in store, which was that out there in the world, clinicians thought that therapy bots were a good idea. This made Weizenbaum mad, absolutely furious, and he was incredulous. When he began to write about it in Computer Power and Human Reason, he addressed his critiques openly to the field, but in reality he was, as the kids would say now, subtweeting someone in particular, which was Kenneth Colby, a psychiatrist and former psychoanalyst working in Stanford's Department of Computer Science in the 1960s.

We're going to meet a lot of former psychoanalysts in the 1960s, and this very much is a moment where former analysts are rejecting the Freudian model in favor of all kinds of new things. One was Kenneth Colby. At Stanford, Colby pioneered his own chatbot, Shrink, which he characterized as, quote, a computer program that can conduct therapy as a dialogue, to respond as he does by questioning, clarifying, focusing, rephrasing, and occasionally interpreting, end quote. So Colby sought to completely reorganize mental healthcare in the United States via a tool that would be made, quote, widely available to mental hospitals and psychiatric centers suffering a shortage of therapists. Several hundred patients an hour could be helped by a system designed for this purpose, end quote. This is the era when the shift from

the asylum model of mental healthcare to the community model of mental healthcare has taken place, but without adequate funding or structural support, to say the least. So it was impossible to provide care in this new paradigm, but the old paradigm was also gone. So Colby sought to attack the problem of limited experts and of growing mental healthcare demand much the same way that earlier teletherapies I write about in my book sought to batch-process patients, but instead of turning to the volunteer, in the case of the suicide hotline, or turning to the radio and disseminative media, he sought to fix the problem with a fungible set of automata. Auto-intimate pleasure during these kinds of computer interactions was so important to their success, and because people didn't enjoy Shrink, despite its novelty in the same moment, it failed to produce the explicitly psychotherapeutic reactions Colby hoped to foster. But even as Eliza and

Shrink are nearly identical under the hood, so to speak, the environments in which each program was tested differed really greatly. Eliza was tested on MIT's famous networked time-sharing computing system, which is understood to have fostered great connectivity and collaboration; as Elizabeth Wilson has shown, Shrink, conversely, was available in a single laboratory, remediating the isolated medical office. Colby had also perhaps made an additionally grievous, if superficial, mistake when he named it after a job function, and in fact a derogatory, casualizing name, one that undermines even that expert authority: Shrink. He failed to lend it a proper name that would additionally gender the script, like Eliza, and be conducive to therapeutic usage. In the

1960s, this was already the moment where 50% of practicing psychotherapists were women, and the number has only increased to our present, where it's over 90% irrespective of degree. So the feminization of psychotherapy was indeed well underway and crucial, I think, to how Eliza was perceived. And of course we live in the aftermath of that story and the feminization of what are called surrogate humans, and we can talk more about that in the Q&A. So Eliza is a therapist, going by her first name. Shrink purported to be an MD without one. The intent of the

programs, their environments, as well as their superficial presentation shifted what kind of relationship it was possible to have with them, and therefore with the self through auto-intimacy, and it resulted in widely different kinds of emotional responses to each program. Now, perhaps because Colby failed to produce a usable automated psychiatrist that could actually treat patients, he flipped his script and began to work on automating computer-program patients to be put into conversation with human psychiatrists. So Colby's goal now was to build an interactive simulation in order to train psychiatrists to meet the very demands he had sought to fill with automated shrinks. The result of this was Parry in 1971, ostensibly a chatbot with paranoid schizophrenia, who was given a real name, Colby didn't make that mistake again, Frank Smith, and a personal history. Parry passed

the Turing test. Psychiatrists could not tell the difference between Parry and their real human patients diagnosed with paranoid schizophrenia who were using teletype at the ward. So eventually Parry, an interactive version of which was already hosted on ARPANET, would be put into conversation with Eliza in September 1972 by Vint Cerf, one of the fathers of the internet, making a computer-to-computer therapy session, one of the first discussions held over TCP/IP. At the end of it, Eliza charged Parry $399.20 USD for her services, which is extremely expensive. By 2023

standards, with inflation, in US dollars, that would be about $3,000 today. So returning to Weizenbaum's question, about what kind of therapy and therapist could believe that the simplest mechanical parody of an interview technique had captured anything of the human encounter, requires understanding Weizenbaum to be passé, even. The kinds of therapies practiced in the 1960s, all these attempts to automate one side or the other, had already indeed come to the conclusion that the mind was best thought of as an information processor following rules. In the postwar era and through the 1960s and into our present, several things would dramatically change the American mental healthcare landscape, namely the fall from prominence of Freudian and psychoanalytic therapy, the rise of insurance and the increasing dependence on diagnostic codes, and two modes of treatment that supplanted Freud's model of the mind: on the one hand, psychopharmacological drugs, and secondly, and relevant to our question of auto-intimacy, the development of Albert Ellis's rational emotive behavior therapy in 1959 and then Aaron T. Beck's cognitive behavioral therapy in the 1960s. So in brief, Ellis and Beck, here on your screen, were also trained as psychoanalysts who turned their backs on the practice.

Their theories focused on conscious thoughts that were unwelcome, literally reprogramming the patient. Neither Ellis nor Beck was interested in understanding the origin of thoughts; they simply wanted the thoughts to cease and be replaced by happier or more proactive thoughts, something that sounds familiar in our present. Beck called these automatic thoughts; even the suffering human was understood to have a kind of automatic function, even when gone awry. So if one represented human mental suffering as automatic and automated, then, of course, one is able to justify and legitimate addressing that suffering with an automated counter-script. In keeping with the traditions of self-help and positive thinking that had been popular in the US at midcentury, since the Great Depression, Ellis did not want to create patients who stayed in treatment and worked with a therapist in a morbid long-term relationship. He wanted patients to help themselves. Ellis was more concerned with

creating autonomy than relationality, and even as he developed a new kind of therapy, Ellis was encouraging auto-intimacy. Ellis was from the very outset substantiating a therapeutic technique that would not require a therapist, and he was doing so consciously. He was instead creating a media empire. As Burkeman notes, Ellis's REBT was made for self-help media, including self-help books but also REBT worksheets and workbooks, and this fungibility of media for CBT, where the medium is decidedly not the message, also included the human therapist. Following in Ellis's footsteps, Beck wrote 15 books and developed several scales and inventories, we'll see them on the next slide, which are used everywhere in the United States and beyond, in the carceral system, at the doctor's office for evaluating postpartum depression, all these new forms of personality and symptom assessment, quantifying the self rather than qualifying it. The mental

health expert gave way to the delivery of all of these new and, yes, auto-intimate ways of diagnosing and knowing the self. The faster improvement could be measured, the better. Here are some now: the feeling wheel, Beck's Depression Inventory, which may be familiar to you. So this is the moment when other mental health efforts are devaluing expertise by distributing the role of expert listener across lay volunteers, which again, I write about in my book and I'm happy to talk about in the Q&A, but you can think of not just the queer suicide hotline but consciousness-raising groups, this kind of radical care moment of the '60s. Instead, Ellis and Beck removed the

absolute demand for a human other in possession of expertise through mediation and automation, and here not automation only via mechanical reproduction by computers in the widest sense, whether at first in print or over tape recordings, or later when these techniques were programmed to be delivered by a computer, but in that sort of deeper sense of the automated self and subject, not needing another, only working from within. So unsurprisingly, psychodynamically oriented practitioners and patients were deeply skeptical of the earliest iterations of the computer as therapist, whereas those who worked with the methods that fall under this cognitive behavioral umbrella tended to embrace digital and automated therapies. And this divide, again, it's not exact but it's close, more or less continues to this present day. Translating CBT to the computer form is eminently feasible because of what CBT describes the mind as being: if you think X and X is disruptive, rewire and recode by thinking Y. The self listens to its own script of negative thoughts, and we can automate a new response, thinking that new thought as one's own.
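A toy sketch of why that structure automates so readily: the whole "treatment" can be written as pattern-to-counter-script rules. The markers and prompts below are invented for illustration, not taken from Ellis, Beck, or any shipped program:

```python
# Each rule pairs a marker of an "automatic thought" with a canned counter-script.
RULES = [
    ("always", "Is it really true that this happens every single time? Name one exception."),
    ("never", "'Never' is an all-or-nothing word. What evidence cuts against it?"),
    ("should", "Who says you should? Try restating this as a preference rather than a rule."),
    ("failure", "You're treating one event as a verdict on yourself. What would you tell a friend who said this?"),
]

DEFAULT = "What thought would you rather have in its place?"

def reframe(automatic_thought: str) -> str:
    """Return the first canned counter-script whose marker appears in the thought."""
    lowered = automatic_thought.lower()
    for marker, counter in RULES:
        if marker in lowered:
            return counter
    return DEFAULT

if __name__ == "__main__":
    print(reframe("I always mess things up"))
```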

This was truly automated with computer programs when they were brought in to treat depression, among other disorders, but depression tends to be the first, in the late '80s and early 1990s. Kenneth Colby never wanted to give up; he even made fully functional CBT doctor software in the 1990s. He didn't fully learn his lesson: he still gave it the medical function and a non-proper name, and he sold this as a popular program, Overcoming Depression. I would argue that via the

popularity of CBT, therapy was being conceived of as less human altogether, following Sherry Turkle, and less obviously dependent on an interaction with another outside the self; that we could self-therapize. And the computer is understood to be on or at the boundary between the user's interior and physical and social environment, or as Turkle writes, quote, both animate and inanimate, end quote. In this, the computer is not alone. The diary functions this way too, as did Dr. Slack's soliloquy experiment with the tape recorder. The personal computer, and perhaps even more so the smartphone, are perfect instruments of auto-intimacy because they demand it. So

nearly 60 years after these experiments, and they were continuous, I'm just jumping ahead to the present, but Colby continued his experiments, Charles Slack's brother Warner Slack made huge strides in the area of computer-patient interaction in the '70s and '80s, and so on. In 2023 we're still without a fully functioning simulation of a therapist that can treat humans, if the aim is psychodynamic care and not the auto-intimacy of CBT. There's still no bot that currently rates well in terms of measures like the therapeutic alliance, or how well a therapist relates. Nonetheless, research has not slowed on

elaborating ways to care for the many in dire need of mental health support. Some of the experimentation follows from the unintended promise of Eliza and in Colby's earliest footsteps. The goal, again, is to make a fully automated therapy that simulates a human, gathers valuable data toward diagnosis, and crucially is enjoyable to use. Ellie is one

such example of a contemporary project working to perfect the elusive automated therapist. Ellie is probably a nod to her grandmother, Eliza. She was created at USC with funding from the US military through DARPA. She's currently

exclusively used with American veterans who returned from Afghanistan and Iraq. Ellie is a diagnostic system contained within the avatar of an ambiguously raced figure sitting in a chair, almost always gendered feminine. She seems confident but approachable. When one user, the one on the screen, says he's from Los Angeles, Ellie will respond sweetly, oh, I'm from LA myself, and that's what her voice sounds like. Behind this early small talk, Ellie is already performing a deep analysis of her user. She's equipped with sensors and a webcam that supposedly detect the affect in speech and a user's posture and gestures; she can perform sentiment analysis on the content of the user's words, as well as completing paralinguistic vocal monitoring, which she then compares to a control civilian and military database. It provides Ellie with the

feedback that allows her to estimate the presence of, in the case of this user, depression. For instance, those with depression, Ellie has been trained to think, don't pronounce their vowels in the same way that nondepressed people ostensibly do, because they move their facial muscles less. So Ellie counts every instance of these kinds of markers she's been programmed to count, and if they match up as indicators for depression, then she gives the user a score. So Ellie's creators think she's attractive because she provides this complex customization without judgment. And that phrase, without judgment, comes up a lot as a way of escorting these innovations along, with that kind of democratizing impulse where a kind of therapy for all also really means exploiting all, using therapy as a ruse.
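The scoring step described here can be pictured roughly as follows. This is a purely hypothetical sketch: the real system's features, baselines, and weights are not public, so every number and name below is invented.

```python
# Hypothetical per-session marker counts, baseline rates, and weights.
SESSION_MARKERS = {"flattened_vowels": 14, "downward_gaze": 9, "long_pauses": 6}
BASELINE_RATES = {"flattened_vowels": 5.0, "downward_gaze": 4.0, "long_pauses": 2.5}
WEIGHTS = {"flattened_vowels": 0.5, "downward_gaze": 0.3, "long_pauses": 0.2}

def screening_score(markers, baselines, weights):
    """Sum the weighted excess of each observed marker over its baseline rate."""
    score = 0.0
    for name, observed in markers.items():
        excess = max(0.0, observed - baselines[name])
        score += weights[name] * excess
    return score

if __name__ == "__main__":
    print(f"screening score: {screening_score(SESSION_MARKERS, BASELINE_RATES, WEIGHTS):.1f}")
```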

So here I'll add that Ellie, of course, is judgmental. It's what she's supposed to do. She's performing a four-point qualified analysis as a diagnostic tool for judgment. A different version of a platform making use of an avatar might be Tess, where, again, the feminization is crucial for user engagement and also reproduces itself, or, in the case of Google's Wysa, which boasts about 2 million users and has conducted over 100 million conversations, the service bills itself, interestingly, as a companion or mental health friend. We're seeing that more and more in this quote/unquote space too, because it skirts the two levels of oversight in the United States, the FCC and FDA. And it gets around both of

them. And so it takes as its avatar this adorable penguin. Other companies combine the services of online talk therapy with self-guided, computer- and mobile-delivered CBT. Joyable is a concierge digital mental health service that starts with a 60-second quiz, which, it claims, will generate a complete emotional profile of its user, in just 60 seconds. That's all there is to emotionality. And within 5 minutes it will give everyday

cognitive activities. So Joyable, which is very representative of this kind of major AI program, like Tess, is sold not to individuals who seek mental healthcare, but mostly to companies who pay for it and include it in their employees' benefits packages rather than paying for human-to-human treatment. The advertising is therefore targeted at companies, not individuals, and has the tagline: happier employees, better outcomes. So whereas DARPA's funding of Ellie, or the development of many of these kinds of interventions by governments in the wake of natural disasters, are moments where teletherapy addresses overt crisis, the returning veteran, the wake of flooding, Joyable addresses itself to the emotional crisis that labor is. We're

addressing it just enough, ostensibly, so that optimal labor can continue. So again, part of the choice to buy into a digital program has to do with Colby's early notion of making therapy nearly costless by automating the same therapy all at once for everyone with a single program. This is the universalizable auto-intimacy. Okay. So to conclude. Care is a means, a practice, a result. It is always relational, occurring between and within these coordinates.

Care is always labor. Care is not just an interpersonal phenomenon; it's contained by histories of systemic violence and shaped in their current topographies and enactments. Care can be found where other forms of care have been rescinded, where it is absent. In the English word care, we can still find many echoes of its earlier, shed meanings. Care can be produced from a cry, a lament, a warning, take care, or be the state or an action of paying close attention, a relation of being responsible for someone or something or oneself. It can describe itself or its absence. You can

forget your cares or you can be the person care forgot. Care is this overly capacious term to describe a set of relations that redress something that ails or needs attention, whether human or environmental. Its definitional capacity points the way to understanding care not as just a kind of moral good, but as an intervention that can license, carry, instruct, or harm in excess of its remit.

Care can be another name for carelessness or harm itself. So contemporary modes of digital self-help and self-improvement continue to show that enjoyability is intrinsic to intimacy with a non-human other, because it's a way of being intimate with the self, even or especially if that self-intimacy is mediated by a cute digital penguin; that repeating interactions are necessary for therapy and therapeutic growth, even if that appointment is only with oneself. Without care of another, caring for oneself

overtly has to be phrased through this kind of pleasure, turning the work of therapy into game and game into therapeutic work, while putting the burden of care on the subject of care themselves. AI for care is as old as AI itself. As Ruha Benjamin and others have argued, digital health interventions often promise transcendence for the individual, for society, for the group: we can get beyond our own human history and structural violence. The promise of technocare is that it may allow those excluded from traditional care to be refolded into systems and experiences of same. The opposite holds. AI and machine learning-based care interventions recodify both race and gender, whether in the systemized feminization of bots, or in the deployment of algorithms that continue to foster the conditions of medical redlining and the further flourishing of white supremacy in medicine. So

where this refolding of the excluded does take place, it is usually in the service of extraction, datafication, capture, and control. So our crisis of care is not that the algorithms are coming or have already come, but how algorithms for care are deployed in the service of that predictive control. The more one uses a platform for care in order to have it work as a therapy, the more minable data there is at the mercy of its container, when we know that deidentification and anonymity are not static states and consent is far from always informed. Automated care is held up as accessible. It works as a dragnet. It

allows help not only to reach more patients, but also those that are traditionally marginalized by the care disciplines for all kinds of reasons. In turn, it's these same users who are already the most vulnerable systemically to counting, data collection, prediction, and intervention, by state social services sometimes, and absolutely every year with lethal consequence. So now, in a moment where distance is no longer a public health mandate, but has escorted the expansion and scaling of these interventions, and many therapists remain out of office, teletherapy is no longer a shadow form. It is the dominant, if not only, mode of psychological care on offer for many. We must re-evaluate once again the usefulness and pitfalls of these kinds of interventions, both conducted by humans and ostensibly by machines, and what we're willing to authorize as a crisis measure, knowing that what we authorize in crisis often becomes the norm on the other side. And we're witnessing that now. We cannot simply look at access as a kind of moral good, or at the successes auto-intimacy brings, whether it's compliance or providing a moment of joy, but at the total outcome of care, what happens next. We cannot hope to infold those whom care

forgot, while repressing the central fact that care is a tool, but it is also, too often, a weapon. Thank you so much for listening, and I look forward to talking with you all in the Q&A. » DR. ALEX KETCHUM: Thank you so much for such a wonderful talk. That was so exciting, and again, I highly encourage folks to buy the book. It's so great and digs into these issues more. I want to remind folks we have a Q&A box below so you can type in questions that you have and we'll read them aloud. I have a question

to start us off. First of all, it was so cool to see the presentation with all the images. And I know there's some in the book. It was great to see Ellie because that wasn

't in the book so that was exciting. I guess because you kind of prompted this question, can you speak a bit more about the feminization of the surrogate humans that you were touching on throughout the presentation? » DR. HANNAH ZEAVIN: Sure. It would be my pleasure. So I wrote a longer essay that's not in the book, but that I kind of felt was incomplete from the book. Let me pull it up. It was for Dissent, which is a socialist magazine, and it's called Therapy with a Human Face. I was trying to trace this kind of connection between both the feminization of bots, which has been remarked upon by -- I don't know how to share it. How is

it in 2023 after like living on Zoom, there are still things I don't know how to do like share with an audience. » DR. ALEX KETCHUM: We don't have the chat open so that could be why. » DR. HANNAH ZEAVIN: That's why I can't figure it out because it's impossible. It's in Dissent, it's not paywalled

. You can also email me if you want it. The work of that essay was to try to trace out something I was seeing, which is that there was a new video game called Eliza, which was very exciting to me of course because I'm a nerd, and I was like, how are we reimagining Eliza, a few years ago now, in 2020, pre-pandemic. And what was really shocking was the kind of pervasiveness of thinking about both the therapeutic worker and the bot as being feminized. And I wanted to just start to trace out basically the history of the statistics. When did therapy become feminized? Because if I close my eyes and imagine a therapist, I am going to imagine a cisgender woman. Why? So that was the work of that piece, to hook in and see how the kinds of responses, particularly within the care fields, the helping fields, the helping professions, respond to their own feminization, also in their digital representations. But

they're also highly racialized, so Thao Phan writes beautifully about this. I mentioned Neda Atanasoski and Kalindi Vora's book called Surrogate Humanity. And this is trying to speak very specifically about the therapeutic piece, which is strange because therapy became feminized not once, not twice, but three times over the course of the 20th century, and it is this really interesting field for tracing out what it means to have labor become devalued as it becomes feminized, and when it becomes devalued, it can also become fungible with bots and other kinds of low-level interventions and technologies. And I think that's very much what we've seen, aided by a few other things in the mental healthcare space. Thank you, Alex. » DR. ALEX KETCHUM: Awesome. I'm still encouraging people to write questions in the Q&A box. Please don't be shy. I mean, I have lots of questions so I can keep going. But I really encourage you. Oh, it looks

like we have one. A question is from Alex, who writes: Hi, Dr. Zeavin. Your mention of Parry was interesting and I looked up the transcript from the therapy session. It's funny to see how each program seemingly becomes frustrated with its limitations. I'm wondering if there's any practical application to putting AI bots in conversation? » DR. HANNAH ZEAVIN: Thank you so much, Alex. I know, I also find this actually really moving, that both bots get frustrated with the other one, because of course that is part of therapeutics.

And it's a very particular choice that Colby made, to try and make a digital representation of a schizophrenic patient who may not be being helped by his therapist, who doesn't have that training, in the 1970s. Very intense; we could talk about it that way. In terms of the contemporary, I'm sure you could make an argument for why there's an application, which would be that you wouldn't have to train a bot on a human. So most recently, ChatGPT has been in the news for every field. Anyone who works on anything is being asked questions about how ChatGPT supposedly disrupts education, literacy, mis- and disinformation. Okay. So me too. But about therapy.

It's part of what happens. So the reason I've been being asked is because there was in fact a really bad experiment done on humans with ChatGPT. So Koko, which is a very interesting company, grew out of the MIT Media Lab. The person was advised by Rosalind Picard, who has done a lot of intense work on and with autistic children and disability. So I highly recommend Jeff Nagy's work on historicizing and critiquing that work of Picard's in the Media Lab. So it's her students and they're like, let's move fast and break things and do mental health on it.

And the first thing they do is they try to make a platform that will allow peers to coach each other, which has a very long history. Okay. But then, without telling anyone, they swapped out the peer with ChatGPT. And so they ran this experiment on humans who were suffering and in fact in crisis and often suicidal, and all the responses, because I looked at this, were like, don't worry, bro. Things will be fine. Which is just not what's to be done. So now you can imagine, you can make a bot, that was the crisis bot, the suicidal bot, and you could train the AI on that bot, instead of roping in humans without their consent. The problem is I just don't think

we should be using bots for this kind of work. I think we should be paying therapists much better and giving them labor protections. But that would be where my mind goes, like what would be an interesting and ethical application for the Eliza-Parry of 2023. Thanks so much, Alex. We have two Alexes, two Sarahs. Do we have another Hannah? » DR. ALEX KETCHUM: Awesome.

Thank you so much. Speaking of people being experimented on without knowing about it, there's kind of this through line in your work of the role of universities and experimenting on students. And I'm thinking, in this space, I think it was Cornell University that also had the one bot. And I'm thinking too, when you're talking about the

Joyable and stuff, how many universities now, instead of having as robust student therapy centers, basically have these same bots. So I'm wondering if you can kind of speak more about the role of how students are used within these experiments and kind of the way that maybe universities are acting irresponsibly, or how they're kind of saying, oh, it's not our responsibility, we now have this opt-in program. » DR. HANNAH ZEAVIN: Thanks so much, Alex. I don't have an analytic answer why, and that's not really what you're asking. So the history of child, and yes, adolescent

experiments really runs through all three of my books: my published book, my book that's being wrapped up now, and what I think my third book will be. And I just did a little syllabus for the Syllabus Project on the history of child experiments. It's like this thing I can't seem to get away from without ever writing about it directly. So the student health thing is fascinating because for a long time, and I could tell the story of what was happening at Cornell in the '80s, but for a long time, the stuff just wasn't legislated. It took till the mid to late '90s to have like a telemedicine act, and it was in the state of California, and it wasn't nationalized, and in fact there are still differences between states in the US. And of course I know the US case best; I've worked as a transnational historian of the United States. I know the legal stuff best here.

Yeah, so it hasn't been codified. What you do see is any place that has large numbers of suffering people under its quote/unquote care has to offer something. But of course what's offered has really been devalued therapeutically. AI is one example, so my own university, I haven't looked into it yet, but it just announced that we have a new care platform and it's cutting edge or something, which just means fewer humans. That tends to be what that means. Or in the pandemic, most universities would write us. There was like a running joke amongst my friends from grad school where, at universities flung across the world, universities were telling us to get Calm, TM, and use the Calm app to meditate for 5 minutes. That has to do with this scaling of

mental healthcare and cutting costs, and is very much what's also at work in the more classical, probably what people think of when they think of teletherapy, like Talkspace, BetterHelp. Universities know that students have long been in pain and in trouble and need care, and this is the latest way of trying to figure out how to provide it. Yeah, it's literally a political economic answer here, which is cut costs. And escorted by the pandemic, right. Now you don't have to come into the office, right. And on and on. And the last thing I'll say is I've heard anecdotally from a few people at those centers that there is something to that. Not the cutting of costs, which I

write about elsewhere in my book, but that there is a value to distanced intimacy in the college setting. That it's hard as a grad student to go into the clinic when you're going to see your undergrads. So grad students don't go to those clinics because they don't want to be seen. And they don't want to be seen by their faculty walking in or out. I

think it's not, again, that I'm anti-teletherapy, nor am I an evangelist. But this kind of shift in intervention is not about any of that, right. We saw some of that in the pandemic as an accidental benefit. This is something quite else, more insidious. Thanks, Alex. » DR. ALEX KETCHUM: Thank you. Also, again, I'm encouraging

folks, maybe people are feeling shy or quiet today. Also, Sarah, if you have any questions too. » SPEAKER: Yeah. So just thank you so much for an amazing talk. So much of what you said resonated so much with me. I'm somebody who tried to use some of these at-distance counseling services to manage some of the stress of being a PhD student, the pandemic, and just felt really failed by them time and again. And it was always so interesting to me just being given like a phone number to call that really ended up being this kind of maze of push this button, wait this time, someone will call you back. Oh

, you missed the call, now you have to wait another week. Oh, you're canceled now in the system. So it's kind of interesting the ways in which, when we're suffering, with so many different things we go through with our mental health, the onus is so much on all of our shoulders to still navigate that. And it's interesting how a lot of these digital systems can almost exacerbate things and make it worse. So that's more of a comment. But the question in there is, we seem to be really facing this crisis in terms of access, which you touched on.

You're obviously much more aware of it than many people, I'm sure. And so what really is the solution in some ways? What is kind of like an ideal scenario, if we're just thinking creatively and imaginatively for a moment, to help people get the care they need and get the access they need and not be just stuck inside this neoliberal system where you're kind of patched up by a bot to be able to work more hours or what have you. Yeah. » DR. HANNAH ZEAVIN: Yeah. So what would be better than getting Calm TM? You know? So I think you're right. I think

we overuse this word but it's perfect, this truly is neoliberal mental healthcare. It really is the sort of far end stage, the concluding point, of the therapeutic society, which of course could sound like a right-wing critique, but it's not. It's really an accurate description of what has happened in some ways. So you're basically asking, we'd need to live otherwise in many, many different ways.

At least this is how I think about the world. One thing that I felt, this book was hard to write. I don't mean just sitting down, though that too. We can talk about

that also; we can talk about anything you want to talk about. One of the hard things about writing the book was that every day I had to ask myself, do I really think no help is better than any help at all? The answer was yes. That was really hard to sit with. Part of that was because of all the stories of people using, whether it's BetterHelp, or just your story just now, Sarah, or undergrads coming to talk with me once they figured out what I worked on, or the dozens and dozens of interviews I did with providers. There's an air horn. But on the other hand there was a great deal of hope. And hope is different from optimism, right? It doesn't mean I'm optimistic. I actually don't think I am. But

there was hope, and there was hope in the kind of radical ways of not thinking, okay, you just add technology and everything is going to be fine and sunshine, but instead asking, what infrastructures do we have? How could we seize that infrastructure? What are, not to sound really old school, Canadian style, but what are the affordances of the telephone? And what can we do with it? And those kinds of projects, they're often much more small-scaled projects, were incredibly enlivening. It was the students who met six feet apart in a sharing circle every Tuesday on campus, rather than that really frustrating experience, and I'm sorry, of like, oh, my god, this call, and what am I going to do. I was at UC Berkeley in that part of the pandemic, that phase of the pandemic, and the students who organized mental healthcare referrals, which I was part of running, connecting people to providers in the community who could see them at a low fee. That kind of ad hoc organizing is I think what we have. So it

's that mandate, whether it comes from the Young Lords, who had a hotline, or the feminist expression of the Black Panthers, the same moment where, instead, in California you can make a chatbot. It's more this side of the story of teletherapy that, I think, without completely living otherwise, because all those groups I just mentioned are revolutionary, so as they're working toward otherwise, that gives me hope about what we can do, and it really requires human creativity. And really thinking with media, which all of those groups were really fascinated with. What does it mean to move around and bring care on a van is a very similar question. Thinking about the Young Lords

, thinking about the Black Panthers, it's a similar question of what we can do with the telephone. So there's all of that kind of investigation. So just a plug for the fact that there are hopeful stories. I just didn't tell you one. And I'm sorry for that, Sarah. » SPEAKER: That's so great. Thank you so much. Like I said, it was a hard question because it's solutions oriented; it's asking a lot.

Thank you. That was a great answer. » DR. HANNAH ZEAVIN: My pleasure. » SPEAKER: Should I go ahead and read the two questions? » DR. ALEX KETCHUM: Want to read Isabel's? » SPEAKER: Isabel asked, have there been any formal statements by psychotherapists or healthcare professionals on these AI forms of therapy? Do you think they would ever try to regulate them or adopt them in some way? » DR. HANNAH ZEAVIN: Sure. Yes. So the Psychotherapy Action Network has done a lot of work in the teletherapy space. Basically, again, as I said, people who are interested in psychodynamic care are pretty anti all of the digital therapy. In fact, they were

really anti the Zoom form of therapy in ways that also upset me, like much to my chagrin at the start of the pandemic. It was kind of anti-tech completely. Not only for bad reasons, often for really good reasons. So there

have been many formal statements. And it hasn't yet brought AI therapies back under FCC or FDA oversight. But there is more of that move. So the FCC has started to really wake up. While I don't know anyone there, I'm very

appreciative. Mostly about data, and when data ethics is not enough in terms of mental health ethics. So Crisis Text Line last year got into a boatload of trouble for selling the suicide data of its users, more than 50% of whom are under 18, to Loris AI, which makes customer service chatbots for Uber and Lyft. And the FCC made them stop it the minute the story broke. I mean, they were really on it. So there are

these places where there is some oversight, where there is regulation. And then there are people who are really interested in adopting it. That's the other thing too. There are people who are like, what would it mean to triage X with this tool? And thinking about it that way. Also thinking about training, like in the same sort of Parry model. But yeah, Isabel, that's definitely something that's

been fully in effect. And I know the Canadian government has been really interested in regulating AI for mental healthcare for five years, four years. I was up in Toronto, involved in those early efforts, spearheaded by Microsoft Research and a few other government partners. So I think in Canada there is a more kind of overt interest in getting the regulation five years ago rather than now, where, like in the US, people are like, wait, what? So yeah. Do you want to read, sorry. I won't read the other one.

» DR. ALEX KETCHUM: Yeah. I can read it aloud. As someone who's only found an analyst outside my own country, therefore restricted to online sessions, where do you see things moving regarding the limits placed on in-person sessions? Can the clinic respond to the limits of onl


