Data Protection and (In)accuracy – Emotional AI as a Case Study

I think we'll slowly get started now. Most people have had a chance to join the Zoom webinar, and people will filter in over the next couple of minutes, but in the interests of remaining on schedule I want to get us started so that we don't run over and eat into people's lunches unduly. First off, I'll introduce myself: my name is Dr Ryan Whalen. I'm really happy to be here today at the invitation of the HKU Law and Technology Centre to chair and moderate this talk, and I'm really excited to hear what our speaker has to share with us. The structure of the talk will be as follows: initially we'll give Dr Clifford a period of time to present his work, which I suspect will be 30 to 40 minutes, although you're free to err on either side of that window as you prefer, Dr Clifford. As we go along, if the audience has questions, please feel free to use the Q&A box, and in the period after the talk I'll pose those questions to Dr Clifford and we can have a bit of an exchange of ideas.

So first off, I'm going to introduce today's speaker. Dr Damian Clifford is a Senior Lecturer at the Australian National University College of Law. He is a chief investigator at the ANU Humanising Machine Intelligence Grand Challenge project and the Socially Responsible Insurance in the Age of Artificial Intelligence ARC Linkage project. He is also an affiliate of the ARC Centre of Excellence for Automated Decision-Making and Society and of the Institute of Advanced Legal Studies at the University of London. So Dr Clifford, thank you so much for taking time out of your busy day to talk to us here at HKU and in Hong Kong. I'm going to give you the floor now to present your work to us.

Thanks very much, Ryan. I'll just share my slides and get started. You should be able to see those. Thanks very much for the invitation and the opportunity to speak. Today I'm going to speak about data protection and the accuracy principle in data protection, and I'm going to do it through the lens of emotional AI.

To start off, I thought I would give a brief explanation of what this accuracy principle is that appears in a lot of data protection legislation. Essentially, it requires that personal data are kept up to date and corrected or deleted where relevant. My starting premise for the work I'm presenting is that this is an underexplored dimension, or principle, within data protection legislation.

You might be wondering why I'm actually interested in it. My interest in accuracy started largely because I started to look at emotional AI, or affective computing, which is the context through which I'm going to be exploring the accuracy principle in this talk. There are a range of examples; I'm just going to give a couple from Facebook, because when it comes to behavior that has led to debate, they provide a few very good examples. The first is the emotional contagion experiment, in which researchers, both at different academic institutions and at Facebook, found that if they manipulated users' news feeds they could change how those users
interacted with the platform, and whether they did so positively or negatively. There have also been leaks around Facebook telling advertisers that it could allow them to target teens who are feeling insecure or vulnerable, there have been a range of patents dealing with emotional AI, and of course there is the Cambridge Analytica scandal, which had a clear emotional dimension around the capacity to persuade.

Now, these technologies have been controversial for a variety of reasons, but some have been controversial from an accuracy perspective in particular. There have been wide-scale critiques of what's known as facial action coding, the detection of emotions through facial expressions, and a lot of discussion more recently about the accuracy of such technologies. The reason I wanted to look at this is the familiar 'garbage in, garbage out' problem: could data protection, and potentially this accuracy principle, play some sort of role in correcting or requiring the deletion of information that isn't accurate, and thereby play a role in the deployment of these technologies?

Aside from that, I'm also interested in it because data protection law, and in particular the GDPR, the General Data Protection Regulation in the EU, is a particularly strong example of regulation with extensive accountability mechanisms or tools within it. You have things like data protection impact assessments, or privacy impact assessments as they're known in other jurisdictions; you have requirements for data protection or privacy by design and by default; and you have protections around automated individual decision-making or profiling. So there are robust regulatory tools that could play a role in this space, and through which we could look at accuracy as a principle and ask whether it could actually play a role in mitigating some of the challenges.

Before I get into the specifics, I wanted to highlight two preliminary points. The first is that when we're looking at accuracy, in this context or any context, what we're usually talking about is accurate inputs: the focus is very much on ensuring that the data processed are not incorrect. You could say that reflects the historical roots of these types of frameworks, but it also reflects their dual goals. What I mean is that they're designed to protect privacy and data protection, but the other goal is to provide a level of uniformity and certainty to allow for information flows and to provide that certainty for businesses. Within those competing values, I think this is manifested in the fact that accuracy has a bit of an unusual role, in that even inaccurate data are still considered personal data or personal information. It is a principle, rather than a strict rule, that information has to be accurate, and there are good practical reasons for that. Take a very simple example: if you provided your address, telephone number and various other personal information on sign-up, the fact that you later change your address or telephone number doesn't mean that those data no longer relate to you.
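A minimal sketch of that input-side idea (not from the talk itself; the record fields and helper below are invented for illustration): an out-of-date address still relates to the individual, so it remains personal data, and the accuracy principle points towards correcting it rather than treating it as out of scope.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SignupRecord:
    # Hypothetical sign-up details supplied by the individual.
    name: str
    address: str
    phone: str

def rectify_address(record: SignupRecord, new_address: str) -> SignupRecord:
    """Correct an out-of-date address.

    The stale record still relates to the same person (it remains personal
    data); the accuracy principle asks for it to be corrected or deleted
    where relevant, rather than dropping it from the framework's scope.
    """
    return replace(record, address=new_address)

if __name__ == "__main__":
    signed_up = SignupRecord("Alex Example", "1 Old Street", "+61 400 000 000")
    moved = rectify_address(signed_up, "2 New Street")
    print(moved)
```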
This feature of data protection, that even inaccurate information is still personal information, is manifested clearly in different frameworks, be that the General Data Protection Regulation in the EU or the Australian Privacy Act and the Australian Privacy Principles, as I've noted on the slide.

But let's look at this notion of accurate inputs and where it could play some sort of role, to provide some illustration. To do this I'm going to use a figure that I used in an article I wrote with some colleagues. What I want to highlight is that, traditionally, we would see the role of the accuracy principle in data protection as being everything within the red circle: if it comes within the definition of personal information, there is a requirement to correct, or to ensure the accuracy of, the information you provided on sign-up, but also potentially any personal information that might be within the training data or otherwise used.

Another point I want to highlight is that the broadness of your definition of personal data will affect accuracy as well. What do I mean by that? If you have a broader definition of personal information, more information is obviously going to come within it, and that raises potential issues, because that information may relate to multiple people. Let's take an example to highlight it. Take this figure again, which is effectively discussing the potential deployment of a machine learning system in a hypothetical insurance context, and take the data I've circled in red, the smartphone and smartwatch sensor data. That device-related information would clearly come within the definition of personal information in certain jurisdictions, but there are doubts in others, such as Australia, because that information may not necessarily be 'about' the individual. If you're talking about something like an IP address, it may relate to multiple individuals, but it still comes within the definition of personal information in the EU. So the broader your definition of personal information, the more you have challenges with this accuracy principle. That shows the flexibility of it, and why it should be seen as a principle, because its operation is a little bit context dependent.

Within this it is also important to remember that this relates to so-called sensitive data categories in data protection. Regimes in different jurisdictions protect personal data and/or sensitive data, so there are certain categories of sensitive information, such as health data for instance. When we consider this through the context of emotional AI, we may then start to think about the sensitive inferences that may be drawn from seemingly innocuous device-related information, regarding for instance our emotional state, and how we classify those types of information: are those sensitive inferences potentially sensitive data within the scope of data protection legislation?
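As a minimal sketch of the pipeline in that figure (my own toy illustration, with assumed field names and an arbitrary stand-in model, not the system discussed in the talk), showing where the input side and the inferred-output side sit, and where a potentially sensitive inference appears:

```python
from dataclasses import dataclass

@dataclass
class SensorInput:
    # Device-level data, as in the smartphone/smartwatch example.
    # An identifier like this may relate to more than one individual
    # (think of a shared device or an IP address), which is exactly the
    # definitional question about "personal information" raised above.
    device_id: str
    heart_rate: float
    step_count: int

@dataclass
class InferredOutput:
    # The output side: an inference about emotional state that may end up
    # in a sensitive category (for example, if read as health-related).
    device_id: str
    emotional_state: str
    potentially_sensitive: bool = True

def toy_emotion_model(reading: SensorInput) -> InferredOutput:
    """Stand-in for the machine-learning model in the middle.

    The rule here is deliberately arbitrary: the point is only that a
    seemingly innocuous input is transformed into an inference that may
    fall into a sensitive category, not that real systems work this way.
    """
    state = "stressed" if reading.heart_rate > 100 else "calm"
    return InferredOutput(reading.device_id, state)

if __name__ == "__main__":
    readings = [SensorInput("watch-01", 112.0, 3400),
                SensorInput("watch-01", 68.0, 9100)]
    for out in (toy_emotion_model(r) for r in readings):
        print(out)
```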
So let me explain that on the 'garbage out' side of the equation. Here we're looking at the right-hand side of the diagram, the inferred data output: you have the detailed processing of various types of information, which run through a machine learning model and then spit out some sort of output. The question is whether these potentially sensitive pieces of information also come within the definition of personal information or personal data, and whether the protections afforded by data protection law, in particular the accuracy principle, also apply to this side of the equation. Does it play a role both in terms of the garbage in and the garbage out?

There have been some developments and discussions here, and it would now generally be accepted, I'll come to this in a second, that the definition of personal information would cover this inferred data output. There have been recent developments addressing the overlap between personal information and the sensitive inferences that may be drawn, particularly in the EU, but also, for instance, in the reform of the Australian Privacy Act, where you can see in the ACCC's Digital Platforms Inquiry report a direct recommendation that we need to consider the role of inferences, and a further elaboration in the Attorney-General's discussion paper around whether the categories of sensitive information need to be adjusted to take account of the fact that such developments are able to draw sensitive inferences.

So we can see that there is potentially a role for accuracy both in the context of the inputs of these types of systems and in terms of the outputs. The next thing to consider is whether there are any potential weaknesses in that regard, any potential restrictions on accuracy playing a key role in regulating the impacts of emotional AI in terms of its inaccuracy. There are a few I want to highlight. The first is that when you're looking at data protection law, there is generally an assumption that the purposes are legitimate. The frameworks are effectively set up as risk mitigation mechanisms: they afford certain rights to the individuals to whom the data relate and they place certain obligations on the shoulders of those processing the information, but they generally don't say anything about the legitimacy of the purposes for which you're actually using the information. And generally speaking, if you talk to users, they're often concerned precisely about the particular uses the technology is put to. Within this you then have what scholars have referred to as the focus of data protection being on the upstream, the input data as I have put it, as opposed to the output side. The last point to remember in all of this is that even inaccurate data are still considered personal
data or information. Can we really consider, then, that there is some sort of positive obligation to mitigate the negative impacts, the risks, associated with inaccurate outcomes, particularly when users or consumers will often have agreed to the potential for this risk of inaccuracy when signing up to use the particular service? So I think we have some limitations there within data protection.

But maybe to spell some of that out more specifically in the context of emotional AI: given that inferences are personal data, at least in the EU, what role, you may be wondering, does data protection have to play in preventing inaccuracies in the case study I'm using to test this? I mentioned that there are broad concerns with the accuracy of facial action coding in particular. To give you a flavor of these criticisms, you have the AI Now Institute at NYU basically saying that the whole thing is a pseudoscience, and then you have researchers such as Feldman Barrett and others saying that there is a methodological flaw within it, in that these systems rely on what are known as basic emotions, and that fails to capture the richness of emotional experiences. So there are fundamental theoretical debates here as to whether the entire thing is a pseudoscience, or whether there is a need for some sort of methodological tweaking within the development of the technologies to ensure that they actually achieve what they set out to do.

Now, the response to that criticism has been to adjust some of the methodologies: instead of a basic emotion approach, they have moved to a more appraisal-based approach, and that effectively requires the gathering of far more data, because it is a far more context-aware assessment. Essentially it requires more information gathering in order to put what is gathered into context and derive more accurate inferences as to someone's emotional state. Within that you end up with a question. Accuracy, if we take the General Data Protection Regulation in the EU as an example, is one of a series of principles; others include data minimization, where you should only gather the minimum amount of information needed to achieve a specific purpose. So adjustments to the methodology require some balancing between the different principles: pursuing accuracy, if that is what we're setting out to do, may negatively impact individuals via the more extensive gathering of information. You end up having to question how these various principles are balanced fairly.

The next key point I want to highlight is partly a statement and partly a question: if there is a flaw in the entire methodology, does it really matter whether the input or the output is accurate? What is the role of data
protection law if the entire methodology being relied upon is flawed? I thought I'd take the example of facial action coding again to illustrate what I'm trying to say here. With this particular example of emotional AI, the premise is essentially that the facial expressions of an individual reveal these basic emotions, how they are feeling at any particular moment, and there have been serious questions regarding that premise, the methodology or understanding that grounds the technological developments. From an accuracy perspective on the data side, the data might be perfectly accurate, in that the detection of the facial expressions might be flawless, but the underlying methodology, the correlation between those facial expressions and the emotion the individual is actually feeling, may not be correct. So there is a question as to whether data protection is the appropriate lens through which to consider this particular problem.
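A minimal sketch of that distinction (an invented detector and an invented mapping, offered only to illustrate the argument): the detection step can be treated as perfectly accurate, while the contested step is the lookup from expression to 'basic emotion'.

```python
from typing import Dict, FrozenSet

def detect_action_units(frame_id: str) -> FrozenSet[str]:
    """Step 1: detect facial action units (AUs) in a frame.

    Assume, for the sake of argument, that this step is flawless: the
    'input' data are perfectly accurate. The detections are hard-coded
    here purely for illustration.
    """
    fake_detections = {
        "frame_001": frozenset({"AU6", "AU12"}),  # cheek raiser + lip corner puller
        "frame_002": frozenset({"AU4"}),          # brow lowerer
    }
    return fake_detections.get(frame_id, frozenset())

# Step 2: the contested part. Mapping an expression to a "basic emotion"
# assumes the expression reliably reveals what the person feels, which is
# the methodological premise critics dispute. The table is invented.
BASIC_EMOTION_MAP: Dict[FrozenSet[str], str] = {
    frozenset({"AU6", "AU12"}): "happiness",
    frozenset({"AU4"}): "anger",
}

def infer_emotion(frame_id: str) -> str:
    action_units = detect_action_units(frame_id)
    # Even if `action_units` is perfectly accurate, the output can still be
    # wrong if the underlying correlation does not hold.
    return BASIC_EMOTION_MAP.get(action_units, "unknown")

if __name__ == "__main__":
    for frame in ("frame_001", "frame_002", "frame_003"):
        print(frame, "->", infer_emotion(frame))
```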
That leads me to this point: if data protection isn't the solution, how do we regulate the potential for garbage in the middle? We can see a role for data protection on the input side, when personal data is an input, and on the output side, when there are sensitive personal data outputs or inferences drawn through this detailed, complex processing. But what about regulating the machine learning model in the middle, which might be flawed in terms of its fundamental reasoning as to how it is drawing these inferences in the first place? This is where I am at the moment in terms of this research and where it is going, and I'd like to spell out in the last few slides exactly where my thinking is and how the future research I'm doing will tie in with this.

The first thing to mention, because we need to consider some of the regulatory developments happening in this space, is that the proposed AI Act in the EU is essentially suggesting that there should be transparency requirements for technologies including emotional AI. Why this is particularly interesting is that there have been a series of reports from policy makers, but also enforcement authorities, actually calling for a ban on these technologies, and that has led me to question how we should regulate this potential garbage in the middle. The enforcement authorities, the European Data Protection Board and the European Data Protection Supervisor for instance, are suggesting this ban, but how does that fit together? How should we view it? That has led me to a few different questions that I am particularly interested in.

One of the fundamental ones, which is to a certain extent plaguing me at the moment, is: do we want policy makers deciding what could be construed as scientific debates, the merits of different technologies from a purely scientific perspective? Within that, in the context of emotional AI, you can question whether it is a scientific debate at all: there are some who say that it is purely a pseudoscience, and that's it, while there are others who say, well, there are fundamental flaws in some of the technologies that have been developed, but this is an evolving space, and so there is a scientific debate to be had. The second bullet point on the slide points to some of the complexities underpinning that: the fact that some of the criticisms don't really narrow down on what they're criticizing, be it the underlying methodology generally or facial action coding in particular. The point within that is that policy makers are often probably not best placed to recognize the nuances in these types of arguments, and you can question whether that is actually the appropriate place to have these types of discussions.

This has led me to consider the broader implications of the research in this area, more specifically how we might go about regulating inaccuracy more generally. Emotional AI might be just one example of it, and we need to start thinking about how we regulate inaccuracy and how we have regulated it in the past. One of the fundamental things I noticed, perhaps it is obvious, is that inaccuracy is everywhere in consumer products, so the fact that we have some newfangled technology with AI in the title that happens to be inaccurate isn't particularly unique. The next is that there are a range of harmful products or services that are easily accessible to consumers; there may be certain restrictions on access to some harmful products, for instance alcohol or tobacco, but there are extremely harmful products that are left open to the general public. So what would be unique about regulating a particular technology broad-scale by actually banning it, like emotional AI?

The next thing I'm starting to think about is whether there is a difference between banning a product or a service versus banning a particular technology, and whether that actually matters. Within this I'm starting to question whether there could be a shifting regulatory target when we're talking about a specific deployment of technology, as opposed to particular types of products or services that we may have regulated in the past, and that may make it far more difficult to effectively regulate inaccuracy through bans in this particular context.

The final points I want to highlight are really the 'where from here', what I'm focused on. Part of this research, moving into the future, I'll be doing with Professor Jeannie Paterson at the University of Melbourne, where we're going to look at this issue through a broader regulatory lens: looking at existing contract law mechanisms, but also, for instance, protections that may exist in consumer law, to look at misrepresentations about the capacity of these particular technologies that may perhaps be contained in terms and conditions, or the potential application of consumer protection mechanisms
around misleading or deceptive conduct, but also the potential application of unfair contract terms regulations, right down to the unfair trading provisions that exist, for instance, in the EU or the US. Within this we also want to understand other ex ante means of regulating, in addition to data protection and its accuracy mechanisms, such as consumer guarantees law, and whether there could be a more principle-based regulatory intervention compared to outright bans of specific technological developments. But within that we also still want to look at the role for more paternalistic means of intervention, more specifically bans: the circumstances of where and when they should happen. Even for very harmful products, as I mentioned, there may not be wholesale bans, and we need to explore those trade-offs in order to determine what point on this paternalism-empowerment scale we need to land on in order to effectively regulate these types of technologies.

I think that brings me to the end of what I wanted to say. I realize I've kept that very much within time, but that allows us a more elaborate discussion, which I think is good. I'm happy to elaborate on any of the parts; I think it's rather 'wind me up and watch me go' with this particular topic, so I'm happy to expand on any particular areas if anyone has questions. Thank you.

Thank you so much, Damian, for a really nice introduction to this research project. As a reminder to the participants in the room, you can use the Q&A box to post questions, or if there's an area you want Damian to expand on, if you want to wind him up and let him run in a particular area, just let him know in the Q&A box. I will use the chair's privilege to ask my main question, which is this. I'm not a scholar of privacy; I'm more, if you were to characterize me, a scholar of innovation, so I think about these things from maybe a different perspective. My question is: do we even need to worry about regulating inaccuracy in these sorts of contexts, given that we can probably assume there's a built-in market incentive for accuracy? The people who develop these technologies want them to be accurate. If I'm building an emotional AI, I want it to do its job well; if I'm building a behavioral advertising algorithm, I want it to accurately target the advertisements it chooses for particular users. And we know from the history of innovation that almost all technologies develop incrementally: they start off as poor versions of themselves, and over time, as market incentives take effect and the engineering kinks get ironed out, the technologies get progressively better, in this case more and more accurate. I wonder whether even introducing the threat of, as you said, banning inaccuracy might upset those market incentives and lead us to miss out on technologies that might turn out to be very helpful in a lot of contexts to which they're maybe not even now applied. I can think of a lot of different contexts where a truly accurate emotional AI engine
would be very useful and have a lot of good uses, things we would mostly think of as ethical or moral uses that would benefit society. So there's quite a bit there, but I just wonder what your responses are to those thoughts.

Thanks very much for that. I think you've eloquently expressed a concern that I had stemming from the calls in the European Union to ban this particular technology, and I say that as someone who is quite skeptical of current developments. I definitely take your point. That was really the starting point for this: a move towards a ban is an extreme regulatory outcome, and there's a lot along the regulatory spectrum, in terms of interventions, that doesn't go as far as banning an entire technology. That is what started this project: looking at more ex ante, principle-based ways of mitigating the impacts of inaccuracy. I take your point that the businesses themselves want to avoid inaccuracy, because the better the product they have, the more they'll be able to sell it, so the market should, at least in theory, take care of it.

The one thing I would say, to draw back a little from what I said, is that there are particular uses of this technology that are particularly problematic, where inaccuracy could have fairly massive negative impacts, for example using it to surveil public spaces. Granted, that is all about particular uses; you can have moratoriums on particular uses of the technology, and I think that's part of the spectrum of regulatory responses that I'm interested in. There's nothing particularly wrong, in my view, with having it as part of a gaming feature in a video game, for instance. Obviously you still have the lingering challenges associated with what is known as function creep, where you use it for one thing and it then expands in terms of its uses, but I think that's a different regulatory target to the technology being inaccurate, if that makes sense. So I agree with you, and then I've expanded on it to a certain extent; I hope that gives you an understanding of my position.

Absolutely, yes. I picked up implicitly on that perspective when you talked about your hesitancy about perhaps the European approach of banning specific technologies, especially at this early stage in their development, and the potential follow-on costs that might have for society. We do have some questions in the Q&A box now. TG has listed three questions; I'll read them all, and some of them I think you've already started to address. The first is related to what we were just talking about: TG asks whether companies who are developing AI systems aren't already self-regulating by basically doing A/B testing, trying to make their products better. So I think the question is basically the same: do we even need to intervene in these contexts when companies already are
concerned about this? The second question is quite broad: TG asks whether there is a regulatory technology, or RegTech, solution to regulate accuracy. The third question, I think maybe the most interesting, is whether regulating disclosure or transparency might upset the market and allow some companies to benefit off the work done by their competitors. If there's a disclosure regime that requires companies to disclose how their black-box algorithm works, does that put them at a competitive disadvantage, because their competitors don't need to reverse engineer it, they can see it, it's all been disclosed to them? And does that then itself perhaps undermine incentives to produce good algorithms, because you're not able to internalize the benefit of the algorithm? So there's quite a bit there; you're free to answer any or all of those as you like.

Maybe I'll respond to the last question first. I think yes, to a certain extent: entire transparency would disincentivize innovation in that sense. But it isn't all or nothing, is essentially my very brief response. If you think about transparency around, say, automated individual decision-making in the GDPR, you can have layering of information; it might only be the information that is needed for the consumer to have some idea of how their information is being processed, without necessarily revealing the trade secrets that might sit behind it. There's a broader discussion here, about things I'm less versed in, as to whether there's a need for regulatory powers to step in and examine what's happening under the hood. Again, that's a more graduated response than saying, wholesale, you have to publish everything online; it's more that there are restricted powers to investigate in certain circumstances. So my response to that question is essentially that there's a potential spectrum when it comes to transparency, and it isn't entire transparency or nothing at all. That's a fairly simple answer, but I think there's something to it.

In terms of a RegTech solution for regulating accuracy, I'm not entirely sure. It might depend a little on the difference I mentioned: if we take the emotional AI context, whether or not there's a flaw within the underlying premise of the technology, in the methodology that's actually being used. I think that would be fairly clear based on the type of work going on there; but if it's less about the fundamental approach, the methodology, then it can become a little more difficult, and you may need technical innovations in order to figure out what precisely went wrong. But again, I think that's more of a computer science question than a law or regulatory question per se.

Sure. So we have another question here which raises something you briefly discussed before, the use of these technologies in video games, where you suggested maybe there's less of a concern there than in more sensitive areas. But
William Lamb asks: what if there's a transition towards what people call the metaverse? Obviously that can mean different things to different people, but I think implicit in this question is what happens if and when, and I'll use 'if' because I don't know that it's predetermined, more of our social behavior starts to take place in game-like contexts, virtual reality, whatever you want to call it, thereby giving the applications of these technologies more power to influence the day-to-day activities of our lives that aren't just recreational, perhaps quite important things like meeting our lovers and other really important personal experiences. Do the considerations change in a context like that? Do your concerns change?

Yes, I think the concerns shift. They become concerns about our capacity to make decisions for ourselves; they become a lot about autonomy. There are already debates about this in terms of the effects of a mediated environment where you can personalize content and potentially, depending on your way of viewing it, some would say manipulate behavior, depending on the context of the personalization. Some of the examples I showed at the very beginning, whether it be Cambridge Analytica or the emotional contagion experiment, point to those types of things: the risk of impacts on the autonomous decision-making capacity of individuals. You can see that in the market: can they effectively choose the products that they want? Are they even aware that they're potentially being manipulated? Can they even retrospectively say that they didn't want a particular product or service, or whatever it is? So certainly those things feature in, and to a large extent they relate to the points I hinted at around function creep. As you said in your first question, you can point to legitimate, ethical uses of this technology when it's accurate; you can think of healthcare applications as a particular context. But it's when the data from one particular context starts to seep into other usages, to have some sort of influence on, or direct, behavior around commercial decision-making, that I think it starts to play a significant role. Hopefully that gives you an idea of what I'm thinking.

Great, thank you. We have another question here from an anonymous attendee. The premise is that most data protection laws provide a right to correct or rectify inaccurate personal data collected about an individual, and the question is: in the case of inaccurate inferred data, how, or would it even be possible, could data users honor such requests? Would they have to tweak the algorithm somehow at the individual level, or retrain their model? Are there any thoughts about how, functionally, that could take place?

I suppose it depends on at what point you're deleting. If it's about the impacts of an inference, then you can maybe change the outcome. If it has to do with various aggregated data that have been collected in order to train the model, then it's going to be very difficult to remove anything from that particular stage. If you're thinking about the impacts of this in terms of the inference that is drawn and the potential decision that comes out of it, I think you're circling back to automated decision-making type protections, where there's been a massive discussion around the right to explanation and all that kind of stuff, but also the right to contest a particular outcome. It might be that requesting the deletion of that particular information isn't actually the most effective outcome; it's to change the outcome that was reached, or at least to be able to challenge it. That depends a little on the context and the application you're talking about, but I would say that maybe the deletion of inaccurate inferences isn't the most appropriate outcome that you would be seeking.
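A minimal sketch of the two responses just described (hypothetical structures and helper names, not any real system): flagging or overriding the individual outcome is cheap, while removing someone's contribution from aggregated training data implies retraining.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Inference:
    user_id: str
    predicted_emotion: str
    contested: bool = False
    corrected_to: Optional[str] = None

def contest_inference(store: Dict[str, Inference], user_id: str,
                      corrected_to: Optional[str] = None) -> None:
    """Outcome-level response: flag (and optionally override) one output.

    This targets the decision actually affecting the person, roughly the
    'change or challenge the outcome' option mentioned in the answer.
    """
    record = store[user_id]
    record.contested = True
    record.corrected_to = corrected_to

def drop_user_rows(training_rows: List[dict], user_id: str) -> List[dict]:
    """Model-level response: remove the person's rows before retraining.

    Filtering the rows is easy; the costly part (not shown) is re-running
    training, and whatever influence the data already had on the deployed
    model persists until that happens.
    """
    return [row for row in training_rows if row.get("user_id") != user_id]

if __name__ == "__main__":
    outputs = {"u1": Inference("u1", "angry")}
    contest_inference(outputs, "u1", corrected_to="neutral")
    rows = [{"user_id": "u1", "heart_rate": 110},
            {"user_id": "u2", "heart_rate": 72}]
    print(outputs["u1"])
    print(drop_user_rows(rows, "u1"))
```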
That exhausts the currently asked questions, so I'll ask another follow-up question, but I'll also invite attendees: if you have questions, feel free to add them to the Q&A box at the bottom of your Zoom window. One question I have is this: we were just talking about the ability of users to potentially correct incorrect data, or incorrect predictions or inferences about them, that these emotional AI systems, or AI in general, might produce. Do you think it might be useful to allow users to choose how much inaccuracy they're willing to tolerate? Some people don't care: if you want to make inaccurate emotional inferences about me, or serve me bad ads because of your behavioral algorithm, it doesn't matter to me very much; but other people might feel very strongly about these things. So is there a mechanism whereby regulation could take those considerations into account and allow people to tailor the amount of inaccuracy they're exposed to, or do you think that overcomplicates things?

Well, I think maybe even the approach in the proposed AI Act kind of does that, because it says you need to be transparent. What you would probably have to add is some information on the potential for inaccuracy, some statistical transparency obligation that would say you have to provide this type of information. How you would practically realize that becomes a little difficult, because it might be context dependent. In saying that, legally speaking, I think that's the way you would do it: we're supposed to be active market participants, so you provide the information to the consumers and they effectively choose. Now, there's a well-versed criticism of that, which is that individuals have absolutely no idea what's going on and they don't actively choose, or you could say that they actively choose not to
be informed. So I think this underlines the emphasis of the project, where it is going: trying to find the right spot along the spectrum between empowerment and paternalism, how much paternalism is actually necessary depending on the risk associated with a particular technology. The feeling I have, and I've tried to convey this through some of the answers and the presentation, is that I don't think wholesale bans are the right way to respect that paternalism-empowerment divide; you need to think about things with a bit more nuance, in order to respect both the fact that consumer policy is there to protect individuals from themselves, and from different types of harm, but also to promote individual autonomous decision-making capacity. It has multiple policy aims, and the protection of an autonomous capacity to choose is an important goal in itself. So when you're trying to figure out where on the spectrum you want to lie, you have to explore those theoretical debates that are familiar to scholars of consumer protection, but also, to a certain extent, of data protection and privacy as well.

Great, thanks. I have another follow-up question to that, which goes to a fundamental premise: who decides what is accurate in these contexts? I can think of at least three actors or entities that might be the important decision-makers here. One is the individual in question, obviously, and that's the most obvious one: I should decide what's accurate in these things that are inferred about me. Another is the inferrer; they may have a different opinion about what's accurate because they have different use cases, so for their purposes their prediction might be perceived by them to be accurate but by me to be inaccurate. And the third: you suggested earlier, in response I think to one of TG's questions, that maybe there was some room here for a regulatory intervention, and maybe you could have what I called in my mind the 'algorithm police', who could go in and look into the black box and see what's going on; maybe they could be a third party in this context with a useful perspective on what's accurate and what's inaccurate. Is there an overarching answer to that question, or again is it one of those 'this is really context dependent' answers that varies based on the technology and the application in question?

I do think it's context dependent, but I do think it could ground regulatory responses. If you know that the inaccuracy is built in, that there's a fundamental flaw in what you're deploying, then you can think about other means of regulation that we already have, say consumer protection around product liability or whatever else. We have mechanisms to respond to some of these things, and it's about thinking about their deployment to a certain extent.
As for the potential for the 'algorithm police' to come in, I would say I'd be hesitant, to a large extent. There may be certain contexts and certain uses where we say that because accuracy is an issue here, we simply can't deploy, because it's too risky; I'm not entirely sure that's within consumer products. We can think of emotional AI uses that extend far beyond what we as consumers might purchase, whether embedded in products or services online, or in the policing and national security space, where the potential for error and the risks associated with error increase dramatically. So I would be thinking more along the lines of: if there are going to be regulatory interventions, we have to think about where the risk calculus works out, and whether that justifies something like a moratorium on the use of that technology in that particular context, rather than empowering a regulator to step in and say this is only 59.5% accurate, therefore it shouldn't be served to consumers. I also just think that wouldn't result in a regulatory outcome that would be economical, for want of a better word, and there are other concerns as well.

Great, thank you, that's very useful. We're getting close to the end of time, but we have another question, by either the same or a different anonymous attendee, which is whether emotions actually qualify as personal data under current data protection regimes. I don't know the answer to that, but do you?

Well, I've already written on this; there's a book chapter that I wrote dealing with some of these basic questions. It will be personal information or personal data, I think; that's not particularly difficult to establish, particularly in the EU. Where it gets a little more difficult is whether it's sensitive personal information. Here you end up with questions like: can inferences relating to someone's emotions be classified as health-related information, which would be classified as sensitive? Or, if you look at particular deployments of emotional AI, say through facial action coding, it will be clear that they'll probably be using biometric templates, which, if you look at various data protection frameworks, are generally classified as sensitive information; whereas if they're using other means of detection that don't involve biometrics, there may be more questions, and then it's just a question of whether it's health data or not. So my answer is fairly simple: these technologies will be using personal data; it's more a question of whether they're using sensitive personal data, and that may depend on the context and whether you view the insights derived as related to health, falling within the sensitive category, or whether they're using particular biometric information, which would otherwise bring it
within the definition of sensitive personal information. So I hope that answers it.

Yeah, I think it does; I know more about it now than I did two minutes ago, so thank you. Maybe this might be the last question, because we're down to the last few minutes of the hour. TT asks a question that comes up sometimes as people's response to these thorny questions of privacy: maybe we should just mandate anonymization, so that you can no longer associate users with their data. It's challenging for a number of reasons, but I wonder what your response to that is.

I suppose to a certain extent you can say that is already somewhat in the frameworks, in that if you want to avoid the scope of data protection law, you render the data anonymous and then it doesn't come within it, and then you can do whatever you want. And there are requirements to delete information; if you process it so that it is anonymous, then it's not going to come within the scope of the statutory framework. In saying that, I think that's far easier said than done, because there's a utility-anonymization balancing: you want the information to be useful, and for it to be useful you need to be able to derive some sort of inferences, or you need to be able to relate it to some individual, which basically means it can't be anonymous. There are discussions around pseudonymization and, depending on the jurisdiction you're in, whether pseudonymous information comes within the definition of personal data; I think the general consensus within these frameworks is that generally it does, because the impact of its use will still be the same whether it's pseudonymous or not. But on this anonymization point, I think as soon as you start thinking about anonymization you have a real impact on the utility of the information you've gathered, which then undercuts the purposes for which you might be gathering it in the first place, and that renders those approaches very difficult. So I hope that has answered it, and it's probably a nice way to end as well.

Yeah, it was a good way of showing the competing interests between anonymization and utility, how you put that. Okay, so we're basically out of time; that hour flew by, Damian, and it was very fascinating. So thank you so much, on behalf of HKU and the Law and Technology Centre, for presenting your research and entertaining our questions, and thank you to all the attendees for attending and for your great questions. We hope to see you all at a future HKU Law and Technology talk. Okay, so thank you, thank you again, take care.
