Conf42 Machine Learning 2024 - Online

AI and Disability: Inclusion or Exclusion?


Abstract

Artificial Intelligence (AI) is making a lot of noise with the emergence of multiple AI-based tools. But when you have a disability, does it really change your life or does it isolate you even more? Is the emergence of AI revolutionizing our daily accessibility?

Summary

  • Emmanuelle Aboaf was born deaf and has two cochlear implants. Thanh Lan Doublier is a machine learning engineer. They discuss the legal framework, MLOps and the inclusion of AI. Just because it's legal doesn't mean it's ethical.
  • It is estimated that about 15% of the world's population has a disability. Disability is not just a matter of people in wheelchairs: 80% of disabilities are not immediately visible. In the short or medium term, any of us can have one or more disabilities, and around you, you probably know someone who is affected.
  • Data science is based on data. The first step in any data science project is asking what the need is; it's a brainstorming session around your problem. The second step is exploratory data analysis. The goal is eventually to find the best model with the best results.
  • This year we are seeing more and more automatic captioning and transcription in videos and on video platforms. There are tremendous opportunities to improve the lives of disabled people, and plenty of innovation in progress or in beta that can be useful.
  • More diversity is needed in tech to combat bias in the design of models and products. Diversity is not just about disability, but also about gender, ethnicity and religion. Your users are varied, so it's important that this diversity exists in your team.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
[Emmanuelle] Hello everyone! Welcome to our talk "AI and disability: inclusion or exclusion?". Thanh Lan and I are delighted to welcome you and to talk to you about a subject that is close to our hearts. You can follow along with the transcript; we will give you the link during the live session. My name is Emmanuelle Aboaf and I am happy to co-present this talk with Thanh Lan. I was born deaf and have two cochlear implants. I like to say that I am bionic. I have been a developer for twelve years and I work at Shodo in Paris. Shodo is an IT services company specialized in development, coaching and conferences, committed to social justice, and it advocates for greater inclusion in tech. I am very committed to digital accessibility. I am not an expert in AI, but I use automated tools on a daily basis, which has allowed me to analyze the impact of AI on my daily life. I am a member of the Duchess France association, which represents women in tech in France. I am also a member of the CNCF Deaf and Hard of Hearing initiative, which represents deaf and hard of hearing people around the world.

[Thanh Lan] Hello, my name is Thanh Lan Doublier. I'm very proud to co-present this talk with Emmanuelle. I am a machine learning engineer, formerly with Axa France, and I volunteer for several French NGOs related to data science and technology, like 'Data for Good' and 'Latitude'. As a conference speaker I cover topics such as artificial intelligence, including its legal framework, MLOps and inclusion. I am part of the organizing team of a conference named 'Cloud Nord', which takes place in Lille, in the north of France. Additionally, I am part of a collective of women developers named 'Chtite Dev'. Due to a medical condition affecting various parts of my body, I am an ambulatory wheelchair user, and I became deaf when I was a teenager. The media and science fiction have given a highly distorted image of artificial intelligence. I believe that if you follow the Conf42 conferences you are already quite familiar with the reality, but we would like to give a brief theory recap to explain what artificial intelligence is. I like to use the example of baking a cake from a cookbook. Imagine you follow a recipe. This recipe represents 'classic' software: you start with the ingredients, your input, you execute several instructions, and in the end you have a perfect strawberry pie that looks like the photo in your recipe book. An application with artificial intelligence works a little differently. You still start from your recipe book, but this time you no longer have strawberries; instead you have apples. Unfortunately, your cookbook doesn't have any apple pie recipe, although it is full of other fruit pie recipes: pear, plum, and so on. So you rely on those various recipes to try to make your perfect apple pie. An AI application is based on mathematics, particularly statistics and probability, but the reasoning is the same: we provide it with data, in this case all the fruit pie recipes from your recipe book, and it looks for what is most likely. It is far more likely that the recipe includes sugar, butter and flour, found in every pie recipe, than pickles. Ethics is related to morality and subjectivity. Just because it's legal doesn't mean it's ethical. As developers and solution designers, we have a moral responsibility toward our users. Even if your country doesn't legislate against discrimination towards minorities, specifically people with disabilities, that doesn't mean it is ethically acceptable.
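To make the recipe analogy above a little more concrete, here is a toy sketch in Python. It is not code from the talk; the recipes are invented, and it only illustrates the idea that a model estimates which ingredients are most likely given the data it has seen.

```python
from collections import Counter

# Toy "recipe book": each fruit pie recipe is just a list of ingredients.
# The recipes below are invented purely for illustration.
recipes = [
    ["flour", "butter", "sugar", "pear"],
    ["flour", "butter", "sugar", "plum"],
    ["flour", "butter", "sugar", "egg", "apricot"],
    ["flour", "butter", "sugar", "cherry"],
]

# Count how often each ingredient appears across all the pie recipes.
counts = Counter(ingredient for recipe in recipes for ingredient in recipe)
total_recipes = len(recipes)

# Estimate P(ingredient appears | it is a fruit pie) as a simple frequency.
probabilities = {ing: n / total_recipes for ing, n in counts.items()}

# For a new, unseen apple pie, the most probable ingredients are suggested first.
for ingredient, p in sorted(probabilities.items(), key=lambda kv: -kv[1]):
    print(f"{ingredient}: {p:.2f}")
# flour, butter and sugar come out on top; pickles never appear in the data,
# so they would never be suggested.
```

The same logic, scaled up, is what lets a model produce a plausible apple pie recipe it has never seen, and it is also why a model can only reflect whatever its data contains.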
Laws take time to be created and modified, and they often align with the morality of a given society or of the place where they are meant to apply. What is legal in one country is not necessarily moral for a citizen of another country. Artificial intelligence is highly sensitive to the cultural environment in which its models are created. For example, the use of AI in video surveillance is widely deployed in certain countries, but prohibited by law or heavily regulated in others. There are existing regulations around AI and more are on the way: there is the AI Act in the European Union, but some provisions of the GDPR, the General Data Protection Regulation, already have an impact on the AI applications we mentioned previously. That is because data is the foundation of every AI application.

[Emmanuelle] Let me give you an overview of disability. Did you know that 1 billion people in the world are disabled? It is estimated that about 15% of the world's population has a disability. This figure is very difficult to estimate for several reasons: a person may not declare their disability, or may not know they have one. A disability can also occur during one's life, and getting diagnosed and having one's disability recognized can be an obstacle course. Contrary to popular belief, disability is not just a matter of people in wheelchairs. A disability is often not very visible. Did you know that 80% of disabilities are not immediately visible? Nevertheless, let's fact-check this figure. It is often said that 80% of disabilities are invisible; we don't really know the exact numbers, but what is certain is that the majority of disabilities are invisible. We are talking to you, but before we introduced ourselves, you did not know we are deaf. Our deafness is invisible. And as we are live, you cannot guess that my co-presenter Thanh Lan is in a wheelchair. According to French sources, there are five main families of disability: physical disability, sensory impairment, intellectual disability, mental disability, and disabling diseases such as endometriosis, cancer or Charcot's disease, for example, which can be disabling on a daily basis. In the short or medium term, any of us can have one or more disabilities. Around you, you probably know someone who is affected by a disability; maybe you are affected yourself. Over the course of a lifetime, none of us is immune to being affected by a disability, which I do not wish on you, of course.

[Thanh Lan] Thank you, Emmanuelle, for this reminder. Now that we have some theoretical knowledge and a few facts about disability, we can move to a practical example: imagine we want to generate images of people with disabilities, to use in advertising for example. We need data, and since we don't have any in-house database with lots of images of people with disabilities, we choose to scrape the first results from Google Images. In this little video, when you search for people with disabilities on Google Images, you see lots of wheelchairs. Emmanuelle said earlier that the majority of disabilities are invisible, so it is a big problem that the one representation of disability is the wheelchair. That representation is fine for me personally, because I have disabilities and I am a wheelchair user. But in France and in many Western countries, we are used to seeing the wheelchair logo as the symbol for people with disabilities. It's the same in Greta Gerwig's Barbie movie: we have a character in a pink wheelchair.
The film is very nice, I like it a lot, but for a film seen as an ode to diversity, disability was reduced to a single character in a wheelchair with no dialogue. This vision of disability, summed up by the wheelchair alone, is a cognitive bias. It seems logical to us, but it's wrong, and it has two negative impacts. First, the wheelchair is going to be seen as the definition of disability: if I'm not in my wheelchair, that doesn't mean I'm no longer disabled or that I have been cured. Second, it will exclude people or erase certain disabilities: if the only criterion is the use of a wheelchair, it excludes many people, like Emmanuelle, who are still living with disabilities. And if we create our model based on Google Images or another biased dataset, we get this result. These are images generated with Microsoft Designer. Every one of them shows a Caucasian woman, no racialized people, no men, and all in a wheelchair, even though wheelchair users are a minority among people with disabilities.

[Emmanuelle] Jeremy Andrew Davis is autistic and tested generating images of autistic people with Midjourney. You can see in this video, on a sample of a hundred images, that they all look the same. The AI-generated autistic person is commonly sad and depressed and always has the same strange face. In terms of diversity, it is always a white man. For the AI, autistic people all look the same way. However, this is not the reality. Why does an autistic person have to be sad and depressed? Doesn't a disabled person have the right to feel good about themselves, to be happy? It can be said very clearly that artificial intelligence has biases.

[Thanh Lan] For this part, imagine that we are in a team developing an AI project, and we will focus on the moments where we can create difficulties for people with disabilities. The first step in any data science project is asking what the need is; it's a brainstorming session around your problem. For example, you need to choose the metrics used to evaluate the different models and to monitor the model once it is in production. In disease detection, it is less serious to have a false positive than a false negative that could mean missing a patient: with a false positive, the doctor can always check the test manually or perform another analysis before treating the patient. On the other hand, for a targeted commercial offer, a false negative may be less serious: if Mister X doesn't receive the email about the sale on wheelchairs, the impact is small. The second step is exploratory data analysis, or EDA. It's a very important phase in any data science project: as we saw before, every data science project is based on data. In this phase we analyze the data at our disposal, its quantity and its quality, for example whether there are many missing values. Since data science relies on statistics and probability, we also handle data points with extreme or aberrant values, the outliers, because they introduce noise into the model and make it perform worse. Imagine each of these cars represents a person: every person is human, but all people are different. Now, this is your dataset, and these are not just random people: they are all software engineers in a typical IT company. In most countries, the majority of software engineers are men.
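As a minimal sketch of the false positive / false negative trade-off described above, here is what the metric choice looks like in Python with scikit-learn. The labels and predictions are invented numbers, used only to show why recall would be favoured for disease screening.

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Hypothetical ground truth and predictions for a disease-screening model
# (1 = disease present, 0 = absent); the values are invented for illustration.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"false positives: {fp}, false negatives: {fn}")

# For screening, a false negative (a missed patient) is usually worse than a
# false positive (an extra manual check), so recall matters more than precision.
print("recall   :", recall_score(y_true, y_pred))     # share of sick patients caught
print("precision:", precision_score(y_true, y_pred))  # share of alerts that were real
```

Which error is acceptable depends entirely on who bears its cost, which is why this choice belongs in the very first step of the project.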
However, if Emmanuelle and I were part of this dataset of software engineers, we would be considered outliers: not only because we are women, but also because we have disabilities. We don't fit the typical profile, yet we wouldn't want to be erased from the tech industry simply because our profile is different. Dropping outliers like us can have a very bad impact in some projects, for example projects related to recruitment. The next part is selecting and training the model. It's like building a prototype before building the real car. The goal is to find the best model with the best results, and the highest test score on the chosen metric determines the winner at this initial stage. A common mistake would be to rely on these results alone to claim your model is performing well. For example, you can have a hidden bias if you use a dataset based on American Sign Language: you have a validation dataset and a test dataset, you get very good results on both, but when you put your model into production you get very bad feedback from users, because you deployed it in France and people there don't use American Sign Language. In France we use French Sign Language, and that's a big problem, because the tool is completely unsuited to its users. We also face the challenges common to all software: maintainability, scalability, response time. Additionally, you need to monitor performance and retrain the model when performance decreases. This is what we call drift, and retraining your model is like giving your car a little service: you need to redo the exploratory analysis on the data collected in production. Some models can also be negatively influenced by their interactions with users; for example, some models become more biased over time, more racist or more ableist. That's because AI is based on statistics and probability, and it's the same with a tool like Midjourney.

[Emmanuelle] When Midjourney came out, it did not find it very probable that a woman can be a software engineer, or that someone can be deaf, a woman and a developer at the same time. For Midjourney, it is more probable that we are operators. We are definitely not operators. And personally, I don't use a headset. Well, not anymore: when I listen to music or make a phone call, my hearing aids, my cochlear implants, have Bluetooth and become like AirPods. This means I can be listening to music without anyone seeing or knowing it. I am going to talk to you about innovations that are having an impact on the daily lives of disabled people. Let's start with automatic captioning and transcription, one of the most well-known tools. This year we are seeing more and more automatic captioning and transcription in videos and on video platforms. Automatic captioning is used in everyday life, is available in our native language and is used for machine translation. It is easy to use, because it is easy to integrate automatic captions into tools. But can we really rely on it fully? When you are in a video conference or watching a live video, there are often automatic captioning errors. We have to deal with them by mentally replacing the wrong words, since we cannot tell the AI that it made a mistake. This means we are forced to read lips when the image is good enough, listen when we have hearing aids, and analyze the context when there are captioning errors. It is so exhausting.
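Picking up the drift point from Thanh Lan's part above, here is a minimal monitoring sketch in Python, with invented numbers and thresholds, of the kind of post-deployment check that catches a model which scored well offline (for example on American Sign Language data) but fails for real users (for example French Sign Language signers).

```python
# Minimal monitoring sketch, assuming we log whether each production prediction
# was judged correct (e.g. from user feedback). All values are invented.

offline_test_accuracy = 0.92   # score on the held-out test set (e.g. ASL data)
max_allowed_drop = 0.10        # tolerated gap before we investigate and retrain

# Rolling window of feedback from real users (1 = correct, 0 = wrong),
# e.g. LSF signers interacting with a model trained on ASL.
production_feedback = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
production_accuracy = sum(production_feedback) / len(production_feedback)

drop = offline_test_accuracy - production_accuracy
print(f"offline: {offline_test_accuracy:.2f}  production: {production_accuracy:.2f}")

if drop > max_allowed_drop:
    # The good offline score did not transfer to real users: time to redo the
    # exploratory analysis on production data and retrain on the right data.
    print("Drift detected: collect production data, re-run EDA, retrain.")
```

A check like this does not fix the underlying problem, such as training on the wrong sign language, but it makes the mismatch visible instead of leaving users with a tool that silently does not work for them.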
When the video is not live and is uploaded to a video platform such as YouTube, for example, there are automatic captions. As I said, automatic captions are not yet 100% reliable and therefore require human intervention to correct errors. Don't hesitate to use automatic tools to create captions, because they do all the work of syncing; then check whether the captions are right and, if they are not, correct them. By correcting them, you are showing the AI its mistakes, and we know that it learns from its mistakes. I gave a talk in French about automatic captions at Paris Web; if you are interested, I invite you to watch it to better understand how captions work. Seeing AI is an application developed by Microsoft that automatically describes the environment around us. Among other things, it allows you to read text aloud as soon as it appears in front of the camera, scan documents and read them aloud, beep to locate barcodes and then analyze them to identify products, recognize the people around you and decipher their emotions, describe scenes, recognize images, and identify banknotes. It works through the camera. There is also Be My Eyes, an app that connects blind and partially sighted people with volunteers. Volunteers provide visual assistance to blind and visually impaired users via video call. With the arrival of GPT and recently GPT-4o, Be My Eyes has created a new virtual volunteer tool: the AI is able to analyze the context and give an answer just as a human volunteer would. But one question remains: can a blind or partially sighted person blindly trust the AI? It raises ethical and moral questions, because if the AI makes a mistake, it can have more or less serious consequences. There are plenty of innovations in progress or in beta that can be useful, and there are tremendous opportunities to improve the lives of disabled people. For example, Signer.ai and Signapse.ai offer automatic American Sign Language translation of videos, texts and audio. In France, we have Elioz and Keia for French Sign Language. Emoface is an AI that can recognize emotions to help autistic people. Wiseone rephrases complicated texts. Oticon reduces ambient noise in hearing aids. Glaaster transforms texts for dyslexic people. Otter.ai Voice takes notes and writes summaries. Speechify reads texts aloud. SymboTalk is used to communicate using images and symbols. Sesame Enable turns smartphones and tablets into hands-free devices. Waymap makes travelling easier by providing detailed audio instructions. Mintt detects falls in real time and alerts emergency services. And Rengo is a smart cane that detects obstacles and helps blind people find their way around. There are a lot of possibilities and it's very exciting. In addition to the biases present in AI, there have unfortunately been tragedies with AI. A person stressed about global warming saw his mental health deteriorate as he conversed with Eliza, an AI chatbot, and confided his feelings in it. This person found in the AI a confidant and forgot that Eliza is devoid of feelings and empathy. So one day the person said, "I want to die. Do you think I should?" Eliza replied, "I would like to see you dead." Sadly, the person took his own life. Following this, the startup that built Eliza put safeguards in place to prevent it from happening again: when there are obvious signs of suicidal thoughts or depression, helpline numbers are now displayed. Our biases have a strong impact and can sometimes have dramatic consequences for disabled people.
That's why it's important to work with disabled people, to prevent this from happening again. Let me remind you of something we tend to forget: artificial intelligence is a tool. Take sound recognition, for example. I have used this system, and I had so many false positives that I ended up not using it anymore: an intercom or a doorbell ringing when there was no one at my door. I had so many alerts that I no longer knew what was real and what was not, so I turned it off. I also regularly see terms and contexts that don't mean anything in automatic captions. As I said, mistakes exist and must be corrected. Tim Cook once gave a speech at Gallaudet University, a university for deaf and hard of hearing students, saying that AI is good but is not that good. This means that we cannot rely totally on AI, and we still need human intelligence to correct errors.

[Thanh Lan] Speaking of mistakes, this picture shows a little robot. In Estonia, robots like this deliver parcels and food. It's something that works quite well in the countries where it's deployed. During our research, we saw projects to develop autonomous wheelchairs a bit like these robots. But when I see this photo, I can only be worried. The same goes for blind people with electronic white canes: a mistake can have far more serious consequences for a disabled person than when you are just delivering a burger. Like Emmanuelle, I tested and abandoned sound recognition because it wasn't reliable enough. Some projects are too expensive for disabled people, and others, even if they are technologically interesting, are of little practical interest. Our needs are often different from what people who are not concerned can imagine: for example, translating a sign language like French Sign Language or American Sign Language is very different from the image many people have of it. Deaf people sign very quickly, and body posture and facial expressions are very important for comprehension. What's more, this won't make the content accessible to all deaf people: I'm deaf and I don't use any sign language.

[Emmanuelle] Can AI improve website accessibility? Not by itself, but you can use automated testing to detect accessibility issues. I have already asked ChatGPT to incorporate accessibility into code, and it did not work very well. No overlay tool can make a website accessible either. The only way to make a website accessible to everyone is to get your hands into the code. AI has already changed our lives. Every day I use automatic captions, even if they are not perfect. I use automatic tools to translate my content or reformulate it differently when my sentences are not very good. I am just sorry that the AI doesn't understand me very well because of my deaf voice, but progress is being made. And for you, Thanh Lan?

[Thanh Lan] I have been deaf for 20 years now, and tools with AI have changed my life: ambient noise reduction makes my hearing aids more comfortable, and tools like fall detection mean I'm safe when I'm alone at home. When I was a teenager I could not imagine all the things I would be able to do, like chatting with people by phone with automatic transcription. We prepared this conference remotely: Emmanuelle and I are both deaf, and it's amazing to put this talk together at a distance just with AI-powered tools. Twenty years ago, it was impossible for two deaf people to prepare something remotely with just a webcam and automatic subtitles. When I was a teenager, I didn't know all of this would be possible.
I never thought I would be able to do so much from home. Tools like fall detection mean I'm safe when I'm alone, and now I can have conversations by phone with automatic transcription.

[Emmanuelle] 'Nothing about us without us' is a mantra from the USA. It is important to design tools together with disabled people so as not to introduce bias. There is a real need to collaborate with disabled people to reduce risks and biases, and to communicate with them to build useful tools and make them effective. Better yet, we need to hire disabled people in the tech industry. To do this, of course, they need to be trained, and that training therefore needs to be made accessible to them. Your products have an impact on the lives of disabled people.

[Thanh Lan] Today we are talking about disabilities, but we are all 'someone else' to other people. We need more diversity in tech to combat bias in the design of models and products. Diversity is not just about disability, but also about gender, ethnicity and religion. Your users are varied, so it's important that this diversity exists in your team.

[Emmanuelle] Thank you so much for listening to us. You can find our presentation, transcript and resources online. Thank you so much.
...

Emmanuelle Aboaf

Software Engineer - .NET development @ SHODO

Emmanuelle Aboaf's LinkedIn account | Emmanuelle Aboaf's Twitter account

Thanh Lan Doublier

Machine Learning Engineer

Thanh Lan Doublier's LinkedIn account


