PEAT Talks Transcript: Emerging Workplace Technologies and Vision Loss
Hello, and welcome to PEAT Talks, the virtual speaker series from the Partnership on Employment & Accessible Technology. PEAT Talks showcases various organizations and individuals whose work and innovation are advancing accessible technology in the workplace. My name is Corinne Weible. I’m the Deputy Project Coordinator for PEAT, and I'll be hosting today's talk.
Before we get started, I'm going to quickly review a few logistics. We will have time for questions and answers, so please enter your questions in the chat window at any time. You can also use the chat window if you're having technical difficulties, and we will do our best to resolve any issues. You can download today's presentation on PEATworks.org, and an archived recording will be posted online following today’s event. We will be live tweeting today's event from (indiscernible), so please feel free to join us and follow along using the hashtag #PEATTalks.
Today, PEAT is pleased to welcome Paul Schroeder from Aira. Paul has been a dedicated member of our PEAT Think Tank for many years, always providing great perspective and ideas. Paul is Director of Public Policy and Strategic Alliances for Aira. He leads Aira’s public policy engagement and issue development in the public sector, including state and federal programs, supports Aira’s business development, and manages Aira’s relationships with organizations representing the interests of people with disabilities.
February is Low Vision Awareness Month, so we are especially honored to have Paul as our guest today to discuss how emerging technologies are shaping the landscape of employment for people with vision loss and other disabilities. With that, I’ll turn it over to Paul.
Well, Corinne, thank you so much, and thank you to PEAT for making this opportunity available. I am very excited to be here to talk about emerging technologies in the workplace. I'm going to be focusing on wearables, primarily, because there's so much innovation happening in that space. And I’ll talk a little bit about apps as well, and a little bit about an interesting offshoot happening around indoor navigation.
All of these are, of course, of particular significance to those of us who are blind or visually impaired, but I think a lot of these technologies actually have value for other people with disabilities as well. I hope you’ll have a chance to learn about that, maybe ask me some questions to see how we can help make these technologies usable by people with other disabilities as well. I will be mentioning Aira, and I will be using Aira examples a lot as we go forward to particularly talk about employment and how these technologies are influencing and making employment, I think, more possible for people who are blind or visually impaired.
By the way, you can reach me -- I’ll say this now, and I’ll probably say this again at the end. It's simple: Paul@aira.io. The aira.io is the confusing part. Paul@aira.io. Or feel free to shoot me a text, if you want, to my mobile, (202) 714-0159. Those are on the slides, if you're looking at the slides, and I'll say them again towards the end.
So, it's clear that those of us who are blind or visually impaired have achieved undreamed-of access to and control over the information and technology that we need to function in our daily lives. I often say to people that, at least in my lifespan, this appears to be the best time to be blind or visually impaired because there are so many opportunities to have access to the world and to have information technology at our fingertips and in ways that we can actually make use of it. And a lot of that, of course, is the developments in computer and mobile technologies.
But, as we’re going to talk about, barriers remain, and those barriers are often especially tied to efficient access to and use of the ubiquitous visual elements and information that are always in our face and that, for those of us who are blind or visually impaired, often create new challenges.
Now, by the way, I want to put a couple of numbers out. There are an estimated 23.7 million adults in the United States, about 10% of the adult population, who report having trouble seeing even when wearing glasses or contacts. I take that right from the American Foundation for the Blind. Their statistics page is a nice page if you have a chance to take a look: afb.org/stats. A shout out to my old employer. I spent a lot of years working with AFB and treasure those times.
Let's jump onto slide two. We’ll go onto slide two and talk a little bit about these emerging technologies. And, as I've noted, there's a lot of enthusiasm about the innovations that are taking place, because these innovations are doing two things that are essential for those of us who are blind or visually impaired. They are bringing visual information and visual detail quite literally into focus, as we'll see in a minute or two, and under control. And they're also helping to advance employment.
So, what are these technologies bringing to the table? Well, as you'll note, improved support for navigation, something we'll talk a little bit about; immediate access to signage, to other visual elements, and even to information about what's around us. They help us identify individuals; they enhance and augment visual presentations; and, because these are wearable technologies that I'm primarily focusing on, they're hands free. So, for those of us who are generally out and about, perhaps with a cane or a dog guide, we've got one hand tied up. Having a wearable that keeps our hands free is actually pretty helpful.
We’ll move onto slide three and talk about what's driving these technology developments. What's making possible these things that I just talked about? And then we'll get into more detail.
So, mobile cameras. We’ve got cameras that are ubiquitous in our smartphones now and have become extraordinarily good and powerful, both for photos and for video.
Head-mounted displays is something we’ll mention as a great development.
Sensors, all kinds of sensors, are creeping and finding their way into wearable technologies, and that's going to help in a lot of the aspects that I just mentioned.
Mobile processor capacity. That is a fancy way to say our phones can do so much more each year and have so much capability built into them, which makes it possible for us to do a lot of things in a wearable technology as we bring it around with us in our daily lives.
Text-to-speech. A lot of us who are visually impaired know that well; we spend a lot of time with text-to-speech.
Haptics, using vibration and touch, is another interesting area that’s driving the evolution in wearables and making them just so much more capable of providing us information.
Artificial intelligence, I'll talk a little bit about that today, as it’s beginning to find its way into these technologies, and helping to do things like object recognition, maybe even facial recognition.
Labeling: all kinds of data, crowd sourcing, and mapping data that facilities are producing themselves. There's a lot of data that we can take advantage of with our wearable mobile technology. And what does that data do? Well, it helps tell us where we are, what's around us, and what facilities and addresses are nearby. Some of that, of course, is good old-fashioned GPS from the global satellites, and some of that is the data coming from labels as we move about with our mobile technology. And what makes those labels possible, of course, are all the transmissions and transmitters, Bluetooth beacons, WiFi points, and other things that are powering the technologies.
So, I just wanted to put those out there as kind of a background for what I'm going to be talking about in the next slide to kind of give you a sense of what's driving the innovation that we're so excited about.
Moving on, I think we're now looking at slide four, and I thought we would start taking a look at some of these emerging technologies. I wanted to start with those that have head-mounted displays, the visual technology. I won't be saying a whole lot about these, primarily because they're not technology that I know a great deal about myself, but I do want to encourage you to take a look at and perhaps learn about some of them.
The wearables that I'm going to be talking about, both here for the head-mounted displays and later as I get into some of the other technologies, including what Aira uses, rely in many respects on glasses: smart glasses, technology glasses, e-glasses. You can have any number of names for them, but they're glasses that you wear on your head like traditional eyeglasses, with cameras and sometimes other technologies in the glasses as well. That happens to be a great place to put a camera because it’s head-high, which means it functions, in a sense, as your vision would, at head level and eye level. And, of course, it's also a nice place to wear some technology because you've got a good vantage point from head high and it’s a relatively stable place, and we'll talk a little bit about what that means as well.
So, these technologies that have the head-mounted displays, some of you have probably heard of Google Glass. They kind of were the first to really get a lot of attention using this display that sits right in front of your eye. But there are a number of others that have been developed since, and the ones I'm talking about here on this particular slide are focused primarily or at least in large part on the needs of people with low vision. That is to say people who can take advantage of that visual display. These technologies take the capability of cameras and processing and the capabilities of these visual displays that have very high definition to them to create an enhanced visual monitor or magnifying system, if you will, for people with low vision.
Now, some of these technologies often include other techniques as well, such as optical character recognition. So, if you're looking at something with your camera, not only might you see it in your visual display, but you can also trigger your glasses or phone to take a picture, run optical character recognition, or OCR, on that text, and have it read to you in a text-to-speech voice as well. I'm told that one of these technologies even has mood recognition built in, so if you're looking at somebody, you can tell what their mood is. No comment on whether that's helping or hindering marriages; it could be either one.
I will say that, generally, there's a limited set of visual conditions these technologies work best for. Again, I'm not an expert on which ones or how that works, but if you're interested in this area and have low vision, you might want to talk to a medical professional to get some advice and perhaps try out some of these technologies to see if they would work for you, and whether the nature of the visual display, the magnification, the contrast, and the other techniques and technologies built in would be helpful.
Just to work through the ones I’ve mentioned on this slide: Cyber Eyez, which is available from Cyber Timez; the JORDY, which has been around a while. Those of us in the vision field may recall this one’s been here and gone through some changes. That's available from Enhanced Vision which, of course, is now part of VFO. NuEyes is another one; NuEyes, of course, is also the company's name. And then eSight, available from eSight Eyewear. These are not the only examples, but they're good examples of the kind of diversity that’s available now in these technologies that use smart glasses, cameras, and visual displays to bring the visual world into better focus for people with low vision and, as I said, to provide some other value as well.
Let me jump on now to the next group, and this is where, on this slide, I'll mention a couple of apps. There is great work happening here. There’s a number of great apps, and people who follow app development for smartphones all have lots of favorites, but there are two that I want to particularly call out.
One is from Microsoft. That's Microsoft Seeing AI, as it’s sometimes referred to. I believe it is only available on the iPhone at this time. It’s a research project from Microsoft that harnesses artificial intelligence -- that’s the "AI" -- to do some very cool things, and I know a lot of people in the vision community have been very excited about this app. It's a mobile app through which you can access text, including handwriting, so both short text and document reading; color and light detection; descriptions of objects and scenery; and, for people, it can tell you age and emotion, and you can do some limited facial recognition as well. So, something to take a look at. It's free in the iTunes App Store.
Be My Eyes is another one. This is a mobile app that’s actually been around for a few years, on both iPhone and Android. It gives people who are blind or visually impaired access, through streaming video, to a sighted volunteer. And it's interesting if you look at the numbers: there are a lot more volunteers than there are people with vision loss actually signed up for the service, which, in a way, is great. It suggests that people are really excited to provide this kind of help. And what can you do with it? Well, you can use the smartphone to access any number of things that you might need information about, whether it's the directions on a box of food, reading a piece of paper, reading a sign, or shopping and seeing what's available in a store.
There's any number of things you can do using that camera on your smartphone connected to a sighted volunteer. And, of course, I would be remiss if I didn’t say, “Remember, these are volunteers,” so you might want to be careful about how much confidential information you divulge, because you don't know very much about who's on the other end. But many people love Be My Eyes. It's been a really great addition to the app world and, again, has opened up visual information in a very cool way for people who are blind.
I do want to talk about a couple more audio-output technologies on the next slide. Here, I've mentioned OrCam, which is in an interesting position; I've included it as one of the audio output systems. It's a lightweight camera that attaches to whatever eyeglasses you want to wear. So, in other words, you wear your own glasses, you put the OrCam camera on, and with that camera you can do optical character recognition. If you're looking at text, or you point at a sign or something that you believe is text, it will take a snapshot of that, decode it with optical character recognition, and tell you what that text says. You can do facial recognition for people that you identify: you essentially take their pictures and then label them. It will also do product identification, if you want to know what an object is, and a number of other things. So, OrCam is something that people might want to take a look at.
The other one I wanted to mention in this category, of course, is Aira. That's where I work, and I do want to take a second to talk about what Aira is. In some ways, some of what I just said about Be My Eyes is true for Aira. It uses streaming video through, yes, your smartphone, but also through a pair of smart glasses. So, you've got the option with Aira to use a pair of glasses like Google Glass or the others that we use. In fact, our glasses don't need a visual display because we don't use one. We’ve found glasses that just have a nice video camera in them, and we use that camera to stream the video.
We have agents as well. One good thing to point out about ours is that they are screened. We know who they are; we check their backgrounds; we pay them. We are building a professional group of agents to provide information to people who are blind or visually impaired. As I mentioned, we do use streaming video. Our agents also have other information available to them. We built a whole dashboard and, in fact, on the slide we’ve got a picture of it, and I’ll mention that in just a second. In that dashboard, the piece of software that we make available to our agents, they have the streaming video, so they can see, from the user's perspective, what's in front of you through the video camera. They've also got GPS, so they know your location, which is helpful to have, especially if you're out moving about in the community.
They've got some other public data sources that they can pull up, like Street View from Google, and, of course, they can also access everything that's available on the web, which is nice because a lot of times, again, if you're out and about and you want to know something about restaurants in the area, you’ll say, “Well, what do the Yelp reviews say?” and the agents from Aira can pull that information up for you. We've also got ride-share integration, so both Lyft and Uber are services that our agents can order for you and then, even more important, tell you where that car is on the map and, when it pulls up, help you find it.
Photos is something that our agents can take as well. Through the streaming video, they can get a still photo, which is helpful for reading, but also cool because a lot of our folks, I've done this myself, we like to have photos of things that we're experiencing so we can share with our sighted friends. I often like to say that this allows blind people to be just as boring on Instagram as everybody else, so we can put up our photos too.
On this slide, as I said, there's a picture of a screen that is the dashboard, with the live camera stream, location awareness from the GPS, and a personalized profile, so we know something about our users.
So, let me talk specifically about employment and move on to the next slide. There are so many techniques, tools, strategies, and technologies that have been developed over the years to help people with vision loss achieve independence and successfully work in a whole variety of jobs. Right? But employment numbers are still dismal for people with vision loss and, I would say, across the disability field, unfortunately. Let's not kid ourselves. As much as we develop great tools to solve a lot of the problems that challenge access for people with vision loss, there is still a lot of reliance on vision that we have to tackle, and that's certainly true in employment.
Access to visual information is a contributing factor, I believe, to the employment challenge facing people who are blind or visually impaired. A lot of these visual-based barriers are tied to efficiency, and some of them are tied to design choices. The reliance on visual cues in technology interfaces and for information access, the dependence on vision as the way to do that, is a choice that designers make, and we still haven't gotten past it.
I have some examples on this slide and the next two slides, and I drew these from agents who work for Aira, because I believe they are likely true across the board, not just for people who happen to be users of the Aira service. I drew on them because they were real-world examples of the kinds of things agents are doing to help ensure that people who are blind and using Aira can access employment, get jobs, and be successful when they do get jobs.
So, what are some of these examples? Well, accessing online job sites, applications, assessments, and training. We’d like to think that accessibility has transformed the online world, and PEAT has been really leading the way on that and pushing that issue forward. In many ways, it has, but we still run across many online application, training, and assessment sites that are either not accessible or horribly inefficient to access if you can't see them and you're using tools like screen readers and magnification. Checking the format and appearance of resumés, cover letters, and applications: there are lots of tools to do that, but I think a lot of us like to have somebody with sight look at those things to make sure we didn't make a glaring error, wrong font, wrong color, wrong placement of things, and miss it in creating our resumés and letters.
Photographing and attaching signatures and certificates: that's just a cool thing that I was really excited to see people using the Aira photograph service for. But, again, this can be the kind of thing that is a challenge for people who are blind in a still fairly visual world.
Obviously, reading paper documents and handwritten ones, too, of course, that remains a challenge. There are lots of ways around it but not always efficient ways.
Entering data in complex or inaccessible forms has come up with some of our agents as an issue that they’ve worked on.
PowerPoints. Somebody gave me the example of placing photos and text in just the right place in a PowerPoint, something that was not doable with her screen reader but was something she could do with an Aira agent. On the next slide you’ve got a few more examples: describing a PowerPoint presentation that has technical content, graphs, and diagrams. We know that graphs have remained a challenge to access for people who are blind and using nonvisual techniques. Inventorying supplies is something people sometimes have to do in certain jobs, and it can be difficult if you can't see the products you're trying to inventory.
Filling out forms is similar to what I mentioned earlier.
Accessing office equipment. We still have lots of equipment and kiosks that don't have an accessible technique for somebody who’s blind or visually impaired, and so that becomes a barrier. Computer screens: if you’ve had a screen reader crash on you, you know how desperate it can be sometimes to just want to have somebody help you exit that program gracefully, so you don't lose all your data, when your screen reader isn't talking to you. So, that’s a challenge.
Selecting just the right outfit and clothes matching, still a challenge, of course, and we want to be able to look right. And then, finally, finding addresses and office buildings. I think in a lot of ways, people think of Aira and some of these other tools as great navigation tools, and they are, but you'll note from all the things I said above that there are a lot of other ways in which access to visual information is key. Sometimes, when you're out on the street and you’re trying to find the right address, it is just plain helpful to have somebody who can look at it for you. Let's go ahead and take a look at a quick video to give you an example of what I was just talking about.
Okay, thanks, Paul. We are actually setting up the video right now, so we might need a minute or so to get it started. It is also posted online. We’re going to press play; just let us know in the chat if you aren’t able to hear the video through your audio. Okay, great.
I don’t really know on this particular unit where to place the paper, and also to control it, we have to use a touch screen again. So, I'm going to call Aira to get some help in making a copy.
Hi, Michael. Thanks for calling Aira. This is Emily.
Hey, Emily. How are you?
I'm well. How are you?
I'm doing well. So, here we are at a copier. And I have a copy that I need to make. I'm assuming that’s the right way. The print’s facing you, or you see the print.
Could you look down just a little bit please? Okay. Perfect. Yes. So, the print is on the side where your thumbs were.
Right. Right. So, I know how to get this up, and I need to put the paper in the right place on my copier.
This printer looks like it
That was a video that showed the office equipment example I was referring to: the touch screen on a copier, and how challenging it can be, without some assistance, to figure out how to hit the right buttons and actually make it work, let alone get the print situated correctly on the scanner of the copier. I want to say that next week, February 20, Aira will be announcing a special offer for our users who are seeking employment. So, stay tuned on February 20 for an announcement about a special offer to help boost employment.
Moving on, I do want to say just a couple of quick words about indoor navigation, because that has been a challenge and one that, again, is being tackled in a lot of different ways. Wayfindr -- and for those of you who aren’t looking at the slides, it’s W-A-Y-F-I-N-D-R, there is no E -- is a UK-based organization that has developed an open standard for how you would best present the audio information that allows a person who’s blind to navigate an indoor space. A very cool thing.
There's a report that you can take a look at. I think it costs a few bucks to access, but it's a good report that was developed through the G3ict group, and it talks about accelerating the adoption of indoor navigation with a standard user interface. It was done in December 2017, and we've got the website and web link available to you. By the way, G3ict is the Global Initiative for Inclusive Information and Communication Technologies. That report takes a look at a lot of the technologies that are being developed to aid indoor navigation, because it's not just an issue that affects blind people.
A lot of people have trouble finding where they're trying to get to in an indoor space, kind of wanting a GPS indoors, because, as you may know, GPS doesn't work that well in a large indoor facility. There are a lot of different technologies being developed, and what Wayfindr has done is come up with this audio standard that says: whatever technology you use, whether it’s beacons or WiFi or something called dead reckoning or other means, we have a standard for how you present the audio information to somebody who can't see, who can then get around complex spaces like shopping centers, office buildings, and transit facilities more independently.
As you can obviously guess, the signage and cues that other people use are not available to people who are blind and visually impaired. So, anything that will spur the development and usage of indoor navigation is, we think, definitely beneficial. And this new standard helps, once the technologies have been put in place, to make it easier for people who are blind to actually make use of them. So, take a look at that report if you're interested. It talks about a lot of the different approaches and also some of the challenges present in trying to do indoor navigation.
And speaking of challenges, I want to come back to our larger picture of the wearables and innovation that's happening in emerging technologies and just sort of visit some of the challenges that are making this tough and that we are going to have to work on as a community.
So, this is on slide ten, I believe. Obviously, cost. All of the devices I showed earlier, with the exception of the two apps that are free, have costs associated with them. And I think anyone who is in the disability world knows that our technology is often costly, and it's often difficult to find programs and subsidies to help pay for it. So that's a challenge.
Battery life. Yeah, there isn’t anybody who is working in the mobile space who doesn’t complain about battery life, and it's a problem. It’s a challenge, and we face it too when it comes to wearables for accessibility purposes. There's a lot of good stuff happening, but I think, again, everybody is trying to meet that challenge, so I feel fairly confident that we’ll find ways to get over that hurdle.
Processing. Image processing, and the processing available on a mobile device, particularly in a small device like a pair of glasses, remains a challenge. It's something that, again, a lot of people are working on, not just for accessibility reasons but for other purposes as well.
Data sources and availability. I think this is true for indoor navigation, and it's true for anything where you want to have data about your location: what's around you, what's nearby. There's a challenge in getting that data created and getting access to it. Some people have talked about crowd-sourced data, kind of like Foursquare, where people enter the name and information about a location into a common open structure, and then it becomes available to others.
There are other ways of providing data. Many facilities have an interest in putting a label on their building, and, of course, stores want labels so you know where and how to find them. But the truth is, it's still hard to predict when and if you'll have data available to you, and this is, again, particularly important for getting around and navigating. But, as we'll see, it also becomes important for identifying objects, where you are, and even people.
Network deployment. A lot of people are excited about 5G, the next generation of the mobile network, which will make things easier for technologies like Aira and some of the others I talked about that rely on a robust cellular network. But that remains a challenge. Anybody who relies on the cellular or wireless network, or the internet for that matter, knows that it can be a challenge to get the information that you need and to deploy technology that’s based on it.
Privacy and acceptance. I haven’t said very much about that, and we can talk about it in the chat if you want. But it is an issue, and there's a lot of discussion around automated facial recognition. The legal community is, of course, interested in that area, but so are we, because for those of us who are blind or visually impaired, and for other people with disabilities as well, it would be great if we could head into a room and know who's in it, because we'd have a technology that helps us recognize faces just as sighted people do using their eyes and their brains. That's something I hope we'll get to. But for smart glasses in general, and facial recognition in particular, there are still some challenges around acceptance and privacy.
AI: artificial intelligence, machine learning, various names for it. Those are areas that, as they develop, will make all of our wearables that much better at object recognition, potentially facial recognition, and optical character recognition for automated reading. I should say that one of the reasons Aira is called Aira, with "AI" in the name, is that it very much wants to move towards artificial intelligence as part of what we offer. Right now, it’s very human-centered. And I know a number of wearable makers have the same goal, of course, of doing more automated processing.
The last thing I'll say in terms of limitations here is complexity for the user, user apathy, user fatigue. We all hear about technologies, and we try something out. Maybe we got the wrong one, maybe we didn't get what we actually needed, or it's too complex to use, and I think that's still a challenge for wearables, one that all of us in this area need to tackle. The last thing I want to say, and I'll leave it here, is: what can you do? I think all of us have an interest here, and I really think that's everybody, right? That's people with disabilities and the advocates for our community and accessibility. It's information and communication technology developers. It's accessibility experts. It’s policy makers. We all should learn about these technologies.
So, hopefully, today whetted your appetite a little bit. If you didn’t know very much, or if you knew a good bit, hopefully you learned about a few technologies you weren’t aware of. Learn more about what’s going on in this space, because I think wearables and all of the things driven by mobile developments are going to be very important. We should be researching the efficacy. We need to know how well these technologies work, what they’re best used for, who they’re best used for.
So, I'd love to see more support for and interest in research to determine how and when to use these technologies and, of course, to promote their development and deployment, whether it's indoor navigation or more wearable technologies in the workplace. I hope you'll help me, all of us, to deploy that. And, of course, I think those of us who are advocates and who have disabilities should be asking for these technologies, where it's appropriate, as accommodations for the work site. And then, finally, working to ensure that laws like the Americans with Disabilities Act and Section 504 are interpreted to cover these emerging technologies.
So, with that, we can leave that last slide up. It has my contact information on it. Again, that’s Paul@aira.io, if you want to reach me via email, which is probably the best way.
Paul, thank you so much. This is really fascinating and exciting stuff, and all of this has exploded. So, we'd like to open things up to questions and answers. If you have a question, please type it into the chat box now. And I'd love to get things started myself. You covered several different new technologies available today that have been designed specifically with users with vision loss in mind. Are there other general emerging trends, available today or upcoming, where developers need to be taking accessibility into account for users with vision loss?
You know, I think probably the most interesting trend is one that is beyond my technical expertise, but it's where I kind of ended my talk: machine learning, deep learning, artificial intelligence. There’s a lot of work happening, a lot of interest in that area. Major companies, obviously, Microsoft, IBM, Google, Apple, they’re all working in that area. A lot of universities are working in that area, and for various reasons, right? Everybody has got a reason why they want to develop that technology to make something work better. It's not generally, or even usually, for accessibility.
But I would certainly want to make sure that those companies, or those universities and their funders, in the case of universities, are thinking about how we make sure that these developments are not only meeting the needs of people with disabilities but aren’t inadvertently shutting people out.
I know there's a lot of conversation in the artificial intelligence world about things like facial recognition and whether it disadvantages some groups. I'm not a good person to talk about that, but I know it's a challenge, and it’s a challenge that also needs to be looked at.
The other thing I'd say just in general is, you know, I kind of quickly went through indoor navigation. But there are a lot of technological developments that will make complex environments like a transit facility or a shopping mall easier for everybody. I mean, I have sighted family members I go to malls with sometimes, my wife or daughters, and they don't seem to know a whole lot more than I do about what’s where. [laughing] It feels like there's a real need here to develop better mapping technologies, again, akin to what GPS does outdoors.
As that happens, part of my fear is that it will happen in a way in which there's not really much governance over it. Right now, no one’s looking at accessibility from a legal or requirements standpoint, and I would hope that's something the Department of Justice and the Access Board and other agencies of the government might pick up and take a look at, too, because I do think that's an important area.
Absolutely. Thanks. I see a question here about Aira. Jeanne asks, “How is privacy addressed if an employee is using Aira and their work includes patient/client information?”
So, it’s an excellent question. We are currently not HIPAA-certified. I'm told, and I don't know if this is true, but I’m told there isn’t really something you could call HIPAA certification. But in any case, let me say a couple of things about that. First of all, it is important to know that our agents, which is what we call our sighted assistants, do sign and are held to a confidentiality requirement. So, whatever happens in the session with the user remains confidential.
Also, neither the user nor the agents have a way to keep that video for their own use, so once the session is done, that streaming video is not available to the user or the agents. I think that's important. And there's another point: we do have a feature available in our app called the privacy setting. This isn’t exactly going to answer that question, but it’s important to note, and I think important for all of the technologies in the wearables sector to think about, at least in the visual information sector, and that is, “How do we want to handle privacy?”
So, one of the small but important pieces that Aira has in its app is called a privacy button. When you hit that button, it turns off the transmission of audio and video from the Aira user, so the glasses or phone are no longer sending anything. You're still connected, and the agent could still talk to you, although there’d really be no reason for them to, but there's no transmission. And a great use case for that is, say, you're in an airport and you’re looking to find the restroom. You'd want to hit that privacy button. [Laughing] When you get to the restroom, if you don't, the agents will end the call, because they can't have streaming video from a restroom. [Laughing] But if you hit that privacy button, you stay connected, so that as soon as you emerge from the restroom, you hit the button again, and you're back on with that agent who knows where you are, knows the airport, and already has everything pulled up for you. That’s helpful.
In terms of patient information, I think we're all going to have to work towards systems that will enable users to use these vision access wearables in private spaces, and we're going to have to figure out what safeguards we need to put in place, whether it's HIPAA training for agents and/or other ways to secure the information. We've also certainly talked with some companies that might have an interest in licensing Aira so that they would have their own staff working as Aira agents. In other words, the information would always stay inside the company, and for certain companies with a lot of employees who are blind or visually impaired, that might make sense. But I think that Aira, and any of the wearables that are accessing visual information, will need to work through how we handle privacy.
Thanks, Paul. So, I see another question here from Jacqueline that may be related to something that came up in the first question we discussed. She asks, “Do you have any ideas on funding that makes these technologies more accessible?”
So, some of the companies have been successful at finding their way into systems that do pay. The Veterans Administration, for example, has supported some of these wearables by providing them, which is great. Vocational rehabilitation, I know, is something people are looking at, and it’s particularly relevant to employment: how the vocational rehabilitation system could help make wearable technologies for vision access available for somebody seeking employment, and then maybe even for somebody getting oriented to and beginning their employment, when, hopefully, the employer will then take it over as an accommodation.
I think there's a lot of interest in what we sometimes call third-party funding. So, insurance companies, Medicare, Medicaid, and whether those systems could be amended to cover these technologies so that they become available. Think of it as not unlike paying for a physical access device, product, or service for somebody who needs physical access. Again, it's a little bit outside the scope of today, but there are restrictions, for example, in the Medicare statute about paying for eyeglasses, and that also makes it difficult to pay for smart glasses and wearables that have cameras. But I think these are things we can get past, and we can begin to work with the system to say, “Look, these are key tools for accessing visual information, just like a prosthetic might be a key tool for accessing physical spaces.”
Thanks, Paul. So, if anyone has final questions for Paul, please enter them into the chat window now. But I would like to ask another question that is very interesting from PEAT's perspective. What's the number one driver you've found that motivates companies to prioritize making their technologies accessible to users with vision loss?
I think there are probably a couple of answers to this question. For the sector that I'm talking about, which runs the gamut from these head-mounted visual displays to companies like OrCam and Aira, it’s a sense that there is an obvious, clear need. It's clear that there's a lot of visual information that's either tricky, difficult, or impossible to get access to, and there are now technologies in the wearable environment that can help us do that. It's almost a straight-up assistive technology play, just like any other piece of assistive technology that’s designed for a specific use.
On the other hand, I would say that for companies developing the technologies that we need to use on our computers, our phones, apps, and otherwise, honestly, I think legal pressure has had a great hand in that. You know, I was a part of putting some of those laws in place and was happy to do that, because not only did the law help, but more importantly, I think, it got companies and the disability advocacy community to the same table, and even some of the assistive technology community, all talking about solutions. That might have happened absent the laws, but I think the fact that we have some legal pressure, like the Communications and Video Accessibility Act, that pushes accessibility forward in a number of domains gets companies paying attention. And then I think they find that there's a great set of advocates and thinkers and developers in the disability community who are willing to help them.
Hats off to PEAT as one of those. I think it’s been helpful in convening and encouraging employers to look at technologies in different ways and consider how they could deploy them. So, legal pressure, but I think very quickly that gets subsumed by accessibility knowledge and by companies recognizing that doing accessibility, incorporating accessibility, actually makes for a more robust and better product environment for everybody.
Thank you so much, Paul. As we wrap up here, I also want to mention that there's been an interesting discussion in the comments related to the question on funding. People have mentioned a few other sources: the Institute of Museum and Library Services offers AT grants, the Assistive Technology Act funds local programs that loan AT to people in their states, and some states have loan programs and funding for AT purchases.
Great comments. Great advice. And I also encourage people to think about private sources, foundation sources. I think there are a number of interesting things that can happen or could be done using philanthropic support, and, again, by getting our donor community interested in these technological developments. In this case, interested in what the wearable technology sphere could do for people with vision loss, and, I'd argue, again, for people with other disabilities as well who struggle with information and handling information, decoding, and all of that. I think the wearable sector, because it's so customizable and because it's so adaptable to your need at the moment, whatever that need might be, is really a great one to take to the donor community to say, “Look, this would be something great to support to make sure people with disabilities can get access to these life-changing technologies.”
I'll often say to people, when we talk about Aira, “Imagine what it would be like to have somebody who could provide sighted assistance whenever and wherever you needed it, at the push of a button. And not just anybody, but somebody who is trained and doesn't do the unfortunate sighted thing of, ’Oh, it's over there. [laughter] It's over to your left,’ when it's actually to your right.” All those things happen; we're normal humans. But we’ve got trained agents who can really take on those challenges and give you information on an as-needed, when-you-need-it basis, in a way that’s comprehensible and usable for somebody who is blind or visually impaired. I love those funding ideas, though. Thanks, folks, for sharing them.
Great. Well, that's all the time we have today. If you have additional questions for Paul, as he mentioned, you can reach him at the email address Paul@aira.io. I’d like to give a special thanks to Paul for joining us today and to all of you who took the time to join us. We hope you enjoy the rest of your afternoon.