Perception of AI – AI and societal challenges
The following interview excerpts feature Bambos Papacharalambous and cover the perception of AI and societal challenges, with a particular focus on algorithms and bias in AI.
transcript
Well, my name is Bambos Papacharalambous. For most of my career, let's say almost 30 years, I was mostly involved with the telecom part of the IT business. I'm now involved with a wide range of projects related to IT in general. I'm the CEO and founder of Novum, a company that does software development and ICT consulting.
Quiz question 1/1
The interviewee, Bambos Papacharalambous, has been mostly involved in a wide range of projects related to
transcript
So could you tell us more about the basics of artificial intelligence, algorithms, and cognitive bias?

Yeah, okay. AI is becoming another buzzword of the times. So if we try to see what AI really means and what it stands for: A for artificial, I for intelligence. Artificial meaning a machine, something that is not a human or any other living animal. Intelligence meaning the thinking process that allows us to understand language, to understand speech, to understand the environment around us, and to make decisions. Now, one of the basic ideas of intelligence is that we consider something to be intelligent if it has the ability to acquire knowledge, meaning learn something, and then be able to apply what it learned in order to make decisions. Academically, there might be different definitions of what intelligence is, but let's assume that what we define as intelligence is the ability to learn something and then apply that learning to some form of decision. Intelligence, in the academic world, is better understood and researched by psychologists, sociologists, and other disciplines. But artificial intelligence is better understood by computer scientists, data analysts, and mathematicians, at least today. The reason is that we have tasked these science disciplines with trying to build machines that somehow apply this basic meaning of intelligence. So the same way we humans learn by reading or by processing our past experiences, an AI system should be able to learn by analysing existing sets of data and then try to perform intelligent tasks.

Now, you have to be aware that AI algorithms, like any others, are developed by humans and therefore could carry the same biases and limitations that human thinking carries, right? So the same way people's thinking might be skewed by cognitive biases, an AI algorithm might also reach the same biased decisions. Let's take a few examples of cognitive biases that are currently identified by academic psychology. Take, for example, a football game: our team will definitely win tonight, right? We've been beating tonight's opponent for the past 20 years, so there is no way we lose tonight's game. Now, this is a cognitive bias that can happen during the logical thinking of a human. But the same cognitive bias might also happen with an AI algorithm, simply because, if the only information we feed our AI algorithm is that our opponent lost to us during the past 20 years, the chances are that the algorithm will reach the same conclusion that the human reached. So whatever information we give to the AI algorithm, that algorithm will try to come up with some answer, but based only on the information that we provide. So if we try to teach our AI algorithm to identify, let's say, smart people by analysing their facial characteristics, and we feed this algorithm images of young white males, and we tell this algorithm: okay, now go study these images, and when we give you a new image, you tell us whether this person could be smart or not. Well, chances are that if we only fed the algorithm images of young white males, and we then give it an image of a mature black woman, this AI algorithm will have the same bias and will pretty much tell us that this woman cannot be smart. So it's important to get an understanding of this.

And it doesn't matter how magically we think the algorithm will perform. It is basically reaching an answer based on the size and the quality of the data we give it. So this is, let's say, pretty much my answer to the question of how cognitive biases can affect AI and make it come up with skewed results.
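To make this concrete, here is a minimal sketch, not from the interview itself, of how a naive frequency-based predictor simply reproduces whatever skew is present in its training data; the football example and all names below are illustrative assumptions.

```python
# A toy "learning" algorithm: predict tonight's result from past results.
# Its answer is entirely determined by the data it is fed.
from collections import Counter

def train(past_results):
    """Learn by counting outcomes in the historical data."""
    return Counter(past_results)

def predict(model):
    """Predict the most frequent outcome seen so far."""
    return model.most_common(1)[0][0]

# 20 years in which tonight's opponent always lost to us.
history = ["win"] * 20

model = train(history)
print(predict(model))  # "win" -- the model cannot foresee what it has never seen
```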
Quiz question 1/3
What exactly does "artificial" mean in the term AI?
Quiz question 2/3
What exactly does "intelligence" mean in the term AI?
Quiz question 3/3
So far, AI has been most extensively researched by:
transcript
And could you now expand on the ethical use of artificial intelligence? Are there some artificial intelligence applications with ethical implications?

Okay. As machines are performing more and more complex tasks nowadays, this immediately directs people's minds to the use of machines as a replacement for human labour. That's the first thing that comes to people's minds, and it's a valid concern. It's the same concern that was raised during the Industrial Revolution, and today we see it again as relevant. Is it difficult to allow machines to harvest wheat instead of relying on people's hands? Is it difficult to allow robots to displace human workers on the car assembly lines? We see these same basic questions coming up again, but now in a more, let's say, complex environment, as machines have developed and are performing more complex tasks. So, if people had to adjust to the new environment created by machines in the workforce, they would need to go through another readjustment process really soon. For instance, where would the truck drivers find work if the trucks are driving themselves? Where would the bus drivers find employment if the buses are driven by AI software? Where would you employ an airplane pilot if the planes fly on their own? The more complex the tasks that machines are developed to perform, the more these types of jobs are threatened. And it's not just hard labour anymore. I mean, why would anyone become a doctor if a robot can perform the same operation better than any human, right? So, these concerns are valid.

And of course, there are other examples that raise questions of ethics in the use of AI in our lives as well. It's not only the labour force. Shouldn't we be concerned with the use of machines that are used for our security? For instance, can we be sure that the AI software powering a security camera in an airport has made the correct decision in identifying a terrorist? Can we be sure that the security system in a mall is following the real thief to his car and not some other innocent bystander? There are a lot of ethical concerns within the AI space, not only in areas like employment or security, but also in the use of military weapons. Shouldn't we be concerned, for instance, by the fact that armed drones are today doing our fighting for us? Are we sure that the autonomous robot soldier of tomorrow would be able to distinguish between enemy and friend? So, these are very valid concerns. And whenever something big happens, a change happens within industries in general; whenever something "magical" happens, there is also the chance of negative implications from its use. So the industry needs to eventually find a way to use AI for the good of the society we live in.
Quiz question 1/4
True or false?
Machine learning aims at finding data-driven automation solutions when the automation goal cannot be reached explicitly.
Quiz question 2/4
True or false?
Since AI algorithms are developed by humans, they could carry the same biases and limitations.
Quiz question 3/4
True or false?
It is not important to get an understanding of cognitive bias.
Quiz question 4/4
True or false?
The biases of AI algorithms depend on the amount, type, and quality of the data we feed into the system.
transcript
And you already mentioned the impact of artificial intelligence on employment, on security, on welfare, and on military service. Are new laws of robotics also needed in fields other than the ones you mentioned?

Yeah, this issue of the rules and the laws for robotics is not an easy issue to solve. We are in uncharted territory when it comes to a widely accepted set of rules that should define the ethical framework of how robots should work, even in areas outside the military: in the space of security, in the space of the welfare of the elderly, let's say in the space of personal security at home. We will see robots that will start taking decisions for us. As I said earlier, this is a very new area, and even though there are groups in academia, in the industry, and in governments that try to touch upon these rules, this is at a very, very early stage right now.

Right now, the discussion is about what the robot should or shouldn't do. For instance, a robot shouldn't harm another human; that could be a primary, let's say, directive for a machine. But at the same time, a robot should obey its owner and its creator; that's another primary directive for a robot. But what if the robot's owner is giving the robot instructions that are outside its main directive? What if the robot is asked to harm another human? And should we limit the scope of harming to humans? What if the robot is asked to harm an animal? Or what if the robot is asked to harm another robot? What if the robot needs to harm a human in order to protect another human? These are ethical questions that are not easy to answer. And even if they are answered, they are not that easy to put into an algorithm. Don't forget that eventually all these rules of the framework need to be translated into a mathematical, let's say, equation. Right? So this is a pretty tough thing to do.

Even though it would be better for mankind to define these rules before and not after the fact, I don't believe that this is an easy task. Companies are fighting for product dominance on a global level. Countries are fighting for military dominance, on a global level again. So it would be hard to make rules that could be agreed upon by everyone and then be enforced for everyone, just because of how society and mankind work. A more likely scenario would probably be that this set of acceptable rules is defined and enforced after the fact. I mean, after we saw that chemical warfare during the First World War was, quote unquote, unethical, we designed the framework around it. Right? After we dropped the nuclear bomb, we saw that we needed to control it. So we might need to experience the first, let's say, disaster in order to be forced to define the rules that would govern robots' behaviour. We can just hope that this first disaster is not a really bad one. But personally, even though there are attempts at setting these rules up from the beginning, I believe that this is a very impractical task to solve.
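As a toy illustration of why even two simple directives are hard to encode, consider the following sketch; the directives, the function, and the priority choice are all invented for illustration and are not from the interview.

```python
# Two toy directives: (1) never harm a human, (2) obey the owner.
# Deciding which directive wins in a conflict is an ethical choice
# someone has to hard-code; the code cannot avoid taking a side.

def action_allowed(harms_human: bool, protects_human: bool, owner_ordered: bool) -> bool:
    # Directive 1 outranks directive 2 here... unless the harm is the
    # only way to protect another human. Is that exception right? The
    # algorithm has no opinion; it does exactly what was written.
    if harms_human and not protects_human:
        return False
    return owner_ordered

print(action_allowed(harms_human=True, protects_human=False, owner_ordered=True))  # False
print(action_allowed(harms_human=True, protects_human=True, owner_ordered=True))   # True (debatable)
```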
transcript
If we take a step back and see what an algorithm is, alright, we can try to distinguish between what an algorithm can do for people and what people can do with the use of algorithms. With a regular algorithm, these two things are, let's say, pretty much distinct. What the algorithm can do for people is one thing; what people can do with this algorithm is a separate thing. For a computer scientist to write an algorithm, he will ask you for the rules that you want that algorithm to obey, right? So let's say you want an algorithm that, when given a volume in litres, will give you back the volume in gallons. The software developer will ask for the rule that governs this relationship between litres and gallons, and he will use this rule to write the algorithm, so everyone would know what the outcome would be, provided that the algorithm complies with those rules and works correctly. Now this is what I would call a regular algorithm. If the use of this algorithm can get people in trouble, then it's not the algorithm that is at fault, but the use of the algorithm that is at fault.

So let's say, for instance, that this piece of software we wrote is used in an airplane, and there is a switch in the airplane: when you turn the switch on, it displays the fuel in gallons; when you turn it off, it displays the fuel that the plane has in its tank in litres. And suppose now that the switch is in the wrong position, and the pilot reads gallons instead of litres and takes off. Well, chances are that he will have to make an emergency landing somewhere before his original destination. So what do you do in this case? You have an algorithm that functioned correctly, but its use caused a malfunction. In this case, the regulator steps in, the industry steps in: you train the pilots correctly, you make sure that the pilot checks whether the plane has enough fuel before take-off, you put regulations in place to force the ground fuelling service to make sure that the plane has enough fuel to reach its destination. So we know how to handle, let's say, all the regulatory issues that we have to handle with a regular, quote unquote, algorithm.

Now, what happens with an AI algorithm? Well, let's take for example a machine learning algorithm. In this case, you design the algorithm with some basic parameters and you tell the algorithm what data you have. You just feed the algorithm a bunch of data and then you run it. The difference with this type of algorithm is that the system changes itself based on the parameters and the data that you gave it. So you cannot really predict the outcome of the algorithm. If a human had enough time to go through the data you gave it, go through the parameters, and run the calculations line by line on a piece of paper, then yes, we would make the same predictions that the algorithm did. But it's impossible for a human to run through that large an amount of data with the parameters that you gave the algorithm to work on. So at the end of the day, the result that comes out of an AI algorithm could be considered unpredictable: we could have a result that we didn't expect. And that's where the ethics of the outcomes needs to come in.
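To illustrate the contrast the interviewee is drawing, here is a minimal sketch, with invented example data, of a "regular" algorithm whose rule is given up front versus a "learned" one whose behaviour depends on the data it was fed.

```python
import random

# A "regular" algorithm: the rule (1 US gallon = 3.785411784 litres) is
# given up front, so the outcome is fully predictable for any input.
LITRES_PER_GALLON = 3.785411784

def litres_to_gallons(litres: float) -> float:
    return litres / LITRES_PER_GALLON

# A "learned" version: instead of being told the rule, the program
# estimates it from noisy example pairs. Its behaviour now depends on
# whatever data it happened to see, so the outcome is less predictable.
examples = [(l, l / LITRES_PER_GALLON + random.gauss(0.0, 0.05)) for l in range(1, 51)]
learned_ratio = sum(g / l for l, g in examples) / len(examples)

def learned_litres_to_gallons(litres: float) -> float:
    return litres * learned_ratio

print(litres_to_gallons(100.0))          # always 26.417...
print(learned_litres_to_gallons(100.0))  # close, but varies from run to run
```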
Because imagine now that you are the Ministry of Health of a country, and you ask an AI company to come up with an algorithm that looks at the demographics of the population and at the number of available organ donors. And we want the algorithm to suggest who will get a heart transplant, for instance, and who will not get a heart transplant. So you put in things like age, things like, let's say, a high probability of accepting the transplant, you put in a bunch of parameters, and then you give all the country's demographic data to this algorithm and you let it run. The ethical questions now are required to be answered by the people who wrote the algorithm, and not by the people who just use it. Whereas with a regular algorithm we can give the responsibility of making the decision to the people who use the software, in the AI version, the machine learning version of the algorithm, well, basically the algorithm is making the decision for us. So that is why we need to start thinking about embedding ethics rules in the algorithms themselves. And as we said earlier, this is not an easy thing to do, because how do you encode all these ethical answers into a mathematical equation? This algorithmic ethics space is a new space. It's being researched now at an academic level; it's touching the industry a little bit, but it's definitely not there yet. So it still needs time.

Yeah. Thank you very much for this input about the relevance of algorithmics.
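The transplant example can be made concrete with a small, entirely hypothetical sketch: every parameter, weight, and candidate below is invented, and the point is only that the ethical choices live inside the scoring function itself, written by the developer rather than the user.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    predicted_success: float   # estimated probability the transplant succeeds (0..1)
    years_on_waitlist: float

def priority(c: Candidate) -> float:
    # The weights ARE the ethics: shifting 0.6/0.4 changes who gets the organ.
    # What is absent matters too -- leaving out age, wealth, or postcode
    # is itself an ethical decision made by whoever wrote this function.
    return 0.6 * c.predicted_success + 0.4 * min(c.years_on_waitlist / 10.0, 1.0)

candidates = [
    Candidate("A", predicted_success=0.92, years_on_waitlist=1.0),
    Candidate("B", predicted_success=0.81, years_on_waitlist=6.5),
]

# The algorithm, not its user, decides the ranking.
for c in sorted(candidates, key=priority, reverse=True):
    print(c.name, round(priority(c), 3))   # B 0.746, then A 0.592
```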
transcript
Now we'll move to specific questions about artificial intelligence and social challenges. Could you tell us more about assisted, augmented and autonomous intelligence?

Yes, okay. Well, as we said earlier, if we take a robot, for instance, that does a specific job right now, the robot that works in the car assembly line is not really considered a robot with an AI algorithm in it. We've seen these robots working in the industry for many years now, even before this new wave of AI methods became popular. So what happens in that space? Well, that is a repeatable task, a task that is easily reproducible, and one that is accepted by the industry as a harmless piece of equipment. The advantage of having such a machine is that we remove this labour-intensive activity from humans: we let machines do the boring and unhealthy tasks, and we allow humans to work in more productive and safer environments. The disadvantage is that, of course, this robot in the assembly line has basically taken away the employment opportunity of the people who were able and willing to do these tasks. So things are, let's say, accepted as far as the use of a machine doing a repetitive task of manual labour within a specific industry goes.

Now, the problem becomes more complicated when you have a robot or a machine that autonomously makes decisions and functions in a more, let's say, quote unquote, intelligent manner. So imagine that you have a robot which has the intelligence to perform the tasks that the regular assembly-line robot performs today, but which also decides that the design of the actual car needs to change. Let's say that the robot now has the ability to design the car in a more aerodynamic way, and it decides on its own, autonomously, that the shape of the car needs to be changed; and of course the robot will have the ability to make the design change and at the same time build the car. The advantage would be that perhaps we come up with a better design and get better fuel efficiency, or whatever. But the disadvantage is that, first of all, we displaced some more jobs from the workforce, because we perhaps took out the designer; and at the same time, we're not sure whether the changes that this new robot suggests will actually be safe. So yes, we might have better fuel efficiency, but will it be as safe for the driver and the passengers? So, again, everything needs to have a balance. There are advantages and there are disadvantages, and that's why there are risks, threats, and challenges, but at the same time there are opportunities.
transcript
So you already mentioned some opportunities and some risks and threats. In your opinion, what are the most relevant social concerns?

I think, personally, what I'm really worried about is anything that affects the security of people. Things like the job market and employment, I think mankind will eventually find its way there, and if we are careful enough we can find answers to these problems, because technology will always be there. We've been through this before, and I'm hopeful that we'll find answers to those questions one way or another. But when it comes to safety, people's safety is one of the hardest problems to solve, and that's where I think we should be paying more attention. For instance, what happens if we have a virus in a car, and that car drives my mother every day to the grocery store? What happens if that piece of software is taken over by a terrorist organisation? What happens if the software is flawed by design, or hasn't been tested properly, and then something happens during my mother's trip to the grocery store and we have a crash? And this same type of problem that affects the safety and security of people is, of course, multiplied by a thousand when we get into the use of machines in warfare. Right now we have drones that are flown by a pilot with a joystick who is sitting on the other side of the world, and that pilot makes the decision of where to drop the bombs. What happens when this drone is given the okay to decide on its own? How do you pass these ethical questions to the drone's software? It's the same problem that warfare has been challenged with every time a new weapon became available to the masses. So these types of decisions that affect the security and the welfare of people, I think these are the major areas that need to be looked at, and probably the hardest ones to find answers to.
transcript
And you already presented some examples in different areas. Could you maybe give us a few more examples related to facial recognition, justice and social network challenges?

Yeah, facial recognition is a problem that has been raised repeatedly over the past few years. You could say that it started as the need for identification solutions in the security space, but it turns out that now it's all over the different industries around technology. Let's say, for example, that we train a system to use a camera that is sitting at a mall's entrance, and we want to see if the person entering the mall has COVID or not; COVID-19 is a very current issue. Let's say that we have some kind of system that looks at the face and, along with other sensors and data, can identify whether this person might have COVID. So let's say that we do identify a person that has COVID. A "good" next step would be: okay, let's use AI facial recognition and find out who this person is. Let's go to Facebook, let's go to whatever other social platform, let's identify this person, and then let's look at the pictures that this person has posted on the different social media platforms and see who this person's friends are. And perhaps we need to reach them in order to see if they got together, because now they are suspects of a positive COVID-19 test result as well. So where do we stop this? Where do we put an end to this? If we have rules that have to do with personal data and the security of data, well, perhaps these rules need to be pushed aside because of a pandemic? So do we let the algorithms do all that work for us without any control whatsoever? The fact that these algorithms can make decisions on their own creates this uncertainty for us, and that's why people are rightfully sceptical about it.
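The escalation the interviewee describes can be sketched as a pipeline; everything below is hypothetical (the detection, identification, and lookup steps are stubbed out), and the point is only that stopping the chain requires an explicit, human-controlled gate written into the code.

```python
# Hypothetical escalation pipeline for the mall-entrance example.
# Each step is a stub; in a real system each would be its own AI model.

def detect_possible_case(frame: str) -> bool:
    return True  # stub: camera plus sensor fusion flags a possible case

def identify_person(frame: str) -> str:
    return "person-123"  # stub: facial-recognition lookup

def find_contacts(person_id: str) -> list[str]:
    return ["friend-1", "friend-2"]  # stub: social-media graph crawl

def pipeline(frame: str, escalation_approved: bool) -> list[str]:
    if not detect_possible_case(frame):
        return []
    # Explicit gate: moving from "possible case" to "identify and trace"
    # is a legal/ethical decision, so it is not automated away here.
    if not escalation_approved:
        return []
    return find_contacts(identify_person(frame))

print(pipeline("frame-0", escalation_approved=False))  # [] -- the chain stops
print(pipeline("frame-0", escalation_approved=True))   # ['friend-1', 'friend-2']
```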
transcript
Could you now tell us a bit more about the status and perspective of some regulations? For example, from the OECD, the EU, UNESCO, or GPAI.

Yeah, like I said, there are initiatives, and smart people are spending time on placing a framework around the use of AI. There is also research being done right now on areas of ethics that are not being tackled by government agencies. But as we said earlier, these efforts are at a very early stage, and I have a feeling that the industry will once again be ahead of the regulators. If you have a look at the research that is currently happening at companies in the area of robotics, you will have the same "wow" reaction when you see how robots are performing today. I have a feeling that the industry is already ahead of the regulatory framework, and it's most likely going to be another repeat of the scenario where something bad needs to happen in order for everybody to react. The way things are moving, I don't see why AI should be treated in any different manner. I think the industry will keep moving along at incredibly fast rates, and the regulators and governments will be a step behind. This, I believe, is the current trend right now, and I don't see any major factor changing it.

Thank you very much. Is there anything you would like to add regarding this issue?

No, I think that pretty much covers it. The question of ethics within AI, one could say, is a valid question; it is a question that is being raised by societies throughout the world. I also believe that the industry will go ahead and design and implement whatever it thinks is best for its creators, and I don't think there will be any way of stopping that from happening. I just hope that, at the end of the day, the results of these AI initiatives will be beneficial to mankind and will not create all these issues that we are afraid of. Eventually mankind will find its way, but I think it's going to be a bumpy road.