

AI and machine learning

The following interview snippets were given by Dr. Sebastian Lapuschkin. They cover the topic of AI and machine learning, explain how machine learning relates to automation, and mention further examples of machine learning.

transcript

My name is Dr. Sebastian Lapuschkin. I'm the head of the Explainable AI group at Fraunhofer HHI in Berlin, and my task is to conduct research towards the explainability of artificial intelligence.

Quiz question 1/8

The interviewee, Dr. Sebastian Lapuschkin, is the Head of Explainable Artificial Intelligence Group at Fraunhofer HHI in Berlin. He is in charge of conducting research towards the explainability of artificial intelligence.



You have completed the Quiz!


transcript

What is machine learning and how does it work?

So machine learning is essentially a way to find automation solutions, data-driven automation solutions, when the automation goal cannot be reached explicitly, for example by writing down and programming algorithms. Specifically, the idea behind machine learning is to use data which represents or describes the problem set, and then use machine learning algorithms to let them figure out a solution to those problems. This approach is called data-driven.

And could you tell us how machine learning is related to artificial intelligence and to big data?

Yes, of course. Artificial intelligence, at first, is more or less only a marketing term describing machine learning. Or one could also say that artificial intelligence, currently, is a subfield of machine learning. To understand this, one needs to know that everything which is going on in artificial intelligence today is using machine learning. And with regard to this marketing term, the term artificial intelligence reappeared in the early 2010s with the emergence of the deep learning hype, although it was actually first coined in the 1950s or 60s with the emergence of the first machine learning methods.

Big data describes the approach to collect and organise a lot of data. And in order to conduct machine learning efficiently, you need data which describes your problem sufficiently and representatively. That being said, however, having a lot of data doesn't mean that your data is good, that it describes the problem. You might also introduce some confounding features, which means information that correlates with your intended targets but causes your model in the end to solve a different target, because it cannot figure out from the data specifically what you want. This is a bit convoluted to describe: the problem is that you use machine learning to solve a problem which you can only describe via data, and if your data doesn't describe the solution you want to obtain, the machine learning algorithm will probably not find the solution you want, but some other solution which also works on that data and might not be what you intended.
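To make the data-driven idea above concrete, here is a minimal sketch, assuming Python with scikit-learn; the tiny spam/ham message set is invented purely for illustration. Instead of programming the rules by hand, we give the algorithm labelled examples and let it derive the rules itself.

```python
# Data-driven automation in miniature: no explicit if/else rules are written;
# a learning algorithm derives them from labelled example data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# The data which "describes the problem": example messages and their labels.
messages = [
    "win a free prize now",
    "free offer, click here",
    "lunch at noon?",
    "meeting moved to friday",
]
labels = ["spam", "spam", "ham", "ham"]

# The pipeline turns text into word counts and learns decision rules from them.
model = make_pipeline(CountVectorizer(), DecisionTreeClassifier(random_state=0))
model.fit(messages, labels)

print(model.predict(["free prize inside"]))       # learned rule fires -> 'spam'
print(model.predict(["see you at the meeting"]))  # -> 'ham'
```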

Quiz question 2/8

True or false?

Machine learning aims at finding data-driven automation solutions when the automation goal cannot be reached explicitly.




Quiz question 3/8

True or false?

Machine learning aims at finding manual solutions to human problems based on data.




Quiz question 4/8

True or false?

Machine learning is a data-driven approach in order to find human solutions to problems caused by machines.




Quiz question 5/8

True or false?

Big Data describes the approach to collect and organise larger and complex data from various sources.




Quiz question 6/8

True or false?

In order to conduct machine learning efficiently, you need representative data that describes your problem sufficiently.




Quiz question 7/8

True or false?

Having a lot of data means the data is automatically good and can describe your problem.





You have completed the Quiz!


transcript

And could you maybe tell us more about deep learning, and what is the difference between machine learning and deep learning?

Yes, deep learning again is a subfield of machine learning and describes the use and training of machine learning algorithms which have a deep representation of information. This usually means deep neural networks. The depth in deep learning, or in deep neural networks, comes from the fact that one usually stacks multiple layers of possible representations of the data. You can imagine it as layers of mathematical operations which are then learned in training. So you give the shape of the network, and the function of the network is learned in an iterative training process by providing example data. The term "deep" comes from the depth of the network.

And what is the difference between machine learning, deep learning and traditional programming?

Okay, as I said, deep learning is part of machine learning. The difference between machine learning and traditional programming is the following: consider you have some data and you know the rules for how to process this data. Then you can implement your solution, and this is the typical programming approach. You have your data, you know how to process it, you implement your program, and out comes an answer. The approach of machine learning is this: you have a lot of data and you know the answers to this data, but you have no idea how to come up with them. Basically, you don't have the rules. The task of machine learning is to train your machine to learn the rules which allow you to connect the data and produce the expected answers. And once you have this, you have a trained machine learning model which can receive new data, data it has never seen before, because it has learned the rules and should not have learned the data by heart, and it can then produce answers. So in machine learning speak, we say the model should generalise, which means it should have learned general rules on how to handle the data you put in, to provide the correct answers. Once you have such a model, you can plug it in as a set of rules in your programming task, for example if the set of rules would be so complex that you would never be able to cover it explicitly by writing the code manually.
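As a small illustration of the "shape is given, function is learned" idea, here is a sketch assuming Python with PyTorch; the toy task (learning XOR from four examples) is invented for illustration. We only stack layers to define the network's shape; the iterative training loop then learns the rules connecting data to answers.

```python
# Deep learning in miniature: stack layers (the shape), then learn the function
# iteratively from example data and the expected answers.
import torch
from torch import nn

torch.manual_seed(0)

# The shape of the network: stacked layers of mathematical operations.
model = nn.Sequential(
    nn.Linear(2, 8), nn.ReLU(),
    nn.Linear(8, 8), nn.ReLU(),
    nn.Linear(8, 1),
)

# Data plus the expected answers (XOR), but no hand-written rules.
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# Iterative training: the parameters are adjusted step by step to fit the answers.
optimiser = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(500):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()

# The trained model now produces the expected answers (approximately [0, 1, 1, 0]).
print(torch.sigmoid(model(x)).round().squeeze().tolist())
```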

transcript

And can you provide one or more examples of popular uses of machine learning?

I think one example which is used fairly often is optical character recognition, which means the machines in the post office that read the target address on the letter you write. This is usually not done by humans, it's done by a machine. The machine deciphers your handwriting, then digitises the address, feeds all this information into a database, and the letter gets directed to its target. Another example would be facial recognition, for instance in digital video cameras, webcams and also surveillance systems. So the spectrum of applications of machine learning is quite large. For example, what we are doing in our lab is using machine learning for natural disaster prevention, where we take, for example, climate data or air pollution data of the last years, months and so on, and then we train a model which should be able to predict how temperature, rainfall and so on will behave, given a lot of factors which occurred over the last days and months.

Could you provide one or a few examples of popular uses of deep learning?

Pretty much everything which is quite complex and was apparently unsolvable about ten years ago relies on deep learning. Image recognition, for example, uses deep learning, because the depth of the deep networks, this deep architecture, allows the model to learn a cascade of different feature-processing steps. As a matter of fact, deep neural networks are somewhat motivated by the visual cortex of the human brain, which processes information in several steps, beginning from just receiving colour information to neurons triggering on simple shapes like edges and round shapes, and so on. Neural networks actually do quite similar things. And by going from the most atomic to very complex features, for example from edges and colour gradients to neurons which have learned to recognise the heads of lizards, this complex image information can be processed efficiently and quite fast. And this leads to current machine learning models in image recognition outperforming humans, for example, especially if you factor in time.
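The optical character recognition example can be sketched on a very small scale, assuming Python with scikit-learn, which ships the classic 8x8 handwritten digits dataset. Postal sorting machines solve the same kind of task at far larger scale and accuracy; this is only a toy illustration.

```python
# Toy optical character recognition: learn to map pixel values to digit labels.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # small 8x8 images of handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small neural network learns the rules connecting pixels to digit labels.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
print("prediction for one unseen digit image:", clf.predict(X_test[:1])[0])
```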

Quiz question 8/8

Machine learning is used in face recognition, for example in digital video cameras, webcams and surveillance systems. The scope of application for machine learning is very large. In the interviewee's lab, machine learning is used for natural disaster prevention: they track climate data and air pollution data from previous years and then train a model which should be able to predict weather and climate factors. Deep learning is used in many complex processes like image recognition.



You have completed the Quiz!


transcript

And what are the opportunities and the positive aspects of machine learning for society?

For one, it's the potential to reach a state of automation which strips away tasks that are labour-intensive but boring, tasks no one actually should have to do because they can be automated nicely. This, of course, increases efficiency. It reduces errors, because the machine never gets tired. In a medical setting, for example, machines could be used to augment the decisions of trainee histopathologists. Histopathology is an especially interesting domain here, because it is known that a histopathologist is at their most valuable when they are about to go into retirement, because they have a lifelong period of learning behind them. Those near-retirees are very much faster than the newcomers who still need to learn the trade. By faster I mean they intuitively look at one of those histopathology slides and immediately see what's going on, while the newcomer needs to scan every bit of the slide meticulously, which takes time. There's also a group in Graz around Andreas Holzinger which is training machine learning methods based on data annotations made by an expert histopathologist, with the goal of encapsulating his life experience in histopathology in a machine learning model, so it can potentially be used as a training companion for beginners in this domain.

transcript

And what are the most relevant risks related to ethics, for example, in your opinion?

For one, of course, there is the intended use case of machine learning. Do you want to use it for the general good? Do you want to improve society? Do you want to improve the environment? Or do you want to plug it into a cruise missile? That is the core difference, and these are the extreme ends of the spectrum. Then there's a plethora of societal issues in between. For example, do you automate the creditworthiness estimation of a person and use machine learning for that? Then there's the question of what data you used to train this model, and whether you maybe trained unwanted correlations between some features in the data and the outcome. For example, the model might have learned that some ethnicities, for whatever reason, for example skin colour, should not receive, I don't know, financial aid. The question is always: what data do you feed in? What data do you want someone to use? There's this principle of data sparsity, which means only use the data you need to solve the task, because additional data might create confounding behaviour in the model. This is of course one of the current big issues with the ongoing automation with machine learning. On the other hand, there's always the question, if you use real data to train your machine learning models and you don't like what the model is doing: the model itself is objective, it can only learn from the data you provide, the data is the model's sole reality. Does it mean that you don't like what the model is doing, or do you not like reality? And I'm thinking it might not always be the right way to fix and curate your data to get rid of certain behaviours of the model. I would rather see it as an indicator that change is necessary in the society producing this data.
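The unwanted-correlation risk described above can be shown in a minimal, entirely synthetic sketch, assuming Python with NumPy and scikit-learn. The "creditworthiness" setup, the feature names and the numbers are invented for illustration: a proxy feature that merely correlates with the outcome gets a large weight unless, following the data sparsity principle, it is simply not fed in.

```python
# A confounding/proxy feature picked up by a model, and the data-sparsity remedy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
income = rng.normal(0, 1, n)                                # genuinely relevant feature
label = (income + rng.normal(0, 0.5, n) > 0).astype(int)    # synthetic "creditworthy" label
proxy = label ^ (rng.random(n) < 0.05).astype(int)          # irrelevant proxy, ~95% correlated

X = np.column_stack([income, proxy])
model = LogisticRegression().fit(X, label)
print("weights [income, proxy]:", model.coef_.round(2))     # proxy gets a large weight

# Data sparsity: only use the data you actually need to solve the task.
model_sparse = LogisticRegression().fit(X[:, :1], label)
print("weight [income only]:", model_sparse.coef_.round(2))
```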

transcript

One question regarding explainable artificial intelligence, XAI: can you explain what it is?

Yes. The target of explainable AI is to shed light into the black box of machine learning. The best-performing machine learning models are usually quite complex, which means neither the outside observer nor even the developer really has an insight into what the model is actually learning. With explainability, or XAI, we aim to gain back some transparency on what the model is doing. This can be done in several ways. What we did in our lab is develop a modified backpropagation method. If you feed a data point into the model, it gets transformed layer by layer as it traverses the network, and in the end this results in the answer of the model. We can somewhat invert this process: for example, if the model receives a picture and tells me it's a cat, I can start with the cat output and say, yes, but why? Then I can pick apart the partial decisions of the model, layer by layer, until I reach the input again, and I obtain what we call a heat map. It's basically a masking in the input space showing where the most relevant information is, and you can do this for any potential outcome. For example, if the model also has a dog output, you can do the same process with the dog output, and then you might receive the answer why the model thinks there is no dog in the image, or where the dog information is. This is a way to connect the model's use of information, as given by the data points, to the model's output.

Explainability is quite a young field. I would say the earliest serious steps on more complex models were taken in the 2010s, and since then it has been evolving quite rapidly, so there's a lot of work going on. We are working towards providing explanations which go beyond simple heat map visualisations, which need a lot of interpretation at times, especially if the data itself is hard to understand. Our end goal is that models, under the treatment of the improved explainability methods we are currently working on, should be more or less self-explanatory: not just saying "look at this part of the image, there is information which I as a model think speaks for cat", but informing the user, for example, "I think there's a cat because I see this and that and that cat-like feature" which the model has learned to use during prediction making.

Thank you very much. And what does explainable AI make possible? What can you achieve through that?

For one, you can understand what the model is actually doing, and you can gain understanding on a per-sample basis. Per-sample basis means that for each data point you put into the model, you receive feedback on the model's reasoning based on this data. You can then, of course, use this to verify your model. But in some cases you might also end up with the information that the model is producing the right output for the wrong reasons. This might point you to faults in your training data, where you have introduced some confounding information, some confounding features, which the model then connects to the output "cat" but which are absolutely not cat-like, just because it's easier for the model. And then we have the problem again that the training data of the model is the model's sole reality.
You just give it a couple of thousand images of cats, and the model learns how to get from this data source to "cat". And if those images, for example, have been crawled from Flickr and they all have a copyright watermark because they are stock images or something, the model might pick up that stock images are cats. This is one of the problems we can identify with explainability, and it then gives us the option to improve the model, improve the data source and so on. So we are basically much better informed machine learning developers than we were before explainability.
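The group's own approach is a modified backpropagation method; as a simpler stand-in, the sketch below uses plain gradient-times-input saliency to illustrate the same general idea of tracing a chosen output (say, "cat") back to the input pixels that influenced it. It assumes Python with PyTorch, and both the model and the "image" are toys, not a trained classifier.

```python
# Rough heat-map idea: pick one output, walk backwards through the model,
# and assign a relevance value to every input pixel.
import torch
from torch import nn

# A tiny stand-in "image classifier" with two outputs: [cat, dog].
model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

image = torch.rand(1, 1, 8, 8, requires_grad=True)  # toy 8x8 "image"

# Forward pass, then start from the "cat" output and ask: why?
scores = model(image)
cat_score = scores[0, 0]
cat_score.backward()                                 # traverse the model backwards

# Gradient x input: a crude relevance map over the input pixels (the "heat map").
heatmap = (image.grad * image.detach()).squeeze()
print(heatmap.shape)        # torch.Size([8, 8]) -- one relevance value per pixel
print(heatmap.abs().max())  # the most "cat-relevant" pixel in this toy example
```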


You have successfully completed the learning unit.