Beyond the headlines: How to make the best of machine learning models in the wild

author: Noura Al Moubayed, Durham University
published: Dec. 3, 2019, recorded: October 2019
Description

Machine learning has achieved unprecedented results across a variety of application areas. Medicine has always been an important target for AI because of its high potential social impact. Machine learning models can now reliably diagnose cancer from medical imaging and help physicians provide better, more efficient care to their patients. The question is: how much can we trust these models? Deep neural networks have recently been shown to be vulnerable to adversarial attacks, in which a deliberately crafted input causes a misclassification. The US Food and Drug Administration is currently reviewing its policy on accepting machine learning models in medical devices and diagnostics, following a recent case of a failed cancer diagnostic model. Machine learning models are therefore expected not only to perform accurately but also to meet strict criteria on performance, bias, and ongoing maintenance. Most importantly, in critical domains such as medicine, a model has to be able to explain its decision-making process. I will present recent advances in building machine learning models that are robust to adversarial attacks and can explain their outputs.
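
To make the idea of an adversarial attack concrete, below is a minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al., 2015), one of the simplest attacks of the kind described above. It is an illustration only, not the method presented in the lecture; the network `net`, the inputs `x`, the labels `y`, and the perturbation budget `eps` are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(net, x, y, eps=0.03):
        # Work on a detached copy so the gradient is taken with respect
        # to the input itself, not just the network weights.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(net(x_adv), y)
        loss.backward()
        # Step in the direction that increases the loss, then clamp the
        # result back into the valid pixel range [0, 1].
        x_adv = x_adv + eps * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    # Hypothetical usage: x_adv = fgsm_attack(model, images, labels, eps=0.03)

Even a small `eps` can flip a classifier's prediction while leaving the image visually unchanged. One common family of defences, adversarial training, feeds inputs perturbed this way back into training so the model learns to resist them.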
