Ethical & legal aspects
What is ethics?
"Ethics is a study of what are good and bad ends to pursue in life and what is right and wrong to do in the conduct of life.
It is therefore, above all, a practical discipline.
Its primary aim is to determine how one ought to live and what actions one ought to do in the conduct of one's life."
— John Deigh, An Introduction to Ethics
Defining what is good or right is hard.
Trolley problem
Chick classifier
(Figure: input image $\Rightarrow$ predicted class)
Train a classifier to predict a person's IQ from photos and texts.
We very often use proxy values as labels:
| What we want | Proxy label |
|---|---|
| Performance at job | IQ |
| Probability that someone commits a crime | Probability that someone is convicted |
| Interest of a person | Click on a link |
| Next correct word in a sentence | Next word used by someone in a sentence |
The "AI Gaydar" study
Goal: Predict sexual orientation from facial images.
Humans have long tried to predict hidden characteristics from external features, a practice known as physiognomy.
Is the research question ethical?
Researchers claim they wanted to show the dangers the technology poses.
Is this a good justification?
No, the dangers are apparent without building it.
Huge potential harms vs. questionable value.
There is a wider class of such applications (e.g., the startup Faception).
The data
Was it ethical to use this data?
Biased data
35,326 pictures of 14,776 people, all white; gay, straight, male, and female represented evenly.
The training and test data thus carry a lot of bias.
$\Rightarrow$ The classifier will likely not work well outside of this specific data set.
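A minimal sketch of this failure mode on hypothetical synthetic data (the `offset` parameter and all numbers here are illustrative; the feature shift stands in for demographic differences absent from the training set):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, offset):
    # Features of this group are centered at `offset`, and the true class
    # boundary shifts with the group, mimicking group differences.
    X = rng.normal(loc=offset, scale=1.0, size=(n, 5))
    y = (X[:, 0] + X[:, 1] > 2 * offset).astype(int)
    return X, y

X_train, y_train = make_group(1000, offset=0.0)  # the only group in the data set
X_shift, y_shift = make_group(1000, offset=2.0)  # a group never seen in training

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on the training group:", clf.score(X_train, y_train))
print("accuracy on the unseen group:  ", clf.score(X_shift, y_shift))
```

The model scores well on the group it was trained on and near chance on the unseen group, which is exactly the concern with a data set restricted to one demographic.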
Assessing AI systems
Legal aspects of automated systems
Example
We have bought a smart voice assistant that can order products from an online shop. Suppose it places an order we did not intend.
Who is at fault (i.e., who pays the shipping fees for the return)?
Details depend on jurisdiction.
Example based on "KI & Recht kompakt" by Matthias Hartmann.
When we delegate product ordering to the voice assistant, the shop may assume that we are using it responsibly.
The outcome depends heavily on what can be expected of the voice assistant's owner.
While the owner is in some sense responsible, it is very unlikely that they would be held liable.
Attacking machine learning systems
Idea: use the gradient of the loss with respect to the input to compute a small perturbation that pushes the prediction towards a different class.
"Explaining and Harnessing Adversarial Examples", Goodfellow, Shlens & Szegedy
Adversarial attacks can work generically with small perturbations, even in the physical world. Example: wearing adversarial accessories such as specially patterned glasses.
If users can modify your training data, your model is especially vulnerable (data poisoning).
Example: Microsoft's Twitter bot "Tay", which users manipulated into posting offensive content within a day of its launch.
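A toy sketch of such poisoning on hypothetical synthetic data (all names and numbers illustrative): an attacker who can write to the training set injects points that look like class 0 but carry label 1.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The attacker duplicates the class-0 training points with flipped labels,
# pulling the decision boundary deep into the class-0 region.
X_bad = X_tr[y_tr == 0]
y_bad = np.ones(len(X_bad), dtype=int)
poisoned = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_tr, X_bad]), np.concatenate([y_tr, y_bad])
)

print("clean model test accuracy:   ", clean.score(X_te, y_te))
print("poisoned model test accuracy:", poisoned.score(X_te, y_te))
```

The poisoned model's test accuracy drops noticeably, although the attacker never touched the test set.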
Not all safety issues are specific to software.