Ethics Of Machine Learning
With machine learning systems exploding in popularity, there is a sense of hope as well as fear. Will the rise of machines usher in a utopian era, or will it lead to machines taking over humanity? The possibility of creating thinking machines has also raised questions about the inherent biases in machine learning algorithms. Because machine learning systems rely on large amounts of data to get smarter, transparency, the potential for bias and accountability have become central concerns in AI and machine learning.
Machine learning has moved well beyond text prediction and anti-lock braking systems. With the widespread use of sophisticated machines, it has become imperative to examine the ethics of machine learning and its impact on human life.
What Is Machine Learning Bias?
Algorithmic decisions today affect every aspect of human life. Thanks to their ubiquity and falling costs, along with academic, government, military and commercial interest, machine learning algorithms increasingly shape our everyday reality. From ranking results on the web to informing decisions in the criminal justice system, they have become part of the fabric of our experience. This, however, poses an important question about the ethics of artificial intelligence.
Machine learning systems are not inherently good or bad. AI systems are only as good as the data we put into them. Bad data can carry ideological, gender and caste biases, which in turn can skew the decision-making of machine learning algorithms.
For example, banks increasingly use machine learning algorithms to make lending decisions, such as assessing a person's eligibility for a home loan. The algorithms consider gender, age, marital status, education, employment status and the number of dependents, among other factors. By comparing these data points with those of thousands of prior customers, the algorithms generate a risk score, on which banks base the decision to extend a loan.
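A risk score of this kind can be sketched as a weighted combination of applicant features squashed into a probability. This is a minimal, purely illustrative sketch: the feature names, weights and numbers below are invented for the example, not taken from any real bank's model.

```python
import math

def risk_score(applicant, weights, bias):
    """Logistic risk score in [0, 1] from weighted applicant features."""
    z = bias + sum(weights[k] * applicant[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights, standing in for what a model might learn from
# thousands of prior customers' repayment histories.
WEIGHTS = {"age": -0.02, "income": -0.00003, "dependents": 0.3, "employed": -0.8}
BIAS = 1.0

applicant = {"age": 35, "income": 40000, "dependents": 2, "employed": 1}
score = risk_score(applicant, WEIGHTS, BIAS)
print(f"risk score: {score:.3f}")  # a bank would lend only below some cutoff
```

The point of the sketch is that every number here, the weights above all, is distilled from historical data, which is exactly where bias enters.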
While the results produced by the machine may appear unbiased, fast and impartial, it is important to note that biases can creep in in unexpected ways. Because the algorithms look at historical information about previous applicants and their subsequent performance, the machine can reinforce patterns it has learned from real-world data and may reject certain categories of applicants on the basis of caste or gender. This implicit bias often affects minorities most, because the input data about them is limited.
Since machine learning systems know only what is fed into them, they can tackle any problem that can be cast in a suitable numerical form. Whether it is property prices or a list of terrorism suspects, the problem is solved mechanically by the algorithm. However, the power to generate quick results is also an Achilles heel: along with the input data, the algorithms absorb whatever biases went into constructing that data in the first place.
Apart from data bias, machine learning algorithms also suffer from their creators' bias. An algorithm is shaped by the programmers who build it and the data it uses. Because the development team selects which algorithms to use, how they are set up, and which metrics and parameters to optimise, the creators' biases are implicitly baked into the system.
So when machine learning is used for recruiting, we train a neural network on a set of past resumes and hiring decisions. The training data itself carries all the biases of the original decision makers, which the machine learning algorithm then absorbs. Unlike with a human operator, however, it is much harder to identify the reasons behind a machine's bias.
For example, in 2017 Amazon scrapped its experimental recruiting tool for identifying software engineers because the system had become discriminatory against women. In 2016, a criminal justice risk-assessment tool, built to assist judges at sentencing by predicting the likelihood of re-offending, was found to be biased against black defendants.
Ethical Machine Learning
The fragility of current machine learning systems stands in stark contrast to human intelligence, which can learn quickly in one context and apply the lesson in others. Prominent tech leaders have been sounding the alarm about the dangers of AI systems for quite some time. As machines rapidly take on decision-making roles in practical matters, data scientists and engineers will inevitably have to engage with the ethical ramifications of machine learning. But how do you put ethics into a machine? AI ethics is concerned with ensuring that a machine's behavior towards humans, and towards other machines, is ethically acceptable.
One way to resolve the ethical dilemma is to avoid unethical outcomes by creating software that implicitly supports ethical behavior. Data scientists have a moral responsibility to train machine learning models on unbiased data. This means allowing for greater transparency, along with a system of checks and balances, to ensure that fair, ethical AI principles are applied in developing the machine learning model.
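One concrete check of this kind is a simple bias audit of a model's decisions. The sketch below, with invented data, compares approval rates across groups and applies the widely used "four-fifths rule", which flags disparate impact when one group's selection rate falls below 80% of the highest group's.

```python
def selection_rates(decisions, groups):
    """Approval rate per group, from parallel lists of 0/1 decisions and group labels."""
    rates = {}
    for g in set(groups):
        picked = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(picked) / len(picked)
    return rates

# Hypothetical model output: 1 = loan approved, 0 = rejected.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = selection_rates(decisions, groups)
ratio = min(rates.values()) / max(rates.values())
print(rates)
print(f"impact ratio: {ratio:.2f}")  # below 0.8 suggests disparate impact
```

Audits like this are only one check among several, but they make a model's group-level behavior visible instead of leaving it buried in individual decisions.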
One of the biggest challenges in implementing ethical principles in machine learning is the lack of a concrete definition of "fairness". Collaboration between social scientists and machine learning engineers is needed to arrive at a clear understanding of fairness and to establish guidelines for machine ethics. A clear framework, grounded in the fundamental principles of ethics, is the only way forward. It would allow ethicists to discover and establish the fundamental principles of machine ethics, and AI researchers to convince the general public that ethical machines can improve their lives.
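The difficulty of pinning down "fairness" can be made concrete: two common fairness criteria can give opposite verdicts on the same predictions. The sketch below uses invented data to compare demographic parity (equal rates of positive predictions across groups) with equal opportunity (equal true-positive rates among the truly qualified).

```python
def rate(values):
    return sum(values) / len(values)

# Invented (prediction, true_outcome) pairs, one per person, split by group.
group_a = [(1, 1), (1, 1), (0, 0), (0, 0)]
group_b = [(1, 0), (1, 1), (0, 1), (0, 0)]

# Demographic parity: compare overall positive-prediction rates.
dp_a = rate([p for p, _ in group_a])
dp_b = rate([p for p, _ in group_b])

# Equal opportunity: compare true-positive rates among qualified people (y == 1).
tpr_a = rate([p for p, y in group_a if y == 1])
tpr_b = rate([p for p, y in group_b if y == 1])

print(f"parity: {dp_a} vs {dp_b}")  # equal, so demographic parity holds
print(f"TPR:    {tpr_a} vs {tpr_b}")  # unequal, so equal opportunity fails
```

Here the same predictions satisfy one definition of fairness while violating another, which is precisely why a shared, concrete framework is needed before ethical principles can be engineered into a system.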