The importance of ensuring that artificial intelligence (AI) and machine learning are trustworthy, ethical and responsible is becoming ever clearer as the world grapples with maximising the benefits of this increasingly sophisticated technology to society while minimising its potential harms.
While AI is a broad concept concerning the ability of computers to simulate human thinking and behaviour, machine learning refers to computing algorithms that learn from data without explicit programming. Put simply, machine learning enables systems to identify patterns, make decisions and improve themselves through experience and data.
Machine learning models provide the basis for automated decision-making systems that help businesses streamline operations and cut costs. This has led to an explosion of applications in sectors such as healthcare, marketing, cybersecurity and finance, where machine learning is now used by banks to determine whether applicants should be considered for a loan.
While these models promise to provide the basis of a more just and equitable society, the algorithms are far from infallible. They can degrade over time, discriminate against individuals and groups, and are open to abuse and attack.
The terms trustworthy, ethical and responsible, so often used in connection with AI, should now extend to defining the type of machine learning we are prepared to tolerate in society. Machine learning should encapsulate accuracy, fairness, privacy and security, and the onus is on the guardians, gatekeepers and developers in the field to ensure all four are protected.
Ensuring machine learning models are trustworthy is no easy task and certainly not one for any single discipline to tackle on its own. It is now widely accepted that a more holistic approach is needed to study and promote trustworthiness in AI, with input from a wide array of experts in mathematics, philosophy, law, psychology, sociology and business.
A two-day, in-person workshop in the Swiss city of Zurich earlier this month brought together international researchers working on algorithmic fairness. The goal of this workshop, which I attended, was to foster dialogue between these researchers in the context of legal and societal frameworks, especially in light of attempts by the European Union to promote ethical AI.
The workshop covered a wide range of topics that merit consideration in relation to trustworthy machine learning.
Usefulness versus fairness
There is always a trade-off between usefulness from the perspective of the decision maker and fairness from the perspective of the person subject to that decision.
On the one hand, the decision maker establishes and owns the machine learning decision system to drive forward business or organisational goals. Model predictions are used to reduce uncertainty by quantifying utility for the decision maker, and wider concepts of social justice and equity are typically not part of that utility set.
On the other hand, the decision subject is benefitted or harmed by the decision based on the model's predictions. While the decision subject understands the decision may not be favourable, they at least expect to be treated fairly in the process.
The question arises: To what extent does there exist a trade-off between decision maker utility and decision subject fairness?
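The trade-off can be made concrete with a toy example. The sketch below is illustrative only, with all names and numbers invented rather than taken from the workshop: a lender scores loan applicants from two groups and must pick one approval threshold. Raising the threshold improves the lender's utility, measured here as the repayment rate among approved applicants, while widening the gap in approval rates between the groups, a simple demographic-parity measure of unfairness.

```python
# Toy loan-approval example (all data invented for illustration).
# Each applicant is (model score, actually repaid, group).
applicants = [
    (0.9, True, "A"), (0.8, True, "A"), (0.7, False, "A"), (0.6, True, "A"),
    (0.6, True, "B"), (0.5, True, "B"), (0.4, True, "B"), (0.3, False, "B"),
]

def approval_rate(group, threshold):
    # Share of a group's applicants whose score clears the threshold.
    members = [a for a in applicants if a[2] == group]
    return sum(1 for score, _, _ in members if score >= threshold) / len(members)

def evaluate(threshold):
    # Decision-maker utility: repayment rate among approved applicants.
    approved = [a for a in applicants if a[0] >= threshold]
    utility = sum(1 for _, repaid, _ in approved if repaid) / len(approved)
    # Decision-subject fairness: gap in approval rates between groups.
    parity_gap = abs(approval_rate("A", threshold) - approval_rate("B", threshold))
    return utility, parity_gap

for t in (0.25, 0.55):
    utility, gap = evaluate(t)
    print(f"threshold {t}: repayment rate {utility:.2f}, parity gap {gap:.2f}")
```

With these made-up numbers, the low threshold approves everyone (repayment rate 0.75, parity gap 0), while the high threshold lifts the repayment rate to 0.80 but approves group A four times as often as group B: the decision maker's gain is the decision subjects' loss.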
Different models produce different biases
Different machine learning systems produce different results, and different biases. Decision-support systems use risk-prediction models, and the selection processes built on them can produce discriminatory outcomes.
Digital marketplaces use matchmaking machine learning models, which can lack transparency in terms of how sellers are matched with buyers. Online public spaces use search-recommender machine learning models, which can incorporate implicit biases in relation to suggesting content based on assumptions made about the user.
Machine learning focuses more on goals than procedure: the approach is concerned with gathering data and minimising the shortfall between actual output and target output.
Minimising this gap, as measured by the 'loss function', typically leads developers to address individual prediction errors while ignoring group-level prediction errors, which produces biased learning objectives.
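The point about individual versus group-level errors can be sketched in a few lines. In this invented example (predictions, outcomes and group labels are all made up), the average loss over everyone looks acceptable, yet almost all of the error falls on one group, which is exactly what an aggregate loss function hides.

```python
# Invented data: (model prediction, true outcome, group) per individual.
rows = [
    (0.9, 1.0, "A"), (0.1, 0.0, "A"), (0.8, 1.0, "A"), (0.2, 0.0, "A"),
    (0.5, 1.0, "B"), (0.5, 0.0, "B"), (0.6, 1.0, "B"), (0.4, 0.0, "B"),
]

def squared_loss(pred, target):
    # A common loss function: the squared gap between output and target.
    return (pred - target) ** 2

def mean_loss(subset):
    # Average loss over a set of individuals.
    return sum(squared_loss(p, t) for p, t, _ in subset) / len(subset)

overall = mean_loss(rows)
per_group = {g: mean_loss([r for r in rows if r[2] == g]) for g in ("A", "B")}

print(f"overall loss: {overall:.3f}")  # aggregate looks moderate
for g, loss in per_group.items():
    print(f"group {g} loss: {loss:.3f}")  # group B bears most of the error
```

Optimising only the overall figure treats these two situations as equivalent, which is why group-aware evaluation matters.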
Further bias can be introduced via the data used to train the machine learning model.
Poor data selection can also lead to problems of under- or over-representation of particular groups, while what constitutes bias varies from person to person.
Indeed, model features are determined by human judgement, so these biases can produce biased machine-learned representations of reality, which can in turn feed into unfair decisions that impact the lives of individuals and groups.
For example, Uber uses its accumulated driver data to calculate in real time the probability of a driver picking up another fare following a drop-off, as well as the potential value of that next fare and the time it will take to arrive.
This type of information, fed into a machine learning model, can end up discriminating against passengers in economically deprived areas compared with those in more upmarket ones.
Investigating discrimination
The third and final area involves investigating discrimination, which factors in sensitive information and requires counterfactual reasoning for meaningful situation testing.
The example provided at the workshop related to a mid-40s female academic who applied for promotion and was overlooked in favour of a male colleague with similar education and experience. The female academic did not accept the decision by the promotions panel as fair and unbiased and so proceeded to appeal the decision.
This example highlights the effort required by the aggrieved individual in terms of appealing an ‘opaque’ decision. It also demonstrates how the aggrieved individual must reveal their sensitive information in order to build a case against an unfair decision.
Even if they expend all their energy, data and reasoning, can they successfully prove their case without access to the data and machine learning model that helped generate the decision?
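The counterfactual reasoning behind situation testing can be sketched as: hold every feature fixed, change only the sensitive attribute, and compare the model's outputs. The scoring rule below is a deliberately biased stand-in invented for illustration; it is not the promotion panel's actual model, which in the scenario above is precisely what the appellant cannot see.

```python
def promotion_score(candidate):
    # Hypothetical, deliberately biased model: experience and publications
    # drive the score, but a hidden penalty is applied by gender.
    score = 2 * candidate["years_experience"] + candidate["publications"]
    if candidate["gender"] == "female":
        score -= 5
    return score

def counterfactual_gap(model, candidate, attr, alt_value):
    # Difference in model output when only `attr` is changed:
    # a nonzero gap suggests the decision depends on that attribute.
    counterfactual = dict(candidate, **{attr: alt_value})
    return model(candidate) - model(counterfactual)

academic = {"years_experience": 15, "publications": 20, "gender": "female"}
gap = counterfactual_gap(promotion_score, academic, "gender", "male")
print(f"score gap attributable to gender: {gap}")  # negative means penalised
```

The test itself is trivial; the hard part, as the workshop example shows, is that the aggrieved individual typically has access to neither the model nor the data needed to run it.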
Discrimination is illegal but the ambiguity surrounding how humans make decisions often makes it hard for the legal system to know whether anyone has actively discriminated. From that perspective, involving machine learning models in the decision-making process may improve our ability to detect discrimination.
This is because processes involving algorithms can provide crucial forms of transparency that are otherwise unavailable. However, for this to be the case, we must make it so.
We should ensure that the use of machine learning models makes it easier to examine and interrogate the entire decision process, thereby making it easier to know whether discrimination has occurred.
This use of algorithms should make the trade-offs among competing values more transparent. Therefore, trustworthy machine learning is not necessarily just about regulation but, conducted in the right manner, can improve human decision making for the betterment of us all.
Dr. Adrian Byrne is a Marie Skłodowska-Curie Career-Fit Plus Fellow at CeADAR, Ireland’s centre for applied AI and machine learning.