
Are we outsourcing our brains? - opinion

Neurons in the brain (photo credit: PIXABAY)


Throughout history, humans have strived to “sharpen their brains” – gather facts, learn, deduce, and develop new ideas and models. This trend has intensified significantly since the beginning of the Industrial Revolution nearly two centuries ago, as mankind learned to harness energy and machinery to do tasks that were impossible to carry out earlier. With the emergence of computers after World War II, scientists shifted the focus of this everlasting drive for knowledge toward harnessing the power of computers to perform computations we humans were unable to carry out on our own. Large-scale optimization models were developed and implemented in every facet of our lives – monitoring and controlling systems such as air traffic, water distribution and energy production. Statistical models were developed and used when uncertainty had to be addressed, smart simulation models were invented to help us forecast future scenarios, and more. And then came the era of Big Data and Artificial Intelligence.

As computing power continues to increase at an exponential pace and computer storage has become ridiculously cheap, many scientists have shifted their interest to “learning models,” such as machine learning (ML), deep learning and reinforcement learning. A common theme in these models is their ability to analyze large sets of data (typically referred to as a “training set”), identify trends, symptoms or peculiarities in the data, and learn how to handle similar data in the future. Since the beginning of this century, and especially in the last few years, the world has seen tremendous progress in the development of ever-more sophisticated learning models that are gradually making their way into our daily routines.
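To make the idea concrete, here is a minimal sketch in Python, using scikit-learn and entirely invented data, of the train-then-predict pattern described above; it illustrates the concept only, not any particular system.

    # A toy illustration of the "training set" idea: the model is fitted on
    # historical examples and then asked to handle new, similar data.
    # The data is randomly generated purely for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1000, 3))                         # 1,000 past examples, 3 features each
    y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)    # the pattern hidden in the data

    model = LogisticRegression().fit(X_train, y_train)           # "learning" from the training set

    X_new = rng.normal(size=(5, 3))                              # new cases the model has never seen
    print(model.predict(X_new))                                  # the model handles them on its own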

For example, consider a model that trains itself on tens of thousands of court cases, looking for features that cause certain cases to last much longer than the average. When a judge receives a new case, the model can predict whether that case is likely to drag on and suggest to the judge tailored actions that may shorten its duration. As such, the model helps the judicial system carry out its mission more efficiently and helps society at large by saving time and money in the courts and minimizing the delays in legal proceedings that cause so much agony to all involved.
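A hypothetical sketch of such a system is shown below; the feature names, thresholds and data are invented for illustration and do not describe any real court model.

    # Hypothetical court-case example: predict whether a newly filed case will
    # run much longer than average. Feature names and data are invented.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    n_cases = 10_000
    features = np.column_stack([
        rng.integers(1, 20, n_cases),    # hypothetical feature: number of parties involved
        rng.integers(0, 2, n_cases),     # hypothetical feature: expert witnesses required?
        rng.integers(1, 500, n_cases),   # hypothetical feature: pages of filings
    ])
    # Synthetic label: did the case run much longer than average?
    runs_long = ((features[:, 0] > 10) | (features[:, 1] == 1)).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, runs_long)

    new_case = np.array([[12, 1, 250]])        # a newly filed case, described by the same features
    print(clf.predict_proba(new_case)[0, 1])   # estimated probability that it will drag on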

There are countless more examples where learning models have proven to impact real-world systems in a very positive way. But are there some “side effects” that should concern us? Two possible risks are highlighted below.


The first is that, for the most part, learning models are not designed and built to provide an exact, easy-to-comprehend “formula” that explains the results they reach. No wonder scientists often refer to the inner workings of such models as “neural networks.” Just as we are unable to understand exactly how a human brain, containing over one hundred billion neurons, reaches a certain conclusion, or why two people presented with identical data may reach different conclusions, we cannot really figure out a learning model’s outcomes. Such models resemble a “black box” that guarantees, with a certain level of confidence, that if it is fed data similar to the data it was trained on, its outcomes will continue to serve us well time after time.

The brain (credit: INGIMAGE)

The inability to fully understand these inner workings exposes us to situations in which a learning model could yield inaccurate or even false results without anyone noticing. Such a malfunction may happen innocently (e.g., new data contains a marginal feature, absent from the training data, that causes a drastic change in the model’s performance) or maliciously (e.g., a hacker finds a way to insert some “dirty data” into the analysis).
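The “innocent” failure mode can be illustrated with a small, contrived sketch: a model trained on data in which one feature never varies degrades sharply, and silently, once that feature starts to matter in deployment. The data and the failure rule below are invented for illustration.

    # Toy distribution-shift example: feature 2 is constant in training,
    # then varies in deployment and changes the true rule. Invented data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X_train = np.column_stack([rng.normal(size=5000), np.zeros(5000)])  # feature 2 never varies
    y_train = (X_train[:, 0] > 0).astype(int)
    model = LogisticRegression().fit(X_train, y_train)

    # In deployment, feature 2 suddenly varies and flips the true rule when it is large.
    X_new = np.column_stack([rng.normal(size=5000), rng.normal(loc=3.0, size=5000)])
    y_new = ((X_new[:, 0] > 0) ^ (X_new[:, 1] > 2)).astype(int)

    print("accuracy on familiar data:", model.score(X_train, y_train))
    print("accuracy on shifted data: ", model.score(X_new, y_new))  # drops sharply, with no warning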

Another related threat is that unintended outcomes could be “baked” into the training data. For example, bank managers could unknowingly discriminate against women: loans are granted to women as well as men, but women receive them only when they are in a noticeably stronger financial position. More specifically, suppose men and women with otherwise equivalent credit situations apply for identical loans, and the process that produces the training data discriminates against women, giving them a lower probability of receiving credit than men in identical circumstances. A learning model trained on such data would simply replicate this bias! That is not something we would want; we would want the bias fixed before any model learns from the data.
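The mechanism is easy to reproduce in a toy sketch: feed a model historical loan decisions that were biased against women, and it learns to assign a lower approval probability to a woman than to a man with identical finances. All data and the decision rule below are invented for illustration.

    # Toy example of bias baked into training data. All data is invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 20_000
    income = rng.normal(loc=50, scale=15, size=n)   # financial strength (arbitrary units)
    is_woman = rng.integers(0, 2, size=n)           # 1 = applicant is a woman

    # Biased historical process: women need a noticeably stronger position to be approved.
    threshold = np.where(is_woman == 1, 60, 50)
    approved = (income > threshold).astype(int)

    model = LogisticRegression().fit(np.column_stack([income, is_woman]), approved)

    # Two applicants with identical finances, differing only in recorded gender:
    print(model.predict_proba([[55, 0]])[0, 1])   # man   - higher predicted approval probability
    print(model.predict_proba([[55, 1]])[0, 1])   # woman - lower: the model has learned the bias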

The second is that, as more and more professions are served by learning models, we may see professionals in these fields give up trying to apply their own brains to the challenges before them, as they succumb to the temptation to “outsource” this effort to virtual machines that do a fantastic job time and again. Rather than carefully reading each new case and trying to understand the human aspects involved, a judge may delegate this work to the model that will do it for him. A family doctor who used to talk to her patients and watch them, translating what their words and body language were telling her, may now prefer to let a model produce the diagnosis in no time and with a much smaller probability of making a mistake.

As the ML “gold rush” continues to intensify and bring with it enormous benefits and progress, perils such as the ones presented above should be noted and further investigated. Specifically, professionals should make sure they understand their data well before turning it over to a learning algorithm. Furthermore, users of learning models should not exempt themselves from the effort of trying to understand the issues they deal with, particularly when these involve human aspects.


The writer is Technion Executive Vice President & Director General and Professor at the Faculty of Industrial Engineering and Management.
