Daffodil International University


Title: ARTIFICIAL INTELLIGENCE (AI), MACHINE LEARNING (ML), RESPONSIBLE FINANCE (RF)
Post by: nafees_research on October 05, 2019, 05:30:32 PM
ARTIFICIAL INTELLIGENCE (AI), MACHINE LEARNING (ML), RESPONSIBLE FINANCE (RF) – AND US, HUMANS

Research shows that human beings have a talent for deceiving one another and being dishonest, a talent they often use, mainly because it allows them to manipulate others without resorting to physical force. Harvard University’s Sissela Bok puts it this way: “It’s much easier to lie in order to get somebody’s money or wealth than to hit them over the head or rob a bank.”

It is thus not surprising that despite the corporate governance lapses of the early 2000s and the measures taken thereafter (Sarbanes-Oxley, various codes of corporate governance, numerous board directors’ training programs, etc.), we still witness (in 2019) a great many governance failures on a daily basis. This only goes to show that unless these ethical and governance standards are embraced by individuals and companies in spirit, and not just in letter, not much will change in the way we govern our businesses, or in the way we use available tools, including technology, to maximize profits. Apparently, we humans still need to be told where the ethical boundaries are, and we still need to be supervised to ensure that we do not fall out of step. This is where regulations and regulatory oversight come into play.

In today’s technology-driven world, with “innovation” happening all around us, regulations are always trying to catch up with new business models. New ideas and business models are being introduced at a very high frequency, and most of the time the regulatory support required to protect against systemic failures is not yet in place. Many of these new business models, however, address a real need and demand in the market and are therefore relevant in terms of their use case. As a result, we see regulators struggling to keep pace with innovation while fulfilling their role as protectors of both the system and the consumers of products and services.

The finance and banking sector has been one of the early adopters of technology. The major advantage of technology is its ability to process information at very high speed, enabling quicker and better business decision making. Within the banking sector, ML and AI (Artificial, or more aptly Augmented, Intelligence) are being used for this purpose, mainly in the areas of Risk Management, Compliance, Credit, KYC and Cyber Security. To do this, technology needs to collect and process huge amounts of data, most of which is generated by the actions of human beings through digitalized processes. This enables the design of algorithms that generate valuable insights from the collected data. It is data, both its quality and its quantity, that fuels the ability of ML/AI to derive insights for improved and swifter decisioning. The manner in which AI is deployed has changed a great deal over the years; today it is applied in everything from chatbots and robots to autonomous cars. Thus, the quest for more and more data, and for more and more business opportunities to use and monetize that data, has become a primary objective of most institutions. Herein lies the problem.
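To make the data-to-decisioning loop described above concrete, here is a minimal sketch of the kind of credit-scoring model banks build on collected data. The feature names, the synthetic data and the scikit-learn pipeline are illustrative assumptions on my part, not details drawn from the article or from any specific institution.

# A minimal, illustrative credit-decisioning sketch (assumed setup, not any bank's
# actual model): historical application data is used to train a classifier that
# can then score new applicants in milliseconds.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical historical data: annual income, debt-to-income ratio, years of credit history.
n = 1000
X = np.column_stack([
    rng.normal(50_000, 15_000, n),   # annual income
    rng.uniform(0.05, 0.60, n),      # debt-to-income ratio
    rng.integers(0, 25, n),          # years of credit history
])
# Synthetic "defaulted" label loosely tied to the features, for illustration only.
risk = 0.00002 * (60_000 - X[:, 0]) + 3.0 * X[:, 1] - 0.05 * X[:, 2]
y = (risk + rng.normal(0.0, 0.5, n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The quality and quantity of the data drive everything downstream of this line.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("Test AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))

# Scoring a new applicant is near-instant (the speed advantage described above),
# but the decision inherits whatever biases and gaps the training data contains.
applicant = np.array([[42_000, 0.35, 4]])
print("Estimated default probability:", round(model.predict_proba(applicant)[0, 1], 3))

Even in a sketch this small, the governance questions raised in this article are visible: whoever chooses the features and the training data effectively chooses how applicants are judged, which is exactly why responsible use of the data matters as much as the algorithm itself.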

Research also shows that creative people are more likely to exhibit unethical behavior when faced with ethical dilemmas. Thus, while innovation continues in the absence of adequate regulatory frameworks, one of two things happens: either the pace of innovation slows or even stops, or innovation continues in parallel with the design of regulatory regimes. It is here, during this lag period, that dishonesty and unethical practices can and do take place. This begs the question of whether innovation should wait until the regulatory regime is perfected, or whether innovators should cooperate with regulators and help them design an “innovation-friendly” regulatory framework. Given market dynamics, I would vote for the latter. However, this can only work if, while the regulatory framework is still weak and evolving, the innovators (and the users of innovative technologies) act responsibly and ensure that technology is not misused.

Unfortunately, technology has so far been misused on several occasions (I am sure I do not need to delve into Cambridge Analytica, Facebook, Google, and the incidents involving banks, credit bureaus and others over the past many years; I have written about these in other publications). What such irresponsible behavior could lead to, and it seems is already leading to, is “over-regulation”, which in turn would stifle innovation and the ability of both the innovators and their users to scale by leveraging technology. Already, regulators are bringing innovators in for hours of questioning to try to understand why (and how) they did what they did: data misuse, invasion of privacy, biased algorithms producing biased decisions, lax oversight and understanding of cyber security issues, and so on (it is a long list). Through such questioning, the regulators are trying to determine what type of regulatory frameworks and oversight need to be developed in response to irresponsible behavior.

In today’s world of digital technology, there is ample opportunity to improve the way products and services are delivered and to make processes more efficient, but there is also plenty of room for dishonest practices using the same technology. For this reason, the onus is on all stakeholders (the innovators, the users of innovations, the consumers and the regulators) to ensure that technology is leveraged strategically and sustainably. This can only happen if we realize that with the great power technology gives all of us comes an even greater burden of responsible behavior. Unless we live up to that responsibility, the use of technology may not be sustainable.

Source: https://www.linkedin.com/pulse/artifical-intelligence-ai-machine-learning-ml-finance-kaiser-naseem/