What is Artificial Intelligence? It has been defined across many articles as essentially the same concept, with variations only in wording. A large group of scholars defines it as a machine's ability to read and learn from external data accurately and to use those learnings to address specific problems and achieve specific goals (Haenlein & Kaplan, 2019). Artificial Intelligence has swiftly opened up applications in sectors such as business, education and healthcare, and prominently in corporate and governmental policy. As the AI revolution reinvigorates our world, it could signal a utopian future in which humanity lives in balance with machines, or foreshadow a dystopian era of poverty, wars and inequality across societies (Goralski & Tan, 2020). This essay will discuss how AI as a technology has evolved, how it passed through stages of discontinuity and ferment, and how it has actually performed in industry, while mapping out AI's progress in recent years. It will also discuss how one of the most sophisticated technologies became, for some scholars, a source of fear, and why.
Artificial Intelligence is a concept mentioned as early as Greek mythology. Hephaestus, the god of the forge, used machines described as automatons that interacted and spoke of their own will. These were early imaginings of machines behaving like humans, which relates to one of the principles of AI (McCorduck, 1977). A second milestone came during the Second World War, when the English mathematician Alan Turing helped build machines that decrypted the Nazis' Enigma codes, aiding the Allies; after the war, Turing published a paper describing the creation of intelligent machines and a way to test their intelligence, which came to be known as the "Turing test" for AI-capable machines. Taking inspiration from this, John McCarthy coined the term "Artificial Intelligence" and, together with the computer scientist Marvin Minsky and others, organized a 1956 workshop at Dartmouth College that is now regarded as the birthplace of AI. This was followed by substantial funding for AI research, which produced new applications such as ELIZA, a natural-language-processing program that could converse with human beings and is often cited as one of the first programs to fool users in the spirit of the Turing test. A few successful applications around 1966 drove progressive growth in the field of artificial intelligence. However, there came a time when the large research grants were not reaping applicative rewards. During the 1970s, the US Congress began to question the high expenditure on AI research and the optimistic views of AI researchers. This was followed by the UK government's decision to withdraw funding from AI projects, after which no further advances were realized for some time (Haenlein & Kaplan, 2019).
The main reason for this period of discontinuity in artificial intelligence was not that the principles were incapable; it was mainly that the computers of the day were not fast enough (Goralski & Tan, 2020). During the 1980s, AI researchers drew on the observation that the human brain works through electrical signals transmitted by neurons to formulate new principles of artificial intelligence, such as neural networks (Yan et al., 2021). Although the principles were advanced enough for practical solutions, computers and their processors were too slow and their computing power too limited to live up to the tasks at hand (Goralski & Tan, 2020). This period of little or no funding for AI research was termed the "AI winter", and the resulting discontinuity and ferment in the field lasted more than 20 years.
However, in 1997 IBM's Deep Blue, a chess-playing program based on the principles of expert systems and artificial intelligence, beat the then world champion Garry Kasparov. The event was a milestone for the field: it was watched by some 200 million people worldwide and created a perception among the general audience that AI was no spoof and could be heavily invested in to reap great rewards (Haenlein & Kaplan, 2019). During the first decade of the 21st century, the earlier drawbacks were overcome: large amounts of data became easily accessible, faster and cheaper computers were widely available, and some of the basic principles of artificial intelligence were developed into deep learning and artificial neural networks (ANNs) (Yan et al., 2021), principles that would redefine the role of AI in consumers' lives in the years to come, as Google showed when it first introduced its self-driving cars in 2009. The arrival of big data and deep learning was an added benefit to the field. Building on neural networks and deep learning, Google developed a program known as AlphaGo to play the game of Go, which is considered more difficult than chess and was widely regarded as a game that machines could not master and beat humans in; AlphaGo nevertheless achieved this when it beat the world champion of Go (Haenlein & Kaplan, 2019). Artificial neural networks and deep learning now underlie most applications of this day and age.
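Since artificial neural networks recur throughout the rest of this essay, a minimal sketch of the idea may help: a network is simply layers of weighted sums, each passed through a non-linear function (here a sigmoid). The layer sizes and weights below are illustrative only and are not drawn from any system cited in this essay.

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into (0, 1), loosely mimicking a neuron "firing"
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, biases):
    """One forward pass through a feedforward neural network.

    Each layer computes a weighted sum of its inputs plus a bias, then
    applies the sigmoid non-linearity -- the basic operation that deep
    learning stacks many times over.
    """
    activation = x
    for W, b in zip(weights, biases):
        activation = sigmoid(W @ activation + b)
    return activation

# Illustrative 2-input -> 3-hidden -> 1-output network with random fixed weights.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
biases = [np.zeros(3), np.zeros(1)]

output = forward(np.array([0.5, -0.2]), weights, biases)
print(output)  # a single value in (0, 1)
```

Training such a network means adjusting the weights to reduce prediction error on data; the "deep" in deep learning refers to stacking many such layers.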
The field of artificial intelligence can be sub-divided into two parts: (i) Narrow Artificial Intelligence (NAI) and (ii) Artificial General Intelligence (AGI). NAI comprises the existing principles and technology norms of artificial intelligence, although it is still considered the weaker form of AI (Goralski & Tan, 2020). Most of today's real-world applications fall under NAI, such as the image recognition and processing used by Facebook, the speech recognition behind smart speakers, and self-driving cars. Artificial general intelligence (AGI), by contrast, has remained largely theoretical, though it is swiftly becoming applicable to the real world. A subset of AGI, known as human-level machine intelligence (HLMI), is regarded by many scholars as a prospective dominant design in the field, as it is expected to be as capable as an immensely talented human being at cerebral tasks (Salmon et al., 2020). The same scholars give reason to believe that this particular form of AI causes fear among some of the populace: in the short term it might take jobs, and in the long run it might even be capable of replacing humans as the dominant species in this world. The negatives of AI are discussed in the latter part of this essay. The skill level of humans compared with human-level AI (HLMI) is shown in Fig. I.
Based on the historical roadmap of artificial intelligence discussed above, one can apply the Anderson and Tushman technology cycle to several sets of AI. Anderson and Tushman (1990) formulated that technological change is cyclical, as shown in Fig. II: each technological discontinuity ushers in an era of ferment until a dominant design is selected, which in turn is followed by an era of incremental change for that technology. As discussed earlier, HLMI is the dominant design that many scholars anticipate, but it remains in its theoretical era. Several dominant designs can be placed, like HLMI, at points in AI's technology cycle. Artificial neural networks (ANNs) based on deep learning are one such design, which took hold during the first decade of the 21st century (Yan et al., 2021).
As the ANN design rose to command a majority of the market within this era, a new discontinuity is well placed to replace ANNs as the dominant design, and that discontinuity would be HLMI. Anderson and Tushman (1990) held that a dominant design is not the cutting edge of its technology, and hence not in the same category as the discontinuity itself, since it only caters to the requirements of the majority of the market. The era of incremental change consists of developing competencies around the design: in this case, ANNs have been combined with deep learning by Google for self-driving cars, and with big data by Facebook to generalize customers' preferences (Yan et al., 2021). The next era of discontinuity can be anticipated from advances in the field of Artificial General Intelligence (AGI), which could usher in the next stage of the technology cycle.
Artificial Intelligence is not a technology that came into being during this century. The groundwork was laid over the second half of the 20th century, when several scholars raised the standards that made AI what it is in the current, still-developing field. Firschein and Coles (1973) postulated a list of theoretical programs that they predicted would eventually result from advances in artificial intelligence, and that have actually been developed as products in the current era. Table 1 shows some of those programs. The table demonstrates that the postulates used by scholars during the 20th century were not at fault; rather, the shortcoming lay on the hardware side, primarily the high price of computing machines with little processing power, which could not handle the calculations these postulates required. Several of the abilities proposed by Firschein in 1973 can be seen as a reality in the 21st century, as shown in the table below.
The discussions held earlier in this essay show the degree of complexity achieved by the artificially intelligent machines produced in this era. Based on these performance and innovation parameters, and on the time these technologies have taken, an S-curve can be mapped out below.
The S-curve model explains how innovation begins at a slow pace, speeds up, and then hits a ceiling, requiring firms to jump to a new technology. It is usually mapped as performance versus effort, where effort stands in for time as a technology is born and is eventually replaced by something better (Lazanyi, 2018). The S-curve shown above depicts three regions within the first curve, as follows:
- Initially the performance of a new technology is poor, owing to its inception, but it has the potential to improve, as discussed earlier with the birth of AI at Dartmouth and the beginning of the AI phase (Haenlein & Kaplan, 2019).
- As experience with the technology grows, performance starts to improve gradually, as depicted by Firschein's postulates, which are grounded in today's applicative realities (Firschein & Coles, 1973).
- Eventually a performance ceiling is reached and further improvements slow down, as was the case when the US Congress and the UK government halted funding for AI research because of the heavy price of computing machines that lacked the processing power the research required (Haenlein & Kaplan, 2019).
That is where a new wave of technology takes hold, although, like the first curve at its start, it is initially not as good as the older technology. The gap between the first curve and the second represents the period of discontinuity that persisted for twenty-odd years. The new curve then improves rapidly, and firms need to jump onto it, as was seen with the introduction of artificial neural networks (ANNs) and the rise of AI-driven applications such as Google's self-driving cars and Facebook's image processing. Nor is this the last S-curve: as discussed earlier in this essay, Artificial General Intelligence (AGI) and its subset, human-level machine intelligence (HLMI), are in line to replace ANNs as the dominant design in the field of AI in the coming eras (Goralski & Tan, 2020).
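The three regions of the S-curve can be sketched numerically with a logistic function, a standard way of modelling performance against effort. The ceiling, steepness and midpoint values below are illustrative assumptions, not figures from any cited source.

```python
import math

def s_curve(effort, ceiling=100.0, steepness=1.0, midpoint=0.0):
    """Logistic S-curve: technology performance as a function of effort.

    Performance starts slowly, accelerates through the midpoint, and
    flattens as it approaches the ceiling -- the pattern this essay maps
    onto AI's history. All parameter values are illustrative.
    """
    return ceiling / (1.0 + math.exp(-steepness * (effort - midpoint)))

early = s_curve(-4)   # era of inception: performance is still poor
middle = s_curve(0)   # rapid improvement: exactly half the ceiling
late = s_curve(4)     # approaching the performance ceiling

# Equal increments of effort yield small gains early on but large gains
# mid-curve, which is why firms eventually jump to a new curve.
gain_early = s_curve(-3) - s_curve(-4)
gain_middle = s_curve(0.5) - s_curve(-0.5)
```

A succession of such curves, each starting below but eventually overtaking its predecessor, reproduces the jump from one dominant design to the next described above.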
With large investments in today's modern world, artificial intelligence led global investment in 2019 with 70 billion US dollars, as reported by Stanford's AI Index (Savage, 2020). Of this, the US, Europe and China held the largest shares among countries, while Israel and Singapore invested the most in per-capita terms. Start-ups founded on AI principles and research accounted for a large chunk of the total, raising more than 37 billion US dollars in 2019, compared with 1.3 billion US dollars in 2010.
Figure IV shows the overall share that the US has maintained in artificial intelligence research. The report also puts AI revenues at a total of 156.5 billion US dollars despite the global pandemic, and predicts that they will surpass 300 billion US dollars in 2024 (Savage, 2020).
The prospect of an AI-driven world looks positive given the development that follows, but one cannot celebrate the progress while turning a blind eye to the side effects this technology has brought. While increasing reliance on artificial intelligence can be seen as a harbinger of greater economic success, figures such as Elon Musk, Stephen Hawking and Bill Gates have warned that over-dependence on AI might aggravate inequity in the global economy, or even portend an existential crisis for humanity. AI's main principle is that a machine reads and learns from input data and is trained on that information; one cannot be sure, however, whether the resulting system will be unbiased. It has been witnessed that if the input data is biased, the output of the AI-driven machine will be biased as well. One such example was witnessed by Google itself with its self-driving cars, whose sensors proved better at detecting lighter skin tones than darker ones (Haenlein & Kaplan, 2019). Artificial General Intelligence (AGI), as introduced above, is prominently aimed at matching talented humans at intellectual duties, which has raised doubts among the general populace about job losses in the short term and the chance of humanity being replaced by machines as the dominant species in the long term. And while AGI might only shift jobs through improved efficiency, Narrow Artificial Intelligence (NAI) is already displacing humans from reputed professions: Goldman Sachs reportedly had 600 traders in the year 2000, but only 2 human traders were left by the end of 2017 owing to the evolving nature of NAI (Goralski & Tan, 2020).
Despite the increased efficiencies, scholars have witnessed a hugely unequal distribution of this technology's benefits. Salmon et al. (2020) observed that the countries that invested heavily in AI were reaping greater rewards and were not using AI's prowess to help developing nations reach their potential, with the result that many countries and cities reflect a growing worldwide disparity. Work that was earlier completed by humans is now done by machines, which not only leads to unemployment but also increases psychological stress. This concern is underscored by the fact that prominent minds such as Elon Musk, Stephen Hawking and a number of experts in the field of artificial intelligence published an open letter in 2015 on the unintended consequences of AI (Makridakis, 2017).
Regulation might be a way to avoid the unintended consequences of AI, as suggested by Klinger et al. (2018), who proposed that firms set aside some research funding to retrain and specialize human workers, balancing out the effects of AI. Such regulation should apply not only to private firms but also to countries that use people's personal information to track and monitor certain individuals, a practice that might well breed dissatisfaction with a technology that is being so heavily invested in (Salmon et al., 2020).
Taking an overview of what an AI-driven world could be capable of, and weighing the positives and negatives witnessed in the past, one cannot take lightly the improvements in quality of life that AI has brought, while also maintaining regulations that would be widely accepted in the scholarly world and implementing them without creating a wider disparity among the countries of the world. If such checks are regularly maintained, there need be no fear of the coming world of Artificial Intelligence and the growth it may bring to the people of this world. The future should, and will, remain in the hands of humanity, as the knowledgeable apex species of this world, which will only help build a sustainable, bright era in which relationships among countries, people and machines are harmonious.
- Anderson, P. and Tushman, M.L., 1990. Technological discontinuities and dominant designs: A cyclical model of technological change. Administrative science quarterly, pp.604-633.
- Di Vaio, A., Palladino, R., Hassan, R. and Escobar, O., 2020. Artificial intelligence and business models in the sustainable development goals perspective: A systematic literature review. Journal of Business Research, 121, pp.283-314.
- Firschein, O., Fischler, M.A., Coles, L.S. and Tenenbaum, J.M., 1973, August. Forecasting and assessing the impact of artificial intelligence on society. In IJCAI (Vol. 5, pp. 105-120).
- Goralski, M.A. and Tan, T.K., 2020. Artificial intelligence and sustainable development. The International Journal of Management Education, 18(1), p.100330.
- Haenlein, M. and Kaplan, A., 2019. A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61(4), pp.5-14.
- Klinger, J., Mateos-Garcia, J.C. and Stathoulopoulos, K., 2018. Deep learning, deep change? Mapping the development of the Artificial Intelligence General Purpose Technology. Mapping the Development of the Artificial Intelligence General Purpose Technology (August 17, 2018).
- Lazanyi, K., 2018, September. Readiness for Artificial Intelligence. In 2018 IEEE 16th International Symposium on Intelligent Systems and Informatics (SISY) (pp. 000235-000238). IEEE.
- Makridakis, S., 2017. The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, pp.46-60.
- Marr, B. (2018). What is artificial intelligence and how will it change our world? Retrieved from https://www.bernardmarr.com/default.asp?contentID=963
- Puntoni, S., Reczek, R.W., Giesler, M. and Botti, S., 2021. Consumers and artificial intelligence: an experiential perspective. Journal of Marketing, 85(1), pp.131-151.
- Salmon, P.M., Carden, T. and Hancock, P.A., 2020. Putting the humanity into inhuman systems: How human factors and ergonomics can be used to manage the risks associated with artificial general intelligence. Human Factors and Ergonomics in Manufacturing & Service Industries.
- Savage, N., 2020. The race to the top among the world’s leaders in artificial intelligence. Nature, 588(7837), pp.S102-S104.
- Soni, N., Sharma, E.K., Singh, N. and Kapoor, A., 2020. Artificial intelligence in business: from research and innovation to market deployment. Procedia Computer Science, 167, pp.2200-2210.
- Yan, S., Yao, K. and Tran, C.C., 2021. Using Artificial Neural Network for Predicting and Evaluating Situation Awareness of Operator. IEEE Access.
- Fig. I – Source-https://aiimpacts.org/human-level-ai/
- Fig. II – Source: Anderson and Tushman (1990).
- Fig. III – Source: Lazanyi (2018).
- Table I – Source-Reference 12.
- Fig. IV – Source-Reference 2.