This is part of a series of articles penned by the eminent educationist Arun Kapur.
Imagine this - the year is 2032. Significant advances in technology have been made over the past decade. Artificial Intelligence has grown by leaps and bounds. There are now companies that create bespoke AI solutions tailored to individual needs. What’s more, for a higher price you can get your own personal AI assistant. You may wonder: how is that any different from the AI I have right now? I can ask Siri, Alexa or any of the other algorithms at the press of a button! Well, a bespoke AI assistant in 2032 will have a good grasp of your context. It will know your preferences and your thinking styles. It will help spot errors in your cognition even before the results affect you. It will have an even better understanding of your biological processes, thanks to the round-the-clock stream of data flowing to it from your smart wearables. In short, it will probably know you better than you know yourself. At this point, it may sound like I am making the case for some kind of “Terminator” or “Matrix” scenario. The reality is not so extreme. The truth is that these kinds of technologies are already being developed; companies are already racing to build the next generation of artificially intelligent personal assistants.
The question is: what about those who cannot afford such a solution? You would have a massive advantage over those lesser mortals, even though, ironically, they are more human than you. Is this fair? History shows that such inequality has always been widespread, and the present is no different. If I am born to wealthy parents, I have a realistic chance of attending a top Ivy League university in my teens. The less fortunate have no such option. Will AI be the next inequality aggravator, handing the well-off an advantage the rest of us cannot even fully comprehend?
To channel something new into a force for good, we need to put in significant thought and hard work. To ensure that something new becomes a complete disaster, all we have to do is stand by and do nothing. Artificial Intelligence is one such emerging 'thing'. How it plays out depends on the effort we are willing to put in today and in the coming years. In a world where artificial intelligence is constantly evolving, it's important to consider the ethics of this technology. As we move into the future, let's make sure that our intelligence shapes it into a force for good.
AI is already being touted as the next big thing in the world of work. It is currently being tested in various industries and will transform how we do business, and it can be deployed at scale without significant new equipment or facilities. However, what we have today is not human-level intelligence. It is not really AI at all; it is a more cost-effective way of doing things, but the term AI has become a catch-all for anything done in a sophisticated way. Today's AI cannot do most of what a four-year-old human can. That is because it is narrow AI: it can focus only on a narrow range of tasks, such as identifying tumours from scans, shopping recommendations or suggestions on what to watch on Netflix. It does not fathom context. Having said that, make no mistake: it is getting there. And once it approaches Artificial General Intelligence, all bets are off. I suspect our dictionaries may have to coin new definitions of intelligence to differentiate humans from machines. AI could be a positive force unlike any other, or it could be outright negative. We should also watch out for false positives: outcomes that appear positive while their long-term consequences are negative. The Industrial Revolution is a great example. It made life easier, but the consequences were far-reaching. Indeed, we are feeling its full implications only now, as climate change wreaks havoc. These and other emerging threats were largely rooted in the Industrial Revolution over two centuries ago. We need to apply a generational lens to everything we do.
As the technology improves, it will take over certain tasks and roles that were previously exclusive to humans. This is not necessarily a bad thing: it will free up our workforce to do more interesting and productive work. The jobs likely to be replaced first are those that are repetitive or require little discretion or decision-making. There is already significant research into the kinds of jobs AI will replace. When that happens, what becomes of the humans who are displaced?
The ethics of artificial intelligence is a complex and evolving topic. As AI becomes more advanced, it's important to consider the implications of its actions for society and the future. If we let AI grow at its current pace and in its current style, it is going to get much better very soon. The challenge is that it may slip beyond our control to the point where humans can no longer decipher its decision-making. It will take a cross-pollination of philosophers, psychologists, ethicists, engineers, historians, industry leaders and policymakers to ensure that AI betters humankind rather than being used for sinister purposes or warfare. We already know, for example, that many social media platforms work in silos, using theories of human behaviour to keep us hooked while they serve ads and harvest our data to strengthen their profits. Their algorithms are very good at manipulating us for their own agenda. We should not repeat these mistakes; we need a holistic framework that consults a wide array of stakeholders. After all, the future of humanity could be at stake.
Another question is how we harness this new technology, and what we should do with it. What kinds of jobs should AI replace? How do we best use it as a tool? And how do we ensure it is used responsibly? These questions need answering sooner rather than later. But let’s not lose sight of the bigger picture. With AI, we’re still at the beginning. It’s just a tool that relies on us to decide what to do with it. We are accountable for its use. Its applications will only be as good as the values that guide us, and through us, it. We should actively strive to ensure that persons of substance are in charge, and be careful of the road we take. If technologies replace humans rather than augmenting human capabilities to increase productivity and output, it will not end well. This is why I emphasise, over and over, the need for persons of substance to usher in these changes - people who will put human welfare and the collective good above single-minded profit maximisation. This should be a revolution driven by human ingenuity, not by a technology we cannot control. We should think about how to govern it with our values, not how to surrender ourselves to it.
For workers, the biggest concern is losing the jobs they are accustomed to. This can be frustrating and more than a little demoralising. But the unfortunate fact is that businesses need to adapt to remain competitive, and if AI provides that edge, businesses will queue up. Is it unfortunate? Yes, it is. But that is a debate on the ethics of capitalism! What we should be looking at is how to reinvest some of those profits to upskill workers who are on the cusp of losing their careers. Take, for instance, the deployment of a highly functional robot in an industrial park. The robot enhances output by 20%, which in turn means that between 5 and 20% of the workers in that industrial park become redundant. What should a leader do in that instance? Does she terminate those employees and bank the profit? Does she reinvest some of those profits to increase sales of the product? Or does she upskill those employees and train them to carry out the new auxiliary functions that have emerged from the changing landscape? These and many other options will be available. Hence the focus should be on having a person of substance at the helm to make the right choices, not just for shareholders but for all stakeholders involved.
So what is the solution for workers who might be affected? Many of the new jobs will call for highly skilled workers, but there is also a middle ground. Some jobs might be automated, and that’s fine; the workers who used to do those jobs can become trainers who teach the new AI systems, or they can move to other roles where they apply their skills in a new and exciting context. It is not all doom and gloom. It is also worth remembering that AI is still in its infancy, and some interesting projects are under way that aim to make it more accessible to workers. Remember, we have already been through a revolution or two, and we are currently in the midst of another. Every time, our lives have changed and our jobs have changed. As long as we remember that AI is not the enemy but a tool, we will be fine. Be wary of being fearful. Be wary of being angry. Be wary of being too trusting. Instead, try to understand what is at play here: how is the AI learning, what are its learning strategies, what is its pace of learning, in which domains is it better than I am and in which is it not, and how can I use it to augment my potential?
The one arena in which I see AI having a positive impact on education is personalised learning. And I do not mean that in the narrow sense of merely personalising content to suit a learner's needs; adaptive learning is one important aspect, but it goes beyond that. AI can look at your credentials, explore the jobs you are interested in and identify the gaps that need to be filled for you to have greater success and a more fulfilling experience in that job. And this is not a one-time thing but part of your continuous development. You will be offered customised learning plans that aid your choices as you progress, identifying your strengths and gaps through relevant and meaningful learning experiences. AI can also make learning more meaningful by spotting connections and patterns in the content you consume. It can tell you how your interests are evolving, how your needs are changing and how your work will be affected, and give insights into your chances of success in the future.
AI has also been predicted to replace teachers. Is that an entirely negative or positive development? I do not think the prospect needs to be feared; in any case, I see AI supplementing teachers, not replacing them. This will help teachers focus on their real purpose, which is to nurture and facilitate learning. AI is going to make them better at their jobs. They can focus more on personalising learning, experimenting with new curricula, developing better assessment techniques, creating and managing more online content, mentoring and guiding self-learning, and being more responsive all at once. If AI can take away the mundane parts of teaching and make it more relevant and meaningful, that is a good thing. Teaching is an art, and one that is not for everyone. I say that from experience: teaching is very challenging.
What needs to be considered is the impact AI-assisted teaching can have on learners. Are our learners oriented to make the most of such a shift? Teachers could start laying the groundwork now by learning the basics of AI for themselves: what it is good at and what it is not. Teachers need to be open to, and enthusiastic about, these changes and new learning paradigms. More teacher training courses and workshops on AI are needed. Such courses can build a better understanding of the AI landscape and of how to apply AI in learning in ways that benefit learners, and they can help teachers handle AI-assisted learning with confidence. In considering the impact on learners, teachers should ask how AI can enable them to add more value to the learning process. How can I compose and create learning that is more challenging and yet more powerful, meaningful and useful? How can I cross-pollinate, create, remix, remake and innovate teaching and learning experiences? How can I use AI to create a more enriching learning experience? Teachers will play a very important role, first by upskilling themselves on the nuances of AI. This will put them in a strong position to mentor our young learners, who will live through the full impact of the current revolution.
It is important to note that while it is not all doom and gloom, it isn’t all rosy either. As with all things in life, the use of AI in education (or any other human-centred endeavour) should be approached with the utmost caution. AI programs are often developed and trained on datasets that reflect a particular viewpoint, typically a Western one. An AI learns from the information it is fed, so if the datasets carry biases, the AI will absorb them and its outputs will be biased in turn. In decision-making, for example, a machine has to rely on data and algorithms shaped by the humans who design them, and it may end up biased against certain races, genders or cultural backgrounds. Such biases must be taken into account in AI-assisted learning. They can be countered by ensuring strict quality control of input data and through a thorough, open debate on how we want AI to be employed in our lives, including in AI-assisted education.
This is where we stand today.
In summary, AI is here to stay. It has great potential to help people, especially in the workplace, in education and in medical care. There are fears that AI might outsmart us, leading to a dystopian future. The good news is that many organisations are working to spread awareness about ethical AI and to encourage technology that improves everyone's lives. There is no doubt that organisations will change because of AI; in fact, it is already happening. But in the end, AI is only a tool. Like any technology, it is up to us to use it wisely. I can't stress enough the paramount need to have Persons of Substance leading us through these decisive times that will shape humankind.
The more we think about these things, the more we realise that technology is not about the object itself: a hammer, a saw, or a mobile phone. The object is not the most important component. Rather, it is how we use it, what we use it for, and how it fits into our everyday lives. The reason AI looks so threatening, whilst also offering unlimited possibilities, is that for the first time we are about to create something that is smarter than we are in certain disciplines, and we don't exactly know how it makes judgements or learns at such an exponential pace. Its power will lie in its ability to make connections across vast amounts of data too complex for us to analyse, and to do so without tiring, learning from itself to get better. How we deal with AI will determine the future of humanity. It is for this reason that the attention of the world economy and of governments is directed towards this issue. The potential AI offers for health care and for the climate crisis facing us is significant. On the other hand, the depths of human arrogance and ignorance should not be underestimated. These, coupled with the fact that we lack the wisdom and foresight to see how our actions today will affect us a year, let alone decades, from now, put us on a very precarious ledge. This is not a problem that will solve itself. I reiterate the need for Persons of Substance at the helm to explore and lead humanity through this vital phase. AI will never replace the human mind if we do not allow it to. But that doesn’t mean it can’t complement and augment humans. The future of AI is shrouded in potential but fraught with ethical concerns. As we strive to create intelligent machines, let's not forget to nurture our own humanity.
About the author:
Arun Kapur is an educator with over four decades of experience in the private and public education spheres. Arun currently leads initiatives at the Royal Academy, Pangbisa, Bhutan as its Director.
To read the introduction to the series, click here!