Collaboration Forum Series 5: Building a Brighter Future (Week 9)

The Future of Technology on Business, Cities and Society

How should we embrace the technological opportunities that surround us today? How should we balance unlocking the power of AI with our responsibilities to our colleagues, our families and our society? How do we empower ourselves, and what will the future of work and of our city look like? Dr Ayesha Khanna, one of the world's leading technology entrepreneurs, joined our collaboration forum to debate how we should shape our future.

Topics: Board Matters; Cybersecurity and Privacy; IT Management, Applications and Transformation; Business Performance; Digital Transformation

Meet Our Inspiring Speaker

Dr Ayesha Khanna, CEO, ADDO AI: Artificial Intelligence Solutions

Ayesha is a member of the World Economic Forum's Global Future Councils, a community of international experts who provide thought leadership on the impact and governance of emerging technologies such as artificial intelligence. ADDO AI was featured in Forbes magazine in 2017, and in 2018 Forbes named Ayesha one of South East Asia's groundbreaking female entrepreneurs. She is also the founder of 21C GIRLS, a charity that delivers free coding and artificial intelligence classes to girls in Singapore.

Don't be scared of 'hybrid reality'

Artificial intelligence provokes strong reactions: those in favour embrace it in their home and working lives, while those against worry about disappearing jobs and a dystopian robot future. Ayesha Khanna, chief executive of ADDO AI in Singapore, believes it can "amplify our own potential".

Why do you think 'hybrid reality' is important for us to understand?

The whole concept of hybrid reality is based on the evolution of human beings. We had the agricultural revolution, the industrial revolution, and now the information revolution. We are using our digital skills and working with each other globally. But there's a new team member that's going to start living among us: the artificial intelligence (AI) that resides in machines. It will be in our workplaces and homes; it will be in our cities. We need a relationship with AI that is productive and gives us agency, so that we are not passive consumers but are using it to benefit ourselves. We should stand on the shoulders of machines to amplify our own potential.

Has the pace of AI development increased during the pandemic?

We've seen it everywhere; a lot of start-ups are turning towards AI now. There is a company called Carro in Singapore, for example, which sells second-hand cars. It's very popular and has just raised $360 million. The whole buying experience is completely contactless. The robot will show you the car in real time, check for defects and tell you where it's parked, and you can open the car with a QR code. You can sit in it, drive it, and put it back. And there's a chatbot throughout the journey answering your questions.

We've also worked with a large telecommunications company. It was getting complaints from business customers because it couldn't resolve issues quickly enough. We suggested that AI could classify complaints into different categories (a simplified sketch of this kind of triage follows below). We looked at the history of how these problems were resolved and which engineers were involved, so that engineers could be reassigned. Once on site, AI could help them based on how something was resolved in the past. The company reduced its labour cost by 30 per cent and its 'time to resolution' by 40 per cent. Human error, which used to occur, has been virtually eliminated.
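To make the complaint-triage idea concrete, here is a minimal sketch of routing tickets to the teams that historically resolved similar issues. It is an illustration only, not ADDO AI's actual system; the file name, column names and example complaint are assumptions.

    # Minimal sketch of complaint triage from a historical ticket log.
    # File name, column names and example text are illustrative, not from the source.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Hypothetical history: free-text complaint plus the team that resolved it.
    tickets = pd.read_csv("ticket_history.csv")  # columns: "complaint", "resolver_team"

    X_train, X_test, y_train, y_test = train_test_split(
        tickets["complaint"], tickets["resolver_team"],
        test_size=0.2, random_state=0,
    )

    # TF-IDF features feeding a linear classifier: a cheap, common baseline.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

    # Route a new complaint to the team that resolved similar issues in the past.
    print(model.predict(["Leased line down since 6am, call centre unreachable"]))

A baseline like this only automates the categorisation step; the reported 30 and 40 per cent improvements came from combining classification with reassignment of engineers and on-site guidance from past resolutions.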
But while corporations are galloping ahead, the government is watching and is setting up guidelines for AI. What I like about Singapore is that technology is viewed in a balanced way. There are many benefits, but we must make sure we're controlling the downsides. That's what a smart city is supposed to do: use technology to enhance people's lives and protect their rights.

How should people, and businesses, develop a productive relationship with AI?

They can look at it in two ways: as consumers and as workers. As consumers, we are continually receiving nudges, downloading apps and talking to Alexa, for example. These machines are gathering a lot of information about us. It's all interesting, creepy and useful at the same time. First of all, we should know that this is happening, and how our data is being used. We should also teach our children what to think about when interacting with apps and other machines.

At work, using AI to eliminate tasks seems to be causing stress, but it really shouldn't. The moment we change our perspective and think of AI as an assistant that frees up our time, we can solve more interesting problems. We advise companies across the world and work with people who are comfortable using data, analytics and AI. I would say their jobs are more secure than ever. Just think of AI as a little assistant: it will never replace you, but it will make your life easier over time.

What about the risk of AI learning 'bad behaviour'?

This has actually happened. Microsoft had a chatbot called Tay which, after a while, was learning racist language from social media, and it was discontinued. So it is the responsibility of companies to check the data that AI is being trained on, to make sure it's not biased. There are fairness metrics and bias statistics that companies can use to help (see the sketch below). But there might be instances where something is unacceptable even when it's not in the data, so that's about humans deciding what's right.

In Singapore, starting next year, we have a certification in AI ethics for product managers. The product manager is the person who heads up design. They can ask the right questions to make sure that AI represents the values of a company. I hope, of course, that those values align with good morals and society.
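As one concrete example of the fairness metrics mentioned above, the sketch below computes two widely used statistics for a binary classifier: the demographic parity difference (the gap in positive-decision rates between groups) and the equal opportunity difference (the gap in true positive rates). The predictions, labels and group memberships are invented for illustration; open-source libraries such as Fairlearn package metrics like these.

    # Sketch of two common bias statistics for a binary classifier's outputs.
    # All data below is invented purely to show the arithmetic.
    import numpy as np

    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # ground-truth outcomes
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])   # model decisions
    group  = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "B", "A"])

    def selection_rate(pred, mask):
        """Fraction of a group that receives the positive decision."""
        return pred[mask].mean()

    def true_positive_rate(true, pred, mask):
        """Within a group, the share of actual positives the model catches."""
        positives = mask & (true == 1)
        return pred[positives].mean()

    a, b = group == "A", group == "B"

    # Demographic parity difference: gap in positive-decision rates.
    dpd = abs(selection_rate(y_pred, a) - selection_rate(y_pred, b))

    # Equal opportunity difference: gap in true positive rates.
    eod = abs(true_positive_rate(y_true, y_pred, a)
              - true_positive_rate(y_true, y_pred, b))

    print(f"demographic parity difference: {dpd:.2f}")
    print(f"equal opportunity difference:  {eod:.2f}")

Metrics like these flag disparities in the training data or the model's behaviour, but, as noted above, some outcomes are unacceptable even when the statistics look clean; that judgement remains with humans.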
What are the 'red lines' that AI shouldn't cross?

Anything that seriously affects a person's life, such as a health decision. I feel the same about relationships with children or with people who are vulnerable, and about psychiatric advice. I'm very cautious about these things because they touch humans very deeply. Governments are beginning to think about how they should regulate AI, because of its huge influence on our decisions, so we do need some boundaries. But those boundaries have to be based on the risk they pose to our agency, our ability to think for ourselves.

How should companies start their AI journey?

Think about the problems to be solved and the vision of the business. Invite a group of people, including AI engineers or advisers, to help identify these challenges. Companies can also look at what their competitors are doing. Once these things have been explored, it will be possible to work out how AI can help. The third step is to ask whether the company has the data to achieve its aims, because 99 per cent do not have perfect data. In the beginning, that means it's easier to choose something doable. Try to complete it in three months, be agile, and use the cloud. At the same time, think of three other problems that can be solved using the same data. After three months, start testing it, and within four months it will have worked, or not. If it has, it will be easier to get the budget for the next phase.

If you don't have the talent, you can get it from three places: consulting firms, freelancers, or software-as-a-service platforms. You don't have to build your own custom AI; have a look at what's available on Google or AWS, for example.

How do you see AI developing in the next five years?

AI will not become conscious in that time; it will not be ruling over us. At the moment, it has child-like intelligence: it can't connect all the dots, and it does very narrow things, especially if they are routine or repetitive. It can write poetry and even articles, but it doesn't understand what it's doing. AI researchers are trying to integrate AI with neuroscience. If this comes to fruition, we will see AI become more human-like. It may roam among us, and we may take it more seriously. But I still don't recommend that we treat it as an equal; we should always be in a position of agency, and of power.

Sherry Turkle from MIT says that as human beings we're hardwired: when we see a robot puppy, for example, we start cuddling it. There is a Japanese robot called Lovot, which is just adorable. We will have robots fulfilling all kinds of functions around us, and they won't take over. But the companies and individuals that own them will have a lot of information. We must prevent monopolies, formed through mergers and acquisitions, because that's when it gets dangerous. That's when we'd have someone 'ruling over us'.

Leadership

Peter Richardson
Peter leads Protiviti's focus on The Future of Work globally. In helping clients face the future with confidence in an ever more dynamic world, he emphasises rebuilding the operating model and future-of-work engine by empowering teams, equipping them to contribute fully ...

Paul Middleton
Paul joined Protiviti in August 2018 and leads our capital markets business in London. Focused on 1st Line trading and risk management initiatives, Paul works closely with our global Solutions to shape advisory, transformation and remediation initiatives across ...