Interview: AI expert Kriti Sharma

Kriti Sharma is the amazingly talented Artificial Intelligence expert who was recently named to the Forbes 30 Under 30 list for advancements in AI. While there's a lot of talk about AI, it's rare to get insights into the industry, and our interview with Sharma is particularly revealing. Sit down and strap yourselves in for a read; you'll be glad you did.

Sharma currently lends her talent to Sage as the VP of Artificial Intelligence.


Recently you posted that 'Our job is not to make AI human-like but to make #AI that improves human lives'. What are some real-world examples of where AI is improving human lives?

Artificial Intelligence (AI) has been part of our lives for several years and has resulted in many changes, from the introduction of smart cars to how we play video games, the rise of the smart home, and the gadgets and gizmos powered by AI.

Predictive purchasing from organisations with an online presence, such as Amazon or a large supermarket chain, is another example of AI improving our lives. Here AI helps retain customers because of its understanding of buying behaviours and patterns, so it not only predicts but also suggests future purchases.
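The core idea behind this kind of predictive purchasing can be illustrated with a minimal sketch: count which items are bought together across past baskets, then suggest the most frequent co-purchases. The product names and purchase history below are entirely hypothetical, and real systems use far richer models; this only shows the principle.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase history: each inner list is one customer's basket.
baskets = [
    ["bread", "milk", "eggs"],
    ["bread", "milk"],
    ["milk", "eggs"],
    ["bread", "eggs"],
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(set(basket)), 2):
        pair_counts[(a, b)] += 1

def suggest(item, top_n=2):
    """Suggest the items most often bought alongside `item`."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(suggest("bread"))  # items most often bought with bread
```

Production recommenders layer machine learning on top of this, but the signal being learned is the same: patterns in past buying behaviour.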

Virtual assistants such as Siri and Alexa help us find information, from the location of a restaurant to what the weather will be like the next day. At Sage, we also have our own virtual assistant called Pegg, which focuses on taking over repetitive manual administrative work from those in the financial industry.

According to Sage’s Productivity Tracker, the cost of lost productivity so far in Australia in 2018 is more than $17 billion. This equates to more than $32 billion per year, $2.6 billion per month, $87 million per day, or $1,007 per second.

But the best example is the work I am doing with Sage Foundation to improve human lives with the power of AI.

One of the projects we are working on is around tackling domestic violence and abuse, which is a huge issue all around the world, Australia included. Sexual harassment or abuse are taboo subjects. It is not easy to speak up about them due to the stigma attached. Research shows that humans find it easier to speak to intelligent agents about these sensitive issues because there is the notion that machines won’t judge them.


The capabilities of AI are rapidly evolving. What does the cost curve look like, and how far away are we from AI being affordable for everyone to leverage?

AI is becoming an affordable commodity and is open to both large companies and small to medium-sized enterprises (SMEs). In the past, SMEs were not fully supported by the technology industry because IT infrastructures were too expensive to run. This meant that large enterprises had access to the best technology while SMEs were often left at a disadvantage. Thanks to cloud technology, that is a thing of the past, and we are now building and providing AI technology to all businesses at an affordable rate, making it more widespread.

As AI continues to grow in terms of acceptance and usage, costs will inevitably reduce. AI is not only becoming more accessible, but it is a key driver within the technology industry.

The other question we should be asking is whether we are living by an AI Code of Ethics to ensure the technology is used for non-malicious purposes. Because talent and skills are the biggest barrier at the moment (data scientists are too expensive!), SMEs should be looking at reskilling existing workforces to apply AI.  Australian 5G networks will be switched on from next year, which will make a great difference to the availability and accessibility of AI locally.


AI is massive in 2018, but in an effort to optimise and improve lives, do you think the products and services we use will simply be powered by AI as expected functionality, rather than as a differentiating feature?

Yes, definitely. We are already using AI in a very advanced way every day although we may not realize it. The conversation around AI has only started happening on a large scale in the last few years.

But in saying this, we all use AI multiple times a day. Anytime you search on Google and it recommends content, that’s AI. If you use Siri or Alexa, that’s AI. If you use Google Maps, that’s AI. Even YouTube or Netflix, when they recommend a new video, that is all powered by AI.

We are very advanced in the production of artificial intelligence. People get used to leveraging such capabilities after which it becomes an expected functionality, rather than a differentiation. Creativity needs to be leveraged to ensure new capabilities are constantly being developed.


Are the tools to work with AI evolving to be more approachable by everyday people?

Yes, they are. AI is not as intimidating and scary as some people may think. For me, growing up in India, I didn’t have access to a computer, so I did some research and built one myself. When I realized I could do that, it gave me the confidence to build more advanced technology. When I was 15, I built my first robot, which was programmed to fetch chocolate bars from the snack-bar. It was a simple machine that got smarter every day.  I had a lot of fun building it and that was my first introduction to bots and AI.

Today, in the UK, Sage Foundation, together with a community interest organisation called Tech for Life, runs a program called FutureMakers, which is all about bringing AI to teenagers. These kids are often reluctant to engage with AI because they think coding will be hard and that a career in IT is not as glamorous as they’d hope their futures would be. Once they learn the basics, though, they find it is not only easy to pick up but a skill they can use for good. When you think about the future of AI, the next generation of digital natives will master it far more easily than our generation does.


Some businesses are running to AI to help them solve business problems. Are companies that don't invest heavily in AI on a trajectory to fail?

They won’t fail, but they will fall behind. This is not the case for all businesses, but certainly the case for businesses that have an online service aspect to their offering. This could be the case for an e-commerce store, an online banking website or even a dating app. Customers expect that the services they engage with are intuitive and easy to use, which is made possible by AI technology. Often, consumers do not realise that AI is working in the background of their apps, websites or digital services, but they still gravitate towards the services that are easier to use and respond to their needs.


Often people think AI and robots will replace their jobs. Don't you think humans + AI is the best and most likely outcome, increasing what a human can achieve on any given day?

Because of many Hollywood portrayals, AI is something that people who don’t understand it are afraid of. The idea that computers could think and learn as we do can present a terrifying notion of our jobs being “taken over” by technology that we could no longer control or understand.

The reality of AI is not quite as bleak as all that. What we are seeing now is that applications of AI are being used more broadly by traditional industries - things like automated cars and supply chain automation are making these industries more productive.

An example of something we do at Sage is creating AI solutions for smaller businesses. These businesses may not have the resources to hire a CFO and so AI helps them to manage their finances and free their owners to concentrate on other strategic activities. As with my education and healthcare examples, it’s about being additive rather than replacing humans. For accountants, AI means that they spend less time doing repetitive admin, and more time talking to clients and solving business problems - being the strategic advisor. If anything, it’s helping them to become more ‘human’ with the work they do.


Do the ethics of AI keep you up at night? If not, what does?

AI ethics is my passion. There is a lot of work still to be done in this space, such as improving the capability of machines to be unbiased. We have seen many early examples of sexism, racism and other forms of bias in AI-powered algorithms. But the good news is that we create AI, and it is entirely within our control to teach it the right values and the right ethics. We just need to act sooner rather than later. This action is what I fight for every day.

At Sage we’ve developed the “Ethics of Code”, which helps us ensure that these kinds of prejudices don’t appear in our own technology and which we hope will help other companies consider these issues more deeply.


If a business wants to run AI over sensitive corporate data, is that possible without sending it to the cloud?

We live in an era where personal and business data is sensitive - how we use AI, who creates it and what we do with the data is of the utmost importance; just look at some of this year’s biggest news stories. Companies need to ensure they have a proper ethical code set up around sensitive data.

While cloud infrastructure is robust and secure, there are sometimes concerns about the suitability of cloud for specific applications and industries, and a need for greater control. Recent advancements in chip design and GPUs allow for AI training and deployment at scale on premises, without the need to send data to the cloud. There are trade-offs to each approach.


AI often finds the answer to a question; how do we ensure we're asking the right question? If a business searches for efficiencies, how do they know they're finding the biggest saving in the right area?



Is there a problem with trying to apply AI to everything?

As previously stated, there is still room for improvement where AI is concerned, and one size does not fit all. Artificial intelligence is not totally on par with natural intelligence, although it’s getting there. Facial recognition systems are a good example of where we are not where we need to be. Many have not been designed to recognise the full range of face types and skin colours, and this does not apply just to humans but to animals too.

Then of course there are the companies who are keen to digitally transform and embrace AI technology but do not necessarily apply a workable code of ethics, and that is simply asking for trouble.

That said, we are on course for a future where AI will be applied to (almost) everything. Take Microsoft’s development of a Skype system that can automatically translate from one language to another, or Facebook’s system that can describe images to those who are visually impaired. These are just two examples of the marvel that is artificial intelligence, and I am very excited about what is to come.


Governments spend taxpayers' money; do they have an obligation to leverage AI to streamline operations and financial decisions?

Digital transformation and implementing AI processes will go a long way in making governments more transparent, efficient and citizen-friendly.

The Australian government is doing a lot to invest in AI, and this year’s budget announced $30m for developing Australia’s capabilities in artificial intelligence and machine learning. This is a step in the right direction for the government and shows that they have a vested interest in the technology and its benefits.

Australia’s Chief Scientist, Dr Alan Finkel, gave the keynote address at a Committee for Economic Development of Australia event titled ‘Artificial Intelligence: potential, impact and regulation’ in Sydney on 18 May 2018. He raised key issues about ethics, confidence in technology, the possibilities of AI and, with the most gravitas, the discussions that need to be had.


Is past data used to power AI always a great starting point for predicting the future?

Not always. AI learns from the data used to train it, and that data is a reflection of our society and communities. If the underlying data is biased, then the AI can easily learn to be biased too. We are already seeing examples of sexism in AI for HR and recruitment, where models are built on previous patterns and data – data that reflects, for example, that we don't have enough women in leadership and technology roles.
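How a model inherits bias from its training data can be shown with a toy sketch. The records and numbers below are entirely hypothetical, and a naive per-group rate stands in for a real model; the point is only that a model trained on skewed historical decisions will faithfully reproduce the skew.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, hired).
# The data is skewed because past decisions favoured one group.
history = [("m", 1)] * 80 + [("m", 0)] * 20 + [("f", 1)] * 20 + [("f", 0)] * 80

# "Train" a naive model: the per-group hire rate in the historical data.
totals, hires = defaultdict(int), defaultdict(int)
for group, hired in history:
    totals[group] += 1
    hires[group] += hired

def predict_hire(group):
    """Predict a hire if the historical rate for that group exceeds 50%."""
    return hires[group] / totals[group] > 0.5

# Two equally qualified candidates get different predictions,
# purely because the model copied the bias in its training data.
print(predict_hire("m"), predict_hire("f"))  # True False
```

Real recruitment models are far more complex, but the failure mode is the same: the past decisions in the data become the model's definition of a good candidate.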

AI technology is here to help us progress and move forward, so it is important for us to make sure we do not copy the issues we have in the present and bring them into our future.

Companies need to start making sure that there are enough people in the room making decisions about AI who all offer new perspectives. We shouldn’t only be looking to AI professionals and data to create AI; we should be asking questions of various cross-sections of the population, such as women, under-represented minorities, psychologists, sociologists, teachers and artists. We should ensure that we create diverse data sets for the AI to learn from, and that the design of AI is not biased. I get frustrated with stereotypes in AI, such as female voice assistants like Alexa for everyday tasks such as turning lights on and off or ordering your shopping, and male AI for important business decisions, like Einstein, Watson and ROSS (the lawyer).


Should we encourage students to pursue a career in AI?

Absolutely. I love how kids think about AI. As mentioned earlier, in the UK we run a program called FutureMakers, which is all about bringing AI to teenagers. These are young people who come from very different backgrounds and may not have considered a career in technology or artificial intelligence. As part of the program, one young person built an AI translator for refugees living in a country where the language is not their first; the AI interprets road signs and translates them. Another student used AI to help the elderly who live alone.

It is interesting to see these teenagers use AI in a positive way, often coming up with solutions rather than problems - a view I feel we should all try to embrace.


Fast forward 10 years, what does the world look like as AI is implemented in more aspects of our personal and professional lives?

I feel very positive about the future of AI. I think governments and policy makers are starting to take steps and industry groups are ensuring we build AI in a responsible way. I also think that the future of AI is not only about humanoid robots running around our house making coffee for us. It will be about solving difficult problems using AI. Problems that we have not been able to solve so far on our own.

In a business context, people sometimes get fixated on AI replacing jobs. In reality, AI is about helping companies grow by improving productivity and making human work more fulfilling. We will have people with diverse skills and backgrounds creating ethical AI for social good.

As a bonus for reading the whole interview, here's Sharma's TED Talk on robots from April this year.

Jason Cartwright
Creator of techAU, Jason has spent a dozen-plus years covering technology in Australia and around the world. Bringing a background in multimedia and a passion for technology to the job, Cartwright delivers detailed product reviews, event coverage and industry news on a daily basis. Disclaimer: Tesla shareholder from 20/01/2021

