Artificial intelligence (AI) has moved out of the research labs, and its applications are now being implemented in both the B2B and B2C domains.
While businesses are excited about using AI, specific challenges need to be overcome before the full potential of the technology can be realized. At the Women in Data Science Conference held at IIM-Bengaluru, technology heads gathered to discuss how organizations can create responsible AI for the world.
The discussion centered on the challenges organizations face in leveraging the power of artificial intelligence fairly and without infringing on individuals' privacy.
The challenge of biased AI algorithms
Biased AI systems are likely to become a larger issue as AI moves out of research labs and into everyday use. Without proper assessment of training data and checks for potential bias, there is a real risk that specific groups of people could be harmed or have their rights infringed by biased AI.
Artificial intelligence systems are only as good as the data used to train them, and that data can carry historical racial, gender, or ideological biases. The key, according to technology leaders, is to train AI systems on the right data, drawn from a diverse set of social categories.
"Everyone talks about AI taking over jobs, but the bigger challenge is creating fairness in AI, as the technology can impact certain individuals in multiple ways," said Vidhya Chandrasekaran, Engineering Manager, Global Data Science, PayPal. "An AI model that is not running on a diverse set of data can create biased decisions. So, we need representation of all categories when training AI models, particularly when it pertains to human-centric decisions."
She further explained that while one can easily eliminate bias from traditional statistical models, the advent of neural networks has made it more complex. "AI bias is not a technical but a socio-technical problem, and we need a collaboration between social and data scientists to find a solution," she added.
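The representation check the panelists call for can be automated before training begins. The sketch below, a minimal illustration rather than a production audit (the records, the `gender` attribute, and the 10% threshold are all assumptions made for the example), reports the share of each category of a sensitive attribute and flags those that fall below a chosen minimum:

```python
from collections import Counter

def representation_report(records, attribute, min_share=0.10):
    """Compute the share of each category of a sensitive attribute and
    flag categories below min_share (threshold is illustrative)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        category: (n / total, n / total < min_share)
        for category, n in counts.items()
    }

# Hypothetical training records for a human-centric decision model
training = (
    [{"gender": "female"}] * 20
    + [{"gender": "male"}] * 75
    + [{"gender": "nonbinary"}] * 5
)

for cat, (share, flagged) in representation_report(training, "gender").items():
    print(f"{cat}: {share:.0%} {'UNDER-REPRESENTED' if flagged else 'ok'}")
```

A check like this only catches representation gaps in the labelled attributes you think to examine; as Chandrasekaran notes, bias in neural networks is harder to isolate, which is why the socio-technical collaboration she describes matters.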
Management of data for running AI models
According to technology heads, one of the biggest challenges in creating successful AI models is having a data strategy in place, so that intelligence can be created from the vast amounts of data organizations collect. Experts believe that data sets suitable for AI applications to learn from are rare, because in most cases data is neither securely managed nor correctly labelled.
"AI is certainly a big opportunity for the Indian tech industry. But, without a data strategy, AI does not solve any problem," highlighted Deepa Madhavan, Director, Global Data Governance and Reg Tech at PayPal. "Businesses need to make sure that they have a data strategy in place based on their needs and objectives; only then can they start talking about artificial intelligence."
According to Sowjanya Chalamkuri, Senior Director at GE Digital, data classification is another major challenge that businesses need to overcome to create efficient AI. "The most challenging aspect of the artificial intelligence space is the variability of data and the classification of all that data in order to apply training models. With all the benefits that can be derived from AI, it must take objective decisions, not ones based on any bias," said Chalamkuri.
Data privacy and security challenges in AI
AI models rely on high volumes of data to learn and make smart decisions, and the data fed into these systems can be personal and sensitive. Mishandled personal data can cause harm, ranging from identity misuse to misplaced biases in certain scenarios. Regulations such as the General Data Protection Regulation (GDPR) demand that tech enterprises be extremely careful in handling personal data. Data scientists working on AI systems therefore need to find a balance between innovation and respect for individual privacy.
While privacy is a major concern, a few tech leaders also believe there needs to be a fine balance between individual privacy and innovation. Processes like data anonymization may hinder innovation by limiting enterprises' ability to create the best AI models. "You cannot create the best model if you don't have the right data. Your models will be based on certain parameters based on behavior of users, and so there will not be much relevance if there is complete data anonymization," stated Shweta Shandilya, Program Director, IBM India, during the discussion.
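The trade-off Shandilya describes can be seen in a common middle-ground technique: pseudonymization, which strips direct identifiers while preserving the behavioral features a model trains on. The snippet below is a minimal sketch, not a GDPR-compliant recipe (the field names and salt handling are illustrative assumptions):

```python
import hashlib

# Illustrative only: in practice the salt is a managed secret
SALT = b"replace-with-a-secret-salt"

def pseudonymize(record):
    """Replace the direct identifier with a salted hash so records can
    still be linked for modeling without exposing the raw identity."""
    cleaned = dict(record)
    user_id = cleaned.pop("user_id").encode()
    cleaned["user_key"] = hashlib.sha256(SALT + user_id).hexdigest()
    return cleaned

raw = {"user_id": "alice@example.com", "sessions": 14, "avg_spend": 32.5}
safe = pseudonymize(raw)
# Behavioral parameters survive for training; the raw identifier does not
```

Note that under GDPR pseudonymized data is still treated as personal data; the point of the sketch is only that useful behavioral signal can be retained while direct identifiers are removed, which is the balance the panelists argue for.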