Artificial Intelligence (AI) is still decades away from matching human intelligence, despite major advances, according to scientists at CA Technologies who are involved in cutting-edge research.


"On true general intelligence there has been a lot of work, but the whole scientific community agrees that it will take a very long time before we reach it. It will take decades," said Victor Muntes-Mulero, Vice President of Strategic Research at CA Technologies, on the sidelines of the company's "Built to Change Summit" here.


Muntes-Mulero, who is part of the Advance Deep Dive Research initiative at CA, said that there was "still a long path to get there", and that in between lots and lots of work has to be done on various aspects of intelligence. Maria Valez-Rojas, Research Scientist with the Deep Dive Research initiative, said that the safety issues coming up in robotics, and the need to solve all of them, mean that it is not going to happen in the near future.


Many leading commentators have suggested from time to time that human intelligence in machines is less than a decade away. Scientists now realise that making machines is easy, but embedding the whole gamut of ethics, human rights and social constructs in them would be difficult.


In this context, Muntes-Mulero said that they were researching how to make AI behave ethically, adding that some intelligent software had recently turned out to behave wrongly. For example, Google Translate often became sexist when translating sentences, say from Turkish into English, producing "He's an engineer" or "She's a nurse" when the original did not imply a gender. Similarly, AI delivery systems often discriminated against the residences of people of colour.


"You would like AI to do what you intend it to do, but sometimes it can behave unexpectedly," he said, adding that Alexa or Siri can pick up non-human instructions from television audio and deliver unintended results. "Your Apple Watch can be hacked to get your passwords by motion detection as you type them into the laptop," he said.


Human speech, a sense of fairness and human rights issues were important, but it would take a while before machines could be taught to discern them. In this context, he quoted Facebook CEO Mark Zuckerberg, who said that it was easier for AI to discern the image of a nipple than to detect hate speech.


Muntes-Mulero said that things got more complicated when you realised that there was no single definition of fairness or social rules and that they sometimes differed from country to country. Defining all this mathematically was a challenging task in itself, he added. But it was their goal, in research, to create a better world by ensuring fairness and appreciation of human rights in AI.


Valez-Rojas, who is working on "cobotics", or human collaboration with robotics, said that a new era of interaction with robots is coming, in which they will collaborate "not to replace us, but to help with our jobs". AI, she said, will eventually help robots understand the human environment, though the interaction can lead to problems because "humans are so unpredictable".


It would also take a lot of effort to teach robots how to differentiate between concepts. "When you tell a robot to 'clean up the mess', it might go beyond the coffee spilled on the floor and wipe out the massive calculations that you have done the whole week on the blackboard, because that may mean 'mess' for it."


She said that humans have moved on from robots which did repetitive work, to controlled robots like drones, to somewhat intelligent ones that can walk across a room and avoid obstacles on their own. "Finally, we have to make robots that can behave autonomously and help humans in their jobs," she added.


Steven Greenspan, Research Scientist, explained that much research was being done at CA on security and privacy, which are among the top concerns in the world now. He said that the General Data Protection Regulation (GDPR), recently passed in Europe, had far-reaching consequences not only for companies there, but for almost any company in the world that did business with Europe.


He said CA was working on products with GDPR requirements embedded into them, so that companies do not have to be experts in understanding what the regulation demands. The thrust of the new rules is that the individual is the owner of the data, can decide who it should be shared with, and can take it back whenever he or she so desires.
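The data-ownership model Greenspan describes — the individual grants access and can revoke it at any time — can be pictured as a small consent registry. The sketch below is purely illustrative (the class name, subjects and recipients are invented for this example, not CA's product or the GDPR's wording):

```python
class ConsentRegistry:
    """Hypothetical sketch of GDPR-style consent: the data subject grants
    sharing permission per recipient and may revoke it at any time."""

    def __init__(self):
        # (subject, recipient) pairs that currently have consent
        self._grants = set()

    def grant(self, subject, recipient):
        # The individual explicitly allows sharing with this recipient
        self._grants.add((subject, recipient))

    def revoke(self, subject, recipient):
        # The individual takes the data back; safe even if never granted
        self._grants.discard((subject, recipient))

    def may_share(self, subject, recipient):
        # Sharing is allowed only while consent is on record
        return (subject, recipient) in self._grants


reg = ConsentRegistry()
reg.grant("alice", "acme-analytics")
print(reg.may_share("alice", "acme-analytics"))  # True
reg.revoke("alice", "acme-analytics")
print(reg.may_share("alice", "acme-analytics"))  # False
```

The design choice worth noting is that revocation is a first-class operation, not an afterthought: absence of a grant is the default, so data flows stop the moment consent is withdrawn.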


He said access control has its problems, since passwords can be cracked, hacked or stolen. In this context, he said they were doing research on "behavioural markers" which would better identify the person using any software. "We can figure out how you type or what your walking pattern is with your mobile, even if you are erratic in your behaviour," he said.


"Even if you go out for lunch or sometimes skip lunch to work through, it's all unique. If you step out after logging in and someone else comes and uses the system, we can know. Even if you allow someone else to use your system, we can figure it out," he said, adding that using these behavioural markers they had achieved correct identification rates of over 95 per cent. The research is also looking into whether they can identify when a person's intent changes from benign to malicious.
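The idea behind identifying a user from how they type can be illustrated with a minimal sketch. Everything below is an assumption for illustration — the simulated users, the two features (mean and variability of inter-keystroke gaps) and the nearest-centroid matching are not CA's method, which the article does not describe:

```python
import random
import statistics

random.seed(42)  # deterministic simulation

def session(mean_gap, jitter, n=100):
    # Simulated inter-keystroke intervals (milliseconds) for one typing session
    return [random.gauss(mean_gap, jitter) for _ in range(n)]

def features(gaps):
    # Two simple behavioural features: average gap and its variability
    return (statistics.mean(gaps), statistics.stdev(gaps))

def enroll(sessions):
    # A user's profile: the average feature vector over enrollment sessions
    feats = [features(s) for s in sessions]
    return tuple(statistics.mean(f[i] for f in feats) for i in range(2))

def identify(gaps, profiles):
    # Nearest-centroid match: the profile closest in feature space
    f = features(gaps)
    return min(
        profiles,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(f, profiles[name])),
    )

# Hypothetical users: "alice" types fast and steadily, "bob" slower and erratic
profiles = {
    "alice": enroll([session(120, 15) for _ in range(5)]),
    "bob": enroll([session(220, 40) for _ in range(5)]),
}

print(identify(session(120, 15), profiles))  # matches the fast, steady profile
print(identify(session(220, 40), profiles))  # matches the slow, erratic profile
```

Real behavioural-biometric systems use far richer features (digraph latencies, gait accelerometry) and probabilistic models, but the principle is the same: a person's timing statistics form a signature that survives day-to-day variation.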