Fear of AI could kill the security benefit of this ‘sixth sense’

If there’s one thing Chris Stancombe, head of Industrialisation and Automation at Capgemini, wants to make clear, it’s that Artificial Intelligence is our friend, not foe.

“It’s not there to take over from or equal humans; it’s there to help improve our safety and ultimately our lives,” he tells me over the phone from London. “That’s why it’s so important to recognise the benefits it’s already bringing and enable it to continue helping us in the future.”

Stancombe’s main concern is that the public’s fear of AI could slow down the advancement of the technology, even though a lot of its use cases are already prevalent in our everyday lives.


“You see it already with credit cards,” he explains. “If there’s an unusual spending pattern on your card, they put a block on it and then someone will call you to ask if that's correct or not.” The credit card company will have used AI to spot the potential anomaly, employing an artificial ‘sixth sense’ that Stancombe believes will soon be used on a larger scale to mitigate some of the bigger threats we face as a society.
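The kind of anomaly check Stancombe describes can be sketched in a few lines of Python. This is a toy illustration only, flagging a transaction that sits far outside a customer's historical spend via a simple z-score test; real fraud systems use far richer features and models, and the threshold here is hypothetical.

```python
# Illustrative sketch: flag unusually large transactions with a simple
# z-score test against a customer's spending history. The threshold of
# three standard deviations is an arbitrary, made-up choice.

from statistics import mean, stdev

def flag_unusual(history, new_amount, threshold=3.0):
    """Return True if new_amount deviates more than `threshold`
    standard deviations from the customer's historical spend."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

# Typical weekly spend vs. a sudden large purchase
history = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 44.1]
print(flag_unusual(history, 49.0))    # in line with past spending
print(flag_unusual(history, 900.0))   # well outside the usual pattern
```

In a real deployment the flag would trigger the human follow-up Stancombe mentions, not an automatic decision.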

AI has applications across all aspects of security and the development of a sixth sense could prove vital in the future.

“It’s the ability to pick up on little triggers that we wouldn’t necessarily consciously notice: different sounds, or just something a bit strange that you wouldn’t necessarily think of. Automating that type of sensory input is a lot more powerful than doing it manually, and in a dangerous environment such as an oil rig, where you're monitoring for equipment failures, you could process far more data through different sensors.

“Or, in a home environment secured with CCTV you could monitor for unusual noises, inputting passive senses into a computer then processing that against background data and knowledge to look for unusual patterns and highlight any odd behaviour that might need investigating.”
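The "passive senses" idea can be made concrete with a small sketch: compare each new sensor reading, say a microphone's loudness level, against a rolling baseline of recent background readings and flag spikes for investigation. The window size and tolerance below are illustrative, not taken from any real monitoring product.

```python
# Sketch of monitoring a passive sensor against background data:
# keep a rolling window of recent readings and flag any reading
# that stands well above the running average.

from collections import deque

class BackgroundMonitor:
    def __init__(self, window=50, tolerance=3.0):
        self.readings = deque(maxlen=window)
        self.tolerance = tolerance

    def observe(self, level):
        """Return True if `level` stands out from recent background."""
        if len(self.readings) >= 10:  # need some baseline first
            baseline = sum(self.readings) / len(self.readings)
            unusual = level > baseline * self.tolerance
        else:
            unusual = False
        self.readings.append(level)
        return unusual

monitor = BackgroundMonitor()
quiet = [1.0, 1.2, 0.9, 1.1] * 5           # normal household noise
alerts = [monitor.observe(x) for x in quiet]
print(any(alerts))                          # no alerts on background
print(monitor.observe(9.0))                 # a loud bang stands out
```

As in the quote, the system only highlights odd behaviour; deciding whether it needs investigating stays with a person.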

Threats range in size and severity, and while future applications of AI could see the technology used to fight terrorism, some social media platforms have already started using it to combat threatening language.


“Computers are starting to come to terms with a number of complex problems such as facial recognition and that’s really important,” Stancombe continues. “Given the volume of interactions, it’s impossible for it all to be monitored by humans, so you have to build an automated way to monitor and manage it.

“Platforms like Facebook and Instagram are expected to provide some sort of policing, some sort of duty of care, and the use of sentiment analysis is crucial to enabling this.”
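A crude stand-in for the sentiment analysis Stancombe mentions is lexicon scoring: rate a message against a small word list and flag it for human review past a threshold. Real platforms use trained models over far larger vocabularies; the word list, weights and threshold here are entirely made up for illustration.

```python
# Naive lexicon-based flagging, a simplified stand-in for the
# sentiment analysis used to surface threatening language.
# Words, weights and threshold are hypothetical.

THREATENING = {"hate": 2, "hurt": 3, "kill": 5, "attack": 4}

def flag_for_review(message, threshold=4):
    words = message.lower().split()
    score = sum(THREATENING.get(w, 0) for w in words)
    return score >= threshold

print(flag_for_review("have a great day"))            # False
print(flag_for_review("i will hurt and attack you"))  # True
```

Flagged messages would then go to the human moderators the article argues are still indispensable.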

However, achieving this requires man and machine to continue working closely together for the foreseeable future, to ensure AI projects don’t end up like Microsoft’s Tay chatbot.

“It's like most things, predicting is difficult because no one is ever sure of the outcome. Therefore, predicting the future based on the past is hard but clearly you can build models that say, looking at previous events, this is what's going to happen.

“These projects need to be monitored and feedback needs to be provided so the technology can be continually improved. There's still a requirement for the empathy of humans that I don't think machines have yet. Especially in terms of the sixth sense: if you're looking at it in terms of security, be it bullying or fraud or terrorism, the system will flag up areas for investigation but we still require people to then look into them. I don't think we're ready yet to hand over total control to computers in most serious cases.”

Microsoft is not the only organisation to come under fire for failing to provide this so-called duty of care: programmes designed to calculate the reoffending risk of prisoners have also been criticised for exhibiting racial bias. Further research has concluded that machine learning algorithms are picking up on, and thus reinforcing, a number of ingrained race and gender prejudices.

Stancombe doesn’t shy away from these criticisms and again highlights the importance of man and machine working together in order to overcome them.

“It's a useful tool to give you an indication of what may happen but there’s no guarantee that just because it comes out of a computer that it will be accurate. I think that at this stage there's still some way to go before computers will replace humans. Humans have an important role to play for lots of different reasons and I think it's like most things in life, you can't suddenly say 'well a computer’s going to make all my decisions for me'. There needs to be a healthy degree of scepticism and a continual feedback loop.

“The growth of neural networks revolves around learning through experience and I think that’s something we’ll see a lot more interest in the future. Humans are very good at assimilating lots of information and arriving at a conclusion but what will be interesting is, even though computers can absorb much more data, will they arrive at the same conclusions as a human? If we present two different neural networks with identical data sets, will they both arrive at the same outcome? I think we will see a lot more interest and research in that as we go forward.”
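Stancombe's question, whether two models shown identical data reach the same outcome, can be probed with a toy experiment: train the same model twice from different random initialisations and compare what each "concludes". For a convex model like the logistic regression below, both runs end up making the same decisions; for deep neural networks that agreement is not guaranteed, which is exactly what makes the question interesting. All numbers here are illustrative.

```python
# Train one model twice on identical data, differing only in the
# random seed used to initialise the weights, then check whether
# the two runs agree on every training example.

import math
import random

def train(seed, data, epochs=2000, lr=0.5):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(3)]  # two weights + bias
    for _ in range(epochs):
        for (x1, x2), y in data:
            z = w[0] * x1 + w[1] * x2 + w[2]
            p = 1 / (1 + math.exp(-z))          # sigmoid activation
            err = p - y
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            w[2] -= lr * err
    return w

def predict(w, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0

# Label is 1 only when both inputs are present (an AND-style task)
data = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0), ((1, 1), 1)]

wa = train(seed=1, data=data)
wb = train(seed=2, data=data)
same = all(predict(wa, *x) == predict(wb, *x) for x, _ in data)
print("Same conclusions:", same)
```

The learned weight vectors differ between the two runs even when the final decisions coincide, which is one reason the research Stancombe anticipates focuses on behaviour rather than internal parameters.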

The importance of human judgement when interacting with AI is a point that Stancombe continues to stress.

“Common sense is still crucial. We all use lots of technology in our everyday lives but a certain degree of it is still needed,” he explains. “Technology is there to guide and help us but at the end of the day, I think the decision and responsibility still sits with human beings. These tools have been designed to help us and if AI's not helping us or it's leading us to the wrong conclusions then we've done a bad job of building it.”

While a number of issues still need to be ironed out, the long-term benefits of AI are countless. Its abilities will enable us to reduce cyber, personal and domestic threats; there are already cars on the market that use the technology to automatically apply the brakes if they perceive a risk of a crash. On a more basic level, it can improve our experience on social media and our quality of life at work by automating some of the more mundane workplace activities.

“Obviously, with all this comes a word of caution,” Stancombe reiterates at the end of our conversation. “Please don't throw out common sense and become a slave to the automation. Recognise it for what it is, something that's there to be used and to try and help human beings have a safer, better life.”

 

Charlotte Trueman

Charlotte is Junior Staff Writer at IDG Connect
