Artificial Intelligence may be the answer to protecting our data and ourselves.
When imagining a benevolent artificial intelligence watching over our lives, it’s hard to do so without the lingering fear instilled by dystopian science fiction. AI is unnerving precisely for the same reasons that it’s so impressive: it learns. It can do certain things faster than any human ever could, and it can improve itself without direct instruction from humans. As our Internet of Things gathers more and more data on our lives and businesses, AI has plenty of information on us to devour. But the image of a sinister, man-made sci-fi villain with a robotic voice and calculating logic is not helpful in understanding what AI can do (and already does) to bolster our cyber-defences and gather information that helps the emergency services save lives.
AI is already being deployed in the arms race between hackers and infosec experts. Ransomware uses machine learning to get better and better at finding chinks in the armour, and AI bots can even mimic the writing styles of our friends, family and colleagues in order to compose convincing phishing emails. Adaptive malware can calculate the best way to break through data security defences, all without human instruction. The old fortress mentality of the cybersecurity industry therefore needs to change: we need new ways to neutralise a threat once it gets inside, rather than naively hoping we can keep every attacker out.
The possibilities for using artificial intelligence in infosec are vast, and developments are happening constantly. AI excels at finding patterns in data, and being able to spot an anomaly in huge swathes of information lets it flag a threat that might otherwise go unnoticed. Behavioural data could allow an AI, with no input from us, to distinguish between a phishing email generated by a bot and a genuine enquiry from a potential client, or even to pinpoint individuals who may have been radicalised just by sifting through their browsing habits and purchases. This may sound a bit ‘Big Brother’-esque, but the potential to spot threats before they cause harm is a real societal benefit. Rather more controversial are the trials already happening in China with Cloud Walk, an AI programme that claims to be able to use facial recognition to track citizens and predict which ones will commit a crime. The risk of abuse of AI by totalitarian regimes is very high, and the Cloud Walk trial has been met with discomfort and concern by the Western media.
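The core idea behind this kind of anomaly spotting can be illustrated with a deliberately simplified sketch. The data and the three-standard-deviation threshold below are hypothetical, chosen only to show the principle; real security tooling uses far more sophisticated models.

```python
import statistics

def flag_anomalies(values, threshold=3.0):
    """Return the indices of points more than `threshold` standard
    deviations from the mean of the whole series."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        # All values identical: nothing stands out.
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical hourly login counts for a server; the burst of 640
# logins at index 12 is the kind of spike worth investigating.
logins = [102, 98, 110, 105, 99, 101, 97, 104,
          100, 95, 108, 103, 640, 99, 106, 101]
print(flag_anomalies(logins))  # → [12]
```

The point is not the statistics but the workflow: a machine scans every data point and surfaces only the handful a human analyst should look at.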
Although having an AI track your face wherever you go is certainly not without its privacy concerns, there are more benign ways that AI can use our data to keep us safe. Social media platforms are already using AI to find fake accounts and shut down those with dangerous content. Twitter now uses AI bots to shut down pro-ISIS accounts, and this ability to sift through enormous amounts of data makes AI one of our best defences against the spread of propaganda.
While being able to cut off the supply of terrorist propaganda will certainly make it harder for extremists to radicalise people through social media, there are more immediate ways that AI could help keep us safe. The vast trove of data that we upload onto social media could be used by AI to give a quicker, more focused response to an emergency. For example, most terror attacks are recorded by people on their phones, leading to videos, photos and status updates posted in real time to Twitter, Facebook and other platforms. AI could use this data to direct emergency services to where they’re needed, coordinate with hospitals and accelerate calls to emergency dispatchers: the possibilities are endless. Social media is faster than traditional means of reporting an incident, and AI could be the best means of harnessing what could be a life-saving resource.
One such AI was developed by One Concern, a Silicon Valley company using cutting-edge technology to help save lives. Inspired by founder Ahmed Wani’s experience of surviving the deadly floods in Kashmir in 2014, One Concern says it can provide “game-changing intelligence about disasters in real-time, enabling cities, businesses and the resilience ecosystem with predictive intelligence and actionable information.” Its “benevolent intelligence” is already helping emergency services prepare for and respond to natural disasters, and is a sure sign of the success governments could have in the future by combining innovative new technology with boots-on-the-ground emergency response efforts.
The possibilities are extremely exciting, but there is no end of scaremongering about AI having too much autonomy. While AI can demonstrate astounding efficiency at certain tasks, when pitted directly against a human opponent it can still be outwitted by the human brain. ‘False positives’ are also a grave concern when we’re dealing with security: anyone who has had their debit card frozen after using it in a foreign country, because the bank assumed the anomaly meant fraud, will be able to imagine how this could go wrong. Humans will still need to be on hand to contextualise data and ensure that a response is reasonable, so that no-one is visited by the police because an AI bot noticed they’d been googling shotguns without noticing their new membership of a clay-pigeon shooting club.
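The frozen-card scenario above can be sketched in a few lines. Everything here is hypothetical (the rule, the field names, the travel-note mechanism); the sketch only shows how a contextless anomaly rule produces a false positive, and how one extra piece of human-supplied context resolves it.

```python
def should_block(transaction, profile):
    """Naive rule: block any transaction from a country the cardholder
    has never used before. Contextless checks like this are exactly
    what produce false positives."""
    return transaction["country"] not in profile["known_countries"]

def should_block_with_context(transaction, profile):
    """Same rule, but a human-supplied travel note supplies the
    missing context before the anomaly rule fires."""
    if transaction["country"] in profile.get("travel_notes", set()):
        return False
    return should_block(transaction, profile)

# A UK cardholder buys a coffee on holiday in France.
profile = {"known_countries": {"GB"}, "travel_notes": {"FR"}}
holiday_coffee = {"country": "FR", "amount": 3.50}

print(should_block(holiday_coffee, profile))               # True: false positive
print(should_block_with_context(holiday_coffee, profile))  # False: context fixes it
```

The design point is the one the paragraph makes: the anomaly detector does the tireless scanning, but a human (or human-provided context) decides whether the anomaly actually means anything.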
Artificial intelligence is already becoming a major player in cyber security, and it won’t be long until it takes on a critical role in physical security as well. Privacy and data protection are hot topics, and will only get hotter as GDPR approaches, so while the Chinese Cloud Walk face-tracker is unlikely to be received well in the West, we do need to be prepared to talk about how AI and Big Data interact. Rather than just worrying about how to protect our data, we should be excited about how we can use it and embrace the life-saving possibilities of AI. Whether we like it or not, every one of us has a wealth of personal data out there, and firewalls will not work for long: AI is going to change the world of security (physical and virtual) forever.
YUDU Sentinel is an app-based crisis communication platform for the management of fire, terrorist and cyber attacks, and any other critical incidents. Crisis managers have immediate access to independent two-way communication channels (SMS, voice, email and in-app messaging) and can view key documents on mobile devices. Sentinel is a cutting-edge crisis management tool. Find out more at http://www.yudu.com/do/notification/sentinel or contact us on Twitter @YUDUSentinel.