The potential for AI to be used against businesses has been highlighted by news that an AI-driven robot has cracked open a safe, and that AI can be used to create custom malware capable of defeating antivirus software.
Safe Cracking Robot
Witnessed live by several hundred hackers at DefCon over the weekend, a low-cost robot developed by a team from SparkFun Electronics was able to open a safe in around 30 minutes. The safe was made by SentrySafe, a leading safe manufacturer.
AI Process of Elimination
The AI robot was able to quickly reduce roughly a million possible combinations to just 1,000, and then tried the remaining combinations until it cracked the safe; the winning combination was 51.36.93.
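The scale of that reduction is easy to sketch. Assuming a standard 100-position, three-number dial, and the widely reported tricks of exploiting the dial's mechanical tolerance (the lock accepts a number that is off by one, so only every third position needs testing) and measuring the third wheel directly, the arithmetic works out roughly as follows. This is a back-of-the-envelope illustration, not SparkFun's actual code:

```python
# Back-of-the-envelope arithmetic for the reported search-space reduction.
# Assumes a standard 100-position, three-number combination dial.
positions = 100
naive_space = positions ** 3       # 1,000,000 raw combinations

# Reported trick 1: the dial tolerates being off by one either way,
# so only every third position (~33 per wheel) needs to be tested.
per_wheel = positions // 3
tolerant_space = per_wheel ** 3    # ~36,000 combinations

# Reported trick 2: the third wheel's value can be measured mechanically,
# collapsing one dimension of the search entirely.
remaining = per_wheel ** 2         # ~1,100 - close to the quoted "just 1,000"

print(naive_space, tolerant_space, remaining)
```

At a few seconds per attempt, a search space of around 1,000 combinations is what brings the crack down to the roughly 30-minute mark reported at the demonstration.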
When the safe popped open, it was reportedly greeted with thunderous applause from the many hundreds of hackers attending the demonstration.
The robot’s creators at SparkFun were reportedly especially pleased with the successful safecracking because, prior to the DefCon event, the robot had only been tried on a smaller safe and in front of a much smaller audience.
The apparent sophistication of the bot and its abilities was especially surprising given its ‘budget’ origins. The robot cost only around $200 to assemble and used 3D-printed parts. The parts are easily replaceable and can be custom-designed to fit other brands of safes that use combination locks.
Machine-Learning Tools’ ‘Own Language’
Quite apart from cracking safes, it has been recently discovered that machine-learning tools can also develop their own communication language.
One potentially worrying development is that some AI machines can even create custom malware to defeat antivirus software by learning how to tweak malicious binaries, and using the modified code to slip past antivirus tools.
Hyrum Anderson, technical director of data science at security firm Endgame, showed at DefCon how his company’s research has been able to adapt the OpenAI framework (co-founded by Elon Musk) and apply it to the task of creating malware that current antivirus software is unable to spot.
The antivirus-defeating machine-learning software has been posted on GitHub, and Anderson has reportedly encouraged people to try it out for themselves.
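The approach described is, at heart, a trial-and-error learning loop: an agent repeatedly applies small, functionality-preserving tweaks to a binary and keeps the ones that lower a detector's score. The following is a deliberately toy, self-contained sketch of that loop; the stand-in "detector" and "mutation" here are invented for illustration and bear no relation to Endgame's actual environment or to any real antivirus engine:

```python
import random

def detector_score(binary: bytes) -> float:
    # Toy stand-in for an antivirus model: flags a fixed byte "signature".
    return binary.count(b"\xde\xad") / max(len(binary) // 2, 1)

def mutate(binary: bytes) -> bytes:
    # Toy functionality-preserving tweak. In the real research the actions
    # are things like appending benign bytes or repacking sections; here we
    # simply append random padding, which leaves the original bytes intact.
    return binary + bytes(random.randrange(256) for _ in range(8))

def evade(binary: bytes, threshold: float = 0.01, max_steps: int = 200) -> bytes:
    # Greedy hill-climbing loop: keep a mutation only if the score drops.
    best, best_score = binary, detector_score(binary)
    for _ in range(max_steps):
        candidate = mutate(best)
        score = detector_score(candidate)
        if score < best_score:
            best, best_score = candidate, score
        if best_score < threshold:
            break
    return best
```

The published research replaces the greedy loop with a reinforcement-learning agent and the toy scorer with a real machine-learning malware classifier, but the feedback structure (mutate, score, keep what evades) is the same.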
It is thought that antivirus / security firms will be among the first to try out this particular AI development with a view to assessing how their security products could be affected, and how to guard against this new potential threat.
What Does This Mean For Your Business?
Hackers and cyber criminals have access to the same technology as the rest of us, and this story illustrates that just as AI can be used to make beneficial commercial developments and innovations in the right hands and with good intentions, it also represents the next wave of threats to business, and indeed state security, when in the wrong hands.

Businesses have long trusted antivirus and other security software to provide a basic, low-maintenance, low-cost but effective defence against popular forms of attack. With sophisticated, low-cost AI tools potentially being used by cyber criminals against businesses and other organisations, the worry is that popular security solutions will have to be redesigned and constantly modified, and may still not be able to keep up with more ‘intelligent’ threats. It may even be the case that business security software will need its own AI element to be able to combat AI threats.

This could have cost implications for businesses, as well as creating a need to revisit risk assessments and to check that security suppliers have adequate protection measures in place to take account of, and deal with, known and possible AI-based security threats.