IBM has unveiled how AI can power malware with its DeepLocker


Author: Justin Brunnette

Category: IT News

We seem to be entering an AI era in which every field of science has seen its application in one form or another. As AI finds more and more diverse applications, it was only a matter of time before this new power was weaponized. Earlier this month at Black Hat USA 2018, the largest information security event in the world, IBM presented its proof-of-concept malware called DeepLocker: malware capable of highly targeted and evasive attacks, powered by AI.
DeepLocker starts by hiding itself in ordinary applications, such as video conferencing software, in order to fly under the radar of antivirus and anti-malware software. DeepLocker attacks only once it has reached its intended target. The trigger for the attack can use any combination of attributes, such as audio, images, geolocation and system features, and the complexity of the trigger is concealed by the nature of deep neural networks.
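To make the idea of a multi-attribute trigger concrete, here is a toy sketch. The attribute names and values are hypothetical, invented for illustration; they are not from IBM's DeepLocker.

```python
# Illustrative sketch only: a toy trigger that fires when every
# observed attribute matches a target profile. All names and values
# here are hypothetical, not taken from DeepLocker.

def trigger_met(observed: dict, target: dict) -> bool:
    """Fire only when every targeted attribute matches."""
    return all(observed.get(k) == v for k, v in target.items())

target_profile = {
    "geolocation": "35.68N,139.69E",   # hypothetical target region
    "os_language": "ja-JP",
    "username":    "t.tanaka",
}

observed = {
    "geolocation": "35.68N,139.69E",
    "os_language": "ja-JP",
    "username":    "t.tanaka",
}

print(trigger_met(observed, target_profile))  # True
```

The point of DeepLocker is that a plain conditional like this one is trivial for an analyst to read and reverse. Burying the same condition inside a deep neural network removes that readability, as the following overview explains.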
A brief overview of neural networks helps explain why they are essentially black boxes. When developing a neural network, there are in effect two machines: one that builds algorithms and one that tests them. In the case of image recognition, the developer has a collection of images that the algorithm should be able to recognize; in other words, they know the end goal. But because the developer does not know exactly how to write an algorithm that can distinguish between images, it is up to the builder machine to produce candidate algorithms.
At first the builder machine generates these algorithms almost at random and sends them off to the tester machine, which measures how well they recognize the images. Since the algorithms are made nearly at random, we should expect very low accuracy at first, perhaps 1% to 3% recognition. The tester machine sends only the best-performing algorithms back to the builder machine, telling it to make more like them. The builder machine then produces another set, which goes to the tester machine for another round of testing. Now accuracy will have risen, to perhaps 4% to 6%. The best of these algorithms are again sent back to the builder machine to seed further batches, and so on.
As this cycle continues, the algorithms the builder machine produces become more and more complex, even though it is still generating code essentially at random while retaining the traits that succeeded in previous tests. This is really a high-speed trial-and-error method of writing code; done by hand, it could take a human programmer years if not decades, but this process runs at the limits of computer processing speed. If a developer tries to inspect the source of the end product, however, they will not be able to understand what is going on: after so many iterations over countless node connections and combinations, the network has become incredibly convoluted, an effective black box.
The nature of these neural networks makes them perfect for concealing DeepLocker's trigger mechanism and conditions. Whether the trigger is a person's face, a specific voice command, a certain state the system must be in, or any combination of a potentially infinite set of possibilities, the neural network's logic is nearly impossible to read. On top of the trigger conditions, it can also hide how the attack is executed and what the payload is.
IBM has described it as three levels of concealment:
1) Target class concealment
2) Target instance concealment
3) Malicious intent concealment
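One way such payload concealment can work (an assumption for illustration; the article does not detail IBM's mechanism) is to derive the payload's decryption key from the model's output on the target, so the payload ships encrypted and stays unreadable unless the exact trigger input appears. The "model" below is a stand-in hash and the XOR cipher is a toy, both hypothetical.

```python
import hashlib

def model_output(attributes: bytes) -> bytes:
    # Hypothetical stand-in for a neural network's stable output on
    # the target instance; a real system would use the model itself.
    return hashlib.sha256(b"model:" + attributes).digest()

def xor_cipher(payload: bytes, key: bytes) -> bytes:
    # Toy symmetric "encryption" for illustration only.
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(payload))

target_attributes = b"face-embedding-of-target"        # hypothetical trigger input
key = model_output(target_attributes)
hidden_payload = xor_cipher(b"malicious payload", key)  # what ships in the binary

# At run time the key is re-derived only from live observations; the
# wrong person in front of the camera yields the wrong key and garbage.
recovered = xor_cipher(hidden_payload, model_output(b"face-embedding-of-target"))
print(recovered)  # b'malicious payload'
```

An analyst who dumps the binary sees only the encrypted blob and the opaque model; without the true trigger input, neither the target nor the payload can be recovered.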
DeepLocker’s proof of concept at Black Hat concealed the infamous WannaCry ransomware inside video conferencing software, making it undetectable by antivirus software and sandboxes. For their example, IBM used a specific person’s face as the trigger and trained the AI to recognize that face. When the video conferencing software is launched, it takes snapshots and feeds them to the embedded AI. When the target person appears on camera, the AI releases WannaCry, encrypting the whole computer and rendering it unusable.
The implication is that such malware could be sent out to millions of people, perhaps through a regular periodic update of video conferencing software. In addition, any number of AI systems could be plugged in to find the target, and different types of malware could serve as the payload. Considering what we observed with the Stuxnet worm, it is entirely possible that such malware could start off in a remote location and make its way through hundreds of closed intranets before landing on its intended target on the other side of the world, making it impossible to trace its origin.
Original Article: