How artificial intelligence will invoke new hack attacks

Artificial intelligence is getting more advanced, giving technology the ability to do things we could not have imagined just a few years ago. Machines can translate for us, talk back to us, listen to us, and even automate some of our tasks. But as the technology gets smarter and smarter, so will the threats that come along with it.

In the future, we may find ourselves in a situation where hackers use artificial intelligence to launch more sophisticated attacks on our systems. However, Jason Hong, associate professor in the Human-Computer Interaction Institute at Carnegie Mellon University’s School of Computer Science, says these attacks won’t look like the AI depicted in movies like “Terminator,” nor will they be as advanced as what HBO depicted in its series “Westworld.”

“If you look at all the movies and TV shows, they keep on showing all these different things of what people’s imaginations are on what these things can do. We are nowhere near close to that,” he said.

Hong explained that AI gets a bad rap because people let their imaginations run wild and ascribe capabilities to it that the technology does not actually have. Nonetheless, AI won’t always be used for good, and we will need to worry about those who choose to misuse it.

“In the coming year we expect to see malware designed with adaptive, success-based learning to improve the success and efficacy of attacks. This new generation of malware will be situation-aware, meaning that it will understand the environment it is in and make calculated decisions about what to do next. In many ways, it will begin to behave like a human attacker: performing reconnaissance, identifying targets, choosing methods of attack, and intelligently evading detection,” said Derek Manky, global security strategist for Fortinet, a cybersecurity software provider.
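To make “success-based learning” a bit more concrete: the idea loosely resembles reinforcement-learning techniques such as an epsilon-greedy bandit, which favors whichever option has worked best so far while occasionally trying alternatives. The Python sketch below is a purely illustrative, harmless model of that feedback loop; the method names and success rates are invented, and it selects among abstract strings rather than anything operational.

```python
import random

# Hypothetical option names and made-up success rates, invented
# purely for illustration; nothing here models a real attack.
methods = ["method_a", "method_b", "method_c"]
successes = {m: 0 for m in methods}
attempts = {m: 0 for m in methods}

def simulated_outcome(method):
    # Stand-in for the environment: fixed, fictional success rates.
    rates = {"method_a": 0.2, "method_b": 0.5, "method_c": 0.8}
    return random.random() < rates[method]

def success_rate(m):
    return successes[m] / attempts[m] if attempts[m] else 0.0

def pick_method(epsilon=0.1):
    # Epsilon-greedy: mostly exploit the best-known option,
    # occasionally explore another one.
    if random.random() < epsilon:
        return random.choice(methods)
    return max(methods, key=success_rate)

for _ in range(1000):
    m = pick_method()
    attempts[m] += 1
    successes[m] += simulated_outcome(m)  # True counts as 1

print({m: round(success_rate(m), 2) for m in methods})
```

Over many trials the selector concentrates on the option with the best observed outcome, which is the adaptive, “calculated decision” behavior the quote describes, reduced to a few lines of bookkeeping.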

According to Manky, this malware will use code that’s a precursor to AI. It will replace the traditional “if not this, then that” code logic with more complex decision-making logic. “Autonomous malware operates much like branch prediction technology, which is designed to guess which branch of a decision tree a transaction will take before it is executed. A branch predictor keeps track of whether or not a branch is taken, so when it encounters a conditional jump that it has seen before it makes a prediction so that over time the software becomes more efficient,” Manky said.
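Branch prediction itself is a well-documented processor technique, and the analogy is easier to see with a small model. Below is a minimal Python sketch of a classic two-bit saturating-counter predictor: it remembers whether each branch was recently taken and grows more accurate on repeated patterns, which is the learning-from-past-outcomes behavior Manky is drawing on. The branch address is an arbitrary placeholder.

```python
# Minimal model of a 2-bit saturating-counter branch predictor.
# Counter states: 0 = strongly not-taken, 1 = weakly not-taken,
#                 2 = weakly taken,       3 = strongly taken.
counters = {}  # branch address -> 2-bit counter

def predict(addr):
    # Predict "taken" when the counter is in one of the taken states.
    return counters.get(addr, 1) >= 2

def update(addr, taken):
    # Nudge the counter toward the actual outcome, saturating at 0 and 3.
    c = counters.get(addr, 1)
    counters[addr] = min(c + 1, 3) if taken else max(c - 1, 0)

# A loop branch that is taken 9 times, then falls through once.
history = [True] * 9 + [False]
correct = 0
for outcome in history:
    if predict(0x400) == outcome:
        correct += 1
    update(0x400, outcome)

print(f"{correct}/{len(history)} predictions correct")
```

On the repeating pattern above, the predictor is wrong only at the edges of the run; that growing accuracy over repeated encounters is the efficiency gain the quote alludes to.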

Hong sees an emerging field of adversarial machine learning, in which hackers try to reverse-engineer how defensive software techniques work. For example, they are finding new ways to get past spam filters, or they are poisoning data sets so that the owner of the data unknowingly trains machine learning systems on bad data and the systems start making bad decisions.
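Label flipping is one crude form of the data poisoning Hong describes, and it is simple enough to show on a toy example. The sketch below assumes scikit-learn is installed and uses a tiny made-up corpus; a real attack would be far subtler, so treat this strictly as an illustration of the mechanism.

```python
# Toy illustration of training-data poisoning via label flipping.
# The corpus and labels are made up for this example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = ["win free money now", "cheap pills online", "meeting at noon",
               "lunch tomorrow?", "free prize claim now", "project status update"]
train_labels = [1, 1, 0, 0, 1, 0]  # 1 = spam, 0 = ham

def fit(texts, labels):
    # Bag-of-words features feeding a simple Naive Bayes spam classifier.
    vec = CountVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(texts), labels)
    return vec, clf

# An attacker flips the first two spam labels to ham before training.
poisoned_labels = [1 - y for y in train_labels[:2]] + train_labels[2:]

test = ["free money prize", "see you at the meeting"]
for name, labels in [("clean", train_labels), ("poisoned", poisoned_labels)]:
    vec, clf = fit(train_texts, labels)
    print(name, clf.predict(vec.transform(test)))
```

Comparing the two models’ predictions shows how corrupted training labels push the classifier toward the bad decisions Hong warns about, without the attacker ever touching the model itself.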

It is important to keep in mind, though, that artificial intelligence systems are still built with humans in the loop. Few systems are completely automated, because the side effects of full automation are still unknown, according to Hong.

“In the future, AI in cybersecurity will constantly adapt to the growing attack [surface]. Today, we are connecting the dots, sharing data, and applying that data to systems. However, we are the ones telling the machines what to do. In the future, a mature AI system could be capable of making decisions on its own,” said Manky. “Humans are making these complex decisions, which require intelligent correlation through human intelligence. In the future, more complex decisions could be taken on via AI. What is not attainable is full automation. That is, passing 100% control to the machines to make all decisions at any time. Humans and machines must co-exist.”

While there is a fear that AI could one day do more harm than good, Hong says that is far off in the future and not something the industry needs to worry about right now. “There are bigger things that security professionals need to worry about. These AI techniques only work within a very sophisticated and narrow context. Once you go outside of that, they just won’t work that well anymore. Imagine the AI is playing a game of chess and then you change the game to checkers; it is just not going to work as well,” Hong said.

Instead, Hong believes organizations should worry about security issues such as data breaches, weak passwords, misconfigurations, and phishing attacks. “I would say focus on a lot more of these really basic types of security problems, and don’t worry about the really sophisticated ones yet. They will come eventually, but we will have lots of time to adapt and invest in these systems as well,” he said.
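Some of those basics can even be checked programmatically. As one example, the sketch below tests a password against the Have I Been Pwned “Pwned Passwords” range API, which uses k-anonymity so only the first five characters of the password’s SHA-1 hash ever leave the machine. It assumes the third-party requests library is installed, and it is a minimal sketch rather than a complete password policy.

```python
# Check a password against the Have I Been Pwned range API.
# Only the first 5 hex chars of the SHA-1 hash are sent (k-anonymity).
# Assumes the third-party "requests" library is installed.
import hashlib
import requests

def pwned_count(password):
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # The response lists hash suffixes with breach counts, one per line.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

count = pwned_count("password123")
print("found in breaches" if count else "not found", count)
```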