Research trio advocates more work on AI security

What if someone hacked a traffic sign with a few well-placed dots, so your self-driving car did something dangerous, such as going straight when it should have turned right?

Don’t think it’s unlikely – it has already happened – and an Okanagan College professor and his colleagues from France are among those saying researchers must invest more effort in system design and security to deal with such attacks.
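How can a few dots fool a computer? A rough intuition: these systems learn a statistical rule from examples, and an attacker can nudge an input in exactly the direction that rule is most sensitive to. The Python sketch below is a toy illustration of that idea, not the attack or the models from the research; the synthetic data, the tiny classifier, and the “turn right” and “go straight” labels are all invented for demonstration.

```python
# Toy illustration only: a tiny linear "sign classifier" on synthetic data,
# attacked with a fast-gradient-sign-style perturbation. Nothing here comes
# from the paper; the data, model and labels are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": class 0 = "turn right" sign, class 1 = "go straight" sign.
n, d = 200, 64                        # 200 examples, 8x8 = 64 pixels each
X = rng.normal(size=(n, d))
hidden_rule = rng.normal(size=d)
y = (X @ hidden_rule > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a logistic-regression classifier with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y)) / n
    b -= 0.1 * float(np.mean(p - y))

# Take one "turn right" image the model classifies correctly.
x = X[np.where(y == 0)[0][0]]
print("P('go straight') before attack:", sigmoid(x @ w + b))  # near 0: "turn right"

# Adversarial step: move every pixel slightly in the direction that most
# increases the model's error. For this model the input gradient of the
# loss is (p - y) * w, so only its sign is needed.
eps = 0.5                             # small per-pixel change budget
grad_x = (sigmoid(x @ w + b) - 0.0) * w
x_adv = x + eps * np.sign(grad_x)
print("P('go straight') after attack: ", sigmoid(x_adv @ w + b))  # pushed toward 1
```

The change to each pixel is tiny, but every change points in the direction the model is most sensitive to, which is why such attacks can be so effective and so hard to spot.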

Youry Khmelevsky

A research paper co-authored by Okanagan College Computer Science Professor Dr. Youry Khmelevsky, and presented recently at an international conference held by the Institute of Electrical and Electronics Engineers (the world’s largest technical professional society), summarizes the research already done into the threats and dangers associated with the machine-learning processes that underpin autonomous systems such as self-driving cars.

Their paper also points to the need to take research and tool development for “deep learning” to a new level. (Deep learning, or DL, is what makes facial recognition, voice recognition, and self-driving cars possible. DL systems mimic neural networks – like the one in your brain – that take in data and process it through layered patterns of information processing and communication.)
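To make that layered idea concrete, here is a deliberately simplified Python sketch of data flowing through a few “neural” layers, each just a weighted sum followed by a nonlinearity. The weights below are random placeholders; a real deep-learning system learns them from large amounts of data.

```python
# Deliberately simplified picture of "layered" processing in a deep network.
# The weights are random placeholders; a real system learns them from data.
import numpy as np

rng = np.random.default_rng(1)

def layer(x, w, b):
    """One 'neural' layer: a weighted sum of inputs, then a nonlinearity."""
    return np.maximum(0.0, x @ w + b)   # ReLU activation

x = rng.normal(size=16)                 # e.g. a tiny flattened image

w1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 4)),  np.zeros(4)
w3, b3 = rng.normal(size=(4, 2)),  np.zeros(2)

h1 = layer(x, w1, b1)   # early layers pick up crude patterns in the input
h2 = layer(h1, w2, b2)  # deeper layers combine them into richer features
scores = h2 @ w3 + b3   # final scores, e.g. one per possible traffic sign
print("class scores:", scores)
```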

The paper was authored by Dr. Gaétan Hains, Arvid Jakobsson (of the Huawei Parallel and Distributed Algorithms Lab at the Huawei Paris Research Centre) and Khmelevsky. “Safety of DL systems is a serious requirement for real-life systems and the research community is addressing this need with mathematically-sound but low-level methods of high computational complexity,” notes the trio’s paper. They point to significant work still to be done on security, software, and verification to ensure that systems relying on deep learning are as safe as possible.

“It sounds very abstract,” says Khmelevsky, “but it isn’t. It’s here today, whether it’s in your car or a device that recognizes your voice and commands.”

"Deep Learning-based artificial intelligence has had immense success in applications like image recognition and is already implemented in consumer products,” notes Jakobsson. “But the power of these techniques comes at an important cost compared to ‘classic algorithms’: it is harder to understand why they work, and harder to verify that they work correctly. Before deploying DL based AI in safety critical domains, we need better tools for understanding and exhaustively exploring the behaviour of these systems, and this paper is a work in this direction."

Do Hains, Jakobsson and Khmelevsky have the answer to preventing hacks that could send your car straight when it should turn right? Not yet, but they are developing research proposals that could help ensure that your car and its artificial-intelligence systems don’t get fooled.

“Safe AI is an important research topic attracting more and more attention worldwide,” says Hains. “Dr. Khmelevsky brings software engineering expertise to complement my team's know-how in software correctness techniques. We expect to produce new knowledge and basic techniques to support this new trend in the industry.”


