Why there’s no time to wait to start protecting the AI’s mind

This is a personal copy of my Atos Ascent Blog post ‘Why there’s no time to wait to start protecting the AI’s mind’.

The importance and popularity of artificial intelligence (AI) have risen sharply in recent years. However, successfully creating and applying AI models requires significant investment to get it right and to harvest the business benefits. While most organizations recognize the required investment, protecting it, as they would any other intellectual property, is all too often not an obvious objective.

In this blog post, I describe how deep neural networks can be reverse engineered to capture the underlying model definitions, and how large (proprietary) training datasets can potentially be recreated synthetically. Together, these techniques are known as model extraction attacks.

Getting AI right costs serious money

To protect AI models, we first need to understand where the costs of developing them come from. The successful development of AI models requires the following costly ingredients:

  • Expertise – the best and brightest data scientists and AI engineers;
  • Data – collecting and storing massive amounts for training;
  • Algorithms – for training the models;
  • Computing power – the fastest high-performance computer clusters.

Large companies that depend heavily on an AI-centered business strategy can spend millions of dollars every year on the above ingredients.

Models for image recognition and text analysis, for example, require a significant multi-million investment. Several big cloud providers offer such models, typically with a very cheap pay-per-use pricing model, and usually make them available through REST APIs or some other form of microservice for easy integration into applications.

While not all organizations want to expose their models to the public, most pursue a microservices-oriented approach to exposing AI and analytics models to their ecosystem of business applications. At first glance, this seems like a very safe and well-defined approach to integration, since it shields the actual model definitions – the intellectual property that resulted from investing in expertise, algorithms, data and computing power – from curious eyes and protects them from being copied, stolen or used elsewhere.

Stealing the AI’s mind

As reverse engineering is a long-standing personal interest of mine, I have been researching different approaches to reverse engineering deep learning models. The easiest approach is to access the model definition itself (i.e., the files), then analyze the many deep-layered neural networks and approximate the behavior of layers, neurons and weights.

Another approach – which most organizations currently do not consider a risk – uses the model’s interface/API, which describes the input parameters and predicted output, to remotely reverse engineer the model. By carefully crafting a model extraction attack that follows an iterative process of preparing very specific input requests to the AI model and learning from the outcomes, it is theoretically possible to approximate the AI model’s behavior. Over time, this approach could ‘relearn’ the model and reconstruct the deep neural network and its trained weights. Quality and granularity depend on the effort and duration of the attack, but interesting results have been achieved with limited effort and cost. The sketch below illustrates the basic loop.
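To make this concrete, here is a minimal, self-contained sketch of such an extraction attack in Python. The victim model, the probe distribution and the surrogate architecture are all illustrative assumptions of mine; in a real attack, query_model() would wrap the exposed REST API rather than a local stand-in.

```python
# A minimal, self-contained sketch of a model extraction attack.
# The "victim" here is a local stand-in for a remote prediction API; in a
# real attack, query_model() would be an HTTP call to the exposed endpoint.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# --- Victim side (hidden from the attacker) --------------------------
X_secret = rng.normal(size=(5000, 10))
y_secret = (X_secret @ rng.normal(size=10) > 0).astype(int)
victim = LogisticRegression().fit(X_secret, y_secret)

def query_model(X):
    """Stand-in for the pay-per-use prediction API."""
    return victim.predict(X)

# --- Attacker side ----------------------------------------------------
# 1. Craft synthetic probe inputs that cover the model's input space.
X_probe = rng.normal(size=(2000, 10))
# 2. Label them by querying the API (each call costs a small fee).
y_probe = query_model(X_probe)
# 3. Train a local surrogate that approximates the victim's behavior.
surrogate = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
surrogate.fit(X_probe, y_probe)

# Measure how closely the stolen surrogate agrees with the victim.
X_test = rng.normal(size=(2000, 10))
agreement = (surrogate.predict(X_test) == query_model(X_test)).mean()
print(f"surrogate/victim agreement: {agreement:.1%}")
```

Even this naive random-probing loop can reach high agreement with the victim on unseen inputs; published attacks improve on it by probing adaptively near the surrogate’s decision boundary.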

Other researchers have found that an attacker can leverage training data leakage to synthetically recreate the (proprietary) training data using generative techniques. This might not even be considered stealing in a legal sense, since the attacker paid a small amount to obtain the prediction results, and in many countries reverse engineering is permitted by law if someone is in legal possession of the relevant artifacts.
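As a toy illustration of that idea, the sketch below performs gradient ascent on an input until a simple sigmoid model assigns it high confidence for a target class, in the spirit of published model inversion attacks. The weights here are random stand-ins I made up; a real attack would use weights obtained through white-box access or an extraction like the one above.

```python
# Toy sketch of model inversion: starting from noise, gradient-ascend the
# input so the model considers it prototypical of a target class, yielding
# a synthetic "training-like" example. Assumes white-box access to a simple
# sigmoid model (weights w, bias b); both are random stand-ins here.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # stand-in for stolen/extracted model weights
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(scale=0.01, size=64)   # start from near-zero noise
lr = 0.5
for _ in range(200):
    p = sigmoid(w @ x + b)   # model confidence for the target class
    grad = (1.0 - p) * w     # gradient of log p with respect to x
    x += lr * grad           # push the input toward the class

print(f"final confidence: {sigmoid(w @ x + b):.3f}")
# x is now an input the model finds prototypical of the class; against
# face-recognition models, this kind of ascent can leak a recognizable face.
```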

What lies behind these deep layers of neurons?

While the attacks described above are feasible today and mostly focused on stealing intellectual property, this new line of thinking opens a Pandora’s box of future threats.

With our society becoming ever more dependent on AI technology, combining present-day cybersecurity risks with these new capabilities – namely reverse engineering and possibly even the manipulation of deep neural networks – brings the fictional scenarios of the famous movie ‘Inception’ one step closer to reality.

By delving many layers deep and making small, deliberate changes, hackers could alter AI decisions. One day in the future, a small and careful manipulation of AI models for high-frequency stock trading or fraud detection could potentially pull off the greatest bank robbery in history!
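To give a flavor of just how small those changes can be, here is a toy sketch of an adversarial perturbation against a linear scoring model, loosely following the fast gradient sign method. The model and input are random stand-ins of mine, not a real trading or fraud system.

```python
# Toy sketch of an adversarial perturbation (fast-gradient-sign style)
# against a linear scoring model, a stand-in for e.g. a fraud detector.
# A tiny, targeted nudge per feature shifts the overall decision score.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=30)   # model weights (known, or extracted as above)
x = rng.normal(size=30)   # a legitimate transaction's feature vector

score = w @ x             # decision rule: flag the transaction if score > 0
eps = 0.05                # perturbation budget per feature (barely visible)
x_adv = x - eps * np.sign(w) * np.sign(score)   # nudge against the decision

print(f"original score: {w @ x:+.3f}, adversarial score: {w @ x_adv:+.3f}")
# The nudge shifts the score by exactly eps * sum(|w|), which with many
# features is often enough to flip a borderline decision.
```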

Guarding our AI’s mind

As I described at the start of this post, the importance and popularity of AI have risen sharply in recent years. With that in mind, it’s imperative that organizations act now to protect their AI minds. There are three things they need to do:

  1. Identify and raise awareness of their AI assets – the valuable AI minds that they have invested significant amounts in and that will play an invaluable role in ensuring their future success.
  2. Identify and understand the potential threats – the different evolving approaches for reverse engineering deep learning models.
  3. Protect their ever-evolving AI assets from the ever-evolving threat landscape – not just for today but also for tomorrow.

At Atos, we are following these new developments closely and are working on strategies to protect our customers’ AI assets. For example, we are putting in place the right encryption, access control measures and tighter control of the model interface specification to greatly reduce the effect of this form of attack; a simplified sketch of such interface hardening follows below. We are already helping our customers take their own first steps, working with them to realize the value of their AI assets and then design their full-fledged AI security strategies and appropriate safeguards to tackle these risks.
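As an illustration of what tighter interface control could look like in code, here is a minimal sketch of a guarded prediction endpoint that enforces per-client query budgets and returns only hard labels. The function and names are hypothetical examples, not a description of any specific Atos product.

```python
# Minimal sketch of hardening a prediction endpoint against extraction:
# per-client query budgets plus coarse output (hard labels only, no raw
# probabilities), both of which raise the cost of relearning the model.
# guarded_predict() and client_id are illustrative names, not a real API.
from collections import defaultdict
import time

QUERY_BUDGET = 1000          # max queries per client per 24 hours
_usage = defaultdict(list)   # client_id -> timestamps of recent queries

def guarded_predict(client_id, x, model):
    now = time.time()
    # Drop usage records older than 24 hours, then enforce the budget.
    _usage[client_id] = [t for t in _usage[client_id] if now - t < 86_400]
    if len(_usage[client_id]) >= QUERY_BUDGET:
        raise PermissionError("daily query budget exceeded")
    _usage[client_id].append(now)

    # Return only the predicted label: withholding confidence scores
    # makes boundary-probing extraction attacks considerably slower.
    return int(model.predict([x])[0])
```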

There’s no time to wait, so let’s start talking about how to guard your AI mind!

About Marcel van den Bosch

Lead Data Scientist, Artificial Intelligence/Machine Learning specialist, Big Data technologies consultant and member of the Scientific Community. Marcel van den Bosch is a Data Scientist at Atos based in the Netherlands. His main focus areas are Data Science, Artificial Intelligence and Big Data technologies. In addition, he has a leading role in client innovation in the Netherlands; he has participated in many client innovation workshops and helped shape innovation roadmaps. Internally, Marcel is recognized as a Senior Digital Expert. Before joining Atos, he worked as an entrepreneur and independent IT consultant on several projects, helping customers translate their needs into solutions, and was involved in software development and business intelligence projects. Marcel has also provided training and education for a university in the Netherlands. Clients appreciate his pragmatic, result-focused approach. Marcel holds a master’s degree in Information Science from Utrecht University.
