Monday, 15 February 2016

Zero-effort authentication: useful but difficult to get right


At the forthcoming 2016 Network and Distributed System Security Symposium (NDSS) we present a case study on designing systems that are simultaneously secure and easy to use. Transparent ("zero-effort") authentication is a very desirable goal, but designing and analyzing such schemes correctly is a challenge. At the 2014 IEEE Security and Privacy Symposium, a top conference on information security and privacy, researchers from Dartmouth presented a simple and compelling scheme called ZEBRA for continuously but transparently re-authenticating users. Through systematic experiments, we show that a hidden design assumption in ZEBRA can be exploited by a clever attacker. More technical details are available in the extended research report and on our project page.

[With co-author and guest blogger Nitesh Saxena (University of Alabama at Birmingham)]

Computing systems are pervasive, and we need to authenticate ourselves to them all the time. All of us know the tales of woe that come with managing zillions of passwords. But unwieldy authentication and access control systems are not limited to computing -- we struggle with multiple keys, loyalty cards and so on. This is what makes attempts to design transparent, "zero-effort" authentication systems so compelling. The adjectives "transparent" and "zero-effort" refer to the fact that the system can authenticate users without requiring them to do anything special just for the purpose of authentication. There has been a lot of work on getting the design of zero-effort authentication systems right, and the security community has been making progress. But getting usability and security right at the same time is a difficult challenge.

Transparent deauthentication: Two years ago, researchers from Dartmouth College described a system called ZEBRA at the IEEE Security and Privacy Symposium, one of the top four venues for academic security research. ZEBRA was intended to solve the problem of prompt, user-friendly deauthentication. Think about what happens when you stop interacting with your phone or PC and walk away. What you really want is for your device to lock itself or log you out promptly. That is deauthentication: the means to defend against accidental or intentional misuse of your device by someone else when you walk away while leaving a session open.

Typically deauthentication is based on idle time limits: if you have not interacted with your device for a period exceeding some pre-set time limit, it will deauthenticate you. But therein lies the rub: if the time limit is too short, you will be annoyed at having to log in frequently and unnecessarily; but if the time limit is too long, you risk someone else using your unattended device. Transparent zero-effort schemes have been tried before. One is BlueProximity, which lets you pair your Bluetooth-enabled phone with your PC. Thereafter your PC uses Bluetooth to sense your phone's presence in the vicinity and automatically locks if it can no longer sense your phone. BlueProximity itself is not very secure (an attacker who knows the Bluetooth address of your phone can easily break it), but it is an example of the class of zero-effort authentication solutions that rely on proximity detection via short-range wireless channels. Another prominent example of this class is the collection of "keyless access" schemes found in high-end cars. These systems have many problems: they are vulnerable to relay attacks, and they are not effective in situations where people step away from their devices (and would thus want to be deauthenticated) but do not move far enough away for their devices to stop sensing their presence -- such situations are common in offices with cubicles, or around workstations in hospital wards and factories.
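To make the trade-off concrete, here is a minimal sketch (in Python, and not taken from BlueProximity or any real product) of the two conventional deauthentication triggers just described: an idle-time limit and proximity sensing over a short-range wireless channel. The constants and the helper function are illustrative assumptions.

```python
import time
from typing import Optional

# Illustrative values only; not from any real product.
IDLE_LIMIT_SECONDS = 120      # hypothetical idle-time limit
RSSI_NEARBY_THRESHOLD = -70   # hypothetical "phone is still nearby" signal strength (dBm)


def should_deauthenticate(last_input_time: float, phone_rssi: Optional[int]) -> bool:
    """Lock the terminal if the user has been idle too long, or if the paired
    phone can no longer be sensed nearby (BlueProximity-style proximity check)."""
    idle_too_long = (time.time() - last_input_time) > IDLE_LIMIT_SECONDS
    phone_out_of_range = phone_rssi is None or phone_rssi < RSSI_NEARBY_THRESHOLD
    return idle_too_long or phone_out_of_range
```

Both triggers expose the trade-off described above: the idle limit trades user annoyance against exposure time, and the proximity check fails exactly in the cubicle or hospital-ward scenario where the user steps away but stays within radio range.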


[Figure: The ZEBRA approach to deauthentication]
ZEBRA: This is where ZEBRA comes in, with a very simple and elegant solution to the problem. In ZEBRA every user is required to wear a Bluetooth-enabled "bracelet", equipped with an accelerometer and a gyroscope, on the wrist of their dominant hand. As a ZEBRA user, when you first log into a device ("terminal" in ZEBRA terminology), it establishes a secure connection to your bracelet. Thereafter, while you interact with your terminal, e.g., using its keyboard or mouse, the bracelet sends the measurements generated by your interactions over to the terminal, which then uses a machine learning classifier to map them into a sequence of predicted interactions. Now your terminal has two different views of the same phenomenon of you interacting with it: the first is the sequence of interactions it can directly observe via its peripherals like keyboard and mouse; the second is the sequence of predicted interactions inferred from the measurements sent over from the bracelet. Now you can perhaps see the logic behind ZEBRA: if the two sequences match, ZEBRA can conclude that the person interacting with the terminal is the same person who is wearing the right bracelet for the current login session; if the sequences diverge, ZEBRA can promptly deauthenticate the current login session. ZEBRA calls this "Bilateral Recurring Authentication" -- you can think of it as the Rashomon approach to deauthentication! It is simple, elegant, and, unlike biometric authentication, does not depend on the particular behaviour of the user being authenticated.
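The following sketch illustrates the core of the bilateral comparison. It is our simplified illustration, not the ZEBRA authors' code: the window size, the match threshold and the interaction labels are assumptions chosen for readability.

```python
from collections import deque

# Illustrative parameters; ZEBRA's actual configuration differs.
WINDOW_SIZE = 20        # number of recent interactions compared at a time
MATCH_THRESHOLD = 0.6   # fraction of matches needed to stay logged in


class BilateralComparator:
    """Compare interactions the terminal observes directly via its peripherals
    with interactions a classifier predicts from the bracelet's
    accelerometer/gyroscope measurements."""

    def __init__(self) -> None:
        self.observed = deque(maxlen=WINDOW_SIZE)    # e.g. "typing", "scrolling"
        self.predicted = deque(maxlen=WINDOW_SIZE)   # classifier output for the same interval

    def record(self, observed_event: str, predicted_event: str) -> bool:
        """Record one (observed, predicted) pair; return True to keep the session
        authenticated, False to deauthenticate."""
        self.observed.append(observed_event)
        self.predicted.append(predicted_event)
        matches = sum(o == p for o, p in zip(self.observed, self.predicted))
        return matches / len(self.observed) >= MATCH_THRESHOLD
```

Note that in this model a comparison happens only when the terminal itself registers an interaction; that detail becomes important below.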

Hunting ZEBRA: Is ZEBRA secure? To evaluate this, the authors of ZEBRA conducted experiments in which participants were asked to act as attackers. A researcher, acting as the victim, left one terminal ("terminal A") with an open login session, moved to another terminal ("terminal B") and started interacting with it. The job of the "attacker" was to watch what the victim was doing on terminal B, mimic the same interactions on terminal A and try to avoid being logged out of terminal A by ZEBRA. Security 101 teaches you not to underestimate the capabilities of attackers. Realizing this, the ZEBRA authors gave their attackers a generous handicap: the victim announced aloud what they were about to do ("typing", "scrolling", etc.). The experiments showed that ZEBRA was able to deauthenticate all attackers even with this extra help.

[Figure: Evaluating the security of ZEBRA experimentally]


We realized that there was a problem here: in the experiments, the "attackers" were asked to mimic everything that the victim was doing; but is that the best strategy for the attackers? It turned out that ZEBRA was doing its bilateral comparison only when the terminal perceived interactions via its peripherals. But the terminal being attacked (terminal A) is under the control of the attacker. Therefore we conjectured that a clever attacker does not need to mimic everything the victim does, but can opportunistically choose to mimic only those interactions where he has a good chance of passing ZEBRA's bilateral comparison test!
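Here is a toy sketch of that conjecture, under the same simplified model as the comparator above: since a comparison is triggered only by terminal input, and the attacker controls the terminal, he can stay idle whenever he is unsure of mimicking the victim's current activity. The activity labels and the confident_activities set are illustrative assumptions.

```python
from typing import Optional, Set


def opportunistic_step(victim_activity: str,
                       confident_activities: Set[str]) -> Optional[str]:
    """Return the input the attacker feeds to terminal A in this interval,
    or None to stay idle. Staying idle triggers no bilateral comparison,
    so only the 'easy' intervals ever get checked."""
    if victim_activity in confident_activities:
        return victim_activity          # mimic, e.g. type while the victim types
    return None                         # otherwise produce no terminal input at all


# Example: a keyboard-only attacker mimics typing and ignores everything else.
print(opportunistic_step("typing", {"typing"}))      # -> "typing"
print(opportunistic_step("scrolling", {"typing"}))   # -> None (no comparison is made)
```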

To test our theory, we needed an implementation of ZEBRA. Since the authors of ZEBRA did not have an implementation to share with us, we built our own end-to-end implementation from scratch using standard off-the-shelf Android Wear smartwatches. We then ran a series of systematic experiments modeling different types of opportunistic attackers. For example, our opportunistic keyboard-only attacker ignores everything except keyboard interactions (since a great deal of damage can be done on any modern operating system using the keyboard alone). In our experiments, the test participants were victims while our researchers played the role of attackers.

[Figure: Opportunistic attackers can circumvent ZEBRA]

Our results show that while ZEBRA is indeed effective against naive attackers, it can be circumvented by opportunistic attackers. In the figure above, the graph on the left shows that ZEBRA deauthenticates all naive attackers. The graph on the right shows that opportunistic attackers evaded detection 40% of the time. The red and blue lines correspond to two different configurations of ZEBRA.

The fundamental problem in the design of ZEBRA was that the deauthentication decision depended on an input source (interactions on terminal A) that was under the control of the attacker! We can see this as a case of tainted input, a notion familiar to computer scientists. Well-known defenses against tainted input can be adapted to strengthen the design of ZEBRA.
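As a purely hypothetical illustration (our paper discusses the actual mitigation directions), one such adaptation would base the comparison on the bracelet's untainted view of time: intervals in which the bracelet predicts activity but the attacker-controlled terminal reports nothing would count as mismatches rather than being skipped.

```python
from typing import Optional, Sequence


def untainted_match_fraction(observed: Sequence[Optional[str]],
                             predicted: Sequence[Optional[str]]) -> float:
    """Compare per-interval terminal observations with bracelet predictions.
    Unlike the tainted variant, intervals are NOT skipped merely because the
    terminal (which the attacker controls) reported no input."""
    total = matches = 0
    for obs, pred in zip(observed, predicted):
        if obs is None and pred is None:
            continue                    # genuinely idle interval: nothing to compare
        total += 1
        if obs == pred:
            matches += 1
    return matches / total if total else 1.0
```

Under this accounting, the opportunistic attacker's idle intervals no longer help him: the victim's bracelet keeps reporting activity that the attacker's terminal cannot account for.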

Summary: There are two messages here:
  • Transparent, "zero-effort" authentication, which works by taking the user out of the loop of authentication decisions, is important. The need for it is demonstrated by already available production solutions that attempt to provide zero-effort authentication. But designing them correctly is subtle and difficult.
  • Modeling the adversary correctly and realistically is a fundamental challenge in the analysis of security schemes, especially when novel approaches for balancing usability and security are introduced, as in the case of ZEBRA.
We showed that under a more realistic adversary model, ZEBRA, as it was originally designed, is not secure against malicious attackers. It is still an effective defense against accidental misuse. The fundamental problem in the design of ZEBRA is a case of tainted input -- we are currently developing ways of improving ZEBRA.

More detailed information can be found in our NDSS paper or our extended research report. All figures in this post are taken from our paper/report.

This work was supported by the Academy of Finland (via the Contextual Security project) and the US National Science Foundation.

