Thursday 19 October 2017

Common sense applications of trusted hardware

Hardware security mechanisms like ARM TrustZone and Intel SGX have been widely available for a while and the academic research community is finally paying attention. Whether these mechanisms are safe and useful for ordinary people is hotly debated. In this post, we sketch two of our "common sense" applications of hardware-based trusted execution environments.


Hardware security mechanisms in commodity devices like smartphones (ARM TrustZone) and PCs (TPMs, Intel SGX) have been deployed for almost 15 years! But for most of this time, ordinary app developers have not had the means to actually make use of these mechanisms. Intel made their SGX SDK publicly available two years ago, which has dramatically changed the awareness of, and interest in, hardware security mechanisms among developers and researchers. When we published the first academic research paper on TrustZone almost a decade ago, most academics didn't even recognize the phrase "trusted execution environment (TEE)". This year NDSS had an entire session on TEEs, and reportedly the most common word in the titles of the papers rejected at Usenix SEC was "SGX"!

Despite increased awareness, mentions of "trusted hardware" seem to elicit a general sense of mistrust among developers and researchers. One concern is that these mechanisms require trusting the manufacturer, which is of course reasonable. But we also often hear statements that are not exactly true, like "TEEs were designed for digital rights management (DRM)." In fact, the genesis of ARM TrustZone, which dates back to the design of the "Baseband 5 security architecture" at Nokia, was motivated not by DRM but by demands for technical mechanisms to comply with various requirements, including regulatory requirements for type approval (note: the link points to a video). Of course, once TEEs were available, people did try to use them for DRM and other applications that can be perceived as unfairly limiting the freedom of users.

Nevertheless, hardware security mechanisms are strong enablers that can be used to improve the security and privacy of end users. In this post, we illustrate two such "common sense applications of TEEs" resulting from our recent work.

Private membership test

How can you check whether that new app you recently downloaded is actually malicious software?

At first glance, it may appear that there is a well-known solution to the above question: you simply install anti-malware software (often called "anti-virus") and use it to scan all the apps on your device. In the past, this anti-malware software would periodically download a database of known malware signatures and compare your apps against this database. Of course, this approach is not always 100% accurate (e.g. the database might not contain signatures for all malware), but it can nevertheless be used to identify known malware.

However, performing this type of malware checking on a mobile device is very inefficient because every device would have to download the full malware database and expend its own energy to perform the comparisons. A much more efficient approach is to host the malware database in the cloud and allow users to upload the signatures of their apps to the cloud for checking. Statistics from Yahoo Aviate have shown that the average user installs approximately 95 apps, but according to G Data, the number of unique mobile malware samples is in the millions. Clearly it is far more efficient for each user to upload a list of 95 signatures than to download a database of millions. This scheme is also advantageous for the anti-malware vendors as they can update their databases in the cloud very frequently, and they need not reveal the database in its entirety to all users. But although it is very efficient, this approach raises serious privacy concerns! It has been shown that the selection of apps installed on a user's device reveals a lot of information about the user, some of which may be privacy-sensitive.
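To make the trade-off concrete, here is a minimal Python sketch of the cloud-assisted check in its simplest form: the client hashes each installed app package and uploads only that short list of hashes. The lookup endpoint and response format are hypothetical placeholders, not any real anti-malware API. Note that this is exactly the variant that reveals the user's app list to the server.

import hashlib
import json
import urllib.request

# Hypothetical lookup endpoint -- stands in for an anti-malware vendor's cloud service.
LOOKUP_URL = "https://malware-check.example.com/lookup"

def app_signature(apk_path):
    """Return the SHA-256 digest of an app package as a hex string."""
    digest = hashlib.sha256()
    with open(apk_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_apps(apk_paths):
    """Upload the signatures of the installed apps and return the server's verdicts.

    The response format ({signature: "malicious" | "clean"}) is an assumption made
    for this sketch. Note that the server learns exactly which apps are installed.
    """
    signatures = [app_signature(path) for path in apk_paths]
    body = json.dumps({"signatures": signatures}).encode()
    request = urllib.request.Request(
        LOOKUP_URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())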

To overcome this challenge, we proposed a solution that makes use of a server-side TEE to perform a private membership test. As shown in the diagram below, remote attestation is used to assure the user that she is communicating with our TEE. The user then uploads her app (or its hash) directly to the TEE, which checks whether it is in the dictionary. One critical consideration in this approach is that the adversary can potentially monitor the memory access pattern of the TEE, so the TEE cannot simply perform a normal search of the database. Instead we proposed a new design called a carousel, in which the entire dictionary is continuously circulated through the TEE. As we discovered through implementing this system, with the right data structures to represent the dictionary, the carousel approach can achieve surprisingly good throughput, even beating competing oblivious RAM (ORAM) schemes, because it can process batches of queries simultaneously.

The carousel approach to prevent leaking information about queries from the TEE.
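To illustrate what "circulating the dictionary through the TEE" looks like, here is a minimal Python model of the carousel access pattern (the actual system is enclave code built with the SGX SDK; entries and queries are assumed to be fixed-length byte strings such as app hashes): every query in the batch is compared against every dictionary entry in a single pass, with no early exit or query-dependent branching, so the memory access pattern is independent of which items are being queried.

import hmac

def carousel_lookup(dictionary_stream, queries):
    """Answer a batch of membership queries in one full pass over the dictionary.

    dictionary_stream yields the dictionary entries (fixed-length byte strings);
    queries is the batch of items to look up. Results are accumulated with OR
    instead of branching, so the access pattern does not depend on the queries.
    """
    found = [0] * len(queries)
    for entry in dictionary_stream:            # circulate the entire dictionary
        for i, query in enumerate(queries):    # process the whole batch per entry
            found[i] |= int(hmac.compare_digest(entry, query))  # constant-time compare
    return [bool(f) for f in found]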


Also, as we point out in a recent PETS paper, private membership test (and more generally, private set intersection with unequal set sizes) has many other applications beyond cloud-assisted malware checking. One example we mentioned in the PETS paper is contact discovery in a (mobile) messaging service: when a user installs a messaging client, it checks whether any contacts in the user's address book are already using the messaging service. As it happens, Signal recently announced their implementation of a private contact discovery mechanism using SGX, which essentially follows the design we described in our work. It is always gratifying when real-world systems deploy innovations from research!

Protecting password-based authentication systems

How can you be sure your passwords are kept safe, even when sent to a compromised server?

Passwords are undoubtedly the dominant user authentication mechanism on the web today. This is unlikely to change in the foreseeable future because password-based authentication is inexpensive to deploy; simple to use; well understood; compatible with virtually all devices; and does not require users to have any additional hardware or software.

However, password-based authentication faces various security threats, including phishing, theft of password databases, and server compromise. For example, an adversary could trick users into entering their passwords into a phishing site, and then use these to impersonate the users. Alternatively, the adversary could steal a database of hashed passwords from a web server and use it to guess users' passwords. Most worryingly, a stealthy adversary might even compromise an honest server and siphon off passwords in transit as users log in. Users' tendency to re-use passwords across different services further exacerbates these concerns.

Current approaches for protecting passwords are not fully satisfactory for various reasons: they typically address only one of the above threats; they do not give users any assurance that they are actually in place; and they face deployability challenges in terms of cost and/or ease of use for service providers and end users alike.

To address these challenges, we have developed SafeKeeper, a comprehensive approach for protecting the confidentiality of passwords on the web. Unlike previous approaches, SafeKeeper protects against very strong adversaries, including so-called rogue servers (i.e., either honest servers that have been compromised, or actively malicious servers), as well as sophisticated external phishers.

At its core, SafeKeeper uses a server-side TEE to apply a keyed one-way function to users' passwords, using a key that is available only inside the TEE. This immediately prevents offline password guessing, and thus renders a stolen password database useless to an attacker. Furthermore, using remote attestation, users can establish a secure channel directly from their browsers to the TEE, and receive strong assurance that they are indeed communicating with a legitimate SafeKeeper TEE. When used correctly, these capabilities protect users against password phishing and ensure that their passwords remain safe even if the server is compromised (or actively malicious).

SafeKeeper: Protecting passwords even against an actively malicious web server.
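The following Python fragment is a minimal sketch of the keyed one-way function at the core of SafeKeeper, modelled outside the enclave for readability; in the real system the key is generated and kept inside the SGX enclave, and the salting shown here is an illustrative assumption rather than the exact construction.

import hashlib
import hmac
import os

# Stand-in for a key that, in the real system, never leaves the enclave.
ENCLAVE_KEY = os.urandom(32)

def protect_password(password: bytes, salt: bytes) -> str:
    """Apply the TEE-held key to the (salted) password.

    The result, not the raw password, is what gets handed to the server's
    normal password hashing code (e.g. PHPass) and stored in its database.
    """
    return hmac.new(ENCLAVE_KEY, salt + password, hashlib.sha256).hexdigest()

Because the key never leaves the TEE, an attacker who steals the resulting password database cannot mount an offline guessing attack against it.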


We have implemented SafeKeeper's server-side password protection service using Intel SGX and integrated it with the PHPass password processing framework, which is used in several popular platforms, including WordPress and Joomla. We have developed the client-side component as a Google Chrome extension. Our 86-participant user study confirms that users can use this tool to protect their passwords on the web.


Epilogue: Enabling more common-sense applications with Intel SGX

In the process of designing and implementing these applications, we encountered a couple of challenges that must be solved before such applications can be deployed at scale. The first is that SGX enclaves can only run in "release mode" (i.e., with full protection of enclave memory) if they have been signed with an authorized key, which requires a commercial use license from Intel. However, we expect that obtaining such a license would not be a problem for a company. The second challenge is that we would like anyone to be able to verify the remote attestation of our enclaves. At the moment, verifying SGX attestation requires users to register their public keys with the Intel Attestation Service (IAS). It is probably not realistic to expect all our end users to do this, so for SafeKeeper we implemented a proxy that accepts attestation quotes from clients, sends them to IAS for verification (using our registered key), and returns the results to the clients. Fortunately, IAS signs the verification result with a published Intel key, so clients can verify the response without having to trust our proxy. We are still investigating whether this is the best solution going forward.
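For completeness, here is a simplified Python sketch of such a proxy's core logic. The IAS URL and the authentication header are placeholders (the exact endpoint and credentials depend on one's registration with Intel), and the report and signature field names are based on the IAS API but should be checked against the current specification; the key point is that the proxy merely relays data, and clients verify Intel's signature on the report themselves.

import json
import urllib.request

IAS_REPORT_URL = "https://ias.example.invalid/attestation/report"  # placeholder URL
IAS_AUTH_HEADER = {"Authorization": "..."}  # placeholder for our registered IAS credentials

def verify_quote_for_client(quote_b64: str) -> dict:
    """Forward a client's attestation quote to IAS and relay the signed report.

    The report and its signature are returned to the client verbatim, so the
    client can verify them against Intel's published key without trusting
    this proxy.
    """
    body = json.dumps({"isvEnclaveQuote": quote_b64}).encode()
    request = urllib.request.Request(
        IAS_REPORT_URL, data=body,
        headers={"Content-Type": "application/json", **IAS_AUTH_HEADER})
    with urllib.request.urlopen(request) as response:
        return {
            "report": response.read().decode(),
            "signature": response.headers.get("X-IASReport-Signature", ""),
            "certificates": response.headers.get("X-IASReport-Signing-Certificate", ""),
        }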


- Andrew Paverd and N. Asokan

Disclaimer: By way of full disclosure, the authors are part of the Intel Collaborative Research Institute for Secure Computing, which receives funding from Intel. The views in this article are solely those of the authors.
