Thursday 19 October 2017

Common sense applications of trusted hardware

Hardware security mechanisms like ARM TrustZone and Intel SGX have been widely available for a while and the academic research community is finally paying attention. Whether these mechanisms are safe and useful for ordinary people is hotly debated. In this post, we sketch two of our "common sense" applications of hardware-based trusted execution environments.

Hardware security mechanisms in commodity devices like smartphones (ARM TrustZone) and PCs (TPMs, Intel SGX) have been deployed for almost 15 years! But for most of this time, ordinary app developers have not had the means to actually make use of these mechanisms. Intel made their SGX SDK publicly available two years ago, which has dramatically increased awareness of, and interest in, hardware security mechanisms among developers and researchers. When we published the first academic research paper on TrustZone almost a decade ago, most academics didn't even recognize the phrase "trusted execution environment" (TEE). This year NDSS had an entire session on TEEs, and reportedly the most common word in the titles of papers rejected at Usenix SEC was "SGX"!

Despite increased awareness, mentions of "trusted hardware" still seem to elicit a general sense of mistrust among developers and researchers. One reasonable concern is that using these mechanisms requires trusting the manufacturer. But we often hear statements that are not exactly true, like "TEEs were designed for digital rights management (DRM)." In fact, the genesis of ARM TrustZone, which dates back to the design of the "Baseband 5 security architecture" at Nokia, was motivated not by DRM but by demands for technical mechanisms to comply with various requirements, including regulatory requirements for type approval (note: the link points to a video). Of course, once TEEs were available, people did try to use them for DRM and other applications that can be perceived as unfairly limiting the freedom of users.

Nevertheless, hardware security mechanisms are strong enablers that can be used to improve the security and privacy of end users. In this post, we illustrate two such "common sense applications of TEEs" resulting from our recent work.

Private membership test

How can you check whether that new app you recently downloaded is actually malicious software?

At first glance, it may appear that there is a well-known solution for the above question - you simply install anti-malware software (often called "anti-virus") and use it to scan all the apps on your device. In the past, this anti-malware software would periodically download a database of known malware signatures, and compare your apps against this database. Of course this approach is not always 100% accurate (e.g. the database might not contain a complete list of all malware signatures), but nevertheless it can still be used to identify known malware.

However, performing this type of malware checking on a mobile device is very inefficient because every device would have to download the full malware database and expend its own energy to perform the comparisons. A much more efficient approach is to host the malware database in the cloud and allow users to upload the signatures of their apps to the cloud for checking. Statistics from Yahoo Aviate have shown that the average user installs approximately 95 apps, but according to G Data, the number of unique mobile malware samples is in the millions. Clearly it is far more efficient for each user to upload a list of 95 signatures than to download a database of millions. This scheme is also advantageous for the anti-malware vendors as they can update their databases in the cloud very frequently, and they need not reveal the database in its entirety to all users. But although it is very efficient, this approach raises serious privacy concerns! It has been shown that the selection of apps installed on a user's device reveals a lot of information about the user, some of which may be privacy-sensitive.

To overcome this challenge, we proposed a solution that uses a server-side TEE to perform a private membership test. As shown in the diagram below, remote attestation assures the user that she is communicating with our TEE. The user then uploads her app (or its hash) directly to the TEE, which checks whether it is in the dictionary. One critical consideration in this approach is that the adversary can potentially monitor the memory access pattern of the TEE, so the TEE cannot simply perform a normal search of the database. Instead, we proposed a new design called a carousel, in which the entire dictionary is continuously circulated through the TEE. As we discovered while implementing this system, with the right data structures to represent the dictionary, the carousel approach can achieve surprisingly good throughput, even beating competing oblivious RAM (ORAM) schemes, because it can process batches of queries simultaneously.

The carousel approach to prevent leaking information about queries from the TEE.
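To make the carousel idea concrete, here is a minimal Python sketch (our own illustration, not the actual implementation; the function name `carousel_pmt` is invented for this example). The key property is that the TEE streams the entire dictionary past a batch of queries and touches every entry exactly once, regardless of the query values, so the memory access pattern reveals nothing about the queries. A real enclave would additionally need truly constant-time comparisons; Python's `==` is only a stand-in here.

```python
import hashlib

def carousel_pmt(dictionary, queries):
    """Answer a batch of membership queries by circulating the whole
    dictionary through the TEE once, independent of the queries."""
    digests = [hashlib.sha256(q).digest() for q in queries]
    results = [False] * len(queries)
    for entry in dictionary:           # touch every entry, always
        d = hashlib.sha256(entry).digest()
        for i, qd in enumerate(digests):
            # |= avoids an early exit on a match, keeping the
            # control flow independent of the query results.
            results[i] |= (d == qd)
    return results

malware_db = [b"evil.apk", b"trojan.apk", b"worm.apk"]
queries = [b"benign.apk", b"trojan.apk"]
print(carousel_pmt(malware_db, queries))  # [False, True]
```

Note how the cost of one carousel pass is amortized over the whole batch of queries, which is the source of the surprisingly good throughput mentioned above.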

Also, as we point out in a recent PETS paper, private membership test (and more generally, private set intersection with unequal set sizes) has many other applications beyond cloud-assisted malware checking. One example we mentioned in the PETS paper is contact discovery in a (mobile) messaging service: when a user installs a messaging client, it checks whether any contacts in the user's address book are already using the messaging service. As it happens, Signal recently announced their implementation of a private contact discovery mechanism using SGX which essentially follows the design we described in our work. It is always gratifying when real-world systems deploy innovations from research!

Protecting password-based authentication systems

How can you be sure your passwords are kept safe, even when sent to a compromised server?

Passwords are undoubtedly the most dominant user authentication mechanism on the web today. This is unlikely to change in the foreseeable future because password-based authentication is inexpensive to deploy; simple to use; well understood; compatible with virtually all devices; and does not require users to have any additional hardware or software.

However, password-based authentication faces various security concerns, including phishing, theft of password databases, and server compromise. For example, the adversary could attempt to trick users into entering their passwords into a phishing site, and then use these to impersonate the users. Alternatively, the adversary could steal a database of hashed passwords from a web server and use this to guess users' passwords. Most worryingly, a stealthy adversary might even compromise an honest server and siphon off passwords in transit as users log in. Users' tendency to re-use passwords across different services further exacerbates these concerns.

Current approaches for protecting passwords are not fully satisfactory for various reasons: they typically address only one of the above challenges; they do not give users any assurance that the protections are actually in use; and they face deployability challenges in terms of cost and/or ease of use for service providers and end users alike.

To address these challenges, we have developed SafeKeeper, a comprehensive approach to protect the confidentiality of passwords on the web. Unlike previous approaches, SafeKeeper protects against very strong adversaries, including so-called rogue servers (i.e., either honest servers that have been compromised, or actively malicious servers), as well as sophisticated external phishers.

At its core, SafeKeeper uses a server-side TEE to perform a keyed one-way function on users' passwords, using a key that is only available to the TEE. This immediately prevents offline password guessing, and thus eliminates the threat of password database theft. Furthermore, using remote attestation, users can establish a secure channel directly from their browsers to the TEE, and receive strong assurance that they are indeed communicating with a legitimate SafeKeeper TEE. When used correctly, these powerful capabilities protect users against password phishing and ensure that even if the server is compromised (or actively malicious), their passwords will be kept safe.

SafeKeeper: Protecting passwords even against an actively malicious web server.
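The keyed one-way function at SafeKeeper's core can be sketched in a few lines of Python (our own illustration, not the actual enclave code; `TEE_KEY`, `tee_protect`, and `verify` are hypothetical names). The essential point is that the key never leaves the TEE, so an attacker who steals the stored values cannot mount offline guessing attacks without it.

```python
import hashlib
import hmac
import os

# Hypothetical stand-in for the key that, in SafeKeeper, is generated
# inside the SGX enclave and never leaves it.
TEE_KEY = os.urandom(32)

def tee_protect(password: bytes) -> bytes:
    """Keyed one-way function applied inside the TEE before storage.
    Without TEE_KEY, the output is useless for offline guessing."""
    return hmac.new(TEE_KEY, password, hashlib.sha256).digest()

def verify(password: bytes, stored: bytes) -> bool:
    """Login check: recompute inside the TEE and compare in
    constant time."""
    return hmac.compare_digest(tee_protect(password), stored)

stored = tee_protect(b"correct horse battery staple")
print(verify(b"correct horse battery staple", stored))  # True
print(verify(b"wrong guess", stored))                   # False
```

In the real system this keyed function is composed with conventional password hashing (as in the PHPass integration described below), rather than replacing it.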

We have implemented SafeKeeper's server-side password protection service using Intel SGX and integrated it with the PHPass password processing framework, which is used in multiple popular platforms, including WordPress and Joomla. We have developed the client-side component as a Google Chrome plugin. Our 86-participant user study confirms that users can use this tool to protect their passwords on the web.

Epilogue: Enabling more common-sense applications with Intel SGX

In the process of designing and implementing these applications, we encountered a couple of challenges that must be solved before the applications can be deployed at scale. The first is that SGX enclaves can only run in "release mode" (i.e., with full memory encryption etc.) if they have been signed by an authorized key, which requires a commercial use license from Intel. However, we expect that this would not be difficult for a company to obtain. The second challenge is that we would like anyone to be able to verify the remote attestation of our enclaves. At the moment, verifying SGX attestation requires users to register their public keys with the Intel Attestation Service (IAS). It is probably not realistic to expect all our end users to do this, so for SafeKeeper we implemented a proxy that accepts attestation quotes from clients, sends them to IAS for verification (using our registered key), and returns the results to the client. Fortunately, IAS signs the verification result with a published Intel key, so clients can verify the response without having to trust our proxy. We are still investigating whether this is the optimal solution going forward.

- Andrew Paverd and N. Asokan

Disclaimer: By way of full disclosure, the authors are part of the Intel Collaborative Research Institute for Secure Computing which receives funding from Intel. But the views in this article reflect the opinion of the authors.

Tuesday 12 September 2017

Leading European cybersecurity research organizations and Intel Labs join forces to conduct research on Collaborative Autonomous & Resilient Systems

The new research institute operates in the fields of drones, self-driving vehicles, and collaborative systems in industrial automation.

Intel and leading European universities in the field of cybersecurity and privacy will open a new initiative for “Collaborative Autonomous & Resilient Systems” (CARS) to tackle security, privacy, and functional safety challenges of autonomous systems that may collaborate with each other – examples are drones, self-driving vehicles, or industrial automation systems.
The goal of the CARS lab is to kick off longer-term collaboration between leading research organizations to enhance resilience and trustworthiness of these cutting-edge technologies. Collaborating institutions include TU Darmstadt (Germany), Aalto University (Finland), Ruhr-Universität Bochum (Germany), TU Wien (Austria), and Luxembourg University (Luxembourg). The coordinator of the new CARS lab will be TU Darmstadt serving as the hub university, where several researchers of Intel Labs are located. The research teams will collaborate with Intel teams on campus to design, prototype, and publish novel and innovative resilient and secure autonomous collaborative systems and to build an ecosystem to further validate their ideas in real-world scenarios.

Successful collaboration continues

The CARS lab is a new Intel Collaborative Lab and a continuation of the extremely successful Intel Collaborative Research Institute for Secure Computing (ICRI-SC), which included TU Darmstadt and Aalto University and focused on mobile and IoT security between 2012 and 2017. Noteworthy achievements resulting from this collaboration include Off-the-Hook, a client-side anti-phishing technique; SafeKeeper, which uses Intel Software Guard Extensions to protect user passwords in web services; and TrustLite, a lightweight security architecture for IoT devices.
In the renewed collaboration, Intel and the – now five – partner universities will focus on the study of security, privacy, and functional safety of autonomous systems that collaborate with each other.
TU Darmstadt aims to build on the foundation of the promising results from the initial work in the Intel institute and will address more complex systems with advanced real-life collaboration and attack models and defenses.
“We have already had five years of very successful and fruitful experience with the collaborative research lab between TU Darmstadt and Intel. Since 2012 our researchers could closely work with an Intel team located at TU Darmstadt leading to highly impactful results in the area of embedded and IoT security. I am confident that the new and much larger CARS lab that integrates top researchers from five European universities is an excellent foundation for a creative, dynamic and highly innovative environment, and will strongly contribute to the new theme of security and safety for autonomous and intelligent systems”, adds Professor Ahmad-Reza Sadeghi from TU Darmstadt.
Aalto University plans to address autonomous systems security by focusing on security and privacy of machine learning and distributed consensus.
“Over the past four years, Intel has invested significantly to support research in our Secure Systems group. Regular interactions with Intel engineers and leaders allow us to gain valuable insights into real-world security and privacy challenges and steer our research accordingly. Being able to identify and work on real problems is hugely motivating for my students and me personally. I am delighted that Intel has decided to extend their collaboration with us in the renewed lab”, explains Professor N. Asokan from Aalto University.
Ruhr-University Bochum (RUB) will advance the security of autonomous platforms and the self-defense capabilities of distributed systems by reconfigurable extensions to protect such systems from advanced attacks.
"We have worked on cyber security in CARS since 2003, and were one of the first groups internationally in this area. We are excited to collaborate with other leading academic institutions and Intel in the emerging area of security for autonomous systems. In particular, we hope that our expertise in hardware and system security will become a valuable part of the collaboration", says Professor Christof Paar from RUB.
Luxembourg University leverages intrusion tolerance and self-healing paradigms, to achieve automatic and long-lived resilience of CARS systems to both faults and hacker attacks.
“As a recently established research group in Luxembourg, we are proud Intel has entrusted our team CritiX at the Interdisciplinary Centre for Security, Reliability and Trust (SnT) to face the resilience challenges of future collaborative autonomous systems. I eagerly look forward to starting work with my colleagues in CARS’ inspiring research environment, with a world-leading hardware manufacturer on our side”, says Professor Paulo Esteves-Veríssimo from Luxembourg University.
TU Wien will contribute with its experience in designing fault-tolerant systems; specifically with non-conventional hardware architectures that are both energy efficient and robust against a large spectrum of faults and attacks.
“TU Wien is now joining the team, and we are thrilled about the opportunity to contribute to this unique research concept. Making systems safe and secure at the same time is definitely a great challenge, and having experts covering different perspectives of that challenge in the team is a first-class prerequisite for coming up with substantially new and sustainable solutions. At the same time the close collaboration with Intel will guide our focus to those aspects that are most relevant for large-scale industrial applications of the future. This is an ideal setting for influential research”, says Professor Andreas Steininger from TU Wien.
More information

Please visit the website of the new CARS lab at the ICRI-SC:

Monday 4 September 2017

Can you live safely in a smart home? And what risks are associated with smart grids and smart vehicles?

A wearable device can make you vulnerable – the security of smart devices will be examined at an Aalto University expert seminar on 7 September

Welcome to hear presentations by top information security experts at Aalto University. The seminar opens with the announcement of a new Intel Labs collaborative initiative in which Aalto University plays a significant role alongside several other international universities.

Time: Thursday 7 September, 9:00–16:30. Venue: Computer Science Building, lecture hall T1, Konemiehentie 1, Espoo.

The seminar features some of the most interesting experts in the information security field. The Finnish speakers are Pekka Sivonen from Tekes, Sasu Tarkoma from the University of Helsinki, and Jari Arkko from Ericsson. The international guests are Ahmad-Reza Sadeghi (TU Darmstadt), William Enck (NC State University), Gene Tsudik (University of California, Irvine), Patrick Traynor (University of Florida), and Mihalis Maniatakos (New York University Abu Dhabi). The seminar, held in English, is hosted by N. Asokan from Aalto University.

The seminar program includes, among others: Ahmad-Reza Sadeghi from TU Darmstadt discusses the vulnerability of Internet of Things devices. Security and hacking risks can affect not only smart grids, smart vehicles, and smart homes, but also personal wearable devices. Research has uncovered numerous vulnerabilities in these devices, and in some situations it may make sense to technically isolate vulnerable devices from secure ones. In his talk, Sadeghi elaborates on the possibilities for automatically identifying and isolating such devices.

Terrence O'Connor (standing in for William Enck) from North Carolina State University discusses the challenges of smart homes and presents ways to improve their security. Every new device connected to a home network threatens the network's security – how can this threat be resolved? Note: due to bad weather, William Enck had to cancel his visit; his talk will be given by Terrence O'Connor.

Jari Arkko from Ericsson Research talks about what IoT devices should be like – and what risks they involve. On a global scale, it is essential that devices are open and interoperable, which enables independence from any single manufacturer, switching between cloud services, and updating device software. Most of the risks stem from deficient security practices, for example manufacturers' use of default passwords.

The seminar is organized by the SEcuring Lifecycle of Internet of Things (SELIoT) project, which aims to secure the entire lifecycle of smart devices. The project partners are Aalto University, the University of Florida, and the University of California, Irvine, and it is funded by the Academy of Finland and the US National Science Foundation (NSF).

Sunday 2 July 2017

Erasmus Mundus Program on Security and Cloud Computing (SECCLO)

Many of the current members of the Secure Systems Group and HAIC have their background in NordSecMob, the long-running Erasmus Mundus Master’s program that is coming to a close. This program has brought exceptional information-security students to Aalto and the partner universities. It has enabled our partner companies to recruit thesis students and graduates who would otherwise be difficult to come by in such numbers. For us teachers and researchers, the diverse backgrounds and experiences of the international students have made a huge difference in the classroom and in research projects.

Therefore, I’m extremely happy to announce that the NordSecMob consortium will continue in the form of a new joint MSc program. We have received European Commission funding for an Erasmus Mundus Joint Masters Degree Program, called Security and Cloud Computing (SECCLO). With a total budget of almost 3M€, the new program will last until 2022, or three intakes of 20+ students each. In addition to being able to offer scholarships to excellent students, we will continue to be listed in the Erasmus Mundus catalogue, where many potential students start their search for MSc programs.

Each student in the new program will spend their first year at Aalto learning fundamental knowledge of information security as well as cloud and mobile computing. After a summer internship in industry, they will continue to one of our partner universities (DTU, EURECOM, KTH, NTNU, or Tartu), each offering their own specialization such as cloud or network security or cryptography. EURECOM in France is a new addition to the consortium. In the last six months of the two-year curriculum, the students will be doing their thesis in industry or with a university research group. Each student will get a Master’s degree from both Aalto and a partner university.

Aalto students might note that the Erasmus Mundus program has the same name, Security and Cloud Computing, as an existing major in our MSc program. Indeed, the only difference is that Erasmus Mundus students leave for a partner university in the second year. The curricula were planned together (as an update of the NordSecMob program and the earlier Security and Mobile Computing major) and the first years are identical. We even plan to offer the same exchange opportunities in the partner universities to all information-security students. If you are a current MSc or BSc student at Aalto focusing on security, please contact Tuomas Aura or N. Asokan to learn more.

As a result of the SECCLO funding, we can expect more outstanding MSc students to come to Aalto every year to major in information security. An important factor in the program proposal was the support and involvement of the partner companies – Nokia Bell Labs, Cybernetica, F-Secure, Guardtime, Intel, and VTT – as well as the HAIC initiative and industry-funded scholarships that enable the long-term sustainability of the program. We invite new companies with an interest in security education and our graduates to join HAIC to ensure the long-term success of the program. Naturally, all the SECCLO students will be part of HAIC. They will seek summer internships and thesis positions during their study program. In other words, SECCLO will greatly amplify our ability to meet the goals behind the setting up of HAIC.

Sunday 12 March 2017

Ethics in information security

Our societies are undergoing pervasive digitalization. It is no exaggeration to say that every facet of human endeavor is being profoundly changed by the use of computing and digital technologies. Naturally such sweeping changes also bring forth ethical issues that computing professionals have to face and deal with. The question is: are they being equipped to deal with such issues?

Ethical concerns in computing are widely recognized. For example, the recent upsurge in the popularity of applying machine learning techniques to a variety of problems has led to several ethical questions. Biases inherent in training data can render machine-learning systems unfair in their decisions. Identifying such sources of unfairness and making machine learning systems accountable is an active research topic. Similarly, the rise of autonomous systems has led to questions like how to deal with the moral aspects of autonomous decision making and how societies can respond to people whose professions may be rendered obsolete by the deployment of autonomous systems.

Ethics in information security: The profession of information security has its own share of ethical considerations it has been grappling with. Among them are: privacy concerns of large scale data collection, the use of end-to-end cryptography in communication systems, wiretapping and large scale surveillance, and the practice of weaponizing software vulnerabilities for the purpose of “offensive security”.

The Vault 7 story: The latter issue was brought forth in dramatic fashion earlier this month when Wikileaks published a collection of documents which they called “Vault 7”. It consisted of information on a very large number of vulnerabilities in popular software platforms like Android and iOS that can be used to compromise end systems based on those platforms. That national intelligence agencies use such vulnerabilities as offensive weapons did not come as a surprise. But the Wikileaks revelation led to a flurry of discussion on the ethics of how vulnerabilities should be handled. Over the years, the information security community has developed best practices for dealing with vulnerabilities. Timely and “responsible disclosure” of vulnerabilities to affected vendors is a cornerstone of such practices. Using vulnerabilities for offence is at odds with responsible disclosure. As George Danezis, a well-known information security expert and professor at University College London, put it, “Not only government ‘Cyber’ doctrine corrupts directly this practice, by hoarding security bugs and feeding an industry that does not contribute to collective computer security, but it also corrupts the process indirectly.” But when a government intelligence agency finds a new vulnerability, the decision on when to disclose it to the vendors concerned is a complex one. As another well-known expert and academic, Matt Blaze, pointed out, on the one hand, an adversary may find the same vulnerability and use it against innocent people and institutions, which calls for immediate disclosure leading to a timely fix. On the other hand, the vulnerability can help intelligence agencies to thwart adversaries from harming innocent people, which is the rationale for delaying disclosure.
Blaze reasoned that this decision should be informed by the likelihood that a vulnerability is rediscovered, but concluded that, despite several studies, there is insufficient understanding of the factors that affect how frequently a vulnerability is likely to be rediscovered.

Equipping infosec professionals: That brings us back to our original question: do information security professionals have the right knowledge, tools, and practices for making judgement calls when confronted with such complex ethical issues? Guidelines for computing ethics have existed for decades. For example, the IEEE Computer Society and ACM published a code of ethics for software engineers back in 1999. But to what extent do such codes reach practitioners and inform their work? There are certainly efforts in this direction. For example, program committees of top information security conferences routinely look for a discussion of “ethical considerations” in submitted research papers that deal with privacy-sensitive data or vulnerabilities in deployed products. They frequently grapple with the issues involved in requiring authors to reveal datasets in the interests of promoting reproducibility of research results, while balancing the interests of the people from whom the data was collected. But this needs to be done more systematically at all levels of the profession.

Ethical considerations in information security cannot be simply outsourced to philosophers and ethicists alone because such considerations will inevitably inform the very nature of the work done by information security professionals. For example, several researchers are developing techniques that allow privacy-preserving training and prediction mechanisms for systems based on machine learning. Similarly, as Matt Blaze pointed out, active research is needed to understand the dynamics of vulnerability rediscovery.

Should undergraduate computer science curricula include mandatory exposure to ethics in computing? Should computer science departments host computing ethicists among their ranks?

Silver lining: Coming back to the Vault 7 episode, there was indeed a silver lining. The focus on amassing weaponized vulnerabilities to attack end systems suggests that the increasing adoption of end-to-end encryption by a wide variety of messaging applications has indeed been successful! Passive wiretapping is likely to be much less effective today than it was only a few years ago.
