Tuesday, 12 September 2017

Leading European cybersecurity research organizations and Intel Labs join forces to conduct research on Collaborative Autonomous & Resilient Systems

The new research institute operates in the fields of drones, self-driving vehicles, and collaborative systems in industrial automation.


Intel and leading European universities in the field of cybersecurity and privacy will open a new initiative on “Collaborative Autonomous & Resilient Systems” (CARS) to tackle the security, privacy, and functional safety challenges of autonomous systems that may collaborate with each other; examples include drones, self-driving vehicles, and industrial automation systems.
The goal of the CARS lab is to kick off longer-term collaboration between leading research organizations to enhance the resilience and trustworthiness of these cutting-edge technologies. The collaborating institutions are TU Darmstadt (Germany), Aalto University (Finland), Ruhr-Universität Bochum (Germany), TU Wien (Austria), and Luxembourg University (Luxembourg). TU Darmstadt will coordinate the new CARS lab, serving as the hub university where several Intel Labs researchers are located. The research teams will collaborate with Intel teams on campus to design, prototype, and publish novel and innovative resilient and secure autonomous collaborative systems, and to build an ecosystem to further validate their ideas in real-world scenarios.


Successful collaboration continues

The CARS lab is a new Intel Collaborative Lab, continuing the highly successful Intel Collaborative Research Institute for Secure Computing (ICRI-SC), which included TU Darmstadt and Aalto University and focused on mobile and IoT security between 2012 and 2017. Noteworthy achievements of this collaboration include Off-the-Hook, a client-side anti-phishing technique; SafeKeeper, which uses Intel Software Guard Extensions to protect user passwords in web services; and TrustLite, a lightweight security architecture for IoT devices.
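To give a flavour of the underlying idea, here is a minimal, hypothetical Python sketch of keyed password protection: a protection service holds a secret key that never leaves an isolated component, so that a stolen database of protected passwords alone is not enough for offline guessing. This is only an illustration of the general approach, not the SafeKeeper implementation; SafeKeeper itself relies on an SGX enclave for the isolated computation, and the names below are purely illustrative.

```python
# Minimal sketch (illustrative only, not the SafeKeeper code): an isolated
# password-protection service applies a keyed MAC to passwords, so the
# untrusted web server stores only salts and tags. Without the key, a leaked
# database cannot be attacked by offline guessing.
import hmac
import hashlib
import os

class ProtectionService:
    """Stands in for the isolated component (an SGX enclave in SafeKeeper)."""
    def __init__(self):
        self._key = os.urandom(32)   # the key never leaves this component

    def protect(self, password: bytes, salt: bytes) -> bytes:
        return hmac.new(self._key, salt + password, hashlib.sha256).digest()

# Untrusted server side: stores only (salt, tag) pairs per user.
service = ProtectionService()
db = {}

def register(user: str, password: str) -> None:
    salt = os.urandom(16)
    db[user] = (salt, service.protect(password.encode(), salt))

def verify(user: str, password: str) -> bool:
    salt, tag = db[user]
    return hmac.compare_digest(tag, service.protect(password.encode(), salt))

register("alice", "correct horse battery staple")
assert verify("alice", "correct horse battery staple")
assert not verify("alice", "wrong guess")
```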
In the renewed collaboration, Intel and the now five partner universities focus on the study of security, privacy, and functional safety of autonomous systems that collaborate with each other.
TU Darmstadt aims to build on the foundation of the promising results from the initial work in the Intel institute and will address more complex systems with advanced real-life collaboration and attack models and defenses.
“We have already had five years of very successful and fruitful experience with the collaborative research lab between TU Darmstadt and Intel. Since 2012, our researchers have been able to work closely with an Intel team located at TU Darmstadt, leading to highly impactful results in the area of embedded and IoT security. I am confident that the new and much larger CARS lab that integrates top researchers from five European universities is an excellent foundation for a creative, dynamic and highly innovative environment, and will strongly contribute to the new theme of security and safety for autonomous and intelligent systems”, adds Professor Ahmad-Reza Sadeghi from TU Darmstadt.
Aalto University plans to address autonomous systems security by focusing on security and privacy of machine learning and distributed consensus.
“Over the past four years, Intel has invested significantly to support research in our Secure Systems group. Regular interactions with Intel engineers and leaders allow us to gain valuable insights into real-world security and privacy challenges and steer our research accordingly. Being able to identify and work on real problems is hugely motivating for my students and me personally. I am delighted that Intel has decided to extend their collaboration with us in the renewed lab”, explains Professor N. Asokan from Aalto University.
Ruhr-University Bochum (RUB) will advance the security of autonomous platforms and the self-defense capabilities of distributed systems through reconfigurable extensions that protect such systems from advanced attacks.
"We have worked on cyber security in CARS since 2003, and were one of the first groups internationally in this area. We are excited to collaborate with other leading academic institutions and Intel in the emerging area of security for autonomous systems. In particular, we hope that our expertise in hardware and system security will become a valuable part of the collaboration", says Professor Christof Paar from RUB.
Luxembourg University leverages intrusion tolerance and self-healing paradigms to achieve automatic and long-lived resilience of CARS systems to both faults and hacker attacks.
“As a recently established research group in Luxembourg, we are proud that Intel has entrusted our CritiX team at the Interdisciplinary Centre for Security, Reliability and Trust (SnT) with the resilience challenges of future collaborative autonomous systems. I eagerly look forward to starting work with my colleagues in CARS’ inspiring research environment, with a world-leading hardware manufacturer on our side”, says Professor Paulo Esteves-Veríssimo from Luxembourg University.
TU Wien will contribute with its experience in designing fault-tolerant systems; specifically with non-conventional hardware architectures that are both energy efficient and robust against a large spectrum of faults and attacks.
“TU Wien is now joining the team, and we are thrilled about the opportunity to contribute to this unique research concept. Making systems safe and secure at the same time is definitely a great challenge, and having experts covering different perspectives of that challenge in the team is a first-class prerequisite for coming up with substantially new and sustainable solutions. At the same time the close collaboration with Intel will guide our focus to those aspects that are most relevant for large-scale industrial applications of the future. This is an ideal setting for influential research”, says Professor Andreas Steininger from TU Wien.
More information

Please visit the website of the new CARS lab at the ICRI-SC: www.icri-sc.org

Monday, 4 September 2017

Can you live safely in a smart home? And what are the risks of smart grids and smart vehicles?

A wearable device can make you vulnerable: the security of smart devices will be examined at an Aalto University expert seminar on 7 September

Welcome to Aalto University to hear talks by leading information security experts. The seminar opens with the announcement of a new Intel Labs collaboration initiative in which Aalto University plays a significant role alongside several other international universities.

Time: Thursday 7 September, 9:00–16:30. Place: Computer Science Building (Tietotekniikan talo), lecture hall T1, Konemiehentie 1, Espoo.

The seminar features some of the most interesting experts in the information security field. The Finnish speakers are Pekka Sivonen from Tekes, Sasu Tarkoma from the University of Helsinki, and Jari Arkko from Ericsson. The international guests are Ahmad-Reza Sadeghi (TU Darmstadt), William Enck (NC State University), Gene Tsudik (University of California, Irvine), Patrick Traynor (University of Florida), and Mihalis Maniatakos (New York University Abu Dhabi). The English-language seminar is hosted by N. Asokan of Aalto University.

Highlights of the seminar program: Ahmad-Reza Sadeghi of TU Darmstadt will examine the vulnerability of Internet of Things devices. Security and hacking risks concern not only smart grids, smart vehicles, and smart homes, but also personal wearable devices. Research has uncovered numerous vulnerabilities in such devices, and in some situations it may make sense to technically isolate vulnerable devices from secure ones. In his talk, Sadeghi will discuss in more detail the possibilities for automatically identifying and isolating such devices.
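As a rough, purely hypothetical illustration of what automatic identification and isolation could look like, the Python sketch below (not the system discussed in the talk; the device profiles and traffic features are invented) fingerprints a device from coarse traffic features and assigns devices matching a known-vulnerable profile to a quarantined network segment.

```python
# Toy illustration (hypothetical): fingerprint a device from coarse traffic
# features and move devices whose fingerprint matches a known-vulnerable
# profile into an isolated network segment.
from dataclasses import dataclass

@dataclass(frozen=True)
class Fingerprint:
    dst_ports: frozenset      # destination ports the device talks to
    avg_packet_size: int      # average packet size, rounded to nearest 50 bytes

# Invented profiles of device types with known unpatched vulnerabilities.
VULNERABLE_PROFILES = {
    Fingerprint(frozenset({80, 1900}), 200),   # e.g. an old UPnP camera
    Fingerprint(frozenset({23}), 100),         # e.g. a telnet-only smart plug
}

def fingerprint(flows):
    """flows: list of (dst_port, packet_size) observations for one device."""
    ports = frozenset(dst_port for dst_port, _ in flows)
    avg = round(sum(size for _, size in flows) / len(flows) / 50) * 50
    return Fingerprint(ports, avg)

def assign_segment(flows) -> str:
    fp = fingerprint(flows)
    return "quarantine-vlan" if fp in VULNERABLE_PROFILES else "trusted-vlan"

# A device that only speaks telnet with small packets gets isolated.
print(assign_segment([(23, 90), (23, 110)]))      # -> quarantine-vlan
print(assign_segment([(443, 600), (443, 800)]))   # -> trusted-vlan
```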

Terrence O'Connor of North Carolina State University (standing in for William Enck) will address the challenges of smart homes and present ways to improve their security. Every new device connected to the home network is a threat to the network's security; how can this threat be addressed? Note: due to poor weather conditions, William Enck had to cancel his trip, and his talk will be given by Terrence O'Connor.

Jari Arkko of Ericsson Research will discuss what Internet of Things devices ought to look like, and what risks they involve. On a global scale, it is essential that devices are open and interoperable, which enables independence from any single device manufacturer, switching between cloud services, and updating device software. Most of the risks stem from shortcomings in security practices, for example the default passwords set by manufacturers.

The seminar is organized by the SEcuring Lifecycle of Internet of Things (SELIoT) project, which aims to secure the entire lifecycle of smart devices. The project partners are Aalto University, the University of Florida, and the University of California, Irvine, and it is funded by the Academy of Finland and the U.S. National Science Foundation (NSF).



Sunday, 2 July 2017

Erasmus Mundus Program on Security and Cloud Computing (SECCLO)

Many of the current members of the Secure Systems Group and HAIC have their background in NordSecMob, the long-running Erasmus Mundus Master’s program that is coming to a close. This program has brought exceptional information-security students to Aalto and the partner universities. It has enabled our partner companies to recruit thesis students and graduates who would otherwise be difficult to come by in such numbers. For us teachers and researchers, the diverse backgrounds and experiences of the international students have made a huge difference in the classroom and in research projects.

Therefore, I’m extremely happy to announce that the NordSecMob consortium will continue in the form of a new joint MSc program. We have received European Commission funding for an Erasmus Mundus Joint Masters Degree Program, called Security and Cloud Computing (SECCLO). With a total budget of almost 3M€, the new program will last until 2022, or three intakes of 20+ students each. In addition to being able to offer scholarships to excellent students, we will continue to be listed in the Erasmus Mundus catalogue, where many potential students start their search for MSc programs.

Each student in the new program will spend their first year at Aalto learning the fundamentals of information security as well as cloud and mobile computing. After a summer internship in industry, they will continue to one of our partner universities (DTU, EURECOM, KTH, NTNU, or Tartu), each offering its own specialization such as cloud security, network security, or cryptography. EURECOM in France is a new addition to the consortium. In the last six months of the two-year curriculum, the students will do their thesis in industry or with a university research group. Each student will receive a Master’s degree from both Aalto and a partner university.

Aalto students might note that the Erasmus Mundus program has the same name, Security and Cloud Computing, as an existing major in our MSc program. Indeed, the only difference is that Erasmus Mundus students leave for a partner university in the second year. The curricula were planned together (as an update of the NordSecMob program and the earlier Security and Mobile Computing major), and the first years are identical. We even plan to offer the same exchange opportunities at the partner universities to all information-security students. If you are a current MSc or BSc student at Aalto focusing on security, please contact Tuomas Aura or N. Asokan to learn more.

As a result of the SECCLO funding, we can expect more outstanding MSc students to come to Aalto every year to major in information security. An important factor in the program proposal was the support and involvement of the partner companies, Nokia Bell Labs, Cybernetica, F-Secure, Guardtime, Intel and VTT, as well as the HAIC initiative and industry-funded scholarships that enable the long-term sustainability of the program. We invite new companies with an interest in security education and our graduates to join HAIC to ensure the long-term success of the program. Naturally, all the SECCLO students will be part of HAIC. They will seek summer internships and thesis positions during their study program. In other words, SECCLO will greatly amplify our ability to meet the goals behind the setting up of HAIC.

Sunday, 12 March 2017

Ethics in information security

Our societies are undergoing pervasive digitalization. It is no exaggeration to say that every facet of human endeavor is being profoundly changed by the use of computing and digital technologies. Naturally, such sweeping changes also bring forth ethical issues that computing professionals have to face and deal with. The question is: are they being equipped to deal with such issues?

Ethical concerns in computing are widely recognized. For example, the recent upsurge in the popularity of applying machine learning techniques to a variety of problems has raised several ethical questions. Biases inherent in training data can render systems based on machine learning unfair in their decisions. Identifying such sources of unfairness and making machine learning systems accountable is an active research topic. Similarly, the rise of autonomous systems has led to questions such as how to deal with the moral aspects of autonomous decision making, and how societies can respond to people whose professions may be rendered obsolete by the deployment of autonomous systems.

Ethics in information security: The profession of information security has its own share of ethical considerations that it has been grappling with. Among them are privacy concerns around large-scale data collection, the use of end-to-end cryptography in communication systems, wiretapping and large-scale surveillance, and the practice of weaponizing software vulnerabilities for the purpose of “offensive security”.

The Vault 7 story: The latter issue was brought forth in dramatic fashion earlier this month when Wikileaks published a collection of documents which they called “Vault 7”. It consisted of information on a very large number of vulnerabilities in popular software platforms like Android and iOS that can be used to compromise end systems based on those platforms. That national intelligence agencies use such vulnerabilities as offensive weapons did not come as a surprise. But the Wikileaks revelation led to a flurry of discussion on the ethics of how vulnerabilities should be handled. Over the years, the information security community has developed best practices for dealing with vulnerabilities. Timely and “responsible disclosure” of vulnerabilities to affected vendors is a cornerstone of such practices. Using vulnerabilities for offence is at odds with responsible disclosure. As George Danezis, a well-known information security expert and professor at University College London, put it, “Not only government “Cyber” doctrine corrupts directly this practice, by hoarding security bugs and feeding an industry that does not contribute to collective computer security, but it also corrupts the process indirectly.” But when a government intelligence agency finds a new vulnerability, the decision on when to disclose it to the vendors concerned is a complex one. As another well-known expert and academic, Matt Blaze, pointed out, on the one hand, an adversary may find the same vulnerability and use it against innocent people and institutions, which calls for immediate disclosure leading to a timely fix. On the other hand, the vulnerability can help intelligence agencies prevent adversaries from harming innocent people, which is the rationale for delaying disclosure. Blaze reasoned that this decision should be informed by the likelihood that a vulnerability is rediscovered, but concluded that despite several studies there is insufficient understanding of the factors that affect how frequently a vulnerability is likely to be rediscovered.
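To make the trade-off concrete, here is a deliberately simplistic back-of-envelope calculation in Python. The numbers are invented and this is not Blaze's analysis; it merely shows how the disclose-versus-withhold decision can flip as the assumed rediscovery probability changes.

```python
# Toy expected-cost comparison (invented numbers, for illustration only):
# disclosing immediately forgoes intelligence value but removes the risk of
# adversary exploitation once a fix ships; withholding keeps the intelligence
# value but risks exploitation if an adversary independently rediscovers the
# vulnerability during the withholding period.

def expected_costs(p_rediscovery, harm_if_exploited, intel_value_forgone):
    cost_disclose = intel_value_forgone
    cost_withhold = p_rediscovery * harm_if_exploited
    return cost_disclose, cost_withhold

for p in (0.05, 0.2, 0.5):
    disclose, withhold = expected_costs(p, harm_if_exploited=100, intel_value_forgone=10)
    better = "disclose" if disclose < withhold else "withhold"
    print(f"p_rediscovery={p:.2f}: disclose={disclose}, withhold={withhold} -> {better}")
```

With these toy numbers, withholding looks preferable only when rediscovery is assumed to be rare, which is exactly why the empirical question of rediscovery rates matters.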

Equipping infosec professionals: That brings us back to our original question: do information security professionals have the right knowledge, tools, and practices for making judgement calls when confronted with such complex ethical issues? Guidelines for computing ethics have existed for decades. For example, the IEEE Computer Society and ACM published a code of ethics for software engineers back in 1999. But to what extent do such codes reach practitioners and inform their work? There are certainly efforts in this direction. For example, program committees of top information security conferences routinely look for a discussion of “ethical considerations” in submitted research papers that deal with privacy-sensitive data or vulnerabilities in deployed products. They frequently grapple with the issues involved in requiring authors to release datasets in the interest of promoting reproducibility of research results, while balancing the interests of the people from whom the data was collected. But this needs to be done more systematically at all levels of the profession.

Ethical considerations in information security cannot be simply outsourced to philosophers and ethicists alone because such considerations will inevitably inform the very nature of the work done by information security professionals. For example, several researchers are developing techniques that allow privacy-preserving training and prediction mechanisms for systems based on machine learning. Similarly, as Matt Blaze pointed out, active research is needed to understand the dynamics of vulnerability rediscovery.

Should undergraduate computer science curricula include mandatory exposure to ethics in computing? Should computer science departments host computing ethicists among their ranks?

Silver lining: Coming back to the Vault 7 episode, there was indeed a silver lining. The focus on amassing weaponized vulnerabilities to attack end systems suggests that the increasing adoption of end-to-end encryption by a wide variety of messaging applications has indeed been successful! Passive wiretapping is likely to be much less effective today than it was only a few years ago.