Tuesday 12 September 2017

Leading European cybersecurity research organizations and Intel Labs join forces to conduct research on Collaborative Autonomous & Resilient Systems

The new research institute operates in the fields of drones, self-driving vehicles, and collaborative systems in industrial automation.

Intel and leading European universities in the field of cybersecurity and privacy will open a new initiative on “Collaborative Autonomous & Resilient Systems” (CARS) to tackle the security, privacy, and functional safety challenges of autonomous systems that collaborate with each other, such as drones, self-driving vehicles, and industrial automation systems.
The goal of the CARS lab is to kick off a longer-term collaboration between leading research organizations to enhance the resilience and trustworthiness of these cutting-edge technologies. Collaborating institutions include TU Darmstadt (Germany), Aalto University (Finland), Ruhr-Universität Bochum (Germany), TU Wien (Austria), and Luxembourg University (Luxembourg). TU Darmstadt will coordinate the new CARS lab and serve as the hub university, where several Intel Labs researchers are located. The research teams will collaborate with Intel teams on campus to design, prototype, and publish novel and innovative resilient and secure autonomous collaborative systems, and to build an ecosystem for validating their ideas in real-world scenarios.

Successful collaboration continues

The CARS lab is a new Intel Collaborative Lab that continues the highly successful Intel Collaborative Research Institute for Secure Computing (ICRI-SC), which included TU Darmstadt and Aalto University and focused on mobile and IoT security between 2012 and 2017. Noteworthy achievements of this collaboration include Off-the-Hook, a client-side anti-phishing technique; SafeKeeper, which uses Intel Software Guard Extensions to protect user passwords in web services; and TrustLite, a lightweight security architecture for IoT devices.
In the renewed collaboration, Intel and the – now five – partner universities will focus on the security, privacy, and functional safety of autonomous systems that collaborate with each other.
TU Darmstadt aims to build on the promising results of the initial work in the Intel institute and will address more complex systems with advanced real-life collaboration, attack models, and defenses.
“We have already had five years of very successful and fruitful experience with the collaborative research lab between TU Darmstadt and Intel. Since 2012, our researchers have been able to work closely with an Intel team located at TU Darmstadt, leading to highly impactful results in the area of embedded and IoT security. I am confident that the new and much larger CARS lab, which integrates top researchers from five European universities, is an excellent foundation for a creative, dynamic, and highly innovative environment, and will strongly contribute to the new theme of security and safety for autonomous and intelligent systems”, says Professor Ahmad-Reza Sadeghi from TU Darmstadt.
Aalto University plans to address autonomous systems security by focusing on security and privacy of machine learning and distributed consensus.
“Over the past four years, Intel has invested significantly to support research in our Secure Systems group. Regular interactions with Intel engineers and leaders allow us to gain valuable insights into real-world security and privacy challenges and steer our research accordingly. Being able to identify and work on real problems is hugely motivating for my students and me personally. I am delighted that Intel has decided to extend their collaboration with us in the renewed lab”, explains Professor N. Asokan from Aalto University.
Ruhr-University Bochum (RUB) will advance the security of autonomous platforms and the self-defense capabilities of distributed systems through reconfigurable extensions that protect such systems from advanced attacks.
"We have worked on cyber security since 2003 and were one of the first groups internationally in this area. We are excited to collaborate with other leading academic institutions and Intel in the emerging area of security for autonomous systems. In particular, we hope that our expertise in hardware and system security will become a valuable part of the collaboration", says Professor Christof Paar from RUB.
Luxembourg University leverages intrusion-tolerance and self-healing paradigms to achieve automatic and long-lived resilience of CARS systems against both faults and hacker attacks.
“As a recently established research group in Luxembourg, we are proud that Intel has entrusted our team CritiX at the Interdisciplinary Centre for Security, Reliability and Trust (SnT) to face the resilience challenges of future collaborative autonomous systems. I eagerly look forward to working with my colleagues in CARS’ inspiring research environment, with a world-leading hardware manufacturer on our side”, says Professor Paulo Esteves-Veríssimo from Luxembourg University.
TU Wien will contribute its experience in designing fault-tolerant systems, specifically non-conventional hardware architectures that are both energy-efficient and robust against a large spectrum of faults and attacks.
“TU Wien is now joining the team, and we are thrilled about the opportunity to contribute to this unique research concept. Making systems safe and secure at the same time is definitely a great challenge, and having experts covering different perspectives of that challenge in the team is a first-class prerequisite for coming up with substantially new and sustainable solutions. At the same time the close collaboration with Intel will guide our focus to those aspects that are most relevant for large-scale industrial applications of the future. This is an ideal setting for influential research”, says Professor Andreas Steininger from TU Wien.
More information

Please visit the website of the new CARS lab at the ICRI-SC: www.icri-sc.org
