CMMRS will include lectures from seven professors from the University of Maryland, Cornell University, and the Max Planck Institute for Software Systems.  Lectures will cover a diverse range of topics.  The full list of lectures and topics is yet to be finalized; for examples of lecturers and topics from previous years, see the lectures from Previous Editions of CMMRS.

Mariya Toneva, Max Planck Institute for Software Systems (MPI-SWS)
Bridging language in machines with language in the brain

The field of natural language processing (NLP) has undergone a revolution in the last few years, brought about by the availability of large training datasets, improved computational power, and better optimization methods. In the first part of the lecture, we will introduce the recent advances in NLP and discuss some outstanding challenges. In the second part, we will discuss how we can quantify the similarity between NLP machines and the only processing system that we know truly understands language: the human brain. We will finish off with a deep dive into methods that improve the alignment between language in machines and language in the brain.
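One common way to quantify model-brain similarity of the kind described above is an "encoding model": fit a linear map from a language model's representations to recorded brain responses and score its held-out predictions. The sketch below is purely illustrative, using synthetic data in place of real model features and brain recordings; the dimensions, ridge penalty, and variable names are all assumptions, not details from the lecture.

```python
import numpy as np

# Illustrative encoding-model sketch: synthetic "model features" X and
# synthetic "brain responses" Y (real studies use LM activations and fMRI/MEG).
rng = np.random.default_rng(0)
n_words, n_feats, n_voxels = 200, 32, 10
X = rng.normal(size=(n_words, n_feats))                      # model features
W_true = rng.normal(size=(n_feats, n_voxels))                # hidden mapping
Y = X @ W_true + 0.5 * rng.normal(size=(n_words, n_voxels))  # noisy responses

# Train/test split over stimuli (words).
X_tr, X_te, Y_tr, Y_te = X[:150], X[150:], Y[:150], Y[150:]

# Ridge regression: W = (X'X + lam*I)^{-1} X'Y
lam = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_feats), X_tr.T @ Y_tr)
pred = X_te @ W

# Alignment score: mean per-voxel correlation of predictions with held-out data.
score = np.mean([np.corrcoef(pred[:, v], Y_te[:, v])[0, 1]
                 for v in range(n_voxels)])
assert score > 0.5  # features carrying the signal predict responses well
```

In real alignment studies, the regression target is recorded neural data and the features come from a trained NLP model; the correlation-based scoring shown here is one standard choice among several.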

Soheil Feizi, University of Maryland
Understanding adversarial and distributional robustness in deep learning and the interplay between them

Over the last decade, deep models have enjoyed wide empirical success. In practice, however, many state-of-the-art models are currently not reliable due to their sensitivity to adversarial or natural input distributional shifts. In reliable deep learning, predictions made by models should be robust: if inputs to the system change insignificantly, the output predictions should have moderate changes as well. Such changes may occur either adversarially (e.g., when the perception system of a self-driving car encounters a “stop sign” with an adversarial patch on it) or naturally (e.g., when a self-driving car encounters a sample with a different illumination than that of training samples). In these lectures, I will explain recent advances in reliable deep learning, discuss the interplay between adversarial and distributional robustness, and outline a roadmap towards developing trustworthy learning paradigms.
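The robustness property described above can be made concrete with a tiny example. The sketch below applies a fast-gradient-sign-style perturbation to a toy linear classifier; the classifier, its weights, and the perturbation budget are all made up for illustration and are not from the lecture material.

```python
import numpy as np

# Toy illustration of an adversarial perturbation (FGSM-style) on a
# linear classifier. All weights and inputs here are synthetic.
rng = np.random.default_rng(0)
w = rng.normal(size=8)          # weights of a toy linear classifier
x = rng.normal(size=8)          # a clean input
y = 1.0 if w @ x > 0 else -1.0  # treat the clean prediction as the label

def margin(v):
    """Signed margin: positive means the classifier agrees with y."""
    return y * (w @ v)

# Fast-gradient-sign step: nudge each coordinate of x by epsilon in the
# direction that decreases the margin (the gradient of margin w.r.t. x is y*w).
epsilon = 0.3
x_adv = x - epsilon * np.sign(y * w)

# Each coordinate moves by at most epsilon, yet the margin drops by
# epsilon * sum(|w_i|) -- a small input change, a large output change.
assert np.max(np.abs(x_adv - x)) <= epsilon + 1e-12
assert margin(x_adv) < margin(x)
```

For a linear model the margin loss is exactly epsilon times the L1 norm of the weights, which is why even per-coordinate-tiny perturbations can flip predictions when the dimension is large.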

Robbert van Renesse, Cornell University
Topics in State Machine Replication

State Machine Replication (SMR) is a general approach to improving the fault tolerance of systems. It is the backbone of many reliable cloud services and also the basis for consistent blockchains and smart contract execution. Starting from general principles, we will investigate approaches to implementing SMR under a variety of failure and timing models. We will look into how such implementations can reconfigure themselves, including the difficult problem of seamlessly adding new servers. Finally, we will consider scalability and efficiency and demonstrate a new design that can achieve almost unlimited throughput at low tail latency.
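The core invariant behind SMR can be sketched in a few lines: replicas that apply the same deterministic commands in the same order reach the same state. The toy code below is a hypothetical illustration (it assumes the command order has already been agreed upon, e.g. by a consensus protocol, which is the hard part the lectures address); it is not any production protocol.

```python
# Toy SMR sketch: deterministic replicas applying an agreed-upon log.
class Replica:
    def __init__(self):
        self.state = 0
        self.log = []

    def apply(self, cmd):
        # Commands must be deterministic for replicas to stay in sync.
        op, arg = cmd
        if op == "add":
            self.state += arg
        elif op == "mul":
            self.state *= arg
        self.log.append(cmd)

# In a real system this order is agreed via consensus (e.g., Paxos/Raft).
log = [("add", 5), ("mul", 3), ("add", -2)]

r1, r2, r3 = Replica(), Replica(), Replica()
for cmd in log:
    for r in (r1, r2, r3):
        r.apply(cmd)

# Same commands, same order, same deterministic transitions => same state.
assert r1.state == r2.state == r3.state == 13  # (0+5)*3-2
```

Everything the lectures cover — failure models, reconfiguration, adding servers — concerns how the replicas agree on that log and stay available while doing so.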

Viktor Vafeiadis, Max Planck Institute for Software Systems (MPI-SWS)
An Introduction to Weak Consistency and Persistency

Consider a multi-threaded program that persists its data to disk or to non-volatile memory. In these two lectures, we will ask three questions about such a program: (1) how can we formally define its semantics? (2) how can we specify it? and (3) how can we verify that it is correct?

Naively, one might think that its threads are interleaved in some arbitrary order, and that the observable state after a crash corresponds precisely to the updates that were executed before the crash. But sadly, neither of these assumptions is correct. Due to compiler and hardware optimisations, program instructions are often executed, propagated, and persisted out of order. This leads to complex consistency and persistency semantics, which has implications for the verification of such programs.
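The classic "store buffering" litmus test makes the failure of the naive interleaving model concrete. The sketch below (an illustrative simulation, not real concurrent code) enumerates all interleavings of two two-instruction threads and shows that the outcome r1 == r2 == 0 is impossible under sequentially consistent semantics; yet real hardware with store buffers can and does produce exactly that outcome.

```python
# Store-buffering litmus test, simulated under sequential consistency.
# Thread 1: x = 1; r1 = y        Thread 2: y = 1; r2 = x
T1 = [("write", "x", 1), ("read", "y", "r1")]
T2 = [("write", "y", 1), ("read", "x", "r2")]

def run(schedule):
    """Execute one interleaved schedule against zero-initialized memory."""
    mem = {"x": 0, "y": 0}
    regs = {}
    for kind, loc, val in schedule:
        if kind == "write":
            mem[loc] = val
        else:
            regs[val] = mem[loc]
    return regs

def interleavings(a, b):
    """All interleavings of a and b that preserve each thread's program order."""
    if not a:
        yield b
        return
    if not b:
        yield a
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

outcomes = {tuple(sorted(run(s).items())) for s in interleavings(T1, T2)}

# Under sequential consistency only three outcomes exist; r1 == r2 == 0 is
# not among them. x86 store buffers, however, can delay both writes past
# both reads and produce exactly r1 == r2 == 0.
assert (("r1", 0), ("r2", 0)) not in outcomes
assert len(outcomes) == 3
```

Persistency adds a further twist the simulation above does not model: even writes that have become visible to other threads may not yet have reached non-volatile memory when a crash occurs.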

Thomas Ristenpart, Cornell Tech and Cornell University
Computer Security Research with At-Risk Populations

There is an increasing appreciation for the unique computer security concerns of at-risk users — those who are either more likely to be suffering active digital attacks or who, should they be targeted, may suffer outsized negative effects. In this lecture series, I’ll use our work on technology abuse in the context of intimate partner violence (IPV) as an in-depth case study of the need for new research and advocacy methods in computer security. In the first part, I’ll go over our six-year research and advocacy agenda in IPV, which has culminated in establishing the Clinic to End Tech Abuse (CETA). CETA has so far worked to help hundreds of survivors of IPV in New York City. In the second part, I’ll generalize from this case study, discussing our new thoughts on trauma-informed computing and, in particular, our principles and practical strategies for guiding research with at-risk populations. These lectures will include descriptions of physical, emotional, and sexual violence.

Tom Goldstein, University of Maryland
End-to-end algorithm synthesis with thinking networks

This talk will present my lab’s recent work on neural networks for symbolic reasoning. These systems use recurrent networks to emulate a human-like thinking process, in which a logical reasoning problem is represented in memory and then iteratively manipulated and simplified over time until a solution is found. When these models are trained only on “easy” problem instances, they synthesize and represent algorithms. These neural algorithms can then solve “hard” problem instances without having ever seen one, provided the model is allowed to “think” for longer at test time.
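The key mechanism — one shared recurrent update applied for more steps at test time than at training time — can be illustrated with a toy stand-in. The sketch below is purely illustrative and is not the lab's actual models: the "thought" here is a damped iterative step solving a linear system, chosen only because running it longer visibly sharpens the answer.

```python
import numpy as np

# Toy stand-in for a "thinking network": the same update is applied
# repeatedly (weight sharing across iterations), and extra test-time
# iterations improve the solution. Illustrative only.
rng = np.random.default_rng(1)
A = np.eye(6) + 0.1 * rng.normal(size=(6, 6))   # well-conditioned system
b = rng.normal(size=6)
exact = np.linalg.solve(A, b)

def think(steps):
    """Apply the same recurrent update `steps` times."""
    h = np.zeros_like(b)
    for _ in range(steps):
        h = h + 0.5 * (b - A @ h)   # one "thought": move toward A h = b
    return h

err_short = np.linalg.norm(think(5) - exact)    # a training-time budget
err_long = np.linalg.norm(think(50) - exact)    # extra test-time thinking

# Thinking longer with the *same* update yields a strictly better answer.
assert err_long < err_short
```

The analogy is loose — the actual models learn their update from easy instances — but it captures the claim that iteration count, not new parameters, is what scales the computation at test time.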

Emma Pierson, Cornell Tech, Technion, and Cornell University
Data science for social equality

Asia Biega, Max Planck Institute for Security and Privacy (MPI-SP)
Responsible Information Access Systems

Information access systems (such as search or recommendation systems) are much more than the algorithms that power them. They’re sociotechnical: they learn from user behaviour, process personal data, and influence people’s views and decisions. Because of their potential negative impacts, these systems are increasingly regulated and redesigned to respect different societal and ethical constraints. In the first part, this lecture will discuss potential negative societal and individual impacts of information access systems and identify their sources. We will then learn about different computational and non-computational impact mitigation strategies. In the second part, we will take a deep dive into some of the recent research operationalizing legal constraints in information access systems, demonstrating the key challenges and research opportunities in this area.

Christof Paar, Max Planck Institute for Security and Privacy (MPI-SP)
Fun Ways to Build Hardware Trojans: From Transistors to Firmware Manipulation

Krishna Gummadi, Max Planck Institute for Software Systems (MPI-SWS)
Foundations for Fair Social Computing