Invited Speakers
Dr. Markus Borg is a senior researcher with RISE Research Institutes of Sweden AB. He is also an adjunct lecturer at Lund University, from which he obtained a PhD in software engineering in 2015. His research interests include empirical software engineering, machine learning, and software testing. Before embarking on a career in research, Markus worked at ABB as a software engineer in safety-critical process automation. Markus is also a board member of Swedsoft, an independent non-profit organization with the mission of increasing the competitiveness of Swedish software.
Trained, not coded – Toward Safe AI in the Automotive Domain
Abstract: While Deep Neural Networks (DNNs) have revolutionized applications that rely on computer vision, their characteristics introduce substantial challenges to automotive safety engineering. The behavior of a DNN is not explicitly expressed by an engineer in source code; instead, enormous amounts of annotated data are used to learn a mapping between input and output. Automotive functional safety as defined by ISO 26262 does not match the characteristics of machine learning. New safety standards are currently evolving to meet the pressing industry need. Last year, ISO/PAS 21448 Safety of the Intended Functionality (SOTIF) was published, a stepping-stone toward a new ISO standard. SOTIF is intended to complement functional safety for automotive systems that rely on machine learning. On April 1 this year, the ANSI/UL 4600 Standard for Safety for the Evaluation of Autonomous Vehicles and Other Products was released, covering, but not limited to, the safety of self-driving cars. In this session, we will introduce the challenge of safety for DNN-based perception systems. The session will revolve around SOTIF, but we will also compare and contrast it with UL 4600 and explore how the two standards can complement each other to guide the development of safe AI.
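To make the "trained, not coded" contrast concrete, here is a minimal, purely illustrative Python sketch (not material from the talk): a hand-coded rule whose logic a safety assessor can read line by line, next to a trained classifier whose decision logic lives in learned parameters rather than in source code. The toy features, labels, and thresholds are invented for illustration.

    # Hand-coded behavior: the decision logic is explicit in the source
    # code and can be reviewed during a conventional safety assessment.
    def is_pedestrian_coded(width_m, height_m):
        return 0.3 < width_m < 1.0 and 1.0 < height_m < 2.2

    # Trained behavior: the decision logic is induced from annotated
    # data; no single line of code expresses the input-output mapping.
    from sklearn.linear_model import LogisticRegression

    X = [[0.5, 1.7], [0.4, 1.6], [2.0, 1.5], [1.8, 1.4]]  # (width, height) in meters
    y = [1, 1, 0, 0]                                      # 1 = pedestrian, 0 = vehicle

    model = LogisticRegression().fit(X, y)
    print(model.predict([[0.6, 1.8]]))  # the behavior depends on the training data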
André van Hoorn is a researcher (Akademischer Rat) with the Institute of Software Technology at the University of Stuttgart, Germany, where he was an interim professor (W3-Vertretungsprofessur) for reliable software systems from 2015 to 2017. He received his Ph.D. degree (with distinction) from Kiel University, Germany (2014) and his Master's degree (Dipl.-Inform.) from the University of Oldenburg, Germany (2007). André's research focuses on novel methods, techniques, and tools for designing, operating, and evolving trustworthy distributed software systems. Of particular interest are quality attributes such as performance, reliability, and resilience, and how they can be assessed and optimized using a smart combination of model-based and measurement-based approaches. Currently, André investigates challenges and opportunities in applying such approaches in the context of continuous software engineering and DevOps. André is the principal investigator of several research projects (e.g., funded by DFG, BMBF, and the Baden-Württemberg-Stiftung) spanning basic and applied research, and is actively involved in community activities, e.g., in the scope of the Research Group of the Standard Performance Evaluation Corporation (SPEC). Recently, André served as a PC co-chair of the 9th ACM/SPEC International Conference on Performance Engineering (ICPE 2018).
Performance Engineering for Microservices and Serverless Applications: The RADON approach
Abstract: Microservices and serverless functions are becoming integral parts of modern cloud-based applications. Tailored performance engineering is needed to assure that such applications meet their requirements for quality attributes such as timeliness, resource efficiency, and elasticity. The RADON project is developing a novel DevOps-based framework for building microservices and serverless applications. RADON contributes to performance engineering through novel approaches for modeling, deployment optimization, testing, and runtime management. The tutorial will be presented jointly by André and his colleagues Alim U. Gias, Lulai Zhu, Giuliano Casale (all Imperial College London, UK), and Thomas F. Düllmann, Michael Wurster (both University of Stuttgart, Germany).
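The RADON tooling itself goes well beyond a short listing, but the kind of measurement-based check that such performance engineering automates can be sketched in a few lines of Python. The endpoint URL, request count, and percentile reporting below are hypothetical assumptions for illustration, not part of RADON.

    import statistics
    import time
    import urllib.request

    URL = "http://localhost:8080/checkout"  # hypothetical microservice endpoint

    def measure_latencies(n=100):
        """Issue n sequential requests and record response times in milliseconds."""
        latencies = []
        for _ in range(n):
            start = time.perf_counter()
            urllib.request.urlopen(URL).read()
            latencies.append((time.perf_counter() - start) * 1000)
        return sorted(latencies)

    lat = measure_latencies()
    print(f"median: {statistics.median(lat):.1f} ms")
    print(f"p95:    {lat[int(0.95 * len(lat))]:.1f} ms")  # rough 95th percentile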
Michael Pradel is a full professor at the University of Stuttgart, which he joined after a PhD at ETH Zurich, a post-doc at UC Berkeley, an assistant professorship at TU Darmstadt, and a sabbatical at Facebook. His research interests span software engineering, programming languages, security, and machine learning, with a focus on tools and techniques for building reliable, efficient, and secure software. In particular, he is interested in dynamic program analysis, test generation, concurrency, performance profiling, JavaScript-based web applications, and machine learning-based program analysis. Michael has been awarded the Software Engineering Award of the Ernst-Denert-Foundation for his dissertation, the Emmy Noether grant by the German Research Foundation (DFG), and an ERC Starting Grant.
Analyzing Software using Deep Learning
Abstract: Software developers use tools that automate particular subtasks of the development process. Recent advances in machine learning, in particular deep learning, are enabling tools that had seemed impossible only a few years ago, such as tools that predict what code to write next, which parts of a program are likely to be incorrect, and how to fix software bugs. This tutorial introduces recent techniques developed at the intersection of program analysis and machine learning. We will cover some basics of the two fields, study two recent learning-based analysis tools (DeepBugs and TypeWriter) in more detail, and get some hands-on experience with a simple deep learning-based program analysis.
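As a taste of the hands-on part, the sketch below illustrates the core idea behind DeepBugs in heavily simplified form: swap the arguments of calls mined from correct code to obtain artificial "buggy" negative examples, embed the identifier names as vectors, and train a classifier to tell the two classes apart. The toy call corpus, hand-rolled embedding, and classifier settings are illustrative assumptions; the real tool learns Word2Vec-style embeddings from large code corpora.

    # Correct calls mined from a code corpus: (function, arg1, arg2).
    correct_calls = [
        ("setSize", "width", "height"),
        ("drawRect", "x", "y"),
    ]

    # Seed likely bugs by swapping arguments; these become negative
    # training examples, so no manually labeled bugs are required.
    buggy_calls = [(f, b, a) for (f, a, b) in correct_calls]

    def embed(name):
        """Toy identifier embedding; DeepBugs learns embeddings instead."""
        return [len(name), name.count("x"), name.count("h")]

    X = [embed(f) + embed(a) + embed(b) for (f, a, b) in correct_calls + buggy_calls]
    y = [1] * len(correct_calls) + [0] * len(buggy_calls)  # 1 = likely correct

    from sklearn.neural_network import MLPClassifier
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X, y)
    print(clf.predict([embed("setSize") + embed("height") + embed("width")]))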
Baishakhi Ray is an Assistant Professor in the Department of Computer Science, Columbia University, NY, USA. She received her Ph.D. degree from the University of Texas at Austin. Baishakhi's research interest lies at the intersection of Software Engineering and Machine Learning. Baishakhi has received Best Paper awards at FASE 2020, FSE 2017, MSR 2017, and the IEEE Symposium on Security and Privacy (Oakland) 2014. Her research has also been published in CACM Research Highlights and has been widely covered in trade media. She is a recipient of the NSF CAREER award, the VMware Early Career Faculty Award, and an IBM Faculty Award.
Systematic Software Testing for Deep Learning Applications
Abstract: We are now seeing a paradigm shift in software development, where decision making is increasingly moving from hand-coded program logic to Deep Learning (DL): popular applications in speech processing, image recognition, robotics, and game playing (e.g., Go) use DL as a core component. The Deep Neural Network (DNN), a widely used DL architecture, is the key behind such progress. Alongside this spectacular progress, DL is also increasingly being used in safety-critical systems like autonomous cars, medical diagnosis, malware detection, and aircraft collision avoidance systems. Such wide adoption of DL techniques comes with concerns about the reliability of these systems, as several erroneous behaviors have already been reported. Thus, it has become crucial to rigorously test these DL applications with realistic corner cases to ensure high reliability. However, due to the fundamental architectural differences between DNNs and traditional software, existing software testing techniques do not apply to them in any obvious way. In fact, companies like Google and Tesla are increasingly facing all the traditional software testing challenges in ensuring reliable and safe DL applications. In this talk, I will discuss how to systematically test Deep Learning applications.
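To illustrate what systematic testing with realistic corner cases can look like in practice, here is a minimal metamorphic-testing sketch in the spirit of tools such as DeepTest: natural transformations of an input (here, brightness changes) should not noticeably change a model's prediction. The model handle, the steering-angle task, and the tolerance are illustrative assumptions, not a prescription from the talk.

    import numpy as np

    def brightness_variants(image, deltas=(-40, -20, 20, 40)):
        """Metamorphic transformations: the same scene under different lighting."""
        return [np.clip(image.astype(int) + d, 0, 255).astype(np.uint8)
                for d in deltas]

    def check_consistency(model, image, tolerance_deg=2.0):
        """Metamorphic relation: brightness changes should leave the
        predicted steering angle (in degrees) nearly unchanged."""
        baseline = float(model(image))  # 'model' is any image -> angle callable
        for variant in brightness_variants(image):
            if abs(float(model(variant)) - baseline) > tolerance_deg:
                return False  # flag this input as a corner-case failure
        return True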