The COVID-19 pandemic forced us to postpone the celebration to 2023
Danupon Nanongkai | Managing Director of the Max Planck Institute for Informatics
Manfred Schmitt | President of Saarland University
Wolfgang Förster | State Secretary for Finance and Science
Patrick Cramer | President of the Max Planck Society
Machine learning has made great advances in all fields of science and technology. Notably, large language models (LLMs) have been in the spotlight of public and scientific attention for some time now. They represent an important class of foundation models, that is, models that can be adapted to a wide range of downstream tasks; this is done by leveraging word embeddings derived from large text corpora. Less well known is that a specialized class of foundation models has arisen in computational biology: protein language models (PLMs). Like LLMs, they produce embeddings, in this case for protein sequences, which can be regarded as biological texts. These embeddings can be used in various protein-related downstream tasks, such as predicting protein function, the effects of mutations, or drug-target interactions.
In my talk, I will present our recent work on the development of structure-aware PLMs, with a specific application in predicting drug-target interactions. Further, I will introduce natural products as a rich source of novel candidate drugs and discuss applications of PLMs in natural product research.
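To make the idea concrete, here is a minimal sketch of how per-protein embeddings can be extracted from a publicly available PLM (here, a small ESM-2 checkpoint accessed through the Hugging Face transformers library); the model choice, pooling strategy, and example sequences are illustrative and not necessarily those used in the work presented in the talk.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Small public protein language model; the checkpoint choice is illustrative.
name = "facebook/esm2_t6_8M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

# Example amino-acid sequences (made up for illustration).
sequences = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "GSHMSLFDFFKNKGSAAATP"]
batch = tokenizer(sequences, return_tensors="pt", padding=True)

with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (batch, tokens, dim)

# Mean-pool over non-padding tokens to get one fixed-size embedding per protein.
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)
print(embeddings.shape)   # one vector per protein, e.g. torch.Size([2, 320]) here

# These vectors could then feed a downstream predictor, e.g. for drug-target interaction.
```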
Olga Kalinina studied mathematics at Moscow State University, Russia, before completing her PhD in molecular biology at the Engelhardt Institute of Molecular Biology of the Russian Academy of Sciences in 2007.
From 2007 to 2011, she was a postdoctoral researcher at the European Molecular Biology Laboratory in Heidelberg and at Heidelberg University. From 2012 to 2018, she was a senior researcher at the Max Planck Institute for Informatics in Saarbrücken in the Department for Computational Biology and Applied Algorithmics led by Thomas Lengauer. Since 2019, Olga has held a professorship for Drug Bioinformatics at the Helmholtz Institute for Pharmaceutical Research Saarland (HIPS) / Helmholtz Centre for Infection Research (HZI) and at Saarland University.
Advances in networking are driven by the evolving needs of society and the development of new applications. At the same time, advances in networking technology have fueled the creation of novel digital connected experiences. This talk will discuss this virtuous cycle of network innovation over the last three decades and the inflection points in the technology landscape that have led us to today's large-scale, programmable, virtualized edge-to-cloud networks. The talk will also offer a view of future network evolution, including the possibilities opened up by generative AI, and the associated new challenges.
Sujata Banerjee leads the VMware Research Group (VRG) whose mission is to create novel technologies and unique differentiation for VMware’s technology portfolio, and advance the state of the field through external impact on the research community. She co-leads the ML Program Office within VMware’s Office of the CTO. Sujata’s research has spanned software defined networking, network function virtualization, network energy efficiency, measurement and automation. She served as the technical program co-chair of the ACM SIGCOMM 2020, USENIX NSDI 2018 and ACM SOSR 2017 conferences. She was a member of the Computing Community Consortium (CCC) Council of the Computing Research Association (CRA) from 2019-2023, and is currently serving on the board of the CRA's committee for widening participation (CRA-WP). She is on the scientific advisory committee of the FABRIC programmable research infrastructure. In 2020, she served in the AI working group of the FCC’s Technology Advisory Council and was the vice-chair of ACM SIGCOMM (2019-2021). She has over 40 US patents, is a recipient of the U.S. National Science Foundation (NSF) CAREER award in networking research and is a Fellow of the IEEE.
She was honored to be named in the list of 2018 N2Women: Stars in Computer Networking and Communications. Prior to VMware, she was a director and distinguished technologist at Hewlett Packard Enterprise Labs, leading research on enterprise, service provider and datacenter networks. Before her industrial research career, she held a tenured Associate Professor position at the University of Pittsburgh.
Key-value stores and search engines face a continuously growing need to efficiently store, retrieve, and analyze massive sets of keys under the many different requirements of users, devices, and applications. This new level of complexity cannot be handled properly by classic data structures, so academic and industrial researchers have recently begun to devise new approaches that integrate classic data structures with advanced techniques drawn from data compression, computational geometry, and machine learning, giving rise to what are now called “Learned Data Structures”.
In this talk, I’ll survey the evolution of these novel kinds of data structures, discuss their theoretical and experimental performance, and point out new challenges worthy of future research.
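To illustrate the core idea behind learned indexes, one family of learned data structures, here is a minimal, self-contained sketch: a simple linear model predicts the position of a key in a sorted array, and a bounded search around the prediction corrects any error. The model, error bound, and toy keys are illustrative; practical structures such as those surveyed in the talk are far more sophisticated.

```python
import bisect

class LearnedIndex:
    """Minimal learned index: a linear model predicts the position of a key in a
    sorted array; a bounded binary search around the prediction corrects it."""

    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        xs, ys = self.keys, range(n)
        # Fit a least-squares line mapping key -> position (the "model").
        mean_x = sum(xs) / n
        mean_y = (n - 1) / 2
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var = sum((x - mean_x) ** 2 for x in xs) or 1.0
        self.slope = cov / var
        self.intercept = mean_y - self.slope * mean_x
        # Maximum prediction error determines the correction window.
        self.err = max(abs(self._predict(x) - y) for x, y in zip(xs, ys))

    def _predict(self, key):
        return int(round(self.slope * key + self.intercept))

    def lookup(self, key):
        guess = self._predict(key)
        lo = max(0, guess - self.err)
        hi = min(len(self.keys), guess + self.err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < len(self.keys) and self.keys[i] == key else None

index = LearnedIndex(range(0, 1_000_000, 7))   # keys 0, 7, 14, ...
print(index.lookup(700), index.lookup(701))    # position of 700, then None
```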
Paolo Ferragina is Professor of Algorithms at the University of Pisa; earlier in his career he was a postdoc at the Max Planck Institute for Informatics. He served his university as Vice-Rector for ICT (2019–2022) and for Applied Research and Innovation (2010–2016), and as Director of the PhD program in Computer Science (2018–2020).
His research focuses on designing algorithms and data structures for compressing, mining, and retrieving information from big data; it has produced several patents and over 180 papers published in renowned conferences and journals.
He has spent research periods at the University of North Texas, the Courant Institute at New York University, MGH/Harvard Medical School, AT&T, Google, IBM Research, and Yahoo. Ferragina is the joint recipient of the 2022 ACM Paris Kanellakis Theory and Practice Award.
The creation of lifelike digital human faces has been pivotal in a range of applications, spanning from healthcare and telepresence to virtual assistants and cinematic visual effects. For decades, the ultimate objective has been to create digital representations so authentic that they are virtually indistinguishable from real faces, while also conveying genuine emotional depth. Overcoming the challenge of the “uncanny valley” has been crucial to this pursuit. In this talk, I will give a 30-year retrospective of pioneering research in digital humans. We will explore the evolution of various elements, including facial capture techniques, geometry, appearance modeling, and soft tissue modeling, as well as eyes, teeth, and hair.
The talk will also highlight the transformative impact of contemporary machine learning on facial visual effects. As we look toward the future, the focus will shift to real-time facial animation and the symbiotic relationship between digital characters and machine learning algorithms to bring AI avatars to life.
Markus Gross is the Chief Scientist of the Walt Disney Studios and a professor of Computer Science at ETH Zürich. He is one of the leading authorities in visual computing, computer animation, digital humans, virtual reality, and AI. In his role at Disney he leads the Studio segment’s research and development unit, where he and his team are pushing the forefront of technology innovation in service of the filmmaking process. Gross has published over 500 scientific papers and holds over 100 patents.
His work and achievements have been recognized widely, including two Academy Awards and the ACM SIGGRAPH Steven Anson Coons Award. Gross is a member of multiple academies of science and of the Academy of Motion Picture Arts and Sciences.
Recent advances in AI have led to a significant improvement in the ability to understand visual content. Given these advances, AI-based solutions have been increasingly adopted in industry. In turn, real-world constraints, such as compute efficiency, data efficiency, model size, and so forth, need to be addressed. This talk will cover how to handle these constraints. In particular, I will introduce techniques for designing and optimizing compute-efficient neural networks. Furthermore, I will discuss data-efficient machine learning methods.
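As one concrete example of a standard compute-efficiency technique (not specific to the methods presented in this talk), the following sketch shows a depthwise separable convolution block in PyTorch, which replaces a standard convolution with a cheaper depthwise-plus-pointwise pair; all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a depthwise conv (one filter per channel)
    followed by a 1x1 pointwise conv, using far fewer multiply-adds than a
    standard convolution of the same receptive field."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(64, 128)
x = torch.randn(1, 64, 56, 56)          # toy feature map
print(block(x).shape)                    # torch.Size([1, 128, 56, 56])
```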
Jan Kautz is Vice President of Learning and Perception Research at NVIDIA. He and his team pursue fundamental research in the areas of computer vision and deep learning, including visual perception, geometric vision, generative models, and efficient deep learning. Their work has received various awards and has been regularly featured in the media. Before joining NVIDIA in 2013, Jan was a tenured faculty member at University College London. He holds a degree in Computer Science from the University of Erlangen-Nürnberg (1999) and an MMath from the University of Waterloo (1999), received his PhD from the Max-Planck-Institut für Informatik (2003), and worked as a post-doctoral researcher at the Massachusetts Institute of Technology (2003–2006).
Jan has chaired numerous conferences (Eurographics Symposium on Rendering 2007, IEEE Symposium on Interactive Ray-Tracing 2008, Pacific Graphics 2011, CVMP 2012, Eurographics 2014) and has served on several editorial boards. He is a member of ACM and IEEE.
Analyzing large data sets often involves complex query execution plans, where different evaluation strategies can be used to answer a given query. This leads to interesting optimization problems: we can invest time in optimization to speed up query execution, sometimes by orders of magnitude. On the other hand, query optimization itself can become expensive, as many of the underlying optimization problems are NP-hard. In this talk we study approaches for practical query optimization that can handle complex, real-world data sets and queries and that can be integrated into production query processing engines.
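For illustration, here is a minimal sketch of textbook dynamic programming over relation subsets for join ordering, with a made-up cost model; production optimizers of the kind discussed in the talk use much more refined enumeration and costing, so this only conveys the flavor of the problem.

```python
from itertools import combinations

# Toy statistics, assumed for illustration: per-relation cardinalities and
# pairwise join selectivities.
cards = {"A": 1000, "B": 50, "C": 200}
selectivity = {frozenset("AB"): 0.01, frozenset("BC"): 0.05, frozenset("AC"): 1.0}

def join_size(left_rels, right_rels, left_size, right_size):
    sel = 1.0
    for l in left_rels:
        for r in right_rels:
            sel *= selectivity.get(frozenset((l, r)), 1.0)
    return left_size * right_size * sel

def best_plan(relations):
    """DP over subsets: the best plan for a set S is the cheapest way to join
    the best plans of two disjoint subsets covering S."""
    best = {frozenset([r]): (cards[r], r, 0.0) for r in relations}  # size, plan, cost
    for k in range(2, len(relations) + 1):
        for subset in map(frozenset, combinations(relations, k)):
            for l_size in range(1, k):
                for left in map(frozenset, combinations(subset, l_size)):
                    right = subset - left
                    ls, lp, lc = best[left]
                    rs, rp, rc = best[right]
                    size = join_size(left, right, ls, rs)
                    cost = lc + rc + size          # cost = sum of intermediate sizes
                    if subset not in best or cost < best[subset][2]:
                        best[subset] = (size, f"({lp} JOIN {rp})", cost)
    return best[frozenset(relations)]

size, plan, cost = best_plan(["A", "B", "C"])
print(plan, cost)   # e.g. ((A JOIN B) JOIN C) 5500.0
```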
Thomas Neumann is a full professor in the Department of Computer Science at the Technical University of Munich. After his PhD in Computer Science at the University of Mannheim in 2005, he was a Senior Researcher at the Max Planck Institute for Informatics in Saarbrücken until 2010.
His research interests are in the areas of database systems, query processing, and query optimization. In 2020, he received the Gottfried Wilhelm Leibniz Prize.
In randomized experiments, we randomly assign the treatment that each experimental subject receives. Randomization can help us accurately estimate the difference in treatment effects with high probability. It also helps ensure that the groups of subjects receiving each treatment are similar. If we have already measured characteristics of our subjects that we think could influence their response to treatment, then we can increase the precision of our estimates of treatment effects by balancing those characteristics between the groups.
We show how to use the recently developed Gram-Schmidt Walk algorithm of Bansal, Dadush, Garg, and Lovett to efficiently assign treatments to subjects in a way that balances known characteristics without sacrificing the benefits of randomization. The resulting designs yield more accurate estimates of treatment effects to the extent that the measured characteristics are predictive of the responses to treatment, while also bounding the worst-case behavior when they are not.
This is joint work with Chris Harshaw, Fredrik Sävje, and Peng Zhang.
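As a rough illustration of the underlying idea, covariate balance under randomized assignment, here is a small sketch comparing complete randomization with rerandomization, a simpler balancing scheme that is not the Gram-Schmidt Walk design from the talk; the data, covariates, and treatment effect are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 3
X = rng.normal(size=(n, d))                       # measured covariates
tau = 1.0                                         # hypothetical true treatment effect
y0 = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=n)
y1 = y0 + tau                                     # potential outcomes

def imbalance(z):
    """Norm of the difference in covariate means between the two groups."""
    return np.linalg.norm(X[z == 1].mean(0) - X[z == 0].mean(0))

def complete_randomization():
    z = np.zeros(n, dtype=int)
    z[rng.choice(n, n // 2, replace=False)] = 1
    return z

def rerandomization(threshold=0.1, max_tries=10_000):
    # Redraw assignments until covariate imbalance is small; the assignment
    # stays random while the measured characteristics are balanced.
    for _ in range(max_tries):
        z = complete_randomization()
        if imbalance(z) < threshold:
            return z
    return z

for design in (complete_randomization, rerandomization):
    z = design()
    y = np.where(z == 1, y1, y0)                  # observed outcomes
    est = y[z == 1].mean() - y[z == 0].mean()     # difference-in-means estimator
    print(design.__name__, f"imbalance={imbalance(z):.3f}", f"estimate={est:.3f}")
```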
Daniel Alan Spielman is Sterling Professor of Computer Science and Professor of Statistics and Data Science and of Mathematics at Yale. He received his B.A. in Mathematics and Computer Science from Yale in 1992 and his Ph.D. in Applied Mathematics from M.I.T. in 1995. After spending a year as an NSF Mathematical Sciences Postdoctoral Fellow in the Computer Science Department at U.C. Berkeley, he became a professor in the Applied Mathematics Department at M.I.T. He moved to Yale in 2005.
He has received many awards, including the 1995 ACM Doctoral Dissertation Award, the 2002 IEEE Information Theory Paper Award, the 2008 and 2015 Gödel Prizes, the 2009 Fulkerson Prize, the 2010 Nevanlinna Prize, the 2014 Pólya Prize, the 2021 NAS Held Prize, the 2023 Breakthrough Prize in Mathematics, a Simons Investigator Award, and a MacArthur Fellowship.
This talk introduces the Emerald system, a new technology that uses wireless sensing and machine learning to implement touchless health monitoring. An Emerald device in the home transmits low-power radio signals and records their reflections. The recorded data is then passed to the cloud for further processing using neural network algorithms. The monitoring system can infer the movements, breathing, heart rates, and sleep stages of people in their homes, without requiring them to wear any sensors. By monitoring physiological signals continuously over a period of months, the system can automatically detect changes in health conditions. The talk will describe the underlying technology and present examples of its application in clinical drug trials.
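As a rough, self-contained illustration of the kind of signal processing involved, and not Emerald's actual pipeline, the following sketch estimates a breathing rate from a synthetic, periodically modulated reflection signal by locating the dominant frequency in its spectrum; all signal parameters are made up.

```python
import numpy as np

# Synthetic stand-in for a recorded reflection signal: a breathing-induced
# oscillation at 0.25 Hz (15 breaths/min) plus noise. Parameters are made up.
fs = 20.0                                  # sampling rate in Hz
t = np.arange(0, 120, 1 / fs)              # two minutes of samples
signal = np.sin(2 * np.pi * 0.25 * t) + 0.5 * np.random.randn(t.size)

# Locate the dominant frequency within a plausible breathing band (0.1-0.5 Hz).
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
band = (freqs >= 0.1) & (freqs <= 0.5)
breathing_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated breathing rate: {breathing_hz * 60:.1f} breaths/min")
```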
Bruce Maggs received the S.B., S.M., and Ph.D. degrees in computer science from the Massachusetts Institute of Technology in 1985, 1986, and 1989, respectively. His advisor was Charles Leiserson. After spending one year as a Postdoctoral Associate at MIT, he worked as a Research Scientist at NEC Research Institute in Princeton from 1990 to 1993. In 1994, he moved to the Computer Science Department at Carnegie Mellon, where he ultimately achieved the rank of full Professor. While on a two-year leave-of-absence from Carnegie Mellon, Maggs was a founding employee of Akamai Technologies, serving as its first Vice President for Research and Development. In 2009, Maggs joined Duke University, where he is the Pelham Wilder Professor of Computer Science. In 2018 he was part of a large team that received the inaugural ACM SIGCOMM Networking Systems Award for the Akamai Content Distribution Network, and was named an ACM Fellow.
Maggs is currently on part-time leave from Duke and is serving as Director of Engineering for Emerald Innovations.
Everyday human activities are impressive feats of physical intelligence, for instance, the highly coordinated movement of fingers to type this sentence. Robots with even a fraction of this capability would revolutionize our lives but remain elusive.
In this talk, I will discuss my group's work on taking a step towards more capable robots by building 3D computer vision and machine learning methods for understanding human manual skills -- the diverse and dexterous ways in which we shape our environment with our hands. I will discuss the techniques we are building to capture, analyze, and generate hand skills by observing adults and children. Our goal is to produce a large repository of high-level manual skills that can lead to a deeper understanding of hands and eventually be transferred to robots.
Srinath Sridhar is an assistant professor of computer science at Brown University. He received his PhD at the Max Planck Institute for Informatics and was subsequently a postdoctoral researcher at Stanford. His research interests are in 3D computer vision and machine learning.
Specifically, his group (https://ivl.cs.brown.edu) focuses on visual understanding of 3D human physical interactions with applications ranging from robotics to mixed reality. He is a recipient of the NSF CAREER award and a Google Research Scholar award, and his work has received a Eurographics Best Paper Honorable Mention. He spends part of his time as a visiting academic at Amazon Robotics and has previously spent time at Microsoft Research Redmond and Honda Research Institute.
Machine learning has been making steady progress in database systems for many years, with tasks such as natural language interfaces, data cleaning, integration, imputation, and forecasting. The advent of large language models (LLMs) has accelerated this progress, particularly in the area of natural language interfaces.
In this talk, we will discuss the research challenges that arise in text-to-SQL generation over large, complex schemas. We will present recent research in high-recall schema subsetting for retrieval augmentation, online customization to new schemas, and handling of ambiguities. We will also discuss the potential of LLMs to enrich data analysis with explanations from outside the structured database.
However, specific data analysis tasks, such as imputation and forecasting, still require custom models outside the LLM. We will present recent deep learning advances for performing these tasks on multidimensional time series data. The talk will conclude with a discussion of the many challenges that remain on the path to effectively integrating the orderly, self-contained world of database systems with the stochastic, open-ended world of AI-driven approaches.
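As a small illustration of the schema-subsetting idea mentioned above, the following sketch ranks the tables of a toy schema by crude lexical overlap with a user question and keeps only the most relevant ones for the LLM prompt; the schema, question, and scoring function are illustrative stand-ins, not the methods presented in the talk.

```python
import re
from collections import Counter

# Toy schema and question; all names are illustrative only.
schema = {
    "orders":    ["order_id", "customer_id", "order_date", "total_amount"],
    "customers": ["customer_id", "name", "email", "country"],
    "products":  ["product_id", "name", "category", "price"],
    "shipments": ["shipment_id", "order_id", "carrier", "shipped_date"],
}
question = "Total amount of orders per customer country last year"

def tokens(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def score(question, table, columns):
    """Crude lexical overlap between the question and a table's name/columns.
    Real systems would use learned dense embeddings instead."""
    q = tokens(question)
    t = tokens(table + " " + " ".join(columns))
    return sum(min(q[w], t[w]) for w in q)

# Keep only the top-k most relevant tables and build a compact prompt context.
k = 2
ranked = sorted(schema, key=lambda t: score(question, t, schema[t]), reverse=True)
context = "\n".join(f"{t}({', '.join(schema[t])})" for t in ranked[:k])
print(context)   # schema snippet that would be passed to the LLM with the question
```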
Sunita Sarawagi researches in the fields of databases and machine learning. She received her PhD in databases from the University of California at Berkeley and her bachelor's degree from IIT Kharagpur. She has also worked at Google Research (2014-2016), CMU (2004), and IBM Almaden Research Center (1996-1999). She was awarded the Infosys Prize in 2019 for Engineering and Computer Science, and the Distinguished Alumnus Award from IIT Kharagpur. She is a fellow of the ACM, INAE, and IAS.
In the last few decades, supercomputers were primarily used for applications in scientific computing. However, during the same period, high-performance computers were being designed for graphics and imaging. Simulating realistic physical phenomena is also required to create immersive virtual worlds. Similarly, processing images and videos is data intensive, and analyzing them requires fast implementations of compute-intensive signal processing, statistics, and machine learning algorithms. There is a large consumer demand and market for these capabilities. These quests for cost-efficient, powerful computers led to the modern graphics processing unit (GPU).
In this talk I will give a personal history of my journey developing software and hardware for high performance graphics supercomputers.
Pat Hanrahan is the Canon Professor of Computer Science and Electrical Engineering Emeritus at Stanford University. As a founding employee at Pixar Animation Studios, Hanrahan led the design of RenderMan. He was also a co-founder and the CTO of Tableau Software.
He has received three Academy Awards for Science and Technology, the SIGGRAPH Computer Graphics Achievement Award, the SIGGRAPH Steven A. Coons Award, and the IEEE Visualization Career Award.
He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences. In 2019, he received the ACM A. M. Turing Award.
1990 | The Max Planck Society strengthens computer science by founding the Max Planck Institute for Informatics, its first institute entirely devoted to computer science. Founding director is Kurt Mehlhorn; he and Harald Ganzinger head the first two departments, “Algorithms and Complexity” and “Programming Logics”, respectively.
1996 | Move into the new building
1999 | Hans-Peter Seidel is appointed Scientific Member of the Max Planck Society; he joins MPI-INF as director heading the third department “Computer Graphics”.
2000 | Founding of the International Max Planck Research School for Computer Science. This collaboration between MPI-INF and Saarland University is devoted to the support of young scientists.
2001 | Thomas Lengauer is appointed Scientific Member of the Max Planck Society; he joins MPI-INF as director heading the fourth department “Computational Biology and Applied Algorithmics”.
2003 | Gerhard Weikum is appointed Scientific Member of the Max Planck Society; he joins MPI-INF as director heading the fifth department “Databases and Information Systems”. A cooperation between the Max Planck Society and Stanford University is established to support young computer scientists in their careers: the Max Planck Center for Visual Computing and Communication.
2004 | Our sister institute, the Max Planck Institute for Software Systems, is founded with sites in Kaiserslautern and Saarbrücken. Harald Ganzinger passes away.
2007 | As part of the German Universities Excellence Initiative, Saarbrücken wins the Cluster of Excellence “Multimodal Computing and Communication”. MPI-INF is one of the core partners.
2010 | Bernt Schiele is appointed Scientific Member of the Max Planck Society; he joins MPI-INF as director heading the sixth department “Computer Vision and Multimodal Computing”.
2015 | 25th anniversary
2017 | Anja Feldmann is appointed Scientific Member of the Max Planck Society; she joins MPI-INF as director heading the department “Internet Architecture”.
2018 | Thomas Lengauer becomes emeritus
2019 | On his 70th birthday, Kurt Mehlhorn becomes emeritus
2020 | 30 years of MPI-INF -- the COVID-19 pandemic forces the celebration to be postponed
2021 | Christian Theobalt is appointed Scientific Member of the Max Planck Society; he joins MPI-INF as director heading the department “Visual Computing and Artificial Intelligence”.
2022 | Danupon Nanongkai is appointed Scientific Member of the Max Planck Society; he joins MPI-INF as director heading the department “Algorithms and Complexity”.
2023 | The postponed anniversary celebration takes place.