View- and Temporal-consistency in Generation using Diffusion Models

Niloy Mitra, University College London

Recently, diffusion models have emerged as the best-performing 2D generative models, largely because they can be trained on millions, if not billions, of images with a stable learning objective. However, adapting these models to 3D (or video) has proven challenging for two reasons. First, obtaining a large quantity of 3D (or video) training data is much harder than obtaining 2D images; in practice, only tens of thousands of such training samples are available. Second, while extending the models to operate on 3D grids (spatial or temporal) is conceptually simple, the associated cubic growth in memory and compute makes this impractical.

To address the first challenge, we have introduced a new diffusion setup that can be trained end-to-end with only posed 2D images as supervision. To tackle the second challenge, we have proposed an image formation model that decouples model memory from spatial memory. In this talk, I will describe results on synthetic and real data and discuss how these models can be extended to produce high-quality, photorealistic outputs. I will also present a diffusion-based workflow for video data that produces time-consistent stylization.

Niloy J. Mitra leads the Smart Geometry Processing group in the Department of Computer Science at University College London and the Adobe Research London Lab. He received his Ph.D. from Stanford University under the guidance of Leonidas Guibas. His research focuses on developing machine learning frameworks for generating high-quality geometric and appearance content for CG applications. Niloy’s technical contributions to computer graphics have earned him numerous prestigious awards, including the Eurographics Outstanding Technical Contributions Award in 2019, the British Computer Society Roger Needham Award in 2015, and the ACM SIGGRAPH Significant New Researcher Award in 2013. He was elected a Fellow of Eurographics in 2021, served as the Technical Papers Chair for SIGGRAPH 2022, and was inducted into the SIGGRAPH Academy in 2023. Besides research, Niloy is an active DIYer and loves reading, cricket, and cooking. More information: https://geometry.cs.ucl.ac.uk

A Decade of Advancements in Functional Maps: From Inception to Recent Breakthroughs

Maks Ovsjanikov, École Polytechnique

In this talk, I will share the journey of Functional Maps from their introduction to the latest developments. I will first discuss the foundations of this framework, describing its key motivations and basic properties. I will then give a brief history of how approaches based on Functional Maps have developed over the past ten years. Finally, I will outline some open problems and promising directions. Throughout the talk, I will especially emphasize the collective efforts of the researchers who have contributed, and continue to contribute, to the development and growth of Functional Maps over the past decade.

Maks Ovsjanikov is a Professor at École Polytechnique in France. He works on 3D shape analysis, with an emphasis on deep learning techniques for shape matching and comparison. He obtained his PhD from Stanford University under the supervision of Prof. Leonidas Guibas. He has received a Eurographics Young Researcher Award, an ERC Starting Grant, a CNRS Bronze Medal (a recognition for junior researchers in France), and an ERC Consolidator Grant in 2023. His papers have received 11 best paper awards or nominations at top conferences, including CVPR, ICCV, and 3DV, while his work on Functional Maps received a SIGGRAPH Test-of-Time Award in 2023. More information: https://www.lix.polytechnique.fr/~maks/

Evaluating the Realism of Animated Character Motion

Carol O’Sullivan, Trinity College Dublin

Recent years have seen great advances in character animation. In particular, the combination of data-driven and physics-based methods, together with machine learning, has enabled the simulation of virtual humans that move around and interact naturally within a virtual environment. However, there is still much scope for research into methods and metrics for evaluating the realism and naturalness of such simulated animations. Furthermore, the simulation and evaluation of virtual humans interacting in Mixed Reality raises many interesting research questions. In this talk, I will present a review of relevant research to date and pose some questions for the future.

Carol O’Sullivan is the Professor of Visual Computing at Trinity College Dublin. From 2013 to 2016, she was a Senior Research Scientist at Disney Research in Los Angeles, and she spent a sabbatical year (2012-2013) as a Visiting Professor at Seoul National University. Prior to her PhD studies, she spent several years in industry working in software development. She joined TCD as a lecturer in 1997 and served as the Dean of Graduate Studies from July 2007 to July 2010. She was elected a Fellow of Trinity College in 2003 and of the European Association for Computer Graphics (Eurographics) in 2007. Her research interests include graphics and perception, animation, and crowd and human simulation. She has managed a range of projects with significant budgets and has successfully supervised many doctoral and post-doctoral researchers. She has been a member of many editorial boards and international program committees (including ACM SIGGRAPH and Eurographics). She is currently the Editor-in-Chief of the ACM Transactions on Graphics and previously served as Editor-in-Chief of the ACM Transactions on Applied Perception from 2006 to 2012. Recently, she served as the Technical Papers Chair for ACM SIGGRAPH Asia 2021 and the Courses Chair for SIGGRAPH Asia 2018.