Publications

An Embedding-Dynamic Approach to Self-supervised Learning

A number of recent self-supervised learning methods have shown impressive performance on image classification and other tasks. A somewhat bewildering variety of techniques have been used, not always with a clear understanding of the reasons for their benefits, especially when used in combination. Here we treat the embeddings of images as point particles and consider model optimization as a dynamic process on this system of particles. Our dynamic model combines an attractive force for similar images, a locally dispersive force to avoid local collapse, and a global dispersive force to achieve a globally homogeneous distribution of particles. The dynamic perspective highlights the advantage of using a delayed-parameter image embedding (à la BYOL) together with multiple views of the same image. It also uses a purely dynamic local dispersive force (Brownian motion) that shows improved performance over other methods and does not require knowledge of other particle coordinates. We call the method MSBReg, which stands for (i) a Multiview centroid loss, which applies an attractive force to pull different image view embeddings toward their centroid, (ii) a Singular value loss, which pushes the particle system toward spatially homogeneous density, and (iii) a Brownian diffusive loss, which applies the local dispersive force. We evaluate the downstream classification performance of MSBReg on ImageNet as well as on transfer learning tasks including fine-grained classification, multi-class object classification, object detection, and instance segmentation. In addition, we show that applying our regularization term to other methods further improves their performance and stabilizes training by preventing mode collapse.
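The three forces translate naturally into loss terms. Below is a minimal PyTorch sketch of how such a composite objective could be assembled; the function name, weightings, and exact functional forms are illustrative assumptions based on the abstract, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def msbreg_loss(z, sigma=0.1, lambda_s=1.0, lambda_b=1.0):
    """z: (n_views, batch, dim) tensor of L2-normalized image-view embeddings."""
    # (i) Multiview centroid loss: attract each view's embedding toward
    # the centroid of all views of the same image.
    centroid = z.mean(dim=0, keepdim=True)                 # (1, batch, dim)
    l_centroid = ((z - centroid) ** 2).sum(dim=-1).mean()

    # (ii) Singular value loss: push the embeddings toward a spatially
    # homogeneous density by penalizing small singular values of the
    # centered embedding matrix (the global dispersive force).
    flat = z.reshape(-1, z.shape[-1])
    s = torch.linalg.svdvals(flat - flat.mean(dim=0))
    l_singular = -torch.log(s + 1e-6).mean()

    # (iii) Brownian diffusive loss: attract each embedding toward a
    # noise-perturbed copy of itself, so embeddings perform a local
    # random walk without needing other particles' coordinates.
    target = (z + sigma * torch.randn_like(z)).detach()
    l_brownian = -F.cosine_similarity(z, target, dim=-1).mean()

    return l_centroid + lambda_s * l_singular + lambda_b * l_brownian
```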

Sound-guided Semantic Video Generation

The recent success of StyleGAN demonstrates that the pre-trained StyleGAN latent space is useful for realistic video generation. However, the generated motion in the video is usually not semantically meaningful due to the difficulty of determining the direction and magnitude in the StyleGAN latent space. In this paper, we propose a framework to generate realistic videos by leveraging a multimodal (sound-image-text) embedding space. As sound provides the temporal context of the scene, our framework learns to generate a video that is semantically consistent with sound. First, our sound inversion module maps the audio directly into the StyleGAN latent space. We then incorporate the CLIP-based multimodal embedding space to further provide the audio-visual relationships. Finally, the proposed frame generator learns to find the trajectory in the latent space which is coherent with the corresponding sound and generates a video in a hierarchical manner. We provide a new high-resolution landscape video dataset (audio-visual pairs) for the sound-guided video generation task. The experiments show that our model outperforms the state-of-the-art methods in terms of video quality. We further show several applications, including image and video editing, to verify the effectiveness of our method.
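To make the pipeline concrete, here is a hypothetical PyTorch skeleton of the three stages the abstract describes (sound inversion, latent trajectory generation, StyleGAN decoding); all module names and interfaces are assumptions rather than the paper's code.

```python
import torch
import torch.nn as nn

class SoundGuidedVideoGenerator(nn.Module):
    def __init__(self, sound_inverter, frame_generator, stylegan, n_frames=16):
        super().__init__()
        self.sound_inverter = sound_inverter    # audio -> StyleGAN latent w
        self.frame_generator = frame_generator  # (w, T) -> per-frame latents
        self.stylegan = stylegan                # latent -> image (frozen)
        self.n_frames = n_frames

    def forward(self, audio):
        # 1. Sound inversion: map the audio clip directly into latent space.
        w_audio = self.sound_inverter(audio)                    # (batch, w_dim)
        # 2. Frame generator: find a latent-space trajectory coherent
        #    with the sound, one latent per output frame.
        w_traj = self.frame_generator(w_audio, self.n_frames)   # (batch, T, w_dim)
        # 3. Decode each latent with the pre-trained StyleGAN generator.
        frames = [self.stylegan(w_traj[:, t]) for t in range(self.n_frames)]
        return torch.stack(frames, dim=1)                       # (batch, T, C, H, W)
```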

Bridging the Domain Gap towards Generalization in Automatic Colorization

We propose a novel automatic colorization technique that learns domain-invariance across multiple source domains and is able to leverage such invariance to colorize grayscale images in unseen target domains. This would be particularly useful for colorizing sketches, line arts, or line drawings, which are generally difficult to colorize due to a lack of data. To address this issue, we first apply existing domain generalization (DG) techniques, which, however, produce less compelling, desaturated images due to the network's over-emphasis on learning domain-invariant contents (or shapes). Thus, we propose a new domain-generalizable colorization model, which consists of two modules: (i) a domain-invariant content-biased feature encoder and (ii) a source-domain-specific color generator. To mitigate the issue of insufficient source-domain-specific color information in domain-invariant features, we propose a skip connection that can transfer content feature statistics via adaptive instance normalization. Our experiments with the publicly available PACS and Office-Home DG benchmarks confirm that our model is indeed able to produce perceptually reasonable colorized images. Further, we conduct a user study where human evaluators are asked to (1) answer whether the generated image looks naturally colored and (2) choose the best-generated images against alternatives. Our model significantly outperforms the alternatives, confirming the effectiveness of the proposed method.
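Since the skip connection transfers content feature statistics via adaptive instance normalization (AdaIN), a short sketch of the standard AdaIN operation may help; how the paper wires it between the encoder and the color generator is assumed here, not taken from its code.

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    """Standard AdaIN: renormalize content_feat with style_feat's statistics.
    Both inputs: (batch, channels, H, W) feature maps."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    # Strip the source statistics, then re-inject the target statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean
```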

Occupancy Flow Fields for Motion Forecasting in Autonomous Driving

We propose Occupancy Flow Fields, a new representation for motion forecasting of multiple agents, an important task in autonomous driving. Our representation is a spatio-temporal grid with each grid cell containing both the probability of the cell being occupied by any agent, and a two-dimensional flow vector representing the direction and magnitude of the motion in that cell. Our method successfully mitigates shortcomings of the two most commonly-used representations for motion forecasting: trajectory sets and occupancy grids. Although occupancy grids efficiently represent the probabilistic location of many agents jointly, they do not capture agent motion and lose the agent identities; augmenting each cell with a flow vector addresses both of these shortcomings. To this end, we propose a deep learning architecture that generates Occupancy Flow Fields with the help of a new flow trace loss that establishes consistency between the occupancy and flow predictions. We demonstrate the effectiveness of our approach using three metrics on occupancy prediction, motion estimation, and agent ID recovery. In addition, we introduce the problem of predicting speculative agents, which are currently-occluded agents that may appear in the future through dis-occlusion or by entering the field of view. We report experimental results on a large in-house autonomous driving dataset and the public INTERACTION dataset, and show that our model outperforms state-of-the-art models.
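As an illustration of how occupancy and flow can be tied together, the sketch below warps the previous occupancy grid along the predicted flow and penalizes disagreement with the current occupancy. This is a simplification of the paper's flow trace loss, with assumed conventions (backward flow in cell units, dense per-cell predictions).

```python
import torch
import torch.nn.functional as F

def flow_consistency_loss(occ, flow):
    """occ:  (batch, T, 1, H, W) per-cell occupancy probabilities.
    flow: (batch, T, 2, H, W) per-cell backward flow vectors, in cell units."""
    B, T, _, H, W = occ.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(occ.device)   # (2, H, W)
    loss = 0.0
    for t in range(1, T):
        # Trace each cell at time t back to its origin at time t-1 ...
        src = base + flow[:, t]                                  # (B, 2, H, W)
        grid = torch.stack((2 * src[:, 0] / (W - 1) - 1,
                            2 * src[:, 1] / (H - 1) - 1), dim=-1)  # (B, H, W, 2)
        warped = F.grid_sample(occ[:, t - 1], grid, align_corners=True)
        # ... and require the flow-warped previous occupancy to match it.
        loss = loss + F.mse_loss(warped, occ[:, t])
    return loss / (T - 1)
```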

Audio-Semantic Image Synthesis for Artistic Paintings

There have been long-standing efforts to bring fields of art such as painting into computer-based creation. In contrast to realism, the non-photorealistic rendering (NPR) area in particular has focused on creating artificial-style renderings for painting, drawing, and cartoons. With the advance of generative models, impressive computer-generated paintings based on artistic style learning approaches have appeared. On the other hand, researchers have started to propose methods that produce creative paintings beyond simply adopting the artistic style of other masterpieces. This includes utilizing auditory and visual sources to manipulate an original masterpiece with a novel style. Recent work manipulates an original masterpiece to integrate semantic understanding from an audio source through audio-visual correlation learning. In previous works, researchers proposed audio-reactive interpolation methods based on musical features and latent transfer mappings from deep music embeddings to style embeddings for video/image generation. However, these approaches mainly utilized features and attributes of music, which made it hard to convey semantically meaningful image manipulation from a given audio source. We propose extending the domain of StyleCLIP to audio in order to overcome this limitation. StyleCLIP carries out text-based image manipulation by mapping the latent feature of text to that of an image. In our case, we replace text with audio to design audio-based semantic image manipulation. In particular, we encode both audio and images into the same latent space. This allows us to maximize the advantages of both CLIP (effective representation of similarity) and StyleGAN (high-quality image generation). In this work, we introduce a semantic audio-reactive painting generation model. We believe our proposal has high potential for creating a new field of computer art by narrowing the gap between music and visual art, which have been regarded as separate fields.
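One plausible way to realize "encoding both audio and images into the same latent space" is a CLIP-style contrastive objective that pulls matched audio-image pairs together; the sketch below is an assumption about the training signal, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def audio_clip_alignment_loss(audio_encoder, clip_image_encoder,
                              audio, images, tau=0.07):
    """Train audio_encoder so audio embeddings land in the CLIP image space.
    audio/images are matched pairs along the batch dimension."""
    # Project both modalities into the shared space and L2-normalize.
    a = F.normalize(audio_encoder(audio), dim=-1)        # (batch, dim)
    v = F.normalize(clip_image_encoder(images), dim=-1)  # (batch, dim)
    # Standard symmetric InfoNCE over matched (audio, image) pairs.
    logits = a @ v.t() / tau
    targets = torch.arange(len(a), device=a.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```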

A Scenario-Based Platform for Testing Autonomous Vehicle Behavior Prediction Models in Simulation

Behavior prediction remains one of the most challenging tasks in the autonomous vehicle (AV) software stack. Forecasting the future trajectories of nearby agents plays a critical role in ensuring road safety, as it equips AVs with the necessary information to plan safe routes of travel. However, these prediction models are data-driven and trained on data collected in the real world, which may not represent the full range of scenarios an AV can encounter. Hence, it is important that these prediction models be extensively tested in various test scenarios involving interactive behaviors prior to deployment. To support this need, we present a simulation-based testing platform that supports (1) intuitive scenario modeling with a probabilistic programming language called SCENIC, (2) specification of a multi-objective evaluation metric with a partial priority ordering, (3) falsification of the provided metric, and (4) parallelization of simulations for scalable testing. As part of the platform, we provide a library of 25 SCENIC programs that model challenging test scenarios involving interactive traffic participant behaviors. We demonstrate the effectiveness and scalability of our platform by testing a trained behavior prediction model and searching for failure scenarios.
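A falsification loop of the kind the platform runs might look like the following sketch, with every component passed in as a callable; all names are illustrative assumptions, and the platform's actual API will differ.

```python
def falsify(sample_scene, simulate, predict, metric, n_sims=100):
    """Search for scenarios on which a behavior prediction model fails.
    sample_scene: draws one concrete scenario from a SCENIC program.
    simulate: runs the scenario, returning (history, future) trajectories.
    predict: the behavior prediction model under test.
    metric: returns a vector of scores ordered by priority; < 0 = violated.
    Returns the failing scenarios for later inspection."""
    failures = []
    for _ in range(n_sims):
        scene = sample_scene()
        history, future = simulate(scene)
        rho = metric(predict(history), future)
        # Under the partial priority ordering, violating any objective
        # marks the scenario as a counterexample worth keeping.
        if any(score < 0 for score in rho):
            failures.append(scene)
    return failures
```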

SelfReg: Self-supervised Contrastive Regularization for Domain Generalization

In general, an experimental environment for deep learning assumes that the training and the test dataset are sampled from the same distribution. However, in real-world situations, a difference between the two distributions, known as domain shift, may occur, which becomes a major factor impeding the generalization performance of the model. The research field that addresses this problem is called domain generalization, and it alleviates the domain shift problem by extracting domain-invariant features explicitly or implicitly. In recent studies, contrastive learning-based domain generalization approaches have been proposed and have achieved high performance. These approaches require sampling of negative data pairs, yet the performance of contrastive learning fundamentally depends on the quality and quantity of those negative pairs. To address this issue, we propose a new regularization method for domain generalization based on contrastive learning: self-supervised contrastive regularization (SelfReg). The proposed approach uses only positive data pairs, thus resolving the various problems caused by negative pair sampling. Moreover, we propose a class-specific domain perturbation layer (CDPL), which makes it possible to effectively apply mixup augmentation even when only positive data pairs are used. The experimental results show that the techniques incorporated by SelfReg contributed to the performance in a compatible manner. On the recent benchmark DomainBed, the proposed method shows performance comparable to the conventional state-of-the-art alternatives. Code is available at this https URL.
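To illustrate the positive-pair-only idea, here is a minimal PyTorch sketch of a SelfReg-style regularizer that aligns features and logits of same-class pairs, with a mixup-style interpolation standing in for the full CDPL mechanism; names and details are illustrative, not the authors' released code.

```python
import torch

def selfreg_loss(features, logits, labels, lam=None):
    """features: (batch, dim), logits: (batch, classes), labels: (batch,).
    Each sample is paired with another sample of the same class (a positive
    pair, possibly from a different domain); no negative pairs are used."""
    idx = torch.empty_like(labels)
    for c in labels.unique():
        members = (labels == c).nonzero(as_tuple=True)[0]
        # A random same-class shuffle gives each sample a positive partner.
        idx[members] = members[torch.randperm(len(members), device=labels.device)]
    if lam is None:
        lam = torch.rand(())  # mixup-style interpolation weight
    feat_pos, logit_pos = features[idx], logits[idx]
    mix_f = lam * features + (1 - lam) * feat_pos
    mix_l = lam * logits + (1 - lam) * logit_pos
    # Align each sample with its positive partner and with the mixup of
    # the two, at both the feature level and the logit level.
    l_feat = ((features - feat_pos) ** 2).mean() + ((features - mix_f) ** 2).mean()
    l_logit = ((logits - logit_pos) ** 2).mean() + ((logits - mix_l) ** 2).mean()
    return l_feat + l_logit
```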