
SSA2lign - Few-Shot (Source) Video Domain Adaptation


A First Exploration of Few-Shot (Source) Video Domain Adaptation (FSVDA)


Abstract

For video models to transfer and apply seamlessly across video tasks in varied environments, Video Unsupervised Domain Adaptation (VUDA) has been introduced to improve the robustness and transferability of video models. However, current VUDA methods rely on a vast amount of high-quality unlabeled target data, which may not be available in real-world cases. We therefore consider a more realistic Few-Shot Video-based Domain Adaptation (FSVDA) scenario, where video models are adapted with only a few target video samples. While a few methods have touched upon Few-Shot Domain Adaptation (FSDA) in images and on FSVDA, they rely primarily on spatial augmentation for target domain expansion, with alignment performed statistically at the instance level. However, videos carry richer temporal and semantic information, which should be fully exploited when augmenting the target domain and performing alignment in FSVDA. We propose SSA2lign, a novel method that addresses FSVDA at the snippet level: the target domain is expanded through a simple snippet-level augmentation, followed by attentive alignment of snippets both semantically and statistically, where semantic alignment is conducted from multiple perspectives. Empirical results demonstrate the state-of-the-art performance of SSA2lign across multiple cross-domain action recognition benchmarks.
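
To make the snippet-level augmentation idea concrete, below is a minimal, illustrative sketch, not the authors' implementation: each few-shot target video is split into short temporal snippets, and additional target samples are produced by locally resampling frames within each snippet while preserving coarse temporal order. All names here (e.g. `augment_video`, `num_snippets`) are hypothetical and chosen only for illustration; SSA2lign's actual augmentation and its attentive semantic/statistical alignment are described in the paper.

```python
# Hypothetical sketch of snippet-level augmentation for a few-shot target video.
# This only illustrates the general idea of expanding the target domain at the
# snippet level; it is not the SSA2lign implementation.
import numpy as np


def split_into_snippets(video: np.ndarray, num_snippets: int) -> list:
    """Split a video of shape (T, H, W, C) into `num_snippets` temporal chunks."""
    return np.array_split(video, num_snippets, axis=0)


def augment_video(video: np.ndarray, num_snippets: int = 4, rng=None) -> np.ndarray:
    """Build one augmented sample by resampling frames within each snippet."""
    rng = rng or np.random.default_rng()
    augmented_snippets = []
    for snippet in split_into_snippets(video, num_snippets):
        # Keep a random contiguous half of the snippet, then stretch it back to
        # the snippet's original length, so coarse temporal order is preserved
        # while local dynamics vary between augmented samples.
        keep = max(1, len(snippet) // 2)
        start = rng.integers(0, len(snippet) - keep + 1)
        kept = snippet[start:start + keep]
        idx = np.linspace(0, keep - 1, num=len(snippet)).round().astype(int)
        augmented_snippets.append(kept[idx])
    return np.concatenate(augmented_snippets, axis=0)


# Example: expand a single few-shot target video into several augmented variants.
video = np.random.rand(32, 224, 224, 3).astype(np.float32)  # dummy (T, H, W, C) clip
augmented = [augment_video(video) for _ in range(5)]
print([a.shape for a in augmented])
```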

Structure of SSA2lign

The structure of SSA2lign is as follows:

[Figure: overall structure of SSA2lign]

Benchmark Results

We evaluated the proposed SSA2lign on multiple benchmarks, including Daily-DA and Sports-DA, and compared it with previous domain adaptation methods (including methods that require source data access). The results are as follows:

[Figure: benchmark results of SSA2lign on Daily-DA and Sports-DA]

Paper, Code and Data

License: CC BY 4.0
