Publications

Calibrated Domain-Invariant Learning for Highly Generalizable Large Scale Re-Identification

Published at WACV, 2020

In pursuit of a highly generalizable large-scale ReID method, we present an adversarial domain-invariant feature learning framework (ADIN) that explicitly learns to separate identity-related features from challenging variations. For the first time, "free" annotations in ReID data, such as video timestamps and camera indices, are utilized.

Recommended citation: Ye Yuan et al. (2020). "Calibrated Domain-Invariant Learning for Highly Generalizable Large Scale Re-Identification." WACV 2020. ... [link]

ABD-Net: Attentive but Diverse Person Re-Identification

Published at ICCV, 2019

We advocate the complementary strengths of attention and diversity for Re-ID by proposing an Attentive but Diverse Network (ABD-Net). ABD-Net seamlessly integrates attention modules and diversity regularizations throughout the entire network to learn features that are representative, robust, and more discriminative.

Recommended citation: Tianlong Chen et al. (2019). "ABD-Net: Attentive but Diverse Person Re-Identification." ICCV 2019. ... [link]

UG2+ Track 2: A Collective Benchmark Effort for Evaluating and Advancing Image Understanding in Poor Visibility Environments

Published as an arXiv preprint, 2019

A summary paper on the datasets, fact sheets, and baseline results of the UG2+ Challenge (Track 2). More materials are available at http://www.ug2challenge.org.

Recommended citation: Ye Yuan et al. (2019). "UG2+ Track 2: A Collective Benchmark Effort for Evaluating and Advancing Image Understanding in Poor Visibility Environments." arXiv preprint arXiv:1904.04474. [link]

Bridging the Gap Between Computational Photography and Visual Recognition

Published as an arXiv preprint, 2019

In this paper, we introduced the UG2 dataset, a large-scale benchmark composed of video imagery captured under challenging conditions, along with two enhancement tasks designed to test algorithmic impact on visual quality and automatic object recognition. Furthermore, we proposed a set of metrics to evaluate the joint improvement of these tasks as well as individual algorithmic advances, including a novel psychophysics-based evaluation regime for human assessment and a realistic set of quantitative measures of object recognition performance. We also introduced six new algorithms for image restoration or enhancement, which were created as part of the IARPA-sponsored UG2 Challenge workshop held at CVPR 2018.

Recommended citation: Rosaura G. VidalMata et al. (2019). "Bridging the Gap Between Computational Photography and Visual Recognition." arXiv preprint arXiv:1901.09482. [link]

Quantification of Amyloid Burden from Florbetapir PET Images without Using Target and Reference Regions: Preliminary Findings Based on the Deep Learning 3D Convolutional Neural Network Approach

Published in Alzheimer's & Dementia: The Journal of the Alzheimer's Association, 2018

Recommended citation: Ye Yuan et al. (2018). "Quantification of Amyloid Burden from Florbetapir PET Images without Using Target and Reference Regions: Preliminary Findings Based on the Deep Learning 3D Convolutional Neural Network Approach." Alzheimer's & Dementia: The Journal of the Alzheimer's Association. Volume 14, Issue 7, P31. [link]

Feasibility of Quantifying Amyloid Burden Using Volumetric MRI Data: Preliminary Findings Based on the Deep Learning 3D Convolutional Neural Network Approach

Published in Alzheimer's & Dementia: The Journal of the Alzheimer's Association, 2018

Recommended citation: Ye Yuan et al. (2018). "Feasibility of Quantifying Amyloid Burden Using Volumetric MRI Data: Preliminary Findings Based on the Deep Learning 3D Convolutional Neural Network Approach." Alzheimer's & Dementia: The Journal of the Alzheimer's Association. Volume 14, Issue 7, P30 - P31. [link]