Monday 2nd March 2020 - LJLL
Sylvain Girard (Phimeca).

Dimension reduction with auto-associative models

We consider the approximation of sets with the purpose of “reducing their dimension”, namely finding
low-dimensional systems of coordinates, either as an end in itself to search for interpretable structures,
or as a preliminary to optimisation, probabilistic modelling, or approximation of a function with
functional input or output. Principal component analysis (PCA) is probably the oldest and most
widely used algorithm for dimension reduction. However, sets of signals or images often have nonlinear
structures, the simplest being translations or rotations, that PCA fails to capture.
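As a small numerical illustration of this point (a sketch in plain NumPy; the Gaussian pulse, shift range
and variance threshold are illustrative choices, not taken from [1]): a family of identical pulses that
differ only by a translation has a single underlying degree of freedom, yet PCA needs far more than one
component to reproduce it accurately.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
shifts = rng.uniform(0.2, 0.8, size=500)
# Each row is the same Gaussian pulse, translated by a random shift:
# a one-parameter, hence intrinsically one-dimensional, family of curves.
signals = np.exp(-((x[None, :] - shifts[:, None]) / 0.05) ** 2)

centred = signals - signals.mean(axis=0)
_, s, _ = np.linalg.svd(centred, full_matrices=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
# Number of principal components needed to explain 99% of the variance,
# despite the single degree of freedom (the shift).
print("components for 99% variance:", int(np.searchsorted(explained, 0.99)) + 1)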
The auto-associative model (AAM) algorithm [1] is meant to overcome this limitation. It proceeds
as follows:
• Select a direction by minimising a loss function.
• Compute coordinates by orthogonal projection.
• Estimate a recovery, namely an approximation of the function linking the coordinates to the
original data.
• Replace the data by the residuals of the recovery and repeat the procedure.
This yields a sequence of nested manifolds of increasing dimension d, characterised by the equation
(I − r_1 ∘ p_1) ∘ ⋯ ∘ (I − r_d ∘ p_d) = 0, where r_k and p_k denote respectively the recovery and the
projection from iteration k. PCA is a linear special case of AAM, obtained with projections chosen so
that Euclidean distances are best preserved and with the identity map as recovery (the approximations
then form an increasing sequence of linear subspaces). In addition to nonlinear recoveries, Stéphane
Girard and Iovleff [1] suggested using a loss function based on preservation of the nearest neighbour,
a local topological property.
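Below is a minimal sketch of this iteration, assuming a data matrix X of shape (n_samples, n_features).
The direction selection simply takes the leading principal axis of the current residuals as a stand-in for
the loss-minimising (projection-pursuit) direction of [1], and the recovery is a per-feature polynomial
smoother; both are illustrative assumptions, not the estimators of the original algorithm.

import numpy as np


def aam_sketch(X, n_components=2, degree=3):
    """Return per-iteration (axis, recovery) pairs and the final residuals."""
    residuals = X - X.mean(axis=0)          # centre the data once
    model = []
    for _ in range(n_components):
        # 1. Select a direction (placeholder: leading right-singular vector,
        #    instead of the projection-pursuit criterion of [1]).
        _, _, vt = np.linalg.svd(residuals, full_matrices=False)
        axis = vt[0]
        # 2. Compute coordinates by orthogonal projection onto that direction.
        coords = residuals @ axis
        # 3. Estimate a recovery: here, a polynomial fit of each feature
        #    against the coordinate (a nonlinear stand-in for PCA's
        #    identity recovery).
        fits = [np.polyfit(coords, residuals[:, j], degree)
                for j in range(residuals.shape[1])]
        recovered = np.column_stack([np.polyval(c, coords) for c in fits])
        # 4. Replace the data by the residuals of the recovery and repeat.
        residuals = residuals - recovered
        model.append((axis, fits))
    return model, residuals

Replacing the polynomial recovery by the identity map along the selected axis reduces this loop to PCA,
mirroring the special case mentioned above; the nearest-neighbour-preserving loss of [1] would take the
place of the SVD-based direction selection.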
We investigated AAMs for functional data, in particular inputs and outputs of atmospheric dispersion
models. While AAM significantly outperformed PCA in some experiments, we also identified current
limitations to its general applicability. We explored changes to the loss function and to the initial metric
to improve its robustness.
[1] Girard, Stéphane and Serge Iovleff (2008). “Auto-Associative Models, Nonlinear Principal
Component Analysis, Manifolds and Projection Pursuit”. In: Lecture Notes in Computational
Science and Engineering. Springer Berlin Heidelberg, pp. 202–218. doi: 10.1007/978-3-540-73750-6_8.
url: https://doi.org/10.1007/978-3-540-73750-6_8.
