The Variational Information Bottleneck for Spectral Disentanglement

· 27 min read

A spectrum encodes two things: what the molecule is and which instrument measured it. A carbonyl C=O stretch always appears near 1720 cm⁻¹, but its exact position, width, and baseline shift depend on the spectrometer — detector response, optical path length, lamp aging, even room temperature. Train a model on one instrument and it fails on another.

This is the calibration transfer problem, and it has been the central practical barrier to deploying spectroscopic ML in production. Traditional solutions (PDS, SBC) require 25+ paired samples measured on both instruments. The goal: get that number below 10.

The Same Molecule, Two Instruments

Before diving into the theory, consider what the calibration transfer problem looks like in practice. Here is the same molecule — ethanol — measured on two different NIR spectrometers:

spectrum — same molecule, two instruments
[Figure: overlaid NIR spectra of ethanol; x-axis wavenumber (cm⁻¹), y-axis absorbance; O-H and C-H peaks shown for instrument A and instrument B]

The teal peaks are from Instrument A; the amber peaks are from Instrument B. Same molecule, same functional groups, same bond strengths — but the peaks are shifted by 1-3 cm⁻¹, broadened differently, and sitting on different baselines. A model trained on teal will misidentify amber, not because the chemistry changed, but because the instrument signature is different.

The VIB’s job is to learn a representation where the teal and amber embeddings of ethanol land in the same region of latent space, while the instrument-specific differences are captured (and later discarded) in a separate subspace.

latent space — z_chem (UMAP projection)
16 points (8 molecules × 2 instruments) — colored by instrument

The Information Bottleneck

The Variational Information Bottleneck (Alemi et al. 2017) provides the mathematical framework. Given an input X (a spectrum) and a target Y (the molecule), find a compressed representation Z that maximizes:

\mathcal{L}_{\text{IB}} = I(Z;\, Y) - \beta \cdot I(Z;\, X)

The first term says: Z should be maximally informative about the molecule. The second term says: Z should compress away everything else — noise, instrument artifacts, irrelevant variation. The parameter β controls the trade-off.

In practice, we can’t compute mutual information directly. The variational approximation replaces it with a tractable bound:

loss — variational information bottleneck

VIB Objective:

\mathcal{L}_{\text{VIB}} = \mathbb{E}_{q(z \mid x)}\left[\log p(y \mid z)\right] - \beta \cdot D_{\text{KL}}\!\left(q(z \mid x) \,\|\, p(z)\right)

Classification log-likelihood minus information cost — the standard formulation (Alemi et al. 2017). Here −log p(y|z) is cross-entropy, not reconstruction.

In the standard VIB formulation (Alemi et al. 2017), the first term is a classification log-likelihood — −log p(y|z) is cross-entropy measuring how well z predicts the target Y (e.g. molecule identity). Spektron uses a VAE-VIB hybrid where that classification term is replaced with a masked reconstruction loss L_MAE: instead of predicting molecule labels, the bottleneck must preserve enough information to reconstruct masked spectral patches. The second term is a KL divergence that regularizes the posterior toward a standard Gaussian prior — the same as a VAE, but the motivation differs. We’re not trying to generate spectra; we’re trying to forget instrument-specific information while keeping chemistry.

The third term is the adversarial loss from gradient reversal — the mechanism that actually enforces disentanglement between chemistry and instrument. Without it, the KL term compresses indiscriminately, discarding useful chemistry alongside instrument noise.
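Putting the three terms together, a minimal PyTorch sketch of the hybrid objective might look like the following. The function name, loss weights, and tensor layout are illustrative assumptions, not Spektron's actual API:

```python
import torch
import torch.nn.functional as F

def vib_total_loss(recon, target, mask, mu, logvar, domain_logits, domain_labels,
                   beta=0.01, lambda_adv=0.1):
    """Sketch of the three-term objective: masked reconstruction + KL + adversarial.

    Hypothetical signature; `beta` and `lambda_adv` are illustrative weights.
    """
    # 1. Masked reconstruction: MAE computed on masked patches only
    l_mae = (F.l1_loss(recon, target, reduction="none") * mask).sum() / mask.sum()
    # 2. KL(q(z|x) || N(0, I)), closed form for a diagonal Gaussian posterior
    l_kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # 3. Adversarial domain loss; domain_logits are assumed to arrive through a
    #    gradient-reversal layer applied to z_chem
    l_adv = F.cross_entropy(domain_logits, domain_labels)
    return l_mae + beta * l_kl + lambda_adv * l_adv
```

Note that the adversarial term is added, not subtracted: the reversal happens in the backward pass of the gradient-reversal layer, so the classifier is trained normally while the encoder receives the negated signal.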

Splitting the Latent Space

The key architectural choice in Spektron is splitting Z into two subspaces:

  • z_chem (128 dimensions) — chemistry: molecular identity, functional groups, bond strengths
  • z_inst (64 dimensions) — instrument: detector artifacts, baseline shape, resolution effects
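A hedged sketch of the split head: the 128/64 split and the 256-dim backbone width come from this post, but the layer structure and names are assumptions, not the actual `vib_architecture.py`:

```python
import torch
import torch.nn as nn

class VIBHead(nn.Module):
    """Split variational bottleneck: 128-dim z_chem + 64-dim z_inst (sketch)."""

    def __init__(self, backbone_dim=256, chem_dim=128, inst_dim=64):
        super().__init__()
        # Each subspace gets its own Gaussian posterior (mean + log-variance)
        self.chem_mu = nn.Linear(backbone_dim, chem_dim)
        self.chem_logvar = nn.Linear(backbone_dim, chem_dim)
        self.inst_mu = nn.Linear(backbone_dim, inst_dim)
        self.inst_logvar = nn.Linear(backbone_dim, inst_dim)

    def reparameterize(self, mu, logvar):
        # Standard VAE reparameterization trick
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, h):
        z_chem = self.reparameterize(self.chem_mu(h), self.chem_logvar(h))
        z_inst = self.reparameterize(self.inst_mu(h), self.inst_logvar(h))
        # Reconstruction consumes the concatenation; transfer keeps only z_chem
        return z_chem, z_inst, torch.cat([z_chem, z_inst], dim=-1)
```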

At training time, both subspaces are active. The reconstruction head uses the full [z_chem; z_inst] concatenation to reconstruct masked spectral patches. At transfer time, z_inst is discarded — only the chemistry survives.

But splitting the latent space alone doesn’t guarantee disentanglement. Without an explicit signal, the model can encode instrument information in z_chem (it’s a bigger subspace, so why not?). We need an adversarial constraint.

Why 128 + 64?

The asymmetric split reflects an information-theoretic prior: molecular structure has more intrinsic degrees of freedom than instrument response.

Chemical identity is high-dimensional. The QM9S training set contains ~130K unique molecules, each with a distinct combination of functional groups, ring systems, heteroatom positions, and conformational preferences. A meaningful embedding must capture fine-grained distinctions: the difference between ortho- and meta-substituted benzenes, between primary and secondary amines, between strained and unstrained ring systems. PCA on computed force constant matrices shows ~80-100 dimensions needed for 95% variance coverage across QM9 chemical space. We allocate 128 — headroom for the nonlinear manifold structure a neural encoder learns.

Instrument variation, by contrast, is low-dimensional. The dominant effects — baseline drift (2-3 DOF for polynomial curvature), wavelength shift (1 DOF), intensity scaling (1 DOF), and resolution broadening (1 DOF) — account for ~8-10 true degrees of freedom. We allocate 64 rather than 10 because the mapping from these physical effects to spectral distortions is highly nonlinear: a small wavelength shift produces peak-position-dependent intensity changes across the entire spectrum, and baseline curvature interacts with peak height in complex ways. At transfer time, all 64 dimensions are discarded — the over-allocation costs capacity during training only, not at inference.

Gradient Reversal: The Right Way

The idea: train a small classifier that takes z_chem and tries to predict which instrument recorded the spectrum. Then reverse the gradient — instead of helping z_chem encode instrument information, the reversed gradient forces z_chem to become instrument-invariant.

gradient flow — forward + reversal
[Diagram: forward pass — chemistry and domain signals flow to their respective heads; backward pass — gradient reversal. Wrong: KL-to-uniform. Correct: gradient reversal.]

The GradientReversal layer is deceptively simple: forward pass is identity, backward pass negates the gradient. During the forward pass, the domain classifier sees z_chem unchanged and learns to predict the instrument. During backpropagation, the negated gradient flows into the encoder, teaching it to produce z_chem representations that actively confuse the classifier.

The Bug That Looked Like Success

The initial implementation used KL divergence to a uniform distribution on the classifier output. This made the classifier output uniform — but it didn’t touch z_chem at all. The gradient only flowed into the classifier weights, not back through the input. The loss went down, the classifier output looked uniform, and everything appeared to work. Except z_chem still encoded instrument information.

The fix: cross-entropy loss with gradient reversal. The classifier is trained normally (cross-entropy against true domain labels), but the gradient reversal layer ensures the encoder gets the opposite signal. Now both parts of the system are adversarially coupled.

The implementation in PyTorch:

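A minimal implementation of the pattern described above — identity forward, negated backward, with x.clone() for DataParallel safety. This is a sketch consistent with the post's description; the `alpha` scaling argument is an assumption:

```python
import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        # clone(), not view_as(x): an independent copy avoids shared-storage
        # gradient corruption under multi-GPU DataParallel
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Negate (and scale) the gradient flowing back into the encoder;
        # None corresponds to the non-tensor alpha argument
        return -ctx.alpha * grad_output, None

def grad_reverse(x, alpha=1.0):
    return GradientReversal.apply(x, alpha)
```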

Note the x.clone() — not x.view_as(x). The original implementation used view_as, which creates a view sharing the same storage. Under DataParallel with multiple GPUs, this caused silent gradient corruption because both GPUs wrote to the same tensor. The clone creates an independent copy, making it safe for multi-GPU training.

Beta Annealing

The β parameter in the VIB loss controls how much information the bottleneck discards. Too high and the model forgets everything (including chemistry). Too low and it keeps everything (including instrument noise).

The optimal strategy is beta annealing: start with a relatively high β to encourage diverse, well-spread representations in the latent space, then gradually decrease it to tighten the bottleneck:

\beta(t) = \beta_{\text{end}} + \frac{1}{2}\left(\beta_{\text{start}} - \beta_{\text{end}}\right)\left(1 + \cos\left(\frac{\pi t}{T_{\text{anneal}}}\right)\right)

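The cosine schedule above translates directly to code. The default values mirror numbers quoted in this post (β from 0.1 to 0.001 over a 60% annealing window) and should be treated as illustrative:

```python
import math

def beta_schedule(step, total_steps, beta_start=0.1, beta_end=0.001,
                  anneal_frac=0.6):
    """Cosine annealing of the VIB beta (sketch of the formula above)."""
    t_anneal = anneal_frac * total_steps
    if step >= t_anneal:
        # After the annealing window, hold at the final value
        return beta_end
    cos_term = math.cos(math.pi * step / t_anneal)
    return beta_end + 0.5 * (beta_start - beta_end) * (1.0 + cos_term)
```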

The intuition: in early training, a high β prevents the model from collapsing z_chem into a narrow region of the latent space. The KL penalty keeps the posterior spread out, forcing the encoder to use the full capacity of the 128-dimensional space. As training progresses and the encoder has learned meaningful structure, decreasing β allows the model to form tighter, more discriminative clusters — each molecule gets its own region of latent space.

Without annealing, a fixed β presents a dilemma. A high β (0.1) throughout training produces well-spread latent codes but prevents the encoder from forming tight molecular clusters — chemistry resolution plateaus. A low β (0.001) allows tight clusters but risks posterior collapse: the encoder discovers a few high-density modes early and never explores the rest of the latent space, leaving most of the 128 dimensions unused.

Posterior collapse is worth taking seriously at 128 dimensions. With β_start = 0.1 and 128 latent dims, the KL penalty is strong enough to push many dimensions to the prior (μ ≈ 0, σ ≈ 1, contributing zero information). The per-dimension KL diagnostic catches this early: if more than 30% of z_chem dimensions have D_KL(N(μᵢ, σᵢ²) ‖ N(0, 1)) < 0.01 at step 5K, you’re collapsing. An alternative that avoids this entirely is cyclical annealing (Fu et al. 2019): instead of monotonically decreasing β, it cycles it — rise, high plateau, fall — multiple times. Each cycle gives the model a chance to activate new latent dimensions that collapsed in the previous cycle. For 128-dim bottlenecks on large datasets, cyclical annealing tends to activate 20-30% more latent dimensions than monotone annealing.
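The per-dimension KL diagnostic is a few lines of PyTorch. Function and parameter names here are hypothetical; the closed-form KL is the standard diagonal-Gaussian expression:

```python
import torch

def collapsed_fraction(mu, logvar, threshold=0.01):
    """Fraction of latent dimensions whose batch-averaged KL to N(0,1) is
    below `threshold` — a proxy for posterior collapse."""
    # Closed-form KL(N(mu_i, sigma_i^2) || N(0, 1)), computed per dimension
    kl_per_dim = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0)
    kl_per_dim = kl_per_dim.mean(dim=0)  # average over the batch
    return (kl_per_dim < threshold).float().mean().item()
```

By the rule of thumb above, a return value above 0.3 for z_chem early in training is a collapse warning.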

The cosine schedule resolves the fixed-β dilemma: explore first (high β), then exploit (low β). The 60% annealing window was determined empirically — shorter windows don’t allow enough exploration, while longer windows delay the tightening phase and reduce final discriminability.

Why Not Just Use Domain Adaptation?

Standard domain adaptation (MMD, CORAL, DANN) aligns the entire representation across domains. This is problematic for spectra because some domain-specific information is useful during training. The instrument response function affects peak shapes, and the model needs to understand these shapes to reconstruct masked patches correctly.

The VIB split preserves this: z_inst keeps instrument information available for reconstruction, while z_chem is cleaned of it. At transfer time, you discard z_inst and keep the clean chemistry.

The differences between VIB and standard domain adaptation approaches are worth examining in detail, because the choice has practical consequences for transfer performance.

Maximum Mean Discrepancy (MMD) minimizes the distance between the mean embeddings of source and target distributions in a reproducing kernel Hilbert space. For spectral data, this forces the model to produce similar average representations across instruments — but it says nothing about the structure within each domain. Two instruments might have the same mean embedding but completely different internal organization (e.g., different functional group clusters swapped in position). MMD alignment can succeed at matching marginal statistics while failing at the molecular-level correspondence that transfer actually requires.

Correlation Alignment (CORAL) goes further: it matches both the mean and covariance of the source and target feature distributions. This is more robust than MMD for spectral data because it preserves the correlational structure (which peaks co-vary). But CORAL treats all dimensions equally — it aligns the entire 256-dimensional backbone output, including dimensions that encode genuinely instrument-specific information. For calibration transfer, this over-alignment is counterproductive: CORAL tries to make a spectrum from Instrument A “look like” one from Instrument B in every dimension, rather than extracting the instrument-independent chemistry.

Domain-Adversarial Neural Networks (DANN) are the closest relative of the VIB approach. DANN also uses gradient reversal to learn domain-invariant features. The key difference is where the reversal is applied: DANN applies it to the entire representation, while VIB applies it only to z_chem. The separate z_inst subspace in VIB acts as a “pressure release valve” — it gives the encoder somewhere to put instrument information without contaminating the chemistry representation. Without this valve (as in DANN), the encoder faces a harder optimization: it must encode instrument information nowhere, which means the reconstruction head loses access to useful instrument-specific features during pretraining.

One recent baseline worth tracking: LoRA-CT (Lai et al. 2025) adapts a pretrained spectral encoder to a new instrument via low-rank weight updates, achieving R² = 0.952 on Raman calibration transfer. That matches our target exactly, using a different paradigm — no explicit disentanglement, just parameter-efficient fine-tuning. The advantage of the VIB approach over LoRA-CT is the 10-sample regime: LoRA-CT requires ~50 paired samples to estimate low-rank updates reliably, while the VIB + TTT pipeline targets ≤10 unlabeled samples. Whether that advantage holds on real NIR benchmarks is what the corn moisture evaluation will determine.

Key Insight

Domain adaptation methods force the model to be instrument-blind everywhere. The VIB split forces it to be instrument-blind only where it matters (z_chem) while preserving instrument awareness where it helps (z_inst for reconstruction). At transfer time, you discard the instrument-aware part. This is strictly better than domain adaptation whenever the training objective benefits from instrument information — which is always the case for spectral reconstruction.

The Transfer Pipeline

At deployment, calibration transfer works in three steps:

pipeline — calibration transfer

  1. Encode — spectrum → z_chem + z_inst
  2. TTT — 3 self-supervised steps
  3. Retrieve — k-NN in z_chem space
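The retrieval step — k-NN in z_chem space — can be sketched as follows, assuming library embeddings are precomputed. Cosine similarity is an assumption here; the post doesn't specify the metric:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def retrieve(z_query, z_library, k=5):
    """Return indices of the k nearest library embeddings for each query.

    Sketch only: z_query is (Q, D), z_library is (N, D), both z_chem vectors.
    """
    q = F.normalize(z_query, dim=-1)
    lib = F.normalize(z_library, dim=-1)
    sims = q @ lib.T                      # (Q, N) cosine similarity matrix
    return sims.topk(k, dim=-1).indices   # (Q, k) nearest-neighbor indices
```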

Test-Time Training in Detail

The test-time training (TTT) step is critical. Even with a well-disentangled z_chem, there’s residual instrument leakage — the encoder was trained on instruments A and B, but the target might be instrument C with characteristics the model has never seen.

TTT adapts the model to the new instrument without any labels. The procedure:

  1. Take K unlabeled spectra from the target instrument (K = 5–10 typically)
  2. Apply the same masked reconstruction objective used in pretraining — mask 35% of patches, reconstruct, compute MSE loss
  3. Update only the lightweight parameters — LayerNorm affine parameters and the VIB projection heads. The D-LinOSS backbone and MoE experts are frozen. This prevents catastrophic forgetting while allowing the normalization layers to adapt to the target instrument’s intensity scale and the VIB head to adjust its chemistry/instrument split for the new domain.
  4. Run 3 gradient steps at LR = 10⁻⁵ (10× lower than the pretraining LR of 10⁻⁴). More steps risk overfitting to the K samples; fewer steps leave residual domain shift.
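Step 3 — restricting updates to the lightweight parameters — might look like this. The traversal is generic PyTorch, not Spektron's code, and for brevity it selects only LayerNorm affines; the real pipeline also includes the VIB projection heads:

```python
import torch.nn as nn

def ttt_parameters(model):
    """Freeze the model, then re-enable only LayerNorm affine parameters
    for test-time updates. Returns the trainable parameter list."""
    trainable = []
    for module in model.modules():
        if isinstance(module, nn.LayerNorm):
            trainable.extend(module.parameters())
    # Freeze everything first, then re-enable the lightweight set
    for p in model.parameters():
        p.requires_grad_(False)
    for p in trainable:
        p.requires_grad_(True)
    return trainable
```

The returned list is what you would hand to the optimizer for the 3 TTT steps.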

The key insight: the self-supervised reconstruction loss doesn’t need labels — it uses the spectrum itself as the target. The model adapts by learning to reconstruct the new instrument’s spectra, which implicitly teaches the VIB head what “instrument noise” looks like for this particular instrument. After TTT, z_inst captures the new instrument’s characteristics, and z_chem is cleaned of them.

What the Latent Space Looks Like

When disentanglement works, z_chem clusters by molecule regardless of which instrument recorded the spectrum. When it fails, you see instrument-specific sub-clusters — the same molecule occupies different regions of latent space depending on the source instrument.

disentanglement tracker — training dynamics
[Metric cards: z_chem domain acc → chance level (52.1%) · z_inst domain acc ↑ (94.7%) · z_chem mol acc ↑ (87.3%)]
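The domain-probe check can be sketched with a linear classifier. In a real evaluation you would fit the probe on one split and score it on a held-out split; this compact version, with hypothetical names, is illustrative only:

```python
import torch
import torch.nn.functional as F

def domain_probe_accuracy(z, domain_labels, steps=200, lr=0.1):
    """Train a linear probe to predict the instrument from latent codes z,
    then report its accuracy. Near-chance on z_chem (and high on z_inst)
    is the disentanglement signal described above."""
    n_domains = int(domain_labels.max().item()) + 1
    probe = torch.nn.Linear(z.shape[1], n_domains)
    opt = torch.optim.SGD(probe.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(probe(z), domain_labels)
        loss.backward()
        opt.step()
    preds = probe(z).argmax(dim=-1)
    return (preds == domain_labels).float().mean().item()
```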

The key metric: domain classification accuracy on z_chem should be at chance level (50% for two instruments). If a classifier can predict the instrument from z_chem, disentanglement has failed. On z_inst, high domain accuracy is expected — that subspace is supposed to capture instrument variation.

A word of caution: chance-level domain accuracy is a necessary but not sufficient condition for disentanglement. A model that maps all inputs to the same point achieves 50% domain accuracy trivially — but it also encodes zero chemistry. Locatello et al. (2019) proved that fully unsupervised disentanglement is impossible without inductive biases; gradient reversal provides exactly that bias (instrument labels during training), so this is weakly-supervised disentanglement, not unsupervised. Always check molecule accuracy on z_chem alongside domain accuracy. If molecule accuracy is below 80% at chance-level domain accuracy, the model has collapsed — not disentangled.

The sparklines in the metric cards tell the training story. The z_chem domain accuracy starts high (~85% early in training, when the encoder hasn’t learned to hide instrument information) and drops toward chance as the gradient reversal takes effect. The z_inst accuracy rises in the opposite direction — as z_chem stops encoding instrument information, z_inst takes on more of that burden. The molecule accuracy on z_chem rises steadily throughout, confirming that chemistry information is being preserved even as instrument information is removed.

Practical Lessons

Five hard-won lessons from getting VIB to work in Spektron.

1. Test with cross-instrument data, not held-out same-instrument data. The VIB loss can look perfect — low KL, good reconstruction, nice latent clusters — while z_chem still leaks instrument information. The only honest evaluation is to train on instrument A and evaluate on instrument B without any transfer samples. If accuracy drops more than 5 points relative to same-instrument held-out performance, disentanglement is incomplete. During development, we saw cases where same-instrument accuracy was 89% but cross-instrument accuracy was 61%. The model had memorized instrument-specific peak shapes in z_chem because the gradient reversal weight was too low.

2. The VIB loss weight matters more than you’d expect. The total loss has at least four terms: reconstruction, VIB KL, adversarial domain classification, and optionally OT. If the VIB KL weight is too low (< 10⁻⁴), the bottleneck is effectively absent and z_chem encodes everything including instrument noise. If it’s too high (> 10⁻¹), the bottleneck over-compresses and z_chem collapses to the prior — a spherical Gaussian carrying zero information. The sweet spot is narrow, and it interacts with the beta annealing schedule. In practice, we sweep the VIB weight on a log scale {10⁻⁴, 10⁻³, 10⁻², 10⁻¹} and select based on cross-instrument retrieval accuracy, not training loss.

3. Gradient reversal strength needs warmup. Setting the reversal coefficient α to 1.0 from step 0 destabilizes training — the adversarial signal overwhelms the reconstruction gradient before the encoder has learned any useful features. The schedule from Ganin et al. (2016) is more principled than linear warmup:

\alpha(p) = \frac{2}{1 + \exp(-\gamma p)} - 1, \qquad p = \frac{\text{step}}{\text{total steps}}, \qquad \gamma = 10

This sigmoid schedule rises slowly from 0, accelerates through mid-training, and saturates at 1.0 — front-loading the easy learning and gradually introducing adversarial pressure as the encoder matures. We use γ = 10 as in the original paper, which produces near-linear warmup over the first 20% of training. Without warmup (any schedule), training loss oscillates wildly and the encoder learns degenerate constant-output features.
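The Ganin schedule is a one-liner; this sketch just transcribes the formula above:

```python
import math

def grl_alpha(step, total_steps, gamma=10.0):
    """Sigmoid warmup for the gradient-reversal coefficient (Ganin et al. 2016)."""
    p = step / total_steps  # training progress in [0, 1]
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0
```

The returned value would scale the reversed gradient (the `alpha` argument of a gradient-reversal layer).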

4. Simulate instruments during pretraining. QM9S contains computed (not measured) spectra — there is no real instrument variation. To train the VIB’s disentanglement during pretraining, we simulate instrument effects via augmentation: random wavenumber shifts (±3 cm⁻¹), Gaussian noise (SNR 30-60 dB), polynomial baseline drift (order 2-4), and resolution broadening (Gaussian convolution, σ = 2-8 cm⁻¹). Each spectrum is randomly assigned to one of 4 simulated “instruments” with consistent augmentation parameters per instrument. This gives the domain classifier something to learn and the gradient reversal something to reverse.
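A sketch of the four augmentations, with parameter ranges taken from the list above. Implementation details — a 1 cm⁻¹ grid spacing, the polynomial basis, and the coefficient scale — are assumptions for illustration:

```python
import numpy as np

def simulate_instrument(spectrum, rng, shift_cm=3.0, snr_db=(30.0, 60.0),
                        baseline_order=3, sigma_broaden=(2.0, 8.0)):
    """Apply shift, broadening, baseline drift, and noise to a 1D spectrum.

    Assumes the spectrum is sampled on a uniform 1 cm^-1 wavenumber grid.
    """
    n = spectrum.shape[0]
    # 1. Wavenumber shift: roll by an integer number of bins (±shift_cm)
    shift = int(rng.integers(-int(shift_cm), int(shift_cm) + 1))
    out = np.roll(spectrum, shift)
    # 2. Resolution broadening: convolve with a Gaussian kernel
    sigma = rng.uniform(*sigma_broaden)
    half = int(4 * sigma)
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    out = np.convolve(out, kernel, mode="same")
    # 3. Polynomial baseline drift (coefficient scale is illustrative)
    t = np.linspace(-1.0, 1.0, n)
    coeffs = rng.normal(0.0, 0.02, baseline_order + 1)
    out = out + np.polyval(coeffs, t)
    # 4. Additive Gaussian noise at a random SNR
    snr = rng.uniform(*snr_db)
    noise_std = np.sqrt((out ** 2).mean()) / (10.0 ** (snr / 20.0))
    return out + rng.normal(0.0, noise_std, n)
```

Fixing the random draws per simulated "instrument" (rather than per spectrum) is what makes the augmentation behave like a consistent instrument signature.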

5. Current status. The VIB head is pretraining as part of Spektron v3 on QM9S (222K training spectra, 4 simulated instruments). Beta annealing, gradient reversal with warmup, and MoE gating are all live. Evaluation on the corn moisture benchmark (3 real NIR instruments) is next — that’s where the R² > 0.952 target will be tested.


The theoretical framework connecting VIB to the spectral identifiability theory is direct: the Information Completeness Ratio R(G, N) tells you how much chemistry is recoverable from spectra. The VIB’s job is to extract exactly that recoverable chemistry while discarding everything else. When R = 1, all chemistry is in the spectrum — the VIB just needs to separate it from instrument noise.
