
Generative Model Inversion Through the Lens of the Manifold Hypothesis


Hi, this is the code for our NeurIPS 2025 paper: Generative Model Inversion Through the Lens of the Manifold Hypothesis. This repository provides tools to empirically validate the gradient–manifold alignment hypothesis and to train alignment-aware models for improved model inversion.


🚀 Getting started

1. Environment Setup

Install all required dependencies using the provided environment file:

conda env create -f AlignMI.yaml
conda activate AlignMI

2.1 High-resolution setup

🧩 Setup StyleGAN2

To use our attacks with StyleGAN2, clone the official StyleGAN2-ADA-PyTorch repo into the project's root folder and remove its Git-specific folders and files:

git clone https://github.com/NVlabs/stylegan2-ada-pytorch.git
rm -r --force stylegan2-ada-pytorch/.git/
rm -r --force stylegan2-ada-pytorch/.github/
rm --force stylegan2-ada-pytorch/.gitignore

To download the pre-trained weights, run the following command from the project's root folder, or manually copy the weights into stylegan2-ada-pytorch/:

wget https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/ffhq.pkl -P stylegan2-ada-pytorch/

NVIDIA provides the following pre-trained models: ffhq.pkl, metfaces.pkl, afhqcat.pkl, afhqdog.pkl, afhqwild.pkl, cifar10.pkl, and brecahad.pkl. Adjust the command above accordingly; for training and resolution details, please visit the official repo.
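For example, to fetch the MetFaces weights instead (same URL pattern, different file name):

wget https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metfaces.pkl -P stylegan2-ada-pytorch/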


🗂️ Prepare Datasets

In this repository, we support CelebA as the dataset for training the target models. Please follow the instructions on its website to download the dataset. Place it in the data folder and make sure the following structure is kept:

.
└── data
    └── celeba
        ├── img_align_celeba
        ├── identity_CelebA.txt
        ├── list_attr_celeba.txt
        ├── list_bbox_celeba.txt
        ├── list_eval_partition.txt
        ├── list_landmarks_align_celeba.txt
        └── list_landmarks_celeba.txt

For CelebA, we used a custom crop of the images produced with the HD CelebA Cropper to increase the resolution of the cropped and aligned samples. We cropped the images with a face factor of 0.65 and resized them to 224x224 using bicubic interpolation; the other parameters were left at their defaults. Note that we only use the 1,000 identities with the most samples out of the 10,177 available identities.
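As a minimal illustration (not part of the repo) of how such an identity subset can be derived, assuming identity_CelebA.txt lists one filename/identity-id pair per line:

# Illustrative sketch: pick the 1,000 CelebA identities with the most samples.
from collections import Counter

with open("data/celeba/identity_CelebA.txt") as f:
    pairs = [line.split() for line in f]              # [filename, identity_id]

counts = Counter(identity for _, identity in pairs)
top_ids = {identity for identity, _ in counts.most_common(1000)}
selected = [fname for fname, identity in pairs if identity in top_ids]
print(f"kept {len(top_ids)} identities, {len(selected)} images")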


💾 Prepare Checkpoints for Target and Evaluation Models

Visit the shared Google Drive folder: 👉 Pretrained models.

2.2 Low-resolution setup

🗂️ Prepare Datasets

In this repository, we support CelebA and FFHQ as datasets for training the target models. Please follow the instructions on the respective websites to download them. Place all datasets in the data folder, maintaining the following directory structure. For the datasets used in PLG-MI, please refer to the PLG-MI Repository.

.
└── data
    ├── celeba
    │   ├── img_align_celeba
    │   └── meta
    │       ├── celeba_target_300ids_label.npy
    │       ├── celeba_target_300ids.npy
    │       ├── fea_target_300ids.npy
    │       ├── ganset.txt
    │       ├── testset.txt
    │       └── trainset.txt
    │
    └── ffhq
        ├── thumbnails128x128
        └── meta
            └── ganset_ffhq.txt

💾 Prepare Checkpoints for Target and Evaluation Models

Visit the shared Google Drive folder: 👉 Pretrained models.

πŸ” Empirical Validation of the Hypothesis

Tangent-Space Basis Computation

This step encodes input images using a pretrained VAE and computes the tangent-space basis of the data manifold via JVP + SVD. The results are saved as (x, y, U) tuples for downstream analysis.
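A minimal sketch of the JVP + SVD step (an illustration, assuming a VAE decoder g and torch.func.jvp; the actual pipeline, including the VAE encoding, memory-friendly chunking, and the saved (x, y, U) format, lives in compute_tangent_space_basis.py):

import torch
from torch.func import jvp

def tangent_basis(decoder, z):
    # Build the Jacobian J = dg/dz one column per latent direction via JVPs,
    # then take its left singular vectors as an orthonormal tangent basis.
    d = z.numel()
    cols = []
    for i in range(d):
        e = torch.zeros_like(z)
        e.view(-1)[i] = 1.0
        _, Je = jvp(decoder, (z,), (e,))   # J @ e_i
        cols.append(Je.reshape(-1))
    J = torch.stack(cols, dim=1)           # (D, d) Jacobian of the decoder
    U, _, _ = torch.linalg.svd(J, full_matrices=False)
    return U                               # (D, d) tangent-space basis at g(z)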

▶️ Usage

Single-process (rank 0 of 1):

python compute_tangent_space_basis.py \
  --config ./configs/training/targets/compute_tangent_space_basis.yaml \
  --output_dir ./tangent_space \
  --batch_size 100 \
  --chunk_size 8 \
  --world_size 1 \
  --rank 0

Multi-GPU example:

WORLD_SIZE=10
for RANK in $(seq 0 $((WORLD_SIZE-1))); do
  CUDA_VISIBLE_DEVICES=$RANK python compute_tangent_space_basis.py \
    --config ./configs/training/targets/compute_tangent_space_basis.yaml \
    --output_dir ./tangent_space \
    --batch_size 100 \
    --chunk_size 8 \
    --world_size $WORLD_SIZE \
    --rank $RANK &
done
wait

🧩 Training the Alignment-Aware Model

Assuming your tangent-space files (e.g., x_y_U_list_subset0.pt) are ready, launch the alignment-aware training with:

python train_align_model.py \
  --config ./configs/training/targets/vgg16_align_train.yaml
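The training objective itself is defined in train_align_model.py and the config above. Purely as an illustration of what an alignment-aware penalty can look like (an assumed form, not the repo's exact loss), one can penalize the input-gradient energy that falls outside the precomputed tangent basis U:

import torch
import torch.nn.functional as F

def alignment_penalty(model, x, y, U):
    # Assumed form: 1 - ||P_U grad||^2 / ||grad||^2, where P_U = U U^T projects
    # the input gradient onto the tangent space spanned by the columns of U.
    # x is a single example with batch dimension; U has shape (D, d), D = C*H*W.
    x = x.clone().requires_grad_(True)
    (g,) = torch.autograd.grad(F.cross_entropy(model(x), y), x, create_graph=True)
    g = g.reshape(-1)
    proj = U @ (U.T @ g)
    return 1.0 - proj.dot(proj) / g.dot(g).clamp_min(1e-12)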

πŸ” Evaluation of Gradient–Manifold Alignment (AlignMI)

1. High-resolution Setting (based on Plug & Play Attacks, PPA)

Configuration files:

  • ./high_resolution/config/attacking/*.json for hyperparameters such as training epochs, batch_size, optimizer, etc.
  • ./high_resolution/attacks/optimize.py for PAA/TAA parameters and visualization settings (a conceptual sketch of the averaging follows this list).

📦 Code scripts:

➤ Baseline (Standard PPA)

CUDA_VISIBLE_DEVICES=0 python -W ignore attack.py -c=./configs/attacking/CelebA_ResNet18_SG1024_bs50.yaml --exp_name=CelebA-ResNet18-id0-100

➤ PAA (Perturbation-Averaged Alignment)

CUDA_VISIBLE_DEVICES=0 python -W ignore attack_PAA.py -c=./configs/attacking/CelebA_ResNet18_SG1024_bs50.yaml --exp_name=CelebA-ResNet18-PAA-id0-100

➤ TAA (Transformation-Averaged Alignment)

CUDA_VISIBLE_DEVICES=0 python -W ignore attack_TAA.py -c=./configs/attacking/CelebA_ResNet18_SG1024_bs50.yaml --exp_name=CelebA-ResNet18-TAA-id0-100

2. Low-resolution Setting

Configuration parameters: modify the configuration in

  • ./low_resolution/config/attacking/*.json for hyperparameters such as training epochs, batch_size, optimizer, etc.
  • ./low_resolution/attacks/optimize.py for PAA/TAA parameters and visualization settings.

📦 Code scripts (LOMMA (GMI) as an example):

➤ Baseline (Standard GMI)

CUDA_VISIBLE_DEVICES=0 python attack_gmi.py -sg \
  --exp_name celeba_vgg16_gmi_id0-100 \
  --config configs/attacking/gmi_stylegan-celeba_vgg16-celeba.yaml

➤ PAA (Perturbation-Averaged Alignment)

With the PAA parameters enabled in ./low_resolution/attacks/optimize.py:

CUDA_VISIBLE_DEVICES=0 python attack_gmi.py -sg \
  --exp_name celeba_vgg16_gmi_paa_id0-100 \
  --config configs/attacking/gmi_stylegan-celeba_vgg16-celeba.yaml

➤ TAA (Transformation-Averaged Alignment)

With the TAA parameters enabled in ./low_resolution/attacks/optimize.py:

CUDA_VISIBLE_DEVICES=0 python attack_gmi.py -sg \
  --exp_name celeba_vgg16_gmi_taa_id0-100 \
  --config configs/attacking/gmi_stylegan-celeba_vgg16-celeba.yaml

📚 References

If you find this code helpful in your research, please consider citing:

@inproceedings{peng2025AlignMI,
  title={Generative Model Inversion Through the Lens of the Manifold Hypothesis},
  author={Peng, Xiong and Han, Bo and Yu, Fengfei and Liu, Tongliang and Liu, Feng and Zhou, Mingyuan},
  booktitle={NeurIPS},
  year={2025}
}

😊 Implementation Credits

Our implementation benefits from several existing repositories. Thanks to the authors (PPA, GMI, KEDMI, LOMMA, BREPMI, RLBMI, and PLG-MI) for making their code publicly available.
