Hi, this is the code for our NeurIPS 2025 paper: Generative Model Inversion Through the Lens of the Manifold Hypothesis. This repository provides tools to empirically validate gradient-manifold alignment hypotheses and to train alignment-aware models for improved model inversion.
Install all required dependencies using the provided environment file:
conda env create -f AlignMI.yaml
conda activate AlignMI

To use our attacks with StyleGAN2, clone the official StyleGAN2-ADA-PyTorch repo into the project's root folder and remove its git-specific folders and files:
git clone https://github.com/NVlabs/stylegan2-ada-pytorch.git
rm -r --force stylegan2-ada-pytorch/.git/
rm -r --force stylegan2-ada-pytorch/.github/
rm --force stylegan2-ada-pytorch/.gitignore
To download the pre-trained weights, run the following command from the project's root folder or copy the weights into stylegan2-ada-pytorch:
wget https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/ffhq.pkl -P stylegan2-ada-pytorch/

NVIDIA provides the following pre-trained models: ffhq.pkl, metfaces.pkl, afhqcat.pkl, afhqdog.pkl, afhqwild.pkl, cifar10.pkl, brecahad.pkl. Adjust the command above accordingly. For training and resolution details, please visit the official repo.
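You can sanity-check the download by loading the pickle and sampling one image, following the usage shown in the official StyleGAN2-ADA-PyTorch README (run this from inside stylegan2-ada-pytorch so its torch_utils and dnnlib modules are importable):

# Sanity check: load the downloaded generator and sample one image.
import pickle
import torch

with open('ffhq.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()    # pre-trained generator

z = torch.randn([1, G.z_dim]).cuda()      # random latent code
img = G(z, None)                          # (1, 3, 1024, 1024), values in [-1, 1]
print(img.shape)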
For the high-resolution setting, this repository supports CelebA as the dataset to train the target models. Please follow the instructions on the website to download the dataset. Place all datasets in the folder data and make sure that the following structure is kept:
.
└── data
    └── celeba
        ├── img_align_celeba
        ├── identity_CelebA.txt
        ├── list_attr_celeba.txt
        ├── list_bbox_celeba.txt
        ├── list_eval_partition.txt
        ├── list_landmarks_align_celeba.txt
        └── list_landmarks_celeba.txt
For CelebA, we used a custom crop of the images via the HD CelebA Cropper to increase the resolution of the cropped and aligned samples. We cropped the images with a face factor of 0.65 and resized them to 224x224 using bicubic interpolation; all other parameters were left at their defaults. Note that we only use the 1,000 identities with the most samples out of the 10,177 available identities.
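For reference, the identity selection can be reproduced directly from identity_CelebA.txt; the following standalone sketch (paths and variable names are ours, not the repo's) counts samples per identity and keeps the 1,000 most frequent:

# Sketch: pick the 1,000 CelebA identities with the most samples.
# Assumes data/celeba/identity_CelebA.txt with lines "<image_name> <identity_id>".
from collections import Counter

with open('data/celeba/identity_CelebA.txt') as f:
    pairs = [line.split() for line in f]          # [filename, identity]

counts = Counter(identity for _, identity in pairs)
top_ids = {identity for identity, _ in counts.most_common(1000)}

# Images belonging to the selected identities.
selected = [fname for fname, identity in pairs if identity in top_ids]
print(len(top_ids), 'identities,', len(selected), 'images')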
Visit the shared Google Drive folder: Pretrained models.
For the low-resolution setting, we support CelebA and FFHQ as datasets to train the target models. Please follow the instructions on the websites to download the datasets. Place all datasets in the data folder, maintaining the following directory structure. For the datasets used in PLG-MI, please refer to the PLG-MI repository.
.
└── data
    ├── celeba
    │   ├── img_align_celeba
    │   └── meta
    │       ├── celeba_target_300ids_label.npy
    │       ├── celeba_target_300ids.npy
    │       ├── fea_target_300ids.npy
    │       ├── ganset.txt
    │       ├── testset.txt
    │       └── trainset.txt
    │
    └── ffhq
        ├── thumbnails128x128
        └── meta
            └── ganset_ffhq.txt
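Optionally, a small check such as the following (our convenience snippet, not part of the repo) confirms the layout before training:

# Sketch: verify the expected low-resolution dataset layout exists.
from pathlib import Path

expected = [
    'data/celeba/img_align_celeba',
    'data/celeba/meta/celeba_target_300ids_label.npy',
    'data/celeba/meta/celeba_target_300ids.npy',
    'data/celeba/meta/fea_target_300ids.npy',
    'data/celeba/meta/ganset.txt',
    'data/celeba/meta/testset.txt',
    'data/celeba/meta/trainset.txt',
    'data/ffhq/thumbnails128x128',
    'data/ffhq/meta/ganset_ffhq.txt',
]
missing = [p for p in expected if not Path(p).exists()]
print('all files present' if not missing else f'missing: {missing}')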
Visit the shared Google Drive folder: Pretrained models.
This step encodes input images using a pretrained VAE and computes the tangent-space basis of the data manifold via JVP + SVD. The results are saved as (x, y, U) tuples for downstream analysis.
Single-process (rank 0 of 1):
python compute_tangent_space_basis.py \
--config ./configs/training/targets/compute_tangent_space_basis.yaml \
--output_dir ./tangent_space \
--batch_size 100 \
--chunk_size 8 \
--world_size 1 \
--rank 0

Multi-GPU example (one process per GPU):
WORLD_SIZE=10
for RANK in $(seq 0 $((WORLD_SIZE-1))); do
CUDA_VISIBLE_DEVICES=$RANK python compute_tangent_space_basis.py \
--config ./configs/training/targets/compute_tangent_space_basis.yaml \
--output_dir ./tangent_space \
--batch_size 100 \
--chunk_size 8 \
--world_size $WORLD_SIZE \
--rank $RANK &
done
wait
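Conceptually, the per-sample computation resembles the sketch below; the decoder, latent size, and column-by-column loop are illustrative stand-ins, not the script's actual implementation (in the script, --chunk_size presumably controls how many JVP directions are evaluated at once):

# Minimal sketch: tangent basis of the decoder-induced manifold at one
# latent code, via JVP + SVD. The decoder here is a stand-in for the
# pretrained VAE decoder used by the repo.
import torch
from torch.func import jvp

latent_dim, k = 64, 16                      # assumed sizes
decoder = torch.nn.Sequential(              # stand-in VAE decoder
    torch.nn.Linear(latent_dim, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 3 * 32 * 32),
)

z = torch.randn(latent_dim)                 # latent code of one encoded image

# Build the Jacobian J = d decoder(z) / dz one column at a time with JVPs.
cols = []
for i in range(latent_dim):
    e = torch.zeros(latent_dim); e[i] = 1.0
    _, col = jvp(decoder, (z,), (e,))       # J @ e_i
    cols.append(col)
J = torch.stack(cols, dim=1)                # (pixels, latent_dim)

# Left singular vectors span the tangent space of the decoded manifold at z.
U, S, _ = torch.linalg.svd(J, full_matrices=False)
U_k = U[:, :k]                              # top-k tangent directions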
Assuming your tangent-space files (e.g., x_y_U_list_subset0.pt) are ready, launch the alignment-aware training with:
python train_align_model.py \
--config ./configs/training/targets/vgg16_align_train.yaml
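For intuition only: one plausible form of an alignment-aware objective penalizes the component of the classifier's input gradient that falls outside the precomputed tangent space spanned by U. The sketch below is our assumption of what such a loss could look like, not the actual train_align_model.py implementation:

# Illustrative sketch (assumption, not the repo's code): encourage the
# input gradient of the classification loss to lie in span(U).
import torch
import torch.nn.functional as F

def alignment_loss(model, x, y, U, lam=1.0):
    """x: (D,) flattened input; y: scalar label; U: (D, k) tangent basis."""
    x = x.detach().requires_grad_(True)
    logits = model(x.unsqueeze(0))
    ce = F.cross_entropy(logits, y.unsqueeze(0))
    (g,) = torch.autograd.grad(ce, x, create_graph=True)
    g_tangent = U @ (U.T @ g)                 # projection onto span(U)
    off_manifold = g - g_tangent              # component to suppress
    align = off_manifold.pow(2).sum() / (g.pow(2).sum() + 1e-12)
    return ce + lam * align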
Configuration files:
- ./high_resolution/config/attacking/*.json for hyperparameters such as training epochs, batch_size, optimizer, etc.
- ./high_resolution/attacks/optimize.py for PAA/TAA parameters and visualization settings.
Then run the high-resolution attacks (baseline, PAA, and TAA):

CUDA_VISIBLE_DEVICES=0 python -W ignore attack.py -c=./configs/attacking/CelebA_ResNet18_SG1024_bs50.yaml --exp_name=CelebA-ResNet18-id0-100
CUDA_VISIBLE_DEVICES=0 python -W ignore attack_PAA.py -c=./configs/attacking/CelebA_ResNet18_SG1024_bs50.yaml --exp_name=CelebA-ResNet18-PAA-id0-100
CUDA_VISIBLE_DEVICES=0 python -W ignore attack_TAA.py -c=./configs/attacking/CelebA_ResNet18_SG1024_bs50.yaml --exp_name=CelebA-ResNet18-TAA-id0-100

Modify the configuration in:
- ./low_resolution/config/attacking/*.json for hyperparameters such as training epochs, batch_size, optimizer, etc.
- ./low_resolution/attacks/optimize.py for PAA/TAA parameters and visualization settings.
Then run the low-resolution GMI attack:

CUDA_VISIBLE_DEVICES=0 python attack_gmi.py -sg \
--exp_name celeba_vgg16_gmi_id0-100 \
--config configs/attacking/gmi_stylegan-celeba_vgg16-celeba.yaml

If you find this code helpful in your research, please consider citing:
@inproceedings{peng2025AlignMI,
title={Generative Model Inversion Through the Lens of the Manifold Hypothesis},
author={Peng, Xiong and Han, Bo and Yu, Fengfei and Liu, Tongliang and Liu, Feng and Zhou, Mingyuan},
booktitle={NeurIPS},
year={2025}
}

Our implementation benefits from several existing repositories. Thanks to the authors of PPA, GMI, KEDMI, LOMMA, BREPMI, RLBMI, and PLG-MI for making their code publicly available.