
Official implementation of the paper "SGDIFF: Dual-Granularity Semantic Guided Sparse Routing Diffusion Model for General Pansharpening"




SGDIFF: Dual-Granularity Semantic Guided Sparse Routing Diffusion Model for General Pansharpening


Addressing the scene-dependence problem in remote sensing image fusion with a multimodal MoE architecture
Explore the docs »

Report Bug · Request Feature

News

  • Inference Code Release

Dataset

Download the datasets from PanCollection
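PanCollection distributes its training and test sets as `.h5` files. The sketch below shows how such a file can be inspected; the key names (`gt`, `ms`, `lms`, `pan`) follow PanCollection's convention, while the file path and array sizes here are placeholders, so a tiny dummy file is created to keep the example self-contained:

```python
import os
import tempfile

import h5py
import numpy as np

# Build a tiny dummy PanCollection-style file (shapes are illustrative only).
path = os.path.join(tempfile.mkdtemp(), "dummy_qb.h5")
with h5py.File(path, "w") as f:
    f.create_dataset("gt", data=np.zeros((2, 4, 64, 64), np.float32))   # ground-truth MS
    f.create_dataset("ms", data=np.zeros((2, 4, 16, 16), np.float32))   # low-res MS
    f.create_dataset("lms", data=np.zeros((2, 4, 64, 64), np.float32))  # upsampled MS
    f.create_dataset("pan", data=np.zeros((2, 1, 64, 64), np.float32))  # PAN

# Inspect the datasets stored in the file.
with h5py.File(path, "r") as f:
    shapes = {k: f[k].shape for k in f.keys()}
print(shapes)
```

The same `h5py.File(...)` / key-lookup pattern applies to the real downloaded files once `dataroot` in the config points at them.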

Multimodal Semantic Prior Extraction

Extract semantic priors separately from the PAN and MS images, including coarse-grained scene-classification information and fine-grained ground-object descriptions. Recommended remote-sensing vision-language models:

GeoChat

LRHS-BOT

Note: SGDiff does not restrict the choice of MLLM. Although the paper is implemented with GeoChat, a stronger MLLM may further improve the model.

Example File Format
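The repository's exact file schema is not reproduced here; purely as a hypothetical illustration of the dual-granularity idea, the scene and grounding files could each map a sample index to its text prior, stored as JSON (all keys and descriptions below are invented examples, not the repo's actual data):

```python
import json
import os
import tempfile

# Hypothetical format: one coarse-grained scene label and one
# fine-grained ground-object description per sample index.
scene = {"0": "urban", "1": "farmland"}
grounding = {
    "0": "dense low-rise buildings along a river",
    "1": "rectangular crop fields with dirt roads",
}

outdir = tempfile.mkdtemp()
with open(os.path.join(outdir, "scene.json"), "w") as f:
    json.dump(scene, f, ensure_ascii=False, indent=2)
with open(os.path.join(outdir, "grounding.json"), "w") as f:
    json.dump(grounding, f, ensure_ascii=False, indent=2)

# Round-trip check: read the coarse label back for sample 0.
with open(os.path.join(outdir, "scene.json")) as f:
    loaded = json.load(f)
print(loaded["0"])
```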

Checkpoint

[Pretrained Checkpoint]

QuickStart

Requirements

```shell
conda create -n SGDiff python==3.10
conda activate SGDiff
pip install -r requirements.txt
```

Inference

  1. Update configs/general_finetune.yaml:
```yaml
train_data:
  train_qb:
    dataroot: the path for h5 data
    grounding_file: the path for grounding description file
    scene_file: the path for scene classification file
    ...
validation_data:
  val_QB:
    dataroot:
    grounding_file:
    scene_file:
resume_from_checkpoint: the path for pretrained model
```
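Before launching inference, it can help to sanity-check that every required path in the filled-in config is set. A minimal sketch with PyYAML (all paths below are placeholder assumptions, not real locations):

```python
import yaml  # PyYAML

# Placeholder config mirroring the structure shown above.
cfg_text = """
train_data:
  train_qb:
    dataroot: /data/qb/train_qb.h5
    grounding_file: /data/qb/grounding.json
    scene_file: /data/qb/scene.json
validation_data:
  val_QB:
    dataroot: /data/qb/valid_qb.h5
    grounding_file: /data/qb/val_grounding.json
    scene_file: /data/qb/val_scene.json
resume_from_checkpoint: /ckpt/sgdiff.pt
"""
cfg = yaml.safe_load(cfg_text)

# Fail early if a required path was left unset.
for split in ("train_data", "validation_data"):
    for name, entry in cfg[split].items():
        for key in ("dataroot", "grounding_file", "scene_file"):
            assert entry.get(key), f"{split}.{name}.{key} is missing"
print(cfg["resume_from_checkpoint"])
```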
  2. Run python inference.py
  3. Look in your output_dir for the .mat outputs, then compute metrics on or visualize the pansharpened results
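One common pansharpening metric that can be computed on the saved outputs is the spectral angle mapper (SAM); the NumPy sketch below uses random dummy arrays in place of the real .mat contents, whose variable names are not specified here:

```python
import numpy as np

def sam(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Mean spectral angle in degrees between two (C, H, W) images."""
    p = pred.reshape(pred.shape[0], -1)
    g = gt.reshape(gt.shape[0], -1)
    cos = (p * g).sum(0) / (np.linalg.norm(p, axis=0) * np.linalg.norm(g, axis=0) + eps)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())

rng = np.random.default_rng(0)
gt = rng.random((4, 64, 64)).astype(np.float32)
print(sam(gt, gt))  # identical images give an angle close to 0 degrees
```

For the real outputs, load the .mat file (e.g. with scipy.io.loadmat) and pass the fused and reference arrays in channel-first layout.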
Training from scratch

```shell
python train_pansharpening.py
```

Author

Contact: [email protected]

Citation

Thanks

codebase: https://github.com/showlab/Tune-A-Video
