Merged
87 changes: 39 additions & 48 deletions .github/workflows/docs.yml
@@ -1,9 +1,24 @@
name: Docs
on: [push, pull_request, release]
on:
push:
branches: [ "master" ]
pull_request:
branches: [ "master" ]

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
group: "pages"
cancel-in-progress: false

permissions:
contents: read
pages: write
id-token: write

jobs:
setup:
name: Setup docs
build:
name: Build docs
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
@@ -12,11 +27,9 @@ jobs:
python-version: '3.10'
- name: Install dependencies
run: |
sudo apt-get install -y pandoc
python -m pip install --upgrade pip
python -m pip install black jupytext
python -m pip install nbconvert ipykernel
python -m pip install sphinx nbsphinx
python -m pip install .
python -m pip install ".[docs]"
- name: Run notebooks
run: |
jupytext --to ipynb --pipe black --execute docs/markdown/public_data_access.md
@@ -27,51 +40,29 @@
jupytext --to ipynb --pipe black --execute docs/markdown/point_source_analysis.md
jupytext --to ipynb --pipe black --execute docs/markdown/events.md
mv docs/markdown/*.ipynb docs/notebooks
- uses: actions/upload-artifact@v7
with:
name: notebooks-for-${{ github.sha }}
path: docs/notebooks

- name: Generate API docs
run: |
sphinx-apidoc -f -e -o docs/api icecube_tools
- uses: actions/upload-artifact@v7
sphinx-build -M html docs/ docs/_build

- name: Setup pages
uses: actions/configure-pages@v6

- name: Upload artifact
uses: actions/upload-pages-artifact@v4
with:
name: api-for-${{ github.sha }}
path: docs/api
path: './docs/_build/html'

build:
name: Build docs
deploy:
name: Deploy docs
if: ${{ github.event_name == 'push' && github.ref_name == 'master' }}
needs: build
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: setup
steps:
- name: Setup
run: sudo apt-get install -y pandoc
- name: Checkout
uses: actions/checkout@v6
with:
fetch-depth: 0
- uses: actions/setup-python@v6
with:
python-version: '3.10'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install .
- uses: actions/download-artifact@v8
with:
name: notebooks-for-${{ github.sha }}
path: docs/notebooks
- uses: actions/download-artifact@v8
with:
name: api-for-${{ github.sha }}
path: docs/api
- name: Build and Commit
uses: sphinx-notes/pages@master
with:
documentation_path: docs
- name: Push changes
if: github.event_name == 'push' && github.ref_name == 'master'
uses: ad-m/github-push-action@master
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
branch: gh-pages
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v5
4 changes: 2 additions & 2 deletions docs/markdown/public_data_access.md
@@ -42,13 +42,13 @@ my_data.data_directory
The `fetch` method will download this dataset to this default location for later use by `icecube_tools`. This method takes a list of names, so it can also be used to download multiple datasets. `fetch` has a built-in delay of a few seconds between HTTP requests to avoid spamming the website. `fetch` will not overwrite files by default, but this can be forced with `overwrite=True`.

```python
my_data.fetch(found_dataset)
# my_data.fetch(found_dataset)
```
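The built-in delay is simply a polite rate limit between consecutive HTTP requests. As a self-contained sketch of that pattern (not the actual `icecube_tools` internals; `polite_fetch` and `download` are illustrative names):

```python
import time


def polite_fetch(urls, download, delay=2.0):
    """Call download(url) for each URL, sleeping `delay` seconds
    between requests so the server is not hammered."""
    results = []
    for i, url in enumerate(urls):
        if i > 0:
            # Pause only between requests, not before the first one.
            time.sleep(delay)
        results.append(download(url))
    return results
```

With `delay=0` this degrades to a plain loop, which makes the pattern easy to test.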

You may also want to use the data outside of `icecube_tools`, so you can fetch to a specified location with the keyword `write_to`.

```python
my_data.fetch(found_dataset, write_to="data", overwrite=True)
# my_data.fetch(found_dataset, write_to="data", overwrite=True)
```

For convenience, there is also the `fetch_all_to` method to download all the available data to a specified location. We comment this here as it can take a while to execute.
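As a sketch of what that call would look like, assuming `my_data` is the dataset object created earlier in the notebook and that `fetch_all_to` takes a target directory (an assumption based on `write_to` above):

```python
# Download every available dataset into ./data. Commented out, as in the
# notebook, because it can take a long time to execute:
# my_data.fetch_all_to("data")
```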
6 changes: 4 additions & 2 deletions pyproject.toml
@@ -25,12 +25,10 @@ dependencies = [
"matplotlib",
"astropy",
"iminuit",
"jupytext",
"requests_cache",
"requests",
"bs4",
"tqdm",
"versioneer",
"vMF > 0.1",
"pandas",
"h5py",
@@ -46,6 +44,10 @@ tests = [
docs = [
"sphinx",
"nbsphinx",
"sphinx-rtd-theme",
"black",
"jupytext",
"ipykernel",
]

[project.urls]