66 changes: 66 additions & 0 deletions .github/workflows/tests.yml
@@ -0,0 +1,66 @@
name: Tests

on:
  push:
    branches: [ master, main, optimize-quail ]
  pull_request:
    branches: [ master, main ]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.9', '3.10', '3.11', '3.12']
      fail-fast: false

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install system dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y ffmpeg

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest pytest-cov
          pip install torch --index-url https://download.pytorch.org/whl/cpu
          pip install openai-whisper
          pip install -e .

      - name: Run tests
        run: |
          pytest tests/ -v --tb=short

      - name: Run tests with coverage
        run: |
          pytest tests/ --cov=quail --cov-report=xml --cov-report=term-missing

  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8

      - name: Lint with flake8
        run: |
          # Stop the build if there are Python syntax errors or undefined names
          flake8 quail --count --select=E9,F63,F7,F82 --show-source --statistics
          # Exit-zero treats all errors as warnings
          flake8 quail --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
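
The matrix runs the suite on CPU-only PyTorch wheels and installs ffmpeg first, since Whisper shells out to it when loading audio. To mirror that guard locally, a skip marker along these lines could be used — a hypothetical conftest.py helper, not part of this PR:

import shutil

import pytest

# Speech-decoding tests need the ffmpeg binary that the CI step installs;
# skip them on machines where it is missing instead of failing.
requires_ffmpeg = pytest.mark.skipif(
    shutil.which('ffmpeg') is None,
    reason='ffmpeg not installed; required for Whisper audio loading',
)

@requires_ffmpeg
def test_ffmpeg_available():
    assert shutil.which('ffmpeg') is not None
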
1 change: 1 addition & 0 deletions .gitignore
@@ -12,3 +12,4 @@ dist/*
venv/
docs/_build
docs/sg_execution_times.rst
CLAUDE.md
18 changes: 18 additions & 0 deletions .readthedocs.yaml
@@ -0,0 +1,18 @@
# Read the Docs configuration file for Sphinx projects
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.11"

sphinx:
  configuration: docs/conf.py

python:
  install:
    - requirements: docs/doc_requirements.txt
    - method: pip
      path: .
4 changes: 3 additions & 1 deletion README.md
@@ -1,5 +1,7 @@
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1003184.svg)](https://doi.org/10.5281/zenodo.1003184)
[![JOSS](http://joss.theoj.org/papers/3fb5123eb2538e06f6a25ded0a088b73/status.svg)](http://joss.theoj.org/papers/10.21105/joss.00424)
[![Tests](https://github.com/ContextLab/quail/actions/workflows/tests.yml/badge.svg)](https://github.com/ContextLab/quail/actions/workflows/tests.yml)
[![Documentation Status](https://readthedocs.org/projects/cdl-quail/badge/?version=latest)](https://cdl-quail.readthedocs.io/en/latest/?badge=latest)

![Quail logo](images/Quail_Logo_small.png)

@@ -12,7 +14,7 @@ Quail is a Python package that facilitates analyses of behavioral data from memo
- Clustering metrics (e.g. single-number summaries of how often participants transition from recalling a word to another related word, where "related" can be user-defined.)
- Many nice plotting functions
- Convenience functions for loading in data
- Automatically parse speech data (audio files) using wrappers for the Google Cloud Speech to Text API
- Automatically parse speech data (audio files) using OpenAI Whisper

The intended user of this toolbox is a memory researcher who seeks an easy way to analyze and visualize data from free recall psychology experiments.

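
For reference, the Whisper-based decoding the README now advertises boils down to the standard openai-whisper calls sketched below; the audio filename is a placeholder, and whether quail wraps these calls behind its own decode_speech helper is an assumption here, not shown in this diff:

import whisper

# 'base' is one of the stock Whisper checkpoints; weights download on first use.
model = whisper.load_model('base')
result = model.transcribe('recall_audio.wav')  # placeholder path to a recall recording
print(result['text'])  # transcribed recall responses, ready to align with the wordpool
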
37 changes: 0 additions & 37 deletions benchmark_cluster.py

This file was deleted.

19 changes: 9 additions & 10 deletions docs/doc_requirements.txt
@@ -1,18 +1,17 @@
sphinx==1.5.5
sphinx_bootstrap_theme==0.4.13
sphinx>=4.0
sphinx_bootstrap_theme
sphinx-gallery
numpydoc
nbsphinx
seaborn>=0.7.1
matplotlib>=1.5.1
scipy>=0.17.1
numpy>=1.10.4
pandas==0.18.1
future
seaborn>=0.12.0
matplotlib>=3.5.0
scipy>=1.10.0
numpy>=2.0.0
pandas>=2.0.0
joblib>=1.3.0
sqlalchemy
dill
requests
pydub
multiprocessing
pathos
jupyter_client
ipykernel
10 changes: 3 additions & 7 deletions quail/analysis/clustering.py
@@ -96,9 +96,8 @@ def _get_weight_exact(egg, feature, distdict, permute, n_perms):
    rec = list(egg.get_rec_items().values[0])

    if len(rec) <= 2:
        warnings.warn('Not enough recalls to compute fingerprint, returning default'
                      'fingerprint.. (everything is .5)')
        return 0.5
        warnings.warn('Not enough recalls to compute fingerprint, returning NaN')
        return np.nan

    distmat = get_distmat(egg, feature, distdict)

@@ -174,14 +173,11 @@ def _get_weight_best(egg, feature, distdict, permute, n_perms, distance):

    rec = list(egg.get_rec_items().values[0])
    if len(rec) <= 2:
        warnings.warn('Not enough recalls to compute fingerprint, returning default'
                      'fingerprint.. (everything is .5)')
        warnings.warn('Not enough recalls to compute fingerprint, returning NaN')
        return np.nan

    distmat = get_distmat(egg, feature, distdict)
    matchmat = get_match(egg, feature, distdict)
    print(f"DEBUG: matchmat.shape={matchmat.shape}, len(rec)={len(rec)}")
    print(f"DEBUG: distmat.shape={distmat.shape}")

    ranks = []
    for i in range(len(rec)-1):
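
Swapping the 0.5 filler for np.nan changes the contract for callers: short recall lists now propagate as missing data unless the consumer explicitly ignores them. A minimal illustration of the difference:

import numpy as np

weights = np.array([0.62, np.nan, 0.57])  # middle subject had <= 2 recalls
np.mean(weights)     # -> nan: missing fingerprints now surface instead of biasing toward 0.5
np.nanmean(weights)  # -> 0.595: averaging over valid subjects is an explicit opt-in
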
7 changes: 0 additions & 7 deletions quail/fingerprint.py
@@ -370,13 +370,6 @@ def compute_feature_stick(features, weights, alpha):
    return feature_stick

def reorder_list(egg, feature_stick, dist_dict, tau):

    def compute_stimulus_stick(s, tau):
        '''create a 'stick' of feature weights'''

        feature_stick = [[weights[feature]]*round(weights[feature]**alpha)*100 for feature in w]
        return [item for sublist in feature_stick for item in sublist]

    # parse egg
    pres, rec, features, dist_funcs = parse_egg(egg)

2 changes: 1 addition & 1 deletion quail/load.py
@@ -387,7 +387,7 @@ def getFeatures(stimDict):

    # add custom filters
    if filters:
        filter_func = [adaptive_filter, experimeter_filter, experiments_filter] + filters
        filter_func = [adaptive_filter, experimenter_filter, experiments_filter] + filters
    else:
        filter_func = [adaptive_filter, experimenter_filter, experiments_filter]

5 changes: 2 additions & 3 deletions quail/simulate.py
@@ -10,9 +10,8 @@ def simulate_list(nwords=16, nrec=10, ncats=4):
    path = os.path.join(os.path.dirname(__file__), 'data/cut_wordpool.csv')
    wp = pd.read_csv(path)

    # get one list
    # logic seems to pick a group random
    wp = wp[wp['GROUP']==np.random.choice(list(range(16)), 1)[0]].sample(16)
    # get one list - pick a random group (groups are 1-16)
    wp = wp[wp['GROUP']==np.random.choice(list(range(1, 17)), 1)[0]].sample(16)

    wp['COLOR'] = [[int(np.random.rand() * 255) for i in range(3)] for i in range(16)]

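
The fix addresses an off-by-one: np.random.choice(range(16)) draws labels 0-15, but the wordpool's GROUP column runs 1-16, so a draw of 0 matched no rows and .sample(16) raised. A self-contained demonstration with toy data standing in for cut_wordpool.csv:

import numpy as np
import pandas as pd

# Toy wordpool: 16 groups labeled 1-16, 16 items each.
wp = pd.DataFrame({'GROUP': np.repeat(np.arange(1, 17), 16),
                   'item': np.arange(256)})

old_group = np.random.choice(list(range(16)), 1)[0]     # 0-15: drawing 0 (p = 1/16) matches nothing
new_group = np.random.choice(list(range(1, 17)), 1)[0]  # 1-16: always matches 16 rows

# wp[wp['GROUP'] == 0].sample(16) raises ValueError (empty population);
# the corrected range can always sample a full list.
assert len(wp[wp['GROUP'] == new_group]) == 16
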
1 change: 1 addition & 0 deletions requirements.txt
@@ -4,3 +4,4 @@ matplotlib>=3.5.0
seaborn>=0.12.0
pandas>=2.0.0
joblib>=1.3.0
openai-whisper
16 changes: 7 additions & 9 deletions setup.py
@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-

import os
from setuptools import setup, find_packages

DESCRIPTION = 'A python toolbox for analyzing and plotting free recall data'
@@ -12,11 +13,15 @@
- Clustering metrics (e.g. single-number summaries of how often participants transition from recalling a word to another related word, where "related" can be user-defined.)
- Many nice plotting functions
- Convenience functions for loading in data
- Automatically parse speech data (audio files) using wrappers for the Google Cloud Speech to Text API
- Automatically parse speech data (audio files) using OpenAI Whisper

The intended user of this toolbox is a memory researcher who seeks an easy way to analyze and visualize data from free recall psychology experiments.
"""

# Read requirements from requirements.txt
here = os.path.abspath(os.path.dirname(__file__))
with open(os.path.join(here, 'requirements.txt')) as f:
    requirements = [line.strip() for line in f if line.strip() and not line.startswith('#')]

EXTRAS_REQUIRE={
    'speech-decoding': ["pydub", "openai-whisper"],
@@ -35,13 +40,6 @@
    license='MIT',
    packages=find_packages(exclude=('tests', 'docs', 'paper')),
    include_package_data=True,
    install_requires=[
        'numpy>=2.0.0',
        'scipy>=1.10.0',
        'matplotlib>=3.5.0',
        'seaborn>=0.12.0',
        'pandas>=2.0.0',
        'joblib>=1.3.0',
    ],
    install_requires=requirements,
    extras_require=EXTRAS_REQUIRE,
)
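
Given the requirements.txt in this PR, the new parsing loop in setup.py resolves to a plain list of pins; a quick check of the filtering (blank lines and # comments dropped), run against an illustrative subset of the file:

text = '''
# core dependencies
seaborn>=0.12.0
pandas>=2.0.0
joblib>=1.3.0
openai-whisper
'''
requirements = [line.strip() for line in text.splitlines()
                if line.strip() and not line.startswith('#')]
print(requirements)
# -> ['seaborn>=0.12.0', 'pandas>=2.0.0', 'joblib>=1.3.0', 'openai-whisper']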