diff --git a/docs/guides/contributing.md b/docs/contributing/contributing.md similarity index 100% rename from docs/guides/contributing.md rename to docs/contributing/contributing.md diff --git a/docs/guides/environment_setup.md b/docs/contributing/environment_setup.md similarity index 100% rename from docs/guides/environment_setup.md rename to docs/contributing/environment_setup.md diff --git a/docs/contributing/index.md b/docs/contributing/index.md index 1c0d276..4874561 100644 --- a/docs/contributing/index.md +++ b/docs/contributing/index.md @@ -1,9 +1,10 @@ -# Contributing Guide +# Contributing -The following is a contributing Guide. +Everything you need to set up a development environment and contribute code or documentation to any of the Tools for Experiments repositories. ```{toctree} -:caption: How do I contribute +:maxdepth: 2 -setting_up -``` \ No newline at end of file +environment_setup +contributing +``` diff --git a/docs/contributing/setting_up.md b/docs/contributing/setting_up.md deleted file mode 100644 index 501ad95..0000000 --- a/docs/contributing/setting_up.md +++ /dev/null @@ -1,15 +0,0 @@ -# Setting Development Environment. - -The following page is under construction. - -## Git Usage - -This organization uses git since we are in github - -## Mypy - -Type safety is ensured with mypy. - -## Style consistency. - -We will use Ruff to ensure style consistency \ No newline at end of file diff --git a/docs/guides/index.md b/docs/guides/index.md index e411011..f0d455f 100644 --- a/docs/guides/index.md +++ b/docs/guides/index.md @@ -1,11 +1,11 @@ -# Guides +# About Us -General guides and resources for working within the Tools for Experiments organization. +Who we are, what we build, and the principles that shape it. Start here if you want to understand the Tools for Experiments organization before diving into the code. 
```{toctree} :maxdepth: 2 organization -environment_setup -contributing +philosophy +software_map ``` diff --git a/docs/guides/philosophy.md b/docs/guides/philosophy.md new file mode 100644 index 0000000..a94a4b7 --- /dev/null +++ b/docs/guides/philosophy.md @@ -0,0 +1,53 @@ +# Our Philosophy + +**ToolsForExperiments** is an organization built around [Pfafflab](https://pfaff.physics.illinois.edu/) at the University of Illinois, together with a small group of collaborators from other labs who already use our code day to day. What holds us together is not a company or a funding line but a shared need: we run computer-controlled physics experiments, and we want the software underneath those experiments to be open, reusable, and ours to shape. + +We are a physics group first, with one dedicated software developer. Code is a means to run better experiments, not the product we ship, and that shows in how we work. This page is less a set of rules and more a description of the habits and tradeoffs that fall out of our situation, so that anyone thinking about using or contributing to our tools can see what they are stepping into. + +## Small, independent, cohesive + +We prefer many small pieces over a few big ones. Each package in the stack does one thing, lives in its own repository, has its own release cycle, and stays useful on its own. You should be able to drop labcore into a codebase that has never seen the rest of our tools, or launch a plottr app against someone else's HDF5 files, and get value out of it without buying into the whole ecosystem. + +At the same time we care that the pieces fit. Conventions are shared across the stack (the `DataDict` data model, the way sweeps are described, how metadata is attached), so that when you do use several packages together they feel like one thing, not four things bolted together. Modularity without cohesion is a pile of parts; cohesion without modularity is a trap. We try to stay in between. 
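To make the shared convention concrete: the sketch below is *not* the real `DataDict` API (that lives in plottr and labcore), just plain Python dictionaries illustrating the shape the convention enforces, where every variable carries a unit and every dependent variable names its axes.

```python
# Plain-dict sketch of the shared data convention. This is NOT the real
# DataDict API from plottr/labcore, only the shape of the convention:
# every variable carries a unit, and dependents name their axes.
dataset = {
    "frequency": {"unit": "Hz", "axes": [], "values": [5.0e9, 5.1e9, 5.2e9]},
    "signal": {"unit": "V", "axes": ["frequency"], "values": [0.12, 0.45, 0.11]},
}

def dependents(ds):
    # variables with axes depend on others; the rest are independents
    return [name for name, var in ds.items() if var["axes"]]

print(dependents(dataset))  # ['signal']
```

Because the structure is self-describing, any package in the stack (or none of them) can recover what depends on what without extra context.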
+ +## Specific first, generalized as it earns it + +New code tends to start very close to whatever concrete problem we were trying to solve that week, whether that is getting a specific measurement running, handling a new kind of data, or smoothing out some part of the daily workflow. We do not plan features on a long roadmap; we let the experiments push, fulfill the need in front of us, and only later, once the same pattern shows up in a second or third place, pull the general-purpose piece out and move it somewhere it can be reused. Things grow where they are actually being used, which keeps us from over-designing for problems that never materialize. + +Even so, the split between the general tool and our own implementation of it is something we keep in mind from the start. Over time every useful pattern finds its way into two places: a general-purpose home (usually labcore, or the relevant core package) that anyone can pick up, and an opinionated implementation on top (usually CQEDToolbox) where we are free to bake in assumptions about our hardware and our measurement sequences. The first is reusable; the second is ours. + +That discipline is also why we try not to treat our own use case as the universal one. If someone else's lab can use labcore without inheriting our opinions about transmons, fridges, or readout chains, we have done our job. + +The downside of working this way is that we can be slow to build things no one has needed yet. The upside is that the parts that do exist tend to be earning their keep. + +## Functional first, polished as we go + +We make things work first and polish them second. New code tends to start rough, grow alongside the experiment it was written for, and only later get the tests, type hints, cleaner APIs, and documentation it deserves. The recent push in labcore to add proper typing and a test suite is a good example: the library had proved itself useful, so it was time to shore it up. 
This means you will sometimes find corners of the codebase that are not yet as polished as we would like. We know; we get to them eventually, and contributions that help us get there faster are very welcome. + +We do hold a line on quality. Code that ships has to be functional, legible, and roughly consistent with the rest of its package. "Not yet polished" is not a license for something that will not work tomorrow. + +## Your data, your environment + +Reproducibility is an explicit value, not a byproduct. Every measurement lives in its own folder, and everything inside that folder is considered part of the data: the recorded values themselves, with units and axes on every variable; the metadata attached to the run; and any configs, notebooks, images, or other files that capture the context in which the measurement was taken. Months or years later, you should be able to open that folder and see not just the numbers but the experiment around them, with enough information to analyze and reproduce what happened. + +We also want the data itself to stay yours. Our formats are plain HDF5 with documented conventions, readable by anything that reads HDF5. There is no proprietary layer between you and your measurements, and no service you have to keep paying to get at them. + +## Data that outlives the tools + +We think about durability on a longer horizon than the next grant cycle. A measurement taken today should still be readable in twenty years, even if every package in this stack has been abandoned and all that is left is a folder on a hard drive somewhere. That contract shapes several of our choices: we stick to open, widely-supported formats (HDF5 for numerical data, plain text / JSON / YAML for anything human-readable); we document the conventions we layer on top of those formats in public, so that someone with just the files and a few hours of curiosity can recover the structure; and we avoid storing anything in a form that depends on our own code to be read. 
If every one of our tools disappeared tomorrow, the data would still be readable, and the context around it still intact. + +## Build on the community when we can + +We are happy to build on tools that already exist. [QCoDeS](https://microsoft.github.io/Qcodes/) is the foundation for everything instrument-related; [xarray](https://xarray.dev/) has a growing role inside labcore; lmfit handles most of our curve fitting. When something in the ecosystem covers what we need, we use it rather than re-invent it, and we are open to replacing our own code with a community alternative if we are convinced it genuinely covers the full shape of how our users rely on the thing we already have. + +That last part is the real bar. We are not attached to our own code for its own sake, but we will not swap out something we understand for something we do not unless we can see that the replacement fits the actual use cases, not just the happy path. + +## Working with us + +You do not need to be in Pfafflab, or in a circuit-QED lab, to use or contribute to our tools. Any of the following is welcome: + +- Opening issues on GitHub for feature ideas, bug reports, or questions about how something is supposed to work. +- Sending pull requests against any of the repositories. +- Emailing us directly if you are trying to adopt the tools and want help getting started, or if you want to talk through whether our stack fits your use case before committing to it. + +We are a small team, so response times are human, not instant, but we care about the people who use our code, and we would much rather hear from you than not. diff --git a/docs/guides/software_map.md b/docs/guides/software_map.md new file mode 100644 index 0000000..c90d861 --- /dev/null +++ b/docs/guides/software_map.md @@ -0,0 +1,112 @@ +# The Software Landscape + +Our software is spread across four packages, each solving a distinct problem in a computer-controlled physics experiment. 
They are designed to be used together, but they are also deliberately separable: you can pull in only what you need, and each package's job is narrow enough that it can evolve on its own. This page is a map, a quick orientation to what each package is for, how the pieces depend on each other, and what a typical experiment looks like when you put them together. + +Although this stack grew out of the everyday needs of a superconducting-qubit lab, none of it is tied to that setting. The tools are general enough to support any experimental lab whose work consists of repeated, computer-controlled measurements. + +The instrument-facing parts of our code sit on top of [QCoDeS](https://microsoft.github.io/Qcodes/), which provides the base abstractions for instruments and parameters. The rest of the stack (the sweep framework, the storage layer, the analysis tools, and the plotting apps) does not require QCoDeS at all, so you are free to adopt only the pieces you find useful, even if QCoDeS is not part of your workflow. + +## The packages + +### labcore + +[labcore](https://toolsforexperiments.github.io/labcore/) is our general-purpose toolkit. It is the part of the stack that does not care what you are measuring; it provides the scaffolding that every kind of experiment needs. If you are writing a new measurement from scratch, labcore is almost certainly the first thing you reach for. + +It contains: + +- A **measurement and sweep framework** that lets you declare independent and dependent variables and compose sweeps with simple operators. Existing sweeps snap together into larger ones like Lego blocks, so bigger experiments are built out of small, well-tested pieces rather than rewritten from scratch. +- A **structured HDF5 storage layer** (the `DataDict` and `DDH5Writer`) that writes self-describing datasets with units, axes, and metadata. 
Through `run_and_save_sweep` you can attach arbitrary metadata and auxiliary files (configs, notes, images) alongside the data itself, so the full context of a measurement lives in one folder. +- An **analysis and fitting layer** built on lmfit, with a `Fit` base class and a library of common fit functions, so routine analysis does not need to be reinvented every time. The `DatasetAnalysis` helper then lets you store arbitrary analysis results and derived data next to the original dataset, keeping raw measurement and interpretation together. +- A **protocols layer** for automating and intelligently chaining measurements together, so a multi-step characterization sequence can run end-to-end without hand-holding. +- **Command-line utilities** for live autoplotting and for recovering data from partially-written HDF5 files. + +### instrumentserver + +[instrumentserver](https://toolsforexperiments.github.io/instrumentserver/) puts QCoDeS instruments on the network. A single process holds and owns the hardware, and any number of clients on the network (Jupyter kernels, measurement scripts, GUIs) can talk to those instruments as if they were local Python objects. The same set of instruments can be shared across different computers on the same network, so a measurement notebook on one machine, a live monitor on another, and a tuning GUI on a third all see a consistent view of the hardware without ever stepping on each other. + +It contains: + +- A **server** that wraps a QCoDeS station and handles concurrent client requests with per-instrument locking. +- **Client-side proxies** (`Client`, `ProxyInstrument`, `ProxyParameter`) that mirror the native QCoDeS API, so remote instruments feel local to the code using them. +- An **optional Qt GUI** for inspecting the station, watching parameters update in real time, and running a detached server with a visible status window. 
The server itself can run headless, and a set of generic, customizable client-side widgets is available for building your own instrument-specific UIs. +- A runtime **parameter manager** for holding experiment-level parameters (e.g. qubit frequencies, pulse amplitudes) that are not tied to any one physical instrument. +- **Long-term instrument monitoring** via a Grafana dashboard, so parameters like fridge temperatures, line voltages, or generator outputs can be logged and visualized over hours, days, or months. See the [instrument monitoring guide](https://toolsforexperiments.github.io/instrumentserver/user_guide/instrumentmonitoring.html) for how to set it up. + +### plottr + +[plottr](https://toolsforexperiments.github.io/plottr/) is, first and foremost, a set of graphical applications you launch to look at your data. Where the other packages focus on producing and organizing data, plottr focuses on looking at it. You typically use it by running one of its programs alongside your measurements, not by importing it from your code. + +The main apps are: + +- **`plottr-monitr`** for live-monitoring a directory of HDF5 datasets, with plots that update as new data lands on disk. +- **`plottr-inspectr`** for browsing QCoDeS `.db` files and opening datasets from them. +- **`plottr-autoplot-ddh5`** for generating plots automatically from the structure of a DDH5 file. + +Underneath the apps, plottr is also a library you can extend when you need something custom. It is built around composable flowchart nodes (selection, filtering, gridding, fitting, plotting), backed by `DataDict` (the data model it shares with labcore), and with plotting backends for matplotlib and pyqtgraph. You reach for this side of plottr when you want to build a new analysis pipeline or embed a plot in a tool of your own, but most day-to-day use is simply launching one of the apps above. + +### CQEDToolbox + +[CQEDToolbox](https://github.com/toolsforexperiments/CQEDToolbox) is our opinionated library. 
It is the code **we** actually use, day to day, in Pfafflab to run circuit-QED experiments on superconducting qubits. It is full of the assumptions we make about how our lab operates, what our hardware looks like, and what a sensible measurement sequence is, so it is tuned to work well for us rather than to be universal. It also doubles as a worked example: if you want to see how we put the other three packages together in practice, or how we tend to structure and implement our tools, CQEDToolbox is where to look. + +It contains: + +- **Measurement protocols** for single-transmon and fluxonium characterization (spectroscopy, Rabi, T1, T2, AllXY, randomized benchmarking) implemented against OPX (Quantum Machines), QICK (Xilinx RFSoC), and VNA-based backends. +- **Custom QCoDeS drivers** for instruments we use that are not covered by the QCoDeS core set (SignalCore generators, Oxford Triton fridges, Yokogawa DC sources, and others). +- **Readout calibration and discrimination** utilities for turning raw IQ shots into qubit-state assignments. +- A **measurement setup harness** (`setup_measurements.py`) that wires everything together: it pulls instrument proxies from instrumentserver, defines the sweep with labcore, runs it, and saves it. 
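The "sweeps snap together" idea that the harness relies on can be shown with a toy sketch. The `Sweep` class and `@` operator here are invented for illustration; labcore's real sweep objects and composition operators differ.

```python
from itertools import product

# Toy illustration only: labcore's actual sweep API differs. The idea
# being shown: 1D sweeps are small reusable pieces, and an operator
# combines them into a larger sweep instead of rewriting a nested loop.
class Sweep:
    def __init__(self, points):
        self.points = list(points)  # each point: {variable_name: value}

    @classmethod
    def over(cls, name, values):
        return cls({name: v} for v in values)

    def __matmul__(self, other):
        # outer product: run `other` in full at every point of `self`
        return Sweep({**a, **b} for a, b in product(self.points, other.points))

freq = Sweep.over("frequency", [5.0e9, 5.1e9])
power = Sweep.over("power", [-30, -20, -10])
two_d = freq @ power  # a 2D sweep assembled from two 1D pieces
print(len(two_d.points))  # 2 * 3 = 6 points
```

The payoff of this style is that each small sweep can be tested on its own, and larger experiments are assembled from pieces that are already known to work.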
+ +## How they fit together + +``` + ┌────────────┐ ┌────────────┐ + │ QCoDeS │ │ HDF5 / │ + │ │ │ QCoDeS db │ + └─────┬──────┘ └──────┬─────┘ + │ │ + ┌─────────────┴──────────────┐ │ + │ │ │ + ▼ ▼ ▼ + ┌────────────────┐ ┌──────────────────┐ ┌──────────────┐ + │ labcore │ │ instrumentserver │ │ plottr │ + │ (sweeps, │ │ (networked │ │ (inspect, │ + │ storage, │ │ QCoDeS │ │ plot, │ + │ analysis) │ │ instruments) │ │ analyze) │ + └───────┬────────┘ └────────┬─────────┘ └──────────────┘ + │ │ + └──────────────┬─────────────┘ + │ + ▼ + ┌────────────────────────────────────────────────────────────┐ + │ CQEDToolbox │ + │ (circuit-QED protocols, drivers, readout, setup) │ + └────────────────────────────────────────────────────────────┘ +``` + + +The diagram reads top-down as a dependency graph: each arrow points from a foundation to what is built on top of it. At the top, **QCoDeS** is the foundation for everything instrument-related, and the **HDF5 / QCoDeS database files** are the corresponding foundation on the data side, the formats in which experiments get written to disk. + +In the middle sit **labcore** and **instrumentserver**, the two reusable, general-purpose pieces. Labcore handles the *"what is the measurement and what does its data look like"* side; instrumentserver handles the *"where does the hardware live and how do I talk to it"* side. They are independent of each other, you can use one without the other, but the two together cover most of what you need to run an experiment. + +At the bottom, **CQEDToolbox** consumes both: it uses labcore's sweep and fitting machinery to define its protocols, and it uses instrumentserver's client to reach the hardware. It is the most opinionated of the four packages, and how we actually run our lab. + +**plottr** is deliberately off to the side. It does not depend on any of the other three, and it does not participate in data acquisition. 
It reads the same HDF5 and QCoDeS formats that labcore and QCoDeS write, and does its job (displaying and analyzing data) entirely on its own. The `DataDict` abstraction, now central to labcore, in fact started in plottr and was extracted for wider reuse. + +## A typical workflow + +A circuit-QED measurement session tends to look like this: + +1. **Start the instrument server.** A long-lived instrumentserver process owns all the physical instruments for the fridge (generators, VNA, DC sources, the fridge itself) and exposes them over the network. +2. **Connect a client.** From a Jupyter notebook or a measurement script, you connect an instrumentserver `Client` and get proxies for the instruments you need. They behave exactly like local QCoDeS objects. +3. **Define a measurement.** You use a CQEDToolbox protocol (or a custom measurement written with labcore's decorators) to describe the sweep: what is swept, what is recorded, and how the data is shaped. +4. **Run and save.** The CQEDToolbox harness runs the sweep and uses labcore's `DDH5Writer` to stream the data into a structured HDF5 file, complete with axes, units, and metadata. +5. **Inspect and analyze.** You point `plottr-monitr` at the data directory for live plots during the run, and open `plottr-autoplot-ddh5` or a notebook using labcore's fitting tools for deeper analysis afterward. + +Each step uses a different package, but you rarely notice the seams, and that is the point. The packages are separate so that each can be developed, tested, and documented on its own terms, not so that you have to think about them separately as a user. 
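As a rough sketch of steps 2 through 4, with stand-ins for the real pieces: the `FakeGenerator` class and its response formula below are invented for illustration, playing the role of an instrument proxy from instrumentserver, and a plain dict plays the role of the structured run folder that labcore's `DDH5Writer` would produce.

```python
# Stand-in sketch of the set/measure/record loop. In real code the
# instrument comes from an instrumentserver Client and the data is
# streamed to HDF5 by DDH5Writer; both are replaced by toys here.
class FakeGenerator:
    def set_frequency(self, f):
        self.frequency = f

    def read_power(self):
        # made-up instrument response, for the sketch only
        return -40.0 + self.frequency / 1e9

gen = FakeGenerator()  # in real code: a proxy obtained from a Client
run = {
    "data": {"frequency": [], "power": []},
    "metadata": {
        "sample": "example_chip",  # invented name, illustration only
        "units": {"frequency": "Hz", "power": "dBm"},
    },
}

# the sweep loop: set, measure, record; step 4 streams this to disk
for f in (5.0e9, 5.1e9, 5.2e9):
    gen.set_frequency(f)
    run["data"]["frequency"].append(f)
    run["data"]["power"].append(gen.read_power())
```

Everything in `run`, the values, units, and metadata together, is what ends up in the measurement's folder, which is what makes step 5 (pointing plottr at the directory) work without any extra plumbing.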
+ +## Where to go next + +- **labcore**: [source](https://github.com/toolsforexperiments/labcore) and [documentation](https://toolsforexperiments.github.io/labcore/) +- **instrumentserver**: [source](https://github.com/toolsforexperiments/instrumentserver) and [documentation](https://toolsforexperiments.github.io/instrumentserver/) +- **plottr**: [source](https://github.com/toolsforexperiments/plottr) and [documentation](https://toolsforexperiments.github.io/plottr/) +- **CQEDToolbox**: [source](https://github.com/toolsforexperiments/CQEDToolbox) (see the repository README for setup and usage) + diff --git a/docs/index.md b/docs/index.md index 751b125..bc6d769 100644 --- a/docs/index.md +++ b/docs/index.md @@ -7,18 +7,23 @@ myst: # Tools for Experiments -Welcome to the Tools for Experiments documentation site. This is site is currently under construction. Thank you for visiting us. +Welcome to the Tools for Experiments documentation site. This site is currently under construction. Thank you for visiting us. -Tools for experiments is an organization led by [Pfafflab](https://pfaff.physics.illinois.edu/) with the objective of providing a robust experimental software framework. We strive to make useful tools that can provide research groups with the tools necessary to make their environment what _they_ want it to be. +Tools for Experiments is an organization led by [Pfafflab](https://pfaff.physics.illinois.edu/) with the objective of providing a robust experimental software framework. We strive to make useful tools that can provide research groups with what they need to shape their environment into what _they_ want it to be. ## Who is this for -Our framework is aimed at any experimental physics/materials research whose experiments consist of relatively short, computer controlled and diverse experiments. 
We strive to build code that emphasizes the experiment +Our framework is aimed at any experimental physics/materials research group whose experiments consist of relatively short, computer-controlled, and diverse measurements. We strive to build code that puts the experiment first. +## Where to go from here -## Guides +- **[About Us](guides/index.md)** — who we are, the packages that make up the stack, and the philosophy behind how we build them. Start here if you are new to the organization. +- **[Examples](examples/index.md)** — worked examples that show the tools in action, from a short introduction to full experiments. +- **[Contributing](contributing/index.md)** — how to set up a development environment and contribute code or documentation to any of the repositories. -Organization-wide guides, including environment setup and general information. +## About Us + +Who we are, the packages that make up the stack, and the philosophy behind how we build them. ```{toctree} :maxdepth: 2 @@ -28,7 +33,7 @@ guides/index ``` ## Examples -More examples. +Worked examples that show the tools in action. ```{toctree} :maxdepth: 3 @@ -38,11 +43,11 @@ examples/index ``` ## Contributing -Contributing Guide +How to set up a development environment and contribute code or documentation. ```{toctree} :maxdepth: 3 contributing/index -``` \ No newline at end of file +```