Merged
5 changes: 2 additions & 3 deletions src/SUMMARY.md
@@ -3,16 +3,15 @@
- [Version solving](./version_solving.md)
- [Using the pubgrub crate](./pubgrub_crate/intro.md)
- [Basic example with OfflineDependencyProvider](./pubgrub_crate/offline_dep_provider.md)
- [Writing your own dependency provider](./pubgrub_crate/custom_dep_provider.md)
- [Caching dependencies in a DependencyProvider](./pubgrub_crate/caching.md)
- [Implementing a dependency provider](./pubgrub_crate/dep_provider.md)
- [Caching dependencies](./pubgrub_crate/caching.md)
- [Strategical decision making in a DependencyProvider](./pubgrub_crate/strategy.md)
- [Solution and error reporting](./pubgrub_crate/solution.md)
- [Writing your own error reporting logic](./pubgrub_crate/custom_report.md)
- [Advanced usage and limitations](./limitations/intro.md)
- [Optional dependencies](./limitations/optional_deps.md)
- [Allowing multiple versions of a package](./limitations/multiple_versions.md)
- [Public and Private packages](./limitations/public_private.md)
- [Versions in a continuous space](./limitations/continuous_versions.md)
- [Pre-release versions](./limitations/prerelease_versions.md)
- [Internals of the PubGrub algorithm](./internals/intro.md)
- [Overview of the algorithm](./internals/overview.md)
14 changes: 2 additions & 12 deletions src/internals/partial_solution.md
@@ -12,7 +12,8 @@ have already been taken (including that one if it is a decision). If we
represent all assignments as a chronological vec, they would look as follows:

```txt
[ (0, root_derivation),
[
(0, root_derivation),
(1, root_decision),
(1, derivation_1a),
(1, derivation_1b),
@@ -26,14 +27,3 @@ represent all assignments as a chronological vec, they would look like follows:
The partial solution must also enable efficient evaluation of incompatibilities
in the unit propagation loop. For this, we need to have efficient access to all
assignments referring to the packages present in an incompatibility.

To enable both efficient backtracking and efficient access to specific package
assignments, the current implementation holds a dual representation of the
partial solution. One is called `history` and keeps dated (with decision levels)
assignments in an ordered growing vec. The other is called `memory` and
organizes assignments in a hashmap where they are regrouped by packages which
are the hashmap keys. It would be interesting to see how the partial solution
is stored in other implementations of PubGrub such as the one in [dart
pub][pub].

[pub]: https://github.com/dart-lang/pub
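
To make this dual representation concrete, here is a rough sketch with simplified, illustrative types; the field names follow the description above but are not the crate's actual definitions.

```rust
use std::collections::HashMap;

type DecisionLevel = u32;

// Simplified stand-in for the real assignment type (decisions and derivations).
enum Assignment<P, V> {
    Decision { package: P, version: V },
    Derivation { package: P /* plus the derived term and its cause */ },
}

struct PartialSolution<P, V> {
    /// Chronological record of assignments, each dated with its decision level,
    /// so backtracking amounts to truncating the vec at the right level.
    history: Vec<(DecisionLevel, Assignment<P, V>)>,
    /// The same assignments regrouped by package, so unit propagation can
    /// quickly fetch everything relevant to the packages of an incompatibility.
    memory: HashMap<P, Vec<(DecisionLevel, Assignment<P, V>)>>,
}
```
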
95 changes: 0 additions & 95 deletions src/limitations/continuous_versions.md

This file was deleted.

24 changes: 10 additions & 14 deletions src/limitations/intro.md
@@ -5,27 +5,23 @@ a dependency system with the following constraints:

1. Packages are uniquely identified.
2. Versions are in a discrete set, with a total order.
3. The successor of a given version is always uniquely defined.
4. Dependencies of a package version are fixed.
5. Exactly one version must be selected per package depended on.
3. Dependencies of a package version are fixed.
4. Exactly one version must be selected per package depended on.

The fact that packages are uniquely identified (1) is perhaps the only
constraint that makes sense for all common dependency systems. But for the rest
of the constraints, they are all inadequate for some common real-world
dependency systems. For example, it's possible to have dependency systems where
order is not required for versions (2). In such systems, dependencies must be
specified with exact sets of compatible versions, and bounded ranges make no
sense. Being able to uniquely define the successor of any version (3) is also a
constraint that is not a natural fit if versions have a system of pre-releases.
Indeed, what is the successor of `2.0.0-alpha`? We can't tell if that is `2.0.0`
or `2.0.0-beta` or `2.0.0-whatever`. Having fixed dependencies (4) is also not
followed in programming languages allowing optional dependencies. In Rust
packages, optional dependencies are called "features" for example. Finally,
restricting solutions to only one version per package (5) is also too
constraining for dependency systems allowing breaking changes. In cases where
packages A and B both depend on different ranges of package C, we sometimes want
to be able to have a solution where two versions of C are present, and let the
compiler decide if their usages of C in the code are compatible.
sense. Having fixed dependencies (3) also does not hold for programming
languages allowing optional dependencies. In Rust packages, for example,
optional dependencies are called "features". Finally, restricting solutions
to only one version per package (4) is also too constraining for dependency
systems allowing breaking changes. In cases where packages A and B both depend
on different ranges of package C, we sometimes want to be able to have a
solution where two versions of C are present, and let the compiler decide if
their usages of C in the code are compatible.

In the following subsections, we try to show how we can circumvent those
limitations with clever usage of dependency providers.
4 changes: 2 additions & 2 deletions src/limitations/multiple_versions.md
@@ -250,7 +250,7 @@ fn get_dependencies(
}
})
.collect();
Ok(Dependencies::Known(pkg_deps))
Ok(Dependencies::Available(pkg_deps))
}
Package::Proxy { source, target } => {
// If this is a proxy package, it depends on a single bucket package, the target,
@@ -266,7 +266,7 @@
}),
bucket_range.intersection(target_range),
);
Ok(Dependencies::Known(bucket_dep))
Ok(Dependencies::Available(bucket_dep))
}
}
}
33 changes: 15 additions & 18 deletions src/limitations/optional_deps.md
@@ -56,15 +56,13 @@ We define an `Index`, storing all dependencies (`Deps`) of every package version
in a double map, first indexed by package, then by version.

```rust
// Use NumberVersion, which are simple u32 for the versions.
use pubgrub::version::NumberVersion as Version;
/// Each package is identified by its name.
pub type PackageName = String;

/// Global registry of known packages.
pub struct Index {
/// Specify dependencies of each package version.
pub packages: Map<PackageName, BTreeMap<Version, Deps>>,
pub packages: Map<PackageName, BTreeMap<u32, Deps>>,
}
```
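
For illustration, a registry could be populated with a small helper like the following sketch, which is not part of the guide's code and assumes `Map` behaves like a standard `HashMap` (pubgrub's `Map` alias does).

```rust
// Sketch of a helper to register a package version and its dependencies.
// `Index` and `Deps` are the types defined in this chapter.
fn add_version(index: &mut Index, package: &str, version: u32, deps: Deps) {
    index
        .packages
        .entry(package.to_string())
        .or_default()
        .insert(version, deps);
}
```
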

@@ -127,22 +125,21 @@ pub enum Package {
}
```

Let's implement the first function required by a dependency provider,
`choose_package_version`. For that we defined the `base_pkg()` method on a
`Package` that returns the string of the base package. And we defined the
`available_versions()` method on an `Index` to list existing versions of a given
package. Then we simply called the `choose_package_with_fewest_versions` helper
function provided by pubgrub.
We'll ignore `prioritize` for this example.

Let's implement the second function required by a dependency provider,
`choose_version`. For that we defined the `base_pkg()` method on a `Package`
that returns the string of the base package, and the `available_versions()`
method on an `Index` to list existing versions of a given package in descending
order.

```rust
fn choose_package_version<T: Borrow<Package>, U: Borrow<Range<Version>>>(
fn choose_version(
&self,
potential_packages: impl Iterator<Item = (T, U)>,
) -> Result<(T, Option<Version>), Box<dyn std::error::Error>> {
Ok(pubgrub::solver::choose_package_with_fewest_versions(
|p| self.available_versions(p.base_pkg()).cloned(),
potential_packages,
))
package: &Self::P,
range: &Self::VS,
) -> Result<Option<Self::V>, Self::Err> {
Ok(self.available_versions(package.base_pkg()).find(|version| range.contains(version)).cloned())
}
```
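
The body of `available_versions()` is not shown in this diff. A possible implementation, assuming versions are the `u32` keys of the `BTreeMap` defined earlier, iterates the keys in reverse so that `find` in `choose_version` picks the highest version contained in the range.

```rust
impl Index {
    /// List existing versions of a given package in descending order.
    /// (A sketch: iterating the BTreeMap keys in reverse yields them
    /// from newest to oldest.)
    pub fn available_versions<'a>(&'a self, package: &str) -> impl Iterator<Item = &'a u32> + 'a {
        self.packages
            .get(package)
            .into_iter()
            .flat_map(|versions| versions.keys().rev())
    }
}
```
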

@@ -165,7 +162,7 @@ fn get_dependencies(

match package {
// If we asked for a base package, we simply return the mandatory dependencies.
Package::Base(_) => Ok(Dependencies::Known(from_deps(&deps.mandatory))),
Package::Base(_) => Ok(Dependencies::Available(from_deps(&deps.mandatory))),
// Otherwise, we concatenate the feature deps with a dependency to the base package.
Package::Feature { base, feature } => {
let feature_deps = deps.optional.get(feature).unwrap();
@@ -174,7 +171,7 @@
Package::Base(base.to_string()),
Range::exact(version.clone()),
);
Ok(Dependencies::Known(all_deps))
Ok(Dependencies::Available(all_deps))
},
}
}
74 changes: 27 additions & 47 deletions src/limitations/prerelease_versions.md
@@ -3,25 +3,19 @@
Pre-releasing is a very common pattern in the world of versioning. It is,
however, one of the hardest to account for in a dependency system, and I highly
recommend that if you can avoid introducing pre-releases in your package
manager, you should. In the context of pubgrub, pre-releases break two
fondamental properties of the solver.
manager, you should.

1. Pre-releases act similar to continuous spaces.
2. Pre-releases break the mathematical properties of subsets in a space with
total order.
In the context of pubgrub, pre-releases break a fundamental property of the
solver: whether there is or isn't a version between two versions "x" and "x+1"
(for example "(x+1).alpha.1") should not depend on whether an input version had
a pre-release specifier.

(1) Indeed, it is hard to answer what version comes after "1-alpha0". Is it
"1-alpha1", "1-beta0", "2"? In practice, we could say that the version that
comes after "1-alpha0" is "1-alpha0?" where the "?" character is chosen to be
the lowest character in the lexicographic order, but we clearly are on a stretch
here and it certainly isn't natural.

(2) Pre-releases are often semantically linked to version constraints written by
Pre-releases are often semantically linked to version constraints written by
humans, interpreted differently depending on context. For example, "2.0.0-beta"
is meant to exist previous to version "2.0.0". Yet, it is not supposed to be
contained in the set described by `1.0.0 <= v < 2.0.0`, and only within sets
where one of the bounds contains a pre-release marker such as
`2.0.0-alpha <= v < 2.0.0`. This poses a problem to the dependency solver
is meant to exist prior to version "2.0.0". Yet, in many versioning schemes
it is not supposed to be contained in the set described by `1.0.0 <= v < 2.0.0`,
and only within sets where one of the bounds contains a pre-release marker such
as `2.0.0-alpha <= v < 2.0.0`. This poses a problem to the dependency solver
because of backtracking. Indeed, the PubGrub algorithm relies on knowledge
accumulated throughout the propagation of the solver front. And this knowledge is
composed of facts, which are thus never removed even when backtracking happens.
@@ -33,12 +27,6 @@ return nothing even without checking if a pre-release exists in that range. And
this is one of the fundamental mechanisms of the algorithm, so we should not try
to alter it.
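
To make this concrete, here is a small standalone illustration using the `semver` crate (an assumption of this example; pubgrub itself does not depend on it). Under Cargo-style matching rules, a pre-release is matched by neither of two requirements whose union covers every regular version, which breaks the set reasoning the solver relies on.

```rust
// Illustration only, not a pubgrub API; requires `semver = "1"` in Cargo.toml.
use semver::{Version, VersionReq};

fn main() {
    let beta = Version::parse("2.0.0-beta").unwrap();
    let below = VersionReq::parse("<2.0.0").unwrap();
    let at_least = VersionReq::parse(">=2.0.0").unwrap();

    // Together these two requirements cover every regular version,
    // yet neither of them matches the pre-release.
    assert!(!below.matches(&beta));
    assert!(!at_least.matches(&beta));
}
```
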

Point (2) is probably the reason why some pubgrub implementations have issues
dealing with pre-releases when backtracking, as can be seen in [an issue of the
dart implementation][dart-prerelease-issue].

[dart-prerelease-issue]: https://github.com/dart-lang/pub/pull/3038

## Playing again with packages?

In the light of the "bucket" and "proxies" scheme we introduced in the section
@@ -63,18 +51,16 @@ version but not both.

Another issue would be that the proxy and bucket scheme breaks strategies
depending on ordering of versions. Since we have two proxy versions, one
targetting the normal bucket, and one targetting the pre-release bucket, a
targeting the normal bucket, and one targeting the pre-release bucket, a
strategy aiming at the newest versions will lean towards normal or pre-release
depending on whether the newest proxy version is the one for the normal or the
pre-release bucket. Mitigating this issue seems complicated; fortunately, we are
also exploring alternative API changes that could enable pre-releases.

## Multi-dimensional ranges

We are currently exploring new APIs where `Range` is transformed into a trait,
instead of a predefined struct with a single sequence of non-intersecting
intervals. For now, the new trait is called `RangeSet` and could be implemented
on structs with multiple dimensions for ranges.
Building on top of the `Ranges` API, we can implement a custom `VersionSet` of
multi-dimensional ranges:

```rust
pub struct DoubleRange<V1: Version, V2: Version> {
@@ -83,30 +69,24 @@ pub struct DoubleRange<V1: Version, V2: Version> {
}
```

With multi-dimensional ranges we could match the semantics of version
constraints in ways that do not introduce alterations of the core of the
algorithm. For example, the constraint `2.0.0-alpha <= v < 2.0.0` could be
matched to:
With multi-dimensional ranges we can match the semantics of version constraints
in ways that do not introduce alterations of the core of the algorithm. For
example, the constraint `2.0.0-alpha <= v < 2.0.0` can be matched to:

```rust
DoubleRange {
normal_range: Range::none,
prerelease_range: Range::between("2.0.0-alpha", "2.0.0"),
normal_range: Ranges::empty(),
prerelease_range: Ranges::between("2.0.0-alpha", "2.0.0"),
}
```

And the constraint `2.0.0-alpha <= v < 2.1.0` would have the same
`prerelease_range` but would have `2.0.0 <= v < 2.1.0` for the normal range.
Those constraints could also be intrepreted differently since not all
pre-release systems work the same. But the important property is that this
enable a separation of the dimensions that do not behave consistently with
regard to the mathematical properties of the sets manipulated.

All this is under ongoing experimentation, to try reaching a sweet spot
API-wise and performance-wise. If you are eager to experiment with all the
extensions and limitations mentioned in this section of the guide for your
dependency provider, don't hesitate to reach out to us in our [zulip
stream][zulip] or in [GitHub issues][issues] to let us know how it went!
And the constraint `2.0.0-alpha <= v < 2.1.0` has the same `prerelease_range`
but has `2.0.0 <= v < 2.1.0` for the normal range. Those constraints could also
be interpreted differently since not all pre-release systems work the same. But
the important property is that this enables a separation of the dimensions that
do not behave consistently with regard to the mathematical properties of the
sets manipulated.
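
As an illustrative sketch of that separation (not an existing API), the set operations needed by the solver can be defined dimension by dimension, each dimension keeping its usual mathematical properties.

```rust
// Re-states the struct above with its two per-dimension ranges made explicit,
// using the `Ranges` type from the examples above. Illustrative sketch only.
use pubgrub::Ranges;

pub struct DoubleRange<V1, V2> {
    pub normal_range: Ranges<V1>,
    pub prerelease_range: Ranges<V2>,
}

impl<V1: Ord + Clone, V2: Ord + Clone> DoubleRange<V1, V2> {
    /// The empty set is empty in both dimensions.
    pub fn empty() -> Self {
        DoubleRange {
            normal_range: Ranges::empty(),
            prerelease_range: Ranges::empty(),
        }
    }

    /// Intersection (like the other set operations) applies independently
    /// in each dimension.
    pub fn intersection(&self, other: &Self) -> Self {
        DoubleRange {
            normal_range: self.normal_range.intersection(&other.normal_range),
            prerelease_range: self.prerelease_range.intersection(&other.prerelease_range),
        }
    }
}
```
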

[zulip]: https://rust-lang.zulipchat.com/#narrow/stream/260232-t-cargo.2FPubGrub
[issues]: https://github.com/pubgrub-rs/pubgrub/issues
This strategy is successfully used by
[semver-pubgrub](https://github.com/pubgrub-rs/semver-pubgrub) to model Rust
dependencies.
4 changes: 2 additions & 2 deletions src/limitations/public_private.md
@@ -210,7 +210,7 @@ fn get_dependencies(&self, package: &Package, version: &SemVer)
-> Result<Dependencies<Package, SemVer>, ...> {
match &package.seeds {
// A Constraint variant does not have any dependency
PkgSeeds::Constraint(_) => Ok(Dependencies::Known(Map::default())),
PkgSeeds::Constraint(_) => Ok(Dependencies::Available(Map::default())),
// A Markers variant has dependencies to:
// - one Constraint variant per seed marker
// - one Markers variant per original dependency
@@ -219,7 +219,7 @@ fn get_dependencies(&self, package: &Package, version: &SemVer)
let seed_constraints = ...;
// Figure out if there are private dependencies.
let has_private = ...;
Ok(Dependencies::Known(
Ok(Dependencies::Available(
// Chain the seed constraints with actual dependencies.
seed_constraints
.chain(index_deps.iter().map(|(p, (privacy, r))| {