
Fix C++ int type mismatch warnings in core/ops files#1070

Draft
mhucka wants to merge 2 commits into tensorflow:master from mhucka:fix-misc-warnings

Conversation

mhucka (Member) commented May 13, 2026

PR #476 (dating from 2021) aimed to resolve numerous warnings emitted by the C++ compiler. Most had already been resolved by other changes in the years since that PR was written, but some remained. This PR addresses most (possibly all) of the rest that can be addressed. (Some cannot be addressed directly because they come from other libraries.)

All of these concern the core ops (`tensorflow_quantum/core/ops/`), and all the warnings fall into the same category, so while this PR touches many files, it seems appropriate to handle them together:

* Changed the data type of the variables `max_num_qubits` and `min_num_qubits` from `int` to `uint64_t` to match the underlying data structures and the vector iterations in which they are used.

* Changed the data types of some iterator and limit variables (`int q_ind`, `int64_t q_ind`, `int nq`, and `int largest_nq`) to `uint64_t` to resolve compiler warnings from comparisons against the updated `max_num_qubits` mentioned above.

* Changed the data type of `output_dim_size` from `int` to `size_t` where this variable is assigned directly from `maps.size()` and similar standard-library size methods.

* Kept the data types of variables like `max_psum_length` as `int` because they interface with Protobuf sizes, which are signed ints.

* In `configure.sh`, removed `-Wno-unused-but-set-variable` from the flags used during builds of `tensorflow_quantum/core/ops/noise/tfq_.*`, since the underlying cause of those warnings is addressed by this PR.

mhucka (Member Author) commented May 13, 2026

/gemini review

gemini-code-assist (Bot) left a comment


Code Review

This pull request refactors various quantum simulation kernels to use 64-bit integers for qubit counts and dimensions, and removes compiler warning suppressions in the configuration script. Feedback highlights several remaining overflow risks, specifically recommending the use of 64-bit literals for bit-shifts and int64_t for loop indices and lambda parameters in parallelized workloads to handle large batch sizes safely.

```diff
  int old_batch_index = -2;
  int cur_batch_index = -1;
- int largest_nq = 1;
+ uint64_t largest_nq = 1;
```

high

While updating largest_nq to uint64_t, please also consider changing the DoWork lambda parameters start and end (line 229) and the loop index i (line 239) to int64_t. The ParallelFor call at the end of this function (line 290) passes a total count that can exceed the range of a 32-bit int, leading to potential overflows and incorrect behavior for large batch sizes.

```diff
  int old_batch_index = -2;
  int cur_batch_index = -1;
- int largest_nq = 1;
+ uint64_t largest_nq = 1;
```

high

While updating largest_nq to uint64_t, please also consider changing the DoWork lambda parameters start and end (line 335) and the loop index i (line 347) to int64_t. The ParallelFor call at the end of this function (line 452) passes a total count that can exceed the range of a 32-bit int, leading to potential overflows and incorrect behavior for large batch sizes.

```diff
  int old_batch_index = -2;
  int cur_batch_index = -1;
- int largest_nq = 1;
+ uint64_t largest_nq = 1;
```

high

While updating largest_nq to uint64_t, please also consider changing the DoWork lambda parameters start and end (line 163) and the loop index i (line 175) to int64_t. The ParallelFor call at the end of this function (line 220) passes a total count that can exceed the range of a 32-bit int, leading to potential overflows and incorrect behavior for large batch sizes.

```diff
  int old_batch_index = -2;
  int cur_batch_index = -1;
- int largest_nq = 1;
+ uint64_t largest_nq = 1;
```

high

While updating largest_nq to uint64_t, please also consider changing the DoWork lambda parameters start and end (line 189) and the loop index i (line 210) to int64_t. The ParallelFor call at the end of this function (line 260) passes a total count that can exceed the range of a 32-bit int, leading to potential overflows and incorrect behavior for large batch sizes.


```diff
  auto DoWork = [&](int start, int end) {
-   int largest_nq = 1;
+   uint64_t largest_nq = 1;
```

high

While updating largest_nq to uint64_t, please also consider changing the DoWork lambda parameters start and end (line 156) and the loop index i (line 169) to int64_t. The ParallelFor call at the end of this function (line 210) passes a total count that can exceed the range of a 32-bit int, leading to potential overflows and incorrect behavior for large batch sizes.

```diff
  // the output tensor.
- int max_num_qubits = 0;
+ uint64_t max_num_qubits = 0;
  for (const int num : num_qubits) {
```

high

The change of max_num_qubits to uint64_t is appropriate. However, please also update the subsequent line (line 98) that uses 1 << max_num_qubits. Since 1 is an int literal, this shift will overflow and result in undefined behavior if max_num_qubits is 31 or greater. Use uint64_t(1) << max_num_qubits instead.

```diff
  int old_batch_index = -2;
  int cur_batch_index = -1;
- int largest_nq = 1;
+ uint64_t largest_nq = 1;
```

high

While updating largest_nq to uint64_t, please also consider changing the DoWork lambda parameters start and end (line 196) and the loop index i (line 206) to int64_t. The ParallelFor call at the end of this function (line 248) passes a total count that can exceed the range of a 32-bit int, leading to potential overflows and incorrect behavior for large batch sizes.

```diff
  int old_batch_index = -2;
  int cur_batch_index = -1;
- int largest_nq = 1;
+ uint64_t largest_nq = 1;
```

high

While updating largest_nq to uint64_t, please also consider changing the DoWork lambda parameters start and end (line 241) and the loop index i (line 258) to int64_t. The ParallelFor call at the end of this function (line 303) passes a total count that can exceed the range of a 32-bit int, leading to potential overflows and incorrect behavior for large batch sizes.


```diff
  auto DoWork = [&](int start, int end) {
-   int largest_nq = 1;
+   uint64_t largest_nq = 1;
```

high

While updating largest_nq to uint64_t, please also consider changing the DoWork lambda parameters start and end (line 204) and the loop index i (line 213) to int64_t. The ParallelFor call at the end of this function (line 250) passes a total count that can exceed the range of a 32-bit int, leading to potential overflows and incorrect behavior for large batch sizes.


```diff
  auto DoWork = [&](int start, int end) {
-   int largest_nq = 1;
+   uint64_t largest_nq = 1;
```

high

While updating largest_nq to uint64_t, please also consider changing the DoWork lambda parameters start and end (line 183) and the loop index i (line 188) to int64_t. The ParallelFor call at the end of this function (line 214) passes a total count that can exceed the range of a 32-bit int, leading to potential overflows and incorrect behavior for large batch sizes.
