
Conversation

@lidezhu (Collaborator) commented Jan 22, 2026

What problem does this PR solve?

Issue Number: close #xxx

What is changed and how it works?

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No code

Questions

Will it cause performance regression or break compatibility?
Do you need to update user documentation, design documentation or monitoring documentation?

Release note

Please refer to [Release Notes Language Style Guide](https://pingcap.github.io/tidb-dev-guide/contribute-to-tidb/release-notes-style-guide.html) to write a quality release note.

If you don't think this PR needs a release note then fill it with `None`.

@ti-chi-bot added labels on Jan 22, 2026: do-not-merge/needs-linked-issue; release-note (denotes a PR that will be considered when it comes time to generate release notes); do-not-merge/work-in-progress (indicates that a PR should not merge because it is a work in progress).

ti-chi-bot bot commented Jan 22, 2026

[FORMAT CHECKER NOTIFICATION]

Notice: To remove the do-not-merge/needs-linked-issue label, please provide the linked issue number on one line in the PR body, for example: Issue Number: close #123 or Issue Number: ref #456.

📖 For more info, you can check the "Contribute Code" section in the development guide.


ti-chi-bot bot commented Jan 22, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign kennytm for approval. For more information see the Code Review Process.
Please ensure that each of them provides their approval before proceeding.

The full list of commands accepted by this bot can be found here.

Details: needs approval from an approver in each of these files.

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@lidezhu lidezhu marked this pull request as draft January 22, 2026 01:33
@ti-chi-bot added the size/XL label (denotes a PR that changes 500-999 lines, ignoring generated files) on Jan 22, 2026.
@gemini-code-assist

Summary of Changes

Hello @lidezhu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed quickly.

This pull request introduces a new workload type, fast_slow, to the existing workload tool. This new workload is specifically designed to simulate scenarios involving Change Data Capture (CDC) lag by creating a mixed environment of tables that experience only Data Manipulation Language (DML) operations and tables that also undergo periodic Data Definition Language (DDL) changes. This enhancement provides a more robust testing environment for evaluating system performance and resilience under varying DML and DDL loads.

Highlights

  • New Workload Type: fast_slow: Introduced a new workload type named fast_slow to simulate CDC lag scenarios. This workload allows for a mix of 'fast' tables (DML only) and 'slow' tables (DML + periodic DDL).
  • Configurable DDL Operations: Added extensive configuration options for the fast_slow workload, including FastTableCount, SlowTableCount, SlowDDLInterval, SlowDDLStartDelay, SlowDDLMaxColumns, and SlowDDLOptions to control the behavior of DDL on slow tables.
  • Dedicated DDL Worker: Implemented a dedicated DDL worker (fastslow_ddl.go) that periodically executes ALTER TABLE ADD COLUMN statements on the configured 'slow' tables, with retry mechanisms for database connections.
  • Workload Schema Definition: Defined the FastSlowWorkload schema, including methods for generating CREATE TABLE, INSERT, UPDATE, and DELETE SQL statements tailored for this mixed workload type.
  • Documentation Update: Updated the readme.md to include detailed instructions and examples for using the new fast_slow workload, explaining its purpose and available parameters.



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new fast_slow workload type to simulate CDC lag, which is a valuable addition for testing purposes. The implementation adds the necessary configuration, schema, and worker logic for this new workload.

I've found a few issues that should be addressed:

  1. There's a critical bug in fastslow_ddl.go where a new DDL worker is added to a WaitGroup but never signals completion, which will cause the program to hang.
  2. The new code in tools/workload/schema/fastslow/fastslow.go uses the global math/rand source from concurrent goroutines, which will cause lock contention and impact performance. Each worker should have its own rand.Rand instance.
  3. There's a minor thread-safety issue in a cache initialization function that can be improved by using sync.Once for robustness.

Overall, this is a good feature addition, and with these fixes, it will be a solid contribution.

Comment on lines +29 to +33
	wg.Add(1)
	go func(db *DBWrapper) {
		defer wg.Done()
		app.runFastSlowDDLWorker(db, wl)
	}(db)


critical

The runFastSlowDDLWorker function contains an infinite loop and never returns. This means the deferred wg.Done() in the spawned goroutine will never execute, so wg.Wait() in the Execute method blocks forever. This is a critical bug that will cause the program to hang.

Given that other workers in this tool also seem to run indefinitely, the DDL worker should probably not be part of the WaitGroup. It should run as a background goroutine that is terminated when the program exits.

Please remove the wg.Add(1) and defer wg.Done() for this goroutine.


	func (w *FastSlowWorkload) BuildInsertSql(tableIndex int, batchSize int) string {
		tableName := w.TableName(tableIndex)
		n := rand.Int63()


high

The math/rand package uses a global random number source that is protected by a mutex. Calling functions from math/rand (like rand.Int63()) from multiple concurrent goroutines will cause lock contention, which can be a performance bottleneck in a workload generator. This can skew benchmark results.

It is recommended to create a new rand.Rand instance for each worker goroutine (e.g., r := rand.New(rand.NewSource(time.Now().UnixNano()))) and use that for generating random numbers (e.g., r.Int63()). This would require passing the rand.Rand instance through the call chain.

Comment on lines +31 to +46
	var (
		cachePadString = make(map[int]string, cacheSize)
		cacheIdx       atomic.Int64
	)

	func initPadStringCache() {
		if len(cachePadString) >= cacheSize {
			return
		}
		for i := 0; i < cacheSize; i++ {
			cachePadString[i] = genRandomPadString(padStringLength)
		}
		log.Info("Initialized pad string cache",
			zap.Int("cacheSize", cacheSize),
			zap.Int("stringLength", padStringLength))
	}


medium

The initPadStringCache function is not safe for concurrent execution. Although it's currently called from a single thread, using sync.Once is the idiomatic and safer way to handle one-time initializations of global resources. This will prevent potential race conditions if the calling pattern changes in the future.

You will also need to add import "sync".

Suggested change

Before:

	var (
		cachePadString = make(map[int]string, cacheSize)
		cacheIdx       atomic.Int64
	)

	func initPadStringCache() {
		if len(cachePadString) >= cacheSize {
			return
		}
		for i := 0; i < cacheSize; i++ {
			cachePadString[i] = genRandomPadString(padStringLength)
		}
		log.Info("Initialized pad string cache",
			zap.Int("cacheSize", cacheSize),
			zap.Int("stringLength", padStringLength))
	}

After:

	var (
		cachePadString = make(map[int]string, cacheSize)
		cacheIdx       atomic.Int64
		initCacheOnce  sync.Once
	)

	func initPadStringCache() {
		initCacheOnce.Do(func() {
			for i := 0; i < cacheSize; i++ {
				cachePadString[i] = genRandomPadString(padStringLength)
			}
			log.Info("Initialized pad string cache",
				zap.Int("cacheSize", cacheSize),
				zap.Int("stringLength", padStringLength))
		})
	}
