[WIP] #4039
Conversation
[FORMAT CHECKER NOTIFICATION] 📖 For more info, you can check the "Contribute Code" section in the development guide.

[APPROVALNOTIFIER] This PR is NOT APPROVED. Needs approval from an approver in each of these files; approvers can indicate their approval in a comment. The full list of commands accepted by this bot can be found here.
Summary of Changes

Hello @lidezhu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request introduces a new workload type.
Code Review
This pull request introduces a new fast_slow workload type to simulate CDC lag, which is a valuable addition for testing purposes. The implementation adds the necessary configuration, schema, and worker logic for this new workload.
I've found a few issues that should be addressed:
- There's a critical bug in
fastslow_ddl.gowhere a new DDL worker is added to aWaitGroupbut never signals completion, which will cause the program to hang. - The new code in
tools/workload/schema/fastslow/fastslow.gouses the globalmath/randsource from concurrent goroutines, which will cause lock contention and impact performance. Each worker should have its ownrand.Randinstance. - There's a minor thread-safety issue in a cache initialization function that can be improved by using
sync.Oncefor robustness.
Overall, this is a good feature addition, and with these fixes, it will be a solid contribution.
```go
wg.Add(1)
go func(db *DBWrapper) {
	defer wg.Done()
	app.runFastSlowDDLWorker(db, wl)
}(db)
```
The runFastSlowDDLWorker function contains an infinite loop and never returns. This means the defer wg.Done() in the calling goroutine will never be executed, causing wg.Wait() in the Execute method to block forever. This is a critical bug that will cause the program to hang.
Given that other workers in this tool also seem to run indefinitely, the DDL worker should probably not be part of the WaitGroup. It should run as a background goroutine that is terminated when the program exits.
Please remove the wg.Add(1) and defer wg.Done() for this goroutine.
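A minimal sketch of what that could look like, assuming the surrounding Execute method keeps its WaitGroup only for the other workers (variable names are taken from the diff above):

```go
// Launch the DDL worker as a detached background goroutine. It loops
// forever, so it must not be tracked by the WaitGroup that Execute
// waits on; otherwise wg.Wait() would block indefinitely.
go func(db *DBWrapper) {
	app.runFastSlowDDLWorker(db, wl)
}(db)
```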
```go
func (w *FastSlowWorkload) BuildInsertSql(tableIndex int, batchSize int) string {
	tableName := w.TableName(tableIndex)
	n := rand.Int63()
```
The math/rand package uses a global random number source that is protected by a mutex. Calling functions from math/rand (like rand.Int63()) from multiple concurrent goroutines will cause lock contention, which can be a performance bottleneck in a workload generator. This can skew benchmark results.
It is recommended to create a new rand.Rand instance for each worker goroutine (e.g., r := rand.New(rand.NewSource(time.Now().UnixNano()))) and use that for generating random numbers (e.g., r.Int63()). This would require passing the rand.Rand instance through the call chain.
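A minimal, self-contained sketch of that pattern; the worker type and method names here are hypothetical stand-ins for illustration, not the PR's actual code:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// insertWorker owns its own random source, so concurrent workers never
// contend on the mutex guarding the global math/rand functions.
type insertWorker struct {
	r *rand.Rand
}

func newInsertWorker() *insertWorker {
	// Seed each worker independently, as the review comment suggests.
	return &insertWorker{r: rand.New(rand.NewSource(time.Now().UnixNano()))}
}

func (w *insertWorker) nextKey() int64 {
	return w.r.Int63() // per-worker source instead of the global rand.Int63()
}

func main() {
	w := newInsertWorker()
	fmt.Println(w.nextKey())
}
```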
```go
var (
	cachePadString = make(map[int]string, cacheSize)
	cacheIdx       atomic.Int64
)

func initPadStringCache() {
	if len(cachePadString) >= cacheSize {
		return
	}
	for i := 0; i < cacheSize; i++ {
		cachePadString[i] = genRandomPadString(padStringLength)
	}
	log.Info("Initialized pad string cache",
		zap.Int("cacheSize", cacheSize),
		zap.Int("stringLength", padStringLength))
}
```
The initPadStringCache function is not safe for concurrent execution. Although it's currently called from a single thread, using sync.Once is the idiomatic and safer way to handle one-time initializations of global resources. This will prevent potential race conditions if the calling pattern changes in the future.
You will also need to add import "sync".
Suggested change:

```go
var (
	cachePadString = make(map[int]string, cacheSize)
	cacheIdx       atomic.Int64
	initCacheOnce  sync.Once
)

func initPadStringCache() {
	initCacheOnce.Do(func() {
		for i := 0; i < cacheSize; i++ {
			cachePadString[i] = genRandomPadString(padStringLength)
		}
		log.Info("Initialized pad string cache",
			zap.Int("cacheSize", cacheSize),
			zap.Int("stringLength", padStringLength))
	})
}
```
What problem does this PR solve?
Issue Number: close #xxx
What is changed and how it works?
Check List
Tests
Questions
Will it cause performance regression or break compatibility?
Do you need to update user documentation, design documentation or monitoring documentation?
Release note