[CK tests] Extend conv GPU reference #3539
base: develop
Conversation
const OutDataType* out_gn = p_out + g * out_stride_g + n * out_stride_n;
const WeiDataType* wei_g = p_wei + g * wei_stride_g;
float acc = 0.0f;
const OutDataType* out_gn0 = p_outs[0] + g * out_stride_g + n * out_stride_n;
Can we change these names? out_gn0, wei_g0, wei_gkc0, etc. are not clear to me.
Good point, I'll change them.
const OutDataType* out_extra1 =
    p_outs[2] + g * out_stride_g + n * out_stride_n +
    ho * out_stride_h;
out_op(out_val,
Maybe we can create a common helper function that calls out_op with the proper number of parameters.
That would definitely make it nicer to look at. Thanks for the suggestion!
    p_outs[2] + g * out_stride_g + n * out_stride_n +
    ho * out_stride_h;
out_op(out_val,
       out_gnkh0[k * out_stride_k + wo],
Or use some kind of unpack.
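One way to combine both suggestions is an index_sequence based unpack: a single helper expands an array of extra D-tensor values into however many arguments out_op expects. The sketch below is only illustrative (the helper name apply_out_op is made up, and the __host__/__device__ qualifiers the real kernel would need are omitted):

```cpp
#include <cstddef>
#include <utility>

// Illustrative helper (not part of this PR): expand an array of extra
// D-tensor values and invoke the element-wise output operator with the
// matching number of arguments.
template <typename OutOp, typename OutT, typename AccT, typename DArray, std::size_t... Is>
void apply_out_op_impl(const OutOp& out_op,
                       OutT& out_val,
                       const AccT& acc,
                       const DArray& d_vals,
                       std::index_sequence<Is...>)
{
    out_op(out_val, acc, d_vals[Is]...);
}

template <std::size_t NumD, typename OutOp, typename OutT, typename AccT, typename DArray>
void apply_out_op(const OutOp& out_op, OutT& out_val, const AccT& acc, const DArray& d_vals)
{
    apply_out_op_impl(out_op, out_val, acc, d_vals, std::make_index_sequence<NumD>{});
}

// Usage: apply_out_op<2>(out_op, out_val, acc, extras) expands to
// out_op(out_val, acc, extras[0], extras[1]), so the same call site works
// for the plain, scale, and bilinear variants.
```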
for(index_t i = 0; i <= NumAElementwise; ++i)
{
    strided_copy_kernel<TOut, false>
Why do we need this?
In order to keep the naive implementation as simple as possible, the actual conv kernels only operate on packed data. To support all the various layouts, we first have to transform the non-packed tensors into packed tensors, run the kernel, and then transform the result back into the correct layout.
The loop performs this transformation for all the tensors used in the convolution (for bwd_data, the output and weight tensors), and in the bilinear convolutions there is more than one such tensor to transform.
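A minimal CPU-side sketch of that flow, using 1-D data and plain loops in place of the actual GPU kernels (all names and the scale-by-two "kernel" below are purely illustrative):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Gather a strided view into a packed (contiguous) buffer.
std::vector<float> pack(const std::vector<float>& strided, std::size_t n, std::size_t stride)
{
    std::vector<float> packed(n);
    for(std::size_t i = 0; i < n; ++i)
        packed[i] = strided[i * stride];
    return packed;
}

// Scatter a packed buffer back into a strided layout.
void unpack(std::vector<float>& strided, const std::vector<float>& packed, std::size_t stride)
{
    for(std::size_t i = 0; i < packed.size(); ++i)
        strided[i * stride] = packed[i];
}

int main()
{
    const std::size_t n = 4, stride = 3;        // only every 3rd element is real data
    std::vector<float> input(n * stride, 0.0f); // non-packed "tensor"
    std::vector<float> output(n * stride, 0.0f);
    for(std::size_t i = 0; i < n; ++i)
        input[i * stride] = static_cast<float>(i + 1);

    // 1. strided -> packed for every tensor the compute kernel reads
    const std::vector<float> packed_in = pack(input, n, stride);

    // 2. the compute kernel only ever sees packed data (stand-in: scale by 2)
    std::vector<float> packed_out(n);
    for(std::size_t i = 0; i < n; ++i)
        packed_out[i] = 2.0f * packed_in[i];

    // 3. packed -> strided to restore the requested output layout
    unpack(output, packed_out, stride);

    for(std::size_t i = 0; i < n; ++i)
        std::cout << output[i * stride] << ' '; // prints: 2 4 6 8
    std::cout << '\n';
}
```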
Proposed changes
This PR extends GPU reference implementation support for convolution operations with elementwise fusion and output operations. The changes enable GPU-accelerated reference implementations for tests involving scale, bias, batchnorm, clamp, and bilinear operations across forward, backward data, and backward weight convolutions.
Key improvements:
- Extended naive_conv_fwd_gpu, naive_conv_bwd_data_gpu, and naive_conv_bwd_weight_gpu to support elementwise operations, clamp, scale, and bilinear fusion

Performance impact:
These improvements reduce total execution time for these tests from 1540 seconds to 826 seconds, saving approximately 12 minutes.
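For context, "bilinear fusion" here means the reference combines the raw convolution result with a value read from an extra D tensor for each output element. A minimal functor in that spirit (a sketch only, not CK's actual operator definition):

```cpp
// Illustrative bilinear output operation: e = alpha * (conv result) + beta * d,
// where d comes from an extra D tensor. Name and signature are illustrative.
struct BilinearSketch
{
    float alpha = 1.0f;
    float beta  = 1.0f;

    void operator()(float& e, float conv_result, float d) const
    {
        e = alpha * conv_result + beta * d;
    }
};
```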
Checklist
- Ran clang-format on all changed files

Discussion
The implementation focuses on extending the GPU reference path to match the functionality available in the CPU reference path.
Additional improvement is possible by using the GPU for verification and tensor initialization as well, but this PR is already large, so those changes are deferred.
The batchnorm profiler and tests are not changed since the tests are flaky. In order to keep this PR focused, those changes are also deferred.