ggml-hexagon: gelu operation #17921
Conversation
Both commit 2a787a6 and commit 83412e0 are fully functional GELU implementations that pass the official ggml test when run with:

```
HB=0 ./scripts/snapdragon/adb/run-tool.sh test-backend-ops -b HTP0 -o GELU
```

However, there is a significant performance difference between the two that may be worth a discussion. The GELU code from commit 2a787a6 is simply a sigmoid-GELU implementation, while commit 83412e0 is a 7th-order polynomial approximation of GELU generated by the qhcg tool from the Hexagon SDK, with some modifications. When running on an input of size 4096 × 4304 (the non-linear input data size for the gemma3 vision model), I observed a significant performance gap between the two implementations.

The data above were collected on a Samsung Galaxy S25 Ultra using the test repo I wrote (currently on the refactor-dev branch). For the µs figures above, I recorded the longest time among the 6 threads as printed to the FARF log. In addition, when plotting the polynomial approximation using the plot script in my test repo, I saw little if any error between the polynomial-approximation version and the CPU reference.
After revisiting the polynomial-approximation implementation, I noticed there was a block-prefetch operation in the code which I have commented out in commit 7233999 for a fair comparison. With that in mind, here are the updated results:
From the new testbench runs, a few things stand out:
For the remainder of this PR, I plan to:
to see whether we can push sigmoid-GELU performance ahead of the polynomial method. May I kindly ask for your thoughts and suggestions, @max-krasnyansky?
For commit fc2289d, I noticed a significant performance gap between the two sigmoid implementations that handle unaligned input vectors. The first version treats On my local device, when used inside the GELU kernel, the first implementation runs in about 6400–6500 µs, whereas the unaligned version takes around 7300–7400 µs for an input of size 4096 × 4304. This seems consistent with my point (3) above. It would be good to have someone else verify these numbers, but if they hold, it might be worth applying the same approach to other functions such as


Support GELU operation for ggml-hexagon.