examples: refine tensor_sum_elements(tensor dump) in examples/benchmark/benchmark-matmult.cpp #7844
Conversation
52836a4 to b7a9d40 (compare)
LGTM. Let's merge when the CI passes.
What is the goal of these changes when this function is only called with F32 tensors?
A programmer/developer might change the data type in this example manually, and this PR has no side effects. Accordingly, a tensor dump utility function in ggml.h and ggml.c would be helpful for ggml's users (programmers/developers); this is preparation. Of course, this utility function could also be provided by you or the original author, since you are both experts.
There is code for printing tensors in the
Your concern makes sense. You can make whatever decision you consider appropriate, since you are one of the owners/core maintainers of this project.
Thanks for your help, and have a good weekend.
You are right. This function follows the existing tensor_sum_elements, but its calculation process can also be used for a tensor dump.
BTW, could the original author or a core maintainer add a utility function to ggml.h and ggml.c that supports all data types and dumps the exact values in the tensor in an m (< 8) x n (< 8) format? It might be very useful/helpful for programmers/developers. Thanks so much.