xt::mean is much slower than numpy #2680
Comments
BTW, I want to add OpenMP to the for loop; that's the reason why I need a loop.
It is probably data -= xt::mean(data, 1); (unchecked)
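A minimal sketch of that idea, assuming a 2-d xt::xarray<double> named data and subtracting each row's mean (axis 1). The axes are written as a list, {1}, which is the form xtensor's reducers expect, the row means are evaluated once up front so the subtraction does not recompute a lazy mean per element, and xt::keep_dims keeps the reduced axis so broadcasting lines up:

#include <iostream>
#include <xtensor/xarray.hpp>
#include <xtensor/xmath.hpp>
#include <xtensor/xio.hpp>

int main()
{
    xt::xarray<double> data = {{1.0, 2.0, 3.0},
                               {4.0, 5.0, 6.0}};

    // Evaluate the row means once; keep_dims gives shape (2, 1) so it
    // broadcasts against the (2, 3) array.
    xt::xarray<double> row_mean = xt::mean(data, {1}, xt::keep_dims);

    data -= row_mean;  // subtract each row's mean from that row

    std::cout << data << std::endl;
    return 0;
}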
Thanks for your reply. Actually, when I just test a 1-d array with xt::mean and np.mean, if the 1-d array is a little large, xt::mean is also much slower than np.mean. The demo code is as follows:
xt::xarray<double> data = {1, 2, 3, ..., 100000};
xt::mean(data);
But numpy is very fast. I will give you a full test later, and I do need to use xt::mean in my code.
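For reference, a small self-contained timing sketch along the lines of that demo (the file name, random data, repeat count, and build flags are assumptions, not taken from the report). xt::mean(data) is a lazy 0-d expression, so the call operator is used to force evaluation:

// Hypothetical file: bench_mean.cpp
// Suggested build (assumption): clang++ -std=c++17 -O3 -DNDEBUG bench_mean.cpp -o bench_mean
#include <chrono>
#include <iostream>
#include <xtensor/xarray.hpp>
#include <xtensor/xmath.hpp>
#include <xtensor/xrandom.hpp>

int main()
{
    // 1-d array with 100000 elements, matching the size in the demo above.
    xt::xarray<double> data = xt::random::rand<double>({100000});

    constexpr int repeats = 1000;
    double acc = 0.0;

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < repeats; ++i)
    {
        // xt::mean(data) is lazy; operator() evaluates it to a scalar.
        acc += xt::mean(data)();
    }
    auto t1 = std::chrono::steady_clock::now();

    double us = std::chrono::duration<double, std::micro>(t1 - t0).count() / repeats;
    std::cout << "mean = " << acc / repeats << ", time per call: " << us << " us\n";
    return 0;
}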
That's a bit surprising. How did you compile? How much slower?
As a reference, I am experimenting with setting up benchmarking to get on top of this in the future: xtensor-stack/xtensor-python#288. At the moment it is not completely obvious how to separate the cost of the pybind11 binding from the actual performance of xtensor itself. If you are interested in contributing, you are more than welcome.
Hey,
I have tested some code with xt::mean and found it to be hundreds of times slower than numpy. The data is a 2-d array.
xtensor version
numpy version
Does anybody know what is wrong with my code? I use an M1 Max with the clang++ compiler.
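Since the code listings for the xtensor and numpy versions are not shown above, here is a rough sketch of the kind of 2-d test being described (the shape and the per-row reduction are assumptions, not the author's actual code). With a gap of hundreds of times, the build configuration is worth double-checking: xtensor's expression templates rely heavily on inlining, so an unoptimized debug build can be orders of magnitude slower than numpy, while an -O3 -DNDEBUG build typically closes most of the gap:

// Hypothetical file: bench_mean_2d.cpp
// Suggested build (assumption): clang++ -std=c++17 -O3 -DNDEBUG bench_mean_2d.cpp -o bench_mean_2d
#include <chrono>
#include <iostream>
#include <xtensor/xarray.hpp>
#include <xtensor/xmath.hpp>
#include <xtensor/xrandom.hpp>

int main()
{
    // 2-d array; the 1000 x 1000 shape is an assumption for illustration.
    xt::xarray<double> data = xt::random::rand<double>({1000, 1000});

    auto t0 = std::chrono::steady_clock::now();
    // Row means along axis 1, evaluated immediately into a concrete array.
    xt::xarray<double> row_mean = xt::mean(data, {1});
    auto t1 = std::chrono::steady_clock::now();

    std::cout << "first row mean: " << row_mean(0) << "\n";
    std::cout << "xt::mean over axis 1 took "
              << std::chrono::duration<double, std::milli>(t1 - t0).count()
              << " ms\n";
    return 0;
}

// The numpy side of the comparison would be the equivalent of
// data.mean(axis=1) on the same 1000 x 1000 array (not reproduced here).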