
xt::mean is much slower than numpy #2680


Open
OUCyf opened this issue Apr 6, 2023 · 5 comments

Comments

OUCyf commented Apr 6, 2023

Hey,
I have tested some code with xt::mean and found it much slower than numpy, by a factor of several hundred. The data is a 2-d array.

xtensor version

#include <xtensor/xmath.hpp>
#include <xtensor/xview.hpp>
#include <xtensor-python/pyarray.hpp>

// Subtract the mean of each channel (row) from the data in place.
xt::pyarray<double> mean(xt::pyarray<double> &data)
{
    int samples_num = data.shape(1);
    int channels_num = data.shape(0);

    for (int i = 0; i < channels_num; i++)
    {
        auto dd = xt::view(data, i, xt::all());
        auto trend = xt::mean(dd);
        xt::view(data, i, xt::all()) -= trend;
    }

    return data;
}

numpy version

data -= np.mean(data)

Does anybody know what is wrong with my code? I am using an Apple M1 Max with the clang++ compiler.

OUCyf (Author) commented Apr 6, 2023

BTW, I want to add OpenMP to the for loop; that's why I need an explicit loop.
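For reference, a minimal standalone sketch of that parallel loop (assuming a plain xt::xarray instead of xt::pyarray and an OpenMP-enabled build, e.g. -fopenmp; the function name demean_rows is just a placeholder, and the per-row mean is evaluated into a double before subtracting so the lazy reducer is not re-read for every element):

#include <cstddef>
#include <xtensor/xarray.hpp>
#include <xtensor/xmath.hpp>
#include <xtensor/xview.hpp>

// Subtract the mean of each channel (row) in place. Rows are written
// independently, so the loop can be parallelized with OpenMP.
void demean_rows(xt::xarray<double>& data)
{
    const std::ptrdiff_t channels_num = static_cast<std::ptrdiff_t>(data.shape(0));

    #pragma omp parallel for
    for (std::ptrdiff_t i = 0; i < channels_num; ++i)
    {
        auto row = xt::view(data, i, xt::all());
        const double trend = xt::mean(row)();  // evaluate the reduction once
        row -= trend;
    }
}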

tdegeus (Member) commented Apr 7, 2023

It is probably xt::view that makes it slow. What about doing the same as in NumPy? Probably

data -= xt::mean(data, 1);

(unchecked)
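A checked variant of that suggestion, as a minimal sketch (assumes a recent xtensor that provides xt::keep_dims; the reduced axis is kept so the per-row means have shape (channels, 1) and broadcast against (channels, samples), and xt::eval materializes them before the in-place subtraction so partially updated data is never read back):

#include <iostream>
#include <xtensor/xarray.hpp>
#include <xtensor/xeval.hpp>
#include <xtensor/xio.hpp>
#include <xtensor/xmath.hpp>

int main()
{
    xt::xarray<double> data = {{1.0, 2.0, 3.0},
                               {4.0, 6.0, 8.0}};

    // Per-row means, keeping the reduced axis so the result has shape (2, 1).
    auto trend = xt::eval(xt::mean(data, {1}, xt::keep_dims));
    data -= trend;  // broadcasts (2, 1) against (2, 3)

    std::cout << data << std::endl;  // {{-1, 0, 1}, {-2, 0, 2}}
    return 0;
}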

OUCyf (Author) commented Apr 7, 2023

Thanks for your reply. Actually, when I test just a 1-d array with xt::mean and np.mean, xt::mean is also much slower than np.mean once the array is moderately large. Demo code follows:

xt::xarray<double> data = xt::arange<double>(1.0, 100001.0);  // i.e. {1, 2, 3, ..., 100000}
xt::mean(data);

But numpy is very fast.

I will give you a full test later, and I do need to use xt::mean in my code.
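In the meantime, a minimal sketch of what such a timing test could look like (the array size, repeat count, and output format are placeholders; it assumes an optimized build such as -O3 -DNDEBUG, and it evaluates the lazy reducer to a double inside the loop):

#include <chrono>
#include <iostream>
#include <xtensor/xarray.hpp>
#include <xtensor/xbuilder.hpp>
#include <xtensor/xmath.hpp>

int main()
{
    // 1-d array {1, 2, 3, ..., 100000}
    xt::xarray<double> data = xt::arange<double>(1.0, 100001.0);

    const int reps = 1000;
    double sink = 0.0;  // keep the results alive so the calls are not optimized away

    auto t0 = std::chrono::steady_clock::now();
    for (int r = 0; r < reps; ++r)
    {
        sink += xt::mean(data)();  // evaluate the lazy reducer to a double
    }
    auto t1 = std::chrono::steady_clock::now();

    std::chrono::duration<double, std::micro> per_call = (t1 - t0) / static_cast<double>(reps);
    std::cout << "xt::mean: " << per_call.count() << " us per call"
              << " (sink = " << sink << ")" << std::endl;
    return 0;
}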

tdegeus (Member) commented Apr 7, 2023

That's a bit surprising. How did you compile? How much slower?

tdegeus (Member) commented Apr 11, 2023

As a reference, I am experimenting with running benchmarks to get on top of this in the future: xtensor-stack/xtensor-python#288. At the moment it is not completely obvious how to distinguish the cost of the pybind11 binding from the actual performance issues of xtensor. If you are interested in contributing, you are more than welcome.
