
Feature request: support GlobalAveragePooling #516

Open
malickKhurram opened this issue Feb 15, 2024 · 7 comments
Labels
bug Something isn't working

Comments

@malickKhurram

Summary

I have created an image classification model in PyTorch and trained it on unencrypted data. I then exported the model with ONNX, imported it back, and checked it.
Now, when the model is compiled with "compile_onnx_model", it gives the following error.

ValueError: The following ONNX operators are required to convert the pytorch model to numpy but are not currently implemented: GlobalAveragePool

Description

  • versions affected: v1.3.0
  • python version: 3.10.6
  • config (optional: HW, OS): Linux
@malickKhurram malickKhurram added the bug Something isn't working label Feb 15, 2024
@jfrery
Collaborator

jfrery commented Feb 15, 2024

Hi @malickKhurram,

Yes, we don't support GlobalAveragePool for now. We are going to support this soon.

For now, a workaround would be to change the adaptive average pooling in your network to a simple average pooling.

For example, for the resnet18 we have

nn.AdaptiveAvgPool2d((1, 1))

this can be changed to

nn.AvgPool2d(kernel_size=7, stride=1, padding=0)

Of course this change depends on your model architecture and data input shape. The adaptive pooling basically computes the kernel size and stride automatically given the desired output size (here 1,1). So you need to find what value of kernel_size and stride give you the desired value at this specific point in your network and hardcode these values in a standard average pool.

This should make the compilation pass.

@malickKhurram
Author

Hi @jfrery
Thank you for your quick response.

  • May I know how soon GlobalAveragePool will be incorporated into Concrete ML?
  • I am using a pre-trained densenet121 model for COVID detection. In fact, I am using the following code and dataset for this learning task:
    https://www.kaggle.com/code/arunrk7/covid-19-detection-pytorch-tutorial
  • Can you please guide me on the steps to compile this pretrained model with Concrete ML? Or do you have any similar example converted to Concrete ML?

@jfrery
Collaborator

jfrery commented Feb 19, 2024

Hi @malickKhurram,

I can give you some hints on how to work around the missing GlobalAveragePooling support.

You will need to change the densenet model file manually. For this, copy that file locally from https://github.com/pytorch/vision/blob/main/torchvision/models/densenet.py so that you can import the pre-trained densenet from that file instead of torch hub.

Then you will need to change that line:

out = F.adaptive_avg_pool2d(out, (1, 1))

to

out = F.avg_pool2d(out, kernel_size=(N, M))

Here, you need to find out what N and M are. The best way to do this is to take the original model and print the dimensions of the out variable before the pooling is applied. If that shape is (1024, 5, 5), then N = M = 5.

Let me know if you can find your way with this. I will make this issue a feature request about GlobalAveragePooling such that we can track it.

@jfrery jfrery changed the title valueError: Model compilation Error Feature request: support GlobalAveragePooling Feb 19, 2024
@malickKhurram
Author

Hi @jfrery
Thank you for your quick response. It helped me solve the above problem; I made the changes in the local file.
But now I am facing the following issue when compiling the ONNX model.

[screenshot of the error]

Can you please guide on this.

Regards

@andrei-stoian-zama
Collaborator

Thanks for the bug report.

It's hard to tell where the error comes from. The line you show uses numpy functions, so it should return numpy.float instead of a Python float. Could you print the values and types of stats.rmax, stats.rmin, options.n_bits, and self.offset just before that line?

Alternatively, could you give code that reproduces the issue?

@andrei-stoian-zama
Collaborator

andrei-stoian-zama commented Feb 23, 2024

Let's continue discussion in #522

Keeping this issue open until Concrete ML supports GlobalAveragePooling.

@andrei-stoian-zama
Collaborator

As a reminder, for an image classification model, please see https://github.com/zama-ai/concrete-ml/tree/main/use_case_examples/cifar/cifar_brevitas_finetuning
