
Merge main to release #182

Merged
merged 27 commits into release_rai_1_2 on Jun 13, 2024

Conversation

mgehre-amd
Collaborator

No description provided.

laurettaSchubert and others added 27 commits May 31, 2024 15:26
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
…_fix

Fix lowering of torch.aten.avg_pool1d to linalg
The previous conversions for AtenAdaptiveAvgPool1dOp and
AtenAdaptiveMaxPool2dOp are refactored into a general templated
conversion that works for all of the AtenAdaptive...PoolNdOps.

New support is added for the following ops:

1. AtenAdaptiveMaxPool1d
2. AtenAdaptiveMaxPool3d
3. AtenAdaptiveAvgPool3d

Support is also provided for passing inputs without batch dimensions,
for example applying adaptive_avg_pool2d to an input tensor of rank 3.
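
A minimal sketch of that unbatched case at the PyTorch level (again with invented shapes):

```python
import torch
import torch.nn.functional as F

# Unbatched input: adaptive_avg_pool2d applied to a rank-3 (C, H, W) tensor.
x = torch.randn(4, 16, 16)
y = F.adaptive_avg_pool2d(x, output_size=(4, 4))
print(y.shape)  # torch.Size([4, 4, 4])
```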

After [pytorch #118162](pytorch/pytorch#118162)
gets down to torch-mlir, I'll add a test for AdaptiveMaxPool1d with
return_indices (which will pass with that upstream fix).

---------

Co-authored-by: James Newling <james.newling@gmail.com>
This commit also fixes the average pool op's test, which was failing for
the OnnxToLinalg lowering.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
Support for the operator in HLO has been implemented in llvm#3259, but that change is not in this fork yet.
Emit explicit error for unsupported modes of onnx.Pad
mgehre-amd enabled auto-merge June 13, 2024 11:29
mgehre-amd merged commit f00e686 into release_rai_1_2 Jun 13, 2024
5 checks passed