Many tests in Integration/Dialect/Linalg/CPU are leaky (I'll disable the leak checker in Integration/Dialect/Linalg/CPU/lit.local.cfg for now).
Looking at a simple example like Integration/Dialect/Linalg/CPU/test-subtensor-insert.mlir, which looks like:

func @main() {
  %const = constant dense<10.0> : tensor<2xf32>
  %insert_val = constant dense<20.0> : tensor<1xf32>
  %inserted = tensor.insert_slice %insert_val into %const[0][1][1] : tensor<1xf32> into tensor<2xf32>
  %unranked = tensor.cast %inserted : tensor<2xf32> to tensor<*xf32>
  call @print_memref_f32(%unranked) : (tensor<*xf32>) -> ()
  return
}

Running it through -linalg-bufferize -std-bufferize -tensor-constant-bufferize -tensor-bufferize -func-bufferize -finalizing-bufferize yields:

module {
  memref.global "private" constant @__constant_1xf32 : memref<1xf32> = dense<2.000000e+01>
  memref.global "private" constant @__constant_2xf32 : memref<2xf32> = dense<1.000000e+01>
  func @main() {
    %0 = memref.get_global @__constant_2xf32 : memref<2xf32>
    %1 = memref.get_global @__constant_1xf32 : memref<1xf32>
    %2 = memref.alloc() : memref<2xf32>
    linalg.copy(%0, %2) : memref<2xf32>, memref<2xf32>
    %3 = memref.subview %2[0] [1] [1] : memref<2xf32> to memref<1xf32>
    linalg.copy(%1, %3) : memref<1xf32>, memref<1xf32>
    %4 = memref.cast %2 : memref<2xf32> to memref<*xf32>
    call @print_memref_f32(%4) : (memref<*xf32>) -> ()
    return
  }
  func private @print_memref_f32(memref<*xf32>)
}

An alloc is created, but there clearly isn't any matching free here.
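For reference, the leak would go away if bufferization also emitted a dealloc for the temporary copy buffer once its last use is done. This is a hand-written sketch of what the end of @main should look like, not actual compiler output:

```mlir
    // ... same IR as above up to the print call ...
    %4 = memref.cast %2 : memref<2xf32> to memref<*xf32>
    call @print_memref_f32(%4) : (memref<*xf32>) -> ()
    memref.dealloc %2 : memref<2xf32>  // <- this is what's missing for %2 = memref.alloc()
    return
```

The two memref.get_global results are backed by global constants and need no dealloc; only the memref.alloc'd copy does.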
assigned to @joker-eph
This should be fixed by https://reviews.llvm.org/D111059.
Not sure how to verify. Can you help, Mehdi?
You can just repro locally by adding '-DLLVM_USE_SANITIZER=Address;Undefined' to your cmake invocation (I'm also building with clang as a host tool).
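A full repro configuration might look like the following; everything except the sanitizer flag is an assumption about a typical LLVM monorepo build (generator, source path, and projects list may differ in your setup):

```shell
# From an empty build directory next to the monorepo's llvm/ dir.
# Only -DLLVM_USE_SANITIZER is from the report; the rest is illustrative.
cmake -G Ninja ../llvm \
  -DLLVM_ENABLE_PROJECTS=mlir \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  '-DLLVM_USE_SANITIZER=Address;Undefined'
ninja check-mlir   # LeakSanitizer reports fire when the integration tests run
```

Note the quoting around the sanitizer flag: the semicolon in Address;Undefined must be protected from the shell.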
See the bot here: https://lab.llvm.org/staging/#/builders/191/builds/1595/ (your patch broke it, there is still one test not fixed)