[AMDGPU] Fixed incorrect uniform branch condition

Summary:
I had a case where multiple nested uniform ifs resulted in code that
performed v_cmp comparisons, combined the results with s_and_b64,
s_or_b64 and s_xor_b64, and used the resulting mask in s_cbranch_vccnz
without first ensuring that the bits for inactive lanes were clear.
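
For illustration, a sequence of roughly this shape can arise (the
comparison and registers here are hypothetical, not taken from the
original test case):

  v_cmp_lt_f32_e64 s[0:1], v0, v1    ; compare; inactive-lane bits are zero
  s_xor_b64        vcc, s[0:1], -1   ; logical negation also sets the bits
                                     ; for inactive lanes
  s_cbranch_vccnz  BB0_2             ; taken if any bit of vcc is set,
                                     ; including bits of inactive lanes

Because s_cbranch_vccnz branches whenever any bit of vcc is set, a bit
belonging to an inactive lane can make a uniform branch go the wrong way.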

There was already code for inserting an "s_and_b64 vcc, exec, vcc" to
clear bits for inactive lanes in the case that the branch is
instruction-selected as s_cbranch_scc1 and is then changed to
s_cbranch_vccnz in SIFixSGPRCopies. I have added the same code into
SILowerControlFlow for the case that the branch is instruction-selected
as s_cbranch_vccnz.
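
Continuing the hypothetical sequence above, the lowered code would then
look roughly like:

  v_cmp_lt_f32_e64 s[0:1], v0, v1
  s_xor_b64        vcc, s[0:1], -1
  s_and_b64        vcc, exec, vcc    ; clear the bits for inactive lanes
  s_cbranch_vccnz  BB0_2             ; now depends on active lanes only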

This de-optimizes the code in some cases where the s_and is not needed,
because vcc is the result of a v_cmp, or of multiple v_cmp instructions
combined by s_and/s_or, and such a result already has the bits for
inactive lanes clear. We should add a pass to re-optimize those cases.
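
For example, in a case like the following (again hypothetical), the
inserted s_and is redundant, since v_cmp already writes zero to the bits
of inactive lanes:

  v_cmp_lt_f32_e32 vcc, v0, v1       ; inactive-lane bits already zero
  s_and_b64        vcc, exec, vcc    ; redundant
  s_cbranch_vccnz  BB0_2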

Reviewers: arsenm, kzhuravl

Subscribers: wdng, yaxunl, t-tye, llvm-commits, dstuttard, timcorringham, nhaehnle

Differential Revision: https://reviews.llvm.org/D41292

llvm-svn: 322119