| author | Chandler Carruth <chandlerc@gmail.com> | 2011-12-24 10:55:54 +0000 |
|---|---|---|
| committer | Chandler Carruth <chandlerc@gmail.com> | 2011-12-24 10:55:54 +0000 |
| commit | acc068e873a1a2afa1edef20452722d97eec8f71 | |
| tree | bcff5586d3e4b64e301ef6ae94c63946abd1f68e /test/CodeGen/X86/lzcnt.ll | |
| parent | c08e57c7c9ebba27360d5e27f56a44bcaa963a52 | |
Switch the lowering of CTLZ_ZERO_UNDEF from a .td pattern back to the
X86ISelLowering C++ code. Because this is lowered via an xor wrapped
around a bsr, we want the dagcombine which runs after isel lowering to
have a chance to clean things up. In particular, it is very common to
see code which looks like:
(sizeof(x)*8 - 1) ^ __builtin_clz(x)
which is trying to compute the index of the most significant set bit of
'x'. That is exactly the value computed directly by the 'bsr'
instruction, but if we match the pattern too late, we end up with
completely redundant xor instructions.
The more naive code for the above (subtracting rather than using an xor)
still isn't handled correctly due to the dagcombine getting confused.
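For reference, here is a minimal C sketch of the idiom and the codegen
we are after. It is not part of this patch, and the function name
'msb_index' is invented purely for illustration:

  /* The common idiom: computes the index of the highest set bit of x.
   * For 32-bit unsigned x, 31 ^ __builtin_clz(x) == 31 - __builtin_clz(x),
   * since the clz result lies in [0, 31]. Undefined for x == 0, exactly
   * like bsr itself. */
  unsigned msb_index(unsigned x) {
    return (sizeof(x) * 8 - 1) ^ __builtin_clz(x);
  }
  /* Desired x86-64 output: a single bsr, with no redundant xor:
   *   bsrl %edi, %eax
   *   retq
   */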
Also, while we're here, fix an issue spotted by inspection: we should have been
expanding the zero-undef variants to the normal variants when there is
an 'lzcnt' instruction. Do so, and test for this. We don't want to
generate unnecessary 'bsr' instructions.
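To make that concrete, a small sketch of the expected behavior (again
not from the patch; 'leading_zeros' is a hypothetical name, and this
assumes a target with lzcnt, e.g. built with -mlzcnt):

  /* With lzcnt available, the "zero-undef" variant (__builtin_clz is
   * undefined for x == 0) should still lower to a single lzcnt;
   * expanding it through bsr would only pessimize the code. */
  unsigned leading_zeros(unsigned x) {
    return __builtin_clz(x); /* expect: lzcntl %edi, %eax */
  }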
These two changes fix some regressions in encoding and decoding
benchmarks. However, there is still a *lot* to improve on in this
type of code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147244 91177308-0d34-0410-b5e6-96231b3b80d8
Diffstat (limited to 'test/CodeGen/X86/lzcnt.ll')
-rw-r--r-- | test/CodeGen/X86/lzcnt.ll | 28 |
1 file changed, 28 insertions, 0 deletions
diff --git a/test/CodeGen/X86/lzcnt.ll b/test/CodeGen/X86/lzcnt.ll
index c2b3e68..eb010d7 100644
--- a/test/CodeGen/X86/lzcnt.ll
+++ b/test/CodeGen/X86/lzcnt.ll
@@ -32,3 +32,31 @@ define i64 @t4(i64 %x) nounwind {
 ; CHECK: t4:
 ; CHECK: lzcntq
 }
+
+define i8 @t5(i8 %x) nounwind {
+  %tmp = tail call i8 @llvm.ctlz.i8( i8 %x, i1 true )
+  ret i8 %tmp
+; CHECK: t5:
+; CHECK: lzcntw
+}
+
+define i16 @t6(i16 %x) nounwind {
+  %tmp = tail call i16 @llvm.ctlz.i16( i16 %x, i1 true )
+  ret i16 %tmp
+; CHECK: t6:
+; CHECK: lzcntw
+}
+
+define i32 @t7(i32 %x) nounwind {
+  %tmp = tail call i32 @llvm.ctlz.i32( i32 %x, i1 true )
+  ret i32 %tmp
+; CHECK: t7:
+; CHECK: lzcntl
+}
+
+define i64 @t8(i64 %x) nounwind {
+  %tmp = tail call i64 @llvm.ctlz.i64( i64 %x, i1 true )
+  ret i64 %tmp
+; CHECK: t8:
+; CHECK: lzcntq
+}