| author | Anton Blanchard <anton@samba.org> | 2010-08-02 20:08:34 +0000 |
|---|---|---|
| committer | Benjamin Herrenschmidt <benh@kernel.crashing.org> | 2010-09-02 14:07:29 +1000 |
| commit | 9b83ecb0a3cf1bf7ecf84359ddcfb9dd49646bf2 | |
| tree | 5ee6f0184cad6056917fcd9ecc4bfd479f7710c8 | |
| parent | 93f68f1ef787d97ab688f78a01f446e85bb9a496 | |
powerpc: Optimise 64bit csum_partial
The main loop of csum_partial runs very slowly on recent POWER CPUs. After some
analysis on both POWER6 and POWER7 I came up with the routine below. First we get
the source aligned to a double word, ignoring any odd alignment to keep things
simple. Then we do 64 bytes at a time, with an entry and exit limb of a further
64 bytes. On both POWER6 and POWER7 this should be as fast as we can go since
we are limited by the latency of the adde instructions.
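The routine itself is hand-written powerpc assembly, but the structure described above (a doubleword-alignment prologue, a 64-byte unrolled main loop, and a trailing limb) can be sketched in C. This is an illustrative sketch only, not the kernel code: csum_partial_sketch and add_with_carry are hypothetical names, carry folding stands in for the adde carry chain, and byte-ordering details that a real Internet checksum must preserve are glossed over.

```c
/*
 * Illustrative sketch of the loop structure only; the actual kernel
 * routine is hand-written powerpc assembly.  Names are hypothetical.
 */
#include <stddef.h>
#include <stdint.h>

static uint64_t add_with_carry(uint64_t sum, uint64_t val)
{
	/* Fold the carry back into the sum, roughly what adde does. */
	sum += val;
	return sum + (sum < val);
}

uint64_t csum_partial_sketch(const unsigned char *buf, size_t len, uint64_t sum)
{
	/* Prologue: get the source aligned to a doubleword boundary. */
	while (((uintptr_t)buf & 7) && len) {
		sum = add_with_carry(sum, *buf++);
		len--;
	}

	/* Main loop: 64 bytes (eight doublewords) per iteration. */
	const uint64_t *p = (const uint64_t *)buf;
	while (len >= 64) {
		for (int i = 0; i < 8; i++)
			sum = add_with_carry(sum, p[i]);
		p += 8;
		len -= 64;
	}

	/* Trailing limb: remaining doublewords, then leftover bytes. */
	while (len >= 8) {
		sum = add_with_carry(sum, *p++);
		len -= 8;
	}
	buf = (const unsigned char *)p;
	while (len--)
		sum = add_with_carry(sum, *buf++);

	return sum;
}
```

In the C version each add_with_carry is a dependent add, so the throughput ceiling the commit mentions still applies: the loop is bound by the latency of the carry-propagating adds rather than by memory bandwidth.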
To test this, I forced checksumming on over the loopback interface and ran socklib (a
simple TCP benchmark). On a POWER6 575, throughput improved by 11% with
this patch.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>