#!/usr/bin/env perl
#
# ====================================================================
# Written by Andy Polyakov <appro@openssl.org> for the OpenSSL
# project. The module is, however, dual licensed under OpenSSL and
# CRYPTOGAMS licenses depending on where you obtain it. For further
# details see http://www.openssl.org/~appro/cryptogams/.
# ====================================================================
#
# March, May, June 2010
#
# The module implements the "4-bit" GCM GHASH function and the
# underlying single multiplication operation in GF(2^128). "4-bit"
# means that it uses a 256-byte per-key table [+64/128 bytes fixed
# table]. It has two code paths: vanilla x86 and vanilla SSE. The
# former will be executed on 486 and Pentium, the latter on all
# others. SSE GHASH features a so-called "528B" variant of the "4-bit"
# method utilizing an additional 256+16 bytes of per-key storage
# [+512 bytes shared table]. Performance results are for the streamed
# GHASH subroutine and are expressed in cycles per processed byte;
# less is better:
#
#		gcc 2.95.3(*)	SSE assembler	x86 assembler
#
# Pentium	105/111(**)	-		50
# PIII		68 /75		12.2		24
# P4		125/125		17.8		84(***)
# Opteron	66 /70		10.1		30
# Core2		54 /67		8.4		18
# Atom		105/105		16.8		53
# VIA Nano	69 /71		13.0		27
#
# (*)	gcc 3.4.x was observed to generate a few percent slower code,
#	which is one of the reasons why 2.95.3 results were chosen;
#	another reason is the lack of 3.4.x results for older CPUs;
#	comparison with SSE results is not completely fair, because C
#	results are for the vanilla "256B" implementation, while
#	assembler results are for "528B";-)
# (**)	second number is the result for code compiled with the -fPIC
#	flag, which is actually more relevant, because the assembler
#	code is position-independent;
# (***)	see comment in non-MMX routine for further details;
#
# To summarize, it's >2-5 times faster than gcc-generated code. To
# anchor it to something else, SHA1 assembler processes one byte in
# ~7 cycles on contemporary x86 cores. As for the choice of MMX/SSE
# in particular, see comment at the end of the file...

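# As a rough sketch of the "4-bit" method (gcm128.c holds the
# reference C implementation): Htable[] caches the 16 products i*H for
# every possible nibble value i, and each input byte is consumed as
# two steps of the form Z = (Z>>4) ^ rem_table[bits shifted out of Z]
# ^ Htable[nibble], with the rem_4bit[]/rem_8bit[] tables at the end
# of this file folding the shifted-out bits back in according to the
# GHASH reduction polynomial.
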
# May 2010
#
# Add PCLMULQDQ version performing at 2.10 cycles per processed byte.
# The question is how close this is to the theoretical limit. The
# pclmulqdq instruction latency appears to be 14 cycles and there
# can't be more than 2 of them executing at any given time. This means
# that a single Karatsuba multiplication would take 28 cycles *plus* a
# few cycles for pre- and post-processing. The multiplication then has
# to be followed by modulo-reduction. Given that the aggregated
# reduction method [see the "Carry-less Multiplication and Its Usage
# for Computing the GCM Mode" white paper by Intel] allows the
# reduction to be performed only once in a while, we can assume that
# the asymptotic performance can be estimated as (28+Tmod/Naggr)/16,
# where Tmod is the time to perform the reduction and Naggr is the
# aggregation factor.
#
# Before we proceed to this implementation, let's have a closer look
# at the best-performing code suggested by Intel in their white paper.
# By tracing inter-register dependencies, Tmod is estimated as ~19
# cycles and the Naggr chosen by Intel is 4, resulting in 2.05 cycles
# per processed byte. As implied, this is quite an optimistic
# estimate, because it does not account for Karatsuba pre- and
# post-processing, which for a single multiplication is ~5 cycles.
# Unfortunately Intel does not provide performance data for GHASH
# alone. But benchmarking AES_GCM_encrypt ripped out of Fig. 15 of the
# white paper with aadt alone resulted in 2.46 cycles per byte out of
# a 16KB buffer. Note that the result accounts even for pre-computing
# the degrees of the hash key H, but its portion is negligible at 16KB
# buffer size.
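# (As a quick sanity check against the formula above: Intel's
# parameters give (28+19/4)/16 = 32.75/16, i.e. the quoted ~2.05
# cycles per processed byte.)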
#
# Moving on to the implementation in question. Tmod is estimated as
# ~13 cycles and Naggr is 2, giving an asymptotic performance of ...
# 2.16. How is it possible that the measured performance is better
# than the optimistic theoretical estimate? There is one thing Intel
# failed to recognize. By serializing GHASH with CTR in the same
# subroutine, the former's performance is really limited by the
# (Tmul + Tmod/Naggr) equation above. But if the GHASH procedure is
# detached, the modulo-reduction can be interleaved with Naggr-1
# multiplications at the instruction level and under ideal conditions
# even disappear from the equation. So the optimistic theoretical
# estimate for this implementation is ... 28/16=1.75, and not 2.16.
# Well, it's probably way too optimistic, at least for such a small
# Naggr. I'd argue that (28+Tproc/Naggr), where Tproc is the time
# required for Karatsuba pre- and post-processing, is a more realistic
# estimate. In this case it gives ... 1.91 cycles. Or in other words,
# depending on how well we can interleave the reduction and one of the
# two multiplications, the performance should be between 1.91 and
# 2.16. As already mentioned, this implementation processes one byte
# out of an 8KB buffer in 2.10 cycles, while the x86_64 counterpart
# does so in 2.02. x86_64 performance is better, because the larger
# register bank allows the reduction and multiplication to be
# interleaved better.
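# (The same sanity check for this implementation: (28+13/2)/16 =
# 34.5/16 gives the ~2.16 figure and (28+5/2)/16 = 30.5/16 gives the
# ~1.91 figure, the /16 converting per-block cycles into cycles per
# processed byte.)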
#
# Does it make sense to increase Naggr? To start with, it's virtually
# impossible in 32-bit mode, because of the limited register bank
# capacity. Otherwise the improvement has to be weighed against slower
# setup, as well as code size and complexity increase. As even an
# optimistic estimate doesn't promise a 30% performance improvement,
# there are currently no plans to increase Naggr.
#
# Special thanks to David Woodhouse <dwmw2@infradead.org> for
# providing access to a Westmere-based system on behalf of Intel
# Open Source Technology Centre.

# January 2010
#
# Tweaked to optimize transitions between integer and FP operations
# on the same XMM register, the PCLMULQDQ subroutine was measured to
# process one byte in 2.07 cycles on Sandy Bridge, and in 2.12 on
# Westmere. The minor regression on Westmere is outweighed by a ~15%
# improvement on Sandy Bridge. Strangely enough, an attempt to modify
# the 64-bit code in a similar manner resulted in almost 20%
# degradation on Sandy Bridge, where the original 64-bit code
# processes one byte in 1.95 cycles.

#####################################################################
# For reference, AMD Bulldozer processes one byte in 1.98 cycles in
# 32-bit mode and 1.89 in 64-bit.

# February 2013
#
# Overhaul: aggregate Karatsuba post-processing, improve ILP in
# reduction_alg9. Resulting performance is 1.96 cycles per byte on
# Westmere, 1.95 on Sandy/Ivy Bridge, 1.76 on Bulldozer.

$0 =~ m/(.*[\/\\])[^\/\\]+$/; $dir=$1;
push(@INC,"${dir}","${dir}../../../perlasm");
require "x86asm.pl";

$output=pop;
open STDOUT,">$output";

&asm_init($ARGV[0],$x86only = $ARGV[$#ARGV] eq "386");

$sse2=0;
for (@ARGV) { $sse2=1 if (/-DOPENSSL_IA32_SSE2/); }

($Zhh,$Zhl,$Zlh,$Zll) = ("ebp","edx","ecx","ebx");
$inp  = "edi";
$Htbl = "esi";

$unroll = 0;	# Affects x86 loop. Folded loop performs ~7% worse
		# than unrolled, which has to be weighed against
		# 2.5x x86-specific code size reduction.

sub x86_loop {
    my $off = shift;
    my $rem = "eax";

	&mov	($Zhh,&DWP(4,$Htbl,$Zll));
	&mov	($Zhl,&DWP(0,$Htbl,$Zll));
	&mov	($Zlh,&DWP(12,$Htbl,$Zll));
	&mov	($Zll,&DWP(8,$Htbl,$Zll));
	&xor	($rem,$rem);	# avoid partial register stalls on PIII

	# shrd practically kills P4, 2.5x deterioration, but P4 has
	# MMX code-path to execute. shrd runs a tad faster [than twice
	# the shifts, move's and or's] on pre-MMX Pentium (as well as
	# PIII and Core2), *but* minimizes code size, spares a register
	# and thus allows the loop to be folded...
	if (!$unroll) {
	my $cnt = $inp;
		&mov	($cnt,15);
		&jmp	(&label("x86_loop"));
		&set_label("x86_loop",16);
	    for($i=1;$i<=2;$i++) {
		&mov	(&LB($rem),&LB($Zll));
		&shrd	($Zll,$Zlh,4);
		&and	(&LB($rem),0xf);
		&shrd	($Zlh,$Zhl,4);
		&shrd	($Zhl,$Zhh,4);
		&shr	($Zhh,4);
		&xor	($Zhh,&DWP($off+16,"esp",$rem,4));

		&mov	(&LB($rem),&BP($off,"esp",$cnt));
		if ($i&1) {
			&and	(&LB($rem),0xf0);
		} else {
			&shl	(&LB($rem),4);
		}

		&xor	($Zll,&DWP(8,$Htbl,$rem));
		&xor	($Zlh,&DWP(12,$Htbl,$rem));
		&xor	($Zhl,&DWP(0,$Htbl,$rem));
		&xor	($Zhh,&DWP(4,$Htbl,$rem));

		if ($i&1) {
			&dec	($cnt);
			&js	(&label("x86_break"));
		} else {
			&jmp	(&label("x86_loop"));
		}
	    }
	&set_label("x86_break",16);
	} else {
	    for($i=1;$i<32;$i++) {
		&comment($i);
		&mov	(&LB($rem),&LB($Zll));
		&shrd	($Zll,$Zlh,4);
		&and	(&LB($rem),0xf);
		&shrd	($Zlh,$Zhl,4);
		&shrd	($Zhl,$Zhh,4);
		&shr	($Zhh,4);
		&xor	($Zhh,&DWP($off+16,"esp",$rem,4));

		if ($i&1) {
			&mov	(&LB($rem),&BP($off+15-($i>>1),"esp"));
			&and	(&LB($rem),0xf0);
		} else {
			&mov	(&LB($rem),&BP($off+15-($i>>1),"esp"));
			&shl	(&LB($rem),4);
		}

		&xor	($Zll,&DWP(8,$Htbl,$rem));
		&xor	($Zlh,&DWP(12,$Htbl,$rem));
		&xor	($Zhl,&DWP(0,$Htbl,$rem));
		&xor	($Zhh,&DWP(4,$Htbl,$rem));
	    }
	}
	&bswap	($Zll);
	&bswap	($Zlh);
	&bswap	($Zhl);
	if (!$x86only) {
		&bswap	($Zhh);
	} else {
		&mov	("eax",$Zhh);
		&bswap	("eax");
		&mov	($Zhh,"eax");
	}
}

if ($unroll) {
    &function_begin_B("_x86_gmult_4bit_inner");
	&x86_loop(4);
	&ret	();
    &function_end_B("_x86_gmult_4bit_inner");
}

sub deposit_rem_4bit {
    my $bias = shift;

	&mov	(&DWP($bias+0, "esp"),0x0000<<16);
	&mov	(&DWP($bias+4, "esp"),0x1C20<<16);
	&mov	(&DWP($bias+8, "esp"),0x3840<<16);
	&mov	(&DWP($bias+12,"esp"),0x2460<<16);
	&mov	(&DWP($bias+16,"esp"),0x7080<<16);
	&mov	(&DWP($bias+20,"esp"),0x6CA0<<16);
	&mov	(&DWP($bias+24,"esp"),0x48C0<<16);
	&mov	(&DWP($bias+28,"esp"),0x54E0<<16);
	&mov	(&DWP($bias+32,"esp"),0xE100<<16);
	&mov	(&DWP($bias+36,"esp"),0xFD20<<16);
	&mov	(&DWP($bias+40,"esp"),0xD940<<16);
	&mov	(&DWP($bias+44,"esp"),0xC560<<16);
	&mov	(&DWP($bias+48,"esp"),0x9180<<16);
	&mov	(&DWP($bias+52,"esp"),0x8DA0<<16);
	&mov	(&DWP($bias+56,"esp"),0xA9C0<<16);
	&mov	(&DWP($bias+60,"esp"),0xB5E0<<16);
}

if (!$x86only) {{{

&static_label("rem_4bit");

if (!$sse2) {{	# pure-MMX "May" version...

	# This code was removed since SSE2 is required for BoringSSL. The
	# outer structure of the code was retained to minimize future merge
	# conflicts.

}} else {{	# "June" MMX version...
	# ... has slower "April" gcm_gmult_4bit_mmx with folded
	# loop. This is done to conserve code size...
$S=16;		# shift factor for rem_4bit

sub mmx_loop() {
# MMX version performs 2.8 times better on P4 (see comment in non-MMX
# routine for further details), 40% better on Opteron and Core2, 50%
# better on PIII... In other words the effort is considered to be well
# spent...
    my $inp = shift;
    my $rem_4bit = shift;
    my $cnt = $Zhh;
    my $nhi = $Zhl;
    my $nlo = $Zlh;
    my $rem = $Zll;

    my ($Zlo,$Zhi) = ("mm0","mm1");
    my $tmp = "mm2";

	&xor	($nlo,$nlo);	# avoid partial register stalls on PIII
	&mov	($nhi,$Zll);
	&mov	(&LB($nlo),&LB($nhi));
	&mov	($cnt,14);
	&shl	(&LB($nlo),4);
	&and	($nhi,0xf0);
	&movq	($Zlo,&QWP(8,$Htbl,$nlo));
	&movq	($Zhi,&QWP(0,$Htbl,$nlo));
	&movd	($rem,$Zlo);
	&jmp	(&label("mmx_loop"));

    &set_label("mmx_loop",16);
	&psrlq	($Zlo,4);
	&and	($rem,0xf);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nhi));
	&mov	(&LB($nlo),&BP(0,$inp,$cnt));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&dec	($cnt);
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nhi));
	&mov	($nhi,$nlo);
	&pxor	($Zlo,$tmp);
	&js	(&label("mmx_break"));

	&shl	(&LB($nlo),4);
	&and	($rem,0xf);
	&psrlq	($Zlo,4);
	&and	($nhi,0xf0);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nlo));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nlo));
	&pxor	($Zlo,$tmp);
	&jmp	(&label("mmx_loop"));

    &set_label("mmx_break",16);
	&shl	(&LB($nlo),4);
	&and	($rem,0xf);
	&psrlq	($Zlo,4);
	&and	($nhi,0xf0);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nlo));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nlo));
	&pxor	($Zlo,$tmp);

	&psrlq	($Zlo,4);
	&and	($rem,0xf);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nhi));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nhi));
	&pxor	($Zlo,$tmp);

	&psrlq	($Zlo,32);	# lower part of Zlo is already there
	&movd	($Zhl,$Zhi);
	&psrlq	($Zhi,32);
	&movd	($Zlh,$Zlo);
	&movd	($Zhh,$Zhi);

	&bswap	($Zll);
	&bswap	($Zhl);
	&bswap	($Zlh);
	&bswap	($Zhh);
}

&function_begin("gcm_gmult_4bit_mmx");
	&mov	($inp,&wparam(0));	# load Xi
	&mov	($Htbl,&wparam(1));	# load Htable

	&call	(&label("pic_point"));
	&set_label("pic_point");
	&blindpop("eax");
	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));

	&movz	($Zll,&BP(15,$inp));

	&mmx_loop($inp,"eax");

	&emms	();
	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(0,$inp),$Zhh);
&function_end("gcm_gmult_4bit_mmx");

######################################################################
# The subroutine below is the "528B" variant of the "4-bit" GCM GHASH
# function (see gcm128.c for details). It provides a further 20-40%
# performance improvement over the above-mentioned "May" version.

&static_label("rem_8bit");

&function_begin("gcm_ghash_4bit_mmx");
{ my ($Zlo,$Zhi) = ("mm7","mm6");
  my $rem_8bit = "esi";
  my $Htbl = "ebx";

	# parameter block
	&mov	("eax",&wparam(0));	# Xi
	&mov	("ebx",&wparam(1));	# Htable
	&mov	("ecx",&wparam(2));	# inp
	&mov	("edx",&wparam(3));	# len
	&mov	("ebp","esp");		# original %esp
	&call	(&label("pic_point"));
	&set_label("pic_point");
	&blindpop($rem_8bit);
	&lea	($rem_8bit,&DWP(&label("rem_8bit")."-".&label("pic_point"),$rem_8bit));

	&sub	("esp",512+16+16);	# allocate stack frame...
	&and	("esp",-64);		# ...and align it
	&sub	("esp",16);		# place for (u8)(H[]<<4)
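	# Rough map of the frame addressed below, relative to the aligned
	# %esp: bytes 0-15 hold (u8)(Htable[]<<4), 16-271 hold Htable split
	# into low (16-143) and high (144-271) 64-bit halves, 272-527 hold
	# Htable[]>>4 in the same layout, 528-543 hold the Xi^input block,
	# and 528+16+... hold the saved Xi, inp, end-of-input and original
	# %esp values.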

	&add	("edx","ecx");			# pointer to the end of input
	&mov	(&DWP(528+16+0,"esp"),"eax");	# save Xi
	&mov	(&DWP(528+16+8,"esp"),"edx");	# save inp+len
	&mov	(&DWP(528+16+12,"esp"),"ebp");	# save original %esp

{ my @lo  = ("mm0","mm1","mm2");
  my @hi  = ("mm3","mm4","mm5");
  my @tmp = ("mm6","mm7");
  my ($off1,$off2,$i) = (0,0,);

	&add	($Htbl,128);		# optimize for size
	&lea	("edi",&DWP(16+128,"esp"));
	&lea	("ebp",&DWP(16+256+128,"esp"));

	# decompose Htable (low and high parts are kept separately),
	# generate Htable[]>>4, (u8)(Htable[]<<4), save to stack...
	for ($i=0;$i<18;$i++) {

	    &mov	("edx",&DWP(16*$i+8-128,$Htbl))		if ($i<16);
	    &movq	($lo[0],&QWP(16*$i+8-128,$Htbl))	if ($i<16);
	    &psllq	($tmp[1],60)				if ($i>1);
	    &movq	($hi[0],&QWP(16*$i+0-128,$Htbl))	if ($i<16);
	    &por	($lo[2],$tmp[1])			if ($i>1);
	    &movq	(&QWP($off1-128,"edi"),$lo[1])		if ($i>0 && $i<17);
	    &psrlq	($lo[1],4)				if ($i>0 && $i<17);
	    &movq	(&QWP($off1,"edi"),$hi[1])		if ($i>0 && $i<17);
	    &movq	($tmp[0],$hi[1])			if ($i>0 && $i<17);
	    &movq	(&QWP($off2-128,"ebp"),$lo[2])		if ($i>1);
	    &psrlq	($hi[1],4)				if ($i>0 && $i<17);
	    &movq	(&QWP($off2,"ebp"),$hi[2])		if ($i>1);
	    &shl	("edx",4)				if ($i<16);
	    &mov	(&BP($i,"esp"),&LB("edx"))		if ($i<16);

	    unshift	(@lo,pop(@lo));		# "rotate" registers
	    unshift	(@hi,pop(@hi));
	    unshift	(@tmp,pop(@tmp));
	    $off1 += 8	if ($i>0);
	    $off2 += 8	if ($i>1);
	}
}

	&movq	($Zhi,&QWP(0,"eax"));
	&mov	("ebx",&DWP(8,"eax"));
	&mov	("edx",&DWP(12,"eax"));		# load Xi

&set_label("outer",16);
{ my $nlo = "eax";
  my $dat = "edx";
  my @nhi = ("edi","ebp");
  my @rem = ("ebx","ecx");
  my @red = ("mm0","mm1","mm2");
  my $tmp = "mm3";

	&xor	($dat,&DWP(12,"ecx"));		# merge input data
	&xor	("ebx",&DWP(8,"ecx"));
	&pxor	($Zhi,&QWP(0,"ecx"));
	&lea	("ecx",&DWP(16,"ecx"));		# inp+=16
	#&mov	(&DWP(528+12,"esp"),$dat);	# save inp^Xi
	&mov	(&DWP(528+8,"esp"),"ebx");
	&movq	(&QWP(528+0,"esp"),$Zhi);
	&mov	(&DWP(528+16+4,"esp"),"ecx");	# save inp

	&xor	($nlo,$nlo);
	&rol	($dat,8);
	&mov	(&LB($nlo),&LB($dat));
	&mov	($nhi[1],$nlo);
	&and	(&LB($nlo),0x0f);
	&shr	($nhi[1],4);
	&pxor	($red[0],$red[0]);
	&rol	($dat,8);			# next byte
	&pxor	($red[1],$red[1]);
	&pxor	($red[2],$red[2]);

	# Just like in "May" version modulo-schedule for critical path in
	# 'Z.hi ^= rem_8bit[Z.lo&0xff^((u8)H[nhi]<<4)]<<48'. Final 'pxor'
	# is scheduled so late that rem_8bit[] has to be shifted *right*
	# by 16, which is why last argument to pinsrw is 2, which
	# corresponds to <<32=<<48>>16...
	for ($j=11,$i=0;$i<15;$i++) {

	    if ($i>0) {
		&pxor	($Zlo,&QWP(16,"esp",$nlo,8));		# Z^=H[nlo]
		&rol	($dat,8);				# next byte
		&pxor	($Zhi,&QWP(16+128,"esp",$nlo,8));

		&pxor	($Zlo,$tmp);
		&pxor	($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
		&xor	(&LB($rem[1]),&BP(0,"esp",$nhi[0]));	# rem^(H[nhi]<<4)
	    } else {
		&movq	($Zlo,&QWP(16,"esp",$nlo,8));
		&movq	($Zhi,&QWP(16+128,"esp",$nlo,8));
	    }

	    &mov	(&LB($nlo),&LB($dat));
	    &mov	($dat,&DWP(528+$j,"esp"))		if (--$j%4==0);

	    &movd	($rem[0],$Zlo);
	    &movz	($rem[1],&LB($rem[1]))			if ($i>0);
	    &psrlq	($Zlo,8);				# Z>>=8

	    &movq	($tmp,$Zhi);
	    &mov	($nhi[0],$nlo);
	    &psrlq	($Zhi,8);

	    &pxor	($Zlo,&QWP(16+256+0,"esp",$nhi[1],8));	# Z^=H[nhi]>>4
	    &and	(&LB($nlo),0x0f);
	    &psllq	($tmp,56);

	    &pxor	($Zhi,$red[1])				if ($i>1);
	    &shr	($nhi[0],4);
	    &pinsrw	($red[0],&WP(0,$rem_8bit,$rem[1],2),2)	if ($i>0);

	    unshift	(@red,pop(@red));	# "rotate" registers
	    unshift	(@rem,pop(@rem));
	    unshift	(@nhi,pop(@nhi));
	}

	&pxor	($Zlo,&QWP(16,"esp",$nlo,8));		# Z^=H[nlo]
	&pxor	($Zhi,&QWP(16+128,"esp",$nlo,8));
	&xor	(&LB($rem[1]),&BP(0,"esp",$nhi[0]));	# rem^(H[nhi]<<4)

	&pxor	($Zlo,$tmp);
	&pxor	($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
	&movz	($rem[1],&LB($rem[1]));

	&pxor	($red[2],$red[2]);	# clear 2nd word
	&psllq	($red[1],4);

	&movd	($rem[0],$Zlo);
	&psrlq	($Zlo,4);				# Z>>=4

	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&shl	($rem[0],4);				# rem<<4

	&pxor	($Zlo,&QWP(16,"esp",$nhi[1],8));	# Z^=H[nhi]
	&psllq	($tmp,60);
	&movz	($rem[0],&LB($rem[0]));

	&pxor	($Zlo,$tmp);
	&pxor	($Zhi,&QWP(16+128,"esp",$nhi[1],8));

	&pinsrw	($red[0],&WP(0,$rem_8bit,$rem[1],2),2);
	&pxor	($Zhi,$red[1]);

	&movd	($dat,$Zlo);
	&pinsrw	($red[2],&WP(0,$rem_8bit,$rem[0],2),3);	# last is <<48

	&psllq	($red[0],12);		# correct by <<16>>4
	&pxor	($Zhi,$red[0]);
	&psrlq	($Zlo,32);
	&pxor	($Zhi,$red[2]);

	&mov	("ecx",&DWP(528+16+4,"esp"));	# restore inp
	&movd	("ebx",$Zlo);
	&movq	($tmp,$Zhi);			# 01234567
	&psllw	($Zhi,8);			# 1.3.5.7.
	&psrlw	($tmp,8);			# .0.2.4.6
	&por	($Zhi,$tmp);			# 10325476
	&bswap	($dat);
	&pshufw	($Zhi,$Zhi,0b00011011);		# 76543210
	&bswap	("ebx");

	&cmp	("ecx",&DWP(528+16+8,"esp"));	# are we done?
	&jne	(&label("outer"));
}

	&mov	("eax",&DWP(528+16+0,"esp"));	# restore Xi
	&mov	(&DWP(12,"eax"),"edx");
	&mov	(&DWP(8,"eax"),"ebx");
	&movq	(&QWP(0,"eax"),$Zhi);

	&mov	("esp",&DWP(528+16+12,"esp"));	# restore original %esp
	&emms	();
}
&function_end("gcm_ghash_4bit_mmx");
}}

if ($sse2) {{
######################################################################
# PCLMULQDQ version.

$Xip="eax";
$Htbl="edx";
$const="ecx";
$inp="esi";
$len="ebx";

($Xi,$Xhi)=("xmm0","xmm1"); $Hkey="xmm2";
($T1,$T2,$T3)=("xmm3","xmm4","xmm5");
($Xn,$Xhn)=("xmm6","xmm7");

&static_label("bswap");

sub clmul64x64_T2 {	# minimal "register" pressure
my ($Xhi,$Xi,$Hkey,$HK)=@_;
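	# Karatsuba: the three pclmulqdq below compute Xi.lo*H.lo,
	# Xi.hi*H.hi and (Xi.lo^Xi.hi)*(H.lo^H.hi); xoring the third
	# product with the first two recovers the middle 64-bit halves of
	# the 128x128-bit carry-less product, which are then folded into
	# $Xi:$Xhi by the psrldq/pslldq pair.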

	&movdqa		($Xhi,$Xi);		#
	&pshufd		($T1,$Xi,0b01001110);
	&pshufd		($T2,$Hkey,0b01001110)	if (!defined($HK));
	&pxor		($T1,$Xi);		#
	&pxor		($T2,$Hkey)		if (!defined($HK));
	$HK=$T2				if (!defined($HK));

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T1,$HK,0x00);		#######
	&xorps		($T1,$Xi);		#
	&xorps		($T1,$Xhi);		#

	&movdqa		($T2,$T1);		#
	&psrldq		($T1,8);
	&pslldq		($T2,8);		#
	&pxor		($Xhi,$T1);
	&pxor		($Xi,$T2);		#
}

sub clmul64x64_T3 {
# Even though this subroutine offers visually better ILP, it
# was empirically found to be a tad slower than above version.
# At least in gcm_ghash_clmul context. But it's just as well,
# because loop modulo-scheduling is possible only thanks to
# minimized "register" pressure...
my ($Xhi,$Xi,$Hkey)=@_;

	&movdqa		($T1,$Xi);		#
	&movdqa		($Xhi,$Xi);
	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pshufd		($T2,$T1,0b01001110);	#
	&pshufd		($T3,$Hkey,0b01001110);
	&pxor		($T2,$T1);		#
	&pxor		($T3,$Hkey);
	&pclmulqdq	($T2,$T3,0x00);		#######
	&pxor		($T2,$Xi);		#
	&pxor		($T2,$Xhi);		#

	&movdqa		($T3,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T3,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T3);		#
}

if (1) {		# Algorithm 9 with <<1 twist.
			# Reduction is shorter and uses only two
			# temporary registers, which makes it better
			# candidate for interleaving with 64x64
			# multiplication. Pre-modulo-scheduled loop
			# was found to be ~20% faster than Algorithm 5
			# below. Algorithm 9 was therefore chosen for
			# further optimization...

sub reduction_alg9 {	# 17/11 times faster than Intel version
my ($Xhi,$Xi) = @_;
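	# Modulo-reduction of the 256-bit product in $Xhi:$Xi by the GHASH
	# polynomial, i.e. Algorithm 9 from the Intel carry-less
	# multiplication white paper adjusted for the <<1-twisted H noted
	# above; the same two phases are interleaved manually with the
	# multiplications in the mod_loop of gcm_ghash_clmul below.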

	# 1st phase
	&movdqa	($T2,$Xi);		#
	&movdqa	($T1,$Xi);
	&psllq	($Xi,5);
	&pxor	($T1,$Xi);		#
	&psllq	($Xi,1);
	&pxor	($Xi,$T1);		#
	&psllq	($Xi,57);		#
	&movdqa	($T1,$Xi);		#
	&pslldq	($Xi,8);
	&psrldq	($T1,8);		#
	&pxor	($Xi,$T2);
	&pxor	($Xhi,$T1);		#

	# 2nd phase
	&movdqa	($T2,$Xi);
	&psrlq	($Xi,1);
	&pxor	($Xhi,$T2);		#
	&pxor	($T2,$Xi);
	&psrlq	($Xi,5);
	&pxor	($Xi,$T2);		#
	&psrlq	($Xi,1);		#
	&pxor	($Xi,$Xhi)		#
}

&function_begin_B("gcm_init_clmul");
	&mov	($Htbl,&wparam(0));
	&mov	($Xip,&wparam(1));

	&call	(&label("pic"));
&set_label("pic");
	&blindpop ($const);
	&lea	($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu	($Hkey,&QWP(0,$Xip));
	&pshufd	($Hkey,$Hkey,0b01001110);	# dword swap

	# <<1 twist
	&pshufd	($T2,$Hkey,0b11111111);	# broadcast uppermost dword
	&movdqa	($T1,$Hkey);
	&psllq	($Hkey,1);
	&pxor	($T3,$T3);		#
	&psrlq	($T1,63);
	&pcmpgtd ($T3,$T2);		# broadcast carry bit
	&pslldq	($T1,8);
	&por	($Hkey,$T1);		# H<<=1

	# magic reduction
	&pand	($T3,&QWP(16,$const));	# 0x1c2_polynomial
	&pxor	($Hkey,$T3);		# if(carry) H^=0x1c2_polynomial

	# calculate H^2
	&movdqa	($Xi,$Hkey);
	&clmul64x64_T2	($Xhi,$Xi,$Hkey);
	&reduction_alg9	($Xhi,$Xi);

	&pshufd	($T1,$Hkey,0b01001110);
	&pshufd	($T2,$Xi,0b01001110);
	&pxor	($T1,$Hkey);		# Karatsuba pre-processing
	&movdqu	(&QWP(0,$Htbl),$Hkey);	# save H
	&pxor	($T2,$Xi);		# Karatsuba pre-processing
	&movdqu	(&QWP(16,$Htbl),$Xi);	# save H^2
	&palignr ($T2,$T1,8);		# low part is H.lo^H.hi
	&movdqu	(&QWP(32,$Htbl),$T2);	# save Karatsuba "salt"

	&ret	();
&function_end_B("gcm_init_clmul");

&function_begin_B("gcm_gmult_clmul");
	&mov	($Xip,&wparam(0));
	&mov	($Htbl,&wparam(1));

	&call	(&label("pic"));
&set_label("pic");
	&blindpop ($const);
	&lea	($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu	($Xi,&QWP(0,$Xip));
	&movdqa	($T3,&QWP(0,$const));
	&movups	($Hkey,&QWP(0,$Htbl));
	&pshufb	($Xi,$T3);
	&movups	($T2,&QWP(32,$Htbl));

	&clmul64x64_T2	($Xhi,$Xi,$Hkey,$T2);
	&reduction_alg9	($Xhi,$Xi);

	&pshufb	($Xi,$T3);
	&movdqu	(&QWP(0,$Xip),$Xi);

	&ret	();
&function_end_B("gcm_gmult_clmul");

&function_begin("gcm_ghash_clmul");
	&mov	($Xip,&wparam(0));
	&mov	($Htbl,&wparam(1));
	&mov	($inp,&wparam(2));
	&mov	($len,&wparam(3));

	&call	(&label("pic"));
&set_label("pic");
	&blindpop ($const);
	&lea	($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu	($Xi,&QWP(0,$Xip));
	&movdqa	($T3,&QWP(0,$const));
	&movdqu	($Hkey,&QWP(0,$Htbl));
	&pshufb	($Xi,$T3);

	&sub	($len,0x10);
	&jz	(&label("odd_tail"));

	#######
	# Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
	#	[(H*Ii+1) + (H*Xi+1)] mod P =
	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
	#
	&movdqu	($T1,&QWP(0,$inp));	# Ii
	&movdqu	($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb	($T1,$T3);
	&pshufb	($Xn,$T3);
	&movdqu	($T3,&QWP(32,$Htbl));
	&pxor	($Xi,$T1);		# Ii+Xi

	&pshufd	($T1,$Xn,0b01001110);	# H*Ii+1
	&movdqa	($Xhn,$Xn);
	&pxor	($T1,$Xn);		#
	&lea	($inp,&DWP(32,$inp));	# i+=2

	&pclmulqdq	($Xn,$Hkey,0x00);	#######
	&pclmulqdq	($Xhn,$Hkey,0x11);	#######
	&pclmulqdq	($T1,$T3,0x00);		#######
	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
	&nop		();

	&sub	($len,0x20);
	&jbe	(&label("even_tail"));
	&jmp	(&label("mod_loop"));

&set_label("mod_loop",32);
	&pshufd	($T2,$Xi,0b01001110);		# H^2*(Ii+Xi)
	&movdqa	($Xhi,$Xi);
	&pxor	($T2,$Xi);			#
	&nop	();

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T2,$T3,0x10);		#######
	&movups		($Hkey,&QWP(0,$Htbl));	# load H

	&xorps		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&movdqa		($T3,&QWP(0,$const));
	&xorps		($Xhi,$Xhn);
	&movdqu		($Xhn,&QWP(0,$inp));	# Ii
	&pxor		($T1,$Xi);		# aggregated Karatsuba post-processing
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pxor		($T1,$Xhi);		#

	&pshufb		($Xhn,$T3);
	&pxor		($T2,$T1);		#

	&movdqa		($T1,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T1,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T1);		#
	&pshufb		($Xn,$T3);
	&pxor		($Xhi,$Xhn);		# "Ii+Xi", consume early

	&movdqa		($Xhn,$Xn);		#&clmul64x64_TX	($Xhn,$Xn,$Hkey); H*Ii+1
	&movdqa		($T2,$Xi);		#&reduction_alg9($Xhi,$Xi); 1st phase
	&movdqa		($T1,$Xi);
	&psllq		($Xi,5);
	&pxor		($T1,$Xi);		#
	&psllq		($Xi,1);
	&pxor		($Xi,$T1);		#
	&pclmulqdq	($Xn,$Hkey,0x00);	#######
	&movups		($T3,&QWP(32,$Htbl));
	&psllq		($Xi,57);		#
	&movdqa		($T1,$Xi);		#
	&pslldq		($Xi,8);
	&psrldq		($T1,8);		#
	&pxor		($Xi,$T2);
	&pxor		($Xhi,$T1);		#
	&pshufd		($T1,$Xhn,0b01001110);
	&movdqa		($T2,$Xi);		# 2nd phase
	&psrlq		($Xi,1);
	&pxor		($T1,$Xhn);
	&pxor		($Xhi,$T2);		#
	&pclmulqdq	($Xhn,$Hkey,0x11);	#######
	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
	&pxor		($T2,$Xi);
	&psrlq		($Xi,5);
	&pxor		($Xi,$T2);		#
	&psrlq		($Xi,1);		#
	&pxor		($Xi,$Xhi)		#
	&pclmulqdq	($T1,$T3,0x00);		#######

	&lea	($inp,&DWP(32,$inp));
	&sub	($len,0x20);
	&ja	(&label("mod_loop"));

&set_label("even_tail");
	&pshufd	($T2,$Xi,0b01001110);		# H^2*(Ii+Xi)
	&movdqa	($Xhi,$Xi);
	&pxor	($T2,$Xi);			#

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T2,$T3,0x10);		#######
	&movdqa		($T3,&QWP(0,$const));

	&xorps	($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&xorps	($Xhi,$Xhn);
	&pxor	($T1,$Xi);		# aggregated Karatsuba post-processing
	&pxor	($T1,$Xhi);		#

	&pxor	($T2,$T1);		#

	&movdqa	($T1,$T2);		#
	&psrldq	($T2,8);
	&pslldq	($T1,8);		#
	&pxor	($Xhi,$T2);
	&pxor	($Xi,$T1);		#

	&reduction_alg9	($Xhi,$Xi);

	&test	($len,$len);
	&jnz	(&label("done"));

	&movups	($Hkey,&QWP(0,$Htbl));	# load H
&set_label("odd_tail");
	&movdqu	($T1,&QWP(0,$inp));	# Ii
	&pshufb	($T1,$T3);
	&pxor	($Xi,$T1);		# Ii+Xi

	&clmul64x64_T2	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
	&reduction_alg9	($Xhi,$Xi);

&set_label("done");
	&pshufb	($Xi,$T3);
	&movdqu	(&QWP(0,$Xip),$Xi);
&function_end("gcm_ghash_clmul");

} else {		# Algorithm 5. Kept for reference purposes.

sub reduction_alg5 {	# 19/16 times faster than Intel version
my ($Xhi,$Xi)=@_;

	# <<1
	&movdqa	($T1,$Xi);		#
	&movdqa	($T2,$Xhi);
	&pslld	($Xi,1);
	&pslld	($Xhi,1);		#
	&psrld	($T1,31);
	&psrld	($T2,31);		#
	&movdqa	($T3,$T1);
	&pslldq	($T1,4);
	&psrldq	($T3,12);		#
	&pslldq	($T2,4);
	&por	($Xhi,$T3);		#
	&por	($Xi,$T1);
	&por	($Xhi,$T2);		#

	# 1st phase
	&movdqa	($T1,$Xi);
	&movdqa	($T2,$Xi);
	&movdqa	($T3,$Xi);		#
	&pslld	($T1,31);
	&pslld	($T2,30);
	&pslld	($Xi,25);		#
	&pxor	($T1,$T2);
	&pxor	($T1,$Xi);		#
	&movdqa	($T2,$T1);		#
	&pslldq	($T1,12);
	&psrldq	($T2,4);		#
	&pxor	($T3,$T1);

	# 2nd phase
	&pxor	($Xhi,$T3);		#
	&movdqa	($Xi,$T3);
	&movdqa	($T1,$T3);
	&psrld	($Xi,1);		#
	&psrld	($T1,2);
	&psrld	($T3,7);		#
	&pxor	($Xi,$T1);
	&pxor	($Xhi,$T2);
	&pxor	($Xi,$T3);		#
	&pxor	($Xi,$Xhi);		#
}

&function_begin_B("gcm_init_clmul");
	&mov	($Htbl,&wparam(0));
	&mov	($Xip,&wparam(1));

	&call	(&label("pic"));
&set_label("pic");
	&blindpop ($const);
	&lea	($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu	($Hkey,&QWP(0,$Xip));
	&pshufd	($Hkey,$Hkey,0b01001110);	# dword swap

	# calculate H^2
	&movdqa	($Xi,$Hkey);
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
	&reduction_alg5	($Xhi,$Xi);

	&movdqu	(&QWP(0,$Htbl),$Hkey);	# save H
	&movdqu	(&QWP(16,$Htbl),$Xi);	# save H^2

	&ret	();
&function_end_B("gcm_init_clmul");

&function_begin_B("gcm_gmult_clmul");
	&mov	($Xip,&wparam(0));
	&mov	($Htbl,&wparam(1));

	&call	(&label("pic"));
&set_label("pic");
	&blindpop ($const);
	&lea	($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu	($Xi,&QWP(0,$Xip));
	&movdqa	($Xn,&QWP(0,$const));
	&movdqu	($Hkey,&QWP(0,$Htbl));
	&pshufb	($Xi,$Xn);

	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
	&reduction_alg5	($Xhi,$Xi);

	&pshufb	($Xi,$Xn);
	&movdqu	(&QWP(0,$Xip),$Xi);

	&ret	();
&function_end_B("gcm_gmult_clmul");

&function_begin("gcm_ghash_clmul");
	&mov	($Xip,&wparam(0));
	&mov	($Htbl,&wparam(1));
	&mov	($inp,&wparam(2));
	&mov	($len,&wparam(3));

	&call	(&label("pic"));
&set_label("pic");
	&blindpop ($const);
	&lea	($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu	($Xi,&QWP(0,$Xip));
	&movdqa	($T3,&QWP(0,$const));
	&movdqu	($Hkey,&QWP(0,$Htbl));
	&pshufb	($Xi,$T3);

	&sub	($len,0x10);
	&jz	(&label("odd_tail"));

	#######
	# Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
	#	[(H*Ii+1) + (H*Xi+1)] mod P =
	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
	#
	&movdqu	($T1,&QWP(0,$inp));	# Ii
	&movdqu	($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb	($T1,$T3);
	&pshufb	($Xn,$T3);
	&pxor	($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
	&movdqu	($Hkey,&QWP(16,$Htbl));	# load H^2

	&sub	($len,0x20);
	&lea	($inp,&DWP(32,$inp));	# i+=2
	&jbe	(&label("even_tail"));

&set_label("mod_loop");
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)
	&movdqu	($Hkey,&QWP(0,$Htbl));	# load H

	&pxor	($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&pxor	($Xhi,$Xhn);

	&reduction_alg5	($Xhi,$Xi);

	#######
	&movdqa	($T3,&QWP(0,$const));
	&movdqu	($T1,&QWP(0,$inp));	# Ii
	&movdqu	($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb	($T1,$T3);
	&pshufb	($Xn,$T3);
	&pxor	($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
	&movdqu	($Hkey,&QWP(16,$Htbl));	# load H^2

	&sub	($len,0x20);
	&lea	($inp,&DWP(32,$inp));
	&ja	(&label("mod_loop"));

&set_label("even_tail");
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)

	&pxor	($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&pxor	($Xhi,$Xhn);

	&reduction_alg5	($Xhi,$Xi);

	&movdqa	($T3,&QWP(0,$const));
	&test	($len,$len);
	&jnz	(&label("done"));

	&movdqu	($Hkey,&QWP(0,$Htbl));	# load H
&set_label("odd_tail");
	&movdqu	($T1,&QWP(0,$inp));	# Ii
	&pshufb	($T1,$T3);
	&pxor	($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
	&reduction_alg5	($Xhi,$Xi);

	&movdqa	($T3,&QWP(0,$const));
&set_label("done");
	&pshufb	($Xi,$T3);
	&movdqu	(&QWP(0,$Xip),$Xi);
&function_end("gcm_ghash_clmul");

}

&set_label("bswap",64);
	&data_byte(15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0);
	&data_byte(1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0xc2);	# 0x1c2_polynomial
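# Each rem_8bit[i] entry below is the carry-less product i*0x01C2, the
# reduction term contributed by a byte shifted out of Z; per the
# comment in gcm_ghash_4bit_mmx it is xored, suitably shifted, into
# the top of Z.hi.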
&set_label("rem_8bit",64);
	&data_short(0x0000,0x01C2,0x0384,0x0246,0x0708,0x06CA,0x048C,0x054E);
	&data_short(0x0E10,0x0FD2,0x0D94,0x0C56,0x0918,0x08DA,0x0A9C,0x0B5E);
	&data_short(0x1C20,0x1DE2,0x1FA4,0x1E66,0x1B28,0x1AEA,0x18AC,0x196E);
	&data_short(0x1230,0x13F2,0x11B4,0x1076,0x1538,0x14FA,0x16BC,0x177E);
	&data_short(0x3840,0x3982,0x3BC4,0x3A06,0x3F48,0x3E8A,0x3CCC,0x3D0E);
	&data_short(0x3650,0x3792,0x35D4,0x3416,0x3158,0x309A,0x32DC,0x331E);
	&data_short(0x2460,0x25A2,0x27E4,0x2626,0x2368,0x22AA,0x20EC,0x212E);
	&data_short(0x2A70,0x2BB2,0x29F4,0x2836,0x2D78,0x2CBA,0x2EFC,0x2F3E);
	&data_short(0x7080,0x7142,0x7304,0x72C6,0x7788,0x764A,0x740C,0x75CE);
	&data_short(0x7E90,0x7F52,0x7D14,0x7CD6,0x7998,0x785A,0x7A1C,0x7BDE);
	&data_short(0x6CA0,0x6D62,0x6F24,0x6EE6,0x6BA8,0x6A6A,0x682C,0x69EE);
	&data_short(0x62B0,0x6372,0x6134,0x60F6,0x65B8,0x647A,0x663C,0x67FE);
	&data_short(0x48C0,0x4902,0x4B44,0x4A86,0x4FC8,0x4E0A,0x4C4C,0x4D8E);
	&data_short(0x46D0,0x4712,0x4554,0x4496,0x41D8,0x401A,0x425C,0x439E);
	&data_short(0x54E0,0x5522,0x5764,0x56A6,0x53E8,0x522A,0x506C,0x51AE);
	&data_short(0x5AF0,0x5B32,0x5974,0x58B6,0x5DF8,0x5C3A,0x5E7C,0x5FBE);
	&data_short(0xE100,0xE0C2,0xE284,0xE346,0xE608,0xE7CA,0xE58C,0xE44E);
	&data_short(0xEF10,0xEED2,0xEC94,0xED56,0xE818,0xE9DA,0xEB9C,0xEA5E);
	&data_short(0xFD20,0xFCE2,0xFEA4,0xFF66,0xFA28,0xFBEA,0xF9AC,0xF86E);
	&data_short(0xF330,0xF2F2,0xF0B4,0xF176,0xF438,0xF5FA,0xF7BC,0xF67E);
	&data_short(0xD940,0xD882,0xDAC4,0xDB06,0xDE48,0xDF8A,0xDDCC,0xDC0E);
	&data_short(0xD750,0xD692,0xD4D4,0xD516,0xD058,0xD19A,0xD3DC,0xD21E);
	&data_short(0xC560,0xC4A2,0xC6E4,0xC726,0xC268,0xC3AA,0xC1EC,0xC02E);
	&data_short(0xCB70,0xCAB2,0xC8F4,0xC936,0xCC78,0xCDBA,0xCFFC,0xCE3E);
	&data_short(0x9180,0x9042,0x9204,0x93C6,0x9688,0x974A,0x950C,0x94CE);
	&data_short(0x9F90,0x9E52,0x9C14,0x9DD6,0x9898,0x995A,0x9B1C,0x9ADE);
	&data_short(0x8DA0,0x8C62,0x8E24,0x8FE6,0x8AA8,0x8B6A,0x892C,0x88EE);
	&data_short(0x83B0,0x8272,0x8034,0x81F6,0x84B8,0x857A,0x873C,0x86FE);
	&data_short(0xA9C0,0xA802,0xAA44,0xAB86,0xAEC8,0xAF0A,0xAD4C,0xAC8E);
	&data_short(0xA7D0,0xA612,0xA454,0xA596,0xA0D8,0xA11A,0xA35C,0xA29E);
	&data_short(0xB5E0,0xB422,0xB664,0xB7A6,0xB2E8,0xB32A,0xB16C,0xB0AE);
	&data_short(0xBBF0,0xBA32,0xB874,0xB9B6,0xBCF8,0xBD3A,0xBF7C,0xBEBE);
}}	# $sse2

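# rem_4bit[] serves the same purpose for the 4-bit code paths: the
# 16-bit constant in entry i is the carry-less product i*0x01C2
# shifted left by 4, and the <<$S placement parks it in the top word
# of each 64-bit entry so that it can be xored straight into the top
# of Z.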
&set_label("rem_4bit",64);
	&data_word(0,0x0000<<$S,0,0x1C20<<$S,0,0x3840<<$S,0,0x2460<<$S);
	&data_word(0,0x7080<<$S,0,0x6CA0<<$S,0,0x48C0<<$S,0,0x54E0<<$S);
	&data_word(0,0xE100<<$S,0,0xFD20<<$S,0,0xD940<<$S,0,0xC560<<$S);
	&data_word(0,0x9180<<$S,0,0x8DA0<<$S,0,0xA9C0<<$S,0,0xB5E0<<$S);
}}}	# !$x86only

&asciz("GHASH for x86, CRYPTOGAMS by <appro\@openssl.org>");
&asm_finish();

close STDOUT;

# A question was raised about the choice of vanilla MMX. Or rather why
# wasn't SSE2 chosen instead? In addition to the fact that MMX runs on
# legacy CPUs such as PIII, the "4-bit" MMX version was observed to
# provide better performance than the *corresponding* SSE2 one even on
# contemporary CPUs. SSE2 results were provided by Peter-Michael Hager.
# He maintains an SSE2 implementation featuring a full range of
# lookup-table sizes, but with per-invocation lookup table setup. The
# latter means that the table size is chosen depending on how much data
# is to be hashed in every given call; more data - larger table. The
# best reported result for Core2 is ~4 cycles per processed byte out of
# a 64KB block. This number accounts even for the 64KB table setup
# overhead. As discussed in gcm128.c, we choose to be more conservative
# in respect to lookup table sizes, but how do the results compare? The
# minimalistic "256B" MMX version delivers ~11 cycles on the same
# platform. As also discussed in gcm128.c, the next in line "8-bit
# Shoup's" or "4KB" method should deliver twice the performance of the
# "256B" one, in other words not worse than ~6 cycles per byte. It
# should also be noted that in the SSE2 case the improvement can be
# "super-linear," i.e. more than twice, mostly because >>8 maps to a
# single instruction on an SSE2 register. This is unlike the "4-bit"
# case, where >>4 maps to the same amount of instructions in both MMX
# and SSE2 cases. The bottom line is that the switch to SSE2 is
# considered to be justifiable only in case we choose to implement the
# "8-bit" method...