| author | Sokolov Yura <funny.falcon@gmail.com> | 2017-01-05 09:36:27 +0300 |
|---|---|---|
| committer | Josh Bleecher Snyder <josharian@gmail.com> | 2017-02-10 19:16:29 +0000 |
| commit | d03c1248604679e1e6a01253144065bc57da48b8 (patch) | |
| tree | e668dc946c619cc67b9a5b79df244e58bf0b2233 /src/runtime/asm_arm64.s | |
| parent | 9f75ecd5e12f2b9988086954933d610cd5647918 (diff) | |
| download | go-d03c1248604679e1e6a01253144065bc57da48b8.tar.gz go-d03c1248604679e1e6a01253144065bc57da48b8.zip | |
runtime: implement fastrand in go
So it can be inlined.

Using a bit trick, it can be implemented without a branch
(improved trick version by Minux Ma).

A simple benchmark shows it is faster on i386 and x86_64;
I don't know whether it will be faster on other architectures.
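The branchless bit trick referenced above can be sketched in Go as follows. This is a minimal standalone sketch for illustration: `fastrandStep` is a hypothetical name, and unlike the runtime's `fastrand` it takes and returns the state explicitly rather than storing it in `m.fastrand`. The step mirrors the removed arm64 assembly: double the state, and if the result has its top bit set, XOR in the feedback constant `0x88888eef`.

```go
package main

import "fmt"

// fastrandStep performs one step of the shift-register generator,
// matching the removed assembly (ADD R0, R0; BGE notneg; EOR $0x88888eef).
// The compare-and-branch is replaced by a mask: int32(x)>>31 sign-extends
// to all ones when the top bit of x is set, and to zero otherwise, so the
// AND selects either the constant 0x88888eef or zero.
func fastrandStep(x uint32) uint32 {
	x += x // shift left by one
	x ^= uint32(int32(x)>>31) & 0x88888eef
	return x
}

func main() {
	// Walk a few steps from a nonzero seed (the state must not be zero,
	// or the generator stays at zero forever).
	x := uint32(1)
	for i := 0; i < 4; i++ {
		x = fastrandStep(x)
		fmt.Printf("%#x\n", x)
	}
}
```

Because the mask form has no branch, the function body is small and straight-line, which is what makes it a good inlining candidate once written in Go.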
benchmark old ns/op new ns/op delta
BenchmarkFastrand-3 2.79 1.48 -46.95%
BenchmarkFastrandHashiter-3 25.9 24.9 -3.86%
Change-Id: Ie2eb6d0f598c0bb5fac7f6ad0f8b5e3eddaa361b
Reviewed-on: https://go-review.googlesource.com/34782
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Diffstat (limited to 'src/runtime/asm_arm64.s')
-rw-r--r-- | src/runtime/asm_arm64.s | 12 |
1 file changed, 0 insertions, 12 deletions
diff --git a/src/runtime/asm_arm64.s b/src/runtime/asm_arm64.s
index 0e286d484f..5f2d4a5681 100644
--- a/src/runtime/asm_arm64.s
+++ b/src/runtime/asm_arm64.s
@@ -959,18 +959,6 @@ equal:
 	MOVB	R0, ret+48(FP)
 	RET
 
-TEXT runtime·fastrand(SB),NOSPLIT,$-8-4
-	MOVD	g_m(g), R1
-	MOVWU	m_fastrand(R1), R0
-	ADD	R0, R0
-	CMPW	$0, R0
-	BGE	notneg
-	EOR	$0x88888eef, R0
-notneg:
-	MOVW	R0, m_fastrand(R1)
-	MOVW	R0, ret+0(FP)
-	RET
-
 TEXT runtime·return0(SB), NOSPLIT, $0
 	MOVW	$0, R0
 	RET