path: root/src/runtime/asm_mips64x.s
authorAustin Clements <austin@google.com>2016-10-20 22:45:18 -0400
committerAustin Clements <austin@google.com>2016-10-26 15:44:44 +0000
commit79561a84ceb4435c1294767d26b0b8a0dd77809d (patch)
treed0535eadcf5388405c8059930817eeb9c1db5755 /src/runtime/asm_mips64x.s
parent1c3ab3d4312ec67d6450562bd750bb2c77621a66 (diff)
downloadgo-79561a84ceb4435c1294767d26b0b8a0dd77809d.tar.gz
go-79561a84ceb4435c1294767d26b0b8a0dd77809d.zip
runtime: simplify reflectcall write barriers
Currently reflectcall has a subtle dance with write barriers where the assembly code copies the result values from the stack to the in-heap argument frame without write barriers and then calls into the runtime after the fact to invoke the necessary write barriers. For the hybrid barrier (and for ROC), we need to switch to a *pre*-write write barrier, which is very difficult to do with the current setup.

We could tie ourselves in knots of subtle reasoning about why it's okay in this particular case to have a post-write write barrier, but this commit instead takes a different approach. Rather than making things more complex, this simplifies reflection calls so that the argument copy is done in Go using normal bulk write barriers.

The one difficulty with this approach is that calling into Go requires putting arguments on the stack, but the call* functions "donate" their entire stack frame to the called function. We can get away with this now because the copy avoids using the stack and has copied the results out before we clobber the stack frame to call into the write barrier. The solution in this CL is to call another function, passing arguments in registers instead of on the stack, and let that other function reserve more stack space and set up the arguments for the runtime.

This approach seemed to work out the best. I also tried making the call* functions reserve 32 extra bytes of frame for the write barrier arguments and adjust SP up by 32 bytes around the call. However, even with the necessary changes to the assembler to correct the spdelta table, the runtime was still having trouble with the frame layout (and the changes to the assembler caused many other things that do strange things with the SP to fail to assemble). The approach I took doesn't require any funny business with the SP.

Updates #17503.
Change-Id: Ie2bb0084b24d6cff38b5afb218b9e0534ad2119e
Reviewed-on: https://go-review.googlesource.com/31655
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Diffstat (limited to 'src/runtime/asm_mips64x.s')
-rw-r--r--src/runtime/asm_mips64x.s38
1 file changed, 15 insertions, 23 deletions
diff --git a/src/runtime/asm_mips64x.s b/src/runtime/asm_mips64x.s
index 79378df22c..4666741f28 100644
--- a/src/runtime/asm_mips64x.s
+++ b/src/runtime/asm_mips64x.s
@@ -309,8 +309,6 @@ TEXT reflect·call(SB), NOSPLIT, $0-0
TEXT ·reflectcall(SB), NOSPLIT, $-8-32
MOVWU argsize+24(FP), R1
- // NOTE(rsc): No call16, because CALLFN needs four words
- // of argument space to invoke callwritebarrier.
DISPATCH(runtime·call32, 32)
DISPATCH(runtime·call64, 64)
DISPATCH(runtime·call128, 128)
@@ -361,33 +359,27 @@ TEXT NAME(SB), WRAPPER, $MAXSIZE-24; \
PCDATA $PCDATA_StackMapIndex, $0; \
JAL (R4); \
/* copy return values back */ \
+ MOVV argtype+0(FP), R5; \
MOVV arg+16(FP), R1; \
MOVWU n+24(FP), R2; \
MOVWU retoffset+28(FP), R4; \
- MOVV R29, R3; \
+ ADDV $8, R29, R3; \
ADDV R4, R3; \
ADDV R4, R1; \
SUBVU R4, R2; \
- ADDV $8, R3; \
- ADDV R3, R2; \
-loop: \
- BEQ R3, R2, end; \
- MOVBU (R3), R4; \
- ADDV $1, R3; \
- MOVBU R4, (R1); \
- ADDV $1, R1; \
- JMP loop; \
-end: \
- /* execute write barrier updates */ \
- MOVV argtype+0(FP), R5; \
- MOVV arg+16(FP), R1; \
- MOVWU n+24(FP), R2; \
- MOVWU retoffset+28(FP), R4; \
- MOVV R5, 8(R29); \
- MOVV R1, 16(R29); \
- MOVV R2, 24(R29); \
- MOVV R4, 32(R29); \
- JAL runtime·callwritebarrier(SB); \
+ JAL callRet<>(SB); \
+ RET
+
+// callRet copies return values back at the end of call*. This is a
+// separate function so it can allocate stack space for the arguments
+// to reflectcallmove. It does not follow the Go ABI; it expects its
+// arguments in registers.
+TEXT callRet<>(SB), NOSPLIT, $32-0
+ MOVV R5, 8(R29)
+ MOVV R1, 16(R29)
+ MOVV R3, 24(R29)
+ MOVV R2, 32(R29)
+ JAL runtime·reflectcallmove(SB)
RET
CALLFN(·call16, 16)