path: root/src/cmd/compile/internal/s390x
Age  Commit message  Author
2021-08-03  [dev.typeparams] runtime,cmd/compile,cmd/link: replace jmpdefer with a loop  Austin Clements

Currently, deferreturn runs deferred functions by backing up its return PC to the deferreturn call, and then effectively tail-calling the deferred function (via jmpdefer). The effect of this is that the deferred function appears to be called directly from the deferee, and when it returns, the deferee calls deferreturn again so it can run the next deferred function if necessary. This unusual flow control leads to a large number of special cases and complications all over the tool chain.

This used to be necessary because deferreturn copied the deferred function's argument frame directly into its caller's frame and then had to invoke that call as if it had been called from its caller's frame so it could access its arguments. But now that we've simplified defer processing so the runtime only deals with argument-less closures, this approach is no longer necessary.

This CL simplifies all of this by making deferreturn simply call deferred functions in a loop. This eliminates the need for jmpdefer, so we can delete a bunch of per-architecture assembly code.

This eliminates several special cases on Wasm, since it couldn't support these calling shenanigans directly and thus had to simulate the loop a different way. Now Wasm can largely work the way the other platforms do.

This eliminates the per-architecture Ginsnopdefer operation. On PPC64, this was necessary to reload the TOC pointer after the tail call (since TOC pointers in general make tail calls impossible). The tail call is gone, and in the case where we do force a jump to the deferreturn call when recovering from an open-coded defer, we go through gogo (via runtime.recovery), which handles the TOC. On other platforms, we needed a NOP so traceback didn't get confused by seeing the return to the CALL instruction, rather than the usual return to the instruction following the CALL instruction. Now we don't inject a return to the CALL instruction at all, so this NOP is also unnecessary.

The one potential effect of this is that deferreturn could now appear in stack traces from deferred functions. However, this could already happen from open-coded defers, so we've long since marked deferreturn as a "wrapper" so it gets elided not only from printed stack traces, but from runtime.Callers*.

This is a retry of CL 337652 because we had to back out its parent. There are no changes in this version.

Change-Id: I3f54b7fec1d7ccac71cc6cf6835c6a46b7e5fb6c
Reviewed-on: https://go-review.googlesource.com/c/go/+/339397
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
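For illustration, a minimal Go sketch of the loop described above; the names gp, frameSP, and freedefer stand in for runtime internals and are not the actual code:

    // Hypothetical sketch: deferreturn now runs each argument-less
    // deferred closure for the current frame with an ordinary call,
    // rather than tail-calling it via jmpdefer.
    func deferreturn() {
            for d := gp._defer; d != nil && d.sp == frameSP(); d = gp._defer {
                    fn := d.fn
                    gp._defer = d.link // unlink before the call
                    freedefer(d)
                    fn() // plain call; no return-PC rewriting needed
            }
    }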
2021-07-30  [dev.typeparams] Revert "[dev.typeparams] runtime,cmd/compile,cmd/link: replace jmpdefer with a loop"  Austin Clements

This reverts CL 337652. I'm reverting CL 337651 and this builds on top of it.

Change-Id: I03ce363be44c2a3defff2e43e7b1aad83386820d
Reviewed-on: https://go-review.googlesource.com/c/go/+/338709
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2021-07-30  [dev.typeparams] runtime,cmd/compile,cmd/link: replace jmpdefer with a loop  Austin Clements

Currently, deferreturn runs deferred functions by backing up its return PC to the deferreturn call, and then effectively tail-calling the deferred function (via jmpdefer). The effect of this is that the deferred function appears to be called directly from the deferee, and when it returns, the deferee calls deferreturn again so it can run the next deferred function if necessary. This unusual flow control leads to a large number of special cases and complications all over the tool chain.

This used to be necessary because deferreturn copied the deferred function's argument frame directly into its caller's frame and then had to invoke that call as if it had been called from its caller's frame so it could access its arguments. But now that we've simplified defer processing so the runtime only deals with argument-less closures, this approach is no longer necessary.

This CL simplifies all of this by making deferreturn simply call deferred functions in a loop. This eliminates the need for jmpdefer, so we can delete a bunch of per-architecture assembly code.

This eliminates several special cases on Wasm, since it couldn't support these calling shenanigans directly and thus had to simulate the loop a different way. Now Wasm can largely work the way the other platforms do.

This eliminates the per-architecture Ginsnopdefer operation. On PPC64, this was necessary to reload the TOC pointer after the tail call (since TOC pointers in general make tail calls impossible). The tail call is gone, and in the case where we do force a jump to the deferreturn call when recovering from an open-coded defer, we go through gogo (via runtime.recovery), which handles the TOC. On other platforms, we needed a NOP so traceback didn't get confused by seeing the return to the CALL instruction, rather than the usual return to the instruction following the CALL instruction. Now we don't inject a return to the CALL instruction at all, so this NOP is also unnecessary.

The one potential effect of this is that deferreturn could now appear in stack traces from deferred functions. However, this could already happen from open-coded defers, so we've long since marked deferreturn as a "wrapper" so it gets elided not only from printed stack traces, but from runtime.Callers*.

Change-Id: Ie9f700cd3fb774f498c9edce363772a868407bf7
Reviewed-on: https://go-review.googlesource.com/c/go/+/337652
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2021-03-19  cmd/compile: add clobberdeadreg mode  Cherry Zhang

When the -clobberdeadreg flag is set, the compiler inserts code that clobbers integer registers at call sites. This may be helpful for debugging the register ABI.

Only implemented on AMD64 for now.

Change-Id: Ia203d3f891c30fd95d0103489056fe01d63a2899
Reviewed-on: https://go-review.googlesource.com/c/go/+/302809
Trust: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2021-03-09  cmd/compile: make ValAndOff.{Val,Off} return an int32  Alberto Donizetti

The ValAndOff type is a 64-bit integer holding a 32-bit value and a 32-bit offset in each half, but for historical reasons its Val and Off methods returned an int64. This was convenient when AuxInt was always an int64, but now that AuxInts are typed we can return int32 from Val and Off and get rid of several casts and now-unnecessary range checks.

This change:

- changes the Val and Off methods to return an int32 (from int64);
- adds Val64 and Off64 methods for convenience in the few remaining places (in the ssa.go files) where Val and Off are stored in int64 fields;
- deletes makeValAndOff64, renames makeValAndOff32 to makeValAndOff;
- deletes a few ValAndOff methods that are now unused;
- removes several validOff/validValAndOff checks that will always return true.

Passes:

    GOARCH=amd64 gotip build -toolexec 'toolstash -cmp' -a std
    GOARCH=386 gotip build -toolexec 'toolstash -cmp' -a std
    GOARCH=s390x gotip build -toolexec 'toolstash -cmp' -a std

(the three GOARCHs with SSA rules files impacted by the change).

Change-Id: I2abbbf42188c798631b94d3a55ca44256f140be7
Reviewed-on: https://go-review.googlesource.com/c/go/+/299149
Trust: Alberto Donizetti <alb.donizetti@gmail.com>
Trust: Keith Randall <khr@golang.org>
Run-TryBot: Alberto Donizetti <alb.donizetti@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
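For reference, a minimal sketch of the packing this describes (assuming value in the high half and offset in the low half; the real definitions live in cmd/compile/internal/ssa):

    // A ValAndOff packs two int32s into the one int64 AuxInt field.
    type ValAndOff int64

    func makeValAndOff(val, off int32) ValAndOff {
            return ValAndOff(int64(val)<<32 + int64(uint32(off)))
    }

    func (x ValAndOff) Val() int32 { return int32(int64(x) >> 32) }
    func (x ValAndOff) Off() int32 { return int32(int64(x)) }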
2021-03-02  cmd/compile: optimize single-precision floating point square root  fanzha02

Add a generic rule to rewrite the single-precision square root expression with one single-precision instruction. The optimization removes the two precision conversions between double precision and single precision.

On arm64, previously:

    FCVTSD F0, F0
    FSQRTD F0, F0
    FCVTDS F0, F0

optimized:

    FSQRTS S0, S0

This patch also adds a test case to check the correctness.

This patch refers to CL 241877, contributed by Alice Xu (dianhong.xu@arm.com).

Change-Id: I6de5d02281c693017ac4bd4c10963dd55989bd7e
Reviewed-on: https://go-review.googlesource.com/c/go/+/276873
Trust: fannie zhang <Fannie.Zhang@arm.com>
Run-TryBot: fannie zhang <Fannie.Zhang@arm.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
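The Go-level pattern the new generic rule matches looks like this (illustrative example, not taken from the CL):

    package main

    import (
            "fmt"
            "math"
    )

    // float32(math.Sqrt(float64(x))) is the single-precision square root
    // expression; it now compiles to one FSQRTS on arm64 instead of
    // convert-up, FSQRTD, convert-down.
    func sqrtf(x float32) float32 {
            return float32(math.Sqrt(float64(x)))
    }

    func main() { fmt.Println(sqrtf(2)) }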
2021-02-25  cmd/internal/obj: add Prog.SetFrom3{Reg,Const}  Josh Bleecher Snyder

These are the most common uses, and they reduce line noise.

I don't love adding new deprecated APIs, but since they're trivial wrappers, it'll be very easy to update them along with the rest.

No functional changes; passes toolstash-check.

Change-Id: I691a8175cfef9081180e463c63f326376af3f3a6
Reviewed-on: https://go-review.googlesource.com/c/go/+/296009
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2021-02-25  cmd/compile: automate resultInArg0 register checks  Josh Bleecher Snyder

No functional changes; passes toolstash-check.

No measurable performance changes.

Change-Id: I2629f73d4a3cc56d80f512f33cf57cf41d8f15d3
Reviewed-on: https://go-review.googlesource.com/c/go/+/296010
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2020-12-23  [dev.regabi] cmd/compile: split out package ssagen [generated]  Russ Cox

[git-generate]

cd src/cmd/compile/internal/gc
rf '
	# maxOpenDefers is declared in ssa.go but used only by walk.
	mv maxOpenDefers walk.go

	# gc.Arch -> ssagen.Arch
	# It is not as nice but will do for now.
	mv Arch ArchInfo
	mv thearch Arch
	mv Arch ArchInfo arch.go

	# Pull dwarf out of pgen.go.
	mv debuginfo declPos createDwarfVars preInliningDcls \
		createSimpleVars createSimpleVar \
		createComplexVars createComplexVar \
		dwarf.go

	# Pull high-level compilation out of pgen.go,
	# leaving only the SSA code.
	mv compilequeue funccompile compile compilenow \
		compileFunctions isInlinableButNotInlined \
		initLSym \
		compile.go

	mv BoundsCheckFunc GCWriteBarrierReg ssa.go
	mv largeStack largeStackFrames CheckLargeStacks pgen.go

	# All that is left in dcl.go is the nowritebarrierrecCheck
	mv dcl.go nowb.go

	# Export API and unexport non-API.
	mv initssaconfig InitConfig
	mv isIntrinsicCall IsIntrinsicCall
	mv ssaDumpInline DumpInline
	mv initSSATables InitTables
	mv initSSAEnv InitEnv
	mv compileSSA Compile
	mv stackOffset StackOffset
	mv canSSAType TypeOK
	mv SSAGenState State
	mv FwdRefAux fwdRefAux
	mv cgoSymABIs CgoSymABIs
	mv readSymABIs ReadSymABIs
	mv initLSym InitLSym
	mv useABIWrapGen symabiDefs CgoSymABIs ReadSymABIs InitLSym selectLSym makeABIWrapper setupTextLSym abi.go

	mv arch.go abi.go nowb.go phi.go pgen.go pgen_test.go ssa.go cmd/compile/internal/ssagen
'
rm go.go gsubr.go

Change-Id: I47fad6cbf1d1e583fd9139003a08401d7cd048a1
Reviewed-on: https://go-review.googlesource.com/c/go/+/279476
Trust: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2020-12-23  [dev.regabi] cmd/compile: split out package objw [generated]  Russ Cox

Object file writing routines are used not just at the end of the compilation but also during static data layout in walk. Split them into their own package.

[git-generate]

cd src/cmd/compile/internal/gc

rf '
	# Move bit vector to new package bitvec
	mv bvec.n bvec.N
	mv bvec.b bvec.B
	mv bvec BitVec
	mv bvalloc New
	mv bvbulkalloc NewBulk
	mv bulkBvec.next bulkBvec.Next
	mv bulkBvec Bulk
	mv H0 h0
	mv Hp hp

	# Leave bvecSet and bitmap hashes behind - not needed as broadly.
	mv bvecSet.extractUniqe bvecSet.extractUnique
	mv h0 bvecSet bvecSet.grow bvecSet.add \
		bvecSet.extractUnique hashbitmap bvset.go

	mv bv.go cmd/compile/internal/bitvec

	ex . ../arm ../arm64 ../mips ../mips64 ../ppc64 ../s390x ../riscv64 {
		import "cmd/internal/obj"

		var a *obj.Addr
		var i int64
		Addrconst(a, i) -> a.SetConst(i)

		var p, to *obj.Prog
		Patch(p, to) -> p.To.SetTarget(to)
	}
	rm Addrconst Patch

	# Move object-writing API to new package objw
	mv duint8 Objw_Uint8
	mv duint16 Objw_Uint16
	mv duint32 Objw_Uint32
	mv duintptr Objw_Uintptr
	mv duintxx Objw_UintN
	mv dsymptr Objw_SymPtr
	mv dsymptrOff Objw_SymPtrOff
	mv dsymptrWeakOff Objw_SymPtrWeakOff
	mv ggloblsym Objw_Global
	mv dbvec Objw_BitVec

	mv newProgs NewProgs
	mv Progs.clearp Progs.Clear
	mv Progs.settext Progs.SetText
	mv Progs.next Progs.Next
	mv Progs.pc Progs.PC
	mv Progs.pos Progs.Pos
	mv Progs.curfn Progs.CurFunc
	mv Progs.progcache Progs.Cache
	mv Progs.cacheidx Progs.CacheIndex
	mv Progs.nextLive Progs.NextLive
	mv Progs.prevLive Progs.PrevLive
	mv Progs.Appendpp Progs.Append
	mv LivenessIndex.stackMapIndex LivenessIndex.StackMapIndex
	mv LivenessIndex.isUnsafePoint LivenessIndex.IsUnsafePoint

	mv Objw_Uint8 Objw_Uint16 Objw_Uint32 Objw_Uintptr Objw_UintN \
		Objw_SymPtr Objw_SymPtrOff Objw_SymPtrWeakOff Objw_Global \
		Objw_BitVec \
		objw.go

	mv sharedProgArray NewProgs Progs \
		LivenessIndex StackMapDontCare \
		LivenessDontCare LivenessIndex.StackMapValid \
		Progs.NewProg Progs.Flush Progs.Free Progs.Prog Progs.Clear Progs.Append Progs.SetText \
		prog.go

	mv prog.go objw.go cmd/compile/internal/objw

	# Move ggloblnod to obj with the rest of the non-objw higher-level writing.
	mv ggloblnod obj.go
'

cd ../objw
rf '
	mv Objw_Uint8 Uint8
	mv Objw_Uint16 Uint16
	mv Objw_Uint32 Uint32
	mv Objw_Uintptr Uintptr
	mv Objw_UintN UintN
	mv Objw_SymPtr SymPtr
	mv Objw_SymPtrOff SymPtrOff
	mv Objw_SymPtrWeakOff SymPtrWeakOff
	mv Objw_Global Global
	mv Objw_BitVec BitVec
'

Change-Id: I2b87085aa788564fb322e9c55bddd73347b4d5fd
Reviewed-on: https://go-review.googlesource.com/c/go/+/279310
Trust: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2020-11-25  [dev.regabi] cmd/compile: introduce cmd/compile/internal/base [generated]  Russ Cox

Move Flag, Debug, Ctxt, Exit, and error messages to new package cmd/compile/internal/base. These are the core functionality that everything in gc uses and which otherwise prevent splitting any other code out of gc into different packages.

A minor milestone: the compiler source code no longer contains the string "yy".

[git-generate]

cd src/cmd/compile/internal/gc

rf '
	mv atExit AtExit
	mv Ctxt atExitFuncs AtExit Exit base.go
	mv lineno Pos
	mv linestr FmtPos
	mv flusherrors FlushErrors
	mv yyerror Errorf
	mv yyerrorl ErrorfAt
	mv yyerrorv ErrorfVers
	mv noder.yyerrorpos noder.errorAt
	mv Warnl WarnfAt
	mv errorexit ErrorExit
	mv base.go debug.go flag.go print.go cmd/compile/internal/base
'

: # update comments
sed -i '' 's/yyerrorl/ErrorfAt/g; s/yyerror/Errorf/g' *.go

: # bootstrap.go is not built by default so invisible to rf
sed -i '' 's/Fatalf/base.Fatalf/' bootstrap.go
goimports -w bootstrap.go

: # update cmd/dist to add internal/base
cd ../../../dist
sed -i '' '/internal.amd64/a\
	"cmd/compile/internal/base",
' buildtool.go
gofmt -w buildtool.go

Change-Id: I59903c7084222d6eaee38823fd222159ba24a31a
Reviewed-on: https://go-review.googlesource.com/c/go/+/272250
Trust: Russ Cox <rsc@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2020-11-25  [dev.regabi] cmd/compile: clean up debug flag (-d) handling [generated]  Russ Cox

The debug table is not as haphazard as flags, but there are still a few mismatches between command-line names and variable names. This CL moves them all into a consistent home (var Debug, like var Flag).

Code updated automatically using the rf command below. A followup CL will make a few manual cleanups, leaving this CL completely automated and easier to regenerate during merge conflicts.

[git-generate]

cd src/cmd/compile/internal/gc
rf '
	add main.go var Debug struct{}
	mv Debug_append Debug.Append
	mv Debug_checkptr Debug.Checkptr
	mv Debug_closure Debug.Closure
	mv Debug_compilelater Debug.CompileLater
	mv disable_checknil Debug.DisableNil
	mv debug_dclstack Debug.DclStack
	mv Debug_gcprog Debug.GCProg
	mv Debug_libfuzzer Debug.Libfuzzer
	mv Debug_checknil Debug.Nil
	mv Debug_panic Debug.Panic
	mv Debug_slice Debug.Slice
	mv Debug_typeassert Debug.TypeAssert
	mv Debug_wb Debug.WB
	mv Debug_export Debug.Export
	mv Debug_pctab Debug.PCTab
	mv Debug_locationlist Debug.LocationLists
	mv Debug_typecheckinl Debug.TypecheckInl
	mv Debug_gendwarfinl Debug.DwarfInl
	mv Debug_softfloat Debug.SoftFloat
	mv Debug_defer Debug.Defer
	mv Debug_dumpptrs Debug.DumpPtrs

	mv flag.go:/parse.-d/-1,/unknown.debug/+2 parseDebug

	mv debugtab Debug parseDebug \
		debugHelpHeader debugHelpFooter \
		debug.go

	# Remove //go:generate line copied from main.go
	rm debug.go:/go:generate/-+
'

Change-Id: I625761ca5659be4052f7161a83baa00df75cca91
Reviewed-on: https://go-review.googlesource.com/c/go/+/272246
Trust: Russ Cox <rsc@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2020-11-06  cmd/compile: optimize shift pairs and masks on s390x  Michael Munday

Optimize combinations of left and right shifts by a constant value into a 'rotate then insert selected bits [into zero]' instruction. Use the same instruction for contiguous masks since it has some benefits over 'and immediate' (not restricted to 32-bits, does not overwrite source register).

To keep the complexity of this change under control I've only implemented 64 bit operations for now.

There are a lot more optimizations that can be done with this instruction family. However, since their function overlaps with other instructions we need to be somewhat careful not to break existing optimization rules by creating optimization dead ends. This is particularly true of the load/store merging rules which contain lots of zero extensions and shifts.

This CL does interfere with the store merging rules when an operand is shifted left before it is stored:

    binary.BigEndian.PutUint64(b, x << 1)

This is unfortunate but it's not critical and somewhat complex so I plan to fix that in a follow up CL.

    file       before     after      Δ        %
    addr2line  4117446    4117282    -164     -0.004%
    api        4945184    4942752    -2432    -0.049%
    asm        4998079    4991891    -6188    -0.124%
    buildid    2685158    2684074    -1084    -0.040%
    cgo        4553732    4553394    -338     -0.007%
    compile    19294446   19245070   -49376   -0.256%
    cover      4897105    4891319    -5786    -0.118%
    dist       3544389    3542785    -1604    -0.045%
    doc        3926795    3927617    +822     +0.021%
    fix        3302958    3293868    -9090    -0.275%
    link       6546274    6543456    -2818    -0.043%
    nm         4102021    4100825    -1196    -0.029%
    objdump    4542431    4548483    +6052    +0.133%
    pack       2482465    2416389    -66076   -2.662%
    pprof      13366541   13363915   -2626    -0.020%
    test2json  2829007    2761515    -67492   -2.386%
    trace      10216164   10219684   +3520    +0.034%
    vet        6773956    6773572    -384     -0.006%
    total      107124151  106917891  -206260  -0.193%

Change-Id: I7591cce41e06867ba10a745daae9333513062746
Reviewed-on: https://go-review.googlesource.com/c/go/+/233317
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Trust: Michael Munday <mike.munday@ibm.com>
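Illustrative Go functions of the kind these rules target (hypothetical examples; the exact instruction selection is up to the compiler):

    // A pair of constant shifts, or a shift combined with a contiguous
    // mask, can now lower to a single 'rotate then insert selected bits
    // into zero' instruction instead of separate shift/and instructions.
    func pair(x uint64) uint64 { return (x << 16) >> 8 }
    func mask(x uint64) uint64 { return (x >> 4) & 0xffff }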
2020-10-29  cmd/internal/obj/arm64: add CASx/CASPx instructions  fanzha02

This patch adds support for CASx and CASPx atomic instructions.

    go syntax                                 gnu syntax
    CASD Rs, (Rn|RSP), Rt                  => cas Xs, Xt, (Xn|SP)
    CASALW Rs, (Rn|RSP), Rt                => casal Ws, Wt, (Xn|SP)
    CASPD (Rs, Rs+1), (Rn|RSP), (Rt, Rt+1) => casp Xs, Xs+1, Xt, Xt+1, (Xn|SP)
    CASPW (Rs, Rs+1), (Rn|RSP), (Rt, Rt+1) => casp Ws, Ws+1, Wt, Wt+1, (Xn|SP)

This patch changes the type of prog.RestArgs from "[]Addr" to "[]struct{Addr, Pos}", where Pos is an enum indicating the position of the operand.

This patch also adds test cases.

Change-Id: Ib971cfda7890b7aa895d17bab22dea326c7fcaa4
Reviewed-on: https://go-review.googlesource.com/c/go/+/233277
Trust: fannie zhang <Fannie.Zhang@arm.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2020-10-23  cmd/compile: intrinsify runtime/internal/atomic.{And,Or} on S390X  Michael Pratt

This is a simplification of LANfloor/LAOfloor since we have a whole word.

Change-Id: I791641fb4068cad3f73660ce51699ed4653ae0e6
Reviewed-on: https://go-review.googlesource.com/c/go/+/263151
Run-TryBot: Michael Pratt <mpratt@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2020-08-31  cmd/compile,cmd/asm: simplify recording of branch targets, take 2  Keith Randall

We currently use two fields to store the targets of branches. Some phases use p.To.Val, some use p.Pcond. Rewrite so that every branch instruction uses p.To.Val. p.From.Val is also used in rare instances.

Introduce a Pool link for use by arm/arm64, instead of repurposing Pcond.

This is a cleanup CL in preparation for some stack frame CLs.

Change-Id: If8239177e4b1ea2bccd0608eb39553d23210d405
Reviewed-on: https://go-review.googlesource.com/c/go/+/251437
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2020-08-28  Revert "cmd/compile,cmd/asm: simplify recording of branch targets"  Keith Randall

This reverts CL 243318.

Reason for revert: Seems to be crashing some builders.

Change-Id: I2ffc59bc5535be60b884b281c8d0eff4647dc756
Reviewed-on: https://go-review.googlesource.com/c/go/+/251169
Reviewed-by: Bryan C. Mills <bcmills@google.com>
2020-08-27  cmd/compile,cmd/asm: simplify recording of branch targets  Keith Randall

We currently use two fields to store the targets of branches. Some phases use p.To.Val, some use p.Pcond. Rewrite so that every branch instruction uses p.To.Val. p.From.Val is also used in rare instances.

Introduce a Pool link for use by arm/arm64, instead of repurposing Pcond.

This is a cleanup CL in preparation for some stack frame CLs.

Change-Id: I9055bf0a1d986aff421e47951a1dedc301c846f8
Reviewed-on: https://go-review.googlesource.com/c/go/+/243318
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2020-04-08  cmd/compile: allow floating point Ops to produce flags on s390x  Ruixin(Peter) Bao

On s390x, some floating point arithmetic instructions (FSUB, FADD) generate a flag. This patch allows those related SSA ops to return a tuple, where the second argument of the tuple is the generated flag. We can use the flag and remove the subsequent comparison instruction (e.g. LTDBR).

This CL also reduces the .text section for the math.test binary by 0.4KB.

Benchmarks:

    name                    old time/op  new time/op  delta
    Acos-18                 12.1ns ± 0%  12.1ns ± 0%     ~     (all equal)
    Acosh-18                18.5ns ± 0%  18.5ns ± 0%     ~     (all equal)
    Asin-18                 13.1ns ± 0%  13.1ns ± 0%     ~     (all equal)
    Asinh-18                19.4ns ± 0%  19.5ns ± 1%     ~     (p=0.444 n=5+5)
    Atan-18                 10.0ns ± 0%  10.0ns ± 0%     ~     (all equal)
    Atanh-18                19.1ns ± 1%  19.2ns ± 2%     ~     (p=0.841 n=5+5)
    Atan2-18                16.4ns ± 0%  16.4ns ± 0%     ~     (all equal)
    Cbrt-18                 14.8ns ± 0%  14.8ns ± 0%     ~     (all equal)
    Ceil-18                 0.78ns ± 0%  0.78ns ± 0%     ~     (all equal)
    Copysign-18             0.80ns ± 0%  0.80ns ± 0%     ~     (all equal)
    Cos-18                  7.19ns ± 0%  7.19ns ± 0%     ~     (p=0.556 n=4+5)
    Cosh-18                 12.4ns ± 0%  12.4ns ± 0%     ~     (all equal)
    Erf-18                  10.8ns ± 0%  10.8ns ± 0%     ~     (all equal)
    Erfc-18                 11.0ns ± 0%  11.0ns ± 0%     ~     (all equal)
    Erfinv-18               23.0ns ±16%  26.8ns ± 1%  +16.90%  (p=0.008 n=5+5)
    Erfcinv-18              23.3ns ±15%  26.1ns ± 7%     ~     (p=0.087 n=5+5)
    Exp-18                  8.67ns ± 0%  8.67ns ± 0%     ~     (p=1.000 n=4+4)
    ExpGo-18                50.8ns ± 3%  52.4ns ± 2%     ~     (p=0.063 n=5+5)
    Expm1-18                9.49ns ± 1%  9.47ns ± 0%     ~     (p=1.000 n=5+5)
    Exp2-18                 52.7ns ± 1%  50.5ns ± 3%   -4.10%  (p=0.024 n=5+5)
    Exp2Go-18               50.6ns ± 1%  48.4ns ± 3%   -4.39%  (p=0.008 n=5+5)
    Abs-18                  0.67ns ± 0%  0.67ns ± 0%     ~     (p=0.444 n=5+5)
    Dim-18                  1.02ns ± 0%  1.03ns ± 0%   +0.98%  (p=0.008 n=5+5)
    Floor-18                0.78ns ± 0%  0.78ns ± 0%     ~     (all equal)
    Max-18                  3.09ns ± 1%  3.05ns ± 0%   -1.42%  (p=0.008 n=5+5)
    Min-18                  3.32ns ± 1%  3.30ns ± 0%   -0.72%  (p=0.016 n=5+4)
    Mod-18                  62.3ns ± 1%  65.8ns ± 3%   +5.55%  (p=0.008 n=5+5)
    Frexp-18                5.05ns ± 2%  4.98ns ± 0%     ~     (p=0.683 n=5+5)
    Gamma-18                24.4ns ± 0%  24.1ns ± 0%   -1.23%  (p=0.008 n=5+5)
    Hypot-18                10.3ns ± 0%  10.3ns ± 0%     ~     (all equal)
    HypotGo-18              10.2ns ± 0%  10.2ns ± 0%     ~     (all equal)
    Ilogb-18                3.56ns ± 1%  3.54ns ± 0%     ~     (p=0.595 n=5+5)
    J0-18                    113ns ± 0%   108ns ± 1%   -4.42%  (p=0.016 n=4+5)
    J1-18                    115ns ± 0%   109ns ± 1%   -4.87%  (p=0.016 n=4+5)
    Jn-18                    240ns ± 0%   230ns ± 2%   -4.41%  (p=0.008 n=5+5)
    Ldexp-18                6.19ns ± 0%  6.19ns ± 0%     ~     (p=0.444 n=5+5)
    Lgamma-18               32.2ns ± 0%  32.2ns ± 0%     ~     (all equal)
    Log-18                  13.1ns ± 0%  13.1ns ± 0%     ~     (all equal)
    Logb-18                 4.23ns ± 0%  4.22ns ± 0%     ~     (p=0.444 n=5+5)
    Log1p-18                12.7ns ± 0%  12.7ns ± 0%     ~     (all equal)
    Log10-18                18.1ns ± 0%  18.2ns ± 0%     ~     (p=0.167 n=5+5)
    Log2-18                 14.0ns ± 0%  14.0ns ± 0%     ~     (all equal)
    Modf-18                 10.4ns ± 0%  10.5ns ± 0%   +0.96%  (p=0.016 n=4+5)
    Nextafter32-18          11.3ns ± 0%  11.3ns ± 0%     ~     (all equal)
    Nextafter64-18          4.01ns ± 1%  3.97ns ± 0%     ~     (p=0.333 n=5+4)
    PowInt-18               32.7ns ± 0%  32.7ns ± 0%     ~     (all equal)
    PowFrac-18              33.2ns ± 0%  33.1ns ± 0%     ~     (p=0.095 n=4+5)
    Pow10Pos-18             1.58ns ± 0%  1.58ns ± 0%     ~     (all equal)
    Pow10Neg-18             5.81ns ± 0%  5.81ns ± 0%     ~     (all equal)
    Round-18                0.78ns ± 0%  0.78ns ± 0%     ~     (all equal)
    RoundToEven-18          0.78ns ± 0%  0.78ns ± 0%     ~     (all equal)
    Remainder-18            40.6ns ± 0%  40.7ns ± 0%     ~     (p=0.238 n=5+4)
    Signbit-18              1.57ns ± 0%  1.57ns ± 0%     ~     (all equal)
    Sin-18                  6.75ns ± 0%  6.74ns ± 0%     ~     (p=0.333 n=5+4)
    Sincos-18               29.5ns ± 0%  29.5ns ± 0%     ~     (all equal)
    Sinh-18                 14.4ns ± 0%  14.4ns ± 0%     ~     (all equal)
    SqrtIndirect-18         3.97ns ± 0%  4.15ns ± 0%   +4.59%  (p=0.008 n=5+5)
    SqrtLatency-18          8.01ns ± 0%  8.01ns ± 0%     ~     (all equal)
    SqrtIndirectLatency-18  11.6ns ± 0%  11.6ns ± 0%     ~     (all equal)
    SqrtGoLatency-18        44.7ns ± 0%  45.0ns ± 0%   +0.67%  (p=0.008 n=5+5)
    SqrtPrime-18            1.26µs ± 0%  1.27µs ± 0%   +0.63%  (p=0.029 n=4+4)
    Tan-18                  11.1ns ± 0%  11.1ns ± 0%     ~     (all equal)
    Tanh-18                 15.8ns ± 0%  15.8ns ± 0%     ~     (all equal)
    Trunc-18                0.78ns ± 0%  0.78ns ± 0%     ~     (all equal)
    Y0-18                    113ns ± 2%   108ns ± 3%   -5.11%  (p=0.008 n=5+5)
    Y1-18                    112ns ± 3%   107ns ± 0%   -4.29%  (p=0.000 n=5+4)
    Yn-18                    229ns ± 0%   220ns ± 1%   -3.76%  (p=0.016 n=4+5)
    Float64bits-18          1.09ns ± 0%  1.09ns ± 0%     ~     (all equal)
    Float64frombits-18      0.55ns ± 0%  0.55ns ± 0%     ~     (all equal)
    Float32bits-18          0.96ns ±16%  0.86ns ± 0%     ~     (p=0.563 n=5+5)
    Float32frombits-18      1.03ns ±28%  0.84ns ± 0%     ~     (p=0.167 n=5+5)
    FMA-18                  1.60ns ± 0%  1.60ns ± 0%     ~     (all equal)
    [Geo mean]              10.0ns       9.9ns         -0.41%

Change-Id: Ief7e63ea5a8ba404b0a4696e12b9b7e0b05a9a03
Reviewed-on: https://go-review.googlesource.com/c/go/+/209160
Reviewed-by: Michael Munday <mike.munday@ibm.com>
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2020-03-25  cmd/compile: use load and test instructions on s390x  Ruixin(Peter) Bao

The load and test instructions compare the given value against zero and will produce a condition code indicating one of the following scenarios:

    0: Result is zero
    1: Result is less than zero
    2: Result is greater than zero
    3: Result is not a number (NaN)

The instruction can be used to simplify floating point comparisons against zero, which can enable further optimizations.

This CL also reduces the size of the .text section of the math.test binary by around 0.7 KB (in hexadecimal, from 1358f0 to 135620).

Change-Id: I33cb714f0c6feebac7a1c46dfcc735e7daceff9c
Reviewed-on: https://go-review.googlesource.com/c/go/+/209159
Reviewed-by: Michael Munday <mike.munday@ibm.com>
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
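For example, a floating point comparison against zero of the kind that benefits (illustrative only):

    // x < 0 only needs the condition code produced by a load-and-test
    // of x; no separate comparison instruction is required.
    func negative(x float64) bool { return x < 0 }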
2020-02-29  cmd/compile: lower float to uint conversions on s390x  Ruixin(Peter) Bao

Add rules for lowering float <-> unsigned int on s390x.

During compilation, the Cvt64Uto64F rule triggers around 80 times, the Cvt64Fto64U rule triggers around 20 times, and the Cvt64Uto32F rule triggers around 5 times.

Change-Id: If4c9d128b9132fce8c0bea9abc09cb43a5df7989
Reviewed-on: https://go-review.googlesource.com/c/go/+/209177
Reviewed-by: Michael Munday <mike.munday@ibm.com>
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
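In Go terms, the conversions covered by the new lowering rules are (illustrative):

    func toFloat64(u uint64) float64 { return float64(u) } // Cvt64Uto64F
    func toUint64(f float64) uint64  { return uint64(f) }  // Cvt64Fto64U
    func toFloat32(u uint64) float32 { return float32(u) } // Cvt64Uto32F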
2019-11-11  cmd/compile, runtime: intrinsify atomic And8 and Or8 on s390x  Michael Munday

Intrinsify these functions to match other platforms. Update the sequence of instructions used in the assembly implementations to match the intrinsics.

Also, add a micro benchmark so we can more easily measure the performance of these two functions:

    name            old time/op  new time/op  delta
    And8-8          5.33ns ± 7%  2.55ns ± 8%  -52.12%  (p=0.000 n=20+20)
    And8Parallel-8  7.39ns ± 5%  3.74ns ± 4%  -49.34%  (p=0.000 n=20+20)
    Or8-8           4.84ns ±15%  2.64ns ±11%  -45.50%  (p=0.000 n=20+20)
    Or8Parallel-8   7.27ns ± 3%  3.84ns ± 4%  -47.10%  (p=0.000 n=19+20)

By using a 'rotate then xor selected bits' instruction combined with either a 'load and and' or a 'load and or' instruction we can implement And8 and Or8 with far fewer instructions. Replacing 'compare and swap' with atomic instructions may also improve performance when there is contention.

Change-Id: I28bb8032052b73ae8ccdf6e4c612d2877085fa01
Reviewed-on: https://go-review.googlesource.com/c/go/+/204277
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
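The semantics being intrinsified, shown as a non-atomic reference sketch (the real runtime/internal/atomic functions perform these read-modify-write updates atomically):

    // And8: *addr &= v, atomically. Or8: *addr |= v, atomically.
    func And8(addr *uint8, v uint8) { *addr &= v }
    func Or8(addr *uint8, v uint8)  { *addr |= v }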
2019-11-10  cmd/compile: enable nil check logging for other architectures.  David Chase

Change-Id: If82ebd9cd6470863eb5de9e031e7905a66218857
Reviewed-on: https://go-review.googlesource.com/c/go/+/204159
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-10-29  cmd/compile: intrinsics for runtime/internal/atomic.Store8  Austin Clements

For #10958, #24543, but makes sense on its own.

Change-Id: I2a87dab66b82a1863e4b6512b1f8def51463ce2a
Reviewed-on: https://go-review.googlesource.com/c/go/+/203284
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-10-28  cmd/compile: delete ZeroAuto  Cherry Zhang

ZeroAuto was used with the ambiguously live logic. The ambiguously live logic is removed as we switched to stack objects. It is now never called. Remove.

Change-Id: If4cdd7fed5297f8ab591cc392a76c80f57820856
Reviewed-on: https://go-review.googlesource.com/c/go/+/203538
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-10-09  cmd/asm: add s390x branch-on-count instructions  Michael Munday

The branch-on-count instructions on s390x decrement the input register and then compare its value to 0. If not equal the branch is taken.

These instructions are useful for implementing loops with a set number of iterations (which might be in a register).

For example, this for loop:

    for i := 0; i < n; i++ {
            ... // i is not used or modified in the loop
    }

Could be implemented using this assembly:

            MOVD Rn, Ri
    loop:
            ...
            BRCTG Ri, loop

Note that i will count down from n in the assembly whereas in the original for loop it counted up to n which is why we can't use i in the loop.

These instructions will only be used in hand-written codegen and assembly for now since SSA blocks cannot currently modify values. We could look into this in the future though.

Change-Id: Iaab93b8aa2699513b825439b8ea20d8fe2ea1ee6
Reviewed-on: https://go-review.googlesource.com/c/go/+/199977
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-10-08  cmd/compile: add SSA rules for s390x compare-and-branch instructions  Michael Munday

This commit adds SSA rules for the s390x combined compare-and-branch instructions. These have a shorter encoding than separate compare and branch instructions and they also don't clobber the condition code (a.k.a. flag register) reducing pressure on the flag allocator.

I have deleted the 'loop_test.go' file and replaced it with a new codegen test which performs a wider range of checks.

Object sizes from compilebench:

    name       old object-bytes  new object-bytes  delta
    Template   562kB ± 0%        561kB ± 0%        -0.28%  (p=0.000 n=10+10)
    Unicode    217kB ± 0%        217kB ± 0%        -0.17%  (p=0.000 n=10+10)
    GoTypes    2.03MB ± 0%       2.02MB ± 0%       -0.59%  (p=0.000 n=10+10)
    Compiler   8.16MB ± 0%       8.11MB ± 0%       -0.62%  (p=0.000 n=10+10)
    SSA        27.4MB ± 0%       27.0MB ± 0%       -1.45%  (p=0.000 n=10+10)
    Flate      356kB ± 0%        356kB ± 0%        -0.12%  (p=0.000 n=10+10)
    GoParser   438kB ± 0%        436kB ± 0%        -0.51%  (p=0.000 n=10+10)
    Reflect    1.37MB ± 0%       1.37MB ± 0%       -0.42%  (p=0.000 n=10+10)
    Tar        485kB ± 0%        483kB ± 0%        -0.39%  (p=0.000 n=10+10)
    XML        630kB ± 0%        621kB ± 0%        -1.45%  (p=0.000 n=10+10)
    [Geo mean] 1.14MB            1.13MB            -0.60%

    name       old text-bytes  new text-bytes  delta
    HelloSize  763kB ± 0%      754kB ± 0%      -1.30%  (p=0.000 n=10+10)
    CmdGoSize  10.7MB ± 0%     10.6MB ± 0%     -0.91%  (p=0.000 n=10+10)
    [Geo mean] 2.86MB          2.82MB          -1.10%

Change-Id: Ibca55d9c0aa1254aee69433731ab5d26a43a7c18
Reviewed-on: https://go-review.googlesource.com/c/go/+/198037
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-10-02  cmd/compile/internal/gc: intrinsify mulWW on s390x  Ruixin(Peter) Bao

SSA rules have already been added previously to intrinsify Mul/Mul64 on s390x. In this CL, we want to let mulWW use that SSA rule as well.

Also removed an extra line for formatting.

Benchmarks:

    QuoRem-18              3.59µs ±15%  2.94µs ± 3%  -18.06%  (p=0.000 n=8+8)
    ModSqrt225_Tonelli-18   806µs ± 0%   800µs ± 0%   -0.85%  (p=0.000 n=7+8)
    ModSqrt225_3Mod4-18     245µs ± 1%   243µs ± 0%   -0.81%  (p=0.001 n=8+8)
    ModSqrt231_Tonelli-18   837µs ± 0%   834µs ± 1%   -0.36%  (p=0.028 n=8+8)
    ModSqrt231_5Mod8-18     282µs ± 0%   280µs ± 0%   -0.76%  (p=0.000 n=8+8)
    Sqrt-18                45.8µs ± 2%  38.6µs ± 0%  -15.63%  (p=0.000 n=8+8)
    IntSqr/1-18            19.1ns ± 0%  13.1ns ± 0%  -31.41%  (p=0.000 n=8+8)
    IntSqr/2-18            48.3ns ± 2%  48.2ns ± 0%     ~     (p=0.094 n=8+8)
    IntSqr/3-18            70.5ns ± 1%  70.7ns ± 0%     ~     (p=0.428 n=8+8)
    IntSqr/5-18             119ns ± 1%   118ns ± 0%   -1.02%  (p=0.000 n=7+8)
    IntSqr/8-18             215ns ± 1%   215ns ± 0%     ~     (p=0.320 n=8+7)
    IntSqr/10-18            302ns ± 1%   301ns ± 0%     ~     (p=0.148 n=8+7)
    IntSqr/20-18            952ns ± 1%   807ns ± 0%  -15.28%  (p=0.000 n=8+8)
    IntSqr/30-18           1.74µs ± 0%  1.53µs ± 0%  -11.93%  (p=0.000 n=8+8)
    IntSqr/50-18           3.91µs ± 0%  3.57µs ± 0%   -8.64%  (p=0.000 n=7+8)
    IntSqr/80-18           8.66µs ± 1%  8.11µs ± 0%   -6.39%  (p=0.000 n=8+8)
    IntSqr/100-18          12.8µs ± 0%  12.2µs ± 0%   -5.19%  (p=0.000 n=8+8)
    IntSqr/200-18          46.0µs ± 0%  44.5µs ± 0%   -3.06%  (p=0.000 n=8+8)
    IntSqr/300-18          81.4µs ± 0%  78.4µs ± 0%   -3.71%  (p=0.000 n=7+8)
    IntSqr/500-18           212µs ± 1%   206µs ± 0%   -2.66%  (p=0.000 n=8+8)
    IntSqr/800-18           419µs ± 1%   406µs ± 0%   -3.07%  (p=0.000 n=8+8)
    IntSqr/1000-18          635µs ± 0%   621µs ± 0%   -2.13%  (p=0.000 n=8+8)

Change-Id: Ib097857186932b902601ab087cbeff3fc9555c3e
Reviewed-on: https://go-review.googlesource.com/c/go/+/197639
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-10-02  cmd/compile: allow multiple SSA block control values  Michael Munday

Control values are used to choose which successor of a block is jumped to. Typically a control value takes the form of a 'flags' value that represents the result of a comparison. Some architectures however use a variable in a register as a control value.

Up until now we have managed with a single control value per block. However some architectures (e.g. s390x and riscv64) have combined compare-and-branch instructions that take two variables in registers as parameters. To generate these instructions we need to support 2 control values per block.

This CL allows up to 2 control values to be used in a block in order to support the addition of compare-and-branch instructions. I have implemented s390x compare-and-branch instructions in a different CL.

Passes toolstash-check -all.

Results of compilebench:

    name                      old time/op       new time/op       delta
    Template                  208ms ± 1%        209ms ± 1%           ~     (p=0.289 n=20+20)
    Unicode                   83.7ms ± 1%       83.3ms ± 3%       -0.49%   (p=0.017 n=18+18)
    GoTypes                   748ms ± 1%        748ms ± 0%           ~     (p=0.460 n=20+18)
    Compiler                  3.47s ± 1%        3.48s ± 1%           ~     (p=0.070 n=19+18)
    SSA                       11.5s ± 1%        11.7s ± 1%        +1.64%   (p=0.000 n=19+18)
    Flate                     130ms ± 1%        130ms ± 1%           ~     (p=0.588 n=19+20)
    GoParser                  160ms ± 1%        161ms ± 1%           ~     (p=0.211 n=20+20)
    Reflect                   465ms ± 1%        467ms ± 1%        +0.42%   (p=0.007 n=20+20)
    Tar                       184ms ± 1%        185ms ± 2%           ~     (p=0.087 n=18+20)
    XML                       253ms ± 1%        253ms ± 1%           ~     (p=0.377 n=20+18)
    LinkCompiler              769ms ± 2%        774ms ± 2%           ~     (p=0.070 n=19+19)
    ExternalLinkCompiler      3.59s ±11%        3.68s ± 6%           ~     (p=0.072 n=20+20)
    LinkWithoutDebugCompiler  446ms ± 5%        454ms ± 3%        +1.79%   (p=0.002 n=19+20)
    StdCmd                    26.0s ± 2%        26.0s ± 2%           ~     (p=0.799 n=20+20)

    name                      old user-time/op  new user-time/op  delta
    Template                  238ms ± 5%        240ms ± 5%           ~     (p=0.142 n=20+20)
    Unicode                   105ms ±11%        106ms ±10%           ~     (p=0.512 n=20+20)
    GoTypes                   876ms ± 2%        873ms ± 4%           ~     (p=0.647 n=20+19)
    Compiler                  4.17s ± 2%        4.19s ± 1%           ~     (p=0.093 n=20+18)
    SSA                       13.9s ± 1%        14.1s ± 1%        +1.45%   (p=0.000 n=18+18)
    Flate                     145ms ±13%        146ms ± 5%           ~     (p=0.851 n=20+18)
    GoParser                  185ms ± 5%        188ms ± 7%           ~     (p=0.174 n=20+20)
    Reflect                   534ms ± 3%        538ms ± 2%           ~     (p=0.105 n=20+18)
    Tar                       215ms ± 4%        211ms ± 9%           ~     (p=0.079 n=19+20)
    XML                       295ms ± 6%        295ms ± 5%           ~     (p=0.968 n=20+20)
    LinkCompiler              832ms ± 4%        837ms ± 7%           ~     (p=0.707 n=17+20)
    ExternalLinkCompiler      1.58s ± 8%        1.60s ± 4%           ~     (p=0.296 n=20+19)
    LinkWithoutDebugCompiler  478ms ±12%        489ms ±10%           ~     (p=0.429 n=20+20)

    name                      old object-bytes  new object-bytes  delta
    Template                  559kB ± 0%        559kB ± 0%           ~     (all equal)
    Unicode                   216kB ± 0%        216kB ± 0%           ~     (all equal)
    GoTypes                   2.03MB ± 0%       2.03MB ± 0%          ~     (all equal)
    Compiler                  8.07MB ± 0%       8.07MB ± 0%       -0.06%   (p=0.000 n=20+20)
    SSA                       27.1MB ± 0%       27.3MB ± 0%       +0.89%   (p=0.000 n=20+20)
    Flate                     343kB ± 0%        343kB ± 0%           ~     (all equal)
    GoParser                  441kB ± 0%        441kB ± 0%           ~     (all equal)
    Reflect                   1.36MB ± 0%       1.36MB ± 0%          ~     (all equal)
    Tar                       487kB ± 0%        487kB ± 0%           ~     (all equal)
    XML                       632kB ± 0%        632kB ± 0%           ~     (all equal)

    name                      old export-bytes  new export-bytes  delta
    Template                  18.5kB ± 0%       18.5kB ± 0%          ~     (all equal)
    Unicode                   7.92kB ± 0%       7.92kB ± 0%          ~     (all equal)
    GoTypes                   35.0kB ± 0%       35.0kB ± 0%          ~     (all equal)
    Compiler                  109kB ± 0%        110kB ± 0%        +0.72%   (p=0.000 n=20+20)
    SSA                       137kB ± 0%        138kB ± 0%        +0.58%   (p=0.000 n=20+20)
    Flate                     4.89kB ± 0%       4.89kB ± 0%          ~     (all equal)
    GoParser                  8.49kB ± 0%       8.49kB ± 0%          ~     (all equal)
    Reflect                   11.4kB ± 0%       11.4kB ± 0%          ~     (all equal)
    Tar                       10.5kB ± 0%       10.5kB ± 0%          ~     (all equal)
    XML                       16.7kB ± 0%       16.7kB ± 0%          ~     (all equal)

    name                      old text-bytes    new text-bytes    delta
    HelloSize                 761kB ± 0%        761kB ± 0%           ~     (all equal)
    CmdGoSize                 10.8MB ± 0%       10.8MB ± 0%          ~     (all equal)

    name                      old data-bytes    new data-bytes    delta
    HelloSize                 10.7kB ± 0%       10.7kB ± 0%          ~     (all equal)
    CmdGoSize                 312kB ± 0%        312kB ± 0%           ~     (all equal)

    name                      old bss-bytes     new bss-bytes     delta
    HelloSize                 122kB ± 0%        122kB ± 0%           ~     (all equal)
    CmdGoSize                 146kB ± 0%        146kB ± 0%           ~     (all equal)

    name                      old exe-bytes     new exe-bytes     delta
    HelloSize                 1.13MB ± 0%       1.13MB ± 0%          ~     (all equal)
    CmdGoSize                 15.1MB ± 0%       15.1MB ± 0%          ~     (all equal)

Change-Id: I3cc2f9829a109543d9a68be4a21775d2d3e9801f
Reviewed-on: https://go-review.googlesource.com/c/go/+/196557
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
Reviewed-by: Keith Randall <khr@golang.org>
2019-09-26  cmd/compile: use numeric condition code masks on s390x  Michael Munday

Prior to this CL conditional branches on s390x always used an extended mnemonic such as BNE, BLT and so on to represent branch instructions with different condition code masks. This CL adds support for numeric condition code masks to the s390x SSA backend so that we can encode the condition under which a Block's successor is chosen as a field in that Block rather than in its type.

This change will be useful as we come to add support for combined compare-and-branch instructions. Rather than trying to add extended mnemonics for every possible combination of mask and compare-and-branch instruction we can instead use a single mnemonic for each instruction.

Change-Id: Idb7458f187b50906877d683695c291dff5279553
Reviewed-on: https://go-review.googlesource.com/c/go/+/197178
Reviewed-by: Keith Randall <khr@golang.org>
2019-09-13  cmd/compile: add math/bits.Mul64 intrinsic on s390x  Ruixin Bao

This change adds an intrinsic for Mul64 on s390x. To achieve that, a new assembly instruction, MLGR, is introduced in s390x/asmz.go. This assembly instruction directly uses an existing instruction on Z and supports multiplication of two 64 bit unsigned integers and stores the result in two separate registers.

In this case, we require the multiplicand to be stored in register R3 and the output result (the high and low 64 bits of the product) to be stored in R2 and R3 respectively.

A test case is also added.

Benchmark:

    name      old time/op  new time/op  delta
    Mul-18    11.1ns ± 0%  1.4ns ± 0%   -87.39%  (p=0.002 n=8+10)
    Mul32-18  2.07ns ± 0%  2.07ns ± 0%     ~     (all equal)
    Mul64-18  11.1ns ± 1%  1.4ns ± 0%   -87.42%  (p=0.000 n=10+10)

Change-Id: Ieca6ad1f61fff9a48a31d50bbd3f3c6d9e6675c1
Reviewed-on: https://go-review.googlesource.com/c/go/+/194572
Reviewed-by: Michael Munday <mike.munday@ibm.com>
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
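Usage of the intrinsified function via the standard math/bits API:

    package main

    import (
            "fmt"
            "math/bits"
    )

    func main() {
            // On s390x this compiles to a single MLGR, leaving the high
            // and low halves of the 128-bit product in two registers.
            hi, lo := bits.Mul64(1<<63, 6)
            fmt.Println(hi, lo) // 3 0
    }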
2019-09-11  cmd/compile/internal/s390x: replace 4-byte NOP with a 2-byte NOP on s390x  Ruixin(Peter) Bao

Added a new instruction, NOPH, with the encoding 0x0700 (i.e. bcr 0, 0), and replaced the current 4-byte nop that was encoded using the WORD instruction.

This reduces the size of the .text section in the go binary by around 17KB and makes the generated code easier to read.

Change-Id: I6a756df39e93c4415ea6d038ba4af001b8ccb286
Reviewed-on: https://go-review.googlesource.com/c/go/+/194344
Reviewed-by: Michael Munday <mike.munday@ibm.com>
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-09-10  cmd/compile: improve s390x sign/zero extension removal  Michael Munday

This CL gets rid of the MOVDreg and MOVDnop SSA operations on s390x. They were originally inserted to help avoid situations where a sign/zero extension was elided but a spill invalidated the optimization. It's not really clear we need to do this though (amd64 doesn't have these ops for example) so long as we are careful when removing sign/zero extensions. Also, the MOVDreg technique doesn't work if the register is spilled before the MOVDreg op (I haven't seen that in practice).

Removing these ops reduces the complexity of the rules and also allows us to unblock optimizations. For example, the compiler can now merge the loads in binary.{Big,Little}Endian.PutUint16 which it wasn't able to do before.

This CL reduces the size of the .text section in the go tool by about 4.7KB (0.09%).

Change-Id: Icaddae7f2e4f9b2debb6fabae845adb3f73b41db
Reviewed-on: https://go-review.googlesource.com/c/go/+/173897
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-06-06  cmd/compile, runtime: make atomic loads/stores sequentially consistent on s390x  Michael Munday

The z/Architecture does not guarantee that a load following a store will not be reordered with that store, unless they access the same address. Therefore if we want to ensure the sequential consistency of atomic loads and stores we need to perform serialization operations after atomic stores.

We do not need to serialize in the runtime when using StoreRel[ease] and LoadAcq[uire]. The z/Architecture already provides sufficient ordering guarantees for these operations.

    name              old time/op  new time/op  delta
    AtomicLoad64-16   0.51ns ± 0%  0.51ns ± 0%     ~     (all equal)
    AtomicStore64-16  0.51ns ± 0%  0.60ns ± 9%  +16.47%  (p=0.000 n=17+20)
    AtomicLoad-16     0.51ns ± 0%  0.51ns ± 0%     ~     (all equal)
    AtomicStore-16    0.51ns ± 0%  0.60ns ± 9%  +16.50%  (p=0.000 n=18+20)

Fixes #32428.

Change-Id: I88d19a4010c46070e4fff4b41587efe4c628d4d9
Reviewed-on: https://go-review.googlesource.com/c/go/+/180439
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
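A store/load litmus test illustrates the guarantee at stake (sketch; run g1 and g2 concurrently):

    import "sync/atomic"

    var x, y, r1, r2 uint32

    func g1() { atomic.StoreUint32(&x, 1); r1 = atomic.LoadUint32(&y) }
    func g2() { atomic.StoreUint32(&y, 1); r2 = atomic.LoadUint32(&x) }

    // With sequentially consistent atomics the outcome r1 == 0 && r2 == 0
    // is impossible; without serialization after the stores, each load
    // could be reordered ahead of the preceding store and allow it.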
2019-05-03  cmd/compile,runtime/internal/atomic: add Load8  Austin Clements

Change-Id: Id52a5730cf9207ee7ccebac4ef12791dc5720e7c
Reviewed-on: https://go-review.googlesource.com/c/go/+/172283
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2019-05-03  cmd/compile: add math/bits.{Add,Sub}64 intrinsics on s390x  Michael Munday

This CL adds intrinsics for the 64-bit addition and subtraction functions in math/bits. These intrinsics use the condition code to propagate the carry or borrow bit.

To make the carry chains more efficient I've removed the 'clobberFlags' property from most of the load and store operations. Originally these ops did clobber flags when using offsets that didn't fit in a signed 20-bit integer, however that is no longer true.

As with other platforms the intrinsics are faster when executed in a chain rather than a loop because currently we need to spill and restore the carry bit between each loop iteration. We may be able to reduce the need to do this on s390x (e.g. by using compare-and-branch instructions that do not clobber flags) in the future.

    name           old time/op  new time/op  delta
    Add64          1.21ns ± 2%  2.03ns ± 2%  +67.18%  (p=0.000 n=7+10)
    Add64multiple  2.98ns ± 3%  1.03ns ± 0%  -65.39%  (p=0.000 n=10+9)
    Sub64          1.23ns ± 4%  2.03ns ± 1%  +64.85%  (p=0.000 n=10+10)
    Sub64multiple  3.73ns ± 4%  1.04ns ± 1%  -72.28%  (p=0.000 n=10+8)

Change-Id: I913bbd5e19e6b95bef52f5bc4f14d6fe40119083
Reviewed-on: https://go-review.googlesource.com/c/go/+/174303
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
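A typical carry chain the intrinsics accelerate, using the standard math/bits API (sketch):

    import "math/bits"

    // add192 adds two 192-bit integers held as three 64-bit limbs; the
    // carry propagates through the condition code between the additions.
    func add192(a0, a1, a2, b0, b1, b2 uint64) (r0, r1, r2 uint64) {
            var c uint64
            r0, c = bits.Add64(a0, b0, 0)
            r1, c = bits.Add64(a1, b1, c)
            r2, _ = bits.Add64(a2, b2, c)
            return
    }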
2019-04-07  cmd/compile: remove AUNDEF opcode  Keith Randall

This opcode was only used to mark unreachable code for plive to use. plive now uses the SSA representation, so it knows locations are unreachable because they are ends of Exit blocks. It doesn't need these opcodes any more.

These opcodes actually used space in the binary, 2 bytes per undef on x86 and more for other archs.

Makes the amd64 go binary 0.2% smaller.

Change-Id: I64c84c35db7c7949617a3a5830f09c8e5fcd2620
Reviewed-on: https://go-review.googlesource.com/c/go/+/171058
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-03-18  cmd/compile,runtime: provide index information on bounds check failure  Keith Randall

A few examples (for accessing a slice of length 3):

    s[-1]    runtime error: index out of range [-1]
    s[3]     runtime error: index out of range [3] with length 3
    s[-1:0]  runtime error: slice bounds out of range [-1:]
    s[3:0]   runtime error: slice bounds out of range [3:0]
    s[3:-1]  runtime error: slice bounds out of range [:-1]
    s[3:4]   runtime error: slice bounds out of range [:4] with capacity 3
    s[0:3:4] runtime error: slice bounds out of range [::4] with capacity 3

Note that in cases where there are multiple things wrong with the indexes (e.g. s[3:-1]), we report one of those errors kind of arbitrarily, currently the rightmost one.

An exhaustive set of examples is in issue30116[u].out in the CL.

The message text has the same prefix as the old message text. That leads to slightly awkward phrasing but hopefully minimizes the chance that code depending on the error text will break.

Increases the size of the go binary by 0.5% (amd64). The panic functions take arguments in registers in order to keep the size of the compiled code as small as possible.

Fixes #30116

Change-Id: Idb99a827b7888822ca34c240eca87b7e44a04fdd
Reviewed-on: https://go-review.googlesource.com/c/go/+/161477
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2019-02-25  cmd/compile: call ginsnop, not ginsnop2 on ppc64le for mid-stack inlining tracebacks  Lynn Boger

A recent change to fix stacktraces for inlined functions introduced a regression on ppc64le when compiling position independent code. That happened because ginsnop2 was called for the purpose of inserting a NOP to identify the location of the inlined function, when ginsnop should have been used.

ginsnop2 is intended to be used before deferreturn to ensure r2 is properly restored when compiling position independent code. In some cases the location where r2 is loaded from might not be initialized. If that happens and r2 is used to generate an address, the result is likely a SEGV.

This fixes that problem.

Fixes #30283

Change-Id: If70ef27fc65ef31969712422306ac3a57adbd5b6
Reviewed-on: https://go-review.googlesource.com/c/163337
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Carlos Eduardo Seo <cseo@linux.vnet.ibm.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Andrew Bonventre <andybons@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-01-03  cmd/compile: fix no-op instruction used by s390x  Keith Randall

CL 152537 introduced a new use for ginsnop, the arch-dependent routine that generates nops. The previous s390x nop clobbered flags. It turns out the previous uses of this nop did not require flags to be preserved, but the new use does.

Use a real nop: the 4-byte preferred nop.

Fixes #29453

Change-Id: I95310dfdd831932e26f5d5b6608324687f4c3162
Reviewed-on: https://go-review.googlesource.com/c/155926
Reviewed-by: Michael Munday <mike.munday@ibm.com>
2018-12-28  cmd/compile,runtime: redo mid-stack inlining tracebacks  Keith Randall

Work involved in getting a stack trace is divided between runtime.Callers and runtime.CallersFrames.

Before this CL, runtime.Callers returns a pc per runtime frame. runtime.CallersFrames is responsible for expanding a runtime frame into potentially multiple user frames.

After this CL, runtime.Callers returns a pc per user frame. runtime.CallersFrames just maps those to user frame info.

Entries in the result of runtime.Callers are now pcs of the calls (or of the inline marks), not of the instruction just after the call.

Fixes #29007
Fixes #28640
Update #26320

Change-Id: I1c9567596ff73dc73271311005097a9188c3406f
Reviewed-on: https://go-review.googlesource.com/c/152537
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2018-09-05  cmd/compile: make math/bits.RotateLeft{32,64} intrinsics on s390x  Michael Munday

Extends CL 132435 to s390x. s390x has 32- and 64-bit variable rotate left instructions.

Change-Id: Ic4f1ebb0e0543207ed2fc8c119e0163b428138a5
Reviewed-on: https://go-review.googlesource.com/133035
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
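Usage via the standard math/bits API; since s390x has variable rotate instructions the rotate amount need not be constant:

    import "math/bits"

    func rot(x uint64, k int) uint64 {
            return bits.RotateLeft64(x, k) // one rotate instruction on s390x
    }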
2018-09-03  cmd/compile: implement OnesCount{8,16,32,64} intrinsics on s390x  Michael Munday

This CL implements the math/bits.OnesCount{8,16,32,64} functions as intrinsics on s390x using the 'population count' (popcnt) instruction. This instruction was released as the 'population-count' facility which uses the same facility bit (45) as the 'distinct-operands' facility which is a pre-requisite for Go on s390x. We can therefore use it without a feature check.

The s390x popcnt instruction treats a 64 bit register as a vector of 8 bytes, summing the number of ones in each byte individually. It then writes the results to the corresponding bytes in the output register. Therefore to implement OnesCount{16,32,64} we need to sum the individual byte counts using some extra instructions. To do this efficiently I've added some additional pseudo operations to the s390x SSA backend.

Unlike other architectures the new instruction sequence is faster for OnesCount8, so that is implemented using the intrinsic.

    name         old time/op  new time/op  delta
    OnesCount    3.21ns ± 1%  1.35ns ± 0%  -58.00%  (p=0.000 n=20+20)
    OnesCount8   0.91ns ± 1%  0.81ns ± 0%  -11.43%  (p=0.000 n=20+20)
    OnesCount16  1.51ns ± 3%  1.21ns ± 0%  -19.71%  (p=0.000 n=20+17)
    OnesCount32  1.91ns ± 0%  1.12ns ± 1%  -41.60%  (p=0.000 n=19+20)
    OnesCount64  3.18ns ± 4%  1.35ns ± 0%  -57.52%  (p=0.000 n=20+20)

Change-Id: Id54f0bd28b6db9a887ad12c0d72fcc168ef9c4e0
Reviewed-on: https://go-review.googlesource.com/114675
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
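Usage via the standard math/bits API:

    import "math/bits"

    func countBits(x uint64) int {
            // popcnt produces per-byte counts; the backend then sums the
            // bytes with the pseudo operations described above.
            return bits.OnesCount64(x)
    }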
2018-05-08  cmd/compile: simplify shift lowering on s390x  Michael Munday

Use conditional moves instead of subtractions with borrow to handle saturation cases. This allows us to delete the SUBE/SUBEW ops and associated rules from the SSA backend.

Using conditional moves also means we can detect when shift values are masked so I've added some new rules to constant fold the relevant comparisons and masking ops.

Also use the new shiftIsBounded() function to avoid generating code to handle saturation cases where possible.

Updates #25167 for s390x.

Change-Id: Ief9991c91267c9151ce4c5ec07642abb4dcc1c0d
Reviewed-on: https://go-review.googlesource.com/110070
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
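An example where shiftIsBounded() applies (illustrative): masking the shift amount proves it is less than 64, so the saturation handling can be dropped entirely:

    func shl(x uint64, s uint) uint64 {
            return x << (s & 63) // provably bounded: no conditional moves
    }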
2018-05-02  cmd/compile: intrinsify runtime.getcallerpc on all link register architectures  Wei Xiao

Add a compiler intrinsic for getcallerpc on following architectures:

    arm
    mips mipsle mips64 mips64le
    ppc64 ppc64le
    s390x

Change-Id: I758f3d4742fc214b206bcd07d90408622c17dbef
Reviewed-on: https://go-review.googlesource.com/110835
Run-TryBot: Wei Xiao <Wei.Xiao@arm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2018-04-20  cmd/compile: don't lower OpConvert  Austin Clements

Currently, each architecture lowers OpConvert to an arch-specific OpXXXconvert. This is silly because OpConvert means the same thing on all architectures and is logically a no-op that exists only to keep track of conversions to and from unsafe.Pointer. Furthermore, lowering it makes it harder to recognize in other analyses, particularly liveness analysis.

This CL eliminates the lowering of OpConvert, leaving it as the generic op until code generation time.

The main complexity here is that we still need to register-allocate OpConvert operations. Currently, each arch's lowered OpConvert specifies all GP registers in its register mask. Ideally, OpConvert wouldn't affect value homing at all, and we could just copy the home of OpConvert's source, but this can potentially home an OpConvert in a LocalSlot, which neither regalloc nor stackalloc expect. Rather than try to disentangle this assumption from regalloc and stackalloc, we continue to register-allocate OpConvert, but teach regalloc that OpConvert can be allocated to any allocatable GP register.

For #24543.

Change-Id: I795a6aee5fd94d4444a7bafac3838a400c9f7bb6
Reviewed-on: https://go-review.googlesource.com/108496
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2018-04-11  cmd/compile: use Block.Likely to put likely branch first  David Chase

When neither of a conditional block's successors follows, the block must end with a conditional branch followed by an unconditional branch. If the (conditional) branch is "unlikely", invert it and swap successors to make it likely instead.

This doesn't matter to most benchmarks on amd64, but in one instance on amd64 it caused a 30% improvement, and it is otherwise harmless. The problematic loop is

    for i := 0; i < w; i++ {
            if pw[i] != 0 {
                    return true
            }
    }

compiled under GOEXPERIMENT=preemptibleloops. This is the very worst-case benchmark for that experiment.

Also in this CL is a commoning up of heavily-repeated boilerplate, which made it much easier to see that the changes were applied correctly. In the future this should allow un-exporting of SSAGenState.Branches once the boilerplate-replacement is done everywhere.

Change-Id: I0e5ded6eeb3ab1e3e0138e12d54c7e056bd99335
Reviewed-on: https://go-review.googlesource.com/104977
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2018-02-13  runtime: buffered write barrier for s390x  Austin Clements

Updates #22460.

Change-Id: I3f793e69577c1b837ad2666e6209a97a452405d4
Reviewed-on: https://go-review.googlesource.com/92703
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2017-11-07  cmd/compile: fix SSA immediate sign extension on s390x  Michael Munday

The CMPWUconst op (32-bit unsigned comparison with immediate) takes an unsigned immediate value. In SSA this should be sign extended to 64-bits to match the Int32 type given in the op and then zero extended when producing the final assembly. Before this CL we were zero extending in SSA which caused ssacheck to fail.

While we are here also ensure other 32-bit immediates are sign extended in SSA.

Passes toolstash -cmp on std on s390x.

Fixes #22611.

Change-Id: I5c061a76a710b10ecb0650c9c42efd9fa1c123cc
Reviewed-on: https://go-review.googlesource.com/76336
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2017-11-02  cmd/compile: add missing s390x load with index operations  Michael Munday

Prior to this CL loads with sign extension could not be replaced with indexed loads (only loads with zero extension).

This CL also prevents large offsets (more than 20-bits) from being merged into indexed loads. It is better to keep such offsets separate.

Gives a small improvement in binary size, ~1.5KB from .text in cmd/go.

Change-Id: Ib848ffc2b05de6660c5ce2394ae1d1d144273e29
Reviewed-on: https://go-review.googlesource.com/36845
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
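As a sketch, the kind of accesses affected (hypothetical example):

    // The zero-extending load (uint8) could already merge into an
    // indexed load; this CL lets the sign-extending form (int8) merge
    // as well.
    func load(p []int8, q []uint8, i int) (int64, uint64) {
            return int64(p[i]), uint64(q[i])
    }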