Description of the branch-prediction tag
/questions/47845055/pochemu-obrabatyivat-otsortirovannyij-massiv-byistree-chem-nesortirovannyij (stackru's highest-voted question) and its answer are a good introduction to the subject.
In computer architecture, a branch predictor is a digital circuit that tries to guess which way a branch (e.g. an if-then-else structure) will go before this is known for sure. The purpose of the branch predictor is to improve the flow in the instruction pipeline.
Branch predictors play a critical role in achieving high effective performance in many modern pipelined microprocessor architectures such as x86.
Two-way branching is usually implemented with a conditional jump instruction. A conditional jump can either be "not taken" and continue execution with the first branch of code which follows immediately after the conditional jump - or it can be "taken" and jump to a different place in program memory where the second branch of code is stored.
It is not known for certain whether a conditional jump will be taken or not taken until the condition has been calculated and the conditional jump has passed the execution stage in the instruction pipeline.
Without branch prediction, the processor would have to wait until the conditional jump instruction has passed the execute stage before the next instruction can enter the fetch stage in the pipeline. The branch predictor attempts to avoid this waste of time by trying to guess whether the conditional jump is most likely to be taken or not taken. The branch that is guessed to be the most likely is then fetched and speculatively executed. If it is later detected that the guess was wrong then the speculatively executed or partially executed instructions are discarded and the pipeline starts over with the correct branch, incurring a delay.
The time that is wasted in case of a branch misprediction is equal to the number of stages in the pipeline from the fetch stage to the execute stage. Modern microprocessors tend to have quite long pipelines so that the misprediction delay is between 10 and 20 clock cycles. The longer the pipeline the greater the need for a good branch predictor.
Source: http://en.wikipedia.org/wiki/Branch_predictor
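As a concrete illustration of that cost, here is a minimal C sketch (a hypothetical microbenchmark in the spirit of the sorted-array question linked above, not code taken from it): the branch in the loop is essentially random on unsorted data, so it mispredicts about half the time, but becomes almost perfectly predictable once the data is sorted.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Sum only the elements >= 128.  On random data the branch is taken
 * roughly at random (lots of mispredictions); on sorted data it is a
 * long run of not-taken followed by a long run of taken, which any
 * dynamic predictor handles almost perfectly. */
static int64_t sum_big(const uint8_t *data, size_t n) {
    int64_t sum = 0;
    for (size_t i = 0; i < n; i++)
        if (data[i] >= 128)
            sum += data[i];
    return sum;
}

static int cmp_u8(const void *a, const void *b) {
    return (int)*(const uint8_t *)a - (int)*(const uint8_t *)b;
}

int main(void) {
    enum { N = 1 << 20 };
    static uint8_t data[N];
    for (size_t i = 0; i < N; i++)
        data[i] = (uint8_t)rand();

    /* Time this call on unsorted data, then sort and time it again:
     * the work is identical, only the predictability of the branch changes. */
    printf("unsorted: %lld\n", (long long)sum_big(data, N));
    qsort(data, N, 1, cmp_u8);
    printf("sorted:   %lld\n", (long long)sum_big(data, N));
    return 0;
}
```

(At high optimization levels a compiler may if-convert this branch into a conditional move, removing the mispredictions along with the timing difference; see the branchless-code section at the bottom of this page.)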
The Spectre security vulnerability also revolves around branch prediction: it deliberately mistrains the predictor so that speculative execution of the wrong path leaks data through cache side channels.
Other resources
Special-purpose predictors: a Return Address Stack for call/ret. `ret` is effectively an indirect branch, setting program-counter = return address. That would be hard to predict on its own, but calls are normally made with a special instruction, so modern CPUs can match call/ret pairs with an internal stack.
Computer architecture details about branch prediction / speculative execution, and its effects on pipelined CPUs
- /questions/47845055/pochemu-obrabatyivat-otsortirovannyij-massiv-byistree-chem-nesortirovannyij
- Branch prediction - Dan Luu's article on branch prediction, adapted from a talk. With diagrams. Good introduction to why it's needed, and some basic implementations used in early CPUs, building up to more complicated predictors. And at the end, a link to TAGE branch predictors used on modern Intel CPUs. (Too complicated for that article to explain, though!)
- Slow jmp-instruction - even unconditional direct jumps (like x86's `jmp`) need to be predicted, to avoid stalls in the very first stage of the pipeline: fetching blocks of machine code from I-cache. After fetching one block, you need to know which block to fetch next, before (or at best in parallel with) decoding the block you just fetched. A long sequence of `jmp next_instruction` will overwhelm branch prediction and expose the cost of misprediction in this part of the pipeline. (Many high-end modern CPUs have a queue between fetch and decode to hide bubbles, so some blocks of non-branchy code can let the queue refill.)
- /questions/9675284/predskazanie-tseli-vetvleniya-v-sochetanii-s-predskazaniem-vetvleniya
- /questions/26300767/kakoe-nevernoe-predskazanie-vetvleniya-obnaruzhivaet-tselevoj-bufer-vetvleniya
Cost of a branch miss
Characterizing the Branch Misprediction Penalty is a nice paper that models the real effect of branch misses, data-cache misses, and I-cache misses on a superscalar out-of-order pipeline, and how long it takes the CPU to return to a steady state with lots of instructions in flight hiding execution latencies. Ramping back up to that state costs time too, which is why the true mispredict cost is context-sensitive, on top of the number of cycles before execution can start on the correct path.
What exactly happens when a Skylake CPU mispredicts a branch? and /questions/39675159/oshibochnoe-li-predskazanie-vetki-sbrasyivaet-ves-konvejer-dazhe-dlya-ochen-korotkogo-tela-operatora-if - fast recovery using a Branch Order Buffer snapshot of the out-of-order state can start sooner and allow work on instructions before the branch to continue, vs. treating the mispredict like other exceptions and flushing back to the retirement state.
/questions/39403080/izbegajte-ostanova-konvejera-vyichislyaya-uslovno-rano - detecting a mispredict early can let fast-recovery throw away less work, e.g. unrolling a loop can let the loop-overhead work run ahead of the rest of the loop, and only lose some front-end throughput on the hard-to-predict loop-exit branch, not much useful back-end work.
Will Speculative Execution Follow Into an Expensive Operation? - cost of "expensive" operations in the shadow of a mispredict.
Modern TAGE predictors (in Intel CPUs, for example) can "learn" amazingly long patterns, because they index based on past branch history. (So the same branch can get different predictions depending on the path leading up to it, and a single branch can have its prediction data scattered over many bits in the branch-predictor table.) This goes a long way toward solving the problem of indirect branches in an interpreter almost always mispredicting (/questions/19814232/optimizatsiya-predvaritelnoj-zagruzki-v-x86-mnogopotochnyij-kod-computed-goto and Branch prediction and the performance of interpreters — Don't trust folklore); it also means, for example, that a repeated binary search over the same data with the same input can be really efficient.
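The computed-goto dispatch style mentioned in that interpreter question gives each bytecode handler its own indirect jump, so a history-based predictor gets per-transition context instead of one shared dispatch branch. A minimal GNU C sketch of the idea (hypothetical three-opcode bytecode, purely illustrative):

```c
#include <stdio.h>

enum { OP_INC, OP_DEC, OP_HALT };          /* hypothetical bytecode */

static long run(const unsigned char *pc) {
    /* GNU C computed goto (labels as values): each handler ends with its
     * own "goto *", so the hardware sees a separate indirect branch per
     * handler rather than one hard-to-predict shared dispatch branch. */
    static void *dispatch[] = { &&op_inc, &&op_dec, &&op_halt };
    long acc = 0;

    goto *dispatch[*pc++];
op_inc:  acc++; goto *dispatch[*pc++];
op_dec:  acc--; goto *dispatch[*pc++];
op_halt: return acc;
}

int main(void) {
    const unsigned char prog[] = { OP_INC, OP_INC, OP_DEC, OP_INC, OP_HALT };
    printf("%ld\n", run(prog));            /* prints 2 */
    return 0;
}
```

A plain switch-based dispatch loop funnels every handler-to-handler transition through one indirect branch; part of the point of the "Don't trust folklore" paper is that modern history-based predictors handle even that case much better than older CPUs did.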
Static branch prediction on newer Intel processors - according to experimental evidence, it appears Nehalem and earlier do sometimes use static prediction at some point in the pipeline (backwards branches default to predicted-taken, forward to not-taken.) But Sandybridge and newer seem to be always dynamic based on some history, whether it's from this branch or one that aliases it. /questions/10650861/pochemu-intel-izmenila-mehanizm-staticheskogo-prognozirovaniya-vetvlenij-za-eti-godyi
Cases where TAGE does "amazingly" well
- /questions/48761178/ponimanie-effektivnosti-otraslevogo-prognozirovaniya
- /questions/48366732/pervyij-vyizov-metoda-zanimaet-v-10-raz-bolshe-chem-posledovatelnyie-vyizovyi-s (branch prediction can "learn" the branching for repeated sorting of the same data. Usually not useful in real programs (where you'd save the sorted data instead of re-sorting), just a problem to work around for microbenchmarking)
- discussion in comments on Indexed branch overhead on X86 64 bit mode has more about why TAGE does well there.
Assembly code layout: not so much for branch prediction, but because not-taken branches are easier on the front-end than taken branches. Better I-cache code density if the fast-path is just a straight line, and taken branches mean the part of a fetch block after the branch isn't useful.
Superscalar CPUs fetch code in blocks, e.g. aligned 16 byte blocks, containing multiple instructions. In non-branching code, including not-taken conditional branches, all of those bytes are useful instruction bytes.
/questions/23160246/kakov-effekt-uporyadocheniya-esli-esche-esli-utverzhdeniya-po-veroyatnosti (compilers guess which way if conditions will go, and lay out the code to minimize branches, especially taken branches, on the expected fast path.) Or profile-guided optimization can use real data instead of guesses based on heuristics.
Or GNU C's `__builtin_expect` / `likely()` / `unlikely()` lets programmers supply branch-direction hints, which may be helpful if they're correct but can easily get out of date as code is modified (a minimal sketch follows the links below). /questions/1933393/kak-rabotayut-veroyatnyie-maloveroyatnyie-makrosyi-v-yadre-linux-i-v-chem-ih-vyigoda (the top answer that claims it helps branch prediction is wrong; some others have real examples of asm code-layout differences.)
- Are BOOST_LIKELY and __builtin_expect still relevant?
- Branch-aware programming
- Is there a compiler hint for GCC to force branch prediction to always go a certain way?
- Is it possible to tell the branch predictor how likely it is to follow the branch? (Pentium 4 had branch hints as prefixes on instructions, silently ignored by earlier and later CPUs. PowerPC also experimented with that. Other than that, no: likely/unlikely affect code layout and/or other optimizations to make the expected case faster.)
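For concreteness, a minimal sketch of the `likely()`/`unlikely()` pattern (the macro names follow the Linux-kernel convention and are defined here just for illustration; the point is code layout, not a hint to the hardware predictor):

```c
#include <stdio.h>
#include <stdlib.h>

/* Kernel-style wrappers around GNU C's __builtin_expect. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Hypothetical allocation wrapper, for illustration only. */
void *xmalloc(size_t size) {
    void *p = malloc(size);
    /* Marking the error path as unlikely encourages the compiler to lay
     * out the common case as a straight line (a not-taken branch) and
     * move the error handling out of the hot path. */
    if (unlikely(p == NULL)) {
        fprintf(stderr, "out of memory\n");
        abort();
    }
    return p;
}
```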
Branchless code: using `cmov` or other tricks to avoid branches
This is the asm equivalent of replacing `if (c) a = b;` with `a = c ? b : a;`. If `b` doesn't have side effects and `a` isn't a potentially-shared memory location, compilers can do "if-conversion": implement the conditional with a data dependency on `c` instead of a control dependency.
(C compilers can't introduce a non-atomic read/write: that could step on another thread's modification of the variable. Writing your code as always rewriting a value tells compilers that it's safe, which sometimes enables auto-vectorization: AVX-512 and Branching)
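A minimal sketch of that source-level difference (hypothetical function names, shown with a pointer so the shared-memory issue is visible):

```c
/* Control dependency: *a is written only when c is true.  The compiler
 * can't invent a store for the c == 0 case, because *a might be shared
 * with another thread, so this often has to stay a real branch. */
void set_branchy(int *a, int b, int c) {
    if (c)
        *a = b;
}

/* Data dependency: *a is always written (sometimes with its old value).
 * Since the source already writes unconditionally, the compiler is free
 * to if-convert the select into a cmov, or to vectorize it with blends
 * or masked stores. */
void set_branchless(int *a, int b, int c) {
    *a = c ? b : *a;
}
```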
Potential downside to `cmov` in scalar code: the data dependency can become part of a loop-carried dependency chain and become a bottleneck, while branch prediction + speculative execution hide the latency of control dependencies. The branchless data dependency isn't predicted or speculated, which makes it good for unpredictable cases, but potentially bad otherwise. (A sketch of this tradeoff follows the links below.)
- https://yarchive.net/comp/linux/cmov.html - Linus Torvalds explains why `cmov` often sucks: most branches are predictable, and a predicted branch even takes calculation of the branch condition off the critical path.
- gcc optimization flag -O3 makes code slower than -O2 - analysis of a real case where GCC's if-conversion heuristic isn't expecting sorted data, and the longer loop-carried dependency chain through `cmov` creates a bottleneck.
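A minimal sketch of that kind of loop (in the spirit of the -O3 case above, not the exact code from it): if-conversion turns a branch that predicts perfectly on sorted data into a `cmov` sitting on the loop-carried `sum` chain, so every iteration waits for add + cmov latency instead of just the add.

```c
#include <stddef.h>
#include <stdint.h>

/* Branchy form: on sorted data the branch predicts almost perfectly and
 * the loop-carried chain is just sum += data[i], one add per iteration. */
int64_t sum_big_branchy(const uint8_t *data, size_t n) {
    int64_t sum = 0;
    for (size_t i = 0; i < n; i++)
        if (data[i] >= 128)
            sum += data[i];
    return sum;
}

/* Roughly the shape if-conversion produces: a select between sum and
 * sum + data[i].  The cmov is now part of the loop-carried chain, so the
 * loop runs at add + cmov latency per iteration and speculation can't
 * hide it.  That's a win only when the branch is genuinely unpredictable. */
int64_t sum_big_cmov(const uint8_t *data, size_t n) {
    int64_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum = (data[i] >= 128) ? sum + data[i] : sum;
    return sum;
}
```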