TT Gotchas

thevinenator
Posts: 68
Joined: Tue Jun 02, 2015 11:02 pm
Real Name: Vince

TT Gotchas

Post by thevinenator » Thu Dec 17, 2015 5:32 pm

I found a gotcha in my search code. Consider the following code snippet which is published on many web pages. It fetches the TT entry and then modifies alpha/beta based on the entry.

Code:

function negamax(node, depth, α, β, color)
    alphaOrig := α

    // Transposition Table Lookup; node is the lookup key for ttEntry
    ttEntry := TranspositionTableLookup( node )
    if ttEntry is valid and ttEntry.depth ≥ depth
        if ttEntry.Flag = EXACT
            return ttEntry.Value
        else if ttEntry.Flag = LOWERBOUND
            α := max( α, ttEntry.Value )
        else if ttEntry.Flag = UPPERBOUND
            β := min( β, ttEntry.Value )
        endif
        if α ≥ β
            return ttEntry.Value
        endif
    endif
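For concreteness, the snippet above translates roughly into the runnable Python sketch below. The `TTEntry` fields and the EXACT/LOWERBOUND/UPPERBOUND flags follow the pseudocode; the `probe` helper and its return convention are my own framing, not from any particular engine.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

EXACT, LOWERBOUND, UPPERBOUND = 0, 1, 2

@dataclass
class TTEntry:
    key: int      # full hash signature of the stored position
    depth: int    # depth the stored result was searched to
    flag: int     # EXACT, LOWERBOUND, or UPPERBOUND
    value: int    # stored score

def probe(entry: Optional[TTEntry], key: int, depth: int,
          alpha: int, beta: int) -> Tuple[Optional[int], int, int]:
    """Return (score, alpha, beta); score is non-None on an immediate return."""
    if entry is None or entry.key != key or entry.depth < depth:
        return None, alpha, beta            # no usable entry
    if entry.flag == EXACT:
        return entry.value, alpha, beta     # exact score: return it
    if entry.flag == LOWERBOUND:
        alpha = max(alpha, entry.value)     # true score is at least entry.value
    elif entry.flag == UPPERBOUND:
        beta = min(beta, entry.value)       # true score is at most entry.value
    if alpha >= beta:
        return entry.value, alpha, beta     # bounds crossed: cutoff
    return None, alpha, beta                # continue searching with new bounds
```

Note that `probe` mutates the caller's window (the returned alpha/beta), which is exactly the behavior the rest of this post is about.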
I use a variation of that published snippet and got bitten because I didn't think before using it as-is...

The major problem is that we can't assume the entry coming back from the TT represents the actual position we are probing. It could have been overwritten, even if the depth is valid. I blindly used the returned bounds. Later in my code I do verify that the move stored in the TT is legal, in order to use it as the first move in move ordering, but by then the damage had been done: alpha/beta had already been modified before calling QSearch, for example, or even earlier, in the probe code itself, where a score is returned for EXACT entries and on a cutoff.

The pseudocode's "if ttEntry is valid" carries a hidden meaning that can't be settled by simply comparing hash keys.

So, now that I know the alpha/beta bounds from the hash can't be used unless we know the stored move is valid for the current position, it's back to the drawing board to move some code around.
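One way to arrange that guard is to refuse the entry outright when its stored move doesn't fit the current position. This is a sketch of the pattern, not anyone's actual engine code: `move_is_legal` stands in for whatever move-validation routine the engine already has, and the `TTEntry` layout is assumed.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TTEntry:
    key: int              # full hash signature
    depth: int            # stored search depth
    move: Optional[str]   # stored best move, e.g. "e2e4" (None if none)
    value: int            # stored score

def safe_probe(entry: Optional[TTEntry], key: int, depth: int,
               move_is_legal: Callable[[str], bool]) -> Optional[TTEntry]:
    """Return the entry only if it looks safe to trust its bounds."""
    if entry is None or entry.key != key:
        return None                  # no entry, or signature mismatch
    if entry.move is not None and not move_is_legal(entry.move):
        return None                  # signature matched but the stored move
                                     # doesn't fit this position: treat the
                                     # hit as a collision and ignore it
    if entry.depth < depth:
        return None                  # too shallow for the bounds to be useful
    return entry
```

With this shape, the alpha/beta adjustments only ever happen after the move check, instead of before it.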

While on the subject of rearranging code, I have a question. I've seen many code examples that check for a leaf node, repetition, the 50-move rule, and other conditions that return before searching deeper. Is there any logic to the order of these checks? Each developer seems to do them in a different order. Take repetition, for example: if we enter a leaf node, is it a repeated position if we aren't going to go deeper in the search? Is the ordering done for efficiency (whichever is more likely to occur goes first)?
"An Engine's strength flows from the Search. But beware, pruning, extensions, reductions; the dark side of the Search are they. Once you start down the dark path, it will dominate and consume you, as it has to so many developers before." -- Yoda

hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham

Re: TT Gotchas

Post by hyatt » Thu Dec 17, 2015 6:37 pm

In answer to your last question: if you have several tricks that can terminate the search early, you want to try the fastest one first, then the next fastest, and so on. That is why I do the tests in the order I do them, i.e. repetition/50-move checks are done together, then the hash probe, then the EGTB probe, then a null-move search...
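That cheapest-first ordering can be sketched as a single early-exit routine. Everything here is a stand-in: `node` is a plain dict, and `probe_hash`/`probe_egtb` are placeholders for the engine's own probe routines, returning a score or `None`.

```python
def early_exits(node, depth, alpha, beta, probe_hash, probe_egtb):
    """Run the early-exit tests cheapest-first; return a score, or None
    if the node must actually be searched."""
    # Repetition / 50-move rule: a couple of comparisons, so it goes first.
    if node.get("repetition") or node.get("halfmove_clock", 0) >= 100:
        return 0                        # draw score
    # Hash probe: one memory lookup, still very cheap.
    score = probe_hash(node, depth, alpha, beta)
    if score is not None:
        return score
    # EGTB probe: the most expensive test here, so it goes last.
    score = probe_egtb(node)
    if score is not None:
        return score
    return None                         # no early exit; do the real search
```

The null-move search mentioned above would follow the same logic, coming after these because it is the most expensive "early exit" of all.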

I didn't follow your hash issue. You have three pieces of information: signature, score, and best move (ignoring type and depth). If the signatures match, then this position is a duplicate of one that has already been searched. Testing the move for legality is fine, particularly if an illegal move would corrupt your search data (as it can in mine, particularly in the case of a castling move). But a hash collision, assuming 64 bits, will have no measurable effect on the search. Even if you have a collision every 1M nodes you won't see any changes to anything...

I didn't understand the "written over" part. Entries get replaced all the time. But they get completely replaced, so that the score/move always goes with the signature from that same position...

H.G.Muller
Posts: 190
Joined: Sun Jul 14, 2013 10:00 am
Real Name: H.G. Muller

Re: TT Gotchas

Post by H.G.Muller » Sun Dec 20, 2015 4:28 pm

I think the issue is more one of search instability. If a previous search at equal or greater depth provided a lower bound, there is no guarantee that a search at the current depth (or even at the same depth with another window, or even with the same window as before) would return a score above that lower bound. So raising alpha to it might make the current node fail low. And what are you going to do then?

I think it would only be safe to do tricks like this within the context of (internal) aspiration, i.e. where the code realizes that the alpha and beta used were not the proper ones, so that if they are inadvertently exceeded, it will do a re-search with the official alpha and beta.
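That internal-aspiration safety net could be sketched as a wrapper like the one below. This is only an illustration of the idea: `search` stands in for the engine's normal search routine, and the function and parameter names are made up.

```python
def tt_aspiration_search(node, depth, alpha, beta, tt_alpha, tt_beta, search):
    """Search with a TT-narrowed window, but re-search with the official
    window if the result fails against a bound we artificially tightened."""
    a = max(alpha, tt_alpha)   # raise alpha toward the stored lower bound
    b = min(beta, tt_beta)     # lower beta toward the stored upper bound
    score = search(node, depth, a, b)
    if score <= a and a > alpha:
        # Failed low against an alpha we raised ourselves: the fail-low may
        # be an artifact, so re-search with the official alpha.
        score = search(node, depth, alpha, b)
    elif score >= b and b < beta:
        # Failed high against a beta we lowered ourselves: re-search with
        # the official beta.
        score = search(node, depth, a, beta)
    return score
```

If the re-search then fails against the official window, that is a genuine fail-low or fail-high and can be handled the usual way.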

In my engines, if a second search gives results that contradict earlier results of a search that was supposed to be of the same quality, the second search is usually right. Most of the time the difference is due to hash grafting: by grafting results from the first search that were left in the hash onto the second, the second can effectively look much deeper. So it is foolhardy to stick with the bounds predicted by the earlier search.
