Making good progress on my engine thanks to these discussions. Have implemented iterative deepening, PVS, killer moves, and ordering of root moves based on previous iteration node counts. I am now working on null move pruning before I look at LMR and futility pruning. My first cut at null move looks something like this with R = 2:

generate root moves (no null move here) and call alphaBeta(depth - 1, ...., true) // doNullMove = true

then in alphaBeta before generating moves:

if (doNullMove && !inCheck && depth > R) {
    makeNullMove()                                                  // flip side to move (and clear any en passant square)
    nullValue = -alphaBeta(depth - R - 1, -beta, -beta + 1, false)  // zero window around beta (note the negamax sign flip)
    undoNullMove()
    if (nullValue >= beta) { return beta }
}

This has sped up the search considerably, but as depth goes higher I'm curious what people do in terms of changing R (adaptive NMR) or other techniques.
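To make the shape of the question concrete, here is the kind of thing I mean (the constants are placeholders, not tuned values; I've read that Heinz-style adaptive null move, for example, uses R = 2 at shallow depths and bumps it to 3 deeper in the tree):

int R = 2
if (depth > 6) { R = 3 }   // reduce more aggressively when far from the leaves
nullValue = -alphaBeta(depth - R - 1, -beta, -beta + 1, false)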

I have noticed that some people do not include the restriction depth > R when deciding whether to do a null move. They just reduce the depth and then, at the beginning of alphaBeta, add:

if (depth <= 0) { return quiescence() }

so the reduced search falls straight into quiescence. (FYI, I do allow checks in the first two plies of qSearch, so this is still a possibility for me.)
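My reading of that variant is something like the following (my own sketch, not code from any particular engine):

// top of alphaBeta: the reduced null-move search may arrive with depth <= 0
if (depth <= 0) { return quiescence(alpha, beta) }

// later, before generating moves: no depth > R restriction at all
if (doNullMove && !inCheck) {
    makeNullMove()
    nullValue = -alphaBeta(depth - R - 1, -beta, -beta + 1, false)  // may recurse with depth <= 0
    undoNullMove()
    if (nullValue >= beta) { return beta }
}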

So are people restricting null move to depth > R or not?

And in terms of making R a function of remaining depth, I know some people use a much higher R when the remaining depth is large. But what counts as high depth is different for each engine.

Assuming I am aiming for a 12-ply search, what are people using for R at the various depths? And if R does change, do you use a formula or just an array indexed by depth? (I sketch both forms right after the table.)

Depth   R
12      root (no null move)
11      ?
10      ?
 9      ?
 8      ?
 7      ?
 6      ?
 5      ?
 4      ?
 3      ?
 2      ?
 1      ?
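By "formula or array" I mean these two shapes (all values below are placeholders, purely to illustrate the form, not a recommendation):

// formula form
R = 2 + depth / 6

// array form, indexed by remaining depth (index 0 unused)
rTable = { 0, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3 }
R = rTable[depth]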

Also, does anyone try to estimate whether a null move is worth doing by restricting it to lazyEval >= beta, where lazyEval is material value plus piece-square value (which I update incrementally, so it is not expensive for me to get)?
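i.e. adding the cheap eval to the guard, something like (my sketch):

if (doNullMove && !inCheck && depth > R && lazyEval >= beta) {
    // only try the null move when the cheap eval already suggests a fail-high is plausible
    ....
}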

Thanks in advance.