To kick off some technical discussions

Code, algorithms, languages, construction...
hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham

To kick off some technical discussions

Post by hyatt » Thu Jun 10, 2010 4:18 am

I have just finished testing a small modification to Crafty's LMR algorithm. When everyone started this, conventional wisdom said that certain moves should not be reduced, in particular captures. I've changed this particular idea just a bit. Since I always use SEE (or a faster equivalent when possible) to order captures, so that winning or equal captures are searched early, while losing captures are delayed until after the hash move, good captures, and killer moves, I wondered "Why would I not want to reduce dumb captures?" A simple fix to address this and I got a quick +15 Elo verified with my usual 30,000 game cluster tests.

Easy enough for anyone to try. One other idea for another 5-10 Elo is that I also now no longer extend checks that appear unsafe according to SEE.

Roughly +25 Elo for a very easy effort.
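Sketched in C, the two ideas above amount to a small change in the LMR eligibility test: captures are no longer exempt from reduction just because they are captures, and checks are only extended when SEE says they are safe. This is a minimal illustration, not Crafty's actual code; the struct fields, thresholds (depth 3, four moves searched), and function names are all assumptions.

```c
#include <stdbool.h>

/* Hypothetical move descriptor; field names are illustrative,
   not Crafty's actual data structures. */
typedef struct {
    bool is_capture;
    bool gives_check;
    int  see_score;     /* static exchange evaluation, in centipawns */
    int  move_number;   /* 1-based position in the move ordering */
} Move;

/* A capture that SEE says loses material ("dumb capture") is
   treated like any other late move and may be reduced. */
bool can_reduce(const Move *m, int depth, int moves_searched)
{
    if (depth < 3 || moves_searched < 4)     /* assumed LMR thresholds */
        return false;
    if (m->gives_check)                      /* checking moves stay exempt */
        return false;
    if (m->is_capture && m->see_score >= 0)  /* good/equal captures stay exempt */
        return false;
    return true;   /* quiet late moves and SEE-losing captures */
}

/* Second idea: only extend checks that SEE considers safe. */
bool extend_check(const Move *m)
{
    return m->gives_check && m->see_score >= 0;
}
```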

kingliveson
Posts: 1388
Joined: Thu Jun 10, 2010 1:22 am
Real Name: Franklin Titus
Location: 28°32'1"N 81°22'33"W

Re: To kick off some technical discussions

Post by kingliveson » Thu Jun 10, 2010 7:02 am

hyatt wrote:I have just finished testing a small modification to Crafty's LMR algorithm. When everyone started this, conventional wisdom said that certain moves should not be reduced, in particular captures. I've changed this particular idea just a bit. Since I always use SEE (or a faster equivalent when possible) to order captures, so that winning or equal captures are searched early, while losing captures are delayed until after the hash move, good captures, and killer moves, I wondered "Why would I not want to reduce dumb captures?" A simple fix to address this and I got a quick +15 Elo verified with my usual 30,000 game cluster tests.

Easy enough for anyone to try. One other idea for another 5-10 Elo is that I also now no longer extend checks that appear unsafe according to SEE.

Roughly +25 elo for a very easy effort.
Basically, what you're doing is smarter pruning, which translates into a speed gain: essentially, don't waste time on a particular subtree when the exchange evaluation already tells us it's a bad move.
PAWN : Knight >> Bishop >> Rook >> Queen

Mincho Georgiev
Posts: 31
Joined: Thu Jun 10, 2010 7:30 am

Re: To kick off some technical discussions

Post by Mincho Georgiev » Thu Jun 10, 2010 10:18 am

I thought about it a while ago. Let's hope that my SEE inefficiency will not eat up the real gain. Thanks for pointing that out, Bob!

zullil
Posts: 82
Joined: Thu Jun 10, 2010 10:17 am
Real Name: Louis Zulli
Location: Pennsylvania, USA

Re: To kick off some technical discussions

Post by zullil » Thu Jun 10, 2010 10:30 am

hyatt wrote:I have just finished testing a small modification to Crafty's LMR algorithm. When everyone started this, conventional wisdom said that certain moves should not be reduced, in particular captures. I've changed this particular idea just a bit. Since I always use SEE (or a faster equivalent when possible) to order captures, so that winning or equal captures are searched early, while losing captures are delayed until after the hash move, good captures, and killer moves, I wondered "Why would I not want to reduce dumb captures?" A simple fix to address this and I got a quick +15 Elo verified with my usual 30,000 game cluster tests.

Easy enough for anyone to try. One other idea for another 5-10 Elo is that I also now no longer extend checks that appear unsafe according to SEE.

Roughly +25 elo for a very easy effort.
Great! So is 23.3 coming soon? :roll:

Chris Whittington
Posts: 437
Joined: Wed Jun 09, 2010 6:25 pm

Re: To kick off some technical discussions

Post by Chris Whittington » Thu Jun 10, 2010 12:36 pm

hyatt wrote:I have just finished testing a small modification to Crafty's LMR algorithm. When everyone started this, conventional wisdom said that certain moves should not be reduced, in particular captures. I've changed this particular idea just a bit. Since I always use SEE (or a faster equivalent when possible) to order captures, so that winning or equal captures are searched early, while losing captures are delayed until after the hash move, good captures, and killer moves, I wondered "Why would I not want to reduce dumb captures?" A simple fix to address this and I got a quick +15 Elo verified with my usual 30,000 game cluster tests.

Easy enough for anyone to try. One other idea for another 5-10 Elo is that I also now no longer extend checks that appear unsafe according to SEE.

Roughly +25 elo for a very easy effort.
Hi Bob,

As I read you, and also many times in the past, your objective is to raise Elo. In practice, for you and also the others, this means raising Elo against a pool of other (similarly minded?) machines. I know you and others also play and test against all comers on servers, but in practice I suspect that coding changes are realistically, and more or less exclusively, tested by large numbers of machine-machine games.

My assertion has always been that machine-pool testing runs the danger of leading up a cul-de-sac, but I have another criticism...

What the technique does is optimise statistically: try to win more often than you lose over a large game set and a decent-sized pool of machine opponents. Any statistical optimisation means you win some and you lose some, but, more importantly, it means you do well in a large pool of positions/games (caveat: against like-behaving opponents), OK in another large pool of positions, and badly in a (small?) pool of positions. For an intelligent opponent, therefore, the strategy should be to steer into those regions where your machine statistically does badly; although those regions may be small in your testing, they may be larger in reality (whatever that might mean!) and larger against intelligently organised opposition.

I guess what I'm saying is that although your testing methodology appears smart and appears to work, it is in fact, due to its all-encompassing statistical nature, really quite dumb. I bet there is no looking at individual games/positions from a chess perspective, looking for what went wrong or for weaknesses, because the methodology isn't interested in individual games or positions.

And every time you prune something, a bad capture or a bad check (which may not be bad, of course), you are opening a window for a smart entity to step through and take you on a journey you never expected.

zamar
Posts: 16
Joined: Thu Jun 10, 2010 1:28 am

Re: To kick off some technical discussions

Post by zamar » Thu Jun 10, 2010 3:38 pm

hyatt wrote:I have just finished testing a small modification to Crafty's LMR algorithm. When everyone started this, conventional wisdom said that certain moves should not be reduced, in particular captures. I've changed this particular idea just a bit. Since I always use SEE (or a faster equivalent when possible) to order captures, so that winning or equal captures are searched early, while losing captures are delayed until after the hash move, good captures, and killer moves, I wondered "Why would I not want to reduce dumb captures?" A simple fix to address this and I got a quick +15 Elo verified with my usual 30,000 game cluster tests.
We tried this with Stockfish some months back and it didn't work for us. However, this was before we introduced log-based LMR, so we might want to retry some new variation of this idea.
Easy enough for anyone to try. One other idea for another 5-10 Elo is that I also now no longer extend checks that appear unsafe according to SEE.
Interesting! We must definitely try this.
Roughly +25 elo for a very easy effort.
Thanks for sharing these ideas!

Chan Rasjid
Posts: 33
Joined: Thu Jun 10, 2010 4:41 pm
Real Name: Chan Rasjid

Re: To kick off some technical discussions

Post by Chan Rasjid » Thu Jun 10, 2010 5:56 pm

Should not dumb captures be deleted?

A QxN with the N protected by a pawn is very bad for the Q to make. It might be possible to delete them in QS, or deeper in QS. In full search, I can't see why they could not be reduced.

Of course, it is easier to detect protection by a P/N; detecting protection by a B/R is more costly.
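The QxN-protected-by-a-pawn case is exactly what SEE scores negatively, with no need for separate P/N versus B/R detection. A textbook gain-list exchange evaluator (not Crafty's SEE; the piece values and fixed-size array are illustrative) shows the point:

```c
/* Simplified static exchange evaluation on one square.
   target_value: value of the piece initially captured.
   capturer_values[]: values of the capturing pieces, in the order
   they capture (sides alternate, ours first). n_capturers <= 32. */
int see_exchange(int target_value, const int capturer_values[], int n_capturers)
{
    int gain[32];
    gain[0] = target_value;   /* what the first capture wins outright */
    for (int i = 1; i < n_capturers; i++)
        gain[i] = capturer_values[i - 1] - gain[i - 1];
    /* Walk backward: each side may decline to continue the exchange. */
    for (int i = n_capturers - 1; i > 0; i--) {
        int v = -gain[i];
        if (v < gain[i - 1])
            gain[i - 1] = v;
    }
    return gain[0];
}
```

With illustrative values N=300, Q=975, P=100, the sequence "Q takes N, pawn retakes Q" evaluates to 300 - 975 = -675, so the capture sorts with the losing captures automatically.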

Rasjid

Jeremy Bernstein
Site Admin
Posts: 1226
Joined: Wed Jun 09, 2010 7:49 am
Real Name: Jeremy Bernstein
Location: Berlin, Germany

Re: To kick off some technical discussions

Post by Jeremy Bernstein » Thu Jun 10, 2010 6:00 pm

Chan Rasjid wrote:Should not dumb captures be deleted?

A QxN with the N protected by a pawn is very bad for the Q to make. It might be possible to delete them in QS, or deeper in QS. In full search, I can't see why they could not be reduced.

Of course, it is easier to detect protection by a P/N; detecting protection by a B/R is more costly.

Rasjid
One man's dumb capture is another man's brilliant sacrifice.

Jeremy

hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham

Re: To kick off some technical discussions

Post by hyatt » Thu Jun 10, 2010 6:31 pm

Mincho Georgiev wrote:I thought about it a while ago. Let's hope that my SEE inefficiency will not eat up the real gain. Thanks for pointing that out, Bob!
A couple of key points.

(1) I always sort captures based on SEE (except for obvious winning moves like PxQ or BxR which win no matter what). Those captures with expected gain >= 0 are searched right after the hash table move.

(2) If you look at the Crafty source, I have a "phase" variable that tells me what phase of move selection I am in, from "HASH_MOVE" to "CAPTURE_MOVES" to "KILLER_MOVES" to "REMAINING_MOVES". I do not reduce moves until I get to REMAINING_MOVES (effectively the L (late) in LMR). So for me, there are no extra SEE calls. I have already used SEE to choose which captures are searched in CAPTURE_MOVES, leaving the rest for REMAINING_MOVES. I therefore reduce anything in REMAINING_MOVES (except for moves that give check). So there is really no extra SEE usage at all.

The net effect of this is to simply expend less effort on bad captures, just as we expend less effort on moves that appear to be ineffective for other reasons.
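The phase-driven rule boils down to a one-line predicate. The enum names below mirror the post; the predicate itself is an illustrative sketch, not Crafty's actual code, and it assumes SEE-losing captures were already deferred to REMAINING_MOVES during move ordering, so no SEE call appears here at all.

```c
#include <stdbool.h>

/* Phases of move selection, in search order, as named in the post. */
typedef enum {
    HASH_MOVE,
    CAPTURE_MOVES,    /* SEE-winning and equal captures only */
    KILLER_MOVES,
    REMAINING_MOVES   /* quiet moves plus SEE-losing captures */
} Phase;

/* Reduction applies only once the selector reaches REMAINING_MOVES,
   and never to moves that give check. */
bool reducible(Phase phase, bool gives_check)
{
    return phase == REMAINING_MOVES && !gives_check;
}
```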

hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham

Re: To kick off some technical discussions

Post by hyatt » Thu Jun 10, 2010 6:38 pm

Chris Whittington wrote:
hyatt wrote:I have just finished testing a small modification to Crafty's LMR algorithm. When everyone started this, conventional wisdom said that certain moves should not be reduced, in particular captures. I've changed this particular idea just a bit. Since I always use SEE (or a faster equivalent when possible) to order captures, so that winning or equal captures are searched early, while losing captures are delayed until after the hash move, good captures, and killer moves, I wondered "Why would I not want to reduce dumb captures?" A simple fix to address this and I got a quick +15 Elo verified with my usual 30,000 game cluster tests.

Easy enough for anyone to try. One other idea for another 5-10 Elo is that I also now no longer extend checks that appear unsafe according to SEE.

Roughly +25 elo for a very easy effort.
Hi Bob,

As I read you, and also many times in the past, your objective is to raise Elo. In practice, for you and also the others, this means raising Elo against a pool of other (similarly minded?) machines. I know you and others also play and test against all comers on servers, but in practice I suspect that coding changes are realistically, and more or less exclusively, tested by large numbers of machine-machine games.

My assertion has always been that machine-pool testing runs the danger of leading up a cul-de-sac, but I have another criticism...

What the technique does is optimise statistically: try to win more often than you lose over a large game set and a decent-sized pool of machine opponents. Any statistical optimisation means you win some and you lose some, but, more importantly, it means you do well in a large pool of positions/games (caveat: against like-behaving opponents), OK in another large pool of positions, and badly in a (small?) pool of positions. For an intelligent opponent, therefore, the strategy should be to steer into those regions where your machine statistically does badly; although those regions may be small in your testing, they may be larger in reality (whatever that might mean!) and larger against intelligently organised opposition.

I guess what I'm saying is that although your testing methodology appears smart and appears to work, it is in fact, due to its all-encompassing statistical nature, really quite dumb. I bet there is no looking at individual games/positions from a chess perspective, looking for what went wrong or for weaknesses, because the methodology isn't interested in individual games or positions.

And every time you prune something, a bad capture or a bad check (which may not be bad, of course), you are opening a window for a smart entity to step through and take you on a journey you never expected.
As the old 14th-century world maps used to have marked on them in various places, "Here there be dragons..." That applies here as well. I am playing against a pool of strong programs, including Stockfish, which may well be the best there is right now, or at least very close to it. If I statistically win more games against a pool of good players using a large set of very equal starting positions, I'm confident that we are getting better and better. Other independent testing has verified this, as we continue to do better on every rating list that exists.

We _do_ look at individual games, just not the games we play on the cluster (at 30,000 games every couple of hours or so, that is intractable). But we still play lots of games vs Rybka, Robo* and such on ICC, and the longer games we spend a lot of time examining. Others in our group play slow games on local hardware and look at the games carefully there as well.

There is no doubt that every "trick" opens a hole. But if it improves Elo, it also must be closing an even larger hole. I see no other way to make real progress. We tried, for years, to look at games and make judgements. And when we started the cluster stuff, we did both to see how they compared. And quite often a "fix" that improved the play in a single game we were examining would cause a drastic drop in Elo overall. Now we are fairly confident that any fix we incorporate actually works in a large set of different circumstances without breaking anything.
