personal experience regarding "self learning"

Code, algorithms, languages, construction...
User avatar
deeds
Posts: 652
Joined: Wed Oct 20, 2021 9:24 pm
Location: France
Contact:

Re: personal experience regarding "self learning"

Post by deeds » Wed Mar 01, 2023 11:38 am

Thanks, skynet, for sharing your personal experience regarding "self learning".
I will answer your questions as soon as I have explained my own personal experience regarding "self learning".
Maybe it will answer some of your questions too.
deeds wrote:
Wed Mar 01, 2023 11:06 am
Do you know why an engine does not always play the moves stored in its experience file ?
The information stored in the experience file does not make it possible to calculate a move's effectiveness. There is the score, the depth and the number of times the move has been played (count value), but not the number of games or their results. Since the most effective moves are often found late in the learning (hence Khalid's instruction: min. 500 games/opening), their count value weighs very little in the formula that calculates their quality. The engine then hesitates between them and other, less effective moves, because those get a good quality value thanks to their high count value.
Example :
[images: experience-file entries for this position, showing score, depth and count per move]
Here, 0-0-0 (e1c1) is the most effective move, but it has been played less often => lower count value => lower quality value => the engine will not always prefer this move.
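
To make the count-value problem concrete, here is a minimal sketch in Python. The quality formula Eman really uses is not published in this thread, so the weighting below is purely illustrative; it only shows how a move that was played more often can end up with a better quality value than a move with a better score at the same depth.

Code:

# Hypothetical example: how a count-weighted "quality" can hide the best move.
# The real formula used by the engine is not public; this weighting is made up.
def quality(score_cp, depth, count):
    # Toy formula: quality grows with the score, the depth and the count value.
    return score_cp + 2 * depth + 10 * count

moves = {
    "e1c1 (0-0-0)": {"score_cp": 45, "depth": 30, "count": 12},   # most effective, found late
    "older move":   {"score_cp": 30, "depth": 30, "count": 80},   # discovered early, played a lot
}

for name, m in moves.items():
    print(name, quality(m["score_cp"], m["depth"], m["count"]))

# The older move gets the higher quality because of its count value,
# even though 0-0-0 scores better -> the engine will not always prefer 0-0-0.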


Re: personal experience regarding "self learning"

Post by deeds » Wed Mar 01, 2023 11:39 am

deeds wrote:
Wed Mar 01, 2023 11:06 am
What is the "bonus" effect while learning ?
I know engine trainers who learn every opening from scratch. We know that in chess, different combinations of moves very often lead to the same positions (transpositions). By reusing a single experience file for every learning run, the engine can benefit from the experience data it gathered during previous runs. This is what I call the "bonus" effect. For those who use learning farms, simply merge the experience files at the end of the concurrent learning runs, without forgetting to correct the errors (exp_error_detect).
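
As a rough illustration of the merge step (the binary layout of the .exp files is engine-specific and not described here), here is a sketch in Python that assumes each file has already been dumped to simple records of (position key, move, score, depth, count). Merging sums the counts and keeps the deepest evaluation; transpositions from different move orders pay off automatically because they land on the same position key. A pass like Eman's exp_error_detect would still be needed afterwards.

Code:

# Hypothetical merge of experience data from a learning farm.
# Assumes the .exp files were already dumped to records of
# (position_key, move, score_cp, depth, count); the real binary format differs.
from collections import defaultdict

def merge(dumped_files):
    merged = defaultdict(lambda: {"score_cp": 0, "depth": -1, "count": 0})
    for records in dumped_files:
        for pos_key, move, score_cp, depth, count in records:
            entry = merged[(pos_key, move)]   # transpositions share the same pos_key
            entry["count"] += count           # accumulate how often the move was played
            if depth > entry["depth"]:        # keep the deepest evaluation seen so far
                entry["depth"], entry["score_cp"] = depth, score_cp
    return merged

# Example with two tiny dumps of the same position reached in different learning runs:
farm = [
    [(0xABCD, "e1c1", 45, 30, 12)],
    [(0xABCD, "e1c1", 40, 28, 30)],
]
print(merge(farm)[(0xABCD, "e1c1")])   # count 42, score/depth from the deeper entry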


Re: personal experience regarding "self learning"

Post by deeds » Wed Mar 01, 2023 11:40 am

skynet wrote: Well, aren't the more efficient moves considered the best ones?
Not always. That's what I learned when I started to train engines. The best moves of an engine aren't always the most efficient moves against other engines.
skynet wrote: Otherwise, what is the point of a move if it is not effective?
This move is only the favorite move of the engine for the next xx positions.


Re: personal experience regarding "self learning"

Post by deeds » Wed Mar 01, 2023 11:41 am

skynet wrote: Also I wanted to ask you about your tests, Eman vs SF 15: do you believe that your test(s) were fair? I mean, Eman was using an experience file while SF was using nothing. Personally I see that whole learning process as useless...
If all that learning process is useless, then the experience file brings no advantage to Eman. Furthermore, Eman consumes time analyzing each position. Eman does not use its experimental "experience book" function, nor an opening book. Eman uses as many threads as Stockfish. Now, if the experience file does bring an advantage to Eman, I think my "Eman experience vs Stockfish" tests are as fair as the "AO vs SF8" ones.


Re: personal experience regarding "self learning"

Post by deeds » Wed Mar 01, 2023 11:47 am

deeds wrote:
Wed Mar 01, 2023 11:06 am
Why do we need min. 500 games in order to learn an opening ?
As I discovered Eman very late (v6.95), even though I read everything I could about it and reverse-engineered a few experience files, I can only guess that when Kelly K. and Khalid O. added the learning algorithm to Eman (and others), they tested it and found that it only started to produce effects after several hundred games. Now that I have tested a few learning algorithms on a few openings with Eman and others myself, I confirm that it takes several hundred games.

Example #1 : learning of the "B06 Modern" opening with "Eman experience vs Eman only"

[Event "Eman's learning"]
[Site "deeds"]
[Date "2021.05.24"]
[Round "000021"]
[White "B06"]
[Black "B06"]
[Result "*"]
[ECO "B06"]
[PlyCount "20"]

1. e4 g6 2. d4 Bg7 3. Nc3 d6 4. Be3 c6 5. a4 Nf6 6. Qd2 Nbd7 7. f3 O-O 8.
h4 e5 9. dxe5 Nxe5 10. O-O-O d5 *


Example #2 : learning of the "C89 Spanish, Marshall, Classical" opening with "Eman experience vs Eman only"

[Event "Eman's learning"]
[Site "deeds"]
[Date "2021.05.24"]
[Round "002950"]
[White "eman"]
[Black "eman"]
[Result "*"]
[ECO "C89"]
[PlyCount "36"]

1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4 Nf6 5. O-O Be7 6. Re1 b5 7. Bb3 O-O
8. c3 d5 9. exd5 Nxd5 10. Nxe5 Nxe5 11. Rxe5 c6 12. d4 Bd6 13. Re1 Qh4 14.
g3 Qh3 15. Be3 Bg4 16. Qd3 Rae8 17. Nd2 Re6 18. a4 f5 *


Example #3 : learning of the "C00 French, Chigorin variation" opening with "Eman experience vs Eman only"

[Event "Eman's learning"]
[Site "deeds"]
[Date "2021.05.24"]
[Round "000008"]
[White "C00"]
[Black "C00"]
[Result "*"]
[ECO "C00"]
[PlyCount "20"]

1. e4 e6 2. Qe2 Be7 3. b3 d5 4. Bb2 Nf6 5. exd5 Qxd5 6. Nc3 Qa5 7. Nf3 O-O
8. Ne5 Nbd7 9. Nc4 Qf5 10. O-O-O b6 *

...


Re: personal experience regarding "self learning"

Post by deeds » Wed Mar 01, 2023 11:47 am

...
In fact, it takes hundreds of games because there are several "key" positions, each of which has several moves that are "close" in terms of score.
The learning run must be long enough to play the combinations of these "close" moves in the "key" positions several times, in order to discover which combinations are the most effective.
Here are some examples of "key" positions of the opening "C89 Spanish, Marshall, Classical" :
[images: three "key" positions from the C89 Spanish, Marshall, Classical]
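
A back-of-the-envelope sketch of why this adds up to hundreds of games (the counts of key positions and close moves below are only assumptions for illustration): with three "key" positions and three "close" candidate moves in each, there are 3^3 = 27 move combinations, and playing each combination about 20 times to separate near-equal scores already requires roughly 540 games, in line with the min. 500 games/opening guideline.

Code:

# Illustrative arithmetic only; the numbers of key positions, close moves and
# repetitions are assumptions, not values taken from Eman.
key_positions = 3    # "key" positions in the opening
close_moves = 3      # near-equal candidate moves in each key position
repeats = 20         # games per combination needed to tell close scores apart

combinations = close_moves ** key_positions   # 27 distinct move combinations
games_needed = combinations * repeats         # about 540 games
print(combinations, games_needed)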


Re: personal experience regarding "self learning"

Post by deeds » Wed Mar 01, 2023 11:51 am

Sedat Canbaz wrote: 1st of all,
How nice to read useful comments by experts
A lot of valuable info... my respect...

Btw, while we are talking about FAIR conditions,

Except that
Many of us are running learning engines vs NON-learning ones...

Just I'd like to give several more examples:
- Many of us are running Private vs Available matches
- Many of us are running NNUE vs NON-NNUE matches
- Many of us are running Small in size vs BIG in size
(Here I mean not only books, but engines too)
- Many of us are running BIN vs CTG matches
- Many of us are running faster CPU vs slower CPU
And so on...

In other words,
In our chess hobby, there are a lot of things
Where we run and test with 'unfair' conditions... !)

Meanwhile,
Let me please say some words regarding Lc0 (GPU version),

Well, here we cannot be sure!

Perhaps Lc0 is the strongest engine in the world? Who knows?

Note also, if you are not aware..
Stockfish NNUE is based on Lc0's played data

So, without Lc0's data,
Stockfish plus SF-based ones would not be as strong as they are nowadays!

One thing more,
Well, I have already stated it many many times and I'll try once more:
For better performance, Lc0 requires a fast high-end graphics card!

On the other hand,
It's a mystery how to equalize or adapt the speeds (GPU vs CPU), e.g.
Personally I use the V-Ray tool, but again there is no proof about chess speeds!
But if nothing else.. at least V-Ray measures the CPU and GPU speeds..

Btw, one of the funniest things is that
People are trying to rely on the famous Leela ratio formula,
And sorry to say, but I can't be so naive... if it's not so clear, then:
1st, is there any data where the NPS values are clear (AlphaZero vs SF8 match)?
Because after checking the played games... I could not see any NPS records
Even if these NPS were available.. how can we be sure that the formula is right..?
E.g. I use Hyper-threading (SMT) disabled.. and it's faster, better..
Plus, do you know that these higher HT nps values do not mean much in chess?
And in case of mixing, comparing NON-available NPS values (CPU vs GPU),
Sorry to say, but I can summarize the mentioned formula:
It's like mixing apples with oranges... no more no less really..))

Hope it helps...

Best,
Sedat


Re: personal experience regarding "self learning"

Post by deeds » Wed Mar 01, 2023 11:51 am

Sedat Canbaz wrote: Just I'd like to add,
I like the engines which are capable of learning as well... nothing wrong with that..
I can even say that
the learning feature can be useful; at least in lost game positions the engine can
be forced not to play the same critical moves again, etc.

Greetings


Re: personal experience regarding "self learning"

Post by deeds » Wed Mar 01, 2023 11:56 am

edit : Sorry guys, I'm only going to answer far fewer questions; I'm no longer comfortable with some people being shamefully supported by OCF.
edit : OCF's moderation doesn't check evidence, and they don't even do their own checks before acting.
edit : So I think I'm going to finish this and then I'm out of OCF...
deeds wrote:
Wed Mar 01, 2023 11:06 am
edit : What is the main difference between an opening book (I mean the polyglot bin books, not the live books of the GUI) and an experience file ?
Main difference :

Opening books don't learn ! They are like dead things ! Quite the opposite of chess !


Other differences :

Experience files don't need the GUI to grant them an extra delay when the TC has an increment, as GUIs did for opening books.
They don't need human intervention to play other lines, as opening books do.
There is no risk of replaying exactly the same line that has just lost several games.

Engines with a learning feature learn in real time; tourneys are alive.
We can never predict the scenario when learning an opening.
They keep what variety is still left in chess.


But I must admit that the opening books and the experience files share the same defect : they don't know where they are in the game phases loool !
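To illustrate the "dead thing" point on the book side: a polyglot .bin book is just an array of fixed 16-byte entries (8-byte position key, 2-byte move, 2-byte weight, 4-byte learn field, big-endian, sorted by key), written once when the book is built and never updated by the engine during play. A minimal reader sketch in Python:

Code:

# Minimal reader for polyglot .bin book entries.
# Each entry is 16 bytes: key (uint64), move (uint16), weight (uint16),
# learn (uint32), all big-endian, sorted by key.
import struct

def read_bin_entries(path):
    entries = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(16)
            if len(chunk) < 16:
                break
            key, move, weight, learn = struct.unpack(">QHHI", chunk)
            entries.append((key, move, weight, learn))
    return entries

# The weights are fixed when the book is built; nothing here changes after a lost game.
# An experience file, by contrast, is rewritten by the engine as it plays.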


Re: personal experience regarding "self learning"

Post by deeds » Wed Mar 01, 2023 11:59 am

deeds wrote:
Wed Mar 01, 2023 11:06 am
What openings to learn ?
skynet wrote: 4) ...Seriously, in chess there are 170,000 possible combinations after only the first 4 moves. If we take into account that the engine has to play each position with White and Black, we get 170,000 x 2 = 340,000 possible combinations, and now multiply this amount by the number of games the engine has to play (about 500 each?) for it to learn something. Also take into account that with different time controls (as well as numbers of cores) the engine will reach different depths, at which there will be at least 2 possible moves, and that every patch slightly changes the position evaluations - so the training of the engine will never end...
For me, from the start it was very easy to choose which openings Eman had to learn, because I keep my reference tournament* up to date.
In addition, Eman does not lose often, so I first selected the rare openings where it had lost.
That way I immediately saw the improvement of Eman with an experience file trained on the openings it was going to play in tourneys.

Khalid had warned that the learning algorithm worked better several moves away from the start position, but out of curiosity I indulged myself with openings like 1.e4, 1.d4 or 1.c4.
With 2000 learned games per opening I was far from the mark. Even with 1 million games loool

Since by then Eman only lost 1 or 2 openings out of 100, I compared Eman's performance with that of other engines on the remaining openings.
I trained it on those where the other engines did better.
Eman learned the attack/defense lines from its sparring partners.
Besides, Eman thanks HypnoS, JudaS (MZ), BrainLearn, ShashChess (AM) and Aurora (Sarona) again loool

Then the cheaters came in with their unfair "book vs experience" tests, using giant experience files merged from different engines.
So I trained Eman on the openings where the cheaters had more experience data.

And finally, I use learning to improve the Black win rate after 1.g4.

* edit : all new engines play 200 games (100 openings) against the latest version of nodchip (link)
