2 TB HD for Tablebases

BB+
Posts: 1484
Joined: Thu Jun 10, 2010 4:26 am

Re: 2 TB HD for Tablebases

Post by BB+ » Wed Jul 07, 2010 10:08 am

But the current EGTB 8K blocksize was simply a compromise between I/O bandwidth, CPU utilization, and compression efficiency. In reality, when we were testing, we found that the block size really had an optimal value for each different CPU/disk-drive combination.
This doesn't surprise me. In almost every "heavy-computing" project I've done, there has been a heady discussion about the best I/O/CPU mix, and the right answer has always been "test it and see". The most recent one I was involved with was multiplying huge integers that don't fit into memory via a disk-based FFT: we had around 128GB of RAM, the integers were maybe 256GB each, and the hard drive capacity was 4TB or more. I think using about 6 of the 24 available CPUs was all we could manage, as using more would have made the chunk size too small, and the decimation method would then have shifted more of the I/O to another stage (though I forget the details).
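As an aside, here is a minimal sketch of the kind of "test it and see" benchmark I mean: it just times raw sequential reads of a file at several candidate block sizes. The file name "egtb.dat" is a placeholder, and a real test would also decompress each block and take care to defeat the OS page cache (O_DIRECT, or dropping caches between runs).

/* blocksize_bench.c -- hedged sketch: time raw sequential reads of a file
   at several block sizes.  "egtb.dat" is a placeholder file name.        */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    const size_t sizes[] = { 4*1024, 8*1024, 32*1024, 64*1024, 1024*1024 };
    for (size_t i = 0; i < sizeof(sizes)/sizeof(sizes[0]); i++) {
        FILE *f = fopen("egtb.dat", "rb");       /* placeholder file */
        if (!f) { perror("egtb.dat"); return 1; }
        char *buf = malloc(sizes[i]);
        size_t total = 0, n;
        double t0 = now_sec();
        while ((n = fread(buf, 1, sizes[i], f)) > 0)
            total += n;                          /* bytes read at this size */
        double dt = now_sec() - t0;
        printf("block %7zu bytes: %.1f MB/s\n",
               sizes[i], total / (1024.0 * 1024.0) / dt);
        free(buf);
        fclose(f);
    }
    return 0;
}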

I think that Gaviota allows a great deal of control over the decompression schemes used, but the indexing system seems fixed. The block size looks like 32K:
size_t block_mem = 32 * 1024; /* 32k fixed, needed for the compression schemes */
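To be clear about what I mean by a fixed indexing system, here is a generic, runnable sketch (not Gaviota's actual code) of how fixed-size-block indexing usually works: the position index picks a block and an offset within it, a small table of block offsets locates the block in the file, and the whole block is read (and, in real code, decompressed) to fetch one entry. The toy file here stores blocks uncompressed purely as a stand-in.

/* Generic sketch of fixed-block tablebase indexing (NOT Gaviota's code).
   For simplicity the "compression" here is the identity; a real table
   would store each 32K block compressed and record its compressed size. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define BLOCK_MEM (32 * 1024)   /* uncompressed bytes per block        */
#define NBLOCKS   4             /* toy table: 4 blocks = 128K entries  */

/* block_offsets[i] = byte offset in the file where block i starts;
   block_offsets[NBLOCKS] = end of file, so sizes are differences.     */
static long block_offsets[NBLOCKS + 1];

/* Look up one 1-byte entry (e.g. a DTM value) by position index. */
static int probe(FILE *f, uint64_t pos_index, unsigned char *out)
{
    uint64_t block  = pos_index / BLOCK_MEM;  /* which block           */
    uint64_t within = pos_index % BLOCK_MEM;  /* offset inside block   */
    if (block >= NBLOCKS) return -1;

    size_t clen = (size_t)(block_offsets[block + 1] - block_offsets[block]);
    unsigned char buf[BLOCK_MEM];
    if (fseek(f, block_offsets[block], SEEK_SET) != 0) return -1;
    if (fread(buf, 1, clen, f) != clen) return -1;
    /* A real probe would decompress buf here before indexing into it. */
    *out = buf[within];
    return 0;
}

int main(void)
{
    /* Build a toy table file: entry i holds the value (i % 251). */
    FILE *f = tmpfile();
    for (long i = 0; i < (long)NBLOCKS * BLOCK_MEM; i++) {
        if (i % BLOCK_MEM == 0) block_offsets[i / BLOCK_MEM] = ftell(f);
        fputc((int)(i % 251), f);
    }
    block_offsets[NBLOCKS] = ftell(f);

    unsigned char v;
    uint64_t idx = 3 * BLOCK_MEM + 12345;     /* some position index   */
    if (probe(f, idx, &v) == 0)
        printf("entry %llu = %d (expected %llu)\n",
               (unsigned long long)idx, (int)v,
               (unsigned long long)(idx % 251));
    fclose(f);
    return 0;
}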

The RobboTotalBases are "not recommended" for use in search (there are Shredder-style "RobboTripleBases" for that), but they use 64K blocks with BWT-style compression, though something called "hyperindexing" seems to allow the block size to be 1MB, with an extra 16-way index inside each block. Any comparison to Nalimov is almost hopeless, given the different constraints.
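For illustration, here is a generic two-level sketch of what such "hyperindexing" could look like (my own reconstruction, not the actual Robbo code): the top-level index only records 1MB super-blocks, and a small 16-entry table inside each super-block locates its sixteen 64K sub-blocks, so the top-level index shrinks by a factor of 16.

/* Generic two-level ("hyperindex") sketch, NOT the actual Robbo code:
   the file index only records 1MB super-blocks; a 16-entry table at the
   start of each super-block locates the sixteen 64K sub-blocks.        */
#include <stdio.h>
#include <stdint.h>

#define SUPER_MEM (1024 * 1024)           /* uncompressed bytes per super-block */
#define SUB_COUNT 16                      /* sub-blocks per super-block         */
#define SUB_MEM   (SUPER_MEM / SUB_COUNT) /* 64K uncompressed per sub-block     */

typedef struct {
    uint64_t super_offset;                /* from the (small) top-level index   */
    uint32_t sub_offset[SUB_COUNT + 1];   /* read from the super-block header   */
} super_index_t;

/* Map a position index to (super-block, sub-block, offset-in-sub-block). */
static void locate(uint64_t pos_index,
                   uint64_t *super, unsigned *sub, unsigned *within)
{
    *super  = pos_index / SUPER_MEM;
    *sub    = (unsigned)((pos_index % SUPER_MEM) / SUB_MEM);
    *within = (unsigned)(pos_index % SUB_MEM);
}

int main(void)
{
    uint64_t super; unsigned sub, within;
    locate(5u * 1024 * 1024 + 70000, &super, &sub, &within);
    printf("super-block %llu, sub-block %u, offset %u\n",
           (unsigned long long)super, sub, within);
    return 0;
}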

Carey
Posts: 1
Joined: Fri Jun 11, 2010 2:38 am

Re: 2 TB HD for Tablebases

Post by Carey » Wed Jul 07, 2010 5:03 pm

BB+ wrote: This doesn't surprise me. In almost every "heavy-computing" project I've done, there has been a heady discussion about the best I/O/CPU mix, and the right answer has always been "test it and see". The most recent one I was involved with was multiplying huge integers that don't fit into memory via a disk-based FFT: we had around 128GB of RAM, the integers were maybe 256GB each, and the hard drive capacity was 4TB or more. I think using about 6 of the 24 available CPUs was all we could manage, as using more would have made the chunk size too small, and the decimation method would then have shifted more of the I/O to another stage (though I forget the details).
Just out of curiosity, what were you doing to need to multiply such large numbers?

I used to do some heavy number crunching years ago, but that was back when systems had megabytes and a 500 GB drive was a dream. The most I ever personally did was about 32-meg numbers (on a 2-meg system), but I researched and wrote code to handle numbers of several gigadigits.

By 'FFT' I certainly assume you don't mean a real floating-point FFT, but rather something number-theoretic, or even a Schönhage, Schönhage-Strassen, or Nussbaumer transform? (I know... not a Nussie; the data accesses have no cache locality, and the performance would be just about the worst you could do.)


Sorry for the off-topic post, but I was just curious...

BB+
Posts: 1484
Joined: Thu Jun 10, 2010 4:26 am

Re: 2 TB HD for Tablebases

Post by BB+ » Thu Jul 08, 2010 11:01 pm

Many things can be mapped to integer multiplication. In fact, since the asymptotics there are so good, this is standard practice for some problems. I think we had polynomials, or initial segments of power series. An integer-based FFT was used.
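As a concrete (if toy-sized) illustration of mapping a problem to integer multiplication, here is a hedged sketch of Kronecker substitution using GMP: pack the coefficients of two polynomials into big integers with enough zero bits per coefficient, do a single big multiplication, and read the product's coefficients back out of the bits. This is not the code from the project I described, just the general idea.

/* kronecker.c -- hedged sketch of Kronecker substitution with GMP:
   multiply two polynomials via one big-integer multiplication.
   Build with: gcc kronecker.c -lgmp                                    */
#include <stdio.h>
#include <gmp.h>

#define N 4    /* number of coefficients per polynomial (toy size)      */
#define K 32   /* bits per packed coefficient; must be large enough that
                  no coefficient of the product overflows 2^K           */

/* Evaluate the polynomial at x = 2^K, i.e. pack coefficients into r. */
static void pack(mpz_t r, const unsigned long *c, int n)
{
    mpz_set_ui(r, 0);
    for (int i = n - 1; i >= 0; i--) {
        mpz_mul_2exp(r, r, K);       /* r <<= K   */
        mpz_add_ui(r, r, c[i]);      /* r += c[i] */
    }
}

int main(void)
{
    /* (1 + 2x + 3x^2 + 4x^3) * (5 + 6x + 7x^2 + 8x^3) */
    unsigned long a[N] = {1, 2, 3, 4}, b[N] = {5, 6, 7, 8};

    mpz_t A, B, C, coeff;
    mpz_inits(A, B, C, coeff, NULL);

    pack(A, a, N);
    pack(B, b, N);
    mpz_mul(C, A, B);                /* the single big multiplication */

    /* Unpack: coefficient i of the product is bits [i*K, (i+1)*K) of C. */
    for (int i = 0; i < 2 * N - 1; i++) {
        mpz_tdiv_r_2exp(coeff, C, K);    /* low K bits    */
        mpz_tdiv_q_2exp(C, C, K);        /* shift right K */
        gmp_printf("x^%d: %Zd\n", i, coeff);
    }

    mpz_clears(A, B, C, coeff, NULL);
    return 0;
}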

BB+
Posts: 1484
Joined: Thu Jun 10, 2010 4:26 am

Re: 2 TB HD for Tablebases

Post by BB+ » Fri Jul 09, 2010 10:18 pm

If I read Schaeffer's paper correctly (Section 4), Chinook was using 1K slices, at least in the 90s.
http://webdocs.cs.ualberta.ca/~jonathan ... tabases.ps
Even though the "hit rate" for positions already in cache could be 80-95%, Chinook often became I/O bound (and I presume checkers has more data locality than chess, to boot). In a different paper it was said that "even with 1GB of RAM, the program quickly becomes I/O bound" (and this seems to have been before SMP was widespread).
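A quick back-of-envelope calculation shows why even an 80-95% hit rate still leaves you I/O bound; the 20 microsecond in-memory cost and 10 millisecond disk cost below are my own illustrative assumptions, not numbers from the papers.

/* Back-of-envelope: expected probe cost versus cache hit rate.
   The cost numbers below are illustrative assumptions only.    */
#include <stdio.h>

int main(void)
{
    const double hit_us  = 20.0;      /* assumed cost of an in-cache probe  */
    const double miss_us = 10000.0;   /* assumed cost of a disk-bound probe */
    const double rates[] = { 0.80, 0.90, 0.95, 0.99 };

    for (int i = 0; i < 4; i++) {
        double h   = rates[i];
        double avg = h * hit_us + (1.0 - h) * miss_us;
        printf("hit rate %.0f%%: avg %.0f us/probe, %.0f%% of time in I/O\n",
               100.0 * h, avg, 100.0 * (1.0 - h) * miss_us / avg);
    }
    return 0;
}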

BB+
Posts: 1484
Joined: Thu Jun 10, 2010 4:26 am

Re: 2 TB HD for Tablebases

Post by BB+ » Sat Jul 31, 2010 12:26 am

I found a post by Bourzutschky in which he says that the FEG format (which he reverse-engineered) uses 32K slices:
The original FEG data is about 90 GB, smaller partially because of larger block size (32k as opposed to 8k), and a more efficient compression algorithm.
http://kirill-kryukov.com/chess/discuss ... ?f=6&t=639
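For intuition on why a larger block size by itself improves the compressed size, here is a hedged sketch that compresses the same synthetic data in independent 8K blocks versus 32K blocks and compares the totals. zlib is only a stand-in; FEG and Nalimov use their own compression schemes, and the synthetic data is not real tablebase data.

/* blocks_vs_ratio.c -- hedged sketch: compress the same data in 8K and
   32K independent blocks and compare totals.  zlib is a stand-in for
   whatever scheme FEG/Nalimov actually use.  Build: gcc ... -lz        */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

#define DATA_LEN (4 * 1024 * 1024)

static unsigned long compressed_total(const unsigned char *data, size_t len,
                                      size_t block)
{
    unsigned long total = 0;
    unsigned char *out = malloc(compressBound(block));
    for (size_t off = 0; off < len; off += block) {
        uLongf outlen = compressBound(block);
        size_t n = (len - off < block) ? len - off : block;
        compress2(out, &outlen, data + off, n, Z_BEST_COMPRESSION);
        total += outlen;             /* each block compressed independently */
    }
    free(out);
    return total;
}

int main(void)
{
    /* Fake "tablebase-like" data: long runs with occasional changes. */
    unsigned char *data = malloc(DATA_LEN);
    for (size_t i = 0; i < DATA_LEN; i++)
        data[i] = (unsigned char)((i / 777) % 5);

    printf(" 8K blocks: %lu bytes\n", compressed_total(data, DATA_LEN, 8 * 1024));
    printf("32K blocks: %lu bytes\n", compressed_total(data, DATA_LEN, 32 * 1024));
    free(data);
    return 0;
}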
