Memory storage considerations

thevinenator
Posts: 68
Joined: Tue Jun 02, 2015 11:02 pm
Real Name: Vince

Memory storage considerations

Post by thevinenator » Fri Oct 14, 2016 3:32 pm

Consider the following C code and its unoptimized x64 assembly:

Code: Select all

U8 U8x;
U16 U16x;
U32 U32x;
U64 U64x;

	U8x = 3;
000000013F1C23C1  mov         byte ptr [U8x],3  
	U16x = 7;
000000013F1C23C6  mov         eax,7  
000000013F1C23CB  mov         word ptr [U16x],ax  
	U32x = 15;
000000013F1C23D0  mov         dword ptr [U32x],0Fh  
	U64x = 31;
000000013F1C23D8  mov         qword ptr [U64x],1Fh


U8, U16, etc., refer to unsigned integer types with the corresponding number of bits.

I couldn't find the instruction timings for each of the "xword ptr" forms, but seeing how the assembly turned out makes me think that one can/should use the smallest data size that fits, since there appears to be no difference to the machine, at least in the Intel world.
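
One caveat worth noting before the struct example: the stores above look symmetric, but loads of the narrow types are not entirely free. A minimal sketch, assuming typical x64 codegen (the assembly in the comments is illustrative, not compiler output from the original post):

Code: Select all

U8 U8x;
U64 U64x;
U64 sum;

	sum = U8x + U64x;
	/* the U8 load is typically zero-extended into a full register first: */
	/*     movzx  eax, byte ptr [U8x]                                     */
	/*     add    rax, qword ptr [U64x]                                   */
	/*     mov    qword ptr [sum], rax                                    */

movzx is cheap on recent cores, so this rarely matters outside hot loops, but it is the hidden cost to watch for when mixing widths.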

So, for example, consider the following structure:

Code: Select all

struct BOARD
	{
	U64 Bitboards[14];
	U32 ChessBoard[64];  // serialized representation
	U8 ChessBoard8[64];  // serialized representation
	};

Code that accesses ChessBoard8[] doesn't appear to be any different from code that uses ChessBoard[] (32-bit elements). Using the smaller size could be an advantage if make move/unmake move is implemented as copy/make, since less data needs to be copied from board to board.
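
To make the copy/make point concrete, here is a minimal sketch; MAX_PLY, Stack, and CopyMove are illustrative names I've made up, not from the original post:

Code: Select all

#include <string.h>
#include <stdint.h>

typedef uint8_t  U8;
typedef uint64_t U64;

#define MAX_PLY 128	/* illustrative search depth limit */

struct BOARD
	{
	U64 Bitboards[14];
	U8 ChessBoard8[64];	/* serialized representation, 64 bytes vs. 256 with U32 */
	};

struct BOARD Stack[MAX_PLY];

/* copy/make: duplicate the whole position, then edit the copy */
void CopyMove(int ply)
	{
	memcpy(&Stack[ply + 1], &Stack[ply], sizeof(struct BOARD));
	/* ...apply the move to Stack[ply + 1], leaving Stack[ply] intact for unmake... */
	}

With U8 squares the memcpy moves 176 bytes per node instead of 368 (112 bytes of bitboards plus 64 rather than 256 for the serialized board), and unmake is just stepping back down the stack.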

Is this assertion correct, or are there hidden gotchas that I haven't noticed?
"An Engine's strength flows from the Search. But beware, pruning, extensions, reductions; the dark side of the Search are they. Once you start down the dark path, it will dominate and consume you, as it has to so many developers before.” -- Yoda

hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham
Contact:

Re: Memory storage considerations

Post by hyatt » Fri Oct 14, 2016 7:27 pm

As a general rule, the native word length is the optimal reference size. The only exception is when you are using an array or similar, and you start to factor in the L1 and L2 caches: if going from 64 bits down to 16 improves cache utilization, you would see a gain. Otherwise the CPU actually has to do a little extra work to access a fraction of a 64-bit value.
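
To put a number on the cache point for the board arrays above, a minimal sketch, assuming the common 64-byte cache line (the line size and the typedefs are assumptions, not from the post):

Code: Select all

#include <stdio.h>
#include <stdint.h>

typedef uint8_t  U8;
typedef uint32_t U32;

#define CACHE_LINE 64	/* bytes; typical for current x86 cores */

static U32 ChessBoard[64];	/* serialized representation, 32-bit elements */
static U8  ChessBoard8[64];	/* serialized representation, 8-bit elements  */

int main(void)
	{
	printf("U32 board: %zu bytes, %zu cache lines\n",
	       sizeof(ChessBoard),
	       (sizeof(ChessBoard) + CACHE_LINE - 1) / CACHE_LINE);
	printf("U8  board: %zu bytes, %zu cache lines\n",
	       sizeof(ChessBoard8),
	       (sizeof(ChessBoard8) + CACHE_LINE - 1) / CACHE_LINE);
	return 0;
	}

Scanning the U8 board touches one cache line instead of four, which is the kind of cache-utilization gain described above.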
