
Memory storage considerations

Posted: Fri Oct 14, 2016 3:32 pm
by thevinenator
Consider the following C code and its unoptimized x64 assembly:

Code:
U8 U8x;
U16 U16x;
U32 U32x;
U64 U64x;

	U8x = 3;
000000013F1C23C1  mov         byte ptr [U8x],3  
	U16x = 7;
000000013F1C23C6  mov         eax,7  
000000013F1C23CB  mov         word ptr [U16x],ax  
	U32x = 15;
000000013F1C23D0  mov         dword ptr [U32x],0Fh  
	U64x = 31;
000000013F1C23D8  mov         qword ptr [U64x],1Fh


U8, U16... refer to unsigned integers with the corresponding number of bits.

I couldn't find the instruction timings for each of the "xword ptr" stores, but seeing how the assembly turned out makes me think that one can/should use the smallest data size that fits the data, since there appears to be no difference to the machine, at least in the Intel world.

So, for example, consider the following structure:

Code:

struct BOARD
	{
	U64 Bitboards[14];
	U32 ChessBoard[64];  // serialized representation
	U8 ChessBoard8[64];  // serialized representation
	};

Code that operates on ChessBoard8[] doesn't appear to be any different from code that uses ChessBoard[] (32-bit elements). Using the smaller size could be an advantage if make move/unmake move is implemented as copy/make, since less data needs to be copied from board to board.

Is this assertion correct, or are there hidden code gotchas that I haven't noticed?

Re: Memory storage considerations

Posted: Fri Oct 14, 2016 7:27 pm
by hyatt
As a general rule, native word length is the optimal reference size. The only exception would be when you are using an array or such, and you start to factor in L1 and L2 cache. If going from 64 bits down to 16 bits improves cache utilization, you would see a gain. Otherwise the CPU actually has to do a little extra work to access a fraction of a 64-bit value.