C++11 threads seem to get shafted for cycles

Code, algorithms, languages, construction...
User923005
Posts: 616
Joined: Thu May 19, 2011 1:35 am

Re: C++11 threads seem to get shafted for cycles

Post by User923005 » Wed Mar 19, 2014 5:47 am

It seems pretty clear that the problem is somehow related to my {peculiar?} environment, because other people do not seem to be seeing the same effect.
For what it is worth, the version with the pushed priority boost definitely does run faster on my machine.
I sent a copy to Fabian.
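
For reference, a minimal sketch of what a "pushed" priority boost could look like, assuming Windows and assuming the boost is applied from inside each C++11 worker (search_worker is a hypothetical stand-in for the real search code, not Fabian's actual implementation):

#include <thread>
#include <vector>
#include <windows.h>

// Sketch: each worker raises its own priority before doing any work.
void search_worker()
{
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
    // ... the real search work would go here ...
}

int main()
{
    std::vector<std::thread> pool;
    const unsigned n = std::thread::hardware_concurrency();
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(search_worker);
    for (auto& t : pool)
        t.join();
}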

hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham
Contact:

Re: C++11 threads seem to get shafted for cycles

Post by hyatt » Thu Mar 20, 2014 2:07 am

There is more to this story yet.

If you have 4 cores, and run 4 threads, priorities are completely irrelevant unless something else is running. No operating system will have a core sitting idle, with a process in the run state ready to go. It just won't happen. That means there is something else running. Windows used to have a rather stupid "idle process" that ran at low priority, just to consume CPU cycles and account for them. But that seems unlikely today in an energy-conscious software/hardware world.

This makes absolutely no sense to me.

The ONLY way priority matters is if there are more runnable processes than physical cores.
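
To make that testable: with one busy thread per core and nothing else runnable, per-thread throughput should be insensitive to priority. A minimal harness for checking exactly that, under the assumption that a plain counting loop is a fair stand-in for search work:

#include <algorithm>
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

int main()
{
    // One busy thread per core; run for a fixed wall time and report
    // per-thread totals. If priority changes these numbers, something
    // else was competing for the cores.
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::atomic<bool> stop{false};
    std::vector<long long> counts(n, 0);
    std::vector<std::thread> pool;

    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back([&, i] {
            long long local = 0;
            while (!stop.load(std::memory_order_relaxed))
                ++local;                 // stand-in for searched nodes
            counts[i] = local;
        });

    std::this_thread::sleep_for(std::chrono::seconds(10));
    stop = true;
    for (auto& t : pool)
        t.join();

    for (unsigned i = 0; i < n; ++i)
        std::cout << "thread " << i << ": " << counts[i] << " iterations\n";
}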

User923005
Posts: 616
Joined: Thu May 19, 2011 1:35 am

Re: C++11 threads seem to get shafted for cycles

Post by User923005 » Thu Mar 20, 2014 10:19 am

Clearly, there is something different about my environment, since others do not see the same effect.

However, only C++11-threaded programs seem to perform that way on my system. Since they are statically linked, it is not some strange dynamic library on my machine causing it.

Now, this particular machine has lots of services running, since I use it for database testing. But I am the only one who has accounts on these database systems. I would be curious to know whether others see the same effect on their systems (it would be an easy experiment to try).

I can also send a binary with elevated privilege and see if it runs faster for someone else who sees a similar effect with priorities.

Stockfish, for instance, runs only about 15% faster at realtime priority than at normal priority on the same system.
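
For anyone who wants to repeat the experiment, a sketch of how a Windows binary can raise its own priority class at startup. Note that REALTIME_PRIORITY_CLASS requires administrator rights and can starve the rest of the system, so HIGH_PRIORITY_CLASS is the safer test:

#include <windows.h>
#include <iostream>

int main()
{
    // Raise the whole process's priority class before starting work.
    if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS))
        std::cerr << "SetPriorityClass failed: " << GetLastError() << "\n";
    // ... run the engine / benchmark here ...
}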

hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham
Contact:

Re: C++11 threads seem to get shafted for cycles

Post by hyatt » Thu Mar 20, 2014 8:44 pm

Your goal has to be "find out what is competing for CPU cycles." Clearly something is, as normally with N threads and N cores, each thread gets 100% of its core and no cycles are lost anywhere. If priority is changing this, there is competition going on. You simply have to track down what is stealing those cycles; whatever it is, it evidently stops preempting the chess threads once you ramp their priority above its own.

I can run crafty normally, and with nice +19, and see absolutely no NPS change on my 12 core box:

log.001: time=10.71(90%) n=419489015 fh1=0.94% 50move=0 nps=39.6M
log.002: time=7.90(89%) n=317302760 fh1=0.94% 50move=0 nps=40.6M
log.003: time=8.90(88%) n=361621346 fh1=0.95% 50move=0 nps=41.1M
log.004: time=7.80(87%) n=312441816 fh1=0.95% 50move=0 nps=40.5M

Those are normal priority, nice +19, nice -19, and normal again.

Always some minor variability, but priority doesn't affect it at all.

Nice +19 is ultra-low priority; nice -19 is ultra-high priority.
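
For completeness, the same experiment can be driven from inside the program on Unix; a sketch using the POSIX setpriority() call, which adjusts the same nice value that the nice command does (negative values require root):

#include <sys/resource.h>
#include <cstdio>

int main()
{
    // Equivalent of launching under "nice +19": drop this process to the
    // lowest scheduling priority.
    if (setpriority(PRIO_PROCESS, 0, 19) == -1)
        perror("setpriority");
    // ... run the benchmark here ...
}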

User923005
Posts: 616
Joined: Thu May 19, 2011 1:35 am

Re: C++11 threads seem to get shafted for cycles

Post by User923005 » Thu Mar 20, 2014 9:49 pm

I see the effect even when I run the tests with only 2 cores allocated to the chess program.
If it were competition for cycles, then Stockfish (and other non-C++11 threading programs) would be equally affected.

Here is the current crafty (tail end of a quick analysis with 4 threads):
19 9.35/30.00 0.02 1. Qf3 Bd7 2. g4 g6 3. Qf6 O-O 4. g5 Qg7
5. O-O-O Qxf6 6. exf6 Nc6 7. Nf4 Nab4 8.
Be2 Nxa2+ 9. Kb1 Nab4 10. Nxc7 Rac8
19-> 11.66/30.00 0.02 1. Qf3 Bd7 2. g4 g6 3. Qf6 O-O 4. g5 Qg7
5. O-O-O Qxf6 6. exf6 Nc6 7. Nf4 Nab4 8.
Be2 Nxa2+ 9. Kb1 Nab4 10. Nxc7 Rac8
20 13.30/30.00 0.02 1. Qf3 Bd7 2. g4 g6 3. Qf6 O-O 4. g5 Qg7
5. O-O-O Qxf6 6. exf6 Nc6 7. Nf4 Nab4 8.
Kb1 Nxd3 9. Nxd3 Rad8 10. Nc5 b6
20-> 15.66/30.00 0.02 1. Qf3 Bd7 2. g4 g6 3. Qf6 O-O 4. g5 Qg7
5. O-O-O Qxf6 6. exf6 Nc6 7. Nf4 Nab4 8.
Kb1 Nxd3 9. Nxd3 Rad8 10. Nc5 b6
21 23.58/30.00 -0.01 1. Qf3 Bd7 2. c3 Bxb5 3. Bxb5+ c6 4. Be2
c5 5. Bb5+ Nc6 6. Bxa6 bxa6 7. dxc5 Nxe5
8. Qe2 Qf6 9. Qxa6 O-O 10. O-O-O Rab8 11.
b4 Nc4 12. Rd4
21 29.20/30.00 ++ 1. Qg4! (>+0.00)
21 29.52/30.00 ++ 1. c3! (>+0.00)
time=30.24 n=406784323 afhm=1.27 predicted=0 50move=0 nps=13.5M
extended=4.1M qchks=8.4M reduced=30.0M pruned=137.8M evals=113.0M
EGTBprobes=0 hits=0 splits=7477 aborts=848 data=12/4096
terminating SMP processes.
White(1): c3
time used: 30.24
Black(1):

And here is the same thing with realtime priority:
20 13.31/30.00 -- 1. Qf3? (<+0.02)
20 15.36/30.00 0.00 1. Qf3 Bd7 2. g4 g6 3. Qf6 O-O 4. c3 Qg7
5. O-O-O Qxf6 6. exf6 Bxb5 7. Bxb5 c6 8.
Bxa6 Nxa6 9. Ng5 h5 10. gxh5 gxh5 11. Rxh5
20 15.36/30.00 ++ 1. Qg4! (>+0.01)
20 18.01/30.00 0.05 1. Qg4 Nc6 2. c3 Bd7 3. Qg3 g6 4. Qg5 Qxg5
5. Nxg5 h5 6. O-O-O Ne7 7. Rdg1 Rh6 8.
Re1 Nf5 9. Kc2 h4 10. f3 c5
20-> 20.14/30.00 0.05 1. Qg4 Nc6 2. c3 Bd7 3. Qg3 g6 4. Qg5 Qxg5
5. Nxg5 h5 6. O-O-O Ne7 7. Rdg1 Rh6 8.
Re1 Nf5 9. Kc2 h4 10. f3 c5 (s=2)
21 23.25/30.00 0.02 1. Qg4 Nc6 2. c3 Bd7 3. g3 f5 4. exf6 Qxf6
5. Ng5 O-O-O 6. Nxh7 e5 7. Qh5 Qe6 8. O-O-O

e4 9. Be2 Rde8 10. Qg5 Qf5 11. Bh5 Qxg5+
12. Nxg5
21 23.25/30.00 ++ 1. Qf3! (>+0.03)
21-> 28.59/30.00 0.03 1. Qf3 (s=2)15.3Mnps)
time=30.01 n=460826750 afhm=1.28 predicted=0 50move=0 nps=15.4M
extended=4.7M qchks=9.8M reduced=33.4M pruned=162.0M evals=118.5M
EGTBprobes=0 hits=0 splits=11038 aborts=1362 data=16/4096
terminating SMP processes.
White(1): Qf3
time used: 30.01
Black(1):

As you can see, it is not affected much by the priority boost (15.4/13.5 = 1.14), so it is only about 14% faster.

C++11-threaded programs run 2.2 to 2.7 times faster with boosted priority. An absurd difference.
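
One observation worth testing here: a thread created by std::thread runs the same machine code as one created by pthread_create or CreateThread, so a pure compute loop should not care which API launched it. If only the C++11 builds are affected, the difference more plausibly lives in the library's synchronization layer. A sketch that exercises exactly that layer with a condition-variable ping-pong; if the hypothesis is right (it is a guess, not an established diagnosis), the handoff rate should show the same priority sensitivity:

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int main()
{
    std::mutex m;
    std::condition_variable cv;
    bool turn = false;        // false: main's move, true: worker's move
    long long handoffs = 0;
    const auto deadline = std::chrono::steady_clock::now()
                        + std::chrono::seconds(5);

    // Worker: wait for its turn, hand the turn back, repeat.
    std::thread worker([&] {
        std::unique_lock<std::mutex> lk(m);
        while (std::chrono::steady_clock::now() < deadline) {
            cv.wait(lk, [&] { return turn; });
            turn = false;
            ++handoffs;
            cv.notify_one();
        }
    });

    {
        std::unique_lock<std::mutex> lk(m);
        while (std::chrono::steady_clock::now() < deadline) {
            turn = true;
            cv.notify_one();
            cv.wait(lk, [&] { return !turn; });
        }
        turn = true;          // release the worker if it is still waiting
        cv.notify_one();
    }
    worker.join();
    std::cout << handoffs << " handoffs in 5 s\n";
}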

hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham
Contact:

Re: C++11 threads seem to get shafted for cycles

Post by hyatt » Sat Mar 22, 2014 8:08 pm

Sorry, but to me a 15% variability is absolutely huge. You saw my numbers for various priorities. No effect at all, just the usual random variation caused by how a program is loaded into memory and how it maps into cache. Your numbers simply mean that something is dislodging Crafty from a processor for a significant amount of time; there is no way the O/S is just letting a processor sit idle while Crafty is in a runnable state.

What happens if you run just one thread with Crafty? Is the NPS static then? Second test: what about running a 1-thread Crafty in each of two windows at the same time? Does NPS vary on either or both if you change priority?

User923005
Posts: 616
Joined: Thu May 19, 2011 1:35 am

Re: C++11 threads seem to get shafted for cycles

Post by User923005 » Sat Mar 22, 2014 10:32 pm

I will have to wait until Monday to perform the experiment.
But consider that the change in speed for Fabian's program with a base priority boost {2.7 times faster, compared to 15% faster} is more than 18x larger than the effect on Crafty.
Clearly, they are not behaving in the same manner.

User923005
Posts: 616
Joined: Thu May 19, 2011 1:35 am

Re: C++11 threads seem to get shafted for cycles

Post by User923005 » Tue Mar 25, 2014 3:13 am

Two instances of single-threaded Crafty running at the same time at realtime priority:

White(1): st 30
search time set to 30.00.
White(1): st 30
search time set to 30.00.
White(1): setboard r2qr1k1/1pp2pb1/2n1b1pp/p2np3/P1N5/2PP1NP1/1P3PBP/R1BQR1K1 b - -
Black(1): mt 1
Warning-- xboard 'cores' option disabled
max threads set to 1.
Black(1): go
time surplus 0.00 time limit 30.00 (+0.00) (30.00)
depth time score variation (1)
10 0.15/30.00 0.05 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Bxd3 6. Qxb7
10-> 0.22/30.00 0.05 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Bxd3 6. Qxb7
11 0.29/30.00 0.12 1. ... Bf5 2. Nh4 Be6 3. Qb3 Nb6 4. Qb5
g5 5. Nxb6 cxb6 6. Nf3 Qc7
11-> 0.40/30.00 0.12 1. ... Bf5 2. Nh4 Be6 3. Qb3 Nb6 4. Qb5
g5 5. Nxb6 cxb6 6. Nf3 Qc7
12 0.58/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Nb6 6. Nxb6 cxb6 7. h4 gxh4
12-> 0.68/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Nb6 6. Nxb6 cxb6 7. h4 gxh4
13 0.92/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Nb6 6. Nxb6 cxb6 7. h4 gxh4
13-> 1.04/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Nb6 6. Nxb6 cxb6 7. h4 gxh4
14 1.53/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
14-> 1.65/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
15 2.01/30.00 ++ 1. ... Bf5? (>+0.15)
15 2.28/30.00 0.16 1. ... Bf5 2. Nh4 Be6 3. Qb3 Nb6 4. Be4
Nxc4 5. dxc4 Rb8 6. Be3 Re7 7. Bd5 Qd6
8. Rad1 Rd7
15-> 3.28/30.00 0.16 1. ... Bf5 2. Nh4 Be6 3. Qb3 Nb6 4. Be4
Nxc4 5. dxc4 Rb8 6. Be3 Re7 7. Bd5 Qd6
8. Rad1 Rd7
16 4.09/30.00 -- 1. ... Bf5! (<+0.00)
16 4.25/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Qb3 Qc8 4. Bd2
g5 5. Nf3 Nb6 6. Nxb6 Bxb3 7. Nxc8 Raxc8
8. Bh3 Rcd8 9. Bf5
16-> 4.59/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Qb3 Qc8 4. Bd2
g5 5. Nf3 Nb6 6. Nxb6 Bxb3 7. Nxc8 Raxc8
8. Bh3 Rcd8 9. Bf5
17 5.65/30.00 0.06 1. ... Bf5 2. Nh4 Be6 3. Qb3 Qc8 4. Bd2
Nb6 5. Nxb6 Bxb3 6. Nxc8 Raxc8 7. Be4 Rcd8
8. Nf3 Bd5 9. Bxd5 Rxd5
17-> 6.50/30.00 0.06 1. ... Bf5 2. Nh4 Be6 3. Qb3 Qc8 4. Bd2
Nb6 5. Nxb6 Bxb3 6. Nxc8 Raxc8 7. Be4 Rcd8
8. Nf3 Bd5 9. Bxd5 Rxd5
18 9.43/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Qb3 b6 4. Bd2
Bf6 5. Qb5 Na7 6. Qb3 Nc6
18-> 11.42/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Qb3 b6 4. Bd2
Bf6 5. Qb5 Na7 6. Qb3 Nc6
19 15.23/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
19-> 17.28/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
20 21.43/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
20-> 24.31/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
time=30.12 n=130376218 afhm=1.25 predicted=0 50move=0 nps=4.3M
extended=206K qchks=1.1M reduced=9.3M pruned=40.1M evals=50.1M
EGTBprobes=0 hits=0 splits=0 aborts=0 data=0/4096
Black(1): Bf5
time used: 30.12
White(2):

White(1): mt 1
Warning-- xboard 'cores' option disabled
max threads set to 1.
White(1): st 30
search time set to 30.00.
White(1): setboard r2qr1k1/1pp2pb1/2n1b1pp/p2np3/P1N5/2PP1NP1/1P3PBP/R1BQR1K1 b - -
Black(1): go
time surplus 0.00 time limit 30.00 (+0.00) (30.00)
depth time score variation (1)
10 0.12/30.00 0.05 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Bxd3 6. Qxb7
10-> 0.17/30.00 0.05 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Bxd3 6. Qxb7
11 0.25/30.00 0.12 1. ... Bf5 2. Nh4 Be6 3. Qb3 Nb6 4. Qb5
g5 5. Nxb6 cxb6 6. Nf3 Qc7
11-> 0.34/30.00 0.12 1. ... Bf5 2. Nh4 Be6 3. Qb3 Nb6 4. Qb5
g5 5. Nxb6 cxb6 6. Nf3 Qc7
12 0.48/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Nb6 6. Nxb6 cxb6 7. h4 gxh4
12-> 0.58/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Nb6 6. Nxb6 cxb6 7. h4 gxh4
13 0.76/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Nb6 6. Nxb6 cxb6 7. h4 gxh4
13-> 0.89/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Nb6 6. Nxb6 cxb6 7. h4 gxh4
14 1.31/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
14-> 1.45/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
15 1.78/30.00 ++ 1. ... Bf5? (>+0.15)
15 2.03/30.00 0.16 1. ... Bf5 2. Nh4 Be6 3. Qb3 Nb6 4. Be4
Nxc4 5. dxc4 Rb8 6. Be3 Re7 7. Bd5 Qd6
8. Rad1 Rd7
15-> 3.00/30.00 0.16 1. ... Bf5 2. Nh4 Be6 3. Qb3 Nb6 4. Be4
Nxc4 5. dxc4 Rb8 6. Be3 Re7 7. Bd5 Qd6
8. Rad1 Rd7
16 3.81/30.00 -- 1. ... Bf5! (<+0.00)
16 3.97/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Qb3 Qc8 4. Bd2
g5 5. Nf3 Nb6 6. Nxb6 Bxb3 7. Nxc8 Raxc8
8. Bh3 Rcd8 9. Bf5
16-> 4.29/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Qb3 Qc8 4. Bd2
g5 5. Nf3 Nb6 6. Nxb6 Bxb3 7. Nxc8 Raxc8
8. Bh3 Rcd8 9. Bf5
17 5.33/30.00 0.06 1. ... Bf5 2. Nh4 Be6 3. Qb3 Qc8 4. Bd2
Nb6 5. Nxb6 Bxb3 6. Nxc8 Raxc8 7. Be4 Rcd8
8. Nf3 Bd5 9. Bxd5 Rxd5
17-> 6.17/30.00 0.06 1. ... Bf5 2. Nh4 Be6 3. Qb3 Qc8 4. Bd2
Nb6 5. Nxb6 Bxb3 6. Nxc8 Raxc8 7. Be4 Rcd8
8. Nf3 Bd5 9. Bxd5 Rxd5
18 9.12/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Qb3 b6 4. Bd2
Bf6 5. Qb5 Na7 6. Qb3 Nc6
18-> 11.11/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Qb3 b6 4. Bd2
Bf6 5. Qb5 Na7 6. Qb3 Nc6
19 14.97/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
19-> 16.98/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
20 21.14/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
20-> 24.01/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
time=30.26 n=131358437 afhm=1.25 predicted=0 50move=0 nps=4.3M
extended=207K qchks=1.1M reduced=9.4M pruned=40.4M evals=50.4M
EGTBprobes=0 hits=0 splits=0 aborts=0 data=0/4096
Black(1): Bf5
time used: 30.26

==================================================================================
Crafty at normal priority running one thread:
White(1): mt 1
Warning-- xboard 'cores' option disabled
max threads set to 1.
White(1): st 30
search time set to 30.00.
White(1): setboard r2qr1k1/1pp2pb1/2n1b1pp/p2np3/P1N5/2PP1NP1/1P3PBP/R1BQR1K1 b - -
Black(1): go
time surplus 0.00 time limit 30.00 (+0.00) (30.00)
depth time score variation (1)
10 0.12/30.00 0.05 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Bxd3 6. Qxb7
10-> 0.17/30.00 0.05 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Bxd3 6. Qxb7
11 0.23/30.00 0.12 1. ... Bf5 2. Nh4 Be6 3. Qb3 Nb6 4. Qb5
g5 5. Nxb6 cxb6 6. Nf3 Qc7
11-> 0.31/30.00 0.12 1. ... Bf5 2. Nh4 Be6 3. Qb3 Nb6 4. Qb5
g5 5. Nxb6 cxb6 6. Nf3 Qc7
12 0.44/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Nb6 6. Nxb6 cxb6 7. h4 gxh4
12-> 0.53/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Nb6 6. Nxb6 cxb6 7. h4 gxh4
13 0.78/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Nb6 6. Nxb6 cxb6 7. h4 gxh4
13-> 0.98/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Bd2 g5 4. Nf3
Bf5 5. Qb3 Nb6 6. Nxb6 cxb6 7. h4 gxh4
14 1.75/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
14-> 1.97/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
15 2.51/30.00 ++ 1. ... Bf5? (>+0.15)
15 2.86/30.00 0.16 1. ... Bf5 2. Nh4 Be6 3. Qb3 Nb6 4. Be4
Nxc4 5. dxc4 Rb8 6. Be3 Re7 7. Bd5 Qd6
8. Rad1 Rd7
15-> 4.12/30.00 0.16 1. ... Bf5 2. Nh4 Be6 3. Qb3 Nb6 4. Be4
Nxc4 5. dxc4 Rb8 6. Be3 Re7 7. Bd5 Qd6
8. Rad1 Rd7
16 5.19/30.00 -- 1. ... Bf5! (<+0.00)
16 5.37/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Qb3 Qc8 4. Bd2
g5 5. Nf3 Nb6 6. Nxb6 Bxb3 7. Nxc8 Raxc8
8. Bh3 Rcd8 9. Bf5
16-> 5.76/30.00 0.04 1. ... Bf5 2. Nh4 Be6 3. Qb3 Qc8 4. Bd2
g5 5. Nf3 Nb6 6. Nxb6 Bxb3 7. Nxc8 Raxc8
8. Bh3 Rcd8 9. Bf5
17 6.92/30.00 0.06 1. ... Bf5 2. Nh4 Be6 3. Qb3 Qc8 4. Bd2
Nb6 5. Nxb6 Bxb3 6. Nxc8 Raxc8 7. Be4 Rcd8
8. Nf3 Bd5 9. Bxd5 Rxd5
17-> 7.80/30.00 0.06 1. ... Bf5 2. Nh4 Be6 3. Qb3 Qc8 4. Bd2
Nb6 5. Nxb6 Bxb3 6. Nxc8 Raxc8 7. Be4 Rcd8
8. Nf3 Bd5 9. Bxd5 Rxd5
18 12.17/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Qb3 b6 4. Bd2
Bf6 5. Qb5 Na7 6. Qb3 Nc6
18-> 16.44/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Qb3 b6 4. Bd2
Bf6 5. Qb5 Na7 6. Qb3 Nc6
19 22.87/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
19-> 25.44/30.00 -0.01 1. ... Bf5 2. Nh4 Be6 3. Nf3
time=30.08 n=83146386 afhm=1.26 predicted=0 50move=0 nps=2.8M
extended=127K qchks=677K reduced=5.7M pruned=24.9M evals=33.0M
EGTBprobes=0 hits=0 splits=0 aborts=0 data=0/4096
Black(1): Bf5
time used: 30.08
White(2):

hyatt
Posts: 1242
Joined: Thu Jun 10, 2010 2:13 am
Real Name: Bob Hyatt (Robert M. Hyatt)
Location: University of Alabama at Birmingham
Contact:

Re: C++11 threads seem to get shafted for cycles

Post by hyatt » Fri Mar 28, 2014 7:12 pm

It is crystal clear from that data that you have something else running. Task Manager is not exactly a useful tool, unfortunately, and for Windows I can't recommend anything better. But something is competing for cycles; what, I have no idea. It would be easy enough to catch in Unix. The only other system I would not trust is Mac OS X; the process scheduler is pretty bad there as well.

Have you tried booting into what used to be called "safe mode", or whatever they call it today, to get rid of some of the crap that is started by default?

BTW, are you CERTAIN you don't have any sort of virus/malware running? They are quite clever at hiding, e.g. by changing system tools so that they don't show themselves.

User923005
Posts: 616
Joined: Thu May 19, 2011 1:35 am

Re: C++11 threads seem to get shafted for cycles

Post by User923005 » Fri Mar 28, 2014 7:33 pm

This machine has:
1. Lots of services running, such as database services. (I have perhaps 100 services running).
2. Virtual machines which I use for testing various things.

I am not at all surprised that there is competition for cycles.
However, the effect on C++11 threads is 18x larger than on either pthreads or native threads.
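
A way to pin that down directly, rather than inferring it from NPS, would be to compare wall time against the CPU time a busy thread actually receives. A sketch using the Win32 GetThreadTimes call; it assumes an MSVC build, where std::thread::native_handle() yields a usable HANDLE (on MinGW it is a pthread_t instead):

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
#include <windows.h>

static double filetime_to_seconds(const FILETIME& ft)
{
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart * 100e-9;   // FILETIME ticks are 100 ns
}

int main()
{
    std::atomic<bool> stop{false};
    std::thread t([&] { while (!stop.load(std::memory_order_relaxed)) {} });
    HANDLE h = t.native_handle();   // assumption: MSVC, where this is a HANDLE

    // Let the busy thread run for a 10 s wall-time window, then see how
    // much CPU time it was actually given. Well under 10 s means it was
    // being descheduled in favor of something else.
    std::this_thread::sleep_for(std::chrono::seconds(10));
    FILETIME creation, exited, kernel, user;
    GetThreadTimes(h, &creation, &exited, &kernel, &user);
    stop = true;
    t.join();

    std::cout << "cpu time over a 10 s wall window: "
              << filetime_to_seconds(kernel) + filetime_to_seconds(user)
              << " s\n";
}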
