Multi-core processors can run multiple software streams/tasks concurrently: a single physical processor simultaneously executes instructions from multiple processes or threads. A core is the part of the processor that executes application instructions, and each core is shared by hardware threads (called Hyper-Threads). When two hyper-threads are active on the same core, compute-intensive tasks run slower than a single thread using the core exclusively. Traditional Linux tools (vmstat, mpstat, ...) do not show core utilization, which is what you need to estimate the cost of core sharing. One can, however, measure hyper-thread overhead by disabling Hyper-Threading, selectively binding tasks to available cores, or comparing CPI (cycles per instruction) or IPC (instructions per cycle) metrics collected via Linux perf.
Similar to software multithreading (MT), which refers to the execution of multiple tasks within a single process, a multi-core processor does the same in hardware by executing multiple software threads simultaneously across multiple cores and hardware threads (Hyper-Threads or HT) within a single physical processor (socket). Multi-core processors are ideal for throughput computing. Concurrency in the software is required to gain significant throughput from all the hardware threads and cores available in the physical cpu. Each hardware thread in a core is seen by the Linux scheduler as a separate cpu on which a task can be scheduled. Caches in the physical processor are also shared by hardware threads.
The Linux scheduler uses a hierarchical relationship when scheduling a process/task onto a cpu:
Hyper-Threads → Core → Physical CPU (Socket)
When there is an idle core in the physical cpu, a new task is assigned to that core. Once all cores are occupied, cores are shared (two HT per core).
" Intel® HT technology is a great performance feature that can boost performance by up to 30%.."
HT does not double core throughput; it improves it by up to 30%. Thus two compute-intensive tasks sharing a core will each run at roughly 60-70% of dedicated-core performance (30-40% slower).
Why Multi-Core
Before multi-core, the processor industry was primarily focused on increasing cpu clock speed and deepening pipelines to improve serial performance. This required more logic and silicon space, which resulted in higher power requirements and heat dissipation. Multi-core architecture took a different approach: it traded serial performance for higher throughput. Instead of implementing complicated logic and deep pipelines, it duplicated compute logic by implementing multiple dedicated processing units instead of just one. The end result is a simpler processor design with lower power, less heat dissipation, and massive throughput capability. Multi-core processors are thus ideal when software is designed to run multiple tasks in parallel and take full advantage of the large number of compute engines available.
Another reason for multi-core popularity is that it uses physical cpu resources more efficiently. As the gap between processor and memory speeds widens, the performance gained by ramping up the processor clock has diminishing returns because the processor stalls waiting for memory. Studies have shown that processors in most real-world server deployments spend around 80% of their time stalled waiting for memory or IO, so the high clock rates and deep pipelines of traditional processors are wasted stalling on cache refills from main memory. Hardware threads in a multi-core processor reduce the cost of these frequent cache stalls and make better use of memory bandwidth by automatically parking a stalled hardware thread and switching to the next ready-to-run hardware thread, leading to efficient processor utilization. The core can fetch instructions from both threads within the same time slice, which reduces cpu stalls and improves efficiency and throughput.
Xen Virtual CPU (vcpu) Binding
In server virtualization, the hypervisor divides cpu resources across multiple virtual machines or guests. The hypervisor assigns each guest a fixed set of virtual cpus (vcpus). The hypervisor scheduler is responsible for scheduling a guest's vcpus onto hyper-threads, whereas the Linux scheduler (running inside the guest) schedules processes or threads onto the assigned vcpus.
Amazon cloud instances (i2, r3, m4, d2, c3, c4, x1, ...) are based on Intel Xeon Ivy Bridge and Haswell processors with 2-3 GHz clocks plus Turbo and large caches. Each physical cpu can have 8-16 cores, where each core is shared by two hyper-threads with private L1 and L2 caches. There is also a large unified L3 cache shared by all cores in the physical processor.
Amazon uses a modified version of the Xen hypervisor. It assigns a dedicated core (2 HT) to a 2-cpu instance, a 4-cpu instance gets 2 dedicated cores (4 HT), and so on.
The hierarchical relationship seen by Linux running on the instance may not be the same as on the physical system. One can use the "/proc" stats exported by the kernel to find the relationship between vcpus, hyper-threads, and cores.
Type | /proc/cpuinfo field | Detail |
---|---|---|
Socket | physical id | Physical cpu or socket on the motherboard. Example: an Amazon d2.8xl instance has two sockets: physical id: 0, 1 |
Cores | cpu cores | Number of cores per physical cpu (socket). Example: d2.8xl has 18 cores, 9 cores in each socket: cpu cores: 9 |
Core ID | core id | Each core is assigned an id. Example: d2.8xl has core ids: 0,1,2,3,4,5,6,7,8 |
HyperThread | processor | Each core is shared by 2 HT. Example: d2.8xl has 36 HT: 0,1,2,3,...,35 |
Note: $ egrep "(( id|processo).*:|^ *$)" /proc/cpuinfo prints the processor, core id, and physical id fields for each vcpu.
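On systems with a reasonably recent util-linux, lscpu can print the same vcpu-to-core-to-socket mapping directly; a minimal sketch (the available columns vary by version):
$ lscpu -e=CPU,CORE,SOCKET,ONLINE
vcpus that report the same CORE and SOCKET values are hyper-thread siblings sharing a core.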
In the case of an Amazon d2.8xl instance, which has 18 cores across two sockets, there is a 1:1 mapping between vcpus and HT. Instance vcpus 0-17 are first assigned to one HT in each of cores 0-8 in Socket 0 and Socket 1. Next, the Xen hypervisor repeats the vcpu assignment and doubles each core's occupancy.
d2.8xl
Phase I
Socket 0 | 9 cores | core-id: 0-8 | vcpus (core-id, vcpu#): (0,0) (1,1) (2,2) (3,3) (4,4) (5,5) (6,6) (7,7) (8,8) |
Socket 1 | 9 cores | core-id: 0-8 | vcpus (core-id, vcpu#): (0,9) (1,10) (2,11) (3,12) (4,13) (5,14) (6,15) (7,16) (8,17) |
Phase II: double the occupancy
Socket 0 | 9 cores | core-id: 0-8 | vcpus (core-id, vcpu#): (0,18) (1,19) (2,20) (3,21) (4,22) (5,23) (6,24) (7,25) (8,26) |
Socket 1 | 9 cores | core-id: 0-8 | vcpus (core-id, vcpu#): (0,27) (1,28) (2,29) (3,30) (4,31) (5,32) (6,33) (7,34) (8,35) |
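To verify this mapping from inside the instance, one can read the per-vcpu topology files under /sys; a minimal sketch:
#!/bin/bash
# Print vcpu -> core id -> socket (physical package id) for every vcpu.
for c in /sys/devices/system/cpu/cpu[0-9]*; do
    cpu=${c##*/cpu}
    echo "vcpu $cpu: core `cat $c/topology/core_id`, socket `cat $c/topology/physical_package_id`"
done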
One can disable HT in the BIOS, but you don't have access to the BIOS on a cloud instance. There are other ways to disable HT:
- Pass the boot argument maxcpus=<#ofcores> by updating the /boot/grub/menu.lst file, then save and reboot the server. Setting maxcpus=18 reduces the number of vcpus that need to be assigned to the available cores and thus causes sockets 0 and 1 to be populated only once (a sample menu.lst entry is sketched after the sibling listing below).
- You can find which siblings (HT) share each core using /proc and /sys data, and then use that information to disable the sibling hyper-threads:
#!/bin/bash
# For each logical cpu listed in /proc/cpuinfo, print the hyper-thread
# siblings that share its core (from sysfs topology data).
for num in `grep processor /proc/cpuinfo | awk '{print $3}'`
do
    echo sibling of cpu$num
    cat /sys/devices/system/cpu/cpu$num/topology/thread_siblings_list
done
======save it into test.sh file and execute=====
~$ ./test.sh
sibling of cpu0
0,18
sibling of cpu1
1,19
sibling of cpu2
2,20
sibling of cpu3
3,21
sibling of cpu4
4,22
...
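For reference, a hedged sketch of the menu.lst change from the first bullet above (kernel version, root device, and any existing options are placeholders; the only addition is maxcpus=18):
# /boot/grub/menu.lst (legacy GRUB): append maxcpus= to the existing kernel line
kernel /boot/vmlinuz-3.13.0-49-generic root=<root-device> ro <existing-options> maxcpus=18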
Disable HT on a live system
#!/bin/sh
if [ "$(id -u)" != "0" ]; then
echo "This script must be run as root. You should type:sudo -s" and then run the script 1>&2
exit 1
fi
# Walk the unique sibling lists; for each core with two HT (an "x,y" list),
# keep the first sibling online and take the rest offline.
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list |
sort -u |
while read sibs
do
case "$sibs" in
*,*)
oldIFS="$IFS"
IFS=",$IFS"
set $sibs
IFS="$oldIFS"
shift
while [ "$1" ]
do
echo Disabling CPU $1 ..
echo 0 > /sys/devices/system/cpu/cpu$1/online
shift
done
;;
*)
;;
esac
done
As you can see in the output below, only one thread now occupies each core.
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-17
Off-line CPU(s) list: 18-35 << Disabled vCPU
Thread(s) per core: 1 <<
Core(s) per socket: 9
Socket(s): 2
..
Enable (online) all vcpus again. Note that cpu 0 cannot be offlined, so the loop below starts at cpu 1:
#!/bin/bash
# Bring all vcpus back online (cpu0 is always online and cannot be toggled).
NCPUS=`lscpu|grep ^CPU\(s\)|awk '{print $2}'`
NUM=1
for (( cpuid=$NUM; cpuid<$NCPUS; cpuid++ ))
do
    echo enabling cpu$cpuid
    echo 1 > /sys/devices/system/cpu/cpu$cpuid/online
    cat /sys/devices/system/cpu/cpu$cpuid/online
done
lscpu
======
Verify it:
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35 << All cpus are online
Thread(s) per core: 2 <<
Core(s) per socket: 9
Socket(s): 2
...
To test HT overhead, one can use the "taskset" utility to set task affinity. Once you know which vcpus (HT) share a core, use taskset to pin tasks accordingly. One can also use Linux containers or Docker to constrain a workload to a subset of cpus. This way you don't need to disable HT and can still bind the process(es) to a particular vcpu or group of vcpus (a Docker example is sketched after the mpstat output below).
~$ sudo taskset -pc 0,1,2 $$
pid 37091's current affinity list: 0-35
pid 37091's new affinity list: 0-2
This binds the current shell to vcpus 0, 1, and 2. Any task or application started from this shell is thus limited to that subset of cpus. Start the cpu load:
$ yes > /dev/null & yes > /dev/null & yes > /dev/null &
$ mpstat -P ALL 1
This shows that only cpus 0, 1, and 2 are 100% cpu bound.
11:45:44 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
...
11:45:45 PM 0 100.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:45:45 PM 1 99.01 0.00 0.99 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:45:45 PM 2 99.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:45:45 PM 3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
11:45:45 PM 4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
11:45:45 PM 5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
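As mentioned above, Docker can constrain a container to the same vcpu subset without disabling HT; a minimal sketch (the image and workload are placeholders):
$ docker run --rm --cpuset-cpus="0,1,2" ubuntu:14.04 sh -c 'yes > /dev/null'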
When to disable Hyper-Threading
Multi-core processors are designed for throughput computing. Throughput computing is about performing multiple tasks in parallel by spreading the work across many compute engines (HT and cores). Each task may take a little longer due to the slower effective clock rate and the cpu resources shared between HT, but more tasks are completed per unit of time, which improves application throughput. In general, when HT is enabled, some cpu core resources are statically partitioned and shared to run the extra thread in the core. How much HT hurts application performance depends on the design:
- Compute-intensive applications with a small working set that fits into the cpu caches are impacted the most when HT is enabled.
- Lack of concurrency in the application results in higher contention for shared resources. More processors mean more contention. Higher contention causes less useful execution: processors end up sitting idle or doing no productive work, either waiting for a lock (context switching) or spinning on a lock (busy-waiting).
One should also take into account additional factors such as:
- Proper application sizing (number of threads) to take advantage of the additional vcpus.
- Heavy locking may have higher overhead with more cpus.
- If an application has a large number of threads but its hot code (frequently run functions) utilizes only a few of them, additional vcpus may not help much.
- A memory-intensive application that is already capable of saturating memory controller bandwidth may not see a performance gain when HT is enabled.
- False sharing can happen when two processors modify data in the same cache line; it commonly occurs with global and static variables. This results in inefficient use of cpu caches and may cause the application to run at memory speed due to frequent load/store operations.
- NUMA latencies. Verify whether the system is NUMA (Non-Uniform Memory Access); large Amazon instances (xx.8xl and above) are NUMA. If not planned correctly, an application running on a NUMA system may experience higher memory latencies. The application should use the numa library (libnuma) or the numactl utility to hint to the kernel how its memory allocation should be handled, as sketched below.
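For example, a minimal numactl sketch (the application name is a placeholder):
$ numactl --hardware                            # show NUMA nodes, their cpus and memory
$ numactl --cpunodebind=0 --membind=0 ./myapp   # run the app with cpu and memory from node 0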
How to test Hyper-Threading Overhead
Comparing data captured with and without HT during tests will help quantify the performance gain or loss. Low cpu utilization is a sign of a scaling problem caused by insufficient software threads, serialized code, or lack of concurrency. To estimate HT overhead, one should measure:
- Core Utilization: CPU utilization may not be the best way to measure and compare HT overhead. Utilization measures how much cpu headroom is available. One may assume cpu utilization would be cut in half considering HT doubles the number of vcpus; it does not, however, translate into a 2x speedup when all vcpus are utilized. Instead of cpu utilization, one should look at metrics such as work done per unit of time (e.g. requests per second) and elapsed time (latency) to assess performance changes due to HT. Linux tools like top, mpstat and others do not offer clear insight into core utilization; all you get is per-vcpu utilization. One can wrap /proc and /sys data in a script to capture core utilization:
#!/bin/bash
# Report per-core utilization by sampling both hyper-thread siblings of each core with mpstat.
SOCKETS=`grep "physical id" /proc/cpuinfo|sort -ru|head -1|awk '{print $4}'`   # highest physical id
sockets="$SOCKETS"
NCORES=`grep "cpu cores" /proc/cpuinfo|sort -u|awk '{print $4}'`               # cores per socket
ncores="$NCORES"
NUM=0
if [ $sockets != 0 ]; then
    NCORES=$((($sockets + 1) * $ncores))    # total cores = sockets x cores-per-socket
fi
for (( core=$NUM; core<$NCORES; core++ ))
do
    SIBLING=`cat /sys/devices/system/cpu/cpu$core/topology/thread_siblings_list`
    echo Core $core Utilization: Threads:$SIBLING
    mpstat -P $SIBLING 1 2
done
------- save it and execute; sample output -------
Core 0 Utilization: Threads:0,18
Linux 3.13.0-49-generic (abyssagents-Same-AZ-Test-i-f603ce47) 12/16/2015 _x86_64_ (36 CPU)
10:44:33 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
10:44:34 PM 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
10:44:34 PM 18 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
..
Core 1 Utilization: Threads:1,19
Linux 3.13.0-49-generic (abyssagents-Same-AZ-Test-i-f603ce47) 12/16/2015 _x86_64_ (36 CPU)
10:44:35 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
10:44:36 PM 1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
10:44:36 PM 19 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
...
- HT doubles the number of vcpus on which Linux can schedule a task or thread, which means twice as many threads can run simultaneously. Let's assume a system with four cores and HT disabled, running four compute threads in parallel. If each thread computes 1 unit of work per second, then four threads compute 4 units of work per second (4 units/s). With HT enabled, we can now run 8 threads (sharing cores) in parallel. With roughly a 25% gain per core, the expected throughput is 4 x 1.25 = 5 units/s, not 8 units/s. Because the cores are shared, compute latency increases: the 8 units of work take 8 units / 5 units/s = 1.6 seconds, i.e. 1.6 s per thread instead of 1 s. Thus HT improved overall throughput by 25%, but at the cost of higher per-task latency. Although it seems like response time must increase with HT, in practice it often does not, because the additional cpus reduce queuing and context switching.
- Core and Thread CPI: CPI stands for Cycles Per Instruction, the average number of cpu cycles needed to execute an instruction over a given set of instructions. CPI is indicative of the instruction-level parallelism in the code. CPI can also be used to estimate memory fetch latency when a cache line is invalidated due to stale data found in cpu caches. For example, Intel processors based on the Nehalem core can execute 4 instructions per clock, which is equivalent to a CPI of 0.25. Due to cache misses and branch mispredictions, real-world applications have an average CPI of around 1.0 or 2.0.
To capture Core CPI, disable HT and measure CPI; since the core is dedicated to a single thread, this gives you the Core CPI. Now enable HT. Since two threads share the core, they may execute different numbers of instructions and have different CPIs. Let's assume that over a sampling period the two threads sharing a core consumed 1 million core cycles, during which Thread-1 executed 750k instructions and Thread-2 executed 500k. In this case Thread-1 CPI = 1.33, Thread-2 CPI = 2.0, and Core CPI = 0.80 (1 million cycles / (750k + 500k) instructions).
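A minimal sketch of that arithmetic in shell, using the example numbers above (in practice the cycle and instruction counts would come from perf counters):
#!/bin/bash
# Thread and core CPI from sampled cycle and instruction counts.
CYCLES=1000000; T1_INSTR=750000; T2_INSTR=500000
T1_CPI=`echo "scale=2; $CYCLES/$T1_INSTR" | bc`               # 1.33
T2_CPI=`echo "scale=2; $CYCLES/$T2_INSTR" | bc`               # 2.00
CORE_CPI=`echo "scale=2; $CYCLES/($T1_INSTR+$T2_INSTR)" | bc` # .80
echo "Thread-1 CPI: $T1_CPI  Thread-2 CPI: $T2_CPI  Core CPI: $CORE_CPI"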
Note: CPI data is available through the Intel PMU (Performance Monitoring Unit) and can be extracted using the Linux perf tool and the Intel pcm utility. Unfortunately, access to PMU registers is restricted on Amazon instances. We are working with Amazon to provide these capabilities.
The sysbench cpu benchmark can be used to compare cpu core compute throughput and HT overhead.
One can use taskset to limit the cpus on which sysbench threads can be scheduled. Use the core sibling information to run sysbench on dedicated or shared cores, and use perf to capture IPC/CPI metrics. Higher IPC is better, as it means fewer stalled cycles.
Example:
Running sysbench threads on dedicated cores. The system has 4 cores (8 HT or vcpus):
$ sudo taskset -pc 0,1,2,3 $$
Run 4 sysbench threads:
sysbench --max-requests=10000000 --num-threads=4 --test=cpu --cpu-max-prime=10000 run
While the test is running, capture CPI/IPC metrics:
$ sudo perf stat -a -p <sysbench_pid>
# perf stat -a -p 6841
Performance counter stats for process id '6841':
487861.349722 task-clock (msec) # 4.002 CPUs utilized [100.00%]
42,184 context-switches # 0.086 K/sec [100.00%]
2 cpu-migrations # 0.000 K/sec [100.00%]
0 page-faults # 0.000 K/sec
1,424,306,903,878 cycles # 2.919 GHz [83.34%]
706,061,423,450 stalled-cycles-frontend # 49.57% frontend cycles idle [83.33%]
196,403,173,757 stalled-cycles-backend # 13.79% backend cycles idle [66.67%]
550,084,970,527 instructions # 0.39 insns per cycle <<
# 1.28 stalled cycles per insn [83.34%] <<
..
Running sysbench threads on shared cores:
$ sudo taskset -pc 0,1,4,5 $$
pid 6801's current affinity list: 0-3
pid 6801's new affinity list: 0,1,4,5
$ mpstat -P ALL 1
01:56:17 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
01:56:18 PM all 50.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 50.00
01:56:18 PM 0 100.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:56:18 PM 1 100.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:56:18 PM 2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
01:56:18 PM 3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
01:56:18 PM 4 100.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:56:18 PM 5 100.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:56:18 PM 6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
01:56:18 PM 7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
Performance counter stats for process id '6864':
497028.064408 task-clock (msec) # 4.000 CPUs utilized [100.00%]
42,884 context-switches # 0.086 K/sec [100.00%]
3 cpu-migrations # 0.000 K/sec [100.00%]
0 page-faults # 0.000 K/sec
1,449,788,400,730 cycles # 2.917 GHz [83.33%]
1,010,235,123,660 stalled-cycles-frontend # 69.68% frontend cycles idle [83.33%]
592,764,654,453 stalled-cycles-backend # 40.89% backend cycles idle [66.67%]
361,851,972,955 instructions # 0.25 insns per cycle <<
# 2.79 stalled cycles per insn [83.33%] <<
..
Comparing the two runs: on dedicated cores the four sysbench threads retired ~550 billion instructions at 0.39 instructions per cycle, while on shared cores they retired ~362 billion instructions at 0.25 instructions per cycle over a similar wall time, i.e. roughly a third less work per thread when cores are shared.
A simple test like the one below can also be used to estimate work done per unit of time.
Start a compute-bound job:
for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
Change 1..2 to 1..4 to start four processes, or use the /proc and /sys data (as shown earlier) to run the compute-bound jobs on selected cores and vcpus, as sketched below.
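A hedged sketch of pinning one such job on each hyper-thread sibling of a single core (vcpus 0 and 18 in the d2.8xl mapping shown earlier), so that the elapsed time reflects core sharing:
# Run one dd|gzip job on each sibling of core 0 (vcpus 0 and 18 on d2.8xl).
for cpu in 0 18; do
    taskset -c $cpu sh -c 'dd if=/dev/zero bs=1M count=2070 2>/dev/null | gzip -c > /dev/null' &
done
time wait   # wall-clock time increases when both siblings of one core are busy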