Intel Core i7 8700K CoffeeLake Review & Benchmarks – UHD 630 GPGPU Performance

What is “CoffeeLake” CFL?

The 8th generation Intel Core architecture is code-named “CoffeeLake” (CFL): unlike previous architectures, it is a minor stepping of the previous 7th generation “KabyLake” (KBL), itself a minor update of the 6th generation “SkyLake” (SKL). As before, the CPUs contain an integrated GPU (with compute support, aka GPGPU).

While originally Intel’s integrated graphics were not of much use, starting with SNB (“SandyBridge”) and especially its GPGPU-capable successor IVB (“IvyBridge”) the integrated graphics units made great progress; HSW (“Haswell”) introduced powerful many-compute-unit (GT3+) and esoteric L4-cache (eDRAM) versions (“CrystalWell”) supporting high-end features like FP64 (native 64-bit floating-point support) and zero-copy CPU <> GPU transfers.

Alas, while the features remained, the higher-end versions (GT3, GT4e) never became mainstream and have pretty much disappeared – except in a few very high-end ULV/H SKUs – while top-end desktop CPUs like the 6700K and 8700K tested here are stuck with the low-end GT2 versions. While perhaps nobody in their right mind would use such CPUs without a dedicated external (GP)GPU, it is still interesting to see how the GPU core has evolved over time.

Also let’s not forget that on mobile platforms (ULV/Y and even H) most laptops/tablets do not have a dedicated GPU and rely solely on integrated graphics – and here UHD630 performance naturally matters.

Hardware Specifications

We are comparing the graphics units of top-of-the-range Intel CPUs with low-end dedicated cards to determine whether they are good enough for modest use, especially for compute (GPGPU) work supporting the CPU.

GPGPU Specifications Intel UHD 630 (8700K) Intel HD 530 (6700K) nVidia GT 1030 Comments
Arch Chipset GT2 / EV9.5 GT2 / EV9 GP108 / SM6.1 UHD6xx is just a minor revision of the HD5xx video core.
Cores (CU) / Threads (SP) 24 / 192 24 / 192 3 / 384 No change in core / SP units.
ROPs / TMUs 8 / 16 8 / 16 16 / 24 No change in ROP/TMUs either.
Speed (Min-Turbo) (MHz) 350-1200 350-1150 300-1260-1520 Turbo speed is only slightly increased.
Power (TDP) 95W 91W 35W TDP has gone up a bit but nothing major.
Constant Memory 3.2GB 3.2GB 64kB (dedicated) There is no dedicated constant memory, thus a large chunk of global memory (GB) is available to use – unlike a dedicated video card with very fast but small (kB) constant memory.
Shared (Local) Memory 64kB 64kB 48kB (dedicated) Bigger than usual shared/local memory, but slow (likely not dedicated).
Global Memory 7GB (of 16GB) 7GB (of 16GB) 2GB About 50% of main memory can be used as global memory – thus pretty large workloads can be run.
Memory System DDR4 3200Mt/s 128-bit DDR4 2533Mt/s 128-bit GDDR5 6Gt/s 64-bit CFL can reliably run at faster data rates thus 630 benefits too.
Memory Bandwidth (GB/s) 50 40 48 The high data rate of DDR4 can result in higher bandwidth than some dedicated cards.
L2 Cache 512kB 512kB 48kB L2 is unchanged and reasonably large.
FP64/double ratio Yes, 1/8 Yes, 1/8 Yes, 1/32 FP64 is supported and at a good ratio compared to gimped dedicated cards.
FP16/half ratio Yes, 2x Yes, 2x Yes, 1/64 FP16 is also now supported at twice the rate – again unlike gimped dedicated cards.

Processing Performance

We are testing OpenCL performance using the latest SDKs / libraries / drivers from both Intel and the competition.

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 10 x64, latest Intel drivers, OpenCL 2.x. Turbo / Boost was enabled on all configurations.

Processing Benchmarks Intel UHD 630 (8700K) Intel HD 530 (6700K) nVidia GT 1030 Comments
GPGPU Arithmetic Benchmark Mandel FP16/Half (Mpix/s) 1150 [+7%] 1070 1660 Thanks to FP16 support we see double the performance over FP32, leaving the 630 only ~30% slower than the dedicated 1030.
GPGPU Arithmetic Benchmark Mandel FP32/Single (Mpix/s) 584 [+9%] 535 1660 The 630 is almost 10% faster than the old 530 but still about 1/3 the speed of a dedicated 1030.
GPGPU Arithmetic Benchmark Mandel FP64/Double (Mpix/s) 151 [+9%] 138 72.8 FP64 sees a similar delta (+9%) but is much faster (2x) than the dedicated 1030 due to the latter’s gimped FP64 units.
GPGPU Arithmetic Benchmark Mandel FP128/Quad (Mpix/s) 7.84 [+5%] 7.46 2.88 Emulated FP128 precision depends entirely on FP64 performance and is much better (~3x) than the gimped dedicated card.
UHD630 is about 5-9% faster than the 530 – not much to celebrate – but due to native FP16 and especially FP64 support it can match or even overtake low-end dedicated GPUs – a pretty surprising result! With more cores, it might actually be very competitive.
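The Mandel tests iterate z ← z² + c per pixel at each precision; a minimal scalar Python sketch of the inner loop (the actual benchmark dispatches this as an OpenCL kernel in FP16/FP32/FP64):

```python
def mandel_iters(cr: float, ci: float, max_iters: int = 256) -> int:
    """Count iterations of z = z^2 + c before |z| exceeds 2 (escape radius)."""
    zr = zi = 0.0
    for i in range(max_iters):
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
        if zr * zr + zi * zi > 4.0:
            return i
    return max_iters

print(mandel_iters(0.0, 0.0))  # in the set: never escapes
print(mandel_iters(2.0, 2.0))  # outside: escapes immediately
```

Each pixel is independent, which is why the workload maps so well to the GPU’s many SP units and why the FP16/FP32/FP64 rates track the hardware ratios in the table above.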
GPGPU Crypto Benchmark Crypto AES-256 (GB/s) 1 [+5%] 0.954 4.37 We see a 5% improvement for the 630 but far lower performance than a dedicated GPU.
GPGPU Crypto Benchmark Crypto AES-128 (GB/s) 1.3 [+6%] 1.23 5.9 Nothing changes here, we see a 6% improvement.
GPGPU Crypto Benchmark Crypto SHA2-256 (GB/s) 3.6 [+3%] 3.5 18.4 In this heavy integer workload, the improvement falls to just 3% – and the dedicated card is about 5x faster.
GPGPU Crypto Benchmark Crypto SHA1 (GB/s) 8.18 [+2%] 8 24 Nothing much changes here, we see a 2% improvement.
GPGPU Crypto Benchmark Crypto SHA2-512 (GB/s) 1.3 [+2%] 1.27 7.8 With 64-bit integer workload, same improvement of just 2% but now the 1030 is about 6x faster!
Nobody will be using integrated graphics for crypto-mining any time soon: we see a very minor improvement in the 630 vs. the old 530, and overall low performance versus dedicated graphics like the 1030, which is 4-6x faster. We would need about 3x more cores to compete here.
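The crypto scores above are simply bytes hashed (or encrypted) per second. A rough CPU-side illustration of how such a GB/s figure is derived, using Python’s hashlib (the benchmark itself runs OpenCL kernels, so absolute numbers are not comparable):

```python
import hashlib
import time

def sha256_throughput(total_mb: int = 64, block_kb: int = 64) -> float:
    """Hash `total_mb` MB of data in `block_kb` chunks and return GB/s."""
    block = b"\x00" * (block_kb * 1024)
    blocks = total_mb * 1024 // block_kb
    start = time.perf_counter()
    h = hashlib.sha256()
    for _ in range(blocks):
        h.update(block)
    elapsed = time.perf_counter() - start
    return (total_mb / 1024) / elapsed  # GB processed / seconds taken

print(f"SHA2-256: {sha256_throughput():.2f} GB/s")
```

Hashing is dominated by 32-bit (SHA2-256) or 64-bit (SHA2-512) integer rotates and adds, which is why the integer-heavy rows above improve so little between EV9 and EV9.5.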
GPGPU Finance Benchmark Black-Scholes float/FP32 (MOPT/s) 1180 [+21%] 977 1320 In this FP32 financial workload we see a good 21% improvement vs. old 530. Also good result vs. dedicated 1030.
GPGPU Finance Benchmark Black-Scholes double/FP64 (MOPT/s) 180 [+2%] 175 137 Switching to FP64 code, the difference is next to nothing but better than a gimped 1030.
GPGPU Finance Benchmark Binomial float/FP32 (kOPT/s) 111 [+12%] 99 255 Binomial uses thread shared data thus stresses the internal memory sub-system, and here 630 is 12% faster. But 1/2 the performance of a 1030.
GPGPU Finance Benchmark Binomial double/FP64 (kOPT/s) 22.3 [+4%] 21.5 14 With FP64 code the improvement drops to 4%.
GPGPU Finance Benchmark Monte-Carlo float/FP32 (kOPT/s) 298 [+2%] 291 617 Monte-Carlo also uses thread shared data but read-only thus reducing modify pressure – strangely we see only 2% improvement and again 1/2 1030 performance.
GPGPU Finance Benchmark Monte-Carlo double/FP64 (kOPT/s) 43.4 [+2%] 42.5 28 Switching to FP64 we see little change – but over 1.5x the performance of the 1030.
You can run financial analysis algorithms with decent performance on a UHD630 – just as you could on the old 530 – and again with better FP64 performance than the dedicated GT 1030 – a pretty impressive result. Naturally, you can just use the powerful CPU cores instead…
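For reference, the Black-Scholes test prices European options via the closed-form formula; a straightforward Python version (the parameter values below are illustrative only):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(s: float, k: float, t: float, r: float, sigma: float) -> float:
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (log(s / k) + (r + 0.5 * sigma * sigma) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

# Textbook inputs: spot 100, strike 100, 1 year, 5% rate, 20% volatility.
print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.2), 2))
```

Each option priced is one “OPT”, so MOPT/s is just options priced per second; like Mandel, every evaluation is independent, which suits wide GPU dispatch.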
GPGPU Science Benchmark SGEMM (GFLOPS) float/FP32 143 [+4%] 138 685 Using 32-bit precision the 630 improves by 4% but is almost 1/5 the speed (5 times slower) of a 1030.
GPGPU Science Benchmark DGEMM (GFLOPS) double/FP64 55.5 [+3%] 53.7 35 With FP64 precision the delta does not change, but now the 630 is over 1.5x faster than the 1030.
GPGPU Science Benchmark SFFT (GFLOPS) float/FP32 39.6 [+20%] 33 37 FFT is memory access bound and here 630’s faster DDR4 memory gives it a 20% lead.
GPGPU Science Benchmark DFFT (GFLOPS) double/FP64 9.3 [+16%] 8 20 We see a similar improvement with FP64 about 16%.
GPGPU Science Benchmark SNBODY (GFLOPS) float/FP32 272 [+2%] 266 637 Back to normality with this algorithm – we see just 2% improvement.
GPGPU Science Benchmark DNBODY (GFLOPS) double/FP64 27.7 [+3%] 26.9 32 With FP64 precision, nothing much changes.
The scientific scores are similar to the financial ones – except the memory-access-heavy FFT, which greatly benefits from better memory (if that is provided, of course). A dedicated card (like the 1030) is much faster in FP32 mode, but again the 630 can be ~2x faster in FP64 mode. Again, you’re much better off using the CPU and its powerful SIMD units for these algorithms.
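SGEMM-class scores come from the 2·n³ floating-point operations a dense n×n matrix multiply performs; a naive Python sketch showing how a GFLOPS figure is counted (pure-Python timing only illustrates the accounting, not real performance):

```python
import random
import time

def sgemm_naive(a, b, n):
    """Naive n x n matrix multiply: performs 2*n^3 floating-point operations."""
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            for j in range(n):
                c[i][j] += aik * b[k][j]
    return c

n = 64
a = [[random.random() for _ in range(n)] for _ in range(n)]
b = [[random.random() for _ in range(n)] for _ in range(n)]
t0 = time.perf_counter()
sgemm_naive(a, b, n)
gflops = 2 * n ** 3 / (time.perf_counter() - t0) / 1e9
print(f"{gflops:.3f} GFLOPS (pure Python; SIMD/GPU code is orders of magnitude faster)")
```

The same operation count divided by wall time yields the 143 GFLOPS (630) or 685 GFLOPS (1030) figures in the table; FFT instead does O(n log n) work per point, which is why it is memory bound.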
GPGPU Image Processing Blur (3×3) Filter single/FP32 (MPix/s) 592 [+10%] 536 1620 In this 3×3 convolution algorithm, we see a 10% improvement over the old 530. But about 1/3x performance of a 1030.
GPGPU Image Processing Sharpen (5×5) Filter single/FP32 (MPix/s) 128 [+9%] 117 637 Same algorithm but more shared data reduces the gap to 9%.
GPGPU Image Processing Motion Blur (7×7) Filter single/FP32 (MPix/s) 133 [+9%] 122 391 With even more data the gap remains the same.
GPGPU Image Processing Edge Detection (2*5×5) Sobel Filter single/FP32 (MPix/s) 127 [+9%] 116 368 Still convolution but with 2 filters – still 9% better.
GPGPU Image Processing Noise Removal (5×5) Median Filter single/FP32 (MPix/s) 9.2 [+10%] 8.4 7.3 Different algorithm does not change much still 10% better.
GPGPU Image Processing Oil Painting Quantise Filter single/FP32 (MPix/s) 10.6 [+9%] 9.7 4.08 Without major processing, 630 improves by the same amount.
GPGPU Image Processing Diffusion Randomise (XorShift) Filter single/FP32 (MPix/s) 1640 [+2%] 1600 2350 This algorithm is 64-bit integer heavy thus we fall to the “usual” 2% improvement.
GPGPU Image Processing Marbling Perlin Noise 2D Filter single/FP32 (MPix/s) 550 [+2%] 538 849 One of the most complex and largest filters, sees the same 2% improvement.
For image processing using FP32 precision the 630 performs a bit better than usual, ~10% faster across the board compared to the old 530 – but still about 1/3 (a third) the speed of a dedicated 1030. If you can make do with FP16 precision, performance almost doubles.
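The convolution filters each read a small neighbourhood per output pixel; a minimal 3×3 box-blur sketch in Python (clamp-to-edge border handling is an assumption – the benchmark’s exact border policy is not documented here):

```python
def blur3x3(img):
    """3x3 box blur with clamp-to-edge borders; 9 reads per output pixel."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
            out[y][x] = acc / 9.0
    return out

# A single bright pixel spreads into its 3x3 neighbourhood.
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 9.0
print(blur3x3(img)[2][1])  # each neighbour averages in the 9.0 value
```

Larger kernels (5×5, 7×7) read more shared data per pixel, which is why the gap between the 630 and the 1030 narrows on those rows: they stress local memory rather than raw ALU throughput.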

Memory Performance

We are testing OpenCL performance using the latest SDKs / libraries / drivers from both Intel and the competition.

Results Interpretation: Higher values (MB/s, etc.) mean better performance. Lower time values (ns, etc.) mean better performance.

Environment: Windows 10 x64, latest Intel drivers, OpenCL 2.x. Turbo / Boost was enabled on all configurations.

Memory Benchmarks Intel UHD 630 (8700K) Intel HD 530 (6700K) nVidia GT 1030 Comments
GPGPU Memory Bandwidth Internal Memory Bandwidth (GB/s) 36.4 [+21%] 30 38.5 Due to higher speed DDR4 memory, the 630 manages 21% better bandwidth than the 530 – and comparable to a 64-bit-bus dedicated card.
GPGPU Memory Bandwidth Upload Bandwidth (GB/s) 17.9 [+29%] 13.9 3 (PCIe3 x4) The CPU<>GPU internal link seems to have 30% more bandwidth – and naturally zero-copy transfers are also supported. A lot better than a dedicated card on PCIe3 x4 (4 lanes).
GPGPU Memory Bandwidth Download Bandwidth (GB/s) 17.9 [+35%] 13.3 3 (PCIe3 x4) Here again we see a good 35% bandwidth improvement.
CFL’s higher (stable) memory speed support improves bandwidth by 20-35% – which is likely behind most of the benchmark improvements in the compute algorithms above. However, that only happens if high-speed DDR4 memory (3200 or faster) is used – an expensive proposition! eDRAM would greatly help here…
GPGPU Memory Latency Global (In-Page Random Access) Latency (ns) 179 [+1%] 178 223 No changes in global latencies in-page showing no memory sub-system improvements.
GPGPU Memory Latency Global (Full Range Random Access) Latency (ns) 268 [-19%] 332 244 Due to faster memory clock (even with slightly increased timings) full random access latencies fall by 20% (similar to bandwidth increase).
GPGPU Memory Latency Global (Sequential Access) Latency (ns) 126 [-5%] 132 76 Sequential access latencies do fall by a minor 5% as well though.
GPGPU Memory Latency Constant Memory (In-Page Random Access) Latency (ns) 181 [-6%] 192 92.5 Intel’s GPUs don’t have dedicated constant memory, thus we see similar performance to global memory.
GPGPU Memory Latency Shared Memory (In-Page Random Access) Latency (ns) 72 [-1%] 73 16.6 Shared memory latency is unchanged – and quite slow compared to architectures from competitors like the 1030.
GPGPU Memory Latency Texture (In-Page Random Access) Latency (ns) 138 [-9%] 151 220 Texture access latencies do show a 9% improvement – a surprising result.
GPGPU Memory Latency Texture (Full Range Random Access) Latency (ns) 227 [-16%] 270 242 Just as we’ve seen with global (full range access) latencies, we see the best improvement about 16% here.
GPGPU Memory Latency Texture (Sequential Access) Latency (ns) 45 [=] 45 71.9 With sequential access we see no improvement.
Anything to do with main memory access (aka “full random access”) does show a similar improvement to bandwidth increases, i.e. between 16-19% due to higher speed (but somewhat higher timings) main memory. All other access patterns show little to no improvements.
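The latency tests use pointer chasing: each load address depends on the previous load, so accesses cannot overlap. A small Python sketch of the technique (real tests chase raw pointers, so interpreter overhead dominates the timing here):

```python
import random
import time

def make_chain(n: int, seed: int = 42):
    """Build a single random cycle: chain[i] is the next index to visit."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    chain = [0] * n
    for i in range(n):
        chain[order[i]] = order[(i + 1) % n]
    return chain

def chase(chain, steps: int) -> float:
    """Follow the chain `steps` times; return average ns per dependent access."""
    idx = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        idx = chain[idx]  # each access depends on the previous one
    return (time.perf_counter() - t0) / steps * 1e9

chain = make_chain(1 << 16)
print(f"{chase(chain, 1 << 16):.1f} ns/access (interpreter overhead included)")
```

Keeping the chain within one page measures “in-page” latency (TLB hit); spreading it across the full buffer measures “full range” latency (TLB misses included), which is why the latter improves most with faster memory.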

When using higher speed DDR4 memory – as we do here (3200 vs. 2533) – the UHD630 shows a good improvement in both bandwidth and reduced latencies; otherwise it performs just the same as the old HD530 – not a surprise really. At least you can see that your (expensive) memory investment does not go to waste – memory-bound algorithms show good improvement.

SiSoftware Official Ranker Scores

Final Thoughts / Conclusions

For GPGPU workloads, UHD630 does not bring anything new – it performs similarly to the old HD520. But as CFL can use higher (stable) memory, bandwidth and latencies are improved (when using such higher speed memory) and thus most algorithms do show good improvements. Naturally as long as you can afford to provide such memory.

The surprisingly good native FP64 ratio (1/8) means 64-bit floating-point algorithms can run faster than on a typical low-end graphics card (which, despite also supporting native FP64, runs it at only 1/32 the FP32 rate) – so high-accuracy workloads do work well on it. If some loss of accuracy is acceptable (e.g. picture processing), native FP16 support at 2x rate makes such algorithms almost 2x faster and thus within reach of a typical low-end graphics card (which either doesn’t support FP16 or runs it at 1/64 rate!).
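To see what FP16 costs in accuracy, Python’s struct module can round-trip values through IEEE 754 half precision (format code 'e', available since Python 3.6): with only a 10-bit mantissa you keep roughly 3 significant decimal digits – fine for pixel data, not for finance or science:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a value through IEEE 754 half precision (10-bit mantissa)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(to_fp16(0.5))       # exactly representable, survives unchanged
print(to_fp16(3.14159))   # rounded to the nearest half-precision value
print(to_fp16(65504.0))   # the largest finite FP16 value
```

This is exactly the trade-off the Mandel FP16 result illustrates: half the storage and double the rate, at the cost of precision.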

As we touched on in the introduction – this may not matter on desktop – but on mobile, where most laptops/tablets use integrated graphics, any and all such improvements can make a big difference. While in the past the fast-improving EV cores became performance-competitive with the CPU cores (as there were only 2 ULV ones) – with CFL doubling the number of CPU cores (4 vs. 2) it is likely that integrated graphics (GPGPU) performance is now relatively too low.

We’re sad that the GT3/GT4 versions never became commonplace – not to mention the L4/eDRAM which showed so much promise in the HSW days.

But Intel has recently revamped its GPU division and is committed to releasing dedicated (not just integrated) graphics in a few years (2020?) – which hopefully means we should see far more powerful GPUs from them soon.

Let’s hope they do see the light of day and are not cancelled like the “Phi” GPGPU accelerators (“Knights Landing”), which showed so much promise but somehow never made it outside data centres before sailing into the sunset…

Intel Core i7 8700K CoffeeLake Review & Benchmarks – 2-channel DDR4 Cache & Memory Performance

What is “CoffeeLake” CFL?

The 8th generation Intel Core architecture is code-named “CoffeeLake” (CFL): unlike previous architectures, it is a minor stepping of the previous 7th generation “KabyLake” (KBL), itself a minor update of the 6th generation “SkyLake” (SKL). The server/workstation (SKL-X/KBL-X) CPU core saw new instruction set support (AVX512) as well as other improvements – these have not made the transition yet.

Possibly due to limited competition (before the AMD Ryzen launch), process issues (still at 14nm) and the disclosure of a whole host of hardware vulnerabilities (Spectre, Meltdown, etc.) requiring microcode (firmware) updates – performance improvements have not been forthcoming. This is pretty much unprecedented – while some Core updates were only evolutionary, we have not had complete stagnation before; in addition, the built-in GPU core has also remained pretty much stagnant – we will investigate this in a subsequent article.

However, CFL does bring one major change – increased core counts on both desktop and mobile: on desktop we go from 4 to 6 cores (+50%) while on mobile (ULV) we go from 2 to 4 (+100%) within the same TDP envelope!

In this article we test CPU Cache and Memory performance; please see our other articles on:

Hardware Specifications

We are comparing the top-of-the-range Gen 8 Core i7 (8700K) with previous generation (6700K) and competing architectures with a view to upgrading to a mid-range high performance design.

CPU Specifications Intel i7-8700K CoffeeLake AMD Ryzen2 2700X Pinnacle Ridge Intel i9-7900X SkyLake-X Intel i7-6700K SkyLake Comments
L1D / L1I Caches 6x 32kB 8-way / 6x 32kB 8-way 8x 32kB 8-way / 8x 64kB 8-way 10x 32kB 8-way / 10x 32kB 8-way 4x 32kB 8-way / 4x 32kB 8-way No L1D/I changes, Ryzen’s L1I is twice as big.
L2 Caches 6x 256kB 4-way 8x 512kB 8-way 10x 1MB 16-way 4x 256kB 4-way No L2 changes, Ryzen’s L2 is twice as big again.
L3 Caches 12MB 16-way 2x 8MB 16-way 2x 8MB 16-way 8MB 16-way L3 has also increased with no of cores, still behind Ryzen’s dual 8MB L3 caches.
TLB 4kB pages 64 4-way / 64 8-way / 1536 6-way 64 full-way 1536 8-way 64 4-way / 64 8-way / 1536 6-way 64 4-way / 64 8-way / 1536 6-way No TLB changes.
TLB 2MB pages 8 full-way / 1536 6-way 64 full-way 1536 2-way 8 full-way / 1536 6-way 8 full-way / 1536 6-way No TLB changes.
Memory Controller Speed (MHz) 1200-4400 1333-2667 1200-2700 1200-4000 The uncore (memory controller) runs at a faster clock due to the higher rated memory clock, but there is not a lot in it.
Memory Data Speed (MHz) 3200 2667 3200 2533 CFL can easily run at 3200Mt/s while KBL/SKL were not as reliable. We could not get Ryzen past 2667 though it does support 2933.
Memory Channels / Width 2 / 128-bit 2 / 128-bit 2 / 128-bit 2 / 128-bit All have 128-bit total channel width.
Memory Bandwidth (GB/s) 50 42 100 40 Bandwidth has naturally increased with memory clock speed but latencies are higher.
Uncore / Memory Controller Firmware 2.6.2 2.0.0.6 We’re on firmware 2.6.x vs. 2.0.x on the old SKL/KBL.
Memory Timing (clocks) 16-16-16-36 6-52-25-12 2T 16-17-17-35 7-60-20-10 2T 16-18-18-36 5-54-21-10 2T Timings are very much BIOS dependent and vary a lot.

Native Performance

We are testing native arithmetic, SIMD and cryptography performance using the highest performing instruction sets (AVX2, AVX, etc.). CFL supports most modern instruction sets (AVX2, FMA3) but not the latest SKL/KBL-X AVX512 nor a few others like SHA HWA (Atom, Ryzen).

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 10 x64 (1807), latest drivers. 2MB “large pages” were enabled and in use. Turbo / Boost was enabled on all configurations.

Spectre / Meltdown Windows Mitigations: all were enabled as per default (BTI enabled, RDCL/KVA enabled, PCID enabled).

Native Benchmarks Intel i7-8700K CoffeeLake AMD Ryzen2 2700X Pinnacle Ridge Intel i9-7900X SkyLake-X Intel i7-6700K SkyLake Comments
CPU Multi-Core Benchmark Total Inter-Core Bandwidth – Best (GB/s) 52.5 [-5%] 55.3 86 39.5 Despite having 2 fewer cores, CFL has only 5% less bandwidth than Ryzen 2.
CPU Multi-Core Benchmark Total Inter-Core Bandwidth – Worst (GB/s) 15.5 [+144%] 6.35 25.7 16.1 In the worst case, pairs on Ryzen2 must go across CCXes – unlike Intel’s CPUs – thus CFL musters over 2x more bandwidth.
CFL manages good bandwidth improvement over KBL/SKL – and due to unified design matching Ryzen2 in best case and beating it soundly in worst case.
CPU Multi-Core Benchmark Inter-Unit Latency – Same Core (ns) 14.4 [+7%] 13.5 15 16 Surprisingly, Ryzen2 manages lower thread latency when sharing core.
CPU Multi-Core Benchmark Inter-Unit Latency – Same Compute Unit (ns) 45 [+12%] 40 75 47 Within the same unit, Ryzen2 is again faster than CFL.
CPU Multi-Core Benchmark Inter-Unit Latency – Different Compute Unit (ns) 115 Obviously going across CCXes is slow, about 3x slower which needs careful thread scheduling.
The multiple-CCX design still presents some challenges to programmers, requiring threads to be carefully scheduled – but we see Ryzen2 with lower latencies for both core and unit – a surprising result, as usually Intel’s caches have lower latency.
Aggregated L1D Bandwidth (GB/s) 1630 [+59%] 854 2220 884 Intel’s wide L1 data paths allow even the old SKL to beat Ryzen2, with CFL enjoying ~60% more bandwidth.
Aggregated L2 Bandwidth (GB/s) 571 [-21%] 720 985 329 But Ryzen2’s L2 caches are not only twice as big but also very wide – CFL has 20% less bandwidth.
Aggregated L3 Bandwidth (GB/s) 327 [-4%] 339 464 243 Ryzen’s 2 L3 caches also provide good bandwidth matching CFL’s unified L3 cache.
Aggregated Memory (GB/s) 35.6 [+11%] 32.2 70 30.1 Running at 3200Mt/s, CFL obviously enjoys higher bandwidth than Ryzen2 at 2667Mt/s, but somehow the latter has better efficiency.
Nothing much has changed in CFL vs. the old SKL: the L1 caches are wide and thus fast, but the L2 and L3 are not as impressive, and the memory controller, while competitive, does not seem as efficient as Ryzen2’s – though it is more stable at high data rates, allowing for higher bandwidth.
Data In-Page Random Latency (ns) 17.4 (4-11-20) [-73%] 63.4 (4-12-31) 25.5 (4-13-30) 20.4 (4-12-21) While clock latencies have not changed vs. the old KBL/SKL, CFL enjoys lower latencies due to higher data rates. Ryzen2 has problems here.
Data Full Random Latency (ns) 53.4 (4-11-42) [-30%] 76.2 (4-12-32) 74 (4-13-62) 63.9 (4-12-34) Out-of-page clock latencies have increased but still overall lower. Ryzen2 has almost caught up here.
Data Sequential Latency (ns) 3.8 (4-11-12) [+15%] 3.3 (4-6-7) 5.3 (4-12-12) 4.1 (4-12-13) With sequential access, Ryzen2 is now faster as CFL’s clock latencies have not changed.
CFL is lucky here as even Ryzen2 still has high latencies in random accesses (either in-page or full range) but manages to be faster with sequential access. Intel will need to improve going forward as clock latencies while good have really not improved at all.
Code In-Page Random Latency (ns) 8.7 (2-10-21) [-37%] 13.8 (4-9-24) 11.8 (4-14-25) 10.1 (2-10-21) Code clock latencies also have not changed, and while Ryzen2 performs a lot better than before, CFL (even the old SKL) manages to be ~35% faster.
Code Full Random Latency (ns) 59.8 (2-10-48) [-30%] 85.7 (4-14-49) 83.6 (4-15-74) 70.7 (2-11-46) Out-of-page clock latencies also have not changed and here CFL is 20% faster over Ryzen2.
Code Sequential Latency (ns) 4.5 (2-4-10) [-39%] 7.4 (4-12-20) 6.8 (4-7-11) 5 (2-4-9) Ryzen2 is competitive but again CFL manages to be almost 40% faster.
CFL dominates here and enjoys 30-40% less latency over Ryzen2 but the latter has improved a lot in time.
Memory Update Transactional (MTPS) 54 [+980%] 5 59 35 Finally all top-end Intel CPUs have HLE enabled and working and thus enjoy huge performance increase.
Memory Update Record Only (MTPS) 38 [+730%] 4.58 59 24.8 Nothing much changes here.

Ryzen2 brings nice updates – good bandwidth increases to all caches L1D/L2/L3 and also well-needed latency reduction for data (and code) accesses. Yes, there is still work to be done to bring the latencies down further – but it may be just enough to beat Intel to 2nd place for a good while.

At the high-end, ThreadRipper2 will likely benefit most, as it goes up against the many-core, AVX512-enabled SKL-X – a much “tougher” competitor than the normal SKL/KBL/CFL consumer versions.

SiSoftware Official Ranker Scores

Final Thoughts / Conclusions

CFL’s cache and memory (uncore) sub-systems are unchanged from SKL/KBL and thus provide no surprises, with rock-solid performance at 3200Mt/s and huge bandwidth (needed, after all, to feed 12 threads) – but Ryzen2 has improved a lot over old AMD CPU designs.

With the continuous increase in cores/threads (8/16 in CFL-R) as with Ryzen1/2, but only modest DDR4 speed increases (not to mention very high cost), the desktop platforms are likely to see diminishing returns due to core/thread data starvation, as the extra cores simply cannot be fed by the memory sub-system. The L2 and L3 caches will need to be improved (widened, enlarged as with SKL-X) and the now-defunct L4/eDRAM cache should re-emerge to mitigate these issues…

Intel Core i7 8700K CoffeeLake Review & Benchmarks – CPU 6-core/12-thread Performance

What is “CoffeeLake” CFL?

The 8th generation Intel Core architecture is code-named “CoffeeLake” (CFL): unlike previous architectures, it is a minor stepping of the previous 7th generation “KabyLake” (KBL), itself a minor update of the 6th generation “SkyLake” (SKL). The server/workstation (SKL-X/KBL-X) CPU core saw new instruction set support (AVX512) as well as other improvements – these have not made the transition yet.

Possibly due to limited competition (before the AMD Ryzen launch), process issues (still at 14nm) and the disclosure of a whole host of hardware vulnerabilities (Spectre, Meltdown, etc.) requiring microcode (firmware) updates – performance improvements have not been forthcoming. This is pretty much unprecedented – while some Core updates were only evolutionary, we have not had complete stagnation before; in addition, the built-in GPU core has also remained pretty much stagnant – we will investigate this in a subsequent article.

However, CFL does bring one major change – increased core counts on both desktop and mobile: on desktop we go from 4 to 6 cores (+50%) while on mobile (ULV) we go from 2 to 4 (+100%) within the same TDP envelope!

While this article is a bit late in the day considering the 8700K launched last year – we are preparing to review the brand-new CoffeeLake-R (Refresh) Core i9-9900K, and it seems a good time to see what has changed performance-wise for the previous top-of-the-range CPU.

In this article we test CPU Core performance; please see our other articles on:

Hardware Specifications

We are comparing the top-of-the-range Gen 8 Core i7 (8700K) with previous generation (6700K) and competing architectures with a view to upgrading to a mid-range high performance design.

CPU Specifications Intel i7-8700K CoffeeLake AMD Ryzen2 2700X Pinnacle Ridge Intel i9-7900X SkyLake-X Intel i7-6700K SkyLake Comments
Cores (CU) / Threads (SP) 6C/12T 8C / 16T 10C / 20T 4C / 8T We have 50% more cores compared to SKL/KBL but still not as many as Ryzen/2 with 8 cores.
Speed (Min / Max / Turbo) 0.8-3.7-4.7GHz (8x-37x-47x) 2.2-3.7-4.2GHz (22x-37x-42x) 1.2-3.3-4.3 (12x-33x-43x) 0.8-4.0-4.2GHz (8x-40x-42x) Single-core Turbo has increased to close to 5GHz (which is reserved for the Special Edition 8086K) – way above SKL/KBL and Ryzen.
Power (TDP) 95W (131) 105W (135) 140W (308) 91W (100) TDP has only increased by 4% and is still below Ryzen though Turbo is comparable.
L1D / L1I Caches 6x 32kB 8-way / 6x 32kB 8-way 8x 32kB 8-way / 8x 64kB 8-way 10x 32kB 8-way / 10x 32kB 8-way 4x 32kB 8-way / 4x 32kB 8-way No change in L1 caches. Just more of them.
L2 Caches 6x 256kB 8-way 8x 512kB 8-way 10x 1MB 8-way 4x 256kB 8-way No change in L2 caches. Just more of them.
L3 Caches 12MB 16-way 2x 8MB 16-way 13.75MB 11-way 8MB 16-way L3 has also increased by 50% in line with cores, but still below Ryzen’s 16MB.
Microcode/Firmware MU069E0A-96 MU8F0802-04 MU065504-49 MU065E03-C2 All CPUs are running recent microcode, including the Spectre/Meltdown mitigation updates.

Native Performance

We are testing native arithmetic, SIMD and cryptography performance using the highest performing instruction sets (AVX2, AVX, etc.). CFL supports most modern instruction sets (AVX2, FMA3) but not the latest SKL/KBL-X AVX512 nor a few others like SHA HWA (Atom, Ryzen).
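The instruction-set differences translate directly into peak throughput: each 256-bit AVX2 FMA covers 8 FP32 lanes at 2 operations (multiply+add) per lane, and AVX512 doubles the lane count. A rough peak-GFLOPS estimator (the 2-FMA-unit count and the all-core clocks below are assumptions for illustration, not measured values):

```python
def peak_gflops(cores: int, ghz: float, simd_bits: int, fp_bits: int = 32,
                fma_units: int = 2) -> float:
    """Theoretical peak: lanes x 2 ops per FMA x FMA units x clock x cores."""
    lanes = simd_bits // fp_bits
    return cores * ghz * lanes * 2 * fma_units

# Assumed all-core clocks: 6-core CFL with AVX2 vs. 10-core SKL-X with AVX512.
print(f"CFL AVX2 (assumed 4.3GHz):      {peak_gflops(6, 4.3, 256):.0f} GFLOPS FP32 peak")
print(f"SKL-X AVX512 (assumed 3.3GHz):  {peak_gflops(10, 3.3, 512):.0f} GFLOPS FP32 peak")
```

This kind of back-of-envelope calculation explains why the AVX512-enabled SKL-X pulls so far ahead in the vectorised tests below, and why non-SIMD workloads instead track raw core counts.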

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 10 x64 (1807), latest drivers. 2MB “large pages” were enabled and in use. Turbo / Boost was enabled on all configurations.

Spectre / Meltdown Windows Mitigations: all were enabled as per default (BTI enabled, RDCL/KVA enabled, PCID enabled).

Native Benchmarks Intel i7-8700K CoffeeLake AMD Ryzen2 2700X Pinnacle Ridge Intel i9-7900X SkyLake-X Intel i7-6700K SkyLake Comments
CPU Arithmetic Benchmark Native Dhrystone Integer (GIPS) 291 [-13%] 334 485 190 In the old Dhrystone integer workload, CFL is still 13% slower than Ryzen 2 despite its huge lead over SKL.
CPU Arithmetic Benchmark Native Dhrystone Long (GIPS) 296 [-12%] 335 485 192 With a 64-bit integer workload – nothing much changes.
CPU Arithmetic Benchmark Native FP32 (Float) Whetstone (GFLOPS) 170 [-14%] 198 262 105 Switching to floating-point, CFL is still 14% slower in the old Whetstone also a micro-benchmark.
CPU Arithmetic Benchmark Native FP64 (Double) Whetstone (GFLOPS) 143 [-15%] 169 223 89 With FP64 nothing much changes.
From integer workloads in Dhrystone to floating-point workloads in Whetstone, CFL is still 12-15% slower than Ryzen 2 with its 2 extra cores (8 vs. 6), but much faster than the old 4-core SKL. We begin to see why Intel is adding more cores in CFL-R.
BenchCpuMM Native Integer (Int32) Multi-Media (Mpix/s) 741 [+29%] 574 1590 (AVX512) 474 In this vectorised AVX2 integer test we see CFL beating Ryzen by ~30% despite fewer cores.
BenchCpuMM Native Long (Int64) Multi-Media (Mpix/s) 305 [+63%] 187 581 (AVX512) 194 With a 64-bit AVX2 integer vectorised workload, CFL is now 63% faster.
BenchCpuMM Native Quad-Int (Int128) Multi-Media (Mpix/s) 4.9 [-16%] 5.8 7.6 3 This is a tough test using Long integers to emulate Int128 without SIMD: Ryzen 2 thus wins this one with CFL slower by 16%.
BenchCpuMM Native Float/FP32 Multi-Media (Mpix/s) 678 [+14%] 596 1760 (AVX512) 446 In this floating-point AVX/FMA vectorised test, CFL is again 14% faster.
BenchCpuMM Native Double/FP64 Multi-Media (Mpix/s) 402 [+20%] 335 533 (AVX512) 268 Switching to FP64 SIMD code, CFL is again 20% faster.
BenchCpuMM Native Quad-Float/FP128 Multi-Media (Mpix/s) 16.7 [+7%] 15.6 40.3 (AVX512) 11 In this heavy algorithm using FP64 to mantissa extend FP128 but not vectorised – CFL is just 7% faster but does win.
In vectorised SIMD code we see the power of Intel’s SIMD units, which can execute 256-bit instructions in one go; CFL soundly beats Ryzen2 despite fewer cores (by 7-63%). SKL-X shows that AVX512 brings further gains, and it is a pity CFL still does not support it.
BenchCrypt Crypto AES-256 (GB/s) 17.8 [+11%] 16.1 23 15 With AES HWA support all CPUs are memory bandwidth bound; unfortunately Ryzen 2 is at 2667 vs CFL/SKL-X at 3200 which means CFL is 11% faster.
BenchCrypt Crypto AES-128 (GB/s) 17.8 [+11%] 16.1 23 15 What we saw with AES-256 just repeats with AES-128.
BenchCrypt Crypto SHA2-256 (GB/s) 9 [-51%] 18.6 26 (AVX512) 5.9 With SHA HWA Ryzen2 similarly powers through hashing tests leaving Intel in the dust; CFL is thus 50% slower.
BenchCrypt Crypto SHA1 (GB/s) 17.3 [-9%] 19.3 38 (AVX512) 11.2 Ryzen also accelerates the soon-to-be-defunct SHA1 but the algorithm is less compute heavy thus CFL is only 9% slower.
BenchCrypt Crypto SHA2-512 (GB/s) 6.65 [+77%] 3.77 21 (AVX512) 4.4 SHA2-512 is not accelerated by SHA HWA, allowing CFL to use its SIMD units and be 77% faster.
AES HWA is memory bound and here CFL comfortably works with 3200Mt/s memory thus is faster than Ryzen 2 with 2667Mt/s memory (our sample); likely both would score similarly at 3200Mt/s. Ryzen 2 SHA HWA allows it to easily beat all other CPUs but only in SHA1/SHA256 – in others the CFL SIMD units win the day.
BenchFinance Black-Scholes float/FP32 (MOPT/s) 207 [-19%] 257 309 128 In this non-vectorised test CFL cannot match Ryzen 2 and is ~20% slower.
BenchFinance Black-Scholes double/FP64 (MOPT/s) 180 [-18%] 219 277 113 Switching to FP64 code, nothing much changes, Ryzen 2 is still faster.
BenchFinance Binomial float/FP32 (kOPT/s) 47 [-56%] 107 70.5 29.3 Binomial uses thread shared data thus stresses the cache & memory system; Ryzen 2 does very well here with CFL almost 60% slower.
BenchFinance Binomial double/FP64 (kOPT/s) 44.2 [-27%] 60.6 68 27.3 With FP64 code Ryzen2’s lead diminishes, CFL is “only” 27% slower.
BenchFinance Monte-Carlo float/FP32 (kOPT/s) 41.6 [-23%] 54.2 63 25.7 Monte-Carlo also uses thread shared data but read-only thus reducing modify pressure on the caches; Ryzen 2 also wins this one, CFL is 23% slower.
BenchFinance Monte-Carlo double/FP64 (kOPT/s) 32.9 [-20%] 41 50.5 20.3 Switching to FP64 nothing much changes, CFL is still 20% slower.
Without SIMD support, CFL loses to Ryzen 2 as we saw with Dhrystone/Whetstone – by between 20 and 50%. As we noted before, Intel will still need to add more cores in order to beat Ryzen 2. Still, it is a big improvement over the old SKL/KBL, as expected.
BenchScience SGEMM (GFLOPS) float/FP32 385 [+28%] 300 413 (AVX512) 268 In this tough vectorised AVX2/FMA algorithm CFL is ~30% faster.
BenchScience DGEMM (GFLOPS) double/FP64 135 [+13%] 119 212 (AVX512) 130 With FP64 vectorised code, CFL’s lead reduces to 13% over Ryzen 2.
BenchScience SFFT (GFLOPS) float/FP32 24 [+167%] 9 28.6 (AVX512) 16.1 FFT is also heavily vectorised (x4 AVX/FMA) but stresses the memory sub-system more; here CFL is over 2.5x faster.
BenchScience DFFT (GFLOPS) double/FP64 11.9 [+51%] 7.92 14.6 (AVX512) 7.2 With FP64 code, CFL’s lead reduces to ~50%.
BenchScience SNBODY (GFLOPS) float/FP32 411 [+47%] 280 638 (AVX512) 271 N-Body simulation is vectorised but involves many memory accesses to shared data; even so, CFL remains ~50% faster.
BenchScience DNBODY (GFLOPS) double/FP64 127 [+13%] 113 195 (AVX512) 79 With FP64 code CFL’s lead reduces to 13% over Ryzen 2.
With highly vectorised SIMD code CFL performs well, soundly beating Ryzen 2 by 13-167% as well as significantly improving over the older SKL/KBL. As long as SIMD code is used Intel has little to fear.
CPU Image Processing Blur (3×3) Filter (MPix/s) 1700 [+39%] 1220 4540 (AVX512) 1090 In this vectorised integer AVX2 workload CFL enjoys a ~40% lead over Ryzen 2.
CPU Image Processing Sharpen (5×5) Filter (MPix/s) 675 [+25%] 542 1790 (AVX512) 433 Same algorithm but more shared data reduces the lead to 25% – still significant.
CPU Image Processing Motion-Blur (7×7) Filter (MPix/s) 362 [+19%] 303 940 (AVX512) 233 Again same algorithm but even more data shared reduces the lead to 20%.
CPU Image Processing Edge Detection (2*5×5) Sobel Filter (MPix/s) 589 [+30%] 453 1520 (AVX512) 381 Different algorithm but still AVX2 vectorised workload means CFL is 30% faster than Ryzen 2.
CPU Image Processing Noise Removal (5×5) Median Filter (MPix/s) 57.8 [-17%] 69.7 223 (AVX512) 37.6 Still AVX2 vectorised code but CFL stumbles a bit here – it’s 17% slower than Ryzen 2.
CPU Image Processing Oil Painting Quantise Filter (MPix/s) 31.8 [+29%] 24.6 70.8 (AVX512) 20 Again we see CFL ~30% faster.
CPU Image Processing Diffusion Randomise (XorShift) Filter (MPix/s) 3480 [+140%] 1450 3570 (AVX512) 2300 CFL (like all Intel CPUs) does very well here – it’s a huge 140% faster.
CPU Image Processing Marbling Perlin Noise 2D Filter (MPix/s) 448 [+84%] 243 909 (AVX512) 283 In this final test, CFL is almost 2x faster than Ryzen 2.

The addition of 2 more cores brings big performance gains (not to mention the higher Turbo clock) over the old SKL/KBL, which is pretty impressive considering the TDP has stayed the same. With SIMD code (AVX/AVX2/FMA3) CFL has no problem beating Ryzen 2 by a pretty large margin (up to 2x faster) – but any algorithm that is not vectorised allows Ryzen 2 to win, though not by much (12-20%).

Streaming tests likely benefit from the higher supported memory frequencies which, while in theory usable on the older SKL/KBL (via memory overclocking), were neither officially supported nor stable in all cases. We shall test memory performance in a forthcoming article.

Software VM (.Net/Java) Performance

We are testing arithmetic and vectorised performance of software virtual machines (SVM), i.e. Java and .Net. With operating systems – like Windows 10 – favouring SVM applications over “legacy” native, the performance of .Net CLR (and Java JVM) has become far more important.

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 10 x64 (1807), latest drivers. 2MB “large pages” were enabled and in use. Turbo / Boost was enabled on all configurations.

Spectre / Meltdown Windows Mitigations: all were enabled as per default (BTI enabled, RDCL/KVA enabled, PCID enabled).

VM Benchmarks Ryzen2 2700X 8C/16T Pinnacle Ridge
Ryzen2 2600 6C/12T Pinnacle Ridge
Ryzen 1700X 8C/16T Summit Ridge
i7-6700K 4C/8T Skylake
Comments
BenchDotNetAA .Net Dhrystone Integer (GIPS) 41 [-33%] 61 52 28 .Net CLR integer performance starts off well over old SKL but still 33% slower than Ryzen 2.
BenchDotNetAA .Net Dhrystone Long (GIPS) 41 [-32%] 60 54 27 With 64-bit integers nothing much changes.
BenchDotNetAA .Net Whetstone float/FP32 (GFLOPS) 78 [-24%] 102 107 49 Floating-point CLR performance does not change much: CFL is still ~24% slower than Ryzen 2 despite a big gain over the old SKL.
BenchDotNetAA .Net Whetstone double/FP64 (GFLOPS) 95 [-19%] 117 137 62 FP64 performance is similar to FP32.
Ryzen 2 performs exceedingly well in .Net workloads – soundly beating all Intel CPUs, with CFL between 20-33% slower. More cores will be needed for parity with Ryzen 2, but at least CFL improves a lot over SKL/KBL.
BenchDotNetMM .Net Integer Vectorised/Multi-Media (MPix/s) 93.5 [-16%] 111 144 57 Just as we saw with Dhrystone, this integer workload sees CFL improve greatly over SKL but Ryzen 2 is still faster.
BenchDotNetMM .Net Long Vectorised/Multi-Media (MPix/s) 93.1 [-15%] 109 143 57 With 64-bit integer workload nothing much changes.
BenchDotNetMM .Net Float/FP32 Vectorised/Multi-Media (MPix/s) 361 [-8%] 392 585 228 Here we make use of RyuJit’s support for SIMD vectors thus running AVX/FMA code but CFL is still 8% slower than Ryzen 2.
BenchDotNetMM .Net Double/FP64 Vectorised/Multi-Media (MPix/s) 198 [-9%] 217 314 128 Switching to FP64 SIMD vector code – still running AVX/FMA – CFL is still slower.
We see a similar improvement for CFL but again not enough to beat Ryzen 2; even using RyuJit’s vectorised support CFL cannot beat it – merely reducing the loss to 8-9%.
Java Arithmetic Java Dhrystone Integer (GIPS) 557 [-3%] 573 877 352 Java JVM performance is almost neck-and-neck with Ryzen 2 despite two fewer cores.
Java Arithmetic Java Dhrystone Long (GIPS) 488 [-12%] 553 772 308 With 64-bit integers, CFL does fall behind Ryzen2 by 12%.
Java Arithmetic Java Whetstone float/FP32 (GFLOPS) 101 [-23%] 131 156 62 Floating-point JVM performance is worse though, CFL is now 23% slower.
Java Arithmetic Java Whetstone double/FP64 (GFLOPS) 103 [-26%] 139 160 64 With 64-bit precision nothing much changes.
While CFL improves markedly over the old SKL and almost ties with Ryzen 2 in integer workloads, it does fall behind in floating-point by a good amount.
Java Multi-Media Java Integer Vectorised/Multi-Media (MPix/s) 100 [-12%] 113 140 63 Without SIMD acceleration we see the usual delta (around 12%) with integer workload.
Java Multi-Media Java Long Vectorised/Multi-Media (MPix/s) 89 [-12%] 101 152 62 Nothing changes when changing to 64-bit integer workloads.
Java Multi-Media Java Float/FP32 Vectorised/Multi-Media (MPix/s) 64 [-34%] 97 98 41 With floating-point non-SIMD accelerated we see a bigger delta of about 30% less vs. Ryzen 2.
Java Multi-Media Java Double/FP64 Vectorised/Multi-Media (MPix/s) 64 [-29%] 90 99 41 With 64-bit floating-point precision nothing much changes.
With compute-heavy vectorised code that is not SIMD-accelerated, CFL cannot keep up with Ryzen 2 and the difference increases to about 30%. Intel really needs to get Oracle to add SIMD extensions similar to .Net’s new CLR.

Ryzen dominates .Net and Java benchmarks – Intel will need more cores in order to compete; while the additional 2 cores helped a lot, it is not enough!

SiSoftware Official Ranker Scores

Final Thoughts / Conclusions

Due to core-count stagnation (4 on desktop, 2 on mobile), Intel really had no choice but to increase core counts in light of new competition from AMD’s Ryzen/Ryzen 2 with twice as many cores (8) and SMT (Hyper-Threading) as well! While KBL had increased base/Turbo clocks appreciably over SKL within the same power envelope, CFL had to add more cores in order to compete.

With 50% more cores (6), CFL performs much better than the older SKL/KBL as expected, but that is not enough in non-vectorised loads: Ryzen 2, with 2 more cores, is still faster – not by much (12-20%) but still faster. Once vectorised code is used, the power of Intel’s SIMD units shows, with CFL soundly beating Ryzen 2 despite supporting neither AVX512 nor SHA HWA, which is a pity. AMD still has work to do with Ryzen if it wants to be competitive in vectorised workloads (integer or floating-point).

We now see why CFL-R (CoffeeLake Refresh) will add even more cores (8C/16T with the 9900K, which we review in a subsequent article) – it is the only way to beat Ryzen 2 in all workloads. In effect AMD has reached performance parity with Intel in all but SIMD workloads – a great achievement!

Unfortunately (unlike AMD’s AM4 Ryzen) CFL does require a new chipset/board (series 300), which makes it an expensive upgrade for SKL/KBL owners; otherwise it would have been a pretty obvious upgrade for those needing more compute power. While the new platform does bring some improvements (USB 3.1 Gen 2 aka 10Gbps, more PCIe lanes, integrated 802.11ac WiFi – at least on mobile) it offers nothing over the competition.

Roll on CoffeeLake Refresh and the new CPUs: they are sorely needed…

Intel Core i9 (SKL-X) Review & Benchmarks – 4-channel @ 3200Mt/s Cache & Memory Performance

Intel Skylake-X Core i9

What is “SKL-X”?

“Skylake-X” (E/EP) is the server/workstation/HEDT version of the desktop/mobile Skylake CPU – the 6th-gen Core/Xeon replacing the current Haswell/Broadwell-E designs. It naturally does not contain an integrated GPU, but what it does contain is more cores, more PCIe lanes and more memory channels (up to 6 64-bit channels) for huge memory bandwidth.

While it may seem an “old core”, the 7th-gen Kabylake core is not much more than a stepping update, with even the future 8th-gen Coffeelake rumoured to use the very same core again. But what it does include is the much-expected 512-bit AVX512 instruction set (ISA) that is not enabled in the current desktop/mobile parts.

SKL-X supports not only DDR4 but also NVM-DIMMs (non-volatile memory DIMMs) and PMem (Persistent Memory), which should revolutionise future computing: no memory refresh is needed and sleep/resume becomes near-instant (no need to save/restore memory to/from storage).

In this article we test CPU Cache and Memory performance; please see our other articles on:

Hardware Specifications

We are comparing the top-end desktop Core i9 with current competing architectures from both AMD and Intel as well as its previous version.

CPU Specifications Intel i9 7900X (Skylake-X) AMD Ryzen 1700X Intel i7 6700K (Skylake) Intel i7 5820K (Haswell-E) Comments
TLB 4kB pages
64 4-way / 64 8-way
1536 8-way
64 full-way
1536 8-way
64 4-way / 64 8-way
1536 6-way
64 4-way
1024 8-way
Ryzen has comparatively ‘better’ TLBs than all Intel CPUs.
TLB 2MB pages
8 full-way
1536 2-way
64 full-way
1536 2-way
8 full-way
1536 6-way
8 full-way
1024 8-way
Again Ryzen has ‘better’ TLBs than all Intel versions
Memory Controller Speed (MHz) 800-3300 600-1200 800-4000 1200-4000 Intel’s uncore (UNC) clock runs higher than Ryzen’s.
Memory Speed (Mhz) Max
3200 / 2667 2400 / 2667 2533 / 2667 2133 / 2133 SKL-X can officially go as high as Ryzen and normal SKL @ 2667 but runs happily at 3200MT/s.
Memory Channels / Width
4 / 256-bit (max 6 / 384-bit) 2 / 128-bit 2 / 128-bit 4 / 256-bit SKL-X has 2 memory controllers, each with up to 3 channels, for massive memory bandwidth.
Memory Timing (clocks)
16-18-18-36 6-54-19-4 2T 14-16-16-32 7-54-18-9 2T 16-18-18-36 5-54-21-10 2T 14-15-15-36 4-51-16-3 2T SKL-X can run timings as tight as normal SKL or Ryzen.

Core Topology and Testing

Intel has dropped the (dual) ring bus(es) and instead opted for a mesh interconnect between cores; on desktop parts this should not cause latency differences between cores (as seen with Ryzen’s CCX design) but on high-end server parts with many cores (up to 28) this may not be the case. The much-increased L2 cache (1MB vs. the old 256kB) should alleviate this issue – though the L3 cache has been reduced quite a bit.

Native Performance

We are testing bandwidth and latency performance using all the available SIMD instruction sets (AVX, AVX2/FMA, AVX512) supported by the CPUs.

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 10 x64, latest AMD and Intel drivers. Turbo / Dynamic Overclocking was enabled on both configurations.

Native Benchmarks Intel i9 7900X (Skylake-X) AMD Ryzen 1700X Intel i7 6700K (Skylake) Intel i7 5820K (Haswell-E) Comments
CPU Multi-Core Benchmark Total Inter-Core Bandwidth – Best (GB/s) 87 [+85%] 47.7 39 46 With 10 cores SKL-X has massive aggregated inter-core bandwidth, almost 2x Ryzen or HSW-E.
CPU Multi-Core Benchmark Total Inter-Core Bandwidth – Worst (GB/s) 19 [+46%] 13 16 17 In worst-case pairs SKL-X does well, though it is not far ahead of normal SKL or HSW-E.
CPU Multi-Core Benchmark Inter-Unit Latency – Same Core (ns) 15.2 15.7 16 13.4 [-12%] Within the same core all modern CPUs seem to have about 15-16ns latency.
CPU Multi-Core Benchmark Inter-Unit Latency – Same Compute Unit (ns) 80 45 [-43%] 49 58 Surprisingly we see a massive latency increase, almost 2x that of Ryzen or SKL.
CPU Multi-Core Benchmark Inter-Unit Latency – Different Compute Unit (ns) 131 Naturally Ryzen scores worst when going off-CCX.
It seems the mesh inter-connect between cores has decent bandwidth but much higher latency than the older HSW-E or even the current SKL.
Aggregated L1D Bandwidth (GB/s) 2200 [+3x] 727 878 1150 SKL-X has 512-bit data ports thus massive L1D bandwidth over 2x HSW-E and 3x over Ryzen.
Aggregated L2 Bandwidth (GB/s) 1010 [+81%] 557 402 500 The large L2 caches also have 2x more bandwidth than either HSW-E or Ryzen.
Aggregated L3 Bandwidth (GB/s) 289 392 [+35%] 247 205 The 2 Ryzen L3 caches have higher bandwidth than all Intel CPUs.
Aggregated Memory (GB/s) 69.3 [+2.4x] 28.5 31 42.5 With its 4 channels SKL-X reigns supreme with almost 2.5x more bandwidth than Ryzen.
The widened ports on the L1 and L2 caches allow SKL-X to demolish the competition with over 2x more bandwidth than either Ryzen or older HSW-E; only the smaller L3 cache falters. Its 4 channels running at 3200Mt/s yield huge memory bandwidth that greatly help streaming algorithms. SKL-X is a monster – we can only speculate what the server 6-channel version would score.
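As a sanity check on the memory figure (our own arithmetic, not part of the benchmark): peak DDR4 bandwidth is channels × 8 bytes (64-bit) × transfer rate.

```python
# Theoretical peak DDR4 bandwidth: channels x 8 bytes (64-bit) x MT/s.
def peak_bw_gb_s(channels, mt_per_s, bytes_per_channel=8):
    return channels * bytes_per_channel * mt_per_s / 1000

skl_x_peak = peak_bw_gb_s(4, 3200)   # 102.4 GB/s theoretical for SKL-X
ryzen_peak = peak_bw_gb_s(2, 2667)   # ~42.7 GB/s for Ryzen @ 2667MT/s

# The measured 69.3 GB/s is roughly 2/3 of theoretical peak - a plausible
# efficiency for a multi-threaded streaming read/write mix.
print(f"{skl_x_peak:.1f} GB/s peak, {69.3 / skl_x_peak:.0%} achieved")
```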
Data In-Page Random Latency (ns) 26 [1/2.84x] (4-13-33) 74 (4-17-36) 20 (4-12-21) 25 (4-12-26) SKL-X has comparable latency to SKL and HSW-E and much better than Ryzen.
Data Full Random Latency (ns) 75 [-21%] (4-13-70) 95 (4-17-37) 65 (4-12-34) 72 (4-13-52) Full random latencies are a bit higher than expected but on par with HSW-E and better than Ryzen.
Data Sequential Latency (ns) 5.4 [+28%] (4-11-13) 4.2 (4-7-7) 4.1 (4-12-13) 7 (4-12-13) Strangely SKL-X does not do as well as SKL or Ryzen here, but at least it beats HSW-E.
If you were hoping for SKL-X to match normal SKL, that is sadly not the case: even at similar Turbo clocks its latencies are higher across the board, even allowing Ryzen a win. Perhaps further platform optimisations are needed.
Code In-Page Random Latency (ns) 12 [-27%] (4-14-28) 16.6 (4-9-25) 10 (4-11-21) 15.8 (3-20-29) With code SKL-X performs better though not enough to catch normal SKL.
Code Full Random Latency (ns) 86 [-15%] (4-16-106) 102 (4-13-49) 70 (4-11-47) 85 (3-20-58) Out-of-page code latency takes a bigger hit but nothing to worry about.
Code Sequential Latency (ns) 6.5 [-27%] (4-7-12) 8.9 (4-9-18) 5.3 (4-9-20) 10.2 (3-8-16) Again nothing much changes here.
SKL-X again does not manage to match normal SKL but soundly trounces both Ryzen and its older HSW-E brother, delivering a good result overall. Code access seems to perform more consistently than data for some reason we need to investigate.
Memory Update Transactional (MTPS) 52.2 [+12x] HLE 4.23 32.4 HLE 7 SKL-X with working HLE is over 12-times faster than Ryzen and older HSW-E.
Memory Update Record Only (MTPS) 57.2 [+13.6x] HLE 4.19 25.4 HLE 5.47 SKL-X is king of the hill with nothing getting close.
Yes – Intel has finally fixed HLE/RTM, and owners of HSW-E and BRW-E must feel very hard done by, considering it was “working” there before being disabled due to the errata. Thus after so many years we have HLE, RTM and AVX512! Great!

If there was any doubt, SKL-X does not disappoint – massive aggregate cache (L1D and L2) and memory bandwidths, with server versions likely even higher; the smaller L3 cache does falter though, which is a bit of a surprise – the larger L2 caches must have forced some compromises.

Latency is a bit disappointing compared to the “normal” SKL/KBL we have on desktop, but still better than the older HSW-E and the Ryzen competition. The L1 and L2 cache latencies (despite the L2 being 4 times bigger) are OK in clock terms, with the L3 and memory controller being the source of the increased latencies.

SiSoftware Official Ranker Scores

Final Thoughts / Conclusions

After a strong CPU performance we did not expect the cache and memory performance to disappoint – and it does not. SKL-X is a big improvement over the older versions (HSW-E) and competition with few weaknesses.

The mesh interconnect does exhibit higher inter-core latencies with only a small increase in bandwidth; perhaps this can be fixed.

The much-reduced L3 cache disappoints in both bandwidth and latency; the memory controllers provide huge bandwidth but at the expense of higher latencies.

All in all, if you can afford it, there is no question that SKL-X is worth it. But better wait to see what AMD’s Threadripper has in store before making your choice… 😉

FP16 GPGPU Image Processing Performance & Quality

GPGPU Image Processing

What is FP16 (“half”)?

FP16 (aka “half” floating-point) is the lower-precision IEEE 754 floating-point format that has recently begun to be supported by GPGPUs for compute (e.g. Intel EV9+ Skylake GPU, nVidia Pascal), while CPU support is still limited to SIMD conversion only (F16C). It has been added to allow mobile devices (phones, tablets) to provide increased performance (and thus save power for fixed workloads) for a small drop in quality on normal 8-bpc (24-bpp) images and video.

However, normal laptops and tablets with integrated graphics can also benefit from FP16 support in the same way, due to their relatively low graphics compute power and the need to save power given the limited battery of thin-and-light formats.
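For reference, binary16 packs a sign bit, 5 exponent bits and a 10-bit significand, giving roughly 3 decimal digits of precision – enough for 8-bpc pixel data but not much more. Python’s stdlib can round-trip through the format (the ‘e’ struct code), which makes the precision limits easy to demonstrate:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a value through IEEE 754 half precision (binary16)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Every 8-bit pixel value (0..255) survives FP16 exactly...
assert all(to_fp16(float(v)) == float(v) for v in range(256))

# ...but the 10-bit significand means consecutive integers above 2048
# are no longer representable, and small fractions are only approximate:
print(to_fp16(2049.0))   # -> 2048.0 (rounded to nearest, ties-to-even)
print(to_fp16(0.1))      # -> 0.0999755859375
```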

In this article we’re investigating the performance differences vs. standard FP32 (aka “single”) and the resulting quality difference (if any) for mobile GPGPUs (Intel’s EV9/9.5 SKL/KBL). See the previous articles for general performance comparison:

Image Processing Performance & Quality

We are testing GPGPU performance of the GPUs in OpenCL and DirectX/OpenGL ComputeShader.

Results Interpretation: Higher values (MPix/s, MB/s, etc.) mean better performance.

Environment: Windows 10 x64, latest Intel drivers (April 2017). Turbo / Dynamic Overclocking was enabled on all configurations.

Image Filter
FP32/Single FP16/Half Comments
GPGPU Image Processing Blur (3×3) Filter OpenCL (MPix/s)  481  967 [+2x] We see a text-book 2x performance increase for no visible drop in quality.
GPGPU Image Processing Sharpen (5×5) Filter OpenCL (MPix/s)  107  331 [+3.1x] Using FP16 yields over 3x performance increase but we do see a few more changed pixels though no visible difference.
GPGPU Image Processing Motion-Blur (7×7) Filter OpenCL (MPix/s)  112  325 [+2.9x] Again almost 3x performance increase but no visible quality difference. Result!
GPGPU Image Processing Edge Detection (2*5×5) Sobel OpenCL (MPix/s)  107  323 [+3.1x] Again just over 3x performance increase but no visible quality difference.
GPGPU Image Processing Noise Removal (5×5) Median OpenCL (MPix/s) 5.41  5.67 [+4%] No image difference at all but also almost no performance increase – a measly 4%.
GPGPU Image Processing Oil Painting Quantise OpenCL (MPix/s)  4.7  13.48 [+2.86x] We’re back to a 2.86x performance increase, with a few more differences than we’ve seen before, though quality seems acceptable.
GPGPU Image Processing Diffusion Randomise OpenCL (MPix/s)  1188  1210 [+2%] Due to random number generation using 64-bit integer processing the performance difference is minimal – and the picture quality is not acceptable.
GPGPU Image Processing Marbling Perlin Noise 2D OpenCL (MPix/s) 470  508 [+8%] Again due to Perlin noise generation we see almost no performance gain but big drop in image quality – not worth it.

Other Image Processing relating Algorithms

Image Filter
FP16/Half FP32/Single FP64/Double Comments
GPGPU Science Benchmark GEMM OpenCL (GFLOPS)  178 [+50%]  118  35 Dropping to FP16 gives us 50% more performance, not as good as 2x but still a significant increase.
GPGPU Science Benchmark FFT OpenCL (GFLOPS)  34 [+70%]  20  5.4 With FFT we are now 70% faster, closer to the 100% promised.
GPGPU Science Benchmark N-Body OpenCL (GFLOPS)  297 [+49%]  199  35 Again we drop to “just” 50% faster with FP16 but still a great performance improvement.

Final Thoughts / Conclusions

For most image processing filters (Blur, Sharpen, Motion-Blur, Sobel/Edge-Detection) we see a huge 2-3x performance increase – more than the 2x we hoped for – with little or no image quality degradation; only Median/De-Noise fails to gain. Thus FP16 support is very much useful and should be used when supported.

However for complex filters (Diffusion, Marble/Perlin Noise) the drop in quality is not acceptable for minor performance increase (2-8%); increasing the precision of more data items to improve quality (from FP16 to FP32) would further drop performance making the whole endeavour pointless.
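The failure mode is easy to reproduce. A Perlin-style filter accumulates many small weighted terms; once the running sum grows, FP16’s coarse value spacing silently absorbs further contributions. An illustrative sketch using Python’s stdlib binary16 round-trip (not the benchmark’s kernel):

```python
import struct

def fp16(x: float) -> float:
    """Round to the nearest IEEE 754 half-precision (binary16) value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

total, absorbed = 0.0, 0
for _ in range(4096):
    new = fp16(total + 0.25)     # accumulate small contributions in FP16
    if new == total:
        absorbed += 1            # the contribution vanished entirely
    total = new

# In FP32 the sum would reach 4096 * 0.25 = 1024; in FP16 it stalls at
# 512.0, because at 512 the spacing between representable values reaches
# 0.5 and adding 0.25 rounds straight back down (ties-to-even).
print(total, absorbed)           # -> 512.0 2048
```

Half the contributions are lost outright, which is exactly the kind of error that shows up as visible banding/noise artefacts rather than a uniform quality drop.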

For those algorithms that do benefit from FP16 the performance improvement with FP16 is very much worth it – so FP16 support is very useful indeed.

Intel Graphics GPGPU Performance

Intel Logo

Why test GPGPU performance Intel Core Graphics?

Laptops (and tablets) are still in fashion, with desktops largely left to PC game enthusiasts and workstations for big compute workloads; most laptops (and all tablets) make do with integrated graphics, with the few dedicated-graphics options aimed mainly at mobile PC gamers.

As a result, integrated graphics on Intel’s mobile platform is what the vast majority of users will experience – thus its importance is not to be underestimated. While in the past integrated graphics options were dire, the introduction of the Core v3 (Ivy Bridge) series brought us a GPGPU-capable graphics processor as well as an update of the internal media transcoder introduced with Core v2 (Sandy Bridge).

With each generation Intel has progressively improved the graphics core, perhaps far more than its CPU cores – and added more variants (GT3) and embedded cache (eDRAM) which greatly increased performance – all within the same power limit.

New Features enabled by the latest 21.45 graphics driver

With Intel graphics drivers supporting just 2 generations of graphics – unlike the unified drivers of AMD and nVidia – older graphics quickly become obsolete with few updates; but the Windows 10 “free update” forced Intel’s hand somewhat, with its driver (20.40) supporting 3 generations of graphics (Haswell, Broadwell and, latest at the time, Skylake).

However, the latest 21.45 driver for newly released Kabylake and older Skylake does bring new features that can make a big difference in performance:

  • Native FP64 (64-bit aka “double” floating-point support) in OpenCL – thus allowing high precision compute on integrated graphics.
  • Native FP16 (16-bit aka “half” floating-point support) in OpenCL, ComputeShader – thus allowing lower precision but faster compute.
  • Vulkan graphics interface support – OpenGL’s successor and DirectX 12’s competitor – for faster graphics and compute.

Will these new features make upgrading your laptop to a brand-new KBL laptop more compelling?

In this article we test (GP)GPU graphics unit performance; please see our other articles on:

Hardware Specifications

We are comparing the internal GPUs of the new Intel ULV APUs with the old versions.

Graphics Unit Haswell HD4000 Haswell HD5000 Broadwell HD6100 Skylake HD520 Skylake HD540 Kabylake HD620 Comment
Graphics Core EV7.5 HSW GT2U EV7.5 HSW GT3U EV8 BRW GT3U EV9 SKL GT2U EV9 SKL GT3eU EV9.5 KBL GT2U Despite 4 CPU generations we really have 2 GPU generations.
APU / Processor Core i5-4210U Core i7-4650U Core i7-5557U Core i7-6500U Core i5-6260U Core i3-7100U The naming convention has changed between generations.
Cores (CU) / Shaders (SP) / Type 20C / 160SP 40C / 320SP 48C / 384SP 24C / 192SP 48C / 384SP 23C / 184SP BRW increased CUs to 24/48 and i3 misses 1 core.
Speed (Min / Max / Turbo) MHz 200-1000 200-1100 300-1100 300-1000 300-950 300-1000 The turbo clocks have hardly changed between generations.
Power (TDP) W 15 15 28 15 15 15 Except GT3 BRW, all ULVs are 15W rated.
DirectX CS Support 11.1 11.1 11.1 11.2 / 12.1 11.2 / 12.1 11.2 / 12.1 SKL/KBL enjoy v11.2 and 12.1 support.
OpenGL CS Support 4.3 4.3 4.3 4.4 4.4 4.4 SKL/KBL provide v4.4 vs. version 4.3 for older devices.
OpenCL CS Support 1.2 1.2 1.2 2.0 2.0 2.1 SKL provides v2 support with KBL 2.1 vs 1.2 for older devices.
FP16 / FP64 Support No / No No / No No / No Yes / Yes Yes / Yes Yes / Yes SKL/KBL support both FP64 and FP16.
Byte / Integer Width 8 / 32-bit 8 / 32-bit 8 / 32-bit 128 / 128-bit 128 / 128-bit 128 / 128-bit SKL/KBL prefer vectorised integer workloads, 128-bit wide.
Float/ Double Width 32 / X-bit 32 / X-bit 32 / X-bit 32 / 64-bit 32 / 64-bit 32 / 64-bit Strangely neither arch prefers vectorised floating-point loads – driver bug?
Threads per CU 512 512 256 256 256 256 Strangely BRW and later reduced the threads/CU to 256.

GPGPU Performance

We are testing vectorised, crypto (including hash), financial and scientific GPGPU performance of the GPUs in OpenCL and DirectX/OpenGL ComputeShader.

Results Interpretation: Higher values (MPix/s, MB/s, etc.) mean better performance.

Environment: Windows 10 x64, latest Intel drivers (April 2017). Turbo / Dynamic Overclocking was enabled on all configurations.

Graphics Processors HD4000 (EV7.5 HSW-GT2U) HD5000 (EV7.5 HSW-GT3U) HD6100 (EV8 BRW-GT3U) HD520 (EV9 SKL-GT2U) HD540 (EV9 SKL-GT3eU) HD620 (EV9.5 KBL-GT2U) Comments
GPGPU Arithmetic Half/Float/FP16 Vectorised OpenCL (Mpix/s) 288 399 597 875 [+3x] 1500 840 [+2.8x] If FP16 is enough, KBL and SKL have 2x performance of FP32.
GPGPU Arithmetic Single/Float/FP32 Vectorised OpenCL (Mpix/s) 299 375 614 468 [+56%] 817 452 [+50%] SKL GT3e rules the roost but KBL hardly improves on SKL.
GPGPU Arithmetic Double/FP64 Vectorised OpenCL (Mpix/s) 18.54 (eml) 24.4 (eml) 38.9 (eml) 112 [+6x] 193 104 [+5.6x] SKL GT2 with native FP64 is almost 3x emulated BRW GT3!
GPGPU Arithmetic Quad/FP128 Vectorised OpenCL (Mpix/s) 1.8 (eml) 2.36 (eml) 4.4 (eml) 6.34 (eml) [+3.5x] 10.92 (eml) 6.1 (eml) [+3.4x] Emulating FP128 through FP64 is ~2.5x faster than through FP32.
As expected native FP16 runs about 2x faster than FP32 and thus provides a huge performance upgrade if the precision is sufficient. Native FP64 is about 6x faster than FP32-emulated FP64 (on HSW), and even emulated FP128 improves by about 3.5x! Otherwise KBL GT2 matches SKL GT2 and is about 50% faster than HSW GT2 in FP32 and 6x faster in FP64.
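The “(eml)” results are computed in software. A common building block for such emulation is error-free pair arithmetic (Dekker/Knuth “two-sum”), where each value is held as an unevaluated sum of two lower-precision floats; a sketch at Python’s FP64 precision, purely illustrative (the driver’s actual emulation scheme is not documented):

```python
def two_sum(a: float, b: float):
    """Error-free addition: returns (s, err) with s + err == a + b exactly."""
    s = a + b
    bv = s - a                     # the part of b actually stored in s
    err = (a - (s - bv)) + (b - bv)
    return s, err

# 1.0 is lost when added to 1e16 in a single float...
assert (1e16 + 1.0) == 1e16
# ...but two_sum captures it in the error term, so a pair (s, err) can
# carry roughly double the significand width of a single float:
s, err = two_sum(1e16, 1.0)
print(s, err)                      # -> 1e+16 1.0
```

This also shows why emulation is so costly: every add/multiply expands into several native operations, consistent with the large native-vs-emulated gaps in the table above.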
GPGPU Crypto Benchmark AES256 Crypto OpenCL (MB/s) 1.37 1.85 2.7 2.19 [+60%] 3.36  2.21 [+60%] Integer performance has been similar since BRW.
GPGPU Crypto Benchmark AES128 Crypto OpenCL (MB/s) 1.87 2.45 3.45 2.79 [+50%] 4.3 2.83 [+50%] Not a lot changes here.
SKL/KBL GT2 with integer workloads (with extensive memory accesses) are 50-60% faster than HSW, similar to what we saw with floating-point performance. But the change happened with BRW, which improved the most over HSW, with SKL and KBL not improving further.
GPGPU Crypto Benchmark SHA2-256 (int32) Hash OpenCL (MB/s)  1.2 1.62 4.35  3 [+2.5x] 5.12 2.92 In this tough compute test SKL/KBL are 2.5x faster.
GPGPU Crypto Benchmark SHA1 (int32) Hash OpenCL (MB/s) 2.86  3.93  9.82  6.7 [+2.34x]  11.26  6.49 With a lighter algorithm SKL/KBL are still ~2.4x faster.
GPGPU Crypto Benchmark SHA2-512 (int64) Hash OpenCL (MB/s)  0.828  1.08 1.68 1.08 [+30%] 1.85  1 64-bit integer performance does not improve much.
In pure integer compute tests SKL/KBL greatly improve over HSW, being no less than 2.5x faster – a huge improvement; but 64-bit integer performance hardly improves (30% faster with 20% more CUs, 24 vs 20). Again BRW is where the improvements were added, with SKL GT3e hardly improving over BRW GT3.
GPGPU Finance Benchmark Black-Scholes FP32 OpenCL (MOPT/s) 461 495 493 656 [+42%]  772 618 [+40%] Pure FP32 compute SKL/KBL are 40% faster.
GPGPU Finance Benchmark Black-Scholes FP64 OpenCL (MOPT/s) 137  238 135 SKL GT3 is 73% faster than GT2 variants
GPGPU Finance Benchmark Binomial FP32 OpenCL (kOPT/s) 62.45 85.76 123 86.32 [+38%]  145.6 82.8 [+35%] In this tough algorithm SKL/KBL are still almost 40% faster.
GPGPU Finance Benchmark Binomial FP64 OpenCL (kOPT/s) 18.65 31.46 19 SKL GT3 is over 65% faster than GT2 KBL.
GPGPU Finance Benchmark Monte-Carlo FP32 OpenCL (kOPT/s) 106 160.4 192 174 [+64%] 295 166.4 [+56%] M/C is not as tough so here SKL/KBL are 60% faster.
GPGPU Finance Benchmark Monte-Carlo FP64 OpenCL (kOPT/s) 31.61 56 31 GT3 SKL manages an 80% improvement over GT2.
Intel is pulling our leg here; the KBL GPU shows no improvement whatsoever over SKL, but both are about 40% faster in FP32 than the much older HSW. The GT3 SKL variant shows good gains of 65-80% over the common GT2 and thus is the one to get if available. Obviously the ace card for SKL and KBL is FP64 support.
GPGPU Science Benchmark SGEMM FP32 OpenCL (GFLOPS)  117  130 142 116 [=]  181 113 [=] SKL/KBL have a problem with this algorithm, though GT3 does better.
GPGPU Science Benchmark DGEMM FP64 OpenCL (GFLOPS) 34.9 64.7 34.7 GT3 SKL manages an 86% improvement over GT2.
GPGPU Science Benchmark SFFT FP32 OpenCL (GFLOPS) 13.3 13.1 15 20.53 [+54%]  27.3 21.9 [+64%] In a return to form SKL/KBL are 50% faster.
GPGPU Science Benchmark DFFT FP64 OpenCL (GFLOPS) 5.2  4.19  4.69 GT3 stumbles a bit here; some optimisations are needed.
GPGPU Science Benchmark N-Body FP32 OpenCL (GFLOPS)  122  157.9 249 201 [+64%]  304 177.6 [+45%] Here SKL/KBL are 50% faster overall.
GPGPU Science Benchmark N-Body FP64 OpenCL (GFLOPS) 19.25 31.9 17.8 GT3 manages only a 65% improvement here.
Again we see no delta between SKL and KBL – the graphics cores perform the same; again both benefit from FP64 support allowing high precision kernels to run. GT3 SKL variant greatly improves over common GT2 variant – except in one test (DFFT) that seems to be an outlier.
GPGPU Image Processing Blur (3×3) Filter OpenCL (MPix/s)  341  432  636 492 [+44%]  641 488 [+43%] We see the GT3s trading blows in this integer test, but SKL/KBL are 40% faster than HSW.
GPGPU Image Processing Sharpen (5×5) Filter OpenCL (MPix/s)  72.7  92.8  147  106 [+45%]  139  106 [+45%] BRW GT3 just wins this with SKL/KBL again 45% faster.
GPGPU Image Processing Motion-Blur (7×7) Filter OpenCL (MPix/s)  75.6  96  152  110 [+45%]  149  111 [+45%] Another win for BRW and a 45% improvement for SKL/KBL.
GPGPU Image Processing Edge Detection (2*5×5) Sobel OpenCL (MPix/s)  72.6  90.6  147  105 [+44%]  143  105 [+44%] As above in this test.
GPGPU Image Processing Noise Removal (5×5) Median OpenCL (MPix/s)  2.38  1.53  6.51  5.2 [+2.2x]  7.73  5.32 [+2.23x] SKL’s GT3 manages a win, but overall SKL/KBL are over 2x faster than HSW.
GPGPU Image Processing Oil Painting Quantise OpenCL (MPix/s)  1.17  0.719  5.83  4.57 [+3.9x]  4.58  4.5 [+3.84x] Another win for BRW.
GPGPU Image Processing Diffusion Randomise OpenCL (MPix/s)  511  688  1150  1100 [+2.1x]  1750  1080 [+2.05x] SKL/KBL are over 2x faster than HSW. BRW is beaten here.
GPGPU Image Processing Marbling Perlin Noise 2D OpenCL (MPix/s)  378.5  288  424  437 [+15%]  611  443 [+17%] Some wild results here, some optimizations may be needed.
In these integer workloads (with texture access) the 28W GT3 of BRW manages a few wins over the 15W GT3e of SKL – but compared to old HSW, both SKL and KBL are between 40% and 300% faster. Again we see no delta between SKL and KBL – there does not seem to be any difference at all!

If you have a HSW GT2 then an upgrade to SKL GT2 brings massive improvements as well as native FP16 and FP64 support. But the HSW GT3 variant is competitive and the BRW GT3 even more so. KBL GT2 shows no improvement over SKL GT2 – so it’s not just the CPU core that is unchanged but the graphics core also – it’s not EV9.5 here, more like EV9.1!

For integer workloads BRW is where the big improvement came, but for 64-bit integer workloads that improvement is still to come, if ever. At least all drivers support native int64.

Transcoding Performance

We are testing media (video + audio) transcoding performance for common video formats: H.264/AVC (MP4) and H.265/HEVC.

Results Interpretation: Higher values (MPix/s, MB/s, etc.) mean better performance. Lower values (ns, clocks) mean better performance.

Environment: Windows 10 x64, latest Intel drivers (April 2017). Turbo / Dynamic Overclocking was enabled on all configurations.

Graphics Processors HD4000 (EV7.5 HSW-GT2U) HD5000 (EV7.5 HSW-GT3U) HD6100 (EV8 BRW-GT3U) HD520 (EV9 SKL-GT2U) HD540 (EV9 SKL-GT3eU) HD620 (EV9.5 KBL-GT2U) Comments
H.264/AVC Decoder/Encoder QuickSync H264 8-bit only QuickSync H264 8-bit only QuickSync H264 8/10-bit QuickSync H264 8/10-bit QuickSync H264 8/10-bit QuickSync H264 8/10-bit HSW supports 8-bit only so 10-bit (high-colour) are out of luck.
H.265/HEVC Decoder/Encoder QuickSync H265 8-bit partial QuickSync H265 8-bit QuickSync H265 8-bit QuickSync H265 8/10-bit SKL has full/hardware H265/HEVC transcoding but for 8-bit only; Main10 (10-bit profile) requires KBL so finally we see a difference.
Transcode Benchmark VC-1 > H.264/AVC Transcoding (MB/s)  7.55 8.4  7.42 [-2%]  8.25  8.08 [+6%] With DDR4 KBL is 6% faster.
Transcode Benchmark VC-1 > H.265/HEVC Transcoding (MB/s)  0.734  3.14 [+4.2x]  3.67  3.63 [+5x] Hardware support makes SKL/KBL 4-5x faster.

If you want HEVC/H.265 transcoding – including 4K/UHD – then you want SKL. But if you plan on using 10-bit/HDR colour then you need KBL – finally an improvement over SKL. As transcoding uses fixed-function hardware, the GT3 performs only slightly faster.

Memory Performance

We are testing memory performance of GPUs using OpenCL and DirectX/OpenGL ComputeShader, including transfer (up/down) to/from system memory and latency.

Results Interpretation: Higher values (MPix/s, MB/s, etc.) mean better performance. Lower values (ns, clocks) mean better performance.

Environment: Windows 10 x64, latest Intel drivers (Apr 2017). Turbo / Dynamic Overclocking was enabled on all configurations.

Graphics Processors HD4000 (EV7.5 HSW-GT2U) HD5000 (EV7.5 HSW-GT3U) HD6100 (EV8 BRW-GT3U) HD520 (EV9 SKL-GT2U) HD540 (EV9 SKL-GT3eU) HD620 (EV9.5 KBL-GT2U) Comments
Memory Configuration 8GB DDR3 1.6GHz 128-bit 8GB DDR3 1.6GHz 128-bit 16GB DDR3 1.6GHz 128-bit 8GB DDR3 1.867GHz 128-bit 16GB DDR4 2.133GHz 128-bit 16GB DDR4 2.133GHz 128-bit All use 128-bit memory with SKL/KBL using DDR4.
Constant (kB) / Shared (kB) Memory 64 / 64 64 / 64 64 / 64 2048 / 64 2048 / 64 2048 / 64 Shared memory remains the same; in SKL/KBL constant memory is the same as global.
GPGPU Memory Bandwidth Internal Memory Bandwidth (GB/s) 10.4 10.7 11 15.65 23 [+2.1x] 19.6 DDR4 seems to provide over 2x the bandwidth despite the modest clock increase.
GPGPU Memory Bandwidth Upload Bandwidth (GB/s) 5.23 5.35 5.54 7.74 11.23 [+2.1x] 9.46 Again over 2x increase in up speed.
GPGPU Memory Bandwidth Download Bandwidth (GB/s) 5.27 5.36 5.29 7.42 11.31 [+2.1x] 9.6 Again over 2x increase in down speed.
SKL/KBL + DDR4 provide over a 2x increase in internal, upload and download memory bandwidth – despite the relatively modest increase in memory speed (2133 vs 1600); with DDR3 1867MHz memory the improvement drops to 1.5x. So if you were deciding between DDR3 and DDR4, the choice has been made!
GPGPU Memory Latency Global Memory (In-Page Random) Latency (ns)  179 192  234 [+30%]  296 235 [+30%] With DDR4, latency has increased by 30% – not great.
GPGPU Memory Latency Constant Memory Latency (ns)  92.5  112  234 [+2.53x]  279  235 [+2.53x] Constant memory has effectively been dropped, resulting in disastrous 2.53x higher latencies.
GPGPU Memory Latency Shared Memory Latency (ns)  80  84  –  86.8 [+8%]  102  84.6 [+8%] Shared memory latency has stayed the same.
GPGPU Memory Latency Texture Memory (In-Page Random) Latency (ns)  283  298  56 [1/5x]  58.1 [1/5x] Texture access seems to have markedly improved – it is 5x faster.
SKL/KBL global memory latencies have increased by 30% with DDR4 – thus wiping out some gains. The “new” constant memory (2MB!) is now really just bog-standard global memory and thus comes with over 2x the latency. Shared memory latency has stayed pretty much the same. Texture memory access is very much faster – 5x faster, likely through some driver optimisations.

Again no delta between KBL and SKL; if you want bandwidth (who doesn’t?) DDR4 with modest 2133MHz memory doubles bandwidths – but latencies increase. Constant memory is now the same as global memory and does not seem any faster.
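The bandwidth figures above can be sanity-checked against theoretical peaks with a quick calculation. A minimal sketch – the formula is standard; the efficiency percentages are simply derived from the measured results in the table:

```python
# Theoretical peak bandwidth of a DDR interface:
#   GB/s = (bus width in bits / 8) x (transfer rate in MT/s) / 1000
def peak_bandwidth_gbs(bus_bits, mt_per_s):
    return bus_bits / 8 * mt_per_s / 1000

ddr3_1600 = peak_bandwidth_gbs(128, 1600)  # HSW/BRW configurations
ddr4_2133 = peak_bandwidth_gbs(128, 2133)  # SKL/KBL configurations

print(f"DDR3-1600 128-bit peak: {ddr3_1600:.1f} GB/s")  # 25.6 GB/s
print(f"DDR4-2133 128-bit peak: {ddr4_2133:.1f} GB/s")  # 34.1 GB/s

# Measured internal bandwidth: ~10.5 GB/s on DDR3 (~41% efficiency)
# vs ~23 GB/s on DDR4 (~67% efficiency) - so the 2x gain comes from
# better controller efficiency as much as from raw memory speed.
```

In other words, the doubling is larger than the raw 2133-vs-1600 clock difference alone would suggest.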

Shader Performance

We are testing shader performance of the GPUs in DirectX and OpenGL as well as memory bandwidth performance.

Results Interpretation: Higher values (MPix/s, MB/s, etc.) mean better performance.

Environment: Windows 10 x64, latest Intel drivers (Apr 2017). Turbo / Dynamic Overclocking was enabled on all configurations.

Graphics Processors HD4000 (EV7.5 HSW-GT2U) HD5000 (EV7.5 HSW-GT3U) HD6100 (EV8 BRW-GT3U) HD520 (EV9 SKL-GT2U) HD540 (EV9 SKL-GT3eU) HD620 (EV9.5 KBL-GT2U) Comments
Video Shader Benchmark Half/Float/FP16 Vectorised DirectX (Mpix/s) 250  119 602 [+2.4x] 1000 537 [+2.1x] FP16 support in DirectX doubles performance.
Video Shader Benchmark Half/Float/FP16 Vectorised OpenGL (Mpix/s) 235  109 338 [+43%]  496 289 [+23%] FP16 does not yet work in OpenGL.
Video Shader Benchmark Single/Float/FP32 Vectorised DirectX (Mpix/s)  238  120 276 [+16%]  485 248 [+4%] We only see a measly 4-16% better performance here.
Video Shader Benchmark Single/Float/FP32 Vectorised OpenGL (Mpix/s) 228  108 338 [+48%] 498 289 [+26%] SKL does better here – it’s 50% faster than HSW.
Video Shader Benchmark Double/FP64 Vectorised DirectX (Mpix/s) 52.4  78 76.7 [+46%] 133 69 [+30%] With FP64 SKL is still 45% faster.
Video Shader Benchmark Double/FP64 Vectorised OpenGL (Mpix/s) 63.2  67.2 105 [+60%] 177 96 [+50%] Similar result here 50-60% faster.
Video Shader Benchmark Quad/FP128 Vectorised DirectX (Mpix/s) 5.2  7 18.2 [+3.5x] 31.3 16.7 [+3.2x] Driver optimisation makes SKL/KBL over 3.5x faster.
Video Shader Benchmark Quad/FP128 Vectorised OpenGL (Mpix/s) 5.55  7.5 57.5 [+10x]  97.7 52.3 [+9.4x] Here we see SKL/KBL over 10x faster!
We see similar results to OpenCL GPGPU here – with FP16 doubling performance in DirectX – but with FP64 already supported in both DirectX and OpenGL even with HSW, KBL and SKL have less of a lead – of around 50%.
Video Memory Benchmark Internal Memory Bandwidth (GB/s)  15  14.8 27.6 [+84%] 26.9 25 [+67%] DDR4 brings 67-84% more bandwidth.
Video Memory Benchmark Upload Bandwidth (GB/s)  7  7.8 10.1 [+44%] 12.34 10.54 [+50%] Upload bandwidth has also increased ~50%.
Video Memory Benchmark Download Bandwidth (GB/s)  3.63  3.3 3.53 [-2%] 5.66 3.51 [-3%] No change in download bandwidth though.

Final Thoughts / Conclusions

SKL and KBL with the 21.45 driver yield significant gains in OpenCL, making an upgrade from HSW and even BRW quite compelling – despite the older 20.40 driver Intel was forced to provide for Windows 10. The GT3 version provides good gains over the standard GT2 version and should always be selected if available.

Native FP64 support is a huge addition, allowing high-precision kernels to run – unheard of for integrated graphics. Native FP16 support provides up to 2x extra performance where 16-bit floating-point precision is sufficient.
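The precision trade-off behind that FP16 speed-up is easy to demonstrate. A small sketch using Python’s built-in half-precision (`'e'`) struct format – not Sandra’s actual kernels, just an illustration of the 11-bit significand limit:

```python
import struct

def to_fp16(x):
    """Round a Python float to IEEE 754 half precision and back."""
    return struct.unpack('e', struct.pack('e', x))[0]

print(to_fp16(2048.0))  # 2048.0 -- integers up to 2048 are exact in FP16
print(to_fp16(2049.0))  # 2048.0 -- 2049 is not representable (11-bit significand)
print(to_fp16(0.1))     # ~0.1 with only ~3 decimal digits of precision
```

If a kernel’s data fits within that precision (e.g. 8-bit imagery), FP16 is effectively free performance; otherwise FP32 remains necessary.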

However, KBL’s EV9.5 graphics core shows no improvement at all over SKL’s EV9 core – thus it’s not just the CPU core that has remained unchanged but the GPU core too! The exception is the updated transcoder supporting Main10 HEVC/H.265 (thus HDR / 10-bit+ colour), which is still quite useful for UHD/4K HDR media.

This is very much a surprise: while the CPU core has not improved markedly since SNB (Core v2), the GPU core had always provided significant improvements – and now we have hit the same road-block, just as dedicated GPUs continue to improve significantly in performance and power efficiency. SKL to KBL marks the smallest generation-to-generation improvement ever – effectively KBL is a SKL refresh.

It seems the rumour that Intel may switch to ATI/AMD graphics cores may not be such a crazy idea after all!

Intel Broadwell-E Launch and Reviews

Intel has launched the Broadwell-E (6900/6800 series) CPUs for the socket 2011 platform (X99). Is it worth upgrading from your 5800-series Haswell-E? Here are some reviews that contain Sandra benchmarks to help you make your decision:

Stay tuned for our own mini-review of this CPU. Note that as this is not Skylake-E it does not support the AVX512 instruction set but it does contain many other improvements…

Intel Atom X7 (CherryTrail 2015) CPU: Closing on Core M?

Intel Logo

What is CherryTrail (Braswell)?

“CherryTrail” (CYT) is the next generation Atom “APU” SoC from Intel (v3 2015) replacing the current Z3000 “BayTrail” (BYT) SoC which was Intel’s major foray into tablets (both Windows & Android). The “desktop” APUs are known as “Braswell” (BRS) while the APUs for other platforms have different code names.

BayTrail was a major update to both the CPU (OoO core, SSE4.x, AES HWA, Turbo/dynamic overclocking) and the GPU (EV7 IvyBridge GPGPU core), so CherryTrail is a minor process shrink – but with a very much updated GPGPU core, now EV8 (as in the latest Core Broadwell).

In this article we test (GP)GPU graphics unit performance; please see our other articles on:

Previous and next articles regarding Intel CPU performance:

Hardware Specifications

We are comparing Atom processors with the Core M processors. New models may run at higher (or lower) frequencies than the ones they replace, thus performance delta can vary.

APU Specifications Atom Z3770 (BayTrail) Atom X7 Z8700 (CherryTrail) Core M 5Y10 (Broadwell-Y) Comments
Cores (CU) / Threads (SP) 4C / 4T 4C / 4T 2C / 4T The new Atom still has 4C and no HT, same as the old Atom; Core M has 2 cores but with HT, so still 4 threads in total – we shall see whether this makes a big difference.
Speed (Min / Max / Turbo) 533-1467-2400 (6x-11x-18x) 480-1600-2400 (6x-20x-30x) 500-800-2000 (5x-8x-20x) The new CherryTrail Atom has lower BCLK (80MHz) compared to BayTrail (133MHz) and thus higher multipliers while LFM (Low), MFM (Rated) and Turbo (TFM) speeds are very similar.
Power (TDP) 2.4W 2.4W 4.5W TDP remains the same for the new Atom while Core M needs around 2x (twice) as much power.
L1D / L1I Caches 4x 24kB 6-way / 4x 32kB 8-way 4x 24kB 6-way / 4x 32kB 8-way 2x 32kB 8-way / 2x 32kB 8-way No change in L1 caches on the new Atom; comparatively Core M has half the caches.
L2 Caches 2x 1MB 16-way 2x 1MB 16-way 4MB 16-way No change in L2 cache; here Core M has twice as much cache – same size as a normal i7 ULV.
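The clock figures in the table are just BCLK × multiplier; a quick check of the quoted CherryTrail and BayTrail numbers (BCLK values as stated above):

```python
def core_clock_mhz(bclk_mhz, multiplier):
    """Effective core clock = base clock (BCLK) x multiplier."""
    return bclk_mhz * multiplier

# CherryTrail: 80MHz BCLK, hence the higher multipliers
print(core_clock_mhz(80, 6))   # 480 MHz  (LFM / low)
print(core_clock_mhz(80, 20))  # 1600 MHz (MFM / rated)
print(core_clock_mhz(80, 30))  # 2400 MHz (TFM / turbo)

# BayTrail: ~133MHz BCLK
print(core_clock_mhz(133.33, 18))  # ~2400 MHz turbo
```

Despite the different BCLKs, both Atoms top out at the same 2400MHz turbo.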

Native Performance

We are testing native arithmetic, SIMD and cryptography performance using the highest performing instruction sets (AVX2, AVX, etc.). Haswell introduced AVX2, which allows 256-bit integer SIMD (AVX only allowed 128-bit) and FMA3 – finally bringing “fused multiply-add” to Intel CPUs. We are now seeing CPUs getting as wide as GP(GPUs) – not far from the AVX3 512-bit in “Phi”.

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 8.1 x64, latest Intel drivers. Turbo / Dynamic Overclocking was enabled on both configurations.

Native Benchmarks Atom Z3770 (BayTrail) Atom X7 Z8700 (CherryTrail) Core M 5Y10 (Broadwell-Y) Comments
          Intel Braswell: CPU Arithmetic
CPU Arithmetic Benchmark Native Dhrystone (GIPS) 15.11 SSE4 35.91 SSE4 [+55%] 31.84 AVX2 [-12%] CherryTrail has no AVX2 but manages to be 55% faster than old Atom and even a bit faster than Core M with AVX2! A great start!
CPU Arithmetic Benchmark Native FP32 (Float) Whetstone (GFLOPS) 11.09 AVX 18.43 AVX [+8.4%] 21.56 AVX/FMA [+17%] CherryTrail has no FMA either, but here it is only 8% faster than old Atom. Core M does better here – it’s 16% faster still.
CPU Arithmetic Benchmark Native FP64 (Double) Whetstone (GFLOPS) 8.6 AVX 12.34 AVX [+2.8%] 13.49 AVX/FMA [+17%] With FP64 we see an even smaller 3% gain. Core M remains 16% faster still.
We see a big improvement with integer workload of 50% but minor 3-8% with floating-point. Sure Core M is faster (16%) but not by a huge amount to justify power and cost increase.
Intel Braswell: CPU Vectorised SIMD
CPU Multi-Media Vectorised SIMD Native Integer (Int32) Multi-Media (Mpix/s) 25.1 AVX 48.7 AVX [+25%] 70.8 AVX2 [+45%] Again CherryTrail has to do without AVX2, but is still 25% faster than old Atom – a decent improvement. Here though AVX2 of Core M has a sizeable 45% improvement.
CPU Multi-Media Vectorised SIMD Native Long (Int64) Multi-Media (Mpix/s) 9 AVX 14.5 AVX [+4.6%] 24.5 AVX2 [+69%] With a 64-bit integer workload, the improvement drops to about 5%. Here Core M with AVX2 is 70% faster still – finally a big improvement over Atom.
CPU Multi-Media Vectorised SIMD Native Quad-Int (Int128) Multi-Media (Mpix/s) 0.109 0.512 [+3.3x] 0.382 [-25%] This is a tough test using Long integers to emulate Int128, but here CherryTrail manages to be over 3x faster than old Atom – and even faster than Core M! Without SIMD the new Atom does much better than even Core M.
CPU Multi-Media Vectorised SIMD Native Float/FP32 Multi-Media (Mpix/s) 18.9 AVX 41.5 AVX [+42%] 61.3 FMA [+47%] In this floating-point AVX/FMA algorithm, CherryTrail does much better returning with a 42% improvement over old Atom. Core M also returns to a 47% improvement still – seems to be the par for SIMD ops.
CPU Multi-Media Vectorised SIMD Native Double/FP64 Multi-Media (Mpix/s) 8.18 AVX 15.9 AVX [+93%] 36.48 FMA [+2.3x] Switching to FP64 code, CherryTrail is now almost 2x as fast as old Atom – with Core M being over 2x faster still.
CPU Multi-Media Vectorised SIMD Native Quad-Float/FP128 Multi-Media (Mpix/s) 0.5 AVX 0.81 AVX [+62%] 1.69 FMA [+2.1x] In this heavy algorithm using FP64 to mantissa extend FP128, CherryTrail manages a 62% improvement and again Core M is over 2x still.
Lack of AVX2/FMA and higher Turbo speed prevents the new Atom from crushing the old Atom – but still allows a 25-100% (2x as fast) improvement, a significant improvement. Here, though, with AVX2 and FMA – Core M is 45-100% faster still thus could be worth it if more compute power is required.
Intel Braswell: CPU Crypto
CPU Crypto Benchmark Crypto AES-256 (GB/s) 1.09 AES HWA 1.44 AES HWA [+32%] 2.59 AES HWA [+79%] All three CPUs support AES HWA – thus it is mainly a matter of memory bandwidth; here CherryTrail is 32% faster – thus we’d predict its memory controller can yield ~35% more bandwidth. But even with just half the number of cores Core M is 80% faster still – likely due to its dual-channel controller.
CPU Crypto Benchmark Crypto AES-128 (GB/s) 1.51 AES HWA 2 AES HWA [+32%] ? AES HWA What we saw with AES-256 was no fluke: fewer rounds don’t make any difference, the new Atom is 32% faster.
CPU Crypto Benchmark Crypto SHA2-256 (GB/s) 0.143 AVX 0.572 AVX [+4x] 0.93 AVX2 [+63%] SHA HWA will come with the next Atom arch, so neither CPU supports it. But CherryTrail manages to be a whopping 4x (four times) faster than old Atom. Core M with AVX2 still manages to be 63% faster.
CPU Crypto Benchmark Crypto SHA1 (GB/s) 0.388 AVX 1.17 AVX [+3x] ? AVX2 With a less complex algorithm – we see a 3x (three times) improvement over old Atom.
CherryTrail still misses AVX2 or forthcoming SHA HWA – but due to (expected) higher memory bandwidth it still improves over old Atom – while raw compute is 3-4x faster – a huge improvement. Due to its dual-channel controller and AVX2 Core M is still faster than it by a good 60-80%.
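For reference, the SHA2-256 transform benchmarked above is the same algorithm exposed by standard libraries; a minimal single-threaded sketch using Python’s hashlib (illustrative only – Sandra uses its own multi-threaded SIMD kernels, so absolute figures will differ):

```python
import hashlib
import time

data = b"\x00" * (1 << 20)  # 1 MB of input

digest = hashlib.sha256(data).hexdigest()
print("SHA2-256:", digest[:16], "...")

# Hashing throughput is simply bytes processed per second - a crude measurement:
t0 = time.perf_counter()
for _ in range(16):
    hashlib.sha256(data).digest()
elapsed = time.perf_counter() - t0
print(f"~{(len(data) * 16) / elapsed / 1e6:.0f} MB/s (single thread)")
```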
Intel Braswell: CPU Finance
CPU Finance Benchmark Black-Scholes float/FP32 (MOPT/s) 8.0 14.15 [+76%] 21.88 [+54%] In this non-SIMD test we start with a good 76% improvement over old Atom – while Core M is 50% faster still. CherryTrail shows its prowess in FPU processing.
CPU Finance Benchmark Black-Scholes double/FP64 (MOPT/s) 7.3 11.66 [+59%] 17.6 [+50%] Switching to FP64 code, CherryTrail is still 60% faster – and Core M remains 50% faster still.
CPU Finance Benchmark Binomial float/FP32 (kOPT/s) 1.76 3.85 [+2.2x] 5.1 [+32%] Binomial uses thread shared data thus stresses the cache & memory system; CherryTrail is over 2x (twice) as fast as old BayTrail – with Core M just 32% faster. It seems the new memory improvements do help a lot.
CPU Finance Benchmark Binomial double/FP64 (kOPT/s) 1.33 4.13 [+3.1x] 5.02 [+21%] With FP64 code CherryTrail is now a whopping 3x (three times) faster – a massive improvement. Core M is just 20% faster still.
CPU Finance Benchmark Monte-Carlo float/FP32 (kOPT/s) 1.37 2.67 [+94%] 3.73 [+39%] Monte-Carlo also uses thread shared data but read-only thus reducing modify pressure on the caches; CherryTrail remains about 2x faster than old Atom – showing just how much the memory improvements help. Core M is still 40% faster.
CPU Finance Benchmark Monte-Carlo double/FP64 (kOPT/s) 1.54 2.43 [+57%] 3.0 [+23%] Switching to FP64 the improvement drops to around 60% – still a big improvement. Core M’s improvement drops to 23%.
The financial tests show big improvements of 60-200% – complex compute tasks are now very much possible on Atom. Without help from AVX2 and FMA, Core M can only be 20-50% faster still – not the improvement we were hoping for.
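For context, the Black-Scholes test evaluates the closed-form price per option; a minimal scalar sketch of that formula (the benchmark itself runs vectorised, multi-threaded versions of essentially this computation):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Spot 100, strike 100, 1 year, 5% rate, 20% volatility -> ~10.45
print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.2), 2))
```

Each "option" in the MOPT/s figures corresponds to one such evaluation, which is why the test is dominated by log/exp/erf throughput.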
Intel Braswell: CPU Science
CPU Science Benchmark SGEMM (GFLOPS) float/FP32 6.19 AVX 12.42 AVX [+2x] 13.99 FMA [+12%] In this tough SIMD algorithm, again CherryTrail manages to be 2x as fast as old Atom; even with the help of FMA, Core M is only 12% faster. The new Atom is just running away with it.
CPU Science Benchmark DGEMM (GFLOPS) double/FP64 3.38 AVX 6.09 AVX [+80%] 9.61 FMA [+58%] With FP64 SIMD code, CherryTrail remains about 80% faster, just under 2x. Here Core M shows its power, it’s almost 60% faster still.
CPU Science Benchmark SFFT (GFLOPS) float/FP32 2.83 AVX 5.17 AVX [+82%] 4.11 FMA [-21%] FFT also uses SIMD and thus AVX but stresses the memory sub-system more: CherryTrail remains 82% faster than old Atom and here it beats even Core M.
CPU Science Benchmark DFFT (GFLOPS) double/FP64 2 AVX 2.83 AVX [+41%] 3.66 FMA [+29%] With FP64 code, the improvement is reduced to just 41% but it is still significant. Core M’s improvement drops to just 30%.
CPU Science Benchmark SNBODY (GFLOPS) float/FP32 0.929 AVX 1.58 AVX [+70%] 2.64 FMA [+67%] N-Body simulation is SIMD heavy but many memory accesses to shared data but CherryTrail still manages a 70% improvement, with Core M 70% faster still.
CPU Science Benchmark DNBODY (GFLOPS) double/FP64 1.25 AVX 1.76 AVX [+40%] 3.71 FMA [+2.1x] Unlike what we saw before, with FP64 code the improvement drops to 40% as with DFFT. Core M is over 2x faster still.
With highly optimised SIMD AVX code, we see a similar 40-100% improvement in performance; as we said before complex algorithms run a lot faster on the new Atom. With FP64 code, Core M’s FPU shows its power but at what cost?
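As a reminder of what GEMM measures: an N×N matrix multiply costs 2·N³ floating-point operations, which is how the GFLOPS figures are derived from run time. A naive sketch (real kernels block for cache and use SIMD, but count the same operations):

```python
def matmul(A, B):
    """Naive GEMM: C = A x B for square lists-of-lists (2*n^3 flops)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]

n = 256
flops = 2 * n**3  # operation count for one 256x256 SGEMM
print(f"{flops / 1e6:.1f} MFLOP per 256x256 multiply")
```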
Intel Braswell: CPU Multi-Core Transfer
CPU Multi-Core Benchmark Inter-Core Bandwidth (GB/s) 1.63 1.69 [+5%] 8.5 [+5x] With unchanged L1/L2 caches and similar rated/Turbo clocks – it is not surprising the new Atom does not improve on inter-core bandwidth. Core M shows its prowess – managing over 5x higher bandwidth. We see how all these caches perform in the Cache & Memory Atom (CherryTrail) performance article.
CPU Multi-Core Benchmark Inter-Core Latency (ns) 182 179 [-6%] 76 [-68%] Latency, however, sees a small decrease, likely due to the higher clock of CherryTrail during the run (due to higher rated speed) since the caches are unchanged. Core M’s latencies are less than half of Atom’s.

While it does not bring any new instruction sets (e.g. AVX2, FMA, SHA HWA) nor cache changes, CherryTrail improves significantly over the old Atom (30-100%) – much more than we’ve seen in the Core series from arch to arch. Perhaps that is just as well, as Core M is not much faster.

Considering just how much BayTrail improved over the old Atom arch, the improvements are particularly impressive.

About the only “issue” is FP64 performance where Core M shows its power – whether using SIMD AVX/FMA or FPU code.

Software VM (.Net/Java) Performance

We are testing arithmetic and vectorised performance of software virtual machines (SVM), i.e. Java and .Net. With operating systems – like Windows 8.x/10 – favouring SVM applications over “legacy” native, the performance of .Net CLR (and Java JVM) has become far more important.

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 8.1 x64 SP1, latest Intel drivers. .Net 4.5.x, Java 1.8.x. Turbo / Dynamic Overclocking was enabled on both configurations.

VM Benchmarks Atom Z3770 (BayTrail) Atom X7 Z8700 (CherryTrail) Core M 5Y10 (Broadwell-Y) Comments
Intel Braswell: .Net Arithmetic
.Net Arithmetic Benchmark .Net Dhrystone (GIPS) 2.69 3.85 [+43%] 2.95 [-33%] .Net CLR performance improves by 43% – a great start and in line with what we saw with native code – again faster than Core M.
.Net Arithmetic Benchmark .Net Whetstone final/FP32 (GFLOPS) 2.91 9 [+3.1x] 10.8 [+20%] Floating-Point CLR performance improves by a whopping 3x (three times)! While apps should really do compute tasks in native code, this ensures that .Net (or Java) apps will fly. Core M is just 20% faster still.
.Net Arithmetic Benchmark .Net Whetstone double/FP64 (GFLOPS) 5.89 10 [+69%] 12.71 [+27%] FP64 CLR performance improves by a lower 70% but still great improvement. Core M’s performance improves almost 30%.
Just like native SIMD code, we see great improvement of 40-200% making CherryTrail significantly faster running .Net apps than old Atom. Core M is only 20-30% faster still.
Intel Braswell: .Net Vectorised
.Net Vectorised Benchmark .Net Integer Vectorised/Multi-Media (MPix/s) 6.42 11.34 [+76%] 11.25 [=] A good start, we see vectorised .Net code a huge 76% faster in CherryTrail – even faster than Core M!
.Net Vectorised Benchmark .Net Long Vectorised/Multi-Media (MPix/s) 0.761 6.36 [+8.3x] 5.62 [-12%] With 64-bit integer vectorised workload, we see a pretty unbelievable 8x improvement – the old Atom really has an issue with this test. As before, CherryTrail is faster than Core M.
.Net Vectorised Benchmark .Net Float/FP32 Vectorised/Multi-Media (MPix/s) 0.922 2.6 [+2.82x] 4.14 [+59%] Switching to single-precision (FP32) floating-point code, CherryTrail is almost 3x faster. It seems that whatever workload you have in .Net, new Atom is a good deal faster. Core M is 60% faster still.
.Net Vectorised Benchmark .Net Double/FP64 Vectorised/Multi-Media (MPix/s) 2.52 6.25 [+2.48x] 9.36 [+49%] Switching to FP64 code, CherryTrail’s improvement drops to 2.5x – but Core M 50% faster still! While unlikely compute tasks are written in .Net rather than native code, small compute code does benefit.
Vectorised .Net improves even more – between 80% and 200% – whether with integer or floating-point workloads. Unlike what we saw when we reviewed the Core CPUs, Atom does improve from arch to arch.

Just like native code, we see a huge 40-200% improvement when running .Net or Java code. Any “modern” apps (either Universal, Metro, WPF) should run very much faster – especially as heavily optimised SIMD code has also been shown to be a lot faster.

SiSoftware Official Ranker Scores

Final Thoughts / Conclusions

Despite being just a process shrink with minor improvements, CherryTrail is very much faster (25-100%) than the old Atom (BayTrail) within the same power envelope (TDP). Let’s not forget BayTrail itself improved greatly over the previous Atoms – thus new Atom is light-years away in performance. And that’s before we mention the very much improved EV8 GPGPU whose performance we saw earlier, see CPU Atom Z8700 (CherryTrail) GPGPU performance.

Any major CPU improvements including instruction set support (AVX2, FMA, SHA HWA) will have to wait for the next Atom arch – which should make it a formidable APU – but what we have here is still significant.

Coupled with its much improved GPGPU performance, we can now see just why Microsoft has selected it for Surface 3 – it really puts the latest Core M to shame – considering its 2x higher power rating as well as its much higher cost.

If you are after a new tablet, HTPC (like the NUC) or compute stick – best to wait for the new systems with CherryTrail – they are worth it! No ifs and no buts – these are the Atom APUs you are looking for.

Intel Atom X7 (CherryTrail 2015) GPGPU: Closing on Core M?

Intel Logo

What is CherryTrail (Braswell)?

“CherryTrail” (CYT) is the next generation Atom “APU” SoC from Intel (v3 2015) replacing the current Z3000 “BayTrail” (BYT) SoC which was Intel’s major foray into tablets (both Windows & Android). The “desktop” APUs are known as “Braswell” (BRS) while the APUs for other platforms have different code names.

BayTrail was a major update to both the CPU (OoO core, SSE4.x, AES HWA, Turbo/dynamic overclocking) and the GPU (EV7 IvyBridge GPGPU core), so CherryTrail is a minor process shrink – but with a very much updated GPGPU core, now EV8 (as in the latest Core Broadwell).

In this article we test (GP)GPU graphics unit performance; please see our other articles on:

Previous and next articles regarding Intel GPGPU performance:

Hardware Specifications

We are comparing the internal GPUs of 3 processors (BayTrail, CherryTrail and Broadwell-Y) that support GPGPU.

Graphics Unit BayTrail GT CherryTrail GT Broadwell GT2Y – HD 5300 Comment
Graphics Core B-GT EV7 B-GT EV8? B-GT2Y EV8 CherryTrail’s GPU is meant to be based on EV8 like Broadwell – the very latest GPGPU core from Intel! This makes it more advanced than the very popular Core Haswell series, a first for Atom.
APU / Processor Atom Z3770 Atom X7 Z8700 Core M 5Y10 Core M is the new Core-Y UULV versions for high-end tablets against the Atom processors for “normal” tablets/phones.
Cores (CU) / Shaders (SP) / Type 4C / 32SP (2×4 SIMD) 16C / 128SP (2×4 SIMD) 24C / 192SP (2×4 SIMD) Here’s the major change: CherryTrail has no less than 4 times the compute units (CU) in the same power envelope of the old BayTrail. Broadwell has more (24) but it is also rated at higher TDP.
Speed (Min / Max / Turbo) MHz 333 – 667 200 – 600 200 – 800 CherryTrail goes down all the way to 200MHz (same as Broadwell) which should help power savings. Its top speed is a bit lower than BayTrail but not by much.
Power (TDP) W 2.4 (under 4) 2.4 (under 4) 4.5 Both Atoms have the same TDP of around 2-2.4W – while Broadwell-Y is rated at 2x at 4.5-6W. We shall see whether this makes a difference.
DirectX / OpenGL / OpenCL Support 11 / 4.0 / 1.1 11.1 (12?) / 4.3 / 1.2 11.1 (12?) / 4.3 / 2.0 Intel has continued to improve the video driver – 2 generations share a driver – but here CherryTrail has a brand-new driver that supports much newer technologies like DirectX 11.1 (vs 11.0), OpenGL 4.3 (vs 4.0) including Compute and OpenCL 1.2. Broadwell’s driver does support OpenCL 2.0 – perhaps a later CherryTrail driver will do too?
FP16 / FP64 Support No / No (OpenCL), Yes (DirectX) No / No (OpenCL), Yes (DirectX, OpenGL) No / No (OpenCL), Yes (DirectX, OpenGL) Sadly FP16 support is still missing and FP64 is also missing on OpenCL – but available in DirectX Compute as well as OpenGL Compute! Those Intel FP64 extensions are taking their time to appear…
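From the shader counts and turbo clocks above, theoretical FP32 peaks follow directly – assuming 2 FLOP per SP per clock via multiply-add (an assumption for illustration; Intel does not quote peak figures here):

```python
def peak_gflops(shaders, clock_ghz, flops_per_clock=2):
    """Peak FP32 throughput assuming one multiply-add (2 FLOP) per SP per clock."""
    return shaders * clock_ghz * flops_per_clock

print(peak_gflops(32, 0.667))   # BayTrail GT (32 SP @ 667MHz):     ~42.7 GFLOPS
print(peak_gflops(128, 0.600))  # CherryTrail GT (128 SP @ 600MHz): 153.6 GFLOPS
print(peak_gflops(192, 0.800))  # Broadwell HD 5300 (192 SP @ 800MHz): 307.2 GFLOPS
```

On paper CherryTrail has ~3.6x the peak of BayTrail, so the 4-17x measured gains below also reflect driver and architectural efficiency, not just unit count.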

GPGPU Performance

We are testing vectorised, crypto (including hash), financial and scientific GPGPU performance of the GPUs in OpenCL, DirectX ComputeShader and OpenGL ComputeShader (if supported).

Results Interpretation: Higher values (MPix/s, MB/s, etc.) mean better performance.

Environment: Windows 8.1 x64, latest Intel drivers (Jun 2015). Turbo / Dynamic Overclocking was enabled on all configurations.

Graphics Processors BayTrail GT CherryTrail GT Broadwell GT2Y – HD 5300 Comments
Intel Braswell: GPGPU Vectorised
GPGPU Arithmetic Benchmark Single/Float/FP32 Vectorised OpenCL (Mpix/s) 25 160 [+6.4x] 181 [+13%] Straight off the bat we see that the 4x more (and more advanced) CUs in CherryTrail give us 6.4x better performance – a huge improvement! Even the brand-new Broadwell GPU is only 13% faster.
GPGPU Arithmetic Benchmark Half/Float/FP16 Vectorised OpenCL (Mpix/s) 25 160 [+6.4x] 180 [+13%] As FP16 is not supported by any of the GPUs and is promoted to FP32, the results don’t change.
GPGPU Arithmetic Benchmark Double/FP64 Vectorised OpenCL (Mpix/s) 1.63 (emulated) 10.1 [+6.2x] (emulated) 11.6 [+15%] (emulated) None of the GPUs support native FP64 either: emulating FP64 (mantissa extending) is quite hard on all GPUs, but the results don’t change: CherryTrail is 6.2x faster with Broadwell just 15% faster.
GPGPU Arithmetic Benchmark Quad/FP128 Vectorised OpenCL (Mpix/s) 0.18 (emulated) 1.08 [+6x] (emulated) 1.32 [+22%] (emulated) Emulating FP128 using FP32 is even more complex but CherryTrail does not disappoint, it is still 6x faster; Broadwell does pull ahead a bit being 22% faster.
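The “mantissa extending” emulation used in the FP64/FP128 tests above represents one high-precision number as an unevaluated sum of two lower-precision ones (so-called double-float, or Dekker/Knuth arithmetic). The building block is an error-free addition; a sketch using Python doubles to illustrate the pair technique (the GPU kernels apply the same idea to pairs of FP32 values):

```python
def two_sum(a, b):
    """Knuth's error-free transformation: a + b == s + err exactly."""
    s = a + b
    bv = s - a
    err = (a - (s - bv)) + (b - bv)
    return s, err

# Adding a tiny value to a large one loses it in a single float...
big, tiny = 1e16, 1.0
print(big + tiny == big)  # True -- tiny is absorbed by rounding

# ...but the two-float pair keeps the lost part:
s, err = two_sum(big, tiny)
print(s, err)  # the pair (s, err) represents 1e16 + 1 exactly
```

Every emulated operation needs several such steps, which is why emulated FP64 runs at a small fraction of FP32 speed on these GPUs.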
Intel Braswell: GPGPU Crypto
GPGPU Crypto Benchmark AES256 Crypto OpenCL (MB/s) 96 825 [+8.6x] 770 [-7%] In this tough integer workload that uses shared memory CherryTrail does even better – it is 8.6x faster, more than we’d expect – the newer driver may help. Surprisingly this is faster than even Broadwell.
GPGPU Crypto Benchmark AES128 Crypto OpenCL (MB/s) 129 1105 [+8.6x] n/a What we saw before is no fluke, CherryTrail’s GPU is still 8.6 times faster than BayTrail’s.
Intel Braswell: GPGPU Hashing
GPGPU Crypto Benchmark SHA2-512 (int64) Hash OpenCL (MB/s) 54 309 [+5.7x] This 64-bit integer compute-heavy workload is hard on all GPUs (no native 64-bit arithmetic), but CherryTrail does well – it is almost 6x faster than the older BayTrail. Note that neither DirectX nor OpenGL natively supports int64, so this is about as hard as it gets for our GPUs.
GPGPU Crypto Benchmark SHA2-256 (int32) Hash OpenCL (MB/s) 96 1187 [+12.4x] 1331 [+12%] In this integer compute-heavy workload, CherryTrail really shines – it is 12.4x (twelve times) faster than BayTrail! Again, even the latest Broadwell is just 12% faster than it! Atom finally kicks ass both in CPU and GPU performance.
GPGPU Crypto Benchmark SHA1 (int32) Hash OpenCL (MB/s) 215 2764 [+12.8x] SHA1 is less compute-heavy, but results don’t change: CherryTrail is 12.8x times faster – the best result we’ve seen so far.
Intel Braswell: GPGPU Finance
GPGPU Finance Benchmark Black-Scholes FP32 OpenCL (MOPT/s) 34.33 299.8 [+8.7x] 280.5 [-7%] Starting with the financial tests, CherryTrail is quick off the mark – being almost 9x (nine times) faster than BayTrail – and again somehow faster than even Broadwell. Who says Atom cannot hold its own now?
GPGPU Finance Benchmark Binomial FP32 OpenCL (kOPT/s) 5.16 28 [+5.4x] 36.5 [+30%] Binomial is far more complex than Black-Scholes, involving many shared-memory operations (reduction) – but CherryTrail still holds its own, it’s over 5x times faster – not as much as we saw before but massive improvement. Broadwell’s EV8 GPU does show its prowess being 30% faster still.
GPGPU Finance Benchmark Monte-Carlo FP32 OpenCL (kOPT/s) 3.6 61.9 [+17x] 54 [-12%] Monte-Carlo is also more complex than Black-Scholes, also involving shared memory – but read-only; here we see CherryTrail shine – it’s 17x (seventeen times) faster than BayTrail’s GPU – so much so we had to recheck. Most likely the newer GPU driver helps – but BayTrail will not get these improvements. Broadwell is again surprisingly 12% slower.
Intel Braswell: GPGPU Science
GPGPU Science Benchmark SGEMM FP32 OpenCL (GFLOPS) 6 45 [+7.5x] 44.1 [-3%] GEMM is quite a tough algorithm for our GPUs but CherryTrail remains over 7.5x faster – even Broadwell is 3% slower than it. We saw before EV8 not performing as we expected – perhaps some more optimisations are needed.
GPGPU Science Benchmark SFFT FP32 OpenCL (GFLOPS) 2.27 9 [+3.96x] 8.94 [-1%] FFT involves many kernels processing data in a pipeline – so here we see CherryTrail only 4x (four times) faster – the smallest gain we’ve seen so far. But then again Broadwell scores about the same, so it’s a tough test for all GPUs.
GPGPU Science Benchmark N-Body FP32 OpenCL (GFLOPS) 10.74 65 [+6x] 50 [-23%] In our last test we see CherryTrail going back to being 6x faster than BayTrail – surprisingly again Broadwell’s EV8 GPU is 23% slower than it.
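GEMM’s arithmetic is simple – the kernel computes C = αAB + βC – but it is cache- and bandwidth-sensitive, which is why it is hard on these GPUs. A naive (unblocked) Python sketch of the computation, for illustration only (real OpenCL kernels tile the matrices through shared memory; the function name is our own):

```python
def sgemm(alpha, a, b, beta, c):
    """Naive GEMM: C = alpha*A@B + beta*C, matrices as row-major lists of lists."""
    n, k, m = len(a), len(b), len(b[0])
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for p in range(k):           # dot product of row i and column j
                acc += a[i][p] * b[p][j]
            c[i][j] = alpha * acc + beta * c[i][j]
    return c

# 2x2 example: identity times B, with beta = 0, just returns B
a = [[1.0, 0.0], [0.0, 1.0]]
b = [[3.0, 4.0], [5.0, 6.0]]
c = [[0.0, 0.0], [0.0, 0.0]]
print(sgemm(1.0, a, b, 0.0, c))  # → [[3.0, 4.0], [5.0, 6.0]]
```

The inner loop reads a full row of A and column of B per output element – without tiling, that memory traffic is what limits the small Atom GPUs here.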

There is no simpler way to put this: CherryTrail Atom’s GPU obliterates the old one – never less than 4x and up to 17x (yes, seventeen!) faster, many times even overtaking the much newer, more expensive and more power-hungry Broadwell (Core M) EV8 GPU! It is a no-brainer really: you want it – for once Microsoft made a good choice for the Surface 3 after the disasters of earlier Surfaces (perhaps they are finally learning? Nah!).

There isn’t really much to criticise: sure, native FP16 support is missing – which is a pity on Android (which uses FP16 in its UX) – and FP64 is missing from OpenCL too, though, as usual, the DirectX and OpenGL compute drivers do expose it. As mentioned, since OpenGL 4.3 is supported, OpenGL Compute is available for the first time on Atom – a feature only recently introduced in newer drivers for Haswell and later GPUs (EV7.5, EV8).

Just in case we’re not clear: this *is* the Atom you are looking for!

Transcoding Performance

We are testing transcoding performance of GPUs using their hardware transcoders with popular media formats: H.264/MP4, AVC1, H.265/HEVC.

Results Interpretation: Higher values (MPix/s, MB/s, etc.) mean better performance. Lower values (ns, clocks) mean better performance.

Environment: Windows 8.1 x64, latest Intel drivers (June 2015). Turbo / Dynamic Overclocking was enabled on all configurations.

Graphics Processors BayTrail GT CherryTrail GT Broadwell GT2Y – HD 5300 Comments
H.264/MP4 Decoder/Encoder QuickSync H264 (hardware accelerated) QuickSync H264 (hardware accelerated) QuickSync H264 (hardware accelerated) Same transcoder is used for all GPUs.
Intel Braswell: Transcoding H264
Transcode Benchmark H264 > H264 Transcoding (MB/s) 2.24 5 [+2.23x] 8.41 [+68%] H.264 transcoding on the new Atom has more than doubled (2.2x), which makes it ideal for an HTPC (e.g. a Plex server). However, with more power available, Core M has over 60% more bandwidth.
Transcode Benchmark WMV > H264 Transcoding (MB/s) 1.89 4.75 [+2.51x] 8.2 [+70%] When just using the H264 encoder we still see a 2.5x improvement (over two and a half times), with Core M again about 70% faster still.

Intel has not forgotten transcoding, with the new Atom over 2x (twice) as fast – so if you were thinking of using it as a HTPC (NUC/Brix) server, better get the new one. However, unless you really need low power, the Core M (and thus ULV) versions are 60-70% faster still…

GPGPU Memory Performance

We are testing memory performance of GPUs using OpenCL, DirectX ComputeShader and OpenGL ComputeShader (if supported), including transfer (up/down) to/from system memory and latency.

Results Interpretation: Higher values (MPix/s, MB/s, etc.) mean better performance. Lower values (ns, clocks) mean better performance.

Environment: Windows 8.1 x64, latest Intel drivers (June 2015). Turbo / Dynamic Overclocking was enabled on all configurations.

Graphics Processors BayTrail GT CherryTrail GT Broadwell GT2Y – HD 5300 Comments
Memory Configuration 2GB DDR3 1067MHz 64/128-bit (shared with CPU) 4GB DDR3 1.6GHz 64/128-bit (shared with CPU) 4GB DDR3 1.6GHz 128-bit (shared with CPU) Atom is generally configured to use a single memory controller, but CherryTrail runs at 1600MT/s, same as modern Core APUs. Core M/Broadwell naturally has a dual-channel controller, though some laptops/tablets may use just one channel.
Cache Configuration 32kB L2 global/texture? 128kB L3 256kB L2 global/texture? 384kB L3 256kB L2 global/texture? 384kB L3 The internal cache arrangement seems to be a well-kept secret – so much of it is deduced from the latency graphs. The L2 increase in CherryTrail (8x larger) is in line with the increase in compute units.
Intel Braswell: GPGPU Memory Bandwidth
GPGPU Memory Bandwidth Internal Memory Bandwidth (GB/s) 3.45 11 [+3.2x] 10.1 [-8%] CherryTrail manages over 3x higher bandwidth during internal transfer over BayTrail, close to what we’d expect a dual-channel system to achieve. Surprisingly our dual-channel Core M manages to be 8% slower. We did see Broadwell achieve less than Haswell – which may explain what we’re seeing here.
GPGPU Memory Bandwidth Upload Bandwidth (GB/s) 1.18 2.09 [+77%] 3.91 [+87%] While all APUs don’t need to transfer memory over PCIe like dedicated GPUs, they still don’t support “zero copy” – thus memory transfers are not free. Here CherryTrail improves almost 2x over BayTrail – but finally we see Core M being 87% faster still.
GPGPU Memory Bandwidth Download Bandwidth (GB/s) 1.13 2.29 [+2.02x] 3.79 [+65%] While upload bandwidth improved by 77%, download bandwidth has improved a bit more, with CherryTrail being over 2x (twice) as fast – but again Broadwell is 65% faster still. This will really help GPGPU applications that need to copy large results from GPU to CPU memory, until the “zero copy” feature arrives.
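“Zero copy” means the GPU would read and write host memory in place, rather than staging data through a real transfer. A CPU-side analogy in Python – illustrative only: a real copy touches every byte, while a `memoryview` just aliases the buffer (in OpenCL proper, zero-copy uses `CL_MEM_USE_HOST_PTR` or mapped buffers):

```python
import time

data = bytearray(32 * 1024 * 1024)  # 32 MB host buffer

t0 = time.perf_counter()
staged = bytes(data)        # staging copy: every byte crosses memory
t_copy = time.perf_counter() - t0

t0 = time.perf_counter()
view = memoryview(data)     # zero-copy: just an alias, no bytes move
t_view = time.perf_counter() - t0

view[0] = 42                # writes through to the original buffer
print(f"copy: {t_copy*1e3:.2f} ms, view: {t_view*1e6:.2f} us, data[0]={data[0]}")
```

The copy time scales with buffer size while the view is constant-time – the same asymmetry these upload/download figures measure.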
Intel Braswell: GPGPU Memory Latency
GPGPU Memory Latency Global Memory (In-Page Random) Latency (ns) 981 829 [-15%] 274 With the memory running faster, we see latency decreasing by 15% – a good result. However, Broadwell does much better, with almost 1/4 the latency.
GPGPU Memory Latency Global Memory (Full Random) Latency (ns) 1272 1669 [+31%] Surprisingly, using full-random access we see latency increase by 31%. This could be due to the larger (4GB vs. 2GB) memory arrangement – the TLB-miss cost could be much higher.
GPGPU Memory Latency Global Memory (Sequential) Latency (ns) 383 279 [-27%] Sequential access brings the latency down by 27% – a good result.
GPGPU Memory Latency Constant Memory Latency (ns) 660 209 [-1/3x] With the L1 cache covering the entire constant memory on CherryTrail, we see latency decrease to 1/3 (a third) – great for kernels that use more than 32kB of constant data.
GPGPU Memory Latency Shared Memory Latency (ns) 215 201 [-6%] Shared memory is a bit faster (6% lower latency), nothing to write home about.
GPGPU Memory Latency Texture Memory (In-Page Random) Latency (ns) 1583 1234 [-22%] With the memory running faster, as with global memory we see latency decreasing by 22% here – a good result!
GPGPU Memory Latency Texture Memory (Sequential) Latency (ns) 916 353 [-61%] Sequential access brings the latency down by a huge 61%, an even bigger difference than what we saw with Global memory. Impressive!
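Latency in these tests is measured with dependent loads – each access’s address comes from the previous load, so nothing can be overlapped or prefetched (except in the sequential case). A Python sketch of the pointer-chasing pattern, for illustration only (interpreter timings are in no way comparable to the GPU figures; only the access pattern matches):

```python
import random
import time

def make_chain(size: int, seed: int = 42) -> list:
    """Build a random single cycle: chain[i] holds the next index to load."""
    order = list(range(size))
    random.Random(seed).shuffle(order)
    chain = [0] * size
    for a, b in zip(order, order[1:] + order[:1]):
        chain[a] = b
    return chain

def chase_ns(chain: list, iters: int = 100_000) -> float:
    """Average time per dependent load, in nanoseconds."""
    i = 0
    t0 = time.perf_counter()
    for _ in range(iters):
        i = chain[i]        # the next address depends on this load
    return (time.perf_counter() - t0) / iters * 1e9

chain = make_chain(4096)
print(f"~{chase_ns(chain):.0f} ns per dependent access (interpreter-dominated)")
```

Growing the chain beyond each cache level is what produces the latency steps the article deduces the cache sizes from.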

Again, we see big gains in CherryTrail with bandwidth increasing by 2-3x which is necessary to keep all those new EVs fed with data; Broadwell does do better but then again it has a dual-channel memory controller.

Latency has also decreased by a good amount (6-61% depending on access pattern), likely due to the faster memory employed; the much larger caches (8x) also help. For data that exceeded the small BayTrail cache (32kB), the CherryTrail one should be more than sufficient.

Shader Performance

We are testing shader performance of the GPUs in DirectX and OpenGL.

Results Interpretation: Higher values (MPix/s, MB/s, etc.) mean better performance.

Environment: Windows 8.1 x64, latest Intel drivers (Jun 2015). Turbo / Dynamic Overclocking was enabled on all configurations.

Graphics Processors BayTrail GT CherryTrail GT Broadwell GT2Y – HD 5300 Comments
Intel Braswell: Video Shaders
Video Shader Benchmark Single/Float/FP32 Vectorised DirectX (Mpix/s) 39 127.6 [+3.3x] Starting with DirectX FP32, CherryTrail is over 3.3x faster than BayTrail – not as high as we saw before but a good start.
Video Shader Benchmark Single/Float/FP32 Vectorised OpenGL (Mpix/s) 38.3 121.8 [+3.2x] 172 [+41%] OpenGL does not change matters, CherryTrail is still just over 3x (three times) faster than BayTrail. Here, though, Broadwell is 41% faster still…
Video Shader Benchmark Half/Float/FP16 Vectorised DirectX (Mpix/s) 39.2 109.5 [+2.8x] As FP16 is not supported by any of the GPUs and is promoted to FP32, the results don’t change.
Video Shader Benchmark Half/Float/FP16 Vectorised OpenGL (Mpix/s) 38.11 121.8 [+3.2x] 170 [+39%] As FP16 is not supported by any of the GPUs and is promoted to FP32, the results don’t change.
Video Shader Benchmark Double/FP64 Vectorised DirectX (Mpix/s) 7.48 18 [+2.4x] Unlike the OpenCL driver, the DirectX driver does support FP64 – so all GPUs run native FP64 code, not emulation. Here, CherryTrail is only 2.4x faster than BayTrail.
Video Shader Benchmark Double/FP64 Vectorised OpenGL (Mpix/s) 9.17 26 [+2.83x] 46.45 [+78%] As above, the OpenGL driver supports FP64 too – so all GPUs again run native FP64 code. CherryTrail is 2.8x faster here, but Broadwell is 78% faster still.
Video Shader Benchmark Quad/FP128 Vectorised DirectX (Mpix/s) 1.3 (emulated) 1.34 [+3%] (emulated) (emulated) Here we’re emulating FP128 (by mantissa-extending FP64, not FP32) and it’s hard: CherryTrail ends up just 3% faster than BayTrail; perhaps some optimisations are needed.
Video Shader Benchmark Quad/FP128 Vectorised OpenGL (Mpix/s) 1 (emulated) 1.1 [+10%] (emulated) 3.4 [+3.1x] (emulated) OpenGL does not change the results – but here we see Broadwell being 3x faster than both CherryTrail and BayTrail. Perhaps such heavy shaders are too much for our Atom GPUs.
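The “mantissa-extending” FP128 emulation represents one value as an unevaluated sum of two FP64 numbers (a “double-double”); every arithmetic operation then expands into several error-free FP64 steps, which is why throughput collapses. A sketch of the addition building block in Python, using Knuth’s two-sum (names are our own, and this is the simplified form of double-double addition):

```python
def two_sum(a: float, b: float):
    """Error-free transformation: returns (s, e) with s + e == a + b exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def dd_add(x, y):
    """Add two double-double values (hi, lo): one FP64 add becomes 7+ FP64 ops."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return two_sum(s, e)

# 1e-20 is far below FP64's precision at 1.0, yet the low word preserves it
print(dd_add((1.0, 1e-20), (1e-20, 0.0)))  # → (1.0, 2e-20)
```

Multiplication is costlier still (it needs an error-free product as well), so a 3-10x slowdown versus native FP64 – roughly what the table shows – is expected.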

Unlike GPGPU, here we don’t see the same crushing improvement – but CherryTrail’s GPU is still about 3x (three times) faster than BayTrail’s – though Broadwell shows its power. Perhaps our shaders are a bit too complex for pixel processing and should rather stay in the GPGPU field…

Shader Memory Performance

We are testing memory performance of GPUs using DirectX and OpenGL, including transfer (up/down) to/from system memory.

Results Interpretation: Higher values (MPix/s, MB/s, etc.) mean better performance.

Environment: Windows 8.1 x64, latest Intel drivers (June 2015). Turbo / Dynamic Overclocking was enabled on all configurations.

Graphics Processors BayTrail GT CherryTrail GT Broadwell GT2Y – HD 5300 Comments
Intel Braswell: Video Memory Bandwidth
Video Memory Benchmark Internal Memory Bandwidth (GB/s) 6.74 11.18 [+65%] 12.46 [+11%] DirectX bandwidth is not as “bad” as OpenCL on BayTrail (better driver?) so we start from a higher baseline: CherryTrail still manages 65% more bandwidth – with Broadwell only squeezing 11% more despite its dual-channel controller. It shows that the OpenCL GPGPU driver has come a long way to match DirectX.
Video Memory Benchmark Upload Bandwidth (GB/s) 2.62 2.83 [+8%] 5.29 [+87%] While all APUs don’t need to transfer memory over PCIe like dedicated GPUs, they still don’t support “zero copy” – thus memory transfers are not free. Again BayTrail starts from a higher baseline, so CherryTrail is only 8% faster – with Broadwell finally 87% faster.
Video Memory Benchmark Download Bandwidth (GB/s) 1.14 2.1 [+83%] 1.23 [-42%] Here BayTrail “stumbles”, so CherryTrail can be 83% faster, with Broadwell surprisingly 42% slower. What it does show is that the CherryTrail drivers are better – no doubt helped by being much newer. It is a pity Intel does not provide this driver for BayTrail too…

Again, we see good gains on CherryTrail, with bandwidth increasing by up to 83% – necessary to keep all those new EVs fed with data; Broadwell does better on transfers, but then again it has a dual-channel memory controller.

SiSoftware Official Ranker Scores

Final Thoughts / Conclusions

Here we tested the brand-new Atom X7 Z8700 (CherryTrail) GPU with 16 EVs (EV8) – 4x (four times) more than the older Atom Z3700 (BayTrail) GPU with 4 EVs (EV7) – so we expected big gains, and they were delivered: GPGPU performance is nothing less than stellar, grinding the old Atom GPU into dust – no doubt also helped by the newer driver (which, sadly, BayTrail won’t get). And all at the same TDP of about 2.4-5W! Impressive!

Even the very latest Core M (Broadwell) GPU (EV8) is sometimes left behind – at about 2x higher power, more EVs and higher cost – perhaps the new Atom is too good?

Architecturally not much has changed (besides the far greater number of EVs) – but we also get better bandwidth and lower latencies, no doubt due to the higher memory bus clock.

All in all, there’s no doubt – this new Atom is the one to get and will bring far better graphics and GPGPU performance at low cost – even overshadowing the great Core M series – no doubt the reason it is found in the latest Surface 3.

To see how the Atom CherryTrail CPU fares, please see the Atom X7 Z8700 (CherryTrail) CPU performance article!