SiSoftware Sandra 20/20/7 (2020 R7) Released – updates and fixes

Internet Overall Benchmark

We are pleased to release the R7 (version 30.49) update for Sandra 20/20 (2020) with the following updates:

Sandra 20/20 (2020) Press Release

  • Updates & Optimisations
    • CPU Benchmarks: AMD Ryzen 4000 series (APU) preliminary support.
    • GPGPU (CUDA/OpenCL) Benchmarks: nVidia Ampere preliminary support.
    • Database: Optimise performance when accessing/updating benchmark results.
    • Branding (Benchmarks/Ranker): Update manufacturer list.
  • Support & Fixes
    • Internet Benchmarks: Fix website access due to obsolete agent string.
    • Disk Benchmarks: Fix crash on fragmented media (HDD/SSD).
    • Database: Fix update/insert issues with specific benchmark results.

Reviews using Sandra 20/20:

Update & Download

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

SiSoftware Sandra 20/20/6 (2020 R6) Released – 2 brand-new benchmarks!

Internet DNS Benchmark

We are pleased to release the R6 (version 30.45) update for Sandra 20/20 (2020) with the following updates:

Sandra 20/20 (2020) Press Release

Internet DNS Benchmark: benchmark the performance of the DNS service. Measure the latency of both cached and un-cached DNS queries to local and remote DNS servers.
Internet Overall Score Benchmark: a combined performance index of all Internet benchmarks (Connection (Bandwidth/Latency), Peerage (Bandwidth/Latency) and DNS (cached/un-cached query latency)). Rate the overall performance of your Internet connection.
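Conceptually, the cached vs. un-cached split can be approximated by timing two consecutive resolutions of the same name – the first lookup likely goes out to the server, the second is likely served from the local cache. A minimal sketch of the idea (portable C++ with POSIX headers assumed; not Sandra's actual implementation):

    // Sketch: approximate un-cached vs. cached DNS query latency by timing
    // two consecutive resolutions of the same name (POSIX getaddrinfo assumed).
    #include <sys/socket.h>
    #include <netdb.h>
    #include <chrono>
    #include <cstdio>

    static double resolve_ms(const char* host) {
        addrinfo hints{}, *res = nullptr;
        hints.ai_family = AF_UNSPEC;
        auto t0 = std::chrono::steady_clock::now();
        int rc = getaddrinfo(host, nullptr, &hints, &res);
        auto t1 = std::chrono::steady_clock::now();
        if (rc == 0) freeaddrinfo(res);
        return std::chrono::duration<double, std::milli>(t1 - t0).count();
    }

    int main() {
        const char* host = "example.com";    // hypothetical test name
        double uncached = resolve_ms(host);  // first lookup: likely hits the server
        double cached   = resolve_ms(host);  // second lookup: likely served from cache
        std::printf("un-cached: %.2f ms, cached: %.2f ms\n", uncached, cached);
    }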
  • Benchmarks:
    • New: Internet DNS Benchmark: measure cached & un-cached DNS query latency for local and public DNS servers.
    • New: Internet Overall Score: using the existing Internet benchmarks (Connection, Peerage and brand-new DNS), compute an overall score denoting the Internet connection quality.
    • Internet Connection, Internet Peerage Benchmarks: updated list of top (300) websites to test against; additional multi-threading optimisations
  • Hardware Support:
    • Additional future hardware support and optimisations.
    • Additional CPU features support
    • Various stability and reliability improvements

Reviews using Sandra 20/20:

Update & Download

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

SiSoftware Sandra 20/20/5 (2020 R5) Released – Updated Hardware Support

We are pleased to release the R5 (version 30.41) update for Sandra 20/20 (2020) with the following updates:

Sandra 20/20 (2020) Press Release

  • Benchmarks:
    • Internet Connection, Internet Peerage Benchmarks: updated list of top websites to test against; additional multi-threading optimisations
  • Hardware Support:
    • Additional IceLake (ICL Gen10 Core), Future* (RKL, TGL Gen11 Core) AVX512, VAES, SHA-HWA support (see CPU, GP-GPU, Cache & Memory, AVX512 improvement reviews)
    • Additional CPU features support
    • Various stability and reliability improvements

Reviews using Sandra 20/20:

Update & Download

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

AVX512 Improvement for Icelake Mobile (i7-1065G7 ULV)

Intel Ice Lake

What is AVX512?

AVX512 (Advanced Vector eXtensions) is the 512-bit SIMD instruction set that follows on from the previous 256-bit AVX2/FMA/AVX instruction sets. Originally introduced by Intel with its “Xeon Phi” GPGPU accelerators, it next appeared on the HEDT platform with Skylake-X (SKL-X/EX/EP), but until now it was not available on mainstream platforms.

With the 10th “real” generation Core arch(itecture) (IceLake/ICL), we finally see “enhanced” AVX512 on the mobile platform which includes all the original extensions and quite a few new ones.

Original AVX512 extensions as supported by SKL/KBL-X HEDT processors:

  • AVX512F – Foundation – most floating-point single/double instructions widened to 512-bit.
  • AVX512-DQ – Double-Word & Quad-Word – most 32 and 64-bit integer instructions widened to 512-bit
  • AVX512-BW – Byte & Word – most 8-bit and 16-bit integer instructions widened to 512-bit
  • AVX512-VL – Vector Length eXtensions – allows most AVX512 instructions to operate on the previous 256-bit and 128-bit SIMD registers
  • AVX512-CD* – Conflict Detection – loop vectorisation through predication [only on Xeon/Phi co-processors]
  • AVX512-ER* – Exponential & Reciprocal – transcendental operations [only on Xeon/Phi co-processors]

New AVX512 extensions supported by ICL processors:

  • AVX512-VNNI** (Vector Neural Network Instructions) [also supported by updated CSL-X HEDT]
  • AVX512-VBMI, VBMI2 (Vector Byte Manipulation Instructions)
  • AVX512-BITALG (Bit Algorithms)
  • AVX512-IFMA (Integer FMA)
  • AVX512-VAES (Vector AES) accelerating crypto
  • AVX512-GFNI (Galois Field)
  • AVX512-GNA (Gaussian Neural Accelerator)
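Whether a given CPU implements these subsets can be probed at run-time; a quick sketch using the GCC/Clang builtin (recent compiler assumed – "avx512vnni" needs GCC 9+/Clang):

    // Run-time probe for several of the AVX512 subsets listed above
    // (GCC/Clang __builtin_cpu_supports).
    #include <cstdio>

    int main() {
        std::printf("AVX512F:    %d\n", __builtin_cpu_supports("avx512f"));
        std::printf("AVX512BW:   %d\n", __builtin_cpu_supports("avx512bw"));
        std::printf("AVX512VL:   %d\n", __builtin_cpu_supports("avx512vl"));
        std::printf("AVX512VNNI: %d\n", __builtin_cpu_supports("avx512vnni"));
    }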

As with anything, simply doubling register widths does not automagically increase performance by 2x, as dependencies, memory load/store latencies and even data characteristics limit performance gains; some may require future arch updates or tools to realise their true potential.

SIMD FMA Units: Unlike HEDT/server processors, ICL ULV (and likely desktop) has a single 512-bit FMA unit, not two (2): the execution rate (without dependencies) is thus similar for AVX512 and AVX2/FMA code. However, future versions are likely to add execution units, and AVX512 code will then benefit even more.
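To make the width argument concrete, here is an illustrative (hypothetical, not Sandra's own) pair of kernels; with a single 512-bit FMA port the AVX512 loop retires half as many instructions per element, but its peak FLOP/cycle matches a dual-port 256-bit design (n assumed to be a multiple of 16):

    // Illustrative kernels (compile with -mavx512f / -mavx2 -mfma).
    #include <immintrin.h>
    #include <cstddef>

    void fma_avx512(float* a, const float* b, const float* c, size_t n) {
        for (size_t i = 0; i < n; i += 16) {          // 16 floats per iteration
            __m512 va = _mm512_loadu_ps(a + i);
            __m512 vb = _mm512_loadu_ps(b + i);
            __m512 vc = _mm512_loadu_ps(c + i);
            _mm512_storeu_ps(a + i, _mm512_fmadd_ps(va, vb, vc));  // a = a*b + c
        }
    }

    void fma_avx2(float* a, const float* b, const float* c, size_t n) {
        for (size_t i = 0; i < n; i += 8) {           // 8 floats per iteration
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            __m256 vc = _mm256_loadu_ps(c + i);
            _mm256_storeu_ps(a + i, _mm256_fmadd_ps(va, vb, vc));
        }
    }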

In this article we test AVX512 core performance; please see our other articles on:

Native SIMD Performance

We are testing native SIMD performance using various instruction sets (AVX512, AVX2/FMA3, AVX) to determine the gains the new instruction sets bring.

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 10 x64, latest Intel drivers. Turbo / Dynamic Overclocking was enabled on both configurations.

Native Benchmarks ICL ULV AVX512 ICL ULV AVX2/FMA3 Comments
BenchCpuMM Native Integer (Int32) Multi-Media (Mpix/s) 504 [+25%] 403 For integer workloads we manage a 25% improvement – not quite the 100% we were hoping for, but still decent.
BenchCpuMM Native Long (Int64) Multi-Media (Mpix/s) 145 [+1%] 143 With a 64-bit integer workload the improvement reduces to 1%.
BenchCpuMM Native Quad-Int (Int128) Multi-Media (Mpix/s) 3.67 3.73 [-2%] – [No SIMD in use here]
BenchCpuMM Native Float/FP32 Multi-Media (Mpix/s) 414 [+22%] 339 In this floating-point test, we see a 22% improvement similar to integer.
BenchCpuMM Native Double/FP64 Multi-Media (Mpix/s) 232 [+20%] 194 Switching to FP64 we see a similar improvement.
BenchCpuMM Native Quad-Float/FP128 Multi-Media (Mpix/s) 10.17 [+13%] 9 In this heavy algorithm using FP64 to mantissa-extend FP128 we see only a 13% improvement.
With limited resources, AVX512 cannot bring a 100% improvement, but still manages 20-25% over AVX2/FMA, which is decent; also consider this is a TDP-constrained ULV platform, not desktop/HEDT.
BenchCrypt Crypto SHA2-256 (GB/s) 9 [+2.25x] 4 With no data dependency we get great scaling of over 2x in this integer workload.
BenchCrypt Crypto SHA1 (GB/s) 15.71 [+81%] 8.6 Here we see only an 81% improvement, likely due to lack of (more) memory bandwidth – it would likely scale higher.
BenchCrypt Crypto SHA2-512 (GB/s) 7.09 [+2.3x] 3.07 With a 64-bit integer workload we see a larger than 2x improvement.
Thanks to the new crypto-friendly instructions of AVX512, and no doubt helped by the high-bandwidth LP-DDR4X memory, we see over 2x (twice) the performance of older AVX2. ICL ULV will no doubt be a great choice for low-power network devices (routers/gateways/firewalls) able to pump 100GbE-class crypto streams.
BenchScience SGEMM (GFLOPS) float/FP32 185 [-6%] 196 More optimisations seem to be required here for ICL at least.
BenchScience DGEMM (GFLOPS) double/FP64 91 [+18%] 77 Changing to FP64 brings an 18% improvement.
BenchScience SFFT (GFLOPS) float/FP32 31.72 [+12%] 28.34 With FFT, we see a modest 12% improvement.
BenchScience DFFT (GFLOPS) double/FP64 17.72 [-2%] 18 With FP64 we see a 2% regression.
BenchScience SNBODY (GFLOPS) float/FP32 200 [+7%] 187 No help from the compiler here either.
BenchScience DNBODY (GFLOPS) double/FP64 61.76 [=] 62 With FP64 there is no delta.
With highly-optimised scientific algorithms, it seems we still have some way to go to extract more performance out of AVX512, though overall we still see a 7-12% improvement at this time.
CPU Image Processing Blur (3×3) Filter (MPix/s) 1,580 [+79%] 883 We start well here, with AVX512 ~80% faster in this float/FP32 workload.
CPU Image Processing Sharpen (5×5) Filter (MPix/s) 633 [+71%] 371 Same algorithm but more shared data – still a 71% improvement.
CPU Image Processing Motion-Blur (7×7) Filter (MPix/s) 326 [+67%] 195 Again the same algorithm, but even more shared data brings the improvement down to 67%.
CPU Image Processing Edge Detection (2*5×5) Sobel Filter (MPix/s) 502 [+58%] 318 Using two buffers does not change much – still a 58% improvement.
CPU Image Processing Noise Removal (5×5) Median Filter (MPix/s) 72.92 [+2.4x] 30.14 A different algorithm works better, with AVX512 over 2x faster.
CPU Image Processing Oil Painting Quantise Filter (MPix/s) 24.73 [+50%] 16.45 Using the new scatter/gather in AVX512 still brings 50% better performance.
CPU Image Processing Diffusion Randomise (XorShift) Filter (MPix/s) 2,100 [+33%] 1,580 A 64-bit integer algorithm with many gathers – still a good 33% improvement.
CPU Image Processing Marbling Perlin Noise 2D Filter (MPix/s) 307 [+33%] 231 Again loads of gathers, and a similar 33% improvement.
Image manipulation algorithms working on individual (non-dependent) pixels love AVX512, with 33-140% improvements. The new scatter/gather instructions also simplify memory-access code, which can benefit from future arch improvements (see the sketch after these results).
Neural Networks NeuralNet CNN Inference (Samples/s) 25.94 [+3%] 25.23 Inference improves by a mere 3% despite few dependencies.
Neural Networks NeuralNet CNN Training (Samples/s) 4.6 [+5%] 4.39 Training improves by a slightly better 5%, likely due to 512-bit accesses.
Neural Networks NeuralNet RNN Inference (Samples/s) 25.66 [-1%] 25.81 RNN inference seems very slightly slower.
Neural Networks NeuralNet RNN Training (Samples/s) 2.97 [+33%] 2.23 Finally, RNN training improves by 33%.
Unlike image manipulation, neural networks don’t seem to benefit as much – pretty much the same performance across the board. Clearly more optimisation is needed to push performance.
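The scatter/gather instructions mentioned above replace long sequences of scalar loads; a hypothetical illustration (AVX512F intrinsics, not the benchmark's own code):

    // Fetch 16 non-contiguous pixels with one gather instead of 16 scalar loads.
    #include <immintrin.h>

    __m512 gather_pixels(const float* image, const int* offsets) {
        __m512i idx = _mm512_loadu_si512(offsets);  // 16 arbitrary pixel indices
        return _mm512_i32gather_ps(idx, image, 4);  // scale = sizeof(float)
    }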

SiSoftware Official Ranker Scores

Final Thoughts / Conclusions

We never expected a low-power, TDP (power)-limited ULV platform to benefit from AVX512 as much as HEDT/server platforms – especially considering the lower count of SIMD execution units. Nevertheless, it is clear that ICL (even in ULV form) benefits greatly from AVX512, with 50-100% improvements in many algorithms and no losses.

ICL also introduces many new AVX512 extensions which can even be used to accelerate existing AVX512 code (not just legacy AVX2/FMA); we are likely to see even higher gains in the future as software (and compilers) take advantage of the new extensions. Future CPU architectures are also likely to optimise complex instructions as well as add more SIMD/FMA execution units, which will greatly improve AVX512 code performance.

As the data-paths for caches (L1D, L2?) have been widened, 512-bit memory accesses help extract more bandwidth for streaming algorithms (e.g. crypto), while scatter/gather instructions reduce latencies for non-sequential data accesses. Thus the benefit of AVX512 extends beyond raw compute code.

We are excitedly waiting to see how AVX512-enabled desktop/HEDT ICL performs, not constrained by TDP and adequately cooled…

Intel Iris Plus G7 Gen11 IceLake ULV (i7-1065G7) Review & Benchmarks – GPGPU Performance

Intel Iris Plus Graphics

What is “IceLake”?

It is the “proper” 10th-generation Core arch (ICL) from Intel – the brand-new core that replaces the ageing “Skylake” (SKL) arch and its many derivatives; due to delays it actually debuts shortly after the latest update (“CometLake” (CML)) that is also called 10th generation. First launched for mobile ULV (U/Y) devices, it will also launch for mainstream (desktop/workstation) parts soon.

Thus it contains extensive changes to all parts of the SoC: CPU, GPU, memory controller:

  • 10nm+ process (lower voltage, performance benefits)
  • Gen11 graphics (finally up from Gen9.5 for CometLake/WhiskyLake)
  • 64 EUs up to 1.1GHz – up to 1.12 TFLOPS/FP32, 2.25 TFLOPS/FP16 (see the arithmetic after this list)
  • 2-channel LP-DDR4X support up to 3733Mt/s
  • No eDRAM cache unfortunately (like Crystalwell and co)
  • VRS (Variable Rate Shading) – useful for games
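For reference, the headline compute figures follow directly from the EU configuration (assuming the usual Gen11 layout of two 4-wide FP32 FMA pipes per EU): 64 EU × 8 FP32 lanes × 2 FLOP (FMA) × 1.1GHz ≈ 1.126 TFLOPS, and at the 2x FP16 rate ≈ 2.25 TFLOPS – matching the numbers above.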

The biggest change GPGPU-wise is the increase in EUs (64 at the top end), which greatly increases processing power compared to the previous generation's few EUs (24, except the rare GT3 versions). Most of the new features seem to be geared towards gaming rather than GPGPU – and one omission is the lack of FP64 support! While mobile platforms are not very likely to use high-precision kernels, Gen9 FP64 performance did exceed CPU AVX2/FMA FP64 performance. FP16 is naturally supported at 2x rate, as on most current designs.

While there is no eDRAM (L4) cache at all, thanks to the very high-speed LP-DDR4X memory (at 3733Mt/s) the bandwidth has increased by ~50% (58GB/s), which should greatly help bandwidth-intensive workloads. While L1 does not seem changed, L2 has been increased to 3MB (up from 1MB), which should also help.
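The quoted bandwidth is simply the theoretical peak of the memory interface (assuming a 128-bit aggregate bus for the 2-channel LP-DDR4X configuration): 3733 × 10⁶ transfers/s × 16 bytes ≈ 59.7GB/s, in line with the ~58GB/s figure above.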

We do hope to see more GPGPU-friendly features in upcoming versions now that Intel is taking graphics seriously.

GPGPU (Gen11 G7) Performance Benchmarking

In this article we test GPGPU core performance; please see our other articles on:

To compare against the other Gen10 SoC, please see our other articles:

Hardware Specifications

We are comparing the middle-range Intel integrated GP-GPUs with the previous generation, as well as competing architectures, with a view to upgrading to a brand-new, high-performance design.

GPGPU Specifications Intel UHD 630 (7200U) Intel Iris HD 540 (6550U) AMD Vega 8 (Ryzen 5) Intel Iris Plus (1065G7) Comments
Arch Chipset Gen9.5 / GT2 Gen9 / GT3 Vega / GCN1.5 Gen11 / G7 The first Gen11 from Intel.
Cores (CU) / Threads (SP) 24 / 192 48 / 384 8 / 512 64 / 512 Less powerful CUs but the same SP count as Vega.
SIMD per CU / Width 8 8 64 8 Same SIMD width.
Wave/Warp Size 32 32 64 32 Wave size matches nVidia.
Speed (Min-Turbo) 300-1000MHz 300-950MHz 300-1100MHz 400-1100MHz Turbo matches Vega.
Power (TDP) 15-25W 15-25W 25W 15-25W Same TDP.
ROP / TMU 8 / 16 16 / 24 8 / 32 16 / 32 ROPs are the same but TMUs have increased.
Shared Memory 64kB 64kB 32kB 64kB Same shared memory as before, but 2x Vega.
Constant Memory 1.6GB 3.2GB 2.7GB 3.2GB No dedicated constant memory, but large.
Global Memory 2x DDR4 2133Mt/s 2x DDR4 2133Mt/s 2x DDR4 2400Mt/s 2x LP-DDR4X 3733Mt/s Fastest memory ever.
Memory Bandwidth 38GB/s 38GB/s 42GB/s 58GB/s Highest bandwidth ever.
L1 Caches 16kB x 24 16kB x 48 16kB x 8 16kB x 64 L1 does not appear changed.
L2 Cache 512kB 1MB ? 3MB L2 has tripled in size.
Maximum Work-group Size 256×256 256×256 1024×1024 256×256 Vega supports 4x bigger workgroups.
FP64/double ratio 1/16x 1/16x 1/32x No! No FP64 support in current drivers! (see the probe after this table)
FP16/half ratio 2x 2x 2x 2x Same 2x ratio.
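The FP64 entry can be verified programmatically: OpenCL devices advertise double-precision support via the cl_khr_fp64 extension. A minimal probe (first GPU of the first platform assumed; error handling elided):

    // Does the first GPU device advertise FP64 (cl_khr_fp64)?
    // Link against OpenCL (e.g. -lOpenCL).
    #include <CL/cl.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, nullptr);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
        char ext[8192] = {};
        clGetDeviceInfo(dev, CL_DEVICE_EXTENSIONS, sizeof(ext), ext, nullptr);
        std::printf("FP64: %s\n", std::strstr(ext, "cl_khr_fp64") ? "yes" : "no");
    }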

Processing Performance

We are testing OpenCL performance using the latest SDKs / libraries / drivers from both Intel and the competition.

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 10 x64, latest Intel and AMD drivers. Turbo / Boost was enabled on all configurations.

Processing Benchmarks Intel UHD 630 (7200U) Intel Iris HD 540 (6550U) AMD Vega 8 (Ryzen 5) Intel Iris Plus (1065G7) Comments
GPGPU Arithmetic Benchmark Mandel FP16/Half (Mpix/s) 895 1,530 2,000 2,820 [+41%] G7 beats Vega by 40%! Pretty incredible start.
GPGPU Arithmetic Benchmark Mandel FP32/Single (Mpix/s) 472 843 1,350 1,330 [-1%] Standard FP32 is just a tie.
GPGPU Arithmetic Benchmark Mandel FP64/Double (Mpix/s) 113 195 111 70* Without native FP64 support G7 craters, but old GT3 beats Vega.
GPGPU Arithmetic Benchmark Mandel FP128/Quad (Mpix/s) 6 10.2 7.1 7.54* Emulated FP128 is hard on FP64 units and G7 beats Vega again.
G7 ties with Mobile Vega in FP32, which in itself is a great achievement, while its FP16 rate is much faster. Unfortunately, without native FP64 support, G7 is a lot slower using emulation – but hopefully mobile systems don’t use high-precision kernels.

* Emulated FP64 through FP32.
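The footnote refers to the classic "double-float" trick; a minimal sketch of the idea (host-side C++ for clarity, not the actual OpenCL kernels):

    // A value is carried as the unevaluated sum hi + lo of two FP32 numbers
    // (~48-bit mantissa), at the cost of many extra operations.
    // Requires strict FP semantics (no -ffast-math).
    struct df { float hi, lo; };

    // Knuth's two-sum: exact, error-compensated addition of two floats.
    static df two_sum(float a, float b) {
        float s = a + b;
        float v = s - a;
        float e = (a - (s - v)) + (b - v);
        return { s, e };
    }

    static df df_add(df a, df b) {
        df s = two_sum(a.hi, b.hi);
        s.lo += a.lo + b.lo;
        return two_sum(s.hi, s.lo);  // renormalise
    }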

GPGPU Crypto Benchmark Crypto AES-256 (GB/s) 0.88 1.14 2.58 2.6 [+1%] G7 manages to tie with Vega on this streaming test.
GPGPU Crypto Benchmark Crypto AES-128 (GB/s) 1.1 1.42 3.3 3.4 [+2%] Nothing much changes when changing to 128bit.
GPGPU Crypto Benchmark Crypto SHA2-256 (GB/s) 1.1 1.83 3.36 2.26 [-33%] Without crypto acceleration G7 cannot match Vega.
GPGPU Crypto Benchmark Crypto SHA1 (GB/s) 3 4.45 14.29 6.9 [1/2x] G7 is about half the speed of Vega here.
GPGPU Crypto Benchmark Crypto SHA2-512 (GB/s) 6.79 10.6 18.77 14.18 [-24%] 64-bit integer workload is still 25% slower.
Thanks to the fast LP-DDR4X memory and its high bandwidth, G7 ties with Vega on streaming integer workloads. However, G7 has no crypto acceleration, thus Vega is much faster at hashing – crypto-currency/coin algorithms still favour AMD.
GPGPU Finance Benchmark Black-Scholes float/FP16 (MOPT/s) 1,170 1,470 1,720 2,340 [+36%] With FP16 we see G7 win again by ~35%.
GPGPU Finance Benchmark Black-Scholes float/FP32 (MOPT/s) 710 758 829 1,310 [+58%] With FP32 G7 is now even faster – 60% faster than Vega.
GPGPU Finance Benchmark Black-Scholes double/FP64 (MOPT/s) 158 264 185 No FP64 support.
GPGPU Finance Benchmark Binomial float/FP32 (kOPT/s) 95.7 153 254 292 [+15%] Binomial uses thread-shared data, thus stresses the memory system, so G7 is just 15% faster.
GPGPU Finance Benchmark Binomial double/FP64 (kOPT/s) 20.32 31.1 15.67 No FP64 support.
GPGPU Finance Benchmark Monte-Carlo float/FP32 (kOPT/s) 240 392 362 719 [+2x] Monte-Carlo also uses thread shared data but read-only and here G7 is 2x faster.
GPGPU Finance Benchmark Monte-Carlo double/FP64 (kOPT/s) 35.27 59.7 47.13 No FP64 support.
For financial FP32/FP16 workloads, G7 is between 15% and 100% faster than Vega – a great choice for finance. Unfortunately, due to the lack of FP64 support, it cannot run high-precision workloads, which may be a problem for some algorithms.
GPGPU Science Benchmark HGEMM (GFLOPS) float/FP16 142 220 884 563 [-36%] G7 cannot beat Vega here despite its great FP16 performance elsewhere.
GPGPU Science Benchmark SGEMM (GFLOPS) float/FP32 119 162 314 419 [+33%] With FP32, G7 is 33% faster than Vega.
GPGPU Science Benchmark DGEMM (GFLOPS) double/FP64 44.2 65.1 62.5 No FP64 support
GPGPU Science Benchmark HFFT (GFLOPS) float/FP16 39.77 42.54 61.34 61.4 [=] G7 manages to tie with Vega here.
GPGPU Science Benchmark SFFT (GFLOPS) float/FP32 23.8 29.69 31.48 39.22 [+25%] With FP32, G7 is 25% faster.
GPGPU Science Benchmark DFFT (GFLOPS) double/FP64 4.81 3.43 14.19 No FP64 support
GPGPU Science Benchmark HNBODY (GFLOPS) float/FP16 383 597 623 930 [+49%] G7 comes up strong here winning by 50%.
GPGPU Science Benchmark SNBODY (GFLOPS) float/FP32 209 327 537 566 [+5%] With FP32, G7 drops to just 5% faster than Vega.
GPGPU Science Benchmark DNBODY (GFLOPS) double/FP64 26.93 44.19 44 No FP64 support.
On scientific algorithms, G7 manages to beat Vega by 25-50% with FP32 precision, and sometimes with FP16 as well. Again, the lack of FP64 support means high-precision kernels cannot be used, which for some algorithms may be a problem.
GPGPU Image Processing Blur (3×3) Filter single/FP16 (MPix/s) 1,000 1,370 2,273 3,520 [+55%] With FP16, G7 is only 50% faster than Vega.
GPGPU Image Processing Blur (3×3) Filter single/FP32 (MPix/s) 498 589 781 1,570 [+2x] In this 3×3 convolution algorithm, G7 is 2x faster.
GPGPU Image Processing Sharpen (5×5) Filter single/FP16 (MPix/s) 307 441 382 1,000 [+72%] With FP16, G7 is just 70% faster.
GPGPU Image Processing Sharpen (5×5) Filter single/FP32 (MPix/s) 108 143 157 319 [+2x] Same algorithm but more shared data, G7 still 2x faster.
GPGPU Image Processing Motion Blur (7×7) Filter single/FP16 (MPix/s) 284 435 619 924 [+49%] With FP16, G7 is again 50% faster.
GPGPU Image Processing Motion Blur (7×7) Filter single/FP32 (MPix/s) 112 156 161 328 [+2x] With even more data the gap remains at 2x.
GPGPU Image Processing Edge Detection (2*5×5) Sobel Filter single/FP16 (MPix/s) 309 428 595 1,000 [+68%] With FP16 precision, G7 is 70% faster than Vega.
GPGPU Image Processing Edge Detection (2*5×5) Sobel Filter single/FP32 (MPix/s) 108 145 155 318 [+2x] Still convolution but with 2 filters – same 2x difference.
GPGPU Image Processing Noise Removal (5×5) Median Filter single/FP16 (MPix/s) 8.78 8.23 7.68 26.63 [+2.5x] With FP16, G7 is “just” 2.5x faster than Vega.
GPGPU Image Processing Noise Removal (5×5) Median Filter single/FP32 (MPix/s) 7.87 6.29 4.06 26.9 [+5.6x] Different algorithm allows G7 to fly at 6x faster.
GPGPU Image Processing Oil Painting Quantise Filter single/FP16 (MPix/s) 9.6 9.14 24.34 G7 does similarly well with FP16
GPGPU Image Processing Oil Painting Quantise Filter single/FP32 (MPix/s) 8.84 6.77 2.59 19.63 [+6.6x] Without major processing, this filter is 6x faster on G7.
GPGPU Image Processing Diffusion Randomise (XorShift) Filter single/FP16 (MPix/s) 1,000 1,620 2,091 1,740 [-17%] With FP16, G7 is 17% slower than Vega.
GPGPU Image Processing Diffusion Randomise (XorShift) Filter single/FP32 (MPix/s) 1,000 1,560 2,100 1,870 [-11%] This algorithm is 64-bit integer heavy thus G7 is 10% slower
GPGPU Image Processing Marbling Perlin Noise 2D Filter single/FP16 (MPix/s) 36.5 34.32 1,046 215 [1/5x] Some issues needed to be worked out here.
GPGPU Image Processing Marbling Perlin Noise 2D Filter single/FP32 (MPix/s) 433 649 608 950 [+56%] One of the most complex and largest filters, G7 is over 50% faster.
For image processing tasks, G7 does very well – it is ~2x faster than Vega in FP32, while with FP16 the lead drops to around 50% (Vega benefits greatly from the lower precision). All in all, a fantastic result for those using image/video manipulation algorithms.

Memory Performance

We are testing OpenCL memory performance using the latest SDKs / libraries / drivers from both Intel and the competition.

Results Interpretation: For bandwidth tests (MB/s, etc.) high values mean better performance, for latency tests (ns, etc.) low values mean better performance.

Environment: Windows 10 x64, latest Intel and AMD drivers. Turbo / Boost was enabled on all configurations.

Memory Benchmarks Intel UHD 630 (7200U) Intel Iris HD 540 (6550U) AMD Vega 8 (Ryzen 5) Intel Iris Plus (1065G7) Comments
GPGPU Memory Bandwidth Internal Memory Bandwidth (GB/s) 21.36 23.66 27.32 36.3 [+33%] G7 has 33% more bandwidth than Vega.
GPGPU Memory Bandwidth Upload Bandwidth (GB/s) 10.4 11.77 4.74 17 [+2.6x] G7 manages far higher transfers.
GPGPU Memory Bandwidth Download Bandwidth (GB/s) 10.55 11.75 5 18 [+2.6x] Again, same 2.6x delta.
Thanks to the fast LP-DDR4X memory, G7 has far more bandwidth than Vega or older GT2/GT3 design; this no doubt helps streaming algorithms as we have seen above.
GPGPU Memory Latency Global (In-Page Random Access) Latency (ns) 232 277 412 343 [-17%] Better latency than Vega but not less than old arch.
GPGPU Memory Latency Global (Full Range Random Access) Latency (ns) 363 436 519 433 [-17%] Similar 17% less than Vega.
GPGPU Memory Latency Global (Sequential Access) Latency (ns) 153 213 201 267 [+33%] Vega seems to be a lot faster than G7.
GPGPU Memory Latency Constant Memory (In-Page Random Access) Latency (ns) 236 252 411 350 [-15%] Same latency as global memory, as it is not dedicated.
GPGPU Memory Latency Shared Memory (In-Page Random Access) Latency (ns) 72.5 100 22.5 16.7 [-26%] G7 has greatly reduced shared memory latency.
GPGPU Memory Latency Texture (In-Page Random Access) Latency (ns) 1,116 1,500 278 1,100 [+3x] Not much improvement over older versions.
GPGPU Memory Latency Texture (Full Range Random Access) Latency (ns) 1,178 1,533 418 1,018 [+1.4x] Similar high latency for G7.
GPGPU Memory Latency Texture (Sequential Access) Latency (ns) 1,057 1,324 122 973 [+8x] Again Vega has much lower latencies.
Despite the high bandwidth, latencies are also high, as LP-DDR4 has higher latencies than standard DDR4 (tens of clocks). Like Vega, there is no dedicated constant memory – unlike nVidia. But G7 has greatly reduced shared-memory latency, to below Vega's, which greatly helps algorithms using shared memory.

SiSoftware Official Ranker Scores

Final Thoughts / Conclusions

It’s great to see Intel taking graphics seriously again; with ICL, you don’t just get a brand-new CPU core but a much-updated GPU core too. And it does not disappoint – it trades blows with the competition (Vega Mobile) and usually wins, while being close to 2x faster than Gen9/GT3 and 3x faster than Gen9.5/GT2 – a huge improvement.

The lack of native FP64 support is puzzling – but then again it could be reserved for higher-end/workstation versions, if supported at all. Intel is no doubt betting on the CPU’s AVX512 SIMD units for FP64 performance, which is considerable. Again, it’s not very likely that mobile (ULV) platforms are going to run high-precision kernels.

The memory bandwidth is also 50% higher but unfortunately latencies are also higher due to LP-DDR4(X) memory; lower-end versions using “standard” DDR4 memory will not see high bandwidth but will see lower latencies – thus it is give and take.

As we’ve said in the other reviews of ICL, if you have been waiting to upgrade from the much older – but still good – SKL/KBL with Gen8/9 GT2 GPU – the Gen11 GPU is a significant upgrade. You will no longer feel “inadequate” compared to competition integrated GPUs. Naturally, you cannot expect discrete GPU levels of performance but for an integrated APU it is more than sufficient.

Overall, with the CPU and memory improvements, ICL-U is a very compelling proposition that, cost permitting, should be your top choice for long-term use.

In a word: Highly Recommended!

Please see our other articles on:

SiSoftware Sandra 20/20/4 (2020 R4a) Released – Updated Benchmarks

Service Pack 4 (SP4) Download

Note: The original R4 release text has been updated below. The (*) denotes new changes.

We are pleased to release the R4a (version 30.39) update for Sandra 20/20 (2020) with the following updates:

Sandra 20/20 (2020) Press Release

  • Benchmarks:
    • Crypto AES Benchmarks*: Optimised AVX512/AVX2-VAES code to outperform AES-HWA where possible.
    • Crypto SHA Benchmarks*: Select AVX512 multi-buffer instead of SHA-HWA where supported.
    • Network (LAN), Wireless (WLAN/WWAN) Benchmarks: multi-threaded transfer tests and increased packet size to better utilise 10GbE (and higher) links. [Note: threaded CPU required]
    • Internet Connection, Internet Peerage Benchmarks: multi-threaded transfer tests and increased packet size to better utilise Gigabit (and higher) connections.
  • Hardware Support:
    • Updated IceLake (ICL Gen10 Core), Future* (RKL, TGL Gen11 Core) AVX512, VAES, SHA-HWA support (see CPU, GP-GPU, Cache & Memory, AVX512 improvement reviews)
    • Updated CometLake (Gen10 Core) support (see CPU, GP-GPU, Cache & Memory reviews)
    • Updated CPU features support*
    • Updated NVMe support
    • Enhanced Biometrics information (fingerprint, face, voice, audio, etc. sensors)
    • Updated WiFi support (WiFi 6/802.11ax, WPA3)
    • Various stability and reliability improvements

Reviews using Sandra 20/20:

Update & Download

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

Intel Core Gen10 IceLake ULV (i7-1065G7) Review & Benchmarks – CPU AVX512 Performance

Intel Core i7 Gen10

What is “IceLake”?

It is the “real” 10th-generation Core arch(itecture) (ICL/”IceLake”) from Intel – the brand-new core that replaces the ageing “Skylake” (SKL) arch and its many derivatives; due to delays it actually debuts shortly after the latest update (“CometLake” (CML)) that is also called 10th generation. First launched for mobile ULV (U/Y) devices, it will also launch for mainstream (desktop/workstation) parts soon.

Thus it contains extensive changes to all parts of the SoC: CPU, GPU, memory controller:

  • 10nm+ process (lower voltage, higher performance benefits)
  • Up to 4C/8T “Sunny Cove” cores on ULV (less than top-end CometLake 6C/12T)
  • Gen11 graphics (finally up from Gen9.5 for CometLake/WhiskyLake)
  • AVX512 instruction set (like HEDT platform)
  • SHA HWA instruction set (like Ryzen)
  • 2-channel LP-DDR4X support up to 3733Mt/s
  • Thunderbolt 3 integrated
  • Hardware fixes/mitigations for vulnerabilities (“Meltdown”, “MDS”, various “Spectre” types)
  • WiFi6 (802.11ax) AX201 integrated

Probably the biggest change is support for AVX512-family instruction set, effectively doubling the SIMD processing width (vs. AVX2/FMA) as well as adding a whole host of specialised instructions that even the HEDT platform (SKL/KBL-X) does not support:

  • AVX512-VNNI (Vector Neural Network Instructions)
  • AVX512-VBMI, VBMI2 (Vector Byte Manipulation Instructions)
  • AVX512-BITALG (Bit Algorithms)
  • AVX512-IFMA (Integer FMA)
  • AVX512-VAES (Vector AES) accelerating crypto
  • AVX512-GFNI (Galois Field)
  • SHA HWA accelerating hashing
  • AVX512-GNA (Gaussian Neural Accelerator)

While some software may not have been updated to AVX512 while it was reserved for HEDT/servers, with this mainstream launch you can pretty much guarantee that just about all vectorised algorithms (already ported to AVX2/FMA) will soon be ported over. VNNI and IFMA support can accelerate the low-precision neural networks that are likely to be used on mobile platforms.
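As an illustration of why VNNI matters for low-precision (int8) inference: a single instruction fuses what previously took a multi-instruction multiply/widen/accumulate sequence. A hypothetical sketch (AVX512-VNNI intrinsic; not Sandra's benchmark code):

    // One instruction accumulates 64 u8 x s8 products into 16 int32 sums.
    #include <immintrin.h>

    __m512i dot_u8s8(__m512i acc, __m512i a_u8, __m512i b_s8) {
        return _mm512_dpbusd_epi32(acc, a_u8, b_s8);
    }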

VAES and SHA acceleration improve crypto/hashing performance – important today, as even LAN transfers between workstations are likely to be encrypted/signed, not to mention just about all WAN transfers, encrypted disks/containers, etc. Some SoCs will also make their way into powerful (but low-power) firewall appliances where both AES and SHA acceleration will prove very useful.
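VAES essentially widens the existing AES-NI round primitive to 512-bit registers; a minimal sketch (VAES intrinsic, hypothetical usage):

    // One AES round over four independent 128-bit blocks per instruction,
    // versus one block per _mm_aesenc_si128 with legacy AES-NI.
    #include <immintrin.h>

    __m512i aes_round_x4(__m512i blocks, __m512i round_keys) {
        return _mm512_aesenc_epi128(blocks, round_keys);
    }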

From a security point of view, ICL mitigates all (existing/reported) vulnerabilities in hardware/firmware (Spectre 2, 3/a, 4; L1TF, MDS) except BCB (Spectre V1, which does not have a hardware solution), thus it should not require the slower software mitigations that affect performance (especially I/O).

The memory controller supports LP-DDR4X at higher speeds than CML, while the cache/TLB systems have been improved, which should help both CPU and GPU performance (see the corresponding article) as well as reduce power vs. older designs using LP-DDR3.

Finally, the GPU core has been updated (Gen11) and generally contains many more EUs than the old design (Gen9.5) that was used from KBL (CPU Gen7) all the way to CML (CPU Gen10) (see the corresponding article).

CPU (Core) Performance Benchmarking

In this article we test CPU core performance; please see our other articles on:

To compare against the other Gen10 CPU, please see our other articles:

Hardware Specifications

We are comparing the top-of-the-range Intel ULV with previous architectures (Gen 8, 7, 6) as well as competitors (AMD), with a view to upgrading to a mid-range but high-performance design.

CPU Specifications AMD Ryzen 2500U Raven Ridge Intel i7 8550U (Coffeelake ULV) Intel Core i7 10510U (CometLake ULV) Intel Core i7 1065G7 (IceLake ULV) Comments
Cores (CU) / Threads (SP) 4C / 8T 4C / 8T 4C / 8T 4C / 8T No change in core counts.
Speed (Min / Max / Turbo) 1.6-2.0-3.6GHz 0.4-1.8-4.0GHz (1.8GHz @ 15W, 2GHz @ 25W) 0.4-1.8-4.9GHz (1.8GHz @ 15W, 2.3GHz @ 25W) 0.4-1.5-3.9GHz (1.0GHz @ 12W, 1.5GHz @ 25W) ICL has lower clocks vs. CML.
Power (TDP) 15-35W 15-35W 15-35W 12-35W Same power envelope.
L1D / L1I Caches 4x 32kB 8-way / 4x 64kB 4-way 4x 32kB 8-way / 4x 32kB 8-way 4x 32kB 8-way / 4x 32kB 8-way 4x 48kB 12-way / 4x 32kB 8-way L1D is 50% larger.
L2 Caches 4x 512kB 8-way 4x 256kB 16-way 4x 256kB 16-way 4x 512kB 16-way L2 has doubled.
L3 Caches 4MB 16-way 6MB 16-way 8MB 16-way 8MB 16-way No L3 changes
Microcode (Firmware) MU8F1100-0B MU068E09-AE MU068E0C-BE MU067E05-6A Revisions just keep on coming.
Special Instruction Sets AVX2/FMA, SHA AVX2/FMA AVX2/FMA AVX512, VNNI, SHA, VAES, GFNI 512-bit wide SIMD on mobile!
SIMD Width / Units 128-bit 256-bit 256-bit 512-bit Widest SIMD units ever.

Native Performance

We are testing native arithmetic, SIMD and cryptography performance using the highest performing instruction sets (AVX2, AVX, etc.). “IceLake” (ICL) supports all modern instruction sets including AVX512, VNNI, SHA HWA, VAES and naturally the older AVX2/FMA, AES HWA.

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 10 x64, latest AMD and Intel drivers. 2MB “large pages” were enabled and in use. Turbo / Boost was enabled on all configurations.

Native Benchmarks AMD Ryzen 2500U Bristol Ridge Intel i7 8550U (Coffeelake ULV) Intel Core i7 10510U (CometLake ULV) Intel Core i7 1065G7 (IceLake ULV) Comments
CPU Arithmetic Benchmark Native Dhrystone Integer (GIPS) 103 125 134 154 [+15%] ICL is 15% faster than CML.
CPU Arithmetic Benchmark Native Dhrystone Long (GIPS) 102 115 135 151 [+12%] With a 64-bit integer workload – a 12% increase.
CPU Arithmetic Benchmark Native FP32 (Float) Whetstone (GFLOPS) 79 67 85 90 [+6%] With floating-point, ICL is 6% faster than CML.
CPU Arithmetic Benchmark Native FP64 (Double) Whetstone (GFLOPS) 67 57 70 74 [+5%] With FP64 we see a 5% improvement.
With integer (legacy) workloads (not using SIMD), the new ICL core is over 10% faster than the higher-clocked CML core; with floating-point we see a 5% improvement. While modest, it shows the potential of the new core over the old-but-refined cores we’ve had since SKL.
BenchCpuMM Native Integer (Int32) Multi-Media (Mpix/s) 239 306 409 504* [+23%] With AVX512 ICL wins this vectorised integer test.
BenchCpuMM Native Long (Int64) Multi-Media (Mpix/s) 53.4 117 149 145* [-3%] With a 64-bit AVX512 integer workload we have parity.
BenchCpuMM Native Quad-Int (Int128) Multi-Media (Mpix/s) 2.41 2.21 2.54 3.67 [+44%] A tough test using long integers to emulate Int128 without SIMD; ICL is 44% faster!
BenchCpuMM Native Float/FP32 Multi-Media (Mpix/s) 222 266 328 414* [+26%] In this floating-point vectorised test, AVX512 is 26% faster.
BenchCpuMM Native Double/FP64 Multi-Media (Mpix/s) 127 155.9 194 232* [+19%] Switching to FP64 SIMD code, ICL is ~20% faster.
BenchCpuMM Native Quad-Float/FP128 Multi-Media (Mpix/s) 6.23 6.51 8.22 10.2* [+24%] A heavy algorithm using FP64 to mantissa-extend FP128; ICL is 24% faster.
With heavily vectorised SIMD workloads, ICL is able to deploy AVX512, which leads to a 20-25% performance improvement even at the slower clock. However, AVX512 is quite power-hungry (as we’ve seen on HEDT), so we are power-constrained in a ULV part here – higher-TDP systems (28W, etc.) should perform much better.

* using AVX512 instead of AVX2/FMA.

BenchCrypt Crypto AES-256 (GB/s) 10.9 13.1 12.1 21.3* [+76%] ICL with VAES is 76% faster than CML.
BenchCrypt Crypto AES-128 (GB/s) 10.9 13.1 12.1 21.3* [+76%] No change with AES128.
BenchCrypt Crypto SHA2-256 (GB/s) 6.78** 3.97 4.3 9** [+2.1x] Despite SHA HWA, Ryzen loses the top spot.
BenchCrypt Crypto SHA1 (GB/s) 7.13** 7.5 7.2 15.7** [+2.2x] The less compute-intensive SHA1 does not help.
BenchCrypt Crypto SHA2-512 (GB/s) 1.48 1.54 7.1*** SHA2-512 is not accelerated by SHA HWA.
The memory sub-system is crucial here; despite supporting VAES (AVX512 VL) and SHA HWA (like Ryzen), ICL wins mainly thanks to the very fast LP-DDR4X @ 3733Mt/s. VAES helps marginally (at this time) and SHA HWA cannot beat AVX512 multi-buffer, but it should matter much more in single-buffer large-data workloads.

* using VAES (AVX512 VL) instead of AES HWA.

** using SHA HWA instead of multi-buffer AVX2.

*** using AVX512 B/W

BenchFinance Black-Scholes float/FP32 (MOPT/s) 93.34 73.02 109 With non-vectorised code ICL is still faster
BenchFinance Black-Scholes double/FP64 (MOPT/s) 77.86 75.24 87.2 91 [+4%] Using FP64 ICL is 4% faster
BenchFinance Binomial float/FP32 (kOPT/s) 35.49 16.2 23.5 Binomial uses thread shared data thus stresses the cache & memory system.
BenchFinance Binomial double/FP64 (kOPT/s) 19.46 19.31 21 27 [+29%] With FP64 code ICL is 29% faster.
BenchFinance Monte-Carlo float/FP32 (kOPT/s) 20.11 14.61 79.9 Monte-Carlo also uses thread shared data but read-only thus reducing modify pressure on the caches.
BenchFinance Monte-Carlo double/FP64 (kOPT/s) 15.32 14.54 16.5 66 [+2x] Switching to FP64 ICL is 2x faster.
With non-SIMD financial workloads, ICL still improves a significant amount over CML, thus it makes sense to choose it over the older core. Still, it is more likely that GPGPUs will be used for such workloads today.
BenchScience SGEMM (GFLOPS) float/FP32 107 141 158 185* [+17%] In this tough vectorised algorithm, ICL is 17% faster.
BenchScience DGEMM (GFLOPS) double/FP64 47.2 55 69.2 91.7* [+32%] With FP64 vectorised code, ICL is 32% faster.
BenchScience SFFT (GFLOPS) float/FP32 3.75 13.23 13.9 31.7* [+2.3x] FFT is also heavily vectorised and here ICL is over 2x faster.
BenchScience DFFT (GFLOPS) double/FP64 4 6.53 7.35 17.7* [+2.4x] With FP64 code, ICL is even faster.
BenchScience SNBODY (GFLOPS) float/FP32 112.6 160 169 200* [+18%] N-Body simulation is vectorised but with more memory accesses.
BenchScience DNBODY (GFLOPS) double/FP64 45.3 57.9 64.2 61.8* [-4%] With FP64 code, ICL is slightly behind CML.
With highly vectorised SIMD code (scientific workloads), ICL again shows us the power of AVX512 and can be over 2x (twice) faster than the higher-clocked CML. Some algorithms may need further optimisation, but even then we see 17-30% improvements.

* using AVX512 instead of AVX2/FMA

Neural Networks NeuralNet CNN Inference (Samples/s) 14.32 17.27 19.33 25.62* [+33%] Using AVX512 ICL inference is 33% faster.
Neural Networks NeuralNet CNN Training (Samples/s) 1.46 2.06 3.33 4.56* [+37%] Even training improves by 37%.
Neural Networks NeuralNet RNN Inference (Samples/s) 16.93 22.69 23.88 24.93* [+4%] Just 4% faster but improvement is there.
Neural Networks NeuralNet RNN Training (Samples/s) 1.48 1.14 1.57 2.97* [+43%] Training is much faster by 43% over CML.
As we’ve seen before, ICL benefits greatly from AVX512 – it manages to beat the higher-clocked CML across the board by 33-43% – and that is before using VNNI to accelerate algorithms even more.

* using AVX512 instead of AVX2/FMA (not using VNNI yet)

CPU Image Processing Blur (3×3) Filter (MPix/s) 532 720 891 1,580* [+77%] In this vectorised workload, ICL is 77% faster.
CPU Image Processing Sharpen (5×5) Filter (MPix/s) 146 290 359 633* [+76%] Same algorithm but more shared data – still 76%.
CPU Image Processing Motion-Blur (7×7) Filter (MPix/s) 123 157 186 326* [+75%] Again the same algorithm, but even more shared data, brings 75%.
CPU Image Processing Edge Detection (2*5×5) Sobel Filter (MPix/s) 185 251 302 502* [+66%] A different algorithm, but still a vectorised workload – still 66% faster.
CPU Image Processing Noise Removal (5×5) Median Filter (MPix/s) 26.49 25.38 27.7 72.9* [+2.6x] Still vectorised code; ICL rules here, 2.6x faster!
CPU Image Processing Oil Painting Quantise Filter (MPix/s) 9.38 14.29 15.7 24.7* [+57%] A similar improvement here of about 57%.
CPU Image Processing Diffusion Randomise (XorShift) Filter (MPix/s) 660 1525 1580 2100* [+33%] With an integer workload, 33% faster.
CPU Image Processing Marbling Perlin Noise 2D Filter (MPix/s) 94.16 188.8 214 307* [+43%] In this final test, again with an integer workload, 43% faster.
ICL rules this benchmark with AVX512: integer (B/W) workloads are 33-43% faster and floating-point workloads 66-77% faster than CML, even at a lower clock. Again we see the huge improvement AVX512 brings, even in low-power ULV envelopes.

* using AVX512 instead of AVX2/FMA

Unlike CML, ICL with AVX512 support is a revolution in performance – which is exactly what we were hoping for; even at a much lower clock we see anywhere between 33% and over 2x (twice) the performance within the same power limits (TDP/turbo). As we know from HEDT, AVX512 is power-hungry, thus higher-TDP versions (e.g. 28W) should perform even better.

Even without AVX512, we see a good improvement of 5-15%, again at a much lower clock (3.9GHz vs. 4.9GHz), while CML and older versions relied on higher clocks / more cores to outperform the older KBL/SKL-U parts.

SiSoftware Official Ranker Scores

Final Thoughts / Conclusions

With AMD snapping at its heels with Ryzen Mobile, Intel has finally fixed its 10nm production and rolled out the “new Skylake” we deserve: IceLake with AVX512 brings feature parity with the much older HEDT platform and shows good promise for the future. This is the “Core” you have been looking for.

While power-hungry and TDP-constrained, AVX512 does bring sizeable performance gains on top of the core, cache and memory sub-system improvements. The other instruction sets (VAES, SHA HWA) complete the package and might help in scenarios where code has not been updated to AVX512.

With ICL, a mere 15W thin & light (e.g. Dell XPS 13 9300) can outperform older desktop-class CPUs (e.g. SKL) with 4-6x the TDP, which makes us really keen to see what the desktop-class processors will be capable of. And not before time, as the competition has been bringing stronger and stronger designs (Ryzen2, future Ryzen 3).

If you have been waiting to upgrade from the much older – but still good – SKL/KBL with just 2 cores and no hardware vulnerability mitigations, then you finally have something to upgrade to: CML was not it, as despite its 4 cores (and rumoured 6 cores) it just did not bring enough to the table to make upgrading worthwhile (save for hardware mitigations that don’t cripple performance).

Overall, with the GPGPU and memory improvements, ICL-U is a very compelling proposition that, cost permitting, should be your top choice for long-term use.

In a word: Highly Recommended!

Please see our other articles on:

SiSoftware Sandra 20/20/3 (2020 R3) Released – Updated Benchmarks

Service Pack 3 (SP3) Download

We are pleased to release the R3 (version 30.31) update for Sandra 20/20 (2020) with the following updates:

Sandra 20/20 (2020) Press Release

  • Hardware Support:
    • Additional PCIe extended capabilities support
  • CPU Cryptography Benchmarks:
    • Block size changed to ~1500 bytes, similar to an Ethernet packet
    • Various stability and reliability improvements
  • GPGPU Cryptography Benchmarks:
    • Block size changed to ~1500 bytes, similar to an Ethernet packet
    • Various stability and reliability improvements

Reviews using Sandra 20/20:

Update & Download

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

SiSoftware Sandra 20/20/2 (2020 R2) Released – Stability Fixes

Service Pack 2 (SP2) Download

We are pleased to release the R2 (version 30.27) update for Sandra 20/20 (2020) with the following updates:

Sandra 20/20 (2020) Press Release

  • Hardware Support:
    • PCIe extended capabilities support
  • Software Support:
    • ReFS format Disk benchmark stability issues
  • CPU Benchmarks:
    • Tools (Visual C++ compiler 2019) Update
  • GPGPU Benchmarks:
    • CUDA: Updated SDK 10.2/10.1
    • OpenCL: Updated SDK support

Reviews using Sandra 20/20:

Update & Download

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

SiSoftware Sandra 20/20/1 (2020 R1a) Released – Updated Hardware Support

Service Pack 1 (SP1) Download

Update November 25th: Released a patch (version 30.24) to add further hardware and software support.

Update October 24th: Released a patch (version 30.21) to correct Windows 7 / Server 2008/R2 run-time issues.

We are pleased to release the R1 (version 30.24) update for Sandra 20/20 (2020) with the following updates:

Sandra 20/20 (2020) Press Release

  • Hardware Support:
    • AMD Ryzen2 (series 3000 Matisse), Stoney Ridge updated support
    • Intel Cascade Lake (CSL), Comet Lake (CML), Cannon Lake (CNL), Ice Lake (ICL) updated support
  • CPU Benchmarks:
    • Tools (Visual C++ compiler 2019) Update
  • GPGPU Benchmarks:
    • CUDA: Updated SDK 10.2/10.1
    • OpenCL: Updated SDK support

Reviews using Sandra 20/20:

Update & Download

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite