AVX512 Improvement for Icelake Mobile (i7-1065G7 ULV)

What is AVX512?

AVX512 (Advanced Vector eXtensions 512) is the 512-bit SIMD instruction set that follows on from the previous 256-bit AVX2/FMA/AVX instruction sets. Originally introduced by Intel with its “Xeon Phi” GPGPU accelerators, it next appeared on the HEDT platform with Skylake-X (SKL-X/EX/EP), but until now it was not available on mainstream platforms.

With the 10th “real” generation Core arch(itecture) (IceLake/ICL), we finally see “enhanced” AVX512 on the mobile platform, which includes all the original extensions and quite a few new ones.

Original AVX512 extensions as supported by SKL/KBL-X HEDT processors:

  • AVX512F – Foundation – most floating-point single/double instructions widened to 512-bit.
  • AVX512-DQ – Double-Word & Quad-Word – most 32 and 64-bit integer instructions widened to 512-bit
  • AVX512-BW – Byte & Word – most 8-bit and 16-bit integer instructions widened to 512-bit
  • AVX512-VL – Vector Length eXtensions – most AVX512 instructions on previous 256-bit and 128-bit SIMD registers
  • AVX512-CD* – Conflict Detection – loop vectorisation through predication [only on Xeon/Phi co-processors]
  • AVX512-ER* – Exponential & Reciprocal – transcendental operations [only on Xeon/Phi co-processors]

New AVX512 extensions supported by ICL processors:

  • AVX512-VNNI** (Vector Neural Network Instructions) [also supported by the updated CSL-X HEDT]
  • AVX512-VBMI, VBMI2 (Vector Byte Manipulation Instructions)
  • AVX512-BITALG (Bit Algorithms)
  • AVX512-IFMA (Integer FMA)
  • AVX512-VAES (Vector AES) accelerating crypto
  • AVX512-GFNI (Galois Field)
  • AVX512-GNA (Gaussian Neural Accelerator)

As with anything, simply doubling register widths does not automagically double performance: dependencies, memory load/store latencies and even data characteristics limit the gains, and some instructions may require future arch updates or tools to realise their true potential.

SIMD FMA Units: Unlike HEDT/server processors, ICL ULV (and likely desktop) has a single 512-bit FMA unit, not two (2): the execution rate (without dependencies) is thus similar for AVX512 and AVX2/FMA code. However, future versions are likely to add execution units, so AVX512 code will benefit even more.
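
To make this concrete, here is a minimal sketch (not Sandra's actual code) of the same dependency-free multiply-accumulate loop in AVX2/FMA3 and AVX512F form. Two 256-bit FMA units retire 2×8 = 16 FP32 FMAs per cycle; a single 512-bit unit also peaks at 16, which is why the two code paths land so close together on ICL ULV:

```cpp
#include <immintrin.h>
#include <cstddef>

// acc[i] += a[i] * b[i]; n is assumed to be a multiple of the vector width.

void fma_avx2(const float* a, const float* b, float* acc, size_t n) {
    for (size_t i = 0; i < n; i += 8) {              // 8 FP32 lanes per op
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        __m256 vc = _mm256_loadu_ps(acc + i);
        _mm256_storeu_ps(acc + i, _mm256_fmadd_ps(va, vb, vc));
    }
}

void fma_avx512(const float* a, const float* b, float* acc, size_t n) {
    for (size_t i = 0; i < n; i += 16) {             // 16 FP32 lanes per op
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        __m512 vc = _mm512_loadu_ps(acc + i);
        _mm512_storeu_ps(acc + i, _mm512_fmadd_ps(va, vb, vc));
    }
}
```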

In this article we test AVX512 core performance; please see our other articles on:

Native SIMD Performance

We are testing native SIMD performance using various instruction sets: AVX512, AVX2/FMA3, AVX to determine the gains the new instruction sets bring.

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 10 x64, latest Intel drivers. Turbo / Dynamic Overclocking was enabled on both configurations.

| Native Benchmarks | ICL ULV AVX512 | ICL ULV AVX2/FMA3 | Comments |
|---|---|---|---|
| BenchCpuMM Native Integer (Int32) Multi-Media (Mpix/s) | 504 [+25%] | 403 | For integer workloads we manage a 25% improvement – not quite the 100% we were hoping for, but still decent. |
| BenchCpuMM Native Long (Int64) Multi-Media (Mpix/s) | 145 [+1%] | 143 | With a 64-bit integer workload the improvement shrinks to 1%. |
| BenchCpuMM Native Quad-Int (Int128) Multi-Media (Mpix/s) | 3.67 | 3.73 [-2%] | [No SIMD in use here] |
| BenchCpuMM Native Float/FP32 Multi-Media (Mpix/s) | 414 [+22%] | 339 | In this floating-point test we see a 22% improvement, similar to integer. |
| BenchCpuMM Native Double/FP64 Multi-Media (Mpix/s) | 232 [+20%] | 194 | Switching to FP64 we see a similar improvement. |
| BenchCpuMM Native Quad-Float/FP128 Multi-Media (Mpix/s) | 10.17 [+13%] | 9 | In this heavy algorithm, which uses FP64 to mantissa-extend FP128, we see only a 13% improvement. |

With limited SIMD resources, AVX512 cannot bring a 100% improvement, but it still manages 20-25% over AVX2/FMA, which is decent; also consider that this is a TDP-constrained ULV platform, not desktop/HEDT.

| Native Benchmarks | ICL ULV AVX512 | ICL ULV AVX2/FMA3 | Comments |
|---|---|---|---|
| BenchCrypt Crypto SHA2-256 (GB/s) | 9 [+2.25x] | 4 | With no data dependency between buffers, we get great scaling of over 2x in this integer workload. |
| BenchCrypt Crypto SHA1 (GB/s) | 15.71 [+81%] | 8.6 | Here we see only an 81% improvement, likely due to a lack of (more) memory bandwidth – it would likely scale higher. |
| BenchCrypt Crypto SHA2-512 (GB/s) | 7.09 [+2.3x] | 3.07 | With a 64-bit integer workload we see a larger-than-2x improvement. |

Thanks to the new crypto-friendly acceleration instructions of AVX512, and no doubt helped by high-bandwidth LP-DDR4X memory, we see over a 2x (twice) improvement over older AVX2. ICL ULV will no doubt be a great choice for low-power network devices (routers/gateways/firewalls) able to pump 100Gbe crypto streams.
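
As an illustration of the "multi-buffer" approach used for SIMD hashing (a minimal sketch under assumptions, not Sandra's implementation): SHA-256 has a serial dependency chain within one message, so the AVX512 win comes from hashing 16 independent buffers at once, one per 32-bit lane of a ZMM register. Shown here is the SHA-256 σ0 message-schedule function applied to 16 streams in one step:

```cpp
#include <immintrin.h>

// SHA-256 small sigma0: ROTR(x,7) ^ ROTR(x,18) ^ SHR(x,3),
// computed for 16 independent message words (one per buffer/lane).
static inline __m512i sha256_sigma0_x16(__m512i x) {
    return _mm512_xor_si512(
        _mm512_xor_si512(_mm512_ror_epi32(x, 7),    // ROTR(x, 7)
                         _mm512_ror_epi32(x, 18)),  // ROTR(x, 18)
        _mm512_srli_epi32(x, 3));                   // SHR(x, 3)
}
```

Note the AVX512F rotate instruction (VPRORD) used here: with AVX2, each rotate costs two shifts and an OR, which is part of why the AVX512 path scales beyond the plain 2x width gain.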

| Native Benchmarks | ICL ULV AVX512 | ICL ULV AVX2/FMA3 | Comments |
|---|---|---|---|
| BenchScience SGEMM (GFLOPS) float/FP32 | 185 [-6%] | 196 | More optimisations seem to be required here, for ICL at least. |
| BenchScience DGEMM (GFLOPS) double/FP64 | 91 [+18%] | 77 | Changing to FP64 brings an 18% improvement. |
| BenchScience SFFT (GFLOPS) float/FP32 | 31.72 [+12%] | 28.34 | With FFT, we see a modest 12% improvement. |
| BenchScience DFFT (GFLOPS) double/FP64 | 17.72 [-2%] | 18 | With FP64 we see a 2% regression. |
| BenchScience SNBODY (GFLOPS) float/FP32 | 200 [+7%] | 187 | No help from the compiler here either. |
| BenchScience DNBODY (GFLOPS) double/FP64 | 61.76 [=] | 62 | With FP64 there is no delta. |

With these highly-optimised scientific algorithms, it seems we still have some way to go to extract more performance out of AVX512, though we still see 7-18% improvements in several tests even at this time.

| Native Benchmarks | ICL ULV AVX512 | ICL ULV AVX2/FMA3 | Comments |
|---|---|---|---|
| CPU Image Processing Blur (3×3) Filter (MPix/s) | 1,580 [+79%] | 883 | We start well here, with AVX512 almost 80% faster in this float/FP32 workload. |
| CPU Image Processing Sharpen (5×5) Filter (MPix/s) | 633 [+71%] | 371 | Same algorithm but with more shared data: a 71% improvement. |
| CPU Image Processing Motion-Blur (7×7) Filter (MPix/s) | 326 [+67%] | 195 | Again the same algorithm, but even more shared data brings the improvement down to 67%. |
| CPU Image Processing Edge Detection (2*5×5) Sobel Filter (MPix/s) | 502 [+58%] | 318 | Using two buffers does not change much: still a 58% improvement. |
| CPU Image Processing Noise Removal (5×5) Median Filter (MPix/s) | 72.92 [+2.4x] | 30.14 | This different algorithm works better still, with AVX512 over 2x faster. |
| CPU Image Processing Oil Painting Quantise Filter (MPix/s) | 24.73 [+50%] | 16.45 | Using the new scatter/gather in AVX512 still brings 50% better performance. |
| CPU Image Processing Diffusion Randomise (XorShift) Filter (MPix/s) | 2,100 [+33%] | 1,580 | A 64-bit integer workload with many gathers: still a good 33% improvement. |
| CPU Image Processing Marbling Perlin Noise 2D Filter (MPix/s) | 307 [+33%] | 231 | Again loads of gathers and a similar 33% improvement. |

Image manipulation algorithms working on individual (non-dependent) pixels love AVX512, with 33-140% improvements. The new scatter/gather instructions also simplify memory-access code, which can benefit from future arch improvements.
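
For illustration, this is roughly what scatter/gather looks like in intrinsics form (a minimal sketch, not Sandra's code): 16 non-contiguous FP32 pixels are fetched or written with a single instruction instead of 16 scalar accesses:

```cpp
#include <immintrin.h>

// Gather 16 FP32 pixels from precomputed offsets (e.g. the source
// coordinates of a Perlin-noise or randomise filter), then scatter
// the results back to equally arbitrary destinations.
void permute_pixels(const float* src, float* dst, __m512i idx) {
    __m512 v = _mm512_i32gather_ps(idx, src, sizeof(float)); // 16 loads, 1 op
    _mm512_i32scatter_ps(dst, idx, v, sizeof(float));        // 16 stores, 1 op
}
```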

| Native Benchmarks | ICL ULV AVX512 | ICL ULV AVX2/FMA3 | Comments |
|---|---|---|---|
| Neural Networks NeuralNet CNN Inference (Samples/s) | 25.94 [+3%] | 25.23 | Inference improves by a mere 3% despite few dependencies. |
| Neural Networks NeuralNet CNN Training (Samples/s) | 4.6 [+5%] | 4.39 | Training improves by a slightly better 5%, likely due to 512-bit accesses. |
| Neural Networks NeuralNet RNN Inference (Samples/s) | 25.66 [-1%] | 25.81 | RNN inference seems very slightly slower. |
| Neural Networks NeuralNet RNN Training (Samples/s) | 2.97 [+33%] | 2.23 | Finally, RNN training improves by 33%. |

Unlike image manipulation, neural networks don't seem to benefit as much – we see pretty much the same performance across the board. Clearly more optimisation is needed to push performance.

SiSoftware Official Ranker Scores

Final Thoughts / Conclusions

We never expected a low-power, TDP (power)-limited ULV platform to benefit from AVX512 as much as HEDT/server platforms – especially considering the lower count of SIMD execution units. Nevertheless, it is clear that ICL (even in ULV form) benefits greatly from AVX512, with 50-100% improvements in many algorithms and no losses.

ICL also introduces many new AVX512 extensions which can even be used to accelerate existing AVX512 code (not just legacy AVX2/FMA), so we are likely to see even higher gains in the future as software (and compilers) take advantage of the new extensions. Future CPU architectures are also likely to optimise complex instructions as well as add more SIMD/FMA execution units, which will greatly improve AVX512 code performance.

As the data-paths for the caches (L1D, L2?) have been widened, 512-bit memory accesses help extract more bandwidth for streaming algorithms (e.g. crypto), while the scatter/gather instructions reduce latencies for non-sequential data accesses. Thus the benefit of AVX512 extends beyond raw compute code.

We are excitedly waiting to see how AVX512-enabled desktop/HEDT ICL performs, not constrained by TDP and adequately cooled…

SiSoftware Sandra 20/20/4a (2020 R4a) Released

Note: The original R4 release text has been updated below. The (*) denotes new changes.

We are pleased to release R4a (version 30.39) update for 20/20 (2020) with the following updates:

Sandra 20/20 (2020) Press Release

  • Benchmarks:
    • Crypto AES Benchmarks*: Optimised AVX512/AVX2-VAES code to outperform AES-HWA where possible.
    • Crypto SHA Benchmarks*: Select AVX512 multi-buffer instead of SHA-HWA where supported.
    • Network (LAN), Wireless (WLAN/WWAN) Benchmarks: multi-threaded transfer tests and increased packet size to better utilise 10Gbe+ (and higher) links. [Note: threaded CPU required]
    • Internet Connection, Internet Peerage Benchmarks: multi-threaded transfer tests and increased packet size to better utilise Gigabit+ (and higher) connections.
  • Hardware Support:
    • Updated IceLake (ICL Gen10 Core), Future* (RKL, TGL Gen11 Core) AVX512, VAES, SHA-HWA support (see CPU, GP-GPU, Cache & Memory, AVX512 improvement reviews)
    • Updated CometLake (Gen10 Core) support (see CPU, GP-GPU, Cache & Memory reviews)
    • Updated CPU features support*
    • Updated NVMe support
    • Enhanced Biometrics information (fingerprint, face, voice, audio, etc. sensors)
    • Updated WiFi support (WiFi 6/802.11ax, WPA3)
    • Various stability and reliability improvements

Reviews using Sandra 20/20:

Update & Download

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

SiSoftware Sandra 20/20/3 (2020 R3) Released

We are pleased to release R3 (version 30.31) update for 20/20 (2020) with the following updates:

Sandra 20/20 (2020) Press Release

  • Hardware Support:
    • Additional PCIe extended capabilities support
  • CPU Cryptography Benchmarks:
    • Block size changed to ~1500 bytes similar to Ethernet packet
    • Various stability and reliability improvements
  • GPGPU Cryptography Benchmarks:
    • Block size changed to ~1500 bytes similar to Ethernet packet
    • Various stability and reliability improvements

Reviews using Sandra 20/20:

Update & Download

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

SiSoftware Sandra 20/20/2 (2020 R2) Released

We are pleased to release R2 (version 30.27) update for 20/20 (2020) with the following updates:

Sandra 20/20 (2020) Press Release

  • Hardware Support:
    • PCIe extended capabilities support
  • Software Support:
    • ReFS format Disk benchmark stability issues
  • CPU Benchmarks:
    • Tools (Visual C++ compiler 2019) Update
  • GPGPU Benchmarks:
    • CUDA: Updated SDK 10.2/10.1
    • OpenCL: Updated SDK support

Reviews using Sandra 20/20:

Update & Download

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

SiSoftware Sandra 20/20/1a (2020 R1a) Released

Update November 25th: Released patch (version 30.24) to add further hardware and software support.

Update October 24th: Released patch (version 30.21) to correct Windows 7 / Server 2008/R2 run-time issues.

We are pleased to release R1 (version 30.24) update for 20/20 (2020) with the following updates:

Sandra 20/20 (2020) Press Release

  • Hardware Support:
    • AMD Ryzen2 (series 3000 Matisse), Stoney Ridge updated support
    • Intel Cascade Lake (CSL), Comet Lake (CML), Cannon Lake (CNL), Ice Lake (ICL) updated support
  • CPU Benchmarks:
    • Tools (Visual C++ compiler 2019) Update
  • GPGPU Benchmarks:
    • CUDA: Updated SDK 10.2/10.1
    • OpenCL: Updated SDK support

Reviews using Sandra 20/20:

Update & Download

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

SiSoftware Sandra 20/20 (2020) Released!

FOR IMMEDIATE RELEASE

Contact: Press Office

SiSoftware Sandra 20/20 (2020) Released:
Brand-new benchmarks (AI/ML), hardware support

Updates: R1, R2, R3, R4.

London, UK, July 18th, 2019 – We are pleased to announce the launch of SiSoftware Sandra 20/20 (2020), the latest version of our award-winning utility, which includes remote analysis, benchmarking and diagnostic features for PCs, servers, mobile devices and networks.

It adds two Neural Networks AI/ML (Artificial Intelligence/Machine Learning) benchmarks for both CPU and GP (GPU) to measure both CNN (Convolution Neural Network) & RNN (Recurrent Neural Networks) performance on modern hardware.

It also adds hardware support and optimisations for brand-new CPU architectures (AMD Ryzen 2 (3000 series); Intel IceLake, CometLake) not forgetting GPGPU architectures across the various interfaces (CUDA, OpenCL, DirectX ComputeShader, OpenGL Compute).

As SiSoftware operates a “just-in-time” release cycle, some features were introduced in Sandra 2017 service packs: in Sandra Titanium they have been updated and enhanced based on all the feedback received.

Operating System Module

Broad Operating System Support

All current versions supported: Windows 10, 8.1*, 8*, 7*; Server 2019, 2016, 2012/R2 and 2008/R2*

Brand new AI/ML benchmarks featuring both CNN & RNN networks testing both inference/forward and training/back-propagation performance.

Processor Neural Networks (AI/ML)

A combined performance index of CNN (inference/forward & training) & RNN (inference/forward & training) for all precisions (single/FP32, double/FP64 floating-point) and instruction sets (AVX512, AVX2/FMA, AVX, SSE4, SSE2, RTM/HLE with NUMA and large-page support)

Ranker: Processor Neural Networks (Normal/Single Precision)
Ranker: Processor Neural Networks (High/Double Precision)

GP (GPU) Neural Networks (AI/ML)

A combined performance index of CNN (inference/forward & training) & RNN (inference/forward & training) for all precisions (half/FP16, single/FP32 floating-point) and platforms (CUDA, OpenCL, DirectX Compute)

GP (GPU) Neural Networks (Normal/Single Precision)
GP (GPU) Neural Networks (Low/Half Precision)

CNN (Convolution Neural Network) Architecture

Detailed document on the CNN architecture, data-sets and results that underpin our choices for the new benchmarks.

The new Neural Networks (AI/ML) Benchmarks: CNN Architecture

RNN (Recurrent Neural Network) Architecture

Detailed document on the RNN architecture, data-sets and results that underpin our choices for the new benchmarks.

The new Neural Networks (AI/ML) Benchmarks: RNN Architecture

Major changes

  • All connections to website engines (Ranker, Information, Price) are now secured by SSL (HTTPS).
  • Sandra client (management console) is now installed as native 64-bit (on x64 and arm64) and thus needs 64-bit Access components (2016, 2013, 2010, etc.) or SQL Server (2017, 2016, 2014, etc) for its database.

Key features of Sandra 20/20

  • 4 native architectures support (x86, x64, ARM64** – Windows; ARM, ARM64, x86, x64 – Android)
  • Huge official hardware support through technology partners (AMD/ATI, nVidia, Intel).
  • 4 native (GP)GPU/APU platforms support (OpenCL 2.1+, CUDA 10.1+, DirectX Compute Shader 11/10+, OpenGL Compute 4.5+, Vulkan 1.0+).
  • 4 native Graphics platforms support (DirectX 11.x/10.x, OpenGL 4.0+, Vulkan 1.0+).
  • 9 language versions (English, German, French, Italian, Spanish, Japanese, Chinese (Traditional, Simplified), Russian) in a single installer.
  • Enhanced Sandra Lite (Eval) version (free for personal/educational use, evaluation for other uses)

Articles & Benchmarks

For more details, please see the following articles:

Purchasing

For more details, and to purchase the commercial versions, please click here.

Updating or Upgrading

To update your existing commercial version, please click here.

Downloading

For more details, and to download the Lite (Evaluation) version, please click here.

Reviewers and Editors

For your free review copies, please contact us.

About SiSoftware

SiSoftware, founded in 1995, is one of the leading providers of computer analysis, diagnostic and benchmarking software. The flagship product, known as “SANDRA”, was launched in 1997 and has become one of the most widely used products in its field. Many worldwide IT publications, magazines and review sites use SANDRA to analyse the performance of today’s computers. Thousands of on-line reviews of computer hardware that use SANDRA are catalogued on our website alone.

Since launch, SiSoftware has always been at the forefront of the technology arena, being among the first providers of benchmarks that show the power of emerging new technologies such as multi-core, GPGPU, OpenCL, OpenGL, DirectCompute, x64, ARM64, ARM, NUMA, SMT (Hyper-Threading), SMP (multi-threading), AVX512, AVX2/FMA3, AVX, NEON/2, SSE4.2/4, SSSE3, SSE2, SSE, Java and .NET.

SiSoftware is located in London, UK. For more information, please visit www.sisoftware.net, www.sisoftware.eu, or www.sisoftware.co.uk

The new Neural Networks (AI/ML) Benchmarks: RNN Architecture

What is a Recurrent Neural Network (RNN/LSTM)?

An RNN is a type of neural network primarily made up of neurons that store their previous state and are thus said to ‘have memory’. In effect, this allows them to ‘remember’ patterns or sequences.

However, they can still be used as ‘classifiers’ i.e. recognising visual patterns in images and thus can be used in visual recognition software.
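
For readers who want the mechanics, here is a minimal sketch of a basic (‘vanilla’) RNN cell – names and sizes are illustrative, not Sandra's implementation. The hidden state h carries over from one time-step to the next, which is exactly the ‘memory’ described above:

```cpp
#include <cmath>
#include <vector>

// One RNN time-step: h_t = tanh(Wx * x_t + Wh * h_{t-1} + b)
void rnn_step(const std::vector<float>& x,                   // input at time t
              std::vector<float>& h,                          // state: t-1 in, t out
              const std::vector<std::vector<float>>& Wx,      // input weights
              const std::vector<std::vector<float>>& Wh,      // recurrent weights
              const std::vector<float>& b) {                  // bias
    std::vector<float> h_new(h.size());
    for (size_t i = 0; i < h.size(); ++i) {
        float s = b[i];
        for (size_t j = 0; j < x.size(); ++j) s += Wx[i][j] * x[j];
        for (size_t j = 0; j < h.size(); ++j) s += Wh[i][j] * h[j]; // 'memory'
        h_new[i] = std::tanh(s);
    }
    h = h_new;
}
```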

What is VGG(net) and why use it now?

VGGNet is the baseline (or benchmark) CNN-type network that, while it did not win the ILSVRC 2014 competition (won by GoogleNet/Inception), is still the preferred choice in the community for classification due to its uniform and thus relatively simple architecture.

While it is generally implemented using CNN layers, either directly or in combination (e.g. ResNet), it can also be implemented using RNN layers, which is what we have done here.

We believe this is a good test scenario and thus a relevant benchmark for today’s common systems.

We are considering more complex neurons, like LSTM, for future tests specifically designed for high-end systems such as those used in research and academia.

What is the MNIST dataset and why use it now?

The MNIST database (https://en.wikipedia.org/wiki/MNIST_database) is a decently sized dataset of handwritten digits used for training and testing image processing systems like neural networks. It contains 60K training and 10K testing images of 28×28 pixel anti-aliased gray levels. The number of classes is only 10 (digits ‘0’ to ‘9’).

While they are only 28×28 and not colour, they can be up-scaled to any size by common up-scaling algorithms to test neural networks with little source data.

Today (2019) the digits would be captured in much higher resolution similar to the standard input resolution of the image processing networks of today (between 200×200 and 300×300 pixels).

As Sandra is designed to be small and easily downloadable, it is not possible to include gigabytes (GB) of data for either inference or training. Even the low-resolution (32x32x3) ILSVRC is 3GB thus unusable for our purpose.

What is Sandra’s RNN network architecture and why was it designed this way?

Due to the low complexity of the data and in order to maintain good performance even on low-end hardware, a standard RNN was chosen as the architecture. The features are:

  • Input is 224x224x1 as MNIST images are grey-scale only (up-scaled from 28×28)
  • Output is 10 as there are only 10 classes
  • 4 layer network, 1 RNN, 3 fully connected layers

What are the implementation details of the network?

The CPU version of the neural network supports all common instruction sets and precisions, and will be continuously updated as the industry moves forward.

  • Both inference/forward and train/back-propagation tested and supported.
  • Precision: single and double floating-point supported with future half/FP16.
  • SIMD Instruction Sets: FPU, SSE2, SSE4.x, AVX, AVX2/FMA and AVX512 with future VNNI.
  • Threads/Cores: Up to the operating-system maximum of 384 threads (in 64-thread groups) is supported, with hard affinity as in all other benchmarks.
  • NUMA: NUMA is supported up to 16 nodes with data allocated to the closest node.

What kind of BPTT (Back-Propagation Through Time) is used?

Unfortunately, as we only know the output (digit) at the end of the sequence (i.e. once all pixels have been presented), we cannot calculate intermediate errors, and thus cannot use TBPTT (Truncated BPTT), which relies on known outputs at intermediate sequence time-steps.
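
In equation form (the standard full-BPTT gradient, stated generically rather than as Sandra's exact maths): with the loss L available only at the final time-step T, the recurrent-weight gradient must be accumulated back through every step of the sequence:

```latex
% L is known only at the final step T, so the gradient w.r.t. the
% recurrent weights W_h propagates through every time-step:
\frac{\partial L}{\partial W_h}
  = \sum_{t=1}^{T}
    \frac{\partial L}{\partial h_T}
    \left( \prod_{k=t+1}^{T} \frac{\partial h_k}{\partial h_{k-1}} \right)
    \frac{\partial h_t}{\partial W_h}
```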

What kind of detection rate and error does Sandra’s implementation achieve?

Naturally due to the low source resolution, a much shallower/simpler network would have sufficed. However due to up-scaling and the relatively large number of training images there is no danger of over-fitting.

It achieves a % detection rate (over the 10K testing images) after just 1 epoch (Epoch 0) and % after 30 epochs.

Training (30 epochs) took just X* hours on an i9-7900X (10C/20T) using AVX512/single-precision.

Does Sandra fully infer or train the full image set when benchmarking?

As with all other Sandra benchmarks, the tests are limited to 30 seconds (in order to complete reasonably quickly) – within this time, as many images as possible, chosen at random from the data-sets (60K train, 10K test), are processed.

The new Neural Networks (AI/ML) Benchmarks: CNN Architecture

What is a Convolution Neural Network (CNN/ConvNet)?

A CNN is a type of neural network primarily made up of neuron layers connected in such a way that they perform convolution over the previous layers: in effect, they are filters over the input – the same way a blur/sharpen/edge/etc. filter would be applied over a picture.

They are used as ‘classifiers’ i.e. recognising visual patterns in images and thus are used in visual recognition software.
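
As a concrete illustration (a sketch, not Sandra's kernel), a convolution layer computes exactly what an image filter does – each output pixel is a weighted sum of its input neighbourhood:

```cpp
// Apply a 3x3 kernel k over a W x H single-channel image
// (borders skipped for brevity).
void conv3x3(const float* in, float* out, int W, int H, const float k[9]) {
    for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x) {
            float s = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)         // weighted sum of the
                for (int dx = -1; dx <= 1; ++dx)     // 3x3 neighbourhood
                    s += k[(dy + 1) * 3 + (dx + 1)] * in[(y + dy) * W + (x + dx)];
            out[y * W + x] = s;
        }
}
```

A blur kernel would set all nine weights to 1/9; a trained convolution layer simply learns its weights instead of having them hand-picked.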

What is VGG(net) and why use its architecture now?

VGGNet is the baseline (or benchmark) CNN-type network that, while it did not win the ILSVRC 2014 competition (won by GoogleNet/Inception), is still the preferred choice in the community for classification due to its uniform and thus relatively simple architecture.

Thus, while today (2019) there are far deeper and more complex neural networks, Sandra is intended to run on common systems, so we had to choose the most common yet relatively simple network.

We believe this is a good test scenario and thus a relevant benchmark for today’s common systems.

We are considering much deeper networks, like ResNet, for future tests specifically designed for high-end systems such as those used in research and academia.

Why not use TensorFlow, Caffe, etc. as a back-end?

As with all Sandra benchmarks, we develop our own code, optimised in conjunction with the community, which includes hardware makers. This allows us to control the entire benchmark stack, adding new features and support as required – which we would not be able to do when using a back-end.

Using a specific vendor’s libraries (e.g. cuDNN, MKL, etc.) would lock us into a specific platform, while we provide implementations for all platforms, including all CPU SIMD instruction sets (SSE2, SSE4, AVX, AVX2/FMA, AVX512) and major GP (GPGPU) run-times (CUDA, OpenCL, DirectX 11/12 Compute and future Vulkan*).

What is the MNIST dataset and why use it now?

The MNIST database (https://en.wikipedia.org/wiki/MNIST_database) is a decently sized dataset of handwritten digits used for training and testing image processing systems like neural networks. It contains 60k (thousand) training and 10k testing images of 28×28 pixel anti-aliased gray levels. The number of classes is only 10 (digits ‘0’ to ‘9’).

While they are only 28×28 and not colour (1 channel), they can be up-scaled to any size by common up-scaling algorithms to test neural networks with little source data. Here we up-scale them 8x to 224x224x1.
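
A minimal sketch of such an up-scale (nearest-neighbour replication is assumed here; the article does not specify Sandra's exact filter): each 28×28 source pixel becomes one 8×8 block of the 224×224 input:

```cpp
// Integer 8x up-scale: 28x28 grey-scale MNIST digit -> 224x224 input plane.
void upscale8x(const unsigned char src[28][28], unsigned char dst[224][224]) {
    for (int y = 0; y < 224; ++y)
        for (int x = 0; x < 224; ++x)
            dst[y][x] = src[y / 8][x / 8];  // replicate source pixel
}
```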

Today (2018) the digits would be captured in much higher resolution similar to the standard input resolution of the image processing networks of today (between 200×200 and 300×300 pixels).

As Sandra is designed to be small and easily downloadable, it is not possible to include gigabytes (GB) of data for either inference or training. Even the low-resolution ImageNet ILSVRC is 3GB thus unusable for our purpose.

What are the CIFAR datasets and why use them now?

The CIFAR datasets (https://www.cs.toronto.edu/~kriz/cifar.html) are also decently sized datasets of objects used for training and testing image processing systems like neural networks. They both consist of 50k (thousand) training and 10k testing images of 32x32x3-pixel colour images, with CIFAR-10 having 10 classes and CIFAR-100 having 100 classes.

Unlike MNIST the pictures are colour (3 channels RGB) and can also be up-scaled to any size by common up-scaling algorithms to test neural networks with little source data. Here we up-scale them 7x to 224x224x3.

Again, just as with MNIST this allows us to include more datasets while processing them in high resolution similar to modern neural networks without including a large dataset like ImageNet ILSVRC dataset.

What are ImageNet ILSVRC datasets and why *not* use them?

The ImageNet (ImageNet Large Scale Visual Recognition Challenge) datasets (http://www.image-net.org/challenges/LSVRC/) are used in the yearly challenge for researchers in object detection and image classification at large scale. They are used to measure progress in computer vision in the world today.

The yearly challenge/competition has thus yielded many recent advancements in the field with winners (and in some cases runner-ups) providing the classical neural networks of today: AlexNet, VGG, ResNet, Inception, etc.

Naturally the task is non-trivial and requires cutting-edge, complex neural networks that generally require similarly high-end hardware outside the mass-market domain. While old(er) neural networks like AlexNet, VGG or ResNet can today (2018) run on consumer hardware, they are usually deployed in inference/classification mode. Training them (from scratch) would still require significant processing power and time, which does not make sense for our benchmark.

Due to the nature of our software (mass-market, small, fast), the size of the datasets (about 3GB for the 32x32x3, 1.2-million training images) makes them unsuitable for inclusion, either as standard or as a download. As we already use low-resolution datasets, it would not make sense to include another – and the high-resolution versions (e.g. 256x256x3) are far larger (about 137GB train, 6.3GB test).

Another issue is licensing: the datasets are licensed for research use, for which Sandra – as a commercial product, even though we provide the benchmarks free of charge – would likely not qualify.

What is Sandra’s CNN network architecture and why was it designed this way?

Due to the low complexity of the data and in order to maintain good performance even on low-end hardware, VGG-16 was chosen as the architecture. The features are:

  • For MNIST dataset
    • Input is 224x224x1 as MNIST images are grey-scale (upscaled from 28×28)
    • Output is 10* as there are only 10 classes
    • 8 convolution (3×3 step 1), 5 pooling (2×2 step 2), 3 full-connect layers
  • Network/Engine features
    • Layers: Fully Connected/Dense, Convolution, Max Pooling, Recurrent, Dropout.
    • Activation: ReLU, Leaky ReLU, Smooth ReLU, Sigmoid, TanH. Activation functions are fused to the layers for reduced memory size/bandwidth footprint.
    • Back-propagation Optimiser: 2nd order Hessian.
    • Alignment: For performance, some layer sizes may be increased (e.g. output) to match SIMD alignment; the performance gained from SIMD outweighs the overhead of the extra unused neurons (see the sketch after this list).
    • SIMD Float Width: Up to 64 single-precision pixels per cycle when using AVX512.
    • SIMD Half Width: Up to 128 half-precision pixels per cycle when using AVX512/BFloat16*.
    • SIMD Int8 Width: Up to 256 int8 pixels per cycle when using AVX512/VNNI*.
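
As referenced in the Alignment item above, here is a minimal sketch of such a padding rule (illustrative, not Sandra's code): neuron counts are rounded up to the SIMD width so every weight row fills whole vectors:

```cpp
#include <cstddef>

constexpr size_t kSimdWidth = 16;  // FP32 lanes in a 512-bit register

// Round a layer's neuron count up to the next multiple of the SIMD width
// (kSimdWidth must be a power of two for the mask trick to work).
constexpr size_t padded(size_t neurons) {
    return (neurons + kSimdWidth - 1) & ~(kSimdWidth - 1);
}

// e.g. a 10-class output layer is stored as padded(10) == 16 neurons;
// the 6 extra neurons are never read, but every row is one full vector.
```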

What are the implementation details of the network?

The CPU version of the neural network supports all common instruction sets and precisions, and will be continuously updated as the industry moves forward.

  • Both inference/forward and train/back-propagation tested and supported.
  • Processor:
    • Precision: single/FP32 and double/FP64 supported.
    • SIMD Instruction Sets: FPU, SSE2, SSE4.x, AVX, AVX2/FMA, AVX512 with future VNNI*.
    • Threads/Cores: Up to the operating-system maximum of 384 threads (in 64-thread groups) is supported, with hard affinity as in all other benchmarks.
    • Atomic Updates: TSX/RTM used where supported, otherwise 128/64/32-bit interlocked updates.
    • NUMA: NUMA is supported up to 16 nodes with data allocated to the closest node.
    • Large Pages: Large (2/4MB) pages used where supported and enabled.
  • GP (GPGPU):
    • Precision: single/FP32 and half/FP16 supported.
    • Run-Times: CUDA 10+, OpenCL 1.2+, DirectX 11/12 Compute.
    • Multi-GPU: Up to 8 devices are supported including CPU pseudo-device.

How is the data stored/processed?

We use the CHW format for a simple SIMD implementation and performant load/store.
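
Concretely (illustrative names, not Sandra's code), CHW addressing keeps each channel as one contiguous H×W plane, so SIMD kernels get unit-stride loads and stores along a row:

```cpp
#include <cstddef>

// Linear offset of element (c, h, w) in a CHW tensor of shape C x H x W:
// channel-major, so pixels of one channel row are adjacent in memory.
inline size_t chw_index(size_t c, size_t h, size_t w, size_t H, size_t W) {
    return (c * H + h) * W + w;
}
```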

What activation function do you use?

We use the Sigmoid activation function with a fast (but naturally somewhat low-precision) SIMD tanh/exp implementation; while many modern networks (and VGG itself) use ReLU (for speed reasons), we’ve found the Sigmoid to work “better” for us without an appreciable performance impact. By better we mean fast convergence and no need for batch normalisation.
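
As an illustration of that speed/precision trade-off (a classic cheap approximation, not necessarily the one Sandra uses): a rational "fast sigmoid" avoids the expensive exp() entirely and vectorises trivially:

```cpp
#include <immintrin.h>

// Approximate sigmoid(x) ~= 0.5 + 0.5 * x / (1 + |x|) on 16 FP32 values.
// Low precision, but monotonic, bounded to (0,1) and branch-free.
static inline __m512 fast_sigmoid16(__m512 x) {
    const __m512 half = _mm512_set1_ps(0.5f);
    const __m512 one  = _mm512_set1_ps(1.0f);
    __m512 ax = _mm512_abs_ps(x);                          // |x|
    __m512 r  = _mm512_div_ps(x, _mm512_add_ps(one, ax));  // x / (1 + |x|)
    return _mm512_fmadd_ps(half, r, half);                 // 0.5*r + 0.5
}
```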

What kind of detection rate and error does Sandra’s implementation achieve?

Naturally due to the low source resolution, a much shallower/simpler network would have sufficed. However due to upscaling and the relatively large number of training images there is no danger of overfitting.

It achieves a 95.3% detection rate (over the 10k testing images) after just 1 epoch (Epoch 0) and 99.82% after 30 epochs.

Training (30 epochs) took just 7* hours on an i9-7900X (10C/20T) using AVX512/single-precision.

Does Sandra fully infer or train the full image set when benchmarking?

As with all other Sandra benchmarks, the tests are limited to 30 seconds (in order to complete reasonably quickly) – within this time, as many images as possible, chosen at random from the datasets (60k train, 10k test), are processed.

SiSoftware Sandra Titanium (2018) SP4/a/c Update: Retpoline and hardware support

Note: Updated 2019/June with information regarding MDS as well as change of recent CFL-R microcode vulnerability reporting.

We are pleased to release SP4/a/c (version 28.69) update for Sandra Titanium (2018) with the following updates:

Sandra Titanium (2018) Press Release

  • Reporting of Operating System (Windows) speculation control settings for the recently discovered vulnerabilities:
    • Kernel Retpoline mitigation status (for BTI) in recent Windows 10 / Server 2019 updates
    • Kernel Address Table Import Optimisation (“KATI”) status (as above)
    • L1TF (L1 Terminal Fault) mitigation status
    • MDS (Microarchitectural Data Sampling / “ZombieLoad”) mitigation status
  • Hardware Support:
    • AMD Ryzen2 (Matisse), Stoney Ridge support
    • Intel CometLake (CML), CannonLake (CNL), IceLake (ICL) support (based on public information)
  • CPU Benchmarks:
    • Image Processing: SIMD code improvement (SSE2/SSE4/AVX/AVX2-FMA/AVX512)
    • Multi-Media: Fixed lock-up on NUMA systems (e.g. AMD ThreadRipper) – thanks to Rob @ TechGage.
  • Memory/Cache Benchmarks
    • Report memory controller firmware version to the Ranker
  • GPGPU Benchmarks:
    • CUDA SDK 10.1
    • OpenCL: Processing (Fractals/Mandelbrot) variable vector width based on reported FP16/32/64 optimal SIMD width.
  • Ranker, Price & Information Engines
    • HTTPS (encryption) support for all engines as well as the main website

What is Retpoline?

It is a mitigation against the ‘Spectre‘ variant 2 (BTI – Branch Target Injection) vulnerability that affects just about all CPUs (not just Intel, but AMD, ARM, etc.). While ‘Spectre’ does not have the same overall performance degradation as ‘Meltdown‘ (RDCL – Rogue Data Cache Load), it can have a sizeable impact on some processors and workloads. At this time no CPUs contain hardware mitigation for Spectre without performance impact.

Retpoline (“Return Trampoline”) is a faster way to mitigate against it without restricting branch speculation in kernel mode (using IBRS/IBPB); it has recently been added to Linux and now to Windows version 1809 builds with KB4482887. Note that it still needs to be enabled in the registry via the Mitigation Features Override flags, as it is not enabled by default.

What CPUs can Retpoline be used on?

Unfortunately, Retpoline is only safe to use on some CPUs: AMD CPUs (though it does not engage on Ryzen, see below) and Intel Broadwell or older (v5 and earlier) – thus not Skylake (v6 or later).

Windows speculation control settings reporting:

Intel Haswell (Core v4), Broadwell (v5) – Retpoline enabled, KATI enabled
Kernel Retpoline Speculation Control – Enabled

Kernel Address Table Import Optimisation – Enabled

(Note RDCL mitigations KVA, L1TF are also enabled as required)

Intel Skylake (Core v6), Kabylake (v7), Skylake/Kabylake-X (v6x) – no Retpoline, KATI can be enabled
Kernel Retpoline Speculation Control – no

Kernel Address Table Import Optimisation – no/yes (can be enabled)

(Note RDCL mitigations KVA, L1TF are enabled as required)

Intel Coffeelake-R (Core v8r), Whiskeylake/AmberLake (Core v8r), CometLake* – no Retpoline, KATI not enabled
Kernel Retpoline Speculation Control – no

Kernel Address Table Import Optimisation – Enabled

Note 2019/June: The latest microcode (AEh), with MDS vulnerability support, causes Windows to report KVA/L1TF mitigations as required, despite the CPU claiming not to be vulnerable to RDCL.

Intel Atom Braswell (Atom v5), GeminiLake/ApolloLake (Atom v6) – no Retpoline but KATI enabled
Kernel Retpoline Speculation Control – no

Kernel Address Table Import Optimisation – Enabled

(Note RDCL mitigations KVA, L1TF are enabled as required)

AMD Ryzen (Threadripper) 1, 2 – no Retpoline, no KATI
Kernel Retpoline Speculation Control – no (should be usable?)

Kernel Address Table Import Optimisation – no (should be usable)

(Note CPU does not require RDCL mitigation thus no KVA, L1TF required)

From our somewhat limited testing above it seems that:

  • Intel Haswell/Broadwell (Core v4/v5) and perhaps earlier (Ivy Bridge/Sandy Bridge, Core v3/v2) users are in luck: Retpoline is enabled and should improve performance; unfortunately the RDCL (“Meltdown”) mitigation remains.
  • Intel Coffeelake-R (Core v8r refresh), WhiskeyLake ULV (v8r) users do benefit a bit more from their investment – while Retpoline is not enabled, KATI is enabled and should help. Not requiring KVA is the biggest gain of CFL-R. 2019/June: the latest microcode (AEh) causes Windows to require KVA/L1TF, thus negating any benefit CFL-R had over the original CFL/KBL/SKL.
  • Intel Skylake (Core v6), Kabylake (v7) and Coffeelake (v8) are not able to benefit from Retpoline, but KATI can work on some systems (driver dependent). However, on our Skylake ULV and Skylake-X test systems, KATI could not be enabled. We are investigating further.
  • Intel Atom (v4/v5+) users should be able to use Retpoline but it seems it cannot be enabled currently. KATI is enabled.
  • AMD Ryzen (Threadripper) 1/2 users should also be able to use Retpoline but it seems it cannot be enabled currently. While RDCL is not required, mitigations for Spectre v2 are required and should be enabled. We are investigating further.

Reviews using Sandra 2018 SP4:

Update & Download

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

SiSoftware Sandra Titanium (2018) SP3a/b Update: Pushing the Limits

Note: This article originally announced SP3a (28.45); it has since been updated to SP3b (28.49).

We are pleased to announce the SP3b (version 28.49) update for Sandra Titanium (2018) with updated hardware and software support:

Sandra Titanium (2018) Press Release

Sandra has always pushed the limits of hardware, optimising the workload based on the capabilities of the device (compute performance, memory/storage size, etc.), ensuring that both low-end and high-end devices are used to the best of their capability.

This new version pushes workloads even higher, with better scaling across all GPGPU benchmarks, allowing both low-end devices (e.g. integrated graphics, emulation on CPU) and high-end professional GPGPU accelerators with very fast, very large memory to be exercised fully.

GPGPU Benchmarks

  • All Benchmarks: increased workload sizes on all benchmarks, up to maximum device capacity.
  • FP16/half optimisations for AMD Vega and Radeon VII (vectorisation)*
  • FP16/half optimisations for nVidia Volta/Turing (half2) [CUDA 10]
  • Workgroup and workload optimisations for AMD Vega and Radeon VII*
  • Updated DirectX and OpenGL compute to match CUDA and OpenCL
  • Resolved “out of memory” issues on low-end hardware**
  • Resolved long running time of CPU test-paths (that the GPGPU test paths are checked against) by enabling SIMD implementation (FFT/GEMM)**

Note: At this time we have not personally tested Radeon VII to confirm improvements.

Note2: Applies to SP3b update.

Hardware Support

  • Intel Core v8r Mobile WhiskyLake (WHL), AmberLake (AML) support (based on public information)

Reviews using Sandra 2018 SP3a/b

Ranker Hardware Results

Note: FP64 rate on Radeon VII is 1/4 not 1/8 as stated in some places. FP16 rate is 2x with vectorisation, 1x with scalar.

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite