SiSoftware Sandra Titanium/2018 SP2 Update

We are pleased to announce SP2 (version 28.28) for Sandra Titanium (2018) with updated hardware and software support:

Sandra Titanium (2018) Press Release

  • GPGPU:
    • nVidia Series 2000 (Turing) GPGPU support (still on SDK 9.2 for broad compatibility)
    • Fixed workgroup sizes for the Scientific benchmarks
  • CPU:
    • Support for Intel 9000-Series CPU
    • Crypto AES benchmark VAES 512-bit and 256-bit support

Product Reviews

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

SiSoftware Sandra Titanium/2018 SP1b Update

We are pleased to announce SP1b (version 28.26) for Sandra Titanium (2018) with updated hardware and software support:

Sandra Titanium (2018) Press Release

  • GPGPU: Updated nVidia CUDA 9.2 SDK with FP16 support for all benchmarks
    • nVidia Volta arch GPGPU support
  • CPU: Support update for AMD ThreadRipper v2
  • CPU: Support update for Intel v9 CPUs

Product Reviews

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

nVidia Titan V: Volta GPGPU performance in CUDA and OpenCL

What is “Titan V”?

It is the latest high-end “pro-sumer” card from nVidia, built on the next-generation “Volta” architecture – the successor to the current “Pascal” architecture powering the Series 10 cards. Based on the top-end 100 chipset (not the lower 102 or 104), it boasts full-speed FP64/FP16 performance as well as brand-new “tensor cores” (matrix multipliers) for scientific and deep-learning workloads. It also comes with on-package HBM2 (high-bandwidth) memory, unlike the more traditional stand-alone GDDRX memory.

For this reason the price is also far higher than previous Titan X/XP cards, but considering the features/performance are more akin to the “Tesla” series, it may still be worth it depending on workload.

While using the additional FP64/FP16 units is automatic – save the usual code optimisations – tensor core support requires custom code, and existing libraries and apps need to be updated to make use of them. It is unknown at this time whether consumer cards based on “Volta” will also include them. As they support FP16 precision only, not all workloads may be able to use them – but DL (deep learning) and AI (artificial intelligence) are generally fine with lower precision, thus for such tasks they are ideal.
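To illustrate what “custom code” means in practice, here is a minimal sketch of the warp-level WMMA API that CUDA 9 exposes for the tensor cores (compute capability 7.0 only); the kernel name and fixed 16×16 leading dimensions are illustrative – a real GEMM would tile over the full matrices:

```cpp
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes a 16x16 tile of C = A*B on a tensor core:
// FP16 inputs, FP32 accumulate. Launch with at least one full warp, e.g. <<<1, 32>>>.
__global__ void wmma_tile(const half *a, const half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> fc;

    wmma::fill_fragment(fc, 0.0f);           // zero the FP32 accumulator
    wmma::load_matrix_sync(fa, a, 16);       // leading dimension = 16 (illustrative)
    wmma::load_matrix_sync(fb, b, 16);
    wmma::mma_sync(fc, fa, fb, fc);          // one 16x16x16 multiply-accumulate
    wmma::store_matrix_sync(c, fc, 16, wmma::mem_row_major);
}
```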

See these other articles on Titan performance:

Hardware Specifications

We are comparing the top-of-the-range Titan V with previous generation Titans and competing architectures with a view to upgrading to a mid-range high performance design.

| GPGPU Specifications | nVidia Titan V | nVidia Titan X (P) | nVidia GTX 980 (M2) | Comments |
|---|---|---|---|---|
| Arch / Chipset | Volta VP100 (7.0) | Pascal GP102 (6.1) | Maxwell 2 GM204 (5.2) | The V is the only one using the top-end 100 chip, not the lower-end 102 or 104 versions. |
| Cores (CU) / Threads (SP) | 80 / 5120 | 28 / 3584 | 16 / 2048 | The V boasts 80 CU units, but these contain only 64 FP32 units (not 128 like lower-end chips), thus roughly equivalent to 40. |
| FP32 / FP64 / Tensor Cores | 5120 / 2560 / 640 | 3584 / 112 / no | 2048 / 64 / no | Titan V is the only one with tensor cores and a huge number of FP64 cores that Titan X simply cannot match; it also has full-speed FP16 support. |
| Speed (Min-Turbo) | 1.2GHz (135-1455) | 1.531GHz (139-1910) | 1.126GHz (135-1215) | Slightly lower clocked than the X, it makes up for it with sheer CU count. |
| Power (TDP) | 300W | 250W (125-300) | 180W (120-225) | TDP increases by 50W, but that is not unexpected considering the additional units. |
| ROP / TMU | 96 / 320 | 96 / 224 | 64 / 128 | Not a “gaming card”, but while ROPs stay the same the number of TMUs has increased – likely required for compute tasks using textures. |
| Global Memory | 12GB HBM2 850MHz 3072-bit | 12GB GDDR5X 10Gbps 384-bit | 4GB GDDR5 7Gbps 256-bit | Memory size stays the same at 12GB but now uses on-package HBM2 for much higher bandwidth. |
| Memory Bandwidth (GB/s) | 652 | 512 | 224 | In addition to the bandwidth increase, latencies are also meant to have decreased by a good amount. |
| L2 Cache | 4.5MB | 3MB | 2MB | L2 cache has gone up by about 50% to feed all the cores. |
| FP64/double ratio | 1/2 | 1/32 | 1/32 | For FP64 workloads the V has a huge advantage, as consumer cards and the previous Titan X have far fewer FP64 units. |
| FP16/half ratio | 2x | 1/64 | n/a | The V has an even bigger advantage here, with 128x the throughput for FP16 tasks like DL and AI. |
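As a sanity check on the table above, peak theoretical throughput follows from cores × clock × 2 FLOPs per FMA (using Titan V’s boost clock listed above):

\[
\begin{aligned}
\text{FP32}_{peak} &\approx 2 \times 5120 \times 1.455\,\text{GHz} \approx 14.9\ \text{TFLOPS} \\
\text{FP64}_{peak} &\approx 2 \times 2560 \times 1.455\,\text{GHz} \approx 7.5\ \text{TFLOPS} \quad (1/2\ \text{rate}) \\
\text{FP16}_{peak} &\approx 2 \times \text{FP32}_{peak} \approx 29.8\ \text{TFLOPS} \quad (2\text{x rate})
\end{aligned}
\]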

Processing Performance

We are testing both CUDA native as well as OpenCL performance using the latest SDK / libraries / drivers.

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 10 x64, latest nVidia drivers 398.36, CUDA 9.2, OpenCL 1.2. Turbo / Boost was enabled on all configurations.

| Processing Benchmarks | nVidia Titan V (CUDA/OpenCL) | nVidia Titan X (CUDA/OpenCL) | nVidia GTX 980 (CUDA/OpenCL) | Comments |
|---|---|---|---|---|
| GPGPU Arithmetic Benchmark Mandel FP32/Single (Mpix/s) | 22,400 [+25%] / 20,000 | 17,870 / 16,000 | 7,000 / 6,100 | Right off the bat the V is just 25% faster than the X; some optimisations may be required. |
| GPGPU Arithmetic Benchmark Mandel FP16/Half (Mpix/s) | 33,300 [135x] / n/a | 245 / n/a | n/a | For FP16 workloads the V shows its power: it is an astonishing 135 times (times, not %) faster than the X. |
| GPGPU Arithmetic Benchmark Mandel FP64/Double (Mpix/s) | 11,000 [+16.7x] / 11,000 | 661 / 672 | 259 / 265 | For FP64 precision workloads the V shines again: it is almost 17 times faster than the X. |
| GPGPU Arithmetic Benchmark Mandel FP128/Quad (Mpix/s) | 458 [+17.7x] / 455 | 25 / 24 | 10.8 / 10.7 | With emulated FP128 precision the V is again almost 18 times faster. |

As expected, FP64 and FP16 performance is much improved on Titan V, with FP64 over 16 times faster than the X; FP16 performance is over 50% faster than FP32 performance, making it almost 2x faster than even Titan X’s FP32 rate. For workloads that need it, the performance of Titan V is stellar.
| Processing Benchmarks | nVidia Titan V (CUDA/OpenCL) | nVidia Titan X (CUDA/OpenCL) | nVidia GTX 980 (CUDA/OpenCL) | Comments |
|---|---|---|---|---|
| GPGPU Crypto Benchmark Crypto AES-256 (GB/s) | 71 [+79%] / 87 | 40 / 38 | 16 / 16 | Titan V is almost 80% faster than the X here, a significant improvement. |
| GPGPU Crypto Benchmark Crypto AES-128 (GB/s) | 91 [+75%] / 116 | 52 / 51 | 23 / 21 | Not a lot changes here, with the V still 75% faster than the X. |
| GPGPU Crypto Benchmark Crypto SHA2-256 (GB/s) | 253 [+89%] / 252 | 134 / 142 | 58 / 59 | In this integer workload, Titan V is almost 2x faster than the X. |
| GPGPU Crypto Benchmark Crypto SHA1 (GB/s) | 130 [+21%] / 134 | 107 / 114 | 50 / 54 | SHA1 is mysteriously slower than SHA256, and here the V is just 21% faster. |
| GPGPU Crypto Benchmark Crypto SHA2-512 (GB/s) | 173 [+2.4x] / 176 | 72 / 42 | 32 / 24 | With a 64-bit integer workload Titan V shines again – it is almost 2.5x faster than the X! |

Historically, nVidia cards have not been tuned for integer workloads, but Titan V is almost 2x faster in 32-bit hashing and almost 2.5x faster in 64-bit hashing than the older X. For algorithms that use integer computation this can be quite significant.
| Processing Benchmarks | nVidia Titan V (CUDA/OpenCL) | nVidia Titan X (CUDA/OpenCL) | nVidia GTX 980 (CUDA/OpenCL) | Comments |
|---|---|---|---|---|
| GPGPU Finance Benchmark Black-Scholes float/FP32 (MOPT/s) | 18,460 [+61%] / 18,870 | 11,480 / 11,470 | 5,280 / 5,280 | Titan V manages to be 60% faster in this FP32 financial workload. |
| GPGPU Finance Benchmark Black-Scholes double/FP64 (MOPT/s) | 8,400 [+6.1x] / 9,200 | 1,370 / 1,300 | 547 / 511 | Switching to FP64 code, the V is over 6x faster than the X. |
| GPGPU Finance Benchmark Binomial float/FP32 (kOPT/s) | 4,180 [+81%] / 4,190 | 2,240 / 2,240 | 1,200 / 1,140 | Binomial uses thread-shared data and thus stresses the SMX’s memory system, but the V is 80% faster than the X. |
| GPGPU Finance Benchmark Binomial double/FP64 (kOPT/s) | 2,000 [+15.5x] / 2,000 | 129 / 133 | 51 / 51 | With FP64 code the V is much faster – over 15x faster! |
| GPGPU Finance Benchmark Monte-Carlo float/FP32 (kOPT/s) | 12,550 [+2.35x] / 12,610 | 5,350 / 5,150 | 2,140 / 2,000 | Monte-Carlo also uses thread-shared data, but read-only, reducing modify pressure – here the V is over 2x faster than the X, and that is FP32 code! |
| GPGPU Finance Benchmark Monte-Carlo double/FP64 (kOPT/s) | 4,440 [+15.1x] / 4,100 | 294 / 267 | 118 / 106 | Switching to FP64 the V is again over 15x faster! |

For financial workloads Titan V is significantly faster: almost twice as fast as Titan X on FP32, but over 15x faster on FP64 workloads. If time is money, then this can be money well spent!
| Processing Benchmarks | nVidia Titan V (CUDA/OpenCL) | nVidia Titan X (CUDA/OpenCL) | nVidia GTX 980 (CUDA/OpenCL) | Comments |
|---|---|---|---|---|
| GPGPU Science Benchmark SGEMM (GFLOPS) float/FP32 | 9,860 [+57%] / 10,350 | 6,280 / 6,600 | 2,550 / 2,550 | Without using the new “tensor cores”, Titan V is about 60% faster than the X. |
| GPGPU Science Benchmark DGEMM (GFLOPS) double/FP64 | 3,830 [+11.4x] / 3,920 | 335 / 332 | 130 / 129 | With FP64 precision the V crushes the X again: it is over 11x faster. |
| GPGPU Science Benchmark SFFT (GFLOPS) float/FP32 | 605 [+2.5x] / 391 | 242 / 227 | 148 / 136 | FFT allows the V to do even better – no doubt due to HBM2 memory. |
| GPGPU Science Benchmark DFFT (GFLOPS) double/FP64 | 280 [+35%] / 245 | 207 / 191 | 89 / 82 | We may need some optimisations here; otherwise the V is just 35% faster. |
| GPGPU Science Benchmark SNBODY (GFLOPS) float/FP32 | 6,390 [+15%] / 4,630 | 5,600 / 4,870 | 2,100 / 2,000 | N-Body simulation also needs some optimisations, as the V is just 15% faster. |
| GPGPU Science Benchmark DNBODY (GFLOPS) double/FP64 | 4,270 [+15.5x] / 4,200 | 275 / 275 | 82 / 81 | With FP64 precision the V again crushes the X – it is over 15x faster. |

The scientific scores are a bit more mixed – GEMM will require new code paths to take advantage of the “tensor cores”, and some optimisations may be required – otherwise FP64 code simply flies on Titan V.
| Processing Benchmarks | nVidia Titan V (CUDA/OpenCL) | nVidia Titan X (CUDA/OpenCL) | nVidia GTX 980 (CUDA/OpenCL) | Comments |
|---|---|---|---|---|
| GPGPU Image Processing Blur (3×3) Filter single/FP32 (MPix/s) | 26,790 [+50%] / 26,660 | 17,860 / 13,680 | 7,310 / 5,530 | In this 3×3 convolution algorithm Titan V is 50% faster than the X. Convolution is also used in neural nets (CNN), thus performance here counts. |
| GPGPU Image Processing Blur (3×3) Filter half/FP16 (MPix/s) | 29,200 [+18.6x] | 1,570 | n/a | With FP16 precision Titan V shines: it is over 18x faster than the X and ~9% faster than its own FP32 rate. |
| GPGPU Image Processing Sharpen (5×5) Filter single/FP32 (MPix/s) | 9,295 [+94%] / 6,750 | 4,800 / 3,460 | 1,870 / 1,380 | The same algorithm but with more shared data allows the V to be almost 2x faster than the X. |
| GPGPU Image Processing Sharpen (5×5) Filter half/FP16 (MPix/s) | 14,900 [+24.4x] | 609 | n/a | With FP16 Titan V is almost 25x faster than the X and also 60% faster than FP32. |
| GPGPU Image Processing Motion-Blur (7×7) Filter single/FP32 (MPix/s) | 9,428 [+2x] / 7,260 | 4,830 / 3,620 | 1,910 / 1,440 | Again the same algorithm but with even more shared data; the V is 2x faster than the X. |
| GPGPU Image Processing Motion-Blur (7×7) Filter half/FP16 (MPix/s) | 14,790 [+45x] | 325 | n/a | With FP16 the V is now 45x faster than the X, showing the usefulness of FP16 support. |
| GPGPU Image Processing Edge Detection (2*5×5) Sobel Filter single/FP32 (MPix/s) | 9,079 [+1.92x] / 7,380 | 4,740 / 3,450 | 1,860 / 1,370 | Still convolution, but with 2 filters – Titan V is almost 2x faster again. |
| GPGPU Image Processing Edge Detection (2*5×5) Sobel Filter half/FP16 (MPix/s) | 13,740 [+44x] | 309 | n/a | Just as we’ve seen above, the V is an astonishing 44x faster than the X, and also much faster than its own FP32 code-path. |
| GPGPU Image Processing Noise Removal (5×5) Median Filter single/FP32 (MPix/s) | 111 [+3x] / 66 | 36 / 55 | 20 / 25 | A different algorithm, but here the V is even faster: 3x faster than the X! |
| GPGPU Image Processing Noise Removal (5×5) Median Filter half/FP16 (MPix/s) | 206 [+2.89x] | 71 | n/a | With FP16 the V is “only” 3x faster than the X, but also almost 2x faster than the FP32 code-path – again a big gain for FP16 processing. |
| GPGPU Image Processing Oil Painting Quantise Filter single/FP32 (MPix/s) | 157 [+10x] / 24 | 15 / 15 | 12 / 11 | Without major processing, this filter flies on the V – it is 10x faster than the X. |
| GPGPU Image Processing Oil Painting Quantise Filter half/FP16 (MPix/s) | 215 [+4x] | 50 | n/a | FP16 precision is “just” 4x faster than the X, but it is also ~40% faster than FP32. |
| GPGPU Image Processing Diffusion Randomise (XorShift) Filter single/FP32 (MPix/s) | 24,370 [+25%] / 22,780 | 19,480 / 14,000 | 7,600 / 6,640 | This algorithm is 64-bit integer heavy, and here Titan V is 25% faster than the X. |
| GPGPU Image Processing Diffusion Randomise (XorShift) Filter half/FP16 (MPix/s) | 24,180 [+4x] | 6,090 | n/a | FP16 does not help a lot here, but still the V is 4x faster than the X. |
| GPGPU Image Processing Marbling Perlin Noise 2D Filter single/FP32 (MPix/s) | 846 [+3x] / 874 | 288 / 635 | 210 / 308 | One of the most complex and largest filters; Titan V does very well here, it is almost 3x faster than the X. |
| GPGPU Image Processing Marbling Perlin Noise 2D Filter half/FP16 (MPix/s) | 1,712 [+3.7x] | 461 | n/a | Switching to FP16, the V is almost 4x faster than the X and over 2x faster than FP32 code. |

For image processing Titan V brings big performance increases, from 50% to 4x faster than Titan X – a big upgrade. If you are willing to drop to FP16 precision, then it is an extra 50% to 2x faster again – while naturally FP16 is not really usable on the X. With potentially 8x better performance, Titan V powers through image processing tasks.
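For reference, the kind of packed-FP16 code path this implies can be sketched as below (a simplified scale-and-bias kernel, not Sandra’s actual filters; on Volta half2 math runs at 2x the FP32 rate, while on Pascal Titan X it is crippled to 1/64):

```cpp
#include <cuda_fp16.h>

// Process two FP16 pixels per lane: out = in * gain + bias.
// Requires sm_53+ for native half2 arithmetic.
__global__ void scale_bias_half2(const __half2 *in, __half2 *out,
                                 int n2, float gain, float bias) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n2) {
        __half2 g = __float2half2_rn(gain);   // broadcast gain into both halves
        __half2 b = __float2half2_rn(bias);   // broadcast bias into both halves
        out[i] = __hfma2(in[i], g, b);        // packed fused multiply-add (2 pixels)
    }
}
```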

Memory Performance

We are testing both CUDA native as well as OpenCL performance using the latest SDK / libraries / drivers.

Results Interpretation: For bandwidth tests (MB/s, etc.) high values mean better performance, for latency tests (ns, etc.) low values mean better performance.

Environment: Windows 10 x64, latest nVidia drivers 398.36, CUDA 9.2, OpenCL 1.2. Turbo / Boost was enabled on all configurations.


| Memory Benchmarks | nVidia Titan V (CUDA/OpenCL) | nVidia Titan X (CUDA/OpenCL) | nVidia GTX 980 (CUDA/OpenCL) | Comments |
|---|---|---|---|---|
| GPGPU Memory Bandwidth Internal Memory Bandwidth (GB/s) | 536 [+51%] / 530 | 356 / 354 | 145 / 144 | HBM2 brings about 50% more raw bandwidth to feed all the extra compute cores, a significant upgrade. |
| GPGPU Memory Bandwidth Upload Bandwidth (GB/s) | 11.47 / 11.4 | 11.4 / 9 | 12.1 / 12 | Still using PCIe3 x16, there is no change in upload bandwidth. Roll on PCIe4! |
| GPGPU Memory Bandwidth Download Bandwidth (GB/s) | 12.3 / 12.3 | 12.2 / 8.9 | 11.5 / 12.2 | Again no significant difference, but we were not expecting any. |
Titan V’s HBM2 brings 50% more memory bandwidth, but as it still uses a PCIe3 x16 connection there is no change to host upload/download bandwidth – which may be a bit of a bottleneck when trying to keep all those cores fed with data. Even more streaming load/save is required, and code will need to be optimised to use all that processing power.
| Memory Benchmarks | nVidia Titan V (CUDA/OpenCL) | nVidia Titan X (CUDA/OpenCL) | nVidia GTX 980 (CUDA/OpenCL) | Comments |
|---|---|---|---|---|
| GPGPU Memory Latency Global (In-Page Random Access) Latency (ns) | 180 [-10%] / 187 | 201 / 230 | 230 | From the start we see global access latency reduced by 10%; not a lot, but it will help. |
| GPGPU Memory Latency Global (Full Range Random Access) Latency (ns) | 311 [+9%] / 317 | 286 / 311 | 306 | Full-range random accesses do seem to be 9% slower, which may be due to the new architecture. |
| GPGPU Memory Latency Global (Sequential Access) Latency (ns) | 53 [-40%] / 57 | 89 / 121 | 97 | However, sequential access latency has dropped a huge 40%, likely due to better prefetchers on the Titan V. |
| GPGPU Memory Latency Constant Memory (In-Page Random Access) Latency (ns) | 75 [-36%] / 76 | 117 / 174 | 126 | Constant memory latencies also seem to have dropped by almost 40%, a great result. |
| GPGPU Memory Latency Shared Memory (In-Page Random Access) Latency (ns) | 18 / 85 | 18 / 53 | 21 | No significant change in shared memory latencies. |
| GPGPU Memory Latency Texture (In-Page Random Access) Latency (ns) | 212 [+9%] / 279 | 195 / 196 | 208 | Texture access latencies seem to have increased by 9%. |
| GPGPU Memory Latency Texture (Full Range Random Access) Latency (ns) | 344 [+22%] / 313 | 282 / 278 | 308 | As we’ve seen with global memory, latencies here increase by about 20%. |
| GPGPU Memory Latency Texture (Sequential Access) Latency (ns) | 88 / 163 | 87 / 123 | 102 | With sequential access there is no appreciable delta in latencies. |
HBM2 does seem to increase out-of-page random-access latencies slightly (by about 10-20%), but for sequential accesses Titan V performs a lot better than the X, with 20-40% lower latencies, likely due to the new architecture. Thus code using coalesced memory accesses will perform faster, while code using random access patterns over large data sets will see slightly higher latencies.
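The difference between the two access patterns can be illustrated with a pair of trivial kernels (illustrative only – the strided index is a stand-in for any scattered pattern):

```cpp
// Coalesced: threads in a warp read consecutive addresses, so each warp is
// served by a few wide memory transactions and the prefetchers can engage.
__global__ void copy_coalesced(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];               // warp reads 128 contiguous bytes
}

// Strided/random: a large stride forces (up to) one transaction per thread
// and exposes the higher random-access latencies measured above.
__global__ void copy_strided(const float *in, float *out, int n, int stride) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    long long j = ((long long)i * stride) % n;  // scattered pattern, defeats prefetch
    if (i < n) out[i] = in[j];
}
```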
We see L1 cache effects between 64-128kB tallying with an L1D of 96kB – 4x more than what we’ve seen on Titan X (at 16kB). The other inflexion is at 4MB – matching the 4.5MB L2 cache size – which is 50% more than what we saw on Titan X (at 3MB).
As with global memory, we see the same L1D (64kB) and L2 (4.5MB) cache effects with similar latencies. Both are significant upgrades over Titan X’s caches.

Titan V’s memory performance does not disappoint – HBM2 obviously brings a large bandwidth increase – while latency depends on access pattern: when the prefetchers can engage, latencies are much lower, but random out-of-page accesses are a bit higher, though nothing significant. We’re also limited by the PCIe3 bus for transfers, which requires judicious overlap of memory transfers and compute to keep the cores busy.
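Such overlap is typically done with CUDA streams; a minimal sketch follows, assuming pinned (cudaMallocHost) host buffers and a placeholder `process` kernel (names and chunk scheme are illustrative):

```cpp
#include <cuda_runtime.h>

__global__ void process(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;        // placeholder compute
}

// While one chunk is being processed, the next is uploading and the previous
// downloading. Truly asynchronous copies require pinned host memory.
void run_chunked(const float *hIn, float *hOut, int n, int chunk) {
    cudaStream_t s[2];
    float *dIn[2], *dOut[2];
    for (int i = 0; i < 2; ++i) {
        cudaStreamCreate(&s[i]);
        cudaMalloc(&dIn[i],  chunk * sizeof(float));
        cudaMalloc(&dOut[i], chunk * sizeof(float));
    }
    for (int off = 0, c = 0; off < n; off += chunk, ++c) {
        int i = c & 1;                       // ping-pong between the two streams
        cudaMemcpyAsync(dIn[i], hIn + off, chunk * sizeof(float),
                        cudaMemcpyHostToDevice, s[i]);
        process<<<(chunk + 255) / 256, 256, 0, s[i]>>>(dIn[i], dOut[i], chunk);
        cudaMemcpyAsync(hOut + off, dOut[i], chunk * sizeof(float),
                        cudaMemcpyDeviceToHost, s[i]);
    }
    cudaDeviceSynchronize();                 // wait for both streams to drain
}
```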

SiSoftware Official Ranker Scores

Final Thoughts / Conclusions

“Volta” architecture does bring good improvements in FP32 performance which we hope to see soon in consumer (Series 11?) graphics cards – as well as lower-end Titan cards.

But here (on Titan V) we have the top-end chip with full-power FP64 and FP16 units, more akin to a Tesla, which simply powers through any and all algorithms you can throw at it. This is really the “Titan” you were looking for, and upgrading from the previous Titan X (Pascal) is a huge upgrade – admittedly for quite a bit more money.

If you have workloads that require double/FP64 precision – Titan V is 15-16 times faster than Titan X – thus great value for money. If code can make do with FP16 precision then you can gain up to 2x extra performance again – as well as save storage for large data-sets – while Titan X cannot cut it here, running at 1/64 rate.

We have not yet shown tensor core performance, which is an additional reason for choosing such a card – if you have code that can make use of them you can gain an extra 16x performance, which really puts Titan V head and shoulders above the Titan X.

All in all, Titan V is a compelling upgrade if you need more power than Titan X – especially if you are (or are thinking of) using multiple cards: one Titan V can replace 4 or more Titan X cards on FP64 or FP16 workloads, and that is before you make any optimisations. Obviously you are still “stuck” with 12GB memory and the PCIe bus for transfers, but with judicious optimisations this should not impact performance significantly.

nVidia Titan X: Pascal GPGPU Performance in CUDA and OpenCL

What is “Titan X (Pascal)”?

It is the current high-end “pro-sumer” card from nVidia using the current-generation “Pascal” architecture – equivalent to the Series 10 cards. It is based on the 2nd-from-the-top 102 chipset (not the top-end 100), thus it does not feature the full-speed FP64/FP16 performance that is generally reserved for the “Quadro/Tesla” professional range of cards. It does however come with more memory to fit larger datasets and is engineered for 24/7 operation.

Pricing has increased a bit from the previous generation X/XP, but that is a general trend today across all manufacturers.

See these other articles on Titan performance:

Hardware Specifications

We are comparing the top-of-the-range Titan X with previous generation cards and competing architectures with a view to upgrading to a mid-range high performance design.

| GPGPU Specifications | nVidia Titan X (P) | nVidia GTX 980 (M2) | AMD Vega 56 | AMD Fury | Comments |
|---|---|---|---|---|---|
| Arch / Chipset | Pascal GP102 (6.1) | Maxwell 2 GM204 (5.2) | Vega 10 | Fiji | The X uses the current Pascal architecture that also powers the current Series 10 consumer cards. |
| Cores (CU) / Threads (SP) | 28 / 3584 | 16 / 2048 | 56 / 3584 | 64 / 4096 | We’ve got 28 CU/SMX here, down from 32 on GP100/Tesla, but that should still be sufficient to power through tasks. |
| FP32 / FP64 / Tensor Cores | 3584 / 112 / no | 2048 / 64 / no | 3584 / 448 / no | 4096 / 512 / no | Only 112 FP64 units – a lot less than the competition from AMD; this is a card geared for FP32 workloads. |
| Speed (Min-Turbo) | 1.531GHz (139-1910) | 1.126GHz (135-1215) | 1.64GHz | 1GHz | Higher clocked than the previous generation and comparable with the competition. |
| Power (TDP) | 250W (125-300) | 180W (120-225) | 200W | 150W | TDP has also increased to 250W, but again that is in line with top-end cards that are pushing over 200W. |
| ROP / TMU | 96 / 224 | 64 / 128 | 64 / 224 | 64 / 256 | As it may also be used as a top-end graphics card, it has a good number of ROPs (50% more than the competition) and a similar number of TMUs. |
| Global Memory | 12GB GDDR5X 10Gbps 384-bit | 4GB GDDR5 7Gbps 256-bit | 8GB HBM2 2Gbps 2048-bit | 4GB HBM 1Gbps 4096-bit | Titan X comes with a huge 12GB of current GDDR5X memory, while the competition has switched to HBM/HBM2 for top-end cards. |
| Memory Bandwidth (GB/s) | 512 | 224 | 483 | 512 | Due to high-speed GDDR5X the X has plenty of memory bandwidth, even higher than the HBM2 competition. |
| L2 Cache | 3MB | 2MB | | | L2 cache has increased by 50% over the previous arch to keep all cores fed. |
| FP64/double ratio | 1/32 | 1/32 | 1/8 | 1/8 | The X is not really meant for FP64 workloads, as it uses the same 1:32 ratio as normal consumer cards. |
| FP16/half ratio | 1/64 | n/a | 1/1 | 1/1 | With a 1:64 ratio, FP16 is not really usable on Titan X; it can only really be used for compatibility testing. |

Processing Performance

We are testing both CUDA native as well as OpenCL performance using the latest SDK / libraries / drivers from both nVidia and competition.

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 10 x64, latest nVidia drivers 398.36, CUDA 9.2, OpenCL 1.2. Turbo / Boost was enabled on all configurations.

| Processing Benchmarks | nVidia Titan X (CUDA/OpenCL) | nVidia GTX 980 (CUDA/OpenCL) | AMD Vega 56 (OpenCL) | AMD Fury (OpenCL) | Comments |
|---|---|---|---|---|---|
| GPGPU Arithmetic Benchmark Mandel FP32/Single (Mpix/s) | 17,870 [+37%] / 16,000 | 7,000 / 6,100 | 13,000 | 8,720 | Titan X makes a good start, beating the Vega by almost 40%. |
| GPGPU Arithmetic Benchmark Mandel FP16/Half (Mpix/s) | 245 [-98%] / n/a | n/a | 13,130 | 7,890 | FP16 is so slow that it is unusable – just for testing. |
| GPGPU Arithmetic Benchmark Mandel FP64/Double (Mpix/s) | 661 [-47%] / 672 | 259 / 265 | 1,250 | 901 | FP64 is also quite slow, though a lot faster than on the GTX 980. |
| GPGPU Arithmetic Benchmark Mandel FP128/Quad (Mpix/s) | 25 [-67%] / 24 | 10.8 / 10.7 | 77.3 | 55 | Emulated FP128 precision depends entirely on FP64 performance and thus is… slow. |

With FP32 “normal” workloads Titan X is quite fast, ~40% faster than Vega and about 2.5x faster than the older GTX 980 – quite an improvement. But FP16 workloads need not apply – you are better off with FP32 – and FP64 is about 1/2 the performance of a Vega, slower even than a Fiji. As long as all workloads are FP32 there should be no problems.
| Processing Benchmarks | nVidia Titan X (CUDA/OpenCL) | nVidia GTX 980 (CUDA/OpenCL) | AMD Vega 56 (OpenCL) | AMD Fury (OpenCL) | Comments |
|---|---|---|---|---|---|
| GPGPU Crypto Benchmark Crypto AES-256 (GB/s) | 40 [-38%] / 38 | 16 / 16 | 65 | 46 | Titan X is a lot faster than the previous generation, but still ~40% slower than a Vega. |
| GPGPU Crypto Benchmark Crypto AES-128 (GB/s) | 52 [-38%] / 51 | 23 / 21 | 84 | 60 | Nothing changes here; the X is still about 40% slower than a Vega. |
| GPGPU Crypto Benchmark Crypto SHA2-256 (GB/s) | 134 [+4%] / 142 | 58 / 59 | 129 | 82 | In this integer workload, somehow Titan X manages to beat the Vega by 4%! |
| GPGPU Crypto Benchmark Crypto SHA1 (GB/s) | 107 [-34%] / 114 | 50 / 54 | 163 | 124 | SHA1 is mysteriously slower, thus the X is ~35% slower than a Vega. |
| GPGPU Crypto Benchmark Crypto SHA2-512 (GB/s) | 72 [+2.3x] / 42 | 32 / 24 | 31 | 13.8 | With a 64-bit integer workload, Titan X is a massive 2.3x faster than a Vega. |

Historically, nVidia cards have not been tuned for integer workloads, but Titan X still manages to beat a Vega – the “gold standard” for crypto-currency hashing – on SHA2-256 and especially on 64-bit integer SHA2-512! Perhaps for the first time, a nVidia card is competitive on integer workloads and even much faster on 64-bit integer workloads.
| Processing Benchmarks | nVidia Titan X (CUDA/OpenCL) | nVidia GTX 980 (CUDA/OpenCL) | AMD Vega 56 (OpenCL) | AMD Fury (OpenCL) | Comments |
|---|---|---|---|---|---|
| GPGPU Finance Benchmark Black-Scholes float/FP32 (MOPT/s) | 11,480 [+28%] / 11,470 | 5,280 / 5,280 | 9,000 | 11,220 | In this FP32 financial workload Titan X is almost 30% faster than a Vega. |
| GPGPU Finance Benchmark Black-Scholes double/FP64 (MOPT/s) | 1,370 [-36%] / 1,300 | 547 / 511 | 1,850 | 1,290 | Switching to FP64 code, the X remains competitive and is about 35% slower. |
| GPGPU Finance Benchmark Binomial float/FP32 (kOPT/s) | 2,240 [-8%] / 2,240 | 1,200 / 1,140 | 2,440 | 1,760 | Binomial uses thread-shared data and thus stresses the SMX’s memory system; here the Vega surprisingly does better, by 8%. |
| GPGPU Finance Benchmark Binomial double/FP64 (kOPT/s) | 129 [-20%] / 133 | 51 / 51 | 161 | 115 | With FP64 code the X is only 20% slower than a Vega. |
| GPGPU Finance Benchmark Monte-Carlo float/FP32 (kOPT/s) | 5,350 [+47%] / 5,150 | 2,140 / 2,000 | 3,630 | 2,470 | Monte-Carlo also uses thread-shared data, but read-only, reducing modify pressure – here Titan X is almost 50% faster! |
| GPGPU Finance Benchmark Monte-Carlo double/FP64 (kOPT/s) | 294 [-34%] / 267 | 118 / 106 | 385 | 332 | Switching to FP64 the X is again 34% slower than a Vega. |

For financial FP32 workloads, Titan X generally beats the Vega by a good amount or at least ties with it; with FP64 precision it is about 1/2 the speed, which is to be expected. As long as you have FP32 workloads this should not be a problem.
| Processing Benchmarks | nVidia Titan X (CUDA/OpenCL) | nVidia GTX 980 (CUDA/OpenCL) | AMD Vega 56 (OpenCL) | AMD Fury (OpenCL) | Comments |
|---|---|---|---|---|---|
| GPGPU Science Benchmark SGEMM (GFLOPS) float/FP32 | 6,280 [+19%] / 6,600 | 2,550 / 2,550 | 5,260 | 3,630 | Using 32-bit precision, Titan X beats the Vega by 20%. |
| GPGPU Science Benchmark DGEMM (GFLOPS) double/FP64 | 335 [-40%] / 332 | 130 / 129 | 555 | 381 | With FP64 precision, unsurprisingly the X is 40% slower. |
| GPGPU Science Benchmark SFFT (GFLOPS) float/FP32 | 242 [-20%] / 227 | 148 / 136 | 306 | 348 | FFT does better with HBM memory, and here Titan X is 20% slower than a Vega. |
| GPGPU Science Benchmark DFFT (GFLOPS) double/FP64 | 207 / 191 | 89 / 82 | 139 | 116 | Surprisingly the X does very well here and manages to beat all cards by almost 50%! |
| GPGPU Science Benchmark SNBODY (GFLOPS) float/FP32 | 5,600 [+20%] / 4,870 | 2,100 / 2,000 | 4,670 | 3,080 | Titan X does well in this algorithm, beating the Vega by 20%. |
| GPGPU Science Benchmark DNBODY (GFLOPS) double/FP64 | 275 [-20%] / 275 | 82 / 81 | 343 | 303 | With FP64 precision, the X is again 20% slower. |

The scientific scores are similar to the financial ones, but the gain/loss is about 20% not 40% – in FP32 workloads Titan X is 20% faster, while in FP64 it is about 20% slower than a Vega – a closer result than expected.
| Processing Benchmarks | nVidia Titan X (CUDA/OpenCL) | nVidia GTX 980 (CUDA/OpenCL) | AMD Vega 56 (OpenCL) | AMD Fury (OpenCL) | Comments |
|---|---|---|---|---|---|
| GPGPU Image Processing Blur (3×3) Filter single/FP32 (MPix/s) | 14,550 [-60%] / 10,880 | 7,310 / 5,530 | 36,000 | 28,000 | In this 3×3 convolution algorithm, somehow Titan X is over 50% slower than a Vega and even a Fury. |
| GPGPU Image Processing Sharpen (5×5) Filter single/FP32 (MPix/s) | 3,840 [-11%] / 2,750 | 1,870 / 1,380 | 4,300 | 3,150 | The same algorithm with more shared data reduces the gap to 10%, but it is still a loss. |
| GPGPU Image Processing Motion Blur (7×7) Filter single/FP32 (MPix/s) | 3,920 [-10%] / 2,930 | 1,910 / 1,440 | 4,350 | 3,200 | With even more data the gap remains at 10%. |
| GPGPU Image Processing Edge Detection (2*5×5) Sobel Filter single/FP32 (MPix/s) | 3,740 [-11%] / 2,760 | 1,860 / 1,370 | 4,210 | 3,130 | Still convolution but with 2 filters – Titan X is 10% slower again. |
| GPGPU Image Processing Noise Removal (5×5) Median Filter single/FP32 (MPix/s) | 35.7 / 55 [+52%] | 20.6 / 25.4 | 36.3 | 30.8 | A different algorithm allows the X (in OpenCL) to finally beat the Vega by 50%. |
| GPGPU Image Processing Oil Painting Quantise Filter single/FP32 (MPix/s) | 15.6 [-60%] / 15.3 | 12.2 / 11.4 | 38.7 | 14.3 | Without major processing, this filter does not like the X much: it runs at less than half the speed of the Vega. |
| GPGPU Image Processing Diffusion Randomise (XorShift) Filter single/FP32 (MPix/s) | 16,480 [-57%] / 14,000 | 7,600 / 6,640 | 38,730 | 28,500 | This algorithm is 64-bit integer heavy, but again Titan X is less than 1/2 the speed of Vega. |
| GPGPU Image Processing Marbling Perlin Noise 2D Filter single/FP32 (MPix/s) | 290 / 6,350 [+13%] | 210 / 3,080 | 5,600 | 4,410 | One of the most complex and largest filters; Titan X (in OpenCL) finally beats the Vega by over 10%. |

For image processing using FP32 precision, Titan X does not do as well as expected – either in CUDA or OpenCL – with the Vega beating it by a good margin on most filters, a pretty surprising result. Perhaps more optimisations are needed on nVidia hardware. We obviously did not test FP16 performance at all, as that would have been far slower.

Memory Performance

We are testing both CUDA native as well as OpenCL performance using the latest SDK / libraries / drivers from nVidia and competition.

Results Interpretation: For bandwidth tests (MB/s, etc.) high values mean better performance, for latency tests (ns, etc.) low values mean better performance.

Environment: Windows 10 x64, latest nVidia drivers 398.36, CUDA 9.2, OpenCL 1.2. Turbo / Boost was enabled on all configurations.


| Memory Benchmarks | nVidia Titan X (CUDA/OpenCL) | nVidia GTX 980 (CUDA/OpenCL) | AMD Vega 56 (OpenCL) | AMD Fury (OpenCL) | Comments |
|---|---|---|---|---|---|
| GPGPU Memory Bandwidth Internal Memory Bandwidth (GB/s) | 356 [+13%] / 354 | 145 / 144 | 316 | 387 | Titan X brings more bandwidth than a Vega (+13%), but the old Fury takes the crown. |
| GPGPU Memory Bandwidth Upload Bandwidth (GB/s) | 11.4 / 9 | 12.1 / 12 | 12.1 | 11 | All cards use PCIe3 x16, thus no appreciable delta. |
| GPGPU Memory Bandwidth Download Bandwidth (GB/s) | 12.2 / 8.9 | 11.5 / 12.2 | 10 | 9.8 | Again no significant difference, but we were not expecting any. |

Titan X uses current GDDR5X, but at a high data rate allowing it to bring more bandwidth than some HBM2 designs – a pretty impressive feat. Naturally, high-end cards using HBM2 should have even higher bandwidth.
| Memory Benchmarks | nVidia Titan X (CUDA/OpenCL) | nVidia GTX 980 (CUDA/OpenCL) | AMD Vega 56 (OpenCL) | AMD Fury (OpenCL) | Comments |
|---|---|---|---|---|---|
| GPGPU Memory Latency Global (In-Page Random Access) Latency (ns) | 201 / 230 | 230 | 273 | 343 | Compared to the previous generation, Titan X has better latency due to the higher data rate. |
| GPGPU Memory Latency Global (Full Range Random Access) Latency (ns) | 286 / 311 | 306 | 399 | 525 | Similarly, even full random accesses are faster. |
| GPGPU Memory Latency Global (Sequential Access) Latency (ns) | 89 / 121 | 97 | 129 | 216 | Sequential accesses have similarly low latencies, but nothing special. |
| GPGPU Memory Latency Constant Memory (In-Page Random Access) Latency (ns) | 117 / 174 | 126 | 269 | 353 | Constant memory latencies have also dropped. |
| GPGPU Memory Latency Shared Memory (In-Page Random Access) Latency (ns) | 18 / 53 | 21 | 49 | 112 | Even shared memory latencies have dropped, likely due to higher core clocks. |
| GPGPU Memory Latency Texture (In-Page Random Access) Latency (ns) | 195 / 196 | 208 | 121 | | Texture access latencies have come down as well. |
| GPGPU Memory Latency Texture (Full Range Random Access) Latency (ns) | 282 / 278 | 308 | | | And even full-range latencies have decreased. |
| GPGPU Memory Latency Texture (Sequential Access) Latency (ns) | 87 / 123 | 102 | | | With sequential access there is no appreciable delta in latencies. |
We’re only comparing CUDA latencies here (as OpenCL is quite variable) – compared to the previous generation (GTX 980) all latencies are down, either due to the higher memory data rate or core clock increases. Nothing spectacular, but still good progress, and everything helps.
We see L1 cache effects until 16kB (same as the previous arch) and between 2-4MB, tallying with the 3MB L2 cache. While fast, perhaps they could be a bit bigger.
As with global memory, we see the same L1D and L2 cache effects with similar latencies. All in all good performance, but we could do with bigger caches.

Titan X’s memory performance is what you’d expect from higher clocked GDDR5X memory – it is competitive even with the latest HBM2 powered competition – both bandwidth and latency wise. There are no major surprises here and everything works nicely.

SiSoftware Official Ranker Scores

Final Thoughts / Conclusions

Titan X, based on the current “Pascal” architecture, performs very well in FP32 workloads – it is much faster than the previous generation for a modest price increase and is competitive with AMD’s Vega offerings. But it is likely due to be replaced soon, as the next-generation “Volta” architecture is already out at the high end (Titan V) and likely to filter down the stack to both consumer (Series 11?) cards and “pro-sumer” Titan cards cheaper than the Titan V.

For FP64 workloads it is perhaps best to choose an older Quadro/Tesla card with more FP64 units, as performance here is naturally much lower. FP16 performance is also restricted and pretty much unusable – good only for compatibility testing should you hope to upgrade to a full-speed FP16 card in the future. For both these workloads the high-end Titan V is the card you probably want – but at a much higher price.

Still, for the money Titan X has its place: the most common FP32 workloads (financial, scientific, high-precision image processing, etc.) that require neither FP64 nor FP16 optimisations perform very well, and for those this card is all you need.

SiSoftware Sandra Titanium/2018 RTMa Update

We are pleased to release RTMa (RTM update A – version 28.18) update for Sandra Titanium (2018) with the following updates:

Sandra Titanium (2018) Press Release

  • CPU: Updated tools allowed for new AVX512 code-path (FFT)
  • Hardware support updates and fixes
  • Updates to the new Overall Benchmarks: CPU, GPGPU, Disk, Memory
  • Firmware logging and upload for all benchmarks including Ranker support
    • Microcode version for CPUs (to determine Spectre/Meltdown support)
    • Firmware version for Memory Controllers
    • Firmware version for Disk drives (HDD or SSD)
    • BIOS version for mainboards
    • BIOS version for Graphics Cards / GPGPUs

Note: Sandra will also display whether an updated version for microcode, firmware or BIOS is available for your device from the respective manufacturer / OEM. Community powered, this relies on users benchmarking and uploading their scores to the Ranker.

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

SiSoftware Sandra Titanium (2018) Released!

FOR IMMEDIATE RELEASE

Contact: Press Office

SiSoftware Sandra Titanium (2018) Released:
Brand-new benchmarks, hardware support

Updates: RTMa, SP1b, SP2.

Articles & Benchmarks: nVidia Titan V: Volta GPGPU performance, nVidia Titan X: Pascal GPGPU Performance.

London, UK, April 27th 2018 – We are pleased to announce the launch of SiSoftware Sandra Titanium (2018), the latest version of our award-winning utility, which includes remote analysis, benchmarking and diagnostic features for PCs, servers, mobile devices and networks.

We have added new Overall Performance benchmarks for all major components (CPU, GPGPU, Video/Graphics, Memory/Cache and Disk) and thus simplified the Overall Computer Performance benchmark.

We have added hardware support and optimisations for brand-new CPU architectures (AMD Ryzen 2, AMD Vega GPGPU, future Intel) not forgetting GPGPU architectures across the various interfaces (CUDA, OpenCL, DirectX ComputeShader, OpenGL Compute).

As SiSoftware operates a “just-in-time” release cycle, some features were introduced in Sandra 2017 service packs: in Sandra Titanium they have been updated and enhanced based on all the feedback received.

Operating System Module

Broad Operating System Support

All current versions supported: Windows 10*, 8.1, 8, 7; Server 2019*, 2016, 2012/R2 and 2008/R2

Brand-new Overall Performance Benchmarks allow easy and complete device performance analysis without individually running all the benchmarks: now you need to run just one benchmark, which aggregates the results from all the component benchmarks.

Processor Overall Index

A combined performance index of all major CPU-related benchmarks: Multi-Media, Scientific Analysis, Financial Analysis, Image Processing, Multi-Core Efficiency.

Ranker: Processor Overall Index

GPGPU Overall Index

A combined performance index of all major GPGPU-related benchmarks: Processing, Scientific Analysis, Financial Analysis, Image Processing, Memory Bandwidth, Memory Latency.

Ranker: GPGPU Overall Index

Video/Graphics Overall Index

A combined performance index of all major video benchmarks: Shader Processing, Memory Bandwidth, Media Transcoding.

Ranker: Video/Graphics Overall Index

Memory/Cache Overall Index

A combined performance index of all major memory and cache-related benchmarks: Memory Bandwidth, Cache Bandwidth, Memory Latency.

Ranker: Memory/Cache Overall Index

Disk Overall Index

A combined performance index of all major disk-related benchmarks: File System Bandwidth, File System I/O, Physical Disk Bandwidth.

Ranker: Disk Overall Index

As a result, the Overall Compute Benchmark was simplified to be an aggregate of the component overall benchmarks:

Overall (2018) Computer Index

A combined performance index of all component overall indexes (CPU, GPGPU, Video/Graphics, Memory/Cache and Disk)

Ranker: Overall (2018) Computer Index

Key features of Sandra Titanium

  • 4 native architectures support (x86, x64 – Windows; ARM, ARM64, x86, x64 – Android)
  • Huge official hardware support through technology partners (AMD/ATI, nVidia, Intel).
  • 4 native (GP)GPU/APU platforms support (OpenCL 2.0+, CUDA 9.0+, DirectX Compute Shader 10+, OpenGL Compute 4.3+, Vulkan 1.0+).
  • 4 native Graphics platforms support (DirectX 12.x, DirectX 11.x, DirectX 10.x, OpenGL 4.0+, Vulkan 1.0+).
  • 9 language versions (English, German, French, Italian, Spanish, Japanese, Chinese (Traditional, Simplified), Russian) in a single installer.
  • Enhanced Sandra Lite (Eval) version (free for personal/educational use, evaluation for other uses)

Relevant Articles

For more details, please see the following articles:

Purchasing

For more details, and to purchase the commercial versions, please click here.

Updating or Upgrading

To update your existing commercial version, please click here.

Downloading

For more details, and to download the Lite (Evaluation) version, please click here.

Reviewers and Editors

For your free review copies, please contact us.

About SiSoftware

SiSoftware, founded in 1995, is one of the leading providers of computer analysis, diagnostic and benchmarking software. The flagship product, known as “SANDRA”, was launched in 1997 and has become one of the most widely used products in its field. Many worldwide IT publications, magazines and review sites use SANDRA to analyse the performance of today’s computers. Thousands of on-line reviews of computer hardware that use SANDRA are catalogued on our website alone.

Since launch, SiSoftware has always been at the forefront of the technology arena, being among the first providers of benchmarks that show the power of emerging new technologies such as multi-core, GPGPU, OpenCL, OpenGL, DirectCompute, x64, ARM, NUMA, SMT (Hyper-Threading), SMP (multi-threading), AVX512, AVX2, AVX, FMA3, NEON, SSE4.2, SSE4, SSSE3, SSE2, SSE, Java and .NET.

SiSoftware is located in London, UK. For more information, please visit www.sisoftware.net, www.sisoftware.eu, or www.sisoftware.co.uk

AMD Ryzen2 2700X Review & Benchmarks – 2-channel DDR4 Cache & Memory Performance

What is “Ryzen2” ZEN+?

After the very successful launch of the original “Ryzen” (Zen/Zeppelin – “Summit Ridge” on 14nm), AMD has been hard at work optimising and improving the design: “Ryzen2” (code-name “Pinnacle Ridge”) is thus a 12nm die shrink that also includes APU versions – with integrated “Vega RX” graphics – as well as traditional CPU versions.

While new chipsets (400 series) will also be introduced, the CPUs do work with existing AM4 300-series chipsets (e.g. X370, B350, A320) with a BIOS/firmware update which makes them great upgrades.

Here’s what AMD says it has done for Ryzen2:

  • Process technology optimisations (12nm vs 14nm) – lower power but higher frequencies
  • Improvements for cache & memory speed & latencies (we are testing them in this article!)
  • Multi-core optimised boost (aka Turbo) algorithm – XFR2 – higher speeds

In this article we test CPU Cache and Memory performance; please see our other articles on:

Hardware Specifications

We are comparing the top-of-the-range Ryzen2 (2700X, 2600) with previous generation (1700X) and competing architectures with a view to upgrading to a mid-range high performance design.

| CPU Specifications | AMD Ryzen2 2700X (Pinnacle Ridge) | AMD Ryzen2 2600 (Pinnacle Ridge) | AMD Ryzen 1700X (Summit Ridge) | Intel i7-6700K (SkyLake) | Comments |
|---|---|---|---|---|---|
| L1D / L1I Caches | 8x 32kB 8-way / 8x 64kB 8-way | 6x 32kB 8-way / 6x 64kB 8-way | 8x 32kB 8-way / 8x 64kB 8-way | 4x 32kB 8-way / 4x 32kB 8-way | Ryzen2’s data/instruction caches are unchanged; the icache is still 2x as big as Intel’s. |
| L2 Caches | 8x 512kB 8-way | 6x 512kB 8-way | 8x 512kB 8-way | 4x 256kB 8-way | Ryzen2’s L2 cache is unchanged, but we’re told latencies have been improved. And it is 4x bigger than Intel’s! |
| L3 Caches | 2x 8MB 16-way | 2x 8MB 16-way | 2x 8MB 16-way | 8MB 16-way | Ryzen2’s L3 caches are also unchanged – but again latencies are meant to have improved. With each CCX having 8MB, even the 2600 has 2x as much cache as an i7. |
| TLB 4kB pages | 64 full-way / 1536 8-way | 64 full-way / 1536 8-way | 64 full-way / 1536 8-way | 64 8-way / 1536 6-way | No TLB changes. |
| TLB 2MB pages | 64 full-way / 1536 2-way | 64 full-way / 1536 2-way | 64 full-way / 1536 2-way | 8 full-way / 1536 6-way | No TLB changes, same as 4kB pages. |
| Memory Controller Speed (MHz) | 600-1200 | 600-1200 | 600-1200 | 1200-4000 | Ryzen’s memory controller runs at the memory clock (MCLK) base rate and thus depends on the memory installed. Intel’s UNC (uncore) runs between the min and max CPU clock, thus perhaps faster. |
| Memory Speed (MHz) Max | 2400 / 2933 | 2400 / 2933 | 2400 / 2666 | 2533 / 2400 | Ryzen2 now supports up to 2933MHz (officially), which should improve its performance quite a bit – unfortunately fast DDR4 is very expensive right now. |
| Memory Channels / Width | 2 / 128-bit | 2 / 128-bit | 2 / 128-bit | 2 / 128-bit | All have 128-bit total channel width. |
| Memory Timing (clocks) | 14-16-16-32 7-54-18-9 2T | 14-16-16-32 7-54-18-9 2T | 14-16-16-32 7-54-18-9 2T | 16-18-18-36 5-54-21-10 2T | Memory runs at the same timings on both Ryzen2 and Ryzen1, but we shall see if measured latencies differ. |

Core Topology and Testing

As discussed in the previous article, cores on Ryzen are grouped in blocks (CCX or compute units), each with its own 8MB L3 cache, connected via a 256-bit bus running at the memory controller clock. This is better than older designs like Intel’s Core 2 Quad or Pentium D, which were effectively two CPU dies on the same package, but not as good as a unified design where all cores are part of the same unit.

When running algorithms that require data to be shared between threads – e.g. producer/consumer – scheduling those threads on the same CCX ensures lower latencies and higher bandwidth, which we shall test presently.

We have thus modified Sandra’s ‘CPU Multi-Core Efficiency Benchmark‘ to report the latencies of each producer/consumer unit combination (e.g. same core, same CCX, different CCX), as well as to provide different matching algorithms when selecting the producer/consumer units: best match (lowest latency) and worst match (highest latency), thus also allowing us to test inter-CCX bandwidth. We hope users and reviewers alike will find the new features useful!
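For reference, the core of such a producer/consumer latency test can be sketched in a few lines (a simplified illustration, not Sandra’s actual code; the CPU numbers are hypothetical and depend on how Windows enumerates logical CPUs – on Ryzen, 0/1 are SMT siblings, while crossing the CCX needs cores from different complexes):

```cpp
#include <atomic>
#include <thread>
#include <chrono>
#include <cstdio>
#include <windows.h>

std::atomic<int> flag{0};

// Ping-pong a cache line between two pinned threads: the round-trip time
// depends on which level the two logical CPUs share (L1/L2 same core,
// L3 same CCX, or the inter-CCX link).
void pingpong(int cpu, int parity, int iters) {
    SetThreadAffinityMask(GetCurrentThread(), 1ULL << cpu); // pin to one logical CPU
    for (int i = 0; i < iters; ++i) {
        while (flag.load(std::memory_order_acquire) % 2 != parity) ; // spin-wait
        flag.fetch_add(1, std::memory_order_release);                // hand over
    }
}

int main() {
    const int iters = 1000000;
    auto t0 = std::chrono::steady_clock::now();
    std::thread a(pingpong, 0, 0, iters);   // illustrative CPU choices
    std::thread b(pingpong, 1, 1, iters);
    a.join(); b.join();
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                  std::chrono::steady_clock::now() - t0).count();
    printf("%.1f ns per hop\n", (double)ns / (2.0 * iters));
}
```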

Native Performance

We are testing native arithmetic, SIMD and cryptography performance using the highest performing instruction sets (AVX2, AVX, etc.). Ryzen supports all modern instruction sets including AVX2, FMA3 and even more.

Results Interpretation: Higher rate values (GOPS, MB/s, etc.) mean better performance. Lower latencies (ns, ms, etc.) mean better performance.

Environment: Windows 10 x64, latest AMD and Intel drivers. 2MB “large pages” were enabled and in use. Turbo / Boost was enabled on all configurations.

| Native Benchmarks | Ryzen2 2700X 8C/16T (Pinnacle Ridge) | Ryzen2 2600 6C/12T (Pinnacle Ridge) | Ryzen 1700X 8C/16T (Summit Ridge) | i7-6700K 4C/8T (SkyLake) | Comments |
|---|---|---|---|---|---|
| CPU Multi-Core Benchmark Total Inter-Core Bandwidth – Best (GB/s) | 54.9 [+15%] | 46.5 | 47.8 | 39 | Ryzen2 manages 15% higher bandwidth between its cores, slightly better than the 11% clock increase alone – signalling some improvements under the hood. |
| CPU Multi-Core Benchmark Total Inter-Core Bandwidth – Worst (GB/s) | 5.89 [+2%] | 5.53 | 5.8 | 16.3 | In the worst case, pairs on Ryzen must go across CCXes – and with this link running at the same clock (1200MHz) on Ryzen2 we only manage a 2% increase in bandwidth. This is why faster memory is needed. |
| CPU Multi-Core Benchmark Inter-Unit Latency – Same Core (ns) | 13.5 [-13%] | 15.4 | 15.6 | 16.2 | Within the same core (sharing L1D/L2), Ryzen2 manages a 13% latency reduction, again better than the clock speed increase alone. |
| CPU Multi-Core Benchmark Inter-Unit Latency – Same Compute Unit (ns) | 40.1 [-7%] | 43.5 | 43.2 | 47.3 | Within the same compute unit (sharing L3), latency decreased by 7% on Ryzen2, thus the L3 seems to have improved also. |
| CPU Multi-Core Benchmark Inter-Unit Latency – Different Compute Unit (ns) | 128 [-6%] | 132 | 136 | n/a | Going inter-CCX we still see a 6% latency reduction on Ryzen2 – with the CCX link at the same speed – a welcome surprise. |

The multiple-CCX design still presents some challenges to programmers, requiring threads to be carefully scheduled – but we see a decent 6-7% reduction in L3/CCX latencies on Ryzen2, even when running at the same clock as Ryzen1.
| Native Benchmarks | Ryzen2 2700X 8C/16T (Pinnacle Ridge) | Ryzen2 2600 6C/12T (Pinnacle Ridge) | Ryzen 1700X 8C/16T (Summit Ridge) | i7-6700K 4C/8T (SkyLake) | Comments |
|---|---|---|---|---|---|
| Aggregated L1D Bandwidth (GB/s) | 862 [+18%] | 615 | 730 | 837 | Right off we see an 18% bandwidth increase – almost 2x the 11% clock increase – thus some improvements have been made to the cache system. It allows Ryzen2 to finally beat the i7 with its wide (512-bit) L1 data paths, though with 2x more caches (8 vs 4). |
| Aggregated L2 Bandwidth (GB/s) | 736 [+32%] | 542 | 556 | 329 | We see a huge 32% increase in L2 cache bandwidth – almost 3x the 11% clock increase – suggesting the L2 caches have been improved also. Ryzen2 thus has 2x the L2 bandwidth of the i7, though with 2x more caches (8 vs 4). |
| Aggregated L3 Bandwidth (GB/s) | 339 [+19%] | 398 | 284 | 238 | The bandwidth of the L3 caches has also increased by 19% (2x the clock increase), though we see the 6-core 2600 doing better (398 vs 339), likely due to fewer threads competing for the same L3 caches (12 vs 16). Ryzen2’s L3 caches are not just 2x bigger than Intel’s, they also have 2x the bandwidth. |
| Aggregated Memory (GB/s) | 30.2 [+2%] | 30.2 | 29.6 | 29.1 | With the same memory clock, Ryzen2 still manages a small 2% improvement – signalling memory controller improvements. We also see Ryzen’s memory at 2400Mt/s having better bandwidth than Intel’s at 2533. |

We see big improvements on Ryzen2 for all caches (L1D/L2/L3) of 20-30% – more than the raw clock increase (11%) – so AMD has indeed made improvements, which, to be fair, needed to be made. The memory controller is also a bit more efficient (+2%), and it can run at higher clocks than tested (2400Mt/s) – hopefully fast DDR4 memory will become more affordable.
| Native Benchmarks | Ryzen2 2700X 8C/16T (Pinnacle Ridge) | Ryzen2 2600 6C/12T (Pinnacle Ridge) | Ryzen 1700X 8C/16T (Summit Ridge) | i7-6700K 4C/8T (SkyLake) | Comments |
|---|---|---|---|---|---|
| Data In-Page Random Latency (ns) | 66.4 (4-12-31) [-6%] [0][-5][-4] | 66.4 (4-12-31) | 70.5 (4-17-35) | 20.4 (4-12-21) | In-page latency has decreased by a noticeable 6% on Ryzen2 (both 2700X and 2600) – we see a 5-clock reduction for L2 and 4 for L3, a welcome improvement. But there is still a way to go to catch Intel, which has one-third the latency. |
| Data Full Random Latency (ns) | 80.9 (4-12-32) [-8%] [0][-5][-4] | 79.4 (4-12-32) | 87.6 (4-17-36) | 63.9 (4-12-34) | Out-of-page latencies have also been reduced by 8% on Ryzen2 (same memory), and we see the same 5- and 4-clock reduction for L2 and L3 (on both 2700X and 2600 – it’s no fluke). Again welcome, but there is still a way to go to catch Intel. |
| Data Sequential Latency (ns) | 3.4 (4-6-7) [-8%] [0][-1][0] | 3.5 (4-6-7) | 3.7 (4-7-7) | 4.1 (4-12-13) | Ryzen’s prefetchers are working well with sequential access patterns, and we see an 8% latency drop for Ryzen2. |

Ryzen1’s issue was high memory latencies (in-page/full random), and Ryzen2 has reduced them all by 6-8%. While this is a good improvement, they are still pretty high compared to Intel’s, thus more work needs to be done here.
| Native Benchmarks | Ryzen2 2700X 8C/16T (Pinnacle Ridge) | Ryzen2 2600 6C/12T (Pinnacle Ridge) | Ryzen 1700X 8C/16T (Summit Ridge) | i7-6700K 4C/8T (SkyLake) | Comments |
|---|---|---|---|---|---|
| Code In-Page Random Latency (ns) | 14.2 (4-9-24) [-9%] [0][0][0] | 14.6 (4-9-24) | 15.6 (4-9-24) | 10.1 (2-10-21) | Code latencies were not a problem on Ryzen1, but we still see a welcome 9% reduction on Ryzen2 (no clock delta). |
| Code Full Random Latency (ns) | 88.6 (4-14-49) [-9%] [0][+1][+2] | 89.3 (4-14-49) | 97.4 (4-13-47) | 70.7 (2-11-46) | Out-of-page latency also sees a 9% decrease on Ryzen2, but somewhat surprisingly a 1-2 clock increase. |
| Code Sequential Latency (ns) | 7.6 (4-12-20) [-8%] [0][+1][+1] | 7.8 (4-12-20) | 8.3 (4-11-19) | 5.0 (2-4-9) | Ryzen’s prefetchers are working well with sequential access patterns, and we see an 8% reduction on Ryzen2. |

While code access latencies were not a problem on Ryzen1, they also see an 8% improvement on Ryzen2, which is welcome. Note the L1i code cache is 2x Intel’s (64kB vs 32kB).
| Native Benchmarks | Ryzen2 2700X 8C/16T (Pinnacle Ridge) | Ryzen2 2600 6C/12T (Pinnacle Ridge) | Ryzen 1700X 8C/16T (Summit Ridge) | i7-6700K 4C/8T (SkyLake) | Comments |
|---|---|---|---|---|---|
| Memory Update Transactional (MTPS) | 4.7 [+10%] | 5 | 4.28 | 33.2 (HLE) | Ryzen2 is 10% faster than Ryzen1, but naturally without HLE support it cannot match the i7. With Intel disabling HLE on all but top-end CPUs, AMD does not have much to worry about. |
| Memory Update Record Only (MTPS) | 4.6 [+11%] | 4.75 | 4.16 | 23 (HLE) | With record-only updates we still see an 11% increase. |

Ryzen2 brings nice updates – good bandwidth increases for all caches (L1D/L2/L3) and a well-needed latency reduction for data (and code) accesses. Yes, there is still work to be done to bring latencies down further – but the improvements may be just enough to stay competitive with Intel for a good while.

At the high end, ThreadRipper2 will likely benefit most, as it goes up against the many-core, AVX512-enabled SKL-X competitor, which is a lot “tougher” than the normal SKL/KBL/CFL consumer versions.

SiSoftware Official Ranker Scores

Final Thoughts / Conclusions

As with the original Ryzen, the cache and memory system performance is not the clean sweep we’ve seen in CPU testing – but Ryzen2 does bring welcome improvements in bandwidth and latency, which will hopefully improve further with firmware/BIOS (AGESA) updates.

With the potential to use faster DDR4 memory, Ryzen2 can do far better than in this test (e.g. with 2933/3200MHz memory). Unfortunately, at this time DDR4 memory – especially the high-end fast versions – is hideously expensive, which is a bit of a problem. You may be better off using less, but faster, memory with Ryzen designs.

Ryzen2 is a great update that will not disappoint upgraders and is likely to increase AMD’s market share. AMD is here to stay!

AMD Ryzen2 2700X Review & Benchmarks – CPU 8-core Performance

What is “Ryzen2” ZEN+?

After the very successful launch of the original “Ryzen” (Zen/Zeppelin – “Summit Ridge” on 14nm), AMD has been hard at work optimising and improving the design: “Ryzen2” (code-name “Pinnacle Ridge”) is thus a 12nm die shrink that also includes APU versions – with integrated “Vega RX” graphics – as well as traditional CPU versions.

While new chipsets (400 series) will also be introduced, the CPUs do work with existing AM4 300-series chipsets (e.g. X370, B350, A320) with a BIOS/firmware update which makes them great upgrades.

Here’s what AMD says it has done for Ryzen2:

  • Process technology optimisations (12nm vs 14nm) – lower power but higher frequencies
  • Improvements for cache & memory speed & latencies (we shall test that ourselves!)
  • Multi-core optimised boost (aka Turbo) algorithm – XFR2 – higher speeds

In this article we test CPU core performance; please see our other articles on:

Hardware Specifications

We are comparing the top-of-the-range Ryzen2 (2700X, 2600) with previous generation (1700X) and competing architectures with a view to upgrading to a mid-range high performance design.

| CPU Specifications | AMD Ryzen2 2700X (Pinnacle Ridge) | AMD Ryzen2 2600 (Pinnacle Ridge) | AMD Ryzen 1700X (Summit Ridge) | Intel i7-6700K (SkyLake) | Comments |
|---|---|---|---|---|---|
| Cores (CU) / Threads (SP) | 8C / 16T | 6C / 12T | 8C / 16T | 4C / 8T | Ryzen2, like its predecessor, has the most cores and threads; it will thus be down to IPC and clock speeds for performance improvements. |
| Speed (Min / Max / Turbo) | 2.2-3.7-4.2GHz (22x-37x-42x) [+9% rated, +11% turbo] | 1.55-3.4-3.9GHz (15x-34x-39x) | 2.2-3.3-3.8GHz (22x-34x-38x) | 0.8-4.0-4.2GHz (8x-40x-42x) | Ryzen2’s base clock is 9% higher while Turbo/Boost/XFR is 11% higher; we thus expect at least about a 10% improvement in CPU benchmarks. |
| Power (TDP) | 105W | 65W | 95W | 91W | Ryzen2 also increases TDP by 11% (105W vs 95W), which may require a bit more cooling, especially when overclocking. |
| L1D / L1I Caches | 8x 32kB 8-way / 8x 64kB 8-way | 6x 32kB 8-way / 6x 64kB 8-way | 8x 32kB 8-way / 8x 64kB 8-way | 4x 32kB 8-way / 4x 32kB 8-way | Ryzen2’s data/instruction caches are unchanged; the icache is still 2x as big as Intel’s. |
| L2 Caches | 8x 512kB 8-way | 6x 512kB 8-way | 8x 512kB 8-way | 4x 256kB 8-way | Ryzen2’s L2 cache is unchanged, but we’re told latencies have been improved. It is 4x bigger than Intel’s. |
| L3 Caches | 2x 8MB 16-way | 2x 8MB 16-way | 2x 8MB 16-way | 8MB 16-way | Ryzen2’s L3 caches are also unchanged – but again latencies are meant to have improved. With each CCX having 8MB, even the 2600 has 2x as much cache as an i7. |

Native Performance

We are testing native arithmetic, SIMD and cryptography performance using the highest performing instruction sets (AVX2, AVX, etc.). Ryzen supports all modern instruction sets including AVX2, FMA3 and even SHA HWA (otherwise supported only by Intel’s Atom), but has dropped AMD’s own variations like FMA4 and XOP, likely due to low usage. A sketch of the kind of AVX2/FMA3 code path involved appears after the Environment note below.

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 10 x64, latest AMD and Intel drivers. 2MB “large pages” were enabled and in use. Turbo / Boost was enabled on all configurations.
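As promised above, a minimal FMA3 dot-product kernel illustrates the vectorised code paths exercised here (a simplified sketch, not Sandra’s actual code; the scalar remainder loop is omitted for brevity):

```cpp
#include <immintrin.h>

// AVX2/FMA3 dot product: 8 floats per iteration, one fused multiply-add each.
float dot_fma(const float *a, const float *b, int n) {
    __m256 acc = _mm256_setzero_ps();
    for (int i = 0; i + 8 <= n; i += 8)     // remainder elements omitted for brevity
        acc = _mm256_fmadd_ps(_mm256_loadu_ps(a + i), _mm256_loadu_ps(b + i), acc);

    // horizontal sum of the 8 partial sums
    __m128 lo = _mm256_castps256_ps128(acc);
    __m128 hi = _mm256_extractf128_ps(acc, 1);
    __m128 s  = _mm_add_ps(lo, hi);
    s = _mm_hadd_ps(s, s);
    s = _mm_hadd_ps(s, s);
    return _mm_cvtss_f32(s);
}
```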

| Native Benchmarks | Ryzen2 2700X 8C/16T (Pinnacle Ridge) | Ryzen2 2600 6C/12T (Pinnacle Ridge) | Ryzen 1700X 8C/16T (Summit Ridge) | i7-6700K 4C/8T (SkyLake) | Comments |
|---|---|---|---|---|---|
| CPU Arithmetic Benchmark Native Dhrystone Integer (GIPS) | 323 [+8%] | 236 | 298 | 194 | Right off, Ryzen2 is 8% faster than Ryzen1; let’s hope it does better. Even the 2600 beats the i7 easily. |
| CPU Arithmetic Benchmark Native Dhrystone Long (GIPS) | 337 [+12%] | 238 | 301 | 194 | With a 64-bit integer workload we finally get into gear: Ryzen2 is 12% faster than its older brother. |
| CPU Arithmetic Benchmark Native FP32 (Float) Whetstone (GFLOPS) | 204 [+12%] | 144 | 182 | 107 | Even in this floating-point test Ryzen2 is again 12% faster. All the AMD CPUs beat the i7 into dust. |
| CPU Arithmetic Benchmark Native FP64 (Double) Whetstone (GFLOPS) | 172 [+11%] | 123 | 155 | 89 | With FP64 nothing much changes; Ryzen2 is still 11% faster. |

From integer workloads in Dhrystone to floating-point workloads in Whetstone, Ryzen2 is about 10% faster than Ryzen1: this is exactly in line with the clock speed increase (9-11%), but if you were expecting more you may be a tiny bit disappointed.
| Native Benchmarks | Ryzen2 2700X 8C/16T (Pinnacle Ridge) | Ryzen2 2600 6C/12T (Pinnacle Ridge) | Ryzen 1700X 8C/16T (Summit Ridge) | i7-6700K 4C/8T (SkyLake) | Comments |
|---|---|---|---|---|---|
| BenchCpuMM Native Integer (Int32) Multi-Media (Mpix/s) | 619 [+16%] | 428 | 535 | 510 | In this vectorised AVX2 integer test Ryzen2 starts to pull ahead and is 16% faster than Ryzen1; perhaps some of the arch improvements benefit SIMD vectorised workloads. |
| BenchCpuMM Native Long (Int64) Multi-Media (Mpix/s) | 187 [+10%] | 132 | 170 | 197 | With a 64-bit AVX2 integer vectorised workload, Ryzen2 drops to just 10%, but still in line with the speed increase. |
| BenchCpuMM Native Quad-Int (Int128) Multi-Media (Mpix/s) | 5.83 [+7%] | 4.12 | 5.47 | 3 | This is a tough test using long integers to emulate Int128 without SIMD; here Ryzen2 is just 7% faster than Ryzen1, but that is still a decent improvement. |
| BenchCpuMM Native Float/FP32 Multi-Media (Mpix/s) | 577 [+11%] | 409 | 520 | 453 | In this floating-point AVX/FMA vectorised test, Ryzen2 is the standard 11% faster than Ryzen1. |
| BenchCpuMM Native Double/FP64 Multi-Media (Mpix/s) | 332 [+11%] | 236 | 299 | 267 | Switching to FP64 SIMD code, Ryzen2 is again just the standard 11% faster than Ryzen1. |
| BenchCpuMM Native Quad-Float/FP128 Multi-Media (Mpix/s) | 15.6 [+15%] | 11 | 13.7 | 11 | In this heavy algorithm using FP64 to mantissa-extend FP128, but not vectorised, Ryzen2 manages to pull further ahead and is 15% faster. |

In vectorised AVX2/FMA code we see a similar story, with a 10% average improvement (7-15%). It seems the SIMD units are unchanged. In any case the i7 is left in the dust.
BenchCrypt Crypto AES-256 (GB/s) 14.1 [+1%] 14.1 13.9 14.7 With AES HWA support all CPUs are memory bandwidth bound; as we’re testing Ryzen2 running at the same memory speed/timings there is still a very small improvement of 1%. But its advantage is that the memory controller is rated for 2933Mt/s operation (vs. 2533) thus with faster memory it could run considerably faster.
BenchCrypt Crypto AES-128 (GB/s) 14.2 [+1%] 14.2 14 14.8 What we saw with AES-256 just repeats with AES-128; Ryzen2 is marginally faster but the improvement is there.
BenchCrypt Crypto SHA2-256 (GB/s) 18.4 [+12%] 13.2 16.5 5.9 With SHA HWA Ryzen2 similarly powers through the hashing tests, leaving Intel in the dust; SHA is still memory bound but with just one (1) buffer there is more bandwidth headroom. Thus Ryzen2 can use its speed advantage and be 12% faster – impressive.
BenchCrypt Crypto SHA1 (GB/s) 19.2 [+14%] 13.1 16.8 11.3 Ryzen also accelerates the soon-to-be-defunct SHA1 and here it is even faster – 14% faster than Ryzen1.
BenchCrypt Crypto SHA2-512 (GB/s) 3.75 [+12%] 2.66 3.34 4.4 SHA2-512 is not accelerated by SHA HWA (version 1) thus Ryzen has to use the same vectorised AVX2 code path; it is still 12% faster than Ryzen1 but loses to the i7. Those SIMD units are tough to beat.
In memory-bandwidth-bound algorithms, Ryzen2 will have to be paired with faster memory (up to 2933MT/s officially) in order to significantly beat its older Ryzen1 brother; otherwise there is only a tiny 1% improvement.
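To see why AES HWA leaves all these CPUs memory bound, consider a minimal AES-128 block-encryption sketch using the AES-NI intrinsics (illustrative only: the pre-expanded round keys in rk[] are assumed, the key schedule is not shown, and the benchmark itself uses AES-256 with more rounds). With roughly one instruction per round per 16-byte block, the cores finish long before DDR4 can stream in the next blocks:

```c
/* One AES-128 block via AES-NI (compile with -maes). */
#include <wmmintrin.h>

__m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
{
    block = _mm_xor_si128(block, rk[0]);         /* initial whitening */
    for (int i = 1; i < 10; i++)
        block = _mm_aesenc_si128(block, rk[i]);  /* rounds 1..9 */
    return _mm_aesenclast_si128(block, rk[10]);  /* final round */
}
```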
BenchFinance Black-Scholes float/FP32 (MOPT/s) 260 [+11%] 184 235 126 In this non-vectorised test we see Ryzen2 is the standard 11% faster than Ryzen1.
BenchFinance Black-Scholes double/FP64 (MOPT/s) 221 [+11%] 157 199 112 Switching to FP64 code, nothing changes, Ryzen2 is still 11% faster.
BenchFinance Binomial float/FP32 (kOPT/s) 106 [+23%] 76 86 27 Binomial uses thread-shared data thus stresses the cache and memory system; here the architecture improvements do show – Ryzen2 is 23% faster, 2x more than expected, not to mention almost 4x (four times) faster than the i7.
BenchFinance Binomial double/FP64 (kOPT/s) 60.8 [+28%] 43.2 47.5 29.2 With FP64 code Ryzen2 is now even faster – 28% faster than Ryzen1, not to mention 2x faster than the i7. Indeed it seems there are improvements to the cache and memory system.
BenchFinance Monte-Carlo float/FP32 (kOPT/s) 54.4 [+11%] 38.6 49.2 49.2 Monte-Carlo also uses thread shared data but read-only thus reducing modify pressure on the caches; Ryzen2 does not seem to be able to reproduce its previous gain and is just the standard 11% faster.
BenchFinance Monte-Carlo double/FP64 (kOPT/s) 41.2 [+10%] 29.1 37.3 20.3 Switching to FP64 nothing much changes, Ryzen2 is 10% faster.
Ryzen1 does very well in these algorithms, but Ryzen2 does even better – especially when thread-local data is involved, managing a 23-28% improvement. For financial workloads Intel does not seem to have a chance anymore – Ryzen is impossible to beat.
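For reference, the non-vectorised Black-Scholes kernel timed above computes the closed-form option price; a minimal sketch follows (our illustrative version with hypothetical parameter names, not Sandra's exact code; link with -lm):

```c
/* Closed-form Black-Scholes call price, scalar FP64. */
#include <math.h>

static double norm_cdf(double x)            /* standard normal CDF */
{
    return 0.5 * erfc(-x / sqrt(2.0));
}

double bs_call(double S, double K, double r, double v, double T)
{
    double d1 = (log(S / K) + (r + 0.5 * v * v) * T) / (v * sqrt(T));
    double d2 = d1 - v * sqrt(T);
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2);
}
```

Binomial and Monte-Carlo price the same options via a lattice and via random paths respectively, which is what introduces the thread-shared data the comments above refer to.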
BenchScience SGEMM (GFLOPS) float/FP32 275 [+10%] 238 250 267 In this tough vectorised AVX2/FMA algorithm Ryzen2 is still "just" 10% faster than the older Ryzen1 – but it finally manages to beat the i7.
BenchScience DGEMM (GFLOPS) double/FP64 113 [+4%] 103 109 116 With FP64 vectorised code, Ryzen2 only manages to be 4% faster. It seems the memory is holding it back thus faster memory would allow it to do much better.
BenchScience SFFT (GFLOPS) float/FP32 8.56 [+4%] 7.36 8.2 19.4 FFT is also heavily vectorised (x4 AVX/FMA) but stresses the memory sub-system more; Ryzen2 is just 4% faster again and still only half the speed of the i7. Again it seems faster memory would help.
BenchScience DFFT (GFLOPS) double/FP64 7.42 [+1%] 6.87 7.32 9.19 With FP64 code, Ryzen2's improvement reduces to just 1% over Ryzen1 and it is again slower than the i7.
BenchScience SNBODY (GFLOPS) float/FP32 279 [+12%] 197 249 269 N-Body simulation is vectorised but makes many memory accesses to shared data; Ryzen2 gets back to a 12% improvement over Ryzen1, which allows it to finally overtake the i7.
BenchScience DNBODY (GFLOPS) double/FP64 114 [+13%] 80 101 79 With FP64 code nothing much changes, Ryzen2 is still 13% faster.
With highly vectorised SIMD code Ryzen2 still improves by the standard 10-12%; in memory-heavy code it needs to run at higher memory speed to significantly overtake Ryzen1. Even so, this allows it to beat the i7 in more algorithms.
CPU Image Processing Blur (3×3) Filter (MPix/s) 1290 [+11%] 913 1160 1170 In this vectorised integer AVX2 workload Ryzen2 is 11% faster allowing it to soundly beat the i7.
CPU Image Processing Sharpen (5×5) Filter (MPix/s) 551 [+11%] 391 497 435 Same algorithm but more shared data does not change things for Ryzen2. Only the i7 falls behind.
CPU Image Processing Motion-Blur (7×7) Filter (MPix/s) 307 [+11%] 218 276 233 Again the same algorithm but with even more shared data changes nothing – but now the i7 is so far behind that Ryzen2 is over 30% faster. Incredible.
CPU Image Processing Edge Detection (2*5×5) Sobel Filter (MPix/s) 461 [+11%] 326 415 384 Different algorithm but still AVX2 vectorised workload still changes nothing – Ryzen2 is 11% faster.
CPU Image Processing Noise Removal (5×5) Median Filter (MPix/s) 69.7 [+12%] 49.7 62 38 Still AVX2 vectorised code and still nothing changes; the i7 falls even further behind, with Ryzen2 almost 2x (two times) as fast.
CPU Image Processing Oil Painting Quantise Filter (MPix/s) 24.7 [+11%] 17.5 22.3 20 Again we see Ryzen2 11% faster than the older Ryzen1 and pulling away from the i7.
CPU Image Processing Diffusion Randomise (XorShift) Filter (MPix/s) 1460 [+8%] 1130 1350 1670 Here Ryzen2 is just 8% faster than Ryzen1 and, strangely, that is not enough to beat the i7. Those Intel SIMD units are seriously fast.
CPU Image Processing Marbling Perlin Noise 2D Filter (MPix/s) 243 [+11%] 172 219 268 In this final test Ryzen2 returns to being 11% faster – and again, strangely, not enough to beat the i7.

With all the modern instruction sets supported (AVX2, FMA, AES and SHA HWA), Ryzen2 does extremely well in all workloads – but it generally improves only by about 11%, in line with the clock speed increase, except in some cases which seem to show improvements to the cache and memory system (which we have not tested yet).

Software VM (.Net/Java) Performance

We are testing the arithmetic and vectorised performance of software virtual machines (SVM), i.e. Java and .Net. With operating systems – like Windows 10 – favouring SVM applications over "legacy" native ones, the performance of the .Net CLR (and Java JVM) has become far more important.

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 10 x64, latest drivers. .Net 4.7.x (RyuJit), Java 1.9.x. Turbo / Boost was enabled on all configurations.

VM Benchmarks Ryzen2 2700X 8C/16T Pinnacle Ridge
Ryzen2 2600 6C/12T Pinnacle Ridge
Ryzen 1700X 8C/16T Summit Ridge
i7-6700K 4C/8T Skylake
Comments
BenchDotNetAA .Net Dhrystone Integer (GIPS) 63.2 [+8%] 30 58.6 26 .Net CLR integer performance starts off OK with Ryzen2 just 8% faster than Ryzen1 – but now almost 2.5x faster than the i7.
BenchDotNetAA .Net Dhrystone Long (GIPS) 49.6 [+20%] 33.6 41.2 27 Ryzen seems to favour 64-bit integer workloads, with Ryzen2 20% faster – a lot more than expected.
BenchDotNetAA .Net Whetstone float/FP32 (GFLOPS) 104 [+15%] 71.2 90.5 54.3 Floating-point CLR performance was already spectacular on Ryzen1, and Ryzen2 is another 15% faster still.
BenchDotNetAA .Net Whetstone double/FP64 (GFLOPS) 122 [+20%] 88.2 102 65.6 FP64 performance is also great (CLR seems to promote FP32 to FP64 anyway) with Ryzen2 even faster by 20%.
Ryzen1's performance in .Net was pretty incredible, but Ryzen2 is even faster – faster than the mere clock speed increase would suggest. There is only one game in town now for .Net applications.
BenchDotNetMM .Net Integer Vectorised/Multi-Media (MPix/s) 106 [+9%] 74 97 54 Just as we saw with Dhrystone, this integer workload sees a 9% improvement for Ryzen2 which makes it 2x faster than the i7.
BenchDotNetMM .Net Long Vectorised/Multi-Media (MPix/s) 111 [+8%] 78 103 57 With 64-bit integer workload we see a similar story – Ryzen2 is 8% faster and again 2x faster than the i7.
BenchDotNetMM .Net Float/FP32 Vectorised/Multi-Media (MPix/s) 387 [+11%] 278 348 240 Here we make use of RyuJit's support for SIMD vectors, thus running AVX/FMA code; Ryzen2 is 11% faster and still well ahead of the i7 despite the latter's fast SIMD units.
BenchDotNetMM .Net Double/FP64 Vectorised/Multi-Media (MPix/s) 217 [+12%] 153 194 48.6 Switching to FP64 SIMD vector code – still running AVX/FMA – Ryzen2 is still 12% faster. The i7 is truly left in the dust at 1/4 the speed.
Ryzen2 is the usual 9-12% faster than Ryzen1 here, but it means that even RyuJit's SIMD support cannot save Intel's i7 – it would take 2x as many cores (not just 50% more) to beat Ryzen2.
Java Arithmetic Java Dhrystone Integer (GIPS) 574 [+12%] 399 514 We start JVM integer performance with the usual 12% gain over Ryzen1.
Java Arithmetic Java Dhrystone Long (GIPS) 559 [+12%] 392 500 Nothing much changes with 64-bit integer workload, we have Ryzen2 12% faster.
Java Arithmetic Java Whetstone float/FP32 (GFLOPS) 138 [+13%] 99 122 With a floating-point workload Ryzen2 performance improvement is 13%.
Java Arithmetic Java Whetstone double/FP64 (GFLOPS) 137 [+7%] 97 128 With an FP64 workload Ryzen2 is just 7% faster – but still welcome.
Java performance improves by the expected 7-13% on Ryzen2, allowing it to completely dominate the i7.
Java Multi-Media Java Integer Vectorised/Multi-Media (MPix/s) 108 [+15%] 76 94 Oracle’s JVM does not yet support native vector to SIMD translation like .Net’s CLR but here Ryzen2 manages a 15% lead over Ryzen1.
Java Multi-Media Java Long Vectorised/Multi-Media (MPix/s) 114 [+24%] 73 92 With a 64-bit vectorised workload Ryzen2 (similar to .Net) increases its lead to 24%.
Java Multi-Media Java Float/FP32 Vectorised/Multi-Media (MPix/s) 99 [+14%] 69 87 Switching to floating-point we return to the usual 14% speed improvement.
Java Multi-Media Java Double/FP64 Vectorised/Multi-Media (MPix/s) 93 [+1%] 64 92 With an FP64 workload Ryzen2's lead somewhat inexplicably drops to 1%.
Java's lack of vectorised primitives that would allow the JVM to use SIMD instruction sets (e.g. SSE2, AVX/FMA) gives Ryzen2 free rein to dominate all the tests, be they integer or floating-point. It is pretty incredible that the Intel CPU cannot come close to its performance.

Ryzen1 dominated the .Net and Java benchmarks – but now Ryzen2 extends that dominance out of reach. It would take a much improved run-time or Intel CPU to get anywhere close. For .Net and Java code, Ryzen is the CPU to get!

SiSoftware Official Ranker Scores

Final Thoughts / Conclusions

Ryzen2 is a worthy update, but its speed increase is generally due to its faster clock speed – similar to Intel's SkyLake > KabyLake (gen 6 to gen 7) transition. Coming in at the same price, however, a "free" performance increase of 10% or so is obviously not to be ignored. Let's not forget that Ryzen2 can still use all the existing series 300 mainboards – subject to a BIOS update.

The process shrink and power optimisations do allow Ryzen2 to run at lower voltages and consume less power – even though TDP has increased, at least "on paper".

Some algorithms do seem to show that the cache and memory system has been improved – and Ryzen2's advantage is that it can use (much) faster memory. Unfortunately, at this time DDR4 memory, especially the faster versions, is very expensive. Here Intel does (still) have an advantage in that fast DDR4 memory is not required except for bandwidth-bound algorithms.

One advantage is that by now operating systems (and applications) have been updated to deal with the dual-CCX design that caused so much trouble when we first benchmarked Ryzen1. With AMD increasing its market share, no high-performance application can afford to ignore AMD CPUs.

We (just) cannot wait to see the new improvements in future AMD designs and especially the ThreadRipper2 update!

Sandra Platinum (2017) SP4 – Updates for ‘Meltdown’ and ‘Spectre’

Update Wizard

NB: SP4 was refreshed to version 24.61 (vs. the original 24.55) a day later.

We are pleased to release SP4 (Service Pack 4 – version 24.61) update for Sandra Platinum (2017) with the following updates:

Sandra Platinum (2017) Press Release

  • Reporting of Operating System (Windows) speculation control settings for the recently discovered vulnerabilities:
  • Reporting of latest CPU microcode update availability
    • Hardware mitigation for BTI/’Spectre’
  • Reporting of CPU features for branch control
    • Hardware enumeration and control for speculation and predictors
      • Indirect branch restricted speculation (IBRS) and Indirect branch predictor barrier (IBPB)
      • Single thread indirect branch predictors (STIBP)
      • Architecture Capabilities (affected by IB or not)
  • Reporting of CPU support for Context ID / Indirect CID
    • Hardware acceleration for context switching, thus mitigating the performance loss when KVA (Kernel Virtual Address) Shadowing is enabled.

Windows speculation control settings reporting:

Recommended Settings (Windows and CPU updated)
Operating System Mitigation: Enabled (Windows updated)

BTI Mitigation: Enabled, CPU indirect branch-control enumeration support (microcode updated)

RDCL Mitigation: KVA Shadowing enabled, Windows/CPU context-ID support

OK Settings (Windows updated, but CPU not updated)
Operating System Mitigation: Enabled (Windows updated)

BTI Mitigation: Not enabled, no CPU support (e.g. firmware not updated – check for BIOS update)

RDCL Mitigation: KVA Shadowing enabled, Windows/CPU context-ID support

OK Settings (Windows updated, but CPU not updated, no CID)
Operating System Mitigation: Enabled (Windows updated)

BTI Mitigation: Not enabled, no CPU support (e.g. firmware not updated – check for BIOS update)

RDCL Mitigation: KVA Shadowing not enabled, no context-ID support (either CPU or Windows obsolete)

Not Recommended (Windows not updated)
Operating System Mitigation: Not enabled, Windows has not been updated – install the OS update

Processor features and microcode updates:

CPU microcode not latest
CPU has not been updated with the latest microcode update: check for an updated BIOS/firmware from the OEM/computer manufacturer.
CPU microcode w/speculation support
CPU supports IBRS/IBPB (indirect branch restricted speculation/predictor barrier) enumeration as well as STIBP (single thread indirect branch predictors).
CPU supports CID or InvPCID
CPU supports either CID (Context ID) or InvPCID (Invalidate Process Context ID) for faster context switching – thus mitigating the performance loss.
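For reference, here is a minimal sketch of how these features can be enumerated from CPUID (bit positions per Intel's speculative-execution side-channel documentation; Sandra's actual detection is more extensive and also checks microcode revisions):

```c
/* Enumerate branch-control and context-ID features via CPUID. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("PCID (Context ID) : %s\n", (ecx & (1u << 17)) ? "yes" : "no");

    if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        printf("INVPCID           : %s\n", (ebx & (1u << 10)) ? "yes" : "no");
        printf("IBRS/IBPB         : %s\n", (edx & (1u << 26)) ? "yes" : "no");
        printf("STIBP             : %s\n", (edx & (1u << 27)) ? "yes" : "no");
        printf("ARCH_CAPABILITIES : %s\n", (edx & (1u << 29)) ? "yes" : "no");
    }
    return 0;
}
```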

Microcode Updates for Intel processors (non-exhaustive list):

Generation Old Microcode Updated Microcode
IvyBridge 3rd Gen (IVB C0) 28 2a
Haswell 4th Gen (HSW-U/Y Cx/Dx) 20 21
Haswell-X 4th Gen (HSX C0) 3a 3b
Haswell-EX 4th Gen (HSW-EX E0) 0f 10
Broadwell 5th Gen (BDW-U/Y E/F) 25 28
Crystalwell 5th Gen (CRW CX) 17 18
Broadwell 5th Gen (BDW-H E/G) 17 1b
Broadwell 5th Gen (BDX-DE V0/V1) 0f 14
Broadwell 5th Gen (BDX-DE V2) 0d 11
SkyLake 6th Gen (SKL-U/Y D0/R0) ba c2
SkyLake-X 6th Gen (SKX H0) 35 3c
KabyLake 7th Gen (KBL-U/Y H0) 62 80
KabyLake 7th Gen (KBL Y0 / CFL D0) 70 80
KabyLake 7th Gen (KBL-H/S B0) 5e 80
CoffeeLake 8th Gen (CFL U0/B0) 70 80

Note: Due to some unexplained crashes, resume-from-sleep issues, etc., the current (January 2018) microcode updates have been suspended by Intel. They are scheduled to resume in March 2018.

In preliminary benchmarking we have observed no or only very minor performance impact on CPU, GPGPU and memory scores; disk performance for small transfers is impacted when KVA shadowing is enabled.

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite

Sandra Platinum (2017) SP3 – Updates and Fixes

Update Wizard

We are pleased to release SP3 (Service Pack 3 – version 24.50) update for Sandra Platinum (2017) with the following updates:

Sandra Platinum (2017) Press Release

  • Fix start-up crash with non-AVX older systems (both x86 and x64)
  • Tools update allowing further optimisations to AVX512 benchmarks
  • Other hardware information fixes depending on APIC ID configuration
    • AMD Ryzen 5 – L3 cache detection 4x not 2x
    • AMD Ryzen 3 – detected as SMP

Commercial version customers can download the free updates from their software distributor; Lite users please download from your favourite download site.

Download Sandra Lite