Intel 13th Gen Core RaptorLake (i5 13600K(F)) Review & Benchmarks – Mid-Range Hybrid

What is “RaptorLake”?

It is the “next-generation” (13th Gen) Core architecture, replacing the current “AlderLake” (12th Gen) and thus the 3rd-generation “hybrid” (aka “big.LITTLE”) arch that Intel has released. As before, it combines big/P(erformant) “Core” cores with LITTLE/E(fficient) “Atom” cores in a single package and covers everything from desktops and laptops to tablets and even (low-end) servers.

  • Desktop (S) (65-125W rated, up to 253W turbo)
    • 8C (aka big/P) + 16c (aka LITTLE/E) / 32T total (13th Gen Core i9-13900K(F))
    • 2x as many LITTLE/E cores as ADL
  • High-Performance Mobile (H/HX) (45-55W rated, up to 157W turbo)
    • 8C + 16c / 32T total
  • Mobile (P) (20-28W rated, up to 64W turbo)
    • 4/6C + 8c / 20T total
  • Ultra-Mobile/ULV (U) (9-15W rated, 29W turbo)
    • 2C + 8c / 12T total

For best performance and efficiency, this does require operating system scheduler changes – in order for threads to be assigned to the appropriate physical core/thread. For compute-heavy/low-latency threads this means a “big/P” core; for low-compute/power-limited threads a “LITTLE/E” core.

In the Windows world, this means “Windows 11” for clients and “Windows Server vNext” (note: not the recently released Server 2022, based on the 21H2 Windows 10 kernel) for servers. The Windows power plans (e.g. “Balanced”, “High Performance”, etc.) contain additional (hidden) settings, e.g. prefer (or require) scheduling on big/P or LITTLE/E cores and so on. But in general, the scheduler is supposed to handle it all automatically, based on telemetry from the CPU.

Windows 11 also gets an updated QoS (Quality of Service) API (aka functions) allowing app(lication)s like Sandra to indicate which threads should use big/P cores and which LITTLE/E cores. Naturally, this means updated applications will be needed for best power efficiency.

Intel Core i5 13600K (RaptorLake) 6C + 8c


General SoC Details

  • 10nm+++ (Intel 7+) improved process
  • Unified 36MB L3 cache (vs. 30MB on ADL thus 20% larger)
  • PCIe 5.0 (up to 64GB/s with x16 lanes) – up to x16 lanes PCIe5 + x4 lanes PCIe4
    • NVMe SSDs may thus be limited to PCIe4 or bifurcate main x16 lanes with GPU to PCIe5 x8 + x8
  • PCH up to x12 lanes PCIe4 + x16 lanes PCIe3
    • CPU to PCH DMI 4 x8 link (aka PCIe4 x8)
  • DDR5/LP-DDR5 memory controller support (e.g. 4x 32-bit channels) – up to 5600MT/s (official, vs. 4800MT/s ADL)
    • XMP 3.0 (eXtreme Memory Profile(s)) specification for overclocking with 3 profiles and 2 user-writable profiles (!)
  • Thunderbolt 4 (and thus USB 4)

big/P(erformance) “Core” core

  • Up to 8C/16T “Raptor Cove” (!) cores – improved from “Golden Cove” in ADL 😉
  • AVX512 disabled! – in order to match the Atom cores (on consumer parts)
    • (Server versions ADL-EX support AVX512 and new extensions like AMX and FP16 data-format)
    • Single FMA-512 unit (though disabled)
  • SMT support still included, 2x threads/core – thus 16 total
  • L1I remains at 32kB
  • L1D remains at 48kB
  • L2 increased to 2MB per core (almost 2x ADL) like server parts (ADL-EX)

LITTLE/E(fficient) “Atom” core

  • Up to 16c/16T “Gracemont” cores – thus 2x more than ADL but same core
  • No SMT support, only 1 thread/core – thus 16 total (in 4x modules of 4x cores)
  • AVX/AVX2 support – first for Atom core, but no AVX512!
    • (Recall that “Phi” GP-GPU accelerator w/AVX512 was based on Atom core)
  • L1I still at 64kB
  • L1D still at 32kB
  • L2 4MB shared by 4 cores (2x larger than ADL)

As with ADL, RPL’s big “Raptor Cove” cores have AVX512 disabled, which may prove to be a (big) problem considering AMD’s upcoming Zen4 (Ryzen 7000?) will support it. Even Centaur’s “little” CNS CPU supported AVX512. Centaur has now been bought by Intel, possibly in order to provide a little AVX512-supporting core – we may yet see Intel big Core + ex-Centaur LITTLE core designs.

While some are not keen on AVX512 due to the relatively large power required to use it (and thus lower clocks) as well as the large number of extensions (F, BW, CD, DQ, ER, IFMA, PF, VL, BF16, FP16, VAES, VNNI, etc.) – the performance gain cannot be overestimated (2x if not higher). Most processors no longer need to “clock-down” (AVX512 negative offset) and can run at full speed – power/thermal limits notwithstanding. Now that AMD and ex-Centaur support AVX512, it is no longer an Intel-only, server-only instruction set architecture (ISA).

RPL uses the very same “Gracemont” Atom cores as ADL – with no changes except the 2x larger cluster L2 (4MB vs. 2MB), which is welcome especially in light of the big cores also getting a L2 cache size upgrade. While AVX2 support for the Atom cores was a huge upgrade, tests have shown them not to be as power efficient as Intel would have us believe – which is why RPL has more of them, but lower clocked where the efficiency is greater.

As we hypothesized in our article (Mythical Intel 12th Gen Core AlderLake 10C/20T big/P Cores (i11-12999X) – AVX512 Extrapolated Performance) – ADL would have been great if Intel could have provided a version with only 10 big cores (replacing the 2x Little core clusters) that could have been an AVX512 SIMD-performance monster, trading blows with the 16-core Zen3 (Ryzen 5950X). With RPL having space for 2 extra clusters – Intel could have had 10C + 8c or even 12C (big) AVX512-supporting cores to go up against Zen4…

Alas, what we are getting across SKUs is the same number of big cores (be they 8, 6, 4 or 2) and 2x the clusters of Little cores (and thus 2x more Little cores), but presumably at lower clocks in order to improve power efficiency. One issue with ADL across SKUs is that while TDP (on paper) is reasonable – turbo power has blown way past even AVX512-supporting “RocketLake” (!) despite the new efficiency claims. Thus, while the turbo figures are disappointing, it is clear Intel is trying to bring power under control through more, lower-clocked, efficient cores.

Changes in Sandra to support Hybrid

Like Windows (and other operating systems), we have had to make extensive changes to detection, thread scheduling and the benchmarks themselves to support hybrid/big-LITTLE. Thankfully, this means we are not dependent on Windows support – you can confidently test AlderLake/RaptorLake on older operating systems (e.g. Windows 10 or earlier – or Server 2022/2019/2016 or earlier) – although it is probably best to run the very latest operating systems for the best overall (outside benchmarking) computing experience.

  • Detection Changes
    • Detect big/P and LITTLE/E cores
    • Detect correct number of cores (and type), modules and threads per core -> topology
    • Detect correct cache sizes (L1D, L1I, L2) depending on core
    • Detect multipliers depending on core
  • Scheduling Changes
    • “All Threads (MT/MC)” (all cores + all their threads) – thus 20T
    • “All Cores (MC aka big+LITTLE) Only” (both core types, no SMT threads) – thus (6+8) 14T
    • “All Threads big/P Cores Only” (only “Core” cores + their threads) – thus (6×2) 12T
    • “big/P Cores Only” (only “Core” cores) – thus 6T
    • “LITTLE/E Cores Only” (only “Atom” cores) – thus 8T
    • “Single Thread big/P Core Only” (a single “Core” core) – thus 1T
    • “Single Thread LITTLE/E Core Only” (a single “Atom” core) – thus 1T
  • Benchmarking Changes
    • Dynamic/Asymmetric workload allocator – based on each thread’s compute power
      • Note some tests/algorithms are not well-suited for this (here P threads will finish and wait for E threads – thus effectively having only E threads). Different ways to test algorithm(s) will be needed.
    • Dynamic/Asymmetric buffer sizes – based on each thread’s L1D caches
      • Memory/Cache buffer testing using different block/buffer sizes for P/E threads
      • Algorithms (e.g. GEMM) using different block sizes for P/E threads
    • Best performance core/thread default selection – based on test type
      • Some tests/algorithms run best just using cores only (SMT threads would just add overhead)
      • Some tests/algorithms (streaming) run best just using big/P cores only (E cores just too slow and waste memory bandwidth)
      • Some tests/algorithms sharing data run best on same type of cores only (either big/P or LITTLE/E) (sharing between different types of cores incurs higher latencies and lower bandwidth)
    • Reporting the Performance Contribution & Ratio of each thread
      • Thus the big/P and LITTLE/E cores contribution for each algorithm can be presented. In effect, this allows better optimisation of algorithms tested, e.g. detecting when either big/P or LITTLE/E cores are not efficiently used (e.g. overloaded)

As per the above, you could forgive some developers for simply restricting their software to the big/Performance threads and ignoring the LITTLE/Efficient threads altogether – at least for compute-heavy algorithms.

For this reason, we recommend using the very latest version of Sandra and keeping up with updated versions that fix bugs and improve performance and stability.

But is it RaptorLake or AlderLake-Refresh?

Unfortunately, it seems that not all CPUs labelled “13th Gen” will be “RaptorLake” (RPL); some mid-range i5 and low-end i3 models will instead come with “AlderLake” (Refresh, ADL-R) cores – which is likely to confuse ordinary buyers into purchasing these older-gen CPUs.

What is more confusing is that the ID (aka CPUID) of these 13th Gen ADL-R/RPL models is the same (e.g. 0B067x) and does not match the old ADL (e.g. 09067x). However, the L2 cache sizes are the same as old ADL (1.25MB for big/Core and 2MB for LITTLE/Atom cluster) not the larger RPL (2MB for big/Core and 4MB for LITTLE/Atom cluster).

Note: There is still a possibility these are actually RPL cores but with L2 cache(s) reduced (part disabled/fused off) in order not to outperform higher models.

CPU (Core) Performance Benchmarking

In this article we test CPU (core) performance; please see our other articles for the rest of the platform coverage.

Hardware Specifications

We are comparing the Intel with competing desktop architectures as well as competitors (AMD) with a view to upgrading to a top-of-the-range, high performance design.

| Specifications | Intel Core i5 13600K(F) 6C+8c/20T (RPL) | Intel Core i5 12600K(F) 6C+4c/16T (ADL) | AMD Ryzen 5 7600X 6C/12T (Zen4) | AMD Ryzen 5 5600X 6C/12T (Zen3) | Comments |
|---|---|---|---|---|---|
| Arch(itecture) | Raptor Cove + Gracemont / RaptorLake | Golden Cove + Gracemont / AlderLake | Zen4 / Raphael | Zen3 / Vermeer | The very latest arch |
| Modules (CCX) / Cores (CU) / Threads (SP) | 6C+8c / 20T | 6C+4c / 16T | 6C / 12T | 6C / 12T | 4 more (2x) LITTLE cores! |
| Rated/Turbo Speed (GHz) | 3.5 – 5.1 [+4%] / 2.6 – 3.9 [+8%] | 3.7 – 4.9 / 2.8 – 3.6 | 4.7 – 5.3 | 3.7 – 4.6 | 4% big Core, 8% Atom clock |
| Rated/Turbo Power (W) | 125 – 181W [PL2] [+20%] | 125 – 150W [PL2] | 105 – 142W [PPT] | 65 – 88W [PPT] | 20% higher Turbo power |
| L1D / L1I Caches | 6x 48/32kB + 8x 32/64kB | 6x 48/32kB + 4x 32/64kB | 6x 32/32kB 8-way | 6x 32/32kB 8-way | Same L1D/L1I caches |
| L2 Caches | 6x 2MB + 2x 4MB (20MB) [2.1x] | 6x 1.25MB + 2MB (9.5MB) | 6x 1MB 16-way (6MB) | 6x 512kB 16-way (3MB) | L2 is over 2x larger! |
| L3 Cache(s) | 24MB 16-way [+20%] | 20MB 16-way | 32MB 16-way | 32MB 16-way | L3 is 20% larger |
| Microcode (Firmware) | 0B0671-10F [B0 stepping] | 090672-1E [C0 stepping] | A60F12-03 | A20F10-09 | Revisions just keep on coming. |
| Special Instruction Sets | VNNI/256, SHA, VAES/256 | VNNI/256, SHA, VAES/256 | AVX512, VNNI/512, SHA, VAES/512 | AVX2/FMA, SHA | AVX512 still MIA |
| SIMD Width / Units | 256-bit | 256-bit | 512-bit (as 2x 256-bit) | 256-bit | Same SIMD units |
| Price / RRP (USD) | $319 [+10%] | $289 | $299 | $299 | Price is 10% higher |

Disclaimer

This is an independent review (critical appraisal) that has not been endorsed nor sponsored by any entity (e.g. Intel, etc.). All trademarks acknowledged and used for identification only under fair use.

The review contains only public information that was not provided under NDA nor embargoed. At publication time, the products have not been directly tested by SiSoftware but submitted to the public Benchmark Ranker; thus the accuracy of the benchmark scores cannot be verified. However, they appear consistent and pass current validation checks.

And please, don’t forget small ISVs like ourselves in these very challenging times. Please buy a copy of Sandra if you find our software useful. Your custom means everything to us!

SiSoftware Official Ranker Scores

Native Performance

We are testing native arithmetic, SIMD and cryptography performance using the highest performing instruction sets. “RaptorLake” (RPL) does not support AVX512 – but it does support 256-bit versions of some original AVX512 extensions.

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 11 x64 (22H2), latest AMD and Intel drivers. 2MB “large pages” were enabled and in use. Turbo / Boost was enabled on all configurations.

| Native Benchmarks | Core i5 13600K(F) 6C+8c/20T (RPL) | Core i5 12600K(F) 6C+4c/16T (ADL) | Ryzen 5 7600X 6C/12T (Zen4) | Ryzen 5 5600X 6C/12T (Zen3) | Comments |
|---|---|---|---|---|---|
| CPU Arithmetic: Native Dhrystone Integer (GIPS) | 642 [+34%] | 479 | 462 | 347 | RPL is 34% faster than ADL! |
| CPU Arithmetic: Native Dhrystone Long (GIPS) | 601 [+22%] | 494 | 484 | 356 | With a 64-bit integer workload, RPL is 22% faster. |
| CPU Arithmetic: Native FP32 (Float) Whetstone (GFLOPS) | 428 [+28%] | 334 | 263 | 224 | With floating-point, RPL is 28% faster. |
| CPU Arithmetic: Native FP64 (Double) Whetstone (GFLOPS) | 308 [+40%] | 220 | 225 | 185 | With FP64, RPL is 40% faster. |
With non-SIMD code, we see a huge performance uplift in both integer (ol’ Dhrystone) and floating-point (ol’ Whetstone) – an average of ~31% over ADL – that helps push RPL even past AMD’s new Zen4. The faster big Cores + 4 more Little Atom cores greatly help here.

Thus for normal, non-SIMD code – RPL will perform much better and provide a great upgrade over ADL and cement Intel’s domination in some workloads (Cinebench?)…

| Native Benchmarks | 13600K (RPL) | 12600K (ADL) | 7600X (Zen4) | 5600X (Zen3) | Comments |
|---|---|---|---|---|---|
| BenchCpuMM Native Integer (Int32) Multi-Media (Mpix/s) | 1,881 [+22%] | 1,548 | 1,846* | 1,441 | RPL is 22% faster than ADL here. |
| BenchCpuMM Native Long (Int64) Multi-Media (Mpix/s) | 653 [+24%] | 528 | 618* | 475 | With a 64-bit workload, RPL is 24% faster. |
| BenchCpuMM Native Quad-Int (Int128) Multi-Media (Mpix/s) | 125 [+25%] | 100 | 169** | 90.69 | Using 64-bit int to emulate Int128, RPL is 25% faster. |
| BenchCpuMM Native Float/FP32 Multi-Media (Mpix/s) | 1,976 [+21%] | 1,633 | 1,711* | 1,311 | In this floating-point vectorised test, RPL is 21% faster. |
| BenchCpuMM Native Double/FP64 Multi-Media (Mpix/s) | 1,016 [+21%] | 840 | 944* | 676 | Switching to FP64, RPL is 21% faster. |
| BenchCpuMM Native Quad-Float/FP128 Multi-Media (Mpix/s) | 48 [+23%] | 39.03 | 38.58* | 28.28 | Using FP64 to mantissa-extend FP128, RPL is 23% faster. |
With heavily vectorised SIMD workloads, RPL sees a similar improvement – it is around 23% faster than ADL across all tests, with minor variations. For older software just using AVX2/FMA3, RPL flies past ADL as well as older CPUs (Zen3, Zen2, etc.).

Even Zen4 with AVX512 support cannot catch RPL here, except in the test using AVX512-IFMA. This shows just how much the (AVX512) extensions help software gain performance even when not executed at full width (Zen4 splits each 512-bit operation into 2x 256-bit). Intel will need to find a solution for future arch(itecture)s as more and more software starts supporting AVX512.

Note:* using AVX512 instead of AVX2/FMA.

Note:** using AVX512-IFMA52 to emulate 128-bit integer operations (int128).

| Native Benchmarks | 13600K (RPL) | 12600K (ADL) | 7600X (Zen4) | 5600X (Zen3) | Comments |
|---|---|---|---|---|---|
| BenchCrypt Crypto AES-256 (GB/s) | 32*** [+33%] | 24*** | 24.59*** | 20.8 | With faster DDR5, RPL is 33% faster. |
| BenchCrypt Crypto AES-128 (GB/s) | | 24.03*** | 24.6*** | 20.8 | What we saw with AES-256 just repeats with AES-128. |
| BenchCrypt Crypto SHA2-256 (GB/s) | 30** [+56%] | 19.56** | 21.89* | 18.67** | With SHA, RPL is 56% faster than ADL. |
| BenchCrypt Crypto SHA1 (GB/s) | 19.26** | 19.56** | | | The less compute-intensive SHA1 does not change things due to acceleration. |
As streaming tests (crypto/hashing) are memory bound, RPL won’t beat ADL with the same memory speed but here we see the power of DDR5-6000 memory.

With SHA, as we’ve seen in the 13900K article, the 13600K/RPL does manage to beat ADL by a huge ~56% – and thus even the AVX512-enabled AMD Zen4 – which is pretty impressive. The extra Little Atom cores can help here with SIMD integer workloads.

Note***: using VAES 256-bit (AVX2) or 512-bit (AVX512)

Note**: using SHA HWA not SIMD (e.g. AVX512, AVX2, AVX, etc.)

Note*: using AVX512 not AVX2.

| Native Benchmarks | 13600K (RPL) | 12600K (ADL) | 7600X (Zen4) | 5600X (Zen3) | Comments |
|---|---|---|---|---|---|
| BenchFinance Black-Scholes float/FP32 (MOPT/s) | 296 | | | | The standard financial algorithm. |
| BenchFinance Black-Scholes double/FP64 (MOPT/s) | 450 [+45%] | 311 | 298 | 235 | Switching to FP64 code, RPL is 45% faster. |
| BenchFinance Binomial float/FP32 (kOPT/s) | 59.98 | | | | Binomial uses thread-shared data, thus stressing the cache & memory system. |
| BenchFinance Binomial double/FP64 (kOPT/s) | 135 [+48%] | 91.15 | 88.63 | 68.92 | With FP64 code, RPL is 48% faster. |
| BenchFinance Monte-Carlo float/FP32 (kOPT/s) | 286 | | | | Monte-Carlo also uses thread-shared data, but read-only, reducing modify pressure on the caches. |
| BenchFinance Monte-Carlo double/FP64 (kOPT/s) | 180 [+37%] | 131 | 125 | 94.9 | Here RPL is 37% faster. |
AMD’s Zen always did well on non-SIMD floating-point algorithms – but here RPL shows the times are changing; with a ~43% improvement over ADL, it has no problem dispatching even the latest Zen4 and all of its improvements (AVX512 cannot be used here).
| Native Benchmarks | 13600K (RPL) | 12600K (ADL) | 7600X (Zen4) | 5600X (Zen3) | Comments |
|---|---|---|---|---|---|
| BenchScience SGEMM (GFLOPS) float/FP32 | 573 | | | | A tough vectorised algorithm that is widely used (e.g. AI/ML). |
| BenchScience DGEMM (GFLOPS) double/FP64 | 314 [+35%] | 233 | 289* | 198 | We see RPL beat ADL by 35%. |
| BenchScience SFFT (GFLOPS) float/FP32 | 29.87 | | | | FFT is also heavily vectorised but stresses the memory sub-system more. |
| BenchScience DFFT (GFLOPS) double/FP64 | 20.47 [+6%] | 19.3 | 15.56* | 9.05 | With FP64 code, RPL is memory-latency bound. |
| BenchScience SNBODY (GFLOPS) float/FP32 | 530 | | | | N-Body simulation is vectorised but with fewer memory accesses. |
| BenchScience DNBODY (GFLOPS) double/FP64 | 188 [+26%] | 149 | 242* | 164 | With FP64, RPL is 26% faster. |
Unlike what we’ve seen in the 13900K review, the 13600K/RPL does much better here with fewer Atom cores – and sees a respectable ~22% improvement over ADL. This allows it to beat even Zen4 with AVX512 in 2 out of 3 tests.

Here, faster DDR5 memory does make a big difference, we’ll need to see what speed is the “sweet-spot” for RPL, likely DDR5-6400 for that many cores.

Note*: using AVX512 not AVX2/FMA3.

| Native Benchmarks | 13600K (RPL) | 12600K (ADL) | 7600X (Zen4) | 5600X (Zen3) | Comments |
|---|---|---|---|---|---|
| CPU Image Processing Blur (3×3) Filter (MPix/s) | 5,137 [+38%] | 3,718 | 5,244* | 2,585 | In this vectorised integer workload, RPL is 38% faster. |
| CPU Image Processing Sharpen (5×5) Filter (MPix/s) | 1,941 [+37%] | 1,417 | 2,052* | 975 | Same algorithm but more shared data – 37% faster. |
| CPU Image Processing Motion-Blur (7×7) Filter (MPix/s) | 964 [+39%] | 695 | 1,062* | 499 | Again same algorithm but even more data shared – 39% faster. |
| CPU Image Processing Edge Detection (2*5×5) Sobel Filter (MPix/s) | 1,618 [+34%] | 1,207 | 1,634* | 821 | Different algorithm – RPL is 34% faster. |
| CPU Image Processing Noise Removal (5×5) Median Filter (MPix/s) | 133 [+39%] | 95.84 | 214* | 85.79 | Still vectorised code – RPL is 39% faster. |
| CPU Image Processing Oil Painting Quantise Filter (MPix/s) | 67 [+31%] | 51 | 33.18* | 27.81 | This test has always been tough – RPL is 31% faster. |
| CPU Image Processing Diffusion Randomise (XorShift) Filter (MPix/s) | 5,733 [+38%] | 4,152 | 3,818* | 2,647 | With an integer workload, RPL is 38% faster. |
| CPU Image Processing Marbling Perlin Noise 2D Filter (MPix/s) | 916 [+25%] | 731 | 532* | 373 | In this final test, RPL is 25% faster. |
These tests love SIMD vectorised compute, thus here RPL is again ~35% faster than ADL – which even allows it to beat the AVX512-enabled Zen4 in 3 out of 8 tests, with 1 tied.

The test also showed how much Zen4 benefits from AVX512 and in effect how much RPL misses by not having AVX512 enabled. With AMD on board, AVX512 adoption is likely to increase, thus Intel had better bring support to Atom somehow, soon…

Note*: using AVX512 not AVX2/FMA3.

Intel RaptorLake 13600K (6C + 8c) Inter-Thread/Core HeatMap Latency (ns)


The inter-thread/core/module latencies “heat-map” shows how the latencies vary when transferring data off-thread (same L1D), off-core (same L3 for big Cores but same L2 for Little Atom cores) and different-core-type (big Core to Little Atom).

Still, judicious thread-pair scheduling is needed to keep latencies low (and conversely bandwidth high when large data is transferred).

| Native Benchmarks | 13600K (RPL) | 12600K (ADL) | 7600X (Zen4) | 5600X (Zen3) | Comments |
|---|---|---|---|---|---|
| CPU Multi-Core: Total Inter-Thread Bandwidth – Best Pairing (GB/s) | 112 [+31%] | 85.21 | 95.21* | 72.07 | 31% more bandwidth than ADL. |
With double L2 (either big Cores or Little Atom Cluster) and much bigger L3 cache, RPL has 31% more inter-core bandwidth than ADL.

Here Zen4/3 benefit from having a single CCX and a much larger L3 (32MB), but RPL manages to beat Zen4 just as ADL beat Zen3. At this level (6-core), AMD is not likely to release 3D V-Cache versions, unlike the higher-level (8-core) 5800X3D and future 7800X3D.

Note:* using AVX512 512-bit wide transfers.

| Native Benchmarks | 13600K (RPL) | 12600K (ADL) | 7600X (Zen4) | 5600X (Zen3) | Comments |
|---|---|---|---|---|---|
| CPU Multi-Core: Average Inter-Thread Latency (ns) | 37.7 [-4%] | 39.2 | 17.1 | 20.6 | Overall latencies are 4% lower. |
| CPU Multi-Core: Inter-Thread (Same Core) Latency (ns) | 10.1 [-3%] | 10.4 | 8.7 | 10.2 | Inter-Thread (big Core) latency is 3% lower. |
| CPU Multi-Core: Inter-Core (big Core, same Module) Latency (ns) | 33.33 [-3%] | 34.3 | 17.9 | 21.6 | We see Inter-big-Core latency 3% lower. |
| CPU Multi-Core: Inter-Core (Little Core, same Module) Latency (ns) | 48.2 [-15%] | 56.8 | | | We see Inter-Little-Atom latency 15% lower. |
| CPU Multi-Core: Inter-Module/CCX Latency (ns) | n/a | | | | |
Due to the increased number of Little Atom cores (8 vs. 4 on ADL), the overall latency of RPL would naturally be expected to be higher than ADL – but the 13600K still ends up with lower overall latency than the 12600K.

We see the big-Core Inter-Thread latency (same L1D) 3% lower on RPL, with Inter-big-Core and Inter-Little-Atom latencies 3%/15% lower respectively – which is pretty impressive and should help inter-thread transfers whatever the pair topology.

| Native Benchmarks | 13600K (RPL) | 12600K (ADL) | 7600X (Zen4) | 5600X (Zen3) | Comments |
|---|---|---|---|---|---|
| Aggregate Score (Points) | 15,020 [+24%] | 12,160 | 13,380* | 9,030 | Across all benchmarks, RPL is 24% faster than ADL! |
Across all the benchmarks, the 13600K/RPL ends up a good 24% faster than ADL which, while not as high as we’ve seen in the 13900K review, is enough to beat Zen4 (7600X) by a good amount (12% faster). While Zen4 gets a huge uplift from AVX512, it faces an RPL with 8 extra Atom cores that do make a good difference.

Still, Intel will have to find a solution as AMD has large headroom at the high end and could easily move products up the stack, e.g. 8-core future Zen facing future 6-big-Core MTL (MeteorLake) that would be a big issue.

Note*: using AVX512 instead of AVX2/FMA3.

| Native Benchmarks | 13600K (RPL) | 12600K (ADL) | 7600X (Zen4) | 5600X (Zen3) | Comments |
|---|---|---|---|---|---|
| Price/RRP (USD) | $319 [+10%] | $289 | $299 | $299 | Price is 10% higher. |
| Price Efficiency (Perf. vs. Cost) (Points/USD) | 47.08 [+12%] | 42.08 | 44.75 | 30.20 | Overall 12% more performance for the price. |
With the price only ~10% higher, the bang-for-buck has still increased (+12%), making the 13600K/RPL much better value than ADL. It also allows it to overtake Zen4 (7600X) as better value, though there’s not a lot in it.
| Native Benchmarks | 13600K (RPL) | 12600K (ADL) | 7600X (Zen4) | 5600X (Zen3) | Comments |
|---|---|---|---|---|---|
| Power/TDP – Turbo (W) | 125 – 181W [+21%] | 125 – 150W | 105 – 142W | 65 – 88W | Turbo power is 21% higher. |
| Power Efficiency (Perf. vs. Power) (Points/W) | 82.98 [+2%] | 81.07 | 94.23 | 102.61 | RPL is 2% more efficient than ADL. |
With turbo power 21% higher than ADL, the 13600K/RPL is thus just 2% more power efficient – not enough to beat either of AMD’s Zen4/Zen3, which is a bit disappointing considering the whole point of the hybrid design. Intel still has some work to do here.

Final Thoughts / Conclusions

Summary: Faster and good value (Intel i5 13600K(F)): 9/10

Like every revision (ex-“tock”) of an Intel arch(itecture), 13th Gen(eration) “RaptorLake” (RPL) is an improved 12th Gen “AlderLake” (ADL) that ends up much faster and thus more efficient (both price-wise and even power-wise), returning Intel to competitiveness against AMD’s latest Zen4 – battling for the top spot.

Intel has managed to increase the clocks of the big Cores and almost double their L2 cache (2MB/core vs. 1.25MB/core); it has also doubled the number of Little Atom cores (8c vs. 4c) – as well as their cluster L2 cache (4MB vs. 2MB). This means the 13600K(F) has more than twice the number of physical cores (14 vs. 6) compared to AMD’s 7600X – and more threads (20T) than the higher-tier 7700X (16T)!

Across all benchmarks, we see the RPL/13600K(F) 24% faster than the ADL/12600K(F) – not bad for an evolutionary architecture update! RPL also benefits from more mature software (e.g. Windows 11 22H2) and microcode/BIOS. However, power usage has gone up as well – though this seems to be happening to the competition too (AMD Zen4).

It is disappointing that Intel was not able to enable AVX512 on RPL – but that was very unlikely as the Atom cores are unchanged from ADL. AMD has shown with Zen4 (and even VIA/Centaur) that you don’t need a full-width 512-bit implementation to benefit from AVX512 – and Intel should consider it for future Atom cores. Sandra’s benchmark results gain significant uplift from AVX512 and here RPL is at a distinct disadvantage versus Zen4.

Talking power efficiency – to aggressively save power, you could use the good number of Little Atom cores (8!) to handle most workloads and keep the big Cores (6) parked. Effectively, unless gaming/benchmarking/etc., you’d be running an ultra-efficient, many-threaded Atom system – but still able to crank it up when needed.

If you have already upgraded to Socket 1700 for the new technologies (DDR5, PCIe 5.0, Thunderbolt/USB 4.0, etc.) and want more, then RPL is a nice upgrade. As the next Intel core arch, “MeteorLake” (MTL), will use a different socket, RPL does not have great upgrade potential – it may be an idea to wait for discounts from Intel or AMD before making a choice…

Summary: Faster and good value (Intel i5 13600K(F)): 9/10

