Intel 13th Gen Core AlderLake-Refresh (i5 13400) Review & Benchmarks – Value Hybrid Efficiency

What is “RaptorLake”?

It is the next-generation (13th Gen) Core architecture, replacing the current “AlderLake” (12th Gen), and thus the 3rd-generation “hybrid” (aka “big.LITTLE”) architecture Intel has released. As before, it combines big/P(erformant) “Core” cores with LITTLE/E(fficient) “Atom” cores in a single package and covers everything from desktops and laptops to tablets and even (low-end) servers.

  • Desktop (S) (65-125W rated, up to 253W turbo)
    • 8C (aka big/P) + 16c (aka LITTLE/E) / 32T total (13th Gen Core i9-13900K(F))
    • 2x as many LITTLE/E cores as ADL
  • High-Performance Mobile (H/HX) (45-55W rated, up to 157W turbo)
    • 8C + 16c / 32T total
  • Mobile (P) (20-28W rated, up to 64W turbo)
    • 4/6C + 8c / up to 20T total
  • Ultra-Mobile/ULV (U) (9-15W rated, 29W turbo)
    • 2C + 8c / 12T total

For best performance and efficiency, this does require operating system scheduler changes – in order for threads to be assigned on the appropriate physical core/thread. For compute-heavy/low-latency this means a “big/P” core; for low compute/power-limited this means a “LITTLE/E” core.

In the Windows world, this means “Windows 11” for clients and “Windows Server vNext” (note not the recently released Server 2022 based on 21H2 Windows 10 kernel) for servers. The Windows power plans (e.g. “Balanced“, “High Performance“, etc.) contain additional settings (hidden), e.g. prefer (or require) scheduling on big/P or LITTLE/E cores and so on. But in general, the scheduler is supposed to automatically handle it all based on telemetry from the CPU.

Windows 11 also gets an updated QoS (Quality of Service) API (aka functions) allowing app(lications) like Sandra to indicate which threads should use big/P cores and which LITTLE/E cores. Naturally, this means updated applications will be needed for best power efficiency.
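To illustrate the kind of hint involved – a minimal sketch using the public Win32 API (not Sandra's actual code; requires a reasonably recent Windows 10/11 SDK) – a thread can request “EcoQoS” (power-efficiency) scheduling, which typically lands it on LITTLE/E cores, or explicitly opt back into normal QoS:

```cpp
// Minimal sketch: give a worker thread an "EcoQoS" (power-efficiency) hint,
// which typically lands it on LITTLE/E cores - or explicitly opt it back out.
// This is NOT Sandra's implementation, just the documented Win32 API usage.
#include <windows.h>

bool SetThreadEfficiencyHint(HANDLE thread, bool preferEfficiency)
{
    THREAD_POWER_THROTTLING_STATE state = {};
    state.Version     = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    // StateMask set   -> throttle execution speed (EcoQoS, prefer E cores)
    // StateMask clear -> explicitly request normal QoS (prefer P cores)
    state.StateMask   = preferEfficiency ? THREAD_POWER_THROTTLING_EXECUTION_SPEED : 0;
    return SetThreadInformation(thread, ThreadPowerThrottling,
                                &state, sizeof(state)) != FALSE;
}

// Usage: SetThreadEfficiencyHint(GetCurrentThread(), true); // background work
```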

Intel Core i5 13400 (AlderLake Refresh) 6C + 4c

General SoC Details

  • 10nm+++ (Intel 7+) improved process
  • Unified 36MB L3 cache (vs. 30MB on ADL thus 20% larger)
  • PCIe 5.0 (up to 64GB/s with x16 lanes) – up to x16 lanes PCIe5 + x4 lanes PCIe4
    • NVMe SSDs may thus be limited to PCIe4 or bifurcate main x16 lanes with GPU to PCIe5 x8 + x8
  • PCH up to x12 lanes PCIe4 + x16 lanes PCIe3
    • CPU to PCH DMI 4 x8 link (aka PCIe4 x8)
  • DDR5/LP-DDR5 memory controller support (e.g. 4x 32-bit channels) – up to 5600MT/s official (vs. 4800MT/s on ADL) – see the quick bandwidth arithmetic after this list
    • XMP 3.0 (eXtreme Memory Profile(s)) specification for overclocking with 3 profiles and 2 user-writable profiles (!)
  • Thunderbolt 4 (and thus USB 4)
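For reference, the headline figures above work out roughly as follows – back-of-the-envelope arithmetic only (theoretical peaks; real-world throughput is lower due to protocol and controller overheads):

```cpp
// Back-of-the-envelope peak-bandwidth arithmetic for the figures quoted above.
#include <cstdio>

int main()
{
    // PCIe 5.0: 32 GT/s per lane, 128b/130b encoding, per direction.
    constexpr double pcie5_x16 = 32.0e9 * (128.0 / 130.0) / 8.0 * 16 / 1e9; // ~63 GB/s
    // DMI 4.0 x8 is electrically PCIe 4.0 x8: 16 GT/s per lane.
    constexpr double dmi4_x8   = 16.0e9 * (128.0 / 130.0) / 8.0 * 8  / 1e9; // ~15.8 GB/s
    // DDR5-5600, dual channel (4x 32-bit sub-channels = 128 bits): 5600 MT/s * 16 bytes.
    constexpr double ddr5_5600 = 5600.0e6 * 16 / 1e9;                       // ~89.6 GB/s

    std::printf("PCIe 5.0 x16 : %.1f GB/s per direction\n", pcie5_x16);
    std::printf("DMI 4.0  x8  : %.1f GB/s per direction\n", dmi4_x8);
    std::printf("DDR5-5600 2ch: %.1f GB/s peak\n", ddr5_5600);
}
```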

big/P(erformance) “Core” core

  • Up to 8C/16T “Raptor Cove” (!) cores – improved from “Golden Cove” in ADL 😉
  • AVX512 disabled (!) in order to match the Atom cores (on consumer parts)
    • (Server versions ADL-EX support AVX512 and new extensions like AMX and FP16 data-format)
    • Single FMA-512 unit (though disabled)
  • SMT support still included, 2x threads/core – thus 16 total
  • L1I remains at 32kB
  • L1D remains at 48kB
  • L2 increased to 2MB per core (almost 2x ADL) like server parts (ADL-EX)

LITTLE/E(fficient) “Atom” core

  • Up to 16c/16T “Gracemont” cores – thus 2x more than ADL but same core
  • No SMT support, only 1 thread/core – thus 16 total (in 4x modules of 4x threads)
  • AVX/AVX2 support – first for Atom core, but no AVX512!
    • (Recall that “Phi” GP-GPU accelerator w/AVX512 was based on Atom core)
  • L1I still at 64kB
  • L1D still at 32kB
  • L2: 4MB shared per 4-core cluster (2x larger than ADL) – see the cache-detection sketch after this list
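Because cache sizes now differ per core type – and, as discussed later, between ADL-R and RPL – they have to be queried on a core of each type. Below is a minimal, hedged sketch (Windows/MSVC assumed, first 64 logical CPUs only; this is not Sandra's detection code) that pins to each logical CPU in turn and reads the L2 size via CPUID leaf 0x04:

```cpp
// Minimal sketch: pin to one logical CPU, then enumerate CPUID leaf 0x04
// (deterministic cache parameters) to report that core's L2 size. On hybrid
// parts the answer differs between big/P ("Core") and LITTLE/E ("Atom") cores.
#include <windows.h>
#include <intrin.h>
#include <cstdio>

static unsigned L2SizeKB()
{
    for (int sub = 0; ; ++sub) {
        int r[4];
        __cpuidex(r, 0x04, sub);
        const unsigned type  = r[0] & 0x1F;        // 0 = no more caches
        const unsigned level = (r[0] >> 5) & 0x7;
        if (type == 0) break;
        if (level == 2) {
            const unsigned ways  = ((r[1] >> 22) & 0x3FF) + 1;
            const unsigned parts = ((r[1] >> 12) & 0x3FF) + 1;
            const unsigned line  = ( r[1]        & 0xFFF) + 1;
            const unsigned sets  =   r[2] + 1;
            return ways * parts * line * sets / 1024;
        }
    }
    return 0;
}

int main()
{
    SYSTEM_INFO si; GetSystemInfo(&si);
    for (DWORD cpu = 0; cpu < si.dwNumberOfProcessors; ++cpu) {
        // Single processor-group sketch: pin the thread to this logical CPU.
        SetThreadAffinityMask(GetCurrentThread(), 1ull << cpu);
        std::printf("logical CPU %2lu: L2 = %u kB\n", cpu, L2SizeKB());
    }
}
```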

As with ADL, RPL’s big “Raptor Cove” cores have AVX512 disabled, which may prove to be a (big) problem considering AMD’s upcoming Zen4 (Ryzen 7000?) will support it. Even Centaur’s “little” CNS CPU supported AVX512. Centaur has now been bought by Intel, possibly in order to provide a little AVX512-supporting core – we may yet see Intel big Core + ex-Centaur LITTLE core designs.

While some are not keen on AVX512 due to the relatively large power draw required to use it (and thus lower clocks), as well as the large number of extensions (F, BW, CD, DQ, ER, IFMA, PF, VL, BF16, FP16, VAES, VNNI, etc.) – the performance gains should not be underestimated (2x if not higher). Most processors no longer need to “clock down” (AVX512 negative offset) and can run at full speed – power/thermal limits notwithstanding. Now that AMD and ex-Centaur support AVX512, it is no longer an Intel-only, server-only instruction set (ISA).

RPL uses the very same “Gracemont” Atom cores as ADL – with no changes except the 2x larger cluster L2 (4MB vs. 2MB), which is welcome especially in light of the big cores also getting an L2 cache size upgrade. While AVX2 support for the Atom cores was a huge upgrade, tests have shown them not to be as power efficient as Intel would like us to believe – which is why RPL has more of them, but clocked lower, where efficiency is greater.

As we hypothesized in our article (Mythical Intel 12th Gen Core AlderLake 10C/20T big/P Cores (i11-12999X) – AVX512 Extrapolated Performance) – ADL would have been great if Intel could have provided a version with only 10 big cores (replacing the 2x Little-core clusters) that could have been an AVX512 SIMD-performance monster, trading blows with the 16-core Zen3 (Ryzen 5950X). With RPL having space for 2 extra clusters – Intel could have had 10C + 8c, or even 12C (big) AVX512-supporting cores, to go against Zen4…

Alas, what we are getting across the SKUs is the same number of big cores (be they 8, 6, 4 or 2) and 2x the clusters of Little cores (thus 2x more Little cores), presumably at lower clocks in order to improve power efficiency. One issue with ADL across the SKUs is that while TDP (on paper) is reasonable, turbo power has blown way past even AVX512-supporting “RocketLake” (!) despite the new efficiency claims. Thus, while disappointing, it is clear Intel is trying to bring power back under control.

Changes in Sandra to support Hybrid

Like Windows (and other operating systems), we have had to make extensive changes to detection, thread scheduling and the benchmarks themselves in order to support hybrid/big-LITTLE. Thankfully, this means we are not dependent on Windows support – you can confidently test AlderLake (and RaptorLake) on older operating systems (e.g. Windows 10 or earlier – or Server 2022/2019/2016 or earlier) – although it is probably best to run the very latest operating system for the best overall (outside benchmarking) computing experience.

  • Detection Changes
    • Detect big/P and LITTLE/E cores
    • Detect correct number of cores (and type), modules and threads per core -> topology
    • Detect correct cache sizes (L1D, L1I, L2) depending on core
    • Detect multipliers depending on core
  • Scheduling Changes

    • “All Threads (MT/MC)” (thus all cores + all threads) – e.g. 32T
      • “All Cores (MC aka big+LITTLE) Only” (both core types, no SMT threads) – thus 24T
    • “All Threads big/P Cores Only” (only “Core” cores + their threads) – thus 16T
      • “big/P Cores Only” (only “Core” cores) – thus 8T
      • “LITTLE/E Cores Only” (only “Atom” cores) – thus 16T
    • “Single Thread big/P Core Only” (a single “Core” core) – thus 1T
    • “Single Thread LITTLE/E Core Only” (a single “Atom” core) – thus 1T
  • Benchmarking Changes
    • Dynamic/Asymmetric workload allocator – based on each thread’s compute power (see the allocation sketch after this list)
      • Note some tests/algorithms are not well-suited for this (here P threads finish first and wait for the E threads – performance is thus effectively limited by the E threads). Different ways to test such algorithm(s) will be needed.
    • Dynamic/Asymmetric buffer sizes – based on each thread’s L1D caches
      • Memory/Cache buffer testing using different block/buffer sizes for P/E threads
      • Algorithms (e.g. GEMM) using different block sizes for P/E threads
    • Best performance core/thread default selection – based on test type
      • Some tests/algorithms run best just using cores only (SMT threads would just add overhead)
      • Some tests/algorithms (streaming) run best just using big/P cores only (E cores just too slow and waste memory bandwidth)
      • Some tests/algorithms sharing data run best on same type of cores only (either big/P or LITTLE/E) (sharing between different types of cores incurs higher latencies and lower bandwidth)
    • Reporting the Performance Contribution & Ratio of each thread
      • Thus the big/P and LITTLE/E cores contribution for each algorithm can be presented. In effect, this allows better optimisation of algorithms tested, e.g. detecting when either big/P or LITTLE/E cores are not efficiently used (e.g. overloaded)
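To illustrate the allocator idea referenced above – a simplified sketch only, not Sandra's actual allocator – rather than giving every thread an equal slice, work is split in proportion to each thread's measured throughput so that big/P and LITTLE/E threads finish at roughly the same time:

```cpp
// Simplified sketch of an asymmetric workload split: each worker gets a chunk
// proportional to its measured throughput ("score"), so big/P and LITTLE/E
// threads finish at roughly the same time instead of P threads idling.
#include <cstddef>
#include <numeric>
#include <vector>

std::vector<std::size_t> SplitWork(std::size_t totalItems,
                                   const std::vector<double>& threadScore)
{
    const double totalScore = std::accumulate(threadScore.begin(),
                                              threadScore.end(), 0.0);
    std::vector<std::size_t> chunk(threadScore.size(), 0);
    std::size_t assigned = 0;
    for (std::size_t i = 0; i < threadScore.size(); ++i) {
        chunk[i]  = static_cast<std::size_t>(totalItems * threadScore[i] / totalScore);
        assigned += chunk[i];
    }
    chunk.back() += totalItems - assigned;  // give any rounding remainder to the last thread
    return chunk;
}

// Example: 6 P threads each scored ~3x a LITTLE/E thread:
//   SplitWork(16'000'000, {3,3,3,3,3,3, 1,1,1,1})
//   -> each P thread ~2.18M items, each E thread ~0.73M items.
```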

Given all the above, you could forgive some developers for restricting their software to the big/Performance threads only and ignoring the LITTLE/Efficient threads altogether – at least for compute-heavy algorithms.
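For completeness, this is roughly what “big/P cores only” looks like using the public Windows CPU-set API – a hedged sketch, not Sandra's scheduler. Windows reports an EfficiencyClass per CPU set, with higher values indicating the more performant (big/P) cores:

```cpp
// Sketch: restrict the current thread to the most performant cores only,
// using the Windows CPU-set API (EfficiencyClass: higher = more performant).
// Not Sandra's scheduler - just the public Win32 building blocks.
#include <windows.h>
#include <vector>

bool RestrictToPerformanceCores()
{
    ULONG len = 0;
    GetSystemCpuSetInformation(nullptr, 0, &len, GetCurrentProcess(), 0);
    std::vector<char> buffer(len);
    auto* info = reinterpret_cast<SYSTEM_CPU_SET_INFORMATION*>(buffer.data());
    if (!GetSystemCpuSetInformation(info, len, &len, GetCurrentProcess(), 0))
        return false;

    // Find the highest EfficiencyClass present (the big/P cores on hybrid CPUs).
    BYTE best = 0;
    for (char* p = buffer.data(); p < buffer.data() + len; ) {
        auto* e = reinterpret_cast<SYSTEM_CPU_SET_INFORMATION*>(p);
        if (e->Type == CpuSetInformation && e->CpuSet.EfficiencyClass > best)
            best = e->CpuSet.EfficiencyClass;
        p += e->Size;
    }

    // Collect the IDs of all CPU sets in that class and bind the thread to them.
    std::vector<ULONG> ids;
    for (char* p = buffer.data(); p < buffer.data() + len; ) {
        auto* e = reinterpret_cast<SYSTEM_CPU_SET_INFORMATION*>(p);
        if (e->Type == CpuSetInformation && e->CpuSet.EfficiencyClass == best)
            ids.push_back(e->CpuSet.Id);
        p += e->Size;
    }
    return SetThreadSelectedCpuSets(GetCurrentThread(), ids.data(),
                                    static_cast<ULONG>(ids.size())) != FALSE;
}
```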

For this reason we recommend using the very latest version of Sandra and keeping up with updated versions, which fix bugs and improve performance and stability.

But is it RaptorLake or AlderLake-Refresh?

Unfortunately, it seems that not all CPUs labelled “13th Gen” will be “RaptorLake” (RPL); some mid-range i5 and low-end i3 models will instead come with “AlderLake” Refresh (ADL-R) cores, which is likely to confuse ordinary buyers into purchasing these older-generation CPUs.

What is more confusing is that the ID (aka CPUID) of these 13th Gen ADL-R/RPL models is the same (e.g. 0B067x) and does not match the old ADL (e.g. 09067x). However, the L2 cache sizes are the same as the old ADL (1.25MB per big/Core and 2MB per LITTLE/Atom cluster), not the larger RPL sizes (2MB per big/Core and 4MB per LITTLE/Atom cluster).

Note: There is still a possibility these are actually RPL cores but with L2 cache(s) reduced (part disabled/fused off) in order not to outperform higher models.

  • Core i5 13600/T, 13500/T – ADL-R ;(
  • Core i5 13400/F – either RPL 🙂 or ADL-R ;(
  • Core i3 13100/F/T – ADL-R ;(
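For the curious, the values above can be queried directly via CPUID – a minimal sketch (MSVC intrinsics assumed; Sandra's detection is far more thorough). Leaf 1 returns the family/model signature (e.g. 0B067x vs. 09067x), leaf 7 the hybrid flag, and leaf 0x1A the type of the core the code is currently running on:

```cpp
// Sketch: query the CPUID values discussed above. Run it pinned to each
// logical CPU (see the affinity example earlier) to see both core types.
#include <intrin.h>
#include <cstdio>

int main()
{
    int r[4];

    __cpuid(r, 0x01);                               // leaf 1: family/model signature
    const unsigned sig = (unsigned)r[0] >> 4;       // drop the stepping nibble
    std::printf("CPUID signature: 0x%05Xx\n", sig); // e.g. 0B067x (13th Gen), 09067x (ADL)

    __cpuidex(r, 0x07, 0);                          // leaf 7: feature flags
    const bool hybrid = (r[3] >> 15) & 1;           // EDX[15] = hybrid part
    std::printf("Hybrid part    : %s\n", hybrid ? "yes" : "no");

    if (hybrid) {
        __cpuid(r, 0x1A);                           // leaf 0x1A: native model ID / core type
        const unsigned coreType = (unsigned)r[0] >> 24;  // EAX[31:24]
        std::printf("This core      : %s\n",
                    coreType == 0x40 ? "big/P (Core)" :
                    coreType == 0x20 ? "LITTLE/E (Atom)" : "unknown");
    }
}
```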

CPU (Core) Performance Benchmarking

In this article we test CPU (core) performance.

Hardware Specifications

We are comparing the Intel CPU with the previous-generation design as well as the competition (AMD), with a view to upgrading to a modern, high-performance design.

| Specifications | Intel Core i5 13400 6C+4c/16T big/P+LITTLE/E (ADL-R) | Intel Core i5 12400 6C/12T big/P (ADL) | AMD Ryzen 5 5500 6C/12T (Zen3, Cezanne) | AMD Ryzen 5 5600 6C/12T (Zen3, Vermeer) | Comments |
|---|---|---|---|---|---|
| Arch(itecture) | Golden Cove Refresh + Gracemont (AlderLake Refresh) | Golden Cove (AlderLake) | Zen3 / Cezanne | Zen3 / Vermeer | Can be ADL-R or RPL (!) |
| Modules (CCX) / Cores (CU) / Threads (SP) | 6C + 4c / 16T | 6C / 12T | 6C / 12T | 6C / 12T | 4 LITTLE cores! |
| Rated/Turbo Speed (GHz) | ? – 4.6GHz / ? – 2.5GHz | 2.5 – 4.4GHz | 3.6 – 4.2GHz | 3.5 – 4.4GHz | 5% higher Turbo |
| Rated/Turbo Power (W) | 65 – 117W [PL2] | 65 – 117W [PL2] | 65 – 88W [PPT] | 65 – 88W [PPT] | TDP/Turbo is the same. |
| L1D / L1I Caches | 6x 48/32kB + 4x 32/64kB | 6x 48/32kB | 6x 32kB 8-way / 6x 32kB 8-way | 6x 32kB 8-way / 6x 32kB 8-way | Same L1D/L1I caches |
| L2 Caches | 6x 1.25MB + 2MB (9.5MB) [+27%] | 6x 1.25MB (7.5MB) | 6x 512kB (3MB) | 6x 512kB (3MB) | L2 same as ADL not RPL |
| L3 Cache(s) | 20MB [+11%] | 18MB | 16MB | 32MB | L3 is 11% larger |
| Microcode (Firmware) | 0B0671-108 [B0 stepping] | 090672-1E [C0 stepping] | A20F10-1003 | 8F7100-1009 | Revisions just keep on coming. |
| Special Instruction Sets | VNNI/256, SHA, VAES/256 | VNNI/256, SHA, VAES/256 | AVX2/FMA, SHA | AVX2/FMA, SHA | AVX512 still MIA |
| SIMD Width / Units | 256-bit | 256-bit | 256-bit | 256-bit | Same SIMD units |
| Price / RRP (USD) | $219 | $199 | $159 | $199 | Price is a little higher? |

Disclaimer

This is an independent review (critical appraisal) that has not been endorsed nor sponsored by any entity (e.g. Intel, etc.). All trademarks acknowledged and used for identification only under fair use.

The review contains only public information, none of it provided under NDA nor embargoed. At publication time, the products have not been directly tested by SiSoftware but submitted to the public Benchmark Ranker; thus the accuracy of the benchmark scores cannot be verified; however, they appear consistent and pass current validation checks.

And please, don’t forget small ISVs like ourselves in these very challenging times. Please buy a copy of Sandra if you find our software useful. Your custom means everything to us!

SiSoftware Official Ranker Scores

Native Performance

We are testing native arithmetic, SIMD and cryptography performance using the highest-performing instruction sets. Neither “RaptorLake” (RPL) nor ADL-R supports AVX512 – but they do support 256-bit versions of some AVX512 extensions (e.g. VNNI/256, VAES/256).

Results Interpretation: Higher values (GOPS, MB/s, etc.) mean better performance.

Environment: Windows 11 x64 (22H2), latest AMD and Intel drivers. 2MB “large pages” were enabled and in use. Turbo / Boost was enabled on all configurations.
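As a side note, since 2MB “large pages” were in use for these runs: on Windows this requires the SeLockMemoryPrivilege (“Lock pages in memory”) right and an explicit allocation flag. A minimal sketch (not Sandra's allocator):

```cpp
// Sketch: allocate a buffer backed by 2MB "large pages" on Windows.
// Requires the SeLockMemoryPrivilege ("Lock pages in memory") right,
// which must be granted to the account and enabled on the process token.
#include <windows.h>
#include <cstdio>

static bool EnableLockMemoryPrivilege()
{
    HANDLE token;
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &token))
        return false;
    TOKEN_PRIVILEGES tp = {};
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    LookupPrivilegeValue(nullptr, SE_LOCK_MEMORY_NAME, &tp.Privileges[0].Luid);
    const bool ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, nullptr, nullptr)
                    && GetLastError() == ERROR_SUCCESS;
    CloseHandle(token);
    return ok;
}

int main()
{
    if (!EnableLockMemoryPrivilege())
        std::printf("SeLockMemoryPrivilege not available - large pages will fail\n");

    const SIZE_T large = GetLargePageMinimum();     // typically 2MB on x64
    void* buf = VirtualAlloc(nullptr, 16 * large,
                             MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                             PAGE_READWRITE);
    std::printf("large page size: %zu kB, allocation %s\n",
                large / 1024, buf ? "succeeded" : "failed");
    if (buf) VirtualFree(buf, 0, MEM_RELEASE);
}
```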

| Native Benchmarks | Intel Core i5 13400 6C+4c/16T big/P+LITTLE/E (ADL-R) | Intel Core i5 12400 6C/12T big/P (ADL) | AMD Ryzen 5 5500 6C/12T (Zen3, Cezanne) | AMD Ryzen 5 5600 6C/12T (Zen3, Vermeer) | Comments |
|---|---|---|---|---|---|
| CPU Arithmetic Benchmark Native Dhrystone Integer (GIPS) | 334 | 327 | 237 | – | Waiting for results |
| CPU Arithmetic Benchmark Native Dhrystone Long (GIPS) | 348 | 326 | 240 | – | Waiting for results |
| CPU Arithmetic Benchmark Native FP32 (Float) Whetstone (GFLOPS) | 209 | 205 | 204 | – | Waiting for results |
| CPU Arithmetic Benchmark Native FP64 (Double) Whetstone (GFLOPS) | 170 | 168 | 172 | – | Waiting for results |
| BenchCpuMM Native Integer (Int32) Multi-Media (Mpix/s) | 1,251 [+32%] | 951 | 1,413 | 1,433 | ADL-R is 32% faster. |
| BenchCpuMM Native Long (Int64) Multi-Media (Mpix/s) | 428 [+22%] | 350 | 488 | 504 | With a 64-bit workload, ADL-R is 22% faster. |
| BenchCpuMM Native Quad-Int (Int128) Multi-Media (Mpix/s) | 78.97 [+13%] | 69.98 | 87.47 | 97.5 | Using 64-bit ints to emulate Int128, ADL-R is 13% faster. |
| BenchCpuMM Native Float/FP32 Multi-Media (Mpix/s) | 1,336 [+21%] | 1,108 | 1,296 | 1,342 | In this floating-point vectorised test ADL-R is 21% faster. |
| BenchCpuMM Native Double/FP64 Multi-Media (Mpix/s) | 683 [+19%] | 572 | 668 | 695 | Switching to FP64, ADL-R is 19% faster. |
| BenchCpuMM Native Quad-Float/FP128 Multi-Media (Mpix/s) | 33 [+23%] | 27 | 26.96 | 28 | Using FP64 to mantissa-extend FP128, ADL-R is 23% faster. |
With heavily vectorised SIMD workloads, the new ADL-R (13400) – with the help of the Little/Atom cores – is about 22% faster than the old ADL. In effect, we can see how much 4 Little/Atom cores contribute to the existing 6 big Cores of the old CPU.

This helps it be competitive against AMD’s Zen3 (5500/5600), something that required a 12600K in the past. It is likely to be just enough against the latest AVX512-enabled Zen4 (7600).

Note:* using AVX512 instead of AVX2/FMA.

Note:** using AVX512-IFMA52 to emulate 128-bit integer operations (int128).

| Native Benchmarks | i5-13400 (ADL-R) | i5-12400 (ADL) | Ryzen 5 5500 | Ryzen 5 5600 | Comments |
|---|---|---|---|---|---|
| BenchCrypt Crypto AES-256 (GB/s) | 24.84 [+15%] | 21.65 | 18.84 | 19.31 | ADL-R still gains 15% over the older CPU. |
| BenchCrypt Crypto AES-128 (GB/s) | | | | | What we saw with AES-256 just repeats with AES-128. |
| BenchCrypt Crypto SHA2-256 (GB/s) | 18.54* [+50%] | 12.32* | 18.18* | 19.23* | With SHA, ADL-R is a good 50% faster. |
| BenchCrypt Crypto SHA1 (GB/s) | | | | | The less compute-intensive SHA1 does not change things due to acceleration. |
As streaming tests (crypto/hashing) are memory bound, the faster memory lets ADL-R feed the extra Little/Atom cores: it gains a decent 15%, while hashing improves by a large 50%.

Again, this allows it to be competitive with AMD’s Zen3 and even overtake them thanks to higher-bandwidth DDR5 memory. In any case, it seems the Little Atom cores have great crypto-accelerated performance (AES/SHA HWA).

Note *: using SHA HWA (hardware acceleration) not SIMD (e.g. AVX512, AVX2, AVX, etc.)

Note **: using VAES 256-bit (AVX2) or 512-bit (AVX512)

Note ***: using AVX512 not AVX2.

| Native Benchmarks | i5-13400 (ADL-R) | i5-12400 (ADL) | Ryzen 5 5500 | Ryzen 5 5600 | Comments |
|---|---|---|---|---|---|
| BenchFinance Black-Scholes float/FP32 (MOPT/s) | | | | | The standard financial algorithm. |
| BenchFinance Black-Scholes double/FP64 (MOPT/s) | 275 [+18%] | 234 | 240 | 242 | Switching to FP64 code, ADL-R is 18% faster. |
| BenchFinance Binomial float/FP32 (kOPT/s) | | | | | Binomial uses thread-shared data, thus stressing the cache & memory system. |
| BenchFinance Binomial double/FP64 (kOPT/s) | 81.17 [+29%] | 63 | 71.74 | 71.71 | With FP64 code ADL-R is 29% faster. |
| BenchFinance Monte-Carlo float/FP32 (kOPT/s) | | | | | Monte-Carlo also uses thread-shared data but read-only, reducing modify pressure on the caches. |
| BenchFinance Monte-Carlo double/FP64 (kOPT/s) | 109 [+27%] | 85.84 | 99.3 | 97.57 | Here ADL-R is 27% faster. |
AMD’s Zen CPUs have always done well in non-SIMD floating-point algorithms – but here ADL-R (13400) shows the times are changing: with a ~24% improvement over the old ADL, it is enough to beat both Zen3 competitors (5500/5600) as well. The Little Atom cores do well in non-SIMD-heavy workloads.
| Native Benchmarks | i5-13400 (ADL-R) | i5-12400 (ADL) | Ryzen 5 5500 | Ryzen 5 5600 | Comments |
|---|---|---|---|---|---|
| BenchScience SGEMM (GFLOPS) float/FP32 | | | | | A tough vectorised algorithm that is widely used (e.g. AI/ML). |
| BenchScience DGEMM (GFLOPS) double/FP64 | 207 [+42%] | 146 | 209 | 186 | ADL-R is 42% faster. |
| BenchScience SFFT (GFLOPS) float/FP32 | | | | | FFT is also heavily vectorised but stresses the memory sub-system more. |
| BenchScience DFFT (GFLOPS) double/FP64 | 13.63 [-5%] | 14.4 | 7.54 | 11.79 | With FP64 code, ADL-R is memory-latency bound. |
| BenchScience SNBODY (GFLOPS) float/FP32 | | | | | N-Body simulation is vectorised but has fewer memory accesses. |
| BenchScience DNBODY (GFLOPS) double/FP64 | 135 [+16%] | 116 | 157 | 168 | With FP64 ADL-R is 16% faster. |
The hybrid design of the ADL-R (13400) does not work as well in all algorithms, as the dynamic workload is harder to allocate; sometimes the bandwidth-sapping Little/Atom cores are best ignored. Thus the results vary between large gains and some (minor) regressions.

Faster DDR5 memory should also make a big difference, as systems adopt higher-clocked modules while yields and technologies improve.

Note*: using AVX512 not AVX2/FMA3.

| Native Benchmarks | i5-13400 (ADL-R) | i5-12400 (ADL) | Ryzen 5 5500 | Ryzen 5 5600 | Comments |
|---|---|---|---|---|---|
| CPU Image Processing Blur (3×3) Filter (MPix/s) | 3,000 [+13%] | 2,663 | 2,667 | 2,677 | In this vectorised integer workload ADL-R is 13% faster. |
| CPU Image Processing Sharpen (5×5) Filter (MPix/s) | 1,157 [+16%] | 999 | 1,001 | 1,004 | Same algorithm but more shared data – 16% faster. |
| CPU Image Processing Motion-Blur (7×7) Filter (MPix/s) | 581 [+17%] | 495 | 499 | 515 | Again the same algorithm but with even more data shared – 17% faster. |
| CPU Image Processing Edge Detection (2*5×5) Sobel Filter (MPix/s) | 979 [+9%] | 897 | 839 | 850 | With a different algorithm, ADL-R is 9% faster. |
| CPU Image Processing Noise Removal (5×5) Median Filter (MPix/s) | 84 [+23%] | 68 | 88 | 86 | Still vectorised code – ADL-R is 23% faster. |
| CPU Image Processing Oil Painting Quantise Filter (MPix/s) | 45 [+17%] | 38 | 27 | 28 | This test has always been tough – ADL-R is 17% faster. |
| CPU Image Processing Diffusion Randomise (XorShift) Filter (MPix/s) | 3,942 [+12%] | 3,527 | 2,140 | 2,629 | With an integer workload, ADL-R is 12% faster. |
| CPU Image Processing Marbling Perlin Noise 2D Filter (MPix/s) | 630 [+9%] | 579 | 539 | 574 | In this final test ADL-R is 9% faster. |
These tests love SIMD vectorised compute, and here Intel’s SIMD units show their power – though the Little Atom cores only add a ~14% uplift over the old ADL (12400). Still, this is enough to cement its leadership over AMD’s Zen3 competition and likely match AVX512-enabled Zen4.

Note*: using AVX512 not AVX2/FMA3.

Intel AlderLake Refresh 13400 (6C + 4c) Inter-Thread/Core HeatMap Latency (ns)

The inter-thread/core/module latency “heat-map” shows how latencies vary when transferring data off-thread (same L1D), off-core (same L3 for big Cores, but same L2 for Little Atom cores) and between different core types (big Core to Little Atom).

Still, judicious thread-pair scheduling is needed to keep latencies low (and, conversely, bandwidth high when large amounts of data are transferred).
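For context, such a heat-map is built from pairwise “ping-pong” measurements: two threads, pinned to the logical CPUs under test, bounce a value in a shared cache line, and half the round-trip time approximates the one-way latency. A bare-bones sketch (not Sandra's implementation):

```cpp
// Bare-bones inter-thread latency ping-pong: two threads pinned to chosen
// logical CPUs bounce a counter in a shared cache line; half the round-trip
// time approximates the one-way latency shown in the heat-map.
#include <windows.h>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

alignas(64) static std::atomic<int> flag{0};   // one cache line shared by both threads
constexpr int kIters = 200000;

static void Pin(DWORD cpu) { SetThreadAffinityMask(GetCurrentThread(), 1ull << cpu); }

double PingPongNs(DWORD cpuA, DWORD cpuB)
{
    flag.store(0);
    std::thread responder([&] {
        Pin(cpuB);
        for (int i = 0; i < kIters; ++i) {
            while (flag.load(std::memory_order_acquire) != 2 * i + 1) { }
            flag.store(2 * i + 2, std::memory_order_release);
        }
    });

    Pin(cpuA);
    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {
        flag.store(2 * i + 1, std::memory_order_release);
        while (flag.load(std::memory_order_acquire) != 2 * i + 2) { }
    }
    const auto t1 = std::chrono::steady_clock::now();
    responder.join();

    const double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
    return ns / kIters / 2.0;                  // half the round-trip = one-way latency
}

int main()
{
    std::printf("CPU 0 <-> CPU 2: %.1f ns\n", PingPongNs(0, 2));
}
```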

| Native Benchmarks | i5-13400 (ADL-R) | i5-12400 (ADL) | Ryzen 5 5500 | Ryzen 5 5600 | Comments |
|---|---|---|---|---|---|
| CPU Multi-Core Benchmark Total Inter-Thread Bandwidth – Best Pairing (GB/s) | 79.73 [+21%] | 65.9 | 58.86 | 69.39 | 21% more bandwidth than the old ADL. |
The additional Little Atom cores add about 21% to the overall bandwidth – and again this helps it overtake AMD’s Zen3 competition. For thread-pairs on Atom cores, the shared cluster L2 cache allows higher bandwidth than the unified L3 cache provides.

Zen3 (5500/5600) benefits from the single module/CCX design; older designs (Zen2/Zen1) forced thread-pairs to span modules/CCXes.

Note:* using AVX512 512-bit wide transfers.

| Native Benchmarks | i5-13400 (ADL-R) | i5-12400 (ADL) | Ryzen 5 5500 | Ryzen 5 5600 | Comments |
|---|---|---|---|---|---|
| CPU Multi-Core Benchmark Average Inter-Thread Latency (ns) | 43.4 [+22%] | 35.6 | 21.8 | 21.7 | Overall latencies are 22% higher. |
| CPU Multi-Core Benchmark Inter-Thread (Same Core) Latency (ns) | 15.2 [-9%] | 12.9 | 9.8 | 10.7 | Inter-Thread (big Core) latency is 9% lower. |
| CPU Multi-Core Benchmark Inter-Core (big Core, same Module) Latency (ns) | 39.6 [+4%] | 37.9 | 22.9 | 22.8 | Inter-big-Core latency is 4% higher. |
| CPU Multi-Core Benchmark Inter-Core (Little Core, same Module) Latency (ns) | 67.3 | | | | Inter-Atom-core latency is about 2x big-Core. |
| CPU Multi-Core Benchmark Inter-Module/CCX Latency (ns) | n/a | | | | |
As ADL-R adds thread-pairs on the Atom cores, the overall average latency does increase greatly, but the Inter-Thread and Inter-big-Core latencies of the big Cores remain about the same. The Inter-Little/Atom-core latencies are about 2x higher, which is to be expected.
| Native Benchmarks | i5-13400 (ADL-R) | i5-12400 (ADL) | Ryzen 5 5500 | Ryzen 5 5600 | Comments |
|---|---|---|---|---|---|
| Aggregate Score (Points) | 10,100 [+19%] | 8,470 | 8,150 | 9,330 | Across all benchmarks, ADL-R is 19% faster. |
Across all the benchmarks – thanks to the added Little Atom cores – the ADL-R (13400) ends up 19% faster than the ADL (12400), which is a decent improvement. While the Atom cores are nowhere near as powerful as the big Cores (4 of them add ~20% to 6 big Cores, i.e. about 1.2 big-Core-equivalents, thus each is worth roughly 1/3 of a big Core), they can bring efficiency improvements.

Note*: using AVX512 instead of AVX2/FMA3.

| | i5-13400 (ADL-R) | i5-12400 (ADL) | Ryzen 5 5500 | Ryzen 5 5600 | Comments |
|---|---|---|---|---|---|
| Price / RRP (USD) | $219 [+10%] | $199 | $159 | $199 | Price is 10% higher? |
| Price Efficiency (Perf. vs. Cost) (Points/USD) | 46.12 [+8%] | 42.56 | 51.26 | 46.88 | Overall 8% more performance for the price. |
With the price up slightly, ADL-R (13400) offers 8% more bang-per-buck than the old ADL (12400) and is also competitive with Zen3 (5600), though the low price of Zen3 (5500) is hard to beat.
| | i5-13400 (ADL-R) | i5-12400 (ADL) | Ryzen 5 5500 | Ryzen 5 5600 | Comments |
|---|---|---|---|---|---|
| Power / TDP (W) | 65 – 117W | 65 – 117W | 65 – 88W | 65 – 88W | TDP/Turbo remain the same. |
| Power Efficiency (Perf. vs. Power) (Points/W) | 86.32 [+19%] | 72.39 | 92.61 | 106 | Same efficiency gain as the performance delta, +19%. |
With both TDP and Turbo power unchanged, ADL-R (13400) is 19% more power efficient than the old ADL (12400), getting close to the efficiency of Zen3 (5500), though the other Zen3 (5600) is still much more efficient.

Final Thoughts / Conclusions

Summary: Hybrid Efficiency for Low Cost (13400 ADL-R): 7/10

Firstly, we’re absolutely not happy that a new generation (“13th Gen”) can contain a mix of new (RPL) and old/refreshed (ADL-R) models – with even the ID (aka CPUID) being the same (only the L2 cache sizes differ). This is unacceptable – it will no doubt confuse ordinary people into buying older CPUs masquerading as the latest gen(eration).

Effectively, the Core i5 13400 is a cheaper and lower-power 12th Gen Core i5 12600K, which used to be the cheapest hybrid ADL with Little/Atom cores. The price is roughly 30% lower ($219 vs. $319) and rated power is about half (65W TDP vs. 125W TDP).

Nevertheless, the new 13400 performs on average 19% better than the old 12400, with only a modest price increase. The Turbo clock is just 5% higher for the big Cores, thus most of the gain comes from the additional Little/Atom cores.

Due to the Little/Atom cores, it is likely more power efficient at low utilisation, when the operating system can use these cores and park the big Cores. If used in a server (virtualisation/NAS/etc.) running 24/7, this may bring some energy-cost savings, which can be important these days when electricity costs have massively increased.

Otherwise, there is not much more to be said – yes, it brings hybrid benefits to lower-end devices, but it is still disappointing that it does not use the latest arch (aka RPL), just a rebranded (refreshed) older arch (ADL-R). Go for a real RPL CPU if you can afford it. We’re being generous and docking only a couple of points for the rebrand.

Summary: Hybrid Efficiency for Low Cost (13400 ADL-R): 7/10

