The five best Intel CPUs ever
Selecting the top five CPUs from Intel's lineup is a formidable challenge.
Throughout its decades-long history, Intel has been at the forefront of computing, and its CPUs frequently top our lists of the best gaming CPUs and the best workstation CPUs. That long run at the top is exactly what makes picking just five so hard. Weighing performance, value, innovation, and historical reputation, we've narrowed the field to what we consider the best processors Chipzilla has made since its founding. This is by no means an exhaustive list, and some excellent CPUs have inevitably been left out.
Fifth Place: Intel 8086
Founded in 1968, Intel was a pioneer of the emerging semiconductor industry. It started out designing and manufacturing memory but moved into CPUs in the 1970s. The CPU business was promising territory: there were few competitors, and world firsts were there for the taking, such as the Intel 4004, which Intel bills as the "first general-purpose microprocessor," meaning it wasn't built for one specific application the way other processors of the era were.
At just 4 bits, the 4004 left plenty of room for improvement, and in 1978 Intel introduced its first 16-bit CPU, the Intel 8086.
Although Intel once claimed this was the world's first 16-bit CPU, it was not. In fact, Intel was playing catch-up with companies like Texas Instruments, which had introduced 16-bit chips earlier, and Motorola's 68000 and Zilog's Z8000 hit the market the following year, heating up the competition further. Intel believed the 8086 could compete, but it needed to convince the market to adopt it. Andy Grove, Intel's third employee, then its president and chief operating officer and later its CEO, launched a large-scale campaign in 1980 called "Operation Crush." According to author Nilakantasrinivasan J, "over 1000 employees were involved, dedicating themselves to committees, seminars, technical articles, new sales aids, and new sales incentive programs." A staggering $2 million was allocated to advertising alone, proclaiming that "the era of the 8086 has arrived."
The 8086 transformed the company: it captured 85% of the 16-bit processor market, and in 1986 Intel sold its memory business to focus entirely on CPUs. The 8086's x86 architecture has proved enduringly significant; it still powers PCs and servers to this day, making Intel's marketing claim almost prophetic.
The immense success of the 8086 caught the attention of IBM, which asked Intel for a cheaper version for its upcoming personal computer. Intel responded with the cut-down 8088, which used an 8-bit external bus instead of the 8086's 16-bit one. Regardless, the 8088-based personal computer (which simply became known as the PC) was a massive success, and with the 8086 family Intel had easily scored its most important design win.
Incidentally, IBM worried that Intel alone couldn't supply enough 8088 chips for the PC, so it asked Intel to line up a second manufacturing partner. Intel chose another company founded in 1968 that also made memory and CPUs: Advanced Micro Devices, or AMD. Though the two were partners in the '80s, Intel tried to squeeze AMD out in the '90s, and AMD ultimately won the rights to the x86 architecture and grew into a formidable competitor.
Fourth Place: Core i5-2500K
It didn't take long for AMD to become a thorn in Intel's side. Much of the 2000s was genuinely painful for Intel, as its NetBurst and Itanium architectures were outclassed by Athlon and Opteron. But it also didn't take long for Intel to regain its technological edge (helped along by some questionable marketing money) with its Core 2 CPUs. By the end of the 2000s, AMD was on the back foot while Intel was gaining momentum.
Intel pressed its advantage in January 2011, launching its second-generation Core CPUs on the new Sandy Bridge architecture. Compared to the first generation, these chips were no revolution: mainstream desktop parts still topped out at four cores, and Intel's Turbo Boost technology received only minor updates. There were some key upgrades, though: the 32nm node (which some first-generation CPUs had also used), roughly 10% higher IPC, and Quick Sync hardware video encoding.
Perhaps Sandy Bridge's most eye-catching innovation was unifying the CPU, the integrated graphics, and the memory controller on a single die. Although Intel and AMD have since gone back to putting graphics on separate dies in some processors, merging two pieces of silicon into one was a significant step forward at the time, especially for the memory controller.
We found that Sandy Bridge didn't deliver one massive upgrade in any single category so much as solid improvements across the board: it was much faster and more power-efficient than the first generation, and Quick Sync had no equivalent from AMD or Nvidia. The quad-core Core i5-2500K in particular was a fantastic CPU, offering four fast cores for $216, roughly $100 less than the $317 Core i7-2600K. The second generation didn't completely reshape the CPU landscape, but it left AMD a much larger mountain to climb. AMD didn't respond until October, almost a year later, and its response was judged poor at the time and looks even worse in hindsight. Although AMD's eight-core flagship FX-8150 cost only slightly more than the 2500K, we believe the 2500K remains the better chip.
Although Bulldozer's performance was clearly dismal, few predicted it would effectively mark AMD's exit from the high-performance CPU market, even in the mainstream. For the next five years, Intel reaped the rewards.
Third Place: Core i7-920
While Core 2 had put Intel back in the number-one spot for the first time in years, the company's position was not entirely secure. After all, the Core architecture had not been carefully planned from the start; Intel had rushed it out to replace the NetBurst-based Pentium 4 with a truly competitive product. Core 2 stayed ahead even after AMD launched its new Phenom series in 2007, but if Intel wanted to keep its lead and keep sales up, it would need a new line of CPUs.
Among its shortcomings, two stood out. First, Core 2 quad-core CPUs were built from two dual-core dies, because the original Core architecture was derived from the mobile Pentium M chip, which was never designed with quad-core in mind. Second, Intel had to drop Hyper-Threading, which provides two threads per core instead of one, again because the Pentium M lacked the feature that the later Pentium 4 CPUs had introduced.
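To make the core-versus-thread distinction concrete: with Hyper-Threading enabled, the operating system simply sees two logical processors for every physical core. A minimal sketch of how you can observe this, assuming Python with the third-party psutil package installed:

```python
import psutil  # third-party: pip install psutil

physical = psutil.cpu_count(logical=False)  # physical cores
logical = psutil.cpu_count(logical=True)    # logical processors (hardware threads)
print(f"{physical} physical cores, {logical} logical processors")

# On a Hyper-Threaded CPU such as Nehalem's quad-cores, logical == 2 * physical;
# without Hyper-Threading, the two counts are equal.
```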
With the introduction of the brand-new Nehalem architecture in 2008, Intel finally had a true quad-core CPU with Hyper-Threading, plus a new L3 cache and Turbo Boost technology. Architecturally, it was quite similar to what AMD had done with its K10-based Phenom CPUs, but with more refined execution. Nehalem's more advanced 45nm node was a nice advantage, too.
Intel's Nehalem processors ushered in the Core i era with three Core i7 CPUs, of which the Core i7-920, at $284, was the obvious mainstream pick. Although its 2.66GHz clock speed was far below the 3.2GHz of the $999 i7-965 Extreme, the i7-920's performance was remarkable, beating the fastest Core 2 Extreme in almost every benchmark in our review, often by a significant margin.
Intel's comeback had begun with Core 2, and with the first-generation Core i CPUs it was set to continue. Flagship to flagship, Intel's Core i7-965 Extreme was 64% faster than AMD's Phenom X4 9950 Black Edition. Considering that AMD had beaten Intel to putting four cores on a single die and to implementing an L3 cache, that must have stung. Even the six-core Phenom II CPUs couldn't keep up, and AMD was no longer the Pentium killer it had once been.
Intel didn't stop at quad-cores, either. At first, the higher core counts were six- and eight-core models limited to Xeon servers, built from two quad-core dies, essentially a repeat of the Core 2 approach. But once Intel shrank Nehalem to 32nm, it created true six-core chips and even a ten-core CPU, a world first. Intel was arguably further ahead of AMD than ever before, and the gap would keep widening over the next eight years.
Second Place: Core i9-13900K
In the decade following the launch of the Sandy Bridge-based second generation, Intel struggled to match its past prowess. At first this was because Intel had little incentive to compete, as AMD had essentially exited the market. But when AMD made its comeback in 2017 with Ryzen, Intel couldn't respond in kind. The company's 10nm node was delayed year after year, and Intel's lead kept slipping until, from 2019 onward, it ranked second in many categories. It wasn't until 2021, with the introduction of the 12th-generation Alder Lake CPUs, that the 10nm process became truly viable.
Although Alder Lake handily beat Ryzen 5000, it arrived a year after Ryzen 5000's release, meaning AMD's next-generation parts were already on the horizon. That put Intel in a bind: its 7nm/Intel 4 CPUs were far from ready, and worse, they relied on many cutting-edge, unproven technologies, so launching them against AMD's tried-and-tested Ryzen chips would be risky. Intel wanted neither to be late again nor to fail again, but avoiding both seemed impossible.
However, AMD's years of racking up victories had taught Intel two important lessons: a big generational increase in core count is a winning strategy, and more cache is a major boon for gaming performance. Intel already had an excellent CPU on hand, and spinning up versions with more cores and more cache wouldn't take long, potentially making them competitive with AMD's next-generation parts (albeit at the cost of higher power consumption).
The 2022 CPU showdown opened with September's Ryzen 7000 launch, where AMD's new flagship Ryzen 9 7950X proved much faster than both the Ryzen 9 5950X and the Core i9-12900K. That was no surprise: AMD had moved to TSMC's most advanced 5nm node, which enabled significantly higher clock speeds, and picked up some extra IPC as well. Whether the 13th-generation Raptor Lake CPUs could get within striking distance was an open question, since the Core i9-13900K's main upgrades were eight additional E-cores (rather than more of the faster P-cores), more L2 and L3 cache, and higher clock speeds.
Whatever the on-paper concerns, the Core i9-13900K proved just as fast as the 7950X when it launched the following month. In our review, the 13900K trailed the 7950X only slightly in productivity workloads and beat it by a considerable margin in gaming. Matching AMD's performance was impressive enough, but the 13th generation also beat AMD on price: the CPU itself was quite affordable, and with discounted LGA 1700 600-series motherboards and DDR4 RAM, you could build a 13th-generation PC very inexpensively.
The Ryzen 7000 CPUs, meanwhile, were relatively expensive for the performance, didn't cover the low end (they still don't), and required pricey new AM5 600-series motherboards plus DDR5 memory, which cost far more than DDR4 at the time. Ryzen 7000's two selling points, efficiency and a longer upgrade path, matter, but they're not paramount.
Although the 13th generation has technically been superseded by the 14th, both use the Raptor Lake architecture, and since 13th-gen parts are cheaper, they're generally the better buy. AMD has since addressed some of the pricing problems with Ryzen 7000 and AM5 motherboards, but Ryzen 7000 still generally costs more. Both companies plan to launch all-new architectures and CPUs later in 2024, which will bring this generation to a close.
First Place: Core 2 Quad Q6600
The early-to-mid 2000s were a tough period for Intel. In the PC business, the NetBurst-based Pentium 4 was a disaster: its power consumption was high, and it failed to reach the clock speeds Intel needed, leading to the cancellation of the 4GHz Pentium 4 and of its planned successor, Tejas. The lucrative server business was arguably even worse off, as Intel had poured billions of dollars into Itanium, which was incompatible with Intel's own x86 software ecosystem. When AMD launched Opteron with a 64-bit version of the x86 architecture, Itanium's days were numbered.
To keep the market on its side, Intel spent billions of dollars on marketing payments to Dell, HP, and other original equipment manufacturers to keep them from using AMD, which ultimately cost Chipzilla fines around the world. This was not a sustainable strategy (and its legality was obviously questionable), and with both NetBurst and Itanium at dead ends, Intel needed a new architecture, fast.
Fortunately for Intel, it had another architecture at its disposal: the Pentium M, a laptop chip its Haifa team had been working on in 2003. Because the Pentium M was based primarily on the Pentium III rather than the Pentium 4, it was far more efficient. But since it was designed for laptops, Intel had work to do to ready it for desktops and servers. Most notably, Intel made the design 64-bit while retaining x86 compatibility, just as AMD had done with the Athlon 64. Alongside the technical changes, Intel also gave the architecture a new name: Core.
Technically, Core debuted in early 2006 as a laptop-only product line, but it was superseded within months by Core 2, which served both laptops and desktops. On paper, Core 2 was in some ways a downgrade: the original Conroe-based Core 2 Duo CPUs had lower clock speeds, less L2 cache, and even dropped the cutting-edge Hyper-Threading feature. Even so, Core 2's much higher IPC proved to be a killer in our Core 2 Duo review, with even the slower E6600 almost always beating fast Pentium 4 and Athlon 64 CPUs.
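The arithmetic behind that result is simple: single-threaded performance is roughly IPC multiplied by clock frequency, so a large enough IPC advantage more than offsets a clock-speed deficit. A toy illustration, using hypothetical IPC numbers rather than measured values:

```python
# Single-thread performance ~ IPC * frequency.
# The IPC values below are hypothetical, for illustration only.
def relative_performance(ipc: float, ghz: float) -> float:
    return ipc * ghz

pentium4 = relative_performance(ipc=0.6, ghz=3.8)  # high clocks, low IPC
core2 = relative_performance(ipc=1.1, ghz=2.4)     # lower clocks, higher IPC

print(core2 > pentium4)  # True: the IPC gain outweighs the clock deficit
```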
However, Intel had bigger performance ambitions, and it launched the quad-core Core 2 Quad in January 2007. The Core architecture was only designed for two cores per die, so to create its first quad-core CPU Intel simply placed two dual-core dies in the same package. The Q6600 was the first quad-core desktop CPU you could buy, and while it debuted at a staggering $851, by August the price had fallen to just $266. Its high performance and, by then, relatively low price made it one of the most popular CPUs Intel has ever made.
Intel's slapped-together quad-core completely preempted AMD's upcoming Phenom, a native quad-core design that didn't rely on a multi-chip package. Yet when Phenom launched, it was clear Intel's jankier design was better: in our review, the Phenom 9700 was nearly 13% slower than the Q6600, never mind all the faster quad-cores Intel had introduced above it. Even 2009's quad-core Phenom II X4 struggled to beat a Q6600 built on 2006 technology.
Despite its on-paper problems, Core 2 remains legendary: its architecture was never designed for desktops and servers, it could only put two cores on a die, and it lost Hyper-Threading, yet neither Phenom nor Phenom II could reliably beat it. Core 2 was undoubtedly the chip that secured Intel's first-place position, one the company held for roughly 13 years until Ryzen 3000 arrived.
Honorable Mention: Alder Lake
Just as Intel nearly killed itself in the 2000s chasing a 10GHz Pentium 4, its chaos in the late 2010s and early 2020s came from trying to compress nearly a decade of process progress into just a few years. Intel's 10nm node was a disaster: it missed its scheduled 2015 debut, was plainly broken by 2018, and only went on to power the laptop-bound Ice Lake and Tiger Lake CPUs from 2019 to 2021.
AMD, by contrast, owned 2019 and 2020. Ryzen 3000 beat Intel's 9th-generation lineup, Ryzen 4000 made Intel's mobile CPUs all but obsolete, and Ryzen 5000 comfortably beat Intel's 10th generation in gaming, taking away Chipzilla's last remaining argument for why its CPUs were best. With the Ryzen 5000 launch, though, AMD also raised prices; its most affordable CPU, the Ryzen 5 5600X, started at $300. The new chips were certainly fast, but they lacked AMD's classic value proposition, and if you didn't have $300 to spare, you couldn't upgrade at all.
Fortunately, Intel finally pulled itself together in 2021 with the 12th-generation Alder Lake CPUs. The series featured an all-new hybrid design with two types of cores, a new architecture on a working 10nm node, PCIe 5.0 support, and compatibility with both DDR4 and DDR5. The flagship Core i9-12900K offered 16 cores in total, matching AMD's Ryzen 9 5950X. In our review, the 12900K won decisively in both multi-threaded and single-threaded applications and eked out a narrow win in gaming. At $589 against the 5950X's $799, the 12900K was the obvious choice.
Incredibly, AMD had not updated its lineup since Ryzen 5000 launched in late 2020, which meant its cheapest current CPU still cost $300. Intel, by contrast, rolled out a series of budget options in January 2022, such as the Core i5-12400. When AMD finally responded, it was with cut-down chips of mediocre performance and value, a shocking failure for a company that was once the king of value.
Of course, Alder Lake was a year late, and AMD eventually launched better-value models, cut prices, and released the cutting-edge Ryzen 7 5800X3D with 3D V-Cache. That makes the 12th generation hard to place on the list proper, especially since the 13th generation proved Alder Lake's formula could do even more. Still, it's remarkable that Intel managed to be competitive on a node originally slated for 2015. Had Intel spread nearly a decade's worth of improvements across multiple generations instead of cramming them into roughly two years of development, Chipzilla might still be Chipzilla in more than name.