# All About AMD's Revolutionary V-Cache


AMD’s mic-drop moment at Computex was the news of its closely guarded “V-Cache” technology, which would enable chip stacking on Ryzen CPUs.


The V-Cache in the CPU pictured above sits on top of the existing L3 cache. Additional silicon next to it stiffens the die and conveys heat to the heat spreader.

What is V-Cache?

V-Cache uses TSMC’s SoIC (System on Integrated Chips) chip-stacking technology to add 64MB of SRAM L3 cache to the compute dies of existing Zen 3-based Ryzen CPUs. That will basically triple the amount of cache to an insane 192MB of L3 for the 12-core and 16-core versions of AMD’s CPUs.
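The arithmetic behind that 192MB figure is straightforward. Here's a minimal sketch, assuming the layout implied above: two compute dies on the 12- and 16-core parts, each pairing its existing 32MB of L3 with a 64MB V-Cache stack.

```python
# Rough math behind AMD's 192MB figure (assumed layout: two compute
# dies, each with 32MB of base L3 plus a 64MB V-Cache stack).
BASE_L3_PER_DIE_MB = 32
VCACHE_PER_DIE_MB = 64

def total_l3_mb(compute_dies: int) -> int:
    """Total L3 once each compute die gets a 64MB SRAM stack."""
    return compute_dies * (BASE_L3_PER_DIE_MB + VCACHE_PER_DIE_MB)

print(total_l3_mb(2))  # 12- and 16-core parts use two dies -> 192
```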

V-Cache will come to Zen 3-based Ryzen CPUs later this year

There was a little confusion as to which CPUs would get V-Cache, and when, as AMD CEO Lisa Su held up a prototype Ryzen with the technology. The company has since confirmed that yes, V-Cache will come to high-end, Zen 3-based Ryzen CPUs at the end of the year.


AMD previously used TSV (Through-Silicon Via) technology in its original Vega GPU’s HBM memory. As you can see, it channels through the silicon itself to connect to the next layer down.

How is V-Cache connected?

The individual 64MB SRAM cache will be stacked on each compute die over the existing L3 cache and connected using a technology called Through-Silicon Via (TSV), which links stacked chips through vertical channels in the silicon itself. This isn't AMD's first experience with TSV: The original Vega GPU with its HBM memory also used the technology.

Latency shouldn’t be an issue

One concern with the large amount of L3 is latency, or delay, as the CPU fetches instructions or data from the cache. AMD's Sam Naffziger said that shouldn't be a problem with V-Cache, because the TSV construction offers a more direct route to the cache.

How fast will V-Cache be?

AMD is touting a jaw-dropping 2TBps and beyond of bandwidth for V-Cache. That’s insanely fast. As a comparison, Intel’s 2013-era Core i7-4770R “Crystal Well” chip, which featured an impressive 128MB of eDRAM L4, offered about 100GBps of bandwidth at the time.
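That comparison works out to roughly a 20x gap. A quick back-of-the-envelope check, taking AMD's "2TBps and beyond" figure at its floor:

```python
# How much faster is V-Cache's quoted bandwidth than Crystal Well's eDRAM?
vcache_bw_gbps = 2000    # "2TBps and beyond", expressed in GBps
crystal_well_gbps = 100  # Core i7-4770R's eDRAM L4, circa 2013

print(vcache_bw_gbps / crystal_well_gbps)  # 20.0 -> roughly 20x
```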

Can you stack more V-Cache on top of each other?

Won’t V-Cache make it harder to cool the CPU?

It might seem crazy to take a hot CPU die and throw a silicon blanket over it, blocking its contact with the thermal interface material and heat spreader. AMD's Naffziger said that won't be an issue, because the SRAM L3 sits over the existing L3 cache on the die. The CPU cores that generate most of the heat are covered by a layer of silicon that conveys heat to the TIM and heat spreader.

Developers don’t need to optimize for it


AMD said about a 15 percent performance bump was seen from the V-Cache alone.

What will V-Cache make faster?

AMD has so far shown off very impressive gaming gains using a standard Ryzen against one with V-Cache, both locked at 4GHz. AMD showed gains from 4 percent to 25 percent just from the additional L3 cache. That's about a 15-percent gain on average, which the company said is equivalent to an entire generational CPU change.

Will V-Cache help make anything else go faster?

Will V-Cache work with existing AM4 motherboards?

AMD did not say which motherboard sockets the first V-Cache Ryzen CPUs will go into, but we're pretty certain it'll be the existing stock of AM4 motherboards. That would be great news for upgraders who want to eke out one more CPU on an existing system. Naffziger said the added V-Cache layer won't change the Z-height, or thickness, of the actual Ryzen CPU. That pretty much says AMD has done all it can to make sure V-Cache-based Ryzen chips will work with the existing infrastructure of heat sinks and motherboards.


DDR5 is on the way and promises much higher capacity and much higher bandwidth, and, in all likelihood, much higher initial prices too.

Does this mean Zen 4 might come sooner?

Besides the V-Cache news, AMD’s Lisa Su confirmed that the company’s Zen 4 cores will be based on TSMC’s 5nm process, which will be due sometime in 2023. Those hoping to see Zen 4 and its promise of DDR5 support sooner might feel let down, but memory transitions can be fraught with painful price hikes, too. If V-Cache on an existing Zen 3-based platform can go toe-to-toe with Intel’s CPUs, AMD can introduce Zen 4 with DDR5 when it’s far more affordable than it’s likely to be when the new RAM is first released.

How will V-Cache and Zen 3 Ryzen compare with Intel’s upcoming Alder Lake?

This is truly the million-transistor question we’ll face as we close this year out: Will Intel’s 10nm-based “big-little” design in Alder Lake put it back in black against AMD’s Ryzen? If anyone tells you they know the answer, they’re lying. We’ll have to wait until we see Intel’s Alder Lake parts and AMD’s V-Cache-based Ryzen before anyone can answer this question.

When can I buy a Ryzen with V-Cache?


All You Need To Know About SOAPA


What Are the Benefits of Using SOAPA?

An evolution of SIEM, SOAPA makes network security easier to manage. Experts designed it to enable analytics- and intelligence-driven decision making, and with an efficient security model it becomes easier to handle attacks even when a firm is short on cyber experts. SOAPA can integrate with several kinds of security tools:

Incident Response Platforms – These let administrators classify detected threats, so prioritizing alerts and acting on them instantly is no longer an uphill task.

Threat Intelligence – This helps identify abnormalities in the network, making it easier to detect an infected node.

Network Security Analysis – This allows analysis of the flow of data packets in the network.

Security Asset Managers and Vulnerability Scanners – With these, security professionals can easily prioritize alerts.

Anti-Malware Sandboxes – These let security personnel understand malware attacks and the network vulnerabilities that can be exploited but are still unknown to the service provider.

There are many other services with which Security Operations and Analytics Platform Architecture can blend in.

The aim of SOAPA is to address these problems so that the organization can adopt new tools and work with them without losing sight of the information it needs to operate.


Will SOAPA Have a Bright Future?

Definitely! The reason is its ability to fold emerging technologies into its framework. With machine learning, big data, and other technologies combined, detecting threats and mitigating security issues will be as simple as snapping your fingers.

The Final Verdict

We cannot ignore the speed with which SOAPA is moving forward; the market has already started to lean toward it. There will surely come a time when we see a rapid increase in SOAPA experts in the market. It's time to brush up your skills and stay current with the security practices being followed to guard against breaches. The chances of SOAPA falling out of the league are almost nil, because customers don't want any more tools; they need one that gets upgraded according to their needs.

We hope that SOAPA helps security experts and yields the results they expect. What do you think?


About the author

Tweak Library Team

AMD's New Navi Lineup

The new Navi GPUs have finally arrived and come to shelves boasting some very impressive performance figures. The new range currently hosts the RX 5700 XT, RX 5700, and RX 5700 XT 50th Anniversary Edition, but rumors are now spreading of potential 5600, 5800, and even 5900 cards in the making.

The three cards we know exist represent the new RDNA GPU architecture, the first design since 2012 to take over from GCN. Dr. Lisa Su says the new architecture is an entirely new approach to graphics and will redefine AMD's GPUs going forward, just as Zen did for its CPU range.

AMD has been in the headlines a lot of late thanks to the new CPUs, which went live on the 7th of July. The new range of Ryzen chips has taken the market by storm and looks set to knock Intel off the top of the CPU hierarchy for the first time in decades. If the Navi lineup – which also went live on the 7th of July – is anything like the new CPU range, expect big things to come out of the AMD GPU cupboard.

This being said, we already know quite a bit about the new Navi GPUs and what they have to offer. In the following article, we’re going to discuss everything we currently know about Navi and how the new cards will affect consumers looking to build a gaming rig in today’s market.

Let’s take a closer look at the recently released RX 5700 XT, RX 5700 XT 50th Anniversary edition, and the RX 5700.

Current Navi GPUs

RX 5700 XT 50th Anniversary Edition

The RX 5700 XT 50th Anniversary Edition, or 5700 XTAE as we’ll call it, is the most powerful of the three graphics cards to be released under the new Navi lineup. As with all the Navi cards, the 5700 XTAE arrives on 7nm and accompanied by the new RDNA architecture. The architecture comes with a new instruction set which is better suited for rendering graphics effects more efficiently and a multi-level cache hierarchy for reduced latency for higher responsive gaming.

The 5700 XTAE comes uniquely designed with a gold trim shroud and Dr. Lisa Su's signature. It's the current Navi flagship card and comes clocked 75MHz faster than the 5700 XT. It comes with three DisplayPort outputs and a single HDMI output. Like the other Navi cards, the 5700 XTAE is a blower-style card, which sucks air through the fan and expels the hot air out the back, helping reduce case temperatures more effectively.

RX 5700 XT

The 5700 XT is pretty much a 5700 XTAE but a few MHz slower, 75 to be exact. Some subtle differences do make a small impact, though.

At stock speeds, the 5700 XT has a 1,605MHz base clock and a 1,905MHz boost clock. AMD has also implemented a new metric, a 1,755MHz game clock. AMD has stated that the card's actual real-life speed when gaming will sit somewhere between the base clock and the game clock.

The design is very similar to the 5700 XTAE but doesn’t come with the fancy gold trim or Su signature.

RX 5700

The last card in the current range is the RX 5700, which again is a less powerful RX 5700 XT. The 5700 was released at the same time as the other cards and comes with the same fancy new tech and architecture.

The 5700 features 36 compute units, 2,304 stream processors, and a boost clock speed of 1,725MHz, almost 200MHz less than the XT. It also comes with 8GB of GDDR6 VRAM on a 256-bit bus, with 64 render output units to boot.

Navi Specifications

The Navi 10 GPUs are the first RDNA graphics processors equipped with the new AMD architecture. These really are exciting times for AMD fans right across the hardware board. We've gone over some of the specs above, but for those who want all the details, the charts below showcase exactly what the cards have to offer.

As you can see, there's not a huge difference between the three cards, apart from the price anyway. The biggest difference is the clock speeds, most noticeably the drop from the 5700 XTAE to the RX 5700 (1,680MHz vs. 1,465MHz).

All three cards are built on the Navi 10 GPU, all use the same 7nm process, and all have 10.3 billion transistors. All have 8GB of GDDR6 VRAM on a 256-bit bus.

So, what does this all mean for the performance of these new cards? How will they truly handle real-world situations?

AMD Navi 10 Architecture

Let's take a closer look at that new Navi architecture we've heard so much about. If, like me, you like to keep well informed on the latest tech, you'll be fully aware of AMD's struggle to keep up with Nvidia's offerings. It's been this way for over half a decade now: AMD has had to use 50-60% more power than Nvidia to achieve similar performance.

However, that might be about to change with AMD’s big redesign. AMD has been quoted on numerous occasions saying the new architecture is truly unique and not a refinement of pre-existing GCN units. Whether that is true is a tough question to answer.

RDNA has a new dual compute unit design with some shared resources, including L1 cache. Previous GPUs from both the AMD and Nvidia camps have been equipped with L0 cache, L2 cache, and VRAM acting as a large L3 cache. CPUs, for example, don't have L0 cache but do come with L1/L2/L3 caches. AMD says the new L1 cache dramatically reduces latency and improves throughput, which increases efficiency.

We will also be seeing improvements made to the wavefront and delta color compression (DCC), which will further reduce latency, improve efficiency, and boost overall performance.

AMD Navi Game Clock

AMD has also implemented its new "game clock" in the Navi cards. This is a conservative estimate of the speed users might experience while playing games on the GPU. That said, when the cards were announced at E3 and demonstrated to the masses, there were some performance issues when running at the boost clock. The game clock is a more realistic expectation than the boost clock.
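To make the relationship between the three figures concrete, here's a tiny sketch using the 5700 XT's advertised numbers from this article. This is a simplification: real clocks vary with load and thermals, and the boost clock is an opportunistic peak rather than a sustained speed.

```python
# AMD's three advertised clocks for the RX 5700 XT (MHz). Per AMD's
# guidance, sustained in-game speed lands between base and game clock.
BASE, GAME, BOOST = 1605, 1755, 1905

def plausible_gaming_clock(observed_mhz: int) -> bool:
    """True if an observed sustained clock matches AMD's guidance."""
    return BASE <= observed_mhz <= GAME

print(plausible_gaming_clock(1700))  # True
print(plausible_gaming_clock(1905))  # False: boost is a peak, not sustained
```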

Navi Performance

It's all gravy hearing about the great new technology in these cards, but ultimately, people want to know how they perform in real-world situations.

Radeon RX 5700 XT Vs GeForce RTX 2070

I think it's safe to say the 5700 XT has been released in direct competition with the GeForce RTX 2070, and the initial benchmarking figures suggest the 5700 XT has its number. Similarly, the RX 5700 is in direct competition with the 2060 and, again, beats its rival quite considerably.

The single-thread performance of the Navi cards has seen the greatest improvement. That's not something you would usually talk about when referencing a GPU; it's more CPU territory. That said, how fast a GPU processes each item of work still matters to the heavily parallel workload of the GPU as a whole.

AMD is claiming a 25% increase in instructions per clock for the new RDNA architecture versus the last-gen GCN. They’ve gone as far as saying the new cards will have a 50% overall performance increase while still using the same power and general configuration too.

The figures displayed were generated from a PC running The Division 2 at 1440p, so there's no figure-fudging this time.

Power consumption has also improved: the numbers now show a 1.5X increase in performance per watt compared to the Vega 64, even when paired with GDDR6 VRAM.

For the gaming enthusiasts out there, AMD kindly ran a benchmarking comparison against the 5700 XT’s close rival, the 2070. The resulting figures are quite something.

The 5700 XT only loses out to the 2070 in two games (Shadow of the Tomb Raider & Civilization 6). Everywhere else the 5700 XT comes up trumps, even on Battlefield 5 where it boasts a 22% improvement.

Leaked Navi GPUs

It's not just the 5700 family that will grace the Navi badge, though. News has recently leaked that we could see up to 20 more Navi-branded cards. AMD's biggest add-in-board partner, Sapphire, has registered 20 additional trademarks with the EEC (Eurasian Economic Commission). The list below shows the newly registered trademarks:

RX 5950 XT

RX 5950

RX 5900 XT

RX 5900

RX 5850 XT

RX 5850

RX 5800 XT

RX 5800

RX 5750 XT

RX 5750

RX 5650 XT

RX 5650

RX 5600 XT


RX 5550 XT

RX 5550

RX 5500

RX 590 XT

RX 590

Now, before we all get carried away, it's worth mentioning that not all of these trademarks will be used for actual GPUs. Registering every plausible combination in the 5000 series is simply a way for the AIB to cover its bases.

Final Thoughts

That’s pretty much all we know about the Navi GPU range as it stands. We’re currently in the process of compiling our own benchmarking figures to see how it handles itself against the 2070 in real-world situations.

This being said, what do the current figures mean for people looking to build a PC in today’s market?

Well, it firstly makes your life much more interesting, or stressful, depending on which way you look at it. Before, Nvidia was the go-to selection for GPUs. Now, though, AMD has shaken things up by giving us a real decision to make.

The 5700 XT is a brilliant card that seemingly knocks out its competition and is extremely well priced to boot. It's going to be really interesting to see how the next 12 months pan out for Intel, especially with AMD seemingly coming out of the blocks swinging.

Windows Blue: What The Fuss Is All About

Rumors about Windows Blue have been circulating on the web for quite a while, fueled mostly by the fact that some people are disappointed by Windows 8 and badly want an update to it. Windows Blue might thus be nothing more than a project with the mission of constantly updating Windows 8. If you ask me, it might only be an internal codename and nothing destined for the "real world," where it wouldn't make any sense.

People might get even more confused than they are and might even perceive Windows Blue as a separate operating system, as previous rumors have hinted. So, what is Windows Blue, after all? Is it an internal project name, or is it actually the next operating system after Windows 8 and Windows RT? Let's dig up more details about each possibility.

Consider this Microsoft job listing:

We're looking for an excellent, experienced SDET to join the Core Experience team in Windows Sustained Engineering (WinSE). The Core Experience features are the centerpiece of the new Windows UI, representing most of what customers touch and see in the OS, including: the start screen; application lifecycle; windowing; and personalization. Windows Blue promises to build and improve upon these aspects of the OS, enhancing ease of use and the overall user experience on devices and PCs worldwide.

What we can see from this description is that Microsoft is looking for an engineer to improve upon already present aspects of the OS, which means that the "Windows Blue as a new OS" idea is nothing more than speculation from the traffic-hungry technology outlets out there (we're not included *cough*, *cough*). It gets even more curious when we discover that this relates not only to Windows 8. Somebody on Twitter has managed to find a direct reference to Windows Phone Blue:

The next version of Windows Phone is still Windows Phone, which further supports the idea that Windows Blue will be nothing more than an update. Of course, it could be an important update, perhaps even an overhaul, but it would still be too much to call Windows Blue the next OS after Windows 8. What we do understand is that not only the internals will change, but also the interface and the direct user experience. Could it mean, for some, the reappearance of the Start Menu?



Why wait for a few years for the next Windows to appear when you could have it year-by-year? And it doesn’t really make sense to “destroy” Windows 8. Windows 8 and Windows RT are made for the long run, they are made for a world full of tablets, ultrabooks, devices with touchscreens. And this tech revolution is only starting out. Microsoft will learn on the go from users and from the response of the market and they will convert their learnings into updates made to Windows 8 through Windows Blue. It’s logical.

Windows Blue is Microsoft’s shift in OS release cycles

I'm ready to make a bet here: Windows 8 will be Microsoft's operating system with the longest lifespan, mainly because it will change along the way. It will permanently reinvent itself, and when it becomes just too different from the initial Windows 8, we'll step into a new OS. But we won't sense it as a steep change, because previous updates will have accustomed us to it.

Things move too fast; it is pretty hard to plan an operating system a few years ahead. Consumers need change to happen now, and they want it cheap. We don't want to pay a lot for Windows versions we know we could easily be getting on torrents. I know enough folks who didn't make the jump to Windows 8 because it was very similar to Windows 7 and because it was expensive to do so. By releasing yearly updates, Microsoft will kill two birds with one stone: it will basically push users to upgrade to the newest version, and thanks to the much lower price, it will still manage to have high sales.


The updates won't be made just to core products like Windows 8, Windows RT, or Windows Phone. Products such as Outlook and SkyDrive will also be constantly updated and improved. Windows Blue will be the "inside OS" through which Microsoft engineers envision what's wrong and act accordingly. And like this, Windows will remain at the center of our tech lives.



All You Need To Know About Microsoft Rewards

Does It Let You Earn Real Money?

Microsoft Rewards lets you earn points through everyday activities and use them to buy items from the Microsoft Store.

What are Microsoft Rewards?

A free program, Microsoft Rewards doesn't charge a convenience fee or any other kind of fee to earn rewards. It's a platform that rewards you for doing what you already do every day.

Every day, the company comes up with one way or another for you to add more reward points to your kit.

Since the world is a big place, a program like this must be limited; hence, Microsoft has made Microsoft Rewards available only in specific geographical locations.

Please know that Microsoft Rewards was previously called Bing Rewards, which means that if you were a member of Bing Rewards in the past, your points were automatically transferred to your Microsoft account.


Bing Rewards & Microsoft Rewards

Bing Rewards was introduced as a platform to earn credit whenever you search on Bing. You needed to use Bing's features to get credits, which ultimately helped you buy things from the Microsoft Store.

Launched in 2010, Bing Rewards was a smart way of getting people to use Bing rather than the Google search engine. It was a kind of bribe to increase the number of people using Bing, and if you think about it, these kinds of programs are already common in the market as a way to lure consumers. The initiative was quite successful and increased the number of users searching on Bing well beyond what it was in 2010.

How Do Bing Rewards Function?

For every 2 searches on Bing, you get 1 credit, and obviously the company needs to put a cap on the credits. So the maximum you can earn in a day is 15 credits.

All you need to do is log in to your Bing account and start searching on the platform instead of whatever other platform you are currently using.

Once you are done for the day, you can check your Bing Rewards on the dashboard.

Apart from searching on your own, you can earn more Bing Rewards by referring friends or contacts. Refer and earn: an old concept that has made users switch from one platform to another in no time.

On the mobile version, the maximum Bing Rewards you can earn is 10 credits, and the criteria for earning points stay the same (2 searches = 1 credit).
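Those earning rules are easy to sanity-check in code. Here's a minimal sketch of the credit math described above, assuming the caps apply per day:

```python
# Bing Rewards credit math: 1 credit per 2 searches, capped at
# 15 credits/day on desktop and 10 credits/day on mobile.
def daily_credits(searches: int, mobile: bool = False) -> int:
    cap = 10 if mobile else 15
    return min(searches // 2, cap)

print(daily_credits(8))                 # 4 credits
print(daily_credits(100))               # capped at 15
print(daily_credits(100, mobile=True))  # capped at 10
```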

Redeem Bing Rewards

Bing Rewards can be redeemed in one of three ways: shopping, donating, or winning. It's as simple as that.

You cannot pick a retailer or any specific outlet at which to redeem the rewards; every item that can be bought is listed with its price (in points) and an image description.

"Nothing comes without a catch," so in case you are aiming for something valuable, don't just complete the daily search limit. It will take a lot of time to meet your targets, so keep checking the page for new offers and tasks; complete them to give your Bing Rewards a good boost.

So, be it Bing Rewards or Microsoft Rewards, users in the supported countries can keep using Bing and fill up their credits to meet the price of the product they want.

What are Xbox Rewards?

Since you can use your Bing Rewards or Microsoft Rewards to buy things from the Microsoft Store, you can also redeem the rewards from an Xbox One console.

Xbox rewards are credits that can be used to buy different Xbox games, subscriptions, and more. The best thing is that there is no limit on spending your credits: you can redeem them for smaller rewards or, if you have patience, save them up for a big one.

Another great thing about a Microsoft account is that the company believes in "one account for all the things." So, from MS Office to Skype to Xbox, you only need one login to redeem your Bing/Microsoft Rewards across these different platforms.

How To Earn Xbox Rewards?

Earning Xbox rewards is also quite simple, because the concept stays the same: keep doing whatever you are already doing. Just log in to your Microsoft account and keep a tab on how many points you have earned.

Let’s look at the process of earning Xbox Rewards:

1. Download “Microsoft Rewards on Xbox” on your Xbox One console

2. Keep playing your games, and don't forget to check the Rewards section to see if the company has introduced any new ways of earning.

3. Meanwhile, don’t stop completing the battles you are fighting on your Xbox Game Pass or Xbox Game Pass Ultimate.

4. Apart from playing and winning, keep searching on Bing to earn additional credits, and complete the daily or weekly tasks to boost your rewards. In no time, you will have plenty of rewards to redeem for something big and valuable.


What Are Xbox Live Rewards?

Xbox Live Rewards is part of the loyalty program initiated by Microsoft, in which you earn rewards toward entertainment or digital game purchases. Additionally, you get rewards when you search for information on Bing instead of any other search engine.

Conveniently, if you live in the United States, your Xbox Live Rewards were converted and transferred to your Microsoft Rewards account by June 2023.

As the company believes in one account for everything, you could see your Xbox Live Rewards added to the Microsoft Rewards account.

Many of you might have experienced a good boost in the Microsoft Rewards and that might have happened because of the transfer of Xbox Live Rewards.

Be assured that your Xbox Live Membership has also been automatically updated to Microsoft Rewards Membership.

If you have any confusion regarding the Xbox Live Rewards ownership, you can find your answers here. You can also follow the Twitter handle for Xbox Live Rewards.

Final Thoughts on Microsoft Rewards

Whenever we use something as consumers, our choice rides on either the brand name or the experiences other users have had in the past. Microsoft stands tall in both scenarios: it's one of the most valuable brands in the world, and it surely has a lot of consumers who are quite satisfied with its services.

Microsoft Rewards has let a lot of users buy the stuff they wanted through the loyalty system. One of the best things is that there is no fee involved in using the platform, and it's quite easy to earn points to redeem later on. However, many users find it hard to use Bing instead of other search engines such as Google.

Just to give my opinion, there is no harm in using the Microsoft Rewards loyalty program, as it gives you options to get things at a discounted price, or for just the points you earned. The things you need to do, such as searching on Bing or completing the daily/weekly tasks, can be considered digital hard work. And there is no reward without hard work, after all.


About the author

Ankit Agarwal

Ryzen 3000 Review: AMD's 12


Update: We've since added 3D viewport and Synegy's Cinescore performance results and have updated our gaming benchmarks to include scores for the older Ryzen chip in Far Cry 5 and Deus Ex: Mankind Divided.

Damn, this CPU is fast.


But keep reading, because the Ryzen 9 3900X is likely as significant, and likely as game-changing, as AMD’s original K7 Athlon-series of CPUs that crossed the 1GHz line first, or its Athlon 64 CPU that ushered in 64-bit computing in a desktop PC.


AMD said it has improved its floating point performance by 2X on its Ryzen 3000 series of CPUs.

It is, after all, the first consumer x86 chip to be produced on a 7nm process node. Intel’s current desktop chips are still all built on a 14nm process node, and the company will just begin to move to 10nm later this year. We suspect the chip giant is a little envious that AMD reached this tiny die shrink first.

With that production technology lead, AMD breaks out a redesigned 2nd generation “Zen” core for the Ryzen 3000 that promises double the floating point performance over the previous Ryzen 2000 series, as well as a 15-percent increase in “instructions per clock” (think overall efficiency per clock).

AMD has essentially doubled the L3 cache on the Ryzen 3000 chips, and the company is going for some Apple-esque marketing by calling it Game Cache. The cache, up to 70MB on the Ryzen 9 3900X, goes a long way toward reducing memory latency on the Ryzen 3000s. It also tends to boost gaming performance dramatically on the CPU, so AMD feels calling it Game Cache can help the average consumer understand its benefits. Yes, that larger L3—err, Game Cache will also help application performance, but no one gets excited about App Cache we guess.
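For the curious, the 70MB figure appears to be the sum of the chip's L2 and L3. Here's a quick sketch; the breakdown is our assumption about how AMD's marketing tallies it, based on 64MB of L3 plus 512KB of L2 per core across 12 cores:

```python
# Where the "up to 70MB" Game Cache figure likely comes from on the
# Ryzen 9 3900X (assumption: it sums the L2 and L3 capacities).
L3_MB = 64            # doubled versus the Ryzen 2000 series
L2_MB = 12 * 512 // 1024  # 512KB per core across 12 cores -> 6MB

print(L3_MB + L2_MB)  # 70
```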


The Ryzen 3000 features two 7nm CCDs, which feed into an IO die that has the memory controller and PCIe controller.

The thousand-dollar question is whether gaming—which has dogged Ryzen performance from Day One—has finally been erased in situations where the GPU is not the limiting factor. We can say from what we’ve seen that Ryzen 3000 doesn’t quite win all of the time, but it’s so close now, even with Nvidia’s brutally fast RTX 2080 Ti driving it, that it just won’t matter 99 percent of the time.

PCIe 4.0?!

You can read all about PCIe 4.0 in this explainer. If you’re confused about the simultaneous presence of PCIe 5.0 and PCIe 6.0 in earlier stages of development, remember that it takes time to go from initial spec to actual hardware. PCIe 4.0 is basically the only answer today and a decent bragging point for AMD.

And yes, there’s still value. While Intel charges $488 for its flagship 8-core Core i9-9900K, AMD will give you 12 cores it claims are just as fast, if not faster, for $499, with a bundled RGB cooler too.
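A quick cost-per-core comparison at those list prices shows why the value argument lands:

```python
# Cost per core at list price: $488 for Intel's 8-core Core i9-9900K
# versus $499 for AMD's 12-core Ryzen 9 3900X.
intel_per_core = 488 / 8
amd_per_core = 499 / 12

print(round(intel_per_core, 2))  # 61.0
print(round(amd_per_core, 2))    # 41.58
```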


AMD’s lineup of Ryzen 3000 seems poised to push Intel’s entire lineup off the field of battle.


None of the cost-per-thread or fancy 7nm process matters, though, if the performance isn’t there. Let’s get on with what you came here for: to find out how fast the Ryzen 9 3900X is.

How we tested

For this review we decided to focus on three key CPUs. First, we use AMD's 2nd-generation Ryzen 7 2700X as a baseline. Second, we bring in the main competitor at $488: Intel's mighty Core i9-9900K. The last chip is none other than AMD's $499 Ryzen 9 3900X.

We tested each CPU in parallel, with the Ryzen 7 2700X mounted in an MSI X470 Gaming M7 AC, the Core i9-9900K into an Asus Maximus XI Hero, and the Ryzen 9 3900X into an MSI X570 Godlike.

Gordon Mah Ung

AMD’s new 12-core Ryzen 9 3900X is the new leader among mainstream performance CPUs.

While we used the same amount of DDR4 in dual-channel mode in all three builds, we did vary one aspect: We ran the Core i9-9900K and Ryzen 7 2700X with 16GB of DDR4/3200 CL14, and the Ryzen 9 3900X with 16GB of DDR4/3600 CL15. We wanted to test the Ryzen 9 at its optimal memory clock of 3,600MHz. Due to time constraints, we're initially showing only DDR4/3600 performance and will add DDR4/3200 results once time permits. We are told by AMD, however, that DDR4/3200 CL14 should yield only low-single-digit performance differences compared to DDR4/3600 CL15.
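AMD's claim that DDR4/3200 CL14 and DDR4/3600 CL15 should perform similarly makes sense once you convert CAS latency from cycles to nanoseconds. Here's a quick sketch of the standard conversion (our arithmetic, not AMD's figures):

```python
def cas_latency_ns(cl: int, mt_per_s: int) -> float:
    """First-word CAS latency in nanoseconds.

    DDR transfers data twice per clock, so the actual memory clock in
    MHz is half the MT/s rating; latency is CL cycles at that clock.
    """
    clock_mhz = mt_per_s / 2
    return cl / clock_mhz * 1000  # cycles / MHz -> nanoseconds

print(round(cas_latency_ns(14, 3200), 2))  # DDR4/3200 CL14: 8.75 ns
print(round(cas_latency_ns(15, 3600), 2))  # DDR4/3600 CL15: 8.33 ns
```

The absolute latencies are within half a nanosecond of each other, while the DDR4/3600 kit adds bandwidth, which is why small differences between the two configurations are plausible.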

Gordon Mah Ung

Corsair’s MP600 supports PCIe 4 on AMD’s new Ryzen 3000 CPUs.

To MCE or not?

For this review, as with our original Core i9-9900K review, we were torn as to whether to enable the “multi-core enhancement” feature. MCE is a motherboard-enabled feature that runs Intel “K” CPUs at higher clock speeds, while using more power and producing more heat. What offends some is that MCE is technically a violation of the letter of Intel’s law and considered an “overclock.”

You’d think that makes turning it off an easy decision. The problem is, just about every mid- to high-end Intel motherboard implements MCE set to auto out of the box. That means that any reviews of these new CPUs with the feature explicitly set to off, is not quite an honest portrayal of the true nature of the Core i9-9900K’s speed that most consumers will experience.

Leaving it on gets even stickier, because every motherboard maker implements it slightly differently. There’s no easy way to draw an exact bead on performance with MCE on.

Got all that? Then keep reading to see charts, charts, and more performance charts.

Ryzen 9 3900X 3D Modeling Performance

Up first is the old standby of Maxon’s Cinebench R15. This benchmark is built on the same engine used in Maxon’s Cinema 4D modeling and animation application. Cinema 4D is also built into Adobe’s Premiere and After Effects applications.


No surprise: 12 cores easily outguns 8 cores, making the Ryzen 9 3900X the easy winner here.


That’s impressive single-threaded performance for the Ryzen 9 3900X.

Not all megahertz are the same, though. With AMD’s much-improved instructions per clock—essentially how efficient the chip is—the Ryzen 9 is but 2 to 3 percent slower than the Core i9.
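As a first-order model, single-threaded performance is simply IPC multiplied by clock speed. The IPC figure below is a hypothetical illustration of ours, not a measured number, but it shows how a modest IPC advantage can nearly offset a 400MHz clock deficit:

```python
def relative_perf(ipc_a: float, clock_a: float,
                  ipc_b: float, clock_b: float) -> float:
    """Single-threaded throughput of chip A relative to chip B,
    using the first-order model: performance = IPC x clock."""
    return (ipc_a * clock_a) / (ipc_b * clock_b)

# Hypothetical: a chip with ~6 percent higher IPC boosting to 4.6GHz
# versus a chip at 5.0GHz ends up only ~2.5 percent behind.
print(round(relative_perf(1.06, 4.6, 1.00, 5.0), 3))  # -> 0.975
```

That landing spot is right in the 2-to-3-percent range described above.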

Cinebench R15, however, is fairly old, having come out in 2013, so we also measured all three CPUs using the new Cinebench R20. Intel generally is a bit faster in this updated test—against older Zen+ cores.


Shifting to the updated Cinebench R20, the Ryzen 9 3900X actually pulls ahead in single-threaded performance.

The situation changes for the Zen 2 cores in the Ryzen 9 3900X, resulting in a 3-percent bump in single-threaded performance. That’s a nice shift for AMD.


No surprise: The Ryzen 9 3900X essentially runs the Core i9 off the field in multi-threaded performance.

We also tested the chips using Chaos Group's Corona Renderer benchmark. Corona is a "modern unbiased photorealistic renderer," where "unbiased" refers to the precision with which it renders the scene, not impartiality toward the hardware it runs on.


The Corona modeler likes 12 cores more than 8 cores.

In this multi-threaded test, we see the Ryzen 9 outrun the Core i9 by about 32 percent. And yup, the body blows keep coming.

We run V-Ray Next, which is another renderer from the Chaos Group. The result? About 31 percent in favor of the Ryzen 9 over the Core i9.


Bored yet? V-Ray Next reinforces what we’re seeing in other modeling apps.


Dayum. The Ryzen 9 3900X is fast.

We’ll close off the 3D tasks with the oldie, but goodie POV-Ray 3.7 benchmark. The Persistence of Vision Raytracer is an open-source, free tool that has roots in the Amiga platform. Using the application’s built-in test, we saw the Ryzen 9 about 44 percent faster than the Core i9 in multi-threaded mode.


What a shocker: The Ryzen 9 mops the floor with the Core i9.

POV-Ray also has a single-threaded benchmark, in which the Core i9, with its 5GHz boost clock, ekes out a 4-percent win over the 4.6GHz Ryzen 9. A win is a win, but most are likely to say big whoop.


The 5GHz boost clock of the Core i9 gives it a very slight edge over the Ryzen 9 in POV-Ray when set to test a single thread.

Viewport Performance

For the interface or "viewport" experience of 3D artists, CGDirector says the CPU, not the GPU, is the chief bottleneck. That means chips with higher frequency and higher IPC typically matter more.

“In this Cinema 4D Viewport Benchmark, we measure the FPS of a typical Scene that uses common 3D Objects from Cinema 4D Objects in a hierarchy,” the website said. 

For this test, we ran the Cinema 4D Viewport benchmark with a demo version of Cinema 4D R20.


Using CGDirector's Viewport Performance test, the Core i9 takes the win, but the Ryzen 9 isn't too far behind, is it?

Because this test is new to us, we're not putting too much weight on it yet. Based on its results, however, the Core i9 does win. Assuming there's some truth to the theory that 3D artists need UI responsiveness more than shorter rendering wait times, Intel has the edge.

It’s worth noting that even in a loss, Ryzen 9 3000 is not that far behind. And frankly, if you’re looking for the best of both worlds, with slightly slower Viewport performance but much, much greater rendering performance, the Ryzen 9 comes out in front. It’s really up to the individual tastes of the artist.

Keep reading for content creation and other tests.

Ryzen 9 3900X encoding performance

The Ryzen 9 3900X finishes an impressive 43 percent ahead of the Core i9-9900K on our conversion. Even on the Ryzen 9 3900X, it’s a beefy 30 minutes to finish the conversion.


The Ryzen 9 3900X easily wallops the Core i9 when doing a 4K encode using the H.265 codec.

It’s another big, big win for the Ryzen 9 3900X for CPU-based encoding. But we’d be remiss if we didn’t also mention that the one ace the Core i9-9900K has up its sleeve is QuickSync encoding. Rather than using the general-purpose CPU processors to convert the video, the Core i9 has a built-in GPU with fixed function processors that do only one thing, but do it stupidly fast.

Convert the video on the eight CPU cores of the Core i9, and you're looking at a 47-minute wait. Convert it on the 12 CPU cores of the Ryzen 9 and you're looking at about 30 minutes—just enough for a short lunch. But use QuickSync's H.265 fixed function, and you're looking at 4 minutes. Just barely enough time to grab a cup of joe.
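A note on how we express these gaps, because "percent faster" gets computed two different ways and the two conventions give different numbers. Our headline percentages come from exact runtimes, so the rounded minutes quoted above (47 versus 30) will only approximate them, but the arithmetic looks like this:

```python
def percent_faster(t_slow: float, t_fast: float) -> float:
    """How much faster the quicker run is, relative to its own time."""
    return (t_slow - t_fast) / t_fast * 100

def percent_time_saved(t_slow: float, t_fast: float) -> float:
    """How much shorter the quicker run is, relative to the slower time."""
    return (t_slow - t_fast) / t_slow * 100

print(round(percent_faster(47, 30)))      # ~57 percent faster
print(round(percent_time_saved(47, 30)))  # ~36 percent less wall-clock time
```

Whichever convention you prefer, a shorter bar is a shorter wait.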

In the end though, when you’re doing a CPU-based encode, Ryzen 9 wins hands down.

Our next test is new to us but doesn't require the pain of installing a full video editing suite to run. Cinegy's free Cinescore was created to give the broadcast industry a quick and easy way to assess CPU and GPU performance. It runs tests from SD to HD, UHD, and 8K across both CPU and GPU. Cinescore also uses codecs as varied as XDCAM, MPEG2, H.264, H.265, DVCPro100, and AVC-Intra, as well as Cinegy's own "high-performance" Daniel2 codec.

Our machine configurations were the same as in previous tests, and all of the Cinescore results were obtained with Nvidia GeForce RTX 2080 Ti cards. We're only reporting the composite score for the SD, HD, UHD, and 8K runs. As you can see, there's not much of a surprise: The Ryzen 9 3900X again handily outruns the Core i9 chip.


Using Cinegy Cinescore 10.4, the Ryzen 9 continues to dominate the Core i9 chip.


The Ryzen 9 3900X’s extra cores let it beat up on the Core i9 in Premiere CC 2023.

We exported video using HEVC as well. The Ryzen 9 3900X continues to blow by the Core i9, but the gap closes up a little. Still, a shorter bar means less time wasted no matter how you cut it.


We also exported our Premiere project using Premiere's HEVC encoder, which is licensed from MainConcept.

Ryzen 3000 Photoshop Performance

We don’t normally use Adobe’s popular Photoshop as a performance test because, well, you don’t normally net the same dividends you do with modelling or video encoding. It’s an application that’s mostly balanced on single-threaded performance.

With the increased IPC and slightly higher clocks of the Ryzen 9 3900X, we did want to see if it could hang with the mighty Core i9’s high clocks. For this test, we used Puget System’s free Photoshop benchmark script and selected the Photoshop Extended script run. The Ryzen 9 3900X’s score of 992 is very respectable and about 6 percent faster than the Core i9-9900K’s overall score of 932.
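The 6-percent figure falls straight out of the two overall scores; here's a trivial sketch of the arithmetic:

```python
def percent_lead(score_a: float, score_b: float) -> float:
    """Percentage by which score_a leads score_b (higher is better)."""
    return (score_a - score_b) / score_b * 100

# Overall Puget Systems Photoshop benchmark scores from our runs above.
print(round(percent_lead(992, 932)))  # -> 6
```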


The performance gap is slimmer, but the Ryzen 9 3900X still wins.

Ryzen 3000 Compression Tests


The Ryzen 9 3900X actually offers up a very decent uplift over the previous Ryzen 7 2700X.

Things go sideways when we shift to multi-threaded performance. While the Ryzen 9 3900X actually turns in a decent result, it falls slightly behind the Core i9-9900K despite having four more cores.

We’ll note that WinRAR typically hasn’t liked Ryzen-based CPUs (nor Intel’s Skylake X chips either). This result is a pretty decent uptick for the Ryzen 9 3900X overall, just not the victory we expected after seeing how well the CPU performed elsewhere.


This WinRAR result can be seen as good or bad in some ways. WinRAR has traditionally favored Intel CPUs, as we can see from the dismal performance of the Ryzen 7 2700X. The fact that the Ryzen 9 3900X is just about tied with the Core i9 can be seen as an improvement.

The good news for the Ryzen 9 3900X is that its performance in the far more popular 7Zip is pretty damn good.


The single-threaded performance in 7Zip slightly favors the Core i9 chip.

Decompression performance is largely reliant on integer performance and how well the CPU handles branch mispredictions.


Multi-threaded performance sees the Ryzen 9 dominate easily.


Decompression is traditionally limited by integer performance and how well a CPU handles branch misprediction.


In integer performance, the results indicate single-threaded performance is nearly a tie among all three chips, with the Core i9 squeaking out ahead.

Keep reading for game performance testing.

Ryzen 9 3900X Gaming Performance

It’s pretty much been all sunshine and roses for the Ryzen 9 3900X so far. The final question is its gaming performance. Ever since the original Ryzen 7 1800X launch, gaming, especially at lower resolutions where the game isn’t bottlenecked by the GPU, has been nothing but controversy. It’s also been the one shining area for Intel.

First up is Shadow of the Tomb Raider, which we run on the GeForce GTX 1080 FE card using the Highest Quality setting. As you can see, it's a tie! A win, right? Well, not really. The real issue is that the Highest Quality setting is enough to make the now-ancient GeForce GTX 1080 FE seem downright slow. It's the bottleneck with these CPUs.


As you can see, Shadow of the Tomb Raider, even at 1920x1080 resolution, is bottlenecked by the GPU.

You can see why the minute we swap in the mean, lean GeForce RTX 2080 Ti FE: The Ryzen 7 2700X immediately falls to a distant third place. But how does the Ryzen 9 3900X do? Pretty well, we'd say, as it's about 7 percent slower than the Core i9-9900K (which has an 8-percent higher clock speed). It's certainly better than the Ryzen 7 2700X, which exhibits the same problems with 1080p gaming that the original Ryzen 7 1800X suffered.
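It's worth normalizing that gap for clock speed. The frame rates below are hypothetical stand-ins of ours (a 7-percent deficit at a 4.6GHz boost versus 5.0GHz), and real games also hinge on memory latency, so treat this as only a rough model; still, the exercise suggests the remaining gaming gap is mostly clock, not IPC:

```python
def per_clock_ratio(fps_a: float, clock_a: float,
                    fps_b: float, clock_b: float) -> float:
    """Frames per second per GHz, chip A relative to chip B."""
    return (fps_a / clock_a) / (fps_b / clock_b)

# Hypothetical fps values reflecting a ~7 percent overall deficit.
print(round(per_clock_ratio(93, 4.6, 100, 5.0), 3))  # -> 1.011
```

On a per-GHz basis, the two chips come out in a dead heat.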


Playing modern games with the new Ryzen 9 or Core i9 needs a fast GPU too.

Moving on to the slightly older Rise of the Tomb Raider, we again see how a "slow" card like the old GeForce GTX 1080 FE is the bottleneck in RoTR. It's basically a three-way tie on the GTX 1080, and you'd likely see the same on any card short of an RTX 2080.


The GeForce GTX 1080 FE is the bottleneck here on gaming.


The Ryzen 9 3900X doesn’t beat the Core i9, but it’s pretty close.

If you’re here to see the Ryzen 9 3900X leave the Core i9-9900K eating its dust in gaming benchmarks, prepare to be disappointed. For the most part, in all of the games we tested, the Ryzen 9 3900X generally trailed by single-digit ranges of 1 percent to 7 percent in most games using that wicked GeForce RTX 2080 Ti FE. 


The ancient CS:GO is a game we only needed to break out the GTX 1080 FE for. It's not like you need more than 300 fps.

Ryzen fans shouldn't take that as a failure. In many ways, we see it as a win for Team Red. Remember: The two previous generations of Ryzen trailed Core i7 and Core i9 by double digits in the vast majority of games at 1080p resolution. To see the Ryzen 9 within striking distance in just about every game we ran is a major upgrade. That also means you will occasionally hit games where the Ryzen 9 3900X flops against the Core i9.

In Far Cry 5, we actually saw a fairly big gap of about 15 percent between the Ryzen 9 and Core i9. Of the games we tested, Far Cry 5 coughed up the largest gap, but lest you think the situation hasn't improved much over 2nd-gen Ryzen, that's not true: As you can see, the Ryzen 7 2700X sits even further back than the Ryzen 9 3900X.


That graphic above might give you heartache, but most of the games looked like Deus Ex: Mankind Divided, which gave the Core i9 about a 7-percent lead over the Ryzen 9. (Yes, the copy protection for Deus Ex gave up the ghost during our test runs, so the Ryzen 7 2700X was not tested with the RTX card.)

We ran the popular game Rainbow Six Siege with similar results. Sure, Ryzen 9 is in second place, but not by much. Compare its results to the Ryzen 7 2700X.


Yes, Ryzen gaming has improved. Just look at the gap between the two Ryzen chips in Deus Ex: Mankind Divided.

If we had to call a victor based solely on gaming, we'd give it to the Core i9. But it's hardly a victory at all, because even the Core i9's wins come by such small margins—certainly far smaller than against any older Zen or Zen+ CPU. And we had to move all the way up to a $1,200 graphics card to see any separation. We really think anything below an RTX 2080 will make it nearly impossible to tell the difference between the two CPUs at 1920x1080.


The CPU Focused Test in Ashes is indeed a CPU test, because we barely saw frame rates budge going from a GeForce GTX 1080 to GeForce RTX 2080 Ti.



