AV-TEST Advanced Threat Protection (ATP) test, January–June 2025

Status
Not open for further replies.
Disclaimer
  1. This test shows how an antivirus behaves with certain threats, in a specific environment and under certain conditions.
    We encourage you to compare these results with others and make informed decisions on which security products to use.
    Before buying an antivirus, you should consider factors such as price, ease of use, compatibility, and support. Installing a free trial version allows an antivirus to be tested in everyday use before purchase.

Depending on the nature of the facility, if the chances of data encryption and ransom payment are not low, a single incident will cost far more than years of security solution fees.
Most will do the bare minimum required to be covered by cybersecurity insurance - if they can find an insurer and be underwritten. Some will not do it at all, no matter what. Others have a line-item budget, and whatever that buys, that is it.

Cost-Benefit Analysis.

Everybody understands it notionally, but almost nobody actually does what is required.

People problem.
 
Depending on the nature of the facility, if the chances of data encryption and ransom payment are not low, a single incident will cost far more than years of security solution fees.
They don't do these deep projections or if-then-else calculations. They operate on the belief that it will not happen to them, and that if it does, the insurance policy will cover it.
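The "deep projection" being skipped here is often just a one-line expected-value comparison. A minimal sketch, with all figures invented purely for illustration (they are not real incident statistics):

```python
# Illustrative cost-benefit sketch; every number below is an assumption.

def expected_incident_cost(p_incident_per_year: float,
                           cost_per_incident: float,
                           years: int) -> float:
    """Expected breach/ransomware loss over a planning horizon,
    assuming an independent incident probability each year."""
    p_at_least_one = 1 - (1 - p_incident_per_year) ** years
    return p_at_least_one * cost_per_incident

years = 5
security_fees = 40_000 * years          # assumed: 40k/year for tooling, training, staff time
risk = expected_incident_cost(
    p_incident_per_year=0.10,           # assumed: 10% annual incident probability
    cost_per_incident=2_000_000,        # assumed: ransom + downtime + recovery
    years=years,
)

print(f"Security spend over {years} years: {security_fees:,.0f}")
print(f"Expected incident loss:           {risk:,.0f}")
```

With these made-up inputs the expected loss (about 819,000) dwarfs the five-year security spend (200,000), which is the whole point of the projection that rarely gets done.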

Many businesses rely on severely outdated software, such as a Windows 10 build last updated three years ago or third-party software long out of support. Updating and upgrading such software requires heavy investment and disrupts productivity; employees don't know how to use the new versions, and many other factors come into play.
 
A cyber attack at the enterprise level requires proper reporting to auditors, and the company is responsible for being transparent to affected customers. If cyber professionals are not doing their jobs, they will likely end up in the CNN headlines sooner or later.
Not really. Nobody pays much attention to the multitude of enterprise compromise and breach reports made daily.

Nobody puts cyber professionals in jail or fines them unless they've been definitively proven to have committed fraud or some other equivalent crime. In most nations, the only thing an employer can do is terminate the employee - who most likely was only doing what the executives instructed via poor policy, negligence of oversight, disdain for robust security, etc.

Governments are not good at creating and mandating cybersecurity standards and even more dismal at enforcement.

Cyber auditing is a failed model because it does not provide for strict enforcement of very strong, robust security requirements. Without authoritarian enforcement that people truly fear, everyone will be insecure because that is people being people.

People problems.
 
Depending on the nature of the facility, if the chances of data encryption and ransom payment are not low, a single incident will cost far more than years of security solution fees.

I think that @bazang may be right when saying "Productivity and profit will always be prioritized before security". In many cases, the managers do not realize the danger and cut costs of security. The cost of security is not only software and the security team, but also consultations, courses, training, audits, etc.
 
I think that @bazang may be right when saying "Productivity and profit will always be prioritized before security". In many cases, the managers do not realize the danger and cut costs of security. The cost of security is not only software and the security team, but also consultations, courses, training, audits, etc.
Agreed; we did not have an IT section until a few years ago; I volunteered for IT work as much as I could and knew how.
 
In many cases, the managers do not realize the danger and cut costs of security.
Most all of them know, but they do not care about security to the same extent as they do profits.

Most all of them, even security software and service provider companies, do not fear "reputational harm" from really bad compromises. I am referring to companies such as Authy, OKTA, LastPass, SalesLoft, CrowdStrike, etc.

Most all of them - virtually all of them - do not fear lawsuits arising from stolen user, client, or managed-database personally identifiable data, because true financial harm from such data thefts is very rare. Plus, government regulators in 1st and 2nd world nations rarely take punitive actions against companies that suffer data loss that are truly painful to those companies.

The global system prioritizes profit. The global system does not prioritize security, and if there are losses, those losses just get passed onto consumers and clients.
 
The cost of security is not only software and the security team, but also consultations, courses, training, audits, etc.
They don't want to pay the financial or the operational costs (time and effort, documentation, etc.) of doing these things.

Their solution is often: "Let's find a better security software that will protect us and solve all of our problems. Make sure it costs only 1 Euro per endpoint."
 
I want to make an important clarification about what I posted here:

Without authoritarian enforcement that people truly fear, everyone will be insecure because that is people being people.

I am not saying that authoritarian enforcement is the only possible, required, or optimal solution. That's not what I meant. There's a way to obtain maximum compliance voluntarily without being an authoritarian enforcer. But that way of inducing voluntary compliance is very, very expensive and would require governments to divert or cut money from other programs.

If governments are not willing to do that or the people are not willing to allow the government to do that because they want their welfare entitlements before everything else, then the government has no choice but to be an authoritarian enforcer if it wants compliance.

Most nations do not want cybersecurity compliance to be one of the top 5 national priorities - at least not as a matter of national policy across consumer, enterprise, and government. Too difficult. Too expensive. People are too much of a PITA to manage because the digital ecosystem is basically the wild, wild west.

People are people and they will not do anything unless highly incentivized or at the end of the barrel of a gun or facing truly onerous punishment. They will, however, eat like pigs and become obese, develop heart clog, stroke out and be gimped for the remainder of their lives - all done freely. They love doing that to themselves. Very few would get onto a treadmill if their lives depended upon it.
 
I never said a word about the relevance of these tests. What I said was that they are limited in what they prove.

AV test labs do not exist on behalf of consumers. They exist to create a marketing tool for the AVs that participate in the tests.

The flaw in AV test lab testing is that "5 Stars and All Green Bars" cannot be understood by the average world citizen. They do not have the knowledge to understand what the tests say and, more importantly, what they do not say - and that the results cannot be extrapolated generically to the entirety of real-world possibilities.

Most of the world's population does not even know about AV test labs and AV lab test results.

When a test is limited, it becomes less relevant. But since there's nothing else to consider except tests, use them.

If tests are used as marketing tools, then they are not even limited but deceptive. Meanwhile, others can show their own test results in this forum. Or do you think those are marketing tools, too?

I think it's the opposite: "5 Stars and All Green Bars" can be understood by "the average world citizen". And that should be the case.

Most of the world's population barely knows about AVs, so that point is irrelevant.

Given these points, I think my argument stands: if various AVs are not that expensive, given the several online stores selling them, and have only a slight impact on systems, then it is logical to choose from the ones that rank best in test results for real-time protection, malware protection, and system impact.
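For what it's worth, the selection logic described above can be sketched as a simple weighted ranking. The product names, scores, and weights below are all made up for illustration; real rankings would use published lab scores and your own priorities:

```python
# Hypothetical products with normalized scores (0-1, higher is better).
products = {
    "AV Alpha": {"protection": 0.99, "performance": 0.90, "false_positives": 0.95},
    "AV Beta":  {"protection": 0.97, "performance": 0.98, "false_positives": 0.92},
    "AV Gamma": {"protection": 0.99, "performance": 0.85, "false_positives": 0.99},
}
# Assumed weights: protection matters most, then system impact, then FPs.
weights = {"protection": 0.5, "performance": 0.3, "false_positives": 0.2}

def score(metrics: dict) -> float:
    """Weighted sum of a product's metric scores."""
    return sum(weights[k] * metrics[k] for k in weights)

ranked = sorted(products, key=lambda name: score(products[name]), reverse=True)
for name in ranked:
    print(f"{name}: {score(products[name]):.3f}")
```

Note how the ranking flips with the weights: a buyer who cares less about performance would put "AV Gamma" first, which is exactly why a single star rating hides more than it shows.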
 
If tests are used as marketing tools, then they are not even limited but deceptive. Meanwhile, others can show their own test results in this forum. Or do you think those are marketing tools, too?
That's an interesting question about the different kinds of tests we see. It brings up an important topic, testing methodology. Whether a test is a "marketing tool" or a useful benchmark depends less on who performs it and more on how it's performed.

Passionate community members who spend their own time testing products are commendable, and their results can be interesting case studies. For a test to be considered a reliable benchmark for comparing products, however, it needs to meet a few key criteria that professional labs are equipped to handle.

Realistic Threat Vectors

This is the most critical point. Most threats don't start from a ZIP file on your desktop. They arrive through multiple routes of infection. A comprehensive test must simulate these real-world scenarios.

Web-Based Threats

Does the product's web shield or browser extension block a malicious URL before the malware is ever downloaded?

Email-Based Threats

Does the email scanner detect and quarantine a malicious attachment or phishing link upon arrival?

Exploit-Based Threats

Can the product's behavioral analysis or exploit protection stop a fileless attack that leverages a software vulnerability (e.g., in a browser or Office document)?

Testing from a local folder only evaluates a single layer of defense: the on-demand or on-access file scanner. It completely bypasses the multiple, earlier layers of protection that are designed to prevent the threat from ever reaching the disk in the first place.

Large, Current, and Unbiased Sample Set

Labs like AV-TEST and SE Labs use automated systems to test against tens of thousands of newly discovered, "in-the-wild" malware samples every month. This scale is crucial to avoid bias and ensure the results are statistically significant.
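The value of that scale can be made concrete: the uncertainty around a measured protection rate shrinks roughly with the square root of the sample count. A quick sketch using the standard Wilson score interval; the sample counts below are hypothetical, not from any actual test:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a measured detection rate."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# A hobbyist test with 50 samples vs. a lab run with 10,000 samples,
# both observing the same 98% block rate (hypothetical counts).
for blocked, total in [(49, 50), (9_800, 10_000)]:
    lo, hi = wilson_interval(blocked, total)
    print(f"{blocked}/{total}: 95% CI [{lo:.1%}, {hi:.1%}]")
```

With 50 samples the plausible true rate spans roughly 90% to 99.6%; with 10,000 samples it narrows to a fraction of a percentage point, which is why small sample sets cannot reliably separate products.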

Measuring the "Cost" of Security

A good test doesn't just measure protection. It also measures the side effects.

False Positives

How often does the product block legitimate software? A security tool that constantly gets in the way is a bad tool, even if its protection score is high.

Performance Impact

How much does the product slow down the computer during common tasks like launching applications, browsing the web, and copying files?

Conclusion

Community-driven tests are valuable for demonstrating how a specific product behaves against a specific set of samples in a specific scenario. They satisfy curiosity and showcase the passion within the security community.
However, for the purpose of making a general recommendation to the public, we must rely on tests that are comprehensive, repeatable, and simulate the entire infection chain. This is why the structured methodology of independent labs, despite any perceived flaws, remains the gold standard for consumer guidance. It's the difference between a controlled scientific experiment and an interesting hobbyist demonstration.

The Awareness Gap

For the general public, cybersecurity is a background task. They know they need protection, but the specific entities that test and validate these security products are deep in the weeds of a niche industry. Public awareness is typically limited to brand names. People know names like Norton, McAfee, or Bitdefender because of decades of marketing, retail presence, and pre-installation deals with PC manufacturers.

They get recommendations from the tech support person who fixes their computer, the salesperson at Best Buy, or a family member who is "good with computers."

Many people will google "best antivirus" and click on the first few links, which are usually major tech publications. The testing labs themselves rarely, if ever, market directly to the public. They are industry auditors, and their primary audience consists of security vendors, enterprise IT departments, and technology journalists.

The average person doesn't need to read the raw lab reports because tech journalists and major review sites do it for them.

When publications like PCMag, CNET, or Tom's Guide publish their annual "Best Antivirus Software" articles, their recommendations are heavily informed by the data from these independent labs. They act as a bridge, translating the complex test data into the easy-to-read "Top 10" lists that consumers use to make decisions.

So, while a user may not know what AV-Comparatives is, their choice to buy a product with a "PCMag Editor's Choice" award is often an indirect endorsement of that product's stellar performance in lab testing.

A final, critical point is that no single security product, no matter how effective, is a silver bullet. The best security posture doesn't come from finding one perfect tool; it comes from creating multiple layers of defense. This fundamental concept is known as Defense in Depth.

Think of it like securing your home. You don't just lock the front door and assume you're completely safe.

True security involves multiple, overlapping layers:

You have a gate at the edge of your property (your Firewall), keeping unsolicited traffic off your lawn entirely.

You have reinforced doors and strong window locks (your Software Updates), patching the structural weaknesses of your home.

You have a guard dog inside the house (your Endpoint Security/Antivirus), ready to deal with any intruder who gets past the outer defenses.

You have unique keys for every room (your Passwords), preventing access to everything if one key is stolen.

You need the key and a unique alarm code (your Multi-Factor Authentication), making a stolen key useless on its own.

You are cautious and look through the peephole before opening the door (your User Vigilance against phishing).

You have a fireproof, theft-proof safe for your irreplaceable items (your Reliable Backups), ensuring that even in a catastrophe, your most valuable assets are safe.

Relying only on an antivirus is like locking your front door but leaving all the windows wide open. True digital security comes from building a complete system where each layer supports the others.
 
McAfee needs to work on its VPN; sometimes the connections are just not stable enough compared to others. McAfee has a glitch where it attempted to connect for at least an hour on my machine. I'll submit some logs to them to see what's up, but this can't just be my machine that's experiencing this issue.
 
I think it's the opposite: "5 Stars and All Green Bars" can be understood by "the average world citizen". And that should be the case.
Sure. They can. And all the AV labs know this and that is why they use the simplistic "5 Stars and All Green Bars" rating systems or some variation thereof. They know that the average world citizen will look no further than the star rating, the bars, and/or the 100% score. And that suits them and the AV publisher just fine because the purpose of the test in the first place is for marketing.

But how, on the basis of that kind of super-generic rating system, does the average world citizen differentiate between a decent baseline test performed at one AV test lab and a test at a different lab that is so easy that virtually no AV can score anything but "5 Stars and All Green Bars"? All AV labs use simplistic rating systems that even a 4-year-old child can understand as indicating "Good" or "Poor." That simplicity is the fatal flaw of such rating systems, but again - as marketing tools - the test labs and software publishers don't want consumers to know this fact.

Have you ever seen any AV test lab or AV publisher be forthright and transparent, and state explicitly and in detail the limitations of the tests and - most importantly - how not to interpret and extrapolate the results?

Have you ever seen any AV test lab or AV publisher advise the test result readers: "Do not interpret the results this way, because they are not indicative of protection under all real-world conditions; you have to interpret them this way"?

Again, I never said that the tests were not relevant, only that they provide a specific type of data point with limitations. As long as the reader and interpreter of the test results understands this simple fact, then what the test proves - and, more importantly, what it does not prove - are understood. Without that understanding, the average world citizen is likely, through their own propensity to generically extrapolate the results, to make false assumptions about the protection quality and capabilities of the products.

Meanwhile, others can show their own test results in this forum. Or do you think those are marketing tools, too?
They can, but those tests are often flawed, biased, simplistic, and misleading even if there is no intent to mislead - just like those of the professional AV test labs.

Has anyone ever observed either a professional test lab, a professional tester, or an enthusiast tester who does not think they do a good job of testing?

Has anyone ever observed either a professional test lab, a professional tester, or an enthusiast tester openly state the limitations of their testing and caution readers and viewers to not "over-interpret" or "extrapolate" the results?

Has anyone ever observed either a professional test lab, a professional tester, or an enthusiast tester openly admit that they were wrong (when, in fact, they were - because they didn't get it right, or due to a testing problem discovered after the results or demonstration were released)? Until very recently, no. @cruelsister apologized to the community here - the only enthusiast tester I am currently aware of who has done so.
 
It seems to me that recently ESET feels they can walk on water; one thing is for certain, I won't fund them again.
It was just last year I'd bought an ESET license because I appreciated the continued legacy of graceful UIs and minimal impact on performance. My estimation is that, while they still can perform reasonably well in a variety of mainstream tests, their approach hasn't evolved quite at the pace of their toughest competitors. McAfee's stream of impressive patents helped highlight this for me. Licenses also tend to come at a premium compared to other options, even on sale. I wasn't sold this time around.
 
Relying only on an antivirus is like locking your front door but leaving all the windows wide open. True digital security comes from building a complete system where each layer supports the others.
That doesn't make sense because AVs can also close windows and have layers supporting each other.

The point about an indirect endorsement also doesn't make sense because most users don't know much about security. As for insisting that they should, think of the many things you're not familiar with, and how you have to depend on reviewers to help you choose.
 
There's your problem: you insist that the average user should know more, and it's likely that they won't or can't, and for the same reasons that you won't or can't know more about what they specialize in.

The point about transparency is also questionable because several of them do provide certification about themselves or show videos of what they did.

The point about their having to admit that they're wrong is also questionable: what if they're not wrong? Then do you admit that they're right?

Your last point implies that no one should be trusted, not even those who present tests here, because they're either not revealing their mistakes or they're making mistakes.

Given that, what then is the basis of making a choice on what to purchase?
 