Spyware testing (lack thereof?)
Having looked at the numerous blog posts about Consumer Reports' September 2006 issue on its anti-spyware application testing, and having had the chance to do my own (non-work-related) analysis with the primary (and apparently only) tool used, Spycar, a suite developed to "model some behaviours of spyware tools", I felt I could join in, somewhat belatedly, and share my jotted-down personal comments on the Spycar suite vis-à-vis the CR report.
- The Spycar suite itself is not malicious; that much we know is true (at least in its current form). Therefore, the first point of discussion is the validity of an antimalware application, be it antivirus or antispyware, flagging a non-malicious tool.
In most real-life situations and tests, having an application falsely flagged as malicious is a bad experience for both the end-user and the application's vendor.
In the antimalware industry, the flagging of innocent applications is termed a false positive.
- Since we have established that the Spycar suite is not malicious, and the simulated spyware actions are not malicious, how can a tester make meaningful use of the behavioural-blocking results from antimalware products?
Should the tester give high scores to an antimalware application that flags and blocks all of the behaviours in the Spycar suite? Why should high scores be given for blocking non-harmful actions, which are primarily registry changes, settings that can be modified through many other vectors besides the suite itself?
The test coverage of the suite does not and cannot represent the overall quality of an antispyware application. If the application neither flags nor blocks the simulated actions (for the reasons already mentioned above), all the tester learns is that the application did not block the behaviour.
Does that mean the application is inherently bad at detection and blocking? I don't think so. Furthermore, by focusing on this single point, the tester cannot know whether the non-blocking is due to the product's specific design philosophy, its spyware categorization, or simple incompetence.
Which brings me back to the number one question on my mind while doing my own analysis: how do the results that Spycar looks for (behaviour blocking alone) represent a quality product?
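To illustrate why behaviour blocking alone is such a weak quality signal, here is a minimal Python sketch of a hypothetical monitor that flags writes to a watchlist of registry paths (the key paths and names here are illustrative, not taken from any real product). The point is that it fires identically for Spycar's harmless simulated write and for a real spyware's write, because the write event itself carries no information about intent:

```python
# Hypothetical behaviour monitor: it flags any write to a watched
# registry path, with no way to tell benign writers from malicious ones.
WATCHED_KEYS = {
    r"HKCU\Software\Microsoft\Internet Explorer\Main\Start Page",
    r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run",
}

def check_write(key_path: str, source: str) -> str:
    """Return a verdict for a simulated registry write."""
    if key_path in WATCHED_KEYS:
        # Same verdict regardless of the writer's intent.
        return f"BLOCKED write by {source}"
    return f"ALLOWED write by {source}"

# A Spycar-style harmless simulation and a real spyware look identical:
ie_start = r"HKCU\Software\Microsoft\Internet Explorer\Main\Start Page"
print(check_write(ie_start, "Spycar"))        # BLOCKED write by Spycar
print(check_write(ie_start, "real spyware"))  # BLOCKED write by real spyware
```

A scorecard built on such events rewards the blocking, not the judgement behind it.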
- With the large number of rogue and suspect antispyware products available, and given that adding code to specifically flag and block the Spycar suite looks like a day's work at most, an uninformed tester could end up rewarding rogue products for exhibiting exactly the expected behaviour and scoring them highly.
I can foresee the ads plastered all over the rogue applications' websites: "100% blocking for Spycar!"
Oftentimes, rogue applications actually betray the "trust" of end-users and maliciously flag valid security products for removal, further decreasing the overall security of the end-users' machines. Wouldn't this create an environment that further confuses end-users?
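How cheap would gaming such a test be? A sketch of the kind of "detector" a rogue product could ship to ace a Spycar-based review follows; the marker strings are purely illustrative assumptions, not Spycar's actual file names. It string-matches the test suite's own artifacts and nothing else, providing zero real protection while scoring perfectly:

```python
# Hypothetical rogue-product "detector": matches only the test suite's
# own artifacts (marker names are assumed for illustration), so it aces
# a Spycar-based test while catching no real spyware.
SPYCAR_MARKERS = ("spycar", "hklm_run", "hkcu_run")  # assumed names

def looks_like_spycar(process_name: str) -> bool:
    name = process_name.lower()
    return any(marker in name for marker in SPYCAR_MARKERS)

print(looks_like_spycar("Spycar_Test.exe"))    # "blocked" -> perfect score
print(looks_like_spycar("realspyware.exe"))    # real threat sails through
```

A day's work, as noted above, and the resulting scorecard is indistinguishable from a genuinely capable product's.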
- The simulated infections from the Spycar suite do not necessarily indicate malicious intent, though it is common for spyware to perform the simulated actions.
For example, blocking access to the Internet Explorer Options screen is an action most spyware performs, but it is also an option that corporations and public terminals, such as library kiosks, tend to use. If this kind of behaviour is expected to be blocked and the end-user prompted, wouldn't that alarm end-users unnecessarily?
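To make the dual-use point concrete: if I recall the IE policy registry layout correctly, both a kiosk lockdown script and a spyware dropper would set the very same restriction value, so the two write events are byte-for-byte identical to a behaviour blocker. The sketch below models this as data only; nothing is actually written, and the key path is from memory, so treat it as an assumption:

```python
# The same registry value can be written with opposite intents; the
# write event itself is indistinguishable. (Key path recalled from IE
# policy conventions; shown as data only, nothing is written.)
RESTRICTION = (
    r"HKCU\Software\Policies\Microsoft\Internet Explorer\Restrictions",
    "NoBrowserOptions",
    1,
)

writes = [
    {"actor": "library kiosk lockdown script", "event": RESTRICTION},
    {"actor": "spyware dropper",               "event": RESTRICTION},
]

# A behaviour blocker sees two identical events:
print(writes[0]["event"] == writes[1]["event"])  # True
```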
- Though protecting the end-user's system is the primary goal of any antimalware application, something I've started to learn from months of constant discussions at my primary job is that the best tool in the world is useless (to anyone) if it takes up 100% of the CPU and 24 hours just to run an on-demand scan.
Thus, there might be additional test coverage and consideration for areas such as:
- performance of the tool
- user-friendliness of the tool
- frequency of updates
- open-door policy for categorization of the spyware
- Defense against malicious software attacking the antimalware application itself is probably very important too, I believe. There's no use in having an antimalware application that can detect and block the behaviours simulated by Spycar if the application itself can be easily disabled before it gets the chance to block the malicious actions. Various viruses and malware have been seen specifically disabling the more popular antimalware applications, so defense against this is critical.
- The other important aspect of any antispyware application is the effectiveness of its removal system. No antispyware is going to be useful if all it can do is block and/or flag a spyware but has problems removing the persistent ones!
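Why persistent spyware defeats naive removal can be sketched in a few lines. This is a toy model, not any real product's removal logic: a watchdog restores the autostart entry the moment it is deleted, so a remover that deletes once and walks away fails, while one that deals with the watchdog first succeeds:

```python
# Toy model: a persistent spyware's watchdog restores its Run entry,
# so one-shot deletion fails; killing the watchdog first succeeds.
class ToyRegistry:
    def __init__(self):
        self.run_keys = {"EvilUpdater": r"C:\evil.exe"}
        self.watchdog_alive = True

def naive_remove(reg):
    """Delete the entry once and walk away."""
    reg.run_keys.pop("EvilUpdater", None)
    if reg.watchdog_alive:  # watchdog re-creates the entry immediately
        reg.run_keys["EvilUpdater"] = r"C:\evil.exe"

def thorough_remove(reg):
    """Stop the watchdog first (simulated), then delete the entry."""
    reg.watchdog_alive = False
    reg.run_keys.pop("EvilUpdater", None)

reg1 = ToyRegistry()
naive_remove(reg1)
print("after naive removal:", "EvilUpdater" in reg1.run_keys)     # True

reg2 = ToyRegistry()
thorough_remove(reg2)
print("after thorough removal:", "EvilUpdater" in reg2.run_keys)  # False
```

A blocking-only scorecard tells the tester nothing about which of these two behaviours a product exhibits.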
You can also read up on more professionally written insights on this issue on various antimalware related blogs such as Eset's (by Randy), McAfee Avert Labs, Sunbelt's and Eric Howes' full commentary on this issue.
I just find it amazing that an antispyware test article (and the earlier antivirus test article) did not include a single real-life malicious sample. It's not that hard to find one these days!
With the above points in mind, I wonder how anyone can use the findings from these reviews to decide which antimalware packages are best to use.
And that's that for my first knowledge-sharing post.
PS: this post was originally posted on the Offpoint blog, but I've decided to create a new blog to focus on posts like these.
technorati: antispyware, AntiVirus, testing, Antimalware