Search engine developers constantly ponder ways in which SERPs
(search engine results pages) might present only high-quality content to
users. But can quality be accurately measured? More to the point, can
search engines make what is essentially a subjective judgement about search
results using objective algorithms, and how does Google tackle the problem?
Cost
The answer to the first question largely depends on cost.
Analysing the quality of website content costs money, and the more
thoroughly a website is analysed the greater the cost, which is why search
engines tend to skim the surface and predict the rest. This technique is
flawed for the obvious reason that surface-level analyses cannot
guarantee accurate results – they are simply not a reliable measure of
quality.
The issue of cost was explored by Lawrence Wai, the inventor named on a
Yahoo! patent titled ‘System and method for development of search success
metrics’. While discussing the various problems faced by search engines
when they attempt to sort content or search results in order of quality,
Wai said, “It is also generally true that the higher a class is ranked,
the greater the cost of obtaining the metric”. In other words,
producing more reliable results requires search engines to spend more
money on qualitative analyses.
Measuring Search Results
If cost is a barrier to accuracy, search engines such as Google and Bing
are faced with the prospect of designing more intelligent systems to
evaluate the quality of search results. Automated systems are complex,
powerful and effective, but they lack the subjective reasoning power of
the human mind. Unless search results are monitored and assessed by real
people – a task too gargantuan and intrinsically flawed to be
realistically entertained – web users must rely on algorithms to display
the most appropriate, high-quality results for search terms.
An SEO company requires an understanding of how algorithms approve
and reject content if it is to provide clients with reliable advice, but
not even the most reputable SEO company knows for certain how the
search engines classify SERPs according to quality. Another question
that prompts speculation among SEO specialists concerns the extent to
which a search engine’s qualitative analysis of search results mirrors
its qualitative analysis of website content. Are they two sides of the
same coin, or do they rely on completely different methodologies? An
insight can be gained by reviewing patents filed by Yahoo! and Google.
The system invented by Wai aims to establish a “search success”
ranking for SERPs based on “page success metrics” and other important
criteria. Wai’s method draws on a range of signals, including
presentation, query reformulation, advertising, ranking, SRP
enhancements and diversity. Critically, the patent concedes that there is
no useful or reliable way to “evaluate the user’s perceptions of search
page results”.
Assessments of quality, therefore, cannot escape the objective
framework of search-engine algorithms. Search engines cannot manage
queries by hand, and users cannot be relied on to rank SERPs themselves
because of bias and fraud. Yahoo!’s patent also describes the analysis of
click-through-rate metrics to further improve search results – a technique
that is hardly invulnerable to deception or misreporting.
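To illustrate why click-through data is attractive yet easy to game, the sketch below (a hypothetical Python example, not drawn from the Yahoo! patent) aggregates a click-through rate for each result and flags rates that look implausibly high for their ranking position – the kind of anomaly that deliberate misreporting or click fraud tends to produce.

```python
from collections import defaultdict

# Hypothetical log records: (query, result_url, position, was_clicked)
impressions = [
    ("best laptops", "https://example.com/reviews", 1, True),
    ("best laptops", "https://example.com/reviews", 1, False),
    ("best laptops", "https://spammy.example/thin-page", 9, True),
    ("best laptops", "https://spammy.example/thin-page", 9, True),
]

# Rough prior for how often a result at each position is normally clicked.
# These numbers are illustrative, not measured values.
EXPECTED_CTR_BY_POSITION = {1: 0.30, 2: 0.15, 3: 0.10, 9: 0.02}

def click_through_rates(logs):
    """Aggregate impressions and clicks per (query, url, position)."""
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for query, url, position, was_clicked in logs:
        key = (query, url, position)
        shown[key] += 1
        clicked[key] += int(was_clicked)
    return {key: clicked[key] / shown[key] for key in shown}

def flag_suspicious(ctrs, tolerance=5.0):
    """Flag results whose CTR is far above what their position would predict."""
    suspicious = []
    for (query, url, position), ctr in ctrs.items():
        expected = EXPECTED_CTR_BY_POSITION.get(position, 0.05)
        if ctr > expected * tolerance:
            suspicious.append((query, url, position, ctr))
    return suspicious

rates = click_through_rates(impressions)
for entry in flag_suspicious(rates):
    print("Possible click manipulation:", entry)
```

Even a simple check like this shows the weakness the patent hints at: the signal only reflects what users appear to do, not whether the page actually satisfied them.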
Google is thought to have introduced a similar system with the recent Panda update,
relying on a specific range of search metrics to objectively analyse
results. The system is by no means perfect, of course, but it does
represent the cutting edge of search-engine technology. Until systems
are capable of adopting a subjective, human-like approach to evaluating
search results, users will have to make do with a series of complex
metrics that are picked apart and pieced together to estimate quality.
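As a rough illustration of what “pieced together” might mean in practice, the following sketch uses hypothetical signal names and weights – none of them Google’s actual Panda formula – to show how several objective metrics can be blended into a single quality estimate that stands in for human judgement.

```python
# Hypothetical quality signals for a single page, each scaled to 0-1.
# The names and weights are assumptions for illustration only.
signals = {
    "content_depth": 0.7,     # e.g. word count and topical coverage
    "originality": 0.9,       # e.g. share of non-duplicated text
    "user_engagement": 0.4,   # e.g. normalised click-through / dwell time
    "link_reputation": 0.6,   # e.g. quality of referring domains
}

weights = {
    "content_depth": 0.35,
    "originality": 0.30,
    "user_engagement": 0.20,
    "link_reputation": 0.15,
}

def quality_estimate(signals, weights):
    """Weighted average of the available signals – a proxy, not a judgement."""
    total_weight = sum(weights[name] for name in signals)
    return sum(signals[name] * weights[name] for name in signals) / total_weight

print(f"Estimated quality score: {quality_estimate(signals, weights):.2f}")
```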