http://strom.com/awards/381.html
My experience with Web-based advertising boils down to this: Everyone is Still Clueless.
Imagine this scenario: you are a magazine publisher. You charge a high cost per
thousand impressions (that's CPM, back when the metric system was king) so that
people think you have well-heeled readers. You do some focus groups that show
that people read and remember the ads in your magazine from cover to cover. You
do some other analyses that show that your readers buy more stuff than the
competition, and moreover, influence other buyers of more stuff.
All well and good so far. But there are a few
complicating factors: You don't know the addresses of your subscribers and have
limited demographic information -- people lie on surveys and moreover, lie in
unpredictable ways. In fact, your readers misrepresent themselves big-time. You
can't even be sure if they tell the truth about their sex, location of their
residence, or even the nature of their employers. Second, someone at the
printing plant screwed up and you don't even know how many issues were printed.
Third, many newsstands didn't distribute your issue, because they were closed
for random holidays which you knew nothing about. And fourth, readers have
passed along your magazine in their own distribution network and you don't know
how, why, or how many are doing this. Finally, many libraries carry your
magazine, but they tear out the ads before putting them on the shelves.
I wrote the above back in 1996 (WI #21).
I thought about this as I was reading the current issue of Network Computing,
where they evaluate various Web analytics software packages. The article is
must reading for anyone who is looking to replace their current package and
can afford the six-figure tab for outsourced service providers who can help
analyze their traffic:
http://www.nwc.com/showitem.jhtml?docid=1515f1
As I said in 1996, I welcome a day when we can keep track of Web site visitors and
count heads as well as we can in the print world (which, to be honest, isn't
all that great to begin with). We have made some small progress. But re-reading
my essays shows that we haven't come as far as I would have liked.
For example, the current analytics software tool we use here at CMP still has trouble separating page views generated by real people from those generated by spiders, bots, and other automated site crawlers. When I shut down one of our Web sites a few weeks ago,
our stats system was claiming over 30,000 page views a week. Looking deeper
into the log files, I came to the conclusion that most of those views were from
computer programs, not real people. This is one of the reasons why CMP will be
installing a better system in the coming weeks, based on Omniture.
If you read the Network Computing story above, you'll see comments from our own analytics
manager incorporated into the article. (You don't get more real-world than
that.) We'll see how well the product will work.
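To make the bot problem concrete, here is a minimal sketch of the kind of triage involved: flag page views whose user-agent string looks like a crawler and count only the rest. The user-agent patterns and sample entries below are invented for illustration, and this is not the logic of our stats system or of Omniture.

```typescript
// A minimal sketch of separating likely-human page views from crawler traffic.
// The patterns and sample log entries are invented for illustration.

interface PageView {
  path: string;
  userAgent: string;
}

// A few user-agent fragments that commonly identify crawlers. Any real list
// would be far longer, and would still miss bots posing as ordinary browsers.
const BOT_PATTERNS: RegExp[] = [/bot/i, /crawler/i, /spider/i, /slurp/i];

function isLikelyBot(view: PageView): boolean {
  return BOT_PATTERNS.some((pattern) => pattern.test(view.userAgent));
}

// Hypothetical entries parsed out of an access log.
const views: PageView[] = [
  { path: "/index.html", userAgent: "Mozilla/4.0 (compatible; MSIE 6.0)" },
  { path: "/index.html", userAgent: "Googlebot/2.1 (+http://www.google.com/bot.html)" },
  { path: "/story.html", userAgent: "Mozilla/5.0 (compatible; Yahoo! Slurp)" },
];

const humanViews = views.filter((view) => !isLikelyBot(view));
console.log(`Total page views: ${views.length}, likely human: ${humanViews.length}`);
```

Even a much longer pattern list misses crawlers that masquerade as ordinary browsers, which is one reason the head counts stay fuzzy.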
In the early days of the Web, I was frustrated by the fact that while servers
produced log files of every visitor, the analytics programs didn't count
exactly the same way. I wrote in WI
#61 back in 1997:
"There
is a problem with the way that all Web servers maintain their access logs. To
demonstrate this, I ran my own access log through about half a dozen different
analyzer programs. To my surprise, there was little agreement on just about any
metric: the total number of hits, the proportion of Microsoft vs. Netscape
browser users, the pages most often accessed, and so forth. Sometimes, the
numbers were off by more than five percent. Why the difference? I don't know,
although I suppose it could be because some programs counted gifs and jpegs as
actual page views."
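To show how two honest analyzers can disagree, here is a toy example using made-up Apache-style log lines. One counting policy treats every request as a hit; the other throws out requests for gifs and jpegs before counting page views.

```typescript
// Two counting policies applied to the same (made-up) access log: count every
// request, or count only requests that are not embedded images.

const logLines: string[] = [
  '10.0.0.1 - - [01/Jan/1997:10:00:00 -0500] "GET /index.html HTTP/1.0" 200 5120',
  '10.0.0.1 - - [01/Jan/1997:10:00:01 -0500] "GET /logo.gif HTTP/1.0" 200 800',
  '10.0.0.2 - - [01/Jan/1997:10:00:05 -0500] "GET /story.html HTTP/1.0" 200 7300',
  '10.0.0.2 - - [01/Jan/1997:10:00:06 -0500] "GET /photo.jpg HTTP/1.0" 200 20480',
];

// Pull the requested path out of a common-log-format line.
function requestedPath(line: string): string {
  const match = line.match(/"GET (\S+) /);
  return match ? match[1] : "";
}

const everyRequest = logLines.length;
const pagesOnly = logLines.filter(
  (line) => !/\.(gif|jpe?g)$/i.test(requestedPath(line))
).length;

console.log(`Counting every request: ${everyRequest}`); // 4
console.log(`Ignoring images:        ${pagesOnly}`);     // 2
```

Run those two policies against the same log and the totals differ by a factor of two; neither analyzer is strictly wrong, they are just not counting the same thing.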
Today we don't run the log files through analyzers, because we can add JavaScript tags to our pages and count visitors in real
time. But we still have the problem of consistency. One issue not addressed by
the otherwise excellent Network Computing article is this one: they tried each
analytic program out for a different time period on their site. It would have
been interesting to check and see if each service collected the same number of
hits, or if this is still a problem.
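For readers who haven't looked inside one, a page tag is just a snippet that asks a collection server for a tiny image as the page loads, so the visit is counted in real time rather than mined out of log files later. A bare-bones sketch follows; the collector URL and parameter names are hypothetical, not any vendor's actual tag.

```typescript
// A bare-bones page tag: request a 1x1 image from a collection server so the
// visit is recorded as it happens. The collector URL and parameter names are
// hypothetical; real tags send far more detail.

function firePageTag(collectorUrl: string): void {
  const params = new URLSearchParams({
    page: document.location.pathname,
    referrer: document.referrer,
  });
  // A first-party cookie, set elsewhere, would normally ride along on this
  // request so repeat visitors can be told apart from first-time ones.
  const beacon = new Image(1, 1);
  beacon.src = `${collectorUrl}?${params.toString()}`;
}

// Typically called once per page load from a small script block on the page.
firePageTag("http://stats.example.com/collect");
```

Because only pages carrying the tag get counted, and crawlers generally don't execute the script, the numbers come out differently from a log-file analysis, which feeds right into the consistency problem.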
Back in 1996-97, I was an early user of Doubleclick, a pioneer in developing ad targeting software based on a visitor's IP address and cookies. We have come a long way since then; very sophisticated targeting programs are available on many sites today. I wrote back then:
"Targeting
is important: any advertiser can tell you that. But what makes the Web hard to grok is that there is no predictable premium position (in
the print trade, certain pages in the magazine are read by more people: the
covers, opposite certain editorial sections, and so forth). This is because
people navigate Web sites in different ways, and enter and leave them almost at
random.
The trick is being able to match your ad with the right kind of editorial content.
So an ad for Netscape's servers, for example, would draw better when it is
placed near a story having to do with web servers. (The online publication Web
Review found a three-fold improvement when they moved Netscape's ad to a 'better'
spot.)"
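The matching logic in that quote doesn't have to be exotic. Here is a rough sketch, with an invented ad inventory and topic labels, of contextual placement: pick the ad whose advertiser cares about the page's editorial topic, and fall back to a run-of-site sponsor when nothing fits. Audience data derived from cookies or IP addresses can be folded into the same lookup.

```typescript
// A rough sketch of contextual ad targeting: match the ad to the editorial
// topic of the page. The inventory and topic labels are invented.

interface Ad {
  advertiser: string;
  topics: string[];
}

const inventory: Ad[] = [
  { advertiser: "Server vendor",   topics: ["web servers", "hosting"] },
  { advertiser: "Modem maker",     topics: ["networking", "remote access"] },
  { advertiser: "General sponsor", topics: [] }, // run-of-site fallback
];

function pickAd(pageTopic: string): Ad {
  const match = inventory.find((ad) => ad.topics.includes(pageTopic));
  return match ?? inventory[inventory.length - 1];
}

console.log(pickAd("web servers").advertiser); // "Server vendor"
console.log(pickAd("recipes").advertiser);     // "General sponsor"
```

The hard part, then as now, is not the lookup but knowing the topic of the page and the interests of the visitor with any confidence.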
Aside from the fact that I am dating myself by citing Netscape as a server vendor, pretty much what I wrote seven and eight years ago still holds true. Back then, an online ad service vendor called Focalink went so far as to say that it didn't matter whether their traffic numbers agreed with your site's own log files: what counted was that they and you made the same mistakes consistently. Needless to say, they are out of business.
Back in 1997, I asked "What's your typical online media planner to do? I think
many of them will quickly tire of trying to learn the ins and outs of hits,
page views, click-throughs and the like and opt for
entire site sponsorships, targeted deals, and deals between Web and other
media." And that's what I am working on right now: a series of
micro-targeted sites that sell sponsorships. In another column many years ago,
one of my readers gave these great suggestions, all of which still hold true today:
As I said eight years ago, on the Web, no one knows that you are even a
carbon-based life form, let alone a dog. Everyone, including this writer, is still
clueless.
Entire contents copyright 2004 by David Strom, Inc.
David Strom, dstrom@cmp.com, +1 (516) 562-7151
Port Washington NY 11050
Web Informant is a registered trademark; ISSN #1524-6353 is registered with the U.S. Library of Congress.
If you'd like to subscribe (issues are sent via email),
please send an email to:
mailto:Informant-request@avolio.com?body=subscribe.