More on measuring online ad effectiveness

http://www.strom.com/awards/62.html

David Koenig, the director of Online Sales & Marketing at Microtest (dkoenig@microtest.com), wrote to me about my last issue, WI #61, on measuring ad effectiveness. He had some thoughts about how Microtest used ads for its own new product launch. Take it away, David.

Our launch plans called for the placement of up to ten different banner ads around the Web, including Altavista, Excite, Jumbo, Inc., USA Today, etc. Since online advertising is fairly new and pricing is all over the map, we wanted to be able to determine EACH ad's overall effectiveness using several different measurements, including banner ad impressions, hits and visitor sessions on the product pages, and evaluation downloads.

Taken together, these measurements told us how effective each ad placement was at getting a visitor, keeping a visitor (via our content and their interest), and turning a visitor into a customer. Except for banner ad impressions, all of the data was in-house, accurate, easily available, and complete.

To accurately measure ad effectiveness, we took our new product's web pages and mirrored them, placing each set of HTML pages into one or more "advertiser directories". These advertiser directories sat at the same directory level as the product pages visitors reach via our Web site, which allowed us to use relative addressing for links. We used HREFs instead of image maps for navigation (a strategy we believe should be used far more often than it is today). Also, we measured hits and visitor sessions only on pages, not on objects such as images or graphics, so we didn't need to worry about mirroring those.
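To picture the setup, here is a hypothetical layout (the directory names are invented for illustration; David doesn't give the real ones):

    webroot/
        product/    <- pages visitors reach through our own site's navigation
        alta/       <- identical copy, reached only from the Altavista banner
        excite/     <- identical copy, reached only from the Excite banner

Because every page navigates with a relative link such as <A HREF="specs.html">, the same HTML files work unmodified in whichever directory they are copied into.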

In addition, each advertiser directory was marked "no robots", and neither the directory nor its pages could be reached by indexing its parent directory. Also, nothing linked to an advertiser directory or its pages except the ad itself. All of this was designed to screen out visitors who did not come from the banner ad.
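One common way to handle the "no robots" part (again using the hypothetical directory names above, not Microtest's actual paths) is a robots.txt file at the server root:

    User-agent: *
    Disallow: /alta/
    Disallow: /excite/

The other two conditions are a matter of server configuration and discipline: turn off automatic directory listings so that browsing the parent directory reveals nothing, and never link to the mirrored pages from anywhere but the banner itself.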

This let us maintain one set of identical web pages and simply copy them en masse into an advertiser directory whenever one was needed. When an ad had run its course, the advertiser directory and its pages could be easily retired.
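As a rough sketch of that copy-and-retire step in today's terms (Python, with the hypothetical directory names from above; Microtest's actual tooling isn't described):

    import shutil

    MASTER = "product"      # the master set of product pages

    def create_mirror(ad_dir):
        # Stamp out an identical copy of the product pages for one ad.
        shutil.copytree(MASTER, ad_dir)

    def retire_mirror(ad_dir):
        # Remove an advertiser directory once its ad has run its course.
        shutil.rmtree(ad_dir)

    create_mirror("alta")   # the Altavista ad goes live
    retire_mirror("alta")   # campaign over, retire the pages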

To analyze the information we gathered, we developed an application to import our server's access log records into an Oracle database. Analysis, via database queries, was then simple and straightforward.
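Here is a small, hypothetical sketch of that import step, using Python and SQLite in place of their custom application and Oracle, and assuming the server writes a Common Log Format access log:

    import re, sqlite3

    # Typical Common Log Format line:
    # host - - [10/Mar/1997:12:34:56 -0500] "GET /alta/index.html HTTP/1.0" 200 4321
    LOG_LINE = re.compile(r'(\S+) \S+ \S+ \[(.*?)\] "(\S+) (\S+) [^"]*" (\d{3}) (\S+)')

    db = sqlite3.connect("weblog.db")
    db.execute("CREATE TABLE IF NOT EXISTS hits "
               "(host TEXT, stamp TEXT, method TEXT, path TEXT, status INT)")

    with open("access_log") as log:
        for line in log:
            m = LOG_LINE.match(line)
            if m:
                host, stamp, method, path, status, _size = m.groups()
                db.execute("INSERT INTO hits VALUES (?, ?, ?, ?, ?)",
                           (host, stamp, method, path, int(status)))
    db.commit()

A query per advertiser directory then does the analysis, for example counting page hits with something like SELECT COUNT(*) FROM hits WHERE path LIKE '/alta/%.html'.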

You probably want to know which ad was the most effective. It depends on what your goals are.

If the goal is product exposure, then sites like Altavista and Excite generated the most traffic. But if the goal is to get people to download an evaluation copy of the product, because those folks are the most likely candidates to end up buying it, then other sites where we advertised had a better visitor-to-download ratio.

For example, if an ad generated 100 visitors and 20 people downloaded the product, the ratio is 5-to-1, or 20% (this is very good). If another site generated 1,000 visitors and only 20 people downloaded the product, the ratio is 50-to-1, or 2% (just a bit below industry averages). So the higher the ratio (or the lower the percentage), the poorer the results.
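In code, the arithmetic is simply (numbers taken from the example above):

    def download_ratio(visitors, downloads):
        # Returns (visitors per download, percent of visitors who download).
        return visitors / downloads, 100.0 * downloads / visitors

    print(download_ratio(100, 20))     # (5.0, 20.0): the 5-to-1 site
    print(download_ratio(1000, 20))    # (50.0, 2.0): the 50-to-1 site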

But you have to consider the cost of the ad. In the above example, the cost to get one download from the 50-to-1 site might actually be lower than the cost to get one download from the 5-to-1 site. This can occur when you place ads on widely accessed pages rather than on pages more narrowly targeted at your product.
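To make that concrete with invented numbers (neither price appears in David's note): suppose the broad-reach, 50-to-1 site charged $500 for the placement and the targeted, 5-to-1 site charged $600.

    # Hypothetical ad prices; only the visitor and download counts come from the text.
    broad    = {"cost": 500, "visitors": 1000, "downloads": 20}   # 50-to-1 ratio
    targeted = {"cost": 600, "visitors": 100,  "downloads": 20}   # 5-to-1 ratio

    for name, ad in (("broad", broad), ("targeted", targeted)):
        print(name, "costs $%.2f per download" % (ad["cost"] / ad["downloads"]))

    # broad costs $25.00 per download
    # targeted costs $30.00 per download

Despite the much worse visitor-to-download ratio, the broad-reach placement delivers each download more cheaply in this made-up case.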

Our bottom line: the choice of where to advertise online is still a bit of trial and error. We have advertised on the web for about five months, and over that time we have developed some tactics that enable us today to accurately measure online advertising effectiveness.

Hopefully this will help others as well.

Thanks, David. Some good ideas for others, and thanks for letting me share them with my list. I don't think this is the only way to run your web ad program, but it certainly shows that you have to spend some energy, and use the tools at your disposal, to make web ads more effective.

Site-keeping and self-promotions dep't

The April edition of Windows Sources has my latest Browser column, Host Out-of-Site Presentations, which describes how to use Databeam's net.120 Conference Server to share presentations among web users across the Internet.

Many of you are interested in keeping up with the latest stuff on web server benchmarks. Here are two noteworthy links:

First is a paper by Bradley Chen, a Harvard computer science professor, done for Lotus on measuring web server dynamic content. Existing web server benchmarks measure the ability to deliver static HTML pages; Chen has begun work on something that can be used to measure dynamic content performance.

And an article in Mac Week shows that Macs aren't as good performers as NT and Unix machines when it comes to web servers. This could be due more to the Mac's poor IP implementation than anything else. And while Macs are poorer performers, they still deliver pages faster than most of us need, especially over a 28.8 dial-up connection.

David Strom
david@strom.com
+1 (516) 944-3407
back issues
entire contents copyright 1997 by David Strom, Inc.