
SQL/Standard Commercial series

By David Strom



One of the biggest problems with setting up client/server computing systems is that the

networking skills needed to assemble the various pieces can sometimes be daunting. I found

this out recently with a series of tests conducted at Standard Commercial Corporation in

Wilson, N.C. Standard is a multinational tobacco and wool corporation that was number seven in InfoWorld's 100 and was profiled in the September 19, 1994 issue (p. 89).

For such a far-flung company, they have remarkable uniformity in their gear: just about every one of their two hundred desktops runs Windows for Workgroups 3.11, and all of their file servers run Windows NT. They have some Sun servers as well, and servers in Turkey, England, and Africa.

When they first contacted me last summer about doing some on-site testing, the subject of client/server computing was mentioned, but I was more interested in another issue they had. Charles Ledbetter, a consultant with the company, was trying to move all their NetBIOS applications over to IP. He was tired of running multiple protocols on each desktop: they took more work to support and more time to set up properly, and since NT Advanced Server did such a good job running over IP, there was no need to support NetBIOS.

There was one application, however, that required NetBIOS, and that was a network fax server. So his original request seemed simple: find a fax server that could run on NT and use IP protocols. I came up empty-handed, surprisingly enough. But that's a story for another "on-site" series.

While we were scouring the world looking for fax servers, Ledbetter and his developers came across an interesting problem. They had been actively developing a variety of database applications to run on top of various SQL servers, including those from Microsoft and Sybase, and used tools from Microsoft and Powersoft, among others, to develop Windows front-end applications. One such application kept track of their tobacco inventory, and others were used to track orders and customers. This is the usual stuff that lots of companies are doing these days.

Because their network literally spans the globe, Standard was concerned about the network traffic generated between client and server: they wanted to make sure that their networks could support all the transactions and that their telecommunications costs would be reasonable. They were also concerned about the relative performance of the two database servers, and hopeful that they could migrate their Unix applications over to NT.

But before they could do this, they had to figure out why NT was so slow in returning the results of some sample queries. This surprised both Ledbetter and me. "Since all of our earlier tests indicated that SQL Server for NT was up to six times faster than Sybase on Unix, we were very curious about these results," he said.

The in-house developers at Standard, including Randy Rabin, set up the test bed described in the sidebar. They then wrote a small SQL query that they expected to produce about 150 kilobytes of data in 1,800 rows, or roughly 85 bytes per row.

What they got was more than they bargained for. Instead of 150 K of results, they got almost half a megabyte when a Windows client was run against the NT server, and it took over 2,000 Ethernet frames to carry all this data between client and server. Where was all the overhead coming from, and what was causing it?

After spending some time with a Sniffer, they could see a pattern developing: the client would request data from the server in 512-byte chunks.

We then set up the query to run against the Sybase Unix server, this time running IP, of course. Again using the Sniffer, we saw that there was still some overhead in producing the results, only this time we saw 277 kilobytes of data in just over 600 Ethernet frames.

Fewer Ethernet frames were needed to send the results because the IP client could take the

information in bigger chunks than the NetBIOS client. The Sniffer traces showed that the IP

network wasn't as much of a bottleneck as the NetBIOS one -- which is ironic, because

supposedly one of the things that NetBIOS is better at than IP is LAN transport. And IP

traditionally has a bad rap when it comes to networking overhead, which wasn't the case here.
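A little back-of-the-envelope arithmetic (my figures, derived from the frame counts above) makes the difference plain: 455 K spread across 2,059 frames works out to an average of roughly 225 bytes of result data per frame, while 277 K in just over 600 frames averages closer to 470 bytes per frame, on a medium whose frames can each carry about 1,500 bytes of data.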

"At this point, we didn't know where the problem was. Either SQL Server was doling out 512

bytes to the network transport layer, or the client workstation had a 512 byte limit set

somewhere," said Ledbetter. The only variable that seemed likely was the size of the packets

that were sent between client and server. But who was responsible: the client or the server?

Next week we'll find out.



Test bed infographic:



SQL Server 4.21: Compaq ProLiant 4000 with dual 66-MHz Pentium processors, 128 MB of RAM, and four 1-gigabyte disk drives formatted with the NT file system. NT version 3.1 running both IP and NetBIOS protocols.



Sybase 4.91: Sun 630 MP with a single processor and 128 MB of RAM, and two 2.5-gigabyte disk drives mirrored to two other drives.



Network: 10BaseT Ethernet. Workstations were configured with Windows for Workgroups 3.11 using a variety of Xircom and 3Com EtherLink III adapters. Query tools used on the workstations included Powersoft's PowerBuilder and the bundled Windows and DOS tools from Microsoft. Network General's Sniffer analyzer was used to monitor network traffic.







Part two:



Last week I began to describe my work with Standard Commercial Corporation, a multinational tobacco and agricultural company based in North Carolina. Charles Ledbetter and Randy Rabin, two analysts at the company, had run some tests comparing the effect of changing the packet size on returning results from an NT-based SQL Server to a DOS client. We were trying to find out why the NT server was so inefficient compared to a Unix-based Sybase server in processing SQL queries, and whether the cause lay with the client or the server end of the system.

To track down this problem, the guys from Standard used the DOS-based ISQL command-line query program that ships with Microsoft's SQL Server. ISQL allows you to specify the packet size used by the client explicitly as one of the command-line parameters (you use a "-a" followed by the packet size in bytes). We tested packet sizes ranging from 4,000 to 32,000 bytes to see how the results varied. All of the tests used NetBIOS connections between client and server. The table below shows what we found.
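For example, a test at an 8,192-byte packet size would be kicked off with a command along these lines (the server name, login, and script file here are placeholders of mine, not Standard's actual settings):

    isql -S SERVER1 -U sa -P secret -a 8192 -i query.sql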

As we increased the packet size, we got fewer frames and less data transmitted over the wire.

That's the good news. The bad news was that Ethernet utilization increased dramatically as

we upped the packet size. "Clearly, setting the packet size at 512 bytes is inefficient. There is

simply too much overhead when large SQL Server result sets are returned," said Ledbetter.

And 32 K is too big, because then "SQL Server waits until it has 32 K worth of data before sending anything, which means that the client sees its data window fill in noticeable bursts. Plus, with these large packet sizes we saw a single client use almost ten percent of overall Ethernet bandwidth. With lots of clients, this could create congestion problems on your network," he said.
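By my arithmetic, on a 10-megabit 10BaseT network that ten percent figure works out to about a megabit per second, or roughly 125 K of data every second, from a single workstation.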

With these tests, we found out that the client is the culprit, not the server, which surprised all

of us. "The packet size is driven by the client," said Rabin. It is set by the client when it

connects to the server, and is fixed for the duration of the conversation between the client and

server. "A given client may have connections to several servers and negotiates a packet size

for each connection independently," said Ledbetter. 

That was reassuring, but we still had a slight problem. Now that we understood the problem, we wanted to apply our knowledge and improve things on Standard's network. And that was going to be difficult, because they don't have complete control over how to set this parameter. "If you write all your SQL code in C++, setting the packet size is easy," said Rabin. "The client-side Windows dynamic link libraries that ship with SQL Server provide these functions to the developer." According to our research, W3DBLIB.DLL and DBNMP3.DLL are the two files that support the packet size function call, which is called DBSETLPACKET.
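To give you a flavor of what Rabin means, here is a minimal sketch of how a DB-Library client written in C might request a bigger packet size at login time. The server name and login are placeholders of my own, and real code would check the return values:

    #include <windows.h>
    #include <sqlfront.h>
    #include <sqldb.h>

    void open_with_big_packets(void)
    {
        LOGINREC  *login;
        DBPROCESS *dbproc;

        dbinit();                      /* initialize DB-Library           */
        login = dblogin();             /* allocate a login record         */
        DBSETLUSER(login, "sa");       /* placeholder login name          */
        DBSETLPWD(login, "secret");    /* placeholder password            */
        DBSETLPACKET(login, 4096);     /* ask for 4 K packets up front    */

        dbproc = dbopen(login, "SERVER1");  /* connect to the server      */

        /* ... send queries with dbcmd() and dbsqlexec() as usual ... */

        dbclose(dbproc);               /* drop the connection             */
        dbexit();                      /* shut down DB-Library            */
    }

Note that the packet size is locked in before dbopen() makes the connection, which squares with what Rabin told us about the size being fixed for the life of the conversation.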

The issue is with other client-side tools, such as PowerBuilder, the tool in which Standard had written most of its applications. "In PowerBuilder 3.0, which is what we use, you have no access to this function call," said Rabin.

We called Powersoft to find out if they were planning on including any control over packet size in their next release, which should be out by now. They told us that packet size "isn't a variable we're concerned with." [MORE TO COME TK] Au contraire.

"Until Powersoft supports this option, we'll either have to rewrite our applications in C++ or

else write our own DLL to set the packet size directly," said Ledbetter. "Neither are very

attractive options, since it would basically mean rewriting or recompiling all of our

applications."

Another option for Standard is to use IP connections between its Windows clients and SQL Server. It turns out that 512 bytes is the default packet size for the NetBIOS requestor, while the default for the IP SQL requestor is about three kilobytes, or six times the NetBIOS default. Running IP would increase their packet size without having to recompile or rewrite their applications, but they would have to reconfigure all of their clients.

It was an interesting journey for me: I started out looking for IP-based fax servers and ended up looking at IP-based database servers. And along the way I learned a lot about how client/server databases are set up. As a sanity check, I showed Standard's results to Rich Finkelstein of Performance Computing in Chicago. Rich is a SQL guru and was excited by the results. "I never thought about packet size before this, and I am sure that not too many people have given it much consideration," he told me. He recommends that other corporate developers take their Sniffers out and run similar tests to see what is actually going on over their networks.

We can see that 512 bytes is too small for sending large result sets, and 32 K is too large for almost every application, unless you can segment your network carefully, have lots of bandwidth to spare, and have lots of RAM on your server to handle caching the requests. Somewhere in between these two numbers is best, which is what Microsoft recommends as well. "We think between four and eight kilobytes is ideal," said Rabin. "Now we just have to convince Powersoft to allow us to control this parameter." Unfortunately, Powersoft isn't alone here: if you use other front-end application builders, you'll have to convince those vendors to pay attention to the packet size parameter, too.





Results infographic:

Testing packet size for client/server applications



Packet size (bytes)   Bytes returned   Ethernet frames returned
512                   455 K            2,059
4,096                 279 K            526
6,144                 271 K            449
8,096                 266 K            383
32,768                257 K            278
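Do the division and the trend is clear: the average payload per Ethernet frame climbs from roughly 225 bytes at the 512-byte setting to over 900 bytes at 32 K, closer and closer to the 1,500 bytes or so that a single frame can carry. That is where the diminishing returns in the frame counts come from.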


David Strom, Port Washington, NY 11050 USA. Tel: +1 (516) 944-3407