http://strom.com/awards/359.html
We are entering a new computing era, one in which cheap processing power is enabling a new series of applications that are astounding and amazing, the stuff of science fiction. And I, for one, am excited about such opportunities.
Yesterday's New York Times carried a front-page story by John Markoff about what he calls flash mob computing, a fancy way of saying that a bunch of kids carried their PCs to a local university gym to set them up on a network and have them act as a cluster of computing nodes.
http://www.flashmobcomputing.org/
Back when I was teaching high school networking topics, we called these events LAN parties, and they principally were for the kids to gather together and play network-based games in someone's house. All you needed were plenty of power plugs, a router or a hub, and some space to set everything up.
The whole thing has me flashing back to 1987, when I was at PC Week and Barry Gerber of UCLA, Jan Newman of Novell, and Bill Alderson of Network General got a bunch of IBM PS/2s together to do the first LAN topology tests, running NetWare over Ethernet, ARCnet, and Token Ring (Ethernet won, by the way). What was similar was that we had extension cords running all over the UCLA building we were using for the makeshift staging area, and we kept running out of power plugs, not to mention the fans we brought in to try to keep the heat pouring off all those computers from frying us and the equipment too. Those tests cemented some solid friendships and professional relationships over the years with Gerber, Newman, and Alderson, too.
But the power and AC requirements notwithstanding, the software is what makes these Dean-like be-ins (I am showing my age, I know) possible. The software to assemble a cluster of computers is fairly well understood by now. Good examples include peer-to-peer projects such as SETI@home and others that can divide a single computing task among hundreds, thousands, and even millions of separate machines.
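To make that concrete, here is a minimal sketch of the work-unit idea in Python. It is my own illustration, not SETI@home's actual protocol: a coordinator chops one big job into independent chunks and hands them to whatever machines (simulated here with threads) volunteer their spare cycles.

    # Illustrative sketch only: a toy coordinator that splits one big job
    # into work units and farms them out to volunteer "machines" (threads
    # here). The real SETI@home protocol is far more involved.
    from concurrent.futures import ThreadPoolExecutor

    def make_work_units(data, unit_size):
        """Chop one large task into independent chunks."""
        return [data[i:i + unit_size] for i in range(0, len(data), unit_size)]

    def process_unit(unit):
        """Stand-in for whatever analysis each volunteer machine runs."""
        return sum(unit)

    if __name__ == "__main__":
        big_job = list(range(1_000_000))          # the single computing task
        units = make_work_units(big_job, 10_000)  # 100 independent work units

        # Each "worker" could just as easily be a PC at a LAN party or a
        # node in a flash mob cluster; here a thread pool simulates them.
        with ThreadPoolExecutor(max_workers=8) as pool:
            partial_results = list(pool.map(process_unit, units))

        print("combined result:", sum(partial_results))

The point is that each chunk can be processed independently, so adding more machines simply means handing out more chunks at once.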
It isn't just software. Blade servers are becoming less and less expensive, and now it isn't unusual to find racks and racks of computers that are used in everyday applications. A lot of people now have some pretty substantial horsepower at home; in some cases their home PCs are more powerful than the ones at work. And most of the time, these machines aren't doing much that taxes their CPUs, even with the latest Microsoft applications that grab more and more processing power to do the same tasks. But I digress.
Clusters-on-the-fly are just the tip of the iceberg. Other vendors are beginning to treat clustering as part of their default approach when building their applications. I got to see another application yesterday that extends the concept of harnessing lots of horsepower to something a bit more understandable, such as video compression.
Here's the problem: sending digital video around the Internet takes a very fat pipe. This is one of the reasons that copying DVDs hasn't taken off: it takes a long time to move all those bits, and you also need lots of processing power to encode and send the video out into the world. What I saw yesterday from a company called Broadcast International (brin.com) got me pumped, because what it is trying to do is compress this video signal as much as possible, so that the receiving end won't need super-duper power to view the video stream.
The issue hinges on using multiple video codecs, which are combinations of software and hardware routines designed for particular kinds of scenes and activities. The folks at BI have developed some nifty routines that allow a video server to switch codecs on the fly, from frame to frame, so that the right kind of compression routine is matched up with the particular kind of scene being filmed. For example, they told me about one codec that works well with smoke and fog scenes, and another with rain and water scenes.
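BI didn't share the details of its selection logic, so take this as a rough sketch of the general idea rather than the company's actual algorithm: classify each frame (or short run of frames) and route it to whichever encoder handles that kind of scene best. The codec names, the classifier, and the thresholds below are all invented for illustration.

    # Toy sketch of switching codecs frame by frame -- not BI's algorithm.
    # classify_scene(), the thresholds, and the codec names are invented.

    def classify_scene(frame):
        """Crude stand-in for scene analysis using simple frame statistics."""
        if frame["motion"] > 0.7:
            return "water"      # fast, chaotic motion such as rain or waves
        if frame["contrast"] < 0.3:
            return "smoke_fog"  # low-contrast, diffuse scenes
        return "general"

    # Map each scene type to the encoder that (hypothetically) handles it best.
    CODECS = {
        "water":     lambda f: ("codec_w", f),   # placeholder encoders
        "smoke_fog": lambda f: ("codec_s", f),
        "general":   lambda f: ("codec_g", f),
    }

    def encode_stream(frames):
        """Pick the matching codec for each frame and encode it."""
        for frame in frames:
            codec = CODECS[classify_scene(frame)]
            yield codec(frame)

    if __name__ == "__main__":
        sample = [
            {"motion": 0.9, "contrast": 0.6},   # rain scene
            {"motion": 0.1, "contrast": 0.2},   # foggy scene
            {"motion": 0.2, "contrast": 0.8},   # ordinary scene
        ]
        for codec_name, _ in encode_stream(sample):
            print("encoded with", codec_name)

However BI actually makes the choice, the payoff is the same: every stretch of video gets the compression routine best suited to it, instead of one codec being forced to handle everything.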
The beauty of the system is that you don't need to have much more software on the receiving end, beyond the usual media players from Real and Microsoft (with a small plug-in from BI), to view the encoded video stream. That makes a lot of sense.
Up until now, these codecs have been horsepower hogs, and there wasn't an easy way to switch from one to another without a lot of trouble. The company has developed algorithms to do this, and it also uses a bunch of clustered computers to encode the video signal. Again, none of this would have been possible just a few years ago, when having a dual-processor Pentium 300 was considered hot stuff.
Flash mob computing may get the headlines, but the processor-intensive applications that BI and others are building are the ones we'll all be using in the next few years. And from this work we should see other video-oriented applications that harness the Internet in new and interesting ways.
Entire contents copyright 2004 by David Strom, Inc.
David Strom, dstrom@cmp.com, +1 (516) 562-7151
Port Washington NY 11050
Web Informant (r) is a registered trademark with the U.S. Patent and Trademark Office.
ISSN #1524-6353 registered with U.S. Library of Congress
If you'd like to subscribe (issues are sent via email), please send an email to: mailto:Informant-request@avolio.com?body=subscribe.