It rarely pays to be popular, unless you're a film star or an amusement park. On the Internet, popularity and cost often go hand in hand. The more traffic you get (unless it's tightly bound to sales), the more money it costs you. This is because most bandwidth usage is centralized: one server and one network feed all of the hungry mouths, and you pay the piper for spikes and sustained use.
In mid-March, I narrowly averted a $15,000 bandwidth bill that arose when I offered Real World Adobe GoLive 6 as a free PDF. The site at which I hosted the download is a Level 3 Communications co-location customer, and Level 3 charges on sustained bandwidth.
The book had 10,000 downloads, representing nearly 250 gigabytes, in just 36 hours. I averted any cost because Level 3 drops the busiest five percent of hours each month, which works out to just over 36 hours in an average month. My 36th-busiest hour, it turned out, was a few megabits per second; my 37th or so, below 500Kbps.
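Level 3's scheme is a variant of the common 95th-percentile billing model: sample throughput at regular intervals over the month, sort the samples, discard the top five percent, and bill on the highest sample that remains. A minimal sketch of the idea (hourly samples and the numbers here are illustrative, not Level 3's actual terms):

```python
def billable_rate(samples_mbps, drop_fraction=0.05):
    """Return the 95th-percentile billing rate: sort the per-interval
    throughput samples, discard the busiest 5%, and bill on the
    highest sample that remains."""
    ordered = sorted(samples_mbps)
    keep = len(ordered) - int(len(ordered) * drop_fraction)
    return ordered[keep - 1]

# A 720-hour month: mostly idle, with a 36-hour download spike.
# The spike falls entirely inside the dropped 5%, so the billable
# rate is the quiet baseline, not the peak.
month = [0.4] * 684 + [8.0] * 36
print(billable_rate(month))  # 0.4
```

Had the spike lasted 37 hours instead of 36, the 37th-busiest sample would have landed inside the billable 95 percent, and the whole month would have been billed at the peak rate instead.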
If I were living in the future, the scenario might have been less harrowing. Systems that involve peer-to-peer file sharing -- which distribute parts or entire files across a whole system to reduce strain on any one part -- and edge server-to-edge server networks, like Akamai's system of pushing files topologically closer to downloaders, would make the effective cost of bandwidth lower by never straining single locations.
Coping with the stress of a potentially huge bill, and then its aftermath, taught me a lot about managing expectations, dealing with high-bandwidth needs, and the true current cost of bandwidth. It also gave me a chance to reflect on how current distributed file sharing either requires too much work or involves too many unrelated political and legal issues.
(For more on the social costs and the follow-up on what happened, you can read the New York Times article I wrote that appeared in April 2003. The Times requires registration and is now charging for access to the archives after about seven days.)
When I started buying in-house bandwidth at the T-1 level (1.544Mbps dedicated digital service) back in 1994, it cost about $2,000 per month. And that didn't include unlimited bandwidth: after a small number of gigabytes, I was charged $50 for each additional gigabyte. I, in turn, passed this cost on to my Web hosting clients.
In the near decade since, bandwidth costs have dropped but not plummeted; you still have to shop around. If you want in-house bandwidth with more than just a few static IP addresses, you'll wind up spending several hundred dollars a month on the low end for 512Kbps DSL to T-1 service in places, like Seattle, with lots of competition. In other parts of the country, with older infrastructure or less competition, a T-1 could still top $2,000 per month.
On top of the actual local loop, the line that links your network to the ISP or network provider, most companies charge an excess bandwidth fee. I switched my office network from one Seattle-based firm to another, Speakeasy Networks, because the first firm only allowed a few tens of gigabytes of traffic a month before charging $30 per gigabyte. (They currently charge only $10 per gigabyte.)
Speakeasy Networks has pursued a more reactive model: they let you eat all the bandwidth you want, and only monitor for abuses, shutting down illegitimate uses of the network, such as warez, porno, or scams.
Most of us want high-availability throughput for our web sites in the single- or double-digit Mbps range without paying for it individually in our homes or offices. We turn to hosting or co-location. Monthly fees cover the basics, including storage on a server or the rack space, electricity, backups (battery and data), and air conditioning. Most hosting and co-location companies set maximum monthly bandwidth usage as part of a level of service and charge by the meg or gig thereafter.
The rates can still vary from reasonable to ridiculous. In researching co-location and web hosting companies to find out the going rates (and to find a location for my book-price comparison site, isbn.nu, which had outgrown my office's 768Kbps SDSL line), I found you could pay anything from $1 to $100 per gigabyte without any good reason for the disparity.
I chose to move isbn.nu to a local co-location and hosting company, digital.forest, which has a nearly 10-year history, an eternity in Internet time. Given my bandwidth near-blowout, their rate of $1 per gigabyte after the first 40GB seemed delightful to me. (I also noted that they had redundant fiber optic lines: one running north and one south from their suburban location northeast of Seattle.)
digital.forest offered me a 10Mbps connection, and showed me their current capacity and utilization, all of which made me confident in their ability to deliver.
If I'd chosen to go with Level 3, I could have gotten 100Mbps feed as part of the basic deal, but there's terror that goes with that. Level 3's pricing in Seattle starts under $1,000 for a cage with a sub-1Mbps sustained bandwidth utilization, but costs $1,000 per Mbps above that; in the high Mbps, you start to pay less, and you can contract for higher bottom levels, too.
For many people, shared space on a server with access to ASP, PHP, JSP, MySQL, and other servers and languages is really all that's needed, and hosting instead of a dedicated server can more than suffice. Oddly, though, bandwidth costs tend to be much, much higher for hosting than co-location, when you'd think the reverse would make more sense.
EarthLink, for instance, allows a reasonable amount of bandwidth per month for its dial-up and DSL customers' sites, but cuts you off if you exceed a limit that they don't precisely define. Here's how they explain it:
Each member's free webspace is allocated a certain amount of traffic per month (traffic is calculated on a formula multiplying the number of hits that your site receives by the size of your files). If a site exceeds its maximum monthly allotment of traffic, the site will become unavailable until the beginning of the next calendar month. A site that exceeds the EarthLink Member's maximum allotment in size will also become unavailable. Unavailability includes but may not be limited to the inability to access the site publicly or to publish to or modify the site's contents via certain Web creation tools. More information about appropriate use of the free member webspace appears under Free Webspace Community Guidelines.
Follow the link and you find one concrete detail: each member's free webspace is allocated at least 1GB of traffic per month. The rest of the policy repeats the same language nearly verbatim.
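EarthLink's formula, hits multiplied by file size, makes it easy to estimate how quickly a 1GB allotment disappears. A back-of-the-envelope sketch, using made-up traffic numbers rather than EarthLink's actual accounting:

```python
def monthly_traffic_gb(hits, avg_file_bytes):
    """Traffic as EarthLink describes it: the number of hits a site
    receives multiplied by the size of the files served."""
    return hits * avg_file_bytes / 1e9

# A modest site: 500 visitors a day for 30 days, each pulling
# roughly 200KB of pages and images.
print(monthly_traffic_gb(hits=500 * 30, avg_file_bytes=200_000))  # 3.0
```

Even that modest site blows through a 1GB allotment in the first third of the month, which is why the undefined cutoff matters.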
Customers buying a business hosting package, however, need to pay much more heed: their three basic hosting packages include many features for $20 to $85 per month, varying from 10 to 30GB of bandwidth use included each month. Cross that limit and you start paying 10 cents per MB. That's right: $100 per GB!
Apple's .Mac service offers web site hosting, but requires a Mac to manage the account. Once it's set up, files can be uploaded via WebDAV from any platform. Apple declined to provide specific information about how they monitor and limit bandwidth, but they said they encourage all legitimate uses, such as sharing QuickTime movies created in iMovie. They don't charge for bandwidth at any level.
Whether using a hosting service or co-locating a server, you need to ask several critical questions before popularity strikes: how usage is metered, what overages cost, whether you're warned or simply billed, and whether you can monitor your own traffic.
For instance, Level 3 just lets the bandwidth roll, and offers MRTG monitoring -- using a secure card's one-time number generator to restrict access.
Within a few days of my shutting off downloads of the book's PDF, sympathetic colleagues and strangers offered suggestions for continuing to make it available: distribute it through mirrors, or distribute it through peer-to-peer file-sharing networks.
I was able to act immediately on the former thanks to some generosity. My colleague, friend, and wireless networking book co-author Adam Engst is also a moderator of the Info-Mac Archives, a collection of legal downloadable files whose inception goes back to the early 90s at Stanford.
Currently, the archives are hosted at MIT, but files are served through a few dozen mirrors worldwide (mostly at academic institutions, but including AOL and Apple). These collective mirrors probably have the capability to feed a gigabit per second.
Adam suggested uploading the file to the archives. As part of my contribution to their effort, I wrote round-robin Perl scripts that would allow easy random redirection to a given file or directory. The 10,000 downloads of my file, had they occurred through Info-Mac, would have been a tiny distributed blip.
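I no longer have those Perl scripts at hand, but the round-robin idea is simple enough to sketch in a few lines of Python (the mirror hostnames here are placeholders, not the real Info-Mac mirrors): pick a mirror at random and redirect the client to the same path on it.

```python
import random

# Placeholder mirror list -- a real deployment would list the
# archive's actual mirror hostnames.
MIRRORS = [
    "http://mirror-a.example.edu/info-mac",
    "http://mirror-b.example.org/info-mac",
    "http://mirror-c.example.net/info-mac",
]

def redirect_url(path):
    """Choose a mirror at random and build the redirect target,
    spreading the download load across the whole pool."""
    return random.choice(MIRRORS) + "/" + path.lstrip("/")

# In a CGI script, this would be emitted as a Location: header.
print("Location:", redirect_url("/book/realworld-golive6.pdf"))
```

With a few dozen mirrors, even 10,000 downloads in 36 hours would have averaged only a few hundred requests per mirror.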
It's not easy for the average individual to have access to this kind of distribution system, but there are plenty of archives like sourceforge.net for information, scripts, and programs (free, demo, or shareware), and there's no reason to host a file that could be placed into a distributed or replicated archive.
A few dozen people suggested peer-to-peer file sharing. Beyond sharing pirated music, P2P networks are an efficient way to use the vast pool of bandwidth available to individuals and reduce the load on any given machine.
Although folks said try Kazaa, LimeWire, and other well-known services, BitTorrent was the name that came up again and again, partly because it works on several platforms, and doesn't require a big installation.
The best-known file-sharing systems generally distribute entire files to other machines, or allow people to expose their directories of files and let the user choose the best source. BitTorrent distributes pieces of files, so that given a large pool of downloaders' bandwidth, a file might be reassembled from many pieces in many places.
In peer-to-peer systems, however, you can't necessarily be sure that a given file is the one its author meant to upload, that the file has been vetted for viruses, or that every copy of the file throughout the network is identical. BitTorrent uses cryptographic hashing to verify that the file you received was correctly and completely reassembled, but it doesn't verify, as a system, that the file is the one its author or creator intended it to be.
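The per-piece verification BitTorrent performs can be sketched like this: the publisher hashes each fixed-size piece of the file and ships the hash list in the torrent metadata, and the downloader checks every reassembled piece against that list. (This sketch uses SHA-1, as BitTorrent does, but the tiny piece size and the surrounding details are simplified for illustration; real torrents use pieces of 256KB and up.)

```python
import hashlib

PIECE_SIZE = 4  # tiny, for illustration only

def piece_hashes(data):
    """The publisher's side: hash every fixed-size piece of the file."""
    return [hashlib.sha1(data[i:i + PIECE_SIZE]).digest()
            for i in range(0, len(data), PIECE_SIZE)]

def verify(data, expected):
    """The downloader's side: accept the file only if every
    reassembled piece matches the published hash."""
    return piece_hashes(data) == expected

original = b"free as in bandwidth"
published = piece_hashes(original)
print(verify(original, published))                 # True
print(verify(b"free as in bandwidtH", published))  # False: last piece tampered
```

Note what this does and doesn't prove: a matching hash list shows you got exactly the bytes the publisher of the torrent hashed, but nothing about whether that publisher was the file's legitimate author.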
Unfortunately, P2P has gotten a generally bad name, and I would be concerned about suggesting that readers run a program which, on certain ISPs, at certain universities, and in certain corporations, could get their accounts cancelled, their butts expelled, or themselves downsized.
There's also a problem in finding the "target demographic": folks likely to want to download a copy of a book on Adobe GoLive 6 probably overlap only slightly with people who understand peer-to-peer file-sharing software, or who have any desire to install it.
More appropriate to this kind of download might be Akamai's model of content distribution, which involves placing servers all over the Internet, even in individual ISPs, to replicate content to specific locations. Bandwidth use is thus confined to small topological network areas, reducing ISPs' costs and the cost of delivering content.
Akamai isn't ad hoc, however, and you have to have a commercial relationship with them. An edge-server-based file distribution system could solve many of the problems with peer-to-peer sharing, legitimize distributed file sharing, and improve speed and availability. But it would require some centralized authority that would verify the legality of uploaded files for distribution and then sign them for verified distribution.
With Apple making electronic music purchases simple, perhaps P2P or edge-to-edge sharing could become workable with the kind of assurances and consistency needed, without circumventing acceptable use policies or copyright.
I wish I'd been smarter about investigating costs before I got started, but my switch to digital.forest certainly allows me some peace of mind: they monitor and notify, and charge so little for bandwidth that even a blowout won't send me careening off the road.
What I really look forward to is a day when we truly have a pool of international bandwidth and distributing information, especially free information, becomes as simple as checking a box and letting the underlying mechanisms sort out the bandwidth. In that world, no one person gets stuck with the bill, just as no one actually pays for the whole Internet, either.
Glenn Fleishman is a freelance technology journalist contributing regularly to The New York Times, The Seattle Times, Macworld magazine, and InfoWorld. He maintains a wireless weblog at wifinetnews.com.