News Archive

This page lists our news items sorted by most recent first.


Library of Congress Supports IBP

July 2008
The LoCI team was recently invited to the National Digital Information Infrastructure and Preservation Program (NDIIPP) meeting in Arlington, Virginia, led by the Library of Congress. Congress established the NDIIPP in 2000 to archive and preserve digital content.

How does LoCI tie into all of this? The LoCI team is the founder and primary developer of the Internet Backplane Protocol (IBP), which is designed to facilitate the storage and transfer of extremely large data sets. Using the services provided by an IBP depot (i.e., a storage server), an organization and its collaborators can upload, store, and download large amounts of content, something that TCP/IP alone often fails to do well.

The NDIIPP can use IBP to securely store archived digital library content in a distributed wide-area network that is platform- and OS-independent.

For more information, check out the IBP page. Or you can see all of the LoCI projects here. For a test drive of our services (though still experimental in nature), see the LoDN page.

LoCI Screencasts

July 2008
LoCI screencasts are now available. You can use the link in the left-hand navigation bar or click here. These informative screencasts will teach a newcomer how to use some of our tools and services while providing a brief overview of LoDN and how it functions. Special thanks to Harold Gonzales and Dr. Terry Moore for providing these videos.


NetCDF/L 3.5.2 is now available!

December 2005
NetCDF/L 3.5.2 has now been released and is available for download using this link. Version 3.5.2 fixes several bugs found in NetCDF/L 3.5.1. We will continue work on NetCDF/L 3.6.0, which will provide large file support.

The Network Common Data Form, or netCDF, is an interface to a library of data access functions for storing and retrieving data in the form of arrays. Developed by Unidata, it is an abstraction that supports a view of data as a collection of self-describing, portable objects that can be accessed through a simple interface. The netCDF software implements an abstract data type, which means that all operations to access and manipulate data in a netCDF dataset must use only the set of functions provided by the interface. The representation of the data is hidden from applications that use the interface, so that how the data are stored could be changed without affecting existing programs. The physical representation of netCDF data is designed to be independent of the computer on which the data were written.
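The abstract-data-type idea described above can be sketched in a few lines. This is a conceptual illustration only, not the real netCDF API: all access to the (hypothetical) dataset goes through interface methods, so the physical representation could change without affecting callers.

```python
# Conceptual sketch (NOT the real netCDF interface): a self-describing
# dataset whose physical representation is hidden behind a small API.
class Dataset:
    """Callers use only these methods; the storage layout stays private."""

    def __init__(self):
        self._dims = {}   # dimension name -> length
        self._vars = {}   # variable name -> (dims, flat value list)

    def def_dim(self, name, length):
        self._dims[name] = length

    def def_var(self, name, dims):
        size = 1
        for d in dims:
            size *= self._dims[d]
        self._vars[name] = (dims, [0.0] * size)

    def put_var(self, name, values):
        self._vars[name][1][:] = values

    def get_var(self, name):
        return list(self._vars[name][1])

    def inquire(self, name):
        # Self-describing: metadata travels with the data.
        dims, _ = self._vars[name]
        return {"dims": dims, "shape": [self._dims[d] for d in dims]}

ds = Dataset()
ds.def_dim("time", 3)
ds.def_var("temperature", ["time"])
ds.put_var("temperature", [14.5, 15.1, 15.8])
print(ds.get_var("temperature"))   # [14.5, 15.1, 15.8]
print(ds.inquire("temperature"))   # {'dims': ['time'], 'shape': [3]}
```

Because applications never touch the representation directly, swapping local-disk storage for network storage (as NetCDF/L does) leaves existing programs unchanged.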

Based on the netcdf-3.5.1 software package, LoCI Lab has developed the NetCDF/L software package, which adds Logistical Networking capabilities. Like traditional netCDF, it can store data on local disk; it can also store data on the global logistical network. By specifying a local filename or a LoRS URL (i.e., lors://), the user controls where the data is stored. The NetCDF/L software package builds on two other software packages: the libxio library and the Logistical Runtime System (LoRS) package. The libxio library, developed for the DiDaS project in the Czech Republic, provides a standard UN*X I/O interface for accessing local files as well as logistical technology-based "network files". The libxio library is open source and is available for download here. The LoRS package consists of a C API and a tool set that automate the creation and maintenance of network files (exNodes). For more information about LoRS, please click here. You can download the LoRS package using this link.
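The libxio idea of one entry point that handles both local names and lors:// URLs can be sketched as follows. The names here (xio_open, LorsFile) are illustrative stand-ins; the real libxio is a C library with a UNIX-like I/O interface.

```python
# Hedged sketch of the libxio dispatch idea: one open() call works for
# local files and for "network files" named by a lors:// URL. LorsFile
# is a placeholder; a real implementation would fetch/create an exNode.
import io

class LorsFile(io.BytesIO):
    """Stand-in for an exNode-backed network file."""
    def __init__(self, url):
        super().__init__()
        self.url = url

def xio_open(name, mode="rb"):
    if name.startswith("lors://"):
        return LorsFile(name)    # network file via Logistical Networking
    return open(name, mode)      # plain local file

f = xio_open("lors://depot.example.org/movie.exnode")
print(type(f).__name__)  # LorsFile
```

The point of the design is that applications like NetCDF/L never branch on the storage location themselves; they simply pass the name through and get back a file-like object.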

Please refer to the README in the NetCDF/L 3.5.2 software package for detailed instructions on installing and compiling it. Please click here to see a poster on Global Terascale Data Management for Legacy Applications Using NetCDF/L.


LoDN : Logistical Distribution Network

September 2004
In an era when global communities have to work collaboratively with large quantities of data, we offer a new tool for building advanced collaborative environments, the Logistical Distribution Network (LoDN, pronounced "low down"). LoDN enables users to easily store, manage, and distribute content in the global Logistical Networking infrastructure, the L-Bone, offering very fast downloads to users connected to high-speed networks like Abilene.

Please find more information about LoDN in the paper "LoDN: Logistical Distribution Network", which (through the generous support of Microsoft Research) will be presented at the 2004 Workshop on Advanced Collaborative Environments (WACE 2004) in Nice, France on September 23. You are also invited to try our LoDN service directly. To use LoDN, all you need is a standard web browser, a recent Java runtime environment, and access to the web.

PPPL improves performance using Logistical Networking

August 2004
The Princeton Plasma Physics Lab (PPPL) is the premier US facility for fusion research. PPPL researchers use both experimental reactors to observe reactions and supercomputer simulations to model them. PPPL uses supercomputers at NERSC and ORNL to generate simulation data, which is then transferred to PPPL for analysis and visualization. According to PPPL's Scott Klasky, "PPPL has been able to transfer their data more efficiently using Logistical Networking than ever before."

At Supercomputing 2003 (SC2003), Klasky presented "Grid-Based Parallel Data Streaming implemented for the Gyrokinetic Toroidal Code", which describes a data streaming algorithm that uses GridFTP for data transport instead of writing to local disk. The goal of this work is to store the data as quickly as possible while avoiding any slowdown of the simulation itself. Klasky found that they could stream data from the Princeton campus to PPPL with an overhead of about 7% compared to writing to local disk. They were unable to measure from NERSC to PPPL due to problems compiling GridFTP there.

For Grid2004, Klasky et al. have replaced GridFTP with Logistical Networking (LN), specifically the LoRS library, in the paper "High Performance Threaded Data Streaming for Large Scale Simulations". The paper compares storing data to the local parallel filesystem (the General Parallel File System, GPFS) at NERSC versus moving data from NERSC to PPPL. Writing to GPFS incurred overhead ranging from 3% to over 10% compared to performing no I/O. When using LN, the overhead dropped to 0.3% to 3% for the same data generation rates. At the data generation rates PPPL researchers commonly use (about 8 Mbps), LN incurs 2.5% overhead to move the data from NERSC to PPPL, compared to 7.5% overhead when writing to the NERSC filesystem!
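The overhead figures above compare wall-clock time with I/O enabled to a run that performs no I/O at all. A small worked example (the timings below are illustrative, chosen to reproduce the quoted percentages):

```python
# Overhead relative to a compute-only baseline, as used in the paper's
# comparison of GPFS writes versus Logistical Networking streaming.
def overhead_pct(t_with_io, t_no_io):
    return 100.0 * (t_with_io - t_no_io) / t_no_io

# A hypothetical 1000 s compute-only run that takes 1075 s when writing
# to the filesystem shows the ~7.5% overhead quoted above, versus
# 1025 s (~2.5%) when streaming the same data over LN.
print(overhead_pct(1075, 1000))  # 7.5
print(overhead_pct(1025, 1000))  # 2.5
```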

On the whole, PPPL found switching to LN to be relatively easy and they saw improved performance immediately.

Czech Republic Researchers Use IBP for Distributed Video Transcoding

July 2004
The MetaCenter project in the Czech Republic is using IBP in its new Distributed Data Storage (DiDaS) project, which is designed to enhance its ongoing effort to develop an open system for distributed video encoding based on grid infrastructure. Both the flexibility and scalability of IBP make it a good fit for managing the storage and the movement of the large volumes of data involved in distributed video processing.

The implementation, created by researchers Lukás Hejtmánek and Petr Holub, is composed of two parts: one enables applications to work with data in IBP storage, while the other provides a system that handles parallel encoding of multimedia content. For the first part, the DiDaS team has developed a library called libxio that allows developers to access both local files and files stored in IBP depots using an interface similar to the standard UNIX I/O interface. Using the libxio library, two applications have been integrated with IBP: the transcode program, which can load and store files to and from IBP depots, and the MPlayer media player, which can stream and play content directly from IBP. For parallel encoding of multimedia content, the Czech researchers have created an umbrella system based on IBP infrastructure, called the Distributed Encoding Environment (DEE), which allows users to easily encode video in a distributed manner.

The two pilot groups formed to test the system - the Neurosurgery department at St. Anna University and the Hospital in Brno - use it heavily for distributed video processing of lecture recordings and neurosurgical operations. By the first quarter of 2004, the DiDaS project had already deployed IBP depots with a total of more than 7TB of storage across various locations in the Czech Republic.

IBP Tops High Performance File Transfer Protocols on Abilene

May 2004
During the week beginning May 3, 2004, the Internet Backplane Protocol (IBP) for the first time generated more traffic on the Abilene backbone research network than any other high-performance file transfer protocol listed in the Abilene Netflow report.

IBP traffic has grown by an order of magnitude over the past nine months, from 200GB/week to sometimes over 1TB/week, and totalled 917GB in the week beginning May 3. The increase in IBP traffic reflects the growing popularity of Logistical Networking tools as a mechanism for content distribution, data-intensive collaboration, and large-scale data management in research and education environments.

During the same nine-month period, bbFTP traffic averaged 1-2TB/week and gsiFTP averaged 20-40GB/week. The other "advanced application protocols" tracked in the Netflow report are Unidata McIDAS and LDM, which are specific to the distribution, analysis, and visualization of geoscientific data in the Unidata project. Unidata traffic on Abilene, which is point-to-multipoint in nature, has roughly doubled over the past nine months, from 5 to 10TB/week.

Brazil's Academic Network adopts Logistical Networking for Video Delivery

April 2004
The Digital Video Working Group (GTVD) of Brazil's RNP is testing the integration of its content delivery system with Logistical Networking technologies.

Poised to be one of two systems adopted as a standard service for the RNP, the overlay network devised by the GTVD relies on both IBP depots and LoRS-enabled user tools for digital video transport. The integration allows the system to take advantage of aggregated idle storage resources and high-performance transfers, two key characteristics for digital video services.

The content publication tool, which is available to anyone, transfers the desired video content to IBP depots installed at RNP PoPs (only 8 of the 27 PoPs are being used for the testing). Additional content spills over into depots at other locations if the RNP-connected depots are full. The resulting exNode is then copied to one or more primary source servers, which retrieve the file in order to maintain a persistent copy. The administrator of a server can override this behavior and choose to host only exNodes. When a source server receives a request for the video and does not have it in persistent storage, it provides the exNode to the intermediate server, which then reads the data from IBP and serves it via HTTP to the client.

This design has several advantages. It makes transferring large video files much easier and quicker. It also allows publishers who lack a robust storage infrastructure to make their content available by running a "light-weight" primary server that hosts only exNodes (30-500KB files). Maintaining a copy of the file in IBP storage also speeds up synchronization among mirrored primary servers. This strategy does not hinder performance, because each server, intermediary or primary, can read from IBP at rates that exceed the client's HTTP connection, maintaining service transparency.

Maintaining data availability within the IBP storage cloud looked to be one of the main obstacles to the system's practicality. Recently, LoCI has matured its exNode "warmer" strategies (ways to prevent exNode decay), which are included in its LoDN application. Once LoDN is released, content publishers will be able to easily manage their exNode collections. Another consideration was the limited amount of IBP storage available on the RNP. LoCI and the GTVD are working to increase Brazilian IBP deployment by bringing more PoP nodes online, joining with RNP and Hewlett-Packard to promote PlanetLab participation, and encouraging other projects to take advantage of the layered architecture of Logistical Networking.

Relevant links:
Planet Lab


Why Logistical Networking is not a Content Sharing Service

December 2003
A recent Future File report characterized IBPvo, a video management tool based on Logistical Networking, as "The Napster of Television." A new LoCI Lab technical report explains why Logistical Networking is not comparable to Peer-to-Peer content distribution systems, and how the Internet Backplane Protocol and the Logistical Runtime System support privacy and end-to-end security.

Logistical Networking is sometimes compared to Peer-to-Peer content sharing services as a means of transferring data between network users. The key point of commonality is that both Logistical Networking and Peer-to-Peer services make use of storage that is not owned or operated by the publisher of the content. In the case of Logistical Networking the intermediate storage takes the form of systems that we call "depots" which support the Internet Backplane Protocol (IBP); in the case of Peer-to-Peer services, the intermediate storage is located in desktop systems of other users.

While there are many differences between Logistical Networking and Peer-to-Peer systems, one key difference is in the steps taken to make sure that the user of Logistical Networking services retains control over the content stored on depots. Storage space allocated on an IBP depot is not given a semantically meaningful name; its only identifier is a long random string that is assigned by the depot itself. Because it is randomly chosen, the identifier cannot be guessed by other users; an allocation made by one user cannot in fact be detected by other users except for an increase in total storage allocation reported by the depot. Even monitoring the network to snoop these random identifiers can be ruled out by using a secure variant of IBP based on SSL.
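The "unguessable identifier" mechanism described above can be sketched as follows. The names and key size are illustrative, not IBP's actual wire format: the essential point is that the depot names each allocation with a long random string (a capability), so possession of the name is what grants access, and there is no listing or search operation.

```python
# Sketch of capability-style naming: allocations are identified only by
# long random strings that cannot be guessed or enumerated.
import secrets

def new_allocation_key():
    # 32 random bytes -> 64 hex characters; infeasible to guess.
    return secrets.token_hex(32)

depot = {}  # key -> stored bytes; deliberately no listing operation

def allocate():
    key = new_allocation_key()
    depot[key] = b""
    return key  # returned only to the requesting user

key = allocate()
print(len(key))      # 64
print(key in depot)  # True
# A user who lacks `key` has no way to locate this allocation; the only
# observable effect is an increase in the depot's total allocation.
```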

In contrast, many Peer-to-Peer content sharing systems are designed with the explicit intent of making information public by giving it a meaningful name that can be directly searched by other users. Without passing judgment on the propriety of such systems, it is clear that participation in them assists all users in not simply moving and storing data, but also in making it accessible to all other users.

Given that IBP takes such steps to keep storage allocations private, can data stored there be considered secure? The answer is no, because users have no control over the operators of IBP depots or the network that connects them. True security can only be accomplished by encrypting data before it is written to the depot and decrypting it only after it has been retrieved. The suite of end-user tools called the Logistical Runtime System (LoRS) implements end-to-end security using the standard AES encryption algorithm.
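The end-to-end principle is that the depot only ever sees ciphertext. LoRS uses AES; since Python's standard library has no AES, the sketch below substitutes a toy keystream cipher (SHA-256 in counter mode) purely to illustrate the data flow. It is not secure and is not the LoRS implementation.

```python
# Illustration only: encrypt before upload, decrypt after download, so
# depot operators never see plaintext. TOY cipher -- LoRS really uses AES.
import hashlib

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    """XOR `data` with a SHA-256 counter-mode keystream (same op decrypts)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

key = b"user-held key, never sent to the depot"
plaintext = b"simulation output"
ciphertext = toy_stream_cipher(key, plaintext)   # what the depot stores
recovered = toy_stream_cipher(key, ciphertext)   # applying it again decrypts
print(recovered == plaintext, ciphertext != plaintext)  # True True
```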

Given that the names assigned by IBP enable any user who knows them to access data stored on the depot, can data stored in IBP be considered public? The answer is no, because as described above, the identifiers cannot be listed, searched, or even guessed by any other user. The only way for users to share data is by sharing those identifiers. Data can only be shared when an associated name or searchable attribute is known to more than one user.

Thus, IBP neither secures data nor publishes it: it provides sufficient privacy and control to avoid being a publication service, but sufficient access to support one. It provides the fundamental resources required to implement private, secure sharing of data, but ultimately leaves security to the end user's system, as is necessary in any scalable, distributed public infrastructure. Like communication on the Internet itself, data storage using Logistical Networking is ultimately a way of sharing resources that is neutral to the intent of the end user, and seeks only to support the implementation of all applications in the wide area.

LoRS project described as the "killer app" for Internet2

October 2003
The presentation of the new version of LoRS at the Fall Internet2 Member Meeting in Indianapolis (October 12-17) made an impact. In this article for Syllabus, Joe St. Sauver, Director of User Services and Network Applications at the University of Oregon Computing Center, argues that LoRS may well be the "killer app" that the Internet2 community has been searching for because it can solve the widespread problem of sharing huge data files at high performance. He compares seeing a LoRS download to the revelation of seeing the first use of a web browser. ...more

Globus Replica Catalog Integrated with Logistical Networking

September 2003
Researchers at Nanyang Technological University (NTU) are collaborating with LoCI Lab to develop software tools that integrate the Internet Backplane Protocol (IBP) with the Globus Replica Catalog (GRC), an existing software tool that helps locate copies of a particular file on a distributed storage network.
Learn more...

Data Logistics Speeds P2P File Transfers

September 2003
The Wall Street Journal highlights FreeCache, a new service from the creators of KaZaa that uses local area caching to improve Peer-to-Peer performance and reduce wide-area traffic. This specialized use of local area caching is a prime example of the use of data logistics to optimize network traffic.

The good folks at Joltid (who brought you KaZaa) have a new offering, PeerCache, that exploits the bandwidth savings ISPs can realize when they make storage resources available for caching popular content shared on P2P networks. The principle that a network or an ISP can act as a "super peer" and provision resources to optimize content sharing is akin to the ideas underlying Web caching and other cooperative forms of content distribution. A recent article in the Wall Street Journal reports that three European ISPs have deployed PeerCache [1].

The use of storage resources deployed by institutions and network operators to optimize wide-area data-intensive applications has been dubbed "Data Logistics" by researchers at the University of Tennessee's Logistical Computing and Internetworking (LoCI) Laboratory. PeerCache can be viewed as a specialized infrastructure for applying Data Logistics in P2P networks. LoCI Lab has for some time been developing and deploying a much more general technology called Logistical Networking that can support a wide range of applications and tools for data-intensive computing and collaboration.

LoCI Lab's National Logistical Networking Testbed (NLNT) makes 20TB of shared storage available for caching, pre-staging, and other purposes as a global storage pool of over 200 nodes in 19 countries around the world. This testbed, funded by the NSF's CISE Research Resources program, is slated to grow to over 50TB in the next two years, and is also attracting cooperative contributions of resources from academic institutions and research networks around the world. The Department of Energy SciDAC program is deploying a similar infrastructure for use by computational scientists working at its National Laboratories and collaborating universities. Logistical Networking is also being deployed in several regional infrastructures around the world.


Rather than being part of a specific peer-to-peer system, these "Logistical Networking" storage resources are available as a highly generic and interoperable service, providing a high degree of flexibility to application developers. NLNT storage resources are available for unrestricted use by anyone in the research and education community. Current applications range from remote data visualization to multimedia and software content distribution to data-intensive collaboration. The next generation of LoCI software will include the ability to share computational as well as storage resources provisioned on shared servers, thus encompassing peer-to-peer computing as well as storage and content distribution.

Researchers at the University of Tennessee's LoCI Lab have been pursuing research in Logistical Networking for many years, and have developed a suite of tools to enable scalable, interoperable sharing of storage by end users, institutions, and network operators. More information on our project, including papers and downloads of all of our open source software, is available on our Web site:

Micah Beck, University of Tennessee

Associate Professor, Computer Science

Director, Logistical Computing and Internetworking Lab

Chair, Internet2 Special Internet Group on Network Storage

[1] Kevin J. DeLaney, "Kazaa's Founder Peddles Software to Speed File Sharing", Wall Street Journal, September 8, 2003.

APAN Meeting Session on Logistical Networking

August 2003
There will be a technical session devoted to Logistical Networking at the 16th Asia-Pacific Advanced Network (APAN) meeting in Busan, Korea on August 28, 2003. This session, chaired by Hyun Chul Kim of KAIST in Korea, demonstrates the worldwide interest in Logistical Networking and the breadth of research participation.

It will include talks on: An Introduction to Logistical Networking by LoCI Lab Director Dr. Micah Beck; Replica Management for IBP by Ming Tang of Nanyang Technological University in Singapore; Distributed Data Storage by Ludek Matyska, Associate Professor and Dean at Masaryk University in the Czech Republic; and the Web100 Project on Logistical Networking by Jim Ferguson of the University of Illinois in the US. Along with Europe, the APAN region is the most active in the deployment of Logistical Networking and in research into tools and applications. For more information on APAN, see the APAN website.

IBP Deployed by the European 6NET

February 2003
The European 6NET project will utilize Logistical Networking to distribute freeware and shareware over the Italian INFN/GARR IPv6 network testbed. IBP will first be installed on hubs in Rome, Milan, and Bologna, and is expected to be up and running by May 2003.

Logistical Networking infrastructure will soon be deployed on the Italian Academic and Research Network (GARR), under the leadership of the European 6NET project. The 6NET project, funded by the European Commission's Information Society Technologies (IST) Program, currently operates an international pilot network for testing Internet Protocol version 6 (IPv6). The 6NET project will introduce and test new IPv6 services and applications on its native IPv6 testbed. The 6NET IPv6 testbed features more than twenty high-powered hubs located throughout Europe.

The Italian arm of the 6NET project, 6NET Italia, will utilize Logistical Networking to distribute freeware and shareware on the INFN (National Institute of Nuclear Physics)/GARR network. Internet Backplane Protocol (IBP) storage depots will be installed on three 140Gb POP hubs in Rome, Milan, and Bologna, forming the backbone of the INFN/GARR network. Each of the twelve participating Italian universities and research institutions will then install local IBP depots at their campuses. IBP is expected to be up and running on the INFN/GARR network by May 2003.

The Logistical Runtime System (LoRS) tool suite will give users quick and easy access to an assortment of freeware and shareware. Logistical Networking expedites downloads by strategically prepositioning content for local delivery. LoRS lets the user download from the closest, quickest site instead of from a single central server. Logistical Networking software will be explicitly included in the shareware deliverables.

Offering fast and reliable freeware and shareware content distribution is an effective way for 6NET Italia to generate network traffic. High traffic means a rigorous testing of IPv6 on their network, a primary focus of the project. Although the 6NET project is currently research oriented, the participants are looking toward the future of the European Internet and the likelihood of taking IPv6 to the production level.

Introducing IBPvo

February 2003
IBPvo is a new application of currently existing Logistical Networking infrastructure, which combines the convenience of an online VCR with the power of Logistical Networking to serve the scientific computing research community.

As part of their ongoing research into Logistical Networking, LoCI Laboratory is presently developing IBPvo, a prototype application of currently available Logistical Networking infrastructure. IBPvo combines the convenience of an online VCR with the power of Logistical Networking to enable the research community to explore the potential of this technology.

Recording with IBPvo is simple. Users provide information such as the television channel and program time through IBPvo's web interface. IBPvo then uses the vcr program to record the program as an AVI file in DivX format, and applies the LoRS Tools upload command to store the video file in IBP storage depots on the L-Bone network.

IBPvo automatically sends the user a pointer to their recording, in the form of an exNode, via email. The exNode allows the user to access and download their recording using the LoRS Tools software package. IBPvo readily facilitates collaboration. Access may be granted to a video file simply by passing the appropriate exNode.

Using Logistical Networking, an IBPvo video file is parceled out among several time-limited storage allocations, with each allocation set to expire at a different time and on a different schedule (daily, weekly, etc.). Recordings may nevertheless be reliably saved for days or even weeks: IBPvo keeps track of when each storage allocation is due to expire and automatically renews it, thereby preserving the file for the time interval specified by the user.
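The renewal loop can be sketched as follows. Names and numbers are illustrative, not the IBPvo implementation: the idea is simply that a background "warmer" extends each lease that is about to expire, for as long as the user still wants the recording kept.

```python
# Sketch of lease renewal ("warming"): each fragment of a recording lives
# in a time-limited allocation; renew any lease that is close to expiry.
def warm(allocations, now, user_deadline, margin=60):
    """Renew every allocation expiring within `margin` seconds, as long
    as the user-requested retention deadline has not passed."""
    renewed = []
    for name, expires in allocations.items():
        if now < user_deadline and expires - now <= margin:
            allocations[name] = now + 24 * 3600  # extend lease by a day
            renewed.append(name)
    return renewed

# name -> expiry time in seconds; frag-a is about to expire, frag-b is not
allocations = {"frag-a": 100, "frag-b": 5000}
print(warm(allocations, now=90, user_deadline=10**6))  # ['frag-a']
print(allocations["frag-a"])  # 86490
```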

The size of an IBPvo video file depends on a combination of program length and encoding bitrate. Currently, IBPvo can create files up to 2GB in size, adequate to record a one-hour program at a bitrate of up to 4000 kilobits per second. IBPvo video files may be downloaded with LoRS Tools and played on any computer with a media player that supports the DivX codec (Windows Media Player, QuickTime, MPlayer, xine, etc.). IBPvo is currently available to the Logistical Networking research community and is under continued development by the LoCI team.
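The size arithmetic is straightforward: bits = bitrate × duration, bytes = bits / 8. For example, a 4000 kbit/s stream recorded for an hour comes to about 1.8 GB, just under the 2GB cap.

```python
# Rough recording-size arithmetic: bitrate (kbit/s) times duration,
# converted to gigabytes (1 GB = 1e9 bytes here, matching the 2GB cap).
def recording_size_gb(kbps, minutes):
    bits = kbps * 1000 * minutes * 60
    return bits / 8 / 1e9

print(round(recording_size_gb(4000, 60), 2))  # 1.8
```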