Published on ONJava.com (http://www.onjava.com/)

Java NIO

Top Ten New Things You Can Do with NIO

by Ron Hitchens, author of Java NIO

New I/O? Why do we need a new I/O? What's wrong with the old I/O?

There's nothing wrong with the classes in the java.io package; they work just dandy -- for what they do. But it turns out there are quite a lot of things the traditional Java I/O model can't handle. Things like non-blocking modes, file locks, readiness selection, scatter/gather, and so on. These capabilities are widely available on most serious operating systems today (and a few comical ones, as well). They're not just nice to have; they're essential for building high-volume, scalable, robust applications, especially in the enterprise arena.

NIO brings a host of powerful new capabilities to the Java platform. Despite the fact that "N" stands for "New," NIO is not a replacement for the older I/O classes. It's an alternate approach to modeling I/O services, with less emphasis on the streaming model. NIO concentrates on providing consistent, portable APIs to access all sorts of I/O services with minimum overhead and maximum efficiency. NIO sweeps away many barriers to the adoption of Java where I/O performance is critical, allowing Java to compete on an equal footing with natively compiled languages.


In this article I'm not going to explain buffers, channels, selectors, and the other denizens of the NIO depths. There just isn't room here to do so properly. My book, Java NIO, does all that. In this space, I'll list some new things you can do with NIO that you couldn't do before in Java. If you need a little context as you go along, visit this page (part of Sun Microsystems' JDK documentation), which gives a brief synopsis and links into the J2SE 1.4 Javadoc.

So, without further ado, in the time-honored tradition of O'Reilly authors flogging their own books by cooking up lame top ten lists: Top Ten New Things You Can Do with NIO That You Couldn't Do Before (TTNTYCDW..., oh never mind).

10: File Locking

File locking is one of those things most programmers don't need very often. But for those of you who do need it, you know you simply can't live without it. Prior to NIO, there was no way, short of resorting to native methods, to set or check for file locks in Java applications. File locking is notoriously OS- (and even filesystem-) specific, so the native route is fraught with peril if you need any sort of portability.

With NIO, file locks are built right into the FileChannel class. It's now easy to create, test, and manage file locks on any platform that supports file locks at the OS level. File locks are generally needed when integrating with non-Java applications, to mediate access to shared data files. In Figures 1 and 2 (borrowed from my book) assume the writer process is a legacy application that can't be replaced. With NIO, new reader applications can be written in Java that use the same locking conventions to seamlessly integrate with the pre-existing, non-Java application.

Readers holding shared lock
Figure 1: Reader processes holding shared locks.

Writer holding exclusive lock
Figure 2: Writer process holding exclusive lock.

File locks are generally not appropriate for intra-JVM coordination between threads; they operate at the file and process levels. The OS doesn't usually differentiate between threads within a process for lock ownership purposes. That means all threads own all locks equally within a JVM. File locks are primarily needed when integrating with non-Java applications, or between distinct JVMs.

You may never need to use file locks, but with NIO, now you have the option. Adding file-based locking to the Java bag of tricks further eliminates barriers to adoption of Java in the enterprise, especially where it's necessary to work and play well with others.
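To make the idea concrete, here's a minimal sketch of taking an exclusive lock with FileChannel. The file name and the "work" inside the lock are invented for illustration; the locking calls themselves (lock(), isShared(), release()) are the real FileChannel API:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class LockSketch {
    // Acquire an exclusive lock on the whole file, do some work, release.
    // Returns true if the lock granted was exclusive (lock() always is).
    public static boolean withExclusiveLock(File file) throws Exception {
        RandomAccessFile raf = new RandomAccessFile(file, "rw");
        FileChannel channel = raf.getChannel();

        FileLock lock = channel.lock();   // blocks until the lock is granted
        try {
            // ... read or update the shared data file here ...
            return !lock.isShared();
        } finally {
            lock.release();               // always release, even on failure
            channel.close();
            raf.close();
        }
    }

    public static void main(String[] args) throws Exception {
        File file = File.createTempFile("lock", ".dat");
        file.deleteOnExit();
        System.out.println(withExclusiveLock(file));  // true
    }
}
```

Use tryLock() instead of lock() if you'd rather get null back immediately than wait for a competing process to let go.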

9: Regular Expressions Built into the String Class

Regular expressions (java.util.regex) are part of NIO. I know, they're neither "new" nor "I/O," but a standardized regular expression library was mandated as part of JSR 51, so there you go.

Regular expressions are not new in Java (several add-on packages have been around for quite some time) but now they're built right into the base J2SE distribution. According to Jeffrey E. F. Friedl's recently updated Mastering Regular Expressions book, the regex engine in J2SE 1.4 is the fastest and best of the lot -- good to know.

One nice side effect of having a regular expression engine integrated into the base JDK is that other base classes can make use of it. In J2SE 1.4, the String class has been extended to be regex-aware by adding the following new methods:

package java.lang;

public final class String
implements java.io.Serializable, Comparable, CharSequence
   // This is a partial API listing

   public boolean matches (String regex)
   public String [] split (String regex)
   public String [] split (String regex, int limit)
   public String replaceFirst (String regex, String replacement)
   public String replaceAll (String regex, String replacement)

These methods are useful because you can invoke them directly on a string you're working with. For example, rather than instantiating Pattern and Matcher objects, invoking methods on them, and checking the result, you can do simple tests like this, which are less error-prone and communicate better:

public static final String VALID_EMAIL_PATTERN =
   "^([a-zA-Z0-9_\\-\\.]+)@((\\[[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]"
   + "{1,3}\\.[0-9]{1,3}\\.)|(([a-zA-Z0-9\\-]+\\.)+))"
   + "([a-zA-Z]{2,4}|[0-9]{1,3})(\\]?)";


if (emailAddress.matches (VALID_EMAIL_PATTERN)) {
   addEmailAddress (emailAddress);
} else {
   throw new IllegalArgumentException (emailAddress);
}
The split() method is also handy, especially in cases where you'd normally use the StringTokenizer class. It has two advantages over StringTokenizer: it applies a (potentially sophisticated) regular expression to the target string to break it into tokens, and it does the parsing all in one shot. Rather than writing yet another tokenizing loop, you can just do:

String [] tokens = lineBuffer.split ("\\s*,\\s*");

This splits the string lineBuffer (which contains a series of comma-separated values) into substrings and returns those strings in a type-safe array. This regular expression allows zero or more whitespace characters before and/or after each comma. You can also limit the number of times the string is split, in which case the last string in the array will be the remainder of the input string.
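A quick sketch of both forms, with an input string invented for illustration. Note how the two-argument version stops splitting after the limit is reached and leaves the remainder intact in the last element:

```java
import java.util.Arrays;

public class SplitDemo {
    // Split a comma-separated line, absorbing whitespace around each comma.
    public static String[] tokens(String line) {
        return line.split("\\s*,\\s*");
    }

    // With a limit of 2, only the first comma splits; the last element
    // holds the unsplit remainder of the input.
    public static String[] firstTwo(String line) {
        return line.split("\\s*,\\s*", 2);
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(tokens("a , b,c")));   // [a, b, c]
        System.out.println(Arrays.toString(firstTwo("a , b,c"))); // [a, b,c]
    }
}
```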

8: Buffer Views

NIO introduces buffers, a gaggle of related classes in the java.nio package (see Figure 3). Upon first impression, buffers seem like something you were assigned in Computer Science 101 class. They're very simple objects that encapsulate a fixed-size array of primitive values, along with some state information about the array. That's pretty much it.

Buffer Family Tree
Figure 3: Buffer class family tree.

Buffers were created primarily to act as containers for data being sent to or received from channels. Channels are conduits to low-level I/O services and are always byte-oriented; they only know how to use ByteBuffer objects.

So what do we use the other buffers types for? Instances of the non-byte buffers can be created from scratch, or wrapped around an array of the appropriate type. They can be useful that way, but such a buffer cannot be used for I/O. However, there's a third way to create non-byte buffers, as a view of an existing ByteBuffer.

For example, let's say you have a file containing Unicode characters stored as 16-bit values (this is the UTF-16 encoding, not the UTF-8 encoding used for common text files). If you were to read a chunk of this file into the byte buffer, you could then create a CharBuffer view of those bytes, like this:

CharBuffer charBuffer = byteBuffer.asCharBuffer();

This creates a view of the original ByteBuffer, which behaves like a CharBuffer, combining each pair of bytes in the buffer into a 16-bit char value, as represented by Figure 4 (this figure shows that odd remaining bytes are not included in the view; let's assume here that you started with an even-size byte buffer).

Short View of ByteBuffer
Figure 4: CharBuffer view of a ByteBuffer.

You can then use the CharBuffer object to iterate over the data (using relative get() calls), access it randomly with absolute get()s, or copy the data to a char[] array and pass it along to another object that knows nothing about buffers.
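Here's the whole round trip in miniature. Rather than reading from a file, this sketch wraps a hand-built array of UTF-16BE bytes (the byte values are invented for illustration) and views it through a CharBuffer:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;

public class ViewDemo {
    // View an even-length array of big-endian 16-bit values as chars.
    public static String viewAsChars(byte[] utf16beBytes) {
        ByteBuffer byteBuffer = ByteBuffer.wrap(utf16beBytes);
        CharBuffer charBuffer = byteBuffer.asCharBuffer();  // pairs of bytes -> chars
        return charBuffer.toString();
    }

    public static void main(String[] args) {
        // UTF-16BE encoding of "Hi": high byte first for each char
        byte[] bytes = { 0x00, 'H', 0x00, 'i' };
        System.out.println(viewAsChars(bytes));   // Hi
    }
}
```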

The ByteBuffer class also has methods to do ad hoc accesses of individual primitive values. For example, to access four bytes of a buffer as an int, you could do the following:

int fileSize = byteBuffer.getInt();

This extracts four bytes from the buffer, beginning at the current position (there are also absolute versions of these methods) and combines them to form a 32-bit int value. A very cool thing about this is that the four bytes need not be aligned on any particular address boundaries. The ByteBuffer implementation will do whatever it needs to do to assemble the bytes (or disassemble, for put()) if the underlying hardware does not permit misaligned memory accesses.

7: Byte Swabbing

If you've ever had to deal with cross-platform issues, you're probably wondering at this point about byte order in the previous example. The CharBuffer view groups pairs of bytes together to form 16-bit values, but which byte is high and which one is low? The order in which bytes are combined to form larger numeric values is known as endian-ness. When the numerically-most-significant byte is stored first in memory (at the lower address), this is big-endian byte order (see Figure 5). The opposite, where the least significant byte occurs first, is little-endian (see Figure 6).

Big Endian
Figure 5: Big-endian.

Little Endian
Figure 6: Little-endian.

In the example in the previous section, are the 16-bit Unicode chars stored as little-endian (UTF-16LE) or big-endian (UTF-16BE)? They could be stored in the file either way, so we need a means of controlling how the view buffer maps the bytes to chars.

Every buffer object has a byte order setting. For all but ByteBuffer, this is a read-only property and cannot be changed. The byte order setting of ByteBuffer objects can be changed at any time. Doing so affects the resulting byte order of any views created of that ByteBuffer object. So, if we knew that the Unicode data in our file was encoded as UTF-16LE (little-endian), we'd set the ByteBuffer's byte order prior to creating the view CharBuffer, thusly:

byteBuffer.order (ByteOrder.LITTLE_ENDIAN);

CharBuffer charBuffer = byteBuffer.asCharBuffer();

The new view buffer inherits the byte order setting of the ByteBuffer. Subsequent changes to the ByteBuffer's byte order will not affect that of the view. The initial byte order setting of a ByteBuffer object is always big-endian, regardless of the native byte ordering of the hardware it's running on.

What if we didn't know the byte order of the Unicode data in the file? If the file was encoded with the portable UTF-16 encoding, the first two bytes of the file would contain a byte order marker value (if it's directly encoded as UTF-16LE or UTF-16BE, then you need prior knowledge of the byte order). If you were to test that byte order marker, you could set the byte order appropriately before creating the CharBuffer view.

A ByteBuffer object's current byte order setting also affects byte swabbing for data element views (getInt(), getLong(), getFloat(), etc.). The buffer's byte order setting at the time of the call affects how bytes are combined to form the return value or broken out for storage in the buffer.
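You can see the effect directly by decoding the same four bytes under both orderings. This little demo is mine, not from the book, but the order() and getInt() calls are the standard ByteBuffer API:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    // Interpret the same four bytes as an int under the given byte order.
    public static int readInt(byte[] bytes, ByteOrder order) {
        ByteBuffer buffer = ByteBuffer.wrap(bytes);
        buffer.order(order);          // affects how getInt() combines bytes
        return buffer.getInt();
    }

    public static void main(String[] args) {
        byte[] bytes = { 0x00, 0x00, 0x00, 0x01 };
        System.out.println(readInt(bytes, ByteOrder.BIG_ENDIAN));    // 1
        System.out.println(readInt(bytes, ByteOrder.LITTLE_ENDIAN)); // 16777216
    }
}
```

Same bytes, wildly different numbers: exactly the bug you get when a file written on one architecture is read naively on another.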

6: Direct Buffers

The data elements encapsulated by a buffer can be stored in one of several different ways: in a private array created by the buffer object (allocation), in an array you provide (wrapping), or, in the case of direct buffers, in native memory space outside of the JVM's memory heap. When you create a direct buffer (by invoking ByteBuffer.allocateDirect()), native system memory is allocated and a buffer object is wrapped around it.

The primary purpose of direct buffers is for doing I/O on channels. Channel implementations can set up OS-level I/O operations to act directly upon a direct buffer's native memory space. That alone is a powerful new capability and a key to NIO's efficiency. But those I/O operations occur under the hood; they're not something you can use directly. But there is an aspect of direct buffers that you can exploit to great advantage.

The ability to use native memory to hold buffer data is enabled by some new JNI methods, which make it possible, for the first time, for a Java object to access memory space allocated in native code. Prior to 1.4, native code could access data in the JVM heap (if it was very careful -- there were severe restrictions), but Java code could not reach memory allocated by native code.

Now, not only can JNI code discover the address of the native memory space inside of a buffer created with ByteBuffer.allocateDirect() on the Java side, but it can allocate its own memory (with malloc(), for example) and then call back to the JVM to wrap that memory space in a new ByteBuffer object (the JNI method to do this is NewDirectByteBuffer()).

The really exciting part is that a ByteBuffer object can be wrapped around any memory address the native code can obtain, even memory outside the JVM's own address space. One example is creating a direct ByteBuffer object that encapsulates the memory on a video card. Such a buffer enables pure Java code to read and write directly to video memory with no system calls or buffer copies. Pure Java video drivers! All you need is a tiny bit of JNI glue to grab the video memory and return a ByteBuffer object. You couldn't do that before NIO came along.
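From the Java side, allocating and using a direct buffer looks just like using a heap buffer; the difference is invisible in the code and lives entirely in where the bytes are stored. A small sketch:

```java
import java.nio.ByteBuffer;

public class DirectSketch {
    // Round-trip an int through a direct buffer. From Java code a direct
    // buffer behaves exactly like a heap buffer; only the backing store
    // (native memory vs. a heap array) differs.
    public static int roundTrip(int value) {
        ByteBuffer direct = ByteBuffer.allocateDirect(4);
        direct.putInt(value);
        direct.flip();
        return direct.getInt();
    }

    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(4096);
        ByteBuffer heap = ByteBuffer.allocate(4096);

        System.out.println(direct.isDirect());  // true
        System.out.println(heap.isDirect());    // false
        System.out.println(roundTrip(42));      // 42
    }
}
```

Direct allocation is more expensive than heap allocation, so direct buffers pay off for long-lived buffers that do a lot of channel I/O, not for throwaways.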

5: Memory-Mapped Files

The theme of wrapping ByteBuffer objects around arbitrary memory spaces continues with MappedByteBuffer, a specialized form of ByteBuffer. On most operating systems, it's possible to memory map a file using the mmap() system call (or something similar) on an open file descriptor. Calling mmap() returns a pointer to a memory segment, which actually represents the content of the file. Fetches from memory locations within that memory area will return data from the file at the corresponding offset. Modifications made to the memory space are written to the file on disk.

memory mapping
Figure 7: User memory mapped to the filesystem.

There are two big advantages to memory mapped files. First, the "memory" does not usually consume normal virtual memory space. Or, more correctly, the virtual memory space of a file mapping is backed by the file data on disk. That means it's not necessary to allocate regular paging space for mapped files; their paging area is the file itself. If you were to open the file conventionally and read it into memory, that would consume a corresponding amount of paging space, because you're copying the data into regular memory. Second, multiple mappings of the same file share the same virtual address space. Theoretically, 100 mappings can be established by 100 different processes to the same 500MB file; each will appear to have the entire 500MB of data in memory, but the overall memory consumption of the system won't change a bit. Pieces of the file will be brought into memory as references are made, which will compete for RAM, but no paging space will be consumed.

In Figure 7, additional processes running in user space would map to that same physical memory space, through the same filesystem cache and thence to the same file data on disk. Each of those processes would see changes made by any other. This can be exploited as a form of persistent, shared memory. Operating systems vary in the way their virtual memory subsystems behave, so your mileage may also vary.

MappedByteBuffer instances are created by invoking the map() method on an open FileChannel object. The MappedByteBuffer class has a couple of additional methods for managing caching and flushing of updates to the underlying file.

Prior to NIO, it wasn't possible to memory map files without resorting to platform-specific, non-portable native code. It's now possible for any pure Java program to take advantage of memory mapping, easily and portably.
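Here's a minimal, self-contained sketch of the map() call in action. The temp file and its contents are invented for illustration; the map(MapMode, position, size) signature is the real FileChannel API:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapSketch {
    // Map a file read-only and fetch its first byte straight from the
    // mapping -- no read() call, no intermediate buffer.
    public static byte firstByte(File file) throws Exception {
        RandomAccessFile raf = new RandomAccessFile(file, "r");
        FileChannel channel = raf.getChannel();
        MappedByteBuffer mapped =
            channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        byte b = mapped.get(0);   // the OS pages in file data on demand
        channel.close();
        raf.close();
        return b;
    }

    public static void main(String[] args) throws Exception {
        File file = File.createTempFile("mapdemo", ".tmp");
        file.deleteOnExit();
        RandomAccessFile out = new RandomAccessFile(file, "rw");
        out.writeBytes("hello");
        out.close();
        System.out.println((char) firstByte(file));  // h
    }
}
```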

4: Scattering Reads and Gathering Writes

Here's a familiar bit of code:

byte [] byteArray = new byte [100];
int bytesRead = fileInputStream.read (byteArray);

This reads some data from a stream into an array of bytes. Here's the equivalent read operation using ByteBuffer and FileChannel objects (to move the examples into the NIO realm):

ByteBuffer byteBuffer = ByteBuffer.allocate (100);
int bytesRead = fileChannel.read (byteBuffer);

And here's a common usage pattern:

ByteBuffer header = ByteBuffer.allocate (32);
ByteBuffer colorMap = ByteBuffer.allocate (256 * 3);
ByteBuffer imageBody = ByteBuffer.allocate (640 * 480);

fileChannel.read (header);
fileChannel.read (colorMap);
fileChannel.read (imageBody);

This performs three separate read() calls to load a hypothetical image file. This works fine, but wouldn't it be great if we could issue a single read request to the channel and tell it to place the first 32 bytes into the header buffer, the next 768 bytes into the colorMap buffer, and the remainder into imageBody?

No problem, can do, easy. Most NIO channel types support scatter/gather, also known as vectored I/O. A scattering read to the above buffers can be accomplished with this code:

ByteBuffer [] scatterBuffers = { header, colorMap, imageBody };

fileChannel.read (scatterBuffers);

Rather than pass a single buffer object to the channel, an array of buffers is passed in. The channel fills each buffer in turn until all are full or there's no more data to read. Gathering writes are done in a similar way; data are drained from each buffer in the list in turn and sent along the channel, exactly as if they had been written sequentially.

Scatter/gather can provide a real performance boost when reading or writing data that's partitioned into fixed-size, logically distinct segments. Passing a list of buffers means the entire transfer can be optimized (using multiple CPUs for example) and fewer overall system calls are needed.

Gathering writes can compose results from several buffers. For example, an HTTP response could use a read-only buffer containing static headers that are the same for every response, a dynamically-populated buffer for those headers unique to the response, and a MappedByteBuffer object associated with a file, which is to be the body of the response. A given buffer may even appear in more than one gather list, or multiple views of the same buffer can be used.
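A sketch of that HTTP idea, writing to a FileChannel so it's self-contained (a SocketChannel would work identically; the status line and headers are invented for illustration):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class GatherSketch {
    // Compose a response from three buffers with a single gathering write.
    public static void writeResponse(FileChannel out) throws Exception {
        // In a real server the first buffer would be a reusable read-only
        // buffer, the second built per-response, the third perhaps a
        // MappedByteBuffer over a file.
        ByteBuffer staticHeaders =
            ByteBuffer.wrap("HTTP/1.0 200 OK\r\n".getBytes("US-ASCII"));
        ByteBuffer dynamicHeaders =
            ByteBuffer.wrap("Content-Length: 2\r\n\r\n".getBytes("US-ASCII"));
        ByteBuffer body = ByteBuffer.wrap("ok".getBytes("US-ASCII"));

        ByteBuffer[] gatherBuffers = { staticHeaders, dynamicHeaders, body };
        out.write(gatherBuffers);   // buffers drained in order, one call
    }

    public static void main(String[] args) throws Exception {
        File file = File.createTempFile("gather", ".txt");
        file.deleteOnExit();
        FileChannel out = new FileOutputStream(file).getChannel();
        writeResponse(out);
        out.close();
        System.out.println(file.length());  // 40
    }
}
```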

3: Direct Channel Transfers

Did you ever notice that whenever you need to copy data to or from a file, you seem to write the same old copy loop over and over again? It's always the same story: you read a chunk of data into a buffer, then immediately write it back out again somewhere else. You're not doing anything with that data, so why is it necessary to pull it in just to shove it back out again? Why is it necessary to continually reinvent this wheel?

Here's a thought. Wouldn't it be great if you could just tell some class "Move the data from this file to that one" or "Write all the data that comes out of that socket to this file over there"? Well, thanks to the modern miracle of direct channel transfers, now you can.

public abstract class FileChannel
extends AbstractChannel
implements ByteChannel, GatheringByteChannel, ScatteringByteChannel
   // This is a partial API listing

   public abstract long transferTo (long position, long count,
      WritableByteChannel target)

   public abstract long transferFrom (ReadableByteChannel src,
      long position, long count)

A channel transfer lets you cross-connect two channels so that data is transferred directly from one to the other without any further intervention on your part. Because the transferTo() and transferFrom() methods belong to the FileChannel class, a FileChannel object must be the source or destination of a channel transfer (you can't transfer from one socket to another, for example). But the other end may be any ReadableByteChannel or WritableByteChannel, as appropriate.

On operating systems with appropriate support, channel transfers can be done entirely in kernel space. This not only relieves you of the chore of doing the copy, it bypasses the JVM entirely! One low-level system call and boom! Done. Even on those OS platforms without kernel support for transfers, making use of these methods still saves you the trouble of writing yet another copy loop. And the odds are good that the implementation will use native code or other optimizations to move the data as quickly as possible, faster than you could ever do it yourself in regular Java code. And the best part: code you never write never has bugs.
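The whole-file copy that used to take a loop now takes one line. A self-contained sketch (the temp files are invented for illustration):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

public class TransferSketch {
    // Copy an entire file with one call -- no buffer, no copy loop.
    public static void copy(File src, File dst) throws Exception {
        FileChannel in = new FileInputStream(src).getChannel();
        FileChannel out = new FileOutputStream(dst).getChannel();
        try {
            in.transferTo(0, in.size(), out);  // may run entirely in kernel space
        } finally {
            in.close();
            out.close();
        }
    }

    public static void main(String[] args) throws Exception {
        File src = File.createTempFile("src", ".dat");
        File dst = File.createTempFile("dst", ".dat");
        src.deleteOnExit();
        dst.deleteOnExit();
        RandomAccessFile raf = new RandomAccessFile(src, "rw");
        raf.writeBytes("some file data");
        raf.close();
        copy(src, dst);
        System.out.println(dst.length() == src.length());  // true
    }
}
```

Note that transferTo() returns the number of bytes actually moved, which may be fewer than requested; production code should loop until the full count has been transferred.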

2: Non-Blocking Sockets

The lack of non-blocking I/O in the traditional Java I/O model has been conspicuous from the start. It's finally arrived with NIO. Channel classes that extend from SelectableChannel can be placed into non-blocking mode with the configureBlocking() method. As of the J2SE 1.4 release, only the socket channels (SocketChannel, ServerSocketChannel, and DatagramChannel) may be placed into non-blocking mode. FileChannel cannot be placed in non-blocking mode.

When a channel is non-blocking, read() or write() calls always return immediately, whether they transferred any data or not. This enables a thread to check if data is available without getting stuck.

ByteBuffer buffer = ByteBuffer.allocate (1024);
SocketChannel socketChannel = SocketChannel.open();
socketChannel.configureBlocking (false);


while (true) {
   if (socketChannel.read (buffer) != 0) {
      processInput (buffer);
   }

   // do whatever else this pass of the loop does
}
The code above represents a typical polling loop. A non-blocking read is attempted, and if some data was read, it's processed. A return value of zero from the read() call indicates no data is available (a return of -1 would indicate end-of-stream), and the thread trundles on through the body of the main loop, doing whatever else it does on each pass.

1: Multiplexed I/O

And now ladies and germs, the Number One New Thing You Can Do With NIO That You Couldn't Do Before.

The code example in the previous section uses polling to determine when input is ready on a non-blocking channel. There are situations where this is appropriate, but usually, polling is not very efficient. In a case where your processing loop is primarily doing something else and periodically checking for input, polling might be an appropriate choice.

But if the application's primary purpose is to respond to input arriving on many different connections (a Web server, for example), polling doesn't work so well. To be responsive, you need to poll quickly. But polling quickly needlessly burns tons of CPU cycles and generates massive numbers of unproductive I/O requests. I/O requests generate system calls, system calls entail context switches, and context switches are expensive.

When a single thread is managing many I/O channels, this is known as multiplexed I/O. For multiplexing, what you really want to do is have the managing thread block until input is available on at least one of the channels. But hey, weren't we just doing the happy dance about finally having non-blocking I/O? We had blocking I/O before, what the...?

The problem with the conventional blocking model is that a single thread can't multiplex a group of I/O streams. Without non-blocking mode, a read attempt by the thread on a socket with no data available would block the thread, thereby preventing it from taking care of other streams that may have data to read. The net effect is that one idle stream would suspend servicing of all streams.

The solution to this problem in the Java world has historically been to dedicate a thread to each active stream. As data became available on a given stream, its dedicated thread would wake up and read the data, process it, then block in a read() again until more data showed up. This process actually works, but it is not scalable. Threads (which are rather heavyweight to create) multiply at the same rate as sockets (which are relatively lightweight). The thread creation overhead can be mitigated somewhat by pooling and reusing them (more complexity and code that needs to be debugged), but the main problem is that it stresses the thread scheduler as the number of threads grows larger. The JVM thread-management machinery is designed to handle a few tens of threads, not hundreds or thousands. Even idle threads can slow things down considerably. Thread-per-stream may also introduce nasty concurrency issues if the stream-draining threads must funnel their data to common data-handling objects.

The right way to multiplex large numbers of sockets is with readiness selection (which takes the form of the Selector class in NIO). Selection is a big win over polling or thread-per-stream, because a single thread can monitor a large number of sockets easily. A thread can also (and here's where we get back to blocking again) choose to block and be awakened when any of those streams have data available (the readiness part) and receive information about exactly which streams are ready to go (the selection part).

Readiness selection is built on top of non-blocking mode; it only works with channels that have been placed in non-blocking mode. The actual selection process can also be non-blocking, if you prefer ("find out what's ready right now"). The key point is that a Selector object does the hard work of checking the state of a (potentially large) number of channels. You just act on the result of selection; you don't need to check each one yourself.

You create a Selector instance, then register one or more non-blocking channels with it, indicating for each what events are of interest. Below is a prototypical selection loop. In this example, incoming connections on a ServerSocketChannel object are serviced in the same loop as active socket connections (more complete examples are available in my book):

ServerSocketChannel serverChannel = ServerSocketChannel.open();
Selector selector = Selector.open();

serverChannel.socket().bind (new InetSocketAddress (port));
serverChannel.configureBlocking (false);
serverChannel.register (selector, SelectionKey.OP_ACCEPT);

while (true) {
   selector.select();

   Iterator it = selector.selectedKeys().iterator();

   while (it.hasNext()) {
      SelectionKey key = (SelectionKey) it.next();

      if (key.isAcceptable()) {
         ServerSocketChannel server = (ServerSocketChannel) key.channel();
         SocketChannel channel = server.accept();

         channel.configureBlocking (false);
         channel.register (selector, SelectionKey.OP_READ);
      }

      if (key.isReadable()) {
         readDataFromSocket (key);
      }

      it.remove();
   }
}

This is much simpler and far more scalable than the thread-per-socket arrangement. It's much easier to write and debug code like this, but just as importantly, the effort required to manage and service large numbers of sockets is vastly reduced. Selectors, probably more than any other new feature in NIO, delegate grunt work to the OS. That relieves the JVM of a huge amount of work, which frees memory and CPU resources and allows tremendous scaling because the JVM isn't spending time doing work the OS can do much more easily.


Well, there you have it. Ten things you can do with NIO in J2SE 1.4 that you couldn't do before in Java. This is by no means a complete list. There are plenty of other things I didn't mention, like custom character set transcoding, pipes, asynchronous socket connection, copy-on-write mapped files, and so on. I'd need an entire book to cover everything. ;-)

I hope these brief glimpses have given you an idea of what NIO can do, as well as what you can do with NIO in your own projects. NIO is not a replacement for the traditional I/O classes; don't throw away your working code just to replace it with NIO. But keep NIO in mind as you design new applications. You just might find that NIO can kick some butt when you turn it loose.

O'Reilly & Associates recently released (August 2002) Java NIO.

Ron Hitchens is a California-based computer consultant and educator whose career dates back to the disco era. Ron has used just about every computer system and programming language you can imagine: from 6502 assembler to XSLT. He is also the author of O'Reilly's Java NIO.


Copyright © 2009 O'Reilly Media, Inc.