I'm reading a book on Java, and the current section covers reading from a channel into a ByteBuffer. I found the way the author structured the while loop odd:
try (FileChannel inCh = (FileChannel) Files.newByteChannel(file)) {
    ByteBuffer lengthBuf = ByteBuffer.allocate(8);
    int strLength = 0;
    ByteBuffer[] buffers = { null, ByteBuffer.allocate(8) };
    while (true) {
        if (inCh.read(lengthBuf) == -1)              // EOF: no more records
            break;
        lengthBuf.flip();
        strLength = (int) lengthBuf.getDouble();     // string length is stored as a double
        buffers[0] = ByteBuffer.allocate(2 * strLength);
        if (inCh.read(buffers) == -1) {              // scatter-read: string chars, then long value
            System.err.println("EOF found reading the prime string.");
            break;
        }
        System.out.printf("String length: %3s String: %-12s Binary Value: %3d%n", strLength,
                ((ByteBuffer) buffers[0].flip()).asCharBuffer().toString(),
                ((ByteBuffer) buffers[1].flip()).getLong());
        lengthBuf.clear();
        buffers[1].clear();
    }
    System.out.println("\nEOF reached.");
} catch (IOException e) {
    e.printStackTrace();
}
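For context, judging from the read side, each record in the file seems to be an 8-byte double holding the string length, then 2*length bytes of chars, then an 8-byte long. A writer along these lines would produce that layout; this is my own sketch to illustrate the format, not the book's code, and the path and values are made up:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class RecordWriterSketch {
    public static void main(String[] args) throws IOException {
        try (FileChannel outCh = (FileChannel) Files.newByteChannel(Paths.get("primes.bin"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            writeRecord(outCh, "prime = 2", 2L);
            writeRecord(outCh, "prime = 3", 3L);
        }
    }

    // One record: (double) length, the string's chars, (long) value.
    static void writeRecord(FileChannel ch, String s, long value) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(8 + 2 * s.length() + 8);
        buf.putDouble(s.length());   // length as a double, hence getDouble() on the read side
        for (char c : s.toCharArray()) {
            buf.putChar(c);          // 2 bytes per char, matching 2*strLength on the read side
        }
        buf.putLong(value);
        buf.flip();
        ch.write(buf);
    }
}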
I tried it like this:
while (inCh.read(lengthBuf) != -1) {

and it works the same. Is there a practical or code-clarity reason the author wrote it the way he did?
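For completeness, here's the full loop the way I restructured it; the body is unchanged from the author's, only the read moved into the loop condition:

while (inCh.read(lengthBuf) != -1) {
    lengthBuf.flip();
    strLength = (int) lengthBuf.getDouble();
    buffers[0] = ByteBuffer.allocate(2 * strLength);
    if (inCh.read(buffers) == -1) {
        System.err.println("EOF found reading the prime string.");
        break;
    }
    System.out.printf("String length: %3s String: %-12s Binary Value: %3d%n", strLength,
            ((ByteBuffer) buffers[0].flip()).asCharBuffer().toString(),
            ((ByteBuffer) buffers[1].flip()).getLong());
    lengthBuf.clear();
    buffers[1].clear();
}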