June 2006 archive

pyPdf 1.6

Finally! Apparently I must be unemployed in order to get anything done on pyPdf. I released version 1.6 today. Major highlights include:

  • Reads more PDF files than ever before.
  • Supports reading and creating compressed content streams.
  • Allows access to document information, such as the title, author, creator, and so on.
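
For example, pulling the document information out of a file looks roughly like this (a rough sketch; the file name is made up):

from pyPdf import PdfFileReader
reader = PdfFileReader(file("example.pdf", "rb"))
info = reader.getDocumentInfo()  # reads the /Info dictionary of the PDF
print info.title, info.author, info.creator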

Basically, it’s just better. Mr. I-Am-Bitter has been using it on mountains of PDF files, so I feel confident that it works better than ever.

“import zlib” vs. .NET Framework

During my current period of unemployedness, I’ve been preparing for some contract development work that I expect to be doing in the near future. Inspired by the article series on IronPython and .NET GUI development over at The Voidspace Techie Blog, I’ve been looking into what kinds of development struggles I might face using IronPython and .NET as a platform. To that end, I began to look at making pyPdf work under IronPython.

The first struggle I encountered was that the “zlib” module was not available in IronPython. “No problem,” I think to myself. “There’s got to be access to a DEFLATE library through .NET, somehow.”

“Yes, younger-self,” my older-self now says. “There is a .NET way to do this, but apparently it requires an annoyingly large amount of code.”

Here’s the original Python code used to implement the FlateDecode streams in pyPdf:


import zlib
def decompress(data):
    return zlib.decompress(data)
def compress(data):
    return zlib.compress(data)

Okay, that was simple and straightforward. Here’s the IronPython solution (note, if you have suggestions to make this shorter, please do let me know):


import System
from System import IO, Collections, Array
def _string_to_bytearr(buf):
    # Convert a Python string (used here as raw bytes) into a .NET Byte array.
    retval = Array.CreateInstance(System.Byte, len(buf))
    for i in range(len(buf)):
        retval[i] = ord(buf[i])
    return retval
def _bytearr_to_string(bytes):
    # Convert a .NET Byte array back into a Python string.
    retval = ""
    for i in range(bytes.Length):
        retval += chr(bytes[i])
    return retval
def _read_bytes(stream):
    # Read an entire .NET Stream into a Byte array, 2 KB at a time.
    ms = IO.MemoryStream()
    buf = Array.CreateInstance(System.Byte, 2048)
    while True:
        bytes = stream.Read(buf, 0, buf.Length)
        if bytes == 0:
            break
        else:
            ms.Write(buf, 0, bytes)
    retval = ms.ToArray()
    ms.Close()
    return retval
def decompress(data):
    bytes = _string_to_bytearr(data)
    ms = IO.MemoryStream()
    ms.Write(bytes, 0, bytes.Length)
    ms.Position = 0  # fseek 0
    gz = IO.Compression.DeflateStream(ms, IO.Compression.CompressionMode.Decompress)
    bytes = _read_bytes(gz)
    retval = _bytearr_to_string(bytes)
    gz.Close()
    return retval
def compress(data):
    bytes = _string_to_bytearr(data)
    ms = IO.MemoryStream()
    # The final True argument leaves the MemoryStream open when the
    # DeflateStream is closed, so the compressed bytes can still be read out.
    gz = IO.Compression.DeflateStream(ms, IO.Compression.CompressionMode.Compress, True)
    gz.Write(bytes, 0, bytes.Length)
    gz.Close()
    ms.Position = 0  # fseek 0
    bytes = ms.ToArray()
    retval = _bytearr_to_string(bytes)
    ms.Close()
    return retval

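On the “make it shorter” front, one likely candidate is _string_to_bytearr: if the IronPython build at hand can construct a typed .NET array directly from a Python sequence (I haven’t verified this against every release, so treat it as a sketch), it collapses to a one-liner:

def _string_to_bytearr(buf):
    # assumes Array[System.Byte](sequence) is supported by this IronPython build
    return Array[System.Byte](map(ord, buf))
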
The code grew in length for a few reasons. First, the original compress and decompress functions took string arguments, but those strings were really being used as arrays of bytes. In .NET there is a clear distinction between a byte array and a string, so conversion functions were needed to build one from the other. I actually like this, because it forces you to encode and decode strings whenever you use them, making you aware of their Unicode nature (something CPython largely lets you ignore).
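
For what it’s worth, when the data really is text rather than raw bytes, the explicit conversion goes through System.Text.Encoding. A tiny sketch (not something pyPdf needs, since its stream data is binary):

from System.Text import Encoding
raw = Encoding.UTF8.GetBytes(u"hello, world")  # encode: string -> Byte[]
text = Encoding.UTF8.GetString(raw)            # decode: Byte[] -> string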

The other added complexity was the use of streams rather than plain functions you can simply call. A nice object-oriented stream library is quite flexible and powerful, but as you can see, it can also make things a little more verbose. But, you know… both approaches have their advantages.
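
To illustrate that flexibility, the same DeflateStream can wrap any other stream, a file on disk for instance, and the helpers above still apply. A hypothetical sketch (the file name is made up):

fs = IO.File.OpenRead("content.bin")
gz = IO.Compression.DeflateStream(fs, IO.Compression.CompressionMode.Decompress)
data = _bytearr_to_string(_read_bytes(gz))
gz.Close()  # closing the DeflateStream also closes the underlying file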

Finally, I had to write a function just to read an entire stream into a byte array. MemoryStream has a simple “ToArray()” method; I wish that were standard on all Stream objects. Still, this function only needs to be written once and can be reused for many different purposes, so it isn’t really adding to the length of the deflate code; it’s adding to my toolbox somewhere else. Note that my implementation is fairly wasteful of memory, but it is a simple approach that won’t fail if Read returns a partial buffer or anything like that.

IronPython is interesting. One hurdle is down for pyPdf, but a few still remain. We’ll see what happens next.