IO Streaming in ITK
Out-of-core image processing is necessary when a dataset is larger than a computer's main memory. ITK's pipeline architecture typically buffers the entire image at each filter, producing memory requirements that are multiples of the dataset size and inhibiting the processing of large datasets. Fortunately, ITK was designed to accommodate sequential processing of sub-regions of a data object, a process called streaming. Previously, the easiest way to stream was to use the StreamingImageFilter, which sequentially requests sub-regions, causing its input to stream, and then reassembles the regions into a single buffered output image. If the dataset exceeds the size of system memory, however, this approach fails: the entire image must never reside in memory at once, so the pipeline must stream all the way from the reader to the writer.
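A reader-to-writer streaming pipeline can be sketched as follows. This is a minimal illustration, not a complete program from this article: the file names, the pixel type, the choice of MedianImageFilter, and the number of stream divisions are all assumptions, and it presumes an image file format that supports streamed reading and writing (such as MetaImage). The key call is ImageFileWriter's SetNumberOfStreamDivisions(), which drives the pipeline in sequential sub-regions so the full image is never buffered at once.

```cpp
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkMedianImageFilter.h"

int main(int argc, char * argv[])
{
  using ImageType = itk::Image<short, 3>;

  // The reader streams only the sub-region requested from downstream.
  auto reader = itk::ImageFileReader<ImageType>::New();
  reader->SetFileName(argv[1]);

  // MedianImageFilter supports streaming: it processes any requested
  // sub-region (padded by its neighborhood radius) without requiring
  // the whole input to be buffered.
  auto median = itk::MedianImageFilter<ImageType, ImageType>::New();
  median->SetInput(reader->GetOutput());

  // The writer initiates streaming: it splits its output into
  // sub-regions and updates the pipeline once per sub-region.
  auto writer = itk::ImageFileWriter<ImageType>::New();
  writer->SetInput(median->GetOutput());
  writer->SetFileName(argv[2]);
  writer->SetNumberOfStreamDivisions(10); // illustrative value
  writer->Update();
  return 0;
}
```

With 10 stream divisions, peak memory for the image buffers is roughly one tenth of what a fully buffered pipeline would require.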
Section 13.3 of the ITK Software Guide explains in detail how streaming works in ITK's pipeline execution. For streaming to work correctly, every filter in the pipeline must be able to stream. The PropagateRequestedRegion() phase of the pipeline update is crucial: during this phase each filter negotiates which region it should process on its input(s) and output(s). A filter incapable of streaming sets its input requested region to the largest possible region, which forces all upstream filters (those closer to the reader) to process the largest possible region as well, so they do not actually stream. In other words, streaming must begin with the writer.
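The negotiation described above happens in each filter's GenerateInputRequestedRegion() method. The sketch below shows the pattern a non-streamable filter follows; the class name NonStreamableFilter and its InputImageType alias are hypothetical, but SetRequestedRegionToLargestPossibleRegion() is the standard ITK call such filters use to demand their entire input.

```cpp
// Called during the PropagateRequestedRegion() phase of a pipeline
// update. A filter that cannot operate on sub-regions overrides this
// method to request its entire input, which forces every upstream
// filter to produce the largest possible region - defeating streaming
// for the upstream portion of the pipeline.
void NonStreamableFilter::GenerateInputRequestedRegion()
{
  // Default behavior: copy the output requested region to the input.
  Superclass::GenerateInputRequestedRegion();

  if (this->GetInput())
  {
    // GetInput() returns a const pointer; the requested region is
    // modified through a non-const pointer, per ITK convention.
    auto * input = const_cast<InputImageType *>(this->GetInput());
    input->SetRequestedRegionToLargestPossibleRegion();
  }
}
```

A filter that can stream simply leaves the default behavior in place (or pads the requested region by its neighborhood radius), so the sub-region requested by the writer propagates all the way back to the reader.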