Pipes and Loops
September 25, 2007
Today I spent some time studying the Java2D code of OpenJDK (finally!). It is quite confusing when you come from other Java2D implementations that work completely differently (because they are mostly derived from the AWT 1.1 Graphics model).
In GNU Classpath, the Graphics and Graphics2D subclasses are relatively specific. For example, there is one CairoGraphics2D class that renders to a Cairo surface, and a bunch of subclasses for more specific surfaces. While that makes sense from an object-oriented point of view, it has its problems. For example, it is difficult to share the implementation of an algorithm (e.g. drawing a line) between otherwise unrelated Graphics implementations (e.g. a BufferedImage Graphics and a framebuffer Graphics).
(Disclaimer: don’t take the following architecture description as authoritative; it is the result of only a couple of hours of study and is certainly wrong in some places. If you know better, please comment below, I’d really like to understand all this as well as possible.)
In OpenJDK the architecture is very different. There is only one final, generic Graphics2D implementation for all rendering, the SunGraphics2D class. This is the front end of the rendering pipeline. It is driven by a SurfaceData object, which serves as the central interface to the graphics backend. The SurfaceData sets up the pipes and loops that are used to render to the actual drawing surface. So what’s with those pipes and loops?
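To make the front-end/back-end split concrete, here is a heavily simplified sketch of the idea. All class and method names here (Surface, GenericGraphics2D, getLinePipe, and so on) are hypothetical stand-ins I made up for illustration; the real SunGraphics2D and SurfaceData classes are far more involved.

```java
// A pipe that knows how to render one kind of primitive.
interface DrawLinePipe {
    void drawLine(int x1, int y1, int x2, int y2);
}

// Captures the parts of the Graphics2D state that matter for pipe selection.
class GraphicsState {
    boolean hasTransform;
    boolean hasCustomComposite;
}

// Stands in for SurfaceData: it represents the backend and picks the pipes.
abstract class Surface {
    abstract DrawLinePipe getLinePipe(GraphicsState state);
}

// Stands in for SunGraphics2D: the single generic front end.
// It never renders anything itself; it only delegates to the pipes
// that the surface selected for the current state.
class GenericGraphics2D {
    private final Surface surface;
    private final GraphicsState state = new GraphicsState();

    GenericGraphics2D(Surface surface) {
        this.surface = surface;
    }

    void drawLine(int x1, int y1, int x2, int y2) {
        surface.getLinePipe(state).drawLine(x1, y1, x2, y2);
    }
}
```

The point of this shape is that every backend only has to answer the question "which pipe handles this primitive in this state?", while the front end stays identical everywhere.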
Pipes are the building blocks of the rendering pipeline. In the simplest case, a pipeline consists of a single pipe. For example, if there is no transform, no paint, no composite, etc. (that is, a simple AWT 1.1-like setting), then the pipeline for drawLine() (and the other primitives) in the X11 backend consists only of the X11Renderer class, which basically just calls the corresponding Xlib function. With a more complicated configuration, the pipeline also gets more complicated. For example, a drawLine() would then actually be drawn by creating a Line2D object and rendering it as a generic shape: a PixelToShapeConverter (which is both a PixelDrawPipe and a PixelFillPipe, meaning it serves to draw and fill graphics primitives) is plugged together with a SpanShapeRenderer (a ShapeDrawPipe) that renders the shape. Another example of a pipe can be found in the OpenGL rendering pipeline, where all rendering goes through a BufferedRenderPipe that queues up rendering instructions for single-threaded rendering. At the end of a pipeline there is usually a call to a graphics primitive implemented by the graphics backend.
But what if the graphics backend has no way to implement a certain graphics primitive? An extreme example is BufferedImage, which doesn’t have any graphics backend at all. This is where the loops come into play. The loops are implementations of graphical algorithms for all kinds of graphics primitives (lines, rectangles, but also images, blitting, text, etc.) on different output formats. A sophisticated registry and lookup mechanism is used to find the rendering loop best suited for a given primitive, the source and destination rasters, and the current composite setting. And then there are always the generic fallback loops for the more exotic configurations, which can render anything, only slower.
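A toy version of such a registry might look like the following. This is my own hypothetical sketch of the lookup-with-fallback idea only; the real registry in OpenJDK's sun.java2d.loops package keys on actual surface types and composites and is considerably more sophisticated.

```java
import java.util.HashMap;
import java.util.Map;

// A rendering loop; it returns a label here only so the choice is observable.
interface RenderLoop {
    String run();
}

class LoopRegistry {
    private final Map<String, RenderLoop> loops = new HashMap<>();

    // The generic fallback loop: can handle any configuration, but slowly.
    private final RenderLoop generic = () -> "generic";

    void register(String primitive, String src, String dst, String composite,
                  RenderLoop loop) {
        loops.put(key(primitive, src, dst, composite), loop);
    }

    // Prefer a loop specialized for this exact configuration of primitive,
    // source format, destination format and composite; otherwise fall back.
    RenderLoop lookup(String primitive, String src, String dst, String composite) {
        return loops.getOrDefault(key(primitive, src, dst, composite), generic);
    }

    private static String key(String primitive, String src, String dst, String composite) {
        return primitive + "/" + src + "/" + dst + "/" + composite;
    }
}
```

The useful property is that optimized loops are strictly optional: a backend that registers nothing still works, it just always lands on the slow generic loops.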
I think this is a very flexible and efficient architecture. It allows good reuse of the involved algorithms across different backend implementations, while still allowing optimized implementations to be plugged in wherever they are available.
Please comment below if you have any additions and/or corrections on my architecture outline. Thanks.