Monday, August 16, 2010

Data Driven Color Debugging... (part 3)

This falls along the same lines as dealing with any other data driven aspect of industrial printing.

First, you have to set the process up correctly.  This involves several steps:
  1. Identifying what needs to be changed and why.
  2. Determining what color changes to apply.
  3. Testing the color changes relative to color approval.
  4. Testing the data aspects that determine the color changes.
The key difference between testing data driven color and, say, data driven content is that three elements are involved in the color.  First, you have to pick the right color to change and make sure the color change process recognizes the appropriate shades, etc.  Second, you have to make sure what is produced passes all approvals.  Third, you have to make sure the data you need to recognize when to apply the color change is present.

In general this is not too different from any other color work, save for step #3.

Our general model for #3 has two elements.  The first is to give transforms textual names, e.g., "LightGray1", and to match job identifiers with a table of transforms.  This allows a human to quickly determine which transforms go with which work.  The second is to attach metadata to the job in order to trigger the proper transform group.  We do this with TLEs (or PDF Bookmarks).  Each TLE or bookmark describes a span of pages and links that span of pages to a set of transforms.
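
To make this concrete, here is a minimal sketch of that model in Python.  The transform names, the job identifier, and the page spans are all hypothetical - the point is simply the shape of the data: named transforms, a job table, and bookmark/TLE spans that trigger transform groups.

    # Table of transforms keyed by the textual names a human reads in a report.
    # Source and target colors here are illustrative RGB triples.
    TRANSFORMS = {
        "LightGray1": {"source": (0.80, 0.80, 0.80), "target": (0.75, 0.75, 0.75)},
        "LogoBlue":   {"source": (0.10, 0.20, 0.60), "target": (0.00, 0.18, 0.65)},
    }

    # Job setup: which transform groups go with which job identifiers.
    JOB_TABLE = {
        "JOB-ACME-STMT": ["LightGray1", "LogoBlue"],
    }

    # Metadata pulled from TLEs or PDF bookmarks: each entry is a page span
    # plus the transforms that span should trigger.
    PAGE_SPANS = [
        {"first": 1, "last": 4,  "transforms": ["LightGray1"]},
        {"first": 5, "last": 12, "transforms": ["LightGray1", "LogoBlue"]},
    ]

    def transforms_for_page(page: int) -> list:
        """Return the transform names that should fire on a given page."""
        names = []
        for span in PAGE_SPANS:
            if span["first"] <= page <= span["last"]:
                names.extend(span["transforms"])
        return names

    print(transforms_for_page(6))   # ['LightGray1', 'LogoBlue']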

Having this data embedded in the document makes debugging and tracking problems straightforward.

On the application side, pdfExpress XM and APX Raster Pro report which transforms are applied, by page range and bookmark, as the transformations occur.  This allows support personnel to quickly determine whether the proper transforms are being applied.

Most imaging processes we are involved in already support some form of metadata per page for other reasons, e.g., mailing or mag strip encoding, so adding additional information for color support is not an issue.
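
For illustration, a per-page metadata record might look something like the sketch below (the field names are made up); the color information simply rides along with metadata the workflow already carries:

    # Hypothetical per-page metadata record.  The mailing and encoding fields
    # already exist in many workflows; the color transform group is added.
    page_metadata = {
        "page": 7,
        "mail_piece_id": "000123",       # existing mailing metadata (illustrative)
        "mag_strip": "TRACK2-DATA",      # existing encoding metadata (illustrative)
        "color_transforms": ["LightGray1", "LogoBlue"],   # added for color support
    }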

On the debugging side, given this information, it's not too hard to see if the proper transforms trigger simply by inspecting the log.  However, that in and of itself may not be adequate.  Sometimes the output will be wrong.
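
As a sketch of what that log inspection can look like, the following compares the transforms a log says were applied against what the job setup expects.  The log line format here is invented for illustration; the actual applications report by page range and bookmark, but not necessarily in this layout.

    # A rough sketch of checking a transform log against job setup.
    import re

    LOG_LINE = re.compile(r"pages (\d+)-(\d+) \[(?P<bookmark>[^\]]+)\]: applied (?P<name>\S+)")

    def applied_transforms(log_text):
        """Collect which transforms the log says fired on each page range."""
        applied = {}
        for m in LOG_LINE.finditer(log_text):
            span = (int(m.group(1)), int(m.group(2)))
            applied.setdefault(span, set()).add(m.group("name"))
        return applied

    def missing_transforms(expected, log_text):
        """Report expected transforms that never show up in the log for a span."""
        applied = applied_transforms(log_text)
        return {span: names - applied.get(span, set())
                for span, names in expected.items()
                if names - applied.get(span, set())}

    sample_log = ("pages 1-4 [Stmt]: applied LightGray1\n"
                  "pages 5-12 [Logos]: applied LightGray1\n")
    expected = {(1, 4): {"LightGray1"}, (5, 12): {"LightGray1", "LogoBlue"}}
    print(missing_transforms(expected, sample_log))   # {(5, 12): {'LogoBlue'}}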

Typically "wrong output" will come from one of a few sources: 
  • Changes to input that were not tested.
  • Lack of full spectrum testing on job setup.
  • Programming errors.
Input changes, or content creep as we described previously, are common.  The CSR end of the business has to be made aware that when customers supply new content art, there may be a workflow impact - especially if the change involves color, e.g., a logo change.  This is an organizational issue which must be addressed.

Testing failures are inevitable and stem from a number of issues.  Many programmers are basically unaware of color and how it works and so make wrong assumptions - particularly when coding color-related functions into the workflow.

Another issue is lack of coverage.  A customer may be supplying dozens of logos - all with slightly different incorrect colors.  You have to make sure that you identify all the logos in question for correction - not just the first few you encounter.
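
A simple way to check coverage is to sweep every supplied logo and flag anything close to the known-bad color, not just exact matches.  The sketch below assumes an RGB value has been sampled from each logo; the file names, colors, and tolerance are illustrative.

    # Flag every logo whose sampled color is within tolerance of the bad color.
    KNOWN_BAD = (0.10, 0.20, 0.60)     # the incorrect logo color (illustrative)
    TOLERANCE = 0.05                   # per-channel tolerance (illustrative)

    logo_samples = {
        "logo_a.pdf": (0.10, 0.20, 0.60),
        "logo_b.pdf": (0.11, 0.19, 0.62),   # slightly different, still wrong
        "logo_c.pdf": (0.00, 0.18, 0.65),   # already the approved color
    }

    def near(color, target, tol=TOLERANCE):
        return all(abs(c - t) <= tol for c, t in zip(color, target))

    needs_correction = [name for name, rgb in logo_samples.items()
                        if near(rgb, KNOWN_BAD)]
    print(needs_correction)   # ['logo_a.pdf', 'logo_b.pdf']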

You must also look for interactions between color transforms, e.g., if I am changing a gray to another color and I am also changing the shade of another, similar gray, I don't want there to be a bad interaction.  This requires careful output inspection.
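
One way to catch such interactions before output inspection is a static check over the transform table: flag any pair where one transform's target lands inside another's source range, or where two source ranges match similar grays.  The colors and tolerance below are illustrative, and this is only a sketch of the idea - it does not replace inspecting the output.

    # Warn about pairs of transforms whose source/target colors collide.
    TOL = 0.03

    transforms = {
        "LightGray1": {"source": (0.80, 0.80, 0.80), "target": (0.75, 0.75, 0.75)},
        "LightGray2": {"source": (0.76, 0.76, 0.76), "target": (0.70, 0.70, 0.70)},
    }

    def close(a, b, tol=TOL):
        return all(abs(x - y) <= tol for x, y in zip(a, b))

    for name_a, ta in transforms.items():
        for name_b, tb in transforms.items():
            if name_a == name_b:
                continue
            if close(ta["target"], tb["source"]):
                print(f"warning: {name_a} output may be re-mapped by {name_b}")
            if close(ta["source"], tb["source"]):
                print(f"warning: {name_a} and {name_b} match similar input grays")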

Programming errors are usually not found until real data is provided.  Test data will often cover only what a programmer expects to test and not the full range of real world issues.
