Over the last 10 years or so, the biopharma industry has moved toward risk-based qualification, and one outcome of that shift has been leveraging equipment commissioning to reduce the work required for qualification and validation. As companies realized that much of the work done during commissioning was being repeated during IQ/OQ, they began leveraging the commissioning effort to reduce the work needed during qualification. This has not come without some headaches, though: an industry accustomed to rigorous Quality-unit scrutiny of every document has struggled to understand how to use commissioning documentation that is most often not written and executed by people familiar with cGMP documentation requirements. We’ll explore here what some of the issues are and suggest ways to overcome them.
Let’s first examine what commissioning is and what it is not. Commissioning is the practice of following a written script to test equipment, with the aim of ensuring that it is installed and works per design and user requirements. Sounds a lot like qualification, huh? There are a few differences, though. Commissioning is generally carried out by a third party experienced in that work. They have (or write) their own test scripts. These test scripts are not usually examined by the Quality unit prior to execution, and they are generally not signed off ahead of execution either.
The company charged with commissioning is not necessarily versed in cGMP principles; they are engineers and technicians, and their idea of good documentation practice is probably not the same as that of your QA department. This can cause problems later on, when you attach the commissioning report to the qualification protocol and it is being reviewed after the fact by QA. A real-world example I observed just recently: QA was reviewing an IQ for an ISO 8 cleanroom, and the commissioning report for the HVAC unit was attached as documentary evidence. At the top of the test script sheet there were blanks for the technician’s name, the date, weather conditions, etc. Everything was filled out except the line for humidity, which was left blank. Notably, this HVAC system had no provisions for humidification or dehumidification, so the technician filling out the form did not think it was important to fill in that blank. Not so with the QA reviewer. Her position was that cGMP Good Documentation Practice (GDP) requires strict adherence to filling in every blank, even if only to write “n/a” with an explanation as to why. This tiny little thing held up the approval of the protocol.
Another thing that can cause issues is the test scripts themselves. Commissioning often consists of checklists rather than instructions on how to check each item on the equipment. For example, a commissioning script might say something like “ensure the air filter listed on the design documents is installed properly,” with a place for a check mark next to it. A qualification protocol with the same test might require documenting the make, model, and serial number of that filter so that a reviewer could later double-check it against the specification. Commissioning test scripts also don’t require a second checker for everything, while a qualification protocol almost always does.
Deviations found during commissioning are handled differently than deviations in a protocol. During commissioning, an “issues list” is usually kept of tests that could not be completed, or that were completed but failed. This usually takes the form of a spreadsheet or other table that lists each issue and why it could not be completed or why it failed. As part of closing out the project, that issues list is reviewed to make sure that either the issues found are not important to the client or the commissioning agent went back and re-executed the test after some corrective action was taken. It does not generally include an explanation of the corrective action, as long as the re-executed test passes. In a qualification protocol, deviations are written up to include the test that failed, an investigation into why it failed, documentation of the corrective action taken, and follow-up documentation of the test being re-executed to show passing results. An example might be testing the cooling mode of a rooftop unit that cannot be fully tested in the winter months when the unit is being commissioned. By the time warm weather rolls around, the commissioning agent will have moved on to their next project, so this is captured in the issues log but is not completed by the commissioning agent. If that commissioning document and issues list is then attached to a qualification protocol, the reviewer may have a problem signing it off, since they see the commissioning as never having been completed.
A logical question is how to use this commissioning documentation if it is fraught with the issues described above. One answer is to require the commissioning agent to follow all of the same GDP required for qualification/validation activities, have the documentation pre-approved by QA, and then reviewed by QA again after execution but before it is attached to the protocol. I would argue that this defeats the purpose of “leveraging” the commissioning: if you are going to treat it like qualification, the advantage of leveraging it goes away. A more practical approach is to review the commissioning report after it has been executed and then, using a risk-based approach, decide whether any critical parameters that matter to qualification need to be re-tested during qualification. If you are leveraging commissioning for no-impact or indirect-impact systems, the items you find that need to be re-tested should be few. Going back to our example, if the outside humidity was not recorded during the commissioning of an HVAC system that doesn’t humidify or dehumidify, a risk-based review would tell you that it is probably OK for that line to be blank. Using our other example of the cooling function not being tested on an RTU: if room temperature is identified as a Critical Process Parameter (CPP), then the protocol could include a test for it during qualification. If it is not a CPP, then an explanation is provided in the protocol where the commissioning report is attached, and you are done.
So, the bottom line is to use common sense and depend on your Subject Matter Experts (SMEs) to give you good advice on the impact of anything in the commissioning report that is not completed to the satisfaction of your reviewers. In my opinion, you should not subject the commissioning process to a rigorous QA review prior to execution, and the technicians executing the commissioning test scripts should not be subject to documented GDP training (although you might want to give them some instruction in your requirements). These are just my opinions, and I’m 100% sure there are others out there who will disagree with some or all of what I have presented here. Feel free to drop me a note if you’d like to disagree or have questions.