An interlaboratory test may be used, among other things, to validate a test method or measurement technique, generate precision statements, investigate causes of systematic error or bias, assess lab performance against known values, or assess lab performance and uniformity of results through comparison with other labs' results (a primary function of CTS tests). When used to assess measurement performance, the term proficiency test is often employed.
SELF-COMPARISON is a start
For years, many labs relied on what might be called a self-comparison approach. Calibration of instruments to in-house standards was used to assure that the equipment was operating properly, and it was assumed that lab personnel were correctly following an established test method. Two problems with this approach are immediately apparent. First, many tests lack absolute standards. Second, there is no indication of how well the lab's results correlate with those of other labs performing the same test. In-house standards and internal procedures to verify repeatability are really only the first step toward assuring quality in the testing lab.
ROUND ROBINS are not always feasible
Traditional round robin testing, in which the same sample is tested and passed from lab to lab, is impractical when dealing with a large number of labs and is inappropriate for destructive tests and for many materials, particularly those affected by environmental conditions. Its obvious advantage is that each participating lab tests the same sample(s). The process, however, is time-consuming and subject to delays if one lab fails to pass on the sample in a timely fashion. The supervisor of such a test must also acknowledge the possibility of accidental or intentional tampering with the sample by one or more of the labs. Historically, round robins have been offered on an informal basis, so labs failed to receive consistent feedback regarding performance.
PROFICIENCY TESTING provides an answer
The National Bureau of Standards (now known as the National Institute of Standards and Technology, or NIST) developed a concept of formalized, large-scale interlaboratory testing in the late 1960s and subsequently created such programs for paper and rubber. NBS provided Standard Reference Materials (SRMs) to various industries, but was unsure how individual labs used these standards to improve the quality of their measurements.
NBS also desired to know what steps were taken when such standards were not available. As a result, NBS hoped to be able to assist labs in improving test measurements. While the majority of prior interlaboratory studies concentrated on validation of a test method, NBS initiated a proficiency testing program that focused mainly on overall lab performance. Full responsibility for the programs' operations was transferred to CTS in the mid-1970s. The concept was then expanded by CTS to other industries.
At a minimum, a proficiency test provider must select test methods, identify and procure homogeneous materials, distribute samples, collect and analyze data, and summarize results for participants, all in a timely fashion. The quality of the provider can be discerned in the simplicity of the enrollment process, choice of test methods, caliber of test samples, timely distribution process, suitability of the data analysis method employed, turn-around time (the time it takes to analyze the data), and clarity and usefulness of the performance report, to name just a few considerations.
THE CTS APPROACH sets it apart
Founded in 1971, Collaborative Testing Services, Inc. (CTS) is a privately owned company that specializes in interlaboratory testing programs for a wide variety of industrial sectors, including rubber, plastics, fasteners and metals, containerboard, paper, wine, agriculture, hemp, and color. In addition, the firm operates a forensic proficiency testing program. All of the programs are designed to assist organizations in achieving and maintaining quality control objectives. Thousands of organizations around the globe (more than 80 countries, in fact) have made CTS testing programs part of their quality system.
The majority of proficiency test providers offer a simple one-off pass/fail report. CTS continues the NBS goal of analyzing and improving overall lab performance. The process starts with enabling a lab to assess the level and uniformity of its testing by comparing its results to those of other labs. We don't just analyze individual sample results; we evaluate a lab at three levels: performance on individual samples tested; simultaneous analysis of the results of individual samples to check for consistency of testing; and comparison of a lab's results with the industry's overall performance. This evaluation is accomplished through multivariate analysis.
As a by-product, the reports generated by CTS also provide the industry with an assessment of the state of testing capabilities, particularly where true values are lacking for test results. From there, the benefits attributed to proficiency testing continue:
Enables a lab to evaluate both instrument calibration and operator technique. Instrument calibration procedures check only the instrument, and yet all tests can be affected by operator techniques. In addition, the analysis can be extended to an examination of laboratory/material interaction problems, often overlooked by labs, which can result in inconsistent test results.
Results in a savings of production costs. A consistently low or high test result, or any uncertainty regarding the accuracy of a test result, could increase manufacturing costs by requiring additional processing and/or quantities of expensive raw materials to assure meeting production specifications. Participation in a proficiency program allows a lab to track results over a period of time, conduct trend analyses, and make corrections as needed on an ongoing basis.
Provides documentation of a lab's testing capabilities. Independent testing labs can detail their testing capabilities for prospective clients, and suppliers can reassure customers (both current and prospective) of their ability to test for compliance with specified requirements. At the same time, confidentiality of test results can be assured if program participants are identified only by lab code numbers in the general report.
Fulfills a key requirement for those labs seeking accreditation and companies undertaking Quality Systems Registration (QSR).
EXPLAINING THE STATISTICS behind CTS test design
Our primary test design can be broken down into five key components of the analysis: comparative statistics; two-sample plots; multivariate control charts; generation of control ellipses; and interpretation of control ellipses. We will touch briefly upon each, as well as point out when different test designs must be used.
Comparative Statistics
All of the CTS interlaboratory programs employ comparative statistics. This model is based on the principle of reproducibility: two or more labs agreeing (nominally) on a value for a given property. In comparative statistics the best value for a property is the most agreed-upon value, called the grand mean or consensus value. The grand mean is usually the arithmetic mean or median of the results derived in all laboratories (lab means) for a given sample. CTS uses the difference between these two values (lab mean and grand mean), along with the variation about the grand mean, to determine how close a lab's result comes to the consensus value. This analysis of deviation on a single sample is an important preliminary step in judging lab performance, but it does not tell the whole story. If the values for two or more samples are analyzed simultaneously, further conclusions about a lab's precision and consistency in that test can be drawn.
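As a rough illustration of this consensus approach (a minimal Python sketch, not CTS's production analysis), the example below computes a grand mean and each lab's scaled deviation from it using hypothetical lab means; the scaling shown assumes the ASTM E691-style h consistency statistic.

    import numpy as np

    # Hypothetical lab means for one property on a single sample (one value per lab).
    lab_means = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 13.0, 12.2, 12.1])

    grand_mean = lab_means.mean()            # consensus value (arithmetic mean)
    between_lab_sd = lab_means.std(ddof=1)   # between-lab standard deviation

    # Scaled deviation of each lab from the consensus value
    # (assumed here to follow the ASTM E691-style h statistic).
    h = (lab_means - grand_mean) / between_lab_sd

    for lab, value in enumerate(h, start=1):
        print(f"Lab {lab}: h = {value:+.2f}")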
Two-sample plots
The majority of CTS interlaboratory tests are conducted using two samples, which we will refer to as sample A and sample B to keep the explanation simple. The lab means, grand means, and between-lab standard deviation for each sample are calculated, along with the critical value for h, which is defined later in this text. Other statistics are calculated for use by CTS analysts but may not be published in the reports. Lab means for which h exceeds the critical value are excluded from the statistics. Each lab is plotted as a single point: in the two-sample plot, the lab means for sample A form the x-axis, and the lab means for sample B form the y-axis. The concept of a two-sample plot is often attributed to W.J. Youden.
The two-sample plot is widely used because its simple graphic presentation quickly summarizes a great deal of information about the test and the results. The nature of the errors (systematic or otherwise), the robustness of the test method, and the influence, if any, of the samples on the labs' results can be judged by a visual assessment of the two-sample plot.
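The minimal sketch below, again using hypothetical lab means, shows how such a two-sample plot can be drawn; actual CTS reports add the statistics and control ellipse discussed in the following subsections.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical lab means for samples A and B (one pair per lab).
    a = np.array([10.2, 10.5, 9.8, 10.1, 10.9, 10.3, 9.9, 10.4])
    b = np.array([20.1, 20.6, 19.5, 20.0, 21.3, 20.2, 19.7, 20.5])

    fig, ax = plt.subplots()
    ax.scatter(a, b)                       # one point per lab
    ax.axvline(a.mean(), linestyle="--")   # grand mean, sample A
    ax.axhline(b.mean(), linestyle="--")   # grand mean, sample B
    ax.set_xlabel("Lab mean, sample A")
    ax.set_ylabel("Lab mean, sample B")
    ax.set_title("Two-sample (Youden) plot")
    plt.show()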
Multivariate Control Charts
After the results for the two samples are plotted, a 95% control ellipse is drawn such that, 95% of the time, a randomly selected lab falls inside the ellipse. Note that a 95% ellipse is not the same thing as a 95% confidence interval, which applies only to individual samples and not to two samples analyzed simultaneously. A 99% ellipse is calculated but not drawn on the plot. If a lab falls outside the 99% control ellipse, it is excluded, and new grand means and between-lab standard deviations are calculated. The ellipse is then regenerated, and the process repeats until no additional labs are excluded.
Many laboratory personnel are familiar with single-variable control charts, which track systematic variation of a property over time for a single product. Working on the same principle, a multivariate control chart traces the performance of two or more samples or variables against control limits over time; however, information about each individual sample is lost. The 95% control ellipse found in our reports is an exact mathematical transformation of a T2 multivariate control chart. The T2 statistic was developed by Harold Hotelling in the 1930s and 1940s; his articles on principal components and multivariate quality control techniques were seen by some as revolutionary.
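For readers who want to see the mechanics, the sketch below computes a T2 value for each lab from hypothetical (A, B) lab means and compares it with one common F-distribution control limit; the exact formulation and constants used in CTS reports may differ.

    import numpy as np
    from scipy import stats

    # Hypothetical (A, B) lab means, one row per lab.
    x = np.array([[10.2, 20.1], [10.5, 20.6], [9.8, 19.5], [10.1, 20.0],
                  [10.9, 21.3], [10.3, 20.2], [9.9, 19.7], [10.4, 20.5]])

    n, p = x.shape                          # number of labs, number of variables
    d = x - x.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(x, rowvar=False))

    # Hotelling's T2 for each lab relative to the group.
    t2 = np.einsum("ij,jk,ik->i", d, cov_inv, d)

    # One common 95% limit (F-distribution form for a future observation).
    ucl = p * (n + 1) * (n - 1) / (n * (n - p)) * stats.f.ppf(0.95, p, n - p)

    print("T2 per lab:", np.round(t2, 2))
    print("95% limit :", round(ucl, 2))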
Generation of Control Ellipses
The concept of principal components analysis and the math behind an ellipse date back to Karl Pearson in 1901, when he described the line of best fit (now called the orthogonal regression line). Jumping ahead a few years, we find that W.A. Shewhart recommended the ellipse as a quality control tool in the 1930s. Drawing from Hotelling's work and using small-sample statistics, the control ellipse was brought into the modern era by J. Edward Jackson. Jackson demonstrated that the size, shape, and major angle of an ellipse are all influenced by the sample variables under analysis. He also explained that Hotelling's T2 chart does not indicate which variable is out of control or the nature of the out-of-control situation; the control ellipse does, although the time function is lost. This latter issue is not a problem in the CTS program, since we are concerned with individual results from many labs rather than many results from one lab. As a side note, the term result is defined by the test method being followed and can mean a single direct observation or an arithmetic combination of test determinations.
Jackson points out that equivalent results (i.e., the same ellipse) are obtained by using either the T2 statistic or principal components analysis. CTS analysis programs use a combination of both approaches. CTS chose a 95% control ellipse because 3-sigma limits are generally considered too broad and insensitive to detect errors, and 2-sigma limits are often too narrow. Our goal is to reduce the possibility of rejecting a result and declaring it out of control when the result should actually be included in the statistics.
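As an illustration of the principal components route to the ellipse, the sketch below derives the semi-axis lengths and major-axis angle from an assumed 2x2 covariance matrix of lab means; both the covariance values and the T2 limit are hypothetical.

    import numpy as np

    # Assumed covariance matrix of the (A, B) lab means (illustrative values only).
    cov = np.array([[0.13, 0.20],
                    [0.20, 0.33]])
    t2_limit = 12.0                         # illustrative 95% T2 control limit

    # Principal components: eigenvectors give the ellipse axes, eigenvalues their scale.
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

    semi_axes = np.sqrt(eigvals * t2_limit)                    # minor, major semi-axes
    major = eigvecs[:, np.argmax(eigvals)]                     # major-axis direction
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180   # orientation in degrees

    print("semi-axes (minor, major):", np.round(semi_axes, 2))
    print("major-axis angle (deg):", round(angle, 1))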
Interpretation of Control Ellipse
After working through the complicated math to generate an ellipse, the task of interpreting the results remains. The use of ellipses by NIST was refined by Drs. John Mandel and Ted Lashof, who developed a complex series of hypotheses and equations for both additive and concurrent models of variation. Briefly stated, Mandel and Lashof demonstrated how a good, robust test of two properly selected materials will produce a narrow ellipse oriented at approximately 45°. The primary errors illustrated by such an ellipse will be systematic and are represented, roughly, by the length of the ellipse. A test in which the samples are inversely related (negatively correlated) will flip the ellipse to the 2nd and 4th quadrants. A non-robust test will generate an ellipse approximating a circle. A wide ellipse does not mean that a test is useless; rather, a different set of interpretative guidelines is needed. The use of T2/principal components analysis and ellipse generation always yields an answer and will indicate out-of-control conditions.
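The short sketch below shows, under assumed cutoffs, how the correlation between the two samples and the ratio of the ellipse axes can be read along the lines Mandel and Lashof describe; the threshold used to call an ellipse narrow is purely illustrative.

    import numpy as np

    def describe_ellipse(cov):
        """Rough reading of a two-sample control ellipse from a 2x2 covariance matrix."""
        r = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])   # correlation between A and B
        eigvals = np.linalg.eigvalsh(cov)                # ascending order
        elongation = np.sqrt(eigvals[1] / eigvals[0])    # major/minor semi-axis ratio
        quadrants = "1st/3rd quadrants" if r >= 0 else "2nd/4th quadrants"
        shape = ("narrow: systematic error dominates" if elongation > 2  # illustrative cutoff
                 else "near-circular: random error dominates")
        return quadrants, shape, round(float(r), 2)

    print(describe_ellipse(np.array([[0.13, 0.20], [0.20, 0.33]])))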
Additional Test Designs
Given the broad scope of CTS testing programs, it is not surprising that every interlaboratory test does not fall within the framework presented above. A brief summary of differences in test design is given below.
Containerboard
The majority of tests in this program not only serve as proficiency tests, but also function as a type of calibration check. Samples are distributed monthly and are to be tested each week. Each analysis is divided into three time spans: (1) the individual laboratory means for each week of the month; (2) each lab's statistical data for the month (mean, within-lab standard deviation, and standard deviation between weeks); and (3) each lab's cumulative data for up to 16 weeks. Finally, the statistics for all participants for each of the three time spans are presented, including average means, within-lab standard deviations, between-lab standard deviations, and average standard deviations between weeks.
The samples for this program, which have been randomized and placed in sealed packages for distribution, also serve as collaborative reference materials. Several weights of linerboard and corrugating medium are used. Each material is coded with a lot number and is used for approximately two years, making it possible for laboratories to check their measurements against the known cumulative average mean for a particular lot code.
Color and Color Difference
The Color and Paper programs include tests that evaluate a laboratory's ability to measure color on individual samples with small color differences. The data tables present each lab's mean values (for the three color coordinates) and difference values (between the nonmetameric and/or metameric sample pairs), as well as the grand means and between-lab standard deviations. In the graphs, color coordinate differences are plotted. In addition to the Color and Color Difference tests, the Color and Appearance program also conducts a Spectrophotometric analysis, evaluating measurements at 16 selected wavelengths. Extreme data are detected by the difference area method and comparative performance value calculations.
THE COMMON THREAD in CTS programs
No matter how the results are analyzed and presented, all CTS tests use consensus values. Participants do not usually test against a known value (sometimes called an absolute value, accurate value, or reference value). Rather, performance is judged against the group's results. Goodness or acceptability of the overall performance results is not determined by CTS, but rather by the participants and the marketplace. Within the Performance Analysis Reports issued to a lab, CTS provides a perfect tool for the busy lab manager who wants to know at a glance whether corrective actions are required. We call these Action Items, and they indicate potential problems.
It is important to recognize that a single data flag is not cause for alarm. CTS' approach, which emphasizes proficiency testing conducted at regular intervals, allows for a continuous assessment of results. Labs are encouraged to look for trends in test results. By searching for these trends, a lab engages in an on-going process to validate the quality of the work in the lab and, by extension, the products and services offered by the organization. Guides to Interpreting Performance Analysis Reports are posted on the CTS Customer Portal, ensuring that participants not only know what data are actionable, but also understand when data can have diagnostic and predictive uses.
We are truly the experts in interlaboratory testing, which has been our sole focus since our founding. We start with hands-on test design, aimed at helping laboratories meet their quality assurance goals. CTS prepares, packages, and ships all of our samples to our participants; the critical steps of sample handling are not farmed out to a third party. We employ sophisticated technology to allow customers to submit and edit their data. Our staff is experienced in analyzing your data using established models. We communicate performance and highlight problems in a clear and concise report format. If you have questions about any aspect of the program, contact CTS: the people who handle the program from beginning to end will be available to help you.
Your Test Program
Select any combination of tests in which you wish to participate, noting the frequency, samples, and other conditions specified for each test. Enrollments are best viewed as test subscriptions: a complete protocol encompassing samples, testing frequency, and analysis principles matched to most users' quality assurance objectives.
CTS assigns each subscriber a unique and confidential Lab Code. This Lab Code does not change unless you request a new code. Subscribers are required to identify a primary contact, who receives samples and controls access to information through the CTS online portal.
Test Samples
Test samples are shipped to you on a predetermined schedule throughout the year directly from CTS. Refer to a Program Schedule for more information. Your laboratory tests the samples according to established methods; CTS will only specify additional parameters when necessary. Testing instructions are available 24/7 on the Customer Portal, and data are entered directly through a customer's Portal account. Data must be received by the specified data due date in order to be included in a report. Adhering to a fixed data deadline is critical to keeping reports timely and meaningful.
Reports
Using our solid statistics, we provide labs with new insights into their data: what is actionable, what is diagnostic, what is predictive.
Individual Report
Within three weeks of each data due date, CTS will send you an individualized Performance Analysis Report. Everything within this report is specific to your lab, including the data, charts, and plots. This means your results can be communicated efficiently, usually on a single (two-sided) page per test, making them easy to review. A Guide to Interpreting Performance Analysis Reports is available on the CTS Customer Portal.
The primary purpose of our report is to help you evaluate your lab's performance by highlighting actionable data. This means that if your current results are flagged, an Action Item will indicate any potential problems. CTS also recognizes that your commitment to quality testing is an on-going activity; when appropriate, we include Trend Charts that provide lab results from previous testing rounds.
Summary Report
Data for all participants can be found in the comprehensive Summary Report, available only on the CTS website. Participants use the confidential WebCodes, which appear on their data sheets and Performance Analysis Reports, to locate their results in the Summary Report.
Quality Manager Report
For those companies that have multiple locations and several labs participating in the same industry Program, CTS can create a Quality Manager Report at no additional charge. We work with you to customize a report that allows the Quality Manager to quickly compare multiple laboratories' results in a single document. Please contact CTS for more information about our Quality Manager Reports.
eServices
Our eNotes e-mail notification system updates participants about important program events, such as enrollment deadlines, sample ship dates, and the posting of Summary Reports on the website. Our comprehensive Customer Portal enables a participant to manage almost all aspects of the testing experience and stores data and reports for up to five years.
Direct Access
You can call or e-mail experienced personnel at CTS who welcome your questions, feedback and comments.
These Subscription Policies apply to all tests on this CTS website, regardless of whether the tests were ordered from this website or by requesting a quote from CTS. Note that each CTS website has its own Subscription Information & Policies that pertain to the tests offered on that site.
Test Selection & Constraints
Select any combination of tests in which you wish to participate, noting the testing frequency and any other conditions specified for each test. CTS may limit participation in a test to a specified geographic region or impose special requirements. When selecting a test, the purchaser agrees to abide by any stated restrictions and requirements. If the purchaser is unable to comply, CTS will cancel the order for the test.
Prices
The prices listed beside each test are in U.S. dollars and include all samples indicated in the test description (e.g., if the test frequency is four times per year, the price covers all four shipments). The price charged is determined by the address to which CTS ships the test samples.
Enrollment Deadlines
Enrollment deadlines are strictly adhered to. Please pay special attention to the deadlines indicated for each test and contact CTS if you have questions.
Order Process
Participation is on a prepaid subscription basis. Subscriptions do not commence until funds are received and processed by CTS.
The order process starts with an order placed on a CTS website or a quote from CTS. A purchase order or payment must accompany the order or accepted quote. CTS will not continue the subscription order process without a payment or a purchase order.
Purchase Orders
CTS acknowledges that many organizations use purchase orders to obtain goods and services. If you choose to send a purchase order, please note that purchase orders are billed upon receipt; payment terms are Net Due. Test samples will not be shipped until invoices are paid in full. Customers receive a shipment hold notification via email if CTS has not received payment for the test samples.
Payment Types Accepted
CTS accepts checks and all major credit cards. Checks must be in U.S. dollars drawn on a U.S. bank, made out to Collaborative Testing Services, Inc. Companies that prefer to send payments via ACH should submit ACH authorization forms to CTS' accounting department.
We also accept electronic funds transfers from our international customers. International organizations that wish to send electronic funds transfers should contact CTS for bank information. To ensure the payment is received in its entirety, it is the responsibility of the customer to pay any bank or wire fees.
If CTS receives a payment for an order that is less than the total amount billed, whether due to wire fees, tax withholding, or any other reason, the order will not be considered complete and samples will not ship on their predetermined schedule until CTS receives the full amount due.
Sample Replacement Policy
Requests for replacement of missing or damaged samples must be received no later than two weeks before the data due date. This is the minimum time frame in which we can guarantee you will receive the samples and still be able to complete the test by the deadline.
We suggest all package contents be checked immediately upon receipt. All test samples are packaged independently of the outer wrapping. Therefore, opening the outer package to determine that all the contents are present and in good shape will not compromise the integrity of the test.
Data Submission Policy
CTS requires that results be returned by the date indicated on a data submission form. Late data will not be included for analysis. Adhering to a fixed deadline allows for results to be compared from within a defined period and helps keep reports timely, and therefore of better use to your lab. As group statistics are a major component of this type of proficiency testing, your interests are best served by ensuring your data are included for analysis.
Cancellation Policy
This cancellation policy applies to all CTS tests except Forensics tests. Please refer to www.ctsforensics.com for the cancellation policy that pertains to Forensics tests.
Samples for CTS interlaboratory tests are made to order and generally require considerable lead time. Therefore, in order to honor our commitment to our manufacturers, we cannot make last-minute changes.
Please review all test descriptions carefully. We encourage you to contact CTS with any questions to ensure that your laboratory subscribes to the tests that best fit your measurement capabilities.
Our cancellation policy is as follows:
International Orders
While all samples are labeled "No Commercial Value," some countries may assess duties, taxes, or both. Customs, brokerage, and import fees are not included in the shipping charges or test fees. It is the responsibility of the recipient to pay those charges. Similarly, the recipient is responsible for providing any additional import documents that may be required by customs agencies.
CTS will make every effort to ensure proper delivery of your samples. All samples are sent via selected couriers to assure traceability; we automatically use a tracking service to locate lost shipments. If necessary, the decision to send replacement samples or issue a credit will be made on a case-by-case basis; however, CTS will not issue credits or refunds for samples that are delayed or refused entry into your country or are unclaimed by the recipient.
Due to the automated nature of CTS' sample packaging and shipping process, international customers may not specify a courier or delivery method. Customers must accept the courier selected by CTS. CTS ships sample packages using a standard commercial invoice and cannot honor requests for custom, individual or additional shipping documents.
Feedback, Complaints and Appeals
All feedback, complaints, and appeals shall be submitted in writing through one of the methods provided on the Contact Us page. CTS will acknowledge all submissions.
Caveats
Participation in an interlaboratory testing program compares a laboratory's ability to conduct a particular test against other participating laboratories. The consensus results, however, are not a guarantee of performance.
Collusion and falsification of results are prohibited. If collusion or falsification of results is identified, CTS may exclude participant results and will notify the participant(s) of CTS' decision.
If a sample distributed by CTS for use in a test should prove to be defective, CTS' liability is limited to replacement of the samples, if time and supplies permit, or a refund of the test fee if the sample cannot be replaced.
While CTS will make every effort to adhere to the published program schedules, all dates provided are tentative and are subject to change.
Updated: 8 November 2024
CTS is strongly committed to protecting the anonymity of participants in our interlaboratory testing programs and ensuring the confidentiality of both results and identifying information. The intent of this Confidentiality Policy is to describe our policies and practices regarding the disclosure of any and all information that may be used to identify participants and participant results. This policy includes, but is not limited to, this Web site.
Participating organizations designate a Laboratory Contact who is recognized by CTS as the sole person authorized to receive confidential information about that organization's participation, such as identifying information and performance results. Communication between CTS and a participating organization is routinely routed through the Lab Contact. It is incumbent on the participating organization to designate a new Lab Contact if the existing contact is no longer authorized to act as the participant's agent.
CTS assigns Lab Codes and/or Participant Codes to maintain the anonymity of participating organizations. These unique codes are used in combination with the policies and procedures outlined below to communicate with a third party while maintaining the anonymity of participants.
CTS randomly generates temporary WebCodes to protect the confidentiality of results. WebCodes are also used in combination with policies and procedures to ensure the confidentiality of interlaboratory results. WebCodes are assigned to participants and are used to identify results in Web-based summary reports. The WebCode is unique to each test and, therefore, is not permanently associated with a participant.
CTS will not confirm the participation of any individual or organization in our programs to a third-party without prior authorization of the participant. This authorization may be written authorization given directly to CTS by the participant for the release of information to a specific party. The authorization may also be given to the third party by the participant. This indirect, third-party authorization must include identifying information given to the third-party such as a Lab Code or Participant Code and identifying information that provides sufficient validation that the third party has been authorized by the participant.
CTS will not release, transmit, comment or discuss in any manner the results of a participant with a third-party without prior authorization of the participant. This authorization may be written authorization given directly to CTS by the participant for the release of information to a specific party. The authorization may also be given to the third party by the participant. This indirect, third-party authorization must include identifying information such as a Lab Code or Participant Code, the specific results to be released and identifying information that provides sufficient validation that the third party has been authorized by the participant.
CTS may share anonymous test results or aggregated test information and reserves the right to use any or all data in articles or other literature so long as there is no identifying information disclosed.
Confidentiality and Transmissions over the Internet
The transmission of data or information (including communications by e-mail) over the Internet or other publicly accessible networks is not secure, and is subject to possible loss, interception or alteration while in transit. Accordingly, we do not assume any liability for any damage you may experience or costs you may incur as a result of any transmissions over the Internet or other publicly accessible networks, such as the submission of any of your information to us (including your personal information). While we shall take commercially reasonable efforts to safeguard the privacy of any personal information you provide us and treat such information in accordance with this Confidentiality Policy, in no event will the information you provide to us result in any liability to you on our part in the event that such information is inadvertently released by us or accessed by third parties without our consent. Notwithstanding the foregoing, to the extent that you are able to access the information of third parties through our Web site, we do not grant you any right or license whatsoever to use such information.
Moreover, in order to limit unauthorized access, we observe strict security measures when it comes to archiving and divulging any information provided by our customers. Because of that, we may ask you to confirm your identity once again before submitting personal information.
For all Reports: CTS strictly adheres to confidentiality policies regarding the release of participant data, and published data are intended for internal use by participants. Further guidance on additional use of reports depends on the report type and its intended use, as listed below.
Manipulation or misrepresentation of any CTS report is strictly prohibited.
Summary Reports: Summary Reports are a compilation of all results submitted by participants. Results in this report are presented as reported by the participants. The compiled results are not intended to be an overview of the quality of work performed in the profession and cannot be interpreted as such. Summary Comments are included for the benefit of the participants, and are not intended to reflect the general state of the art within the profession. Summary Reports are made available to the general public on CTS' commercial web site.
Individual Reports: Individual Reports are results, analyses and additional information that provide a full assessment of the laboratory's measurement performance. The contents of these reports are primarily intended for internal use by the participants for quality assurance, technical competency, accreditation and recognition activities. CTS does not expressly prohibit participant release of Individual Report information to third parties.
Quality Manager Reports: Quality Manager Reports are results, analyses, and additional information that provide an assessment of a previously identified group of related laboratories. These reports are intended for the use of individuals overseeing quality assurance for multiple sites participating in CTS tests. Members of the group expressly consent to the release of identifying information to the recipient of the Quality Manager Report. CTS does not expressly prohibit release of Quality Manager Report information to third parties.
Don't see the test you desire?
Make a PT suggestion by emailing info@cts-interlab.com!
Send inquiries, feedback, or discussions regarding results to one of the following:
Postal Mailing Address:
Collaborative Testing Services, Inc.
P.O. Box 650820
Sterling, VA 20165-0820
Shipping Address (for package/courier deliveries):
Collaborative Testing Services, Inc.
21331 Gentry Drive
Sterling, VA 20166
Tel 1-571-434-1925
Fax 1-571-434-1937
Please send e-mail inquiries to the appropriate address: