Clinical evaluation using non-clinical data
Article 61.10
For some devices it can be impossible to collect clinical data, for example because the device does not provide a direct clinical benefit that can be measured by a meaningful clinical endpoint, or because the device is an accessory claiming only a technical performance. This article gives insight into using article 61.10, which provides a methodology for the clinical evaluation of such devices. Please note that future guidance from the MDCG might change insights on this topic.
What does article 61.10 say?
Article 61.10 says:
Without prejudice to paragraph 4, where the demonstration of conformity with general safety and performance requirements based on clinical data is not deemed appropriate, adequate justification for any exception shall be given based on the results of the manufacturer's risk management taking the specifics of the interaction between the device and the human body, the clinical performance intended and the claims of the manufacturer in consideration. In such a case, the manufacturer shall duly substantiate in the technical documentation referred to in Annex II why it considers a demonstration of conformity with general safety and performance requirements that is based on the results of non-clinical testing methods alone, including performance evaluation, bench testing and pre-clinical evaluation, to be an adequate justification.
Several parts of this paragraph are important to consider and will be discussed further below.
When should article 61.10 be used?
What is the meaning of 'not appropriate'? The fact that 'an adequate justification' is mentioned, in which the risk management, the interaction between patient and device, and the intended purpose and claims specifically need to be scrutinized, indicates that this appropriateness should be based on this justification. In other words, article 61.10 applies when all clinical performance and safety questions can be answered with design verification testing data AND there are no fitting or meaningful clinical endpoints to measure in patients. Intended clinical performance and claims are mentioned, indicating that if the device has a clinical claim, it is difficult to justify using article 61.10. In principle, article 61.10 is only fitting when the device comes with an indirect clinical benefit. In this respect, it is useful to look at MDCG 2020-6 on sufficient clinical evidence for legacy devices that have been on the market for some time. Even though this MDCG refers to these types of devices, it gives some valuable insight into the concept of 'indirect clinical benefit'.
The justification that this route is appropriate, and that the collected non-clinical data is sufficient to cover safety and performance, needs to be documented in the clinical evaluation report, which is therefore always required.
So when considering this article, thought must be given as to whether collection of clinical data is possible and will give meaningful results. Only when it is not possible to collect clinical outcomes at all, because the device has no influence on them, or when the outcomes will not be meaningful for showing compliance with the GSPR, can this approach of using only non-clinical data be used.
An example where the outcomes would not give meaningful results is sterilizing equipment. Of course it would be possible to do a clinical investigation on sterilizers, including patients that undergo surgeries with instruments sterilized in the sterilizer under evaluation, but it would be very hard to draw conclusions from this: even if a patient developed an infection, it would not be certain that the sterilizer was the cause. So, in this particular case, sterilization validation is a much better method to demonstrate the performance.
Another example is accessories, where the accessory is intended to deliver a certain output or performance for a second medical device, but does not have a clinical claim itself and has no influence on the clinical outcome of the patient, assuming the correct performance is delivered (which can be validated by technical testing). Examples are data transfer devices, image processing devices, or devices that need to deliver a certain output, such as a certain air pressure or water temperature, for another medical device to function as intended. Because in this case there is a clear technical outcome, it is more meaningful, and sometimes the only possibility, to measure this output using technical validation.
A third example is diagnostic devices, where a parameter of a patient is measured that can also be simulated. It is often not possible to tell exactly which 'output' the patient delivers, and this output will usually not be in the extremes, so using actual patients might be less fitting. An example is ECG devices: the electrophysiological parameter is easy to simulate, and with a simulation it is known exactly what the input data is and how the device should respond, even at extreme values. However, this would not be applicable when the device also provides a direct diagnosis. In that case the use of patient data would be appropriate, because there is then a clear clinical claim. Also, it must be possible to simulate the parameter, and it must be known that this simulation is reliable. So, for a novel way of measuring a certain parameter, this approach would not be applicable.
Potential misunderstandings and cases when art. 61.10 cannot be used
As article 61.10 states 'Without prejudice to paragraph 4…', it cannot be used for class III devices and implantable devices. For the latter this is a clear-cut case: with every implantable, regardless of the intended clinical performance, there is always a very close interaction with the human body, so actual clinical data is certainly required. For other class III devices this can be difficult, especially in the case of, for example, a software accessory that is used to influence a class III device and would therefore also be class III itself, even though it does not have a clinical claim or a meaningful clinical endpoint of its own. In these cases, the best option is to conduct a clinical investigation for the combination of the accessory and the device the accessory is meant for.
Article 61.10 on non-clinical data should not be confused with the option not to do an actual clinical investigation because sufficient clinical data from equivalent devices is present. This option is covered in article 61.3 instead, which indicates that the clinical evaluation should always start with a review of clinical data of equivalent devices, if available. If sufficient evidence is collected, based on this data, to show compliance with the GSPR, no additional clinical investigation is required (unless it is an implantable or class III device, for which article 61.4 on clinical investigations must be considered). This is a different case from article 61.10, which relates to situations in which collection of clinical data (whether own or from equivalents) is considered not appropriate at all.
Article 61.10 should not be viewed as a loophole to cover for the lack of clinical data. If there are appropriate clinical endpoints that can be measured in patients and there is not sufficient clinical data of equivalent devices available, a clinical investigation is necessary. This also means that if there are similar (not necessarily equivalent) devices for which clinical data is available, it will be hard to justify using article 61.10, as the similar devices have already shown that it is possible to collect meaningful clinical endpoints.
Another point of confusion might be testing done to confirm equivalence, for example when certain design verification is done to prove equivalence of the current device with its predecessors (e.g. equivalent output). This falls within the normal clinical evaluation regime (art. 61.3) and is not an article 61.10 situation, as the clinical data of the equivalent devices is used to confirm compliance with the GSPR.
Last but not least, in some cases, especially for software, a 'clinical investigation' is done using retrospectively collected patient datasets, such as image datasets and related patient outcome data. This data is used as input for the software to assess whether the software provides the expected outcomes, which are compared with the actual (gold standard) outcomes in the datasets. Even though this is not actually a clinical investigation as defined by the MDR, it is also not fitting to use article 61.10 in this case, as these devices often have a direct clinical claim, and clinical data (the patient datasets and outcomes) is used to validate safety and performance. The legislation is not completely clear in this case, but more insight can be gained from MDCG 2020-1 on clinical evaluation for software.
Clinical evaluation report for article 61.10
As stated above, a clinical evaluation report is still required for these devices. Annex XIV, part A of the MDR gives requirements for the content of a clinical evaluation report, and this also largely applies to devices where the clinical evaluation is based on article 61.10. This includes the presence of a clinical evaluation plan. The plan should contain the justification for the approach to use article 61.10, based on the performance claims of the device, the intended purpose, the interaction between device and body and the outcomes of the risk management.
A literature review should still be done, not to collect clinical data on the device or an equivalent (because, as said above, if that were possible, one should not use article 61.10), but to determine the state of the art, alternative treatment options and similar devices, to determine acceptable limits and outputs, and to detect clinical hazards, side-effects and contra-indications.
Next, a summary of the technical validation data should be given (performance evaluation, bench testing, usability data and other pre-clinical information), with references to the test reports. Where applicable, (harmonized) standards should be used. Data from adverse event databases regarding prior versions of the device and similar devices should be included, and the search strategy and the databases used should be described.
Finally, an appraisal of the collected data should be present, with a clear indication of how each performance claim is substantiated by the technical data and how the safety of the medical device is ensured. The adequacy of the data to show compliance with all relevant general safety and performance requirements should be justified. A final conclusion with a benefit-risk assessment should be present.
Postmarket clinical follow-up
As discussed above, the collection of clinical endpoints was not deemed fitting for these specific devices; otherwise article 61.10 should not have been used. This immediately raises the question of how to proceed with post-market clinical follow-up (PMCF). It is important to realize here that the MDR differs from the MDD with regard to what exactly PMCF entails. Under the MDD this was understood as an actual clinical study. However, as can be seen in Annex XIV, part B of the MDR, this is extended to what are called 'general methods of PMCF'. General methods of PMCF encompass gathering of data on clinical experience, feedback from users on actual device use, screening of literature and other sources of clinical data.

When the article 61.10 route was used for a device, it is of course not possible to do a PMCF study (otherwise, it would have been possible, and necessary, to do this study premarket). However, the general methods of PMCF can, and should, be applied. For example, a literature review in the post-market phase might give insight into whether the device remains state of the art, and user feedback from clinical practice gives insight into additional risks that might occur, or into risks whose frequency is higher (or lower) than estimated. Therefore, a PMCF plan describing all the general PMCF methods that are planned to be implemented must be drawn up, besides (or incorporated as a separate plan in) the post-market surveillance plan.