Traceability in Device Design, Development, and Distribution

Angela Mallery – NAMSA Blog

The path through medical device design, testing, application, and maintenance needs to be traceable. Traceability analyses aid in understanding device design and whether requirements are being met. According to FDA, “A Traceability Analysis links together your product design requirements, design specifications, and testing requirements. It also provides a means of tying together identified hazards with the implementation and testing of the mitigations.”
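To make that linkage concrete, here is a minimal sketch, assuming a hypothetical set of requirement, specification, verification-test, and hazard identifiers (none of them from the FDA guidance), of a traceability matrix represented as a simple data structure. In practice this mapping usually lives in a requirements-management tool or spreadsheet rather than in code, but even a small model shows how a traceability review exposes coverage gaps.

    # Illustrative sketch only: hypothetical IDs linked the way a traceability analysis
    # ties requirements to specifications, verification tests, and hazard mitigations.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TraceRecord:
        requirement_id: str                                          # design input requirement
        specification_ids: List[str] = field(default_factory=list)   # design outputs / specs
        test_ids: List[str] = field(default_factory=list)            # verification tests
        hazard_ids: List[str] = field(default_factory=list)          # hazards and their mitigations

    matrix = [
        TraceRecord("REQ-001", ["SPEC-010"], ["VER-101"], ["HAZ-003"]),
        TraceRecord("REQ-002", ["SPEC-011", "SPEC-012"], [], []),    # gap: no test linked yet
    ]

    # A traceability review flags requirements with no verification coverage.
    untested = [r.requirement_id for r in matrix if not r.test_ids]
    print("Requirements without linked verification tests:", untested)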

Once a manufacturer decides on a product, it needs to develop a plan: the design inputs, which capture the device’s intended use, function, and performance requirements. Without complete and thorough design inputs, a product can run into performance issues. After the design inputs are created, an assessment is conducted to identify the conditions under which the device will operate. Testing is then conducted against the pre-specified design input requirements, yielding design outputs. Upon completion of design verification, these outputs become the inputs for the next step, design validation, as shown in Figure 1. Design reviews are conducted throughout the development process to allow for incremental changes and to confirm that the device is ready to move forward with development.

 

Figure 1 from FDA Design Control Guidance for Medical Device Manufacturers. Available at: http://www.fda.gov/RegulatoryInformation/Guidances/ucm070627.htm

 

It is important to begin the design control process when a product is first being developed in order to ensure that design outputs are traceable to design inputs. Traceability is intended to control the design process and to ensure that the device meets user needs, intended uses, and the specified requirements. This facilitates risk mitigation and device testing and helps demonstrate device safety, efficacy, and compliance. A clear and effective approach to traceability is still being debated; the goal is to have a rationale and data supporting the device through all phases of development and beyond. The design inputs and outputs should be documented to confirm that the outputs are traceable to the inputs, and a design history file should be maintained to demonstrate adherence to federal requirements (21 CFR 820).

Traceability in medical device design ensures that a protocol is followed and that there is appropriate documentation to support device efficacy and safety. It is important to maintain a traceability initiative throughout device design, testing, and distribution and to involve a cross-functional team to carry it out.

 

[READ MORE]

Design Control Guidance For Medical Device Manufacturers

FDA March 11, 1997 

FOREWORD

To ensure that good quality assurance practices are used for the design of medical devices and that they are consistent with quality system requirements worldwide, the Food and Drug Administration revised the Current Good Manufacturing Practice (CGMP) requirements by incorporating them into the Quality System Regulation, 21 CFR Part 820. An important component of the revision is the addition of design controls.

Because design controls must apply to a wide variety of devices, the regulation does not prescribe the practices that must be used. Instead, it establishes a framework that manufacturers must use when developing and implementing design controls. The framework provides manufacturers with the flexibility needed to develop design controls that both comply with the regulation and are most appropriate for their own design and development processes.

This guidance is intended to assist manufacturers in understanding the intent of the regulation. Design controls are based upon quality assurance and engineering principles. This guidance complements the regulation by describing its intent from a technical perspective using practical terms and examples.

Draft guidance was made publicly available in March, 1996. We appreciate the many comments, suggestions for improvement, and encouragement we received from industry, interested parties, and the Global Harmonization Task Force (GHTF) Study Group 3. The comments were systematically reviewed, and revisions made in response to those comments and suggestions are incorporated in this version. As experience is gained with the guidance, FDA will consider the need for additional revisions within the next six to eighteen months.

The Center publishes the results of its work in scientific journals and in its own technical reports. Through these reports, CDRH also provides assistance to industry and to the medical and healthcare professional communities in complying with the laws and regulations mandated by Congress. These reports are sold by the Government Printing Office (GPO) and by the National Technical Information Service (NTIS). Many reports, including this guidance document, are also available via Internet on the World Wide Web at http://www.fda.gov.

We welcome your comments and suggestions for future revisions.

/signed/

D. Bruce Burlington, M.D.
Director
Center for Devices and Radiological Health

PREFACE

Effective implementation of design controls requires that the regulation and its intent be well understood. The Office of Compliance within CDRH is using several methods to assist manufacturers in developing this understanding. Methods include the use of presentations, teleconferences, practice audits, and written guidance.

Those persons in medical device companies charged with responsibility for developing, implementing, or applying design controls come from a wide variety of technical and non-technical backgrounds–engineering, business administration, life sciences, computer science, and the arts. Therefore, it is important that a tool be provided that conveys the intent of the regulation using practical terminology and examples. That is the purpose of this guidance.

The response of medical device manufacturers and other interested parties to the March, 1996 draft version of this guidance has significantly influenced this latest version. Most comments centered on the complaint that the guidance was too prescriptive. Therefore, it has been rewritten to be more pragmatic, focusing on principles rather than specific practices.

It is noteworthy that many comments offered suggestions for improving the guidance, and that the authors of the comments often acknowledged the value of design controls and the potential benefit of good guidance to the medical device industry, the public, and the FDA. Some comments even included examples of past experiences with the implementation of controls.

Finally, there are several people within CDRH who deserve recognition for their contributions to the development of this guidance. Al Taylor and Bill Midgette of the Office of Science and Technology led the development effort and served as co-chairs of the CDRH Design Control Guidance Team that reviewed the comments received last spring. Team members included Ashley Boulware, Bob Cangelosi, Andrew Lowrey, Deborah Lumbardo, Jack McCracken, Greg O’Connell, and Walter Scott. As the lead person within CDRH with responsibility for implementing the Quality System Regulation, Kim Trautman reviewed the guidance and coordinated its development with the many other concurrent and related activities. Their contributions are gratefully acknowledged.

FDA would also like to acknowledge the significant contributions made by the Global Harmonization Task Force (GHTF) Study Group 3. The Study Group reviewed and revised this guidance at multiple stages during its development. It is hoped that this cooperative effort will lead to this guidance being accepted as an internationally recognized guidance document through the GHTF later this year.

/signed/

Lillian J. Gill
Director
Office of Compliance

ACKNOWLEDGEMENT

FDA wishes to acknowledge the contributions of the Global Harmonization Task Force (GHTF) Study Group 3 to the development of this guidance. As has been stated in the past, FDA is firmly committed to the international harmonization of standards and regulations governing medical devices. The GHTF was formed in 1992 to further this effort. The GHTF includes representatives of the Canadian Ministry of Health and Welfare; the Japanese Ministry of Health and Welfare; FDA; industry members from the European Union, Australia, Canada, Japan, and the United States; and a few delegates from observing countries.

Among other efforts, the GHTF Study Group 3 started developing guidance on the application of design controls to medical devices in the spring of 1995. Study Group 3 has recognized FDA’s need to publish timely guidance on this topic in conjunction with promulgation of its new Quality System Regulation. The Study Group has therefore devoted considerable time and effort to combine its draft document with the FDA’s efforts as well as to review and comment on FDA’s subsequent revisions. FDA, for its part, delayed final release of its guidance pending final review by the Study Group. As a result, it is hoped that this document, with some minor editorial revisions to make the guidance global to several regulatory schemes, will be recognized through the GHTF as an international guidance document.

 

[READ MORE]

UX Foes, Real and Imaginary

Source: UX Foes, Real and Imaginary

There’s a certain ethos in the UX community that goes like this: “You should test users in a focused way on the exact elements you want them to interact with.  And through this focused testing you will receive great feedback.  Complicated high fidelity prototypes make this difficult.”

This is the imaginary UX foe.
I’ve seen this both explicitly, in the form of blog posts and articles, and implicitly from being in the UX community for the past 3 years. Here is an example, via Digital Telepathy:

“With so many things to do, it may be hard to focus. Clients and test subjects wander from tree to tree, getting lost in the beautiful forest you’ve created, making it hard to get focused feedback.”

This is in reference to nuanced, complicated prototypes that perfectly mimic how the final site will look and feel. I have news for you: if users are getting lost on a full-fidelity version of your website and can’t complete the tasks you give them, your site has problems. And dumbing down the testing is not the solution… [continue on UX Foes, Real and Imaginary]

Providing assistive technology in Italy: the perceived delivery process quality as affecting abandonment

Purpose: The study brings together three aspects rarely observed at once in assistive technology (AT) surveys: (i) the assessment of user interaction/satisfaction with AT and service delivery, (ii) the motivational analysis of AT abandonment, and (iii) the management/design evaluation of AT delivery services.

Methods: 15 health professionals and 4 AT experts were involved in modelling and assessing four AT Local Health Delivery Services (Centres) in Italy through a SWOT analysis and a Cognitive Walkthrough. In addition, 558 users of the same Centres were interviewed in a telephone survey to rate their satisfaction and AT use.

Results: The overall AT abandonment was equal to 19.09%. Different Centres’ management strategies resulted in different percentages of AT disuse, ranging from 12.61% to 24.26%. A significant difference between the declared abandonment and the Centres’ management strategies (p = 0.012) was identified. A strong effect on abandonment was also found due to professionals’ procedures (p = 0.005) and follow-up systems (p = 0.002).

Conclusions: The user experience of an AT is affected not only by the quality of the interaction with the AT, but also by the perceived quality of the Centres’ support and follow-up.

Implications for Rehabilitation

AT abandonment surveys provide useful information for modelling AT assessment and delivery process.
SWOT and Cognitive Walkthrough analyses proved to be suitable methods for exploring limits and advantages in AT service delivery systems.
The study confirms the relevance of person centredness for a successful AT assessment and delivery process.

See the article

Human Factors methods for IVD and POC devices at DEC London – Workshop at the NIHR DEC London open day

It is quite rare for industrial practitioners, researchers, and commissioners to participate in a workshop all together. Professor Peter Buckle and I were lucky because we had the opportunity to host this event at the Open Day of the NIHR Diagnostic Evidence Cooperative of London.
We asked participants to work in groups to map all the possible stakeholders of a Point of Care device. The outcomes were impressive! When you bring together different perspectives and guide them with a Human Factors framework, you can only enrich people’s perspectives. – Download Workshop Presentation


The Fox Guarding the Usability Lab

Journal of Usability Studies – Bill Albert, May 2015

The idiom of “don’t let the fox guard the hen house” warns us about the potential danger of giving someone responsibility for overseeing something that he or she shouldn’t be involved with, particularly when there is an inherent interest in the outcome. Unfortunately, that idiom is all too fitting in the world of user experience (UX) research and practice, especially with respect to usability testing. I have seen all too often designers who evaluate their own design, or the design agency that is responsible for evaluating their own work. This is an inherent conflict of interest that results in poor quality research and, perhaps more importantly, undermines the credibility of our profession. The good news is there is an easy fix.

Let us start by stating that I do not mean to offend anyone, especially the very talented UX researchers and designers I work with every day or the design agencies that produce world class products across every industry. I have tremendous respect for their skill and professionalism. I know that they want to produce and deliver great user experiences to their clients. But even with the best of intentions, wrong decisions about how to test the usability of those products and services are sometimes made.

Even though there has already been a fair amount written about the risk of designers evaluating their own designs, this continues to be a problem, particularly with the lean UX approach. The general consensus is that this approach is not a good idea because designers have great difficulty maintaining objectivity about their own work. But what are the risks of design agencies evaluating their own design work?

In this editorial I focus on the inherent conflict of interest that design agencies have when they are responsible for evaluating their own design work, what can be done to mitigate this problem, and the implications for the UX community. I define a design agency as a consulting firm that is hired to design (from a visual and interactive perspective) digital products…

[Read the full article]

When should you go for a Usability Study versus A/B or MVT Testing?

[Snapshots: industry headlines and banner ads on the growth of online business]

The above snapshots are taken from leading news reports and industry research journals that stress the growing importance of online businesses in the coming years. This article is not a discussion of those growth trends, but rather lays the foundation for the critical attention websites need in order to survive this extremely competitive era. Users are short on patience, given how spoilt they are for available alternatives.

Websites need to be designed carefully, with scientifically driven, well-researched approaches. This brings us to the two dominant methodologies:

  1. USABILITY: Use + Ability – the ease with which a user can use a human-made object

  2. TESTING: A way to test changes to your page against the current design and determine which ones produce positive results (see the sketch after this list)
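As a rough, hypothetical illustration of the second item above (it is not taken from the original post), the sketch below compares the conversion rate of a changed page against the current design with a standard two-proportion z-test; the visitor and conversion counts are made up.

    # Illustrative sketch only: hypothetical traffic numbers for an A/B comparison.
    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """z statistic and two-sided p-value for conversion rates of A vs. B."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return z, p_value

    # Current design (A) vs. changed design (B), both hypothetical.
    z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
    print(f"z = {z:.2f}, p = {p:.4f}")   # a small p-value suggests the change had a real effect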

Every website has a target audience; they have specific ‘personas’ i.e. demographics…

View original post 993 more words