NATHAN CORTEZ
SINCE ITS inception, the U.S. Food and Drug Administration (FDA) has had to confront novel products from early pharmaceuticals to sophisticated biotechnology drugs. But perhaps nothing has stretched the agency’s comfort zone more than computerized medical devices. When Congress granted FDA gatekeeping authority over devices in 1976, it did not anticipate software. By then, the germination of computerized medicine was well under way. In 1967, for example, the New York Times reported that a computer in Washington, D.C., received electrocardiogram signals from France and returned its analysis by satellite within seconds.
Of course, the intervening decades would bring profound advances in computing. But FDA has always struggled with how to regulate computerized devices. From primitive diagnostic software and radiation machines to modern software that can turn smartphones into cardiac-event recorders, an old agency continues to apply an old regulatory framework to novel products.
This chapter reviews over three decades of federal documents, finding striking parallels between yesterday’s concerns and today’s. The same questions raised in the 1970s and 1980s remain today, largely untouched by Congress and FDA. But it is well past time to address them. The variety, complexity, and ubiquity of medical software grows relentlessly. A study by Fu (2011) found that beginning in 2006, more than half of all medical devices on the U.S. market relied on software in some way. Indeed, the market is now being saturated with software used on phones and other mobile devices, known as “mobile health” or “mHealth” (Cortez 2014a). As the use of software in medicine grows almost exponentially, it is time to reconsider FDA’s old framework.
I. COMPUTERIZED MEDICINE
FDA jurisdiction over certain software derives from the broad definition of “device” in the Food, Drug, and Cosmetic Act (FDCA). Section 201(h) defines a “device” as any “instrument, apparatus, implement, machine, contrivance” that is intended to diagnose, cure, mitigate, treat, or prevent disease or other conditions, or that is “intended to affect the structure or any function of the body,” including any component, part, or accessory. Broad as this definition is, many computer products are not FDA regulated. But this line is not always clear.
A. Beginnings
With a foundation laid in the 1940s and 1950s, computerized medicine emerged in the 1960s with university research creating promising-sounding programs like HELP, AIM, and PROMIS. “At the beginning of the 1960s,” a doctor testified to Congress, “the computer was essentially unknown in the biomedical world” (Computers in Health Care: Hearing Before the Subcomm. on Domestic and International Scientific Planning, Analysis, and Cooperation of the H. Comm. on Science and Technology, 95th Cong. (1978)). Early support from the National Institutes of Health (NIH) and its National Library of Medicine was instrumental (November 2012). A 1963 NIH program funded computers at forty-five universities and six hospitals, which pioneered the first computers for health care diagnosis and delivery, including clinical decision support (Information Technologies in the Health Care System: Hearing Before the Subcomm. on Investigations and Oversight of the H. Comm. on Science and Technology, 99th Cong. (1986)).
Later, NIH funding helped establish a network called SUMEX-AIM (Stanford University Medical Experimental Computer—Artificial Intelligence in Medicine), which included MYCIN, Stanford software that outperformed experts in identifying therapies for bacterial infections, and CASNET, Rutgers University software that helped diagnose glaucoma. Private sector research complemented these programs. For example, in the late 1970s, a drug company created software for diagnosing pediatric diseases, allowing physicians to “dial up to the remote computer, enter three or four findings to get back a list of possible causes, including the hard to remember congenital syndromes” (1986 Hearing).
Of course, many of these programs were not medical “devices” and were not even on FDA’s radar. But they helped lay the foundation for today’s sophisticated devices.
B. Aspirations
From the beginning, computers promised to reduce health spending. The first congressional hearings in 1978 focused on “whether and how computer science might help to stem the uncontrolled growth of medical costs.” At hearings in 1981, a congressional representative lamented that “we are missing a great bet in the involvement of the computer” in controlling “escalating health care costs” (Health Information Systems: Hearing Before the Subcomm. on Science, Research, and Technology and Subcomm. on Natural Resources, Agricultural Research, and Environment of the H. Comm. on Science and Technology, 97th Cong. (1981)). This idea persists today. Almost every recent congressional hearing or agency announcement continues to broadcast the idea that computers will cut health spending.
Another longstanding notion is that computers can help reduce medical errors and improve the quality of care. Even early computers had enough memory and processing power to capture data and compute probabilities better than humans. The near-exponential growth of memory and processing power has obvious medical applications, like analyzing large quantities of clinical data or distinguishing standard diagnoses from outliers. The emergence of artificial intelligence (AI) could refine medical algorithms, and communications networks could better share this data. Software has also been embedded in medical devices to better calibrate diagnostic and therapeutic equipment.
A third longtime aspiration of computerized medicine is to expand access to care. A spokesperson for the U.S. Public Health Service said that the 1967 electrocardiogram in France showed how “long distance electrocardiogram analysis could be made available almost anywhere on earth” (New York Times 1967). Early hearings contemplated using computers to expand access to care in urban and rural communities. These ambitions surged recently with medical software on mobile devices (Cortez 2014a), which is already being deployed to expand access to care in developing economies (Kahn et al. 2010).
Science fiction has long embraced these ambitions. The idea that computers can process the complexities of the human body better than even expert physicians may trace back to the television show Star Trek, which featured a handheld “Tricorder” device that could instantaneously diagnose crew members. The 1986 congressional hearings included a Star Trek reference, and in 2011 the X Prize Foundation announced a $10 million competition for the first group to create a real-life Tricorder (Cortez 2014a).
But these ambitions are also dismissed as mere science fiction. A doctor at the 1986 congressional hearing testified that “the notion that physicians will turn to a computer’s advice exactly as they would turn to a human consultant, I think, is science fiction.”
C. Hazards
There are well-documented hazards in relying on software to diagnose and treat patients (Leveson 1995, 2011; Fu 2011). Unlike small flaws in physical devices, small errors in software can be disastrous. But software code is difficult, if not impossible, to test completely (Fu 2011). User interfaces can be awkward. And software is frequently operated in less than ideal environments, when the user is fatigued, distracted, or frustrated (Cortez 2014a). Repeated error alerts can numb users, who learn to bypass or ignore them. Users are also susceptible to “automation bias,” the belief that computers are infallible (Citron 2008; Fu 2011). As the New York Post speculated in 1959, “The day will come sooner than you think when improper diagnosis and treatment of human ills will be virtually impossible” (November 2012). Finally, hospitals and physicians can be quick to adopt unproven new technologies without investing the resources to use them properly.
FDA is well aware of these hazards. In 1996, FDA detailed a litany of software device errors, including problems with cardiac devices, radiation therapy machines, and infusion pumps, observing that “software-related errors can be subtle” and that “seemingly small design flaws can result in serious problems” (FDA 1996). The agency is aware of adverse events caused by human–computer interaction, including cumbersome controls, counterintuitive displays, unsignaled default settings, and inadequate user manuals.
Recalls involving software have increased steadily over the last 30 years (Fu 2011). In fact, FDA’s first public statement on software followed a series of deaths caused by computer software. Between 1985 and 1987, the Therac-25 (short for “therapeutic radiation computer”) caused multiple deaths in the United States and Canada. The Therac-25 was the first radiation therapy machine controlled primarily by software, but it was plagued with bugs, a confusing interface, incomplete manuals, and repeated malfunctions (Leveson 1995). Decades later, the New York Times documented frighteningly similar problems caused by the latest generation of radiation therapy machines (Bogdanich 2010).
These bookends demonstrate that old problems persist. Unfortunately, FDA scrutiny has not been commensurate with how ubiquitous and critical software devices have become.
II. CONGRESS, AN INTERESTED BYSTANDER
Congress has long had an interest in computerized medicine. But this interest has generated very little legislation, particularly laws that would update FDA’s old statutory framework. Early attention focused mostly on the benefits of computerized medicine. In 1978, the House Committee on Science and Technology held hearings on Computers in Health Care, with a sequel in 1981 on Health Information Systems. FDA officials did not testify at either hearing. An official from the U.S. Department of Health and Human Services testified at the 1981 hearing but mentioned FDA only in passing. In 1986, the same House committee held hearings on Information Technologies in the Health Care System, this time asking FDA for a draft of its early, unpublished software policy.
After years of sporadic attention, Congress has shown a renewed interest in FDA regulation of software, prompted by massive federal investments in health information technology and the proliferation of mobile devices. In 2013, the House held several hearings to consider FDA regulation of mobile software (Health Information Technologies: Hearing Before the Subcomm. on Health and Subcomm. on Communications and Technology of the H. Comm. on Energy & Commerce, 113th Cong. (2013); Examining Federal Regulation of Mobile Medical Apps and Other Health Software: Hearing Before the Subcomm. on Health of the H. Comm. on Energy & Commerce, 113th Cong. (2013)).
Nevertheless, this attention has led to few bills and even fewer laws. In 2012, Congress did pass a law, but only to ask for recommendations on how to regulate newer health information technologies (Food and Drug Administration Safety and Innovation Act of 2012 (FDASIA), Pub. L. No. 112-144, § 618), which were published recently (FDA 2014). Although such bills seem to be gaining momentum, they generally try to taper FDA oversight, not strengthen it (Cortez et al. 2014). For example, bills introduced in the 113th Congress would limit FDA jurisdiction over clinical decision support software (The Sensible Oversight for Technology which Advances Regulatory Efficiency (SOFTWARE) Act, H.R. 3303, 113th Cong. (2013); The Preventing Regulatory Overreach To Enhance Care Technology (PROTECT) Act, S. 2007, 113th Cong. (2014); The Medical Electronic Data Technology Enhancement for Consumers’ Health (MEDTECH) Act, S. 2977, 113th Cong. (2014)). Congress has not considered bills that would update FDA’s statutory framework for devices, enacted in 1976 when few could imagine today’s computerized products. (Note, however, that a bill not yet introduced as of February 2015, the 21st Century Cures Act, would represent more ambitious reforms to FDA regulation of software.)
III. FDA, THE RELUCTANT REGULATOR
Although FDA has spent almost four decades regulating computerized devices, its oversight remains piecemeal and sporadic, relying heavily on nonbinding guidance. FDA has never written comprehensive rules tailored to software. In the 1990s, FDA hinted that it was contemplating such a rule but never proposed one. In 2011, the agency explained that it never created an “overarching software policy” because “the use of computer and software products…grew exponentially and the types of products diversified and grew more complex” (FDA 2011a).
A. Early Consideration
FDA’s experience with computerized devices dates back to the 1970s, when it “completed premarket approval (PMA) reviews of computer-related products such as cardiac pacemaker programmers, patient monitoring equipment, and magnetic resonance imaging machines” (1986 Hearing). In the early 1980s, FDA began studying software more systematically. In 1981, it created a Task Force on Computers and Software as Medical Devices, which wrote an unpublished report. In 1984, a Program Management Committee on Software and Computerized Devices issued another nonpublic report.
By 1985, when the first reports emerged of patient deaths from the Therac-25, the need for an FDA software policy became apparent. In 1986, FDA made its first public statement, before the House Committee on Science and Technology. In 1987, FDA announced its first software policy in the Federal Register (Draft Policy for the Regulation of Computer Products, 52 Fed. Reg. 36,104 (Sep. 25, 1987)), beginning a quarter century of addressing software by guidance. But the 1987 draft was not particularly ambitious, simply explaining the scope of FDA jurisdiction and existing requirements that might apply. In 1989, FDA published an updated Draft Policy for the Regulation of Computer Products, which it withdrew 16 years later (70 Fed. Reg. 824, 890 (Jan. 5, 2005)).
B. Organization
Responsibility for software resides with the FDA Center for Devices and Radiological Health (CDRH), which has pockets of software expertise. For example, the Office of Science and Engineering Laboratories includes a Division of Electrical and Software Engineering, a Division of Physics, and a Division of Imaging and Applied Mathematics, each of which covers software devices. These divisions house a Laboratory of Software Engineering and a Laboratory of Medical Electronics, among others. These units support premarket product reviews, policy development, and other activities.
Yet, as important as these activities are, FDA’s software expertise is still not commensurate with the volume of software devices on the market. Fu (2011) observed that “seldom does an FDA inspector assigned to review a 510(k) application have experience in software engineering even though the majority of medical devices today rely on software.” Fu also notes that “software experts are notably underrepresented” in FDA’s fellowship programs that seek to cultivate technical expertise. Unfortunately, recent proposals in Congress would not dedicate substantially more in-house resources or expertise to software oversight.
C. Regulation
Although dozens of FDA rules refer to software, very few establish broadly applicable requirements distinct from physical devices. Most software references appear in 21 CFR Parts 862–892, which classify more than 1,700 types of devices. Software also appears frequently in FDA rules for radiology products (21 CFR Parts 1000–1050). Beyond this, the only broad rules tailored to software devices appear in the Quality System Regulation (QSR), which specifies that software is subject to manufacturing design controls, must be validated, and must document its specifications (21 CFR Part 820).
Of course, many broad FDA rules apply to software devices because they are “devices.” For example, like physical devices, software devices can be classified as Class I (low risk), Class II (moderate risk), or Class III (high risk) (21 USC §§ 360c, 360e), subject to the rules that apply to each class. But by and large, FDA has not written broadly applicable, legally binding rules tailored to software, relying instead on case-by-case adjudication and nonbinding guidance.
D. Adjudication
Since the 1970s, FDA has evaluated software devices case by case in both the premarket and postmarket stages. In the premarket stage, FDA can approve novel or high-risk devices through a premarket approval (PMA) application, which requires clinical data showing that the device is safe and effective (21 USC §§ 360c, 360e; 21 CFR Part 814). But more commonly, FDA clears devices through its 510(k) notification process, which requires a showing that the device is “substantially equivalent” to a device already on the market (21 USC § 360(k)). FDA then places devices into one of roughly 1,700 product categories (21 CFR Parts 862–892). For example, 21 CFR § 876.1300 describes an “ingestible telemetric gastrointestinal capsule imaging system,” equipped with a camera, light, transmitter, and battery to take pictures of the small bowel and transmit them to a receiver. Sometimes, FDA will clear products without substantial equivalence through a de novo 510(k). For example, in 2014, FDA cleared the PillCam, an ingestible capsule that videotapes the small bowel and colon before the capsule is excreted. FDA thus created a new classification for ingestible capsules that record video (21 CFR § 876.1330) as opposed to taking pictures (§ 876.1300).
FDA has been criticized for clearing the vast majority of devices through its 510(k) process, sometimes with very little scrutiny (IOM 2011). Such critiques often highlight software devices as a particular concern. Yet, FDA continues to clear computerized devices as being substantially equivalent to noncomputerized predicates (Fu 2011; Cortez 2014a). Other software products may escape FDA scrutiny altogether if they qualify as general purpose articles, if they are designed and used by individual physicians or facilities, or if they are made “solely for use in research, teaching, or analysis” (21 CFR § 807.65).
FDA also regulates software devices in the postmarket stage, for example, by regulating manufacturing, adverse event reporting, and recalls. Some of FDA’s early software activities included postmarketing enforcement (1986 Hearing). Today, as might be suspected, software failures are responsible for a growing proportion of product recalls (Fu 2011).
Though there are certainly benefits to careful product-by-product consideration, this approach can lead to regulation that is piecemeal and inconsistent. Software does not stand on terra firma with FDA, lacking broader rules that bind prospectively.
E. Guidance
The most prominent feature of FDA’s approach to software is its heavy reliance on nonbinding guidance. My review found twenty-six separate guidance-type documents for software devices, including fifteen original documents and eleven updated versions. The guidances cover a range of topics, including premarket submissions, manufacturing controls, and cybersecurity. There are perhaps dozens more that specify “special controls” for Class II devices. For example, the ingestible camera above is governed by a Class II Special Controls Guidance Document: Ingestible Telemetric Gastrointestinal Capsule Imaging Systems. FDA guidance often incorporates even more guidance from standard-setting bodies like the International Electrotechnical Commission (IEC) and the International Organization for Standardization (ISO) (Cortez 2014a). Thus, FDA’s framework for software relies on a loose scaffolding of de facto but not de jure rules.
IV. ENDURING CONCERNS
This review demonstrates that FDA still lacks a tailored regulatory framework for software devices. Despite profound advances in computing, there are striking parallels between concerns raised in the 1970s and 1980s and in recent years.
A. Innovation vs. Regulation?
Early hearings expressed concern that FDA regulation would “delay,” “stagnate,” or “stifle” software innovation (1986 Hearing). A witness in 1986 said that regulating medical software at that point would be like creating the Federal Aviation Administration after the Wright Brothers’ inaugural flight. Vincent Brannigan, perhaps the leading legal expert on medical software at the time, emphasized that “the FDA proposal has the potential to simply wipe the whole industry out” (1986 Hearing).
Participants at recent hearings repeat this refrain (FDA 2011b; 2013 Hearings). Rare are the voices that argue that FDA regulation is not necessarily incompatible with technological advancement. In fact, every recent bill seems more concerned that FDA promote software innovation rather than regulate it (FDASIA 2012; Health Care Innovation and Marketplace Technologies Act, H.R. 2363, 113th Cong. (2013); SOFTWARE Act 2013; PROTECT Act 2014; MEDTECH Act 2014).
This sentiment is well-meaning but blinkered. Software devices are in great need of more predictable, tailored oversight. The quantity, complexity, and variety of software devices are accelerating, not abating. Consider just one type of software: mobile health applications. There are more than 97,000 different mobile health applications on the market (research2guidance 2013). Hundreds, if not thousands, fall under FDA jurisdiction. Yet, despite studies showing that many do not work as claimed or lack scientific support, FDA relies on guidance and case-by-case scrutiny (Cortez 2014a). The market risks being flooded by apps that are ineffective or unsafe, which can undermine consumer confidence (Carpenter 2009).
B. Paralyzed by Change?
A second recurring notion is that software evolves too quickly for FDA. Agency personnel testified in 1986 that the “burgeoning growth of computers in medicine and its more pivotal roles poses new challenges to FDA” (1986 Hearing), a concern that has been repeated (FDA 1992). In fact, FDA explained that it did not publish “an overarching software policy” in part because “the use of computer and software products as medical devices grew exponentially and the types of products diversified and grew more complex” (FDA 2011a).
The pace of software innovation may partly explain why FDA relies heavily on guidance rather than rulemaking. Issuing guidance typically takes far less time than notice-and-comment rulemaking, and gives the agency more flexibility to change approaches. In fact, many scholars assume that guidance is appropriate for specifying technical or scientific standards that need more frequent updating. However, long-term reliance on guidance can become a crutch for agencies (Cortez 2014b), and there are signs of this in FDA’s approach to software.
C. Is Software Different?
A third unanswered question is whether software differs enough from physical devices to warrant its own tailored regulations. On one hand, Brannigan testified to Congress that FDA personnel treated software like “some kind of new bedpan” (1986 Hearing). On the other hand, FDA recognized early on that software differs in key ways from physical devices on issues like quality assurance and user errors. Even Brannigan observed that FDA personnel, “people of immense goodwill,” were “wrestling” with software being very different from traditional devices (1986 Hearing).
Thus, it is surprising how few FDA regulations are tailored to software devices. The agency has long wavered on whether software deserves an overarching policy. In 1986, FDA declared, “No separate policy for computer software presently exists nor is one envisioned for the future” (1986 Hearing). Although the 1987 and 1989 draft policies contradicted that sentiment, FDA never finalized or codified them, despite labeling them as “prerulemaking” (54 Fed. Reg. 44,643).
Years later, software’s place in the FDA universe continues to perplex. In a 2013 hearing, two congressional representatives asserted quite confidently (and quite incorrectly) that “software is not a medical device” (2013 Hearing; statements of Reps. Joe Pitts and John Shimkus). This lingering confusion suggests that it is well past time to update the FDCA to account for software.
D. Agency Expertise?
A well-recognized basis for agency authority is technical expertise. But FDA has long questioned its own expertise on software, recognizing its unusual complexity. In the agency’s words, “The reliability of software systems and higher order integrated circuits are extremely difficult to assess because of their complexity,” and it can be “impractical to test it for every possible input value, timing condition, environmental condition, logic error, coding error and other opportunity for failure” (1986 Hearing). Later FDA guidance repeated these concerns (FDA 1992). Brannigan observed that “even in the best of faith, with the best of will, the best of technology, the best of intentions,” FDA could not adequately regulate software based on the 1976 Device Amendments (1986 Hearing). FDA’s lack of confidence reveals itself during public hearings, when agency personnel frequently downplay FDA’s expertise and defer to the software industry’s. Again, this might reflect a combination of resource constraints and the challenge of applying an aging statutory framework to new technologies.
E. Regulating a Regulatory-Naïve Industry?
A related concern is that the software industry is not accustomed to federal regulation, particularly by FDA. Historically, the federal government has done more to incubate the software industry than regulate it (Katz & Phillips 1982). FDA recognized early on that “many manufacturers of new, computer-related medical technologies are not aware of their responsibilities under the law because they are not part of the traditional medical device industry” (1986 Hearing). Certainly, large modern software device manufacturers are well acquainted with FDA requirements. But the latest generation of software developers for mobile devices seems naïve to them.
A parallel theme is FDA’s longstanding commitment to adopting the “least burdensome” approach to software. FDA’s first statement on software promised to impose “the minimum level of regulatory control necessary” (1986 Hearing), a promise it would often repeat. Congress later codified this philosophy for all devices, requiring FDA to consider the “least burdensome” ways to evaluate device effectiveness and substantial equivalence (Food and Drug Administration Modernization Act, Pub. L. No. 105-115, 105th Cong. § 513 (1997)).
F. Regulating Medical Knowledge?
Finally, there has long been unease that FDA regulation of software, particularly clinical decision support, would allow it to regulate medical knowledge, contravening the limitation in FDCA § 1006 that the agency cannot regulate the practice of medicine. Perhaps accounting for this concern, FDA’s early software guidances cite the exemption that allows practitioners and hospitals to use custom devices without prior FDA clearance (21 CFR § 807.65).
Software regulation also can implicate First Amendment free speech rights, as early commentators noted. At the 1986 hearings, a physician testified, “Computerized decision support programs are not much different than a physician taking the newest literature on a subject and applying it to their patient.” Indeed, FDA’s 1987 draft policy acknowledged that the First Amendment might limit its ability to regulate medical advice encapsulated in software. Scholars continue to consider this question (Candeub 2014).
FDA policy reflects these concerns in two ways. First, the agency is careful not to regulate software that replicates the functions of textbooks, articles, or reference sources, distinguishing software intended for “educational purposes” from that intended to “diagnose or treat patients” (FDA 1989). Second, early FDA policy exempted software devices that allow time for “competent human intervention before any impact on human health occurs,” defined as a situation in which “clinical judgment and experience can be used to check and interpret a system’s output” (FDA 1987). Such software would include, for example, most “expert” or “knowledge based systems,” including “artificial intelligence and other types of decision support systems” (FDA 1989).
The conventional wisdom is that medical professionals will use such technologies wisely, without FDA oversight: “In the absence of any relevant regulations, the medical profession has been very cautious and critical in the acceptance of this kind of intelligent computing” (1986 Hearing). Yet, the more we learn about automation bias and errors from human–computer interaction, the more this conventional wisdom seems suspect.
V. TOWARD A NEW REGULATORY FRAMEWORK
Given these longstanding concerns, no longer should Congress be an interested bystander nor FDA a reluctant regulator. Though momentum seems to be building toward congressional action, proposals to date have been modest, if not regressive. On the modest end, the much-anticipated FDASIA Health IT Report does not propose any meaningful reforms to FDA oversight. On the regressive end, recent bills like the PROTECT Act and SOFTWARE Act would remove FDA jurisdiction over “clinical software,” including most clinical decision support. Thus, this moment of renewed interest in FDA software regulation threatens to pass without meaningful reform. FDA’s framework, based on the 1976 Medical Device Amendments and decades of guidance documents, needs to be updated and tailored to software. Such a framework should consider four types of improvements.
First, as a definitional matter, Congress should clarify that software can satisfy the definition of “device” in FDCA § 201(h). As a corollary, Congress should reject recent proposals to exclude from FDA oversight clinical decision support and other types of “clinical” software. Clinical software will become more ambitious, more sophisticated, and more likely to be relied upon by patients and providers alike. Research on automation bias and human–computer interaction suggests that many users will rely on computer advice without second guessing it, even if there is an opportunity for “competent human intervention.”
Second, Congress should recognize that software devices differ enough from physical ones and push FDA to create tailored requirements, preferably through rulemaking rather than guidance. For example, device approval pathways, quality systems, labeling, and postmarket surveillance all could stand to be updated for software products. In particular, Congress might consider a premarket pathway better suited to software devices that can easily change and that are often based on common, off-the-shelf software modules used by multiple products. Concerns about deterring innovation might be addressed by experimenting with conditional approvals based on clear, binding postmarketing requirements. A new round of rulemaking for software would also allow FDA to update decades of policies enunciated in nonbinding guidance.
Third, Congress might create an Office of Software Devices to better focus FDA’s attention and bolster its in-house expertise on software. Recent proposals stop short of doing so. For example, the FDASIA Health IT Report recommends a new “Health IT Safety Center,” but it would not have regulatory authority and would reside within the Office of the National Coordinator for Health Information Technology (ONC), not FDA. Recent bills also propose new entities, but their function is informational, not regulatory. Most observers recognize that there are simply too many software products for FDA to review. Certification by private entities has been suggested (Powell et al. 2014). But asking certifiers to oversee fee-generating applicants can create conflicts of interest. Presumably, user fees to fund a new FDA Office of Software Devices would present less of a conflict. A new office would need to coordinate with the ONC, the Federal Communications Commission (FCC), and Federal Trade Commission (FTC), to be sure. But FDA is uniquely situated to protect public health.
The final component of successful software regulation is consistent enforcement, given the bewildering number of software devices on the market. Without real enforcement, we risk having a lemons market like the dietary supplement industry, in which most products are ineffective, unsafe, or both. Consistent enforcement can encourage high-value innovation in the long run. FDA regulation can even be “market-constituting,” in that it sustains consumer confidence that would otherwise erode if flooded with substandard products (Carpenter 2009).
VI. CONCLUSION
This chapter tells the story of an old agency applying an old regulatory framework to very new technologies. Because neither Congress nor FDA has updated this framework, the same concerns over software regulation continue to linger. To be sure, other questions beyond the scope of this chapter also persist, including questions about legal liability, patient privacy, and regulation by other federal agencies like the ONC, FTC, and FCC. Although the federal government is beginning to address some of these questions, it insists on doing so under a very old statutory framework. It is well past time for Congress to consider a twenty-first-century framework for software devices.
REFERENCES
Bogdanich, W. 2010. “Radiation Offers New Cures, and Ways to Do Harm.” New York Times (January 23).
Candeub, A. 2014. “Digital Medicine, FDA, and the First Amendment.” Georgia Law Review (forthcoming).
Carpenter, D. 2009. “Confidence Games: How Does Regulation Constitute Markets?” In Government and Markets: Toward a New Theory of Regulation, ed. Edward J. Balleisen and David A. Moss, 164–190. New York: Cambridge University Press.
Citron, D. K. 2008. “Technological Due Process.” Washington University Law Review 85:1249.
Cortez, N. 2014a. “The Mobile Health Revolution?” U.C. Davis Law Review 47:1173.
——. 2014b. “Regulating Disruptive Innovation.” Berkeley Technology Law Journal 29:173.
Cortez, N. G., I. G. Cohen, and A. S. Kesselheim. 2014. “FDA Regulation of Mobile Health Technologies.” New England Journal of Medicine 371:372–379.
Food, Drug, and Cosmetic Act of 1938, Public Law 75-717 (codified as amended at 21 USC §§ 301–399).
Fu, K. 2011. “Trustworthy Medical Device Software.” Institute of Medicine Workshop on the FDA 510(k) Clearance Process at 35 Years.
Institute of Medicine (IOM). 2011. Medical Devices and the Public’s Health: The FDA 510(k) Clearance Process at 35 Years. Washington: National Academies Press.
Kahn, J. G., J. S. Yang, and J. S. Kahn. 2010. “‘Mobile’ Health Needs and Opportunities in Developing Countries.” Health Affairs 29(2):254.
Katz, B. G., and A. Phillips. 1982. “The Computer Industry.” In Government and Technical Progress: A Cross-Industry Analysis, ed. Richard Nelson, 162–232. New York: Pergamon Press.
Leveson, N. G. 1995. Safeware: System Safety and Computers. New York: Addison-Wesley Professional.
——. 2011. Engineering a Safer World: Systems Thinking Applied to Safety. Cambridge: MIT Press.
New York Times. 1967 (July 6). “Hearts in France Analyzed in U.S. in a Satellite Test.”
November, J. 2012. Biomedical Computing: Digitizing Life in the United States. Baltimore: Johns Hopkins University Press.
Powell, A. C., A. B. Landman, and D. W. Bates. 2014. “In Search of a Few Good Apps.” Journal of the American Medical Association 311(18):1851.
research2guidance. 2013. Mobile Health Market Report 2013–2017.
U.S. Food and Drug Administration (FDA). 2014. “FDASIA Health IT Report: Proposed Strategy and Recommendations for a Risk-Based Framework.”
——. 2011a. “Draft Guidance for Industry and FDA Staff: Mobile Medical Applications.”
——. 2011b. “Public Workshop – Mobile Medical Applications Draft Guidance.”
——. 1996. “Do It by Design: An Introduction to Human Factors in Medical Devices.”
——. 1992. “Application of the Medical Device GMP to Computerized Devices and Manufacturing Processes: Medical Device GMP Guidance for FDA Investigators.”