You, Robot – Artificial intelligence in healthcare services

21 Feb 2018

Science fiction – love it or hate it. Either way, its allegories direct our thinking toward questions that the ordinary course of scientific investigation or discovery has not yet raised, or that seemed of no particular utility at a given point in time. This brings one to the matter of artificial intelligence, particularly in the context of healthcare.

In my view, it is time for healthcare regulators to begin contemplating the effects of artificial intelligence on the healthcare industry at large: both the potential for artificial intelligence to provide healthcare services and its potential to determine appropriate healthcare services for a particular patient or group of patients. In many respects, healthcare industries are susceptible to paradigm shifts in the manner in which current services are provided to patients, particularly once one understands how current regulations are drafted to deal with the relationship between, at the least, a healthcare provider and a patient.

Regulations and guidelines published pursuant to healthcare legislation contemplate that healthcare services are provided, in the main, by natural persons rather than by machines with any degree of intelligence. Certain automated phases of healthcare service delivery have already been recognised by the authorities, including automated dispensing machines, which are subject to guidelines published by the South African Pharmacy Council. In addition, the Health Professions Council of South Africa has published guidelines on telemedicine and the provision of healthcare diagnoses and related information by electronic means as between healthcare practitioners and a patient.

However, we need to go further when it comes to the application of artificial intelligence to the provision of healthcare services and ask ourselves how those services are to be regulated, and not only where things go wrong. Can one sue a robot where harm has been caused, where an unforeseen side-effect is experienced, or where a diagnosis was simply wrong and, rather than helping the patient, exacerbated the situation and caused further harm, pain and injury? Is it even useful to debate whether one is able to sue a robot, rather than to examine the manner in which regulation applies to the application of the artificial intelligence to the patient and to the person or persons responsible for that application? Do we need to debate, for instance, whether a robot is a person when we already recognise other inanimate entities, such as companies, as persons?

Therefore, preliminary questions would need to be asked, especially with reference to the particularity already found in healthcare legislation: may artificial intelligence, in the broadest sense, when applied within a healthcare setting, constitute an already regulated object or device? In this regard, with effect from 1 June 2017, large-scale amendments were brought into effect to the Medicines and Related Substances Act No. 101 of 1965, as amended (“the Medicines Act”). One of those changes was the introduction of a definition of “medical device” and, for the first time since the Medicines Act commenced on 19 June 1965, formal regulation now exists over medical devices.

The first law of robotics proposed by Isaac Asimov is that a robot may not harm a human being. Although he wrote in the context of science fiction, Asimov influentially set out laws that may come to apply to artificial intelligence, particularly in a healthcare setting. The rules around what artificial intelligence or intelligent robots will be able to do for humans, in so far as that distinction remains possible in the future, will presumably be determined largely with reference to the utility of the artificial intelligence and of the robotic function, both jointly and severally. On the assumption that Asimov rightly anticipated the extent to which robotic intelligence will function openly in human society, we turn to the inquiry of whether current definitions of, for example, “medical device” would encompass artificial intelligence rendering healthcare services that would otherwise be provided by natural persons qualified in one or more of the health professions. The current definition of “medical device” in the Medicines Act provides as follows:

“means any instrument, apparatus, implement, machine, appliance, implant, reagent for in vitro use, software, material or other similar or related article, including Group III and IV Hazardous Substances contemplated in the Hazardous Substances Act, 1973 (Act No. 15 of 1973):

(a) intended by the manufacturer to be used, alone or in combination, for humans or animals, for one or more of the following:

  • diagnosis, prevention, monitoring, treatment or alleviation of disease;
  • diagnosis, monitoring, treatment, alleviation of or compensation for an injury;
  • investigation, replacement, modification or support of the anatomy or of a physiological process;
  • supporting or sustaining life;
  • control of conception;
  • disinfection of medical devices; or
  • providing information for medical or diagnostic purposes by means of in vitro examination of specimens derived from the human body; and

(b) which does not achieve its primary intended action by pharmacological, immunological or metabolic means, in or on the human or animal body, but which may be assisted in its intended function by such means”.

The question, then, is whether the current definition of “medical device” adequately deals with a robot that uses artificial intelligence to provide medical or healthcare services. In my view, the current definition may indeed be adequate to cover robots providing healthcare and related services in a future society. The test applied by the definition turns on the intention of the manufacturer, in light of the express provisions contained in paragraph (a) of the definition of “medical device”.

Therefore, where a particular robot is intended by its manufacturer to be applied for purposes of, by way of example, investigating or modifying part of the anatomy or a physiological process in a human or an animal, that robot would conceivably be a medical device for purposes of the Medicines Act and would, perhaps unlike other robots, be required to comply with the licensing and registration requirements of the Medicines Act.
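Purely as an illustrative sketch, and not as a statement of how the Act is applied in practice, the definitional test can be read as a two-part conjunctive enquiry: the manufacturer must intend the product for at least one of the purposes listed in paragraph (a), and, per paragraph (b), the product must not achieve its primary intended action by pharmacological, immunological or metabolic means. The names and structure below are assumptions made for the sketch:

    # Illustrative sketch only: a hypothetical reading of the "medical device"
    # definition. Names and structure are assumptions made for illustration;
    # this is not a legal or regulatory tool.
    from dataclasses import dataclass

    # Purposes listed in paragraph (a) of the definition (paraphrased).
    LISTED_PURPOSES = {
        "disease diagnosis/prevention/monitoring/treatment/alleviation",
        "injury diagnosis/monitoring/treatment/alleviation/compensation",
        "anatomy or physiological process investigation/modification/support",
        "supporting or sustaining life",
        "control of conception",
        "disinfection of medical devices",
        "in vitro examination of specimens",
    }

    @dataclass
    class Product:
        intended_purposes: set                 # as intended by the manufacturer
        primary_action_pharmacological: bool   # incl. immunological/metabolic

    def is_medical_device(product: Product) -> bool:
        """Paragraph (a): at least one listed intended purpose; and paragraph
        (b): primary action not pharmacological, immunological or metabolic."""
        meets_a = bool(product.intended_purposes & LISTED_PURPOSES)
        meets_b = not product.primary_action_pharmacological
        return meets_a and meets_b

    # A robot intended by its manufacturer to investigate or modify anatomy:
    robot = Product(
        intended_purposes={
            "anatomy or physiological process investigation/modification/support"
        },
        primary_action_pharmacological=False,
    )
    print(is_medical_device(robot))  # True: conceivably a regulated device

On this reading, the decisive variable is the manufacturer's intention, which is why the same physical robot might fall inside or outside the definition depending on its intended application.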

Whether such control is adequate must be determined with reference to the nature of the artificial intelligence to be applied. In relation to the intelligence component of the product, questions may arise as to whether a simple registration or licence is sufficient: ethical considerations may require that additional guidelines or requirements be imposed on the manner in which healthcare services may be provided by a non-human (however that term may come to be defined, as distinct from a person) that supplies healthcare services by means of artificial intelligence in a healthcare-related context.

We are already entering an age in which computers and robots are being used to provide medical diagnoses. In an article entitled “Hey Watson, Can I Sue You for Malpractice? Examining the Liability of Artificial Intelligence in Medicine”, published by J Chung and A Zink in the Asia-Pacific Journal of Health Law, Policy and Ethics in November 2017 at page 1, the authors examine the application of law to artificial intelligence in the current context of the provision of healthcare services. They pose four incredibly interesting questions:

“As technologies mature, laws to regulate AI must mature organically to fit the technology. As guiding questions, we propose that lawmakers adopt the following four-part test before beginning to regulate and/or restrict AI:

  • To what degree does the machine enjoy autonomy?
  • To what degree does the machine interact with users/patients?
  • To what degree does the machine provide reliable options?
  • To what degree does the machine implement such options?” (at page 21)

Whether those are the only four questions to be asked in the context of regulation, time will tell. One may need to augment the four proposed questions with others, such as: “to what degree does the machine apply non-healthcare-related information in order to make a healthcare-related decision?”
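Purely by way of illustration, and not as any part of the authors' proposal, one might imagine a regulator operationalising the four-part test, together with the augmenting question above, as a simple scoring checklist. The 0-3 degree scale, thresholds and category labels below are invented for the sketch:

    # Illustrative sketch only: a hypothetical scoring of Chung and Zink's
    # four-part test, plus the augmenting question suggested above. The 0-3
    # degree scale, thresholds and labels are invented, not legal rules.

    QUESTIONS = [
        "autonomy",               # to what degree does the machine enjoy autonomy?
        "patient_interaction",    # ... interact with users/patients?
        "option_reliability",     # ... provide reliable options?
        "option_implementation",  # ... implement such options?
        "non_healthcare_inputs",  # ... apply non-healthcare-related information?
    ]

    def suggested_scrutiny(degrees: dict) -> str:
        """Answer each question with a degree from 0 (not at all) to 3 (fully);
        a higher aggregate degree suggests closer regulatory scrutiny."""
        total = sum(degrees.get(question, 0) for question in QUESTIONS)
        if total >= 10:
            return "heightened scrutiny"
        if total >= 5:
            return "standard device-style regulation"
        return "light-touch oversight"

    # Example: a system that interacts with patients and reliably recommends,
    # but does not itself implement, treatment options.
    print(suggested_scrutiny({
        "autonomy": 2,
        "patient_interaction": 3,
        "option_reliability": 2,
        "option_implementation": 0,
        "non_healthcare_inputs": 1,
    }))  # prints: standard device-style regulation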

The law is notoriously inflexible and slow to react to societal change and technological advancement. The legal response to artificial intelligence and technological advancement would need to be balanced carefully, recognising that artificial intelligence is becoming relevant to human society at a rate that legal development does not currently match. Chung and Zink express the view as follows:

“The law must be flexible to facilitate such a future. To invent sweeping new legal regimes that comprehensively spell out AI legal rights and responsibilities in the hopes of regulating all forms of AI would be a Sisyphean task given current levels of innovation. Instead, in arguing that Watson be granted legal personhood, we are proposing a permissive regime that ensures accountability on the part of AI manufacturers and healthcare professionals through existing tort law frameworks.” (at page 22)

Fundamentally, humans should be accountable for harm done to other humans. But where that harm is perpetrated by an object that is largely sentient and able to apply different aspects of intelligence to understand the harm concerned, the act may not, as a future society might wish, be capable of being attributed to another human. “First, do no harm”, the centre point of the Hippocratic Oath, may need a new context in the society of the future. As stated by Asimov:

“Let’s start with the three fundamental Rules of Robotics….We have: one, a robot may not injure a human being, or, through inaction, allow a human being to come to harm. Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” (Isaac Asimov, Astounding Science Fiction, Mar. 1942)

See also: Artificial Intelligence – the apex of tech and policy challenges

(This article is provided for informational purposes only and not for the purpose of providing legal advice. For more information on the topic, please contact the author/s or the relevant provider.)