I, Doctor

27 May 2021 | MSc IME Blogs

Author: Zuzanna Nagadowska

Imagine a future where everyone has access to high quality healthcare and each prescribed drug or treatment is uniquely tailored to you.

Your personal fitness tracker checks that you are in top shape. A cancer diagnosis is no longer a death sentence as bespoke immunotherapies and automated surgeries are now available to you. Even in serious illness, technology will allow you to live your life the way you want to.

This can be the future of healthcare empowered by Artificial Intelligence (AI). It is almost certain that the technologies just described will be available soon. But how can we guide AI development to turn this vision into reality?

The United Nations has made good health and wellbeing its third Sustainable Development Goal, placing it higher on the list than gender equality and climate action. Yet, despite advances in healthcare and medicine, our society is increasingly struggling with both physical and mental health. We spend more time sick than the generations before us, and our life expectancy is not significantly longer.

Developments in AI and Machine Learning seem to offer solutions to at least some of the problems faced by healthcare today. But the potential benefits also come with potential risks and the need for improved governance.

AI in healthcare

AI has been steadily finding its way into healthcare. In radiology, AI algorithms already analyze complex patient images. One study shows that AI diagnostics can perform at the level of medical professionals. Companies such as Enlitic have developed deep learning medical tools that can analyze not only radiology images, but also blood test results, EKGs, genomics, and patients’ medical records. This information can be triangulated to produce more accurate radiology diagnoses.

Another growth area is biomedical research, where AI is being used to predict chemical and pharmaceutical properties of molecules and develop personalized therapies. In an industry where a drug can cost around $2.6 billion to commercialize and only 10% of proposed molecules make it to market, AI is expected to create a renaissance for pharmaceutical companies. In January 2020, the first drug molecule designed entirely by an AI entered clinical trials.

Other AI-centered companies are aiming to improve healthcare customer service. They are making healthcare more accessible for patients through AI chatbots and assistants. For hospitals, AI-powered IT can decrease the time doctors spend on filling out forms and automate healthcare’s most repetitive processes.

Of course, we cannot forget about autonomous surgical robots, some of which can already perform basic surgical tasks such as suturing or work in tandem with a human surgeon when performing minimally invasive operations. Auris Health is developing an AI robotic arm capable of precisely targeting cancerous tissues.

There are other uses of AI in healthcare, such as health records management, end-of-life care, brain-computer interactions, and medical education (see the PwC report or this blog). Moreover, AI technology itself is rapidly growing. According to the AI Index 2019 Report, since 2012 the amount of compute used to train the largest AI models has doubled every 3.4 months.

Governance and risk

As AI technology and use develops, so do concerns about governance. According to the Future of Life Institute, AI can be dangerous, either because it can be programmed to purposefully do malevolent things or because it generates destructive methods to achieve seemingly beneficial goals.

The UK is often regarded as being at the forefront of AI governance, yet even its governance systems lag behind the pace of technological development. As the Global Legal Insights report states: “[…] the UK’s regulatory and governance framework for AI […] remains a work in progress with notable deficiencies”.

So, what can we do to govern applications of AI in healthcare to minimize risks while also reaping the benefits?

Cybersecurity is probably the biggest worry when it comes to AI systems that are given the power to make decisions or perform autonomous tasks.

It is conceivable that a malicious attack on these artificial agents can be life threatening. For this reason, each medical procedure should be supervised by a physician, who should be able to make decisions regarding the use of new technologies and bear the ultimate responsibility for the patient.

Another pressing need is for an improved regulatory framework on the use of patient medical data. AI algorithms cannot function without data. To implement them, the healthcare industry will generate and use large amounts of data. However, a data breach or even mishandling of data can lead to patient information leaking to third parties. The UK has already encountered a case where NHS patient data was improperly shared with Google.

The introduction of hard laws governing the use of data in healthcare could potentially benefit not only patients, but also the companies trying to innovate in the area as it would decrease the uncertainty they face. Respect for privacy and human dignity should be prioritized and an option to opt out of certain schemes should be given to all people.

Considering the recent issues raised by the Facebook and Cambridge Analytica data scandal, it would perhaps make sense to create a special government body tasked with assuring the proper handling of citizens’ data in healthcare and other domains.

Where AI is used to perform physical tasks such as simple surgical procedures, patients should also be able to opt out and request a human surgeon to perform the operation. Somewhat ironically, this may be beneficial for companies developing autonomous surgical robotics.

Without the option of requesting a human surgeon, some patients will actively avoid facilities where robots are employed. This could lead to a situation where a minority (who do not agree to robot surgery) effectively overrules a majority (who find robot surgery acceptable), with facilities deciding not to implement robots so as not to lose patients.

Accessibility and inclusion

The benefits of AI in healthcare will likely not be accessible to everyone in society right away. Early differences in access could widen the gap in quality of medical care between different facilities. Governments are responsible for ensuring the equitable diffusion of these innovations.

Of course, we cannot forget about possible biases that can become ingrained in AI algorithms in a variety of ways. Some can deepen racial and gender inequalities, while others might be accidental, inherent in the data on which an AI algorithm was trained (I recommend Cathy O’Neil’s blog and book on the topic).

Next steps

The companies and organizations developing and using AI in healthcare currently face regulatory uncertainty. The UK government is trying to introduce a more tangible framework. Yet it should not wait for the results of experiments such as the NHS AI Lab Sandbox; instead, it should set out clear guidance on what AI cannot do in healthcare and on the rights of patients in this emerging medical landscape.

While my recommendations involve introducing laws and regulations, the issues of AI in healthcare cannot be solved only by a top-down approach. Involvement is needed from interest groups and non-profit organizations. AI in healthcare will benefit greatly from considering responsible research principles: anticipation, reflexivity, inclusion, and responsiveness.



Zuzanna Nagadowska

Zuzanna Nagadowska is a current MSc Innovation Management and Entrepreneurship student at the University of Manchester, having obtained a BEng in Aerospace Engineering in 2019. She enjoys learning about new technologies from technological, governance and business standpoints. Her dissertation focuses on modern UK space industry policy.




Feature image: Medical staff using ion endoluminal system in the operating room ©2021 Intuitive Surgical, Inc.