Ethical aspects of the use of AI in Health and the role of organizations

Proper implementation of AI to avoid ethical issues and implications

Photo: Glenn Carstens

AI technologies are leading the way in revolutionizing many industries, including healthcare. Health clinics around the world are beginning to harness machine learning and natural language processing. However, implementing AI in healthcare raises ethical questions. In this article, we look at the ethical aspects of the use of AI in healthcare and the role organizations play in addressing them.

Embracing and using AI requires careful management and proper implementation

AI technologies can be incredibly helpful for both health clinics and patients. However, there is an ongoing debate over the ethical aspects of using them with patients. The central concern is how we can equip AI systems with humane, ethical insight. Moral views change depending on factors such as culture, so many wonder whether AI can ever fully understand human cultures and languages.

Aside from these big moral implications, there is another layer of ethical consideration that health clinics need to pay close attention to. Bias, discrimination, and violations of privacy through AI are extremely important topics in their own right and are closely connected to the wider ethical concerns about AI. Leaders of organizations that use AI systems, including health clinics, aim to implement AI in a way that avoids these issues. However, abandoning AI is not the solution. On the contrary, AI brings many benefits to health systems, such as shorter waiting times, easier handling of healthcare-related tasks, and faster, more efficient scheduling. Instead, health clinics need to focus on a few key guidelines that will help experts implement AI properly. Let's take a closer look.

Appropriate data acquisition

Data, or data gathering to be more precise, is what fuels every AI system. AI can learn and improve thanks to machine learning techniques, but only if it is able to gather data. In healthcare, gathering data also means gathering it from communication and interaction with patients. However, some patients feel this invades their privacy and worry not only about potential data leaks but also about an incomplete picture of their lives being assembled from all of that data and context.

So, since data gathering is a potential ethical issue, the experts behind healthcare AI systems need to be vigilant in asking their data science teams where the insights come from. Teams also need to plan data gathering so that it does not violate patients' privacy and serves only to benefit their healthcare journey. This is, after all, the main goal of the use of AI in healthcare.
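
As a rough illustration of what such a plan can look like in practice, the Python sketch below keeps only records from patients who explicitly consented and strips direct identifiers before the data reaches any training pipeline. The field names (`consent_given`, `name`, `national_id`) and the helper itself are hypothetical and would depend on a clinic's actual data model.

```python
# Hypothetical sketch: keep only records with explicit consent and strip
# direct identifiers before the data ever reaches a training pipeline.
from typing import Dict, List

# Fields assumed to be direct identifiers in this illustrative data model.
DIRECT_IDENTIFIERS = {"name", "national_id", "phone", "email", "address"}

def prepare_training_records(records: List[Dict]) -> List[Dict]:
    """Return de-identified copies of records whose owners gave consent."""
    prepared = []
    for record in records:
        # Exclude anyone who has not explicitly opted in.
        if not record.get("consent_given", False):
            continue
        # Drop direct identifiers; keep only clinically relevant fields.
        prepared.append(
            {key: value for key, value in record.items()
             if key not in DIRECT_IDENTIFIERS and key != "consent_given"}
        )
    return prepared

# Example usage with made-up records:
patients = [
    {"name": "A. Example", "consent_given": True, "age": 54, "visit_reason": "dermatology"},
    {"name": "B. Example", "consent_given": False, "age": 37, "visit_reason": "cardiology"},
]
print(prepare_training_records(patients))
# -> [{'age': 54, 'visit_reason': 'dermatology'}]
```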

Relevance of the gathered data

The groups of patients whose data has been analyzed need to know that it is relevant to their healthcare journey and overall health. Health clinics need to ask detailed enough questions of their data science teams to understand how the data was sampled to build the models. This helps avoid issues such as racial and gender bias, among others. A few key questions need to be asked and answered to make sure the gathered data is relevant and within reason (a short sketch after the list shows how the first two might be checked in code). These questions are:

  • Do the data reflect actual populations?

  • Did they include relevant data for minority groups?

  • Are there going to be issues with the data gathered through performance tests?

  • Is there anything missing from the gathered data?
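
To make the first two questions concrete, a data science team might compare the demographic make-up of its training sample against a reference population. The sketch below is only illustrative; the group labels, reference shares, and tolerance are invented and would come from census or patient-registry data in practice.

```python
# Hypothetical sketch: flag groups that are under-represented in the
# training data relative to a reference population.
from collections import Counter
from typing import Iterable

def representation_gaps(sample_groups: Iterable[str],
                        reference_share: dict,
                        tolerance: float = 0.05) -> dict:
    """Return groups whose share in the sample falls short of the
    reference share by more than `tolerance`."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_share.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if expected - observed > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps

# Invented example: the sample is skewed away from group "C".
sample = ["A"] * 70 + ["B"] * 25 + ["C"] * 5
reference = {"A": 0.60, "B": 0.25, "C": 0.15}
print(representation_gaps(sample, reference))
# -> {'C': {'expected': 0.15, 'observed': 0.05}}
```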

Ensuring the equity of artificial intelligence results is necessary. Machine learning algorithms gather data through various means, detect patterns, and formulate predictions and recommendations based on data and experience. Historical human biases and judgments can affect those predictions across a broad spectrum, so constant monitoring of the data is needed to avoid this. A dedicated team of human experts needs to stay in close contact with the AI system and review its data analysis continuously.

Constant monitoring by data science teams involves actions such as the choice of data, the choice of features ("characteristics") extracted from the raw data, and the development, evaluation, and monitoring of models.
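
As a hedged illustration of what such monitoring can look like, the sketch below compares a model's error rate across patient subgroups on recent data and flags a disparity for human review. The group labels, batch, and 10% threshold are assumptions for the example, not a standard.

```python
# Hypothetical sketch: recurring check that a model's error rate does not
# drift apart across patient subgroups.
from collections import defaultdict
from typing import Iterable, Tuple

def error_rates_by_group(results: Iterable[Tuple[str, bool]]) -> dict:
    """results: (group_label, prediction_was_correct) pairs from recent traffic."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

def flag_disparity(rates: dict, max_gap: float = 0.10) -> bool:
    """True when the gap between best- and worst-served group exceeds max_gap."""
    return (max(rates.values()) - min(rates.values())) > max_gap

# Invented monitoring batch:
recent = [("group_a", True)] * 90 + [("group_a", False)] * 10 \
       + [("group_b", True)] * 75 + [("group_b", False)] * 25
rates = error_rates_by_group(recent)
print(rates)                  # {'group_a': 0.1, 'group_b': 0.25}
print(flag_disparity(rates))  # True -> escalate to the human review team
```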

Regulatory compliance and commitment

Organizations whose activities were not regulated in the past had less stringent standards for data privacy protection. With the arrival of new rules and regulations, however, how each organization gathers data is now monitored for potential abuse, data leaks, privacy concerns, and so on. Ultimately, health clinics, organizational leaders, and the experts behind AI systems need to make sure their data science, legal, and compliance teams work together to define clear criteria for data gathering and for implementing AI in health clinics. This step is essential: without it, proper implementation of AI is not possible, and patients will be far more hesitant to engage in conversation with AI.

Explainability of the model

Finally, the last step in ensuring AI is implemented properly and without ethical issues is making sure that both health clinics and patients know the basics of how a particular AI model works. Patients do not need to know every detail of how an AI arrives at its predictions, because much of that is irrelevant to their healthcare journey. For example, a medical application that does not need to be explained in depth is how an AI classifies images to adequately, consistently, and accurately predict which types of skin blemishes carry a high risk of concern and which do not. The model may use the blemish's tone, shape, proximity to other blemishes, and so on, but the patient is unlikely to care how that information is gathered.

The main concern of a patient is whether the recommendation made by the AI is correct. Being able to explain results, and how the AI models and predicts certain things, to health clinics as well as to patients is an important task that every team implementing AI systems needs to understand and handle properly.
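
As a loose illustration of the kind of explanation a team might derive, the sketch below ranks which inputs pushed a simple linear risk score up for a single prediction, echoing the skin-blemish example above. The feature names, weights, and logistic form are invented for the example and do not reflect any real clinical model.

```python
# Hypothetical sketch: turn a linear model's weighted inputs into a ranked,
# human-readable explanation of a single prediction.
import math

# Invented weights for illustrative skin-blemish features.
WEIGHTS = {"irregular_border": 1.8, "darker_tone": 1.1, "diameter_mm": 0.25}
BIAS = -3.0

def risk_and_explanation(features: dict):
    """Return a logistic risk score and each feature's contribution, largest first."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    risk = 1 / (1 + math.exp(-score))  # logistic risk score in [0, 1]
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return risk, ranked

risk, ranked = risk_and_explanation(
    {"irregular_border": 1.0, "darker_tone": 1.0, "diameter_mm": 6.0}
)
print(f"estimated risk: {risk:.2f}")
for name, contribution in ranked:
    print(f"  {name}: {contribution:+.2f}")
# The ranked contributions, not the raw arithmetic, are what a clinician
# would relay to the patient.
```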

© Eniax - https://eniax.care