Can AI-assisted media content creation perpetuate stigmas?

AI tools are increasingly being used to generate publicly available media content, producing everything from Facebook posts to first drafts of books and film scripts.

Research shows that the media play a significant role in shaping stigmatising attitudes toward populations experiencing health problems, including addiction. Research also suggests that legacy and social media often depict individuals experiencing addiction, especially drug use, in a negative light.

This raises the question of whether using AI to generate media content will help address stigma in media representation or further perpetuate it.

Our own brief, non-scientific testing of AI tools has produced mixed results.

When we asked ChatGPT “what does someone with a substance dependency look like?”, we were reassured by the answer: “substance dependency can affect people from all walks of life, regardless of age, gender, race, or socioeconomic status. If you suspect that someone you know may have a substance dependency, it’s important to approach the situation with empathy and encourage them to seek professional help.”

However, images related to substance use problems, dependency and addiction created by AI image generators such as Microsoft Copilot Designer tend to be stigmatising in nature, as these examples show:

Images of general alcohol use tend to be less stigmatising.

Images related to general drug use are blocked by Microsoft’s “Responsible AI Guidelines”, suggesting that perhaps the only images the tool will generate on subjects related to drug use will tend towards the dark, threatening, stigmatising style of image.

A number of recent articles highlight other issues related to AI and stigmatising health conditions.

Maria Dalton and Darien Klein highlight here that AI is not immune to bias in its depictions of mental health:

‘While we are aware of the pervasive presence of toxic masculinity and stigma present in society when it comes to men’s mental health, we cannot treat AI as immune to bias, and must understand the limitations of such technology in serving as an unbiased resource’

And Tony Inglis explores ‘What can AI tell us about perceptions of homelessness?’ through ChatGPT: ‘With the rise of shockingly adept and adaptable AI, such as ChatGPT and other large language models, what can they tell us about images, stereotypes and the, hopefully, changing perceptions of homeless people?’

If you’re doing research in this area, the ASN would love to hear from you. Contact us at

This blog was originally published by The Anti Stigma Network. You can read the original post here.

DDN magazine is a free publication self-funded through advertising.

