8 Ethical Concerns Raised by AI Images and Video

by
Alfredo Deambrosi
July 18, 2025

Generative AI has made it possible to produce images and video faster than ever before. But as these tools move into everyday use – in marketing, design, and media – they raise new ethical questions that don’t have easy answers. 

The ethics of AI is a huge topic. It includes everything from environmental impact to military use to whether these tools are eroding our ability to think critically. Those are big, important conversations. But this post is about a more specific slice of the issue: the ethics of AI-generated images and video.

This post outlines 8 of the most important concerns, why they matter, and what's at stake as we balance innovation, responsibility, and public trust. It doesn't try to answer every question about visual AI ethics. But it does aim to raise some of them – because these are conversations we need to be having.

1. Consent and Privacy in Training Data

The Issue: AI models are often trained on massive datasets that include images of real people.
The Concern: Individuals’ faces, bodies, or likenesses may be used without consent.

Your image could be part of an AI model, even if you never agreed to it. Photos posted online – on social media, in public datasets, or through stock photo platforms – are often scraped and used to train AI systems. And once your likeness is in the training data, it can show up in generated content without your knowledge.

Even when companies say they “anonymize” training data, it’s not always clear how effective those safeguards are – or whether they’re in place at all. Without clear standards or consent processes, there’s little to prevent people’s personal images from being used in ways they’d never expect.

Relevant Story:
  • In July 2025, Meta defended its use of Australians’ Facebook and Instagram posts to train its AI, arguing it needed “real” conversations to understand Australian culture.
  • Critics highlighted the absence of consent mechanisms and the implications for user privacy in generative image and video training datasets.

2. Creator Rights and IP Theft

The Issue: Artists’ and photographers’ work is used to train AI without permission.
The Concern: Original content is being exploited without compensation or credit.

For many artists, the rise of AI feels like déjà vu – only this time, it’s not just copycats, but machines learning from their work at scale. A model trained on millions of images can easily reproduce a creator’s unique style, without giving credit or asking permission.

This is especially troubling because creators have little control. Even if they see an AI-generated image that resembles their work, they often can’t trace where it came from or do anything about it. And as more tools use scraped or crowdsourced visuals, the legal and ethical rules around ownership are getting harder to pin down.

3. Economic Displacement of Creatives

The Issue: AI tools reduce the need for human visual creators.
The Concern: Artists, photographers, and video producers may lose work or income.

If you’ve ever wondered whether your job might be replaced by a tool, you’re not alone. Many creative professionals are feeling that pressure now. AI can generate polished visuals in seconds – and often at a fraction of the cost of hiring an illustrator or photographer.

These tools offer speed and convenience. But they're also shifting how visual work gets done. Teams that once relied on creative freelancers or in-house experts may now turn to AI for quick outputs, and that shift can reshape the economics of the industry. For many creatives, adapting to new technology is only part of the challenge; maintaining relevance and steady work in a rapidly changing market is just as hard.

Relevant Story:
  • A December 2024 CISAC study projected that audiovisual creators could lose 21% of their income by 2028 due to the rapid growth of AI-generated video and animation.
  • The report warned of a large-scale shift in revenue from human artists to generative AI platforms unless protective policies are enacted.

4. Bias and Representation

The Issue: AI imagery often reflects skewed or stereotypical defaults.
The Concern: Racial, gender, age, and body diversity may be underrepresented or distorted.

Anyone who has experimented with image generators knows the drill: you type “CEO” and get a series of middle-aged white men. These biases aren’t random. They’re baked into the data. If a model learns from content that underrepresents certain people, it will reproduce those gaps in its outputs.

The problem is that AI-generated images are showing up more often in ads, websites, and brand content. If no one steps in, these visuals can reinforce narrow or inaccurate ideas about who fits where. That’s not just a representation issue – it’s a reputational one for brands aiming to reflect a diverse audience.

Relevant Story:
  • A 2025 roundup of AI bias cases found image-generation tools like DALL·E and Stable Diffusion often reinforced gender and racial stereotypes in visual outputs.
  • One study cited in the report found that an AI tool produced hypersexualized portraits of Asian women, illustrating how AI tools can reinforce both racial and gender stereotypes in visual content.

5. Misinformation and Deepfakes

The Issue: AI makes it easy to create deceptive visuals.
The Concern: Fake images or videos can be used to manipulate opinion or harm reputations.

We're all growing more suspicious of fake content – yet we're still a little too quick to believe it. AI-generated visuals are often polished, believable, and easy to create. That combination makes them perfect for spreading misinformation.

Whether it’s a fabricated protest scene or a politician saying something they never actually said, these videos can gain traction before fact-checkers catch up. Even brands using AI ethically may find themselves accused of manipulation, just because the line between real and fake keeps getting blurrier.

6. Safety and Harmful Content

The Issue: AI tools can generate dangerous or illegal imagery.
The Concern: These outputs can cause real-world harm or violate laws.

Visual content isn't just about aesthetics – it can cause real harm. Some AI tools have produced explicit, violent, or illegal images when pushed, or even unintentionally. These risks become more serious when the outputs involve real people or resemble criminal content.

Without strong safeguards, platforms can easily be misused. And once disturbing content is generated, it’s difficult to contain. As image and video generation becomes more accessible, it’s essential that developers, platforms, and users treat content moderation as more than a secondary concern.

Relevant Story:
  • Australia's eSafety Commissioner revealed that Google's Gemini had generated hundreds of suspected terrorist and child abuse visuals in under a year.
  • The report criticized major tech platforms for failing to implement effective safeguards against harmful and illegal visual content.

7. Erosion of Trust

The Issue: The line between real and fake imagery is blurring.
The Concern: Audiences may lose faith in even authentic visual content.

It’s become all too common to see “that’s AI” in the social-media comments under a real photo. We’re all becoming more skeptical of images, even when they’re authentic.

That erosion of trust can hurt brands, journalists, and creators alike. If people don’t believe what they see, it becomes harder to tell a compelling story, or even convey a simple fact. For anyone using visual media to communicate, trust now has to be earned more actively than ever before.

Relevant Story:
  • In August 2024, a presidential candidate falsely claimed an image of his rival’s large campaign rally was AI-generated – despite multiple sources confirming its authenticity.
  • The incident highlighted a new dilemma: even real photos can now be dismissed as fakes, further undermining trust in visual evidence.

8. Authorship and Copyright of AI-Generated Work

The Issue: AI creations exist in a legal gray area.
The Concern: It’s unclear who, if anyone, owns the rights to AI-generated content.

When an AI tool generates a visual, who owns it? Right now, the answer is often: no one. Copyright law in many countries requires a human author, which means AI-created work may not be protected – or protectable.

That uncertainty can complicate everything from branding and licensing to asset reuse. If a campaign includes AI-generated imagery, marketers need to know what they actually own – and what they can legally defend or monetize.

Relevant Story:
  • In March 2025, a U.S. appeals court reaffirmed that AI-generated art lacking human input cannot be copyrighted.
  • The ruling reinforced that legal protections for AI-generated images remain murky, leaving creators and brands in a precarious position regarding rights and reuse.

Navigating the New Visual Frontier – Together

Generative AI is changing how we create and share images – but also how we think about ownership, fairness, and credibility. These aren’t abstract issues. They’re practical challenges that marketers, creatives, and teams across industries are already facing.

There’s no single rulebook for using AI responsibly. But brands are making choices today about transparency, representation, and rights. And those choices will shape how audiences respond tomorrow. It’s an opportunity to lead with clarity and integrity, even as the tools keep evolving.

To explore what that looks like in practice, watch our on-demand webinar: “How AI Is Shaping Storytelling and Visual Media.” Join Imgix CEO Chris Zacharias and veteran tech journalist Connie Guglielmo for a candid conversation about creativity, responsibility, and the future of content creation.

👉 Watch the webinar now.