The rapid rise of artificial intelligence has transformed almost every aspect of the creative and media industries, and voiceover is no exception. As AI-generated voices become more accessible, cheaper, and faster to produce, many clients are understandably asking whether they should embrace these new tools.
At The Voiceover Gallery, we recognise the potential of AI and do offer AI-driven solutions. However, we believe these tools must be used responsibly, transparently and, most importantly, ethically.
A recent controversy in the UK illustrates this point well. ScotRail’s use of an AI-generated version of a voice actor’s voice for its train announcements, allegedly without her prior consent, prompted the artist to speak out. Whether intentional or an oversight, the situation highlighted the very real danger of a voice being used (or reused) without permission. When a voice can be cloned so convincingly, actors risk losing not just ownership of their performance but also control over what their voice is publicly associated with. For an industry built on trust and collaboration, this raises profound ethical concerns.
This is why clear, explicit consent must remain at the heart of ethical AI voiceover practice. As agencies and production companies navigate this new landscape, we frequently encounter contracts that include clauses permitting the cloning, synthesising or indefinite re-use of a performer’s voice. Some are overly complex, buried within technical language or broad usage rights that are easy to overlook. We make it a priority to flag any such clauses to our talent and encourage them to review their contracts in full, seeking clarification before agreeing to terms that could compromise their rights.
In cases like ScotRail’s, we have to question whether the artist’s voice was used for its original, intended purpose; the artist herself disputes that it was. This in turn raises broader concerns about ethical practice within the voiceover industry.
It’s completely reasonable for someone to object to having their voice used to promote certain products or industries, such as tobacco or fossil fuels. In fact, many music industry contracts include ethical waivers, and this “opt-out” model at least provides a legal framework. But when situations like this arise, we have to ask: what is the fairest and most responsible approach when a voice is used beyond the originally agreed purpose, even if the new usage isn’t in itself unethical?
Complicating matters further is the fact that AI voiceover exists in a legal grey area. Current legislation has not caught up with the pace of technological development, and debates around ownership, likeness rights, intellectual property and consent remain unresolved. Media organisations have been pushing for stronger government guidance, but progress has been slow.
Beyond ethics, AI often fails to meet the practical standards required for professional voiceover work, especially for clients like ScotRail: it still struggles with complex regional pronunciations, dialects and cultural nuance. One only has to look at a map of Wales to see how easily pronunciations can go wrong. Moreover, for visually impaired passengers who rely entirely on audio cues, accuracy and clarity are not luxuries but necessities. This is where human performance remains irreplaceable.
Ultimately, choosing AI over a human performer is a trade-off. AI may be more affordable, but cutting costs in the short term can create long-term reputational, legal and operational risks.
For voice actors, clients and producers navigating this shifting landscape, the most important principles remain straightforward: seek clarity and work with partners who prioritise ethical AI voiceover practices. AI has enormous potential, but only when used responsibly.
At The Voiceover Gallery, we believe that when AI voice technology is handled with care and transparency, it’s possible to embrace innovation without compromising the rights of the voice artists who power our industry.