From Asimov to Adobe: Navigating the Ethical Landscape of Generative AI


Generative AI has arrived in the studio the same way every disruptive tool eventually does: useful, occasionally astonishing, and deeply uncomfortable to think about for too long. We use it on real briefs, we argue about it over coffee, and we genuinely don't know how to feel about half of what it makes possible. That's the honest starting point.

An honest take on a tool we use every day

Another tool in the toolbox

Visuals, text, audio, code. The list of things you can now generate from a prompt has stopped being a novelty and started being a workflow. For us, generative AI is exactly that: a tool inside a creative-technical pipeline, not a replacement for the people running it. It's a fast way to spark a direction, prototype an idea, or move a client conversation forward in an afternoon instead of a fortnight. The interesting work still happens after the prompt.

The dangers of prompt engineering

Isaac Asimov spent a career writing stories about humans finding loopholes in laws designed to make robots safe. Seventy years later we have a name for that exact game: prompt engineering. Studies are now showing that AI systems can be deliberately deceptive, which makes the question of who's responsible for an AI's output considerably harder than the marketing suggests.

It's fascinating to see clever prompts pull unexpected things out of a model. It's also depressing to watch the same techniques put to work for political manipulation, bullying and scams. The "alarming" AI-generated Taylor Swift images, prompting even the White House to comment, were only the latest reminder that the abuse pattern arrives roughly five minutes after the capability does.

None of this makes the tool bad. Our own team-portrait experiments with generative AI and face-swapping are a small example of what the same tech does when it's pointed at something playful. But bias, copyright infringement, hallucination and plain untrustworthiness are real, and they aren't going to be solved by enthusiasm.

Solarflare team portraits generated with generative AI and face-swapping

The AI impact on jobs

Plenty of artists see AI as a threat to their livelihood, and they're not wrong to worry. Voice actors and audiobook narrators are watching AI-generated voices get good enough to be commercially viable. 3D-scanned actors and AI-generated performances raise real questions about smaller roles and extras, where the economic case for replacement is sharpest.

The growing debate about regulation is overdue. Where the line sits between technological progress and the preservation of livelihoods is a genuinely hard question, and "the market will figure it out" is not an answer we find convincing. Who's responsible when a job category gets hollowed out by a tool nobody asked permission to deploy? That conversation is still very much open.

Safety through exposure

Calls for moderation are getting louder. The Biden administration, the EU with its AI Act, and the UK with its Online Safety Act are all taking early swings at AI rulemaking, with "fake news" the headline driver. Trust in the source of a piece of media has always been the foundation of credibility, and that foundation is harder to keep stable when anyone can fabricate the source.

Here's the catch though: people can only fact-check what they know to be possible. Plenty of viewers still don't realise you can clone a voice or lip-sync existing footage to a fresh script. If you don't know the tech exists, seeing remains believing. That's why we keep coming back to a slightly counterintuitive position: the more tech demos and odd proofs of concept the industry puts out, the better. Public literacy about what AI can do is itself a form of safeguard. Safety through exposure.

Copyright, ownership and inspiration

Which brings us to credit. How do creatives keep getting recognised, and paid, for work that shows up inside a model's training data? In some of our recent projects we've watched a small but useful trend take shape: services that compensate artists by working only from verified, approved input data, and that keep a record of what fed which output. Adobe, among others, has put real weight behind the question of ownership and provenance. None of this is solved, but the fact that it's now a commercial pitch and not just an ethics-panel topic is a step in the right direction.

Education

AI is going to reshape education in ways we can already see the edges of. Personalised pacing, content in any language, materials that adapt to the learner instead of the other way round. The same toolkit can also be used to teach people about AI itself, including its risks. That symmetry matters.

The harder question is who controls the educational layer. Governments, for-profit big tech, or something held in the public domain? Each option carries different incentives, different blind spots, and different consequences for who gets shaped by the thing. We don't have a clean answer. We're not sure anyone does yet.

What we actually think

AI is a tool, not an end product. Knowing the keyboard shortcuts in Premiere doesn't make you Spielberg, and a strong prompt won't make you van Gogh. The comparison we keep coming back to is the early internet: enormous good, real harm, and a long stretch of figuring out the difference in public. The honest position is to be cautious and excited at the same time, and to keep using the tool while staying clear-eyed about where it should and shouldn't go.

Get in touch if any of this maps to a project you're trying to build responsibly.

