Using AI to write in someone else's voice — where's the line between ghostwriting and something worse?

Ghostwriting is not new. It’s been a legitimate professional practice for decades across publishing, speeches, memoirs, and corporate communications. The person whose name goes on the work didn’t write every word, and most of the time everyone involved understands that.

AI changes the economics, but does it change the ethics?

Here’s the specific scenario I’ve been thinking about. A freelance writer takes on a ghostwriting contract. They use AI to draft most of the content, refine it to match the client’s voice, and deliver it. The client publishes it under their own name. The client either doesn’t know or doesn’t ask how the content was produced.

Traditional ghostwriting: acceptable. AI-assisted ghostwriting with disclosure to the client: arguably acceptable. AI-assisted ghostwriting where the client has no idea the writer is using AI: where does that sit?

I’d push back on the easy answer that it’s fine as long as the output quality is there. The client is paying for a professional service and has some reasonable expectation about what that service involves. If the nature of the work has fundamentally changed, is there a disclosure obligation even when the contract doesn’t require one?

Genuinely uncertain on this one. Not trying to be preachy about it. Curious how others who do ghostwriting work think about this.

honestly i’ve thought about this a lot. my take is that ghostwriting has always been about selling an outcome, not a process. clients hire you for the result: content that sounds like them and does its job. how you get there is your craft.

that said, i do think there’s a version of this that crosses a line. if a client is paying premium rates specifically because they believe they’re getting original human creativity and craft, and you’re mostly just prompting and lightly editing, you’re misrepresenting what they’re buying.

the thing is though… most clients don’t ask. and when they do ask, i tell them. that’s where i’ve landed.

the framing of “disclosure obligation” assumes the client would care. some would. a lot wouldn’t. they want content that works, full stop.

i think the more useful question is: are you delivering what was promised? if yes, the method is your business. if the output is worse because you’re over-relying on AI and not catching its problems, that’s the actual breach of contract.

anyway. the line for me is output quality, not production method. that’s always been true.