asking this because i had a moment last week that made me feel kind of dumb and i want to know i’m not the only one.
i spent probably two months trying to get better outputs from AI tools by rewriting my prompts over and over in the same conversation. same session, same context window, dozens of iterations. kept getting frustrated that the outputs weren’t improving much.
turns out i was accumulating context in a way that was constraining the outputs. the model was trying to stay consistent with everything it had already said in the thread. starting a fresh session with a clean, improved prompt from the beginning gave me noticeably better results.
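for anyone who wants to see the difference concretely, here's a rough sketch of the trap vs. the fix (assuming the OpenAI python SDK, v1 style, and a placeholder model name; the same idea applies to any chat API):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# the trap: every rewrite gets appended to the same thread, so the model
# keeps trying to stay consistent with its own earlier answers
messages = []
for prompt in ["draft attempt 1...", "no, try it this way...", "closer, but..."]:
    messages.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant", "content": resp.choices[0].message.content})

# the fix: fold everything you learned into one clean prompt and start fresh,
# so none of the failed attempts are sitting in the context window
improved_prompt = "the prompt you'd write knowing everything you learned above"
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": improved_prompt}],
)
print(resp.choices[0].message.content)
```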
this is probably obvious to everyone here. it was not obvious to me for two months.
what’s yours?
mine was not knowing that you could give the model a persona or a specific “you are an expert in X with Y years of experience” framing and that it would actually change the quality of the output. i thought those prompts were gimmicky and ignored them for months.
tried it properly once on a whim and the difference was real. obvious in retrospect. not obvious at the time.
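if you want to try it, the cheapest version is a single system message. a minimal sketch below, assuming the OpenAI python SDK and an invented example question; any chat API with a system/user split works the same way:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

question = "Review this onboarding email for clarity and tone."

# same question, with and without the persona framing, so you can compare
for system in (None, "You are a senior copy editor with 15 years of experience editing B2B marketing emails."):
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    print("---", "persona" if system else "no persona")
    print(resp.choices[0].message.content)
```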
honestly took me a while to figure out that shorter prompts often work better than longer ones for creative or editorial tasks. i kept writing these detailed, over-specified briefs thinking more instructions = better output.
for some tasks, giving the model less to work with and more room to interpret actually produces better results. the over-specified prompt was constraining things i didn’t need to constrain.
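roughly what i mean, as a toy pair of briefs (both prompts invented for illustration; send each through whatever model call you normally use and compare):

```python
# over-specified: pins down structure, length, tone, and vocabulary, which
# ends up constraining choices the model might have made better on its own
overspecified = """Write a 600-word blog intro about remote work. Use exactly
three paragraphs. Open with a statistic. Maintain a professional but warm tone.
Avoid the words 'unprecedented', 'leverage', and 'synergy'. End the first
paragraph with a question."""

# minimal: states the goal and the one or two constraints that actually
# matter, and leaves the rest to the model
minimal = """Write a blog intro about remote work for an audience of skeptical
engineering managers. Warm but not salesy."""
```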
still figuring out the right balance for different content types.
the thing is, i knew in theory that providing examples of the writing style in the prompt would help with tone matching. i just never actually did it consistently because it felt like extra work.
started doing it for Notion AI and Claude outputs and the improvement in voice consistency was significant enough that i now consider it non-optional. should have started months earlier. i said what i said.
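concretely, the whole trick is pasting two or three representative samples above the task. a sketch of the Claude side below, assuming the anthropic python SDK; the sample text and task are obviously made up, and you'd swap in whatever model name you're on:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

# two or three short samples of the voice you want, pasted straight in
style_samples = """SAMPLE 1:
We shipped the new dashboard today. It's faster, it's cleaner, and it finally
shows you the numbers you actually asked for.

SAMPLE 2:
Quick heads up: maintenance window Thursday night. Nothing should break, but
save your work anyway."""

task = "Write a short announcement about our new API rate limits."

prompt = f"""Here are samples of our writing voice:

{style_samples}

Match that voice for the following task:
{task}"""

msg = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # swap in whatever model you use
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)
print(msg.content[0].text)
```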
In my experience: I spent too long treating AI outputs as drafts to edit rather than as starting points to rewrite.
Editing implies the structure and approach are basically right and you’re fixing surface issues. Rewriting implies you might throw away the structure and keep only useful phrases or facts. For complex or technical content, the rewrite framing produces much better results because it changes how critically you read the original output.
It's a simple mindset shift, but it took me a long time to internalize.
That I could ask it to explain its own reasoning before giving the final answer, and that doing so often improved the answer itself.
“Think through this step by step before answering” is one of those prompt engineering tricks that sounds like a placebo but demonstrably changes what you get. I resisted using it because it felt unnecessary. It isn’t.
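If you want to see it for yourself, the minimal A/B test is one extra sentence. A sketch below, assuming the OpenAI Python SDK; the question is invented and the prompt wording is just one common phrasing:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

question = "A jacket costs $120 after a 25% discount. What was the original price?"

# run the same question plain, then with the step-by-step instruction appended
for suffix in ("", "\n\nThink through this step by step before giving the final answer."):
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question + suffix}],
    )
    print("---", "with reasoning request" if suffix else "plain")
    print(resp.choices[0].message.content)
```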