Why AI Makes Marketing Faster But Not Easier
- Angela Troccoli

I want to give an honest account of what using AI in marketing actually feels like day to day, because the conversation tends to stay at a level of abstraction that doesn't match my experience.
AI has changed how I work. Genuinely. Some things that used to take a day take an hour. I use it constantly. But the idea that it's simplified marketing — that it's reduced the expertise required or made the work more straightforward — doesn't match what I see.
The work is still fragmented
Here's what most AI-assisted marketing workflows actually look like: break the task into steps, generate something, review it, adjust, generate again, move it into the next tool, repeat. Each handoff requires judgment and each output needs to be evaluated against something you already know.
The connective tissue between steps is still mostly manual. I'm starting to see more integrated workflows and tools that chain tasks together in ways that reduce how many times you have to pick something up and carry it somewhere else. But we're not close to describing a campaign at a high level and getting back something you could actually use. The gap between what AI can generate and what's ready to deploy is still real, and bridging it requires experience.
What happens when you ask too broadly
The outputs you get are usually a direct reflection of how specifically you asked.
Ask for 'a campaign to drive awareness for a new enterprise product' and you'll get something technically responsive and practically generic. A headline, some ad concepts, maybe a few email subject lines. It answers the question but it won't move anyone.
You have to work through it in layers: start with the ICP and their specific pain, build to a positioning statement, develop a campaign concept from there, then move to individual assets. The quality improves at each step because every output is grounded in a decision that's already been made. You can evaluate each piece and course-correct before you're three steps downstream from a bad premise.
Knowing how to sequence that kind of work isn't something you learn from a prompt guide. It comes from understanding marketing well enough to know what questions need answering before you can answer the next ones.
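The layered sequence above can be sketched as a simple pipeline where each step's output becomes context for the next. This is only an illustration: `ask_model` is a placeholder for whatever model call you actually use, and the step prompts are assumed wording, not a prescribed method.

```python
# Sketch of layered campaign development: each prompt is grounded in
# the output of the step before it, so a bad premise surfaces early
# instead of three steps downstream.

def ask_model(prompt: str) -> str:
    """Placeholder for a real model call (ChatGPT, Claude, etc.)."""
    return f"[model output for: {prompt[:40]}...]"

def build_campaign(product: str) -> dict:
    icp = ask_model(f"Describe the ICP and their specific pain for {product}.")
    positioning = ask_model(
        f"Given this ICP and pain:\n{icp}\nWrite a positioning statement."
    )
    concept = ask_model(
        f"Given this positioning:\n{positioning}\nPropose a campaign concept."
    )
    assets = ask_model(
        f"Given this concept:\n{concept}\nDraft headlines and email subject lines."
    )
    # Every intermediate result is kept so it can be reviewed and
    # course-corrected before the next step runs.
    return {"icp": icp, "positioning": positioning,
            "concept": concept, "assets": assets}

result = build_campaign("a new enterprise product")
```

The point of the structure isn't the code itself; it's that each step is a reviewable checkpoint rather than one opaque request.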
Templates over prompts
The most useful adjustment I've made is shifting from prompts to templates wherever I can.
A prompt is language, and language is open to interpretation. When you ask for a case study, the model decides the structure, the length, the emphasis, the tone. It might make fine choices but they probably won't be your choices.
A template shows the model (ChatGPT, Claude, etc.) what you actually want, from the sections and the approximate length of each to the type of content that belongs where. Instead of describing the thing, you're showing it. And the outputs come back far more consistent and far closer to usable.
I use this for positioning docs, one-pagers, briefing decks, email sequences — anything where the structure carries as much meaning as the content. Give the model a populated example and you spend a lot less time reformatting what it gives you back.
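As a rough illustration of the shift, here's what a bare prompt versus a template can look like when you build the request programmatically. The section names and word counts below are assumptions for the sake of example, not a fixed format; the point is that the structure is spelled out rather than left to the model.

```python
# A bare prompt leaves structure, length, and emphasis to the model.
bare_prompt = "Write a case study about Acme Corp's rollout."

# A template spells out the sections, approximate lengths, and the kind
# of content that belongs in each, so outputs come back in a consistent,
# near-usable shape. (Section names and lengths here are illustrative.)
case_study_template = """\
Write a case study using exactly this structure:

## Challenge (~100 words)
The customer's situation before, and the specific pain.

## Solution (~150 words)
What was implemented, and why this approach.

## Results (~100 words)
Lead with the strongest proof point; include concrete numbers.

## Quote
One short customer quote that supports the proof point.

Customer: {customer}
Product: {product}
"""

prompt = case_study_template.format(customer="Acme Corp",
                                    product="our analytics platform")
```

The same idea works with a populated example document pasted in place of the skeleton; either way, the structure travels with the request instead of being re-decided every time.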
What you still have to bring yourself
BCG research on generative AI found that professionals with strong domain expertise consistently outperformed those who relied on AI without it, especially on tasks requiring judgment.
That lines up with what I see. The marketers getting the most out of these tools are the ones who bring real expertise to the interaction: they already know what a strong value proposition sounds like, can tell when a message is too vague, and catch when a case study buries the proof point.
AI surfaces possibilities quickly. It still takes someone who knows what they're looking at to know which ones are worth pursuing.