Content Writer Interview Questions That Go Beyond the Portfolio


The most common mistake in content writer interviews is treating the portfolio like a test. It isn’t. A portfolio shows you what someone produced when they had unlimited time, picked their best work, and chose the topic. None of those conditions exist on the job.

Content writing is also the one creative role where almost every hiring manager’s subjectivity becomes the main evaluation tool. You read a sample and either like it or you don’t. That preference has very little to do with whether the person can write to your brand voice, hit a topic calendar, take structural edits without shutting down, or produce consistently under deadline pressure.

The questions below are the filter. And as Mediabistro has noted in our guide to hiring for creative roles, even a targeted job board can still surface applicants who “don’t know what a CMS is and applied to your editorial director posting with a resume that says ‘detail-oriented team player.’” Good interview questions are the layer of evaluation that job boards can’t do for you.

Why Content Writer Interviews Are Harder Than They Look

There’s a skills-transfer problem that rarely gets discussed. A content writer who looks spectacular in an interview may have spent their career writing longform journalism, which transfers poorly to the 800-word explainer with three CTAs and a keyword target. A writer who looks less polished may have spent five years doing exactly that kind of constrained work and has the organic traffic numbers to show for it.

Content writing also spans a wider range than the job title implies: brand blog posts, SEO-driven explainers, email sequences, case studies, white papers, scripts for video, social copy, UX microcopy. A writer who excels at one is often mediocre at another. Your questions need to surface whether their range matches what you actually need — not just whether they can write well in the abstract.

Finally, AI has made this evaluation harder. Candidates now routinely submit polished samples that partially or substantially reflect AI output. That doesn’t automatically disqualify a candidate, but it means your questions need to probe process and judgment in ways they didn’t three years ago.

Eight Questions and What to Listen For

The questions below are organized around the five things that predict on-the-job performance most reliably: process, collaboration, AI judgment, measurement, and self-awareness. For each question, there’s a note on what you’re actually testing, what a strong answer sounds like, and what raises concern.

Process

“Walk me through how you approach a brief you’ve never written about before.”

What you’re testing: Research depth and intellectual honesty. Can they go from zero to credible without fabricating authority they don’t have?

Strong answer: A real research sequence. Primary sources first (subject matter expert interviews, primary data), then secondary (industry reports, published research, competitor analysis). They mention how they decide what they don’t know well enough to write about confidently, and they describe when they’d flag to an editor that a topic requires more sourcing than the brief assumed.

Concern: “I research online and then write.” No specificity about what research means. No acknowledgment that some topics require more sourcing time than others, and no mention of what happens when they can’t find what they need.

“Describe how you handle a brief that doesn’t give you enough to work with.”

What you’re testing: Proactivity and professional initiative. A writer who waits to be fully briefed before doing anything useful will frustrate most editorial and content teams quickly. Editors want writers they can rely on to move forward with minimal hand-holding — a point Mediabistro has heard from editors at publications across the country: the writers who stand out are the ones who “can pitch great stories” and require minimal direction once the assignment is given.

Strong answer: They name specific things they do without being asked: reading the brand’s existing published content, pulling competitive examples, drafting a quick outline for sign-off before writing the full piece. Bonus if they’ve ever written the brief themselves when a client didn’t provide one.

Concern: “I ask them what they want.” One step, no follow-through. A writer who needs the brief to be complete before they can start is going to be a bottleneck.

Collaboration

“Tell me about the most significant structural edit your work has received. What changed, and what did you take from it?”

What you’re testing: Ego and professional flexibility. Content writing is inherently collaborative, and a writer who can’t accept meaningful edits — not copy tweaks, but real structural changes — will create friction with every editor they work with.

Strong answer: A real example where the edit wasn’t small. The lede buried three paragraphs in, the argument that didn’t hold under scrutiny, the section that got cut entirely. They can articulate why the editor’s instinct was right even if their first reaction was resistance. Extra credit if they describe a time they pushed back and won, with a rational argument rather than an emotional one.

Concern: “Most editors don’t change much of my work.” Either they’ve worked exclusively with light-touch editors, or they’re not being accurate. Neither tells you what you need to know.

“How do you write in a brand voice that’s different from your own?”

What you’re testing: Range and adaptability. Most content writing positions require writing in someone else’s voice, consistently, over time. That is a distinct skill from having a strong personal voice.

Strong answer: They have a method. Reading a deep sample of existing content before starting. Building an internal style guide if one doesn’t exist. Asking for annotated examples: “Can you mark a recent post with what you liked and what felt off?” They can describe a situation where their personal preference conflicted with the brand voice and they set it aside without resentment.

Concern: “I think my voice is pretty adaptable.” Adaptability is a trait, not a method. Push for the actual process.

AI Judgment

“Walk me through how you currently use AI in your writing workflow.”

What you’re testing: Transparency, judgment, and a real understanding of what AI produces well versus where it falls short. This question has become non-optional for content hiring in 2025. The issue, as Mediabistro covered in our piece on AI prompting for writers, is that “using AI too much in your writing can be a huge issue. The AI and ranking systems themselves try their best to spot and reward human perspectives and opinions.” What you want to know is whether the candidate has internalized that distinction or is relying on AI output without a critical framework for evaluating it.

Strong answer: Specific about where AI enters the workflow (ideation, headline testing, outline review, grammar pass) and what it doesn’t touch (original arguments, primary research synthesis, brand voice, final prose). They explain why each boundary exists. They can describe a situation where AI output was unusable and what they did instead.

Concern: “I don’t use AI at all” is probably not accurate and suggests low curiosity about current tools. “I use it for the first draft and then edit it” suggests no framework for distinguishing what AI does competently from what requires human judgment. Both answers leave you without useful signal.

Measurement

“Tell me about a piece you wrote that you can track the performance of. What happened?”

What you’re testing: Whether they think in outcomes, not just outputs. Content writers who have never tracked how their work performs can produce a lot of polished material that does nothing. The BLS puts the median annual wage for writers and authors at $72,270. At that rate, you need writers whose work moves metrics, not just writers who write well.

Strong answer: A specific metric named: organic traffic growth, time on page, email open rate, lead generation, social shares. They describe what the content was designed to achieve, what actually happened, and what they’d do differently based on what they observed. The piece doesn’t have to be a success story — a performance miss they learned from is just as valuable.

Concern: “I don’t usually have access to analytics.” This may be true for some freelancers, but it’s worth asking whether they’ve ever requested it. A writer who has spent years producing content without ever asking about results is operating without feedback, which tends to produce writers who stop improving.

Self-Awareness

“What kind of content do you find genuinely difficult to write well?”

What you’re testing: Honest self-assessment. Content writers all have strong suits and weaker areas. The ones who know what theirs are create fewer surprises once they’re hired.

Strong answer: Named and specific. Deeply reported longform, technical content without subject matter expert access, UX microcopy, video scripts, and email sequences are distinct skills that not every content writer carries. They describe how they handle assignments that land in their weak zones: asking for extra time, pulling in supplemental research support, or flagging to the editor upfront so expectations are calibrated.

Concern: “I’m comfortable writing about almost anything.” That describes range, not self-awareness. Writers who claim no weak spots haven’t done enough varied work or aren’t paying close attention.

“Which piece in your portfolio do you wish you’d never published?”

What you’re testing: Critical distance from their own work. Writers who think everything they’ve published is excellent aren’t reading their own work carefully enough.

Strong answer: A specific piece with a specific diagnosis. “The argument was sound but the structure buried the main point in paragraph six.” “The headline overpromised and the body couldn’t deliver.” They can identify exactly what failed without blaming the brief, the editor, or the client.

Concern: “I’m proud of everything I’ve published.” That’s either not true or suggests a writer who isn’t growing. A second variant to watch for: they pick a piece and then explain why it was actually fine given the constraints. That answer is framed as self-criticism but functions as a defense.

On Writing Tests

Whether to assign a writing test depends on what you’re actually trying to learn. An in-interview timed prompt — 30 to 45 minutes, brief provided in the room — tells you how candidates work under pressure and whether they can interpret a brief accurately on the first read. It also eliminates AI-augmented take-home samples as a variable.

A take-home test tells you how they work when they have time, which is closer to the actual job but harder to evaluate authentically given the current state of AI tools. If you go the take-home route, brief it as specifically as possible: target audience, goal, word count, keyword if relevant, one example of existing content in a similar vein. Vague briefs produce vague writing, and you’re testing their ability to execute your brief, not their ability to invent one.

The Editorial Freelancers Association’s rate guide puts professional content rates at $0.15 to $0.50 per word depending on complexity and expertise. If you’re asking for an unpaid sample of more than 600 words, experienced writers will decline. Cap the test or compensate for it.

Red Flags in the Conversation

  • They can’t name a piece of content — from any brand, not necessarily their own — that they think is genuinely excellent and explain why it works. Writers who don’t read widely make narrow writers.
  • Every portfolio piece is attributed to “a client” with no specifics. Some NDAs are real, but a consistent pattern of opacity about their work history raises questions.
  • They describe their writing process in terms of output (word count per day, turnaround time) rather than input (research depth, brief interpretation, revision approach). Fast production is a feature of their workflow. It isn’t a process.
  • They haven’t asked a single question about your content strategy, publishing cadence, or how editorial feedback works on your team by the second interview. Writers who don’t vet the editorial environment tend to be a poor fit once they’re inside it.

Where to Find Content Writers Worth Interviewing

Content writers with genuine media, editorial, or publishing backgrounds — the kind who understand CMS workflows, have worked with style guides, and can operate in a brand voice without constant oversight — are disproportionately reachable through industry-specific channels. Mediabistro’s job board is built for exactly this audience. Post your role at mediabistro.com/post-jobs. For help writing the listing itself, What Does a Copywriter Do? covers the adjacent role and the skills that overlap, which is useful when you’re deciding which requirements to make mandatory versus preferred.

Before the next interview, settle two things in advance: what a strong answer to the AI question looks like for your specific content operation, and what you’re actually trying to measure with a writing test if you plan to assign one. Deciding those things before the candidate walks in is how you get useful signal instead of confirming whatever impression the portfolio already gave you.
