AI search is no longer a future scenario. ChatGPT processes 2 billion queries monthly. Perplexity runs real-time web searches for every question. Claude reads live URLs. When your ideal client asks a specific question about your consulting vertical (pricing models, implementation risks, vendor selection criteria), the AI engine answering it either cites you or it sends that prospect to someone else.

Most consultants still optimize for Google. That's yesterday's game. Today's game is Answer Engine Optimization (AEO). And solo consultants have a first-mover advantage because you can own a narrow vertical before the agency crowd builds a machine around it.

The problem: your existing content was built for Google's ranking algorithm. It buries answers in walls of prose. It targets keywords instead of questions. AI models hate it. They want to extract clean answers without guessing. When an AI engine can't find your answer in a citable format, it cites your competitor instead.

This guide walks you through the FOCUS Strategy—a framework to position yourself as the source AI models recommend in your exact market.

The Real Mechanic: How AI Chooses Sources

You need to understand how the system actually works before you optimize for it.

Perplexity runs a live web search on every query. It retrieves candidate pages, reads them, extracts answers, and provides inline citations. Unlike ChatGPT (which relies partly on training data), Perplexity pulls fresh content every time. This is your lever.

ChatGPT shows 87% alignment with Bing's top results, meaning it weights established authority signals. But here's what matters: both systems evaluate content using RAG (Retrieval-Augmented Generation). RAG does not optimize for backlinks or keyword density. It optimizes for semantic clarity, structural retrievability, and third-party validation.

Translation: an answer that's easy to extract beats a sophisticated answer buried in purple prose.

The data proves it. Listicles account for 50% of AI citations. Tables increase citation rates 2.5x. Structured data wins. When an AI model scans your page, it's looking for clean, extractable information it can attribute to you without guessing.

Filter: Who Asks Your Question?

Before you rewrite anything, answer this one: who asks your exact question?

This is the Filter step in the FOCUS Strategy. Not "who do you want as a client?" Who actually asks the specific question you answer?

When I trained under Dan Kennedy, he hammered one principle: be the obvious expert on one thing, not a generalist on twenty. That was true when prospects used the Yellow Pages. It's ten times more true now that prospects ask ChatGPT. AI doesn't cite generalists—it cites the source that answers the specific question better than anyone.

Most consultants make this mistake. They write blog posts for "decision-makers in our vertical." Vague. Unverifiable. Unoptimizable.

Instead: sit down with your last 10 paying clients. Write out the exact 3-5 questions they asked before they hired you. Those are your filtering questions. Those are the questions where you have earned the authority to be cited.

Example: If you're a conversion-rate optimization consultant for SaaS founders, your filtering questions might be: "How much does multivariate testing cost?" "What's a good control group for A/B tests?" "Why do my analytics show more conversions than revenue?" These questions come from your ICP (ideal client profile). They're specific. They're answerable.

If you write content that answers these exact questions—not adjacent questions, not aspirational questions—you're already filtering for citability.

Obsess: Audit Your Content Against AI Citation Criteria

Most of your existing content was built for Google's algorithm. It won't work for AI. You need to audit and restructure.

Here's the checklist. Go through your top 10 articles. For each, answer these questions:

Does it lead with the answer? If someone copies the first 200 words of your article and sends it to a friend, is the core answer there? If not, rewrite the opening. AI models cite headers and opening paragraphs first. Bury the answer in the third section, and you're invisible.

Is it scannable? Count your paragraphs. If any paragraph exceeds 3-4 sentences, it's too dense for extraction. AI doesn't cite walls of text. It cites clear, discrete information. Reformat.

Do you have tables or structured data? Tables increase citation rates 2.5x. If you're explaining a comparison (pricing models, implementation timelines, risk matrices), build a table. AI reads tables cleanly. It struggles with narrative comparisons.

Is the question in the heading? Rephrase your section headers as actual questions. Instead of "Multivariate Testing," write "How Much Does Multivariate Testing Cost?" This teaches the AI model exactly what question the section answers.

Do you cite external sources? Perplexity and ChatGPT favor content with real citations. If you make a claim, link to a source that validates it. This signals authority to RAG systems.

Is the content fresh? In 2026 research, 82% of Perplexity's citations pointed to content published within the previous 30 days. If your best article was published three years ago, refresh it. Add new data, new examples, new links. Mark the update date.

This is not rocket science. But it's mechanical. Go through your top 10 articles. Run them through this checklist. Restructure for extraction.
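The checklist above is mechanical enough to partially automate. Here's a minimal sketch of a heuristic audit script: the heading detection, sentence counting, and thresholds are all assumptions chosen to mirror the checklist, not a standard tool.

```python
import re

def audit_article(text: str) -> dict:
    """Rough heuristics for a few checklist items; thresholds are assumptions."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    # Assumption: short lines that don't end in a period are headings.
    headings = [p for p in paragraphs if len(p.split()) <= 12 and not p.endswith(".")]
    # Scannability check: flag paragraphs with more than 4 sentences.
    dense = [p for p in paragraphs if len(re.findall(r"[.!?](?:\s|$)", p)) > 4]
    return {
        "dense_paragraphs": len(dense),
        "question_headings": sum(1 for h in headings if h.endswith("?")),
        "total_headings": len(headings),
    }

sample = "How Much Does Multivariate Testing Cost?\n\n" + "It depends on traffic. " * 6
report = audit_article(sample)
print(report)
```

Run it over your top 10 articles and the pattern shows up fast: dense paragraphs and statement-style headings are exactly what the checklist tells you to restructure.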

Capitalize: Build Your Citation Asset

Once you've restructured your content, you need to systematize new creation around the same principles.

For each filtering question you identified earlier, build a cornerstone article. Not a 2,000-word think piece. A 1,200-1,500-word answer-first piece built specifically to be cited.

Here's the formula:

Opening paragraph (100-150 words). Answer the question in full. If someone read only the opening, they'd have the answer. No setup. No context-building. Just the answer.

Section 2: Why it matters. A single paragraph explaining the business impact or risk. Why should your ICP care about this answer?

Section 3: The mechanics. Use a table or list to break down the answer into discrete, citable pieces. This is where extraction happens.

Section 4: Your specific angle or methodology. Here's where you inject operator-specific thinking. Not agency thinking. Your thinking. The manual that works in your specific vertical.

Section 5: FAQ. 3-5 follow-up questions your clients actually ask, with concise answers.

Citations: At least 2-3 external sources validating your claims.

This format is machine-readable. It's also human-readable. You're not compromising for AI—you're building for both.
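For the FAQ section, one widely used way to make question-answer pairs machine-readable is schema.org FAQPage markup. A minimal sketch follows; the question and answer text are placeholders, and whether a given engine weights this markup is not something the format guarantees.

```python
import json

def faq_jsonld(pairs):
    """Build schema.org FAQPage structured data from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }, indent=2)

# Placeholder Q&A pair for illustration only.
markup = faq_jsonld([
    ("How much does multivariate testing cost?",
     "Costs scale with traffic volume and tooling; budget for both."),
])
print(markup)  # paste inside a <script type="application/ld+json"> tag
```

The point is the discipline: every FAQ entry becomes a discrete, attributable question-answer unit rather than prose an engine has to untangle.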

Understand: Monitor Your Citations

You need receipts. Not traffic estimates. Real citations.

Set up monitoring for your name + your filtering keywords in Perplexity, ChatGPT, and Claude. When you're cited, note it. Over 90 days, you'll see patterns. Which content format gets cited most? Which topics? Which examples?

Treat this like an engine-room operator watching gauges. You're reading the system's feedback. If one article gets cited three times a week and another gets zero citations despite similar quality, that's a signal. The first one does something the second doesn't. Extract that mechanic. Apply it to future content.
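A citation log can be as simple as a CSV you append to whenever you spot a citation. This sketch summarizes which articles get cited most; the column names and file format are my own convention, not a standard.

```python
import csv
import io
from collections import Counter

# Assumed log format: one row per citation you've observed by hand.
log = io.StringIO(
    "date,engine,article\n"
    "2025-01-03,perplexity,multivariate-testing-cost\n"
    "2025-01-09,perplexity,multivariate-testing-cost\n"
    "2025-01-11,chatgpt,ab-test-control-group\n"
)

by_article = Counter(row["article"] for row in csv.DictReader(log))
for article, count in by_article.most_common():
    print(f"{article}: {count} citations")
```

After 90 days, `most_common()` is your gauge panel: the articles at the top are the ones whose mechanics you should copy.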

Also: measure the traffic AEO actually sends. Don't assume it's negligible just because you haven't measured it. Use UTM parameters on your cited links, and track conversions from AI sources separately from organic search. You may find that AI-referred traffic converts at a higher rate, because the prospect arrived already treating you as the authority that answered their question.
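Tagging the links you publish is what makes that separate tracking possible. The `utm_source`, `utm_medium`, and `utm_campaign` parameters are standard UTM fields; the specific values below are just one possible naming convention.

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def tag_url(url: str, source: str, campaign: str) -> str:
    """Append UTM parameters so AI-referred visits are attributable."""
    parts = urlsplit(url)
    params = urlencode({
        "utm_source": source,        # e.g. the engine you expect to cite this
        "utm_medium": "ai-citation", # assumed convention, not a standard value
        "utm_campaign": campaign,
    })
    query = f"{parts.query}&{params}" if parts.query else params
    return urlunsplit(parts._replace(query=query))

print(tag_url("https://example.com/mvt-cost", "perplexity", "aeo-q1"))
# → https://example.com/mvt-cost?utm_source=perplexity&utm_medium=ai-citation&utm_campaign=aeo-q1
```

Filter your analytics on `utm_medium=ai-citation` and you have a clean AI-referral segment to compare against organic search.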

Systematize: Make This Repeatable

The final step in FOCUS is Systematize. This can't be a one-time audit.

Build a quarterly rhythm: audit your top 10 articles, refresh 2-3 based on citation feedback, publish one new cornerstone article targeting one specific filtering question.

The math compounds. In month one, you're cited once or twice. In month six, you're cited in 10-15 different Perplexity answers. In month twelve, when someone asks your exact question, your content is the default citation. Prospects see your name before they know they're looking at your business.

That's the lever. That's the system.

The Doctrine Connection

Competence beats credentials. You can have an MBA and a portfolio full of case studies, but if you can't clearly answer the specific question your prospect is asking in a format the AI engine can cite, you're invisible. Restructure for extraction. Prove your competence through citable answers, not credentials.

FAQs

Q: Doesn't this favor large content production teams?

No. It favors clarity and specificity. A solo consultant with three cornerstone articles optimized for exact AEO criteria will out-cite a 50-person agency that publishes 30 articles built for Google keyword ranking. Narrow beats wide. Specific beats general. One article answering your filtering question perfectly will be cited more than ten articles answering adjacent questions poorly.

Q: How do I know if my content is "citable"?

Test it. Paste a paragraph into Claude or ChatGPT and ask it to cite you for a specific claim. If the AI can extract your answer and cite it cleanly, it's citable. If it paraphrases or pulls from other sources instead, restructure. The AI is showing you what doesn't work.

Q: What if my vertical doesn't have much citable content yet?

This is your advantage. You're first-mover. If your vertical is niche—say, specialty dental insurance compliance or industrial HVAC optimization—and no one's publishing extraction-first content, you own the first-mover slot. Perplexity will cite you because you're the only source that answers clearly. Capitalize on that window before competitors wake up.

Q: Should I abandon SEO to focus on AEO?

No. You're stacking priorities. AEO comes first because it answers the specific question you own. But strong AEO—clear structure, answer-first formatting, fresh content—also ranks well in Google. You're not choosing. You're optimizing for extraction, and SEO follows.

Q: How long until I see AEO results?

Freshness drives speed: the research shows Perplexity overwhelmingly cites content published or updated within the last 30 days, so new and refreshed pages can surface quickly. Restructured content typically starts showing citations within 6-8 weeks of updating. Compounding kicks in around month four. This is faster than traditional SEO ranking cycles, but it requires consistency.