In partnership with HubSpot Media

Unlock the Social Media Tactics That Work Right Now

Is your social strategy ready for what's next in 2025?

HubSpot Media's latest Social Playbook reveals what's actually working for over 1,000 global marketing leaders across TikTok, Instagram, LinkedIn, Pinterest, Facebook, and YouTube.

Inside this comprehensive report, you’ll discover:

  • Which platforms are delivering the highest ROI in 2025

  • Content formats driving the most engagement across industries

  • How AI is transforming social content creation and analytics

  • Tactical recommendations you can implement immediately

Unlock the playbook—free when you subscribe to the Masters in Marketing newsletter.

Get cutting-edge insights, twice a week, from the marketing leaders shaping the future.

YC mentors just pulled back the curtain on what their top AI startups are actually doing with prompts.

Spoiler: They're not writing "please summarize this article" one-liners. They're shipping long, specific prompts with XML-structured instructions that look more like code than English.

One of the mentors compared it to coding in 1995. Raw, messy, but insanely powerful. They walked through real production prompts from Parahelp, Giga ML, and others. These aren't theory. They're the exact patterns closing seven-figure deals.

I took notes on everything. Here are the five techniques that separate production AI from ChatGPT experiments.

Let’s dive right into it!

Technique #1 — Structure Prompts Like Code

YC mentors showed that Parahelp writes very detailed, structured prompts instead of prose, building full programming logic (defined step formats, conditionals, variables) into the prompt itself.

Real example from Parahelp's planning prompt:

## Plan elements

- A plan consists of steps.
- You can always include <if_block> tags to include different steps based on a condition.

### How to Plan

- When planning next steps, make sure it's only the goal of next steps, not the overall goal of the ticket or user.
- Make sure that the plan always follows the procedures and rules of the # Customer service agent Policy doc

### How to create a step

- A step will always include the name of the action (tool call), description of the action and the arguments needed for the action. It will also include a goal of the specific action.

The step should be in the following format:
<step>
<action_name></action_name>
<description>{reason for taking the action, description of the action to take, which outputs from other tool calls that should be used (if relevant)}</description>
</step>

- The action_name should always be the name of a valid tool
- The description should be a short description of why the action is needed, a description of the action to take and any variables from other tool calls the action needs e.g. "reply to the user with instructions from <helpcenter_result>"
- Make sure your description NEVER assumes any information, variables or tool call results even if you have a good idea of what the tool call returns from the SOP.
- Make sure your plan NEVER includes or guesses on information/instructions/rules for step descriptions that are not explicitly stated in the policy doc.
- Make sure you ALWAYS highlight in your description of answering questions/troubleshooting steps that <helpcenter_result> is the source of truth for the information you need to answer the question.

- Every step can have an if block, which is used to include different steps based on a condition.
- An if block can be used anywhere in a step or plan and should simply be wrapped with the <if_block condition=''></if_block> tags. An <if_block> should always have a condition. To create multiple if/else blocks just create multiple <if_block> tags.

### High level example of a plan

_IMPORTANT_: This example of a plan is only to give you an idea of how to structure your plan with a few sample tools (in this example <search_helpcenter> and <reply>), it's not strict rules or how you should structure every plan - it's using variable names to give you an idea of how to structure your plan, think in possible paths and use <tool_calls> as variable names, and only general descriptions in your step descriptions.

Scenario: The user has an error with feature_name and has provided basic information about the error

<plan>
    <step>
        <action_name>search_helpcenter</action_name>
        <description>Search helpcenter for information about feature_name and how to resolve error_name</description>
    </step>
    <if_block condition='<helpcenter_result> found'>
        <step>
            <action_name>reply</action_name>
            <description>Reply to the user with instructions from <helpcenter_result></description>
        </step>
    </if_block>
    <if_block condition='no <helpcenter_result> found'>
        <step>
            <action_name>search_helpcenter</action_name>
            <description>Search helpcenter for general information about how to resolve error/troubleshoot</description>
        </step>
        <if_block condition='<helpcenter_result> found'>
            <step>
                <action_name>reply</action_name>
                <description>Reply to the user with relevant instructions from general <search_helpcenter_result> information </description>
            </step>
        </if_block>
        <if_block condition='no <helpcenter_result> found'>
            <step>
                <action_name>reply</action_name>
                <description>If we can't find specific troubleshooting or general troubleshooting, reply to the user that we need more information and ask for a {troubleshooting_info_name_from_policy_2} of the error (since we already have {troubleshooting_info_name_from_policy_1}, but need {troubleshooting_info_name_from_policy_2} for more context to search helpcenter)</description>
            </step>
        </if_block>
    </if_block>
</plan>

The power moves:

  • Variable references: <helpcenter_result> for tool outputs, {policy_name} for injected context (a standard way of passing variables into prompts)

  • No else blocks (intentional): Forces explicit conditions for every path

  • Never assume outputs: Descriptions can reference future tool calls without knowing their results

Quick Test: Take a multi-step task. Write each step with <action> tags and add <if_block condition='[specific outcome]'> for branches. Watch your AI handle edge cases it previously fumbled.
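To make that Quick Test concrete, here's a minimal sketch in Python. The OpenAI SDK, the refund scenario, and the tool names (lookup_order, issue_refund, reply) are all illustrative assumptions; only the <step>/<if_block> structure follows the Parahelp pattern above.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()

# A Parahelp-style structured prompt for a hypothetical refund task.
# <action_name> names a tool, <if_block> makes every branch explicit,
# and tool outputs are referenced only as <tool_name_result> variables.
SYSTEM_PROMPT = """
You are a support agent. Follow this plan exactly.

<plan>
    <step>
        <action_name>lookup_order</action_name>
        <description>Look up the order referenced by the user</description>
    </step>
    <if_block condition='<lookup_order_result> shows the order is refundable'>
        <step>
            <action_name>issue_refund</action_name>
            <description>Issue a refund using the order id from <lookup_order_result></description>
        </step>
    </if_block>
    <if_block condition='<lookup_order_result> shows the order is not refundable'>
        <step>
            <action_name>reply</action_name>
            <description>Explain why the order is not refundable, citing <lookup_order_result></description>
        </step>
    </if_block>
</plan>

Never assume what a tool returns; reference results only as <tool_name_result> variables.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model works here
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "I want a refund for order #1042, it arrived broken."},
    ],
)
print(response.choices[0].message.content)
```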

Technique #2 — Metaprompting (Let AI Improve AI)

Tropir discovered "prompt folding"—feeding prompts back to AI for enhancement.

The loop:

  1. Start with basic prompt

  2. Feed to Claude/GPT: "You're an expert prompt engineer. Make this more specific and effective: [PROMPT]"

  3. Run 3-5 iterations until stable
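Here's a minimal sketch of that loop in Python, assuming the OpenAI SDK; the starting prompt, model choice, iteration count, and the early-stop check are illustrative, not Tropir's actual pipeline.

```python
from openai import OpenAI

client = OpenAI()

META_INSTRUCTION = (
    "You're an expert prompt engineer. Make this prompt more specific and "
    "effective. Return only the improved prompt:\n\n"
)

prompt = "Summarize this support ticket."  # step 1: basic starting prompt
ITERATIONS = 4  # 3-5 passes is usually enough for the output to stabilize

for _ in range(ITERATIONS):
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any strong model can act as the improver
        messages=[{"role": "user", "content": META_INSTRUCTION + prompt}],
    )
    improved = response.choices[0].message.content.strip()
    if improved == prompt:  # stop early once the prompt stops changing
        break
    prompt = improved

print(prompt)  # the folded prompt, ready to A/B test against the original
```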

Metaprompting feels like coding in 1995; the tools aren't all there, but we're in this new frontier.

Some time ago, I even made a whole story on how to write metaprompts. You can check it out here (prompt inside).

Quick Test: Take one high-value prompt, run three metaprompting cycles, A/B test for a week.

Technique #3 — Build Escape Hatches

The problem: AI will confidently hallucinate rather than admit confusion.

Parahelp's solution - explicitly tell the AI not to make up information when it doesn't have enough, and to ask for more context instead:

- The action_name should always be the name of a valid tool
- The description should be a short description of why the action is needed, a description of the action to take and any variables from other tool calls the action needs e.g. "reply to the user with instructions from <helpcenter_result>"
- Make sure your description NEVER assumes any information, variables or tool call results even if you have a good idea of what the tool call returns from the SOP.
- Make sure your plan NEVER includes or guesses on information/instructions/rules for step descriptions that are not explicitly stated in the policy doc.
- Make sure you ALWAYS highlight in your description of answering questions/troubleshooting steps that <helpcenter_result> is the source of truth for the information you need to answer the question.

YC's internal pattern (even better) - add a section for the AI to complain about unclear instructions:

<debug_info>
[Complain here about unclear instructions]
</debug_info>

As Jared from YC put it: "It literally becomes a to-do list the agent gives back to you."
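Here's a rough sketch of how you might wire that escape hatch into a pipeline. The Python/OpenAI plumbing, the ticket-classification task, and the regex parsing are my assumptions; only the <debug_info> pattern itself comes from the talk.

```python
import re
from openai import OpenAI

client = OpenAI()

# Appended to any system prompt: gives the model a sanctioned place to
# flag problems instead of hallucinating through them.
ESCAPE_HATCH = """
If any instruction above is ambiguous, contradictory, or missing information,
do NOT guess. List every problem inside a <debug_info> block:

<debug_info>
[Complain here about unclear instructions]
</debug_info>
"""

task_prompt = "Classify the attached ticket using the priority rules."  # hypothetical task

response = client.chat.completions.create(
    model="gpt-4o",  # assumption
    messages=[
        {"role": "system", "content": task_prompt + ESCAPE_HATCH},
        {"role": "user", "content": "Ticket: printer is on fire, also low on toner."},
    ],
)

answer = response.choices[0].message.content
complaints = re.search(r"<debug_info>(.*?)</debug_info>", answer, re.DOTALL)
if complaints:
    # This is the to-do list the agent hands back: fix the prompt, rerun.
    print("Prompt issues to fix:\n", complaints.group(1).strip())
```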

Technique #4 — Big Models Prompt Small Models

Happy Robot closed seven-figure logistics deals using this arbitrage:

  • Generate 2,000-token prompts with Claude Opus

  • Run them on Haiku for 10x speed, 1/10th cost

  • Users get instant responses with zero quality drop

Voice AI companies do this constantly. Latency kills the illusion, so they need the fastest models with the smartest prompts.

As Diana from YC put it: "Use Opus to create the prompt, Haiku to run it. Especially powerful for voice agents."
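A minimal sketch of the arbitrage with the Anthropic SDK; the model IDs, token limits, and the freight-booking scenario are placeholders, not Happy Robot's production setup.

```python
import anthropic

client = anthropic.Anthropic()

# Step 1 (offline, once): have the big model write the detailed system prompt.
spec = "Write a ~2,000-token system prompt for a voice agent that books freight loads."
opus_reply = client.messages.create(
    model="claude-opus-4-20250514",  # assumption: use whichever Opus version you have access to
    max_tokens=4000,
    messages=[{"role": "user", "content": spec}],
)
generated_system_prompt = opus_reply.content[0].text

# Step 2 (live traffic): the small, fast model runs that prompt for every call.
haiku_reply = client.messages.create(
    model="claude-3-5-haiku-latest",  # assumption: pick the cheapest/fastest tier available
    max_tokens=500,
    system=generated_system_prompt,
    messages=[{"role": "user", "content": "Hi, I need a reefer from Dallas to Atlanta on Friday."}],
)
print(haiku_reply.content[0].text)
```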

Technique #5 — Match Model Personalities

YC's testing revealed each model has a distinct personality:

| Model | Personality | Best For |
| --- | --- | --- |
| Claude | The Empath - flexible, contextual | Customer-facing, nuanced tasks |
| o3 | The Soldier - follows rules rigidly | Compliance, legal, strict processes |
| Llama | The Developer - needs guidance | Technical tasks with heavy steering |
| Gemini 2.5 | The Manager - balances rules/exceptions | Complex decisions, evaluation rubrics |

o3 was very rigid, penalizing anything outside the rubric. Gemini 2.5 could reason through exceptions like a high-agency employee.
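If you want to operationalize this, one lightweight option is a routing table keyed by task type. The categories and model IDs below are purely illustrative, not YC's recommendations.

```python
# Hypothetical routing table based on the personality findings above.
MODEL_FOR_TASK = {
    "customer_reply": "claude-sonnet-4-20250514",  # flexible, contextual
    "compliance_check": "o3",                      # follows rules rigidly
    "code_generation": "llama-3.3-70b",            # needs heavy steering
    "rubric_evaluation": "gemini-2.5-pro",         # balances rules and exceptions
}

def pick_model(task_type: str) -> str:
    """Return the model whose personality fits the task, with a safe default."""
    return MODEL_FOR_TASK.get(task_type, "claude-sonnet-4-20250514")

print(pick_model("compliance_check"))  # -> "o3"
```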

Time to Wrap Up!

Every YC AI unicorn-in-waiting uses these five techniques. Not because they have special models. They're using the same Claude and OpenAI models you are.

They just prompt better.

And now you have their playbook.

Want to see these patterns in action? Reply with your most complex prompting challenge. Best question gets a detailed breakdown in next week's issue.

Keep shipping,
Luke
