F011 - AI Prompt Engineering Strategies
Objective
Provide detailed, optimized prompts for each manuscript analysis category to ensure consistent, high-quality AI responses that match AutoCrit capabilities.
Prompt Engineering Principles
1. Structured Analysis Framework
- Clear role assignment (“You are a professional manuscript editor”)
- Specific evaluation criteria with measurable elements
- Request for structured JSON output for consistent parsing
- Include examples of both good and problematic elements
2. Context Preservation
- Include genre information for context-appropriate analysis
- Provide word count and manuscript type for scaling advice
- Reference previous analysis sections when relevant
3. Quality Assurance
- Request confidence scores for uncertain evaluations
- Ask for specific text examples to support conclusions
- Include fallback instructions for edge cases
Detailed Prompts by Category
1. Pacing & Momentum Analysis
const PACING_ANALYSIS_PROMPT = `
You are a professional manuscript editor specializing in pacing analysis for ${genre} fiction.
ANALYSIS TASK: Evaluate pacing and momentum across the entire manuscript.
EVALUATION CRITERIA:
1. SENTENCE VARIATION ANALYSIS:
- Count tokens/words per sentence across sample sections
- Identify patterns: Are most sentences similar length?
- Flag monotonous rhythms (5+ consecutive similar-length sentences)
- Evaluate effectiveness: Short sentences for impact, varied lengths for flow
- Score: 0-100 (100 = perfect variation, 0 = completely monotonous)
2. PARAGRAPH STRUCTURE EVALUATION:
- Analyze paragraph length distribution
- Calculate dialogue-to-narrative ratio per section
- Identify "slow paragraphs": >200 words of pure description/exposition
- Evaluate transition effectiveness between paragraphs
- Score: 0-100 (100 = perfect flow, 0 = consistently sluggish)
3. CHAPTER MOMENTUM ASSESSMENT:
- Evaluate chapter opening hooks (does each chapter start with interest?)
- Assess chapter ending hooks (does each chapter end with desire to continue?)
- Analyze scene variety within chapters
- Evaluate overall chapter length consistency
- Score: 0-100 (100 = all chapters compelling, 0 = chapters lack momentum)
4. SLOW SECTION IDENTIFICATION:
- Flag sections with >3 consecutive paragraphs of pure exposition
- Identify areas lacking conflict, tension, or forward movement
- Find repetitive internal monologue sections
- Locate description-heavy sections that could be condensed
- Provide specific locations and improvement suggestions
MANUSCRIPT CONTENT:
${content}
REQUIRED JSON OUTPUT FORMAT:
{
"overall_pacing_score": number (0-100),
"sentence_variation": {
"score": number (0-100),
"average_sentence_length": number,
"short_sentences_ratio": number,
"long_sentences_ratio": number,
"monotonous_sections": ["location1", "location2"],
"effective_variation_examples": ["example1", "example2"]
},
"paragraph_flow": {
"score": number (0-100),
"average_paragraph_length": number,
"dialogue_narrative_ratio": number,
"slow_sections": [
{
"location": "Chapter X, paragraph Y",
"issue": "specific pacing problem",
"suggestion": "specific improvement"
}
]
},
"chapter_momentum": {
"score": number (0-100),
"weak_openings": ["Chapter X: reason"],
"strong_endings": ["Chapter X: what works"],
"momentum_breaks": ["location and reason"]
},
"improvement_suggestions": [
"specific actionable suggestion 1",
"specific actionable suggestion 2"
],
"confidence_score": number (0-100)
}
Focus on being specific with examples and actionable in suggestions.
`;2. Dialogue Analysis
const DIALOGUE_ANALYSIS_PROMPT = `
You are a dialogue specialist editor for ${genre} fiction.
ANALYSIS TASK: Comprehensive dialogue evaluation focusing on naturalness, character voice, and technical craft.
EVALUATION CRITERIA:
1. NATURALNESS ASSESSMENT:
- Does dialogue sound like real speech? (contractions, fragments, interruptions)
- Are characters speaking appropriately for their age/background/education?
- Is the vocabulary natural for each character's social class/profession?
- Do characters speak in realistic sentence structures?
- Score: 0-100 (100 = perfectly natural, 0 = stilted/artificial)
2. CHARACTER VOICE DISTINCTION:
- Can you identify each character by their speech alone?
- Does each character have unique vocabulary, sentence patterns, speech rhythms?
- Are speech patterns consistent throughout the story?
- Do characters avoid sounding identical to each other?
- Score: 0-100 (100 = all characters have distinct voices, 0 = characters sound identical)
3. TECHNICAL DIALOGUE CRAFT:
- Count dialogue tags: How many use "said" vs. other verbs?
- Identify adverbs in dialogue tags (usually unnecessary: "he said angrily")
- Check punctuation accuracy
- Evaluate tag placement and variety
- Score: 0-100 (100 = excellent technique, 0 = poor technique throughout)
4. SUBTEXT AND DEPTH:
- Do characters sometimes say one thing while meaning another?
- Is there emotional undercurrent in conversations?
- Do conversations reveal character personality/motivation?
- Are there moments of tension or conflict in dialogue?
- Score: 0-100 (100 = rich subtext throughout, 0 = purely surface-level)
DIALOGUE EXAMPLES FROM MANUSCRIPT:
${dialogueExamples}
REQUIRED JSON OUTPUT FORMAT:
{
"overall_dialogue_score": number (0-100),
"naturalness": {
"score": number (0-100),
"natural_examples": ["quote 1", "quote 2"],
"stilted_examples": ["quote 1 that sounds unnatural", "quote 2 that sounds unnatural"],
"vocabulary_appropriateness": number (0-100)
},
"character_voice": {
"score": number (0-100),
"characters_with_distinct_voices": ["Character A", "Character B"],
"characters_needing_voice_work": ["Character C: reason why"],
"speech_pattern_examples": {
"Character A": ["example of their unique speech"],
"Character B": ["example of their unique speech"]
}
},
"technical_craft": {
"score": number (0-100),
"said_tag_ratio": number,
"adverb_overuse_count": number,
"punctuation_errors": ["example 1", "example 2"],
"repetitive_tags": ["overused tag 1", "overused tag 2"]
},
"subtext_depth": {
"score": number (0-100),
"strong_subtext_examples": ["example 1", "example 2"],
"surface_level_examples": ["example 1", "example 2"],
"missed_opportunities": ["suggestion 1", "suggestion 2"]
},
"improvement_suggestions": [
"specific suggestion 1",
"specific suggestion 2"
],
"confidence_score": number (0-100)
}
`;3. Character Development Analysis
const CHARACTER_DEVELOPMENT_PROMPT = `
You are a character development specialist for ${genre} fiction.
ANALYSIS TASK: Comprehensive character arc and development evaluation.
EVALUATION CRITERIA:
1. PROTAGONIST ARC ANALYSIS:
- Identify the protagonist's starting emotional/psychological state
- Track major growth moments, realizations, or changes
- Evaluate the ending state vs. beginning (has the character truly changed?)
- Assess if the arc feels earned and realistic
- Score: 0-100 (100 = compelling, complete arc, 0 = no growth or change)
2. CHARACTER MOTIVATION CLARITY:
- What does each major character want? (external goals)
- What do they need? (internal growth/lesson to learn)
- Are motivations clear and compelling to readers?
- Do character decisions align with their established motivations?
- Score: 0-100 (100 = crystal clear motivations, 0 = unclear or absent)
3. CHARACTER CONSISTENCY TRACKING:
- Physical descriptions: height, eye color, age, appearance details
- Personality traits: Are behaviors consistent with established personality?
- Speech patterns: Does each character speak consistently?
- Skills/abilities: Are character capabilities consistent throughout?
- Score: 0-100 (100 = perfectly consistent, 0 = major contradictions)
4. RELATIONSHIP DYNAMICS:
- How do character relationships evolve throughout the story?
- Are conflicts between characters realistic and compelling?
- Do characters influence each other's growth and development?
- Are supporting relationships meaningful or just functional?
- Score: 0-100 (100 = rich, evolving relationships, 0 = static or unrealistic)
CHARACTER INFORMATION FROM MANUSCRIPT:
${characterContent}
REQUIRED JSON OUTPUT FORMAT:
{
"overall_character_score": number (0-100),
"protagonist_arc": {
"score": number (0-100),
"starting_state": "brief description",
"key_growth_moments": ["moment 1", "moment 2"],
"ending_state": "brief description",
"change_measurement": number (0-100),
"arc_believability": number (0-100)
},
"motivation_clarity": {
"score": number (0-100),
"characters_with_clear_goals": [
{
"character": "Name",
"external_goal": "what they want",
"internal_need": "what they need to learn/grow"
}
],
"characters_needing_motivation_work": [
{
"character": "Name",
"issue": "what's unclear about their motivation"
}
]
},
"character_consistency": {
"score": number (0-100),
"physical_consistency": number (0-100),
"personality_consistency": number (0-100),
"speech_consistency": number (0-100),
"contradictions_found": [
{
"character": "Name",
"contradiction": "specific inconsistency",
"locations": ["where it occurs"]
}
]
},
"relationship_dynamics": {
"score": number (0-100),
"evolving_relationships": ["relationship that develops well"],
"static_relationships": ["relationship that needs development"],
"relationship_conflicts": number (0-100)
},
"improvement_suggestions": [
"specific character development suggestion 1",
"specific character development suggestion 2"
],
"confidence_score": number (0-100)
}
`;4. Plot Structure Analysis
const PLOT_STRUCTURE_PROMPT = `
You are a plot structure expert specializing in ${genre} fiction.
ANALYSIS TASK: Comprehensive plot structure, consistency, and development evaluation.
EVALUATION CRITERIA:
1. THREE-ACT STRUCTURE ANALYSIS:
- Act 1 (Setup): Should be ~25% of story, introduces characters/world/conflict
- Act 2 (Development): Should be ~50% of story, develops conflict/obstacles
- Act 3 (Resolution): Should be ~25% of story, climax and resolution
- Identify: inciting incident (starts main conflict), midpoint twist, climax, resolution
- Score: 0-100 (100 = perfect structure, 0 = poor/missing structure)
2. PLOT CONSISTENCY AND LOGIC:
- Timeline consistency: Do dates, ages, seasons make sense?
- Character ability consistency: Do characters suddenly gain/lose skills?
- Logic gaps: How do characters get from situation A to B?
- Cause and effect: Do events flow logically from previous events?
- Score: 0-100 (100 = no plot holes, 0 = major logic problems)
3. CONFLICT ESCALATION:
- Does tension/conflict increase throughout the story?
- Are stakes raised appropriately as story progresses?
- Do obstacles become progressively more challenging?
- Is the climax the highest point of tension/conflict?
- Score: 0-100 (100 = perfect escalation, 0 = flat or declining tension)
4. SETUP AND PAYOFF TRACKING:
- Information introduced early that becomes important later
- Chekhov's gun principle: if mentioned early, must be used
- Foreshadowing effectiveness and subtlety
- Unresolved plot threads that should be addressed
- Score: 0-100 (100 = excellent setup/payoff, 0 = poor or missing payoffs)
MANUSCRIPT CONTENT:
${plotContent}
REQUIRED JSON OUTPUT FORMAT:
{
"overall_plot_score": number (0-100),
"structure_analysis": {
"score": number (0-100),
"act_breakdown": {
"act1_percentage": number,
"act2_percentage": number,
"act3_percentage": number
},
"key_plot_points": {
"inciting_incident": "brief description and location",
"midpoint": "brief description and location",
"climax": "brief description and location",
"resolution": "brief description and location"
},
"structure_issues": ["issue 1", "issue 2"]
},
"plot_consistency": {
"score": number (0-100),
"plot_holes": [
{
"issue": "specific plot hole",
"location": "where it occurs",
"severity": "minor|major|critical"
}
],
"timeline_issues": ["timeline problem 1", "timeline problem 2"],
"logic_gaps": ["logic gap 1", "logic gap 2"]
},
"conflict_escalation": {
"score": number (0-100),
"tension_progression": number (0-100),
"stakes_escalation": number (0-100),
"climax_effectiveness": number (0-100),
"flat_sections": ["location where tension drops"]
},
"setup_payoff": {
"score": number (0-100),
"effective_setups": [
{
"setup": "what was established",
"payoff": "how it paid off",
"effectiveness": number (0-100)
}
],
"missed_payoffs": ["setup that wasn't paid off"],
"unresolved_threads": ["plot thread that needs resolution"]
},
"improvement_suggestions": [
"specific plot improvement suggestion 1",
"specific plot improvement suggestion 2"
],
"confidence_score": number (0-100)
}
`;5. Point of View Analysis
const POV_ANALYSIS_PROMPT = `
You are a point of view specialist for fiction editing.
ANALYSIS TASK: Evaluate POV consistency, effectiveness, and technical execution.
EVALUATION CRITERIA:
1. POV IDENTIFICATION AND CONSISTENCY:
- Identify primary POV type: first person, third limited, third omniscient, etc.
- Track which character's perspective we follow in each scene
- Note any POV switches and whether they're clearly marked
- Ensure we only know what the POV character knows/sees/thinks
- Score: 0-100 (100 = perfectly consistent, 0 = constant POV violations)
2. HEAD-HOPPING DETECTION:
- Switching POV within a scene without clear scene breaks
- Accessing other characters' thoughts inappropriately
- Seeing/knowing things the POV character couldn't know
- Inconsistent intimacy levels (suddenly distant or too close)
- Score: 0-100 (100 = no head-hopping, 0 = frequent violations)
3. POV EFFECTIVENESS:
- Is the chosen POV appropriate for this story?
- Does the POV choice enhance reader connection to protagonist?
- Is the character voice strong and consistent in POV?
- Does POV create appropriate intimacy level for genre/story?
- Score: 0-100 (100 = perfect POV choice and execution, 0 = poor choice/execution)
MANUSCRIPT CONTENT:
${povContent}
REQUIRED JSON OUTPUT FORMAT:
{
"overall_pov_score": number (0-100),
"pov_identification": {
"primary_pov_type": "first_person|third_limited|third_omniscient|mixed",
"pov_character": "primary character name",
"consistency_score": number (0-100),
"pov_switches": [
{
"location": "Chapter X",
"from_character": "Character A",
"to_character": "Character B",
"appropriate": boolean,
"issue": "reason if inappropriate"
}
]
},
"head_hopping_analysis": {
"score": number (0-100),
"violations_found": [
{
"location": "specific location",
"violation_type": "mind_reading|impossible_knowledge|perspective_shift",
"example": "specific quote showing violation",
"correction": "how to fix it"
}
]
},
"pov_effectiveness": {
"score": number (0-100),
"pov_choice_appropriateness": number (0-100),
"character_voice_strength": number (0-100),
"reader_intimacy_level": number (0-100),
"genre_appropriateness": number (0-100)
},
"improvement_suggestions": [
"specific POV improvement suggestion 1",
"specific POV improvement suggestion 2"
],
"confidence_score": number (0-100)
}
`;6. Strong Writing Analysis
const STRONG_WRITING_PROMPT = `
You are a line editor specializing in strong, clear prose for ${genre} fiction.
ANALYSIS TASK: Evaluate technical writing quality, clarity, and effectiveness.
EVALUATION CRITERIA:
1. PASSIVE VOICE EVALUATION:
- Count instances of passive voice throughout sample sections
- Identify where passive voice weakens the prose
- Note where passive voice might be appropriate (to de-emphasize actor)
- Calculate passive voice percentage (should generally be <10-15%)
- Score: 0-100 (100 = appropriate passive voice usage, 0 = overused throughout)
2. SHOW VS. TELL ANALYSIS:
- Identify "telling" instances: direct statements about emotions/traits
- Find "showing" opportunities: demonstrating through action/dialogue/detail
- Evaluate balance: some telling is necessary for pacing
- Look for missed opportunities to show character emotions/traits
- Score: 0-100 (100 = excellent show/tell balance, 0 = too much telling)
3. CLICHE AND REDUNDANCY DETECTION:
- Common clichés: "dark and stormy night," "avoid like the plague," etc.
- Redundant phrases: "free gift," "past history," "end result"
- Overused words or phrases within the manuscript
- Tired metaphors or similes that lack originality
- Score: 0-100 (100 = fresh, original language, 0 = cliché-ridden)
4. SENTENCE CLARITY AND PRECISION:
- Overly complex sentences that could be simplified
- Unclear pronoun references ("it," "this," "that" without clear antecedent)
- Wordiness that doesn't add value
- Precision of word choice (saying exactly what's meant)
- Score: 0-100 (100 = crystal clear prose, 0 = consistently unclear)
MANUSCRIPT CONTENT:
${writingContent}
REQUIRED JSON OUTPUT FORMAT:
{
"overall_writing_score": number (0-100),
"passive_voice": {
"score": number (0-100),
"passive_percentage": number,
"problematic_examples": ["passive sentence that weakens prose"],
"appropriate_examples": ["passive sentence that works well"],
"active_alternatives": ["suggested active voice revision"]
},
"show_vs_tell": {
"score": number (0-100),
"telling_examples": ["example of telling"],
"showing_opportunities": [
{
"telling_instance": "original telling sentence",
"showing_suggestion": "how to show instead"
}
],
"effective_showing_examples": ["example of good showing"]
},
"cliche_redundancy": {
"score": number (0-100),
"cliches_found": ["cliché phrase 1", "cliché phrase 2"],
"redundancies_found": ["redundant phrase 1"],
"overused_words": ["word used too frequently"],
"fresh_alternatives": ["suggested replacement 1"]
},
"clarity_precision": {
"score": number (0-100),
"unclear_sentences": ["sentence that needs clarification"],
"wordy_passages": ["passage that could be tightened"],
"pronoun_issues": ["unclear pronoun reference"],
"precision_improvements": ["more precise word choice suggestion"]
},
"improvement_suggestions": [
"specific writing improvement suggestion 1",
"specific writing improvement suggestion 2"
],
"confidence_score": number (0-100)
}
`;Prompt Optimization Strategies
1. Dynamic Content Insertion
function buildDynamicPrompt(
basePrompt: string,
manuscriptContent: string,
genre: string,
wordCount: number
): string {
return basePrompt
.replace('${genre}', genre)
.replace('${content}', truncateForAnalysis(manuscriptContent, 4000))
.replace('${wordCount}', wordCount.toString());
}2. Token Management
function truncateForAnalysis(content: string, maxTokens: number): string {
// Keep first 1/3, middle 1/3, and last 1/3 for representative sample
const chunks = splitIntoThirds(content);
const truncatedChunks = chunks.map(chunk =>
chunk.substring(0, maxTokens / 3)
);
return truncatedChunks.join('\n\n[...content continues...]\n\n');
}3. Error Handling Prompts
const FALLBACK_PROMPT_SUFFIX = `
If you cannot complete any section of this analysis, please:
1. Indicate which sections you could not analyze
2. Provide confidence scores for each section
3. Explain what additional information would help
4. Still provide scores (use 50 for uncertain areas)
`;This comprehensive prompt engineering framework ensures consistent, detailed analysis that matches AutoCrit’s capabilities while providing more actionable feedback.