Architecture Patterns & Best Practices
Overview
This document outlines the two core architecture patterns used across all 16 tools, drawing on best practices from the existing successful implementations (Story Prompts, Book Blurbs, Flashcards) and the comprehensive development standards from the Tools App.
Architecture Patterns
Pattern 1: Story Prompt Architecture (14/16 tools)
This pattern is used for tools that generate writing prompts that users can respond to, creating a community-driven content ecosystem.
Tools using this pattern:
- Romance Writing Prompts Generator
- Love Story Ideas Generator
- Romance Character Generator
- Romance Dialogue Generator
- Romance Conflict Generator
- Family Storytelling Ideas Generator
- Family Memory Prompts Generator
- Family Character Profile Generator
- Adventure Story Ideas Generator
- Mystery Writing Prompts Generator
- Mystery Story Ideas Generator
- Adventure Character Generator
- Horror Writing Prompts Generator
- Comedy Story Ideas Generator
Pattern 2: Multi-Variant Generator Architecture (2/16 tools)
This pattern is used for tools that generate multiple analyzed versions of content with detailed feedback and comparison.
Tools using this pattern:
- Story Plot Generator (All Genres)
- Creative Writing Prompts Generator
Pattern 1: Story Prompt Architecture
Core Components
1. Database Schema Structure
-- Main prompts table (standardized across all prompt-based tools)
CREATE TABLE tools_[tool_name] (
-- ✅ CRITICAL: Standard identity fields (required for all tools)
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
title TEXT NOT NULL,
share_code TEXT UNIQUE NOT NULL,
-- ✅ CRITICAL: Ownership and session management
user_id UUID REFERENCES auth.users(id) ON DELETE CASCADE,
session_id TEXT NOT NULL,
-- ✅ CRITICAL: Visibility and content management
is_public BOOLEAN DEFAULT FALSE,
is_featured BOOLEAN DEFAULT FALSE,
is_reviewed BOOLEAN DEFAULT FALSE,
-- ✅ CRITICAL: Engagement tracking
view_count INTEGER DEFAULT 0,
share_count INTEGER DEFAULT 0,
export_count INTEGER DEFAULT 0,
use_count INTEGER DEFAULT 0, -- How many responses written
-- ✅ CRITICAL: SEO optimization
seo_title TEXT,
seo_description TEXT,
keywords TEXT[],
-- ✅ CRITICAL: AI integration and tracking
is_ai_generated BOOLEAN DEFAULT TRUE,
ai_generation_prompt TEXT,
ai_confidence DECIMAL(3,2) DEFAULT 0.8,
ai_tokens_used INTEGER,
ai_cost_usd DECIMAL(10,4),
ai_model TEXT,
generation_time_ms INTEGER,
-- ✅ CRITICAL: Core prompt content
prompt_text TEXT NOT NULL,
-- Tool-specific fields (customize per tool)
-- Example for Romance Prompts:
-- romance_subgenre VARCHAR(50),
-- heat_level VARCHAR(20),
-- tropes TEXT[],
-- relationship_dynamic VARCHAR(50),
-- ✅ CRITICAL: Flexible content structure (JSONB for tool-specific data)
content JSONB NOT NULL DEFAULT '{}'::jsonb,
generation_options JSONB DEFAULT '{}'::jsonb,
metadata JSONB DEFAULT '{}'::jsonb,
-- ✅ CRITICAL: Standard timestamps
created_at TIMESTAMPTZ DEFAULT NOW(),
updated_at TIMESTAMPTZ DEFAULT NOW(),
expires_at TIMESTAMPTZ -- for anonymous content (24 hours)
);
-- ✅ CRITICAL: User responses table (enables community writing)
CREATE TABLE tools_[tool_name]_responses (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
prompt_id UUID REFERENCES tools_[tool_name](id) ON DELETE CASCADE,
user_id UUID REFERENCES auth.users(id) ON DELETE CASCADE,
session_id TEXT NOT NULL,
title TEXT NOT NULL,
content TEXT NOT NULL,
word_count INTEGER DEFAULT 0,
excerpt TEXT, -- First 200 characters
is_public BOOLEAN DEFAULT FALSE,
is_featured BOOLEAN DEFAULT FALSE,
view_count INTEGER DEFAULT 0,
like_count INTEGER DEFAULT 0,
share_count INTEGER DEFAULT 0,
-- SEO for public responses
seo_title TEXT,
seo_description TEXT,
keywords TEXT[],
created_at TIMESTAMPTZ DEFAULT NOW(),
updated_at TIMESTAMPTZ DEFAULT NOW(),
writing_time_minutes INTEGER,
last_accessed_at TIMESTAMPTZ
);
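The responses table stores both `word_count` and an `excerpt` of the first 200 characters. A small helper can derive both from the submitted content before insert; the function below is a hypothetical sketch, not an existing utility in the codebase:

```typescript
// Hypothetical helper: derives the word_count and excerpt columns
// from a response body before the database insert.
function buildResponseFields(content: string): { wordCount: number; excerpt: string } {
  const trimmed = content.trim()
  // Split on any whitespace run; empty content counts as zero words
  const wordCount = trimmed === '' ? 0 : trimmed.split(/\s+/).length
  // excerpt column stores the first 200 characters (see schema comment)
  const excerpt = trimmed.slice(0, 200)
  return { wordCount, excerpt }
}
```

Computing these at write time keeps list views cheap: browsing public responses never needs to load or measure the full `content` column.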
-- ✅ CRITICAL: Collections table (themed prompt sets)
CREATE TABLE tools_[tool_name]_collections (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
title TEXT NOT NULL,
description TEXT,
slug TEXT UNIQUE NOT NULL,
is_featured BOOLEAN DEFAULT FALSE,
is_published BOOLEAN DEFAULT TRUE,
prompt_count INTEGER DEFAULT 0,
seo_title TEXT,
seo_description TEXT,
keywords TEXT[],
created_at TIMESTAMPTZ DEFAULT NOW(),
updated_at TIMESTAMPTZ DEFAULT NOW()
);
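Collections are addressed by a `UNIQUE` slug. A simple slugify helper (hypothetical; the real app may already have one) could derive it from the title, with collision handling still needed against the unique constraint:

```typescript
// Hypothetical slug helper for collection URLs. On a UNIQUE violation,
// the caller would retry with a numeric suffix (e.g. "-2").
function slugify(title: string): string {
  return title
    .toLowerCase()
    .normalize('NFKD')
    .replace(/[\u0300-\u036f]/g, '') // strip diacritics left by NFKD
    .replace(/[^a-z0-9]+/g, '-')     // collapse non-alphanumeric runs to hyphens
    .replace(/^-+|-+$/g, '')         // trim leading/trailing hyphens
}
```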
-- ✅ CRITICAL: Analytics table (comprehensive tracking)
CREATE TABLE tools_[tool_name]_analytics (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
prompt_id UUID REFERENCES tools_[tool_name](id) ON DELETE CASCADE,
event_type VARCHAR(50) NOT NULL, -- generation, view, response, share, export
event_data JSONB,
user_id UUID REFERENCES auth.users(id),
session_id TEXT,
ip_address TEXT,
user_agent TEXT,
referrer TEXT,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- ✅ CRITICAL: Required indexes for performance
CREATE INDEX idx_[tool_name]_share_code ON tools_[tool_name](share_code);
CREATE INDEX idx_[tool_name]_session_id ON tools_[tool_name](session_id);
CREATE INDEX idx_[tool_name]_user_id ON tools_[tool_name](user_id);
CREATE INDEX idx_[tool_name]_public ON tools_[tool_name](is_public, created_at) WHERE is_public = true;
CREATE INDEX idx_[tool_name]_featured ON tools_[tool_name](is_featured) WHERE is_featured = true;

2. AI Service Implementation Pattern
// ✅ CRITICAL: AI service class following established patterns
import OpenAI from 'openai'
import { trackAIUsage, estimateTokenCount, estimateTokenCost, getRecommendedToolsModel } from '@/lib/ai-usage-tracker'
export class ToolNameAI {
private openai: OpenAI
constructor() {
this.openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
})
}
// ✅ CRITICAL: Main generation method with comprehensive tracking
async generatePrompts(
options: PromptGenerationOptions,
userId?: string,
sessionId?: string
): Promise<GeneratedPrompt[]> {
const startTime = Date.now()
let totalTokens = 0
let success = false
try {
const systemPrompt = this.buildSystemPrompt(options)
const userPrompt = this.buildUserPrompt(options)
// ✅ CRITICAL: Use recommended model for cost efficiency
const response = await this.openai.chat.completions.create({
model: getRecommendedToolsModel(), // Uses gpt-4o-mini-2024-07-18
messages: [
{ role: 'system', content: systemPrompt },
{ role: 'user', content: userPrompt }
],
temperature: 0.8,
max_tokens: 4000,
response_format: { type: 'json_object' }
})
totalTokens = response.usage?.total_tokens || estimateTokenCount(systemPrompt + userPrompt)
const cost = estimateTokenCost(getRecommendedToolsModel(), totalTokens)
const generationTime = Date.now() - startTime
const prompts = this.parseAndValidateResponse(response.choices[0].message.content!, options)
success = true
// ✅ CRITICAL: Track ALL AI usage for admin monitoring
await trackAIUsage({
userId,
sessionId,
featureName: `${options.toolName}_generation`,
promptName: 'prompt_creation',
modelUsed: getRecommendedToolsModel(),
provider: 'openai',
tokensConsumed: totalTokens,
costUsd: cost,
responseTimeMs: generationTime,
requestParams: options,
requestSuccess: true,
generatedPrompts: prompts.map(p => ({
promptText: p.promptText,
confidence: p.aiConfidence,
category: p.category || 'general'
}))
})
return prompts
} catch (error) {
const cost = estimateTokenCost(getRecommendedToolsModel(), totalTokens)
const generationTime = Date.now() - startTime
// ✅ CRITICAL: Track AI failures for admin monitoring
await trackAIUsage({
userId,
sessionId,
featureName: `${options.toolName}_generation`,
promptName: 'prompt_creation',
modelUsed: getRecommendedToolsModel(),
provider: 'openai',
tokensConsumed: totalTokens,
costUsd: cost,
responseTimeMs: generationTime,
requestParams: options,
requestSuccess: false,
errorMessage: error instanceof Error ? error.message : 'Unknown error'
})
throw new Error(`Failed to generate prompts: ${error instanceof Error ? error.message : 'Unknown error'}`)
}
}
// ✅ Tool-specific prompt engineering
private buildSystemPrompt(options: PromptGenerationOptions): string {
return `You are an expert ${options.toolName} prompt generator.
Your task is to create ${options.count} engaging, high-quality writing prompts that:
1. Are appropriate for the specified audience and difficulty level
2. Incorporate the requested themes and elements naturally
3. Provide clear direction without being overly prescriptive
4. Include specific details that spark imagination
5. Are SEO-optimized for the target keywords
Generate prompts with varied approaches while maintaining consistent quality.
Return a JSON object with a "prompts" array containing complete prompt objects.`
}
private buildUserPrompt(options: PromptGenerationOptions): string {
return `Create ${options.count} ${options.toolName} prompts with these specifications:
${Object.entries(options.parameters).map(([key, value]) => `${key}: ${value}`).join('\n')}
Each prompt should:
- Be unique and creative while fitting the specifications
- Include enough detail for writers to start immediately
- Have clear emotional stakes or compelling hooks
- Be optimized for the keywords: ${options.targetKeywords.join(', ')}`
}
private parseAndValidateResponse(response: string, options: PromptGenerationOptions): GeneratedPrompt[] {
try {
const parsed = JSON.parse(response)
if (!parsed.prompts || !Array.isArray(parsed.prompts)) {
throw new Error('Invalid response format: missing prompts array')
}
return parsed.prompts.map((prompt: any, index: number) => ({
id: `${Date.now()}-${index}`,
promptText: prompt.promptText || prompt.text || '',
seoTitle: prompt.seoTitle || `${options.toolName} Prompt #${index + 1}`,
seoDescription: prompt.seoDescription || prompt.promptText?.substring(0, 155) || '',
keywords: prompt.keywords || options.targetKeywords,
aiConfidence: this.calculateConfidence(prompt),
aiGenerationPrompt: 'prompt_generation',
content: this.extractStructuredContent(prompt),
...options.parameters // Include tool-specific fields
}))
} catch (error) {
throw new Error(`Failed to parse AI response: ${error instanceof Error ? error.message : 'Invalid JSON'}`)
}
}
private calculateConfidence(prompt: any): number {
let confidence = 0.7 // Base confidence
// Quality indicators
if (prompt.promptText && prompt.promptText.length > 100) confidence += 0.05
if (prompt.seoTitle) confidence += 0.05
if (prompt.keywords && prompt.keywords.length > 0) confidence += 0.05
if (prompt.structuredContent) confidence += 0.1
return Math.min(confidence, 0.95)
}
}

3. API Endpoint Standards
// ✅ CRITICAL: Standard API structure for all prompt-based tools
import { NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkRateLimit, getRateLimitHeaders } from '@/lib/rate-limiter'
import { SessionManager } from '@/lib/session'
import { getSupabaseTools } from '@/lib/supabase'
import { UniversalShareCodeManager } from '@/lib/universal-share-code'
import { trackAnalytics } from '@/lib/analytics'
// ✅ CRITICAL: Zod validation schema (customize per tool)
const generatePromptsSchema = z.object({
count: z.number().min(1).max(10).default(5),
// Tool-specific fields with proper validation
// Example: subgenre: z.enum(['contemporary', 'historical', 'paranormal']),
title: z.string().max(200).optional(),
isPublic: z.boolean().default(false),
customTheme: z.string().optional()
})
// ✅ CRITICAL: Rate limiting configuration (toolName is the tool's identifier constant, defined per tool)
const RATE_LIMITS = {
[`${toolName}_generation`]: {
feature: `${toolName}_generation`,
maxRequests: 15, // Adjust per tool popularity
windowMinutes: 24 * 60
}
}
// POST /api/[tool-name]/generate
export async function POST(request: NextRequest) {
try {
// ✅ CRITICAL: Session and user management
const sessionId = SessionManager.getSessionId(request)
const userId = request.headers.get('x-user-id') || undefined
// ✅ CRITICAL: Request validation with Zod
const body = await request.json()
const validatedData = generatePromptsSchema.parse(body)
// ✅ CRITICAL: Rate limiting
const rateLimitResult = await checkRateLimit(
request,
`${toolName}_generation`,
userId
)
if (!rateLimitResult.allowed) {
return NextResponse.json(
{
success: false,
error: 'Rate limit exceeded. Please try again later.',
meta: {
remainingLimit: rateLimitResult.remaining,
resetTime: rateLimitResult.resetAt.toISOString()
}
},
{
status: 429,
headers: getRateLimitHeaders(rateLimitResult)
}
)
}
// ✅ CRITICAL: AI generation with proper tracking
const aiService = new ToolNameAI()
const startTime = Date.now()
const prompts = await aiService.generatePrompts(
{
...validatedData,
toolName: 'tool_name',
targetKeywords: ['tool specific', 'keywords'],
parameters: validatedData // Tool-specific parameters
},
userId,
sessionId
)
const generationTime = Date.now() - startTime
// ✅ CRITICAL: Database persistence
const supabase = getSupabaseTools()
const savedPrompts = []
for (const [index, prompt] of prompts.entries()) {
try {
const shareCode = await UniversalShareCodeManager.generateShareCode(
'tool_name',
prompt.seoTitle
)
const { data, error } = await supabase
.from(`tools_${toolName}`)
.insert({
title: validatedData.title || `${toolName} Prompt #${index + 1}`,
share_code: shareCode,
user_id: userId,
session_id: sessionId,
is_public: validatedData.isPublic,
// Core prompt fields
prompt_text: prompt.promptText,
// Tool-specific fields
// ...extractToolSpecificFields(prompt, validatedData),
// Content structure
content: prompt.content,
generation_options: validatedData,
// SEO fields
seo_title: prompt.seoTitle,
seo_description: prompt.seoDescription,
keywords: prompt.keywords,
// AI metadata
is_ai_generated: true,
ai_generation_prompt: prompt.aiGenerationPrompt,
ai_confidence: prompt.aiConfidence,
generation_time_ms: generationTime,
// Anonymous content expires in 24 hours
expires_at: userId ? null : new Date(Date.now() + 24 * 60 * 60 * 1000).toISOString()
})
.select()
.single()
if (error) {
console.error('Database save error:', error)
continue
}
savedPrompts.push({
...prompt,
id: data.id,
shareCode: data.share_code,
createdAt: data.created_at
})
} catch (promptError) {
console.error('Error saving prompt:', promptError)
continue
}
}
// ✅ CRITICAL: Analytics tracking
await trackAnalytics(`${toolName}_generated`, {
tool: toolName,
sessionId,
userId,
promptCount: savedPrompts.length,
generationTime,
options: validatedData
})
// ✅ CRITICAL: Analytics database logging
for (const prompt of savedPrompts) {
await supabase
.from(`tools_${toolName}_analytics`)
.insert({
prompt_id: prompt.id,
event_type: 'generation',
event_data: {
generationTime,
aiConfidence: prompt.aiConfidence,
options: validatedData
},
user_id: userId,
session_id: sessionId,
ip_address: request.headers.get('x-forwarded-for') || 'unknown',
user_agent: request.headers.get('user-agent') || 'unknown'
})
}
// ✅ CRITICAL: Standardized response format
return NextResponse.json({
success: true,
data: {
prompts: savedPrompts,
remainingLimit: rateLimitResult.remaining,
generationTime
},
meta: {
remainingLimit: rateLimitResult.remaining,
processingTime: generationTime,
version: '1.0'
}
}, {
headers: {
...getRateLimitHeaders(rateLimitResult),
// ✅ CRITICAL: CORS headers for web-app integration
'Access-Control-Allow-Origin': process.env.WEB_APP_URL || 'https://mystoryflow.com',
'Access-Control-Allow-Credentials': 'true'
}
})
} catch (error) {
console.error(`${toolName} generation error:`, error)
// ✅ CRITICAL: Error handling with proper responses
if (error instanceof z.ZodError) {
return NextResponse.json(
{
success: false,
error: 'Invalid request data',
details: error.errors
},
{ status: 400 }
)
}
return NextResponse.json(
{
success: false,
error: `Failed to generate ${toolName} prompts. Please try again.`
},
{ status: 500 }
)
}
}
// ✅ CRITICAL: CORS handler
export async function OPTIONS(request: NextRequest) {
return new Response(null, {
status: 200,
headers: {
'Access-Control-Allow-Origin': process.env.WEB_APP_URL || 'https://mystoryflow.com',
'Access-Control-Allow-Methods': 'POST, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type, Authorization, X-Session-ID, X-User-ID',
'Access-Control-Allow-Credentials': 'true',
'Access-Control-Max-Age': '86400'
}
})
}

4. Frontend Component Standards
// ✅ CRITICAL: Standardized component structure with MyStoryFlow UI
'use client'
import React, { useState } from 'react'
import { Button, Input, Select, SelectContent, SelectItem, SelectTrigger, SelectValue, Textarea, Checkbox, Card, CardContent, CardHeader, CardTitle, Badge } from '@mystoryflow/ui'
import { Loader2, Sparkles, BookOpen, Users, Download } from 'lucide-react'
interface ToolGeneratorProps {
onPromptsGenerated?: (prompts: any[]) => void
}
export const ToolGenerator: React.FC<ToolGeneratorProps> = ({
onPromptsGenerated
}) => {
// ✅ CRITICAL: Consistent form state management
const [formData, setFormData] = useState({
count: 5,
// Tool-specific fields
isPublic: false
})
const [isGenerating, setIsGenerating] = useState(false)
const [generatedPrompts, setGeneratedPrompts] = useState<any[]>([])
const [error, setError] = useState<string | null>(null)
// ✅ CRITICAL: Form validation
const isFormValid = () => {
// Tool-specific validation logic
return true
}
const handleInputChange = (field: string, value: any) => {
setFormData(prev => ({
...prev,
[field]: value
}))
}
const handleGenerate = async () => {
if (!isFormValid()) return
setIsGenerating(true)
setError(null)
try {
const response = await fetch(`/api/${toolName}/generate`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-Session-ID': sessionStorage.getItem('sessionId') || 'anonymous'
},
body: JSON.stringify(formData)
})
const result = await response.json()
if (!result.success) {
throw new Error(result.error || 'Failed to generate prompts')
}
setGeneratedPrompts(result.data.prompts)
onPromptsGenerated?.(result.data.prompts)
} catch (err) {
setError(err instanceof Error ? err.message : 'An error occurred')
} finally {
setIsGenerating(false)
}
}
return (
<div className="max-w-4xl mx-auto space-y-8">
{/* ✅ CRITICAL: Standard header structure */}
<div className="text-center">
<h1 className="text-4xl font-bold text-slate-900 mb-4">
{toolDisplayName}
</h1>
<p className="text-xl text-slate-600 max-w-3xl mx-auto">
{toolDescription}
</p>
</div>
{/* ✅ CRITICAL: Standard form structure */}
<Card>
<CardHeader>
<CardTitle className="flex items-center gap-2">
<Sparkles className="w-5 h-5 text-amber-500" />
Generation Settings
</CardTitle>
</CardHeader>
<CardContent className="space-y-6">
{/* ✅ CRITICAL: Consistent form elements with size="sm" */}
<div className="grid md:grid-cols-2 gap-4">
<div>
<label className="block text-sm font-medium mb-2">
Number of Prompts
</label>
<Select
value={formData.count.toString()}
onValueChange={(value) => handleInputChange('count', parseInt(value))}
>
<SelectTrigger size="sm">
<SelectValue />
</SelectTrigger>
<SelectContent>
<SelectItem value="3">3 prompts</SelectItem>
<SelectItem value="5">5 prompts</SelectItem>
<SelectItem value="7">7 prompts</SelectItem>
<SelectItem value="10">10 prompts</SelectItem>
</SelectContent>
</Select>
</div>
{/* Tool-specific form fields here */}
</div>
{/* ✅ CRITICAL: Public visibility option */}
<div className="flex items-center space-x-2">
<Checkbox
id="isPublic"
checked={formData.isPublic}
onCheckedChange={(checked) => handleInputChange('isPublic', checked)}
/>
<label htmlFor="isPublic" className="text-sm">
Make prompts publicly discoverable
</label>
</div>
{/* ✅ CRITICAL: Generate button with loading state */}
<Button
onClick={handleGenerate}
disabled={isGenerating || !isFormValid()}
className="w-full"
size="lg"
>
{isGenerating ? (
<>
<Loader2 className="w-4 h-4 mr-2 animate-spin" />
Generating {toolDisplayName}...
</>
) : (
<>
<Sparkles className="w-4 h-4 mr-2" />
Generate {formData.count} {toolDisplayName}
</>
)}
</Button>
{/* ✅ CRITICAL: Error display */}
{error && (
<div className="p-4 bg-red-50 border border-red-200 rounded-lg">
<p className="text-red-600 text-sm">{error}</p>
</div>
)}
</CardContent>
</Card>
{/* ✅ CRITICAL: Results display with response system integration */}
{generatedPrompts.length > 0 && (
<div className="space-y-6">
<h2 className="text-2xl font-bold text-slate-900">
Your Generated Prompts
</h2>
<div className="grid gap-6">
{generatedPrompts.map((prompt, index) => (
<Card key={prompt.id} className="border-amber-200">
<CardHeader>
<CardTitle className="flex items-start justify-between">
<span className="text-lg">Prompt #{index + 1}</span>
<div className="flex gap-2">
{/* Tool-specific badges */}
</div>
</CardTitle>
</CardHeader>
<CardContent className="space-y-4">
{/* ✅ CRITICAL: Prompt display */}
<div className="p-4 bg-amber-50 rounded-lg">
<p className="text-slate-800 leading-relaxed">
{prompt.promptText}
</p>
</div>
{/* ✅ CRITICAL: Action buttons for responses */}
<div className="flex gap-2 pt-4 border-t">
<Button variant="outline" size="sm">
<BookOpen className="w-4 h-4 mr-2" />
Write Response
</Button>
<Button variant="outline" size="sm">
<Users className="w-4 h-4 mr-2" />
Share
</Button>
<Button variant="outline" size="sm">
<Download className="w-4 h-4 mr-2" />
Export
</Button>
</div>
</CardContent>
</Card>
))}
</div>
</div>
)}
</div>
)
}

Pattern 2: Multi-Variant Generator Architecture
Core Components
1. Database Schema Structure
-- ✅ Similar to Pattern 1 but with variants and analysis focus
CREATE TABLE tools_[tool_name] (
-- All standard fields from Pattern 1, plus:
-- Multi-variant specific fields
variant_count INTEGER DEFAULT 3,
selected_variant INTEGER DEFAULT 1,
-- Analysis data
analysis_data JSONB NOT NULL DEFAULT '{}'::jsonb,
comparison_data JSONB DEFAULT '{}'::jsonb,
performance_score DECIMAL(3,2),
-- Content variants (JSONB array)
variants JSONB NOT NULL DEFAULT '[]'::jsonb
);
-- No responses table needed (users don't respond to analysis tools)
-- Collections and analytics tables remain the same

2. AI Service Implementation
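Before the service itself, here is a possible TypeScript shape for the `variants` JSONB column and per-variant analysis. The field names are assumptions chosen to mirror the schema above, not the app's actual types:

```typescript
// Assumed shapes for the variants JSONB column; field names are illustrative.
interface ContentVariant {
  approach: 'creative' | 'analytical' | 'balanced' | 'professional'
  content: string
}

interface AnalyzedVariant extends ContentVariant {
  analysis: {
    score: number       // 0..1, mirrors the performance_score DECIMAL(3,2) column
    strengths: string[]
    weaknesses: string[]
  }
}
```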
export class MultiVariantAI {
async generateContent(
options: GenerationOptions,
userId?: string,
sessionId?: string
): Promise<MultiVariantResult> {
const startTime = Date.now()
// Token and cost totals are accumulated by the tracking logic elided below
let totalTokens = 0
let cost = 0
// ✅ CRITICAL: Generate multiple approaches
const approaches = ['creative', 'analytical', 'balanced', 'professional']
const variants = await Promise.all(
approaches.map(approach => this.generateVariant(options, approach))
)
// ✅ CRITICAL: Analyze each variant
const analyzedVariants = await Promise.all(
variants.map(variant => this.analyzeVariant(variant, options))
)
// ✅ CRITICAL: Compare variants and provide recommendations
const analysis = this.compareVariants(analyzedVariants, options)
const comparison = await this.generateComparison(analyzedVariants, options)
// ✅ CRITICAL: Same tracking as Pattern 1
await trackAIUsage({
userId,
sessionId,
featureName: `${options.toolName}_generation`,
promptName: 'multi_variant_generation',
// ... same tracking fields
})
return {
variants: analyzedVariants,
analysis,
comparison,
recommendations: this.generateRecommendations(analysis),
metadata: {
generationTime: Date.now() - startTime,
totalTokens,
cost
}
}
}
private async generateVariant(options: GenerationOptions, approach: string): Promise<ContentVariant> {
// Approach-specific prompt engineering
const systemPrompt = this.buildVariantSystemPrompt(approach, options)
const userPrompt = this.buildVariantUserPrompt(approach, options)
// Generate single variant with approach-specific parameters
const response = await this.openai.chat.completions.create({
model: getRecommendedToolsModel(),
messages: [
{ role: 'system', content: systemPrompt },
{ role: 'user', content: userPrompt }
],
temperature: this.getTemperatureForApproach(approach),
// ... other approach-specific parameters
})
return this.parseVariantResponse(response, approach)
}
private async analyzeVariant(variant: ContentVariant, options: GenerationOptions): Promise<AnalyzedVariant> {
// Generate analysis for this specific variant
const analysisPrompt = this.buildAnalysisPrompt(variant, options)
const response = await this.openai.chat.completions.create({
model: getRecommendedToolsModel(),
messages: [
{ role: 'system', content: 'You are an expert content analyzer...' },
{ role: 'user', content: analysisPrompt }
],
response_format: { type: 'json_object' }
})
return {
...variant,
analysis: this.parseAnalysisResponse(response.choices[0].message.content!)
}
}
}

Multi-Step Form Standards
✅ CRITICAL: Form State Management Pattern
// ✅ Standard multi-step form structure for complex tools
interface FormState {
currentStep: number
formData: ToolFormData
errors: Record<string, string>
isValid: boolean
isSubmitting: boolean
}
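The hook below calls a `validateFormStep` helper that is not defined in this document. A minimal sketch follows, assuming a per-step required-fields map; both the map and the field names are hypothetical and would be replaced by each tool's real rules:

```typescript
// Hypothetical validator: each tool supplies its own per-step rules.
interface StepValidation { isValid: boolean; errors: Record<string, string> }

// Assumed required fields per step (illustrative only).
const REQUIRED_FIELDS: Record<number, string[]> = {
  1: ['title'],
  2: ['count'],
}

function validateFormStep(formData: Record<string, unknown>, step: number): StepValidation {
  const errors: Record<string, string> = {}
  for (const field of REQUIRED_FIELDS[step] ?? []) {
    const value = formData[field]
    if (value === undefined || value === null || value === '') {
      errors[field] = `${field} is required`
    }
  }
  return { isValid: Object.keys(errors).length === 0, errors }
}
```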
const useToolFormState = (initialData: ToolFormData) => {
const [state, setState] = useState<FormState>({
currentStep: 1,
formData: initialData,
errors: {},
isValid: false,
isSubmitting: false
})
// ✅ CRITICAL: Real-time validation
const updateFormData = useCallback((field: string, value: any) => {
setState(prev => {
const newFormData = { ...prev.formData, [field]: value }
const validation = validateFormStep(newFormData, prev.currentStep)
return {
...prev,
formData: newFormData,
errors: { ...prev.errors, ...validation.errors },
isValid: validation.isValid
}
})
}, [])
return {
state,
updateFormData,
validateStep: (step: number) => validateFormStep(state.formData, step),
canProceedToNextStep: () => validateFormStep(state.formData, state.currentStep).isValid
}
}

✅ CRITICAL: Form Element Consistency
// ✅ ALL form elements MUST use size="sm" for 40px height consistency
<Input
size="sm"
value={formData.field}
onChange={(e) => updateFormData('field', e.target.value)}
placeholder="Enter value..."
error={errors.field}
/>
<Select size="sm" value={formData.option} onValueChange={(value) => updateFormData('option', value)}>
<SelectTrigger>
<SelectValue placeholder="Select option..." />
</SelectTrigger>
<SelectContent>
<SelectItem value="option1">Option 1</SelectItem>
<SelectItem value="option2">Option 2</SelectItem>
</SelectContent>
</Select>
<Textarea
value={formData.description}
onChange={(e) => updateFormData('description', e.target.value)}
placeholder="Enter description..."
className="min-h-[100px]"
/>

Admin Management Standards
✅ CRITICAL: Bulk Generation Endpoints
// ✅ Every tool needs admin bulk generation capability
// File: /api/admin/[tool-name]/bulk-generate/route.ts
export async function POST(request: NextRequest) {
const startTime = Date.now()
try {
// ✅ CRITICAL: Admin authentication
const isAdmin = await validateAdminAccess(request)
if (!isAdmin) {
return NextResponse.json(
{ success: false, error: 'Admin access required' },
{ status: 403 }
)
}
const body = await request.json()
const { count, ...generationOptions } = body
// ✅ CRITICAL: Reasonable limits for bulk operations
if (count > 100) {
return NextResponse.json(
{ success: false, error: 'Maximum 100 prompts per bulk operation' },
{ status: 400 }
)
}
// ✅ CRITICAL: Generate in batches to avoid timeouts
const batchSize = 10
const batches = Math.ceil(count / batchSize)
const allContent = []
for (let i = 0; i < batches; i++) {
const batchCount = Math.min(batchSize, count - (i * batchSize))
const batchContent = await aiService.generateContent({
count: batchCount,
...generationOptions
})
allContent.push(...batchContent)
}
// ✅ CRITICAL: Save all generated content
const savedContent = await saveContentToDatabase(allContent, {
isReviewed: true, // Admin-generated content is pre-reviewed
isPublic: true
})
return NextResponse.json({
success: true,
data: {
generated: allContent.length,
saved: savedContent.length,
processingTime: Date.now() - startTime
}
})
} catch (error) {
console.error('Bulk generation error:', error)
return NextResponse.json(
{ success: false, error: 'Failed to generate content' },
{ status: 500 }
)
}
}

✅ CRITICAL: Quality Review System
// ✅ Admin endpoints for content quality management
// /api/admin/[tool-name]/[id]/review/route.ts
// /api/admin/[tool-name]/[id]/feature/route.ts
// /api/admin/[tool-name]/analytics/route.ts
// /api/admin/[tool-name]/collections/route.ts

Testing Standards
✅ CRITICAL: Comprehensive Test Coverage
// ✅ Unit Tests (>80% coverage required)
describe('ToolNameAI', () => {
test('generates valid prompts', async () => {
const result = await new ToolNameAI().generatePrompts(mockOptions)
expect(result).toHaveLength(5)
expect(result[0]).toMatchSchema(promptSchema)
})
test('tracks AI usage correctly', async () => {
await new ToolNameAI().generatePrompts(mockOptions, 'user123', 'session456')
expect(trackAIUsage).toHaveBeenCalledWith(expect.objectContaining({
featureName: 'tool_name_generation',
userId: 'user123',
sessionId: 'session456'
}))
})
})
// ✅ Integration Tests (All API endpoints)
describe('POST /api/tool-name/generate', () => {
test('successful generation with valid data', async () => {
const response = await request(app)
.post('/api/tool-name/generate')
.send(validPayload)
expect(response.status).toBe(200)
expect(response.body.success).toBe(true)
expect(response.body.data.prompts).toHaveLength(5)
})
test('rate limiting works correctly', async () => {
// Make requests up to limit
// Verify rate limit response
})
test('handles invalid input properly', async () => {
const response = await request(app)
.post('/api/tool-name/generate')
.send(invalidPayload)
expect(response.status).toBe(400)
expect(response.body.success).toBe(false)
})
})
// ✅ E2E Tests (Complete user flows)
describe('Tool Generation Flow', () => {
test('complete generation, response, and share flow', async () => {
// Generate prompts
// Write response to prompt
// Share response publicly
// Verify all steps work end-to-end
})
})

Performance Standards
✅ CRITICAL: Response Time Requirements
- API Endpoints: < 500ms for simple operations
- AI Generation: < 10 seconds for content generation
- Page Load: < 2 seconds for initial page load
- Export Generation: < 5 seconds for most formats
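The response-time targets above can be enforced with a small timing wrapper around expensive operations. This is a sketch, not an existing utility; the label and budget values are assumptions:

```typescript
// Sketch: run an async operation and warn when it exceeds its time budget.
async function withBudget<T>(
  label: string,
  budgetMs: number,
  fn: () => Promise<T>
): Promise<T> {
  const start = Date.now()
  try {
    return await fn()
  } finally {
    const elapsed = Date.now() - start
    if (elapsed > budgetMs) {
      // Surface budget violations so regressions show up in logs
      console.warn(`${label} took ${elapsed}ms (budget ${budgetMs}ms)`)
    }
  }
}
```

For example, an export handler could wrap its work in `withBudget('pdf_export', 5000, ...)` to match the 5-second export target.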
✅ CRITICAL: Scalability Requirements
- Concurrent Users: Support 100+ concurrent users
- Database Queries: Optimized with proper indexes
- Rate Limiting: Graceful degradation under load
- AI Service: Retry logic with exponential backoff
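The retry-with-exponential-backoff requirement above can be sketched as a generic helper. This is an assumed utility (not part of the existing codebase), and the attempt counts and delays are illustrative:

```typescript
// Sketch: retry an async operation with exponential backoff.
// Delays grow as baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ...
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn()
    } catch (error) {
      lastError = error
      const delay = baseDelayMs * 2 ** attempt
      await new Promise(resolve => setTimeout(resolve, delay))
    }
  }
  throw lastError
}
```

An AI service call would then read `await withBackoff(() => this.openai.chat.completions.create(...))`, keeping transient provider errors from surfacing to users.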
Security Standards
✅ CRITICAL: Input Validation & Sanitization
// ✅ Always use Zod for request validation
const generateSchema = z.object({
content: z.string().min(10).max(50000),
options: z.object({
// Tool-specific validation
})
})
// ✅ Sanitize all AI responses
const sanitizeAIResponse = (response: string): string => {
return response.replace(/<script[^>]*>.*?<\/script>/gi, '')
}
// ✅ Content validation for user responses
const contentValidation = validateApiContent({ title, content })
if (!contentValidation.isValid) {
return NextResponse.json({
success: false,
error: 'Content validation failed',
details: contentValidation.errors
}, { status: 400 })
}Implementation Checklist
✅ Before Starting Development
- ✅ Choose appropriate architecture pattern (Story Prompt vs Multi-Variant)
- ✅ Define tool-specific database fields and validation rules
- ✅ Plan AI prompt engineering approach
- ✅ Design rate limiting strategy based on expected usage
- ✅ Create business rules documentation
✅ During Development
- ✅ Follow database naming conventions (tools_[tool_name])
- ✅ Implement comprehensive AI usage tracking
- ✅ Use consistent form element sizing (size="sm")
- ✅ Add proper error handling and validation
- ✅ Include CORS headers for web-app integration
- ✅ Write unit, integration, and E2E tests
✅ Before Production
- ✅ Test performance under load (>100 concurrent users)
- ✅ Verify rate limiting works correctly
- ✅ Confirm admin endpoints are secure and functional
- ✅ Validate SEO metadata generation
- ✅ Test export functionality for all formats
- ✅ Verify analytics tracking works end-to-end
This architecture ensures consistent, scalable, and maintainable implementation across all 16 tools while maximizing code reuse and maintaining excellent user experience.