# AI Integration
The Trial Experience uses AI for content validation and story generation. All AI features integrate with the admin-app logging system for usage tracking, cost monitoring, and dynamic model switching.
## AI Features Overview
| Feature Name | Purpose | Default Model |
|---|---|---|
| `trial_content_validation` | Validates trial content quality before signup | `gpt-4o-mini` |
| `trial_story_generation` | Converts trial content to formatted story | `gpt-4o` |
| `trial_conversation` | AI conversation during trial (existing Elena) | `gpt-4o` |
## Admin-App AI Management Integration

All trial AI features must use the existing admin-app logging system at `/apps/admin-app/src/lib/ai-management.ts`.
### Registering Trial Features

Add the trial AI features to the `ai_features` table:
```sql
INSERT INTO ai_features (name, description, default_model, is_enabled) VALUES
('trial_content_validation', 'Validates trial content quality before signup', 'gpt-4o-mini', true),
('trial_story_generation', 'Converts trial content to formatted story', 'gpt-4o', true),
('trial_conversation', 'AI conversation during trial mode', 'gpt-4o', true);
```

### Usage Logging Pattern

Every AI call must log usage for tracking and billing:
```typescript
import { logAIUsage } from '@/lib/ai/usage-tracker'

// After each AI call
await logAIUsage({
  userId: trialSessionId, // Use the trial session ID for anonymous users
  featureName: 'trial_content_validation',
  modelName: 'gpt-4o-mini',
  provider: 'openai',
  inputTokens: usage.prompt_tokens,
  outputTokens: usage.completion_tokens,
  costUsd: calculateCost(usage, 'gpt-4o-mini'),
  responseTimeMs: endTime - startTime,
  success: true,
  metadata: {
    trialSessionId,
    contentType: 'voice_recording', // or 'ai_conversation'
    qualityScore: 85
  }
})
```

### Dynamic Model Selection

Use the server AI service to get the currently configured model:
```typescript
import { getModelForFeature } from '@/lib/ai/server-ai-service'

// Get the model from the admin dashboard configuration
const model = await getModelForFeature('trial_content_validation')
// Returns: 'gpt-4o-mini' (or whatever is currently configured)

// Use this model for the AI call
const response = await openai.chat.completions.create({
  model: model,
  messages: [...],
})
```

This allows switching models in the admin dashboard without code changes.
## Content Validation

### `/api/trial/validate/route.ts`

Validates trial content before allowing signup:
```typescript
import { NextRequest, NextResponse } from 'next/server'
import OpenAI from 'openai'
import { logAIUsage } from '@/lib/ai/usage-tracker'
import { getModelForFeature } from '@/lib/ai/server-ai-service'
// createClient (Supabase), getTrialContent, and calculateCost are existing
// project helpers; their imports are omitted here.

const openai = new OpenAI()

export async function POST(request: NextRequest) {
  const { trialSessionId } = await request.json()
  const supabase = await createClient()

  // Fetch the trial session
  const { data: session } = await supabase
    .from('trial_sessions')
    .select('*')
    .eq('id', trialSessionId)
    .single()

  // Get recordings or conversation content
  const content = await getTrialContent(supabase, trialSessionId)

  // Get the configured model
  const model = await getModelForFeature('trial_content_validation')
  const startTime = Date.now()

  // Validate with AI
  const response = await openai.chat.completions.create({
    model: model,
    messages: [
      {
        role: 'system',
        content: `You are a content quality validator for a family story collection app.

Analyze the provided content and determine if it's sufficient for creating a meaningful story.

Criteria for valid content:
- Contains personal memories, experiences, or reflections
- Has enough detail to create a 2-3 page story
- Is coherent and narrative-worthy (not just noise or random words)
- For voice: at least 5 minutes of meaningful speech
- For conversation: at least 5 substantive exchanges with details

Return JSON with:
- isValid: boolean
- qualityScore: 0-100
- suggestions: array of specific improvement suggestions
- estimatedStoryLength: string (e.g., "2-3 pages")`
      },
      {
        role: 'user',
        content: `Validate this content:\n\n${content.text}`
      }
    ],
    response_format: { type: 'json_object' }
  })

  const endTime = Date.now()
  const result = JSON.parse(response.choices[0].message.content)

  // Log usage
  await logAIUsage({
    userId: trialSessionId,
    featureName: 'trial_content_validation',
    modelName: model,
    provider: 'openai',
    inputTokens: response.usage.prompt_tokens,
    outputTokens: response.usage.completion_tokens,
    costUsd: calculateCost(response.usage, model),
    responseTimeMs: endTime - startTime,
    success: true,
    metadata: {
      trialSessionId,
      contentType: content.type,
      qualityScore: result.qualityScore,
      isValid: result.isValid
    }
  })

  return NextResponse.json(result)
}
```

### Validation Criteria

| Content Type | Minimum for Pass | Quality Indicators |
|---|---|---|
| Voice Recording | 5+ minutes | Clear speech, narrative content, personal details |
| AI Conversation | 5+ exchanges | Detailed responses, personal memories shared |
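
These minimums can be enforced with a cheap local pre-check before spending tokens on the AI validator. A minimal sketch, assuming the object returned by `getTrialContent()` exposes `durationSeconds` and `exchangeCount` fields (hypothetical names, not confirmed by the codebase):

```typescript
// Hypothetical shape of the content returned by getTrialContent()
interface TrialContent {
  type: 'voice_recording' | 'ai_conversation'
  text: string
  durationSeconds?: number // voice recordings only (assumed field)
  exchangeCount?: number   // AI conversations only (assumed field)
}

// Cheap pre-check run before calling the AI validator
function meetsMinimumContent(content: TrialContent): boolean {
  if (content.type === 'voice_recording') {
    return (content.durationSeconds ?? 0) >= 5 * 60 // 5+ minutes of audio
  }
  return (content.exchangeCount ?? 0) >= 5 // 5+ substantive exchanges
}
```

If the pre-check fails, the route can return suggestions immediately without an AI call.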
### Validation Response

```typescript
interface ValidationResult {
  isValid: boolean             // Can proceed to signup
  qualityScore: number         // 0-100
  suggestions: string[]        // Improvements if not valid
  estimatedStoryLength: string // "2-3 pages", "5-6 pages", etc.
}
```

## Story Generation
### Async Pipeline

Story generation runs asynchronously during onboarding so the user is never blocked waiting on the AI call. The enqueue step is sketched below; the worker itself is shown under Story Generation Implementation.
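
A minimal sketch of the enqueue step, assuming the signup flow creates a `story_generation_jobs` row and then triggers `generateStoryFromTrial` without awaiting it. The import path, the `enqueueStoryGeneration` name, and the initial `'pending'` status are assumptions for illustration:

```typescript
import type { SupabaseClient } from '@supabase/supabase-js'
import { generateStoryFromTrial } from '@/lib/trial/story-generator'

// Called from the signup/onboarding flow once the trial user has an account
export async function enqueueStoryGeneration(
  supabase: SupabaseClient,
  trialSessionId: string,
  userId: string
) {
  // Create a job record so progress and failures are visible
  const { data: job } = await supabase
    .from('story_generation_jobs')
    .insert({ trial_session_id: trialSessionId, user_id: userId, status: 'pending' })
    .select()
    .single()

  // Fire and forget: do not await, so onboarding is not blocked
  generateStoryFromTrial(job.id).catch((err) =>
    console.error('Story generation failed', err)
  )

  return job.id
}
```

In a serverless deployment the un-awaited call may need to become a queued background job or scheduled worker instead, since the platform can terminate the request before generation finishes.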
### Story Generation Prompt

```typescript
const storyGenerationPrompt = `You are a professional memoir writer helping families preserve their stories.

Transform the following raw content (from a voice recording transcription or AI conversation) into a beautifully written story.

Guidelines:
- Write in first person, preserving the storyteller's voice
- Organize content chronologically or thematically
- Add paragraph breaks for readability
- Include sensory details and emotional depth
- Aim for 500-1000 words (2-4 pages when printed)
- Preserve key quotes and memorable phrases exactly as spoken
- Add a compelling title that captures the essence of the story

Content type: ${contentType}

Raw content:
${rawContent}

Return JSON with:
- title: string
- content: string (the formatted story)
- wordCount: number
- suggestedChapter: string (e.g., "Early Years", "Family Memories")
`
```

### Story Generation Implementation
```typescript
// /apps/web-app/lib/trial/story-generator.ts
import { logAIUsage } from '@/lib/ai/usage-tracker'
import { getModelForFeature } from '@/lib/ai/server-ai-service'
// createServiceClient, getTrialContent, calculateCost, and the openai client
// are existing project helpers; their imports are omitted here.

export async function generateStoryFromTrial(jobId: string) {
  const supabase = await createServiceClient()

  // Get job details
  const { data: job } = await supabase
    .from('story_generation_jobs')
    .select('*, trial_sessions(*)')
    .eq('id', jobId)
    .single()

  // Get trial content
  const content = await getTrialContent(supabase, job.trial_session_id)

  // Get the configured model
  const model = await getModelForFeature('trial_story_generation')
  const startTime = Date.now()

  try {
    // Generate the story
    const response = await openai.chat.completions.create({
      model: model,
      messages: [
        { role: 'system', content: storyGenerationPrompt },
        { role: 'user', content: content.text }
      ],
      response_format: { type: 'json_object' }
    })

    const endTime = Date.now()
    const result = JSON.parse(response.choices[0].message.content)

    // Create the story record
    const { data: story } = await supabase
      .from('stories')
      .insert({
        user_id: job.user_id,
        title: result.title,
        content: result.content,
        word_count: result.wordCount,
        source: 'trial',
        metadata: {
          trial_session_id: job.trial_session_id,
          suggested_chapter: result.suggestedChapter,
          content_type: content.type
        }
      })
      .select()
      .single()

    // Update the trial session with the story ID
    await supabase
      .from('trial_sessions')
      .update({ story_id: story.id })
      .eq('id', job.trial_session_id)

    // Update the job status
    await supabase
      .from('story_generation_jobs')
      .update({ status: 'completed', story_id: story.id })
      .eq('id', jobId)

    // Log AI usage
    await logAIUsage({
      userId: job.user_id,
      featureName: 'trial_story_generation',
      modelName: model,
      provider: 'openai',
      inputTokens: response.usage.prompt_tokens,
      outputTokens: response.usage.completion_tokens,
      costUsd: calculateCost(response.usage, model),
      responseTimeMs: endTime - startTime,
      success: true,
      metadata: {
        trialSessionId: job.trial_session_id,
        storyId: story.id,
        wordCount: result.wordCount
      }
    })

    return { success: true, storyId: story.id }
  } catch (error: any) {
    // Update the job with the error
    await supabase
      .from('story_generation_jobs')
      .update({ status: 'failed', error: error.message })
      .eq('id', jobId)

    // Log the failed attempt
    await logAIUsage({
      userId: job.user_id,
      featureName: 'trial_story_generation',
      modelName: model,
      provider: 'openai',
      inputTokens: 0,
      outputTokens: 0,
      costUsd: 0,
      responseTimeMs: Date.now() - startTime,
      success: false,
      errorMessage: error.message,
      metadata: { trialSessionId: job.trial_session_id }
    })

    throw error
  }
}
```

## Trial Conversation AI
The trial conversation mode uses the existing Elena AI with trial-specific modifications:
```typescript
// When isTrialMode is true in the ImmersiveConversation component
const trialSystemPrompt = `${baseElenaPrompt}

TRIAL MODE CONTEXT:
- This user is trying MyStoryFlow for the first time
- Guide them to share personal stories and memories
- Encourage detailed responses with follow-up questions
- After 5+ meaningful exchanges, gently mention saving their story
- Be warm and encouraging about the content they're creating`
```

## Cost Estimation

Expected costs per trial session:
| Feature | Model | Est. Tokens | Est. Cost |
|---|---|---|---|
| Content Validation | gpt-4o-mini | ~2,000 | $0.0003 |
| Story Generation | gpt-4o | ~5,000 | $0.025 |
| AI Conversation | gpt-4o | ~10,000 | $0.05 |
| Total per conversion | - | - | ~$0.075 |
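
The `calculateCost` helper referenced in the logging calls is not defined elsewhere in this section; a minimal sketch is below. The per-million-token rates are illustrative assumptions, not authoritative pricing — in practice the real rates should live in the admin-app configuration or a shared pricing table rather than being hard-coded:

```typescript
// Illustrative only: rates below are assumptions, not official OpenAI pricing
const PRICING_PER_1M_TOKENS: Record<string, { input: number; output: number }> = {
  'gpt-4o-mini': { input: 0.15, output: 0.6 },
  'gpt-4o': { input: 2.5, output: 10.0 },
}

export function calculateCost(
  usage: { prompt_tokens: number; completion_tokens: number },
  model: string
): number {
  const rates = PRICING_PER_1M_TOKENS[model]
  if (!rates) return 0 // Unknown model: log zero rather than guessing

  const inputCost = (usage.prompt_tokens / 1_000_000) * rates.input
  const outputCost = (usage.completion_tokens / 1_000_000) * rates.output
  return inputCost + outputCost
}
```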
### Cost Alerts

Configure in the admin dashboard (the threshold logic is sketched below):

- Daily budget: $50/day for trial features
- Alerts at 50%, 75%, 90%, and 100% of the daily budget
- Anomaly detection for usage spikes
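
A minimal sketch of the threshold check, assuming the dashboard stores a daily budget and a list of alert thresholds; the names below are hypothetical and do not reflect the admin-app's actual API:

```typescript
// Hypothetical alert configuration mirroring the dashboard settings above
const TRIAL_AI_BUDGET = {
  dailyBudgetUsd: 50,
  alertThresholds: [0.5, 0.75, 0.9, 1.0],
}

// Returns the thresholds crossed by today's spend, e.g. $38 -> [0.5, 0.75]
function crossedThresholds(dailySpendUsd: number): number[] {
  const ratio = dailySpendUsd / TRIAL_AI_BUDGET.dailyBudgetUsd
  return TRIAL_AI_BUDGET.alertThresholds.filter((t) => ratio >= t)
}
```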
## Model Switching

To switch models without code changes:

- Go to Admin Dashboard → AI Features
- Find `trial_content_validation` or `trial_story_generation`
- Update the model configuration
- Changes take effect immediately

Use cases:

- Switch to a cheaper model during high-traffic periods
- A/B test different models for quality
- Upgrade to newer model versions
## Error Handling

```typescript
// Retry strategy for AI calls
const MAX_RETRIES = 3
const RETRY_DELAY = 1000 // ms

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms))

async function callWithRetry<T>(fn: () => Promise<T>, retries = MAX_RETRIES): Promise<T> {
  try {
    return await fn()
  } catch (error) {
    if (retries > 0 && isRetryableError(error)) {
      // Linear backoff: wait longer on each successive attempt
      await sleep(RETRY_DELAY * (MAX_RETRIES - retries + 1))
      return callWithRetry(fn, retries - 1)
    }
    throw error
  }
}

function isRetryableError(error: any): boolean {
  return error.status === 429 ||     // Rate limited
         error.status === 503 ||     // Service unavailable
         error.code === 'ETIMEDOUT'  // Network timeout
}
```
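
As a usage sketch, the retry wrapper can surround any of the trial AI calls. Here it wraps the validation request, assuming the `openai` client and `model` from the route above and a prepared `messages` array:

```typescript
// Retry transient failures (rate limits, timeouts) around the validation call
const response = await callWithRetry(() =>
  openai.chat.completions.create({
    model,
    messages,
    response_format: { type: 'json_object' },
  })
)
```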
## Next Steps

- Implementation Guide - Step-by-step development
- Testing Scenarios - Test cases