F004 - AI Service Integration Setup
Objective
Set up the AI service integrations for the MyStoryFlow manuscript analyzer: OpenAI GPT-4 and Claude 3 for intelligent manuscript analysis, embeddings generation for semantic search, and model routing with fallback mechanisms for resilience.
Quick Implementation
Using MyStoryFlow Components
- API route handlers from apps/story-analyzer/src/app/api/
- Environment variable management from @mystoryflow/shared
- Error handling middleware from @mystoryflow/shared
- Rate limiting utilities from @mystoryflow/shared
- Logger from @mystoryflow/logger
- AI core utilities from @mystoryflow/ai-core
New Requirements
- OpenAI and Anthropic SDK installations
- AI service wrapper classes with manuscript-specific logic
- Genre-aware model selection
- Token optimization and batch processing
- Embeddings generation for semantic search
- Circuit breaker pattern for API resilience (a minimal sketch follows this list)
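The CircuitBreaker used below is imported from @mystoryflow/ai-core and its implementation is not shown in this document. The following is a minimal sketch of the interface the service code assumes: a threshold of consecutive failures, a cooldown timeout, and an execute wrapper.

// Minimal sketch of the assumed @mystoryflow/ai-core CircuitBreaker interface.
export class CircuitBreaker {
  private failures = 0
  private openedAt = 0

  constructor(private options: { threshold: number; timeout: number }) {}

  async execute<T>(fn: () => Promise<T>): Promise<T> {
    // While open, fail fast until the cooldown window has elapsed
    if (this.failures >= this.options.threshold) {
      if (Date.now() - this.openedAt < this.options.timeout) {
        throw new Error('Circuit breaker is open')
      }
      this.failures = 0 // half-open: let one trial call through
    }
    try {
      const result = await fn()
      this.failures = 0 // a success closes the circuit
      return result
    } catch (error) {
      this.failures += 1
      if (this.failures >= this.options.threshold) this.openedAt = Date.now()
      throw error
    }
  }
}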
MVP Implementation
1. Package Installation
# In packages/manuscript-analysis directory
npm install openai @anthropic-ai/sdk tiktoken p-retry p-queue
# In packages/ai-core directory (shared AI utilities)
npm install @pinecone-database/pinecone langchain @langchain/openai
2. Environment Configuration
# AI Service Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
PINECONE_API_KEY=...
PINECONE_ENVIRONMENT=...
# Model Configuration
OPENAI_MODEL=gpt-4-turbo-preview
OPENAI_EMBEDDING_MODEL=text-embedding-3-large
ANTHROPIC_MODEL=claude-3-opus-20240229
# Performance Settings
AI_TIMEOUT=120000 # 2 minutes per request
AI_MAX_RETRIES=3
AI_BATCH_SIZE=5
AI_RATE_LIMIT=100 # requests per minute
AI_CACHE_TTL=3600 # 1 hour cache
# Cost Optimization
AI_MAX_TOKENS_PER_REQUEST=4000
AI_QUALITY_THRESHOLD=0.8
AI_USE_CACHE=true
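Every service below reads these variables at construction time, so it is worth validating them once at startup. A minimal sketch; assertAIEnv is a hypothetical helper, not an existing MyStoryFlow utility:

// Hypothetical startup guard: fail fast when required AI keys are missing.
const REQUIRED_AI_ENV = ['OPENAI_API_KEY', 'ANTHROPIC_API_KEY', 'PINECONE_API_KEY'] as const

export function assertAIEnv(): void {
  const missing = REQUIRED_AI_ENV.filter((key) => !process.env[key])
  if (missing.length > 0) {
    throw new Error(`Missing required AI environment variables: ${missing.join(', ')}`)
  }
}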
3. Core AI Service Class
// packages/manuscript-analysis/src/services/ai-service.ts
import OpenAI from 'openai'
import Anthropic from '@anthropic-ai/sdk'
import { encoding_for_model, Tiktoken } from 'tiktoken'
import { createHash } from 'crypto'
import pRetry from 'p-retry'
import PQueue from 'p-queue'
import { Logger } from '@mystoryflow/logger'
import { CircuitBreaker } from '@mystoryflow/ai-core'
import { CacheService } from '@mystoryflow/cache'
import { ModelRouter } from './model-router'
import type { ManuscriptAnalysisType, AnalysisOptions, ManuscriptAnalysisResult } from '../types/ai-types'
export class AIService {
private openai: OpenAI
private anthropic: Anthropic
private tokenEncoder: Tiktoken
private logger: Logger
private cache: CacheService
private circuitBreaker: CircuitBreaker
private queue: PQueue
private modelRouter: ModelRouter
constructor() {
this.openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
timeout: parseInt(process.env.AI_TIMEOUT || '120000'),
maxRetries: parseInt(process.env.AI_MAX_RETRIES || '3')
})
this.anthropic = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY,
timeout: parseInt(process.env.AI_TIMEOUT || '120000')
})
this.tokenEncoder = encoding_for_model('gpt-4')
this.logger = new Logger('AIService')
this.cache = new CacheService({ ttl: parseInt(process.env.AI_CACHE_TTL || '3600') })
this.circuitBreaker = new CircuitBreaker({ threshold: 5, timeout: 60000 })
this.queue = new PQueue({ concurrency: 5, interval: 60000, intervalCap: parseInt(process.env.AI_RATE_LIMIT || '100') })
this.modelRouter = new ModelRouter()
}
async analyzeManuscript(
content: string,
analysisType: ManuscriptAnalysisType,
options: AnalysisOptions = {}
): Promise<ManuscriptAnalysisResult> {
const cacheKey = this.generateCacheKey(content, analysisType, options)
// Check cache first
if (process.env.AI_USE_CACHE === 'true') {
const cached = await this.cache.get(cacheKey)
if (cached) {
this.logger.info('Returning cached analysis', { analysisType })
return cached
}
}
// Execute with circuit breaker protection
return this.circuitBreaker.execute(async () => {
const result = await this.queue.add(async () => {
const prompt = this.buildManuscriptPrompt(content, analysisType, options)
const model = this.modelRouter.selectOptimalModel(analysisType, options.genre, options.costOptimize)
try {
const analysis = model.startsWith('gpt')
? await this.analyzeWithOpenAI(prompt, options.maxTokens || 4000)
: await this.analyzeWithClaude(prompt, options.maxTokens || 4000)
// Cache successful results
if (process.env.AI_USE_CACHE === 'true') {
await this.cache.set(cacheKey, analysis)
}
return analysis
} catch (error) {
this.logger.error('Primary model failed', { model, error })
return this.analyzeWithFallback(prompt, analysisType, options.maxTokens || 4000)
}
})
// p-queue's add() can resolve to void if a task is aborted; assert the type here
return result as ManuscriptAnalysisResult
})
}
private buildManuscriptPrompt(
content: string,
analysisType: ManuscriptAnalysisType,
options: AnalysisOptions
): string {
const prompts: Partial<Record<ManuscriptAnalysisType, string>> = {
overall: `Analyze this manuscript excerpt and provide a comprehensive evaluation including:
1. Writing quality and style
2. Plot structure and pacing
3. Character development
4. Dialogue effectiveness
5. Market potential
6. Areas for improvement
Genre: ${options.genre || 'General Fiction'}
Content: ${content}`,
character: `Analyze the character development in this manuscript:
1. Character depth and complexity
2. Character arcs and growth
3. Dialogue authenticity
4. Character relationships
5. Protagonist/antagonist dynamics
Content: ${content}`,
plot: `Evaluate the plot structure:
1. Story arc and progression
2. Conflict and tension
3. Pacing and rhythm
4. Plot holes or inconsistencies
5. Resolution effectiveness
Content: ${content}`,
style: `Assess the writing style:
1. Voice and tone consistency
2. Prose quality
3. Descriptive effectiveness
4. Readability
5. Genre appropriateness
Genre: ${options.genre || 'General Fiction'}
Content: ${content}`,
market: `Analyze market potential:
1. Genre fit and conventions
2. Target audience appeal
3. Comparable titles
4. Commercial viability
5. Unique selling points
Genre: ${options.genre || 'General Fiction'}
Content: ${content}`
}
// dialogue, pacing, and worldBuilding currently fall back to the overall prompt
return prompts[analysisType] ?? prompts.overall!
}
private async analyzeWithOpenAI(
prompt: string,
maxTokens: number
): Promise<any> {
const response = await this.openai.chat.completions.create({
model: process.env.OPENAI_MODEL || 'gpt-4-turbo-preview',
messages: [
{
role: 'system',
// json_object mode requires the word "JSON" to appear in the messages
content: 'You are a professional manuscript editor. Provide your detailed analysis as a JSON object.'
},
{ role: 'user', content: prompt }
],
temperature: 0.3,
max_tokens: maxTokens,
response_format: { type: 'json_object' }
})
return JSON.parse(response.choices[0].message.content || '{}')
}
private async analyzeWithClaude(
prompt: string,
maxTokens: number
): Promise<any> {
const response = await this.anthropic.messages.create({
model: process.env.ANTHROPIC_MODEL || 'claude-3-opus-20240229',
max_tokens: maxTokens,
temperature: 0.3,
// Claude has no JSON response mode, so request JSON explicitly in the prompt
messages: [{ role: 'user', content: `${prompt}\n\nRespond with a single JSON object.` }]
})
// Claude returns a list of content blocks; only text blocks carry the analysis
const block = response.content[0]
if (block.type !== 'text') {
throw new Error(`Unexpected Claude content block: ${block.type}`)
}
return JSON.parse(block.text)
}
private generateCacheKey(
content: string,
analysisType: ManuscriptAnalysisType,
options: AnalysisOptions
): string {
// Hash the inputs so identical requests share a cache entry
return createHash('sha256')
.update(`${analysisType}:${options.genre ?? ''}:${options.maxTokens ?? ''}:${content}`)
.digest('hex')
}
private async analyzeWithFallback(
prompt: string,
analysisType: ManuscriptAnalysisType,
maxTokens: number
): Promise<any> {
// Route to the secondary model, with retries, when the primary fails
const fallback = this.modelRouter.getFallbackModel('', analysisType)
this.logger.info('Falling back to secondary model', { fallback, analysisType })
return pRetry(
() =>
fallback.startsWith('gpt')
? this.analyzeWithOpenAI(prompt, maxTokens)
: this.analyzeWithClaude(prompt, maxTokens),
{ retries: parseInt(process.env.AI_MAX_RETRIES || '3') }
)
}
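// countTokens (below) lets callers estimate request size, and therefore cost,
// before dispatching, e.g. to decide whether an excerpt must be chunked first:
//   const tokens = aiService.countTokens(manuscriptText)  // hypothetical caller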
countTokens(text: string): number {
return this.tokenEncoder.encode(text).length
}
}
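A hypothetical caller, using the ManuscriptAnalysisType and AnalysisOptions types defined at the end of this document:

// Hypothetical usage from an analysis job.
import { AIService } from './ai-service'

export async function runStyleAnalysis(manuscriptText: string) {
  const aiService = new AIService()
  // Estimate request size up front; oversized excerpts should be chunked first
  const tokens = aiService.countTokens(manuscriptText)
  console.log(`Submitting ~${tokens} tokens for style analysis`)
  return aiService.analyzeManuscript(manuscriptText, 'style', {
    genre: 'mystery',
    maxTokens: 4000
  })
}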
4. Enhanced Model Router with Genre Awareness
// packages/manuscript-analysis/src/services/model-router.ts
import { Logger } from '@mystoryflow/logger'
export class ModelRouter {
private logger = new Logger('ModelRouter')
private modelPreferences: Record<string, { primary: string; fallback: string }> = {
// Analysis type preferences
overall: { primary: 'claude-3', fallback: 'gpt-4' },
character: { primary: 'gpt-4', fallback: 'claude-3' },
plot: { primary: 'claude-3', fallback: 'gpt-4' },
style: { primary: 'gpt-4', fallback: 'claude-3' },
market: { primary: 'gpt-4', fallback: 'claude-3' },
dialogue: { primary: 'gpt-4', fallback: 'claude-3' },
worldBuilding: { primary: 'claude-3', fallback: 'gpt-4' },
pacing: { primary: 'gpt-4', fallback: 'claude-3' }
}
private genreOptimizations: Record<string, { preferred: string; strength: string }> = {
'literary-fiction': { preferred: 'claude-3', strength: 'nuanced analysis' },
'genre-fiction': { preferred: 'gpt-4', strength: 'market conventions' },
'romance': { preferred: 'gpt-4', strength: 'emotional beats' },
'mystery': { preferred: 'claude-3', strength: 'plot complexity' },
'fantasy': { preferred: 'claude-3', strength: 'world-building' },
'sci-fi': { preferred: 'claude-3', strength: 'concept exploration' },
'thriller': { preferred: 'gpt-4', strength: 'pacing analysis' }
}
selectOptimalModel(
analysisType: string,
genre?: string,
costOptimize: boolean = false
): string {
// Cost optimization mode
if (costOptimize && this.canUseEconomicalModel(analysisType)) {
return 'gpt-3.5-turbo'
}
// Genre-specific optimization
if (genre && this.genreOptimizations[genre]) {
const genrePref = this.genreOptimizations[genre]
this.logger.info('Using genre-optimized model', { genre, model: genrePref.preferred })
return genrePref.preferred
}
// Default to analysis type preference
const pref = this.modelPreferences[analysisType] || this.modelPreferences.overall
return pref.primary
}
getFallbackModel(_primaryModel: string, analysisType: string): string {
const pref = this.modelPreferences[analysisType] || this.modelPreferences.overall
return pref.fallback
}
private canUseEconomicalModel(analysisType: string): boolean {
// Simple analyses can use cheaper models
return ['wordCount', 'readability', 'basicGrammar'].includes(analysisType)
}
}
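Routing behavior, illustrated against the preference tables above:

// Illustrative routing decisions.
const router = new ModelRouter()
router.selectOptimalModel('plot', 'fantasy')              // 'claude-3' (genre preference wins)
router.selectOptimalModel('character')                    // 'gpt-4' (analysis-type preference)
router.selectOptimalModel('readability', undefined, true) // 'gpt-3.5-turbo' (economical mode)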
5. Embeddings Service for Semantic Search
// packages/ai-core/src/services/embeddings-service.ts
import { OpenAI } from 'openai'
import { Pinecone } from '@pinecone-database/pinecone'
import { Logger } from '@mystoryflow/logger'
export class EmbeddingsService {
private openai: OpenAI
private pinecone: Pinecone
private logger: Logger
constructor() {
this.openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
this.pinecone = new Pinecone({
apiKey: process.env.PINECONE_API_KEY!,
// environment is required by the pod-based (v1) client; serverless SDK versions take only apiKey
environment: process.env.PINECONE_ENVIRONMENT!
})
this.logger = new Logger('EmbeddingsService')
}
async generateEmbeddings(texts: string[]): Promise<number[][]> {
try {
const response = await this.openai.embeddings.create({
model: process.env.OPENAI_EMBEDDING_MODEL || 'text-embedding-3-large',
// text-embedding-3-large defaults to 3072 dimensions; request 1536 to
// match the vector(1536) column in the database schema below
dimensions: 1536,
input: texts
})
return response.data.map(item => item.embedding)
} catch (error) {
this.logger.error('Failed to generate embeddings', { error })
throw error
}
}
async storeManuscriptEmbeddings(
manuscriptId: string,
chunks: Array<{ text: string; metadata: any }>
): Promise<void> {
const index = this.pinecone.index('manuscript-embeddings')
// Generate embeddings for all chunks
const texts = chunks.map(c => c.text)
const embeddings = await this.generateEmbeddings(texts)
// Prepare vectors for storage
const vectors = chunks.map((chunk, i) => ({
id: `${manuscriptId}-chunk-${i}`,
values: embeddings[i],
metadata: {
...chunk.metadata,
manuscriptId,
text: chunk.text.substring(0, 1000) // Store preview
}
}))
// Store in batches
const batchSize = 100
for (let i = 0; i < vectors.length; i += batchSize) {
const batch = vectors.slice(i, i + batchSize)
await index.upsert(batch)
}
}
async searchSimilarContent(
query: string,
manuscriptId?: string,
topK: number = 10
): Promise<any[]> {
const index = this.pinecone.index('manuscript-embeddings')
// Generate query embedding
const [queryEmbedding] = await this.generateEmbeddings([query])
// Search with optional manuscript filter
const filter = manuscriptId ? { manuscriptId } : undefined
const results = await index.query({
vector: queryEmbedding,
topK,
includeMetadata: true,
filter
})
return results.matches || []
}
}
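A sketch of the indexing-and-search flow, with a hypothetical manuscript ID and chunk:

// Hypothetical indexing and search flow.
const embeddings = new EmbeddingsService()
await embeddings.storeManuscriptEmbeddings('ms-123', [
  { text: 'Chapter 1: The storm broke at midnight...', metadata: { chapter: 1 } }
])
const matches = await embeddings.searchSimilarContent('storm scenes', 'ms-123', 5)
console.log(matches.map((m) => m.metadata?.text))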
6. MyStoryFlow API Integration
// apps/story-analyzer/src/app/api/ai/analyze/route.ts
import { NextRequest, NextResponse } from 'next/server'
import { AIService, ManuscriptAnalysisType } from '@mystoryflow/manuscript-analysis'
import { withAuth } from '@mystoryflow/auth'
import { rateLimit } from '@mystoryflow/shared'
import { trackAIUsage } from '@mystoryflow/analytics'
import { Logger } from '@mystoryflow/logger'
const logger = new Logger('AI-Analyze-API')
export async function POST(req: NextRequest) {
// Declared outside the try block so the catch handler can reference them
let session: Awaited<ReturnType<typeof withAuth>> | null = null
let analysisType: string | undefined
try {
// Rate limiting with custom limits for AI endpoints
const rateLimitResult = await rateLimit(req, {
limit: 50,
window: '1h',
identifier: 'ai-analysis'
})
if (!rateLimitResult.success) {
return NextResponse.json(
{ error: 'Rate limit exceeded', retryAfter: rateLimitResult.retryAfter },
{ status: 429 }
)
}
// Auth check with organization context
session = await withAuth(req)
if (!session) {
return NextResponse.json(
{ error: 'Unauthorized' },
{ status: 401 }
)
}
const body = await req.json()
const { content, options = {} } = body
analysisType = body.analysisType
// Validate request
if (!content || !analysisType) {
return NextResponse.json(
{ error: 'Missing required fields' },
{ status: 400 }
)
}
// Check user's subscription tier for AI features
// getUserSubscriptionTier is assumed to come from the shared billing utilities
const userTier = await getUserSubscriptionTier(session.user.id)
const maxTokens = getMaxTokensForTier(userTier)
const aiService = new AIService()
const startTime = Date.now()
const result = await aiService.analyzeManuscript(
content,
analysisType as ManuscriptAnalysisType,
{
...options,
maxTokens: Math.min(options.maxTokens || 4000, maxTokens)
}
)
const processingTime = Date.now() - startTime
// Track AI usage for billing and analytics
await trackAIUsage({
userId: session.user.id,
organizationId: session.user.organizationId,
model: result.modelUsed,
tokens: result.tokensUsed,
cost: calculateCost(result.modelUsed, result.tokensUsed),
analysisType,
operation: 'manuscript-analysis',
processingTime,
success: true
})
logger.info('AI analysis completed', {
userId: session.user.id,
analysisType,
tokensUsed: result.tokensUsed,
processingTime
})
return NextResponse.json({
result: result.analysis,
metadata: {
modelUsed: result.modelUsed,
tokensUsed: result.tokensUsed,
processingTime
}
})
} catch (error) {
const message = error instanceof Error ? error.message : String(error)
logger.error('AI analysis error', { error })
// Track failed attempts
if (session?.user?.id) {
await trackAIUsage({
userId: session.user.id,
analysisType,
operation: 'manuscript-analysis',
success: false,
error: message
})
}
return NextResponse.json(
{ error: 'Analysis failed', details: message },
{ status: 500 }
)
}
}
// Helper functions
function getMaxTokensForTier(tier: string): number {
const limits: Record<string, number> = {
free: 2000,
starter: 4000,
professional: 8000,
enterprise: 16000
}
return limits[tier] || limits.free
}
function calculateCost(model: string, tokens: number): number {
// Illustrative per-1k-token rates; real pricing varies by model version
// and between input and output tokens, so check current provider pricing
const rates: Record<string, number> = {
'gpt-4': 0.03, // per 1k tokens
'gpt-3.5-turbo': 0.002, // per 1k tokens
'claude-3': 0.025 // per 1k tokens
}
const rate = rates[model] || 0.03
return (tokens / 1000) * rate
}
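From the client, the endpoint can be exercised like this (hypothetical fetch; auth is carried by the session cookie that @mystoryflow/auth manages):

// Hypothetical client call to the analyze endpoint.
async function requestAnalysis(excerpt: string) {
  const res = await fetch('/api/ai/analyze', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      content: excerpt,
      analysisType: 'overall',
      options: { genre: 'thriller' }
    })
  })
  if (res.status === 429) throw new Error('Rate limited; retry later')
  if (!res.ok) throw new Error(`Analysis failed with status ${res.status}`)
  return res.json() // { result, metadata: { modelUsed, tokensUsed, processingTime } }
}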
MVP Acceptance Criteria
- OpenAI GPT-4 integration working
- Claude 3 integration working
- Intelligent model routing by category
- Error handling with fallback
- Token counting for cost management
- Rate limiting protection
- MyStoryFlow auth integration
- AI usage tracking integration
7. Batch Processing for Chapters
// packages/manuscript-analysis/src/services/batch-processor.ts
import { AIService } from './ai-service'
// ChunkingService is assumed to be a sibling module exposing splitIntoChapters()
import { ChunkingService } from './chunking-service'
import { Logger } from '@mystoryflow/logger'
import PQueue from 'p-queue'
import type { BatchAnalysisResult } from '../types/ai-types'
export class BatchProcessor {
private aiService: AIService
private chunkingService: ChunkingService
private logger: Logger
private queue: PQueue
constructor() {
this.aiService = new AIService()
this.chunkingService = new ChunkingService()
this.logger = new Logger('BatchProcessor')
this.queue = new PQueue({
concurrency: parseInt(process.env.AI_BATCH_SIZE || '5'),
interval: 60000,
intervalCap: parseInt(process.env.AI_RATE_LIMIT || '100')
})
}
async processManuscriptBatch(
manuscriptId: string,
content: string,
analysisTypes: string[]
): Promise<BatchAnalysisResult> {
// Split into chapters/sections
const chunks = await this.chunkingService.splitIntoChapters(content)
this.logger.info('Processing manuscript in batches', {
manuscriptId,
chapterCount: chunks.length,
analysisTypes
})
const results = await Promise.all(
chunks.map((chunk, index) =>
this.queue.add(async () => {
const chapterResults: Record<string, any> = {}
for (const analysisType of analysisTypes) {
try {
const result = await this.aiService.analyzeManuscript(
chunk.content,
analysisType,
{
chapterNumber: index + 1,
chapterTitle: chunk.title
}
)
chapterResults[analysisType] = result
} catch (error) {
this.logger.error('Chapter analysis failed', {
chapter: index + 1,
analysisType,
error
})
chapterResults[analysisType] = { error: error.message }
}
}
return {
chapterNumber: index + 1,
chapterTitle: chunk.title,
wordCount: chunk.wordCount,
analyses: chapterResults
}
})
)
)
// Aggregate results
return this.aggregateResults(results, analysisTypes)
}
private aggregateResults(
chapterResults: any[],
analysisTypes: string[]
): BatchAnalysisResult {
const aggregated = {
totalChapters: chapterResults.length,
totalWordCount: chapterResults.reduce((sum, ch) => sum + ch.wordCount, 0),
byChapter: chapterResults,
summary: {} as Record<string, any>
}
// Create summary across all chapters
for (const analysisType of analysisTypes) {
const typeResults = chapterResults
.map(ch => ch.analyses[analysisType])
.filter(r => !r.error)
aggregated.summary[analysisType] = this.summarizeAnalysis(typeResults, analysisType)
}
return aggregated
}
// Minimal placeholder: a fuller version would average scores and merge findings
private summarizeAnalysis(results: any[], analysisType: string): any {
return { analysisType, chaptersAnalyzed: results.length, results }
}
}
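A hypothetical batch run over a full manuscript; fullManuscriptText is assumed to be loaded from file storage (F003):

// Hypothetical batch run over a full manuscript.
const processor = new BatchProcessor()
const batchResult = await processor.processManuscriptBatch(
  'ms-123',
  fullManuscriptText,
  ['plot', 'character', 'pacing']
)
console.log(`Analyzed ${batchResult.totalChapters} chapters,`,
  `${batchResult.totalWordCount} words`)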
Database Changes
-- Requires the pgvector extension for embedding columns:
-- CREATE EXTENSION IF NOT EXISTS vector;
-- Note: PostgreSQL does not support inline INDEX clauses inside CREATE TABLE,
-- so indexes are created with separate CREATE INDEX statements.

-- Enhanced AI usage tracking with organization support
CREATE TABLE analyzer.ai_usage (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  user_id UUID REFERENCES auth.users(id),
  organization_id UUID REFERENCES public.organizations(id),
  manuscript_id UUID REFERENCES analyzer.manuscripts(id),
  model_used VARCHAR(50),
  tokens_used INTEGER,
  cost_cents INTEGER,
  analysis_type VARCHAR(50),
  operation VARCHAR(100),
  processing_time_ms INTEGER,
  success BOOLEAN DEFAULT true,
  error_message TEXT,
  created_at TIMESTAMP DEFAULT NOW()
);
-- Indexes for performance
CREATE INDEX idx_ai_usage_user ON analyzer.ai_usage (user_id);
CREATE INDEX idx_ai_usage_org ON analyzer.ai_usage (organization_id);
CREATE INDEX idx_ai_usage_manuscript ON analyzer.ai_usage (manuscript_id);
CREATE INDEX idx_ai_usage_created ON analyzer.ai_usage (created_at);

-- AI analysis results storage
CREATE TABLE analyzer.ai_analyses (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  manuscript_id UUID REFERENCES analyzer.manuscripts(id),
  analysis_type VARCHAR(50),
  model_used VARCHAR(50),
  analysis_data JSONB,
  confidence_score DECIMAL(3,2),
  tokens_used INTEGER,
  processing_time_ms INTEGER,
  created_at TIMESTAMP DEFAULT NOW(),
  expires_at TIMESTAMP, -- For cache expiration
  -- Unique constraint for caching: one stored result per manuscript and type
  UNIQUE(manuscript_id, analysis_type)
);
CREATE INDEX idx_analyses_manuscript ON analyzer.ai_analyses (manuscript_id);
CREATE INDEX idx_analyses_type ON analyzer.ai_analyses (analysis_type);

-- Embeddings storage for semantic search
CREATE TABLE analyzer.manuscript_embeddings (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  manuscript_id UUID REFERENCES analyzer.manuscripts(id),
  chunk_index INTEGER,
  chunk_text TEXT,
  embedding_model VARCHAR(50),
  -- vector(1536) assumes embeddings are requested with dimensions: 1536;
  -- text-embedding-3-large defaults to 3072 dimensions, which exceeds
  -- ivfflat's 2000-dimension limit
  embedding_vector vector(1536),
  metadata JSONB,
  created_at TIMESTAMP DEFAULT NOW()
);
CREATE INDEX idx_embeddings_manuscript ON analyzer.manuscript_embeddings (manuscript_id);
CREATE INDEX idx_embeddings_vector ON analyzer.manuscript_embeddings
  USING ivfflat (embedding_vector vector_cosine_ops);

-- Model performance tracking
CREATE TABLE analyzer.model_performance (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  model_name VARCHAR(50),
  analysis_type VARCHAR(50),
  avg_processing_time_ms INTEGER,
  avg_tokens_used INTEGER,
  success_rate DECIMAL(5,2),
  quality_score DECIMAL(3,2),
  sample_count INTEGER,
  period_start TIMESTAMP,
  period_end TIMESTAMP
);
CREATE INDEX idx_performance_model ON analyzer.model_performance (model_name);
CREATE INDEX idx_performance_period ON analyzer.model_performance (period_start, period_end);
Error Handling and Monitoring
// packages/ai-core/src/monitoring/ai-monitor.ts
import { Logger } from '@mystoryflow/logger'
import { MetricsCollector } from '@mystoryflow/metrics'
export class AIMonitor {
private logger: Logger
private metrics: MetricsCollector
constructor() {
this.logger = new Logger('AIMonitor')
this.metrics = new MetricsCollector('ai-service')
}
trackRequest(params: {
model: string
operation: string
tokensUsed: number
latency: number
success: boolean
error?: string
}): void {
// Log the request
this.logger.info('AI request completed', params)
// Collect metrics
this.metrics.recordHistogram('ai.request.latency', params.latency, {
model: params.model,
operation: params.operation
})
this.metrics.recordCounter('ai.tokens.used', params.tokensUsed, {
model: params.model
})
if (!params.success) {
this.metrics.recordCounter('ai.errors', 1, {
model: params.model,
error: params.error
})
}
}
async checkModelHealth(): Promise<ModelHealthStatus> {
const models = ['gpt-4', 'claude-3', 'gpt-3.5-turbo']
const health = {}
for (const model of models) {
try {
const testStart = Date.now()
await this.testModel(model)
health[model] = {
status: 'healthy',
latency: Date.now() - testStart,
lastCheck: new Date()
}
} catch (error) {
health[model] = {
status: 'unhealthy',
error: error.message,
lastCheck: new Date()
}
}
}
return health
}
// Hypothetical probe: a real implementation might send a one-token request
// to each provider and throw when it is unreachable
private async testModel(model: string): Promise<void> {
// Implementation elided
}
}
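A hypothetical call after each AI request completes; the wrapper that gathers these values is left to the integration:

// Hypothetical tracking call for a completed request.
const monitor = new AIMonitor()
monitor.trackRequest({
  model: 'gpt-4',
  operation: 'manuscript-analysis',
  tokensUsed: 3200,
  latency: 4100, // milliseconds
  success: true
})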
Cost Optimization Strategies
// packages/ai-core/src/optimization/cost-optimizer.ts
export class CostOptimizer {
private costThresholds = {
low: 0.10, // $0.10 per analysis
medium: 0.50, // $0.50 per analysis
high: 1.00 // $1.00 per analysis
}
optimizeRequest(params: {
content: string
analysisType: string
userTier: string
monthlyUsage: number
}): OptimizationStrategy {
const contentLength = params.content.length
// Rough heuristic: English prose averages about 4 characters per token
const estimatedTokens = Math.ceil(contentLength / 4)
// Tier-based optimization
if (params.userTier === 'free') {
return {
model: 'gpt-3.5-turbo',
maxTokens: 2000,
sampling: true,
sampleSize: Math.min(contentLength, 10000)
}
}
// Usage-based optimization
if (params.monthlyUsage > 1000) {
return {
model: 'gpt-3.5-turbo',
maxTokens: 3000,
caching: true,
cacheTTL: 86400 // 24 hours
}
}
// Quality-first for premium users
return {
model: this.selectBestModel(params.analysisType),
maxTokens: 8000,
caching: true,
multiModel: true // Use consensus from multiple models
}
}
private selectBestModel(analysisType: string): string {
// Based on performance metrics
const modelScores = {
'gpt-4': { quality: 0.95, cost: 0.7 },
'claude-3': { quality: 0.93, cost: 0.6 },
'gpt-3.5-turbo': { quality: 0.8, cost: 0.1 }
}
// Weighted selection based on analysis type
return 'gpt-4' // Simplified for example
}
}
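A hypothetical optimization decision for a free-tier user:

// Hypothetical strategy selection for a free-tier request.
const optimizer = new CostOptimizer()
const strategy = optimizer.optimizeRequest({
  content: manuscriptExcerpt, // assumed to be in scope
  analysisType: 'overall',
  userTier: 'free',
  monthlyUsage: 12
})
// -> { model: 'gpt-3.5-turbo', maxTokens: 2000, sampling: true, sampleSize: ... }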
Post-MVP Enhancements
- Custom fine-tuned models for manuscript analysis
- Streaming responses for real-time feedback
- Multi-model consensus for critical analyses
- Advanced caching with semantic similarity
- Automated prompt optimization
- Usage analytics dashboard
- Cost prediction and budgeting tools
- A/B testing framework for model selection
Implementation Time
- Core AI Service: 1 day
- Embeddings & Search: 0.5 days
- Batch Processing: 0.5 days
- Monitoring & Optimization: 0.5 days
- Testing & Integration: 1 day
- Total: 3.5 days
Dependencies
- F000 - Monorepo setup (for package structure)
- F002 - Database schema (for storage tables)
- F003 - File storage (for manuscript content)
- Shared packages: @mystoryflow/logger, @mystoryflow/cache, @mystoryflow/auth
Types and Interfaces
// packages/manuscript-analysis/src/types/ai-types.ts
export type ManuscriptAnalysisType =
| 'overall'
| 'character'
| 'plot'
| 'style'
| 'dialogue'
| 'pacing'
| 'worldBuilding'
| 'market'
export interface AnalysisOptions {
genre?: string
targetAudience?: string
maxTokens?: number
chapterNumber?: number
chapterTitle?: string
costOptimize?: boolean
}
export interface ManuscriptAnalysisResult {
analysis: {
summary: string
scores: Record<string, number>
strengths: string[]
weaknesses: string[]
recommendations: string[]
marketPotential: MarketAnalysis
}
modelUsed: string
tokensUsed: number
confidence: number
}
export interface BatchAnalysisResult {
totalChapters: number
totalWordCount: number
byChapter: ChapterAnalysis[]
summary: Record<string, AnalysisSummary>
}
// MarketAnalysis, ChapterAnalysis, AnalysisSummary, ModelHealthStatus, and
// OptimizationStrategy are referenced above; their shapes are elided here.
Next Feature
After completion, proceed to F005-DOCUMENT-UPLOAD to handle manuscript file uploads, then F007-MANUSCRIPT-ANALYSIS for the complete analysis workflows.