Building Multi-Agent Email Sequences with Mastra and Memory

March 7, 2025

Building AI agents is trendy, but building AI agents that actually remember context and produce useful outputs is hard. After experimenting with LangChain, AutoGPT, and other frameworks, I discovered Mastra—a lightweight multi-agent framework that makes it easy to build AI assistants with persistent memory.

In this post, I'll show you how I built an Email Sequence Agent that generates personalized sales email campaigns with memory across conversations.

The Problem: Stateless AI Agents

Most AI chat interfaces are stateless. Each conversation starts fresh:

User: "Generate a cold email for SaaS founders"
AI: [Generates generic email]

User: "Make it more casual"
AI: "Make what more casual?"  Lost context!

For an email sequence generator, this is fatal. You need to:

  • Remember previous emails in the sequence
  • Track user preferences (tone, length, style)
  • Store generated outputs for editing
  • Resume conversations later

Why Mastra?

Mastra is a TypeScript framework specifically designed for multi-agent systems with memory:

✅ PostgreSQL-backed memory (not in-memory like LangChain)
✅ Thread-based conversations (like Claude Projects)
✅ Artifact system for storing outputs
✅ Tool calling with proper state management
✅ assistant-ui integration (ChatGPT-like interface)

Compare to alternatives:

| Framework | Memory | Multi-agent | TypeScript |
|-----------|--------|-------------|------------|
| LangChain | ❌ In-memory | ✅ | ⚠️ JS wrapper |
| AutoGPT | ❌ None | ✅ | ❌ Python |
| Mastra | ✅ Postgres | ✅ | ✅ Native TS |

System Architecture

┌─────────────────────────────────────────┐
│           Mastra Email Agent            │
├─────────────────────────────────────────┤
│                                         │
│  ┌──────────────┐    ┌──────────────┐   │
│  │   Next.js    │    │    Mastra    │   │
│  │     App      │◄──►│  Agent Core  │   │
│  │(assistant-ui)│    │              │   │
│  └──────┬───────┘    └──────┬───────┘   │
│         │                   │           │
│         ▼                   ▼           │
│  ┌──────────────────────────────────┐   │
│  │     PostgresStore (Memory)       │   │
│  │   - Threads                      │   │
│  │   - Messages                     │   │
│  │   - Artifacts (email sequences)  │   │
│  └──────────────┬───────────────────┘   │
│                 ▼                       │
│         ┌────────────────┐              │
│         │  OpenAI GPT-4  │              │
│         └────────────────┘              │
└─────────────────────────────────────────┘

Tech Stack

  • Next.js 14: App Router, Server Components
  • Mastra: Multi-agent framework
  • PostgreSQL: Persistent memory storage
  • Prisma: ORM for database access
  • assistant-ui: ChatGPT-like UI components
  • OpenAI GPT-4: LLM for email generation

Setup: Mastra Agent with PostgreSQL Memory

1. Install Dependencies

npm install mastra @mastra/postgres-store prisma @assistant-ui/react
npm install --save-dev @types/node tsx

2. Configure Prisma Schema

// prisma/schema.prisma
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Thread {
  id        String    @id @default(uuid())
  userId    String    // owner of the conversation (queried in getThreads below)
  title     String?
  createdAt DateTime  @default(now())
  updatedAt DateTime  @updatedAt
  messages  Message[]
  artifacts Artifact[]
}

model Message {
  id        String   @id @default(uuid())
  threadId  String
  thread    Thread   @relation(fields: [threadId], references: [id])
  role      String   // "user" | "assistant" | "system"
  content   String
  createdAt DateTime @default(now())
}

model Artifact {
  id        String   @id @default(uuid())
  threadId  String
  thread    Thread   @relation(fields: [threadId], references: [id])
  type      String   // "email_sequence"
  data      Json     // Stores the generated email sequence
  createdAt DateTime @default(now())
}
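With the schema defined, point Prisma at a Postgres instance and apply it. Connection-string values here are placeholders:

```shell
# Point Prisma at your database (placeholder credentials)
echo 'DATABASE_URL="postgresql://user:password@localhost:5432/mastra_email"' >> .env

# Create the tables and generate the Prisma client
npx prisma migrate dev --name init
```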

3. Create Mastra Agent

// lib/mastra/agent.ts
import { Mastra } from 'mastra'
import { PostgresStore } from '@mastra/postgres-store'
import { PrismaClient } from '@prisma/client'
import OpenAI from 'openai'
import { z } from 'zod'

const prisma = new PrismaClient()
const openai = new OpenAI() // reads OPENAI_API_KEY from the environment

// PostgreSQL memory store
const memoryStore = new PostgresStore({
  client: prisma,
})

// Email sequence generation tool
const generateEmailSequence = {
  name: 'generate_email_sequence',
  description: 'Generate a multi-email sales sequence',
  parameters: z.object({
    targetAudience: z.string().describe('Who is this for? e.g. "SaaS founders"'),
    productDescription: z.string().describe('What are you selling?'),
    tone: z.enum(['professional', 'casual', 'friendly']).default('professional'),
    numEmails: z.number().min(2).max(7).default(5),
  }),
  execute: async (params) => {
    // Generate email sequence using GPT-4
    const sequence = await generateEmails(params)

    return {
      success: true,
      sequence,
    }
  },
}

// Create Mastra agent
export const emailAgent = new Mastra({
  name: 'Email Sequence Agent',
  description: 'Generates personalized sales email sequences',
  model: 'gpt-4-turbo',
  tools: [generateEmailSequence],
  memory: memoryStore,
  systemPrompt: `You are an expert email marketer specializing in cold outreach sequences.

Your job is to:
1. Ask clarifying questions about the target audience and product
2. Generate a sequence of 3-7 emails that build trust and drive action
3. Ensure each email has a clear purpose (introduce, educate, social proof, CTA)
4. Match the requested tone (professional, casual, or friendly)

Always store generated sequences as artifacts for the user to review and edit.`,
})

// Email generation logic
async function generateEmails(params: any) {
  const { targetAudience, productDescription, tone, numEmails } = params

  const prompt = `Generate a ${numEmails}-email sales sequence for:

Target Audience: ${targetAudience}
Product: ${productDescription}
Tone: ${tone}

Requirements:
- Email 1: Introduction (establish credibility)
- Email 2: Education (provide value)
- Email 3: Social Proof (testimonials/case studies)
- Email 4+: CTA (demo, trial, meeting)

Format each email with:
Subject: [subject line]
Body: [email content]
---`

  const response = await openai.chat.completions.create({
    model: 'gpt-4-turbo',
    messages: [
      { role: 'system', content: 'You are an expert email copywriter.' },
      { role: 'user', content: prompt },
    ],
  })

  // Parse response into structured format (content can be null, e.g. on refusals)
  const emails = parseEmailSequence(response.choices[0].message.content ?? '')

  return emails
}

function parseEmailSequence(text: string) {
  // Split on the --- delimiter; trim before filtering so whitespace-only
  // blocks (e.g. a trailing "\n" after the final ---) are dropped too
  const emailBlocks = text
    .split('---')
    .map((block) => block.trim())
    .filter(Boolean)

  return emailBlocks.map((block, index) => {
    const subjectMatch = block.match(/Subject: (.+)/)
    const bodyMatch = block.match(/Body: ([\s\S]+)/)

    return {
      emailNumber: index + 1,
      subject: subjectMatch?.[1]?.trim() || '',
      body: bodyMatch?.[1]?.trim() || '',
    }
  })
}
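To sanity-check the parser without calling the API, here's a standalone copy run against a hand-written completion:

```typescript
// Standalone copy of parseEmailSequence, exercised with sample text.
interface ParsedEmail {
  emailNumber: number
  subject: string
  body: string
}

function parseEmailSequence(text: string): ParsedEmail[] {
  const emailBlocks = text
    .split('---')
    .map((block) => block.trim())
    .filter(Boolean)

  return emailBlocks.map((block, index) => {
    const subjectMatch = block.match(/Subject: (.+)/)
    const bodyMatch = block.match(/Body: ([\s\S]+)/)
    return {
      emailNumber: index + 1,
      subject: subjectMatch?.[1]?.trim() ?? '',
      body: bodyMatch?.[1]?.trim() ?? '',
    }
  })
}

const sample = `Subject: Quick question about your CRM stack
Body: Hi there, noticed you just launched...
---
Subject: A resource for scaling outbound
Body: Here's a guide we put together...
---`

const emails = parseEmailSequence(sample)
console.log(emails.length)     // 2
console.log(emails[0].subject) // "Quick question about your CRM stack"
```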

UI: assistant-ui Integration

assistant-ui provides ChatGPT-style components out of the box:

// app/chat/[threadId]/page.tsx
import { Thread } from '@assistant-ui/react'
import { emailAgent } from '@/lib/mastra/agent'

// Server component: async so it can await the params promise (see the
// Next.js 15 note below); 'use client' would forbid an async component.
export default async function ChatPage({ params }) {
  const { threadId } = await params

  return (
    <div className="flex flex-col h-screen">
      <Thread
        threadId={threadId}
        agent={emailAgent}
        renderMessage={(message) => (
          <div className="p-4">
            <strong>{message.role}:</strong> {message.content}
          </div>
        )}
        renderArtifact={(artifact) => (
          <EmailSequencePreview data={artifact.data} />
        )}
      />
    </div>
  )
}

function EmailSequencePreview({ data }) {
  const emails = data.sequence

  return (
    <div className="border rounded-lg p-4 space-y-4">
      <h3 className="font-bold">Generated Email Sequence</h3>
      {emails.map((email, i) => (
        <div key={i} className="bg-gray-50 p-3 rounded">
          <p className="font-semibold">Email {email.emailNumber}</p>
          <p className="text-sm text-gray-600">Subject: {email.subject}</p>
          <p className="text-sm mt-2 whitespace-pre-wrap">{email.body}</p>
        </div>
      ))}
    </div>
  )
}

Thread Management

ThreadList Component

// app/components/thread-list.tsx
'use client'

import Link from 'next/link'
import { useThreads } from '@/lib/mastra/hooks'

export function ThreadList() {
  const threads = useThreads()

  return (
    <div className="space-y-2">
      {threads.map((thread) => (
        <Link
          key={thread.id}
          href={`/chat/${thread.id}`}
          className="block p-3 border rounded hover:bg-gray-50"
        >
          <p className="font-medium">{thread.title || 'Untitled Thread'}</p>
          <p className="text-sm text-gray-500">
            {new Date(thread.createdAt).toLocaleDateString()}
          </p>
        </Link>
      ))}
    </div>
  )
}

Server-side Thread Fetching

// lib/mastra/hooks.ts
import { prisma } from '@/lib/db'

export async function getThreads(userId: string) {
  return await prisma.thread.findMany({
    where: { userId },
    orderBy: { updatedAt: 'desc' },
    include: {
      _count: {
        select: { messages: true, artifacts: true },
      },
    },
  })
}

export async function createThread(userId: string, title?: string) {
  return await prisma.thread.create({
    data: {
      userId,
      title: title || 'New Conversation',
    },
  })
}

Memory in Action

Here's what makes Mastra powerful—context persists:

User: "Generate a cold email for SaaS founders selling a CRM"

Agent: [Generates 5-email sequence]
      [Stores as Artifact in thread]

User: "Make email 2 more casual"

Agent: [Retrieves artifact from memory]
      [Regenerates email 2 with casual tone]
      [Updates artifact]

[User closes browser]
[Returns 3 days later]

User: "Add a case study to email 3"

Agent: [Loads thread from PostgreSQL]
      [Retrieves full email sequence]
      [Updates email 3 with case study]

No context lost! Everything is in PostgreSQL.
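What the agent does on that return visit is, at its core, plain data flow: load the thread's message rows, sort them, and rebuild the model's context. A minimal sketch, with an in-memory array standing in for the Message table so it runs without a database:

```typescript
interface StoredMessage {
  role: 'user' | 'assistant' | 'system'
  content: string
  createdAt: Date
}

// Stand-in for rows fetched from the Message table above.
const rows: StoredMessage[] = [
  { role: 'user', content: 'Generate a cold email for SaaS founders selling a CRM', createdAt: new Date('2025-03-01') },
  { role: 'assistant', content: '[5-email sequence]', createdAt: new Date('2025-03-02') },
]

// Rebuild chat context in chronological order, system prompt first.
function buildContext(systemPrompt: string, messages: StoredMessage[]) {
  const history = [...messages]
    .sort((a, b) => a.createdAt.getTime() - b.createdAt.getTime())
    .map(({ role, content }) => ({ role, content }))
  return [{ role: 'system' as const, content: systemPrompt }, ...history]
}

const context = buildContext('You are an expert email marketer.', rows)
console.log(context.length) // 3
```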

Logging & Debugging

Comprehensive logging for debugging agent behavior:

// lib/mastra/logger.ts
// Assumes a ToolCallLog model (threadId, toolName, parameters Json,
// result Json, timestamp DateTime) added alongside the schema above.
import { prisma } from '@/lib/db'

export async function logToolCall(
  threadId: string,
  toolName: string,
  params: any,
  result: any
) {
  await prisma.toolCallLog.create({
    data: {
      threadId,
      toolName,
      parameters: params,
      result,
      timestamp: new Date(),
    },
  })

  console.log(`[Tool Call] ${toolName}`, {
    params,
    result,
  })
}

View logs in admin panel:

// app/admin/logs/page.tsx
export default async function LogsPage() {
  const logs = await prisma.toolCallLog.findMany({
    orderBy: { timestamp: 'desc' },
    take: 100,
  })

  return (
    <div className="p-6">
      <h1 className="text-2xl font-bold mb-4">Tool Call Logs</h1>
      <div className="space-y-2">
        {logs.map((log) => (
          <div key={log.id} className="bg-gray-100 p-3 rounded">
            <p className="font-mono text-sm">{log.toolName}</p>
            <pre className="text-xs mt-1">
              {JSON.stringify(log.parameters, null, 2)}
            </pre>
          </div>
        ))}
      </div>
    </div>
  )
}

Lessons Learned

1. Refactoring from Weather to Email

I originally built a weather agent (toy example). Key changes:

Before:

tools: [getWeather, getForecast]

After:

tools: [generateEmailSequence, editEmail, analyzeCompetitor]

Focus on high-value outputs (email sequences) vs low-value (weather data).

2. Artifact Model is Essential

Don't just store messages—store structured outputs:

// ❌ Bad: Storing emails in message content
message.content = "Here are your emails: ..."

// ✅ Good: Storing emails as structured artifact
artifact.data = {
  sequence: [
    { emailNumber: 1, subject: "...", body: "..." },
    { emailNumber: 2, subject: "...", body: "..." },
  ]
}

This enables editing, versioning, and export.
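Because `artifact.data` lives in an untyped `Json` column, it's worth a runtime guard when reading it back out. A hand-rolled sketch (a zod schema would work equally well):

```typescript
interface Email {
  emailNumber: number
  subject: string
  body: string
}

interface EmailSequenceArtifact {
  sequence: Email[]
}

// Runtime guard for artifact data read back from the untyped Json column.
function isEmailSequenceArtifact(data: unknown): data is EmailSequenceArtifact {
  if (typeof data !== 'object' || data === null) return false
  const seq = (data as { sequence?: unknown }).sequence
  if (!Array.isArray(seq) || seq.length < 2 || seq.length > 7) return false
  return seq.every(
    (e) =>
      typeof e === 'object' &&
      e !== null &&
      typeof (e as Email).emailNumber === 'number' &&
      typeof (e as Email).subject === 'string' &&
      typeof (e as Email).body === 'string'
  )
}

console.log(isEmailSequenceArtifact({
  sequence: [
    { emailNumber: 1, subject: 'Intro', body: '...' },
    { emailNumber: 2, subject: 'Value', body: '...' },
  ],
})) // true
console.log(isEmailSequenceArtifact({ sequence: [] })) // false
```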

3. Async Params in Next.js 15

Next.js 15 requires awaiting params:

// ❌ Old way (Next.js 14)
export default function Page({ params }) {
  const { threadId } = params
}

// ✅ New way (Next.js 15)
export default async function Page({ params }) {
  const { threadId } = await params
}

Production Considerations

1. Rate Limiting

Prevent abuse with user-level rate limits:

export async function checkRateLimit(userId: string) {
  // Bucket requests into hourly windows so the counter key is stable;
  // putting Date.now() in the key would create a fresh key per call and
  // the limit would never trigger
  const window = Math.floor(Date.now() / 3_600_000)
  const key = `ratelimit:${userId}:${window}`

  const count = await redis.incr(key)
  if (count === 1) {
    await redis.expire(key, 3600) // expire the bucket when it's first created
  }

  if (count > 10) {
    throw new Error('Rate limit exceeded')
  }
}
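The same fixed-window idea can be unit-tested without Redis. An in-memory sketch, for illustration only; use Redis or similar shared storage in production:

```typescript
// In-memory fixed-window rate limiter.
class RateLimiter {
  private counts = new Map<string, number>()

  constructor(private limit: number, private windowMs: number) {}

  allow(userId: string, now: number = Date.now()): boolean {
    // Same key for every call inside one window, so the counter accumulates
    const key = `${userId}:${Math.floor(now / this.windowMs)}`
    const count = (this.counts.get(key) ?? 0) + 1
    this.counts.set(key, count)
    return count <= this.limit
  }
}

const limiter = new RateLimiter(3, 60_000)
console.log(limiter.allow('u1', 0)) // true  (1st call)
limiter.allow('u1', 0)
limiter.allow('u1', 0)
console.log(limiter.allow('u1', 0))      // false (4th call in the window)
console.log(limiter.allow('u1', 60_000)) // true  (new window)
```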

2. Cost Management

GPT-4 is expensive (around $0.03 per 1K input tokens at the time of writing). Optimize:

// Use GPT-3.5 for simple tasks
const model = taskComplexity === 'high' ? 'gpt-4-turbo' : 'gpt-3.5-turbo'

// Limit max tokens
maxTokens: 1000

// Cache common prompts
const cacheKey = `prompt:${hash(params)}`
const cached = await redis.get(cacheKey)
if (cached) return cached
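A back-of-envelope helper makes the model trade-off concrete. The per-1K rates below are illustrative assumptions; check current provider pricing before relying on them:

```typescript
// Rough per-1K-token rates (assumptions, not authoritative pricing).
const PRICE_PER_1K_TOKENS: Record<string, number> = {
  'gpt-4-turbo': 0.03,
  'gpt-3.5-turbo': 0.002,
}

function estimateCost(model: string, tokens: number): number {
  return (tokens / 1000) * (PRICE_PER_1K_TOKENS[model] ?? 0)
}

// A 5-email sequence at ~1,500 tokens:
console.log(estimateCost('gpt-4-turbo', 1500))   // ~$0.045
console.log(estimateCost('gpt-3.5-turbo', 1500)) // ~$0.003
```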

3. Monitoring

Track agent performance:

// Server-side custom events with Vercel Analytics
import { track } from '@vercel/analytics/server'

await track('email_sequence_generated', {
  userId,
  numEmails: sequence.length,
  tone,
  duration: Date.now() - startTime,
})

Future Improvements

Short-term:

  • A/B testing different email variants
  • Import customer data for personalization
  • Email scheduling and automation
  • Template library

Long-term:

  • Multi-agent collaboration (research agent + writer agent)
  • Human-in-the-loop editing flow
  • Integration with email providers (SendGrid, Mailgun)
  • Analytics on email performance

Conclusion

Mastra makes building AI agents with persistent memory straightforward. The combination of PostgreSQL storage, thread-based conversations, and assistant-ui creates a production-ready foundation for agent applications.

Key takeaways:

  • Use persistent memory (not in-memory state)
  • Store structured artifacts (not just text)
  • Log everything for debugging
  • Focus on high-value outputs (email sequences, not weather)

Whether you're building sales tools, customer support agents, or content generators, Mastra's opinionated architecture accelerates development while maintaining flexibility.

Try it: mastra.ai