NotebookLM MCP Setup Guide: Codex and CC Workflows
If you are researching NotebookLM MCP, you usually want one thing: a reliable MCP workflow your agent can run repeatedly, not a one-off manual process. This guide explains how to set up AutoContent as your NotebookLM MCP server, connect it to Codex or CC (Claude Code), and run a production-safe create-and-status loop for podcasts, infographics, slide decks, and videos.
What Is a NotebookLM MCP Workflow?
In practical terms, a NotebookLM MCP server is a Model Context Protocol endpoint your agent uses to transform source material into useful content assets. Instead of opening multiple dashboards and manually copying prompts, your agent calls tools, captures a requestId, polls status, and returns final URLs.
That pattern is what makes MCP valuable for teams. You get deterministic behavior, traceable requests, easier retries, and cleaner automation across internal tools and customer workflows.
Why Teams Search for "NotebookLM MCP"
Most teams are trying to solve one of these problems:
- They need an agent to create content assets from URLs or source text.
- They want the same orchestration logic for different output types.
- They need status-based polling so jobs can run asynchronously.
- They want to plug generation into CI workflows, runbooks, or product features.
AutoContent MCP is designed for exactly that: one endpoint, clear tool contracts, and a repeatable status loop.
NotebookLM MCP Setup (Codex and CC)
Use one command for each client. Keep each command isolated so it is easy to copy, test, and troubleshoot.
Codex CLI
codex mcp add autocontentapi --url https://mcp.autocontentapi.com/mcp
CC (Claude Code)
claude mcp add --transport http autocontentapi https://mcp.autocontentapi.com/mcp
You can also copy both commands from the dedicated AutoContent MCP page.
Tool Map: Create + Status by Asset Type
A good NotebookLM MCP implementation keeps naming predictable. The AutoContent tools follow this pattern:
| Asset | Create Tool | Status Tool | Output Field |
|---|---|---|---|
| Podcast | create_podcast | get_podcast_status | audioUrl |
| Infographic | create_infographic | get_infographic_status | imageUrl |
| Slide Deck | create_slide_deck | get_slide_deck_status | deckUrl |
| Video | create_video | get_video_status | videoUrl |
| Repurpose | repurpose_content | Status tool of target type | Target type URL field |
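The mapping above can be encoded directly in agent code so the orchestration logic never hardcodes tool names per branch. This is a minimal sketch; the tool and field names come from the table, and the asset-type keys are illustrative conventions, not part of the AutoContent API.

```javascript
// Lookup table mirroring the asset-type mapping above.
const TOOL_MAP = {
  podcast:     { create: 'create_podcast',     status: 'get_podcast_status',     output: 'audioUrl' },
  infographic: { create: 'create_infographic', status: 'get_infographic_status', output: 'imageUrl' },
  slide_deck:  { create: 'create_slide_deck',  status: 'get_slide_deck_status',  output: 'deckUrl'  },
  video:       { create: 'create_video',       status: 'get_video_status',       output: 'videoUrl' },
}

// Resolve the create tool, status tool, and output field for an asset type.
function toolsFor(assetType) {
  const entry = TOOL_MAP[assetType]
  if (!entry) throw new Error(`Unknown asset type: ${assetType}`)
  return entry
}
```

With this in place, the same create-and-status loop can serve every asset type by looking up `toolsFor(assetType)` instead of branching.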
End-to-End NotebookLM MCP Flow for Agents
The best way to think about NotebookLM MCP orchestration is as a strict four-step state machine.
- Collect inputs: Provide `sourceText` and/or `sourceUrls`, plus optional prompt guidance.
- Create job: Call the proper `create_*` tool and persist the returned `requestId`.
- Poll status: Call the matching `get_*_status` tool every 30 to 60 seconds.
- Deliver output: When status reaches 100, return the asset URL and next action.
Here is a minimal execution pattern your agent can follow:

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

async function generatePodcast() {
  // 1) create the job
  const job = await mcp.create_podcast({
    sourceUrls: ['https://example.com/report'],
    prompt: 'Summarize for sales leadership in 6 minutes.'
  })

  // 2) persist the requestId immediately
  const requestId = job.requestId

  // 3) poll the matching status tool until the job completes
  while (true) {
    const status = await mcp.get_podcast_status({ requestId })
    if (status.status === 100) return status.audioUrl // 4) deliver output
    await sleep(45000) // poll every 30 to 60 seconds
  }
}
```
Detailed Use Cases for NotebookLM MCP
1) Weekly Internal Product Briefing
Ingest roadmap updates, changelogs, and customer interview notes, then generate a weekly audio briefing for GTM and support teams. This reduces meeting overhead while keeping everyone aligned.
2) Sales Infographics from Research Reports
Turn large market reports into concise visual assets for outbound campaigns and account reviews. The status-based MCP workflow makes this repeatable account by account.
3) Executive Slide Deck Drafts
Generate first-draft slide decks from quarterly business reviews, board prep docs, or strategy memos. Teams then edit and finalize rather than building decks from zero.
4) Campaign Recap Videos
Feed campaign results and landing page content into create_video to generate recap videos for internal reporting or customer updates.
5) Cross-Format Repurposing
Use repurpose_content to convert a successful asset into another format, such as podcast to infographic or slide deck to short video, while preserving the same core message.
Production Best Practices
- Persist request IDs: Save them as soon as create calls return.
- Add timeout policy: Avoid infinite polling loops in failed or stalled jobs.
- Use idempotent retries: Retry status checks safely, and only recreate jobs when needed.
- Log input lineage: Keep source URLs and prompts for auditability.
- Return explicit outputs: Always surface the final URL plus status context to end users.
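The timeout and retry practices above can be combined into one generic polling helper. This is a sketch, not an official AutoContent utility: `checkStatus` is any async function wrapping a `get_*_status` call, and the helper assumes the status-equals-100 completion convention described in this guide.

```javascript
// Poll an async status check until completion, with a hard attempt cap
// so a stalled job can never produce an infinite loop.
async function pollWithTimeout(checkStatus, {
  intervalMs = 45000, // within the 30-60 second window recommended above
  maxAttempts = 40,   // give up after roughly 30 minutes at the default interval
} = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await checkStatus()
    if (result.status === 100) return result // job complete
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
  throw new Error(`Polling timed out after ${maxAttempts} attempts`)
}
```

Because the helper takes a function rather than a request ID, the same code covers podcasts, infographics, decks, and videos; callers pass `() => mcp.get_podcast_status({ requestId })` or the equivalent for their asset type.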
Common NotebookLM MCP Errors and Fixes
Authentication failure
Confirm your AutoContent account has a valid API key and your client is using the right environment variables.
Status never reaches 100
Check source quality, reduce overly restrictive prompts, and verify you are polling the matching status tool for that asset type.
Unexpected output format
Be explicit in prompts about audience, tone, and structure. For reliable runs, template your prompt blocks per use case.
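One simple way to template prompt blocks per use case is a small builder function. This is an illustrative sketch: the field names (`audience`, `tone`, `structure`, `lengthMinutes`) are assumptions for your own templates, not AutoContent parameters.

```javascript
// Build a consistent prompt from a per-use-case template, so every run
// states audience, tone, and structure explicitly.
function buildPrompt({ audience, tone, structure, lengthMinutes }) {
  return [
    `Audience: ${audience}.`,
    `Tone: ${tone}.`,
    `Structure: ${structure}.`,
    `Target length: ${lengthMinutes} minutes.`,
  ].join(' ')
}
```

The resulting string is passed as the `prompt` argument to the relevant `create_*` tool; keeping one template per use case makes output format far more predictable across runs.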
NotebookLM MCP FAQ
Do I need a separate NotebookLM account for this MCP workflow?
No. You can run the workflow through AutoContent MCP with your AutoContent API access.
Can I use the same NotebookLM MCP flow for multiple asset types?
Yes. The flow stays the same: create tool, request ID, matching status tool, final output URL.
Is this suitable for production automation?
Yes, as long as you implement persistence, retries, timeout logic, and observability around create and status calls.
Start Building with NotebookLM MCP Workflows
Use the MCP endpoint with Codex or CC, then move to the API docs for full schema details and advanced patterns. You will need a valid AutoContent API key on your account.
For teams targeting the NotebookLM MCP use case, the key is not just model quality. It is workflow reliability: predictable tools, clear status semantics, and repeatable output delivery.