With the rising popularity of audio content consumption, converting documents and other written content into realistic audio has become an increasingly sought-after capability.
While Google's NotebookLM has garnered attention in this space, I wanted to explore building a similar system using modern cloud services. In this article, I'll walk you through how I created a scalable, cloud-native system that converts documents into high-quality podcasts using FastAPI, Firebase, Google Cloud Pub/Sub, and Azure's Text-to-Speech service.
For the results this system produces, see the showcase: MyPodify Showcase
Converting documents to podcasts isn't as simple as running text through a text-to-speech engine. It requires careful processing, natural language understanding, and the ability to handle various document formats, all while maintaining a smooth user experience. The system needs to:

- accept uploads in multiple document formats and extract clean text from them
- turn that text into a natural, conversational script
- synthesize realistic audio, including multi-host conversations
- process everything asynchronously while keeping the user informed with real-time status updates
Let's break down the key components and understand how they work together:
FastAPI serves as our backend framework, chosen for several compelling reasons:

- native async/await support, well suited to I/O-heavy work like file uploads and calls to external services
- automatic request validation and interactive OpenAPI documentation
- clean dependency injection, which we use for Firebase token verification
Here's a detailed look at our upload endpoint:
```python
import uuid
from datetime import datetime
from typing import Annotated, List, Optional

from fastapi import Depends, File, UploadFile

@app.post('/upload')
async def upload_files(
    token: Annotated[ParsedToken, Depends(verify_firebase_token)],
    project_name: str,
    description: str,
    website_link: str,
    host_count: int,
    files: Optional[List[UploadFile]] = File(None)
):
    # Validate token
    user_id = token['uid']

    # Generate unique identifiers
    project_id = str(uuid.uuid4())
    podcast_id = str(uuid.uuid4())

    # Process and store files
    file_urls = await process_uploads(files, user_id, project_id)

    # Create Firestore document
    await create_project_document(user_id, project_id, {
        'status': 'pending',
        'created_at': datetime.now(),
        'project_name': project_name,
        'description': description,
        'file_urls': file_urls
    })

    # Trigger async processing
    await publish_to_pubsub(user_id, project_id, podcast_id, file_urls)

    return {'project_id': project_id, 'status': 'processing'}
```
Firebase provides two crucial services for our application:

- Authentication: verifying user identity from the Firebase ID token sent with each request
- Firestore: storing project metadata and pushing real-time status updates to the client
Here's how we implement real-time status updates:
```python
from datetime import datetime

async def update_status(user_id: str, project_id: str, status: str, metadata: dict = None):
    # db is an async Firestore client (firestore.AsyncClient) initialized at startup.
    # Document paths must have an even number of segments, so each project
    # document lives in a subcollection under its user document.
    doc_ref = (db.collection('users').document(user_id)
                 .collection('projects').document(project_id))

    update_data = {
        'status': status,
        'updated_at': datetime.now()
    }
    if metadata:
        update_data.update(metadata)

    await doc_ref.update(update_data)
```
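On the client side (not shown in the original), a Firestore snapshot listener is what turns these writes into live updates. A minimal sketch using the synchronous google-cloud-firestore client, with example IDs:

```python
from google.cloud import firestore

client = firestore.Client()
user_id, project_id = 'uid_123', 'proj_456'  # example IDs
doc_ref = (client.collection('users').document(user_id)
                 .collection('projects').document(project_id))

def on_change(doc_snapshot, changes, read_time):
    # Fires immediately with the current state, then on every update_status() write
    for doc in doc_snapshot:
        print(f"status -> {doc.get('status')}")

watch = doc_ref.on_snapshot(on_change)  # call watch.unsubscribe() to stop listening
```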
Pub/Sub serves as our messaging backbone, enabling:

- decoupling of the upload API from the heavy processing pipeline
- asynchronous, queue-based processing with at-least-once delivery
- independent scaling of API servers and workers
- automatic redelivery when a worker fails to acknowledge a message
Message structure example:
```python
{
    'user_id': 'uid_123',
    'project_id': 'proj_456',
    'podcast_id': 'pod_789',
    'file_urls': ['gs://bucket/file1.pdf'],
    'description': 'Technical blog post about cloud architecture',
    'host_count': 2,
    'action': 'CREATE_PROJECT'
}
```
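The `publish_to_pubsub` helper called from the upload endpoint isn't shown in the original. A minimal sketch of what it might look like with the google-cloud-pubsub client (the `podcast-jobs` topic name is illustrative):

```python
import json
import os

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(os.getenv("GCP_PROJECT"), "podcast-jobs")  # illustrative topic

async def publish_to_pubsub(user_id, project_id, podcast_id, file_urls):
    # The real message also carries description and host_count (see the example above)
    message = {
        'user_id': user_id,
        'project_id': project_id,
        'podcast_id': podcast_id,
        'file_urls': file_urls,
        'action': 'CREATE_PROJECT'
    }
    # publish() returns a future; result() blocks until the broker accepts the message
    future = publisher.publish(topic_path, json.dumps(message).encode('utf-8'))
    future.result()
```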
The core of our audio generation uses Azure's Cognitive Services Speech SDK. Let's look at how we implement natural-sounding voice synthesis:
```python
import logging
import os

import azure.cognitiveservices.speech as speechsdk

logger = logging.getLogger(__name__)

class SpeechGenerator:
    def __init__(self):
        self.speech_config = speechsdk.SpeechConfig(
            subscription=os.getenv("AZURE_SPEECH_KEY"),
            region=os.getenv("AZURE_SPEECH_REGION")
        )

    async def create_speech_segment(self, text, voice, output_file):
        try:
            self.speech_config.speech_synthesis_voice_name = voice
            synthesizer = speechsdk.SpeechSynthesizer(
                speech_config=self.speech_config,
                audio_config=None
            )

            # Generate speech from text
            result = synthesizer.speak_text_async(text).get()

            if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
                with open(output_file, "wb") as audio_file:
                    audio_file.write(result.audio_data)
                return True
            return False

        except Exception as e:
            logger.error(f"Speech synthesis failed: {str(e)}")
            return False
```
One of the unique features of our system is the ability to generate multi-voice podcasts using AI. Here's how we handle script generation for different hosts:
```python
async def generate_podcast_script(outline: str, analysis: str, host_count: int):
    # System instructions for different podcast formats
    system_instructions = TWO_HOST_SYSTEM_PROMPT if host_count > 1 else ONE_HOST_SYSTEM_PROMPT

    # Example of how we structure the AI conversation
    # (in the full implementation, script_format is embedded in the prompt)
    if host_count > 1:
        script_format = """
        **Alex**: "Hello and welcome to MyPodify! I'm your host Alex, joined by..."
        **Jane**: "Hi everyone! I'm Jane, and today we're diving into {topic}..."
        """
    else:
        script_format = """
        **Alex**: "Welcome to MyPodify! Today we're exploring {topic}..."
        """

    # Generate the complete script using AI
    script = await generate_content_from_openai(
        content=f"{outline}\n\nContent Details:{analysis}",
        system_instructions=system_instructions,
        purpose="Podcast Script"
    )

    return script
```
For voice synthesis, we map each speaker in the script to a specific Azure voice.
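A minimal sketch of such a mapping; the voice names are real Azure neural voices but purely illustrative, and `synthesize_line` is a hypothetical wrapper around the `SpeechGenerator` above:

```python
# Illustrative mapping; any Azure neural voices can be swapped in
VOICE_MAPPING = {
    'Alex': 'en-US-GuyNeural',
    'Jane': 'en-US-JennyNeural',
}

async def synthesize_line(generator: SpeechGenerator, speaker: str, text: str, output_file: str):
    voice = VOICE_MAPPING.get(speaker, 'en-US-JennyNeural')  # fall back to a default voice
    return await generator.create_speech_segment(text, voice, output_file)
```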
The worker component handles the heavy lifting:
- Document Analysis: extract the raw text from each uploaded file and clean it up for downstream processing
- Content Processing: turn the extracted text into a structured, host-aware podcast script using the AI model
- Audio Generation: synthesize each script line with Azure TTS and merge the segments into the final episode
Here's a simplified view of our worker logic.
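The sketch below reconstructs the message handler from the three steps above; `extract_text`, `parse_script`, and `concatenate_audio` are hypothetical helpers standing in for parsing and merging code not shown in the original:

```python
# Reconstructed sketch of the worker's message handler
speech = SpeechGenerator()

async def handle_message(payload: dict):
    user_id, project_id = payload['user_id'], payload['project_id']

    # 1. Document Analysis
    await update_status(user_id, project_id, 'analyzing')
    text = await extract_text(payload['file_urls'])

    # 2. Content Processing
    await update_status(user_id, project_id, 'scripting')
    script = await generate_podcast_script(
        outline=payload['description'],
        analysis=text,
        host_count=payload['host_count']
    )

    # 3. Audio Generation
    await update_status(user_id, project_id, 'synthesizing')
    segments = []
    for i, (speaker, line) in enumerate(parse_script(script)):
        out_file = f"/tmp/{payload['podcast_id']}_{i}.wav"
        await synthesize_line(speech, speaker, line, out_file)
        segments.append(out_file)

    # Merge segments, upload the result, and mark the project done
    audio_url = concatenate_audio(segments, payload['podcast_id'])
    await update_status(user_id, project_id, 'completed', {'audio_url': audio_url})
```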
The system implements comprehensive error handling:
- Retry Logic: transient failures (for example, TTS throttling or network errors) are retried with exponential backoff before the message is given up on (a minimal sketch follows this list)
- Status Tracking: failures are written back to Firestore so the client sees an explicit failed state instead of hanging on processing
- Resource Cleanup: temporary audio segments and downloaded files are removed whether processing succeeds or fails
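As an illustration of the retry logic, here's a generic backoff helper (a sketch, not the production code):

```python
import asyncio
import random

async def with_retries(operation, max_attempts: int = 3):
    """Run an async operation, retrying with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return await operation()
        except Exception:
            if attempt == max_attempts:
                raise  # give up; the unacked Pub/Sub message will be redelivered
            await asyncio.sleep(2 ** attempt + random.random())

# usage: await with_retries(lambda: publish_to_pubsub(user_id, project_id, podcast_id, urls))
```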
To handle production loads, we've implemented several optimizations:
- Worker Scaling: multiple worker instances pull from the same Pub/Sub subscription, so throughput scales horizontally
- Storage Optimization: intermediate audio segments are deleted after merging, and only the final file is kept in cloud storage
- Processing Optimization: independent speech segments are synthesized concurrently rather than one at a time (see the sketch after this list)
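As an example of the processing optimization, here's how concurrent synthesis might look with a bounded semaphore (a sketch reusing the hypothetical `synthesize_line` from earlier):

```python
import asyncio

async def synthesize_all(script_lines, podcast_id: str, max_concurrency: int = 4):
    # Bound concurrency so we don't trip Azure TTS rate limits
    sem = asyncio.Semaphore(max_concurrency)

    async def synth_one(i, speaker, line):
        async with sem:
            return await synthesize_line(speech, speaker, line, f"/tmp/{podcast_id}_{i}.wav")

    await asyncio.gather(
        *(synth_one(i, speaker, line) for i, (speaker, line) in enumerate(script_lines))
    )
```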
The system also includes monitoring around each stage of the pipeline.
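The monitoring code itself isn't shown in the original; here's a minimal sketch of the kind of stage-timing wrapper such a pipeline typically uses (names are illustrative):

```python
import logging
import time
from contextlib import contextmanager

logger = logging.getLogger("mypodify.metrics")

@contextmanager
def track_stage(project_id: str, stage: str):
    """Log the duration and outcome of a pipeline stage."""
    start = time.monotonic()
    try:
        yield
        logger.info("stage=%s project=%s status=ok duration=%.2fs",
                    stage, project_id, time.monotonic() - start)
    except Exception:
        logger.exception("stage=%s project=%s status=error", stage, project_id)
        raise

# usage:
# with track_stage(project_id, "audio_generation"):
#     ...generate audio...
```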
While the current system works well, there are several exciting possibilities for future improvements:
- Enhanced Audio Processing: background music, loudness normalization, and per-voice equalization
- Content Enhancement: automatically generated show notes, chapter markers, and summaries alongside the script
- Platform Integration: publishing finished episodes directly to podcast hosting platforms and RSS feeds
Building a document-to-podcast converter has been an exciting journey into modern cloud architecture. The combination of FastAPI, Firebase, Google Cloud Pub/Sub, and Azure's Text-to-Speech services provides a robust foundation for handling complex document processing at scale.
The event-driven architecture ensures the system remains responsive under load, while the use of managed services reduces operational overhead. Whether you're building a similar system or just exploring cloud-native architectures, I hope this deep dive has provided valuable insights into building scalable, production-ready applications.
Want to learn more about cloud architecture and modern application development? Follow me for more technical and practical tutorials.