From dddd745b5b3ee78c57d95db8d6c3427233267c79 Mon Sep 17 00:00:00 2001 From: hetao Date: Mon, 26 Jan 2026 14:01:48 +0800 Subject: [PATCH] refactor: simplify podcast-generation to use direct JSON script input - Remove LLM script generation from Python script, model now generates JSON script directly (similar to image-generation skill) - Add --transcript-file option to generate markdown transcript - Add optional "title" field in JSON for transcript heading - Remove dependency on OPENAI_API_KEY for podcast generation - Update SKILL.md with new workflow and JSON format documentation Co-Authored-By: Claude Opus 4.5 --- skills/public/podcast-generation/SKILL.md | 131 ++++++++--- .../podcast-generation/scripts/generate.py | 212 ++++-------------- .../templates/tech-explainer.md | 4 +- 3 files changed, 142 insertions(+), 205 deletions(-) diff --git a/skills/public/podcast-generation/SKILL.md b/skills/public/podcast-generation/SKILL.md index 8143e21..b78b8dd 100644 --- a/skills/public/podcast-generation/SKILL.md +++ b/skills/public/podcast-generation/SKILL.md @@ -7,7 +7,7 @@ description: Use this skill when the user requests to generate, create, or produ ## Overview -This skill generates high-quality podcast audio from text content using a multi-stage pipeline. The workflow includes script generation (converting input to conversational dialogue), text-to-speech synthesis, and audio mixing to produce the final podcast. +This skill generates high-quality podcast audio from text content. The workflow includes creating a structured JSON script (conversational dialogue) and executing audio generation through text-to-speech synthesis. 
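Since the model now has to emit the script JSON itself, a reviewer may find it useful to see the expected shape checked explicitly. This is an illustrative sketch only (not part of the patch); the field names and allowed values are taken from the JSON format this patch documents, but `validate_script` is a hypothetical helper:

```python
import json

def validate_script(raw: str) -> dict:
    """Check a script JSON string against the structure this patch expects."""
    script = json.loads(raw)
    # locale must be one of the two languages the skill supports
    if script.get("locale") not in ("en", "zh"):
        raise ValueError(f"locale must be 'en' or 'zh', got {script.get('locale')!r}")
    lines = script.get("lines")
    if not lines:
        raise ValueError("script must contain a non-empty 'lines' array")
    for i, line in enumerate(lines):
        # only two hosts are supported: "male" and "female"
        if line.get("speaker") not in ("male", "female"):
            raise ValueError(f"line {i}: speaker must be 'male' or 'female'")
        if not line.get("paragraph", "").strip():
            raise ValueError(f"line {i}: empty paragraph")
    return script

demo = '{"locale": "en", "lines": [{"speaker": "male", "paragraph": "Hello Deer!"}]}'
script = validate_script(demo)
```

Running such a check before invoking `generate.py` surfaces a malformed script immediately, instead of partway through TTS synthesis.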
## Core Capabilities @@ -24,64 +24,127 @@ This skill generates high-quality podcast audio from text content using a multi- When a user requests podcast generation, identify: - Source content: The text/article/report to convert into a podcast -- Language: English or Chinese (auto-detected from content) +- Language: English or Chinese (based on content) - Output location: Where to save the generated podcast - You don't need to check the folder under `/mnt/user-data` -### Step 2: Prepare Input Content +### Step 2: Create Structured Script JSON -The input content should be plain text or markdown. Save it to a text file in `/mnt/user-data/workspace/` with naming pattern: `{descriptive-name}-content.md` +Generate a structured JSON script file in `/mnt/user-data/workspace/` with naming pattern: `{descriptive-name}-script.json` + +The JSON structure: +```json +{ + "locale": "en", + "lines": [ + {"speaker": "male", "paragraph": "dialogue text"}, + {"speaker": "female", "paragraph": "dialogue text"} + ] +} +``` ### Step 3: Execute Generation -Call the Python script directly without any concerns about timeout or the need for pre-testing: - +Call the Python script: ```bash python /mnt/skills/public/podcast-generation/scripts/generate.py \ - --input-file /mnt/user-data/workspace/content-file.md \ + --script-file /mnt/user-data/workspace/script-file.json \ --output-file /mnt/user-data/outputs/generated-podcast.mp3 \ - --locale en + --transcript-file /mnt/user-data/outputs/generated-podcast-transcript.md ``` Parameters: -- `--input-file`: Absolute path to input text/markdown file (required) +- `--script-file`: Absolute path to JSON script file (required) - `--output-file`: Absolute path to output MP3 file (required) -- `--locale`: Language locale - "en" for English or "zh" for Chinese (optional, auto-detected if not specified) +- `--transcript-file`: Absolute path to output transcript markdown file (optional, but recommended) > [!IMPORTANT] -> - Execute the script in one complete 
call. Do NOT split the workflow into separate steps (e.g., testing script generation first, then TTS). -> - The script handles all external API calls and audio generation internally with proper timeout management. +> - Execute the script in one complete call. Do NOT split the workflow into separate steps. +> - The script handles all TTS API calls and audio generation internally. > - Do NOT read the Python file, just call it with the parameters. +> - Always include `--transcript-file` to generate a readable transcript for the user. + +## Script JSON Format + +The script JSON file must follow this structure: + +```json +{ + "title": "The History of Artificial Intelligence", + "locale": "en", + "lines": [ + {"speaker": "male", "paragraph": "Hello Deer! Welcome back to another episode."}, + {"speaker": "female", "paragraph": "Hey everyone! Today we have an exciting topic to discuss."}, + {"speaker": "male", "paragraph": "That's right! We're going to talk about..."} + ] +} +``` + +Fields: +- `title`: Title of the podcast episode (optional, used as heading in transcript) +- `locale`: Language code - "en" for English or "zh" for Chinese +- `lines`: Array of dialogue lines + - `speaker`: Either "male" or "female" + - `paragraph`: The dialogue text for this speaker + +## Script Writing Guidelines + +When creating the script JSON, follow these guidelines: + +### Format Requirements +- Only two hosts: male and female, alternating naturally +- Target runtime: approximately 10 minutes of dialogue (around 40-60 lines) +- Start with the male host saying a greeting that includes "Hello Deer" + +### Tone & Style +- Natural, conversational dialogue - like two friends chatting +- Use casual expressions and conversational transitions +- Avoid overly formal language or academic tone +- Include reactions, follow-up questions, and natural interjections + +### Content Guidelines +- Frequent back-and-forth between hosts +- Keep sentences short and easy to follow when spoken +- Plain text 
only - no markdown formatting in the output +- Translate technical concepts into accessible language +- No mathematical formulas, code, or complex notation +- Make content engaging and accessible for audio-only listeners +- Exclude meta information like dates, author names, or document structure ## Podcast Generation Example User request: "Generate a podcast about the history of artificial intelligence" -Step 1: Create content file `/mnt/user-data/workspace/ai-history-content.md` with the source text: -```markdown -# The History of Artificial Intelligence - -Artificial intelligence has a rich history spanning over seven decades... - -## Early Beginnings (1950s) -The term "artificial intelligence" was coined by John McCarthy in 1956... - -## The First AI Winter (1970s) -After initial enthusiasm, AI research faced significant setbacks... - -## Modern Era (2010s-Present) -Deep learning revolutionized the field with breakthrough results... +Step 1: Create script file `/mnt/user-data/workspace/ai-history-script.json`: +```json +{ + "title": "The History of Artificial Intelligence", + "locale": "en", + "lines": [ + {"speaker": "male", "paragraph": "Hello Deer! Welcome back to another fascinating episode. Today we're diving into something that's literally shaping our future - the history of artificial intelligence."}, + {"speaker": "female", "paragraph": "Oh, I love this topic! You know, AI feels so modern, but it actually has roots going back over seventy years."}, + {"speaker": "male", "paragraph": "Exactly! It all started back in the 1950s. The term artificial intelligence was actually coined by John McCarthy in 1956 at a famous conference at Dartmouth."}, + {"speaker": "female", "paragraph": "Wait, so they were already thinking about machines that could think back then? That's incredible!"}, + {"speaker": "male", "paragraph": "Right? The early pioneers were so optimistic. 
They thought we'd have human-level AI within a generation."}, + {"speaker": "female", "paragraph": "But things didn't quite work out that way, did they?"}, + {"speaker": "male", "paragraph": "No, not at all. The 1970s brought what's called the first AI winter..."} + ] +} ``` Step 2: Execute generation: ```bash python /mnt/skills/public/podcast-generation/scripts/generate.py \ - --input-file /mnt/user-data/workspace/ai-history-content.md \ + --script-file /mnt/user-data/workspace/ai-history-script.json \ --output-file /mnt/user-data/outputs/ai-history-podcast.mp3 \ - --locale en + --transcript-file /mnt/user-data/outputs/ai-history-transcript.md ``` +This will generate: +- `ai-history-podcast.mp3`: The audio podcast file +- `ai-history-transcript.md`: A readable markdown transcript of the podcast + ## Specific Templates Read the following template file only when matching the user request. @@ -101,15 +164,14 @@ The generated podcast follows the "Hello Deer" format: After generation: -- Podcasts are saved in `/mnt/user-data/outputs/` -- Share generated podcast with user using `present_files` tool +- Podcasts and transcripts are saved in `/mnt/user-data/outputs/` +- Share both the podcast MP3 and transcript MD with user using `present_files` tool - Provide brief description of the generation result (topic, duration, hosts) - Offer to regenerate if adjustments needed ## Requirements The following environment variables must be set: -- `OPENAI_API_KEY` or equivalent LLM API key for script generation - `VOLCENGINE_TTS_APPID`: Volcengine TTS application ID - `VOLCENGINE_TTS_ACCESS_TOKEN`: Volcengine TTS access token - `VOLCENGINE_TTS_CLUSTER`: Volcengine TTS cluster (optional, defaults to "volcano_tts") @@ -117,8 +179,7 @@ The following environment variables must be set: ## Notes - **Always execute the full pipeline in one call** - no need to test individual steps or worry about timeouts -- Input content language is auto-detected and matched in output -- The script 
generation uses LLM to create natural conversational dialogue -- Technical content is automatically simplified for audio accessibility -- Complex notations (formulas, code) are translated to plain language +- The script JSON should match the content language (en or zh) +- Technical content should be simplified for audio accessibility in the script +- Complex notations (formulas, code) should be translated to plain language in the script - Long content may result in longer podcasts diff --git a/skills/public/podcast-generation/scripts/generate.py b/skills/public/podcast-generation/scripts/generate.py index 5ad7ea9..baea8e0 100644 --- a/skills/public/podcast-generation/scripts/generate.py +++ b/skills/public/podcast-generation/scripts/generate.py @@ -3,7 +3,6 @@ import base64 import json import logging import os -import re import uuid from typing import Literal, Optional @@ -21,7 +20,7 @@ class ScriptLine: class Script: - def __init__(self, locale: Literal["en", "zh"] = "en", lines: list[ScriptLine] = None): + def __init__(self, locale: Literal["en", "zh"] = "en", lines: Optional[list[ScriptLine]] = None): self.locale = locale self.lines = lines or [] @@ -38,139 +37,6 @@ class Script: return script -# Prompt template for script generation -SCRIPT_WRITER_PROMPT = """You are a skilled podcast script writer for "Hello Deer", a conversational podcast show with two hosts. 
- -Transform the provided content into an engaging podcast script following these guidelines: - -## Format Requirements -- Output as JSON with this structure: {{"locale": "en" or "zh", "lines": [{{"speaker": "male" or "female", "paragraph": "dialogue text"}}]}} -- Only two hosts: male and female, alternating naturally -- Target runtime: approximately 10 minutes of dialogue -- Start with the male host saying a greeting that includes "Hello Deer" - -## Tone & Style -- Natural, conversational dialogue - like two friends chatting -- Use casual expressions and conversational transitions -- Avoid overly formal language or academic tone -- Include reactions, follow-up questions, and natural interjections - -## Content Guidelines -- Frequent back-and-forth between hosts -- Keep sentences short and easy to follow when spoken -- Plain text only - no markdown formatting in the output -- Translate technical concepts into accessible language -- No mathematical formulas, code, or complex notation -- Make content engaging and accessible for audio-only listeners -- Exclude meta information like dates, author names, or document structure - -## Language -- Match the locale of the input content -- Use "{locale}" for the output locale - -Now transform this content into a podcast script: - -{content} -""" - - -def extract_json_from_text(text: str) -> dict: - """Extract JSON from text that might contain markdown code blocks or extra content.""" - # Try to find JSON in markdown code blocks first - json_block_pattern = r"```(?:json)?\s*(\{[\s\S]*?\})\s*```" - match = re.search(json_block_pattern, text) - if match: - return json.loads(match.group(1)) - - # Try to find raw JSON object - json_pattern = r"\{[\s\S]*\}" - match = re.search(json_pattern, text) - if match: - return json.loads(match.group(0)) - - # Last resort: try parsing the whole text - return json.loads(text) - - -def generate_script(content: str, locale: str) -> Script: - """Generate podcast script from content using LLM.""" 
- logger.info("Generating podcast script...") - - api_key = os.getenv("OPENAI_API_KEY") - base_url = os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1") - model = os.getenv("OPENAI_MODEL", "gpt-4o") - - if not api_key: - raise ValueError("OPENAI_API_KEY environment variable is not set") - - prompt = SCRIPT_WRITER_PROMPT.format(content=content, locale=locale) - - # First try with JSON mode - try: - response = requests.post( - f"{base_url}/chat/completions", - headers={ - "Authorization": f"Bearer {api_key}", - "Content-Type": "application/json", - }, - json={ - "model": model, - "messages": [ - {"role": "system", "content": "You are a podcast script writer. Always respond with valid JSON only, no markdown formatting."}, - {"role": "user", "content": prompt}, - ], - "response_format": {"type": "json_object"}, - }, - ) - - if response.status_code != 200: - raise Exception(f"LLM API error: {response.status_code} - {response.text}") - - result = response.json() - logger.info(f"API response keys: {result.keys()}") - if "error" in result: - raise Exception(f"API error: {result['error']}") - response_content = result["choices"][0]["message"]["content"] - logger.info(f"LLM response preview: {response_content[:200]}...") - script_json = json.loads(response_content) - - except (json.JSONDecodeError, KeyError) as e: - # Fallback: try without JSON mode for models that don't support it - logger.warning(f"JSON mode failed ({e}), trying without response_format...") - - response = requests.post( - f"{base_url}/chat/completions", - headers={ - "Authorization": f"Bearer {api_key}", - "Content-Type": "application/json", - }, - json={ - "model": model, - "messages": [ - {"role": "system", "content": "You are a podcast script writer. 
Respond with valid JSON only, no markdown or extra text."}, - {"role": "user", "content": prompt}, - ], - }, - ) - - if response.status_code != 200: - raise Exception(f"LLM API error: {response.status_code} - {response.text}") - - result = response.json() - response_content = result["choices"][0]["message"]["content"] - logger.debug(f"LLM response (fallback): {response_content[:500]}...") - script_json = extract_json_from_text(response_content) - - # Validate structure - if "lines" not in script_json: - raise ValueError(f"Invalid script format: missing 'lines' key. Got keys: {list(script_json.keys())}") - - script = Script.from_dict(script_json) - - logger.info(f"Generated script with {len(script.lines)} lines") - return script - - def text_to_speech(text: str, voice_type: str) -> Optional[bytes]: """Convert text to speech using Volcengine TTS.""" app_id = os.getenv("VOLCENGINE_TTS_APPID") @@ -264,45 +130,53 @@ def mix_audio(audio_chunks: list[bytes]) -> bytes: return output -def detect_locale(content: str) -> str: - """Auto-detect content locale based on character analysis.""" - chinese_chars = sum(1 for char in content if "\u4e00" <= char <= "\u9fff") - total_chars = len(content) +def generate_markdown(script: Script, title: str = "Podcast Script") -> str: + """Generate a markdown script from the podcast script.""" + lines = [f"# {title}", ""] - if total_chars > 0 and chinese_chars / total_chars > 0.1: - return "zh" - return "en" + for line in script.lines: + speaker_name = "**Host (Male)**" if line.speaker == "male" else "**Host (Female)**" + lines.append(f"{speaker_name}: {line.paragraph}") + lines.append("") + + return "\n".join(lines) def generate_podcast( - input_file: str, + script_file: str, output_file: str, - locale: Optional[str] = None, + transcript_file: Optional[str] = None, ) -> str: - """Generate a podcast from input content.""" + """Generate a podcast from a script JSON file.""" - # Read input content - with open(input_file, "r", encoding="utf-8") 
as f: - content = f.read() + # Read script JSON + with open(script_file, "r", encoding="utf-8") as f: + script_json = json.load(f) - if not content.strip(): - raise ValueError("Input file is empty") + if "lines" not in script_json: + raise ValueError(f"Invalid script format: missing 'lines' key. Got keys: {list(script_json.keys())}") - # Auto-detect locale if not specified - if not locale: - locale = detect_locale(content) - logger.info(f"Auto-detected locale: {locale}") + script = Script.from_dict(script_json) + logger.info(f"Loaded script with {len(script.lines)} lines") - # Step 1: Generate script - script = generate_script(content, locale) + # Generate transcript markdown if requested + if transcript_file: + title = script_json.get("title", "Podcast Script") + markdown_content = generate_markdown(script, title) + transcript_dir = os.path.dirname(transcript_file) + if transcript_dir: + os.makedirs(transcript_dir, exist_ok=True) + with open(transcript_file, "w", encoding="utf-8") as f: + f.write(markdown_content) + logger.info(f"Generated transcript to {transcript_file}") - # Step 2: Convert to audio + # Convert to audio audio_chunks = tts_node(script) if not audio_chunks: raise Exception("Failed to generate any audio") - # Step 3: Mix audio + # Mix audio output_audio = mix_audio(audio_chunks) # Save output @@ -312,15 +186,18 @@ def generate_podcast( with open(output_file, "wb") as f: f.write(output_audio) - return f"Successfully generated podcast to {output_file}" + result = f"Successfully generated podcast to {output_file}" + if transcript_file: + result += f" and transcript to {transcript_file}" + return result if __name__ == "__main__": - parser = argparse.ArgumentParser(description="Generate podcast from text content") + parser = argparse.ArgumentParser(description="Generate podcast from script JSON file") parser.add_argument( - "--input-file", + "--script-file", required=True, - help="Absolute path to input text/markdown file", + help="Absolute path to 
script JSON file", ) parser.add_argument( "--output-file", @@ -328,19 +205,18 @@ if __name__ == "__main__": help="Output path for generated podcast MP3", ) parser.add_argument( - "--locale", - choices=["en", "zh"], - default=None, - help="Language locale (auto-detected if not specified)", + "--transcript-file", + required=False, + help="Output path for transcript markdown file (optional)", ) args = parser.parse_args() try: result = generate_podcast( - args.input_file, + args.script_file, args.output_file, - args.locale, + args.transcript_file, ) print(result) except Exception as e: diff --git a/skills/public/podcast-generation/templates/tech-explainer.md b/skills/public/podcast-generation/templates/tech-explainer.md index 8dff4af..9f7751e 100644 --- a/skills/public/podcast-generation/templates/tech-explainer.md +++ b/skills/public/podcast-generation/templates/tech-explainer.md @@ -49,9 +49,9 @@ This is commonly used in signup flows, admin dashboards, or when importing users ```bash python /mnt/skills/public/podcast-generation/scripts/generate.py \ - --input-file /mnt/user-data/workspace/tech-content.md \ + --script-file /mnt/user-data/workspace/tech-explainer-script.json \ --output-file /mnt/user-data/outputs/tech-explainer-podcast.mp3 \ - --locale en + --transcript-file /mnt/user-data/outputs/tech-explainer-transcript.md ``` ## Tips for Technical Podcasts
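For reviewers, the transcript layout produced by the new `generate_markdown` helper is easy to preview in isolation. The following is a sketch that mirrors the logic added in this patch (an H1 title, then a bolded speaker label per dialogue line, with blank lines between), not the shipped code:

```python
# Mirrors generate_markdown from this patch, operating on plain dicts
# instead of Script/ScriptLine objects for a standalone preview.
def render_transcript(title: str, lines: list[dict]) -> str:
    out = [f"# {title}", ""]
    for line in lines:
        speaker = "**Host (Male)**" if line["speaker"] == "male" else "**Host (Female)**"
        out.append(f"{speaker}: {line['paragraph']}")
        out.append("")  # blank line after each dialogue line
    return "\n".join(out)

md = render_transcript(
    "The History of Artificial Intelligence",
    [{"speaker": "male", "paragraph": "Hello Deer! Welcome back."}],
)
```

Because the `title` field is optional in the script JSON, callers fall back to a default heading ("Podcast Script" in the patch) when it is absent.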