by Jay Emp0
# Ebook to Audiobook Converter

▶️ Watch Full Demo Video

## What It Does
Turn any PDF ebook into a professional audiobook automatically. Upload a PDF, get an MP3 audiobook in your Google Drive. Perfect for listening to books, research papers, or documents on the go.

Example: Input PDF → Output Audiobook

## Key Features
- Upload PDF via web form → Get MP3 audiobook in Google Drive
- Natural-sounding AI voices (MiniMax Speech-02-HD)
- Automatic text extraction, chunking, and audio merging
- Customizable voice, speed, and emotion settings
- Processes long books in batches with smart rate limiting

## Perfect For
- **Students**: Turn textbooks into study audiobooks
- **Professionals**: Listen to reports and documents while commuting
- **Content Creators**: Repurpose written content as audio
- **Accessibility**: Make content accessible to visually impaired users

## Requirements
| Component | Details |
|-----------|---------|
| n8n | Self-hosted ONLY (cannot run on n8n Cloud) |
| FFmpeg | Must be installed in your n8n environment |
| Replicate API | For MiniMax TTS (Sign up here) |
| Google Drive | OAuth2 credentials + "Audiobook" folder |

⚠️ Important: This workflow does NOT work on n8n Cloud because FFmpeg installation is required.

## Quick Setup

### 1. Install FFmpeg
Docker users:

```
docker exec -it <n8n-container-name> /bin/bash
apt-get update && apt-get install -y ffmpeg
```

Native installation:

```
sudo apt-get install ffmpeg   # Linux
brew install ffmpeg           # macOS
```

### 2. Get API Keys
- **Replicate**: Sign up at replicate.com and copy your API token
- **Google Drive**: Set up OAuth2 in n8n and create an "Audiobook" folder in Drive

### 3. Import & Configure
1. Import n8n.json into your n8n instance
2. Replace the Replicate API token in the "MINIMAX TTS" node
3. Configure Google Drive credentials and select your "Audiobook" folder
4. Activate the workflow

## Cost Estimate
| Component | Cost |
|-----------|------|
| MiniMax TTS API | $0.15 per 1000 characters ($3-5 for an average book) |
| Google Drive Storage | Free (up to 15GB) |
| Processing Time | ~1-2 minutes per 10 pages |

## How It Works
PDF Upload → Extract Text → Split into Chunks → Convert to Speech (batches of 5) → Merge Audio Files (FFmpeg) → Upload to Google Drive

The workflow uses four main modules:
1. **Extraction**: PDF text extraction and intelligent chunking
2. **Conversion**: MiniMax TTS processes text in batches
3. **Merging**: FFmpeg combines all audio files seamlessly
4. **Upload**: Final audiobook saved to Google Drive

## Voice Settings (Customizable)

```json
{
  "voice_id": "Friendly_Person",
  "emotion": "happy",
  "speed": 1,
  "pitch": 0
}
```

Available emotions: happy, neutral, sad, angry, excited

## Limitations
- ⚠️ Self-hosted n8n ONLY (not compatible with n8n Cloud)
- PDF files only (not EPUB, MOBI, or scanned images)
- Large books (500+ pages) take longer to process
- Requires FFmpeg installation (see setup above)

## Troubleshooting
**FFmpeg not found?**
- Docker: Run `docker exec -it <container> /bin/bash`, then `apt-get install ffmpeg`
- Native: Run `sudo apt-get install ffmpeg` (Linux) or `brew install ffmpeg` (macOS)

**Rate limit errors?**
- Increase the wait time in the "WAITS FOR 5 SECONDS" node to 10-15 seconds

**Google Drive upload fails?**
- Make sure you created the "Audiobook" folder in your Google Drive
- Reconfigure OAuth2 credentials in n8n

Created by emp0 | More workflows: n8n Gallery
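To make the Extraction module concrete, here is a minimal sketch of the kind of sentence-aware chunking an n8n Code node could perform before the TTS batches. The 5,000-character cap and the `extractedText` field name are illustrative assumptions, not the template's actual values:

```js
// Hypothetical chunking sketch for an n8n Code node (JavaScript).
const MAX_CHARS = 5000; // TTS APIs typically cap input length per request

function chunkText(text, maxChars = MAX_CHARS) {
  // Split on sentence boundaries so the audio never cuts mid-sentence
  const sentences = text.match(/[^.!?]+[.!?]+(\s|$)/g) || [text];
  const chunks = [];
  let current = '';
  for (const sentence of sentences) {
    if ((current + sentence).length > maxChars && current) {
      chunks.push(current.trim());
      current = '';
    }
    current += sentence;
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}

// n8n Code nodes return one item per chunk for the downstream batching loop
const text = $input.first().json.extractedText;
return chunkText(text).map((chunk, i) => ({ json: { index: i, chunk } }));
```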
by RealSimple Solutions
# POML → Prompt/Messages (No-Deps)

## What this does
Turns POML markup into either a single Markdown prompt or chat-style `messages[]`, using a zero-dependency n8n Code node. It supports variable substitution (via `context`), basic components (headings, lists, code, images, tables, line breaks), and optional schema-driven validation using `componentSpec` + `attributeSpec`.

## Credits
Created by Real Simple Solutions as an n8n template-friendly POML compiler (no dependencies) aiming for full POML feature parity. View more of our templates here.

## Who's it for
Teams who author prompts in POML and want a template-safe way to turn them into either a single Markdown prompt or chat-style messages, without installing external modules. Works on n8n Cloud and self-hosted.

## What it does
This workflow converts POML into:
- **prompt** (Markdown) for single-shot models, or
- **messages[]** (system|user|assistant) for chat APIs when `speakerMode` is true.

It supports variable substitution via a `context` object (`{{dot.path}}`), lists, headings, code blocks, images (including base64 → data: URL), tables from JSON (records/columns), and basic message components.

## How it works
- **Set (Specs & Context):** Provide `componentSpec` (allowed attributes per tag), `attributeSpec` (typing/coercion), and optional `context`.
- **Code (POML → Prompt/Messages):** A zero-dependency compiler parses the POML and emits `prompt` or `messages[]`.

> Add a yellow Sticky Note that includes this description and any setup links. Use additional neutral sticky notes to explain each step.

## How to set up
1. Import the template.
2. Open the first Set node and paste your `componentSpec`, `attributeSpec`, and `context` (examples included).
3. In the Code node, choose:
   - `speakerMode`: true to get `messages[]`, or false for a single prompt.
   - `listStyle`: dash | star | plus | decimal | latin.
4. Run → inspect `prompt`/`messages` in the output.

## Requirements
No credentials or community nodes. Works without external libraries (template-compliant).

## How to customize
- Add message tags (`<system-msg>`, `<user-msg>`, `<ai-msg>`) in your POML when using `speakerMode: true`.
- Extend `componentSpec`/`attributeSpec` to validate or coerce additional tags/attributes.
- Preformat arrays in `context` (e.g., bulleted, csv) for display, or add a small Set node to build them on the fly.
- Rename nodes and keep all user-editable fields grouped in the first Set node.

## Security & best practices
- **Never** hardcode API keys in nodes.
- Remove any personal IDs before publishing.
- Keep your Sticky Note(s) up to date and instructional.
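To illustrate the `{{dot.path}}` substitution step, here is a small sketch of how a zero-dependency resolver against the `context` object could look. The function name is hypothetical; the template's actual Code node implements this alongside component parsing and spec validation:

```js
// Resolve {{dot.path}} placeholders against a context object, no dependencies.
function substituteVars(text, context) {
  return text.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, path) => {
    // Walk the dot path: "user.name" -> context.user.name
    const value = path.split('.').reduce(
      (obj, key) => (obj == null ? undefined : obj[key]),
      context
    );
    // Leave unresolved placeholders intact rather than emitting "undefined"
    return value === undefined ? match : String(value);
  });
}

// Example:
// substituteVars('Hello {{user.name}}!', { user: { name: 'Ada' } })
// -> 'Hello Ada!'
```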
by Md Sagor Khan
## ⚡ How it works
This workflow automates first responses to new Zendesk tickets with the help of AI and your internal knowledge base.
1. **Webhook trigger** fires whenever a new ticket is created in Zendesk.
2. **Ticket details** (subject, description, requester info) are extracted.
3. **Knowledge base retrieval**: the workflow searches a Supabase vector store (with OpenAI embeddings) for the most relevant KB articles.
4. **AI assistant (RAG agent)** drafts a professional reply using the retrieved KB articles and conversation memory stored in Postgres.
5. **Decision logic**:
   - If no relevant KB info is found (or if it's a sensitive query like KYC, refunds, or account deletion), the workflow sends a fallback response and tags the ticket for human review.
   - Otherwise, it posts the AI-generated reply and tags the ticket with `ai_reply`.
6. **Logging & context memory** ensure future ticket updates are aware of past interactions.

## 🔧 Set up steps
This workflow takes about 15-30 minutes to set up.
1. Connect credentials for Zendesk, OpenAI, Supabase, and Postgres.
2. Prepare your knowledge base: store support content in Supabase (`documents` table) and embed it using the provided Embeddings node.
3. Set up the Postgres memory table (`zendesk_ticket_histories`) to store conversation history.
4. Update your Zendesk domain in the HTTP Request nodes (`<YOUR_ZENDESK_DOMAIN>`).
5. Deploy the webhook URL in Zendesk triggers so new tickets flow into n8n.
6. Test by creating a sample ticket and verifying:
   - AI replies appear in Zendesk
   - Correct tags (`ai_reply` or `human_requested`) are applied
   - Logs are written to Postgres
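As a rough illustration of the decision logic in step 5, here is one way the routing could be expressed in a single Code node. The field names (`kbMatches`, `subject`, `description`) and the keyword list are assumptions; the actual workflow implements this with n8n IF/Switch nodes:

```js
// Hypothetical routing sketch: fallback + human review for sensitive queries.
const SENSITIVE = /\b(kyc|refund|chargeback|account deletion|delete my account)\b/i;

const { kbMatches, subject, description } = $input.first().json;
const text = `${subject}\n${description}`;

const needsHuman = !kbMatches || kbMatches.length === 0 || SENSITIVE.test(text);

return [{
  json: {
    route: needsHuman ? 'fallback' : 'ai_reply',
    // Zendesk tags applied by the downstream HTTP Request node
    tags: needsHuman ? ['human_requested'] : ['ai_reply'],
  },
}];
```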
by Yoshino Haruki
## Who is this for?
This workflow is ideal for filmmakers, video producers, content creators, and location managers who need to quickly build a database of potential shooting locations without manual research and data entry.

## How it works
1. **Chat Input**: Start the workflow via the n8n chat interface and enter a search query (e.g., "Quiet cafes in Kyoto" or "Cyberpunk streets").
2. **Search**: The workflow queries the Google Maps Places API to find matching real-world locations.
3. **AI Analysis**: An AI agent (via OpenRouter) reviews the location details and writes a short, creative "Director's Commentary" highlighting its cinematic appeal.
4. **Data Entry**: The location name, address, rating, Google Maps link, and the AI's commentary are automatically saved to a Google Sheet.
5. **Notification**: Once all locations are processed, a summary link is sent to your Slack channel.

## Prerequisites
- **n8n version**: 1.0 or later
- **Google Cloud Platform**: API key with "Places API (New)" enabled
- **Google Sheets**: A formatted sheet (see setup below)
- **Slack**: An app/bot token with chat writing permissions
- **OpenRouter** (or OpenAI/Anthropic): API key for the LLM

## How to set up
1. **Google Sheet**: Create a new sheet with the following headers in the first row:
   - 場所名 (Name)
   - 住所 (Address)
   - 評価(星) (Rating)
   - AI監督のコメント (AI Comment)
   - GoogleMapリンク (Link)
2. **Credentials**: Configure your credentials for Google Maps, Google Sheets, Slack, and OpenRouter within n8n.
3. **Configuration Node**: Open the node named "Workflow Configuration" and input your specific details:
   - `googleMapsApiKey`: Your Google Cloud API key.
   - `slackChannelId`: The channel ID where you want notifications (e.g., C0123456).
   - `googleSheetId`: The string of characters found in your Google Sheet URL.

## Customization
- **Adjust results**: Change the **Limit** node settings to process more locations per run (the default is 2, to save API credits during testing).
- **Change persona**: Edit the system prompt in the **AI Location Analyzer** node to change the AI's tone (e.g., from "Film Director" to "Real Estate Agent" or "Travel Blogger").
- **Swap LLM**: You can easily replace the OpenRouter node with an OpenAI or Anthropic node if you prefer a different model.
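Behind the Search step, the Places API (New) Text Search endpoint does the heavy lifting. A minimal sketch of that call, where the field mask is an assumption matching the sheet columns above:

```js
// Sketch of a Places API (New) Text Search request.
const GOOGLE_MAPS_API_KEY = 'YOUR_API_KEY'; // from the Workflow Configuration node

const response = await fetch('https://places.googleapis.com/v1/places:searchText', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Goog-Api-Key': GOOGLE_MAPS_API_KEY,
    // Only request the fields the sheet needs: name, address, rating, link
    'X-Goog-FieldMask':
      'places.displayName,places.formattedAddress,places.rating,places.googleMapsUri',
  },
  body: JSON.stringify({ textQuery: 'Quiet cafes in Kyoto' }),
});

const { places = [] } = await response.json();
// Each place maps onto one sheet row before the AI commentary is added
```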
by Shun Nakayama
This workflow implements cutting-edge concepts from Google DeepMind's OPRO (Optimization by PROmpting) and Stanford's DSPy to automatically refine AI prompts. It iteratively generates, evaluates, and optimizes responses against a ground truth, allowing you to "compile" your prompts for maximum accuracy.

## Why this is powerful
Instead of manually tweaking prompts by trial and error, this workflow treats prompt engineering as an optimization problem:
- **OPRO-style optimization**: The "Optimizer" LLM analyzes past performance scores and their reasons to infer a better prompt.
- **DSPy-style logic**: It separates the logic (the workflow) from the parameters (the prompts), allowing the system to self-correct until it matches the ground truth.

## How it works
1. **Define**: Set your initial prompt and a test case with the expected answer (ground truth).
2. **Generate**: The workflow generates a response using the current prompt.
3. **Evaluate**: An AI evaluator scores the response (0-100) based on accuracy and format.
4. **Optimize**: If the score is low, the Optimizer AI analyzes the failure and rewrites the prompt.
5. **Loop**: The process repeats until the score reaches 95/100 or the loop limit is hit.

## Setup steps
1. **Configure OpenAI**: Ensure you have an OpenAI credential set up in the OpenAI Chat Model node.
2. **Customize**: Open the Define Initial Prompt & Test Data node and set your `initial_prompt`, `test_input`, and `ground_truth`.
3. **Run**: Execute the workflow and check the Manage Loop & State node output for the optimized prompt.
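A compressed sketch of the generate → evaluate → optimize loop follows. The two helper functions (`callLLM`, `evaluate`) are hypothetical stubs you would wire to your own OpenAI calls; only the 95/100 threshold and loop limit mirror the description above:

```js
// OPRO-style loop sketch, assuming stubbed LLM helpers.
async function callLLM(prompt) {
  // Placeholder: replace with a real chat-completion request
  throw new Error('wire callLLM to your LLM provider');
}

async function evaluate(answer, groundTruth) {
  // Placeholder: an LLM judge that returns { score: 0-100, reason: string }
  throw new Error('wire evaluate to your evaluator model');
}

async function optimizePrompt(initialPrompt, testInput, groundTruth, maxLoops = 5) {
  let prompt = initialPrompt;
  let best = { prompt, score: -1 };
  for (let i = 0; i < maxLoops; i++) {
    const answer = await callLLM(`${prompt}\n\nInput: ${testInput}`); // Generate
    const { score, reason } = await evaluate(answer, groundTruth);    // Evaluate
    if (score > best.score) best = { prompt, score };
    if (score >= 95) break;                                           // Target reached
    // Optimize: the optimizer LLM sees the failing prompt, its score, and why
    prompt = await callLLM(
      'Improve this prompt so the model output matches the expected answer.\n' +
      `Current prompt: ${prompt}\nScore: ${score}\nEvaluator feedback: ${reason}`
    );
  }
  return best;
}
```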
by Supira Inc.
## How it works
This workflow automatically collects the latest news articles from both English and Japanese sources using NewsAPI, summarizes them with OpenAI, and appends the results to a Google Sheet. The summaries are concise Japanese texts of about 50 characters, making it easy to review news highlights at a glance.

## Set up steps
1. Create a Google Sheet with two tabs:
   - `01_Input` (columns: Keyword, SearchRequired)
   - `02_Output` (columns: Date, Keyword, Summary, URL)
2. Enter your own Google Sheet ID and tab names in the workflow.
3. Add your NewsAPI key in the HTTP Request nodes.
4. Connect your OpenAI account (or deactivate the summarization node if not needed).
5. Run the workflow manually or use the daily schedule trigger at 13:00.

This template is ready to use with minimal changes. Sticky notes inside the workflow provide extra guidance.
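For orientation, here is roughly what the NewsAPI request in the HTTP Request nodes looks like. The `language` and `sortBy` parameters are illustrative defaults rather than the template's exact settings:

```js
// Sketch of a NewsAPI "everything" query for one keyword from the 01_Input tab.
const NEWSAPI_KEY = 'YOUR_NEWSAPI_KEY';
const keyword = 'AI';

const url =
  'https://newsapi.org/v2/everything' +
  `?q=${encodeURIComponent(keyword)}` +
  '&language=ja' +          // swap to 'en' for the English-sources pass
  '&sortBy=publishedAt' +
  `&apiKey=${NEWSAPI_KEY}`;

const { articles = [] } = await (await fetch(url)).json();
// Each article yields the title, description, and url written to 02_Output
```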
by Jeremiah Wright
## Who's it for
Recruiters, freelancers, and ops teams who scan job briefs and want quick, relevant n8n template suggestions, saved in a Google Sheet for tracking.

## What it does
Parses any job text, extracts exactly 5 search keywords, queries the n8n template library, and appends the matched templates (ID, name, description, author) to Google Sheets, including the canonical template URL.

## How it works
1. A trigger receives a message or pasted-in job brief.
2. An LLM agent returns 5 concise search terms (JSON).
3. For each keyword, an HTTP request searches the n8n templates API.
4. Results are split and written to Google Sheets; the workflow builds the public URL from the template ID and slug (sketched below).

## Set up
1. Add credentials for OpenAI (or swap the LLM node to your provider).
2. Create a Google Sheet with columns: Template ID, Name, User, Description, URL.
3. In the ⚙️ Config node, set: `GOOGLE_SHEETS_DOC_ID`, `GOOGLE_SHEET_NAME`, `N8N_TEMPLATES_API_URL`.

## Requirements
- n8n (cloud or self-hosted)
- OpenAI (or alternative LLM) credentials
- Google Sheets OAuth credentials

## Customize
- Change the model/system prompt to tailor keyword extraction.
- Swap Google Sheets for Airtable/Notion.
- Extend filters (e.g., only AI/CRM templates) before writing rows.
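The URL-building step in item 4 can be sketched as a small Code node. The `n8n.io/workflows/{id}-{slug}` pattern is an assumption based on public template links, and the input field names are illustrative:

```js
// Build the canonical template URL from the API's id and name fields.
const { id, name } = $input.first().json;

// Derive a URL slug from the template name
const slug = name
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, '-')
  .replace(/^-+|-+$/g, '');

return [{ json: { id, name, url: `https://n8n.io/workflows/${id}-${slug}` } }];
```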
by Ertay Kaya
# Zendesk Ticket Summarizer with Pinecone, OpenAI, and Slack

This workflow automates the process of summarizing recent Zendesk support tickets and sharing key insights in a Slack channel. It is ideal for support teams who want daily, AI-generated overviews of customer issues without manually reviewing each ticket.

## How it works
1. **Daily Trigger**: The workflow runs every day at 10am.
2. **Fetch Tickets**: It retrieves all Zendesk tickets created in the last 24 hours (optionally filtered by brand).
3. **Vector Storage**: Tickets are stored in a Pinecone vector database, with relevant fields and metadata.
4. **AI Summarization**: An AI agent (using OpenAI) analyzes the tickets, identifies the main complaints, and counts how many tickets mention each issue.
5. **Slack Notification**: The summary is posted to a specified Slack channel for your team to review.

## Setup instructions
1. Configure your Zendesk, Pinecone, OpenAI, and Slack credentials in the respective nodes.
2. Set your Pinecone index and namespace in both Pinecone nodes.
3. Adjust the Zendesk query if you want to filter by a specific brand.
4. Set the Slack channel ID where you want to receive the summaries.

## Use case
Get daily, actionable insights from your Zendesk tickets, helping your team quickly spot trends and recurring issues.
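The Fetch Tickets step needs a rolling 24-hour window. A minimal sketch of how that cutoff could be computed for the Zendesk search query; the template's exact query string may differ, and any brand filter would be appended per Zendesk's search syntax:

```js
// Compute a YYYY-MM-DD cutoff for "created in the last 24 hours".
const since = new Date(Date.now() - 24 * 60 * 60 * 1000)
  .toISOString()
  .slice(0, 10); // Zendesk search accepts date values in this format

const query = `type:ticket created>${since}`; // add a brand filter here if needed
return [{ json: { query } }];
```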
by Robert Breen
This n8n workflow template creates an intelligent data analysis chatbot that can answer questions about data stored in Google Sheets using OpenAI's GPT-5 Mini model. The system automatically analyzes your spreadsheet data and provides insights through natural language conversations.

## What This Workflow Does
- **Chat Interface**: Provides a conversational interface for asking questions about your data
- **Smart Data Analysis**: Uses AI to understand column structures and data relationships
- **Google Sheets Integration**: Connects directly to your Google Sheets data
- **Memory Buffer**: Maintains conversation context for follow-up questions
- **Automated Column Detection**: Automatically identifies and describes your data columns

## 🚀 Try It Out!

### 1. Set Up OpenAI Connection
1. Visit the OpenAI API Keys page to get your API key.
2. Go to OpenAI Billing and add funds to your billing account.
3. Copy your API key into your OpenAI credentials in n8n (or your chosen platform).

### 2. Prepare Your Google Sheet
Connect your data in Google Sheets. The data must follow this format (see the Sample Marketing Data):
- The **first row** contains column names.
- Data should be in rows 2-100.
- Log in using OAuth, then select your workbook and sheet.

### 3. Ask Questions of Your Data
You can ask natural language questions to analyze your marketing data, such as:
- **Total spend** across all campaigns
- **Spend for Paid Search only**
- **Month-over-month changes** in ad spend
- **Top-performing campaigns** by conversion rate
- **Cost per lead** for each channel

## 📬 Need Help or Want to Customize This?
📧 rbreen@ynteractive.com
🔗 LinkedIn
🔗 n8n Automation Experts
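To show the idea behind Automated Column Detection, here is a sketch of how a Code node could sample the sheet rows n8n delivers and build a short schema description for the agent. The field handling and naive type check are illustrative; the template does this inside its own agent setup:

```js
// Build a schema summary from the sheet rows (one n8n item per row).
const rows = $input.all().map(item => item.json);
const columns = Object.keys(rows[0] ?? {});

const schema = columns.map(col => {
  // Find a non-empty sample value and guess a coarse type from it
  const sample = rows.find(r => r[col] !== '' && r[col] != null)?.[col];
  const type = isNaN(Number(sample)) ? 'text' : 'number';
  return `- ${col} (${type}), e.g. "${sample}"`;
}).join('\n');

return [{ json: { schema, rowCount: rows.length } }];
```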
by Arkadiusz
## 📝 Workflow Description
This workflow creates a conversational bridge between Telegram / n8n Chat and Home Assistant. It allows users to control smart home devices or request information using natural language (text or voice).

## 🔑 Key Features
- **Multi-channel input**: Works with both Telegram and n8n's chat interface.
- **Voice support**: Telegram voice messages are transcribed to text using OpenAI Whisper.
- **AI-driven assistant**: Google Gemini processes queries in natural language.
- **Home Assistant integration**: Uses MCP client tools to execute actions like turning devices on/off, adjusting lights, or broadcasting messages.
- **Memory management**: Short-term memory keeps context within conversations.
- **Smart reply routing**: Responses are automatically sent back to the correct channel (Telegram or chat).
- **Message formatting**: Telegram replies are beautified (bold, bullet points, inline code, links).

## 📌 Node Overview
- **Telegram Trigger**: Captures incoming Telegram messages (text or voice).
- **Bot Is Typing**: Sends a "typing…" action to indicate the bot is working.
- **Voice or Text**: Separates voice and text inputs.
- **Get Voice File → Speech to Text → Transcription to ChatInput**: Handles Telegram voice notes by downloading the file, transcribing it, and preparing it for the chat pipeline.
- **When Chat Message Received**: Captures messages from n8n's built-in chat interface.
- **Process Messages**: Normalizes incoming data (input text, source, session ID, voice flag); see the sketch at the end of this section.
- **Home Agent**: Main AI agent that processes queries.
- **Google Gemini Chat Model**: Language model for intent understanding and conversation.
- **Simple Memory & Simple Memory1**: Buffer memories to preserve conversation context.
- **Home Assistant Connector**: MCP client node that executes smart home actions (turn devices on/off, adjust lights, etc.).
- **Reply Router**: Routes the assistant's response either to Telegram or to the n8n chat webhook.
- **Telegram Message Beautifier → Telegram Send**: Formats and sends responses back to Telegram.
- **Respond to Webhook**: Sends responses to n8n chat.

## 🚀 Example Use Cases
- Send "Turn on the living room lights" via Telegram → the bot triggers a Home Assistant action.
- Ask "What's the temperature in the bedroom?" → the response comes back formatted in Telegram.
- Record a voice note "Goodnight mode" → automatically transcribed and executed by Home Assistant.
- Use n8n chat to quickly trigger automations or check device statuses.

## ⚡️ Benefits
- Unified chat & voice control for Home Assistant.
- AI-powered natural language understanding.
- Works seamlessly across platforms (Telegram & n8n chat).
- Extensible: new tools or intents can be added easily.
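As referenced above, here is a minimal sketch of what the Process Messages normalization might look like, assuming hypothetical input field names; the actual node maps the Telegram and n8n-chat payloads into one common shape:

```js
// Normalize Telegram and n8n chat inputs into a single message shape.
const item = $input.first().json;
const isTelegram = Boolean(item.message);     // Telegram payloads carry `message`
const isVoice = Boolean(item.message?.voice);

return [{
  json: {
    source: isTelegram ? 'telegram' : 'chat',
    // Voice notes were already transcribed upstream into `transcript`
    chatInput: isVoice ? item.transcript : (item.message?.text ?? item.chatInput),
    sessionId: isTelegram ? String(item.message.chat.id) : item.sessionId,
    isVoice,
  },
}];
```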
by CentralStationCRM
## Overview
This template benefits anyone who wants to:
- automate web research on a prospect company,
- compile that research into an easily readable note, and
- save the note into CentralStationCRM.

## Tools in this workflow
- **CentralStationCRM**, the easy and intuitive CRM software for small teams. Here is our API documentation if you want to customize the workflow.
- **ChatGPT**, the well-known AI chatbot.
- **Tavily**, a web search service for large language models.

## Disclaimer
Tavily Web Search is currently a community node. You have to activate the use of community nodes inside your n8n account to use this workflow.

## Workflow Description
The workflow consists of:
- a webhook trigger,
- an AI Agent node, and
- an HTTP Request node.

### The Webhook Trigger
The webhook is set up in CentralStationCRM to trigger when a new company is created inside the CRM. The Webhook Trigger node in n8n then fetches the company data from the CRM.

### The AI Agent Node
The node uses ChatGPT as its AI chat model and two Tavily Web Search operations ("search for information" and "extract URLs") as tools. Additionally, it uses a simple prompt as a tool, telling the AI model to re-iterate on the research data if applicable.

The AI Agent node takes the company name and prompts ChatGPT to "do a deep research" on this company on the web: "The research shall help sales people get a good overview about the company and allow to identify potential opportunities." The AI agent then formats the results into Markdown and passes them to the next node.

### The CentralStationCRM protocol node
This is an HTTP request to the CentralStationCRM API. It creates a "protocol" (the API's name for notes in the CRM) with the Markdown data it received from the previous node. This protocol is saved in CentralStationCRM, where it can easily be accessed as a note when clicking on the new company entry.

## Customization ideas
Even though this workflow is pretty simple, it offers interesting possibilities for customization. For example, you can alter the webhook trigger (in CentralStationCRM and n8n) to fire when a person is created. You then have to alter the AI prompt as well and make sure the third node adds the research note to the person, not a company, via the CentralStationCRM API.

You could also swap the AI model used here for another one, compare the resulting research data, and get a deeper understanding of AI chat models.

Then of course there is the prompt itself. You can double down on the information you are most interested in and refine your prompt to make the AI bot focus on those areas of search. Start experimenting a bit!

## Preconditions
For this workflow to work, you need:
- a CentralStationCRM account with API access,
- an n8n account with API access, and
- an OpenAI account with API access.

Have fun with our workflow!
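As a rough illustration of the third node, here is a hypothetical version of the protocol-creation request. The endpoint path, payload shape, and auth header name are all assumptions for the sketch; consult the CentralStationCRM API documentation linked above for the real contract:

```js
// Hypothetical protocol-creation request; verify the actual endpoint and
// payload against the CentralStationCRM API docs before using.
const API_KEY = 'YOUR_CENTRALSTATION_API_KEY';
const { companyId, researchMarkdown } = $input.first().json;

await fetch(`https://YOUR_ACCOUNT.centralstationcrm.net/api/companies/${companyId}/protocols`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-apikey': API_KEY, // header name is an assumption
  },
  body: JSON.stringify({
    protocol: { name: 'AI company research', content: researchMarkdown },
  }),
});
```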
by Afareayo Soremekun
# ChannelCrawler API to Google Slides Template

This template shows how you can use the ChannelCrawler API alongside ChatGPT (or any LLM) to generate Google Slides using images and text received from the API.

## How it Works
1. A user inputs the link(s) to the YouTube channel(s) of their target creators.
2. The list is parsed by a Python script, returning it in a format that can be run in a loop.
3. The workflow iterates over each channel URL.
4. The URL is passed to the ChannelCrawler API, which returns a JSON of the creator's profile.
5. The OpenAI node processes the description and content of the creator's profile to create a summary.
6. We retrieve the Google Slides presentation using the Get Presentation node.
7. We use the Google Slides API to duplicate an existing page and pull back the original page, as it has a new revision ID.
8. We use the Google Slides API to replace the image placeholder in the presentation.
9. Lastly, we update the other placeholders with text from the ChannelCrawler and ChatGPT outputs (see the sketch at the end of this section).

## How to Use
1. On executing the workflow, a pop-up form will appear where you can insert the YouTube channel URLs.
2. On submission, provided the prerequisites are set up, the rest of the workflow will be triggered.

## Use Cases
You can create profiles on influencers and creators with extensive data points from the ChannelCrawler API and consistent summarization from GPT.

## Prerequisites
- **ChannelCrawler account**: there are great pay-as-you-go options for access to the API.
- **OpenAI account**: you can access free OpenAI credit if you are a first-time n8n user! Check the credentials options in the node.
- **Google account (for Slides)**: you should have a Google account, or sign up for Google with your non-Google email.
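As referenced in step 9, here is a sketch of the placeholder-replacement call against the Google Slides API. The `{{channel_name}}` and `{{summary}}` placeholder tokens are hypothetical, and in n8n this would normally run through an authenticated HTTP Request node rather than a raw fetch:

```js
// Replace text placeholders on the duplicated slide via batchUpdate.
const accessToken = 'YOUR_OAUTH_ACCESS_TOKEN'; // from your Google credential
const presentationId = 'YOUR_PRESENTATION_ID';

const requests = [
  {
    replaceAllText: {
      containsText: { text: '{{channel_name}}', matchCase: true },
      replaceText: $json.channelName, // from the ChannelCrawler profile
    },
  },
  {
    replaceAllText: {
      containsText: { text: '{{summary}}', matchCase: true },
      replaceText: $json.summary,     // from the OpenAI node
    },
  },
];

await fetch(`https://slides.googleapis.com/v1/presentations/${presentationId}:batchUpdate`, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${accessToken}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ requests }),
});
```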