by Shannon Atkinson
## Template Description — WDF Top Keywords

This workflow streamlines keyword research by automating the generation, filtering, and analysis of Google and YouTube keyword data. Ensure compliance with local regulations and API terms of service when using this workflow.

### 📌 Purpose

The WDF Top Keywords workflow automates collecting, processing, and managing keyword data for both Google and YouTube. By leveraging multiple data sources and APIs, it provides an efficient, scalable approach to identifying high-impact keywords for SEO, content creation, and marketing campaigns.

### Key Features

- Automates the generation of keyword suggestions using autocomplete APIs.
- Integrates with NocoDB to store and manage keyword data.
- Filters keywords by monthly search volume and cost-per-click (CPC).
- Supports bulk import of keyword data into structured databases.
- Outputs both Google and YouTube keyword insights, enabling informed decision-making.

### 🎯 Target Audience

This workflow is ideal for:

- Digital marketers aiming to optimize ad campaigns with data-driven insights.
- SEO specialists looking to identify high-potential keywords efficiently.
- Content creators seeking trending and relevant topics for their platforms.
- Agencies managing keyword research for multiple clients.

### ⚙️ How It Works

1. **Trigger:** The workflow runs on demand or at scheduled intervals.
2. **Keyword Generation:** Retrieves base keywords from NocoDB, then generates autocomplete suggestions for Google and YouTube.
3. **Data Processing:** Filters and formats keyword data against specific criteria (e.g., search volume, CPC) and consolidates results for efficient storage and analysis.
4. **Storage and Output:** Saves data into structured NocoDB tables for tracking and reuse, and bulk-imports monthly search volume statistics for detailed analysis.

### 🛠️ Key APIs and Tools Used

- **NocoDB**: Stores and organizes base and processed keyword data.
- **DataForSEO API**: Provides search volume and keyword performance metrics.
- **Google Autocomplete API**: Suggests relevant Google search terms.
- **YouTube Autocomplete API**: Suggests trending YouTube keywords.
- **Social Flood Docker Instance**: Serves as the local integration hub.

### Setup Instructions

Required tools:

- NocoDB
- n8n
- DataForSEO account
- Social Flood Docker instance

Create the following NocoDB tables:

- Base Keyword Search
- Second Order Google Keywords
- Second Order YouTube Keywords
- Search Volume

This template empowers users to handle complex keyword research tasks effortlessly, saving time and providing actionable insights. Share this template to enhance your workflow efficiency!
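As a reference for the keyword-generation step, here is a minimal sketch (Node 18+) of pulling suggestions from the unofficial Google/YouTube autocomplete endpoint. The `suggestqueries.google.com` URL and the `ds=yt` parameter are the widely used unofficial interface, not a documented API; in this template, the Social Flood Docker instance fills that role.

```javascript
// Minimal sketch (Node 18+, ESM): fetch autocomplete suggestions.
// suggestqueries.google.com is an unofficial endpoint — in this workflow
// the Social Flood Docker instance provides the equivalent API.
async function suggestions(term, platform = "google") {
  const url = new URL("https://suggestqueries.google.com/complete/search");
  url.searchParams.set("client", "firefox"); // the firefox client returns JSON
  url.searchParams.set("q", term);
  if (platform === "youtube") url.searchParams.set("ds", "yt"); // YouTube vertical
  const res = await fetch(url);
  const [, terms] = await res.json(); // shape: [query, [suggestion, ...]]
  return terms;
}

console.log(await suggestions("keyword research", "youtube"));
```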
by Lucas Peyrin
## How it works

This template is a hands-on tutorial for one of the most advanced and powerful patterns in n8n: asynchronous parallel processing, also known as the Fan-Out/Fan-In model.

**When should you use this?** Use this pattern when speed is your top priority and you have multiple independent, long-running tasks. Instead of running them one after another (which is slow), this workflow runs them all at the same time and waits for them all to finish.

We use a Construction Project analogy to explain the architecture:

- **The Main Workflow (Top):** This is the **Project Manager**. It defines the project, assigns all the tasks to specialist teams, and then pauses, waiting for a final report.
- **The Sub-Workflow (Bottom):** This represents the **Specialist Teams**. It's a single, reusable workflow that can perform any task it's assigned.
- **Static Data (The Brains):** A hidden **Project Dashboard** is used to track the status of every task in real time.

The process follows three key phases (see the sketch after the setup steps):

1. **Fan-Out:** The Project Manager starts multiple sub-workflows at once without waiting for them to finish.
2. **Asynchronous Execution:** Each Specialist Team works on its task independently and in parallel. When a team finishes, it updates its status on the Project Dashboard.
3. **Fan-In:** The Project Manager, which has been paused by a Wait node, is only resumed when the Project Dashboard confirms that all tasks are complete. It then receives the aggregated results from all the parallel tasks.

## Set up steps

Setup time: < 1 minute. This workflow is a self-contained tutorial; the only setup required is to configure the AI model.

1. **Configure credentials:** Go to the The AI Specialist node in the sub-workflow (bottom flow) and select your desired AI credential (Gemini in this case).
2. **Execute the workflow:** Click the "Execute Workflow" button on the Start Project node.
3. **Explore and learn:** Follow the execution path to see how the main workflow fans out and how the sub-workflow is called multiple times. Click on each node and read the detailed sticky notes to understand its specific role in this advanced pattern.
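To make the "Project Dashboard" concrete: in n8n, workflow static data is a built-in key-value store that persists across production executions and can be read and written from a Code node via `$getWorkflowStaticData`. A minimal sketch of the sub-workflow's "report back" step, where the `taskId`/`result` input shape is an illustrative assumption:

```javascript
// Sub-workflow "report back" step (n8n Code node sketch).
// $getWorkflowStaticData is n8n's built-in persistent store (note: it only
// persists across *production* executions, not manual test runs).
const dashboard = $getWorkflowStaticData('global');
dashboard.tasks = dashboard.tasks ?? {};

const { taskId, result } = $input.first().json; // illustrative input fields
dashboard.tasks[taskId] = { done: true, result }; // this specialist is finished

// Fan-in condition: the parent's Wait node is resumed only once every
// task on the dashboard reports done.
const allDone = Object.values(dashboard.tasks).every((t) => t.done);
return [{ json: { allDone, results: dashboard.tasks } }];
```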
by Pablo
## What this template does

The Ultimate Scraper for n8n uses Selenium and AI to retrieve any information displayed on a webpage. You can also use session cookies to log in to the targeted webpage for more advanced scraping needs.

⚠️ Important: This project requires specific setup instructions. Please follow the guidelines provided in the GitHub repository — n8n Ultimate Scraper Setup: https://github.com/Touxan/n8n-ultimate-scraper/tree/main. The workflow version on n8n and the GitHub project may differ; however, the most up-to-date version will always be the one available in the GitHub repository: https://github.com/Touxan/n8n-ultimate-scraper/tree/main.

## How to use

Deploy the project with all the requirements and call your webhook. Example request:

```bash
curl -X POST http://localhost:5678/webhook-test/yourwebhookid \
  -H "Content-Type: application/json" \
  -d '{
    "subject": "Hugging Face",
    "Url": "github.com",
    "Target data": [
      { "DataName": "Followers", "description": "The number of followers of the GitHub page" },
      { "DataName": "Total Stars", "description": "The total number of stars across the different repos" }
    ],
    "cookie": []
  }'
```

Or, to simply scrape a URL:

```bash
curl -X POST http://localhost:5678/webhook-test/67d77918-2d5b-48c1-ae73-2004b32125f0 \
  -H "Content-Type: application/json" \
  -d '{
    "Target Url": "https://github.com",
    "Target data": [
      { "DataName": "Followers", "description": "The number of followers of the GitHub page" },
      { "DataName": "Total Stars", "description": "The total number of stars across the different repos" }
    ],
    "cookies": []
  }'
```
by Łukasz
## Who is it for

This workflow is for anyone who is using N8N. It's especially helpful if you are a DevOps engineer and your N8N instance is self-hosted. If you care a lot about security and the number of failed executions, and you are already using InfluxDB to monitor the status of your systems, this will fit perfectly into your stack.

## How it works

This automation is fairly simple. It uses native N8N nodes to gather data from the instance itself, parses that data into a format compatible with InfluxDB input, and finally sends it to InfluxDB for further processing.

## Remember to set up

Setup is really simple: you just need to provide three variables. First, your InfluxDB URL; second, your InfluxDB organization; and third, your InfluxDB bucket name. Of course, to set up the N8N nodes and gather data from them, you will need your instance API key. And that's all.

## How it looks in InfluxDB

See below.

## Schedule Audits

Audits don't need to run often, but I recommend running them on a regular basis — this way you get a real data series in InfluxDB. Once a day should be enough, but it depends on your N8N usage, of course.

Glad I could help! Visit my profile for other automations for businesses. And if you are looking for dedicated software development, do not hesitate to reach out! You can also see automations in my Sailing Byte's GitHub N8N repository.
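For reference, this is roughly what the parsing step produces. InfluxDB's write API accepts line protocol (`measurement,tag=value field=value timestamp`); here is a minimal Code-node sketch, where the audit input shape is an illustrative assumption while the line-protocol format and `/api/v2/write` endpoint are standard InfluxDB:

```javascript
// n8n Code node sketch: convert audit results into InfluxDB line protocol.
// The input shape (one item per risk section) is an illustrative assumption;
// the output format is standard InfluxDB line protocol.
const ts = Date.now(); // millisecond timestamp (write with precision=ms)

const lines = $input.all().map(({ json }) => {
  const risk = String(json.risk ?? "unknown").replace(/[ ,=]/g, "_"); // escape tag characters
  const findings = json.sections?.length ?? 0;
  return `n8n_audit,risk=${risk} findings=${findings}i ${ts}`;
});

// Pass to an HTTP Request node that POSTs the body to:
//   {INFLUX_URL}/api/v2/write?org={ORG}&bucket={BUCKET}&precision=ms
return [{ json: { body: lines.join("\n") } }];
```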
by Rod
## Telegram Personal Assistant with Long-Term Memory & Note-Taking

This n8n workflow transforms your Telegram bot into a powerful personal assistant that handles voice, photo, and text messages. The assistant uses AI to interpret messages, save important details as long-term memories or notes in a Baserow database, and recall information for future interactions.

### 🌟 How It Works

**Message Reception & Routing**
- **Telegram Integration:** The workflow is triggered by incoming messages on your Telegram bot.
- **Dynamic Routing:** A switch node inspects the message to determine whether it's voice, text, or photo (with caption) and routes it for the appropriate processing.

**Content Processing**
- **Voice Messages:** Audio files are retrieved and sent to an AI transcription node to convert spoken words into text.
- **Text Messages:** Text is directly captured and prepared for analysis.
- **Photos:** If an image is received, the bot fetches the file (and caption, if provided) and uses an AI-powered image analysis node to extract relevant details.

**AI-Powered Agent & Memory Management**
- The core AI agent (powered by GPT-4o-mini) processes the incoming message along with any previous conversation history stored in PostgreSQL memory buffers.
- **Long-Term Memory:** When a message contains personal or noteworthy information, the assistant uses a dedicated tool to save this data as a long-term memory in Baserow.
- **Note-Taking:** For specific instructions or reminders, the assistant saves concise notes in a separate Baserow table.
- The AI agent follows defined rules to decide which details are saved as memories and which are saved as notes.

**Response Generation**
- After processing the message and updating memory/notes as needed, the AI agent crafts a contextual and personalized response.
- The response is sent back to the user via Telegram, ensuring a smooth and natural conversation flow.

### 🚀 Key Features

- **Multimodal Input:** Seamlessly handles voice, photo (with captions), and text messages.
- **Long-Term Memory & Note-Taking:** Uses a Baserow database to store personal details and notes, enhancing conversational context over time.
- **AI-Driven Contextual Responses:** Leverages an AI agent to generate personalized, context-aware replies based on current input and past interactions.
- **User Security & Validation:** Incorporates validation steps to verify the user's Telegram ID before processing, ensuring secure and personalized interactions.
- **Easy Baserow Setup:** Comes with a clear setup guide and sample configurations to quickly integrate Baserow for managing memories and notes.

### 🔧 Setup Guide

1. **Telegram Bot Setup**
   - Create your bot via BotFather and obtain the bot token.
   - Configure the Telegram webhook in n8n with your bot's token and URL.
2. **Baserow Database Configuration**
   - **Memory table:** Create a workspace titled "Memories and Notes". Set up a table (e.g., "Memory Table") with at least two fields: Memory (long text) and Date Added (US date format with time).
   - **Notes table:** Duplicate the Memory Table, rename it to "Notes Table", and change the first field's name from "Memory" to "Notes".
3. **n8n Workflow Import & Configuration**
   - Import the workflow JSON into your n8n instance.
   - Update credentials for Telegram, Baserow, OpenAI, and PostgreSQL (for memory buffering) as needed.
   - Adjust node settings if you need to customize the AI agent prompts or memory management rules.
4. **Testing & Deployment**
   - Test your bot by sending various message types (text, voice, photo) to confirm that the workflow processes them correctly, updates Baserow, and returns the appropriate response.
   - Monitor logs to ensure that memory and note entries are correctly stored and retrieved.

### ✨ Example Interactions

- **Voice message processing:** User sends a voice note requesting a reminder. Bot: "Thanks for your message! I've noted your reminder and saved it for future reference."
- **Photo with caption:** User sends a photo with the caption "Save this recipe for dinner ideas." Bot: "Got it! I've saved this recipe along with the caption for you."
- **Text message for memory saving:** User: "I love hiking on weekends." Bot: "Noted! I'll remember your interest in hiking."
- **Retrieving information:** User asks: "What notes do I have?" Bot: "Here are your latest notes: [list of saved notes]."

### 🛠️ Resources & Next Steps

- **Telegram bot configuration:** Telegram BotFather Guide
- **n8n documentation:** n8n Docs
- **Community forums:** Join discussions and share your customizations!

This workflow not only streamlines message processing but also empowers users with a personal AI assistant that remembers details over time. Customize the rules and responses further to fit your unique requirements and enjoy a more engaging, intelligent conversation experience on Telegram!
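For reference, the switch-node routing described above can be reproduced in a few lines of Code-node logic. The `message.voice` / `message.photo` / `message.text` fields are the standard Telegram Bot API shape; the output labels are illustrative:

```javascript
// n8n Code node sketch: classify an incoming Telegram update by message type.
// message.voice / message.photo / message.text are standard Bot API fields.
const msg = $input.first().json.message ?? {};

let route;
if (msg.voice) {
  route = { type: "voice", fileId: msg.voice.file_id };
} else if (msg.photo?.length) {
  // Telegram sends several resolutions; the last entry is the largest.
  route = { type: "photo", fileId: msg.photo.at(-1).file_id, caption: msg.caption ?? "" };
} else if (msg.text) {
  route = { type: "text", text: msg.text };
} else {
  route = { type: "unsupported" };
}

return [{ json: { ...route, chatId: msg.chat?.id } }];
```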
by Mohammadreza azari
### 🔧 How it works

- The workflow triggers when a new order is created in WooCommerce.
- It extracts order details, including ID, status, total, and the products list.
- It sends a formatted message via Telegram to the store admin.
- The message includes a clickable button that links directly to the order view page.

### ⚙️ Set up steps

- Estimated setup time: 5–10 minutes.
- Requires active WooCommerce REST API credentials.
- Requires a Telegram bot and your admin chat ID.
- Replace the Telegram chatId and WooCommerce credentials in the workflow.
- Make sure your WooCommerce site allows external API access.
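As a sketch of the notification step (not the template's exact node configuration): the Telegram Bot API's `sendMessage` with an `inline_keyboard` produces the clickable order button. The environment variable names and message wording are illustrative, and the order edit URL shown is the classic (pre-HPOS) WooCommerce admin link — adjust it for your store:

```javascript
// n8n Code node sketch: send the admin an order summary with a link button.
// The Telegram endpoint and reply_markup shape are standard Bot API;
// $env variable names and the message wording are illustrative.
const order = $input.first().json; // WooCommerce order payload

const text = [
  `🛒 New order #${order.id} (${order.status})`,
  `Total: ${order.total} ${order.currency}`,
  ...order.line_items.map((i) => `• ${i.name} × ${i.quantity}`),
].join("\n");

await this.helpers.httpRequest({
  method: "POST",
  url: `https://api.telegram.org/bot${$env.TELEGRAM_TOKEN}/sendMessage`,
  json: true,
  body: {
    chat_id: $env.ADMIN_CHAT_ID,
    text,
    reply_markup: {
      inline_keyboard: [[
        // Classic order edit URL; use the HPOS URL if your store has it enabled.
        { text: "Open order", url: `${$env.STORE_URL}/wp-admin/post.php?post=${order.id}&action=edit` },
      ]],
    },
  },
});

return [{ json: { notified: order.id } }];
```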
by Joseph LePage
## Description

This workflow automates document processing using LlamaParse to extract and analyze text from various file formats. It intelligently processes documents, extracts structured data, and delivers actionable insights through multiple channels.

## How It Works

**Document Ingestion & Processing 📄**
- Monitors Gmail for incoming attachments or accepts documents via webhook
- Validates file formats against supported LlamaParse extensions
- Uploads documents to LlamaParse for advanced text extraction
- Stores original documents in Google Drive for reference

**Intelligent Document Analysis 🧠**
- Automatically classifies document types (invoices, reports, etc.)
- Extracts structured data using customized AI prompts
- Generates comprehensive document summaries with key insights
- Converts unstructured text into organized JSON data

**Invoice Processing Automation 💼**
- Extracts critical invoice details (dates, amounts, line items)
- Organizes financial data into structured formats
- Calculates tax breakdowns, subtotals, and payment information
- Maintains detailed records for accounting purposes

**Multi-Channel Delivery 📱**
- Saves extracted data to Google Sheets for tracking and analysis
- Sends concise summaries via Telegram for immediate review
- Creates searchable document archives in Google Drive
- Updates spreadsheets with structured financial information

## Setup Steps

**Configure API Credentials 🔑**
- Set up the LlamaParse API connection
- Configure Gmail OAuth for email monitoring
- Set up Google Drive and Sheets integrations
- Add Telegram bot credentials for notifications

**Customize AI Processing ⚙️**
- Adjust document classification parameters
- Modify extraction templates for specific document types
- Fine-tune summary generation prompts
- Customize the invoice data extraction schema

**Test and Deploy 🚀**
- Test with sample documents of various formats
- Verify data extraction accuracy
- Confirm notification delivery
- Monitor processing pipeline performance
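To illustrate the kind of invoice extraction schema mentioned above, here is a hypothetical shape you might embed in the AI prompt — the field names are illustrative, not the template's actual schema:

```javascript
// Hypothetical invoice extraction schema passed to the AI prompt.
// Field names are illustrative; adapt them to your accounting needs.
const invoiceSchema = {
  invoice_number: "string",
  issue_date: "YYYY-MM-DD",
  due_date: "YYYY-MM-DD",
  vendor: { name: "string", tax_id: "string" },
  line_items: [{ description: "string", quantity: "number", unit_price: "number", total: "number" }],
  subtotal: "number",
  tax: { rate: "number", amount: "number" },
  total: "number",
  currency: "ISO 4217 code, e.g. EUR",
};

// Embedded in the prompt so the model returns strictly matching JSON:
const prompt = `Extract the invoice fields from the text below and answer
with JSON matching this schema exactly:\n${JSON.stringify(invoiceSchema, null, 2)}`;
```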
by Joseph LePage
## 📄✨ Easy WordPress Content Creation from PDF Docs + Human in the Loop Gmail

This n8n workflow automates the process of transforming PDF documents into engaging, SEO-friendly WordPress blog posts. It incorporates AI-powered text analysis, automatic image generation, and a human review step to ensure quality before publishing.

### 🚀 How It Works

**🗂️ PDF Upload & Text Extraction**
- Users upload a PDF document through a form trigger.
- The workflow extracts text from the uploaded file, ensuring compatibility with supported formats.

**🤖 AI-Powered Blog Post Generation**
- The extracted text is analyzed by an AI model (GPT-based) to create a structured blog post.
- The AI generates a captivating, SEO-friendly title and well-formatted HTML content, including an introduction, chapters with subheadings, and a conclusion.

**🎨 Image Creation & Integration**
- An image is generated using Pollinations.ai based on the blog post title.
- The vibrant image is uploaded to WordPress and set as the featured image for the post.

**📝 WordPress Draft Creation**
- A draft blog post is created on WordPress with the AI-generated title, content, and featured image.

**✅ Human-in-the-Loop Approval**
- The draft content is sent via Gmail to a reviewer for manual approval.
- If approved, the post is published on WordPress. If not, an error message is sent for troubleshooting.

**📢 Multi-Channel Notifications**
- Once published, notifications are sent via Gmail and Telegram to relevant stakeholders.

### 🔧 Setup Steps

**🔑 Configure API Credentials**
Set up API connections for:
- OpenAI (for AI content generation)
- WordPress (for post creation and media uploads)
- Gmail (for sending approval emails)
- Telegram (for notifications)
- imgbb (for saving the blog image)

**⚙️ Customize Workflow Parameters**
- Adjust the AI prompt to match your desired blog structure and tone.
- Modify the image generation parameters to align with your branding needs.

**🧪 Test & Deploy**
Test the workflow with sample PDFs to ensure:
- Accurate text extraction
- Proper formatting of generated content
- Seamless approval and publishing processes

This workflow streamlines content creation while maintaining quality control through human oversight, making it an ideal solution for efficient blog management! 🎉
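A rough sketch of the image step, under stated assumptions: Pollinations.ai returns a rendered image directly for a URL-encoded prompt, and WordPress accepts media uploads at `/wp-json/wp/v2/media` with an application password. The sizing parameters, env variable names, and lack of error handling are illustrative, not the template's exact configuration:

```javascript
// Plain Node sketch (Node 18+): title → Pollinations image → WordPress media.
// The Pollinations URL pattern and the /wp-json/wp/v2/media endpoint are real;
// sizing parameters and env variable names are illustrative.
const title = "My AI-generated blog post";

// Pollinations returns the rendered image directly for a URL-encoded prompt.
const imgUrl = `https://image.pollinations.ai/prompt/${encodeURIComponent(title)}?width=1200&height=630`;
const image = Buffer.from(await (await fetch(imgUrl)).arrayBuffer());

// Upload as a WordPress media item (Basic auth with an application password).
const auth = Buffer.from(`${process.env.WP_USER}:${process.env.WP_APP_PASSWORD}`).toString("base64");
const res = await fetch(`${process.env.WP_URL}/wp-json/wp/v2/media`, {
  method: "POST",
  headers: {
    Authorization: `Basic ${auth}`,
    "Content-Type": "image/jpeg",
    "Content-Disposition": 'attachment; filename="featured.jpg"',
  },
  body: image,
});

const { id } = await res.json();
console.log("featured_media id:", id); // set this when creating the draft post
```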
by Ozgur Karateke
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## 1 — What Does It Do / Which Problem Does It Solve?

This workflow turns Google Docs-based contract & form templates into ready-to-sign PDFs in minutes, all from a single chat flow.

- **Automates repetitive document creation.** Instead of copying a rental, sales, or NDA template and filling it by hand every time, the bot asks for the required values and fills them in.
- **Eliminates human error.** It lists every mandatory field so nothing is missed, and removes unnecessary clauses via conditional blocks.
- **Speeds up approvals.** The final draft arrives as a direct PDF link: one click to send for signing.
- **One template → unlimited variations.** Every new template you drop in Drive is auto-listed with **zero workflow edits**; it scales effortlessly.
- **100% no-code.** Runs on n8n + Google Apps Script; no extra backend, self-hosted or cloud.

## 2 — How It Works (Detailed Flow)

**📝 Template Discovery 📂**
The TemplateList node scans the Drive folder you specify via the `?mode=meta` endpoint and returns an id / title / desc list. The bot shows this list in chat.

**🎯 Selection & Metadata Fetch**
The user types a template name. 🔍 GetMetaData opens the chosen Doc, extracts META_JSON, placeholders, and conditional blocks, then lists mandatory & optional fields.

**🗣 Data-Collection Loop**
The bot asks for every placeholder value. For each conditional block it asks 🟢 Yes / 🔴 No. Answers are accumulated in a `data` JSON object.

**✅ Final Confirmation**
The bot summarizes the inputs → when the user clicks Confirm, the DocProcess sub-workflow starts.

**⚙️ DocProcess Sub-Workflow**

| 🔧 Step | Node | Task |
| --- | --- | --- |
| 1 | User Choice Match Check | Verifies name–ID match; throws if wrong |
| 2 | GetMetaData (renew) | Gets the latest placeholder list |
| 3 | Validate JSON Format | Checks for missing / unknown fields |
| 4 | CopyTemplate | Copies the Doc via the Drive API |
| 5 | FillDocument | Apps Script fills placeholders & removes blocks |
| 6 | Generate PDF Link | Builds an `export?format=pdf` URL |

**📎 Delivery**
The master agent sends 🔗 Download PDF & ✏️ Open Google Doc links.

**🚫 Error Paths**
- `status:"ERROR", missing:[…]` → bot lists missing fields and re-asks.
- `unknown:[…]` → template list is outdated; rerun TemplateList.
- Any Apps Script error → the returned message is shown verbatim in chat.

## 3 — 🚀 Setup Steps (Full Checklist)

> Goal: Get a flawless PDF on the first run. Mentally tick the ☑️ in front of every line as you go.

### ☁️ A. Google Drive Preparation

| Step | Do This | Watch Out For |
| --- | --- | --- |
| 1 | Create a Templates/ folder → put every template Doc inside | Exactly one folder; no sub-folders |
| 2 | Placeholders in every Doc are {{UPPER_CASE}} | No Turkish chars or spaces |
| 3 | Wrap optional clauses with [[BLOCK_NAME:START]]…[[BLOCK_NAME:END]] | The START tag must have a blank line above |
| 4 | Add a META_JSON block at the very end | Script deletes it automatically after fill |
| 5 | Right-click Doc > Details ▸ Description = 1-line human description | Shown by the bot in the list |
| 6 | Create a second Generated/ folder (for copies) | Keeps Drive tidy |

> 🔑 Folder ID (the long alphanumeric string) = `<TEMPLATE_PARENT_ID>` — we'll paste this into the TemplateList node next.

Simple sample template → Template Link

### 🛠 B. Import the Workflow into n8n

Settings ▸ Import Workflow ▸ DocAgent.json. If nodes look broken afterwards, it is not a community-node problem; you only need to select credentials.
### 📑 C. Customize the TemplateList Node

1. Open the Template List node ⚙️ → replace `'%3CYOUR_PARENT_ID%3E'` in `parents` with the real folder ID from the URL.
2. Right-click the node > Execute Node. Copy the entire JSON response.
3. In the editor, paste it into:
   - DocAgent → System Prompt (top)
   - User Choice Match Check → System Prompt (top)
4. Save.

> ⚠️ Why manual? Caching the list saves LLM tokens. Whenever you add a template, rerun the node and update the prompts.

### 🔗 D. Deploy the Apps Script

| Step | Screen | Note |
| --- | --- | --- |
| 1 | Open Gist files GetMetaData.gs + FillDocument.gs → File ▸ Make a copy | Both files may live in one project |
| 2 | Project Settings > enable Google Docs API ✔️ & Google Drive API ✔️ | Otherwise you'll see 403 errors |
| 3 | Deploy ▸ New deployment ▸ Web app • Execute as: Me • Who has access: Anyone | |
| 4 | On the consent screen allow scopes: …/auth/documents, …/auth/drive | Click Advanced › Go if Google warns |
| 5 | Copy the Web App URL (e.g. https://script.google.com/macros/s/ABC123/exec) | If this URL changes, update n8n |

Apps Script source code → Notion Link

### 🔧 E. Wire the Script URL in n8n

| Node | Field | Action |
| --- | --- | --- |
| GetMetaData | URL | `<WEB_APP_URL>?mode=meta&id={{ $json["id"] }}` |
| FillDocument | URL | `<WEB_APP_URL>` |

> 💡 Prefer using an .env file? Add `GAS_WEBAPP_URL=…` and reference it as `{{ $env.GAS_WEBAPP_URL }}`.

### 🔐 F. Add Credentials

- **Google Drive OAuth2** → Drive API (v3) Full Access
- **Google Docs OAuth2** → same account
- **LLM key** (OpenAI / Gemini)
- (Optional) Postgres Chat Memory credential for the corresponding node

### 🧪 G. First Run (Smoke Test)

1. Switch the workflow to Active.
2. In the chat panel type /start.
3. Bot lists templates → pick one.
4. Fill mandatory fields, optionally toggle blocks → Confirm.
5. 🔗 Download PDF link appears → ☑️ setup complete.

### ❌ H. Common Errors & Fixes

| 🆘 Error | Likely Cause | Remedy |
| --- | --- | --- |
| 403: Apps Script permission denied | Web app access set to User | Redeploy as Anyone, re-authorize scopes |
| placeholder validation failed | Missing required field | Provide the listed values → rerun DocProcess |
| unknown placeholders: … | Template vs. agent mismatch | Check placeholder spelling (UPPER_CASE ASCII) |
| Template ID not found | Prompt list is old | Rerun TemplateList → update both prompts |
| Cannot find META_JSON | No meta block / wrong tag | Add [[META_JSON_START]] … [[META_JSON_END]], retry |

### ✅ Final Checklist

- [ ] Drive folder structure & template rules ready
- [ ] Workflow imported, folder ID set in node
- [ ] TemplateList output pasted into both prompts
- [ ] Apps Script deployed, URL set in nodes
- [ ] OAuth credentials & LLM key configured
- [ ] /start test passes, PDF link received

🙋‍♂️ Need help with customizations? Reach out for consulting & support on LinkedIn: Özgür Karateke

Full Documentation → Notion
Simple sample template → Template Link
Apps Script source code → Notion Link
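To make the FillDocument step concrete, here is a minimal Apps Script sketch of the placeholder fill. `DocumentApp`, `replaceText`, and `ContentService` are real Apps Script APIs; the request payload shape is an illustrative assumption, and the full script in the linked Gist also handles conditional blocks and META_JSON removal:

```javascript
// Apps Script sketch of the FillDocument web app (doPost).
// The payload shape { docId, data: { PLACEHOLDER: value } } is an
// illustrative assumption; the APIs used are standard Apps Script.
function doPost(e) {
  const { docId, data } = JSON.parse(e.postData.contents);
  const body = DocumentApp.openById(docId).getBody();

  // Fill every {{UPPER_CASE}} placeholder with its collected value.
  Object.entries(data).forEach(([key, value]) => {
    body.replaceText(`\\{\\{${key}\\}\\}`, String(value));
  });

  return ContentService
    .createTextOutput(JSON.stringify({ status: "OK", docId }))
    .setMimeType(ContentService.MimeType.JSON);
}
```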
by Abrar Sami
## Auto-generate product comparison pages that help users buy faster

This workflow creates detailed "X vs Y" product comparison pages designed to help readers make faster, more confident purchase decisions — all with zero manual writing.

### How it works

- Triggered manually or via a Google Sheets row
- Takes two product names as input (e.g. "Notion vs Evernote")
- Uses AI to generate:
  - ✅ A compelling title and meta description
  - 📝 A clear feature-by-feature comparison
  - 🤝 Use-case-based recommendations
  - 💬 An FAQ section tailored to user pain points
- Saves each section into a Google Sheet for review or publishing
- Final output can be exported to your CMS or website builder (like Dorik, Webflow, etc.)

### Set up steps

- You'll need OpenAI and Google Sheets credentials
- Takes 10–15 minutes to plug in your keys and connect the sheet
- Adjust prompts to match your brand tone or SEO goals

📝 You can easily expand this to generate pricing tables, testimonials, or even localized versions using the same structure.

Ideal for SaaS companies, affiliate marketers, or content teams who want to scale up comparison content — without spending hours writing.
by Ferenc Erb
## Overview

Transform your Bitrix24 Open Line channels with an intelligent chatbot that leverages Retrieval-Augmented Generation (RAG) technology to provide accurate, document-based responses to customer inquiries in real time.

## Use Case

This workflow is designed for organizations that want to enhance their customer support capabilities in Bitrix24 by providing automated, knowledge-based responses to customer inquiries. It's particularly useful for:

- Customer service teams handling repetitive questions
- Support departments with extensive documentation
- Sales teams needing quick access to product information
- Organizations looking to provide 24/7 customer support

## What This Workflow Does

**Smart Document Processing**
- Automatically processes uploaded PDF documents
- Splits documents into manageable chunks
- Generates vector embeddings for semantic understanding
- Indexes content for efficient retrieval

**AI-Powered Responses**
- Utilizes Google Gemini AI to generate natural language responses
- Constructs answers based on relevant document content
- Maintains conversation context for coherent interactions
- Provides fallback responses when information is not available

**Vector Database Integration**
- Stores document embeddings in the Qdrant vector database
- Enables semantic search beyond simple keyword matching
- Retrieves the most relevant information for each query
- Maintains a persistent knowledge base that grows over time

**Webhook Handler**
- Processes incoming messages from Bitrix24 Open Line channels
- Handles authentication and security validation
- Routes different types of events to appropriate handlers
- Manages session and conversation state

**Event Routing**
Intelligently routes different event types (see the sketch at the end of the setup instructions):
- ONIMBOTMESSAGEADD: Processes new user messages
- ONIMBOTJOINCHAT: Handles the bot joining a conversation
- ONAPPINSTALL: Manages application installation
- ONIMBOTDELETE: Handles bot deletion

**Document Management**
- Organizes processed documents in designated folders
- Tracks document processing status
- Moves indexed documents to appropriate locations
- Maintains document metadata for reference

**Interactive Menu**
- Provides menu-based options for common user requests
- Customizable menu items and responses
- Easy navigation for users seeking specific information
- Fallback to an operator when needed

## Technical Architecture

**Components**
- Webhook Handler: Receives and validates incoming requests from Bitrix24
- Credential Manager: Securely manages authentication tokens and API keys
- Event Router: Directs events to appropriate processing functions
- Document Processor: Handles document loading, chunking, and embedding
- Vector Store: Qdrant database for storing and retrieving document embeddings
- Retrieval System: Searches for relevant document chunks based on user queries
- LLM Integration: Google Gemini model for generating natural language responses
- Response Manager: Formats and sends responses back to Bitrix24

**Integration Points**
- **Bitrix24 API**: For bot registration, message handling, and user interaction
- **Ollama API**: For generating document embeddings
- **Qdrant API**: For vector storage and retrieval
- **Google Gemini API**: For AI-powered response generation

## Setup Instructions

**Prerequisites**
- Active Bitrix24 account with Open Line channels enabled
- Access to an n8n workflow system
- Ollama API credentials
- Qdrant vector database access
- Google Gemini API key

**Configuration Steps**

1. **Initial Setup**
   - Import the workflow into your n8n instance
   - Configure credentials for all services
   - Set up webhook endpoints
2. **Bitrix24 Configuration**
   - Create a new Bitrix24 application
   - Configure webhook URLs
   - Set appropriate permissions
   - Install the application to your Bitrix24 account
3. **Document Storage**
   - Create a designated folder in Bitrix24 for knowledge base documents
   - Configure folder paths in the workflow settings
   - Upload initial documents to be processed
4. **Bot Configuration**
   - Customize the bot name, avatar, and description
   - Configure welcome messages and menu options
   - Set up fallback responses
5. **Testing**
   - Verify successful installation
   - Test the document processing pipeline
   - Send test queries to evaluate response quality
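A minimal sketch of the event-routing step in a Code node — the event names are the real Bitrix24 bot events listed above, while the handler labels and the simplified payload shape are illustrative:

```javascript
// n8n Code node sketch: route Bitrix24 bot events to handlers.
// Event names are real Bitrix24 bot events; the payload shape is simplified
// and the handler keys are illustrative.
const { event, data } = $input.first().json;

const routes = {
  ONIMBOTMESSAGEADD: "handle_message", // new user message → RAG pipeline
  ONIMBOTJOINCHAT: "send_welcome",     // bot joined a conversation
  ONAPPINSTALL: "register_bot",        // app installed → register the bot
  ONIMBOTDELETE: "cleanup",            // bot removed → tidy up state
};

const handler = routes[event] ?? "ignore";
return [{ json: { handler, event, data } }];
```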
by Joseph LePage
## Automate Audio Transcription, AI Summarization, and Google Drive Storage

**Who is this for?**
Content teams, researchers, and administrators who need to automatically process voice memos, meeting recordings, or interview audio into structured, searchable documents.

**What problem does this solve?**
Eliminates manual transcription work by automatically converting audio files into organized text documents with AI analysis, while maintaining human oversight through approval workflows.

### What this workflow does

**Smart Audio Processing**
- Triggers when new .m4a files appear in Google Drive
- Uses OpenAI's Whisper for accurate transcription
- Implements dual-format reporting (JSON + Markdown)

**Human Oversight (optional)**
- Requires email approval before processing
- 45-minute response window with escalation options

**AI-Powered Analysis**
- Generates structured JSON reports with key points & action items, sentiment analysis, and a technical terminology glossary
- Creates Markdown versions for easy reading

**Document Management**
- Stores raw transcripts + reports in Google Drive
- Automatic file naming with timestamps
- Sends completion alerts via Email/Telegram

(Workflow visualization showing the audio file processing path.)

### Setup

**Credentials needed:**
- Google Drive API access
- OpenAI API key (GPT-4o-mini)
- Gmail & Telegram integrations

**Configuration:**
- Set your Google Drive folder ID in 3 nodes
- Update email addresses in the Gmail nodes
- Customize the approval timeout in "Gmail User for Approval"

**Customization points:**
- File extension filters (.m4a)
- AI report templates and prompts
- Notification channels (Email/Telegram)

### How to customize

- **Approval process:** Add SMS/Teams notifications via additional nodes
- **File types:** Modify the filter node for .mp3/.wav support
- **Analysis depth:** Adjust the GPT-4 prompts in the "Summarize to JSON" nodes
- **Storage:** Connect to Notion/Airtable instead of Google Drive
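For reference, here is a minimal sketch of the transcription call outside n8n. The `/v1/audio/transcriptions` endpoint and the `whisper-1` model are OpenAI's documented API; the file path and env variable name are illustrative:

```javascript
// Plain Node sketch (Node 18+, ESM): transcribe an audio file with Whisper.
// Endpoint and model are OpenAI's documented API; the path is illustrative.
import { readFile } from "node:fs/promises";

const audio = await readFile("./voice-memo.m4a");

const form = new FormData();
form.append("file", new Blob([audio], { type: "audio/m4a" }), "voice-memo.m4a");
form.append("model", "whisper-1");

const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
  method: "POST",
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  body: form,
});

const { text } = await res.json();
console.log(text); // raw transcript, ready for the summarization prompt
```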