by Darsheel
This n8n workflow acts as an AI-powered Inbox Assistant that automatically summarizes and classifies Gmail emails, prioritizes important messages, and sends a daily digest to Slack. It's ideal for startup founders and small teams juggling investor intros, customer leads, and support queries from a busy Gmail inbox.

Each email is processed using ChatGPT to generate a concise summary, classify the message (e.g., Support, Investor, Spam), and determine its urgency. High- and medium-priority messages are forwarded to Slack instantly; lower-priority emails are logged to Google Sheets for review. A daily 7 PM digest summarizes the day's most important messages.

💡 Use Cases

- Preventing missed investor or lead emails
- Lightweight CRM alternative using Google Sheets
- Slack summaries of critical Gmail activity

🔧 How It Works

1. Gmail node fetches new messages
2. ChatGPT summarizes each one and extracts urgency + type
3. High/medium urgency → sent to Slack + labeled in Gmail
4. Low urgency → logged in Google Sheets
5. Cron node triggers a daily 7 PM Slack summary

✅ Requirements

- OpenAI API key (GPT-4 or GPT-4o recommended)
- Gmail access with read and label permissions
- Slack Bot Token or Webhook URL
- Google Sheets integration (optional)

🛠 Customization Ideas

- Replace Slack with Telegram or WhatsApp
- Route investor leads to Airtable or Notion
- Add multi-language support in the ChatGPT prompt
- Create weekly summaries via email
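A minimal sketch of the urgency-routing decision described above, as it might look inside an n8n Code node. The field names (`urgency`) are assumptions for illustration, not the template's exact schema:

```javascript
// Sketch of the routing logic: high/medium urgency goes to Slack,
// everything else is logged to Google Sheets. Field names are assumed.
function routeEmail(email) {
  const urgency = (email.urgency || '').toLowerCase();
  if (urgency === 'high' || urgency === 'medium') {
    return 'slack';  // forward to Slack immediately
  }
  return 'sheets';   // log lower-priority mail for later review
}

console.log(routeEmail({ subject: 'Intro: Seed investor', urgency: 'High' })); // slack
console.log(routeEmail({ subject: 'Newsletter', urgency: 'Low' }));            // sheets
```

In the actual workflow this branch would be a Switch or IF node keyed on ChatGPT's classification output.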
by David Harvey
🚨 Emergency Alerts Reporter to iMessage

This n8n template fetches real-time emergency incident alerts from PulsePoint for a specific agency and delivers them directly to any phone number via iMessage using the Blooio API. It's designed to keep users informed with clear, AI-summarized reports of emergency activity near them, automatically and reliably.

Use cases are powerful and immediate:

- Get real-time fire/medical alerts for your neighborhood.
- Use it for family, local safety groups, or even emergency response teams.
- Convert technical dispatch data into readable updates with emojis and plain English.

🧠 Good to Know

- You'll need a PulsePoint agency ID (see instructions below).
- iMessages are sent using Blooio's API (which supports Apple's iMessage and fallback RCS/SMS).
- Messages are AI-enhanced using OpenAI's o4-mini model to summarize incident reports with context and urgency.
- The workflow runs every hour, but this can be configured to match your needs.
- Each report is sent only once, thanks to persistent tracking of seen incident IDs in workflow static memory.

⚙️ How it Works

1. Trigger: A Schedule Trigger (every hour) or manual start kicks off the flow.
2. Get Alerts: A code node fetches the latest PulsePoint incidents for a specified agency and decrypts the data.
3. Filter New Incidents: Previously seen incident IDs are stored to prevent duplicate alerts.
4. Merge Incidents: All new incident details are merged into a single payload.
5. Condition Check: If there are no new incidents, nothing is sent.
6. AI Summary: The incident data is passed to an AI agent for summarization with human-friendly emojis and formatting.
7. Send Message: The final summary is sent via Blooio's API to your phone using iMessage.

📝 How to Use

1. Get Your PulsePoint Agency ID: Visit https://web.pulsepoint.org. Find your agency by location or name. Inspect the API call or browser network log to get the agencyid (e.g. 19100 from a URL like ?agencyid=19100).
2. Set Up Blooio for Messaging: Sign up at https://blooio.com. Go to your account and retrieve your Bearer API key (pricing details are available on their pricing page). Add your key to the HTTP Request node as a Bearer token.
3. OpenAI API: Create or use an existing OpenAI account. Use the o4-mini model for efficient, readable summaries. Get your OpenAI API key from https://platform.openai.com/account/api-keys.
4. Add Your Phone Number: Replace +1111112222 with your actual number (international format). You can also modify the message content or prepend special tags/emojis.

✅ Requirements

- **PulsePoint agency ID** – see usage instructions above
- **OpenAI API key**
- **Blooio account & Bearer token**
- **Phone number** for iMessage delivery

🔧 Customizing This Workflow

- **Change the schedule** to get alerts more or less frequently
- **Add filters** to only get alerts for specific incident types (e.g. fires, traffic accidents)
- **Send to groups**: Expand to send alerts to multiple recipients, or use Slack instead of iMessage
- **Use different AI prompts** to get detailed, humorous, or abbreviated alerts depending on your audience

With just a few credentials and a phone number, you'll have real-time incident alerts with human-friendly summaries at your fingertips. 🛠️ Stay informed. Stay safe.
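The duplicate-prevention step (tracking seen incident IDs in workflow static memory) can be sketched like this. The incident field name (`ID`) and static-data key are illustrative assumptions:

```javascript
// Sketch of "Filter New Incidents": keep only incidents whose IDs we haven't
// seen before, and persist the updated ID list back to static storage.
// In n8n this storage would be $getWorkflowStaticData('global').
function filterNewIncidents(staticData, incidents) {
  const seen = new Set(staticData.seenIds || []);
  const fresh = incidents.filter((incident) => !seen.has(incident.ID));
  fresh.forEach((incident) => seen.add(incident.ID));
  staticData.seenIds = [...seen]; // persist for the next hourly run
  return fresh;
}

const store = {};
console.log(filterNewIncidents(store, [{ ID: 'a1' }, { ID: 'b2' }]).length); // 2
console.log(filterNewIncidents(store, [{ ID: 'a1' }, { ID: 'c3' }]).length); // 1
```

Because the ID set persists between runs, each incident triggers at most one iMessage.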
by Aitor | 1Node
Template Description

This template creates a powerful Retrieval-Augmented Generation (RAG) AI agent workflow in n8n. It monitors a specified Google Drive folder for new PDF files, extracts their content, generates vector embeddings using Cohere, and stores these embeddings in a Milvus vector database. It then enables a RAG agent that retrieves relevant information from the Milvus database based on user queries and generates responses using OpenAI, enhanced by the retrieved context.

Functionality

The workflow automates the process of ingesting documents into a vector database for use with a RAG system.

1. Watch New Files: Triggers when a new file (specifically targeting PDFs) is added to a designated Google Drive folder.
2. Download New: Downloads the newly added file from Google Drive.
3. Extract from File: Extracts text content from the downloaded PDF file.
4. Default Data Loader / Set Chunks: Processes the extracted text, splitting it into manageable chunks for embedding.
5. Embeddings Cohere: Generates vector embeddings for each text chunk using the Cohere API.
6. Insert into Milvus: Inserts the generated vector embeddings and associated metadata into a Milvus vector database.
7. When chat message received: Adapt the trigger tool to fit your needs.
8. RAG Agent: Orchestrates the RAG process.
9. Retrieve from Milvus: Queries the Milvus database with the user's chat query to find the most relevant chunks.
10. Memory: Manages conversation history for the RAG agent to optimize cost and response speed.
11. OpenAI / Cohere embeddings: Uses ChatGPT-4o for text generation.

Requirements

To use this template, you will need:

- An n8n instance (cloud or self-hosted).
- Access to a Google Drive account to monitor a folder.
- A Milvus instance, or access to a Milvus cloud service like Zilliz.
- A Cohere API key for generating embeddings.
- An OpenAI API key for the RAG agent's text generation.

Usage

1. Set up the required credentials in n8n for Google Drive, Milvus, Cohere, and OpenAI.
2. Configure the "Watch New Files" node to point to the Google Drive folder you want to monitor for PDFs.
3. Ensure your Milvus instance is running and the target cluster is set up correctly.
4. Activate the workflow.
5. Add PDF files to the monitored Google Drive folder. The workflow will automatically process them and insert their embeddings into Milvus.
6. Interact with the RAG agent. The agent will use the data in Milvus to provide context-aware answers.

Benefits

- Automates document ingestion for RAG applications.
- Leverages Milvus for high-performance vector storage and search.
- Uses Cohere for generating high-quality text embeddings.
- Enables building a context-aware AI agent using your own documents.

Suggested improvements

- **Support for more file types**: Extend the "Watch New Files" node and subsequent extraction steps to handle various document types (e.g., .docx, .txt, .csv, web pages) in addition to PDFs.
- **Error handling and notifications**: Implement robust error handling for each step of the workflow (e.g., failed downloads, extraction errors, Milvus insertion failures) and add notification mechanisms (e.g., email, Slack) to alert the user.

Get in touch with us

Contact us at https://1node.ai
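The "Set Chunks" step splits extracted text into overlapping pieces before embedding. A minimal sketch, assuming fixed-size character chunking; the template's actual splitter settings (chunk size, overlap) may differ:

```javascript
// Illustrative fixed-size chunking with overlap, as a text splitter
// (like n8n's Default Data Loader) would perform before embedding.
// The 500/50 defaults here are assumptions, not the template's settings.
function chunkText(text, size = 500, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

const chunks = chunkText('x'.repeat(1200), 500, 50);
console.log(chunks.length); // 3
```

Each chunk would then be sent to Cohere for embedding and stored in Milvus alongside its source metadata.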
by Dataki
This workflow serves as a solid foundation when you need an AI Agent to return output in a specific JSON schema, without relying on the often-unreliable Structured Output Parser.

What It Does

The example workflow takes a simple input (like a food item) and expects a JSON-formatted output containing its nutritional values.

Why Use This Instead of the Structured Output Parser?

The built-in Structured Output Parser node is known to be unreliable when working with AI Agents. While the n8n documentation recommends using a "Basic LLM Chain" followed by a Structured Output Parser, this alternative workflow completely avoids the Structured Output Parser node. Instead, it implements a custom loop that manually validates the AI Agent's output.

This method has proven especially reliable with OpenAI's gpt-4.1 series (gpt-4.1, gpt-4.1-mini, gpt-4.1-nano), which tend to produce correctly structured JSON on the first try, as long as the System Prompt is well defined. In this template, gpt-4.1-nano is set by default.

How It Works

Instead of using the Structured Output Parser, this workflow loops the AI Agent through a manual schema validation process:

1. A custom schema check is performed after the AI Agent responds.
2. A runIndex counter tracks the number of retries.
3. A Switch node routes the result:
   - If the output does not match the expected schema, it routes back to the AI Agent with an updated prompt asking it to return the correct format. The process allows up to 4 retries to avoid infinite loops.
   - If the output does match the schema, it continues to a Set node that serves as the chat response (you can customize this part to fit your use case).

This approach ensures schema consistency, offers flexibility, and avoids the brittleness of the default parser.
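The manual schema check can be sketched as below. The nutrition field names are illustrative (the template's exact schema may differ); the point is that invalid output returns a reason the retry prompt can feed back to the agent:

```javascript
// Sketch of the custom schema check run after each AI Agent response.
// Required keys are assumed for a nutritional-values schema.
function validateOutput(raw) {
  let parsed;
  try {
    parsed = typeof raw === 'string' ? JSON.parse(raw) : raw;
  } catch {
    return { valid: false, reason: 'not valid JSON' };
  }
  const required = ['calories', 'protein', 'carbs', 'fat'];
  const missing = required.filter((key) => typeof parsed[key] !== 'number');
  return missing.length
    ? { valid: false, reason: `missing or non-numeric: ${missing.join(', ')}` } // fed back into the retry prompt
    : { valid: true, data: parsed };
}

console.log(validateOutput('{"calories":95,"protein":0.5,"carbs":25,"fat":0.3}').valid); // true
console.log(validateOutput('not json').valid); // false
```

In the workflow, the Switch node inspects this result: `valid: false` loops back to the agent (up to 4 times, tracked via runIndex), `valid: true` proceeds to the Set node.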
by Jesse White
Automate High-Quality Voice with Google Text-to-Speech & n8n

Effortlessly convert any text into stunningly realistic, high-quality audio with this powerful n8n workflow. Leveraging Google's advanced Text-to-Speech (TTS) AI, this template provides a complete, end-to-end solution for generating, storing, and tracking voiceovers automatically.

Whether you're a content creator, marketer, or developer, this workflow saves you countless hours by transforming your text-based scripts into ready-to-use audio files. The entire process is initiated from a simple form, making it accessible to users of all technical levels.

Features & Benefits

- 🗣️ Studio-Quality Voices: Leverage Google's cutting-edge AI to produce natural and expressive speech in a wide variety of voices and languages.
- 🚀 Fully Automated Pipeline: From text submission to final file storage, every step is handled automatically. Simply input your script and let the workflow do the rest.
- ☁️ Seamless Cloud Integration: Automatically uploads generated audio files to Google Drive for easy access and sharing.
- 📊 Organized Asset Management: Logs every generated audio file in an Airtable base, complete with the original script, a direct link to the file, and its duration.
- ⚙️ Simple & Customizable: The workflow is ready to use out of the box but can be easily customized. Change the trigger, add notification steps, or integrate it with other services in your stack.

Perfect For a Variety of Use Cases

- 🎬 Content Creators: Generate consistent voiceovers for YouTube videos, podcasts, and social media content without needing a microphone.
- 📈 Marketers: Create professional-sounding audio for advertisements, product demos, and corporate presentations quickly and efficiently.
- 🎓 Educators: Develop accessible e-learning materials, audiobooks, and language lessons with clear, high-quality narration.
- 💻 Developers: Integrate dynamic voice generation into applications, build interactive voice response (IVR) systems, or provide audio feedback for user actions.

How The Workflow Operates

1. Initiate with a Form: The process begins when you submit a script, a desired voice, and a language through a simple n8n Form Trigger.
2. Synthesize Speech: The workflow sends the text to Google's Text-to-Speech API, which generates the audio and returns it as a base64-encoded file.
3. Process and Upload: The data is converted into a binary audio file and uploaded directly to a specified folder in your Google Drive.
4. Enrich Metadata: The workflow then retrieves the audio file's duration using the fal.ai ffmpeg API, adding valuable metadata.
5. Log Everything: Finally, it creates a new record in your Airtable base, storing the asset name, description (your script), content type, file URLs from Google Drive, and the audio duration for perfect organization.

What You'll Need

To use this workflow, you will need active accounts for the following services:

- **Google Cloud OAuth2 client credentials**, with the Text-to-Speech API enabled
- **Google Drive** for audio file storage
- **Airtable** for logging and asset management
- **fal.ai** for the ffmpeg API used to get audio duration
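The base64-to-binary conversion in the "Process and Upload" step boils down to a single decode. A sketch, assuming the API response field is named `audioContent` (Google TTS's documented field name) and using a stand-in payload:

```javascript
// Sketch of converting the TTS response's base64 audio into a binary buffer
// ready for upload. The payload here is a stand-in for real MP3 bytes.
const response = { audioContent: Buffer.from('fake-mp3-bytes').toString('base64') };

// Decode base64 into a Buffer; n8n would wrap this as binary item data
// before handing it to the Google Drive upload node.
const audioBuffer = Buffer.from(response.audioContent, 'base64');
console.log(audioBuffer.toString()); // fake-mp3-bytes
```

In n8n this is typically done with a "Convert to File" (Move Binary Data) node or a short Code node like the above.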
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically performs weekly keyword research and competitor analysis to discover trending keywords in your industry. It saves you time by eliminating the need to manually research keywords and provides a constantly updated database of trending search terms and opportunities.

Overview

This workflow automatically researches trending keywords for any specified topic or industry using AI-powered search capabilities. It runs weekly to gather fresh keyword data, analyzes search trends, and saves the results to Google Sheets for easy access and analysis.

Tools Used

- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For accessing search engines and keyword data sources
- **OpenAI**: AI agent for intelligent keyword research and analysis
- **Google Sheets**: For storing and organizing keyword research data

How to Install

1. Import the workflow: Download the .json file and import it into your n8n instance
2. Configure Bright Data: Add your Bright Data credentials to the MCP Client node
3. Set up OpenAI: Configure your OpenAI API credentials
4. Configure Google Sheets: Connect your Google Sheets account and set up your keyword tracking spreadsheet
5. Customize: Define your target topics or competitors for keyword research

Use Cases

- **SEO teams**: Discover new keyword opportunities and track trending search terms
- **Content marketing**: Find trending topics for content creation and strategy
- **PPC teams**: Identify new keywords for paid advertising campaigns
- **Competitive analysis**: Monitor competitor keyword strategies and market trends

Connect with Me

- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #keywordresearch #seo #brightdata #webscraping #competitoranalysis #contentmarketing #n8nworkflow #workflow #nocode #seoresearch #keywordmonitoring #searchtrends #digitalmarketing #keywordtracking #contentautomation #marketresearch #trendingkeywords #keywordanalysis #seoautomation #keyworddiscovery #searchmarketing #keyworddata #contentplanning #seotools #keywordscraping #searchinsights #markettrends #keywordstrategy
by Sunny Thaper
Workflow Overview

This n8n workflow template takes a US phone number as input, validates it, and returns it in multiple standard formats, including handling extensions. It's designed to streamline the process of standardizing phone number data within your automations.

How it Works

1. Input: Accepts a phone number string in various common formats (e.g., (555) 123-4567, 555.123.4567, +15551234567, 5551234567x890).
2. Formatting Removal: Strips all non-numeric characters to isolate the core number and any potential extension.
3. Validation:
   - **Country code check**: Verifies that the number starts with the US country code (+1 or 1), or assumes US if no country code is present and the length is correct.
   - **Length check**: Ensures the main number component consists of exactly 10 digits after stripping formatting and the country code.
4. Output Generation (if valid): If the number passes validation, the workflow outputs it in several standardized formats:
   - **Number only**: 5551234567
   - **E.164 standard**: +15551234567
   - **National standard**: (555) 123-4567
   - **Full national standard**: 1 (555) 123-4567
   - **International standard**: 00-1-555-123-4567
5. Extension Handling: If an extension is detected in the input, it is separated and provided as:
   - **Extension (number)**: 890
   - **Extension (string)**: "890"

Use Cases

- Cleaning and standardizing phone number data in CRM systems.
- Formatting numbers before sending SMS messages via APIs.
- Validating user input from forms.
- Ensuring consistent phone number representation across different applications.
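The validation and formatting steps above can be sketched as a single function. Treating "x" as the extension separator is an assumption based on the 5551234567x890 example:

```javascript
// Sketch of the phone validation/formatting logic. The "x" extension
// separator is assumed from the input examples; the template's own
// parsing rules may be broader.
function formatUSPhone(input) {
  const [numPart, extPart] = input.toLowerCase().split('x');
  const digits = numPart.replace(/\D/g, ''); // strip all non-numeric characters
  // Drop a leading US country code ("1") if present
  const core = digits.length === 11 && digits.startsWith('1') ? digits.slice(1) : digits;
  if (core.length !== 10) return { valid: false }; // length check failed
  const [a, b, c] = [core.slice(0, 3), core.slice(3, 6), core.slice(6)];
  return {
    valid: true,
    numberOnly: core,
    e164: `+1${core}`,
    national: `(${a}) ${b}-${c}`,
    fullNational: `1 (${a}) ${b}-${c}`,
    international: `00-1-${a}-${b}-${c}`,
    extension: extPart ? extPart.replace(/\D/g, '') : null,
  };
}

console.log(formatUSPhone('555.123.4567x890').e164);   // +15551234567
console.log(formatUSPhone('(555) 123-4567').national); // (555) 123-4567
```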
by Vlad Temian
Description

This workflow automates a video content pipeline that generates creative Instagram Reel videos using AI. It combines OpenAI's GPT-4o-mini for idea generation with Sisif.ai's text-to-video technology to produce engaging short-form content automatically.

Perfect for: content creators, social media managers, marketing teams, and anyone who wants to maintain a consistent flow of AI-generated video content without manual intervention.

Prerequisites

- **Sisif.ai account**: Sign up at sisif.ai and get your API token from https://sisif.ai/users/api-keys/
- **OpenAI account**: Get your API key from the OpenAI platform
- **n8n instance**: Self-hosted or cloud

Step-by-step setup

1. Import the workflow in n8n.
2. Create OpenAI API credentials.
3. Create Sisif.ai API credentials.
4. Add the OpenAI API & Sisif.ai API credentials in n8n.
5. Open the blue sticky note → edit topic, style, duration, resolution.
6. Enable the Cron trigger (defaults to every 6 hours).
7. Run once to test.
8. Activate when ready.

How it Works

The workflow operates on a scheduled cycle, generating fresh video content every 6 hours:

1. 🤖 AI Idea Generation: OpenAI's GPT-4o-mini acts as a creative video strategist, generating unique, trend-aware video concepts optimized for Instagram and social media
2. 🎬 Video Creation: Sisif.ai transforms each creative prompt into a high-quality 5-second video at 360x640 resolution
3. ⏱️ Smart Monitoring: The workflow monitors video generation progress, waiting for completion before proceeding
4. 📊 Data Processing: Final video data is structured and prepared for further use or storage

Key Features

⚡ Fully Automated

- Runs every 6 hours without manual intervention
- Generates 4 unique videos daily (28 videos per week)
- Self-monitoring with automatic retry logic

🎯 Optimized for Social Media

- Instagram 360x640 resolution
- 5-second duration for maximum engagement
- Trend-aware content generation
- Action-packed, visual storytelling

🔧 Smart Architecture

- Simple HTTP requests for reliable operation
- Bearer token authentication for secure API access
- Automatic status checking and waiting logic
- Error handling and retry mechanisms
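The "Smart Monitoring" step (automatic status checking and waiting) follows a standard polling pattern. A sketch with a simulated status checker; the status values and retry limits here are assumptions, not Sisif.ai's documented API:

```javascript
// Sketch of polling a video-generation job until completion.
// Status strings ("processing"/"completed"/"failed") and limits are assumed.
async function waitForVideo(checkStatus, { maxAttempts = 10, delayMs = 1000 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const job = await checkStatus(); // in n8n: an HTTP Request to the status endpoint
    if (job.status === 'completed') return job;
    if (job.status === 'failed') throw new Error('video generation failed');
    await new Promise((resolve) => setTimeout(resolve, delayMs)); // Wait node equivalent
  }
  throw new Error('timed out waiting for video');
}

// Simulated checker: reports "processing" twice, then "completed".
let calls = 0;
const fakeCheck = async () =>
  ++calls < 3 ? { status: 'processing' } : { status: 'completed' };
waitForVideo(fakeCheck, { delayMs: 10 }).then((job) => console.log(job.status)); // completed
```

In the workflow this loop is expressed with Wait and IF nodes rather than a single Code node, but the control flow is the same.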
by Tom
This is the workflow powering the n8n demo shown at StrapiConf 2022. The workflow searches matching Tweets every 30 minutes using the Interval node and listens to Form submissions using the Webhook node. Sentiment analysis is handled by Google using the Google Cloud Natural Language node before the result is stored in Strapi using the Strapi node. (These were originally two separate workflows that have been combined into one to simplify sharing.)
by Eric Mooney
Setlist Manager

This workflow takes a Google spreadsheet called 'Setlist_Manager' with 'Artist' and 'SongTitle' entries, gets lyrics for each song, and creates a playlist for that set of songs.

1. Create a Spotify playlist (named 'Setlist - [date of today]').
2. Create the Google Doc that will store the lyrics (named 'Setlist - [date of today]').
3. Get the rows of songs from 'Setlist_Manager'.
4. Use AI to verify the artist name and song title.
5. Get the lyrics to the song.
6. Append the lyrics to the Google Doc.
7. Search for the song in Spotify.
8. Add that song to the Spotify playlist.
9. Go to band practice and be prepared! =)
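The 'Setlist - [date of today]' naming used for both the playlist and the Google Doc can be produced with a one-line expression. The ISO date format here is an assumption; the template may format the date differently:

```javascript
// Sketch of the date-stamped naming convention for the playlist and doc.
// ISO (YYYY-MM-DD) formatting is assumed for illustration.
const playlistName = `Setlist - ${new Date().toISOString().slice(0, 10)}`;
console.log(playlistName); // e.g. Setlist - 2025-05-26
```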
by Sk developer
TikTok Transcript to OpenAI GPT-4

This automation workflow provides a seamless, efficient, AI-powered solution for extracting, processing, and storing TikTok video subtitles. By combining the TikTok Transcript API, the OpenAI GPT-4 API, and Google Docs, it turns transcription and text analysis into a smooth, automated experience. It's perfect for content creators, marketers, and businesses who need to process large volumes of TikTok videos and want to leverage AI for language processing and summarization.

How It Works

1. User Form Submission: The process begins when a user submits a TikTok video URL and specifies the language in which they want the processed content. The form collects the necessary parameters (video link and language preference) and triggers the workflow.
2. Fetching Subtitles from TikTok: The workflow uses the TikTok Transcript API to retrieve subtitles from the specified video. The API extracts all textual data associated with the video (spoken words, captions, etc.) in real time.
3. Advanced Processing with OpenAI GPT-4: The fetched subtitles are sent to the OpenAI GPT-4 API, which can process the raw transcript in several ways:
   - Translation: If the subtitles are in a different language, GPT-4 can translate them into the desired language.
   - Summarization: GPT-4 can condense long subtitles into concise points, saving time and effort.
   - Text Interpretation: GPT-4 can generate insights, analyze emotions, or interpret context, which is ideal for detailed content analysis.
4. Storing the Results in Google Docs: The final output (translated, summarized, or interpreted) is automatically saved to a Google Doc, an easily editable and shareable format that anyone with permission can access, which makes it well suited to team collaboration and content management.
5. Workflow Automation: A wait step ensures all data is fetched and processed before it is stored in Google Docs, so the entire pipeline runs without manual intervention.

Key Features and Benefits

- **Efficient subtitle extraction**: Automatically fetch TikTok video subtitles using the TikTok Transcript API, eliminating manual transcription.
- **AI-driven text processing**: GPT-4 can translate, summarize, or analyze the subtitles for advanced insights, making the workflow far more than a transcription tool.
- **Seamless multi-language support**: GPT-4 handles multiple languages, translating or summarizing content based on the user's input, which makes the workflow versatile for global content creators and marketers.
- **Google Docs integration**: Processed results are saved directly to Google Docs for easy access, editing, and sharing, keeping all processed data organized and ready for use.
- **Time & effort savings**: The entire process is automated from start to finish, so you can focus on creating content while the workflow handles the repetitive tasks.
- **Advanced text insights**: Beyond the raw transcript, you get summaries, translations, and interpretations that enhance your content's value.

Challenges Solved

- **Manual transcription**: Eliminated by automatically fetching subtitles via the TikTok Transcript API.
- **Language barriers**: GPT-4 can translate TikTok video subtitles into any language, making content accessible to a global audience.
- **Content management**: Storing processed content in Google Docs provides a central hub for collaboration on transcriptions and analysis.
- **Productivity**: Every step is automated, from fetching subtitles to analyzing and storing them, freeing time for higher-value tasks like content creation, strategy planning, or marketing.

APIs Integrated

- **TikTok Transcript API**: Retrieves subtitles directly from TikTok videos, providing the base for further processing.
- **OpenAI GPT-4 API**: Handles advanced text processing, including translation, summarization, and analysis of the subtitles.
- **Google Docs API**: Stores processed content in a clean, accessible format for viewing and collaboration.

Use Cases

- **Content creation**: Automatically process and summarize video subtitles for content creation, marketing, or research.
- **Market research**: Extract and analyze TikTok content to understand audience sentiment, trending topics, and engagement strategies.
- **Education**: Teachers and educators can analyze educational TikTok videos and save the insights in Google Docs for lesson planning.

Conclusion

This TikTok Transcript to OpenAI GPT-4 + Google Docs automation saves time, enhances content processing with AI, and organizes results into easily accessible documents. Whether you're a content creator, researcher, or marketer, it can help you streamline and optimize your content processing tasks.
by Samir Saci
Tags: Marketing, Image Processing, Automation

Context

Hey! I'm Samir, a Data Scientist from Paris and the founder of LogiGreen Consulting. We use AI, automation, and data to support sustainable business practices for small, medium, and large companies. I implemented this workflow to support an event agency in automating image processing, such as background removal using the Photoroom API.

> Automate your photo processing with n8n!

This n8n workflow collects all images in a Google Drive folder shared with multiple photographers. For each image, it calls the Photoroom API:

- A processed image without a background is saved in the subfolder Remove Background
- The original picture is saved in the subfolder Original

This workflow, triggered every morning, processes the backlog of images.

📬 For business inquiries, feel free to connect with me on LinkedIn

Who is this template for?

This workflow is useful for:

- **Digital marketing** teams that use images for content creation
- **Photographers** or **event organisers** who collect large numbers of photos that need processing

What does it do?

This n8n workflow:

- ⏰ Triggers automatically every morning
- 🖼️ Collects the names and IDs of all images in the folder
- 🧹 Sends an HTTP POST request to the Photoroom API to remove the background
- 📄 Stores the processed image and the original image in two separate sub-folders

What do I need to get started?

You'll need:

- A Google Drive account connected to your n8n instance with credentials
- A Photoroom API key, available with a free trial: Photoroom API

Follow the Guide!

Follow the sticky notes inside the workflow or check out my step-by-step tutorial on how to configure and deploy it. 🎥 Watch My Tutorial

This workflow was built using n8n version 1.93.0

Submitted: May 26, 2025
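A hedged sketch of the HTTP POST request the workflow sends per image. The endpoint URL, header, and form-field names below are assumptions; check Photoroom's API documentation for the exact contract before relying on them:

```javascript
// Sketch of building the background-removal request for n8n's HTTP Request
// node. Endpoint, auth header, and form-field names are assumed, not verified.
function buildPhotoroomRequest(apiKey, imageBuffer) {
  return {
    method: 'POST',
    url: 'https://sdk.photoroom.com/v1/segment', // assumed endpoint
    headers: { 'x-api-key': apiKey },            // assumed auth header
    // The image is attached as multipart form data; field name assumed.
    formData: { image_file: imageBuffer },
  };
}

const req = buildPhotoroomRequest('MY_API_KEY', Buffer.from('fake-image-bytes'));
console.log(req.method); // POST
```

The binary response (the processed image) is then written back to the Remove Background subfolder via the Google Drive node.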