by Jay Hartley
**What this workflow does**

- Downloads the daily top podcasts of a selected genre
- Summarizes the content of each podcast in a few paragraphs
- Sends the summaries and a direct link to each podcast in a formatted email

**Setup**

1. Create a free API key on Taddy here: https://taddy.org/signup/developers
2. Enter your user number and API key into the TaddyTopDaily node in the header parameters X-USER-ID and X-API-KEY respectively (see the request sketch at the end of this section).
3. Create access credentials for your Gmail as described here: https://developers.google.com/workspace/guides/create-credentials. Use the credentials from your client_secret.json in the Gmail node.
4. In the Genre node, set the genre of podcasts you want a summary for. Valid values include TECHNOLOGY, NEWS, ARTS, COMEDY, SPORTS, and FICTION; see api.taddy.org for the full list (displayed in the help docs as PODCASTSERIES_TECHNOLOGY, PODCASTSERIES_NEWS, etc.).
5. Enter your email address in the Gmail node.
6. In the Schedule node, change the trigger time to whenever you want to receive the email.

**Test**

Hit Test Workflow and check your email for the results. That's it! Total setup should take less than 5 minutes.
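For reference, the call the TaddyTopDaily node makes looks roughly like this outside n8n. The header names come straight from the setup above; the GraphQL query name and field list are assumptions, so check the schema docs at api.taddy.org before relying on them:

```python
import requests

# Hedged sketch of a Taddy top-charts request. X-USER-ID / X-API-KEY are the
# documented header parameters; the query shape below is illustrative only.
headers = {
    "X-USER-ID": "<your user number>",
    "X-API-KEY": "<your API key>",
    "Content-Type": "application/json",
}
query = """
{
  getTopChartsByGenres(taxonomyType: PODCASTSERIES,
                       genres: [PODCASTSERIES_TECHNOLOGY]) {
    podcastSeries { uuid name rssUrl }
  }
}
"""
resp = requests.post("https://api.taddy.org", json={"query": query},
                     headers=headers, timeout=30)
print(resp.json())
```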
by Bao Duy Nguyen
**Who is this for?**

This template is ideal for DevOps engineers, automation specialists, and n8n users who manage multiple workflows and want a reliable version control system for backups. It's especially useful for teams collaborating via GitHub.

**What problem is this workflow solving?**

Manually backing up n8n workflows to GitHub can be time-consuming and error-prone. This workflow solves that by automating the backup of new and updated n8n workflows, ensuring your GitHub repository always reflects the latest changes.

**What this workflow does**

- Retrieves all workflows from your local n8n instance.
- Decodes their content and compares it with the existing GitHub files.
- Detects newly created or updated workflows.
- Creates a new Git branch and commits the changes.
- Opens a pull request (PR) to the main branch.
- Sends a Slack notification when the PR is created.

The workflow uses the GitHub API together with n8n's Merge, Set, and Slack nodes for full automation (a sketch of the underlying GitHub API calls follows at the end of this section).

**Setup**

- GitHub credentials: add your GitHub API credentials in n8n.
- Slack integration: connect your Slack Bot token if you want PR notifications.
- Repository details: update github_owner, repo_name, and the workflow directory path in the "Define Local Variables" node.
- n8n API key: check this doc.

**How to customize this workflow to your needs**

- Change the workflow directory from workflows/ to a custom path.
- Modify the Slack message or add email notification support.
- Add filters to back up only specific workflows based on naming or tags.
- Adjust branch naming conventions or use different GitHub base branches.

This workflow provides a seamless backup and versioning pipeline, minimizing manual Git interactions and supporting collaborative automation development.
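Under the hood, the branch, commit, and PR steps map onto three GitHub REST calls. This is a minimal sketch with placeholder owner, repo, branch, and token values, not the template's exact node configuration:

```python
import base64
import requests

# Minimal sketch of the branch -> commit -> pull request sequence via the
# GitHub REST API. Owner, repo, branch, file path, and token are placeholders;
# the workflow performs the equivalent calls through n8n nodes.
API = "https://api.github.com"
OWNER, REPO = "your-org", "n8n-backups"
HEADERS = {
    "Authorization": "Bearer <github token>",
    "Accept": "application/vnd.github+json",
}
BRANCH = "backup/2024-01-01"

# 1. Create the backup branch from main's current commit SHA.
main = requests.get(f"{API}/repos/{OWNER}/{REPO}/git/ref/heads/main",
                    headers=HEADERS).json()
requests.post(f"{API}/repos/{OWNER}/{REPO}/git/refs", headers=HEADERS,
              json={"ref": f"refs/heads/{BRANCH}", "sha": main["object"]["sha"]})

# 2. Commit a workflow file (content must be base64-encoded; updating an
#    existing file additionally requires its current blob "sha").
workflow_json = '{"name": "My workflow", "nodes": []}'
requests.put(f"{API}/repos/{OWNER}/{REPO}/contents/workflows/my-workflow.json",
             headers=HEADERS,
             json={"message": "backup: my-workflow",
                   "content": base64.b64encode(workflow_json.encode()).decode(),
                   "branch": BRANCH})

# 3. Open the PR from the backup branch into main.
requests.post(f"{API}/repos/{OWNER}/{REPO}/pulls", headers=HEADERS,
              json={"title": "n8n workflow backup",
                    "head": BRANCH, "base": "main"})
```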
by felipe biava cataneo
**What this template does**

This template serves as a chatbot that lets you ask questions about the content of a PDF directly in Telegram. It checks whether incoming Telegram messages contain a document. If they do, it stores the PDF in a Pinecone vector store. If there is no document, it searches the vector store for relevant information and tries to answer your question.

**Setup**

1. Open the Telegram app and search for the BotFather user (@BotFather).
2. Start a chat with the BotFather.
3. Type /newbot to create a new bot.
4. Follow the prompts to name your bot and get a unique API token.
5. Save your access token and username.

Once your bot is set up, you can send it a PDF and then ask questions about its content.

**How to adjust it to your needs**

- Exchange the Groq chat model with any model you like.
- Exchange Pinecone with any other vector store tool you like (e.g., Supabase, Postgres, or Qdrant).

#Telegram, #Pinecone, #OpenAI, #Groq
by Joey D’Anna
This workflow is a building block designed to be called from other workflows via an Execute Workflow node. When called from another workflow with a JSON input containing a "pulse" field holding the ID of the monday.com item to pull, this workflow returns:

- The item's name and ID
- All column data, indexable by column name
- All column data, indexable by the column's ID string
- All board relation columns, with their data and column values
- All subitems, with their data and column values

**Prerequisites**

- A monday.com account and credential
- A workflow that needs to get detailed data from a monday.com row
- The pulse ID of the monday.com row to retrieve data from

**Setup**

1. Import the workflow.
2. Configure all monday.com nodes with your credentials and save the workflow.
3. Copy the workflow ID from its URL.
4. In a different workflow, add an Edit Fields node that outputs the field "pulse" with the monday.com item you want to retrieve (the expected input shape is shown at the end of this section).
5. Feed the Edit Fields node into an Execute Workflow node, and paste the workflow ID from above into it.

The "pulse" field tells the workflow which pulse to retrieve; it can be populated by an expression in your workflow. There is an example of the Edit Fields and Execute Workflow nodes in the template.
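The JSON handed to the Execute Workflow node needs nothing beyond the single field; the ID below is a placeholder for a real monday.com pulse ID:

```json
{
  "pulse": 1234567890
}
```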
by Matthieu
**Search LinkedIn companies, score with AI, and add them to a Google Sheet CRM**

Setup video: https://youtube.com/watch?v=m904RNxtF0w&t

**Who is this for?**

This template is ideal for sales teams, business development professionals, and marketers looking to build a targeted prospect database with automatic qualification. Perfect for agencies, consultants, and B2B companies wanting to identify and prioritize the most promising potential clients.

**What problem does this workflow solve?**

Manually researching companies on LinkedIn, evaluating their fit for your services, and tracking them in your CRM is time-consuming and subjective. This automation streamlines lead generation by automatically finding, scoring, and importing qualified prospects into your database.

**What this workflow does**

This workflow automatically:

- Searches for companies on LinkedIn based on your criteria
- Retrieves detailed information about each company
- Filters them based on quality indicators
- Uses AI to score how well they match your ideal customer profile (see the scoring sketch at the end of this section)
- Adds them to your Google Sheet CRM while preventing duplicates

**Setup**

1. Create a Ghost Genius API account and get your API key.
2. Configure the HTTP Request nodes with Header Auth credentials.
3. Create a copy of the provided Google Sheet template.
4. Set up your Google Sheets and OpenAI credentials following the n8n documentation.
5. Customize the "Set Variables" node to match your target audience and scoring criteria.

**How to customize this workflow**

- Modify search parameters to target different industries, locations, or company sizes.
- Adjust the follower count threshold based on your qualification criteria.
- Customize the AI scoring system to align with your specific product or service offering.
- Add notification nodes to alert you when high-scoring companies are identified.
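To illustrate the scoring step, here is a minimal sketch of handing a company record to an OpenAI chat model and asking for a fit score. The prompt, model, and company fields are assumptions; the template's real scoring criteria live in its "Set Variables" node:

```python
from openai import OpenAI

# Hedged sketch of AI lead scoring: pass company attributes to a chat model
# and ask for a 1-10 fit score. All values below are hypothetical.
client = OpenAI(api_key="<openai key>")

company = {"name": "Acme GmbH", "industry": "Logistics",
           "size": "51-200", "followers": 4800}  # hypothetical company record

prompt = (
    "You qualify B2B leads for our agency. Score this company from 1 "
    f"(poor fit) to 10 (ideal fit) and justify briefly:\n{company}"
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; swap for whatever you use
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```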
by Artem Boiko
**How it works**

This template automates the conversion of CAD and BIM files from Revit, AutoCAD, IFC, and MicroStation (e.g., .rvt, .ifc, .dwg, .dgn) into structured Excel databases and lightweight 3D geometry (.dae) files using the DataDrivenConstruction open-source converter.

📦 High-level steps:

1. Set the file path and converter path in the Set node.
2. Trigger the conversion via Execute Command (runs the .exe converter offline; a sketch of the equivalent call follows below).
3. The output includes .xlsx (data) and .dae (3D model) files.
4. Sticky notes include troubleshooting instructions and GitHub repo info.

**Set up steps**

🕒 Setup time: ~10 minutes

You'll need:

- A Windows machine (offline or air-gapped is OK)
- The path to the converter .exe file
- The path to a sample .rvt (or .ifc, .dwg, .dgn) file

🧷 Set the paths in the Set node:

```
path_to_converter = "C:\\...\\RvtExporter.exe"
path_project_file = "C:\\...\\project.rvt"
```

Docs & issues: full Readme on GitHub
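Outside n8n, the Execute Command step amounts to invoking the converter with the project file. The argument order is an assumption here (check the DataDrivenConstruction README for the exact CLI), and the paths are placeholders:

```python
import subprocess

# Hedged sketch of what the Execute Command node runs. Passing the project
# file as the sole argument is an assumption; verify against the converter docs.
path_to_converter = r"C:\Converters\RvtExporter.exe"  # placeholder path
path_project_file = r"C:\Projects\project.rvt"        # placeholder path

result = subprocess.run(
    [path_to_converter, path_project_file],
    capture_output=True, text=True, check=False,
)
print(result.stdout)
# Per the template, a successful run produces .xlsx (data) and .dae (3D model) files.
```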
by Stefan
**Overview**

This comprehensive n8n workflow provides a sophisticated solution for dynamically selecting and using AI models while maintaining GDPR compliance. It leverages Requesty's European-based AI routing service to ensure data privacy and automatically updates the available model options based on real-time API availability.

**Choose Your Integration Approach**

Before diving into the setup, it's crucial to understand that this workflow offers two completely independent AI integration approaches.

**Approach 1: Dynamic HTTP Request Workflow (Advanced)**

Complete infrastructure with dynamic model selection.

What it includes:

- Automatic model discovery from Requesty's API
- Dynamic dropdown updates in web forms
- Model selection persistence in Google Sheets
- Complex workflow orchestration with multiple phases
- Full control over API parameters and response handling

Best for:

- Teams needing multiple AI models for different tasks
- Organizations requiring model usage auditing
- Users who want maximum flexibility and control
- Advanced n8n users comfortable with complex workflows

Setup complexity: High (requires multiple components and configurations)

**Approach 2: Standalone AI Agent (Simple)**

Plug-and-play solution without complexity.

What it includes:

- Direct use of n8n's native OpenAI Chat Model node
- Simple configuration: just set the base URL to https://router.requesty.ai/v1
- Immediate GDPR compliance through European infrastructure
- No model discovery or selection infrastructure needed

Best for:

- Users wanting quick GDPR-compliant AI integration
- Single-model use cases
- Simple chat interfaces
- Users preferring minimal configuration

Setup complexity: Low (5-minute setup)

**Quick Start: Approach 2 (Simple AI Agent)**

If you want to get started quickly with GDPR-compliant AI, follow these steps.

Step 1: Register with Requesty

1. Visit https://www.requesty.ai
2. Complete the registration process
3. Choose "OpenAI-compatible" integration
4. Note your API endpoint: https://router.requesty.ai/v1
5. Create an API key (name it "n8n Integration")

Step 2: Configure n8n

1. Add a new OpenAI credential in n8n
2. Set the base URL to https://router.requesty.ai/v1
3. Enter your Requesty API key
4. Add an OpenAI Chat Model node to your workflow
5. Select your Requesty credential

Step 3: Test

Your AI agent is now ready and GDPR-compliant. All requests will be routed through Requesty's European infrastructure. A minimal client-side equivalent of this setup is sketched below.
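Because Requesty exposes an OpenAI-compatible endpoint, any OpenAI client works once the base URL is switched. This sketch mirrors what the n8n OpenAI Chat Model node does with the credential above; the model identifier is an assumption, so check Requesty's model list for the exact names:

```python
from openai import OpenAI

# Minimal sketch of Approach 2 outside n8n: point an OpenAI-compatible client
# at Requesty's European router. The model id below is assumed, not confirmed.
client = OpenAI(
    base_url="https://router.requesty.ai/v1",
    api_key="<your Requesty API key>",
)

response = client.chat.completions.create(
    model="openai/gpt-4o",  # assumed Requesty model identifier
    messages=[{"role": "user", "content": "Hello from a GDPR-compliant route!"}],
)
print(response.choices[0].message.content)
```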
**Advanced Setup: Approach 1 (Dynamic HTTP Workflow)**

For users who need dynamic model selection and advanced features, follow this comprehensive setup.

Prerequisites:

- n8n instance (self-hosted or cloud)
- Requesty API credentials
- Google Sheets integration
- Basic understanding of n8n workflows

Phase 1: Requesty Account Setup

1.1 Registration Process

1. Navigate to https://www.requesty.ai
2. Sign up with your email address
3. Complete the welcome process

1.2 Integration Configuration

1. Choose the integration type: select "OpenAI-compatible"
2. Note the API endpoint: https://router.requesty.ai/v1
3. Create an API key: provide a descriptive name (e.g., "n8n Dynamic Workflow") and click "Create API Key". Important: save this key securely; you'll need it for the n8n configuration.

Phase 2: Google Sheets Preparation

2.1 Create Storage Sheet

1. Create a new Google Sheet named "AI Model Selections"
2. Add the following column: A1: "Selected Model"
3. Note the Google Sheet ID from the URL

2.2 Configure Google Sheets API

1. Enable the Google Sheets API in the Google Cloud Console
2. Create service account credentials
3. Share your sheet with the service account email
4. Download the credentials JSON file

Phase 3: n8n Workflow Configuration

3.1 Import Workflow

1. Download the workflow JSON file
2. Import it into your n8n instance
3. Review all nodes and connections

3.2 Configure Credentials

Requesty API credentials:

- Go to the n8n Credentials section
- Create a new HTTP Request credential
- Set the authentication type to "Header Auth"
- Header name: "Authorization"
- Header value: "Bearer YOUR_REQUESTY_API_KEY"

Google Sheets credentials:

- Create a new Google Sheets credential
- Upload your service account JSON file
- Test the connection

Google Sheets nodes:

- Update the sheet ID in all Google Sheets nodes
- Verify that the column mappings match your sheet structure

Phase 4: Troubleshooting Guide

Common issues and solutions:

- Models not loading: verify the Requesty API credentials; check network connectivity and the API endpoint URL
- Selection not persisting: verify the Google Sheets credentials and write permissions; check the sheet ID configuration
- Chat not responding: verify the selected model's availability; check the API request formatting and response processing

Debug procedures:

1. Enable debug mode and detailed logging
2. Check node outputs and data flow
3. Validate API calls with external tools
4. Review the n8n execution logs

**Conclusion**

The choice between the approaches depends on your specific requirements:

- **Simple AI Agent**: perfect for straightforward AI integration with minimal setup
- **Dynamic HTTP Workflow**: ideal for complex requirements with multiple models and advanced features
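For completeness, the model-discovery call that powers Approach 1's dynamic dropdown can be reproduced outside n8n. It is assumed here that Requesty follows the OpenAI-compatible convention of listing models at /v1/models:

```python
import requests

# Hedged sketch of the model-discovery request behind the dynamic dropdown.
# The /v1/models route and response shape are assumed OpenAI-compatible.
resp = requests.get(
    "https://router.requesty.ai/v1/models",
    headers={"Authorization": "Bearer <your Requesty API key>"},
    timeout=30,
)
model_ids = [m["id"] for m in resp.json()["data"]]
print(model_ids)  # these ids would feed the form dropdown / Google Sheet
```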
by Belgacem Dhiflaoui
**What Problem Does This Solve? 🛠️**

This workflow automates the process of extracting information from a Google Doc, storing it in a Pinecone vector database, and using it to personalize and send emails based on user input via chat. It eliminates the manual steps of gathering recipient data, writing messages, and dispatching emails, providing a fully automated, intelligent communication system.

Perfect for teams that need to:

- Maintain dynamic contact lists
- Personalize bulk or contextual email outreach
- Use chat interfaces to trigger intelligent email actions

Target audience: sales teams, marketing departments, HR staff, startup founders, or anyone looking to automate AI-powered communication workflows.

**What Does It Do? 🌟**

- Extracts content from a Google Docs document (e.g., a list of contacts or structured notes)
- Splits, embeds, and stores that content in Pinecone for semantic search (see the sketch at the end of this section)
- Listens for incoming chat messages using n8n's chatTrigger
- Uses LangChain agents with OpenAI to search Pinecone for contextually relevant information (e.g., email addresses) and compose personalized emails based on instructions
- Sends emails using the Gmail API, triggered dynamically from the AI output

**Key Features 📋**

- Google Docs integration for live document data
- Embedding & vector search with Pinecone for AI lookups
- Custom LangChain agents with tool-calling logic (search + send)
- Full support for OpenAI models (GPT-4o)
- Personalized email generation with dynamic name and message filling
- Modular design: plug-and-play with other tools like CRMs, Notion, etc.

**Setup Instructions**

Prerequisites:

- **n8n instance:** self-hosted or cloud
- **Google Docs account:** for reading input content
- **Pinecone account:** for storing document data semantically
- **OpenAI account:** for generating embeddings and messages
- **Gmail account:** with Gmail OAuth2 credentials for sending emails

Installation steps 📦:

1. Import the workflow: import the provided JSON files into your n8n instance.
2. Configure credentials: go to n8n > Credentials and set up the Google Docs API, OpenAI API, Pinecone API, and Gmail OAuth2.
3. Set your Pinecone index & namespace: ensure you have a working Pinecone index (e.g., n8ndocs) and namespace (e.g., docsmail).
4. Test the full flow: run the Google Docs → Pinecone embedding workflow to prepare data, send a message to the chatTrigger endpoint (e.g., "Send an offer to User"), and check the execution log to verify correct tool usage and Gmail delivery.

**How It Works 🔍**

1. Data preparation: the Google Doc content is fetched and chunked, OpenAI embeddings are created, and the data is stored in Pinecone under a specific namespace.
2. Chat trigger: a webhook captures the chat input and the LangChain agent interprets the user request. The agent uses two tools: Vectorstore_mails (retrieves relevant emails via Pinecone vector search) and send_mail (uses an internal n8n sub-workflow to send Gmail messages).
3. Mail generation & delivery: the email is personalized using recipient info (name/email from Pinecone), follows a clean, friendly format with a clear subject and closing, and is delivered via the Gmail integration.
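The embed-and-store step looks roughly like this outside n8n, reusing the index and namespace names suggested in the setup (n8ndocs / docsmail). The embedding model and sample chunks are assumptions; the n8n nodes do the equivalent internally:

```python
from openai import OpenAI
from pinecone import Pinecone

# Minimal sketch of chunk embedding + Pinecone upsert, under the assumed
# index/namespace names from the setup. Contacts below are hypothetical.
openai_client = OpenAI(api_key="<openai key>")
pc = Pinecone(api_key="<pinecone key>")
index = pc.Index("n8ndocs")

chunks = [
    "Alice Smith <alice@example.com>: interested in the premium plan",
    "Bob Jones <bob@example.com>: asked for a follow-up in March",
]
embeddings = openai_client.embeddings.create(
    model="text-embedding-3-small",  # assumed embedding model
    input=chunks,
)
index.upsert(
    vectors=[
        {"id": f"chunk-{i}", "values": e.embedding, "metadata": {"text": t}}
        for i, (e, t) in enumerate(zip(embeddings.data, chunks))
    ],
    namespace="docsmail",
)
```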
by Joseph LePage
✍️🌄 **WordPress + AI Content Creator**

This workflow automates the creation and publishing of multi-reading-level content for WordPress blogs. It leverages AI to generate optimized articles, automatically creates featured images, and provides versions of the content at different reading levels (Grades 2, 5, and 9).

**How It Works**

Content Generation & Processing 🎯

- Starts with a manual trigger and a user-defined blog topic
- Uses AI to create a structured blog post with proper HTML formatting
- Separates and validates the title and content components
- Saves a draft version to Google Drive for backup

Multi-Reading-Level Versions 📚

Automatically rewrites the content for different reading levels:

- Grade 9: sophisticated language with appropriate metaphors
- Grade 5: simplified with light humor and age-appropriate examples
- Grade 2: basic language with simple metaphors and child-friendly explanations

WordPress Integration 🌐

- Creates a draft post in WordPress with the Grade 9 version (see the REST sketch at the end of this section)
- Generates a relevant featured image using Pollinations.ai
- Automatically uploads and sets the featured image
- Sends success/error notifications via Telegram

**Setup Steps**

Configure API Credentials 🔑

- Set up the WordPress API connection
- Configure OpenAI API access
- Set up the Google Drive integration
- Add Telegram bot credentials for notifications

Customize Content Parameters ⚙️

- Adjust the reading-level prompts as needed
- Modify the image generation settings
- Set the WordPress post parameters

Test and Deploy 🚀

- Run a test with a sample topic
- Verify all reading-level versions
- Check WordPress draft creation
- Confirm the notification system

This workflow is perfect for content creators who need to maintain a consistent blog presence while catering to different audience reading levels. It's especially useful for educational content, news sites, or any platform that needs to communicate complex topics to diverse audiences.
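The WordPress integration steps (draft post plus featured image) map onto two core REST API endpoints. This is a sketch with placeholder site URL and application-password credentials; the n8n WordPress node wraps the same endpoints:

```python
import requests

# Sketch of draft creation + featured image via the WordPress REST API.
# Site URL, credentials, and content are placeholders.
WP = "https://example.com/wp-json/wp/v2"
AUTH = ("editor", "<application password>")

# 1. Upload the generated image and note its media ID.
with open("featured.png", "rb") as f:
    media = requests.post(
        f"{WP}/media", auth=AUTH,
        files={"file": ("featured.png", f, "image/png")},
    ).json()

# 2. Create the draft post and attach the image as featured media.
post = requests.post(
    f"{WP}/posts", auth=AUTH,
    json={"title": "My AI-generated post",
          "content": "<p>Grade 9 version of the article HTML</p>",
          "status": "draft",
          "featured_media": media["id"]},
).json()
print(post["id"], post["status"])
```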
by Mohan Gopal
This workflow automates the process of reading EDI files generated by Sabre, parsing them using an AI Agent, and producing structured accounting reports such as:

- 📌 Accounts Receivable (AR) Summary
- 📌 Tax and Surcharges Report

It also uses Retrieval-Augmented Generation (RAG) to vectorize the Sabre Interface User Record (IUR), a 154-page technical document, so that the AI agent can reference it when clarification is required while generating reports.

⚙️ **Tools & Integrations Used**

| Component | Tool/Service | Purpose |
| --- | --- | --- |
| Workflow Engine | n8n | Automation & orchestration |
| LLM Model | OpenAI GPT-4 / Chat Model | Natural language understanding and parsing |
| Embeddings Model | OpenAI Embeddings | Convert text into semantic vector format |
| Vector Database | Pinecone | Store and retrieve document chunks semantically |
| Storage | Google Drive | Source of raw EDI text files and PDF documentation |
| DataLoader + Splitter | n8n Node + Recursive Splitter | Loads and prepares documents for embedding |
| AI Agents | n8n AI Agent Node | Runs context-aware prompts and parses reports |

🧱 **Workflow Breakdown**

🧠 1. Vectorizing the Sabre IUR Document (RAG Setup)

📘 Objective: enable the AI Agent to refer to the IUR document (154 pages) for detailed explanations of EDI terms, formats, and rules.

Flow steps:

1. Google Drive Search + Download: find and pull the IUR PDF file.
2. Default Data Loader: load the file and preprocess it for semantic splitting.
3. Recursive Character Splitter: break large pages into meaningful chunks (a splitter sketch follows at the end of this section).
4. OpenAI Embeddings: vectorize each chunk.
5. Pinecone Vector Store: save into a Pinecone namespace for future retrieval.

✅ Result: the IUR is now searchable via semantic queries from the AI Agent.

📁 2. Reading and Extracting Data from EDI Files

📘 Objective: parse raw EDI files for financial records and summaries.

Flow steps:

1. Trigger: manual or scheduled execution of the workflow.
2. Google Drive Search: finds all new .edi or .txt files.
3. Download File Contents: loads the content of each file into memory.
4. Extract from File: raw text extraction.

📊 3. Report Generation Using AI Agents

📘 Objective: AI Agents parse the extracted data to generate structured accounting reports.

a. Accounts Receivable Report Agent

- The extracted text is passed to an AI Agent.
- The model is connected to the OpenAI Chat Model (LLM) and the Pinecone vector DB (IUR reference).
- Outputs a structured AR Summary Report.

b. Tax and Surcharges Report Agent

- Same steps as above, with prompts adjusted to extract taxes, fees, surcharges, and amounts.

✅ Output format: can be mapped to columns and inserted into a Google Sheet, or exported as CSV/JSON.

📑 **Sample Reports You Can Build**

Already implemented:

- ✅ Accounts Receivable (AR) Summary Report
- ✅ Tax and Surcharges Report

Can be extended to:

- Accounts Payable (AP)
- Passenger Revenue
- Daily Sales
- Commission Report
- Net Profit Margin (if supplier cost + commission is available)

💡 **Key Advantages**

- ✅ No-code automation with n8n
- ✅ Semantic reasoning using AI + a vector DB (RAG)
- ✅ Works with various Sabre outputs without manual parsing
- ✅ Modular: easy to add new report types
- ✅ Cloud-integrated (Drive, Pinecone, OpenAI)

🧪 **Potential Improvements**

| Area | Suggestions |
| --- | --- |
| Testing | Add a "Preview" step to validate extracted data before writing |
| Scalability | Batch mode + Google Sheet batching for multiple reports |
| Audit Trail | Log every file name, timestamp, and report type in a Google Sheet |
| Notification | Send Slack/Email when a new report is generated |
| Multi-model support | Add Claude/Gemini fallback if the OpenAI usage limit is hit |
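The chunking step for the 154-page IUR can be illustrated with LangChain's recursive splitter, which the n8n Recursive Character Splitter node mirrors. Chunk sizes here are illustrative assumptions; tune them to your retrieval quality:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Sketch of splitting the extracted IUR text into overlapping chunks before
# embedding. chunk_size/chunk_overlap are assumed starting points, not tuned.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)
iur_text = "Sabre Interface User Record ... (full extracted PDF text here)"
chunks = splitter.split_text(iur_text)
print(len(chunks), "chunks ready for embedding")
```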
by Behram
This automated n8n workflow receives videos via a form, dubs/translates them into the selected languages, and, upon completion, uploads them to multiple social media channels and cloud drives, including Box, Dropbox, YouTube, Telegram, and Postiz (Facebook, Instagram, TikTok, Reddit, etc.).

**Workflows**

1. Via the n8n form, select the files to dub into the desired languages.
2. Listen on a webhook and, whenever dubbing finishes, upload the result to the desired platforms.

**Used Stacks**

- DubLab App (API key and webhook setup required)
- Optional (upload targets):
  - Telegram (token required)
  - Box (OAuth2 required)
  - Dropbox (OAuth2 required)
  - YouTube (OAuth2 required)
  - Postiz (API key required)
by Max T
**How it works**

This template takes a YouTube video ID and identifies potentially engaging moments based on the intensity of specific timestamps 👇

Ideal for vloggers and YouTube content creators, it serves as a foundation for various automations to streamline content calendars or highlight popular moments in your videos. You can leverage it for:

- Automatic processes that analyze YouTube videos and create sizzle reels or clips for social media, particularly effective for microcontent strategies like those endorsed by Gary Vee.
- Instant alerts via Slack, Telegram, Email, or WhatsApp when significant moments occur in your videos.
- Utilizing transcripts of these moments with AI to refine content ideas or brainstorm chatbots in your editorial workflow.

**Example response from the Workflow-as-an-API**

A GET request to {your instance URL}/webhook/youtube-engaging-moments-extractor?ytID=IZsQqarWXtYy returns 👇

The workflow generates multiple moments; the screenshot above shows a truncated version. Not all videos contain timestamp intensity data; the workflow handles this case as well 👇

**How to use**

1. Import the template into your n8n workspace or self-hosted instance, then activate the workflow.
2. Open the Webhook trigger node and copy the Production URL.
3. In a web browser or any tool capable of making HTTP requests (e.g., native code, a Bubble app, an n8n workflow, another automation tool, Postman, etc.), pass the URL parameter ?ytID={youtube video ID} when invoking the API endpoint. Your URL should resemble https://acme.app.n8n.cloud/webhook/youtube-engaging-moments-extractor?ytID=IZsQqarWXtYy. A minimal client example follows below.

**Keep in mind**

This workflow relies on an unofficial YouTube API graciously hosted for free by the folks at lemnoslife.com. It's not recommended for high-volume production use cases.
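For step 3, a minimal client might look like this, assuming your own n8n instance URL in place of the example host and a real YouTube video ID:

```python
import requests

# Sketch of calling the workflow-as-an-API endpoint once the workflow is
# activated. Host and video ID are placeholders taken from the example above.
url = "https://acme.app.n8n.cloud/webhook/youtube-engaging-moments-extractor"
resp = requests.get(url, params={"ytID": "IZsQqarWXtYy"}, timeout=60)
print(resp.json())  # list of potentially engaging moments with timestamps
```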