by Sleak
Who is this template for? This workflow template is designed for people seeking alerts when certain specific changes are made to any web page. Leveraging agentic AI, it analyzes the page every day and autonomously decides whether to send you an e-mail notification. Example use cases Track price changes on [competitor's website]. Notify me when the price drops below €50. Monitor new blog posts on [industry leader's website] and summarize key insights. Check [competitor's job page] for new job postings related to software development. Watch for new product launches on [e-commerce site] and send me a summary. Detect any changes in the terms and conditions of [specific website]. Track customer reviews for [specific product] on [review site] and extract key themes. How it works When clicking 'test workflow' in the editor, a new browser tab will open where you can fill in the details of your espionage assignment Make sure you be as concise as possible when instructing AI. Instruct specific and to the point (see examples at the bottom). After submission, the flow will start off by extracting both the relevant website url and an optimized prompt. OpenAI's structured outputs is utilized, followed by a code node to parse the results for further use. From here on, the endless loop of daily checks will begin: Initial scrape 1 day delay Second scrape AI agent decides whether or not to notify you Back to step 1 You can cancel an espionage assignment at any time in the executions tab Set up steps Insert your OpenAI API key in the structured outputs node (second one) Create a Firecrawl account and connect your Firecrawl API key in both 'Scrape page'-nodes Connect your OpenAI account in the AI agents' model node Connect your Gmail account in the AI agents' Gmail tool node
by ömer
**Generate and Publish AI Content to LinkedIn and X (Twitter) with n8n**

**Overview**
This n8n workflow automates the generation and publishing of AI-powered social media content across LinkedIn and X (formerly Twitter). By leveraging AI, this workflow helps social media managers, marketers, and content creators streamline their posting process.

**Who is this for?**
- Social media managers
- Content creators
- Digital marketers
- Businesses looking to automate content generation

**Features**
- **AI-powered content creation** tailored for LinkedIn and X (Twitter)
- **Automated publishing** to both platforms
- **Structured output parsing** to ensure consistency
- **OAuth2 authentication** for secure posting
- **Merge and confirmation steps** to track successful postings

**Setup Instructions**
Prerequisites — before using this workflow, ensure you have:
- An n8n instance set up
- API credentials for:
  - Google Gemini AI (for content generation)
  - X Developer Account with OAuth2 authentication
  - LinkedIn Developer Account with OAuth2 authentication
- A form submission service integrated with n8n

**Workflow Breakdown**
1. **Trigger: Form Submission**
   - A user submits a form containing the post title.
   - The form is secured with Basic Authentication.
   - The submitted title is passed to the AI Agent.
2. **AI Content Generation**
   - The Google Gemini Chat Model processes the title and generates:
     - LinkedIn post content
     - Twitter (X) post content
     - Hashtags
     - Call-to-action (LinkedIn)
     - Character limit check (Twitter)
3. **Parsing AI Output**
   - A structured output parser converts the AI-generated content into JSON format (see the sketch after this section).
   - Ensures correct formatting for LinkedIn and Twitter (X).
4. **Publishing to Social Media**
   - X (Twitter): extracts the Twitter post from the AI output and publishes it via an OAuth2-authenticated X (Twitter) account.
   - LinkedIn: extracts the LinkedIn post from the AI output and publishes it via an OAuth2-authenticated LinkedIn account.
5. **Merging Post Results**
   - Merges the response data from LinkedIn and Twitter after publishing.
6. **Confirmation Step**
   - Displays a final confirmation form once the posts are successfully published.

**Benefits**
- **Save time** by automating content creation and publishing.
- **Ensure consistency** across platforms with structured AI-generated posts.
- **Secure authentication** using OAuth2 for LinkedIn and Twitter.
- **Increase engagement** with AI-optimized hashtags and CTAs.

This workflow enables seamless social media automation, helping professionals post engaging AI-powered content effortlessly. 🚀
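A sketch of the shape the structured output parser might enforce before publishing, with illustrative field names (the template's actual schema may differ):

```javascript
// Illustrative parsed AI output for both platforms; field names are
// assumptions, not copied from the workflow.
const post = {
  linkedin: {
    content: "Longer-form post body…",
    hashtags: ["#automation", "#n8n"],
    callToAction: "Follow for more automation tips.",
  },
  twitter: {
    content: "Short post body…",
    hashtags: ["#n8n"],
  },
};

// A simple guard mirroring the "character limit check (Twitter)" step:
if (post.twitter.content.length > 280) {
  throw new Error("Tweet exceeds 280 characters");
}
```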
by Luke
Built this for a dedicated Slack outage-notifications channel — works well on both desktop and mobile.

**Who is this for?**
- IT administrators & small MSPs looking to streamline M365 alerts from one or multiple mailboxes into a single or specific Slack channel
- IT admins who prefer ChatOps over management-by-email

**What does it do**
- Scans for M365 outage alert emails (every 1 min)
- Checks whether the alert impacts a specific user region (only if the alert calls it out; countries have to be set manually)
- Summarizes the incident using OpenAI GPT-4o-mini (a cheap model — or you can swap in a local Ollama model)
- Sends a Slack Block to your outage channel with the incident link (can be extended; see the sketch at the end of this section)
- Deletes the original alert email after successful delivery

**Credentials**
- **Outlook**: Create an Outlook credential (OAuth 2.0) pointing to the mailbox (regular or shared) where M365 service alerts are received
- **Slack**: Create a Slack bot credential with access to the Slack channel you want updates posted to
- **OpenAI**: Create an OpenAI credential that has access to the GPT-4o-mini model. It's recommended to use projects in OpenAI so you can set a per-project budget without impacting other projects; review the OpenAI documentation for more info on managing projects in the API portal. Expect this to consume no more than 1–2 cents per month on average.

**Setup**
1. Download & import the workflow
2. Modify the first Outlook node (Check for 365 Service Alert) to use the Outlook credential
3. Modify the OpenAI node's system prompt to call out the countries your users reside in, e.g. "Assume the organization has users primarily in the U.S. and Australia. If those regions are affected, state: 'Your users may have been affected.' Otherwise, add: 'No impact expected for your user base.'" (swap the U.S. & Australia for your desired countries)
4. Modify the Slack node (Post outage to Slack) to specify the channel updates will be posted to

**Sample Slack Output**

**Workflow Diagram**
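A minimal sketch of the kind of Slack Block payload the "Post outage to Slack" step might send, built in an n8n Code node; the field names on `incident` and the block layout are assumptions, not copied from the template:

```javascript
// Build a Slack Block Kit payload for the outage channel.
// `incident` field names are illustrative assumptions.
const incident = $input.first().json; // e.g. { title, summary, link }

return [{
  json: {
    blocks: [
      { type: "header", text: { type: "plain_text", text: `M365 Alert: ${incident.title}` } },
      { type: "section", text: { type: "mrkdwn", text: incident.summary } },
      {
        type: "context",
        elements: [{ type: "mrkdwn", text: `<${incident.link}|View incident>` }],
      },
    ],
  },
}];
```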
by Praveena
**Idea**
The idea for this app came from wanting to build a unique gift for my niece, who gets excited about her birthday (which I'm going to miss this year). The web app has a simple countdown (in HTML and JS), but more importantly there is an AI agent that answers specific questions and knows her preferences.

**How it works**
Questions from the app are sent via webhook to n8n, which pulls a preferences file (her likes, dislikes, personality) from Postgres; an AI Agent then answers the question. The current status is stored back in Postgres (especially the status of the cat and universe happenings) before responding.

**Features**
- Integrated AI chatbot via n8n webhook
- Persistent conversation history
- Minimizable chat interface
- Fallback support for offline testing
- "Where's Mittens" — a query to track her lost cat across the multiverse
- Multiverse updates, with the most recent update stored

**Prerequisites**
- A PostgreSQL database. Alternatively, use any other database, but change the n8n nodes accordingly.
- An LLM API key.

**Step-by-step instructions**
1. Import this n8n workflow.
2. Modify the LLM API key. I used OpenAI GPT-4.1.
3. For the web app scaffolding you will need Node, HTML, and JavaScript. I've created a mini version using Node and JS, with the web app and n8n connection settings, here: https://github.com/productiser/FiBirthdayAgent (a sketch of the webhook call follows this section).
4. Run the PostgreSQL database script (one table for memory and context storage):

```sql
CREATE TABLE fifi_world_context (
  id TEXT PRIMARY KEY,                              -- e.g., 'agent_fifi'
  cat_location TEXT,                                -- e.g., "Bubble Nebula"
  cat_activity TEXT,                                -- e.g., "Playing laser tag with moon mice"
  fifi_preferences JSONB,                           -- e.g., likes/dislikes/foods/shows
  world_history TEXT,                               -- Summary of narrative events
  last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```

5. Modify the system prompt as per your needs.

**Built with**
- n8n (self-hosted)
- Self-hosted web app, hosted on Vercel
- Total spend: <£1 (AI costs only)
- Total time: <1 day

**Support**
Watch this video for a web app overview and how it looks: https://youtu.be/e7PlrTdvwoM
Contact me at info@pankstr.com or superllmuser@gmail.com for any queries.
Hope you enjoy!
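A minimal sketch of how the web app might call the n8n webhook, assuming an illustrative webhook path and response field (see the linked repo for the actual wiring):

```javascript
// Browser-side sketch: send a question to the n8n webhook and return the reply.
// The webhook URL and the `answer` field are illustrative assumptions.
async function askFifi(question) {
  const res = await fetch("https://your-n8n-host/webhook/fifi-chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  });
  if (!res.ok) throw new Error(`Webhook error: ${res.status}`);
  const data = await res.json();
  return data.answer; // the AI Agent's response
}
```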
by Pavel Duchovny
Building agentic AI workflows often requires multiple moving parts: memory management, document retrieval, vector similarity, and orchestration. Until now, these pieces had to be custom-wired. But with the new native n8n nodes for MongoDB Atlas, we reduce that overhead dramatically.

With just a few clicks you can:
- Store and recall long-term memory from MongoDB
- Query vector embeddings stored in Atlas Vector Search
- Use these results in your LLM chains and automation logic

In this example we present ingestion and AI Agent flows centered on travel planning. The points of interest we want the agent to know about are ingested into the vector store, and the AI Agent uses the vector store tool to retrieve relevant context about those points of interest when it needs to.

**Prerequisites**
- A MongoDB Atlas project and cluster
- A valid OpenAI API key for embeddings (can be another provider)
- A Gemini API key for the LLM (can be another provider)

**How it works**
There are two main flows.

The first is the ingestion flow: it receives a document from a webhook and uses the MongoDB Atlas vector store node to embed the document title and description into the points_of_interest collection. Embeddings are stored in a field named embedding. The embeddings used are OpenAI's, but any supported embedder will work. (A sketch of the webhook payload follows this section.)

The second flow is an AI Agent node with chat memory stored in MongoDB Atlas and a Vector Search node as a tool:
- **Chat Message Trigger**: chatting with the AI Agent stores the conversation via the MongoDB Chat Memory node.
- **Vector Search Tool**: when data is needed — such as a location search or details — the agent calls the Vector Search tool, which uses an Atlas Vector Search index created on the points_of_interest collection:

```json
// index name: "vector_index"
// If you change the embedding provider, make sure numDimensions matches the model.
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
```

**Additional resources**
- MongoDB Atlas Vector Search
- n8n Atlas Vector Search docs
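A minimal sketch of a payload the ingestion webhook might accept; every field name here is illustrative:

```javascript
// Example body POSTed to the ingestion webhook (field names are assumptions).
const pointOfInterest = {
  title: "Sagrada Família",
  description: "Gaudí's basilica in Barcelona; book tower access in advance.",
  city: "Barcelona",
  tags: ["architecture", "landmark"],
};
// The flow embeds `title` + `description` and stores the resulting vector in
// the `embedding` field of the `points_of_interest` collection.
```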
by Yang
**Who is this for?**
This workflow is for digital marketers, small business owners, lead generation agencies, and VAs who need a scalable way to find and store local business leads using AI. It's especially useful for teams that want to enrich leads with real-time news insights and save the structured data to Airtable.

**What problem is this workflow solving?**
Manually researching local businesses and staying up to date with relevant news is time-consuming and inefficient. This automation eliminates that burden by using Dumpling AI chat agents to generate leads and context, GPT-4o to summarize, and Airtable to store everything in one place.

**What this workflow does**
This AI workflow listens for a chat trigger in n8n and executes the following steps:
1. Extracts local business leads using a Local Business Agent from Dumpling AI.
2. Pulls current news related to the business type or location using a News Agent from Dumpling AI.
3. Uses GPT-4o to combine both responses into a human-readable summary.
4. Extracts structured lead data such as name, category, and city.
5. Saves the summary and lead data into Airtable for easy follow-up.

**Setup**
1. **Create AI agents in Dumpling AI**
   - Sign in at Dumpling AI and create two separate agents:
     - Local Business Agent: designed to respond with structured lists of businesses by location and category.
     - News Agent: designed to fetch relevant recent news and summaries about a specific industry or region.
   - After setting up each agent, copy the Agent Key from Dumpling AI. These keys are required in the headers of your HTTP Request nodes in n8n (see the sketch at the end of this section).
2. **Chat trigger**
   - This workflow begins with the When chat message received trigger inside n8n, which makes it easy to test and reuse, especially during setup.
3. **Get local business data from Dumpling AI**
   - The first HTTP Request node sends a prompt like "List 5 top real estate companies in Atlanta with full address and services."
   - Include your Local Business Agent Key in the x-agent-key header.
   - The response returns a structured list of business leads.
4. **Get news context from Dumpling AI**
   - The second HTTP Request node sends a prompt such as "Give me the latest news related to the real estate market in Atlanta."
   - Use your News Agent Key in the header.
   - This fetches a brief set of recent news summaries relevant to the businesses being researched.
5. **Use GPT-4o to merge and summarize**
   - The GPT node combines the list of businesses and the news into one coherent summary.
   - You can modify the prompt to output paragraphs, bullet points, or structured notes.
6. **Save the lead to Airtable**
   - The Airtable node sends all structured fields into your selected base and table.
   - Be sure to connect your Airtable account and confirm the columns match exactly.

**How to customize this workflow**
- Replace the prompt inside the HTTP node to focus on different types of businesses or cities.
- Expand the GPT output to include additional lead info such as websites, phone numbers, or emails if the agent includes them.
- Add a webhook trigger so the flow can be run from a chatbot, external app, or button.
- Link to HubSpot or another CRM to sync the leads automatically.
- Duplicate the process to run for multiple industries in parallel.

**Final notes**
- You must create and configure your Dumpling AI agents before running this workflow.
- The Agent Keys from Dumpling AI are required in both HTTP Request nodes.
- This flow is modular and flexible, ready for deeper CRM integrations.
- The chat trigger is great for testing, but you can add a Webhook node to automate it.
This workflow helps you launch an intelligent lead gen process that combines location-targeted business discovery, AI-generated insights, and structured CRM-friendly output, all powered by Dumpling AI and OpenAI.
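A minimal sketch of the Dumpling AI agent call made by the two HTTP Request nodes; only the x-agent-key header comes from the template, while the endpoint URL and body shape are placeholders (check Dumpling AI's API docs for the real contract):

```javascript
// Sketch of an HTTP Request node's call to a Dumpling AI agent
// (runs as-is in an n8n Code node, which allows top-level await).
const res = await fetch("https://<dumpling-ai-agent-endpoint>", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-agent-key": "YOUR_LOCAL_BUSINESS_AGENT_KEY", // from the Dumpling AI dashboard
  },
  body: JSON.stringify({
    prompt: "List 5 top real estate companies in Atlanta with full address and services.",
  }),
});
const leads = await res.json(); // structured list of business leads
return [{ json: leads }];
```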
by Airtop
**Define Your ICP from Customer LinkedIn Profiles**

**Use case**
This automation helps marketing and sales teams define their Ideal Customer Profile (ICP) using real LinkedIn profiles of current high-fit customers. By enriching and analyzing profile data, it generates a clear ICP definition and a scoring methodology for future targeting.

**What this automation does**
It analyzes LinkedIn profiles of your existing customers and produces:
- A structured ICP definition
- A scoring model to evaluate future prospects
- A Google Boolean search string to find similar prospects

Input: LinkedIn profile URLs of existing high-fit customers (e.g., https://www.linkedin.com/in/amirashkenazi/)
Output: a Google Doc containing the ICP analysis and scoring methodology

**How it works**
1. Trigger: waits for a chat message containing one or more LinkedIn profile URLs.
2. AI Agent: parses and processes the URLs.
3. Airtop data enrichment: uses Airtop to extract structured information from each LinkedIn profile (e.g., job title, company, experience, skills).
4. Memory: maintains state between inputs for consistent analysis.
5. LLM analysis: uses Claude 3.7 Sonnet to synthesize the enriched data into a meaningful ICP.
6. Google Docs: automatically creates a new doc with a timestamped title and appends the ICP definition.

**Setup requirements**
- An Airtop profile connected to LinkedIn; insert the profile name in the Airtop tool.
- Airtop API credentials. Get them free here.
- If you choose to save results to Google Docs, you will need OAuth2 credentials (or just copy the ICP definition from the chat).

**Next steps**
- **Use the ICP for scoring**: feed new LinkedIn profiles through the same Airtop enrichment and use the scoring function to evaluate fit.
- **Automate target discovery**: plug the Boolean search output into LinkedIn, Google, or People Data Labs for ICP-matching lead generation.
- **Refine continuously**: repeat the workflow as your customer base grows or segments evolve.

Read more about how to Define ICP from Customer Examples.
by Automate With Marc
**✉️ Telegram Email Agent with GPT + Gmail**

Category: Messaging / AI Agent
Level: Beginner-friendly
Tags: Telegram, Email Automation, AI Agent, Gmail, GPT Model

Watch the step-by-step video guide here: https://www.youtube.com/watch?v=nyI40s9QOuw&t=420s&pp=0gcJCb4JAYcqIYzv

**🤖 What This Workflow Does**
This workflow turns your Telegram bot into a personal email assistant powered by AI. With just a message on Telegram, users can:
- Send an email via Gmail
- Automatically generate the email content using OpenAI models
- Get confirmation or responses directly in Telegram

It's like ChatGPT meets Gmail, inside your Telegram chat.

**🔧 How It Works**
1. Telegram Trigger – listens for incoming messages from your bot.
2. AI Agent – processes the input using an OpenAI model and converts it into structured email content (To, Subject, Body) — see the sketch after the setup steps.
3. Memory Node – stores short-term context per user (via chat ID), so the agent can hold simple conversations.
4. Gmail Node – sends the generated email using your Gmail account.
5. Telegram Node – replies to the user confirming the output or status.

**🧠 Why This Is Useful**
Ever wanted to send an email while on the go, without typing the whole thing out in Gmail? This is a fast, intuitive, AI-powered way to:
- Dictate or draft emails from anywhere
- Create an AI-powered virtual assistant via Telegram
- Integrate n8n's LangChain Agent with real-world productivity use cases

**🪜 Setup Instructions**
1. Connect your Telegram bot via BotFather and add the credentials in n8n.
2. Set up your OpenAI API key (GPT-4o-mini recommended).
3. Add your Gmail OAuth credentials.
4. Activate the workflow and start messaging your bot!
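A sketch of the structured email the AI Agent step might hand to the Gmail node; the field names are illustrative assumptions:

```javascript
// Illustrative result of the AI Agent step: a Telegram message like
// "email Sarah about moving tomorrow's standup to 2pm" becomes three fields.
const email = {
  to: "sarah@example.com",
  subject: "Standup moved to 2pm tomorrow",
  body: "Hi Sarah,\n\nTomorrow's standup is moving to 2pm. Same link as usual.\n\nThanks!",
};
// The Gmail node maps these onto its To / Subject / Message parameters.
```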
by NovaNode
**Who is this for?**
This template is designed for internal support teams, product specialists, and knowledge managers in technology companies who want to automate ingestion of product documentation and enable AI-driven, retrieval-augmented question answering.

**What problem is this workflow solving?**
Support agents often spend too much time manually searching through lengthy documentation, leading to inconsistent or delayed answers. This solution automates importing, chunking, and indexing product manuals, then uses retrieval-augmented generation (RAG) to answer user queries quickly and accurately with AI.

**What these workflows do**

Workflow 1: Document Ingestion & Indexing
- Manually triggered to import product documentation from Google Docs.
- Automatically splits large documents into chunks for efficient searching.
- Generates vector embeddings for each chunk using OpenAI embeddings.
- Inserts the embedded chunks and metadata into a MongoDB Atlas vector store, enabling fast semantic search.

Workflow 2: AI-Powered Query & Response
- Listens for incoming user questions (can be extended to a webhook).
- Converts questions to vector embeddings and performs a similarity search on the MongoDB vector store.
- Uses OpenAI's GPT-4o-mini model with retrieval-augmented generation to produce direct, context-aware answers.
- Maintains short-term conversation context using a memory buffer node.

**Setup**

Setting up vector embeddings:
1. Authenticate Google Docs and connect the Google Docs URL containing the product documentation you want to index.
2. Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings.
3. Create a search index on this collection to support vector similarity queries. Ensure the index name matches the one configured in n8n (data_index). See the example MongoDB search index below for reference.

Setting up chat:
1. Configure the AI system prompt in the "Knowledge Base Agent" node to reflect your company's tone, answering style, and any business rules.
2. Update the workflow description and instructions to help users understand the chat's purpose and capabilities.
3. Connect the MongoDB collection used for vector search in the chat workflow, and update the vector search index if needed to match your setup.

Make sure both MongoDB nodes (in the ingestion and chat workflows) are connected to the same collection, with:
- An embedding field storing vector data,
- Relevant metadata fields (e.g., document ID, source), and
- The same vector index name configured (e.g., data_index).

Search index example:

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "_id": { "type": "string" },
      "text": { "type": "string" },
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      },
      "source": { "type": "string" },
      "doc_id": { "type": "string" }
    }
  }
}
```
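For reference, a sketch of what one stored chunk document could look like, matching the fields declared in the index above (values are illustrative):

```javascript
// Illustrative chunk document in the MongoDB Atlas collection. The field
// names mirror the search index: text, embedding, source, doc_id.
const chunk = {
  _id: "665f1c2e9b1e8a0001a1b2c3",
  text: "To reset the device, hold the power button for 10 seconds…",
  embedding: [0.013, -0.042 /* …1536 floats in total… */],
  source: "google-docs",
  doc_id: "product-manual-v2",
};
```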
by Abdul Mir
**Company Website Chatbot Agent**

**Overview**
This workflow implements a modular website AI chatbot assistant capable of handling multiple types of customer interactions autonomously. Instead of relying on a single large agent to handle all logic and tools, this system routes user queries to specialized sub-agents, each dedicated to a specific function. By using a manager-style orchestration layer, this approach prevents overloading a single AI model with excessive context, leading to cleaner routing, faster execution, and easier scaling as your automation needs grow.

**How It Works**
1. **Chat Trigger** — the flow is initiated when a chat message is received via the website widget.
2. **Manager Agent (Ultimate Website AI Assistant)** — the central LLM-based agent parses the message and decides which specialized sub-agent to route it to. It uses an OpenAI GPT model for natural language understanding and a lightweight memory system to preserve recent context.
3. **Sub-Agent Routing**
   - calendarAgent: handles availability checks and books meetings on connected calendars.
   - RAGAgent: searches company documentation or FAQs to provide accurate responses from your internal knowledge base.
   - ticketAgent: forwards requests to human support by generating and sending support tickets to a designated email.

**Setup Instructions**
1. **Embed the chatbot** — use a custom HTML widget or script to embed the chatbot interface on your website, and connect the frontend to the webhook that triggers the When chat message received node (a sketch of this wiring follows the setup steps).
2. **Configure your OpenAI key** — insert your API key in the OpenAI Chat Model node, and adjust the model parameters (temperature, max tokens, etc.) based on how formal or creative you want the bot to be.
3. **Customize sub-agents**
   - calendarAgent: connect to your Google or Outlook calendar.
   - RAGAgent: link to a vector store or document database via API or native integration.
   - ticketAgent: set the destination email and format for ticket generation (e.g., via SendGrid or SMTP).
4. **Deploy in production** — host on n8n Cloud or your self-hosted instance, monitor usage through the Executions tab, and refine prompts based on user behavior.

**Benefits**
- Modular system with dedicated logic per function
- Reduces token bloat by offloading complexity to sub-agents
- Easy to scale by adding more tools (e.g., CRM, analytics)
- Fast, responsive user experience for customers on your site
- Cleaner code structure and easier debugging
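A minimal sketch of the website-side wiring from step 1, assuming an illustrative webhook URL and response field:

```javascript
// Browser-side sketch: forward a widget message to the n8n chat webhook.
// The URL and the `output` field are assumptions, not from the template.
async function sendChatMessage(sessionId, message) {
  const res = await fetch("https://your-n8n-host/webhook/website-chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sessionId, message }), // sessionId lets memory keep context
  });
  const data = await res.json();
  return data.output; // the manager agent's reply, routed through a sub-agent
}
```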
by Billy Christi
**Who is this for?**
This workflow is perfect for:
- Companies that manage invoices through Google Drive
- Business owners who want to minimize manual data entry and maximize accuracy
- Accounting teams and finance departments seeking to automate invoice processing

**What problem is this workflow solving?**
Processing invoices manually is time-consuming, error-prone, and inconsistent. This workflow solves those issues by:
- **Automating invoice processing** from detection to data extraction to storage
- **Improving accuracy** by using AI to reliably extract key invoice data fields
- **Reducing human workload** while maintaining compliance and consistency

**What this workflow does**
This workflow creates a fully automated invoice processing system by:
1. Monitoring a Google Drive folder for new PDF invoices in real time
2. Downloading the PDF files and extracting their content using OCR technology
3. Using AI (OpenAI) to parse and extract key invoice fields such as invoice number, date, total amount, vendor name, itemized details, tax, and category
4. Validating the extracted data against a structured JSON schema (see the sketch after this section)
5. Storing the structured data in Google Sheets for easy access, review, and reporting

Key features:
- AI-powered extraction handles both text-based and scanned PDF invoices
- Provides a structured, searchable invoice database in Google Sheets
- Runs as frequently as you need, ensuring timely processing

**Setup**
1. Copy the Google Sheet template here: 👉 PDF Invoice Parser – Google Sheet Template
2. Connect your Google Drive account to the Drive Trigger and File Download nodes
3. Add your OpenAI API key in the AI Parser node
4. Link the Google Sheet in the final storage node
5. Drop a test invoice PDF into the monitored Drive folder

Required credentials:
- **OpenAI API key**
- **Google Drive credentials**
- **Google Sheets credentials**

**How to customize this workflow to your needs**
- **Modify the polling interval** (default: every minute) for higher or lower frequency.
- **Integrate with your accounting software** by adding nodes (e.g., QuickBooks, Xero).
- **Use an alternative LLM** such as Gemini or Claude.
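A sketch of the kind of JSON Schema the validation step might enforce, built from the fields named above (the template's actual schema may differ):

```javascript
// Illustrative JSON Schema for the extracted invoice fields.
const invoiceSchema = {
  type: "object",
  required: ["invoiceNumber", "date", "totalAmount", "vendorName"],
  properties: {
    invoiceNumber: { type: "string" },
    date: { type: "string", format: "date" },
    totalAmount: { type: "number" },
    vendorName: { type: "string" },
    tax: { type: "number" },
    category: { type: "string" },
    items: {
      type: "array",
      items: {
        type: "object",
        properties: {
          description: { type: "string" },
          quantity: { type: "number" },
          unitPrice: { type: "number" },
        },
      },
    },
  },
};
```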
by Yang
**👤 Who is this for?**
This workflow is ideal for social media managers, personal brand strategists, ghostwriters, and founders who want to post regularly on LinkedIn without spending hours writing from scratch. It's also useful for marketing agencies and assistants looking to automate consistent post creation using curated articles as source material.

**🧩 What problem does this workflow solve?**
Manually reading multiple articles, extracting key insights, and writing a clean, professional LinkedIn post is time-consuming. This workflow automates everything: pulling topics, finding related articles, summarizing them with AI, and even generating a matching image to accompany the post. The result is faster content turnaround, more consistency, and less manual effort.

**🔁 What this workflow does**
The workflow starts manually and retrieves one topic marked "To do" from a Google Sheet. That topic is used as a search term for Dumpling AI's search endpoint, which scrapes and returns the top three article contents related to the topic. These articles are sent to a LangChain agent powered by GPT-4o, which analyzes and summarizes the content into a LinkedIn post in a friendly, insightful tone; it also generates an image prompt for the post. The post and image prompt are then extracted using a Set node (see the sketch at the end of this section). The prompt is sent to Dumpling AI's image generation endpoint, which returns an image URL. Finally, the post text, image prompt, image URL, and status update ("created") are saved back to the original row in Google Sheets.

**🛠️ Workflow Breakdown**
1. Manual Trigger – starts the automation.
2. Google Sheets (Get Topic) – finds the first row in your content pipeline sheet where the status is "To do".
3. HTTP Request (Dumpling AI Search) – uses the topic as a search query to pull three article contents via Dumpling AI's API.
4. Set LangChain GPT Model – defines GPT-4o as the LLM for the LangChain agent.
5. LangChain Agent (Summarize & Generate) – summarizes all three articles and generates a LinkedIn post plus a related image prompt.
6. Set (Extract Data) – extracts postText and imagePrompt from the LangChain agent output.
7. HTTP Request (Dumpling Image Gen) – sends imagePrompt to Dumpling AI's image generation endpoint.
8. Update Google Sheets – writes the post, image prompt, and image URL back to the sheet and changes the row status to "created".

**⚙️ Setup Instructions**
1. Dumpling AI
   - Sign up at Dumpling AI.
   - Get your API key and connect it in the HTTP Request nodes (search and image endpoints).
   - Use the /search endpoint to retrieve article content.
   - Use the /generate-image endpoint to create the image.
2. Google Sheets
   - Create a spreadsheet with columns: topic, status, postText, imagePrompt, imageURL.
   - Add sample topics and set their status to "To do".
3. LangChain (GPT-4o)
   - Connect your OpenAI credentials to n8n and make sure GPT-4o is available in your OpenAI account.
   - Use the LangChain node to process multi-input summarization and generate a social media caption.
4. Customize the prompt (optional)
   - Adjust the Set node to tweak the input format sent to the LangChain agent.
   - Add constraints like tone, hashtags, or emojis to fit your brand style.

**🧠 How to customize this workflow**
- Change the content source (RSS feed, Notion DB, etc.) instead of Google Sheets.
- Add a scheduler node to run this automatically every morning or weekly.
- Use Airtable instead of Google Sheets for more control and filtering.
- Send the final post to LinkedIn using the Buffer or LinkedIn API.
- Add a Telegram or Slack notification when new content is ready for approval.
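A minimal Code-node equivalent of the Set (Extract Data) step, assuming the LangChain agent returns JSON containing the postText and imagePrompt fields described above (the `output` wrapper is an assumption):

```javascript
// Pull postText and imagePrompt out of the LangChain agent's output.
const raw = $input.first().json.output;
const parsed = typeof raw === "string" ? JSON.parse(raw) : raw;

return [{
  json: {
    postText: parsed.postText,       // LinkedIn post body
    imagePrompt: parsed.imagePrompt, // prompt for Dumpling AI image generation
    status: "created",               // written back to the Google Sheet row
  },
}];
```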