by Chris Carr
Split Test Agent Prompts with Supabase and OpenAI

Use Case

It's often useful to test different settings for a large language model in production against various metrics. Split testing is a good method for doing this.

What it Does

This workflow randomly assigns chat sessions to one of two prompts: the baseline and the alternative. The agent uses the same prompt for all interactions within a given chat session.

How it Works

- When a message arrives, a table containing the session ID and which prompt to use is checked to see if the chat session already exists.
- If it does not, the session ID is added to the table and a prompt is randomly assigned.
- These values are then used to generate a response (see the sketch below for the assignment logic).

Setup

- Create a table in Supabase called split_test_sessions. It needs the following columns: session_id (text) and show_alternative (bool).
- Add your Supabase, OpenAI, and PostgreSQL credentials.
- Modify the Define Path Values node to set the baseline and alternative prompt values.
- Activate the workflow and test it by sending messages through n8n's built-in chat.
- Experiment with different chat sessions to see both prompts in action.

Next Steps

- Modify the workflow to test different LLM settings, such as temperature.
- Add a method to measure the efficacy of the two prompts.
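For illustration, here is a minimal TypeScript sketch of the session-assignment logic the workflow performs with its Supabase nodes. The function name and client setup are assumptions for the example; in the template itself this is handled by n8n nodes rather than code.

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

// Look up the session; assign a prompt variant only on first contact.
async function getPromptVariant(sessionId: string): Promise<boolean> {
  const { data } = await supabase
    .from("split_test_sessions")
    .select("show_alternative")
    .eq("session_id", sessionId)
    .maybeSingle();

  if (data) return data.show_alternative; // existing session keeps its prompt

  const showAlternative = Math.random() < 0.5; // 50/50 random assignment
  await supabase
    .from("split_test_sessions")
    .insert({ session_id: sessionId, show_alternative: showAlternative });
  return showAlternative;
}
```

Because the assignment is persisted on first contact, every later message in the same session resolves to the same prompt, which is what makes the two test groups comparable.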
by Alex Huang
Use case

Manually monitoring Reddit for viable business ideas is time-consuming and inconsistent. This workflow automatically analyzes trending Reddit discussions using AI to surface high-potential opportunities, filter out irrelevant content, and generate actionable insights, saving entrepreneurs 10+ hours of market research weekly.

What this workflow does

This AI-powered workflow automatically collects trending Reddit discussions, analyzes posts for viable business opportunities using GPT-4, applies smart filters to exclude low-value content, and generates scored opportunity reports with market insights. It identifies unmet customer needs through sentiment analysis, prioritizes high-potential ideas using custom criteria, and outputs structured data to Google Sheets for actionable decision-making.

Setup

- Add Reddit, Google, and OpenAI credentials.
- Configure target subreddits in the Subreddit node.
- Test the workflow by running it once manually.
- Review the generated opportunity report in Google Sheets.

How to adjust this template

- **Change data sources**: Replace the Reddit trigger with the Twitter/X or Hacker News API.
- **Modify criteria**: Adjust scoring thresholds in the Opportunity Calculator node (see the sketch below).
- **Add integrations**: Create automatic Slack alerts for urgent opportunities, or generate draft business plans using AI Document Writer.
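To make the threshold adjustment concrete, here is a hypothetical TypeScript sketch of the kind of filtering the Opportunity Calculator node performs. The field names and the 70-point cutoff are illustrative assumptions, not the template's actual schema.

```typescript
// Hypothetical shape of a scored post after the AI analysis step.
interface ScoredPost {
  title: string;
  url: string;
  opportunityScore: number; // 0-100, assigned by the AI analysis
}

const MIN_SCORE = 70; // raise or lower to tune what counts as "high potential"

// Keep only high-scoring posts, best opportunities first.
function filterOpportunities(posts: ScoredPost[]): ScoredPost[] {
  return posts
    .filter((p) => p.opportunityScore >= MIN_SCORE)
    .sort((a, b) => b.opportunityScore - a.opportunityScore);
}
```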
by David Olusola
📊 Google Sheets MCP Workflow – AI Meets Spreadsheets! 😄

✨ What It Does

This n8n workflow lets you chat with your spreadsheets using AI + MCP! From reading and updating data to creating sheets, it's your smart assistant for Google Sheets 📈🤖

🚀 Cool Features

- 💬 Natural language commands (e.g. "Add a new lead: John Doe")
- ✏️ Full CRUD (Create, Read, Update, Delete)
- 🧠 AI-powered analysis & smart workflows
- 🗂️ Multi-sheet support
- 🔗 Works with ChatGPT, Claude, and more (via MCP)

💡 Use Cases

- Data Tasks: "Update status to 'Done' in row 3"
- Sheet Ops: "Create a 'Marketing 2024' sheet"
- Business Flows: "Summarize top sales from Q2"

🛠️ Quick Setup

1. Import the workflow into n8n: copy the JSON, then in n8n → Import JSON → Paste & Save ✅
2. Connect Google Sheets: create a project in Google Cloud, enable the Sheets & Drive APIs, create OAuth2 credentials, then in n8n → Add Google Sheets OAuth2 credential → Connect 🔐
3. Add your credentials: get your credential ID, then open each Google Sheets node → update it with your new credential ID.
4. Link to AI (optional 😊): the MCP webhook is pre-set; plug it into your AI tool (like ChatGPT), send a test command, and watch the magic happen ✨

✅ Test It Out

Try these fun commands:

- 🆕 "Add entry: Jane Doe, janed@example.com"
- 🔍 "Read data from Sales 2024"
- 🧹 "Clear data from A1:C5"
- ➕ "Create sheet 'Budget 2025'"
- ❌ "Delete sheet 'Test'"

🧠 MCP Command List (AI-Callable Functions)

These are the tasks the AI can perform via MCP (a hedged sketch of one of them follows this list):

- Add a new entry to a sheet
- Read data from a sheet
- Update a row in a sheet
- Delete a row from a sheet
- Create a new sheet
- Delete an existing sheet
- Clear data from a specific range
- Summarize data from a sheet using AI

⚙️ Tips & Fixes

- OAuth2 errors? Re-authenticate, check your scopes, and confirm the redirect URI is exact.
- Permissions? The spreadsheet must be shared with edit access; use service accounts for production.
- Webhook not firing? Double-check the URL and trigger it manually to test.
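For a sense of what a command like "Clear data from A1:C5" resolves to, here is a minimal TypeScript sketch of the underlying Google Sheets API call. In the workflow this is performed by a Google Sheets node with its own OAuth2 credential, so the explicit auth setup below is an assumption for the standalone example.

```typescript
import { google } from "googleapis";

// In n8n, authentication is handled by the OAuth2 credential instead.
const auth = new google.auth.GoogleAuth({
  scopes: ["https://www.googleapis.com/auth/spreadsheets"],
});
const sheets = google.sheets({ version: "v4", auth });

// "Clear data from A1:C5" ultimately maps to a values.clear call like this.
async function clearRange(spreadsheetId: string, range: string): Promise<void> {
  await sheets.spreadsheets.values.clear({ spreadsheetId, range });
}

// Example: clearRange("your-spreadsheet-id", "Sales 2024!A1:C5");
```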
by Julian Ivanov
How it works

This workflow automates the transformation of standard product images into professional product photography featuring human models. It uses AI to analyze product images, create tailored photography prompts, and generate high-quality enhanced versions.

Set up steps

- You'll need an OpenAI API key and access to gpt-image-1 (verify your organization).
- Set up a Google Sheets spreadsheet with the columns: Image-URL, Prompt, Output.
- Create a Google Drive folder to store the generated images.

Requirements:

- OpenAI API access (for image generation and analysis)
- Google Sheets and Google Drive accounts
- Basic product images (URLs) as input
- The spreadsheet must contain a column named "Image-URL" with links to the product images

This workflow automatically:

- Reads product image URLs from your Google Sheet
- Downloads the images for processing
- Analyzes each image to understand what product it contains
- Creates specialized photography prompts ensuring each product is shown with a human model
- Generates professional product photography using OpenAI's image generation capabilities (see the sketch below)
- Uploads the results to Google Drive and updates your spreadsheet with links

Extra: You can also use the included simple image generation workflow to create images directly from a prompt, without a product image as input. This option lets you quickly generate images through the OpenAI API using just text prompts.
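As an illustration of the text-to-image step (and of the simpler prompt-only workflow mentioned under Extra), here is a minimal TypeScript sketch using the OpenAI Node SDK. The function name and image size are assumptions; the template drives this call through n8n's OpenAI nodes rather than code.

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Generate a product shot from a text prompt with gpt-image-1.
async function generateProductShot(prompt: string): Promise<string> {
  const result = await openai.images.generate({
    model: "gpt-image-1",
    prompt,
    size: "1024x1024",
  });
  // gpt-image-1 returns base64-encoded image data
  return result.data?.[0]?.b64_json ?? "";
}
```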
by n8n Team
This workflow digests mentions of n8n on Reddit that can be sent as a single email or Slack summary each week. We use OpenAI to classify whether a specific Reddit post is really about n8n or not, and then summarise it into a one-sentence bullet point.

How it works

- Get posts from Reddit that might be about n8n;
- Filter for the most relevant posts (posted in the last 7 days, more than 5 upvotes, and original content);
- Check if the post is actually about n8n;
- If it is, categorise it with OpenAI.

Bear in mind:

The workflow only considers the first 500 characters of each Reddit post (see the snippet below). So if n8n is mentioned after that point, the post won't register as being about n8n.io.

Next steps

- Improve the OpenAI Summary node prompt to return cleaner summaries;
- Extend to more platforms/sources; e.g. it would be really cool to monitor larger Slack communities in this way;
- Do some classification on the type of user to highlight users likely to be in our ICP;
- Separate out a list of data sources (Reddit, Twitter, Slack, Discord, etc.), extract messages from there, and have them go to a sub-workflow for classification and summarisation.
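The 500-character limitation amounts to a simple truncation before classification. A sketch of what that looks like, assuming the post body arrives in Reddit's `selftext` field (the node wiring is simplified here):

```typescript
// Only the first 500 characters reach the OpenAI classifier, so a mention
// of n8n beyond this cutoff is invisible to it.
function toClassifierInput(post: { selftext: string }): string {
  return post.selftext.slice(0, 500);
}
```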
by Zacharia Kimotho
What it does

This workflow scrapes the top 10 pages on the SERP and conducts an in-depth analysis of the keyword intent for each ranking page, saving the information to a Google Sheet for further analysis.

How does this workflow work?

- We add the keywords and country codes we want to monitor and research to a Google Sheet.
- Run the system.
- Scrape the top 10 pages.
- Analyze the intents of the top 10 results and update the Google Sheet.

Technical Setup

- Make a copy of this G sheet.
- Add your desired keywords to the Google Sheet.
- Map the keyword and country code.
- Update the Zone name to match your zone on Bright Data.
- Run the scraper.
- Upon successful scraping, an intent classifier runs to determine the intents for each ranking page and updates the G sheet.

Setting up the SERP Scraper in Bright Data

- On Bright Data, go to the Proxies & Scraping tab.
- Under SERP API, create a new zone.
- Give it a suitable name and description. The default is serp_api.
- Add this to your account.
- Add your credentials as a header credential (a hedged request sketch follows this list).
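For orientation, here is a sketch of calling a Bright Data SERP API zone directly over HTTP. The /request endpoint and body fields reflect Bright Data's direct API access mode as I understand it; treat the exact shape as an assumption and verify it against your account's documentation before relying on it.

```typescript
// Fetch Google results for a keyword through the serp_api zone.
async function fetchSerp(keyword: string, countryCode: string): Promise<string> {
  const response = await fetch("https://api.brightdata.com/request", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.BRIGHTDATA_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      zone: "serp_api", // must match the zone name you created
      url: `https://www.google.com/search?q=${encodeURIComponent(keyword)}&gl=${countryCode}`,
      format: "raw",
    }),
  });
  return response.text();
}
```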
by Mujahid Kabae
How it works

This workflow scrapes the latest Artificial Intelligence articles from TechCrunch, then processes and classifies the content using OpenAI and LangChain nodes. The final result is saved to Google Sheets and sent as a summary to a Telegram group.

Workflow Logic:

- Trigger: Runs daily at 6 AM Bangkok time on a schedule.
- Scraper: Extracts URLs and publish dates from TechCrunch's AI category.
- Filter: Only continues if the article was published yesterday, to avoid duplication (see the sketch below).
- Content Fetch: Downloads and extracts the article body text.
- AI Agent: Summarizes the article in Thai, scores it using strict journalism criteria (max 100), and categorizes the news into one of 9 predefined categories.
- Output: Saves all structured data to Google Sheets and sends a summary to a Telegram group.

Set up steps

🕒 Estimated setup time: 10–15 minutes

- Connect your credentials: Google Sheets (OAuth2), Telegram, and an OpenAI account (via the LangChain model node).
- Update the Telegram chatId and the Google Sheets documentId/sheetName values.
- Deploy and activate the workflow. It runs daily without manual intervention.
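The "published yesterday" filter reduces to a simple date comparison. A minimal TypeScript sketch, assuming publish dates arrive as ISO strings and glossing over the Bangkok-timezone offset for clarity:

```typescript
// True if the article's publish date falls on yesterday's calendar day.
function isFromYesterday(publishedAt: string): boolean {
  const published = new Date(publishedAt);
  const yesterday = new Date();
  yesterday.setDate(yesterday.getDate() - 1);
  return published.toDateString() === yesterday.toDateString();
}

// Example: isFromYesterday("2024-05-01T09:30:00Z")
```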
by Solomon
Learn how to build an MCP Server and Client in n8n with official nodes.

> ⚠ Requires n8n version 1.88.0 or higher.

In this example, we use Google Calendar and custom functions as two separate MCP Servers, demonstrating how to integrate both native and custom tools.

How it works

The AI Agent connects to two MCP Servers. Each MCP Trigger (Server) generates a URL exposing its tools. This URL is used by an MCP Client linked to the AI Agent.

Whenever you make changes to the tools, there's no need to modify the MCP Client. It automatically keeps the AI Agent informed on how to use each tool, even if you change them over time. That's the power of MCP 🙌

Who is this template for

Anyone looking to use MCP with their AI Agents.

How to set up

Instructions are included within the workflow itself.

Check out my other templates 👉 https://n8n.io/creators/solomon/
by Jason Krol
Notion Weekly Journal AI Summary

This workflow runs on a weekly schedule, retrieves your Notion Daily Journal pages for the past week, and aggregates them into a concise ChatGPT-generated summary. It saves that weekly summary back to Notion as a new Note, in addition to posting it to a personal Discord channel. It also retrieves all of the Tasks you've completed in the past week and posts a quick total with a congratulatory message to a Discord channel.

Requirements/Setup:

- You need Notion set up with a Notes database.
- If you want the Discord messages, set up a Discord webhook for your channel as well, or simply delete the Discord nodes.
- One of the properties of the Notes db should be Type, with a value of Journal (see the query sketch after this list).
- The contents of your daily Journal pages can be whatever you want. I've found what works best for me is the format "What was a highlight of the day?", "What was a low point of the day?", and "What decisions did I delegate, delay, or dodge?"
- You should also create an additional Type for the Weekly summary page that gets created; in this case I simply used Weekly.
- Automate this to run weekly on your day of choice. I tend to only journal on weekdays, so I've set mine up to run every Friday, retrieving the past week's Journal entries.

Options:

- You don't have to use Discord; feel free to swap it out for Slack or remove it altogether.
- You also don't need the Tasks summary bottom half; simply remove it if you don't want or need it.
- You can easily reuse this workflow to aggregate the Weekly Summary notes (that this workflow auto-generates/saves) into a Quarterly or even Yearly summary!
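For reference, here is a TypeScript sketch of the Notion query the Type property implies, using the official Notion SDK. The database ID is a placeholder, and this assumes Type is a select property; in the workflow itself a Notion node handles this.

```typescript
import { Client } from "@notionhq/client";

const notion = new Client({ auth: process.env.NOTION_TOKEN });

// Fetch Journal pages created in the last 7 days.
async function getLastWeeksJournals(databaseId: string) {
  const oneWeekAgo = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
  return notion.databases.query({
    database_id: databaseId,
    filter: {
      and: [
        { property: "Type", select: { equals: "Journal" } },
        {
          timestamp: "created_time",
          created_time: { on_or_after: oneWeekAgo.toISOString() },
        },
      ],
    },
  });
}
```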
by NovaNode
Who is this for?

This template is designed for internal support teams, product specialists, and knowledge managers who want to build an AI-powered knowledge assistant with retrieval-augmented generation (RAG) and reinforcement learning from human feedback (RLHF) via Telegram.

What problem is this workflow solving?

Manual knowledge management and answering support queries can be time-consuming and error-prone. This solution automates importing and indexing official documentation into MongoDB vector search and enhances AI responses with Telegram-based user feedback to continuously improve answer quality.

What these workflows do

Workflow 1: Document ingestion & indexing

- A manually triggered workflow imports product documentation from Google Docs.
- Documents are split into manageable chunks and embedded using OpenAI embeddings.
- Embedded document chunks are stored in the MongoDB Atlas vector store to enable semantic search.

Workflow 2: Telegram chat with RLHF feedback loop

- Listens for user messages via the Telegram bot integration.
- Uses vector similarity search on MongoDB to retrieve relevant documentation chunks (a hedged query sketch follows the index templates below).
- Generates answers with the OpenAI GPT-4o-mini model using retrieval-augmented generation.
- Sends answers back via Telegram and waits for user feedback (approval or disapproval).
- Captures feedback, maps it as positive or negative, and stores it with the conversation data for future model improvement.

Setup

Setting up vector embeddings

- Authenticate Google Docs and connect the Google Docs URL containing the product documentation you want to index.
- Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings.
- Create a search index on this collection to support vector similarity queries. Ensure the index name matches the one configured in n8n (data_index). See the example MongoDB search index templates below for reference.

Setting up chat with Telegram RLHF

- Create a bot in Telegram with @botFather using the /newbot command.
- Connect the MongoDB database and search index used for vector search in the previous workflow.
- Also create two new collections in MongoDB Atlas: one for feedback and one for chat history.
- Create a search index for the feedback collection, copying the provided template.
- Configure the AI system prompt in the "Knowledge Base Agent" node, making sure it references all three tools connected (productDocs, feedbackPositive, feedbackNegative) as provided in the template prompt.
- Make sure the product documentation and feedback collections connect to the same MongoDB database. There are three distinct MongoDB collections: one for product documentation, one for feedback, and one for chat history (the chat history collection can be separate).
- Confirm that the Telegram API credentials are valid and the webhook URLs are correctly set up.

MongoDB Search Index Templates

Documentation Collection Index

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "_id": { "type": "string" },
      "text": { "type": "string" },
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      },
      "source": { "type": "string" },
      "doc_id": { "type": "string" }
    }
  }
}
```

Feedback Collection Index

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "prompt": { "type": "string" },
      "response": { "type": "string" },
      "text": { "type": "string" },
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      },
      "feedback": { "type": "token" }
    }
  }
}
```
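To show how the retrieval step uses these indexes, here is a TypeScript sketch of a similarity lookup with the MongoDB driver, using the legacy knnBeta operator that matches the knnVector index type above. The database and collection names are placeholders; n8n's MongoDB vector store node issues the equivalent query for you.

```typescript
import { MongoClient } from "mongodb";

// Find the documentation chunks nearest to a query embedding.
async function findSimilarChunks(queryEmbedding: number[]) {
  const client = new MongoClient(process.env.MONGODB_URI!);
  const docs = client.db("knowledge_base").collection("product_docs");
  return docs
    .aggregate([
      {
        $search: {
          index: "data_index", // must match the index name configured in n8n
          knnBeta: { vector: queryEmbedding, path: "embedding", k: 4 },
        },
      },
      { $project: { text: 1, source: 1, score: { $meta: "searchScore" } } },
    ])
    .toArray();
}
```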
by Adrian Bent
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow scrapes job listings on Indeed via Apify, automatically retrieves the resulting dataset, extracts information about each listing, filters jobs by relevance, finds a decision maker at the company, and updates a database (Google Sheets) with that info for outreach. All you need to do is run the Apify actor; the database will then update with the processed data.

Benefits:

- Complete Job Search Automation: a webhook monitors the Apify actor, which fires an integration event and starts the process.
- AI-Powered Filter: uses ChatGPT to analyze content/context, identify company goals, and filter based on the job description.
- Smart Duplicate Prevention: automatically tracks processed job listings in a database to avoid redundancy.
- Multi-Platform Intelligence: combines Indeed scraping with web research via Tavily, and enriches each listing.
- Niche Focus: processes content from multiple niches (currently 6, hardcoded), but can be changed to fit other niches (just re-prompt the "job filter" node).

How It Works:

Indeed Job Discovery:

- Search Indeed and apply filters for relevant job listings, then copy the resulting URL for use in Apify.
- Uses Apify's Indeed job scraper to scrape job listings from the URL of interest.
- Automatically scrapes the information, stores it in a dataset, and triggers the integration.

Oncoming Data Processing:

- Loops over 500 items (configurable) with a batch size of 55 items (configurable) to avoid running into API timeouts.
- Multiple filters ensure all fields are scraped and meet the required metrics: a website must exist and the number of employees must be under 250 (see the sketch after this section).
- Duplicate job listings are removed from the incoming batch before processing.

Job Analysis & Filter:

- An additional filter removes any job listing from the incoming batch if it already exists in the Google Sheets database.
- All new job listings are then passed to ChatGPT, which uses information about the job post/description to determine whether it is relevant to us.
- All relevant jobs get a new field, "verdict", which is either true or false, and we keep the ones where verdict is true.

Enrich & Update Database:

- Uses Tavily to search for a decision maker (it doesn't always find one) and populates a row in the Google Sheet with information about the job listing, the company, and a decision maker at that company.
- Waits 1 minute and 30 seconds to avoid Google Sheets and ChatGPT API timeouts, then loops back to the next batch until all job listings are processed.

Required Google Sheets Database Setup:

Before running this workflow, create a Google Sheets database with these exact column headers:

Essential Columns:

- jobUrl - Unique identifier for job listings
- title - Position title
- descriptionText - Description of the job listing
- hiringDemand/isHighVolumeHiring - Are they hiring at high volume?
- hiringDemand/isUrgentHire - Are they hiring with high urgency?
- isRemote - Is this job remote?
- jobType/0 - Job type: in person, remote, part-time, etc.
- companyCeo/name - CEO name collected from Tavily's search
- icebreaker - Column for holding custom icebreakers for each job listing (not populated by this workflow; I will upload another that does this, called "Personalized IJSFE")
- scrapedCeo - CEO name collected from the Apify scraper
- email - Email listed on the job listing
- companyName - Name of the company that posted the job
- companyDescription - Description of the company that posted the job
- companyLinks/corporateWebsite - Website of the company that posted the job
- companyNumEmployees - Number of employees the company listed
- location/country - Location where the job is based
- salary/salaryText - Salary on the job listing

Setup Instructions:

- Create a new Google Sheet with these column headers in the first row.
- Name the sheet whatever you please.
- Connect your Google Sheets OAuth credentials in n8n.
- Update the document ID in the workflow nodes.

The merge logic relies on the unique identifier column to prevent duplicate processing, so this structure is essential for the workflow to function correctly.

Feel free to reach out for additional help or clarification at my Gmail, terflix45@gmail.com, and I'll get back to you as soon as I can.

Set Up Steps:

Configure Apify Integration:

- Sign up for an Apify account and obtain an API key.
- Get the Indeed job scraper actor and use Apify's integration feature to send an HTTP request to your n8n webhook (if the test URL doesn't work, use the production URL).
- Use the Apify node with Resource: Dataset, Operation: Get items, and your API key as the credentials.

Set Up AI Services:

- Add OpenAI API credentials for job filtering.
- Add Tavily API credentials for company research.
- Set up appropriate rate limiting for cost control.

Database Configuration:

- Create the Google Sheets database with the provided column structure.
- Connect your Google Sheets OAuth credentials.
- Configure the merge logic for duplicate detection.

Content Filtering Setup:

- Customize the AI prompts for your specific niche, requirements, or interests.
- Adjust the filtering criteria to fit your needs.
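For clarity, here is a TypeScript sketch of the pre-filter described under Oncoming Data Processing: keep only listings with a corporate website, fewer than 250 employees, and a not-yet-seen URL. The field names mirror the sheet columns above, but the exact item shape coming out of Apify is an assumption.

```typescript
// Simplified shape of one scraped listing.
interface JobListing {
  jobUrl: string;
  companyLinks?: { corporateWebsite?: string };
  companyNumEmployees?: number;
}

function prefilter(listings: JobListing[], seenUrls: Set<string>): JobListing[] {
  return listings.filter(
    (job) =>
      !!job.companyLinks?.corporateWebsite && // website must exist
      (job.companyNumEmployees ?? Infinity) < 250 && // small-to-mid companies only
      !seenUrls.has(job.jobUrl) // skip already-processed listings
  );
}
```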
by Davide
This workflow allows users to generate AI videos using the cheaper Google Veo3 Fast model, save them to Google Drive, generate optimized titles with GPT-4o, and automatically upload them to YouTube and TikTok with Upload-Post. The entire process is triggered from a Google Sheet that acts as the central interface for input and output.

It automates video creation, uploading, and tracking, ensuring seamless integration between Google Sheets, Google Drive, Google Veo3 Fast, TikTok, and YouTube.

Benefits of this Workflow

- 💡 **No Code Interface**: Trigger and control the video production pipeline from a simple Google Sheet.
- ⚙️ **Full Automation**: Once set up, the entire video generation and publishing process runs hands-free.
- 🧠 **AI-Powered Creativity**: Generates engaging YouTube and TikTok titles using GPT-4o and leverages advanced generative video AI from Google Veo3.
- 📁 **Cloud Storage & Backup**: Stores all generated videos on Google Drive for safekeeping.
- 📈 **YouTube Ready**: Automatically uploads to YouTube with correct metadata, saving time and boosting visibility.
- 📈 **TikTok Ready**: Automatically uploads to TikTok with correct metadata, saving time and boosting visibility.
- 🧪 **Scalable**: Designed to process multiple video prompts by looping through new entries in Google Sheets.
- 🔒 **API-First**: Utilizes secure API-based communication for all services.

How It Works

- Trigger: The workflow can be started manually ("When clicking 'Test workflow'") or scheduled ("Schedule Trigger") to run at regular intervals (e.g., every 5 minutes).
- Fetch Data: The "Get new video" node retrieves unfilled video requests from a Google Sheet (rows where the "VIDEO" column is empty).
- Video Creation: The "Set data" node formats the prompt and duration from the Google Sheet. The "Create Video" node sends a request to the Fal.run API (Google Veo3 Fast) to generate a video based on the prompt.
- Status Check: The "Wait 60 sec." node pauses execution for 60 seconds. The "Get status" node checks the video generation status. If the status is "COMPLETED", the workflow proceeds; otherwise, it waits again. (A polling sketch follows below.)
- Video Processing: The "Get Url Video" node fetches the video URL. The "Generate title" node uses OpenAI (GPT-4.1) to create an SEO-optimized YouTube and TikTok title. The "Get File Video" node downloads the video file.
- Upload & Update: The "Upload Video" node saves the video to Google Drive. Two "HTTP Request" nodes upload the video to YouTube and TikTok via the Upload-Post API. The "Update Youtube URL" and "Update result" nodes update the Google Sheet with the video URL and YouTube link.

Set Up Steps

- Google Sheet Setup: Create a Google Sheet with the columns PROMPT, DURATION, VIDEO, and YOUTUBE_URL. Share the Sheet link in the "Get new video" node.
- API Keys: Obtain a Fal.run API key (for Veo3) and set it in the "Create Video" node (header: Authorization: Key YOURAPIKEY). Get an Upload-Post API key and configure both "HTTP Request" nodes, for YouTube and TikTok uploads (header: Authorization: Apikey YOUR_API_KEY).
- YouTube Upload Configuration: Replace YOUR_USERNAME in the "HTTP Request" node with your Upload-Post profile name.
- Schedule Trigger: Configure the "Schedule Trigger" node to run periodically (e.g., every 5 minutes).

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
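The wait-and-check loop in the Status Check step is a standard polling pattern. A TypeScript sketch under stated assumptions: the status URL is whatever the "Create Video" request returns, and the `status`/`response_url` field names are illustrative, not a documented Fal.run response shape.

```typescript
// Poll the generation job every 60 seconds until it completes,
// mirroring the "Wait 60 sec." → "Get status" loop in the workflow.
async function waitForVideo(statusUrl: string, apiKey: string): Promise<string> {
  while (true) {
    await new Promise((resolve) => setTimeout(resolve, 60_000)); // wait 60 sec.
    const res = await fetch(statusUrl, {
      headers: { Authorization: `Key ${apiKey}` },
    });
    const body = await res.json();
    if (body.status === "COMPLETED") return body.response_url; // fetch the video next
  }
}
```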