by Mauricio Perera
📁 Analyze uploaded images, videos, audio, and documents with specialized tools — powered by a lightweight language-only agent.

🧭 What It Does

This workflow enables multimodal file analysis using Google Gemini tools connected to a text-only LLM agent. Users can upload images, videos, audio files, or documents via a chat interface. The workflow will:

- Upload each file to Google Gemini and obtain an accessible URL.
- Dynamically generate contextual prompts based on the file(s) and user message.
- Allow the agent to invoke Gemini tools for specific media types as needed.
- Return a concise, helpful response based on the analysis.

🚀 Use Cases

- **Customer support**: Let users upload screenshots, documents, or recordings and get helpful insights or summaries.
- **Multimedia QA**: Review visual, audio, or video content for correctness or compliance.
- **Educational agents**: Interpret content from PDFs, diagrams, or audio recordings on the fly.
- **Low-cost multimodal assistants**: Achieve multimodal functionality without relying on large vision-language models.

🎯 Why This Architecture Matters

Unlike end-to-end multimodal LLMs (like Gemini 1.5 or GPT-4o), this template:

- Uses a text-only LLM (Qwen 32B via Groq) for reasoning.
- Delegates media analysis to specialized Gemini tools.

✅ Advantages

| Feature | Benefit |
| --- | --- |
| 🧩 Modular | LLM and tools are decoupled; each can be updated independently |
| 💸 Cost-Efficient | No need to pay for full multimodal models; tools are only invoked when needed |
| 🔧 Tool-based Reasoning | Agent invokes tools on demand, just like OpenAI's Toolformer setup |
| ⚡ Fast | Groq LLMs offer ultra-fast responses with low latency |
| 📚 Memory | Includes a context buffer for multi-turn chats (15 messages) |

🧪 How It Works

🔹 Input via Chat
Users submit a message and (optionally) files via the chatTrigger.

🔹 File Handling
- If no files are attached, the prompt is passed directly to the agent.
- If files are included, they are split and uploaded to Gemini (to get public URLs), and their metadata (name, type, URL) is collected and embedded into the prompt.

🔹 Prompt Construction
A new chatInput is dynamically generated containing the user message plus a `Media: [array of file data]` section (a sketch of this step appears at the end of this description).

🔹 Agent Reasoning
The Langchain Agent receives:
- The enriched prompt
- File URLs
- Memory context (15 turns)
- Access to 4 Gemini tools: IMG (analyze image), VIDEO (analyze video), AUDIO (analyze audio), DOCUMENT (analyze document)

The agent autonomously decides whether and how to use tools, then responds with concise output.

🧱 Nodes & Services

| Category | Node / Tool | Purpose |
| --- | --- | --- |
| Chat Input | chatTrigger | User interface with file support |
| File Processing | splitOut, splitInBatches | Process each uploaded file |
| Upload | googleGemini | Uploads each file to Gemini, gets URL |
| Metadata | set, aggregate | Builds structured file info |
| AI Agent | Langchain Agent | Receives context + file data |
| Tools | googleGeminiTool | Analyze media with Gemini |
| LLM | lmChatGroq (Qwen 32B) | Text reasoning, high-speed |
| Memory | memoryBufferWindow | Maintains session context |

⚙️ Setup Instructions

1. 🔑 Required Credentials
   - **Groq API key** (for the Qwen 32B model)
   - **Google Gemini API key** (PaLM / Gemini 1.5 tools)

2. 🧩 Nodes That Need Setup
   Replace the existing credentials on:
   - Upload a file
   - Each Gemini tool (IMG, VIDEO, AUDIO, DOCUMENT)
   - lmChatGroq
⚠️ File Size & Format Considerations Some Gemini tools have file size or format restrictions. You may add validation nodes before uploading if needed. 🛠️ Optional Improvements Add logging and error handling (e.g., for upload failures). Add MIME-type filtering to choose the right tool explicitly. Extend to include OCR or transcription services pre-analysis. Integrate with Slack, Telegram, or WhatsApp for chat delivery. 🧪 Example Use Case > "Hola, ¿qué dice este PDF?" Uploads a document → Agent routes it to Gemini DOCUMENT tool → Receives extracted content → LLM summarizes it in Spanish. 🧰 Tags multimodal, agent, langchain, groq, gemini, image analysis, audio analysis, document parsing, video analysis, file uploader, chat assistant, LLM tools, memory, AI tools 📂 Files This template is ready to use as-is in n8n. No external webhooks or integrations required.
by Axiomlab.dev
HubSpot Lead Refinement

🚀 How it works

- **Triggers**:
  - **HubSpot Trigger**: Fires when contacts are created/updated.
  - **Manual Trigger**: Run on demand for testing or batch checks.
- **Get Recently Created/Updated Contacts**: Pulls fresh contacts from HubSpot.
- **Edit Fields (Set)**: Maps key fields (First Name, Last Name, Email) for the Agent.
- **AI Agent**: First reads your Google Doc (via the Google Docs tool) to learn the research steps and output format, then uses SerpAPI (Google engine) to locate the contact's likely LinkedIn profile and produce a concise result.
- **Code – Remove Think Part**: Cleans the model output (removes hidden "think" blocks and formatting) so only the final answer remains (see the sketch at the end of this description).
- **HubSpot Update**: Writes the cleaned LinkedIn URL to the contact (matched by email).

🔑 Required Credentials

- **HubSpot App Token (Private App)** — for the Get/Update contact nodes.
- **HubSpot Developer OAuth (optional)** — if you use the HubSpot Trigger node for event-based runs.
- **Google Service Account** — for the Google Docs tool (share your playbook doc with this service account).
- **OpenRouter** — for the OpenRouter Chat Model used by the AI Agent.
- **SerpAPI** — for targeted Google searches from within the Agent.

🛠️ Setup Instructions

HubSpot
1. Create a Private App and copy the Access Token.
2. Add or confirm the contact property linkedinUrl (Text).
3. Plug the token into the HubSpot nodes.
4. If using the HubSpot Trigger, connect your Developer OAuth app and subscribe to contact create/update events.

Google Docs (Living Instructions) ➡️ Sample configuration doc file
1. Copy the sample doc and modify it to your needs.
2. Share the doc with your Google Service Account (Viewer is fine).
3. In the Read Google Docs node, paste the Document URL.

OpenRouter & SerpAPI
1. Add your OpenRouter key to the OpenRouter Chat Model credential.
2. Add your SerpAPI key to the SerpAPI tool node.
3. (Optional) In your Google Doc or Agent prompt, set sensible defaults for SerpAPI (engine=google, hl=en, gl=us, num=5, max 1–2 searches).

✨ What you get

- Auto-enriched contacts with a LinkedIn URL and profile insights (clean, validated output).
- A research process you can change anytime by editing the Google Doc — no workflow changes needed.
- Tight, low-noise searches via SerpAPI to keep costs down.

And that's it — publish and let the Agent enrich new leads automatically while you refine the rules in your doc. This also makes it easy to hand the workflow off to a team who wouldn't necessarily tweak the automation nodes.
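For the "Code – Remove Think Part" step, here is a minimal sketch of what such a cleanup node can look like. The `output` field name is an assumption, so align it with your Agent node's actual output:

```javascript
// n8n Code node — a sketch of the "Remove Think Part" cleanup step.
// Strips hidden <think>…</think> reasoning blocks and leftover formatting
// so only the final answer (the LinkedIn URL) remains.
// The `output` field name is an assumption; match your Agent node's output.
const raw = String($input.first().json.output ?? '');

const cleaned = raw
  .replace(/<think>[\s\S]*?<\/think>/gi, '') // drop reasoning blocks
  .replace(/^```[a-z]*\s*|```\s*$/g, '')     // drop stray code fences
  .trim();

return [{ json: { linkedinUrl: cleaned } }];
```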
by vinci-king-01
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works

This workflow automatically monitors trending topics across multiple platforms and generates content strategy insights for marketing teams.

Key Steps

1. **Daily Trigger** - Runs automatically every 24 hours to capture fresh trends and viral content.
2. **Multi-Platform Scraping** - Uses AI-powered scrapers to analyze trends from LinkedIn, Twitter, Instagram, Google Trends, BuzzSumo, and Reddit.
3. **Trend Analysis** - Processes collected data to identify viral patterns, engagement metrics, and content opportunities.
4. **Content Strategy Generation** - Creates actionable insights for content planning and social media strategy.
5. **Team Notifications** - Sends comprehensive reports to Slack and updates content calendars in Google Sheets.

Set up steps

Setup time: 10-15 minutes

1. **Configure ScrapeGraphAI credentials** - Add your ScrapeGraphAI API key for AI-powered trend scraping.
2. **Set up Slack connection** - Connect your Slack workspace for team notifications.
3. **Configure Google Sheets** - Set up a Google Sheets connection for content calendar updates.
4. **Customize target industries** - Modify the configuration to focus on your specific industry verticals (AI, marketing, tech, etc.).
5. **Adjust monitoring frequency** - Change the trigger timing based on your content planning needs.

What you get

- **Daily trend reports** with viral content analysis and engagement metrics
- **Content opportunity scores** for different platforms and topics
- **Automated content calendar updates** with trending topics and suggested content
- **Team notifications** with key insights and actionable recommendations
- **Competitive analysis** of viral content patterns and successful strategies
by Madame AI
Scrape Detailed GitHub Profiles to Google Sheets Using BrowserAct

This template is a data enrichment and reporting tool that scrapes detailed GitHub user profiles and organizes the information into dedicated, structured reports within a Google Sheet.

This workflow is built for technical recruiters, talent acquisition teams, and business intelligence analysts who need to dive deep into a pre-qualified list of developers to understand their recent activity, repositories, and technical footprint.

Self-Hosted Only

This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only.

How it works

1. The workflow is triggered manually, but it can also be started by a Schedule Trigger or by integrating directly with a candidate sourcing workflow (like the "Source Top GitHub Contributors" template).
2. A Google Sheets node reads a list of target GitHub user profile URLs from a master candidate sheet.
3. The Loop Over Items node processes each user one by one.
4. A Slack notification is sent at the beginning of the loop to announce that the scraping process has started for the user.
5. A BrowserAct node visits the user's GitHub profile URL and scrapes all available data, including profile info, repositories, and social links.
6. A custom Code node (labeled "Code in JavaScript") performs a critical task: it cleans, fixes, and consolidates the complex, raw scraped data into a single, clean JSON object (a sketch of this step appears at the end of this description).
7. The workflow then dynamically manages your output: it creates a new sheet dedicated to the user (named after them) and clears it to ensure a fresh report every time.
8. The consolidated data is separated into three paths: main profile data, links, and repositories.
9. Three final Google Sheets nodes append the structured data to the user's dedicated sheet, creating a clear, multi-section report (User Data, User Links, User Repositories).

Requirements

- **BrowserAct** API account for web scraping
- BrowserAct **"Scraping GitHub Users Activity & Data"** template
- Output of the BrowserAct **"Source Top GitHub Contributors by Language & Location"** template
- **BrowserAct** n8n Community Node (n8n Nodes BrowserAct)
- **Google Sheets** credentials for input (candidate list) and structured output (individual user sheets)
- **Slack** credentials for sending notifications

Need Help?

- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

Workflow Guidance and Showcase
- GitHub Data Mining: Extracting User Profiles & Repositories with n8n
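For the consolidation step, here is a minimal sketch of what the "Code in JavaScript" node can look like. The raw payload structure (`profile`, `social_links`, `repositories`) is an assumption; adjust the paths to match your BrowserAct template's actual output:

```javascript
// n8n Code node — consolidates raw BrowserAct output into one clean object.
// The payload structure is an assumption; adjust to your scraper's output.
const raw = $input.first().json;

// Parse stringified sections defensively — scrapers often return JSON-as-text.
const safeParse = (v) => {
  if (typeof v !== 'string') return v ?? {};
  try { return JSON.parse(v); } catch { return {}; }
};

const profile = safeParse(raw.profile);
const links = safeParse(raw.social_links);
const repos = safeParse(raw.repositories);

return [{
  json: {
    profile,                                        // main profile data
    links: Array.isArray(links) ? links : [],       // social / external links
    repositories: Array.isArray(repos) ? repos : [] // repository list
  },
}];
```

Splitting the consolidated object into the three downstream paths (User Data, User Links, User Repositories) then becomes a matter of pointing each Google Sheets node at one of these keys.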
by Ranjan Dailata
This workflow automatically scrapes Amazon price-drop data via Decodo, extracts structured product details with OpenAI, generates summaries and sentiment insights for each item, and saves everything to Google Sheets — creating a fully automated price-intelligence pipeline.

Disclaimer

Please note: this workflow is only available on n8n self-hosted, as it makes use of the community node for Decodo web scraping.

Who this is for

This workflow is designed for e-commerce analysts, product researchers, price-tracking teams, and affiliate marketers who want to:

- Monitor daily Amazon product price drops automatically.
- Extract key information such as product name, price, discount, and links.
- Generate AI-driven summaries and sentiment insights on the latest deals.
- Store all structured data directly in Google Sheets for trend analysis and reporting.

What problem this workflow solves

- Eliminates the need for manual data scraping or tracking.
- Turns unstructured web data into structured datasets.
- Adds AI-generated summaries and sentiment analysis for smarter decision-making.
- Enables automated, daily price intelligence tracking across multiple product categories.

What this workflow does

This automation combines Decodo's web scraping, OpenAI GPT-4.1-mini, and Google Sheets to deliver an end-to-end price intelligence system.

1. **Trigger & Setup**: Manually start the workflow and input your price-drop URL (default: CamelCamelCamel Daily Drops).
2. **Web Scraping via Decodo**: Decodo scrapes the Amazon price-drop listings and extracts product details (title, price, savings, product link).
3. **LLM-Powered Data Structuring**: The extracted content is sent to OpenAI GPT-4.1-mini to format and clean the output into structured JSON fields.
4. **Loop & Deep Analysis**: Each product URL is revisited by Decodo for content enrichment, and the AI performs two analyses per product:
   - **Summarization**: Generates a comprehensive summary of the product.
   - **Sentiment Analysis**: Detects tone (positive/neutral/negative), sentiment score, and key topics.
5. **Aggregation & Storage**: All enriched results are merged, aggregated, and automatically appended to a connected Google Sheet.

End Result: A ready-to-use dataset showing each price-dropped product, its summary, sentiment polarity, and key highlights, refreshed on every run.

Setup

Pre-requisite: install the n8n community node for Decodo.

1. **Import and Connect Credentials**
   - Import the workflow into your n8n self-hosted instance.
   - Connect:
     - **OpenAI API (GPT-4.1-mini)** → for summarization and sentiment analysis
     - **Decodo API** → for real-time price-drop scraping
     - **Google Sheets OAuth2** → to save structured results
2. **Configure Input Fields**
   - In the "Set input fields" node, update the price_drop_url to your target URL (e.g., https://camelcamelcamel.com/top_drops?t=weekly).
3. **Run the Workflow**
   - Click "Execute Workflow", or schedule it to run daily to automatically fetch and analyze new price-drop listings.
4. **Check Output**
   - The aggregated data is saved to a Google Sheet (Pricedrop Info). Each record contains:
     - Product name
     - Current price and savings
     - Product link
     - AI-generated summary
     - Sentiment classification and score

How to customize this workflow

- **Change Source**: Replace the price_drop_url with another CamelCamelCamel or Amazon Deals URL, or add multiple URLs and loop through them for category-based price tracking.
- **Modify Extraction Schema**: In the Structured Output Parser, modify the JSON schema to include fields like category, brand, rating, or availability (see the example at the end of this description).
- **Tune AI Prompts**: Edit the Summarize Content and Sentiment Analysis nodes to add tone analysis (e.g., promotional vs. factual) or include competitive product comparison.
- **Integrate More Destinations**: Replace Google Sheets with:
  - Airtable → for no-code dashboards
  - PostgreSQL/MySQL → for large-scale storage
  - Notion or Slack → for instant price-drop alerts
- **Automate Scheduling**: Add a Cron Trigger node to run this workflow daily or hourly.

Summary

This workflow creates a fully automated price intelligence system that:

- Scrapes Amazon product price drops via Decodo.
- Extracts structured data with OpenAI GPT-4.1-mini.
- Generates AI-powered summaries and sentiment insights.
- Updates a connected Google Sheet with each run.
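For the "Modify Extraction Schema" customization, here is an illustrative example of the kind of JSON the Structured Output Parser can be asked to produce. Only category, brand, rating, and availability come from the description above; the other field names are assumptions you should adapt to your own sheet:

```json
{
  "products": [
    {
      "product_name": "Example Product",
      "current_price": "$24.99",
      "savings": "35%",
      "product_link": "https://camelcamelcamel.com/product/...",
      "category": "Electronics",
      "brand": "ExampleBrand",
      "rating": 4.5,
      "availability": "In Stock"
    }
  ]
}
```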
by Davide
This workflow automates the creation of short videos from multiple reference images (up to 7). It uses the "Vidu Reference to Video" model, a video generation API, to transform a user-provided prompt and image set into a consistent, AI-generated video, and then uploads the result to TikTok and YouTube. The process is initiated via a user-friendly web form.

Advantages

- ✅ **Consistent Video Creation**: Uses multiple reference images to maintain subject consistency across frames.
- ✅ **Easy Input**: Just a simple form with a prompt + image URLs.
- ✅ **Automation**: No manual waiting — the workflow polls the job status until the video is ready.
- ✅ **SEO Optimization**: Automatically generates a catchy, optimized YouTube title using AI.
- ✅ **Multi-Platform Publishing**: Uploads directly to Google Drive, YouTube, and TikTok in one flow.
- ✅ **Time Saving**: Removes the repetitive tasks of video generation, download, and manual uploading.
- ✅ **Scalable**: Can run periodically or on demand — perfect for content creators and marketing teams.
- ✅ **UGC & Social Media Ready**: Designed for creating viral short videos optimized for platforms like TikTok and YouTube Shorts.

How It Works

1. **Form Trigger**: A user submits a web form with two key pieces of information: a text Prompt describing the desired video and a list of Reference images (URLs separated by commas or new lines).
2. **Data Processing**: The workflow converts the submitted image URLs from a text string into a proper array for the AI API (see the sketch at the end of this description).
3. **AI Video Generation**: The processed data (prompt and image array) is sent to the Fal.ai VIDU API endpoint (reference-to-video) to start the video generation job. This node returns a request_id.
4. **Status Polling**: The workflow enters a loop that periodically checks the status of the generation job using the request_id: it waits 60 seconds, checks whether the status is "COMPLETED", and repeats if not.
5. **Result Retrieval**: Once the video is ready, the workflow fetches the URL of the generated video file.
6. **Title Generation**: In parallel, the original user prompt is sent to an AI model (GPT-4o-mini via OpenRouter) to generate an optimized, engaging title for the social media post.
7. **Upload & Distribution**: The video file is downloaded from the generated URL. A copy is saved to a specified Google Drive folder for storage, and the video, along with the AI-generated title, is automatically uploaded to YouTube and TikTok via the Upload-Post.com API service.

Set Up Steps

This workflow requires configuration and API keys from three external services to function correctly.

Step 1: Configure Fal.ai for Video Generation
1. Create an account and obtain your API key.
2. In the "Create Video" HTTP node, edit the "Header Auth" credentials and set:
   - Name: Authorization
   - Value: Key YOUR_FAL_API_KEY (replace YOUR_FAL_API_KEY with your actual key)

Step 2: Configure Upload-Post.com for Social Media Uploads
1. Get an API key from your Upload-Post Manage API Keys dashboard (10 free uploads per month).
2. In both the "HTTP Request" (YouTube) and "Upload on TikTok" nodes, edit their "Header Auth" credentials and set:
   - Name: Authorization
   - Value: Apikey YOUR_UPLOAD_POST_API_KEY (replace YOUR_UPLOAD_POST_API_KEY with your actual key)
3. Crucial: In the body parameters of both upload nodes, find the user field and replace YOUR_USERNAME with the exact name of the social media profile you configured on Upload-Post.com (e.g., my_youtube_channel).
Step 3: Configure Google Drive (Optional Storage)
1. The "Upload Video" node is pre-configured to save the video to a Google Drive folder named "Fal.run".
2. Ensure your Google Drive credentials in n8n are valid and that you have access to this folder, or change the folderId parameter to your desired destination.

Step 4: Configure AI for Title Generation
1. The "Generate title" node uses OpenAI's gpt-5-mini model.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
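For the Data Processing step above, here is a minimal sketch of an n8n Code node that turns the form's free-text image list into an array. The input field name `Reference images` is an assumption; align it with your Form Trigger's field label:

```javascript
// n8n Code node — a sketch of the data-processing step.
// Splits the form's free-text URL list (commas or new lines) into an array.
// The input field name is an assumption; match your Form Trigger output.
const raw = $input.first().json['Reference images'] ?? '';

const imageUrls = raw
  .split(/[\n,]+/)                       // split on commas and/or new lines
  .map((u) => u.trim())                  // strip surrounding whitespace
  .filter((u) => u.startsWith('http'));  // keep only plausible URLs

// Vidu's reference-to-video model accepts up to 7 images.
return [{ json: { image_urls: imageUrls.slice(0, 7) } }];
```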
by vinci-king-01
Multi-Source RAG System with GPT-4 Turbo, News & Academic Papers Integration

How it works

This workflow provides an enterprise-grade RAG (Retrieval-Augmented Generation) system that intelligently searches multiple sources and generates AI-powered responses using GPT-4 Turbo.

Key Steps

1. **Form Input** - Collects user queries with customizable search scope, response style, and language preferences
2. **Intelligent Search** - Routes queries to appropriate sources (web, academic papers, news, internal documents)
3. **Data Aggregation** - Unifies and processes information from multiple sources with quality scoring (an illustrative sketch appears at the end of this description)
4. **AI Processing** - Uses GPT-4 Turbo to generate context-aware, source-grounded responses
5. **Response Enhancement** - Formats outputs in various styles (comprehensive, concise, technical, etc.)
6. **Multi-Channel Delivery** - Delivers results via webhook, email, Slack, and optional PDF generation

Data Sources & AI Models

Search Sources
- **Web Search**: Google, Bing, DuckDuckGo integration
- **Academic Papers**: arXiv, PubMed, Google Scholar
- **News Articles**: News API, RSS feeds, real-time news
- **Technical Documentation**: GitHub, Stack Overflow, documentation sites
- **Internal Knowledge**: Google Drive, Confluence, Notion integration

AI Models
- **GPT-4 Turbo**: Primary language model for response generation
- **Embedding Models**: For semantic search and similarity matching
- **Custom Prompts**: Specialized prompts for different response styles

Set up steps

Setup time: 15-20 minutes

1. **Configure API credentials** - Set up OpenAI API, ScrapeGraphAI, Google Drive, and other service credentials
2. **Set up search sources** - Configure academic databases, news APIs, and internal knowledge sources
3. **Connect analytics** - Link Google Sheets for usage tracking and performance monitoring
4. **Configure notifications** - Set up Slack channels and email templates for automated alerts
5. **Test the workflow** - Run sample queries to verify all components are working correctly

Keep detailed configuration notes in sticky notes inside your workflow.
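How the template scores sources internally is not spelled out above, so the following is purely an illustrative sketch of the unify-and-score pattern the Data Aggregation step describes. All field names and weights here are assumptions:

```javascript
// n8n Code node — an illustrative sketch of a "Data Aggregation" step.
// This is an assumed implementation of the unify-and-score pattern,
// not the template's actual logic; field names and weights are made up.
const SOURCE_WEIGHTS = { academic: 1.0, news: 0.8, web: 0.6 };

const results = $input.all().map((item) => {
  const r = item.json;
  const weight = SOURCE_WEIGHTS[r.sourceType] ?? 0.5;
  // Favor recent items: 1.0 for today, decaying toward 0 over a year.
  const recency = r.publishedAt
    ? Math.max(0, 1 - (Date.now() - new Date(r.publishedAt)) / (365 * 864e5))
    : 0.5;
  return { ...r, qualityScore: 0.7 * weight + 0.3 * recency };
});

// Highest-quality results first, ready for the GPT-4 Turbo prompt.
results.sort((a, b) => b.qualityScore - a.qualityScore);
return results.map((r) => ({ json: r }));
```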
by Robert Breen
Run an AI-powered degree audit for each senior student. This template reads student rows from Google Sheets, evaluates completed courses against hard-coded program requirements, and writes back an AI Degree Summary of what's still missing (major core, Gen Eds, major electives, and upper-division credits). It's designed for quick advisor/registrar review and SIS prototypes.

- **Trigger**: Manual — When clicking "Execute workflow"
- **Core nodes**: Google Sheets, OpenAI Chat Model, (optional) Structured Output Parser
- **Programs included**: Computer Science BS, Business Administration BBA, Psychology BA, Mechanical Engineering BS, Biology BS (Pre-Med), English Literature BA, Data Science BS, Nursing BSN, Economics BA, Graphic Design BFA

Who's it for

- **Registrars & advisors** who need fast, consistent degree checks
- **Student success teams** building prototype dashboards
- **SIS/EdTech builders** exploring AI-assisted auditing

How it works

1. Read seniors from Google Sheets (Senior_data) with: StudentID, Name, Program, Year, CompletedCourses.
2. The AI Agent compares CompletedCourses to the built-in requirements (per program) and computes Missing items plus a short Summary.
3. Write back to the same sheet using "Append or update" by StudentID (updates AI Degree Summary; you can also map the raw Missing array to a column if desired).

Example JSON (for one student):

```json
{
  "StudentID": "S001",
  "Program": "Computer Science BS",
  "Missing": [
    "GEN-REMAIN | General Education credits remaining | 6",
    "CS-EL-REM | CS Major Electives (200+ level) | 6",
    "UPPER-DIV | Additional Upper-Division (200+ level) credits needed | 18",
    "FREE-EL | Free Electives to reach 120 total credits | 54"
  ],
  "Summary": "All core CS courses are complete. Still need 6 Gen Ed credits, 6 CS electives, and 66 total credits overall, including 18 upper-division credits — prioritize 200/300-level CS electives."
}
```

Setup (2 steps)

1) Connect Google Sheets (OAuth2)
- In n8n → Credentials → New → Google Sheets (OAuth2) and sign in.
- In the Google Sheets nodes, select your spreadsheet and the Senior_data tab.
- Ensure your input sheet has at least: StudentID, Name, Program, Year, CompletedCourses.

2) Connect OpenAI (API Key)
- In n8n → Credentials → New → OpenAI API, paste your key.
- In the OpenAI Chat Model node, select that credential and a model (e.g., gpt-4o or gpt-5).

Requirements

- **Sheet columns**: StudentID, Name, Program, Year, CompletedCourses
- **CompletedCourses format**: pipe-separated IDs (e.g., GEN-101|GEN-103|CS-101)
- **Program labels**: should match the built-in list (e.g., Computer Science BS)
- **Credits/levels**: the template assumes upper-division ≥ 200-level (adjust the prompt if your policy differs)

Customization

- **Change requirements**: Edit the Agent's system message to update totals, core lists, elective credit rules, or level thresholds.
- **Store more output**: Map Missing to a new column (e.g., AI Missing List) or write rows to a separate sheet for dashboards (see the sketch at the end of this description).
- **Distribute results**: Email summaries to advisors/students (Gmail/Outlook), or generate PDFs for advising folders.
- **Add guardrails**: Extend the prompt to enforce residency, capstone, minor/cognate constraints, or per-college Gen Ed variations.

Best practices (per n8n guidelines)

- **Sticky notes are mandatory**: Include a yellow sticky note that contains this description and quick setup steps; add neutral sticky notes for per-step tips.
- **Rename nodes clearly**: e.g., "Get Seniors," "Degree Audit Agent," "Update Summary."
- **No hardcoded secrets**: Use credentials — not inline keys in HTTP or Code nodes.
- **Sanitize identifiers**: Don't ship personal spreadsheet IDs or private links in the published version.
- **Use a Set node for config**: Centralize user-tunable values (e.g., column names, tab names).

Troubleshooting

- **OpenAI 401/429**: Verify API key/billing; slow concurrency if rate-limited.
- **Empty summaries**: Check column names and that CompletedCourses uses |.
- **Program mismatch**: Align Program labels to those in the prompt (exact naming recommended).
- **Sheets auth errors**: Reconnect Google Sheets OAuth2 and re-select the spreadsheet/tab.

Limitations

- **Not an official audit**: It infers gaps from the listed completions; registrar rules can be more nuanced.
- **Catalog drift**: Requirements are hard-coded in the prompt — update them each term/year.
- **Upper-division heuristic**: Adjust the level threshold if your institution defines it differently.

Tags & category

- Category: Education / Student Information Systems
- Tags: degree-audit, registrar, google-sheets, openai, electives, upper-division, graduation-readiness

Changelog

- v1.0.0 — Initial release: Senior_data in/out, 10 programs, AI Degree Summary output, append/update by StudentID.

Contact

Need help tailoring this to your catalog (e.g., per-college Gen Eds, capstones, minors, PDFs/email)?

- 📧 rbreen@ynteractive.com
- 📧 robert@ynteractive.com
- 🔗 Robert Breen — https://www.linkedin.com/in/robert-breen-29429625/
- 🌐 ynteractive.com — https://ynteractive.com
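For the "Store more output" customization, here is a minimal sketch of a Code node that flattens the agent's JSON (the shape shown in the example above) into columns for the "Append or update" Google Sheets node. Where the agent output lands (`output` vs. the item root) is an assumption:

```javascript
// n8n Code node — flattens the audit JSON for the Sheets "Append or update".
// Field shape follows the example JSON above; the `output` wrapper is an
// assumption — check where your Agent node actually places its result.
return $input.all().map((item) => {
  const audit = item.json.output ?? item.json;
  return {
    json: {
      StudentID: audit.StudentID,
      'AI Degree Summary': audit.Summary,
      'AI Missing List': (audit.Missing ?? []).join('; '), // one readable cell
    },
  };
});
```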
by Khaisa Studio
Promo Seeker finds fresh, working promo codes and vouchers on the web so your team never misses a deal. This n8n workflow uses SerpAPI and the Decodo Scraper for real-time search, an agent powered by GPT-5 Mini for filtering and validation, and Chat Memory to keep context—saving time, reducing manual checks, and helping marketing or customer support teams deliver discounts faster to customers (and yes, it's better at hunting promos than your inbox).

💡 Why Use Promo Seeker?

- **Speed**: Saves hours per week by automatically finding and validating current promo codes, so you can publish deals faster.
- **Simplicity**: Eliminates manual searching across sites: no more copy-paste scavenger hunts.
- **Accuracy**: Reduces false positives by cross-checking results and keeping only working vouchers—fewer embarrassing "expired code" moments.
- **Edge**: Combines search APIs with an AI agent to surface hard-to-find, recently live offers—win over competitors who still rely on manual scraping.

⚡ Perfect For

- **Marketing teams**: Quickly populate newsletters, landing pages, or ads with valid promos.
- **Customer support**: Give verified discount codes to users without ping-ponging between tabs.
- **Deal aggregators & affiliates**: Discover fresh vouchers faster and boost conversion rates.

🔧 How It Works

- ⏱ **Trigger**: A user message via the chat webhook starts the search (Message node).
- 📎 **Process**: The agent queries SerpAPI and the Decodo Scraper to collect potential promo codes and voucher pages.
- 🤖 **Smart Logic**: The Promo Seeker Agent uses GPT-5 Mini with Chat Memory to filter for fresh, working promos and to verify validity and relevance.
- 💌 **Output**: Results are returned to the chat with clear, copy-ready promo codes and source links.
- 🗂 **Storage**: Chat Memory stores context and recent searches so the agent avoids repeating old results and can follow up with improved queries.

🔐 Quick Setup

1. Import the JSON file into your n8n instance.
2. Add credentials: SerpAPI, Azure OpenAI (GPT-5 Mini), Decodo API.
3. Customize: search parameters (brands, regions, validity window), the agent system message, and result formatting.
4. Update: the Azure OpenAI endpoint and API key in the GPT-5 Mini credentials; add your SerpAPI key and Decodo key.
5. Test: run a few queries like "latest Amazon promo" or "food delivery voucher" and confirm the returned codes are valid.

🧩 You'll Need

- An active n8n instance
- A SerpAPI account and API key
- Azure OpenAI (for GPT-5 Mini) with key and endpoint
- A Decodo account/API key

🛠️ Level Up Ideas

- Push verified promos to a Slack channel or email digest for the team.
- Add scheduled scans to detect newly expired codes and remove them from lists.
- Integrate with a CMS to auto-post verified deals to landing pages.

Made by: Khaisa Studio
Tags: promo, vouchers, discounts
Category: Marketing Automation
Need custom work? Contact Us
by Daiki Takayama
[Workflow Overview]

⚠️ Self-Hosted Only: This workflow uses the gotoHuman community node and requires a self-hosted n8n instance.

Who's It For

Content teams, bloggers, news websites, and marketing agencies who want to automate content creation from RSS feeds while maintaining editorial quality control. Perfect for anyone who needs to transform news articles into detailed blog posts at scale.

What It Does

This workflow automatically converts RSS feed articles into comprehensive, SEO-optimized blog posts using AI. It fetches articles from your RSS source, generates detailed content with GPT-4, sends drafts for human review via gotoHuman, and publishes approved articles to Google Docs with automatic Slack notifications to your team.

How It Works

1. Schedule Trigger runs every 6 hours to check for new RSS articles.
2. The RSS Read node fetches the latest articles from your feed.
3. Format RSS Data extracts key information (title, keywords, description).
4. Generate Article with AI creates a structured blog post using OpenAI GPT-4.
5. Structure Article Data formats the content with metadata.
6. Request Human Review sends the article for approval via gotoHuman.
7. Check Approval Status routes the workflow based on the review decision.
8. Create Google Doc and Add Article Content publish approved articles.
9. Send Slack Notification alerts your team with article details.

Requirements

- **OpenAI API key** with GPT-4 access
- **Google account** for Google Docs integration
- **gotoHuman account** for the human-in-the-loop approval workflow
- **Slack workspace** for team notifications
- **RSS feed URL** from your preferred source

How to Set Up

1. **Configure RSS Feed**: In the "RSS Read" node, replace the example URL with your RSS feed source.
2. **Connect OpenAI**: Add your OpenAI API credentials to the "OpenAI Chat Model" node.
3. **Set Up Google Docs**: Connect your Google account and optionally specify a folder ID for organized storage.
4. **Configure gotoHuman**: Add your gotoHuman credentials and create a review template for article approval.
5. **Connect Slack**: Authenticate with Slack and select the channel for notifications.
6. **Customize Content**: Modify the AI prompt in "Generate Article with AI" to match your brand voice and article structure.
7. **Adjust Schedule**: Change the trigger frequency in "Schedule Trigger" based on your content needs.

How to Customize

- **Article Style**: Edit the AI prompt to change tone, length, or structure.
- **Keywords & SEO**: Modify the "Format RSS Data" node to adjust the keyword extraction logic.
- **Publishing Destination**: Change from Google Docs to other platforms (WordPress, Notion, etc.).
- **Approval Workflow**: Customize the gotoHuman template to include specific review criteria.
- **Notification Format**: Adjust the Slack message template to include additional metadata.
- **Processing Volume**: Modify the Code node to process multiple RSS articles instead of just one (see the sketch at the end of this description).
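For the "Processing Volume" customization, here is a minimal sketch of a Code node that forwards several RSS items instead of only the first. The field names (`title`, `link`, `contentSnippet`) follow typical RSS Read output but should be verified against your feed:

```javascript
// n8n Code node — a sketch that passes through the N most recent RSS items
// instead of just one. Field names follow typical RSS Read output
// (title, link, contentSnippet); verify them against your feed.
const MAX_ARTICLES = 3; // tune to your publishing volume

return $input.all().slice(0, MAX_ARTICLES).map((item) => ({
  json: {
    title: item.json.title,
    link: item.json.link,
    description: item.json.contentSnippet ?? item.json.content ?? '',
  },
}));
```

Note that downstream nodes (AI generation, review, publishing) will then run once per article, so check your gotoHuman review volume and OpenAI costs before raising the limit.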
by Milan Vasarhelyi - SmoothWork
Video Introduction

Want to automate your inbox or need a custom workflow? 📞 Book a Call | 💬 DM me on LinkedIn

Workflow Overview

This workflow creates an intelligent AI chatbot that retrieves recipes from an external API through natural conversation. When users ask for recipes, the AI agent automatically determines when to use the recipe lookup tool, fetches real-time data from the API Ninjas Recipe API, and provides helpful, conversational responses. This demonstrates the powerful capability of API-to-API integration within n8n, allowing AI agents to access external data sources on demand.

Key Features

- **Intelligent Tool Calling**: The AI agent automatically decides when to use the HTTP Request Tool based on user queries.
- **External API Integration**: Connects to the API Ninjas Recipe API using Header Authentication for secure access.
- **Conversational Memory**: Maintains context across multiple turns for natural dialogue.
- **Dynamic Query Generation**: The AI model automatically generates the appropriate search query parameters based on user input.

Common Use Cases

- Build AI assistants that need access to real-time external data.
- Create chatbots with specialized knowledge from third-party APIs.
- Demonstrate API-to-API integration patterns for custom automation.
- Prototype AI agents with tool-calling capabilities.

Setup & Configuration

Required Credentials:

1. **OpenAI API**: Sign up at OpenAI and obtain an API key for the language model. Configure this in n8n's credential manager.
2. **API Ninjas**: Register at API Ninjas to get your free API key for the Recipe API (supports 400+ calls/day). This API uses Header Authentication with the header name "X-Api-Key".

Agent Configuration:

The AI Agent includes a system message instructing it to "Always use the recipe tool if i ask you for recipe." This ensures the agent leverages the external API when appropriate. The HTTP Request Tool is configured with the API endpoint (https://api.api-ninjas.com/v1/recipe) and set to accept query parameters automatically from the AI model. The tool description "Use the query parameter to specify the food, and it will return a recipe" helps the AI understand when and how to use it (a sketch of the equivalent raw request appears at the end of this description).

Language Model:

Currently configured to use OpenAI's gpt-5-mini, but you can change this to other compatible models based on your needs and budget.

Memory:

Uses a window buffer to maintain conversation context, enabling natural multi-turn conversations where users can ask follow-up questions.
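To see what the HTTP Request Tool does on the agent's behalf, here is a minimal sketch of the same call in plain JavaScript. The endpoint and X-Api-Key header come from the configuration described above; the query value is just an example, and the exact response fields should be verified against the API Ninjas docs:

```javascript
// A sketch of the raw request the HTTP Request Tool issues for the agent.
// Endpoint and header name are from the configuration above; replace
// YOUR_API_NINJAS_KEY with your own key.
const query = 'pasta carbonara'; // in the workflow, the AI model fills this in

const res = await fetch(
  `https://api.api-ninjas.com/v1/recipe?query=${encodeURIComponent(query)}`,
  { headers: { 'X-Api-Key': 'YOUR_API_NINJAS_KEY' } }
);

// Typically an array of recipe objects; verify the shape in the API docs.
const recipes = await res.json();
console.log(recipes);
```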
by Yaron Been
Automate Financial Operations with O3 CFO & GPT-4.1-mini Finance Team

This workflow builds a virtual finance department inside n8n. At the center is a CFO Agent (O3 model) who acts as a strategic leader. When a financial request comes in, the CFO interprets it, decides the strategy, and delegates to the specialist agents (each powered by GPT-4.1-mini for cost efficiency).

🟢 Section 1 – Entry & Leadership

Nodes:
- 💬 When chat message received → Entry point for user financial requests.
- 💼 CFO Agent (O3) → Acts as the Chief Financial Officer: interprets the request, decides the approach, and delegates tasks.
- 💡 Think Tool → Helps the CFO brainstorm and refine financial strategies.
- 🧠 OpenAI Chat Model CFO (O3) → High-level reasoning engine for strategic leadership.

✅ Beginner view: Think of this as your finance CEO's desk — requests land here, the CFO figures out what needs to be done, and the right specialists are assigned.

📊 Section 2 – Specialist Finance Agents

Each specialist is powered by GPT-4.1-mini (fast + cost-effective).

- 📈 Financial Planning Analyst → Builds budgets, forecasts, and financial models.
- 📚 Accounting Specialist → Handles bookkeeping, tax prep, and compliance.
- 🏦 Treasury & Cash Management Specialist → Manages liquidity, banking, and cash flow.
- 📊 Financial Analyst → Runs KPI tracking, performance metrics, and variance analysis.
- 💼 Investment & Risk Analyst → Performs investment evaluations, capital allocation, and risk management.
- 🔍 Internal Audit & Controls Specialist → Checks compliance, internal controls, and audits.

✅ Beginner view: This section is your finance department — every role you'd find in a real company, automated by AI.

📋 Section 3 – Flow of Execution

1. A user sends a request (e.g., "Create a financial forecast for Q1 2026").
2. The CFO Agent (O3) interprets it → "We need planning, analysis, and treasury."
3. It delegates tasks to the relevant specialists.
4. Specialists process in parallel, generating plans, numbers, and insights.
5. The CFO Agent compiles and returns a comprehensive financial report.

✅ Beginner view: The CFO is the conductor, and the specialists are the musicians. Together, they produce the financial "symphony."

📊 Summary Table

| Section | Key Roles | Model | Purpose | Beginner Benefit |
| --- | --- | --- | --- | --- |
| 🟢 Entry & Leadership | CFO Agent, Think Tool | O3 | Strategic direction | Acts like a real CFO |
| 📊 Finance Specialists | FP Analyst, Accounting, Treasury, FA, Investment, Audit | GPT-4.1-mini | Specialized tasks | Each agent = finance department role |
| 📋 Execution Flow | All connected | O3 + GPT-4.1-mini | Collaboration | Output = complete financial management |

🌟 Why This Workflow Rocks

- **Full finance department in n8n**
- **Strategic + execution separation** → O3 for the CFO, GPT-4.1-mini for the team
- **Cost-optimized** → Heavy lifting done by mini models
- **Scalable** → Easily add more finance roles (tax, payroll, compliance, etc.)
- **Practical outputs** → Reports, budgets, risk analyses, audit notes

👉 Example Use Case: "Generate a Q1 financial forecast with cash flow analysis and risk report."

1. The CFO reviews the request.
2. Financial Planning Analyst → Budget + Forecast.
3. Treasury Specialist → Cash flow modeling.
4. Investment Analyst → Risk review.
5. Audit Specialist → Compliance check.
6. The CFO delivers a packaged financial report back to you.