by vinci-king-01
**Multi-Source RAG System with GPT-4 Turbo, News & Academic Papers Integration**

This workflow provides an enterprise-grade RAG (Retrieval-Augmented Generation) system that intelligently searches multiple sources and generates AI-powered responses using GPT-4 Turbo.

**How it works**

Key steps:

1. **Form Input** - Collects user queries with customizable search scope, response style, and language preferences
2. **Intelligent Search** - Routes queries to appropriate sources (web, academic papers, news, internal documents)
3. **Data Aggregation** - Unifies and processes information from multiple sources with quality scoring
4. **AI Processing** - Uses GPT-4 Turbo to generate context-aware, source-grounded responses
5. **Response Enhancement** - Formats outputs in various styles (comprehensive, concise, technical, etc.)
6. **Multi-Channel Delivery** - Delivers results via webhook, email, Slack, and optional PDF generation

**Data Sources & AI Models**

Search sources:

- **Web Search**: Google, Bing, DuckDuckGo integration
- **Academic Papers**: arXiv, PubMed, Google Scholar
- **News Articles**: News API, RSS feeds, real-time news
- **Technical Documentation**: GitHub, Stack Overflow, documentation sites
- **Internal Knowledge**: Google Drive, Confluence, Notion integration

AI models:

- **GPT-4 Turbo**: Primary language model for response generation
- **Embedding Models**: For semantic search and similarity matching
- **Custom Prompts**: Specialized prompts for different response styles

**Set up steps**

Setup time: 15-20 minutes

1. **Configure API credentials** - Set up OpenAI API, ScrapeGraphAI, Google Drive, and other service credentials
2. **Set up search sources** - Configure academic databases, news APIs, and internal knowledge sources
3. **Connect analytics** - Link Google Sheets for usage tracking and performance monitoring
4. **Configure notifications** - Set up Slack channels and email templates for automated alerts
5. **Test the workflow** - Run sample queries to verify all components are working correctly

Keep detailed configuration notes in sticky notes inside your workflow.
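The "Data Aggregation" step above (unifying results with quality scoring) can be sketched as a small n8n Code-node function. This is a minimal illustration under assumptions, not the template's actual implementation: the source weights, the length bonus, and the field names (`source`, `snippet`) are all hypothetical.

```javascript
// Merge results from several sources and attach a simple quality score.
// Weights and field names are illustrative assumptions.
const SOURCE_WEIGHTS = { academic: 1.0, internal: 0.9, news: 0.8, web: 0.6 };

function scoreResult(result) {
  const base = SOURCE_WEIGHTS[result.source] ?? 0.5;
  // Prefer longer snippets (more context for the LLM), capped bonus of 0.2.
  const lengthBonus = Math.min(result.snippet.length / 1000, 1) * 0.2;
  return base + lengthBonus;
}

function aggregate(results, topN = 5) {
  return results
    .map(r => ({ ...r, quality: scoreResult(r) }))
    .sort((a, b) => b.quality - a.quality)
    .slice(0, topN);
}
```

The top-N items would then be concatenated into the GPT-4 Turbo prompt as grounded context.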
by Robert Breen
**Run an AI-powered degree audit for each senior student.**

This template reads student rows from Google Sheets, evaluates completed courses against hard-coded program requirements, and writes back an AI Degree Summary of what's still missing (major core, Gen Eds, major electives, and upper-division credits). It's designed for quick advisor/registrar review and SIS prototypes.

- **Trigger:** Manual — When clicking "Execute workflow"
- **Core nodes:** Google Sheets, OpenAI Chat Model, (optional) Structured Output Parser
- **Programs included:** Computer Science BS, Business Administration BBA, Psychology BA, Mechanical Engineering BS, Biology BS (Pre-Med), English Literature BA, Data Science BS, Nursing BSN, Economics BA, Graphic Design BFA

**Who's it for**

- **Registrars & advisors** who need fast, consistent degree checks
- **Student success teams** building prototype dashboards
- **SIS/EdTech builders** exploring AI-assisted auditing

**How it works**

1. Read seniors from Google Sheets (Senior_data) with: StudentID, Name, Program, Year, CompletedCourses.
2. The AI Agent compares CompletedCourses to built-in requirements (per program) and computes Missing items plus a short Summary.
3. Write back to the same sheet using "Append or update" by StudentID (updates AI Degree Summary; you can also map the raw Missing array to a column if desired).

Example JSON (for one student):

```json
{
  "StudentID": "S001",
  "Program": "Computer Science BS",
  "Missing": [
    "GEN-REMAIN | General Education credits remaining | 6",
    "CS-EL-REM | CS Major Electives (200+ level) | 6",
    "UPPER-DIV | Additional Upper-Division (200+ level) credits needed | 18",
    "FREE-EL | Free Electives to reach 120 total credits | 54"
  ],
  "Summary": "All core CS courses are complete. Still need 6 Gen Ed credits, 6 CS electives, and 66 total credits overall, including 18 upper-division credits — prioritize 200/300-level CS electives."
}
```

**Setup (2 steps)**

1. **Connect Google Sheets (OAuth2)**
   - In n8n → Credentials → New → Google Sheets (OAuth2) and sign in.
   - In the Google Sheets nodes, select your spreadsheet and the Senior_data tab.
   - Ensure your input sheet has at least: StudentID, Name, Program, Year, CompletedCourses.
2. **Connect OpenAI (API Key)**
   - In n8n → Credentials → New → OpenAI API, paste your key.
   - In the OpenAI Chat Model node, select that credential and a model (e.g., gpt-4o or gpt-5).

**Requirements**

- **Sheet columns:** StudentID, Name, Program, Year, CompletedCourses
- **CompletedCourses format:** pipe-separated IDs (e.g., GEN-101|GEN-103|CS-101)
- **Program labels:** should match the built-in list (e.g., Computer Science BS)
- **Credits/levels:** the template assumes upper-division ≥ 200-level (adjust the prompt if your policy differs)

**Customization**

- **Change requirements:** Edit the Agent's system message to update totals, core lists, elective credit rules, or level thresholds.
- **Store more output:** Map Missing to a new column (e.g., AI Missing List) or write rows to a separate sheet for dashboards.
- **Distribute results:** Email summaries to advisors/students (Gmail/Outlook), or generate PDFs for advising folders.
- **Add guardrails:** Extend the prompt to enforce residency, capstone, minor/cognate constraints, or per-college Gen Ed variations.

**Best practices (per n8n guidelines)**

- **Sticky notes are mandatory:** Include a yellow sticky note that contains this description and quick setup steps; add neutral sticky notes for per-step tips.
- **Rename nodes clearly:** e.g., "Get Seniors," "Degree Audit Agent," "Update Summary."
- **No hardcoded secrets:** Use credentials, not inline keys in HTTP or Code nodes.
- **Sanitize identifiers:** Don't ship personal spreadsheet IDs or private links in the published version.
- **Use a Set node for config:** Centralize user-tunable values (e.g., column names, tab names).

**Troubleshooting**

- **OpenAI 401/429:** Verify your API key/billing; slow concurrency if rate-limited.
- **Empty summaries:** Check column names and that CompletedCourses uses |.
- **Program mismatch:** Align Program labels to those in the prompt (exact naming recommended).
- **Sheets auth errors:** Reconnect Google Sheets OAuth2 and re-select the spreadsheet/tab.

**Limitations**

- **Not an official audit:** It infers gaps from the listed completions; registrar rules can be more nuanced.
- **Catalog drift:** Requirements are hard-coded in the prompt; update them each term/year.
- **Upper-division heuristic:** Adjust the level threshold if your institution defines it differently.

**Tags & category**

- Category: Education / Student Information Systems
- Tags: degree-audit, registrar, google-sheets, openai, electives, upper-division, graduation-readiness

**Changelog**

- v1.0.0 — Initial release: Senior_data in/out, 10 programs, AI Degree Summary output, append/update by StudentID.

**Contact**

Need help tailoring this to your catalog (e.g., per-college Gen Eds, capstones, minors, PDFs/email)?
📧 rbreen@ynteractive.com
📧 robert@ynteractive.com
🔗 Robert Breen — https://www.linkedin.com/in/robert-breen-29429625/
🌐 ynteractive.com — https://ynteractive.com
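The comparison the Agent performs via its prompt (pipe-separated CompletedCourses versus a program's requirement list) can also be expressed deterministically. This sketch is not the template's implementation — the audit is done by the LLM — and the sample core list is hypothetical; only the `GEN-101|GEN-103|CS-101` input format comes from the description.

```javascript
// Deterministic sketch of the core-requirement check for one program.
// CS_CORE is an illustrative list, not the template's actual catalog.
const CS_CORE = ['CS-101', 'CS-102', 'CS-201', 'CS-301'];

function missingCore(completedCourses, core = CS_CORE) {
  // CompletedCourses is pipe-separated, e.g. "GEN-101|GEN-103|CS-101".
  const done = new Set(completedCourses.split('|').map(c => c.trim()));
  return core.filter(course => !done.has(course));
}
```

A helper like this could back up the LLM audit with an exact check for the core-course portion, leaving credit totals and electives to the prompt.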
by Khaisa Studio
Promo Seeker finds fresh, working promo codes and vouchers on the web so your team never misses a deal. This n8n workflow uses SerpAPI and Decodo Scrapper for real-time search, an agent powered by GPT-5 Mini for filtering and validation, and Chat Memory to keep context—saving time, reducing manual checks, and helping marketing or customer support teams deliver discounts to customers faster (and yes, it's better at hunting promos than your inbox).

💡 **Why Use Promo Seeker?**

- **Speed:** Saves hours per week by automatically finding and validating current promo codes, so you can publish deals faster.
- **Simplicity:** Eliminates manual searching across sites: no more copy-paste scavenger hunts.
- **Accuracy:** Reduces false positives by cross-checking results and keeping only working vouchers, meaning fewer embarrassing "expired code" moments.
- **Edge:** Combines search APIs with an AI agent to surface hard-to-find, recently live offers and win over competitors who still rely on manual scraping.

⚡ **Perfect For**

- **Marketing teams:** Quickly populate newsletters, landing pages, or ads with valid promos.
- **Customer support:** Give verified discount codes to users without ping-ponging between tabs.
- **Deal aggregators & affiliates:** Discover fresh vouchers faster and boost conversion rates.

🔧 **How It Works**

- ⏱ **Trigger:** A user message via the chat webhook starts the search (Message node).
- 📎 **Process:** The agent queries SerpAPI and Decodo Scrapper to collect potential promo codes and voucher pages.
- 🤖 **Smart Logic:** The Promo Seeker Agent uses GPT-5 Mini with Chat Memory to filter for fresh, working promos and to verify validity and relevance.
- 💌 **Output:** Results are returned to the chat with clear, copy-ready promo codes and source links.
- 🗂 **Storage:** Chat Memory stores context and recent searches so the agent avoids repeating old results and can follow up with improved queries.

🔐 **Quick Setup**

1. Import the JSON file to your n8n instance.
2. Add credentials: SerpAPI, Azure OpenAI (GPT-5 Mini), Decodo API.
3. Customize: search parameters (brands, regions, validity window), the agent system message, and result formatting.
4. Update: the Azure OpenAI endpoint and API key in the GPT-5 Mini credentials; add your SerpAPI key and Decodo key.
5. Test: run a few queries like "latest Amazon promo" or "food delivery voucher" and confirm the returned codes are valid.

🧩 **You'll Need**

- An active n8n instance
- A SerpAPI account and API key
- Azure OpenAI (for GPT-5 Mini) with key and endpoint
- A Decodo account/API key

🛠️ **Level Up Ideas**

- Push verified promos to a Slack channel or email digest for the team.
- Add scheduled scans to detect newly expired codes and remove them from lists.
- Integrate with a CMS to auto-post verified deals to landing pages.

Made by: Khaisa Studio
Tags: promo, vouchers, discounts
Category: Marketing Automation
Need custom work? Contact Us
by Daiki Takayama
[Workflow Overview]

⚠️ **Self-Hosted Only:** This workflow uses the gotoHuman community node and requires a self-hosted n8n instance.

**Who's It For**

Content teams, bloggers, news websites, and marketing agencies who want to automate content creation from RSS feeds while maintaining editorial quality control. Perfect for anyone who needs to transform news articles into detailed blog posts at scale.

**What It Does**

This workflow automatically converts RSS feed articles into comprehensive, SEO-optimized blog posts using AI. It fetches articles from your RSS source, generates detailed content with GPT-4, sends drafts for human review via gotoHuman, and publishes approved articles to Google Docs with automatic Slack notifications to your team.

**How It Works**

1. **Schedule Trigger** runs every 6 hours to check for new RSS articles
2. **RSS Read** fetches the latest articles from your feed
3. **Format RSS Data** extracts key information (title, keywords, description)
4. **Generate Article with AI** creates a structured blog post using OpenAI GPT-4
5. **Structure Article Data** formats the content with metadata
6. **Request Human Review** sends the article for approval via gotoHuman
7. **Check Approval Status** routes the workflow based on the review decision
8. **Create Google Doc** and **Add Article Content** publish approved articles
9. **Send Slack Notification** alerts your team with article details

**Requirements**

- **OpenAI API key** with GPT-4 access
- **Google account** for Google Docs integration
- **gotoHuman account** for the human-in-the-loop approval workflow
- **Slack workspace** for team notifications
- **RSS feed URL** from your preferred source

**How to Set Up**

1. **Configure RSS Feed:** In the "RSS Read" node, replace the example URL with your RSS feed source
2. **Connect OpenAI:** Add your OpenAI API credentials to the "OpenAI Chat Model" node
3. **Set Up Google Docs:** Connect your Google account and optionally specify a folder ID for organized storage
4. **Configure gotoHuman:** Add your gotoHuman credentials and create a review template for article approval
5. **Connect Slack:** Authenticate with Slack and select the channel for notifications
6. **Customize Content:** Modify the AI prompt in "Generate Article with AI" to match your brand voice and article structure
7. **Adjust Schedule:** Change the trigger frequency in "Schedule Trigger" based on your content needs

**How to Customize**

- **Article Style:** Edit the AI prompt to change tone, length, or structure
- **Keywords & SEO:** Modify the "Format RSS Data" node to adjust keyword extraction logic
- **Publishing Destination:** Change from Google Docs to other platforms (WordPress, Notion, etc.)
- **Approval Workflow:** Customize the gotoHuman template to include specific review criteria
- **Notification Format:** Adjust the Slack message template to include additional metadata
- **Processing Volume:** Modify the Code node to process multiple RSS articles instead of just one
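The "Format RSS Data" step (extracting title, keywords, and description from each feed item) can be sketched as a Code-node helper. This is a naive illustration, not the template's actual extraction logic; the stop-word list, the minimum word length, and the `contentSnippet` field name are assumptions.

```javascript
// Pull title, description, and naive keywords from one RSS item.
// Stop-word list and field names are illustrative assumptions.
const STOP_WORDS = new Set(['the', 'a', 'an', 'and', 'of', 'to', 'in', 'for']);

function formatRssItem(item) {
  const keywords = item.title
    .toLowerCase()
    .split(/\W+/)
    .filter(w => w.length > 3 && !STOP_WORDS.has(w));
  return {
    title: item.title,
    description: item.contentSnippet ?? '',
    keywords,
  };
}
```

The resulting `keywords` array would feed the AI prompt so generated articles stay on topic and carry SEO terms.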
by Milan Vasarhelyi - SmoothWork
Video Introduction

Want to automate your inbox or need a custom workflow? 📞 Book a Call | 💬 DM me on LinkedIn

**Workflow Overview**

This workflow creates an intelligent AI chatbot that retrieves recipes from an external API through natural conversation. When users ask for recipes, the AI agent automatically determines when to use the recipe lookup tool, fetches real-time data from the API Ninjas Recipe API, and provides helpful, conversational responses. This demonstrates the powerful capability of API-to-API integration within n8n, allowing AI agents to access external data sources on demand.

**Key Features**

- **Intelligent Tool Calling:** The AI agent automatically decides when to use the HTTP Request Tool based on user queries
- **External API Integration:** Connects to the API Ninjas Recipe API using Header Authentication for secure access
- **Conversational Memory:** Maintains context across multiple turns for natural dialogue
- **Dynamic Query Generation:** The AI model automatically generates the appropriate search query parameters based on user input

**Common Use Cases**

- Build AI assistants that need access to real-time external data
- Create chatbots with specialized knowledge from third-party APIs
- Demonstrate API-to-API integration patterns for custom automation
- Prototype AI agents with tool-calling capabilities

**Setup & Configuration**

Required credentials:

- **OpenAI API:** Sign up at OpenAI and obtain an API key for the language model. Configure this in n8n's credential manager.
- **API Ninjas:** Register at API Ninjas to get your free API key for the Recipe API (supports 400+ calls/day). This API uses Header Authentication with the header name "X-Api-Key".

Agent configuration:

- The AI Agent includes a system message instructing it to "Always use the recipe tool if i ask you for recipe." This ensures the agent leverages the external API when appropriate.
- The HTTP Request Tool is configured with the API endpoint (https://api.api-ninjas.com/v1/recipe) and set to accept query parameters automatically from the AI model. The tool description "Use the query parameter to specify the food, and it will return a recipe" helps the AI understand when and how to use it.

**Language Model:** Currently configured to use OpenAI's gpt-5-mini, but you can change this to other compatible models based on your needs and budget.

**Memory:** Uses a window buffer to maintain conversation context, enabling natural multi-turn conversations where users can ask follow-up questions.
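Outside n8n, the same HTTP Request Tool call looks like a plain `fetch` against the documented endpoint with the X-Api-Key header. The endpoint and header name come from the description above; the response-field names in the comment are assumptions based on the API's purpose, and `fetch` assumes Node 18+.

```javascript
// Build the Recipe API URL for a query generated by the agent.
function buildRecipeUrl(query) {
  return `https://api.api-ninjas.com/v1/recipe?query=${encodeURIComponent(query)}`;
}

// Minimal standalone version of the tool call (requires a real API key).
async function fetchRecipe(query, apiKey) {
  const res = await fetch(buildRecipeUrl(query), {
    headers: { 'X-Api-Key': apiKey },
  });
  if (!res.ok) throw new Error(`Recipe API returned ${res.status}`);
  return res.json(); // expected: an array of recipe objects
}
```

In the workflow, the agent fills in `query` itself; the sketch only makes the request shape explicit.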
by Yaron Been
**Automate Financial Operations with O3 CFO & GPT-4.1-mini Finance Team**

This workflow builds a virtual finance department inside n8n. At the center is a CFO Agent (O3 model) who acts as a strategic leader. When a financial request comes in, the CFO interprets it, decides the strategy, and delegates to the specialist agents (each powered by GPT-4.1-mini for cost efficiency).

🟢 **Section 1 – Entry & Leadership**

Nodes:

- 💬 **When chat message received** → Entry point for user financial requests.
- 💼 **CFO Agent (O3)** → Acts as the Chief Financial Officer. Interprets the request, decides the approach, and delegates tasks.
- 💡 **Think Tool** → Helps the CFO brainstorm and refine financial strategies.
- 🧠 **OpenAI Chat Model CFO (O3)** → High-level reasoning engine for strategic leadership.

✅ Beginner view: Think of this as your finance CEO’s desk — requests land here, the CFO figures out what needs to be done, and the right specialists are assigned.

📊 **Section 2 – Specialist Finance Agents**

Each specialist is powered by GPT-4.1-mini (fast + cost-effective).

- 📈 **Financial Planning Analyst** → Builds budgets, forecasts, and financial models.
- 📚 **Accounting Specialist** → Handles bookkeeping, tax prep, and compliance.
- 🏦 **Treasury & Cash Management Specialist** → Manages liquidity, banking, and cash flow.
- 📊 **Financial Analyst** → Runs KPI tracking, performance metrics, and variance analysis.
- 💼 **Investment & Risk Analyst** → Performs investment evaluations, capital allocation, and risk management.
- 🔍 **Internal Audit & Controls Specialist** → Checks compliance, internal controls, and audits.

✅ Beginner view: This section is your finance department — every role you’d find in a real company, automated by AI.

📋 **Section 3 – Flow of Execution**

1. User sends a request (e.g., “Create a financial forecast for Q1 2026”).
2. CFO Agent (O3) interprets it → “We need planning, analysis, and treasury.”
3. The CFO delegates tasks to the relevant specialists.
4. Specialists process in parallel, generating plans, numbers, and insights.
5. The CFO Agent compiles and returns a comprehensive financial report.

✅ Beginner view: The CFO is the conductor, and the specialists are the musicians. Together, they produce the financial “symphony.”

📊 **Summary Table**

| Section | Key Roles | Model | Purpose | Beginner Benefit |
| --- | --- | --- | --- | --- |
| 🟢 Entry & Leadership | CFO Agent, Think Tool | O3 | Strategic direction | Acts like a real CFO |
| 📊 Finance Specialists | FP Analyst, Accounting, Treasury, FA, Investment, Audit | GPT-4.1-mini | Specialized tasks | Each agent = finance department role |
| 📋 Execution Flow | All connected | O3 + GPT-4.1-mini | Collaboration | Output = complete financial management |

🌟 **Why This Workflow Rocks**

- **Full finance department in n8n**
- **Strategic + execution separation** → O3 for the CFO, GPT-4.1-mini for the team
- **Cost-optimized** → Heavy lifting done by mini models
- **Scalable** → Easily add more finance roles (tax, payroll, compliance, etc.)
- **Practical outputs** → Reports, budgets, risk analyses, audit notes

👉 **Example Use Case:** “Generate a Q1 financial forecast with cash flow analysis and risk report.”

1. CFO reviews the request.
2. Financial Planning Analyst → Budget + Forecast.
3. Treasury Specialist → Cash flow modeling.
4. Investment Analyst → Risk review.
5. Audit Specialist → Compliance check.
6. CFO delivers a packaged financial report back to you.
by Emilio Loewenstein
**Description**

Save hours of manual reporting with this end-to-end automation. This workflow pulls campaign performance data (demo or live), generates a clear AI-powered executive summary, and compiles everything into a polished weekly report. The report is formatted in Markdown, automatically stored in Google Docs, and instantly shared with your team via Slack — no spreadsheets, no copy-paste, no delays.

**What it does**

- ⏰ Runs on a schedule (e.g. every Monday morning)
- 📊 Collects performance metrics (Google Ads, Meta, TikTok, YouTube – demo data included)
- 🤖 Uses AI to summarize wins, issues, and recommendations
- 📝 Builds a structured Markdown report (totals, channel performance, top campaigns)
- 📄 Creates and updates a Google Doc with the report
- 💬 Notifies your team in Slack with topline numbers + a direct report link
- 📧 Optionally emails the report to stakeholders or clients

**Why it’s valuable**

- **Saves time** – no manual data aggregation
- **Standardizes reporting** – same format and quality every week
- **Adds insights** – AI highlights what matters most
- **Improves transparency** – instant access via Docs, Slack, or email
- **Scales easily** – adapt it to multiple clients or campaigns
- **Professional delivery** – branded, polished reports on autopilot

💡 Extra recommendation: Connect to a Google Docs template to give your reports a professional, branded look.
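The "structured Markdown report" step can be sketched as a Code-node function that assembles totals and a per-channel table. This is a hedged illustration: the metric names (`spend`, `clicks`) and layout are assumptions, not the template's exact fields.

```javascript
// Assemble a Markdown weekly report from per-channel metrics.
// Field names and layout are illustrative assumptions.
function buildReport(weekLabel, channels) {
  const totalSpend = channels.reduce((sum, c) => sum + c.spend, 0);
  const totalClicks = channels.reduce((sum, c) => sum + c.clicks, 0);
  const rows = channels
    .map(c => `| ${c.name} | ${c.spend} | ${c.clicks} |`)
    .join('\n');
  return [
    `# Weekly Campaign Report (${weekLabel})`,
    `**Total spend:** ${totalSpend}  **Total clicks:** ${totalClicks}`,
    '',
    '| Channel | Spend | Clicks |',
    '| --- | --- | --- |',
    rows,
  ].join('\n');
}
```

The AI summary would then be prepended before the resulting Markdown is written to the Google Doc.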
by Punit
This n8n workflow automates generating and publishing LinkedIn posts that align with your personal brand tone and trending tech topics. It uses OpenAI to create engaging content and matching visuals, posts it directly to LinkedIn, and sends a confirmation via Telegram with post details.

🔑 **Key Features**

- 🏷️ **Random Hashtag Selection** – Picks a trending tag from a custom list for post inspiration.
- ✍️ **AI-Generated Content** – GPT-4o crafts a LinkedIn-optimized post in your personal writing style.
- 🖼️ **Custom Image Generation** – Uses OpenAI to generate a relevant image for visual appeal.
- 📤 **Direct LinkedIn Publishing** – Posts are made automatically to your profile with public visibility.
- 📩 **Telegram Notification** – You get a real-time Telegram alert with the post URL, tag, and timestamp.
- 📚 **Writing Style Alignment** – Past posts are injected as examples to maintain a consistent tone.

**Ideal Use Case:** Automate your daily or weekly LinkedIn presence with minimal manual effort while maintaining high-quality, relevant, and visually engaging posts.
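The "Random Hashtag Selection" step is small enough to show in full as a Code-node sketch; the tag list here is an example you would replace with your own trending topics.

```javascript
// Pick one trending tag at random to seed the post prompt.
// The TAGS list is an example; substitute your own topics.
const TAGS = ['#AI', '#DevOps', '#CloudComputing', '#OpenSource'];

function pickTag(tags = TAGS) {
  return tags[Math.floor(Math.random() * tags.length)];
}
```

The selected tag then flows into the GPT-4o prompt and into the Telegram confirmation message.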
by Nishant
Automated daily swing-trade ideas from end-of-day (EOD) data, scored by an LLM, logged to Google Sheets, and pushed to Telegram.

**What this workflow does**

- **Fetches EOD quotes** for a chosen stock universe (example: **NSE-100** via RapidAPI).
- **Cleans & filters** the universe using simple technical/quality gates (e.g., price/volume sanity, avoid illiquid names).
- **Packages market context** and feeds it to **OpenAI** with a strict **JSON schema** to produce **top swing-trade recommendations** (entry, target, stop, rationale).
- **Splits structured output** into rows and **logs** them to a **Google Sheet** for tracking.
- **Sends an alert** with the day's trade ideas to **Telegram** (channel or DM).

**Ideal for**

- Retail traders who want a daily, hands-off idea generator.
- PMs/engineers prototyping LLM-assisted quant sidekicks.
- Creators who publish daily trade notes to their audience.

**Tech stack**

- **n8n** (orchestration)
- **RapidAPI** (EOD quotes; pluggable data source)
- **OpenAI** (LLM for idea generation)
- **Google Sheets** (logging & performance tracking)
- **Telegram** (alerts)

**Prerequisites**

- RapidAPI key with access to an EOD quotes endpoint for your exchange.
- OpenAI API key.
- Google account with a Sheet named Trade_Recommendations_Tracker (or update the node).
- Telegram bot token (via @BotFather) and destination chat ID.

> You can replace any of the above vendors with equivalents (e.g., Alpha Vantage, Twelve Data, Polygon, etc.). Only the HTTP Request + Format nodes need tweaks.

**Environment variables**

| Key | Example | Used in |
| --- | --- | --- |
| RAPIDAPI_KEY | xxxxxxxxxxxxxxxxxxxxxxxx | HTTP Request (quotes) |
| OPENAI_API_KEY | sk-… | OpenAI node |
| TELEGRAM_BOT_TOKEN | 123456:ABC-DEF… | Telegram node |
| TELEGRAM_CHAT_ID | 5357385827 | Telegram node |

**Google Sheet schema**

Create a Sheet (tab: EOD_Ideas) with the headers:

Date, Symbol, Direction, Entry, Target, StopLoss, Confidence, Reason, SourceModel, UniverseTag

**Node map (name → purpose)**

1. **Trigger – Daily Market Close** → Fires daily after market close (e.g., 4:15 PM IST).
2. **Prepare Stock List (NSE 100)** → Provides stock symbols to analyze (static list or from a Sheet/API).
3. **Fetch EOD Data (RapidAPI)** → Gets EOD data for all symbols in one or batched calls.
4. **Format EOD Data** → Normalizes the API response to a clean array (symbol, close, high, low, volume, etc.).
5. **Filter Valid Stock Data** → Drops illiquid/invalid rows (e.g., volume > 200k, close > 50).
6. **Build LLM Prompt Input** → Creates compact market context & JSON instructions for the model.
7. **Generate Swing Trade Ideas (OpenAI)** → Returns strict JSON with top ideas.
8. **Split JSON Output (Trade-wise)** → Explodes the JSON array into individual items.
9. **Log Trade to Google Sheet** → Appends each idea as a row.
10. **Send Trade Alert to Telegram** → Publishes a concise summary to Telegram.
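The "Filter Valid Stock Data" gates can be written out as plain code. The thresholds (volume > 200k, close > 50) come from the node map above; the field names assume the normalized shape produced by "Format EOD Data" (symbol, close, volume).

```javascript
// Drop illiquid or invalid rows before building the LLM prompt.
// Thresholds are from the node description; field names are assumed.
function filterValidStocks(rows, minVolume = 200_000, minClose = 50) {
  return rows.filter(r =>
    Number.isFinite(r.close) &&
    Number.isFinite(r.volume) &&
    r.volume > minVolume &&
    r.close > minClose
  );
}
```

Keeping this gate deterministic (rather than asking the LLM to skip bad names) makes the prompt smaller and the idea quality more consistent.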
by Iternal Technologies
**Blockify® Technical Manual Data Optimization Workflow**

Blockify optimizes data for technical-manual RAG and agents, giving structure to unstructured data for ~78X accuracy when pairing Blockify Ingest and Blockify Distill.

- Learn more at https://iternal.ai/blockify
- Get free demo API access here: https://console.blockify.ai/signup
- Read the technical whitepaper here: https://iternal.ai/blockify-results
- See an example accuracy comparison here: https://iternal.ai/case-studies/medical-accuracy/

Blockify is a data optimization tool that takes messy, unstructured text, like hundreds of sales-meeting transcripts or long proposals, and intelligently optimizes it into small, easy-to-understand "IdeaBlocks." Each IdeaBlock is just a couple of sentences long and captures one clear idea, plus a built-in contextualized question and answer. With this approach, Blockify improves the accuracy of LLMs (Large Language Models) by an average aggregate of 78X, while shrinking the original mountain of text to about 2.5% of its size and keeping (even improving) the important information.

When Blockify's IdeaBlocks are compared with the usual method of breaking text into equal-sized chunks, the results are dramatic: answers pulled from the distilled IdeaBlocks are roughly 40X more accurate, and user searches return the right information about 52% more often. In short, Blockify lets you store less data, spend less on computing, and still get better answers, turning huge documents into a concise, high-quality knowledge base that anyone can search quickly.

Blockify works by processing chunks of text to create structured data from an unstructured source. Blockify® replaces the traditional "dump-and-chunk" approach with an end-to-end pipeline that cleans and organizes content before it ever hits a vector store. Admins first define who should see what, then the system ingests any file type—Word, PDF, slides, images—inside public cloud, private cloud, or on-prem. A context-aware splitter finds natural breaks, and a series of specially developed Blockify LLM models turns each segment into a draft IdeaBlock. GenAI systems fed with this curated data return sharper answers, hallucinate far less, and comply with security policies out of the box. The result: higher trust, lower operating cost, and a clear path to enterprise-scale RAG without the cleanup headaches that stall most AI rollouts.
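To make the contrast with "dump-and-chunk" concrete, here is a deliberately naive stand-in for a splitter that respects natural breaks: it packs whole paragraphs into segments under a size cap instead of cutting at fixed character offsets. Blockify's actual context-aware splitter is proprietary and far more sophisticated; this sketch only illustrates the principle.

```javascript
// Naive "natural breaks" splitter: never cuts inside a paragraph.
// A fixed-size chunker, by contrast, slices mid-sentence and mid-idea.
function splitOnParagraphs(text, maxChars = 1000) {
  const segments = [];
  let current = '';
  for (const para of text.split(/\n\s*\n/)) {
    // Start a new segment if adding this paragraph would exceed the cap.
    if (current && current.length + para.length > maxChars) {
      segments.push(current.trim());
      current = '';
    }
    current += para + '\n\n';
  }
  if (current.trim()) segments.push(current.trim());
  return segments;
}
```

Each segment would then be handed to the Blockify models to be distilled into an IdeaBlock.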
by Robert Breen
A hands-on starter workflow that teaches beginners how to:

- Pull rows from a Google Sheet
- Append a new record that mimics a form submission
- Generate AI-powered text with GPT-4o based on a “Topic” column
- Write the AI output back into the correct row using an update operation

Along the way you’ll learn the three essential Google Sheets operations in n8n (read → append → update), see how to pass sheet data into an OpenAI node, and document each step with sticky-note instructions—perfect for anyone taking their first steps in no-code automation.

0️⃣ **Prerequisites**

- **Google Sheets**
  1. Open Google Cloud Console → create/select a project.
  2. Enable the Google Sheets API under APIs & Services.
  3. Create an OAuth Desktop credential and connect it in n8n.
  4. Share the spreadsheet with the Google account linked to the credential.
- **OpenAI**
  1. Create a secret key at <https://platform.openai.com/account/api-keys>.
  2. In n8n → Credentials → New → choose OpenAI API and paste the key.
- **Sample sheet to copy** (make your own copy and use its link): <https://docs.google.com/spreadsheets/d/15i9WIYpqc5lNd5T4VyM0RRptFPdi9doCbEEDn8QglN4/edit?usp=sharing>

1️⃣ **Trigger**

- Manual Trigger – lets you run on demand while learning. (Swap for a Schedule or Webhook once you automate.)

2️⃣ **Read existing rows**

- **Node:** Get Rows from Google Sheets
- Reads every row from Sheet1 of your copied file.

3️⃣ **Generate a demo row**

- **Node:** Generate 1 Row of Data (Set node)
- Pretends a form was submitted: Name, Email, Topic, Submitted = "Yes"

4️⃣ **Append the new row**

- **Node:** Append Data to Google
- Operation append → writes to the first empty line.

5️⃣ **Create a description with GPT-4o**

- OpenAI Chat Model – uses your OpenAI credential.
- Write description (AI Agent) – prompt = the Topic.
- Structured Output Parser – forces JSON like: { "description": "…" }

6️⃣ **Update that same row**

- **Node:** Update Sheets data
- Operation update. Matches on the Email column to update the correct line.
- Writes the new Description cell returned by GPT-4o.

7️⃣ **Why this matters**

- Demonstrates the three core Google Sheets operations: read → append → update.
- Shows how to enrich sheet data with an AI step and push the result right back.
- Sticky notes provide inline docs so anyone opening the workflow understands the flow instantly.

👤 **Need help?**

Robert Breen – Automation Consultant
✉️ robert.j.breen@gmail.com
🔗 <https://www.linkedin.com/in/robert-breen-29429625/>
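Step 6's "match on the Email column, write the Description" behavior is pure data work, so here it is as a standalone sketch. The column names come from the steps above; the function shape itself is an illustration, since in the workflow the Google Sheets node performs this matching for you.

```javascript
// Mirror of the update operation: find the row whose Email matches
// and write the AI-generated description into it, leaving others untouched.
function applyDescription(rows, email, description) {
  return rows.map(row =>
    row.Email === email ? { ...row, Description: description } : row
  );
}
```

Seeing the matching logic spelled out makes it clear why Email must be unique per row for the update to land in the right place.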
by Avkash Kakdiya
**How it works**

This workflow automatically identifies users who started but did not complete the signup process. It runs on a fixed schedule, checks your database for inactive and incomplete users, and validates the results before proceeding. Each user is then processed individually to receive a personalized recovery email and be enrolled in a follow-up sequence. Finally, the workflow updates the database to avoid duplicate outreach and notifies the sales team in Slack.

**Step-by-step**

**Step 1: Run scheduled check and identify abandoned users**

- **Schedule Trigger** – Executes the workflow automatically every 24 hours.
- **Find Abandoned Users** – Queries Postgres for users marked as incomplete and inactive for over 24 hours.
- **If** – Confirms that valid user records exist before continuing.

**Step 2: Process users and send recovery emails**

- **Loop Over Items** – Processes users one at a time to avoid rate limits and execution errors.
- **PrepareEmail email** – Generates a personalized recovery email using a predefined template.
- **Send a message** – Sends the recovery email through Gmail.
- **Get a message** – Retrieves the sent email details for tracking and thread reference.
- **StartSequence email** – Adds the email to a follow-up sequence for engagement tracking.

**Step 3: Update records and notify the team**

- **Update rows in a table** – Marks the user as contacted to prevent duplicate recovery emails.
- **Alert Sales Team** – Sends a Slack notification with user details and recovery status.

**Why use this?**

- Recover users who abandon onboarding without manual follow-ups
- Ensure each user receives only one recovery email
- Keep your Postgres user data accurate and up to date
- Give sales teams real-time visibility via Slack alerts
- Improve signup completion and activation rates automatically
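The "Find Abandoned Users" condition (signup incomplete, inactive for over 24 hours, not yet contacted) can be expressed as plain code. In the template this is a Postgres query; this sketch applies the same logic in JavaScript, and the column names (`signup_completed`, `contacted`, `last_activity_at`) are assumptions, not the template's actual schema.

```javascript
// Select users who abandoned signup: incomplete, untouched for > 24h,
// and not already contacted (which is what the later update step sets).
function findAbandonedUsers(users, now = Date.now()) {
  const DAY_MS = 24 * 60 * 60 * 1000;
  return users.filter(u =>
    u.signup_completed === false &&
    !u.contacted &&
    now - new Date(u.last_activity_at).getTime() > DAY_MS
  );
}
```

The `contacted` check is what makes the daily run idempotent: once Step 3 marks a user, the next scheduled pass skips them.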