by Habeeb Mohammed
## Who's it for

This workflow is perfect for individuals who want to maintain detailed financial records without the overhead of complex budgeting apps. If you prefer natural language over data entry forms and want an AI assistant to handle the bookkeeping, this template is for you. It's especially useful for:

- People who want to track cash and online transactions separately
- Anyone who lends money to friends/family and needs debt tracking
- Users comfortable with Slack as their primary interface
- Those who prefer conversational interactions over manual spreadsheet updates

## What it does

This AI-powered finance tracker transforms your Slack workspace into a personal finance command center. Simply mention your bot with transactions in plain English (e.g., "₹500 cash food, borrowed ₹1000 from John"), and the AI agent will:

- Parse transactions using natural language understanding via Google Gemini
- Calculate balance changes for cash and online accounts
- Show a preview of changes before saving anything
- Update Google Sheets only after you approve
- Track debts (who owes you, who you owe, repayments)
- Send daily reminders at 11 PM with current balances and active debts

The workflow maintains conversational context using PostgreSQL memory, so you can say things like "yesterday's transactions" or "that payment to Sarah" and it understands the context.

## How it works

**Scheduled Daily Check-in (11 PM)**

- Fetches current balances from Google Sheets
- Retrieves all active debts
- Formats and sends a Slack message with a balance summary
- Prompts you to share the day's transactions

**AI Agent Transaction Processing**

When you mention the bot in Slack:

**Phase 1: Parse & Analyze**
- Extracts amount, payment type (cash/online), category (food, travel, etc.)
- Identifies transaction type (expense, income, borrowed, lent, repaid)
- Stores conversation context in PostgreSQL memory

**Phase 2: Calculate & Preview**
- Reads current balances from Google Sheets
- Calculates new balances based on the transactions
- Shows a formatted preview with projected changes
- Waits for your approval ("yes"/"no")

**Phase 3: Update Database (only after approval)**
- Logs transactions with unique IDs and timestamps
- Updates debt records with person names and status
- Recalculates and stores new balances
- Handles the debt lifecycle (Active → Settled)

**Phase 4: Confirmation**
- Sends a success message with updated balances
- Shows an active debts summary
- Includes the logging timestamp
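As an illustration of Phase 1, the agent's parse of the example message "₹500 cash food, borrowed ₹1000 from John" might be represented like this. This is a hypothetical shape that mirrors the Transactions tab columns from the setup below, not the agent's exact schema, and the values are illustrative:

```json
[
  {
    "amount": 500,
    "payment_type": "cash",
    "category": "food",
    "transaction_type": "expense",
    "person_name": null
  },
  {
    "amount": 1000,
    "payment_type": "cash",
    "category": "other",
    "transaction_type": "borrowed",
    "person_name": "John"
  }
]
```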
## Requirements

Essential services:

- n8n instance (self-hosted or cloud)
- Slack workspace with admin access
- Google account
- Google Gemini API key
- PostgreSQL database

Recommended: Claude AI model (mentioned in the workflow notes as a better alternative to Gemini).

## How to set up

### 1. Google Sheets Setup

Create a new Google Sheet with three tabs named exactly:

Balances tab:

| Date | Cash_Balance | Online_Balance | Total_Balance |
|------|--------------|----------------|---------------|

Transactions tab:

| Transaction_ID | Date | Time | Amount | Payment_Type | Category | Transaction_Type | Person_Name | Description | Added_At |
|----------------|------|------|--------|--------------|----------|------------------|-------------|-------------|----------|

Debts tab:

| Person_Name | Amount | Type | Date_created | Status | Notes |
|-------------|--------|------|--------------|--------|-------|

Add the header rows and one initial balance row in the Balances tab with today's date and starting amounts.

### 2. Slack App Setup

1. Go to api.slack.com/apps and create a new app
2. Under OAuth & Permissions, add these Bot Token Scopes: app_mentions:read, chat:write, channels:read
3. Install the app to your workspace
4. Copy the Bot User OAuth Token
5. Create a dedicated channel (e.g., #personal-finance-tracker)
6. Invite your bot to the channel

### 3. Google Gemini API

1. Visit ai.google.dev
2. Create an API key
3. Save it for the n8n credentials setup

### 4. PostgreSQL Database

Set up a PostgreSQL database (you can use the Supabase free tier):

1. Create a new project
2. Note down the connection details (host, port, database name, user, password)
3. The workflow will auto-create the required table

### 5. n8n Workflow Configuration

Import the workflow and configure:

**A. Credentials**

- **Google Sheets OAuth2**: Connect your Google account
- **Slack API**: Add your Bot User OAuth Token
- **Google Gemini API**: Add your API key
- **PostgreSQL**: Add the database connection details

**B. Update Node Parameters**

- All Google Sheets nodes: select your finance spreadsheet
- Slack nodes: select your finance channel
- Schedule Trigger: adjust the time if you prefer a different check-in hour (default: 11 PM)
- Postgres Chat Memory: change sessionKey to something unique (e.g., finance_tracker_your_name); keep tableName as n8n_chat_history_finance or rename it consistently

**C. Slack Trigger Setup**

1. Activate the "Bot Mention trigger" node
2. Copy the webhook URL from n8n
3. In the Slack App settings, go to Event Subscriptions
4. Enable events and paste the webhook URL
5. Subscribe to the bot event: app_mention
6. Save changes

### 6. Test the Workflow

1. Activate both workflow branches (scheduled and agent)
2. In your Slack channel, mention the bot: @YourBot ₹100 cash snacks
3. The bot should respond with a preview
4. Reply "yes" to approve
5. Verify that the Google Sheets are updated

## How to customize

**Change Transaction Categories**: Edit the AI Agent's system message to add or remove categories. Current categories: travel, food, entertainment, utilities, shopping, health, education, other.

**Modify Daily Check-in Time**: Change the Schedule Trigger's triggerAtHour value (0-23 in 24-hour format).

**Add Currency Support**: Replace ₹ with your currency symbol in the Format Daily Message code node and the AI Agent system prompt examples.

**Switch AI Models**: The workflow uses Google Gemini, but the notes recommend Claude. To switch, replace the "Google Gemini Chat Model" node, add Claude credentials, and connect it to the AI Agent node.

**Customize Debt Types**: Modify the AI Agent's system prompt to change the debt handling logic. Currently there are two types, I_Owe and They_Owe_Me; you can add more types or change the naming.

**Add More Payment Methods**: Current methods are cash and online. To add more (e.g., credit card), update the AI Agent prompt, modify the Balances sheet structure, and update the balance calculation logic.

**Change Approval Keywords**: Edit the AI Agent's Phase 2 approval logic to recognize different approval phrases.

**Add Spending Analytics**: Extend the daily check-in to calculate weekly/monthly spending summaries and category-wise breakdowns, using additional Code nodes to process the transaction history (see the sketch after the Pro Tips below).

## Important Notes

⚠️ **Never trigger with normal messages** - only use app mentions (@botname) to avoid infinite loops where the bot replies to its own messages.

💡 **Context Awareness** - the bot remembers conversation history, so you can reference "yesterday", "last week", or previous transactions naturally.

🔒 **Data Privacy** - all your financial data stays in your Google Sheets and PostgreSQL database. The AI only processes transaction text temporarily.

📊 **Backup Regularly** - export your Google Sheets periodically as a backup.
## Pro Tips

- Start with small test transactions to ensure everything works
- Use consistent person names for debt tracking
- The bot understands various formats: "₹500 cash food" = "paid 500 rupees in cash for food"
- You can batch transactions in one message: "₹100 travel, ₹200 food, ₹50 snacks"
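For the spending-analytics customization mentioned above, a Code node along these lines could aggregate the Transactions tab. This is a minimal sketch, assuming the sheet rows arrive as items from a preceding Google Sheets read node with the column names from the setup:

```javascript
// Hypothetical n8n Code node: sum the last 7 days of expenses per category.
// Assumes items come from a Google Sheets node reading the Transactions tab
// (columns: Date, Amount, Category, Transaction_Type).
const cutoff = Date.now() - 7 * 24 * 60 * 60 * 1000;
const totals = {};

for (const { json: row } of $input.all()) {
  if (row.Transaction_Type !== 'expense') continue;
  if (new Date(row.Date).getTime() < cutoff) continue;
  const category = row.Category || 'other';
  totals[category] = (totals[category] || 0) + Number(row.Amount);
}

return [{ json: { period: 'last_7_days', totals } }];
```

The output item could then be formatted into the daily check-in message alongside the balance summary.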
by Bhuvanesh R
The competitive edge, delivered. This Customer Intelligence Engine simultaneously analyzes the web, Reddit, and X/Twitter to generate a professional, actionable executive briefing.

## 🎯 Problem Statement

Traditional market research for Customer Intelligence (CI) is manual, slow, and often relies on surface-level social media scraping or expensive external reports. Service companies, like HVAC providers, struggle to efficiently synthesize vast volumes of online feedback (Reddit discussions, real-time tweets, web articles) to accurately diagnose systemic service gaps (e.g., scheduling friction, poor automated systems). This inefficiency leads to delayed strategic responses and missed opportunities to invest in high-impact solutions like AI voice agents.

## ✨ Solution

This workflow deploys a sophisticated multisource intelligence pipeline that runs on a scheduled or ad-hoc basis. It uses parallel processing to ingest data from three distinct source types (SERP API, Reddit, and X/Twitter), employs a zero-cost hybrid categorization method to semantically identify operational bottlenecks, and uses the Anthropic LLM to synthesize the findings into a clear, executive-ready strategic brief. The data is logged for historical analysis while the brief is dispatched for immediate action.

## ⚙️ How It Works (Multi-Step Execution)

**1. Ingestion and Parallel Processing (The Data Fabric)**

- **Trigger:** The workflow is initiated either on an ad-hoc basis via an n8n Form Trigger or on a schedule (Time Trigger).
- **Parallel Ingestion:** The workflow immediately splits into three parallel branches to fetch data simultaneously:
  - SERP API: captures authoritative content and industry commentary (Strategic Context).
  - Reddit (looping structure): fetches posts from multiple subreddits via an Aggregate Node workaround to get authentic user experiences (Qualitative Signal).
  - X/Twitter (HTTP Request): bypasses standard rate limits to capture real-time social complaints (Sentiment Signal).

**2. Analysis and Fusion (The Intelligence Layer)**

- **Cleanup and Labeling (Function Nodes):** Each branch uses dedicated Function Nodes to filter noise (e.g., low-score posts) and normalize the data by adding a source tag (e.g., 'Reddit').
- **Merge:** A Merge Node (Append Mode) fuses all three parallel streams into a single, unified dataset.
- **Hybrid Categorization (Function Node):** A single Function Node applies the hybrid categorization logic. This cost-free step semantically assigns a pain_point category (e.g., 'Call Hold/Availability') and a sentiment_score to every item, transforming raw text into labeled metrics (see the sketch after this section).

**3. Dispatch and Reporting (The Executive Output)**

- **Aggregation and Split (Function Node):** The final Function Node calculates the total counts, deduplicates the final results, and generates the comprehensive summaryString.
- **Data Logging:** The aggregated counts and metrics are appended to **Google Sheets** for historical logging.
- **LLM Input Retrieval (Function Node):** A final Function Node retrieves the summary data using the $items() helper (the serial route workaround).
- **AI Briefing:** The **Message a model** (Anthropic) node receives the summaryString and uses a strict HTML system prompt to synthesize the strategic brief, identifying the top pain points and suggesting AI features.
- **Delivery:** The **Gmail** node sends the final, professional HTML brief to the executive team.
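A minimal sketch of what the hybrid categorization node could look like: keyword rules assign a pain_point category and a crude sentiment_score with no LLM calls. The categories and keyword lists here are illustrative, not the exact ones shipped in the workflow:

```javascript
// Illustrative hybrid categorization: keyword rules instead of per-item LLM calls.
const RULES = [
  { category: 'Call Hold/Availability', keywords: ['on hold', 'no answer', 'voicemail', 'never picks up'] },
  { category: 'Scheduling Friction',    keywords: ['reschedule', 'appointment', 'no-show', 'time window'] },
  { category: 'Pricing Complaints',     keywords: ['overcharged', 'quote', 'expensive', 'hidden fee'] },
];
const NEGATIVE = ['terrible', 'worst', 'scam', 'frustrated', 'never again'];

return $input.all().map((item) => {
  const text = `${item.json.title || ''} ${item.json.text || ''}`.toLowerCase();
  const match = RULES.find((r) => r.keywords.some((k) => text.includes(k)));
  const hits = NEGATIVE.filter((w) => text.includes(w)).length;
  item.json.pain_point = match ? match.category : 'Other';
  item.json.sentiment_score = -hits; // 0 = neutral, more negative = angrier
  return item;
});
```

Because this runs as plain code over the merged dataset, it costs nothing per item, which is the point of the hybrid approach.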
## 🛠️ Setup Steps

**Credentials**

- **Anthropic:** Configure credentials for the language model (Claude) used in the Message a model node.
- **SERP API, Reddit, and X/Twitter:** Configure API keys/credentials for the data ingestion nodes.
- **Google Services:** Set up OAuth2 credentials for Google Sheets (for logging data) and Gmail (for email dispatch).

**Configuration**

- **Form Configuration:** If using the Form Trigger, ensure the Target Keywords and Target Subreddits are mapped correctly to the ingestion nodes.
- **Data Integrity:** Due to the serial route, ensure the Function (Get LLM Summary) node is correctly retrieving the LLM_SUMMARY_HOLDER field from the preceding node's output memory.

## ✅ Benefits

- **Proactive CI & Strategy:** Shifts market research from manual, reactive browsing to a proactive, scheduled data diagnostic.
- **Cost Efficiency:** Utilizes a zero-cost hybrid categorization method (Function Node) for intent analysis, avoiding expensive per-item LLM token costs.
- **Actionable Output:** Delivers a fully synthesized, HTML-formatted executive brief, ready for immediate presentation and strategic sales positioning.
- **High Reliability:** Employs parallel ingestion, API workarounds, and serial routing to ensure the complex workflow runs consistently and without failure.
by May Ramati Kroitero
# Automated Job Hunt with Tavily — Setup & Run Guide

**What this template does:** Automatically searches for recent job postings (example: "Software Engineering Intern"), extracts structured details from each posting using an AI agent + Tavily, bundles the results, and emails a single weekly digest.

Estimated setup time: ~30 minutes

## 1. Required credentials

Before you import or run the workflow, create/configure these credentials in your n8n instance:

- **OpenAI (Chat model)** — used by the OpenAI Chat Model and Message a model nodes. Add an OpenAI credential (name it e.g. OpenAi account) and paste your OpenAI API key.
- **Tavily API** — used by the Search in Tavily node. Add a Tavily credential (name it e.g. Tavily account) and add your Tavily API key.
- **Gmail (OAuth2)** — used by the Send a message node to deliver the digest email. Configure a Gmail OAuth2 credential and select it for the Gmail node (e.g. Gmail account).

## 2. Node-by-node configuration (what to check/change)

**Schedule Trigger**
- Node name: Schedule Trigger
- Configure the interval: daily or weekly (example: weekly, trigger at 08:00)
- Note: this is the workflow start. Adjust it to your preferred cadence.

**AI Agent**
- Node name: AI Agent
- Important first step: set the agent's prompt / system message.

**Search in Tavily (Tavily Tool node)**
- Node name: Tavily
- Query: user-editable field (example default: Roles posted this week for Software Engineering)
- Advice: keep the query under 400 chars; change it to target role/location keywords.
- Recommended options:
  - Search Depth: advanced (optional, better extraction)
  - Max Results: 15
  - Time Range: week (limit to the past week)
  - Include Raw Content: true (fetch full page content for better extraction)
  - Include Domains: indeed.com, glassdoor.com, linkedin.com — prioritize trusted sources

**Edit Fields / Set (bundle)**
- Node name: Edit Fields (Set)
- Purpose: collect the agent output into one field (e.g., $json.output or Response) for downstream processing.

**Message a model (OpenAI formatting step)**
- Node name: Message a model
- Uses OpenAI (the openAiApi credential). This node can be used to reformat or normalize the agent output into consistent blocks if needed.
- Use the same system rules you used for the agent (the prompt/system message earlier). You can also leave this minimal if the agent already outputs structured blocks.

**Code node (parsing & structuring)**
- Node name: Code
- Purpose: split the agent/LLM text into separate job postings and extract fields with regex (see the sketch after this section).

**Aggregate node**
- Node name: Aggregate
- Mode: aggregateAllItemData (this combines all parsed postings into a single data array so the Gmail node can loop over them)

**Gmail node (Send a message)**
- Node name: Send a message
- sendTo: set to your recipient(s) (e.g., your inbox)
- subject: e.g. New Jobs for this week!
- emailType: text (or html if you build HTML content)
- message (body): use the expression that loops through data and formats every posting.

## 3. How to test (quick steps)

1. Set credentials in n8n (OpenAI, Tavily, Gmail).
2. Run the Schedule Trigger manually (use "Execute Workflow" or trigger the nodes manually).
3. Inspect the Search in Tavily node output — confirm it returns results.
4. Inspect the AI Agent and Message a model outputs — ensure formatted postings are produced and separated by --- END JOB POSTING ---.
5. Run the Code node — confirm it returns structured items with posting_number, job_title, requirements[], etc.
6. Check the Aggregate output: you should see a single item with a data array.
7. In the Gmail node, run a test send — confirm the email arrives as one combined message with all postings.
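A minimal sketch of the Code node's parsing step, assuming the LLM output uses labeled lines (Job Title:, Company:, Requirements:) and the --- END JOB POSTING --- separator described above; the labels and field names are illustrative, so adjust the regexes to your actual formatting:

```javascript
// Split the LLM text into postings and pull out labeled fields with regex.
// Assumes the previous node returns the text in message.content (see troubleshooting).
const raw = $input.first().json.message?.content ?? '';
const blocks = raw.split('--- END JOB POSTING ---').map((b) => b.trim()).filter(Boolean);

// Grab the rest of a line that starts with "<label>:".
const grab = (block, label) =>
  (block.match(new RegExp(`${label}:\\s*(.+)`, 'i')) || [])[1]?.trim() ?? '';

return blocks.map((block, i) => ({
  json: {
    posting_number: i + 1,
    job_title: grab(block, 'Job Title'),
    company: grab(block, 'Company'),
    location: grab(block, 'Location'),
    // Requirements are assumed to be "- " bullet lines after the label.
    requirements: (block.split(/Requirements:/i)[1] || '')
      .split('\n')
      .map((l) => l.replace(/^[-*]\s*/, '').trim())
      .filter(Boolean),
  },
}));
```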
## 4. Troubleshooting tips

- **Gmail body shows [Array: …]:** avoid dragging the array in raw — use the expression that maps data to formatted strings.
- **Code node split error:** occurs when raw is undefined. Ensure the previous node returns message.content, or adjust the code to use $input.all() and join the contents safely.
- **Missing fields after parsing:** check that the LLM/agent output labels match the Code node's regex (e.g., Job Title:). If the labels differ, update the regex or the LLM formatting.

## 5. Customization ideas

- Filter by location or remote-only roles, or add keyword filters (seniority, stack).
- Send results to Google Sheets or Slack instead of/in addition to Gmail.
- Add an LLM summarization step to create a 1-line highlight per posting.
by gotoHuman
This workflow automatically classifies every new email from your linked mailbox, drafts a personalized reply, and creates Linear tickets for bugs or feature requests. It uses a human-in-the-loop with gotoHuman and continuously improves itself by learning from approved examples.

## How it works

- The workflow triggers on every new email from your linked mailbox.
- **Self-learning Email Classifier:** an AI model categorizes the email into defined categories (e.g., Bug Report, Feature Request, Sales Opportunity, etc.). It fetches previously approved classification examples from gotoHuman to refine decisions.
- **Self-learning Email Writer:** the AI drafts a reply to the email. It learns over time by using previously approved replies from gotoHuman, with per-classification context to tailor tone and style (e.g., a different style for sales vs. bug reports).
- **Human Review in gotoHuman:** review the classification and the drafted reply. Drafts can be edited or retried. Approved values are used to train the self-learning agents.
- **Send approved Reply:** the approved response is sent as a reply to the email thread.
- **Create ticket:** if the classification is Bug or Feature Request, a ticket is created by another AI agent in Linear.

## How to set up

1. Most importantly, install the gotoHuman node before importing this template! (Just add the node to a blank canvas before importing.)
2. Set up credentials for gotoHuman, OpenAI, your email provider (e.g. Gmail), and Linear.
3. In gotoHuman, select and create the pre-built review template "Support email agent" or import the ID: 6fzuCJlFYJtlu9mGYcVT.
4. Select this template in the gotoHuman node.
5. In the "gotoHuman: Fetch approved examples" HTTP nodes, add your formId. It is the ID of the review template that you just created/imported in gotoHuman.

## Requirements

- gotoHuman (human supervision, memory for self-learning)
- OpenAI (classification, drafting)
- Gmail or your preferred email provider (for the email trigger + replies)
- Linear (ticketing)

## How to customize

- Expand or refine the categories used by the classifier. Update the prompt to reflect your own taxonomy.
- Filter the fetched training data from gotoHuman by reviewer so the writer adapts to their personalized tone and preferences.
- Add more context to the AI email writer (calendar events, FAQs, product docs) to improve reply quality.
by Franz
# 🚀 AI Lead Generation and Follow-Up Template

## 📋 Overview

This n8n workflow template automates your lead generation and follow-up process using AI. It captures leads through a form, enriches them with company data, classifies them into different categories, and sends appropriate follow-up sequences automatically.

Key features:

- 🤖 AI-powered lead classification (Demo-ready, Nurture, Drop)
- 📊 Automatic lead enrichment with company data
- 📧 Intelligent email responses and follow-up sequences
- 📅 Automated demo scheduling for qualified leads
- 📝 Complete lead logging in Google Sheets
- 💬 AI assistant for immediate query responses

## 🛠️ Prerequisites

Before setting up this workflow, ensure you have:

- n8n instance: self-hosted or cloud version
- OpenAI API key: for the AI-powered features
- Google Workspace account with access to Gmail, Google Sheets, and Google Calendar
- A basic understanding of your Ideal Customer Profile (ICP)

## ⚡ Quick Start Guide

**Step 1: Import the Workflow**

1. Copy the workflow JSON
2. Import it into your n8n instance
3. The workflow will appear with all nodes connected

**Step 2: Configure Credentials**

You'll need to set up the following credentials:

- **OpenAI API**: for AI agents and classification
- **Gmail OAuth2**: for sending emails
- **Google Sheets OAuth2**: for lead logging
- **Google Calendar OAuth2**: for demo scheduling

**Step 3: Create Your Lead Log Sheet**

Create a Google Sheet with these columns: Date, Name, Email, Company, Job Title, Message, Number of Employees, Industry, Geography, Annual Revenue, Technology, Pain Points, Lead Classification.

**Step 4: Update Configuration Nodes**

1. Replace the Sheet ID: update all Google Sheets nodes with your sheet ID
2. Update the email templates: customize all email content
3. Set the escalation email: replace "your-email@company.com" with your team's email
4. Configure the ICP criteria: edit the "Define ICP and Lead Criteria" node

## 🎯 Lead Classification Setup

Edit the "Define ICP and Lead Criteria" node to set your criteria.

**📌 ICP Criteria Example:**

- Company size: 50+ employees
- Industry: SaaS, Finance, Healthcare, Manufacturing
- Geography: North America, Europe
- Pain points: manual processes, compliance needs, scaling challenges
- Annual revenue: $5M+

**✅ Demo-Ready Criteria** (high-intent prospects who meet multiple qualifying factors):

- Large company size (your threshold)
- Clear pain points mentioned
- Urgent timeline
- Budget authority indicated
- Specific solution requests

**🌱 Nurture Criteria** (prospects with future potential):

- Meet basic size requirements
- In a target industry
- General interest expressed
- Planning a future implementation
- Exploring options

**❌ Drop Criteria** (only drop leads that clearly don't fit):

- Outside target geography
- Wrong industry (B2C if you're B2B)
- Too small with no growth
- Already with a competitor
- Spam or test messages
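The classification is performed by the AI against these criteria, but purely as a rough illustration of the decision logic, the three buckets could be approximated in a Code node like this (field names and thresholds are hypothetical, not the template's schema):

```javascript
// Illustrative rule-of-thumb version of the Demo-ready / Nurture / Drop logic.
const TARGET_INDUSTRIES = ['SaaS', 'Finance', 'Healthcare', 'Manufacturing'];
const TARGET_GEOS = ['North America', 'Europe'];

function classify(lead) {
  // Drop: clearly outside the ICP.
  if (!TARGET_GEOS.includes(lead.geography)) return 'Drop';
  if (!TARGET_INDUSTRIES.includes(lead.industry)) return 'Drop';

  // Demo-ready: meets the size bar and shows multiple buying signals.
  const signals = [lead.painPoints?.length > 0, lead.urgentTimeline, lead.hasBudgetAuthority]
    .filter(Boolean).length;
  if (lead.employees >= 50 && signals >= 2) return 'Demo-ready';

  // Everything else with basic fit goes to nurture.
  return 'Nurture';
}

return $input.all().map((item) => {
  item.json.leadClassification = classify(item.json);
  return item;
});
```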
## 📧 Email Customization

Customize the follow-up sequences:

**Demo-Ready Sequence:**
1. Immediate calendar invitation
2. Personalized demo confirmation
3. Meeting reminder (optional)

**Nurture Sequence:**
1. Welcome email with resources
2. Educational content (Day 2)
3. Webinar/event invitation (Day 3)
4. Demo offer (Day 4)

**Drop Message:**
- Polite acknowledgment
- Clear explanation
- Keep the door open for the future

## 🔧 Advanced Configuration

**AI Answer Agent Setup:**
- Update the system prompt with your company information
- Add common Q&A patterns
- Set escalation rules
- Configure language preferences

**Lead Enrichment Options:**
- Add API keys for additional data sources
- Configure the enrichment fields
- Set data quality thresholds
- Enable duplicate detection

**Calendar Integration:**
- Set available meeting times
- Configure the meeting duration
- Add buffer times
- Set timezone handling

## 📊 Monitoring and Optimization

Track key metrics:

- Lead volume by classification
- Response rates
- Demo conversion rates
- Time to first response
- Enrichment success rate

Optimization tips:

- **Regular review:** check classification accuracy weekly
- **A/B testing:** test different email sequences
- **Feedback loop:** use outcomes to refine the ICP criteria
- **AI training:** update prompts based on results

## 🎉 Best Practices

1. **Start simple:** begin with basic criteria and refine over time
2. **Test thoroughly:** use test leads before going live
3. **Monitor daily:** check the logs for the first week
4. **Iterate quickly:** adjust based on results
5. **Document changes:** keep track of criteria updates

## 📈 Scaling Your Workflow

As your lead volume grows:

- **Add sub-workflows:** separate complex processes
- **Implement queuing:** handle high volumes
- **Add CRM integration:** sync with your sales tools
- **Enable analytics:** track detailed metrics
- **Set up alerts:** monitor for issues
by Vivekanand M
# Self-learning feedback loop for AI customer support email drafts with Gmail, OpenAI and PostgreSQL

Automatically compare AI-generated email drafts against what your support team actually sent, learn from the differences, and improve future drafts over time — without any model fine-tuning.

## What this workflow does

This is the second workflow in a two-part customer support automation system. The first workflow generates AI draft replies for incoming support emails. This workflow closes the loop — it runs every 3 hours, checks which drafts were reviewed and sent, compares them against the original AI output, and stores the human-edited versions as training examples.

The more this workflow runs, the smarter the first workflow becomes. When generating future drafts, the similarity search surfaces past human-approved responses — so the AI progressively learns what good answers look like for your specific support context.

## How it works

**Step 1 — Watermark and scheduling.** Every run starts by fetching the last_processed_sent_at timestamp from the previous completed run. Only Gmail Sent emails newer than this timestamp are fetched, so nothing gets processed twice. On the first-ever run it defaults to 7 days ago.

**Step 2 — Fetch and loop.** Sent emails are fetched from Gmail and processed one at a time. For each email, the full message body is retrieved via the Gmail API (the list endpoint only returns a preview snippet). The sent email's thread ID is matched against the ai_drafts table to find the corresponding AI draft.

**Step 3 — Match and skip logic.** Three things skip an email without processing: no matching AI draft found (the team sent something manually), the draft was already processed in a previous run, or the fetch returns no results. Only genuine unprocessed matches continue.

**Step 4 — AI comparison.** GPT-4o-mini compares the AI draft text against the human-sent text and returns a structured analysis: whether it was approved unchanged, the type of edit made (minor edits vs. major rewrite), a plain-English summary of what changed, and whether the edit implies missing or incorrect information in the knowledge base.

**Step 5 — Store the correction.** If the human made any edits, the pair (original email + human response) is embedded using OpenAI text-embedding-3-small and saved to the corrections table. This table is what the first workflow searches using vector cosine similarity when assembling future draft prompts (see the query sketch after this list).

**Step 6 — KB auto-update.** If the AI comparison flags that the human edit contained new information, the most relevant knowledge base entry for that category is fetched and rewritten by GPT-4o-mini to incorporate the new information. The previous answer is preserved in the previous_answer column for auditing.

**Step 7 — Run log.** Each run is logged to feedback_run_log with counts of emails checked, corrections saved, KB updates made and any errors. This log also serves as the watermark source for the next run.
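For context on Step 5, the similarity lookup in Workflow 1 is roughly the following pgvector query, where `<=>` is pgvector's cosine-distance operator and `$1` is the embedding of the incoming email. The column names are illustrative, since the full corrections schema comes from Workflow 1:

```sql
-- Illustrative pgvector lookup: the 3 past human-approved corrections
-- most similar to the incoming email's embedding.
SELECT original_email,
       human_response,
       email_embedding <=> $1 AS cosine_distance
FROM corrections
ORDER BY email_embedding <=> $1
LIMIT 3;
```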
## Setup steps

**Prerequisites**

- Gmail account (the same support inbox used by the main email workflow)
- OpenAI API key
- PostgreSQL database with the pgvector extension and the full schema from Workflow 1 already applied
- The main email automation workflow (Workflow 1) must be active and generating drafts

### 1. Apply the DB migration

Run the following against your existing database to add the columns this workflow needs:

```sql
ALTER TABLE ai_drafts
  ADD COLUMN IF NOT EXISTS email_embedding vector(1536),
  ADD COLUMN IF NOT EXISTS feedback_processed_at TIMESTAMPTZ,
  ADD COLUMN IF NOT EXISTS was_approved_as_is BOOLEAN DEFAULT FALSE;

ALTER TABLE corrections
  ADD COLUMN IF NOT EXISTS source TEXT DEFAULT 'feedback_loop',
  ADD COLUMN IF NOT EXISTS kb_updated BOOLEAN DEFAULT FALSE;

ALTER TABLE kb_data
  ADD COLUMN IF NOT EXISTS updated_by TEXT DEFAULT 'manual',
  ADD COLUMN IF NOT EXISTS previous_answer TEXT;

CREATE TABLE IF NOT EXISTS feedback_run_log (
  id SERIAL PRIMARY KEY,
  run_started_at TIMESTAMPTZ DEFAULT NOW(),
  run_completed_at TIMESTAMPTZ,
  last_processed_sent_at TIMESTAMPTZ,
  emails_checked INTEGER DEFAULT 0,
  approved_as_is INTEGER DEFAULT 0,
  corrections_saved INTEGER DEFAULT 0,
  kb_updates INTEGER DEFAULT 0,
  errors INTEGER DEFAULT 0,
  status TEXT DEFAULT 'running'
);
```

### 2. Configure credentials

| Node | Credential needed |
|---|---|
| Gmail - Fetch Sent Emails | Gmail OAuth2 |
| Gmail - Fetch Full Message | Gmail OAuth2 (HTTP Request with OAuth) |
| All DB nodes | PostgreSQL |
| OpenAI Chat Model - Compare | OpenAI API |
| AI - Rewrite KB Answer | OpenAI API |
| Generate Embedding - Human Sent | OpenAI API |

### 3. Check node connections

The splitInBatches loop node has two outputs — make sure they are connected correctly:

- **Output 0 (loop)** → DB - Match Thread ID
- **Output 1 (done)** → DB - Complete Run Log

All branch dead-ends (approved as-is, no KB update, KB updated) should feed back into the loop node's input to advance to the next item.

### 4. Activate

Toggle the workflow to active. It will run automatically on the 3-hour schedule. You can also trigger it manually to test.

## How it connects to Workflow 1

Once corrections start accumulating in the corrections table, Workflow 1's similarity search (which queries this table using vector cosine distance) will begin surfacing relevant past human-approved responses when assembling draft prompts. No changes to Workflow 1 are needed — it queries the same table this workflow writes to.

## Tech stack

- **n8n** — workflow automation and scheduling
- **Gmail API** — sent-folder monitoring and full message fetch
- **OpenAI GPT-4o-mini** — draft comparison and KB rewriting
- **OpenAI text-embedding-3-small** — vector embedding for similarity search
- **PostgreSQL + pgvector** — storing corrections and running cosine similarity queries

## Who this is for

- Teams already running an AI email draft workflow who want it to improve over time
- Support operations that want human edits to automatically become training data
- Anyone who wants a self-improving system without model fine-tuning or external ML infrastructure
by KPendic
This n8n flow demos a basic DevOps task: DNS record management. An AI agent with a light, basic prompt acts as a getter and setter for DNS records. In this particular case we manage a remote DNS zone via API calls that are handled on the Cloudflare platform side.

This flow can be used standalone, or you can chain it into your pipeline to build powerful infrastructure flows for your needs.

## How it works

- We created a basic agent and gave it a prompt that makes it aware of one tool: cf_tool, a sub-routine (pointing back to this flow itself, though it can be a separate dedicated one).
- The prompt defines the arguments that need to be passed when calling the agent, for each action specifically.
- The tool itself contains a basic if/switch that, based on the action passed in, calls the corresponding Cloudflare API endpoint and passes down the arguments from the tool (see the sketch below).

## Requirements

For storing and processing data in this flow you will need:

- A Cloudflare.com API key/token for retrieving your data (https://dash.cloudflare.com/?to=/:account/api-tokens)
- Saved OpenAI credentials (or any other LLM provider) for the agent chat
- (Optional) A Postgres table for saving chat history

## Official Cloudflare API documentation

For full details and specifications, please use the API documentation at: https://developers.cloudflare.com/api/

## LinkedIn post

Let me know if you found this flow useful on my LinkedIn post here.

tags: #cloudflare, #dns, #domain
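As a reference for the getter/setter actions described above, a minimal sketch of the Cloudflare v4 API calls the cf_tool's if/switch dispatches to. The endpoint paths come from the official documentation linked above; the zone ID, token, and record values are placeholders:

```javascript
// Minimal sketch of the two DNS actions behind the cf_tool switch.
const BASE = 'https://api.cloudflare.com/client/v4';

// "get" action: list DNS records in a zone.
async function listDnsRecords(zoneId, token) {
  const res = await fetch(`${BASE}/zones/${zoneId}/dns_records`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return (await res.json()).result;
}

// "set" action: create a DNS record, e.g.
// { type: 'A', name: 'www.example.com', content: '203.0.113.10', ttl: 3600, proxied: false }
async function createDnsRecord(zoneId, token, record) {
  const res = await fetch(`${BASE}/zones/${zoneId}/dns_records`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
    body: JSON.stringify(record),
  });
  return (await res.json()).result;
}
```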
by Kumar SmartFlow Craft
## 🚀 How it works

Monitors your AP inbox for incoming invoices, extracts structured data with AI, runs duplicate and vendor-history checks against Supabase, then scores each invoice for fraud risk — routing suspicious ones to Slack and your AP team before any payment is processed.

- 📬 A Gmail Trigger monitors your accounts-payable inbox in real time
- 🤖 An AI Agent extracts the invoice number, vendor, amount, currency, dates and line items into structured JSON — no manual data entry
- 🔍 Checks Supabase for duplicate invoice numbers already in the system
- 🏢 Checks the vendor payment history — flags unknown vendors and amount deviations above 50% from the vendor's historical average (see the sketch below)
- 🧠 A second AI Agent scores fraud risk (low / medium / high / critical) with specific fraud indicators and a recommended action
- 🚨 High/critical risk: posts a detailed Slack alert to #invoice-alerts and emails your AP manager with a hold notice
- 🗄️ Logs every processed invoice to Supabase with its risk score and status

## 🛠️ Set up steps

Estimated setup time: ~20 minutes

1. Gmail Trigger — connect Gmail OAuth2; point it at your AP inbox
2. OpenAI — connect an OpenAI API credential (used by both AI Agent nodes)
3. Supabase — connect a Supabase API credential; create two tables: invoices (invoice_number, vendor_name, amount, status, risk_level, created_at) and vendors (vendor_name, avg_amount, total_invoices, flagged)
4. Slack — connect Slack OAuth2; update the channel name #invoice-alerts
5. Gmail (Send) — connect Gmail OAuth2; replace ap-manager@example.com
6. Follow the sticky notes inside the workflow for per-node guidance

## 📋 Prerequisites

- Gmail account receiving invoices
- OpenAI API key (GPT-4o)
- Supabase project with the invoices and vendors tables
- Slack workspace with an alerts channel

Custom Workflow Request with Personal Dashboard: kumar@smartflowcraft.com / https://www.smartflowcraft.com/contact
More free templates: https://www.smartflowcraft.com/n8n-templates
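To make the 50% deviation rule above concrete, a sketch of the vendor-history check as it could appear in a Code node. The field names follow the vendors table described in the setup steps, and the node names referenced are illustrative:

```javascript
// Flag unknown vendors and invoices that deviate >50% from the vendor's
// historical average. `invoice` is the extraction agent's structured JSON;
// `vendor` is the matching row from the Supabase `vendors` table.
const invoice = $('AI Agent').first().json;  // node name is illustrative
const vendor = $input.first().json;          // Supabase lookup result

const flags = [];
if (!vendor || !vendor.vendor_name) {
  flags.push('unknown_vendor');
} else if (vendor.avg_amount > 0) {
  const deviation = Math.abs(invoice.amount - vendor.avg_amount) / vendor.avg_amount;
  if (deviation > 0.5) flags.push(`amount_deviation_${Math.round(deviation * 100)}pct`);
}

return [{ json: { ...invoice, flags } }];
```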
by Roshan Ramani
# Generate Personalized & Aggregate Survey Reports with Jotform and Gemini AI

## Overview

Automatically transform Jotform survey responses into intelligent, professional reports. This workflow generates personalized insights for each respondent and statistical summaries for the administrator, all hands-free.

## Who Should Use This

- Survey managers needing automated report generation
- Market researchers analyzing response data
- Product teams collecting customer feedback
- Organizations using Jotform without built-in analytics

## What It Does

A two-part report system:

**Personal Reports (Instant)**

- Triggers immediately when a respondent submits the survey
- AI analyzes their individual responses using Google Gemini
- Generates customized insights and recommendations
- Sends a professional HTML report to the respondent's email

**Weekly Aggregate Reports (Scheduled)**

- Runs automatically every week
- Collects all survey submissions
- Calculates statistics, percentages, and trends
- Identifies patterns across all respondents
- Sends a comprehensive analysis to the admin

## Key Features

✓ Real-time personal report generation
✓ Intelligent AI-powered analysis (Google Gemini)
✓ Professional HTML email formatting
✓ Automatic weekly summaries
✓ Statistical analysis and trend identification
✓ Zero manual processing required
✓ Fully customizable prompts and styling
✓ Works with any Jotform survey structure

## Setup Requirements

- **Jotform** account with an active survey form (get Jotform from here)
- **n8n** instance (cloud or self-hosted)
- **Google Gemini API** key
- **Gmail** account (for sending reports)
- **Jotform API** key

## What You Get in Reports

**Personal Reports Include:**

- **Respondent Profile** – auto-extracted demographics (age, role, location, email)
- **Key Insights** – 3-4 AI-generated insights specific to their responses
- **Personalized Recommendations** – 3-4 actionable suggestions based on their answers
- **Professional Formatting** – HTML-styled email with your branding colors
- **Mobile Responsive** – looks great on all devices

Fully customizable:

- Edit the AI prompt to generate different types of insights
- Change the HTML styling (colors, fonts, layout)
- Add/remove sections (logos, footers, additional analysis)
- Adjust the tone (professional, casual, technical, etc.)
- Include custom branding and messaging

**Aggregate Reports Include:**

- **Total Respondents Count** – how many submissions in the period
- **Demographic Breakdown** – distribution of respondent profiles
- **Response Statistics** – percentages and frequencies for each question
- **Answer Distribution** – most popular choices across all responses
- **Trend Analysis** – patterns and correlations in the data
- **Key Findings** – top 5-7 insights from all responses combined
- **Statistical Metrics** – averages, frequencies, comparisons

Fully customizable:

- Choose which statistics to calculate and display
- Change how data is visualized and presented
- Customize the report styling and branding
- Adjust the analysis depth and metrics focus
- Add custom sections for your specific needs
- Modify the HTML layout and design

## How the Reports Look

**Personal Report Structure (Email):**

- Header: professional gradient background with a thank-you message
- Section 1: Respondent Details (extracted from the survey)
- Section 2: Key Insights (AI-generated from their responses)
- Section 3: Recommendations (personalized suggestions)
- Footer: thank-you message and company info

**Aggregate Report Structure (Email):**

- Header: report title and date range
- Section 1: total respondent count and demographics
- Section 2: question-by-question response breakdown
- Section 3: statistical analysis and trends
- Section 4: key findings and patterns discovered
- Section 5: actionable insights for decision-makers
- Footer: next report date and company branding

## Quick Start

1. Get your Jotform Form ID and API key
2. Enable the Google Gemini API and create an API key
3. Set up Gmail OAuth2 credentials in n8n
4. Import this workflow
5. Add your credentials to the nodes
6. Test with a sample survey submission

Complete setup instructions are included in the workflow as an expandable sticky note.

## Workflow Logic

PERSONAL REPORTS:
Survey Submission → Collect Response Data → AI Analysis & Insights Generation → Create Styled HTML Report → Send to Respondent Email

AGGREGATE REPORTS:
Weekly Schedule Triggers → Fetch All Submissions → Statistical Analysis & Trend Detection → Generate Insights from All Data → Create Summary HTML Report → Send to Admin Email

## Use Cases

- **Customer Feedback Surveys** – analyze responses, send personalized insights
- **Product Research** – track trends across respondents weekly
- **Market Research** – automated statistical reporting
- **Employee Surveys** – personalized feedback + company trends
- **Event Feedback** – instant attendee insights + organizer summary
- **Customer Satisfaction (NPS)** – personalized follow-ups + trend analysis
- **Lead Qualification** – auto-analyze prospect responses and route accordingly
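To illustrate the "Statistical Analysis & Trend Detection" step in the aggregate branch, a Code-node sketch that turns submissions into per-question answer frequencies and percentages. The flat answers object is a simplified stand-in for Jotform's actual submission format:

```javascript
// Count answer frequencies and percentages per question across all submissions.
// Each item is assumed to carry a flat `answers` object, e.g. { role: 'PM', nps: '9' }.
const submissions = $input.all().map((i) => i.json.answers || i.json);
const stats = {};

for (const answers of submissions) {
  for (const [question, answer] of Object.entries(answers)) {
    stats[question] = stats[question] || {};
    stats[question][answer] = (stats[question][answer] || 0) + 1;
  }
}

// Convert the raw counts to percentages for the report.
const total = submissions.length;
for (const counts of Object.values(stats)) {
  for (const key of Object.keys(counts)) {
    counts[key] = { count: counts[key], pct: Math.round((counts[key] / total) * 100) };
  }
}

return [{ json: { totalRespondents: total, stats } }];
```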
by Muhammadumar
This is the core AI agent used for isra36.com.

Don't trust complex AI-generated SQL queries without double-checking them in a safe environment. That's where isra36 comes in. It automatically creates a test environment with the necessary data, generates code for your task, runs it to double-check for correctness, and handles errors if necessary. If you enable auto-fixing, isra36 will detect and fix issues on its own. If not, it will ask for your permission before making changes during debugging. In the end, you get thoroughly verified code along with full details about the environment it ran in.

## Setup

It is an embedded chat for the website, but you can pin the input data and run it on your own n8n instance.

**Input data**

- **sessionId**: uuid_v4. Required to handle ongoing conversations and to create table names (used as a prefix).
- **threadId**: string | nullable. If aiProvider is openai, conversation history is managed on OpenAI's side. This is not needed in the first request; it will start a new conversation. For ongoing conversations, you must provide this value. You can get it from the OpenAIMainBrain node output after the first run. If you want to start a new conversation, just leave it as null.
- **apiKey**: string. Your API key for the selected aiProvider.
- **aiProvider**: string. Currently supported values: openai, openrouter.
- **model**: string. The AI model key (e.g., gpt-4.1, o3-mini, or any supported model key from OpenRouter).
- **autoErrorFixing**: boolean. If true, it will automatically fix errors encountered when running code in the environment. If false, it will ask for your permission before attempting a fix.
- **chatInput**: string. The user's prompt or message.
- **currentDbSchemaWithData**: string. A JSON representation of the database schema with sample data, used to inform the AI about the current database structure during an ongoing conversation. Please use the '[]' value in the first request. Example string for a filled DB structure: '{"users":[{"id":1,"name":"John Doe","email":"john.d@example.com"},{"id":2,"name":"Jane Smith","email":"jane.s@example.com"}],"products":[{"product_id":101,"product_name":"Laptop","price":999.99}]}'

Make sure to fill in your credentials:

- Your OpenAI or OpenRouter API key
- Access to a local PostgreSQL database for test execution

You can view your generated tables using your preferred PostgreSQL GUI; we recommend DBeaver. Alternatively, you can activate the "Deactivated DB Visualization" nodes below. To use them, connect each to the most recent successful Set node and manually adjust the output. However, the easiest and most efficient method is to use a GUI.
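Putting the fields above together, a first-request payload you could pin on the trigger might look like this (values are illustrative; note the null threadId and the '[]' schema required for a first request):

```json
{
  "sessionId": "3f2b8a1c-9d4e-4f6a-b7c2-1a2b3c4d5e6f",
  "threadId": null,
  "apiKey": "sk-...",
  "aiProvider": "openai",
  "model": "gpt-4.1",
  "autoErrorFixing": true,
  "chatInput": "Write a query that lists the top 5 customers by total order value.",
  "currentDbSchemaWithData": "[]"
}
```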
## Workflow Explanation

- We store all input values in the localVariables node. Please use this node to get the necessary data.
- OpenAI has a built-in assistant that manages chat history on their side. For OpenRouter, we handle chat history locally. That's why we use separate nodes like ifOpenAi and isOpenAi. Note that if logic can also be used inside nodes.
- The AutoErrorFixing loop will run only a limited number of times, as defined by the isMaxAutoErrorReached node. This prevents infinite loops.
- The Execute_AI_result node connects to the PostgreSQL test database used to execute queries.

## Guidance on customization

This setup is built for PostgreSQL, but it can be adapted to any programming language, and the logic can be extended to any programming framework. To customize the logic for other programming languages:

1. Change the instruction parameter in the localVariables node.
2. Replace the Execute_AI_result PostgreSQL node with another executable node. For example, you can use the HTTP Request node.
3. Update the GenerateErrorPrompt node's prompt parameter to generate code specific to your target language or framework.

Any workflows built on top of this must credit the original author and be released under an open-source license.
by NODA shuichi
More than an alarm: a smart morning experience that adapts to the weather. 🎸☔️☀️

This workflow demonstrates how to upgrade a simple automation into a smart, context-aware system. By integrating Open-Meteo (weather API), Google Gemini (AI), and Spotify, it creates a personalized DJ experience for your morning.

## Why is this "advanced"?

- **Context Awareness:** It doesn't just play music; it checks the weather (via the Open-Meteo API) to understand the user's environment.
- **AI Persona:** Gemini acts as a live DJ, generating commentary that connects the specific Led Zeppelin track to the current weather conditions (e.g., "It's rainy, perfect for 'The Rain Song'").
- **Data Logging:** It logs every wake-up session (song, time, weather) to Google Sheets, creating a personal music history database.
- **Robust Error Handling:** Includes logic to detect offline speakers and send fallback alerts.

## How it works

1. **Check context:** Fetches real-time weather data for your location and checks your Spotify speaker status.
2. **Select music:** Picks a random track from Led Zeppelin's top hits.
3. **Generate:** Gemini generates a unique "Good Morning" script combining the song title and the weather.
4. **Action:** Plays the music, logs the data to Google Sheets, and emails you the AI's greeting with album art.

## Setup Requirements

- Spotify Premium
- Google Gemini API key
- Google Sheets: create a sheet named History with the headers: date, time, weather, temperature, song, artist
- Gmail
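For reference, the weather check boils down to a single keyless HTTP call to Open-Meteo. In the workflow this would typically be an HTTP Request node; it is shown here as plain JavaScript for clarity, with placeholder coordinates:

```javascript
// Current weather from Open-Meteo (no API key required).
async function getCurrentWeather(latitude, longitude) {
  const url = `https://api.open-meteo.com/v1/forecast`
    + `?latitude=${latitude}&longitude=${longitude}&current_weather=true`;
  const res = await fetch(url);
  const { current_weather } = await res.json();
  // weathercode is a WMO code (e.g. 61 = rain) that the Gemini DJ prompt
  // can translate into "rainy", "sunny", etc.
  return {
    temperature: current_weather.temperature,
    weathercode: current_weather.weathercode,
  };
}

// Example: Tokyo
getCurrentWeather(35.68, 139.69).then(console.log);
```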
by Yahor Dubrouski
## Overview

Build your own AI Prompt Hub inside n8n. This template lets ChatGPT automatically search your saved prompts in Notion using semantic embeddings from HuggingFace. Each time a user sends a message, the workflow finds the most relevant prompt based on meaning, not keywords.

Perfect for developers who maintain dozens of prompts and want ChatGPT to pick the right one automatically.

## Key Features

- 🔍 **Semantic Prompt Search** - finds the best prompt using HuggingFace embeddings
- 🧠 **AI Agent Integration** - ChatGPT automatically calls the prompt-search workflow
- 📚 **Notion Prompt Database** - store unlimited prompts with auto-generated embeddings
- ⚡ **Automatic Embedding Sync** - regenerates vectors when prompts change

This template is ideal for:

- AI automations
- Prompt engineering
- DevOps and backend engineers who reuse prompts
- Teams managing large prompt libraries

## How it works

1. The user sends any message to the ChatGPT interface.
2. The n8n AI Agent calls a sub-workflow that performs a semantic search in Notion.
3. HuggingFace converts both the message and the saved prompts into vector embeddings.
4. The workflow returns the most similar prompt, which ChatGPT can use automatically (see the sketch below).

## Setup Instructions (15-20 minutes)

1. Import this template into your n8n instance.
2. Set credentials for Notion, OpenAI, and HuggingFace.
3. Create a Notion database with: Prompt (Text), Embeddings (Text), Checksum (Text).
4. Paste your Notion database ID in: "Get All Prompts", "On Page Update", "On Page Create", "Get All Prompts for Search".
5. Enable the workflow and open the URL from "When chat message received" to start chatting.
6. Type any request - the system will search for a matching prompt automatically.

## Documentation & Demo

Full documentation and examples: https://github.com/YahorDubrouski/ai-planner/blob/main/documentation/prompt-hub/README.md
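The heart of the search step is plain cosine similarity between the message embedding and each stored prompt embedding. A minimal sketch, assuming the prompts and their vectors are parsed from the Notion "Prompt" and "Embeddings" properties described above:

```javascript
// Rank stored prompts by cosine similarity to the user's message embedding.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// prompts: [{ prompt: string, embedding: number[] }], parsed from Notion;
// messageEmbedding: number[] produced by HuggingFace for the user's message.
function findBestPrompt(messageEmbedding, prompts) {
  return prompts
    .map((p) => ({ ...p, score: cosineSimilarity(messageEmbedding, p.embedding) }))
    .sort((a, b) => b.score - a.score)[0];
}
```

The top-scoring prompt is what the sub-workflow returns to the AI Agent, which then uses it to answer the chat message.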