by Tony Adijah
**Who is this for**
This workflow is built for sales teams, agencies, and small businesses that receive inbound leads via WhatsApp and want to automate their first response, lead qualification, and CRM logging — without missing a single message.

**What this workflow does**
It listens for incoming WhatsApp messages, uses an AI agent to classify each message by intent (hot lead, warm lead, support, or needs qualification), sends a tailored auto-reply, logs every interaction to Google Sheets, and automatically books Google Calendar meetings with Meet links for qualified leads.

**How it works**
- WhatsApp Trigger receives incoming messages and filters out bot/status messages to prevent loops.
- AI Agent (powered by Ollama or any connected LLM) classifies the message into one of four intent categories with confidence scoring.
- Smart Router directs each intent down a dedicated path.
- Hot & Warm Leads receive an instant reply, get logged to Google Sheets, have a Google Calendar meeting auto-booked, and receive the Meet link via WhatsApp.
- Support requests are logged and receive a ticket confirmation.
- Vague or incomplete messages trigger a smart follow-up question.
- Conversation memory ensures the AI re-classifies correctly when the user replies with more context.

**Setup steps**
1. Connect your WhatsApp Business API credentials (Meta Cloud API).
2. Connect Google Sheets OAuth and set your spreadsheet ID in all three logging nodes.
3. Connect Google Calendar OAuth and select your calendar in both booking nodes.
4. Configure your LLM (Ollama endpoint, OpenAI, or any supported model).
5. Update the BOT_NUMBERS array in the "Parse WhatsApp Message" node to match your WhatsApp Business phone number ID (a sketch of this node appears at the end of this section).
6. Update the phoneNumberId in all WhatsApp Send nodes to your number.
7. Send a test message and verify the full flow.

**Requirements**
- WhatsApp Business API (Meta Cloud API) access
- Google Sheets and Google Calendar accounts with OAuth credentials
- An LLM endpoint (Ollama, OpenAI, or any n8n-supported model)
- n8n instance (cloud or self-hosted)

**How to customize**
- Swap the AI model in the Ollama Chat Model node for OpenAI, Anthropic, or any supported LLM.
- Edit the auto-reply templates in each Reply code node to match your brand voice.
- Adjust meeting booking times (default: Hot = 2 hours out, Warm = 4 hours out).
- Add Slack or email notifications by branching from the Google Sheets logging nodes.
- Modify the AI classification prompt to add custom intent categories for your business.
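For reference, here is a minimal sketch of the bot-filtering logic the "Parse WhatsApp Message" Code node performs, assuming the standard Meta Cloud API webhook payload shape. Field names and the example ID are illustrative, not the template's exact code:

```javascript
// Hypothetical sketch (Code node, Run Once for All Items mode).
// BOT_NUMBERS holds your WhatsApp Business phone number ID(s) so the
// workflow ignores its own outbound and status messages, preventing loops.
const BOT_NUMBERS = ['123456789012345']; // replace with your phone number ID(s)

const out = [];
for (const item of $input.all()) {
  const value = item.json.entry?.[0]?.changes?.[0]?.value;
  const message = value?.messages?.[0];
  // Skip status callbacks and anything sent by the bot itself
  if (!message || BOT_NUMBERS.includes(message.from)) continue;
  out.push({
    json: {
      from: message.from,
      text: message.text?.body ?? '',
      timestamp: message.timestamp,
    },
  });
}
return out;
```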
by Oneclick AI Squad
This workflow ingests student profiles from a form submission or CRM, loads the active scholarship catalogue, uses Claude AI to score each student's eligibility against every available scholarship, filters strong matches, and automatically notifies eligible candidates with personalised application guidance.

**How it works**
- Trigger — Form submission webhook or nightly scheduled batch run
- Load Student Profile — Fetches or normalises the student's academic and personal data
- Load Scholarship Catalogue — Pulls active scholarships from Airtable / Google Sheets
- Pair Students × Scholarships — Builds evaluation pairs for AI scoring
- AI Eligibility Scoring — Claude AI scores fit, flags eligibility, ranks scholarships
- Parse & Rank Results — Extracts structured scores, sorts by match strength
- Filter Qualified Matches — Keeps scholarships above a configurable match threshold
- Check Deadline Urgency — Flags scholarships closing within 14 days
- Personalise Notification — Builds a tailored email per student with top matches
- Send Student Email — Dispatches the personalised scholarship digest
- Notify Advisor on Slack — Alerts the academic advisor for high-value matches
- Update CRM Record — Writes matched scholarships back to the Airtable student record
- Log to Audit Sheet — Appends the full match report to Google Sheets
- Return API Response — Returns structured match results to the caller

**Setup Steps**
1. Import the workflow into n8n
2. Configure credentials:
   - Anthropic API — Claude AI for eligibility scoring
   - Airtable — Student profiles and scholarship catalogue
   - Google Sheets — Audit and match history log
   - Slack OAuth — Academic advisor notifications
   - SendGrid / SMTP — Student notification emails
3. Set your Airtable base ID and table names for students and scholarships
4. Configure the match threshold (default: 70) in the filter node
5. Set the urgency window (default: 14 days) in the deadline check node
6. Add your Slack advisor channel ID
7. Activate the workflow

**Sample Webhook Payload**
```json
{
  "studentId": "STU-2025-4821",
  "firstName": "Priya",
  "lastName": "Sharma",
  "email": "priya.sharma@university.edu",
  "gpa": 3.8,
  "major": "Computer Science",
  "yearOfStudy": 2,
  "nationality": "Indian",
  "residencyStatus": "International",
  "financialNeed": true,
  "extracurriculars": ["Robotics Club", "Volunteer Tutor"],
  "achievements": ["Dean's List 2024", "Hackathon Winner"],
  "intendedCareer": "AI Research",
  "disabilities": false,
  "firstGenStudent": true
}
```

**Scholarship Criteria Evaluated by Claude AI**
- **Academic Merit** — GPA, honours, academic awards
- **Field of Study** — Major/discipline alignment
- **Financial Need** — Demonstrated need indicators
- **Demographic Eligibility** — Nationality, residency, gender, Indigenous status
- **Year of Study** — Undergraduate, postgraduate, PhD level
- **Extracurricular Profile** — Leadership, community service, sports
- **Career Alignment** — Intended career path vs scholarship mission
- **Special Circumstances** — First-gen, disability support, regional background

**Features**
- Batch processing of the entire student cohort nightly
- AI-powered multi-criteria eligibility scoring (0–100)
- Deadline urgency detection and priority flagging
- Personalised email with ranked scholarship list and tips
- Academic advisor Slack digest for high-value matches
- Full audit trail in Google Sheets
- Airtable CRM auto-update with matched scholarships

**Explore More Automation**
Contact us to design AI-powered content engagement and multi-platform reply workflows tailored to your growth strategy.
by Influencers Club
**How it works**
Get multi-platform social data for your SaaS clients from their email addresses and send personalized comms to onboard them as organic creators, partners, and ambassadors. This step-by-step workflow enriches customer emails with multi-platform social profiles, analytics, and metrics (Instagram, TikTok, YouTube, Twitter, OnlyFans, Twitch, and more) using the influencers.club API, then sends tailored outreach to activate them as creators.

**Set up**
- HubSpot (can be swapped for any CRM such as Salesforce or Attio, or a database)
- influencers.club
- Gmail
- SendGrid (can be swapped for any programmatic email sender such as Mailgun)
by Cheng Siong Chin
**How It Works**
This workflow automates patient risk assessment and clinical alerting for healthcare providers using NVIDIA AI models. Designed for hospitals, clinics, and healthcare organizations, it addresses the critical challenge of promptly identifying and responding to high-risk patients requiring immediate intervention.

The system monitors patient data webhooks, enriches records with external EHR data, and analyzes the aggregated information through Claude AI for comprehensive risk stratification. Healthcare operations data is fetched and combined with patient metrics to provide contextual risk assessment. NVIDIA's structured generation capabilities ensure standardized clinical outputs, while parallel execution routes enable simultaneous processing: critical cases trigger immediate alerts via email and escalation flags, whereas routine cases follow standard documentation paths. The workflow maintains an audit trail, merges execution results, and generates detailed reports for compliance and quality improvement initiatives.

**Setup Steps**
1. Configure the Patient Event Webhook with your EHR system endpoint URL and authentication headers (an illustrative payload appears at the end of this section)
2. Add NVIDIA API credentials (API key) in the Fetch Patient Data and Structured Generation nodes
3. Connect the Claude Model node with an Anthropic API key and configure the healthcare risk assessment prompt
4. Set up the Gmail node with sender credentials and configure recipient email addresses for clinical alerts
5. Enable the Google Sheets integration for audit logging and specify the spreadsheet ID for execution reports

**Prerequisites**
NVIDIA API access, Anthropic Claude API key, Google Workspace account (Gmail, Sheets)

**Use Cases**
Emergency department triage automation, post-operative monitoring for deterioration detection

**Customization**
Modify risk scoring algorithms, add disease-specific assessment criteria

**Benefits**
Reduces clinical response time through automated risk detection
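The webhook accepts whatever event shape your EHR emits. As a hedged illustration only (every field name below is hypothetical, not part of the template), a patient event might look like:

```json
{
  "patientId": "PT-10042",
  "event": "vitals_update",
  "vitals": { "heartRate": 112, "spO2": 91, "systolicBP": 88 },
  "unit": "ED",
  "timestamp": "2025-06-01T14:32:00Z"
}
```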
by Paul Karrmann
**HR Weekly Radar**
An AI-powered workflow that scans HR news via RSS, checks which of your policies or contract templates might need updates, and sends a weekly internal newsletter as HTML.

**What this template is for**
If you maintain an HR policy and template library, this helps you spot relevant changes faster and turn them into a small, actionable review list.

**Good to know**
- This workflow fetches article pages and sends extracted text to LLMs
- Respect the publisher's terms and avoid redistributing full article text outside your organization
- Cost and runtime depend on how many articles you process and how long the extracted text is

**How it works**
1. Weekly trigger starts the workflow
2. RSS feed read pulls new HR articles
3. Filter keeps only the last 7 days
4. Limit node caps processing to maxArticles
5. HTTP request fetches each article page
6. HTML extract + cleanup converts the article body to plain text
7. Google Drive node lists your policy and template file names
8. Merge combines each article with the document list
9. Reading agent evaluates relevance and suggests:
   - which documents to review or update
   - what change to consider
   - missing document ideas
10. Build report aggregates results across all articles
11. Summary agent writes a short, scannable HTML email
12. Gmail sends the newsletter to your chosen recipient

**How to use**
1. Add your RSS feed URL in the Workflow Configuration node (newsUrl) — the expected keys are sketched at the end of this section
2. Set your recipient email (userEmail)
3. Set the Google Drive folder ID that contains policies and templates (templatesFolderId)
4. Connect credentials for:
   - Google Drive
   - LLM provider nodes
   - Gmail
5. Run once manually and verify the email formatting, then activate the workflow

**Requirements**
- RSS feed URL with HR or compliance updates
- Google Drive folder containing policy and template files
- LLM credentials for per-article analysis and newsletter drafting
- Gmail account to send the email

**Customising this workflow**
- Increase or decrease maxArticles to control cost and speed
- Adjust the last-7-days filter if you want a different reporting window
- Change the HTML extraction selector if your news source has a different page layout
- Swap the final Gmail node for Slack, Teams, Notion, or Google Docs
- Add a redaction step before the Reading Agent if you want to remove signatures or long quoted sections
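A minimal sketch of the Workflow Configuration values. The template's node may well be a Set node; this Code-node equivalent just shows the keys named above, and every value is a placeholder:

```javascript
// Sketch only; all values are placeholders you replace with your own.
return [{
  json: {
    newsUrl: 'https://example.com/hr-news/feed.xml', // your RSS feed URL
    userEmail: 'hr-team@yourcompany.com',            // newsletter recipient
    templatesFolderId: 'YOUR_DRIVE_FOLDER_ID',       // Drive folder with policies and templates
    maxArticles: 10,                                 // caps per-run cost and speed
  },
}];
```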
by Avkash Kakdiya
**How it works**
This workflow fetches the latest blog post from the WordPress API and checks against a Google Sheets tracker to prevent duplicate processing. If a new post is found, the workflow updates the tracker and cleans the blog data. The cleaned content is then sent to a Gemini-powered AI agent to generate a newsletter and a LinkedIn teaser. Finally, the workflow distributes the content via LinkedIn and Gmail to subscribers.

**Step-by-step**

**Detect new blog content**
- Schedule Trigger – Runs the workflow automatically at intervals.
- HTTP Request – Fetches the latest blog post from WordPress.
- Last ID – Retrieves the last processed blog ID from Google Sheets.
- If – Compares IDs to check if the blog post is new.
- Update Last ID – Updates the sheet with the latest blog ID.

**Clean and generate AI content**
- data cleanse – Cleans HTML, extracts title, content, and image (sketched at the end of this section).
- AI Agent2 – Generates newsletter and teaser content.
- Google Gemini Chat Model – Provides the AI model for content generation.

**Distribute content across channels**
- Format Response – Parses and structures the AI output.
- Create a post – Publishes content on LinkedIn.
- Email List – Fetches subscriber emails from Google Sheets.
- Loop Over Items – Iterates through each recipient.
- Send Email – Sends the HTML newsletter via Gmail.

**Why use this?**
- Automates the end-to-end blog promotion workflow
- Prevents duplicate publishing using ID tracking
- Uses AI to generate engaging content instantly
- Saves time on manual posting and emailing
- Easily scalable for growing audiences
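A hedged sketch of what the data cleanse step does, assuming the standard WordPress REST API post shape (/wp-json/wp/v2/posts). This is illustrative, not the template's exact node code:

```javascript
// Illustrative sketch (Code node, Run Once for All Items mode).
return $input.all().map((item) => {
  const post = item.json;

  // Strip HTML tags and collapse whitespace from the rendered content
  const plainText = (post.content?.rendered ?? '')
    .replace(/<[^>]+>/g, ' ')
    .replace(/\s+/g, ' ')
    .trim();

  return {
    json: {
      id: post.id,
      title: post.title?.rendered ?? '',
      content: plainText,
      image: post.featured_media ?? null, // media ID; resolve via /wp/v2/media for a URL
    },
  };
});
```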
by Mo AlBarrak
**Overview**
This is a production-grade, fully automated stock analysis system built entirely in n8n. It combines institutional-level financial analysis, dual AI model consensus, and a self-improving backtesting loop — all running on autopilot, every single day.

Every morning, the engine screens the US stock market, collects deep financial data, reads the latest news, and sends two independent AI analysts (GPT-4o and Gemini 2.5 Pro) to debate each stock. When they disagree, a structured bull-vs-bear tiebreaker is triggered. The result: a daily ranked list of BUY, HOLD, and SELL signals — with price targets, confidence scores, and risk assessments — delivered straight to your Telegram.

A companion backtesting workflow runs silently in the background, grading every past signal 7 days after it was issued and sending you a weekly performance report every Monday morning.

This is not a toy workflow. This is the kind of system that would cost thousands of dollars to build as a SaaS — running entirely on your own infrastructure.

**✨ What Makes This Template Unique**
- 🤖 **Dual AI Consensus Engine** — GPT-4o and Gemini 2.5 Pro analyze every stock independently. Their outputs are compared, and consensus is only declared when both models agree within a tight price target band
- ⚖️ **Structured Tiebreaker Architecture** — When models disagree, a bull analyst (GPT-4o) and a bear analyst (Gemini) re-run with opposing mandates. The final verdict is derived from their averaged price target plus a Piotroski F-Score gate
- 📊 **Institutional-Grade Financial Modeling** — Piotroski F-Score (9-point), Graham Number intrinsic value, DCF anchor, TTM revenue & margins, net debt, FCF, revenue growth YoY, and sector-relative P/E valuation — all computed automatically (a sketch of the Graham Number follows the Workflow 1 table)
- 📰 **Live News Sentiment** — Latest headlines per stock are fed into the AI prompt, adjusting confidence scores in real time based on positive or negative sentiment signals
- 🎯 **Scenario Price Targets** — Every stock gets three targets: pt_bear (downside), pt_base (fair value), pt_bull (upside case), giving you a full risk/reward picture
- 🔁 **Self-Improving Backtester** — Every signal is automatically graded 7 days later. Win rate, average return, and best/worst calls are reported every Monday via Telegram
- 📡 **Smart Screener with Sector Diversity** — Scores 100 candidates daily using volume health, market cap sweet spot ($5B–$100B), and beta gradient — with a sector diversity cap so you never end up with 15 tech stocks
- 💾 **Full Google Sheets Audit Trail** — Every signal, confidence score, rationale, and outcome is logged permanently for your own review and analysis

**📋 Workflow Breakdown**

**Workflow 1 — AI Institutional Stock Valuation Engine**

| Phase | What Happens |
| --- | --- |
| Phase 1 — Screening | FMP screener fetches 100 US stocks. Score_and_Prefilter scores and selects the top 20 with sector diversity |
| Phase 2A — Financial Data | 13 FMP endpoints per stock: income, balance sheet, cash flow, ratios, profile, sector P/E |
| Phase 2B — News | Latest headlines fetched and passed into AI context |
| Phase 3 — AI Round 1 | GPT-4o and Gemini 2.5 Pro analyze in parallel. Verdicts and price targets compared |
| Phase 3 — Tiebreaker | Bull vs bear re-analysis when models disagree or the price target gap is > 25% |
| Phase 4 — Strong Buy Alert | Stocks with a BUY verdict + upside ≥ 20% + confidence ≥ 65 trigger an immediate alert |
| Phase 5 — Storage & Summary | All results written to Google Sheets. Daily Telegram summary sent with top picks |
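As a flavor of the Phase 2A financial modeling, here is a hedged sketch of the Graham Number intrinsic-value anchor. The formula itself is the standard one, √(22.5 × EPS × book value per share), but the code is illustrative, not the template's exact node logic:

```javascript
// Graham Number = sqrt(22.5 × EPS × book value per share).
// Illustrative only; not the template's exact node code.
function grahamNumber(eps, bookValuePerShare) {
  if (eps <= 0 || bookValuePerShare <= 0) return null; // undefined for negative inputs
  return Math.sqrt(22.5 * eps * bookValuePerShare);
}

// Example with hypothetical fundamentals:
console.log(grahamNumber(6.2, 31.4)?.toFixed(2)); // fair-value estimate per share
```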
**Workflow 2 — Signal Outcome Checker & Weekly Backtester**

| Trigger | What Happens |
| --- | --- |
| Daily 8AM | Finds signals that are 7 days old, fetches the current price, grades WIN / LOSS / NEUTRAL, writes the outcome back to the sheet |
| Monday 9AM | Computes weekly win rate, average return on BUY signals, best and worst call — sends the full report to Telegram |

**🛠️ What You Need**

| Requirement | Details |
| --- | --- |
| FMP API Key | Financial Modeling Prep — Starter plan or above (~$25/mo). Covers all financial data, screener, news, and historical prices |
| OpenAI API Key | GPT-4o access via the OpenAI API |
| Google Gemini API Key | Gemini 2.5 Pro via Google AI Studio (free tier available) |
| Google Sheets | One sheet named Stock_Signals with the column headers listed in the setup guide |
| Telegram Bot | Create via @BotFather in 2 minutes. Free |
| n8n | Self-hosted or n8n Cloud |

Estimated running cost: $0.43/day in AI tokens for 20 stocks ($10–$13/month). FMP and Telegram are the only other costs.

**⚙️ Setup Time**
~30–45 minutes for a first-time setup. All credentials, Sheet IDs, and API keys are clearly labeled in each node. No coding required — every parameter is documented.

**📈 Example Daily Telegram Output**

```
📊 Daily Valuation Report — 2026-04-02
Stocks Analyzed: 20
🟢 BUY: 7  🟡 HOLD: 10  🔴 SELL: 3

🚨 STRONG BUY ALERTS:
• NVDA — Upside 34% | Confidence 81 | F-Score 7/9
• MSFT — Upside 22% | Confidence 74 | F-Score 8/9

Top Picks:
NVDA  pt_base $172 | pt_bull $198 | pt_bear $124
MSFT  pt_base $485 | pt_bull $530 | pt_bear $410
AMGN  pt_base $318 | pt_bull $355 | pt_bear $275
```

**📊 Example Weekly Backtest Report**

```
📈 Weekly Signal Performance — Week of Mar 31
Signals Graded: 18
✅ Win Rate: 72% | BUY Accuracy: 78%
📈 Avg Return on BUY signals: +4.3%
🏆 Best Call: NVDA +11.2% (BUY ✅)
💔 Worst Call: BA -6.8% (BUY ❌)
```

**💡 Who Is This For?**
- Retail investors who want institutional-quality analysis without paying for a Bloomberg terminal
- Quantitative traders looking for a customizable, data-driven signal generation pipeline
- n8n builders who want to see a real-world, production-grade multi-node workflow in action
- AI enthusiasts interested in multi-model consensus systems and structured debate architectures

**📬 Questions, Customizations & Feedback**
Have a question about setup, want to adapt this workflow to your own strategy, or found something to improve?
📧 mambarrak@gmail.com
All feedback is welcome. If you build something interesting on top of this, I'd love to hear about it.

Built with ❤️ using n8n, Financial Modeling Prep, OpenAI GPT-4o, and Google Gemini 2.5 Pro.
by Davide
This chatbot automates the process of discovering job openings and generating tailored job application emails. It combines AI agents, web scraping, and email drafting to streamline job applications.

This workflow transforms job applications from a manual, repetitive process into an intelligent, AI-assisted automation system that:
- Saves time
- Improves email quality
- Reduces errors
- Maintains human oversight
- Scales across multiple job postings

It represents a strong example of combining conversational AI, external data tools, structured parsing, and workflow automation into a production-ready solution.

**How it works**
1. User starts a chat – The workflow begins when a user sends a message via the chat trigger.
2. PredictLeads Agent processes the request – A LangChain agent determines the user's intent. If the request involves company research, it first queries Context7, then optionally PredictLeads for deeper data.
3. Response parser – The agent's output is cleaned and parsed into a structured JSON format with list (boolean) and output fields (see the sketch after the setup steps).
4. List check – If list is true (e.g., a list of job URLs), the workflow extracts links and passes them to the next stage. If false, the agent responds directly to the user.
5. Link extraction – The Links Extractor node uses OpenAI to extract job posting URLs from the user's input.
6. Loop through each link – Each URL is processed individually using a Loop Over Items node.
7. Scrape job details – The Scrape Job node (powered by ScrapegraphAI) extracts:
   - the email address to send the application to
   - the job position title
   - the full job description text
8. Email presence check – If an email is found, the workflow proceeds to generate an application email. If not, it informs the user that no email is available and provides the job link.
9. Job Application Agent – A Gemini-powered agent generates a professional email using:
   - the candidate's personal info (name, location, skills)
   - the job position and description
   - a tool (Create email) to format the subject and body
10. Send email tool – The agent triggers the Send email workflow, which:
    - fetches the CV from a public URL
    - creates a draft in Gmail with the CV attached
11. User response – The final output is sent back to the user via chat, confirming the draft creation or notifying them of missing information.

**Setup steps**
To use this workflow, you need to configure the following credentials and nodes:
1. Chat Trigger — No setup required. This is the entry point for user messages.
2. OpenAI Chat Model — Add your OpenAI API key.
3. Google Gemini Chat Model — Add your Google AI API key.
4. Context7 MCP Tool — Credential: Context7. Add your API key as a header (e.g., Authorization: Bearer XXX).
5. PredictLeads MCP Tool — Credential: Multiple Headers PredictLeads. Add the required headers (e.g., X-API-Key or similar).
6. ScrapegraphAI — Add your ScrapegraphAI API key.
7. Gmail — Authorize access to Gmail (OAuth2) to create drafts.
8. HTTP Request (Get CV) — Ensure the CV is publicly accessible at the URL in the node (https://XXX/cv.pdf) or update it with your own.
9. Simple Memory — No setup needed. Used to maintain conversation context.
10. Agent Prompt Customization (Optional) — Review the system prompts in the PredictLeads Agent and Job application Agent nodes. Update the candidate's personal information (name, location, etc.) in the Job application Agent prompt.
11. Workflow ID for "Send email" — The Send email tool calls another workflow by ID. Ensure this ID matches the current workflow (it should be self-referential).
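As a hedged illustration of the Response parser's contract (not the template's exact node code), the agent's raw text might be coerced into the { list, output } shape like this:

```javascript
// Illustrative sketch (Code node, Run Once for All Items mode).
const raw = $input.first().json.output ?? '';

// Remove any markdown code-fence wrapper the model may have added
const cleaned = raw.replace(/^\s*`{3}(json)?\s*|\s*`{3}\s*$/g, '').trim();

let parsed;
try {
  parsed = JSON.parse(cleaned);
} catch (err) {
  parsed = { list: false, output: raw }; // malformed JSON: treat as a direct reply
}

return [{ json: { list: Boolean(parsed.list), output: parsed.output } }];
```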
**Key Advantages**

1. ✅ **End-to-End Automation** — It automates the entire job application lifecycle: job discovery, job data extraction, email writing, CV attachment, and draft preparation. No manual copy-paste required.
2. ✅ **AI-Orchestrated Tool Usage** — The system intelligently decides when to use company research tools (Context7), PredictLeads data, scraping services, and email drafting workflows. This makes it dynamic and adaptable rather than static.
3. ✅ **Structured & Reliable Data Handling** — Uses JSON schema validation, cleans malformed AI outputs, ensures consistent structured results, and reduces errors in automation flows.
4. ✅ **Human-in-the-Loop Safety** — Before sending any email, the system requires double approval, and the email is saved as a draft, not automatically sent. This prevents accidental or incorrect applications.
5. ✅ **Personalized & Tailored Applications** — Each application is context-aware, position-specific, professionally formatted, and generated using candidate-specific data. This increases response quality compared to generic templates.
6. ✅ **Scalability** — Thanks to split-in-batches logic, looping over multiple job listings, and structured parsing, the workflow can process multiple job opportunities efficiently.
7. ✅ **Modular Architecture** — The workflow is cleanly modular: AI agents, scraper, parser, email tool, and CV fetcher.

👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
by Khairul Muhtadin
Streamline M&A due diligence with AI. This n8n workflow automatically parses financial documents using LlamaIndex, embeds data into Pinecone, and generates comprehensive, AI-driven reports with GPT-5-mini, saving hours of manual review and ensuring consistent, data-backed insights.

**Why Use This Workflow?**
- Time Savings: Reduces manual document review and report generation from days to minutes.
- Cost Reduction: Minimizes reliance on expensive human analysts for initial data extraction and summary.
- Error Prevention: AI-driven analysis ensures consistent data extraction, reducing human error and oversight.
- Scalability: Effortlessly processes multiple documents and deals in parallel, scaling with your business needs.

**Ideal For**
- **Investment Analysts & Private Equity Firms:** Quickly evaluate target companies by automating the extraction of key financials, risks, and business models from deal documents.
- **M&A Advisors:** Conduct preliminary due diligence efficiently, generating comprehensive overview reports for clients without extensive manual effort.
- **Financial Professionals:** Accelerate research and analysis of company filings, investor presentations, and market reports for critical decision-making.

**How It Works**
1. Trigger: A webhook receives multiple due diligence documents (PDFs, DOCX, XLSX) along with associated metadata.
2. Document Processing & Cache Check: Files are split individually. The workflow first checks Pinecone to see if the deal's documents have been processed before (cache hit). If so, it skips parsing and embedding.
3. Data Extraction (LlamaIndex): For new deals, each document is sent to LlamaIndex for advanced parsing, extracting structured text content.
4. Vectorization & Storage: The parsed text is then converted into numerical vector embeddings using OpenAI and stored in Pinecone, our vector database, with relevant metadata.
5. AI-Powered Analysis (Langchain Agent): An n8n Langchain Agent, acting as a "Senior Investment Analyst," leverages GPT-5-mini to query Pinecone multiple times for specific information (e.g., company profile, financials, risks, business model). It synthesizes these findings into a structured JSON output.
6. Report Generation: The structured AI output is transformed into an HTML report, then converted into a professional PDF document.
7. Secure Storage & Delivery: The final PDF due diligence report is uploaded to an S3 bucket, and a public URL is returned via the initial webhook, providing instant access.

**Setup Guide**

Prerequisites

| Requirement | Type | Purpose |
| :---------- | :--- | :------ |
| n8n instance | Essential | Workflow execution platform |
| LlamaIndex API Key | Essential | For robust document parsing and text extraction |
| OpenAI API Key | Essential | For creating text embeddings and powering the GPT-5-mini AI agent |
| Pinecone API Key | Essential | For storing and retrieving vector embeddings |
| AWS S3 Account | Essential | For secure storage of generated PDF reports |

Installation Steps
1. Import the JSON file into your n8n instance.
2. Configure credentials:
   - LlamaIndex: Create an "HTTP Header Auth" credential with x-api-key in the header and your LlamaIndex API key as the value.
   - OpenAI: Create an "OpenAI API" credential with your OpenAI API key. Ensure the credential name is "Sumopod" or update the workflow nodes accordingly.
   - Pinecone: Create a "Pinecone API" credential with your Pinecone API key and environment. Ensure the credential name is "w3khmuhtadin" or update the workflow nodes accordingly.
   - AWS S3: Create an "AWS S3" credential with your Access Key ID and Secret Access Key.
3. Update environment-specific values:
   - In the "Upload to S3" node, ensure the bucketName is set to your desired S3 bucket.
   - In the "Create Public URL" node, update the baseUrl variable to match your S3 bucket's public access URL or CDN if applicable (e.g., https://your-s3-bucket-name.s3.amazonaws.com).
4. Customize settings: Review the prompt in the "Analyze" (Langchain Agent) node to adjust the AI's persona or required queries if needed.
5. Test execution: Send sample documents (PDF, DOCX, XLSX) to the webhook URL (/webhook/dd-ai) to verify all connections and processing steps work as expected.

**Technical Details**

Core Nodes

| Node | Purpose | Key Configuration |
| :--- | :------ | :---------------- |
| Webhook | Initiates workflow with document uploads | Path: dd-ai, HTTP Method: POST |
| Split Multi-File (Code) | Splits binary files, generates a unique deal ID | Parses filenames from body or binary, creates dealId from sorted names. |
| Parse Document via LlamaIndex | Extracts structured text from various document types | URL: https://api.cloud.llamaindex.ai/api/v1/parsing/upload, Authentication: HTTP Header Auth with x-api-key. |
| Monitor Document Processing | Polls LlamaIndex for parsing status | URL: https://api.cloud.llamaindex.ai/api/v1/parsing/job/{{ $json.id }}, Authentication: HTTP Header Auth. |
| Insert to Pinecone | Stores vector embeddings in Pinecone | Mode: insert, Pinecone Index: poc, Pinecone Namespace: dealId. |
| Data Retrieval (Pinecone) | Enables the AI agent to search due diligence documents | Mode: retrieve-as-tool, Pinecone Index: poc, Pinecone Namespace: {{ $json.dealId }}, topK: 100. |
| Analyze (Langchain Agent) | Orchestrates AI analysis using specific queries | Prompt Type: define, detailed role and 6 mandatory Pinecone queries, Model: gpt-5-mini, Output Parser: Parser. |
| Generate PDF (Puppeteer) | Converts the HTML report to a professional PDF | Script Code: await $page.pdf(...) with A4 format, margins, and 60s timeout. |
| Upload to S3 | Stores final PDF reports securely | Bucket Name: poc, File Name: {{ $json.fileName }}, Credentials: AWS S3. |
| If (Check Namespace Exists) | Implements caching logic | Checks stats.namespaces[dealId].vectorCount > 0 to determine cache hit/miss. |

**Workflow Logic**
The workflow begins by accepting multiple files via a webhook. It intelligently checks if the specific "deal" (identified by a unique ID generated from filenames; a hedged sketch of this derivation follows) has already had its documents processed and embedded in Pinecone. This cache mechanism prevents redundant processing, saving time and API costs. If a cache miss occurs, documents are parsed by LlamaIndex, their content vectorized by OpenAI, and stored in a Pinecone namespace unique to the deal.

For analysis, a Langchain Agent powered by GPT-5-mini is instructed with a specific persona and a mandatory sequence of Pinecone queries (e.g., company overview, financials, risks). It uses the Data Retrieval tool to interact with Pinecone, synthesizing information from the stored embeddings. The AI's output is then structured by a dedicated parser, transformed into a human-readable HTML report, and converted into a PDF. Finally, this comprehensive report is uploaded to AWS S3, and a public access URL is provided as a response.
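The template derives dealId from the sorted filenames; the exact derivation is not documented here, so the hashing approach below is an assumption, offered only as a sketch of the caching key idea:

```javascript
// Hypothetical sketch; hashing vs. plain concatenation is an assumption.
// Assumes built-in modules are allowed (e.g. NODE_FUNCTION_ALLOW_BUILTIN=crypto).
const crypto = require('crypto');

const files = $input.first().binary ?? {};
const names = Object.values(files).map((f) => f.fileName).sort();

// Same document set → same dealId → same Pinecone namespace → cache hit
const dealId = crypto.createHash('sha256').update(names.join('|')).digest('hex').slice(0, 16);

return names.map((fileName) => ({ json: { dealId, fileName } }));
```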
**Customization Options**

Basic Adjustments:
- **AI Prompt Refinement:** Modify the Prompt field in the "Analyze" (Langchain Agent) node to adjust the AI's persona, introduce new mandatory queries, or change the reporting style.
- **Output Schema:** Update the JSON schema in the "Parser" (Langchain Output Parser Structured) node to include additional fields or change the structure of the AI's output.

Advanced Enhancements:
- **Integration with CRM/Dataroom:** Add nodes to automatically fetch documents from or update status in a CRM (e.g., Salesforce, HubSpot) or a virtual data room (e.g., CapLinked, Datasite).
- **Conditional Analysis:** Implement logic to trigger different analysis paths or generate different report sections based on document content or deal parameters.
- **Notification System:** Integrate with Slack, Microsoft Teams, or email to send notifications upon report generation or specific risk identification.

**Use Case Examples**

Scenario 1: Private Equity Firm Evaluating a Target Company
- Challenge: A private equity firm receives dozens of due diligence documents (financials, CIM, management presentations) for a potential acquisition and needs a rapid initial assessment.
- Solution: The workflow ingests all documents, automatically parses them, and an AI agent synthesizes key company information, financial summaries (revenue history, margins), and identified risks into a structured report within minutes.
- Result: The firm's analysts gain an immediate, comprehensive overview, enabling faster screening and more focused deep-dive questions, significantly accelerating the deal cycle.

Scenario 2: M&A Advisor Conducting Preliminary Due Diligence
- Challenge: An M&A advisory firm needs to provide clients with quick, consistent, and standardized preliminary due diligence reports across multiple prospects.
- Solution: Advisors upload relevant prospect documents to the workflow. The AI-powered system automatically extracts core business model details, investment thesis highlights, and customer concentration analysis, along with key financials.
- Result: The firm can generate standardized, high-quality preliminary reports efficiently, ensuring consistency across all client engagements and freeing up senior staff for strategic analysis.

Created by: Khmuhtadin
Category: AI | Tags: Due Diligence, AI, Automation, M&A, LlamaIndex, Pinecone, GPT-5-mini, Document Processing

Need custom workflows? Contact us
Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
by Victor Manuel Lagunas Franco
Turn any topic into a ready-to-study Anki deck. This workflow generates vocabulary flashcards with AI images and native pronunciation, then sends the .apkg file straight to your inbox. What it does You fill out a simple form (topic, languages, difficulty) GPT-4 creates vocabulary with translations, readings, and example sentences DALL-E 3 generates a unique image for each word ElevenLabs adds native pronunciation audio (word + example) Everything gets packaged into a real .apkg file The deck lands in your email, ready to import into Anki A backup copy saves to Google Sheets Why I built this I was spending hours making flashcards by hand for language learning. Finding images, recording audio, formatting everything for Anki... it took forever. This workflow does all of that in about 3 minutes. Setup (~15 min) Install npm packages: jszip and sql.js Add OpenAI credentials (for GPT-4 + DALL-E) Add ElevenLabs credentials Connect Gmail and Google Sheets via OAuth Update OPENAI_API_KEY in the DALL-E code node Update the Spreadsheet ID in the Sheets node Features 20 languages supported 7 image styles (minimal icons, kawaii, realistic, watercolor, pixel art...) 6 difficulty levels (A1 to C2) Optional reverse cards (target→native AND native→target) Works on Anki desktop and mobile
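To give a sense of the packaging step: an .apkg file is a zip archive containing an SQLite database (collection.anki2) plus a media manifest. A minimal, hedged sketch using the jszip and sql.js packages from setup step 1; this is illustrative, not the template's exact node code, and it assumes your n8n host allows these external modules:

```javascript
// Illustrative sketch only; assumes jszip and sql.js are installed and
// allowed via NODE_FUNCTION_ALLOW_EXTERNAL.
const JSZip = require('jszip');
const initSqlJs = require('sql.js');

const SQL = await initSqlJs();
const db = new SQL.Database();
// ...create Anki's collection schema and insert notes/cards here (omitted)...

const zip = new JSZip();
zip.file('collection.anki2', Buffer.from(db.export())); // the SQLite deck database
zip.file('media', JSON.stringify({}));                  // media manifest; empty in this sketch

const apkg = await zip.generateAsync({ type: 'nodebuffer' });

return [{
  json: { fileName: 'deck.apkg' },
  binary: {
    data: {
      data: apkg.toString('base64'),
      mimeType: 'application/octet-stream',
      fileName: 'deck.apkg',
    },
  },
}];
```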
by Cheng Siong Chin
**How It Works**
This workflow automates comprehensive risk signal detection and regulatory compliance management across financial and claims data sources. Designed for risk management teams, compliance officers, and financial auditors, it solves the critical challenge of identifying potential risks while ensuring timely regulatory reporting and stakeholder notifications.

The system operates on scheduled intervals, fetching data from multiple sources including financial APIs and claims databases, then merging these streams for unified analysis. It employs an AI-powered risk signal agent to detect anomalies, regulatory violations, and compliance issues. The workflow intelligently routes findings based on risk severity, orchestrating parallel processes for critical risks requiring immediate escalation and standard risks needing documentation. It manages multi-channel notifications through Slack and email, generates comprehensive compliance documentation, and maintains detailed audit trails. By coordinating regulatory analysis, exception handling, and evidence collection, it ensures complete risk visibility while automating compliance workflows.

**Setup Steps**
1. Configure the Schedule Trigger with your risk monitoring frequency
2. Connect the Workflow Configuration node with data source parameters
3. Set up the Fetch B2B Data and Fetch Claims Data nodes with their respective API credentials
4. Configure the Merge Financial Data node for data consolidation
5. Connect the Calculate Risk Metrics node with risk scoring algorithms (a hedged sketch follows this section)
6. Set up the Risk Signal Agent with OpenAI/Nvidia API credentials for anomaly detection
7. Configure the parallel output parsers
8. Connect the Check Critical Risk node with severity routing logic
9. Set up the Route by Risk Level node for workflow branching

**Prerequisites**
OpenAI or Nvidia API credentials for AI-powered risk analysis, financial data API access

**Use Cases**
Insurance companies monitoring claims fraud patterns, financial institutions detecting transaction anomalies

**Customization**
Adjust risk scoring algorithms for industry-specific thresholds

**Benefits**
Reduces risk detection time by 80%, eliminates manual compliance monitoring
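A hedged sketch of what a Calculate Risk Metrics step could look like. The factors, weights, and thresholds below are illustrative assumptions, not the template's actual algorithm; tune them to your industry as the Customization note suggests:

```javascript
// Hypothetical scoring sketch (Code node, Run Once for All Items mode).
return $input.all().map((item) => {
  const { claimAmount = 0, claimCount30d = 0, flaggedTransactions = 0 } = item.json;

  const score =
    Math.min(claimAmount / 100000, 1) * 40 +   // exposure size
    Math.min(claimCount30d / 10, 1) * 30 +     // claim frequency
    Math.min(flaggedTransactions / 5, 1) * 30; // anomaly flags

  return {
    json: {
      ...item.json,
      riskScore: Math.round(score), // 0–100
      severity: score >= 70 ? 'critical' : score >= 40 ? 'standard' : 'low',
    },
  };
});
```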
by Rahul Joshi
**📊 Description**
Every company has documents sitting in Google Drive that nobody reads. HR policies, sales playbooks, product FAQs, financial guidelines — all written once, never found again. This workflow turns all of those documents into a live, searchable AI knowledge base that any team member can query instantly via a simple API call.

Ask it anything. It finds the right document, pulls the exact relevant section, and answers in plain English — with the source cited so you always know where the answer came from. No hallucinations, no guessing, no manual searching.

Built for founders, ops teams, and automation agencies who want company knowledge to be instantly accessible without building a custom RAG system from scratch.

**What This Workflow Does**
📂 Reads all Google Docs from your Knowledge Base folder in Google Drive automatically
✂️ Splits each document into semantic chunks with overlap for better context retrieval
🤖 Converts every chunk into vector embeddings using OpenAI text-embedding-3-small
📌 Stores all embeddings in Pinecone with document metadata for fast semantic search
🌐 Accepts any question via webhook — from Slack, a form, or any internal tool
🔍 Searches Pinecone for the 5 most semantically relevant chunks to the question
🧠 Sends the retrieved context to GPT-4o, which answers using only what's in your documents
📝 Logs every question, answer, source, and confidence score to Google Sheets
🔄 Every Sunday, checks Drive for new or updated documents and re-ingests them automatically
📧 Sends a weekly knowledge base digest showing what's current, new, or updated

**Key Benefits**
✅ Zero hallucinations — GPT-4o only answers from your actual documents
✅ Always cites the source document so answers are verifiable
✅ Semantic search finds relevant content even if exact words don't match
✅ Knowledge base stays fresh automatically every Sunday
✅ Every Q&A logged to Google Sheets for a full audit trail
✅ Works with any Google Docs — just drop them in the folder and run SW1

**How It Works**
The workflow runs across 3 sub-workflows — one for ingestion, one for answering, one for maintenance.

**SW1 — Document Ingestion Pipeline (run manually)**
You point it at your Google Drive Knowledge Base folder. It downloads every Google Doc as plain text and splits each one into 500-character chunks with 100-character overlap so context is preserved across boundaries. Each chunk gets converted into a 1536-dimension vector embedding using OpenAI's text-embedding-3-small model and stored in Pinecone with the document name as metadata. Every ingested document is logged to your Document Registry sheet with the ingestion date. Run this once when setting up, then SW3 handles updates automatically.

**SW2 — Question & Answer Agent (always active via webhook)**
Someone sends a POST request with a question and their email. The question gets converted to an embedding using the same model used during ingestion. Pinecone finds the 5 most semantically similar chunks, ranked by cosine similarity score. Chunks scoring below 0.3 are filtered out to avoid irrelevant results. The remaining context gets sent to GPT-4o with strict instructions to only answer from what's provided. If the answer isn't in the knowledge base, it says so clearly instead of making something up. The response includes the answer, source document, confidence level, and whether it was found in the knowledge base. Everything is logged to your Q&A Log sheet.
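As a hedged illustration of an SW2 request, a POST body might look like the following. The field names are assumptions based on the description above, not the template's documented schema:

```json
{
  "question": "What is our remote work policy for international employees?",
  "email": "jane@yourcompany.com"
}
```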
**SW3 — Knowledge Base Manager (every Sunday 11AM)**
Pulls your current Drive folder contents and compares every document ID against your Document Registry. New documents get flagged for ingestion. Existing documents get checked — if a file was modified after its last ingestion date, it gets re-ingested automatically. You get a weekly digest email showing what's current, what was updated, and what's new. No manual monitoring needed.

**Features**
- Manual ingestion trigger for initial setup
- Google Drive folder monitoring for new and updated docs
- Recursive character text splitting with configurable chunk size and overlap (sketched at the end of this section)
- OpenAI text-embedding-3-small for high-quality 1536-dimension embeddings
- Pinecone vector database for fast cosine similarity search
- Relevance score filtering — only chunks above a 0.3 score are used
- GPT-4o grounded answering with a strict no-hallucination prompt
- Source citation in every answer
- Confidence scoring — high, medium, or low per response
- Full Q&A audit log in Google Sheets
- Weekly automated document registry sync
- Weekly KB digest email with a full status report
- Modular 3-stage architecture — easy to extend with Slack or Teams integration

**Requirements**
- OpenAI API key (text-embedding-3-small + GPT-4o access)
- Pinecone account — free tier works (index: dimensions 1536, metric cosine)
- Google Drive OAuth2 connection
- Google Sheets OAuth2 connection
- Gmail OAuth2 connection
- A Google Drive folder with your company documents as Google Docs
- A configured Google Sheet with 2 sheets: Q&A Log and Document Registry

**Setup Steps**
1. Create a Pinecone account at pinecone.io — the free tier is enough
2. Create a Pinecone index with dimensions 1536 and metric cosine
3. Create a Google Drive folder called "Knowledge Base"
4. Add your company documents as Google Docs inside that folder
5. Copy the Google Sheet template and grab your Sheet ID
6. Add all credentials — Pinecone, OpenAI, Google Drive, Google Sheets, Gmail
7. Paste your Knowledge Base folder ID into both Google Drive nodes
8. Paste your Sheet ID into all Google Sheets nodes
9. Test by sending a POST request to the webhook with a question from your docs

**Target Audience**
🧠 Founders who want instant answers from company documents without digging through Drive
📋 Ops and HR teams tired of answering the same internal questions repeatedly
💼 Sales teams who need instant access to product, pricing, and competitor information
🤖 Automation agencies building internal AI tools and knowledge systems for clients
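For reference, a plain-JavaScript sketch of the SW1 chunking strategy described above: 500-character chunks with 100-character overlap. The template itself uses n8n's recursive character text splitter node rather than custom code; this only illustrates the idea:

```javascript
// Illustrative only; the template uses a text-splitter node, not custom code.
// 500-char chunks with 100-char overlap preserve context across boundaries.
function chunkText(text, size = 500, overlap = 100) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached
  }
  return chunks;
}

// Example: a 1200-character document yields chunks starting at 0, 400, and 800.
```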