by Daniel Shashko
**How it Works**

This workflow transforms natural language queries into research reports through a five-stage AI pipeline. When triggered via webhook (typically from Google Sheets using the companion google-apps-script.js GitHub gist), it first checks the Redis cache for instant results. For new queries, GPT-4o breaks complex questions into focused sub-queries, optimizes them for search, then uses Bright Data's MCP Tool to find the top 5 credible sources (official sites, news, financial reports). URLs are scraped in parallel, bypassing bot detection. GPT-4o extracts structured data from each source: answers, facts, entities, sentiment, quotes, and dates. GPT-4o-mini validates source credibility and filters unreliable content. Valid results are aggregated into a final summary with confidence scores, key insights, and extended analysis. Results are cached for 1 hour and delivered via webhook, Slack, email, and DataTable - all in 30-90 seconds, with 60 requests/minute rate limiting.

**Who is this for?**

- Research teams needing automated multi-source intelligence
- Content creators and journalists requiring fact-checked information
- Due diligence professionals conducting competitive intelligence
- Google Sheets power users wanting AI research in spreadsheets
- Teams managing large research volumes needing caching and rate limiting

**Setup Steps**

Setup time: 30-45 minutes

Requirements:
- Bright Data account (Web Scraping API + MCP token)
- OpenAI API key (GPT-4o and GPT-4o-mini access)
- Redis instance
- Slack workspace (optional)
- SMTP email provider (optional)
- Google account (optional, for Sheets integration)

Core Setup:
1. Get a Bright Data Web Scraping API token and MCP token
2. Get an OpenAI API key
3. Set up a Redis instance
4. Configure critical nodes:
   - Webhook Entry: Add Header Auth token
   - Bright Data MCP Tool: Add MCP endpoint with token
   - Parallel Web Scraping: Add Bright Data API credentials
   - Redis Nodes: Add connection credentials
   - All GPT Nodes: Add OpenAI API key (5 nodes)
   - Slack/Email: Add credentials if using
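The cache-then-research pattern (check Redis first, store fresh results for 1 hour) can be sketched as follows. This is an illustrative model of the logic only: an in-memory `Map` stands in for Redis, and the key format is an assumption, not the workflow's actual implementation.

```javascript
// Sketch of the cache-then-research pattern with a 1-hour TTL.
// An in-memory Map stands in for Redis; the key format is illustrative.
const CACHE_TTL_MS = 60 * 60 * 1000; // 1 hour, as in the workflow
const cache = new Map();

function cacheKey(prompt, language) {
  // Normalize so "Tokyo population?" and "tokyo population?" share one key
  return `research:${language}:${prompt.trim().toLowerCase()}`;
}

function getCached(prompt, language, now = Date.now()) {
  const entry = cache.get(cacheKey(prompt, language));
  if (!entry || now - entry.storedAt > CACHE_TTL_MS) return null;
  return entry.report;
}

function storeResult(prompt, language, report, now = Date.now()) {
  cache.set(cacheKey(prompt, language), { report, storedAt: now });
}
```

On a cache hit the pipeline skips the GPT-4o and Bright Data stages entirely, which is what makes repeated queries return instantly.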
**Google Sheets Integration:**
1. Create a Google Sheet
2. Open Extensions → Apps Script
3. Paste the companion google-apps-script.js code
4. Update the webhook URL and auth token
5. Save and authorize
6. Test with: {"prompt": "What is the population of Tokyo?", "source": "Test", "language": "English"}

**Customization Guidance**

- **Source Count:** Change from 5 to 3-10 URLs per query
- **Cache Duration:** Adjust from 1 hour up to 24 hours for stable info
- **Rate Limits:** Modify the 60/minute cap based on usage needs
- **Character Limits:** Adjust the 400-char main answer to 200-1000
- **AI Models:** Swap GPT-4o for Claude, or use GPT-4o-mini for all stages
- **Geographic Targeting:** Add more regions beyond us/il
- **Output Channels:** Add Notion, Airtable, Discord, Teams
- **Temperature:** Lower (0.1-0.2) for facts, higher (0.4-0.6) for analysis

Once configured, this workflow handles all web research, from fact-checking to complex analysis - delivering validated intelligence in seconds with automatic caching.

Built by Daniel Shashko. Connect on LinkedIn.
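The 60 requests/minute cap mentioned above is a classic sliding-window limiter. A minimal sketch of that logic (in the actual workflow this state would live in Redis rather than in process memory, and the function name is illustrative):

```javascript
// Illustrative sliding-window limiter for the 60 requests/minute cap.
const WINDOW_MS = 60 * 1000;
const LIMIT = 60;
const timestamps = [];

function allowRequest(now = Date.now()) {
  // Drop timestamps that have aged out of the window, then check capacity
  while (timestamps.length && now - timestamps[0] > WINDOW_MS) timestamps.shift();
  if (timestamps.length >= LIMIT) return false;
  timestamps.push(now);
  return true;
}
```

Raising or lowering `LIMIT` is the "Rate Limits" customization; requests over the cap should be queued or rejected with a retry hint rather than dropped silently.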
by Feras Dabour
**Auto-generate job descriptions from briefing notes with OpenAI and Google Docs**

**Who is this for**

Recruiters, HR teams, and hiring managers who conduct role briefing conversations and want to convert their meeting notes into polished, structured job descriptions automatically -- without manual copywriting.

**What this workflow does**

This workflow watches a Google Drive folder for new briefing documents, extracts structured job data using AI, generates a professional HTML job description, sends it to Microsoft Teams for approval, and exports the final version as a PDF to Google Drive.

**How it works**

- **Trigger** -- A Google Drive Trigger detects when a new Google Doc (e.g. a briefing transcript) is created in a watched folder.
- **File organization** -- A timestamped subfolder is created and the document is moved into it for a clean project structure.
- **Document reading** -- The Google Doc content is fetched via the Google Docs API.
- **AI data extraction** -- An OpenAI AI Agent analyzes the transcript (supports German input) and extracts structured job data: title, department, responsibilities, skills, benefits, tech stack, and more -- output as JSON.
- **Data logging** -- The extracted fields are appended to a Google Sheets tracker for reference and audit.
- **Prompt assembly** -- A Code node builds a detailed prompt from the structured data, choosing between "create" mode (first draft) or "revise" mode (feedback loop).
- **JD generation** -- A second AI Agent (JD-Writer) generates a complete, styled HTML job description following a professional template with sections like responsibilities, profile, benefits, and diversity statement.
- **Human review** -- The draft is sent to a Microsoft Teams chat with an approve/reject form and an optional feedback field.
- **Approval path** -- If approved, the HTML is converted to PDF and uploaded to the Google Drive subfolder alongside the original briefing.
- **Revision loop** -- If rejected, the feedback is routed back to the JD-Writer for targeted revisions, and the updated draft is re-sent for approval.

**Setup steps**

- **Google Drive & Docs** -- Create OAuth2 credentials. Set the folder ID in the Google Drive Trigger node to the folder where briefing documents are saved.
- **Google Sheets** -- Create a spreadsheet with columns for all job data fields (job_title, department, responsibilities, hard_skills, soft_skills, etc.). Update the Sheet ID in the Google Sheets node.
- **OpenAI** -- Add your API key as an OpenAI credential. Used for both the data extraction agent (reads the transcript) and the JD-Writer agent (generates the job description).
- **Microsoft Teams** -- Create OAuth2 credentials. Set the Teams chat ID in the approval node to the chat or channel where drafts should be reviewed.
- **HTML-to-PDF** -- Install the community node n8n-nodes-htmlcsstopdf (self-hosted only). Add the API credential.

**Requirements**

- Community node: n8n-nodes-htmlcsstopdf -- **self-hosted n8n only**
- OpenAI API key (GPT-4 or newer recommended)
- Google Drive, Docs & Sheets OAuth2 credentials
- Microsoft Teams OAuth2 credentials

**How to customize**

- **AI extraction prompt** -- Edit the system message in the "Extract job data from transcript" node to adjust which fields are extracted or to support different transcript languages.
- **JD template style** -- Modify the prompt in the "Build JD-Writer prompt" Code node to change the tone, section order, or formatting style of the generated job description.
- **Approval channel** -- Change the Teams chat ID to route drafts to a different team or channel.
- **Output format** -- Swap the HTML-to-PDF node for a different converter, or skip PDF and use the raw HTML output directly.
- **Tracker columns** -- Add or remove columns in Google Sheets to match your internal job data schema.
- **Revision depth** -- The approval loop supports unlimited revision cycles. The JD-Writer applies feedback minimally without rewriting from scratch.
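The prompt-assembly Code node's create/revise branching can be sketched like this. The field names and prompt wording are illustrative assumptions, not the template's exact code:

```javascript
// Sketch of the "Build JD-Writer prompt" Code node: choose between a
// first-draft ("create") prompt and a feedback-driven ("revise") prompt.
// Field names and prompt text are illustrative.
function buildJdPrompt(job, previousDraft = null, feedback = null) {
  const mode = previousDraft && feedback ? "revise" : "create";
  if (mode === "revise") {
    return [
      "Revise the job description below. Apply the feedback minimally;",
      "do not rewrite sections the feedback does not mention.",
      `Feedback: ${feedback}`,
      `Current draft:\n${previousDraft}`,
    ].join("\n");
  }
  return [
    `Write an HTML job description for "${job.job_title}" (${job.department}).`,
    `Responsibilities: ${job.responsibilities.join("; ")}`,
    `Required skills: ${job.hard_skills.join(", ")}`,
    "Include sections: responsibilities, profile, benefits, diversity statement.",
  ].join("\n");
}
```

The "apply feedback minimally" instruction in the revise branch is what keeps the unlimited revision loop from rewriting approved sections from scratch.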
by Artem Boiko
A Telegram bot that converts natural-language work descriptions into detailed cost estimates using AI parsing, vector search, and the open-source DDC CWICR database with 55,000+ construction work items.

**Who's it for**

- **Contractors & Estimators** who need quick ballpark figures from verbal/text descriptions
- **Construction managers** doing feasibility checks on-site via mobile
- **BIM/CAD professionals** integrating text-based estimation into workflows
- **Developers** building construction cost APIs or chatbots

**What it does**

- Receives text messages in Telegram (work lists, specifications, notes)
- Parses input with AI (OpenAI/Claude/Gemini) into structured work items
- Searches the DDC CWICR vector database via Qdrant for matching rates
- Calculates costs with a full breakdown (labor, materials, machines)
- Exports results as HTML report, Excel, or PDF
- Supports 9 languages: DE · EN · RU · ES · FR · PT · ZH · AR · HI

**How it works**

Telegram Text Input → AI Parse (GPT/Claude) → Embeddings (OpenAI) → Qdrant Search → AI Rerank Results → Calculate Costs → Aggregate Results → Export (HTML/XLS/PDF)

Step-by-step:
1. User sends /start → selects language → enters a work description
2. AI Parse extracts work items: name, quantity, unit, room
3. Query Transform optimizes search terms for the construction domain
4. Embeddings API converts the query to a vector (OpenAI text-embedding-3-small)
5. Qdrant Search finds the top-10 matching rates from DDC CWICR
6. AI Rerank selects the best match considering context and units
7. Calculate applies quantities and sums labor/materials/machines
8. Report sends a Telegram message + optional Excel/PDF export

**Prerequisites**

| Component | Requirement |
|-----------|-------------|
| n8n | v1.30+ (AI nodes support) |
| Telegram Bot | Token from @BotFather |
| OpenAI API | For embeddings + LLM parsing |
| Qdrant | Vector DB with DDC CWICR collections loaded |
| DDC CWICR Data | github.com/datadrivenconstruction/DDC-CWICR |

**Setup**

1. **Credentials** (n8n Settings → Credentials)
   - **OpenAI API** -- required for embeddings and text parsing
   - **Anthropic API** -- optional, for Claude models
   - **Google Gemini API** -- optional, for Gemini models
2. **Configuration** (TOKEN node)
   - bot_token = YOUR_TELEGRAM_BOT_TOKEN
   - QDRANT_URL = http://localhost:6333
   - QDRANT_API_KEY = (if using Qdrant Cloud)
3. **Qdrant Setup** -- Load DDC CWICR collections for your target languages:
   - DE_construction_rates -- German (STLB-Bau based)
   - EN_construction_rates -- English
   - RU_construction_rates -- Russian (GESN/FER based)
   - ... (see DDC CWICR docs for all 9 languages)
4. **Link AI Model Nodes**
   - Open the OpenAI Model nodes
   - Select your OpenAI credential
   - (Optional) Enable Claude/Gemini nodes for alternative models
5. **Telegram Webhook**
   - Activate the workflow
   - The Telegram Trigger auto-registers the webhook
   - Test with /start in your bot

**Features**

| Feature | Description |
|---------|-------------|
| Multi-LLM | Swap between OpenAI, Claude, Gemini |
| 9 Languages | Full UI + database localization |
| Smart Parsing | Handles lists, tables, free-form text |
| Semantic Search | Vector similarity + AI reranking |
| Cost Breakdown | Labor, materials, machines, hours |
| Inline Edit | Modify quantities, delete items |
| Export | HTML report, Excel, PDF |
| Session State | Multi-turn conversation support |

**Example Input/Output**

Input (Telegram message):

Living room renovation:
- Laminate flooring 25 m²
- Wall painting 60 m²
- Ceiling plasterboard 25 m²
- 3 electrical outlets

Output:

Estimate Ready -- 4 items found
- Laminate flooring -- 25 m² × €18.50 = €462.50 (Labor: €125 · Materials: €337.50)
- Wall painting -- 60 m² × €8.20 = €492.00 (Labor: €312 · Materials: €180)
- Ceiling plasterboard -- 25 m² × €32.00 = €800.00 (Labor: €425 · Materials: €375)
- Electrical outlets -- 3 pcs × €45.00 = €135.00 (Labor: €95 · Materials: €40)

Total: €1,889.50
[Excel] [PDF] [Restart]

**Notes & Tips**

- **First run:** Ensure Qdrant has DDC CWICR data loaded before testing
- **Rate accuracy:** Results depend on query quality; AI reranking improves matching
- **Large lists:** The bot handles 50+ items; progress is shown per item
- **Customization:** Edit the Config node for UI text, currencies, database mapping
- **Extend:** Chain with your CRM, project management, or reporting tools

**Categories**

AI · Data Extraction · Communication · Files & Storage

**Tags**

telegram-bot, construction, cost-estimation, qdrant, vector-search, openai, multilingual, bim, cad

**Author**

DataDrivenConstruction.io
https://DataDrivenConstruction.io
info@datadrivenconstruction.io

**Consulting & Training**

We help construction, engineering, and technology firms implement:
- Open data principles for construction
- CAD/BIM processing automation
- AI-powered estimation pipelines
- ETL workflows for construction databases

Contact us to test with your data or adapt to your project requirements.

**Resources**

- **DDC CWICR Database:** GitHub
- **Qdrant Setup Guide:** qdrant.tech/documentation
- **n8n AI Nodes:** docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain

Star us on GitHub! github.com/datadrivenconstruction/DDC-CWICR
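The Calculate step of this workflow (apply the parsed quantity to the matched unit rate, sum labor/materials/machines) can be sketched as below. The rate structure and field names are illustrative assumptions, not the DDC CWICR schema itself:

```javascript
// Sketch of the cost-calculation step: quantity × matched unit rate,
// with a labor/materials/machines breakdown. Field names are illustrative.
function calculateEstimate(items) {
  const lines = items.map((it) => {
    const unitRate = it.rate.labor + it.rate.materials + it.rate.machines;
    return {
      name: it.name,
      total: +(it.quantity * unitRate).toFixed(2),
      labor: +(it.quantity * it.rate.labor).toFixed(2),
      materials: +(it.quantity * it.rate.materials).toFixed(2),
      machines: +(it.quantity * it.rate.machines).toFixed(2),
    };
  });
  const grandTotal = +lines.reduce((s, l) => s + l.total, 0).toFixed(2);
  return { lines, grandTotal };
}
```

With a €5 labor / €13.50 materials split per m², 25 m² of laminate yields €462.50, matching the example output above.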
by Țugui Dragoș
This workflow automates the process of turning meeting recordings into structured notes and actionable tasks using AssemblyAI and Google Sheets. It is ideal for teams who want to save time on manual note-taking and ensure that action items from meetings are never missed.

**What it does**

- Receives a meeting recording (audio file) via webhook
- Transcribes the audio using AssemblyAI
- Uses AI to generate structured meeting notes and extract action items (tasks)
- Logs meeting details and action items to a Google Sheet for easy tracking

**Use cases**

- Automatically document meetings and share notes with your team
- Track action items and responsibilities from every meeting
- Centralize meeting outcomes and tasks in Google Sheets

**Quick Setup**

1. **AssemblyAI API Key:** Sign up at AssemblyAI and get your API key.
2. **Google Sheets Credentials:** Set up a Google Service Account and share your target Google Sheet with the service account email.
3. **OpenAI API Key** (optional, if using OpenAI for notes extraction): Get your API key from OpenAI.
4. Configure the following essential nodes:
   - Recording Ready Webhook: Set the webhook URL in your meeting platform to trigger the workflow when a recording is ready.
   - Workflow Configuration: Enter your AssemblyAI API key, default due date, and admin email.
   - AssemblyAI Transcription: Add your AssemblyAI API key in the credentials.
   - Generate Meeting Notes & Extract Action Items: Add your OpenAI API key if required.
   - Log Meeting to Sheets: Enter your Google Sheets document ID and sheet name.

**How to Use AssemblyAI in this Workflow**

The workflow sends the meeting audio file to AssemblyAI via the AssemblyAI Transcription node. AssemblyAI processes the audio and returns a full transcript. The transcript is then used by AI nodes to generate meeting notes and extract action items.

**Requirements**

- AssemblyAI API key
- Google Service Account credentials
- (Optional) OpenAI API key for advanced note and action item extraction

Start the workflow by sending a meeting recording to the webhook URL.
The rest is fully automated!
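The "Log Meeting to Sheets" step flattens the AI output (one meeting, many action items) into one sheet row per task. A minimal sketch of that mapping, with illustrative column names and a default-due-date fallback as described in the Workflow Configuration node:

```javascript
// Sketch of flattening AI meeting output into Google Sheets rows:
// one row per action item. Column names are illustrative.
function toSheetRows(meeting) {
  return meeting.actionItems.map((item) => ({
    meeting_title: meeting.title,
    meeting_date: meeting.date,
    summary: meeting.summary,
    task: item.task,
    owner: item.owner || "unassigned",
    due_date: item.dueDate || meeting.defaultDueDate,
  }));
}
```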
by Joseph
Transform meeting transcripts into fully customized, AI-powered presentations automatically. This comprehensive 5-workflow automation system analyzes client conversations and generates professional slide decks complete with personalized content and AI-generated illustrations.

**What This Automation Does**

This end-to-end solution takes a meeting transcript (Google Docs) and client information as input, then automatically:

- Creates a presentation from your custom template
- Generates a strategic presentation plan tailored to the client's needs
- Creates custom illustrations using AI image generation
- Populates slides with personalized text content
- Inserts generated images into the appropriate slides
- Delivers a client-ready presentation

Perfect for sales teams, consultants, agencies, and anyone who needs to create customized presentations at scale.

**How It Works**

The automation is split into 5 interconnected workflows:

**Workflow 1: Clone Presentation & Database Setup**
- A form trigger captures the client name, transcript URL, and submission time
- Clones your presentation template via the Google Slides API
- Saves presentation details to Google Sheets for tracking

**Workflow 2: AI Presentation Plan Generation**
- Analyzes the meeting transcript to understand client pain points
- Generates a comprehensive presentation structure and content strategy
- Saves the plan to Google Docs for review and tracking
- Uses a company profile (customizable) to match solutions to client needs

**Workflow 3: AI Illustration Generation**
- An AI agent creates image prompts based on the presentation plan
- Generates illustrations using the Flux model via OpenRouter (nanobanana)
- Uploads images to Google Drive for slide insertion
- Tracks all generated assets in the database

**Workflow 4: Text Content Population**
- An AI agent generates the final presentation text from the plan
- Replaces template placeholders with personalized content
- Uses Object IDs to target specific text elements in slides
- Updates slides using the native n8n Google Slides node

**Workflow 5: Image Insertion**
- Retrieves image Object IDs from the presentation structure
- Downloads illustrations from Google Drive
- Converts images for ImgBB hosting (resolves Google Drive URL limitations)
- Updates slide images via the Google Slides API

**Prerequisites**

Required Accounts & API Keys:
- Google Workspace (Drive, Slides, Docs)
- OpenAI API (for AI agents)
- OpenRouter API (for Flux image generation)
- ImgBB API (free tier available)
- Gemini API (optional, for additional AI tasks)

Setup Requirements:
- Google Sheets database (template provided in the article and inside the workflow)
- Google Slides presentation template with standard Object IDs
- Meeting transcript in Google Docs format

**Customization Options**

This automation is designed to be flexible:

- **Template Flexibility**: Use any slide template structure
- **Company Profile**: Customize the business context for your use case
- **AI Models**: Swap OpenAI/Gemini agents for your preferred LLM
- **Image Generation**: Replace Flux with DALL-E, Midjourney API, or other models
- **Slide Logic**: Extend to dynamically select slides based on content needs

**Key Technical Insights**

- **Structured Output Handling**: Uses JavaScript for reliable JSON parsing when the AI output structure is complex
- **Object ID System**: Template placeholders use unique IDs for precise element targeting
- **Image Hosting Workaround**: ImgBB resolves Google Drive direct-URL limitations in API calls
- **HTTP Request Nodes**: Used for API operations not covered by native n8n nodes (copying presentations, image updates)

**Full Documentation**

For a detailed breakdown of each workflow, configuration steps, and best practices, read the complete guide in the Medium article.

**Use Cases**

- **Sales Teams**: Auto-generate pitch decks from discovery calls
- **Consulting Firms**: Create client proposals from needs assessments
- **Marketing Agencies**: Build campaign presentations from strategy sessions
- **Product Teams**: Transform user research into stakeholder presentations
- **Training & Education**: Convert session notes into learning materials
**Important Notes**

- The template must use consistent Object IDs for the automation to work
- Google Drive images require ImgBB hosting for reliable URL access
- AI agent output structure is complex; JavaScript parsing is recommended
- Rate limits apply for API services (especially image generation)

**Resources & Templates**

API Services (Get Your Keys Here):
- **OpenRouter** - For Flux (nanobanana) AI image generation
- **ImgBB API** - Free image hosting service
- **OpenAI API** - For AI agents and text generation
- **Google Cloud Console** - Enable the Google Slides, Drive, and Docs APIs
- **Google AI Studio** - For the Gemini API key

Templates & Examples:
- **Meeting Transcript Sample** - Example transcript structure
- **Google Sheets Database Template** - Copy this to track your presentations
- **Presentation Template** - Base slide deck with Object IDs

Tip: Make copies of all templates before using them in your workflows!

Have questions or improvements? Connect with me:
- X (Twitter): @juppfy
- Email: joseph@uppfy.com

P.S.: I'd love to hear how you adapt this for your workflow!
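The "JavaScript for reliable JSON parsing" recommendation comes up because AI agents often wrap JSON in markdown fences or surround it with prose. A simplified sketch of a tolerant parser (an illustrative approach, not the template's exact code):

```javascript
// Sketch of robust parsing of AI agent output: strip ```json fences if
// present, then fall back to the outermost braces before JSON.parse.
function parseAgentJson(raw) {
  const fenced = raw.match(/```(?:json)?\s*([\s\S]*?)```/);
  const candidate = fenced ? fenced[1] : raw;
  const start = candidate.indexOf("{");
  const end = candidate.lastIndexOf("}");
  if (start === -1 || end === -1) throw new Error("No JSON object found in agent output");
  return JSON.parse(candidate.slice(start, end + 1));
}
```

A plain `JSON.parse(raw)` would throw on fenced or prose-wrapped output, which is exactly the failure mode this guards against.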
by Oneclick AI Squad
A smart, fully automated coding pipeline built inside n8n that leverages Cursor AI to write, refactor, review, and optimize code projects - triggered by a webhook, schedule, or manual prompt. Every output is versioned, stored, and delivered with a detailed review report.

**What's the Goal?**

Eliminate the repetitive overhead of writing boilerplate code, performing code reviews, and refactoring legacy code manually. This workflow turns a plain-text task description into production-ready, reviewed, and optimized code - automatically and at scale - using Cursor AI's deep coding intelligence inside n8n.

**Why Does It Matter?**

Software teams lose hundreds of hours to boilerplate writing, manual code reviews, and inconsistent refactoring. AI-assisted coding with Cursor is powerful, but still requires manual triggering. By wiring Cursor into n8n, you get a repeatable, auditable, hands-free coding pipeline that integrates directly with your Git repos, Slack, and storage - making AI code generation a true part of your CI/CD culture, not just a one-off tool.
**How It Works**

1. A Webhook or Schedule triggers the flow with a coding task description
2. The task is classified (Generate / Review / Refactor / Optimize)
3. The Cursor AI API is called with the appropriate system prompt & task
4. Raw code output is received and parsed
5. A second Cursor pass performs automated code review & scoring
6. If the quality score passes the threshold, the code is committed to GitHub
7. If the score is below the threshold, Cursor runs an optimization pass
8. Final code + review report are saved to Google Drive
9. A summary is logged to Google Sheets
10. A Slack notification is sent with a code snippet preview & Drive link

**Configuration Requirements**

- **Cursor AI API key** (via Cursor developer access or proxy endpoint)
- **GitHub Personal Access Token** (for auto-commit & PR creation)
- **Google Drive OAuth2** (for storing code files & reports)
- **Google Sheets OAuth2** (for logging task history & quality scores)
- **Slack Bot Token** (for team notifications)
- Optional: **OpenAI API key** (for task classification fallback)

**Setup Guide**

1. Import this workflow into your n8n instance
2. Connect all credentials: Cursor API, GitHub, Google Drive, Google Sheets, Slack
3. Open the Set Task Config node and fill in:
   - repo_owner and repo_name (your GitHub target repo)
   - target_branch (e.g. ai-generated or main)
   - quality_threshold (score 0-100, recommended: 75)
   - storage_folder (Google Drive folder name)
   - log_sheet_id (Google Sheets document ID)
4. Test with a manual webhook POST containing { "task": "Write a Python FastAPI CRUD endpoint for users" }
5. Review the output in Drive and check the Slack notification
6. Activate the webhook for live use
7. Optionally activate the daily schedule for batch processing of queued tasks
8. Monitor quality scores in Google Sheets and tune the threshold as needed
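The quality gate at the heart of the pipeline is a simple threshold branch. A sketch of that decision, using the recommended threshold of 75 (the return shape and the unparseable-score fallback are illustrative assumptions):

```javascript
// Sketch of the quality-gate branch: commit when the review score meets
// the threshold, otherwise route to an optimization pass.
function routeByQuality(reviewScore, threshold = 75) {
  if (typeof reviewScore !== "number" || Number.isNaN(reviewScore)) {
    // Defensive fallback: a score the review pass failed to produce
    // should never be auto-committed
    return { action: "optimize", reason: "unparseable score" };
  }
  return reviewScore >= threshold
    ? { action: "commit", reason: `score ${reviewScore} >= ${threshold}` }
    : { action: "optimize", reason: `score ${reviewScore} < ${threshold}` };
}
```

Tuning `quality_threshold` in the Set Task Config node trades commit volume against review strictness; the reason string is useful for the Google Sheets log.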
by aditya vadaganadam
This n8n template turns chat questions into structured financial reports using Gemini and posts them to a Discord channel via webhook. Ask about tickers, sectors, or theses (e.g., "NVDA long-term outlook?" or "Gold ETF short-term drivers?") and receive a concise, shareable report.

**Good to know**

- Not financial advice: use for insights only; verify independently.
- Model availability can vary by region. If you see "model not found," it may be geo-restricted.
- Costs depend on model and tokens. Check current Gemini pricing for updates.
- Discord messages are limited to ~2000 characters per post; long reports may need splitting.
- Rate limits: Discord webhooks are rate-limited; add short waits for bursts.

**How it works**

1. Chat Trigger collects the user's question (public chat supported when the workflow is activated).
2. Conversation Memory keeps a short window of recent messages to maintain context.
3. Connect Gemini provides the LLM (e.g., gemini-2.5-flash-lite) and parameters (temperature, tokens).
4. Agent (agent1) applies a financial-analysis System Message to produce structured insights.
5. Structured Output Parser enforces a simple JSON schema: idea (one-line thesis) + analysis (Markdown sections).
6. Code formats a Discord-ready Markdown report (title, question, executive summary, sections, disclaimer).
7. Edit Fields maps the formatted report to a clean content field.
8. Discord Webhook posts the final report to your channel.

**How to use**

- Start with the built-in Chat Trigger: click Open chat, ask a question, and verify the Discord post.
- Replace or augment with a Cron or Webhook trigger for scheduled or programmatic runs.
- For richer context, add HTTP Request nodes (prices, news, filings) and pass summaries to the agent.

**Requirements**

- n8n instance with internet access
- Google AI (Gemini) API key
- Discord server with a webhook URL

**Customising this workflow**

- System Message: Adjust tone, depth, risk profile, and required sections (Summary, Drivers, Risks, Metrics, Next Steps, Takeaway).
- Model settings: Switch models or tune temperature/tokens in Connect Gemini.
- Schema: Extend the parser and formatter with fields like drivers[], risks[], or metrics{}.
- Formatting: Edit the Code node to change headings, emojis, disclaimers, or add timestamps.
- Operations: Add retries, message splitting for long outputs, and rate-limit handling for Discord.
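The message splitting mentioned under Operations can be handled in the Code node. A sketch that breaks a long Markdown report at newlines so sections stay intact under Discord's ~2000-character limit (illustrative logic, not the template's shipped code):

```javascript
// Sketch of splitting a long Markdown report for Discord's ~2000-character
// webhook limit, preferring to break at newlines so sections stay intact.
function splitForDiscord(text, limit = 2000) {
  const chunks = [];
  let rest = text;
  while (rest.length > limit) {
    // Break at the last newline inside the limit, or hard-split if none
    let cut = rest.lastIndexOf("\n", limit);
    if (cut <= 0) cut = limit;
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut).replace(/^\n/, "");
  }
  if (rest.length) chunks.push(rest);
  return chunks;
}
```

Each chunk would go out as a separate webhook POST; add a short wait between posts to stay under Discord's webhook rate limits.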
by DIGITAL BIZ TECH
**AI-Powered Website Chatbot with Google Drive Knowledge Base**

**Overview**

This workflow combines website chatbot intelligence with automated document ingestion and vectorization - enabling live Q&A from both chat input and processed Google Drive files. It uses Mistral AI for OCR + embeddings, and Qdrant for vector search.

**Chatbot Flow**

- **Trigger:** When a chat message is received, or via webhook depending on the deployed chatbot
- **Model:** OpenAI gpt-4.1-mini
- **Memory:** Simple Memory (Buffer Window)
- **Vector Search Tool:** Qdrant Vector Store
- **Embeddings:** Mistral Cloud
- **Agent:** website chat agent
  - Responds based on chatdbtai Supabase content
  - Enforces brand tone and informative documents
- Integrates with both the embedded chat UI and the webhook

**Document → Knowledge Base Pipeline**

Triggered manually to keep the vector store up to date.

Steps:
1. Google Drive (brand folder) → Fetch files from folder Website kb (ID: 1o3DK9Ceka5Lqb8irvFSfEeB8SVGG_OL7)
2. Loop Over Items → For each file:
   - Set metadata
   - Download the file
   - Upload to Mistral for OCR
   - Get a signed URL
   - Run OCR extraction (mistral-ocr-latest)
   - If OCR succeeds → pass to the chunking pipeline; else → skip and continue
3. Chunking Logic (Code node)
   - Splits the document into 1,000-character JSON chunks
   - Adds metadata (source, char positions, file ID)
4. Default Data Loader + Text Splitter → Prepares chunks for embedding
5. Embeddings (Mistral Cloud) → Generates embeddings for text chunks
6. Qdrant Vector Store (Insert mode) → Saves embeddings into the docragtestkb collection
7. Wait → Optional delay between batches

**Integrations Used**

| Service | Purpose | Credential |
|----------|----------|------------|
| Google Drive | File source | Google Drive account 6 rn dbt |
| Mistral Cloud | OCR + embeddings | Mistral Cloud account 2 dbt rn |
| Qdrant | Vector storage | QdrantApi account |
| OpenAI | Chat model | OpenAi account 8 dbt digi |

**Agent System Prompt Summary**

> "You are the official AI assistant for this website. Use chatdbtai only as your knowledge source. Respond conversationally, list offerings clearly, link blogs, and say 'I couldn't find that on this site' if no match."

**Key Features**

- Automated OCR + chunking → vectorization
- Persistent memory for chat sessions
- Multi-channel (Webhook + Embedded Chat)
- Fully brand-guided, structured responses
- Live data retrieval from the Qdrant vector store

**Summary**

> A unified workflow that turns brand files + web content into a knowledge base that powers an intelligent chatbot - capable of responding to visitors in real time, powered by Mistral, OpenAI, and Qdrant.

**Need Help or More Workflows?**

Want to customize this workflow for your business or integrate it with your existing tools? Our team at Digital Biz Tech can tailor it precisely to your use case, from automation logic to AI-powered enhancements.

We can help you set it up for free - from connecting credentials to deploying it live.

- Contact: shilpa.raju@digitalbiz.tech
- Website: https://www.digitalbiz.tech
- LinkedIn: https://www.linkedin.com/company/digital-biz-tech/

You can also DM us on LinkedIn for any help.
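The Chunking Logic Code node described above (1,000-character chunks with source metadata) can be sketched like this. The metadata field names are illustrative, not the node's exact output schema:

```javascript
// Sketch of the chunking Code node: split OCR text into 1,000-character
// chunks with source metadata for embedding. Field names are illustrative.
function chunkDocument(text, fileId, source, size = 1000) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size) {
    const body = text.slice(start, start + size);
    chunks.push({
      text: body,
      metadata: { source, fileId, charStart: start, charEnd: start + body.length },
    });
  }
  return chunks;
}
```

Keeping the character positions in metadata lets a retrieved chunk be traced back to its exact location in the original OCR text.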
by Trung Tran
**Create AI-Powered Chatbot for Candidate Evaluation on Slack**

> This workflow connects a Slack chatbot with AI agents and Google Sheets to automate candidate resume evaluation. It extracts resume details, identifies the applied job from the message, fetches the correct job description, and provides a summarized evaluation via Slack and a tracking sheet. Perfect for HR teams using Slack.

**Who's it for**

This workflow is designed for:
- **HR Teams, Recruiters, and Hiring Managers**
- Working in software or tech companies using Slack, Google Sheets, and n8n
- Who want to automate candidate evaluation based on uploaded profiles and applied job positions

**How it works / What it does**

This workflow is triggered when a Slack user mentions the HR bot and attaches a candidate profile PDF. The workflow performs the following steps:

- **Trigger from Slack Mention** -- A user mentions the bot in Slack with a message like: "@HRBot Please evaluate this candidate for the AI Engineer role." (with PDF attached)
- **Input Validation** -- If no file is attached, the bot replies: "Please upload the candidate profile file before sending the message."
- **Extract Candidate Profile** -- Downloads the attached PDF from Slack and uses Extract from File to parse the resume into text
- **Profile Analysis (AI Agent)** -- Sends the resume text and message to the Profile Analyzer Agent, which identifies the candidate name, email, and summary plus the applied position (from the message), and looks up the Job Description PDF URL using Google Sheets
- **Job Description Retrieval** -- Downloads and parses the matching JD PDF
- **HR Evaluation (AI Agent)** -- Sends both the candidate profile and the job description to the HR Expert Agent, which returns a summarized fit evaluation and insights
- **Output and Logging** -- Sends the evaluation result back to Slack in the original thread and updates a Google Sheet with evaluation data for tracking

**How to set up**

1. **Slack Setup**
   - Create a Slack bot and install it into your workspace
   - Enable the app_mention event and generate a bot token
   - Connect Slack to n8n using Slack Bot credentials
2. **Google Sheets Setup**
   - Create a sheet mapping Position Title → Job Description URL
   - Create another sheet for logging evaluation results
3. **n8n Setup**
   - Add a Webhook Trigger for Slack mentions
   - Connect Slack, Google Sheets, and GPT-4 credentials
   - Set up the agents (Profile Analyzer Agent, HR Expert Agent) with appropriate prompts
4. **Deploy & Test**
   - Mention your bot in Slack with a message and file
   - Confirm the reply and the entry in the evaluation tracking sheet

**Requirements**

- n8n (self-hosted or cloud)
- Slack App with Bot Token
- OpenAI or Azure OpenAI account (for GPT-4)
- Google Sheets (2 sheets: job mapping + evaluation log)
- Candidate profiles in PDF format
- Defined job titles and descriptions

**How to customize the workflow**

You can easily adapt this workflow to your team's needs:

| Customization Area | How to Customize |
|--------------------------|----------------------------------------------------------------------------------|
| Job Mapping Source | Replace Google Sheet with Airtable or Notion DB |
| JD Format | Use Markdown or inline descriptions instead of PDF |
| Evaluation Output Format | Change from Slack message to Email or Notion update |
| HR Agent Prompt | Customize to match your company tone or include scoring rubrics |
| Language Support | Add support for bilingual input/output (e.g., Vietnamese & English) |
| Workflow Trigger | Trigger from a slash command or form instead of @mention |
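The input-validation step can be sketched as a check on the Slack event's `files` array before the evaluation runs. The event shape follows Slack's files payload; the exact field access is an illustrative assumption, and the reply text matches the workflow's message:

```javascript
// Sketch of the input-validation step: require a PDF attachment on the
// app_mention event before proceeding. Event shape is illustrative.
function validateMention(event) {
  const files = event.files || [];
  const pdf = files.find((f) => f.mimetype === "application/pdf");
  if (!pdf) {
    return {
      ok: false,
      reply: "Please upload the candidate profile file before sending the message.",
    };
  }
  return { ok: true, fileId: pdf.id };
}
```

An IF node on `ok` would route the failure branch straight to a Slack reply in the original thread.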
by Guillaume Duvernay
This template introduces a revolutionary approach to automated web research. Instead of a rigid workflow that can only find one type of information, this system uses a "thinker" and "doer" AI architecture. It dynamically interprets your plain-English research request, designs a custom spreadsheet (CSV) with the perfect columns for your goal, and then deploys a web-scraping AI to fill it out. It's like having an expert research assistant who not only finds the data you need but also builds the perfect container for it on the fly. Whether you're looking for sales leads, competitor data, or market trends, this workflow adapts to your request and delivers a perfectly structured, ready-to-use dataset every time.

**Who is this for?**

- **Sales & marketing teams:** Generate targeted lead lists, compile competitor analysis, or gather market intelligence with a simple text prompt.
- **Researchers & analysts:** Quickly gather and structure data from the web for any topic without needing to write custom scrapers.
- **Entrepreneurs & business owners:** Perform rapid market research to validate ideas, find suppliers, or identify opportunities.
- **Anyone who needs structured data:** Transform unstructured, natural language requests into clean, organized spreadsheets.

**What problem does this solve?**

- **Eliminates rigid, single-purpose workflows:** This workflow isn't hardcoded to find just one thing. It dynamically adapts its entire research plan and data structure based on your request.
- **Automates the entire research process:** It handles everything from understanding the goal and planning the research to executing the web search and structuring the final data.
- **Bridges the gap between questions and data:** It translates your high-level goal (e.g., "I need sales leads") into a concrete, structured spreadsheet with all the necessary columns (Company Name, Website, Key Contacts, etc.).
- **Optimizes for cost and efficiency:** It intelligently uses a combination of deep-dive and standard web searches from **Linkup.so** to gather high-quality initial results and then enrich them cost-effectively.

How it works (The "Thinker & Doer" Method)

The process is split into two main phases.

The "Thinker" (AI Planner):

1. You submit a research request via the built-in form (e.g., "Find 50 US-based fashion companies for a sales outreach campaign").
2. The first AI node acts as the "thinker." It analyzes your request and determines the optimal structure for your final spreadsheet.
3. It dynamically generates a plan, which includes a discoveryQuery to find the initial list, an enrichmentQuery to get details for each item, and the JSON schemas that define the exact columns for your CSV.

The "Doer" (AI Researcher): The rest of the workflow is the "doer," which executes the plan.

1. Discovery: It uses a powerful "deep search" with Linkup.so to execute the discoveryQuery and find the initial list of items (e.g., the 50 fashion companies).
2. Enrichment: It then loops through each item in the list. For each one, it performs a fast and cost-effective "standard search" with Linkup to execute the enrichmentQuery, filling in all the detailed columns defined by the "thinker."
3. Final Output: The workflow consolidates all the enriched data and converts it into a final CSV file, ready for download or further processing.

Setup

1. Connect your AI provider: In the OpenAI Chat Model node, add your AI provider's credentials.
2. Connect your Linkup account: In the two Linkup (HTTP Request) nodes, add your Linkup API key (free account at linkup.so). We recommend creating a "Generic Credential" of type "Bearer Token" for this. Linkup offers €5 of free credits monthly, which is enough for 1k standard searches or 100 deep queries.
3. Activate the workflow: Toggle the workflow to "Active." You can now use the form to submit your first research request!
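The "thinker" plan and the "doer" loop described above can be sketched as follows. The field names discoveryQuery and enrichmentQuery come from the template description; the schema layout, the example values, and the two search helpers are hypothetical stand-ins for the Linkup deep and standard search calls.

```python
# Illustrative sketch of the "thinker" plan consumed by the "doer" loop.
# The search helpers are placeholders, not real Linkup API calls.
import csv
import io

plan = {
    "discoveryQuery": "List 50 US-based fashion companies for B2B sales outreach",
    "enrichmentQuery": "For {company}, find the official website and key contacts",
    "discoverySchema": {"items": ["Company Name"]},
    "enrichmentSchema": ["Company Name", "Website", "Key Contacts"],
}

def deep_search(query):
    # Placeholder for Linkup's "deep search" (discovery phase).
    return ["Acme Apparel", "Vogue Threads"]

def standard_search(query):
    # Placeholder for Linkup's cheaper "standard search" (enrichment phase).
    return {"Website": "https://example.com", "Key Contacts": "Jane Doe"}

def run_plan(plan):
    rows = []
    for item in deep_search(plan["discoveryQuery"]):           # discovery
        details = standard_search(plan["enrichmentQuery"].format(company=item))
        rows.append({"Company Name": item, **details})         # enrichment
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=plan["enrichmentSchema"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()                                      # final CSV
```

The point of the split is that the column list (`enrichmentSchema`) is data, not code, so the same loop serves any research request the "thinker" plans for.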
Taking it further

- **Add a custom dashboard:** Replace the form trigger and final CSV output with a more polished user experience. For example, build a simple web app where users can submit requests and download their completed research files.
- **Make it company-aware:** Modify the "thinker" AI's prompt to include context about your company. This will allow it to generate research plans that are automatically tailored to finding leads or data relevant to your specific products and services.
- **Add an AI summary layer:** After the CSV is generated, add a final AI node to read the entire file and produce a high-level summary, such as "Here are the top 5 leads to contact first and why," turning the raw data into an instant, actionable report.
by AlphaInsider
AlphaInsider Telegram Chat Bot

Automate trading on AlphaInsider by monitoring Telegram messages. Uses AI to analyze signals and execute trades, create posts, or answer questions.

How It Works

Message Flow: Telegram → Route (DM/Channel) → Detect Type (Text/Voice) → Transcribe (if voice) → Global Settings → Fetch Positions → AI Analysis → Route Action (Trade/Post/Q&A) → Execute → Reply (if DM)

Three AI Actions:

- Trade - Executes orders on AlphaInsider based on clear trading signals
- Post - Creates audience posts on AlphaInsider from commentary or analysis
- Q&A - Responds to direct questions about positions or trading

The AI agent analyzes messages with current portfolio context, determines intent, and routes to the appropriate action.

Prerequisites

- **n8n instance** (self-hosted or cloud)
- **AlphaInsider account** with API access - Sign up
- **OpenAI API key** - Get key
- **Telegram Bot** - Create via @BotFather

Quick Setup

1. Configure Credentials

Telegram API:
- Create bot with @BotFather and copy auth token
- Add token to Telegram Channel Listener node
- (Optional) Add bot as admin to channel with no permissions

OpenAI API:
- Get API key from OpenAI Platform
- Add to OpenAI Model and Transcribe Voice Message nodes

AlphaInsider API:
- Go to Developer Settings
- Click n8n button and copy API key
- Add to Get Positions, Search Stocks, and Create Orders nodes

2. Configure Workflow

Global Settings Node:
- **strategy_id** (required): Copy from AlphaInsider strategy URL (e.g. URL: https://alphainsider.com/strategy/niAlE-cMI8TdsYQllZLmf, Strategy ID: niAlE-cMI8TdsYQllZLmf)
- **whitelist** (optional): Restrict trading to specific securities using ["SYMBOL:EXCHANGE"] format, or leave as [] for all

Channel Check Node (optional):
- Set channel_id to monitor specific channel (e.g., -1001234567890)
- Leave default to monitor all messages

3. Activate

Toggle Active in the workflow editor. Your bot is now monitoring Telegram.
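The two Global Settings values above are mechanical to derive and check. This is a minimal sketch, assuming only what the setup text states: the strategy ID is the last path segment of the strategy URL, and an empty whitelist means all securities are allowed. The helper names are illustrative, not workflow nodes.

```python
# Sketch of the Global Settings logic: strategy ID extraction and whitelist check.

def strategy_id_from_url(url: str) -> str:
    # The strategy ID is the final path segment of the AlphaInsider strategy URL.
    return url.rstrip("/").rsplit("/", 1)[-1]

def is_allowed(security: str, whitelist: list[str]) -> bool:
    # Whitelist entries use "SYMBOL:EXCHANGE"; an empty list ([]) allows everything.
    return not whitelist or security in whitelist

strategy_id = strategy_id_from_url(
    "https://alphainsider.com/strategy/niAlE-cMI8TdsYQllZLmf"
)
```

With the example URL from the setup steps, `strategy_id` comes out as `niAlE-cMI8TdsYQllZLmf`, matching the ID shown in the text.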
Features

- **Real-time monitoring** via webhook for instant processing
- **Dual input modes**: Direct messages and channel posts
- **Voice message support**: Automatic transcription with OpenAI Whisper
- **Portfolio context**: Considers existing positions before trading
- **Leverage support**: Up to 2x (200%) portfolio leverage
- **Security whitelist**: Optional restriction to approved securities
- **Interactive replies**: Responds to DMs with confirmations or answers

Trading Logic

Trade Signals (action = trade):
- **Long**: buy, bullish, loading up, moon, strong buy
- **Short**: sell, dumping, overvalued, bearish, exiting
- **Close**: take profit, exiting, holding cash, getting out

Posts (action = post):
- Broadcasting information or analysis to followers
- No explicit trading intention

Q&A (action = none):
- Position queries and trading questions
- Pure observation or technical analysis
- Weak/ambiguous signals

Allocation Rules:
- Percentages: 0 to 2.0 (1.5 = 150% leverage)
- Total must not exceed 2.0
- Cannot mix stocks and crypto
- Conservative approach - only acts on clear signals

Example Usage

Buy Signal:
- Message: "Extremely bullish on Nvidia. Adding more NVDA"
- Current: 50% TSLA
- Result: 150% NVDA + 50% TSLA = 200%

Reallocation:
- Message: "Selling Tesla to buy Microsoft"
- Current: 100% TSLA
- Result: 100% MSFT (TSLA closed)

Post to Audience:
- Message: "Bitcoin breaking key resistance at $45k!"
- Result: Post created on AlphaInsider

Position Query:
- Message: "What are my current positions?"
- Reply: "Your current positions are: TSLA:NASDAQ long 100%, MSFT:NASDAQ long 50%."
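The allocation rules above are concrete enough to express as a check. This is a hedged sketch, not the workflow's Parse Stock Allocations node: each percentage must lie between 0 and 2.0, the total must not exceed 2.0 (200% leverage), and stock exchanges (NYSE/NASDAQ) must not be mixed with crypto (COINBASE).

```python
# Sketch of the allocation rules: per-position bounds, total leverage cap,
# and no stock/crypto mixing. Exchange sets mirror the supported exchanges.
STOCK_EXCHANGES = {"NYSE", "NASDAQ"}
CRYPTO_EXCHANGES = {"COINBASE"}

def validate_allocations(allocations: dict[str, float]) -> bool:
    """allocations maps 'SYMBOL:EXCHANGE' to a fraction of the portfolio."""
    if any(not 0 <= pct <= 2.0 for pct in allocations.values()):
        return False
    if sum(allocations.values()) > 2.0:
        return False
    exchanges = {sec.split(":")[1] for sec in allocations}
    # Reject portfolios that touch both stock and crypto exchanges.
    return not (exchanges & STOCK_EXCHANGES and exchanges & CRYPTO_EXCHANGES)
```

Under these rules the buy-signal example above (150% NVDA + 50% TSLA = 200%) passes, while adding further leverage on top of it would be rejected.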
Supported Exchanges

- **COINBASE** - Cryptocurrencies
- **NYSE** - New York Stock Exchange
- **NASDAQ** - NASDAQ Stock Market

Troubleshooting

| Issue | Solution |
|-------|----------|
| Webhook not triggering | Verify bot is admin in channel with correct permissions |
| AI not trading | Ensure messages contain clear action words (buying, selling, closing) |
| Trades not executing | Verify AlphaInsider credentials and strategy_id |
| Voice messages failing | Check OpenAI API key in Transcribe Voice Message node |
| Wrong channel messages | Update channel_id in Channel Check node |
| Not replying to DMs | Verify Telegram credentials in User Reply node |

Customization

Open Parse Stock Allocations node to modify trading logic:
- Signal recognition keywords
- Leverage limits (default: 2.0)
- Allocation strategies
- Risk filters

Disclaimer

Trading involves risk. Always test thoroughly before using with real money. Start with small allocations and security whitelists. Users are responsible for their own trading decisions and risk management.
by Cheng Siong Chin
How It Works

This workflow automates financial transaction surveillance by monitoring multiple payment systems, analyzing transaction patterns with AI, and triggering instant fraud alerts. Designed for finance teams, compliance officers, and fintech operations, it solves the challenge of real-time fraud detection across high-volume transaction streams without manual oversight.

The system continuously fetches transactions from banking APIs and payment gateways via scheduled triggers or webhooks. Each transaction flows through validation layers checking for irregular amounts, velocity patterns, and geolocation anomalies. AI models analyze transaction metadata against historical patterns to calculate fraud risk scores. High-risk transactions trigger immediate alerts to designated teams via Gmail and Slack, while audit trails are logged to Google Sheets for compliance documentation. Approved transactions proceed to reconciliation, aggregating financial reports automatically.

This eliminates delayed fraud discovery, reduces false positives through intelligent scoring, and ensures regulatory compliance through comprehensive audit logging.

Setup Steps

1. Configure banking API credentials for transaction access
2. Set up webhook endpoints for real-time transaction notifications
3. Add OpenAI API key for fraud pattern analysis and risk scoring
4. Configure NVIDIA NIM API for advanced anomaly detection models
5. Set Gmail OAuth credentials for automated fraud alert delivery
6. Connect Slack workspace and specify alert channels for urgent notifications
7. Link Google Sheets for transaction logging and compliance audit trails

Prerequisites

Active accounts for payment processors (Stripe, PayPal) or banking APIs (Plaid)

Use Cases

Real-time credit card transaction monitoring with instant fraud blocks

Customization

Adjust fraud risk scoring thresholds based on business risk tolerance

Benefits

Reduces fraud detection time from hours to seconds through real-time monitoring.