by Don Jayamaha Jr
Instantly access live OKX Spot Market data directly in Telegram. This workflow integrates the OKX REST v5 API with Telegram and optional GPT-4.1-mini formatting, delivering real-time insights such as latest prices, order book depth, candlesticks, trades, and mark prices, all in clean, structured reports.

**How It Works**
- A Telegram Trigger node listens for incoming user commands.
- The User Authentication node validates the Telegram ID to allow only authorized users.
- The workflow creates a Session ID from chat.id to manage session memory.
- The OKX AI Agent orchestrates data retrieval via HTTP requests to OKX endpoints:
  - Latest Price (/api/v5/market/ticker?instId=BTC-USDT)
  - 24h Stats (/api/v5/market/ticker?instId=BTC-USDT)
  - Order Book Depth (/api/v5/market/books?instId=BTC-USDT&sz=50)
  - Best Bid/Ask (book ticker snapshot)
  - Candlesticks / Klines (/api/v5/market/candles?instId=BTC-USDT&bar=15m)
  - Average / Mark Price (/api/v5/market/mark-price?instType=SPOT&instId=BTC-USDT)
  - Recent Trades (/api/v5/market/trades?instId=BTC-USDT&limit=100)
- Utility tools refine the data:
  - Calculator: spreads, % change, normalized volumes.
  - Think: reshapes raw JSON into clean text.
  - Simple Memory: stores sessionId, symbol, and state for multi-turn interactions.
- A message splitter keeps Telegram output under 4,000 characters.
- Final results are sent to Telegram in a structured, human-readable format.

**What You Can Do with This Agent**
- Get the latest price and 24h stats for any Spot instrument.
- Retrieve order book depth with configurable size (up to 400 levels).
- View best bid/ask snapshots instantly.
- Fetch candlestick OHLCV data across intervals (1m to 1M).
- Monitor recent trades (up to 100).
- Check the mark price as a fair average reference.
- Receive clean, Telegram-ready reports (auto-split if too long).

**Setup Steps**
1. Create a Telegram bot: use @BotFather to generate a bot token.
2. Configure in n8n:
   - Import OKX AI Agent v1.02.json.
   - Replace the placeholder in the User Authentication node with your Telegram ID.
   - Add Telegram API credentials (bot token).
   - Add your OpenAI API key for GPT-4.1-mini.
   - Add your OKX API key (optional).
3. Deploy and test:
   - Activate the workflow in n8n.
   - Send a query like BTC-USDT to your bot.
   - Instantly get structured OKX Spot data back in Telegram.

**Setup Video Tutorial**
Watch the full setup guide on YouTube.

Unlock real-time OKX Spot Market insights directly in Telegram - no private API keys required!

**Licensing & Attribution**
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
For support: Don Jayamaha - LinkedIn
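To see the kind of data the agent works with, here is a minimal Python sketch that queries the same public OKX v5 ticker endpoint listed above (public market data needs no API key). The response field names in the comments are assumptions based on the v5 ticker response and should be checked against the OKX docs; the workflow itself performs this call through an n8n HTTP Request tool.

```python
import requests

# Public OKX v5 ticker endpoint (the same one the "Latest Price" tool calls)
OKX_TICKER_URL = "https://www.okx.com/api/v5/market/ticker"

def fetch_ticker(inst_id: str = "BTC-USDT") -> dict:
    """Fetch the latest spot ticker for an instrument from OKX."""
    resp = requests.get(OKX_TICKER_URL, params={"instId": inst_id}, timeout=10)
    resp.raise_for_status()
    payload = resp.json()
    # OKX wraps results in a "data" list; "last", "bidPx", and "askPx" are the
    # fields the agent would typically format into a Telegram report.
    return payload["data"][0]

if __name__ == "__main__":
    ticker = fetch_ticker("BTC-USDT")
    print(f"Last: {ticker['last']}  Bid: {ticker['bidPx']}  Ask: {ticker['askPx']}")
```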
by Intuz
This n8n template from Intuz provides a complete solution for automating the extraction of critical information from PDF documents such as faxes. It uses Google Gemini's multimodal capabilities to read the document, identify key fields, and organize the data into a structured format, saving it directly to a Google Sheet.

**Who's this workflow for?**
- Healthcare Administrators
- Medical Billing Teams
- Legal Assistants
- Data Entry Professionals
- Office Managers

**How it works**
1. Upload via Web Form: The process starts when a user uploads a fax (as a PDF file) through a simple, secure web form generated by n8n.
2. AI Document Analysis: The PDF is sent directly to Google Gemini's advanced multimodal model, which reads the entire document, including text, tables, and form fields, and extracts all relevant information based on a detailed prompt.
3. AI Data Structuring: The raw extracted text is then passed to a second AI step, which cleans the information and strictly structures it into a predictable JSON format (e.g., Patient ID, Name, DOB, etc.).
4. Save to Google Sheets: The final, structured data is automatically appended as a new, clean row in your designated Google Sheet, creating an organized and usable dataset from the unstructured fax.

**Key Requirements to Use This Template**
1. n8n Instance & Required Nodes: An active n8n account (Cloud or self-hosted). This workflow uses the official n8n LangChain integration (@n8n/n8n-nodes-langchain). If you are using a self-hosted version of n8n, please ensure this package is installed.
2. Google Accounts:
   - Google Drive Account: for temporarily storing the uploaded file.
   - Google Gemini AI Account: a Google Cloud account with the Vertex AI API (for Gemini models) enabled and an associated API key.
   - Google Sheets Account: a pre-made Google Sheet with columns that match the data you want to extract.

**Customer Setup Guide**
Here is a detailed, step-by-step guide to help you configure and run this workflow.

1. Before You Begin: Prerequisites
Please ensure you have the following ready:
- The FAX-Content-Extraction.json file we provided.
- Active accounts for n8n, Google Drive, Google Cloud (for Gemini AI), and Google Sheets.
- A Google Sheet created with header columns that match the data you want to extract (e.g., Patient ID, Patient Name, Date of Birth, etc.).

2. Step-by-Step Configuration

Step 1: Import the Workflow
- Open your n8n canvas.
- Click "Import from File" and select the FAX-Content-Extraction.json file. The workflow will appear on your canvas.

Step 2: Set Up the Form Trigger
- The workflow starts with the "On form submission" node. Click on this node.
- In the settings panel, you will see a "Form URL". Copy this URL; it is the link to the web form where you will upload your fax files.

Step 3: Configure the Google Drive Node
- Click on the "Upload file" (Google Drive) node.
- Credentials: select your Google Drive account from the "Credentials" dropdown or click "Create New" to connect your account.
- Folder ID: in the "Folder ID" field, choose the specific Google Drive folder where you want the uploaded faxes to be saved.

Step 4: Configure the Google Gemini AI Nodes (Very Important)
This workflow uses AI in two places, and both need to be connected.
- First AI Call (PDF Reading): Click on the "Call Gemini 2.0 Flash with PDF Capabilities" (HTTP Request) node. Under "Authentication", make sure "Predefined Credential Type" is selected. For "Credential Type", choose "Google Palm API". In the "Credentials" dropdown, select your Google Gemini API key or click "Create New" to add it.
- Second AI Call (Data Structuring): Click on the "Google Gemini Chat Model" node (it's connected below the "Basic LLM Chain" node). In the "Credentials" dropdown, select the same Google Gemini API key you used before.

Step 5: (Optional) Customize What Data is Extracted
You have full control over what information the AI looks for.
- To change the extraction rules: click on the "Define Prompt" node. You can edit the text in the "Value" field to tell the AI what to look for (e.g., "Extract only the patient's name and medication list").
- To change the final output columns: click on the "Basic LLM Chain" node. In the "Text" field, you can edit the JSON schema to add, remove, or rename the fields you want in your final output. The keys here MUST match the column headers in your Google Sheet.

Step 6: Configure the Final Google Sheets Node
- Click on the "Append row in sheet" node.
- Credentials: select your Google Sheets account from the "Credentials" dropdown.
- Document ID: select your target spreadsheet from the "Document" dropdown list.
- Sheet Name: select the specific sheet within that document.
- Columns: ensure that the fields listed here match the columns in your sheet and the schema from the "Basic LLM Chain" node.

3. Running the Workflow
- Save and Activate: click "Save" and then toggle the workflow to "Active".
- Open the Form: open the Form URL you copied in Step 2 in a new browser tab.
- Upload a File: upload a sample fax PDF and submit the form.
- Check Your Sheet: after a minute, a new row with the extracted data should appear in your Google Sheet.

**Connect with us**
Website: https://www.intuz.com/services
Email: getstarted@intuz.com
LinkedIn: https://www.linkedin.com/company/intuz
Get Started: https://n8n.partnerlinks.io/intuz
For custom workflow automation, click here: Get Started
by Denis
**How it works**
- Multi-modal AI image generator powered by Google's Nano Banana (Gemini 2.5 Flash Image), the latest state-of-the-art image generation model.
- Accepts text, images, voice messages, and PDFs via Telegram for maximum flexibility.
- Uses OpenAI GPT models for conversation and image analysis, then Nano Banana for stunning image generation.
- Features conversation memory for iterative image modifications ("make it darker", "change to blue").
- Processes different input types: analyzes uploaded images, transcribes voice messages, extracts PDF text.
- All inputs are converted to optimized prompts specifically tuned for Nano Banana's capabilities.

**Set up steps**
1. Create a Telegram bot via @BotFather and get the API token.
2. Set up a Google Gemini API key from Google AI Studio for Nano Banana image generation (~$0.04/image).
3. Configure an OpenAI API key for the GPT models (conversation, image analysis, voice transcription).
4. Import the workflow and configure all three API credentials in n8n.
5. Update the bot tokens in the HTTP Request nodes used for file downloads.
6. Test with text prompts, image uploads, voice messages, and PDF documents.
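Step 5 refers to downloading user-uploaded files (images, voice notes, PDFs) via the Telegram Bot API. A minimal Python sketch of that two-step call is shown below; the BOT_TOKEN value and the file_id source are placeholders, and in the workflow these requests live inside n8n HTTP Request nodes.

```python
import requests

BOT_TOKEN = "123456:ABC-your-bot-token"  # placeholder: token from @BotFather

def download_telegram_file(file_id: str, dest_path: str) -> None:
    """Resolve a Telegram file_id to a server path, then download the bytes."""
    # Step 1: ask the Bot API where the file lives
    meta = requests.get(
        f"https://api.telegram.org/bot{BOT_TOKEN}/getFile",
        params={"file_id": file_id},
        timeout=30,
    ).json()
    file_path = meta["result"]["file_path"]

    # Step 2: fetch the actual file from the file endpoint
    data = requests.get(
        f"https://api.telegram.org/file/bot{BOT_TOKEN}/{file_path}",
        timeout=60,
    ).content
    with open(dest_path, "wb") as f:
        f.write(data)

# Example: download_telegram_file(message["voice"]["file_id"], "voice.ogg")
```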
by Oneclick AI Squad
A smart, fully automated coding pipeline built inside n8n that leverages Cursor AI to write, refactor, review, and optimize code projects, triggered by a webhook, schedule, or manual prompt. Every output is versioned, stored, and delivered with a detailed review report.

**What's the Goal?**
Eliminate the repetitive overhead of writing boilerplate code, performing code reviews, and refactoring legacy code manually. This workflow turns a plain-text task description into production-ready, reviewed, and optimized code, automatically and at scale, using Cursor AI's deep coding intelligence inside n8n.

**Why Does It Matter?**
Software teams lose hundreds of hours to boilerplate writing, manual code reviews, and inconsistent refactoring. AI-assisted coding with Cursor is powerful, but still requires manual triggering. By wiring Cursor into n8n, you get a repeatable, auditable, hands-free coding pipeline that integrates directly with your Git repos, Slack, and storage, making AI code generation a true part of your CI/CD culture, not just a one-off tool.

**How It Works**
1. A webhook or schedule triggers the flow with a coding task description.
2. The task is classified (Generate / Review / Refactor / Optimize).
3. The Cursor AI API is called with the appropriate system prompt and task.
4. Raw code output is received and parsed.
5. A second Cursor pass performs automated code review and scoring.
6. If the quality score passes the threshold, the code is committed to GitHub.
7. If the score is below the threshold, Cursor runs an optimization pass.
8. The final code and review report are saved to Google Drive.
9. A summary is logged to Google Sheets.
10. A Slack notification is sent with a code snippet preview and Drive link.

**Configuration Requirements**
- Cursor AI API key (via Cursor developer access or proxy endpoint)
- GitHub Personal Access Token (for auto-commit and PR creation)
- Google Drive OAuth2 (for storing code files and reports)
- Google Sheets OAuth2 (for logging task history and quality scores)
- Slack Bot Token (for team notifications)
- Optional: OpenAI API key (for task classification fallback)

**Setup Guide**
1. Import this workflow into your n8n instance.
2. Connect all credentials: Cursor API, GitHub, Google Drive, Google Sheets, Slack.
3. Open the Set Task Config node and fill in:
   - repo_owner and repo_name (your GitHub target repo)
   - target_branch (e.g. ai-generated or main)
   - quality_threshold (score 0-100, recommended: 75)
   - storage_folder (Google Drive folder name)
   - log_sheet_id (Google Sheets document ID)
4. Test with a manual webhook POST containing { "task": "Write a Python FastAPI CRUD endpoint for users" } (see the sketch below).
5. Review the output in Drive and check the Slack notification.
6. Activate the webhook for live use.
7. Optionally activate the daily schedule for batch processing of queued tasks.
8. Monitor quality scores in Google Sheets and tune the threshold as needed.
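Here is a minimal Python sketch of setup step 4, posting a test task to the workflow's webhook. The URL and headers are placeholders, since your n8n instance shows the real test/production webhook URLs on the Webhook trigger node and any auth header you configure there.

```python
import requests

# Placeholders: use the webhook URL shown on your n8n Webhook node,
# plus whatever auth header (if any) you configured on it.
WEBHOOK_URL = "https://your-n8n-instance.example.com/webhook/cursor-coding-pipeline"
HEADERS = {"Content-Type": "application/json"}

task = {"task": "Write a Python FastAPI CRUD endpoint for users"}

resp = requests.post(WEBHOOK_URL, json=task, headers=HEADERS, timeout=30)
print(resp.status_code, resp.text)  # expect a 200 and the workflow's acknowledgment
```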
by Jitesh Dugar
Transform college admissions from an overwhelming manual process into an intelligent, efficient, and equitable system that analyzes essays, scores applicants holistically, and identifies top candidates, saving 40+ hours per week while improving decision quality.

**What This Workflow Does**
Automates comprehensive application review with AI-powered analysis:
- Application Intake - Captures complete college applications via Jotform
- AI Essay Analysis - Deep analysis of personal statements and supplemental essays for:
  - Writing quality, authenticity, and voice
  - AI-generated content detection
  - Specificity and research quality
  - Red flags (plagiarism, inconsistencies, generic writing)
- Holistic Review AI - Evaluates applicants across five dimensions:
  - Academic strength (GPA, test scores, rigor)
  - Extracurricular profile (leadership, depth, impact)
  - Personal qualities (character, resilience, maturity)
  - Institutional fit (values alignment, contribution potential)
  - Diversity contribution (unique perspectives, experiences)
- Smart Routing - Automatically categorizes and routes applications:
  - Strong Admit (85-100): Slack alert → Director email → Interview invitation → Fast-track
  - Committee Review (65-84): Detailed analysis → Committee discussion → Human decision
  - Standard Review (<65): Acknowledgment → Human verification → Standard timeline
- Comprehensive Analytics - All applications logged with scores, recommendations, and outcomes

**Key Features**

AI Essay Analysis Engine
- **Writing Quality Assessment**: Grammar, vocabulary, structure, narrative coherence
- **Authenticity Detection**: Distinguishes genuine voice from AI-generated content (GPT detectors)
- **Content Depth Evaluation**: Self-awareness, insight, maturity, storytelling ability
- **Specificity Scoring**: Generic vs tailored "Why Us" essays with research depth
- **Red Flag Identification**: Plagiarism indicators, privilege blindness, inconsistencies, template writing
- **Thematic Analysis**: Core values, motivations, growth narratives, unique perspectives

Holistic Review Scoring (0-100 Scale)
- **Academic Strength (35%)**: GPA in context, test scores, course rigor, intellectual curiosity
- **Extracurricular Profile (25%)**: Quality over quantity, leadership impact, commitment depth
- **Personal Qualities (20%)**: Character, resilience, empathy, authenticity, self-awareness
- **Institutional Fit (15%)**: Values alignment, demonstrated interest, contribution potential
- **Diversity Contribution (5%)**: Unique perspectives, life experiences, background diversity

Intelligent Candidate Classification
- **Admit**: Top 15% - clear admit, exceptional across multiple dimensions
- **Strong Maybe**: Top 15-30% - competitive, needs committee discussion
- **Maybe**: Top 30-50% - solid but not standout, waitlist consideration
- **Deny**: Below threshold - does not meet competitive standards (always human-verified)

Automated Workflows
- **Priority Candidates**: Immediate Slack alerts, director briefs, interview invitations
- **Committee Cases**: Detailed analysis packets, discussion points, voting workflows
- **Standard Processing**: Professional acknowledgments, timeline communications
- **Interview Scheduling**: Automated invitations with candidate-specific questions

**Perfect For**
- **Selective Colleges & Universities**: 15-30% acceptance rates, holistic review processes
- **Liberal Arts Colleges**: Emphasis on essays, personal qualities, institutional fit
- **Large Public Universities**: Processing thousands of applications efficiently
- **Graduate Programs**: MBA, law, medical school admissions
- **Scholarship Committees**: Evaluating merit and need-based awards
- **Honors Programs**: Identifying top candidates for selective programs
- **Private High Schools**: Admissions teams with holistic processes

**Admissions Impact**

Efficiency & Productivity
- **40-50 hours saved per week** on initial application review
- **70% faster** essay evaluation with AI pre-analysis
- **3x more applications** processed per reader
- **Zero data entry** - all information auto-extracted
- **Consistent evaluation** across thousands of applications
- **Same-day turnaround** for top candidate identification

Decision Quality Improvements
- **Objective scoring** reduces unconscious bias
- **Consistent criteria** applied to all applicants
- **Essay authenticity checks** catch AI-written applications
- **Holistic view** considers all dimensions equally
- **Data-driven insights** inform committee discussions
- **Fast-track top talent** before competitors

Equity & Fairness
- **Standardized evaluation** ensures fair treatment
- **First-generation flagging** provides context
- **Socioeconomic consideration** in holistic scoring
- **Diverse perspectives valued** in diversity score
- **Bias detection** in essay analysis
- **Audit trail** for compliance and review

Candidate Experience
- **Instant acknowledgment** of application receipt
- **Professional communication** at every stage
- **Clear timelines** and expectations
- **Interview invitations** for competitive candidates
- **Respectful process** for all applicants regardless of outcome

**What You'll Need**

Required Integrations
- **Jotform** - Application intake forms (create your form for free on JotForm using this link)
- **OpenAI API** - GPT-4o for analysis (~$0.15-0.25 per application)
- **Gmail/Outlook** - Applicant and staff communication (free)
- **Google Sheets** - Application database and analytics (free)

Optional Integrations
- **Slack** - Real-time alerts for strong candidates ($0-8/user/month)
- **Google Calendar** - Interview scheduling automation (free)
- **Airtable** - Advanced application tracking (alternative to Sheets)
- **Applicant Portal Integration** - Status updates via API
- **CRM Systems** - Slate, TargetX, Salesforce for higher ed

**Setup Guide (3-4 Hours)**

Step 1: Create Application Form (60 min)
Build a comprehensive Jotform with sections:
- Basic Information: full name, email, phone; high school, graduation year; intended major
- Academic Credentials: GPA (weighted/unweighted, scale), SAT score (optional), ACT score (optional), class rank (if available), academic honors
- Essays (Most Important!): personal statement (650 words max), "Why Our College" essay (250-300 words), supplemental prompts (program-specific)
- Activities & Achievements: extracurricular activities (list with hours/week, years), leadership positions (with descriptions), honors and awards, community service hours, work experience
- Additional Information: first-generation college student (yes/no), financial aid needed (yes/no), optional demographic information, optional additional context

Step 2: Import n8n Workflow (15 min)
- Copy the JSON from the artifact.
- In n8n: Workflows → Import → Paste.
- Includes all nodes plus 7 detailed sticky notes.

Step 3: Configure OpenAI API (20 min)
- Get an API key: https://platform.openai.com/api-keys
- Add it to both AI nodes (Essay Analysis + Holistic Review).
- Model: gpt-4o (best for nuanced analysis).
- Temperature: 0.3 (consistency with creativity).
- Test with a sample application.
- Cost: $0.15-0.25 per application (essay analysis + holistic review).

Step 4: Customize Institutional Context (45 min)
Edit the AI prompts to reflect YOUR college:
- In the Holistic Review prompt, update: college name and type, acceptance rate, average admitted student profile (GPA, test scores), institutional values and culture, academic programs and strengths, what makes your college unique, desired student qualities.
- In the Essay Analysis prompt, add: specific programs to look for mentions of, faculty names applicants should reference, campus culture keywords, red flags specific to your institution.

Step 5: Set Up Email Communications (30 min)
- Connect Gmail/Outlook OAuth.
- Update all recipient addresses: admissions-director@college.edu, admissions-committee@college.edu, and email addresses for strong candidate alerts.
- Customize email templates: add college name, logo, and branding; update contact information; adjust tone to match institutional voice; include decision release dates; add applicant portal links.

Step 6: Configure Slack Alerts (15 min, Optional)
- Create channel: #admissions-strong-candidates
- Add the webhook URL or bot token.
- Test with a mock strong candidate.
- Customize the alert format and recipients.

Step 7: Create Admissions Database (30 min)
Google Sheet with columns:
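For a concrete picture of how the Holistic Review weights and Smart Routing thresholds described above fit together, here is a minimal Python sketch. The dimension names and the exact aggregation are assumptions for illustration only; in the workflow the scoring is produced by the AI Holistic Review node rather than a fixed formula.

```python
from dataclasses import dataclass

# Dimension weights from the "Holistic Review Scoring (0-100 Scale)" section.
WEIGHTS = {
    "academic": 0.35,
    "extracurricular": 0.25,
    "personal": 0.20,
    "fit": 0.15,
    "diversity": 0.05,
}

@dataclass
class DimensionScores:
    academic: float        # each dimension assumed scored 0-100 by the AI reviewer
    extracurricular: float
    personal: float
    fit: float
    diversity: float

def holistic_score(s: DimensionScores) -> float:
    """Weighted 0-100 composite across the five dimensions."""
    return (
        s.academic * WEIGHTS["academic"]
        + s.extracurricular * WEIGHTS["extracurricular"]
        + s.personal * WEIGHTS["personal"]
        + s.fit * WEIGHTS["fit"]
        + s.diversity * WEIGHTS["diversity"]
    )

def route(score: float) -> str:
    """Routing buckets from the "Smart Routing" section."""
    if score >= 85:
        return "Strong Admit"       # Slack alert, director email, interview invite
    if score >= 65:
        return "Committee Review"   # analysis packet, committee discussion
    return "Standard Review"        # acknowledgment, human verification

print(route(holistic_score(DimensionScores(92, 88, 85, 80, 70))))  # "Strong Admit"
```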
by Joseph
**Reddit Lead Generator - Frontend Integrated (Productized Version)**

**Overview**
A production-ready Reddit lead generation system with progressive data loading for optimal UX. This workflow integrates with a web frontend, sending results in real time as they're processed instead of waiting for everything to complete.

**Key Features**
- Progressive loading - users see results as they come in (website analysis → keywords → conversations)
- 5 response stages - data sent in batches for better UX
- Webhook authentication - secured with API key headers
- Frontend ready - built to work with the companion web app
- Real-time updates - no waiting for 2-minute batch processing

**What This Workflow Does**
1. Receives the product URL from the frontend via webhook.
2. Immediately responds with "processing started".
3. Analyzes the website with Firecrawl → sends the data to the frontend.
4. Generates 10 keywords with OpenAI → sends them to the frontend.
5. Searches Reddit and filters conversations → sends them in 3 batches.
6. The frontend displays results progressively as they arrive.

**Response Flow**
- Stage 1: website_analysis - product details, favicon, summary
- Stage 2: keywords_generated - all 10 keywords
- Stage 3: conversations_partial1 - first keyword results
- Stage 4: conversations_partial2 - second keyword results
- Stage 5: conversations_final - remaining keywords (3-10) in markdown

**Quick Setup**
1. Set environment variables in the "Set Environment Variables" node:
   - BACKEND_API_URL - your frontend API endpoint
   - WEBHOOK_API_KEY - your webhook security key
2. Configure credentials: Firecrawl API, Reddit OAuth2, OpenAI API.
3. Deploy the frontend and backend:
   - Clone the repo: https://github.com/juppfy/leads-gen
   - Deploy the frontend on Vercel and the backend on Railway.
   - Full instructions are in the repo README.
4. Activate the workflow and update your frontend with the webhook URL.

**Requirements**
- n8n (cloud or self-hosted)
- Firecrawl API key
- Reddit Developer Account
- OpenAI API key
- Frontend + backend deployed (see GitHub repo)

**Resources**
- Complete setup guide: https://bit.ly/mediumarticleredditworkflow
- Video tutorial: https://bit.ly/youtubetutorialredditworkflow
- GitHub repo: https://github.com/juppfy/leads-gen

**Frontend Integration**
This workflow requires the companion web app to receive and display results. The frontend handles user input and URL validation, real-time result display, conversation cards with pagination, and request tracking by searchId. Deploy instructions and complete code are available in the GitHub repo above.

**Difference from Batch Version**
- This version (frontend): progressive data loading, real-time updates, production-ready UX, requires frontend deployment.
- Batch version: single output at the end, no frontend needed, perfect for testing/scheduled runs, simpler setup.

**Support**
Questions? Check the Medium article or YouTube tutorial first. Both have detailed setup instructions and troubleshooting tips.
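As a rough illustration of the progressive response flow, here is a hedged Python sketch of how one stage payload could be pushed to the backend endpoint. The header name (x-api-key), payload shape, and searchId field are assumptions; match them against the "Set Environment Variables" node and the backend code in the repo before relying on them.

```python
import requests

# Assumptions: these mirror the BACKEND_API_URL / WEBHOOK_API_KEY environment
# variables configured in the workflow. The header name is illustrative only.
BACKEND_API_URL = "https://your-backend.example.com/api/results"
WEBHOOK_API_KEY = "your-secret-key"

def send_stage(search_id: str, stage: str, data: dict) -> None:
    """Push one progressive-loading stage to the frontend's backend."""
    payload = {"searchId": search_id, "stage": stage, "data": data}
    resp = requests.post(
        BACKEND_API_URL,
        json=payload,
        headers={"x-api-key": WEBHOOK_API_KEY},
        timeout=30,
    )
    resp.raise_for_status()

# Stage 2 example: send the generated keywords as soon as they are ready.
send_stage("abc123", "keywords_generated", {"keywords": ["hair growth serum", "curly hair routine"]})
```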
by Rahul Joshi
**Description**
Automate your client proposal creation with this intelligent workflow that transforms Google Sheets entries into professional Google Docs proposals using OpenAI GPT-4o. Designed for agencies and sales teams, it delivers personalized, branded, and structured proposals in minutes, with no manual editing required.

**What This Template Does**
- Triggers when a new row is added in a connected Google Sheet.
- Filters only the latest row to ensure one proposal per new entry.
- Uses GPT-4o to generate structured proposal content (Executive Summary, Scope, Costing, Timeline, Conclusion).
- Parses the output into a validated JSON format for accurate field mapping.
- Populates a Google Docs template with the AI-generated content using placeholders.
- Downloads the completed proposal as a PDF file.
- Archives the finalized document into a designated Google Drive folder.
- Resets the template for the next proposal cycle automatically.

**Key Benefits**
- Eliminates repetitive manual proposal writing.
- Ensures brand consistency with structured templates.
- Generates high-quality proposals using AI in real time.
- Automates document formatting, saving hours per client.
- Scales easily for agencies handling multiple clients daily.

**Features**
- Google Sheets trigger for new entries.
- GPT-4o-based content generation with customizable prompts.
- JSON output validation and structured parsing.
- Google Docs population using placeholder replacement.
- Drive storage automation for version tracking.
- End-to-end automation from data to proposal delivery.

**Requirements**
- Google Sheets document with columns: clientName, jobDescription.
- Google Docs template with placeholders (e.g., {{executive_summary}}, {{scope_of_work}}).
- OpenAI API key (GPT-4o).
- Google Drive credentials for output management.

**Target Audience**
- Marketing and web agencies automating client proposal generation.
- Sales teams preparing project estimates and deliverables.
- Freelancers and consultants managing multiple client requests.
- Businesses streamlining documentation workflows.

**Step-by-Step Setup Instructions**
1. Connect Google Sheets and replace the Sheet ID placeholder.
2. Set up your Google Docs proposal template and replace the Document ID.
3. Add your OpenAI API key for GPT-4o content generation.
4. Specify your Google Drive folder for saving proposals.
5. Test the workflow with a sample entry to confirm formatting.
6. Activate the workflow for continuous proposal generation.
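To make the JSON parsing and placeholder steps concrete, here is a minimal Python sketch. The field names and sample text are assumptions chosen to match the {{placeholder}} examples above; in the workflow itself, the replacement happens inside the Google Docs node, not in code.

```python
import json

# Example of the structured JSON the GPT-4o step is expected to return.
# Keys are assumptions matching the {{placeholder}} names in the Docs template.
llm_output = """
{
  "executive_summary": "We propose a full redesign of the client's marketing site...",
  "scope_of_work": "Discovery, UX wireframes, design system, development, QA.",
  "costing": "Fixed fee of $12,000 across three milestones.",
  "timeline": "Six weeks from kickoff to launch.",
  "conclusion": "We look forward to partnering with you on this project."
}
"""

def fill_template(template_text: str, fields: dict) -> str:
    """Replace {{placeholder}} markers with the corresponding JSON values."""
    for key, value in fields.items():
        template_text = template_text.replace("{{" + key + "}}", value)
    return template_text

fields = json.loads(llm_output)  # validates that the model returned well-formed JSON
doc = "Executive Summary\n{{executive_summary}}\n\nScope of Work\n{{scope_of_work}}"
print(fill_template(doc, fields))
```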
by Fayzul Noor
This workflow is built for digital marketers, sales professionals, influencer agencies, and entrepreneurs who want to automate Instagram lead generation. If you're tired of manually searching for profiles, copying email addresses, and updating spreadsheets, this automation will save you hours every week. It turns your process into a smart system that finds, extracts, and stores leads while you focus on growing your business.

**How it works / What it does**
This n8n automation completely transforms how you collect Instagram leads using AI and API integrations. Here's a simple breakdown of how it works:
1. Set your targeting parameters using the Edit Fields node. You can specify your platform (Instagram), field of interest such as "beauty & hair," and target country such as "USA."
2. Generate intelligent search queries with an AI Agent powered by GPT-4o-mini. It automatically creates optimized Google search queries to find relevant Instagram profiles in your chosen niche and location.
3. Extract results from Google using Apify's Google Search Scraper, which collects hundreds of Instagram profile URLs that match your search criteria.
4. Fetch detailed Instagram profile data using Apify's Instagram Scraper. This includes usernames, follower counts, and profile bios, where contact information usually appears.
5. Use AI to extract emails from the profile biographies with the Information Extractor node powered by GPT-3.5-turbo. It identifies emails even when they are hidden or creatively formatted.
6. Store verified leads in a PostgreSQL database. The workflow automatically adds new leads or updates existing ones with fields like username, follower count, email, and niche (see the upsert sketch after this section).
Once everything is set up, the system runs on autopilot and keeps building your database of quality leads around the clock.

**How to set up**
Follow these steps to get your Instagram Lead Generation Machine running:
1. Import the JSON file into your n8n instance.
2. Add your API credentials:
   - Apify token for the Google and Instagram scrapers
   - OpenAI API key for the AI-powered nodes
   - PostgreSQL credentials for storing leads
3. Open the Edit Fields node and set your platform, field of interest, and target country.
4. Run the workflow manually using the Manual Trigger node to test it.
5. Once confirmed, replace the manual trigger with a schedule or webhook to run it automatically.
6. Check your PostgreSQL database to ensure the leads are being saved correctly.

**Requirements**
Before running the workflow, make sure you have the following:
- An n8n account or instance (self-hosted or n8n Cloud)
- An Apify account for accessing the Google and Instagram scrapers
- OpenAI API access for generating smart search queries and extracting emails
- A PostgreSQL database to store your leads
- A basic understanding of how n8n workflows and nodes operate

**How to customize the workflow**
This workflow is flexible and can be customized to fit your business goals. Here's how you can tailor it:
- Change your niche or location by updating the Edit Fields node. You can switch from "beauty influencers in the USA" to "fitness coaches in Canada" in seconds.
- Add more data fields to collect additional information such as engagement rates, bio keywords, or profile categories. Just modify the PostgreSQL node and database schema.
- Connect to your CRM or email system to automatically send introduction emails or add new leads to your marketing pipeline.
- Use different triggers, such as a scheduled cron trigger for daily runs or a webhook trigger to start the workflow through an API call.
- Filter higher-quality leads by adding logic to capture only profiles with a minimum number of followers or verified emails.
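Below is a minimal sketch of the lead-storage step, assuming a leads table keyed on username with the columns mentioned above. The table and column names, the sample lead, and the use of psycopg2 are illustrative assumptions; the workflow itself uses n8n's PostgreSQL node for the same insert-or-update behavior.

```python
import psycopg2

# Assumed schema: leads(username TEXT PRIMARY KEY, followers INT, email TEXT, niche TEXT)
conn = psycopg2.connect("dbname=leads_db user=n8n password=secret host=localhost")

UPSERT_SQL = """
INSERT INTO leads (username, followers, email, niche)
VALUES (%s, %s, %s, %s)
ON CONFLICT (username)
DO UPDATE SET followers = EXCLUDED.followers,
              email     = EXCLUDED.email,
              niche     = EXCLUDED.niche;
"""

lead = ("glowbyjess", 48200, "hello@glowbyjess.com", "beauty & hair")  # hypothetical lead

with conn, conn.cursor() as cur:
    cur.execute(UPSERT_SQL, lead)  # insert a new lead, or refresh an existing one

conn.close()
```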
by Daniel Shashko
**How it Works**
This workflow transforms natural language queries into research reports through a five-stage AI pipeline. When triggered via webhook (typically from Google Sheets using the companion google-apps-script.js GitHub gist), it first checks a Redis cache for instant results. For new queries, GPT-4o breaks complex questions into focused sub-queries, optimizes them for search, then uses Bright Data's MCP Tool to find the top 5 credible sources (official sites, news, financial reports). URLs are scraped in parallel, bypassing bot detection. GPT-4o extracts structured data from each source: answers, facts, entities, sentiment, quotes, and dates. GPT-4o-mini validates source credibility and filters unreliable content. Valid results aggregate into a final summary with confidence scores, key insights, and extended analysis. Results are cached for 1 hour and delivered via webhook, Slack, email, and DataTable, all in 30-90 seconds with 60 requests/minute rate limiting.

**Who is this for?**
- Research teams needing automated multi-source intelligence
- Content creators and journalists requiring fact-checked information
- Due diligence professionals conducting competitive intelligence
- Google Sheets power users wanting AI research in spreadsheets
- Teams managing large research volumes needing caching and rate limiting

**Setup Steps**
Setup time: 30-45 minutes

Requirements:
- Bright Data account (Web Scraping API + MCP token)
- OpenAI API key (GPT-4o and GPT-4o-mini access)
- Redis instance
- Slack workspace (optional)
- SMTP email provider (optional)
- Google account (optional, for Sheets integration)

Core Setup:
1. Get a Bright Data Web Scraping API token and MCP token.
2. Get an OpenAI API key.
3. Set up a Redis instance.
4. Configure the critical nodes:
   - Webhook Entry: add the Header Auth token
   - Bright Data MCP Tool: add the MCP endpoint with token
   - Parallel Web Scraping: add Bright Data API credentials
   - Redis nodes: add connection credentials
   - All GPT nodes: add the OpenAI API key (5 nodes)
   - Slack/Email: add credentials if using

Google Sheets Integration:
1. Create a Google Sheet.
2. Open Extensions → Apps Script.
3. Paste the companion google-apps-script.js code.
4. Update the webhook URL and auth token.
5. Save and authorize.
6. Test with: {"prompt": "What is the population of Tokyo?", "source": "Test", "language": "English"}

**Customization Guidance**
- Source count: change from 5 to 3-10 URLs per query
- Cache duration: adjust from 1 hour up to 24 hours for stable info
- Rate limits: modify 60/minute based on usage needs
- Character limits: adjust the 400-char main answer to 200-1000
- AI models: swap GPT-4o for Claude, or use GPT-4o-mini for all stages
- Geographic targeting: add more regions beyond us/il
- Output channels: add Notion, Airtable, Discord, Teams
- Temperature: lower (0.1-0.2) for facts, higher (0.4-0.6) for analysis

Once configured, this workflow handles all web research, from fact-checking to complex analysis, delivering validated intelligence in seconds with automatic caching.

Built by Daniel Shashko. Connect on LinkedIn.
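The 1-hour caching step can be pictured with the following Python sketch using redis-py. The cache-key scheme (an MD5 hash of the prompt) and the stored JSON shape are assumptions for illustration; the workflow implements caching with n8n Redis nodes rather than application code.

```python
import hashlib
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)
CACHE_TTL_SECONDS = 3600  # matches the workflow's 1-hour cache window

def cache_key(prompt: str) -> str:
    return "research:" + hashlib.md5(prompt.encode()).hexdigest()

def get_or_compute(prompt: str, run_pipeline) -> dict:
    """Return a cached report if present; otherwise run the pipeline and cache the result."""
    key = cache_key(prompt)
    cached = r.get(key)
    if cached:
        return json.loads(cached)
    report = run_pipeline(prompt)                      # the five-stage AI pipeline
    r.set(key, json.dumps(report), ex=CACHE_TTL_SECONDS)
    return report

result = get_or_compute(
    "What is the population of Tokyo?",
    lambda p: {"answer": "About 14 million (city proper).", "confidence": 0.9},  # stand-in pipeline
)
print(result)
```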
by Oneclick AI Squad
This workflow automatically collects, processes, and publishes customer testimonials and reviews after project completion or purchase. It uses AI to generate polished testimonials from raw feedback and distributes them across your marketing channels.

**How it works**
1. Trigger - Detects project completion, purchase, or a manual trigger
2. Customer Lookup - Fetches customer details from the CRM/database
3. Smart Timing Check - Determines optimal follow-up timing based on the purchase/completion date
4. Send Feedback Request - Sends a personalized email/SMS requesting a review
5. Track Form Submissions - Monitors feedback form responses
6. Sentiment Analysis - AI analyzes feedback sentiment and quality
7. Generate Polished Testimonial - Claude AI converts raw feedback into professional testimonials
8. Quality Filter - Filters testimonials based on rating and sentiment
9. Format for Channels - Prepares testimonials for different platforms
10. Publish to Website - Auto-posts to WordPress/Webflow/custom CMS
11. Post to Social Proof Tools - Sends to Trustpilot, Google Reviews, testimonial widgets
12. Update CRM - Logs testimonial status in customer records
13. Send Thank You - Automated appreciation message to the customer
14. Analytics Dashboard - Tracks collection rates and testimonial performance

**Setup Steps**
1. Import the workflow into n8n.
2. Configure credentials:
   - Anthropic API - Claude AI for testimonial generation
   - CRM Integration - HubSpot/Salesforce/Airtable for customer data
   - Email Service - SendGrid/Mailgun for follow-up emails
   - SMS Provider - Twilio for SMS reminders
   - Website CMS - WordPress/Webflow API for publishing
   - Social Proof Tools - Trustpilot, Google My Business, Yotpo
   - Form Builder - Typeform/Google Forms for feedback collection
3. Set your timing rules (e.g., 7 days after purchase).
4. Configure the AI testimonial generation prompts.
5. Set quality thresholds (minimum 4-star rating).
6. Map testimonial fields to your website/tools.
7. Activate the workflow.

**Sample Trigger Payload**
{ "customerId": "CUST-12345", "customerEmail": "john@example.com", "customerName": "John Smith", "projectType": "Website Development", "completionDate": "2024-04-01", "projectValue": 5000, "followUpDelay": 7, "preferredChannel": "email" }

**AI Testimonial Generation Features**
- Converts casual feedback into professional testimonials
- Maintains authenticity while improving clarity
- Generates multiple versions for different platforms
- Extracts key benefits and outcomes mentioned
- Creates compelling headlines from feedback
- Suggests best quotes for marketing materials

**Publishing Channels**
- Website testimonial sections (auto-inject HTML)
- Google My Business reviews
- Trustpilot/G2/Capterra
- Social media testimonial posts
- Email signature testimonials
- Case study content
- Landing page social proof widgets

**Smart Features**
- Multi-touch follow-up sequences (email → SMS → reminder)
- Optimal timing based on customer journey stage
- Personalized feedback requests with project context
- Automatic language detection and translation
- Video testimonial collection via integrated forms
- Screenshot/photo attachment handling
- Auto-detection of negative feedback for internal review
- A/B testing of follow-up messaging
- Response rate tracking and optimization
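The Smart Timing Check and Quality Filter steps can be sketched roughly as below, using fields from the sample trigger payload. The 4-star threshold and follow-up delay come from the description above, while the function shapes and the sentiment label are assumptions for illustration; the workflow implements these checks in its own nodes.

```python
from datetime import date, timedelta

def follow_up_due(completion_date: str, follow_up_delay_days: int, today: date | None = None) -> bool:
    """Smart Timing Check: only send the feedback request once the delay has elapsed."""
    today = today or date.today()
    completed = date.fromisoformat(completion_date)
    return today >= completed + timedelta(days=follow_up_delay_days)

def passes_quality_filter(rating: int, sentiment: str) -> bool:
    """Quality Filter: publish only positive testimonials rated 4 stars or higher."""
    return rating >= 4 and sentiment.lower() == "positive"

payload = {"completionDate": "2024-04-01", "followUpDelay": 7}
print(follow_up_due(payload["completionDate"], payload["followUpDelay"], today=date(2024, 4, 9)))  # True
print(passes_quality_filter(rating=5, sentiment="positive"))  # True
```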
by Rahul Joshi
**Description**
Ensure your GitHub repositories stay configuration-accurate and documentation-compliant with this intelligent AI-powered validation workflow. This automation monitors repository updates, compares configuration files against documentation references, detects inconsistencies, and alerts your team instantly, streamlining DevOps and compliance reviews.

**What This Template Does**
- Step 1: Triggers automatically on GitHub push or pull_request events.
- Step 2: Fetches both configuration files (config/app-config.json and faq-config.json) from the repository.
- Step 3: Uses GPT-4o-mini to compare the configurations and detect mismatches, missing keys, or deprecated fields.
- Step 4: Categorizes issues by severity (critical, high, medium, or low) and generates actionable recommendations.
- Step 5: Logs all discrepancies to Google Sheets for tracking and audit purposes.
- Step 6: Sends Slack alerts summarizing key issues and linking to the full report.

**Key Benefits**
- Prevents production incidents due to config drift
- Ensures documentation stays in sync with code changes
- Reduces manual review effort with AI-driven validation
- Improves team response with Slack-based alerts
- Maintains audit logs for compliance and traceability

**Features**
- Real-time GitHub webhook integration
- AI-powered config comparison using GPT-4o-mini
- Severity-based issue classification
- Automated Google Sheets logging
- Slack alerts with detailed issue context
- Error handling for malformed JSON or parsing issues

**Requirements**
- GitHub OAuth2 credentials with repo and webhook permissions
- OpenAI API key (GPT-4o-mini or a compatible model)
- Google Sheets OAuth2 credentials
- Slack API token with chat:write permissions

**Target Audience**
- DevOps teams ensuring consistent configuration across environments
- Engineering leads maintaining documentation accuracy
- QA and Compliance teams tracking configuration changes and risks

**Setup Instructions**
1. Create GitHub OAuth2 credentials and enable webhook access.
2. Connect your OpenAI API key under credentials.
3. Add your Google Sheets and Slack integrations.
4. Update the file paths (config/app-config.json and faq-config.json) if your repo uses different names.
5. Activate the workflow; it will start validating on every push or PR.
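To illustrate the kind of comparison Step 3 asks the model to perform, here is a minimal deterministic Python sketch that flags missing keys and value mismatches between the two JSON config files. The file names match those in the description, while the severity labels are simplified assumptions; in the workflow this judgment is delegated to GPT-4o-mini, which can also spot deprecated fields.

```python
import json

def compare_configs(app_cfg: dict, faq_cfg: dict) -> list[dict]:
    """Report keys missing from either file and values that disagree."""
    issues = []
    for key in sorted(app_cfg.keys() | faq_cfg.keys()):
        if key not in faq_cfg:
            issues.append({"key": key, "issue": "missing in faq-config.json", "severity": "high"})
        elif key not in app_cfg:
            issues.append({"key": key, "issue": "missing in app-config.json", "severity": "high"})
        elif app_cfg[key] != faq_cfg[key]:
            issues.append({"key": key, "issue": f"mismatch: {app_cfg[key]!r} vs {faq_cfg[key]!r}", "severity": "medium"})
    return issues

with open("config/app-config.json") as a, open("faq-config.json") as b:
    report = compare_configs(json.load(a), json.load(b))

for item in report:
    print(f"[{item['severity'].upper()}] {item['key']}: {item['issue']}")
```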
by Feras Dabour
**Auto-generate job descriptions from briefing notes with OpenAI and Google Docs**

**Who is this for**
Recruiters, HR teams, and hiring managers who conduct role briefing conversations and want to convert their meeting notes into polished, structured job descriptions automatically -- without manual copywriting.

**What this workflow does**
This workflow watches a Google Drive folder for new briefing documents, extracts structured job data using AI, generates a professional HTML job description, sends it to Microsoft Teams for approval, and exports the final version as a PDF to Google Drive.

**How it works**
1. Trigger -- A Google Drive Trigger detects when a new Google Doc (e.g. a briefing transcript) is created in a watched folder.
2. File organization -- A timestamped subfolder is created and the document is moved into it for a clean project structure.
3. Document reading -- The Google Doc content is fetched via the Google Docs API.
4. AI data extraction -- An OpenAI AI Agent analyzes the transcript (supports German input) and extracts structured job data: title, department, responsibilities, skills, benefits, tech stack, and more -- output as JSON.
5. Data logging -- The extracted fields are appended to a Google Sheets tracker for reference and audit.
6. Prompt assembly -- A Code node builds a detailed prompt from the structured data, choosing between "create" mode (first draft) or "revise" mode (feedback loop). A rough sketch of this logic follows below.
7. JD generation -- A second AI Agent (JD-Writer) generates a complete, styled HTML job description following a professional template with sections like responsibilities, profile, benefits, and diversity statement.
8. Human review -- The draft is sent to a Microsoft Teams chat with an approve/reject form and an optional feedback field.
9. Approval path -- If approved, the HTML is converted to PDF and uploaded to the Google Drive subfolder alongside the original briefing.
10. Revision loop -- If rejected, the feedback is routed back to the JD-Writer for targeted revisions, and the updated draft is re-sent for approval.

**Setup steps**
1. Google Drive & Docs -- Create OAuth2 credentials. Set the folder ID in the Google Drive Trigger node to the folder where briefing documents are saved.
2. Google Sheets -- Create a spreadsheet with columns for all job data fields (job_title, department, responsibilities, hard_skills, soft_skills, etc.). Update the Sheet ID in the Google Sheets node.
3. OpenAI -- Add your API key as an OpenAI credential. It is used for both the data extraction agent (reads the transcript) and the JD-Writer agent (generates the job description).
4. Microsoft Teams -- Create OAuth2 credentials. Set the Teams chat ID in the approval node to the chat or channel where drafts should be reviewed.
5. HTML-to-PDF -- Install the community node n8n-nodes-htmlcsstopdf (self-hosted only). Add the API credential.

**Requirements**
- Community node: n8n-nodes-htmlcsstopdf -- **self-hosted n8n only**
- OpenAI API key (GPT-4 or newer recommended)
- Google Drive, Docs & Sheets OAuth2 credentials
- Microsoft Teams OAuth2 credentials

**How to customize**
- **AI extraction prompt** -- Edit the system message in the "Extract job data from transcript" node to adjust which fields are extracted or to support different transcript languages.
- **JD template style** -- Modify the prompt in the "Build JD-Writer prompt" Code node to change the tone, section order, or formatting style of the generated job description.
- **Approval channel** -- Change the Teams chat ID to route drafts to a different team or channel.
- **Output format** -- Swap the HTML-to-PDF node for a different converter, or skip PDF and use the raw HTML output directly.
- **Tracker columns** -- Add or remove columns in Google Sheets to match your internal job data schema.
- **Revision depth** -- The approval loop supports unlimited revision cycles. The JD-Writer applies feedback minimally without rewriting from scratch.
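Below is a hedged Python rendering of the create/revise decision described in step 6 of "How it works". The actual workflow implements this in an n8n Code node (JavaScript), and the field names and prompt wording here are assumptions for illustration only.

```python
def build_jd_prompt(job_data: dict, feedback: str | None = None) -> str:
    """Assemble the JD-Writer prompt in "create" or "revise" mode."""
    base = (
        f"Job title: {job_data['job_title']}\n"
        f"Department: {job_data['department']}\n"
        f"Responsibilities: {', '.join(job_data['responsibilities'])}\n"
        f"Hard skills: {', '.join(job_data['hard_skills'])}\n"
    )
    if feedback:
        # Revise mode: apply reviewer feedback minimally, keep the rest of the draft intact.
        return base + f"\nRevise the existing job description. Apply only this feedback: {feedback}"
    # Create mode: produce the first full HTML draft from the structured data.
    return base + "\nWrite a complete, styled HTML job description following the standard template."

job = {
    "job_title": "Backend Engineer",
    "department": "Engineering",
    "responsibilities": ["Design APIs", "Maintain services"],
    "hard_skills": ["Python", "PostgreSQL"],
}
print(build_jd_prompt(job))                                    # create mode
print(build_jd_prompt(job, "Shorten the benefits section."))   # revise mode
```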