by WhySoSerious
What it is

This workflow listens for new tickets in HaloPSA via webhook, generates a professional AI-powered summary of the issue using Gemini (or another LLM), and posts it back into the ticket as a private note. It's designed for MSPs using HaloPSA who want to reduce triage time and give engineers a clear head start on each support case.

✨ Features

• 🔔 Webhook trigger from HaloPSA on new ticket creation
• 🚧 Optional team filter (skip Sales or other queues)
• 📦 Extracts ticket subject, details, and ID
• 🧠 Builds a structured AI prompt with MSP context (NinjaOne, M365, CIPP)
• 🤖 Processes via Gemini or other LLM
• 📑 Cleans & parses JSON output (summary, next step, troubleshooting) — see the sketch below
• 🧱 Generates a branded HTML private note (logo + styled sections)
• 🌐 Posts the note back into HaloPSA via API

🔧 Setup

• Webhook: Replace WEBHOOK_PATH and paste the generated Production URL into your HaloPSA webhook.
• Guard filter (optional): Change teamName or teamId to skip tickets from specific queues.
• Branding: Replace YOUR_LOGO_URL and Your MSP Brand in the HTML note builder.
• HaloPSA API: In the HTTP node, replace YOUR_HALO_DOMAIN and add your Halo API token (Bearer auth).
• LLM credentials: Set your API key in the Gemini / OpenAI node credentials section.
• (Optional) Adjust the AI prompt with your own tools or processes.

✅ Requirements

• HaloPSA account with API enabled
• Gemini / OpenAI (or other LLM) API key
• SMTP (optional) if you want to extend with notifications

⚡ Workflow overview

`🔔 Webhook → 🚧 Guard → 📦 Extract Ticket → 🧠 Build AI Prompt → 🤖 AI Agent (Gemini) → 📑 Parse JSON → 🧱 Build HTML Note → 🌐 Post to HaloPSA`
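The "Parse JSON" step is where fragile LLM output meets strict structure. Here is a minimal sketch of what that Code node might contain, assuming the model sometimes wraps its reply in a markdown fence and returns summary, next_step, and troubleshooting keys (field names are illustrative, not taken from the template):

```javascript
// Sketch of the "Parse JSON" step (n8n Code node).
// Assumes the LLM reply may arrive wrapped in a ```json fence and
// contains summary / next_step / troubleshooting keys; adjust the
// field names to match your own prompt.
const raw = $input.first().json.output ?? '';

// Strip a leading/trailing markdown code fence if the model added one.
const cleaned = raw
  .replace(/^```(?:json)?\s*/i, '')
  .replace(/\s*```$/, '')
  .trim();

let parsed;
try {
  parsed = JSON.parse(cleaned);
} catch (err) {
  // Fail loudly so a malformed reply never becomes an empty note.
  throw new Error(`LLM did not return valid JSON: ${err.message}`);
}

return [{
  json: {
    summary: parsed.summary ?? '',
    nextStep: parsed.next_step ?? '',
    troubleshooting: parsed.troubleshooting ?? [],
  },
}];
```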
by Alex Huy
How it works

This workflow automatically curates and sends a daily AI/Tech news digest by aggregating articles from premium tech publications and using AI to select the most relevant and trending stories.

🔄 Automated News Pipeline

1. RSS Feed Collection - Fetches articles from 14 premium tech news sources (TechCrunch, MIT Tech Review, The Verge, Wired, etc.)
2. Smart Article Filtering - Limits articles per source to ensure diverse coverage and prevent single-source domination
3. Data Standardization - Cleans and structures article data (title, summary, link, date) for AI processing
4. AI-Powered Curation - Uses Google Vertex AI to analyze articles and select the top 10 most relevant/trending stories
5. Newsletter Generation - Creates a professional HTML newsletter with summaries and direct links
6. Email Delivery - Automatically sends the formatted digest via Gmail

🎯 Key Features

- **Premium Sources** - Curates from 14 top-tier tech publications
- **AI Quality Control** - Intelligent article selection and summarization
- **Balanced Coverage** - Prevents source bias with smart filtering
- **Professional Format** - Clean HTML newsletter design
- **Scheduled Automation** - Daily delivery at customizable times
- **Error Resilience** - Continues processing even if some feeds fail

Setup Steps

1. 🔑 Required API Access
- **Google Cloud Project** with Vertex AI API enabled
- **Google Service Account** with AI Platform Developer role
- **Gmail API** enabled for email sending

2. ☁️ Google Cloud Setup
- Create or select a Google Cloud Project
- Enable the Vertex AI API
- Create a service account with these permissions: AI Platform Developer, Service Account User
- Download the service account JSON key
- Enable the Gmail API for the same project

3. 🔐 n8n Credentials Configuration
Add these credentials to your n8n instance:
- Google Service Account (for Vertex AI): upload your service account JSON key and name it descriptively (e.g., "Vertex AI Service Account")
- Gmail OAuth2: use your Google account credentials and authorize Gmail API access (required scope: gmail.send)

4. ⚙️ Workflow Configuration
- Import the workflow into your n8n instance
- Update node configurations:
  - Google Vertex AI Model: set your Google Cloud Project ID
  - Send Newsletter Email: update the recipient email address
  - Daily Newsletter Trigger: adjust the schedule time if needed
- Verify credentials are properly connected to their respective nodes

5. 📰 RSS Sources Customization (Optional)
The workflow includes 14 premium tech news sources:
- TechCrunch (AI & Startups)
- The Verge (AI section)
- MIT Technology Review
- Wired (AI/Science)
- VentureBeat (AI)
- ZDNet (AI topics)
- AI Trends
- Nature (Machine Learning)
- Towards Data Science
- NY Times Technology
- The Guardian Technology
- BBC Technology
- Nikkei Asia Technology

To customize sources: edit the "Configure RSS Sources" node, add or remove RSS feed URLs as needed, and ensure the feeds are active and properly formatted.

6. 🚀 Testing & Deployment
- Manual Test: execute the workflow manually to verify setup
- Check Email: confirm the newsletter arrives with proper formatting
- Verify AI Output: ensure articles are relevant and well summarized
- Schedule Activation: enable the daily trigger for automated operation

💡 Customization Options

Newsletter Timing:
- Default: 8:00 AM UTC daily
- Modify "triggerAtHour" in the Schedule Trigger node
- Add multiple daily sends if desired

Content Focus:
- Adjust the AI prompt in the "AI Tech News Curator" node
- Specify different topics (e.g., focus on startups, enterprise AI, etc.)
- Change output language or format

Email Recipients:
- Update the single recipient in the Gmail node
- Or modify to send to multiple addresses
- Integrate with mailing list services

Article Limits:
- Current: max 5 articles per source
- Modify the filtering logic in the "Filter & Balance Articles" node (see the sketch below)
- Adjust the total article count in the AI prompt

🔧 Troubleshooting

Common Issues:
- **RSS Feed Failures**: Individual feed failures won't stop the workflow
- **AI Rate Limits**: Vertex AI has generous limits, but monitor usage
- **Gmail Sending**: Ensure the sender email is authorized in Gmail settings
- **Missing Articles**: Some RSS feeds may be inactive - check source URLs

Performance Tips:
- Monitor execution times during peak RSS activity
- Consider adding delays if hitting rate limits
- Archive old newsletters for reference

This workflow transforms daily news consumption from manual browsing into curated, AI-powered intelligence delivered automatically to your inbox.
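For reference, the per-source cap described under Article Limits can be expressed in a few lines of n8n Code-node JavaScript. This is a sketch of the idea rather than the template's exact node; the source field name is an assumption:

```javascript
// Sketch of the "Filter & Balance Articles" idea: cap each RSS source
// at 5 articles so no single feed dominates the digest.
// Field name `source` is illustrative; match it to your RSS items.
const MAX_PER_SOURCE = 5;
const counts = {};
const balanced = [];

for (const item of $input.all()) {
  const source = item.json.source ?? 'unknown';
  counts[source] = (counts[source] ?? 0) + 1;
  if (counts[source] <= MAX_PER_SOURCE) {
    balanced.push(item);
  }
}

return balanced;
```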
by Ranjan Dailata
Disclaimer

Please note: this workflow is only available on n8n self-hosted, as it makes use of the community node for Decodo web scraping.

This workflow automates intelligent keyword and topic extraction from Google Search results, combining Decodo's advanced scraping engine with OpenAI GPT-4.1-mini's semantic analysis capabilities. The result is a fully automated keyword enrichment pipeline that gathers, analyzes, and stores SEO-relevant insights.

Who this is for

This workflow is ideal for:
- **SEO professionals** who want to extract high-value keywords from competitors.
- **Digital marketers** aiming to automate topic discovery and keyword clustering.
- **Content strategists** building data-driven content calendars.
- **AI automation engineers** designing scalable web intelligence and enrichment pipelines.
- **Growth teams** performing market and search intent research with minimal effort.

What problem this workflow solves

Manual keyword research is time-consuming and often incomplete. Traditional keyword tools only provide surface-level data and fail to uncover contextual topics or semantic relationships hidden in search results. This workflow solves that by:
- Automatically scraping live Google Search results for any keyword.
- Extracting meaningful topics, related terms, and entities using AI.
- Enriching your keyword list with semantic intelligence to improve SEO and content planning.
- Storing structured results directly in n8n Data Tables for trend tracking or export.

What this workflow does

Here's a breakdown of the flow:
1. Set the Input Fields – Define your search query and target geo (e.g., "Pizza" in "India").
2. Decodo Google Search – Fetches organic search results using Decodo's web scraping API.
3. Return Organic Results – Extracts the list of organic results and passes them downstream.
4. Loop Over Each Result – Iterates through every search result description.
5. Extract Keywords and Topics – Uses OpenAI GPT-4.1-mini to identify relevant keywords, entities, and thematic topics from each snippet.
6. Data Enrichment Logic – Checks whether each result already exists in the n8n Data Table (based on URL); a sketch of this gate follows the Summary below.
7. Insert or Skip – If a record doesn't exist, inserts the extracted data into the table.
8. Store Results – Saves both enriched search data and Decodo's original response to disk.

End Result: a structured and deduplicated dataset containing URLs, keywords, and key topics — ready for SEO tracking or further analytics.

Setup

Pre-requisite: please make sure to install the n8n custom node for Decodo.

Import and Configure the Workflow
1. Open n8n and import the JSON template.
2. Add your credentials:
   - Decodo API Key under the Decodo Credentials account.
   - OpenAI API Key under the OpenAI account.
3. Define input parameters in the Set node:
   - search_query: your keyword or topic (e.g., "AI tools for marketing")
   - geo: the target region (e.g., "United States")
4. Configure output. The workflow writes two outputs:
   - Enriched keyword data → stored in the n8n Data Table (DecodoGoogleSearchResults).
   - Raw Decodo response → saved locally in JSON format.
5. Execute: click Execute Workflow, or schedule it for recurring keyword enrichment (e.g., weekly trend tracking).

How to customize this workflow

- **Change AI Model** — Replace gpt-4.1-mini with gemini-1.5-pro or claude-3-opus for testing different reasoning strengths.
- **Expand the Schema** — Add extra fields like keyword difficulty, page type, or author info.
- **Add Sentiment Analysis** — Chain a second AI node to assess tone (positive, neutral, or promotional).
- **Export to Sheets or DB** — Replace the Data Table node with Google Sheets, Notion, Airtable, or MySQL connectors.
- **Multi-Language Research** — Pass a locale parameter in the Decodo node to gather insights in specific languages.
- **Automate Alerts** — Add a Slack or Email node to notify your team when high-value topics appear.

Summary

Search & Enrich is a low-code, AI-powered keyword intelligence engine that automates research and enrichment for SEO, content, and digital marketing. By combining Decodo's real-time SERP scraping with OpenAI's contextual understanding, the workflow transforms raw search results into structured, actionable keyword insights. It eliminates repetitive research work, enhances content strategy, and keeps your keyword database continuously enriched — all within n8n.
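The dedup gate in step 6 can be expressed as a small Code node. This sketch assumes a preceding lookup node (here called "Get Existing Rows") has fetched the Data Table rows, and that each item carries a url field; both names are illustrative:

```javascript
// Illustrative dedup gate: pass through only results whose URL is not
// already in the Data Table. `Get Existing Rows` and the `url` field
// are assumed names; rename to match your own nodes.
const existingUrls = new Set(
  $('Get Existing Rows').all().map((row) => row.json.url)
);

const fresh = $input.all().filter((item) => !existingUrls.has(item.json.url));

// The downstream "Insert row" node only receives genuinely new results.
return fresh;
```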
by Automate With Marc
Step-By-Step AI Stock Market Research Agent (Beginner)

Build your own AI-powered daily stock market digest — automatically researched, summarized, and delivered straight to your inbox. This beginner-friendly n8n workflow shows how to combine OpenAI GPT-5, the Decodo scraping tool, and Gmail to produce a concise daily financial update without writing a single line of code.

🎥 Watch a full tutorial and walkthrough on how to build and customize similar workflows at: https://www.youtube.com/watch?v=DdnxVhUaQd4

What this template does

Every day, this agent automatically:
1. Triggers on schedule (e.g., 9 a.m. daily).
2. Uses the Decodo Tool to fetch real market headlines from Bloomberg, CNBC, Reuters, Yahoo Finance, etc.
3. Passes the information to GPT-5, which summarizes key events into a clean daily report covering:
   - Major indices (S&P 500, Nasdaq, Dow)
   - Global markets (Europe & Asia)
   - Sector trends and earnings
   - Congressional trading activity
   - Major financial and regulatory news
4. Emails the digest to you in a neat, ready-to-read HTML format.

Why it's useful (for beginners)

- Zero coding: everything is configured through n8n nodes.
- Hands-on AI agent logic: learn how a language-model node, memory, and a web-scraping tool work together.
- Practical use case: a real-world agent that automates market intelligence for investors, creators, or business analysts.

Requirements

- OpenAI API Key (GPT-4/5 compatible)
- Decodo API Key (for market data scraping)
- Gmail OAuth2 credential (to send the daily digest)

Credentials to set in n8n

- OpenAI API (Chat Model) → connect your OpenAI key.
- Decodo API → paste your Decodo access key.
- Gmail OAuth2 → connect your Google account and edit the "send to" email address.

How it works (nodes overview)

- Schedule Trigger: starts the workflow at a preset time (default: daily).
- AI Research Agent: acts as a stock market research assistant. Uses GPT-5 via the OpenAI Chat Model, calls the Decodo Tool to fetch real-time data from trusted finance sites, and applies custom system rules for concise summaries and email-ready HTML output.
- Simple Memory: maintains short-term context for clean message passing between nodes.
- Decodo Tool: handles all data scraping and extraction via the AI's tool calls.
- Gmail Node: emails the final daily digest to the user (default subject: "Daily AI News Update").

Setup (step-by-step)

1. Import the template into n8n.
2. Open each credential node → connect your accounts.
3. In the Gmail node, replace "sendTo" with your email.
4. Adjust the Schedule Trigger → e.g., every weekday 8:30 a.m.
5. (Optional) Edit the system prompt in the AI Research Agent to focus on different sectors (crypto, energy, tech).
6. Click Execute Workflow once to test — you'll receive an AI-curated digest in your inbox.

Customization tips

- 🕒 Change frequency: adjust the Schedule Trigger to run multiple times daily or weekly.
- 📰 Add sources: extend the Decodo Tool input with new URLs (e.g., Seeking Alpha, MarketWatch).
- 📈 Switch topic: modify the prompt to track crypto, commodities, or macroeconomic data.
- 💬 Alternative delivery: send the digest via Slack, Telegram, or Notion instead of Gmail.

Troubleshooting

- 401 errors: verify OpenAI/Decodo credentials.
- Empty output: ensure the Decodo Tool returns valid data; inspect the agent's log.
- Email not sent: confirm the Gmail OAuth2 scope and recipient email.
- Formatting issues: keep output in HTML mode; avoid Markdown (see the guard sketch below).
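On that last troubleshooting point, a small pre-send guard can catch an agent that slips into Markdown despite the system rules. This heuristic is an add-on sketch, not part of the template, and the `output` field name is an assumption:

```javascript
// Pre-send guard (n8n Code node): if the agent slipped into Markdown,
// convert the most common patterns to HTML before the Gmail node.
function markdownGuard(text) {
  return text
    .replace(/^### (.*)$/gm, '<h3>$1</h3>')   // headings
    .replace(/^## (.*)$/gm, '<h2>$1</h2>')
    .replace(/\*\*(.+?)\*\*/g, '<b>$1</b>')   // bold
    .replace(/\n{2,}/g, '<br><br>');          // paragraph breaks
}

const html = markdownGuard($input.first().json.output ?? '');
return [{ json: { html } }];
```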
by Matt Chong
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Gmail Auto-Reply with AI

Automatically draft smart email replies using ChatGPT. Reclaim the time you spend typing the same responses again and again.

Who is this for?

If you're overwhelmed with emails and constantly repeating yourself in replies, this workflow is for you. Whether you're a freelancer, business owner, or team lead, it saves you time by handling email triage and drafting replies for you.

What does it solve?

This workflow reads your unread Gmail messages and uses AI to:
- Decide whether the email needs a response
- Automatically draft a short, polite reply when appropriate
- Skip spam, newsletters, or irrelevant emails
- Save the AI-generated reply as a Gmail draft (you can edit it before sending)

It takes email fatigue off your plate and keeps your inbox moving.

How it works

1. Trigger on New Email: watches your Gmail inbox for unread messages.
2. AI Agent Review: analyzes the content to decide if a reply is needed (see the sketch below).
3. OpenAI ChatGPT: drafts a short, polite reply (under 120 words).
4. Create Gmail Draft: saves the response as a draft for you to review.
5. Label It: applies a custom label like Action so you can easily find AI-handled emails.

How to set up?

1. Connect credentials: Gmail (OAuth2) and OpenAI (API key).
2. Create the Gmail label: in your Gmail, create a label named Action (case-sensitive).

How to customize this workflow to your needs

- **Change the AI prompt**: Add company tone, extra context, or different reply rules.
- **Label more intelligently**: Add conditions or labels for "Newsletter," "Meeting," etc.
- **Adjust frequency**: Change how often the Gmail Trigger polls your inbox.
- **Add manual review**: Route drafts to a team member before sending.
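Here is a minimal sketch of step 2's decision gate, assuming the agent is prompted to answer with JSON like {"needs_reply": true, "reason": "..."}. That schema and the "Gmail Trigger" node name are illustrative assumptions, not the template's actual wiring:

```javascript
// Gate between the review agent and the drafting step (n8n Code node).
const verdict = JSON.parse($input.first().json.output);

if (!verdict.needs_reply) {
  // Newsletters, spam, notifications: stop here, no draft is created.
  return [];
}

// Pass the original email through so the drafting node has full context.
return [{
  json: { ...$('Gmail Trigger').first().json, reason: verdict.reason },
}];
```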
by WeblineIndia
Send Automated Recruitment Rejection Emails with Google Sheets and Gmail at End-of-Day

Automatically reads a "Candidate Status" tab in Google Sheets every day at 18:00 Asia/Kolkata, filters rows with exact (case-sensitive) rejection statuses, and sends one personalized rejection email per candidate via SMTP (Gmail). It rate-limits sends, supports DRY_RUN previews, and writes a timestamp back to rejection_sent_at to avoid duplicates.

Who's it for

- Recruiters needing consistent, respectful closure at day end.
- Teams tracking hiring outcomes in Google Sheets.
- Coordinators who prefer a scheduled, hands-off workflow with safeguards.

How it works

1. Cron (18:00 IST) triggers daily.
2. Google Sheets Read → loads the Candidate Status tab.
3. Filter → keeps rows where status is in REJECT_STATUS_CSV (exact match), with a valid candidate_email and an empty rejection_sent_at (see the sketch at the end of this section).
4. DRY_RUN? If true → output preview only; if false → proceed.
5. Rate limit → waits RATE_LIMIT_SECONDS (default 10s) between emails.
6. SMTP (Gmail) → sends a personalized email per row using templates.
7. Mark as sent → writes the current timestamp to rejection_sent_at.

How to set up

- **Sheet & Columns**: Create a "Candidate Status" tab with: candidate_name, candidate_email, role, status, recruiter_name, recruiter_email, company_name, interview_feedback (optional), template_variant (optional), language (optional), rejection_sent_at
- **Credentials**: Connect Google Sheets (OAuth) and SMTP (Gmail) in n8n (use an App Password if 2FA is enabled)
- **Config (Set node)**:
  - SPREADSHEET_ID
  - SOURCE_SHEET = Candidate Status
  - TIMEZONE = Asia/Kolkata
  - REJECT_STATUS_CSV = e.g., Rejected
  - SMTP_FROM = e.g., careers@company.com
  - SUBJECT_TEMPLATE = Regarding your application for {{role}} at {{company_name}}
  - HTML_TEMPLATE / TEXT_TEMPLATE
  - RATE_LIMIT_SECONDS = 10
  - INCLUDE_WEEKENDS = true
  - DRY_RUN = false
- **Activate**: Enable the workflow

Requirements

- Google Sheet with the "Candidate Status" tab and columns above.
- SMTP (Gmail) account for sending.
- n8n (cloud or self-hosted) with Google Sheets + SMTP credentials.

How to customize

- **Statuses**: REJECT_STATUS_CSV supports comma-separated exact values (e.g., Rejected,Not Selected)
- **Templates**: Edit SUBJECT_TEMPLATE, HTML_TEMPLATE, TEXT_TEMPLATE
- **Variables**: {{candidate_name}}, {{role}}, {{company_name}}, {{recruiter_name}}, and optional {{feedback_text}}/{{feedback_html}} from interview_feedback
- **Schedule**: Change the Cron time from 18:00 to your preferred hour
- **Rate limit**: Tune RATE_LIMIT_SECONDS for your SMTP policy
- **Preview**: Set DRY_RUN=true for a safe, no-send preview

Add-ons

- **Dynamic Reply-To** per recruiter_email
- **Localization/Variants** via language or template_variant columns
- **Daily summary** email: sent/skip/error counts
- **Validation & logging**: log invalid emails to another tab
- **Gmail API**: swap SMTP for Gmail nodes if preferred

Use Case Examples

- **Daily round-up**: 18:00 IST closure emails for all candidates marked Rejected today
- **Multi-brand hiring**: Switch company_name per row and personalize subject lines
- **Compliance/logging**: DRY_RUN each afternoon, review, then flip to live sends

Common troubleshooting

- **No emails sent**: Ensure status exactly matches REJECT_STATUS_CSV (case-sensitive) and candidate_email is present
- **Duplicates**: Verify rejection_sent_at is blank before the run; the workflow sets it after sending
- **Blank variables**: Fill candidate_name, role, company_name, recruiter_name in the sheet
- **SMTP errors**: Check credentials, sender permissions, and daily limits
- **Timing**: Confirm the workflow timezone Asia/Kolkata and Cron = 18:00

Need Help?
Want us to tailor the template, add a summary report or wire up company-based variants? Contact our n8n automation engineers at WeblineIndia and we’ll plug it in.
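For illustration, the filter in step 3 of "How it works" boils down to a few lines of Code-node JavaScript. The column names match the sheet spec above; the "Config" Set-node name is an assumption:

```javascript
// Keep only rows whose status exactly matches one of the configured
// rejection statuses, with a plausible email and no prior send.
const REJECT_STATUSES = ($('Config').first().json.REJECT_STATUS_CSV ?? 'Rejected')
  .split(',')
  .map((s) => s.trim());

const emailOk = (e) =>
  typeof e === 'string' && /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(e);

return $input.all().filter((row) => {
  const r = row.json;
  return (
    REJECT_STATUSES.includes(r.status) &&   // case-sensitive exact match
    emailOk(r.candidate_email) &&
    !r.rejection_sent_at                    // not already sent
  );
});
```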
by PDF Vector
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Transform Research Papers into a Searchable Knowledge Graph

This workflow automatically builds and maintains a comprehensive knowledge graph from academic papers, enabling researchers to discover connections between concepts, track research evolution, and perform semantic searches across their field of study. By combining PDF Vector's paper parsing capabilities with GPT-4's entity extraction and Neo4j's graph database, this template creates a powerful research discovery tool.

Target Audience & Problem Solved

This template is designed for:
- **Research institutions** building internal knowledge repositories
- **Academic departments** tracking research trends and collaborations
- **R&D teams** mapping technology landscapes
- **Libraries and archives** creating searchable research collections

It solves the problem of information silos in academic research by automatically extracting and connecting key concepts, methods, authors, and findings across thousands of papers.

Prerequisites

- n8n instance with the PDF Vector node installed
- OpenAI API key for GPT-4 access
- Neo4j database instance (local or cloud)
- Basic understanding of graph databases
- At least 100 API credits for PDF Vector (processes ~50 papers)

Step-by-Step Setup Instructions

1. Configure PDF Vector Credentials
   - Navigate to Credentials in n8n
   - Add new PDF Vector credentials with your API key
   - Test the connection to ensure it's working

2. Set Up Neo4j Database
   - Install Neo4j locally or create a cloud instance at Neo4j Aura
   - Note your connection URI, username, and password
   - Create database constraints for better performance:

```cypher
CREATE CONSTRAINT paper_id IF NOT EXISTS ON (p:Paper) ASSERT p.id IS UNIQUE;
CREATE CONSTRAINT author_name IF NOT EXISTS ON (a:Author) ASSERT a.name IS UNIQUE;
CREATE CONSTRAINT concept_name IF NOT EXISTS ON (c:Concept) ASSERT c.name IS UNIQUE;
```

3. Configure OpenAI Integration
   - Add OpenAI credentials in n8n
   - Ensure you have GPT-4 access (GPT-3.5 can be used with reduced accuracy)
   - Set appropriate rate limits to avoid API throttling

4. Import and Configure the Workflow
   - Import the template JSON into n8n
   - Update the search query in the "PDF Vector - Fetch Papers" node to your research domain
   - Adjust the schedule trigger frequency based on your needs
   - Configure the PostgreSQL connection for logging (optional)

5. Test with Sample Papers
   - Manually trigger the workflow
   - Monitor the execution for any errors
   - Check the Neo4j browser to verify nodes and relationships are created
   - Adjust entity extraction prompts if needed for your domain

Implementation Details

The workflow operates in several stages:
1. Paper Discovery: uses PDF Vector's academic search to find relevant papers
2. Content Parsing: leverages LLM-enhanced parsing for accurate text extraction
3. Entity Extraction: GPT-4 identifies concepts, methods, datasets, and relationships
4. Graph Construction: creates nodes and relationships in Neo4j (see the sketch at the end of this entry)
5. Statistics Tracking: logs processing metrics for monitoring

Customization Guide

Adjusting Entity Types: edit the GPT-4 prompt in the "Extract Entities" node to include domain-specific entities:

```
// Add custom entity types like:
// - Algorithms
// - Datasets
// - Institutions
// - Funding sources
```

Modifying Relationship Types: extend the "Build Graph Structure" node to create custom relationships:

```
// Examples:
// COLLABORATES_WITH (between authors)
// EXTENDS (between papers)
// FUNDED_BY (paper to funding source)
```

Changing Search Scope:
- Modify the providers array to include/exclude databases
- Adjust the year range for historical or recent focus
- Add keyword filters for specific subfields

Scaling Considerations:
- For large-scale processing (>1000 papers/day), implement batching
- Use Redis for deduplication across runs
- Consider implementing incremental updates to avoid reprocessing

Knowledge Base Features:
- Automatic concept extraction with GPT-4
- Research timeline tracking
- Author collaboration networks
- Topic evolution visualization
- Semantic search interface via Neo4j

Components:
- Paper Ingestion: continuous monitoring and parsing
- Entity Extraction: identify key concepts, methods, datasets
- Relationship Mapping: connect papers, authors, concepts
- Knowledge Graph: store in graph database
- Search Interface: query by concept, author, or topic
- Visualization: interactive knowledge exploration
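As referenced under Implementation Details, the graph-construction stage amounts to emitting parameterized Cypher per paper. A hedged sketch follows; the AUTHORED/MENTIONS relationship names and the extracted-entity fields are illustrative, not the template's exact choices:

```javascript
// Turn one extracted paper into parameterized Cypher MERGE statements.
// Each {query, params} pair can feed a Neo4j node or the HTTP API.
function buildGraphQueries(paper) {
  const queries = [
    {
      query: 'MERGE (p:Paper {id: $id}) SET p.title = $title',
      params: { id: paper.id, title: paper.title },
    },
  ];
  for (const author of paper.authors ?? []) {
    queries.push({
      query:
        'MATCH (p:Paper {id: $id}) MERGE (a:Author {name: $name}) ' +
        'MERGE (a)-[:AUTHORED]->(p)',
      params: { id: paper.id, name: author },
    });
  }
  for (const concept of paper.concepts ?? []) {
    queries.push({
      query:
        'MATCH (p:Paper {id: $id}) MERGE (c:Concept {name: $name}) ' +
        'MERGE (p)-[:MENTIONS]->(c)',
      params: { id: paper.id, name: concept },
    });
  }
  return queries;
}
```

MERGE (rather than CREATE) keeps the graph idempotent across re-runs, which is what makes the deduplication constraints above effective.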
by tsushima ryuto
Invoice Automation Kit: AI-Powered Invoice Processing and Weekly Reports

This n8n workflow is designed to automate invoice processing and streamline financial management. It leverages AI to extract key invoice data, validate it, and store it in Airtable. Additionally, it generates and emails weekly spending reports.

Who is it for?

This template is for small businesses, freelancers, or individuals looking to save time on manual invoice processing. It's ideal for anyone who wants to improve the accuracy of their financial data and maintain a clear overview of their spending.

How it Works / What it Does

This workflow consists of two main parts:

Invoice Data Extraction and Storage:
1. Invoice Upload Form: upload your invoices (PDF, PNG, JPG) via an n8n form.
2. AI-Powered Data Extraction: AI extracts key information such as vendor name, invoice date, total amount, currency, and line items (description, quantity, unit price, total) from the uploaded invoice.
3. Data Validation: the extracted data is validated to ensure it is complete and accurate (see the sketch below).
4. Store in Airtable: validated invoice data is saved in a structured format to your specified Airtable base and table.

Weekly Spending Report Generation and Email:
1. Weekly Report Schedule: automatically triggers every Sunday at 6 PM.
2. Fetch Weekly Invoices: retrieves all invoices stored in Airtable within the last 7 days.
3. AI-Powered Spending Report Generation: based on the retrieved invoice data, AI generates a comprehensive spending report, including total spending for the week, breakdown by vendor, top 5 expenses, spending trends, and any notable observations.
4. Send Weekly Report Email: the generated report is sent in a professional format to the configured recipient email address.

How to Set Up

1. Update the Workflow Configuration node:
   - Replace airtableBaseId with your Airtable Base ID.
   - Replace airtableTableId with your Airtable Table ID.
   - Replace reportRecipientEmail with the email address that should receive the weekly reports.
2. Airtable credentials: set up your Airtable Personal Access Token credentials in the Airtable nodes.
3. OpenAI credentials: set up your OpenAI API key credentials in the OpenAI Chat Model nodes.
4. Email credentials: configure your email sending service (e.g., SMTP) credentials in the "Send Weekly Report Email" node and update the fromEmail.
5. Airtable table setup: ensure your Airtable has a table with appropriate columns to store invoice data, such as "Vendor", "Invoice Date", "Total Amount", "Currency", and "Line Items".

Requirements

- An n8n instance
- An OpenAI account and API key
- An Airtable account and Personal Access Token
- An email sending service (e.g., SMTP server)

How to Customize the Workflow

- **Adjust Information Extraction**: Edit the prompt in the "Extract Invoice Data" node to include additional information you wish to extract.
- **Customize Report**: Adjust the prompt in the "Generate Spending Report" node to change specific analyses or formatting included in the report.
- **Add Notifications**: Incorporate notification nodes for other services like Slack or Microsoft Teams to be alerted when an invoice is uploaded or a report is ready.
- **Modify Validation Rules**: Edit the conditions in the "Validate Invoice Data" node to implement additional validation rules.
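Here is a minimal sketch of what the "Validate Invoice Data" step might check, using snake_case field names that mirror the extraction list above (the actual node's names may differ):

```javascript
// Require the core fields and check that line items add up to the
// stated total (with a 0.01 tolerance for rounding).
function validateInvoice(inv) {
  const errors = [];
  for (const field of ['vendor', 'invoice_date', 'total_amount', 'currency']) {
    if (inv[field] == null || inv[field] === '') errors.push(`missing ${field}`);
  }
  const lineSum = (inv.line_items ?? []).reduce(
    (sum, li) => sum + (li.total ?? 0),
    0
  );
  if (inv.line_items?.length && Math.abs(lineSum - inv.total_amount) > 0.01) {
    errors.push(`line items sum to ${lineSum}, expected ${inv.total_amount}`);
  }
  return { valid: errors.length === 0, errors };
}
```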
by Emir Belkahia
Newsletter Quality Assurance with LLM Judge

This sub-workflow validates newsletter quality before sending to customers. It's triggered by the main newsletter workflow and acts as an automated quality gate to catch data issues, broken layouts, or missing content.

Who's it for

E-commerce teams who want to automate newsletter quality checks and prevent broken or incomplete emails from reaching customers. Perfect for ensuring consistent brand quality without manual review.

How it works

1. Receives newsletter HTML - Triggered by the parent workflow with the generated newsletter content
2. Sends to test inbox - Delivers the newsletter to the LLM Judge's Gmail inbox to validate actual rendering
3. Retrieves rendered email - Fetches the email back from Gmail to analyze how it actually renders (catches Gmail-specific issues)
4. AI-powered validation - GPT-5 analyzes the newsletter against quality criteria:
   - Verifies all 6 product cards have images, prices, and descriptions
   - Checks layout integrity and date range formatting
   - Detects broken images or unprocessed template variables
   - Validates sale prices are lower than original prices
5. Decision gate - Routes based on the Judge's verdict (see the sketch below):
   - PASS: Returns approval to the parent workflow → sends to customers
   - BLOCK: Alerts the admin via email → requires human review

Set up steps

Setup time: ~5 minutes
1. Connect your Gmail account for sending test emails
2. Update the Judge's email address in the "Send newsletter to LLM Judge" node
3. Update the admin alert email in the error handling nodes
4. Connect your OpenAI API credentials (GPT-5 recommended for heavy HTML processing)
5. (Optional) Adjust quality thresholds in the Judge's system prompt

Requirements

- Gmail account for test sends and retrieving rendered emails
- OpenAI API key (GPT-5 recommended)
- Parent workflow that passes the newsletter HTML content

How to customize

- **Adjust validation strictness**: Modify the Judge's system prompt to change what triggers BLOCK vs PASS
- **Change product count**: Update the prompt if your newsletters have a different number of products
- **Add custom checks**: Extend the system prompt with brand-specific validation rules
- **Modify alert recipients**: Update email addresses in the error handling nodes

💡 Pro tip: The workflow validates the actual Gmail-rendered version to catch image loading issues and ensure a consistent customer experience.
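The decision gate in step 5 might look like this in a Code node, assuming the Judge is prompted to reply with JSON such as {"verdict": "PASS", "issues": [...]}. That schema is an assumption, not the template's exact contract:

```javascript
// Route on the Judge's verdict; an IF/Switch node then branches on `approved`.
const judge = JSON.parse($input.first().json.output);

if (judge.verdict === 'PASS') {
  return [{ json: { approved: true } }];   // parent workflow proceeds to send
}

// BLOCK: surface the Judge's findings so the admin alert can list them.
return [{ json: { approved: false, issues: judge.issues ?? [] } }];
```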
by Ranjan Dailata
Who this is for

This workflow is designed for:
- Recruiters, talent intelligence teams, and HR tech builders automating resume ingestion.
- Developers and data engineers building ATS (Applicant Tracking System) or CRM data pipelines.
- AI and automation enthusiasts looking to extract structured JSON data from unstructured resume sources (PDFs, DOCs, HTML, or LinkedIn-like URLs).

What problem this workflow solves

Resumes often arrive in different formats (PDF, DOCX, web profile, etc.) that are difficult to process automatically. Manually extracting fields like candidate name, contact info, skills, and experience wastes time and is prone to human error. This workflow:
- Converts any unstructured resume into a structured JSON Resume format.
- Ensures the output aligns with the JSON Resume Schema.
- Saves the structured result to Google Sheets and local disk for easy tracking and integration with other tools.

What this workflow does

The workflow automates the entire resume parsing pipeline:
1. Trigger – starts manually with an Execute Workflow button.
2. Input Setup – a Set node defines the resume_url (e.g., a hosted resume link).
3. Resume Content Extraction – sends the URL to the Thordata Universal API, which retrieves the web content, cleans HTML/CSS, and extracts structured text and metadata.
4. Convert HTML → Markdown – converts the HTML content into Markdown to prepare for AI model parsing.
5. JSON Resume Builder (AI Extraction) – sends the Markdown to OpenAI GPT-4.1-mini, which extracts:
   - basics: name, email, phone, location
   - work: companies, roles, achievements
   - education: institutions, degrees, dates
   - skills, projects, certifications, languages, and more
   The output adheres to the JSON Resume Schema (see the example below).
6. Output Handling – saves the final structured resume locally to disk and appends it to a Google Sheet for analytics or visualization.

Setup

Prerequisites:
- n8n instance (self-hosted or cloud)
- Credentials for:
  - Thordata Universal API (HTTP Bearer Token); first-time users can sign up
  - OpenAI API Key
  - Google Sheets OAuth2 integration

Steps:
1. Import the provided workflow JSON into n8n.
2. Configure your Thordata Universal API token under Credentials → HTTP Bearer Auth.
3. Connect your OpenAI account under Credentials → OpenAI API.
4. Link your Google Sheets account (used in the "Append or update row in sheet" node).
5. Replace the resume_url in the Set node with your own resume file or hosted link.
6. Execute the workflow.

How to customize this workflow

Input Sources — replace the Manual Trigger with:
- A Webhook Trigger to accept resumes uploaded from your website.
- A Google Drive / Dropbox Trigger to process uploaded files automatically.

Output Destinations — send results to:
- Notion, Airtable, or Supabase via API nodes.
- Slack / Email for recruiter notifications.

Language Model Options — upgrade from gpt-4.1-mini to gpt-4.1 or a custom fine-tuned model for improved accuracy.

Summary

Unstructured Resume Parser with Thordata Universal API + OpenAI GPT-4.1-mini automates the process of converting messy, unstructured resumes into clean, structured JSON data. It leverages Thordata's Universal API for document ingestion and preprocessing, then uses OpenAI GPT-4.1-mini to extract key fields such as name, contact details, skills, experience, education, and achievements with high accuracy.
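For orientation, the JSON Resume shape the extraction targets looks roughly like this (a trimmed illustration with dummy values; see jsonresume.org/schema for the full specification):

```javascript
// Trimmed example of the JSON Resume structure the AI node is asked to emit.
const resume = {
  basics: {
    name: 'Jane Doe',
    email: 'jane@example.com',
    phone: '+1 555 0100',
    location: { city: 'Berlin', countryCode: 'DE' },
  },
  work: [
    {
      name: 'Acme Corp',
      position: 'Data Engineer',
      startDate: '2021-03',
      endDate: '2024-06',
      highlights: ['Built an ELT pipeline processing 2M rows/day'],
    },
  ],
  education: [
    { institution: 'TU Berlin', studyType: 'BSc', area: 'Computer Science' },
  ],
  skills: [{ name: 'Python', level: 'Advanced' }],
};
```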
by Emir Belkahia
Automated Weekly Newsletter for E-commerce Promotions (based on Algolia)

This workflow automatically sends a beautifully designed HTML newsletter every Sunday at 8 AM, featuring products currently on sale from your Algolia-powered e-commerce store.

Who's it for

Perfect for e-commerce store owners, marketing teams, and anyone running promotional campaigns who wants to automate their weekly newsletter without relying on expensive email marketing platforms.

How it works

1. Triggers every Sunday at 8:00 AM - Scheduled to start each new promotion week
2. Fetches discounted products - Queries your Algolia index for 6 products marked with on_sale:true
3. Calculates promotion dates - Automatically generates the week's date range, Sunday to Saturday (see the sketch below)
4. Builds HTML newsletter - Populates a responsive email template with product images, prices, and descriptions
5. Retrieves subscribers - Pulls the latest subscriber list from your Google Sheets
6. Sends personalized emails - Delivers the newsletter to all subscribers via Gmail

Set up steps

Setup time: ~15 minutes
1. Connect your Algolia credentials (Search API key + Application ID)
2. Update the Algolia index name to match your store (currently set to dogtreats_prod_products)
3. Create a Google Sheet with subscriber emails (column named "Email")
4. Connect your Google Sheets and Gmail accounts
5. (Optional) Customize the HTML template colors and branding to match your store

Requirements

- Algolia account with a product index containing on_sale, price_eur, original_price_eur, image, name, and description fields
- Google Sheet with the subscriber list
- Gmail account for sending emails

How to customize

- **Change promotion criteria**: Modify the filter in the "Request products from Algolia" node (e.g., category:shoes instead of on_sale:true)
- **Adjust product count**: Change the hitsPerPage value (currently 6)
- **Modify schedule**: Update the trigger node to run on different days/times
- **Personalize email design**: Edit the HTML template node to match your brand colors and style
- **Add unsubscribe logic**: Extend the workflow to handle unsubscribe requests

💡 Pro tip: Use the manual execution button to test the workflow mid-week - it's "smart" enough to calculate the current promotion week even when not running on Sunday.
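The date calculation in step 3 is what makes the mid-week pro tip work: whatever day the workflow runs, it snaps back to the enclosing Sunday-to-Saturday window. A sketch of the idea (the function name and output format are illustrative):

```javascript
// Given any execution day, find the Sunday-to-Saturday promotion week
// it belongs to. Uses UTC dates for simplicity.
function promotionWeek(now = new Date()) {
  const start = new Date(now);
  start.setDate(now.getDate() - now.getDay()); // getDay(): Sunday = 0
  const end = new Date(start);
  end.setDate(start.getDate() + 6);            // through Saturday
  const fmt = (d) => d.toISOString().slice(0, 10);
  return { from: fmt(start), to: fmt(end) };
}

// e.g. run on Wednesday 2024-05-15 → { from: '2024-05-12', to: '2024-05-18' }
```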
by Summer
Website Leads to Voice Demo and Scheduling

Creator: Summer Chang

AI Booking Agent Setup Guide

Overview

This automation turns your website into an active booking agent. When someone fills out your form, it automatically:
1. Adds their information to Notion
2. Researches their business from their website with AI
3. Calls them immediately with a personalized pitch
4. Updates Notion with call results

Total setup time: 30-45 minutes

What You Need

Before starting, create accounts and gather these:
- n8n account (cloud or self-hosted)
- Notion account - the free plan works; duplicate my Notion template
- OpenRouter API key - get from openrouter.ai
- Vapi account - get from vapi.ai
  - Create an AI assistant
  - Set up a phone number
  - Copy your API key, Assistant ID, and Phone Number ID

How It Works: The Complete Flow

1. Visitor fills out the form on your website
2. The form submission creates a new record in Notion with Status = "New"
3. The Notion Trigger detects the new record (checks every minute)
4. The Main Workflow executes:
   - Fetches the lead's website
   - AI analyzes their business
   - Updates Notion with the analysis
   - Makes a Vapi call with a personalized intro (see the sketch below)
5. The call happens between your AI agent and the lead
6. When the call ends, Vapi sends a webhook to n8n
7. The Webhook Workflow executes:
   - Fetches call details from Vapi
   - AI generates a call summary
   - Updates Notion with results and the recording
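For orientation, the "Makes a Vapi call" step comes down to one HTTP request. The endpoint and payload below follow Vapi's outbound-call API as commonly documented, but treat the exact shape as an assumption and verify against current Vapi docs:

```javascript
// Hedged sketch: outbound call request to Vapi after a new lead appears.
const res = await fetch('https://api.vapi.ai/call', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    assistantId: process.env.VAPI_ASSISTANT_ID,
    phoneNumberId: process.env.VAPI_PHONE_NUMBER_ID,
    customer: { number: '+15550100' },   // lead's phone number from Notion
    assistantOverrides: {
      // Inject the AI's website research so the pitch is personalized.
      variableValues: { businessSummary: 'AI-generated analysis of the lead' },
    },
  }),
});
console.log(await res.json());           // response includes the call id
```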