by The Higher Pitch
This workflow automatically pulls articles from an RSS feed, translates the content and title from English to Hindi using OpenAI, extracts the featured image from the HTML content, and publishes the translated post as a draft on a connected WordPress site.

🔧 Key Features:
- Polls the RSS feed every 10 minutes for new articles
- Extracts and parses the featured image from custom HTML tags (see the sketch at the end of this section)
- Translates content and title from English to Hindi using an OpenAI Assistant
- Uploads the featured image to the WordPress media library
- Associates the image with the new draft post
- Publishes the translated article as a draft for review

🎯 Use Case:
Ideal for multi-language blog automation or content localization workflows where the original content is in English and needs to be localized into Hindi before publishing.
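A minimal sketch of the image-extraction step, written for an n8n Code node. The `content:encoded` field name and the generic `<img>` regex are assumptions — the template reads custom HTML tags, so adjust both to match your feed:

```typescript
// n8n Code node sketch: pull the first <img> URL out of each RSS item's HTML.
// Assumes the RSS Feed Read node exposes the HTML in item.json['content:encoded'];
// change the field name and regex to match your feed's custom tags.
const results = [];
for (const item of $input.all()) {
  const html = item.json['content:encoded'] ?? item.json.content ?? '';
  const match = html.match(/<img[^>]+src=["']([^"']+)["']/i);
  results.push({
    json: {
      ...item.json,
      featuredImageUrl: match ? match[1] : null, // null when no image is found
    },
  });
}
return results;
```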
by n8n Team
This workflow has multiple functionalities. It starts with a manual trigger, "When clicking 'Execute Workflow'", that activates two separate paths.

The first path takes the preset string "Tell me a joke" and processes it through a large language model (LLM) chain node, which calls an OpenAI node for query processing. The second path takes another preset string, "What year was Einstein born?", and passes it to an Agent node. This agent interacts with a Chat OpenAI node and a Wikipedia node to produce the required information.

The workflow uses both built-in and custom nodes, and integrates with OpenAI on both paths. It's built for experimenting with language models, specifically in the context of conversational agents and information retrieval.

Note that to use this template, you need to be on n8n version 1.19.4 or later.
by Airtop
About the Automation

Staying on top of competitor pricing changes can be a full-time job. Manual price tracking is time-consuming and error-prone, especially when dealing with complex pricing structures and multiple subscription tiers. Paid competitor price monitoring tools like Competera, Visualping, and Fluxguard can be expensive. What if you could automate this process and get instant alerts when competitors adjust their pricing?

How to easily monitor competitor pricing

With this automation, you'll learn how to set up an automated price monitoring system using Airtop's built-in node in n8n. By the end, your system will automatically track competitor pricing changes and notify you of any modifications.

What You'll Need
- A free Airtop API key
- A Google Sheets account with a copy of this sheet
- URLs of competitors' pricing pages

Understanding the Process

This automation continuously monitors competitor pricing pages and compares them against your baseline data. The workflow:
- Tracks all pricing plans (monthly, yearly, etc.)
- Monitors feature changes across different tiers
- Detects and logs pricing structure modifications
- Alerts you via Slack when changes are detected

Setting Up Your Automation

We've created a ready-to-use blueprint for seamless price monitoring. Here's how to get started:
- Connect your Google Sheets account
- Set up your Airtop API connection
- Define the update frequency

Customization Options

Enhance the basic template with these popular modifications:
- Add other notification channels (email, Telegram, etc.)
- Include feature comparison tracking
- Set up threshold-based alerts for significant price changes
- Track historical pricing trends

Real-World Applications

Case Study 1: A B2B SaaS company can use this automation to track competitors' pricing changes. When it identifies a market-wide pricing shift, it can adjust its strategy proactively within minutes.

Case Study 2: An online e-commerce retailer automates monitoring of 100+ competitor products, maintaining optimal pricing positions and increasing profit margins.

Best Practices

To ensure accurate tracking:
- Include detailed baseline data for each pricing tier
- Specify both monthly and annual pricing clearly
- List all features included in each plan
- Update your baseline data whenever you verify changes
- Include any promotional pricing or special offers
- Document currency and regional variations if applicable

Example structure in Google Sheets (Competitor: Acme Tools):

| Plan | Monthly | Annual | Features |
|------|---------|--------|----------|
| Basic | $29 | $290 ($24.17/mo) | 5 users, 10GB storage, basic support |
| Pro | $79 | $790 ($65.83/mo) | 20 users, 50GB storage, priority support |

What's Next?

After setting up your price monitoring automation, consider:
- Creating automated competitive analysis reports
- Setting up market trend analysis
- Implementing automatic pricing recommendations
- Expanding monitoring to feature changes

Happy monitoring!
by Samir Saci
Tags: Sustainability, CSRD, Reporting, ESG, Compliance, Automation

Context

Hey! I'm Samir, a Supply Chain Engineer and Data Scientist from Paris, founder of LogiGreen Consulting. We help companies automate sustainability workflows using AI, data analytics, and no-code tools like n8n.

> Sustainability reporting meets automation with n8n!

📬 For business inquiries, you can add me here.

What is a CSRD XHTML Report?

Under the Corporate Sustainability Reporting Directive (CSRD), companies must publish their ESG disclosures in a machine-readable XHTML format, embedding XBRL tags that make the report structured and standardized. These files must follow strict formatting and tagging rules to ensure compliance, traceability, and accessibility for both regulators and analysts.

Who is this template for?

This workflow is designed for sustainability teams, ESG consultants, or developers who want to automatically check the structure and format of CSRD reports submitted in XHTML.

How does it work?

This n8n workflow automates the audit process:
- 📤 Input Node → Uploads or fetches the XHTML file via URL or Webhook.
- 🧪 Validates Structure → Uses a custom Code node to parse the XHTML and identify required tags (e.g., <ix:nonNumeric>, namespaces); see the sketch at the end of this section.
- 📋 Outputs a Report → Returns a summary of errors, warnings, and key metadata (like entity name and reporting period).
- 📤 Export Option → Saves the results to Google Sheets or sends them via email.

Prerequisites
- A sample XHTML file, which you can find in my GitHub repository
- Google Sheets API and OpenAI API credentials

Next Steps

Follow the sticky notes inside each node to adjust the parsing rules or extend validation to specific XBRL tags relevant to your sector (e.g., GHG emissions, water usage).

📺 Check my complete tutorial to understand how to use it: 🎥 Check My Tutorial

🚀 Interested in combining CSRD compliance with automation and analytics? Let's connect on LinkedIn.

Notes

This workflow includes an example XHTML file to test the validator. You can plug it into your internal systems or even extend it with AI to auto-summarize the sustainability report.

This workflow was created with n8n 1.82.1.
Submitted: April 3rd, 2025
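A minimal sketch of what the validation Code node might look like. The regex-based checks and the assumption that the file's text arrives in `$json.xhtml` are simplifications — a production validator should use a real XML parser — but the inline-XBRL namespace and `ix:` tag names are the standard ones:

```typescript
// n8n Code node sketch: regex-based structural checks on an XHTML/iXBRL report.
// Assumes the file's text is in $json.xhtml; adjust to your input node.
const xhtml = $json.xhtml ?? '';
const errors = [];
const warnings = [];

// Required inline XBRL namespace declaration (iXBRL 1.1)
if (!/xmlns:ix=["']http:\/\/www\.xbrl\.org\/2013\/inlineXBRL["']/.test(xhtml)) {
  errors.push('Missing inline XBRL namespace (xmlns:ix)');
}

// At least one tagged fact should be present
const taggedFacts = (xhtml.match(/<ix:(nonNumeric|nonFraction)\b/g) ?? []).length;
if (taggedFacts === 0) {
  errors.push('No <ix:nonNumeric> or <ix:nonFraction> facts found');
}

// Entity and reporting-period contexts are typically carried in the ix:header block
if (!/<ix:header\b/.test(xhtml)) {
  warnings.push('No <ix:header> block found; contexts may be missing');
}

return [{ json: { valid: errors.length === 0, errors, warnings, taggedFacts } }];
```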
by Omer Fayyaz
An intelligent AI-powered agent that automatically browses publication websites, analyzes page content with natural language understanding, and identifies the latest downloadable reports, research papers, and data files across multiple sources using advanced structured output parsing.

What Makes This Different
- **AI-Powered Content Analysis** - Uses advanced language models (GPT-4/GPT-5.1) to understand page context and identify downloadable reports, even when links aren't explicitly labeled, handling complex page layouts and dynamic content
- **Structured Output Parsing** - Enforces JSON schema validation, ensuring consistent data extraction with required fields (title, link, file_type, description) and eliminating parsing errors and data inconsistencies
- **HTML to Markdown Conversion** - Converts raw HTML to clean Markdown before AI processing, removing noise and improving AI comprehension of page structure and content hierarchy
- **Intelligent Link Detection** - The AI agent identifies direct download URLs, converts relative links to absolute URLs, and prioritizes the most recent reports based on publication dates and page positioning
- **Comprehensive Validation** - Multi-layer validation checks link format, file type detection, and report relevance before saving, ensuring only valid, downloadable reports enter your library
- **Flexible Source Management** - Reads publication sources from Google Sheets, enabling easy addition/removal of sources without workflow modification, with support for categories and custom metadata

Key Benefits of AI-Powered Report Discovery
- **Automated Discovery** - Eliminates manual browsing and searching across multiple publication sites, saving hours of research time while ensuring you never miss new reports
- **Context-Aware Extraction** - The AI understands page context, distinguishing between actual reports and navigation links, category pages, or promotional content
- **Prioritized Results** - Automatically selects the most recent and relevant report from each source, focusing on quality over quantity
- **Structured Data Output** - All discovered reports are saved with consistent metadata (title, link, file type, description, source), making them easy to search, filter, and integrate with other systems
- **Error Resilience** - Handles missing reports gracefully, logging when no reports are found without failing the entire workflow, ensuring continuous operation
- **Integration Ready** - Can be called by other workflows (e.g., a PDF downloader), enabling end-to-end automation from discovery to storage

Who's it for

This template is designed for researchers, market analysts, competitive intelligence teams, academic institutions, industry monitoring services, and anyone who needs to systematically discover and track downloadable reports from multiple publication sources. It's perfect for organizations that need to monitor industry publications, track competitor research, discover new market reports, build research libraries, or stay updated on the latest publications without manually visiting dozens of websites daily.

How it works / What it does

This workflow creates an AI-powered report discovery system that reads publication source URLs from Google Sheets, fetches their pages, uses AI to analyze content, and extracts information about downloadable reports.
The system:
1. Reads Active Sources - Fetches publication URLs and metadata from the Google Sheets "Report Sources" sheet, processing each source in sequence
2. Loops Through Sources - Processes sources one at a time using Split in Batches, ensuring proper error isolation and preventing batch failures
3. Fetches Publication Pages - Downloads HTML content from each source URL with proper browser headers (User-Agent, Accept, Accept-Language) to avoid blocking
4. Converts HTML to Markdown - Transforms raw HTML into clean Markdown, removing styling, scripts, and navigation elements to improve AI comprehension
5. AI Analysis - A LangChain agent analyzes the Markdown content using GPT-4/GPT-5.1, identifying downloadable reports based on context, link patterns, and content structure
6. Structured Output Parsing - Enforces JSON schema validation, ensuring the AI returns data in the exact format: source, title, link, file_type, description
7. Validates & Normalizes Output - Validates that extracted links are absolute URLs, checks file type indicators, determines report validity, and normalizes all fields (see the sketch after the Requirements list below)
8. Routes by Validity - An IF node routes valid reports to the save operation and invalid/missing reports to logging
9. Saves Discovered Reports - Appends valid reports to the Google Sheets "Discovered Reports" sheet with metadata, source URL, category, and discovery timestamp
10. Logs No Report Found - Records sources where no valid reports were found in the "Discovery Log" sheet for monitoring and troubleshooting
11. Tracks Completion - Generates a completion summary with the number of sources checked and the processing timestamp

Key Innovation: AI-Powered Context Understanding - Unlike traditional web scrapers that rely on fixed CSS selectors or regex patterns, this workflow uses AI to understand page context and semantics. The AI can identify reports even when they're embedded in complex layouts, use non-standard naming, or require understanding of surrounding text to determine relevance. This makes it adaptable to any website structure without manual configuration.

How to set up

1. Prepare Google Sheets
- Create a Google Sheet with three tabs: "Report Sources", "Discovered Reports", and "Discovery Log"
- In the "Report Sources" sheet, create columns: Source_Name, Source_URL, Category (optional)
- Add publication URLs in the Source_URL column (e.g., "https://example.com/research" or "https://publisher.com/reports")
- Add descriptive names in the Source_Name column for easy identification
- Optionally add Category values (e.g., "Market Research", "Industry Reports", "Academic Papers")
- The "Discovered Reports" sheet will be automatically populated with columns: source, title, link, fileType, description, sourceUrl, category, discoveredAt, status, isValid
- The "Discovery Log" sheet will record sources where no reports were found
- Verify your Google Sheets credentials are set up in n8n (OAuth2 recommended)

2. Configure Google Sheets Nodes
- Open the "Read Active Sources" node and select your spreadsheet from the document dropdown
- Set the sheet name to "Report Sources"
- Configure the "Save Discovered Report" node: select the same spreadsheet, set the sheet name to "Discovered Reports", and set the operation to "Append or Update"
- Configure the "Log No Report Found" node: same spreadsheet, "Discovery Log" sheet, operation "Append or Update"
- Test the connection by running the "Read Active Sources" node manually to verify it can access your sheet
3. Set Up OpenAI Credentials
- Open the "OpenAI GPT-5.1" node (or configure the model you want to use)
- Connect your OpenAI API credentials (API key required)
- The workflow uses GPT-5.1 by default, but you can change to GPT-4, GPT-4 Turbo, or other models
- Temperature is set to 0.1 for consistent, deterministic output
- Verify the API key has sufficient credits and access to the selected model
- For cost optimization, GPT-4 Turbo is recommended for similar results at lower cost

4. Configure AI Agent & Output Parser
- The "AI Report Discovery Agent" node contains a detailed system prompt that instructs the AI on what to look for
- The prompt is pre-configured but can be customized for your specific needs (e.g., prioritize certain file types, look for specific keywords)
- The "Structured Output Parser" enforces the JSON schema - verify the schema matches your needs:

```json
{
  "source": "Publisher Name",
  "title": "Report Title",
  "link": "https://example.com/report.pdf",
  "file_type": "pdf",
  "description": "Brief description"
}
```

- The parser ensures the AI always returns valid JSON with all required fields
- Test the AI agent by manually running it with a sample source URL to verify it correctly identifies reports

5. Customize Discovery Rules (Optional)
- The AI agent's system prompt can be modified in the "AI Report Discovery Agent" node
- Current rules prioritize: downloadable files (PDF, Excel, Word, PowerPoint), most recent publications, direct download URLs
- To customize: edit the system message to add specific keywords, file types, or discovery patterns
- Example customization: add industry-specific terms or prioritize reports with certain keywords in titles
- The validation code in "Validate & Normalize Output" can be adjusted to change what's considered "valid"
- Test with your specific sources to ensure the discovery rules work as expected

6. Set Up Scheduling & Test
- The workflow includes a Manual Trigger (for testing), a Schedule Trigger (runs daily), and an Execute Workflow Trigger (for calling from other workflows)
- To customize the schedule: open the "Schedule (Daily)" node and adjust the interval (e.g., twice daily, weekly)
- For initial testing: use the Manual Trigger and add 2-3 test publication URLs to your "Report Sources" sheet
- Verify execution: check that pages are fetched, AI analysis completes, and reports are saved to "Discovered Reports"
- Monitor execution logs: check for API errors, timeout issues, or parsing failures
- Review the Discovery Log: verify that sources with no reports are properly logged
- Common issues: OpenAI API rate limits (add delays if processing many sources), invalid URLs (check source URLs), timeout errors (increase the timeout for slow-loading pages), AI not finding reports (you may need to adjust the system prompt for specific site structures)

Requirements
- **OpenAI API Key** - Active OpenAI account with API access and sufficient credits for GPT-4/GPT-5.1 model usage (API key configured in n8n credentials)
- **Google Sheets Account** - Active Google account with OAuth2 credentials configured in n8n for reading and writing spreadsheet data
- **Source Spreadsheet** - Google Sheet with "Report Sources", "Discovered Reports", and "Discovery Log" tabs, properly formatted with the required columns
- **Valid Publication URLs** - Direct links to publication pages that contain downloadable reports (not direct PDF links - the workflow discovers those)
- **n8n Instance** - Self-hosted or cloud n8n instance with access to external websites (the HTTP Request node needs internet connectivity) and LangChain nodes enabled
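A minimal sketch of the kind of logic the "Validate & Normalize Output" step performs. The field names follow the structured-output schema above; the exact checks in the template's Code node may differ:

```typescript
// n8n Code node sketch: validate and normalize one AI-extracted report item.
const report = $json;
const fileTypes = ['pdf', 'xlsx', 'xls', 'docx', 'doc', 'pptx', 'ppt', 'csv'];

let isValid = true;
const reasons = [];

// The link must be an absolute http(s) URL
try {
  const url = new URL(report.link);
  if (!['http:', 'https:'].includes(url.protocol)) throw new Error('non-http');
} catch {
  isValid = false;
  reasons.push('link is not an absolute http(s) URL');
}

// The file type must be one we know how to download
const fileType = String(report.file_type ?? '').toLowerCase();
if (!fileTypes.includes(fileType)) {
  isValid = false;
  reasons.push(`unrecognized file type: ${fileType || '(empty)'}`);
}

// A title is required for the library entry
if (!report.title || !report.title.trim()) {
  isValid = false;
  reasons.push('missing title');
}

return [{
  json: {
    ...report,
    fileType,
    isValid,
    reasons,
    discoveredAt: new Date().toISOString(),
  },
}];
```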
by Harshil Agrawal
This workflow allows you to add positive feedback messages to a table in Notion.

Prerequisites
- Create a Typeform that contains a Long Text field question type to accept feedback from users.
- Get your Typeform credentials by following the steps mentioned in the documentation.
- Follow the steps mentioned in the documentation to create credentials for Google Cloud Natural Language.
- Create a page on Notion similar to this page.
- Create credentials for the Notion node by following the steps in the documentation.
- Follow the steps mentioned in the documentation to create credentials for Slack.
- Follow the steps mentioned in the documentation to create credentials for Trello.

Nodes
- Typeform Trigger node: Whenever a user submits a response to the Typeform, this node triggers the workflow and returns the response the user submitted in the form.
- Google Cloud Natural Language node: Analyses the sentiment of the user's response and returns a score.
- IF node: Checks whether the score from the Google Cloud Natural Language node is positive (larger than 0). If it is, the result is true; otherwise, false.
- Notion node: Connected to the true branch of the IF node; adds the positive feedback shared by the user to a table in Notion.
- Slack node: Shares the positive feedback, along with the score and username, to a Slack channel.
- Trello node: Executed when the score is negative; creates a Trello card with the user's feedback.
by Paul Taylor
📩 Gmail → GPT → Supabase | Task Extractor

This n8n workflow automates the extraction of actionable tasks from unread Gmail messages using OpenAI's GPT API, stores the resulting task metadata in Supabase, and avoids re-processing previously handled emails.

✅ What It Does
- Triggers on a schedule to check for unread emails in your Gmail inbox.
- Loops through each email individually using SplitInBatches.
- Checks Supabase to see if the email has already been processed.
- If it's a new email:
  - Formats the email content into a structured GPT prompt
  - Calls GPT-4o to extract structured task data
  - Inserts the result into your emails table in Supabase

🧰 Prerequisites
Before using this workflow, you must have:
- An active n8n Cloud or self-hosted instance
- A connected Gmail account with OAuth credentials in n8n
- A Supabase project with an emails table carrying a uniqueness constraint on email_id:

```sql
ALTER TABLE emails ADD CONSTRAINT unique_email_id UNIQUE (email_id);
```

- An OpenAI API key with access to GPT-4o or GPT-3.5-turbo

🔐 Required Credentials

| Name | Type | Description |
|------|------|-------------|
| Gmail OAuth | Gmail | To pull unread messages |
| OpenAI API Key | OpenAI | To generate task summaries |
| Supabase API | HTTP | For inserting rows via REST API |

🔁 Environment Variables or Replacements
- Supabase_TaskManagement_URI → e.g., https://your-project.supabase.co
- Supabase_TaskManagement_ANON_KEY → your Supabase anon key

These are used in the HTTP request to Supabase.

⏰ Scheduling / Trigger
- Triggered using a Schedule node
- Default: every X minutes (adjust to your preference)
- Uses a Gmail API filter: unread emails with the INBOX label

🧠 Intended Use Case
> Designed for productivity-minded professionals who want to extract, summarize, and store actionable tasks from incoming email — without processing the same email twice or wasting GPT API credits.

This is part of a larger system integrating GPT, calendar scheduling, and optional task platforms (like ClickUp).

📦 Output (Stored in Supabase)
Each processed email includes:
- email_id
- subject
- sender
- received_at
- body (email snippet)
- gpt_summary (structured task)
- requires_deep_work (from GPT logic)
- deleted (initially false)
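A minimal standalone sketch of the Supabase REST insert the HTTP Request node performs. The table and column names match the output list above; using PostgREST's `on_conflict` parameter with `Prefer: resolution=ignore-duplicates` is one way to lean on the UNIQUE (email_id) constraint for deduplication, though the template itself checks for existing rows before inserting:

```typescript
// Sketch of the Supabase (PostgREST) insert. The URL and key correspond
// to the Supabase_TaskManagement_* replacements described above.
const SUPABASE_URL = 'https://your-project.supabase.co';
const SUPABASE_ANON_KEY = process.env.SUPABASE_ANON_KEY ?? '';

async function insertEmail(row: {
  email_id: string;
  subject: string;
  sender: string;
  received_at: string;
  body: string;
  gpt_summary: string;
  requires_deep_work: boolean;
}) {
  const res = await fetch(
    // on_conflict + ignore-duplicates makes the insert a no-op for
    // already-processed emails, thanks to the UNIQUE (email_id) constraint.
    `${SUPABASE_URL}/rest/v1/emails?on_conflict=email_id`,
    {
      method: 'POST',
      headers: {
        apikey: SUPABASE_ANON_KEY,
        Authorization: `Bearer ${SUPABASE_ANON_KEY}`,
        'Content-Type': 'application/json',
        Prefer: 'resolution=ignore-duplicates',
      },
      body: JSON.stringify({ ...row, deleted: false }),
    },
  );
  if (!res.ok) throw new Error(`Supabase insert failed: ${res.status}`);
}
```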
by Fenngbrotalk
n8n Workflow: AI-Powered Stock Chart Analysis Bot for Telegram

This is a powerful n8n automation workflow that integrates a Telegram bot with OpenAI's multimodal large language model (GPT-4 Vision) to provide users with real-time stock chart analysis.

Workflow Breakdown
- **Receive Image:** The workflow is initiated by a Telegram Trigger. It activates whenever a user sends an image (e.g., a stock's candlestick chart) to a designated Telegram chat, automatically downloading the file.
- **Image Pre-processing:** To optimize the AI's performance and efficiency, the Edit Image node resizes the incoming image to a standard 512x512 pixel format.
- **AI Vision Analysis:** The processed image is then passed to a LangChain chain that uses the OpenAI GPT-4 Vision model. A sophisticated system prompt instructs the AI to act as a professional stock analyst.
- **Intelligent Interpretation:** The AI analyzes the image to identify the stock's name, price trend (uptrend, downtrend, or sideways), key support/resistance levels, and volume changes. It then generates a comprehensive analysis report combining technical indicators and market sentiment.
- **Structured Output:** To ensure reliability and consistency, the AI's output is parsed into a specific JSON format containing a search_word field (for the industry/sector) and the main content field (the analysis text).
- **Send Response:** Finally, the workflow extracts the content field from the JSON output and uses the Telegram node to send this professional analysis back to the user as a text message in the same chat.

Key Features
- **User-Friendly:** Users simply send an image to get an analysis; no complex commands required.
- **Instant & Efficient:** The entire analysis and response process is fully automated and completes within moments.
- **Professional-Grade Analysis:** Leverages the advanced image recognition and reasoning capabilities of GPT-4 Vision to deliver insights comparable to those of a human analyst.
- **Reliable & Consistent:** Structured output ensures the response format is always consistent and easy to read or process further.
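The shape of the structured output, shown as a TypeScript type. The two field names come from the description above; the example values are purely illustrative:

```typescript
// Parsed shape of the AI's structured output described above.
interface ChartAnalysis {
  search_word: string; // industry/sector keyword for follow-up searches
  content: string;     // the analysis text sent back to the Telegram chat
}

// Illustrative example only — not real analysis output.
const example: ChartAnalysis = {
  search_word: 'semiconductors',
  content:
    'The chart shows an uptrend with support near $120 and resistance at ' +
    '$135; volume has been rising on up days, suggesting accumulation.',
};
```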
by isa024787bel
This n8n workflow automates sending out daily SMS notifications via Vonage, each containing new tech-related vocabulary.

To build this handy vocabulary improver, you'll need the following:
- n8n – You can find details on how to install n8n on the Quickstart page.
- LingvaNex account – You can create a free account here. Up to 200,000 characters are included in the free plan when you generate your API key.
- Airtable account – You can register for free.
- Vonage account – You can sign up free of charge if you aren't already registered.
by scrapeless official
> ⚠️ Disclaimer: This workflow uses Scrapeless and Claude AI via community nodes, which require a self-hosted n8n instance to work properly.

🔁 How It Works

This intelligent B2B lead generation workflow combines search automation, website crawling, AI analysis, and multi-channel output:
- It starts by using Scrapeless's Deep SERP API to find company websites from targeted Google Search queries.
- Each result is then individually crawled using Scrapeless's Crawler module, retrieving key business information from pages like /about, /contact, /services.
- The raw web content is processed via a Code node to clean, extract, and prepare structured data.
- The cleaned data is passed to Claude Sonnet (Anthropic), which analyzes and qualifies the lead based on content richness, contact data, and relevance.
- A filter step ensures only high-quality leads (e.g., lead score ≥ 6) are kept.
- Qualified leads are sent via Discord webhook for real-time notification (this can be replaced with Slack, email, or CRM tools).

> 📌 The result is a fully automated system that finds, qualifies, and organizes B2B leads with high efficiency and minimal manual input.

✅ Pre-Conditions
Before using this workflow, make sure you have:
- An n8n self-hosted instance
- A Scrapeless account and API key (get it here)
- An Anthropic Claude API key
- A configured Discord webhook URL (or an alternative notification service)

⚙️ Workflow Overview
Manual Trigger → Scrapeless Google Search → Item Lists → Scrapeless Crawler → Code (Data Cleaning) → Claude Sonnet → Code (Response Parser) → Filter → Discord Notification

🔨 Step-by-Step Breakdown
1. Manual Trigger – For testing purposes (can be replaced with Cron or Webhook)
2. Scrapeless Google Search – Queries target B2B topics via Scrapeless's Deep SERP API
3. Item Lists – Splits search results into individual items
4. Scrapeless Crawler – Visits each company domain and scrapes structured content
5. Code Node (Data Cleaner) – Extracts and formats content for LLM input
6. Claude Sonnet (via HTTP Request) – Evaluates lead quality, relevance, and contact info
7. Code Node (Parser) – Parses Claude's JSON response (see the sketch at the end of this section)
8. IF Filter – Filters leads based on the score threshold
9. Discord Webhook – Sends a formatted message with company info

🧩 Customization Guidance
You can easily adjust the workflow to match your needs:
- **Lead Criteria**: Modify the Claude prompt and scoring logic in the Code node
- **Output Channels**: Replace the Discord webhook with Slack, Email, Airtable, or any CRM node
- **Search Topics**: Change your query in the Scrapeless SERP node to find leads in different niches or countries
- **Scoring Threshold**: Adjust the filter logic (lead_score >= 6) to match your quality tolerance

🧪 How to Use
1. Insert your Scrapeless and Claude API credentials in the designated nodes
2. Replace or configure the Discord webhook (or alternative outputs)
3. Run the workflow manually (or schedule it)
4. View qualified leads directly in your chosen notification channel

📦 Output Example
Each qualified lead includes:
- 🏢 Company Name
- 🌐 Website
- ✉️ Email(s)
- 📞 Phone(s)
- 📍 Location
- 📈 Lead Score
- 📝 Summary of relevant content

👥 Ideal Users
This workflow is perfect for:
- **AI SaaS companies** targeting mid-market & enterprise leads
- **Marketing agencies** looking for B2B-qualified leads
- **Automation consultants** building scraping solutions
- **No-code developers** working with n8n, Make, or Pipedream
- **Sales teams** needing enriched prospecting data
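A minimal sketch of the parser step. It assumes the Anthropic HTTP Request node returns the standard Messages API shape (`content[0].text`) and that Claude's reply contains a JSON object with a lead_score field matching the filter threshold above; the exact response shape in the template may differ:

```typescript
// n8n Code node sketch: pull the JSON object out of Claude's text response.
const raw = $json.content?.[0]?.text ?? '';

// Claude sometimes wraps JSON in code fences or surrounding prose,
// so grab the first {...} block rather than parsing the whole string.
const match = raw.match(/\{[\s\S]*\}/);
if (!match) {
  return [{ json: { lead_score: 0, parse_error: 'no JSON object found' } }];
}

let lead;
try {
  lead = JSON.parse(match[0]);
} catch (e) {
  return [{ json: { lead_score: 0, parse_error: String(e) } }];
}

// The downstream IF node keeps items where lead_score >= 6
return [{ json: { lead_score: 0, ...lead } }];
```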
by Kev
⚠️ Important: This workflow uses the Autype community node and requires a self-hosted n8n instance.

Send an email with a document request and optional PDF attachments. The AI assistant can summarize documents, compare multiple PDFs, draft new content, or create documents from scratch with internet research — all output as professionally branded PDFs using Autype. The finished document is delivered back to the sender via email.

Who is this for?

Consultants, analysts, project managers, and teams who need on-demand document generation. Send an email and get a branded PDF back — whether it's a summary, a comparison, a draft, or a freshly researched document.

Concrete example: Attach 3 PDF proposals and write "Compare these proposals and recommend the best option" — each PDF is OCR'd via Autype Lens, the AI assistant produces a structured comparison with tables, and you receive a branded PDF within minutes.

This also works as an additional skill for an AI agent. Instead of an email trigger, connect the workflow to a webhook or chat trigger so an agent can call it when a user asks "create a summary of these documents."

What this workflow does

On each incoming email, the workflow:
- Extracts the email subject + body as the document request, and detects PDF attachments
- Processes each attached PDF sequentially: uploads to Autype, extracts text via Lens OCR
- Combines all OCR results into a single context
- Downloads the Autype Extended Markdown syntax reference so the AI knows the output format
- Passes the request text + all PDF content to an AI Document Assistant with Firecrawl and SerpAPI as research tools
- The assistant determines the task type (summarize, compare, draft, or create from scratch) and produces the document in Autype Extended Markdown
- Autype renders the markdown to a branded PDF with company styling (fonts, colors, heading styles, tables, header with logo, footer with page numbers)
- The PDF is delivered back to the original sender via email

How it works
- New Email Received — An IMAP Email Trigger monitors your inbox for incoming document requests. The email subject and body become the request text; PDF attachments are automatically detected.
- Set Company Config — A Set node defines your company name, logo URL, and brand color. Edit these values once.
- Extract & Split PDFs — A Code node extracts the sender email, combines subject + body as the request text, and detects PDF attachments. Each PDF is split into a separate item for loop processing. If no PDFs are attached, a single item with just the text is output.
- Has PDFs? — An IF node routes the flow: emails with PDF attachments enter the processing loop; text-only emails skip directly to the AI Assistant.
- Loop Over PDFs — A Split In Batches node processes each PDF sequentially (one at a time to avoid API rate limits).
- Upload PDF to Autype — Each PDF is uploaded to Autype via the community node (resource: file).
- Autype Lens OCR — An HTTP Request node triggers Autype Lens OCR on the uploaded file with outputFormat: "md". This uses Generic Auth Type → Header Auth with X-API-Key set to your Autype API key. Cost: 4 credits per page. A dedicated community node for Lens is planned.
- Wait for OCR → Poll OCR Status — Waits 8 seconds, then polls the job status via HTTP Request (same Header Auth credential). The loop continues to the next PDF after each OCR completes.
- Extract OCR Text — Extracts the markdown text from each OCR result and stores it with the original filename.
- Combine All OCR Results — After the loop completes, collects all OCR texts and combines them into a single context string with labeled sections per PDF.
- Prepare Text Only — For emails without PDFs, passes just the request text forward.
- Download Markdown Syntax — Fetches the Autype Extended Markdown syntax reference so the AI knows the output format.
- Merge Context — Combines the request text, all OCR content, and the markdown syntax reference into a single item for the AI Agent.
- AI Document Assistant — An n8n AI Agent (OpenRouter) with two tools:
  - Firecrawl Scrape — Scrapes specific URLs to extract page content as markdown.
  - SerpAPI — Web search for current information, statistics, and facts.
  The assistant determines the task type (summarize, compare, draft, or create from scratch). The system prompt limits tool usage to a maximum of 5 calls and prioritizes attached PDF content.
- Prepare Render Payload — Cleans the AI output (strips code fences), generates a filename, and prepares branding variables.
- Render Branded PDF — Autype Render from Markdown generates the PDF with a full defaults JSON for company styling: Roboto font, heading colors derived from the brand color, styled tables with colored headers, a header with the company logo, and a footer with page numbers. See the defaults schema for all options.
- Send Report via Email — SMTP sends the PDF as an attachment back to the original email sender.

Setup
1. Install the Autype community node (n8n-nodes-autype) via Settings > Community Nodes.
2. Create an Autype API credential with your API key from app.autype.com. See API Keys in Settings.
3. Create a Header Auth credential for the Lens OCR HTTP Request nodes:
   - Go to Credentials > New > Header Auth
   - Name: X-API-Key
   - Value: your Autype API key (same key as step 2)
   - Assign this credential to the "Autype Lens OCR" and "Poll OCR Status" nodes.
4. Create an OpenRouter API credential (or replace the chat model with OpenAI/Anthropic).
5. Create an IMAP credential for the email inbox to monitor.
6. Create an SMTP credential for sending emails.
7. Get a Firecrawl API key from firecrawl.dev and create a Firecrawl credential.
8. Get a SerpAPI key from serpapi.com and create a SerpAPI credential.
9. Import this workflow and assign your credentials to each node.
10. Edit the Set Company Config node:
    - companyName — your company name (appears in header/footer)
    - companyLogoUrl — URL to your company logo (PNG/JPEG, publicly accessible)
    - brandColor — hex color for headings and table headers (e.g. #1a5276)
11. Update the Send Report via Email node with your sender email address.
12. Activate the workflow — any new email to the monitored inbox triggers document generation.

> Note: This is a community node. It is not maintained by the n8n team. You need a self-hosted n8n instance to use community nodes.

Requirements
- Self-hosted n8n instance (community nodes are not available on n8n Cloud)
- Autype account with API key (Lens OCR costs 4 credits/page, Render from Markdown costs 1 credit)
- n8n-nodes-autype community node installed
- OpenRouter API key (or OpenAI/Anthropic — configurable chat model)
- IMAP credentials for the monitored inbox
- SMTP credentials for sending emails
- Firecrawl API key (free tier: 500 pages/month)
- SerpAPI key (serpapi.com)

How to customize
- **Change AI model:** Replace the OpenRouter Chat Model sub-node with OpenAI, Anthropic Claude, Google Gemini, or any LangChain-compatible chat model.
- **Add more research tools:** Add additional tool nodes for specialized APIs — Google Scholar, SEC filings, PubMed, or internal knowledge bases.
- **Customize styling:** Edit the defaults JSON in the Render Branded PDF node to change fonts, colors, heading styles, table designs, header/footer content, and spacing. See the defaults schema for all available options.
- **Replace email trigger:** Swap the IMAP Email Trigger with a Form Trigger, Webhook, or Chat Trigger to accept input from different sources.
- **Add watermark:** Insert an Autype Watermark step after rendering to stamp "DRAFT" or "CONFIDENTIAL" on every page.
- **Save to cloud storage:** Add a Google Drive, S3, or SharePoint upload step after rendering (before or instead of SMTP).
- **Adjust OCR wait time:** For large PDFs (10+ pages), increase the Wait node from 8 to 15-20 seconds, or add a retry loop that polls until the status is COMPLETED (see the sketch below).
- **Use the Autype community node for Lens:** Once the Autype community node adds Lens OCR support, replace the HTTP Request OCR/poll chain with a single Autype node.
- **Change output format:** Switch from Render from Markdown to Render from JSON for finer-grained control over the rendered document.
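A minimal sketch of such a retry loop, written for an n8n Code node. The job-status URL, field names, and FAILED status are assumptions — check the Autype API docs for the real endpoint shape — but the COMPLETED status value matches the description above:

```typescript
// n8n Code node sketch: poll the OCR job until it completes, instead of a
// fixed Wait node. Endpoint and response fields are assumed, not documented.
const jobId = $json.jobId;            // assumed to come from the OCR-start response
const apiKey = 'YOUR_AUTYPE_API_KEY'; // in practice, reuse the Header Auth credential

for (let attempt = 0; attempt < 20; attempt++) {
  const res = await fetch(`https://api.autype.com/v1/jobs/${jobId}`, {
    headers: { 'X-API-Key': apiKey },
  });
  const job = await res.json();

  if (job.status === 'COMPLETED') {
    return [{ json: job }];
  }
  if (job.status === 'FAILED') {
    throw new Error(`OCR job ${jobId} failed`);
  }
  // Still processing: wait 5 seconds before polling again
  await new Promise((r) => setTimeout(r, 5000));
}

throw new Error(`OCR job ${jobId} did not complete in time`);
```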
by Lorena
This workflow is triggered when a new order is created in Shopify. Then:
- the order information is stored in Zoho CRM,
- an invoice is created in Harvest and stored in Trello,
- if the order value is above 50, an email with a discount coupon is sent to the customer and they are added to a MailChimp campaign for high-value customers; otherwise, only a "thank you" email is sent to the customer.

Note that you need to replace the List ID in the Trello node with your own ID (see instructions in our docs). The same goes for the Account ID in the Harvest node (see instructions here).