by LukaszB
# UX & SEO Website Analyst (Airtop + OpenAI + Gmail)

This workflow automatically analyzes a website for UX and SEO quality. It uses Airtop for realistic web scraping, OpenAI for structured evaluation of metadata (title, description, and overall SEO signals), and Gmail to send professional reports.

## What it does
- Scrapes website content and metadata through an Airtop session.
- Evaluates SEO and UX factors (strengths, weaknesses, recommendations) with OpenAI.
- Generates a clear, structured report.
- Sends the report automatically via Gmail.

## Use cases
- Marketing agencies auditing client websites.
- Freelancers offering SEO/UX review services.
- Businesses monitoring their own website performance.

## Requirements
- **Airtop account** with API access.
- **OpenAI API key**.
- **Gmail credentials** connected in n8n.

## How it works
1. Start the flow with a target website URL.
2. Airtop opens a session and scrapes metadata naturally.
3. OpenAI analyzes and scores the title, description, and overall quality.
4. Gmail sends a formatted report to your chosen recipient.

## Set up steps
1. Connect Airtop, OpenAI, and Gmail credentials in n8n.
2. Provide the target URL to analyze.
3. Run the workflow and review the email report.

Detailed instructions are kept inside sticky notes in the workflow.
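The hand-off between the Airtop scrape and the OpenAI evaluation can be sketched as a small Code-node step. This is a minimal illustration, assuming the scrape step returns raw HTML; it pulls out the `<title>` and meta description so the OpenAI node receives clean metadata fields rather than the whole page:

```javascript
// Minimal sketch (assumption: the Airtop step outputs the page's raw HTML).
// Extracts the <title> and meta description for the OpenAI evaluation step.
function extractMetadata(html) {
  const title = (html.match(/<title[^>]*>([^<]*)<\/title>/i) || [])[1] || '';
  const description = (html.match(
    /<meta[^>]+name=["']description["'][^>]+content=["']([^"']*)["']/i
  ) || [])[1] || '';
  return { title: title.trim(), description: description.trim() };
}

const html = '<html><head><title> Acme Widgets </title>' +
  '<meta name="description" content="Quality widgets since 1999"></head></html>';
const meta = extractMetadata(html);
// → { title: 'Acme Widgets', description: 'Quality widgets since 1999' }
```

A real page may declare the meta description with the attributes in a different order, so a production version would want a proper HTML parser; the regex form is only meant to show the shape of the data handed to the AI node.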
by Khairul Muhtadin
Streamline M&A due diligence with AI. This n8n workflow automatically parses financial documents using LlamaIndex, embeds data into Pinecone, and generates comprehensive, AI-driven reports with GPT-5-mini, saving hours of manual review and ensuring consistent, data-backed insights.

## Why Use This Workflow?
- **Time Savings:** Reduces manual document review and report generation from days to minutes.
- **Cost Reduction:** Minimizes reliance on expensive human analysts for initial data extraction and summary.
- **Error Prevention:** AI-driven analysis ensures consistent data extraction, reducing human error and oversight.
- **Scalability:** Effortlessly processes multiple documents and deals in parallel, scaling with your business needs.

## Ideal For
- **Investment Analysts & Private Equity Firms:** Quickly evaluate target companies by automating the extraction of key financials, risks, and business models from deal documents.
- **M&A Advisors:** Conduct preliminary due diligence efficiently, generating comprehensive overview reports for clients without extensive manual effort.
- **Financial Professionals:** Accelerate research and analysis of company filings, investor presentations, and market reports for critical decision-making.

## How It Works
1. **Trigger:** A webhook receives multiple due diligence documents (PDFs, DOCX, XLSX) along with associated metadata.
2. **Document Processing & Cache Check:** Files are split individually. The workflow first checks Pinecone to see if the deal's documents have been processed before (cache hit). If so, it skips parsing and embedding.
3. **Data Extraction (LlamaIndex):** For new deals, each document is sent to LlamaIndex for advanced parsing, extracting structured text content.
4. **Vectorization & Storage:** The parsed text is then converted into numerical vector embeddings using OpenAI and stored in Pinecone, our vector database, with relevant metadata.
5. **AI-Powered Analysis (Langchain Agent):** An n8n Langchain Agent, acting as a "Senior Investment Analyst," leverages GPT-5-mini to query Pinecone multiple times for specific information (e.g., company profile, financials, risks, business model). It synthesizes these findings into a structured JSON output.
6. **Report Generation:** The structured AI output is transformed into an HTML report, then converted into a professional PDF document.
7. **Secure Storage & Delivery:** The final PDF due diligence report is uploaded to an S3 bucket, and a public URL is returned via the initial webhook, providing instant access.

## Setup Guide

### Prerequisites

| Requirement | Type | Purpose |
| :---------- | :--- | :------ |
| n8n instance | Essential | Workflow execution platform |
| LlamaIndex API Key | Essential | For robust document parsing and text extraction |
| OpenAI API Key | Essential | For creating text embeddings and powering the GPT-5-mini AI agent |
| Pinecone API Key | Essential | For storing and retrieving vector embeddings |
| AWS S3 Account | Essential | For secure storage of generated PDF reports |

### Installation Steps
1. Import the JSON file to your n8n instance.
2. Configure credentials:
   - **LlamaIndex:** Create an "HTTP Header Auth" credential with `x-api-key` in the header and your LlamaIndex API key as the value.
   - **OpenAI:** Create an "OpenAI API" credential with your OpenAI API key. Ensure the credential name is "Sumopod" or update the workflow nodes accordingly.
   - **Pinecone:** Create a "Pinecone API" credential with your Pinecone API key and environment. Ensure the credential name is "w3khmuhtadin" or update the workflow nodes accordingly.
   - **AWS S3:** Create an "AWS S3" credential with your Access Key ID and Secret Access Key.
3. Update environment-specific values:
   - In the "Upload to S3" node, ensure the `bucketName` is set to your desired S3 bucket.
   - In the "Create Public URL" node, update the `baseUrl` variable to match your S3 bucket's public access URL or CDN if applicable (e.g., `https://your-s3-bucket-name.s3.amazonaws.com`).
4. Customize settings: Review the prompt in the "Analyze" (Langchain Agent) node to adjust the AI's persona or required queries if needed.
5. Test execution: Send sample documents (PDF, DOCX, XLSX) to the webhook URL (`/webhook/dd-ai`) to verify all connections and processing steps work as expected.

## Technical Details

### Core Nodes

| Node | Purpose | Key Configuration |
| :--- | :------ | :---------------- |
| Webhook | Initiates workflow with document uploads | Path: `dd-ai`, HTTP Method: POST |
| Split Multi-File (Code) | Splits binary files, generates unique deal ID | Parses filenames from body or binary, creates `dealId` from sorted names |
| Parse Document via LlamaIndex | Extracts structured text from various document types | URL: `https://api.cloud.llamaindex.ai/api/v1/parsing/upload`, Authentication: HTTP Header Auth with `x-api-key` |
| Monitor Document Processing | Polls LlamaIndex for parsing status | URL: `https://api.cloud.llamaindex.ai/api/v1/parsing/job/{{ $json.id }}`, Authentication: HTTP Header Auth |
| Insert to Pinecone | Stores vector embeddings in Pinecone | Mode: insert, Pinecone Index: `poc`, Pinecone Namespace: `dealId` |
| Data Retrieval (Pinecone) | Enables AI agent to search due diligence documents | Mode: retrieve-as-tool, Pinecone Index: `poc`, Pinecone Namespace: `{{ $json.dealId }}`, topK: 100 |
| Analyze (Langchain Agent) | Orchestrates AI analysis using specific queries | Prompt Type: define, detailed role and 6 mandatory Pinecone queries, Model: gpt-5-mini, Output Parser: Parser |
| Generate PDF (Puppeteer) | Converts HTML report to a professional PDF | Script Code: `await $page.pdf(...)` with A4 format, margins, and 60s timeout |
| Upload to S3 | Stores final PDF reports securely | Bucket Name: `poc`, File Name: `{{ $json.fileName }}`, Credentials: AWS S3 |
| If (Check Namespace Exists) | Implements caching logic | Checks `stats.namespaces[dealId].vectorCount > 0` to determine cache hit/miss |

### Workflow Logic
The workflow begins by accepting multiple files via a webhook. It intelligently checks if the specific "deal" (identified by a unique ID generated from filenames) has already had its documents processed and embedded in Pinecone. This cache mechanism prevents redundant processing, saving time and API costs. If a cache miss occurs, documents are parsed by LlamaIndex, their content vectorized by OpenAI, and stored in a Pinecone namespace unique to the deal.

For analysis, a Langchain Agent, powered by GPT-5-mini, is instructed with a specific persona and a mandatory sequence of Pinecone queries (e.g., company overview, financials, risks). It uses the Data Retrieval tool to interact with Pinecone, synthesizing information from the stored embeddings. The AI's output is then structured by a dedicated parser, transformed into a human-readable HTML report, and converted into a PDF. Finally, this comprehensive report is uploaded to AWS S3, and a public access URL is provided as a response.

## Customization Options

### Basic Adjustments
- **AI Prompt Refinement:** Modify the Prompt field in the "Analyze" (Langchain Agent) node to adjust the AI's persona, introduce new mandatory queries, or change the reporting style.
- **Output Schema:** Update the JSON schema in the "Parser" (Langchain Output Parser Structured) node to include additional fields or change the structure of the AI's output.

### Advanced Enhancements
- **Integration with CRM/Dataroom:** Add nodes to automatically fetch documents from, or update status in, a CRM (e.g., Salesforce, HubSpot) or a virtual data room (e.g., CapLinked, Datasite).
- **Conditional Analysis:** Implement logic to trigger different analysis paths or generate different report sections based on document content or deal parameters.
- **Notification System:** Integrate with Slack, Microsoft Teams, or email to send notifications upon report generation or specific risk identification.

## Use Case Examples

### Scenario 1: Private Equity Firm Evaluating a Target Company
- **Challenge:** A private equity firm receives dozens of due diligence documents (financials, CIM, management presentations) for a potential acquisition, needing a rapid initial assessment.
- **Solution:** The workflow ingests all documents, automatically parses them, and an AI agent synthesizes key company information, financial summaries (revenue history, margins), and identified risks into a structured report within minutes.
- **Result:** The firm's analysts gain an immediate, comprehensive overview, enabling faster screening and more focused deep-dive questions, significantly accelerating the deal cycle.

### Scenario 2: M&A Advisor Conducting Preliminary Due Diligence
- **Challenge:** An M&A advisory firm needs to provide clients with a quick, consistent, and standardized preliminary due diligence report across multiple prospects.
- **Solution:** Advisors upload relevant prospect documents to the workflow. The AI-powered system automatically extracts core business model details, investment thesis highlights, and customer concentration analysis, along with key financials.
- **Result:** The firm can generate standardized, high-quality preliminary reports efficiently, ensuring consistency across all client engagements and freeing up senior staff for strategic analysis.

---

Created by: Khmuhtadin
Category: AI | Tags: Due Diligence, AI, Automation, M&A, LlamaIndex, Pinecone, GPT-5-mini, Document Processing

Need custom workflows? Contact us.
Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
by Dinakar Selvakumar
# Automate LinkedIn lead discovery, enrichment, and email follow-ups using Apify and Google Sheets

## What this template demonstrates
- End-to-end lead pipeline (discovery → enrichment → outreach)
- Google Search–based LinkedIn discovery (safe approach)
- Batch processing with controlled loops
- AI-generated cold emails and follow-ups
- Google Sheets as a structured lead database

## Use cases
- B2B outbound lead generation
- Recruitment sourcing workflows
- Founder-led sales outreach
- Building verified prospect lists
- Automated follow-up systems

## How it works
1. Reads role + location inputs from Google Sheets
2. Uses Apify to find LinkedIn profiles via Google Search
3. Stores raw leads and filters valid profiles
4. Enriches profiles with additional data
5. Generates personalized email sequences using AI
6. Sends emails and tracks responses
7. Updates lead status to prevent duplicates

## How to use
1. Add `{{APIFY_API_TOKEN}}`
2. Connect Google Sheets and Gmail
3. Prepare the input sheet (Keyword, Location)
4. Run the workflow in test mode
5. Verify outputs in Google Sheets
6. Activate the workflow

## Requirements
- n8n (cloud or self-hosted)
- Apify account
- Google Sheets
- Gemini / LLM API key

## Good to know
- No direct LinkedIn automation (safe and compliant)
- Email follow-ups include response tracking
- Uses batching to handle large datasets
- Google Sheets acts as the system state

## Customising this workflow
- Change target roles and locations
- Improve AI prompts for personalization
- Add email verification tools
- Replace Sheets with a database
- Add retry logic and rate limits

## What this template demonstrates
- Multi-step automation architecture
- API orchestration using HTTP nodes
- AI integration in workflows
- Conditional logic and batching
- Scalable lead processing design
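The "Google Search–based discovery" step above amounts to building a site-restricted query from each sheet row and validating the returned URLs. A hedged sketch (field names and query shape are assumptions; adapt them to your Apify actor's input):

```javascript
// Hypothetical sketch of the discovery + filter stages. The query format is
// a common pattern for finding LinkedIn profiles via Google, not the exact
// string this template uses.
function buildLinkedInQuery(keyword, location) {
  return `site:linkedin.com/in "${keyword}" "${location}"`;
}

function isLinkedInProfile(url) {
  // Keeps only personal profile URLs (linkedin.com/in/...), dropping
  // company pages, posts, and unrelated search results.
  return /^https?:\/\/([a-z]{2,3}\.)?linkedin\.com\/in\//i.test(url);
}

const query = buildLinkedInQuery('Head of Growth', 'Berlin');
// → site:linkedin.com/in "Head of Growth" "Berlin"
```

The profile filter is what makes the Sheets-backed lead list clean: raw search results include company pages and articles, and only `/in/` URLs should continue to enrichment.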
by Hemanth Arety
Automatically fetch, curate, and distribute Reddit content digests using AI-powered filtering. This workflow monitors multiple subreddits, ranks posts by relevance, removes spam and duplicates, then delivers beautifully formatted digests to Telegram, Discord, or Slack.

## Who's it for
Perfect for content creators tracking trends, marketers monitoring discussions, researchers following specific topics, and community managers staying informed. Anyone who wants high-quality Reddit updates without manually browsing multiple subreddits.

## How it works
The workflow fetches top posts from your chosen subreddits using Reddit's JSON API (no authentication required). Posts are cleaned, deduplicated, and filtered by upvote threshold and custom keywords. An AI model (Google Gemini, OpenAI, or Claude) then ranks the remaining posts by relevance, filters out low-quality content, and generates a formatted digest. The final output is delivered to your preferred messaging platform on a schedule or on demand.

## Setup requirements
- n8n version 1.0+
- AI provider API key (Google Gemini recommended, as it has a free tier)
- At least one messaging platform configured:
  - Telegram bot token + chat ID
  - Discord webhook URL
  - Slack OAuth token + channel access

## How to set up
1. Open the Configuration node and edit the subreddit list, post counts, and keywords
2. Configure the Schedule Trigger or use manual execution
3. Add your AI provider credentials in the AI Content Curator node
4. Enable and configure your preferred delivery platform (Telegram/Discord/Slack)
5. Test with manual execution, then activate the workflow

## Customization options
- **Subreddits:** Add unlimited subreddits to monitor (comma-separated)
- **Time filters:** Choose from hour, day, week, month, year, or all-time top posts
- **Keywords:** Set focus keywords to prioritize and exclude keywords to filter out
- **Post count:** Adjust how many posts to fetch vs. how many appear in the final digest
- **AI prompt:** Customize ranking criteria and output format in the AI node
- **Schedule:** Use cron expressions for hourly, daily, or weekly digests
- **Output format:** Modify the formatting code to match your brand style

Add email notifications, database storage, or RSS feed generation by extending the workflow with additional nodes.
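The fetch-and-filter stage described above can be sketched in a few lines. The endpoint shape is Reddit's public JSON listing API; the threshold and keyword options are assumptions standing in for the Configuration node's settings:

```javascript
// Sketch of the fetch URL and the pre-AI filtering pass. No authentication
// is needed for Reddit's public .json listing endpoints.
function topPostsUrl(subreddit, timeFilter = 'day', limit = 25) {
  return `https://www.reddit.com/r/${subreddit}/top.json?t=${timeFilter}&limit=${limit}`;
}

function filterPosts(posts, { minUpvotes = 50, excludeKeywords = [] } = {}) {
  // Drops posts below the upvote threshold and any whose title contains
  // an excluded keyword (case-insensitive). The AI ranking runs after this.
  return posts.filter(p =>
    p.ups >= minUpvotes &&
    !excludeKeywords.some(k => p.title.toLowerCase().includes(k.toLowerCase()))
  );
}

const url = topPostsUrl('selfhosted', 'week', 10);
// → https://www.reddit.com/r/selfhosted/top.json?t=week&limit=10
const kept = filterPosts(
  [{ title: 'Great release notes', ups: 120 }, { title: 'SPAM offer inside', ups: 300 }],
  { minUpvotes: 100, excludeKeywords: ['spam'] }
);
// kept contains only the first post
```

Doing the cheap mechanical filtering before the AI call keeps token usage down: the model only ranks posts that already cleared the upvote and keyword gates.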
by Mark Ma
## How it works
This workflow is your personal CEO Brain. Every Saturday night, it automatically collects the past week's activity across:
- 📩 **Gmail:** filters out spam, promos, receipts, etc.
- 📅 **Google Calendar:** grabs the past week and the upcoming month
- 🗒️ **Notion Weekly Plan:** pulls and analyzes a photo of your weekly plan (e.g., taken from a paper planner/notebook) using GPT-4o
- 🎯 **Notion OKRs:** fetches quarterly OKRs and formats them for summary

It sends all the data to GPT-4.1, which generates a smart weekly report, including a progress check, misalignments, overdue follow-ups, and next steps, then emails it to you as a Weekly OKR Report.

## Set up steps
- 🧠 Add your Gmail, Google Calendar, Notion, and OpenAI credentials
- 📅 The Notion Weekly Plan should have a date property and an image field that holds a photo of your planner/notebook page
- 🎯 The Notion OKR database should include objective, key result, and status fields
- 🔁 The schedule is preset to every Saturday at 20:30 HK time (cron `0 30 20 * * 6`). Also set the workflow timezone in n8n and, if self-hosting, the server/container TZ (e.g., `TZ=Asia/Hong_Kong`) to ensure correct triggering
- 💬 You can modify the AI prompts inside the LLM nodes for more customized outputs
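The "filters out spam, promos, receipts" step in the Gmail collection boils down to a search query over the past week. A hypothetical sketch of how that query could be built (the exclusion list is an assumption; `category:` and `in:` are standard Gmail search operators, while a `receipts` label would be one you create yourself):

```javascript
// Hypothetical Gmail search query for the weekly collection step.
// Adjust the exclusions to whatever you consider noise.
function weeklyGmailQuery(days = 7) {
  const noise = ['category:promotions', 'category:social', 'in:spam', 'label:receipts'];
  return `newer_than:${days}d ${noise.map(n => `-${n}`).join(' ')}`;
}

const query = weeklyGmailQuery();
// → newer_than:7d -category:promotions -category:social -in:spam -label:receipts
```

Pushing the filtering into the Gmail query itself means the workflow never downloads the noisy messages, which keeps the GPT-4.1 summary prompt short and cheap.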
by Alena - Prodigy AI Sol
## How it works
1. A weekly cron triggers the flow
2. The Reddit node fetches top posts from a `+`-separated list of subreddits you define
3. A Code node normalises each item into `{url, title, publishedAt}` while preserving Reddit-specific fields like `ups`, `upvote_ratio`, and `num_comments`
4. A deduplication step drops repeats by normalised URL and then by title
5. A scoring step ranks candidates by `ups × upvote_ratio` and passes the winner plus the top 10 candidates to GPT-4
6. A LangChain agent writes three platform-specific drafts in a single call, including a short Telegram blurb with a link
7. Gmail sends you one email containing a draft with APPROVE / DECLINE buttons (sendAndWait, double-confirm)
8. On APPROVE, a Gate node fans out to three parallel branches that post to Telegram, and each result is appended to its own tab in Google Sheets
9. On DECLINE, reviewer feedback is injected into the agent and it picks another topic to retry

## Set up steps
Setup takes about 15 to 20 minutes.
1. Connect a Reddit OAuth2 credential with read access
2. Connect an OpenAI API credential (defaults to gpt-4.1-mini; swap to any model you prefer)
3. Connect a Gmail OAuth2 credential for the approval email and set `sendTo` to your own address
4. Connect a Telegram Bot API credential and replace `@your_telegram_channel` in the Post to Telegram node with your real channel handle (your bot must be an admin of the channel)
5. Connect a Google Sheets OAuth2 credential and pick your destination spreadsheet + correct tab in each of the four Google Sheets nodes
6. Edit the default subreddit list inside the Reddit node to match your niche
7. Optional: adjust the GPT prompt inside the Social Posts Generator to match your tone per platform

Detailed per-step notes live inside the workflow as sticky notes.
by Avkash Kakdiya
## How it works
This workflow starts whenever a new lead is submitted through Typeform. It cleans and stores the raw lead data, checks if the email is business-related (not Gmail), and then uses AI to enrich the lead with company details. After enrichment, the workflow scores the lead with AI, updates your HubSpot CRM, and saves everything neatly into Google Sheets for tracking and reporting.

## Step-by-step

### 1. Capture New Lead
- Triggered by a new Typeform submission.
- Collects basic details: Name, Email, Phone, and Message.
- Saves raw lead data into a Google Sheet for backup.
- Stores the basic info in Airtable (avoids duplicates by email).

### 2. Format & Filter Leads
- Formats the incoming data into a clean structure.
- Filters out non-business emails (e.g., Gmail) so only qualified leads continue.

### 3. Enrich Company Information
- Uses AI (GPT-4o-mini) to enrich the lead's company data based on the email domain.
- Returns details like: Company Name, Industry, Headquarters, Employee Count, Website, LinkedIn, and Description.
- Merges this information with the original lead profile and adds metadata (timestamp, workflow ID).

### 4. Score the Lead
- AI analyzes the enriched profile and assigns a lead score (1–10).
- Scoring considers industry fit, company size, contact source, and domain reputation.

### 5. Update CRM & Sheets
- Sends the enriched lead with its score to HubSpot CRM.
- Updates company details, contact info, and custom properties (`lead_score`, LinkedIn, description).
- Logs the fully enriched lead in a Google Sheet for tracking.

## Why use this?
- Automatically enriches and scores every incoming lead.
- Filters out low-value (non-business) emails before wasting CRM space.
- Keeps HubSpot CRM up to date with the latest company and contact info.
- Maintains both raw and enriched lead data in Google Sheets for easy reporting.
- Saves your team hours of manual research and ensures consistent, AI-driven lead qualification.
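The business-email gate in step 2 is a simple domain check. A sketch, assuming an illustrative blocklist (the workflow's actual list may differ; extend it with any free-mail domains you see in practice):

```javascript
// Sketch of the "business email only" filter. The blocklist is an
// assumption — add whatever consumer domains show up in your leads.
const FREE_DOMAINS = new Set([
  'gmail.com', 'googlemail.com', 'yahoo.com',
  'outlook.com', 'hotmail.com', 'icloud.com',
]);

function isBusinessEmail(email) {
  const domain = (email.split('@')[1] || '').toLowerCase();
  return domain.length > 0 && !FREE_DOMAINS.has(domain);
}

isBusinessEmail('jane@acme.io');   // true — continues to enrichment
isBusinessEmail('jane@gmail.com'); // false — stops here
```

The same extracted domain is what the GPT-4o-mini enrichment step in stage 3 works from, so this filter doubles as input validation for the AI call.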
by Bilel Aroua
## 👥 Who is this for?
Creators, marketers, and brands that want to turn a single product photo into premium motion clips, then optionally publish to Instagram/TikTok/YouTube via LATE. No editing skills required.

## ❓ What problem does it solve?
Producing short vertical ads from a static packshot takes time (retouching, motion design, soundtrack, publishing). This workflow automates the entire process: image enhancement → cinematic motion → optional upscale → soundtrack → share.

## 🛠️ What this workflow does
1. Collects a product photo via Telegram.
2. Generates two refined edit prompts + two motion prompts using multi-agent Gemini orchestration.
3. Creates two edited images with Fal.ai Gemini-Flash (image edit).
4. Renders two 5s vertical videos with Kling (via the fal.run queue).
5. Auto-stitches them (FFmpeg API) and optionally upscales with Topaz.
6. Generates a clean ambient soundtrack with MMAudio.
7. Sends previews + final links back on Telegram.
8. Optionally publishes to Instagram, TikTok, YouTube Shorts, and more via LATE.

## ⚡ Setup
- **Telegram:** Bot token (Telegram node).
- **Fal.ai:** HTTP Header Auth (`Authorization: Bearer <FAL_API_KEY>`) for Gemini-Flash edit, Kling queue, FFmpeg compose, Topaz upscale, and MMAudio.
- **Google Gemini** (PaLM credential) for AI agents.
- **ImgBB:** API key for uploading original/edited images.
- **LATE:** create an account at getlate.dev and use your API key for publishing (optional).

## ▶️ How to use
1. Start the workflow and DM your bot a clear product photo (jpg/jpeg/webp).
2. Approve the two still concepts when prompted in Telegram.
3. The orchestrator generates cinematic motion prompts and queues Kling renders.
4. Receive two motion previews, then a stitched final (upscaled + soundtrack).
5. Choose to auto-publish to Instagram/TikTok/YouTube via LATE (optional).

## 🎨 How to customize
- **Art Direction** → tweak the "Art Director" system message (lighting, backgrounds, grading).
- **Motion Flavor** → adjust the "Motion Designer" vocabulary for different camera moves/dynamics.
- **Durations/Aspect** → default is 9:16, 5s; you can change the Kling duration.
- **Soundtrack** → edit the MMAudio prompt to reflect your brand's sonic identity.
- **Publishing** → enable/disable LATE targets; customize captions/hashtags.

## ✅ Prerequisites
- A Telegram bot created via @BotFather.
- A Fal.ai account + API key.
- An ImgBB account + API key.
- (Optional) A LATE account with connected social profiles (sign up at getlate.dev).

💡 Detailed technical notes, architecture, and step-by-step flow explanation are included as sticky notes inside this workflow.

## 🆘 Support
If you need help setting up or customizing this workflow:
- 📧 Email: bilsimaging@gmail.com
- 🌐 Website: bilsimaging.com

I can provide guidance, troubleshooting, or custom workflow adaptations.
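The Kling renders go through a submit-then-poll queue pattern. A hedged sketch of that pattern: the `Authorization: Bearer <FAL_API_KEY>` header comes from the setup notes above, but the status values and response shape shown here are assumptions — check Fal.ai's queue documentation for the exact contract:

```javascript
// Hedged sketch of queue polling for long-running renders. Status names
// and response fields are assumptions, not Fal.ai's documented contract.
function nextAction(statusResponse) {
  switch (statusResponse.status) {
    case 'COMPLETED':   return 'fetch-result';
    case 'IN_QUEUE':
    case 'IN_PROGRESS': return 'wait-and-poll';
    default:            return 'fail';
  }
}

async function pollQueue(statusUrl, apiKey, intervalMs = 5000) {
  for (;;) {
    const res = await fetch(statusUrl, {
      headers: { Authorization: `Bearer ${apiKey}` }, // per the Setup section
    });
    const body = await res.json();
    if (nextAction(body) !== 'wait-and-poll') return body;
    await new Promise(r => setTimeout(r, intervalMs));
  }
}
```

In n8n this loop is usually expressed as an HTTP Request node plus a Wait node and an If node rather than literal code, but the control flow is the same: keep polling while the job is queued or running, stop on completion or error.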
by Punit
# WordPress AI Content Creator

## Overview
Transform a few keywords into professionally written, SEO-optimized WordPress blog posts with custom featured images. This workflow leverages AI to research topics, structure content, write engaging articles, and publish them directly to your WordPress site as drafts ready for review.

## What This Workflow Does

### Core Features
- **Keyword-to-Article Generation:** Converts simple keywords into comprehensive, well-structured articles
- **Intelligent Content Planning:** Uses AI to create logical chapter structures and content flow
- **Wikipedia Integration:** Researches factual information to ensure content accuracy and depth
- **Multi-Chapter Writing:** Generates coherent, contextually aware content across multiple sections
- **Custom Image Creation:** Generates relevant featured images using DALL-E based on article content
- **SEO Optimization:** Creates titles, subtitles, and content optimized for search engines
- **WordPress Integration:** Automatically publishes articles as drafts with proper formatting and featured images

### Business Value
- **Content Scale:** Produce high-quality blog posts in minutes instead of hours
- **Research Efficiency:** Automatically incorporates factual information from reliable sources
- **Consistency:** Maintains professional tone and structure across all generated content
- **SEO Benefits:** Creates search-engine-friendly content with proper HTML formatting
- **Cost Savings:** Reduces the need for external content creation services

## Prerequisites

### Required Accounts & Credentials
- WordPress site with REST API enabled
- OpenAI API access (GPT-4 and DALL-E models)
- WordPress Application Password or JWT authentication
- Public-facing n8n instance for form access (or n8n Cloud)

### Technical Requirements
- WordPress REST API v2 enabled (standard on most WordPress sites)
- WordPress user account with publishing permissions
- n8n instance with the LangChain nodes package installed

## Setup Instructions

### Step 1: WordPress Configuration
1. Enable the REST API (usually enabled by default):
   - Check that `yoursite.com/wp-json/wp/v2/` returns JSON data
   - If not, contact your hosting provider or install a REST API plugin
2. Create an Application Password:
   - In WordPress Admin: Users > Profile
   - Scroll to "Application Passwords"
   - Add a new password with the name "n8n Integration"
   - Copy the generated password (save it securely)
3. Get your WordPress site URL:
   - Note your full WordPress site URL (e.g., `https://yourdomain.com`)

### Step 2: OpenAI Configuration
1. Obtain an OpenAI API key:
   - Visit the OpenAI Platform
   - Create an API key with access to GPT-4 models (for content generation) and DALL-E (for image creation)
2. Add OpenAI credentials in n8n:
   - Navigate to Settings > Credentials
   - Add an "OpenAI API" credential
   - Enter your API key

### Step 3: WordPress Credentials in n8n
Add WordPress API credentials:
- In n8n: Settings > Credentials > "WordPress API"
- URL: your WordPress site URL
- Username: your WordPress username
- Password: the application password from Step 1

### Step 4: Update Workflow Settings
1. Configure the Settings node:
   - Open the "Settings" node
   - Replace the `wordpress_url` value with your actual WordPress URL
   - Keep other settings as default or customize as needed
2. Update credential references:
   - Ensure all WordPress nodes reference your WordPress credentials
   - Verify OpenAI nodes use your OpenAI credentials

### Step 5: Deploy Form (Production Use)
1. Activate the workflow:
   - Toggle the workflow to "Active" status
   - Note the webhook URL from the Form Trigger node
2. Test form access:
   - Copy the form URL
   - Test form submission with sample data
   - Verify the workflow execution completes successfully

## Configuration Details

### Form Customization
The form accepts three key inputs:
- **Keywords:** Comma-separated topics for article generation
- **Number of Chapters:** 1-10 chapters for content structure
- **Max Word Count:** Total article length control

You can modify form fields by editing the "Form" trigger node:
- Add additional input fields (category, author, publish date)
- Change field types (dropdown, checkboxes, file upload)
- Modify validation rules and requirements

### AI Content Parameters

#### Article Structure Generation
The "Create post title and structure" node uses these parameters:
- **Model:** GPT-4-1106-preview for enhanced reasoning
- **Max Tokens:** 2048 for comprehensive structure planning
- **JSON Output:** Structured data for subsequent processing

#### Chapter Writing
The "Create chapters text" node configuration:
- **Model:** GPT-4-0125-preview for consistent writing quality
- **Context Awareness:** Each chapter knows about preceding/following content
- **Word Count Distribution:** Automatically calculates per-chapter length
- **Coherence Checking:** Ensures smooth transitions between sections

### Image Generation Settings
DALL-E parameters in "Generate featured image":
- **Size:** 1792x1024 (optimized for WordPress featured images)
- **Style:** Natural (photographic look)
- **Quality:** HD (higher quality output)
- **Prompt Enhancement:** Adds photography keywords for better results

## Usage Instructions

### Basic Workflow
1. Access the form:
   - Navigate to the form URL provided by the Form Trigger
   - Enter your desired keywords (e.g., "artificial intelligence, machine learning, automation")
   - Select the number of chapters (3-5 recommended for most topics)
   - Set the word count (1000-2000 words typical)
2. Submit and wait:
   - Click submit to trigger the workflow
   - Processing takes 2-5 minutes depending on article length
   - Monitor the n8n execution log for progress
3. Review generated content:
   - Check WordPress admin for the new draft post
   - Review article structure and content quality
   - Verify the featured image is properly attached
   - Edit as needed before publishing

### Advanced Usage

#### Custom Prompts
Modify AI prompts to change:
- **Writing Style:** Formal, casual, technical, conversational
- **Target Audience:** Beginners, experts, general public
- **Content Focus:** How-to guides, opinion pieces, news analysis
- **SEO Strategy:** Keyword density, meta descriptions, heading structure

#### Bulk Content Creation
For multiple articles:
- Create separate form submissions for each topic
- Schedule workflow executions with different keywords
- Use CSV upload to process multiple keyword sets
- Implement a queue system for high-volume processing

## Expected Outputs

### Article Structure
Generated articles include:
- **SEO-Optimized Title:** Compelling, keyword-rich headline
- **Descriptive Subtitle:** Supporting context for the main title
- **Introduction:** ~60 words introducing the topic
- **Chapter Sections:** Logical flow with HTML formatting
- **Conclusions:** ~60 words summarizing key points
- **Featured Image:** Custom DALL-E generated visual

### Content Quality Features
- **Factual Accuracy:** Wikipedia integration ensures reliable information
- **Proper HTML Formatting:** Bold, italic, and list elements for readability
- **Logical Flow:** Chapters build upon each other coherently
- **SEO Elements:** Optimized for search engine visibility
- **Professional Tone:** Consistent, engaging writing style

### WordPress Integration
- **Draft Status:** Articles saved as drafts for review
- **Featured Image:** Automatically uploaded and assigned
- **Proper Formatting:** HTML preserved in the WordPress editor
- **Metadata:** Title and content properly structured

## Troubleshooting

### Common Issues

#### "No Article Structure Generated"
Cause: The AI couldn't create a valid structure from the keywords.
Solutions:
- Use more specific, descriptive keywords
- Reduce the number of chapters requested
- Check OpenAI API quotas and usage
- Verify keywords are in English (the default language)

#### "Chapter Content Missing"
Cause: Individual chapter generation failed.
Solutions:
- Increase max tokens in the chapter generation node
- Simplify chapter prompts
- Check for API rate limiting
- Verify internet connectivity for the Wikipedia tool

#### "WordPress Publication Failed"
Cause: Authentication or permission issues.
Solutions:
- Verify WordPress credentials are correct
- Check that the WordPress user has publishing permissions
- Ensure the WordPress REST API is accessible
- Test WordPress URL accessibility

#### "Featured Image Not Attached"
Cause: Image generation or upload failure.
Solutions:
- Check DALL-E API access and quotas
- Verify image upload permissions in WordPress
- Review image file size and format compatibility
- Test a manual image upload to WordPress

### Performance Optimization

#### Large Articles (2000+ words)
- Increase timeout values in HTTP request nodes
- Consider splitting very long articles into multiple posts
- Implement progress tracking for user feedback
- Add retry mechanisms for failed API calls

#### High-Volume Usage
- Implement a queue system for multiple simultaneous requests
- Add rate limiting to respect OpenAI API limits
- Consider batch processing for efficiency
- Monitor and optimize token usage

## Customization Examples

### Different Content Types

#### Product Reviews
Modify prompts to include:
- Pros and cons sections
- Feature comparisons
- Rating systems
- Purchase recommendations

#### Technical Tutorials
Adjust structure for:
- Step-by-step instructions
- Code examples
- Prerequisites sections
- Troubleshooting guides

#### News Articles
Configure for:
- Who, what, when, where, why structure
- Quote integration
- Fact-checking emphasis
- Timeline organization

### Alternative Platforms

#### Replace WordPress with Other CMS
- **Ghost:** Use the Ghost API for publishing
- **Webflow:** Integrate with Webflow CMS
- **Strapi:** Connect to a headless CMS
- **Medium:** Publish to the Medium platform

#### Different AI Models
- **Claude:** Replace OpenAI with Anthropic's Claude
- **Gemini:** Use Google's Gemini for content generation
- **Local Models:** Integrate with self-hosted AI models
- **Multiple Models:** Use different models for different tasks

### Enhanced Features

#### SEO Optimization
Add nodes for:
- **Meta Description Generation:** AI-created descriptions
- **Tag Suggestions:** Relevant WordPress tags
- **Internal Linking:** Suggest related content links
- **Schema Markup:** Add structured data

#### Content Enhancement
Include additional processing:
- **Plagiarism Checking:** Verify content originality
- **Readability Analysis:** Assess content accessibility
- **Fact Verification:** Multiple-source confirmation
- **Image Optimization:** Compress and optimize images

## Security Considerations

### API Security
- Store all credentials securely in the n8n credential system
- Use environment variables for sensitive configuration
- Regularly rotate API keys and passwords
Monitor API usage for unusual activity Content Moderation Review generated content before publishing Implement content filtering for inappropriate material Consider legal implications of auto-generated content Maintain editorial oversight and fact-checking WordPress Security Use application passwords instead of main account password Limit WordPress user permissions to minimum required Keep WordPress and plugins updated Monitor for unauthorized access attempts Legal and Ethical Considerations Content Ownership Understand OpenAI's terms regarding generated content Consider copyright implications for Wikipedia-sourced information Implement proper attribution where required Review content licensing requirements Disclosure Requirements Consider disclosing AI-generated content to readers Follow platform-specific guidelines for automated content Ensure compliance with advertising and content standards Respect intellectual property rights Support and Maintenance Regular Maintenance Monitor OpenAI API usage and costs Update AI prompts based on output quality Review and update Wikipedia search strategies Optimize workflow performance based on usage patterns Quality Assurance Regularly review generated content quality Implement feedback loops for improvement Test workflow with diverse keyword sets Monitor WordPress site performance impact Updates and Improvements Stay updated with OpenAI model improvements Monitor n8n platform updates for new features Engage with community for workflow enhancements Document custom modifications for future reference Cost Optimization OpenAI Usage Monitor token consumption patterns Optimize prompts for efficiency Consider using different models for different tasks Implement usage limits and budgets Alternative Approaches Use local AI models for cost reduction Implement caching for repeated topics Batch similar requests for efficiency Consider hybrid human-AI content creation License and Attribution This workflow template is provided under MIT 
license. Attribution to original creator appreciated when sharing or modifying. Generated content is subject to OpenAI's usage policies and terms of service.
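The "add retry mechanisms for failed API calls" suggestion under Performance Optimization can be sketched as a small helper in an n8n Code node. This is a minimal illustration, not part of the template itself; `flaky` stands in for whatever HTTP call your node makes.

```javascript
// Retry a flaky async call with exponential backoff.
// `fn` is any async function (e.g. a wrapper around an OpenAI request);
// on failure, the delay doubles before each new attempt.
async function withRetry(fn, { attempts = 4, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait 500ms, 1s, 2s, ... before the next attempt.
      const delay = baseDelayMs * 2 ** i;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError; // all attempts failed
}

// Example: a call that fails twice, then succeeds.
let calls = 0;
const flaky = async () => {
  calls += 1;
  if (calls < 3) throw new Error('rate limited');
  return 'ok';
};

withRetry(flaky, { baseDelayMs: 10 }).then((result) => {
  console.log(result, calls); // 'ok' on the third call
});
```

Capping the number of attempts and backing off exponentially keeps the workflow within OpenAI rate limits instead of hammering the API on every failure.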
by Nguyen Thieu Toan
📖 Overview
A comprehensive flight price monitoring and AI assistant solution built entirely in n8n. It combines automated price tracking with an intelligent conversational flight search via Telegram.

Perfect for:
- ✈️ Tracking flight prices to favorite destinations
- 💰 Getting alerts when prices drop below a threshold
- 🗓️ Planning trips with AI-powered flight searches
- 🌍 Finding the best deals across airlines
- 📱 Managing travel plans through Telegram chat

Requirements: n8n v1.123.0+ or v2.0.0+, a SerpAPI key (500 free searches/month), a Google Gemini API key, and a Telegram bot token.

⚡ What's in the Box
Two powerful workflows:

| Workflow | Function | Trigger |
|----------|----------|---------|
| 🔔 Automated Monitoring | Tracks specific routes, alerts on price drops | Schedule (every 7 days) |
| 💬 AI Flight Assistant | Interactive search with natural language | Telegram messages |

Key capabilities:
- 🎯 Set price thresholds and get instant alerts
- 🤖 Ask questions in natural language (Vietnamese/English)
- 🧠 AI remembers conversation context
- 📊 Compares prices across airlines
- ⚡ Real-time search results from Google Flights

🎯 Key Features
- **📅 Scheduled Price Checks**: Automatic monitoring every 7 days (customizable)
- **💡 Smart AI Assistant**: Understands "find the cheapest flight to Bangkok next weekend"
- **🔔 Instant Alerts**: Telegram notifications when prices drop
- **🧠 Context-Aware**: The AI remembers your preferences and previous searches
- **🌐 Multi-Language**: Handles Vietnamese and English seamlessly
- **📱 Mobile-Ready**: Full control via the Telegram chat interface

Technical highlights: SerpAPI integration for real-time prices, Google Gemini Flash for AI responses, session-based conversation memory, Telegram HTML formatting, automatic date calculations (+5 days for returns).

🏗️ How It Works

Workflow 1: Automated Monitoring
Schedule Trigger → Configure Route → Search Flights → Extract Best Price → Price < Threshold? → Send Alert

Workflow 2: AI Assistant
Telegram Message → AI Agent (understands context) → Flight Search Tools (round-trip/one-way, auto +5 days return) → Format Response (Telegram HTML) → Send to user

🛠️ Setup Guide
1. **API credentials**: Get a SerpAPI key (https://serpapi.com), a Google Gemini API key (https://aistudio.google.com/app/apikey), and a Telegram bot token (@BotFather).
2. **Configure monitoring**: In the Edit Fields node, set the departure/arrival codes, price threshold, and Telegram ID.
3. **AI assistant setup**: Link the Gemini model to the AI Agent, connect the flight search tools, and activate memory.
4. **Activate & test**: Enable the workflow, send a test message to the bot, and verify alerts.

💡 Usage Examples
Automated alert:
✈️ CHEAPEST TICKET
Price: 2,450,000 VND
Airline: Vietjet Air
Time: 06:00 → 08:00

AI chat:
- "Find round-trip tickets Hanoi to Bangkok tomorrow"
- "What's the cheapest flight to Nha Trang next weekend?"
- "Search one-way Da Nang to Singapore on March 15"

👤 About the Author
Nguyen Thieu Toan (Nguyễn Thiệu Toàn / Jay Nguyen)
AI Automation Specialist | n8n Workflow Expert
Contact: 🌐 nguyenthieutoan.com | 📘 Facebook | 💼 LinkedIn | 📧 Email: me@nguyenthieutoan.com
More of Nguyen Thieu Toan's n8n templates: GenStaff Company: genstaff.net

📄 License
Free for commercial and personal use. **Keep author attribution when sharing.**

Ready to never miss a flight deal again? Import this workflow and start tracking prices today! 🚀
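The automatic return-date calculation (+5 days) mentioned in the technical highlights amounts to simple date arithmetic in an n8n Code node. A minimal sketch; the function name and date format are illustrative, not the template's actual fields:

```javascript
// Given an outbound date string (YYYY-MM-DD), compute a default
// return date 5 days later, as used for round-trip searches.
function defaultReturnDate(outboundDate, offsetDays = 5) {
  const d = new Date(outboundDate + 'T00:00:00Z');
  d.setUTCDate(d.getUTCDate() + offsetDays); // rolls over months/years correctly
  return d.toISOString().slice(0, 10);       // back to YYYY-MM-DD
}

console.log(defaultReturnDate('2025-03-15')); // '2025-03-20'
console.log(defaultReturnDate('2025-12-30')); // '2026-01-04'
```

Parsing with an explicit UTC suffix avoids off-by-one dates when the n8n server's timezone differs from the traveler's.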
by Will Carlson
What it does:
Collects cybersecurity news from trusted RSS feeds and uses OpenAI's Retrieval-Augmented Generation (RAG) capabilities with Pinecone to filter for content directly relevant to your organization's tech stack. "Relevant" means the AI looks for news items that mention your specific tools, vendors, frameworks, cloud platforms, programming languages, operating systems, or security solutions, as described in your .txt scope documents. By indexing these documents, the system understands the environment you operate in and can prioritize news that could affect your security posture, compliance, or operational stability. Once filtered, summaries of the most important items are sent to your work email every day.

How it works
- **Pulls in news from multiple cybersecurity-focused RSS feeds**: The workflow automatically collects articles from trusted, high-signal security news sources. These feeds cover threat intelligence, vulnerability disclosures, vendor advisories, and industry updates.
- **Filters articles for recency and direct connection to your documented tech stack**: Using the publish date, it removes stale or outdated content. Then, leveraging your .txt scope documents stored in Pinecone, it checks each article for references to your technologies, vendors, platforms, or security tools.
- **Uses OpenAI to generate and review concise summaries**: For each relevant article, OpenAI creates a short, clear summary of the key points. The AI also evaluates whether the article provides actionable or critical information before passing it through.
- **Embeds your scope into the Pinecone Vector Store (free) for context-aware filtering**: Your scope documents are embedded into a vector store so the AI can "remember" your environment. This context ensures the filtering process understands indirect or non-obvious connections to your tech stack.
- **Aggregates and sends only the most critical items to your work email**: The system compiles the highest-priority news items into one daily digest, so you can review key developments without wading through irrelevant stories.

What you need to do:
- Set up your OpenAI and Pinecone credentials in the workflow.
- Create and configure a Pinecone index (dimension 1536 recommended). Pinecone is free to set up, and a single free index is enough. Use a namespace like `scope`.
- Make sure the embedding model is the same for all of your Pinecone references.
- Submit .txt scope documents listing your technologies, vendors, platforms, frameworks, and security products. The .txt files do not need to be structured; add as much detail as possible.
- Update the AI prompts to accurately describe your company's environment and priorities.
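Under the hood, this kind of context-aware filtering comes down to comparing embedding vectors: an article passes when its embedding is close enough to some part of your scope. A minimal sketch using cosine similarity, with made-up 3-dimensional vectors (real OpenAI embeddings are 1536-dimensional, matching the recommended index dimension):

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Keep an article if its embedding is close enough to any scope embedding.
function isRelevant(articleVec, scopeVecs, threshold = 0.8) {
  return scopeVecs.some((v) => cosineSimilarity(articleVec, v) >= threshold);
}

// Toy example: two scope vectors representing your documented stack.
const scope = [[1, 0, 0], [0, 1, 0]];
console.log(isRelevant([0.9, 0.1, 0], scope)); // true: close to the first scope vector
console.log(isRelevant([0, 0, 1], scope));     // false: orthogonal to both
```

In practice Pinecone performs this nearest-neighbor search server-side; the threshold here plays the same role as a minimum similarity score on query results.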
by Typhoon Team
This n8n template demonstrates how to use Typhoon OCR + LLM to digitize business cards, enrich the extracted details, and save them directly into Google Sheets or any CRM. It works with both Thai and English business cards and even includes an optional step to draft greeting emails automatically.

Use cases: Automatically capture leads at events, enrich contact details before saving them into your CRM, or simply keep a structured database of your professional network.

Good to know
- Two versions of the workflow are provided:
  - 🟢 Without Search API: a cost-free option using only Typhoon OCR + LLM
  - 🔵 With Search API: adds Google Search enrichment for richer profiles (may incur API costs via SerpAPI)
- The Send Email step is optional: include it if you want to follow up instantly, or disable it if not needed.
- Typhoon provides a free API for anyone to sign up and use → opentyphoon.ai

How it works
1. A form submission triggers the workflow with a business card image (JPG/PNG).
2. Typhoon OCR extracts text from the card (supports Thai & English).
3. Typhoon LLM parses the extracted text into structured JSON fields (e.g., name, job title, organization, email).
4. Depending on your chosen path:
   - Version 1: Typhoon LLM enriches the record with job type, level, and sector.
   - Version 2: The workflow calls the Search API (via SerpAPI) to add a profile/company summary.
5. The cleaned and enriched contact is saved to Google Sheets (this can be swapped for your preferred CRM or database).
6. (Optional) Typhoon LLM drafts a short, friendly greeting email, which can be sent automatically via Gmail.

How to use
- The included form trigger is just one example. You can replace it with:
  - A webhook for uploads
  - A file drop in cloud storage
  - Or even a manual trigger for testing
- You can easily change the destination from Google Sheets to HubSpot, Notion, Airtable, or Salesforce.
- The enrichment prompt is customizable: adjust it to classify contacts based on your organization's needs.
Requirements
- Typhoon API key
- Google Sheets API credentials + a prepared spreadsheet
- (Optional) Gmail API credentials for sending emails
- (Optional) SerpAPI key for the Search API enrichment path

Customising this workflow
This AI-powered business card reader can be adapted to many scenarios:
- Event lead capture: Collect cards at conferences and sync them to your CRM automatically.
- Sales enablement: Draft instant greeting emails for new contacts.
- Networking: Keep a clean and enriched database of your professional connections.
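The "parses the extracted text into structured JSON fields" step usually needs a defensive parse, since LLMs often wrap their JSON in a Markdown code fence. A minimal sketch for an n8n Code node; the field names are illustrative, not the template's actual schema:

```javascript
// Defensively parse an LLM reply that should contain one JSON object.
// Strips an optional ```json ... ``` fence first, and falls back to
// nulls when parsing fails so downstream nodes always get the same shape.
function parseCardJson(llmText) {
  const stripped = llmText
    .replace(/^```(?:json)?\s*/i, '') // leading fence, if any
    .replace(/\s*```$/, '')           // trailing fence, if any
    .trim();
  try {
    const data = JSON.parse(stripped);
    return {
      name: data.name ?? null,
      jobTitle: data.job_title ?? null,
      organization: data.organization ?? null,
      email: data.email ?? null,
    };
  } catch {
    return { name: null, jobTitle: null, organization: null, email: null };
  }
}

const reply = '```json\n{"name":"Somchai P.","job_title":"CTO","organization":"Acme Co.","email":"somchai@acme.co"}\n```';
console.log(parseCardJson(reply).email); // 'somchai@acme.co'
```

Returning a fixed shape on failure keeps the Google Sheets append step from breaking on the occasional malformed LLM reply.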