by Bao Duy Nguyen
## Who is this for?

This template is ideal for DevOps engineers, automation specialists, and n8n users who manage multiple workflows and want a reliable version control system for backups. It's especially useful for teams collaborating via GitHub.

## What problem is this workflow solving?

Manually backing up n8n workflows to GitHub can be time-consuming and error-prone. This workflow solves that by automating the backup of new and updated n8n workflows, ensuring your GitHub repository always reflects the latest changes.

## What this workflow does

1. Retrieves all workflows from your local n8n instance.
2. Decodes their content and compares it with the existing GitHub files.
3. Detects newly created or updated workflows.
4. Creates a new Git branch and commits the changes.
5. Opens a pull request (PR) to the main branch.
6. Sends a Slack notification when the PR is created.

The system uses GitHub API, n8n, Merge, Set, and Slack nodes for full automation.

## Setup

- **GitHub credentials:** Add your GitHub API credentials in n8n.
- **Slack integration:** Connect your Slack Bot token if you want PR notifications.
- **Repository details:** Update github_owner, repo_name, and the workflow directory path in the "Define Local Variables" node.
- **n8n API key:** Check this doc.

## How to customize this workflow to your needs

- Change the workflow directory from workflows/ to a custom path.
- Modify the Slack message or add email notification support.
- Add filters to back up only specific workflows based on naming or tags.
- Adjust branch naming conventions or use different GitHub base branches.

This workflow provides a seamless backup and versioning pipeline, minimizing manual Git interactions and supporting collaborative automation development.
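The "decode and compare" step above boils down to comparing the workflow JSON from the n8n API against the base64-encoded file content that the GitHub contents API returns. A minimal sketch of that logic, written as JavaScript in the style of an n8n Code node (the function and field names here are illustrative, not the template's actual node code):

```javascript
// Classify a workflow as created / updated / unchanged by comparing the
// local workflow JSON against the base64 content from GitHub's "get file
// contents" response. A missing GitHub file means the workflow is new.
function detectChange(workflowJson, githubFileBase64) {
  if (!githubFileBase64) return 'created';
  const remote = JSON.parse(
    Buffer.from(githubFileBase64, 'base64').toString('utf8')
  );
  // Re-serialize both sides the same way for a stable string comparison.
  const same = JSON.stringify(workflowJson) === JSON.stringify(remote);
  return same ? 'unchanged' : 'updated';
}
```

Only workflows classified as `created` or `updated` would then flow into the branch/commit/PR steps, which keeps the repository free of no-op commits.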
by VEED
# Create AI screencast videos with VEED and automated slides

## Overview

This n8n workflow automatically generates presentation-style "screen recording" videos with AI-generated slides and a talking-head avatar overlay. You provide a topic and intention, and the workflow handles everything: scriptwriting, slide generation, avatar creation, voiceover, and video composition.

**Output:** Horizontal (16:9) AI-generated videos with animated slides as the main content and a lip-synced avatar in picture-in-picture, ready for YouTube, LinkedIn, or professional presentations.

## What It Does

```
Topic + Intention → Claude writes script → Parallel processing:
├── OpenAI generates avatar → ElevenLabs voiceover → VEED lip-sync
└── FAL Flux Pro generates slides
→ Creatomate composites everything → Saved to Google Drive + logged to Sheets
```

## Pipeline Breakdown

| Step | Tool | What Happens |
|------|------|--------------|
| 1. Script Generation | Claude Sonnet 4 | Creates hook, script (25-40 sec), slide prompts, caption, and avatar description |
| 2. Avatar Generation | OpenAI gpt-image-1 | Generates photorealistic portrait image (1024×1536) |
| 3. Slide Generation | FAL Flux Pro | Creates 5-7 professional slides (1920×1080) with text overlays |
| 4. Voiceover | ElevenLabs | Converts script to natural speech (multiple voice options) |
| 5. Talking Head | VEED Fabric 1.0 | Lip-syncs avatar to audio, creates 9:16 talking head video |
| 6. Video Composition | Creatomate | Combines slides + avatar in 16:9 PiP layout |
| 7. Storage | Google Drive | Uploads final MP4 |
| 8. Logging | Google Sheets | Records all metadata (script, caption, URLs, timestamps) |

## Required Connections

### API Keys (entered in Configuration node)

| Service | Key Type | Where to Get |
|---------|----------|--------------|
| Anthropic | API Key | https://console.anthropic.com/settings/keys |
| OpenAI | API Key | https://platform.openai.com/api-keys |
| ElevenLabs | API Key | https://elevenlabs.io/app/settings/api-keys |
| FAL.ai | API Key | https://fal.ai/dashboard/keys |
| Creatomate | API Key | https://creatomate.com/dashboard/settings |

> ⚠️ **OpenAI Note:** gpt-image-1 requires organization verification. Go to https://platform.openai.com/settings/organization/general to verify.

### n8n Credentials (connect in n8n)

| Node | Credential Type | Purpose |
|------|-----------------|---------|
| 🎬 Generate Talking Head (VEED) | FAL.ai API | VEED video rendering |
| 📤 Upload to Drive | Google Drive OAuth2 | Store final videos |
| 📝 Log to Sheets | Google Sheets OAuth2 | Track all generated content |

## Configuration Options

Edit the ⚙️ Workflow Configuration node to customize:

```
{
  // 📝 CONTENT SETTINGS
  topic: "How AI is transforming content creation",
  intention: "informative",          // informative, lead_generation, disruption
  brand_name: "YOUR_BRAND_NAME",
  target_audience: "sales teams and marketers",
  trending_hashtags: "#AIvideo #ContentCreation #VideoMarketing",

  // 🎨 SLIDE STYLE
  slide_style: "vibrant_colorful",   // See slide styles below

  // 🎥 VIDEO SETTINGS
  video_resolution: "720p",          // VEED only supports 720p
  seconds_per_slide: 6,              // How long each slide shows

  // 🖼️ BACKGROUND (Optional)
  background: "",                    // URL, gradient array, or empty

  // 🔑 API KEYS (Required)
  anthropic_api_key: "YOUR_ANTHROPIC_API_KEY",
  openai_api_key: "YOUR_OPENAI_API_KEY",
  elevenlabs_api_key: "YOUR_ELEVENLABS_API_KEY",
  creatomate_api_key: "YOUR_CREATOMATE_API_KEY",
  fal_api_key: "YOUR_FAL_API_KEY",

  // 🎤 VOICE SELECTION
  voice_selection: "susie",          // cristina, enrique, susie, jeff, custom

  // 🎨 AVATAR OPTIONS (Optional)
  custom_avatar_description: "",     // Leave empty for AI-generated
  custom_avatar_image_url: "",       // Direct URL to use existing image

  // 📝 CUSTOM SCRIPT (Optional)
  custom_script: ""                  // Leave empty for AI-generated
}
```

## Slide Style Options

| Style | Description | Best For |
|-------|-------------|----------|
| dark_professional | Dark gradients, white text, sleek look | Tech, SaaS, premium brands |
| light_modern | Light backgrounds, dark text, clean | Corporate, educational |
| vibrant_colorful | Bold colors, energetic, eye-catching | Social media, startups |
| minimalist | Lots of whitespace, simple, elegant | Luxury, professional services |
| tech_corporate | Blue tones, geometric shapes | Enterprise, B2B |

## Background Options

| Type | Example | Description |
|------|---------|-------------|
| None | "" | Full-bleed layout, slides take 78% width |
| URL | "https://example.com/bg.jpg" | Image background with margins |
| Gradient | ["#ff6b6b", "#feca57", "#48dbfb"] | Gradient background with margins |

## Voice Options

| Voice | Language | Description |
|-------|----------|-------------|
| cristina | Spanish | Female voice |
| enrique | Spanish | Male voice |
| susie | English | Female voice (default) |
| jeff | English | Male voice |
| custom | Any | Use your ElevenLabs voice clone ID |

## Intention Types

| Intention | Content Style | Best For |
|-----------|---------------|----------|
| informative | Educational, value-driven, builds trust | Thought leadership, tutorials |
| lead_generation | Creates curiosity, soft CTA | Product awareness, funnels |
| disruption | Bold, provocative, scroll-stopping | Viral potential, brand awareness |

## Custom Avatar & Script Options

### Custom Avatar Description

Leave custom_avatar_description empty to let Claude decide, or provide your own:

custom_avatar_description: "female marketing influencer, cool, working in tech"

Examples:

- "a woman in her 20s with gym clothes"
- "a bearded man in his 30s wearing a hoodie"
- "a professional woman with glasses in business casual"

### Custom Avatar Image URL

Skip avatar generation entirely by providing a direct URL:

custom_avatar_image_url: "https://example.com/my-avatar.png"

> The image should be portrait orientation, high quality, with the subject looking at the camera.

### Custom Script

Leave custom_script empty to let Claude write it, or provide your own:

custom_script: "This is my custom script. AI is changing how we create content..."

Guidelines for custom scripts:

- Keep it 25-40 seconds when read aloud (60-100 words)
- Avoid special characters for TTS compatibility
- Write naturally, as if speaking

### Behavior Matrix

| custom_avatar_description | custom_avatar_image_url | custom_script | What Claude Generates |
|---------------------------|-------------------------|---------------|-----------------------|
| Empty | Empty | Empty | Avatar + Script + Slides + Caption |
| Provided | Empty | Empty | Script + Slides + Caption |
| Empty | Provided | Empty | Script + Slides + Caption |
| Empty | Empty | Provided | Avatar + Slides + Caption |
| Provided | Provided | Provided | Slides + Caption only |

## Video Layout

The final video uses a picture-in-picture (PiP) layout. Without a background (full bleed), the slides fill 78% of the frame width with the avatar in a 22% column on the right. With a background, the slides (74%) and avatar (20%) sit over the background image with margins and rounded corners.

## Output Per Video Generated

| Asset | Format | Location |
|-------|--------|----------|
| Final Video | MP4 (1920×1080, 60fps) | Google Drive folder |
| Avatar Image | PNG (1024×1536) | tmpfiles.org (temporary) |
| Slide Images | PNG (1920×1080) | FAL CDN (temporary) |
| Voiceover | MP3 | tmpfiles.org (temporary) |
| Metadata | Row entry | Google Sheets |

## Google Sheets Columns

| Column | Description |
|--------|-------------|
| topic | Video topic |
| intention | Content intention used |
| brand_name | Brand mentioned |
| slide_style | Visual style used |
| content_theme | 2-3 word theme summary |
| script | Full voiceover script |
| caption | Ready-to-post caption with hashtags |
| num_slides | Number of slides generated |
| video_url | Google Drive link to final video |
| avatar_video_url | VEED talking head video URL |
| audio_url | Temporary audio URL |
| status | done/error |
| created_at | Timestamp |

## Estimated Costs Per Video

| Service | Usage | Approximate Cost |
|---------|-------|------------------|
| Claude Sonnet 4 | 2K tokens | $0.01 |
| OpenAI gpt-image-1 | 1 image (1024×1536) | ~$0.04-0.08 |
| FAL Flux Pro | 5-7 images (1920×1080) | ~$0.10-0.15 |
| ElevenLabs | 100 words | $0.01-0.02 |
| VEED/FAL.ai | 1 video render | ~$0.10-0.20 |
| Creatomate | 1 video composition | ~$0.10-0.20 |
| **Total** | | ~$0.35-0.65 per video |

> Costs vary based on script length and current API pricing.
## Setup Checklist

### Step 1: Import Workflow

- [ ] Import create-ai-screencast-videos-with-veed-and-automated-slides.json into n8n

### Step 2: Configure API Keys

- [ ] Open the ⚙️ Workflow Configuration node
- [ ] Replace all YOUR_*_API_KEY placeholders with your actual API keys
- [ ] Verify your OpenAI organization at https://platform.openai.com/settings/organization/general

### Step 3: Connect n8n Credentials

- [ ] Click on the 🎬 Generate Talking Head (VEED) node → Add FAL.ai credential
- [ ] Click on the 📤 Upload to Drive node → Add Google Drive OAuth2 credential
- [ ] Click on the 📝 Log to Sheets node → Add Google Sheets OAuth2 credential

### Step 4: Configure Storage

- [ ] Update the 📤 Upload to Drive node with your Google Drive folder URL
- [ ] Update the 📝 Log to Sheets node with your Google Sheets URL
- [ ] Create column headers in your Google Sheet (see Output section)

### Step 5: Customize Content

- [ ] Update topic, brand_name, target_audience, and trending_hashtags
- [ ] Choose your preferred slide_style and voice_selection
- [ ] Optionally configure background, custom_avatar_description, and/or custom_script

### Step 6: Test

- [ ] Execute the workflow
- [ ] Check Google Drive for the output video
- [ ] Verify metadata was logged to Google Sheets

## MCP Integration (Optional)

This workflow can be exposed to Claude Desktop via n8n's Model Context Protocol (MCP) integration.
To enable MCP:

1. Add a Webhook Trigger node to the workflow (in addition to the Manual Trigger)
2. Connect it to the ⚙️ Workflow Configuration node
3. Go to Settings → Instance-level MCP → Enable the workflow
4. Configure Claude Desktop with your n8n MCP server URL

Claude Desktop Configuration (Windows):

```
{
  "mcpServers": {
    "n8n-mcp": {
      "command": "supergateway",
      "args": [
        "--streamableHttp",
        "https://YOUR_N8N_INSTANCE.app.n8n.cloud/mcp-server/http",
        "--header",
        "authorization:Bearer YOUR_MCP_ACCESS_TOKEN"
      ]
    }
  }
}
```

> Note: Install supergateway globally first: npm install -g supergateway

## Limitations & Notes

### Technical Limitations

- **tmpfiles.org:** Temporary file URLs expire after ~1 hour. Final videos are safe in Google Drive.
- **VEED processing:** Takes 1-3 minutes for the talking head.
- **Creatomate processing:** Takes 30-60 seconds for composition.
- **Total workflow time:** ~3-5 minutes per video.

### Content Considerations

- Scripts are optimized for 25-40 seconds (TTS-friendly)
- Avatar images are AI-generated (not real people)
- Slides are dynamically generated based on script length
- Slide count: 5-7 slides depending on script duration

### Best Practices

- **Start simple:** Test with default settings before customizing
- **Review scripts:** Claude generates good content, but review before posting
- **Monitor costs:** Check API usage dashboards weekly
- **Use backgrounds:** Adding a background image creates a more polished look
- **Match voice to content:** Use Spanish voices for Spanish content

## Troubleshooting

| Issue | Solution |
|-------|----------|
| "Organization must be verified" | Verify at platform.openai.com/settings/organization/general |
| VEED authentication error | Re-add the FAL.ai credential to the VEED node |
| Google Drive "no binary field" | Ensure Download Video outputs to a binary field |
| JSON parse error from Claude | Workflow has fallback content; check Claude node output |
| Slides not matching script | Increase seconds_per_slide for fewer slides |
| Avatar cut off in PiP | Avatar is designed for right-side placement |
| MCP "Server disconnected" | Install supergateway globally: npm install -g supergateway |
| Render timeout | Increase wait time in the "⏳ Wait for Render" node |

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 2.1 | Jan 2026 | Renamed workflow, improved documentation with section sticky notes, consolidated setup information |
| 2.0 | Jan 2026 | Added dynamic slide count, background options, FAL Flux Pro for slides, improved PiP layout |
| 1.0 | Jan 2026 | Initial release with fixed slide count, basic composition |

## Credits

Built with:

- **n8n** - Workflow automation
- **Anthropic Claude** - Script & slide prompt generation
- **OpenAI** - Avatar image generation
- **FAL.ai** - Slide image generation (Flux Pro)
- **ElevenLabs** - Voice synthesis
- **VEED Fabric** - AI lip-sync video rendering
- **Creatomate** - Video composition
- **Google Workspace** - Storage & logging
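The "5-7 slides depending on script duration" behavior can be approximated with a small calculation. This is a heuristic sketch only (the ~150 words-per-minute speaking rate and the clamping are assumptions, not the template's actual logic), written as JavaScript in the style of an n8n Code node:

```javascript
// Estimate slide count from script length, assuming a speaking rate of
// roughly 150 words per minute, then clamp to the documented 5-7 range.
function estimateSlides(script, secondsPerSlide) {
  const words = script.trim().split(/\s+/).length;
  const durationSec = (words / 150) * 60;        // assumed ~150 wpm
  const n = Math.round(durationSec / secondsPerSlide);
  return Math.min(7, Math.max(5, n));            // clamp to 5-7 slides
}
```

This also explains the troubleshooting tip above: raising `seconds_per_slide` lowers the estimated slide count for the same script.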
by Roman Rozenberger
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Who's it for

Content creators, marketers, and researchers who need to monitor multiple RSS feeds and get AI-generated summaries without manual work.

## How it works

This workflow automatically monitors RSS feeds, filters new articles from the last X days, checks for duplicates, and generates structured AI summaries. It fetches full article content, converts HTML to markdown, and uses Gemini AI to create consistent summaries with quick takeaways, key points, and practical insights. All data is saved to Google Sheets for easy access and sharing.

The system processes RSS feeds in batches, ensuring no duplicate article is processed twice by checking existing URLs in your Google Sheet. Each new article gets a comprehensive AI summary that includes the main message, key takeaways, important points, and practical applications.

## Requirements

- Google Sheets access
- OpenRouter API key for the Gemini AI model (or another language model)
- RSS feed URLs to monitor

## How to set up

Copy the template Google Sheet, add your RSS feeds in the "RSS FEEDS" tab, configure Google Sheets and OpenRouter credentials in n8n, and adjust the time filter in the Settings node. The workflow can run manually or on a schedule every hour.

## How to customize

Modify the AI prompts for different summary styles, change the time filter duration, add more data fields to Google Sheets, or switch to a different AI model in the LLM Chat Model node.
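The two filtering steps described above (last X days, skip already-processed URLs) can be sketched as one function. A minimal illustration in JavaScript, in the style of an n8n Code node; the `link`/`isoDate` field names and the `knownUrls` input are assumptions about the feed items and the sheet of processed URLs:

```javascript
// Keep only feed items that are newer than the cutoff AND whose URL is not
// already recorded in the Google Sheet of processed articles.
function filterNewArticles(items, knownUrls, maxAgeDays, now = new Date()) {
  const cutoff = now.getTime() - maxAgeDays * 24 * 60 * 60 * 1000;
  const seen = new Set(knownUrls);   // Set gives O(1) duplicate checks
  return items.filter(
    (it) => new Date(it.isoDate).getTime() >= cutoff && !seen.has(it.link)
  );
}
```

Running the duplicate check before the AI step matters: it keeps LLM token spend proportional to genuinely new articles rather than to feed size.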
by Dinakar Selvakumar
# Complete AI support system using website data (RAG pipeline)

This template provides a full end-to-end Retrieval-Augmented Generation (RAG) system using n8n. It includes two connected workflows:

1. A data ingestion pipeline that crawls a website and stores its content in a vector database.
2. A customer support chatbot that retrieves this knowledge and answers user queries in real time.

Together, these workflows let you turn any public website into an intelligent AI-powered support assistant grounded in real business data.

## Use cases

- AI customer support chatbot for your website
- Internal company knowledge assistant
- Product FAQ automation
- Helpdesk or IT support bot
- AI receptionist for services
- Semantic search over company content

## How it works

### Ingestion workflow

1. Discover all URLs from the website sitemap.
2. Filter and normalize the URLs.
3. Fetch each page and extract readable text.
4. Clean HTML into plain text.
5. Split text into overlapping chunks.
6. Generate embeddings using OpenAI.
7. Store vectors in Pinecone with metadata.

### Chatbot workflow

1. A user sends a message via the chat webhook.
2. The agent queries Pinecone for relevant knowledge.
3. Retrieved content is passed to OpenAI.
4. OpenAI generates a grounded response.
5. Short-term memory maintains conversation context.

## How to use

### Step 1 – Run ingestion

1. Set your target website URL.
2. Add Firecrawl, OpenAI, and Pinecone credentials.
3. Create a Pinecone index.
4. Execute the ingestion workflow.
5. Wait until all pages are indexed.

### Step 2 – Run chatbot

1. Deploy the chatbot workflow.
2. Set the same Pinecone index and namespace.
3. Copy the chat webhook URL.
4. Connect it to a website, chat widget, or WhatsApp bot.
5. Start chatting with your AI assistant.

## Requirements

- Firecrawl account
- OpenAI API key
- Pinecone account and index
- Public website to crawl
- Optional: frontend chat interface

## Good to know

- The chatbot never answers from memory for business data. All company knowledge comes from Pinecone. If Pinecone returns nothing, the bot fails safely.
- HTML cleaning is basic and can be replaced with Mozilla Readability, Jina Reader, or Unstructured.
- Chunk size and overlap affect retrieval quality.
- Pinecone can be replaced with Qdrant, Weaviate, Supabase Vector, or Chroma.

## Customising this workflow

You can extend this system by:

- Adding PDF or document loaders
- Scheduling ingestion daily or weekly
- Connecting CRM or ticketing systems
- Adding appointment booking tools
- Switching to local or open-source models
- Adding multilingual support
- Storing raw content in a database
- Adding feedback or logging

## What this n8n template demonstrates

- Real-world RAG architecture
- Web crawling pipelines
- Text chunking strategies
- Vector database integration
- AI agent orchestration
- Memory-controlled conversations
- Production-grade AI support systems
- End-to-end AI infrastructure with n8n

## Architecture overview

This template follows a modern AI system design:

Website → Ingestion → Embeddings → Pinecone → Retrieval → OpenAI → User

It separates data preparation (offline), knowledge storage, and runtime inference, which makes the system scalable, maintainable, and safe for production use.

## Need a custom setup?

If you want a similar AI system built for your business (custom data sources, CRM integration, WhatsApp bots, booking systems, dashboards, or private deployments), feel free to reach out at dinakars2003@gmail.com. I help companies design and deploy production-ready AI workflows.
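To make "chunk size and overlap" concrete: the "split text into overlapping chunks" ingestion step can be sketched as a fixed-size character window with overlap. This is an illustrative JavaScript version, not the template's actual node; production pipelines usually split on sentence or token boundaries instead of raw characters:

```javascript
// Split text into fixed-size chunks with `overlap` characters shared between
// consecutive chunks, so that sentences near a boundary appear in both chunks
// and remain retrievable.
function chunkText(text, size = 1000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

Each chunk would then be embedded with OpenAI and upserted to Pinecone with metadata such as the source URL, which is what lets the chatbot cite where an answer came from.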
by Ranjan Dailata
The Scrape and Analyze Amazon Product Info with Decodo + OpenAI workflow automates the process of extracting product information from an Amazon product page and transforming it into meaningful insights. The workflow then uses OpenAI to generate descriptive summaries, competitive positioning insights, and structured analytical output based on the extracted information.

## Disclaimer

Please note: this workflow is only available on n8n self-hosted, as it makes use of the community node for Decodo web scraping.

## Who is this for?

This workflow is ideal for:

- E-commerce product researchers
- Marketplace sellers (Amazon, Flipkart, Shopify, etc.)
- Competitive intelligence teams
- Product comparison bloggers and reviewers
- Pricing and product analytics engineers
- Automation builders needing AI-powered product insights

## What problem is this workflow solving?

Manually extracting Amazon product details, ads, pricing, reviews, and competitive signals is:

- Time-consuming
- Spread across multiple tools
- Difficult to analyze at scale
- Not structured for reporting
- Hard to compare across products objectively

This workflow automates:

- Web scraping of Amazon product pages
- Extraction of product features and ad listings
- AI-generated product summaries
- Competitive positioning analysis
- Generation of structured product insight output
- Export to Google Sheets for tracking and reporting

## What this workflow does

This workflow performs an end-to-end product intelligence pipeline:

1. **Data Collection** - Scrapes an Amazon product page using Decodo and retrieves product details and advertisement placements.
2. **Data Extraction** - Extracts product specs, key feature descriptions, ads data, and supplemental metadata.
3. **AI-Driven Analysis** - Generates a descriptive product summary, competitive positioning insights, and a structured product insight schema.
4. **Data Consolidation** - Merges descriptive, analytical, and structured outputs.
5. **Export & Persistence** - Aggregates results and writes the final dataset to Google Sheets for tracking, comparison, reporting, and product research archives.

## Setup

### Prerequisites

- If you are new to Decodo, please sign up at visit.decodo.com
- n8n instance
- Decodo API credentials
- OpenAI API credentials
- Make sure to install the Decodo community node.

### Required Credentials

- **Decodo API:** Go to Credentials, add Decodo API, enter your API key, and save as "Decodo Credentials account".
- **OpenAI API:** Go to Credentials, select OpenAI, enter your API key, and save as "OpenAi account".
- **Google Sheets:** Add Google Sheets OAuth, authorize via Google, and save under your desired account.

### Inputs to configure

Modify in the Set the Input Fields node:

- product_url = https://www.amazon.in/Sony-DualSense-Controller-Grey-PlayStation/dp/B0BQXZ11B8

## How to customize this workflow to your needs

You can easily adapt this workflow for various use cases:

- **Change the product being analyzed:** Modify product_url.
- **Change the AI model:** In the OpenAI nodes, replace gpt-4.1-mini, or use Gemini, Claude, Mistral, or Groq (if supported).
- **Customize the insight schema:** Edit the Product Insights node to include sustainability markers, sentiment extraction, pricing bands, safety compliance, or brand comparisons.
- **Expand data extraction:** Extract product reviews, FAQs, Q&A, seller information, or delivery and logistics signals.
- **Change the output destination:** Replace Google Sheets with PostgreSQL, MySQL, Notion, Slack, Airtable, webhook delivery, or CSV export.
- **Turn it into a batch processor:** Loop over multiple ASINs, category listings, or search results pages.

## Summary

This workflow provides a complete automated product intelligence engine, combining Decodo's scraping capabilities with OpenAI's analytical reasoning to transform Amazon product pages into structured insights, competitive analysis, and summarized evaluations, automatically stored for reporting and comparison.
by Automate With Marc
# 🔥 Daily Web Scraper & AI Summary with Firecrawl + Email Automation

Need to extract and summarize web content from a site that doesn't have an API? This workflow runs daily to scrape a web page using Firecrawl, summarize the content with OpenAI, and send it directly to your email — fully automated.

Watch the full step-by-step video tutorial here: https://www.youtube.com/@Automatewithmarc

## 🔧 How It Works

1. **Daily Trigger** – Starts the workflow every 24 hours.
2. **Firecrawl Node** – Crawls and extracts structured data from any web page you specify.
3. **OpenAI Node (Optional)** – Processes and summarizes the raw content using a prompt you control.
4. **Gmail Node** – Sends the final summary or content snapshot to your email inbox.

## ✅ Perfect For

- Business analysts tracking daily market or industry news
- Researchers and founders automating competitive intelligence
- Anyone who wants web data delivered without coding or scraping scripts

## 🪜 Setup Instructions

1. **Firecrawl API Key** – Sign up and insert your key in the credentials.
2. **Update Target URL** – Edit the URL in the Firecrawl node to your desired site.
3. **Customize the Prompt** – Tailor the OpenAI prompt to extract the insights you want.
4. **Connect Gmail** – Add your Gmail credentials and set your recipient email.

## 🧰 Built With

- Firecrawl (web scraping without code)
- OpenAI (summarizing and insight extraction)
- Gmail (automated notifications)
- n8n (workflow automation engine)
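Under the hood, the Firecrawl node amounts to a single authenticated HTTP call. As a rough sketch, a POST to Firecrawl's v1 scrape endpoint (https://api.firecrawl.dev/v1/scrape, with your API key as a Bearer token) might carry a body like the following; the exact field names should be checked against the current Firecrawl API docs before use:

```json
{
  "url": "https://example.com/news",
  "formats": ["markdown"],
  "onlyMainContent": true
}
```

Requesting markdown output and main-content-only extraction keeps the payload small and clean before it reaches the OpenAI summarization prompt.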
by Rakin Jakaria
## How it works

This project creates a personal AI knowledge assistant that operates through Telegram. The assistant can extract summaries from YouTube videos or online articles, store them in Google Sheets for later reference, and retrieve stored summaries when requested by the user.

## Step-by-step

1. **Google Sheets Trigger:** The workflow starts by detecting a new YouTube or article URL added to a dedicated sheet (**Sheet2**). It checks whether the link has already been processed.
2. **Link Type Detection:** The system identifies whether the URL is from YouTube or a standard article.
3. **Data Retrieval:** If it's YouTube, Apify fetches the transcript. If it's an article, an HTTP Request node fetches the webpage content.
4. **AI Summarization:** The transcript or article content is passed to **Google Gemini** for refined summarization.
5. **Google Sheets Storage:** The summary and title are appended to another sheet (**Sheet1**) for long-term storage, along with a "Stored" status update in Sheet2.
6. **Telegram Assistant:** A Telegram Trigger listens for messages from the user. The assistant searches stored summaries in Google Sheets. If a match is found, it returns the result to the user on Telegram; otherwise, it politely apologizes.
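The Link Type Detection step can be as simple as a hostname check. A minimal JavaScript sketch in the style of an n8n Code node (the specific hostname list is an assumption about what should count as a YouTube link):

```javascript
// Classify a URL as 'youtube', 'article', or 'invalid' based on its hostname.
function detectLinkType(url) {
  try {
    const host = new URL(url).hostname.replace(/^www\./, '');
    if (host === 'youtube.com' || host === 'youtu.be' || host === 'm.youtube.com') {
      return 'youtube';
    }
    return 'article';
  } catch {
    return 'invalid'; // new URL() throws on malformed input
  }
}
```

The result would drive the branch between the Apify transcript fetch and the plain HTTP Request node.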
by Michael Muenzer
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Generates relevant keywords and questions from a customer profile. Keyword data is enriched via Ahrefs, and everything is stored in a Google Sheet. This is great for market and customer research: it captures search intent for a well-defined audience and delivers relevant, actionable data in a fraction of the time that manual research takes.

## How it works

1. We define a customer profile in the 'Data' node.
2. We use an OpenAI LLM to fetch relevant search intent as keywords and questions.
3. We use an SEO MCP server to fetch keyword data from Ahrefs' free tooling.
4. The fetched data is stored in the Google Sheet.

## Set up steps

1. Copy the Google Sheet template and add it in all Google Sheets nodes.
2. Make sure that n8n has read & write permissions for your Google Sheet.
3. Add your list of domains in the first column of the Google Sheet.
4. Add MCP credentials for seo-mcp.
5. Add OpenAI API credentials.
by vinci-king-01
## How it works

This workflow automatically extracts data from invoice documents (PDFs and images) and processes them through a comprehensive validation and approval system.

## Key Steps

1. **Multi-Input Triggers** - Accepts invoices via email attachments or direct file uploads through a webhook.
2. **AI-Powered Extraction** - Uses ScrapeGraphAI to extract structured data from invoice documents.
3. **Data Cleaning & Validation** - Processes and validates extracted data against business rules.
4. **Approval Workflow** - Routes invoices requiring approval through a multi-stage approval process.
5. **System Integration** - Automatically sends validated invoices to your accounting system.

## Set up steps

Setup time: 10-15 minutes

1. **Configure ScrapeGraphAI credentials** - Add your ScrapeGraphAI API key for invoice data extraction.
2. **Set up Telegram connection** - Connect your Telegram account for approval notifications.
3. **Configure email trigger** - Set up an IMAP connection for processing emailed invoices.
4. **Customize validation rules** - Adjust business rules, amount thresholds, and vendor lists.
5. **Set up accounting system integration** - Configure the HTTP request node with your accounting system's API endpoint.
6. **Test the workflow** - Upload a sample invoice to verify the extraction and approval process.

## Features

- **Multi-format support:** PDF, PNG, JPG, JPEG, TIFF, BMP
- **Intelligent validation:** Business rules, duplicate detection, amount thresholds
- **Approval automation:** Multi-stage approval workflow with role-based routing
- **Data quality scoring:** Confidence levels and completeness analysis
- **Audit trail:** Complete processing history and metadata tracking
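The validation step (business rules, duplicate detection, amount thresholds) can be sketched as one function. This is a hypothetical illustration: the field names (`invoice_id`, `vendor`, `total`) and thresholds are placeholders, not the template's actual schema:

```javascript
// Validate an extracted invoice against simple business rules and decide
// whether it needs to be routed through the approval branch.
function validateInvoice(inv, opts) {
  const issues = [];
  if (!inv.invoice_id) issues.push('missing invoice_id');
  if (!(inv.total > 0)) issues.push('invalid total');
  if (opts.approvedVendors && !opts.approvedVendors.includes(inv.vendor)) {
    issues.push('unknown vendor');
  }
  if (opts.seenIds && opts.seenIds.has(inv.invoice_id)) {
    issues.push('duplicate'); // same invoice ID already processed
  }
  const needsApproval = inv.total >= opts.approvalThreshold;
  return { valid: issues.length === 0, needsApproval, issues };
}
```

Invoices with `needsApproval: true` would go to the Telegram approval branch, while clean low-value invoices could flow straight to the accounting-system HTTP request.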
by vinci-king-01
## How it works

Turn Amazon into your personal competitive intelligence goldmine! This AI-powered workflow automatically monitors Amazon markets 24/7, delivering deep competitor insights and pricing intelligence that would otherwise take 10+ hours of manual research weekly.

## Key Steps

1. **Daily Market Scan** - Runs automatically at 6:00 AM UTC to capture fresh competitive data.
2. **AI-Powered Analysis** - Uses ScrapeGraphAI to intelligently extract pricing, product details, and market positioning.
3. **Competitive Intelligence** - Analyzes competitor strategies, pricing gaps, and market opportunities.
4. **Keyword Goldmine** - Identifies high-value keyword opportunities your competitors are missing.
5. **Strategic Insights** - Generates actionable recommendations for pricing and positioning.
6. **Automated Reporting** - Delivers comprehensive market reports directly to Google Docs.

## Set up steps

Setup time: 15-20 minutes

1. **Configure ScrapeGraphAI credentials** - Add your ScrapeGraphAI API key for intelligent web scraping.
2. **Set up Google Docs integration** - Connect Google OAuth2 for automated report generation.
3. **Customize the Amazon search URL** - Target your specific product category or market niche.
4. **Configure IP rotation** - Set up proxy rotation if needed for large-scale monitoring.
5. **Test with sample products** - Start with a small product set to validate data accuracy.
6. **Set competitive alerts** - Define thresholds for price changes and market opportunities.

Save 10+ hours weekly while staying ahead of your competition with real-time market intelligence!
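To make the "pricing gaps" analysis concrete, here is an illustrative JavaScript calculation in the style of an n8n Code node. The field names and the min/median comparison are assumptions for the sketch, not the template's actual logic:

```javascript
// Compare your price against scraped competitor prices: gap to the cheapest
// listing, gap to the median, and whether you currently undercut everyone.
function pricingGap(myPrice, competitorPrices) {
  const sorted = [...competitorPrices].sort((a, b) => a - b);
  const min = sorted[0];
  const median = sorted[Math.floor(sorted.length / 2)];
  return {
    vsCheapest: +(myPrice - min).toFixed(2),
    vsMedian: +(myPrice - median).toFixed(2),
    isCheapest: myPrice < min,
  };
}
```

Values like `vsMedian` are exactly the kind of threshold the "set competitive alerts" step would compare against before flagging a price change.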
by Anna Bui
This n8n template automatically syncs website visitors identified by RB2B into your Attio CRM, creating comprehensive contact records and associated sales deals for immediate follow-up.

Perfect for sales teams who want to capture every website visitor as a potential lead without manual data entry!

## Good to know

- RB2B identifies anonymous website visitors and sends structured data via Slack notifications.
- The workflow prevents duplicate contacts by checking email addresses before creating new records.
- All RB2B leads are automatically tagged with source tracking for easy identification.

## How it works

1. RB2B sends website visitor notifications to your designated Slack channel with visitor details.
2. The workflow extracts structured data from the Slack messages, including name, email, company, LinkedIn, and location.
3. It searches Attio CRM to check whether the person already exists, based on email address.
4. For new visitors, it creates a complete contact record with all available information.
5. For existing contacts, it updates their record and manages deal creation intelligently.
6. It automatically creates sales deals tagged as "RB2B Website Visitor" for proper lead tracking.

## How to use

- Configure RB2B to send visitor notifications to a dedicated Slack channel.
- The Slack trigger can be replaced with other triggers, like webhooks, if you prefer different notification methods.
- Customize the deal naming conventions and stages to match your sales pipeline.

## Requirements

- RB2B account with Slack integration enabled
- Attio CRM account with API access
- Slack workspace with bot permissions for the designated RB2B channel

## Customising this workflow

- Modify deal stages and values based on your sales process.
- Add lead scoring based on company domain or visitor behavior patterns.
- Integrate additional enrichment APIs to enhance contact data.
- Set up automated email sequences or Slack notifications for high-value leads.
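The extraction step (pulling name, email, company, LinkedIn, and location out of the Slack message text) can be sketched with a label-based regex. The `Label: value` line format below is an assumption about how the RB2B notification is laid out; adjust the labels to match what your channel actually receives:

```javascript
// Parse an RB2B-style Slack notification into a flat contact object.
// Each field is expected on its own "Label: value" line; missing labels
// come back as null rather than throwing.
function parseRb2bMessage(text) {
  const get = (label) => {
    const m = text.match(new RegExp(`${label}:\\s*(.+)`, 'i'));
    return m ? m[1].trim() : null;
  };
  return {
    name: get('Name'),
    email: get('Email'),
    company: get('Company'),
    linkedin: get('LinkedIn'),
    location: get('Location'),
  };
}
```

The `email` field then drives the duplicate check against Attio before any contact or deal is created.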
by Paul
# 🚀 Google Search Console MCP Server

## 📋 Description

This n8n workflow serves as a Model Context Protocol (MCP) server, connecting MCP-compatible AI tools (like Claude) directly to the Google Search Console APIs. With this workflow, users can automate critical SEO tasks and manage Google Search Console data effortlessly via MCP endpoints.

Included functionalities:

- 📌 List verified sites
- 📌 Retrieve detailed site information
- 📌 Access Search Analytics data
- 📌 Submit and manage sitemaps
- 📌 Request URL indexing

OAuth2 is fully supported for secure and seamless API interactions.

## 🛠️ Setup Instructions

### 🔑 Prerequisites

- **n8n instance** (cloud or self-hosted)
- Google Cloud project with these APIs enabled:
  - Google Search Console API
  - Web Search Indexing API
- OAuth2 credentials from Google Cloud

### ⚙️ Workflow Setup

**Step 1: Import Workflow.** Open n8n, select "Import from JSON", and paste this workflow JSON.

**Step 2: Configure OAuth2 Credentials.** Navigate to Settings → Credentials and add new Google OAuth2 API credentials using the Client ID and Client Secret from Google Cloud, with these scopes:

- https://www.googleapis.com/auth/webmasters.readonly
- https://www.googleapis.com/auth/webmasters
- https://www.googleapis.com/auth/indexing

**Step 3: Configure Webhooks.** Webhook URLs auto-generate in the MCP Server Trigger node. Ensure the webhooks are publicly accessible via HTTPS.

**Step 4: Testing.** Test your endpoints with sample HTTP requests to confirm everything is working correctly.

## 🎯 Usage Examples

- **List Sites:** Fetch all verified Search Console sites.
- **Get Site Info:** Get detailed information about a particular site.
- **Search Analytics:** Pull metrics such as clicks, impressions, and rankings.
- **Submit Sitemap:** Automatically submit sitemaps.
- **Request URL Indexing:** Trigger Google's indexing for specific URLs instantly.

## 🚩 Use Cases & Applications

- SEO automation workflows
- AI-driven SEO analytics
- Real-time website performance monitoring
- Automated sitemap management
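For the Search Analytics endpoint, the workflow ultimately sends a POST to the Search Console API's searchanalytics.query method (`https://www.googleapis.com/webmasters/v3/sites/{siteUrl}/searchAnalytics/query`). A small JavaScript helper sketch that builds the request body for a "last N days" query; the default dimensions and row limit are illustrative choices:

```javascript
// Build a Search Analytics query body covering the last `days` days.
// `today` is injectable so the function stays deterministic and testable.
function analyticsQueryForLastDays(days, dimensions = ['query', 'page'],
                                   today = new Date('2026-01-31')) {
  const iso = (d) => d.toISOString().slice(0, 10); // YYYY-MM-DD as the API expects
  const start = new Date(today.getTime() - days * 86400000);
  return { startDate: iso(start), endDate: iso(today), dimensions, rowLimit: 250 };
}
```

An MCP "Search Analytics" tool call would pass this body along with the OAuth2 credential configured in Step 2, and receive per-query/per-page clicks, impressions, CTR, and position in response.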