by Franck Fambou
⚠️ IMPORTANT: This template requires a self-hosted n8n instance because it uses community nodes (MCP tools). It will not work on n8n Cloud. Make sure you have access to a self-hosted n8n instance before using this template.

## Overview

This workflow lets a Google Gemini-powered AI Agent orchestrate multi-source web intelligence using MCP (Model Context Protocol) tools such as Firecrawl, Brave Search, and Apify. Users interact with the agent in natural language; the agent then invokes the appropriate external data collection tools, processes the results, and automatically organizes them into structured spreadsheets. With built-in memory, flexible tool execution, and conversational capabilities, this workflow acts as a multi-agent research assistant capable of retrieving, synthesizing, and delivering actionable insights in real time.

## How the system works

### AI Agent + MCP Pipeline

- **User Interaction**: A chat message is received and forwarded to the AI Agent.
- **AI Orchestration**: The agent, powered by Google Gemini, decides which MCP tools to invoke based on the query.
  - **Firecrawl-MCP**: Recursive web crawling and content extraction.
  - **Brave-MCP**: Real-time web search with structured results.
  - **Apify-MCP**: Automated web scraping tasks with scalable execution.
- **Memory Management**: A memory module stores context across conversations, ensuring multi-turn reasoning and task continuity.
- **Spreadsheet automation**: Results are structured in a new, automatically created Google Spreadsheet, enriched with formatting and additional metadata.
- **Data processing**: The workflow generates the spreadsheet content, updates the sheet, and enriches results via HTTP requests and field edits.
- **Delivery of results**: Users receive a structured, contextualized dataset ready for review, analysis, or integration into other systems.
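To make the "Data processing" step more concrete, here is a minimal sketch of what the "Generate Spreadsheet Content" Code node could look like in n8n JavaScript. The input field names (`url`, `title`, `snippet`, `tool`, `metadata`) are assumptions, not guaranteed outputs of the MCP tools; adapt them to what your tools actually return.

```javascript
// Hypothetical "Generate Spreadsheet Content" Code node.
// Flattens aggregated MCP tool results into one row per finding,
// matching the sheet columns described below.
return $input.all().map((item) => {
  const r = item.json; // field names here are assumptions
  return {
    json: {
      url: r.url ?? '',
      title: r.title ?? 'Untitled',
      description: String(r.snippet ?? r.content ?? '').slice(0, 500),
      source: r.tool ?? 'unknown',         // e.g. 'brave', 'firecrawl', 'apify'
      timestamp: new Date().toISOString(), // ISO 8601, matching the sheet format
      metadata: JSON.stringify(r.metadata ?? {}),
    },
  };
});
```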
## Configuration instructions

Estimated setup time: 45 minutes

### Prerequisites

- Self-hosted n8n instance (v0.200.0 or higher recommended)
- Google Gemini API key
- MCP-compatible nodes (Firecrawl, Brave, Apify) configured
- Google Sheets credentials for spreadsheet automation

### Detailed configuration steps

#### Step 1: Configuring the AI Agent

**AI Agent node**:
- Select Google Gemini as the LLM model
- Configure your Google Gemini API key in the n8n credentials
- Set the system prompt to guide the agent's behavior
- Connect the Simple Memory node to enable context tracking

#### Step 2: Integrating MCP tools

**Firecrawl-MCP configuration**:
- Install the @n8n/n8n-nodes-firecrawl-mcp package
- Configure your Firecrawl API key
- Set crawling parameters (depth, CSS selectors)

**Brave-MCP configuration**:
- Install the @n8n/n8n-nodes-brave-mcp package
- Add your Brave Search API key
- Configure search filters (region, language, SafeSearch)

**Apify-MCP configuration**:
- Install the @n8n/n8n-nodes-apify-mcp package
- Configure your Apify credentials
- Select the appropriate actors for your use cases

#### Step 3: Spreadsheet automation

**"Create Spreadsheet" node**:
- Configure Google Sheets authentication (OAuth2 or Service Account)
- Set the file name with dynamic timestamps
- Specify the destination folder in Google Drive

**"Generate Spreadsheet Content" node**:
- Transform the agent's outputs into tabular format
- Define the columns: URL, Title, Description, Source, Timestamp
- Configure data formatting (dates, links, metadata)

**"Update Spreadsheet" node**:
- Insert the data into the created sheet
- Apply automatic formatting (headers, colors, column widths)
- Add summary formulas if necessary

#### Step 4: Post-processing and delivery

**"Data Enrichment Request" node** (formerly "HTTP Request1"):
- Configure optional API calls to enrich the data
- Add additional metadata (geolocation, sentiment, categorization)
- Handle errors and timeouts

**"Edit Fields" node**:
- Refine the final dataset (metadata, tags, filters)
- Clean and normalize the data
- Prepare the final response for the user

## Structure of generated Google Sheets

### Default columns

| Column | Description | Type |
|--------|-------------|------|
| URL | Data source URL | Hyperlink |
| Title | Page/resource title | Text |
| Description | Description or content excerpt | Long text |
| Source | MCP tool used (Brave/Firecrawl/Apify) | Text |
| Timestamp | Date/time of collection | Date/Time |
| Metadata | Additional data (JSON) | Text |

### Automatic formatting

- **Headings**: Bold font, colored background
- **URLs**: Formatted as clickable links
- **Dates**: Standardized ISO 8601 format
- **Columns**: Width automatically adjusted to content

## Use cases

### Business and enterprise
- Competitive analysis combining search, crawling, and structured scraping
- Market trend research with multi-source aggregation
- Automated reporting pipelines for business intelligence

### Research and academia
- Literature discovery across multiple sources
- Data collection for research projects
- Automated bibliographic extraction from online sources

### Engineering and development
- Discovery of APIs and documentation
- Aggregation of product information from multiple platforms
- Scalable structured scraping for datasets

### Personal productivity
- Automated creation of newsletters or knowledge hubs
- Personal research assistant compiling spreadsheets from various online data

## Key features

### Multi-source intelligence
- Firecrawl for deep crawling
- Brave for real-time search
- Apify for structured web scraping

### AI-driven orchestration
- Google Gemini for reasoning and tool selection
- Memory for multi-turn interactions
- Context-based adaptive workflows

### Structured data output
- Automatic spreadsheet creation
- Data enrichment and formatting
- Ready-to-use datasets for reporting

### Performance and scalability
- Handles multiple simultaneous tool calls
- Scalable web data extraction
- Real-time aggregation from multiple MCPs

### Security and privacy
- Secure authentication based on API keys
- Data managed in Google Sheets / n8n
- Configurable retention and deletion policies

## Technical architecture

### Workflow

User query → AI agent (Gemini) → MCP tools (Firecrawl / Brave / Apify) → Aggregated results → Spreadsheet creation → Data processing → Results delivery

### Supported data types
- **Text and metadata** from crawled web pages
- **Search results** from Brave queries
- **Structured data** from Apify scrapers
- **Tabular reports** via Google Sheets

## Integration options

### Chat interfaces
- Web widget for conversational queries
- Slack/Teams chatbot integration
- REST API access points

### Data sources
- Websites (via Firecrawl/Apify)
- Search engines (via Brave)
- APIs (via HTTP Request enrichment)

## Performance specifications

- Query response: < 5 seconds (search tasks)
- Crawl capacity: thousands of pages per run
- Spreadsheet automation: real-time creation and updates
- Accuracy: > 90% when using combined sources

## Advanced configuration options

### Customization
- Set custom prompts for the AI Agent
- Adjust the spreadsheet schema for reporting needs
- Configure retries for failed tool runs

### Analytics and monitoring
- Track tool usage and costs
- Monitor crawl and search success rates
- Log queries and outputs for auditing

## Troubleshooting and support

- **Timeouts**: Manually re-run failed MCP executions
- **Data gaps**: Validate Firecrawl/Apify selectors
- **Spreadsheet errors**: Check Google Sheets API quotas
by Sulieman Said
## How it Works

This workflow automates discovering companies in different cities, extracting their contact data, and storing it in Airtable.

**City Loop (Airtable → Google Maps API)**
- Reads a list of cities from Airtable.
- Uses each city combined with a search term (e.g., SEO Agency, Berlin) to query Google Maps.
- Marks processed cities as "checked" to allow safe restarts if interrupted.

**Business Discovery & Deduplication**
- Searches for businesses via Google Maps Text Search.
- Checks Airtable to avoid scraping the same company multiple times.
- Fetches detailed info for each business via the Google Maps Place Details API.

**Impressum Extraction (Website → HTML Parsing)**
- Builds an Impressum page URL for each business.
- Requests the HTML and strips out ads, headers, footers, etc.
- Extracts relevant contact info using an AI extractor (OpenAI node).

**Contact Information Extraction**
Pulls out:
- Decision maker (name + position in one string, if available)
- Email address (must be valid, containing @)
- Phone number (international format if possible)

Filters out incomplete results (e.g., empty email).

**Database Storage**
Writes company data back into Airtable:
- Company name
- Address
- Website
- Email
- Phone number
- Decision maker (name + position)
- Search term & city used

## Setup Steps

### 1. Prerequisites
- Google Maps API key with access to the Places API → Text Search + Place Details
- Airtable base with at least two tables:
  - Cities (columns: ID, City, Country, Status)
  - Companies (for scraped results)
- OpenAI API key (for decision-maker and contact extraction)

### 2. Authentication
- Configure your Airtable API credentials in n8n.
- Set up HTTP Query Auth with your Google Maps API key.
- Add your OpenAI API key in the OpenAI Chat node.

### 3. Configuration
- In the Airtable "Cities" table, list all cities you want to scrape.
- Define your search term in the "Execute Workflow" node (e.g., SEO Agency).
- Adjust batch sizes and wait intervals for faster or slower scraping (the Google API has strict rate limits).

### 4. Execution
- Start manually or from another workflow.
- The workflow scrapes all companies in each city step by step.
- It can be safely stopped and resumed — cities already marked as processed are skipped.

### 5. Results
An enriched company dataset stored in Airtable, ready for CRM import, lead generation, or further automation.

## Tips & Notes
- Always respect GDPR and local laws when handling scraped data.
- The workflow is modular → you can swap Airtable for Google Sheets, Notion, or a database of your choice.
- Add custom filters to limit results (e.g., only companies with websites).
- Use the sticky notes inside the workflow to understand each step.
- **Keep an eye on Google Places API costs** — queries are billed after the free quota. If you are still within the first two months of the Google Cloud Developer free trial, you can benefit from free credits.

Questions or custom requests? 📩 suliemansaid.business@gmail.com
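As a reference for the "filters out incomplete results" step above, a minimal Code-node sketch might look like this. The field names (`email`, `phone`) are assumptions; match them to the output of your AI extractor.

```javascript
// Hypothetical validation step: drop items without a usable email and
// loosely normalize phone numbers. Field names are assumptions.
const emailPattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

return $input
  .all()
  .filter((item) => {
    const { email } = item.json;
    return typeof email === 'string' && emailPattern.test(email.trim());
  })
  .map((item) => {
    // Keep digits and a leading '+' so "+49 30 1234-567" → "+49301234567".
    const phone = String(item.json.phone ?? '').replace(/[^\d+]/g, '');
    return { json: { ...item.json, phone } };
  });
```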
by Trung Tran
## Chat-Based AWS IAM Policy Generator with OpenAI Agent

> Chat-driven workflow that lets IT and DevOps teams generate custom AWS IAM policies via AI, automatically apply them to AWS, and send an email notification with policy details.

### 👤 Who's it for

This workflow is designed for:
- **Cloud engineers / DevOps** who need to quickly generate and apply **custom IAM policies** in AWS.
- **IT support / security teams** who want to create IAM policies through a **chat-based interface** without manually writing JSON.
- Teams that want automatic notifications (via email) once new policies are created.

### ⚙️ How it works / What it does

1. **Trigger** → Workflow starts when a chat message is received.
2. **IAM Policy Creator Agent** → Uses OpenAI to:
   - Interpret user requirements (e.g., service, actions, region).
   - Generate a valid IAM policy JSON following AWS best practices.
3. **IAM Policy HTTP Request** → Sends the generated policy to the AWS IAM CreatePolicy API.
4. **Email Notification** → Once AWS responds with a CreatePolicyResponse, an email is sent with policy details (name, ARN, ID, timestamps, etc.) using n8n mapping.

Result: the user can chat with the AI agent, create a policy, and receive an email confirmation with full details.

### 🛠 How to set up

1. **Chat Trigger node**: Configure the "When chat message received" node to connect your preferred chat channel (Slack, MS Teams, Telegram, etc.).
2. **IAM Policy Creator Agent**:
   - Add OpenAI Chat Model as the LLM.
   - Use a system prompt that enforces AWS IAM JSON best practices (least privilege, correct JSON structure).
   - Connect Memory (Simple Memory) and a Structured Output Parser to ensure consistent JSON output.
3. **IAM Policy HTTP Request**:
   - Method: POST
   - URL: https://iam.amazonaws.com/
   - Authentication: AWS Signature v4 (Access Key + Secret Key)
   - Body:
     - Action=CreatePolicy
     - PolicyName={{ $json.CreatePolicyResponse.CreatePolicyResult.Policy.PolicyName }}
     - PolicyDocument={{ $json.policyDocument }}
     - Version=2010-05-08
4. **Email** for tracking.

### 📋 Requirements

- n8n instance (self-hosted or cloud)
- AWS IAM user/role with permission to iam:CreatePolicy
- AWS Access Key + Secret Key (for SigV4 signing in the HTTP request)
- OpenAI API key (for the Chat Model)
- Email server credentials (SMTP or provider integration)

### 🎨 How to customize the workflow

- **Restrict services/actions** → Adjust the IAM Policy Creator Agent system prompt to limit what services/policies can be generated.
- **Notification channels** → Replace the email node with Slack, MS Teams, or PagerDuty to alert other teams.
- **Tagging policies** → Modify the HTTP request to include Tags when creating policies in AWS.
- **Human-readable timestamps** → Add a Function or Set node to convert CreateDate and UpdateDate from Unix epoch to ISO datetime before sending emails (see the sketch below).
- **Approval step** → Insert a manual approval node before sending the policy to AWS for compliance workflows.
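A minimal sketch of that timestamp-conversion Code node, assuming (as the customization note above states) that CreateDate/UpdateDate arrive as Unix epoch seconds and that the response follows the CreatePolicyResponse shape referenced earlier. Verify the actual shape your request returns before relying on these paths.

```javascript
// Hypothetical Code node: convert AWS IAM policy timestamps to ISO 8601
// before the email node. The response path is taken from the HTTP Request
// mapping above; the output property names are illustrative.
return $input.all().map((item) => {
  const policy =
    item.json.CreatePolicyResponse?.CreatePolicyResult?.Policy ?? {};
  const toIso = (epoch) =>
    epoch ? new Date(Number(epoch) * 1000).toISOString() : null;
  return {
    json: {
      ...item.json,
      createDateIso: toIso(policy.CreateDate),
      updateDateIso: toIso(policy.UpdateDate),
    },
  };
});
```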
by Jose Cuartas
## Sync Gmail emails to PostgreSQL with S3 attachment storage

An automated Gmail email processing system.

### Who's it for

Businesses and individuals who need to:
- Archive email communications in a searchable database
- Back up email attachments to cloud storage
- Analyze email patterns and communication data
- Comply with data retention policies
- Integrate emails with other business systems

### What it does

This workflow automatically captures, processes, and stores Gmail emails in a PostgreSQL database while uploading file attachments to S3/MinIO storage. It handles both individual emails (via Gmail Trigger) and bulk processing (via Schedule Trigger).

Key features:
- Dual processing: real-time individual emails + scheduled bulk retrieval
- Complete email metadata extraction (sender, recipients, labels, timestamps)
- HTML-to-plain-text conversion for searchable content
- Binary attachment processing with metadata extraction
- Organized S3/MinIO file storage structure
- UPSERT database operations to prevent duplicates

### How it works

1. **Email capture**: Gmail Trigger detects new emails; Schedule Trigger fetches bulk emails from the last hour.
2. **Parallel processing**: Emails with attachments go through binary processing; others go directly to transformation.
3. **Attachment handling**: Extract metadata, upload to S3/MinIO, create database references.
4. **Data transformation**: Convert the Gmail API format to the PostgreSQL structure (see the sketch at the end of this section).
5. **Storage**: UPSERT emails into the database with linked attachment information.

### Requirements

Credentials needed:
- Gmail OAuth2 (gmail.readonly scope)
- PostgreSQL database connection
- S3/MinIO storage credentials

Database setup: run the provided SQL schema to create the messages table with JSONB fields for flexible data storage.

### How to set up

1. **Gmail OAuth2**: Enable the Gmail API in Google Cloud Console and create OAuth2 credentials.
2. **PostgreSQL**: Create the database and run the SQL schema provided in the setup sticky note.
3. **S3/MinIO**: Create a bucket named "gmail-attachments" with proper upload permissions.
4. **Configure**: Update authenticatedUserEmail in the transform scripts to your email.
5. **Test**: Start with a single email before enabling bulk processing.

### How to customize

- **Email filters**: Modify the Gmail queries (in:sent, in:inbox) to target specific emails.
- **Storage structure**: Change the S3 file path format in the Upload node.
- **Processing schedule**: Adjust trigger frequencies based on email volume.
- **Database fields**: Extend the PostgreSQL schema for additional metadata.
- **Attachment types**: Add file-type filtering in the binary processing logic.

Note: this workflow processes emails from the last hour to avoid overwhelming the system. Adjust timeframes based on your email volume and processing needs.
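For orientation, here is a hedged sketch of the "Data transformation" step: mapping a Gmail API message into a flat row for PostgreSQL. The Gmail fields (`id`, `threadId`, `labelIds`, `internalDate`, `payload.headers`) are standard Gmail API message fields; the target column names are assumptions, so align them with the provided SQL schema.

```javascript
// Hypothetical transform: Gmail API message → database row object.
const authenticatedUserEmail = 'me@example.com'; // replace per setup step 4

return $input.all().map((item) => {
  const msg = item.json;
  // Gmail headers arrive as [{name, value}, ...]; index them by name.
  const headers = Object.fromEntries(
    (msg.payload?.headers ?? []).map((h) => [h.name.toLowerCase(), h.value]),
  );
  return {
    json: {
      message_id: msg.id,
      thread_id: msg.threadId,
      account: authenticatedUserEmail,
      from_addr: headers['from'] ?? null,
      to_addr: headers['to'] ?? null,
      subject: headers['subject'] ?? null,
      labels: JSON.stringify(msg.labelIds ?? []),
      // internalDate is epoch milliseconds as a string.
      internal_date: msg.internalDate
        ? new Date(Number(msg.internalDate)).toISOString()
        : null,
    },
  };
});
```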
by Daniel Shashko
## How it Works

This workflow monitors competitor affiliate programs twice daily, using Bright Data's web scraping API to extract commission rates, cookie durations, average order values, and payout terms from competitor websites.

The AI analysis engine scores each competitor (0–100 points) by comparing their commission rates, cookie windows, earnings per click (EPC), and affiliate-friendliness against your program, then categorizes them as Critical (70+), High (45–69), Medium (25–44), or Low (0–24) threat levels. Critical and high-threat competitors trigger immediate Slack alerts with detailed head-to-head comparisons and strategic recommendations, while lower threats route to monitoring channels. All competitors are logged to Google Sheets for tracking and historical analysis.

The system generates personalized email reports — urgent action plans with 24–48-hour deadlines for critical threats, or standard intelligence updates for routine monitoring. The entire process takes minutes from scraping to strategic alert, eliminating manual competitive research and ensuring you never lose affiliates to better-positioned competitor programs.

## Who is this for?

- Affiliate program managers who need automated competitive intelligence
- E-commerce brands in competitive verticals who can't afford to lose top affiliates
- Affiliate networks managing multiple merchants that need competitive benchmarking
- Performance marketing teams responding to commission rate wars in their industry

## Setup Steps

Setup time: approx. 20–30 minutes (Bright Data setup, API configuration, spreadsheet creation)

Requirements:
- Bright Data account with web scraping API access
- Google account with a competitor tracking spreadsheet
- Slack workspace
- SMTP email provider (Gmail, SendGrid, etc.)

1. Sign up for Bright Data and get your API credentials and dataset ID.
2. Create a Google Sheet with two tabs, "Competitor Analysis" and "Historical Log", with appropriate column headers.
3. Set up these nodes:
   - **Schedule Competitor Check**: Pre-configured for twice daily (adjust timing if needed).
   - **Scrape Competitor Sites**: Add Bright Data credentials, dataset ID, and competitor URLs.
   - **AI Offer Analysis**: Review the scoring thresholds (commission, cookies, AOV, EPC).
   - **Route by Threat Level**: Automatically splits by the 70-point critical and 45-point high thresholds.
   - **Google Sheets nodes**: Connect the spreadsheet and map data fields.
   - **Slack Alerts**: Configure channels for critical alerts and routine monitoring.
   - **Email Reports**: Set up SMTP and recipient addresses.

Credentials must be entered into their respective nodes for successful execution.

## Customization Guidance

- **Scoring weights**: Adjust the point values for commission (35), cookies (25), cost efficiency (25), and volume (15) based on your priorities (see the sketch below).
- **Threat thresholds**: Modify the 70-point critical and 45-point high thresholds to match your risk tolerance.
- **Benchmark values**: Update the commission gap thresholds (5%+ = critical, 2%+ = warning) and cookie duration benchmarks (30+ days = critical).
- **Competitor URLs**: Add or remove competitor websites to monitor in the HTTP Request node.
- **Alert routing**: Create tier-based channels, or route to Microsoft Teams, Discord, or SMS via Twilio.
- **Scraping frequency**: Change from twice daily to hourly for competitive markets, or to weekly for stable industries.
- **Additional networks**: Duplicate the workflow for other affiliate networks (CJ, ShareASale, Impact, Rakuten).
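To make the scoring weights concrete, here is a hedged sketch of how the 0–100 score and the 70/45/25 tiers could be computed. The benchmark denominators (30% commission, 60-day cookie, $2 EPC, 1,000 affiliates) are illustrative assumptions, not values from the template; tune them to your market.

```javascript
// Hypothetical scoring sketch: commission 35 pts, cookies 25, cost
// efficiency 25, volume 15, capped per category.
function threatScore(c) {
  let score = 0;
  score += Math.min(35, (c.commissionRate / 0.30) * 35); // 30% rate = full points (assumed benchmark)
  score += Math.min(25, (c.cookieDays / 60) * 25);       // 60-day window = full points (assumed)
  score += Math.min(25, (c.epc / 2.0) * 25);             // EPC vs. an assumed $2 benchmark
  score += Math.min(15, (c.affiliateCount / 1000) * 15); // program volume proxy (assumed)
  return Math.round(score);
}

function threatLevel(score) {
  if (score >= 70) return 'Critical';
  if (score >= 45) return 'High';
  if (score >= 25) return 'Medium';
  return 'Low';
}

// Example: a 25% commission, 45-day cookie, $1.40 EPC, 800-affiliate program
// scores in the High band.
threatLevel(threatScore({ commissionRate: 0.25, cookieDays: 45, epc: 1.4, affiliateCount: 800 }));
```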
Once configured, this workflow continuously monitors competitive threats and alerts you before top affiliates switch to better-paying programs, protecting your affiliate revenue from competitive pressure.

Built by Daniel Shashko. Connect on LinkedIn.
by Oneclick AI Squad
This automated n8n workflow powers an AI movie recommendation system on WhatsApp. Users send messages like "I want to watch a horror movie" or "Where can I watch the Jumanji movie?" The workflow uses AI to interpret the request, searches the relevant APIs (e.g., TMDb, JustWatch), and replies with movie recommendations or streaming platform availability via WhatsApp.

## Fundamental Aspects

- **WhatsApp Webhook Trigger**: Initiates the workflow when a WhatsApp message is received.
- **Analyze WhatsApp Message**: Uses AI (e.g., an Ollama model) to interpret the user's intent and extract the request type.
- **Check Request Type**: Determines whether the request is for a movie genre or a specific movie title (see the routing sketch at the end of this section).
- **Check Where Request**: Identifies whether the request includes a "where to watch" query.
- **Extract Movie Title**: Extracts the movie title from the message if specified.
- **Extract Genre**: Identifies the movie genre from the message if specified.
- **Search Specific Movie Title**: Queries an API (e.g., TMDb) for details about a specific movie.
- **Search Movies by Genre**: Queries an API (e.g., TMDb) for movies matching the genre.
- **Get Streaming Availability**: Queries an API (e.g., JustWatch) for streaming platforms.
- **Format Streaming Response**: Prepares the response with streaming platform details.
- **Format Genre Recommendations**: Prepares the response with genre-based movie recommendations.
- **Prepare WhatsApp Message**: Formats the final response for WhatsApp.
- **Send WhatsApp Response**: Sends the recommendation or streaming info back to the user via WhatsApp.

## Setup Instructions

1. **Import the workflow into n8n**: Download the workflow JSON and import it via the n8n interface.
2. **Configure API credentials**:
   - Set up WhatsApp Business API credentials with a valid phone number and token.
   - Configure a TMDb API key (https://api.themoviedb.org).
   - Configure a streaming-availability API key (e.g., Watchmode, https://api.watchmode.com, for JustWatch-style data).
   - Set up AI model credentials (e.g., an Ollama model).
3. **Run the workflow**: Activate the webhook trigger and test with a WhatsApp message.
4. **Verify responses**: Check WhatsApp for accurate movie recommendations or streaming info.
5. **Adjust parameters**: Fine-tune API endpoints or the AI model as needed.

## Features

- **AI interpretation**: Analyzes user intent (genre or movie title).
- **API integration**: Searches TMDb for movie details and JustWatch for streaming availability.
- **Real-time responses**: Sends instant replies via WhatsApp.
- **Custom recommendations**: Provides genre-based or specific movie recommendations.

## Technical Dependencies

- **WhatsApp Business API**: For receiving and sending messages.
- **TMDb API**: For movie details and genre searches.
- **JustWatch API**: For streaming availability.
- **Ollama model**: For AI-based message analysis.
- **n8n**: For workflow automation and integration.

## Customization Possibilities

- **Add more APIs**: Integrate additional movie databases (e.g., IMDb).
- **Enhance the AI**: Tune the Ollama model for better intent recognition.
- **Support more languages**: Add multilingual support for WhatsApp responses.
- **Add email alerts**: Include email notifications for admin monitoring.
- **Customize responses**: Adjust the format of recommendations or streaming info.
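A minimal sketch of the "Check Request Type" routing logic, assuming the AI analysis step returns `intent`, `movieTitle`, and `genre` fields. Those field names are illustrative, not guaranteed outputs of the model; adapt them to your prompt's actual output contract.

```javascript
// Hypothetical routing Code node: decide which branch handles the message.
const { intent, movieTitle, genre } = $input.first().json; // assumed fields

if (intent === 'where_to_watch' && movieTitle) {
  // → "Search Specific Movie Title" + "Get Streaming Availability"
  return [{ json: { route: 'streaming', query: movieTitle } }];
}
if (movieTitle) {
  // → "Search Specific Movie Title"
  return [{ json: { route: 'title', query: movieTitle } }];
}
// → "Search Movies by Genre"; fall back to popular titles when no genre found.
return [{ json: { route: 'genre', query: genre ?? 'popular' } }];
```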
by Nguyen Thieu Toan
## How it works

🧠 AI-Powered News Update Bot for Zalo using Gemini and RSS Feeds

This workflow lets you build a smart Zalo chatbot that automatically summarizes and delivers the latest news using Google Gemini and RSS feeds. It's perfect for keeping users informed with AI-curated updates directly inside Vietnam's most popular messaging app.

### 🚀 What It Does

- Receives user messages via the Zalo Bot webhook
- Fetches the latest articles from an RSS feed (e.g., AI news)
- Summarizes the content using Google Gemini
- Formats the response and sends it back to the user on Zalo

### 📱 What Is Zalo?

Zalo is Vietnam's leading instant messaging app, with over 78 million monthly active users — more than 85% of the country's internet-connected population. It handles 2 billion messages per day and is deeply embedded in Vietnamese daily life, making it a powerful channel for communication and automation.

### 🔧 Setup Instructions

1. **Create a Zalo Bot**
   - Open the Zalo app and search for "Zalo Bot Creator"
   - Tap "Create Zalo Bot Account"
   - Your bot name must start with "Bot" (e.g., Bot AI News)
   - After creation, Zalo will send you a message containing your Bot Token
2. **Configure the Webhook**
   - Replace [your-webhook URL] in Zalo Bot Creator with your n8n webhook URL
   - Use the Webhook node in this workflow to receive incoming messages
3. **Set Up Gemini**
   - Add your Gemini API key to the HTTP Request node labeled "Summarize AI News"
   - Customize the prompt if you want a different tone or summary style
4. **Customize the RSS Feed**
   - Replace the default RSS URL with your preferred news source
   - You can use any feed that provides timely updates (e.g., tech, finance, health)

### 🧪 Example Interaction

User: "What's new today?"
Bot: "🧠 AI Update: Google launches Gemini 2 with multimodal capabilities, revolutionizing how models understand text, image, and code..."

### ⚠️ Notes

- Zalo Bots currently do not support images, voice, or file attachments
- Make sure your Gemini API key has access to the model you're calling
- RSS feeds should be publicly accessible and well formatted

### 🧩 Nodes Used

- Webhook
- HTTP Request (Gemini)
- RSS Feed Read
- Set & Format
- Zalo Message Sender (via API)

### 💡 Tips

- You can swap Gemini for GPT-4 or Claude by adjusting the API call
- Add filters to the RSS node to include only articles with specific keywords
- Use the Function node to personalize responses based on user history

Built by Nguyen Thieu Toan (Nguyễn Thiệu Toàn): https://nguyenthieutoan.com. Read more about this workflow in Vietnamese: https://nguyenthieutoan.com/share-workflow-n8n-zalo-bot-cap-nhat-tin-tuc/
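For reference, the request behind the "Summarize AI News" node might look like the following sketch. The model name and prompt are assumptions; use any generateContent-capable model your key can access.

```javascript
// Hypothetical Code-node sketch of the Gemini request body built from the
// RSS items. Field names title/link are standard RSS Feed Read outputs.
const articles = $input
  .all()
  .map((i) => `- ${i.json.title}: ${i.json.link}`)
  .join('\n');

const body = {
  contents: [
    {
      parts: [
        { text: `Summarize these articles as 3 short bullets for a chat app:\n${articles}` },
      ],
    },
  ],
};

// POST this body to (model name is an assumption):
// https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=<YOUR_KEY>
// The summary text comes back in candidates[0].content.parts[0].text.
return [{ json: { requestBody: body } }];
```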
by Luis Acosta
## 📰 Reddit to Newsletter (Automated Curation with OpenAI GPT-4o mini)

Turn the best posts from a subreddit into a ready-to-send HTML newsletter — no copy-pasting, no wasted time. This workflow fetches new posts, filters by topic of interest, analyzes comments, summarizes insights, and composes a clean HTML email delivered straight to your inbox with Gmail.

### 💡 What this workflow does

- ✅ Fetches posts from your chosen subreddit (default: r/microsaas, sorted by "new")
- 🏆 Selects the Top 10 by upvotes, comments, and recency
- 🧭 Defines a topic of interest and runs a lightweight AI filter (true/false) without altering the original JSON
- 💬 Pulls and flattens comments into a clean, structured list
- 🧠 Summarizes each post + comments into main_post_summary, comment_insights, and key_learnings
- ✍️ Generates a newsletter in HTML (not Markdown) with headline, outline, sections per post, quotes, and "by the numbers"
- 📤 Sends the HTML email via Gmail with the subject "Reddit Digest" (editable)

### 🛠 What you'll need

- 🔑 Reddit OAuth2 connected in n8n
- 🔑 OpenAI API key (e.g., gpt-4o-mini) for filtering and summarization
- 🔑 Gmail OAuth2 to deliver the newsletter
- 🧵 A target subreddit and a clearly defined topic of interest

### 🧩 How it works (high-level)

1. Manual Trigger → Get many posts (from subreddit)
2. Select Top 10 (Code node, ranking by ups + comments + date; see the sketch at the end of this section)
3. Set topic of interest → AI filter → String to JSON → If topic of interest
4. Loop Over Items for each valid post
5. Fetch post comments → Clean comments (Code) → Merge comments → Merge with post
6. Summarize post + comments (AI) → Merge summaries → Create newsletter HTML
7. Send Gmail message with the generated HTML

### ⚙️ Key fields to adjust

- **Subreddit name** and "new" filter in **Get many posts**
- **Ranking logic** inside the **Top 10** Code node
- Text inside **Set topic of interest**
- **Prompts** for **AI filter**, **Summarize**, and **Create newsletter** (tone & structure)
- **Recipient & subject line** in **Send Gmail message**

### ✨ Use cases

- **Weekly digest** of your niche community
- **Podcast or newsletter prep** with community insights
- **Monitoring specific themes** (e.g., "how to get first customers") and delivering insights to a team or client

### 🧠 Tips & gotchas

- ⏱️ Reddit API limits: tune batch size and rate if the subreddit is very active
- 🧹 Robust JSON parsing: the String to JSON node handles clean, fenced, or escaped JSON; failures return error + raw for debugging
- 📨 Email client quirks: test long newsletters; some clients clip lengthy HTML
- 💸 AI cost: the two-step approach (summarization + HTML generation) improves quality but can be merged to reduce cost

### 🧭 Quick customization

- Change microsaas to your target subreddit
- Rewrite the topic of interest (e.g., "growth strategies", "fundraising", etc.)
- Adapt the newsletter outline prompt for a different tone/format
- Schedule with a Cron node for daily or weekly digests

### 📬 Contact & Feedback

Need help tailoring this workflow to your stack?
📩 Luis.acosta@news2podcast.com
🐦 @guanchehacker

If you're building something more advanced with curation + AI (like turning the digest into a podcast or video), let's connect — I may have the missing piece you need.
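A minimal sketch of the "Select Top 10" Code node ranking logic. The weights and the recency penalty are illustrative; `ups`, `num_comments`, and `created_utc` are standard Reddit listing fields, but check the exact shape your Reddit node returns.

```javascript
// Hypothetical Top-10 ranking: reward upvotes and comments, penalize age.
const now = Date.now() / 1000; // Reddit's created_utc is epoch seconds

const scored = $input.all().map((item) => {
  const p = item.json;
  const ageHours = (now - (p.created_utc ?? now)) / 3600;
  // Illustrative weights: comments count double, half a point lost per hour.
  const score = (p.ups ?? 0) + 2 * (p.num_comments ?? 0) - 0.5 * ageHours;
  return { item, score };
});

scored.sort((a, b) => b.score - a.score);
return scored.slice(0, 10).map((s) => s.item);
```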
by David Olusola
## WordPress Weekly Newsletter Generator

### Overview

This automation converts your latest WordPress posts into formatted email newsletters using AI, then sends them to your subscriber list every Friday.

### What it does

1. Fetches your WordPress posts from the past week every Friday at 10 AM
2. Filters posts to ensure there's content to include
3. AI creates an engaging newsletter with a compelling subject line and HTML content
4. Parses the AI response to extract the subject and content (see the sketch at the end of this section)
5. Sends the formatted HTML email newsletter to your subscriber list

### Setup Required

**WordPress connection**
- Configure WordPress credentials in the "Fetch Recent Posts" node
- Enter your WordPress site URL, username, and password/app password

**Email SMTP setup**
- Set up SMTP credentials (Gmail, SendGrid, Mailgun, etc.) in the "Send Newsletter" node
- Replace newsletter@yoursite.com with your actual sender email
- Replace the subscriber emails in the "To Email" field with your actual subscriber list
- Configure a reply-to address for a professional appearance

**AI configuration**
- Set up Google Gemini API credentials
- Connect the Gemini model to the "AI Newsletter Creator" node

**Customization options**
- Newsletter schedule: modify the schedule trigger (default: Friday 10 AM)
- Post count: adjust the number of posts to include (default: 5 from the past week)
- Content style: modify the AI system message for different newsletter tones
- Email design: customize the HTML template and styling in the AI prompt

**Testing**
- Run the workflow manually to test all connections
- Send a test newsletter to yourself first
- Verify that the HTML formatting appears correctly in email clients

### Features

- Automatic weekly scheduling
- AI-generated compelling subject lines
- HTML email formatting with proper structure
- Post filtering to avoid empty newsletters
- Professional email headers and reply-to setup
- Batch processing of multiple recent posts

### Customization

- Change newsletter frequency (daily, bi-weekly, monthly)
- Adjust AI prompts for different writing styles
- Modify the email template design
- Add custom intro/outro messages
- Include featured images from posts

Need help? Reach out for n8n coaching or one-on-one consultation.
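A sketch of the "parse the AI response" step, assuming the AI was prompted to reply with "SUBJECT: ..." on the first line followed by the HTML body. That contract is an assumption; adjust it to match your actual prompt.

```javascript
// Hypothetical parser: split the model output into subject + HTML body.
const raw = $input.first().json.text ?? ''; // output field name is an assumption
const lines = raw.split('\n');
const subjectLine = lines.find((l) => l.toUpperCase().startsWith('SUBJECT:'));

return [
  {
    json: {
      subject: subjectLine
        ? subjectLine.slice('SUBJECT:'.length).trim()
        : 'Weekly Newsletter', // fallback if the model ignored the contract
      html: lines.filter((l) => l !== subjectLine).join('\n').trim(),
    },
  },
];
```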
by Manav Desai
This n8n template demonstrates how to build a weekly Hollywood film industry briefing using Tavily for real-time search and Google Gemini for summarization. It sends a concise, emoji-styled email with movie releases, box office results, industry news, and must-watch recommendations every week, automatically.

Use cases: great for film journalists, entertainment bloggers, or movie enthusiasts who want automated weekly updates without manually checking multiple sources.

## Good to know

- **Free to use**: Tavily provides 1,000 API credits per month on its free plan (no credit card required), so this workflow can run at zero cost.
- **Real-time data**: Tavily's search API is optimized for up-to-date information — perfect for weekly movie releases and box office stats.
- Google Gemini is used for summarization, and you only need basic API access (no paid tier required).

## How it works

- **Trigger**: Scheduled every Thursday morning (configurable).
- **Search**: Four Tavily API calls gather:
  - Movies releasing this week
  - Last week's box office results
  - Hollywood industry news
  - Must-watch movies currently in theatres
- **Summarization**: Google Gemini turns this into Gmail-friendly HTML with emojis and bullet points.
- **Email**: The formatted newsletter is sent via the Gmail node.

## How to use

1. Configure Tavily API and Gmail OAuth2 credentials in n8n's credential manager.
2. (Optional) Edit the Tavily queries to focus on specific genres or add filters.
3. Adjust the schedule trigger to any day/time you prefer.

## Requirements

- Tavily API account (free plan: 1,000 monthly requests)
- Google Gemini API key for summarization
- Gmail account (OAuth2 credentials for sending emails)

## Want higher output quality?

You can swap Gemini for OpenAI's ChatGPT models:

- **GPT-3.5 Turbo** – ~$0.002/run (very cheap)
- **GPT-4o** – ~$0.009/run (latte price)
- **GPT-4.5** – ~$0.15/run (top-tier quality)

This upgrade gives you cleaner, richer, "did-a-human-write-this?" output — perfect for journalist-grade Hollywood briefings. Note: the OpenAI API requires a $5 minimum credit to activate usage.

## Example Output (ChatGPT version)

Subject: Daily Hollywood Film Industry Briefing – August 3, 2025

Good morning,

Here's your daily Hollywood film briefing for August 3, 2025:

🎬 Releases
- The Bad Guys 2 – Released Friday, August 1, 2025
- The Naked Gun – Released Friday, August 1, 2025

These are the confirmed new wide theatrical Hollywood releases this week (Monday through Sunday of the current week). No additional new Hollywood theatrical releases were found for this week.

📊 Box Office
Highest-grossing Hollywood films of 2025 (worldwide):
- Ne Zha 2 – approx. $1.90 billion (non-Hollywood Chinese animated film leads)
- Lilo & Stitch – approx. $1.02 billion
- A Minecraft Movie – approx. $955 million
- Jurassic World Rebirth – approx. $731 million
- How to Train Your Dragon – approx. $610 million
Last week's box office performance (Monday–Sunday):
- The Fantastic Four: First Steps – domestic debut $118M; global $218M, Marvel's biggest opening of 2025
- Superman – added ~$94M worldwide last week, passing a $500M global total
- Jurassic World Rebirth – up $70M worldwide last week, despite a ~40% drop week-on-week
- F1: The Movie – up ~$48M last week, with visible international and domestic growth
- Lilo & Stitch – added ~$10M worldwide last week, a slower tail but still a billion-plus gross

Highlights & trends: Fantastic Four's strong debut reboots Marvel's success, signaling renewed audience interest; Superman continues to hold strong; Jurassic World Rebirth remains durable after its holiday surge; a box office recovery is visible across key titles. Overall box office is up ~12–15% year-on-year.

📰 Industry Buzz
- Christopher Nolan has signed to direct a massive $250 million adaptation of Homer's The Odyssey, starring Matt Damon and Tom Holland, with IMAX pre-sales at 95% capacity across major locations.
- Marvel has relaunched the Fantastic Four franchise successfully with First Steps; a positive CinemaScore and strong visuals mark a fresh start.
- DC's Superman continues strong with over $500M global, solidifying DC's summer comeback.
- Universal's Jurassic World Rebirth continues strong overseas, especially in China, contributing to $318M global in its opening holiday weekend.
- Warner Bros. Discovery stock surges (~30%) amid the box office rebound, with Disney, IMAX, and Cinemark also seeing robust growth in 2025.
- Ne Zha 2 becomes the highest-grossing animated and non-Hollywood film ever, crossing $2 billion globally — though not Hollywood, its impact on global trends is notable.
- Mission: Impossible – The Final Reckoning quietly solidifies strong global numbers (~$562M) and continues reliable franchise performance.

🎥 Must-Watch in Theatres (Surat, India)
- **The Fantastic Four: First Steps** – Currently showing in English/Hindi/Tamil/Telugu in Surat cinemas; hyped globally, strong visuals, action-heavy, best experienced in IMAX or premium formats if available in Surat multiplexes. Runs this week.
- **F1: The Movie** – Available in Surat in multiple languages; strong reviews praise its adrenaline-fuelled direction and visuals, and fan hype is growing; ideal in standard or Dolby formats for immersive sound and a sense of speed.
- **Jurassic World Rebirth** – Still playing in Surat, popular with family audiences; grand visuals and dinosaur action well suited to IMAX or large-format screens.

That's all for today's briefing. Have a great theatrical weekend ahead!
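Returning to the "Search" step above: one of the four Tavily calls might look like this hedged sketch. Tavily's POST /search endpoint is real, but how the key is sent (body field vs. Bearer header) and the query text are assumptions to verify against your account setup.

```javascript
// Hypothetical sketch of one Tavily search request (this week's releases).
const body = {
  api_key: '<YOUR_TAVILY_KEY>', // some setups use an Authorization: Bearer header instead
  query: 'Hollywood movies releasing in theaters this week',
  search_depth: 'basic',
  max_results: 5,
};

// POST https://api.tavily.com/search with this JSON body;
// the returned results (title, url, content) feed the Gemini summarizer.
return [{ json: { requestBody: body } }];
```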
by Rahul Joshi
## Description

Automate Jira backlog management with intelligent cleanup, prioritization, and AI-powered reporting. This workflow scans daily to identify stale issues, missing priorities, and overdue tasks — it auto-updates Jira with corrective labels, logs everything into Google Sheets for tracking, and notifies teams via Slack. Every Friday, it sends an AI-generated backlog summary email to project leads for visibility and planning. 🚀📅

### What This Template Does

- Step 1: Triggers automatically every weekday at 7:00 AM to fetch backlog issues from Jira. ⏰
- Step 2: Filters issues missing estimates, assignees, or priority values for cleanup (see the sketch at the end of this section). 🧹
- Step 3: Applies corrective labels (e.g., "Needs Estimation", "Unassigned", "Overdue"). 🏷️
- Step 4: Logs all flagged issues into Google Sheets with timestamps for audit tracking. 📊
- Step 5: Sends real-time Slack alerts summarizing key backlog insights. 💬
- Step 6: Every Friday, uses GPT-4 to generate a summarized backlog health report. 🤖
- Step 7: Delivers weekly summary emails to leads and project managers via Gmail. 📧

### Key Benefits

- ✅ Eliminates manual backlog reviews and prioritization.
- ✅ Ensures consistent Jira hygiene and task visibility.
- ✅ Provides centralized backlog tracking via Google Sheets.
- ✅ Sends real-time alerts for overdue and unassigned tasks.
- ✅ Offers AI-driven insights for better sprint planning.

### Features

- Automated daily trigger (Mon–Fri, 7 AM)
- Jira issue fetching and filtering by priority and assignment
- Smart labeling for hygiene tracking
- Slack alerts for backlog anomalies
- Weekly GPT-4-generated summary reporting
- Google Sheets integration for historical logging
- Gmail integration for summary email delivery

### Requirements

- Jira API credentials with read/write issue permissions
- Google Sheets OAuth2 credentials for data logging
- Slack bot token with chat:write permission
- Gmail OAuth2 credentials for email delivery
- OpenAI or Azure OpenAI API key for GPT-4 summarization

### Target Audience

- Agile and Scrum teams maintaining large backlogs 🧩
- Product managers ensuring backlog quality and consistency 📋
- Engineering leads seeking proactive backlog hygiene 🛠️
- Organizations needing visibility across project tasks 🏢
- Remote teams using Slack for daily syncs 🌐

### Step-by-Step Setup Instructions

1. Connect Jira credentials and specify your project key(s). 🔑
2. Link your Google Sheet and replace YOUR_SHEET_ID for backlog tracking. 📊
3. Configure Slack and replace YOUR_CHANNEL_ID for alert delivery. 💬
4. Add Gmail credentials and define recipient emails for weekly reports. 📧
5. Add your GPT-4 API key (OpenAI or Azure) for AI summarization. 🤖
6. Adjust the cron expression (0 7 * * 1-5) to match your local timezone. ⏰
7. Run manually once to validate all connections, then enable automation. ✅
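A hedged sketch of the hygiene filter from Step 2: flag issues missing estimates, assignees, or due dates and collect the corrective labels. The field paths (`fields.timeoriginalestimate`, `fields.assignee`, `fields.duedate`) follow the standard Jira REST issue shape; the label names mirror the ones described above.

```javascript
// Hypothetical Code node: keep only issues that need correction,
// together with the labels to apply downstream.
return $input.all().flatMap((item) => {
  const issue = item.json;
  const f = issue.fields ?? {};
  const labels = [];

  if (!f.timeoriginalestimate) labels.push('Needs Estimation');
  if (!f.assignee) labels.push('Unassigned');
  if (f.duedate && new Date(f.duedate) < new Date()) labels.push('Overdue');

  // Issues with no flags are dropped; flagged ones continue to labeling + logging.
  return labels.length
    ? [{ json: { key: issue.key, summary: f.summary, flags: labels } }]
    : [];
});
```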
by Frederik Duchi
This n8n template demonstrates how to automatically create tasks (or, more generally, records) in Baserow based on template or blueprint tables. The first blueprint table is the master table that holds the general information about the template — for example, a standard procedure for handling incidents. The second table is the details table that holds multiple records for the template. Each record in that table is a specific task that needs to be assigned to someone with a certain deadline. This makes it easy to streamline task creation for recurring processes.

Use cases are many:
- Project management (generate tasks for employees based on a project template)
- HR & onboarding (generate tasks for employee onboarding based on a template)
- Operations (create checklists for maintenance, audits, or recurring procedures)

## Good to know

- The Baserow template for handling Standard Operating Procedures works perfectly as a base schema to try out this workflow.
- Authentication is done through a database token. Check the documentation on how to create such a token.
- Tasks are inserted using the HTTP Request node instead of a dedicated Baserow node, in order to support batch import instead of importing records one by one.

## Requirements

- Baserow account (cloud or self-hosted)
- A Baserow database with at least the following tables:
  - Assignee / employee table. Required to be able to assign someone to a task.
  - Master table with procedure or template information. Required to be able to select a certain template.
  - Details table with all the steps associated with a procedure or template. Required to convert each step into a specific task. A step must have a field Days to complete with the number of days needed to complete the step; this field is used to calculate the deadline.
  - Tasks table that contains the actual tasks with an assignee and deadline.

## How it works

- **Trigger task creation (webhook)**: The automation starts when the webhook is triggered through a POST request. The request body should contain an assignee, template, date, and note. It sends a success or failure response once all steps are completed.
- **Configure settings and ids**: Stores the ids of the involved Baserow database and tables, together with the API credentials and the data from the webhook.
- **Get all template steps**: Gets all the steps from the template Details table that are associated with the id of the Master template table. For example: the master template can have a record about handling customer complaints, and the details table contains all the steps to handle this procedure.
- **Calculate deadlines for each step**: Prepares the input of the tasks by using the same property names as the fields of the Tasks table. Adjust these names, or add and remove fields, as required by your database structure. The deadline of each step is calculated by adding the number of days a step can take (the Days to complete field in the template Details table) to the deadline of the first step. For example: if the schedule_date property in the webhook is set to 2025-10-01 and Days to complete for the step is 3, the deadline will be 2025-10-04.
- **Avoid scheduling during the weekend**: The calculated deadline may fall on a Saturday or Sunday. This Code node moves those dates to the following Monday to avoid scheduling during the weekend (see the sketch at the end of this template).
- **Aggregate tasks for insert**: Aggregates the data from the previous nodes as an array in a property named items. This matches the shape the Baserow API expects for inserting new records in batch.
- **Generate tasks in batch**: Calls the API endpoint /api/database/rows/table/{table_id}/batch/ to insert multiple records at once into the Tasks table. Check the Baserow API documentation for further details.
- **Success / Error response**: Sends a simple text response indicating the success or failure of the record creation. This offers feedback when triggering the automation from a Baserow application, but it can be replaced with a JSON response.

## How to use

1. Call the Trigger task creation node with the required parameters through a POST request. This can be done from any web application. For example: the application builder in Baserow supports an action to send an HTTP request; the Procedure details page in the Standard Operating Procedures template demonstrates this action. The following information is required in the body of the request to create the actual tasks:

   {
     "assignee_id": integer referring to the id of the assignee in the database,
     "template_id": integer referring to the id of the template or procedure in the master table,
     "schedule_date": the date the tasks need to start scheduling,
     "note": text with an optional note about the tasks
   }

2. Set the corresponding ids in the Configure settings and ids node.
3. Check the names of the properties in the Calculate deadlines for each step node. Make sure they match the field names of your Tasks table.
4. You can replace the text messages in the Success response and Failure response nodes with a more structured format if your application requires it.

## Customising this workflow

- Add support for public holidays (e.g., using an external calendar API).
- Modify the task assignment logic (e.g., pre-assign tasks in the details table).
- Combine with notifications (email, Slack, etc.) to alert employees when new tasks are generated.
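A minimal sketch of the "Avoid scheduling during the weekend" Code node referenced above: push Saturday and Sunday deadlines to the following Monday. It assumes each item carries an ISO date string in a `Deadline` property, which is an illustrative name; rename it to match your Tasks table field.

```javascript
// Hypothetical weekend-shift Code node.
return $input.all().map((item) => {
  const d = new Date(item.json.Deadline);
  const day = d.getUTCDay(); // 0 = Sunday, 6 = Saturday
  if (day === 6) d.setUTCDate(d.getUTCDate() + 2); // Saturday → Monday
  if (day === 0) d.setUTCDate(d.getUTCDate() + 1); // Sunday → Monday
  item.json.Deadline = d.toISOString().slice(0, 10); // keep YYYY-MM-DD
  return item;
});
```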