by berke
**Who's it for**

This workflow is perfect for sales teams, customer service departments, and businesses that frequently handle spare parts inquiries via email. It's especially valuable for companies managing multiple products with complex pricing structures who want to automate their quotation process while maintaining professional, multilingual communication.

**What it does**

This workflow:
- **Monitors your Gmail inbox** for incoming spare parts requests
- **Automatically generates professional HTML price quotes** in the sender's language
- **Sends personalized replies**
- Uses AI to detect the email language (supports Turkish, English, German, and more)
- **Extracts project or part codes**
- **Fetches pricing data from Google Sheets**
- **Calculates totals accurately**
- **Formats everything** into a clean, professional quote that matches your brand

**How it works**

1. A Schedule Trigger runs at a set interval (e.g., every few minutes) to check for new emails.
2. The Gmail node fetches the latest unread email.
3. Keyword detection filters for spare parts-related terms in multiple languages.
4. The AI Agent processes the request by:
   - Detecting the email's language
   - Extracting project/part codes
   - Querying three Google Sheets: CRM, Bill of Materials, Pricing
   - Calculating line totals and the grand total (see the sketch at the end of this section)
   - Generating a professional HTML quote in the sender's language
5. A Gmail reply sends the quote and marks the original email as read.

**Requirements**

- n8n self-hosted or cloud instance
- Gmail account with OAuth2 authentication
- Google Sheets with the proper structure (3 sheets for CRM, BoM, and Pricing data)
- Google Gemini API key for AI processing
- Basic understanding of Google Cloud Console for OAuth setup

**How to set up**

1. Import the workflow into your n8n instance.
2. Create three Google Sheets with the following column structure:
   - CRM Sheet: Email, ProjectCode, CustomerName
   - Bill of Materials: ProjectCode, PartCode, PartDescription, Quantity
   - Pricing Sheet: PartCode, UnitPriceEUR, PartDescription
3. Configure credentials:
   - Set up Gmail OAuth2 in Google Cloud Console
   - Configure Google Sheets OAuth2 (can use the same project)
   - Get your Google Gemini API key from Google AI Studio
4. Update the workflow:
   - Replace the placeholder Sheet IDs in the CRM, BoM, and Pricing nodes
   - Adjust the company name in the AI Agent's system message
   - Modify keyword detection if needed
5. Test with a sample email before activating.

**How to customize the workflow**

- **Add more languages**: update the keyword detection node with additional terms
- **Modify the quote template**: edit the HTML in the AI Agent's message to match your branding
- **Change data sources**: replace Google Sheets with PostgreSQL or MySQL nodes
- **Add approval steps**: insert a manual approval node for quotes above a certain value
- **Include attachments**: add PDF or product spec file nodes
- **Enhance notifications**: add Slack or Teams notifications after a quote is sent
- **Implement follow-ups**: create a separate workflow for reminder emails

This template provides a solid foundation for automating your quotation process, while staying flexible to fit your specific business needs. Feel free to contact me for further implementation guidelines: LinkedIn: Berke
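To make the calculation step concrete, here is a minimal JavaScript sketch of the line-total and grand-total logic the AI Agent is prompted to perform, assuming the sheet columns listed above (ProjectCode, PartCode, Quantity, UnitPriceEUR). The function and variable names are illustrative, not part of the actual workflow.

```javascript
// Minimal sketch of the quote maths, assuming the documented sheet columns.
function buildQuote(projectCode, bomRows, pricingRows) {
  // Index unit prices by part code for quick lookup.
  const priceByPart = Object.fromEntries(
    pricingRows.map(r => [r.PartCode, Number(r.UnitPriceEUR)])
  );

  // Keep only the BoM rows for the requested project and price each line.
  const lines = bomRows
    .filter(r => r.ProjectCode === projectCode)
    .map(r => {
      const unitPrice = priceByPart[r.PartCode] ?? 0;
      const lineTotal = unitPrice * Number(r.Quantity);
      return { ...r, unitPrice, lineTotal };
    });

  // Sum all line totals into the grand total shown in the quote.
  const grandTotal = lines.reduce((sum, l) => sum + l.lineTotal, 0);
  return { lines, grandTotal };
}
```

In the workflow itself the AI Agent performs this arithmetic from the sheet rows it receives as tool output, so this sketch is mainly a reference for sanity-checking its totals.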
by Ranjan Dailata
**Who this is for**

The Async Structured Bulk Data Extract with Bright Data Web Scraper workflow is designed for data engineers, market researchers, competitive intelligence teams, and automation developers who need to programmatically collect and structure high-volume data from the web using Bright Data's dataset and snapshot capabilities.

This workflow is built for:
- **Data Engineers** - building large-scale ETL pipelines from web sources
- **Market Researchers** - collecting bulk data for analysis across competitors or products
- **Growth Hackers & Analysts** - mining structured datasets for insights
- **Automation Developers** - needing reliable snapshot-triggered scrapers
- **Product Managers** - overseeing data-backed decision-making using live web information

**What problem is this workflow solving?**

Web scraping at scale often requires asynchronous operations, including waiting for data preparation and snapshots to complete. Manual handling of this process can lead to timeouts, errors, or inconsistent results. This workflow automates the entire process of submitting a scraping request, waiting for the snapshot, retrieving the data, and notifying downstream systems, all in a structured, repeatable fashion.

It solves:
- Asynchronous snapshot completion handling
- Reliable retrieval of large datasets using Bright Data
- Automated delivery of scraped results via webhook
- Disk persistence for traceability or historical analysis

**What this workflow does**

1. Set Bright Data Dataset ID & Request URL: takes in the Dataset ID and the Bright Data API endpoint used to trigger the scrape job.
2. HTTP Request: sends an authenticated request to the Bright Data API to start a scraping snapshot job.
3. Wait Until Snapshot is Ready: implements a wait loop that checks the snapshot status (e.g., polling every 30 seconds) until it reaches the ready state (see the sketch at the end of this section).
4. Download Snapshot: downloads the structured dataset snapshot once it is ready.
5. Persist Response to Disk: saves the dataset to disk for archival, review, or local processing.
6. Webhook Notification: sends the final result, or a summary of it, to an external webhook.

**Setup**

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. Update the Set Dataset Id and Request URL nodes to set the brand content URL you want to scrape.
5. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

**How to customize this workflow to your needs**

- **Polling strategy**: adjust the polling interval (e.g., every 15-60 seconds) based on snapshot complexity.
- **Input flexibility**: accept the datasetId and request URL dynamically from a webhook trigger or input form.
- **Webhook output**: send notifications to internal APIs (for use in dashboards) or Zapier/Make (for multi-step automation).
- **Persistence**: save output to remote FTP or SFTP storage, Amazon S3, Google Cloud Storage, etc.
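For reference, below is a minimal plain-JavaScript sketch of the "wait until snapshot is ready" pattern that this workflow implements with n8n nodes. The status/download URLs, the Bearer token, and the "ready" status value are assumptions to be taken from your Bright Data dataset configuration and API documentation.

```javascript
// Minimal polling sketch (Node 18+, global fetch). URLs and the status value are
// assumptions — take them from your Bright Data dataset configuration and API docs.
async function waitForSnapshot(statusUrl, downloadUrl, token, intervalMs = 30000) {
  const headers = { Authorization: `Bearer ${token}` };

  // Poll until the snapshot reports it is ready.
  for (;;) {
    const status = await fetch(statusUrl, { headers }).then(r => r.json());
    if (status.status === 'ready') break;
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }

  // Download the structured dataset once the snapshot has completed.
  return fetch(downloadUrl, { headers }).then(r => r.json());
}
```

In the workflow this loop is expressed with a Wait node and an IF/status-check branch rather than code; the sketch only shows the control flow being reproduced.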
by Stefan
**Track n8n Node Definitions from GitHub and Export to Google Sheets**

**Overview**

This workflow automatically retrieves and processes metadata from the official n8n GitHub repository, filters all available .node.json files, parses their structure, and appends structured information to a Google Sheet. Perfect for developers, community managers, and technical writers who need to maintain up-to-date information about n8n's evolving node ecosystem.

**Setup Instructions**

Prerequisites - before setting up this workflow, ensure you have:
- A GitHub account with API access
- A Google account with Google Sheets access
- An active n8n instance (cloud or self-hosted)

Step 1: GitHub API Configuration
1. Navigate to GitHub Settings → Developer Settings → Personal Access Tokens
2. Generate a new token with public_repo permissions
3. Copy the generated token and store it securely
4. In n8n, create a new "GitHub API" credential
5. Paste your token in the credential configuration and save

Step 2: Google Sheets Setup
1. Create a new Google Sheets document
2. Set up the following column headers in the first row:
   - node (Column A) - node identifier/name
   - nodeVersion (Column B) - version of the node
   - codexVersion (Column C) - codex version number
   - categories (Column D) - node categories
   - credentialDocumentation (Column E) - credential documentation URL
   - primaryDocumentation (Column F) - primary documentation URL
3. Note down the Google Sheets document ID from the URL
4. Configure Google Sheets OAuth2 credentials in n8n

Step 3: Workflow Configuration
1. Import the workflow into your n8n instance
2. Update the following placeholder values:
   - Replace YOUR_GOOGLE_SHEETS_DOCUMENT_ID with your actual document ID
   - Replace YOUR_WEBHOOK_ID if using webhook functionality
3. Configure the GitHub API credentials in the HTTP Request nodes
4. Set up Google Sheets credentials in the Google Sheets nodes
5. Share your Google Sheets document with the email address associated with your Google OAuth2 credentials
6. Grant "Editor" permissions to allow the workflow to write data

**Google Sheets Template Details**

The workflow creates a structured dataset with these columns:
- **node**: node identifier (e.g., n8n-nodes-base.slack)
- **nodeVersion**: version of the node (e.g., 1.0.0)
- **codexVersion**: codex version number (e.g., 1.0.0)
- **categories**: node categories (e.g., Communication, Productivity)
- **credentialDocumentation**: URL to the credential documentation
- **primaryDocumentation**: URL to the primary node documentation

**Customization Options**

Modifying data extraction - you can customize the "Format Data" node to extract additional fields:
- Add new assignments in the Set node
- Modify the column mapping in the Google Sheets node
- Update your spreadsheet headers accordingly

Changing the update frequency - to run this workflow on a schedule:
- Replace the Manual Trigger with a Cron node
- Set your desired schedule (e.g., daily, weekly)
- Configure appropriate timing to avoid API rate limits

Adding filters - customize the "Filter Node Files" code node to:
- Filter specific node types
- Include/exclude certain categories
- Process only recently updated nodes

(A sketch of this filter node appears at the end of this section.)

**Features**

- Fetches all node definitions from the n8n-io/n8n repository
- Filters for .node.json files only
- Downloads and parses metadata automatically
- Extracts key fields like node names, versions, categories, and documentation URLs
- Appends structured data to Google Sheets with batch processing
- Includes error handling and retry mechanisms
- Clears existing data before appending new information for fresh results

**Use Cases**

This workflow is ideal for:
- Tracking changes in official n8n node definitions over time
- Auditing node categories and documentation links for completeness
- Building custom dashboards from node metadata
- Community management and documentation maintenance
- Integration planning and compatibility analysis
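As a reference for the "Filter Node Files" code node mentioned above, here is a minimal sketch of how such a filter might look in an n8n Code node ("Run Once for All Items"). It assumes the preceding HTTP Request node returned a GitHub git/trees response with a `tree` array of `{ path, type }` entries — an assumption to adjust to the actual response shape in your workflow.

```javascript
// Possible shape of the "Filter Node Files" Code node: keep only .node.json blobs
// from a GitHub git/trees response and emit one n8n item per file path.
const tree = $input.first().json.tree ?? [];

return tree
  .filter(entry => entry.type === 'blob' && entry.path.endsWith('.node.json'))
  .map(entry => ({ json: { path: entry.path } }));
```

From here you can extend the `.filter()` predicate to exclude categories or match only recently updated paths, as described in the customization options.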
by InfraNodus
**Optimize Your Top Performing Website Content with Google Analytics, Firecrawl, and InfraNodus**

This template helps you:
- **extract** the top performing pages from your website using Google Analytics
- **scrape** the content of the pages using the Firecrawl API (HTTP node provided)
- **build a knowledge graph** for all these pages with the **topics** and **gaps** identified using InfraNodus
- understand the main concepts and topical clusters in your top-performing content, so you can create more of it, while also identifying the content gaps — structural holes between the topics that you can use to generate new content ideas
- have access to a knowledge graph visualization of your top performing content to explore it using the interactive network interface

**How it works**

This template uses InfraNodus to visualize and analyze your top performing content. It extracts the top pages from the Google Analytics data for the website you choose and scrapes their text content using the high-quality Firecrawl API. Then it ingests every page into an InfraNodus graph you specify. The graph can be used to explore the content visually. The insights from the graph, such as the main topics and the gaps between them, are shown to you at the end of the workflow.

You can use these insights to:
- understand what kind of content you should focus on creating to get the highest number of **views** and to establish **topical authority** in your area, which is good for **SEO** and **LLM optimization** — by focusing on the topics identified in the top content
- discover the content gaps — topics that are not connected yet, which you could link with new content ideas and publish. This caters to your audience's interests but connects your existing ideas in a new way, so you deliver content that is relevant but also novel.

**Note:** you can replace the PDF to Text convertor node with a better quality **PDF convertor** from ConvertAPI, which respects the original file layout and doesn't split text into small chunks.

Here's a step-by-step description:
1. Trigger the workflow.
2. Extract a list of top (25, 50) pages from your Google Analytics account (you'll need to connect it via the Google Cloud API).
3. Fix the extracted data and add a correct URL prefix to each page (if your Analytics reports relative paths only) — see the sketch at the end of this section.
4. Loop through each page extracted.
5. Extract the text content of every page using the high-quality Firecrawl API.
6. Ingest the text content into the InfraNodus graph that you specify.
7. Once all the pages are ingested into the InfraNodus graph, access the AI insights endpoint in InfraNodus and get the information about the main topics and gaps.
8. Display this information to the user.

**How to use**

You need an InfraNodus API account and key to use this workflow:
1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.

**Requirements**

- An InfraNodus account and API key
- Optional: a Google Analytics account for your property (alternatively, you can modify this workflow to provide a list of the most popular pages)
- Optional: Google Cloud API access (to access the data from your Google Analytics account — follow the n8n instructions)
- Optional: a Firecrawl API key for better quality web page scraping (otherwise, use the standard HTTP to Text node from n8n)

**Customizing this workflow**

You can customize this workflow by using a list of the URL pages you want to analyze from a Google Sheet. Alternatively, you can use the Google SERP node to extract the top search results for a query and get the main topics for them.

For support and feedback, please contact us at https://support.noduslabs.com

To learn more about InfraNodus: https://infranodus.com
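Step 3 (adding a URL prefix) could look roughly like the following n8n Code node snippet. The `pagePath` field name and the site origin are assumptions — adapt them to the fields your Google Analytics node actually returns.

```javascript
// Sketch of the "fix URL prefix" step: Google Analytics often reports page paths
// ("/blog/post") rather than full URLs, so prepend your site origin before scraping.
const origin = 'https://your-site.example';

return $input.all().map(item => {
  const path = item.json.pagePath ?? '';
  return {
    json: {
      ...item.json,
      // Keep absolute URLs untouched; prefix relative paths with the site origin.
      url: path.startsWith('http') ? path : origin + path,
    },
  };
});
```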
by Cyril Nicko Gaspar
📌 AI Agent Template with Bright Data MCP Tool Integration

This template obtains all the available tools from Bright Data MCP, processes the user's message through a chatbot, and then runs whichever tool matches the user's query.

**❓ Problem It Solves**

MCP addresses the complexity of traditional automation, where users need specific knowledge of APIs or interfaces to trigger backend processes. By allowing interaction through natural language, automatically classifying and routing queries, and managing context and memory effectively, MCP simplifies complex data operations, customer support, and workflow orchestration scenarios where inputs and responses change dynamically.

**🧰 Pre-requisites**

Before deploying this template, ensure you have:
- An active n8n instance (self-hosted or cloud)
- A valid OpenAI API key (or another AI model)
- Access to the Bright Data MCP API with credentials
- Basic familiarity with n8n workflows and nodes

**⚙️ Setup Instructions**

1. **Install the MCP community node in n8n**
   - In your n8n self-hosted instance, go to Settings → Community Nodes.
   - Search for and install n8n-nodes-mcp.
2. **Configure credentials**
   - Add your OpenAI API key (or another AI model's key) to the relevant nodes. If you want a different AI model, replace all associated OpenAI nodes in the workflow.
   - Set up the Bright Data MCP client credentials in the installed community node (STDIO).
   - Obtain your API key in Bright Data and put it in the Environment field of the credentials window. It should be written as API_Key=<your api key from Bright Data>.

**🔄 Workflow Functionality (Summary)**

- A **user message** triggers the workflow.
- An **AI classifier** (OpenAI) interprets the intent and maps it to a tool from Bright Data MCP.
- If no match is found, the user is notified. If more information is needed, the AI requests it.
- **Memory** preserves context for follow-up actions.
- The tool is executed, and results are returned contextually to the user (see the routing sketch at the end of this section).

> 🧠 Optional memory buffer and chat memory manager nodes keep conversations context-aware across multiple messages.

**🧩 Use Cases**

- **Data scraping automation**: trigger scraping tasks via chat.
- **Lead generation bots**: use MCP tools to fetch, enrich, or validate data.
- **Customer support agents**: automatically classify and respond to queries with tool-backed answers.
- **Internal workflow agents**: let team members trigger backend jobs (e.g., reports, lookups) by chatting naturally.

**🛠️ Customization**

- **Tool matching logic**: modify the AI classifier prompt and schema to suit different APIs or services.
- **Memory size and retention**: adjust the memory buffer size and filtering to fit your app's complexity.
- **Tool execution**: extend the "Execute the tool" sub-workflow to handle additional actions, fallback strategies, or logging.
- **Frontend integration**: connect this with various platforms (e.g., WhatsApp, Slack, web chatbots) using the webhook.

**✅ Summary**

This template delivers a powerful no-code/low-code agent that turns chat into automation, combining AI intelligence with real-world tool execution. With minimal setup, you can build contextual, dynamic assistants that drive backend operations using natural language.
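To illustrate the routing idea described under "Workflow Functionality", here is a hypothetical JavaScript sketch of how a classifier result might be matched against the tool list returned by the Bright Data MCP client. The field names (toolName, intent, missingParameters, arguments) are assumptions for illustration, not the exact output of the workflow's nodes.

```javascript
// Illustrative sketch only: match the classifier's chosen tool name against the
// MCP tool list, ask for missing parameters, or hand off for execution.
function routeToTool(classified, availableTools) {
  const tool = availableTools.find(t => t.name === classified.toolName);

  if (!tool) {
    return { reply: `Sorry, I could not find a tool for: ${classified.intent}` };
  }
  if (classified.missingParameters?.length) {
    return { reply: `I need more information: ${classified.missingParameters.join(', ')}` };
  }
  return { execute: { tool: tool.name, arguments: classified.arguments ?? {} } };
}
```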
by Solido AI
How it works: This bot operates in a continuous WhatsApp monitoring loop. It analyzes messages to detect keywords in common questions (like hours, prices, and location) and sends automatic replies with predefined information. For unrecognized questions, it directs the user to manual assistance. Set up steps: The initial setup involves integrating with the WhatsApp API, registering keywords and their respective responses, and defining the fallback flow. It takes only a few minutes to have the bot running with essential information.
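As an illustration of the keyword detection and fallback described above, here is a minimal JavaScript sketch; the keyword map and reply texts are placeholders you would replace with your own registered answers.

```javascript
// Placeholder keyword → reply map; register your own questions and answers here.
const replies = {
  hours: 'We are open Monday to Friday, 9am to 6pm.',
  price: 'You can find our current price list at example.com/prices.',
  location: 'We are located at 123 Example Street.',
};

// Return the matching canned reply, or the fallback that routes to manual assistance.
function answer(message) {
  const text = message.toLowerCase();
  const hit = Object.keys(replies).find(keyword => text.includes(keyword));
  return hit ? replies[hit] : 'Thanks for your message! A team member will reply shortly.';
}
```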
by Solido AI
How it works: This system functions by receiving expenses via webhook POST. It validates the data, stores it in Google Sheets, and, daily at 8 PM, generates and sends financial summaries. Automatic categorization simplifies the organization of expenses. Set up steps: Setup involves creating the Google Sheet, configuring the webhook, and defining the categorization rules. The process is quick and intuitive, taking about 10-15 minutes for the system to be ready to receive your expenses.
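For reference, here is a hypothetical example of how an expense could be posted to the workflow's webhook; the webhook path and field names are assumptions — match them to your own webhook node and sheet columns.

```javascript
// Hypothetical expense submission (Node 18+, global fetch). Path and fields are
// placeholders, not the workflow's actual schema.
fetch('https://your-n8n.example/webhook/expenses', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    amount: 42.5,
    description: 'Team lunch',
    date: '2024-05-01',
    category: 'food', // optional — the workflow can also categorize automatically
  }),
});
```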
by Jez
**Workflow: Automated Weekly Google Calendar Summary via Email with AI ✨🗓️📧**

Get a personalized, AI-powered summary of your upcoming week's Google Calendar events delivered straight to your inbox! This workflow automates the entire process, from fetching events to generating an intelligent summary and emailing it to you.

**🌟 Overview**

This n8n workflow connects to your Google Calendar, retrieves events for the upcoming week (Monday to Sunday, based on the day the workflow runs), uses Google Gemini AI to create a well-structured and insightful summary, and then emails this summary to you. It's designed to help you start your week organized and aware of your commitments.

Key features:
- **Automated weekly summary:** runs on a schedule (default: weekly) to keep you updated.
- **AI-powered insights:** leverages Google Gemini to not just list events, but to identify important ones and offer a brief weekly outlook.
- **Personalized content:** uses your specified timezone, locale, name, and city for accurate and relevant information.
- **Clear formatting:** events are grouped by day and displayed chronologically with start and end times. Important events are highlighted.
- **Email delivery:** receive your schedule directly in your inbox in a clean HTML format.
- **Customizable:** easily adapt it to your specific calendar, AI preferences, and email settings.

**⚙️ How It Works: Step-by-Step**

The workflow consists of the following nodes, working in sequence:

1. weekly_schedule (Schedule Trigger): initiates the workflow. By default it triggers once a week at 12:00 PM; you can adjust this to your preference (e.g., Sunday evening or Monday morning).
2. locale (Set Node): this is a crucial node for you to configure! It sets user-specific parameters like your preferred language/region (users-locale), timezone (users-timezone), your name (users-name), and your home city (users-home-city). These are used throughout the workflow for correct date/time formatting and personalizing the AI prompt.
3. date-time (Set Node): dynamically generates various date and time strings based on the current execution time and the locale settings. This is used to define the precise 7-day window (from the current day to 7 days ahead, ending at midnight) for fetching calendar events.
4. get_next_weeks_events (Google Calendar Node): connects to your specified Google Calendar and fetches all events within the 7-day window calculated by the date-time node. Requires Google Calendar API credentials and the ID of the calendar you want to use.
5. simplify_evens_json (Code Node): runs a small JavaScript snippet to clean up the raw event data from Google Calendar. It removes several fields that aren't needed for the summary (like htmlLink, etag, iCalUID), making the data more concise for the AI. (A sketch of this step appears at the end of this section.)
6. aggregate_events (Aggregate Node): takes all the individual (and now simplified) event items and groups them into a single JSON array called eventdata. This is the format the AI agent expects for processing.
7. Google Gemini (LM Chat Google Gemini Node): the connection point to the Google Gemini language model. Requires Google Gemini (or PaLM) API credentials.
8. event_summary_agent (Agent Node): this is where the magic happens! It uses the Google Gemini model and a detailed system prompt to generate the weekly schedule summary. The prompt instructs the AI to:
   - Start with a friendly greeting.
   - Group events by day (Monday to Sunday) for the upcoming week, using the user's timezone and locale.
   - Format event times clearly (e.g., 09:30 AM - 10:30 AM: Event Summary).
   - Identify and prefix "IMPORTANT:" to events with keywords like "urgent," "deadline," "meeting," etc., in their summary or description.
   - Conclude with a 1-2 sentence helpful insight about the week's schedule.
   - Process the input eventdata (the JSON array of calendar events).
9. Markdown (Markdown to HTML Node): converts the text output from the event_summary_agent (which is generated in Markdown format for easy structure) into HTML. This ensures the email body is well-formatted with proper line breaks, lists, and emphasis.
10. send_email (Email Send Node): sends the final HTML summary to your specified email address. Requires SMTP (email sending) credentials and your desired "From" and "To" email addresses.

**🚀 Getting Started: Setup Instructions**

Follow these steps to get the workflow up and running:

1. Import the workflow: download the workflow JSON file, go to "Workflows" in your n8n instance, click "Import from File," and select the downloaded JSON file.
2. Configure credentials. You'll need to set up credentials for three services (in n8n, go to "Credentials" in the left sidebar and click "Add credential"):
   - Google Calendar API: search for "Google Calendar" and create new credentials using OAuth2. Follow the authentication flow. Once created, select these credentials in the get_next_weeks_events node.
   - Google Gemini (PaLM) API: search for "Google Gemini" or "Google PaLM" and create new credentials. You'll typically need an API key from Google AI Studio or Google Cloud. Once created, select these credentials in the Google Gemini node.
   - SMTP / Email: search for your email provider (e.g., "SMTP," "Gmail," "Outlook") and create credentials. This usually involves providing your email server details, username, and password/app password. Once created, select these credentials in the send_email node.
3. ‼️ IMPORTANT: customize the user settings in the locale node. Open the locale node and update the following values in the "Assignments" section:
   - users-locale: your locale string (e.g., "en-AU" for English/Australia, "en-US" for English/United States, "de-DE" for German/Germany). This affects how dates, times, and numbers are formatted.
   - users-timezone: your timezone string (e.g., "Australia/Sydney", "America/New_York", "Europe/London"). This is critical for ensuring event times are displayed correctly for your location.
   - users-name: your name (e.g., "Bob"). This is used to personalize the email greeting.
   - users-home-city: your home city (e.g., "Sydney"). This can be used for additional context by the AI.
4. Configure the get_next_weeks_events (Google Calendar) node. In the "Calendar" parameter, specify which calendar to fetch events from. The default might be a placeholder like c_4d9c2d4e139327143ee4a5bc4db531ffe074e98d21d1c28662b4a4d4da898866@group.calendar.google.com. Change this to your primary calendar (often your email address) or the specific Calendar ID you want to use. You can find Calendar IDs in your Google Calendar settings.
5. Configure the send_email node. Set the fromEmail parameter to the address you want the summary sent from, and the toEmail parameter to the address(es) where you want to receive it. You can also customize the subject line if desired.
6. (Optional) Customize the AI prompt in event_summary_agent. If you want to change how the AI summarizes events (e.g., different keywords for important events, a different tone, or specific formatting tweaks), edit the "System Message" within the event_summary_agent node's parameters.
7. (Optional) Adjust the schedule in weekly_schedule. Open the weekly_schedule node and modify the "Rule" to change when and how often the workflow runs (e.g., a specific day of the week, a different time).
8. Activate the workflow. Once everything is configured, toggle the "Active" switch in the top right corner of the workflow editor to ON.

**📬 What You Get**

You'll receive an email (based on your schedule) with a subject like "Next Week Calendar Summary : [Start Date] - [End Date]". The email body will contain:
- A friendly greeting.
- Your schedule for the upcoming week (Monday to Sunday), with events listed chronologically under each day.
- Event times displayed in your local timezone (e.g., 09:30 AM - 10:30 AM: Team Meeting).
- Priority events clearly marked (e.g., IMPORTANT: 02:00 PM - 03:00 PM: Project Deadline Review).
- A brief, insightful observation about your week's schedule.

**🛠️ Troubleshooting & Notes**

- **Timezone is key:** ensure your users-timezone in the locale node is correct. This is the most common source of incorrect event times.
- **Google API permissions:** when setting up Google Calendar and Gemini credentials, make sure you grant the necessary permissions.
- **AI output varies:** the AI-generated summary can vary slightly each time. The prompt is designed to guide it, but LLMs have inherent creativity.
- **Calendar event details:** the quality of the summary (especially for identifying important events) depends on how detailed your calendar event titles and descriptions are. Including keywords like "meeting," "urgent," "prepare for," etc., in your events helps the AI.

**💬 Feedback & Contributions**

Feel free to modify and enhance this workflow! If you have suggestions, improvements, or run into issues, please share them in the n8n community. Happy scheduling!
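As referenced in step 5 of the node walkthrough above, here is a minimal sketch of what the simplify_evens_json Code node does: it drops fields the summary doesn't need so the AI prompt stays compact. The exact field list in the actual workflow may differ.

```javascript
// Drop Google Calendar fields the AI summary does not need, keep everything else.
return $input.all().map(item => {
  const { htmlLink, etag, iCalUID, ...rest } = item.json;
  return { json: rest };
});
```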
by Ranjan Dailata
**Who this is for**

The LinkedIn Company Story Generator is an automated workflow that extracts company profile data from LinkedIn using Bright Data's web scraping infrastructure, then transforms that data into a professionally written narrative or story using a language model (e.g., OpenAI, Gemini). The final output is sent via webhook notification, making it easy to publish, review, or further automate.

This workflow is tailored for:
- **Marketing professionals**: seeking to generate compelling company narratives for campaigns.
- **Sales teams**: aiming to understand potential clients through summarized company insights.
- **Content creators**: looking to craft stories or articles based on company data.
- **Recruiters**: interested in obtaining concise overviews of companies for talent acquisition strategies.

**What problem is this workflow solving?**

Manually gathering and summarizing company information from LinkedIn can be time-consuming and inconsistent. This workflow automates the process, ensuring:
- **Efficiency**: quick extraction and summarization of company data.
- **Consistency**: standardized summaries for uniformity across use cases.
- **Scalability**: the ability to process multiple companies without additional manual effort.

**What this workflow does**

1. **Input acquisition**: receives a company's name or LinkedIn URL as input.
2. **Data extraction**: utilizes Bright Data to scrape the company's LinkedIn profile.
3. **Information parsing**: processes the extracted HTML content to retrieve relevant company details.
4. **Summarization**: employs Google Gemini AI to generate a concise company story.
5. **Output delivery**: sends the summarized content to a specified webhook or email address (see the hypothetical payload sketch at the end of this section).

**Setup**

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the LinkedIn URL by navigating to the Set LinkedIn URL node.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

**How to customize this workflow to your needs**

- **Input variations**: modify the Set LinkedIn URL node to accept a different company LinkedIn URL.
- **Data points**: adjust the HTML Data Extractor node to retrieve additional details like employee count, industry, or headquarters location.
- **Summarization style**: customize the AI prompt to generate summaries in different tones or formats (e.g., formal, casual, bullet points).
- **Output destinations**: configure the output node to send summaries to various platforms, such as Slack, CRM systems, or databases.
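To illustrate the output delivery step, here is a hypothetical example of what the final webhook notification could look like; the endpoint and field names are purely illustrative — adjust them to whatever your receiving system expects.

```javascript
// Hypothetical shape of the final webhook notification (Node 18+, global fetch).
fetch('https://your-endpoint.example/company-story', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    company: 'Acme Corp',
    linkedinUrl: 'https://www.linkedin.com/company/acme-corp/',
    story: 'Acme Corp began as a two-person workshop and has grown into ...',
  }),
});
```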
by KPendic
**How it works**

This workflow exports all your Cloudflare domains to a Google Sheet to give you a high-level overview of all of your settings. This can help with debugging, searching, or similar needs.

The flow uses simple paging nodes to iterate over all your domains, because this list can be huge (see the paging sketch at the end of this section). For each host we merge the DNS records and zone settings and transform them into columns for all our domains.

**Requirements**

For storing and processing the data in this flow you will need:
- A Cloudflare.com API key/token with full access, for retrieving your data (https://dash.cloudflare.com/:account/api-tokens)
- Google Spreadsheet auth connected in your n8n Credentials
- The Google Spreadsheet template - you can copy my sheet as a starting point; start by copying it to your account
- Match the Sheet ID in the 'Export' node to your newly created sheet

**Official Cloudflare API documentation**

For full details and specifications please use the API documentation at: https://developers.cloudflare.com/api/

**Potential API timeouts**

If you encounter Cloudflare API timeouts, I would suggest adding a simple sleep/wait node somewhere in the loop - for a couple of seconds - and it should resolve the timeouts.

**Google Sheet**

I've used Google Sheets' simple conditional formatting feature to visually distinguish the on/off toggles that were of interest to me, so I can easily get a high-level overview when debugging some of the settings on my hosts - but feel free to use your own logic or change it completely.
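For reference, the paging idea used in this flow looks roughly like the following plain-JavaScript sketch. The response fields (result, result_info.total_pages) follow Cloudflare's documented v4 zones API, but verify them against the API documentation linked above.

```javascript
// Collect every Cloudflare zone by requesting the zones list page by page.
async function listAllZones(token) {
  const zones = [];
  let page = 1;
  let totalPages = 1;

  while (page <= totalPages) {
    const res = await fetch(
      `https://api.cloudflare.com/client/v4/zones?page=${page}&per_page=50`,
      { headers: { Authorization: `Bearer ${token}` } }
    ).then(r => r.json());

    zones.push(...res.result);
    totalPages = res.result_info.total_pages;
    page += 1;
  }
  return zones;
}
```

In the workflow this loop is built from HTTP Request and paging nodes rather than code; adding a short Wait node inside it is the timeout mitigation mentioned above.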
by Jay Emp0
🔥 **Upgrade to V3 - Longer blogs, higher SEO ranking with images, charts and tables**

We've released Version 3 of our AI-Powered Blog Automation workflow. We heard your complaints and made a complete redesign built for serious content creators.

📝 Read the New Articles Generated by v3
🛒 View the workflow on n8n.io

- ✅ **Longer blog content**: a 3-agent AI architecture (Planner, Writer, Editor) simulates a full content team for structure, writing, and QC, with a more continuous flow.
- 📈 **2x bump in SEO ranking**: an SEO scoring system grades every article on keyword density, readability, structure, backlinks, and uniqueness. If quality doesn't meet the threshold, the content is revised again.
- 🖼️ **Smarter visuals**: in-blog images via Leonardo, charts via QuickChart, tables, and web-scraped outbound links.
- 🕸️ **Multi-platform publishing**: auto-posts to WordPress, Twitter (X), and Dev.to.
- 🕵️ **Research agent**: adds quotes, stats, facts, outbound links, entities, and references to improve article credibility.

**Content Farming V2 - AI-Powered Blog Automation for WordPress**

This workflow automatically generates and publishes 10 blog posts per day to a WordPress site. It collects tech-related news articles, filters and analyzes them for relevance, expands them with research, generates SEO-optimized long-form articles using AI, creates a matching image using Leonardo AI, and publishes them via the WordPress REST API. Every step is tracked and stored in MongoDB for reference and performance tracking. You can see the demo results for the AI-based articles here: Emp0 Articles

**How it works**

1. A scheduler runs daily to fetch the latest news from RSS feeds including BBC, TechCrunch, Wired, MIT Tech Review, HackerNoon, and others.
2. The RSS data is normalized and filtered to include only articles published within the past 24 hours (see the sketch at the end of this section).
3. Each article is passed through an OpenAI-powered classifier to check for relevance to predefined user topics like AI, robotics, or tech policy.
4. Relevant articles are then aggregated, researched, and summarized with supporting sources and citations.
5. An AI agent generates five long-tail SEO blog title ideas, ranks them by uniqueness and performance score, and selects the top one.
6. A blog outline is created including H1 and H2 headers, keyword targeting, content structure, and featured snippet optimization.
7. A full-length article (1000 to 1500 words) is generated based on the outline, with analogies, citations, examples, and keyword density maintained.
8. SEO metadata is produced including meta title, description, image alt text, slug, and a readability audit.
9. An AI-generated image is created based on the blog theme using Leonardo AI, enhanced for emotional storytelling and visual consistency.
10. The blog article, metadata, and image are uploaded to WordPress as a draft, the image is attached, Yoast SEO metadata is set, and the article is published.
11. All outputs including article versions, metadata, generation steps, and final blog URLs are stored in MongoDB to allow for future analytics and feedback.

**Requirements**

To run this project, you need accounts and API access for the following:

| Tool | Purpose | Notes |
|-------------|------------------------------------------------------------|-----------------------------------------------------------------------|
| OpenAI | Used for blog classification, generation, summarization, SEO | Around $0.20 per day, using GPT-4o-mini. Estimated monthly: $6 |
| MongoDB | Stores data flexibly including drafts, titles, metadata, logs | Free tier on MongoDB Atlas offers 512 MB, enough for 64,000 articles |
| Leonardo AI | Generates featured images for blog articles | $9 for 3500 credits, $5 monthly top-up needed for 300 images |
| WordPress | Final publishing platform via REST API | Hosted on Hostinger for $15/year including domain |

**Setup Instructions**

1. Import the provided JSON file into your n8n instance.
2. Configure these credentials in n8n:
   - OpenAI API key
   - MongoDB Atlas connection string
   - HTTP Header Auth for Leonardo AI
   - WordPress REST API credentials
3. Modify the classifier and prompt nodes to reflect your preferred content themes.
4. Adjust the scheduler nodes if you want to change post frequency or publishing times.
5. Run the n8n instance continuously using Docker, PM2, or a hosted automation platform.

**Cost Estimate**

| Component | Daily Usage | Monthly Cost Estimate |
|-------------|-------------------------------------|------------------------|
| OpenAI | 10 posts per day | ~$6 |
| Leonardo AI | 10 images per day (15 credits each) | ~$14 (9 base + 5 top-up) |
| MongoDB | Free up to 512 MB | $0 |
| WordPress | Hosting and domain | ~$1.25 |
| Total | | ~$21/month |

**Observations and Learnings**

This system can scale daily article publishing with zero manual effort. However, current limitations include inconsistent blog length and occasional coherence issues. To address this, I plan to build a feedback loop within the workflow:
- An SEO Commentator Agent will assess keyword strength, structure, and discoverability.
- An Editor-in-Chief Agent will review tone, clarity, and narrative structure.
- Both agents will loop back suggestions to the content generator, improving each draft until it meets human-level standards.

The final goal is to consistently produce high-quality, readable, SEO-optimized content that is indistinguishable from human writing.
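As referenced in step 2 of "How it works", the "past 24 hours" filter could be expressed in an n8n Code node roughly like this; the `pubDate` field name is an assumption — use whatever date field your RSS Read node outputs.

```javascript
// Keep only RSS items published within the last 24 hours.
const cutoff = Date.now() - 24 * 60 * 60 * 1000;

return $input.all().filter(item =>
  new Date(item.json.pubDate).getTime() >= cutoff
);
```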
by Khairul Muhtadin
The Page Speed Insight workflow automates website performance analysis by integrating the Google PageSpeed Insights API with Discord messaging and Gemini. This n8n workflow provides expert-level performance audits and comparisons, delivering actionable insights for website owners, SEO professionals, and developers.

Disclaimer: this workflow uses community nodes (Google PageSpeed Insights Community Node).

**💡 Why Use Page Speed Insight?**

- **Save time:** instantly analyze and compare website speeds without manual tool usage
- **Eliminate guesswork:** receive expert audit reports that translate technical data into clear, actionable insights
- **Improve website outcomes:** identify critical bottlenecks and enhancements prioritized by AI-driven analysis
- **Seamless integration:** pull URLs and deliver reports directly via Discord for team collaboration and immediate response

**⚡ Who Is This For?**

- Webmasters and website owners seeking fast, automated performance checks
- SEO analysts who need consistent, data-backed website comparisons
- Developers requiring clear, prioritized action points from performance audits
- Digital agencies managing multiple client sites with ongoing monitoring needs

**🔧 What This Workflow Does**

- ⏱ **Trigger:** Discord message containing URLs, or scheduled execution
- 📎 **Parse:** extracts URLs and determines the analysis type (single/comparison)
- 🔍 **Analyze:** calls the Google PageSpeed API for performance data
- 🤖 **Process:** AI generates user-friendly reports from the raw Lighthouse JSON
- 💌 **Deliver:** sends chunked reports to Discord channels (see the chunking sketch at the end of this section)
- 🗂 **Log:** stores execution data for review and improvement

**🔐 Setup Instructions**

1. Import the provided JSON workflow into your n8n instance.
2. Set up credentials for:
   - Google PageSpeed API (ensure you have a valid API key — get yours here)
   - Discord Bot API with permissions to read and send messages in your chosen guild/channel
3. Customize the workflow by adjusting:
   - The Discord guild and channel IDs where messages are monitored and results posted
   - The scheduled trigger interval, if needed
   - Any prompt text or AI model parameters to tailor report tone and detail level
4. Test thoroughly with real URLs and Discord interaction to confirm smooth data flow and output quality.

**🧩 Pre-Requirements**

- Active n8n instance (Cloud or self-hosted)
- n8n Google PageSpeed community node
- Google PageSpeed Insights API key
- Discord Bot credentials with channel access
- Google Gemini AI credentials (recommended)

**🛠️ Customize It Further**

- Extend the workflow to analyze desktop performance or other device types by modifying the PageSpeed API call
- Integrate with Slack, email, or other team tools alongside Discord for broader notification
- Enhance report depth by adding more AI-driven insights like competitor site recommendations or historical trend tracking

**🧠 Nodes Used**

- Google PageSpeed Insights Community Node
- Discord (getAllMessages, sendMessage)
- Code (URL parsing, message chunking)
- AI Language Model (Google Gemini)
- Schedule Trigger
- Switch (message type handling)
- Sticky Notes (workflow guidance)

**📞 Support**

Made by: khaisa Studio
Tag: automation, performance, SEO, google-pagespeed, discord
Category: Monitoring & Reporting

Need a custom solution? Contact Me
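As a reference for the message-chunking step mentioned under "What This Workflow Does", here is a minimal JavaScript sketch: Discord limits messages to 2000 characters, so the report is split on line breaks before sending. The function name and splitting strategy are illustrative, not the exact code used in the workflow.

```javascript
// Split a long report into Discord-sized chunks, breaking on line boundaries.
function chunkForDiscord(report, limit = 2000) {
  const chunks = [];
  let current = '';

  for (const line of report.split('\n')) {
    if ((current + '\n' + line).length > limit) {
      chunks.push(current);
      current = line;
    } else {
      current = current ? current + '\n' + line : line;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```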