by noda
🧩 What this template does

This workflow builds a 120-minute local date course around your starting point by querying Google Places for nearby spots, selecting the top candidates, fetching real-time weather data, letting an AI generate a matching emoji, and drafting a friendly itinerary summary with an LLM in both English and Japanese. It then posts the full bilingual plan with a walking route link and weather emoji to Slack.

👥 Who it’s for

Makers and teams who want a plug-and-play bilingual local itinerary generator with weather awareness — no custom code required.

⚙️ How it works

1. Trigger – Manual (or schedule/webhook).
2. Discovery – Google Places nearby search within a configurable radius.
3. Selection – Rank by rating and pick the top 3.
4. Weather – Fetch current weather (via OpenWeatherMap).
5. Emoji – Use an AI model to match the weather with an emoji 🌤️.
6. Planning – An LLM writes the itinerary in Markdown (JP + EN).
7. Route – Compose a Google Maps walking route URL (see the sketch below).
8. Share – Post the bilingual itinerary, route link, and weather emoji to Slack.

🧰 Requirements

- n8n (Cloud or self-hosted)
- Google Maps Platform (Places API)
- OpenWeatherMap API key
- Slack Bot (chat:write)
- LLM provider (e.g., OpenRouter or DeepL for translation)

🚀 Setup (quick)

1. Open Set → Fields: Config and fill in coords/radius/time limit.
2. Connect credentials for Google, OpenWeatherMap, Slack, and your LLM.
3. Test the workflow and confirm the bilingual plan + weather emoji appear in Slack.

🛠 Customize

- Adjust ranking filters (type, min rating).
- Modify translation settings (target language or tone).
- Change output layout (side-by-side vs separated).
- Tune emoji logic or travel mode.
- Add error handling, retries, or logging for production use.
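For reference, here is a minimal sketch of how the walking route URL could be composed in an n8n Code node, assuming the selected spots arrive with lat and lng fields. The field names, the example origin coordinates, and the node layout are assumptions, not the template's exact implementation:

```javascript
// n8n Code node: build a Google Maps walking route URL from the selected spots.
// Assumes each incoming item exposes coordinates as json.lat / json.lng.
const spots = $input.all().map(item => item.json);

const origin = '35.6595,139.7005'; // example start point; replace with your Config values
const stops = spots.map(s => `${s.lat},${s.lng}`);
const destination = stops[stops.length - 1];
const waypoints = stops.slice(0, -1).join('|');

const routeUrl =
  'https://www.google.com/maps/dir/?api=1' +
  `&origin=${encodeURIComponent(origin)}` +
  `&destination=${encodeURIComponent(destination)}` +
  (waypoints ? `&waypoints=${encodeURIComponent(waypoints)}` : '') +
  '&travelmode=walking';

return [{ json: { routeUrl } }];
```

The resulting routeUrl can then be interpolated into the Slack message alongside the itinerary and weather emoji.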
by MANISH KUMAR
Shopify Digital Product Automation (from just an image to a complete Shopify product page)

This Shopify Digital Product Automation is an advanced n8n-powered workflow that leverages AI (Google Gemini), Airtable, and the Shopify API to generate product details from images and automatically post them to Shopify. It fully automates the process — from uploading images to publishing Shopify products — with minimal manual effort.

💡 Key Advantages

Our Shopify Digital Product Automation offers five core advantages:

- 🔗 **Shopify Product Sync** — Automatically posts product details including title, description, SEO fields, and matched category to Shopify.
- ✍️ **AI-Powered Product Generation** — Gemini analyzes uploaded images and generates engaging, SEO-friendly product titles, descriptions, and metadata.
- 🗂️ **Structured Output** — Outputs JSON-ready product data compatible with Shopify, ensuring smooth automation.
- 📄 **Airtable Integration** — Tracks uploaded images, analyzed data, and generated products to prevent duplication.
- 📤 **End-to-End Automation** — Handles the complete workflow from image upload to Shopify posting without manual intervention.

⚙️ How It Works

The workflow follows a step-by-step automated process:

Step-by-Step Process

1. Upload Images – Add your digital artwork or poster images to Google Drive and record them in Airtable.
2. Image Analysis – AI fetches new images and analyzes visual elements like characters, series, poster text, and style.
3. Store Analysis Results – Updates analyzed data in Airtable and marks images as Used.
4. Fetch Shopify Collections – Retrieves current collections from your Shopify store for category matching.
5. Generate Product Details – Gemini generates the product title, description, matched category, SEO page title, meta description, and URL handle.
6. Save Generated Products – Stores generated product details in Airtable and marks them as generated.
7. Post Products to Shopify – Automatically creates new products in Shopify using the API (see the payload sketch below).
8. Update Status – Marks products as posted in Airtable after successful posting.

🛠️ Setup Steps

Required Node Configuration

To implement this workflow, configure the following n8n nodes:

- **Trigger Node** – Start the workflow manually or via scheduler.
- **Airtable Node** – Fetch raw images and store processed product details.
- **Google Drive Node** – Access image files.
- **HTTP Request Node** – Fetch Shopify collections.
- **Code Node** – Refine AI outputs and format product data.
- **Split & Limit Nodes** – Process images and products in batches.
- **LangChain / Gemini Node** – Generate product titles, descriptions, and SEO data.
- **Shopify Node** – Create products via the Shopify API.
- **Status Update Node** – Update Airtable with processing and posting status.

🔐 Credentials Required

Before running the workflow, ensure you have the following credentials configured:

- **Shopify Access Token** – For posting products and fetching collections.
- **Gemini API Key** – For AI-powered product generation.
- **Airtable API Key** – For storing and tracking workflow data.
- **Google Drive OAuth** – To access image files.

👤 Ideal For

This automation workflow is designed for:

- Shopify store owners managing hundreds of digital products
- Ecommerce teams automating product listings
- Marketing teams needing scalable, AI-driven product content workflows

💬 Bonus Tip

The workflow is fully modular and customizable.
You can extend it to:

- Automatically assign prices or discounts
- Multi-language product description generation
- Social media promotion of new products
- Email campaign integration

All extensions can be implemented within the same n8n flow, making it a complete digital product automation solution.
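As an illustration of the posting step, here is a minimal sketch of a Code node that assembles a Shopify product payload from the generated fields before the Shopify or HTTP Request node performs the create call. The incoming field names (title, description, seoTitle, seoDescription, handle) are assumptions about the Gemini output and should be adapted to your Airtable columns:

```javascript
// n8n Code node: assemble the Shopify product payload from the generated data.
// Field names on the incoming item are assumptions; adjust to your own schema.
const p = $input.first().json;

const payload = {
  product: {
    title: p.title,
    body_html: p.description,
    handle: p.handle,                                     // URL handle
    metafields_global_title_tag: p.seoTitle,              // SEO page title
    metafields_global_description_tag: p.seoDescription,  // SEO meta description
    status: 'active',
  },
};

return [{ json: payload }];
```

The downstream Shopify or HTTP Request node can then send this object to the product creation endpoint for your store's API version.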
by Daniel Shashko
This workflow automates the process of monitoring multiple RSS feeds, intelligently identifying new articles, maintaining a record of processed content, and delivering timely notifications to a designated Slack channel. It leverages AI to ensure only truly new and relevant articles are dispatched, preventing duplicate alerts and information overload.

🚀 Main Use Cases

- **Automated News Aggregation:** Continuously monitor industry news, competitor updates, or specific topics from various RSS feeds. 📈
- **Content Curation:** Filter and deliver only new, unprocessed articles to a team or personal Slack channel. 🎯
- **Duplicate Prevention:** Maintain a persistent record of seen articles to avoid redundant notifications. 🛡️
- **Enhanced Information Delivery:** Provide a streamlined and intelligent way to stay updated without manual checking. 📧

How it works

The workflow operates in distinct, interconnected phases to ensure efficient and intelligent article delivery:

1. RSS Feed Data Acquisition 📥
   - **Initiation:** The workflow is manually triggered to begin the process. 🖱️
   - **RSS Link Retrieval:** It connects to a Baserow database to fetch a list of configured RSS feed URLs. 🔗
   - **Individual Feed Processing:** Each RSS feed URL is then processed independently. 🔄
   - **Content Fetching & Parsing:** An HTTP Request node downloads the raw XML content of each RSS feed, which is then parsed into a structured JSON format for easy manipulation. 📄➡️🌳
2. Historical Data Management 📚
   - **Seen Articles Retrieval:** Concurrently, the workflow queries another Baserow table to retrieve a comprehensive list of article GUIDs or links that have been previously processed and notified. This forms the basis for duplicate detection. 🔍
3. Intelligent Article Filtering with AI 🧠
   - **Data Structuring for AI:** A Code node prepares the newly fetched articles and the list of already-seen articles into a specific JSON structure required by the AI Agent (a sketch of this step follows below). 🏗️
   - **AI-Powered Filtering:** An AI Agent, powered by an OpenAI Chat Model and supported by a Simple Memory component, receives this structured data. It is precisely prompted to compare the new articles against the historical "seen" list and return only those articles that are genuinely new and unprocessed. 🤖
   - **Output Validation:** A Structured Output Parser ensures that the AI Agent's response adheres to a predefined JSON schema, guaranteeing data integrity for subsequent steps. ✅
   - **JSON Cleaning:** A final Code node takes the AI's raw JSON string output, parses it, and formats it into individual n8n items, ready for notification and storage. 🧹
4. Notification & Record Keeping 🔔
   - **Persistent Record:** For each newly identified article, its link is saved to the Baserow "seen products" table, marking it as processed and preventing future duplicate notifications. 💾
   - **Slack Notification:** The details of the new article (title, content, link) are then formatted and sent as a rich message to a specified Slack channel, providing real-time updates. 💬

Summary Flow:

Manual Trigger → RSS Link Retrieval (Baserow) → HTTP Request → XML Parsing | Seen Articles Retrieval (Baserow) → Data Structuring (Code) → AI-Powered Filtering (AI Agent, OpenAI, Memory, Parser) → JSON Cleaning (Code) → Save Seen Articles (Baserow) → Slack Notification 🎉

Benefits:

- **Fully Automated:** Eliminates manual checking of RSS feeds and Slack notifications. ⏱️
- **Intelligent Filtering:** Leverages AI to accurately identify and deliver only new content, avoiding duplicates. 💡
- **Centralized Data Management:** Utilizes Baserow for robust storage of RSS feed configurations and processed article history. 🗄️
- **Real-time Alerts:** Delivers timely updates directly to your team or personal Slack channel. ⚡
- **Scalable & Customizable:** Easily adaptable to monitor various RSS feeds and integrate with different Baserow tables and Slack channels. ⚙️

Setup Requirements:

- **Baserow API Key:** Required for accessing and updating your Baserow databases. 🔑
- **OpenAI API Key:** Necessary for the AI Agent to function. 🤖
- **Slack Credentials:** Either a Slack OAuth token (recommended for full features) or a Webhook URL for sending messages. 🗣️
- **Baserow Table Configuration:** A table with an rssLink column to store your RSS feed URLs, and a table with a Nom column to store the links of processed articles.

For any questions or further assistance, feel free to connect with me on LinkedIn: https://www.linkedin.com/in/daniel-shashko/
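To make the Data Structuring step concrete, here is a minimal sketch of the Code node that could prepare the two lists for the AI Agent. The referenced node names and field names (guid, link, Nom) are assumptions based on the description above and need to match your own workflow:

```javascript
// n8n Code node: structure new articles and the "seen" list for the AI Agent.
// Node and field names are assumptions; align them with your own setup.
const newArticles = $('XML Parsing').all().map(item => ({
  title: item.json.title,
  link: item.json.link,
  guid: item.json.guid || item.json.link,
}));

const seenLinks = $('Seen Articles Retrieval').all().map(item => item.json.Nom);

// The AI Agent compares newArticles against seenLinks and returns only unseen ones.
return [{ json: { newArticles, seenLinks } }];
```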
by Amirul Hakimi
Advanced AI Lead Enrichment & Cold Email Personalization with n8n, Airtable, Apify, and LLMs

Automated B2B Lead Nurturing: Hyper-Personalization for High-Converting Cold Email Campaigns

This powerful n8n automation workflow is designed to execute advanced B2B lead enrichment and hyper-personalization for cold email outreach. By orchestrating a complex chain of data scraping, AI analysis (via LLMs/GPT-4.1), and CRM synchronization (using Airtable), this workflow ensures every lead receives a highly tailored and relevant outreach message, maximizing conversion rates and minimizing manual effort.

Workflow Execution & Key Features

1. Airtable Trigger & Lead Qualification
   - The workflow is triggered by an Airtable webhook, pulling a new lead record (including name, email, and company URLs).
   - **Email Validation** is performed using **NeverBounce** to filter out invalid contacts.
   - Initial Lead Filtering screens for key demographic criteria (e.g., US: Yes or No? and target Headcount: >5, <30?). Only qualified B2B leads proceed, ensuring optimal resource allocation (see the filter sketch below).
2. Deep Web & Social Scraping (Apify Integration)
   - A **LinkedIn Company Scraper** and a **LinkedIn Profile Scraper** (via **Apify**) extract raw data from the lead's company and personal profiles.
   - A **Company Homepage Scraper** pulls the main website content for analysis.
   - The **Scrape Personal LinkedIn Posts** node retrieves recent activity for the ultimate personalization hook.
3. AI-Powered Data Synthesis & Variable Determination
   - Multiple OpenAI (GPT-4.1-mini/4.1) nodes analyze and structure the raw, cleaned text (Remove HTML nodes ensure clean inputs).
   - **Determine Valuable URLs** uses an LLM to smartly categorize and select key company pages (e.g., ==/about==, ==/solutions==, ==/case-studies==) for deeper scraping.
   - **Analyze Company/Mission**, **Analyze Offerings & Positioning**, **Analyze Process & Differentiation**, and **Analyze Proof of Success** nodes create factual, structured business summaries for the ultimate ICP research.
   - **Determine Variables** nodes create **pre-written, personalized cold email variables** (==company_specialty==, ==ICPofLead==, ==PainPointLeadSolves==, etc.) for different outreach strategies.
4. LinkedIn Post Personalization
   - An LLM (Craft Opening Line - Posts) analyzes recent LinkedIn activity to generate a hyper-specific, conversation-starting opener (e.g., "Saw your LinkedIn post about...").
   - Conditional logic (Posts Available?) determines whether to use the post-based opener or fall back to the standard, company-based personalization.
5. CRM Update & Campaign Launch (Instantly.ai)
   - Finalized, enriched lead data and the crafted personalization variables are synchronized back to the Airtable CRM for record-keeping and lead status updates (Update Lead W/ Enrichment).
   - The lead is then seamlessly pushed to the Instantly.ai outbound platform, injecting the AI-generated custom variables directly into the cold email sequence for mass deployment.

This blueprint automates the tedious, high-effort task of prospect research and personalization, providing a scalable lead generation solution that increases both outreach quality and sales velocity. Stop sending generic emails—start leveraging AI automation today.
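For illustration, here is a minimal sketch of the initial qualification filter as a Code node, assuming a US-only target and a 5–30 headcount band. The field names (emailResult, country, headcount) are assumptions about the Airtable record and NeverBounce output:

```javascript
// n8n Code node: initial lead qualification filter (sketch).
// Field names are assumptions; adapt them to your Airtable schema.
const lead = $input.first().json;

const isValidEmail = lead.emailResult === 'valid';             // NeverBounce verdict
const isUS = String(lead.country || '').toUpperCase() === 'US';
const headcount = Number(lead.headcount) || 0;
const headcountOk = headcount > 5 && headcount < 30;

if (isValidEmail && isUS && headcountOk) {
  return [{ json: lead }];   // qualified: continue to the scraping phase
}
return [];                   // unqualified: drop the lead
```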
by Club de Inteligencia Artificial Politécnico CIAP
🤖 Interactive Academic Chatbot (Telegram + MongoDB)

Overview 📋

This project is a template for building a complete academic virtual assistant using n8n. It connects to Telegram, answers frequently asked questions by querying MongoDB, keeps the community informed about key dates (via web scraping), and collects user feedback for continuous improvement.

How It Works

Architecture and Workflow ⚙️

- n8n: Orchestration of 3 workflows (chatbot, scraping worker, announcer).
- Telegram: Frontend for user interaction and sending announcements.
- MongoDB: Centralized database for FAQs, academic calendar, and feedback logs.
- Web Scraping: HTTP Request and HTML Extract nodes to read the university's web calendar.
- Cron: For automatic periodic executions (daily and weekly).

Core Processes 🧠

- Real-time reception of user queries via Telegram.
- Querying MongoDB collections for FAQ answers and calendar dates (see the query sketch below).
- Daily scraping of the university website to keep the calendar updated.
- Instant logging of user feedback (👍/👎) in MongoDB.
- Proactive sending of weekly announcements to the Telegram channel.

Key Benefits ✅

- Complete automation of student communication 24/7.
- An always-accurate academic calendar database without manual intervention.
- A built-in continuous improvement system through user feedback.
- Proactive communication of important events to the entire community.

Use Cases 💼

- Automation of student support in universities, colleges, and institutions.
- A virtual assistant for any organization needing to manage FAQs and a dynamic calendar.
- An automated announcements channel to keep a community informed.

Requirements 👨‍💻

- n8n instance (self-hosted or cloud).
- Credentials for a Telegram Bot (obtained from @BotFather).
- Credentials for a MongoDB database (Connection URI).
- URL of the academic calendar to be scraped.

Authors 👥

- Doménica Amores
- Nicole Guevara
- Adrián Villamar
- Mentor: Jaren Pazmiño

Applicants to the CIAP Polytechnic Artificial Intelligence Club
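As a rough illustration of the FAQ lookup, the sketch below shows how a Code node could turn an incoming Telegram message into a MongoDB query filter for a downstream Find operation. The collection field names (keywords, answer) are assumptions and must be adapted to your own schema:

```javascript
// n8n Code node: build a MongoDB query filter for the FAQ lookup (sketch).
const userText = ($input.first().json.message?.text || '').toLowerCase();

// Match any FAQ whose keyword list overlaps the user's question.
const words = userText.split(/\s+/).filter(w => w.length > 3);
const query = { keywords: { $in: words } };

// A following MongoDB "Find" node can reference this value via an expression.
return [{ json: { query: JSON.stringify(query), userText } }];
```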
by 小林幸一
Generate personalized sales emails from Google Maps results to Google Sheets

This workflow automates the entire process of lead generation and personalized outreach drafting for local businesses. It utilizes Apify to scrape business data from Google Maps based on your search criteria (e.g., "Cafes in Shibuya"), visits each business's website to extract content, and uses OpenAI to generate a highly personalized sales email that connects the business's unique characteristics with your service's value proposition. Finally, it saves the business details, scraped data, and the generated email draft into Google Sheets.

This template is perfect for reducing the manual effort required to research leads and write initial cold outreach emails.

Who is this for

- **Sales Representatives** looking to automate lead sourcing and initial drafting.
- **Marketing Agencies** doing outreach for local businesses.
- **Freelancers** offering web design, SEO, or reservation system services to brick-and-mortar stores.

What it does

1. Configuration: You define your search query (e.g., "Gyms in London"), the number of leads to fetch, and details about the service you are selling.
2. Lead Scraping: The workflow triggers an Apify actor (Google Maps Scraper) to find businesses matching your criteria.
3. Website Analysis: It checks if the business has a website, fetches the HTML, and extracts relevant text to understand the business's vibe and offerings (see the extraction sketch below).
4. AI Email Generation: OpenAI analyzes the scraped website text and generates a specific, personalized email subject and body promoting your service.
5. Data Storage: All data (Business Name, Phone, Address, Website, Scraped Info, and Email Draft) is appended to a Google Sheet.

Requirements

- **n8n** (v1.0 or later)
- **Apify Account**: You need an Apify account and the compass/google-maps-scraper actor.
- **OpenAI Account**: An API key for generating the email content.
- **Google Cloud Platform**: A project with the Google Sheets API enabled.

How to set up

1. Credentials: Set up your credentials for Apify, OpenAI, and Google Sheets in n8n.
2. Google Sheet: Create a new Google Sheet and add the following headers in the first row:
   - 店舗名 (Store Name)
   - 住所 (Address)
   - Webサイト (Website)
   - 電話番号 (Phone Number)
   - サイトから取得した情報 (Info from Website)
   - 生成されたメール件名 (Generated Subject)
   - 生成されたメール本文 (Generated Body)
3. Workflow Configuration Node: Open the first "Workflow Configuration" node and update the following values:
   - searchQuery: The location and keyword you want to target.
   - serviceName: The name of the product you are selling.
   - serviceStrength: The USP (Unique Selling Proposition) of your product.
4. Google Sheets Node: Open the "Save to Google Sheets" node and select the file you created in step 2.

How to customize

- **Change the Prompt**: Open the "Generate Personalized Email" (OpenAI) node to modify the system prompt. You can change the tone, language (currently set to a Japanese context in this example), or structure of the sales email.
- **Filter Results**: You can add logic to the "Check Website URL Exists" node to filter out specific types of businesses or domains.
- **Limit Scraping**: Adjust the maxPlaces value in the Configuration node to control how many leads you process per run to save on API credits.
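To illustrate the Website Analysis step, here is a minimal sketch of a Code node that strips the fetched HTML down to plain text for the OpenAI prompt. It assumes the HTTP Request node returns the page under json.data; the property name and the length limit are assumptions:

```javascript
// n8n Code node: reduce fetched website HTML to plain text (sketch).
const html = $input.first().json.data || '';

const siteText = html
  .replace(/<script[\s\S]*?<\/script>/gi, ' ')  // drop scripts
  .replace(/<style[\s\S]*?<\/style>/gi, ' ')    // drop styles
  .replace(/<[^>]+>/g, ' ')                     // strip remaining tags
  .replace(/\s+/g, ' ')                         // collapse whitespace
  .trim()
  .slice(0, 4000);                              // keep the prompt token-friendly

return [{ json: { siteText } }];
```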
by Yoshino Haruki
Who’s it for

This template is ideal for busy professionals, students, or anyone with a dynamic schedule who wants to optimize their brief periods of free time. If you frequently find yourself with unexpected gaps between meetings and wish for intelligent, personalized suggestions on where to grab a coffee or get some work done, this workflow is for you.

How it works / What it does

The workflow begins by checking your Google Calendar for your next event at a scheduled time. It then calculates the travel time from your current location to your next event's venue using Google Maps. This allows it to determine your actual "gap time" – the usable free time before you need to start moving (see the calculation sketch below).

If you have a sufficient gap (e.g., 30 minutes or more), the workflow fetches your preferred cafe criteria from a Google Sheet and searches for nearby cafes using Google Places. An AI agent then processes this information, along with your available gap time, to recommend the best cafe suited to your needs. This recommendation, complete with ratings and a Google Maps link, is sent directly to your Slack channel.

Conversely, if the gap time is too short to comfortably visit a cafe, the workflow sends an urgent Slack alert, reminding you to prepare for your next appointment and providing essential details.

How to set up

1. Import the Workflow: Import this workflow into your n8n instance.
2. Configure API Keys: In the "Workflow Configuration" node, replace the placeholders for googleMapsApiKey and googlePlacesApiKey with your actual API keys. Ensure these keys have access to the Google Maps Distance Matrix API and Google Places API (Nearby Search). Also, update currentLocation with your default or most frequent starting location (latitude/longitude or address).
3. Google Calendar Credentials: Authenticate the "Get Next Calendar Event" node with your Google Calendar account. Select the calendar you wish to monitor.
4. Google Sheets Credentials: Authenticate the "Get User Preferences" node with your Google Sheets account. Create a Google Sheet to store your cafe preferences (e.g., "Likes quiet places", "Prefers espresso", "Needs Wi-Fi"). Update the "Document ID" and "Sheet Name" in this node to point to your preference sheet.
5. OpenRouter Credentials: Authenticate the "OpenRouter Chat Model" with your OpenRouter API key.
6. Slack Credentials: Authenticate both the "Send Slack Notification" and "Send Urgent Move Alert (Slack)" nodes with your Slack account. In both Slack nodes, update the channelId to the Slack channel where you want to receive notifications (e.g., #general, or a specific DM channel).
7. Activate the Workflow: Once all configurations are complete, activate the workflow.

Requirements

- An n8n instance (self-hosted or cloud).
- Google Account with Google Calendar and Google Sheets.
- Google Cloud Project with activated Google Maps Platform APIs (Distance Matrix API, Places API) and corresponding API Keys.
- An OpenRouter API Key.
- A Slack Workspace and API Token (or Webhook URL).

How to customize the workflow

- Scheduling: Adjust the "Schedule Trigger" node to run at different intervals or specific times that best suit your daily routine.
- Minimum Gap Time: Modify the minimumGapMinutes variable in the "Workflow Configuration" node to set a different threshold for cafe recommendations.
- Cafe Search Radius: In the "Search Nearby Cafes (Google Places API)" node, you can change the radius parameter to search for cafes within a larger or smaller area.
- User Preferences: Expand your Google Sheet with more detailed preferences to give the AI agent better context for recommendations (e.g., "vegan options," "good for meetings," "strong coffee").
- AI Prompt: Refine the prompt in the "AI Agent" node to guide the AI towards specific types of recommendations or output formats.
- Slack Message Customization: Edit the text fields in the Slack nodes to personalize the notification messages.
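For reference, here is a minimal sketch of the gap-time calculation, assuming the next event's start time and the Distance Matrix travel time are available as nextEventStart and travelSeconds; both names and the 5-minute buffer are assumptions, not the template's exact values:

```javascript
// n8n Code node: compute the usable gap before the next event (sketch).
const input = $input.first().json;
const nextEventStart = new Date(input.nextEventStart);   // next calendar event start
const travelSeconds = Number(input.travelSeconds) || 0;  // travel time from Distance Matrix
const bufferMinutes = 5;                                  // safety margin (assumption)
const minimumGapMinutes = 30;                             // same threshold as the Config node

const minutesUntilEvent = (nextEventStart - new Date()) / 60000;
const gapMinutes = Math.floor(minutesUntilEvent - travelSeconds / 60 - bufferMinutes);

return [{ json: { gapMinutes, hasEnoughGap: gapMinutes >= minimumGapMinutes } }];
```

A downstream IF node can then route to the cafe recommendation branch or the urgent alert branch based on hasEnoughGap.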
by Itunu
CoinMarketCap Token Discovery (Free API)

Automatically discover cryptocurrency tokens from CoinMarketCap, clean the data, enrich it with official websites, and store the results in your preferred database or sheet. This workflow is designed to be safe for free API usage, easy to understand, and ready for extension.

What This Workflow Does

This workflow runs on a schedule and:

- Randomly selects pages from CoinMarketCap listings
- Fetches token data using the free CoinMarketCap API
- Cleans and normalizes token fields
- Enriches each token with official website data
- Processes tokens in safe batches with delays
- Outputs clean, structured token records
- Optionally saves results to a database or sheet

Who This Is For

This workflow is useful if you are:

- Doing crypto research or market discovery
- Building token datasets
- Running crypto outreach or lead generation
- Learning how to work with APIs in n8n
- Looking for a clean, real-world n8n example

No advanced n8n knowledge is required.

Setup Instructions (Required)

Follow these steps before running the workflow:

1. Get a CoinMarketCap API Key
   - Create a free account on CoinMarketCap
   - Generate an API key from your dashboard
2. Add Your API Key
   - Open the HTTP Request nodes
   - Add your API key to the request headers: X-CMC_PRO_API_KEY = YOUR_API_KEY
3. Connect Storage
   - Replace the storage node with your preferred option: Google Sheets, Airtable, PostgreSQL, or a Webhook
   - Add your own credentials before running the workflow
4. Activate the Workflow
   - Enable the workflow
   - Let it run automatically based on the schedule

How the Workflow Is Structured

- **Trigger:** Runs every few days to avoid API limits
- **Random Page Generator:** Prevents bias toward only top tokens (see the sketch below)
- **Batch Processing:** Controls memory and request volume
- **Delay Logic:** Keeps the workflow stable and API-friendly
- **Cleaning Steps:** Removes messy or invalid data
- **Final Output:** Clean, simple token records ready for use

Output Example

Each valid token produces a clean record like:

- Token name
- Symbol (ticker)
- Official website
- Source (CoinMarketCap)
- Timestamp

Invalid or incomplete entries are automatically skipped.

Customization Ideas

You can easily extend this workflow to:

- Add social media scraping
- Track new tokens over time
- Trigger alerts for specific tokens
- Combine with other crypto APIs
- Feed data into outreach or analytics pipelines

Important Notes

- This workflow uses CoinMarketCap’s free API tier
- Do not remove batch limits or delays unless you upgrade your API plan
- Replace sample storage with your own before production use

License

This workflow is provided for educational and practical use. You are free to modify and adapt it for your own projects.

Author

Built and shared by Itunu Ola, an n8n automation builder focused on practical, production-ready workflows.
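To show how the Random Page Generator can work, here is a minimal Code node sketch that picks a random slice of the listings endpoint. The page size and range are assumptions chosen to stay inside the free tier:

```javascript
// n8n Code node: pick a random "page" of CoinMarketCap listings (sketch).
const pageSize = 100;                          // tokens per request (assumption)
const maxPages = 50;                           // roughly the top 5,000 tokens (assumption)
const page = Math.floor(Math.random() * maxPages) + 1;
const start = (page - 1) * pageSize + 1;

// The HTTP Request node can use these values, e.g.
//   URL:    https://pro-api.coinmarketcap.com/v1/cryptocurrency/listings/latest
//   Query:  start={{ $json.start }}  limit={{ $json.limit }}
//   Header: X-CMC_PRO_API_KEY = YOUR_API_KEY
return [{ json: { page, start, limit: pageSize } }];
```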
by vinci-king-01
How it works

This workflow automatically processes bank statements from various formats and extracts structured transaction data with intelligent categorization using AI.

Key Steps

1. File Upload - Accepts bank statements via webhook upload (PDF, Excel, CSV formats).
2. Smart Format Detection - Automatically routes files to appropriate processors (PDF text extraction or spreadsheet parsing).
3. AI-Powered Extraction - Uses GPT-4 to extract account details, transactions, and balances from statement data.
4. Data Processing & Categorization - Cleans, validates, and automatically categorizes transactions into expense categories (see the sketch below).
5. Database Storage - Saves processed data to a PostgreSQL database for analysis and reporting.
6. API Response - Returns a structured summary with transaction counts, expense totals, and category breakdowns.

Set up steps

Setup time: 8-12 minutes

1. Configure OpenAI credentials - Add your OpenAI API key for AI-powered data extraction.
2. Set up PostgreSQL database - Connect your PostgreSQL database and create the required table structure.
3. Configure webhook endpoint - The workflow provides a /upload-statement endpoint for file uploads.
4. Customize transaction categories - Modify the AI prompt to include your preferred expense categories.
5. Test the workflow - Upload a sample bank statement to verify the extraction and categorization process.
6. Set up database table - Ensure your PostgreSQL database has a bank_statements table with appropriate columns.

Features

- **Multi-format support**: PDF, Excel, CSV bank statements
- **AI-powered extraction**: GPT-4 extracts account details and transactions
- **Automatic categorization**: Expenses categorized as groceries, dining, gas, shopping, utilities, healthcare, entertainment, income, fees, or other
- **Data validation**: Cleans and validates transaction data with error handling
- **Database storage**: PostgreSQL integration for data persistence
- **API responses**: Clean JSON responses with transaction summaries and category breakdowns
- **Smart routing**: Automatic format detection and appropriate processing paths
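As an illustration of the categorization and validation step, the sketch below totals transactions per category in a Code node. The transaction shape (date, description, amount, category) is an assumption about what the GPT-4 extraction returns:

```javascript
// n8n Code node: validate transactions and total them per category (sketch).
const allowed = ['groceries', 'dining', 'gas', 'shopping', 'utilities',
  'healthcare', 'entertainment', 'income', 'fees', 'other'];

const transactions = ($input.first().json.transactions || [])
  .filter(t => t.date && t.description && !isNaN(Number(t.amount)));

const byCategory = {};
for (const t of transactions) {
  const cat = allowed.includes(t.category) ? t.category : 'other';
  byCategory[cat] = (byCategory[cat] || 0) + Number(t.amount);
}

return [{ json: { transactionCount: transactions.length, byCategory } }];
```

The same summary object can feed both the PostgreSQL insert and the API response returned by the webhook.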
by Kevin Meneses
What this workflow does

This template extracts high-intent SEO keywords from any web page and turns them into a ranked keyword list you can use for content planning, landing pages, and SEO strategy.

It runs in 3 phases:

1. Scrape the target URL with Decodo (Decodo – Web Scraper for n8n)
2. Use AI to extract seed keywords and understand the page topic
3. Enrich each seed keyword with real Google SERP data via SerpApi (related searches + questions + competitors), then apply a JavaScript scoring system to rank the best opportunities

The final output is saved to Google Sheets as a clean table of ranked keywords.

Who this workflow is for

- SEO consultants and agencies
- SaaS marketers and growth teams
- Founders validating positioning and messaging
- Content teams looking for “what people actually search for”

This workflow is especially useful when you want keywords with commercial / solution intent, not generic single-word terms.

Workflow overview

Phase 1 — Scrape & clean page content

- Reads the URL from Google Sheets
- Scrapes the page via Decodo
- Cleans HTML into plain text (token-friendly)

Phase 2 — AI keyword extraction

AI returns a structured JSON with:

- brand / topic
- 5–10 mid-tail seed keywords
- intent + audience hints

Phase 3 — SERP enrichment + scoring

Uses SerpApi to fetch:

- related searches
- People Also Ask questions
- competitor domains

Scores and ranks keywords based on the following (see the scoring sketch below):

- source type (related searches / PAA / organic)
- frequency across seeds
- modifiers (pricing, best, free, docs, etc.)
- mid-tail length preference

Setup (step by step)

1. Google Sheets (input)
   - Create a sheet with a column named: urls
   - One URL per row
2. Google Sheets (output)
   - Create an output sheet with columns like: keyword, score, intent_hint, source_type
   - Tip: Clear the output sheet before each run if you want a clean export.
3. Decodo
   - Add your Decodo credentials (Decodo – Web Scraper for n8n)
   - The URL is taken automatically from Google Sheets
4. SerpApi
   - Add your SerpApi key in the SerpApi node
5. AI Model
   - Connect your preferred AI model (Gemini / OpenAI)
   - The prompt is optimized to output valid JSON only

Self-hosted disclaimer

This is a community template. You must configure your own credentials (Google Sheets, Decodo, SerpApi, AI). Results depend on page accessibility and page content quality.
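The scoring logic could look roughly like the Code node sketch below; the weights and field names (keyword, sourceType) are illustrative assumptions, not the template's exact values:

```javascript
// n8n Code node: score and rank keyword candidates from the SERP data (sketch).
const sourceWeight = { related: 3, paa: 2, organic: 1 };
const modifiers = ['pricing', 'best', 'free', 'docs', 'vs', 'alternative'];

const counts = {};
for (const item of $input.all()) {
  const { keyword, sourceType } = item.json;     // assumed fields from the SERP step
  if (!keyword) continue;
  const key = keyword.toLowerCase().trim();
  counts[key] = counts[key] || { keyword: key, score: 0, frequency: 0, source_type: sourceType };
  counts[key].frequency += 1;                     // frequency across seeds
  counts[key].score += sourceWeight[sourceType] || 1;
}

for (const k of Object.values(counts)) {
  if (modifiers.some(m => k.keyword.includes(m))) k.score += 2;  // intent modifiers
  const words = k.keyword.split(/\s+/).length;
  if (words >= 2 && words <= 5) k.score += 1;                    // mid-tail preference
  k.score += k.frequency;
}

return Object.values(counts)
  .sort((a, b) => b.score - a.score)
  .map(r => ({ json: r }));
```

Each returned item maps directly onto the keyword, score, and source_type columns of the output sheet.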
by Ian Kerins
Overview

This n8n template automates the process of researching niche topics. It searches for a topic on Wikipedia, scrapes the relevant page using ScrapeOps, extracts the history or background section, and uses AI to generate a concise summary and timeline. The results are automatically saved to Google Sheets for easy content planning.

Who is this for?

- **Content Creators**: Quickly gather background info for videos or articles.
- **Marketers**: Research niche markets and product histories.
- **Educators/Students**: Generate timelines and summaries for study topics.
- **Researchers**: Automate the initial data gathering phase.

What problems it solves

- **Time Consumption**: Manually reading and summarizing Wikipedia pages takes time.
- **Blocking**: Scraping Wikipedia directly can sometimes lead to IP blocks; ScrapeOps handles this.
- **Unstructured Data**: Raw HTML is hard to use; this workflow converts it into a clean, structured format (JSON/CSV).

How it works

1. Define Topic: You set a keyword in the workflow.
2. Locate Page: The workflow queries the Wikipedia API to find the correct page URL.
3. Smart Scraping: It uses the ScrapeOps Proxy API to fetch the page content reliably.
4. Extraction: A code node intelligently parses the HTML to find "History", "Origins", or "Background" sections (see the sketch below).
5. AI Processing: GPT-4o-mini summarizes the text and extracts key dates for a timeline.
6. Storage: The structured data is appended to a Google Sheet.

Setup steps (~5-10 minutes)

1. ScrapeOps Account: Register for a free API key at ScrapeOps. Configure the ScrapeOps Scraper node with your API key.
2. OpenAI Account: Add your OpenAI credentials to the Message a model node.
3. Google Sheets: Create a Google Sheet. You can duplicate this Template Sheet (copy the headers). Connect your Google account to the Append row in sheet node and select your new sheet.

Pre-conditions

- An active ScrapeOps account.
- An OpenAI API key (or another LLM credential).
- A Google account for Sheets access.

Disclaimer

This template uses ScrapeOps as a community node. You are responsible for complying with Wikipedia's Terms of Use, robots directives, and applicable laws in your jurisdiction. Scraping targets may change at any time; adjust render/scroll/wait settings and parsers as needed. Use responsibly for legitimate business purposes.
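For reference, here is a simplified sketch of how the extraction Code node could isolate a "History", "Origins", or "Background" section from the scraped HTML. It assumes the page HTML is available under json.body; Wikipedia markup varies, so treat this as an approximation rather than the template's exact parser:

```javascript
// n8n Code node: pull the History/Origins/Background section out of the page HTML (sketch).
const html = $input.first().json.body || '';
const headings = ['History', 'Origins', 'Background'];

let section = '';
for (const h of headings) {
  // Match from an <h2> whose heading mentions the keyword up to the next <h2>.
  const re = new RegExp(`<h2[\\s\\S]{0,200}?${h}[\\s\\S]*?(?=<h2|$)`, 'i');
  const match = html.match(re);
  if (match) { section = match[0]; break; }
}

const sectionText = section
  .replace(/<[^>]+>/g, ' ')
  .replace(/\[\d+\]/g, '')    // remove citation markers like [12]
  .replace(/\s+/g, ' ')
  .trim();

return [{ json: { sectionText } }];
```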
by Zakwan
📖 Overview

This template automates the process of researching a keyword, scraping top-ranking articles, cleaning their content, and generating a high-quality SEO-optimized blog post. It uses Google Search via RapidAPI, Ollama with Mistral AI, and Google Drive to deliver an end-to-end automated content workflow.

Ideal for content creators, SEO specialists, bloggers, and marketers who need to quickly gather and summarize insights from multiple sources to create superior content.

⚙️ Prerequisites

Before using this workflow, make sure you have:

- n8n installed (Desktop, Docker, or Cloud).
- Ollama installed with the mistral:7b model: ollama pull mistral:7b
- RapidAPI account (for the Google Search API).
- Google Drive account (with a target folder where articles will be saved).

🔑 Credentials Required

1. RapidAPI (Google Search API)
   - Header authentication with your API key.
   - Example headers: x-rapidapi-key: YOUR_API_KEY and x-rapidapi-host: google-search74.p.rapidapi.com
2. Google Drive OAuth2
   - Allow read/write permissions.
   - Update the folderId with your Drive folder where articles should be stored.
3. Ollama API
   - Base URL: http://localhost:11434 (local n8n) or http://host.docker.internal:11434 (inside Docker).
   - Ensure the mistral:7b model is available.

🚀 Setup Instructions

1. Configure RapidAPI
   - Sign up at RapidAPI.
   - Subscribe to the Google Search API.
   - Create an HTTP Header Auth credential in n8n with your API key.
2. Configure Google Drive
   - In n8n, add a Google Drive OAuth2 credential.
   - Select the Drive folder ID where output files should be saved.
3. Configure Ollama
   - Install Ollama locally.
   - Pull the required model (mistral:7b).
   - Create an Ollama API credential in n8n.
4. Run the Workflow
   - Trigger by sending a chat message with your target keyword.
   - The workflow searches Google, extracts the top 3 results, scrapes the articles, cleans the content, and generates a structured blog post (a cleaning sketch follows below).
   - The final output is stored in Google Drive as a .docx file.

🎨 Customization Options

- Search Engine → Swap out RapidAPI with Bing or SerpAPI.
- Number of Articles → Change limit: 3 in the Google Search node.
- Content Cleaning → Modify the regex in the "Clean Body Text" node to capture additional HTML tags.
- AI Model → Replace mistral:7b with llama3, mixtral, or any other Ollama-supported model.
- Storage → Save output to a different Google Drive folder or export to Notion/Slack.

📌 Workflow Highlights

- Google Search (RapidAPI) → Fetch the top 3 results for your keyword.
- HTTP Request + Code Nodes → Extract and clean the article body text.
- Mistral AI via Ollama → Summarize, optimize, and refine the content.
- Google Drive → Save the final blog-ready article automatically.
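As a rough example of the cleaning step, the sketch below keeps paragraph content and strips markup before the text goes to Mistral. It assumes the raw page HTML arrives under json.data; the real "Clean Body Text" node's regex may differ:

```javascript
// n8n Code node: rough "Clean Body Text" step for a scraped article (sketch).
const html = $input.first().json.data || '';

const paragraphs = html.match(/<p[^>]*>[\s\S]*?<\/p>/gi) || [];
const cleanedText = paragraphs
  .join(' ')
  .replace(/<[^>]+>/g, ' ')     // drop remaining tags
  .replace(/&[a-z]+;/gi, ' ')   // drop HTML entities
  .replace(/\s+/g, ' ')
  .trim();

return [{ json: { cleanedText } }];
```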