by Madame AI
Scrape & Import Products to Shopify from Any Site (with Variants & Images) (Optimized for Shoes)

This advanced n8n template automates e-commerce operations by scraping product data (including variants and images) from any URL and creating fully detailed products in your Shopify store. This workflow is essential for dropshippers, e-commerce store owners, and anyone looking to quickly import product catalogs from specific websites into their Shopify store.

Self-Hosted Only
This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only.

How it works
- The workflow reads a list of product page URLs from a Google Sheet. Your sheet, with its columns for Product Name and Product Link, acts as a database for your workflow.
- The Loop Over Items node processes products one URL at a time.
- Two BrowserAct nodes run sequentially to scrape all product details, including the name, price, description, sizes, and image links.
- A custom Code node transforms the raw scraped data (where fields like sizes might be a single string) into a structured JSON format with clean lists for sizes and images.
- The Shopify node creates the base product entry using the main details.
- The workflow then uses a series of nodes (Set Option and Add Option via HTTP Request) to dynamically add product options (e.g., "Shoe Size") to the new product.
- The workflow uses HTTP Request nodes to perform two crucial bulk tasks: creating a unique variant for each available size (including a custom SKU), and uploading all associated product images from their external URLs to the product.
- A final Slack notification confirms the batch has been processed.
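The Code node's string-to-list transformation described above can be sketched roughly like this. The field names (`sizes`, `images`) and the comma-separated input format are assumptions; the actual fields depend on what the BrowserAct template returns:

```javascript
// Hypothetical sketch of the Code node: turn single-string fields
// from the scraper into clean arrays ready for Shopify variants/images.
function structureProduct(raw) {
  const toList = (value) =>
    String(value || '').split(',').map(s => s.trim()).filter(Boolean);
  return {
    title: raw.name,
    price: raw.price,
    description: raw.description,
    sizes: toList(raw.sizes),   // "40, 41, 42" -> ['40', '41', '42']
    images: toList(raw.images), // comma-separated URLs -> list of URLs
  };
}

const product = structureProduct({
  name: 'Air Runner',
  price: '99.99',
  description: 'Lightweight running shoe',
  sizes: '40, 41, 42',
  images: 'https://example.com/a.jpg, https://example.com/b.jpg',
});
console.log(product.sizes); // [ '40', '41', '42' ]
```

Each element of `sizes` then feeds one variant-creation HTTP request, and each element of `images` one image-upload request.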
Requirements
- **BrowserAct** API account for web scraping
- BrowserAct "Bulk Product Scraping From (URLs) and uploading to Shopify (Optimized for shoe - NIKE -> Shopify)" Template
- **BrowserAct** n8n Community Node (n8n Nodes BrowserAct)
- **Google Sheets** credentials for the input list
- **Shopify** credentials (API Access Token) to create and update products, variants, and images
- **Slack** credentials (optional) for notifications

Need Help?
- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node
- Workflow Guidance and Showcase: Automate Shoe Scraping to Shopify Using n8n, BrowserAct & Google Sheets
by Daniel Iliesh
This n8n workflow lets you effortlessly tailor your resume for any job using Telegram and LinkedIn. Simply send a LinkedIn job URL or paste a job description to the Telegram bot, and the workflow will:
- Extract the job information (using an optional proxy if needed)
- Fetch your resume in JSON Resume format (hosted on GitHub Gist or elsewhere)
- Use an OpenRouter-powered LLM agent to automatically adapt your resume to match the job requirements
- Generate both HTML and PDF versions of your tailored resume
- Return the PDF file and shareable download links directly in Telegram

The workflow is open-source and designed with privacy in mind. You can host the backend yourself to keep your data entirely under your control. It requires a Telegram bot, a public JSON Resume, and an OpenRouter account. Proxy support is available for LinkedIn scraping. Perfect for anyone looking to quickly customize their resume for multiple roles with minimal manual effort!
by Oneclick AI Squad
This automated n8n workflow enables AI-powered responses across multiple social media platforms, including Instagram DMs, Facebook messages, and WhatsApp chats, using Meta's APIs. The system provides intelligent customer support, lead generation, and smart engagement at scale through AI-driven conversation management and automated response routing.

Good to Know
- Supports multi-platform messaging across Instagram, Facebook, and WhatsApp
- Uses an AI Travel Agent and the Ollama Chat Model for intelligent response generation
- Includes platform memory for maintaining conversation context and history
- Automatic message processing and routing based on platform and content type
- Real-time webhook integration for instant message detection and response

How It Works
- **WhatsApp Trigger** - Monitors incoming WhatsApp messages and initiates the automated response workflow
- **Instagram Webhook** - Captures Instagram DM notifications and processes them for AI analysis
- **Facebook Webhook** - Detects Facebook Messenger interactions and routes them through the system
- **Message Processor** - Analyzes incoming messages from all platforms and prepares them for AI processing
- **AI Travel Agent** - Processes messages using an intelligent AI model to generate contextually appropriate responses
- **Ollama Chat Model** - Provides advanced language processing for complex conversation scenarios
- **Platform Memory** - Maintains conversation history and context across multiple interactions for personalized responses
- **Response Router** - Determines the optimal response strategy and routes messages to the appropriate sending mechanism
- **Instagram Sender** - Delivers AI-generated responses back to Instagram DM conversations
- **Facebook Sender** - Sends automated replies through the Facebook Messenger API
- **Send Message (WhatsApp)** - Delivers personalized responses to WhatsApp chat conversations

How to Use
1. Import the workflow into n8n
2. Configure Meta's Instagram Graph API, Facebook Messenger API, and WhatsApp Business Cloud API
3. Set up an approved Meta Developer App with the required permissions
4. Configure webhook endpoints for real-time message detection
5. Set up the Ollama Chat Model for AI response generation
6. Test with sample messages across all three platforms
7. Monitor response accuracy and adjust AI parameters as needed

Requirements
- Access to Meta's Instagram Graph API, Facebook Messenger API, and WhatsApp Business Cloud API
- Approved Meta Developer App
- Webhook setup and persistent token management for real-time messaging
- Ollama Chat Model integration
- AI Travel Agent configuration

Customizing This Workflow
- Modify AI prompts for different business contexts (customer service, sales, support)
- Adjust response routing logic based on message content or user behavior
- Configure platform-specific message templates and formatting
- Set up custom memory storage for enhanced conversation tracking
- Integrate additional AI models for specialized response scenarios
- Add message filtering and content moderation capabilities
by Kev
Overview
This n8n workflow automatically generates professionally formatted Word documents (DOCX) with consistent company branding using AI. It leverages Json2Doc and the Json2Doc MCP server to transform simple text prompts into complete, multi-page documents. Get your free API key at: app.json2doc.com

Use Cases
Generate first drafts of:
- Contracts and legal agreements
- Internal forms and templates
- Company announcements and notices
- Internal documentation and policies
- Business reports and presentations
- Guidelines and procedures
- and much more...

Key Features

Consistent Company Branding
- Custom fonts, colors, and typography
- Company logo in headers
- Page numbers in footers
- Controlled spacing and layout
- Professional heading styles

Multi-Page Document Support
- Page-based sections (new page)
- Flow sections (continuous across pages)
- Automatic pagination
- Consistent headers and footers throughout

Rich Content Types
- Multiple heading levels
- Formatted text and paragraphs
- Tables with custom styling
- Ordered and unordered lists
- Images and logos
- Auto-generated QR codes

AI-Driven Generation
Uses Claude Sonnet 4.5 to:
- Generate an appropriate document structure
- Apply correct formatting
- Create professional, coherent content

How It Works

1. Input Form
Users provide:
- **Prompt** - Description of the desired document (e.g., "Generate an employment contract template")
- **Logo URL** - Web-accessible URL to the company logo

2. Company Styling
Pre-configured branding is applied (see the workflow for a description of how to update it):
- Font and font styles (for H1, H2, ...)
- Header: company name + logo
- Footer: page numbers ("Page X of Y")
- Spacing rules for all content types
- Table styles

3. AI Document Generation
The AI agent:
- Retrieves the Json2Doc section schema
- Generates the JSON configuration for the document
- Validates the configuration
- Creates a document generation job
- Returns the Job ID

4. Processing & Download
- Waits for document completion (3 seconds initially)
- Polls the job status via API
- Retries if not complete
- Downloads the final DOCX file when ready

Setup Requirements

Authentication
You need a Json2Doc API key from app.json2doc.com (a permanently free version is available).

Processing Times

Configuration Generation (model-dependent)
The AI model generates the JSON configuration:
- Simple documents (1-2 pages): 10-30 seconds
- Medium documents (3-5 pages): 30-60 seconds
- Complex documents (10-20 pages): 60-120 seconds
Time varies based on the selected AI model and document complexity.

Json2Doc Processing
Once the configuration is created, Json2Doc generates the DOCX file in 2-6 seconds regardless of document size.

Extensions
This workflow can be integrated with:
- Cloud storage (Google Drive, Dropbox)
- Email services for automated delivery
- Approval workflows
- Document management systems

Important Limitation
This workflow is only suitable for documents up to 20 pages, as larger documents will exceed the AI model's context window. For longer documents, use Builder Mode instead: DocumentBuilder Docs
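The wait-poll-retry logic in step 4 can be sketched generically. The status values, the retry count, and the delay below are illustrative assumptions, not the documented Json2Doc API:

```javascript
// Generic poll-and-retry sketch: wait, check job status, repeat until done.
// 'completed'/'processing' and the 3-second delay mirror the description
// but are assumptions here.
async function waitForJob(jobId, fetchStatus, { delayMs = 3000, maxTries = 10 } = {}) {
  for (let attempt = 1; attempt <= maxTries; attempt++) {
    // Initial wait before the first poll, then between subsequent polls
    await new Promise(resolve => setTimeout(resolve, delayMs));
    if (await fetchStatus(jobId) === 'completed') return attempt;
  }
  throw new Error(`Job ${jobId} still incomplete after ${maxTries} polls`);
}

// Simulated status endpoint that completes on the third poll
let calls = 0;
const fakeStatus = async () => (++calls >= 3 ? 'completed' : 'processing');
waitForJob('job-123', fakeStatus, { delayMs: 1 }).then(n => console.log(n)); // 3
```

In the workflow, the equivalent of `fetchStatus` is the HTTP Request node that polls the job-status endpoint, and the loop is built from Wait and If nodes.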
by Mr Shifu
AI Network Diagram Prompt Generator

Template Description
This workflow automates the creation of network diagram prompts using AI. It retrieves Layer-2 topology data from AWX, parses device relationships, and generates a clean, structured prompt ready for Lucidchart's AI diagram generator.

How It Works
The workflow triggers an AWX Job Template that runs commands such as show cdp neighbors detail. After the job completes, n8n fetches the stdout, extracts neighbor relationships through a JavaScript parser, and sends the structured data to an LLM (Gemini). The LLM transforms the topology into a formatted prompt you can paste directly into Lucidchart to instantly generate a visual network diagram.

Setup Steps
1. Configure AWX: Ensure your Job Template runs the required network commands and produces stdout. Obtain your AWX base URL, credentials, and Job Template ID.
2. Add credentials in n8n: Create AWX API credentials. Add Google Gemini credentials for the LLM node.
3. Update workflow nodes: Insert your AWX URL and Job Template ID in the "Launch Job" node. Verify the endpoints in the "Job Status" and "Job Stdout" nodes.
4. Run the workflow: After execution, copy the generated Lucidchart prompt and paste it into Lucidchart's AI to produce the network diagram.
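The JavaScript parser step can be approximated as below. This is a hedged sketch: the exact stdout layout of show cdp neighbors detail varies by device OS, so the field labels and block separator are assumptions:

```javascript
// Rough sketch of extracting neighbor relationships from
// "show cdp neighbors detail" stdout. Cisco IOS-style field labels assumed.
function parseCdpNeighbors(stdout) {
  const neighbors = [];
  // Neighbor entries are typically separated by a dashed line
  for (const block of stdout.split('-------------------------')) {
    const device = block.match(/Device ID:\s*(\S+)/);
    const local = block.match(/Interface:\s*([^,]+),/);
    const remote = block.match(/Port ID \(outgoing port\):\s*(\S+)/);
    if (device && local && remote) {
      neighbors.push({
        neighbor: device[1],
        localInterface: local[1].trim(),
        remoteInterface: remote[1],
      });
    }
  }
  return neighbors;
}

const sample = `
-------------------------
Device ID: SW2.example.local
Interface: GigabitEthernet0/1,  Port ID (outgoing port): GigabitEthernet0/24
`;
console.log(parseCdpNeighbors(sample));
```

The resulting list of `{ neighbor, localInterface, remoteInterface }` objects is what gets handed to Gemini to phrase as a Lucidchart prompt.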
by Masaki Go
About This Template
This workflow turns complex data or topics sent via LINE into beautiful, easy-to-understand infographics. It combines Gemini (to analyze data and structure the visual layout) and Nano Banana Pro (accessed via the Kie.ai API) to generate high-quality, data-rich graphics (charts, timelines, processes).

How It Works
1. Input: The user sends a topic or data points via LINE (e.g., "Japan's Energy Mix: 20% Solar, 10% Wind...").
2. Data Visualization Logic: Gemini acts as an information designer, deciding the best chart type (pie, bar, flow) and layout for the data.
3. Render: Nano Banana generates a professional 3:4 vertical infographic.
4. Smart Polling: The workflow uses a loop to check the API status every 5 seconds, ensuring it waits exactly as long as needed.
5. Delivery: Uploads to S3 and sends the visual report back to LINE.

Who It's For
- Social media managers needing quick visual content.
- Educators and presenters summarizing data.
- Consultants creating quick visual reports on the go.

Requirements
- **n8n** (Cloud or self-hosted).
- **Kie.ai API Key** (Nano Banana Pro).
- **Google Gemini API Key**.
- **AWS S3 Bucket** (public access).
- **LINE Official Account**.

Setup Steps
1. Credentials: Configure Header Auth for Kie.ai and your other service credentials.
2. Webhook: Add the production URL to the LINE Developers console.
by Rahul Joshi
📊 Description
Streamline AI-focused SEO research by automatically analyzing URLs stored in Google Sheets, extracting semantic signals from each webpage, and generating high-quality topic clusters for AI discovery. 🤖🔍

This automation fetches URLs weekly, scrapes headings (H1–H6), extracts entities, keywords, topics, and summaries using GPT-4o-mini, and classifies each page into clusters and subclusters optimized for LLM search visibility. It also generates internal linking suggestions for better topical authority and writes all results back into Google Sheets. Perfect for content strategists, SEO teams, and AI-search optimization workflows. 📈🧩

🔁 What This Template Does
1️⃣ Triggers weekly to process URLs stored in Google Sheets. 📅
2️⃣ Fetches all URL records from the configured sheet. 📥
3️⃣ Processes URLs in batches to avoid API overload. 🔁
4️⃣ Extracts webpage HTML and pulls semantic headings (H1–H6). 📰
5️⃣ Sends headings + URL context to GPT-4o-mini for structured extraction of: title, entities, keywords, topics, and summary.
6️⃣ Generates high-level cluster + subcluster labels for each page. 🧠
7️⃣ Recommends 3–5 internal linking URLs to strengthen topical authority. 🔗
8️⃣ Updates Google Sheets with all extracted fields + status flags. 📊
9️⃣ Repeats the process until all URLs are analyzed. 🔄

⭐ Key Benefits
✅ Automates topical clustering for AI search optimization
✅ Extracts entities, keywords, and topics with high semantic accuracy
✅ Strengthens internal linking strategies using AI suggestions
✅ Eliminates manual scraping and analysis work
✅ Enables scalable content audits for large URL datasets
✅ Enhances visibility in AI-driven search systems and answer engines

🧩 Features
- Google Sheets integration for input + output
- HTML parsing for H1–H6 extraction
- GPT-4o-mini structured JSON extraction
- Topic clustering engine (cluster & subcluster classification)
- Internal linking recommendation generator
- Batch processing for large URL datasets
- Status-based updating in Google Sheets

🔐 Requirements
- Google Sheets OAuth2 credentials
- OpenAI API key (GPT-4o-mini)
- Publicly accessible URLs (or authenticated HTML if applicable)
- n8n with LangChain nodes enabled

🎯 Target Audience
- SEO teams performing semantic clustering at scale
- Content strategists creating AI-ready topic maps
- Agencies optimizing large client URL collections
- AI-search consultants building structured content libraries
- Technical marketers needing automated content analysis
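The H1–H6 extraction step can be approximated with a small regex-based parser. This is a sketch; the actual workflow may use n8n's HTML node or another parser:

```javascript
// Minimal H1-H6 extractor: find heading tags in raw HTML and
// return their level plus plain-text content.
function extractHeadings(html) {
  const headings = [];
  // \1 backreference ensures <h2>...</h2> pairs match at the same level
  const re = /<h([1-6])[^>]*>([\s\S]*?)<\/h\1>/gi;
  let m;
  while ((m = re.exec(html)) !== null) {
    headings.push({
      level: Number(m[1]),
      text: m[2].replace(/<[^>]+>/g, '').trim(), // strip nested tags
    });
  }
  return headings;
}

const html = '<h1>Guide</h1><p>…</p><h2><em>Setup</em></h2>';
console.log(extractHeadings(html));
// [ { level: 1, text: 'Guide' }, { level: 2, text: 'Setup' } ]
```

The `{ level, text }` pairs plus the URL are what gets sent to GPT-4o-mini for the structured extraction of title, entities, keywords, topics, and summary.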
by Shadrack
How it works
Do you have several resumes you need to review manually? This workflow allows you to upload up to 20 PDFs in one batch. AI does the heavy lifting, saving time, reducing repetitive tasks, and achieving high accuracy. The job description and qualifications go in the agent's System Message.

Setup steps
It will take you roughly 20 minutes to finish setting up this workflow.
- n8n Form: allow multiple file submission
- JavaScript Code: allows mapping of each file individually
- System Message: adjust the system message to fit the job description and qualifications
- Google Sheet: make a copy
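The per-file mapping done by the JavaScript Code step can be sketched like this. The `data0`, `data1`, ... binary property names are an assumption about how the multi-file form upload arrives; check your own execution data for the real keys:

```javascript
// Hypothetical sketch: split one item carrying many uploaded PDFs
// into one item per file, so each resume can be processed individually.
function splitBinaries(item) {
  return Object.entries(item.binary || {}).map(([key, file]) => ({
    json: { fileName: file.fileName, sourceKey: key },
    binary: { data: file }, // downstream nodes read a single 'data' property
  }));
}

const input = {
  binary: {
    data0: { fileName: 'resume-a.pdf', mimeType: 'application/pdf' },
    data1: { fileName: 'resume-b.pdf', mimeType: 'application/pdf' },
  },
};
console.log(splitBinaries(input).map(i => i.json.fileName));
// [ 'resume-a.pdf', 'resume-b.pdf' ]
```

In an n8n Code node the same idea would run over `$input.all()` and return the per-file items.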
by Pedro Protes
AI Agent that uses an MCP Server to execute actions requested via the Evolution API.

This workflow receives messages and media from WhatsApp via the Evolution API, converts the content into structured inputs, and forwards them to an AI Agent capable of triggering MCP tools to execute external actions.

🔧 How it works
1. A Webhook receives messages sent to WhatsApp via the Evolution API.
2. The "Message Type" node detects and forwards the received media. It handles the types Text, Image, Audio, and Document. For any other media type, the fallback sends a "media not supported" message to the user.
3. The message goes to the system, which retrieves the Base64 of the media.
4. The media is converted into binary file(s), and a Gemini node generates a text input for the agent.
5. The AI Agent receives the structured input and calls the appropriate MCP Tool. In this example, only one MCP Server was configured.
6. The AI Agent generates the output and sends it to the user.

🗒️ Requirements
- Evolution API account, with the instance configured.
- Gemini API.
- Google Calendar API.
- MCP Server (internal or external, whichever you prefer) configured and with a URL to link to the MCP Tool.

✔️ How to set up
- **Configure the Evolution API webhook**: Copy the webhook URL generated in the first node. In the Evolution API panel, go to the instance > webhook > paste the URL into the corresponding field.
- **Configure Google Calendar credentials**: In n8n, go to Credentials → Create New and select Google Calendar OAuth2. Select this credential in all Google Calendar MCP nodes (Get, Create, Update, Delete).
- **Enable MCP Server nodes**: Copy the MCP Server URL and paste it into the "Endpoint" field of the MCP Tool.
- **Configure Evolution API nodes**: In all Evolution API nodes, fill in the "instance" field with the name of your Evolution API instance.

🦾 How to adapt it
- **Customize or extend the MCP Tools**: You can add new MCP tools (e.g., Google Sheets, Notion, ClickUp). Only the agent prompt needs to be updated; the workflow structure remains the same.
- I opted to use simple memory, but if you want the agent to remember the entire conversation, I recommend changing the memory type; as configured, it only remembers the last 8 messages.
- If you're going to use a tool like Chatwoot or Typebot, simply change the webhook URL and pay attention to the objects that the "Message Type" switch uses.
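The Base64-to-binary conversion step can be sketched in a few lines. This is a minimal illustration; the actual workflow likely uses n8n's built-in file-conversion node rather than hand-rolled code:

```javascript
// Minimal sketch: decode Evolution API base64 media into a buffer
// plus its MIME type, ready to hand to a binary-consuming node.
function base64ToBinary(base64, mimeType) {
  return { buffer: Buffer.from(base64, 'base64'), mimeType };
}

const media = base64ToBinary('aGVsbG8gd29ybGQ=', 'text/plain');
console.log(media.buffer.toString('utf8')); // hello world
```

For images, audio, and documents, the decoded buffer (with the correct MIME type from the webhook payload) is what the Gemini node consumes to produce a text input for the agent.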
by Daniel
Transform any website into a custom logo in seconds with AI-powered analysis: no design skills required!

📋 What This Template Does
This workflow receives a website URL via webhook, captures a screenshot and fetches the page content, then leverages OpenAI to craft an optimized prompt based on the site's visuals and text. Finally, Google Gemini generates a professional logo image, which is returned as a binary response for immediate use.
- Automates screenshot capture and content scraping for comprehensive site analysis
- Intelligently generates tailored logo prompts using multimodal AI
- Produces high-quality, context-aware logos with Gemini's image generation
- Delivers the logo directly via webhook response

🔧 Prerequisites
- n8n self-hosted or cloud instance with webhook support
- ScreenshotOne account for website screenshots
- OpenAI account with API access
- Google AI Studio account for the Gemini API

🔑 Required Credentials

ScreenshotOne API Setup
1. Sign up at screenshotone.com and navigate to Dashboard → API Keys
2. Generate a new access key with screenshot permissions
3. In the workflow, replace "[Your ScreenshotOne Access Key]" in the "Capture Website Screenshot" node with your key (no n8n credential needed; it's an HTTP query param)

OpenAI API Setup
1. Log in to platform.openai.com → API Keys
2. Create a new secret key with chat completions access
3. Add it to n8n as an "OpenAI API" credential type and assign it to the "OpenAI Prompt Generator" node

Google Gemini API Setup
1. Go to aistudio.google.com/app/apikey
2. Create a new API key (free tier available)
3. Add it to n8n as a "Google PaLM API" credential type and assign it to the "Generate Logo Image" node

⚙️ Configuration Steps
1. Import the workflow JSON into your n8n instance
2. Assign the required credentials to the OpenAI and Google Gemini nodes
3. Replace the placeholder API key in the "Capture Website Screenshot" node's query parameters
4. Activate the workflow to enable the webhook
5. Test by sending a POST request to the webhook URL with the JSON body: {"websiteUrl": "https://example.com"}

🎯 Use Cases
- **Marketing teams prototyping brand assets**: Quickly generate logo variations for client websites during pitches, saving hours of manual design
- **Web developers building portfolios**: Auto-create matching logos for new sites to enhance visual consistency in demos
- **Freelance designers iterating ideas**: Analyze competitor sites to inspire custom logos without starting from scratch
- **Educational projects on AI design**: Teach students how multimodal AI combines text and images for creative outputs

⚠️ Troubleshooting
- **Screenshot fails (timeout/error)**: Increase the "timeout" param to 120s or check URL accessibility; verify your API key and quotas at screenshotone.com
- **Prompt generation empty**: Ensure the OpenAI credential has sufficient quota; test the node in isolation with a simple query
- **Logo image blank or low-quality**: Refine the prompt in "Generate Logo Prompt" with more specifics (e.g., add style keywords); check Gemini API limits
- **Webhook not triggering**: Confirm the POST method and JSON body format; view execution logs for payload details
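Testing the webhook from any JavaScript environment can look like the sketch below; the webhook URL is a placeholder for your own instance's:

```javascript
// Hypothetical helper that builds the POST options for the webhook test,
// equivalent to curl -X POST with a JSON body.
function buildLogoRequest(websiteUrl) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ websiteUrl }),
  };
}

// Example use (placeholder URL; the generated logo comes back as binary):
// fetch('https://YOUR-N8N-HOST/webhook/your-path', buildLogoRequest('https://example.com'))
//   .then(res => res.arrayBuffer());

console.log(buildLogoRequest('https://example.com').body);
// {"websiteUrl":"https://example.com"}
```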
by phil
This workflow is your all-in-one AI Content Strategist, designed to generate comprehensive, data-driven content briefs by analyzing top-ranking competitors.

It operates through a simple chat interface. You provide a target keyword, and the workflow automates the entire research process. First, it scrapes the top 10 Google search results using the powerful Bright Data SERP API. Then, for each of those results, it performs a deep dive, using the Bright Data Web Unblocker to reliably extract the full content from each page, bypassing any anti-bot measures. Finally, all the gathered data (titles, headings, word counts, and page summaries) is synthesized by a Large Language Model (LLM) to produce a strategic content plan. This plan identifies search intent, core topics, and crucial content gaps, giving you a clear roadmap to outrank the competition.

This template is indispensable for SEO specialists, content marketers, and digital agencies looking to scale their content production with strategies that are proven to work.

Why Use This AI Content Strategist Workflow?
- **Data-Driven Insights**: Base your content strategy on what is actually ranking on Google, not guesswork.
- **Automated Competitive Analysis**: Instantly understand the structure, length, and key themes of the top-performing articles for any keyword.
- **Strategic Gap Detection**: The AI analysis highlights poorly covered topics and missed opportunities, allowing you to create content that provides unique value.
- **Massive Time Savings**: Condenses hours of manual research into a fully automated process that runs in minutes.

How It Works
1. Chat Interaction Begins: The workflow is initiated via a chat UI. The user enters a target keyword to start the analysis.
2. Google SERP Scraping (Bright Data): The "Google SERP" node uses Bright Data's SERP API to fetch the top 10 organic results, providing the URLs for the next stage.
3. Individual Page Scraping (Bright Data): The workflow loops through each URL. The "Access and extract data" node uses the Bright Data Web Unblocker to ensure successful and complete HTML scraping of every competitor's page.
4. Content Extraction & Aggregation: A series of Code nodes clean the raw HTML and extract structured data (title, meta description, headings, word count). The Aggregate node then compiles the data from all 10 pages into a single dataset.
5. AI Synthesis (OpenRouter): The "Analysis" node sends the entire compiled dataset to an LLM via OpenRouter. The AI performs a holistic analysis to identify search intent, must-cover topics, and differentiation opportunities.
6. Strategic Brief Generation: The "Format Output" node takes the AI's structured JSON analysis and transforms it into a clean, human-readable Markdown report, which is then delivered back to the user in the chat interface.

🔑 Prerequisites
To use this workflow, you will need active accounts with both Bright Data (for web scraping) and OpenRouter (for AI model access).

Setting Up Your Credentials
1. Bright Data account: Sign up for a free trial account on their website. Inside your Bright Data dashboard, activate both the SERP API and the Web Unblocker products to create the necessary Zones. In n8n, navigate to the Credentials section, add a new "Brightdata API" credential, and enter your API key. In the workflow, select your newly created credential in both the "Google SERP" node and the "Access and extract data from a specific URL" node.
2. OpenRouter account: Sign up for an account at OpenRouter.ai. Navigate to your account settings to find your API key. In n8n, go to Credentials, add a new "OpenRouter API" credential, and paste your key. In the workflow, select this credential in all three "OpenRouter Chat Model" nodes.

Phil | Inforeole 🇫🇷 Contact us to automate your processes
by Gabriel Santos
This workflow helps HR teams run smoother monthly Q&A sessions with employees.

**Who's it for**
HR teams and managers who want to centralize employee questions, avoid duplicates, and keep meetings focused.

**How it works**
1. Employees submit questions through a styled form.
2. Questions are stored in a database.
3. HR selects a date range to review collected questions.
4. An AI Agent deduplicates and clusters similar questions, then generates a meeting script in Markdown format.
5. The Agent automatically creates a Google Calendar event (with a Google Meet link) on the last Friday of the current month at 16:00–17:00.
6. The script is returned as a downloadable .txt file for HR to guide the session.

**Requirements**
- MySQL (or a compatible DB) for storing questions
- Google Calendar credentials
- OpenAI (or another supported LLM provider)

**How to customize**
- Adjust the meeting day/time in the Set node expressions
- Change the database/table name in the MySQL nodes
- Modify the clustering logic in the AI Agent prompt
- Replace the form styling with your company's branding

This template ensures no repeated questions, keeps HR better prepared with a structured script, and automates meeting scheduling in just one click.