by Pratyush Kumar Jha
# Image Ad Cloner AI Agent

This n8n workflow automates “cloning” a competitor ad style and re-generating it using your product image. A user uploads a product image and provides a competitor ad URL via a form trigger; the workflow downloads both images, converts them to inline/base64 parts, builds a detailed generation prompt, sends the request to a generative-vision model (Gemini-style endpoints), waits for the result, converts the output back to a file, and returns it. Use cases: rapid ad iteration, creative A/B variants, and scaled ad re-skins.

## How it works (step-by-step)

1. **form_trigger** — user submits the competitor image URL, their product file, and optional changes.
2. **convert_product_image_to_base64** — converts the uploaded product file to base64 inline data.
3. **download_image** — fetches the competitor ad image from the provided URL.
4. **convert_ad_image_to_base64** — converts the downloaded competitor image to base64 inline data.
5. **build_prompt** — assembles a careful prompt (handles partial text, labels, placements, CTA swaps, additional changes).
6. **generate_ad_image_prompt → generate_ad_image** — two HTTP Request nodes call generative endpoints (first to prepare the prompt/content, then flash-image generation). These include inline image data and model-specific parameters.
7. **Wait** — allows async generation to complete (the workflow uses a wait/webhook pattern).
8. **set_result → get_image** — extract the image result and convert the binary to a file for downstream use (download or return to the requester).
9. **Sticky Note** — visual diagram/docs inside the workflow.

## Quick Setup Guide

👉 Demo & Setup Video
👉 Course

## Nodes of interest

- **form_trigger** (n8n-nodes-base.formTrigger) — entry point; validates inputs.
- **convert_product_image_to_base64** & **convert_ad_image_to_base64** (extractFromFile) — convert uploaded/downloaded binaries to inline/base64 data for the model.
- **download_image** (httpRequest) — fetches the competitor image URL. Add robust URL validation.
- **build_prompt** (set) — builds the natural-language prompt that instructs the model how to replace branding, packaging, and CTAs.
- **generate_ad_image_prompt** & **generate_ad_image** (httpRequest) — POST to generative APIs (the examples use Gemini endpoints). Have retry/backoff settings.
- **Wait** — used to allow model generation to complete before extracting results.
- **set_result / get_image / convertToFile** — assemble the final binary file to deliver to the user.

## What you’ll need (credentials & permissions)

- n8n account with the Workflow active toggle & webhook exposure (or a self-hosted endpoint).
- HTTP credential(s) for your generative model provider:
  - Header auth credential for any required custom headers (httpHeaderAuth).
  - Bearer token credential for the model API (httpBearerAuth).
  - (Optional) Generic HTTP auth if using proxies or intermediate APIs.
- Storage (optional): S3 or similar if you want to persist generated images.
- (Recommended) An OCR or moderation service API key if you plan to detect/remove copyrighted logos or PII.
- Ensure your hosting domain allows outbound HTTPS to the model endpoints and that webhook URLs are reachable.

## Recommended settings & best practices

- **Input validation**: enforce allowed MIME types (jpg, png, webp) and a max file size (e.g., 5–10 MB), and sanitize the competitor URL against a domain whitelist.
- **Retries & backoff**: use retryOnFail and waitBetweenTries for httpRequest nodes (the example uses retryOnFail: true and waitBetweenTries: 5000). Limit retries to avoid abuse.
- **Rate limits & quotas**: guard calls to the model API with rate limiting/queueing to avoid hitting provider quotas.
- **Ethics & copyright check**: run OCR/logo detection and flag requests that attempt to reproduce trademarked content; require human approval before creating ads that replicate competitor branding.
- **Partial-text handling**: include logic in the prompt (and optionally an OCR pre-check) to identify fragmented text on packaging and replace it correctly with your brand text.
- **Binary handling**: always set and preserve the correct mime_type in inline data (png/jpeg) to avoid corrupt outputs.
- **Webhooks & timeouts**: set a sensible Wait time and webhook timeout; provide progress/error messages back to the requester.
- **Logging & monitoring**: capture model responses and node errors to troubleshoot prompt issues and image artifacts. Obfuscate tokens in logs.
- **Security**: store API keys in n8n credentials (never as plain text in Set nodes). Use least privilege.

## Customization ideas

- Add an OCR node (e.g., Tesseract or a cloud OCR) before build_prompt to detect and list text to replace automatically.
- Add a logo mask step to blur or remove competitor logos before feeding images to the generator.
- Generate multiple variants per request (different CTAs, color palettes) and save them for A/B testing.
- Add human-in-the-loop review: store generated images in a review queue and send approval emails/webhooks before final download.
- Integrate analytics: tag generated ads and push metadata (prompt, model used, timestamp) to a database for later analysis.
- Replace the Gemini endpoints with other providers, or add a model selector UI option.
- Add automatic compliance moderation filters and a “safe mode” toggle that refuses disallowed transformations.

**Tags**: image-ad-cloner, n8n, gemini, generative-ai, image-editing, ad-cloning, automation, form-trigger, viral
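To make the inline-data handling concrete, here is a minimal sketch of the request body a generation node might send. The field layout follows Gemini-style generateContent payloads; treat the exact shape (and the mime types shown) as assumptions to verify against your provider's API reference.

```javascript
// Build a Gemini-style request body carrying the prompt plus two inline images.
// The mime_type values are examples; they must match the actual binaries,
// as the "Binary handling" note above warns.
function buildGenerationBody(promptText, productB64, adB64) {
  return {
    contents: [
      {
        parts: [
          { text: promptText },
          { inline_data: { mime_type: 'image/png', data: productB64 } },  // your product image
          { inline_data: { mime_type: 'image/jpeg', data: adB64 } },      // competitor ad image
        ],
      },
    ],
  };
}

const body = buildGenerationBody('Recreate this ad with my product', 'AAAA', 'BBBB');
console.log(body.contents[0].parts.length); // 3 parts: prompt + two images
```

In the workflow, this object would be the JSON body of the generate_ad_image HTTP Request node, with the base64 strings coming from the two extractFromFile nodes.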
by Harshil Agrawal
Store the data received from the CocktailDB API in JSON
by Mauricio Perera
## 📝 Description

This workflow extracts all links (URLs) contained in a PDF file by converting it to HTML via PDF.co and then extracting the URLs present in the resulting HTML. Unlike the traditional Read PDF node, which only returns visible link text, this flow provides the full active URLs, making further processing and analysis easier.

## 📌 Use Cases

- Extract all hyperlinks from PDF documents.
- Automate URL verification and monitoring within documents.
- Extract links from reports, contracts, catalogs, newsletters, or manuals.
- Prepare URLs for validation, classification, or storage.

## 🔗 Workflow Overview

1. User uploads a PDF file via a web form.
2. The PDF is uploaded to PDF.co.
3. The PDF is converted to HTML (preserving links).
4. The converted HTML is downloaded.
5. URLs are extracted from the HTML using a custom code node.

## ⚙️ Node Breakdown

1. **Load PDF** (formTrigger): uploads a .pdf file; single file upload.
2. **Upload** (PDF.co API): uploads the PDF file to PDF.co using binary data.
3. **PDF to HTML** (PDF.co API): converts the uploaded PDF to HTML using its URL.
4. **Get HTML** (HTTP Request): downloads the converted HTML from PDF.co.
5. **Code1** (Function / Code): parses the HTML content to extract all URLs (http, https, www), using a regex to identify URLs within the HTML text, and outputs an array of objects containing the extracted URLs.

## 📎 Requirements

- Active PDF.co account with an API key.
- PDF.co credentials set up in n8n (PDF.co account).
- Webhook enabled to expose the upload form.

## 🛠️ Suggested Next Steps

- Add nodes to validate extracted URLs (e.g., HTTP requests to check status).
- Store URLs in a database or spreadsheet, or send them via email.
- Extend the flow to filter URLs by domain, type, or pattern.

## 📤 Importing the Template

Import this workflow into n8n via Import workflow and paste the provided JSON.
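The Code1 node's extraction step can be sketched as below. The exact regex is an assumption (the template does not publish it), so tighten or loosen it for your documents.

```javascript
// Extract http(s)/www URLs from the converted HTML and emit one n8n item per URL.
function extractUrls(html) {
  // Match http://, https://, or www. prefixes up to the next quote, bracket, or whitespace.
  const pattern = /(?:https?:\/\/|www\.)[^\s"'<>)]+/g;
  const matches = html.match(pattern) || [];
  // Deduplicate and wrap each URL in the { json: ... } shape n8n Code nodes return.
  return [...new Set(matches)].map((url) => ({ json: { url } }));
}

const items = extractUrls('<a href="https://example.com/a">x</a> see www.example.org');
console.log(items.map((i) => i.json.url)); // [ 'https://example.com/a', 'www.example.org' ]
```

Inside an n8n Code node, the `html` input would come from the Get HTML node's output, and the returned array becomes the node's items.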
by phil
This workflow automates the update of Yoast SEO metadata for a specific post or product on a WordPress or WooCommerce site. It sends a POST request to a custom API endpoint exposed by the Yoast SEO API Manager plugin, allowing programmatic changes to the SEO title and meta description. Bulk version available here.

## Prerequisites

- A WordPress site with administrator access.
- The Yoast SEO plugin installed and activated.
- The Yoast SEO API Manager companion plugin installed and activated to expose the required API endpoint.
- WordPress credentials configured within your n8n instance.

## Setup Steps

1. **Configure the Settings node**: In the Settings node, replace the value of the wordpress URL variable with the full URL of your WordPress site (e.g., https://your-domain.com/).
2. **Set credentials**: In the HTTP Request - Update Yoast Meta node, select your pre-configured WordPress credentials from the Credential for WordPress API dropdown menu.
3. **Define target and content**: In the same HTTP Request node, navigate to the Body Parameters section and update the following values:
   - post_id: the ID of the WordPress post or WooCommerce product you wish to update.
   - yoast_title: the new SEO title.
   - yoast_description: the new meta description.

## How It Works

1. **Manual Trigger**: The workflow is initiated manually. This can be replaced by any trigger node for full automation.
2. **Settings node**: Defines the base URL of the target WordPress instance. This centralizes the configuration, making it easier to manage.
3. **HTTP Request node**: The core component. It constructs and sends a POST request to the /wp-json/yoast-api/v1/update-meta endpoint. The request body contains the post_id and the new metadata, and it authenticates using the selected n8n WordPress credentials.

## Customization Guide

- **Dynamic inputs**: To update posts dynamically, replace the static values in the HTTP Request node with n8n expressions.
  For example, you can use data from a Google Sheets node by setting the post_id value to an expression like {{ $json.column_name }}.
- **Update additional fields**: The underlying API may support updating other Yoast fields. Consult the Yoast SEO API Manager plugin's documentation to identify other available parameters (e.g., yoast_canonical_url) and add them to the Body Parameters section of the HTTP Request node.
- **Change the trigger**: Replace the When clicking ‘Test workflow’ node with any other trigger node to fit your use case, such as:
  - **Schedule**: to run the update on a recurring basis.
  - **Webhook**: to trigger the update from an external service.
  - **Google Sheets**: to trigger the workflow whenever a row is added or updated in a specific sheet.

## Yoast SEO API Manager Plugin for WordPress

```php
<?php
/**
 * Plugin Name: Yoast SEO API Manager v1.2
 * Description: Manages the update of Yoast metadata (SEO Title, Meta Description) via a dedicated REST API endpoint.
 * Version: 1.2
 * Author: Phil - https://inforeole.fr (Adapted by Expert n8n)
 */

if ( ! defined( 'ABSPATH' ) ) {
    exit; // Exit if accessed directly.
}

class Yoast_API_Manager {

    public function __construct() {
        add_action( 'rest_api_init', [ $this, 'register_api_routes' ] );
    }

    /**
     * Registers the REST API route to update Yoast meta fields.
     */
    public function register_api_routes() {
        register_rest_route( 'yoast-api/v1', '/update-meta', [
            'methods'             => 'POST',
            'callback'            => [ $this, 'update_yoast_meta' ],
            'permission_callback' => [ $this, 'check_route_permission' ],
            'args'                => [
                'post_id' => [
                    'required'          => true,
                    'validate_callback' => function( $param ) {
                        $post = get_post( (int) $param );
                        if ( ! $post ) {
                            return false;
                        }
                        $allowed_post_types = class_exists( 'WooCommerce' ) ? [ 'post', 'product' ] : [ 'post' ];
                        return in_array( $post->post_type, $allowed_post_types, true );
                    },
                    'sanitize_callback' => 'absint',
                ],
                'yoast_title' => [
                    'type'              => 'string',
                    'sanitize_callback' => 'sanitize_text_field',
                ],
                'yoast_description' => [
                    'type'              => 'string',
                    'sanitize_callback' => 'sanitize_text_field',
                ],
            ],
        ] );
    }

    /**
     * Updates the Yoast meta fields for a specific post.
     *
     * @param WP_REST_Request $request The REST API request instance.
     * @return WP_REST_Response|WP_Error Response object on success, or WP_Error on failure.
     */
    public function update_yoast_meta( WP_REST_Request $request ) {
        $post_id = $request->get_param( 'post_id' );

        if ( ! current_user_can( 'edit_post', $post_id ) ) {
            return new WP_Error( 'rest_forbidden', 'You do not have permission to edit this post.', [ 'status' => 403 ] );
        }

        // Map API parameters to Yoast database meta keys.
        $fields_map = [
            'yoast_title'       => '_yoast_wpseo_title',
            'yoast_description' => '_yoast_wpseo_metadesc',
        ];

        $results = [];
        $updated = false;

        foreach ( $fields_map as $param_name => $meta_key ) {
            if ( $request->has_param( $param_name ) ) {
                $value = $request->get_param( $param_name );
                update_post_meta( $post_id, $meta_key, $value );
                $results[ $param_name ] = 'updated';
                $updated = true;
            }
        }

        if ( ! $updated ) {
            return new WP_Error( 'no_fields_provided', 'No Yoast fields were provided for update.', [ 'status' => 400 ] );
        }

        return new WP_REST_Response( $results, 200 );
    }

    /**
     * Checks if the current user has permission to access the REST API route.
     *
     * @return bool
     */
    public function check_route_permission() {
        return current_user_can( 'edit_posts' );
    }
}

new Yoast_API_Manager();
```

**Bulk version available here**: this bulk version, provided with a dedicated WordPress plugin, allows you to generate and bulk-update meta titles and descriptions for multiple articles simultaneously using artificial intelligence. It automates the entire process, from article selection to the final update in Yoast, offering considerable time savings.

Phil | Inforeole
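For reference, the request the HTTP Request node sends to this endpoint can be sketched as a small builder. The base URL is a placeholder; the endpoint path and body parameter names come from the plugin code above.

```javascript
// Build the POST request the n8n HTTP Request node sends to the plugin endpoint.
// Authentication (WordPress application password / credentials) is handled by
// the node's credential setting and is omitted here.
function buildYoastUpdateRequest(baseUrl, postId, title, description) {
  return {
    method: 'POST',
    url: `${baseUrl.replace(/\/$/, '')}/wp-json/yoast-api/v1/update-meta`,
    body: { post_id: postId, yoast_title: title, yoast_description: description },
  };
}

const req = buildYoastUpdateRequest('https://your-domain.com/', 42, 'New SEO title', 'New meta description');
console.log(req.url); // https://your-domain.com/wp-json/yoast-api/v1/update-meta
```

On success the plugin responds with a JSON object such as `{ "yoast_title": "updated", "yoast_description": "updated" }` and HTTP 200, per the `update_yoast_meta` callback above.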
by Kev
⚠️ **Important**: This workflow uses the Autype community node and requires a self-hosted n8n instance.

This workflow demonstrates Autype's Extended Markdown engine — the foundation for creating production-ready documents from Markdown. It supports full document layouts with headers, footers, page numbering, cross-references, indices, custom layouts, and advanced diagrams (TikZ, Mermaid, ...). You can generate complete PDF, DOCX, or ODT documents with professional typography, tables, charts, and embedded images.

## Who is this for?

This workflow is for developers, operations teams, and business analysts who want to turn structured Markdown into branded documents with a consistent design system. It's ideal if you want to separate content from styling and include uploaded visuals in your final PDF or DOCX output.

## What this workflow does

This workflow builds a business quarter report from Markdown and applies a separate style JSON (defaults) to control typography, chart colors, table styling, and header/footer layout. It also downloads a report image via HTTP, uploads it as a temporary Autype image, and injects the returned refPath into the title page before rendering.

The included example report uses:

- A dedicated style configuration node (schema-aligned defaults)
- A cover page with company logo + uploaded content image
- A financial KPI table
- A chart directive for regional performance
- A second chart + page-break section for operational metrics

## How it works

1. **Manual Trigger** — starts the workflow on demand.
2. **Set Document Style JSON** — defines document and defaults (font, color, table style, chart colors, header/footer) plus a company logo URL (placehold.co).
3. **Set Business Report Markdown** — stores the markdown template with placeholders for the logo and the uploaded title image.
4. **Download Report Image** — fetches a PNG via HTTP Request (file response).
5. **Upload Content Image** — uploads the downloaded file using Autype uploadImage and returns a temporary refPath.
6. **Build Markdown + Style Payload** — injects image URLs/refs into the markdown and serializes the defaults JSON for rendering.
7. **Render Styled Markdown Report** — renders the markdown with defaults and downloads the final document.

## How image upload works

For images that aren't publicly accessible (e.g., internal dashboards, screenshots), Autype provides a temporary image upload mechanism:

1. Download the image as binary data (HTTP Request, file upload, etc.).
2. Upload it to Autype via the uploadImage operation → returns a refPath (e.g., /temp-image/{id}).
3. Reference the image in Markdown using the refPath directly: {width=520}
4. Temporary images expire after 24 hours and are automatically cleaned up.

## How Markdown works with Autype

Autype uses an Extended Markdown syntax that transforms standard Markdown into a full-featured document markup system for professional document creation. This goes far beyond basic Markdown, with specialized elements for document structure, layout, and advanced content. Key extended elements include:

- :::toc — table of contents with automatic heading extraction
- :::chart — interactive charts and data visualizations
- :::table — enhanced tables with styling and formatting options
- ---page{align=center}--- directives — page layout, orientation, and section breaks
- Cross-references, indices, diagrams (Mermaid, TikZ, ...), equations, and bibliography support

For the complete markup reference and all available elements, see the Autype Markup Reference.

## Setup

1. Install the Autype community node (n8n-nodes-autype) via Settings → Community Nodes.
2. Create an Autype API credential with your API key from app.autype.com (see API Keys in Settings).
3. In Download Report Image, replace the sample URL with your own dashboard/chart image if needed.
4. In Set Document Style JSON, adjust typography/colors/header/footer as required.
5. Import this workflow and click Test Workflow to generate the example quarter report.
## Requirements

- n8n instance with community node support
- Autype account with API key
- n8n-nodes-autype community node installed

## How to customize

- **Style system**: keep report content in markdown and update the design centrally in Set Document Style JSON (defaults).
- **Title page assets**: replace the logo URL (companyLogoUrl) and the downloaded content image source URL.
- **Output format**: change document.type from pdf to docx for Word-compatible output.
- **Extended syntax**: use markdown tables, chart directives (:::chart), page directives, and text2 blocks for richer reports.
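To make the extended syntax concrete, here is a minimal sketch of what an Extended Markdown source could look like, composed only from the directives named above. The open/close fence form and the elided directive bodies are assumptions; the exact option schema is in the Autype Markup Reference.

```markdown
# Q3 Business Report

:::toc
:::

---page{align=center}---

## Regional Performance

:::chart
<!-- chart data and options per the Autype Markup Reference -->
:::
```

In this workflow, a source like this lives in the Set Business Report Markdown node, with placeholders swapped for the logo URL and uploaded refPath before rendering.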
by Kristian Ekachandra
Automatically scrape Google Maps business data using BrowserAct AI automation and save results directly to Google Sheets.

🎯 **What This Workflow Does**:
- Collects business information from Google Maps based on location and category
- Extracts name, phone, rating, address, website, and latest reviews
- Automatically saves data to Google Sheets with deduplication
- User-friendly form interface for easy data collection

🔧 **Requirements**:
- BrowserAct Community Node
- BrowserAct account
- Google Sheets OAuth2 credentials
- BrowserAct Google Maps Detail Scraper template

✨ **Perfect For**:
- Lead generation and prospecting
- Market research and competitor analysis
- Building business directories
- Local SEO research
- Sales outreach campaigns

📊 **Extracted Data Includes**:
- Business Name
- Phone Number
- Category
- Rating
- Full Address
- Website URL
- Latest Review Summary

💡 **Easy Setup**: Just fill in the form with your target location and business category, and let BrowserAct AI handle the scraping while results automatically populate your Google Sheets.

🎥 Full tutorial on YouTube
by Alexander Schnabl
# Audit permissions in Confluence to ensure compliance

This workflow scans selected Confluence spaces for public exposure risks, helping teams identify unintended access and potential data leakage.

## What it does

Detects public exposure risks in Confluence spaces, including:

- Anonymous access permissions at the space level
- Whether public links are enabled
- Pages with active or blocked public links

Uses the Confluence REST API v2 together with the Atlassian GraphQL API, and produces a consolidated per-space report containing:

- Anonymous access permissions
- Public link status
- Pages with public links (title, status, URL, enabled-by user)

Ideal for security audits, compliance reviews, and data leakage prevention.

## How it works

1. The workflow starts via a Manual Trigger.
2. A Set Variables node defines atlassianDomain and spaceKeys (comma-separated).
3. **Get Spaces (v2)** retrieves matching spaces and splits them into individual items.
4. For each space, three GraphQL queries run in parallel:
   - Retrieve anonymous access permissions
   - Check public link feature status at the space level
   - Fetch pages with public links (ON / BLOCKED)
5. Results from all three queries are merged and normalized into a single per-space report.

## Setup

1. Configure the Set Variables node:
   - atlassianDomain → your Confluence base URL
   - spaceKeys → comma-separated list (e.g. ENG, HR)
2. Create an HTTP Basic Auth credential for Atlassian (email + API token) and assign it to all HTTP and GraphQL nodes.
3. Ensure the credential has permission to read spaces, read space permissions, and access GraphQL endpoints.
4. Execute the workflow manually to generate the report.

## Notes

- Uses the Atlassian GraphQL API, which exposes permission and public-link data not fully available via REST.
- Pages with blocked public links are included for visibility.
- The GraphQL page query fetches up to 250 pages per space.
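The merge-and-normalize step can be sketched as below. The input field names are illustrative assumptions, since the actual Atlassian GraphQL response shapes vary; only the output shape mirrors the report fields listed above.

```javascript
// Combine the three per-space query results into one normalized report object.
function buildSpaceReport(spaceKey, anonPerms, publicLinkStatus, publicPages) {
  return {
    spaceKey,
    anonymousPermissions: anonPerms,              // e.g. ['read'] if anonymous access is granted
    publicLinksEnabled: publicLinkStatus === 'ON',
    publicPages: publicPages.map((p) => ({
      title: p.title,
      status: p.status,                           // 'ON' or 'BLOCKED'
      url: p.url,
      enabledBy: p.enabledBy,
    })),
  };
}

const report = buildSpaceReport('ENG', ['read'], 'ON', [
  { title: 'Roadmap', status: 'ON', url: 'https://example.atlassian.net/wiki/x', enabledBy: 'alice' },
]);
console.log(report.publicLinksEnabled); // true
```

In n8n this logic would sit in a Code node after the Merge node, producing one item per space for the final report.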
by AppStoneLab Technologies LLP
# Build a Vectorless PDF Knowledge Bot on Telegram Using PageIndex RAG

## 👤 Who Is This For?

This template is built for developers, researchers, and automation builders who want to create a document Q&A system — without the complexity of vector databases, embeddings, or chunking pipelines. It's perfect for:

- Developers exploring next-generation RAG architectures
- Teams building internal knowledge bots over PDFs (reports, manuals, contracts)
- Anyone who wants to query documents through Telegram with a clean, no-infrastructure setup

## ❓ What Problem Does This Solve?

Traditional RAG systems require converting text into vectors, storing them in a vector database, and relying on semantic similarity to retrieve relevant chunks. This approach has known weaknesses:

- **Similarity ≠ relevance**: queries express intent, not exact content
- **Chunking breaks context**: arbitrary splits destroy meaning across sections
- **In-document references are missed**: e.g. "see Appendix B" has no semantic match

PageIndex solves this differently. Instead of vectors, it builds a hierarchical tree index (like a table of contents) from your PDF using an LLM. At query time, the LLM reasons over that tree, identifies the most relevant sections, retrieves only those, and generates a precise, cited answer. No embeddings. No vector DB. No chunking.

## ⚡ What This Workflow Does

This n8n template delivers a fully working Telegram-based RAG bot with two independent flows in a single workflow:

### 📄 Flow 1 → PDF Knowledge Upload (Run Once per Document)

Send a PDF file to your Telegram bot. The workflow downloads it and uploads it to PageIndex cloud, where the tree index is built automatically.

### 💬 Flow 2 → Q&A Chat (Runs Every Time)

Send any question as a text message to the same Telegram bot. The workflow fetches all your indexed documents, sends the question to PageIndex's LLM reasoning engine, and delivers a cited answer back to your Telegram chat.
## 🔄 How It Works

### Flow 1 - PDF Upload

1. **Receive PDF Document**: Telegram Trigger listens for messages containing a file. Send any PDF to the bot to start indexing.
2. **Download PDF File**: The bot downloads the binary PDF from Telegram's file storage using the file_id.
3. **Index PDF on PageIndex**: The PDF is uploaded to PageIndex cloud via POST /doc/. PageIndex builds a hierarchical tree index (a TOC with LLM-generated summaries per section) and returns a doc_id. No vectors are created.

### Flow 2 - Q&A

1. **Receive User Question**: Telegram Trigger listens for text messages. Any message triggers the Q&A flow.
2. **Fetch All Indexed Documents**: Calls GET /docs on PageIndex to retrieve all previously uploaded documents.
3. **Extract Document IDs**: Maps the documents list into a clean array of doc_id strings.
4. **LLM Reasoning over Document Tree**: Sends the user's question + all doc_ids to PageIndex POST /chat/completions. PageIndex's LLM traverses the tree, identifies the relevant nodes, retrieves the raw text, and generates an answer with page citations.
5. **Send Answer to User**: The answer is delivered back to the exact Telegram user who asked, using their chat_id.

## 🛠️ Setup Instructions

### Step 1 - Create a Telegram Bot

1. Open Telegram and message @BotFather.
2. Send /newbot and follow the prompts.
3. Copy the Bot Token provided.
4. In n8n, add a new Telegram credential and paste the token.

### Step 2 - Get Your PageIndex API Key

1. Visit dash.pageindex.ai and create a free account.
2. Go to API Keys and generate a new key.
3. In the workflow, replace YOUR_PAGEINDEX_API_KEY in these three nodes:
   - ☁️ Index PDF on PageIndex
   - 📚 Fetch All Indexed Documents
   - 🧠 LLM Reasoning over Document Tree

### Step 3 - Connect Telegram Credentials

Both Telegram Trigger nodes and the Telegram send node use the same credential. Set your Telegram API credentials once and n8n will apply them across all nodes automatically.
### Step 4 - Activate the Workflow

1. Click Activate in n8n.
2. Send a PDF file to your Telegram bot → it gets indexed.
3. Send any text question → get an LLM-reasoned answer back.

## 📋 Required Credentials

| Service | Where to Get | Used In |
|---|---|---|
| Telegram Bot Token | @BotFather on Telegram | All Telegram nodes |
| PageIndex API Key | API Key from Dashboard | Upload + Chat nodes |

## 💡 How to Customize

- **Query multiple documents at once**: Upload multiple PDFs (each creates a separate doc_id). The Q&A flow automatically fetches all of them and reasons across all documents simultaneously.
- **Change temperature**: In the LLM Reasoning over Document Tree node, adjust "temperature": 0.5 for more creative (higher) or more precise (lower) answers.
- **Enable/disable citations**: Toggle "enable_citations": true/false in the chat node body to control whether page references appear in answers.
- **Filter by specific document**: Modify the Extract Document IDs node to filter only documents with status: completed, or filter by name to limit which docs are queried.
- **Replace Telegram with another interface**: Swap the Telegram Trigger nodes for a Webhook or Form Trigger if you want to build a web-based version instead.

## 📦 About PageIndex

PageIndex is an open-source vectorless RAG framework by VectifyAI. It powers the Mafin 2.5 financial assistant, which achieved 98.7% accuracy on FinanceBench, significantly outperforming GPT-4o (~31%) on document-intensive tasks.

📖 PageIndex Docs
💻 GitHub — VectifyAI/PageIndex
🎮 Developer Dashboard

## 🔧 Technical Notes

- PDFs sent via Telegram must be under 20 MB (Telegram Bot API limit).
- PageIndex document processing typically takes 10–60 seconds depending on PDF size; the first question after upload may take slightly longer if the doc is still being indexed.
- All indexed documents persist permanently in your PageIndex account and can be reused across sessions without re-uploading.

## 🤝 Need Help?
Feel free to reach out via the n8n Community Forum or check out more automation templates on AppStoneLab Technologies.
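As a reference for the chat step, the body the LLM Reasoning over Document Tree node sends to POST /chat/completions might look like the sketch below. Only "temperature" and "enable_citations" are named in this template; the other field names and the overall shape are assumptions to verify against the PageIndex docs.

```json
{
  "messages": [
    { "role": "user", "content": "What were the key findings in the report?" }
  ],
  "doc_ids": ["doc_abc123", "doc_def456"],
  "temperature": 0.5,
  "enable_citations": true
}
```

The doc_ids array is produced by the Extract Document IDs node from the GET /docs response.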
by Alok Singh
**Step 1: Slack Trigger**
The workflow starts whenever your Slack bot is mentioned or receives an event in a channel. The message that triggered it (including text and channel info) is passed into the workflow.

**Step 2: Extract the Sheet ID**
The workflow looks inside the Slack message for a Google Sheets link. If it finds one, it extracts the unique spreadsheet ID from that link. It also keeps track of the Slack channel where the message came from. If no link is found, the workflow stops quietly.

**Step 3: Read Data from Google Sheet**
Using the sheet ID, the workflow connects to Google Sheets and reads the data from the chosen tab (the specific sheet inside the spreadsheet). This gives the workflow all the rows and columns of data from that tab.

**Step 4: Convert Data to CSV**
The rows pulled from Google Sheets are then converted into a CSV file. At this point, the workflow has the spreadsheet data neatly packaged as a file.

**Step 5: Upload CSV to Slack**
Finally, the workflow uploads the CSV file back into Slack. It can be sent either to a fixed channel or directly to the channel where the request came from. Slack users in that channel will see the CSV as a file upload.

## How it works

1. The workflow is triggered when your Slack bot is mentioned or receives a message.
2. It scans the message for a Google Sheets link. If a valid link is found, the workflow extracts the unique sheet ID.
3. It then connects to Google Sheets, reads the data from the specified tab, and converts it into a CSV file.
4. Finally, the CSV file is uploaded back into Slack so the requesting user (and others in the channel) can download it.

## How to use

1. In Slack, mention your bot and include a Google Sheets link in your message.
2. The workflow will automatically pick up the link and process it.
3. Within a short time, the workflow will upload a CSV file back into the same Slack channel.
4. You can then download or share the CSV file directly from Slack.
## Requirements

- **Slack App & Credentials**: Your bot must be installed in Slack with permissions to receive mentions and upload files.
- **Google Sheets Access**: The Google account connected in n8n must have at least read access to the sheet.
- **n8n Setup**: The workflow must be imported into n8n and connected to your Slack and Google Sheets credentials.
- **Correct Sheet Tab**: The workflow needs to know which tab of the spreadsheet to read (set by name or by sheet ID).

## Customising this workflow

- **Channel Targeting**: By default, the file can be sent back to the channel where the request came from. You can also set it to always post in a fixed channel.
- **File Naming**: Change the uploaded file name (e.g., include the sheet title or today’s date).
- **Sheet Selection**: Adjust the configuration to read a specific tab, or allow the user to specify the tab in their Slack message.
- **Error Handling**: Add extra steps to send a Slack message if no valid link is detected or the Google Sheet cannot be accessed.
- **Formatting**: Extend the workflow to clean, filter, or enrich the data before converting it into CSV.
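Step 2 (extracting the spreadsheet ID) can be sketched as a small Code-node helper. It relies on the standard `/spreadsheets/d/<ID>/` shape of Google Sheets URLs; messages without a matching link return null, letting the workflow stop quietly as described above.

```javascript
// Pull the spreadsheet ID out of a Google Sheets link found in Slack message text.
function extractSheetId(messageText) {
  const match = messageText.match(/docs\.google\.com\/spreadsheets\/d\/([a-zA-Z0-9_-]+)/);
  return match ? match[1] : null;
}

const id = extractSheetId('please export https://docs.google.com/spreadsheets/d/1AbC_d-42/edit#gid=0');
console.log(id); // 1AbC_d-42
```

In n8n, the returned ID would be passed to the Google Sheets node, while the Slack channel ID from the trigger item is carried along for the final file upload.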
by Jay Emp0
# Prompt-to-Image Generator & WordPress Uploader (n8n Workflow)

This workflow generates high-quality AI images from text prompts using Leonardo AI, then automatically uploads the result to your WordPress media library and returns the final image URL. It functions as a Modular Content Production (MCP) tool, ideal for AI agents or workflows that need to dynamically generate and store visual assets on demand.

## ⚙️ Features

- 🧠 **AI-Powered Generation**: Uses Leonardo AI to create 1472x832px images from any text prompt, with enhanced contrast and a style UUID preset.
- ☁️ **WordPress Media Upload**: Uploads the image as an attachment to your connected WordPress site via the REST API.
- ☁️ **Twitter Media Upload**: Uploads the image to Twitter so you can post it later on X.com using the media_id.
- 🔗 **Returns Final URL**: Outputs the publicly accessible image URL for immediate use in websites, blogs, or social media posts.
- 🔁 **Workflow-Callable (MCP Compatible)**: Can be executed standalone or triggered by another workflow; acts as an image-generation microservice for larger automation pipelines.

## 🧠 Use Cases

**For AI Agents (MCP)**
- Plug this into multi-agent systems as the "image generation module"
- Generate blog thumbnails, product mockups, or illustrations
- Return a clean image_url for content embedding or post-publishing

**For Marketers / Bloggers**
- Automate visual content creation for articles
- Scale image generation for SEO blogs or landing pages
- Supports media upload for Twitter

**For Developers / Creators**
- Integrate with other n8n workflows
- Pass prompt and slug as inputs from any external trigger (e.g., webhook, Discord, Airtable, etc.)

## 📥 Inputs

| Field  | Type   | Description                           |
|--------|--------|---------------------------------------|
| prompt | string | Text prompt for image generation      |
| slug   | string | Filename identifier (e.g. hero-image) |

Example:

```json
{ "prompt": "A futuristic city skyline at night", "slug": "futuristic-city" }
```

## 📤 Output

```json
{
  "public_image_url": "https://your.wordpress.com/img-id",
  "wordpress": { ... },
  "twitter": { ... }
}
```

## 🔄 Workflow Summary

1. **Receive Prompt & Slug**: via manual trigger or parent workflow execution
2. **Generate Image**: POST to Leonardo AI's API with the prompt and config
3. **Wait & Poll**: delays 1 minute, then fetches the final image metadata
4. **Download Image**: GET request to retrieve the generated image
5. **Upload to WordPress**: uses the WordPress REST API with proper headers
6. **Upload to Twitter**: uses the Twitter Media Upload API to get the media id in case you want to post the image to Twitter
7. **Return Result**: outputs a clean public_image_url JSON object along with the wordpress and twitter media objects

## 🔐 Requirements

- Leonardo AI account and API key
- WordPress site with API credentials (media write permission)
- Twitter / x.com OAuth API (optional)
- n8n instance (self-hosted or cloud)

Credential setup:

- httpHeaderAuth for Leonardo headers
- httpBearerAuth for the Leonardo bearer token
- wordpressApi for upload

## 🧩 Node Stack

1. Execute Workflow Trigger / Manual Trigger
2. Code (Input Parser)
3. HTTP Request → Leonardo image generation
4. Wait → 1 min delay
5. HTTP Request → Poll generation result
6. HTTP Request → Download image
7. HTTP Request → Upload to WordPress
8. Code → Return final image URL

## 🖼 Example Prompt

```json
{ "prompt": "Batman typing on a laptop", "slug": "batman-typing-on-a-laptop" }
```

Will return:

```json
{ "public_image_url": "https://articles.emp0.com/wp-content/uploads/2025/07/img-batman-typing-on-a-laptop.jpg" }
```

## 🧠 Integrate with AI Agents

This workflow is MCP-compliant. Plug it into:

- Research-to-post pipelines
- Blog generators
- Carousel builders
- Email visual asset creators

Trigger it from any parent AI agent that needs to generate an image based on a given idea, post, or instruction.
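The wait-and-poll step can be sketched as below. The Leonardo endpoint path and response shape are assumptions based on its public API; verify them against the current Leonardo docs. `fetchJson` stands in for the authenticated HTTP Request node.

```javascript
// Poll the generation result until an image URL is available, mirroring the
// workflow's "Wait 1 min, then fetch metadata" pattern with a retry cap.
async function pollGeneration(fetchJson, generationId, { retries = 5, delayMs = 60000 } = {}) {
  for (let i = 0; i < retries; i++) {
    const res = await fetchJson(`https://cloud.leonardo.ai/api/rest/v1/generations/${generationId}`);
    const images = res?.generations_by_pk?.generated_images ?? [];
    if (images.length > 0) return images[0].url; // generation finished
    await new Promise((r) => setTimeout(r, delayMs)); // not ready yet, wait and retry
  }
  throw new Error(`Generation ${generationId} not ready after ${retries} polls`);
}
```

In the template this loop is approximated by a single fixed Wait node followed by one poll; the retry version above is a more defensive variant for slow generations.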
by Muhammad Abrar
This n8n template demonstrates how to automate the scraping of posts, comments, and sub-comments from a Facebook Group and store the data in a Supabase database.

Use cases are many: gather user engagement data for analysis, archive posts and comments for research, or monitor community sentiment by collecting feedback across discussions!

## Good to know

At the time of writing, this workflow requires an Apify API token for scraping and Supabase credentials for database storage.

## How it works

- The Facebook Group posts are retrieved using an Apify scraper node.
- For each post, comments and sub-comments are collected recursively to capture all levels of engagement.
- The data is then structured and stored in Supabase, creating records for posts, comments, and sub-comments.
- The workflow includes options to adjust how often it scrapes and which group to target, making it easy to automate collection on a schedule.

## How to use

The workflow is triggered manually in the example, but you can replace this with other triggers such as webhooks or scheduled workflows, depending on your needs. The workflow also captures additional data points, such as user interactions or media attached to posts.

## Requirements

- Apify account and API token
- Supabase account for data storage

## Customizing this workflow

This template is ideal for gathering and analyzing community feedback, tracking discussions over time, or archiving group content for future use.
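The recursive collection step above is the interesting part: nested replies have to be flattened into separate post and comment rows before they can be inserted into Supabase tables. A minimal sketch of that flattening, assuming a hypothetical scraper output shape (posts with nested `comments`/`replies` arrays) rather than the actual Apify dataset schema:

```python
def flatten_group_data(posts):
    """Flatten scraped posts with nested comments into table-ready rows.

    Sub-comments at any depth become rows whose parent_comment_id points
    at the comment they reply to (None for top-level comments).
    """
    post_rows, comment_rows = [], []

    def walk_comments(comments, post_id, parent_id=None):
        for c in comments:
            comment_rows.append({
                "id": c["id"],
                "post_id": post_id,
                "parent_comment_id": parent_id,
                "text": c.get("text", ""),
            })
            # Recurse so every level of engagement is captured.
            walk_comments(c.get("replies", []), post_id, parent_id=c["id"])

    for p in posts:
        post_rows.append({"id": p["id"], "text": p.get("text", "")})
        walk_comments(p.get("comments", []), p["id"])

    return post_rows, comment_rows
```

In n8n the same effect is achieved with the scraper node plus the per-post iteration; the rows produced here correspond to what the Supabase nodes would insert.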
by KlickTipp
Community Node Disclaimer: This workflow uses KlickTipp community nodes.

## Introduction

This workflow automates the end-to-end integration between Zoom and KlickTipp. It listens to Zoom webinar events (specifically `meeting.ended`), validates incoming webhooks, retrieves participant data from Zoom, and applies segmentation in KlickTipp by subscribing and tagging participants based on their attendance duration. This enables precise, automated campaign targeting without manual effort.

## How It Works

1. **Zoom Webhook Listener**: Captures `meeting.ended` events from Zoom and validates the initial webhook registration via HMAC before processing.
2. **Webhook Response Handling**: Distinguishes between Zoom's URL-validation requests and actual event data, and sends the appropriate response (`plainToken` + `encryptedToken` for validation, or a simple `status: ok` for regular events).
3. **Data Retrieval**: Waits briefly (1 second) to ensure meeting data is available, then pulls the participant list from Zoom's `past_meetings/{uuid}/participants` endpoint.
4. **Participant Processing**: Splits the list into individual participant items, filters out internal users (such as the host), and routes participants based on the meeting topic (e.g., an "Anfänger" [beginner] vs. an "Experten" [expert] webinar).
5. **Attendance Segmentation**: Subscribes each participant to KlickTipp with mapped fields (first name, last name, email), then checks attendance thresholds: at least 90% of the total meeting duration counts as full attendance, anything less as general attendance. The corresponding KlickTipp tags are applied per meeting type.

## Key Features

- ✅ Webhook validation and security with HMAC (SHA-256).
- ✅ Automated attendance calculation using participant duration vs. meeting duration.
- ✅ Dynamic routing by meeting topic for multiple webinars.
- ✅ KlickTipp integration with subscriber creation or update, and tagging for full vs. general attendance.
- ✅ Scalable structure for adding more webinars by extending the Switch and tagging branches.

## Setup Instructions

**Zoom Setup**
- Enable Zoom API access and OAuth2 app credentials.
- Configure the webhook event `meeting.ended`.
- Grant scopes: `meeting:read:meeting`, `meeting:read:list_past_participants`

**KlickTipp Setup**
- Prepare custom fields:
  - Zoom | meeting selection (Text)
  - Zoom | meeting start (Date & Time)
  - Zoom | Join URL (URL)
  - Zoom | Registration ID (Text)
  - Zoom | Duration meeting (Text)
- Create tags for each meeting variation: attended, attended fully, not attended per meeting name.

**n8n Setup**
- Add the Zoom webhook node (Listen to ending Zoom meetings).
- Configure the validation nodes (Crypto, Build Validation Body).
- Set up the HTTP Request node with Zoom OAuth2 credentials.
- Connect the KlickTipp nodes with your KlickTipp API credentials.

## Testing & Deployment

1. End a test Zoom meeting connected to this workflow.
2. Verify that:
   - The webhook triggers correctly.
   - The participant list is fetched.
   - Internal users are excluded.
   - Participants are subscribed and tagged in KlickTipp.
3. Check contact records in KlickTipp for tag and field updates.

💡 Pro Tip: Use test emails and manipulate duration values to confirm the segmentation logic.

## Customization Ideas

- Adjust attendance thresholds (e.g., 80% instead of 90%).
- Add additional meeting topics via the Switch node.
- Trigger email campaigns in KlickTipp based on attendance tags.
- Expand segmentation with more granular ranges (e.g., 0–30%, 30–60%, 60–90%).
- Add error handling for missing Zoom data or API failures.

## Resources

- Use KlickTipp Community Node in n8n
- Automate Workflows: KlickTipp Integration in n8n
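The two pieces of logic the Crypto and segmentation nodes implement can be sketched in a few lines: the URL-validation handshake (HMAC-SHA256 over Zoom's `plainToken`, per Zoom's webhook validation flow) and the 90% attendance threshold. The webhook secret and tag strings below are placeholders.

```python
import hashlib
import hmac


def validation_response(plain_token, webhook_secret):
    """Build the response body Zoom expects for a URL-validation request."""
    encrypted = hmac.new(
        webhook_secret.encode(), plain_token.encode(), hashlib.sha256
    ).hexdigest()
    return {"plainToken": plain_token, "encryptedToken": encrypted}


def attendance_tag(participant_minutes, meeting_minutes):
    """>= 90% of the meeting duration counts as full attendance."""
    if meeting_minutes > 0 and participant_minutes / meeting_minutes >= 0.9:
        return "attended fully"
    return "attended"
```

Adjusting the `0.9` constant (or adding more ranges) mirrors the customization ideas above, and the same HMAC construction is what the workflow's Crypto node computes.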