by Oriol Seguí
This template allows you to automatically fetch WHOIS data for any domain and display it in a clean, modern HTML card. It doesn't just stop at showing raw registry data: it also uses a lightweight AI model to generate a short analysis or conclusion about the domain. It's designed for SEO specialists, web developers, sysadmins, digital marketers, and cybersecurity enthusiasts who want quick, structured access to domain ownership and status details without wasting time on manual searches.

What it does:
- Receives a domain name via webhook.
- Queries the WHOIS API through RapidAPI.
- Extracts and formats key details (registrar, creation date, expiry date, DNS, domain status, etc.).
- Uses AI (GPT-5-Nano) to generate a short descriptive insight about the domain.
- Returns everything in a responsive, styled HTML card (light and dark mode included).

Requirements:
- A free account on RapidAPI.com.
- Use of the Bulk WHOIS API (includes up to 1,000 free requests per month, no credit card required).

Who is it for?
- **SEO professionals** who need to quickly check domain lifespans, expirations, and registrar info.
- **Web developers** who want to integrate WHOIS checks into dashboards, apps, or chatbots.
- **IT admins & security teams** who monitor domains for fraud, abuse, or expiry.
- **Entrepreneurs & marketers** researching competitors' domains.

This template saves time, improves workflows, and makes WHOIS data both actionable and user-friendly.
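The "extracts and formats key details" step could be sketched as a small Code-node function like the one below. This is a minimal illustration, not the template's actual code: the WHOIS field names (`registrar`, `creation_date`, etc.) and the row markup are assumptions.

```javascript
// Hypothetical sketch: turn a parsed WHOIS response into the key/value
// rows of the HTML card, falling back to "N/A" for missing fields.
function whoisToCardRows(whois) {
  const fields = [
    ["Registrar", whois.registrar],
    ["Created", whois.creation_date],
    ["Expires", whois.expiry_date],
    ["Status", whois.domain_status],
  ];
  return fields
    .map(([label, value]) =>
      `<div class="row"><span>${label}</span><span>${value || "N/A"}</span></div>`)
    .join("\n");
}
```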
by AI/ML API | D1m7asis
🧠 Telegram Search Assistant — Tavily + AI/ML API

This n8n workflow lets users ask questions in Telegram and receive concise, fact-based answers. It performs a web search with Tavily, then uses AIMLAPI (GPT-5) to summarize the results into a clear 3–4 sentence reply. The flow ensures grounded, non-hallucinated answers.

🚀 Features
- 📩 Telegram-based input
- ⌨️ Typing indicator for better UX
- 🔎 Web search with Tavily (JSON results)
- 🧠 Summarization with AIMLAPI (openai/gpt-5-chat-latest)
- 📤 Replies in the same chat/thread
- ✅ Guardrails against hallucinations

🛠 Setup Guide

1. 📲 Create Telegram Bot
   - Talk to @BotFather
   - Use /newbot → choose a name and username
   - Save the bot token

2. 🔐 Set Up Credentials in n8n
   - **Telegram API**: use your bot token
   - **Tavily**: add your Tavily API key
   - **AI/ML API**: add your API key (Base URL: https://api.aimlapi.com/v1)

3. 🔧 Configure the Workflow
   - Open the n8n editor and import the JSON
   - Update credentials for Telegram, Tavily, and AIMLAPI

⚙️ Flow Summary

| Node | Function |
|------|----------|
| 📩 Receive Telegram Msg | Triggered when the user sends text |
| ⌨️ Typing Indicator | Shows "typing…" to the user |
| 🔎 Web Search | Queries Tavily with the user's message |
| 🧠 LLM Summarize | Summarizes the search JSON into a factual answer |
| 📤 Reply to Telegram | Sends the concise answer back to the same thread |

📁 Data Handling
- By default: no data is stored
- Optional: log queries & answers to Google Sheets or a database

💡 Example Prompt Flow

User sends: When is the next solar eclipse in Europe?

Bot replies: The next solar eclipse in Europe will occur on August 12, 2026. It will be visible as a total eclipse across Spain, with partial views in much of Europe. The maximum eclipse will occur around 17:46 UTC.
🔄 Customization
- Add commands: /help, /sources, /news
- Apply rate limits per user
- Extend logging to Google Sheets / DB
- Add NSFW / profanity filters before search

🧪 Testing
- Test end-to-end in Telegram (not just "Execute Node")
- Add a fallback reply if Tavily returns empty results
- Use sticky notes for debugging & best practices

📎 Resources
- 🔗 AI/ML API Docs
- 🔗 Tavily Search API
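The fallback suggested under Testing could look like the sketch below: build the summarizer's grounding context from Tavily's results, or short-circuit with a canned reply when the search comes back empty. The field names (`results`, `title`, `content`) follow Tavily's response shape, but treat this as an illustrative assumption rather than the workflow's exact expression.

```javascript
// Hypothetical fallback logic for empty Tavily results.
function buildSummarizerInput(tavilyResponse) {
  const results = (tavilyResponse && tavilyResponse.results) || [];
  if (results.length === 0) {
    return {
      fallback: true,
      reply: "Sorry, I couldn't find anything on that. Try rephrasing your question.",
    };
  }
  // Concatenate titles and snippets so the LLM summarizes only grounded content.
  const context = results.map(r => `${r.title}: ${r.content}`).join("\n");
  return { fallback: false, context };
}
```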
by Abhinav
Overview

This workflow automatically converts PDF files stored in Google Drive into structured, SEO-optimized blog articles in HTML format. It eliminates repetitive manual rewriting and formatting by transforming raw PDF content into publish-ready blog files.

What This Workflow Does
- Retrieves all PDFs from a specified Google Drive folder
- Processes each file sequentially using Split In Batches
- Downloads and extracts text from each PDF
- Generates a long-form SEO blog article using OpenAI
- Ensures the output is structured in HTML format
- Saves the final blog back to Google Drive as a .html file
- Automatically converts the original .pdf filename to .html

How It Works

The workflow begins with a manual trigger and fetches files from a configured Google Drive folder. Using Split In Batches, each PDF is processed one at a time. This prevents API rate limits, reduces memory load, and ensures stable execution.

The extracted text is passed to OpenAI with a structured prompt that defines:
- Blog length
- SEO formatting
- Heading structure (H1, H2, H3)
- Tone and writing style
- HTML output format

The generated content is then saved as an HTML file in a destination Google Drive folder.

Requirements
- Google Drive credentials
- OpenAI credentials
- A source folder containing PDFs
- A destination folder for generated HTML files

Output

For every PDF processed:
- A long-form SEO blog article is created
- Output is saved in HTML format
- The original filename is retained, converted from .pdf to .html

Customization

You can modify:
- The OpenAI prompt to change tone or niche
- Blog length and SEO requirements
- The destination folder
- The output format if integrating with a CMS

This template is suitable for content teams, product managers, and creators who want to automate repetitive content transformation workflows while maintaining consistent output quality.
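The filename step described above (keep the original name, swap .pdf for .html) could be expressed in a Code node roughly like this sketch; the function name is illustrative, not the workflow's actual expression.

```javascript
// Replace a trailing .pdf extension (case-insensitive) with .html,
// leaving non-PDF names untouched.
function pdfNameToHtml(filename) {
  return filename.replace(/\.pdf$/i, ".html");
}
```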
by Chris Rudy
Who's it for

Marketing teams, copywriters, and agencies who need to quickly generate and iterate on ad copy for Meta and TikTok campaigns. Perfect for brands that want AI-powered copy generation with human review and approval built into the workflow.

What it does

This workflow automates the ad copy creation process by:
- Collecting brand and product information through a form
- Using AI to generate tailored ad copy based on brand type (Fashion or Problem-Solution)
- Sending copy to Slack for team review and approval
- Handling revision requests and incorporating feedback
- Limiting revisions to 3 rounds to maintain efficiency

How to set up
1. Configure your OpenAI credentials in the OpenAI nodes
2. Set up the Slack integration and select your review channel in all Slack nodes
3. Customize the AI prompts in the OpenAI nodes to match your brand voice
4. Test the form to ensure file uploads and data collection work properly
5. Activate the workflow when ready

Requirements
- OpenAI API access (GPT-3.5 or GPT-4)
- Slack workspace with appropriate channel permissions
- Self-hosted n8n instance (for file upload functionality)

How to customize
- Adjust the AI prompts in the OpenAI nodes to match your specific industry or brand guidelines
- Modify the revision limit in the "Edit Fields: Revision Counter Max 3" node
- Add additional brand types in the form dropdown and corresponding AI nodes
- Customize Slack messages to match your team's communication style
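The revision cap behind the "Edit Fields: Revision Counter Max 3" node amounts to counter logic like the sketch below. This is a hypothetical reconstruction of the idea, not the node's actual configuration.

```javascript
// Increment the revision counter and decide whether another round is allowed.
function nextRevision(counter, maxRevisions = 3) {
  const next = counter + 1;
  return {
    counter: next,
    // Once the cap is reached, the workflow stops routing back to the AI node.
    allowed: next <= maxRevisions,
  };
}
```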
by Arkadiusz
📝 Description

This workflow automates the process of extracting text from receipt or document images using OCR.space and presenting the results in a clean, styled form. It's especially useful for receipt digitization, invoice parsing, table recognition, or quick OCR text extraction directly inside n8n, without third-party dashboards. The workflow is lightweight and self-contained; all you need is an OCR.space API key.

🔄 How it works
1. **Form Trigger – Upload File**: A simple form collects the image (max 1 MB) and asks whether the file contains a table.
2. **Normalize Inputs**: Converts the "Yes/No" response into a boolean flag isTable and keeps the uploaded file attached.
3. **OCR.space API Call**: Sends the uploaded image to the OCR.space API with the correct parameters:
   - language=pol (Polish by default, can be changed)
   - OCREngine=2
   - the isTable flag
4. **Display Results**: The parsed text (ParsedResults[0].ParsedText) is rendered in a styled card with monospace formatting for easier reading and copy-paste.

🎯 Use cases
- Receipt OCR for expense tracking
- Invoice or document text digitization
- Table parsing from scanned files
- Quick OCR text preview in n8n flows

⚙️ Requirements
- OCR.space API key (Header Authentication)
- n8n instance running ≥ v1.20

📌 Notes & Customization
- **Language**: change the language parameter (eng, deu, etc.) to match your input.
- **Validation**: add a file size check if you expect larger files.
- **Error handling**: add an Error Trigger if you anticipate API rate limits.
- **Table vs Text**: enabling isTable improves structured data parsing.

🔑 Keywords

OCR, receipt parsing, document OCR, invoice automation, text extraction, table recognition, AI OCR, OCR.space, workflow automation
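The Normalize Inputs step (mapping the form's "Yes/No" answer to the boolean isTable flag that the OCR.space call expects) could be sketched as below; the function and input names are illustrative assumptions.

```javascript
// Map a "Yes"/"No" form answer to the boolean isTable flag.
function normalizeInputs(formAnswer) {
  return {
    // Treat anything other than an explicit "yes" as false.
    isTable: String(formAnswer).trim().toLowerCase() === "yes",
  };
}
```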
by Annie To
Google Drive Folder Duplicator

This n8n workflow creates a complete recursive copy of any Google Drive folder, preserving the entire folder structure, all files, and sharing permissions.

What It Does
- Duplicates folders with unlimited nested subfolders
- Copies all files, maintaining original names and metadata
- Preserves sharing permissions (users, groups, domains)
- Creates an identical folder hierarchy in the target location

How It Works
1. **Initialize**: Sets the source/target folders and creates the main destination folder
2. **Recursive Processing**: Scans each folder level and splits files from subfolders
3. **File Handling**: Copies files and applies the original permissions
4. **Folder Handling**: Creates subfolders and applies the original permissions
5. **Self-Recursion**: Calls itself for each subfolder to process unlimited nested levels

Key Features
- **Unlimited Depth**: Handles any number of nested folder levels
- **Permission Preservation**: Maintains exact sharing settings

Requirements
- Google Drive OAuth2 credentials with drive access
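The self-recursion pattern described above can be illustrated on an in-memory tree instead of the Google Drive API (all names here are illustrative): each node holds its files and subfolders, and the copy calls itself once per subfolder, exactly as the workflow re-invokes itself per nested level.

```javascript
// Minimal sketch of the recursive copy on a mock folder tree.
// Each node is { name, files, folders }.
function copyFolder(source) {
  return {
    name: source.name,
    // Files are copied at the current level...
    files: source.files.map(f => ({ ...f })),
    // ...then the workflow recurses into each subfolder.
    folders: source.folders.map(copyFolder),
  };
}
```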
by Arthur Braghetto
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Export Workflows Between n8n Instances

Copy workflows between n8n instances, with optional credential export and automatic sub-workflow adjustments.

🧠 How it Works

This workflow copies a selected workflow from a SOURCE n8n server to a TARGET server and guides you through safety checks:
- **Name conflict check**: If a workflow with the same name exists on the target, the export is stopped.
- **Sub-workflows**: Detects calls to sub-workflows. If all sub-workflows exist on the target (same names), references are auto-updated and the export continues. If any are missing, the form shows what's missing and lets you cancel or proceed anyway.
- **Credentials**: Detects nodes using credentials and lets you export those credentials along with the workflow. The workflow can only apply credential corrections for the credentials that you choose to export with it. At the end, the form lists which credentials were successfully exported.

💡 For in-depth behavior and edge cases, see the Notes inside the workflow (Setup, How It Works, and Credential Issues).

🚀 How to Use
1. Run this workflow on your SOURCE server.
2. Follow the step-by-step form: pick the workflow to export, choose whether to include credentials, and review the sub-workflow checks.
3. Done.

⚙️ Setup
1. Create an n8n API key on both servers (SOURCE and TARGET).
2. On the SOURCE server, create two n8n API credentials in n8n: one for SOURCE and one for TARGET (using the respective base URL and key).
3. Configure the nodes in this workflow with these two credentials.

Detailed step-by-step instructions are available in the workflow notes.

✅ Once configured, you'll be ready to migrate workflows between servers in just a few clicks.
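The name-conflict check could be sketched as below: compare the selected workflow's name against the names already present on the TARGET server and halt the export on an exact match. This is a hypothetical reconstruction of the idea; the function and message text are assumptions.

```javascript
// Stop the export if a workflow with the same name exists on the target.
function checkNameConflict(workflowName, targetWorkflowNames) {
  const conflict = targetWorkflowNames.includes(workflowName);
  return {
    conflict,
    message: conflict
      ? `A workflow named "${workflowName}" already exists on the target.`
      : "No name conflict; export can continue.",
  };
}
```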