by Dahiana
This template demonstrates how to build an AI-powered name generator that creates realistic names for UX/UI designers, developers, and content creators.

**Use cases:** User persona creation, mockup development, prototype testing, customer testimonials, team member listings, app interface examples, website content, accessibility testing, and any scenario requiring realistic placeholder names.

**How it works**
- **AI-Powered Generation:** Uses any LLM to generate names based on your specifications
- **Customizable Parameters:** Accepts gender preferences, name count, and optional reference names for style matching
- **UX/UI Optimized:** Names are specifically chosen to work well in design mockups and prototypes
- **Smart Formatting:** Returns clean JSON arrays ready for integration with design tools and applications
- **Reference Matching:** Can generate names similar in style to a provided reference name

**How to set up**
1. Replace the "Dummy API" credentials with your preferred language model API key
2. Update the webhook path and authentication as needed for your application
3. Test with different parameters: gender (masculine/feminine/neutral), count (1-20), reference_name (optional)
4. Integrate the webhook URL with your design tools, Bubble apps, or other platforms

**Requirements**
- LLM API access (OpenAI, Claude, or other language model)
- n8n instance (cloud or self-hosted)
- Platform capable of making HTTP POST requests

**API Usage**
POST to the webhook with a JSON body (reference_name is optional):

    { "gender": "masculine", "count": 5, "reference_name": "Alex Chen" }

Response:

    { "success": true, "names": ["Marcus Johnson", "David Kim", "Sofia Rodriguez", "Chen Wei", "James Wilson"], "count": 5 }

**How to customize**
- Modify the AI prompt for specific naming styles or regions
- Add additional parameters (age, profession, cultural background)
- Connect to databases for persistent name storage
- Integrate with design tool APIs (Figma, Sketch, Adobe XD)
- Create batch processing for large mockup projects
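Calling the webhook from any platform is a plain HTTP POST. A minimal Python sketch, assuming a placeholder webhook URL you would replace with your own n8n endpoint:

```python
import json
import urllib.request

# Hypothetical endpoint -- replace with your actual n8n webhook URL.
WEBHOOK_URL = "https://your-n8n-instance/webhook/name-generator"

def build_request(gender, count, reference_name=None):
    """Build the JSON body the webhook expects; reference_name is optional."""
    body = {"gender": gender, "count": count}
    if reference_name:
        body["reference_name"] = reference_name
    return json.dumps(body).encode("utf-8")

def generate_names(gender="neutral", count=5, reference_name=None):
    """POST to the webhook and return the parsed JSON response."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=build_request(gender, count, reference_name),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

A successful call returns the `success`/`names`/`count` object shown above, ready to feed into a mockup or design tool.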
by Javier Rieiro
**Short description**
Automates the collection, technical extraction, and automatic generation of Nuclei templates from public CVE PoCs. Converts verified PoCs into reproducible detection templates ready for testing and distribution.

**Purpose**
- Provide a reliable pipeline that turns public proof-of-concept data into usable detection artifacts.
- Reduce the manual work involved in finding PoCs, extracting exploit details, validating sources, and building Nuclei templates.

**How it works (technical summary)**
1. Runs a scheduled SSH job that executes vulnx with filters for recent, high-severity PoCs.
2. Parses the raw vulnx output and splits it into individual CVE entries.
3. Extracts structured fields: CVE ID, severity, title, summary, risk, remediation, affected products, PoCs, and references.
4. Extracts URLs from PoC sections using regex, then validates each URL with HTTP requests. Invalid or unreachable links are logged and skipped.
5. Uses an AI agent (OpenAI via LangChain) to extract technical artifacts: exploit steps, payloads, endpoints, raw HTTP requests/responses, parameters, and reproduction notes. The prompt forces technical-only output.
6. Sends the extracted technical content to the ProjectDiscovery Cloud API to generate Nuclei templates.
7. Validates AI and API responses. Accepted templates are saved to a configured Google Drive folder.
8. Produces JSON records and logs for each processed CVE and URL.

**Output**
- Nuclei templates in ProjectDiscovery format (YAML) stored in Google Drive.
- Structured JSON per CVE with metadata and extracted technical details.
- Validation logs for URL checks, AI extraction, and template generation.

**Intended audience**
- Bug bounty hunters.
- Security researchers and threat intel teams.
- Automation engineers who need reproducible detection templates.

**Setup & requirements**
- n8n instance with the workflow imported.
- SSH access to a host with vulnx installed.
- OpenAI API key for technical extraction.
- ProjectDiscovery API key for template generation.
- Google Drive OAuth2 credentials for storing templates.
- Configure the schedule trigger and target Google Drive folder ID.

**Security and usage notes**
- Performs static extraction and validation only. No active exploitation.
- Processes only PoCs that meet configured filters (e.g., CVSS > 6).
- Use responsibly. Do not target systems you do not own or have explicit permission to test.
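The regex-based URL extraction in step 4 can be sketched as follows. The pattern is an illustrative equivalent, not the workflow's actual expression; in the pipeline each returned URL would then be validated with an HTTP request before further processing:

```python
import re

# Broad pattern: grab http(s) URLs up to whitespace or common delimiters.
URL_RE = re.compile(r"https?://[^\s\"'<>\)\]]+")

def extract_poc_urls(poc_section: str) -> list[str]:
    """Pull candidate PoC URLs out of a raw vulnx text block, de-duplicated in order."""
    seen, urls = set(), []
    for url in URL_RE.findall(poc_section):
        url = url.rstrip(".,;")  # strip trailing sentence punctuation
        if url not in seen:
            seen.add(url)
            urls.append(url)
    return urls
```

Unreachable links from this list would be logged and skipped, matching the workflow's validation step.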
by Takuya Ojima
**Who’s it for**
Remote and distributed teams that schedule across time zones and want to avoid meetings landing on public holidays—PMs, CS/AM teams, and ops leads who own cross-regional calendars.

**What it does / How it works**
The workflow checks next week’s Google Calendar events, compares event dates against public holidays for the selected country codes, and produces a single Slack digest with any conflicts plus suggested alternative dates.

Core steps: Workflow Configuration (Set) → Fetch Public Holidays (via a public holiday API such as Calendarific/Nager.Date) → Get Next Week Calendar Events (Google Calendar) → Detect Holiday Conflicts (compare dates) → Generate Reschedule Suggestions (find the nearest business day that isn’t a holiday/weekend) → Format Slack Digest → Post Slack Digest.

**How to set up**
1. Open Workflow Configuration (Set) and edit: countryCodes, calendarId, slackChannel, nextWeekStart, nextWeekEnd.
2. Connect your own Google Calendar and Slack credentials in n8n (no hardcoded keys).
3. (Optional) Adjust the Trigger to run daily or only on Mondays.

**Requirements**
- n8n (Cloud or self-hosted)
- Google Calendar read access to the target calendar
- Slack app with permission to post to the chosen channel
- A public-holiday API (no secrets needed for Nager.Date; Calendarific requires an API key)

**How to customize the workflow**
- Time window: Change nextWeekStart/End to scan a different period.
- Holiday sources: Add or swap APIs; merge multiple regions.
- Suggestion logic: Tweak the look-ahead window or rules (e.g., skip Fridays).
- Output: Post per-calendar messages, DM owners, or create tentative reschedule events automatically.
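The reschedule-suggestion step (nearest business day that isn't a holiday or weekend) reduces to a small date walk. A sketch with a configurable look-ahead window; the 14-day default is an assumption, not the template's actual setting:

```python
from datetime import date, timedelta

def suggest_reschedule(event_day: date, holidays: set[date], look_ahead: int = 14):
    """Return the nearest following day that is neither a weekend nor a holiday."""
    for offset in range(1, look_ahead + 1):
        candidate = event_day + timedelta(days=offset)
        # weekday() < 5 means Monday..Friday
        if candidate.weekday() < 5 and candidate not in holidays:
            return candidate
    return None  # no business day found inside the window
```

The "skip Fridays" customization mentioned above would add `candidate.weekday() != 4` to the condition.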
by vanhon
**Split Test AI Prompts Using Supabase & Langchain Agent**

This workflow allows you to A/B test different prompts for an AI chatbot powered by Langchain and OpenAI. It uses Supabase to persist session state and randomly assigns users to either a baseline or alternative prompt, ensuring consistent prompt usage across the conversation.

**🧠 Use Case**
Prompt optimization is crucial for maximizing the performance of AI assistants. This workflow helps you run controlled experiments on different prompt versions, giving you a reliable way to compare performance over time.

**⚙️ How It Works**
1. When a message is received, the system checks whether the session already exists in the Supabase table.
2. If not, it randomly assigns the session to either the baseline or alternative prompt.
3. The selected prompt is passed into a Langchain Agent using the OpenAI Chat Model.
4. Postgres is used as chat memory for multi-turn conversation support.

**🧪 Features**
- Randomized A/B split test per session
- Supabase database for session persistence
- Langchain Agent + OpenAI GPT-4o integration
- PostgreSQL memory for maintaining chat context
- Fully documented with sticky notes

**🛠️ Setup Instructions**
1. Create a Supabase table named split_test_sessions with the following columns: session_id (text), show_alternative (boolean).
2. Add credentials for Supabase, OpenAI, and PostgreSQL (for chat memory).
3. Modify the "Define Path Values" node to set your baseline and alternative prompts.
4. Activate the workflow.
5. Send messages to test both prompt paths in action.

**🔄 Next Steps**
- Add tracking for conversions or feedback scores to compare outcomes.
- Modify the prompt content or model settings (e.g. temperature, model version).
- Expand to multi-variant tests beyond A/B.

**📚 Learn More**
How This Workflow Uses Supabase + OpenAI for Prompt Testing
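The sticky per-session assignment described in How It Works can be sketched like this. In the workflow, the split_test_sessions Supabase table plays the role of the dict below; this is a stand-in sketch, not the template's actual node code:

```python
import random

# Stand-in for the Supabase table: one entry per session_id,
# storing the show_alternative boolean.
sessions: dict[str, bool] = {}

def get_prompt_variant(session_id: str, rng=random) -> bool:
    """Return True for the alternative prompt, False for baseline.
    The first message in a session draws a 50/50 coin flip; every
    later message reuses the stored result, so the conversation
    never switches prompts mid-stream."""
    if session_id not in sessions:
        sessions[session_id] = rng.random() < 0.5
    return sessions[session_id]
```

Comparing outcomes then becomes a matter of joining conversation metrics against the stored show_alternative flag.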
by SpaGreen Creative
**WooCommerce New Category Alert via WhatsApp Using Rapiwa API**

This n8n automation listens for the creation of a new WooCommerce product category, fetches all WooCommerce customers, cleans and formats their phone numbers, verifies them using the Rapiwa WhatsApp validation API, sends a WhatsApp message with the new category info to verified numbers, and logs each interaction to a Google Sheet (separately for verified and unverified customers).

**Who this is for**
You have a WooCommerce store and want to:
- Send a promotional message when a new product category is added
- Verify customer WhatsApp numbers in bulk
- Keep a clear log in Google Sheets of which numbers are verified or not

**What it does (high level)**
- A webhook is triggered when a new WooCommerce category is created.
- Fetches all WooCommerce customers via the API.
- Limits processing to the first 10 customers (for performance/testing).
- Cleans phone numbers (removes +, spaces, and non-digits).
- Verifies each number via the Rapiwa WhatsApp Verify API.
- If verified: sends a WhatsApp message with the new category info; logs as Verification = verified, Status = sent.
- If not verified: logs as Verification = unverified, Status = not sent.
- Processes users in batches with delays to avoid rate limiting.

**How it works (step-by-step)**
1. **Trigger**: A Webhook node is triggered by WooCommerce category creation.
2. **Format Data**: Category details (name, slug, description) are parsed.
3. **Get Customers**: Fetch all WooCommerce customers using the WooCommerce API.
4. **Limit**: Only the first 10 are processed.
5. **Loop & Clean**: Loop over each customer, clean phone numbers, and extract info.
6. **Verify Number**: Send an HTTP POST to https://app.rapiwa.com/api/verify-whatsapp.
7. **Decision**: Use an If node to check whether exists == true.
8. **Send Message**: If verified, send a WhatsApp message with the category details.
9. **Append to Sheet**: Log verified and unverified customers separately in Google Sheets.
10. **Wait + Batch Control**: Use Wait and SplitInBatches nodes to control flow and prevent throttling.

Example verify body (HTTP Request node):

    { "number": "{{ $json['WhatsApp No'] }}" }

**Customization ideas**
- Send images, videos, or template messages if supported by Rapiwa.
- Personalize messages using the customer's name or category data.
- Increase the delay or reduce the batch size to minimize the risk of rate limits.
- Add a second sheet to log full API responses for debugging and auditing.

**Best practices**
- Test on small batches before scaling.
- Only send messages to users who opted in.
- Store API credentials securely using n8n's credentials manager.
- Ensure your Google Sheet column headers match exactly what the workflow expects.

**Useful links**
- Dashboard: https://app.rapiwa.com
- Official website: https://rapiwa.com
- Documentation: https://docs.rapiwa.com

**Support**
- WhatsApp Support: Chat Now
- Discord: Join SpaGreen Community
- Facebook Group: SpaGreen Support
- Website: https://spagreen.net
- Developer Portfolio: Codecanyon SpaGreen
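The phone-number cleaning step (remove +, spaces, and all other non-digits) can be sketched in one line; this mirrors what the workflow's Clean step does before numbers reach the verify endpoint:

```python
import re

def clean_number(raw: str) -> str:
    """Strip '+', spaces, dashes, parentheses -- everything but digits."""
    return re.sub(r"\D", "", raw)
```

The cleaned value is what gets substituted into the `number` field of the verify body shown above.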
by Mr Shifu
**AI Network Diagram Prompt Generator**

**Template Description**
This workflow automates the creation of network diagram prompts using AI. It retrieves Layer-2 topology data from AWX, parses device relationships, and generates a clean, structured prompt ready for Lucidchart’s AI diagram generator.

**How It Works**
The workflow triggers an AWX Job Template that runs commands such as show cdp neighbors detail. After the job completes, n8n fetches the stdout, extracts neighbor relationships through a JavaScript parser, and sends the structured data to an LLM (Gemini). The LLM transforms the topology into a formatted prompt you can paste directly into Lucidchart to instantly generate a visual network diagram.

**Setup Steps**
1. Configure AWX: Ensure your Job Template runs the required network commands and produces stdout. Obtain your AWX base URL, credentials, and Job Template ID.
2. Add credentials in n8n: Create AWX API credentials and add Google Gemini credentials for the LLM node.
3. Update workflow nodes: Insert your AWX URL and Job Template ID in the “Launch Job” node, and verify the endpoints in the “Job Status” and “Job Stdout” nodes.
4. Run the workflow: After execution, copy the generated Lucidchart prompt and paste it into Lucidchart’s AI to produce the network diagram.
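The neighbor-relationship extraction from `show cdp neighbors detail` stdout can be sketched as below. The field names, block separator, and output shape are assumptions based on typical Cisco IOS output, not the template's actual JavaScript parser:

```python
import re

def parse_cdp_detail(stdout: str, local_device: str = "local") -> list[dict]:
    """Extract one edge per neighbor block from 'show cdp neighbors detail' output."""
    edges = []
    # IOS separates neighbor entries with a dashed line.
    for block in stdout.split("-------------------------"):
        device = re.search(r"Device ID:\s*(\S+)", block)
        iface = re.search(
            r"Interface:\s*([^,]+),\s*Port ID \(outgoing port\):\s*(\S+)", block
        )
        if device and iface:
            edges.append({
                "local_device": local_device,
                "local_interface": iface.group(1).strip(),
                "neighbor": device.group(1),
                "neighbor_interface": iface.group(2),
            })
    return edges
```

The resulting edge list is the kind of structured data the workflow hands to Gemini to turn into a Lucidchart prompt.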
by M Ayoub
**Who is this for?**
Crypto traders, researchers, and investors who want to identify trending market narratives and sector rotations before they become mainstream news.

**What it does**
Automatically detects which crypto sectors are gaining momentum by analyzing top gainers, groups tokens by narrative (AI, DeFi, Meme, Gaming, RWA, etc.), uses Gemini AI to research why each sector is pumping, and delivers a comprehensive digest to Discord.

✅ Identifies emerging narratives: automatically detects sector-wide pumps and researches the catalysts driving them.

**How it works**
1. Triggers on a schedule (configurable; default: hourly).
2. Fetches the top 200 gainers sorted by 24h performance from CoinMarketCap.
3. Filters tokens with strict criteria: >40% gain, >$10M market cap, >$1M volume.
4. Groups tokens into sectors using CoinMarketCap tags (AI, DeFi, Meme, Gaming, Layer1, Layer2, DePIN, RWA, Infrastructure).
5. Creates research prompts for sectors with 2+ pumping tokens.
6. Gemini AI analyzes each sector for catalysts, news, and sustainability.
7. Generates a formatted narrative digest with token performance and AI insights.
8. Splits long reports and sends them to Discord (handles the 2000-character limit).

**Set up steps**
1. Get a CoinMarketCap API key from CoinMarketCap (free tier: 10K credits/month).
2. Get a Google Gemini API key from Google AI Studio.
3. Create a Discord webhook in your server (Server Settings → Integrations → Webhooks).
4. Connect the CMC API key as a Header Auth credential (Header Name: X-CMC_PRO_API_KEY) to the Fetch Top 200 Gainers from CMC node.
5. Connect Gemini credentials to the Gemini Sector Research node.
6. Connect the Discord webhook to the Send to Discord node.
7. Optionally adjust the filter thresholds in the Filter Top Gainers node (MIN_PERCENT_CHANGE, MIN_MARKET_CAP, MIN_VOLUME).

Setup time: ~10 minutes
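The filter-and-group stage (steps 3 through 5) can be sketched as follows. The threshold names match the workflow's configurable constants; the tag strings are illustrative, not CoinMarketCap's exact tag slugs:

```python
MIN_PERCENT_CHANGE = 40.0        # >40% gain in 24h
MIN_MARKET_CAP = 10_000_000      # >$10M market cap
MIN_VOLUME = 1_000_000           # >$1M 24h volume

# Hypothetical tag-to-sector map; real CMC tag slugs may differ.
SECTOR_TAGS = {"ai-big-data": "AI", "defi": "DeFi", "memes": "Meme",
               "gaming": "Gaming", "real-world-assets": "RWA"}

def group_pumping_sectors(tokens: list[dict]) -> dict[str, list[dict]]:
    """Apply the strict filters, bucket tokens into sectors by tag,
    then keep only sectors with 2+ pumping tokens (worth researching)."""
    sectors: dict[str, list[dict]] = {}
    for t in tokens:
        if (t["percent_change_24h"] > MIN_PERCENT_CHANGE
                and t["market_cap"] > MIN_MARKET_CAP
                and t["volume_24h"] > MIN_VOLUME):
            for tag in t.get("tags", []):
                if tag in SECTOR_TAGS:
                    sectors.setdefault(SECTOR_TAGS[tag], []).append(t)
    return {s: toks for s, toks in sectors.items() if len(toks) >= 2}
```

Each surviving sector would then get its own Gemini research prompt listing the member tokens and their 24h moves.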
by Rahul Joshi
**📊 Description**
Streamline AI-focused SEO research by automatically analyzing URLs stored in Google Sheets, extracting semantic signals from each webpage, and generating high-quality topic clusters for AI discovery. 🤖🔍

This automation fetches URLs weekly, scrapes headings (H1–H6), extracts entities, keywords, topics, and summaries using GPT-4o-mini, and classifies each page into clusters and subclusters optimized for LLM search visibility. It also generates internal linking suggestions for better topical authority and writes all results back into Google Sheets. Perfect for content strategists, SEO teams, and AI-search optimization workflows. 📈🧩

**🔁 What This Template Does**
1️⃣ Triggers weekly to process URLs stored in Google Sheets. 📅
2️⃣ Fetches all URL records from the configured sheet. 📥
3️⃣ Processes URLs in batches to avoid API overload. 🔁
4️⃣ Extracts webpage HTML and pulls semantic headings (H1–H6). 📰
5️⃣ Sends headings + URL context to GPT-4o-mini for structured extraction of title, entities, keywords, topics, and summary.
6️⃣ Generates high-level cluster + subcluster labels for each page. 🧠
7️⃣ Recommends 3–5 internal linking URLs to strengthen topical authority. 🔗
8️⃣ Updates Google Sheets with all extracted fields + status flags. 📊
9️⃣ Repeats the process until all URLs are analyzed. 🔄

**⭐ Key Benefits**
✅ Automates topical clustering for AI search optimization
✅ Extracts entities, keywords, and topics with high semantic accuracy
✅ Strengthens internal linking strategies using AI suggestions
✅ Eliminates manual scraping and analysis work
✅ Enables scalable content audits for large URL datasets
✅ Enhances visibility in AI-driven search systems and answer engines

**🧩 Features**
- Google Sheets integration for input + output
- HTML parsing for H1–H6 extraction
- GPT-4o-mini structured JSON extraction
- Topic clustering engine (cluster & subcluster classification)
- Internal linking recommendation generator
- Batch processing for large URL datasets
- Status-based updating in Google Sheets

**🔐 Requirements**
- Google Sheets OAuth2 credentials
- OpenAI API key (GPT-4o-mini)
- Publicly accessible URLs (or authenticated HTML if applicable)
- n8n with LangChain nodes enabled

**🎯 Target Audience**
- SEO teams performing semantic clustering at scale
- Content strategists creating AI-ready topic maps
- Agencies optimizing large client URL collections
- AI-search consultants building structured content libraries
- Technical marketers needing automated content analysis
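Pulling H1–H6 headings out of fetched HTML (step 4 above) can be done with Python's standard-library parser. A sketch of the kind of extraction the workflow performs before handing headings to GPT-4o-mini:

```python
from html.parser import HTMLParser

HEADING_TAGS = {"h1", "h2", "h3", "h4", "h5", "h6"}

class HeadingExtractor(HTMLParser):
    """Collect the text of every h1-h6 element, in document order."""
    def __init__(self):
        super().__init__()
        self.headings = []      # list of (tag, text) tuples
        self._current = None    # tag currently being read, or None
        self._buffer = []

    def handle_starttag(self, tag, attrs):
        if tag in HEADING_TAGS:
            self._current, self._buffer = tag, []

    def handle_data(self, data):
        if self._current:
            self._buffer.append(data)

    def handle_endtag(self, tag):
        if tag == self._current:
            self.headings.append((tag, "".join(self._buffer).strip()))
            self._current = None

def extract_headings(html: str):
    parser = HeadingExtractor()
    parser.feed(html)
    return parser.headings
```

Inline markup inside a heading (links, emphasis) is flattened to plain text, which is what a semantic-extraction prompt wants.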
by Masaki Go
Automatically publish your Note.com articles to WordPress with intelligent category and tag assignment powered by OpenAI.

**Who is this for**
Content creators and bloggers who publish on Note.com and want to maintain a synced WordPress blog without manual copy-pasting.

**What this workflow does**
1. Monitors your Note.com RSS feed for new articles (hourly).
2. Fetches the full article content via Note.com's API.
3. Uses OpenAI to analyze the content and assign the best category and tags.
4. Downloads all images (including the featured image) from Note.com.
5. Uploads the images to your WordPress media library.
6. Replaces all image URLs in the article content.
7. Publishes the post with the correct featured image.

**Setup**
1. Add your OpenAI API credentials.
2. Add your WordPress credentials (using an Application Password).
3. Update the RSS feed URL to your Note.com profile.
4. Customize the AI prompt with your WordPress category and tag IDs.

**Requirements**
- Note.com account with published articles
- Self-hosted WordPress site with REST API enabled
- OpenAI API key
- WordPress Application Password

**Customization**
Edit the system prompt in the AI node to match your WordPress taxonomy. You can also change the polling frequency and post status (draft/publish).
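Step 6 (replacing image URLs in the article body) amounts to swapping each original asset URL for its uploaded WordPress media-library counterpart. A sketch; the domain names below are illustrative placeholders, not guaranteed Note.com or WordPress paths:

```python
def swap_image_urls(content: str, url_map: dict[str, str]) -> str:
    """Replace each original image URL with its WordPress media-library URL.
    url_map is built while uploading: original URL -> uploaded URL."""
    for original_url, wp_url in url_map.items():
        content = content.replace(original_url, wp_url)
    return content
```

After the swap, the post body references only media hosted on your own WordPress site, so nothing breaks if the Note.com assets move.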
by Davide
🤹🤖 This workflow (AI Document Generator with Anthropic Agent Skills and Uploading to Google Drive) automates the process of generating, downloading, and storing professionally formatted files (PDF, DOCX, PPTX, XLSX) using the Anthropic Claude API and Google Drive. It connects user prompts with the Anthropic API to generate professional documents in multiple formats, then automatically retrieves and uploads them to Google Drive, providing a complete AI-powered document automation system.

**Key Advantages**
- **✅ Full Automation**: From user input to file delivery, the entire pipeline (creation, extraction, download, and upload) runs without manual intervention.
- **✅ Multi-Format Support**: Handles four major business document types: PPTX (presentations), PDF (reports), DOCX (documents), and XLSX (spreadsheets).
- **✅ Professional Output**: Each format includes tailored Claude system prompts with detailed formatting and design principles (layout structure, typography, visual hierarchy, consistency, and readability), ensuring every file produced follows professional standards.
- **✅ Easy Customization**: You can modify the prompt templates or add new Skills using the "Get All Skills" node. The form and switch logic make it simple to extend with additional file types or workflows.
- **✅ Seamless Cloud Integration**: Generated files are automatically uploaded to a Google Drive folder, enabling centralized storage, easy sharing and access, and automatic organization.
- **✅ Reusable and Scalable**: This workflow can serve as a foundation for automated report generation, client deliverables, internal documentation systems, and AI-driven content creation pipelines.

**How it Works**
This n8n workflow enables users to create professional documents using Anthropic's Claude AI and automatically save them to Google Drive:
1. **Form Trigger**: The workflow starts with a web form where users submit a prompt and select their desired file type (PPTX, PDF, DOCX, or XLSX).
2. **Document Type Routing**: A Switch node routes the request to the appropriate document creation node based on the selected file type.
3. **AI Document Generation**: Each document type has a dedicated HTTP Request node that calls Anthropic's Messages API with a system prompt tailored for that document type (PowerPoint, PDF, Word, or Excel), the user's input prompt, the appropriate Anthropic skill (pptx, pdf, docx, xlsx) for specialized document creation, and code execution capabilities for complex formatting.
4. **File ID Extraction**: Custom JavaScript Code nodes extract the generated file ID from Anthropic's response using a recursive search to handle nested response structures.
5. **File Download**: HTTP Request nodes download the actual file content from Anthropic's Files API using the extracted file ID.
6. **Cloud Storage**: Finally, the downloaded files are automatically uploaded to a specified Google Drive folder, organized and ready for use.

**Set Up Steps**
1. **API Configuration**: Set up HTTP Header authentication with the Anthropic API, add an x-api-key header with your Anthropic API key, and configure the required headers: anthropic-version and anthropic-beta.
2. **Google Drive Integration**: Connect Google Drive OAuth2 credentials, specify the target folder ID where documents will be uploaded, and ensure proper permissions for file upload operations.
3. **Custom Skills (Optional)**: Use the "Get All Skills" node to retrieve available custom skills, update the skill_id fields in the JSON bodies if using custom Anthropic skills, and modify the form dropdown to include custom skill options if needed.
4. **Form Configuration**: The form is pre-configured with a prompt field and file type selection; no additional setup is required for basic functionality.
5. **Execution**: Activate the workflow, open the form trigger URL, and submit prompts with the desired output format. Generated files will automatically appear in the specified Google Drive folder.

The workflow handles the entire process from AI-powered document creation to cloud storage, providing a seamless automated solution for professional document generation.

👉 Subscribe to my new YouTube channel. Here I’ll share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support or add me on LinkedIn.
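The recursive file-ID extraction described in step 4 of How it Works can be sketched like this. The `file_id` key name and the sample response shape are assumptions for illustration, not Anthropic's documented schema:

```python
def find_file_id(obj, key="file_id"):
    """Depth-first search for a key anywhere in a nested dict/list response."""
    if isinstance(obj, dict):
        if key in obj:
            return obj[key]
        for value in obj.values():
            found = find_file_id(value, key)
            if found is not None:
                return found
    elif isinstance(obj, list):
        for item in obj:
            found = find_file_id(item, key)
            if found is not None:
                return found
    return None  # key not present anywhere in the structure
```

Because the search is purely structural, it keeps working even if the nesting around the generated file changes between API versions.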
by Eddy Medina
**What does this workflow do?**
This workflow exports the names of all Dialogflow intents from your agent, together with their priority levels, directly into a Google Sheets spreadsheet. It is triggered via Telegram and includes visual indicators (emojis) for priority levels.

**📜 Overview**
- **🔔 Activation**: Triggered when a validated user sends the keyword (e.g. "backup") via Telegram.
- **📥 Data Retrieval**: Fetches all intents of the specified Dialogflow agent using the Dialogflow API.
- **⚙️ Processing**: Transforms each intent into an n8n-compatible item, extracts the displayName and priority of each intent, and assigns an emoji and descriptive label based on priority tier: 🔴 Highest, 🟠 High, 🔵 Normal, 🟢 Low, 🚫 Ignore.
- **📑 Storage**: Appends each intent (name, priority number, emoji, and description), along with the current date and time, to a Google Sheets document.
- **📩 Notification**: Sends a single confirmation message to the Telegram user once insertion is complete (using Execute Once).

**🛠️ How to install and configure**
1. Import the workflow: Upload the .json into your n8n instance.
2. Connect Telegram: Add your Telegram bot credentials and configure the node Validación de usuario por ID with your Telegram ID.
3. Configure Dialogflow: Authenticate using a Google Service Account API credential. Then, in the Obtiene datos de los intents node, replace the example project ID (TU_PROJECT_ID) with your actual Dialogflow agent's project ID.
4. Connect Google Sheets: Authorize Google Sheets via OAuth2 and select your destination document/sheet in the node Añadir fila en la hoja.
5. Customize the trigger keyword: Adjust the command text (default "backup") if needed.
6. Activate the workflow: Ensure the webhook is correctly set up in Telegram before enabling the workflow.

**👥 Who is this for?**
- 🤖 Bot administrators who need quick backups of Dialogflow intent names.
- 🌐 Teams managing multilingual or multi-intent agents who want priority oversight.
- 💻 Development teams needing an automated way to audit or version intent configurations regularly.

**💡 Use Cases**
- ⚙️ Back up intents periodically to monitor changes over time.
- 📊 Visualize priority assignment in a spreadsheet for analysis or team discussion.
- 📖 Document conversational structure for onboarding or knowledge transfer.
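The priority-to-emoji mapping in the Processing step can be sketched as a threshold table. The numeric values below follow the Dialogflow ES console defaults (Highest = 1,000,000 down to Ignore = -1) as an assumption; adjust them if your agent uses custom priorities:

```python
# (threshold, emoji, label) checked from highest to lowest.
PRIORITY_LABELS = [
    (1_000_000, "🔴", "Highest"),
    (750_000, "🟠", "High"),
    (500_000, "🔵", "Normal"),
    (250_000, "🟢", "Low"),
]

def label_priority(priority: int):
    """Map a Dialogflow intent priority number to an (emoji, label) pair."""
    if priority < 0:
        return "🚫", "Ignore"   # negative priority means the intent is ignored
    for threshold, emoji, label in PRIORITY_LABELS:
        if priority >= threshold:
            return emoji, label
    return "🟢", "Low"          # anything positive but below the lowest tier
```

Each exported row then carries the raw number plus the emoji and label, which is what makes the spreadsheet scannable at a glance.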
by Fahmi Fahreza
**Analyze Trustpilot & Sitejabber sentiment with Decodo + Gemini to Sheets**

Sign up for Decodo HERE for a discount.

This template scrapes public reviews from Trustpilot and Sitejabber with a Decodo tool, converts the findings into flat, spreadsheet-ready JSON, generates a concise sentiment summary with Gemini, and appends everything to Google Sheets. It’s ideal for reputation snapshots, competitive analysis, or lightweight BI pipelines that need structured data and a quick narrative.

**Who’s it for?**
Marketing teams, growth analysts, founders, and agencies who need repeatable review collection and sentiment summaries without writing custom scrapers or manual copy/paste.

**How it works**
1. A Form Trigger collects the business name or URL.
2. Set (Config Variables) stores business_name, spreadsheet_id, and sheet_id.
3. The Agent orchestrates the Decodo tool and enforces a strict JSON schema with at most 10 reviews per source.
4. Gemini writes a succinct summary and recommendations, noting missing sources with: “There’s no data in this website.”
5. A Merge node combines the JSON fields with the narrative.
6. Google Sheets appends a row.

**How to set up**
1. Add Google Sheets, Gemini, and Decodo credentials in the Credential Manager.
2. Replace (YOUR_SPREADSHEET_ID) and (YOUR_SHEET_ID) in Set: Config Variables.
3. In Google Sheets, select Define below and map each column explicitly.
4. Keep the parser and agent connections intact to guarantee flat JSON.
5. Activate, open the form URL, submit a business, and verify the appended row.