by nero
## How it works

This template uses the n8n AI Agent node as an orchestrating agent that decides which tool (knowledge graph) to use based on the user's prompt.

## How to use

1. Create an account and apply for an API key at https://ai.nero.com/ai-api?utm_source=n8n-base-workflow.
2. Fill your key into the **Create task** and **Query task status** nodes.
3. Select an AI service and modify the **Create task** node parameters (API docs: https://ai.nero.com/ai-api/docs).
4. Execute the workflow so that the webhook starts listening.
5. Make a test request with Postman or a similar tool, using the test URL from the **Webhook** node (a scripted alternative is sketched below).
6. You will receive the output in the webhook response.

## Our API doc

Please create an account to access our API docs: https://ai.nero.com/ai-api/docs.

## Use cases

- **Large Scale Printing**: Upscale images into ultra-sharp, billboard-ready masterpieces with 300+ DPI and billions of pixels.
- **Game Assets Compression**: Improve your game performance with AI image compression: faster, better, and lossless.
- **E-commerce Image Editing**: Remove and replace your product image backgrounds; create virtual showrooms.
- **Photo Retouching**: Remove and reduce grain and noise in images.
- **Face Animation**: Transform static images into dynamic facial-expression videos or GIFs with our cutting-edge Face Animation API.
- **Photo Restoration**: Our AI-driven Photo Restoration API offers advanced scratch removal, face enhancement, and image upscaling.
- **Colorize Photo**: Transform black-and-white images into vivid color.
- **Avatar Generator**: Turn your selfie into custom avatars with different styles and backgrounds.
- **Website Compression**: Speed up your website; compress your images in bulk.
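If you prefer scripting step 5 over Postman, a minimal sketch like the following works. The URL and payload fields are placeholders, since valid task parameters depend on the Nero AI service you select (consult the API docs above):

```python
# Minimal sketch: trigger the workflow's test webhook from Python instead of Postman.
# The URL and JSON fields below are placeholders -- copy the real test URL from the
# Webhook node, and check https://ai.nero.com/ai-api/docs for valid task parameters.
import requests

test_url = "https://your-n8n-instance/webhook-test/your-webhook-path"  # from the Webhook node
payload = {"prompt": "Upscale this image", "image_url": "https://example.com/photo.jpg"}

response = requests.post(test_url, json=payload, timeout=120)
response.raise_for_status()
print(response.text)  # the workflow's output is returned in the webhook response
```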
by David Olusola
This workflow analyzes images submitted via a form using OpenAI Vision, then delivers the analysis result directly to your Telegram chat.

✅ **Use case examples:**
- Users submit screenshots for instant AI interpretation
- Automated document or receipt analysis with Telegram delivery
- Quick OCR or image classification workflows

⚙️ **Setup Guide**

1. **Form Submission Trigger**
   - Connect your form app (e.g. Typeform, Tally, or n8n's own webhook form) to the **On form submission** trigger node.
   - Ensure it sends the image file or URL as input.
2. **OpenAI Vision Analysis**
   - In the OpenAI node, select the **Analyze Image** operation.
   - Provide your OpenAI API key and configure the prompt to instruct the model on what to analyze (e.g. "Describe this receipt in detail").
3. **Set Telegram Chat ID**
   - Use this manual node to input your Telegram chat ID for delivery.
   - Alternatively, automate this with a database lookup or user session if building for multiple users.
4. **Telegram Delivery Node**
   - Connect your Telegram bot to n8n using your bot token.
   - Set up the **sendMessage** operation, using the analysis result from the previous node as the message text.
5. **Testing**
   - Click **Execute workflow**.
   - Submit an image via your form and confirm it arrives in your Telegram chat as expected (a credential sanity check is sketched below).
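Before wiring up the Telegram delivery node, you can sanity-check your bot token and chat ID with a direct call to the Bot API's sendMessage method; the token and chat ID below are placeholders:

```python
# Quick standalone check of the Telegram credentials used by the delivery node.
# BOT_TOKEN and CHAT_ID are placeholders -- use the values from BotFather / @userinfobot.
import requests

BOT_TOKEN = "123456:ABC-your-bot-token"
CHAT_ID = "123456789"

resp = requests.post(
    f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
    json={"chat_id": CHAT_ID, "text": "Test: image analysis result would go here."},
    timeout=30,
)
print(resp.json())  # "ok": true confirms the bot can reach your chat
```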
by Lucas Peyrin
## How it works

This workflow demonstrates a fundamental pattern for securing a webhook by requiring an API key. It acts as a gatekeeper, checking for a valid key in the request header before allowing the request to proceed.

1. **Incoming Request**: The **Secured Webhook** node receives an incoming POST request. It expects an API key to be sent in the `x-api-key` header.
2. **API Key Verification**: The **Check API Key** node takes the key from the incoming request's header. It then makes an internal HTTP request to a second webhook (**Get API Key**) which acts as a mock database. This second webhook retrieves a list of registered API keys (from the **Registered API Keys** node) and filters it to find a match for the key that was provided.
3. **Conditional Response**: If a match is found, the **API Key Identified** node routes the execution to the "success" path, returning a `200 OK` response with the identified user's ID. If no match is found, it routes to the "unauthorized" path, returning a `401 Unauthorized` error.

This pattern separates the public-facing endpoint from the data source, which is good security practice.

## Set up steps

Setup time: ~2 minutes. This workflow is designed to be a self-contained example.

1. **Set up credentials**: This workflow uses Header Auth for its internal communication. Go to **Credentials** and create a new Header Auth credential. You can use any name and value (e.g., Name: `X-N8N-Auth`, Value: `my-secret-password`). Select this credential in all four webhook/HTTP Request nodes.
2. **Add your API keys**: Open the **Registered API Keys** node. This is your mock database. Edit the array to include the `user_id` and `api_key` pairs you want to authorize.
3. **Activate** the workflow.
4. **Test it**: Use the **Test Secure Webhook** node to send a request. Try it with a valid key from your list to see the success response, then change the `x-api-key` header to an invalid key to see the `401 Unauthorized` error (a scripted version is sketched below).
5. **For production**: Replace the mock database part of this workflow (the **Get API Key** webhook and **Registered API Keys** node) with a real database node such as Supabase, Postgres, or Baserow to look up keys.
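You can also exercise the secured endpoint from outside n8n with a short script; the URL and keys below are placeholders from your own setup:

```python
# Minimal sketch: exercise the secured webhook from outside n8n.
# The URL is a placeholder -- use the production URL of the Secured Webhook node.
import requests

url = "https://your-n8n-instance/webhook/your-secured-path"

ok = requests.post(url, headers={"x-api-key": "a-key-from-your-registered-list"}, json={})
print(ok.status_code, ok.text)    # expected: 200 with the identified user's ID

bad = requests.post(url, headers={"x-api-key": "not-a-real-key"}, json={})
print(bad.status_code, bad.text)  # expected: 401 Unauthorized
```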
by Niranjan G
## Who is this for?

NVD (National Vulnerability Database) data is essential for security analysts, vulnerability managers, and DevSecOps professionals who need to perform both CVE lookups and monitor historical change logs. This workflow helps streamline those efforts by providing structured outputs for audit, triage, or compliance tracking purposes.

📝 Note: While this example uses Google Sheets as the destination, you can easily modify the final destination node (e.g., send to Slack, email, or a database) based on your specific automation needs.

## What problem is this solving?

Security teams often manually look up CVE data and track changes across multiple tools. This process is inefficient and error-prone. This workflow automates CVE lookup and historical change tracking by logging enriched vulnerability data into Google Sheets in real time.

## What this workflow does

This workflow is designed for CVE API lookup and change history tracking. In many vulnerability automation pipelines, it is essential to determine not only the metadata of a CVE but also how it has evolved over time. Whether the operational need is enrichment, risk scoring, or remediation validation, this workflow surfaces both current and historical CVE data.

This template performs the following actions:

1. Accepts incoming webhook requests containing a CVE ID
2. Queries the NVD CVE Lookup API to fetch vulnerability metadata
3. Queries the NVD CVE History API to retrieve all historical changes
4. Flattens both datasets into a sheet-compatible structure
5. Appends vulnerability metadata to one sheet and change history to another within the same Google Spreadsheet

## Setup

🔑 **Request an NVD API key**: To request an NVD API key, provide your organization name, a valid email address, and your organization type at the NVD API Key Request page. You must scroll to the end of the Terms of Use Agreement and check "I agree to the Terms of Use" to obtain an API key. After submission, you will receive a single-use hyperlink via email to activate and view your API key. If not activated within seven days, a new request must be submitted.

📊 **API rate limits**: Without an API key, you're limited to 5 requests per 30-second window. With an API key, you're allowed up to 50 requests in the same period. To prevent request throttling, introduce slight delays between consecutive API calls in production setups (see the throttling sketch after this template's flow summary).

1. Clone or import this workflow into your n8n instance.
2. Set up the following credentials: Google Sheets OAuth2 and the NVD API key (via HTTP Header Auth).
3. The workflow logs data to a Google Sheet titled **NVD Database**, with Sheet 1 named **CVE Lookup** and Sheet 2 named **CVE History**.
4. Trigger each workflow using the respective webhook URL, appending `?cveId=CVE-XXXX-XXXX` as a query parameter.

🔍 **Example webhook request (CVE change history):**

GET https://your-domain.com/webhook/cve-history?cveId=CVE-2023-34362

## How to customize this workflow

- Use the **Edit Fields** node (optional) to centralize configuration such as sheet name or query input
- Extend the CVE flattening logic to include more nested metadata if needed
- Integrate notification systems (e.g., Slack or email) by branching from the processing nodes
- Modify webhook paths for better endpoint organization

🔐 **Production security tips**: Use HTTP Header Auth on the webhook for secure access.

> ⚠️ This template uses webhooks and NVD API access with authentication headers.
This template uses two flows:

- **Webhook 1: NVD CVE Lookup**: Look up CVE vulnerability metadata from NVD and sync it to a Google Sheet
- **Webhook 2: NVD CVE Change History**: Track change history for CVEs via NVD and log each update

Each flow:

1. Hits NVD's respective endpoint
2. Uses a custom JS Code node to flatten the nested JSON
3. Syncs data to a dedicated Google Sheet tab

🧩 4 nodes: Webhook → API Call → Parse → Sheet Sync

Make sure both flows are activated and their webhooks are exposed for external access. Based on your needs, ensure you have a secure setup, whether hosted internally or in a cloud environment, when running n8n in production.
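The rate limits above matter once you chain many CVE lookups. A minimal client-side throttle sketch, assuming the documented 50-requests-per-30-seconds budget for keyed access against the public NVD CVE API 2.0 (adjust the endpoint if your setup differs):

```python
# Minimal throttled NVD lookup sketch: stay under the documented 50 requests / 30 s
# budget for keyed access by sleeping ~0.6 s between calls. The endpoint and apiKey
# header follow the public NVD CVE API 2.0 docs; adjust if NVD changes them.
import time
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
API_KEY = "your-nvd-api-key"  # placeholder

def fetch_cve(cve_id: str) -> dict:
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, headers={"apiKey": API_KEY}, timeout=30)
    resp.raise_for_status()
    return resp.json()

for cve in ["CVE-2023-34362", "CVE-2021-44228"]:
    data = fetch_cve(cve)
    print(cve, data.get("totalResults"))
    time.sleep(0.6)  # ~50 requests per 30 seconds
```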
by Yaron Been
## Workflow Overview

This n8n automation is a market research and intelligence gathering tool that turns web content discovery into actionable insights. By combining web crawling, AI-powered filtering, and smart summarization, this workflow:

1. **Discovers relevant content**: Automatically crawls target websites, identifies trending topics, and extracts comprehensive article details
2. **Filters content intelligently**: Applies custom keyword matching, filters for the most relevant articles, and ensures high-quality information capture
3. **Summarizes with AI**: Generates concise, meaningful summaries that extract key insights and provide quick, digestible information
4. **Delivers seamlessly**: Sends summaries directly to Slack for instant team communication and rapid information sharing

## Key Benefits

- 🤖 **Full automation**: Continuous market intelligence
- 💡 **Smart filtering**: Precision content discovery
- 📊 **AI-powered insights**: Intelligent summarization
- 🚀 **Instant delivery**: Real-time team updates

## Workflow Architecture

🔹 **Stage 1: Content Discovery**
- **Scheduled Trigger**: Daily market research
- **FireCrawl Integration**: Web content crawling
- **Comprehensive site scanning**: Extracts article metadata, captures full article content, identifies key information sources

🔹 **Stage 2: Intelligent Filtering**
- **Keyword-based matching** and **relevance assessment**
- **Custom domain optimization**: AI and technology focus, startup and innovation tracking

🔹 **Stage 3: AI Summarization**
- **OpenAI GPT integration** with contextual understanding
- **Concise insight generation**: 3-point summary format capturing the essential information

🔹 **Stage 4: Team Notification**
- **Slack integration** for instant, formatted insight delivery

## Potential Use Cases

- **Market research teams**: Trend tracking
- **Innovation departments**: Technology monitoring
- **Startup ecosystems**: Competitive intelligence
- **Product management**: Industry insights
- **Strategic planning**: Rapid information gathering

## Setup Requirements

- **FireCrawl API**: Web crawling credentials, configured crawling parameters
- **OpenAI API**: GPT model access, summarization configuration, API key management
- **Slack workspace**: Channel for insights delivery, appropriate app permissions, webhook configuration
- **n8n installation**: Cloud or self-hosted instance, workflow configuration, API credential management

## Future Enhancement Suggestions

- 🤖 Multi-source crawling
- 📊 Advanced sentiment analysis
- 🔔 Customizable alert mechanisms
- 🌐 Expanded topic tracking
- 🧠 Machine learning refinement

## Technical Considerations

- Implement robust error handling
- Use exponential backoff for API calls (see the sketch at the end of this overview)
- Maintain flexible crawling strategies
- Ensure compliance with website terms of service

## Ethical Guidelines

- Respect content creator rights
- Use data for legitimate research
- Maintain transparent information gathering
- Provide proper attribution

## Workflow Visualization

[Daily Trigger]
⬇️
[Web Crawling]
⬇️
[Content Filtering]
⬇️
[AI Summarization]
⬇️
[Slack Delivery]

## Connect With Me

Ready to revolutionize your market research?

- 📧 Email: Yaron@nofluff.online
- 🎥 YouTube: @YaronBeen
- 💼 LinkedIn: Yaron Been

Transform your information gathering with intelligent, automated workflows!

#AIResearch #MarketIntelligence #AutomatedInsights #TechTrends #WebCrawling #AIMarketing #InnovationTracking #BusinessIntelligence #DataAutomation #TechNews
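Returning to the technical considerations above, here is a hedged sketch of the exponential backoff recommendation: retry a flaky API call with doubling delays. The `call_with_backoff` helper is a stand-in for any of the workflow's HTTP steps (FireCrawl, OpenAI, Slack), not the template's actual code:

```python
# Retry an HTTP call with exponential backoff plus jitter. Illustrative only.
import time
import random
import requests

def call_with_backoff(url: str, max_retries: int = 5) -> requests.Response:
    delay = 1.0
    for attempt in range(max_retries):
        try:
            resp = requests.get(url, timeout=30)
            if resp.status_code < 500 and resp.status_code != 429:
                return resp  # success, or a client error that retrying won't fix
        except requests.RequestException:
            pass  # network hiccup: fall through to the retry path
        time.sleep(delay + random.uniform(0, 0.5))  # jitter avoids thundering herds
        delay *= 2  # 1 s, 2 s, 4 s, 8 s, ...
    raise RuntimeError(f"Gave up on {url} after {max_retries} attempts")
```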
by The O Suite
This n8n workflow automates website security audits. It combines direct website scanning, threat intelligence from AlienVault OTX, and analysis from an OpenAI large language model (LLM) to generate and email a comprehensive security report.

## How it Works (Workflow Flow)

1. **Input**: A user provides a website URL via a simple web form.
2. **Data Collection**: An HTTP Request node visits the provided URL to gather initial data (status code, headers), and an AlienVault HTTP Request node queries AlienVault OTX for known threats associated with the website's hostname.
3. **Data Preparation (Prepare Data for AI)**: A custom Code node consolidates the collected website data and AlienVault intelligence, performing initial checks for common issues (e.g., error codes, missing security headers, AlienVault warnings); this kind of check is illustrated in the sketch below.
4. **AI Analysis (Security Configuration Audit)**: The prepared data is sent to an OpenAI Chat Model, which acts as a cybersecurity expert. The AI analyzes the data to identify vulnerabilities, explain their impact, suggest exploitation methods, and outline mitigation steps.
5. **Report Formatting (Format Report for Email)**: Another custom Code node converts the AI's plain-text report into structured HTML suitable for email.
6. **Delivery (Send Security Report)**: The final HTML report is sent via Gmail to a specified email address.

## Setup Steps

To use this workflow, you'll need an n8n instance and the following credentials:

1. **n8n instance**: Ensure your n8n environment is running.
2. **OpenAI API key**: Generate a key from OpenAI, then add an "OpenAI API" credential in n8n (e.g., "OpenAI account").
3. **AlienVault OTX API key**: Obtain a key from your AlienVault OTX profile, then add an "AlienVault OTX API" credential in n8n (e.g., "AlienVault account").
4. **Gmail account**: Set up a "Gmail OAuth2" credential in n8n for sending emails (recommended for security; involves Google Cloud setup).
5. **Import workflow**: Copy the workflow's JSON and import it in n8n via "Workflows" > "New" > "Import from JSON".
6. **Configure recipient**: In the "Send Security Report" node, specify the email address where reports should be sent.
7. **Activate**: Enable the workflow to start processing submissions. Once activated, open the "On form submission" webhook URL to input a URL and trigger an audit.
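As an illustration of the checks the Prepare Data for AI step performs, here is a standalone sketch that fetches a URL and flags missing security headers. The header list is illustrative and may differ from the node's exact logic:

```python
# Fetch a URL and flag missing security headers. Illustrative, not the node's code.
import requests

SECURITY_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Referrer-Policy",
]

def audit_headers(url: str) -> dict:
    resp = requests.get(url, timeout=30, allow_redirects=True)
    missing = [h for h in SECURITY_HEADERS if h not in resp.headers]
    return {"url": url, "status": resp.status_code, "missing_headers": missing}

print(audit_headers("https://example.com"))
```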
by RedOne
## Who is this for?

This workflow is designed for e-commerce store owners, operations managers, and developers who use Shopify as their e-commerce platform and want an automated way to track and analyze their order data. It is particularly useful for businesses that:

- Need a centralized view of all Shopify orders
- Want to analyze order trends without logging into Shopify
- Need to share order data with team members who don't have Shopify access
- Want to build custom reports based on order information

## What Problem Is This Workflow Solving?

While Shopify provides excellent order management within its platform, many businesses need their order data available in other systems:

- **Data accessibility**: Not everyone in your organization may have access to Shopify's admin interface
- **Custom reporting**: Google Sheets allows for flexible analysis and report creation
- **Data integration**: Having orders in Google Sheets makes it easier to combine with other business data
- **Backup**: Creates an additional backup of your critical order information

## What This Workflow Does

This n8n workflow creates an automated bridge between your Shopify store and Google Sheets:

1. Listens for new order notifications from your Shopify store via webhooks
2. Processes the incoming order data and transforms it into a structured format
3. Stores each new order in a dedicated Google Sheets spreadsheet
4. Sends real-time notifications to Telegram when new orders are received or errors occur

## Setup

1. **Create a Google Sheet**
   - Create a new Google Sheet to store your orders.
   - Add a sheet named "orders" with the following columns: orderId, orderNumber, created_at, processed, processed_at, json, customer, shippingAddress, lineItems, totalPrice, currency
2. **Set Up a Telegram Bot**
   - Create a Telegram bot using BotFather (send /newbot to @BotFather) and save the bot token for use in n8n credentials.
   - Start a chat with your bot and get your chat ID (you can use @userinfobot).
3. **Configure the Workflow**
   - Set your Google Sheet ID and your Telegram chat ID in the "Edit Variables" node.
   - Set up your Telegram API credentials in n8n.
4. **Configure the Shopify Webhook**
   - In your Shopify admin, go to Settings > Notifications > Webhooks.
   - Create a new webhook for "Order creation", set the URL to your n8n webhook URL (from the "Receive New Shopify Order" node), and set the format to JSON.

## How to Customize This Workflow to Your Needs

- **Additional data**: Modify the "Transform Order Data to Standard Format" function to extract more Shopify data (a sketch of this kind of transform follows below)
- **Multiple sheets**: Duplicate the Google Sheets node to store different aspects of orders in separate sheets
- **Telegram messages**: Customize the text in the Telegram nodes to include more details or rich formatting
- **Data processing**: Add nodes to perform calculations or transformations on order data
- **Additional notifications**: Add more channels like Slack, Discord, or SMS
- **Integrations**: Extend the workflow to send order data to other systems like CRMs, ERPs, or accounting software

## Final Notes

This workflow serves as a foundation that you can build upon to create a comprehensive order management system tailored to your specific business needs.
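For orientation, here is a Python illustration of the kind of flattening the transform step performs; the actual node is a JavaScript Code node inside n8n, and field names here follow Shopify's order-creation webhook payload:

```python
# Illustrative sketch of "Transform Order Data to Standard Format": flatten a Shopify
# order-creation webhook payload into one spreadsheet row. Not the workflow's JS code.
import json

def transform_order(order: dict) -> dict:
    customer = order.get("customer") or {}
    return {
        "orderId": order.get("id"),
        "orderNumber": order.get("order_number"),
        "created_at": order.get("created_at"),
        "customer": f"{customer.get('first_name', '')} {customer.get('last_name', '')}".strip(),
        "shippingAddress": json.dumps(order.get("shipping_address") or {}),
        "lineItems": json.dumps([
            {"title": li.get("title"), "qty": li.get("quantity"), "price": li.get("price")}
            for li in order.get("line_items", [])
        ]),
        "totalPrice": order.get("total_price"),
        "currency": order.get("currency"),
        "json": json.dumps(order),  # raw payload kept for reference
    }
```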
by Ranjan Dailata
## Who this is for

Extract Amazon Best Seller Electronic Info is an automated workflow that extracts best-seller data from Amazon's Electronics section using Bright Data Web Unlocker, transforms it into structured JSON using Google Gemini's LLM, and forwards the fully structured JSON response to a specified webhook for downstream use.

This workflow is tailored for:

- **eCommerce analysts** who need to monitor Amazon best-seller trends in the Electronics category and track changes in real time or on a schedule
- **Product intelligence teams** who want structured insights on competitor offerings, including rankings, prices, ratings, and promotions
- **AI-powered chatbot developers** who are building assistants capable of answering product-related queries with fresh, structured data from Amazon
- **Growth hackers and marketers** looking to automate competitive research and surface trending product data to inform pricing strategies
- **Data aggregators and price trackers** who need reliable, smart scraping of Amazon data enriched with AI-driven parsing

## What problem is this workflow solving?

Keeping up with Amazon's best sellers in Electronics is a time-consuming, error-prone task when done manually. This workflow automates the process by:

- Automating data extraction from Amazon Best Sellers using Bright Data, ensuring reliable access to real-time, structured data
- Enhancing raw data with Google Gemini, turning product lists into structured JSON
- Sending results to a webhook, enabling seamless integration into dashboards, databases, or chatbots

## What this workflow does

1. Extracts the Amazon Best Seller Electronics page using Bright Data's Web Unlocker API
2. Processes the unstructured content with Google Gemini's Flash Exp model to extract structured product data
3. Sends the structured information to a webhook endpoint (sketched below)

## Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to `Bearer XXXXXXXXXXXXXX`, replacing `XXXXXXXXXXXXXX` with your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Amazon URL and Bright Data zone in the **Amazon URL with the Bright Data Zone** node.
6. Update the **Webhook HTTP Request** node with the webhook endpoint of your choice.

## How to customize this workflow to your needs

This workflow is built to be flexible for market researchers, e-commerce entrepreneurs, and data analysts alike:

- **Change the Amazon category**: Update the Amazon URL to a category of interest, such as Computers & Accessories or Home Audio
- **Customize the Gemini prompt**: Update the prompt to get different styles of output, such as comparison tables, summaries, or feature highlights
- **Send output to other destinations**: Replace the webhook URL to forward output to Google Sheets, Airtable, Slack or Discord, or custom API endpoints
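As a reference for the final step, here is a minimal sketch of forwarding the structured output to your own endpoint. The URL and payload shape are placeholders, since the real structure depends on the prompt you configure in the Gemini node:

```python
# Forward Gemini-structured product data to a downstream webhook. URL and payload
# shape are placeholders, not the workflow's fixed schema.
import requests

WEBHOOK_URL = "https://your-endpoint.example.com/amazon-bestsellers"  # placeholder

structured_products = [
    {"rank": 1, "title": "Example Headphones", "price": "$49.99", "rating": 4.6},
    {"rank": 2, "title": "Example Smart Speaker", "price": "$29.99", "rating": 4.4},
]

resp = requests.post(
    WEBHOOK_URL,
    json={"category": "Electronics", "products": structured_products},
    timeout=30,
)
resp.raise_for_status()
print("Delivered", len(structured_products), "products")
```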
by Dhruv Dalsaniya
## Description

This n8n workflow automates a Discord bot to fetch messages from a specified channel and send AI-generated responses in threads. It ensures smooth message processing and interaction, making it ideal for managing community discussions, customer support, or AI-based engagement. The workflow leverages Redis for memory persistence, so conversation history is maintained even if the workflow restarts, providing a seamless user experience.

## How It Works

1. The bot listens for new messages in a specified Discord channel.
2. It sends the messages to an AI model for response generation.
3. The AI-generated reply is posted as a thread under the original message.
4. The bot runs on an Ubuntu server and is managed with PM2 for uptime stability.

The Discord bot (a Python script) acts as the bridge, capturing messages from Discord and sending them to the n8n webhook. The n8n workflow then processes these messages, interacts with the AI model, and sends the AI's response back to Discord via the bot.

## Prerequisites to Host the Bot

- Sign up on Pella, a managed hosting service for Discord bots (easy setup).
- A Redis instance for memory persistence. Redis is an in-memory data structure store, used here to store and retrieve conversation history so the AI can maintain context across multiple interactions. This is crucial for coherent, continuous conversations.

## Set Up Steps

### 1️⃣ Create a Discord Bot

1. Go to the Discord Developer Portal.
2. Click "New Application", enter a name, and create it.
3. Navigate to Bot > Reset Token, then copy the bot token.
4. Enable Privileged Gateway Intents (Presence, Server Members, Message Content).
5. Under OAuth2 > URL Generator, select the bot scope and required permissions.
6. Copy the generated URL, open it in a browser, select your server, and click Authorize.
### 2️⃣ Deploy the Bot on Pella

Create a new folder `discord-bot` and navigate into it. Then create an `.env` file to store your bot token and the n8n webhook URL (you can copy the webhook URL from the n8n workflow):

```
TOKEN=your-bot-token-here
WEBHOOK_URL=https://your-domain.tld/webhook/getmessage
```

Create a file `main.py`, copy the bot script below into it, and save it:

```python
import discord
import requests
import json
import os
from dotenv import load_dotenv

# Load environment variables from the .env file
load_dotenv()
TOKEN = os.getenv("TOKEN")
WEBHOOK_URL = os.getenv("WEBHOOK_URL")

# Bot configuration
LISTEN_CHANNELS = ["YOUR_CHANNEL_ID_1", "YOUR_CHANNEL_ID_2"]  # Replace with your target channel IDs

# Intents setup
intents = discord.Intents.default()
intents.messages = True          # Enable message events
intents.guilds = True
intents.message_content = True   # Required to read message content

client = discord.Client(intents=intents)

@client.event
async def on_ready():
    print(f'Logged in as {client.user}')

@client.event
async def on_message(message):
    if message.author == client.user:
        return  # Ignore the bot's own messages
    if str(message.channel.id) in LISTEN_CHANNELS:
        try:
            # Re-fetch the message to ensure complete, up-to-date content
            fetched_message = await message.channel.fetch_message(message.id)
            payload = {
                "channel_id": str(fetched_message.channel.id),
                "chat_message": fetched_message.content,
                "timestamp": str(fetched_message.created_at),
                "message_id": str(fetched_message.id),
                "user_id": str(fetched_message.author.id),
            }
            headers = {'Content-Type': 'application/json'}
            response = requests.post(WEBHOOK_URL, data=json.dumps(payload), headers=headers)
            if response.status_code == 200:
                print(f"Message sent successfully: {payload}")
            else:
                print(f"Failed to send message: {response.status_code}, Response: {response.text}")
        except Exception as e:
            print(f"Error fetching message: {e}")

client.run(TOKEN)
```

Create `requirements.txt` containing:

```
discord
python-dotenv
```

### 3️⃣ Keep the Bot Running 24/7

Follow this video to set up the bot: https://www.youtube.com/watch?v=rNnK3XlUtYU

Note: The free plan expires after 24 hours, so opt for a paid plan on Pella to keep your bot running.

### 4️⃣ n8n Workflow Configuration

The n8n workflow consists of the following nodes:

- **Get Discord Messages (Webhook)**: The entry point for messages from the Discord bot. It receives the `channel_id`, `chat_message`, `timestamp`, `message_id`, and `user_id` when a new message is posted in a configured channel. Its webhook path is `/getmessage` and it expects a POST request.
- **Chat Agent (Langchain Agent)**: Processes the incoming Discord message (`chat_message`). It is configured as a conversational agent, integrating the language model and memory to generate an appropriate response, with a prompt that keeps replies concise (under 1800 characters).
- **OpenAI -4o-mini (Langchain Language Model)**: Connects to the OpenAI API and uses the `gpt-4o-mini-2024-07-18` model for generating AI responses. This is the core AI component of the workflow.
- **Message History (Redis Chat Memory)**: Manages conversation history using Redis. It stores and retrieves chat messages so the Chat Agent maintains context for each user based on their `user_id`. This is critical for coherent multi-turn conversations.
- **Calculator (Langchain Tool)**: Provides a calculator tool the AI agent can use when a mathematical calculation is required, extending the AI's capabilities beyond text generation.
- **Response fromAI (Discord)**: Sends the AI-generated response back to the Discord channel. It uses the Discord Bot API credentials and replies in a thread under the original message (`message_id`) in the specified `channel_id`.
- **Sticky Note1 through Sticky Note5, and Sticky Note**: Informational nodes providing instructions, code snippets for the Discord bot, and setup guidance. They cover the `.env` file, `requirements.txt`, the Python bot code, and general recommendations for channel configuration and adding tools.

### 5️⃣ Set Up Redis

1. **Choose a Redis hosting provider**: Use a cloud provider such as Redis Labs or Aiven, or set up your own Redis instance on a VPS.
2. **Obtain connection details**: Once your Redis instance is set up, note the host, port, and password (if applicable).
3. **Configure the n8n Redis node**: In your workflow, configure the "Message History" node with your Redis connection details. Ensure the ✅ redis-for-n8n credential is set up with your instance's host, port, and password.

### 6️⃣ Customize the Template

- **AI model**: Swap the "OpenAI -4o-mini" node for any other AI service supported by n8n (e.g., Cohere, Hugging Face) to use a different language model. Connect the new node to the `ai_languageModel` input of the "Chat Agent" node.
- **Agent prompt**: Modify the `text` parameter in the "Chat Agent" node to change the AI's persona, provide specific instructions, or adjust response length.
- **Additional tools**: The "Calculator" node is one example of an AI tool. Add more Langchain tool nodes (e.g., search, data lookup) and connect them to the `ai_tool` input of the "Chat Agent" node to extend the AI's capabilities. See "Sticky Note5" in the workflow for a reminder.
- **Channel filtering**: Adjust the `LISTEN_CHANNELS` list in the bot's `main.py` to include or exclude the Discord channel IDs where the bot should listen.
- **Thread management**: Modify the "Response fromAI" node to change how threads are created or managed, or to send responses directly to the channel instead of a thread. The current setup links the response to the original message ID (`message_reference`).

### 7️⃣ Test It

1. **Start the Discord bot**: Ensure `main.py` is running on Pella.
2. **Activate the n8n workflow**: Make sure the workflow is active and listening for webhooks.
3. **Send a message in Discord**: Post a message in one of the `LISTEN_CHANNELS` in your server.
4. **Verify the response**: The bot should capture the message, send it to n8n, receive an AI-generated response, and post it as a thread under your original message.
5. **Check Redis**: Verify that conversation history is being stored and updated correctly in your Redis instance; look for keys related to user IDs (see the sketch below).

✅ Now your bot is running in the background! 🚀
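For step 5 of testing, a quick standalone inspection sketch: connection details are placeholders, and the exact key names depend on how the Redis Chat Memory node namespaces sessions (keyed per `user_id` in this workflow):

```python
# List keys in the Redis instance backing the Message History node, to confirm that
# chat memory is being written. Connection details are placeholders.
import redis

r = redis.Redis(host="your-redis-host", port=6379, password="your-password", decode_responses=True)

for key in r.scan_iter("*"):      # narrow the pattern to your session/user prefix if needed
    print(key, r.type(key))
```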
by Vitali
## Template Description

This n8n workflow template allows you to create a masked email address using the Fastmail API, triggered by a webhook. This is especially useful for generating disposable email addresses for privacy-conscious users or for testing purposes.

## Workflow Details

1. **Webhook Trigger**: The workflow is initiated by sending a POST request to a specific webhook. You can include `state` and `description` in your request body to customize the masked email's state and description.
2. **Session Retrieval**: The workflow makes an HTTP request to the Fastmail API to retrieve session information, which it uses to authenticate further requests.
3. **Create Masked Email**: Using the retrieved session data, the workflow sends a POST request to Fastmail's JMAP API to create a masked email, applying the `state` and `description` from the webhook payload.
4. **Prepare Output**: Once the masked email is successfully created, the workflow extracts the email address and attaches the description for further processing.
5. **Respond to Webhook**: Finally, the workflow responds to the original POST request with the newly created masked email and its description.

## Requirements

- **Fastmail API access**: Valid Fastmail API credentials configured with HTTP Header Authentication.
- **Authorization setup**: Optionally set up authorization if your webhook is exposed to the internet, to prevent misuse.
- **Custom webhook request**: Use a tool like curl, or create a shortcut on macOS/iOS, to send the POST request to the webhook with the necessary JSON payload, like so:

```
curl -X POST -H 'Content-Type: application/json' https://your-n8n-instance/webhook/87f9abd1-2c9b-4d1f-8c7f-2261f4698c3c -d '{"state": "pending", "description": "my mega fancy masked email"}'
```

This template simplifies integrating masked-email functionality into your projects or workflows and can be extended for various use cases. Feel free to use the companion shortcut I've also created; update the authorization header in the shortcut if needed. https://www.icloud.com/shortcuts/ac249b50eab34c04acd9fb522f9f7068
by Fan Luo
## Auto-Share YouTube Videos with AI-Generated Posts to Facebook, X and Notify in Discord

This n8n template demonstrates how to use an LLM like DeepSeek to generate a post and share it to a Facebook page and X automatically whenever a new video is published to a YouTube channel.

### How it works

1. An RSS feed with a polling schedule pulls new YouTube videos from the specified channel.
2. An AI agent is prompted to generate a post with the proper URL and hashtags based on the video metadata.
3. The workflow then automatically creates a new post on Facebook and X via their APIs.
4. It posts a new message in a Discord channel via a webhook (see the sketch below).

### How to use

Simply set up the RSS polling trigger to automatically start the workflow.

### Requirements

- Facebook API setup (see step-by-step tutorials)
- X v2 API setup (see step-by-step tutorials)
- Discord channel webhook (see step-by-step tutorials)

### Need Help?

Contact me via My Blog or ask in the Forum! Happy Hacking!
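For reference, the Discord notification step boils down to a single POST: Discord channel webhooks accept a JSON body with a `content` field. The webhook URL below is a placeholder from your channel's Integrations settings:

```python
# Post a notification message to a Discord channel webhook.
import requests

DISCORD_WEBHOOK = "https://discord.com/api/webhooks/123456789/your-webhook-token"  # placeholder

message = "📺 New video is live: Example Title\nhttps://youtu.be/VIDEO_ID\n#automation #n8n"
resp = requests.post(DISCORD_WEBHOOK, json={"content": message}, timeout=30)
resp.raise_for_status()  # Discord returns 204 No Content on success
```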
by Lukas Kunhardt
## Who is this for?

This template is for any website owner, digital agency, or compliance officer operating within the European Union. It's designed for users who need to comply with the upcoming European Accessibility Act (EAA) but may not have deep technical or legal expertise.

## Disclaimer

This workflow uses an npm package called "cheerio" to work with the specified URL's HTML. Installing packages is only possible when self-hosting n8n.

## What problem is this workflow solving? / Use Case

Starting June 28, 2025, the European Accessibility Act (EAA) mandates that most websites offering products or services in the EU must be accessible and publish a formal Accessibility Statement. Manually creating this legal document is complex, requiring both a technical site analysis and knowledge of specific legal requirements. This workflow automates the generation of a compliant first draft, saving significant time and effort.

## What this workflow does

After you input your details (such as website URL and API key) in a central configuration node, this workflow automatically:

1. Scans your live website for accessibility issues using the WAVE API (see the sketch below).
2. Processes the scan results to identify the main problem areas.
3. Instructs a Google Gemini AI agent with a specialized legal prompt based on the European Accessibility Act.
4. Generates a formal Accessibility Statement in your desired language.
5. Saves the statement as an .html file and sends it to you as an email attachment.

## Setup

This workflow is designed for a quick setup:

1. **Configure all variables**: Click the 'CHANGE THESE: dependencies' node. This is your central control panel. Fill in all the values, including your WAVE API key, the URL to analyze, company details, and desired output language.
2. **Set up credentials**: You will need to connect your Google accounts for the workflow to run.
   - Gemini: Click the 'gemini 2.5 pro' node, click the gear icon (⚙️) next to the "Credential" field, and connect your Google Gemini API credentials.
   - Gmail: Click the 'Send report by email' node and connect your Gmail account to allow sending the final report.
3. **Activate & execute**: Make sure the workflow is active (toggle in the top-right corner), then click 'Execute Workflow' to run your first analysis.

## How to customize this workflow to your needs

This template is a great starting point for any EU country. Here's how to adapt it:

- **Localize for your country (important!)**: The generated statement contains a placeholder for the "Enforcement Procedure". You *must* edit the prompt in the **'Accessibility Statement Generator'** node to replace this placeholder with the name of, and a link to, your country's official enforcement body.
- **Change the AI**: Swap the Google Gemini node for any other AI model, like OpenAI or Anthropic Claude, by replacing the node and connecting it to the agent.
- **Change the trigger**: Replace the **'When clicking ‘Execute workflow’'** node with a Form Trigger or Webhook Trigger to run this workflow based on external inputs, for example to offer this analysis as a service to your clients.
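For context on step 1, the WAVE scan is a single authenticated GET that returns a JSON report of accessibility issues. The endpoint and parameters below follow WebAIM's published WAVE API, but verify them against the current docs; the key and target URL are placeholders:

```python
# Query the WAVE accessibility API for a target URL and print the error categories.
# Endpoint/parameters per WebAIM's WAVE API docs; verify against the current docs.
import requests

WAVE_KEY = "your-wave-api-key"  # placeholder
TARGET = "https://example.com"

resp = requests.get(
    "https://wave.webaim.org/api/request",
    params={"key": WAVE_KEY, "url": TARGET, "reporttype": 2},  # reporttype 2: issues by category
    timeout=60,
)
resp.raise_for_status()
report = resp.json()
print(report.get("categories", {}).get("error", {}))  # counts of detected errors
```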