by Harshil Agrawal
This workflow handles the incoming challenge request from Twitter and sends the response required for verification. When you register a webhook with the Twitter Account Activity API, Twitter sends a CRC (Challenge-Response Check) token and expects a hashed signature in response. Twitter also randomly pings the webhook to ensure it is active and secure.

- **Webhook node:** Use the displayed URL to register with the Account Activity API.
- **Crypto node:** In the Secret field, enter your API Key Secret from Twitter.
- **Set node:** Generates the response expected by the Twitter API.

Learn more about connecting n8n with Twitter in the Getting Started with Twitter Webhook article.
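For context on what the Crypto and Set nodes compute: Twitter's CRC check sends a crc_token and expects back an HMAC-SHA256 of that token, keyed with your API Key Secret, base64-encoded and prefixed with `sha256=`. A minimal sketch of the equivalent logic in an n8n Code node:

```javascript
const crypto = require('crypto');

// The crc_token arrives as a query parameter on Twitter's GET request.
const crcToken = $json.query.crc_token;
const consumerSecret = 'YOUR_API_KEY_SECRET'; // placeholder: your Twitter API Key Secret

// HMAC-SHA256 the token with the secret, base64-encode, and prefix with "sha256=".
const hmac = crypto
  .createHmac('sha256', consumerSecret)
  .update(crcToken)
  .digest('base64');

return [{ json: { response_token: `sha256=${hmac}` } }];
```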
by Joseph LePage
Transform simple queries into comprehensive, well-structured content with this n8n workflow that leverages Perplexity AI for research and GPT-4 for content transformation. Create professional blog posts and HTML content automatically while maintaining accuracy and depth.

**Intelligent Research & Analysis**

Automated Research Pipeline:
- Harnesses Perplexity AI's advanced research capabilities
- Processes complex topics into structured insights
- Delivers comprehensive analysis in minutes instead of hours

Smart Content Organization:
- Automatically structures content with clear hierarchies
- Identifies and highlights key concepts
- Maintains technical accuracy while improving readability
- Creates SEO-friendly content structure

**Content Transformation Features**

Dynamic Content Generation:
- Converts research into professional blog articles
- Generates clean, responsive HTML output
- Implements proper semantic structure
- Includes metadata and categorization

Professional Formatting:
- Responsive Tailwind CSS styling
- Clean, modern HTML structure
- Proper heading hierarchy
- Mobile-friendly layouts
- Blockquote highlighting for key insights

**Perfect For**
- Content Researchers: save hours of manual research by automating the information gathering and structuring process.
- Content Writers: focus on creativity while the workflow handles research and technical formatting.
- Web Publishers: generate publication-ready HTML content with modern styling and proper structure.

**Technical Implementation**

Workflow Components:
- Webhook endpoint for query submission
- Perplexity AI integration for research
- GPT-4 powered content structuring
- HTML transformation engine
- Telegram notification system (optional)

Transform your content creation process with an intelligent system that handles research, writing, and formatting while you focus on strategy and creativity.
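Since the entry point is a webhook, kicking off a run is a single POST. A hedged sketch; the URL path and the query field name are illustrative assumptions, not taken from the template:

```javascript
// Hypothetical invocation: submit a research query to the workflow's webhook.
// Replace the URL with the one shown on your n8n Webhook node.
const response = await fetch('https://your-n8n-instance/webhook/content-generator', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: 'The state of WebAssembly in 2025' }),
});

const html = await response.text(); // the workflow responds with the generated HTML
console.log(html);
```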
by Joseph
Watch on YouTube. Welcome to this complete step-by-step guide on how to build your own newsletter automation system using n8n, Bolt.new, and RapidAPI. Whether you're a solo founder, indie hacker, or community builder, this setup will let you collect subscribers, send them curated job updates, and manage unsubscriptions, all with full control and zero reliance on third-party newsletter tools.

**Goal of This Guide**

By the end of this guide, you will have a fully working system that allows you to:
- Collect user subscriptions from a modern frontend interface
- Send welcome or rejection emails (using your own SMTP)
- Automatically scrape jobs via an API and send them to subscribers weekly or daily
- Manage unsubscriptions with confirmation and webhook logic
- Customize and manage all of this using n8n workflows with no-code/low-code skills

This system is perfect for niche job boards, community newsletters, or any project that needs automated content delivery to subscribers.

**Tools You'll Be Using**
- **n8n** for automation workflows, acting as your backend
- **Bolt.new** to build your newsletter landing page and subscription interface
- **Google Sheets** to act as your lightweight subscriber/job database
- **RapidAPI** to pull job listings from the Jobs Search API
- **Custom SMTP email (optional)** to send branded emails from your own domain

**Step 1: Set Up Your Google Sheets Database**

Make a copy of this Google Sheets template, which will serve as your database: [Click here to copy the Google Sheet template](https://docs.google.com/spreadsheets/d/11vxYkjfwIrnNHN6PIdAOa_HZdTvMXI0lm_Jecac4YO0/edit?gid=0#gid=0)

It includes:
- A Subscribers sheet to store new signups
- An Unsubscribers sheet to prevent duplicates
- A Jobs sheet to store scraped job listings

**Step 2: Get Your API Key for Jobs Scraping**

We use the Jobs Search API on RapidAPI to pull job listings programmatically:
1. Sign up or log into RapidAPI
2. Subscribe to the Jobs Search API
3. Copy your API key; you'll need it in n8n

**Step 3: Get Your API Key for Email Validation**

We use the Mails.so API to validate email addresses before adding them to the database:
1. Sign up or log into mails.so
2. Visit the dashboard, then click on API
3. Copy the cURL command and import it into an HTTP Request node

**Step 4: Set Up Your Frontend with Bolt.new**

You'll build a clean, modern newsletter landing page using Bolt.new. Use the Bolt.new prompt document (make your own copy so you can edit it while formatting the prompts) to generate:
- Your landing page
- Email templates (welcome, already subscribed, unsubscribe confirmation)
- Terms & Privacy Policy pages
- An unsubscribe confirmation page

The prompts cover:
- A homepage form with input fields (Name, Email) and a consent checkbox
- Logic to send data to the n8n webhook using fetch()
- UI logic for showing the webhook response (Success, Already Exists, Invalid Email)
- Unsubscribe page handling

**Step 5: Set Up Email Sending With Your Custom Domain (Optional but Recommended)**

To send branded HTML emails from your own domain, follow the guide "How to Set Up SMTP with cPanel Email on n8n" to configure SMTP on n8n with your cPanel email account. This setup helps:
- Improve deliverability
- Avoid Gmail spam filters
- Send beautiful, fully customizable HTML emails

**Step 6: Create n8n Workflows for Subscription Management**

In n8n, you'll build three workflows:

1. **Handle subscriptions**
   - Receives the webhook from the frontend with name and email
   - Validates the email (using mails.so)
   - Checks whether the address is already subscribed
   - Sends the appropriate HTML email (Welcome, Already Exists, Invalid Email)
   - Adds the subscriber to the Google Sheets database

2. **Scrape jobs and email subscribers**
   - Uses a Cron node to run daily or weekly
   - Fetches new jobs via RapidAPI
   - Formats jobs into readable HTML (see the Function-node sketch below)
   - Sends jobs to all active subscribers via SMTP

3. **Handle unsubscriptions**
   - Exposes a webhook at /unsubscribe
   - Confirms the email and shows a confirmation button
   - On confirmation, adds the email to the Unsubscribers sheet
   - Shows feedback and redirects the user back to the homepage after 2 seconds

**What You're Learning Along the Way**
- How to use n8n as a backend service (reliable, scalable, visual)
- How to use webhooks to connect frontend and backend logic
- How to scrape APIs, format JSON data, and convert it to HTML emails
- How to use Function nodes for data processing
- How to build logic with IF and Switch nodes
- How to design a minimal, clean frontend with Bolt.new
- How to control the entire newsletter system without external platforms

Follow me on Twitter @juppfy, or check out my agency website.
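As referenced in workflow 2, here is a minimal Function-node sketch that turns scraped job items into the HTML email body. The field names (title, company, url) are assumptions; rename them to match what the Jobs Search API actually returns:

```javascript
// n8n Function/Code node: build an HTML job digest from the incoming items.
// Assumed fields: title, company, url; adjust to your API response.
const rows = items.map((item) => {
  const { title, company, url } = item.json;
  return `<li><a href="${url}">${title}</a> at ${company}</li>`;
});

const html = `
  <h2>This week's new jobs</h2>
  <ul>
    ${rows.join('\n    ')}
  </ul>`;

return [{ json: { html } }];
```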
by Will Stenzel
Creates a new team for a project from webhook form data. When the project is created, the current semester is added to its relation attribute. More info on using this workflow as part of a larger system can be found here.
by n8n Team
This n8n workflow serves as a powerful cybersecurity and threat intelligence tool that looks up URLs or IP addresses through industry-standard threat intelligence vendors. It starts with either a form submission or a webhook trigger, letting users submit the URLs or IPs that require analysis.

The workflow then splits into two paths depending on whether the input is an IP or a URL. If an IP was given, it sets the ip variable directly; if a URL was given, the workflow performs a DNS lookup using Google Public DNS and sets the ip variable from the results (sketched below).

The workflow then checks the obtained IP address against GreyNoise services: one branch uses GreyNoise RIOT IP Lookup to assess IP reputation and association with known benign services, and the other uses GreyNoise IP Context to evaluate potential threats. The results from both GreyNoise services are merged into a comprehensive analysis that includes the IP, classification (benign, malicious, or unknown), IP location, tags identifying activity or malware, category, and trust level.

In parallel, a VirusTotal scan is initiated for the URL/IP to identify whether it is malicious. A 5-second wait ensures proper processing, and the workflow then polls the scan result to determine when the analysis is complete. It then summarizes the analysis, including the overall security-vendor results, blockList analysis, OpenPhish analysis, the URL, and the IP.

Finally, the workflow combines the summarized intelligence from both GreyNoise and VirusTotal into a thorough analysis of the URL/IP, which can be emailed via Gmail to the user who filled out the form, or sent to them as a Slack message.

Setting up this workflow requires proper configuration of the form submission or webhook trigger, and correctly integrated GreyNoise and VirusTotal API credentials. Users should also consider the potential volume of data and API rate limits, as excessive requests could cause issues. Proper documentation and validation of input data are crucial for accurate and meaningful results in the final report.
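The URL branch's DNS step maps onto a single HTTP request. A sketch of the equivalent lookup against Google Public DNS's JSON API, pulling the first A record into the ip variable:

```javascript
// Resolve a URL's hostname to an IPv4 address via Google Public DNS.
const hostname = new URL('https://example.com/some/path').hostname;

const res = await fetch(`https://dns.google/resolve?name=${hostname}&type=A`);
const dns = await res.json();

// In the JSON API, record type 1 is an A record; "data" holds the address.
const ip = dns.Answer?.find((a) => a.type === 1)?.data;
console.log(ip); // e.g. "93.184.216.34"
```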
by Jinash Rouniyar
**Problem**

Thousands of MCP servers exist and many are updated daily, making server selection difficult for LLMs. Current approaches require manually downloading and configuring servers, limiting flexibility, and when multiple servers are pre-configured, LLMs get overwhelmed and confused about which server to use for a specific task. This template enables dynamic server selection from a live PulseMCP directory of 5000+ servers.

**How it works**
1. A user query goes to an LLM that decides whether to use MCP servers to fulfill the query and provides reasoning for its decision.
2. We fetch MCP servers from the PulseMCP API and format them as documents for reranking.
3. We use Contextual AI's reranker to score and rank all MCP servers based on the query and instructions.

**How to set up**
- Sign up for a free trial of Contextual AI here to get your CONTEXTUALAI_API_KEY.
- Click the Variables option in the left panel and add a new environment variable CONTEXTUALAI_API_KEY.
- For the baseline model we use GPT-4.1 mini; you can find your OpenAI API key here.

**How to customize the workflow**
- We use a Chat Trigger to initiate the workflow. Feel free to replace it with a webhook or another trigger as required.
- We use OpenAI's GPT-4.1 mini as the baseline model and reranker-prompt generator. You can swap this section out for the LLM of your choice.
- We fetch 5000 MCP servers from the PulseMCP directory as a baseline number; adjust this parameter as required.
- We use Contextual AI's ctxl-rerank-v2-instruct-multilingual reranker model, which can be swapped for any of the following rerankers:
  1. ctxl-rerank-v2-instruct-multilingual
  2. ctxl-rerank-v2-instruct-multilingual-mini
  3. ctxl-rerank-v1-instruct

You can check out this blog for more information about rerankers.

**Good to know**
- Contextual AI reranker (with full MCP docs): ~$0.035/query (~$0.035 for reranking plus ~$0.0001 for OpenAI instruction generation)
- OpenAI baseline: ~$0.017/query
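Step 3 of the flow boils down to one rerank call. A hedged sketch of what that request can look like; the endpoint path, field names, and response shape are assumptions drawn from Contextual AI's public rerank API, so verify them against the current docs:

```javascript
// Hedged sketch: score MCP server descriptions against the user query with
// Contextual AI's reranker. Verify endpoint and fields against their docs.
const servers = [
  { name: 'github-mcp', description: 'Tools for issues, PRs, and repos' },
  { name: 'postgres-mcp', description: 'Query and inspect Postgres databases' },
];

const res = await fetch('https://api.contextual.ai/v1/rerank', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.CONTEXTUALAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'ctxl-rerank-v2-instruct-multilingual',
    query: 'Find MCP servers that can open GitHub pull requests',
    instruction: 'Prefer actively maintained servers with write access to GitHub',
    documents: servers.map((s) => `${s.name}: ${s.description}`),
  }),
});

const { results } = await res.json(); // assumed shape: [{ index, relevance_score }]
console.log(results);
```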
by Oneclick AI Squad
This n8n workflow automates the generation of personalized marketing content for events, including emails, social media posts, and advertisements. Leveraging AI, it tailors content based on event details and target-audience preferences, enhancing promotional efforts and engagement for organizers.

**Key Features**
- Generates customized email, social media, and ad content for event promotion.
- Personalizes content based on event specifics and audience insights.
- Streamlines content creation with AI-driven suggestions and formatting.
- Delivers content ready for distribution across multiple channels.
- Supports real-time updates and adjustments for campaign optimization.

**Workflow Process**
1. The Webhook for Event Planning node receives event details and marketing preferences to initiate the workflow (a sample payload is sketched below).
2. The Read Event Details node extracts and organizes event data from Google Sheets for content creation.
3. The Set Variables node defines key parameters and audience-targeting criteria.
4. The AI Agent for Event Plan node uses AI to generate optimized marketing content, including emails, social media posts, and ads.
5. The Format Plan node structures the generated content into a polished, actionable format.
6. The Save to Google Sheets node stores the generated content for tracking and future use.
7. The Email Report node compiles a comprehensive event marketing plan, and the Send Email Report node delivers the finalized report to the organizer.

**Setup Instructions**
1. Import the workflow into n8n and configure the Webhook for Event Planning with your event management system's API credentials.
2. Set up Google Sheets integration for the Read Event Details and Save to Google Sheets nodes.
3. Configure the AI Agent for Event Plan node with a suitable language model for content generation.
4. Set up email credentials for the Email Report and Send Email Report nodes.
5. Test the workflow with sample event data to verify content generation and delivery.
6. Monitor the output and adjust AI parameters or node settings as needed for optimal results.

**Prerequisites**
- Webhook integration with the event management or input system.
- Google Sheets account for data storage and retrieval.
- AI/LLM service for content generation and personalization.
- Email service for report delivery.
- Access to event details and audience data for customization.

**Modification Options**
- Modify the Read Event Details node to include additional data fields or sources.
- Adjust the Set Variables node to incorporate specific audience segments or branding guidelines.
- Customize the AI Agent for Event Plan node to focus on particular content types (e.g., video scripts, banners).
- Add social media posting nodes to publish content directly from the Format Plan node.
- Configure the Email Report node to include additional metrics or campaign analytics.
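For testing, a hypothetical example of the kind of payload the Webhook for Event Planning node might receive; every field name here is illustrative, so adapt it to your event management system:

```javascript
// Hypothetical test payload: all field names are illustrative assumptions.
await fetch('https://your-n8n-instance/webhook/event-planning', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    eventName: 'Spring Product Launch',
    date: '2025-05-12',
    audience: 'B2B SaaS founders',
    channels: ['email', 'linkedin', 'ads'],
    tone: 'professional but upbeat',
  }),
});
```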
by Davide
This workflow integrates a Retrieval-Augmented Generation (RAG) system with a post-sales AI agent for WooCommerce. It combines vector-based search (Qdrant + OpenAI embeddings) with LLMs (Google Gemini and GPT-4o-mini) to provide accurate, contextual responses. Both systems are connected to VAPI webhooks, making the workflow usable in a voice AI assistant via Twilio phone numbers. The workflow receives JSON payloads from VAPI via webhooks, processes each request through the appropriate chain (Agent or RAG), and sends a structured response back to VAPI to be read out to the user.

**Advantages**
- Unified AI support system: combines knowledge retrieval (RAG) with transactional support (WooCommerce).
- Data privacy & security: enforces strict email/order verification before sharing information.
- Multi-model power: leverages both Google Gemini and OpenAI GPT-4o-mini for optimal responses.
- Scalable knowledge base: the Qdrant vector database ensures fast and accurate context retrieval.
- Customer satisfaction: provides real-time answers about orders, tracking, and store policies.
- Flexible integration: easily connects with VAPI for voice assistants and phone-based customer support.
- Reusable components: the RAG part can be extended for FAQs, while the post-sales agent can scale with more WooCommerce tools.

**How it Works**

The workflow has two main components:

1. RAG system (knowledge retrieval & Q&A)
   - Uses OpenAI embeddings to store documents in Qdrant.
   - Retrieves relevant context with a Vector Store Retriever.
   - Sends the information to a Question & Answer Chain powered by Google Gemini.
   - Returns precise, context-based answers to user queries via webhook.

2. Post-sales customer support agent, acting as a WooCommerce virtual assistant that:
   - Retrieves customer orders (get_order, get_orders).
   - Gets user profiles (get_user).
   - Provides shipment tracking (get_tracking) using the YITH WooCommerce Order Tracking plugin.
   - Enforces strict verification rules: the customer's email must match the order before details are disclosed.
   - Communicates professionally, providing clear and secure customer support.
   - Integrates with GPT-4o-mini for natural conversation flow.

**Set Up Steps**

1. Infrastructure & credentials setup in n8n. Ensure all required nodes have their credentials configured:
   - OpenAI API key for the GPT-4o-mini and Embeddings OpenAI nodes.
   - Google Gemini API key for the Google Gemini Chat Model node.
   - Qdrant connection details for the Qdrant Vector Store1 node (points to a Hetzner server).
   - WooCommerce API keys for the get_order, get_orders, and get_user nodes (for magnanigioielli.com).
   - WordPress HTTP auth credentials for the Get tracking node in the sub-workflow.
   - Pre-populate the vector database: the RAG system requires a pre-filled Qdrant collection with your store's knowledge base (e.g., policy documents, product info). Sticky Note2 links to a guide on building this RAG system.

2. Workflow activation in n8n. Save this JSON workflow in your n8n instance and activate it. Activation is crucial, as n8n only listens for webhook triggers when the workflow is active. Note the unique public webhook URLs generated for the Webhook (post-sales agent) and rag (RAG system) nodes; you will need these URLs in the next step.

3. VAPI configuration. Create two API tools in VAPI:
   - Tool 1 (post-sales): create an "API Request" tool, connect it to the n8n Webhook URL, and configure the request body to send the email and n_order parameters based on the conversation with the user.
   - Tool 2 (RAG): create another "API Request" tool, connect it to the n8n rag webhook URL, and configure the request body to send a search parameter containing the user's query.

   Build the assistant: create a new assistant in VAPI, write a system prompt that instructs the AI on when to use each of the two tools, and add both tools in the "Tools" tab.

   Go live: add a phone number (e.g., from Twilio) to your VAPI assistant and set it to "Inbound" to receive customer calls.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
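On the n8n side, the response sent back to VAPI has to tie the result to the originating tool call. A hedged sketch of that final step; the payload paths and response shape follow VAPI's tool-calling conventions as I understand them and should be verified against VAPI's docs:

```javascript
// n8n Code node before "Respond to Webhook": wrap the agent's answer in the
// response shape VAPI expects for an API tool call (verify against VAPI docs).
const toolCall = $json.message?.toolCalls?.[0]; // the call VAPI sent us
const answer = 'Your order #1234 shipped yesterday and arrives Friday.';

return [{
  json: {
    results: [
      {
        toolCallId: toolCall?.id, // ties the result back to the request
        result: answer,           // the text VAPI will read to the caller
      },
    ],
  },
}];
```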
by David Olusola
**Auto-Summarize Zoom Recordings to Slack & Email**

Never lose meeting insights again! This workflow automatically summarizes Zoom meeting recordings using OpenAI GPT-4 and delivers structured notes directly to Slack and email.

**How It Works**
1. Zoom webhook: triggers when a recording is completed.
2. Normalize Data: extracts meeting details and the transcript.
3. OpenAI GPT-4: creates a structured meeting summary.
4. Slack: posts the summary to your chosen channel.
5. Email: delivers the summary to your inbox.

**Setup Steps**
1. Zoom: create a Zoom App subscribed to the recording.completed event and add the workflow's webhook URL.
2. OpenAI: add your API key to n8n; use GPT-4 for best results.
3. Slack: connect Slack credentials and replace YOUR_SLACK_CHANNEL with your channel ID.
4. Email: connect Gmail or SMTP and replace the recipient email(s).

**Example Slack Message**

Zoom Summary
Topic: Sales Demo Pitch
Host: alex@company.com
Date: 2025-08-30
Summary:
- Reviewed Q3 sales pipeline
- Discussed objections handling
- Assigned action items for next week

Get instant summaries from every Zoom meeting, with no more manual note-taking!
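The Normalize Data step is mostly field plucking from Zoom's recording.completed event. A minimal sketch, assuming Zoom's documented payload shape (verify the exact paths in your webhook logs):

```javascript
// n8n Code node: pull the fields the summary prompt needs from Zoom's
// recording.completed event. Paths follow Zoom's docs; verify in your logs.
const obj = $json.body?.payload?.object ?? {};

const transcriptFile = (obj.recording_files ?? [])
  .find((f) => f.file_type === 'TRANSCRIPT');

return [{
  json: {
    topic: obj.topic,
    host: obj.host_email,
    date: obj.start_time,
    transcriptUrl: transcriptFile?.download_url, // fetch this next to get the text
  },
}];
```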
by Evgeny Agronsky
**What it does**

Automates code review by listening for a comment trigger on GitLab merge requests, summarising the diff, and using an LLM to post constructive, line-specific feedback. If a JIRA ticket ID is found in the MR description, the ticket's summary is used to inform the AI review.

**Use cases**
- Quickly obtain high-quality feedback on MRs without waiting for peers.
- Highlight logic, security, or performance issues that might slip through cursory reviews.
- Incorporate project context by pulling in related JIRA ticket summaries.

**Good to know**
- Triggered by commenting ai-review on a merge request.
- The LLM returns only high-value findings; if nothing critical is detected, the workflow posts an "all clear" message.
- You can swap out the LLM (Gemini, OpenAI, etc.) or adjust the prompt to fit your team's guidelines.
- AI usage may incur costs or be geo-restricted depending on your provider.

**How it works**
- **Webhook listener:** A Webhook node captures GitLab note events and filters for the trigger phrase (see the sketch below).
- **Fetch & parse:** The workflow retrieves MR details and diffs, splitting each change into "original" and "new" code blocks.
- **Optional JIRA context:** If your MR description includes a JIRA key (e.g., PROJ-123), the workflow fetches the ticket (and the parent ticket for subtasks) and composes a brief context summary.
- **LLM review:** The parsed diff and optional context are sent to an LLM with instructions to identify logic, security, or performance issues and suggest improvements.
- **Post results:** Inline comments are posted back to the MR at the appropriate file/line positions; if no issues are found, a single "all clear" note is posted.

**How to use**
1. Import the template JSON and open the Webhook node. Replace the REPLACE_WITH_UNIQUE_PATH placeholder with your desired path and configure a GitLab project webhook to send MR comments to that URL.
2. Select your LLM credentials in the Gemini (or other LLM) node, and optionally add JIRA credentials in the JIRA nodes.
3. Activate the workflow and comment ai-review on any merge request to test it. For each review, the workflow posts status updates ("AI review initiated…") and final comments.

**Requirements**
- A GitLab project and a generated Personal Access Token (PAT) stored as an environment variable (GITLAB_TOKEN).
- LLM credentials (e.g., Google Gemini) and optional JIRA credentials.

**Customising this workflow**
- Change the trigger phrase in the Trigger Phrase Filter node.
- Modify the LLM prompt to focus on different aspects (e.g., style, documentation).
- Filter out certain file types or directories before sending diffs to the LLM.
- Integrate other services (Slack, email) to notify teams when reviews are complete.
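As referenced above, the webhook filter reduces to a few field checks on GitLab's note-event payload. A minimal sketch of the equivalent Code-node logic:

```javascript
// n8n Code node: only continue for MR comments containing the trigger
// phrase. Field paths follow GitLab's note-event webhook payload.
const body = $json.body ?? {};
const isMrNote =
  body.object_kind === 'note' &&
  body.object_attributes?.noteable_type === 'MergeRequest';

const hasTrigger = (body.object_attributes?.note ?? '').includes('ai-review');

if (!(isMrNote && hasTrigger)) {
  return []; // drop the event; not a review request
}

return [{
  json: {
    projectId: body.project?.id,
    mrIid: body.merge_request?.iid, // needed for the diff and comment API calls
  },
}];
```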
by Avkash Kakdiya
**How it works**

This workflow captures idea submissions from a webhook and enriches them using AI. It extracts key fields like Title, Tags, Submitted By, and Created date in IST format. The cleaned data is stored in a Notion database for centralized tracking. Finally, a confirmation message is posted in Slack to notify the team.

**Step-by-step**

1. Capture and process the submission
   - **Webhook:** receives idea submissions with text and a user ID.
   - **AI Agent & OpenAI Model:** enrich and structure the input into Title, Tags, Submitted By, and Created fields.
   - **Code:** extracts clean data, formats tags, and prepares the entry for Notion (see the sketch below).

2. Store in Notion
   - **Add to Notion:** creates a new database entry with the mapped fields: Title, Submitted By, Tags, Created.

3. Notify in Slack
   - **Send Confirmation (Slack):** posts a confirmation message with the submitted idea title.

**Why use this?**
- Centralizes idea collection directly into Notion for better organization.
- Eliminates manual formatting with AI-powered data structuring.
- Ensures consistency in tags, submitter info, and timestamps.
- Provides instant team-wide visibility via Slack notifications.
- Saves time while keeping idea management streamlined and transparent.
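A minimal sketch of what the Code node's cleanup can look like. The input field names (title, tags, submittedBy) are assumptions; match them to your AI Agent's structured output:

```javascript
// n8n Code node: tidy the AI output for Notion. Input field names are
// assumptions; align them with the AI Agent's structured output.
const { title, tags, submittedBy } = $json;

// Normalize tags into a clean array for Notion's multi-select field.
const cleanTags = (Array.isArray(tags) ? tags : String(tags).split(','))
  .map((t) => t.trim())
  .filter(Boolean);

// Format the Created timestamp in IST (UTC+5:30).
const created = new Date().toLocaleString('en-IN', {
  timeZone: 'Asia/Kolkata',
  dateStyle: 'medium',
  timeStyle: 'short',
});

return [{ json: { title, tags: cleanTags, submittedBy, created } }];
```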
by Raz Hadas
This n8n template demonstrates how to automate stock market technical analysis to detect key trading signals and send real-time alerts to Discord. It's built to monitor for the Golden Cross (a bullish signal) and the Death Cross (a bearish signal) using simple moving averages.

Use cases are many: automate your personal trading strategy, monitor a portfolio for significant trend changes, or provide automated analysis highlights for a trading community or client group.

**Good to know**
- This template relies on the Alpha Vantage API, which has a free tier with usage limits (e.g., API calls per minute and per day). Be mindful of these limits, especially if monitoring many tickers.
- The data provided by free APIs may have a slight delay and is intended for informational and analysis purposes.
- Disclaimer: this workflow is an informational tool and does not constitute financial advice. Always do your own research before making any investment decisions.

**How it works**
1. The workflow triggers automatically every weekday at 5 PM, after the typical market close.
2. It fetches a list of user-defined stock tickers from the Set node.
3. For each stock, it gets the latest daily price data from Alpha Vantage via an HTTP Request and stores the new data in a PostgreSQL database to maintain a history.
4. The workflow then queries the database for the last 121 days of data for each stock.
5. A Code node calculates two Simple Moving Averages (SMAs): a short-term (60-day) and a long-term (120-day) average, for both today and the previous day (a sketch of the cross-detection logic follows below).
6. Using If nodes, it compares the SMAs to see whether a Golden Cross (short-term crosses above long-term) or a Death Cross (short-term crosses below long-term) has just occurred.
7. Finally, a formatted alert message is sent to a specified Discord channel via a webhook.

**How to use**
1. Configure your credentials for PostgreSQL and select them in the two database nodes.
2. Get a free Alpha Vantage API key and add it to the "Fetch Daily History" node. For best practice, create a Header Auth credential for it.
3. Paste your Discord webhook URL into the final "HTTP Request" node.
4. Update the list of stock symbols in the "Set - Ticker List" node to monitor the assets you care about.
5. The workflow is set to run on a schedule, but you can press "Test workflow" to trigger it manually at any time.

**Requirements**
- An Alpha Vantage account for an API key.
- A PostgreSQL database to store historical price data.
- A Discord account and a server where you can create a webhook.

**Customising this workflow**
- Easily change the moving-average periods (e.g., from 60/120 to 50/200) by adjusting the SMA_SHORT and SMA_LONG variables in the "Compute 60/120 SMAs" Code node.
- Modify the alert messages in the "Set - Golden Cross Msg" and "Set - Death Cross Msg" nodes.
- Swap out Discord for another notification service like Slack or Telegram by replacing the final HTTP Request node.
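As noted in step 5, the cross detection is simple arithmetic over the price history. A minimal sketch of the Code-node logic, assuming the query returns the last 121 daily rows oldest-first; the SMA_SHORT/SMA_LONG names mirror the template's Code node, while the "close" column name is an assumption:

```javascript
// Sketch of the 60/120 SMA cross check, n8n Code-node style. Assumes the
// prior Postgres query returned 121 daily rows, oldest first, with a
// closing-price column named "close" (an assumption).
const closes = items.map((i) => Number(i.json.close));

const SMA_SHORT = 60;
const SMA_LONG = 120;

// Average of the last n values, shifted back by `offset` days.
const sma = (arr, n, offset = 0) => {
  const slice = arr.slice(arr.length - n - offset, arr.length - offset);
  return slice.reduce((sum, v) => sum + v, 0) / n;
};

const shortToday = sma(closes, SMA_SHORT);
const longToday = sma(closes, SMA_LONG);
const shortYesterday = sma(closes, SMA_SHORT, 1);
const longYesterday = sma(closes, SMA_LONG, 1);

// A cross fires only on the day the ordering of the two averages flips.
const goldenCross = shortYesterday <= longYesterday && shortToday > longToday;
const deathCross = shortYesterday >= longYesterday && shortToday < longToday;

return [{ json: { shortToday, longToday, goldenCross, deathCross } }];
```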