by Oriol Seguí
This fun workflow automates the generation and delivery of personalized jokes by email, based on the names or objects entered in a form. The process works as follows:
On form submission: the workflow starts when someone submits a form with the names or objects required to create the joke. You can modify the form fields to make the jokes more creative.
Set the output language: manually define the language in which you want to receive the joke.
OpenAI Message Model: uses the OpenAI model to generate the joke from the prompt, in the chosen language. (The response is limited to around 200 tokens.)
Gmail: send message: the generated joke is automatically sent to the specified email address via Gmail.
by Anthony
This n8n workflow automates the process of researching companies by gathering relevant data such as traffic volume, foundation details, funding information, founders, and more. The workflow leverages the ProspectLens API, which is particularly useful for researching companies commonly found on Crunchbase and LinkedIn. ProspectLens is an API that provides very detailed company data. All you need to do is supply the company's domain name. You can obtain your ProspectLens API key here: https://apiroad.net/marketplace/apis/prospectlens In n8n, create a new "HTTP Header" credential. Set x-apiroad-key as the "Name" and enter your APIRoad API key as the "Value". Use this credential in the HTTP Request node of the workflow.
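As a sketch of what the HTTP Request node does, the call can be reproduced in a few lines of JavaScript. Only the `x-apiroad-key` header name is confirmed above; the endpoint path and `domain` query parameter are assumptions, so check the APIRoad marketplace page for the exact URL:

```javascript
// Hypothetical ProspectLens lookup via APIRoad. The path and "domain"
// parameter are assumptions; only the x-apiroad-key header is given above.
const API_KEY = process.env.APIROAD_API_KEY || 'YOUR-APIROAD-KEY';

function buildProspectLensRequest(domain) {
  return {
    url: `https://api.apiroad.net/prospectlens/company?domain=${encodeURIComponent(domain)}`,
    headers: { 'x-apiroad-key': API_KEY }, // same header name as the n8n credential
  };
}

// Usage (e.g. in an n8n Code node):
// const req = buildProspectLensRequest('example.com');
// const data = await fetch(req.url, { headers: req.headers }).then(r => r.json());
```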
by Peter
Store a key with a value in a local JSON file. Multiple keys can be saved in a single file. Related workflow: GetKey
Create a subfolder in your n8n home directory: /home/node/.n8n/local-files. In Docker, look at the data path and create a subfolder local-files there. Set the correct ownership with chown 1000.1000 local-files. Put the workflow code in a new workflow named WriteKey. Create another workflow with a Function Item node:

```javascript
return {
  file: '/4711.json', // 4711 should be your workflow id
  key: 'MyKey',
  value: 'MyValue'
};
```

Pipe the Function Item output into an Execute Workflow node that calls the WriteKey workflow. It would be nice if we could someday get a shiny built-in n8n node that does the job. :)
by Zacharia Kimotho
How to scrape emails from websites
This workflow shows how to quickly build an email scraping API using n8n. Email marketing is at the core of most marketing strategies, be it content marketing, sales, etc. As such, being able to find contacts for your business in bulk and at scale is key. There are tools available in the market that can do this, but most are premium, so why not build a custom one with n8n?
Usage
The workflow gets the data from a website and performs an extraction based on the data found on the website.
Copy the webhook URL to your browser.
Add a query parameter, e.g. ?Website=https://mailsafi.com. This should give you a URL like {your-n8n-hosting-url}/webhook/ea568868-5770-4b2a-8893-700b344c995e?Website=https://mailsafi.com
Click on the URL and wait for the extracted email to be displayed. This will return the email address found on the website, or, if there is no email, the response will be "workflow successfully executed."
Make sure to include the http:// (or https://) prefix in the domain; otherwise, you may get an error.
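The extraction step itself boils down to matching email patterns in the fetched page. A minimal sketch of that idea (not necessarily the exact expression the workflow uses):

```javascript
// Pull unique email addresses out of a page's HTML with a simple regex.
function extractEmails(html) {
  const pattern = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;
  return [...new Set(html.match(pattern) || [])]; // de-duplicated list
}
```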
by JPres
n8n Template: Store Chat Data in Supabase PostgreSQL for WhatsApp/Slack Integration
This n8n template captures chat data (like user ID, name, or address) and saves it to a Supabase PostgreSQL database. It’s built for testing now but designed to work with WhatsApp, Slack, or similar platforms later, where chat inputs aren’t predefined. A guide with images can be found at: https://github.com/JimPresting/Supabase-n8n-Self-Hosted-Integration/
Step 1: Configure Firewall Rules in Your VPC Network
To let your n8n instance talk to Supabase, add a firewall rule in your VPC network settings (e.g., Google Cloud, AWS). Go to VPC Network settings and add a new firewall rule:
- Name: allow-postgres-outbound
- Direction: Egress (outbound traffic)
- Destination Filter: IPv4 ranges
- Destination IPv4 Ranges: 0.0.0.0/0 (allows all; restrict to Supabase IPs for security)
- Source Filter: pick IPv4 ranges and add the n8n VM’s IP range, or pick None if any VM can connect
- Protocols and Ports: TCP, port 5432 (default PostgreSQL port)
Save the rule.
Step 2: Get the Supabase Connection String
Log into your Supabase Dashboard, go to your project, and click the Connect button in the header. Copy the PostgreSQL connection string:
postgresql://postgres.fheraruzdahjd:[YOUR-PASSWORD]@aws-0-eu-central-1.pooler.supabase.com:6543/postgres
Replace [YOUR-PASSWORD] with your Supabase password (no brackets) and replace the identifier before it with your project’s actual unique identifier. Note the port (6543 or 5432) and use whichever is in the string.
Step 3: Set Up the n8n Workflow
This workflow takes chat data, maps it to variables, and stores it in Supabase. It’s built to handle messy chat inputs from platforms like WhatsApp or Slack in production.
Workflow Steps
- Trigger Node: "When clicking 'Test workflow'" (manual trigger). For now it’s manual; in production this will be a WhatsApp or Slack message trigger, which won’t have a fixed input format.
- Set Node: "Set sample input variables (manual)". This node sets variables like id, name, and address to mimic chat data. Why? Chat platforms send unstructured data (e.g., a message with a user’s name or address); we map it to variables so we can store it properly. The id will be something unique like a phone number, account ID, or account number.
- Sample Agent Node: uses a model (e.g., Gemini Flash 2.0; the specific model doesn’t matter). This is a placeholder to process data (e.g., clean or validate it) before saving. You can skip or customize it.
- Supabase PostgreSQL Node: "Supabase PostgreSQL Database". Connects to Supabase using the connection string from Step 2 and saves the variables (id, name, address) to a table. Why store extra fields? The id (like a phone number or account ID) is the key; extra fields like name or address let us keep all user info in one place for later use (e.g., analytics or replies).
- Output Node: "Update additional values e.g., name, address". Confirms the data is saved. In production, this could send a reply to the chat platform.
Why This Design?
- Handles unstructured chat data: WhatsApp or Slack messages don’t have a fixed format. The Set node lets us map any incoming data (e.g., id, name) to our database fields.
- Scales for production: using id as a key (phone number, account ID, etc.) with extra fields like name makes this workflow flexible for many use cases, like user profiles or support logs.
- Future-ready: it’s built to swap the manual trigger for a real chat platform trigger without breaking.
Step 4: Configure the Supabase PostgreSQL Node
In the n8n workflow, set up the Supabase PostgreSQL node:
- Host: aws-0-eu-central-1.pooler.supabase.com (from the connection string)
- Port: 6543 (or whatever is in the connection string)
- Database: postgres
- User: postgres.fhspudlibstmpgwqmumo (from the connection string)
- Password: your Supabase password
- SSL: enable (Supabase usually requires it)
Set the node to Insert or Update, map id to a unique column in your Supabase table (e.g., phone number, account ID), and map fields like name and address to their columns. Test the workflow to confirm data saves correctly.
Security Tips
- Limit firewall rules: don’t use 0.0.0.0/0; find Supabase’s IP ranges in their docs and use those.
- Hide passwords: store your Supabase password in n8n’s environment variables.
- Use SSL: enable SSL in the n8n node for secure data transfer.
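Under the hood, the Insert-or-Update setting corresponds to a PostgreSQL upsert. A sketch of the query the node effectively runs, built as a parameterized statement (the chat_users table name and the column set are assumptions matching the fields above):

```javascript
// Build a parameterized PostgreSQL upsert keyed on the unique id column.
// Table and column names are illustrative; adapt them to your schema.
function buildUpsert(table, row) {
  const cols = Object.keys(row);                           // e.g. ['id','name','address']
  const placeholders = cols.map((_, i) => `$${i + 1}`);
  const updates = cols.filter(c => c !== 'id').map(c => `${c} = EXCLUDED.${c}`);
  return {
    text: `INSERT INTO ${table} (${cols.join(', ')}) VALUES (${placeholders.join(', ')}) ` +
          `ON CONFLICT (id) DO UPDATE SET ${updates.join(', ')}`,
    values: cols.map(c => row[c]),
  };
}
```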
by David Olusola
When you fill out the form with business challenges and requirements, GPT-4 analyzes the input and generates a customized proposal using your template. The system automatically creates a Google Slides presentation with personalized content, and a professional proposal email is sent directly to the prospect with the presentation link.
Set up steps (estimated time: 15–20 minutes)
Connect your OpenAI API key for GPT-4 access
Link your Google account for Slides and Gmail integration
Create your proposal template in Google Slides with placeholder variables
Customize the AI prompt and email template with your branding
Test with sample data and activate the workflow
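The placeholder-variable idea can be sketched in a few lines: the AI output supplies values for tokens like {{client_name}} in the template text (the token syntax and names here are illustrative, not the template's required format):

```javascript
// Replace {{placeholder}} tokens with values; unknown tokens are left intact
// so missing data is easy to spot in the generated deck.
function fillTemplate(template, vars) {
  return template.replace(/{{(\w+)}}/g, (match, key) =>
    key in vars ? vars[key] : match
  );
}
```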
by Emmanuel Bernard
🎉 Do you want to master AI automation, so you can save time and build cool stuff? I’ve created a welcoming Skool community for non-technical yet resourceful learners. 👉🏻 Join the AI Atelier 👈🏻
Accepting payments via credit card online is a crucial component for the majority of businesses. Stripe provides a robust suite of tools for processing payments, yet many people still find it challenging to create a simple payment page and distribute it to their customers.
📋 Blog post 📺 YouTube Video
This n8n workflow aims to offer the simplest and most direct method for generating a Stripe payment link.
Features
Quick Stripe payment link creation: simply enter a title and select a price to create a Stripe payment link in seconds.
Set Up Steps
Connect your Stripe credentials.
Fill in the config node (currency).
This n8n workflow template is crafted to significantly reduce the creation time of a Stripe payment link. Created by the n8ninja.
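For reference, creating a payment link via Stripe's REST API (which the n8n Stripe node wraps) takes a price ID and returns a shareable URL. A sketch, assuming you already have a Stripe price object:

```javascript
// Build the form-encoded body Stripe's /v1/payment_links endpoint expects.
function paymentLinkBody(priceId, quantity = 1) {
  return new URLSearchParams({
    'line_items[0][price]': priceId,
    'line_items[0][quantity]': String(quantity),
  }).toString();
}

// Create the payment link and return its shareable URL.
async function createPaymentLink(apiKey, priceId) {
  const res = await fetch('https://api.stripe.com/v1/payment_links', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body: paymentLinkBody(priceId),
  });
  if (!res.ok) throw new Error(`Stripe error: ${res.status}`);
  return (await res.json()).url;
}
```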
by Lorena
This workflow allows you to collect tweets, store them in MongoDB, analyse their sentiment, insert them into a Postgres database, and post positive tweets in a Slack channel.
Cron node: schedules the workflow to run every day.
Twitter node: collects tweets.
MongoDB node: inserts the collected tweets in MongoDB.
Google Cloud Natural Language node: analyses the sentiment of the collected tweets.
Set node: extracts the sentiment score and magnitude.
Postgres node: inserts the tweets and their sentiment score and magnitude in a Postgres database.
IF node: filters tweets with positive and negative sentiment scores.
Slack node: posts tweets with a positive sentiment score in a Slack channel.
NoOp node: ignores tweets with a negative sentiment score.
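The Set + IF pair can be sketched as a single function: pull score and magnitude out of the Natural Language response and route on the score's sign (documentSentiment is the object Google's analyzeSentiment API returns):

```javascript
// Extract sentiment fields and decide the branch the IF node would take.
function routeBySentiment(response) {
  const { score, magnitude } = response.documentSentiment;
  return { score, magnitude, positive: score > 0 };
}
```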
by Shahrukh
AI-Powered Workflow for Auto-Responding to Positive Cold Email Replies
This workflow is designed for agencies, freelancers, and sales teams who want to turn positive cold email replies into booked meetings automatically, without hiring VAs or spending hours on manual responses.
❓ The Problem
Most teams waste time replying manually or pay for virtual assistants, leading to delays and missed opportunities. This template eliminates that bottleneck.
✅ What the Workflow Does
Detects positive replies from Instantly.ai campaigns
Uses AI to analyze intent and craft natural, human-like responses
Adds personalization to keep replies authentic
Includes Calendly links, product docs, or FAQs based on the lead’s intent
Sends responses instantly, so you never miss a hot lead again
No robotic AI text. Just smooth, human-style emails that get calls booked faster.
👥 Who Is This For?
Agencies running Instantly.ai or similar outbound tools
Founders handling their own cold email outreach
Sales teams looking to automate follow-up and booking
Anyone who gets 5–20 positive replies a week and wants to 2x–4x conversions
✅ Requirements
n8n (Cloud or self-hosted)
Instantly.ai account with API access
OpenAI API key (stored securely in n8n credentials)
(Optional) Calendly or booking link, Notion or Google Docs for resources
⚙️ How to Set Up
Import the workflow into n8n
Add your Instantly.ai API credentials and OpenAI key using n8n’s credential manager
Customize the AI prompt for your tone, CTA, and offer
Insert your Calendly or booking link in the response template
Test with one positive reply to confirm filtering and response quality
Activate the workflow to auto-reply in real time
🔧 How to Customize
Adjust the filtering logic for different keywords or intent signals
Add branching for multiple booking links (e.g., based on region or service type)
Push responses to a CRM for tracking
Include extra resources like case studies or pricing docs
by Mobder
This workflow automatically connects to a Cloudflare R2 bucket (via the S3-compatible API), filters out files older than 14 days, deletes them, and then sends a Telegram notification for each deletion. It runs on a daily schedule.
🕘 Schedule Trigger: executes the workflow once a day at a specified hour (e.g., 9 AM).
📦 S3 Node – List Files: retrieves all objects from a specific folder (prefix) in a Cloudflare R2 bucket using the S3 API.
🔎 Code Node – Filter Files Older Than 2 Weeks: filters the retrieved files by comparing their LastModified timestamps to the current date. Files older than 14 days (2 weeks) are selected for deletion.
🗑️ S3 Node – Delete File: deletes each filtered file from the R2 bucket.
📨 Telegram Node – Notify Deletion: sends a Telegram message with the name of the deleted file to a specified chat ID.
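The Code node's filtering can be sketched like this (LastModified matches the field name in an S3-style object listing):

```javascript
// Keep only objects whose LastModified timestamp is more than 14 days old.
const TWO_WEEKS_MS = 14 * 24 * 60 * 60 * 1000;

function filterOldFiles(objects, now = Date.now()) {
  return objects.filter(o => now - new Date(o.LastModified).getTime() > TWO_WEEKS_MS);
}
```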
by Open Paws
This sub-workflow uses two custom Hugging Face regression models from Open Paws to evaluate and predict the real-world performance and advocacy alignment of text content. It’s designed to support animal advocacy organizations in optimizing their messaging across platforms like social media, email campaigns, and more.
🛠️ What It Does
Sends input text to two deployed Hugging Face endpoints:
Predicted Performance Model – estimates real-world content success (e.g., engagement, shares, opens) based on patterns from real online data.
Advocate Preference Model – predicts how well the content will resonate with animal advocates (emotional impact, relevance, rationality, etc.).
Outputs structured scores for both models.
Can be integrated into larger workflows for automated content review, filtering, or revision.
📊 About the Models
Text Performance Prediction Model: trained on real-world data from 30+ animal advocacy organizations, this model predicts the actual online performance of content, including social media, email marketing, and other outreach channels.
Advocate Preference Prediction Model: trained on ratings from animal advocates to evaluate how well a piece of text aligns with advocacy goals and values.
Model repositories:
open-paws/text_performance_prediction_longform
open-paws/animal_advocate_preference_prediction_longform
> 📌 You must deploy each model as an inference endpoint on Hugging Face. Click "Deploy" on each model’s repo, then add the endpoint URL and your Hugging Face access token using n8n credentials.
📦 Use Cases
Advocacy content review before publishing
Automated scoring of outreach messages
Filtering or flagging content with low predicted impact
A/B testing support for message optimization
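Calling a deployed endpoint is a single authenticated POST. A sketch of the request the workflow sends (the endpoint URL is whatever Hugging Face shows after you click Deploy, and the response shape depends on how the endpoint is configured):

```javascript
// Build the request for a Hugging Face inference endpoint; the {"inputs": ...}
// payload is the standard shape for text models.
function hfRequest(endpointUrl, hfToken, text) {
  return {
    url: endpointUrl,
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${hfToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ inputs: text }),
    },
  };
}

// Usage: const req = hfRequest(url, token, 'Adopt, don\'t shop.');
//        const scores = await fetch(req.url, req.options).then(r => r.json());
```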
by Tom
This workflow builds a valid RSS feed (which is an XML feed under the hood) for ARD Audiothek podcasts. This allows you to subscribe to such podcasts using your favourite podcatcher without using the ARD Audiothek app. The example builds a feed for Kalk & Welk, but the workflow can easily be adjusted by providing another podcast URL in the Get overview page HTTP Request node. To subscribe to the feed, activate your n8n workflow and then use the Production URL from the initial Feed Webhook node in your podcatcher. I've tested the resulting feed using Pocket Casts and Miniflux. When using Miniflux, you can add the feed to your account via its Add Feed page; make sure you select Private when doing so to avoid sharing your n8n instance with the world. The resulting feed passes the W3C Feed Validation Service. The workflow can also be used as a foundation to free other podcasts from proprietary big media platforms, though not all of them will be as simple to deal with as the ARD Audiothek.
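A valid RSS 2.0 podcast feed needs little more than a channel plus item/enclosure entries. A minimal sketch of the XML the workflow assembles (the episode fields are illustrative, and real input should be XML-escaped):

```javascript
// Build a minimal RSS 2.0 feed with audio enclosures, the structure a
// podcatcher expects. Input values should be XML-escaped in real use.
function buildRss(title, link, episodes) {
  const items = episodes.map(e => `
    <item>
      <title>${e.title}</title>
      <enclosure url="${e.audioUrl}" type="audio/mpeg"/>
      <pubDate>${e.pubDate}</pubDate>
      <guid>${e.audioUrl}</guid>
    </item>`).join('');
  return `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>${title}</title>
    <link>${link}</link>${items}
  </channel>
</rss>`;
}
```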