by Blue Code
It allows you to automate candidate retrieval and onboarding in your HR processes.

**How it works**
- It monitors a Gmail address for new emails with a PDF attachment.
- It expects the PDF to be a candidate's CV, extracts the text using OCR, and then structures the data using ChatGPT.
- Once the data is processed, it connects to Notion and adds (or updates) an entry in the specified database.

**How to use**
- Configure your Gmail account and provide your ChatGPT API key.
- Provide an API key for the OCR service in a variable named OCR_SPACE_API_KEY.
- Connect your Notion account.
- Once everything is configured, the workflow will monitor your inbox for new emails. Just send an email with a PDF attachment to the configured address.

**Requirements**
- In addition to Gmail, ChatGPT, and Notion, the system uses a third-party OCR API (OCR SPACE). You'll need to create an account and obtain an API key.
- You must map the fields returned by ChatGPT to the Notion database, or use the same field names we are using.

**Customising**
It should be easy to replace Notion with PostgreSQL or another database if needed.
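For the field-mapping step, a minimal Code-node sketch of how the structured CV data might be normalised into fields that match the Notion database columns. The input field (`output`) and all field names are assumptions — align them with your own ChatGPT output and database.

```js
// Hypothetical Code node: normalise the structured CV data returned by ChatGPT
// into flat fields whose names match the Notion database columns.
// All field names are illustrative only.
const cv = typeof $json.output === "string" ? JSON.parse($json.output) : $json;

return [
  {
    json: {
      Name: cv.name || "",
      Email: cv.email || "",
      Phone: cv.phone || "",
      Skills: Array.isArray(cv.skills) ? cv.skills.join(", ") : cv.skills || "",
    },
  },
];
```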
by Roni Bandini
This workflow receives plain English instructions from a retro console via a webhook. Using an AI agent, it can combine multiple tools to read general RSS news headlines, stock market updates, emails, calendar events, search X, send Telegram messages, and run Linux commands. The idea is to avoid using smartphones or regular laptops in the morning, and instead use a retro console installed on an old notebook or netbook. You will need to copy a Python script onto the notebook, configure the webhook URL, and set up all the required credentials.

**Steps**
1. Set up a Gemini API key, plus Google Gmail and Calendar credentials from console.google.com.
2. Set up X credentials, the RSS URL, etc.
3. Obtain the webhook URL and paste it into the Python code to be executed on the Linux machine.
4. Run the Python script with `python3 console.py`.

Note: if you ask for a Linux command, the command will not only be returned but also executed.
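Under the hood, the console client just POSTs the typed instruction to the webhook and prints the agent's reply. A minimal sketch of that request/response contract, shown in JavaScript for consistency with the other examples here (the actual client is the Python script above; the webhook path and field names are assumptions — match them to your own Webhook node):

```js
// Illustrative only: shows the payload shape the webhook might receive
// and the reply shape it might return. Adapt names to your workflow.
const WEBHOOK_URL = "https://your-n8n-instance/webhook/retro-console"; // assumption

async function askAgent(instruction) {
  const res = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ instruction }), // hypothetical field name
  });
  const data = await res.json();
  console.log(data.reply); // hypothetical response field
}

askAgent("Read me this morning's news headlines");
```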
by Nabin Bhandari
**Who's it for**
This template is designed for bakeries, event planners, and e-commerce platforms that want to automatically generate custom cake designs. It's also ideal for marketers or digital creators who need personalized celebratory visuals for social media or email campaigns.

**How it works**
This workflow converts simple user input (e.g., "Sarah's Birthday") into a creative cake design:
1. Webhook: Captures user input from the Bolt frontend form.
2. OpenAI GPT: Generates a detailed and creative cake design prompt.
3. Replicate Flux Schnell: Produces a unique cake image using the AI-generated prompt.
4. HTTP Response: Sends the final cake image back to the frontend.

**How to set up**
1. Import this template into n8n.
2. Add your OpenAI API Key under n8n Credentials for the OpenAI Chat Model node.
3. Add your Replicate API Token as an HTTP Header Auth credential (do not hardcode it).
4. Update the Webhook node URL in the Bolt frontend form to send a POST request to n8n (see the sketch below).
5. (Optional) Customize the OpenAI prompt in the Prompt Generator node to adjust cake style, colors, or decorations.

**Requirements**
- n8n account (cloud or self-hosted).
- **OpenAI API Key** for prompt generation.
- **Replicate API Token** for AI image generation.
- A Bolt frontend or any form that can call the webhook endpoint.

**How to customize the workflow**
- Replace "cake" with any product type (e.g., mugs, greeting cards, or T-shirts).
- Add a database node (Google Sheets or Supabase) to log user requests and images.
- Implement input moderation by adding an OpenAI moderation node before the prompt generation.
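A minimal sketch of the frontend call to the Webhook node. The URL and field names are assumptions — match them to your Webhook node configuration and whatever your form actually collects.

```js
// Hypothetical Bolt frontend call to the n8n Webhook node.
async function requestCakeDesign(occasion) {
  const res = await fetch("https://your-n8n-instance/webhook/cake-design", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ occasion }), // e.g. "Sarah's Birthday"
  });
  const data = await res.json();
  return data.imageUrl; // hypothetical field returned by the HTTP Response node
}
```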
by Oneclick AI Squad
This n8n workflow automates the collection and processing of trip feedback data using Google Sheets as the backend. When new users are added to the system, they automatically receive feedback forms via email, and all responses are systematically processed and stored in Google Sheets for analysis and record-keeping.

**Good to know**
- The delay buffer prevents system overload and ensures data integrity before sending notifications.
- All feedback data is automatically organized and maintained in Google Sheets for easy access and analysis.
- The workflow handles both new user onboarding and trip feedback submission seamlessly.

**How it works**
1. The Trigger - New User Entry node detects when a new user is added to the Google Sheets feedback form database.
2. The Delay - Process Buffer node introduces a processing delay to ensure data is fully processed before sending notifications, avoiding premature actions.
3. The Send Email To That New User node automatically sends a feedback form email to the newly registered user.
4. When a user submits their trip feedback, the Trigger - Trip Form Submission node captures the submission.
5. The Tack All Feedback Item node iterates over each form submission item to process multiple entries if present, ensuring all feedback data is handled.
6. The Update - Trip Feedback Sheet node appends or updates the trip feedback data in the Google Sheets, maintaining an organized record of all responses.

**How to use**
- Import the workflow into n8n and configure the nodes with your Google Sheets API credentials and email service settings.
- Set up your Google Sheets with the appropriate columns for user data and feedback responses.
- Test the workflow by adding a new user entry to verify email delivery and feedback processing.

**Requirements**
- Google Sheets API credentials with read/write permissions
- Email service configuration (SMTP or email API)
- Access to Google Sheets containing user data and feedback forms

**Customising this workflow**
- Modify the email template in the Send Email To That New User node to match your branding and feedback requirements.
- Adjust the delay timing in the Delay - Process Buffer node based on your system's processing needs.
- Customize the Google Sheets structure and update the Update - Trip Feedback Sheet node accordingly to match your data organization preferences.
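For reference, a minimal sketch of what a Code node between the form trigger and the Google Sheets node might do — emitting one row per feedback item with keys that match the sheet columns. The column names here are assumptions; align them with your own sheet.

```js
// Hypothetical normalization step: one output item per feedback entry,
// with keys matching the Google Sheets columns (names are illustrative).
const rows = [];

for (const item of $input.all()) {
  const f = item.json;
  rows.push({
    json: {
      email: f.email,
      trip: f.trip_name,
      rating: Number(f.rating) || null,
      comments: f.comments || "",
      submitted_at: new Date().toISOString(),
    },
  });
}

return rows;
```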
by Vincent Belmehel
**Purpose**
This workflow automatically creates a subscriber in a given Beehiiv publication when a new opt-in is registered in a given Systeme.io sales funnel.

Good to know: the integration with Systeme.io is done at the sales funnel level, not at the account level. If you have several sales funnels, you can use the same workflow several times.

**Quick Setup**
- Configure your sales funnel in Systeme.io to create and trigger a webhook after an opt-in.
- Open the "On New Systeme.io Optin" node to find the webhook URL needed to configure your sales funnel on Systeme.io.
- Configure the "Configure Workflow" node:
  - Add your Beehiiv publication ID.
  - If you know the subscriber's first and last name and want to send it to Beehiiv, configure the custom field names for first and last name.
  - Add one or more email addresses to which to send alert notifications in the event of a problem (separated by commas).
- If you have not already done so:
  - Connect your Beehiiv account in the "Create New Beehiiv Subscriber" node.
  - Connect your Gmail account in the "Send Email Alert (Beehiiv API error)" node.

**How It Works**
- As soon as a new opt-in is registered on your sales funnel, Systeme.io triggers the workflow (via a webhook).
- Only requests actually coming from Systeme.io are considered (their IP addresses are whitelisted for security reasons — see the sketch below).
- A new subscriber is added to your Beehiiv publication (via an API call).
- If available in Systeme.io, UTM tags (utm_source, utm_medium and utm_campaign) are transferred to Beehiiv to correctly track where your subscribers are coming from.
- If an error occurs during the Beehiiv API call, an alert notification is sent to you (via email).

**Requirements**
- A Systeme.io account
- A Beehiiv account with an active publication
- A Gmail account

**Benefits**
- Automate & scale your email marketing efforts seamlessly
- No more manual tasks to keep your subscriber list always up-to-date
- Focus on creating a newsletter that stands out, not on the technical side

**Check Out My Other Templates**
👉 https://n8n.io/creators/belmehel/
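A minimal sketch of how such an IP whitelist check might look in a Code node placed right after the webhook. The IP list below is a placeholder (TEST-NET addresses), not Systeme.io's real range — use the addresses Systeme.io actually documents.

```js
// Hypothetical whitelist check. Replace the placeholder IPs with the
// addresses Systeme.io publishes for its webhook senders.
const ALLOWED_IPS = ["203.0.113.10", "203.0.113.11"]; // placeholders

// The Webhook node exposes incoming request headers on the item.
const headers = $json.headers || {};
const sourceIp = (headers["x-forwarded-for"] || "").split(",")[0].trim();

if (!ALLOWED_IPS.includes(sourceIp)) {
  // Drop the request: returning no items stops the rest of the workflow.
  return [];
}

return $input.all();
```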
by Arlin Perez
**AI Research Assistant via Telegram (GPT-4o mini + DeepSeek R1 + SerpAPI)**

👥 **Who's it for**
This workflow is perfect for anyone who wants to receive AI-powered research summaries directly on Telegram. Ideal for people who ask frequent product, tech, or decision-making questions and want up-to-date answers sourced from the web.

🤖 **What it does**
Users send a question via Telegram. An AI agent (DeepSeek R1) reformulates and understands the intent, while a second agent (GPT-4o mini) performs live research using SerpAPI. The most relevant answers, including links and images, are delivered back via Telegram.

⚙️ **How it works**
1. 📲 Telegram Trigger – Starts when a user sends a message to your Telegram bot.
2. 🧠 DeepSeek R1 Agent – Understands, clarifies, or reformulates the user query.
3. 🧠 Research AI Agent (GPT-4o mini + SerpAPI) – Searches the web and summarizes the best results.
4. 📤 Send Telegram Message – Sends the response back to the same user.

📋 **Requirements**
- Telegram bot (via BotFather) with API token set in n8n credentials
- OpenAI account with API key and balance for GPT-4o mini
- SerpAPI account (100 free searches/month) with API key
- DeepSeek account with API key and balance

🛠️ **How to set up**
1. Create your Telegram bot using BotFather and connect it using the Telegram Trigger node.
2. Set up DeepSeek credentials and add a Chat Model AI Agent node using DeepSeek R1 to reformulate the user's question.
3. Set up OpenAI credentials and add a second ChatGPT AI Agent node using GPT-4o mini.
4. In the GPT-4o node, enable the SerpAPI Tool and add your SerpAPI API key.
5. Pass the reformulated question from DeepSeek to the GPT-4o agent for live search and summarization.
6. Format the response (text, links, optional images) — see the sketch below.
7. Send the final reply to the user using the Telegram Send Message node.
8. Ensure your n8n instance is publicly accessible.
9. Test the workflow by sending a message to your Telegram bot. ✅
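A minimal sketch of that formatting step as a Code node. The field names on the agent output are assumptions — adjust them to whatever your Research AI Agent actually returns.

```js
// Hypothetical formatting step: turn the research agent's structured output
// into a single Telegram-friendly message with links.
const result = $json; // assumed shape: { summary, sources: [{ title, url }] }

const lines = [result.summary || "No summary available."];

for (const src of result.sources || []) {
  lines.push(`• ${src.title}: ${src.url}`);
}

return [{ json: { text: lines.join("\n") } }];
```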
by Srinivasan KB
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**What is DIGIPIN?**
DIGIPIN (Digital Pincode) is a 10-character alphanumeric code introduced by India Post. It maps any 3x3 meter square in India to a unique digital address. This helps precisely locate homes, shops, or landmarks, especially in areas where physical addresses are inconsistent or missing.

**What this workflow does**
This workflow creates a fully offline DIGIPIN microservice using only JavaScript - no external APIs are used. You get two HTTP endpoints:
- GET /generate-digipin?lat={latitude}&lon={longitude} → returns a DIGIPIN
- GET /decode-digipin?digipin={code} → returns the latitude and longitude

You can plug this into any system to:
- Convert GPS coordinates to a DIGIPIN
- Convert a DIGIPIN back to coordinates

**How it works**
1. An HTTP Webhook node receives the request.
2. A JS Function node either encodes or decodes based on input.
3. The result is returned as a JSON response.

All the logic is handled inside the workflow - no API keys, no external calls.

**Why use this**
- Fast and lightweight
- Easily extendable: you can connect this to forms, CRMs, apps, or spreadsheets
- Ideal for field agents, address validation, logistics, or rural operations
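A rough sketch of how the Function node can route between the two operations. The encodeDigipin/decodeDigipin bodies below are placeholders — the workflow itself contains the full India Post grid algorithm, which hierarchically subdivides India's bounding box into a character grid.

```js
// Sketch of the dispatch logic inside the Function/Code node.
// encodeDigipin/decodeDigipin stand in for the full algorithm.
function encodeDigipin(lat, lon) {
  // ...grid subdivision producing a 10-character code...
  return "XXXXXXXXXX"; // placeholder
}

function decodeDigipin(code) {
  // ...reverse lookup returning the cell's centre coordinates...
  return { lat: 0, lon: 0 }; // placeholder
}

const q = $json.query || {}; // query params exposed by the Webhook node

if (q.digipin) {
  return [{ json: decodeDigipin(q.digipin) }];
}

const lat = parseFloat(q.lat);
const lon = parseFloat(q.lon);
if (Number.isNaN(lat) || Number.isNaN(lon)) {
  return [{ json: { error: "Provide lat & lon, or a digipin parameter" } }];
}

return [{ json: { digipin: encodeDigipin(lat, lon) } }];
```

For brevity this sketch merges both endpoints into one node; the actual workflow uses two separate webhooks.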
by GiovanniSegar
**Video walkthrough**
https://www.youtube.com/watch?v=OwIFK-r-NtQ

**Summary of agent**
This agent can write and rewrite its own rules, allowing you to mold its behavior. It receives rules from a database as system instructions and has tools to create, edit, or delete them. This is a great baseline for new agent builds.

You can tell it things like "Next time, use present tense when talking about this subject" and it will use a tool to save this as a rule, then receive that instruction in all future iterations.

**How to start using it**

**Option 1: With a Postgres database (e.g., Supabase)**
Supabase Schema: Create a table (e.g., agent_rules) with the following columns:
- id: bigint (Primary Key, auto-incrementing)
- created_at: timestamp with time zone (Default: now())
- rule_text: text
- agent: text

Workflow Updates:
- Update the Postgres credentials in the "Get rules from database," "Insert rule into database," and "Execute query on rule database" nodes.
- Update the agent value (currently 'TestAgent') in the "Get rules from database" and "Insert rule into database" nodes if you want a different agent name.
- Update the Anthropic API credentials.

**Option 2: With Google Sheets**
Google Sheet Setup: Create a Google Sheet with columns for rule_text and agent.

Workflow Updates: Example Google Sheets nodes are included. You will need to:
- Connect your Google Sheets credentials.
- Select your Google Sheet (with rule_text and agent columns) in all relevant Google Sheets nodes.
- Replace the existing Postgres nodes ("Get rules from database", "Insert rule into database", "Execute query on rule database") with the configured Google Sheets nodes.
- Update the agent value (currently 'TestAgent') in the Google Sheets nodes if you want a different agent name.
- Update the Anthropic API credentials.

Agent Instructions: Update the agent's system message and remove the database schema section, as it is no longer relevant.
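As an illustration of how the retrieved rules might be injected as system instructions, a small Code-node sketch. The rule_text field matches the schema above; the surrounding prompt wording is an assumption.

```js
// Hypothetical step: turn the rows returned by "Get rules from database"
// into a block of system instructions for the agent.
const rules = $input.all().map((item) => item.json.rule_text);

const systemMessage = [
  "Follow these standing rules, saved by the user in earlier conversations:",
  ...rules.map((r, i) => `${i + 1}. ${r}`),
].join("\n");

return [{ json: { systemMessage } }];
```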
by sayamol thiramonpaphakul
This workflow automatically checks the status of your websites using the UptimeRobot API. If any site is down or unstable, it will:
- Generate a natural-language alert message using GPT-4o
- Push the message to a LINE group (with funny IT-style encouragement)
- Log all DOWN status entries into your Supabase database
- Wait 30 minutes before repeating

🔧 **How It Works**
1. Schedule Trigger – Runs on a fixed interval (every few minutes).
2. UptimeRobot Node – Fetches website monitor data.
3. Code Node (Filter) – Filters only websites with status 8 (may be down) or 9 (down) — see the sketch below.
4. IF Node – If any site is down, proceed.
5. LangChain LLM Node – Formats the alert with a humorous message using GPT-4o.
6. Line Notify (HTTP Request) – Sends the alert to your LINE group.
7. Loop Over Items – Loops through all monitors.
8. Filter Down (Status = 9) – Selects only "fully down" sites.
9. Supabase Node – Logs these into the synlora_uptime_down table.
10. Wait Node – Delays the next alert by 30 minutes to avoid spamming.

⚙️ **Setup Steps**
Required:
- 🔗 UptimeRobot API Key
- 📲 LINE Channel Access Token and Group ID
- 🧠 OpenAI Key (GPT-4o Mini)
- 🗃️ Supabase Project & Table

Step-by-step:
1. Go to UptimeRobot → get an API key and ensure monitors are set up.
2. Create a Supabase table with fields: website, status, uptime_id.
3. Create a LINE Messaging API bot, join it to your group, and get the Access Token and Group ID (userId or groupId).
4. Add your OpenAI API Key for GPT-4o Mini (or switch to your preferred LLM).
5. Import the workflow JSON into n8n.
6. Set credentials in all necessary nodes.
7. Activate the workflow.
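A minimal sketch of that filter Code node, assuming the UptimeRobot response has already been split into one item per monitor with a numeric status field:

```js
// Keep only monitors flagged as possibly down (8) or down (9).
const DOWN_STATUSES = [8, 9];

return $input.all().filter((item) => {
  const monitor = item.json;
  return DOWN_STATUSES.includes(Number(monitor.status));
});
```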
by Airtop
**Automating LinkedIn Enrichment and ICP Scoring**

**Use Case**
This automation enriches a person's data using LinkedIn and calculates an Ideal Customer Profile (ICP) score based on their professional presence. It is particularly useful for lead qualification, contact research, and targeted outreach.

**What This Automation Does**
The automation processes the following input parameters:
- **Person Name**: Full name of the individual.
- **Work Email**: Business email address to validate corporate identity.
- **Airtop Profile (connected to LinkedIn)**: A LinkedIn-authenticated Airtop Profile for enrichment.

**How It Works**
1. Email Filtering: Checks if the email is corporate (excludes free and personal domains) — see the sketch below.
2. LinkedIn Profile Discovery: Searches and verifies the correct LinkedIn URL using Airtop.
3. Data Enrichment: Extracts professional details from the LinkedIn profile.
4. ICP Scoring: Calculates an ICP score based on extracted data and profile context.
5. Merge Outputs: Consolidates enriched profile data and ICP results into a single output.

**Setup Requirements**
- Airtop API Key
- An Airtop Profile authenticated on LinkedIn

**Next Steps**
- **Combine with CRM Integration**: Push enriched and scored data into CRMs like HubSpot or Salesforce.
- **Batch Processing**: Automate for lists of leads using Airtop + n8n or the Airtop SDK.
- **Scoring Customization**: Adjust scoring logic to reflect your ideal customer attributes more precisely.

Read more about data enrichment and ICP scoring.
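A minimal sketch of the corporate-email check. The free-provider list is a short illustrative sample, not necessarily the list the template uses, and the input field name is taken from the Work Email parameter above.

```js
// Hypothetical filter: treat the email as corporate only if its domain is
// not on a list of free/personal providers (sample list, extend as needed).
const FREE_DOMAINS = ["gmail.com", "yahoo.com", "outlook.com", "hotmail.com", "icloud.com"];

const email = ($json["Work Email"] || "").toLowerCase().trim();
const domain = email.split("@")[1] || "";

const isCorporate = email.includes("@") && !FREE_DOMAINS.includes(domain);

return [{ json: { ...$json, isCorporate } }];
```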
by Jimleuk
This n8n template shows you how to connect Github's Free Models to your existing n8n AI workflows.

Whilst it is possible to use HTTP nodes to access Github Models, the aim of this template is to use it with existing n8n LLM nodes - saves the trouble of refactoring!

Please note, Github states their model APIs are not intended for production usage! If you need higher rate limits, you'll need to use a paid service.

**How it works**
- The approach builds a custom OpenAI-compatible API around the Github Models API - all done in n8n!
- First, we attach an OpenAI subnode to our LLM node and configure a new OpenAI credential.
- Within this new OpenAI credential, we change the "Base URL" to point at an n8n webhook we've prepared as part of this template.
- Next, we create 2 webhooks which the LLM node will now attempt to connect with: "models" and "chat completion".
- The "models" webhook simply calls the Github Models "list all models" endpoint and remaps the response to be compatible with our LLM node (see the sketch below).
- The "Chat Completion" webhook does a similar task with Github's Chat Completion endpoint.

**How to use**
Once connected, just open chat and ask away! Any LLM or AI agent node connected with this custom LLM subnode will send requests to the Github Models API, allowing you to try out a range of SOTA models for free.

**Requirements**
- Github account and credentials for access to Models. If you've used the Github node previously, you can reuse this credential for this template.

**Customising this workflow**
This template is just an example. Use the custom OpenAI credential for your other workflows to test Github models.

**References**
- https://docs.github.com/en/github-models/prototyping-with-ai-models
- https://docs.github.com/en/github-models
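As an illustration, the remapping inside the "models" webhook could look roughly like this in a Code node. The shape of the Github response is an assumption here (an array of models with an id/name per entry); the output shape is the OpenAI-style /v1/models list that the OpenAI credential expects.

```js
// Hypothetical remap: convert the Github Models listing (assumed to be an
// array of objects with an `id` field) into the OpenAI /v1/models shape.
const githubModels = $json.body || $json; // depends on how the HTTP Request node returns it

const models = (Array.isArray(githubModels) ? githubModels : []).map((m) => ({
  id: m.id || m.name,
  object: "model",
  created: 0,
  owned_by: m.publisher || "github", // `publisher` is an assumed field
}));

return [{ json: { object: "list", data: models } }];
```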
by Sirhexalot
This workflow facilitates seamless synchronization between Entra (Microsoft Azure AD) and Zammad. It automates the following processes:
- Fetch Entra Contacts
- Create Universal User Object: Extracts key user information, such as email, phone, and name, and formats it for Zammad compatibility (see the sketch below).
- Synchronize with Zammad:
  - Identifies users in Zammad who need updates based on Entra data.
  - Adds new users from Entra to Zammad.
  - Deactivates users in Zammad if they are no longer in Entra.

**Key Features**
- **Dynamic Matching**: Compares contacts from Entra with existing Zammad users based on email and updates records accordingly.
- **Efficient Management**: Automatically creates, updates, or deactivates Zammad users based on their status in Entra.
- **Custom Fields**: Supports custom field mapping, ensuring enriched user profiles in Zammad.

**Setup Instructions**
- Microsoft Entra Integration:
  - Ensure proper API permissions for accessing Entra contacts.
  - Configure Microsoft OAuth2 credentials in n8n.
- Zammad Integration:
  - Set up Zammad API credentials with appropriate access rights.
  - Customize the workflow to include additional fields or map existing fields as needed.
- Run Workflow:
  - Trigger the workflow manually or set up an automation schedule (e.g., daily sync).
  - Review created/updated/deactivated users in Zammad.

**Use Cases**
- **IT Administration**: Keep your support system in sync with the organization's Entra data.
- **Customer Management**: Ensure accurate and up-to-date user records in Zammad.

**Prerequisites**
- Access to an Entra (Azure AD) environment with contacts data.
- A Zammad instance with API credentials for user management.
- A custom field in the Zammad User object (entra_key) of type String.
- A custom field in the Zammad User object (entra_object_type) of type Single selection, with two key/value pairs: user = User, contact = Contact.

This workflow is fully customizable and can be adapted to your organization's specific needs. Save time and reduce manual errors by automating your user sync process with this template! If you have found an error or have any suggestions, please report them here on Github.
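A rough sketch of what the Create Universal User Object step might do — mapping a Microsoft Graph contact onto the fields the Zammad node expects, including the custom fields from the prerequisites. Treat the exact Graph field names (mail, givenName, surname, mobilePhone) and the mapping as assumptions to adapt to your tenant.

```js
// Hypothetical mapping from an Entra (Microsoft Graph) contact to a
// Zammad-compatible user object, including the required custom fields.
return $input.all().map((item) => {
  const c = item.json;
  return {
    json: {
      email: (c.mail || "").toLowerCase(),
      firstname: c.givenName || "",
      lastname: c.surname || "",
      phone: c.mobilePhone || "",
      entra_key: c.id,              // custom field in Zammad
      entra_object_type: "contact", // custom field: "user" or "contact"
      active: true,
    },
  };
});
```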