by Manuel
Effortlessly optimize your workflow by automatically importing hundreds of manufacturers from a Google Sheet into your Shopware online store, saving countless hours of manual work.

**How it works**
- Retrieve all manufacturers from a Google Sheet
- Add each manufacturer to Shopware via the Shopware Sync API endpoint (a request sketch follows below)
- Upload a logo for each manufacturer to Shopware from a provided public URL

**Set Up Steps**
1. Add your Shopware URL to the first node, called Settings.
2. Create a Google Sheet in your Google account with the following columns (Demo Sheet):
   - name (the name of the manufacturer; has to be unique and is required)
   - website (URL of the manufacturer website)
   - description
   - logo_url (public manufacturer logo URL; must be a PNG, JPG, or SVG file)
   - translation_language_code_1 (optional; language code of your language, for example 'es-ES' for Spanish. Make sure a language with this code exists in your Shopware shop.)
   - translation_name_1 (optional; manufacturer name translated to the language defined in translation_language_code_1)
   - translation_description_1 (optional; manufacturer description translated to the language defined in translation_language_code_1)
   - translation_language_code_2 (optional; same as translation_language_code_1 for another language)
   - translation_name_2 (optional; same as translation_name_1 for another language)
   - translation_description_2 (optional; same as translation_description_1 for another language)
   - translation_language_code_3 (optional; same as translation_language_code_1 for another language)
   - translation_name_3 (optional; same as translation_name_1 for another language)
   - translation_description_3 (optional; same as translation_description_1 for another language)
3. Connect to your Google account.
4. Connect to your Shopware account:
   - Create a Shopware Integration.
   - Connect to Shopware at the nodes "Import Manufacturer" and "Upload Manufacturer Logo" using a Generic OAuth2 API credential with Grant Type "Client Credentials". The Access Token URL is https://your-shopware-domain.com/api/oauth/token.
5. Run the workflow.
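For orientation, here is a minimal sketch of the two HTTP calls this setup relies on: fetching an access token with the Client Credentials grant and upserting manufacturers through the Sync API. The payload keys and field names (`product_manufacturer`, `link`, `description`) follow common Shopware 6 conventions and should be checked against your Shopware version; the client ID/secret values are placeholders.

```javascript
// Minimal sketch (not the workflow itself): authenticate and upsert one manufacturer
// via the Shopware 6 Sync API. Adjust domain, credentials, and fields to your shop.
const base = "https://your-shopware-domain.com";

// 1) Client Credentials token request (same Access Token URL used in the OAuth2 credential)
const tokenRes = await fetch(`${base}/api/oauth/token`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    grant_type: "client_credentials",
    client_id: "YOUR_INTEGRATION_ACCESS_KEY",     // placeholder
    client_secret: "YOUR_INTEGRATION_SECRET_KEY", // placeholder
  }),
});
const { access_token } = await tokenRes.json();

// 2) Sync API upsert: one operation containing the rows read from the Google Sheet
const syncRes = await fetch(`${base}/api/_action/sync`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${access_token}`,
  },
  body: JSON.stringify({
    "import-manufacturers": {
      entity: "product_manufacturer",
      action: "upsert",
      payload: [
        { name: "Acme Tools", link: "https://acme.example", description: "Power tools" },
      ],
    },
  }),
});
console.log(syncRes.status); // 200 on success
```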
by Airtop
Extracting LinkedIn Profile Information

**Use Case**
Manually copying data from LinkedIn profiles is time-consuming and error-prone. This automation helps you extract structured, detailed information from any public LinkedIn profile, enabling fast enrichment, hiring research, or lead scoring.

**What This Automation Does**
This automation extracts profile details from a LinkedIn URL using the following input parameters:
- **airtop_profile**: The name of your Airtop Profile connected to LinkedIn.
- **linkedin_url**: The URL of the LinkedIn profile you want to extract data from.

**How It Works**
1. Starts with a form trigger or via another workflow.
2. Assigns the LinkedIn URL and Airtop profile variables.
3. Opens the LinkedIn profile in a real browser session using Airtop.
4. Uses an AI prompt to extract structured information, including:
   - Name, headline, location
   - Current company and position
   - About section, experience, and education history
   - Skills, certifications, languages, connections, and recommendations
5. Returns structured JSON ready for further use or storage (an example shape is sketched below).

**Setup Requirements**
- Airtop API Key (free to generate)
- An Airtop Profile connected to LinkedIn (requires a one-time login)

**Next Steps**
- **Sync with CRM**: Push extracted data into HubSpot, Salesforce, or Airtable for lead enrichment.
- **Combine with Search Automation**: Use with a LinkedIn search scraper to process profiles in bulk.
- **Adapt to Other Platforms**: Customize the prompt to extract structured data from GitHub, Twitter, or company sites.

Read more about the Extract Linkedin Profile Information automation.
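The exact JSON schema is determined by the AI prompt inside the workflow, so treat the following as one plausible shape rather than a fixed contract; every field name here is illustrative.

```javascript
// Illustrative only: the real field names depend on the prompt configured in the Airtop node.
const exampleProfile = {
  name: "Jane Doe",
  headline: "Senior Data Engineer",
  location: "Berlin, Germany",
  current_company: "Example GmbH",
  current_position: "Senior Data Engineer",
  about: "Builds data platforms ...",
  experience: [
    { company: "Example GmbH", title: "Senior Data Engineer", start: "2021", end: "present" },
  ],
  education: [{ school: "TU Berlin", degree: "M.Sc. Computer Science", years: "2014-2016" }],
  skills: ["Python", "SQL", "Airflow"],
  certifications: [],
  languages: ["German", "English"],
  connections: "500+",
  recommendations: 3,
};
```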
by Viktor Klepikovskyi
Reusable and Independently Testable Sub-workflow

This n8n workflow provides a standardized structure for building and testing sub-workflows in isolation. Its purpose is to help you create robust, reusable, and maintainable automations by enabling you to test the sub-workflow's logic without needing a separate parent workflow.

**Setup Instructions**
1. Define Sub-workflow Inputs: Double-click the Execute Sub-workflow Trigger node to define the parameters (e.g., color) that your sub-workflow will expect from a parent workflow.
2. Configure Test Data: Use the Test Input node (an Edit Fields (Set) node connected to the Manual Trigger) to provide sample data for isolated testing.
3. Connect Inputs: The Combine Input node (an Edit Fields (Set) node) is the entry point for your sub-workflow's core logic. It should have two inputs: one from the Execute Sub-workflow Trigger and one from the Test Input node.
4. Merge Inputs: Ensure the Combine Input node has the 'Include Other Input Fields' option enabled to merge data from both the live and test paths seamlessly.

You can read the full blog post that explains this workflow setup in detail here.
by Haqi Ramadhani
Automatically detect new n8n releases (stable or beta) from GitHub, update Coolify environment variables, and trigger deployments.

**Functionality**
This workflow automates deployment of n8n releases to a Coolify instance. It supports two tracks:
- Beta Releases: Checks GitHub every minute for prereleases, filters duplicates, updates the N8N_VERSION environment variable, and deploys.
- Stable Releases (disabled by default): Checks the latest stable release hourly and deploys.

**Key Features**
- **Deduplication**: Ensures no repeated deployments for the same release.
- **Version Parsing**: Extracts the semantic version (e.g., 1.34.0) from GitHub release names (a sketch of this step follows below).
- **Coolify Integration**: Updates environment variables and triggers deployments via API.

**Expected Outcomes**
- New n8n beta/stable releases detected via the GitHub API.
- Coolify environment variable N8N_VERSION updated to the latest version.
- Automatic deployment triggered in Coolify.

**Setup Guide**
1. Replace Placeholders: Update m8ccg8k44coogsk84swk8kgs in the Update ENV and Deploy nodes with your Coolify Application UUID.
2. Configure Credentials: Add Coolify API credentials (httpHeaderAuth) with a valid API token in the headers.
3. Enable Triggers: Toggle the Auto Update Latest Release node if stable releases are desired. Adjust schedule intervals as needed.
4. Test: Run the workflow manually to validate API connections and version parsing.

**SEO Keywords**
Automated Deployment, n8n CI/CD, Coolify Integration, GitHub Release Monitoring, Environment Variable Management, Beta Release Automation.
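The version-parsing step can be reproduced in an n8n Code node with a small regular expression. This is a minimal sketch assuming release names such as "n8n@1.34.0" or "1.34.0-beta.1", not the exact code shipped with the template.

```javascript
// Minimal sketch for an n8n Code node: pull a semantic version out of a GitHub release name.
// Release-name formats vary ("n8n@1.34.0", "1.34.0-beta.1", ...), so the regex is kept permissive.
const releaseName = $json.name || $json.tag_name || "";
const match = releaseName.match(/(\d+\.\d+\.\d+(?:-[0-9A-Za-z.-]+)?)/);

if (!match) {
  throw new Error(`Could not find a semantic version in release name: "${releaseName}"`);
}

return [{ json: { version: match[1], prerelease: $json.prerelease === true } }];
```

The extracted `version` value is what gets written into the N8N_VERSION environment variable in Coolify before the deployment is triggered.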
by Dionysus
Automating daily notifications of the latest releases from a GitHub repository. This template is ideal for developers and project managers looking to stay up to date with software updates.

**How it Works**
1. Daily Trigger: The workflow initiates daily using the Schedule Trigger node.
2. Fetch Repository Data: The HTTP Request node retrieves the latest release details from the specified GitHub repository.
3. Check if New: The IF node checks whether the release was published in the last 24 hours (a sketch of this check follows below).
4. Split Content: The Split Out node processes the JSON response to extract and structure the relevant data.
5. Convert Markdown: The Markdown node converts release notes from Markdown to HTML, making them ready to use in emails.
6. Send a notification by email.

**Key Features**
- Simple to customize by modifying the GitHub URL.
- Automatically processes and formats release notes for better readability.
- Modular design, allowing integration with other workflows like Gmail or Slack notifications.

**Setup Steps**
1. Modify Repository URL: Update the Sticky Note node with the URL of the repository you want to monitor.
2. Modify SMTP Details: Update the Send Email node with your SMTP details.
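The freshness check relies on the `published_at` timestamp that GitHub returns from `https://api.github.com/repos/<owner>/<repo>/releases/latest`. The template implements it as an IF node; the sketch below shows the same comparison as a Code node for clarity, not the exact node configuration.

```javascript
// Minimal sketch of the "released in the last 24 hours" check.
// "published_at" comes from the GitHub releases API response fetched by the HTTP Request node.
const publishedAt = new Date($json.published_at).getTime();
const twentyFourHoursAgo = Date.now() - 24 * 60 * 60 * 1000;

return [{ json: { ...$json, isNew: publishedAt >= twentyFourHoursAgo } }];
```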
by Ahmed Saadawi
⚠️ This Workflow Requires a Community Node and a Self-Hosted n8n Instance
> This workflow uses the Vtiger CRM community node. To use it, you must be running a self-hosted version of n8n with Community Nodes enabled.

🔧 How to Install the Node
1. Go to Settings → Community Nodes
2. Click Install Node
3. Enter the package name: n8n-nodes-vtiger-crm
4. Restart your n8n instance if prompted

💬 Real-time Vtiger Support Tickets to Telegram with Auto Status Updates

📌 Overview
Keep your support team instantly informed when new tickets are created in Vtiger CRM. This workflow:
- Fetches the most recent ticket marked as Open
- Sends its details to a Telegram chat
- Updates the status in Vtiger to In Progress to prevent re-sending

🔄 What This Workflow Does
- 📨 Pulls the latest open ticket from Vtiger HelpDesk (a query sketch follows below)
- 📲 Sends a rich-text message to Telegram with all key ticket details
- 🔁 Updates the ticket's status to "In Progress"

🧠 Workflow Preview
> 📲 Telegram Output Example
> New ticket with the following details:
> Ticketid: TT2
> Title: Internet down
> Status: Open
> Priority: High
> Severity: Minor
> Category: Small Problem
> Description: The internet was slow from yesterday and today is down completely

🛠️ Setup Instructions

🔗 Telegram Bot Setup
1. Open Telegram and search for @BotFather
2. Run /newbot and follow the instructions
3. Save the bot token
4. Add the bot to your chat or group
5. Use @userinfobot to get your chat_id
6. Paste the token and chat ID into the Telegram node inside n8n

🔗 Vtiger CRM Setup
- Make sure your Vtiger HelpDesk module includes: ticket_no, ticket_title, ticketstatus, ticketpriorities, ticketseverities, ticketcategories, description
- Connect your Vtiger API credentials inside n8n

👥 Who This Is For
- Customer support and IT helpdesk teams using Vtiger CRM
- Teams that want instant alerts in Telegram
- Anyone syncing CRM activity with chat-based notifications

🔐 Credentials Required
- ✅ Vtiger CRM API credentials
- ✅ Telegram Bot Token

🏷 Tags
vtiger, telegram, crm automation, helpdesk alerts, no-code crm, realtime notifications, n8n telegram integration, support ticket automation, self-hosted n8n, community nodes, workflow automation, vtiger crm integration, helpdesk sync, n8n crm alerts
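The Vtiger CRM community node handles authentication and querying for you; the sketch below only illustrates what "pull the latest open ticket" means in Vtiger terms. The module and field names (HelpDesk, ticketstatus, and so on) come from the setup notes above, while the endpoint shape may differ between Vtiger versions.

```javascript
// Rough sketch of what the community node asks Vtiger for; not code you need to add yourself.
// Vtiger's webservice exposes an SQL-like query language over module names such as HelpDesk.
const query =
  "SELECT ticket_no, ticket_title, ticketstatus, ticketpriorities, ticketseverities, " +
  "ticketcategories, description FROM HelpDesk " +
  "WHERE ticketstatus = 'Open' ORDER BY createdtime DESC LIMIT 1;";

// Conceptually the node issues something like (sessionName obtained during login):
// GET https://your-vtiger-host/webservice.php?operation=query&sessionName=...&query=<url-encoded query>
console.log(encodeURIComponent(query));
```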
by Alexander Bentlund
Search music and play to Spotify from Telegram

This workflow is a simple demonstration of accessing a message model from Telegram, and it makes searching for songs an easy task even if you can't remember the artist or song name. An OpenAI message model tries to figure out the song and sends it to an active Spotify device**.

**Use case**
Imagine an office where you play music in the background and the employees can control the music without having to log in to the playing account.

**How it works**
1. You describe the song in Telegram.
2. The Telegram bot sends the text to n8n.
3. An OpenAI message model tries to find the song.
4. Spotify gets the search query string.
5. The first match is then added to the queue. If there is no match, a message is sent to Telegram and the process ends.
6. We change to the next track in the list.
7. We make sure the song starts playing by trying to resume.
8. We fetch the currently playing track.
9. We return "now playing" information to Telegram: Song Name - Artist Name - Album Name (an example expression follows below).

**Error handling**
Every Spotify step has its own error handler under settings where we output the error. The message parser receives the error and sends it to Telegram.

**Requirements**
- Active workflow*
- OpenAI API key
- Telegram bot
- Spotify account and OAuth2 API
- Spotify active on a device**

(*) The Telegram trigger is activated only if this workflow is active. You can however TEST the workflow in the editor by clicking "Test step", and then it waits for the Telegram event. When the event is received, just step through all steps or click "Test step" on the "Fetch Now Playing" node.

(**) You must have a Spotify device active when trying to communicate with a device. Open Spotify and play something; now it is active.
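The "now playing" message can be built from the currently-playing track data in a single expression or small Code node. The sketch below assumes the standard Spotify "currently playing" response shape (item.name, item.artists[], item.album.name); verify the field paths against the actual output of the Spotify node in your workflow.

```javascript
// Minimal sketch, assuming the standard Spotify "currently playing" response shape.
const track = $json.item ?? $json; // some node versions return the track object directly
const nowPlaying = [
  track.name,
  track.artists?.map((a) => a.name).join(", "),
  track.album?.name,
].filter(Boolean).join(" - ");

return [{ json: { nowPlaying } }]; // e.g. "Hotel California - Eagles - Hotel California"
```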
by n8n Team
This workflow automatically syncs Shopify orders with your Zendesk tickets. Using this workflow, Shopify orders will be added, or have their information updated, straight into your Zendesk tickets.

**Prerequisites**
- Shopify account and Shopify credentials
- Zendesk account and Zendesk credentials

**How it works**
1. The Shopify Trigger starts the workflow whenever an order is updated.
2. The Zendesk node finds whether the order already exists and has a ticket assigned.
3. The Set node keeps and passes only the ticket ID.
4. The Merge by Key node combines the Shopify order data with the Zendesk ticket data.
5. The If node splits the workflow conditionally, checking whether the ticket already exists or not.
6. If the order is new, the Zendesk node creates a new ticket for the order.
by Ahmed Saadawi
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

🧠 Vtiger CRM – Auto-Answer FAQs with DeepSeek AI

**Description**
This workflow automates the process of answering FAQ drafts in Vtiger CRM using the DeepSeek LLM via LangChain. It's perfect for teams who want to accelerate knowledge base creation, improve support response consistency, or reduce the manual effort of writing FAQ content.

Every 1 minute, this workflow:
- 📥 Retrieves the most recent FAQ record marked as Draft in Vtiger CRM
- 🧠 Sends the question to a LangChain agent powered by DeepSeek AI
- 📝 Receives a plain-text answer
- 📤 Updates the original FAQ with the generated answer and changes its status to Published (a sketch of this cycle follows below)

⚙️ How It Works
- **Trigger:** Scheduled to run every 1 minute
- **Query:** Pulls the latest FAQ from Vtiger where faqstatus = 'Draft'
- **AI Agent:** Uses LangChain + DeepSeek to generate a natural-language answer
- **Memory Buffer:** Keeps context using LangChain memory
- **Update:** Pushes the answer back to Vtiger and marks it as Published

🛠️ Setup Instructions
1. Connect credentials for:
   - Vtiger CRM API
   - DeepSeek API
2. Ensure your Vtiger CRM has a Faq module with the fields: question, faq_answer, faqstatus
3. Install the required Community Node:
   - Go to Settings → Community Nodes
   - Click Install Node and enter: n8n-nodes-vtiger-crm
   - Restart your instance when prompted.
4. Optionally customize the schedule or field names as needed.

👤 Who Is This For?
- Customer support teams building a knowledge base
- Businesses using Vtiger as a CRM or internal helpdesk
- Teams looking to automate repetitive content creation using LLMs

🔐 Credentials Required
- ✅ Vtiger CRM API credentials
- ✅ DeepSeek AI API key

✅ Highlights
- Fully automated LLM-powered FAQ generation
- Uses a custom community node for Vtiger support
- Lightweight and runs on a short interval (1 min)
- Includes a sticky note for clarity and onboarding
- Clean conditional logic and memory context built in

🏷 Tags
vtiger, crm, faq automation, ai automation, deepseek, langchain, llm, open source crm, faq generation, customer support, n8n, n8n community nodes, workflow automation, ai generated answers, vtiger integration, deepseek ai, langchain integration
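For reference, the select-and-update cycle the workflow performs against the Faq module can be sketched as follows. The field names (question, faq_answer, faqstatus) come from the setup notes above; the record id and call shapes are conceptual placeholders, and the community node performs the actual requests for you.

```javascript
// Conceptual sketch of the two Vtiger operations behind this workflow.
// 1) Fetch the latest draft FAQ (Vtiger's SQL-like webservice query language):
const selectDraft =
  "SELECT id, question, faq_answer, faqstatus FROM Faq " +
  "WHERE faqstatus = 'Draft' ORDER BY createdtime DESC LIMIT 1;";

// 2) After the LangChain/DeepSeek agent returns an answer, write it back and publish:
const updatedRecord = {
  id: "15x42",                 // hypothetical webservice id of the draft FAQ
  faq_answer: "Plain-text answer generated by the agent",
  faqstatus: "Published",
};

console.log(selectDraft, JSON.stringify(updatedRecord));
```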
by Kees Bosch - Browserflow
Auto find & invite LinkedIn Leads

This n8n template automates LinkedIn lead generation by scraping profiles, filtering out existing connections, and sending connection requests, all in a controlled, looped workflow. Ideal for outreach campaigns, recruitment, or lead gen efforts.

⚠️ Disclaimer – Community Node Notice
This template uses a verified community node available inside the n8n cloud environment. To use it, go to "Nodes" → search for: Browserflow for Linkedin, and click Install. It's officially verified and accessible directly from n8n cloud. If you wish to run this template locally, go to Settings, click Community Nodes, and search for n8n-nodes-browserflow. After installing, you can start using the actions in this node.

🛠️ How to Use
1. Trigger: Manual Start. Initiates the workflow manually via the "Test workflow" button, giving you full control.
2. Scrape LinkedIn Profiles. Uses the Browserflow automation to extract profile links from a LinkedIn search or keyword query.
3. Split Out Results. Converts the list of profiles into individual items for single-profile processing.
4. Loop Through Each Profile. Ensures each LinkedIn profile is handled one at a time, avoiding simultaneous actions.
5. Check Existing Connection. Verifies whether you're already connected with the lead on LinkedIn.
6. Conditional Logic. ✅ Already Connected → skip to the next profile; ❌ Not Connected → continue to the next step.
7. Send Connection Invite. Sends a LinkedIn connection request, optionally with a personalized message.

📦 Requirements
- n8n (cloud or self-hosted)
- Installed community node: Browserflow for Linkedin
- LinkedIn account
- Valid Browserflow account (you can set up a free 7-day trial at https://browserflow.io)

⚙️ Setup Instructions
1. Install the Browserflow Community Node: search "Browserflow for Linkedin" > Install.
2. Get your API key at https://browserflow.io
3. Set up your Browserflow account: after registering, set up Browserflow and connect it with LinkedIn using the wizard at https://browserflow.io
4. Connect with Browserflow by making a credential: click on the Browserflow actions to set up a connection with Browserflow by adding your API key to a credential.

🧩 Customization Tips
- Targeting: Adjust the Browserflow actions to scrape specific roles, industries, or locations.
- Messaging: You can add a message to the connection invite, but note that LinkedIn limits the number of messages that can be sent each month. Use variables in the message for personalization (e.g., {firstName}).
- Trigger: Replace the manual trigger with a Cron node for scheduled outreach.
- Integration: Combine with CRM tools (e.g., HubSpot, Notion, Airtable) for syncing leads or integrate with AI Agents.
by Angel Menendez
**Who is this for?**
This subworkflow is ideal for developers and automation builders working with UniPile and n8n to automate message enrichment and LinkedIn lead routing.

**What problem is this workflow solving?**
UniPile separates personal and organization accounts into two different API endpoints. This flow handles both intelligently, so you're not missing sender context due to API quirks or bad assumptions.

**What this workflow does**
This subworkflow is used by:
- **LinkedIn Auto Message Router with Request Detection**
- **LinkedIn AI Response Generator with Slack Approval**

It receives a message sender ID and tries to enrich it using UniPile's /people and /organizations endpoints. It returns a clean, consistent profile object regardless of which source was used (a sketch of such a normalized object follows below).

**Setup**
1. Generate a UniPile API token and save it in your n8n credentials.
2. Make sure this subworkflow is triggered correctly by your parent flows.
3. Test both people and organization lookups to verify responses are normalized.

**How to customize this workflow to your needs**
- Add a secondary enrichment layer using tools like Clearbit or FullContact.
- Customize the fallback logic or error handling.
- Expand the returned data for more AI context or user routing (e.g., job title, region).
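A normalization step of the kind this subworkflow performs can be sketched as below. The field names on the incoming records are placeholders, since the exact schemas of UniPile's /people and /organizations responses should be checked against the UniPile documentation; only the idea of mapping both sources onto one consistent object comes from the workflow description.

```javascript
// Sketch of normalizing either a person or an organization lookup into one profile object.
// Field names on the incoming records are placeholders, not the exact UniPile schema.
function normalizeProfile(source, record) {
  if (source === "people") {
    return {
      type: "person",
      id: record.id,
      name: record.full_name ?? record.name,   // placeholder field names
      headline: record.headline ?? null,
      company: record.current_company ?? null,
      url: record.profile_url ?? null,
    };
  }
  // organizations fallback
  return {
    type: "organization",
    id: record.id,
    name: record.name,
    headline: record.tagline ?? null,
    company: record.name,
    url: record.website ?? null,
  };
}

return [{ json: normalizeProfile($json.source, $json.record) }];
```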
by Miquel Colomer
This n8n workflow template automates the process of collecting and delivering the "Top Deals of the Day" from MediaMarkt, tailored to user preferences. By combining user-submitted forms, Bright Data web scraping, GPT-4o-mini deal generation, and email delivery, this workflow sends personalized product recommendations straight to a user's inbox.

> ⚠️ Note: This workflow uses community nodes (Bright Data and Document Generator) which only work on self-hosted n8n instances.

🚀 What It Does
- Collects user preferences via a form (categories + email)
- Scrapes MediaMarkt's deals page using Bright Data
- Uses GPT-4o-mini (OpenAI) to recommend top deals
- Generates a structured HTML email using a template
- Sends the personalized deals directly via email

🧩 Community Node Integration
We created and used the following community nodes:
- **Bright Data** – To scrape MediaMarkt deals using proxy-based scraping
- **Document Generator** – To generate a templated HTML document from deal data

These nodes are not available in n8n Cloud and require self-hosted n8n.

🛠️ Step-by-Step Setup
1. Install Community Nodes: Make sure you're on a self-hosted n8n instance, then install n8n-nodes-brightdata and n8n-nodes-document-generator.
2. Configure Credentials: Bright Data API Key (proxy + scraping setup), OpenAI API Key (GPT-4o-mini access), and SMTP credentials for sending emails.
3. Customize the Form: Adapt the form node to collect desired categories and email addresses. Typical categories include appliances, phones, laptops, etc.
4. Design Your HTML Template: In the Document Generator node, you can tweak the HTML/CSS to change how deals appear in the final email.
5. Test the Workflow: Submit the form with test data and check that the entire flow, from scraping to email, executes as expected.

🧠 How It Works: Workflow Overview
1. User Interaction via Form: Users select product categories and enter their email. This triggers the workflow.
2. Data Extraction via Bright Data: Bright Data scrapes the MediaMarkt offers page and returns HTML content.
3. HTML Parsing: Key elements like product names, prices, and links are extracted for processing (a parsing sketch follows below).
4. GPT-4o-mini Recommendation Generation: The extracted data is sent to OpenAI (GPT-4o-mini), which filters, ranks, and enhances deals based on the user's preferences.
5. Data Structuring & Split: The result is split into individual deal items to be formatted.
6. HTML Document Creation: Document Generator populates a clean HTML template with the top recommended deals.
7. Email Delivery: The final document is emailed via SMTP to the user with a friendly message.

📨 Final Output
Users receive a custom HTML email featuring a curated list of top MediaMarkt deals based on their selected categories.

🔐 Credentials Used
- **Bright Data API** – Web scraping with proxy support
- **OpenAI API** – Generating personalized recommendations
- **SMTP** – Sending personalized deal emails

✨ Customization Tips
- **Change the Data Source**: You can adapt this to scrape other e-commerce sites.
- **Update the Email Template**: Make it match your branding or include images.
- **Extend the Form**: Add preferences like price range or specific brands.
- **Add Scheduling**: Use Cron to run the workflow daily or weekly.

❓ Questions?
Template and node created by Miquel Colomer and n8nhackers.com. Need help customizing or deploying? Contact us for consulting and support.
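The HTML parsing step can be approached in several ways; the sketch below shows one hypothetical Code-node variant using cheerio. The CSS selectors (.product-card, .product-title, .product-price) are placeholders and must be replaced after inspecting the actual MediaMarkt markup, and on self-hosted n8n the cheerio module has to be allowed via NODE_FUNCTION_ALLOW_EXTERNAL.

```javascript
// Hypothetical parsing sketch (the template itself may extract data differently).
// Selectors are placeholders; inspect the real page markup before relying on them.
const cheerio = require("cheerio"); // requires NODE_FUNCTION_ALLOW_EXTERNAL=cheerio

const $ = cheerio.load($json.html);  // raw HTML returned by the Bright Data node
const deals = [];

$(".product-card").each((_, el) => { // ".product-card" is a placeholder selector
  deals.push({
    name: $(el).find(".product-title").text().trim(),
    price: $(el).find(".product-price").text().trim(),
    link: $(el).find("a").attr("href"),
  });
});

return deals.map((deal) => ({ json: deal }));
```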