by Joachim Brindeau
Are you looking to install external libraries for your self-hosted n8n instance? This automated workflow makes adding npm packages to your n8n environment quick and effortless. Beware: this workflow only works on self-hosted instances.

What This Workflow Does
This solution automatically installs npm packages such as axios, cheerio, or node-fetch in your self-hosted n8n Docker container, making them immediately available in Code nodes.

Key features
✅ Automated Installation: No manual npm commands needed
✅ Daily Updates: A scheduled trigger keeps packages current
✅ Smart Installation: Only installs missing packages
✅ Multiple Triggers: Manual, scheduled, and on startup of the n8n instance, so you can upgrade your n8n version without worrying about external libraries

How to install and update external libraries automatically

Step 1: Set up your environment variables
Before using external libraries in n8n Code nodes, configure these environment variables in your Docker Compose file (a Compose sketch appears at the end of this description).

Option A: allow specific external npm packages in Code nodes
`NODE_FUNCTION_ALLOW_EXTERNAL=axios,cheerio,node-fetch`

Option B: allow all external npm packages in Code nodes
`NODE_FUNCTION_ALLOW_EXTERNAL=*`

Step 2: Import the external packages workflow
Import the workflow into your n8n instance by copy-pasting all nodes.

Step 3: Enter the list of external libraries you need
Edit the libraries_set node and change the comma-separated list (e.g., axios,cheerio,node-fetch). If you chose Option A above, update your NODE_FUNCTION_ALLOW_EXTERNAL variable with the same packages.

Step 4: Start the workflow!
Run the workflow manually or let it trigger automatically.

Why use this to install npm packages in n8n?
Managing external packages manually in n8n is time-consuming. This workflow automates the entire process, ensuring your libraries are always installed and up to date.
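For reference, here is a minimal sketch of the relevant Docker Compose section. The service layout, port, and volume name are illustrative assumptions; the `NODE_FUNCTION_ALLOW_EXTERNAL` line is the only part this workflow depends on.

```yaml
# Minimal sketch of a docker-compose.yml for n8n; adapt names to your setup.
services:
  n8n:
    image: n8nio/n8n
    environment:
      # Option A: allow only the packages the workflow installs
      - NODE_FUNCTION_ALLOW_EXTERNAL=axios,cheerio,node-fetch
      # Option B (alternative): allow all external packages
      # - NODE_FUNCTION_ALLOW_EXTERNAL=*
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n
volumes:
  n8n_data:
```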
by bangank36
This workflow automates the "Mark as Fulfilled" action in Shopify for each order, ensuring a seamless fulfillment process without manual intervention.

How It Works
This workflow retrieves all unfulfilled orders and processes their fulfillment automatically. Since Shopify requires a Fulfillment Order ID (not an Order ID) to trigger fulfillment, the workflow follows these steps:
1️⃣ Get all unfulfilled orders using the Shopify node.
2️⃣ Retrieve the Fulfillment Order ID using the "List Fulfillment Orders" action.
3️⃣ Create a fulfillment request using "Mark fulfillment order as fulfilled."
4️⃣ Handle edge cases, such as partially fulfilled orders or API errors.
This ensures that every valid order is marked as fulfilled efficiently (a sketch of the underlying API calls appears at the end of this description).
🔗 Ongoing discussions on this topic: Relevant Shopify API Discussion

Step-by-step
The workflow can be run on demand or on a schedule. You can adjust these parameters within the Shopify and filter nodes:
Shopify Admin URL: a global node to customize the Shopify store URL. To find your Shopify store ID, log in to your Shopify admin and look at the URL in your browser's address bar; the subdomain portion (e.g., example_store_id.myshopify.com) is your store ID (in this case: example_store_id).
Order Filtering: ensures only valid orders are fulfilled, which is particularly useful if:
You sell digital downloads or gift cards exclusively.
You use third-party fulfillment services for all products.

Credentials
To run this workflow, you'll need:
Shopify API Key: required for authentication.

Who Is This For?
Shopify store owners looking to automate their fulfillment process.
Merchants selling digital or personalized products who need instant fulfillment.

Explore More Templates
👉 Check out my other n8n templates
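The following is a minimal sketch of the two Shopify Admin REST calls behind steps 2 and 3. The store subdomain, API version, and token are placeholders; verify the API version against your store before reusing the URLs.

```javascript
// Sketch of the "list fulfillment orders" + "create fulfillment" sequence.
const STORE = "example_store_id"; // placeholder store ID
const VERSION = "2024-01"; // adjust to the API version your store uses
const TOKEN = process.env.SHOPIFY_TOKEN;
const base = `https://${STORE}.myshopify.com/admin/api/${VERSION}`;
const headers = {
  "X-Shopify-Access-Token": TOKEN,
  "Content-Type": "application/json",
};

async function fulfillOrder(orderId) {
  // Step 2: list the fulfillment orders behind this order.
  const res = await fetch(`${base}/orders/${orderId}/fulfillment_orders.json`, { headers });
  const { fulfillment_orders } = await res.json();

  // Step 3: mark each still-open fulfillment order as fulfilled.
  for (const fo of fulfillment_orders.filter((f) => f.status === "open")) {
    await fetch(`${base}/fulfillments.json`, {
      method: "POST",
      headers,
      body: JSON.stringify({
        fulfillment: {
          line_items_by_fulfillment_order: [{ fulfillment_order_id: fo.id }],
        },
      }),
    });
  }
}
```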
by Vitali
📌 Validate Seatable Webhooks with HMAC SHA256 Authentication
This mini workflow is designed to securely validate incoming Seatable webhooks using HMAC SHA256 signature verification.

🔐 What it does:
Listens for incoming Seatable webhook requests.
Calculates a SHA256 HMAC hash of the raw request body using your shared secret.
Compares the computed hash with the x-seatable-signature header (after removing the sha256= prefix).
If the hashes match: responds with 200 OK and forwards the request to subsequent nodes.
If the hashes don't match: responds with 403 Forbidden.

⚠️ Important Notes:
This workflow is provided as a template and is not intended to work standalone. Please duplicate it and integrate it with your custom logic at the "Add nodes for processing" node.

Configuration steps:
Set your secret key in the "Calculate sha256" crypto node (replace the placeholder).
Adjust the webhook path to suit your environment (or set it to "manual" for testing).
Connect your actual logic after the verification step. A sketch of the verification logic follows below.
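Here is a minimal sketch of the verification step, assuming the raw request body is available as a string and the header name matches your Seatable setup:

```javascript
// HMAC SHA256 signature check for a Seatable webhook.
const crypto = require("crypto");

function isValidSeatableSignature(rawBody, signatureHeader, secret) {
  // The header looks like "sha256=<hex digest>"; strip the prefix first.
  const received = signatureHeader.replace(/^sha256=/, "");
  const expected = crypto
    .createHmac("sha256", secret)
    .update(rawBody, "utf8")
    .digest("hex");
  // timingSafeEqual avoids leaking information through comparison timing.
  return (
    received.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(received), Buffer.from(expected))
  );
}

// Respond 200 and continue when this returns true; respond 403 otherwise.
```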
by scrapeless official
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Brief Overview
This automation template helps you track the latest job listings from the Y Combinator Jobs page. By using Scrapeless to scrape job listings, n8n to orchestrate the workflow, and Google Sheets to store the results, you can build a zero-code job tracking solution that runs automatically every 6 hours.

How It Works
Trigger on a Schedule: Every 6 hours, the workflow kicks off automatically.
Scrape Job Listings: Scrapeless crawls the Y Combinator Jobs page and returns structured Markdown data.
Extract & Parse Content: JavaScript nodes process the Markdown to extract job titles and links (see the parsing sketch at the end of this description).
Flatten Data: Each job becomes a single row with its title and link.
Save to Google Sheets: New job listings are appended to your Google Sheet for easy viewing and sharing.

Features
No-code, automated job listing scraper.
Scrapes and structures the latest Y Combinator job posts.
Saves data directly to Google Sheets.
Easy to schedule and run without manual effort.
Extensible: add Telegram, Slack, or email notifications easily in n8n.

Requirements
Scrapeless API Key: Scrapeless service request credentials. Log in to the Scrapeless Dashboard, click "Setting" on the left, select "API Key Management", and click "Create API Key". Finally, click the API key you created to copy it.
n8n Instance: self-hosted or an n8n.cloud account.
Google Account: for Google Sheets API access.
Target Site: this template is designed for the Y Combinator Jobs page but can be modified for other job boards.

Installation
Deploy n8n on your preferred platform.
Import this workflow JSON file into your n8n workspace.
Create and add your Scrapeless API Key in n8n's credential manager.
Connect your Google Sheets account in n8n.
Update the target Google Sheet document URL and sheet name.

Usage
This automated job finder agent is ideal for:

| Industry / Role | Use Case |
|---|---|
| Job Seekers | Automatically track newly posted startup jobs without manually visiting job boards. |
| Recruitment Agencies | Monitor YC job postings and build a candidate-job matching system. |
| Startup Founders / CTOs | Stay aware of which startups are hiring, for networking and market insights. |
| Tech Media & Bloggers | Aggregate new job listings for newsletters, blogs, or social media sharing. |
| HR & Talent Acquisition Teams | Monitor competitors' hiring activity. |
| Automation Enthusiasts | Example use case for learning web scraping + automation + data storage. |
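For the "Extract & Parse Content" step, here is a minimal sketch of a parsing Code node, assuming the scraped listings appear as standard Markdown links; the `$json.markdown` property name is an assumption, so adjust it to match the actual Scrapeless output.

```javascript
// Extract { title, link } pairs from Markdown links like "[Job title](https://...)".
function extractJobs(markdown) {
  const jobs = [];
  const linkPattern = /\[([^\]]+)\]\((https?:\/\/[^)\s]+)\)/g;
  let match;
  while ((match = linkPattern.exec(markdown)) !== null) {
    jobs.push({ title: match[1].trim(), link: match[2] });
  }
  return jobs;
}

// In an n8n Code node, return one item per job so each becomes a sheet row:
// return extractJobs($json.markdown).map((job) => ({ json: job }));
```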
by PiAPI
Who is the template for?
This workflow is designed for content creators and social media professionals. It enables Instagram and X (Twitter) influencers to produce highly artistic visual posts, empowers marketing teams to quickly generate event promotional graphics, assists blog authors in creating featured images and illustrations, and helps knowledge-based creators transform key insights into easily shareable card visuals.

Set up Instructions
Fill in your API key from PiAPI.
Fill in the Basic Params node following the sticky note guidelines.
Set up a design template in Canvas Switchboard: make a simple template in Switchboard, click Curl, and copy the API code into the JSON of the Design in Canvas node.
Click Test Workflow and get a URL result.

Use Case
Here are some example settings to help you find a suitable way to use this workflow. You can change these settings for your specific purposes.

Basic Params Setting:
theme: Hope
scenario: Don't know about the future, confused and feel lost with tech development.
style: Cinematic Grandeur, Sci-Tech Aesthetic, 3D style
example:
1. March. Because of your faith, it will happen.
2. Something in me will save me.
3. To everyone carrying a heavy heart in silence. You are going to be okay.
4. Tomorrow will be better.
image prompt: A cinematic sci-fi metropolis where Deep Neural Nets control a hyper-connected society. Holographic interfaces glow in the air as robotic agents move among humans, symbolizing Industry 4.0. The scene contrasts organic human emotion with cold machine precision, rendered in a hyper-realistic 3D style with futuristic lighting. Epic wide shots showcase the grandeur of this civilization's industrial evolution.

Output Image:
More Example Results for Reference
by Lucas Walter
Who's it for
This template is perfect for sales professionals, marketers, and business developers who need to quickly gather contact information from company websites. Whether you're building prospect lists, researching potential partners, or collecting leads for outreach campaigns, this automation saves hours of manual email hunting.

What it does
This workflow automatically discovers and extracts email addresses from any website by:
Taking a website URL as input through a simple form
Using Firecrawl's mapping API to find relevant pages (about, contact, team pages)
Batch scraping those pages to extract email addresses
Intelligently handling common email obfuscations like "(at)" and "(dot)" (see the de-obfuscation sketch at the end of this description)
Returning a clean, deduplicated list of valid email addresses
The automation handles rate limiting, retries failed requests, and filters out invalid or hidden email addresses to ensure you get quality results.

How to set up
Get Firecrawl API access: sign up at firecrawl.dev and obtain your API key.
Configure credentials: in n8n, create a new HTTP Header Auth credential named "Firecrawl" with:
Header Name: Authorization
Header Value: Bearer YOUR_API_KEY
Import the workflow: copy the workflow JSON into your n8n instance.
Test the form: activate the workflow and test with a sample website URL.

How to customize the workflow
Search parameters: modify the search parameter in the map_website node to target different page types (currently searches for "about contact company authors team").
Extraction limits: adjust the limit parameter to scrape more or fewer pages per website.
Retry logic: the workflow includes retry logic with a 12-attempt limit; modify the check_retry_count node to change this.
Output format: the set_result node formats the final output; customize this to match your preferred data structure.
Email validation: the JSON schema in start_batch_scrape defines how emails are extracted; modify the prompt or schema for different extraction rules.

The workflow is designed to be reliable and handle common edge cases like rate limiting and failed requests, making it production-ready for regular use.
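Here is a minimal sketch of the de-obfuscation and deduplication logic, assuming obfuscations limited to "(at)"/"(dot)" variants; extend the replacements for other patterns you encounter.

```javascript
// Normalize obfuscated addresses, extract emails, and deduplicate them.
function cleanEmails(rawStrings) {
  const emailPattern = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;
  const found = new Set();
  for (const raw of rawStrings) {
    const normalized = raw
      .replace(/\s*\((?:at|AT)\)\s*/g, "@")
      .replace(/\s*\((?:dot|DOT)\)\s*/g, ".");
    for (const email of normalized.match(emailPattern) ?? []) {
      found.add(email.toLowerCase());
    }
  }
  return [...found];
}

// cleanEmails(["info (at) example (dot) com"]) -> ["info@example.com"]
```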
by PiAPI
What does this workflow do?
This workflow uses the GPT-4o API from PiAPI to automatically create front/side/top views of 3D models from text commands.

Who is this for?
3D Designers: quickly generate standardized orthographic views for design review.
E-commerce Operators: create multi-angle product display images.
3D Modeling Beginners: instantly produce basic reference views.

Step-by-step Instructions
Fill in the X-API-Key of your PiAPI account and the image prompt based on your inspiration.
Click Test workflow.
Get the image URL in the final node.

Output
by mahavishnu
This automation runs daily at 8:00 AM to automatically collect and organize business idea insights from IdeaBrowser.com into a structured Google Docs document.

The workflow performs the following actions:
Data Collection: Fetches the "idea of the day" content from ideabrowser.com/idea-of-the-day using authenticated HTTP requests.
Content Processing: Extracts the base idea path and generates links to all related insight pages, including value ladder, market analysis, proof signals, execution plans, and community insights (see the link-generation sketch at the end of this description). The workflow also cleans the HTML content to extract readable text.
Document Creation: Creates a new Google Docs document in a specified folder, with a timestamp and the idea name in the title.
Content Aggregation: Systematically visits each insight page (main idea page, value ladder, why now, proof signals, market gap, execution plan, value equation, value matrix, ACP, community signals, and keywords) and collects their content.
Document Population: Processes the collected content through markdown formatting and appends it to the Google Docs document, creating a comprehensive report of the daily business idea with all its associated insights.
Automated Scheduling: Runs automatically every day at 8 AM, ensuring fresh business idea analysis is delivered to your Google Drive without manual intervention.

This automation is perfect for entrepreneurs, business analysts, or anyone who wants to stay updated with curated business ideas and their detailed market analysis in an organized, searchable format.
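As a sketch of the link-generation step: the slugs below are illustrative placeholders derived from the insight names listed above, not confirmed IdeaBrowser URL paths, so verify them against the real site.

```javascript
// Build the list of insight-page URLs from the idea's base path.
const INSIGHT_SUFFIXES = [
  "", // the main idea page itself
  "value-ladder",
  "why-now",
  "proof-signals",
  "market-gap",
  "execution-plan",
  "value-equation",
  "value-matrix",
  "acp",
  "community-signals",
  "keywords",
];

function buildInsightLinks(baseIdeaPath) {
  // baseIdeaPath e.g. "https://ideabrowser.com/idea/some-idea-slug"
  return INSIGHT_SUFFIXES.map((suffix) =>
    suffix ? `${baseIdeaPath}/${suffix}` : baseIdeaPath
  );
}
```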
by Thomas
🧠 Writes original, thought-provoking blog posts using AI
🕓 Runs every 12 hours automatically
✍️ Publishes directly to a Ghost blog with title, tags, and SEO meta

🔧 Features
Scheduled every 12 hours
OpenAI generates a multi-part blog post with metadata
Markdown-compatible output (no HTML)
Automatically published to Ghost CMS using the authenticated API (🔐 no hardcoded keys)
Fully modular and general-purpose: edit the prompt for any blog theme!

⚙️ Nodes Overview

| Step | Node Type | Purpose |
|---|---|---|
| 1️⃣ | Schedule Trigger | Runs every 12 hours |
| 2️⃣ | OpenAI | Generates blog post + meta info |
| 3️⃣ | Code | Extracts content, title, meta, and tags |
| 4️⃣ | Code | Formats content as a Ghost mobiledoc payload (see the sketch below) |
| 5️⃣ | HTTP Request | Publishes the post to Ghost via the Admin API |

📝 OpenAI Prompt (Generalized)
Write a high-quality blog post on a creative or thought-provoking topic. The tone should be engaging and immersive. Length: 2–4 paragraphs. Then add a brief paragraph offering an alternative perspective or logical counterpoint. Finally, generate:
Blog post title
Meta description
5 tags

🔐 Notes
✅ No hardcoded API keys
🛠️ Ghost Admin API credentials must be set using the Credential Manager
📌 Prompt and Ghost URL are both easily customizable
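For step 4, here is a minimal sketch of a mobiledoc-formatting Code node, assuming the post body is Markdown wrapped in a single mobiledoc markdown card; check the mobiledoc version and Admin API path against your Ghost release.

```javascript
// Wrap Markdown content in a mobiledoc document for the Ghost Admin API.
function toMobiledoc(markdown) {
  return JSON.stringify({
    version: "0.3.1",
    atoms: [],
    cards: [["markdown", { markdown }]],
    markups: [],
    sections: [[10, 0]], // section type 10 = card, index 0 = the markdown card
  });
}

// The HTTP Request node then POSTs to /ghost/api/admin/posts/ with a body like:
// { posts: [{ title, mobiledoc: toMobiledoc(content), tags, status: "published" }] }
```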
by Samir Saci
Tags: Supply Chain, Logistics, Geocoding, Transportation, GPS, API

Context
Hi! I'm Samir, a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen Consulting. I help companies improve their logistics operations using data, AI, and automation to reduce costs and minimize environmental footprint.
> Let's use n8n to analyze geographical data!
📬 For business inquiries, you can find me on LinkedIn

Who is this template for?
This workflow is designed for logistics and transport teams, as well as market analytics experts, who need to process geocoding data (getting GPS coordinates from addresses). Ideal for:
Transportation planning
Supply chain network design
Route optimization studies

How does it work?
This n8n workflow connects to a Google Sheet where you store addresses with country codes, and uses the OpenRouteService API to retrieve:
📏 GPS coordinates (longitude, latitude)
🗺️ Neighbourhood, city, and other local information

Steps:
✅ Load addresses with country codes
🔁 Loop through each record
🚚 Query OpenRouteService (see the request sketch at the end of this description)
🧾 Extract and store results: longitude, latitude, neighbourhood
📤 Update the Google Sheet with new values

What do I need to get started?
This workflow is beginner-friendly and requires:
A Google Sheet with addresses and country codes
A free OpenRouteService API key 👉 Get one here

Next Steps
🗒️ Follow the sticky notes inside the workflow to:
Select your sheet
Plug in your API key
Launch the flow!
🎥 Check the Tutorial

🚀 You can customise the workflow to:
Add additional outputs from the API
Connect to your TMS via API or EDI

This template was built using n8n v1.93.0
Submitted: June 1, 2025
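Here is a minimal sketch of the geocoding request against OpenRouteService's Pelias-based /geocode/search endpoint; the response fields shown are simplified, so inspect a real response before mapping your sheet columns.

```javascript
// Geocode one address and return longitude, latitude, and neighbourhood.
const ORS_KEY = process.env.ORS_API_KEY;

async function geocode(address, countryCode) {
  const url = new URL("https://api.openrouteservice.org/geocode/search");
  url.searchParams.set("api_key", ORS_KEY);
  url.searchParams.set("text", address);
  url.searchParams.set("boundary.country", countryCode); // e.g. "FR"
  url.searchParams.set("size", "1"); // best match only

  const res = await fetch(url);
  const data = await res.json();
  const best = data.features?.[0];
  if (!best) return null; // no match for this address

  const [longitude, latitude] = best.geometry.coordinates;
  return { longitude, latitude, neighbourhood: best.properties.neighbourhood };
}
```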
by Julien DEL RIO
📌 Description
This workflow serves a 1x1 transparent PNG image via a webhook, which can be embedded in an email to track when the email is opened. When the image is loaded by the recipient's email client, the webhook is triggered, optionally capturing a userId to identify who opened the email.

📂 Workflow Steps
Webhook Trigger (Request img)
Path: /webhook/change-with-your-id
Triggered by an HTTP request (e.g., when the image is loaded in an email).
Accepts a query parameter id to identify the recipient.
Set Base64 Data (Create data pix)
Creates a variable data containing a Base64-encoded transparent PNG image (1x1 pixel).
Convert to Binary (Create img bin)
Converts the Base64 data string into a binary file and sets the MIME type to image/png.
Respond to Webhook (Respond to Webhook)
Sends the binary image file in the HTTP response.
Logging (Do anything to log)
Placeholder node to log or process the id or other request metadata. You can access the id using `{{ $json.query.id }}`, and you can read any other query parameter the same way.

✉️ How to Use in Emails
Embed the image in an HTML email as shown in the sketch below. When the email is opened and the image is loaded, the workflow will be triggered.

🛠️ Notes
Some email clients block images by default; this may prevent tracking.
You can enhance the workflow to store open events in a database and log the timestamp, IP, or user agent.
Make sure to comply with data privacy and consent regulations (e.g., GDPR).
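A minimal embed sketch, assuming your n8n instance is reachable at a public URL; the host and id value are placeholders for your own instance and recipient identifier:

```html
<!-- Tracking pixel: host and id value are placeholders. -->
<img src="https://your-n8n-host/webhook/change-with-your-id?id=user-123"
     width="1" height="1" alt="" style="display:none;">
```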
by Harshil Agrawal
This workflow demonstrates how to merge data from different executions. The Merge Data Function node fetches the data from different executions of the RSS Feed Read node and merges them under a single object.

Note: If you want to process the merged data further, you will have to convert the single merged item back into multiple items that n8n understands, as sketched below.
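A minimal sketch for a Function/Code node, assuming the merged object stores the collected entries in an array property called `data`; adjust the property name to whatever your Merge Data node actually produces.

```javascript
// Fan the single merged item back out into one n8n item per entry.
return items[0].json.data.map((entry) => ({ json: entry }));
```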