by PUQcloud
**Setting up the n8n workflow**

**Overview**

The Docker n8n WHMCS module uses a specially designed n8n workflow to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

**Prerequisites**

You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

**Installation Steps**

*Install the Required Workflow on n8n*

You have two options:

- **Option 1: Use the latest version from the n8n marketplace.** The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n
- **Option 2: Manual installation.** Each module version ships with a workflow template file. Import this template manually into your n8n server.

**n8n Workflow API Backend Setup for WHMCS/WISECP**

*Configure API Webhook and SSH Access*

1. Create a Basic Auth credential for the Webhook API block in n8n.
2. Create an SSH credential for accessing a server with Docker installed.

*Modify Template Parameters*

In the Parameters block of the template, update the following settings:

- **server_domain** – must match the domain of the WHMCS/WISECP Docker server.
- **clients_dir** – directory where user data related to Docker and disks will be stored.
- **mount_dir** – default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters:

- screen_left
- screen_right

*Deploy-docker-compose*

In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which is generated in the following scenarios:

- When the service is created
- When the service is unlocked
- When the service is updated

*nginx*

In the nginx element, you can modify the configuration parameters of the web interface proxy server:

- The **main** section lets you add custom parameters to the `server` block of the proxy server configuration file.
- The **main_location** section contains settings that are added to the `location /` block of the proxy configuration. Here you can define custom headers and other parameters specific to the root location. A hedged sketch of the generated file appears at the end of this section.

*Bash Scripts*

Docker containers and all related procedures on the server are managed by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts live in elements directly connected to the SSH element. You have full control over every script and can modify or execute it as needed.
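As referenced in the nginx section above, here is a minimal sketch of where **main** and **main_location** directives land in the generated proxy configuration. The directive values shown are illustrative assumptions, not module defaults:

```nginx
server {
    # listen/server_name and upstream wiring are emitted by the workflow

    # directives from the "main" section land here (example values)
    client_max_body_size 64m;

    location / {
        # directives from the "main_location" section land here (example values)
        proxy_set_header X-Real-IP $remote_addr;
        proxy_read_timeout 300s;
    }
}
```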
by Incrementors
**LinkedIn & Indeed Job Scraper with Bright Data & Google Sheets Export**

**Overview**

This n8n workflow automates the process of scraping job listings from both LinkedIn and Indeed simultaneously, combining the results, and exporting the data to Google Sheets for comprehensive job market analysis. It integrates with Bright Data for professional web scraping and Google Sheets for data storage, and provides intelligent status monitoring with retry mechanisms.

**Workflow Components**

1. 📝 **Trigger Input Form**
   - **Type**: Form Trigger
   - **Purpose**: Initiates the workflow with user-defined job search criteria
   - **Input Fields**: City (required), Job Title (required), Country (required), Job Type (optional dropdown: Full-Time, Part-Time, Remote, WFH, Contract, Internship, Freelance)
   - **Function**: Captures user requirements to start the dual-platform job scraping process

2. 🧠 **Format Input for APIs**
   - **Type**: Code Node (JavaScript)
   - **Purpose**: Prepares and formats user input for both the LinkedIn and Indeed APIs (see the sketch after this section)
   - **Processing**: Standardizes location and job title formats, creates API-specific input structures, generates custom output field configurations
   - **Function**: Ensures compatibility with both Bright Data datasets

3. 🚀 **Start Indeed Scraping**
   - **Type**: HTTP Request (POST)
   - **Purpose**: Initiates Indeed job scraping via Bright Data
   - **Endpoint**: https://api.brightdata.com/datasets/v3/trigger
   - **Parameters**: Dataset ID: gd_lpfll7v5hcqtkxl6l, Include errors: true, Type: discover_new, Discover by: keyword, Limit per input: 2
   - **Custom Output Fields**: jobid, company_name, job_title, description_text, location, salary_formatted, company_rating, apply_link, url, date_posted, benefits

4. 🚀 **Start LinkedIn Scraping**
   - **Type**: HTTP Request (POST)
   - **Purpose**: Initiates LinkedIn job scraping via Bright Data (parallel execution)
   - **Endpoint**: https://api.brightdata.com/datasets/v3/trigger
   - **Parameters**: Dataset ID: gd_l4dx9j9sscpvs7no2, Include errors: true, Type: discover_new, Discover by: keyword, Limit per input: 2
   - **Custom Output Fields**: job_posting_id, job_title, company_name, job_location, job_summary, job_employment_type, job_base_pay_range, apply_link, url, job_posted_date, company_logo

5. 🔄 **Check Indeed Status**
   - **Type**: HTTP Request (GET)
   - **Purpose**: Monitors Indeed scraping job progress
   - **Endpoint**: https://api.brightdata.com/datasets/v3/progress/{snapshot_id}
   - **Function**: Checks if Indeed dataset scraping is complete

6. 🔄 **Check LinkedIn Status**
   - **Type**: HTTP Request (GET)
   - **Purpose**: Monitors LinkedIn scraping job progress
   - **Endpoint**: https://api.brightdata.com/datasets/v3/progress/{snapshot_id}
   - **Function**: Checks if LinkedIn dataset scraping is complete

7. ⏱️ **Wait Nodes (60 seconds each)**
   - **Type**: Wait Node
   - **Purpose**: Implements an intelligent polling mechanism
   - **Duration**: 1 minute
   - **Function**: Pauses the workflow before rechecking scraping status to prevent API overload

8. ✅ **Verify Indeed Completion**
   - **Type**: IF Condition
   - **Purpose**: Evaluates Indeed scraping completion status
   - **Condition**: status === "ready"
   - **Logic**: True: proceeds to data validation; False: loops back to the status check with a wait

9. ✅ **Verify LinkedIn Completion**
   - **Type**: IF Condition
   - **Purpose**: Evaluates LinkedIn scraping completion status
   - **Condition**: status === "ready"
   - **Logic**: True: proceeds to data validation; False: loops back to the status check with a wait

10. 📊 **Validate Indeed Data**
    - **Type**: IF Condition
    - **Purpose**: Ensures Indeed returned job records
    - **Condition**: records !== 0
    - **Logic**: True: proceeds to fetch Indeed data; False: skips Indeed data retrieval

11. 📊 **Validate LinkedIn Data**
    - **Type**: IF Condition
    - **Purpose**: Ensures LinkedIn returned job records
    - **Condition**: records !== 0
    - **Logic**: True: proceeds to fetch LinkedIn data; False: skips LinkedIn data retrieval

12. 📥 **Fetch Indeed Data**
    - **Type**: HTTP Request (GET)
    - **Purpose**: Retrieves the final Indeed job listings
    - **Endpoint**: https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}
    - **Format**: JSON
    - **Function**: Downloads completed Indeed job data

13. 📥 **Fetch LinkedIn Data**
    - **Type**: HTTP Request (GET)
    - **Purpose**: Retrieves the final LinkedIn job listings
    - **Endpoint**: https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}
    - **Format**: JSON
    - **Function**: Downloads completed LinkedIn job data

14. 🔗 **Merge Results**
    - **Type**: Merge Node
    - **Purpose**: Combines the Indeed and LinkedIn job results
    - **Mode**: Merge all inputs
    - **Function**: Creates a unified dataset from both platforms

15. 📊 **Save to Google Sheet**
    - **Type**: Google Sheets Node
    - **Purpose**: Exports the combined job data for analysis
    - **Operation**: Append rows
    - **Target**: "Compare" sheet in the specified Google Sheet document
    - **Data Mapping**: Job Title, Company Name, Location, Job Detail (description), Apply Link, Salary, Job Type, Discovery Input

**Workflow Flow**

```
Input Form → Format APIs → [Indeed Trigger] + [LinkedIn Trigger]
                                  ↓                 ↓
                            Check Status       Check Status
                                  ↓                 ↓
                              Wait 60s          Wait 60s
                                  ↓                 ↓
                            Verify Ready       Verify Ready
                                  ↓                 ↓
                            Validate Data      Validate Data
                                  ↓                 ↓
                            Fetch Indeed       Fetch LinkedIn
                                  ↓                 ↓
                                  └── Merge Results ┘
                                          ↓
                                 Save to Google Sheet
```

**Configuration Requirements**

*API Keys & Credentials*

- **Bright Data API Key**: required for both LinkedIn and Indeed scraping
- **Google Sheets OAuth2**: for data storage and export access
- **n8n Form Webhook**: for user input collection

*Setup Parameters*

- **Google Sheet ID**: target spreadsheet identifier
- **Sheet Name**: "Compare" tab for job data export
- **Form Webhook ID**: user input form identifier
- **Dataset IDs**: Indeed: gd_lpfll7v5hcqtkxl6l; LinkedIn: gd_l4dx9j9sscpvs7no2

**Key Features**

*Dual Platform Scraping*

- Simultaneous LinkedIn and Indeed job searches
- Parallel processing for faster results
- Comprehensive job market coverage
- Platform-specific field extraction

*Intelligent Status Monitoring*

- Real-time scraping progress tracking
- Automatic retry mechanisms with 60-second intervals
- Data validation before processing
- Error handling and timeout management

*Smart Data Processing*

- Unified data format from both platforms
- Intelligent field mapping and standardization
- Duplicate detection and removal
- Rich metadata extraction

*Google Sheets Integration*

- Automatic data export and storage
- Organized comparison format
- Historical job search tracking
- Easy sharing and collaboration

*Form-Based Interface*

- User-friendly job search form
- Flexible job type filtering
- Multi-country support
- Real-time workflow triggering

**Use Cases**

*Personal Job Search*

- Comprehensive multi-platform job hunting
- Automated daily job searches
- Organized opportunity comparison
- Application tracking and management

*Recruitment Services*

- Client job search automation
- Market availability assessment
- Competitive salary analysis
- Bulk candidate sourcing

*Market Research*

- Job market trend analysis
- Salary benchmarking studies
- Skills demand assessment
- Geographic opportunity mapping

*HR Analytics*

- Competitor hiring intelligence
- Role requirement analysis
- Compensation benchmarking
- Talent market insights

**Technical Notes**

- **Polling Interval**: 60-second status checks for both platforms
- **Result Limiting**: maximum 2 jobs per input per platform
- **Data Format**: JSON with structured field mapping
- **Error Handling**: comprehensive error tracking in all API requests
- **Retry Logic**: automatic status rechecking until completion
- **Country Support**: adaptable domain selection (indeed.com, fr.indeed.com)
- **Form Validation**: required fields with optional job type filtering
- **Merge Strategy**: combines all results from both platforms
- **Export Format**: standardized Google Sheets columns for easy analysis

**Sample Data Output**

| Field | Description | Example |
|-------|-------------|---------|
| Job Title | Position title | "Senior Software Engineer" |
| Company Name | Hiring organization | "Tech Solutions Inc." |
| Location | Job location | "San Francisco, CA" |
| Job Detail | Full description | "We are seeking a senior developer..." |
| Apply Link | Direct application URL | "https://company.com/careers/123" |
| Salary | Compensation info | "$120,000 - $150,000" |
| Job Type | Employment details | "Full-time, Remote" |

**Setup Instructions**

1. Import Workflow: copy the JSON configuration into n8n
2. Configure Bright Data: add API credentials for both datasets
3. Setup Google Sheets: create the target spreadsheet and configure OAuth
4. Update References: replace placeholder IDs with your actual values
5. Test Workflow: submit a test form and verify the data export
6. Activate: enable the workflow and share the form URL with users

For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
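As referenced in component 2, here is a hedged sketch of what the Format Input for APIs code node can look like. The Bright Data input field names used below (`country`, `domain`, `keyword`, `location`, `job_type`) are assumptions based on the dataset descriptions above, not values copied from the workflow:

```javascript
// n8n Code node (run once for all items) — hypothetical sketch.
const form = $input.first().json; // output of the trigger form

// Indeed discovery input (field names are assumptions to verify)
const indeedInput = {
  country: form.Country,
  domain: 'indeed.com', // adaptable per country, e.g. fr.indeed.com
  keyword: form['Job Title'],
  location: form.City,
  ...(form['Job Type'] ? { job_type: form['Job Type'] } : {}),
};

// LinkedIn discovery input (field names are assumptions to verify)
const linkedinInput = {
  location: `${form.City}, ${form.Country}`,
  keyword: form['Job Title'],
  ...(form['Job Type'] ? { job_type: form['Job Type'] } : {}),
};

// Downstream HTTP Request nodes read these two objects.
return [{ json: { indeedInput, linkedinInput } }];
```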
by Lucas Peyrin
**How it works**

This template is a hands-on, practical exam designed to test your understanding of the fundamental JSON data types. It's the perfect way to solidify your knowledge after learning the basics. Think of it as the "driver's test" that comes after the "theory lesson". You'll be given a series of tasks, and the workflow will automatically check your answers, providing instant feedback.

The test is broken down into six sequential challenges, each focusing on a core data type:

- **String**: writing text values correctly.
- **Number**: using integers and decimals.
- **Boolean**: working with true and false.
- **Null**: representing a non-existent value.
- **Array**: creating ordered lists of data.
- **Object**: building nested key-value structures.

For each challenge, you'll modify a Set node with the correct JSON syntax. When you execute the workflow, a corresponding IF node will validate your input. A green path means you passed and can move to the next challenge. A red path means you need to try again!

**Set up steps**

Setup time: < 1 minute

This workflow is a self-contained test and requires no setup or credentials.

1. Read the instructions on the main sticky note to understand the goal.
2. Start with the first challenge, "Test - String".
3. Activate and modify the node according to the instructions on the purple sticky note next to it.
4. Click "Execute Workflow".
5. If the execution path is green, you've passed! Move on to the next "Test" node in the sequence to continue.
6. If the path is red, read the hint in the error message and try again.
7. Repeat the process until you reach the final success message.

Good luck!
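For quick reference, here is a single JSON document that uses all six types correctly. The values are illustrative only, not the answers the challenges expect:

```json
{
  "name": "my first workflow",
  "version": 1.5,
  "active": true,
  "archived_at": null,
  "tags": ["learning", "json"],
  "owner": { "first_name": "Ada", "team": "automation" }
}
```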
by Calistus Christian
**What this template does**

Sends you an email (via Gmail) whenever any workflow that references this one fails. The message includes the workflow name/ID, execution URL, last node executed, and the error message.

**Why it's useful**

Centralizes error notifications so you notice failures immediately and can jump straight to the failed execution.

**Prerequisites**

- A Gmail account connected through n8n's Gmail node credentials.
- This workflow set as the Error Workflow inside the workflows you want to monitor.

**How it works**

1. The Error Trigger starts this workflow whenever a linked workflow fails.
2. Gmail (Send → Message) composes and sends an email using details from the Error Trigger.

**Notes**

- Error workflows don't need to be activated to work.
- You can't test them by running them manually; errors must occur in an automatically run workflow (cron, webhook, etc.).
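A minimal sketch of a message body for the Gmail node, using the fields the Error Trigger exposes. The field paths follow n8n's documented Error Trigger output; double-check them against your n8n version:

```
Subject: ❌ Workflow failed: {{ $json.workflow.name }}

Workflow:  {{ $json.workflow.name }} (ID: {{ $json.workflow.id }})
Execution: {{ $json.execution.url }}
Last node: {{ $json.execution.lastNodeExecuted }}
Error:     {{ $json.execution.error.message }}
```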
by Alfred Nutile
**How it works**

This workflow provides a streamlined process for uploading files to Digital Ocean Spaces, making them publicly accessible. The process happens in three main steps:

1. The user submits the form with a file (in my case, images I use in my SEO tags).
2. The file is automatically uploaded to Digital Ocean Spaces using S3-compatible storage.
3. Form completion confirmation is provided.

**Setup steps**

Initial setup typically takes 5-10 minutes.

1. Configure your Digital Ocean Spaces credentials and bucket settings.
2. Test the upload functionality with a small sample file.
3. Verify public access permissions are working as expected.

**Important notes**

- Credentials are tricky; check the screenshot above for how I set the URL, bucket, etc. I am just using the S3 node.
- Set the ACL as seen below.

**Troubleshooting**

- The bucket name might be incorrect.
- The region might be wrong.
- Check Space permissions if uploads fail.
- Verify API credentials are correctly configured.

You can see a video here (live in 24 hours): https://youtu.be/pYOpy3Ntt1o
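Since the credentials are the tricky part, here is a sketch of typical S3-credential values for a Space. The region and bucket names are placeholders; verify the endpoint format against your Space's settings page:

```
n8n S3 credential (assumed values)
  Endpoint:          https://nyc3.digitaloceanspaces.com   (region host only, no bucket)
  Region:            nyc3
  Access Key ID:     <Spaces access key>
  Secret Access Key: <Spaces secret key>

S3 node (upload operation)
  Bucket: my-assets-bucket     (placeholder)
  ACL:    Public Read          (what makes the uploaded file publicly accessible)
```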
by DUBCOM
**Workflow: Snapshot Contabo**

**How it works**

This workflow automates daily backups (snapshots) of VPS instances hosted on Contabo. Each day at midnight, it checks for existing snapshots and ensures that only the latest backups are retained by removing older ones. It provides a seamless, hands-off backup process to keep your data secure.

**Setup steps**

Setting up this workflow is quick, typically taking about 10-15 minutes. The essential part of the setup is providing the necessary credentials, which you can easily retrieve from your Contabo control panel.

1. **Import the workflow**: download and upload the workflow JSON into n8n.
2. **Configure credentials**: add CLIENT_ID, CLIENT_SECRET, API_USER, and API_PASSWORD in the credential node.
3. **Activate the workflow**: enable it to run automatically at midnight every day.

**Flow overview**

- **Schedule Trigger (00:00 daily)**: automatically initiates the workflow.
- **Formatted Date**: prepares a timestamp for naming the snapshot.
- **List Snapshots**: verifies whether an existing snapshot is available for each VPS.
- **Conditional logic**: no snapshot? Proceeds to create a new one. Snapshot found? Deletes the old snapshot before creating a new one.

**Key points**

- **Snapshot retention**: old snapshots are deleted to ensure only the latest backups are stored.
- **Unique identifiers**: UUIDs are used to track and guarantee unique operations.
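For orientation, a hedged sketch of the two Contabo API calls the workflow's HTTP nodes revolve around. The endpoints follow Contabo's public API documentation; verify the paths and the required `x-request-id` header before relying on this:

```javascript
// Hypothetical sketch, not the workflow's exact nodes.
const { CLIENT_ID, CLIENT_SECRET, API_USER, API_PASSWORD } = process.env;
const instanceId = '12345'; // placeholder VPS instance ID

// 1) Exchange credentials for an access token (OAuth2 password grant).
const tokenRes = await fetch(
  'https://auth.contabo.com/auth/realms/contabo/protocol/openid-connect/token',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'password',
      client_id: CLIENT_ID,
      client_secret: CLIENT_SECRET,
      username: API_USER,
      password: API_PASSWORD,
    }),
  }
);
const { access_token } = await tokenRes.json();

// 2) List snapshots for one instance; the workflow then deletes the old
//    snapshot (if any) and creates a fresh one via the same base path.
const snapRes = await fetch(
  `https://api.contabo.com/v1/compute/instances/${instanceId}/snapshots`,
  {
    headers: {
      Authorization: `Bearer ${access_token}`,
      'x-request-id': crypto.randomUUID(), // one UUID per request
    },
  }
);
const snapshots = await snapRes.json();
```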
by Lucía Maio Brioso
**🧑💼 Who is this for?**

This workflow is for any YouTube user who wants to bulk delete all playlists from their own channel, whether to start fresh, clean up old content, or prepare the account for a new purpose.

It's useful for:

- Creators reorganizing their channel
- People transferring content to another account
- Anyone who wants to avoid deleting playlists manually, one by one

**🧠 What problem is this workflow solving?**

YouTube does not offer a built-in way to delete multiple playlists at once. If you have dozens or hundreds of playlists, removing them manually is extremely time-consuming. This workflow automates the entire deletion process in seconds, saving you hours of repetitive effort.

**⚙️ What this workflow does**

- Connects to your YouTube account
- Fetches all playlists you've created (excluding system playlists)
- Deletes them **one by one** automatically

> ⚠️ This action is irreversible. Once a playlist is deleted, it cannot be recovered. Use with caution.

**🛠️ Setup**

1. 🔐 Create a YouTube OAuth2 credential in n8n for your channel.
2. 🧭 Assign the credential to both YouTube nodes.
3. ✅ Click "Test workflow" to execute.

> 🟨 By default, this workflow deletes everything. If you want to be more selective, see the customization tips below.

**🧩 How to customize this workflow to your needs**

- ✅ **Add a confirmation flag**: insert a Set node with a custom field like confirm_delete = true, and follow it with an IF node to prevent accidental execution.
- ✂️ **Delete only some playlists**: add a Filter node after fetching playlists; you can match by title, ID, or keyword (e.g. only delete playlists containing "old"). A sketch appears at the end of this section.
- 🛑 **Add a pause before deletion**: insert a Wait or NoOp node to give yourself a moment to cancel before it runs.
- 🔁 **Adapt to scheduled cleanups**: use a Cron trigger if you want to periodically clear temporary playlists.
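If you go the selective route, here is a minimal sketch of a Code node that keeps only the playlists whose title contains a keyword. The `json.snippet.title` path follows the YouTube API's playlist resource shape; adjust it if your node outputs a flattened structure:

```javascript
// n8n Code node (run once for all items) — hypothetical filter sketch.
const keyword = 'old';

return $input.all().filter((item) => {
  // Fall back to a flat "title" field if the node flattens the resource.
  const title = item.json.snippet?.title ?? item.json.title ?? '';
  return title.toLowerCase().includes(keyword);
});
```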
by David Ashby
Complete MCP server exposing 1 IP2Proxy Proxy Detection API operation to AI agents.

**⚡ Quick Setup**

> Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add IP2Proxy Proxy Detection credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

**🔧 How it Works**

This workflow converts the IP2Proxy Proxy Detection API into an MCP-compatible interface for AI agents.

• **MCP Trigger**: serves as your server endpoint for AI agent requests
• **HTTP Request Nodes**: handle API calls to https://api.ip2proxy.com
• **AI Expressions**: automatically populate parameters via $fromAI() placeholders
• **Native Integration**: returns responses directly to the AI agent

**📋 Available Operations (1 total)**

🔧 General (1 endpoint)
• GET /: Check Proxy IP

**🤖 AI Integration**

**Parameter Handling**: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

**Response Format**: native IP2Proxy Proxy Detection API responses with full data structure

**Error Handling**: built-in n8n HTTP request error management

**💡 Usage Examples**

Connect this MCP server to any AI agent or workflow:
• **Claude Desktop**: add the MCP server URL to its configuration
• **Cursor**: add the MCP server SSE URL to its configuration
• **Custom AI Apps**: use the MCP URL as a tool endpoint
• **API Integration**: direct HTTP calls to MCP endpoints

**✨ Benefits**

• **Zero Setup**: no parameter mapping or configuration needed
• **AI-Ready**: built-in $fromAI() expressions for all parameters
• **Production Ready**: native n8n HTTP request handling and logging
• **Extensible**: easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
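To make the parameter handling concrete, a sketch of how a query parameter on the HTTP Request node can be populated so the AI agent supplies the value at call time. The `$fromAI()` signature follows n8n's documentation; the parameter name `ip` is an assumption matching IP2Proxy's public query API:

```
// Value of the "ip" query parameter on the HTTP Request node (hypothetical):
{{ $fromAI('ip', 'The IP address to check for proxy status', 'string') }}
```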
by Lakshit Ukani
**Automated Instagram posting with Facebook Graph API and content routing**

**Who is this for?**

This workflow is perfect for social media managers, content creators, digital marketing agencies, and small business owners who need to automate their Instagram posting process. Whether you're managing multiple client accounts or maintaining consistent personal branding, this template streamlines your social media operations.

**What problem is this workflow solving?**

Manual Instagram posting is time-consuming and prone to inconsistency. Content creators struggle with:

- Remembering to post at optimal times
- Managing different content types (images, videos, reels, stories, carousels)
- Maintaining posting schedules across multiple accounts
- Ensuring content is properly formatted for each post type

This workflow eliminates manual posting, reduces human error, and ensures consistent content delivery across all Instagram format types.

**What this workflow does**

The workflow automatically publishes content to Instagram using Facebook's Graph API, with intelligent routing based on content type. It handles image posts, video stories, Instagram reels, carousel posts, and story content. The system creates media containers, monitors processing status, and publishes content when ready. It supports both HTTP request and Facebook SDK methods for maximum reliability and includes automatic retry mechanisms for failed uploads.

**Setup**

1. Connect your Instagram Business Account to a Facebook Page
2. Configure Facebook Graph API credentials with instagram_basic permissions
3. Update the "Configure Post Settings" node with your Instagram Business Account ID
4. Set media URLs and captions in the configuration section
5. Choose the post type (http_image, fb_reel, http_carousel, etc.)
6. Test the workflow with sample content before going live

**How to customize this workflow to your needs**

- Modify the post_type variable to control content routing: use http_* prefixes for direct API calls and fb_* prefixes for Facebook SDK calls
- **Use both HTTP and Facebook SDK nodes as fallback mechanisms**: if one method fails, automatically try the other for a maximum success rate
- Add scheduling by connecting a Cron trigger
- Integrate with Google Sheets or Airtable for content management
- Connect webhook triggers for automated posting from external systems
- Customize wait times based on your content file sizes
- **Set up error handling** to switch between HTTP and Facebook SDK methods when API limits are reached
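For reference, a hedged sketch of the three Graph API calls behind the container/publish cycle described above (an image post over plain HTTP). The endpoints follow Meta's Instagram content-publishing documentation; the API version and the token's permissions should be checked against your app setup:

```javascript
// Hypothetical sketch of the Instagram publishing sequence.
const TOKEN = process.env.FB_ACCESS_TOKEN;      // placeholder
const IG_USER_ID = '17841400000000000';         // placeholder Business Account ID
const BASE = 'https://graph.facebook.com/v19.0';
const mediaUrl = 'https://example.com/image.jpg';
const caption = 'Hello from n8n';

// 1) Create a media container for the image.
const container = await fetch(
  `${BASE}/${IG_USER_ID}/media?image_url=${encodeURIComponent(mediaUrl)}` +
    `&caption=${encodeURIComponent(caption)}&access_token=${TOKEN}`,
  { method: 'POST' }
).then((r) => r.json());

// 2) Poll the container until processing finishes (the workflow's wait/retry loop).
//    A production version should also bail out on status_code === 'ERROR'.
let status;
do {
  await new Promise((r) => setTimeout(r, 5000)); // wait before rechecking
  status = await fetch(
    `${BASE}/${container.id}?fields=status_code&access_token=${TOKEN}`
  ).then((r) => r.json());
} while (status.status_code === 'IN_PROGRESS');

// 3) Publish the container once it reports FINISHED.
await fetch(
  `${BASE}/${IG_USER_ID}/media_publish?creation_id=${container.id}&access_token=${TOKEN}`,
  { method: 'POST' }
);
```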
by Cyril Nicko Gaspar
**🔍 Email Lookup with Google Search from a Postgres Database**

This n8n workflow enriches seller data stored in a Postgres database by performing automated Google search lookups. It uses Bright Data's Web Unlocker to bypass search result restrictions and the HTML Extract node to parse and extract relevant information from webpages. The main purpose of this workflow is to discover missing contact details, company domains, and secondary emails for businesses or sellers based on existing database entries.

**🎯 Problem This Workflow Solves**

Manually searching for missing seller or business details, like secondary emails, websites, or domain names, can be time-consuming and inefficient, especially for large datasets. This workflow automates the search and data enrichment process, significantly reducing manual effort while improving the quality and completeness of your seller database.

**✅ Prerequisites**

Before using this template, make sure the following requirements are met:

✔️ A Bright Data account with access to the Web Unlocker or Amazon Scraper API
✔️ A valid Bright Data API key
✔️ An active PostgreSQL database with seller data
✔️ An n8n self-hosted instance (recommended for using community nodes like n8n-nodes-brightdata)
✔️ The n8n-nodes-brightdata package installed (custom node for Bright Data integration)

**⚙️ Setup Instructions**

*Step 1: Prepare Your Postgres Table*

Create a table in Postgres with the following structure (you can adjust field names if needed):

```sql
CREATE TABLE sellers (
  seller_id SERIAL PRIMARY KEY,
  seller_name TEXT,
  primary_email TEXT,
  company_info TEXT,
  trade_name TEXT,
  business_address TEXT,
  coc_number TEXT,
  vat_number TEXT,
  commercial_register TEXT,
  secondary_email TEXT,
  domain TEXT,
  seller_slug TEXT,
  source TEXT
);
```

*Step 2: Set Up Web Unlocker on Bright Data*

1. Go to your Bright Data dashboard.
2. Navigate to Proxies & Scraping → Web Unlocker.
3. Create a new zone, selecting Web Unlocker API under Scraping Solutions.
4. Whitelist your server IP if required.

*Step 3: Generate an API Key*

1. In the Bright Data dashboard, go to the API section.
2. Generate a new API key.
3. In n8n, create HTTP Request credentials using Bearer Authentication with the API key.

*Step 4: Install the Bright Data Node in n8n*

1. In your n8n self-hosted instance, go to Settings → Community Nodes.
2. Search for and install n8n-nodes-brightdata.

**🔄 Workflow Functionality**

- 🔁 **Trigger**: can be set to run on a schedule (e.g., daily) or manually.
- 📥 **Read**: fetches seller records from the Postgres table.
- 🌐 **Search**: uses Bright Data to perform a Google search based on seller_name, company_info, or trade_name.
- 🧾 **Extract**: parses the HTML content using the HTML Extract node to identify potential websites and email addresses.
- 📝 **Update**: writes enriched data (like domain or secondary_email) back to the Postgres table (a sketch appears at the end of this section).

**💡 Use Cases**

- Lead enrichment for e-commerce sellers
- Domain and contact info discovery for B2B databases
- Email and web domain verification for CRM systems
- Market research automation

**🛠️ Customization Tips**

- Enhance the parsing logic in the HTML Extract node to look for phone numbers, LinkedIn profiles, or social media links.
- Modify the search query logic to include additional parameters like location or industry for more refined results.
- Integrate additional APIs (e.g., Hunter.io, Clearbit) for email validation or social profile enrichment.
- Add filtering to skip entries that already have a domain or secondary_email.
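To make the write-back step concrete, a minimal sketch of the UPDATE the final Postgres node can run once the HTML Extract node has produced a domain and a secondary email. Column names match the table above; how the `$1` to `$3` parameters are bound depends on your query-parameter mapping in the node:

```sql
-- Hypothetical write-back for one enriched seller row.
UPDATE sellers
SET domain          = $1,  -- e.g. domain extracted from the top search result
    secondary_email = $2,  -- e.g. first email address found on the page
    source          = 'google_search'
WHERE seller_id = $3;
```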
by Vadym Nahornyi
**How it works**

Automatically sends Telegram notifications when any n8n workflow fails. Includes the workflow name, error message, and execution ID in the alert.

**Setup**

Complete setup instructions are included in the workflow's sticky note in 5 languages:

- 🇬🇧 English
- 🇪🇸 Español
- 🇩🇪 Deutsch
- 🇫🇷 Français
- 🇷🇺 Русский

**Features**

- Monitors all workflows 24/7
- Instant Telegram notifications
- Zero configuration needed
- Just add your bot token and chat ID

**Important**

⚠️ Keep this workflow active 24/7 to capture all errors.
by Sina
**👔 Who is this for?**

- Entrepreneurs and startup founders preparing for investors
- Business consultants drafting complete client plans
- Strategy teams building long-term business models
- Accelerators, incubators, or pitch trainers

**❓ What problem does this workflow solve?**

Writing a full business plan takes days of work and multiple tools, and often gets stuck in messy docs or slides. This template automates every major section, generating a clean, detailed, and professional business plan with AI in just minutes.

**⚙️ What this workflow does**

1. Starts with a chat message asking for your business idea or startup concept
2. Passes the idea through 83 intelligent agents, each handling a full business plan chapter:
   - Executive Summary
   - Problem & Solution
   - Product Description
   - Market Research
   - Competitor Analysis
   - Business Model
   - Marketing Strategy (includes guerrilla ideas)
   - Operational Plan
   - Financial Plan
   - Team & Advisors
   - Roadmap
   - Conclusion & Next Steps
3. Each section uses tailored prompts and business logic
4. Combines all outputs into a structured, professional Markdown file
5. Final result: a ready-to-export business plan in seconds

**🛠️ Setup**

1. Import this template into your n8n instance
2. Replace the "LLM Chat Model" node with your preferred model (Ollama, GPT-4, etc.)
3. Start from the chat input node and describe your startup or idea
4. Wait for all agents to finish
5. Download the final business plan file

**🤖 LLM Flexibility (Choose Your Model)**

Supports:

- OpenAI (GPT-4 / GPT-3.5)
- Ollama (LLaMA 3.1, Mistral, DeepSeek, etc.)
- Any compatible n8n chat model

To change the model, just replace the "Language Model" node; no other updates are required.

**📌 Notes**

- All nodes are clearly named by function (e.g., "Market Research Generator")
- Sticky notes are included for clarity
- Generates high-quality plans suitable for VCs or accelerators
- Modular: you can turn off or reorder any chapter

**📩 Need help?**

Email: sinamirshafiee@gmail.com. Happy to support setup, LLM switching, or custom section development.