by kapio
How it Works
- **Capture Contact Requests:** This template handles contact requests coming through a WordPress website using the Contact Form 7 (CF7) plugin with a webhook extension.
- **Contact Management:** It automatically creates or updates contacts in Pipedrive upon receiving a new request.
- **Lead Management:** Each contact request is stored in the Pipedrive lead inbox, ensuring no opportunity is missed.
- **Task Creation:** For each new contact or update, the workflow creates a related task, streamlining follow-up actions.
- **Note Attachment:** A comprehensive note containing all details from the contact request is attached to the corresponding lead, keeping all information readily accessible.

Step-by-Step Guide
Estimated setup time: the setup process is straightforward and can be completed quickly; the exact time varies with your familiarity with n8n and the systems involved. Detailed setup instructions are provided within the workflow via sticky notes. These notes offer in-depth guidance for configuring each component of the template to suit your specific needs.
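As a rough illustration of the mapping step described above, a Code node between the CF7 webhook and the Pipedrive nodes could reshape the form payload into Pipedrive person fields. This is a minimal sketch: the field names (`your-name`, `your-email`, etc.) are CF7's defaults and may differ in your form.

```javascript
// Sketch: reshape a Contact Form 7 webhook payload into Pipedrive person fields.
// CF7 field names below are the plugin defaults — adjust to match your form.
const body = $input.item.json.body ?? $input.item.json;

return {
  json: {
    name: body['your-name'],
    email: [{ value: body['your-email'], primary: true }],
    phone: body['your-phone'] ? [{ value: body['your-phone'] }] : [],
    note: body['your-message'], // attached to the lead by a later node
  },
};
```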
by n8n Team
This workflow creates a GitHub issue when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as comments to the issue in GitHub.

Prerequisites
- Zendesk account and Zendesk credentials.
- GitHub account and GitHub credentials.
- GitHub repository to create issues in.

How it works
The workflow listens for new tickets in Zendesk. When a new ticket is created, the workflow creates a new issue in GitHub. The GitHub issue number is then saved in one of the ticket's fields (in setup we call this "GitHub Issue Number"). The next time a comment is added to the ticket, the workflow retrieves the GitHub issue number from the ticket's field and adds the comment to the issue in GitHub.

Setup
This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:
1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL gets added to the "Endpoint URL" field, and the "Request method" should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as "New tickets".
7. Under "Conditions" in "Meet ALL of the following conditions", add "Status is New".
8. Under "Actions", select "Notify active webhook" and select the webhook you created previously.
9. In the JSON body, add the following:

```json
{
  "id": "{{ticket.id}}",
  "comment": "{{ticket.latest_comment_html}}"
}
```

10. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the GitHub issue number. To do so, follow the steps below:
1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the number field option and give the field a name such as "GitHub Issue Number".
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
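For the comment-sync step, the underlying GitHub call is a standard issues-comments request. A minimal sketch of what the workflow effectively does, assuming a per-item Code node; `OWNER`, `REPO`, the token placeholder, and the `ticket_fields` accessor are illustrative, not the workflow's actual node configuration:

```javascript
// Sketch: post the latest Zendesk comment to the linked GitHub issue.
// OWNER, REPO and the field accessor are placeholders — use your own values.
const issueNumber = $input.item.json.ticket_fields['GitHub Issue Number'];
const comment = $input.item.json.comment;

await this.helpers.httpRequest({
  method: 'POST',
  url: `https://api.github.com/repos/OWNER/REPO/issues/${issueNumber}/comments`,
  headers: {
    Authorization: 'Bearer <github-token>',
    'User-Agent': 'n8n-workflow',
  },
  body: { body: comment },
});

return $input.item;
```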
by Zacharia Kimotho
This workflow takes over the chore of regularly backing up workflows to GitHub and uses Google Drive as the main host instead. It's a good way to keep track of your workflows so you never lose any in case your n8n instance goes down.

How does it work
- Creates a new folder, named with the backup time, inside a specified parent folder
- Loops over all workflows, converts each to a JSON file, and uploads them to the created folder
- Gets the previous backups and deletes them

This keeps the backup clean and simple, without leaving a cache of old workflows on your drive.

Setup
1. Create a new folder
2. Create new service account credentials
3. Share the folder with the service account email
4. Upload this workflow to your canvas and map the credentials
5. Set the schedule you need your workflows to run on and manage your backups
6. Activate the workflow

Happy Productivity!
@Imperol
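The conversion step is where the real logic lives: each workflow object must be serialized and attached as binary data before the Drive upload node. A minimal sketch of such a Code node, assuming an upstream node returns one workflow per item:

```javascript
// Sketch: turn each workflow item into a JSON file for the Google Drive upload node.
const items = [];

for (const item of $input.all()) {
  const wf = item.json;
  const content = JSON.stringify(wf, null, 2); // pretty-printed workflow export

  items.push({
    json: { name: wf.name },
    binary: {
      data: await this.helpers.prepareBinaryData(
        Buffer.from(content, 'utf-8'),
        `${wf.name}.json`,
        'application/json',
      ),
    },
  });
}

return items;
```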
by bangank36
This workflow retrieves all Squarespace Orders and saves them into a Google Sheets spreadsheet using the Squarespace Commerce API. It uses pagination to ensure all orders are collected efficiently.

How It Works
- The workflow queries your Squarespace Orders API.
- It fetches data in paginated batches and inserts them into Google Sheets.
- The Global node is used to configure API parameters dynamically, allowing users to set date filters, pagination, and fulfillment status.
- The workflow runs on demand or on a schedule, ensuring your data stays up to date.

Parameters
This workflow allows you to customize the API request using the Global node settings:
- **api-version** (string, required) – The current API version (see the Squarespace Orders API documentation).
- **modifiedAfter**={a-datetime} (string, conditional) – Fetch orders modified after a specific date (ISO 8601 format).
- **modifiedBefore**={b-datetime} (string, conditional) – Fetch orders modified before a specific date (ISO 8601 format).
- **cursor**={c} (string, conditional) – Used for pagination; cannot be combined with other filters.
- **fulfillmentStatus**={status} (optional, enum) – Filter by fulfillment status: PENDING, FULFILLED, or CANCELED.
- **maxPage** – Set to -1 to enable infinite pagination and fetch all available orders.

Requirements
Credentials
To use this workflow, you need:
- Squarespace API key – Retrieve from your Squarespace settings.
- Google Sheets API credentials – Required to insert data into a spreadsheet.

Google Sheets Setup
Use the Squarespace order export feature to create a reference sheet. A Google Sheets template is available.

Who Is This For?
This workflow is designed for:
- Squarespace store owners exporting orders for tax reports, analytics, or sales tracking.
- Businesses automating order data retrieval for external reporting.
- Anyone needing an efficient way to extract Squarespace order data without manual effort.

Explore More Templates
- Get all orders in Shopify to Google Sheets
- Sync Shopify customers to Google Sheets + Squarespace compatible csv
👉 Check out my other n8n templates
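Cursor-based pagination is the one subtlety here: each response carries a cursor for the next page, and the cursor parameter must be sent on its own. A rough sketch of the loop the workflow implements, assuming the Orders API response shape (`result` plus `pagination.nextPageCursor`) and placeholder credentials:

```javascript
// Sketch: cursor pagination against the Squarespace Orders API.
// The API version, date filters, and token are placeholder values.
const API_VERSION = '1.0';
let cursor = null;
const orders = [];

do {
  const qs = cursor
    ? { cursor } // cursor must not be combined with other filters
    : { modifiedAfter: '2024-01-01T00:00:00Z', modifiedBefore: '2024-12-31T23:59:59Z' };

  const res = await this.helpers.httpRequest({
    url: `https://api.squarespace.com/${API_VERSION}/commerce/orders`,
    headers: { Authorization: 'Bearer <squarespace-api-key>' },
    qs,
  });

  orders.push(...res.result);
  cursor = res.pagination?.nextPageCursor ?? null; // null ends the loop
} while (cursor);

return orders.map((o) => ({ json: o }));
```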
by Damian Karzon
This workflow randomly selects recipes from a Mealie instance (optionally from a specific category) and then creates a meal plan in Mealie with those recipes.

How it works:
- The workflow has a scheduled trigger (set to run weekly on a Friday)
- A Config node sets a few properties to configure the workflow
- A call to the Mealie API gets the list of recipes
- The Code node holds most of the logic: it loops through the number of recipes defined in the Config node and randomly selects a recipe from the list (making sure not to double up any recipes)
- Once all the recipes are selected, it calls the Mealie API to set up the meal plan on the days

Setup
1. Add your Mealie API token as a credential and set it on the HTTP Request nodes
2. Set the schedule trigger to run when you like
3. Update the Config node with the config you want:
   - numberOfRecipes - Number of recipes to populate for the meal plan
   - offsetPlanDays - Number of days in the future to start the plan (0 will start it today, 1 tomorrow, etc.)
   - mealieCategoryId - The id of the category you want to pull recipes from (defaults to selecting from all recipes)
   - mealieBaseUrl - The base url of your Mealie instance
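The no-duplicates random selection is simple to express. A minimal sketch of that Code-node logic, assuming the recipe list arrives as items from the previous node and the Config node exposes the values listed above:

```javascript
// Sketch: pick N distinct random recipes and assign them to consecutive days.
const { numberOfRecipes, offsetPlanDays } = $('Config').first().json;
const recipes = $input.all().map((i) => i.json);

const pool = [...recipes];
const plan = [];

for (let day = 0; day < numberOfRecipes && pool.length > 0; day++) {
  // Splice the chosen recipe out of the pool so it can't be picked twice.
  const idx = Math.floor(Math.random() * pool.length);
  const recipe = pool.splice(idx, 1)[0];

  const date = new Date();
  date.setDate(date.getDate() + offsetPlanDays + day);

  plan.push({
    json: {
      date: date.toISOString().split('T')[0], // Mealie plan date (YYYY-MM-DD)
      recipeId: recipe.id,
    },
  });
}

return plan;
```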
by Oneclick AI Squad
This n8n template demonstrates how to create a comprehensive marketing automation and booking system that combines Excel-based lead management with voice-powered customer interactions. The system utilizes VAPI for voice communication and Excel/Google Sheets for data management, making it ideal for restaurants seeking to automate marketing campaigns and streamline booking processes through intelligent voice AI technology.

Good to know
- Voice processing requires an active VAPI subscription with per-minute billing
- Excel operations are handled in real-time with immediate data synchronization
- The system can handle multiple simultaneous voice calls and lead processing
- All customer data is stored securely in Excel with proper formatting and validation
- Marketing campaigns can be scheduled and automated based on lead data

How it works

Lead Management & Marketing Automation Workflow
1. New Lead Trigger: Excel triggers capture new leads when customers are added to the lead management spreadsheet
2. Lead Preparation: The system processes and formats lead data, extracting relevant details (name, phone, preferences, booking history)
3. Campaign Loop: An automated loop processes multiple leads for batch marketing campaigns
4. Voice Marketing Call: VAPI initiates personalized voice calls to leads with tailored restaurant offers and booking invitations
5. Response Tracking: All call results and lead responses are logged back to Excel for campaign analysis

Booking & Order Processing Workflow
1. Voice Response Capture: A VAPI webhook triggers when customers respond to marketing calls or make direct booking requests
2. Response Storage: Customer responses and booking preferences are immediately saved to Excel sheets
3. Information Extraction: The system processes natural language responses to extract booking details (party size, preferred times, special requests) — see the sketch after this section
4. Calendar Integration: Booking information is automatically scheduled in restaurant management systems
5. Confirmation Loop: Automated follow-up voice messages confirm bookings and provide additional restaurant information

Excel Sheet Structure

Lead Management Sheet
| Column | Description |
|--------|-------------|
| lead_id | Unique identifier for each lead |
| customer_name | Customer's full name |
| phone_number | Primary contact number |
| email | Customer email address |
| last_visit_date | Date of last restaurant visit |
| preferred_cuisine | Customer's food preferences |
| party_size_typical | Usual number of guests |
| preferred_time_slot | Preferred dining times |
| marketing_consent | Permission for marketing calls |
| lead_source | How customer was acquired |
| lead_status | Current status (new, contacted, converted, inactive) |
| last_contact_date | Date of last marketing contact |
| notes | Additional customer information |
| created_at | Lead creation timestamp |

Booking Responses Sheet
| Column | Description |
|--------|-------------|
| response_id | Unique response identifier |
| customer_name | Customer's name from call |
| phone_number | Contact number used for call |
| booking_requested | Whether customer wants to book |
| party_size | Number of guests requested |
| preferred_date | Requested booking date |
| preferred_time | Requested time slot |
| special_requests | Dietary restrictions or special occasions |
| call_duration | Length of VAPI call |
| call_outcome | Result of marketing call |
| follow_up_needed | Whether additional contact is required |
| booking_confirmed | Final booking confirmation status |
| created_at | Response timestamp |

Campaign Tracking Sheet
| Column | Description |
|--------|-------------|
| campaign_id | Unique campaign identifier |
| campaign_name | Descriptive campaign title |
| target_audience | Lead segments targeted |
| total_leads | Number of leads contacted |
| successful_calls | Calls that connected |
| bookings_generated | Number of bookings from campaign |
| conversion_rate | Percentage of leads converted |
| campaign_cost | Total VAPI usage cost |
| roi | Return on investment |
| start_date | Campaign launch date |
| end_date | Campaign completion date |
| status | Campaign status (active, completed, paused) |

How to use
1. Setup: Import the workflow into your n8n instance and configure VAPI credentials
2. Excel Configuration: Set up Excel/Google Sheets with the required sheet structure provided above
3. Lead Import: Populate the Lead Management sheet with customer data from various sources
4. Campaign Setup: Configure marketing message templates in the VAPI nodes to match your restaurant's branding
5. Testing: Test voice commands such as "I'd like to book a table for tonight" or "What are your specials?"
6. Automation: Enable triggers to automatically process new leads and schedule marketing campaigns
7. Monitoring: Track campaign performance through the Campaign Tracking sheet and adjust strategies accordingly

The system can handle multiple concurrent voice calls and scales with your restaurant's marketing needs.

Requirements
- **VAPI account** for voice processing and natural language understanding
- **Excel/Google Sheets** for storing lead, booking, and campaign data
- **n8n instance** with Excel/Sheets and VAPI integrations enabled
- **Valid phone numbers** for lead contact and compliance with local calling regulations

Customising this workflow
- **Multi-location Support**: Adapt voice AI automation for restaurant chains with location-specific offers
- **Seasonal Campaigns**: Try popular use-cases such as holiday promotions, special event marketing, or loyalty program outreach
- **Integration Options**: The workflow can be extended to include CRM integration, SMS follow-ups, and social media campaign coordination
- **Advanced Analytics**: Add nodes for detailed campaign performance analysis and customer segmentation
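As an illustration of the Information Extraction step, a Code node between the VAPI webhook and the Sheets node might map the call payload onto the Booking Responses columns. The payload shape below is an assumption — check your VAPI webhook configuration for the actual structure:

```javascript
// Sketch: map a (hypothetical) VAPI webhook payload to the Booking Responses columns.
const call = $input.item.json;
const transcript = call.transcript ?? '';

return {
  json: {
    response_id: call.id,
    customer_name: call.customer?.name ?? '',
    phone_number: call.customer?.number ?? '',
    booking_requested: /book|table|reservation/i.test(transcript),
    party_size: Number(transcript.match(/(\d+)\s*(?:people|guests)/i)?.[1] ?? 0),
    call_duration: call.durationSeconds ?? 0,
    call_outcome: call.endedReason ?? 'unknown',
    created_at: new Date().toISOString(),
  },
};
```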
by Bright Data
🔍 Glassdoor Job Finder: Bright Data Scraping + Keyword-Based Automation

A comprehensive n8n automation that scrapes Glassdoor job listings using Bright Data's web scraping service based on user-defined keywords, location, and country parameters, then automatically stores the results in Google Sheets.

📋 Overview
This workflow provides an automated job search solution that extracts job listings from Glassdoor using form-based inputs and stores organized results in Google Sheets. Perfect for recruiters, job seekers, market research, and competitive analysis.

Workflow Description: Automates Glassdoor job searches using Bright Data's web scraping capabilities. Users submit keywords, location, and country via a form trigger. The workflow scrapes job listings, extracts company details, ratings, and locations, then automatically stores organized results in Google Sheets for easy analysis and tracking.

✨ Key Features
- 🎯 Form-Based Input: Simple web form for job type, location, and country
- 🔍 Glassdoor Integration: Uses Bright Data's Glassdoor dataset for accurate job data
- 📊 Smart Data Processing: Automatically extracts key job information
- 📈 Google Sheets Storage: Organized data storage with automatic updates
- 🔄 Status Monitoring: Built-in progress tracking and retry logic
- ⚡ Fast & Reliable: Professional scraping with error handling
- 🎯 Keyword Flexibility: Search any job type with location filters
- 📝 Structured Output: Clean, organized job listing data

🎯 What This Workflow Does

Input
- **Job Keywords:** Job title or role (e.g., "Software Engineer", "Marketing Manager")
- **Location:** City or region for job search
- **Country:** Target country for job listings

Processing
1. Form Submission
2. Data Scraping via Bright Data
3. Status Monitoring
4. Data Extraction
5. Data Processing
6. Sheet Update

Output Data Points
| Field | Description | Example |
|-------|-------------|---------|
| Job Title | Position title from listing | Senior Software Engineer |
| Company Name | Employer name | Google Inc. |
| Location | Job location | San Francisco, CA |
| Rating | Company rating score | 4.5 |
| Job Link | Direct URL to listing | https://glassdoor.com/job/... |
🚀 Setup Instructions

Prerequisites
- n8n instance (self-hosted or cloud)
- Google account with Sheets access
- Bright Data account with Glassdoor scraping dataset access
- 5–10 minutes for setup

Step 1: Import the Workflow
1. Copy the JSON workflow code from the provided file
2. In n8n: Workflows → + Add workflow → Import from JSON
3. Paste the JSON and click Import

Step 2: Configure Bright Data
1. Set up Bright Data credentials in n8n
2. Ensure access to dataset: gd_lpfbbndm1xnopbrcr0
3. Update API tokens in:
   - "Scrape Job Data" node
   - "Check Delivery Status of Snap ID" node
   - "Getting Job Lists" node

Step 3: Configure Google Sheets Integration
1. Create a new Google Sheet (e.g., "Glassdoor Job Tracker")
2. Set up Google Sheets OAuth2 credentials in n8n
3. Prepare columns:
   - Column A: Job Title
   - Column B: Company Name
   - Column C: Location
   - Column D: Rating
   - Column E: Job Link

Step 4: Update Workflow Settings
1. Update the "Update Job List" node with your Sheet ID and credentials
2. Test the form trigger and webhook URL

Step 5: Test & Activate
1. Submit test data (e.g., "Software Engineer" in "New York")
2. Activate the workflow
3. Verify the Google Sheet updates and field extraction

📖 Usage Guide

Submitting Job Searches
1. Navigate to your workflow's webhook URL
2. Fill in: Search Job Type, Location, Country
3. Submit the form

Reading the Results
- Real-time job listing data
- Company ratings and reviews
- Direct job posting links
- Location-specific results
- Processing timestamps

🔧 Customization Options
- **More Data Points:** Add job descriptions, salary, company size, etc.
- **Search Parameters:** Add filters for salary, experience, remote work
- **Data Processing:** Add validation, deduplication, formatting

🚨 Troubleshooting
- **Bright Data connection failed:** Check API credentials and dataset access
- **No job data extracted:** Validate search terms and location format
- **Google Sheets permission denied:** Re-authenticate and check sharing
- **Form submission failed:** Check webhook URL and form config
- **Workflow execution failed:** Check logs, add retry logic

Advanced Troubleshooting
- Check execution logs in n8n
- Test individual nodes
- Verify data formats
- Monitor rate limits
- Add error handling

📊 Use Cases & Examples
- **Recruitment Pipeline:** Track job postings, build a talent database
- **Market Research:** Analyze job trends and hiring patterns
- **Career Development:** Monitor opportunities and salary trends
- **Competitive Intelligence:** Track competitor hiring activity

⚙️ Advanced Configuration
- **Batch Processing:** Accept multiple keywords, loop logic, delays
- **Search History:** Track trends, compare results over time
- **External Tools:** Integrate with CRM, Slack, databases, BI tools

📈 Performance & Limits
- **Single search:** 2–5 minutes
- **Data accuracy:** 95%+
- **Success rate:** 90%+
- **Concurrent searches:** 1–3 (depends on plan)
- **Daily capacity:** 50–200 searches
- **Memory:** ~50MB per execution
- **API calls:** 3 Bright Data + 1 Google Sheets per search

🤝 Support & Community
- **n8n Community Forum:** community.n8n.io
- **Documentation:** docs.n8n.io
- **Bright Data Support:** Via your dashboard
- **GitHub Issues:** Report bugs and features

Contributing: Share improvements, report issues, create variations, document best practices.

Need Help? Check the full documentation or visit the n8n Community for support and workflow examples.
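The Status Monitoring step is a poll-until-ready loop around the Bright Data snapshot. A rough sketch of the pattern, assuming Bright Data's dataset progress/snapshot endpoints (verify the exact paths against your Bright Data account's API documentation):

```javascript
// Sketch: poll a Bright Data snapshot until the scrape finishes, then fetch results.
// Endpoint paths and the snapshot-id field are assumptions — confirm in your dashboard.
const snapshotId = $input.item.json.snapshot_id;
const headers = { Authorization: 'Bearer <bright-data-api-token>' };

let status = 'running';
while (status === 'running') {
  await new Promise((r) => setTimeout(r, 30000)); // wait 30s between checks

  const res = await this.helpers.httpRequest({
    url: `https://api.brightdata.com/datasets/v3/progress/${snapshotId}`,
    headers,
  });
  status = res.status;
}

// Once ready, fetch the scraped job listings as JSON.
const jobs = await this.helpers.httpRequest({
  url: `https://api.brightdata.com/datasets/v3/snapshot/${snapshotId}?format=json`,
  headers,
});

return jobs.map((j) => ({ json: j }));
```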
by David Olusola
This plug-and-play n8n workflow automates medical record digitization using Mistral's OCR API and stores clean, structured data in Google Sheets. Whether you run a clinic or a healthtech product, this no-code solution simplifies data entry from scanned or uploaded medical documents.

📌 Works seamlessly on both self-hosted and cloud-based n8n environments.

👥 Who is this for?
- Hospitals and private clinics
- Healthtech platforms & startups
- Medical admin and document processing teams
- Clinical researchers and labs

😓 What problem does it solve?
- ❌ Manual entry from printed forms
- ❌ Unstructured, scattered records
- ❌ Errors in data transcription
- ❌ Inconsistent document storage

✅ This automation brings consistency, structure, and speed to the way you handle medical documents.

✅ What this workflow does
1. Captures uploaded documents through a public form
2. Uploads the file to Mistral for OCR processing
3. Extracts clean text from each page (PDF or image)
4. Parses patient fields (Name, DOB, Diagnosis, Medications, etc.)
5. Saves records into a structured Google Sheet

🛠️ Setup Instructions

Step 1: Google Sheet Prep
Create a Google Sheet with these columns (case-sensitive):
Name, Date of Birth, Patient ID, Date of Visit, Referring Physician, Department, Symptoms, Blood Pressure, Heart Rate, Temperature, Lab Results, Diagnosis, Medications, Next Appointment, Notes

Step 2: Mistral API Access
- Sign up at Mistral AI
- Get your API key
- Ensure your plan supports file upload & OCR endpoints

Step 3: Google OAuth Credentials (Self-hosted or Cloud)
Go to n8n → Settings → Credentials, and add:
- Google Sheets OAuth2
- Scopes needed: https://www.googleapis.com/auth/spreadsheets

Step 4: Import Workflow
- Go to Workflows > Import from File
- Upload your JSON file
- Replace:
  - The Google Sheet document ID in the "Google Sheets" node
  - Your Mistral API key in HTTP Header Auth

Step 5: (Optional) Make Form Public
- In cloud-based n8n, you can expose the form as a public page
- Otherwise, connect it to your website form via webhook

🧩 Customization Tips

Extract More Fields
Update the "Data cleaning" node and extend the list of fields:

```javascript
const fields = ["Name", "Diagnosis", "Medications", "Symptoms", ...];
```

Add EHR or Database Integration
After Google Sheets, chain your custom system: PostgreSQL, Airtable, Supabase, MongoDB

Change Output Format
Want JSON or Markdown output for internal tools? Use the Set or Code node before the final output step.

🧪 Troubleshooting
| Issue | Fix |
|-------|-----|
| File upload fails | Check Mistral API key and file type |
| Google Sheets not updating | Verify credentials and document ID |
| No data parsed | Check OCR quality; verify field labels in document |
| Workflow not triggering | Ensure webhook or form is configured correctly |

🌐 Self-Hosted vs Cloud Comparison
| Feature | Self-Hosted | n8n Cloud |
|---------|-------------|-----------|
| Public Form Access | Manual setup | Built-in |
| OAuth App Config | Required | Pre-configured |
| Storage Limits | Depends on server | Included with plan |
| Scalability | Fully customizable | Scales automatically |

📣 Getting Support
- n8n Docs
- Mistral API Docs
- n8n Community
- Or reach out to: David Olusola (dimejicole21@gmail.com)

🌟 Like this template? Give it a star in the template library and help other no-code builders discover it.

"Turn scanned documents into structured data with zero code."
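To see how the field list above feeds the parsing step, a label-based parser is usually enough for form-style documents. A minimal sketch, assuming the OCR text uses `Label: value` lines and a hypothetical `ocrText` input field — adapt the pattern to your documents:

```javascript
// Sketch: pull "Label: value" pairs out of OCR text for the sheet columns.
const text = $input.item.json.ocrText; // hypothetical field holding the OCR output
const fields = ["Name", "Date of Birth", "Patient ID", "Diagnosis", "Medications"];

const record = {};
for (const field of fields) {
  // Match "Field: value" up to the end of the line, case-insensitively.
  const re = new RegExp(`${field}\\s*:\\s*(.+)`, 'i');
  record[field] = text.match(re)?.[1]?.trim() ?? '';
}

return { json: record };
```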
by Lucas Peyrin
How it works
This workflow is a hands-on tutorial for the Code node in n8n, covering both basic and advanced concepts through a simple data processing task.
1. Provides Sample Data: The workflow begins with a sample list of users.
2. Processes Each Item (Run Once for Each Item): The first Code node iterates through each user to calculate their fullName and age. This demonstrates basic item-by-item data manipulation using $input.item.json.
3. Fetches External Data (Advanced): The second Code node showcases a more advanced feature. For each user, it uses the built-in this.helpers.httpRequest function to call an external API (genderize.io) to enrich the data with a predicted gender.
4. Processes All Items at Once (Run Once for All Items): The third Code node receives the fully enriched list of users and runs only once. It uses $items() to access the entire list and calculate the averageAge, returning a single summary item.
5. Creates a Binary File: The final Code node takes the fully enriched list of users once again and creates a binary CSV file, showing how to work with binary data Buffers in JavaScript.

Set up steps
Setup time: < 1 minute
This workflow is a self-contained tutorial and requires no setup.
1. Explore the Nodes: Click on each of the Code nodes to read the code and the comments explaining each step, from basic to advanced.
2. Run the Workflow: Click "Execute Workflow" to see it in action.
3. Check the Output: Click on each node after the execution to see how the data is transformed at each stage. Notice how the data is progressively enriched.
4. Experiment! Try changing the data in the 1. Sample Data node, or modify the code in the Code nodes to see what happens.
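For reference, a condensed sketch of the summary step described above — the "Run Once for All Items" node that collapses every item into one result. The tutorial's node uses $items(); this sketch uses the equivalent $input.all() accessor, and the age field name follows the description:

```javascript
// Sketch of the summary node ("Run Once for All Items"): it receives every
// enriched user at once and returns a single summary item.
const users = $input.all().map((item) => item.json);

const averageAge =
  users.reduce((sum, user) => sum + (user.age ?? 0), 0) / users.length;

return [{ json: { userCount: users.length, averageAge } }];
```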
by Teddy
Retrieve 20 Latest TechCrunch Articles

Who is this for?
This workflow is designed for developers, content creators, and data analysts who need to scrape recent articles from TechCrunch. It's perfect for anyone looking to aggregate news articles or create custom feeds for analysis, reporting, or integration into other systems.

What problem is this workflow solving?
This workflow automates the process of scraping recent articles from TechCrunch. Manually collecting article data can be time-consuming and inefficient; with this workflow, you can quickly gather up-to-date news articles with relevant metadata, saving time and effort.

What this workflow does
This workflow retrieves the latest 20 news articles from TechCrunch's "Recent" page. It extracts the article URLs, metadata (such as titles and publication dates), and main content for each article, allowing you to access the information you need without any manual effort.

Setup
1. Clone or download the workflow template.
2. Ensure you have a working n8n environment.
3. Configure the HTTP Request nodes with your desired parameters to connect to the TechCrunch website.
4. (Optional) Customize the workflow to target specific sections or topics of interest.
5. Run the workflow to retrieve the latest 20 articles.

How to customize this workflow to your needs
- Modify the HTTP request to pull articles from different pages or sections of TechCrunch.
- Adjust the number of articles to retrieve by changing the selection criteria.
- Add additional processing steps to further filter or analyze the article data.

Workflow Steps
1. Send an HTTP request to the TechCrunch "Recent" page.
2. Parse the posts box that holds the list of articles.
3. Parse all posts to extract the individual articles.
4. Split out the posts so each article is its own item.
5. Extract the URL and metadata from each article.
6. Send an HTTP request for each article using its URL.
7. Locate and parse the main content of each article.

Note: Be sure to update the HTTP Request nodes with any necessary headers or authentication to work with TechCrunch's website.
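Steps 2–4 amount to standard HTML extraction. A rough sketch of the idea in a Code node, pulling article links out of the fetched page with a regex; the URL pattern is an assumption based on TechCrunch's date-based permalinks, so inspect the live markup and adjust:

```javascript
// Sketch: pull article links out of the fetched "Recent" page HTML.
// The href pattern below is an assumption — inspect the live page markup.
const html = $input.first().json.data; // raw HTML from the HTTP Request node

const links = [...html.matchAll(
  /<a[^>]+href="(https:\/\/techcrunch\.com\/\d{4}\/\d{2}\/\d{2}\/[^"]+)"/g,
)].map((m) => m[1]);

// Deduplicate and keep the 20 most recent.
const urls = [...new Set(links)].slice(0, 20);

return urls.map((url) => ({ json: { url } }));
```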
by Dvir Sharon
Goodreads Quote Extraction with Bright Data and Gemini

This workflow demonstrates how to fetch data from Goodreads web pages using Bright Data and then extract specific information (quotes) from that data using a Google Gemini AI model.

How it works
1. The workflow is triggered manually.
2. It sends a request to a Bright Data collector to scrape data from a predefined list of Goodreads URLs.
3. The collected text data from Goodreads is then passed to a Google Gemini AI node.
4. The AI node processes the text and extracts quotes based on a specified JSON schema output format.

Set up steps
Setting up this workflow should take only a few minutes.
1. You will need a Bright Data API key to configure the 'Header Auth' credential.
2. You will need a Google Gemini API key to configure the 'Google Gemini(PaLM) Api account' credential.
3. Ensure the correct Bright Data collector ID is set in the 'Perform Bright Data Web Request' node URL.
4. Make sure the full list of target Goodreads URLs is correctly added to the 'Perform Bright Data Web Request' node's body.
5. Link your created credentials to the respective nodes ('Perform Bright Data Web Request' and 'Quotes Extractor').
Keep detailed descriptions for specific node configurations in sticky notes inside your workflow canvas.
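For the structured-output step, the Gemini node is given a JSON schema so every response has the same shape. The exact schema ships inside the workflow; this sketch is purely illustrative of the general form:

```javascript
// Illustrative output schema for the 'Quotes Extractor' node — the real
// workflow defines its own; this just shows the general shape.
const quoteSchema = {
  type: 'object',
  properties: {
    quotes: {
      type: 'array',
      items: {
        type: 'object',
        properties: {
          text: { type: 'string', description: 'The quote itself' },
          author: { type: 'string', description: 'Attributed author' },
        },
        required: ['text', 'author'],
      },
    },
  },
  required: ['quotes'],
};
```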
by Danger
Ok google download "movie name"

I developed this automation to improve my quality of life when handling torrents on my media-center.

Goal
Automate the search for a movie based on its name and trigger a download using your transmission-daemon.

Setup

Prerequisites
- Transmission daemon up and running, plus its authentication details
- n8n self-hosted, configured so you can install npm packages (ideally via docker-compose.yaml)
- Telegram bot credentials [optional]

Configuration
Create the folder where your docker-compose.yaml lives (n8n_dir) and install the node package:

```bash
cd ~/n8n_dir
npm i torrent-search-api
```

Then configure your docker-compose.yaml file as follows. You must include all the dependencies of torrent-search-api; this lets you run the torrent search node presented in this workflow.

```yaml
version: '3.3'
services:
  n8n:
    container_name: n8n
    ports:
      - '5678:5678'
    restart: always
    volumes:
      - '~/n8n_dir/.n8n:/home/node/.n8n'
      - '~/n8n_dir/node_modules/@tootallnate:/usr/local/lib/node_modules/@tootallnate'
      - '~/n8n_dir/node_modules/accepts:/usr/local/lib/node_modules/accepts'
      - '~/n8n_dir/node_modules/agent-base:/usr/local/lib/node_modules/agent-base'
      - '~/n8n_dir/node_modules/ajv:/usr/local/lib/node_modules/ajv'
      - '~/n8n_dir/node_modules/ansi-styles:/usr/local/lib/node_modules/ansi-styles'
      - '~/n8n_dir/node_modules/asn1:/usr/local/lib/node_modules/asn1'
      - '~/n8n_dir/node_modules/assert:/usr/local/lib/node_modules/assert'
      - '~/n8n_dir/node_modules/assert-plus:/usr/local/lib/node_modules/assert-plus'
      - '~/n8n_dir/node_modules/ast-types:/usr/local/lib/node_modules/ast-types'
      - '~/n8n_dir/node_modules/asynckit:/usr/local/lib/node_modules/asynckit'
      - '~/n8n_dir/node_modules/aws-sign2:/usr/local/lib/node_modules/aws-sign2'
      - '~/n8n_dir/node_modules/aws4:/usr/local/lib/node_modules/aws4'
      - '~/n8n_dir/node_modules/base64-js:/usr/local/lib/node_modules/base64-js'
      - '~/n8n_dir/node_modules/batch:/usr/local/lib/node_modules/batch'
      - '~/n8n_dir/node_modules/bcrypt-pbkdf:/usr/local/lib/node_modules/bcrypt-pbkdf'
      - '~/n8n_dir/node_modules/bluebird:/usr/local/lib/node_modules/bluebird'
      - '~/n8n_dir/node_modules/boolbase:/usr/local/lib/node_modules/boolbase'
      - '~/n8n_dir/node_modules/brotli:/usr/local/lib/node_modules/brotli'
      - '~/n8n_dir/node_modules/bytes:/usr/local/lib/node_modules/bytes'
      - '~/n8n_dir/node_modules/caseless:/usr/local/lib/node_modules/caseless'
      - '~/n8n_dir/node_modules/chalk:/usr/local/lib/node_modules/chalk'
      - '~/n8n_dir/node_modules/cheerio:/usr/local/lib/node_modules/cheerio'
      - '~/n8n_dir/node_modules/cloudscraper:/usr/local/lib/node_modules/cloudscraper'
      - '~/n8n_dir/node_modules/co:/usr/local/lib/node_modules/co'
      - '~/n8n_dir/node_modules/color-convert:/usr/local/lib/node_modules/color-convert'
      - '~/n8n_dir/node_modules/color-name:/usr/local/lib/node_modules/color-name'
      - '~/n8n_dir/node_modules/combined-stream:/usr/local/lib/node_modules/combined-stream'
      - '~/n8n_dir/node_modules/component-emitter:/usr/local/lib/node_modules/component-emitter'
      - '~/n8n_dir/node_modules/content-disposition:/usr/local/lib/node_modules/content-disposition'
      - '~/n8n_dir/node_modules/content-type:/usr/local/lib/node_modules/content-type'
      - '~/n8n_dir/node_modules/cookiejar:/usr/local/lib/node_modules/cookiejar'
      - '~/n8n_dir/node_modules/core-util-is:/usr/local/lib/node_modules/core-util-is'
      - '~/n8n_dir/node_modules/css-select:/usr/local/lib/node_modules/css-select'
      - '~/n8n_dir/node_modules/css-what:/usr/local/lib/node_modules/css-what'
      - '~/n8n_dir/node_modules/dashdash:/usr/local/lib/node_modules/dashdash'
      - '~/n8n_dir/node_modules/data-uri-to-buffer:/usr/local/lib/node_modules/data-uri-to-buffer'
      - '~/n8n_dir/node_modules/debug:/usr/local/lib/node_modules/debug'
      - '~/n8n_dir/node_modules/deep-is:/usr/local/lib/node_modules/deep-is'
      - '~/n8n_dir/node_modules/degenerator:/usr/local/lib/node_modules/degenerator'
      - '~/n8n_dir/node_modules/delayed-stream:/usr/local/lib/node_modules/delayed-stream'
      - '~/n8n_dir/node_modules/delegates:/usr/local/lib/node_modules/delegates'
      - '~/n8n_dir/node_modules/depd:/usr/local/lib/node_modules/depd'
      - '~/n8n_dir/node_modules/destroy:/usr/local/lib/node_modules/destroy'
      - '~/n8n_dir/node_modules/dom-serializer:/usr/local/lib/node_modules/dom-serializer'
      - '~/n8n_dir/node_modules/domelementtype:/usr/local/lib/node_modules/domelementtype'
      - '~/n8n_dir/node_modules/domhandler:/usr/local/lib/node_modules/domhandler'
      - '~/n8n_dir/node_modules/domutils:/usr/local/lib/node_modules/domutils'
      - '~/n8n_dir/node_modules/ecc-jsbn:/usr/local/lib/node_modules/ecc-jsbn'
      - '~/n8n_dir/node_modules/ee-first:/usr/local/lib/node_modules/ee-first'
      - '~/n8n_dir/node_modules/emitter-component:/usr/local/lib/node_modules/emitter-component'
      - '~/n8n_dir/node_modules/enqueue:/usr/local/lib/node_modules/enqueue'
      - '~/n8n_dir/node_modules/enstore:/usr/local/lib/node_modules/enstore'
      - '~/n8n_dir/node_modules/entities:/usr/local/lib/node_modules/entities'
      - '~/n8n_dir/node_modules/error-inject:/usr/local/lib/node_modules/error-inject'
      - '~/n8n_dir/node_modules/escape-html:/usr/local/lib/node_modules/escape-html'
      - '~/n8n_dir/node_modules/escape-string-regexp:/usr/local/lib/node_modules/escape-string-regexp'
      - '~/n8n_dir/node_modules/escodegen:/usr/local/lib/node_modules/escodegen'
      - '~/n8n_dir/node_modules/esprima:/usr/local/lib/node_modules/esprima'
      - '~/n8n_dir/node_modules/estraverse:/usr/local/lib/node_modules/estraverse'
      - '~/n8n_dir/node_modules/esutils:/usr/local/lib/node_modules/esutils'
      - '~/n8n_dir/node_modules/extend:/usr/local/lib/node_modules/extend'
      - '~/n8n_dir/node_modules/extsprintf:/usr/local/lib/node_modules/extsprintf'
      - '~/n8n_dir/node_modules/fast-deep-equal:/usr/local/lib/node_modules/fast-deep-equal'
      - '~/n8n_dir/node_modules/fast-json-stable-stringify:/usr/local/lib/node_modules/fast-json-stable-stringify'
      - '~/n8n_dir/node_modules/fast-levenshtein:/usr/local/lib/node_modules/fast-levenshtein'
      - '~/n8n_dir/node_modules/file-uri-to-path:/usr/local/lib/node_modules/file-uri-to-path'
      - '~/n8n_dir/node_modules/forever-agent:/usr/local/lib/node_modules/forever-agent'
      - '~/n8n_dir/node_modules/form-data:/usr/local/lib/node_modules/form-data'
      - '~/n8n_dir/node_modules/format-parser:/usr/local/lib/node_modules/format-parser'
      - '~/n8n_dir/node_modules/formidable:/usr/local/lib/node_modules/formidable'
      - '~/n8n_dir/node_modules/fs-extra:/usr/local/lib/node_modules/fs-extra'
      - '~/n8n_dir/node_modules/ftp:/usr/local/lib/node_modules/ftp'
      - '~/n8n_dir/node_modules/get-uri:/usr/local/lib/node_modules/get-uri'
      - '~/n8n_dir/node_modules/getpass:/usr/local/lib/node_modules/getpass'
      - '~/n8n_dir/node_modules/graceful-fs:/usr/local/lib/node_modules/graceful-fs'
      - '~/n8n_dir/node_modules/har-schema:/usr/local/lib/node_modules/har-schema'
      - '~/n8n_dir/node_modules/har-validator:/usr/local/lib/node_modules/har-validator'
      - '~/n8n_dir/node_modules/has-flag:/usr/local/lib/node_modules/has-flag'
      - '~/n8n_dir/node_modules/htmlparser2:/usr/local/lib/node_modules/htmlparser2'
      - '~/n8n_dir/node_modules/http-context:/usr/local/lib/node_modules/http-context'
      - '~/n8n_dir/node_modules/http-errors:/usr/local/lib/node_modules/http-errors'
      - '~/n8n_dir/node_modules/http-incoming:/usr/local/lib/node_modules/http-incoming'
      - '~/n8n_dir/node_modules/http-outgoing:/usr/local/lib/node_modules/http-outgoing'
      - '~/n8n_dir/node_modules/http-proxy-agent:/usr/local/lib/node_modules/http-proxy-agent'
      - '~/n8n_dir/node_modules/http-signature:/usr/local/lib/node_modules/http-signature'
      - '~/n8n_dir/node_modules/https-proxy-agent:/usr/local/lib/node_modules/https-proxy-agent'
      - '~/n8n_dir/node_modules/iconv-lite:/usr/local/lib/node_modules/iconv-lite'
      - '~/n8n_dir/node_modules/inherits:/usr/local/lib/node_modules/inherits'
      - '~/n8n_dir/node_modules/ip:/usr/local/lib/node_modules/ip'
      - '~/n8n_dir/node_modules/is-browser:/usr/local/lib/node_modules/is-browser'
      - '~/n8n_dir/node_modules/is-typedarray:/usr/local/lib/node_modules/is-typedarray'
      - '~/n8n_dir/node_modules/is-url:/usr/local/lib/node_modules/is-url'
      - '~/n8n_dir/node_modules/isarray:/usr/local/lib/node_modules/isarray'
      - '~/n8n_dir/node_modules/isobject:/usr/local/lib/node_modules/isobject'
      - '~/n8n_dir/node_modules/isstream:/usr/local/lib/node_modules/isstream'
      - '~/n8n_dir/node_modules/jsbn:/usr/local/lib/node_modules/jsbn'
      - '~/n8n_dir/node_modules/json-schema:/usr/local/lib/node_modules/json-schema'
      - '~/n8n_dir/node_modules/json-schema-traverse:/usr/local/lib/node_modules/json-schema-traverse'
      - '~/n8n_dir/node_modules/json-stringify-safe:/usr/local/lib/node_modules/json-stringify-safe'
      - '~/n8n_dir/node_modules/jsonfile:/usr/local/lib/node_modules/jsonfile'
      - '~/n8n_dir/node_modules/jsprim:/usr/local/lib/node_modules/jsprim'
      - '~/n8n_dir/node_modules/koa-is-json:/usr/local/lib/node_modules/koa-is-json'
      - '~/n8n_dir/node_modules/levn:/usr/local/lib/node_modules/levn'
      - '~/n8n_dir/node_modules/lodash:/usr/local/lib/node_modules/lodash'
      - '~/n8n_dir/node_modules/lodash.assignin:/usr/local/lib/node_modules/lodash.assignin'
      - '~/n8n_dir/node_modules/lodash.bind:/usr/local/lib/node_modules/lodash.bind'
      - '~/n8n_dir/node_modules/lodash.defaults:/usr/local/lib/node_modules/lodash.defaults'
      - '~/n8n_dir/node_modules/lodash.filter:/usr/local/lib/node_modules/lodash.filter'
      - '~/n8n_dir/node_modules/lodash.flatten:/usr/local/lib/node_modules/lodash.flatten'
      - '~/n8n_dir/node_modules/lodash.foreach:/usr/local/lib/node_modules/lodash.foreach'
      - '~/n8n_dir/node_modules/lodash.map:/usr/local/lib/node_modules/lodash.map'
      - '~/n8n_dir/node_modules/lodash.merge:/usr/local/lib/node_modules/lodash.merge'
      - '~/n8n_dir/node_modules/lodash.pick:/usr/local/lib/node_modules/lodash.pick'
      - '~/n8n_dir/node_modules/lodash.reduce:/usr/local/lib/node_modules/lodash.reduce'
      - '~/n8n_dir/node_modules/lodash.reject:/usr/local/lib/node_modules/lodash.reject'
      - '~/n8n_dir/node_modules/lodash.some:/usr/local/lib/node_modules/lodash.some'
      - '~/n8n_dir/node_modules/lru-cache:/usr/local/lib/node_modules/lru-cache'
      - '~/n8n_dir/node_modules/media-typer:/usr/local/lib/node_modules/media-typer'
      - '~/n8n_dir/node_modules/methods:/usr/local/lib/node_modules/methods'
      - '~/n8n_dir/node_modules/mime:/usr/local/lib/node_modules/mime'
      - '~/n8n_dir/node_modules/mime-db:/usr/local/lib/node_modules/mime-db'
      - '~/n8n_dir/node_modules/mime-types:/usr/local/lib/node_modules/mime-types'
      - '~/n8n_dir/node_modules/monotonic-timestamp:/usr/local/lib/node_modules/monotonic-timestamp'
      - '~/n8n_dir/node_modules/ms:/usr/local/lib/node_modules/ms'
      - '~/n8n_dir/node_modules/negotiator:/usr/local/lib/node_modules/negotiator'
      - '~/n8n_dir/node_modules/netmask:/usr/local/lib/node_modules/netmask'
      - '~/n8n_dir/node_modules/nth-check:/usr/local/lib/node_modules/nth-check'
      - '~/n8n_dir/node_modules/oauth-sign:/usr/local/lib/node_modules/oauth-sign'
      - '~/n8n_dir/node_modules/object-assign:/usr/local/lib/node_modules/object-assign'
      - '~/n8n_dir/node_modules/on-finished:/usr/local/lib/node_modules/on-finished'
      - '~/n8n_dir/node_modules/optionator:/usr/local/lib/node_modules/optionator'
      - '~/n8n_dir/node_modules/pac-proxy-agent:/usr/local/lib/node_modules/pac-proxy-agent'
      - '~/n8n_dir/node_modules/pac-resolver:/usr/local/lib/node_modules/pac-resolver'
      - '~/n8n_dir/node_modules/parseurl:/usr/local/lib/node_modules/parseurl'
      - '~/n8n_dir/node_modules/performance-now:/usr/local/lib/node_modules/performance-now'
      - '~/n8n_dir/node_modules/prelude-ls:/usr/local/lib/node_modules/prelude-ls'
      - '~/n8n_dir/node_modules/process-nextick-args:/usr/local/lib/node_modules/process-nextick-args'
      - '~/n8n_dir/node_modules/promise-polyfill:/usr/local/lib/node_modules/promise-polyfill'
      - '~/n8n_dir/node_modules/proxy-agent:/usr/local/lib/node_modules/proxy-agent'
      - '~/n8n_dir/node_modules/proxy-from-env:/usr/local/lib/node_modules/proxy-from-env'
      - '~/n8n_dir/node_modules/psl:/usr/local/lib/node_modules/psl'
      - '~/n8n_dir/node_modules/punycode:/usr/local/lib/node_modules/punycode'
      - '~/n8n_dir/node_modules/qs:/usr/local/lib/node_modules/qs'
      - '~/n8n_dir/node_modules/querystring:/usr/local/lib/node_modules/querystring'
      - '~/n8n_dir/node_modules/raw-body:/usr/local/lib/node_modules/raw-body'
      - '~/n8n_dir/node_modules/readable-stream:/usr/local/lib/node_modules/readable-stream'
      - '~/n8n_dir/node_modules/request:/usr/local/lib/node_modules/request'
      - '~/n8n_dir/node_modules/request-promise:/usr/local/lib/node_modules/request-promise'
      - '~/n8n_dir/node_modules/request-promise-core:/usr/local/lib/node_modules/request-promise-core'
      - '~/n8n_dir/node_modules/request-x-ray:/usr/local/lib/node_modules/request-x-ray'
      - '~/n8n_dir/node_modules/safe-buffer:/usr/local/lib/node_modules/safe-buffer'
      - '~/n8n_dir/node_modules/safer-buffer:/usr/local/lib/node_modules/safer-buffer'
      - '~/n8n_dir/node_modules/selectn:/usr/local/lib/node_modules/selectn'
      - '~/n8n_dir/node_modules/setprototypeof:/usr/local/lib/node_modules/setprototypeof'
      - '~/n8n_dir/node_modules/sliced:/usr/local/lib/node_modules/sliced'
      - '~/n8n_dir/node_modules/smart-buffer:/usr/local/lib/node_modules/smart-buffer'
      - '~/n8n_dir/node_modules/socks:/usr/local/lib/node_modules/socks'
      - '~/n8n_dir/node_modules/socks-proxy-agent:/usr/local/lib/node_modules/socks-proxy-agent'
      - '~/n8n_dir/node_modules/source-map:/usr/local/lib/node_modules/source-map'
      - '~/n8n_dir/node_modules/sshpk:/usr/local/lib/node_modules/sshpk'
      - '~/n8n_dir/node_modules/statuses:/usr/local/lib/node_modules/statuses'
      - '~/n8n_dir/node_modules/stealthy-require:/usr/local/lib/node_modules/stealthy-require'
      - '~/n8n_dir/node_modules/stream-to-string:/usr/local/lib/node_modules/stream-to-string'
      - '~/n8n_dir/node_modules/string-format:/usr/local/lib/node_modules/string-format'
      - '~/n8n_dir/node_modules/string_decoder:/usr/local/lib/node_modules/string_decoder'
      - '~/n8n_dir/node_modules/superagent:/usr/local/lib/node_modules/superagent'
      - '~/n8n_dir/node_modules/superagent-proxy:/usr/local/lib/node_modules/superagent-proxy'
      - '~/n8n_dir/node_modules/supports-color:/usr/local/lib/node_modules/supports-color'
      - '~/n8n_dir/node_modules/toidentifier:/usr/local/lib/node_modules/toidentifier'
      - '~/n8n_dir/node_modules/torrent-search-api:/usr/local/lib/node_modules/torrent-search-api'
      - '~/n8n_dir/node_modules/tough-cookie:/usr/local/lib/node_modules/tough-cookie'
      - '~/n8n_dir/node_modules/tslib:/usr/local/lib/node_modules/tslib'
      - '~/n8n_dir/node_modules/tunnel-agent:/usr/local/lib/node_modules/tunnel-agent'
      - '~/n8n_dir/node_modules/tweetnacl:/usr/local/lib/node_modules/tweetnacl'
      - '~/n8n_dir/node_modules/type-check:/usr/local/lib/node_modules/type-check'
      - '~/n8n_dir/node_modules/type-is:/usr/local/lib/node_modules/type-is'
      - '~/n8n_dir/node_modules/universalify:/usr/local/lib/node_modules/universalify'
      - '~/n8n_dir/node_modules/unpipe:/usr/local/lib/node_modules/unpipe'
      - '~/n8n_dir/node_modules/uri-js:/usr/local/lib/node_modules/uri-js'
      - '~/n8n_dir/node_modules/util:/usr/local/lib/node_modules/util'
      - '~/n8n_dir/node_modules/util-deprecate:/usr/local/lib/node_modules/util-deprecate'
      - '~/n8n_dir/node_modules/uuid:/usr/local/lib/node_modules/uuid'
      - '~/n8n_dir/node_modules/vary:/usr/local/lib/node_modules/vary'
      - '~/n8n_dir/node_modules/verror:/usr/local/lib/node_modules/verror'
      - '~/n8n_dir/node_modules/word-wrap:/usr/local/lib/node_modules/word-wrap'
      - '~/n8n_dir/node_modules/wrap-fn:/usr/local/lib/node_modules/wrap-fn'
      - '~/n8n_dir/node_modules/x-ray:/usr/local/lib/node_modules/x-ray'
      - '~/n8n_dir/node_modules/x-ray-crawler:/usr/local/lib/node_modules/x-ray-crawler'
      - '~/n8n_dir/node_modules/x-ray-parse:/usr/local/lib/node_modules/x-ray-parse'
      - '~/n8n_dir/node_modules/x-ray-scraper:/usr/local/lib/node_modules/x-ray-scraper'
      - '~/n8n_dir/node_modules/xregexp:/usr/local/lib/node_modules/xregexp'
      - '~/n8n_dir/node_modules/yallist:/usr/local/lib/node_modules/yallist'
      - '~/n8n_dir/node_modules/yieldly:/usr/local/lib/node_modules/yieldly'
    image: 'n8nio/n8n:latest-rpi'
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=username
      - N8N_BASIC_AUTH_PASSWORD=your_secret_n8n_password
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=120
      - EXECUTIONS_TIMEOUT=300
      - EXECUTIONS_TIMEOUT_MAX=500
      - GENERIC_TIMEZONE=Europe/Berlin
      - NODE_FUNCTION_ALLOW_EXTERNAL=torrent-search-api
```

Once configured this way, run n8n and create a new workflow, copying the one proposed.

Configure workflow

Transmission
In order to send commands to Transmission you must pass its Basic Auth. Open the Start download node and edit the Credentials; perform the same operation, choosing the new credentials, in the Start download new token node. In this automation we call Transmission twice because of a security mechanism in Transmission that prevents single-request commands from being executed; performing the request twice bypasses it (see https://en.wikipedia.org/wiki/Cross-site_request_forgery). We use the X-Transmission-Session-Id returned by the first request to authenticate the second request — see the sketch after this section.

Telegram
In order to make the workflow work as expected, you must create a Telegram bot and configure the nodes (Torrent not found and Telegram1) to send your message once the workflow completes. Here's an easy guide to follow: https://docs.n8n.io/nodes/n8n-nodes-base.telegram/ In those nodes you should also configure the Chat ID; you may use your Telegram username or use a bot to retrieve your id — chat with useridinfobot, which sends you your id.

Ok google automation
Since right now there is no n8n mobile client that can trigger automations through Google Assistant, I decided to use an IFTTT applet to trigger the webhook. Connect your IFTTT account with Google Assistant and pick the trigger "Say a phrase with a text ingredient". Configure it with a phrase such as scarica $ -> download $, or metti in download $ -> put in download $, or any other trigger you want. Then configure the applet to call your server's n8n webhook.
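For reference, the two-request handshake the Transmission nodes perform looks roughly like this. A sketch only, assuming a local transmission-daemon on the default RPC port and placeholder credentials:

```javascript
// Sketch: the CSRF handshake the two Transmission nodes perform.
// Transmission rejects the first request with 409 and returns an
// X-Transmission-Session-Id header; repeating the call with that header succeeds.
const url = 'http://localhost:9091/transmission/rpc'; // default daemon address — adjust to yours
const auth = 'Basic ' + Buffer.from('username:password').toString('base64');
const payload = {
  method: 'torrent-add',
  arguments: { filename: '<magnet-link-from-the-search-node>' },
};

// First call: expected to fail with 409, but it carries the session id.
const first = await this.helpers.httpRequest({
  method: 'POST',
  url,
  headers: { Authorization: auth },
  body: payload,
  ignoreHttpStatusErrors: true,
  returnFullResponse: true,
});
const sessionId = first.headers['x-transmission-session-id'];

// Second call: same request, now authenticated with the session id.
const result = await this.helpers.httpRequest({
  method: 'POST',
  url,
  headers: { Authorization: auth, 'X-Transmission-Session-Id': sessionId },
  body: payload,
});

return [{ json: result }];
```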
Conclusion
In conclusion, this is a fully working automation that integrates an external node library into n8n and provides an easy trigger for a complex operation.

Security concerns
Exposing a download trigger can be risky, since unwanted or malicious torrents could be queued. You may decide to authenticate the webhook request by passing an extra field in the body with a token shared between the two endpoints. Moreover, the torrent-search-api library and its dependencies have some vulnerabilities that you may want to avoid on your own media-center; these will hopefully be patched soon in a further release of the library. This is just an interesting proof of concept.

Quality of the download
You may want to introduce another block between the torrent search and the webhook trigger to verify the movie matched from the words detected by Google Assistant; it sometimes misinterprets the phrase, and you could end up downloading potentially copyrighted material. Please use this automation only for free and open source movies and music.