by RedOne
This workflow is designed for e-commerce store owners, operations managers, and developers who use Shopify as their e-commerce platform and want an automated way to track and analyze their order data. It is particularly useful for businesses that:

- Need a centralized view of all Shopify orders
- Want to analyze order trends without logging into Shopify
- Need to share order data with team members who don't have Shopify access
- Want to build custom reports based on order information

## What Problem Is This Workflow Solving?

While Shopify provides excellent order management within its platform, many businesses need their order data available in other systems for various purposes:

- **Data accessibility**: Not everyone in your organization may have access to Shopify's admin interface
- **Custom reporting**: Google Sheets allows for flexible analysis and report creation
- **Data integration**: Having orders in Google Sheets makes it easier to combine with other business data
- **Backup**: Creates an additional backup of your critical order information

## What This Workflow Does

This n8n workflow creates an automated bridge between your Shopify store and Google Sheets:

1. Listens for new order notifications from your Shopify store via webhooks
2. Processes the incoming order data and transforms it into a structured format
3. Stores each new order in a dedicated Google Sheets spreadsheet
4. Sends real-time notifications to Telegram when new orders are received or errors occur

## Setup

### Create a Google Sheet

1. Create a new Google Sheet to store your orders
2. Add a sheet named "orders" with the following columns: orderId, orderNumber, created_at, processed, processed_at, json, customer, shippingAddress, lineItems, totalPrice, currency

### Set Up Telegram Bot

1. Create a Telegram bot using BotFather (send /newbot to @BotFather)
2. Save your bot token for use in n8n credentials
3. Start a chat with your bot and get your chat ID (you can use @userinfobot)

### Configure the Workflow

1. Set your Google Sheet ID in the "Edit Variables" node
2. Enter your Telegram chat ID in the "Edit Variables" node
3. Set up your Telegram API credentials in n8n

### Configure Shopify Webhook

1. In your Shopify admin, go to: Settings > Notifications > Webhooks
2. Create a new webhook for "Order creation"
3. Set the URL to your n8n webhook URL (from the "Receive New Shopify Order" node)
4. Set the format to JSON

## How to Customize This Workflow to Your Needs

- **Additional data**: Modify the "Transform Order Data to Standard Format" function to extract more Shopify data
- **Multiple sheets**: Duplicate the Google Sheets node to store different aspects of orders in separate sheets
- **Telegram messages**: Customize the text in the Telegram nodes to include more details or rich formatting
- **Data processing**: Add nodes to perform calculations or transformations on order data
- **Additional notifications**: Add more channels like Slack, Discord, or SMS
- **Integrations**: Extend the workflow to send order data to other systems like CRMs, ERPs, or accounting software

## Final Notes

This workflow serves as a foundation that you can build upon to create a comprehensive order management system tailored to your specific business needs.
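As an illustration of the "Transform Order Data to Standard Format" step, here is a minimal sketch of how a Shopify order-creation webhook payload could be mapped to the sheet columns listed above. The input field names follow Shopify's public order payload; the exact fields the template's node reads may differ.

```javascript
// Sketch: map a Shopify order webhook payload to one "orders" sheet row.
// Output keys match the sheet columns listed in the setup section.
function transformOrder(order) {
  return {
    orderId: order.id,
    orderNumber: order.order_number,
    created_at: order.created_at,
    processed: false,
    processed_at: "",
    json: JSON.stringify(order),
    customer: `${order.customer?.first_name ?? ""} ${order.customer?.last_name ?? ""}`.trim(),
    shippingAddress: order.shipping_address
      ? `${order.shipping_address.address1}, ${order.shipping_address.city}`
      : "",
    lineItems: (order.line_items ?? [])
      .map((li) => `${li.quantity}x ${li.title}`)
      .join("; "),
    totalPrice: order.total_price,
    currency: order.currency,
  };
}
```

Storing the raw payload in the `json` column alongside the flattened fields keeps the full order available for later reprocessing.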
by Zacharia Kimotho
This workflow automates regular backups of your n8n workflows, using Google Drive as the backup destination instead of the more common GitHub approach. It keeps a safety copy of every workflow so you never lose one if your n8n instance goes down.

## How does it work

1. Creates a new folder, inside a specified parent folder, named with the time the backup was made
2. Loops over all workflows, converts each one to a JSON file, and uploads it to the new folder
3. Fetches the previous backups and deletes them

This keeps the backup clean and simple, without accumulating a cache of old workflow copies on your Drive.

## Setup

1. Create a new folder in Google Drive
2. Create new service account credentials
3. Share the folder with the service account email
4. Import this workflow into your canvas and map the credentials
5. Set the schedule on which your backups should run
6. Activate the workflow

Happy Productivity!

@Imperol
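The "convert each workflow to a JSON file" step above could be done in an n8n Code node roughly as follows. The filename pattern is an assumption for illustration, not necessarily the template's exact naming scheme.

```javascript
// Sketch: serialize a workflow object to an uploadable JSON file.
// n8n binary data is base64-encoded, hence the Buffer round trip.
function workflowToFile(workflow, backupDate) {
  const fileName = `${workflow.name}_${backupDate}.json`;
  const content = JSON.stringify(workflow, null, 2);
  return {
    fileName,
    data: Buffer.from(content, "utf8").toString("base64"),
  };
}
```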
by Martech Mafia
## Problem

Monitoring SEO performance from Google Search Console (GSC) manually is repetitive and prone to human error. For marketers or analysts managing multiple domains, checking reports manually and copying data into spreadsheets or databases is time-consuming. There is a strong need for an automated solution that collects, stores, and updates SEO metrics regularly for easier analysis and dashboarding.

## Solution

This workflow automatically pulls performance metrics from Google Search Console (queries, pages, CTR, impressions, positions, and devices) and stores them in a structured format inside a NocoDB table. It's ideal for SEO specialists, marketing teams, or data analysts who need to automate SEO reporting and centralize data for analytics or dashboards (like Superset or Metabase).

## Setup Instructions

1. **Authorize your Google Search Console account**: Connect via OAuth2 (requires GSC API access).
2. **Create a NocoDB table**: Define fields to match the GSC response: query (text), page (URL), device (text), clicks (number), impressions (number), ctr (percentage), position (number).
3. **Add credentials in n8n**: Use credential nodes for both Google OAuth2 and the NocoDB API token.
4. **Customize the schedule trigger**: Set the frequency (e.g., weekly) and adjust the domain/date range as needed.
5. **Generalize domains**: Replace specific domains like martechmafia.net with your-domain.com before submission.

## NocoDB Table Structure

The NocoDB table must match the fields coming from GSC's Search Analytics API. Here's a sample schema:

```json
{
  "query": "string",
  "page": "string",
  "device": "string",
  "clicks": "number",
  "impressions": "number",
  "ctr": "number",
  "position": "number"
}
```
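The mapping from the GSC Search Analytics API response to the NocoDB schema above can be sketched as follows. The API returns dimension values in a `keys` array ordered as requested; this sketch assumes the request used dimensions `["query", "page", "device"]`, so adjust the destructuring if your dimensions differ.

```javascript
// Sketch: flatten GSC Search Analytics rows into NocoDB records.
// Assumes request dimensions were ["query", "page", "device"].
function gscRowsToRecords(response) {
  return (response.rows ?? []).map((row) => {
    const [query, page, device] = row.keys;
    return {
      query,
      page,
      device,
      clicks: row.clicks,
      impressions: row.impressions,
      ctr: row.ctr,
      position: row.position,
    };
  });
}
```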
by WeblineIndia
# Automate Telegram Chat Responses Using Google Gemini

*By WeblineIndia*

## ⚡ TL;DR (Quick Steps)

1. Create a Telegram bot using @BotFather and copy the API token.
2. Obtain a Google Gemini API key via Google Cloud.
3. Set up the n8n workflow:
   - Trigger: Telegram message received.
   - AI model: Google Gemini generates a response.
   - Output: the AI reply is sent back to the user via Telegram.
4. Customize the system prompt, model, or message handling to suit your use case.

## 🧠 Description

This n8n workflow enables seamless automation of real-time chat replies in Telegram by integrating with Google Gemini's chat model. Every time a user sends a message to your Telegram bot, the workflow routes it through Gemini, which analyzes the message and crafts a professional response. The reply is then automatically delivered back to the user.

The setup acts as a lightweight but powerful chatbot system, ideal for businesses, customer service, or even personal productivity bots. You can easily modify its tone, intelligence level, or logging mechanisms to cater to specific domains such as sales, tech support, or general Q&A.

## 🎯 Purpose of the Workflow

The primary goal of this workflow is to automate intelligent, context-aware chat responses in Telegram using a robust AI model. It eliminates manual reply handling, enhances user engagement, and ensures 24/7 interaction capability, all through a no-code or low-code setup using n8n.

## 🛠️ Steps to Configure and Use

### ✅ Pre-Conditions / Requirements

- **Telegram Bot Token**: Get it from @BotFather.
- **Google Gemini API Key**: Available via Google Cloud PaLM/Gemini API access.
- **n8n Instance**: Hosted or local instance with the required nodes installed (Telegram, Basic LLM Chain, and Google Gemini support).

### 🔧 Setup Instructions

**Step 1: Telegram Trigger – Listen for Incoming Messages**

1. Add a Telegram Trigger node.
2. Select Trigger On: Message.
3. Authenticate using your Telegram bot token.

This will capture incoming messages from any user interacting with your bot.
**Step 2: Google Gemini AI – Generate a Smart Reply**

1. Add the Basic LLM Chain node.
2. Connect the input message ({{$json.message.text}}) from the Telegram Trigger.
3. Set the system prompt:
   > "You are an AI assistant. Reply to the following user message professionally:"
4. Choose the Google Gemini Chat Model (models/gemini-1.5-pro).
5. Connect this node to receive the text input and pass it to Gemini for processing.

**Step 3: Telegram Reply – Send the AI Response**

1. Add a Telegram node (Operation: Send Message).
2. Set the Chat ID dynamically from the Telegram Trigger node.
3. Input the generated message from the Gemini output.
4. Enable Parse Mode as HTML for rich formatting.

**Final Step: Link All Nodes**

Receive Telegram Message → Generate AI Response → Send Telegram Reply.

> Tip: Test the workflow by sending a message to your Telegram bot and ensure you receive an AI-generated reply.

## 🧩 Customization Guidance

- ✏️ Modify the AI tone by updating the system prompt.
- 🤖 Use other AI models (e.g., OpenAI GPT-4o).
- 🔍 Add filters to respond differently based on specific keywords.
- 📊 Extend the workflow to store chats in Google Sheets, Airtable, or databases for audit or analytics.
- 🌐 Multi-language support: add translation layers before and after AI processing.

## 🛠️ Troubleshooting Guide

- **No message received?** Check that your Telegram bot is active and the webhook is working.
- **AI not responding?** Validate your Google Gemini API key and usage quota.
- **Wrong replies?** Refine the system prompt or validate message routing.
- **Formatting issues?** Ensure Parse Mode is correctly set to HTML.

## 💡 Use Case Examples

- **Customer service chatbot** for product queries.
- **Educational bots** for answering user questions on a topic.
- **Mental health companion** that gives supportive replies.
- **Event-based announcers** or automatic responders during off-hours.

> And many more! This workflow can be easily extended to support advanced use cases with just a few additional nodes.
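The two expressions the nodes above rely on come straight from Telegram's Bot API Update object: the message text feeds the LLM chain, and the chat ID addresses the reply. A minimal sketch of that mapping:

```javascript
// Sketch: the fields the Telegram Trigger exposes that the other nodes use.
// Shapes follow Telegram's Bot API "Update" object.
function extractChat(update) {
  return {
    text: update.message.text,      // -> {{$json.message.text}} (LLM input)
    chatId: update.message.chat.id, // -> Chat ID for the reply node
  };
}
```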
## 👨‍💻 About the Creator

This workflow was developed by WeblineIndia, a trusted provider of AI development services and process automation solutions. If you're looking to build or customize intelligent workflows like this, we invite you to get in touch with our team. We also offer specialized Python development and AI developer hiring services to supercharge your automation needs.
by Lucas Peyrin
## How it works

This template launches your very first AI Agent: an AI-powered chatbot that can do more than just talk, because it can take action using tools. Think of an AI Agent as a smart assistant, and of tools as the apps on its phone. By connecting it to other nodes, you give your agent the ability to interact with real-world data and services, like checking the weather, fetching news, or even sending emails on your behalf.

This workflow is designed to be the perfect starting point:

- **The Chat Interface:** A Chat Trigger node provides a simple, clean interface for you to talk to your agent.
- **The Brains:** The AI Agent node receives your messages, intelligently decides which tool to use (if any), and formulates a helpful response. Its personality and instructions are fully customizable in the "System Message".
- **The Language Model:** It uses **Google Gemini** to power its reasoning and conversation skills.
- **The Tools:** It comes pre-equipped with two tools to demonstrate its capabilities:
  - Get Weather: fetches real-time weather forecasts.
  - Get News: reads any RSS feed to get the latest headlines.
- **The Memory:** A Conversation Memory node allows the agent to remember the last few messages, enabling natural, follow-up conversations.

## Set up steps

Setup time: ~2 minutes. You only need one thing to get started: a free Google AI API key.

1. **Get your Google AI API key**: Visit Google AI Studio at aistudio.google.com/app/apikey, click "Create API key in new project", and copy the key that appears.
2. **Add your credential in n8n**: On the workflow canvas, go to the Connect your model (Google Gemini) node, click the Credential dropdown and select + Create New Credential, then paste your API key into the API Key field and click Save.
3. **Start chatting!** Go to the Example Chat node, click the "Open Chat" button in its parameter panel, and try one of the example questions, like: "What's the weather in Paris?" or "Get me the latest tech news."

That's it! You now have a fully functional AI Agent. Try adding more tools (like Gmail or Google Calendar) to make it even more powerful.
by Dvir Sharon
# Goodreads Quote Extraction with Bright Data and Gemini

This workflow demonstrates how to fetch data from Goodreads web pages using Bright Data and then extract specific information (quotes) from that data using a Google Gemini AI model.

## How it works

1. The workflow is triggered manually.
2. It sends a request to a Bright Data collector to scrape data from a predefined list of Goodreads URLs.
3. The collected text data from Goodreads is passed to a Google Gemini AI node.
4. The AI node processes the text and extracts quotes based on a specified JSON schema output format.

## Set up steps

Setting up this workflow should take only a few minutes.

1. You will need a Bright Data API key to configure the 'Header Auth' credential.
2. You will need a Google Gemini API key to configure the 'Google Gemini(PaLM) Api account' credential.
3. Ensure the correct Bright Data collector ID is set in the URL of the 'Perform Bright Data Web Request' node.
4. Make sure the full list of target Goodreads URLs is correctly added to the body of the 'Perform Bright Data Web Request' node.
5. Link your created credentials to the respective nodes ('Perform Bright Data Web Request' and 'Quotes Extractor').

Keep detailed descriptions for specific node configurations in sticky notes inside your workflow canvas.
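The actual JSON schema lives inside the 'Quotes Extractor' node and is not reproduced here; as an illustration only (the field names below are assumptions, not the template's own), a structured output format for extracted quotes could look like this:

```javascript
// Hypothetical structured-output schema for the quote extraction step.
// Field names (quotes, text, author) are illustrative assumptions.
const quoteSchema = {
  type: "object",
  properties: {
    quotes: {
      type: "array",
      items: {
        type: "object",
        properties: {
          text: { type: "string" },
          author: { type: "string" },
        },
        required: ["text"],
      },
    },
  },
  required: ["quotes"],
};
```

Constraining the model to a schema like this is what makes the extracted quotes directly usable by downstream nodes without extra parsing.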
by Zacharia Kimotho
This workflow is designed to generate prompts for AI agents and store them in Airtable. It starts by receiving a chat message, processes it to create a structured prompt, categorizes the prompt, and finally stores it in Airtable.

## Setup Instructions

### Prerequisites

- **An AI model**, e.g. Gemini, OpenAI, etc.
- **An Airtable base and table**, or another storage tool

### Step-by-Step Guide

1. **Clone the workflow**: Copy the provided workflow JSON and import it into your n8n instance.
2. **Configure credentials**: Set up the Google Gemini (PaLM) API account credentials and the Airtable personal access token credentials.
3. **Map the Airtable base and table**: Create a copy of the Prompt Library in Airtable, then map the base and table in the Airtable node.
4. **Customize the prompt template**: Edit the 'Create prompt' node to adjust the prompt template as needed.

### Configuration Options

- **Prompt template**: Customize the prompt template in the 'Create prompt' node to fit your specific use case.
- **Airtable mapping**: Ensure the Airtable base and table are correctly mapped in the Airtable node.

## Use Case Examples

This workflow is particularly useful when you want to automate the generation and management of AI agent prompts:

- **Rapid prototyping of AI agents**: Quickly generate and test different prompts for AI agents in various applications.
- **Content creation**: Generate prompts for AI models that create blog posts, articles, or social media content.
- **Customer service automation**: Develop prompts for AI-powered chatbots to handle customer inquiries and support requests.
- **Educational tools**: Create prompts for AI tutors or learning assistants.

Industries/professionals:

- **Software development**: Developers building AI-powered applications.
- **Marketing**: Marketers automating content creation and social media management.
- **Customer service**: Customer service managers implementing AI-driven chatbots.
- **Education**: Educators creating AI-based learning tools.

Practical value:

- **Time savings**: Automates the prompt generation process, saving significant time and effort.
- **Improved prompt quality**: Leverages Google Gemini and structured prompt-engineering principles to generate more effective prompts.
- **Centralized prompt management**: Stores prompts in Airtable for easy access, organization, and reuse.

## Running and Troubleshooting

### Running the Workflow

1. Activate the workflow in n8n.
2. Send a chat message to the webhook URL configured in the "When chat message received" node.
3. Monitor the workflow execution in the n8n editor.

### Monitoring Execution

Check the execution log in n8n to see the data flowing through each node and identify any errors.

### Checking for Successful Completion

- Verify that a new record is created in your Airtable base with the generated prompt, name, and category.
- Confirm that the "Return results" node sends a confirmation of the prompt back to the chat interface.

### Troubleshooting Tips

- **Error: 400: Bad Request in the Google Gemini nodes**
  - Cause: Invalid API key or insufficient permissions.
  - Solution: Double-check your Google Gemini API key and ensure that the API is enabled for your project.
- **Error: the Airtable node fails to create a record**
  - Cause: Invalid Airtable credentials, incorrect Base ID or Table ID, or mismatched column names.
  - Solution: Verify your Airtable API key, Base ID, Table ID, and column names. Ensure that the data types in n8n match the data types in your Airtable columns.

Follow me on LinkedIn for more.
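To make the column-mapping step above concrete, here is a sketch of the record body the Airtable node effectively sends. The column names (Name, Prompt, Category) are assumptions based on the fields the workflow stores; match them to the columns of your copy of the Prompt Library.

```javascript
// Sketch: the record body for an Airtable "create record" call.
// Column names are assumptions; align them with your table.
function buildAirtableRecord(name, prompt, category) {
  return {
    fields: {
      Name: name,
      Prompt: prompt,
      Category: category,
    },
  };
}
```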
by Miquel Colomer
This n8n workflow template checks for new major releases (tagged with .0) of the n8n project using its official GitHub releases feed. It runs multiple times a day and sends notifications via email and Telegram if a new release is found.

> ⚠️ Note: You must **activate the workflow** to start receiving release notifications.

## 🚀 What It Does

- Monitors the n8n GitHub releases feed
- Detects major versions (e.g., 1.0.0, 2.0.0)
- Sends alert messages via Telegram and email (SES) when a release is published

## ⏰ Scheduling Details

The Cron node checks for new releases three times per day: 10:00, 14:00, and 18:00 server time.

## 🛠️ Step-by-Step Setup

1. **Configure the Telegram bot**: Connect your Telegram bot and specify the chat ID where you want to receive notifications.
2. **Set up AWS SES credentials**: Use a verified sender email and set up AWS SES credentials in your n8n instance.
3. **Activate the workflow**: Enable the workflow in your instance to start receiving notifications.
4. **Customize notification messages (optional)**: You can modify the email subject, Telegram format, or filter logic.

## 🧠 How It Works: Workflow Overview

1. **Cron Trigger**: Runs the workflow at 10:00, 14:00, and 18:00 daily.
2. **Read RSS Feed**: Pulls data from https://github.com/n8n-io/n8n/releases.atom.
3. **Filter by Current Day**: Keeps feed entries published in the last 4 hours whose titles start with n8n@ and end with .0.
4. **Condition Check**: Uses a regex to check whether the filter result contains any release data.
5. **Notifications**: If a new major release is found, sends a Telegram message to a specified chat and an email via AWS SES with the release info.

## 📨 Final Output

You'll receive a Telegram message and an email when a new major n8n version is released.

## 🔐 Credentials Used

- **Telegram API**: for sending chat notifications
- **AWS SES**: to send email alerts

## ✨ Customization Tips

- **Change notification channels**: Add Slack, Discord, or other preferred channels.
- **Adjust the Cron schedule**: Modify the Cron node to fit your check frequency.
- **Modify filters**: Detect patch or beta versions by changing the .0 condition.
- **Send release notes**: Extend the feed parsing to include the release content.

## ❓ Questions?

Template created by Miquel Colomer and n8nhackers.com. Need help customizing or deploying? Contact us for consulting and support.
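The title condition used by the filter (starts with n8n@, ends with .0) can be sketched as a one-line predicate, which is also the piece you would change to detect patch or beta versions:

```javascript
// Sketch: the major-release title check the workflow's filter performs.
// Release titles in the feed look like "n8n@1.50.0".
function isMajorRelease(title) {
  return title.startsWith("n8n@") && title.endsWith(".0");
}
```

For example, switching `endsWith(".0")` for a regex like `/\d+\.\d+\.\d+/` would match every release, not just majors.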
by Airtop
# Automating LinkedIn Company Data Extraction

## Use Case

This automation extracts detailed company insights from a LinkedIn company page, including identity, scale, classification, and funding data. Ideal for investors, sales teams, and market researchers.

## What This Automation Does

This automation accepts the following inputs:

- **Company's LinkedIn URL**: the public LinkedIn page URL of the company.
- **Airtop Profile (connected to LinkedIn)**: your Airtop Profile authenticated on LinkedIn.

It then extracts and returns structured data with:

1. **Company Identity**: full name, tagline, headquarters location (city, state, country), About section, website
2. **Company Scale**: current employee count; employee size bracket: [0-9], [10-150], [150+]
3. **Business Classification**: is the company an automation agency? (true/false); AI implementation level: Low / Medium / High; technical sophistication: Basic / Intermediate / Advanced / Expert
4. **Funding Profile**: most recent funding round, total amount raised, key investors, last funding update date

## How It Works

1. Creates an Airtop session using the provided profile.
2. Navigates to the company's LinkedIn page.
3. Executes an Airtop query to extract the data.
4. Outputs the result in a standardized JSON schema.

## Setup Requirements

- An Airtop API key
- A LinkedIn-authenticated Airtop Profile

## Next Steps

- **Feed into a CRM**: Enrich your accounts with detailed LinkedIn data.
- **Prioritize leads**: Use classification and funding data to prioritize outreach.
- **Combine with people data**: Integrate with individual-level enrichment for full context.

Read more about how to extract company data from LinkedIn with Airtop and n8n.
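To visualize the standardized output, here is an illustrative example of what the four groups above could look like as one JSON object. Both the key names and the values are made up for illustration; the template's actual schema may use different keys.

```javascript
// Illustrative output shape only; keys and values are hypothetical.
const exampleOutput = {
  identity: {
    name: "Acme Corp",
    tagline: "Automation for everyone",
    headquarters: { city: "Austin", state: "TX", country: "US" },
    about: "Acme builds workflow automation tools.",
    website: "https://example.com",
  },
  scale: { employeeCount: 120, sizeBracket: "[10-150]" },
  classification: {
    isAutomationAgency: true,
    aiImplementationLevel: "High",
    technicalSophistication: "Advanced",
  },
  funding: {
    lastRound: "Series A",
    totalRaised: "$12M",
    keyInvestors: ["Example Ventures"],
    lastFundingUpdate: "2024-05-01",
  },
};
```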
by Pedro Santos
# 🤖 AI Agent Web Search using SearchApi & LLM

## Who is this for?

This workflow is ideal for anyone conducting online research, including students, researchers, content creators, and professionals looking for accurate, up-to-date, and verifiable information. It also serves as an excellent foundation for building more sophisticated AI-driven applications.

## What problem does this workflow solve? / Use case

This workflow automates web searches by enabling an AI agent to efficiently retrieve and summarize external, verifiable information, ensuring accuracy through source citations.

## What this workflow does

- Connects an AI agent node to SearchApi.io as an integrated search tool.
- Empowers the AI agent to perform real-time web searches using various SearchApi engines (e.g., Google, Bing).
- Allows the AI agent to dynamically determine search parameters based on user interaction, delivering contextually relevant results.
- Ensures responses include clearly cited sources for validation and further exploration.

## Setup

1. **Install the SearchApi community node**: Open Settings → Community Nodes in your self-hosted n8n instance, fill in the npm package name @searchapi/n8n-nodes-searchapi, accept the risk prompt, and hit Install. It should now appear when you search for it in the node panel.
2. **API configuration**: Set up your SearchApi.io credentials in n8n, and add your preferred LLM provider credentials (e.g., OpenRouter API).
3. **Connect the LLM integration**: Configure the AI agent with your chosen model and attach the SearchApi node as a tool.

## How to customize this workflow to your needs

- Integrate additional nodes to structure or store search results (e.g., saving to databases, Notion, Google Sheets).
- Extend chatbot capabilities to integrate with messaging platforms (Slack, Discord) or email notifications.
- Adjust search parameters and filters within the AI agent node to tailor information retrieval.
## Example Usage

- **Input**: User asks, "What are the latest developments in AI regulation?"
- **Output**: The AI retrieves, summarizes, and cites recent, authoritative articles and news sources from the web.
by Danger
# Ok Google, download "movie name"

I developed this automation to improve my quality of life when handling torrents on my media center.

## Goal

Automate the search for a movie by name and trigger a download through your transmission-daemon.

## Setup

### Prerequisites

- A transmission-daemon up and running, and its authentication method
- n8n self-hosted, with the ability to add npm packages (best done via docker-compose.yaml)
- Telegram bot credentials [optional]

### Configuration

Create a folder where your docker-compose.yaml belongs (n8n_dir) and install the node package:

```
cd ~/n8n_dir
npm i torrent-search-api
```

Then configure your docker-compose.yaml file this way. You must include all the dependencies of torrent-search-api; this lets you run the torrent search node used in this workflow.

```yaml
version: '3.3'
services:
  n8n:
    container_name: n8n
    ports:
      - '5678:5678'
    restart: always
    volumes:
      - '~/n8n_dir/.n8n:/home/node/.n8n'
      - '~/n8n_dir/node_modules/@tootallnate:/usr/local/lib/node_modules/@tootallnate'
      - '~/n8n_dir/node_modules/accepts:/usr/local/lib/node_modules/accepts'
      - '~/n8n_dir/node_modules/agent-base:/usr/local/lib/node_modules/agent-base'
      - '~/n8n_dir/node_modules/ajv:/usr/local/lib/node_modules/ajv'
      - '~/n8n_dir/node_modules/ansi-styles:/usr/local/lib/node_modules/ansi-styles'
      - '~/n8n_dir/node_modules/asn1:/usr/local/lib/node_modules/asn1'
      - '~/n8n_dir/node_modules/assert:/usr/local/lib/node_modules/assert'
      - '~/n8n_dir/node_modules/assert-plus:/usr/local/lib/node_modules/assert-plus'
      - '~/n8n_dir/node_modules/ast-types:/usr/local/lib/node_modules/ast-types'
      - '~/n8n_dir/node_modules/asynckit:/usr/local/lib/node_modules/asynckit'
      - '~/n8n_dir/node_modules/aws-sign2:/usr/local/lib/node_modules/aws-sign2'
      - '~/n8n_dir/node_modules/aws4:/usr/local/lib/node_modules/aws4'
      - '~/n8n_dir/node_modules/base64-js:/usr/local/lib/node_modules/base64-js'
      - '~/n8n_dir/node_modules/batch:/usr/local/lib/node_modules/batch'
      - '~/n8n_dir/node_modules/bcrypt-pbkdf:/usr/local/lib/node_modules/bcrypt-pbkdf'
      - '~/n8n_dir/node_modules/bluebird:/usr/local/lib/node_modules/bluebird'
      - '~/n8n_dir/node_modules/boolbase:/usr/local/lib/node_modules/boolbase'
      - '~/n8n_dir/node_modules/brotli:/usr/local/lib/node_modules/brotli'
      - '~/n8n_dir/node_modules/bytes:/usr/local/lib/node_modules/bytes'
      - '~/n8n_dir/node_modules/caseless:/usr/local/lib/node_modules/caseless'
      - '~/n8n_dir/node_modules/chalk:/usr/local/lib/node_modules/chalk'
      - '~/n8n_dir/node_modules/cheerio:/usr/local/lib/node_modules/cheerio'
      - '~/n8n_dir/node_modules/cloudscraper:/usr/local/lib/node_modules/cloudscraper'
      - '~/n8n_dir/node_modules/co:/usr/local/lib/node_modules/co'
      - '~/n8n_dir/node_modules/color-convert:/usr/local/lib/node_modules/color-convert'
      - '~/n8n_dir/node_modules/color-name:/usr/local/lib/node_modules/color-name'
      - '~/n8n_dir/node_modules/combined-stream:/usr/local/lib/node_modules/combined-stream'
      - '~/n8n_dir/node_modules/component-emitter:/usr/local/lib/node_modules/component-emitter'
      - '~/n8n_dir/node_modules/content-disposition:/usr/local/lib/node_modules/content-disposition'
      - '~/n8n_dir/node_modules/content-type:/usr/local/lib/node_modules/content-type'
      - '~/n8n_dir/node_modules/cookiejar:/usr/local/lib/node_modules/cookiejar'
      - '~/n8n_dir/node_modules/core-util-is:/usr/local/lib/node_modules/core-util-is'
      - '~/n8n_dir/node_modules/css-select:/usr/local/lib/node_modules/css-select'
      - '~/n8n_dir/node_modules/css-what:/usr/local/lib/node_modules/css-what'
      - '~/n8n_dir/node_modules/dashdash:/usr/local/lib/node_modules/dashdash'
      - '~/n8n_dir/node_modules/data-uri-to-buffer:/usr/local/lib/node_modules/data-uri-to-buffer'
      - '~/n8n_dir/node_modules/debug:/usr/local/lib/node_modules/debug'
      - '~/n8n_dir/node_modules/deep-is:/usr/local/lib/node_modules/deep-is'
      - '~/n8n_dir/node_modules/degenerator:/usr/local/lib/node_modules/degenerator'
      - '~/n8n_dir/node_modules/delayed-stream:/usr/local/lib/node_modules/delayed-stream'
      - '~/n8n_dir/node_modules/delegates:/usr/local/lib/node_modules/delegates'
      - '~/n8n_dir/node_modules/depd:/usr/local/lib/node_modules/depd'
      - '~/n8n_dir/node_modules/destroy:/usr/local/lib/node_modules/destroy'
      - '~/n8n_dir/node_modules/dom-serializer:/usr/local/lib/node_modules/dom-serializer'
      - '~/n8n_dir/node_modules/domelementtype:/usr/local/lib/node_modules/domelementtype'
      - '~/n8n_dir/node_modules/domhandler:/usr/local/lib/node_modules/domhandler'
      - '~/n8n_dir/node_modules/domutils:/usr/local/lib/node_modules/domutils'
      - '~/n8n_dir/node_modules/ecc-jsbn:/usr/local/lib/node_modules/ecc-jsbn'
      - '~/n8n_dir/node_modules/ee-first:/usr/local/lib/node_modules/ee-first'
      - '~/n8n_dir/node_modules/emitter-component:/usr/local/lib/node_modules/emitter-component'
      - '~/n8n_dir/node_modules/enqueue:/usr/local/lib/node_modules/enqueue'
      - '~/n8n_dir/node_modules/enstore:/usr/local/lib/node_modules/enstore'
      - '~/n8n_dir/node_modules/entities:/usr/local/lib/node_modules/entities'
      - '~/n8n_dir/node_modules/error-inject:/usr/local/lib/node_modules/error-inject'
      - '~/n8n_dir/node_modules/escape-html:/usr/local/lib/node_modules/escape-html'
      - '~/n8n_dir/node_modules/escape-string-regexp:/usr/local/lib/node_modules/escape-string-regexp'
      - '~/n8n_dir/node_modules/escodegen:/usr/local/lib/node_modules/escodegen'
      - '~/n8n_dir/node_modules/esprima:/usr/local/lib/node_modules/esprima'
      - '~/n8n_dir/node_modules/estraverse:/usr/local/lib/node_modules/estraverse'
      - '~/n8n_dir/node_modules/esutils:/usr/local/lib/node_modules/esutils'
      - '~/n8n_dir/node_modules/extend:/usr/local/lib/node_modules/extend'
      - '~/n8n_dir/node_modules/extsprintf:/usr/local/lib/node_modules/extsprintf'
      - '~/n8n_dir/node_modules/fast-deep-equal:/usr/local/lib/node_modules/fast-deep-equal'
      - '~/n8n_dir/node_modules/fast-json-stable-stringify:/usr/local/lib/node_modules/fast-json-stable-stringify'
      - '~/n8n_dir/node_modules/fast-levenshtein:/usr/local/lib/node_modules/fast-levenshtein'
      - '~/n8n_dir/node_modules/file-uri-to-path:/usr/local/lib/node_modules/file-uri-to-path'
      - '~/n8n_dir/node_modules/forever-agent:/usr/local/lib/node_modules/forever-agent'
      - '~/n8n_dir/node_modules/form-data:/usr/local/lib/node_modules/form-data'
      - '~/n8n_dir/node_modules/format-parser:/usr/local/lib/node_modules/format-parser'
      - '~/n8n_dir/node_modules/formidable:/usr/local/lib/node_modules/formidable'
      - '~/n8n_dir/node_modules/fs-extra:/usr/local/lib/node_modules/fs-extra'
      - '~/n8n_dir/node_modules/ftp:/usr/local/lib/node_modules/ftp'
      - '~/n8n_dir/node_modules/get-uri:/usr/local/lib/node_modules/get-uri'
      - '~/n8n_dir/node_modules/getpass:/usr/local/lib/node_modules/getpass'
      - '~/n8n_dir/node_modules/graceful-fs:/usr/local/lib/node_modules/graceful-fs'
      - '~/n8n_dir/node_modules/har-schema:/usr/local/lib/node_modules/har-schema'
      - '~/n8n_dir/node_modules/har-validator:/usr/local/lib/node_modules/har-validator'
      - '~/n8n_dir/node_modules/has-flag:/usr/local/lib/node_modules/has-flag'
      - '~/n8n_dir/node_modules/htmlparser2:/usr/local/lib/node_modules/htmlparser2'
      - '~/n8n_dir/node_modules/http-context:/usr/local/lib/node_modules/http-context'
      - '~/n8n_dir/node_modules/http-errors:/usr/local/lib/node_modules/http-errors'
      - '~/n8n_dir/node_modules/http-incoming:/usr/local/lib/node_modules/http-incoming'
      - '~/n8n_dir/node_modules/http-outgoing:/usr/local/lib/node_modules/http-outgoing'
      - '~/n8n_dir/node_modules/http-proxy-agent:/usr/local/lib/node_modules/http-proxy-agent'
      - '~/n8n_dir/node_modules/http-signature:/usr/local/lib/node_modules/http-signature'
      - '~/n8n_dir/node_modules/https-proxy-agent:/usr/local/lib/node_modules/https-proxy-agent'
      - '~/n8n_dir/node_modules/iconv-lite:/usr/local/lib/node_modules/iconv-lite'
      - '~/n8n_dir/node_modules/inherits:/usr/local/lib/node_modules/inherits'
      - '~/n8n_dir/node_modules/ip:/usr/local/lib/node_modules/ip'
      - '~/n8n_dir/node_modules/is-browser:/usr/local/lib/node_modules/is-browser'
      - '~/n8n_dir/node_modules/is-typedarray:/usr/local/lib/node_modules/is-typedarray'
      - '~/n8n_dir/node_modules/is-url:/usr/local/lib/node_modules/is-url'
      - '~/n8n_dir/node_modules/isarray:/usr/local/lib/node_modules/isarray'
      - '~/n8n_dir/node_modules/isobject:/usr/local/lib/node_modules/isobject'
      - '~/n8n_dir/node_modules/isstream:/usr/local/lib/node_modules/isstream'
      - '~/n8n_dir/node_modules/jsbn:/usr/local/lib/node_modules/jsbn'
      - '~/n8n_dir/node_modules/json-schema:/usr/local/lib/node_modules/json-schema'
      - '~/n8n_dir/node_modules/json-schema-traverse:/usr/local/lib/node_modules/json-schema-traverse'
      - '~/n8n_dir/node_modules/json-stringify-safe:/usr/local/lib/node_modules/json-stringify-safe'
      - '~/n8n_dir/node_modules/jsonfile:/usr/local/lib/node_modules/jsonfile'
      - '~/n8n_dir/node_modules/jsprim:/usr/local/lib/node_modules/jsprim'
      - '~/n8n_dir/node_modules/koa-is-json:/usr/local/lib/node_modules/koa-is-json'
      - '~/n8n_dir/node_modules/levn:/usr/local/lib/node_modules/levn'
      - '~/n8n_dir/node_modules/lodash:/usr/local/lib/node_modules/lodash'
      - '~/n8n_dir/node_modules/lodash.assignin:/usr/local/lib/node_modules/lodash.assignin'
      - '~/n8n_dir/node_modules/lodash.bind:/usr/local/lib/node_modules/lodash.bind'
      - '~/n8n_dir/node_modules/lodash.defaults:/usr/local/lib/node_modules/lodash.defaults'
      - '~/n8n_dir/node_modules/lodash.filter:/usr/local/lib/node_modules/lodash.filter'
      - '~/n8n_dir/node_modules/lodash.flatten:/usr/local/lib/node_modules/lodash.flatten'
      - '~/n8n_dir/node_modules/lodash.foreach:/usr/local/lib/node_modules/lodash.foreach'
      - '~/n8n_dir/node_modules/lodash.map:/usr/local/lib/node_modules/lodash.map'
      - '~/n8n_dir/node_modules/lodash.merge:/usr/local/lib/node_modules/lodash.merge'
      - '~/n8n_dir/node_modules/lodash.pick:/usr/local/lib/node_modules/lodash.pick'
      - '~/n8n_dir/node_modules/lodash.reduce:/usr/local/lib/node_modules/lodash.reduce'
      - '~/n8n_dir/node_modules/lodash.reject:/usr/local/lib/node_modules/lodash.reject'
      - '~/n8n_dir/node_modules/lodash.some:/usr/local/lib/node_modules/lodash.some'
      - '~/n8n_dir/node_modules/lru-cache:/usr/local/lib/node_modules/lru-cache'
      - '~/n8n_dir/node_modules/media-typer:/usr/local/lib/node_modules/media-typer'
      - '~/n8n_dir/node_modules/methods:/usr/local/lib/node_modules/methods'
      - '~/n8n_dir/node_modules/mime:/usr/local/lib/node_modules/mime'
      - '~/n8n_dir/node_modules/mime-db:/usr/local/lib/node_modules/mime-db'
      - '~/n8n_dir/node_modules/mime-types:/usr/local/lib/node_modules/mime-types'
      - '~/n8n_dir/node_modules/monotonic-timestamp:/usr/local/lib/node_modules/monotonic-timestamp'
      - '~/n8n_dir/node_modules/ms:/usr/local/lib/node_modules/ms'
      - '~/n8n_dir/node_modules/negotiator:/usr/local/lib/node_modules/negotiator'
      - '~/n8n_dir/node_modules/netmask:/usr/local/lib/node_modules/netmask'
      - '~/n8n_dir/node_modules/nth-check:/usr/local/lib/node_modules/nth-check'
      - '~/n8n_dir/node_modules/oauth-sign:/usr/local/lib/node_modules/oauth-sign'
      - '~/n8n_dir/node_modules/object-assign:/usr/local/lib/node_modules/object-assign'
      - '~/n8n_dir/node_modules/on-finished:/usr/local/lib/node_modules/on-finished'
      - '~/n8n_dir/node_modules/optionator:/usr/local/lib/node_modules/optionator'
      - '~/n8n_dir/node_modules/pac-proxy-agent:/usr/local/lib/node_modules/pac-proxy-agent'
      - '~/n8n_dir/node_modules/pac-resolver:/usr/local/lib/node_modules/pac-resolver'
      - '~/n8n_dir/node_modules/parseurl:/usr/local/lib/node_modules/parseurl'
      - '~/n8n_dir/node_modules/performance-now:/usr/local/lib/node_modules/performance-now'
      - '~/n8n_dir/node_modules/prelude-ls:/usr/local/lib/node_modules/prelude-ls'
      - '~/n8n_dir/node_modules/process-nextick-args:/usr/local/lib/node_modules/process-nextick-args'
      - '~/n8n_dir/node_modules/promise-polyfill:/usr/local/lib/node_modules/promise-polyfill'
      - '~/n8n_dir/node_modules/proxy-agent:/usr/local/lib/node_modules/proxy-agent'
      - '~/n8n_dir/node_modules/proxy-from-env:/usr/local/lib/node_modules/proxy-from-env'
      - '~/n8n_dir/node_modules/psl:/usr/local/lib/node_modules/psl'
      - '~/n8n_dir/node_modules/punycode:/usr/local/lib/node_modules/punycode'
      - '~/n8n_dir/node_modules/qs:/usr/local/lib/node_modules/qs'
      - '~/n8n_dir/node_modules/querystring:/usr/local/lib/node_modules/querystring'
      - '~/n8n_dir/node_modules/raw-body:/usr/local/lib/node_modules/raw-body'
      - '~/n8n_dir/node_modules/readable-stream:/usr/local/lib/node_modules/readable-stream'
      - '~/n8n_dir/node_modules/request:/usr/local/lib/node_modules/request'
      - '~/n8n_dir/node_modules/request-promise:/usr/local/lib/node_modules/request-promise'
      - '~/n8n_dir/node_modules/request-promise-core:/usr/local/lib/node_modules/request-promise-core'
      - '~/n8n_dir/node_modules/request-x-ray:/usr/local/lib/node_modules/request-x-ray'
      - '~/n8n_dir/node_modules/safe-buffer:/usr/local/lib/node_modules/safe-buffer'
      - '~/n8n_dir/node_modules/safer-buffer:/usr/local/lib/node_modules/safer-buffer'
      - '~/n8n_dir/node_modules/selectn:/usr/local/lib/node_modules/selectn'
      - '~/n8n_dir/node_modules/setprototypeof:/usr/local/lib/node_modules/setprototypeof'
      - '~/n8n_dir/node_modules/sliced:/usr/local/lib/node_modules/sliced'
      - '~/n8n_dir/node_modules/smart-buffer:/usr/local/lib/node_modules/smart-buffer'
      - '~/n8n_dir/node_modules/socks:/usr/local/lib/node_modules/socks'
      - '~/n8n_dir/node_modules/socks-proxy-agent:/usr/local/lib/node_modules/socks-proxy-agent'
      - '~/n8n_dir/node_modules/source-map:/usr/local/lib/node_modules/source-map'
      - '~/n8n_dir/node_modules/sshpk:/usr/local/lib/node_modules/sshpk'
      - '~/n8n_dir/node_modules/statuses:/usr/local/lib/node_modules/statuses'
      - '~/n8n_dir/node_modules/stealthy-require:/usr/local/lib/node_modules/stealthy-require'
      - '~/n8n_dir/node_modules/stream-to-string:/usr/local/lib/node_modules/stream-to-string'
      - '~/n8n_dir/node_modules/string-format:/usr/local/lib/node_modules/string-format'
      - '~/n8n_dir/node_modules/string_decoder:/usr/local/lib/node_modules/string_decoder'
      - '~/n8n_dir/node_modules/superagent:/usr/local/lib/node_modules/superagent'
      - '~/n8n_dir/node_modules/superagent-proxy:/usr/local/lib/node_modules/superagent-proxy'
      - '~/n8n_dir/node_modules/supports-color:/usr/local/lib/node_modules/supports-color'
```
'~/n8n_dir/node_modules/toidentifier:/usr/local/lib/node_modules/toidentifier'
'~/n8n_dir/node_modules/torrent-search-api:/usr/local/lib/node_modules/torrent-search-api'
'~/n8n_dir/node_modules/tough-cookie:/usr/local/lib/node_modules/tough-cookie'
'~/n8n_dir/node_modules/tslib:/usr/local/lib/node_modules/tslib'
'~/n8n_dir/node_modules/tunnel-agent:/usr/local/lib/node_modules/tunnel-agent'
'~/n8n_dir/node_modules/tweetnacl:/usr/local/lib/node_modules/tweetnacl'
'~/n8n_dir/node_modules/type-check:/usr/local/lib/node_modules/type-check'
'~/n8n_dir/node_modules/type-is:/usr/local/lib/node_modules/type-is'
'~/n8n_dir/node_modules/universalify:/usr/local/lib/node_modules/universalify'
'~/n8n_dir/node_modules/unpipe:/usr/local/lib/node_modules/unpipe'
'~/n8n_dir/node_modules/uri-js:/usr/local/lib/node_modules/uri-js'
'~/n8n_dir/node_modules/util:/usr/local/lib/node_modules/util'
'~/n8n_dir/node_modules/util-deprecate:/usr/local/lib/node_modules/util-deprecate'
'~/n8n_dir/node_modules/uuid:/usr/local/lib/node_modules/uuid'
'~/n8n_dir/node_modules/vary:/usr/local/lib/node_modules/vary'
'~/n8n_dir/node_modules/verror:/usr/local/lib/node_modules/verror'
'~/n8n_dir/node_modules/word-wrap:/usr/local/lib/node_modules/word-wrap'
'~/n8n_dir/node_modules/wrap-fn:/usr/local/lib/node_modules/wrap-fn'
'~/n8n_dir/node_modules/x-ray:/usr/local/lib/node_modules/x-ray'
'~/n8n_dir/node_modules/x-ray-crawler:/usr/local/lib/node_modules/x-ray-crawler'
'~/n8n_dir/node_modules/x-ray-parse:/usr/local/lib/node_modules/x-ray-parse'
'~/n8n_dir/node_modules/x-ray-scraper:/usr/local/lib/node_modules/x-ray-scraper'
'~/n8n_dir/node_modules/xregexp:/usr/local/lib/node_modules/xregexp'
'~/n8n_dir/node_modules/yallist:/usr/local/lib/node_modules/yallist'
'~/n8n_dir/node_modules/yieldly:/usr/local/lib/node_modules/yieldly'
image: 'n8nio/n8n:latest-rpi'
environment:
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=username
N8N_BASIC_AUTH_PASSWORD=your_secret_n8n_password
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=120
EXECUTIONS_TIMEOUT=300
EXECUTIONS_TIMEOUT_MAX=500
GENERIC_TIMEZONE=Europe/Berlin
NODE_FUNCTION_ALLOW_EXTERNAL=torrent-search-api
Once configured this way, run n8n and create a new workflow by copying the one proposed.
Configure workflow
Transmission
To send commands to Transmission you must authenticate with Basic Auth. To do so, open the Start download node and edit the Credentials. Select those same credentials in the Start download new token node as well. In this automation we call Transmission twice because of a security mechanism in Transmission that rejects one-shot commands; it is its protection against cross-site request forgery: https://en.wikipedia.org/wiki/Cross-site_request_forgery The first request returns an X-Transmission-Session-Id header, which we use to authenticate the second request.
Telegram
To make the workflow work as expected, you must create a Telegram bot and configure the nodes (Torrent not found and Telegram1) to send your message once the workflow completes. Here's an easy guide to follow: https://docs.n8n.io/nodes/n8n-nodes-base.telegram/ In those nodes you should also configure the Chat ID; you may use your Telegram username, or use a bot such as @userinfobot, which replies with your ID.
Ok Google automation
Since there is currently no n8n mobile client that can trigger automations through Google Assistant, I use an IFTTT applet to call the webhook. Connect your IFTTT account to Google Assistant and pick the trigger "Say a phrase with a text ingredient", as in the picture below, then configure it this way: scarica $ -> download $, or metti in download $ -> put in download $, or any other phrase you prefer. Finally, configure the applet's action to call your n8n webhook URL.
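The Transmission session-ID handshake described above can be sketched in JavaScript. This is a minimal illustration, not part of the n8n nodes themselves: it assumes the first request came back with HTTP 409 and that the headers are exposed with lower-cased names, as n8n's HTTP Request node does.

```javascript
// Transmission answers the first RPC call with HTTP 409 and an
// X-Transmission-Session-Id header; the retry must echo that header back.
// This helper builds the headers for the second request.
function buildRetryHeaders(firstResponseHeaders, baseHeaders = {}) {
  const sessionId = firstResponseHeaders['x-transmission-session-id'];
  if (!sessionId) {
    throw new Error('Expected a 409 response carrying X-Transmission-Session-Id');
  }
  return { ...baseHeaders, 'X-Transmission-Session-Id': sessionId };
}

// Example: headers taken from the first (rejected) request.
const retry = buildRetryHeaders(
  { 'x-transmission-session-id': 'AbC123' },
  { 'Content-Type': 'application/json' }
);
// `retry` now carries both headers and can be sent with the torrent-add payload.
```

In the workflow this is what the second Start download node effectively does when it reuses the session ID returned by the first call.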
Conclusion
In conclusion, we provide a fully working automation that integrates an external Node.js library into n8n and exposes a simple trigger for a complex operation.
Security concerns
Giving anyone the ability to trigger a download could lead to unwanted, potentially malicious torrents being fetched, so you may want to authenticate the webhook request by passing an extra field in the body containing a token shared between the two endpoints. Moreover, the torrent-search-api library and its dependencies have known vulnerabilities that you may want to keep off your own media center; these will hopefully be patched in a future release of the library. This is just an interesting proof of concept.
Quality of the download
You may want to introduce another block between the webhook trigger and the torrent search to validate the movie title detected by Google Assistant; it sometimes misinterprets the phrase, and you could end up downloading potentially copyrighted material. Please use this automation only for free and open-source movies and music.
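The shared-token check suggested above could be sketched as follows, for example in a Function node placed right after the Webhook node. The field name `auth` and the token value are assumptions for illustration; in production you would also prefer a timing-safe comparison (e.g. `crypto.timingSafeEqual`) over plain equality.

```javascript
// Reject webhook calls that do not carry the shared token in the body.
// `auth` is a hypothetical field name agreed between IFTTT and n8n.
function isAuthorized(body, expectedToken) {
  return typeof body.auth === 'string' && body.auth === expectedToken;
}

const ok = isAuthorized({ movie: 'Big Buck Bunny', auth: 's3cret' }, 's3cret');
const bad = isAuthorized({ movie: 'Big Buck Bunny' }, 's3cret');
// `ok` is true; `bad` is false, so the request would be dropped.
```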
by Sarfaraz Muhammad Sajib
📬 Scheduled RSS News Digest Emails with Gmail
Automatically send beautifully formatted news digests from any RSS feed (e.g., Prothom Alo) directly to your Gmail inbox on a schedule using this n8n workflow. Ideal for news curators, bloggers, media professionals, or anyone who wants a daily or weekly news summary by email.
✅ Prerequisites
Before using this workflow, ensure you have the following:
An active Gmail account with OAuth2 credentials set up in n8n.
A public RSS feed URL (e.g., https://prothomalo.com/feed).
An instance of n8n running (self-hosted or via n8n cloud).
Basic familiarity with how n8n workflows function.
⚙️ Setup Instructions
1. Schedule Trigger
Triggers the workflow at your chosen interval (e.g., daily at 8 AM). You can configure this under the interval section of the Schedule Trigger node.
2. HTTP Request – Get RSS from Prothom Alo
Fetches the latest RSS feed from your preferred news source. Set the URL field to your desired RSS feed, such as https://prothomalo.com/feed.
3. Convert XML to JSON
Uses the XML node to parse the fetched XML into JSON for further processing.
4. Code Node – Generate HTML News Preview
Transforms the parsed JSON into a styled HTML template. Includes dynamic data such as the article title, summary, author, category, and a "Read More" button. The date is formatted with the bn-BD locale for regional display.
5. Gmail Node – Send a message
Sends the generated HTML as an email. Requires Gmail OAuth2 credentials to be configured (you can set these under "Credentials"). Set the recipient address and use the generated HTML in the message field.
🛠 Customization Options
**RSS Feed Source**: Replace https://prothomalo.com/feed with any RSS/Atom feed of your choice.
**Email Design**: Modify the embedded HTML/CSS in the Gmail node and code block to reflect your brand or theme.
**Language & Locale**: Adjust the date formatting to your preferred locale (e.g., en-US, bn-BD).
**Email Frequency**: Set your schedule to send digests hourly, daily, or weekly.
🧹 Flow Overview
Schedule Trigger → HTTP Request → XML → Code (HTML Builder) → Gmail Send
💡 Use Cases
**Daily Newsletters**
**Team Updates from Blogs**
**Industry Trends Monitoring**
**Client Briefings with Custom Feeds**
This automated workflow ensures timely delivery of curated news in a mobile-responsive, branded HTML format. No manual copy-pasting, just scheduled insights, beautifully delivered.
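The Code node's HTML builder step can be sketched as below. This is a minimal illustration, not the template shipped with the workflow: the item field names (`title`, `description`, `link`, `pubDate`) follow common RSS-to-JSON output and may need adjusting to match what the XML node actually emits for your feed.

```javascript
// Build a simple inline-styled HTML digest from parsed RSS items.
// Dates are rendered with the bn-BD locale, as the workflow describes.
function buildDigestHtml(items, locale = 'bn-BD') {
  const cards = items.map((item) => {
    const date = new Date(item.pubDate).toLocaleDateString(locale, {
      year: 'numeric', month: 'long', day: 'numeric',
    });
    return [
      '<div style="border:1px solid #e0e0e0;border-radius:8px;padding:16px;margin:12px 0;">',
      `<h2 style="margin:0 0 8px;">${item.title}</h2>`,
      `<p style="margin:0 0 8px;">${item.description}</p>`,
      `<small>${date}</small><br/>`,
      `<a href="${item.link}" style="display:inline-block;margin-top:8px;padding:8px 14px;background:#0a7d4f;color:#ffffff;text-decoration:none;border-radius:4px;">Read More</a>`,
      '</div>',
    ].join('\n');
  });
  return `<html><body style="font-family:Arial,sans-serif;">${cards.join('\n')}</body></html>`;
}

// Example item shaped like a typical parsed RSS entry.
const html = buildDigestHtml([
  {
    title: 'Sample headline',
    description: 'A short summary of the article.',
    link: 'https://example.com/article',
    pubDate: 'Wed, 01 May 2024 08:00:00 GMT',
  },
]);
```

The returned string can then be placed directly into the Gmail node's message field.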