by ist00dent
This n8n template enables you to instantly retrieve detailed geolocation information for any given IP address by simply sending a webhook request. Leverage the power of IP-API.com to gain insights into user locations, personalize experiences, or enhance security protocols within your automated workflows.

🔧 How it works

- **Receive IP Webhook**: This node acts as the entry point, listening for incoming POST requests. It expects a JSON body containing an `ip` property with the IP address you wish to look up.
- **Get IP Geolocation**: This node makes an HTTP GET request to the IP-API.com service, passing the IP address from your webhook. The API responds with a comprehensive JSON object detailing the IP's location (country, city, region), ISP, organization, and more.
- **Respond with Geolocation Data**: This node sends the full geolocation data received from IP-API.com back to the service that initiated the webhook.

👤 Who is it for?

This workflow is ideal for:

- **Marketing & Sales Teams**: Personalize website content, offers, or ads based on a visitor's geographic location. Tailor email campaigns by region.
- **Customer Support**: Quickly identify a customer's location to provide more localized or relevant assistance.
- **Security & Fraud Detection**: Analyze incoming connection IPs to identify suspicious activity, block known malicious regions, or flag potential fraud.
- **Analytics & Reporting**: Augment your analytics data with geographical insights about your users or traffic.
- **Developers & Integrators**: Easily add IP lookup functionality to custom applications, internal tools, or monitoring systems.
- **Content Delivery Networks (CDNs)**: Route users to the closest servers for faster content delivery (though advanced CDNs usually handle this automatically).

📑 Data Structure

When you trigger the webhook, send a POST request with a JSON body structured as follows:

```json
{
  "ip": "8.8.8.8" // Replace with the IP address you want to look up
}
```

The workflow will return a JSON response similar to this (data will vary based on IP):

```json
{
  "status": "success",
  "country": "United States",
  "countryCode": "US",
  "region": "VA",
  "regionName": "Virginia",
  "city": "Ashburn",
  "zip": "20149",
  "lat": 39.0437,
  "lon": -77.4875,
  "timezone": "America/New_York",
  "isp": "Google LLC",
  "org": "Google Public DNS",
  "as": "AS15169 Google LLC",
  "query": "8.8.8.8"
}
```

⚙️ Setup Instructions

1. **Import Workflow**: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
2. **Configure Webhook Path**: Double-click the Receive IP Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., `/ip-lookup`).
3. **Activate Workflow**: Save and activate the workflow.

📝 Tips

This workflow, while simple, is a powerful building block. Here's how you can make it even more useful:

- **Conditional Logic**: Add IF nodes after "Get IP Geolocation" to create conditional branches. For example: if `countryCode` is 'CN' or 'RU', send an alert to your security team; if `city` is 'New York', route the request to a specific sales representative.
- **Data Enrichment**: Integrate this workflow into larger automations. For instance: when a new sign-up occurs, pass their IP address to this workflow, then save the returned geolocation data (country, city, ISP) alongside their user profile in your CRM or database. For e-commerce, use the location data to pre-fill shipping fields or suggest local currency/language.
- **Logging & Analytics**: Store the lookup results in a spreadsheet (Google Sheets), database (PostgreSQL, Airtable), or logging service. This can help you track where your users are coming from over time.
- **Rate Limiting**: IP-API.com has rate limits for its free tier. If you anticipate high usage, consider adding a Delay node or implementing a caching mechanism with a Cache node to avoid hitting limits. For heavy use, you might need to upgrade to a paid plan.
- **Dynamic Response**: Instead of returning the full JSON, you could use a Function node to extract only specific pieces of information (e.g., just the country and city) and return a more concise response.
- **Input Validation**: For robust production use, add a Function node after the webhook to validate that the incoming `ip` value is indeed a valid IP address. If it's not, you can return an error message to the caller. A minimal sketch is shown below.
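For the input-validation tip above, here is a minimal sketch of what such a Function/Code node could look like. It assumes the webhook payload shape shown in the Data Structure section; the exact field access (`body.ip` vs. `ip`) depends on your Webhook node settings, and the error shape returned is an arbitrary choice:

```javascript
// Minimal IPv4 validation sketch for an n8n Code node placed right after the webhook.
// Depending on the Webhook node's settings, the payload may sit under `body`.
const input = $input.first().json;
const ip = (input.body ?? input).ip;

// Each octet must be 0-255.
const ipv4 = /^(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}$/;

if (typeof ip !== 'string' || !ipv4.test(ip)) {
  // Error shape is an assumption — align it with your Respond to Webhook node.
  return [{ json: { status: 'fail', message: `Invalid IPv4 address: ${String(ip)}` } }];
}

return [{ json: { ip } }];
```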
by Airtop
Automating Company ICP Scoring via LinkedIn

Use Case

This automation scores companies based on their LinkedIn profile using custom Ideal Customer Profile (ICP) criteria. It's ideal for qualifying B2B leads and prioritizing outreach based on fit.

What This Automation Does

Inputs required:

- **Company LinkedIn URL**: Public LinkedIn profile of the company.
- **Airtop Profile (connected to LinkedIn)**: Airtop Profile authenticated to access and extract profile data.

The automation analyzes the LinkedIn page and calculates a score based on:

Scoring Criteria

| Category | Classification | Points |
|--------------------|---------------------------|------------|
| AI Focus | Low | 5 |
| | Medium | 10 |
| | High | 25 |
| Technical Level | Basic | 5 |
| | Intermediate | 15 |
| | Advanced | 25 |
| | Expert | 35 |
| Employee Count | 0–9 | 5 |
| | 10–150 | 25 |
| | 150+ | 30 |
| Agency Status | Not Automation Agency | 0 |
| | Automation Agency | 20 |
| Geography | Outside US/Europe | 0 |
| | US/Europe Based | 10 |

The result includes:

- Total ICP score
- Detailed justifications for each score component

How It Works

1. Opens the company's LinkedIn page using Airtop.
2. Analyzes metadata including employee count, headquarters, services, and keywords.
3. Applies the scoring rubric and returns structured JSON with scores and reasons (see the sketch after this section).
4. Optionally flattens the result for storage or CRM integration.

Setup Requirements

- Airtop API Key
- LinkedIn-authenticated Airtop Profile

Next Steps

- **Combine with Lead Lists**: Score companies from outreach lists.
- **Push to CRM**: Add scores to HubSpot or Salesforce records.
- **Adjust Scoring Weights**: Modify rubric to reflect your ICP strategy.

Read more about company ICP scoring automation with Airtop and n8n
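To make the rubric concrete, here is a sketch of how the table above could be applied in an n8n Code node. The input field names (`aiFocus`, `technicalLevel`, and so on) are assumptions — map them to whatever your Airtop extraction actually returns:

```javascript
// Apply the ICP scoring rubric above to each extracted company.
// Field names below are illustrative, not the template's actual output keys.
const points = {
  aiFocus:         { 'Low': 5, 'Medium': 10, 'High': 25 },
  technicalLevel:  { 'Basic': 5, 'Intermediate': 15, 'Advanced': 25, 'Expert': 35 },
  employeeBracket: { '0-9': 5, '10-150': 25, '150+': 30 },
  agencyStatus:    { 'Not Automation Agency': 0, 'Automation Agency': 20 },
  geography:       { 'Outside US/Europe': 0, 'US/Europe Based': 10 },
};

return $input.all().map(item => {
  const company = item.json;
  const icpScore = Object.entries(points).reduce(
    (sum, [field, table]) => sum + (table[company[field]] ?? 0),
    0,
  );
  return { json: { ...company, icpScore } };
});
```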
by Jah coozi
AI Social Media Content Generator & Scheduler

Transform your social media strategy with AI-powered content generation that creates platform-specific posts in seconds!

🚀 What It Does

This workflow uses AI to generate optimized content for multiple social media platforms from a single topic input. Perfect for marketers, content creators, and businesses looking to maintain a consistent social media presence.

✨ Key Features

- **Multi-Platform Support**: LinkedIn, Twitter/X, Instagram, Facebook, TikTok
- **AI-Powered Generation**: Uses GPT-4 for creative, engaging content
- **Platform Optimization**: Respects character limits and best practices (see the sketch after this section)
- **Hashtag Generation**: Platform-specific hashtag strategies
- **Posting Time Suggestions**: Optimal times for each platform
- **Tone Customization**: Professional, casual, friendly, or custom
- **Multi-Language Support**: Generate content in any language
- **Engagement Predictions**: Estimate reach and engagement
- **Daily Automation**: Schedule automatic content generation
- **Bulk Processing**: Generate content for multiple topics at once

📊 Use Cases

- **Marketing Teams**: Streamline content creation across channels
- **Small Businesses**: Maintain a consistent social presence
- **Content Agencies**: Scale content production efficiently
- **Personal Brands**: Build thought leadership
- **E-commerce**: Product launches and promotions

🛠️ Setup Instructions

1. **Add OpenAI Credentials**: Get an API key from OpenAI and add it to n8n credentials.
2. **Configure Webhook (Optional)**: Set a custom path if needed; enable for external integrations.
3. **Customize Settings**: Adjust tone and style, set platform preferences, configure the posting schedule.
4. **Test Generation**: Use the example prompts and verify output quality.

💡 Example Inputs

- "New product launch - eco-friendly water bottle"
- "Company milestone - 10 years in business"
- "Industry insights - Future of AI in healthcare"
- "Team spotlight - Meet our new developer"
- "Seasonal campaign - Summer sale 50% off"

📈 Benefits

- **10x Faster**: Create content in seconds vs hours
- **Consistency**: Maintain brand voice across platforms
- **Engagement**: Platform-optimized for maximum reach
- **Scalability**: Generate unlimited content
- **Cost-Effective**: Reduce content creation costs by 80%

🔧 Customization Options

- Custom brand voice training
- Industry-specific content rules
- Competitor analysis integration
- A/B testing capabilities
- Analytics webhook integration
- Auto-posting to platforms
- Image generation add-on
- Translation services

🎯 Pro Tips

- Train the AI with your best-performing posts
- Use platform analytics to refine strategies
- Test different tones for audience engagement
- Schedule content during peak hours
- Monitor and iterate based on performance

Start creating engaging social media content today!

Categories: Marketing & Growth, Content Creation, Social Media, AI & Automation, Productivity
Difficulty: Beginner
Required Services: OpenAI API (or compatible LLM), n8n instance. Optional: Social media APIs for auto-posting
by Teddy
Retrieve 20 Latest TechCrunch Articles

Who is this for?

This workflow is designed for developers, content creators, and data analysts who need to scrape recent articles from TechCrunch. It's perfect for anyone looking to aggregate news articles or create custom feeds for analysis, reporting, or integration into other systems.

What problem is this workflow solving?

This workflow automates the process of scraping recent articles from TechCrunch. Manually collecting article data can be time-consuming and inefficient, but with this workflow you can quickly gather up-to-date news articles with relevant metadata, saving time and effort.

What this workflow does

This workflow retrieves the latest 20 news articles from TechCrunch's "Recent" page. It extracts the article URLs, metadata (such as titles and publication dates), and main content for each article, allowing you to access the information you need without any manual effort.

Setup

1. Clone or download the workflow template.
2. Ensure you have a working n8n environment.
3. Configure the HTTP Request nodes with your desired parameters to connect to the TechCrunch website.
4. (Optional) Customize the workflow to target specific sections or topics of interest.
5. Run the workflow to retrieve the latest 20 articles.

How to customize this workflow to your needs

- Modify the HTTP request to pull articles from different pages or sections of TechCrunch.
- Adjust the number of articles to retrieve by changing the selection criteria.
- Add additional processing steps to further filter or analyze the article data.

Workflow Steps

1. Send an HTTP request to the TechCrunch "Recent" page.
2. Parse the posts box that holds the list of articles.
3. Parse all posts to extract the individual articles.
4. Split out the posts so each article becomes its own item (see the sketch below).
5. Extract the URL and metadata from each article.
6. Send an HTTP request for each article using its URL.
7. Locate and parse the main content of each article.

Note: Be sure to update the HTTP Request nodes with any necessary headers or authentication to work with TechCrunch's website.
by Dvir Sharon
Goodreads Quote Extraction with Bright Data and Gemini

This workflow demonstrates how to fetch data from Goodreads web pages using Bright Data and then extract specific information (quotes) from that data using a Google Gemini AI model.

How it works

1. The workflow is triggered manually.
2. It sends a request to a Bright Data collector to scrape data from a predefined list of Goodreads URLs.
3. The collected text data from Goodreads is then passed to a Google Gemini AI node.
4. The AI node processes the text and extracts quotes based on a specified JSON schema output format (an illustrative schema is shown below).

Set up steps

Setting up this workflow should take only a few minutes.

1. You will need a Bright Data API key to configure the 'Header Auth' credential.
2. You will need a Google Gemini API key to configure the 'Google Gemini(PaLM) Api account' credential.
3. Ensure the correct Bright Data collector ID is set in the 'Perform Bright Data Web Request' node URL.
4. Make sure the full list of target Goodreads URLs is correctly added to the 'Perform Bright Data Web Request' node's body.
5. Link your created credentials to the respective nodes ('Perform Bright Data Web Request' and 'Quotes Extractor').

Detailed descriptions of specific node configurations are kept in sticky notes inside the workflow canvas.
by Miquel Colomer
This n8n workflow template checks for new major releases (tagged with .0) of the n8n project using its official GitHub releases feed. It runs multiple times a day and sends notifications via email and Telegram if a new release is found.

> ⚠️ Note: You must *activate the workflow* to start receiving release notifications.

🚀 What It Does

- Monitors the n8n GitHub releases feed
- Detects major versions (e.g., 1.0.0, 2.0.0)
- Sends alert messages via Telegram and email (SES) when a release is published

⏰ Scheduling Details

The Cron node checks for new releases three times per day: 10:00, 14:00, and 18:00 server time.

🛠️ Step-by-Step Setup

1. **Configure Telegram Bot**: Connect your Telegram bot and specify the chat ID where you want to receive notifications.
2. **Set up AWS SES Credentials**: Use a verified sender email and set up AWS SES credentials in your n8n instance.
3. **Activate the Workflow**: Enable the workflow in your instance to start receiving notifications.
4. **Customize Notification Messages (Optional)**: You can modify the email subject, Telegram format, or filter logic.

🧠 How It Works: Workflow Overview

1. **Cron Trigger**: Runs the workflow at 10:00, 14:00, and 18:00 daily.
2. **Read RSS Feed**: Pulls data from https://github.com/n8n-io/n8n/releases.atom.
3. **Filter by Current Day**: Filters the feed to releases published in the last 4 hours whose titles start with n8n@ and end with .0 (see the sketch below).
4. **Condition Check**: Uses a regex to check whether the filter result contains any release data.
5. **Notifications**: If a new major release is found, sends a Telegram message to a specified chat and an email via AWS SES with the release info.

📨 Final Output

You'll receive a Telegram message and email when a new major n8n version is released.

🔐 Credentials Used

- **Telegram API** – For sending chat notifications
- **AWS SES** – To send email alerts

✨ Customization Tips

- **Change Notification Channels**: Add Slack, Discord, or other preferred channels.
- **Adjust Cron Schedule**: Modify the Cron node to fit your check frequency.
- **Modify Filters**: Detect patch or beta versions by changing the .0 condition.
- **Send Release Notes**: Extend the feed parsing to include release content.

❓ Questions?

Template created by Miquel Colomer and n8nhackers.com. Need help customizing or deploying? Contact us for consulting and support.
by Airtop
Automating LinkedIn Company Data Extraction

Use Case

This automation extracts detailed company insights from a LinkedIn company page, including identity, scale, classification, and funding data. Ideal for investors, sales teams, and market researchers.

What This Automation Does

This automation accepts the following inputs:

- **Company's LinkedIn URL**: The public LinkedIn page URL of the company.
- **Airtop Profile (connected to LinkedIn)**: Your Airtop Profile authenticated on LinkedIn.

It then extracts and returns structured data with:

1. Company Identity
   - Full name
   - Tagline
   - Headquarters location (city, state, country)
   - About section
   - Website

2. Company Scale
   - Current employee count
   - Employee size bracket: [0-9], [10-150], [150+]

3. Business Classification
   - Is the company an automation agency? (true/false)
   - AI implementation level: Low / Medium / High
   - Technical sophistication: Basic / Intermediate / Advanced / Expert

4. Funding Profile
   - Most recent funding round
   - Total amount raised
   - Key investors
   - Last funding update date

How It Works

1. Creates an Airtop session using the provided profile.
2. Navigates to the company LinkedIn page.
3. Executes an Airtop query to extract data.
4. Outputs the result in a standardized JSON schema (an illustrative example is shown below).

Setup Requirements

- Airtop API Key
- A LinkedIn-authenticated Airtop Profile

Next Steps

- **Feed into CRM**: Enrich your accounts with detailed LinkedIn data.
- **Prioritize Leads**: Use classification and funding data to prioritize outreach.
- **Combine with People Data**: Integrate with individual-level enrichment for full context.

Read more about how to extract company data from Linkedin with Airtop and n8n
by Rudi Afandi
Description

Turn your Telegram bot into a powerful OCR (Optical Character Recognition) tool. This workflow allows you to send any image (like a screenshot, a photo of a document, or a picture of a sign) to your bot, and it will instantly extract and send back the text from that image.

Powered by Google's advanced Gemini AI, this automation is perfect for quickly digitizing notes, saving important snippets, or avoiding manual typing.

How it works

This workflow performs a few high-level steps:

1. It triggers when a new image is sent to your Telegram bot.
2. It sends the image to the Google Gemini Vision API to be analyzed.
3. It extracts the text found in the image.
4. It sends the extracted text back to you as a message in Telegram.

Set up steps

Estimated set up time: less than 5 minutes. The setup is straightforward — you only need to configure two credentials:

- **Telegram Bot Credentials**: To connect your bot.
- **Google Gemini API Credentials**: To use the OCR feature. You can get a free API key from Google AI Studio. (A sketch of the underlying API call is shown below.)
by Danger
Ok google download "movie name"

I developed this automation to improve my quality of life when handling torrents on my media-center.

Goal

Automate the search for a movie based on its name and trigger a download using your transmission-daemon.

Setup

Prerequisites

- Transmission daemon up and running, plus its authentication method
- n8n self-hosted, with the ability to add npm packages (easiest via docker-compose.yaml)
- Telegram bot credentials [optional]

Configuration

Create a folder where your docker-compose.yaml belongs, n8n_dir, and install the node package:

```bash
cd ~/n8n_dir
npm i torrent-search-api
```

Then configure your docker-compose.yaml file as follows. You must include all the dependencies of torrent-search-api; this lets you run the new torrent-search node presented in this workflow.

```yaml
version: '3.3'
services:
  n8n:
    container_name: n8n
    ports:
      - '5678:5678'
    restart: always
    volumes:
      - '~/n8n_dir/.n8n:/home/node/.n8n'
      - '~/n8n_dir/node_modules/@tootallnate:/usr/local/lib/node_modules/@tootallnate'
      - '~/n8n_dir/node_modules/accepts:/usr/local/lib/node_modules/accepts'
      - '~/n8n_dir/node_modules/agent-base:/usr/local/lib/node_modules/agent-base'
      - '~/n8n_dir/node_modules/ajv:/usr/local/lib/node_modules/ajv'
      - '~/n8n_dir/node_modules/ansi-styles:/usr/local/lib/node_modules/ansi-styles'
      - '~/n8n_dir/node_modules/asn1:/usr/local/lib/node_modules/asn1'
      - '~/n8n_dir/node_modules/assert:/usr/local/lib/node_modules/assert'
      - '~/n8n_dir/node_modules/assert-plus:/usr/local/lib/node_modules/assert-plus'
      - '~/n8n_dir/node_modules/ast-types:/usr/local/lib/node_modules/ast-types'
      - '~/n8n_dir/node_modules/asynckit:/usr/local/lib/node_modules/asynckit'
      - '~/n8n_dir/node_modules/aws-sign2:/usr/local/lib/node_modules/aws-sign2'
      - '~/n8n_dir/node_modules/aws4:/usr/local/lib/node_modules/aws4'
      - '~/n8n_dir/node_modules/base64-js:/usr/local/lib/node_modules/base64-js'
      - '~/n8n_dir/node_modules/batch:/usr/local/lib/node_modules/batch'
      - '~/n8n_dir/node_modules/bcrypt-pbkdf:/usr/local/lib/node_modules/bcrypt-pbkdf'
      - '~/n8n_dir/node_modules/bluebird:/usr/local/lib/node_modules/bluebird'
      - '~/n8n_dir/node_modules/boolbase:/usr/local/lib/node_modules/boolbase'
      - '~/n8n_dir/node_modules/brotli:/usr/local/lib/node_modules/brotli'
      - '~/n8n_dir/node_modules/bytes:/usr/local/lib/node_modules/bytes'
      - '~/n8n_dir/node_modules/caseless:/usr/local/lib/node_modules/caseless'
      - '~/n8n_dir/node_modules/chalk:/usr/local/lib/node_modules/chalk'
      - '~/n8n_dir/node_modules/cheerio:/usr/local/lib/node_modules/cheerio'
      - '~/n8n_dir/node_modules/cloudscraper:/usr/local/lib/node_modules/cloudscraper'
      - '~/n8n_dir/node_modules/co:/usr/local/lib/node_modules/co'
      - '~/n8n_dir/node_modules/color-convert:/usr/local/lib/node_modules/color-convert'
      - '~/n8n_dir/node_modules/color-name:/usr/local/lib/node_modules/color-name'
      - '~/n8n_dir/node_modules/combined-stream:/usr/local/lib/node_modules/combined-stream'
      - '~/n8n_dir/node_modules/component-emitter:/usr/local/lib/node_modules/component-emitter'
      - '~/n8n_dir/node_modules/content-disposition:/usr/local/lib/node_modules/content-disposition'
      - '~/n8n_dir/node_modules/content-type:/usr/local/lib/node_modules/content-type'
      - '~/n8n_dir/node_modules/cookiejar:/usr/local/lib/node_modules/cookiejar'
      - '~/n8n_dir/node_modules/core-util-is:/usr/local/lib/node_modules/core-util-is'
      - '~/n8n_dir/node_modules/css-select:/usr/local/lib/node_modules/css-select'
      - '~/n8n_dir/node_modules/css-what:/usr/local/lib/node_modules/css-what'
      - '~/n8n_dir/node_modules/dashdash:/usr/local/lib/node_modules/dashdash'
      - '~/n8n_dir/node_modules/data-uri-to-buffer:/usr/local/lib/node_modules/data-uri-to-buffer'
      - '~/n8n_dir/node_modules/debug:/usr/local/lib/node_modules/debug'
      - '~/n8n_dir/node_modules/deep-is:/usr/local/lib/node_modules/deep-is'
      - '~/n8n_dir/node_modules/degenerator:/usr/local/lib/node_modules/degenerator'
      - '~/n8n_dir/node_modules/delayed-stream:/usr/local/lib/node_modules/delayed-stream'
      - '~/n8n_dir/node_modules/delegates:/usr/local/lib/node_modules/delegates'
      - '~/n8n_dir/node_modules/depd:/usr/local/lib/node_modules/depd'
      - '~/n8n_dir/node_modules/destroy:/usr/local/lib/node_modules/destroy'
      - '~/n8n_dir/node_modules/dom-serializer:/usr/local/lib/node_modules/dom-serializer'
      - '~/n8n_dir/node_modules/domelementtype:/usr/local/lib/node_modules/domelementtype'
      - '~/n8n_dir/node_modules/domhandler:/usr/local/lib/node_modules/domhandler'
      - '~/n8n_dir/node_modules/domutils:/usr/local/lib/node_modules/domutils'
      - '~/n8n_dir/node_modules/ecc-jsbn:/usr/local/lib/node_modules/ecc-jsbn'
      - '~/n8n_dir/node_modules/ee-first:/usr/local/lib/node_modules/ee-first'
      - '~/n8n_dir/node_modules/emitter-component:/usr/local/lib/node_modules/emitter-component'
      - '~/n8n_dir/node_modules/enqueue:/usr/local/lib/node_modules/enqueue'
      - '~/n8n_dir/node_modules/enstore:/usr/local/lib/node_modules/enstore'
      - '~/n8n_dir/node_modules/entities:/usr/local/lib/node_modules/entities'
      - '~/n8n_dir/node_modules/error-inject:/usr/local/lib/node_modules/error-inject'
      - '~/n8n_dir/node_modules/escape-html:/usr/local/lib/node_modules/escape-html'
      - '~/n8n_dir/node_modules/escape-string-regexp:/usr/local/lib/node_modules/escape-string-regexp'
      - '~/n8n_dir/node_modules/escodegen:/usr/local/lib/node_modules/escodegen'
      - '~/n8n_dir/node_modules/esprima:/usr/local/lib/node_modules/esprima'
      - '~/n8n_dir/node_modules/estraverse:/usr/local/lib/node_modules/estraverse'
      - '~/n8n_dir/node_modules/esutils:/usr/local/lib/node_modules/esutils'
      - '~/n8n_dir/node_modules/extend:/usr/local/lib/node_modules/extend'
      - '~/n8n_dir/node_modules/extsprintf:/usr/local/lib/node_modules/extsprintf'
      - '~/n8n_dir/node_modules/fast-deep-equal:/usr/local/lib/node_modules/fast-deep-equal'
      - '~/n8n_dir/node_modules/fast-json-stable-stringify:/usr/local/lib/node_modules/fast-json-stable-stringify'
      - '~/n8n_dir/node_modules/fast-levenshtein:/usr/local/lib/node_modules/fast-levenshtein'
      - '~/n8n_dir/node_modules/file-uri-to-path:/usr/local/lib/node_modules/file-uri-to-path'
      - '~/n8n_dir/node_modules/forever-agent:/usr/local/lib/node_modules/forever-agent'
      - '~/n8n_dir/node_modules/form-data:/usr/local/lib/node_modules/form-data'
      - '~/n8n_dir/node_modules/format-parser:/usr/local/lib/node_modules/format-parser'
      - '~/n8n_dir/node_modules/formidable:/usr/local/lib/node_modules/formidable'
      - '~/n8n_dir/node_modules/fs-extra:/usr/local/lib/node_modules/fs-extra'
      - '~/n8n_dir/node_modules/ftp:/usr/local/lib/node_modules/ftp'
      - '~/n8n_dir/node_modules/get-uri:/usr/local/lib/node_modules/get-uri'
      - '~/n8n_dir/node_modules/getpass:/usr/local/lib/node_modules/getpass'
      - '~/n8n_dir/node_modules/graceful-fs:/usr/local/lib/node_modules/graceful-fs'
      - '~/n8n_dir/node_modules/har-schema:/usr/local/lib/node_modules/har-schema'
      - '~/n8n_dir/node_modules/har-validator:/usr/local/lib/node_modules/har-validator'
      - '~/n8n_dir/node_modules/has-flag:/usr/local/lib/node_modules/has-flag'
      - '~/n8n_dir/node_modules/htmlparser2:/usr/local/lib/node_modules/htmlparser2'
      - '~/n8n_dir/node_modules/http-context:/usr/local/lib/node_modules/http-context'
      - '~/n8n_dir/node_modules/http-errors:/usr/local/lib/node_modules/http-errors'
      - '~/n8n_dir/node_modules/http-incoming:/usr/local/lib/node_modules/http-incoming'
      - '~/n8n_dir/node_modules/http-outgoing:/usr/local/lib/node_modules/http-outgoing'
      - '~/n8n_dir/node_modules/http-proxy-agent:/usr/local/lib/node_modules/http-proxy-agent'
      - '~/n8n_dir/node_modules/http-signature:/usr/local/lib/node_modules/http-signature'
      - '~/n8n_dir/node_modules/https-proxy-agent:/usr/local/lib/node_modules/https-proxy-agent'
      - '~/n8n_dir/node_modules/iconv-lite:/usr/local/lib/node_modules/iconv-lite'
      - '~/n8n_dir/node_modules/inherits:/usr/local/lib/node_modules/inherits'
      - '~/n8n_dir/node_modules/ip:/usr/local/lib/node_modules/ip'
      - '~/n8n_dir/node_modules/is-browser:/usr/local/lib/node_modules/is-browser'
      - '~/n8n_dir/node_modules/is-typedarray:/usr/local/lib/node_modules/is-typedarray'
      - '~/n8n_dir/node_modules/is-url:/usr/local/lib/node_modules/is-url'
      - '~/n8n_dir/node_modules/isarray:/usr/local/lib/node_modules/isarray'
      - '~/n8n_dir/node_modules/isobject:/usr/local/lib/node_modules/isobject'
      - '~/n8n_dir/node_modules/isstream:/usr/local/lib/node_modules/isstream'
      - '~/n8n_dir/node_modules/jsbn:/usr/local/lib/node_modules/jsbn'
      - '~/n8n_dir/node_modules/json-schema:/usr/local/lib/node_modules/json-schema'
      - '~/n8n_dir/node_modules/json-schema-traverse:/usr/local/lib/node_modules/json-schema-traverse'
      - '~/n8n_dir/node_modules/json-stringify-safe:/usr/local/lib/node_modules/json-stringify-safe'
      - '~/n8n_dir/node_modules/jsonfile:/usr/local/lib/node_modules/jsonfile'
      - '~/n8n_dir/node_modules/jsprim:/usr/local/lib/node_modules/jsprim'
      - '~/n8n_dir/node_modules/koa-is-json:/usr/local/lib/node_modules/koa-is-json'
      - '~/n8n_dir/node_modules/levn:/usr/local/lib/node_modules/levn'
      - '~/n8n_dir/node_modules/lodash:/usr/local/lib/node_modules/lodash'
      - '~/n8n_dir/node_modules/lodash.assignin:/usr/local/lib/node_modules/lodash.assignin'
      - '~/n8n_dir/node_modules/lodash.bind:/usr/local/lib/node_modules/lodash.bind'
      - '~/n8n_dir/node_modules/lodash.defaults:/usr/local/lib/node_modules/lodash.defaults'
      - '~/n8n_dir/node_modules/lodash.filter:/usr/local/lib/node_modules/lodash.filter'
      - '~/n8n_dir/node_modules/lodash.flatten:/usr/local/lib/node_modules/lodash.flatten'
      - '~/n8n_dir/node_modules/lodash.foreach:/usr/local/lib/node_modules/lodash.foreach'
      - '~/n8n_dir/node_modules/lodash.map:/usr/local/lib/node_modules/lodash.map'
      - '~/n8n_dir/node_modules/lodash.merge:/usr/local/lib/node_modules/lodash.merge'
      - '~/n8n_dir/node_modules/lodash.pick:/usr/local/lib/node_modules/lodash.pick'
      - '~/n8n_dir/node_modules/lodash.reduce:/usr/local/lib/node_modules/lodash.reduce'
      - '~/n8n_dir/node_modules/lodash.reject:/usr/local/lib/node_modules/lodash.reject'
      - '~/n8n_dir/node_modules/lodash.some:/usr/local/lib/node_modules/lodash.some'
      - '~/n8n_dir/node_modules/lru-cache:/usr/local/lib/node_modules/lru-cache'
      - '~/n8n_dir/node_modules/media-typer:/usr/local/lib/node_modules/media-typer'
      - '~/n8n_dir/node_modules/methods:/usr/local/lib/node_modules/methods'
      - '~/n8n_dir/node_modules/mime:/usr/local/lib/node_modules/mime'
      - '~/n8n_dir/node_modules/mime-db:/usr/local/lib/node_modules/mime-db'
      - '~/n8n_dir/node_modules/mime-types:/usr/local/lib/node_modules/mime-types'
      - '~/n8n_dir/node_modules/monotonic-timestamp:/usr/local/lib/node_modules/monotonic-timestamp'
      - '~/n8n_dir/node_modules/ms:/usr/local/lib/node_modules/ms'
      - '~/n8n_dir/node_modules/negotiator:/usr/local/lib/node_modules/negotiator'
      - '~/n8n_dir/node_modules/netmask:/usr/local/lib/node_modules/netmask'
      - '~/n8n_dir/node_modules/nth-check:/usr/local/lib/node_modules/nth-check'
      - '~/n8n_dir/node_modules/oauth-sign:/usr/local/lib/node_modules/oauth-sign'
      - '~/n8n_dir/node_modules/object-assign:/usr/local/lib/node_modules/object-assign'
      - '~/n8n_dir/node_modules/on-finished:/usr/local/lib/node_modules/on-finished'
      - '~/n8n_dir/node_modules/optionator:/usr/local/lib/node_modules/optionator'
      - '~/n8n_dir/node_modules/pac-proxy-agent:/usr/local/lib/node_modules/pac-proxy-agent'
      - '~/n8n_dir/node_modules/pac-resolver:/usr/local/lib/node_modules/pac-resolver'
      - '~/n8n_dir/node_modules/parseurl:/usr/local/lib/node_modules/parseurl'
      - '~/n8n_dir/node_modules/performance-now:/usr/local/lib/node_modules/performance-now'
      - '~/n8n_dir/node_modules/prelude-ls:/usr/local/lib/node_modules/prelude-ls'
      - '~/n8n_dir/node_modules/process-nextick-args:/usr/local/lib/node_modules/process-nextick-args'
      - '~/n8n_dir/node_modules/promise-polyfill:/usr/local/lib/node_modules/promise-polyfill'
      - '~/n8n_dir/node_modules/proxy-agent:/usr/local/lib/node_modules/proxy-agent'
      - '~/n8n_dir/node_modules/proxy-from-env:/usr/local/lib/node_modules/proxy-from-env'
      - '~/n8n_dir/node_modules/psl:/usr/local/lib/node_modules/psl'
      - '~/n8n_dir/node_modules/punycode:/usr/local/lib/node_modules/punycode'
      - '~/n8n_dir/node_modules/qs:/usr/local/lib/node_modules/qs'
      - '~/n8n_dir/node_modules/querystring:/usr/local/lib/node_modules/querystring'
      - '~/n8n_dir/node_modules/raw-body:/usr/local/lib/node_modules/raw-body'
      - '~/n8n_dir/node_modules/readable-stream:/usr/local/lib/node_modules/readable-stream'
      - '~/n8n_dir/node_modules/request:/usr/local/lib/node_modules/request'
      - '~/n8n_dir/node_modules/request-promise:/usr/local/lib/node_modules/request-promise'
      - '~/n8n_dir/node_modules/request-promise-core:/usr/local/lib/node_modules/request-promise-core'
      - '~/n8n_dir/node_modules/request-x-ray:/usr/local/lib/node_modules/request-x-ray'
      - '~/n8n_dir/node_modules/safe-buffer:/usr/local/lib/node_modules/safe-buffer'
      - '~/n8n_dir/node_modules/safer-buffer:/usr/local/lib/node_modules/safer-buffer'
      - '~/n8n_dir/node_modules/selectn:/usr/local/lib/node_modules/selectn'
      - '~/n8n_dir/node_modules/setprototypeof:/usr/local/lib/node_modules/setprototypeof'
      - '~/n8n_dir/node_modules/sliced:/usr/local/lib/node_modules/sliced'
      - '~/n8n_dir/node_modules/smart-buffer:/usr/local/lib/node_modules/smart-buffer'
      - '~/n8n_dir/node_modules/socks:/usr/local/lib/node_modules/socks'
      - '~/n8n_dir/node_modules/socks-proxy-agent:/usr/local/lib/node_modules/socks-proxy-agent'
      - '~/n8n_dir/node_modules/source-map:/usr/local/lib/node_modules/source-map'
      - '~/n8n_dir/node_modules/sshpk:/usr/local/lib/node_modules/sshpk'
      - '~/n8n_dir/node_modules/statuses:/usr/local/lib/node_modules/statuses'
      - '~/n8n_dir/node_modules/stealthy-require:/usr/local/lib/node_modules/stealthy-require'
      - '~/n8n_dir/node_modules/stream-to-string:/usr/local/lib/node_modules/stream-to-string'
      - '~/n8n_dir/node_modules/string-format:/usr/local/lib/node_modules/string-format'
      - '~/n8n_dir/node_modules/string_decoder:/usr/local/lib/node_modules/string_decoder'
      - '~/n8n_dir/node_modules/superagent:/usr/local/lib/node_modules/superagent'
      - '~/n8n_dir/node_modules/superagent-proxy:/usr/local/lib/node_modules/superagent-proxy'
      - '~/n8n_dir/node_modules/supports-color:/usr/local/lib/node_modules/supports-color'
      - '~/n8n_dir/node_modules/toidentifier:/usr/local/lib/node_modules/toidentifier'
      - '~/n8n_dir/node_modules/torrent-search-api:/usr/local/lib/node_modules/torrent-search-api'
      - '~/n8n_dir/node_modules/tough-cookie:/usr/local/lib/node_modules/tough-cookie'
      - '~/n8n_dir/node_modules/tslib:/usr/local/lib/node_modules/tslib'
      - '~/n8n_dir/node_modules/tunnel-agent:/usr/local/lib/node_modules/tunnel-agent'
      - '~/n8n_dir/node_modules/tweetnacl:/usr/local/lib/node_modules/tweetnacl'
      - '~/n8n_dir/node_modules/type-check:/usr/local/lib/node_modules/type-check'
      - '~/n8n_dir/node_modules/type-is:/usr/local/lib/node_modules/type-is'
      - '~/n8n_dir/node_modules/universalify:/usr/local/lib/node_modules/universalify'
      - '~/n8n_dir/node_modules/unpipe:/usr/local/lib/node_modules/unpipe'
      - '~/n8n_dir/node_modules/uri-js:/usr/local/lib/node_modules/uri-js'
      - '~/n8n_dir/node_modules/util:/usr/local/lib/node_modules/util'
      - '~/n8n_dir/node_modules/util-deprecate:/usr/local/lib/node_modules/util-deprecate'
      - '~/n8n_dir/node_modules/uuid:/usr/local/lib/node_modules/uuid'
      - '~/n8n_dir/node_modules/vary:/usr/local/lib/node_modules/vary'
      - '~/n8n_dir/node_modules/verror:/usr/local/lib/node_modules/verror'
      - '~/n8n_dir/node_modules/word-wrap:/usr/local/lib/node_modules/word-wrap'
      - '~/n8n_dir/node_modules/wrap-fn:/usr/local/lib/node_modules/wrap-fn'
      - '~/n8n_dir/node_modules/x-ray:/usr/local/lib/node_modules/x-ray'
      - '~/n8n_dir/node_modules/x-ray-crawler:/usr/local/lib/node_modules/x-ray-crawler'
      - '~/n8n_dir/node_modules/x-ray-parse:/usr/local/lib/node_modules/x-ray-parse'
      - '~/n8n_dir/node_modules/x-ray-scraper:/usr/local/lib/node_modules/x-ray-scraper'
      - '~/n8n_dir/node_modules/xregexp:/usr/local/lib/node_modules/xregexp'
      - '~/n8n_dir/node_modules/yallist:/usr/local/lib/node_modules/yallist'
      - '~/n8n_dir/node_modules/yieldly:/usr/local/lib/node_modules/yieldly'
    image: 'n8nio/n8n:latest-rpi'
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=username
      - N8N_BASIC_AUTH_PASSWORD=your_secret_n8n_password
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=120
      - EXECUTIONS_TIMEOUT=300
      - EXECUTIONS_TIMEOUT_MAX=500
      - GENERIC_TIMEZONE=Europe/Berlin
      - NODE_FUNCTION_ALLOW_EXTERNAL=torrent-search-api
```

Once configured this way, run n8n and create a new workflow by copying the one proposed.

Configure workflow

Transmission

To send commands to Transmission you must pass its Basic Auth. To do so, open the Start download node and edit the Credentials. Perform the same operation, choosing the new credentials, in the Start download new token node.

In this automation we call Transmission twice because of a security protocol in the Transmission system that prevents single-click commands from being triggered; performing the request twice bypasses this mechanism (see https://en.wikipedia.org/wiki/Cross-site_request_forgery). We use the X-Transmission-Session-Id provided by the first request to authenticate the second request — a sketch of this handshake is shown below.

Telegram

To make the workflow work as expected, you must create a Telegram bot and configure the nodes (Torrent not found and Telegram1) to send your message once the workflow is complete. Here's an easy guide to follow: https://docs.n8n.io/nodes/n8n-nodes-base.telegram/

In those nodes you should also configure the Chat ID. You may use your Telegram username, or use a bot to retrieve your ID: chat with useridinfobot, which replies with your ID.

Ok google automation

Since right now we do not have an n8n mobile client that can trigger automations using Google Assistant, I decided to use an IFTTT applet to trigger the webhook. Connect your IFTTT account with Google Assistant, pick the trigger "Say a phrase with a text ingredient", and configure it with a phrase such as scarica $ ("download $"), metti in download $ ("put in download $"), or some other trigger you may want. Then configure your server to trigger the webhook of n8n.
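To illustrate the double-request pattern from the Transmission section above, here is a minimal sketch of the handshake as plain code. Host, credentials, and the magnet link are placeholders:

```javascript
// Transmission's CSRF protection: the first request is expected to return 409
// with an X-Transmission-Session-Id header, which authenticates the retry.
const rpc = 'http://localhost:9091/transmission/rpc'; // placeholder host
const auth = 'Basic ' + Buffer.from('username:password').toString('base64'); // placeholder creds
const magnetLink = 'magnet:?xt=...'; // magnet URI from the torrent search step
const payload = JSON.stringify({ method: 'torrent-add', arguments: { filename: magnetLink } });

let res = await fetch(rpc, { method: 'POST', headers: { Authorization: auth }, body: payload });

if (res.status === 409) {
  const sessionId = res.headers.get('X-Transmission-Session-Id');
  res = await fetch(rpc, {
    method: 'POST',
    headers: { Authorization: auth, 'X-Transmission-Session-Id': sessionId },
    body: payload,
  });
}
```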
Conclusion

In conclusion, we provide a fully working automation that integrates a Node.js library into n8n and offers an easy trigger to perform a complex operation.

Security concerns

Giving anyone the ability to trigger a download may be problematic, since it risks unwanted torrent malware downloads, so you may decide to authenticate the webhook request by passing in the body another field with a token shared between the two endpoints. Moreover, the torrent-search-api library and its dependencies have some vulnerabilities that you may want to keep off your own media-center; this will hopefully be patched in a future release of the library. This is just an interesting proof of concept.

Quality of the download

You may want to introduce another block between the torrent search and the webhook trigger to search for a movie based on the words detected by Google Assistant; it sometimes misinterprets them, and you may end up downloading potentially copyrighted material. Please use this automation only for free and open-source movies and music.
by Nick Saraev
AI LinkedIn Outreach Automation with Apollo, OpenAI & PhantomBuster

Categories: Sales Automation, Lead Generation, AI Personalization

This workflow creates a complete LinkedIn outreach automation system that generates targeted lead lists from Apollo using natural language, enriches profiles with AI-personalized icebreakers, and automatically sends connection requests through PhantomBuster. Built by someone who's made over $1 million with AI automation, this system demonstrates the real-world approach to building profitable automation workflows.

Benefits

- **Natural Language Lead Targeting** - Describe your ideal prospects in plain English and automatically generate Apollo search URLs
- **AI-Powered Personalization** - Creates custom icebreakers based on LinkedIn profile data, employment history, and professional background
- **Complete Outreach Pipeline** - From lead discovery to personalized connection requests, fully automated end-to-end
- **Smart Data Management** - Automatically tracks all prospects in Google Sheets with deduplication and status tracking
- **Cost-Effective Scraping** - Uses Apify to extract Apollo data without expensive subscription costs
- **Scalable Architecture** - Processes hundreds of leads while respecting LinkedIn's connection limits

How It Works

1. **Natural Language Lead Generation**: A form input accepts audience descriptions in plain English; AI converts the descriptions into properly formatted Apollo search URLs, automatically including location, company size, job titles, and keyword filters.
2. **Apollo Data Extraction**: Uses an Apify actor to scrape targeted lead lists from Apollo, extracting LinkedIn URLs, email addresses, employment history, and profile data. Processes 500+ leads per run with detailed professional information.
3. **AI Personalization Engine**: Analyzes LinkedIn profile data including job history and company information, generates personalized icebreakers using proven connection request templates, and creates human-like messages that reference specific career details and achievements.
4. **Google Sheets Integration**: Automatically stores all lead data in an organized spreadsheet format, tracking prospect information, contact details, and generated icebreakers for easy data management and campaign tracking.
5. **PhantomBuster Automation**: Connects to the PhantomBuster API to trigger LinkedIn connection campaigns, sending personalized connection requests with custom icebreakers while respecting LinkedIn's daily limits and mimicking human behavior patterns.

Business Use Cases

- **Sales Teams** - Automate prospecting for B2B outreach campaigns
- **Agencies** - Scale client acquisition through targeted LinkedIn outreach
- **Recruiters** - Find and connect with qualified candidates efficiently
- **Entrepreneurs** - Build professional networks in specific industries
- **Business Development** - Generate qualified leads for partnership opportunities

Revenue Potential

This system can replace expensive LinkedIn outreach tools that cost $200-500/month. Users typically see:

- 400% improvement in response rates through personalization
- 10x faster lead generation compared to manual prospecting
- Ability to process 500+ leads per hour vs. 10-20 manually

Difficulty Level: Intermediate
Estimated Build Time: 1-2 hours
Monthly Operating Cost: ~$50 (Apollo + PhantomBuster + AI APIs)

Watch My Complete 1-Hour Build

Want to see exactly how I built this system from scratch? I walk through the entire development process live, including all the debugging, API integrations, and real-world testing that goes into building profitable automation systems.
🎥 See My Live Build Process: "Build This Automated AI LinkedIn DM System in 1 Hour (N8N)"

This comprehensive tutorial shows my actual development approach - including the detours, problem-solving, and iterative testing that real automation building involves.

Required Google Sheets Setup

Create a Google Sheet with these exact column headers:

Essential Lead Columns:

- id - Unique prospect identifier
- first_name - Contact's first name
- last_name - Contact's last name
- name - Full name
- linkedin_url - LinkedIn profile URL
- title - Current job title
- email_status - Email verification status
- photo_url - Profile photo URL
- icebreaker - AI-generated personalized message

Setup Instructions:

1. Create a Google Sheet with these headers in row 1.
2. Connect Google Sheets OAuth in n8n.
3. Update the document ID in the "Add to Google Sheet" node.
4. PhantomBuster will read from this sheet for automated outreach (a deduplication sketch is shown at the end of this section).

Set Up Steps

1. **Apollo & Apify Configuration**: Set up an Apify account and obtain API credentials; configure the Apollo scraper actor with proper parameters; test lead extraction with sample audience descriptions.
2. **AI Personalization Setup**: Configure the OpenAI API for natural language processing and personalization; set up prompt templates for audience targeting and icebreaker generation; test personalization quality with sample LinkedIn profiles.
3. **Google Sheets Integration**: Create the lead tracking spreadsheet with the proper column structure; configure Google Sheets API credentials and permissions; set up data mapping for automatic lead storage.
4. **PhantomBuster Connection**: Set up a PhantomBuster account and LinkedIn connection; configure the LinkedIn auto-connect agent with custom message templates; connect the API for automated campaign triggering.
5. **Form and Workflow Setup**: Configure the form trigger for audience input collection; set up the data flow between all components; add proper error handling and rate limiting.
6. **Testing and Optimization**: Start with small batches (5-10 connections daily); monitor LinkedIn account health and response rates; optimize icebreaker templates based on performance data.

Important Compliance Notes

- **LinkedIn Limits**: Respect the 100-connection-requests-per-week limit.
- **Account Safety**: Use PhantomBuster's human-like behavior patterns.
- **Message Quality**: Regularly update templates to avoid automation detection.
- **Response Management**: Monitor and respond to replies within 24 hours.

Advanced Extensions

This system can be enhanced with:

- **Multi-channel Outreach**: Add email sequences for comprehensive campaigns
- **A/B Testing**: Test different icebreaker templates automatically
- **CRM Integration**: Connect to Salesforce, HubSpot, or other sales systems
- **Response Tracking**: Monitor reply rates and optimize messaging

Explore My Channel

For more advanced automation systems that generate real business results, check out my YouTube channel where I share the exact strategies I've used to make over $1 million with AI automation.
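As a sketch of the deduplication idea mentioned above, a Code node could drop scraped leads whose `id` already exists in the sheet. The node name referenced here is an assumption — substitute the name of your own sheet-reading node:

```javascript
// Drop leads already present in the Google Sheet, keyed on the `id` column.
const existing = new Set(
  $('Read Google Sheet').all().map(row => row.json.id), // node name is an assumption
);

return $input.all().filter(lead => !existing.has(lead.json.id));
```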
by n8n Team
This workflow automatically adds closed deals from Pipedrive as new customers into Stripe.

Prerequisites

- Pipedrive account and Pipedrive credentials
- Stripe account and Stripe credentials

How it works

1. The Pipedrive Trigger node starts the workflow when a deal gets updated in Pipedrive.
2. An IF node checks that the current won time is not equal to the previous one in the deal, and continues the workflow if that's true.
3. The Pipedrive node extracts the organization's details to pass them further.
4. An HTTP Request node searches for the same organization's details within Stripe (a sketch of this lookup is shown below).
5. If the customer doesn't exist within Stripe, the Merge node passes the new customer's details to Stripe.
6. The Stripe node creates the new customer.
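As a sketch of the lookup-then-create logic (steps 4-6), expressed as direct Stripe REST calls for clarity — the workflow itself uses an HTTP Request node plus the Stripe node, and the `org` fields here are placeholders:

```javascript
// Search Stripe for a customer by email; create one only if none exists.
const org = { name: 'Acme Corp', email: 'billing@acme.example' }; // placeholder org details
const headers = {
  Authorization: `Bearer ${process.env.STRIPE_SECRET_KEY}`,
  'Content-Type': 'application/x-www-form-urlencoded',
};

const found = await fetch(
  `https://api.stripe.com/v1/customers?email=${encodeURIComponent(org.email)}&limit=1`,
  { headers },
).then(r => r.json());

if (!found.data?.length) {
  await fetch('https://api.stripe.com/v1/customers', {
    method: 'POST',
    headers,
    body: new URLSearchParams({ name: org.name, email: org.email }),
  });
}
```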
by Shiva
AI Voice Calling Bot - OpenAI GPT-4o + ElevenLabs + Twilio Integration for Multilingual Appointment Booking & Service Orders

Overview

Transform your business with an intelligent voice calling bot that handles customer calls automatically in 25+ languages. This n8n workflow integrates OpenAI GPT-4o, ElevenLabs text-to-speech, and Twilio for seamless appointment scheduling, pizza orders, and service bookings.

Key Features

- **Multilingual Support**: Conversations in English, Spanish, French, German, Italian, Portuguese, Chinese, Japanese, Arabic, and 20+ more languages
- **Natural AI Conversations**: GPT-4o powered responses with ElevenLabs realistic voice synthesis
- **Multi-Service Handling**: Appointments, orders, and service requests with automatic logging
- **Real-time Processing**: Instant speech-to-text and audio response generation

Prerequisites

- n8n instance (self-hosted or cloud)
- Twilio account with phone number
- OpenAI API key (GPT-4o access)
- ElevenLabs API credentials
- Google Sheets access
- Cloud storage for audio files

Setup Instructions

1. **Configure Credentials**: Add API keys for OpenAI, ElevenLabs, Twilio, and Google Sheets in the n8n credentials manager.
2. **Prepare Data Storage**: Create Google Sheets for call logs and appointments with columns: timestamp, caller_id, speech_input, ai_response, language, call_sid.
3. **Configure Twilio**: Set the webhook URL to your n8n endpoint: https://your-n8n-instance.com/webhook/voice-webhook (an illustrative TwiML reply is sketched below).
4. **Update Sheet IDs**: Replace the placeholder Google Sheet IDs in the workflow nodes with your actual sheet IDs.

Customization Options

- **Voice Settings**: Adjust ElevenLabs multilingual voice models and parameters
- **AI Behavior**: Modify system prompts for specific business needs and languages
- **Service Types**: Add custom service handling logic
- **Business Hours**: Implement language-specific operating hours

Monitoring

Track call analytics, language preferences, conversion rates, and customer satisfaction across all supported languages through automated Google Sheets logging.

Ready for production use with comprehensive error handling and scalability for global businesses.
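For orientation, the webhook's reply to Twilio is TwiML. A helper like the sketch below could build it: play the ElevenLabs audio, then gather the caller's next utterance as speech. The audio URL and action path are placeholders:

```javascript
// Build a TwiML response: play the synthesized reply, then listen for speech.
function buildTwiml(audioUrl) {
  return `<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Play>${audioUrl}</Play>
  <Gather input="speech" action="/webhook/voice-webhook" method="POST" speechTimeout="auto"/>
</Response>`;
}

// Example: buildTwiml('https://storage.example.com/replies/abc123.mp3')
```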