by Yaron Been
# 🔥 AI Lead Scoring Agent: Smart Contact Form Triager

Automatically score every contact form lead as Hot/Warm/Cold and alert your sales team instantly. This intelligent workflow captures contact form submissions, uses GPT-4 to analyze message content and score lead quality, then sends formatted alerts to Slack - ensuring your sales team always focuses on the hottest prospects first.

## 🚀 What It Does

- **Instant Lead Capture**: Automatically receives contact form submissions via a webhook endpoint
- **AI-Powered Scoring**: GPT-4 analyzes message content and classifies leads as Hot 🔥, Warm 🌤, or Cold ❄️
- **Smart Data Extraction**: Cleanly extracts name, email, and message from form submissions
- **Real-Time Slack Alerts**: Sends formatted notifications to your sales team with lead details and AI scoring

## 🎯 Key Benefits

- ✅ **Never Miss Hot Prospects**: AI identifies urgent leads automatically
- ✅ **Save Sales Time**: Focus effort on the highest-probability leads first
- ✅ **Instant Team Alerts**: Real-time notifications in Slack channels
- ✅ **Smart Prioritization**: AI scoring eliminates guesswork in lead quality
- ✅ **Zero Manual Work**: Complete automation from form to sales alert
- ✅ **Universal Integration**: Works with any contact form or landing page

## 🏢 Perfect For

**Sales & Marketing Teams**
- SaaS companies managing inbound leads
- Service businesses qualifying prospects
- E-commerce stores identifying serious buyers
- Agencies prioritizing client inquiries

**Business Applications**
- **Lead Qualification**: Identify purchase-ready prospects instantly
- **Sales Efficiency**: Focus team effort on the highest-value opportunities
- **Response Prioritization**: Handle urgent inquiries first
- **Team Coordination**: Keep the entire sales team informed of new leads

## ⚙️ What's Included

- **Complete Workflow**: Ready-to-deploy lead scoring automation
- **Webhook Endpoint**: Receives submissions from any contact form
- **AI Classification**: GPT-4 powered lead interest analysis
- **Slack Integration**: Professional team notifications with emojis and formatting
- **Data Processing**: Clean extraction and formatting of lead information

## 🔧 Quick Setup Requirements

- **n8n Platform**: Cloud or self-hosted instance
- **OpenAI API**: GPT-4 access for lead scoring
- **Slack Workspace**: Team channel for lead notifications
- **Contact Form**: Any form that can POST to the webhook endpoint (a sample payload appears at the end of this template)

## 📱 Sample Slack Alert

> 🔥 **New Lead: Sarah Johnson** (sarah@techstartup.com)
> Message: "We're looking for a project management solution for our 50-person team. Need to implement ASAP as we're scaling fast. Can we schedule a demo this week?"
> Triage: 🔥 Hot

> ❄️ **New Lead: John Smith** (john@email.com)
> Message: "Just browsing your website. Might be interested in learning more someday."
> Triage: ❄️ Cold

## 🎨 Customization Options

- **Scoring Criteria**: Adjust AI prompts for industry-specific lead qualification
- **Team Channels**: Route different lead types to specific Slack channels
- **Additional Fields**: Capture company size, budget, and timeline data
- **CRM Integration**: Connect to Salesforce, HubSpot, or Pipedrive
- **Follow-up Automation**: Trigger email sequences based on lead temperature
- **Analytics Tracking**: Monitor lead quality trends and conversion rates

## 🏷️ Tags & Categories

#lead-scoring #sales-automation #contact-form-processing #ai-qualification #slack-integration #prospect-management #inbound-marketing #sales-productivity #lead-generation #openai-integration #webhook-automation #crm-automation #sales-alerts #lead-triage #ai-agent

## 💡 Use Case Examples

- **SaaS Company**: Score demo requests based on company size and urgency mentions
- **Consulting Firm**: Identify clients ready to start projects vs. those still researching
- **E-commerce Store**: Spot bulk buyers and wholesale inquiries vs. casual browsers
- **Marketing Agency**: Prioritize clients who mention specific budgets and timelines

## 📈 Expected Results

- **70% faster** lead response times through smart prioritization
- **3x higher** conversion rates by focusing on Hot leads first
- **50% time savings** on manual lead qualification
- **100% lead coverage** - never miss or ignore a prospect again

## 🛠️ Setup & Support

- **5-Minute Setup**: Simple webhook configuration with any contact form
- **Universal Integration**: Works with WordPress, Webflow, custom forms, and landing pages
- **Team Training**: Clear Slack notification format anyone can understand
- **Scalable**: Handles unlimited form submissions automatically

## 📞 Get Help & Resources

- 🎥 YouTube: https://www.youtube.com/@YaronBeen/videos
- 💼 Sales Automation Support - LinkedIn: https://www.linkedin.com/in/yaronbeen/
- 📧 Direct Help - Email: Yaron@nofluff.online - Response within 24 hours

Ready to never miss another hot lead? Get this AI Lead Scoring Agent and transform your contact forms into intelligent lead qualification systems. Your sales team will always know which prospects to call first, and you'll never waste time on cold leads again.

**Stop treating all leads equally. Start prioritizing the ones ready to buy.**
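For reference, a minimal example of the JSON body the webhook might receive from a contact form. The field names here are assumptions for illustration; match them to whatever your form actually posts:

```json
{
  "name": "Sarah Johnson",
  "email": "sarah@techstartup.com",
  "message": "We're looking for a project management solution for our 50-person team. Can we schedule a demo this week?"
}
```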
by David Olusola
This plug-and-play n8n workflow automates medical record digitization using Mistral's OCR API and stores clean, structured data in Google Sheets. Whether you run a clinic or a healthtech product, this no-code solution simplifies data entry from scanned or uploaded medical documents.

📌 Works seamlessly on both self-hosted and cloud-based n8n environments.

## 👥 Who is this for?

- Hospitals and private clinics
- Healthtech platforms & startups
- Medical admin and document processing teams
- Clinical researchers and labs

## 😓 What problem does it solve?

- ❌ Manual entry from printed forms
- ❌ Unstructured, scattered records
- ❌ Errors in data transcription
- ❌ Inconsistent document storage

✅ This automation brings consistency, structure, and speed to the way you handle medical documents.

## ✅ What this workflow does

1. Captures uploaded documents through a public form
2. Uploads the file to Mistral for OCR processing
3. Extracts clean text from each page (PDF or image)
4. Parses patient fields (Name, DOB, Diagnosis, Medications, etc.)
5. Saves records into a structured Google Sheet

## 🛠️ Setup Instructions

**Step 1: Google Sheet Prep**
Create a Google Sheet with these columns (case-sensitive):
Name, Date of Birth, Patient ID, Date of Visit, Referring Physician, Department, Symptoms, Blood Pressure, Heart Rate, Temperature, Lab Results, Diagnosis, Medications, Next Appointment, Notes

**Step 2: Mistral API Access**
- Sign up at Mistral AI
- Get your API key
- Ensure your plan supports the file upload & OCR endpoints

**Step 3: Google OAuth Credentials (Self-hosted or Cloud)**
Go to n8n → Settings → Credentials, and add:
- Google Sheets OAuth2
- Scope needed: https://www.googleapis.com/auth/spreadsheets

**Step 4: Import Workflow**
- Go to Workflows > Import from File
- Upload your JSON file
- Replace the Google Sheet document ID in the "Google Sheets" node and your Mistral API key in the HTTP Header Auth credential

**Step 5: (Optional) Make Form Public**
- In cloud-based n8n you can expose the form as a public page
- Otherwise, connect it to your website form via webhook

## 🧩 Customization Tips

**Extract More Fields**
Update the "Data cleaning" node and extend the list of fields (a fuller parsing sketch appears at the end of this template):

```js
const fields = ["Name", "Diagnosis", "Medications", "Symptoms", ...];
```

**Add EHR or Database Integration**
After Google Sheets, chain your custom system:
- PostgreSQL
- Airtable
- Supabase
- MongoDB

**Change Output Format**
Want JSON or Markdown output for internal tools? Use a Set or Code node before the final output step.

## 🧪 Troubleshooting

| Issue | Fix |
|---|---|
| File upload fails | Check Mistral API key and file type |
| Google Sheets not updating | Verify credentials and document ID |
| No data parsed | Check OCR quality; verify field labels in the document |
| Workflow not triggering | Ensure the webhook or form is configured correctly |

## 🌐 Self-Hosted vs Cloud Comparison

| Feature | Self-Hosted | n8n Cloud |
|---|---|---|
| Public Form Access | Manual setup | Built-in |
| OAuth App Config | Required | Pre-configured |
| Storage Limits | Depends on server | Included with plan |
| Scalability | Fully customizable | Scales automatically |

## 📣 Getting Support

- n8n Docs
- Mistral API Docs
- n8n Community
- Or reach out to: David Olusola (dimejicole21@gmail.com)

🌟 Like this template? Give it a star in the template library and help other no-code builders discover it.

"Turn scanned documents into structured data with zero code."
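To illustrate the parsing step, here is a minimal sketch of what the "Data cleaning" Code node could look like. It assumes the OCR output contains `Label: value` lines whose labels match the sheet columns; the `ocrText` property name and the label format are assumptions to adapt to your documents:

```js
// Hypothetical n8n Code node: pull "Label: value" lines out of the Mistral OCR text
const fields = ['Name', 'Date of Birth', 'Patient ID', 'Diagnosis', 'Medications'];
const text = $input.first().json.ocrText ?? ''; // assumed property holding the extracted text

const record = {};
for (const field of fields) {
  // Matches e.g. "Diagnosis: Hypertension" anywhere in the text
  const match = text.match(new RegExp(`${field}\\s*:\\s*(.+)`, 'i'));
  record[field] = match ? match[1].trim() : '';
}

return [{ json: record }];
```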
by ist00dent
This n8n template enables you to instantly retrieve detailed geolocation information for any given IP address by simply sending a webhook request. Leverage the power of IP-API.com to gain insights into user locations, personalize experiences, or enhance security protocols within your automated workflows.

## 🔧 How it works

1. **Receive IP Webhook**: This node acts as the entry point, listening for incoming POST requests. It expects a JSON body containing an `ip` property with the IP address you wish to look up.
2. **Get IP Geolocation**: This node makes an HTTP GET request to the IP-API.com service, passing the IP address from your webhook. The API responds with a comprehensive JSON object detailing the IP's location (country, city, region), ISP, organization, and more.
3. **Respond with Geolocation Data**: This node sends the full geolocation data received from IP-API.com back to the service that initiated the webhook.

## 👤 Who is it for?

This workflow is ideal for:

- **Marketing & Sales Teams**: Personalize website content, offers, or ads based on a visitor's geographic location. Tailor email campaigns by region.
- **Customer Support**: Quickly identify a customer's location to provide more localized or relevant assistance.
- **Security & Fraud Detection**: Analyze incoming connection IPs to identify suspicious activity, block known malicious regions, or flag potential fraud.
- **Analytics & Reporting**: Augment your analytics data with geographical insights about your users or traffic.
- **Developers & Integrators**: Easily add IP lookup functionality to custom applications, internal tools, or monitoring systems.
- **Content Delivery Networks (CDNs)**: Route users to the closest servers for faster content delivery (though advanced CDNs usually handle this automatically).

## 📑 Data Structure

When you trigger the webhook, send a POST request with a JSON body structured as follows:

```json
{
  "ip": "8.8.8.8" // Replace with the IP address you want to look up
}
```

The workflow will return a JSON response similar to this (data will vary based on IP):

```json
{
  "status": "success",
  "country": "United States",
  "countryCode": "US",
  "region": "VA",
  "regionName": "Virginia",
  "city": "Ashburn",
  "zip": "20149",
  "lat": 39.0437,
  "lon": -77.4875,
  "timezone": "America/New_York",
  "isp": "Google LLC",
  "org": "Google Public DNS",
  "as": "AS15169 Google LLC",
  "query": "8.8.8.8"
}
```

## ⚙️ Setup Instructions

1. **Import Workflow**: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
2. **Configure Webhook Path**: Double-click the Receive IP Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /ip-lookup).
3. **Activate Workflow**: Save and activate the workflow.

## 📝 Tips

This workflow, while simple, is a powerful building block. Here's how you can make it even more useful:

- **Conditional Logic**: Add IF nodes after "Get IP Geolocation" to create conditional branches. For example: if countryCode is 'CN' or 'RU', send an alert to your security team; if city is 'New York', route the request to a specific sales representative.
- **Data Enrichment**: Integrate this workflow into larger automations. For instance: when a new sign-up occurs, pass their IP address to this workflow, then save the returned geolocation data (country, city, ISP) alongside their user profile in your CRM or database. For e-commerce, use the location data to pre-fill shipping fields or suggest local currency/language.
- **Logging & Analytics**: Store the lookup results in a spreadsheet (Google Sheets), database (PostgreSQL, Airtable), or logging service. This can help you track where your users are coming from over time.
- **Rate Limiting**: IP-API.com has rate limits on its free tier. If you anticipate high usage, consider adding a Delay node or implementing a caching mechanism to avoid hitting limits. For heavy use, you might need to upgrade to a paid plan.
- **Dynamic Response**: Instead of returning the full JSON, you could use a Function node to extract only specific pieces of information (e.g., just the country and city) and return a more concise response.
- **Input Validation**: For robust production use, add a Function node after the webhook to validate that the incoming ip value is indeed a valid IP address. If it isn't, you can return an error message to the caller - see the sketch below.
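As a rough sketch of that validation step (the regex covers IPv4 only; extend it if you also need IPv6), a Code node placed right after the webhook might look like this:

```js
// Hypothetical validation step placed right after the webhook node
const payload = $input.first().json;
const ip = payload.body?.ip ?? payload.ip;

// Basic IPv4 check: four octets, each 0-255
const ipv4 = /^(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}$/;

if (typeof ip !== 'string' || !ipv4.test(ip)) {
  // A downstream IF node can route this item to an error response
  return [{ json: { status: 'fail', message: `Invalid IP address: ${ip}` } }];
}

return [{ json: { ip } }];
```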
by Jah coozi
# AI Social Media Content Generator & Scheduler

Transform your social media strategy with AI-powered content generation that creates platform-specific posts in seconds!

## 🚀 What It Does

This workflow uses AI to generate optimized content for multiple social media platforms from a single topic input. Perfect for marketers, content creators, and businesses looking to maintain a consistent social media presence.

## ✨ Key Features

- **Multi-Platform Support**: LinkedIn, Twitter/X, Instagram, Facebook, TikTok
- **AI-Powered Generation**: Uses GPT-4 for creative, engaging content
- **Platform Optimization**: Respects character limits and best practices
- **Hashtag Generation**: Platform-specific hashtag strategies
- **Posting Time Suggestions**: Optimal times for each platform
- **Tone Customization**: Professional, casual, friendly, or custom
- **Multi-Language Support**: Generate content in any language
- **Engagement Predictions**: Estimate reach and engagement
- **Daily Automation**: Schedule automatic content generation
- **Bulk Processing**: Generate content for multiple topics at once

## 📊 Use Cases

- **Marketing Teams**: Streamline content creation across channels
- **Small Businesses**: Maintain a consistent social presence
- **Content Agencies**: Scale content production efficiently
- **Personal Brands**: Build thought leadership
- **E-commerce**: Product launches and promotions

## 🛠️ Setup Instructions

1. **Add OpenAI Credentials**: Get an API key from OpenAI and add it to your n8n credentials.
2. **Configure Webhook (Optional)**: Set a custom path if needed; enable it for external integrations.
3. **Customize Settings**: Adjust tone and style, set platform preferences, and configure the posting schedule.
4. **Test Generation**: Use the example prompts and verify output quality.

## 💡 Example Inputs

- "New product launch - eco-friendly water bottle"
- "Company milestone - 10 years in business"
- "Industry insights - Future of AI in healthcare"
- "Team spotlight - Meet our new developer"
- "Seasonal campaign - Summer sale 50% off"

(A sample webhook payload appears at the end of this template.)

## 📈 Benefits

- **10x Faster**: Create content in seconds vs. hours
- **Consistency**: Maintain brand voice across platforms
- **Engagement**: Platform-optimized for maximum reach
- **Scalability**: Generate unlimited content
- **Cost-Effective**: Reduce content creation costs by 80%

## 🔧 Customization Options

- Custom brand voice training
- Industry-specific content rules
- Competitor analysis integration
- A/B testing capabilities
- Analytics webhook integration
- Auto-posting to platforms
- Image generation add-on
- Translation services

## 🎯 Pro Tips

- Train the AI with your best-performing posts
- Use platform analytics to refine strategies
- Test different tones for audience engagement
- Schedule content during peak hours
- Monitor and iterate based on performance

Start creating engaging social media content today!

**Categories:** Marketing & Growth, Content Creation, Social Media, AI & Automation, Productivity

**Difficulty:** Beginner

**Required Services:**
- OpenAI API (or compatible LLM)
- n8n instance
- Optional: social media APIs for auto-posting
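For reference, a webhook request for a single generation run might look like the following. All field names here are assumptions chosen to illustrate the shape of the input; align them with how your webhook and form are actually configured:

```json
{
  "topic": "New product launch - eco-friendly water bottle",
  "platforms": ["linkedin", "twitter", "instagram"],
  "tone": "professional",
  "language": "en"
}
```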
by Viktor Klepikovskyi
## Reusable and Independently Testable Sub-workflow

This n8n workflow provides a standardized structure for building and testing sub-workflows in isolation. Its purpose is to help you create robust, reusable, and maintainable automations by letting you test the sub-workflow's logic without needing a separate parent workflow.

### Setup Instructions

1. **Define Sub-workflow Inputs**: Double-click the Execute Sub-workflow Trigger node to define the parameters (e.g., color) that your sub-workflow will expect from a parent workflow.
2. **Configure Test Data**: Use the Test Input node (an Edit Fields (Set) node connected to the Manual Trigger) to provide sample data for isolated testing - see the example after these steps.
3. **Connect Inputs**: The Combine Input node (an Edit Fields (Set) node) is the entry point for your sub-workflow's core logic. It should have two inputs: one from the Execute Sub-workflow Trigger and one from the Test Input node.
4. **Merge Inputs**: Ensure the Combine Input node has the 'Include Other Input Fields' option enabled to merge data from both the live and test paths seamlessly.

You can read the full blog post that explains this workflow setup in detail here.
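For example, the Test Input node could set a single field matching the sub-workflow's declared input. The field name `color` comes from the example above; the value is arbitrary test data:

```json
{
  "color": "blue"
}
```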
by Miquel Colomer
This n8n workflow template checks for new major releases (tagged with .0) of the n8n project using its official GitHub releases feed. It runs multiple times a day and sends notifications via email and Telegram if a new release is found.

> ⚠️ Note: You must **activate the workflow** to start receiving release notifications.

## 🚀 What It Does

- Monitors the n8n GitHub releases feed
- Detects major versions (e.g., 1.0.0, 2.0.0)
- Sends alert messages via Telegram and email (SES) when a release is published

## ⏰ Scheduling Details

The Cron node checks for new releases three times per day: 10:00, 14:00, and 18:00 server time.

## 🛠️ Step-by-Step Setup

1. **Configure Telegram Bot**: Connect your Telegram bot and specify the chat ID where you want to receive notifications.
2. **Set up AWS SES Credentials**: Use a verified sender email and set up AWS SES credentials in your n8n instance.
3. **Activate the Workflow**: Enable the workflow in your instance to start receiving notifications.
4. **Customize Notification Messages (Optional)**: You can modify the email subject, Telegram format, or filter logic.

## 🧠 How It Works: Workflow Overview

1. **Cron Trigger**: Runs the workflow at 10:00, 14:00, and 18:00 daily.
2. **Read RSS Feed**: Pulls data from https://github.com/n8n-io/n8n/releases.atom.
3. **Filter by Current Day**: Keeps only feed entries published in the last 4 hours whose titles start with n8n@ and end with .0 (a code sketch of this filter appears at the end of this template).
4. **Condition Check**: Uses a regex to check whether the filter result contains any release data.
5. **Notifications**: If a new major release is found, sends a Telegram message to the specified chat and an email via AWS SES with the release info.

## 📨 Final Output

You'll receive a Telegram message and an email when a new major n8n version is released.

## 🔐 Credentials Used

- **Telegram API** - for sending chat notifications
- **AWS SES** - to send email alerts

## ✨ Customization Tips

- **Change Notification Channels**: Add Slack, Discord, or other preferred channels.
- **Adjust Cron Schedule**: Modify the Cron node to fit your check frequency.
- **Modify Filters**: Detect patch or beta versions by changing the .0 condition.
- **Send Release Notes**: Extend the feed parsing to include the release content.

## ❓ Questions?

Template created by Miquel Colomer and n8nhackers.com. Need help customizing or deploying? Contact us for consulting and support.
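For reference, a minimal sketch of the date-and-title filter described in step 3, written as an n8n Code node. The `isoDate` and `title` property names follow typical RSS Read node output, but verify them against your actual feed items:

```js
// Keep only feed items published in the last 4 hours whose title marks a major release
const FOUR_HOURS_MS = 4 * 60 * 60 * 1000;
const now = Date.now();

return $input.all().filter((item) => {
  const published = new Date(item.json.isoDate).getTime();
  const isRecent = now - published <= FOUR_HOURS_MS;
  const isMajor = /^n8n@.*\.0$/.test(item.json.title); // e.g. "n8n@1.0.0"
  return isRecent && isMajor;
});
```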
by Rudi Afandi
## Description

Turn your Telegram bot into a powerful OCR (Optical Character Recognition) tool. This workflow allows you to send any image (like a screenshot, a photo of a document, or a picture of a sign) to your bot, and it will instantly extract and send back the text from that image. Powered by Google's advanced Gemini AI, this automation is perfect for quickly digitizing notes, saving important snippets, or avoiding manual typing.

## How it works

This workflow performs a few high-level steps:

1. It triggers when a new image is sent to your Telegram bot.
2. It sends the image to the Google Gemini Vision API to be analyzed.
3. It extracts the text found in the image.
4. It sends the extracted text back to you as a message in Telegram.

## Set up steps

Estimated set up time: less than 5 minutes. The setup is straightforward - you only need to configure two credentials:

1. **Telegram Bot Credentials**: to connect your bot.
2. **Google Gemini API Credentials**: to use the OCR feature. You can get a free API key from Google AI Studio.
by Danger
## Ok google download "movie name"

I developed this automation to improve my quality of life when handling torrents on my media-center.

### Goal

Automate the search for a movie based on its name and trigger a download via your transmission-daemon.

### Setup

**Prerequisites**

- Transmission daemon up and running, and its authentication method
- n8n self-hosted, or otherwise able to add npm packages (easiest with a docker-compose.yaml)
- Telegram bot credentials [optional]

**Configuration**

Create a folder where your docker-compose.yaml belongs, n8n_dir, and install the node package:

```bash
cd ~/n8n_dir
npm i torrent-search-api
```

Then configure your docker-compose.yaml file as follows. You must include all the dependencies of torrent-search-api - this lets the new torrent search node presented in this workflow run inside the container.

```yaml
version: '3.3'
services:
  n8n:
    container_name: n8n
    ports:
      - '5678:5678'
    restart: always
    volumes:
      - '~/n8n_dir/.n8n:/home/node/.n8n'
      - '~/n8n_dir/node_modules/@tootallnate:/usr/local/lib/node_modules/@tootallnate'
      - '~/n8n_dir/node_modules/accepts:/usr/local/lib/node_modules/accepts'
      - '~/n8n_dir/node_modules/agent-base:/usr/local/lib/node_modules/agent-base'
      - '~/n8n_dir/node_modules/ajv:/usr/local/lib/node_modules/ajv'
      - '~/n8n_dir/node_modules/ansi-styles:/usr/local/lib/node_modules/ansi-styles'
      - '~/n8n_dir/node_modules/asn1:/usr/local/lib/node_modules/asn1'
      - '~/n8n_dir/node_modules/assert:/usr/local/lib/node_modules/assert'
      - '~/n8n_dir/node_modules/assert-plus:/usr/local/lib/node_modules/assert-plus'
      - '~/n8n_dir/node_modules/ast-types:/usr/local/lib/node_modules/ast-types'
      - '~/n8n_dir/node_modules/asynckit:/usr/local/lib/node_modules/asynckit'
      - '~/n8n_dir/node_modules/aws-sign2:/usr/local/lib/node_modules/aws-sign2'
      - '~/n8n_dir/node_modules/aws4:/usr/local/lib/node_modules/aws4'
      - '~/n8n_dir/node_modules/base64-js:/usr/local/lib/node_modules/base64-js'
      - '~/n8n_dir/node_modules/batch:/usr/local/lib/node_modules/batch'
      - '~/n8n_dir/node_modules/bcrypt-pbkdf:/usr/local/lib/node_modules/bcrypt-pbkdf'
      - '~/n8n_dir/node_modules/bluebird:/usr/local/lib/node_modules/bluebird'
      - '~/n8n_dir/node_modules/boolbase:/usr/local/lib/node_modules/boolbase'
      - '~/n8n_dir/node_modules/brotli:/usr/local/lib/node_modules/brotli'
      - '~/n8n_dir/node_modules/bytes:/usr/local/lib/node_modules/bytes'
      - '~/n8n_dir/node_modules/caseless:/usr/local/lib/node_modules/caseless'
      - '~/n8n_dir/node_modules/chalk:/usr/local/lib/node_modules/chalk'
      - '~/n8n_dir/node_modules/cheerio:/usr/local/lib/node_modules/cheerio'
      - '~/n8n_dir/node_modules/cloudscraper:/usr/local/lib/node_modules/cloudscraper'
      - '~/n8n_dir/node_modules/co:/usr/local/lib/node_modules/co'
      - '~/n8n_dir/node_modules/color-convert:/usr/local/lib/node_modules/color-convert'
      - '~/n8n_dir/node_modules/color-name:/usr/local/lib/node_modules/color-name'
      - '~/n8n_dir/node_modules/combined-stream:/usr/local/lib/node_modules/combined-stream'
      - '~/n8n_dir/node_modules/component-emitter:/usr/local/lib/node_modules/component-emitter'
      - '~/n8n_dir/node_modules/content-disposition:/usr/local/lib/node_modules/content-disposition'
      - '~/n8n_dir/node_modules/content-type:/usr/local/lib/node_modules/content-type'
      - '~/n8n_dir/node_modules/cookiejar:/usr/local/lib/node_modules/cookiejar'
      - '~/n8n_dir/node_modules/core-util-is:/usr/local/lib/node_modules/core-util-is'
      - '~/n8n_dir/node_modules/css-select:/usr/local/lib/node_modules/css-select'
      - '~/n8n_dir/node_modules/css-what:/usr/local/lib/node_modules/css-what'
      - '~/n8n_dir/node_modules/dashdash:/usr/local/lib/node_modules/dashdash'
      - '~/n8n_dir/node_modules/data-uri-to-buffer:/usr/local/lib/node_modules/data-uri-to-buffer'
      - '~/n8n_dir/node_modules/debug:/usr/local/lib/node_modules/debug'
      - '~/n8n_dir/node_modules/deep-is:/usr/local/lib/node_modules/deep-is'
      - '~/n8n_dir/node_modules/degenerator:/usr/local/lib/node_modules/degenerator'
      - '~/n8n_dir/node_modules/delayed-stream:/usr/local/lib/node_modules/delayed-stream'
      - '~/n8n_dir/node_modules/delegates:/usr/local/lib/node_modules/delegates'
      - '~/n8n_dir/node_modules/depd:/usr/local/lib/node_modules/depd'
      - '~/n8n_dir/node_modules/destroy:/usr/local/lib/node_modules/destroy'
      - '~/n8n_dir/node_modules/dom-serializer:/usr/local/lib/node_modules/dom-serializer'
      - '~/n8n_dir/node_modules/domelementtype:/usr/local/lib/node_modules/domelementtype'
      - '~/n8n_dir/node_modules/domhandler:/usr/local/lib/node_modules/domhandler'
      - '~/n8n_dir/node_modules/domutils:/usr/local/lib/node_modules/domutils'
      - '~/n8n_dir/node_modules/ecc-jsbn:/usr/local/lib/node_modules/ecc-jsbn'
      - '~/n8n_dir/node_modules/ee-first:/usr/local/lib/node_modules/ee-first'
      - '~/n8n_dir/node_modules/emitter-component:/usr/local/lib/node_modules/emitter-component'
      - '~/n8n_dir/node_modules/enqueue:/usr/local/lib/node_modules/enqueue'
      - '~/n8n_dir/node_modules/enstore:/usr/local/lib/node_modules/enstore'
      - '~/n8n_dir/node_modules/entities:/usr/local/lib/node_modules/entities'
      - '~/n8n_dir/node_modules/error-inject:/usr/local/lib/node_modules/error-inject'
      - '~/n8n_dir/node_modules/escape-html:/usr/local/lib/node_modules/escape-html'
      - '~/n8n_dir/node_modules/escape-string-regexp:/usr/local/lib/node_modules/escape-string-regexp'
      - '~/n8n_dir/node_modules/escodegen:/usr/local/lib/node_modules/escodegen'
      - '~/n8n_dir/node_modules/esprima:/usr/local/lib/node_modules/esprima'
      - '~/n8n_dir/node_modules/estraverse:/usr/local/lib/node_modules/estraverse'
      - '~/n8n_dir/node_modules/esutils:/usr/local/lib/node_modules/esutils'
      - '~/n8n_dir/node_modules/extend:/usr/local/lib/node_modules/extend'
      - '~/n8n_dir/node_modules/extsprintf:/usr/local/lib/node_modules/extsprintf'
      - '~/n8n_dir/node_modules/fast-deep-equal:/usr/local/lib/node_modules/fast-deep-equal'
      - '~/n8n_dir/node_modules/fast-json-stable-stringify:/usr/local/lib/node_modules/fast-json-stable-stringify'
      - '~/n8n_dir/node_modules/fast-levenshtein:/usr/local/lib/node_modules/fast-levenshtein'
      - '~/n8n_dir/node_modules/file-uri-to-path:/usr/local/lib/node_modules/file-uri-to-path'
      - '~/n8n_dir/node_modules/forever-agent:/usr/local/lib/node_modules/forever-agent'
      - '~/n8n_dir/node_modules/form-data:/usr/local/lib/node_modules/form-data'
      - '~/n8n_dir/node_modules/format-parser:/usr/local/lib/node_modules/format-parser'
      - '~/n8n_dir/node_modules/formidable:/usr/local/lib/node_modules/formidable'
      - '~/n8n_dir/node_modules/fs-extra:/usr/local/lib/node_modules/fs-extra'
      - '~/n8n_dir/node_modules/ftp:/usr/local/lib/node_modules/ftp'
      - '~/n8n_dir/node_modules/get-uri:/usr/local/lib/node_modules/get-uri'
      - '~/n8n_dir/node_modules/getpass:/usr/local/lib/node_modules/getpass'
      - '~/n8n_dir/node_modules/graceful-fs:/usr/local/lib/node_modules/graceful-fs'
      - '~/n8n_dir/node_modules/har-schema:/usr/local/lib/node_modules/har-schema'
      - '~/n8n_dir/node_modules/har-validator:/usr/local/lib/node_modules/har-validator'
      - '~/n8n_dir/node_modules/has-flag:/usr/local/lib/node_modules/has-flag'
      - '~/n8n_dir/node_modules/htmlparser2:/usr/local/lib/node_modules/htmlparser2'
      - '~/n8n_dir/node_modules/http-context:/usr/local/lib/node_modules/http-context'
      - '~/n8n_dir/node_modules/http-errors:/usr/local/lib/node_modules/http-errors'
      - '~/n8n_dir/node_modules/http-incoming:/usr/local/lib/node_modules/http-incoming'
      - '~/n8n_dir/node_modules/http-outgoing:/usr/local/lib/node_modules/http-outgoing'
      - '~/n8n_dir/node_modules/http-proxy-agent:/usr/local/lib/node_modules/http-proxy-agent'
      - '~/n8n_dir/node_modules/http-signature:/usr/local/lib/node_modules/http-signature'
      - '~/n8n_dir/node_modules/https-proxy-agent:/usr/local/lib/node_modules/https-proxy-agent'
      - '~/n8n_dir/node_modules/iconv-lite:/usr/local/lib/node_modules/iconv-lite'
      - '~/n8n_dir/node_modules/inherits:/usr/local/lib/node_modules/inherits'
      - '~/n8n_dir/node_modules/ip:/usr/local/lib/node_modules/ip'
      - '~/n8n_dir/node_modules/is-browser:/usr/local/lib/node_modules/is-browser'
      - '~/n8n_dir/node_modules/is-typedarray:/usr/local/lib/node_modules/is-typedarray'
      - '~/n8n_dir/node_modules/is-url:/usr/local/lib/node_modules/is-url'
      - '~/n8n_dir/node_modules/isarray:/usr/local/lib/node_modules/isarray'
      - '~/n8n_dir/node_modules/isobject:/usr/local/lib/node_modules/isobject'
      - '~/n8n_dir/node_modules/isstream:/usr/local/lib/node_modules/isstream'
      - '~/n8n_dir/node_modules/jsbn:/usr/local/lib/node_modules/jsbn'
      - '~/n8n_dir/node_modules/json-schema:/usr/local/lib/node_modules/json-schema'
      - '~/n8n_dir/node_modules/json-schema-traverse:/usr/local/lib/node_modules/json-schema-traverse'
      - '~/n8n_dir/node_modules/json-stringify-safe:/usr/local/lib/node_modules/json-stringify-safe'
      - '~/n8n_dir/node_modules/jsonfile:/usr/local/lib/node_modules/jsonfile'
      - '~/n8n_dir/node_modules/jsprim:/usr/local/lib/node_modules/jsprim'
      - '~/n8n_dir/node_modules/koa-is-json:/usr/local/lib/node_modules/koa-is-json'
      - '~/n8n_dir/node_modules/levn:/usr/local/lib/node_modules/levn'
      - '~/n8n_dir/node_modules/lodash:/usr/local/lib/node_modules/lodash'
      - '~/n8n_dir/node_modules/lodash.assignin:/usr/local/lib/node_modules/lodash.assignin'
      - '~/n8n_dir/node_modules/lodash.bind:/usr/local/lib/node_modules/lodash.bind'
      - '~/n8n_dir/node_modules/lodash.defaults:/usr/local/lib/node_modules/lodash.defaults'
      - '~/n8n_dir/node_modules/lodash.filter:/usr/local/lib/node_modules/lodash.filter'
      - '~/n8n_dir/node_modules/lodash.flatten:/usr/local/lib/node_modules/lodash.flatten'
      - '~/n8n_dir/node_modules/lodash.foreach:/usr/local/lib/node_modules/lodash.foreach'
      - '~/n8n_dir/node_modules/lodash.map:/usr/local/lib/node_modules/lodash.map'
      - '~/n8n_dir/node_modules/lodash.merge:/usr/local/lib/node_modules/lodash.merge'
      - '~/n8n_dir/node_modules/lodash.pick:/usr/local/lib/node_modules/lodash.pick'
      - '~/n8n_dir/node_modules/lodash.reduce:/usr/local/lib/node_modules/lodash.reduce'
      - '~/n8n_dir/node_modules/lodash.reject:/usr/local/lib/node_modules/lodash.reject'
      - '~/n8n_dir/node_modules/lodash.some:/usr/local/lib/node_modules/lodash.some'
      - '~/n8n_dir/node_modules/lru-cache:/usr/local/lib/node_modules/lru-cache'
      - '~/n8n_dir/node_modules/media-typer:/usr/local/lib/node_modules/media-typer'
      - '~/n8n_dir/node_modules/methods:/usr/local/lib/node_modules/methods'
      - '~/n8n_dir/node_modules/mime:/usr/local/lib/node_modules/mime'
      - '~/n8n_dir/node_modules/mime-db:/usr/local/lib/node_modules/mime-db'
      - '~/n8n_dir/node_modules/mime-types:/usr/local/lib/node_modules/mime-types'
      - '~/n8n_dir/node_modules/monotonic-timestamp:/usr/local/lib/node_modules/monotonic-timestamp'
      - '~/n8n_dir/node_modules/ms:/usr/local/lib/node_modules/ms'
      - '~/n8n_dir/node_modules/negotiator:/usr/local/lib/node_modules/negotiator'
      - '~/n8n_dir/node_modules/netmask:/usr/local/lib/node_modules/netmask'
      - '~/n8n_dir/node_modules/nth-check:/usr/local/lib/node_modules/nth-check'
      - '~/n8n_dir/node_modules/oauth-sign:/usr/local/lib/node_modules/oauth-sign'
      - '~/n8n_dir/node_modules/object-assign:/usr/local/lib/node_modules/object-assign'
      - '~/n8n_dir/node_modules/on-finished:/usr/local/lib/node_modules/on-finished'
      - '~/n8n_dir/node_modules/optionator:/usr/local/lib/node_modules/optionator'
      - '~/n8n_dir/node_modules/pac-proxy-agent:/usr/local/lib/node_modules/pac-proxy-agent'
      - '~/n8n_dir/node_modules/pac-resolver:/usr/local/lib/node_modules/pac-resolver'
      - '~/n8n_dir/node_modules/parseurl:/usr/local/lib/node_modules/parseurl'
      - '~/n8n_dir/node_modules/performance-now:/usr/local/lib/node_modules/performance-now'
      - '~/n8n_dir/node_modules/prelude-ls:/usr/local/lib/node_modules/prelude-ls'
      - '~/n8n_dir/node_modules/process-nextick-args:/usr/local/lib/node_modules/process-nextick-args'
      - '~/n8n_dir/node_modules/promise-polyfill:/usr/local/lib/node_modules/promise-polyfill'
      - '~/n8n_dir/node_modules/proxy-agent:/usr/local/lib/node_modules/proxy-agent'
      - '~/n8n_dir/node_modules/proxy-from-env:/usr/local/lib/node_modules/proxy-from-env'
      - '~/n8n_dir/node_modules/psl:/usr/local/lib/node_modules/psl'
      - '~/n8n_dir/node_modules/punycode:/usr/local/lib/node_modules/punycode'
      - '~/n8n_dir/node_modules/qs:/usr/local/lib/node_modules/qs'
      - '~/n8n_dir/node_modules/querystring:/usr/local/lib/node_modules/querystring'
      - '~/n8n_dir/node_modules/raw-body:/usr/local/lib/node_modules/raw-body'
      - '~/n8n_dir/node_modules/readable-stream:/usr/local/lib/node_modules/readable-stream'
      - '~/n8n_dir/node_modules/request:/usr/local/lib/node_modules/request'
      - '~/n8n_dir/node_modules/request-promise:/usr/local/lib/node_modules/request-promise'
      - '~/n8n_dir/node_modules/request-promise-core:/usr/local/lib/node_modules/request-promise-core'
      - '~/n8n_dir/node_modules/request-x-ray:/usr/local/lib/node_modules/request-x-ray'
      - '~/n8n_dir/node_modules/safe-buffer:/usr/local/lib/node_modules/safe-buffer'
      - '~/n8n_dir/node_modules/safer-buffer:/usr/local/lib/node_modules/safer-buffer'
      - '~/n8n_dir/node_modules/selectn:/usr/local/lib/node_modules/selectn'
      - '~/n8n_dir/node_modules/setprototypeof:/usr/local/lib/node_modules/setprototypeof'
      - '~/n8n_dir/node_modules/sliced:/usr/local/lib/node_modules/sliced'
      - '~/n8n_dir/node_modules/smart-buffer:/usr/local/lib/node_modules/smart-buffer'
      - '~/n8n_dir/node_modules/socks:/usr/local/lib/node_modules/socks'
      - '~/n8n_dir/node_modules/socks-proxy-agent:/usr/local/lib/node_modules/socks-proxy-agent'
      - '~/n8n_dir/node_modules/source-map:/usr/local/lib/node_modules/source-map'
      - '~/n8n_dir/node_modules/sshpk:/usr/local/lib/node_modules/sshpk'
      - '~/n8n_dir/node_modules/statuses:/usr/local/lib/node_modules/statuses'
      - '~/n8n_dir/node_modules/stealthy-require:/usr/local/lib/node_modules/stealthy-require'
      - '~/n8n_dir/node_modules/stream-to-string:/usr/local/lib/node_modules/stream-to-string'
      - '~/n8n_dir/node_modules/string-format:/usr/local/lib/node_modules/string-format'
      - '~/n8n_dir/node_modules/string_decoder:/usr/local/lib/node_modules/string_decoder'
      - '~/n8n_dir/node_modules/superagent:/usr/local/lib/node_modules/superagent'
      - '~/n8n_dir/node_modules/superagent-proxy:/usr/local/lib/node_modules/superagent-proxy'
      - '~/n8n_dir/node_modules/supports-color:/usr/local/lib/node_modules/supports-color'
      - '~/n8n_dir/node_modules/toidentifier:/usr/local/lib/node_modules/toidentifier'
      - '~/n8n_dir/node_modules/torrent-search-api:/usr/local/lib/node_modules/torrent-search-api'
      - '~/n8n_dir/node_modules/tough-cookie:/usr/local/lib/node_modules/tough-cookie'
      - '~/n8n_dir/node_modules/tslib:/usr/local/lib/node_modules/tslib'
      - '~/n8n_dir/node_modules/tunnel-agent:/usr/local/lib/node_modules/tunnel-agent'
      - '~/n8n_dir/node_modules/tweetnacl:/usr/local/lib/node_modules/tweetnacl'
      - '~/n8n_dir/node_modules/type-check:/usr/local/lib/node_modules/type-check'
      - '~/n8n_dir/node_modules/type-is:/usr/local/lib/node_modules/type-is'
      - '~/n8n_dir/node_modules/universalify:/usr/local/lib/node_modules/universalify'
      - '~/n8n_dir/node_modules/unpipe:/usr/local/lib/node_modules/unpipe'
      - '~/n8n_dir/node_modules/uri-js:/usr/local/lib/node_modules/uri-js'
      - '~/n8n_dir/node_modules/util:/usr/local/lib/node_modules/util'
      - '~/n8n_dir/node_modules/util-deprecate:/usr/local/lib/node_modules/util-deprecate'
      - '~/n8n_dir/node_modules/uuid:/usr/local/lib/node_modules/uuid'
      - '~/n8n_dir/node_modules/vary:/usr/local/lib/node_modules/vary'
      - '~/n8n_dir/node_modules/verror:/usr/local/lib/node_modules/verror'
      - '~/n8n_dir/node_modules/word-wrap:/usr/local/lib/node_modules/word-wrap'
      - '~/n8n_dir/node_modules/wrap-fn:/usr/local/lib/node_modules/wrap-fn'
      - '~/n8n_dir/node_modules/x-ray:/usr/local/lib/node_modules/x-ray'
      - '~/n8n_dir/node_modules/x-ray-crawler:/usr/local/lib/node_modules/x-ray-crawler'
      - '~/n8n_dir/node_modules/x-ray-parse:/usr/local/lib/node_modules/x-ray-parse'
      - '~/n8n_dir/node_modules/x-ray-scraper:/usr/local/lib/node_modules/x-ray-scraper'
      - '~/n8n_dir/node_modules/xregexp:/usr/local/lib/node_modules/xregexp'
      - '~/n8n_dir/node_modules/yallist:/usr/local/lib/node_modules/yallist'
      - '~/n8n_dir/node_modules/yieldly:/usr/local/lib/node_modules/yieldly'
    image: 'n8nio/n8n:latest-rpi'
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=username
      - N8N_BASIC_AUTH_PASSWORD=your_secret_n8n_password
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=120
      - EXECUTIONS_TIMEOUT=300
      - EXECUTIONS_TIMEOUT_MAX=500
      - GENERIC_TIMEZONE=Europe/Berlin
      - NODE_FUNCTION_ALLOW_EXTERNAL=torrent-search-api
```

Once configured this way, run n8n and create a new workflow, copying the one proposed.

### Configure workflow

**Transmission**

To send commands to transmission you must pass its Basic Auth. To do so, open the Start download node and edit the Credentials; perform the same operation, choosing the new credentials, in the Start download new token node. In this automation we call transmission twice because of a security mechanism in transmission that prevents single-request commands from being triggered; performing the request twice bypasses this protection against cross-site request forgery (https://en.wikipedia.org/wiki/Cross-site_request_forgery). We use the X-Transmission-Session-Id returned by the first request to authenticate the second request.

**Telegram**

To make the workflow work as expected you must create a Telegram bot and configure the nodes (Torrent not found and Telegram1) to send your message once the workflow completes. Here's an easy guide to follow: https://docs.n8n.io/nodes/n8n-nodes-base.telegram/ In those nodes you should also configure the Chat ID; you may use your Telegram username, or retrieve your ID by chatting with useridinfobot, which replies with your ID.

**Ok google automation**

Since right now there is no n8n mobile client that can trigger automations via Google Assistant, I use an IFTTT applet to trigger the webhook. Connect your IFTTT account with Google Assistant and pick the trigger "Say a phrase with a text ingredient", then configure phrases such as: scarica $ -> download $, or metti in download $ -> put in download $, or any other trigger you prefer. Then configure the applet to call the n8n webhook on your server.
### Conclusion

In conclusion, this is a fully working automation that integrates a node library into n8n and provides an easy trigger for a complex operation.

**Security concerns**

Exposing a webhook that can trigger downloads could be abused to pull unwanted or malicious torrents, so you may want to authenticate the webhook request by passing an extra field in the body containing a token shared between the two endpoints (a sketch of this check is shown below). Moreover, the torrent-search-api library and its dependencies have some known vulnerabilities that you may want to keep off your media-center; hopefully these will be patched in a future release of the library. This is just an interesting proof of concept.

**Quality of the download**

You may want to introduce another block between the webhook trigger and the torrent search to sanity-check the movie name detected by Google Assistant: it sometimes misinterprets speech, and you may end up downloading potentially copyrighted material. Please use this automation only for free and open-source movies and music.
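As a rough sketch of the shared-token idea (the token value and field name are placeholders you would choose yourself and mirror on the IFTTT side), a Code node placed right after the webhook could look like this:

```js
// Hypothetical guard placed right after the webhook node
const SHARED_TOKEN = 'change-me'; // same value configured on the IFTTT side

const body = $input.first().json.body ?? {};
if (body.token !== SHARED_TOKEN) {
  // Stop the workflow for unauthenticated callers
  throw new Error('Unauthorized webhook call: bad or missing token');
}

return $input.all();
```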
by Nick Saraev
# AI LinkedIn Outreach Automation with Apollo, OpenAI & PhantomBuster

**Categories:**
- Sales Automation
- Lead Generation
- AI Personalization

This workflow creates a complete LinkedIn outreach automation system that generates targeted lead lists from Apollo using natural language, enriches profiles with AI-personalized icebreakers, and automatically sends connection requests through PhantomBuster. Built by someone who's made over $1 million with AI automation, this system demonstrates the real-world approach to building profitable automation workflows.

## Benefits

- **Natural Language Lead Targeting** - describe your ideal prospects in plain English and automatically generate Apollo search URLs
- **AI-Powered Personalization** - creates custom icebreakers based on LinkedIn profile data, employment history, and professional background
- **Complete Outreach Pipeline** - from lead discovery to personalized connection requests, fully automated end-to-end
- **Smart Data Management** - automatically tracks all prospects in Google Sheets with deduplication and status tracking
- **Cost-Effective Scraping** - uses Apify to extract Apollo data without expensive subscription costs
- **Scalable Architecture** - processes hundreds of leads while respecting LinkedIn's connection limits

## How It Works

**Natural Language Lead Generation:**
- A form input accepts audience descriptions in plain English
- AI converts descriptions into properly formatted Apollo search URLs
- Automatically includes location, company size, job titles, and keyword filters

**Apollo Data Extraction:**
- Uses an Apify actor to scrape targeted lead lists from Apollo
- Extracts LinkedIn URLs, email addresses, employment history, and profile data
- Processes 500+ leads per run with detailed professional information

**AI Personalization Engine:**
- Analyzes LinkedIn profile data including job history and company information
- Generates personalized icebreakers using proven connection request templates
- Creates human-like messages that reference specific career details and achievements

**Google Sheets Integration:**
- Automatically stores all lead data in an organized spreadsheet format
- Tracks prospect information, contact details, and generated icebreakers
- Provides easy data management and campaign tracking

**PhantomBuster Automation:**
- Connects to the PhantomBuster API to trigger LinkedIn connection campaigns
- Sends personalized connection requests with custom icebreakers
- Respects LinkedIn's daily limits and mimics human behavior patterns

## Business Use Cases

- **Sales Teams** - automate prospecting for B2B outreach campaigns
- **Agencies** - scale client acquisition through targeted LinkedIn outreach
- **Recruiters** - find and connect with qualified candidates efficiently
- **Entrepreneurs** - build professional networks in specific industries
- **Business Development** - generate qualified leads for partnership opportunities

## Revenue Potential

This system can replace expensive LinkedIn outreach tools that cost $200-500/month. Users typically see:

- 400% improvement in response rates through personalization
- 10x faster lead generation compared to manual prospecting
- Ability to process 500+ leads per hour vs. 10-20 manually

**Difficulty Level:** Intermediate
**Estimated Build Time:** 1-2 hours
**Monthly Operating Cost:** ~$50 (Apollo + PhantomBuster + AI APIs)

## Watch My Complete 1-Hour Build

Want to see exactly how I built this system from scratch? I walk through the entire development process live, including all the debugging, API integrations, and real-world testing that goes into building profitable automation systems.

🎥 **See My Live Build Process:** "Build This Automated AI LinkedIn DM System in 1 Hour (N8N)"

This comprehensive tutorial shows my actual development approach - including the detours, problem-solving, and iterative testing that real automation building involves.

## Required Google Sheets Setup

Create a Google Sheet with these exact column headers in row 1:

- id - unique prospect identifier
- first_name - contact's first name
- last_name - contact's last name
- name - full name
- linkedin_url - LinkedIn profile URL
- title - current job title
- email_status - email verification status
- photo_url - profile photo URL
- icebreaker - AI-generated personalized message

**Setup Instructions:**
1. Create the Google Sheet with these headers in row 1
2. Connect Google Sheets OAuth in n8n
3. Update the document ID in the "Add to Google Sheet" node
4. PhantomBuster will read from this sheet for automated outreach

## Set Up Steps

1. **Apollo & Apify Configuration:**
   - Set up an Apify account and obtain API credentials
   - Configure the Apollo scraper actor with proper parameters
   - Test lead extraction with sample audience descriptions
2. **AI Personalization Setup:**
   - Configure the OpenAI API for natural language processing and personalization
   - Set up prompt templates for audience targeting and icebreaker generation
   - Test personalization quality with sample LinkedIn profiles (see the prompt-builder sketch at the end of this template)
3. **Google Sheets Integration:**
   - Create the lead tracking spreadsheet with the proper column structure
   - Configure Google Sheets API credentials and permissions
   - Set up data mapping for automatic lead storage
4. **PhantomBuster Connection:**
   - Set up a PhantomBuster account and LinkedIn connection
   - Configure the LinkedIn auto-connect agent with custom message templates
   - Connect the API for automated campaign triggering
5. **Form and Workflow Setup:**
   - Configure the form trigger for audience input collection
   - Set up the data flow between all components
   - Add proper error handling and rate limiting
6. **Testing and Optimization:**
   - Start with small batches (5-10 connections daily)
   - Monitor LinkedIn account health and response rates
   - Optimize icebreaker templates based on performance data

## Important Compliance Notes

- **LinkedIn Limits:** respect the 100-connection-requests-per-week limit
- **Account Safety:** use PhantomBuster's human-like behavior patterns
- **Message Quality:** regularly update templates to avoid automation detection
- **Response Management:** monitor and respond to replies within 24 hours

## Advanced Extensions

This system can be enhanced with:

- **Multi-channel Outreach:** add email sequences for comprehensive campaigns
- **A/B Testing:** test different icebreaker templates automatically
- **CRM Integration:** connect to Salesforce, HubSpot, or other sales systems
- **Response Tracking:** monitor reply rates and optimize messaging

## Explore My Channel

For more advanced automation systems that generate real business results, check out my YouTube channel where I share the exact strategies I've used to make over $1 million with AI automation.
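To make the personalization step concrete, here is a minimal sketch of a Code node that assembles an icebreaker prompt from lead fields. The field names mirror the sheet columns above, except `employment_history`, which - like the prompt wording itself - is an illustrative assumption rather than the template used in the video:

```js
// Hypothetical prompt builder for the icebreaker-generation step
const lead = $input.first().json; // one enriched lead from the Apollo/Apify extraction

const prompt =
  `Write a short, friendly LinkedIn connection note (under 300 characters) ` +
  `for ${lead.first_name} ${lead.last_name}, currently "${lead.title}". ` +
  `Reference one specific detail from their background and avoid sounding automated. ` +
  `Background: ${lead.employment_history ?? 'n/a'}`;

return [{ json: { ...lead, prompt } }];
```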
by shepard
## Overview

This workflow leverages the LangChain Code node to implement a fully customizable conversational agent. It is ideal for users who need granular control over their agent's prompts while reducing unnecessary token consumption from reserved tool-calling functionality (compared to n8n's built-in Conversation Agent).

## Setup Instructions

1. **Configure Gemini Credentials**: Set up your Google Gemini API key (get an API key here if needed). Alternatively, you may use other AI provider nodes.
2. **Interaction Methods**:
   - Test directly in the workflow editor using the "Chat" button
   - Activate the workflow and access the chat interface via the URL provided by the When Chat Message Received node

## Customization Options

- **Interface Settings**: Configure chat UI elements (e.g., title) in the When Chat Message Received node
- **Prompt Engineering**: Define agent personality and conversation structure in the Construct & Execute LLM Prompt node's template variable
  ⚠️ The template must preserve the {chat_history} and {input} placeholders for proper LangChain operation - see the example at the end of this section
- **Model Selection**: Swap language models through the language model input field in Construct & Execute LLM Prompt
- **Memory Control**: Adjust conversation history length in the Store Conversation History node

## Requirements

⚠️ This workflow uses the LangChain Code node, which only works on self-hosted n8n. (Refer to the LangChain Code node docs.)
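For illustration, a prompt template that satisfies the placeholder requirement might look like the following. The persona text is an arbitrary example; only the two placeholders are mandatory:

```text
You are a concise, friendly assistant for an internal support team.
Use the conversation so far when it is relevant to the question.

Conversation history:
{chat_history}

User: {input}
Assistant:
```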
by Sleak
## Who is this template for?

This workflow template is designed for business owners and HR professionals who want to automatically detect and structure unstructured job applications received through email. Additional email categories can also be added, each with its own workflow.

## How it works

1. Every time a new email is received, an OpenAI model classifies it into a predefined category by analyzing the plain text of the email and the extracted content of the attachment.
2. If the email is classified as a job application, an OpenAI model uses the email's plain text and extracted attachment content to populate predefined fields such as age and study (see the sample output below).
3. A useful additional step would be to push the applicant and their structured job application directly into a CRM or ATS like HubSpot or Recruitee.

## Set up steps

1. Configure your IMAP credentials to connect your email account. Use this n8n documentation page for quickstart guides for common email providers.
2. Connect your OpenAI account in the 'Classify email' node, and add or remove categories for classification in this node. Make sure each description is clear and concise.
3. Connect your OpenAI account in the 'Extract variables - email & attachment' node, and add or remove the predefined fields that should be populated for job applications. Make sure each description is clear and concise.
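As an example of what the extraction step might produce for one application - field names such as `age` and `study` follow the description above, the rest are illustrative assumptions, and the values are fabricated sample data:

```json
{
  "category": "job_application",
  "name": "Jane Doe",
  "age": 29,
  "study": "MSc Computer Science",
  "years_of_experience": 5
}
```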
by n8n Team
This workflow automatically adds closed deals from Pipedrive as new customers in Stripe.

## Prerequisites

- Pipedrive account and Pipedrive credentials
- Stripe account and Stripe credentials

## How it works

1. The Pipedrive trigger node starts the workflow when a deal gets updated in Pipedrive.
2. An IF node checks that the deal's current won time differs from the previous one, and continues the workflow if it does.
3. A Pipedrive node extracts the organization's details to pass them on.
4. An HTTP Request node searches for the same organization's details within Stripe (see the sketch below).
5. If the customer doesn't exist within Stripe, a Merge node passes the new customer details to Stripe.
6. The Stripe node creates the new customer.
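For reference, a minimal sketch of what the lookup in step 4 might look like, written as an n8n Code node instead of the HTTP Request node. It assumes matching customers by name via Stripe's customer search endpoint (verify that your Stripe API version supports /v1/customers/search; listing by email via GET /v1/customers?email=... is an alternative), and the `STRIPE_SECRET_KEY` environment variable is an assumption:

```js
// Hypothetical lookup: search Stripe customers by organization name
const orgName = $input.first().json.name; // organization name from the Pipedrive node

const response = await this.helpers.httpRequest({
  method: 'GET',
  url: 'https://api.stripe.com/v1/customers/search',
  qs: { query: `name:'${orgName}'` },
  headers: { Authorization: `Bearer ${$env.STRIPE_SECRET_KEY}` },
});

// An empty data array means no existing customer, so the workflow should create one
return [{ json: { exists: response.data.length > 0, matches: response.data } }];
```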