by Rudi Afandi
Description
Turn your Telegram bot into a powerful OCR (Optical Character Recognition) tool. This workflow lets you send any image (a screenshot, a photo of a document, a picture of a sign) to your bot, which instantly extracts the text and sends it back to you. Powered by Google's Gemini AI, this automation is perfect for quickly digitizing notes, saving important snippets, or avoiding manual typing.

How it works
This workflow performs a few high-level steps:
1. It triggers when a new image is sent to your Telegram bot.
2. It sends the image to the Google Gemini Vision API to be analyzed.
3. It extracts the text found in the image.
4. It sends the extracted text back to you as a message in Telegram.

Set up steps
Estimated set up time: less than 5 minutes. The setup is straightforward; you only need to configure two credentials:
- Telegram Bot Credentials: to connect your bot.
- Google Gemini API Credentials: to use the OCR feature. You can get a free API key from Google AI Studio.
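For the curious, the Gemini call behind the OCR step boils down to a single generateContent request. Here is a minimal sketch in TypeScript, assuming the gemini-1.5-flash model and an illustrative prompt (the workflow's actual node settings may differ):

```typescript
// Minimal sketch: OCR an image with the Gemini API. Model name and prompt are assumptions.
const GEMINI_API_KEY = process.env.GEMINI_API_KEY; // free key from Google AI Studio

async function extractText(imageBase64: string, mimeType: string): Promise<string> {
  const url = `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${GEMINI_API_KEY}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [{
        parts: [
          { text: "Extract all text visible in this image. Return only the text." },
          // Telegram photos arrive as binary; base64-encode them for inline transport.
          { inline_data: { mime_type: mimeType, data: imageBase64 } },
        ],
      }],
    }),
  });
  const data = await res.json();
  // The extracted text lands in the first candidate's first content part.
  return data.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
}
```

Base64-encoding the downloaded Telegram photo and passing it as inline_data is all the vision endpoint needs; no separate upload step is required for typical photo sizes.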
by Danger
Ok google download "movie name" I develop this automation to improve my quality of life in handling torrents in my media-center. Goal Automate the search operations of a movie based on its name and trigger a download using your transmission-daemon. Setup Prerequisite Transmission daemon up and running and its authentication method N8N configured self-hosted or with the possibility to add npm package better with docker-compose.yaml Telegram bot credential [optional] Configuration Create a folder where your docker-compose.yaml belongs n8n_dir and proceed in installing the node package. cd ~/n8n_dir npm i torrent-search-api Configuring your docker-compose.yaml file this way. You must include all the dependencies of torrent-search-api. This will let you run the new torrent search node presented in this workflow. version: '3.3' services: n8n: container_name: n8n ports: '5678:5678' restart: always volumes: '~/n8n_dir/.n8n:/home/node/.n8n' '~/n8n_dir/node_modules/@tootallnate:/usr/local/lib/node_modules/@tootallnate' '~/n8n_dir/node_modules/accepts:/usr/local/lib/node_modules/accepts' '~/n8n_dir/node_modules/agent-base:/usr/local/lib/node_modules/agent-base' '~/n8n_dir/node_modules/ajv:/usr/local/lib/node_modules/ajv' '~/n8n_dir/node_modules/ansi-styles:/usr/local/lib/node_modules/ansi-styles' '~/n8n_dir/node_modules/asn1:/usr/local/lib/node_modules/asn1' '~/n8n_dir/node_modules/assert:/usr/local/lib/node_modules/assert' '~/n8n_dir/node_modules/assert-plus:/usr/local/lib/node_modules/assert-plus' '~/n8n_dir/node_modules/ast-types:/usr/local/lib/node_modules/ast-types' '~/n8n_dir/node_modules/asynckit:/usr/local/lib/node_modules/asynckit' '~/n8n_dir/node_modules/aws-sign2:/usr/local/lib/node_modules/aws-sign2' '~/n8n_dir/node_modules/aws4:/usr/local/lib/node_modules/aws4' '~/n8n_dir/node_modules/base64-js:/usr/local/lib/node_modules/base64-js' '~/n8n_dir/node_modules/batch:/usr/local/lib/node_modules/batch' '~/n8n_dir/node_modules/bcrypt-pbkdf:/usr/local/lib/node_modules/bcrypt-pbkdf' '~/n8n_dir/node_modules/bluebird:/usr/local/lib/node_modules/bluebird' '~/n8n_dir/node_modules/boolbase:/usr/local/lib/node_modules/boolbase' '~/n8n_dir/node_modules/brotli:/usr/local/lib/node_modules/brotli' '~/n8n_dir/node_modules/bytes:/usr/local/lib/node_modules/bytes' '~/n8n_dir/node_modules/caseless:/usr/local/lib/node_modules/caseless' '~/n8n_dir/node_modules/chalk:/usr/local/lib/node_modules/chalk' '~/n8n_dir/node_modules/cheerio:/usr/local/lib/node_modules/cheerio' '~/n8n_dir/node_modules/cloudscraper:/usr/local/lib/node_modules/cloudscraper' '~/n8n_dir/node_modules/co:/usr/local/lib/node_modules/co' '~/n8n_dir/node_modules/color-convert:/usr/local/lib/node_modules/color-convert' '~/n8n_dir/node_modules/color-name:/usr/local/lib/node_modules/color-name' '~/n8n_dir/node_modules/combined-stream:/usr/local/lib/node_modules/combined-stream' '~/n8n_dir/node_modules/component-emitter:/usr/local/lib/node_modules/component-emitter' '~/n8n_dir/node_modules/content-disposition:/usr/local/lib/node_modules/content-disposition' '~/n8n_dir/node_modules/content-type:/usr/local/lib/node_modules/content-type' '~/n8n_dir/node_modules/cookiejar:/usr/local/lib/node_modules/cookiejar' '~/n8n_dir/node_modules/core-util-is:/usr/local/lib/node_modules/core-util-is' '~/n8n_dir/node_modules/css-select:/usr/local/lib/node_modules/css-select' '~/n8n_dir/node_modules/css-what:/usr/local/lib/node_modules/css-what' '~/n8n_dir/node_modules/dashdash:/usr/local/lib/node_modules/dashdash' 
'~/n8n_dir/node_modules/data-uri-to-buffer:/usr/local/lib/node_modules/data-uri-to-buffer' '~/n8n_dir/node_modules/debug:/usr/local/lib/node_modules/debug' '~/n8n_dir/node_modules/deep-is:/usr/local/lib/node_modules/deep-is' '~/n8n_dir/node_modules/degenerator:/usr/local/lib/node_modules/degenerator' '~/n8n_dir/node_modules/delayed-stream:/usr/local/lib/node_modules/delayed-stream' '~/n8n_dir/node_modules/delegates:/usr/local/lib/node_modules/delegates' '~/n8n_dir/node_modules/depd:/usr/local/lib/node_modules/depd' '~/n8n_dir/node_modules/destroy:/usr/local/lib/node_modules/destroy' '~/n8n_dir/node_modules/dom-serializer:/usr/local/lib/node_modules/dom-serializer' '~/n8n_dir/node_modules/domelementtype:/usr/local/lib/node_modules/domelementtype' '~/n8n_dir/node_modules/domhandler:/usr/local/lib/node_modules/domhandler' '~/n8n_dir/node_modules/domutils:/usr/local/lib/node_modules/domutils' '~/n8n_dir/node_modules/ecc-jsbn:/usr/local/lib/node_modules/ecc-jsbn' '~/n8n_dir/node_modules/ee-first:/usr/local/lib/node_modules/ee-first' '~/n8n_dir/node_modules/emitter-component:/usr/local/lib/node_modules/emitter-component' '~/n8n_dir/node_modules/enqueue:/usr/local/lib/node_modules/enqueue' '~/n8n_dir/node_modules/enstore:/usr/local/lib/node_modules/enstore' '~/n8n_dir/node_modules/entities:/usr/local/lib/node_modules/entities' '~/n8n_dir/node_modules/error-inject:/usr/local/lib/node_modules/error-inject' '~/n8n_dir/node_modules/escape-html:/usr/local/lib/node_modules/escape-html' '~/n8n_dir/node_modules/escape-string-regexp:/usr/local/lib/node_modules/escape-string-regexp' '~/n8n_dir/node_modules/escodegen:/usr/local/lib/node_modules/escodegen' '~/n8n_dir/node_modules/esprima:/usr/local/lib/node_modules/esprima' '~/n8n_dir/node_modules/estraverse:/usr/local/lib/node_modules/estraverse' '~/n8n_dir/node_modules/esutils:/usr/local/lib/node_modules/esutils' '~/n8n_dir/node_modules/extend:/usr/local/lib/node_modules/extend' '~/n8n_dir/node_modules/extsprintf:/usr/local/lib/node_modules/extsprintf' '~/n8n_dir/node_modules/fast-deep-equal:/usr/local/lib/node_modules/fast-deep-equal' '~/n8n_dir/node_modules/fast-json-stable-stringify:/usr/local/lib/node_modules/fast-json-stable-stringify' '~/n8n_dir/node_modules/fast-levenshtein:/usr/local/lib/node_modules/fast-levenshtein' '~/n8n_dir/node_modules/file-uri-to-path:/usr/local/lib/node_modules/file-uri-to-path' '~/n8n_dir/node_modules/forever-agent:/usr/local/lib/node_modules/forever-agent' '~/n8n_dir/node_modules/form-data:/usr/local/lib/node_modules/form-data' '~/n8n_dir/node_modules/format-parser:/usr/local/lib/node_modules/format-parser' '~/n8n_dir/node_modules/formidable:/usr/local/lib/node_modules/formidable' '~/n8n_dir/node_modules/fs-extra:/usr/local/lib/node_modules/fs-extra' '~/n8n_dir/node_modules/ftp:/usr/local/lib/node_modules/ftp' '~/n8n_dir/node_modules/get-uri:/usr/local/lib/node_modules/get-uri' '~/n8n_dir/node_modules/getpass:/usr/local/lib/node_modules/getpass' '~/n8n_dir/node_modules/graceful-fs:/usr/local/lib/node_modules/graceful-fs' '~/n8n_dir/node_modules/har-schema:/usr/local/lib/node_modules/har-schema' '~/n8n_dir/node_modules/har-validator:/usr/local/lib/node_modules/har-validator' '~/n8n_dir/node_modules/has-flag:/usr/local/lib/node_modules/has-flag' '~/n8n_dir/node_modules/htmlparser2:/usr/local/lib/node_modules/htmlparser2' '~/n8n_dir/node_modules/http-context:/usr/local/lib/node_modules/http-context' '~/n8n_dir/node_modules/http-errors:/usr/local/lib/node_modules/http-errors' 
'~/n8n_dir/node_modules/http-incoming:/usr/local/lib/node_modules/http-incoming' '~/n8n_dir/node_modules/http-outgoing:/usr/local/lib/node_modules/http-outgoing' '~/n8n_dir/node_modules/http-proxy-agent:/usr/local/lib/node_modules/http-proxy-agent' '~/n8n_dir/node_modules/http-signature:/usr/local/lib/node_modules/http-signature' '~/n8n_dir/node_modules/https-proxy-agent:/usr/local/lib/node_modules/https-proxy-agent' '~/n8n_dir/node_modules/iconv-lite:/usr/local/lib/node_modules/iconv-lite' '~/n8n_dir/node_modules/inherits:/usr/local/lib/node_modules/inherits' '~/n8n_dir/node_modules/ip:/usr/local/lib/node_modules/ip' '~/n8n_dir/node_modules/is-browser:/usr/local/lib/node_modules/is-browser' '~/n8n_dir/node_modules/is-typedarray:/usr/local/lib/node_modules/is-typedarray' '~/n8n_dir/node_modules/is-url:/usr/local/lib/node_modules/is-url' '~/n8n_dir/node_modules/isarray:/usr/local/lib/node_modules/isarray' '~/n8n_dir/node_modules/isobject:/usr/local/lib/node_modules/isobject' '~/n8n_dir/node_modules/isstream:/usr/local/lib/node_modules/isstream' '~/n8n_dir/node_modules/jsbn:/usr/local/lib/node_modules/jsbn' '~/n8n_dir/node_modules/json-schema:/usr/local/lib/node_modules/json-schema' '~/n8n_dir/node_modules/json-schema-traverse:/usr/local/lib/node_modules/json-schema-traverse' '~/n8n_dir/node_modules/json-stringify-safe:/usr/local/lib/node_modules/json-stringify-safe' '~/n8n_dir/node_modules/jsonfile:/usr/local/lib/node_modules/jsonfile' '~/n8n_dir/node_modules/jsprim:/usr/local/lib/node_modules/jsprim' '~/n8n_dir/node_modules/koa-is-json:/usr/local/lib/node_modules/koa-is-json' '~/n8n_dir/node_modules/levn:/usr/local/lib/node_modules/levn' '~/n8n_dir/node_modules/lodash:/usr/local/lib/node_modules/lodash' '~/n8n_dir/node_modules/lodash.assignin:/usr/local/lib/node_modules/lodash.assignin' '~/n8n_dir/node_modules/lodash.bind:/usr/local/lib/node_modules/lodash.bind' '~/n8n_dir/node_modules/lodash.defaults:/usr/local/lib/node_modules/lodash.defaults' '~/n8n_dir/node_modules/lodash.filter:/usr/local/lib/node_modules/lodash.filter' '~/n8n_dir/node_modules/lodash.flatten:/usr/local/lib/node_modules/lodash.flatten' '~/n8n_dir/node_modules/lodash.foreach:/usr/local/lib/node_modules/lodash.foreach' '~/n8n_dir/node_modules/lodash.map:/usr/local/lib/node_modules/lodash.map' '~/n8n_dir/node_modules/lodash.merge:/usr/local/lib/node_modules/lodash.merge' '~/n8n_dir/node_modules/lodash.pick:/usr/local/lib/node_modules/lodash.pick' '~/n8n_dir/node_modules/lodash.reduce:/usr/local/lib/node_modules/lodash.reduce' '~/n8n_dir/node_modules/lodash.reject:/usr/local/lib/node_modules/lodash.reject' '~/n8n_dir/node_modules/lodash.some:/usr/local/lib/node_modules/lodash.some' '~/n8n_dir/node_modules/lru-cache:/usr/local/lib/node_modules/lru-cache' '~/n8n_dir/node_modules/media-typer:/usr/local/lib/node_modules/media-typer' '~/n8n_dir/node_modules/methods:/usr/local/lib/node_modules/methods' '~/n8n_dir/node_modules/mime:/usr/local/lib/node_modules/mime' '~/n8n_dir/node_modules/mime-db:/usr/local/lib/node_modules/mime-db' '~/n8n_dir/node_modules/mime-types:/usr/local/lib/node_modules/mime-types' '~/n8n_dir/node_modules/monotonic-timestamp:/usr/local/lib/node_modules/monotonic-timestamp' '~/n8n_dir/node_modules/ms:/usr/local/lib/node_modules/ms' '~/n8n_dir/node_modules/negotiator:/usr/local/lib/node_modules/negotiator' '~/n8n_dir/node_modules/netmask:/usr/local/lib/node_modules/netmask' '~/n8n_dir/node_modules/nth-check:/usr/local/lib/node_modules/nth-check' 
'~/n8n_dir/node_modules/oauth-sign:/usr/local/lib/node_modules/oauth-sign' '~/n8n_dir/node_modules/object-assign:/usr/local/lib/node_modules/object-assign' '~/n8n_dir/node_modules/on-finished:/usr/local/lib/node_modules/on-finished' '~/n8n_dir/node_modules/optionator:/usr/local/lib/node_modules/optionator' '~/n8n_dir/node_modules/pac-proxy-agent:/usr/local/lib/node_modules/pac-proxy-agent' '~/n8n_dir/node_modules/pac-resolver:/usr/local/lib/node_modules/pac-resolver' '~/n8n_dir/node_modules/parseurl:/usr/local/lib/node_modules/parseurl' '~/n8n_dir/node_modules/performance-now:/usr/local/lib/node_modules/performance-now' '~/n8n_dir/node_modules/prelude-ls:/usr/local/lib/node_modules/prelude-ls' '~/n8n_dir/node_modules/process-nextick-args:/usr/local/lib/node_modules/process-nextick-args' '~/n8n_dir/node_modules/promise-polyfill:/usr/local/lib/node_modules/promise-polyfill' '~/n8n_dir/node_modules/proxy-agent:/usr/local/lib/node_modules/proxy-agent' '~/n8n_dir/node_modules/proxy-from-env:/usr/local/lib/node_modules/proxy-from-env' '~/n8n_dir/node_modules/psl:/usr/local/lib/node_modules/psl' '~/n8n_dir/node_modules/punycode:/usr/local/lib/node_modules/punycode' '~/n8n_dir/node_modules/qs:/usr/local/lib/node_modules/qs' '~/n8n_dir/node_modules/querystring:/usr/local/lib/node_modules/querystring' '~/n8n_dir/node_modules/raw-body:/usr/local/lib/node_modules/raw-body' '~/n8n_dir/node_modules/readable-stream:/usr/local/lib/node_modules/readable-stream' '~/n8n_dir/node_modules/request:/usr/local/lib/node_modules/request' '~/n8n_dir/node_modules/request-promise:/usr/local/lib/node_modules/request-promise' '~/n8n_dir/node_modules/request-promise-core:/usr/local/lib/node_modules/request-promise-core' '~/n8n_dir/node_modules/request-x-ray:/usr/local/lib/node_modules/request-x-ray' '~/n8n_dir/node_modules/safe-buffer:/usr/local/lib/node_modules/safe-buffer' '~/n8n_dir/node_modules/safer-buffer:/usr/local/lib/node_modules/safer-buffer' '~/n8n_dir/node_modules/selectn:/usr/local/lib/node_modules/selectn' '~/n8n_dir/node_modules/setprototypeof:/usr/local/lib/node_modules/setprototypeof' '~/n8n_dir/node_modules/sliced:/usr/local/lib/node_modules/sliced' '~/n8n_dir/node_modules/smart-buffer:/usr/local/lib/node_modules/smart-buffer' '~/n8n_dir/node_modules/socks:/usr/local/lib/node_modules/socks' '~/n8n_dir/node_modules/socks-proxy-agent:/usr/local/lib/node_modules/socks-proxy-agent' '~/n8n_dir/node_modules/source-map:/usr/local/lib/node_modules/source-map' '~/n8n_dir/node_modules/sshpk:/usr/local/lib/node_modules/sshpk' '~/n8n_dir/node_modules/statuses:/usr/local/lib/node_modules/statuses' '~/n8n_dir/node_modules/stealthy-require:/usr/local/lib/node_modules/stealthy-require' '~/n8n_dir/node_modules/stream-to-string:/usr/local/lib/node_modules/stream-to-string' '~/n8n_dir/node_modules/string-format:/usr/local/lib/node_modules/string-format' '~/n8n_dir/node_modules/string_decoder:/usr/local/lib/node_modules/string_decoder' '~/n8n_dir/node_modules/superagent:/usr/local/lib/node_modules/superagent' '~/n8n_dir/node_modules/superagent-proxy:/usr/local/lib/node_modules/superagent-proxy' '~/n8n_dir/node_modules/supports-color:/usr/local/lib/node_modules/supports-color' '~/n8n_dir/node_modules/toidentifier:/usr/local/lib/node_modules/toidentifier' '~/n8n_dir/node_modules/torrent-search-api:/usr/local/lib/node_modules/torrent-search-api' '~/n8n_dir/node_modules/tough-cookie:/usr/local/lib/node_modules/tough-cookie' '~/n8n_dir/node_modules/tslib:/usr/local/lib/node_modules/tslib' 
'~/n8n_dir/node_modules/tunnel-agent:/usr/local/lib/node_modules/tunnel-agent' '~/n8n_dir/node_modules/tweetnacl:/usr/local/lib/node_modules/tweetnacl' '~/n8n_dir/node_modules/type-check:/usr/local/lib/node_modules/type-check' '~/n8n_dir/node_modules/type-is:/usr/local/lib/node_modules/type-is' '~/n8n_dir/node_modules/universalify:/usr/local/lib/node_modules/universalify' '~/n8n_dir/node_modules/unpipe:/usr/local/lib/node_modules/unpipe' '~/n8n_dir/node_modules/uri-js:/usr/local/lib/node_modules/uri-js' '~/n8n_dir/node_modules/util:/usr/local/lib/node_modules/util' '~/n8n_dir/node_modules/util-deprecate:/usr/local/lib/node_modules/util-deprecate' '~/n8n_dir/node_modules/uuid:/usr/local/lib/node_modules/uuid' '~/n8n_dir/node_modules/vary:/usr/local/lib/node_modules/vary' '~/n8n_dir/node_modules/verror:/usr/local/lib/node_modules/verror' '~/n8n_dir/node_modules/word-wrap:/usr/local/lib/node_modules/word-wrap' '~/n8n_dir/node_modules/wrap-fn:/usr/local/lib/node_modules/wrap-fn' '~/n8n_dir/node_modules/x-ray:/usr/local/lib/node_modules/x-ray' '~/n8n_dir/node_modules/x-ray-crawler:/usr/local/lib/node_modules/x-ray-crawler' '~/n8n_dir/node_modules/x-ray-parse:/usr/local/lib/node_modules/x-ray-parse' '~/n8n_dir/node_modules/x-ray-scraper:/usr/local/lib/node_modules/x-ray-scraper' '~/n8n_dir/node_modules/xregexp:/usr/local/lib/node_modules/xregexp' '~/n8n_dir/node_modules/yallist:/usr/local/lib/node_modules/yallist' '~/n8n_dir/node_modules/yieldly:/usr/local/lib/node_modules/yieldly' image: 'n8nio/n8n:latest-rpi' environment: N8N_BASIC_AUTH_ACTIVE=true N8N_BASIC_AUTH_USER=username N8N_BASIC_AUTH_PASSWORD=your_secret_n8n_password EXECUTIONS_DATA_PRUNE=true EXECUTIONS_DATA_MAX_AGE=120 EXECUTIONS_TIMEOUT=300 EXECUTIONS_TIMEOUT_MAX=500 GENERIC_TIMEZONE=Europe/Berlin NODE_FUNCTION_ALLOW_EXTERNAL=torrent-search-api Once configured this way run n8n and create a new workflow coping the one proposed. Configure workflow Transmission In order to send command to transmission you must validate the Basic Auth. To do so: open the Start download node and edit the Credentials. Perform the same operation choosing the new credentials also in node Start download new token. In this automation we call transmission twice due to a security protocol in transmission system that prevents single click commands to be triggered, performing the request twice bypasses this security mechanism. https://en.wikipedia.org/wiki/Cross-site_request_forgery We use the X-Transmission-Session-Id provided by the first request to authenticate the second request. Telegram In order to make the workflow work as expected you must create a telegram bot and configure the nodes (Torrent not found and Telegram1) to send your message once the workflow is complete. Here's an easy guide to follow https://docs.n8n.io/nodes/n8n-nodes-base.telegram/ In those nodes you also should configure the Chat ID, you may use your telegram username or use a bot to retrieve your id. You may chat with useridinfobot that sends you your id. Ok google automation Since right now we do not have a n8n client for mobile that can trigger automation using google assistant I decided to use an IFTTT automation to trigger the webhook. I connect my IFTTT account with google assistant and pick the trigger. Say a phrase with a text ingredient as in the picture below. And configure the trigger this way. scarica $ -> download $ or metti in download $ -> put in download $ or some other trigger you may want. Then configure your server to trigger the webhook of n8n. 
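For reference, the custom torrent search node is essentially a Function node wrapping the package. Below is a minimal sketch, assuming the provider behavior and field names documented in the torrent-search-api README; it runs as JavaScript inside n8n, so adapt it to the node shipped with the workflow:

```typescript
// Sketch of the torrent search node (n8n Function node).
// Requires NODE_FUNCTION_ALLOW_EXTERNAL=torrent-search-api, as set in the compose file above.
const TorrentSearchApi = require('torrent-search-api');

TorrentSearchApi.enablePublicProviders();

const query = items[0].json.movieName; // e.g. the phrase forwarded by the IFTTT webhook
const torrents = await TorrentSearchApi.search(query, 'Movies', 5);

if (torrents.length === 0) {
  // Downstream, this branch feeds the "Torrent not found" Telegram node.
  return [{ json: { found: false, query } }];
}

// Pick the most seeded result and resolve its magnet link for Transmission.
const best = torrents.sort((a, b) => b.seeds - a.seeds)[0];
const magnet = await TorrentSearchApi.getMagnet(best);

return [{ json: { found: true, title: best.title, seeds: best.seeds, magnet } }];
```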
Conclusion
In conclusion, we provide a fully working automation that integrates a node library into n8n and offers an easy trigger for a complex operation.

Security concerns
Exposing a download trigger could let unwanted or malicious torrents be queued, so you may want to authenticate the webhook request by passing an extra field in the body containing a token shared between the two endpoints. Moreover, the torrent-search-api library and its dependencies have some known vulnerabilities that you may want to keep off your media center; hopefully these will be patched in a future release of the library. This is just an interesting proof of concept.

Quality of the download
You may want to introduce another block between the webhook trigger and the torrent search that double-checks the movie name detected by Google Assistant: it sometimes misinterprets a phrase, and you could end up downloading potentially copyrighted material. Please use this automation only for free and open-source movies and music.
by Nick Saraev
AI LinkedIn Outreach Automation with Apollo, OpenAI & PhantomBuster

**Categories:** Sales Automation, Lead Generation, AI Personalization

This workflow creates a complete LinkedIn outreach automation system that generates targeted lead lists from Apollo using natural language, enriches profiles with AI-personalized icebreakers, and automatically sends connection requests through PhantomBuster. Built by someone who's made over $1 million with AI automation, this system demonstrates the real-world approach to building profitable automation workflows.

**Benefits**
- Natural Language Lead Targeting - Describe your ideal prospects in plain English and automatically generate Apollo search URLs
- AI-Powered Personalization - Creates custom icebreakers based on LinkedIn profile data, employment history, and professional background
- Complete Outreach Pipeline - From lead discovery to personalized connection requests, fully automated end-to-end
- Smart Data Management - Automatically tracks all prospects in Google Sheets with deduplication and status tracking
- Cost-Effective Scraping - Uses Apify to extract Apollo data without expensive subscription costs
- Scalable Architecture - Processes hundreds of leads while respecting LinkedIn's connection limits

**How It Works**
1. Natural Language Lead Generation:
   - Form input accepts audience descriptions in plain English
   - AI converts descriptions into properly formatted Apollo search URLs
   - Automatically includes location, company size, job titles, and keyword filters
2. Apollo Data Extraction:
   - Uses an Apify actor to scrape targeted lead lists from Apollo
   - Extracts LinkedIn URLs, email addresses, employment history, and profile data
   - Processes 500+ leads per run with detailed professional information
3. AI Personalization Engine:
   - Analyzes LinkedIn profile data including job history and company information
   - Generates personalized icebreakers using proven connection request templates
   - Creates human-like messages that reference specific career details and achievements
4. Google Sheets Integration:
   - Automatically stores all lead data in an organized spreadsheet format
   - Tracks prospect information, contact details, and generated icebreakers
   - Provides easy data management and campaign tracking
5. PhantomBuster Automation:
   - Connects to the PhantomBuster API to trigger LinkedIn connection campaigns
   - Sends personalized connection requests with custom icebreakers
   - Respects LinkedIn's daily limits and mimics human behavior patterns

**Business Use Cases**
- Sales Teams - Automate prospecting for B2B outreach campaigns
- Agencies - Scale client acquisition through targeted LinkedIn outreach
- Recruiters - Find and connect with qualified candidates efficiently
- Entrepreneurs - Build professional networks in specific industries
- Business Development - Generate qualified leads for partnership opportunities

**Revenue Potential**
This system can replace expensive LinkedIn outreach tools that cost $200-500/month. Users typically see:
- 400% improvement in response rates through personalization
- 10x faster lead generation compared to manual prospecting
- Ability to process 500+ leads per hour vs. 10-20 manually

Difficulty Level: Intermediate
Estimated Build Time: 1-2 hours
Monthly Operating Cost: ~$50 (Apollo + PhantomBuster + AI APIs)

**Watch My Complete 1-Hour Build**
Want to see exactly how I built this system from scratch? I walk through the entire development process live, including all the debugging, API integrations, and real-world testing that goes into building profitable automation systems.
🎥 See My Live Build Process: "Build This Automated AI LinkedIn DM System in 1 Hour (N8N)"

This comprehensive tutorial shows my actual development approach - including the detours, problem-solving, and iterative testing that real automation building involves.

**Required Google Sheets Setup**
Create a Google Sheet with these exact column headers:
- id - Unique prospect identifier
- first_name - Contact's first name
- last_name - Contact's last name
- name - Full name
- linkedin_url - LinkedIn profile URL
- title - Current job title
- email_status - Email verification status
- photo_url - Profile photo URL
- icebreaker - AI-generated personalized message

Setup Instructions:
1. Create the Google Sheet with these headers in row 1
2. Connect Google Sheets OAuth in n8n
3. Update the document ID in the "Add to Google Sheet" node
4. PhantomBuster will read from this sheet for automated outreach

**Set Up Steps**
1. Apollo & Apify Configuration:
   - Set up an Apify account and obtain API credentials
   - Configure the Apollo scraper actor with proper parameters
   - Test lead extraction with sample audience descriptions
2. AI Personalization Setup:
   - Configure the OpenAI API for natural language processing and personalization
   - Set up prompt templates for audience targeting and icebreaker generation
   - Test personalization quality with sample LinkedIn profiles
3. Google Sheets Integration:
   - Create the lead tracking spreadsheet with the proper column structure
   - Configure Google Sheets API credentials and permissions
   - Set up data mapping for automatic lead storage
4. PhantomBuster Connection:
   - Set up a PhantomBuster account and LinkedIn connection
   - Configure the LinkedIn auto-connect agent with custom message templates
   - Connect the API for automated campaign triggering (a launch-call sketch follows at the end of this description)
5. Form and Workflow Setup:
   - Configure the form trigger for audience input collection
   - Set up data flow between all components
   - Add proper error handling and rate limiting
6. Testing and Optimization:
   - Start with small batches (5-10 connections daily)
   - Monitor LinkedIn account health and response rates
   - Optimize icebreaker templates based on performance data

**Important Compliance Notes**
- LinkedIn Limits: Respect the 100-connection-requests-per-week limit
- Account Safety: Use PhantomBuster's human-like behavior patterns
- Message Quality: Regularly update templates to avoid automation detection
- Response Management: Monitor and respond to replies within 24 hours

**Advanced Extensions**
This system can be enhanced with:
- Multi-channel Outreach: Add email sequences for comprehensive campaigns
- A/B Testing: Test different icebreaker templates automatically
- CRM Integration: Connect to Salesforce, HubSpot, or other sales systems
- Response Tracking: Monitor reply rates and optimize messaging

**Explore My Channel**
For more advanced automation systems that generate real business results, check out my YouTube channel where I share the exact strategies I've used to make over $1 million with AI automation.
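To make the PhantomBuster step concrete, here is a minimal sketch of the launch call, assuming PhantomBuster's v2 agents/launch endpoint and a sheet-reading auto-connect agent; the header name and argument keys are assumptions, so verify them against your own agent before relying on this:

```typescript
// Sketch: trigger a PhantomBuster agent run from n8n's HTTP Request node or custom code.
async function launchLinkedInCampaign(agentId: string): Promise<string> {
  const res = await fetch("https://api.phantombuster.com/api/v2/agents/launch", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Phantombuster-Key": process.env.PHANTOMBUSTER_API_KEY!, // check the exact header in your dashboard
    },
    body: JSON.stringify({
      id: agentId,
      // Typical LinkedIn auto-connect agents read the sheet of leads + icebreakers:
      argument: { spreadsheetUrl: "https://docs.google.com/spreadsheets/d/<your-sheet-id>" },
    }),
  });
  const data = await res.json();
  return data.containerId; // identifier of the launched run, useful for polling status
}
```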
by shepard
Overview
This workflow leverages the LangChain Code node to implement a fully customizable conversational agent. It is ideal for users who need granular control over their agent's prompts while reducing the unnecessary token consumption that comes with reserved tool-calling functionality (compared to n8n's built-in Conversation Agent).

Setup Instructions
1. Configure Gemini Credentials: Set up your Google Gemini API key (get an API key here if needed). Alternatively, you may use other AI provider nodes.
2. Interaction Methods:
   - Test directly in the workflow editor using the "Chat" button
   - Activate the workflow and access the chat interface via the URL provided by the When Chat Message Received node

Customization Options
- Interface Settings: Configure chat UI elements (e.g., title) in the When Chat Message Received node
- Prompt Engineering: Define agent personality and conversation structure in the Construct & Execute LLM Prompt node's template variable
  ⚠️ The template must preserve the {chat_history} and {input} placeholders for proper LangChain operation
- Model Selection: Swap language models through the language model input field in Construct & Execute LLM Prompt
- Memory Control: Adjust the conversation history length in the Store Conversation History node

Requirements
⚠️ This workflow uses the LangChain Code node, which only works on self-hosted n8n. (Refer to the LangChain Code node docs.)
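To illustrate the placeholder requirement, a template inside the LangChain Code node could be assembled like this minimal sketch; the persona text is just an example, and the availability of @langchain/core to the node is assumed:

```typescript
import { PromptTemplate } from "@langchain/core/prompts";

// Any custom template works, as long as {chat_history} and {input} survive intact.
const template = PromptTemplate.fromTemplate(
  `You are a concise assistant for an internal support team.

Conversation so far:
{chat_history}

User: {input}
Assistant:`
);

// The formatted string is what gets handed to the connected language model.
const prompt = await template.format({
  chat_history: "User: hi\nAssistant: hello!",
  input: "What can you do?",
});
```

If either placeholder is renamed or dropped, the memory node has nowhere to inject past turns and the agent loses its conversation history, which is why the warning above matters.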
by Shiva
AI Voice Calling Bot - OpenAI GPT-4o + ElevenLabs + Twilio Integration for Multilingual Appointment Booking & Service Orders

Overview
Transform your business with an intelligent voice calling bot that handles customer calls automatically in 25+ languages. This n8n workflow integrates OpenAI GPT-4o, ElevenLabs text-to-speech, and Twilio for seamless appointment scheduling, pizza orders, and service bookings.

Key Features
- **Multilingual Support**: Conversations in English, Spanish, French, German, Italian, Portuguese, Chinese, Japanese, Arabic, and 20+ more languages
- **Natural AI Conversations**: GPT-4o powered responses with ElevenLabs realistic voice synthesis
- **Multi-Service Handling**: Appointments, orders, and service requests with automatic logging
- **Real-time Processing**: Instant speech-to-text and audio response generation

Prerequisites
- n8n instance (self-hosted or cloud)
- Twilio account with a phone number
- OpenAI API key (GPT-4o access)
- ElevenLabs API credentials
- Google Sheets access
- Cloud storage for audio files

Setup Instructions
- Step 1: Configure Credentials - Add API keys for OpenAI, ElevenLabs, Twilio, and Google Sheets in the n8n credentials manager.
- Step 2: Prepare Data Storage - Create Google Sheets for call logs and appointments with the columns: timestamp, caller_id, speech_input, ai_response, language, call_sid.
- Step 3: Configure Twilio - Set the webhook URL to your n8n endpoint: https://your-n8n-instance.com/webhook/voice-webhook
- Step 4: Update Sheet IDs - Replace the placeholder Google Sheet IDs in the workflow nodes with your actual sheet IDs.

Customization Options
- **Voice Settings**: Adjust ElevenLabs multilingual voice models and parameters
- **AI Behavior**: Modify system prompts for specific business needs and languages
- **Service Types**: Add custom service handling logic
- **Business Hours**: Implement language-specific operating hours

Monitoring
Track call analytics, language preferences, conversion rates, and customer satisfaction across all supported languages through automated Google Sheets logging. Ready for production use, with comprehensive error handling and scalability for global businesses.
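To see what the Twilio side of one conversational turn looks like, here is a minimal sketch of the TwiML a webhook response could return, using the twilio Node helper; the audio URL, endpoint path, and option values are illustrative assumptions:

```typescript
import twilio from "twilio";

// One turn of the call: play the ElevenLabs-generated reply, then listen again.
const response = new twilio.twiml.VoiceResponse();
response.play("https://your-storage.example.com/replies/reply-abc123.mp3");
response.gather({
  input: ["speech"],                // capture the caller's next utterance
  action: "/webhook/voice-webhook", // Twilio POSTs the speech result back here
  speechTimeout: "auto",
  language: "en-US",                // switch per detected caller language
});

// Serialized XML is what the webhook must return with Content-Type: text/xml.
console.log(response.toString()); // <Response><Play>...</Play><Gather .../></Response>
```

Looping Play plus Gather on every webhook hit is what turns a one-shot call into a multi-turn conversation: each Gather result re-enters the workflow, runs through GPT-4o and ElevenLabs, and produces the next TwiML response.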
by Oneclick AI Squad
An intelligent WhatsApp-based chatbot designed for restaurants to automate customer interactions around table bookings, menu inquiries, opening hours, services, and offers. Built on the n8n automation platform and powered by an AI language model, this solution streamlines communication, boosts efficiency, and improves customer satisfaction.

Objectives
- Automate replies to common customer queries on WhatsApp
- Handle table booking requests with confirmation
- Provide menu item details, pricing, and dietary information
- Share restaurant timing, location, and service availability
- Promote offers and handle promotional queries
- Operate 24/7 without manual intervention
- Store bookings and conversations for reporting and analytics

Workflow Summary
Step 1: Message Reception
- Node: WhatsApp Trigger (webhook or API-based)
- Function: Listens for incoming customer messages.
Step 2: Intent Recognition
- Node: AI Query Processor (e.g., OpenAI API)
- Function: Detects customer intent (e.g., booking, menu, timing).
Step 3: Conditional Routing
- Node: Switch or IF node
- Function: Routes the flow based on detected intent: general information (menu, timing, services) or table booking.
Step 4A: Respond to General Info Queries
- Node: AI Response or Static Reply node
- Function: Returns relevant information (menu, timing, address, etc.).
Step 4B: Process Booking Requests
- Nodes: Collect Booking Details (via chatbot interactions), Store Booking Info (to a DB or Google Sheets), Send Booking Confirmation (to the customer)
Step 5: Context Management
- Node: Set/Update Customer Data
- Function: Maintains conversation state and tracks follow-up messages.

Database or Google Sheet Columns for Table Booking

| Column Name      | Description                                     |
| ---------------- | ----------------------------------------------- |
| reservation_id   | Unique reservation identifier                   |
| guest_name       | Full name of the guest                          |
| contact_number   | Customer's WhatsApp or mobile number            |
| email            | (Optional) Email address                        |
| booking_date     | Reservation date (YYYY-MM-DD format)            |
| booking_time     | Reservation time (HH:MM format)                 |
| party_size       | Number of guests                                |
| table_id         | (Optional) Table number or identifier           |
| special_requests | Allergies, seating preferences, etc.            |
| status           | Booking status: Confirmed / Cancelled / Pending |
| created_at       | Timestamp when booking was made                 |
| updated_at       | Timestamp when booking was last modified        |

Prerequisites
- Verified WhatsApp Business Account with API access
- n8n instance (cloud or self-hosted)
- Access to an AI service (e.g., OpenAI, Claude)
- Google Sheets, Airtable, MySQL, or other DB integration

Setup Instructions
1. Connect the WhatsApp API using a webhook or a third-party WhatsApp provider (e.g., 360Dialog, Twilio).
2. Integrate AI using the HTTP Request or OpenAI node for response generation.
3. Create the data store (Google Sheet, Airtable, or MySQL) with the defined booking columns.
4. Design the workflow in n8n with intent detection, conditional logic, and response nodes.
5. Test end-to-end by sending different WhatsApp queries and checking the logs and stored data.

Example Conversation
Customer: "Can I book a table for 2 people tomorrow at 8 PM?"
Bot: "Sure. Please provide your name and contact number to confirm the reservation for 2 people at 8:00 PM tomorrow."
[Booking details are saved, and a confirmation is sent.]
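As a minimal sketch of how a booking row matching the columns above could be typed and validated before it is written to the data store; the validation thresholds here are assumptions, not part of the template:

```typescript
// Mirrors the booking columns above; validation rules are example assumptions.
interface TableBooking {
  reservation_id: string;
  guest_name: string;
  contact_number: string;
  email?: string;
  booking_date: string;      // YYYY-MM-DD
  booking_time: string;      // HH:MM, 24-hour clock
  party_size: number;
  table_id?: string;
  special_requests?: string;
  status: "Confirmed" | "Cancelled" | "Pending";
  created_at: string;        // ISO timestamp
  updated_at: string;        // ISO timestamp
}

// Returns a list of problems; an empty list means the row is safe to store.
function validateBooking(b: TableBooking): string[] {
  const errors: string[] = [];
  if (!/^\d{4}-\d{2}-\d{2}$/.test(b.booking_date)) errors.push("booking_date must be YYYY-MM-DD");
  if (!/^([01]\d|2[0-3]):[0-5]\d$/.test(b.booking_time)) errors.push("booking_time must be HH:MM");
  if (b.party_size < 1 || b.party_size > 20) errors.push("party_size out of range");
  if (!b.contact_number.trim()) errors.push("contact_number is required");
  return errors;
}
```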
Benefits
- Fully automated customer interaction
- Supports real-time table reservations
- Accurate and quick responses
- Scales without increasing staff effort
- Operates 24/7
- Centralized booking data for analytics

Analytics and Reporting
Track key performance metrics such as:
- Number of bookings per day/week
- Average response time
- Customer satisfaction scores (via a feedback node)
- Popular menu items or query types
- Booking conversion rates

Security and Compliance
- End-to-end encrypted WhatsApp messages
- Role-based access to sensitive data
- Compliance with data protection regulations (e.g., GDPR)
- Secure API integrations and storage solutions

Conclusion
This WhatsApp chatbot serves as a reliable, AI-powered digital front desk for restaurants. Built with n8n and scalable components, it automates customer support, manages bookings, and enhances operational efficiency while offering a seamless customer experience.
by Praveena
Purpose
This automation helps you context-switch from office work to side projects or passion gigs, so you can let go of distracting thoughts and reset your perspective.

Benefits
Anyone who works full time and also does something on the side (a side gig, being a mom, or simply following a passion project) will benefit.

What you need
- n8n (lol)
- Any LLM API key (I used OpenAI 4.1)
- iPhone (Automations and Shortcuts)

Template Setup
1. Set up your LLM API key.
2. Import the template file into a new workflow.
3. On your iPhone, create a new shortcut as shown in the video.
4. Create the automation steps.

Resources
YouTube
by Thomas Janssen
Build an AI Agent which accesses two MCP Servers: a RAG MCP Server and a Search Engine API MCP Server.

This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Tutorial
Click here to watch the full tutorial on YouTube!

How it works
We build an AI Agent which has access to two MCP servers:
- An MCP Server with a RAG database (click here for the RAG MCP Server)
- An MCP Server which can access a search engine, so the AI Agent also has access to data about more current events

Installation
1. In order to use the MCP Client, you also have to use the MCP Server template.
2. Open the "MCP Client: RAG" node and update the SSE Endpoint to point to the MCP Server workflow.
3. Install the "n8n-nodes-mcp" community node via Settings > Community Nodes.
4. ONLY FOR SELF-HOSTING: in Docker, click on your n8n container, navigate to "Exec", and set the environment variable below so community nodes can be used as tools: N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
5. Navigate to Bright Data and create a new "Web Unlocker API" with the name "mcp_unlocker".
6. Open the "MCP Client" node and add the Bright Data credentials you just created.

How to use it
Run the Chat node and start asking questions.

More detailed instructions
Missed a step? Find more detailed instructions here: Personal Newsfeed With Bright Data and n8n

What is Retrieval Augmented Generation (RAG)?
Large Language Models (LLMs) are trained on data up to a specific cutoff date. Imagine a model trained in December 2023 on data up to September 2023: it has no knowledge of events in 2024. So if you ask the LLM who the Formula 1 World Champion of 2024 was, it doesn't know the answer. The solution? Retrieval Augmented Generation. With RAG, a user's question is sent to a semantic database, and the LLM uses the information retrieved from that database to answer the question.

What is the Model Context Protocol (MCP)?
MCP is a communication protocol used by AI agents to call tools hosted on external servers. When an MCP client connects to an MCP server, the server provides an overview of all its tools, prompts, and resources. The client can then choose which tools to call, based on the user's request, and ask the server to execute them. An MCP client can communicate with multiple MCP servers, each of which can host multiple tools.
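Concretely, the client-server exchange is plain JSON-RPC 2.0. Below is a minimal sketch of the discovery and call messages, with a made-up search tool for illustration:

```typescript
// MCP messages are JSON-RPC 2.0. A client first discovers the server's tools...
const listRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// ...the server answers with its tool catalog (shape abbreviated)...
const listResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "search_engine", // hypothetical tool name for illustration
        description: "Search the web for current events",
        inputSchema: { type: "object", properties: { query: { type: "string" } } },
      },
    ],
  },
};

// ...and the agent invokes whichever tool fits the user's request.
const callRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "search_engine", arguments: { query: "Formula 1 World Champion 2024" } },
};
```

The SSE Endpoint configured in step 2 is simply the transport over which these JSON-RPC messages travel between the MCP Client node and the MCP Server workflow.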
by ibrhdotme
Learning something new? Endlessly searching to find the best resources? This workflow finds top community-recommended learning resources on any topic from Hacker News, delivered to your inbox.

How it works
1. A user submits a topic they want to learn via a simple form.
2. The workflow searches for relevant "Ask HN" posts on Hacker News and extracts the top-level comments.
3. An LLM analyzes the comments and identifies the best learning resources.
4. A personalized email is sent to the user with a Markdown-formatted list of top recommendations, categorized by resource type (e.g., book, course, article) and difficulty level.

Set up steps
1. Add your Google Gemini API credentials. You'll need to create a project and enable the Generative Language API.
2. Add your SMTP credentials for sending emails.
3. Customize the form and email subject (optional).
4. Activate the workflow.

Screenshots for Workflow, Form and Email

Built on Day-03 as part of the #100DaysOfAgenticAi. Fork it, tweak it, have fun!
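For a sense of the search step, here is a minimal sketch against the public Algolia Hacker News API; the query phrasing is an illustrative assumption, and the workflow's actual HTTP Request nodes may be configured differently:

```typescript
// Find "Ask HN" threads on a topic, then pull one thread's top-level comments.
async function findAskHnThreads(topic: string) {
  const q = encodeURIComponent(`Ask HN best ${topic}`);
  const res = await fetch(`https://hn.algolia.com/api/v1/search?query=${q}&tags=ask_hn`);
  const { hits } = await res.json();
  return hits.map((h: any) => ({ id: h.objectID, title: h.title, points: h.points }));
}

async function fetchTopLevelComments(storyId: string) {
  const res = await fetch(`https://hn.algolia.com/api/v1/items/${storyId}`);
  const item = await res.json();
  // Direct children of the story item are its top-level comments.
  return item.children.map((c: any) => c.text).filter(Boolean);
}
```

The comment HTML returned by the items endpoint is what gets handed to the LLM for resource extraction and categorization.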
by Yang
Who is this for?
This workflow is perfect for customer support teams, sales departments, or solopreneurs who receive frequent email enquiries and want to automate the initial response process using AI. If you spend too much time answering similar questions, this system helps you respond faster and more intelligently, without writing a single line of code.

What problem is this workflow solving?
Manually responding to repeated customer enquiries slows productivity and increases delays. This workflow classifies whether an incoming email is a real enquiry, analyzes the content with a LangChain-powered agent, fetches helpful context using Dumpling AI, and sends a personalized reply using Gmail, all within minutes.

What this workflow does
1. Listens for new incoming Gmail messages using the Gmail Trigger node.
2. Classifies whether the email is an enquiry using a GPT-4o classification prompt.
3. Uses a Filter node to continue only if the email was classified as an enquiry.
4. Passes the email content to a LangChain Agent, enhanced with memory, AI tools, and Dumpling AI, to search for relevant information.
5. The agent constructs a smart, relevant response, then sends it to the original sender via Gmail.

Setup
1. Connect Gmail: Use the Gmail Trigger node to connect to the Gmail account that receives enquiries. Make sure Gmail OAuth2 credentials are authenticated.
2. Configure the Dumpling AI Agent: Sign up at Dumpling AI, create an agent trained to search your help docs, site content, or FAQs, then copy your Dumpling agent ID and API key and paste them in the Dumpling AI Agent - Search for Relevant Info HTTP Request node.
3. Set Up the LangChain Agent: No extra setup needed beyond connecting OpenAI credentials. GPT-4o is used for classification and reply generation.
4. Enable the Gmail Reply Node: The final Send Email Response via Gmail node will send the AI-generated reply back to the same thread.

How to customize this workflow to your needs
- Change the classification prompt to include other email types like "support", "complaint", or "sales".
- Add additional logic if you want to CC someone or forward certain types of enquiries.
- Add a Notion or Google Sheets node to log the conversation for analytics.
- Replace Gmail with Outlook or another email provider by switching the nodes.
- Improve context by adding more AI tools like database queries or preloaded FAQs.
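As a minimal sketch of what the GPT-4o classification step boils down to, assuming the official openai package; the exact prompt in the workflow will differ:

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// One GPT-4o call that returns a single label; the Filter node downstream
// only lets "enquiry" results through.
async function isEnquiry(emailBody: string): Promise<boolean> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    temperature: 0, // deterministic labels make the Filter node reliable
    messages: [
      {
        role: "system",
        content: "Classify the email. Reply with exactly one word: 'enquiry' or 'other'.",
      },
      { role: "user", content: emailBody },
    ],
  });
  return completion.choices[0].message.content?.trim().toLowerCase() === "enquiry";
}
```

Constraining the model to a one-word answer at temperature 0 is what makes the downstream Filter node a simple string comparison rather than a fuzzy match.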
by Yang
Who is this for?
This workflow is built for newsletter writers, marketers, content creators, or anyone who curates and summarizes web articles. It's especially helpful for virtual assistants and founders who need to quickly turn web content into digestible, branded newsletters using AI.

What problem is this workflow solving?
Manually reading, summarizing, and formatting multiple articles into a newsletter takes time and focus. This workflow automates the process using Dumpling AI for crawling, GPT-4o for summarization, and Gmail for delivery, so you can go from raw URLs to a polished email in minutes.

What this workflow does
1. Starts manually (can also be scheduled)
2. Reads a list of article URLs from Google Sheets
3. Sends the URLs to Dumpling AI to crawl and extract content
4. Splits each article into a single item for processing
5. Uses a Code node to clean and structure the article data
6. Uses an Edit Fields node to merge the articles into one JSON block
7. GPT-4o summarizes and generates HTML content for the newsletter
8. Sends the formatted newsletter via Gmail

Setup
- Google Sheets: Create a sheet with a column (A) for article URLs. Update the Read URLs from Google Sheet node to use your Sheet ID and tab name, and connect your Google account in the credentials.
- Dumpling AI: Sign up at https://app.dumplingai.com, create an agent for web crawling under /crawl, and add your Dumpling API key in the HTTP headers of the Crawl Content with Dumpling AI node.
- Split node: Breaks apart the array of articles from Dumpling AI so each article is processed individually.
- Code node: Structures each article as JSON with title, url, and cleaned text content (a sketch follows at the end of this description).
- Edit Fields node: Gathers all structured articles back into a single JSON array to prepare for AI summarization.
- OpenAI (GPT-4o): Processes the article list and returns a formatted subject line and HTML newsletter content.
- Gmail: Connect your Gmail account to send the AI-generated newsletter to your inbox or team. Update the recipient field in the Send HTML Email via Gmail node.

How to customize this workflow to your needs
- Replace the manual trigger with a Schedule node to send newsletters weekly
- Modify the GPT-4o prompt to change the tone (e.g., more professional, funny, casual)
- Add filtering logic to skip low-value articles
- Connect Slack, Airtable, or Notion for internal team usage
- Change Gmail to SendGrid or Outlook if preferred

Final Notes
This workflow uses:
- **Dumpling AI** /crawl endpoint to extract article content
- **Split**, **Code**, and **Edit Fields** nodes to format multi-article input
- **GPT-4o** for summarization and HTML formatting
- **Gmail** for delivery

This setup eliminates manual steps and delivers fast, consistent newsletters powered by AI.
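Here is a minimal sketch of that Code node; it runs as JavaScript inside n8n, and the Dumpling AI response field names used here are assumptions, so check your actual payload:

```typescript
// n8n Code node (Run Once for All Items): structure each crawled article as
// { title, url, content }. Field names from the crawl response are assumed.
const articles = [];

for (const item of $input.all()) {
  const page = item.json;
  articles.push({
    title: String(page.title || "Untitled").trim(),
    url: page.url,
    // Collapse leftover whitespace/newlines from the extracted page text.
    content: String(page.text || "").replace(/\s+/g, " ").trim(),
  });
}

// n8n expects an array of items, each wrapping its data in a `json` key.
return articles.map((article) => ({ json: article }));
```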
by Yaron Been
🎤 Audio-to-Insights: Auto Meeting Summarizer

Transform your meeting recordings into actionable insights automatically. This powerful n8n workflow monitors your Google Drive for new audio files, transcribes them using OpenAI's Whisper, generates intelligent summaries with ChatGPT, and logs everything in Google Sheets - all without lifting a finger.

🔄 How It Works
This workflow operates as a seamless 6-step automation pipeline:
- Step 1: Smart Detection - The workflow continuously monitors a designated Google Drive folder (polling every minute) for newly uploaded audio files.
- Step 2: Secure Download - When a new audio file is detected, the system automatically downloads it from Google Drive for processing.
- Step 3: AI Transcription - OpenAI's Whisper converts your audio recording into an accurate text transcription, supporting multiple audio formats.
- Step 4: Intelligent Summarization - ChatGPT processes the transcript using a specialized prompt that extracts key discussion points and decisions, action items with assigned persons and deadlines, and priority levels and follow-up tasks, in clean, professional formatting.
- Step 5: Timestamp Generation - The system automatically adds the current date and formats it consistently for tracking purposes.
- Step 6: Automated Logging - The final summary is appended to your Google Sheets document with the date, creating a searchable archive of all meeting insights.

⚙️ Setup Steps

Prerequisites
Before setting up the workflow, ensure you have:
- An active Google Drive account
- An OpenAI API key with credits
- Google Sheets access
- An n8n instance (cloud or self-hosted)

Configuration Steps
1. Credential Setup
   - **Google Drive OAuth2**: Required for folder monitoring and file downloads
   - **OpenAI API Key**: Needed for both transcription (Whisper) and summarization (ChatGPT)
   - **Google Sheets OAuth2**: Essential for writing summaries to your spreadsheet
2. Google Drive Configuration
   - Create a dedicated folder in Google Drive for meeting recordings
   - Copy the folder ID from the URL (the long string after /folders/)
   - Update the folderToWatch parameter in the workflow
3. Google Sheets Preparation
   - Create a new Google Sheet or use an existing one
   - Ensure it has the columns: Date and Meeting Summary
   - Copy the spreadsheet ID from the URL
   - Update the documentId parameter in the workflow
4. Audio Requirements
   - **Supported Formats**: MP3, WAV, M4A, MP4
   - **Recommended Size**: Under 100MB for optimal processing
   - **Language**: Optimized for English (customizable for other languages)
   - **Quality**: Clear audio produces better transcriptions
5. Workflow Activation
   - Import the workflow JSON into your n8n instance
   - Configure all credential connections
   - Test with a sample audio file
   - Activate the workflow trigger

🚀 Use Cases

Project Management
- **Team Standup Summaries**: Convert daily standups into actionable task lists
- **Sprint Retrospectives**: Extract improvement points and action items
- **Stakeholder Updates**: Generate concise reports for leadership

Sales & Customer Success
- **Discovery Call Notes**: Capture prospect pain points and requirements
- **Demo Follow-ups**: Track questions, objections, and next steps
- **Customer Check-ins**: Monitor satisfaction and expansion opportunities

Consulting & Professional Services
- **Client Strategy Sessions**: Document recommendations and implementation plans
- **Requirements Gathering**: Organize complex project specifications
- **Progress Reviews**: Track deliverables and milestone achievements

HR & Training
- **Interview Debriefs**: Standardize candidate evaluation notes
- **Training Sessions**: Create searchable knowledge bases
- **Performance Reviews**: Document development plans and goals

Research & Development
- **Brainstorming Sessions**: Capture innovative ideas and concepts
- **Technical Reviews**: Log decisions and architectural choices
- **User Research**: Organize feedback and insights systematically

💡 Advanced Customization Options

Enhanced Summarization
Modify the ChatGPT prompt to focus on specific elements:
- Add speaker identification for multi-person meetings
- Include sentiment analysis for customer calls
- Generate department-specific summaries (technical, sales, legal)
- Extract financial figures and metrics automatically

Integration Expansions
- **Slack Integration**: Auto-post summaries to relevant channels
- **Email Notifications**: Send summaries to meeting participants
- **CRM Updates**: Push action items directly to Salesforce/HubSpot
- **Calendar Integration**: Schedule follow-up meetings based on action items

Quality Improvements
- **Audio Preprocessing**: Add noise reduction before transcription
- **Multi-language Support**: Configure for international teams
- **Custom Templates**: Create industry-specific summary formats
- **Approval Workflows**: Add human review before final storage

🛠️ Troubleshooting & Best Practices

Common Issues
- **Large File Processing**: Split recordings over 100MB into smaller segments
- **Poor Audio Quality**: Use noise reduction tools before uploading
- **API Rate Limits**: Implement delay nodes for high-volume usage
- **Formatting Issues**: Adjust ChatGPT prompts for consistent output

Optimization Tips
- Upload files in supported formats only
- Ensure a stable internet connection for cloud processing
- Monitor OpenAI API usage and costs
- Regularly back up your Google Sheets data
- Test workflow changes with sample files first

📊 Expected Outputs

Sample Summary Format:

Meeting Summary - March 15, 2024

Key Discussion Points:
- Q1 budget review and allocation decisions
- New product launch timeline and milestones
- Team restructuring and role assignments

Action Items:
- John: Finalize budget proposal by March 20th (High Priority)
- Sarah: Schedule product demo sessions for March 25th
- Team: Submit org chart feedback by March 18th

Decisions Made:
- Approved additional marketing budget of $50K
- Delayed product launch to April 15th for quality assurance
- Promoted Lisa to Senior Developer role
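If you want to adapt the Step 3 transcription outside n8n's built-in node, here is a minimal sketch with the official openai npm package; the file path and language setting are placeholders:

```typescript
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Sketch of the Whisper transcription step: stream the downloaded recording
// to the transcription endpoint and get plain text back.
async function transcribeMeeting(path: string): Promise<string> {
  const transcription = await openai.audio.transcriptions.create({
    file: fs.createReadStream(path),
    model: "whisper-1",
    language: "en", // matches the workflow's English default; change for other languages
  });
  return transcription.text;
}
```

The returned text is exactly what the summarization prompt receives in Step 4, which is why clear audio (and the 100MB size guidance above) directly affects summary quality.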
📞 Questions & Support
For any questions, customizations, or technical support regarding this workflow:

📧 Email Support
- **Primary Contact**: Yaron@nofluff.online
- **Response Time**: Within 24 hours on business days
- **Best For**: Setup questions, customization requests, troubleshooting

🎥 Learning Resources
- **YouTube Channel**: https://www.youtube.com/@YaronBeen/videos
- Step-by-step setup tutorials
- Advanced customization guides
- Workflow optimization tips

🔗 Professional Network
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- Connect for ongoing support
- Share your workflow success stories
- Get updates on new automation ideas

💡 What to Include in Your Support Request
- Describe your specific use case
- Share any error messages or logs
- Mention your n8n version and setup type
- Include sample audio file characteristics (if relevant)

Ready to transform your meeting chaos into organized insights? Download the workflow and start automating your meeting summaries today!