by Angel Menendez
Who is this for?
Public-facing professionals (developer advocates, founders, marketers, content creators) who get bombarded with LinkedIn messages that aren't actually for them - support requests when you're in marketing, sales inquiries when you're a devrel, partnership pitches when you handle content, etc.

What problem is this workflow solving?
When you're visible online, people assume you handle everything at your company. You end up spending hours daily playing human router, forwarding messages like "How do I reset my password?" or "What's your enterprise pricing?" to the right teams. This LinkedIn automation workflow stops you from being your company's unofficial customer service representative.

What this workflow does
This AI-powered LinkedIn DM management workflow automatically assesses incoming LinkedIn messages and routes them intelligently:
- **Automated Message Assessment**: Receives inbound LinkedIn messages via UniPile and looks up sender details from both personal and company LinkedIn profiles.
- **Smart Route Matching**: Compares the message content against your message routing table in Notion. Each route contains, for example:
  - Question: "How can I become an n8n ambassador?"
  - Description: "Route here when a user is requesting to become an n8n ambassador. Also when they're asking how they could do more to evangelize n8n in their city, or to start organizing n8n meetups and events in their city."
  - Action: "Tell the user to open the following Notion page, which has details on the ambassador program including how to apply, as well as perks of the program: https://www.notion.so/n8n-Ambassador-Program-d883b2a130e5448faedbebe5139187ea?pvs=21"
- **AI Response Generation**: When a message matches an existing route, the AI assistant generates a personalized response draft based on the "Action" instructions from your routing table.
- **Human-in-the-Loop Approval**: Sends the draft response to Slack with approve/reject buttons, so you maintain control while saving time. The draft can be edited from within Slack on desktop and mobile.
- **Automated LinkedIn Responses**: Once approved, sends the reply back via LinkedIn and marks the original message as handled.

The result: you stop being a human switchboard and can focus on your actual job while people still get helpful, timely responses through automated customer service. You can also add routes for things you do handle but get asked about daily (like "How do I join your beta?" or "What's your content strategy?") to standardize your responses.
Setup
1. Sign up for a UniPile account and create a webhook under the Messaging section. Set the callback URL to this workflow's production URL.
2. Generate a UniPile API key with all required scopes and store it in your n8n credentials.
3. Create a Slack app and enable interactive message buttons and webhooks (a hedged sketch of the approval message itself follows at the end of this section). Here is a Slack app manifest template for easy deployment in Slack:

```json
{
  "display_information": {
    "name": "Request Router",
    "description": "A bot that alerts when a new LinkedIn question comes in.",
    "background_color": "#12575e"
  },
  "features": {
    "bot_user": {
      "display_name": "Request Router",
      "always_online": false
    }
  },
  "oauth_config": {
    "scopes": {
      "bot": [
        "chat:write",
        "chat:write.customize",
        "chat:write.public",
        "links:write",
        "im:history",
        "im:read",
        "im:write"
      ]
    }
  },
  "settings": {
    "interactivity": {
      "is_enabled": true,
      "request_url": "Your webhook url here"
    },
    "org_deploy_enabled": false,
    "socket_mode_enabled": false,
    "token_rotation_enabled": false
  }
}
```

4. Set up your Notion database with the three-column structure (Question, Description, Action).
5. Configure the AI node with your preferred provider (OpenAI, Gemini, Ollama, etc.).
6. Replace the placeholder LinkedIn user and organization IDs with your own.

How to customize this workflow to your needs
- **Database Options**: Swap Notion for Google Sheets, Airtable, or another database.
- **Filtering Logic**: Add custom filters based on keywords, message length, follower count, or business logic.
- **AI Customization**: Adjust the system prompt to match your brand tone and response goals.
- **Approval Platform**: Replace Slack with email, Discord, or another review platform.
- **Team Routing**: Use Slack metadata to route approvals to specific team members based on message category.
- **Enrichment**: Add secondary data enrichment using tools like Clearbit or FullContact.
- **Response Rules**: Create conditional logic for different response types based on sender profile or message content.

Perfect for anyone who's tired of being their company's accidental customer service department while trying to do their real job. This LinkedIn automation template was inspired by a live build done by Max Tkacz and Angel Menendez for The Studio.
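If you want a sense of what the human-in-the-loop step looks like under the hood, here is a minimal, hedged sketch of posting a draft reply to Slack with Approve/Reject buttons via chat.postMessage. The channel ID, action IDs, and draft text are placeholders, and the n8n Slack node normally builds this message for you; the sketch is only for orientation.

```ts
// Minimal sketch: post a draft LinkedIn reply to Slack with Approve/Reject buttons.
// SLACK_BOT_TOKEN, the channel ID, and the draft text are placeholders.
const SLACK_BOT_TOKEN = process.env.SLACK_BOT_TOKEN ?? "xoxb-...";

async function postDraftForApproval(draft: string, senderName: string): Promise<void> {
  const res = await fetch("https://slack.com/api/chat.postMessage", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${SLACK_BOT_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      channel: "C0123456789", // placeholder channel ID
      text: `Draft reply for ${senderName}`, // plain-text fallback
      blocks: [
        { type: "section", text: { type: "mrkdwn", text: `*Draft reply to ${senderName}:*\n${draft}` } },
        {
          type: "actions",
          elements: [
            { type: "button", text: { type: "plain_text", text: "Approve" }, style: "primary", action_id: "approve_reply" },
            { type: "button", text: { type: "plain_text", text: "Reject" }, style: "danger", action_id: "reject_reply" },
          ],
        },
      ],
    }),
  });
  const data = await res.json();
  if (!data.ok) throw new Error(`Slack error: ${data.error}`);
}
```

The button clicks arrive at the interactivity request_url you set in the manifest above, which is how the workflow knows whether to send or discard the draft.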
by Rostislav
This n8n template provides a complete solution for Optical Character Recognition (OCR) of image and PDF files directly within Telegram. Users can simply send PNG, JPEG, or PDF documents to your Telegram bot, and the workflow will process them, extract text using Mistral OCR, and return the content as a downloadable Markdown (.md) text file.

Key Features & How it Works:
- **Effortless OCR via Telegram**: Users send a file to the bot, and the system automatically detects the file type (PNG, JPEG, or PDF).
- **File Size Validation**: The workflow enforces a 25 MB file size limit, in line with Telegram Bot API restrictions, ensuring smooth operation.
- **Mistral-Powered Recognition**: Leveraging Mistral OCR, the template accurately extracts text from various document types.
- **Markdown Output**: Recognized text is automatically converted into a clean Markdown (.md) text file, ready for easy editing, storage, or further processing.
- **Secure File Delivery**: The processed Markdown file is delivered back to the user via Telegram. For this, the workflow uses a GET request to itself (acting as a file downloader proxy). This generated link allows Telegram to fetch the .md file directly. Please note: this download functionality requires the workflow to be in an Active status.
- **Optional Whitelist Security**: Enhance your bot's security with an optional whitelist feature. You can configure specific Telegram User IDs to restrict access, ensuring only authorized users can interact with your bot.
- **Simplified Webhook Management**: The template includes dedicated utility flows for convenient management of your Telegram bot's webhooks (for both development and production environments). A hedged setWebhook sketch is included after this section.

This template is ideal for digitizing documents on the go, extracting text from scanned files, or converting image-based content into versatile, searchable text.

Getting Started
To get this OCR bot up and running, follow these two main steps:
1. Set Up Your Telegram Bot: Configure your Telegram bot and its webhooks. Follow the instructions in the Telegram Bot Webhook Setup section to create your bot, obtain its API token, and set up the necessary webhook URLs.
2. Configure Bot Settings: Define key operational parameters for your bot. Proceed to the Settings Configuration section and populate the variables according to your preferences, including options for whitelist access.
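For reference, registering the production webhook is a single Bot API call. A minimal sketch, assuming placeholder values for the bot token and your n8n webhook URL (the bundled utility flows perform this step for you):

```ts
// Minimal sketch: point a Telegram bot's webhook at an n8n webhook URL.
// BOT_TOKEN and WEBHOOK_URL are placeholders; replace with your own values.
const BOT_TOKEN = process.env.TELEGRAM_BOT_TOKEN ?? "123456:ABC-DEF...";
const WEBHOOK_URL = "https://your-n8n-instance.example.com/webhook/your-path";

async function setTelegramWebhook(): Promise<void> {
  const res = await fetch(`https://api.telegram.org/bot${BOT_TOKEN}/setWebhook`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: WEBHOOK_URL }),
  });
  // Telegram replies with { ok: true, result: true, description: "Webhook was set" } on success.
  console.log(await res.json());
}

setTelegramWebhook();
```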
by Garri
Description
This workflow automates the security reputation check of domains and IP addresses using multiple APIs: VirusTotal, AbuseIPDB, and Google DNS. It assesses potential threats, including malicious and suspicious scores, as well as email security configurations (SPF, DKIM, DMARC). The analysis results are processed by AI to produce a concise assessment, then automatically updated in Google Sheets for documentation and follow-up.

How It Works
1. Automatic Trigger – The workflow runs periodically via a Schedule Trigger.
2. Data Retrieval – Fetches a list of domains from Google Sheets with status "To do".
3. Domain Analysis – Uses the VirusTotal API to get the domain report, perform a rescan, and check IP resolutions.
4. IP Analysis – Checks IP reputation using AbuseIPDB.
5. Email Security Validation – Verifies SPF, DKIM, and DMARC configurations via Google DNS (hedged request sketches follow below).
6. AI Assessment – Analysis data is processed by AI to produce a short summary in Indonesian.
7. Data Update – The results are automatically written back to Google Sheets, changing the status to "Done" or adding notes if potential threats are found.

How to Setup
1. Prepare API Keys – Sign up and obtain API keys from VirusTotal and AbuseIPDB. Set up access to the Google Sheets API.
2. Configure Credentials in n8n – Add VirusTotal API, AbuseIPDB API, and Google Sheets OAuth credentials in n8n.
3. Prepare Google Sheets – Create a sheet with columns No, Domain, Customer, Keterangan, Status. Ensure initial data has the status "To do".
4. Import Workflow – Upload the workflow JSON file into n8n.
5. Set Schedule Trigger – Define the checking interval as needed (e.g., every 1 hour).
6. Test Run – Run the workflow manually to ensure all API connections and Google Sheets output work properly.
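To make the moving parts concrete, here is a hedged sketch of the three lookups the workflow performs, using the VirusTotal v3, AbuseIPDB v2, and Google DNS-over-HTTPS endpoints. API keys and the example domain/IP are placeholders; response field names are shown as commonly documented, so verify them against the current API references.

```ts
// Hedged sketch of the three reputation lookups. Keys and targets are placeholders.
const VT_KEY = process.env.VIRUSTOTAL_API_KEY ?? "<virustotal-key>";
const ABUSE_KEY = process.env.ABUSEIPDB_API_KEY ?? "<abuseipdb-key>";

// 1) VirusTotal domain report: malicious/suspicious counts live under last_analysis_stats.
async function virusTotalDomain(domain: string) {
  const res = await fetch(`https://www.virustotal.com/api/v3/domains/${domain}`, {
    headers: { "x-apikey": VT_KEY },
  });
  const body = await res.json();
  return body.data?.attributes?.last_analysis_stats; // e.g. { malicious: 0, suspicious: 0, ... }
}

// 2) AbuseIPDB check: abuseConfidenceScore summarises how often the IP was reported.
async function abuseIpdbCheck(ip: string) {
  const url = `https://api.abuseipdb.com/api/v2/check?ipAddress=${ip}&maxAgeInDays=90`;
  const res = await fetch(url, { headers: { Key: ABUSE_KEY, Accept: "application/json" } });
  const body = await res.json();
  return body.data?.abuseConfidenceScore;
}

// 3) Google DNS-over-HTTPS: TXT records for SPF (apex) and DMARC (_dmarc subdomain).
//    DKIM needs a selector (e.g. selector1._domainkey.<domain>), which varies per sender.
async function txtRecords(name: string) {
  const res = await fetch(`https://dns.google/resolve?name=${name}&type=TXT`);
  const body = await res.json();
  return (body.Answer ?? []).map((a: { data: string }) => a.data);
}

async function main() {
  console.log(await virusTotalDomain("example.com"));
  console.log(await abuseIpdbCheck("8.8.8.8"));
  console.log(await txtRecords("example.com"));        // SPF record (v=spf1 ...)
  console.log(await txtRecords("_dmarc.example.com")); // DMARC policy (v=DMARC1; ...)
}
main();
```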
by PUQcloud
Setting up the n8n workflow

Overview
The Docker Grafana WHMCS module uses a specially designed n8n workflow to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

Prerequisites
You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

Installation Steps
Install the required workflow on n8n. You have two options:
- Option 1: Use the latest version from the n8n marketplace. The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n
- Option 2: Manual installation. Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

n8n Workflow API Backend Setup for WHMCS/WISECP

Configure API Webhook and SSH Access
- Create a Basic Auth credential for the Webhook API block in n8n (a hedged call sketch follows at the end of this section).
- Create an SSH credential for accessing a server with Docker installed.

Modify Template Parameters
In the Parameters block of the template, update the following settings:
- server_domain – Must match the domain of the WHMCS/WISECP Docker server.
- clients_dir – Directory where user data related to Docker and disks will be stored.
- mount_dir – Default mount point for the container disk (recommended not to change).
Do not modify the following technical parameters: screen_left, screen_right.

Deploy-docker-compose
In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which will be generated in the following scenarios:
- When the service is created
- When the service is unlocked
- When the service is updated

nginx
In the nginx element, you can modify the configuration parameters of the web interface proxy server.
- The main section allows you to add custom parameters to the server block in the proxy server configuration file.
- The main_location section contains settings that will be added to the location / block of the proxy server configuration. Here, you can define custom headers and other parameters specific to the root location.

Bash Scripts
Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed.
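For orientation, a module-side call to the workflow's webhook API could look roughly like the following. This is a hedged sketch only: the webhook path, command name, and payload fields are placeholder assumptions, since the actual command set is defined by the PUQcloud module and the workflow template.

```ts
// Hedged sketch: calling the n8n Webhook API block with Basic Auth.
// URL, credentials, command name, and payload fields are placeholder assumptions.
const N8N_WEBHOOK = "https://your-n8n-instance.example.com/webhook/docker-grafana";
const BASIC_AUTH = Buffer.from("api-user:api-password").toString("base64"); // Basic Auth credential from n8n

async function sendCommand(command: string, payload: Record<string, unknown>) {
  const res = await fetch(N8N_WEBHOOK, {
    method: "POST",
    headers: {
      Authorization: `Basic ${BASIC_AUTH}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ command, ...payload }),
  });
  return res.json(); // the Bash scripts behind the SSH element return JSON or a plain string
}

sendCommand("create", { service_id: 101, domain: "grafana.customer.example.com" }).then(console.log);
```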
by Gede Suparsa
This template demonstrates how to provide an interactive chatbot about your work history based on your CV. Unanswered questions and follow-up email contacts are sent to you via Telegram.

Use case: link it from your profile to not only show off your AI workflow skills but also give prospective employers an interactive chatbot about your work history.

Good to Know
It requires access to an OpenAI API key (low cost for low usage) and setting up a bot in Telegram (free).

How it Works
- The n8n built-in chat node, hosted on n8n services, provides the chat interface.
- You upload your CV (exported from LinkedIn or exported yourself) to Microsoft OneDrive, along with a simple text file explaining some other information about you.
- On each chat interaction, the PDF and text file are used as tools to give the chatbot context for its responses.
- If a question cannot be answered reliably, a sub-workflow is called to capture that question and send it to you as a Telegram message (a hedged sketch of such a notification follows below).
- If the person chatting supplies their email address, it is sent to you via Telegram along with any other information the user provides.

How to use
1. After importing the template, create the sub-workflows so that they can be used as Tools by the AI node. Don't forget to add the Execute Sub-workflow trigger.
2. Set up credentials for OpenAI, OneDrive, and Telegram.
3. Upload your CV and text file summary to OneDrive and replace the document IDs in the get_documents sub-workflow.
4. Activate the workflow so that the publicly available chat interface is generated on n8n.
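The "unanswered question" sub-workflow boils down to a single Telegram sendMessage call. A minimal sketch with a placeholder token and chat ID (in the template this is handled by the Telegram node inside the sub-workflow):

```ts
// Minimal sketch: forward an unanswered chatbot question to your own Telegram chat.
// BOT_TOKEN and CHAT_ID are placeholders for your bot token and personal chat ID.
const BOT_TOKEN = process.env.TELEGRAM_BOT_TOKEN ?? "123456:ABC-DEF...";
const CHAT_ID = process.env.TELEGRAM_CHAT_ID ?? "11111111";

async function notifyUnanswered(question: string): Promise<void> {
  await fetch(`https://api.telegram.org/bot${BOT_TOKEN}/sendMessage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      chat_id: CHAT_ID,
      text: `CV chatbot could not answer:\n${question}`,
    }),
  });
}

notifyUnanswered("What was your exact role on the 2021 data platform project?");
```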
by Muhammad Nouman
How it works
This workflow turns a Google Drive folder into a fully automated YouTube publishing pipeline. Whenever a new video file is added to the folder, the workflow generates all YouTube metadata using AI, uploads the video to your YouTube channel, deletes the original file from Drive, sends a Telegram confirmation, and can optionally post to Instagram and Facebook using permanent system tokens.

High-level flow:
1. Detects new video uploads in a specific Google Drive folder.
2. Downloads the file and uses AI to generate:
   - a polished first-person YouTube description
   - an SEO-optimized YouTube title
   - high-ranking YouTube tags
   (a hedged sketch of this metadata-generation call follows below)
3. Uploads the video to YouTube with the generated metadata.
4. Deletes the original Drive file after upload.
5. Sends a Telegram notification with video details.
6. (Optional) Posts to Instagram & Facebook using permanent system user tokens.

Set up steps
Setup usually takes a few minutes.
1. Add Google Drive OAuth2 credentials for the trigger and download/delete nodes.
2. Add your OpenAI (or Gemini) API credentials for title/description/tag generation.
3. Add YouTube OAuth2 credentials in the YouTube Upload node.
4. Add Facebook/Instagram Graph API credentials if enabling cross-posting.
5. Replace placeholder IDs (Drive folder ID, Page ID, IG media endpoint).
6. Review the sticky notes in the workflow—they contain setup guidance and token info.
7. Activate the Google Drive trigger to start automated uploads.
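As a rough illustration of the metadata step, here is a hedged sketch of asking OpenAI for a title, description, and tags as JSON. The model name, prompt wording, and output fields are assumptions for illustration; the workflow's own AI node and prompt may differ.

```ts
// Hedged sketch: generate YouTube metadata as JSON via the OpenAI Chat Completions API.
// Model name, prompt, and output fields are illustrative assumptions.
const OPENAI_KEY = process.env.OPENAI_API_KEY ?? "sk-...";

interface VideoMetadata {
  title: string;
  description: string;
  tags: string[];
}

async function generateMetadata(videoSummary: string): Promise<VideoMetadata> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${OPENAI_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // assumption; use whichever model the workflow is configured with
      response_format: { type: "json_object" },
      messages: [
        {
          role: "system",
          content:
            "Return JSON with keys title, description, tags. Write a first-person, SEO-friendly description and up to 15 tags.",
        },
        { role: "user", content: videoSummary },
      ],
    }),
  });
  const body = await res.json();
  return JSON.parse(body.choices[0].message.content) as VideoMetadata;
}

generateMetadata("A 10-minute walkthrough of automating YouTube uploads with n8n.").then(console.log);
```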
by Intuz
This n8n template from Intuz provides a complete and automated solution for creating an autonomous social media manager. The workflow uses an AI agent to generate unique, high-quality content, check for duplicates, and post on a consistent schedule to automate your entire Twitter presence.

Who's this workflow for?
- Social Media Managers
- Marketing Teams & Agencies
- Startup Founders & Solopreneurs
- Content Creators

How it works
1. Runs on a Schedule: The workflow automatically starts at a set interval (e.g., every 6 hours), ensuring a consistent posting schedule.
2. AI Generates a New Tweet: An AI Agent, powered by OpenAI, uses a detailed prompt to craft a new, engaging tweet. The prompt defines the tone, topics, character limits, and hashtags.
3. Checks for Duplicates: Before finalizing the tweet, the AI Agent uses a tool to read a Google Sheet containing a log of all previously published posts, so the new content is always unique.
4. Posts to Twitter (X): The final, unique tweet is automatically posted to your connected Twitter account (a hedged sketch of the underlying API call follows at the end of this section).
5. Logs the New Post: After posting, the workflow logs the new tweet back into the Google Sheet, updating the history for the next run. This completes the autonomous loop.

Setup Instructions
1. Schedule Your Posts: In the Start Workflow (Schedule Trigger) node, set the frequency you want the workflow to run (e.g., every 6 hours).
2. Connect OpenAI: Add your OpenAI API key in the OpenAI Chat Model node. Customize the prompt in the AI Agent node to match your brand's voice, target keywords, and specific URLs.
3. Configure Google Sheets: Connect your Google Sheets account. Create a sheet with two columns: Tweet Content and Status. In both the Get Data from Google Sheet and Add new Tweet to Google sheet nodes, select your credentials and specify the Document ID and Sheet Name.
4. Connect Twitter (X): In the Create Tweet node, connect the Twitter account where you want to post.
5. Activate Workflow: Save the workflow and toggle the "Active" switch to ON. Your AI social media manager is now live!

Key Requirements to Use This Template
Before you start, please ensure you have the following accounts and assets ready:
- An n8n Instance: An active n8n account (Cloud or self-hosted) where you can import and run this workflow.
- OpenAI Account: An active OpenAI account with an API key. Billing must be enabled to use the language models for tweet generation.
- Google Account & Sheet: A Google account and a pre-made Google Sheet. The sheet must have two specific columns: Tweet Content and Status.
- Twitter (X) Developer Account: A Twitter (X) account with an approved Developer profile. You need an App created within the Developer Portal with the necessary permissions (v2 API access with Write scopes) to post tweets automatically.

Connect with us
Website: https://www.intuz.com/services
Email: getstarted@intuz.com
LinkedIn: https://www.linkedin.com/company/intuz
Get Started: https://n8n.partnerlinks.io/intuz
For Custom Workflow Automation, click here: Get Started
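Under the hood, the Create Tweet node posts to the X API v2 tweet endpoint. A minimal, hedged sketch is below; note that posting requires a user-context OAuth token with write scope (an app-only bearer token cannot post), which the n8n Twitter credential normally manages for you.

```ts
// Minimal sketch: publish a tweet via the X API v2. USER_ACCESS_TOKEN is a placeholder
// for a user-context OAuth 2.0 token with tweet.write scope (app-only tokens cannot post).
const USER_ACCESS_TOKEN = process.env.X_USER_ACCESS_TOKEN ?? "<user-context-token>";

async function createTweet(text: string): Promise<string> {
  const res = await fetch("https://api.twitter.com/2/tweets", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${USER_ACCESS_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text }),
  });
  const body = await res.json();
  if (!res.ok) throw new Error(`X API error: ${JSON.stringify(body)}`);
  return body.data.id; // ID of the newly created tweet
}

createTweet("Shipping a tiny automation win today 🚀 #n8n #buildinpublic").then(console.log);
```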
by Satyam Tripathi
Try It Out!
This n8n template demonstrates how to build an autonomous AI news agent using Decodo MCP that automatically finds, scrapes, and delivers fresh industry news to your team via Slack.

Use cases are many – automated news monitoring for your industry, competitive intelligence gathering, startup monitoring, regulatory updates, research automation, or daily briefings for your organization.

How it works
- Define your news topics using the Set node – AI, MCP, web scraping, whatever matters to your business.
- The AI Agent processes those topics using the Gemini Chat Model, determining which tools to use and when.
- Here's where it gets interesting: Decodo MCP gives your AI agent the tools to search Google, scrape websites, and parse content automatically – all while bypassing geo-restrictions and anti-bot measures.
- The agent hunts for fresh articles from the last 48 hours, extracts clean data, and returns structured JSON results.
- Format Results cleans up the AI's messy output and removes duplicates (a hedged Code-node sketch follows at the end of this section).
- Your polished news digest gets delivered to Slack with clickable links and summaries.

How to use
- The Schedule trigger runs daily at 9 AM – adjust the timing or swap in a webhook trigger as needed.
- Customize topics in the Set node to match your industry.
- Scales effortlessly: add more topics, tweak search criteria, done.

Requirements
- Decodo MCP credentials (free trial available) – grab the Smithery connection URL with keys and paste it straight into your n8n MCP node. Done.
- Gemini API key for the AI processing – drop it into the Google Gemini Chat Model node and pick whichever Gemini model fits your needs.
- Slack workspace for delivery – n8n's Slack integration docs have you covered.

What the final output looks like
Here's what your team receives in Slack every morning.

Need help? Join the Discord or email support@decodo.com for questions. Happy Automating!
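For reference, the kind of clean-up the Format Results step performs can be expressed as a short n8n Code node. This is a hedged sketch that assumes the agent returns items with title, url, and summary fields; the actual field names in the template may differ.

```ts
// Hedged n8n Code node sketch (runs once for all items): drop malformed entries and
// de-duplicate by URL. Field names (title, url, summary) are assumptions about the
// agent's JSON output, not the template's exact schema.
const seen = new Set();
const cleaned = [];

for (const item of $input.all()) {
  const { title, url, summary } = item.json;
  if (!title || !url) continue; // skip malformed entries from the LLM
  if (seen.has(url)) continue;  // skip duplicates across topics
  seen.add(url);
  cleaned.push({ json: { title, url, summary: summary ?? "" } });
}

return cleaned;
```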
by Max
N8N Automated Twitter Reply Bot Workflow

For the latest version, check: dziura.online/automation
The latest documentation can be found here.
You must have the Apify community node installed before pasting the JSON into your workflow.

Overview
This n8n workflow creates an intelligent Twitter/X reply bot that automatically scrapes tweets based on keywords or communities, analyzes them using AI, generates contextually appropriate replies, and posts them while avoiding duplicates. The bot operates on a schedule with intelligent timing and retry mechanisms.

Key Features
- **Automated tweet scraping** from Twitter/X using Apify actors
- **AI-powered reply generation** using an LLM (Large Language Model)
- **Duplicate prevention** via MongoDB storage
- **Smart scheduling** with timezone awareness and natural posting patterns
- **Retry mechanism** with failure tracking
- **Telegram notifications** for status updates
- **Manual trigger** option via Telegram command

Required Credentials & Setup

1. Telegram Bot
- Create a bot via @BotFather on Telegram
- Get your Telegram chat ID to receive status messages
- **Credential needed**: Telegram account (bot token)

2. MongoDB Database
- Set up a MongoDB database to store replied tweets and prevent duplicates
- Create a collection (default name: collection_name)
- **Credential needed**: MongoDB account (connection string)
- **Tutorial**: MongoDB Connection Guide

3. Apify Account
- Sign up at Apify.com
- **Primary actors used**:
  - Search Actor: api-ninja/x-twitter-advanced-search – for keyword-based tweet scraping (ID: 0oVSlMlAX47R2EyoP)
  - Community Actor: api-ninja/x-twitter-community-search-post-scraper – for community-based tweet scraping (ID: upbwCMnBATzmzcaNu)
- **Credential needed**: Apify account (API token)

4. OpenRouter (LLM Provider)
- Sign up at OpenRouter.ai
- Used for AI-powered tweet analysis and reply generation
- **Model used**: x-ai/grok-3 (configurable)
- **Credential needed**: OpenRouter account (API key)

5. Twitter/X API
- Set up a developer account at developer.x.com
- **Note**: the free tier is limited to ~17 posts per day
- **Credential needed**: X account (OAuth2 credentials)

Workflow Components

Trigger Nodes

1. Schedule Trigger
- **Purpose**: Runs automatically every 20 minutes
- **Smart timing**: Only active between 7 AM - 11:59 PM (configurable timezone)
- **Randomization**: Built-in probability control (~28% execution chance) to mimic natural posting patterns

2. Manual Trigger
- **Purpose**: Manual execution for testing

3. Telegram Trigger
- **Purpose**: Manual execution via the /reply command in Telegram
- **Usage**: Send /reply to your bot to trigger the workflow manually

Data Processing Flow

1. MongoDB Query (Find documents)
- **Purpose**: Retrieves previously replied tweet IDs to avoid duplicates
- **Collection**: collection_name (configure to match your setup)
- **Projection**: Only fetches the tweet_id field for efficiency

2. Data Aggregation (Aggregate1)
- **Purpose**: Consolidates tweet IDs into a single array for filtering

3. Keyword/Community Selection (Keyword/Community List)
- **Purpose**: Defines search terms and communities
- **Configuration**: Edit the JSON to include your keywords and Twitter community IDs
- **Format**:

```json
{
  "keyword_community_list": [
    "SaaS",
    "Entrepreneur",
    "1488663855127535616"
  ],
  "failure": 0
}
```

(The third entry is a community ID – a 19-digit number.)

4. Random Selection (Randomized community, keyword)
- **Purpose**: Randomly selects one item from the list to ensure variety

5.
Routing Logic (If4)
- **Purpose**: Determines whether to use Community search or Keyword search
- **Logic**: Uses a regex to detect 19-digit community IDs vs keywords

Tweet Scraping (Apify Actors)

Community Search Actor
- **Actor**: api-ninja/x-twitter-community-search-post-scraper
- **Purpose**: Scrapes tweets from specific Twitter communities
- **Configuration**:

```json
{
  "communityIds": ["COMMUNITY_ID"],
  "numberOfTweets": 40
}
```

Search Actor
- **Actor**: api-ninja/x-twitter-advanced-search
- **Purpose**: Scrapes tweets based on keywords (see the standalone API sketch at the end of this guide)
- **Configuration**:

```json
{
  "contentLanguage": "en",
  "engagementMinLikes": 10,
  "engagementMinReplies": 5,
  "numberOfTweets": 20,
  "query": "KEYWORD",
  "timeWithinTime": "2d",
  "tweetTypes": ["original"],
  "usersBlueVerifiedOnly": true
}
```

Filtering System (Community filter)
The workflow applies multiple filters to ensure high-quality replies:
- **Text length**: >60 characters (substantial content)
- **Follower count**: >100 followers (audience reach)
- **Engagement**: >10 likes, >3 replies (proven engagement)
- **Language**: English only
- **Views**: >100 views (visibility)
- **Duplicate check**: Not previously replied to
- **Recency**: Within 2 days (configurable in actor settings)

AI-Powered Reply Generation

LLM Chain (Basic LLM Chain)
- **Purpose**: Analyzes filtered tweets and generates contextually appropriate replies
- **Model**: Grok-3 via OpenRouter (configurable)
- **Features**:
  - Engagement potential scoring
  - User authority analysis
  - Timing optimization
  - Multiple reply styles (witty, informative, supportive, etc.)
  - <100 character limit for optimal engagement

Output Parser (Structured Output Parser)
- **Purpose**: Ensures a consistent JSON output format
- **Schema**:

```json
{
  "selected_tweet_id": "tweet_id_here",
  "screen_name": "author_screen_name",
  "reply": "generated_reply_here"
}
```

Posting & Notification System

Twitter Posting (Create Tweet)
- **Purpose**: Posts the generated reply as a Twitter response
- **Error handling**: Catches API limitations and rate limits

Status Notifications
- **Success**: Notifies via Telegram with the tweet link and reply text
- **Failure**: Notifies about API limitations or errors
- **Format**: HTML-formatted messages with clickable links

Database Storage (Insert documents)
- **Purpose**: Saves successful replies to prevent future duplicates
- **Fields stored**: tweet_id, screen_name, reply, tweet_url, timestamp

Retry Mechanism
The workflow includes intelligent retry logic:

Failure Counter (If5, Increment Failure Counter1)
- **Logic**: If no suitable tweets are found, increment the failure counter
- **Retry limit**: Maximum 3 retries with different random keywords
- **Wait time**: 3-second delay between retries

Final Failure Notification
- **Trigger**: After 4 failed attempts
- **Action**: Sends a Telegram notification about the unsuccessful search
- **Recovery**: Manual retry available via the /reply command

Configuration Guide

Essential Settings to Modify
1. MongoDB Collection Name: Update collection_name in the MongoDB nodes
2. Telegram Chat ID: Replace 11111111111 with your actual chat ID
3. Keywords/Communities: Edit the list in the Keyword/Community List node
4. Timezone: Update the timezone in the Code node (currently set to Europe/Kyiv)
5. Actor Selection: Enable only one actor (Community OR Search) based on your needs

Filter Customization
Adjust filters in the Community filter node based on your requirements:
- Minimum engagement thresholds
- Text length requirements
- Time windows
- Language preferences

LLM Customization
Modify the AI prompt in the Basic LLM Chain to:
- Change reply style and tone
- Adjust engagement criteria
- Modify scoring algorithms
- Set different character limits

Usage Tips
- Start small: Begin with a few high-quality
  keywords/communities
- Monitor performance: Use Telegram notifications to track success rates
- Adjust filters: Fine-tune based on the quality of generated replies
- Respect limits: Twitter's free tier allows ~17 posts/day
- Test manually: Use the /reply command for testing before scheduling

Troubleshooting Common Issues
- No tweets found: Adjust filter criteria or check keywords
- API rate limits: Reduce posting frequency or upgrade your Twitter API plan
- MongoDB connection: Verify the connection string and collection name
- Apify quota: Monitor Apify usage limits
- LLM failures: Check OpenRouter credits and model availability

Best Practices
- Monitor your bot's replies for quality and appropriateness
- Regularly update keywords to stay relevant
- Keep an eye on engagement metrics
- Adjust timing based on your audience's activity patterns
- Maintain a balanced posting frequency to avoid appearing spammy

Documentation Links
- **Full Documentation**: Google Doc Guide
- **Latest Version**: dziura.online/automation
- **MongoDB Setup Tutorial**: YouTube Guide

This workflow provides a comprehensive solution for automated, intelligent Twitter engagement while maintaining quality and avoiding spam-like behavior.
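If you want to test the scraping step outside n8n, the Apify actors can also be run directly through Apify's REST API. A hedged sketch using the run-sync-get-dataset-items endpoint is below; the token is a placeholder, and the actor path and endpoint name should be double-checked against Apify's current API documentation. The input mirrors the Search Actor configuration shown above.

```ts
// Hedged sketch: run the keyword Search Actor via Apify's REST API and read the items.
// APIFY_TOKEN is a placeholder; verify the endpoint name and actor path in Apify's docs.
const APIFY_TOKEN = process.env.APIFY_TOKEN ?? "<apify-token>";
const ACTOR = "api-ninja~x-twitter-advanced-search"; // "~" separates user and actor in API paths

async function scrapeTweets(keyword: string) {
  const url = `https://api.apify.com/v2/acts/${ACTOR}/run-sync-get-dataset-items?token=${APIFY_TOKEN}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contentLanguage: "en",
      engagementMinLikes: 10,
      engagementMinReplies: 5,
      numberOfTweets: 20,
      query: keyword,
      timeWithinTime: "2d",
      tweetTypes: ["original"],
      usersBlueVerifiedOnly: true,
    }),
  });
  return res.json(); // array of scraped tweet objects
}

scrapeTweets("SaaS").then((tweets) => console.log(tweets.length, "tweets"));
```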
by Halfbit 🚀
AI-Powered Invoice Processing: from Email to Database & Chat Notifications

Automatically process PDF invoices directly from your email inbox. This workflow uses AI to extract key data, saves it to a PostgreSQL database, and instantly notifies you about the new document in your preferred chat application. The workflow listens for new emails, fetches PDF attachments, and passes their content to a Large Language Model (LLM) for intelligent recognition and data extraction. Finally, the information is archived in the database, and a summary of the invoice is sent as a notification.

> 📝 This workflow is highly customizable.
> It uses PostgreSQL, OpenAI (GPT), and Discord by default, but you can easily swap these components.
> Feel free to use a different database like MySQL or Airtable, another AI model provider, or send notifications to Slack, MS Teams, or any other chat platform.

> ⚠️ Note: If the workflow fails to extract data correctly from invoices issued by certain companies, you may need to adjust the prompt used in the Basic LLM Chain node to improve parsing accuracy.

Use Case
- Automating accounts payable for small businesses and freelancers
- Centralizing financial documents without manual data entry
- Creating a searchable database of all incoming invoices
- Receiving real-time notifications for new financial commitments

Features
- 📧 **Email Trigger (IMAP)**: Monitors a dedicated email inbox for new messages with attachments
- 📄 **PDF Filtering**: Automatically identifies and processes only PDF attachments
- 🤖 **AI-Powered Data Extraction**: Uses an LLM (e.g., GPT-4o-mini) to extract invoice number, buyer/seller details, amounts, currency, and due dates
- ⚙️ **Structured Data Output**: Converts AI output to standardized JSON
- 🔍 **Database Write Logic**: Prevents duplicates by checking the invoice/company combination
- 🗄️ **PostgreSQL Integration**: Stores extracted data in the company and invoice tables
- 💬 **Chat Notifications**: Sends an invoice summary as a message to a designated channel

Setup Instructions

⚠️ API Access & Costs
To use the AI extraction feature, you need an API key from a provider like OpenAI. Most providers charge for access to language models, so you'll likely need a billing account.

1. PostgreSQL Database Configuration
Ensure your database has the following tables:

```sql
-- Table for companies (invoice issuers)
CREATE TABLE company (
  id SERIAL PRIMARY KEY,
  tax_number VARCHAR(255) UNIQUE NOT NULL,
  name VARCHAR(255),
  address TEXT,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);

-- Table for invoices
CREATE TABLE invoice (
  id SERIAL PRIMARY KEY,
  company_id INTEGER REFERENCES company(id),
  invoice_number VARCHAR(255) NOT NULL,
  -- Add other fields: total_to_pay, currency, due_date
  created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
  UNIQUE(company_id, invoice_number)
);
```

Then, in n8n, create a credential for your PostgreSQL DB.

2. Email (IMAP) Configuration
In n8n, add credentials for the email account that receives invoices:
- IMAP host
- IMAP port
- Username
- Password

3. AI Provider Configuration
- Log in to OpenAI (or a similar provider)
- Generate an API key
- In n8n, create credentials and paste the key

4.
Chat Notification (Discord)
- Go to Discord > Server Settings > Integrations > Webhooks > New Webhook
- Select a channel
- Copy the Webhook URL
- In n8n, paste the URL into the Discord node (a hedged webhook call sketch follows at the end of this section)

Placeholders and Fields to Fill

| Placeholder | Description | Example |
|---|---|---|
| YOUR_EMAIL_CREDENTIALS | Your IMAP email account in n8n | My Invoice Mailbox |
| YOUR_OPENAI_CREDENTIALS | API credentials for AI model | My OpenAI Key |
| YOUR_POSTGRES_CREDENTIALS | Your PostgreSQL DB credentials in n8n | My Production DB |
| YOUR_DISCORD_WEBHOOK | Webhook URL for your chat system | https://discord.com/api/webhooks/... |

Testing the Workflow
1. Send a test invoice to the inbox as a PDF attachment
2. Run the workflow manually in n8n and check that the IMAP node fetches the message
3. Verify AI extraction – inspect the LLM output (e.g., the GPT node) and confirm the structured JSON
4. Check the DB – ensure new rows appear in company and invoice
5. Check the chat – verify the invoice summary appears in the chosen channel

Customization Tips
- **Change the DB**: Use MySQL, Airtable, or Google Sheets instead of PostgreSQL
- **Other notifications**: Swap Discord for Slack, MS Teams, Telegram, etc.
- **Expand AI logic**: Extract line items, prices, etc. by customizing the prompt
- **Add payment logic**: Allow marking invoices as paid via emoji or a separate webhook
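The notification step is just an HTTP POST to the webhook URL you copied. A minimal sketch with placeholder values (the Discord node in n8n does the same thing for you):

```ts
// Minimal sketch: send an invoice summary to a Discord channel via an incoming webhook.
// WEBHOOK_URL and the summary fields are placeholders.
const WEBHOOK_URL = process.env.DISCORD_WEBHOOK_URL ?? "https://discord.com/api/webhooks/...";

async function notifyInvoice(invoiceNumber: string, seller: string, total: string, dueDate: string) {
  await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      content: `🧾 New invoice ${invoiceNumber} from ${seller} — total ${total}, due ${dueDate}`,
    }),
  });
}

notifyInvoice("FV-2024-0042", "Acme Ltd.", "1,230.00 EUR", "2024-07-15");
```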
by Intuz
Disclaimer: Community nodes are used, so this template can only be used on self-hosted n8n instances.

This n8n template from Intuz provides a complete solution to automate your entire B2B lead generation pipeline, from discovering recently funded companies to drafting hyper-personalized outreach emails with AI.

Who's this workflow for?
- Sales Development Representatives (SDRs)
- Business Development Teams
- Growth Hackers
- Startup Founders
- Marketing Agencies

How it works
1. Scrape Funded Companies: The workflow begins by using Apify to scrape a target list of recently funded companies directly from a Crunchbase search.
2. Enrich with Apollo.io: It takes each company and uses the Apollo.io API to find key decision-makers (like VPs and Directors) and enrich their contact information, including finding their email addresses.
3. Populate Google Sheets: All the gathered lead data—company name, contact name, title, email, LinkedIn URL, etc.—is neatly organized and added to a Google Sheet.
4. AI-Personalized Email Crafting: The workflow sends the lead's information to OpenAI (GPT-4) with a highly specialized prompt, instructing it to write a concise, impactful, and hyper-personalized "first touch" cold email.
5. Update Lead List with Email Content: Finally, the unique, AI-generated email is saved back into the Google Sheet alongside the corresponding lead's information, ready for you to send.

Pre-conditions and Requirements
Before you can successfully execute this workflow, you must have the following accounts, credentials, and assets in place:
1. n8n Instance: You need an active n8n instance (self-hosted).
2. Apify Account & Crunchbase Access: A registered Apify account and an active, logged-in Crunchbase account (a paid subscription is recommended for accessing detailed search filters).
3. Apollo.io API: You need an Apollo.io plan that includes API access. You can generate the API key from settings.
4. Google Sheet: Create a new Google Sheet to store your leads. The workflow is configured for two tabs: one for raw data ("HealthCare" in the template) and one for email generation ("Company sheet").
5. OpenAI Account: An account with OpenAI with API access and billing set up.

Setup Instructions
1. Apify Connection: Connect your Apify account in the Run an Actor node. You'll need an Apify scraper; here's the link. In the Custom Body field, update the search.url with your target Crunchbase discovery URL and provide a valid cookie for authentication.
2. Apollo.io Connection: Connect your Apollo.io account using HTTP Header Authentication in the three Apollo nodes. You will need to provide your API key.
3. Google Sheets Connection: Connect your Google Sheets account. Create a spreadsheet and update the Document ID and Sheet Name in the three Google Sheets nodes to match yours. Ensure your sheet columns are set up to receive the data.
4. OpenAI Connection: Connect your OpenAI account in the Message a model node. The prompt is pre-engineered for high-quality output, but you can tailor it to better fit your specific value proposition.
5. Activate Workflow: Click "Execute workflow" to run the automation manually and watch your AI-powered lead list build itself.

Customization Guide
This workflow is a powerful template. To adapt it to your specific business needs, review and modify the following nodes.

1. Changing Your Target Companies (The Source)
Node: Run an Actor
What to change: The search.url parameter inside the customBody.
How to do it:
- Go to Crunchbase and perform a search for your ideal companies (e.g., filter by funding rounds, industry, location, keywords, etc.).
- Copy the URL from your browser's address bar after the search results have loaded.
- Paste this new URL as the value for "search.url" in the node.
- You can also adjust "count": 10 to pull more or fewer companies per run. Be mindful of Apify and Apollo credit usage.

2. Defining Your Ideal Contact Persona
Node: Apollo - Get User
What to change: The person_seniorities and person_titles arrays in the jsonBody (a hedged request-body sketch follows at the end of this section).
How to do it:
- Seniority: Modify the person_seniorities list to match who you sell to. Examples: ["c_level", "founder"] or ["manager", "contributor"].
- Job Titles: This is crucial. Replace the existing list of titles ("engineering", "technology", etc.) with keywords relevant to your target buyer. For example, if you sell to marketing teams, you might use: ["marketing", "demand generation", "growth", "content", "brand"].

3. Configuring Your Google Sheet Destination
Nodes: Append or update row in sheet and Update row in sheet
What to change: The documentId and sheetName.
How to do it:
- Open your Google Sheet. The documentId is the long string of characters in the URL between /d/ and /edit. Copy and paste it into the "Document ID" field in both nodes.
- The sheetName (or Sheet ID/gid) needs to be set for your specific tabs. Make sure the sheet names/IDs in the nodes match the tabs in your document.
- Column Mapping: If you change the column names in your Google Sheet, you must update the column mapping inside these nodes so the data is written to the correct place.

4. Tailoring the AI Email Generation
Node: Message a model (OpenAI)
What to change: The prompt, the model, and the input variables.
How to do it:
- The Prompt: This is the heart of your outreach. Read the entire prompt carefully and edit it to reflect your company's value proposition, tone of voice, and specific call-to-action.
  - Value Proposition: Change the line "We help them cut that specific infrastructure spend..." to match what your product does. Use a powerful, single data point if you have one.
  - Call-to-Action (CTA): Modify the final question ("Curious if infra efficiency is on your roadmap...") to something that fits your sales process.
  - Tone: Adjust the initial instructions (e.g., "Your tone is that of a peer...") if you want a different style.
- The Model: The workflow uses gpt-4.1. You can switch to a different model such as gpt-4o (potentially better/faster) or gpt-3.5-turbo (much cheaper, but lower quality) depending on your budget and needs.
- Input Variables: The prompt uses {{ $json['Company Name'] }}, {{ $json['Person Designation'] }}, and {{ $json.Industry }}. If you want to add more personalization (e.g., based on a company's funding amount), you must first ensure that data is passed to this node, then add the new variable (e.g., {{ $json['Funding Amount'] }}) into the prompt.

Connect with us
Website: https://www.intuz.com/services
Email: getstarted@intuz.com
LinkedIn: https://www.linkedin.com/company/intuz
Get Started: https://n8n.partnerlinks.io/intuz
For Custom Workflow Automation, click here: Get Started
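For orientation, the jsonBody in the Apollo - Get User node is a people-search request along these lines. This is a hedged sketch: the endpoint path, auth header name, and pagination fields are shown as commonly documented for Apollo's REST API and may differ on your plan, so confirm them against Apollo's API reference. The person_seniorities and person_titles values are the ones you would customize as described above.

```ts
// Hedged sketch of an Apollo.io people-search call. Endpoint, auth header, and field
// names are assumptions based on Apollo's public API docs; verify before use.
const APOLLO_KEY = process.env.APOLLO_API_KEY ?? "<apollo-api-key>";

async function findDecisionMakers(companyDomain: string) {
  const res = await fetch("https://api.apollo.io/v1/mixed_people/search", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Api-Key": APOLLO_KEY, // header-based auth, matching the HTTP Header Authentication credential
    },
    body: JSON.stringify({
      q_organization_domains: companyDomain,          // assumed filter field
      person_seniorities: ["vp", "director"],         // customize per your persona
      person_titles: ["engineering", "technology"],   // customize per your buyer
      page: 1,
      per_page: 5,
    }),
  });
  return res.json();
}

findDecisionMakers("example.com").then(console.log);
```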
by Don Jayamaha Jr
Instantly fetch real-time Bitget spot market data directly in Telegram! This workflow integrates the Bitget REST v2 API with Telegram (plus optional AI-powered formatting) to deliver the latest crypto price, order book, candles, and recent trades. Perfect for crypto traders, analysts, and investors who need reliable market data at their fingertips—no API key required.

Sign up for Bitget for 6,200 USDT in rewards to trade: Collect Now

How It Works
- A Telegram bot listens for user requests (e.g., BTCUSDT).
- The workflow connects to Bitget public endpoints to fetch:
  - Ticker (latest price & 24h stats)
  - Order book depth (top bids/asks)
  - Recent trades (price, side, volume, timestamp)
  - Candlestick data (1m, 15m, 1h, 4h, 1d)
  - Historical candles (optional, for backfill before endTime)
- A Calculator node derives useful metrics like mid-price and spread.
- A Think node reshapes raw JSON into Telegram-ready text.
- A splitter ensures reports over 4000 characters are chunked safely.
- The final market insights are delivered instantly back to Telegram.
(Hedged request sketches for the public endpoints follow below.)

What You Can Do with This Agent
✅ Track live prices & 24h stats for any Bitget spot pair.
✅ Monitor order book liquidity and spreads in real time.
✅ Analyze candlesticks across multiple timeframes.
✅ Review recent trades to see execution flow.
✅ Fetch historical candles for extended market context.
✅ Receive clean, structured reports with optional AI-enhanced formatting.

Set Up Steps
1. Create a Telegram Bot: Use @BotFather to generate a bot token.
2. Configure in n8n: Import Bitget AI Agent v1.02.json into your n8n instance. Add your Telegram credentials (bot token + your Telegram ID in the User Authentication node). Add an OpenAI key if you want AI-powered formatting. (Optional) Add a Bitget API key.
3. Deploy and Test: Send BTCUSDT to your bot and get live Bitget spot data instantly in Telegram! 🚀

Unlock powerful, real-time Bitget insights in Telegram—zero setup, zero API keys required!

📺 Setup Video Tutorial
Watch the full setup guide on YouTube:

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
🔗 For support: Don Jayamaha – LinkedIn
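Below is a hedged sketch of the public market-data calls behind the agent. The endpoint paths follow Bitget's v2 spot REST API as commonly documented (no API key is needed for public market data); the response field names and parameters such as granularity should be verified against the current Bitget docs before relying on them.

```ts
// Hedged sketch of Bitget v2 public spot endpoints. Paths, parameters, and response
// field names are assumptions based on Bitget's public docs; verify before use.
const BASE = "https://api.bitget.com";

async function getJson(path: string) {
  const res = await fetch(`${BASE}${path}`);
  return res.json(); // Bitget typically wraps payloads as { code, msg, requestTime, data }
}

async function snapshot(symbol: string) {
  const ticker = await getJson(`/api/v2/spot/market/tickers?symbol=${symbol}`);
  const book = await getJson(`/api/v2/spot/market/orderbook?symbol=${symbol}&limit=15`);
  const candles = await getJson(`/api/v2/spot/market/candles?symbol=${symbol}&granularity=1h&limit=50`);

  // Mid-price and spread from the top of the book, as the Calculator node does.
  const bestBid = Number(book.data?.bids?.[0]?.[0]);
  const bestAsk = Number(book.data?.asks?.[0]?.[0]);
  return {
    ticker: ticker.data?.[0],            // latest price & 24h stats (field names per Bitget docs)
    midPrice: (bestBid + bestAsk) / 2,
    spread: bestAsk - bestBid,
    candleCount: candles.data?.length,
  };
}

snapshot("BTCUSDT").then(console.log);
```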