by Rod
Telegram Personal Assistant with Long-Term Memory & Note-Taking This n8n workflow transforms your Telegram bot into a powerful personal assistant that handles voice, photo, and text messages. The assistant uses AI to interpret messages, save important details as long-term memories or notes in a Baserow database, and recall information for future interactions. 🌟 How It Works Message Reception & Routing Telegram Integration: The workflow is triggered by incoming messages on your Telegram bot. Dynamic Routing: A switch node inspects the message to determine whether it's voice, text, or photo (with captions) and routes it for the appropriate processing. Content Processing Voice Messages: Audio files are retrieved and sent to an AI transcription node to convert spoken words into text. Text Messages: Text is directly captured and prepared for analysis. Photos: If an image is received, the bot fetches the file (and caption, if provided) and uses an AI-powered image analysis node to extract relevant details. AI-Powered Agent & Memory Management The core AI agent (powered by GPT-4o-mini) processes the incoming message along with any previous conversation history stored in PostgreSQL memory buffers. Long-Term Memory: When a message contains personal or noteworthy information, the assistant uses a dedicated tool to save this data as a long-term memory in Baserow. Note-Taking: For specific instructions or reminders, the assistant saves concise notes in a separate Baserow table. The AI agent follows defined rules to decide which details are saved as memories and which are saved as notes. Response Generation After processing the message and updating memory/notes as needed, the AI agent crafts a contextual and personalized response. The response is sent back to the user via Telegram, ensuring smooth and natural conversation flow. 🚀 Key Features Multimodal Input:** Seamlessly handles voice, photo (with captions), and text messages. Long-Term Memory & Note-Taking:** Uses a Baserow database to store personal details and notes, enhancing conversational context over time. AI-Driven Contextual Responses:** Leverages an AI agent to generate personalized, context-aware replies based on current input and past interactions. User Security & Validation:** Incorporates validation steps to verify the user's Telegram ID before processing, ensuring secure and personalized interactions. Easy Baserow Setup:** Comes with a clear setup guide and sample configurations to quickly integrate Baserow for managing memories and notes. 🔧 Setup Guide Telegram Bot Setup: Create your bot via BotFather and obtain the Bot Token. Configure the Telegram webhook in n8n with your bot's token and URL. Baserow Database Configuration: Memory Table: Create a workspace titled "Memories and Notes". Set up a table (e.g., "Memory Table") with at least two fields: Memory (long text) Date Added (US date format with time) Notes Table: Duplicate the Memory Table and rename it to "Notes Table". Change the first field's name from "Memory" to "Notes". n8n Workflow Import & Configuration: Import the workflow JSON into your n8n instance. Update credentials for Telegram, Baserow, OpenAI, and PostgreSQL (for memory buffering) as needed. Adjust node settings if you need to customize AI agent prompts or memory management rules. Testing & Deployment: Test your bot by sending various message types (text, voice, photo) to confirm that the workflow processes them correctly, updates Baserow, and returns the appropriate response. 
Monitor logs to ensure that memory and note entries are correctly stored and retrieved. ✨ Example Interactions Voice Message Processing:** User sends a voice note requesting a reminder. Bot Response: "Thanks for your message! I've noted your reminder and saved it for future reference." Photo with Caption:** User sends a photo with the caption "Save this recipe for dinner ideas." Bot Response: "Got it! I've saved this recipe along with the caption for you." Text Message for Memory Saving:** User: "I love hiking on weekends." Bot Response: "Noted! I’ll remember your interest in hiking." Retrieving Information:** User asks: "What notes do I have?" Bot Response: "Here are your latest notes: [list of saved notes]." 🛠️ Resources & Next Steps Telegram Bot Configuration:** Telegram BotFather Guide n8n Documentation:** n8n Docs Community Forums:** Join discussions and share your customizations! This workflow not only streamlines message processing but also empowers users with a personal AI assistant that remembers details over time. Customize the rules and responses further to fit your unique requirements and enjoy a more engaging, intelligent conversation experience on Telegram!
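For reference, here is a minimal sketch of the kind of request the Baserow memory-saving tool performs, assuming the Memory Table described above; the instance URL, table ID, and token are placeholders:

```typescript
// Minimal sketch: append a long-term memory row to the Baserow "Memory Table".
// Assumptions: a Baserow instance reachable at BASEROW_URL, a database token with
// create permissions, and a table whose fields are named "Memory" and "Date Added".
const BASEROW_URL = "https://api.baserow.io"; // placeholder instance URL
const TABLE_ID = 12345;                       // placeholder Memory Table ID
const TOKEN = process.env.BASEROW_TOKEN ?? "";

async function saveMemory(memory: string): Promise<void> {
  const res = await fetch(
    `${BASEROW_URL}/api/database/rows/table/${TABLE_ID}/?user_field_names=true`,
    {
      method: "POST",
      headers: {
        Authorization: `Token ${TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        Memory: memory,
        // ISO timestamp; Baserow date fields generally accept ISO 8601 via the API
        "Date Added": new Date().toISOString(),
      }),
    }
  );
  if (!res.ok) throw new Error(`Baserow request failed: ${res.status}`);
}

// Example: what the AI agent's memory tool call boils down to
saveMemory("User enjoys hiking on weekends.").catch(console.error);
```

The Notes Table call is identical apart from the table ID and the first field being named Notes instead of Memory.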
by Jimleuk
This n8n workflow builds an appointment scheduling AI agent which can: take enquiries from prospective customers and help them book an appointment by checking appointment availability; where no appointment is booked, send follow-up messages to re-engage leads; and, after an appointment is booked, reschedule or even cancel the booking for the user without human intervention. For small outfits, this workflow could contribute the necessary "man-power" required to increase business sales. The sample Airtable can be found here: https://airtable.com/appO2nHiT9XPuGrjN/shroSFT2yjf87XAox 2024-10-22 Updated to Cal.com API v2. How it works The customer sends an enquiry via SMS to trigger our workflow. For this trigger, we'll use a Twilio webhook. The prospective or existing customer's number is logged in an Airtable Base which we'll be using to track all our enquiries. Next, the message is sent to our AI Agent, which can reply to the user and decide if an appointment booking can be made. The reply is made via SMS using Twilio. A scheduled trigger, which runs every day, checks our chat logs for a list of prospective customers who have yet to book an appointment but still show interest. This list is sent to our AI Agent to formulate a personalised follow-up message to each lead and ask them if they want to continue with the booking. The follow-up interaction is logged so as not to send too many messages to the customer. Requirements A Twilio account to receive customer messages. An Airtable account and Base to use as our datastore for enquiries. A Cal.com account to use as our scheduling service. An OpenAI account for our AI model. Customising this workflow Not using Airtable? Swap this out for your CRM of choice such as HubSpot or your own service. Not using Cal.com? Swap this out for API-enabled services such as Acuity Scheduling or your own service.
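As a rough illustration of the Twilio reply step, here is a minimal sketch of the underlying API call; the account credentials and phone numbers are placeholders:

```typescript
// Minimal sketch: send the AI Agent's reply back to the customer as an SMS
// via Twilio's REST API. Account SID, auth token, and numbers are placeholders.
const ACCOUNT_SID = process.env.TWILIO_ACCOUNT_SID ?? "";
const AUTH_TOKEN = process.env.TWILIO_AUTH_TOKEN ?? "";

async function sendSms(to: string, from: string, body: string): Promise<void> {
  const auth = Buffer.from(`${ACCOUNT_SID}:${AUTH_TOKEN}`).toString("base64");
  const res = await fetch(
    `https://api.twilio.com/2010-04-01/Accounts/${ACCOUNT_SID}/Messages.json`,
    {
      method: "POST",
      headers: {
        Authorization: `Basic ${auth}`,
        "Content-Type": "application/x-www-form-urlencoded",
      },
      body: new URLSearchParams({ To: to, From: from, Body: body }),
    }
  );
  if (!res.ok) throw new Error(`Twilio request failed: ${res.status}`);
}

// Example: replying to the enquiry logged in Airtable
sendSms("+15551234567", "+15557654321", "Thanks! I have a 10:30 slot tomorrow. Shall I book it?")
  .catch(console.error);
```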
by Adnan Tariq
🛡 CyberScan – AI-Powered Vulnerability Scanner with Nessus, OpenAI, and Google Sheets 👤 Who’s it for Security teams, DevOps engineers, vulnerability analysts, and automation builders who want to eliminate repetitive Nessus scan parsing, AI-based risk triage, and manual reporting. Designed for orgs following NIST CSF or CISA KEV compliance guidelines. ⚙️ How it works / What it does Runs scheduled or manual scans via the Nessus API. Processes scan results and extracts asset + vulnerability data. Uses a custom AI-based risk metric (LEV) to triage findings into: 🚨 Expert review ✅ Self-healing 🕵️ Monitoring Automatically sends email alerts for critical CVEs. Exports daily summaries to Google Sheets (or your own BI system). Maps to NIST CSF (Identify, Protect, Detect, Respond, Recover). 🧰 How to set up Nessus: Add your Nessus API credentials and instance URL. Google Sheets: Authenticate your Google account. OpenAI / LLM: Use your API key if adding LLM triage or rewrite prompts. Email: Update SMTP credentials and alert recipient address. Set your targets: Adjust asset ranges or scan UUIDs as needed. ⚠️ All setup steps are explained in sticky notes inside the workflow. 📋 Requirements Nessus Essentials (Free) or Nessus Pro with API access. SMTP service (e.g. Gmail, Mailgun, SendGrid). Google Sheets OAuth2 credentials. Optional: OpenAI or other LLM provider for LEV scoring and CVE insights. 🛠 How to customize the workflow Swap Google Sheets with Airtable, Supabase, or PostgreSQL. Change scan logic or asset list to fit your internal network scope. Adjust AI scoring logic to match internal CVSS thresholds or KEV tags. Expand alerting logic to include Slack, Discord, or webhook triggers. 🔒 No sensitive data included. All credentials and sheet links are placeholders.
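For orientation, a minimal sketch of the scan launch and status polling the Nessus steps perform; the instance URL, scan ID, and the exact response field names are assumptions to adapt to your deployment:

```typescript
// Minimal sketch: launch an existing Nessus scan and poll until it completes.
// Assumptions: a reachable Nessus instance, API access keys, an existing scan ID,
// and an `info.status` field as in the commonly documented scan details schema.
// Self-signed certificates may require extra TLS configuration.
const NESSUS_URL = "https://nessus.example.com:8834"; // placeholder
const HEADERS = {
  "X-ApiKeys": `accessKey=${process.env.NESSUS_ACCESS_KEY}; secretKey=${process.env.NESSUS_SECRET_KEY}`,
  "Content-Type": "application/json",
};

async function runScan(scanId: number): Promise<void> {
  // Kick off the scan
  await fetch(`${NESSUS_URL}/scans/${scanId}/launch`, { method: "POST", headers: HEADERS });

  // Poll every 60 seconds until the scan reports completion
  for (;;) {
    const res = await fetch(`${NESSUS_URL}/scans/${scanId}`, { headers: HEADERS });
    const data = await res.json();
    if (data?.info?.status === "completed") break;
    await new Promise((r) => setTimeout(r, 60_000));
  }
  // From here the workflow would extract hosts and vulnerabilities and hand them
  // to the LEV triage step (expert review / self-healing / monitoring).
}

runScan(42).catch(console.error);
```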
by David Ashby
Complete MCP server exposing 1 IP2Proxy Proxy Detection API operations to AI agents. ⚡ Quick Setup Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator.. All 100% free? Join the community Import this workflow into your n8n instance Credentials Add IP2Proxy Proxy Detection credentials Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works This workflow converts the IP2Proxy Proxy Detection API into an MCP-compatible interface for AI agents. • MCP Trigger: Serves as your server endpoint for AI agent requests • HTTP Request Nodes: Handle API calls to https://api.ip2proxy.com • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Returns responses directly to the AI agent 📋 Available Operations (1 total) 🔧 General (1 endpoints) • GET /: Check Proxy IP 🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Path parameters and identifiers • Query parameters and filters • Request body data • Headers and authentication Response Format: Native IP2Proxy Proxy Detection API responses with full data structure Error Handling: Built-in n8n HTTP request error management 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add MCP server URL to configuration • Cursor: Add MCP server SSE URL to configuration • Custom AI Apps: Use MCP URL as tool endpoint • API Integration: Direct HTTP calls to MCP endpoints ✨ Benefits • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n HTTP request handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Sam Nesler
Syncs assignments and completion states back and forth between Canvas LMS and a Notion database. Automatically triggers every 2 hours during the school day by default (meaning 7 times a day), but also supports manual refreshing via webhooks. Setup You'll need a few things to get started: A Canvas API key. You can generate one by going to your Canvas account settings and clicking on the "New Access Token" button. The URL looks like https://canvas.wisc.edu/profile/settings. You'll also need to replace URLs in Canvas nodes with your institution's domain, unless you're a student at UW-Madison. Canvas nodes are all the HTTP Request nodes except the one labelled "OpenAI Categorization", which is an OpenAI node and will require a key in a later step. A Notion integration token. You can find this by going to your Notion integrations page and clicking "Create new integration". You can make it an "Internal Integration". A Notion database to sync to. I made a template for use with the workflow, but you can use any database that has the following fields: Status (status): Status with at least the options "Not Started" and "Completed" - assignments start out "Not Started", and are marked "Completed" when they are submitted on Canvas. Estimate (select): Select with at least the options "XS", "S", "M", "L", "XL" - this is where the estimated time to complete the assignment will be stored. Even if you don't use AI, they'll start out as "M". Priority (select): Select with at least the options "Could Do", "Should Do", "Must Do" - assignments start out "Should Do". ID (text): this is where the ID of the assignment will be stored. We use this to sync without having a database on the server. Due Date (date): this is where the due date of the assignment will be stored. Class (text): this is where the name of the class will be stored. Link (URL): this is where the link to the assignment will be stored. The ID of the Notion database you want to sync to. You can find this by clicking "Share" in the top right of your database and copying the link. The ID is the part of the link that comes after https://www.notion.so/ and before ?v=. So for https://www.notion.so/tsuniiverse/1976e99d91128076b034e7379464560f?v=1976e99d911281e7bd4b000c2cbec692&pvs=4, the ID would be 1976e99d91128076b034e7379464560f. An OpenAI key for assignment length estimation (or disable the node). Manual Refreshing Embed the production URL from the Webhook Trigger inside a "toggle list" or "toggle heading" inside Notion, then expand the heading to trigger a refresh.
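For orientation, a minimal sketch of the kind of request the Canvas HTTP Request nodes make, assuming a personal access token and your institution's domain:

```typescript
// Minimal sketch: list assignments for one Canvas course via the Canvas REST API.
// The domain, course ID, and token are placeholders; swap canvas.wisc.edu for
// your institution's domain as described above.
const CANVAS_URL = "https://canvas.wisc.edu";
const TOKEN = process.env.CANVAS_TOKEN ?? ""; // generated via "New Access Token"

interface CanvasAssignment {
  id: number;
  name: string;
  due_at: string | null;
  html_url: string;
}

async function listAssignments(courseId: number): Promise<CanvasAssignment[]> {
  const res = await fetch(
    `${CANVAS_URL}/api/v1/courses/${courseId}/assignments?per_page=100`,
    { headers: { Authorization: `Bearer ${TOKEN}` } }
  );
  if (!res.ok) throw new Error(`Canvas request failed: ${res.status}`);
  return res.json();
}

// Each assignment's id is what gets written to the Notion "ID" field,
// which is how the workflow matches rows on later runs.
listAssignments(123456).then((a) => console.log(a.length, "assignments")).catch(console.error);
```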
by Femi Ad
Description AI-Powered Business Idea Generation & Social Media Content Strategy Workflow This intelligent content discovery and strategy system features 15 nodes that automatically monitor Reddit communities, analyze business opportunities, and generate targeted social media content for AI automation agencies and entrepreneurs. It leverages AI classification, structured analysis, and automated content creation to transform community discussions into actionable business insights and marketing materials. Core Components Reddit Intelligence: Multi-subreddit monitoring across AI automation, n8n, and entrepreneur communities with keyword-based filtering. AI Classification Engine: Intelligent categorization of posts into "Questions" vs "Requests" using LangChain text classification. Dual Analysis System: Specialized AI agents for educational content (questions) and sales-focused content (service requests). Content Strategy Generator: Automated creation of LinkedIn and Twitter content tailored to different audience engagement strategies. Telegram Integration: Real-time delivery of formatted content strategies and business insights. Structured Output Processing: JSON-formatted analysis with relevancy scores, feasibility assessments, and actionable content recommendations. Target Users • AI Automation Agency Owners seeking consistent lead generation and thought leadership content • Entrepreneurs wanting to identify market opportunities and position themselves as industry experts • Content Creators in the automation/AI space needing data-driven content strategies • Business Development Professionals looking for systematic opportunity identification • Digital Marketing Agencies serving tech and automation clients Setup Requirements To get started, you'll need: Reddit API Access: OAuth2 credentials for accessing Reddit's API and monitoring multiple subreddits. Required APIs: • OpenRouter (for AI model access - supports GPT-4, Claude, and other models) • Reddit OAuth2 API (for community monitoring and data extraction) n8n Prerequisites: • Version 1.7+ with LangChain nodes enabled • Webhook configuration for Telegram integration • Proper credential storage and management setup Telegram Bot: Create via @BotFather for receiving formatted content strategies and business insights. Disclaimer: This template uses LangChain nodes and Reddit API integration. Ensure your n8n instance supports these features and verify API rate limits for production use. Step-by-Step Setup Guide Install n8n: Ensure you're running n8n version 1.7 or higher with LangChain node support enabled. 
Set Up API Credentials: • Create Reddit OAuth2 application at reddit.com/prefs/apps • Set up OpenRouter account and obtain API key • Store credentials securely in n8n credential manager Create Telegram Bot: • Go to Telegram, search for @BotFather • Create new bot and note the token • Configure webhook pointing to your n8n instance Import the Workflow: • Copy the workflow JSON from the template submission • Import into your n8n dashboard • Verify all nodes are properly connected Configure Monitoring Settings: • Adjust subreddit targets (currently: ArtificialIntelligence, n8n, entrepreneur) • Set keyword filters for relevant topics • Configure post limits and sorting preferences Customize AI Analysis: • Update system prompts to match your business expertise • Adjust relevancy and feasibility scoring criteria • Modify content generation templates for your brand voice Test the Workflow: • Run manual execution to verify Reddit data collection • Check AI classification and analysis outputs • Confirm Telegram delivery of formatted content Schedule Automation: • Set up daily trigger (currently configured for 12 PM) • Monitor execution logs for any API rate limit issues • Adjust frequency based on content volume needs Usage Instructions Automated Discovery: The workflow runs daily at 12 PM, scanning three key subreddits for relevant posts about AI automation, business opportunities, and n8n workflows. Intelligent Classification: Posts are automatically categorized as either "Questions" (educational opportunities) or "Requests" (potential service leads) using AI text classification. Dual Analysis Approach: • Questions → Educational content strategy with relevancy and detail scoring • Requests → Sales-focused content with relevancy and feasibility scoring Content Strategy Generation: Each analyzed post generates: • 3 LinkedIn posts (thought leadership, case studies, educational frameworks) • 3 Twitter posts (quick insights, engagement questions, thread starters) Telegram Delivery: Receive formatted content strategies with: • Post summaries and business context • Relevancy/feasibility scores • Ready-to-use social media content • Strategic recommendations Content Customization: Adapt generated content for different tones (business, educational, technical) and posting schedules. Workflow Features Multi-Platform Monitoring: Simultaneous tracking of 3 key Reddit communities with customizable keyword filters. AI-Powered Classification: Automatic categorization of posts into actionable content types. Dual Scoring System: • Relevancy scores (0.05-0.95) for business alignment • Detail/Feasibility scores (0.05-0.95) for content quality assessment Content Variety: Generates both educational and sales-focused social media strategies. Structured Output: JSON-formatted analysis for easy integration with other systems. Real-time Delivery: Instant Telegram notifications with formatted content strategies. Scalable Monitoring: Easy addition of new subreddits and keyword filters. Error Handling: Comprehensive validation with graceful failure management. Performance Specifications • Monitoring Frequency: Daily automated execution with manual trigger capability • Post Analysis: 5 posts per subreddit (15 total daily) • Content Generation: 6 social media posts per analyzed opportunity • Classification Accuracy: AI-powered with structured output validation • Delivery Method: Real-time Telegram integration • Scoring Range: 0.05-0.95 scale for relevancy and feasibility assessment Why This Workflow? 
Systematic Opportunity Identification: Never miss potential business opportunities or content ideas from key communities. AI-Enhanced Analysis: Leverage advanced language models for intelligent content categorization and strategy generation. Time-Efficient Content Creation: Transform community discussions into ready-to-use social media content. Data-Driven Insights: Quantified scoring helps prioritize opportunities and content strategies. Automated Lead Intelligence: Identify potential service requests and educational content opportunities automatically. Need help customizing this workflow for your specific use case? As a fellow entrepreneur passionate about automation and business development, I'd be happy to consult. Connect with me on LinkedIn: https://www.linkedin.com/in/femi-adedayo-h44/ or email for support. Let's make your AI automation agency even more efficient!
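For reference, a minimal sketch of the subreddit fetch behind the Reddit monitoring step, assuming an OAuth2 bearer token from your Reddit app:

```typescript
// Minimal sketch: fetch the newest posts from one monitored subreddit.
// The token and User-Agent string are placeholders; the workflow repeats this
// for ArtificialIntelligence, n8n, and entrepreneur and then filters by keyword.
const TOKEN = process.env.REDDIT_TOKEN ?? ""; // OAuth2 token from reddit.com/prefs/apps

interface RedditPost {
  title: string;
  selftext: string;
  permalink: string;
}

async function fetchPosts(subreddit: string, limit = 5): Promise<RedditPost[]> {
  const res = await fetch(`https://oauth.reddit.com/r/${subreddit}/new?limit=${limit}`, {
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      "User-Agent": "n8n-content-strategy/0.1 by your_username", // placeholder
    },
  });
  if (!res.ok) throw new Error(`Reddit request failed: ${res.status}`);
  const json = await res.json();
  return json.data.children.map((c: { data: RedditPost }) => c.data);
}

fetchPosts("n8n").then((p) => console.log(p.map((x) => x.title))).catch(console.error);
```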
by Robert Breen
✨ Overview This workflow allows candidates to schedule interviews through a conversational AI assistant. It integrates with your Google Calendar to check for existing events and generates a list of available 30-minute weekday slots between 9 AM and 5 PM Eastern Time. Once the candidate selects a suitable time and provides their contact information, the AI bot automatically books the meeting on your calendar and confirms the appointment. ⚡ Prerequisites To use this workflow, you need an OpenAI account with access to the GPT-4o model, a Google account with a calendar that can be accessed through the Google Calendar API, and an active instance of n8n—either self-hosted or via n8n cloud. Within n8n, you must have two credential configurations ready: one for Google Calendar using OAuth2 authentication, and another for your OpenAI API key. 🔐 API Credentials Setup For Google Calendar, go to the Google Cloud Console and create a new project. Enable the Google Calendar API, then create OAuth2 credentials by selecting “Web Application” as the application type. Add http://localhost:5678/rest/oauth2-credential/callback as the redirect URI if using local n8n. After that, go to n8n, navigate to the Credentials section, and create a new Google Calendar OAuth2 credential using your account. For OpenAI, visit platform.openai.com to retrieve your API key. Then go to the n8n Credentials page, create a new credential for OpenAI, paste your key, and name it for reference. 🔧 How to Make This Workflow Yours To customize the workflow for your use, start by replacing all instances of the calendar email rbreen.ynteractive@gmail.com with your own Google Calendar email. This email is referenced in multiple places, including Google Calendar nodes and the ToolWorkflow JSON for the node named "Run Get Availability." Also update any instances where the Google Calendar credential is labeled as Google Calendar account to match your own credential name within n8n. Do the same for the OpenAI credential label, replacing OpenAi account with the name of your own credential. Next, go to the node labeled Candidate Chat and copy the webhook URL. This is the public chat interface where candidates will engage with the bot—share this URL with them through email, your website, or anywhere you want to allow access. Optionally, you can also tweak the system message in the Interview Scheduler node to modify the tone, language, or logic used during conversations. If you want to add branding, update the title, subtitle, and inputPlaceholder in the Candidate Chat node, and consider modifying the final confirmation message in Final Response to User to reflect your brand voice. You can also update the business rules such as time zone, working hours, or default duration by editing the logic in the Generate 30 Minute Timeslots code node. 🧩 Workflow Explanation This workflow begins with the Candidate Chat node, which triggers when a user visits the public chat URL. The Interview Scheduler node acts as an AI agent, guiding the user through providing their email, phone number, and preferred interview time. It checks availability using the Run Get Availability tool, which in turn reads your calendar and compares it with generated free time slots from the Generate 30 Minute Timeslots node. The check day names tool helps the AI interpret natural language date expressions like “next Tuesday.” The schedule is only populated with 30-minute weekday slots from 9 AM to 5 PM Eastern Time, and no events are scheduled if they overlap with existing ones. 
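To make the slot logic concrete, here is a minimal sketch of what the Generate 30 Minute Timeslots node can do; time-zone handling is simplified to an explicit Eastern offset, and the helper names are illustrative:

```typescript
// Minimal sketch: build 30-minute weekday slots between 9 AM and 5 PM Eastern
// for a given day, then drop any slot that overlaps an existing calendar event.
// The offset is passed in explicitly (-04:00 or -05:00 depending on DST).
interface Interval { start: Date; end: Date }

function freeSlots(day: string, offset: string, busy: Interval[]): Interval[] {
  const open = new Date(`${day}T09:00:00${offset}`);
  const close = new Date(`${day}T17:00:00${offset}`);

  // Skip weekends: 9 AM Eastern falls on the same UTC calendar day,
  // so getUTCDay() gives the Eastern weekday here.
  const weekday = open.getUTCDay();
  if (weekday === 0 || weekday === 6) return [];

  const slots: Interval[] = [];
  for (let t = open.getTime(); t + 30 * 60_000 <= close.getTime(); t += 30 * 60_000) {
    slots.push({ start: new Date(t), end: new Date(t + 30 * 60_000) });
  }
  // Keep only slots that do not overlap any existing event
  return slots.filter(
    (s) => !busy.some((b) => s.start.getTime() < b.end.getTime() && b.start.getTime() < s.end.getTime())
  );
}

// Example: one existing 10:00-11:00 meeting leaves 14 free slots that day
const busy = [
  { start: new Date("2025-03-03T10:00:00-05:00"), end: new Date("2025-03-03T11:00:00-05:00") },
];
console.log(freeSlots("2025-03-03", "-05:00", busy).length);
```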
When a suitable time is confirmed, the AI formats the result into structured JSON, creates an event on your Google Calendar, and sends a confirmation back to the user with all relevant meeting details. 🚀 Deployment Steps To deploy the interview scheduler, import the provided workflow JSON into your n8n instance. Update the Google Calendar email, OpenAI and Google credential labels, system prompts, and branding as needed. Test the connections to ensure the API credentials are working correctly. Once everything is configured, copy and share the public chat URL from the Candidate Chat node. When candidates engage with the chat, the workflow will walk them through the interview booking process, check your availability, and finalize the booking automatically. 💡 Additional Tips By default, the workflow avoids scheduling interviews on weekends and outside of 9–5 EST. Each interview lasts exactly 30 minutes, and overlapping with existing events is prevented. The assistant does not reveal details about other meetings. You can customize every part of this workflow to fit your use case, including subworkflows like Get Availability and check day names, or even white-label it for client use. This workflow is ready to become your AI-powered interview scheduling assistant. 🤝 Connect with Me Description I’m Robert Breen, founder of Ynteractive — a consulting firm that helps businesses automate operations using n8n, AI agents, and custom workflows. I’ve helped clients build everything from intelligent chatbots to complex sales automations, and I’m always excited to collaborate or support new projects. If you found this workflow helpful or want to talk through an idea, I’d love to hear from you. Links 🌐 Website: https://www.ynteractive.com 📺 YouTube: @ynteractivetraining 💼 LinkedIn: https://www.linkedin.com/in/robert-breen 📬 Email: rbreen@ynteractive.com
by PUQcloud
Setting up n8n workflow Overview The Docker n8n WHMCS module uses a specially designed workflow for n8n to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions. Prerequisites You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site Installation Steps Install the Required Workflow on n8n You have two options: Option 1: Use the Latest Version from the n8n Marketplace The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n Option 2: Manual Installation Each module version comes with a workflow template file. You need to manually import this template into your n8n server. n8n Workflow API Backend Setup for WHMCS/WISECP Configure API Webhook and SSH Access Create a Basic Auth Credential for the Webhook API Block in n8n. Create an SSH Credential for accessing a server with Docker installed. Modify Template Parameters In the Parameters block of the template, update the following settings: server_domain – Must match the domain of the WHMCS/WISECP Docker server. clients_dir – Directory where user data related to Docker and disks will be stored. mount_dir – Default mount point for the container disk (recommended not to change). Do not modify the following technical parameters: screen_left screen_right Deploy-docker-compose In the Deploy-docker-compose element, you have the ability to modify the Docker Compose configuration, which will be generated in the following scenarios: When the service is created When the service is unlocked When the service is updated nginx In the nginx element, you can modify the configuration parameters of the web interface proxy server. The main section allows you to add custom parameters to the server block in the proxy server configuration file. The main\_location section contains settings that will be added to the location / block of the proxy server configuration. Here, you can define custom headers and other parameters specific to the root location. Bash Scripts Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed.
by Incrementors
LinkedIn & Indeed Job Scraper with Bright Data & Google Sheets Export Overview This n8n workflow automates the process of scraping job listings from both LinkedIn and Indeed platforms simultaneously, combining results, and exporting data to Google Sheets for comprehensive job market analysis. It integrates with Bright Data for professional web scraping, Google Sheets for data storage, and provides intelligent status monitoring with retry mechanisms. Workflow Components 1. 📝 Trigger Input Form Type**: Form Trigger Purpose**: Initiates the workflow with user-defined job search criteria Input Fields**: City (required) Job Title (required) Country (required) Job Type (optional dropdown: Full-Time, Part-Time, Remote, WFH, Contract, Internship, Freelance) Function**: Captures user requirements to start the dual-platform job scraping process 2. 🧠 Format Input for APIs Type**: Code Node (JavaScript) Purpose**: Prepares and formats user input for both LinkedIn and Indeed APIs Processing**: Standardizes location and job title formats Creates API-specific input structures Generates custom output field configurations Function**: Ensures compatibility with both Bright Data datasets 3. 🚀 Start Indeed Scraping Type**: HTTP Request (POST) Purpose**: Initiates Indeed job scraping via Bright Data Endpoint**: https://api.brightdata.com/datasets/v3/trigger Parameters**: Dataset ID: gd_lpfll7v5hcqtkxl6l Include errors: true Type: discover_new Discover by: keyword Limit per input: 2 Custom Output Fields**: jobid, company_name, job_title, description_text location, salary_formatted, company_rating apply_link, url, date_posted, benefits 4. 🚀 Start LinkedIn Scraping Type**: HTTP Request (POST) Purpose**: Initiates LinkedIn job scraping via Bright Data (parallel execution) Endpoint**: https://api.brightdata.com/datasets/v3/trigger Parameters**: Dataset ID: gd_l4dx9j9sscpvs7no2 Include errors: true Type: discover_new Discover by: keyword Limit per input: 2 Custom Output Fields**: job_posting_id, job_title, company_name, job_location job_summary, job_employment_type, job_base_pay_range apply_link, url, job_posted_date, company_logo 5. 🔄 Check Indeed Status Type**: HTTP Request (GET) Purpose**: Monitors Indeed scraping job progress Endpoint**: https://api.brightdata.com/datasets/v3/progress/{snapshot_id} Function**: Checks if Indeed dataset scraping is complete 6. 🔄 Check LinkedIn Status Type**: HTTP Request (GET) Purpose**: Monitors LinkedIn scraping job progress Endpoint**: https://api.brightdata.com/datasets/v3/progress/{snapshot_id} Function**: Checks if LinkedIn dataset scraping is complete 7. ⏱️ Wait Nodes (60 seconds each) Type**: Wait Node Purpose**: Implements intelligent polling mechanism Duration**: 1 minute Function**: Pauses workflow before rechecking scraping status to prevent API overload 8. ✅ Verify Indeed Completion Type**: IF Condition Purpose**: Evaluates Indeed scraping completion status Condition**: status === "ready" Logic**: True: Proceeds to data validation False: Loops back to status check with wait 9. ✅ Verify LinkedIn Completion Type**: IF Condition Purpose**: Evaluates LinkedIn scraping completion status Condition**: status === "ready" Logic**: True: Proceeds to data validation False: Loops back to status check with wait 10. 📊 Validate Indeed Data Type**: IF Condition Purpose**: Ensures Indeed returned job records Condition**: records !== 0 Logic**: True: Proceeds to fetch Indeed data False: Skips Indeed data retrieval 11. 
📊 Validate LinkedIn Data Type**: IF Condition Purpose**: Ensures LinkedIn returned job records Condition**: records !== 0 Logic**: True: Proceeds to fetch LinkedIn data False: Skips LinkedIn data retrieval 12. 📥 Fetch Indeed Data Type**: HTTP Request (GET) Purpose**: Retrieves final Indeed job listings Endpoint**: https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id} Format**: JSON Function**: Downloads completed Indeed job data 13. 📥 Fetch LinkedIn Data Type**: HTTP Request (GET) Purpose**: Retrieves final LinkedIn job listings Endpoint**: https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id} Format**: JSON Function**: Downloads completed LinkedIn job data 14. 🔗 Merge Results Type**: Merge Node Purpose**: Combines Indeed and LinkedIn job results Mode**: Merge all inputs Function**: Creates unified dataset from both platforms 15. 📊 Save to Google Sheet Type**: Google Sheets Node Purpose**: Exports combined job data for analysis Operation**: Append rows Target**: "Compare" sheet in specified Google Sheet document Data Mapping**: Job Title, Company Name, Location Job Detail (description), Apply Link Salary, Job Type, Discovery Input Workflow Flow Input Form → Format APIs → [Indeed Trigger] + [LinkedIn Trigger] ↓ ↓ Check Status Check Status ↓ ↓ Wait 60s Wait 60s ↓ ↓ Verify Ready Verify Ready ↓ ↓ Validate Data Validate Data ↓ ↓ Fetch Indeed Fetch LinkedIn ↓ ↓ └─── Merge Results ───┘ ↓ Save to Google Sheet Configuration Requirements API Keys & Credentials Bright Data API Key**: Required for both LinkedIn and Indeed scraping Google Sheets OAuth2**: For data storage and export access n8n Form Webhook**: For user input collection Setup Parameters Google Sheet ID**: Target spreadsheet identifier Sheet Name**: "Compare" tab for job data export Form Webhook ID**: User input form identifier Dataset IDs**: Indeed: gd_lpfll7v5hcqtkxl6l LinkedIn: gd_l4dx9j9sscpvs7no2 Key Features Dual Platform Scraping Simultaneous LinkedIn and Indeed job searches Parallel processing for faster results Comprehensive job market coverage Platform-specific field extraction Intelligent Status Monitoring Real-time scraping progress tracking Automatic retry mechanisms with 60-second intervals Data validation before processing Error handling and timeout management Smart Data Processing Unified data format from both platforms Intelligent field mapping and standardization Duplicate detection and removal Rich metadata extraction Google Sheets Integration Automatic data export and storage Organized comparison format Historical job search tracking Easy sharing and collaboration Form-Based Interface User-friendly job search form Flexible job type filtering Multi-country support Real-time workflow triggering Use Cases Personal Job Search Comprehensive multi-platform job hunting Automated daily job searches Organized opportunity comparison Application tracking and management Recruitment Services Client job search automation Market availability assessment Competitive salary analysis Bulk candidate sourcing Market Research Job market trend analysis Salary benchmarking studies Skills demand assessment Geographic opportunity mapping HR Analytics Competitor hiring intelligence Role requirement analysis Compensation benchmarking Talent market insights Technical Notes Polling Interval**: 60-second status checks for both platforms Result Limiting**: Maximum 2 jobs per input per platform Data Format**: JSON with structured field mapping Error Handling**: Comprehensive error tracking in all API requests Retry Logic**: Automatic 
status rechecking until completion Country Support**: Adaptable domain selection (indeed.com, fr.indeed.com) Form Validation**: Required fields with optional job type filtering Merge Strategy**: Combines all results from both platforms Export Format**: Standardized Google Sheets columns for easy analysis

Sample Data Output

| Field | Description | Example |
|-------|-------------|---------|
| Job Title | Position title | "Senior Software Engineer" |
| Company Name | Hiring organization | "Tech Solutions Inc." |
| Location | Job location | "San Francisco, CA" |
| Job Detail | Full description | "We are seeking a senior developer..." |
| Apply Link | Direct application URL | "https://company.com/careers/123" |
| Salary | Compensation info | "$120,000 - $150,000" |
| Job Type | Employment details | "Full-time, Remote" |

Setup Instructions Import Workflow: Copy JSON configuration into n8n Configure Bright Data: Add API credentials for both datasets Setup Google Sheets: Create target spreadsheet and configure OAuth Update References: Replace placeholder IDs with your actual values Test Workflow: Submit test form and verify data export Activate: Enable workflow and share form URL with users For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
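For reference, a minimal sketch of the trigger, progress, and snapshot sequence described above; the bearer-token auth and exact query-parameter names are assumptions based on the listed settings:

```typescript
// Minimal sketch: trigger one Bright Data dataset scrape, poll its progress,
// and download the snapshot once ready. The API key is a placeholder, and the
// parameter and input field names (discover_by, limit_per_input, keyword, location)
// mirror the settings above but may need adjusting to your dataset.
const API_KEY = process.env.BRIGHTDATA_API_KEY ?? "";
const HEADERS = { Authorization: `Bearer ${API_KEY}`, "Content-Type": "application/json" };

async function scrapeJobs(datasetId: string, input: object[]): Promise<unknown[]> {
  const trigger = await fetch(
    `https://api.brightdata.com/datasets/v3/trigger?dataset_id=${datasetId}` +
      `&include_errors=true&type=discover_new&discover_by=keyword&limit_per_input=2`,
    { method: "POST", headers: HEADERS, body: JSON.stringify(input) }
  );
  const { snapshot_id } = await trigger.json();

  // Poll every 60 seconds until the snapshot reports "ready", as in the Verify nodes
  for (;;) {
    const progress = await fetch(
      `https://api.brightdata.com/datasets/v3/progress/${snapshot_id}`,
      { headers: HEADERS }
    );
    const status = await progress.json();
    if (status.status === "ready") break;
    await new Promise((r) => setTimeout(r, 60_000));
  }

  const snapshot = await fetch(
    `https://api.brightdata.com/datasets/v3/snapshot/${snapshot_id}?format=json`,
    { headers: HEADERS }
  );
  return snapshot.json();
}

// Example: the Indeed branch with the form's job title and location input
scrapeJobs("gd_lpfll7v5hcqtkxl6l", [{ keyword: "Software Engineer", location: "San Francisco, CA" }])
  .then((jobs) => console.log(jobs))
  .catch(console.error);
```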
by Halfbit 🚀
AI-Powered Invoice Processing: from Email to Database & Chat Notifications Automatically process PDF invoices directly from your email inbox. This workflow uses AI to extract key data, saves it to a PostgreSQL database, and instantly notifies you about the new document in your preferred chat application. The workflow listens for new emails, fetches PDF attachments, and then passes their content to a Large Language Model (LLM) for intelligent recognition and data extraction. Finally, the information is securely archived in the database, and a summary of the invoice is sent as a notification. > 📝 This workflow is highly customizable. > It uses PostgreSQL, OpenAI (GPT), and Discord by default, but you can easily swap these components. > Feel free to use a different database like MySQL or Airtable, another AI model provider, or send notifications to Slack, MS Teams, or any other chat platform. > ⚠️ Note: If the workflow fails to extract data correctly from invoices issued by certain companies, you may need to adjust the prompt used in the Basic LLM Chain node to improve parsing accuracy. Use Case Automating accounts payable for small businesses and freelancers Centralizing financial documents without manual data entry Creating a searchable database of all incoming invoices Receiving real-time notifications for new financial commitments Features 📧 Email Trigger (IMAP):** Monitors a dedicated email inbox for new messages with attachments 📄 PDF Filtering:** Automatically identifies and processes only PDF attachments 🤖 AI-Powered Data Extraction:** Uses an LLM (e.g., GPT-4o-mini) to extract invoice number, buyer/seller details, amounts, currency, and due dates ⚙️ Structured Data Output:** Converts AI output to standardized JSON 🔍 Database Write Logic:** Prevents duplicates by checking invoice/company combo 🗄️ PostgreSQL Integration:** Stores extracted data into company and invoice tables 💬 Chat Notifications:** Sends invoice summary as message to a designated channel Setup Instructions ⚠️ API Access & Costs To use the AI extraction feature, you need an API key from a provider like OpenAI. Most providers charge for access to language models. You'll likely need a billing account. 1. PostgreSQL Database Configuration Ensure your database has the following tables:

-- Table for companies (invoice issuers)
CREATE TABLE company (
  id SERIAL PRIMARY KEY,
  tax_number VARCHAR(255) UNIQUE NOT NULL,
  name VARCHAR(255),
  address TEXT,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);

-- Table for invoices
CREATE TABLE invoice (
  id SERIAL PRIMARY KEY,
  company_id INTEGER REFERENCES company(id),
  invoice_number VARCHAR(255) NOT NULL,
  -- Add other fields: total_to_pay, currency, due_date
  created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
  UNIQUE(company_id, invoice_number)
);

Then, in n8n, create a credential for your PostgreSQL DB. 2. Email (IMAP) Configuration In n8n, add credentials for the email account that receives invoices: IMAP host IMAP port Username Password 3. AI Provider Configuration Log in to OpenAI (or similar provider) Generate API key In n8n, create credentials and paste the key 4.
Chat Notification (Discord) Go to Discord > Server Settings > Integrations > Webhooks > New Webhook Select channel Copy Webhook URL In n8n, paste URL into the Discord node

Placeholders and Fields to Fill

| Placeholder | Description | Example |
|---------------------------|-------------------------------------------|------------------------------------------|
| YOUR_EMAIL_CREDENTIALS | Your IMAP email account in n8n | My Invoice Mailbox |
| YOUR_OPENAI_CREDENTIALS | API credentials for AI model | My OpenAI Key |
| YOUR_POSTGRES_CREDENTIALS| Your PostgreSQL DB credentials in n8n | My Production DB |
| YOUR_DISCORD_WEBHOOK | Webhook URL for your chat system | https://discord.com/api/webhooks/... |

Testing the Workflow Send a test invoice to the inbox as a PDF attachment Run the workflow manually in n8n and check if the IMAP node fetches the message Verify AI Extraction – inspect the LLM output (e.g., GPT node) and confirm structured JSON Check the DB – ensure new rows appear in company and invoice Check the chat – verify the invoice summary appears in the chosen channel Customization Tips Change the DB:** Use MySQL, Airtable, or Google Sheets instead of PostgreSQL Other notifications:** Swap Discord for Slack, MS Teams, Telegram, etc. Expand AI logic:** Extract line items, prices, etc. by customizing the prompt Add payment logic:** Allow marking invoices as paid via emoji or a separate webhook
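As a reference point for the structured output step, a minimal sketch of the shape the LLM extraction could be asked to return; the field names are illustrative and should match your company and invoice columns:

```typescript
// Minimal sketch: the structured JSON the Basic LLM Chain can be prompted to emit,
// which then maps onto the company and invoice tables. Field names are illustrative.
interface ExtractedInvoice {
  seller: {
    tax_number: string;   // matches company.tax_number (used for duplicate checks)
    name: string;
    address: string;
  };
  buyer_name: string;
  invoice_number: string; // matches invoice.invoice_number
  total_to_pay: number;
  currency: string;       // e.g. "EUR", "USD"
  due_date: string;       // ISO 8601 date, e.g. "2025-04-30"
}

// Example of a parsed result the PostgreSQL nodes would insert
const example: ExtractedInvoice = {
  seller: { tax_number: "DE123456789", name: "Acme GmbH", address: "Example Str. 1, Berlin" },
  buyer_name: "Your Company Ltd.",
  invoice_number: "INV-2025-0317",
  total_to_pay: 1499.99,
  currency: "EUR",
  due_date: "2025-04-15",
};
```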
by Davide
Voiceflow is a no-code platform that allows you to design, prototype, and deploy conversational assistants across multiple channels—such as chat, voice, and phone—with advanced logic and natural language understanding. It supports integration with APIs, webhooks, and even tools like Twilio for phone agents. It's perfect for building customer support agents, voice bots, or intelligent assistants. This workflow connects n8n and Voiceflow with tools like Google Calendar, Qdrant (vector database), OpenAI, and an order tracking API to power a smart, multi-channel conversational agent. There are 3 main webhook endpoints in n8n that Voiceflow interacts with: n8n_order – receives user input related to order tracking, queries an API, and responds with tracking status. n8n_appointment – processes appointment booking, reformats date input using OpenAI, and creates a Google Calendar event. n8n_rag – handles general product/service questions using a RAG (Retrieval-Augmented Generation) system backed by: Google Drive document ingestion, Qdrant vector store for search, and OpenAI models for context-based answers. Each webhook is connected to a corresponding "Capture" block inside Voiceflow, which sends data to n8n and waits for the response. How It Works This n8n workflow integrates Voiceflow for chatbot/voice interactions, Google Calendar for appointment scheduling, and RAG (Retrieval-Augmented Generation) for knowledge-based responses. Here’s the flow: Trigger**: Three webhooks (n8n_order, n8n_appointment, n8n_rag) receive inputs from Voiceflow (chat, voice, or phone calls). Each webhook routes requests to specific functions: Order Tracking: Fetches order status via an external API. Appointment Scheduling: Uses OpenAI to parse dates, creates Google Calendar events, and confirms via WhatsApp. RAG System: Queries a Qdrant vector store (populated with Google Drive documents) to answer customer questions using GPT-4. AI Processing**: OpenAI Chains: Convert natural language dates to Google Calendar formats and generate responses. RAG Pipeline: Embeds documents (via OpenAI), stores them in Qdrant, and retrieves context-aware answers. Voiceflow Integration: Routes responses back to Voiceflow for multi-channel delivery (chat, voice, or phone). Outputs**: Confirmation messages (e.g., "Event created successfully"). Dynamic responses for orders, appointments, or product support. Setup Steps Prerequisites: APIs**: Google Calendar & Drive OAuth credentials. Qdrant vector database (hosted or cloud). OpenAI API key (for GPT-4 and embeddings). Configuration: Qdrant Setup: Run the "Create collection" and "Refresh collection" nodes to initialize the vector store. Populate it with documents using the Google Drive → Qdrant pipeline (embeddings generated via OpenAI). Voiceflow Webhooks: Link Voiceflow’s "Captures" to n8n’s webhook URLs (n8n_order, n8n_appointment, n8n_rag). Google Calendar: Authenticate the Google Calendar node and set event templates (e.g., summary, description). RAG System: Configure the Qdrant vector store and OpenAI embeddings nodes. Adjust the Retrieve Agent’s system prompt for domain-specific queries (e.g., electronics store support). Optional: Add Twilio for phone-agent capabilities. Customize OpenAI prompts for tone/accuracy. PS. You can import a Twilio number to assign it to your agent for becoming a Phone Agent Need help customizing? Contact me for consulting and support or add me on Linkedin
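For orientation, a minimal sketch of what a Voiceflow block effectively does when it calls one of the three webhooks; the n8n host and the request/response field names are placeholders:

```typescript
// Minimal sketch: POST a captured order number to the n8n_order webhook and read
// back the tracking status that Voiceflow then speaks or types to the user.
// The host and field names are placeholders for your own n8n instance and API.
const N8N_HOST = "https://your-n8n.example.com";

async function trackOrder(orderId: string): Promise<string> {
  const res = await fetch(`${N8N_HOST}/webhook/n8n_order`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ order_id: orderId }),
  });
  if (!res.ok) throw new Error(`Webhook call failed: ${res.status}`);
  const data = await res.json();
  return data.status; // e.g. "Your parcel is out for delivery"
}

trackOrder("ORD-10042").then(console.log).catch(console.error);
```

The n8n_appointment and n8n_rag endpoints follow the same pattern, just with different payloads (requested date and contact details, or the user's question).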
by Angel Menendez
Who is this for? Public-facing professionals (developer advocates, founders, marketers, content creators) who get bombarded with LinkedIn messages that aren't actually for them - support requests when you're in marketing, sales inquiries when you're a devrel, partnership pitches when you handle content, etc. What problem is this workflow solving? When you're visible online, people assume you handle everything at your company. You end up spending hours daily playing human router, forwarding messages like "How do I reset my password?" or "What's your enterprise pricing?" to the right teams. This LinkedIn automation workflow stops you from being your company's unofficial customer service representative. What this workflow does This AI-powered LinkedIn DM management workflow automatically assesses incoming LinkedIn messages and routes them intelligently: Automated Message Assessment: Receives inbound LinkedIn messages via UniPile and looks up sender details from both personal and company LinkedIn profiles. Smart Route Matching: Compares the message content against your message routing workflow table in Notion, which contains: Question: "How can I become an n8n ambassador?" Description: "Route here when a user is requesting to become an n8n ambassador. Also when they're asking how they could do more to evangelize n8n in their city, or to start organizing n8n meetups and events in their city." Action: "Tell the user to open the following notion page which has details on ambassador program including how to apply, as well as perks of the program: https://www.notion.so/n8n-Ambassador-Program-d883b2a130e5448faedbebe5139187ea?pvs=21" AI Response Generation: When a message matches an existing route, this AI assistant generates a personalized response draft based on the "Action" instructions from your routing table. Human-in-the-Loop Approval: Sends the draft response to Slack with approve/reject buttons, so you maintain control while saving time. Draft can be edited from within Slack on desktop and mobile. Automated LinkedIn Responses: Once approved, sends the reply back via LinkedIn and marks the original message as handled. The result: You stop being a human switchboard and can focus on your actual job while people still get helpful, timely responses through automated customer service. You can also add routes for things you do handle but get asked about daily (like 'How do I join your beta?' or 'What's your content strategy?') to standardize your responses. 
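To make the routing table concrete, here is a minimal sketch of one routing rule as the workflow could represent it after reading Notion, reusing the ambassador example above:

```typescript
// Minimal sketch: one routing rule as the workflow sees it after reading the
// Notion database. The AI node compares the incoming DM against Description
// and, on a match, drafts a reply from Action for Slack approval.
interface RouteRule {
  question: string;    // canonical question this route answers
  description: string; // when the AI should pick this route
  action: string;      // instructions for drafting the reply
}

const ambassadorRoute: RouteRule = {
  question: "How can I become an n8n ambassador?",
  description:
    "Route here when a user is requesting to become an n8n ambassador, asking how to " +
    "evangelize n8n in their city, or wanting to start organizing local meetups and events.",
  action:
    "Tell the user to open the Notion page with ambassador program details and how to apply: " +
    "https://www.notion.so/n8n-Ambassador-Program-d883b2a130e5448faedbebe5139187ea?pvs=21",
};
```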
Setup Sign up for a UniPile account and create a webhook under the Messaging section Set the callback URL to this workflow's production URL Generate a UniPile API key with all required scopes and store it in your n8n credentials Create a Slack app and enable interactive message buttons and webhooks Here is a slack App manifest template for easy deployment in slack: { "display_information": { "name": "Request Router", "description": "A bot that alerts when a new linkedin question comes in.", "background_color": "#12575e" }, "features": { "bot_user": { "display_name": "Request Router", "always_online": false } }, "oauth_config": { "scopes": { "bot": [ "chat:write", "chat:write.customize", "chat:write.public", "links:write", "im:history", "im:read", "im:write" ] } }, "settings": { "interactivity": { "is_enabled": true, "request_url": "Your webhook url here" }, "org_deploy_enabled": false, "socket_mode_enabled": false, "token_rotation_enabled": false } } Set up your Notion database with the three-column structure (Question, Description, Action) Configure the AI node with your preferred provider (OpenAI, Gemini, Ollama etc) Replace placeholder LinkedIn user and organization IDs with your own How to customize this workflow to your needs Database Options**: Swap Notion with Google Sheets, Airtable, or another database Filtering Logic**: Add custom filters based on keywords, message length, follower count, or business logic AI Customization**: Adjust the system prompt to match your brand tone and response goals Approval Platform**: Replace Slack with email, Discord, or another review platform Team Routing**: Use Slack metadata to route approvals to specific team members based on message category Enrichment**: Add secondary data enrichment using tools like Clearbit or FullContact Response Rules**: Create conditional logic for different response types based on sender profile or message content Perfect for anyone who's tired of being their company's accidental customer service department while trying to do their real job. This LinkedIn automation template was inspired by a live build done by Max Tkacz and Angel Menendez for The Studio.