by InfoGrab
This workflow is a chatbot that responds in public Slack channels through slash commands. I explain it in more detail in a YouTube video, though the video is only available in Korean.

How it works

When you invoke the registered slash command in Slack, the request arrives at the Webhook node. The Switch node then branches according to which slash command was requested. Here, a slash command called /ask is connected to the chatbot, which generates an answer to the question asked. The final node posts the response back to the channel. (A sketch of the incoming payload and routing check follows below.)

Set up steps

Create a Slack app.
Add the chat:write permission under Slack OAuth & Permissions > Scopes.
Create a Command in the Slack Slash Commands menu and enter the n8n Webhook node's URL.
Finish creating the Slash Command.
Enter the created command in the Switch node.
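For reference, here is a minimal sketch of what the Webhook node receives and how the routing can work. Slack posts slash-command requests as form-encoded fields, so the command name arrives in the request body; the field names (command, text, channel_id) follow Slack's documented slash-command payload, while the Code-node variant below is only an illustration of what the Switch rule checks:

```javascript
// Example payload Slack POSTs to the Webhook node (form-encoded):
// command=/ask&text=What+is+n8n%3F&channel_id=C0123456789&user_name=alice

// In the Switch node, one rule per command, e.g. the expression
// {{ $json.body.command }} equals "/ask".

// Equivalent check in a Code node, if you prefer branching in code:
const { command, text, channel_id } = $json.body;
if (command === '/ask') {
  return [{ json: { question: text, channel: channel_id } }];
}
return []; // unknown commands fall through to other branches
```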
by Nick Saraev
Smart Invoice Collection System with OpenAI, Gmail & Google Sheets

Categories: Financial Automation, AI Business Tools, Cash Flow Management

This workflow creates an intelligent invoice collection system that automatically follows up on overdue invoices using AI-powered personalization. The system monitors your invoice database, analyzes email history to prevent inappropriate follow-ups, and sends increasingly urgent but professional reminders at precise intervals. Built to solve one of the biggest cash flow killers for small businesses, this automation can easily generate an additional $10-15K per year in recovered payments.

Benefits
**Automated Cash Flow Recovery** - Generate $10-15K annually in previously forgotten invoice payments
**Intelligent Timing** - AI analyzes email history to prevent sending follow-ups at inappropriate times
**Escalating Urgency** - Four template levels from gentle reminders to firm collection notices
**Professional Communication** - Maintains business relationships while ensuring payment
**Zero Manual Work** - Complete automation from overdue detection to email delivery
**High-Value Service** - Easily sold to clients for $1,500-3,000 per implementation

How It Works

Invoice Monitoring System:
Connects to Google Sheets containing your invoice database with sent dates and client information
Automatically calculates days overdue using date difference calculations (see the Code-node sketch at the end of this entry)
Filters invoices to process only those at specific intervals (7, 14, 21, 28 days overdue)
Runs on schedule (daily recommended) to catch invoices as they become overdue

Email History Intelligence:
Retrieves recent email conversations with each client using Gmail integration
Analyzes communication patterns to identify recent discussions about invoices
Prevents sending follow-ups within 72 hours of relevant email conversations
Aggregates email threads with timestamps for comprehensive context analysis

AI-Powered Follow-up Generation:
Uses OpenAI to analyze conversation history and determine follow-up appropriateness
Selects the appropriate template based on days overdue and the escalation schedule
Generates personalized emails that reference specific client names and invoice details
Adapts tone from friendly reminders to more urgent payment requests over time

Professional Email Delivery:
Creates properly formatted emails with personalized subject lines and content
Maintains a professional tone while applying appropriate urgency levels
Includes clear next steps and payment instructions for clients
Optionally creates drafts for review or sends automatically based on your preference

Smart Template Escalation:
**7 Days:** Gentle reminder with helpful tone ("Hope you're well, just checking on that invoice")
**14 Days:** Professional follow-up with assistance offer ("Here to help if anything's unclear")
**21 Days:** More direct approach with specific timeline expectations
**28 Days:** Firm but professional final notice before escalation

Required Google Sheets Setup

Create a Google Sheet with these exact column headers:

Invoice Tracking Sheet:
Client Name - Full name of client/company for personalization
Email - Client's email address for follow-up communications
Date Sent - Date invoice was originally sent (format: YYYY-MM-DD)
Invoice ID - Unique identifier for tracking and reference
Amount - Invoice amount (optional, for internal tracking)
Status - Payment status (Pending, Paid, Overdue - optional)

Setup Instructions:
Create a Google Sheet with the exact column headers listed above
Populate it with your outstanding invoice data
Connect Google Sheets OAuth credentials in n8n
Update the Google Sheets document ID in the "Google Sheets" node
Configure Gmail OAuth for email access and sending
Set the schedule trigger for daily execution (recommended 9 AM business hours)

Template Customization:
The system includes 4 pre-written templates that you can customize:
Modify sender name and signature in the AI prompt
Adjust urgency levels and escalation timing
Customize company-specific language and policies
Add payment links or specific instructions

Business Use Cases
**Service-Based Businesses** - Agencies, consultants, and freelancers with NET30/60 payment terms
**B2B Companies** - Businesses with recurring invoice cycles and multiple clients
**Accounting Firms** - Offer automated collections as a premium service to clients
**Small Business Owners** - Eliminate manual follow-up work while improving cash flow
**Automation Agencies** - High-value service offering with clear ROI demonstration
**Professional Services** - Lawyers, architects, and consultants with project-based billing

Revenue Potential
This system transforms invoice collection economics:
**$10-15K annual recovery** for the average small business with forgotten invoices
**$1,500-3,000 implementation fee** when sold as a service to clients
**40-60% reduction** in accounts receivable aging
**Zero ongoing labor costs** - replaces hours of manual follow-up work
**Improved client relationships** - consistent, professional communication
**Scalable business offering** - one-time setup serves clients indefinitely

Difficulty Level: Intermediate
Estimated Build Time: 1-2 hours
Monthly Operating Cost: ~$15 (OpenAI + Google Workspace APIs)

Watch My Complete Build Process
Want to see exactly how I built this invoice collection system from scratch? I walk through the entire development process live, including the AI prompting strategies, email history analysis, and the exact business logic that generates thousands in recovered payments.
🎥 Watch My Live Build: "THIS Smart Invoice System Will Make You an Additional $750/Month"
This comprehensive tutorial shows the real development process - including advanced filtering logic, AI prompt engineering, and the proven templates that maintain professional relationships while ensuring payment.

Set Up Steps

Google Sheets Integration:
Create the invoice tracking spreadsheet with the required column structure
Set up Google Sheets OAuth credentials and permissions
Configure the document ID and sheet name in the workflow
Test data retrieval with sample invoice entries

Gmail Configuration:
Set up Gmail OAuth for both reading and sending emails
Configure email search parameters for conversation history
Test email retrieval and draft creation functionality
Set an appropriate sender name and signature

AI Follow-up Engine:
Configure OpenAI API credentials with appropriate rate limits
Customize the 4 escalation templates for your business tone
Test AI decision-making with sample conversation histories
Optimize prompts for your specific industry and client types

Schedule and Automation:
Set up the daily schedule trigger (recommended 9 AM business hours)
Configure error handling for API failures and network issues
Test the complete workflow with a small batch of overdue invoices
Monitor initial runs and optimize timing/templates based on results

Quality Control:
Start with draft mode to review AI-generated emails before sending
Monitor client responses and adjust templates for better results
Track payment recovery rates to demonstrate system effectiveness
Scale to full automation once confident in output quality

Advanced Optimizations
Enhance the system with additional capabilities:
**Multi-Currency Support** - Handle international invoices with proper formatting
**Payment Link Integration** - Include Stripe/PayPal links for immediate payment
**CRM Integration** - Sync with Salesforce, HubSpot, or other business systems
**Escalation Rules** - Automatically transfer to collections after the final notice
**Performance Analytics** - Track recovery rates and optimize messaging
**Multi-Language Templates** - Support international clients with localized messaging

Important Considerations
**Email Frequency** - Built-in 72-hour delays prevent overwhelming clients with follow-ups
**Professional Tone** - Templates maintain business relationships while ensuring payment
**Legal Compliance** - Follow local collection laws and regulations for your jurisdiction
**Client Communication** - System preserves conversation history for context and disputes
**Privacy Protection** - All email analysis happens securely within your n8n instance

Why This System Works
The key to successful automated invoice collection lies in intelligent timing and personalization:
**Context awareness** prevents follow-ups during ongoing payment discussions
**Graduated escalation** maintains professionalism while applying appropriate pressure
**Personalized messaging** makes automated emails feel human-written
**Consistent execution** ensures no invoices fall through the cracks
**Relationship preservation** maintains client trust while securing payments

Check Out My Channel
For more revenue-generating automation systems and proven business-building strategies, explore my YouTube channel where I share the exact systems used to scale automation agencies to $72K+ monthly revenue.
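To make the overdue-interval logic concrete, here is a minimal Code-node sketch of the date-difference and interval filter described above. The column name `Date Sent` matches the sheet layout in this template; implementing it as a single Code node (rather than n8n's Date & Time and Filter nodes) is an illustrative assumption:

```javascript
// Code node: keep only invoices that are exactly 7, 14, 21, or 28 days overdue today.
const INTERVALS = [7, 14, 21, 28];
const MS_PER_DAY = 24 * 60 * 60 * 1000;

return $input.all().flatMap((item) => {
  const sent = new Date(item.json['Date Sent']); // expects YYYY-MM-DD
  const daysOverdue = Math.floor((Date.now() - sent.getTime()) / MS_PER_DAY);
  return INTERVALS.includes(daysOverdue)
    ? [{ json: { ...item.json, daysOverdue } }] // pass through with the computed age
    : []; // drop everything not at an escalation interval
});
```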
by Nasser
For Who?

Content Creators
YouTube Automation
Marketing Teams

How it works

1 - Retrieve the base image, image description, and situation from Airtable
2 - Generate the image prompt
3 - Generate the image via Fal AI
4 - Verify that the image was generated
5 - Upload the image to Airtable

📺 YouTube Video Tutorial:

SETUP

Setup Input: The first part of the workflow can be replaced with anything else. As input you need a prompt and the base image URL (publicly available).

Setup Output: In this workflow the output is stored as an image on Airtable, but you can replace that with anything else. You basically have two options:

Store the generated image somewhere: keep everything as is and replace the last Airtable node with the third party you want to use.
Use the image directly in n8n: in the HTTP Request node "Generate Image", switch sync_mode to "true", remove all the following nodes, and add an "Extract from File" node (convert to Base64 string). A hedged sketch of the request follows below.

APIs: For the following third-party integrations, replace ==[YOUR_API_TOKEN]== with your API token or connect your account via Client ID / Secret to your n8n instance:

Fal AI (FLUX KONTEXT MAX): https://fal.ai/models/fal-ai/flux-pro/kontext/max/api#schema-input
Airtable: https://docs.n8n.io/integrations/builtin/app-nodes/n8n-nodes-base.airtable/?utm_source=n8n_app&utm_medium=node_settings_modal-credential_link&utm_campaign=n8n-nodes-base.airtable
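As a reference for the "Generate Image" HTTP Request node, here is a hedged sketch of the request for FLUX Kontext Max. The field names prompt, image_url, and sync_mode come from Fal's input schema linked above; the exact endpoint path and header format are assumptions to verify against that page:

```javascript
// Hypothetical request configuration (verify against the Fal AI schema link above).
// POST https://fal.run/fal-ai/flux-pro/kontext/max
// Header: Authorization: Key [YOUR_API_TOKEN]
const body = {
  prompt: $json.prompt,          // image prompt generated in the previous step
  image_url: $json.baseImageUrl, // publicly reachable base image (assumed field name)
  sync_mode: false,              // set to true to receive the image directly in the response
};
```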
by Ferenc Erb
Use Case

Automate chat interactions in Bitrix24 with a customizable bot that can handle various events and respond to user messages.

What This Workflow Does

Processes incoming webhook requests from Bitrix24
Handles authentication and token validation
Routes different event types (messages, joins, installations) - see the routing sketch below
Provides automated responses and bot registration
Manages secure communication between Bitrix24 and external services

Setup Instructions

Configure Bitrix24 webhook endpoints
Set up authentication credentials
Customize bot responses and behavior
Deploy and test the workflow
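A minimal sketch of the event routing described above, written as an n8n Code node. The event names follow Bitrix24's bot event naming (ONIMBOTMESSAGEADD for incoming messages, ONIMBOTJOINCHAT for joins, ONAPPINSTALL for installation); treat them as assumptions to confirm against the Bitrix24 documentation:

```javascript
// Route Bitrix24 bot events arriving through the Webhook node.
const event = $json.body?.event;

switch (event) {
  case 'ONIMBOTMESSAGEADD': // a user sent the bot a message
    return [{ json: { route: 'message', payload: $json.body } }];
  case 'ONIMBOTJOINCHAT':   // the bot was added to a chat
    return [{ json: { route: 'join', payload: $json.body } }];
  case 'ONAPPINSTALL':      // app installed -> register the bot
    return [{ json: { route: 'install', payload: $json.body } }];
  default:                  // anything else is ignored downstream
    return [{ json: { route: 'ignore', event } }];
}
```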
by Bazhard
TOTP Validation with Function Node

This template allows you to verify whether a 6-digit TOTP code is valid using the corresponding TOTP secret. It can be used in an authentication system. The inputs need to be:

a base32 TOTP secret (String)
a 6-digit code (String)

++Important:++ The 6-digit code must be in text format. If the code starts with zeros and is treated as a number, it could cause validation issues.

The function node generates a 6-digit code from the TOTP secret, then compares it with the provided code. If they match, it returns 1; otherwise, it returns 0. (An illustrative implementation follows below.)

Example usage: you retrieve the user's TOTP secret from a database, then you want to verify whether the 2FA code provided by the user is valid.

Setup Guidelines

You only need the ==TOTP VALIDATION== node. You will need to modify ==lines 39 and 40== of the node with the correct values for your specific context.

Testing the Template

You can define a sample secret and code in the EXAMPLE FIELDS node of the template, then click "Test Workflow". If the code is valid for the provided secret, the flow proceeds to the true branch of the IF CODE IS VALID node; otherwise, it goes to the false branch.
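For readers who want to see what such a Function node does internally, here is a self-contained sketch of RFC 6238 TOTP validation in Node.js (SHA-1, 30-second step, 6 digits, which are the common defaults). It is not the template's exact code, just an illustration of the comparison it performs; the input field names $json.secret and $json.code are assumptions:

```javascript
const crypto = require('crypto');

// Decode an RFC 4648 base32 secret into a Buffer.
function base32Decode(str) {
  const alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567';
  let bits = '';
  for (const c of str.replace(/=+$/, '').toUpperCase()) {
    bits += alphabet.indexOf(c).toString(2).padStart(5, '0');
  }
  const bytes = [];
  for (let i = 0; i + 8 <= bits.length; i += 8) {
    bytes.push(parseInt(bits.slice(i, i + 8), 2));
  }
  return Buffer.from(bytes);
}

// Compute the 6-digit TOTP for the current 30-second window (RFC 6238).
function totp(secret, step = 30, digits = 6) {
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(Date.now() / 1000 / step)));
  const hmac = crypto.createHmac('sha1', base32Decode(secret)).update(counter).digest();
  const offset = hmac[hmac.length - 1] & 0x0f; // dynamic truncation (RFC 4226)
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return String(code).padStart(digits, '0');
}

// Return 1 if the user-supplied code matches, 0 otherwise - as in the template.
const isValid = totp($json.secret) === String($json.code) ? 1 : 0;
return [{ json: { isValid } }];
```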
by David Olusola
🎨 AI Image Editor with Form Upload + Telegram Delivery 🚀

Who's it for? 👥
This workflow is built for content creators, social media managers, designers, and agencies who need fast, AI-powered image editing without the hassle. Whether you're batch-editing for clients or spicing up personal projects, this tool gets it done — effortlessly.

What it does 🛠️
A seamless pipeline that:
📥 Accepts uploads + prompts via a clean form
☁️ Saves images to Google Drive automatically
🧠 Edits images with OpenAI's image API
📁 Converts results to downloadable PNGs
📬 Delivers the final image instantly via Telegram
Perfect for AI-enhanced workflows that need speed, structure, and simplicity.

How it works ⚙️
User Uploads: fill out a form with an image + editing prompt
Cloud Save: auto-upload to your Google Drive folder
AI Editing: OpenAI processes the image with your prompt (see the hedged API sketch below)
Convert & Format: image saved as PNG
Telegram Delivery: final result sent straight to your chat 💬

You'll need ✅
🔑 OpenAI API key
📂 Google Drive OAuth2 setup
🤖 Telegram bot token & chat ID
⚙️ n8n instance (self-hosted or cloud)

Setup in 4 Easy Steps 🛠️
1. Connect APIs: add OpenAI, Google Drive, and Telegram credentials to n8n; store keys securely (avoid hardcoding!)
2. Configure Settings: set the Google Drive folder ID, add the Telegram chat ID, tweak the image size (default: 1024×1024)
3. Deploy the Form: add a Webhook Trigger node, test with a sample image, share the form link with users
4. Fine-Tune Variables 🎯: in the Set node, customize 📐 image size, 📁 folder path, 📲 delivery options, ⏱️ timeout duration

Want to customize more? 🎛️
🖼️ Image Settings: change size (e.g. 512x512 or 2048x2048); update the model (when new versions drop)
📂 Storage: auto-organize files by date/category; add dynamic file names using n8n expressions
📤 Delivery: swap Telegram for Slack, email, or Discord; add multiple delivery channels; include the image prompt or metadata in messages
📝 Form Upgrades: add fields for advanced editing; validate file types (e.g. PNG/JPEG only); show a progress bar for long edits
⚡ Advanced Features: add error handling or retry flows; support batch editing; include approvals or watermarking before delivery

⚠️ Notes & Best Practices
✅ Check your OpenAI credit balance
🖼️ Test with different image sizes/types
⏱️ Adjust timeout settings for larger files
🔐 Always secure your API keys
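For orientation, a hedged sketch of the image-edit call the OpenAI step performs. The endpoint and multipart fields follow OpenAI's images/edits API, but the model name, file name, and prompt below are placeholder assumptions to adjust to your account and the node's actual configuration:

```javascript
// Hypothetical Node.js (18+) sketch of the edit request; n8n's HTTP Request node can do the same.
import fs from 'node:fs';

const form = new FormData();
form.append('model', 'gpt-image-1'); // assumed model name - check your available models
form.append('image', new Blob([fs.readFileSync('input.png')]), 'input.png');
form.append('prompt', 'Make the sky a dramatic sunset'); // the user's form prompt
form.append('size', '1024x1024'); // matches the workflow default

const res = await fetch('https://api.openai.com/v1/images/edits', {
  method: 'POST',
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  body: form,
});
const { data } = await res.json(); // e.g. data[0].b64_json -> decode and save as PNG
```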
by Nick Saraev
Deep Multiline Icebreaker System (AI-Powered Cold Email Personalization)

Categories: Lead Generation, AI Marketing, Sales Automation

This workflow creates an advanced AI-powered cold email personalization system that achieves 5-10% reply rates by generating deeply personalized multi-line icebreakers. The system scrapes comprehensive website data, analyzes multiple pages per prospect, and uses advanced AI prompting to create custom email openers that make recipients believe you've personally researched their entire business.

Benefits
**Superior Response Rates** - Achieves 5-10% reply rates vs. 1-2% for standard cold email campaigns
**Deep Website Intelligence** - Scrapes and analyzes multiple pages per prospect, not just homepages
**Advanced AI Personalization** - Uses sophisticated prompting techniques with examples and formatting rules
**Complete Lead Pipeline** - From Apollo search to personalized icebreakers in Google Sheets
**Scalable Processing** - Handle hundreds of prospects with intelligent batching and error handling
**Revenue-Focused Approach** - System designed around proven $72K/month agency methodologies

How It Works

Apollo Lead Acquisition:
Integrates directly with Apollo.io search URLs through the Apify scraper
Processes 500+ leads per search with comprehensive contact data
Filters for prospects with both email addresses and accessible websites

Multi-Page Website Scraping:
Scrapes the homepage to extract all internal website links (see the link-extraction sketch at the end of this entry)
Processes relative URLs and filters out external/irrelevant links
Performs intelligent batching to prevent IP blocking during scraping

Comprehensive Content Analysis:
Converts HTML to markdown for efficient AI processing
Uses GPT-4 to generate detailed abstracts of each webpage
Aggregates insights from multiple pages into comprehensive prospect profiles

Advanced AI Icebreaker Generation:
Employs sophisticated prompting with system messages, examples, and formatting rules
Uses proven icebreaker templates that reference non-obvious website details
Generates personalized openers that imply deep manual research

Smart Data Processing:
Removes duplicate URLs and handles scraping errors gracefully
Implements token limits to control AI processing costs
Organizes final output in structured Google Sheets format

Required Google Sheets Setup

Create a Google Sheet with these exact tab and column structures:

Search URLs Tab:
URL - Contains Apollo.io search URLs for your target audiences

Leads Tab (Output):
first_name - Contact's first name
last_name - Contact's last name
email - Contact's email address
website_url - Company website URL
headline - Job title/position
location - Geographic location
phone_number - Contact phone (if available)
multiline_icebreaker - AI-generated personalized opener

Setup Instructions:
Create a Google Sheet with "Search URLs" and "Leads" tabs
Add your Apollo search URLs to the first tab (one per row)
Connect Google Sheets OAuth credentials in n8n
Update the Google Sheets document ID in all sheet nodes
The workflow reads from Search URLs and outputs to Leads automatically

Apollo Search URL Format:
Your search URLs should look like: https://app.apollo.io/#/people?personLocations[]=United%20States&personTitles[]=ceo&qKeywords=marketing%20agency&page=1

Business Use Cases
**AI Automation Agencies** - Generate high-converting prospect outreach for service-based businesses
**B2B Sales Teams** - Create personalized cold email campaigns that actually get responses
**Marketing Agencies** - Offer premium personalization services to clients
**Consultants** - Build authority through deeply researched prospect outreach
**SaaS Companies** - Improve demo booking rates through personalized messaging
**Professional Services** - Stand out from generic sales emails with custom insights

Revenue Potential
This system transforms cold email economics:
**5-10x Higher Response Rates** than standard cold email approaches
**$72K/month proven methodology** - the exact system used to scale a successful AI agency
**Premium Positioning** - prospects assume you've done extensive manual research
**Scalable Personalization** - process hundreds of prospects daily vs. manual research

Difficulty Level: Advanced
Estimated Build Time: 3-4 hours
Monthly Operating Cost: ~$150 (Apollo + Apify + OpenAI + Email platform APIs)

Watch My Complete Live Build
Want to see me build this entire deep personalization system from scratch? I walk through every component live - including the AI prompting strategies, website scraping logic, error handling, and the exact techniques that generate 5-10% reply rates.
🎥 See My Live Build Process: "I Deep-Personalized 1000+ Cold Emails Using THIS AI System (FREE TEMPLATE)"
This comprehensive tutorial shows the real development process - including advanced AI prompting, multi-page scraping architecture, and the proven icebreaker templates that have generated over $72K/month in agency revenue.

Set Up Steps

Apollo & Apify Integration:
Configure an Apify account with Apollo scraper access
Set up API credentials and test lead extraction
Define target audience parameters and lead qualification criteria

Google Sheets Database Setup:
Create the multi-sheet structure (Search URLs, Leads)
Configure proper column mappings for lead data
Set up Google Sheets API credentials and permissions

Website Scraping Infrastructure:
Configure HTTP Request nodes with proper redirect handling
Set up error handling for websites that can't be scraped
Implement intelligent batching with split-in-batches nodes

AI Content Processing:
Set up OpenAI API credentials with appropriate rate limits
Configure the dual-AI approach (page summarization + icebreaker generation)
Implement token limiting to control processing costs

Advanced Icebreaker Generation:
Configure sophisticated AI prompting with system messages
Set up example-based learning with input/output pairs
Implement formatting rules for natural-sounding personalization

Quality Control & Testing:
Test the complete workflow with small prospect batches
Validate AI output quality and personalization accuracy
Monitor response rates and optimize messaging templates

Advanced Optimizations
Scale the system with:
**Industry-Specific Templates:** Customize icebreaker formats for different verticals
**A/B Testing Framework:** Test different AI prompt variations and templates
**CRM Integration:** Automatically add qualified responders to sales pipelines
**Response Tracking:** Monitor which personalization elements drive the highest engagement
**Multi-Touch Sequences:** Create follow-up campaigns based on initial response data

Important Considerations
**AI Token Management:** System includes intelligent token limiting to control OpenAI costs
**Scraping Ethics:** Built-in delays and error handling prevent website overload
**Data Quality:** Filtering logic ensures only high-quality prospects with accessible websites
**Scalability:** Batch processing prevents IP blocking during high-volume scraping

Why This System Works
The key to 5-10% reply rates lies in making prospects believe you've done extensive manual research:
Non-obvious details from deep website analysis
Natural language patterns that avoid template detection
Company name abbreviation (e.g., "Love AMS" vs "Love AMS Professional Services")
Multiple page insights aggregated into compelling narratives

Check Out My Channel
For more advanced automation systems and proven business-building strategies that generate real revenue, explore my YouTube channel where I share the exact methodologies used to build successful automation agencies.
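To illustrate the multi-page scraping step, here is a minimal Code-node sketch that extracts internal links from a scraped homepage, resolves relative URLs, and filters out external or irrelevant links, as described above. The field names (html, website_url) and the cap of 10 pages per prospect are illustrative assumptions:

```javascript
// Extract internal links from the homepage HTML and normalize them.
const base = new URL($json.website_url);
const html = $json.html;

const hrefs = [...html.matchAll(/href="([^"#]+)"/g)].map((m) => m[1]);

const internal = new Set();
for (const href of hrefs) {
  try {
    const url = new URL(href, base); // resolves relative URLs against the homepage
    if (url.hostname === base.hostname && !/\.(pdf|jpg|png|css|js)$/i.test(url.pathname)) {
      internal.add(url.href);
    }
  } catch { /* skip malformed hrefs */ }
}

// Keep a small batch per prospect to limit scraping volume and AI cost.
return [...internal].slice(0, 10).map((link) => ({ json: { link } }));
```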
by Mark Shcherbakov
Video Guide

I prepared a comprehensive guide demonstrating how to build a multi-level retrieval AI agent in n8n that smartly narrows down search results first by file descriptions, then retrieves detailed vector data for improved relevance and answer quality.

YouTube link

Who is this for?
This workflow suits developers, AI enthusiasts, and data engineers working with vector stores and large document collections who want to enhance the precision of AI retrieval by leveraging metadata-based filtering before deep content search. It helps users who manage many files or documents and aim to reduce noise and input-size limits in AI queries.

What problem does this workflow solve?
Performing vector searches directly on large numbers of document chunks can degrade AI input quality and introduce noise. This workflow implements a two-stage retrieval process that first searches file descriptions to filter relevant files, then runs vector searches only within those files to fetch precise results. This reduces irrelevant data, improves answer accuracy, and optimizes performance when dealing with dozens or hundreds of files split into multiple pieces.

What this workflow does
This n8n workflow connects to a Supabase vector store to perform:

**Multi-level Retrieval:**
File Description Search: calls a Supabase RPC function to find files whose descriptions (metadata) best match the user query. It filters and limits the number of relevant files based on similarity scores.
Document Chunk Retrieval: uses the retrieved file IDs to perform a second RPC call fetching detailed vector pieces only within those files, again filtered by similarity thresholds.

**OpenAI Integration:** the filtered document chunks and associated metadata (like file names and URLs) are passed to an OpenAI message node that includes system instructions to guide the AI in leveraging the knowledge base and linked resources for comprehensive responses.

**Custom Code Functions:** two code nodes interact with the Supabase stored procedures match_files and match_documents to perform the semantic searches with multi-level metadata filtering unavailable in default vector filters. (A hedged sketch of these RPC calls follows below.)

**Helper Flows and SQL Setup:** templates and SQL scripts prepare the database tables and functions, with additional flows to generate embeddings from file-description summaries using OpenAI.

N8N Workflow Preparation:
Create or verify a Supabase account with vector store capability.
Set up the necessary database tables and RPC functions (match_files and match_documents) using the provided SQL scripts.
Replace all credentials in the n8n nodes to connect to your Supabase and OpenAI accounts.
Optionally upload document files and generate their vector embeddings and description summaries in a separate helper workflow.

Main Workflow Logic:
Code Function Node #1: receives the user query and calls the match_files RPC to retrieve file IDs by searching file descriptions with vector similarity thresholds and file limits.
Code Function Node #2: takes the filtered file IDs and invokes the match_documents RPC to fetch vector document chunks only from those files, with additional similarity filtering and count limits.
OpenAI Message Node: combines the fetched document pieces, their metadata (file URLs, similarity scores), and system prompts to generate precise AI-powered answers referencing the documents.

This multi-tiered retrieval process improves search relevance and AI contextual understanding by smartly limiting the vector search scope first to relevant files, then to specific document chunks, refining user query results.
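A hedged sketch of the two-stage retrieval the code nodes perform, calling the match_files and match_documents stored procedures via Supabase's REST RPC endpoint. The parameter names (query_embedding, match_threshold, match_count, file_ids), the thresholds, and the SUPABASE_URL / SUPABASE_ANON_KEY / embedding variables are assumptions about how the provided SQL functions are defined:

```javascript
// Stage 1: filter files by description similarity; Stage 2: fetch chunks from those files.
const headers = {
  apikey: SUPABASE_ANON_KEY,
  Authorization: `Bearer ${SUPABASE_ANON_KEY}`,
  'Content-Type': 'application/json',
};

const files = await fetch(`${SUPABASE_URL}/rest/v1/rpc/match_files`, {
  method: 'POST',
  headers,
  body: JSON.stringify({ query_embedding: embedding, match_threshold: 0.4, match_count: 5 }),
}).then((r) => r.json());

const chunks = await fetch(`${SUPABASE_URL}/rest/v1/rpc/match_documents`, {
  method: 'POST',
  headers,
  body: JSON.stringify({
    query_embedding: embedding,
    file_ids: files.map((f) => f.id), // restrict the vector search to the matched files
    match_threshold: 0.5,
    match_count: 20,
  }),
}).then((r) => r.json());

return [{ json: { chunks } }];
```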
by Daniel Nolde
What it is

Chat with your event schedule from Google Sheets in Telegram:

"When is the next meetup?"
"How many events are there next month?"
"Who presented most often?"
"Which future meetups have no presenters yet?"

This workflow lets you chat with a Telegram bot about past, present, and future events that are scheduled in a Google Spreadsheet. (Info: this proof-of-concept was created as a demo for a hackathon of an AI & Developer Meetup in Da Nang, Vietnam, that uses a Telegram group to organize.)

Who it is for

If you want an easy way for your audience to get information about your events, you can use this workflow for the same purpose, or easily adapt it to your needs and to different use cases where you want to query smaller amounts of tabular data in natural language.

How it works

Upon being triggered by a chat message to a Telegram bot, the schedule of meetups is retrieved from Google Sheets, converted into markdown table syntax, and fed into the system prompt of an LLM (we're using OpenRouter in this example), whose output is posted back as the answer into the same Telegram chat. (A sketch of the table conversion follows below.)

Setup steps

To review it in action: as the reviewer of this workflow, you can temporarily use it via an existing Telegram bot. Simply point your Telegram client to https://t.me/AiDaNangBot and start asking questions like:

"When is the next meetup?"
"What future meetings do not have presenters?"
"Who presented on Future of Human Relationships?"

To build upon this workflow:

Import the workflow.
Customize the Google credentials for your individual access.
Create a Telegram bot and connect it to the workflow by entering its API token into the credentials used in the Telegram trigger node.
In the "Settings" node, replace the "scheduleURL" with the URL of your own copy of the Google Spreadsheet, or spin off your own from a copy of the Event Schedule Template Sheet. The structure of the spreadsheet doesn't matter; it's just important that you semantically structure your information in dedicated columns, clearly labeled in the header row.
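A minimal sketch of the sheet-to-markdown step described above, written as an n8n Code node; it assumes the Google Sheets node returns one item per row with the column headers as JSON keys:

```javascript
// Convert incoming sheet rows into a markdown table for the LLM system prompt.
const rows = $input.all().map((item) => item.json);
const headers = Object.keys(rows[0]);

const table = [
  `| ${headers.join(' | ')} |`,
  `| ${headers.map(() => '---').join(' | ')} |`,
  ...rows.map((row) => `| ${headers.map((h) => String(row[h] ?? '')).join(' | ')} |`),
].join('\n');

return [{ json: { scheduleTable: table } }];
```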
by Marth
🧠 How It Works

This AI Agent automatically qualifies property-buyer leads from form submissions.

Form Submission Trigger: when a user submits their details via a property inquiry form, the workflow is triggered.
AI Lead Classification: the buyer's input (budget, location, timeline, etc.) is analyzed by OpenAI to extract structured data and generate a lead score (0–100).
Lead Qualification Logic: leads with a score of 70 or above are marked as qualified; the rest are ignored or stored separately. (A sketch of this threshold check follows below.)
Follow-Up Action: qualified leads trigger an email notification to the agent and record creation in Airtable as the CRM.

⚙️ How to Set Up

Form Setup: replace the form trigger with your preferred source (Typeform, Google Forms, etc.). Make sure the form includes: Name, Email, Budget, Location, Timeline, Property Type.
Connect Your Credentials: add your OpenAI API key for the LLM node, connect your Gmail account for notifications, and link your Airtable base and table to store qualified leads.
Customize Scoring Logic (Optional): you can tweak the prompt in the Information Extractor node to change how scoring works.
Test the Workflow: submit a test entry via the form; check whether you receive an email and see the lead in Airtable.
Activate & Go Live: turn on the workflow and start qualifying real buyer leads in real time.

Connect with me on LinkedIn: https://www.linkedin.com/in/bheta-dwiki-maranatha-15654b227/
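For reference, a small sketch of the qualification gate described above. The field name lead_score is an assumption about what the Information Extractor returns; in the actual workflow this check can equally live in an IF node with the condition {{ $json.lead_score }} >= 70:

```javascript
// Code-node version of the qualification logic: pass only leads scoring 70+.
const { lead_score, name, email } = $json; // assumed extractor output fields

if (Number(lead_score) >= 70) {
  return [{ json: { name, email, lead_score, qualified: true } }];
}
return []; // unqualified leads are dropped (or route them to a second output/table)
```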
by ankitkansaldev
🎬 TikTok Influencer Scraper (URL Input) via Bright Data + n8n & Sheets

A comprehensive n8n automation that scrapes TikTok influencer profiles using Bright Data's TikTok dataset and automatically saves detailed profile information to Google Sheets.

📋 Overview
This workflow provides an automated TikTok influencer data collection solution that scrapes comprehensive profile information and saves it to Google Sheets. Perfect for influencer marketing research, competitor analysis, social media monitoring, and marketing campaign planning.

✨ Key Features
📝 Form-Based Input: Simple web form to submit TikTok profile URLs
🤖 Bright Data Integration: Uses Bright Data's TikTok dataset for reliable scraping
⏳ Status Monitoring: Intelligent polling system to check scraping progress
🔄 Retry Logic: Automatic retry mechanism with 30-second intervals
📊 Data Extraction: Comprehensive profile data including engagement metrics
📈 Google Sheets Storage: Automatic data storage and organization
⚡ Error Handling: Built-in error handling and status reporting
🎯 Custom Fields: Configurable output fields for specific data needs

🎯 What This Workflow Does

Input
**Profile URLs**: TikTok profile URLs submitted through the web form
**Custom Fields**: Configurable data fields for extraction
**Country Settings**: Geo-targeting for accurate data collection

Processing
Form Submission: User submits a TikTok profile URL through the web form
API Trigger: Sends profile data to Bright Data for scraping (see the hedged API sketch at the end of this entry)
Status Polling: Continuously checks scraping progress
Wait & Retry: Implements 30-second delays between status checks
Data Retrieval: Fetches complete profile data when ready
Sheet Update: Saves extracted data to Google Sheets
Status Reporting: Provides completion status and messages

Output Data Points

| Field | Description | Example |
|-------|-------------|---------|
| Account ID | Unique TikTok account identifier | @username123 |
| Nickname | Display name on profile | "John Doe" |
| Biography | Profile bio/description | "Content creator & influencer" |
| Followers | Number of followers | 1,250,000 |
| Following | Number of accounts following | 500 |
| Likes | Total likes across all videos | 50,000,000 |
| Videos Count | Total number of videos posted | 1,200 |
| Profile URL | Direct link to TikTok profile | https://www.tiktok.com/@username |
| Profile Picture | Profile image URL | https://p16-sign-sg.tiktokcdn.com/... |
| Profile Picture HD | High-definition profile image | https://p16-sign-sg.tiktokcdn.com/... |
| Is Verified | Verification status | true/false |
| Bio Link | External link in bio | https://linktr.ee/username |
| Like Engagement Rate | Engagement rate based on likes | 5.2% |
| Comment Engagement Rate | Engagement rate based on comments | 2.1% |
| Top Videos | List of top performing videos | [video_objects] |
| Region | Geographic region | "US" |
| Is Under Age 18 | Age status indicator | true/false |

🚀 Setup Instructions

Prerequisites
n8n instance (self-hosted or cloud)
Google account with Sheets access
Bright Data account with TikTok dataset access
Valid TikTok profile URLs for testing
10-15 minutes for setup

Step 1: Import the Workflow
Copy the JSON workflow code from the provided file
In n8n: Workflows → + Add workflow → Import from JSON
Paste the JSON and click Import

Step 2: Configure Bright Data
Set up Bright Data credentials:
In n8n: Credentials → + Add credential → HTTP Request Generic Credential
Name: "Bright Data API"
Authentication: Bearer Token
Token: Your Bright Data API key
Test the connection
Configure the dataset:
Ensure you have access to the TikTok dataset (gd_l1villgoiiidt09ci)
Verify dataset permissions in the Bright Data dashboard
Check dataset limits and pricing

Step 3: Configure Google Sheets Integration
Create a Google Sheet:
Go to Google Sheets
Create a new spreadsheet named "TikTok Influencer Data"
Create a sheet tab named "TikTok profile by url"
Copy the Sheet ID from the URL: https://docs.google.com/spreadsheets/d/SHEET_ID_HERE/edit
Set up Google Sheets credentials:
In n8n: Credentials → + Add credential → Google Sheets OAuth2 API
Complete the OAuth setup and test the connection
Prepare your data sheet with columns:
Column A: Account ID
Column B: Nickname
Column C: Biography
Column D: Followers
Column E: Following
Column F: Likes
Column G: Videos Count
Column H: Profile URL
Column I: Is Verified
Column J: Bio Link
Column K: Like Engagement Rate
Column L: Comment Engagement Rate
Column M: Region
Column N: Status
Column O: Message

Step 4: Update Workflow Settings
Update API credentials:
Open the "Sends profile URLs to Bright Data to trigger scraping" node
Replace BRIGHT_DATA_API_KEY with your actual API key
Update the dataset ID if different
Update the Google Sheets nodes:
Open the "Google Sheets" node
Replace the document ID: 1OeqtCFm4Wek9DI5YFOWQXTpQJS-SJxC10iAPKEKkmiY
Select your Google Sheets credential
Choose the correct sheet/tab name
Configure form settings:
Open the "Search by Profile URL" node
Customize the form title and field labels as needed
Note the webhook URL for form access

Step 5: Test & Activate
Add test profiles:
Access the form using the webhook URL
Submit 1-2 TikTok profile URLs for testing
Use full URLs (e.g., https://www.tiktok.com/@username)
Test the workflow:
Submit a test profile through the form
Monitor execution in n8n
Verify data appears in the Google Sheet
Check for any error messages

📖 Usage Guide

Submitting TikTok Profiles
Navigate to your form URL (found in the Form Trigger node)
Enter a TikTok profile URL in the format: https://www.tiktok.com/@username
Click Submit to start the scraping process
Wait for processing (typically 1-3 minutes)

Understanding the Process
The workflow follows this sequence:
Form Submission → Profile URL captured
API Trigger → Scraping job submitted to Bright Data
Status Polling → Checks every 30 seconds if data is ready
Data Retrieval → Fetches complete profile information
Sheet Update → Saves data to Google Sheets

Monitoring Progress
Check n8n execution logs for real-time status
The Bright Data dashboard shows scraping progress
Google Sheets will populate when the data is ready
The Status column shows "ready" when complete

Reading the Results
Your Google Sheet will show:
Complete TikTok profile information
Engagement metrics and statistics
Profile verification status
Bio links and external connections
Timestamp of data collection

🔧 Customization Options

Adding More Data Points
Edit the JSON body in the "Sends profile URLs to Bright Data" node to include additional fields:

```json
"custom_output_fields": [
  "account_id",
  "nickname",
  "biography",
  "followers",
  "following",
  "likes",
  "videos_count",
  "language",
  "creation_time",
  "last_post_time",
  "avg_video_duration",
  "hashtags_used",
  "music_used"
]
```

Modifying Input Parameters
Customize the scraping parameters:
**Country targeting**: Change the "country" field in the input
**Search limits**: Adjust the "limit_per_input" value
**Discovery method**: Modify the "discover_by" parameter
**Error handling**: Toggle the "include_errors" setting

Batch Processing Multiple Profiles
To process multiple profiles simultaneously:
Modify the input array in the API call
Add multiple profile URLs in a single request
Implement loop logic for processing results
Add rate limiting between requests

Custom Form Fields
Enhance the form with additional inputs:
Open the "Search by Profile URL" node
Add form fields for:
Country selection
Number of videos to analyze
Specific date ranges
Custom tags or categories

🚨 Troubleshooting

Common Issues & Solutions

"Bright Data connection failed"
Cause: Invalid API credentials or dataset access
Solution: Verify the API key in the Bright Data dashboard, check dataset permissions

"Profile not found or private"
Cause: Invalid TikTok URL or private profile
Solution: Verify the profile URL format, ensure the profile is public

"Google Sheets permission denied"
Cause: Incorrect credentials or sheet permissions
Solution: Re-authenticate Google Sheets, check sheet sharing settings

"Scraping timeout"
Cause: Profile data taking too long to process
Solution: Increase the wait time or implement longer polling intervals

"Invalid dataset ID"
Cause: Incorrect or expired dataset configuration
Solution: Check the Bright Data dashboard for the correct dataset ID

"Form submission failed"
Cause: Webhook configuration issues
Solution: Verify the webhook URL and form trigger settings

Advanced Troubleshooting
**Check execution logs** in n8n for detailed error messages
**Test individual nodes** by running them separately
**Verify data formats** to ensure URLs are properly formatted
**Monitor API limits** by checking Bright Data usage quotas
**Add error handling** with try-catch logic for robust operation

📊 Use Cases & Examples

1. Influencer Marketing Research
Goal: Identify and analyze potential influencers for campaigns
Research influencers in specific niches
Analyze engagement rates and audience size
Compare multiple influencers for campaign selection
Track influencer growth over time

2. Competitive Analysis
Goal: Monitor competitors' TikTok presence and performance
Track competitor follower growth
Analyze content strategies and engagement
Monitor posting frequency and timing
Identify trending content themes

3. Social Media Monitoring
Goal: Track brand mentions and user-generated content
Monitor branded hashtag usage
Track brand advocates and micro-influencers
Analyze sentiment and engagement patterns
Identify trending topics in your industry

4. Market Research Pipeline
Goal: Gather social media intelligence for business decisions
Analyze target audience behavior
Study content preferences and trends
Generate reports for stakeholders
Support marketing strategy development

⚙ Advanced Configuration

Rate Limiting and Performance
To optimize for large-scale scraping:
Adjust wait times between status checks
Implement exponential backoff for retries
Add batch processing for multiple profiles
Monitor API usage to avoid limits

Data Validation and Cleaning
Enhance data quality with validation:
Add data type validation for numeric fields
Implement URL format checking
Clean and standardize text fields
Add data completeness checks

Integration with Business Tools
Connect the workflow to your existing systems:
**CRM Integration**: Update customer records with influencer data
**Slack Notifications**: Send alerts when new data is available
**Database Storage**: Store data in PostgreSQL/MySQL for analysis
**BI Tools**: Connect to Tableau/Power BI for visualization

Webhook Integration
For real-time updates:
Add webhook triggers for immediate profile checks
Integrate with external systems via webhooks
Create API endpoints for programmatic access
Implement authentication for secure access

📈 Performance & Limits

Expected Performance
**Single Profile**: 30-60 seconds average processing time
**Concurrent Requests**: 5-10 simultaneous (depends on Bright Data plan)
**Data Accuracy**: 95%+ for public TikTok profiles
**Success Rate**: 90%+ for accessible profiles
**Daily Capacity**: 100-1000 profiles (depends on rate limits)

Resource Usage
**Memory**: ~50MB per execution
**Storage**: Minimal (data stored in Google Sheets)
**API Calls**: 3-5 Bright Data calls per profile (including status checks)
**Bandwidth**: ~1-2MB per profile scraped
**Execution Time**: 1-2 minutes per profile

Scaling Considerations
**Rate Limiting**: Add delays for high-volume scraping
**Error Handling**: Implement retry logic for failed requests
**Data Validation**: Add checks for malformed profile data
**Monitoring**: Track success/failure rates over time
**Cost Optimization**: Monitor API usage to control costs

🤝 Support & Community

Getting Help
**n8n Community Forum**: community.n8n.io
**Documentation**: docs.n8n.io
**Bright Data Support**: Contact through your dashboard
**GitHub Issues**: Report bugs and feature requests

Contributing
Share improvements with the community
Report issues and suggest enhancements
Create variations for specific use cases
Document best practices and lessons learned

📋 Quick Setup Checklist

Before You Start
☐ n8n instance running (self-hosted or cloud)
☐ Google account with Sheets access
☐ Bright Data account with TikTok dataset access
☐ Valid TikTok profile URLs for testing
☐ 15 minutes for setup

Setup Steps
☐ Import Workflow - Copy JSON and import to n8n
☐ Configure Bright Data - Set up API credentials and test
☐ Create Google Sheet - New sheet with proper column structure
☐ Set up Google Sheets credentials - OAuth setup and test
☐ Update workflow settings - Replace sheet ID and API keys
☐ Test with sample profiles - Submit 1-2 URLs and verify results
☐ Activate workflow - Enable form trigger for production use

Ready to Use! 🎉
Your form URL: https://your-n8n-instance.com/form/[webhook-id]

🎯 Happy TikTok Scraping!
This workflow provides a solid foundation for automated TikTok influencer data collection. Customize it to fit your specific needs and use cases for influencer marketing, competitive analysis, and social media research.
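To make the trigger-and-poll pattern concrete, here is a hedged sketch of the three Bright Data calls the workflow chains together (the workflow itself does this with HTTP Request, Wait, and IF nodes). The paths follow Bright Data's Datasets v3 API (trigger, progress, snapshot); treat the exact parameters and status values as assumptions to verify in your dashboard:

```javascript
// 1) Trigger the scrape for one profile URL.
const headers = { Authorization: `Bearer ${BRIGHT_DATA_API_KEY}`, 'Content-Type': 'application/json' };
const { snapshot_id } = await fetch(
  'https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_l1villgoiiidt09ci&include_errors=true',
  { method: 'POST', headers, body: JSON.stringify([{ url: 'https://www.tiktok.com/@username', country: 'US' }]) },
).then((r) => r.json());

// 2) Poll every 30 seconds until the snapshot is ready (the workflow's Wait + IF loop).
let status = 'running';
while (status === 'running') {
  await new Promise((res) => setTimeout(res, 30_000));
  ({ status } = await fetch(`https://api.brightdata.com/datasets/v3/progress/${snapshot_id}`, { headers })
    .then((r) => r.json()));
}

// 3) Fetch the scraped profile data and hand it to the Google Sheets step.
const profiles = await fetch(
  `https://api.brightdata.com/datasets/v3/snapshot/${snapshot_id}?format=json`,
  { headers },
).then((r) => r.json());
```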
by Ajith joseph
🤖 Create a Telegram Bot with Mistral AI and Conversation Memory

A sophisticated Telegram bot that provides AI-powered responses with conversation memory. This template demonstrates how to integrate any AI API service with Telegram, making it easy to swap between different AI providers like OpenAI, Anthropic, Google AI, or any other API-based AI model.

🔧 How it works
The workflow creates an intelligent Telegram bot that:
💬 Maintains conversation history for each user
🧠 Provides contextual AI responses using any AI API service
📱 Handles different message types and commands
🔄 Manages chat sessions with clear functionality
🔌 Easily adaptable to any AI provider (OpenAI, Anthropic, Google AI, etc.)

⚙️ Set up steps

📋 Prerequisites
🤖 Telegram Bot Token (from @BotFather)
🔑 AI API Key (from any AI service provider)
🚀 n8n instance with webhook capability

🛠️ Configuration Steps

🤖 Create Telegram Bot
Message @BotFather on Telegram
Create a new bot with the /newbot command
Save the bot token for credentials setup

🧠 Choose Your AI Provider
OpenAI: Get an API key from the OpenAI platform
Anthropic: Sign up for Claude API access
Google AI: Get a Gemini API key
NVIDIA: Access LLaMA models
Hugging Face: Use the inference API
Any other AI API service

🔐 Set up Credentials in n8n
Add Telegram API credentials with your bot token
Add Bearer Auth/API Key credentials for your chosen AI service
Test both connections

🚀 Deploy Workflow
Import the workflow JSON
Customize the AI API call (see the customization section)
Activate the workflow
Set the webhook URL in the Telegram bot settings

✨ Features

🚀 Core Functionality
**Smart Message Routing**: Automatically categorizes incoming messages (commands, text, non-text)
**Conversation Memory**: Maintains chat history for each user (last 10 messages)
**AI-Powered Responses**: Integrates with any AI API service for intelligent replies
**Command Support**: Built-in /start and /clear commands

📱 Message Types Handled
**Text Messages**: Processed through the AI model with context
**Commands**: Special handling for bot commands
**Non-text Messages**: Polite error message for unsupported content

💾 Memory Management
👤 User-specific chat history storage
🔄 Automatic history trimming (keeps last 10 messages)
🌐 Global state management across workflow executions
(A sketch of this history handling follows at the end of this entry.)

🤖 Bot Commands
/start 🎯 - Welcome message with bot introduction
/clear 🗑️ - Clears conversation history for a fresh start
Regular text 💬 - Processed by the AI with conversation context

🔧 Technical Details

🏗️ Workflow Structure
📡 Telegram Trigger - Receives all incoming messages
🔀 Message Filtering - Routes messages based on type/content
💾 History Management - Maintains conversation context
🧠 AI Processing - Generates intelligent responses
📤 Response Delivery - Sends formatted replies back to the user

🤖 AI API Integration (Customizable)
Current Example (NVIDIA):
Model: mistralai/mistral-nemotron
Temperature: 0.6 (balanced creativity)
Max tokens: 4096
Response limit: Under 200 words

🔄 Easy to Replace with Any AI Service:

OpenAI Example:
```json
{ "model": "gpt-4", "messages": [...], "temperature": 0.7, "max_tokens": 1000 }
```

Anthropic Claude Example:
```json
{ "model": "claude-3-sonnet-20240229", "messages": [...], "max_tokens": 1000 }
```

Google Gemini Example:
```json
{ "contents": [...], "generationConfig": { "temperature": 0.7, "maxOutputTokens": 1000 } }
```

🛡️ Error Handling
❌ Non-text message detection and appropriate responses
🔧 API failure handling
⚠️ Invalid command processing

🎨 Customization Options

🤖 AI Provider Switching
To use a different AI service, modify the "NVIDIA LLaMA Chat Model" node:
📝 Change the URL in the HTTP Request node
🔧 Update the request body format in the "Prepare API Request" node
🔐 Update the authentication method if needed
📊 Adjust response parsing in the "Save AI Response to History" node

🧠 AI Behavior
📝 Modify the system prompt in the "Prepare API Request" node
🌡️ Adjust temperature and response parameters
📏 Change response length limits
🎯 Customize model-specific parameters

💾 Memory Settings
📊 Adjust the history length (currently 10 messages)
👤 Modify the user identification logic
🗄️ Customize the data persistence approach

🎭 Bot Personality
🎉 Update welcome message content
⚠️ Customize error messages and responses
➕ Add new command handlers

💡 Use Cases
🎧 **Customer Support**: Automated first-line support with context awareness
📚 **Educational Assistant**: Homework help and learning support
👥 **Personal AI Companion**: General conversation and assistance
💼 **Business Assistant**: FAQ handling and information retrieval
🔬 **AI API Testing**: Perfect template for testing different AI services
🚀 **Prototype Development**: Quick AI chatbot prototyping

📝 Notes
🌐 Requires an active n8n instance for webhook handling
💰 AI API usage may have rate limits and costs (varies by provider)
💾 Bot memory persists across workflow restarts
👥 Supports multiple concurrent users with separate histories
🔄 The template is provider-agnostic - easily switch between AI services
🛠️ Perfect starting point for any AI-powered Telegram bot project

🔧 Popular AI Services You Can Use

| Provider | Model Examples | API Endpoint Style |
|----------|---------------|-------------------|
| 🟢 OpenAI | GPT-4, GPT-3.5 | https://api.openai.com/v1/chat/completions |
| 🔵 Anthropic | Claude 3 Opus, Sonnet | https://api.anthropic.com/v1/messages |
| 🔴 Google | Gemini Pro, Gemini Flash | https://generativelanguage.googleapis.com/v1beta/models/ |
| 🟡 NVIDIA | LLaMA, Mistral | https://integrate.api.nvidia.com/v1/chat/completions |
| 🟠 Hugging Face | Various OSS models | https://api-inference.huggingface.co/models/ |
| 🟣 Cohere | Command, Generate | https://api.cohere.ai/v1/generate |

Simply replace the HTTP Request node configuration to switch providers!
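As an illustration of the memory management described above, here is a sketch of how a Code node can keep a per-user history in workflow static data and trim it to the last 10 messages. $getWorkflowStaticData is n8n's built-in persistence helper (note it only persists for active production workflows); the key names histories and messages are assumptions, not the template's exact ones:

```javascript
// Append the incoming message to this user's history and trim to the last 10 entries.
const staticData = $getWorkflowStaticData('global');
staticData.histories = staticData.histories || {};

const chatId = String($json.message.chat.id); // Telegram chat/user identifier
const history = staticData.histories[chatId] || [];

history.push({ role: 'user', content: $json.message.text });
staticData.histories[chatId] = history.slice(-10); // keep only the last 10 messages

// Hand the trimmed history to the "Prepare API Request" step.
return [{ json: { chatId, messages: staticData.histories[chatId] } }];
```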