by Nasser
Who is it for? Content creators, YouTube automation channels, and marketing teams.

How it works

1. Retrieve the base image, image description, and situation from Airtable
2. Generate an image prompt
3. Generate the image via Fal AI
4. Verify that the image was generated
5. Upload the image to Airtable

📺 YouTube Video Tutorial

SETUP

Setup Input: The first part of the workflow can be replaced with anything else. As input you need a prompt and the base image URL (publicly available).

Setup Output: In this workflow the output is the image stored on Airtable, but you can replace that with anything else. Essentially you have two options:

- Store the generated image somewhere: keep everything as is and replace the last Airtable node with the third party you want to use.
- Use the image directly in n8n: in the "Generate Image" HTTP Request node, switch sync_mode to "true", remove all the following nodes, and add an "Extract from File" node (convert to Base64 string). A sketch of this request follows at the end of this section.

APIs: For the following third-party integrations, replace ==[YOUR_API_TOKEN]== with your API token or connect your account via Client ID / Secret to your n8n instance:

- Fal AI (FLUX KONTEXT MAX): https://fal.ai/models/fal-ai/flux-pro/kontext/max/api#schema-input
- Airtable: https://docs.n8n.io/integrations/builtin/app-nodes/n8n-nodes-base.airtable/
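As a reference, here is a minimal JavaScript sketch of what the "Generate Image" call looks like. The endpoint and field names follow the Kontext input schema linked above, but verify them against the current fal.ai docs; the `image_prompt` and `base_image_url` field names are placeholders for whatever your earlier nodes output.

```javascript
// Sketch of the "Generate Image" HTTP request to Fal AI (not authoritative).
const FAL_KEY = "[YOUR_API_TOKEN]"; // replace, as in the setup notes above

const response = await fetch("https://queue.fal.run/fal-ai/flux-pro/kontext/max", {
  method: "POST",
  headers: {
    Authorization: `Key ${FAL_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    prompt: $json.image_prompt,      // generated in step 2 (placeholder field name)
    image_url: $json.base_image_url, // publicly available base image from step 1
    sync_mode: true,                 // return the image directly instead of a queue URL
  }),
});

const result = await response.json();
return { json: result };
```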
by Nick Saraev
Deep Multiline Icebreaker System (AI-Powered Cold Email Personalization)

Categories: Lead Generation, AI Marketing, Sales Automation

This workflow creates an advanced AI-powered cold email personalization system that achieves 5-10% reply rates by generating deeply personalized multi-line icebreakers. The system scrapes comprehensive website data, analyzes multiple pages per prospect, and uses advanced AI prompting to create custom email openers that make recipients believe you've personally researched their entire business.

Benefits

- **Superior Response Rates** - Achieves 5-10% reply rates vs. 1-2% for standard cold email campaigns
- **Deep Website Intelligence** - Scrapes and analyzes multiple pages per prospect, not just homepages
- **Advanced AI Personalization** - Uses sophisticated prompting techniques with examples and formatting rules
- **Complete Lead Pipeline** - From Apollo search to personalized icebreakers in Google Sheets
- **Scalable Processing** - Handles hundreds of prospects with intelligent batching and error handling
- **Revenue-Focused Approach** - Designed around proven $72K/month agency methodologies

How It Works

Apollo Lead Acquisition:
- Integrates directly with Apollo.io search URLs through an Apify scraper
- Processes 500+ leads per search with comprehensive contact data
- Filters for prospects with both email addresses and accessible websites

Multi-Page Website Scraping:
- Scrapes the homepage to extract all internal website links
- Processes relative URLs and filters out external/irrelevant links (see the sketch below)
- Performs intelligent batching to prevent IP blocking during scraping

Comprehensive Content Analysis:
- Converts HTML to markdown for efficient AI processing
- Uses GPT-4 to generate detailed abstracts of each webpage
- Aggregates insights from multiple pages into comprehensive prospect profiles

Advanced AI Icebreaker Generation:
- Employs sophisticated prompting with system messages, examples, and formatting rules
- Uses proven icebreaker templates that reference non-obvious website details
- Generates personalized openers that imply deep manual research

Smart Data Processing:
- Removes duplicate URLs and handles scraping errors gracefully
- Implements token limits to control AI processing costs
- Organizes final output in a structured Google Sheets format
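The link-processing step can be pictured as a small n8n Code node like the following. This is a minimal sketch, assuming the homepage scrape puts raw HTML in an `html` field and the prospect's URL in `website_url` (both field names are placeholders for the template's actual ones).

```javascript
// Sketch: extract, resolve, filter, and dedupe internal links from a homepage.
const { html, website_url } = $json;
const base = new URL(website_url);

// Pull every href out of the raw homepage HTML.
const hrefs = [...html.matchAll(/href="([^"#]+)"/g)].map((m) => m[1]);

const internal = new Set();
for (const href of hrefs) {
  try {
    // Resolves relative URLs ("/about") against the homepage URL.
    const url = new URL(href, base);
    // Keep only same-domain pages; drop mailto:, external links, and asset files.
    if (url.hostname === base.hostname && !/\.(png|jpe?g|gif|pdf|css|js)$/i.test(url.pathname)) {
      internal.add(url.origin + url.pathname); // normalizing here also dedupes
    }
  } catch (e) {
    // Malformed hrefs are skipped rather than failing the whole item.
  }
}

return [...internal].map((link) => ({ json: { link } }));
```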
Required Google Sheets Setup

Create a Google Sheet with these exact tab and column structures:

Search URLs Tab:
- URL - Contains Apollo.io search URLs for your target audiences

Leads Tab (Output):
- first_name - Contact's first name
- last_name - Contact's last name
- email - Contact's email address
- website_url - Company website URL
- headline - Job title/position
- location - Geographic location
- phone_number - Contact phone (if available)
- multiline_icebreaker - AI-generated personalized opener

Setup Instructions:
1. Create a Google Sheet with "Search URLs" and "Leads" tabs
2. Add your Apollo search URLs to the first tab (one per row)
3. Connect Google Sheets OAuth credentials in n8n
4. Update the Google Sheets document ID in all sheet nodes
5. The workflow reads from Search URLs and outputs to Leads automatically

Apollo Search URL Format:

Your search URLs should look like: https://app.apollo.io/#/people?personLocations[]=United%20States&personTitles[]=ceo&qKeywords=marketing%20agency&page=1

Business Use Cases

- **AI Automation Agencies** - Generate high-converting prospect outreach for service-based businesses
- **B2B Sales Teams** - Create personalized cold email campaigns that actually get responses
- **Marketing Agencies** - Offer premium personalization services to clients
- **Consultants** - Build authority through deeply researched prospect outreach
- **SaaS Companies** - Improve demo booking rates through personalized messaging
- **Professional Services** - Stand out from generic sales emails with custom insights

Revenue Potential

This system transforms cold email economics:
- **5-10x Higher Response Rates** than standard cold email approaches
- **$72K/month proven methodology** - the exact system used to scale a successful AI agency
- **Premium Positioning** - prospects assume you've done extensive manual research
- **Scalable Personalization** - process hundreds of prospects daily vs. manual research

Difficulty Level: Advanced
Estimated Build Time: 3-4 hours
Monthly Operating Cost: ~$150 (Apollo + Apify + OpenAI + Email platform APIs)

Watch My Complete Live Build

Want to see me build this entire deep personalization system from scratch? I walk through every component live - including the AI prompting strategies, website scraping logic, error handling, and the exact techniques that generate 5-10% reply rates.

🎥 See My Live Build Process: "I Deep-Personalized 1000+ Cold Emails Using THIS AI System (FREE TEMPLATE)"

This comprehensive tutorial shows the real development process - including advanced AI prompting, multi-page scraping architecture, and the proven icebreaker templates that have generated over $72K/month in agency revenue.

Set Up Steps

1. Apollo & Apify Integration:
   - Configure an Apify account with Apollo scraper access
   - Set up API credentials and test lead extraction
   - Define target audience parameters and lead qualification criteria
2. Google Sheets Database Setup:
   - Create the multi-sheet structure (Search URLs, Leads)
   - Configure proper column mappings for lead data
   - Set up Google Sheets API credentials and permissions
3. Website Scraping Infrastructure:
   - Configure HTTP Request nodes with proper redirect handling
   - Set up error handling for websites that can't be scraped
   - Implement intelligent batching with split-in-batches nodes
4. AI Content Processing:
   - Set up OpenAI API credentials with appropriate rate limits
   - Configure the dual-AI approach (page summarization + icebreaker generation)
   - Implement token limiting to control processing costs
5. Advanced Icebreaker Generation:
   - Configure sophisticated AI prompting with system messages
   - Set up example-based learning with input/output pairs
   - Implement formatting rules for natural-sounding personalization
6. Quality Control & Testing:
   - Test the complete workflow with small prospect batches
   - Validate AI output quality and personalization accuracy
   - Monitor response rates and optimize messaging templates

Advanced Optimizations

Scale the system with:
- **Industry-Specific Templates:** Customize icebreaker formats for different verticals
- **A/B Testing Framework:** Test different AI prompt variations and templates
- **CRM Integration:** Automatically add qualified responders to sales pipelines
- **Response Tracking:** Monitor which personalization elements drive the highest engagement
- **Multi-Touch Sequences:** Create follow-up campaigns based on initial response data

Important Considerations

- **AI Token Management:** The system includes intelligent token limiting to control OpenAI costs
- **Scraping Ethics:** Built-in delays and error handling prevent website overload
- **Data Quality:** Filtering logic ensures only high-quality prospects with accessible websites
- **Scalability:** Batch processing prevents IP blocking during high-volume scraping

Why This System Works

The key to 5-10% reply rates lies in making prospects believe you've done extensive manual research:
- Non-obvious details from deep website analysis
- Natural language patterns that avoid template detection
- Company name abbreviation (e.g., "Love AMS" vs. "Love AMS Professional Services")
- Multiple page insights aggregated into compelling narratives

Check Out My Channel

For more advanced automation systems and proven business-building strategies that generate real revenue, explore my YouTube channel where I share the exact methodologies used to build successful automation agencies.
by Mark Shcherbakov
Video Guide

I prepared a comprehensive guide demonstrating how to build a multi-level retrieval AI agent in n8n that smartly narrows down search results first by file descriptions, then retrieves detailed vector data for improved relevance and answer quality.

Youtube Link

Who is this for?

This workflow suits developers, AI enthusiasts, and data engineers working with vector stores and large document collections who want to enhance the precision of AI retrieval by leveraging metadata-based filtering before deep content search. It helps users who manage many files or documents and aim to reduce noise and input size limits in AI queries.

What problem does this workflow solve?

Performing vector searches directly on large numbers of document chunks can degrade AI input quality and introduce noise. This workflow implements a two-stage retrieval process that first searches file descriptions to filter relevant files, then runs vector searches only within those files to fetch precise results. This reduces irrelevant data, improves answer accuracy, and optimizes performance when dealing with dozens or hundreds of files split into multiple pieces.

What this workflow does

This n8n workflow connects to a Supabase vector store to perform:

- **Multi-level Retrieval:**
  - File Description Search: Calls a Supabase RPC function to find files whose descriptions (metadata) best match the user query. It filters and limits the number of relevant files based on similarity scores.
  - Document Chunk Retrieval: Uses the retrieved file IDs to perform a second RPC call fetching detailed vector pieces only within those files, again filtered by similarity thresholds.
- **OpenAI Integration:** The filtered document chunks and associated metadata (like file names and URLs) are passed to an OpenAI message node that includes system instructions to guide the AI in leveraging the knowledge base and linked resources for comprehensive responses.
- **Custom Code Functions:** Two Code nodes interact with the Supabase stored procedures match_files and match_documents to perform semantic searches with multi-level metadata filtering unavailable in the default vector filters.
- **Helper Flows and SQL Setup:** Templates and SQL scripts prepare the database tables and functions, with additional flows to generate embeddings from file description summaries using OpenAI.

N8N Workflow

Preparation:
1. Create or verify a Supabase account with vector store capability.
2. Set up the necessary database tables and RPC functions (match_files and match_documents) using the provided SQL scripts.
3. Replace all credentials in the n8n nodes to connect to your Supabase and OpenAI accounts.
4. Optionally upload document files and generate their vector embeddings and description summaries in a separate helper workflow.

Main Workflow Logic (see the sketch below):
1. Code Function Node #1: Receives the user query and calls the match_files RPC to retrieve file IDs by searching file descriptions with vector similarity thresholds and file limits.
2. Code Function Node #2: Takes the filtered file IDs and invokes the match_documents RPC to fetch vector document chunks only from those files, with additional similarity filtering and count limits.
3. OpenAI Message Node: Combines the fetched document pieces, their metadata (file URLs, similarity scores), and system prompts to generate precise AI-powered answers referencing the documents.

This multi-tiered retrieval process improves search relevance and AI contextual understanding by smartly limiting the vector search scope first to relevant files, then to specific document chunks, refining the user query results.
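The two-stage logic can be sketched in JavaScript with supabase-js. The RPC names match_files and match_documents come from the workflow description; the parameter names (query_embedding, match_threshold, match_count, file_ids) are illustrative assumptions — match them to your actual SQL function signatures from the provided scripts.

```javascript
// Minimal sketch of the multi-level retrieval, under the assumptions above.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_KEY);

async function retrieve(queryEmbedding) {
  // Stage 1: narrow down to relevant files via their description embeddings.
  const { data: files } = await supabase.rpc("match_files", {
    query_embedding: queryEmbedding, // assumed parameter name
    match_threshold: 0.75,
    match_count: 5,
  });

  // Stage 2: vector search over chunks, restricted to those file IDs.
  const { data: chunks } = await supabase.rpc("match_documents", {
    query_embedding: queryEmbedding,
    file_ids: files.map((f) => f.id), // assumed column name
    match_threshold: 0.7,
    match_count: 20,
  });

  return chunks; // chunks + metadata then go to the OpenAI message node
}
```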
by Ajith joseph
🤖 Create a Telegram Bot with Mistral AI and Conversation Memory

A sophisticated Telegram bot that provides AI-powered responses with conversation memory. This template demonstrates how to integrate any AI API service with Telegram, making it easy to swap between different AI providers like OpenAI, Anthropic, Google AI, or any other API-based AI model.

🔧 How it works

The workflow creates an intelligent Telegram bot that:
- 💬 Maintains conversation history for each user
- 🧠 Provides contextual AI responses using any AI API service
- 📱 Handles different message types and commands
- 🔄 Manages chat sessions with clear functionality
- 🔌 Is easily adaptable to any AI provider (OpenAI, Anthropic, Google AI, etc.)

⚙️ Set up steps

📋 Prerequisites
- 🤖 Telegram Bot Token (from @BotFather)
- 🔑 AI API Key (from any AI service provider)
- 🚀 n8n instance with webhook capability

🛠️ Configuration Steps
1. 🤖 Create Telegram Bot
   - Message @BotFather on Telegram
   - Create a new bot with the /newbot command
   - Save the bot token for credentials setup
2. 🧠 Choose Your AI Provider
   - OpenAI: Get an API key from the OpenAI platform
   - Anthropic: Sign up for Claude API access
   - Google AI: Get a Gemini API key
   - NVIDIA: Access LLaMA models
   - Hugging Face: Use the inference API
   - Any other AI API service
3. 🔐 Set up Credentials in n8n
   - Add Telegram API credentials with your bot token
   - Add Bearer Auth/API Key credentials for your chosen AI service
   - Test both connections
4. 🚀 Deploy Workflow
   - Import the workflow JSON
   - Customize the AI API call (see the customization section)
   - Activate the workflow
   - Set the webhook URL in the Telegram bot settings

✨ Features

🚀 Core Functionality
- 📨 **Smart Message Routing**: Automatically categorizes incoming messages (commands, text, non-text)
- 🧠 **Conversation Memory**: Maintains chat history for each user (last 10 messages)
- 🤖 **AI-Powered Responses**: Integrates with any AI API service for intelligent replies
- ⚡ **Command Support**: Built-in /start and /clear commands

📱 Message Types Handled
- 💬 **Text Messages**: Processed through the AI model with context
- 🔧 **Commands**: Special handling for bot commands
- ❌ **Non-text Messages**: Polite error message for unsupported content

💾 Memory Management
- 👤 User-specific chat history storage
- 🔄 Automatic history trimming (keeps the last 10 messages)
- 🌐 Global state management across workflow executions

🤖 Bot Commands
- /start 🎯 - Welcome message with bot introduction
- /clear 🗑️ - Clears conversation history for a fresh start
- Regular text 💬 - Processed by the AI with conversation context

🔧 Technical Details

🏗️ Workflow Structure
1. 📡 Telegram Trigger - Receives all incoming messages
2. 🔀 Message Filtering - Routes messages based on type/content
3. 💾 History Management - Maintains conversation context
4. 🧠 AI Processing - Generates intelligent responses
5. 📤 Response Delivery - Sends formatted replies back to the user

🤖 AI API Integration (Customizable)

Current Example (NVIDIA):
- Model: mistralai/mistral-nemotron
- Temperature: 0.6 (balanced creativity)
- Max tokens: 4096
- Response limit: Under 200 words

🔄 Easy to Replace with Any AI Service:

OpenAI Example:
```json
{
  "model": "gpt-4",
  "messages": [...],
  "temperature": 0.7,
  "max_tokens": 1000
}
```

Anthropic Claude Example:
```json
{
  "model": "claude-3-sonnet-20240229",
  "messages": [...],
  "max_tokens": 1000
}
```

Google Gemini Example:
```json
{
  "contents": [...],
  "generationConfig": {
    "temperature": 0.7,
    "maxOutputTokens": 1000
  }
}
```

🛡️ Error Handling
- ❌ Non-text message detection and appropriate responses
- 🔧 API failure handling
- ⚠️ Invalid command processing

🎨 Customization Options

🤖 AI Provider Switching

To use a different AI service, modify the "NVIDIA LLaMA Chat Model" node:
1. 📝 Change the URL in the HTTP Request node
2. 🔧 Update the request body format in the "Prepare API Request" node
3. 🔐 Update the authentication method if needed
4. 📊 Adjust the response parsing in the "Save AI Response to History" node

🧠 AI Behavior
- 📝 Modify the system prompt in the "Prepare API Request" node
- 🌡️ Adjust temperature and response parameters
- 📏 Change response length limits
- 🎯 Customize model-specific parameters

💾 Memory Settings (see the sketch below)
- 📊 Adjust the history length (currently 10 messages)
- 👤 Modify the user identification logic
- 🗄️ Customize the data persistence approach

🎭 Bot Personality
- 🎉 Update the welcome message content
- ⚠️ Customize error messages and responses
- ➕ Add new command handlers

💡 Use Cases
- 🎧 **Customer Support**: Automated first-line support with context awareness
- 📚 **Educational Assistant**: Homework help and learning support
- 👥 **Personal AI Companion**: General conversation and assistance
- 💼 **Business Assistant**: FAQ handling and information retrieval
- 🔬 **AI API Testing**: Perfect template for testing different AI services
- 🚀 **Prototype Development**: Quick AI chatbot prototyping

📝 Notes
- 🌐 Requires an active n8n instance for webhook handling
- 💰 AI API usage may have rate limits and costs (varies by provider)
- 💾 Bot memory persists across workflow restarts
- 👥 Supports multiple concurrent users with separate histories
- 🔄 The template is provider-agnostic - easily switch between AI services
- 🛠️ Perfect starting point for any AI-powered Telegram bot project

🔧 Popular AI Services You Can Use

| Provider | Model Examples | API Endpoint Style |
|----------|----------------|--------------------|
| 🟢 OpenAI | GPT-4, GPT-3.5 | https://api.openai.com/v1/chat/completions |
| 🔵 Anthropic | Claude 3 Opus, Sonnet | https://api.anthropic.com/v1/messages |
| 🔴 Google | Gemini Pro, Gemini Flash | https://generativelanguage.googleapis.com/v1beta/models/ |
| 🟡 NVIDIA | LLaMA, Mistral | https://integrate.api.nvidia.com/v1/chat/completions |
| 🟠 Hugging Face | Various OSS models | https://api-inference.huggingface.co/models/ |
| 🟣 Cohere | Command, Generate | https://api.cohere.ai/v1/generate |

Simply replace the HTTP Request node configuration to switch providers!
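For the memory settings, the per-user history trimming can be pictured as an n8n Code node using workflow static data (a real n8n Code-node facility). This is a minimal sketch, assuming the history lives under a `histories` key — the template's actual node may structure it differently.

```javascript
// Sketch: per-user conversation memory with automatic trimming.
const staticData = $getWorkflowStaticData("global");
staticData.histories = staticData.histories || {};

const chatId = $json.message.chat.id; // Telegram's unique chat_id identifies the user
const userText = $json.message.text;

const history = staticData.histories[chatId] || [];
history.push({ role: "user", content: userText });

// Keep only the last 10 messages to bound prompt size and cost.
staticData.histories[chatId] = history.slice(-10);

// Hand the trimmed history to the "Prepare API Request" step.
return { json: { chatId, messages: staticData.histories[chatId] } };
```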
by PollupAI
Who is this for?

This workflow is designed for Customer Success Managers (CSMs), sales, support, or marketing teams using HubSpot CRM who want to automate customer engagement tracking when new emails arrive. It’s ideal for businesses looking to streamline CRM updates without manual data entry.

Problem Solved / Use Case

Manually logging email interactions in HubSpot is time-consuming. This workflow automatically parses incoming emails, checks if the sender exists in HubSpot, and either:
- Creates a new contact and logs the email as an engagement (if the sender is new), or
- Logs the email as an engagement for an existing contact.

What This Workflow Does

1. Triggers when a new email arrives in a connected IMAP inbox.
2. Parses the email using AI (OpenAI) to extract structured data.
3. Searches HubSpot for the sender’s email address.
4. Updates HubSpot: creates a contact (if missing) and logs the email as an engagement, or logs the engagement for an existing contact. A sketch of this branch follows below.

Setup

1. Configure Email Account: Replace the default IMAP node with your email provider.
2. HubSpot Credentials: Add your HubSpot API key in the HubSpot nodes.
3. OpenAI Integration: Ensure your OpenAI API key is set for email parsing.

Customization Tips

- **Improve AI Prompt**: Modify the OpenAI prompt to extract specific email data (e.g., customer intent).
- **Add Filters**: Exclude auto-replies or spam by adding a filter node.
- **Extend Functionality**: Use the parsed data to trigger follow-up tasks (e.g., Slack alerts, tickets).

Need Help?

Contact thomas@pollup.net for workflow modifications or help. Discover my other workflows here.
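The contact-vs-engagement branch can be sketched as follows. All field names here (sender_email, summary, hubspotMatch, contactId) are illustrative assumptions, not the template's exact schema.

```javascript
// Sketch: decide between "create contact + log" and "log only".
const parsed = {
  sender_email: $json.from,   // extracted from the IMAP message (assumed field)
  sender_name: $json.senderName,
  summary: $json.aiSummary,   // produced by the OpenAI parsing step
};

// Result of the HubSpot "search contact by email" step (assumed shape).
const existingContact = $json.hubspotMatch; // e.g. { contactId: "1234" } or null

if (existingContact) {
  // Sender already exists: just log the email as an engagement.
  return { json: { action: "log_engagement", contactId: existingContact.contactId, ...parsed } };
}

// Otherwise create the contact first, then log the engagement.
return { json: { action: "create_contact_and_log", ...parsed } };
```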
by Aashiq
👤 Who’s it for

This workflow is for content creators, marketers, educators, or anyone who wants to instantly summarize YouTube videos and repurpose them into different formats (LinkedIn post, tweet, etc.) via a simple Telegram chatbot.

⚙️ How it works

This n8n automation listens for messages in Telegram. If the message contains a YouTube link, it:
1. Extracts the video ID (see the sketch below)
2. Fetches the video transcript using RapidAPI
3. Cleans the transcript of any special characters
4. Sends it to OpenAI to generate a summary

If the message is not a link, it simply acts as an AI chatbot using OpenAI with memory support.

✅ Supports follow-up prompts like:
- “Make it shorter”
- “Turn this into a LinkedIn post”
- “Create a tweet thread”

🧑‍🤝‍🧑 Multi-User Support

This Telegram bot supports multiple users simultaneously. It tracks memory and context separately for each user using Telegram's unique chat_id.
- ✅ Each user gets personalized AI replies
- ✅ Follow-up commands work per user
- ✅ No interference between users

🛠️ Requirements

- A Telegram bot token (get one via @BotFather)
- An OpenAI API key (from https://platform.openai.com/account/api-keys)
- A RapidAPI key and host (typically youtube-transcript3.p.rapidapi.com)

> 🚨 API keys must be added manually — they are not included in the template.

🧩 How to Set It Up

1. Configure the Telegram Trigger node with your bot token.
2. In the HTTP Request node, set:
   - X-RapidAPI-Key: your RapidAPI key
   - X-RapidAPI-Host: your RapidAPI host URL
3. Add your OpenAI API credentials to the AI Agent node.
4. Use the provided sticky notes for guidance inside the workflow itself.

🎛️ How to Customize

- Modify the AI prompt behavior in the AI Agent node
- Change the text formatting in the Code node
- Use a different transcript API if preferred
- Add commands like “make it into a blog post”, “summarize in bullet points”, etc.

📌 Notes

- All nodes are renamed to reflect their function
- API credentials are removed for security
- Includes colored boxes and sticky notes to guide the user
- Compatible with n8n cloud and self-hosted setups
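The video-ID extraction step can look roughly like this as an n8n Code node. The `$json.message.text` path follows the Telegram trigger payload; the regex covers the common YouTube URL shapes and is a sketch, not the template's exact code.

```javascript
// Sketch: detect a YouTube link and pull out the 11-character video ID.
const text = $json.message.text || "";

// Matches watch?v=<id>, youtu.be/<id>, shorts/<id>, and embed/<id>.
const match = text.match(
  /(?:youtube\.com\/(?:watch\?v=|shorts\/|embed\/)|youtu\.be\/)([A-Za-z0-9_-]{11})/
);

if (!match) {
  // Not a YouTube link: fall through to the plain chatbot branch.
  return { json: { isYoutubeLink: false, text } };
}

return { json: { isYoutubeLink: true, videoId: match[1] } };
```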
by Jakkrapat Ampring
Description

Quickly organize your inbox with AI! This simple workflow automatically classifies incoming emails into different categories — like High Priority, Work Related, or Promotions — and applies Gmail labels accordingly. Setup takes less than 2 minutes, and it runs 24/7, helping you stay focused on what matters most without manual sorting.

Tools/Services Needed

- Gmail: To trigger the workflow and label emails.
- Google Gemini (or any LLM model): To intelligently classify email content.

How It Works

1. Gmail Trigger: Detects every new incoming email.
2. Text Classifier Node: Classifies the email content into predefined categories.
3. Google Gemini Chat Model: Provides the AI-powered understanding behind the classification.
4. Conditional Labeling:
   - If the email is High Priority, label it accordingly.
   - If it’s Work Related (e.g., internal emails), apply the work label.
   - If it’s a Promotion, sort it into the promotions label.
5. Gmail Labeling: Automatically adds the correct label to the email.

Setup Instructions

1. Connect your Gmail account to n8n.
2. Connect your Google Gemini (or other LLM) credentials.
3. Customize the categories and labels if needed.
4. Activate the workflow — and that's it!

Notes

- You can easily add more categories (like "Finance" or "Newsletters") by adjusting the classification prompt — a sketch follows below.
- Works best with a clean and minimal set of categories to avoid overlap.
- Can be adapted to work with any other large language model (OpenAI, Claude, etc.) if preferred.
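Here is a minimal sketch of how the category set might be expressed for the classifier. The names and descriptions are illustrative, not the template's exact configuration — tune them to your own inbox and keep the set small to avoid overlap.

```javascript
// Sketch: category definitions feeding the Text Classifier node.
const categories = [
  { category: "High Priority", description: "Urgent requests, deadlines, escalations, or messages from key clients." },
  { category: "Work Related",  description: "Internal emails, project updates, and meeting invites from colleagues." },
  { category: "Promotions",    description: "Marketing newsletters, discounts, and product announcements." },
  // Add more as needed, e.g.:
  // { category: "Finance", description: "Invoices, receipts, and bank statements." },
];

return categories.map((c) => ({ json: c }));
```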
by Sarfaraz Muhammad Sajib
AI-Powered Automated Outreach Scheduling with Gemini, Gmail & Google Sheets

Automate your lead generation and outreach process seamlessly using AI, Gmail, and Google Sheets — all within n8n. No complicated setup — just import, activate, and start reaching prospects with personalized messages generated by Google Gemini’s AI model.

Quick Setup

1. Import the Workflow: Download and import the provided workflow into your n8n instance.
2. Connect Your Accounts: Authenticate your Google Sheets account and connect your Gmail account for sending emails.
3. Prepare the Spreadsheet: Use this template to set up your leads and tracking sheet.
4. Configure the Gemini API: Obtain your Gemini API key (here) and add it to the Gemini API credentials within n8n.
5. Set Scheduling Preferences: Customize the Schedule Trigger node to control when the workflow runs.
6. Edit Email Prompts: Update the initial and follow-up email prompts to match your outreach tone and goals.
7. Set Rate Limits: Configure the rate-limiting settings to comply with Gmail sending limits and avoid spam filters.
8. Activate the Workflow: Enable the workflow to begin automated outreach to your leads.
9. Track and Manage Leads: Monitor responses and update lead statuses directly in your Google Sheet.

How It Works

- **Schedule Trigger:** Automatically starts outreach based on your defined schedule
- **Google Sheets Integration:** Fetches leads and updates their status after outreach
- **Email Validation:** Checks if lead emails are valid before sending
- **Website Scraper:** Gathers info from lead websites to personalize messages
- **Google Gemini AI:** Generates tailored cold outreach messages optimized for high response
- **Gmail Node:** Sends personalized emails directly from your Gmail account

Core Features

- Pull leads automatically from Google Sheets
- Validate emails to avoid bounces
- Scrape lead websites for custom messaging context
- Generate AI-crafted outreach emails with dynamic personalization
- Send emails on schedule without manual intervention
- Update lead status to track outreach progress

AI Integration

- Uses Google Gemini AI to create professional, friendly, and engaging outreach emails
- Dynamic prompt templates tailored to each lead’s company and website content
- Structured JSON output to easily map subject, greeting, and body content (a sketch of this output follows at the end of this section)

💡 Usage Examples

- B2B cold outreach campaigns with personalized emails
- Automated follow-ups based on lead engagement
- Lead nurturing with context-aware messaging
- Sales prospecting workflows integrated into your CRM

✨ Benefits

- Save hours by automating personalized outreach
- Increase response rates with AI-optimized messaging
- Keep lead data organized and updated in Google Sheets
- Fully scalable and customizable n8n workflow
- Minimal setup, ready to run out of the box
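The structured JSON output might be mapped onto the Gmail node roughly like this. The subject/greeting/body fields mirror the description above, but the exact schema in the template may differ, and the `geminiResponseText` and `lead_email` field names are placeholders.

```javascript
// Sketch: map Gemini's structured output onto the outgoing email.
const aiOutput = JSON.parse($json.geminiResponseText); // from the Gemini node
// Expected shape (illustrative):
// {
//   "subject": "Quick idea for Acme's onboarding flow",
//   "greeting": "Hi Jane,",
//   "body": "I noticed on acme.com that ..."
// }

return {
  json: {
    to: $json.lead_email, // from the Google Sheets row
    subject: aiOutput.subject,
    message: `${aiOutput.greeting}\n\n${aiOutput.body}`,
  },
};
```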
by Lucía Maio Brioso
🧑‍💼 Who is this for?

If you’re using Notion to manage a database (like saving links, tasks, notes, or anything really), and it’s starting to get messy with duplicate entries, this workflow is for you. It’s especially useful if you want to keep things tidy without doing any manual cleanup.

🧠 What problem is this workflow solving?

Notion doesn’t have a built-in way to find or remove duplicates, so you either clean them up manually 😩 or just let them pile up. This workflow automatically finds entries that share the same property (like a URL or title) and archives the extra copies, keeping just one.

⚙️ What this workflow does

1. Pulls all pages from a Notion database.
2. Identifies duplicates based on a property you choose (see the sketch below).
3. Archives the duplicate pages (which is like soft-deleting them).
4. Keeps one version of each duplicate group.

It includes two optional triggers:
- Run it every day ⏰
- Or trigger it automatically when a new page is added to the database ⚡

🛠️ Setup

1. Connect your Notion account in n8n.
2. Select your database in the Notion nodes.
3. In the “Format items properly” node, replace "SET YOUR PROPERTY HERE" with a reference to the property you want to use for detecting duplicates. I recommend using the n8n property drag-and-drop feature.
4. Enable whichever trigger you prefer — or both.

And that’s it. It runs on its own after that.

🧩 How to customize this workflow to your needs

- Use a different property for detecting duplicates by updating the Set node.
- Want to tag duplicates instead of archiving them? Just replace the last Notion node with an update operation.
- Adjust the schedule to run it hourly, weekly, or whenever suits your setup.
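The duplicate-detection step boils down to "keep the first occurrence, archive the rest". A minimal Code-node sketch, assuming each incoming item carries a `pageId` and the comparison value in a `key` field (whatever property you set in "Format items properly"):

```javascript
// Sketch: group by the chosen property and collect the extra copies.
const seen = new Set();
const toArchive = [];

for (const item of $input.all()) {
  const { pageId, key } = item.json;
  if (seen.has(key)) {
    // Every copy after the first one gets archived.
    toArchive.push({ json: { pageId } });
  } else {
    seen.add(key); // first occurrence is kept
  }
}

return toArchive; // feed these into the Notion "archive page" node
```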
by Yang
📄 What this workflow does

This workflow turns TikTok videos into high-quality marketing insights and social-ready posts using Dumpling AI and GPT-4. It takes a TikTok URL, keyword, and product name, then automatically extracts the video transcript, analyzes the content for key marketing insights (pain points, outcomes, triggers), and rewrites it as a social media post that positions your product as the solution. Everything is logged to Google Sheets for use by your content or product team.

👤 Who is this for

- Product marketers doing UGC research
- Copywriters repurposing TikTok into content
- Founders or VAs turning viral clips into assets
- Agencies building research-based social proof

⚙️ How to set up

✅ Requirements
- **Dumpling AI**: For TikTok transcript extraction
- **OpenAI GPT-4 or GPT-4o-mini**: For analysis and rewriting
- **Google Sheets**: To log the results
- **n8n Form Trigger**: To input the TikTok URL, keyword, and product

🔧 Setup Instructions
1. Google Sheets
   - Create a sheet with the following columns: Video URL, Original Transcription, Pain points, Desired outcomes, Triggers or motivating events, Interesting direct quotes, New Script
   - Update the sheet ID and tab in the Google Sheets node
2. Credentials
   - Add your Dumpling AI key using HTTP Header Auth
   - Use GPT-4 via OpenAI credentials
   - Connect your Google Sheets using OAuth2
3. Customization (Optional)
   - You can modify the GPT-4 prompts in the LangChain nodes to change tone, output structure, or content depth

🧠 How it works

1. A form is submitted with a TikTok URL, keyword, and product.
2. Dumpling AI fetches and returns the TikTok transcript.
3. The VTT format is cleaned into plain text (see the sketch below).
4. GPT-4 (via a LangChain agent) extracts:
   - Pain points
   - Desired outcomes
   - Motivating events
   - Direct quotes
5. GPT-4 then rewrites the transcript into a compelling marketing post.
6. Results are saved to Google Sheets for further use.

🛠️ Customization ideas

- Push insights to Notion or Airtable instead of Sheets
- Use Claude or Gemini instead of GPT-4
- Automatically generate image prompts to pair with the rewritten script
- Add a notification email or Slack post when a draft is ready

This workflow gives marketers and founders a fast way to convert real social content into reusable copy, backed by authentic user voice and GPT-powered insights.
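The VTT cleanup step can be sketched as follows. It assumes the Dumpling AI response puts the raw transcript in a `transcript` field — adjust the name to the actual response shape.

```javascript
// Sketch: strip WEBVTT headers, cue numbers, timestamps, and inline tags.
const vtt = $json.transcript || "";

const text = vtt
  .split("\n")
  .filter((line) =>
    line.trim() !== "" &&
    line.trim() !== "WEBVTT" &&
    !/^\d+$/.test(line.trim()) && // cue numbers
    !line.includes("-->")         // timestamp lines like 00:00:01.000 --> 00:00:04.000
  )
  .map((line) => line.replace(/<[^>]+>/g, "")) // inline styling tags
  .join(" ")
  .replace(/\s+/g, " ")
  .trim();

return { json: { cleanTranscript: text } };
```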
by Lucas Walter
Who's it for

This workflow is perfect for directory site creators, content managers, and developers who need to automatically find and select the highest quality favicon or logo for websites they're showcasing. Instead of manually hunting down brand assets or settling for blurry default icons, this workflow does the heavy lifting by fetching multiple options and using AI to pick the best one.

How it works

The workflow takes a website URL and domain as input, then intelligently fetches favicon images from three different sources (see the sketch at the end of this section):
1. Google's Favicon API - Gets the site's actual favicon
2. Logo.dev - Provides high-quality brand logos
3. Clearbit - Alternative logo source for business websites

Once all images are collected, the workflow uses OpenAI's vision model to analyze each icon based on:
- Image quality and resolution (minimum 256x256)
- Brand authenticity (avoiding generic framework icons)
- Visual clarity without artifacts or blur
- Professional presentation suitable for directory listings

The AI assigns quality scores from 0.0 to 1.0, and the workflow automatically returns the URL of the highest-scoring favicon.

Requirements

- OpenAI API key (for image analysis)
- Logo.dev API key (free tier available)

How to set up

1. Configure API credentials:
   - Add your OpenAI API key to n8n credentials
   - Sign up for Logo.dev and add your API token
   - The Clearbit and Google APIs require no authentication
2. Test the workflow:
   - Use the pinned test data (Fyxer AI example) or replace it with your own
   - Ensure all HTTP nodes can successfully fetch images
   - Verify the AI analysis is working by checking the quality scores
3. Customize the input format:
   - Modify the workflow trigger to accept your preferred input format
   - Adjust the domain extraction logic if needed for your use case

How to customize the workflow

For different quality criteria:
- Edit the AI prompt in the "analyze_each_icon" node to emphasize different aspects (transparency, size, style preferences)

For additional favicon sources:
- Add more HTTP Request nodes pointing to other favicon/logo APIs
- Update the merge node to handle additional inputs
- Modify the final URL construction logic to handle new sources

For batch processing:
- Wrap this workflow in a loop to process multiple websites at once
- Add error handling for failed requests or AI analysis timeouts

The workflow is designed to be reliable and handles errors gracefully - if one favicon source fails, it continues with the available options and still provides the best result possible.
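For reference, the three source URLs can be built from a domain roughly like this. The Google s2 and Clearbit URL shapes are publicly documented; the Logo.dev shape (img.logo.dev with a token query parameter) should be checked against their current docs before relying on it.

```javascript
// Sketch: build the three candidate favicon/logo URLs for one domain.
const domain = $json.domain; // e.g. "fyxer.com"
const LOGO_DEV_TOKEN = "[YOUR_API_TOKEN]"; // from your Logo.dev account

const sources = [
  { source: "google",   url: `https://www.google.com/s2/favicons?domain=${domain}&sz=256` },
  { source: "logo.dev", url: `https://img.logo.dev/${domain}?token=${LOGO_DEV_TOKEN}&size=256` },
  { source: "clearbit", url: `https://logo.clearbit.com/${domain}` },
];

// Each entry feeds one HTTP Request branch; the merged images then go to
// the vision-model scoring step.
return sources.map((s) => ({ json: s }));
```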
by Oneclick AI Squad
This n8n workflow automates the process of scraping LinkedIn profiles using the Apify platform and organizing the extracted data into Google Sheets for easy analysis and follow-up.

Use Cases

- **Lead Generation**: Extract contact information and professional details from LinkedIn profiles
- **Recruitment**: Gather candidate information for talent acquisition
- **Market Research**: Analyze professional networks and industry connections
- **Sales Prospecting**: Build targeted prospect lists with detailed professional information

How It Works

1. Workflow Initialization & Input
   - **Webhook Start Scraper**: Triggers the entire scraping workflow
   - **Read LinkedIn URLs**: Retrieves LinkedIn profile URLs from Google Sheets
   - **Schedule Scraper Trigger**: Sets up automated scheduling for regular scraping
2. Data Processing & Extraction
   - **Data Formatting**: Prepares and structures the LinkedIn URLs for processing
   - **Fetch Profile Data**: Makes HTTP requests to the Apify API with the profile URLs (see the sketch below)
   - **Run Scraper Actor**: Executes the Apify LinkedIn scraper actor
   - **Get Scraped Results**: Retrieves the extracted profile data from Apify
3. Data Storage & Completion
   - **Save to Google Sheets**: Stores the scraped profile data in an organized spreadsheet format
   - **Update Progress Tracker**: Updates workflow status and progress tracking
   - **Process Complete Wait**: Ensures all operations finish before the final steps
   - **Send Success Notification**: Alerts users when scraping is successfully completed

Requirements

Apify Account
- Active Apify account with sufficient credits
- API token for authentication
- Access to the LinkedIn Profile Scraper actor

Google Sheets
- Google account with Sheets access
- Properly formatted input sheet with LinkedIn URLs
- Credentials configured in n8n

n8n Setup
- HTTP Request node credentials for Apify
- Google Sheets node credentials
- Webhook endpoint configured

How to Use

Step 1: Prepare Your Data
- Create a Google Sheet with LinkedIn profile URLs
- Ensure the sheet has a column named 'linkedin_url'
- Add any additional columns for metadata (name, company, etc.)

Step 2: Configure Credentials
- Set up Apify API credentials in n8n
- Configure Google Sheets authentication
- Update the webhook endpoint URL

Step 3: Customize Settings
- Adjust scraping parameters in the Apify node
- Modify the data fields to extract based on your needs
- Set up notification preferences

Step 4: Execute Workflow
- Trigger via webhook or manual execution
- Monitor progress through the workflow
- Check Google Sheets for the scraped data
- Review completion notifications

Good to Know

- **Rate Limits**: LinkedIn scraping is subject to rate limits. The workflow includes delays to respect these limits.
- **Data Quality**: Results depend on profile visibility and LinkedIn's anti-scraping measures.
- **Costs**: Apify charges based on compute units used. Monitor your usage to control costs.
- **Compliance**: Ensure your scraping activities comply with LinkedIn's Terms of Service and applicable laws.
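The "Run Scraper Actor" and "Get Scraped Results" steps map onto Apify's public REST API roughly as follows. The actor ID and the `profileUrls` input field are placeholders — every actor defines its own input schema, so substitute the LinkedIn Profile Scraper actor you actually use.

```javascript
// Sketch: start an Apify actor run and fetch its dataset items.
const APIFY_TOKEN = process.env.APIFY_TOKEN;
const ACTOR_ID = "someuser~linkedin-profile-scraper"; // hypothetical actor ID

// Start a run and wait (up to 300s) for it to finish — fine for small batches.
const run = await fetch(
  `https://api.apify.com/v2/acts/${ACTOR_ID}/runs?token=${APIFY_TOKEN}&waitForFinish=300`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ profileUrls: $json.linkedin_urls }), // input schema varies per actor
  }
).then((r) => r.json());

// Fetch the scraped profiles from the run's default dataset.
const items = await fetch(
  `https://api.apify.com/v2/datasets/${run.data.defaultDatasetId}/items?token=${APIFY_TOKEN}`
).then((r) => r.json());

return items.map((profile) => ({ json: profile }));
```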
Customizing This Workflow

Enhanced Data Processing
- Add data enrichment steps to append additional information
- Implement duplicate detection and merge logic
- Create data validation rules for quality control

Advanced Notifications
- Set up Slack or email alerts for different scenarios
- Create detailed reports with scraping statistics
- Implement error recovery mechanisms

Integration Options
- Connect to CRM systems for automatic lead creation
- Integrate with marketing automation platforms
- Export data to analytics tools for further analysis

Troubleshooting

Common Issues
- **Apify Actor Failures**: Check API limits and actor status
- **Google Sheets Errors**: Verify permissions and sheet structure
- **Rate Limiting**: Implement longer delays between requests
- **Data Quality Issues**: Review scraping parameters and target profiles

Best Practices
- Test with small batches before scaling up
- Monitor Apify credit usage regularly
- Keep backup copies of your data
- Regularly validate the accuracy of scraped information