by Mark Shcherbakov
**Video Guide**

I prepared a detailed guide that illustrates the entire process of building an AI agent using Supabase and Google Drive within n8n workflows (YouTube link).

**Who is this for?**

This workflow is designed for developers, data scientists, and business users who want to automate document management and enable AI-powered interactions over their stored files. It is especially useful when you need to process, analyze, and retrieve information from uploaded documents quickly.

**What problem does this workflow solve?**

Managing files across multiple platforms often involves tedious manual processes. This workflow automates file handling, making it easier to upload, parse, and interact with documents through an AI agent. It reduces redundancy and improves the efficiency of data retrieval and management tasks.

**What this workflow does**

This workflow integrates Supabase storage with Google Drive and employs an AI agent to manage files effectively. The agent can:

- Upload files to Supabase storage and trigger processes based on file changes in Google Drive.
- Retrieve and parse documents, converting them into a structured format for easy querying.
- Answer user queries based on the saved document data.

Main stages:

- **Data Collection:** The workflow first gathers files from Supabase storage, ensuring no duplicates are processed in the 'files' table (see the sketch after this description).
- **File Handling:** Files are parsed according to their type, leveraging LlamaParse for effective data transformation.
- **Google Drive Integration:** The workflow monitors a designated Google Drive folder to upload files automatically and refresh document records in the database with new data.
- **AI Interaction:** A webhook lets the AI agent converse with users, answering queries from the stored document knowledge.

**Setup**

1. **Supabase Storage Setup:** Create a private bucket in Supabase storage, modifying the default name in the URL. Upload your files using the provided upload options.
2. **Database Configuration:** Create the 'file' and 'document' tables in Supabase with the necessary fields, and run any required SQL queries to enable vector matching features.
3. **n8n Workflow Logic:** Start with a manual trigger for the initial workflow segment, or consider alternative triggers such as webhooks. Replace all relevant credentials across nodes with your own to ensure seamless operation.
4. **File Processing and Google Drive Monitoring:** Set up file processing to download and parse files based on their types, and create triggers that monitor the designated Google Drive folder for uploads and updates.
5. **Integrate AI Agent:** Configure the webhook for the AI agent to accept chat inputs while maintaining session context, and use PostgreSQL to store user interactions and manage conversation state.
6. **Testing and Adjustments:** Once everything is set up, run tests with the AI agent to validate its responses against the documents in your database, then fine-tune the workflow and AI model as needed.
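The duplicate check in the Data Collection stage can be done with a small Code node. Below is a minimal sketch, assuming hypothetical node names ("List Storage Files" and "Get Files Table") and a `name` column in the 'files' table; adjust both to your actual nodes and schema.

```javascript
// Hypothetical n8n Code node: keep only storage objects that are not yet
// recorded in the 'files' table. Assumes the previous two nodes return
// (1) the Supabase storage listing and (2) the existing 'files' rows,
// and that both expose a "name" field.
const storageObjects = $('List Storage Files').all().map(i => i.json);
const knownRows = $('Get Files Table').all().map(i => i.json);

const knownNames = new Set(knownRows.map(row => row.name));

// Only pass through files that have not been processed before.
const newFiles = storageObjects.filter(obj => !knownNames.has(obj.name));

return newFiles.map(obj => ({ json: obj }));
```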
by Darryn Balanco
This workflow automates the management of DigitalOcean Droplet snapshots by listing all droplets, filtering based on the number of snapshots, and deleting excess snapshots before creating new ones. It keeps your droplet snapshots organized and within a manageable limit, preventing unnecessary storage costs from an excess of snapshots.

**Who is this for?**

This workflow is for users managing DigitalOcean Droplets who want to automate snapshot creation and cleanup to save on storage costs and keep resource management efficient. It is useful for DevOps teams, cloud administrators, or any developer building on DigitalOcean infrastructure.

**What problem is this workflow solving?**

When managing multiple DigitalOcean Droplets, snapshots can quickly accumulate, taking up space and increasing storage costs. Manually deleting and creating snapshots is time-consuming and inefficient. This workflow automates the snapshot management process, ensuring that no more than a defined number of snapshots are kept per droplet.

**What this workflow does**

- **Runs every 48 hours:** The workflow is triggered by a cron node that runs every 48 hours, ensuring timely snapshot management.
- **List all droplets:** The workflow retrieves all droplets in the DigitalOcean account.
- **Retrieve snapshots:** For each droplet, the workflow retrieves a list of existing snapshots.
- **Filter snapshots:** If the number of snapshots exceeds 4, the workflow filters for snapshots that need to be deleted (see the sketch after this description).
- **Delete snapshots:** Excess snapshots are automatically deleted based on the filter criteria.
- **Create new snapshot:** After cleanup, the workflow creates a new snapshot for each droplet, keeping backups up to date.

**Setup**

- **DigitalOcean API key:** Configure the HTTP Request nodes with your DigitalOcean API key. This key is required for authenticating requests to list droplets, retrieve snapshots, delete snapshots, and create new ones.
- **Snapshot threshold:** By default, the workflow keeps no more than 4 snapshots per droplet. Adjust the filter node conditions to change this.
- **Execution frequency:** The cron node is set to run every 48 hours, but you can adjust the timing to suit your needs.

**How to customize this workflow**

- **Adjust snapshot limit:** Change the value in the filter node to keep more or fewer snapshots.
- **Modify run frequency:** The workflow runs every 48 hours by default; change the frequency in the cron node to run more or less often.
- **Enhance with notifications:** Add a notification node (e.g., Slack or email) to alert you when snapshots are deleted or created.

**Workflow Summary**

This workflow automates DigitalOcean Droplet snapshot management by keeping the number of snapshots under a defined limit, deleting the oldest ones, and creating new snapshots at regular intervals.
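For reference, the filter step can be expressed as a small Code node. This is a sketch under the assumption that each incoming item is a DigitalOcean snapshot object with a `created_at` timestamp; the template itself uses a filter node, so treat this as an equivalent illustration rather than the exact implementation.

```javascript
// Hypothetical Code-node equivalent of the filter step: given the snapshots
// returned for one droplet, keep only the ones that exceed the retention
// limit so a later node can delete them.
const LIMIT = 4; // matches the default threshold described above

const snapshots = $input.all().map(item => item.json);

// Oldest first, so the excess at the front of the list is what gets deleted.
snapshots.sort((a, b) => new Date(a.created_at) - new Date(b.created_at));

const excess = snapshots.length > LIMIT
  ? snapshots.slice(0, snapshots.length - LIMIT)
  : [];

return excess.map(snap => ({ json: snap }));
```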
by Jez
**🎯 Overview**

This n8n workflow automates the ingestion of documents from multiple sources (Google Drive and web forms) into a Qdrant vector database for semantic search. It handles batch processing, document analysis, embedding generation, and vector storage, all while maintaining proper error handling and execution tracking.

**🚀 Key Features**

- **Dual Input Sources:** Accepts files from both Google Drive folders and web form uploads
- **Batch Processing:** Processes files one at a time to prevent memory issues and ensure reliability
- **AI-Powered Analysis:** Uses Google Gemini to extract metadata and understand document context
- **Vector Embeddings:** Generates OpenAI embeddings for semantic search
- **Automated Cleanup:** Optionally deletes processed files from Google Drive (configurable)
- **Loop Processing:** Handles multiple files efficiently with Split In Batches nodes
- **Interactive Chat Interface:** Built-in chatbot for testing semantic search queries against indexed documents

**📋 Use Cases**

- **Knowledge Base Creation:** Build searchable document repositories for organizations
- **Document Compliance:** Process and index legal/regulatory documents (such as Fair Work documents)
- **Content Management:** Automatically categorize and store uploaded documents
- **Research Libraries:** Create semantic search capabilities for research papers or reports
- **Customer Support:** Enable instant answers to policy and documentation questions via the chat interface

**🔧 Workflow Components**

Input methods:

- **Google Drive Integration:** Monitors a specific folder for new files, processes existing files in batch mode, and supports automatic file conversion to PDF.
- **Web Form Upload:** A public-facing form for document submission that accepts PDF, DOCX, DOC, and CSV files and handles multiple file uploads in a single submission.

Processing pipeline:

1. **File Splitting:** Separates multiple uploads into individual items (see the sketch after this description)
2. **Document Analysis:** Google Gemini extracts document understanding
3. **Text Extraction:** Converts documents to plain text
4. **Embedding Generation:** Creates vector embeddings via OpenAI
5. **Vector Storage:** Inserts documents with embeddings into Qdrant
6. **Loop Control:** Manages batch processing with proper state handling

Key nodes:

- **Split In Batches:** Processes files one at a time with reset: false to maintain state
- **Google Gemini:** Analyzes documents for context and metadata
- **Langchain Vector Store:** Handles Qdrant insertion with embeddings
- **HTTP Request:** Direct API calls for custom operations
- **Chat Interface:** Interactive chatbot for testing vector search queries

**🛠️ Technical Implementation**

Batch processing logic — the workflow uses a looping mechanism:

- Split In Batches with batchSize: 1 ensures single-file processing
- reset: false maintains loop state across iterations
- The loop continues until all files are processed

Error handling:

- All nodes include continueOnFail options where appropriate
- Execution logs are preserved for debugging
- File deletion only occurs after successful insertion

Data flow:

- Form Upload → Split Files → Batch Loop → Analyze → Insert → Loop Back
- Google Drive → List Files → Batch Loop → Download → Analyze → Insert → Delete → Loop Back

**📊 Performance Considerations**

- **Processing Time:** ~20-30 seconds per file
- **Batch Size:** Set to 1 for reliability (configurable)
- **Memory Usage:** Optimized for files under 10MB
- **API Costs:** Uses OpenAI embeddings (text-embedding-3-large model)

**🔐 Required Credentials**

- Google Drive OAuth2: for file access and management
- OpenAI API: for embedding generation
- Qdrant API: for vector database operations
- Google Gemini API: for document analysis

**💡 Implementation Tips**

- **Start Small:** Test with a few files before processing large batches
- **Monitor Costs:** Track OpenAI API usage for embedding generation
- **Backup First:** Consider archiving instead of deleting processed files
- **Check Collections:** Ensure the Qdrant collection exists before running

**🎨 Customization Options**

- **Change Embedding Model:** Switch to text-embedding-3-small for cost savings
- **Adjust Chunk Size:** Modify text splitting parameters for different document types
- **Add Metadata:** Extend the Gemini prompt to extract specific fields
- **Archive vs Delete:** Replace the delete operation with a move to a "processed" folder

**📈 Real-World Application**

This workflow was developed to process business documents and legal agreements, making them searchable through semantic queries. It is particularly useful for organizations dealing with large volumes of regulatory documentation that needs to be quickly accessible and searchable.

Chat interface testing — the integrated chatbot lets users:

- Query processed documents using natural language
- Test semantic search capabilities in real time
- Verify document indexing and retrieval accuracy
- Ask questions about specific topics (e.g., "What are the pay rates for junior employees?")
- Get instant AI-powered responses based on the indexed content

**🌟 Benefits**

- **Automation:** Eliminates manual document processing
- **Scalability:** Handles individual files or bulk uploads
- **Intelligence:** AI-powered understanding of document content
- **Flexibility:** Multiple input sources and processing options
- **Reliability:** Robust error handling and state management

**👨‍💻 About the Creator**

Jeremy Dawes is the CEO of Jezweb, specializing in AI and automation deployment solutions. This workflow represents practical, production-ready automation that solves real business challenges while maintaining simplicity and reliability.

**📝 Notes**

- The workflow handles the n8n form upload pattern where multiple files create a single item with multiple binary properties (Files_0, Files_1, etc.)
- The Split In Batches pattern with reset: false is crucial for proper loop execution
- Direct API integration provides more control than pure Langchain implementations

**🔗 Resources**

- Qdrant Documentation
- OpenAI Embeddings
- n8n Documentation
- Jezweb - AI & Automation Solutions

This workflow demonstrates practical automation that bridges document management with modern AI capabilities, creating intelligent document processing systems that scale with your needs.
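The file-splitting step mentioned in the notes (one form item carrying Files_0, Files_1, … binary properties) can be handled with a short Code node. This is a minimal sketch; the `Files_` prefix and the output property names are assumptions you should check against your own form node.

```javascript
// Hypothetical Code node for the "File Splitting" step: a form submission
// arrives as ONE item whose binary data holds Files_0, Files_1, ... -- this
// splits it into one item per file so the Split In Batches loop can process
// them individually.
const item = $input.first();
const out = [];

for (const [key, data] of Object.entries(item.binary ?? {})) {
  if (!key.startsWith('Files_')) continue; // only the uploaded files
  out.push({
    json: { fileName: data.fileName, mimeType: data.mimeType, sourceKey: key },
    binary: { data }, // downstream nodes read the file from the "data" property
  });
}

return out;
```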
by Andrey
⚠️ DISCLAIMER: This workflow uses the HDW LinkedIn community node, which is only available on self-hosted n8n instances. It will not work on n8n.cloud.

**Overview**

This n8n workflow automates the enrichment of CRM contact data with professional insights from LinkedIn profiles. It integrates with both Pipedrive and HubSpot, finding LinkedIn profiles that match your contacts and updating your CRM with valuable information about their professional background and recent activities.

**Key Features**

- **Multi-CRM Support:** Works with both Pipedrive and HubSpot
- **AI-Powered Data Enrichment:** Uses an advanced AI agent to analyze and summarize professional information
- **Automated Triggers:** Activates when new contacts are added or when enrichment is requested
- **Comprehensive Profile Analysis:** Captures LinkedIn profile summaries and post activity

**How It Works**

Triggers — the workflow activates when:

- A new contact is created in the CRM
- A contact is updated in the CRM with an enrichment flag

LinkedIn data collection process:

1. **Email Lookup:** First tries to find the LinkedIn profile using the contact's email
2. **Advanced Search:** If the email lookup fails, uses name and company details to find potential matches (see the sketch after this description)
3. **Profile Analysis:** Collects comprehensive profile information
4. **Post Analysis:** Gathers and analyzes the contact's recent LinkedIn activity

CRM updates — the workflow writes the following to your CRM:

- LinkedIn profile URL
- Professional summary (skills, experience, background)
- Analysis of recent LinkedIn posts and activity

**Setup Instructions**

Requirements:

- Self-hosted n8n instance with the HDW LinkedIn community node installed
- API access to OpenAI (for GPT-4o)
- Pipedrive and/or HubSpot account
- HDW API key (https://app.horizondatawave.ai)

Installation steps:

1. **Install the HDW LinkedIn node:** npm install n8n-nodes-hdw (detailed instructions: https://www.npmjs.com/package/n8n-nodes-hdw)
2. **Configure credentials:**
   - OpenAI: add your OpenAI API key
   - Pipedrive: connect your Pipedrive account (if using)
   - HubSpot: connect your HubSpot account (if using)
   - HDW LinkedIn: add your API key from https://app.horizondatawave.ai
3. **CRM custom fields setup:**
   - For Pipedrive: go to Settings → Data Fields → Contact Fields → + Add Field and create the following custom fields: LinkedIn Profile (Large text), Profile Summary (Large text), LinkedIn Posts Summary (Large text), Need Enrichment (Single option, Yes/No). Detailed instructions: https://support.pipedrive.com/en/article/custom-fields
   - For HubSpot: go to Settings → Properties → Create property and create the following properties for the Contact object: linkedin_url (Single-line text), profile_summary (Multi-line text), linkedin_posts_summary (Multi-line text), need_enrichment (Checkbox, Boolean). Detailed instructions: https://knowledge.hubspot.com/properties/create-and-edit-properties
4. **Import the workflow:** Import the "HDW_CRM_Enrichment.json" file into your n8n instance
5. **Activate webhooks:** Enable the webhook triggers for your CRM so the workflow activates correctly

**Customization Options**

- **AI agent prompts:** Modify the system prompts in the "Data Enrichment AI Agent" nodes to change the focus of profile analysis, adjust the tone and detail level of summaries, or customize what information is extracted from posts.
- **CRM field mapping:** The workflow is pre-configured to update specific custom fields in Pipedrive and HubSpot. Update the field/property mappings in the "Update data in Pipedrive" nodes and the "Update data in HubSpot" node.

**Troubleshooting**

Common issues:

- **LinkedIn profile not found:** Check whether the contact's email is their work email; consider adjusting the search parameters
- **Webhook not triggering:** Verify the webhook configuration in your CRM
- **Missing custom fields:** Ensure all required custom fields are created in your CRM with the correct names

Rate limits:

- Be aware of LinkedIn API rate limits (managed by the HDW LinkedIn node)
- Consider adding delays when processing large batches of contacts

**Best Practices**

- Use enrichment flags to selectively update contacts rather than enriching all contacts
- Review and clean contact data in your CRM before enrichment
- Periodically review the AI-generated summaries to ensure quality and relevance
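The fallback search in step 2 can be prepared with a small Code node that builds a query string from the contact record. This is only a sketch; the input field names (`first_name`, `last_name`, `org_name`) are assumptions and should be mapped to whatever your Pipedrive or HubSpot trigger actually outputs.

```javascript
// Hypothetical Code node: when the email lookup returns no LinkedIn profile,
// build a fallback search query from the contact's name and company.
const contact = $input.first().json;

const parts = [contact.first_name, contact.last_name, contact.org_name]
  .filter(Boolean)
  .map(s => String(s).trim());

return [{
  json: {
    needsFallbackSearch: parts.length > 0,
    searchQuery: parts.join(' '), // e.g. "Jane Doe Acme GmbH"
  },
}];
```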
by Fahmi Oktafian
This n8n workflow is a Telegram bot that allows users to either:

- Generate AI images using the Pollinations API, or
- Generate blog articles using Gemini AI

Users simply type "image your prompt" or "blog your title", and the bot responds with either an AI-generated image or an article.

**Who's it for**

This template is ideal for:

- Content creators and marketers who want to generate visual and written content quickly
- Telegram bot developers looking for real-world AI integration
- Educators or students automating content workflows
- Anyone managing content pipelines using Google Sheets

**What it does / How it works**

Telegram interaction:

- **Trigger Telegram Message:** Listens for new messages or button clicks via Telegram
- **Classify Telegram Input:** JavaScript logic that classifies input as /start, /help, normal text, or a callback (see the sketch after this description)
- **Switch Input Type:** Directs the flow based on the classification

Menu & help:

- **Send Main Menu to User:** Shows "Generate Image", "Blog Article", and "Help" options
- **Switch Callback Selection:** Routes based on the button pressed (image, blog, or help)
- **Send Help Instructions:** Sends Markdown instructions on how to use the bot

Input validation:

- **Validate Command Format:** Ensures input starts with "image" or "blog"
- **Notify Invalid Input Format:** If validation fails, informs the user of the correct format

Image generator:

- **Prompt User for Image Description** → when the user clicks Generate Image
- **Detect Text-Based Input Type** → detects whether the text is an image or blog request
- **Switch Text Command Type** → directs whether to generate an image or an article
- **Show Typing for Image Generation** → sends an "uploading photo..." typing status
- **Build Image Generation URL** → constructs the Pollinations API image URL from the prompt
- **Download AI Image** → makes an HTTP request to get the image
- **Send Image Result to Telegram** → sends the image to the user via Telegram
- **Log Image Prompt to Google Sheets** → logs the prompt, image URL, date, and user ID
- **Upload Image to Google Drive** → saves the image to a Google Drive folder

Blog article generator:

- **Prompt User for Blog Title** → when the user clicks Blog Article
- **Store Blog Prompt** → saves the prompt for later use
- **Log Blog Prompt to Google Sheets** → writes the title and user ID to Google Sheets
- **Send Article Style Options** → offers Formal, Casual, or News style
- **Store Selected Article Style** → updates the row with the chosen style in Google Sheets
- **Fetch Last User Prompt** → finds the latest prompt submitted by this user
- **Extract Last Blog Prompt** → extracts the row for use in the AI request
- **Gemini Chat Wrapper** → handles input into the LangChain node for AI processing
- **Generate Article with Gemini** → calls Gemini to create a 3-paragraph blog post
- **Parse Gemini Response** → parses the JSON string to extract the title and content
- **Send Article to Telegram** → sends the blog article back to the user
- **Log Final Article to Google Sheets** → updates the row with the final content and timestamp

**Requirements**

- Telegram bot (via @BotFather)
- Pollinations API (free and public endpoint)
- Google Sheets & Drive (OAuth credential setup in n8n)
- Google Gemini / PaLM API key via LangChain
- Self-hosted or cloud n8n setup

**Setup Instructions**

1. Clone the workflow and import it into your n8n instance
2. Set credentials: Telegram API, Google Sheets OAuth, Google Drive OAuth, Gemini (via LangChain)
3. Replace: the Sheet ID with your own Google Sheet, the folder ID on Google Drive, and chat_id placeholders if needed (use expressions instead)
4. Deploy and send /start in your Telegram bot

**🔧 Customization Tips**

- Edit the Gemini prompt to adjust article length or tone
- Add extra style buttons such as "SEO", "Story", or "Academic"
- Add image post-processing (e.g., compression, renaming)
- Add error-catching logic (e.g., if the Pollinations image request fails)
- Store images with filenames based on timestamp/user

**Security Considerations**

- Use n8n credentials for all tokens (Telegram, Gemini, Sheets, Drive)
- Never hardcode your token inside HTTP nodes
- Do not expose real Google Sheet or Drive links in a shared version
- Use a Set node to collect all editable variables (such as folder ID and sheet name)
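For reference, the "Classify Telegram Input" step can look roughly like the Code node below. It is a sketch, not the template's exact code: the output field names (`type`, `prompt`, `chatId`) are assumptions used by the downstream Switch nodes.

```javascript
// Hypothetical version of the "Classify Telegram Input" Code node: decide
// whether the update is /start, /help, a callback button press, an image
// request, a blog request, or invalid input.
const update = $input.first().json;

let type = 'invalid';
let prompt = '';

if (update.callback_query) {
  type = 'callback';
  prompt = update.callback_query.data; // e.g. "image", "blog", "help"
} else if (update.message?.text) {
  const text = update.message.text.trim();
  if (text === '/start') type = 'start';
  else if (text === '/help') type = 'help';
  else if (/^image\s+/i.test(text)) { type = 'image'; prompt = text.replace(/^image\s+/i, ''); }
  else if (/^blog\s+/i.test(text)) { type = 'blog'; prompt = text.replace(/^blog\s+/i, ''); }
}

const chatId = update.message?.chat?.id ?? update.callback_query?.message?.chat?.id;

return [{ json: { type, prompt, chatId } }];
```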
by Harsh Maniya
🤖 Universal E-Commerce AI Assistant (Shopify, WooCommerce & RAG)

This powerful n8n workflow deploys a sophisticated, multi-talented AI chatbot designed to streamline your e-commerce and customer support operations. The AI assistant intelligently understands user queries and routes them to the correct specialized agent, whether for Shopify, WooCommerce, or general knowledge questions answered by a Retrieval-Augmented Generation (RAG) system.

This template automates responses to a wide range of inquiries, from checking Shopify order statuses with GraphQL to fetching product lists from WooCommerce, and even answering general questions by looking up information in a Pinecone vector database.

**How It Works ⚙️**

The workflow operates in a series of logical steps, starting from the moment a user sends a message.

1. **💬 Chat Trigger:** The workflow activates when a user sends a message in the n8n chat interface. It captures the user's input and a unique session ID to track the conversation.
2. **🧠 Intelligent Routing:** The user's query is first sent to a Router Agent powered by GPT-4o-mini. This agent's sole purpose is to classify the intent of the message and output one of three keywords: SHOPIFY, WOOCOMMERCE, or None of them.
3. **🔀 Conditional Branching:** Based on the Router's output, a series of IF nodes direct the conversation down one of three paths: General Queries, Shopify, or WooCommerce.
4. **📚 General Queries (RAG):** If the query is not about e-commerce, it is handled by a RAG agent.
   - Embedding: The user's question is converted into a vector embedding using AWS Bedrock.
   - Retrieval: The workflow searches a Pinecone Vector Store to find the most relevant information from your knowledge base.
   - Generation: A GPT-4o-mini agent receives the context from Pinecone and generates a comprehensive, helpful answer.
5. **🛍️ E-Commerce Specialists:** If the query is about Shopify or WooCommerce, it is passed to a dedicated agent.
   - Shopify Agent: Uses Google Gemini and a suite of tools to manage Shopify tasks. It can Get Order info, Fetch All Products, or run complex queries using the powerful GraphQL tool (see the sketch after this section).
   - WooCommerce Agent: Also uses Google Gemini and is equipped with tools to Fetch Order Details and Fetch All Products from a WooCommerce store.
6. **🗣️ Conversation Memory:** Each agent (Router, General, Shopify, WooCommerce) is connected to its own Memory node, so the chatbot remembers previous parts of the conversation for a more natural, context-aware interaction.
7. **🏁 Merge & Respond:** All three paths converge at a final Merge node. No matter which agent handled the request, the final answer is streamlined into a single output and sent back to the user in the chat.

**Nodes Used 🔗**

- Triggers:
  - Chat Trigger: starts the workflow when a chat message is received.
- AI & Agents:
  - AI Agent: four separate agents for Routing, Shopify, WooCommerce, and General Queries.
  - OpenAI Chat Model: GPT-4o-mini for the Router and General Queries agents.
  - Google Gemini Chat Model: Google Gemini for the Shopify and WooCommerce agents.
- Tools & Data:
  - Shopify Tool: gets products and order information from Shopify.
  - WooCommerce Tool: gets products and order information from WooCommerce.
  - GraphQL Tool: for advanced, custom queries to the Shopify API.
  - Pinecone Vector Store: retrieves context for the RAG agent.
  - AWS Bedrock Embeddings: creates vector embeddings for Pinecone.
- Logic & Memory:
  - IF Node: conditionally routes the workflow.
  - Merge Node: consolidates the different branches before ending.
  - Window Buffer Memory: four nodes that provide conversational memory to each agent.

**Setup Guide 🛠️**

To use this workflow, configure the following nodes with your own credentials and settings.

1. AI model credentials
   - OpenAI: create an API key in your OpenAI Platform dashboard and add the credential to the Router Model and GPT-4o-mini nodes.
   - Google Gemini: create an API key in your Google AI Studio dashboard and add the credential to the Shopify Chat Model and WooCommerce Chat Model nodes.
2. E-commerce platform credentials
   - Shopify: you will need a Shopify Access Token; follow the n8n documentation to generate one. Add the credential to the Fetch All Products and Get Order info nodes.
   - WooCommerce: create API credentials from your WordPress dashboard and add the credential to the Fetch All Products2 and Fetch Order Details nodes.
3. RAG system credentials (Pinecone & AWS)
   - Pinecone: sign up for a Pinecone account, create an API key, and add your Pinecone credentials in n8n. In the Pinecone Vector Store node, set pineconeIndex to the name of your index. You must have a pre-existing index with data for the RAG to work.
   - AWS: create an AWS account and an IAM user with programmatic access to Amazon Bedrock, add your AWS credentials in n8n, and select them in the AWS Bedrock Embeddings node.
4. GraphQL node configuration
   - In the GraphQL node, update the endpoint URL: replace the placeholder https://{subdomain}.myshopify.com/admin/api/2025-04/graphql.json with your own Shopify store's GraphQL API endpoint.
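To give a sense of what the GraphQL tool can send for an order-status question, here is a minimal sketch of a query built as a plain JavaScript string (e.g., in a Code node or as the tool's query input). The order name and the selected fields are illustrative assumptions; verify them against your Shopify Admin API version.

```javascript
// Hypothetical order-status query for the Shopify Admin GraphQL API.
const orderName = '#1001'; // example order the customer asked about

const query = `
  query {
    orders(first: 1, query: "name:${orderName}") {
      edges {
        node {
          id
          name
          displayFulfillmentStatus
          totalPriceSet { shopMoney { amount currencyCode } }
        }
      }
    }
  }
`;

return [{ json: { query } }];
```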
by DUBCOM
Workflow: Snapshot Contabo

**How it Works**

This workflow automates daily backups (snapshots) of VPS instances hosted on Contabo. Each day at midnight, it checks for existing snapshots and ensures that only the latest backups are retained by removing older ones. It provides a seamless, hands-off backup process to keep your data secure.

**Setup Steps**

Setting up this workflow is quick, typically taking about 10-15 minutes. The essential part of the setup is providing the necessary credentials, which you can retrieve from your Contabo control panel.

1. **Import the workflow:** Download and upload the workflow JSON into n8n.
2. **Configure credentials:** Add CLIENT_ID, CLIENT_SECRET, API_USER, and API_PASSWORD in the credential node.
3. **Activate the workflow:** Enable it to run automatically at midnight every day.

**Flow Overview**

- **Schedule Trigger (00:00 daily):** Automatically initiates the workflow.
- **Formatted Date:** Prepares a timestamp for naming the snapshot (see the sketch after this description).
- **List Snapshots:** Checks whether an existing snapshot is available for each VPS.
- **Conditional Logic:**
  - No snapshot? Proceeds to create a new one.
  - Snapshot found? Deletes the old snapshot before creating a new one.

**Key Points**

- **Snapshot retention:** Old snapshots are deleted so that only the latest backups are stored.
- **Unique identifiers:** UUIDs are used to track and guarantee unique operations.
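The "Formatted Date" step can be reproduced with a short Code node like the one below. The snapshot name pattern is an assumption for illustration; adapt it to whatever convention you want to see in the Contabo panel.

```javascript
// Hypothetical stand-in for the "Formatted Date" node: build a timestamp and
// a snapshot name for today's run.
const now = new Date();
const pad = n => String(n).padStart(2, '0');

const stamp = `${now.getFullYear()}-${pad(now.getMonth() + 1)}-${pad(now.getDate())}`;

return [{
  json: {
    snapshotName: `auto-backup-${stamp}`, // e.g. "auto-backup-2024-01-31"
    createdAt: now.toISOString(),
  },
}];
```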
by Greypillar
**How it works**

- RSS feed monitors your blog for new posts automatically
- Extracts and cleans the full article content from the blog post
- AI Chain (GPT-4o) transforms the content into 5 platform-optimized formats (LinkedIn, Twitter, Instagram, Email, Video)
- Unsplash API suggests relevant images for each content piece
- Slack notification alerts the content team with a preview of all formats
- Airtable logs everything for content calendar tracking
- Optional auto-posting to LinkedIn and Twitter (disabled by default)
- Structured output parser ensures all 5 formats are generated correctly with proper character limits (see the sketch after this description)

**Set up steps**

- Time to set up: 10-15 minutes
- Replace the RSS feed URL with your blog's feed (common formats: /feed, /rss, /feed.xml)
- Get the Slack channel ID for content team notifications
- Create an Airtable base with 14 columns (Original_Title, Original_URL, Published_Date, LinkedIn_Post, LinkedIn_Hashtags, Twitter_Thread, Twitter_Hashtags, Instagram_Caption, Instagram_Hashtags, Email_Subject, Email_Body, Video_Script, Suggested_Images, Status)
- Add credentials: OpenAI (GPT-4o), Unsplash API, Slack OAuth2, Airtable Token
- Replace placeholder IDs in the Slack and Airtable nodes
- Optional: enable the LinkedIn/Twitter auto-posting nodes and add OAuth2 credentials

**What you'll need**

- OpenAI API - GPT-4o access for AI content repurposing
- Unsplash API - free tier available for image suggestions
- Slack - standard workspace for team notifications
- Airtable - free plan works for content tracking
- Blog with RSS feed - WordPress, Ghost, Medium, and Webflow are all supported
- LinkedIn/Twitter OAuth2 (optional) - for the auto-posting feature

**Who this is for**

Content creators, marketing teams, and agencies that want to maximize content ROI by automatically repurposing blog posts into platform-specific content. Perfect for B2B companies publishing regular blog content who need a consistent multi-platform presence without manual reformatting.
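If you want an extra safety net after the structured output parser, a small Code node can verify that all five fields are present and within a rough character budget before anything is written to Airtable. The limits below are illustrative assumptions, not the values enforced by the template itself.

```javascript
// Hypothetical validation Code node run after the output parser.
const post = $input.first().json;

const limits = {
  LinkedIn_Post: 3000,
  Twitter_Thread: 280 * 10,   // roughly 10 tweets
  Instagram_Caption: 2200,
  Email_Body: 10000,
  Video_Script: 5000,
};

const problems = Object.entries(limits)
  .filter(([field, max]) => !post[field] || post[field].length > max)
  .map(([field]) => field);

return [{ json: { ...post, valid: problems.length === 0, problems } }];
```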
by Didac Fernandez
Nova AI Content Marketing Agent - LinkedIn & Facebook Automation

This n8n template demonstrates how to create a complete AI-powered social media content creation and scheduling system that generates platform-optimized posts for LinkedIn and Facebook with custom images and human approval workflows.

**Possible use cases**

- Generate a full week of social media content from a single brand brief
- Create platform-specific content that maintains brand voice consistency
- Automate image generation with AI while maintaining quality control
- Schedule approved content across multiple social platforms
- Track and organize all content in centralized spreadsheets

**How it works**

1. The automation starts with a form submission collecting 10 brand variables (name, industry, demographics, etc.)
2. The Nova AI Agent analyzes the brand information and generates 6 distinct social media posts (3 LinkedIn professional, 3 Facebook community-focused)
3. Content is split by platform and routed to separate image generation workflows
4. Google Imagen 4 Ultra creates custom visuals for each post with platform-specific aspect ratios (see the sketch after this description)
5. Each generated image is sent to Slack for human approval via interactive forms
6. If feedback is provided, NanoBanana AI edits the image based on natural language instructions
7. Approved images are uploaded to Google Drive with organized naming conventions
8. All content data is logged to Google Sheets with image URLs and scheduling information
9. Final posts are scheduled via the Late API to the respective social platforms
10. The workflow loops through each post individually for quality control

**Requirements**

- OpenRouter API credentials for GPT-5 Mini access
- Replicate API key for Google Imagen 4 Ultra and NanoBanana
- Slack OAuth2 credentials with bot permissions
- Google Drive OAuth2 credentials
- Google Sheets API access
- GetLate API key connected to LinkedIn and Facebook accounts
- Perplexity API for research enhancement (optional)

**HOW TO USE**

STEP 1 - Setup form and brand variables
- Configure the Form Trigger webhook URL for brand data collection
- Update the 10 form fields with your specific industry placeholders
- Test the form submission to ensure data flows correctly

STEP 2 - Configure AI services
- Add your OpenRouter API credentials to both Chat Model nodes
- Add your Replicate API key to the HTTP Header Auth credential
- Configure Perplexity API credentials for research functionality
- Set up custom session keys for memory management

STEP 3 - Setup approval workflow
- Add Slack OAuth2 credentials to both "Send message and wait" nodes
- Update the Slack channel ID to your preferred approval channel
- Configure the custom form fields for approval/feedback collection

STEP 4 - Configure storage and scheduling
- Add Google Drive OAuth2 credentials and update the target folder ID
- Add Google Sheets credentials and update the spreadsheet ID
- Get your Late API key from getlate.dev and add it to HTTP Header Auth
- Update the Late accountId in both Schedule Post nodes with your platform IDs

STEP 5 - Customize content strategy
- Modify the Nova system prompt to match your brand voice requirements
- Adjust the visual style requirements in the AI Agent configuration
- Update the posting date logic and timezone settings as needed
- Test the complete workflow with sample brand data
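One way to handle the platform-specific aspect ratios is a small Code node placed before the image-generation request. This is purely an illustrative assumption of how such a mapping could look; the template's own image workflows carry their platform settings separately, and the ratio values here are not taken from it.

```javascript
// Hypothetical Code node that picks an aspect ratio per platform before the
// image-generation request is built.
const post = $input.first().json;

const aspectByPlatform = {
  linkedin: '4:5',   // assumed value for a taller feed card
  facebook: '1:1',   // assumed value for a square post
};

return [{
  json: {
    ...post,
    aspect_ratio: aspectByPlatform[post.platform?.toLowerCase()] ?? '1:1',
  },
}];
```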
by Cyril Nicko Gaspar
🔍 Email Lookup with Google Search from Postgres Database

This n8n workflow is designed to enrich seller data stored in a Postgres database by performing automated Google search lookups. It uses Bright Data's Web Unlocker to bypass search result restrictions and the HTML Extract node to parse and extract relevant information from webpages. The main purpose of this workflow is to discover missing contact details, company domains, and secondary emails for businesses or sellers based on existing database entries.

**🎯 Problem This Workflow Solves**

Manually searching for missing seller or business details—like secondary emails, websites, or domain names—can be time-consuming and inefficient, especially for large datasets. This workflow automates the search and data enrichment process, significantly reducing manual effort while improving the quality and completeness of your seller database.

**✅ Prerequisites**

Before using this template, make sure the following requirements are met:

- ✔️ A Bright Data account with access to the Web Unlocker or Amazon Scraper API
- ✔️ A valid Bright Data API key
- ✔️ An active PostgreSQL database with seller data
- ✔️ An n8n self-hosted instance (recommended for using community nodes like n8n-nodes-brightdata)
- ✔️ The n8n-nodes-brightdata package installed (custom node for Bright Data integration)

**⚙️ Setup Instructions**

Step 1: Prepare your Postgres table

Create a table in Postgres with the following structure (you can adjust field names if needed):

```sql
CREATE TABLE sellers (
  seller_id SERIAL PRIMARY KEY,
  seller_name TEXT,
  primary_email TEXT,
  company_info TEXT,
  trade_name TEXT,
  business_address TEXT,
  coc_number TEXT,
  vat_number TEXT,
  commercial_register TEXT,
  secondary_email TEXT,
  domain TEXT,
  seller_slug TEXT,
  source TEXT
);
```

Step 2: Set up Web Unlocker on Bright Data

1. Go to your Bright Data dashboard.
2. Navigate to Proxies & Scraping → Web Unlocker.
3. Create a new zone, selecting Web Unlocker API under Scraping Solutions.
4. Whitelist your server IP if required.

Step 3: Generate an API key

1. In the Bright Data dashboard, go to the API section.
2. Generate a new API key.
3. In n8n, create HTTP Request credentials using Bearer Authentication with the API key.

Step 4: Install the Bright Data node in n8n

1. In your n8n self-hosted instance, go to Settings → Community Nodes.
2. Search for and install n8n-nodes-brightdata.

**🔄 Workflow Functionality**

- 🔁 **Trigger:** Can be set to run on a schedule (e.g., daily) or manually.
- 📥 **Read:** Fetches seller records from the Postgres table.
- 🌐 **Search:** Uses Bright Data to perform a Google search based on seller_name, company_info, or trade_name.
- 🧾 **Extract:** Parses the HTML content using the HTML Extract node to identify potential websites and email addresses (see the sketch after this description).
- 📝 **Update:** Writes enriched data (like domain or secondary_email) back to the Postgres table.

**💡 Use Cases**

- Lead enrichment for e-commerce sellers
- Domain and contact info discovery for B2B databases
- Email and web domain verification for CRM systems
- Market research automation

**🛠️ Customization Tips**

- Enhance the parsing logic in the HTML Extract node to look for phone numbers, LinkedIn profiles, or social media links.
- Modify the search query logic to include additional parameters like location or industry for more refined results.
- Integrate additional APIs (e.g., Hunter.io, Clearbit) for email validation or social profile enrichment.
- Add filtering to skip entries that already have a domain or secondary_email.
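As a complement to the HTML Extract node, a Code node can pull candidate emails and a likely domain out of the raw page content with a regular expression. This is a sketch: the input field name (`data`) and the simple most-common-domain heuristic are assumptions to adapt to your actual Bright Data output.

```javascript
// Hypothetical extraction Code node: find email addresses and a probable
// company domain in the fetched page content.
const html = $input.first().json.data ?? '';

const emailPattern = /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g;
const emails = [...new Set(html.match(emailPattern) ?? [])]
  .filter(e => !/\.(png|jpg|gif|svg)$/i.test(e)); // drop image names that look like emails

// Guess the company domain from the most common email domain found.
const domains = emails.map(e => e.split('@')[1]);
const domain = domains.sort((a, b) =>
  domains.filter(d => d === b).length - domains.filter(d => d === a).length
)[0] ?? null;

return [{ json: { secondary_email: emails[0] ?? null, all_emails: emails, domain } }];
```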
by Zacharia Kimotho
Workflow documentation updated on 21 May 2025

This workflow keeps track of your brand mentions across different Facebook groups, analyzes each post as positive, negative, or neutral, and writes the results to Google Sheets for further analysis.

It is useful and relevant for brands that want to keep track of what people are saying about them and gauge customer satisfaction or dissatisfaction based on what they are talking about.

**Who is this template for?**

This workflow is for you if you:

- Need to keep track of your brand sentiment across different niche Facebook groups
- Own a SaaS and want to monitor it across different local Facebook groups
- Are doing competitor research to understand what others don't like about their products
- Are testing the market with different offerings and products to get the best results
- Are looking for sources other than review sites for product, software, or service reviews
- Are starting market research and would like to get insights from different Facebook groups on app usage, strengths, weaknesses, features, etc.

**How it works**

- You set the desired schedule by which to monitor the groups.
- The workflow gets the brand names and Facebook groups to monitor, scrapes the groups, and classifies each mention (see the sketch after these steps).

**Setup Steps**

Before you begin, you will need access to a Bright Data API to run this workflow.

1. Make a copy of the Google Sheet below and add the URLs of the Facebook groups to scrape and the brand names you wish to monitor.
2. Import the workflow JSON to your canvas.
3. Make a copy of this Google Sheet to get started easily.
4. Set your Bright Data API key in the relevant node.
5. Map the Google Sheet to your tables.
6. Optionally switch the current AI models to different ones, e.g., Gemini or Anthropic.
7. Run the workflow.

**Setup B**

Bright Data provides an option to receive the results on an external webhook via a POST call. This can be collected via the Webhook node in n8n.
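Once the AI has classified a post, a small Code node can shape the result into a Google Sheets row. The column names below (brand, group_url, post_text, sentiment, checked_at) are assumptions; match them to the columns in your copy of the sheet.

```javascript
// Hypothetical Code node that prepares one sheet row from the AI output and
// guards against unexpected sentiment labels.
const result = $input.first().json;

const allowed = ['positive', 'negative', 'neutral'];
const sentiment = allowed.includes((result.sentiment || '').toLowerCase())
  ? result.sentiment.toLowerCase()
  : 'neutral'; // fall back rather than writing an unexpected label

return [{
  json: {
    brand: result.brand,
    group_url: result.group_url,
    post_text: result.post_text,
    sentiment,
    checked_at: new Date().toISOString(),
  },
}];
```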
by Alex Kim
Weather via Slack 🌦️💬

**Overview**

This workflow provides real-time weather updates via Slack using a custom Slack command:

/weather [cityname]

Users can type this command in Slack (e.g., /weather New York), and the workflow will fetch and post the latest forecast, including temperature, wind conditions, and a short weather summary. While this workflow is designed for Slack, you can modify it to send weather updates via email, Discord, Microsoft Teams, or any other communication platform.

**How It Works**

1. **Webhook Trigger** – The workflow is triggered when a user runs /weather [cityname] in Slack.
2. **Geocoding with OpenStreetMap** – The city name is converted into latitude and longitude coordinates.
3. **Weather Data from NOAA** – The coordinates are used to retrieve detailed weather data from the National Weather Service (NWS) API.
4. **Formatted Weather Report** – The workflow extracts the relevant weather details: temperature (°F/°C), wind speed and direction, and a short forecast summary (see the sketch after this description).
5. **Slack Notification** – The weather forecast is posted back to the Slack channel in a structured format.

**Requirements**

- A custom Slack app with:
  - The ability to create a Slash Command (/weather)
  - OAuth permissions to post messages in Slack
- An n8n instance to host and execute the workflow

**Customization**

- Replace Slack messaging with email, Discord, Microsoft Teams, Telegram, or another service.
- Modify the weather data format for different output preferences.
- Set up scheduled weather updates for specific locations.

**Use Cases**

- Instantly check the weather for any location directly in Slack.
- Automate weather reports for team members or projects.
- Useful for remote teams, outdoor event planning, or general weather tracking.

**Setup Instructions**

1. Create a custom Slack app:
   - Go to api.slack.com/apps and create a new app.
   - Add a Slash Command (/weather) with the webhook URL from n8n.
   - Enable OAuth scopes for sending messages.
2. Deploy the webhook – ensure it can receive and process Slack commands.
3. Run the workflow – type /weather [cityname] in Slack and receive instant weather updates.
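The formatting step can be done with a Code node along these lines. It assumes the previous HTTP Request node fetched the forecast URL returned by https://api.weather.gov/points/{lat},{lon}, whose response exposes properties.periods[]; the node name "Webhook" and the field path for the slash-command text are assumptions to adjust to your own setup.

```javascript
// Hypothetical Code node that turns the NWS forecast response into the
// Slack message text.
const city = $('Webhook').first().json.body?.text ?? 'your location';
const period = $input.first().json.properties.periods[0]; // current/next forecast period

const text = [
  `*Weather for ${city}* (${period.name})`,
  `Temperature: ${period.temperature}°${period.temperatureUnit}`,
  `Wind: ${period.windSpeed} ${period.windDirection}`,
  `Forecast: ${period.shortForecast}`,
].join('\n');

return [{ json: { text } }];
```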