by Shiv Gupta
# 🎵 TikTok Post Scraper via Keywords | Bright Data + Sheets Integration

## 📝 Workflow Description
Automatically scrapes TikTok posts based on keyword search using the Bright Data API and stores comprehensive data in Google Sheets for analysis and monitoring.

## 🔄 How It Works
This workflow operates through a simple, automated process:

1. **Keyword Input:** User submits search keywords through a web form
2. **Data Scraping:** The Bright Data API searches TikTok for posts matching the keywords
3. **Processing Loop:** Monitors scraping progress and waits for completion
4. **Data Storage:** Automatically saves all extracted data to Google Sheets
5. **Result Delivery:** Provides comprehensive post data including metrics, user info, and media URLs

## ⏱️ Setup Information
**Estimated Setup Time:** 10-15 minutes. This includes importing the workflow, configuring credentials, and testing the integration. Most of the process is automated once properly configured.

## ✨ Key Features
- 📝 **Keyword-Based Search** - Search TikTok posts using specific keywords
- 📊 **Comprehensive Data Extraction** - Captures post metrics, user profiles, and media URLs
- 📋 **Google Sheets Integration** - Automatically organizes data in spreadsheets
- 🔄 **Automated Processing** - Handles scraping progress monitoring
- 🛡️ **Reliable Scraping** - Uses Bright Data's professional infrastructure
- ⚡ **Real-time Updates** - Live status monitoring and data processing

## 📊 Data Extracted
| Field | Description | Example |
|-------|-------------|---------|
| url | TikTok post URL | https://www.tiktok.com/@user/video/123456 |
| post_id | Unique post identifier | 7234567890123456789 |
| description | Post caption/description | Check out this amazing content! #viral |
| digg_count | Number of likes | 15400 |
| share_count | Number of shares | 892 |
| comment_count | Number of comments | 1250 |
| play_count | Number of views | 125000 |
| profile_username | Creator's username | @creativity_master |
| profile_followers | Creator's follower count | 50000 |
| hashtags | Post hashtags | #viral #trending #fyp |
| create_time | Post creation timestamp | 2025-01-15T10:30:00Z |
| video_url | Direct video URL | https://video.tiktok.com/tos/... |
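As a point of reference, a single scraped record assembled from these fields might look like this before it is appended to the sheet (values are the examples above; the exact response shape returned by Bright Data may differ):

```json
{
  "url": "https://www.tiktok.com/@user/video/123456",
  "post_id": "7234567890123456789",
  "description": "Check out this amazing content! #viral",
  "digg_count": 15400,
  "share_count": 892,
  "comment_count": 1250,
  "play_count": 125000,
  "profile_username": "@creativity_master",
  "profile_followers": 50000,
  "hashtags": ["#viral", "#trending", "#fyp"],
  "create_time": "2025-01-15T10:30:00Z",
  "video_url": "https://video.tiktok.com/tos/..."
}
```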
## 🚀 Setup Instructions

### Step 1: Prerequisites
- n8n instance (self-hosted or cloud)
- Bright Data account with TikTok scraping dataset access
- Google account with Sheets access
- Basic understanding of n8n workflows

### Step 2: Import Workflow
1. Copy the provided JSON workflow code
2. In n8n: Go to Workflows → + Add workflow → Import from JSON
3. Paste the JSON code and click Import
4. The workflow will appear in your n8n interface

### Step 3: Configure Bright Data
1. In n8n: Navigate to Credentials → + Add credential → Bright Data API
2. Enter your Bright Data API credentials
3. Test the connection to ensure it's working
4. Update the workflow nodes with your dataset ID: `gd_lu702nij2f790tmv9h`
5. Replace `BRIGHT_DATA_API_KEY` with your actual API key

### Step 4: Configure Google Sheets
1. Create a new Google Sheet or use an existing one
2. Copy the Sheet ID from the URL
3. In n8n: Credentials → + Add credential → Google Sheets OAuth2 API
4. Complete the OAuth setup and test the connection
5. Update the Google Sheets node with your Sheet ID
6. Ensure the sheet has a tab named "Tiktok by keyword"

### Step 5: Test the Workflow
1. Activate the workflow using the toggle switch
2. Access the form trigger URL to submit a test keyword
3. Monitor the workflow execution in n8n
4. Verify data appears in your Google Sheet
5. Check that all fields are populated correctly

## ⚙️ Configuration Details

### Bright Data API Settings
- **Dataset ID:** `gd_lu702nij2f790tmv9h`
- **Discovery Type:** discover_new
- **Search Method:** keyword
- **Results per Input:** 2 posts per keyword
- **Include Errors:** true

### Workflow Parameters
- **Wait Time:** 1 minute between status checks
- **Status Check:** Monitors until scraping is complete
- **Data Format:** JSON response from Bright Data
- **Error Handling:** Automatic retry on incomplete scraping
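For orientation, the trigger call behind these settings is a POST to Bright Data's dataset trigger endpoint (e.g. `https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lu702nij2f790tmv9h&type=discover_new&discover_by=keyword`, authenticated with your API key as a Bearer token - the exact path and parameter names are assumptions, so confirm them against the Bright Data docs) with a body along these lines:

```json
[
  {
    "search_keyword": "viral dance",
    "num_of_posts": 2
  }
]
```

The field names here are illustrative; use whatever input schema your dataset defines.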
## 📋 Usage Guide

### Running the Workflow
1. Access the form trigger URL provided by n8n
2. Enter your desired keyword (e.g., "viral dance", "cooking tips")
3. Submit the form to start the scraping process
4. Wait for the workflow to complete (typically 2-5 minutes)
5. Check your Google Sheet for the extracted data

### Best Practices
- Use specific, relevant keywords for better results
- Monitor your Bright Data usage to stay within limits
- Regularly back up your Google Sheets data
- Test with simple keywords before complex searches
- Review extracted data for accuracy and completeness

## 🔧 Troubleshooting Common Issues

### 🚨 Scraping Not Starting
- Verify your Bright Data API credentials are correct
- Check that the dataset ID matches your account
- Ensure sufficient credits in your Bright Data account

### 🚨 No Data in Google Sheets
- Confirm Google Sheets credentials are authenticated
- Verify the sheet ID is correct
- Check that the "Tiktok by keyword" tab exists

### 🚨 Workflow Timeout
- Increase the wait time if scraping takes longer
- Check the Bright Data dashboard for scraping status
- Verify the keyword produces available results

## 📈 Use Cases
- **Content Research:** Research trending content and hashtags in your niche to inform your content strategy.
- **Competitor Analysis:** Monitor competitor posts and engagement metrics to understand market trends.
- **Influencer Discovery:** Find influencers and creators in specific topics or industries.
- **Market Intelligence:** Gather data on trending topics, hashtags, and user engagement patterns.

## 🔒 Security Notes
- Keep your Bright Data API credentials secure
- Use appropriate Google Sheets sharing permissions
- Monitor API usage to prevent unexpected charges
- Regularly rotate API keys for better security
- Comply with TikTok's terms of service and data usage policies

## 🎉 Ready to Use!
Your TikTok scraper is now configured and ready to extract valuable data. Start with simple keywords and gradually expand your research as you become familiar with the workflow.

**Need Help?** Visit the n8n community forum or check the Bright Data documentation for additional support and advanced configuration options. For any questions or support, please contact the author by email or via the contact form.
by Dominik Baranowski
# N8N for Beginners: Looping Over Items

## Description
This workflow is designed for n8n beginners to understand how n8n handles looping (iteration) over multiple items. It highlights two key behaviors:

- **Built-In Looping:** By default, most n8n nodes iterate over each item in an input array.
- **Explicit Looping:** The **Loop Over Items** node allows controlled iteration, enabling **custom batch processing** and multi-step workflows.

This workflow demonstrates the difference between processing an unsplit array of strings (single item) vs. a split array (multiple items).

## Setup

### 1. Input Data
To begin, paste the following JSON into the Manual Trigger node:

```json
{
  "urls": [
    "https://www.reddit.com",
    "https://www.n8n.io/",
    "https://n8n.io/",
    "https://supabase.com/",
    "https://duckduckgo.com/"
  ]
}
```

📌 Steps to paste data:
1. **Double-click** the "Manual Trigger" node.
2. Click "Edit Output" (top-right corner).
3. Paste the JSON and save. The node turns purple, indicating that test data is pinned.

### 2. Run the Workflow
Click the "Test Workflow" button at the bottom of the canvas.

## Explanation of the n8n Nodes in the Workflow
| Node Name | Purpose | Documentation Link |
|-----------|---------|--------------------|
| Manual Trigger | Starts the workflow manually and sends test data | Docs |
| Split Out | Converts an array of strings into separate JSON objects | Docs |
| Loop Over Items (Loop Over Items 1) | Demonstrates how an unsplit array is treated as one item | Docs |
| Loop Over Items (Loop Over Items 2) | Iterates over each item separately | Docs |
| Wait | Introduces a delay per iteration (set to 1 second) | Docs |
| Code | Adds a constant parameter (param1) to each item | Docs |
| NoOp (Result Nodes) | Displays output for inspection | Docs |

## Execution Details

### 1. How the Workflow Runs
- **Manual Trigger starts execution** with the pasted JSON data.
- The workflow follows two paths:
  - **Unsplit Array Path → Loop Over Items 1:** Processes the entire array as a single item. Result1 & Result5 show that the array was not split.
  - **Split Array Path → Split Out → Loop Over Items 2:** Splits the array into separate objects. Result2, Result3, and Result4 show that each item is processed individually.
- A Wait node (1 sec delay) demonstrates controlled execution.
- Code nodes modify the JSON, adding a parameter (param1).

### 2. What You Will See
| Node | Expected Output |
|------|---------------|
| Result1 & Result5 | The entire array is processed as one item. |
| Result2, Result3, Result4 | The array is split and processed as individual items. |
| Wait Node | Adds a 1-second delay per item in Loop Over Items 2. |

## Use Cases
This workflow is useful for:
- ✅ **API Data Processing:** Loop through API responses containing arrays.
- ✅ **Web Scraping:** Process multiple URLs individually.
- ✅ **Task Automation:** Execute a sequence of actions per item.
- ✅ **Workflow Optimization:** Control execution order, delays, and dependencies.

## Notes
- Sticky notes are included in the workflow for easy reference.
- The Wait node is optional; remove it for faster execution.
- This template is structured for beginners but serves as a building block for more advanced automations.
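To picture the difference between the two paths, here is a sketch of the items involved (the `param1` value is illustrative; it is whatever constant your Code node assigns, and only two of the five split items are shown):

```json
{
  "unsplit_path_item": {
    "urls": ["https://www.reddit.com", "https://www.n8n.io/", "https://n8n.io/", "https://supabase.com/", "https://duckduckgo.com/"]
  },
  "split_path_items": [
    { "urls": "https://www.reddit.com", "param1": "example-value" },
    { "urls": "https://www.n8n.io/", "param1": "example-value" }
  ]
}
```

Loop Over Items 1 sees the first shape as a single item, while Loop Over Items 2 receives one item per URL.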
by Ferenc Erb
## Use Case
Automate chat interactions in Bitrix24 with a customizable bot that can handle various events and respond to user messages.

## What This Workflow Does
- Processes incoming webhook requests from Bitrix24
- Handles authentication and token validation
- Routes different event types (messages, joins, installations)
- Provides automated responses and bot registration
- Manages secure communication between Bitrix24 and external services

## Setup Instructions
1. Configure Bitrix24 webhook endpoints
2. Set up authentication credentials
3. Customize bot responses and behavior
4. Deploy and test the workflow
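To illustrate what the routing logic receives, an incoming message event from Bitrix24 looks roughly like this (rendered as JSON for readability - Bitrix24 actually posts form-encoded fields, and the exact names should be verified against its imbot event docs):

```json
{
  "event": "ONIMBOTMESSAGEADD",
  "data": {
    "PARAMS": {
      "DIALOG_ID": "123",
      "MESSAGE": "Hello bot"
    }
  },
  "auth": {
    "application_token": "abc123"
  }
}
```

Join and installation events arrive in the same envelope under their own event names, and the token in `auth` is what the validation step checks before any response is sent.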
by Nick Saraev
# Deep Multiline Icebreaker System (AI-Powered Cold Email Personalization)

**Categories:** Lead Generation, AI Marketing, Sales Automation

This workflow creates an advanced AI-powered cold email personalization system that achieves 5-10% reply rates by generating deeply personalized multi-line icebreakers. The system scrapes comprehensive website data, analyzes multiple pages per prospect, and uses advanced AI prompting to create custom email openers that make recipients believe you've personally researched their entire business.

## Benefits
- **Superior Response Rates** - Achieves 5-10% reply rates vs. 1-2% for standard cold email campaigns
- **Deep Website Intelligence** - Scrapes and analyzes multiple pages per prospect, not just homepages
- **Advanced AI Personalization** - Uses sophisticated prompting techniques with examples and formatting rules
- **Complete Lead Pipeline** - From Apollo search to personalized icebreakers in Google Sheets
- **Scalable Processing** - Handle hundreds of prospects with intelligent batching and error handling
- **Revenue-Focused Approach** - System designed around proven $72K/month agency methodologies

## How It Works

### Apollo Lead Acquisition
- Integrates directly with Apollo.io search URLs through the Apify scraper
- Processes 500+ leads per search with comprehensive contact data
- Filters for prospects with both email addresses and accessible websites

### Multi-Page Website Scraping
- Scrapes the homepage to extract all internal website links
- Processes relative URLs and filters out external/irrelevant links
- Performs intelligent batching to prevent IP blocking during scraping

### Comprehensive Content Analysis
- Converts HTML to markdown for efficient AI processing
- Uses GPT-4 to generate detailed abstracts of each webpage
- Aggregates insights from multiple pages into comprehensive prospect profiles

### Advanced AI Icebreaker Generation
- Employs sophisticated prompting with system messages, examples, and formatting rules
- Uses proven icebreaker templates that reference non-obvious website details
- Generates personalized openers that imply deep manual research

### Smart Data Processing
- Removes duplicate URLs and handles scraping errors gracefully
- Implements token limits to control AI processing costs
- Organizes the final output in a structured Google Sheets format

## Required Google Sheets Setup
Create a Google Sheet with these exact tab and column structures:

**Search URLs Tab:**
- URL - Contains Apollo.io search URLs for your target audiences

**Leads Tab (Output):**
- first_name - Contact's first name
- last_name - Contact's last name
- email - Contact's email address
- website_url - Company website URL
- headline - Job title/position
- location - Geographic location
- phone_number - Contact phone (if available)
- multiline_icebreaker - AI-generated personalized opener

**Setup Instructions:**
1. Create a Google Sheet with "Search URLs" and "Leads" tabs
2. Add your Apollo search URLs to the first tab (one per row)
3. Connect Google Sheets OAuth credentials in n8n
4. Update the Google Sheets document ID in all sheet nodes
5. The workflow reads from Search URLs and outputs to Leads automatically

**Apollo Search URL Format:**
Your search URLs should look like:
https://app.apollo.io/#/people?personLocations[]=United%20States&personTitles[]=ceo&qKeywords=marketing%20agency&page=1
## Business Use Cases
- **AI Automation Agencies** - Generate high-converting prospect outreach for service-based businesses
- **B2B Sales Teams** - Create personalized cold email campaigns that actually get responses
- **Marketing Agencies** - Offer premium personalization services to clients
- **Consultants** - Build authority through deeply researched prospect outreach
- **SaaS Companies** - Improve demo booking rates through personalized messaging
- **Professional Services** - Stand out from generic sales emails with custom insights

## Revenue Potential
This system transforms cold email economics:
- **5-10x Higher Response Rates** than standard cold email approaches
- **$72K/month proven methodology** - the exact system used to scale a successful AI agency
- **Premium Positioning** - prospects assume you've done extensive manual research
- **Scalable Personalization** - process hundreds of prospects daily vs. manual research

**Difficulty Level:** Advanced
**Estimated Build Time:** 3-4 hours
**Monthly Operating Cost:** ~$150 (Apollo + Apify + OpenAI + Email platform APIs)

## Watch My Complete Live Build
Want to see me build this entire deep personalization system from scratch? I walk through every component live, including the AI prompting strategies, website scraping logic, error handling, and the exact techniques that generate 5-10% reply rates.

🎥 **See My Live Build Process:** "I Deep-Personalized 1000+ Cold Emails Using THIS AI System (FREE TEMPLATE)"

This comprehensive tutorial shows the real development process, including advanced AI prompting, multi-page scraping architecture, and the proven icebreaker templates that have generated over $72K/month in agency revenue.

## Set Up Steps

### Apollo & Apify Integration
- Configure an Apify account with Apollo scraper access
- Set up API credentials and test lead extraction
- Define target audience parameters and lead qualification criteria

### Google Sheets Database Setup
- Create the multi-sheet structure (Search URLs, Leads)
- Configure proper column mappings for lead data
- Set up Google Sheets API credentials and permissions

### Website Scraping Infrastructure
- Configure HTTP request nodes with proper redirect handling
- Set up error handling for websites that can't be scraped
- Implement intelligent batching with split-in-batches nodes

### AI Content Processing
- Set up OpenAI API credentials with appropriate rate limits
- Configure the dual-AI approach (page summarization + icebreaker generation)
- Implement token limiting to control processing costs

### Advanced Icebreaker Generation
- Configure sophisticated AI prompting with system messages (see the sketch after this list)
- Set up example-based learning with input/output pairs
- Implement formatting rules for natural-sounding personalization

### Quality Control & Testing
- Test the complete workflow with small prospect batches
- Validate AI output quality and personalization accuracy
- Monitor response rates and optimize messaging templates

## Advanced Optimizations
Scale the system with:
- **Industry-Specific Templates:** Customize icebreaker formats for different verticals
- **A/B Testing Framework:** Test different AI prompt variations and templates
- **CRM Integration:** Automatically add qualified responders to sales pipelines
- **Response Tracking:** Monitor which personalization elements drive the highest engagement
- **Multi-Touch Sequences:** Create follow-up campaigns based on initial response data

## Important Considerations
- **AI Token Management:** The system includes intelligent token limiting to control OpenAI costs
- **Scraping Ethics:** Built-in delays and error handling prevent website overload
- **Data Quality:** Filtering logic ensures only high-quality prospects with accessible websites
- **Scalability:** Batch processing prevents IP blocking during high-volume scraping
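To make the prompting approach concrete, the icebreaker-generation call might be assembled along these lines (a sketch only - the real system prompt, few-shot examples, and formatting rules live in the workflow, and the prospect details here are invented):

```json
{
  "model": "gpt-4",
  "messages": [
    {
      "role": "system",
      "content": "You write two-sentence cold email icebreakers. Reference one non-obvious detail from the research notes, abbreviate company names the way a human would, and never sound templated."
    },
    {
      "role": "user",
      "content": "Prospect: Jane Doe, CEO at Love AMS Professional Services. Research notes (aggregated page abstracts): ..."
    }
  ]
}
```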
## Why This System Works
The key to 5-10% reply rates lies in making prospects believe you've done extensive manual research:
- Non-obvious details from deep website analysis
- Natural language patterns that avoid template detection
- Company name abbreviation (e.g., "Love AMS" vs "Love AMS Professional Services")
- Multiple page insights aggregated into compelling narratives

## Check Out My Channel
For more advanced automation systems and proven business-building strategies that generate real revenue, explore my YouTube channel where I share the exact methodologies used to build successful automation agencies.
by Ajith joseph
# 🤖 Create a Telegram Bot with Mistral AI and Conversation Memory

A sophisticated Telegram bot that provides AI-powered responses with conversation memory. This template demonstrates how to integrate any AI API service with Telegram, making it easy to swap between different AI providers like OpenAI, Anthropic, Google AI, or any other API-based AI model.

## 🔧 How It Works
The workflow creates an intelligent Telegram bot that:
- 💬 Maintains conversation history for each user
- 🧠 Provides contextual AI responses using any AI API service
- 📱 Handles different message types and commands
- 🔄 Manages chat sessions with clear functionality
- 🔌 Is easily adaptable to any AI provider (OpenAI, Anthropic, Google AI, etc.)

## ⚙️ Set Up Steps

### 📋 Prerequisites
- 🤖 Telegram Bot Token (from @BotFather)
- 🔑 AI API Key (from any AI service provider)
- 🚀 n8n instance with webhook capability

### 🛠️ Configuration Steps
1. 🤖 **Create Telegram Bot**
   - Message @BotFather on Telegram
   - Create a new bot with the /newbot command
   - Save the bot token for credentials setup
2. 🧠 **Choose Your AI Provider**
   - OpenAI: Get an API key from the OpenAI platform
   - Anthropic: Sign up for Claude API access
   - Google AI: Get a Gemini API key
   - NVIDIA: Access LLaMA models
   - Hugging Face: Use the inference API
   - Any other AI API service
3. 🔐 **Set up Credentials in n8n**
   - Add Telegram API credentials with your bot token
   - Add Bearer Auth/API Key credentials for your chosen AI service
   - Test both connections
4. 🚀 **Deploy Workflow**
   - Import the workflow JSON
   - Customize the AI API call (see customization section)
   - Activate the workflow
   - Set the webhook URL in your Telegram bot settings

## ✨ Features

### 🚀 Core Functionality
- 📨 **Smart Message Routing**: Automatically categorizes incoming messages (commands, text, non-text)
- 🧠 **Conversation Memory**: Maintains chat history for each user (last 10 messages)
- 🤖 **AI-Powered Responses**: Integrates with any AI API service for intelligent replies
- ⚡ **Command Support**: Built-in /start and /clear commands

### 📱 Message Types Handled
- 💬 **Text Messages**: Processed through the AI model with context
- 🔧 **Commands**: Special handling for bot commands
- ❌ **Non-text Messages**: Polite error message for unsupported content

### 💾 Memory Management
- 👤 User-specific chat history storage
- 🔄 Automatic history trimming (keeps the last 10 messages)
- 🌐 Global state management across workflow executions

## 🤖 Bot Commands
- /start 🎯 - Welcome message with bot introduction
- /clear 🗑️ - Clears conversation history for a fresh start
- Regular text 💬 - Processed by the AI with conversation context

## 🔧 Technical Details

### 🏗️ Workflow Structure
1. 📡 **Telegram Trigger** - Receives all incoming messages
2. 🔀 **Message Filtering** - Routes messages based on type/content
3. 💾 **History Management** - Maintains conversation context
4. 🧠 **AI Processing** - Generates intelligent responses
5. 📤 **Response Delivery** - Sends formatted replies back to the user

### 🤖 AI API Integration (Customizable)
Current example (NVIDIA):
- Model: mistralai/mistral-nemotron
- Temperature: 0.6 (balanced creativity)
- Max tokens: 4096
- Response limit: under 200 words

🔄 **Easy to Replace with Any AI Service:**

OpenAI example:

```json
{
  "model": "gpt-4",
  "messages": [...],
  "temperature": 0.7,
  "max_tokens": 1000
}
```

Anthropic Claude example:

```json
{
  "model": "claude-3-sonnet-20240229",
  "messages": [...],
  "max_tokens": 1000
}
```

Google Gemini example:

```json
{
  "contents": [...],
  "generationConfig": {
    "temperature": 0.7,
    "maxOutputTokens": 1000
  }
}
```
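For comparison, a request matching the current NVIDIA configuration might look like the following (sent to the endpoint shown in the provider table further down; the prompt wording is illustrative):

```json
{
  "model": "mistralai/mistral-nemotron",
  "temperature": 0.6,
  "max_tokens": 4096,
  "messages": [
    { "role": "system", "content": "You are a helpful assistant. Keep replies under 200 words." },
    { "role": "user", "content": "What can you do?" }
  ]
}
```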
### 🛡️ Error Handling
- ❌ Non-text message detection and appropriate responses
- 🔧 API failure handling
- ⚠️ Invalid command processing

## 🎨 Customization Options

### 🤖 AI Provider Switching
To use a different AI service, modify the "NVIDIA LLaMA Chat Model" node:
1. 📝 Change the URL in the HTTP Request node
2. 🔧 Update the request body format in the "Prepare API Request" node
3. 🔐 Update the authentication method if needed
4. 📊 Adjust response parsing in the "Save AI Response to History" node

### 🧠 AI Behavior
- 📝 Modify the system prompt in the "Prepare API Request" node
- 🌡️ Adjust temperature and response parameters
- 📏 Change response length limits
- 🎯 Customize model-specific parameters

### 💾 Memory Settings
- 📊 Adjust history length (currently 10 messages)
- 👤 Modify user identification logic
- 🗄️ Customize the data persistence approach

### 🎭 Bot Personality
- 🎉 Update welcome message content
- ⚠️ Customize error messages and responses
- ➕ Add new command handlers

## 💡 Use Cases
- 🎧 **Customer Support**: Automated first-line support with context awareness
- 📚 **Educational Assistant**: Homework help and learning support
- 👥 **Personal AI Companion**: General conversation and assistance
- 💼 **Business Assistant**: FAQ handling and information retrieval
- 🔬 **AI API Testing**: Perfect template for testing different AI services
- 🚀 **Prototype Development**: Quick AI chatbot prototyping

## 📝 Notes
- 🌐 Requires an active n8n instance for webhook handling
- 💰 AI API usage may have rate limits and costs (varies by provider)
- 💾 Bot memory persists across workflow restarts
- 👥 Supports multiple concurrent users with separate histories
- 🔄 The template is provider-agnostic; easily switch between AI services
- 🛠️ Perfect starting point for any AI-powered Telegram bot project

## 🔧 Popular AI Services You Can Use
| Provider | Model Examples | API Endpoint Style |
|----------|---------------|-------------------|
| 🟢 OpenAI | GPT-4, GPT-3.5 | https://api.openai.com/v1/chat/completions |
| 🔵 Anthropic | Claude 3 Opus, Sonnet | https://api.anthropic.com/v1/messages |
| 🔴 Google | Gemini Pro, Gemini Flash | https://generativelanguage.googleapis.com/v1beta/models/ |
| 🟡 NVIDIA | LLaMA, Mistral | https://integrate.api.nvidia.com/v1/chat/completions |
| 🟠 Hugging Face | Various OSS models | https://api-inference.huggingface.co/models/ |
| 🟣 Cohere | Command, Generate | https://api.cohere.ai/v1/generate |

Simply replace the HTTP Request node configuration to switch providers!
by Agent Studio
## Overview
This workflow allows you to trigger custom logic in n8n directly from Retell's Voice Agent using Custom Functions. It captures a POST webhook from Retell every time a Voice Agent reaches a Custom Function node. You can plug in any logic: call an external API, book a meeting, update a CRM, or even return a dynamic response back to the agent.

## Who is it for
For builders using Retell who want to extend Voice Agent functionality with real-time custom workflows or AI-generated responses.

## Prerequisites
- Have a Retell AI account
- A Retell agent with a Custom Function node in its conversation flow (see template below)
- Set your n8n webhook URL in the Custom Function configuration (see "How to use it" below)
- (Optional) Familiarity with Retell's Custom Function docs
- Start a conversation with the agent (text or voice)

## Retell Agent Example
To get you started, we've prepared a Retell agent, ready to be imported, that includes the call to this template.
- Import the agent into your Retell workspace (top-right button on your agent's page)
- You will need to modify the function URL in order to call your own instance

This template is a simple hotel agent that calls the custom function to confirm a booking, passing basic formatted data.

## How it works
Retell sends a webhook to n8n whenever a Custom Function is triggered during a call (or test chat). The webhook includes:
- Full call context (transcript, call ID, etc.)
- Parameters defined in the Retell function node

You can process this data and return a response string back to the Voice Agent in real time.

## How to use it
1. Copy the webhook URL (e.g. https://your-instance.app.n8n.cloud/webhook/hotel-retell-template)
2. Modify the Retell Custom Function webhook URL (see template description for screenshots):
   - Edit the function
   - Modify the URL
3. Modify the logic in the Set node or replace it with your own custom flow
4. Deploy and test: Retell will hit your n8n workflow during the conversation

## Extension Ideas
- Call a third-party API to fetch data (e.g. hotel availability, CRM records)
- Use an LLM node to generate dynamic responses
- Trigger a parallel automation (Slack message, calendar invite, etc.)

👉 Reach out to us if you're interested in analyzing your Retell Agent conversations.
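As a rough sketch, the payload n8n receives for the hotel example looks something like this (field names are assumptions based on Retell's Custom Function format; check Retell's docs for the exact schema):

```json
{
  "name": "confirm_booking",
  "args": {
    "guest_name": "Jane Doe",
    "nights": 2
  },
  "call": {
    "call_id": "abc-123",
    "transcript": "Agent: Welcome to the hotel..."
  }
}
```

The workflow then returns a short string (or small JSON value) that the Voice Agent speaks back, e.g. "Your booking for 2 nights is confirmed."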
by Jordan Lee
This flexible template scrapes business listings for any industry and location, perfect for sales teams, marketers, and researchers.

## Good to know
- Works with any business category (restaurants, contractors, retailers, etc.)
- Fully customizable search parameters
- Results automatically organized in Google Sheets
- Built-in delay ensures scraping completes before data collection

## How it works
1. **Trigger:** Manual or scheduled start
2. **Apify Configuration:** Sets scraping parameters (industry, location, data fields)
3. **Scraping Execution:** Runs the web scraping job
4. **Data Processing:** Cleans and structures the raw data
5. **Storage:** Saves results to your Google Sheets

## What is Apify?
Apify is a web scraping tool. In this workflow the data is scraped with a Google Maps scraper: https://apify.com/compass/crawler-google-places

## How to use

### Apify Small # Lead Generation (Purple)
1. Open https://apify.com/compass/crawler-google-places
2. Add the location and industry to scrape (in Apify)
3. Add the number of leads to output (in Apify)
4. Copy the JSON file into n8n
5. Copy & paste the "Get Run" API endpoint URL into n8n

### Apify Large # Lead Generation (Grey)

**Configure the Manual Trigger**
- The "When clicking 'Execute workflow'" node is ready to use as-is
- This triggers the entire lead generation process

**Set up the "Start Results (Apify)" node**

A. Get your Apify API information:
1. Go to Apify.com and create a free account
2. Navigate to Settings → Integrations → API tokens
3. Copy your API token
4. Find the Google Maps scraper actor ID

B. Configure the HTTP Request (start results):
- Method: POST
- URL: Replace "enter apify (get run)" with: https://api.apify.com/v2/acts/nwua9Gu5YrADL7ZDj/runs?token=YOUR_API_TOKEN

C. Customize the JSON body parameters (a minimal example body is shown at the end of this section). In the JSON body, modify these key fields for location & search:
- "locationQuery": Change "Toronto" to your target city
- "searchStringsArray": Change ["barber"] to your business type. Examples: ["restaurants"], ["dentists"], ["contractors"]

**Configure the HTTP Request (get results)**
- Method: GET
- URL: enter the "get dataset" URL from Apify

**Split Out node**
- Select the fields to append to the Google Sheet

**Test the configuration**
- Click Execute workflow to test
- Check that the Apify job starts successfully
- Note the job ID returned for the next section

This section initiates the scraping process and should complete in 30-60 seconds depending on your lead count.

## Setup Google Sheets
1. Create a new Google Sheet with these columns:
   - title (business name)
   - address (full address)
   - state (state/province)
   - neighborhood (area/district)
   - phone (contact number)
   - emails (email addresses)
2. Copy your Google Sheets document ID for workflow configuration

## Requirements
- Apify account
- Google Sheets document
- Google OAuth credentials

## Customization Options
For different use cases:
- **Lead Gen:** Get business leads
- **Local SEO:** Collect competitor data
- **Market Research:** Analyze industry trends

Advanced modifications:
- Add email enrichment
- Integrate with CRM systems
- Set up automatic daily runs
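For reference, a minimal input body for the Google Maps actor might look like this (`locationQuery` and `searchStringsArray` are the fields described in step C; the lead-count field name is an assumption, so check the actor's input schema on Apify):

```json
{
  "locationQuery": "Toronto",
  "searchStringsArray": ["barber"],
  "maxCrawledPlacesPerSearch": 50
}
```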
by Julian Kaiser
# 🗂️ Bulk File Upload to Google Drive with Folder Management

## How it works
1. User submits files and a target folder name via form
2. Workflow checks if the folder exists in Drive
3. Creates the folder if needed, or uses the existing one
4. Processes and uploads all files, maintaining structure

## Set up steps (est. 10-15 mins)
1. Set up Google Drive credentials in n8n
2. Replace the parent folder ID in the search query with your Drive folder ID
3. Configure the form node with:
   - a multiple file upload field
   - a folder name text field
4. Test the workflow with sample files

💡 Detailed configuration steps and patterns are documented in sticky notes within the workflow.

**Perfect for:**
- Bulk file organization
- Automated Drive folder management
- File upload automation
- Maintaining consistent file structures
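For reference, the folder-existence check is a Drive search query along these lines (a sketch: substitute your own parent folder ID, and note that the form field name `Folder Name` is an assumption):

```
mimeType = 'application/vnd.google-apps.folder' and name = '{{ $json["Folder Name"] }}' and 'YOUR_PARENT_FOLDER_ID' in parents and trashed = false
```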
by Humble Turtle
# Architecture Agent

## Overview
The Architect Agent listens to Slack messages and generates full data architecture blueprints in response. Powered by Claude 3.5 (Anthropic) for reasoning and design, and Tavily for real-time web search, this agent creates production-ready data pipeline scaffolds on demand, transforming natural language prompts into structured data engineering solutions.

## Capabilities
- Understands and interprets user requests from Slack
- Designs end-to-end data pipeline architectures using industry best practices
- Outputs include high-level architecture diagrams

## Required Connections
To operate correctly, the following integrations must be in place:
- Slack API token with permission to read messages and post responses
- Tavily API key for external search functionality
- Claude 3.5 API access via Anthropic

Detailed configuration instructions are provided in the workflow.

**Setup time:** <15 minutes

## Example input
"Create a data pipeline orchestrated by Airflow, running on a Docker image. It should connect to a MySQL database, load the data into a PostgreSQL DB (incremental load) and then transform the data into business-oriented tables, also in the PostgreSQL database. Create an example setup with raw sales data."

## Customising this workflow
Try saving outputs to Google Drive to store all your architecture blueprints.
by Extruct AI
**Who’s it for:** Investors, analysts, and startup enthusiasts who need a complete overview of startups, including industry, product, funding, and leadership information.

**How it works / What it does:** Enter a startup’s name into the form, and the workflow will automatically collect and organize details such as the company’s industry, product, investors, and key decision-makers. All this information is neatly updated in your Google Sheet, making it easy to track and compare startups.

**How to set up:**
1. Sign up for Extruct at www.extruct.ai/.
2. Open the Extruct table template, copy the table ID from the URL, and save it.
3. Copy the Google Sheets template to your own Drive.
4. Paste the table ID into the variables node in your n8n flow.
5. Set up Bearer authentication in each HTTP Request node using your Extruct API token.
6. In the Google Sheets node, paste your template link and connect your Google account.
7. Run the flow once to reveal the mapping fields, then match each field to the correct column.
8. Activate the flow and add startups via the form.

**Requirements:**
- Extruct account and API token
- Extruct table template
- Google account with Google Sheets

**How to customize the workflow:** Add new columns in both the Extruct table and your Google Sheet, then map them in the Google Sheets node to track additional startup data.
by Jimleuk
This n8n template shows you how to connect Github's Free Models to your existing n8n AI workflows. Whilst it is possible to use HTTP nodes to access Github Models, the aim of this template is to use them with the existing n8n LLM nodes, saving the trouble of refactoring!

Please note, Github states their model APIs are not intended for production usage! If you need higher rate limits, you'll need to use a paid service.

## How it works
- The approach builds a custom OpenAI-compatible API around the Github Models API, all done in n8n!
- First, we attach an OpenAI subnode to our LLM node and configure a new OpenAI credential. Within this new OpenAI credential, we change the "Base URL" to point at an n8n webhook we've prepared as part of this template.
- Next, we create 2 webhooks which the LLM node will now attempt to connect with: "models" and "chat completion".
- The "models" webhook simply calls the Github Models "list all models" endpoint and remaps the response to be compatible with our LLM node (see the sketch below).
- The "chat completion" webhook does a similar task with Github's chat completion endpoint.

## How to use
Once connected, just open chat and ask away! Any LLM or AI agent node connected with this custom LLM subnode will send requests to the Github Models API, allowing you to try out a range of SOTA models for free.

## Requirements
- Github account and credentials for access to Models. If you've used the Github node previously, you can reuse this credential for this template.

## Customising this workflow
This template is just an example. Use the custom OpenAI credential in your other workflows to test Github models.

## References
- https://docs.github.com/en/github-models/prototyping-with-ai-models
- https://docs.github.com/en/github-models
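To illustrate the remapping the "models" webhook performs: whatever the Github endpoint returns must be reshaped into the standard OpenAI list-models format the LLM node expects, roughly like this (the model IDs are illustrative, and the Github-side response it is mapped from may use different field names):

```json
{
  "object": "list",
  "data": [
    { "id": "openai/gpt-4o", "object": "model", "owned_by": "openai" },
    { "id": "meta/llama-3.3-70b-instruct", "object": "model", "owned_by": "meta" }
  ]
}
```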
by Louis Chan
## How it works
Transform medical documents into structured data using Google Gemini AI with enterprise-grade accuracy.
- Classifies document types (receipts, prescriptions, lab reports, clinical notes)
- Extracts text with 95%+ accuracy using advanced OCR
- Structures data according to medical taxonomy standards
- Supports multiple languages (English, Chinese, auto-detect)
- Tracks processing costs and quality metrics automatically

## Set up steps

### Prerequisites
- Google Gemini API key (get one from Google AI Studio)

### Quick setup
1. Import this workflow template
2. Configure Google Gemini API credentials in n8n
3. Test with a sample medical document URL
4. Deploy your webhook endpoint

## Usage
Send a POST request to your webhook:

```json
{
  "image_url": "https://example.com/medical-receipt.jpg",
  "expected_type": "financial",
  "language_hint": "auto"
}
```

Get a structured response:

```json
{
  "success": true,
  "result": {
    "documentType": "financial",
    "metadata": {
      "providerName": "Dr. Smith Clinic",
      "createdDate": "2025-01-06",
      "currency": "USD"
    },
    "content": {
      "amount": 150.00,
      "services": [...]
    },
    "quality_metrics": {
      "overall_confidence": 0.95
    }
  }
}
```

## Use cases

### Healthcare Organizations
- **Medical billing automation** - Process receipts and invoices automatically
- **Insurance claim processing** - Extract data from claim documents
- **Clinical documentation** - Digitize patient records and notes
- **Data standardization** - Consistent structured output format

### System Integrators
- **EMR integration** - Connect with existing healthcare systems
- **Workflow automation** - Reduce manual data entry by 90%
- **Multi-language support** - Handle international medical documents
- **Quality assurance** - Built-in confidence scoring and validation

## Supported Document Types
- **Financial:** Medical receipts, bills, insurance claims, invoices
- **Clinical:** Medical charts, progress notes, consultation reports
- **Prescription:** Prescriptions, medication lists, pharmacy records
- **Administrative:** Referrals, authorizations, patient registration
- **Diagnostic:** Lab reports, test results, screening reports
- **Legal:** Medical certificates, documentation forms