by Greg Evseev
This workflow template provides a robust solution for efficiently sending multiple prompts to Anthropic's Claude models in a single batch request and retrieving the results. It leverages the Anthropic Batch API endpoint (`/v1/messages/batches`) for optimized processing and outputs each result as a separate item.

## Core Functionality & Example Usage Included

This template includes:

- **The Core Batch Processing Workflow:** Designed to be called by another n8n workflow.
- **An Example Usage Workflow:** A separate branch demonstrating how to prepare data and trigger the core workflow, including examples using simple strings and n8n's Langchain Chat Memory nodes.

## Who is this for?

This template is designed for:

- **Developers, data scientists, and researchers** who need to process large volumes of text prompts using Claude models via n8n.
- **Content creators** looking to generate multiple pieces of content (e.g., summaries, Q&As, creative text) based on different inputs simultaneously.
- **n8n users** who want to automate interactions with the Anthropic API beyond single requests, improve efficiency, and integrate batch processing into larger automation sequences.
- Anyone needing to perform bulk text generation or analysis tasks with Claude programmatically.

## What problem does this workflow solve?

Sending prompts to language models one by one is slow and inefficient, especially when dealing with hundreds or thousands of requests. This workflow addresses that by:

- **Batching:** Grouping multiple prompts into a single API call to Anthropic's dedicated batch endpoint (`/v1/messages/batches`).
- **Efficiency:** Significantly reducing the time required compared to sequential processing.
- **Scalability:** Handling large numbers of prompts (up to API limits) systematically.
- **Automation:** Providing a ready-to-use, callable n8n structure for batch interactions with Claude.
- **Structured Output:** Parsing the results and outputting each individual prompt's result as a separate n8n item.

Use cases:

- Bulk content generation (e.g., product descriptions, summaries).
- Large-scale question answering based on different contexts.
- Sentiment analysis or data extraction across multiple text snippets.
- Running the same prompt against many different inputs for research or testing.

## What the Core Workflow does

(Triggered by the 'When Executed by Another Workflow' node)

1. **Receive Input:** The workflow starts when called by another workflow (e.g., using the 'Execute Workflow' node). It expects input data containing:
   - `anthropic-version` (string, e.g., "2023-06-01")
   - `requests` (JSON array, where each object represents a single prompt request conforming to the Anthropic Batch API schema).
2. **Submit Batch Job:** Sends the formatted `requests` data via POST to the Anthropic API `/v1/messages/batches` endpoint to create a new batch job. Requires Anthropic credentials.
3. **Wait & Poll:** Enters a loop:
   - Checks if the `processing_status` of the batch job is `ended`.
   - If not `ended`, it waits for a set interval (10 seconds by default in the 'Batch Status Poll Interval' node).
   - It then checks the batch job status again via GET to `/v1/messages/batches/{batch_id}`. Requires Anthropic credentials.
   - This loop continues until the status is `ended`.
4. **Retrieve Results:** Once the batch job is complete, it fetches the results file by making a GET request to the `results_url` provided in the batch status response. Requires Anthropic credentials.
5. **Parse Results:** The results are typically returned in JSON Lines (.jsonl) format.
   The 'Parse response' Code node splits the response text by newlines and parses each line into a separate JSON object, storing them in an array field (e.g., `parsed`).
6. **Split Output:** The 'Split Out Parsed Results' node takes the array of parsed results and outputs each result object as an individual item from the workflow.

## Prerequisites

- An active n8n instance (Cloud or self-hosted).
- An Anthropic API account with access granted to Claude models and the Batch API.
- Your Anthropic API Key.
- Basic understanding of n8n concepts (nodes, workflows, credentials, expressions, 'Execute Workflow' node).
- Familiarity with JSON data structures for providing input prompts and understanding the output.
- Understanding of the Anthropic Batch API request/response structure.
- (For the Example Usage branch) Familiarity with n8n's Langchain nodes (`@n8n/n8n-nodes-langchain`) if you plan to adapt that part.

## Setup

1. **Import Template:** Add this template to your n8n instance.
2. **Configure Credentials:**
   - Navigate to the 'Credentials' section in your n8n instance.
   - Click 'Add Credential', search for 'Anthropic', and select the Anthropic API credential type.
   - Enter your Anthropic API Key and save the credential (e.g., name it "Anthropic account").
3. **Assign Credentials:** Open the workflow and locate the three HTTP Request nodes in the core workflow: Submit batch, Check batch status, and Get results. In each of these nodes, select the Anthropic credential you just configured from the 'Credential for Anthropic API' dropdown.
4. **Review Input Format:** Understand the required input structure for the When Executed by Another Workflow trigger node. The primary inputs are `anthropic-version` (string) and `requests` (array). Refer to the Sticky Notes in the template and the Anthropic Batch API documentation for the exact schema required within the `requests` array.
5. **Activate Workflow:** Save and activate the core workflow so it can be called by other workflows.

➡️ **Quick Start & Input/Output Examples:** Look for the Sticky Notes within the workflow canvas! They provide crucial information, including examples of the required input JSON structure and the expected output format.

## How to customize this workflow

- **Input Source:** The core workflow is designed to be called. You will build *another* workflow that prepares the `anthropic-version` and `requests` array and then uses the 'Execute Workflow' node to trigger this template. The included example branch shows how to prepare this data.
- **Model Selection & Parameters:** Model (`claude-3-opus-20240229`, etc.), `max_tokens`, `temperature`, and other parameters are defined *within* each object inside the `requests` array you pass to the workflow trigger. You configure these in the workflow calling this template.
- **Polling Interval:** Modify the 'Wait' node ('Batch Status Poll Interval') duration if you need faster or slower status checks (default is 10 seconds). Be mindful of potential rate limits.
- **Parsing Logic:** If Anthropic changes the result format or you have specific needs, modify the JavaScript code within the 'Parse response' Code node (see the sketch below).
- **Error Handling:** Enhance the workflow with more specific error handling for API failures (e.g., using 'Error Trigger' or checking HTTP status codes) or batch processing issues (`batch.status === 'failed'`).
- **Output Processing:** In the workflow that *calls* this template, add nodes after the 'Execute Workflow' node to process the individual result items returned (e.g., save to a database, spreadsheet, send notifications).
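For reference, the parsing step boils down to a few lines of JavaScript. This is a minimal sketch of the kind of code the 'Parse response' Code node runs, assuming the results file arrives as raw JSONL text in a field named `data` (adjust the field name to match your HTTP Request node's output):

```javascript
// Minimal sketch of a JSONL parsing Code node (the input field name is an assumption).
// Input: the raw .jsonl text returned from the batch results_url.
const raw = $input.first().json.data || '';

// Split on newlines, drop empty lines, and parse each line as JSON.
const parsed = raw
  .split('\n')
  .filter(line => line.trim() !== '')
  .map(line => JSON.parse(line));

// Return a single item holding the array; the 'Split Out' node then emits one item per result.
return [{ json: { parsed } }];
```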
## Example Usage Branch (Manual Trigger)

This template also contains a separate branch starting with the Run example Manual Trigger node.

- **Purpose:** This branch demonstrates how to construct the necessary `anthropic-version` and `requests` array payload.
- **Methods Shown:** It includes steps for:
  - Creating a request object from a simple query string.
  - Creating a request object using data from n8n's Langchain Chat Memory nodes (`@n8n/n8n-nodes-langchain`).
- **Execution:** It merges these examples, constructs the final payload, and then uses the Execute Workflow node to call the main batch processing logic described above. It finishes by filtering the results for demonstration.
- **Note:** This branch is for demonstration and testing. You would typically build your own data preparation logic in a separate workflow. The use of Langchain nodes is optional for the core batch functionality.

## Notes

- **API Limits:** According to the Anthropic API documentation, batches can contain up to 100,000 requests and be up to 256 MB in total size. Ensure your n8n instance has sufficient resources for large batches.
- **API Costs:** Using the Anthropic API, including the Batch API, incurs costs based on token usage. Monitor your usage via the Anthropic dashboard.
- **Completion Time:** Batch processing time depends on the number and complexity of prompts and current API load. The polling mechanism accounts for this variability.
- **Versioning:** Always include the `anthropic-version` header in your requests, as shown in the workflow and examples. Refer to the Anthropic API versioning documentation.
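To illustrate the shape of the trigger input, here is a hedged example of the payload a calling workflow might pass to the 'When Executed by Another Workflow' trigger. The `custom_id`/`params` structure follows the Anthropic Batch API schema, but treat the specific values as placeholders and check the Sticky Notes and official documentation for the authoritative field names:

```json
{
  "anthropic-version": "2023-06-01",
  "requests": [
    {
      "custom_id": "prompt-001",
      "params": {
        "model": "claude-3-opus-20240229",
        "max_tokens": 1024,
        "messages": [
          { "role": "user", "content": "Summarize this product description in two sentences." }
        ]
      }
    },
    {
      "custom_id": "prompt-002",
      "params": {
        "model": "claude-3-opus-20240229",
        "max_tokens": 1024,
        "temperature": 0.7,
        "messages": [
          { "role": "user", "content": "Write three FAQ questions and answers about batch processing." }
        ]
      }
    }
  ]
}
```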
by PollupAI
## Who is this for?

This workflow is ideal for individuals focused on nutrition tracking, meal planning, or diet optimization—whether you're a health-conscious individual, fitness coach, or developer working on a healthtech app. It also fits well for anyone who wants to capture their meal data via voice or text, without manually entering everything into a spreadsheet.

## What problem is this workflow solving?

Manually logging meals and breaking down their nutritional content is time-consuming and often skipped. This workflow automates that process using Telegram for input, OpenAI for natural language understanding, and Google Sheets for structured tracking. It enables users to record meals by typing or sending voice messages, which are transcribed, analyzed for nutrients, and automatically stored for tracking and review.

## What this workflow does

This n8n automation lets users send either a text or voice message to a Telegram bot describing their meal. The workflow then:

1. Receives the Telegram message.
2. Checks if it's a voice message:
   - If yes: downloads the audio file and transcribes it using OpenAI.
   - If no: uses the text input directly.
3. Sends the meal description to OpenAI to extract a structured list of ingredients and nutritional details.
4. Parses and stores the results in Google Sheets.
5. Responds via Telegram with a personalized confirmation message.

A testing interface also allows you to simulate prompts and view structured outputs for development or debugging.

## Setup

1. Create a Telegram bot via BotFather and note the API token.
2. Create an empty Google Sheet and store the sheet ID in the environment.
3. Set up your OpenAI credentials in the n8n credential manager.
4. Customize the "List of Ingredients and Nutrients" node with your prompt if needed.
5. (Optional) Use the "Testing" section to simulate messages and refine outputs before going live.

## How to customize this workflow to your needs

- Enhance prompts in the OpenAI node to improve the structure and accuracy of responses.
- Add new fields in the Google Sheet and corresponding logic in the parser if you want more detail.
- Adjust the Telegram response to provide motivational feedback, dietary tips, or summaries.
- Upgrade to the "Pro" version mentioned in the contact section for USDA database integration and complete nutrient breakdowns.

This is a lightweight, AI-powered meal logging automation that transforms voice or text into actionable nutrition data—perfect for making healthy eating easier and more data-driven.

See my other workflows here
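As an illustration of the structured extraction step described above, the "List of Ingredients and Nutrients" node might return something like the sketch below. The exact keys depend on your prompt and on the columns in your Google Sheet, so treat these field names and figures as placeholders:

```json
{
  "meal": "Grilled chicken salad with olive oil",
  "ingredients": [
    { "name": "chicken breast", "quantity_g": 150, "calories": 248, "protein_g": 46, "carbs_g": 0, "fat_g": 5 },
    { "name": "mixed greens",   "quantity_g": 80,  "calories": 20,  "protein_g": 2,  "carbs_g": 3, "fat_g": 0 },
    { "name": "olive oil",      "quantity_g": 10,  "calories": 88,  "protein_g": 0,  "carbs_g": 0, "fat_g": 10 }
  ],
  "totals": { "calories": 356, "protein_g": 48, "carbs_g": 3, "fat_g": 15 }
}
```

Each ingredient maps to one appended row in the sheet, and the totals feed the Telegram confirmation message.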
by Gavin
This template gives you the ability to monitor all uplinks in your Meraki Dashboard and alert your team through whatever method you prefer. This example sends a Teams notification to our Dispatch channel.

Setup will probably take around 30 minutes to 1 hour with the provided template. The most time-intensive steps are obtaining a Meraki API key, which I cover in the tutorial, and setting up the Teams node, for which n8n has good documentation.

Tutorial & explanation: https://www.youtube.com/watch?v=JvaN0dNwRNU
by phil
# AI-Powered SEO Keyword Research Workflow with n8n

> Automates comprehensive keyword research for content creation

## Table of Contents

- Introduction
- Workflow Architecture
- NocoDB Integration
- Data Flow
- Core Components
- Setup Requirements
- Possible Improvements

## Introduction

This n8n workflow automates SEO keyword research using AI and data-driven analytics. It combines OpenAI's language models with DataForSEO's analytics to generate comprehensive keyword strategies for content creation. The workflow is triggered by a webhook from NocoDB, processes the input data through multiple stages, and returns a detailed content brief with optimized keywords.

## Workflow Architecture

The workflow follows a structured process:

1. **Input Collection:** Receives data via webhook from NocoDB
2. **Topic Expansion:** Generates keywords using AI
3. **Keyword Metrics Analysis:** Gathers search volume, CPC, and difficulty metrics
4. **Competitor Analysis:** Analyzes competitor content for ranking keywords
5. **Final Strategy Creation:** Combines all data to generate a comprehensive keyword strategy
6. **Output Storage:** Saves results back to NocoDB and sends notifications

## NocoDB Integration

### Database Structure

The workflow integrates with two tables in NocoDB.

#### Input Table Schema

This table collects the input parameters for the keyword research:

| Field Name | Type | Description |
| --------------- | ------------- | --------------------------------------------------------------------------- |
| ID | Auto Number | Unique identifier |
| Primary Topic | Text | The main keyword/topic to research |
| Competitor URLs | Text | Comma-separated list of competitor websites |
| Target Audience | Single Select | Description of the target audience (Solopreneurs, Marketing Managers, etc.) |
| Content Type | Single Select | Type of content (Blog, Product page, etc.) |
| Location | Single Select | Target geographic location |
| Language | Single Select | Target language for keywords |
| Status | Single Select | Workflow status (Pending, Started, Done) |
| Start Research | Checkbox | Activates the workflow when set to true |

#### Output Table Schema

This table stores the generated keyword strategy:

| Field Name | Type | Description |
| ------------------ | ----------- | ------------------------------------------------ |
| ID | Auto Number | Unique identifier |
| primary_topic_used | Text | The topic that was researched |
| report_content | Long Text | The complete keyword strategy in Markdown format |
| generatedAt | Datetime | Automatically generated by NocoDB |

### Webhook Settings

NocoDB Webhook Settings

## Data Flow

The workflow handles data in the following sequence:

1. **Webhook Trigger:** Receives input from NocoDB when a new keyword research request is created
2. **Field Extraction:** Extracts primary topic, competitor URLs, audience, and other parameters
3. **AI Topic Expansion:** Uses OpenAI to generate related keywords, categorized by type and intent
4. **Keyword Analysis:** Sends primary keywords to DataForSEO to get search volume, CPC, and difficulty
5. **Competitor Research:** Analyzes competitor pages to identify their keyword rankings
6. **Strategy Generation:** Combines all data to create a comprehensive keyword strategy
7. **Storage & Notification:** Saves the strategy to NocoDB and sends a notification to Slack

## Core Components

### 1. Topic Expansion

This component uses OpenAI and a structured output parser to generate:

- 20 primary keywords
- 30 long-tail keywords with search intent
- 15 question-based keywords
- 10 related topics
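To make the Topic Expansion output concrete, the structured output parser could be built around a schema like the sketch below. The key names here are assumptions for illustration and should match whatever your parser node actually defines:

```json
{
  "primary_keywords": ["ai automation", "workflow automation software", "business automation tools"],
  "long_tail_keywords": [
    { "keyword": "ai automation for small business owners", "intent": "commercial" },
    { "keyword": "how to automate invoicing with ai", "intent": "informational" }
  ],
  "question_keywords": ["what is ai automation?", "how much does ai automation cost?"],
  "related_topics": ["no-code integration", "business process automation"]
}
```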
### 2. DataForSEO Integration

Two API endpoints are used:

- **Search Volume & CPC:** Gets monthly search volume and cost-per-click data
- **Keyword Difficulty:** Evaluates how difficult it would be to rank for each keyword

### 3. Competitor Analysis

This component:

- Analyzes competitor URLs to identify which keywords they rank for
- Identifies content gaps or opportunities
- Determines the search intent their content targets

### 4. Final Keyword Strategy

The AI-generated strategy includes:

- Top 10 primary keywords with metrics
- 15 long-tail opportunities with low competition
- 5 question-based keywords to address in content
- Content structure recommendations
- 3 potential content titles optimized for SEO

## Setup Requirements

To use this workflow, you'll need:

- **n8n Instance:** Either cloud or self-hosted
- **NocoDB Account:** For data input and storage
- **API Keys:**
  - OpenAI API key
  - DataForSEO API credentials
  - Slack API token (for notifications)
- **Database Setup:** Create the required tables in NocoDB as described above

## Possible Improvements

The workflow could be enhanced with the following improvements:

- **Enhanced Keyword Strategy**
  - Add topic clustering to group related keywords
  - Enhance the final output with more specific content structure suggestions
  - Include word count recommendations for each content section
- **Additional Data Sources**
  - Integrate Google Search Console data for existing content optimization
  - Add Google Trends data to identify rising topics
  - Include sentiment analysis for different keyword groups
- **Improved Competitor Analysis**
  - Analyze content length and structure from top-ranking pages
  - Identify common backlink sources for competitor content
  - Extract content headings to better understand content organization
- **Automation Enhancements**
  - Add scheduling capabilities to run updates on existing content
  - Implement content performance tracking over time
  - Create alert thresholds for changes in keyword difficulty or search volume

## Example Output

Here is an example output the workflow generated based on the following inputs.

Inputs:

- Primary Topic: AI Automation
- Competitor URLs: n8n.io, zapier.com, make.com
- Target Audience: Small Business Owners
- Content Type: Landing Page
- Location: United States
- Language: English

Output: Final Keyword Strategy

The workflow provides a powerful automation for content marketers and SEO specialists to develop data-driven keyword strategies with minimal manual effort.

> Original Workflow: AI-Powered SEO Keyword Research Automation - The vibe Marketer
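For orientation, the NocoDB webhook payload that would kick off the example above might look roughly like this. The row field names mirror the Input Table schema, while the wrapper structure is an assumption that depends on your NocoDB version and webhook configuration:

```json
{
  "type": "records.after.update",
  "data": {
    "rows": [
      {
        "ID": 1,
        "Primary Topic": "AI Automation",
        "Competitor URLs": "n8n.io, zapier.com, make.com",
        "Target Audience": "Small Business Owners",
        "Content Type": "Landing Page",
        "Location": "United States",
        "Language": "English",
        "Status": "Pending",
        "Start Research": true
      }
    ]
  }
}
```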
by Billy Christi
## Who is this for?

This workflow is perfect for:

- **HR professionals** seeking to automate employee and department management
- **Startups and SMBs** that want an AI-powered HR assistant on Telegram
- **Internal operations teams** that want to simplify onboarding and employee data tracking

## What problem is this workflow solving?

Managing employee databases manually is error-prone and inefficient—especially for growing teams. This workflow solves that by:

- Enabling natural language-based HR operations directly through Telegram
- Automating the creation, retrieval, and deletion of employee records in Airtable
- Dynamically managing related data such as departments and job titles
- Handling data consistency and linking across relational tables automatically
- Providing a conversational interface backed by OpenAI for smart decision-making

## What this workflow does

Using Telegram as the interface and Airtable as the backend database, this intelligent HR workflow allows users to:

- Chat in natural language (e.g. "Show me all employees" or "Create employee: Sarah, Marketing…")
- Interpret and route requests via an AI Agent that acts as the orchestrator
- Query employee, department, and job title data from Airtable
- Create or update records as needed:
  - Add new departments and job titles automatically if they don't exist
  - Create new employees and link them to the correct department and job title
- Delete employees based on ID
- Respond directly in Telegram, providing user-friendly feedback

## Setup

1. View & copy the Airtable base here: 👉 Employee Database Management – Airtable Base Template
2. **Telegram Bot:** Set up a Telegram bot and connect it to the Telegram Trigger node
3. **Airtable:** Prepare three Airtable tables:
   - Employees, with links to Departments and Job Titles
   - Departments, with Name & Description
   - Job Titles, with Title & Description
4. Connect your Airtable API key and base/table IDs in the appropriate Airtable nodes
5. Add your OpenAI API key to the AI Agent nodes
6. Deploy both workflows: the main chatbot workflow and the employee creation sub-workflow (see the example record at the end of this section)
7. Test with sample messages like:
   - "Create employee: John Doe, john@company.com, Engineering, Software Engineer"
   - "Remove employee ID rec123xyz"

## How to customize this workflow to your needs

- **Switch databases:** Replace Airtable with Notion, PostgreSQL, or Google Sheets if desired
- **Enhance security:** Add authentication and validation before allowing deletion
- **Add approval flows:** Integrate Telegram button-based approvals for sensitive actions
- **Multi-language support:** Expand system prompts to support multiple languages
- **Add logging:** Store every user action in a log table for auditability
- **Expand capabilities:** Integrate payroll, time tracking, or Slack notifications

## Extra Tips

- This is a two-workflow setup. Make sure the sub-workflow is deployed and accessible from the main agent.
- Use Simple Memory per chat ID to preserve context across user queries.
- You can expand the orchestration logic by adding more tools to the main agent—such as "Get active employees only" or "List employees by job title."
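For orientation, when the sub-workflow creates an employee it ultimately writes an Airtable record along these lines. The field names are illustrative and should match your copy of the base; the one Airtable-specific detail is that linked fields (Department, Job Title) take arrays of record IDs:

```json
{
  "fields": {
    "Name": "John Doe",
    "Email": "john@company.com",
    "Department": ["recDepEngineering01"],
    "Job Title": ["recJobSoftwareEng01"]
  }
}
```

The agent first looks up (or creates) the Department and Job Title records, then passes their record IDs into the employee creation step.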
by Incrementors
# Google Play Review Intelligence with Bright Data & Telegram Alerts

## Overview

This n8n workflow automates the process of scraping Google Play Store reviews, analyzing app performance, and sending alerts for low-rated applications. It integrates with Bright Data for web scraping, Google Sheets for data storage, and Telegram for notifications.

## Workflow Components

### 1. ✅ Trigger Input Form

- **Type:** Form Trigger
- **Purpose:** Initiates the workflow with user input
- **Input Fields:**
  - URL (Google Play Store app URL)
  - Number of reviews to fetch
- **Function:** Captures user requirements to start the scraping process

### 2. 🚀 Start Scraping Request

- **Type:** HTTP Request (POST)
- **Purpose:** Sends the scraping request to the Bright Data API (see the sketch at the end of this section)
- **Endpoint:** `https://api.brightdata.com/datasets/v3/trigger`
- **Parameters:**
  - Dataset ID: `gd_m6zagkt024uwvvwuyu`
  - Include errors: true
  - Limit multiple results: 5
- **Custom Output Fields:** url, review_id, reviewer_name, review_date, review_rating, review, app_url, app_title, app_developer, app_images, app_rating, app_number_of_reviews, app_what_new, app_content_rating, app_country, num_of_reviews

### 3. 🔄 Check Scrape Status

- **Type:** HTTP Request (GET)
- **Purpose:** Monitors the progress of the scraping job
- **Endpoint:** `https://api.brightdata.com/datasets/v3/progress/{snapshot_id}`
- **Function:** Checks if the dataset scraping is complete

### 4. ⏱️ Wait for Response 45 sec

- **Type:** Wait Node
- **Purpose:** Implements the polling mechanism
- **Duration:** 45 seconds
- **Function:** Pauses the workflow before checking status again

### 5. 🧩 Verify Completion

- **Type:** IF Condition
- **Purpose:** Evaluates scraping completion status
- **Condition:** `status === "ready"`
- **Logic:**
  - True: Proceeds to fetch data
  - False: Loops back to the status check

### 6. 📥 Fetch Scraped Data

- **Type:** HTTP Request (GET)
- **Purpose:** Retrieves the final scraped data
- **Endpoint:** `https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}`
- **Format:** JSON
- **Function:** Downloads completed review and app data

### 7. 📊 Save to Google Sheet

- **Type:** Google Sheets Node
- **Purpose:** Stores scraped data for analysis
- **Operation:** Append rows
- **Target:** Specified Google Sheet document
- **Data Mapping:** URL, Review ID, Reviewer Name, Review Date, Review Rating, Review Text, App Rating, App Number of Reviews, App What's New, App Country

### 8. ⚠️ Check Low Ratings

- **Type:** IF Condition
- **Purpose:** Identifies poor-performing apps
- **Condition:** `review_rating < 4`
- **Logic:**
  - True: Triggers alert notification
  - False: No action taken
### 9. 📣 Send Alert to Telegram

- **Type:** Telegram Node
- **Purpose:** Sends performance alerts
- **Message Format:**

      ⚠️ Low App Performance Alert
      📱 App: {app_title}
      🧑💻 Developer: {app_developer}
      ⭐ Rating: {app_rating}
      📝 Reviews: {app_number_of_reviews}
      🔗 View on Play Store

## Workflow Flow

    Input Form → Start Scraping → Check Status → Wait 45s → Verify Completion
                                       ↑                           ↓
                                       └────────── Loop ───────────┘ (until ready)
                                                                    ↓
                                  Fetch Data → Save to Sheet & Check Ratings
                                                                    ↓
                                                        Send Telegram Alert

## Configuration Requirements

### API Keys & Credentials

- **Bright Data API Key:** Required for web scraping
- **Google Sheets OAuth2:** For data storage access
- **Telegram Bot Token:** For alert notifications

### Setup Parameters

- **Google Sheet ID:** Target spreadsheet identifier
- **Telegram Chat ID:** Destination for alerts
- **N8N Instance ID:** Workflow instance identifier

## Key Features

### Data Collection

- Comprehensive app metadata extraction
- Review content and rating analysis
- Developer and country information
- App store performance metrics

### Quality Monitoring

- Automated low-rating detection
- Real-time performance alerts
- Continuous data archiving

### Integration Capabilities

- Bright Data web scraping service
- Google Sheets data persistence
- Telegram instant notifications
- Polling-based status monitoring

## Use Cases

### App Performance Monitoring

- Track rating trends over time
- Identify user sentiment patterns
- Monitor competitor performance

### Quality Assurance

- Early warning for rating drops
- Customer feedback analysis
- Market reputation management

### Business Intelligence

- Review sentiment analysis
- Performance benchmarking
- Strategic decision support

## Technical Notes

- **Polling Interval:** 45-second status checks
- **Rating Threshold:** Alerts triggered for ratings < 4
- **Data Format:** JSON with structured field mapping
- **Error Handling:** Includes error tracking in dataset requests
- **Result Limiting:** Maximum 5 multiple results per request

For any questions or support, please contact info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
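As a rough illustration of component 2, the 'Start Scraping Request' call could be reproduced in JavaScript as below. The query parameter names, the input body fields, and the environment variable are assumptions based on Bright Data's dataset trigger API and this template's configuration, so verify them against your own Bright Data account before relying on them:

```javascript
// Illustrative sketch only: trigger a Bright Data dataset scrape for one Play Store URL.
// BRIGHT_DATA_API_KEY and the input field names are assumptions.
const params = new URLSearchParams({
  dataset_id: 'gd_m6zagkt024uwvvwuyu',
  include_errors: 'true',
  limit_multiple_results: '5',
});

const response = await fetch(`https://api.brightdata.com/datasets/v3/trigger?${params}`, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.BRIGHT_DATA_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify([
    { url: 'https://play.google.com/store/apps/details?id=com.example.app', num_of_reviews: 100 },
  ]),
});

const { snapshot_id } = await response.json();
// The snapshot_id is then polled via /datasets/v3/progress/{snapshot_id} until status === "ready",
// after which /datasets/v3/snapshot/{snapshot_id} returns the scraped reviews.
console.log('Snapshot started:', snapshot_id);
```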
by victor de coster
# Smartlead to HubSpot Performance Analytics

*A streamlined workflow to analyze your Smartlead performance metrics by tracking lifecycle stages in HubSpot and generating automated reports.*

## Who is this for?

(Outbound) automation agencies and sales and marketing teams using Smartlead for outreach campaigns who want to track their performance metrics and lead progression in HubSpot.

## What problem does this workflow solve?

Manual tracking of lead performance across Smartlead and HubSpot is time-consuming and error-prone. This workflow automates performance reporting by connecting your Smartlead data with HubSpot lifecycle stages, providing clear insights into your outreach campaign effectiveness.

## What this workflow does

- Automatically pulls performance data from your Smartlead campaigns
- Cross-references contact status with HubSpot lifecycle stages
- Generates comprehensive performance reports in Google Sheets
- Provides customizable reporting schedules to match your team's needs

## Setup Requirements

1. **PostgreSQL Database**
   - Set up your PostgreSQL instance (includes $300 free GCP credits)
   - Follow our step-by-step setup guide: find a step-by-step guide here
2. **Google Account Integration**
   - Connect your Google Account to n8n; find the guide here
3. **Smartlead Configuration**
   - Connect your Smartlead instance: a detailed connection guide is included in the workflow

## How to customize this workflow

- Configure the Trigger node to adjust report frequency
- Modify the Google Sheets template to match your specific KPIs
- Customize the HubSpot lifecycle stage mapping in the Function node (see the sketch below)
- Adjust the PostgreSQL queries to track additional metrics

Need assistance or have suggestions? Let me know here.
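A hedged sketch of what the lifecycle-stage mapping inside the Function node might look like. The Smartlead status values and the HubSpot stage names used here are assumptions for illustration, so align them with your own pipeline definitions:

```javascript
// Illustrative mapping from a Smartlead lead status to a HubSpot lifecycle stage.
// Both the status values and the stage names below are assumptions; adjust to your setup.
const stageMap = {
  replied: 'lead',
  interested: 'marketingqualifiedlead',
  meeting_booked: 'salesqualifiedlead',
  won: 'customer',
};

return items.map(item => {
  const status = (item.json.smartlead_status || '').toLowerCase();
  return {
    json: {
      ...item.json,
      lifecycle_stage: stageMap[status] || 'subscriber',
    },
  };
});
```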
by Guido X Jansen
## Introduction

**Manual LinkedIn data collection is time-consuming, error-prone, and results in inconsistent data quality across CRM/database records.**

This workflow is great for organizations that struggle with:

- Incomplete contact records with only LinkedIn URLs but missing profile details
- Hours spent manually copying LinkedIn information into databases
- Inconsistent data formats due to copy-paste from LinkedIn (emojis, styled text, special characters)
- Outdated profile information that doesn't reflect current roles/companies
- No systematic way to enrich contacts at scale

## Primary Users

- Sales & marketing teams
- Event organizers & conference managers (for event materials)
- Recruitment & HR professionals
- CRM administrators

## Specific Problems Addressed

- **Data Completeness:** Automatically fills missing profile fields (headline, bio, skills, experience)
- **Data Quality:** Sanitizes problematic characters that break databases/exports
- **Time Efficiency:** Reduces hours of manual data entry to automated monthly updates
- **Error Handling:** Gracefully manages invalid/deleted LinkedIn profiles
- **Scalability:** Processes multiple profiles in batch without manual intervention
- **Standardization:** Ensures consistent data format across all records

## Cost

Each URL scraped by Apify costs $0.01 to get all the data above. Apify charges per scrape, regardless of how much data or how many fields you extract/use.

## Setup Instructions

### Prerequisites

- **n8n Instance:** Access to a running n8n instance (self-hosted or cloud)
- **NocoDB Account:** Database with a table containing LinkedIn URLs
- **Apify Account:** Free or paid account for LinkedIn scraping

### Required fields in NocoDB table

Input: a single LinkedIn URL (NocoDB field name: LinkedIn)

Output: first/last/full name, e-mail, bio, headline, profile pic URL, current role, country, skills, current employer, employer URL, experiences (all previous jobs), personal website, publications (articles)

NocoDB field names:

- linkedin_full_name
- linkedin_first_name
- linkedin_headline
- linkedin_email
- linkedin_bio
- linkedin_profile_pic
- linkedin_current_role
- linkedin_current_company
- linkedin_country
- linkedin_skills
- linkedin_company_website
- linkedin_experiences
- linkedin_personal_website
- linkedin_publications
- linkedin_scrape_error_reason
- linkedin_scrape_last_attempt
- linkedin_scrape_status
- linkedin_last_modified

Technically you also need an Id field, but that is always there so no need to add it :)

### n8n Setup

#### 1. Import the Workflow

- Copy the workflow JSON from the template
- In n8n, click "Add workflow" → "Import from JSON"
- Paste the workflow and click "Import"

#### 2. Configure NocoDB Connection

- Click on any NocoDB node in the workflow
- Add new credentials → "NocoDB Token account"
- Enter your NocoDB API token (found in NocoDB → User Settings → API Tokens)
- Update the projectId and table parameters in all NocoDB nodes

#### 3. Set Up Apify Integration

- Create an Apify account at apify.com
- Generate an API token (Settings → Integrations → API)
- In the workflow, update the Apify token in the "Get Scraper Results" node
- Configure HTTP Query Auth credentials with your token

#### 4. Map Your Database Fields

- Review the "Transform & Sanitize Data" node (a sketch of this step follows at the end of this section)
- Update field mappings to match your NocoDB table structure
- Ensure these fields exist in your table:
  - LinkedIn (URL field)
  - linkedin_headline, linkedin_full_name, linkedin_bio, etc.
  - linkedin_scrape_status, linkedin_last_modified

#### 5. Configure the Filter

- In the "Get Guests with LinkedIn" node, adjust the filter to match your requirements
- Default: `(LinkedIn,isnot,null)~and(linkedin_headline,is,null)`
#### 6. Test the Workflow

- Click "Execute Workflow" with the Manual Trigger
- Monitor execution for any errors
- Verify data is properly updated in NocoDB

#### 7. Activate Automated Schedule

- Configure the Schedule Trigger node (default: monthly)
- Toggle the workflow to "Active"
- Monitor executions in the n8n dashboard

## Customization Options

### 1. Data Source Modifications

- **Different Database:** Replace NocoDB nodes with Airtable, Google Sheets, or PostgreSQL
- **Multiple Tables:** Add parallel branches to process different contact tables
- **Custom Filters:** Modify the WHERE clause to target specific record subsets

### 2. Enrichment Fields

- **Add Fields:** Include additional LinkedIn data like education, certifications, or recommendations
- **Remove Fields:** Simplify by removing unnecessary fields (publications, skills)
- **Custom Transformations:** Add business logic for field calculations or formatting

### 3. Scheduling Options

- **Frequency:** Change from monthly to daily, weekly, or hourly
- **Time-based:** Set specific times for different timezones
- **Event-triggered:** Replace with a webhook trigger for on-demand processing

### 4. Error Handling Enhancement

- **Notifications:** Add email/Slack nodes to alert on failures
- **Retry Logic:** Implement wait and retry for temporary failures
- **Logging:** Add database logging for audit trails

### 5. Data Quality Rules

- **Validation:** Add IF nodes to validate data before updates
- **Duplicate Detection:** Check for existing records before creating new ones
- **Data Standardization:** Add custom sanitization rules for industry-specific needs

### 6. Integration Extensions

- **CRM Sync:** Add nodes to push data to Salesforce, HubSpot, or Pipedrive
- **AI Enhancement:** Use OpenAI to summarize bios or extract key skills
- **Image Processing:** Download and store profile pictures locally

### 7. Performance Optimization

- **Batch Size:** Adjust the number of profiles processed per run
- **Rate Limiting:** Add delays between API calls to avoid limits
- **Parallel Processing:** Split large datasets across multiple workflow executions

### 8. Compliance Additions

- **GDPR Compliance:** Add consent checking before processing
- **Data Retention:** Implement automatic cleanup of old records
- **Audit Logging:** Track who accessed what data and when

These customizations allow the workflow to adapt from simple contact enrichment to complex data pipeline scenarios across various industries and use cases.
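For reference, the character cleanup performed in the "Transform & Sanitize Data" node could look roughly like the sketch below. The input field names (`headline`, `summary`) are assumptions about the Apify scraper output and should be mapped to whatever your actor actually returns:

```javascript
// Illustrative sanitization step: strip emojis and styled Unicode before writing
// LinkedIn data to NocoDB. Input field names are assumptions.
function sanitize(text) {
  if (!text) return '';
  return text
    .normalize('NFKC')                                        // fold styled "fancy" Unicode letters to plain text
    .replace(/[\u{1F300}-\u{1FAFF}\u{2600}-\u{27BF}]/gu, '')  // drop common emoji and symbol ranges
    .replace(/\s+/g, ' ')                                     // collapse whitespace
    .trim();
}

return items.map(item => ({
  json: {
    linkedin_headline: sanitize(item.json.headline),
    linkedin_bio: sanitize(item.json.summary),
    linkedin_scrape_status: 'success',
    linkedin_last_modified: new Date().toISOString(),
  },
}));
```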
by Jimleuk
This n8n template lets you summarize individual team member activity on MS Teams for the past week and generates a report.

For remote teams, chat is a crucial communication tool to ensure work gets done, but with so many conversations happening at once and in multiple threads, ideas, information and decisions usually live in the moment, get lost just as quickly, and are altogether forgotten by the weekend! Using this template, that doesn't have to be the case. Have AI crawl through last week's activity, summarize all messages and replies, and generate a casual and snappy report to bring the team back into focus for the current week. A project manager's dream!

## How it works

1. A scheduled trigger is set to run every Monday at 6am to gather all team channel messages within the last week.
2. Messages are grouped by user (see the sketch at the end of this section).
3. AI analyses the raw messages and replies to pull out interesting observations and highlights. These are referred to as the individual reports.
4. All individual reports are then combined and summarized together into what becomes the team weekly report. This allows understanding of group and similar activities.
5. Finally, the team weekly report is posted back to the channel. The timing is important: it should be the first message of the week, ready for the team to glance over coffee.

## How to use

- This works best per project and where most of the comms happen on a single channel. Avoid combining channels; instead, duplicate this workflow for additional channels.
- You may need to filter for specific team members if you want specific team updates.
- Customise the report to suit your organisation, team or the channel. You may prefer to be more formal if clients or external stakeholders are also present.

## Requirements

- MS Teams as the chat platform
- OpenAI for the LLM

## Customising this workflow

- If the Teams channel is busy enough already, consider posting the final report to email.
- Pull in project metrics to include in your report. As extra context, it may be interesting to tie the messages to production performance.
- Use an AI Agent to query a knowledgebase or tickets relevant to the messages. This may be useful for attaching links or references to add context.
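A minimal sketch of the grouping step (step 2 above) as it might appear in a Code node. The `from.user.displayName`, `createdDateTime`, and `body.content` paths follow the Microsoft Graph channel-message shape, but verify them against the actual output of your MS Teams node:

```javascript
// Illustrative: group channel messages by author so the AI step can
// summarize one team member at a time. Field paths are assumptions.
const byUser = {};

for (const item of $input.all()) {
  const msg = item.json;
  const user = msg.from?.user?.displayName || 'Unknown';
  if (!byUser[user]) byUser[user] = [];
  byUser[user].push({
    when: msg.createdDateTime,
    text: msg.body?.content,
  });
}

// One item per team member, ready to feed into the per-user summary prompt.
return Object.entries(byUser).map(([user, messages]) => ({
  json: { user, messageCount: messages.length, messages },
}));
```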
by IvanCore
Disclaimer: This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Important distinction:** This template manages Telegram Copilot's UserBots (client accounts), not Telegram Bots.

## UserBot vs. Bot: Key Differences

🔹 **Telegram Copilot's UserBots**

- Authenticate as real user accounts (phone number required)
- Can join groups/channels without the "Bot" label
- Subject to Telegram's client API limits
- Require manual login (MFA supported)

🔹 **Telegram Bots**

- Use @BotFather-created tokens
- Limited to bot API functionality
- Can't initiate chats with users who haven't contacted them first
- No phone number required

This template solves the unique challenges of UserBot management through:

## Core Functionality

🛡️ **Session Reliability**

- Automatic crash recovery (5-step restart sequence)
- Persistent session monitoring (checks every 6h)
- Database cleanup via /clear command

📱 **Multi-Device Support**

- Manages sessions independently from mobile clients
- Tracks active devices via /stat command
- Isolates session data per credential

🔔 **Smart Notifications**

- Real-time alerts to the admin chat
- Detailed error context with authState snapshots
- Success confirmations with session metadata

## Setup Guide

### Prerequisites

- Self-hosted n8n instance (community node required)
- Valid Telegram account for the UserBot
- Telegram bot token for notifications
- TelePilot credentials with api_id/api_hash

### Configuration Steps

1. **Credential Setup**
   - Add TelePilot credentials in n8n
   - Configure the Telegram bot token in the notification nodes
   - Set the admin chat ID for alerts
2. **Monitoring Customization**
   - Adjust the check frequency in the Schedule Trigger
   - Modify alert thresholds in the Filter nodes
   - Configure retry logic in the recovery sequence
3. **Session Management**
   - Test the /start command flow
   - Verify the /stat output format
   - Confirm notification delivery

## Workflow Customization

Advanced options:

- Add secondary notification channels (Email, Slack)
- Implement an escalating alert system
- Integrate with monitoring dashboards
- Customize recovery attempt limits

## Compliance Notes

- UserBots must comply with Telegram's Terms of Service.
- Not intended for bulk messaging or spam; recommended for legitimate automation use cases only.

**Why This Matters:** UserBots enable automation scenarios impossible with regular bots (e.g., group management as a normal user, reacting as a human account). This workflow keeps them reliably online 24/7.
by Nikan Noorafkan
# 🤖 AI-Powered Content Marketing Research Tool

> Transform your content strategy with automated competitor intelligence

## ⚡ What It Does

Never miss a competitor move again. This workflow automatically:

- 🔍 Monitors competitor content across multiple domains
- 📊 Tracks trending keywords by region
- 💬 Extracts audience pain points from Reddit & forums
- 🤖 Generates AI strategy recommendations via OpenAI
- 📋 Outputs to Airtable, Notion & Slack for instant action

## 🎯 Perfect For

- **Growth marketers** tracking competitor strategies
- **Content teams** discovering trending topics
- **SEO specialists** finding keyword opportunities
- **Marketing agencies** managing multiple clients

## 🛠️ Technical Setup

### Required APIs & Credentials

| Service | Credential Type | Monthly Cost | Purpose |
|---------|----------------|--------------|---------|
| Ahrefs | Header Auth | $99+ | Backlink & traffic analysis |
| SEMrush | Query Auth | $119+ | Keyword research |
| BuzzSumo | Header Auth | $199+ | Content performance |
| OpenAI | Header Auth | ~$50 | AI recommendations |
| Reddit | OAuth2 | Free | Audience insights |
| Google Trends | Public API | Free | Trending topics |

### 📊 Database Schema

Airtable Base: content-research-base

Table 1: competitor-intelligence

- timestamp (Date)
- domain (Single line text)
- traffic_estimate (Number)
- backlinks (Number)
- content_gaps (Long text)
- publishing_frequency (Single line text)

Table 2: keyword-opportunities

- timestamp (Date)
- trending_keywords (Long text)
- top_questions (Long text)
- content_opportunities (Long text)

## 🚀 Quick Start Guide

### Step 1: Import & Configure

1. Import the workflow JSON
2. Update competitor domains in 📋 Configuration Settings
3. Map all API credentials

### Step 2: Setup Storage

- **Airtable:** Create a base with the exact schema above
- **Notion:** Create a database with the properties listed
- **Slack:** Create a #content-research-alerts channel

### Step 3: Test & Deploy

The first run populates:

- ✅ Airtable tables with competitor data
- ✅ Notion database with AI insights
- ✅ Slack channel with formatted alerts

## 💡 Example Output

### AI Recommendations Format

    {
      "action_items": [
        {
          "topic": "Copy trading explainer",
          "format": "Video",
          "region": "UK",
          "priority": "High"
        }
      ],
      "publishing_calendar": [
        { "week": "W34", "posts": 3 }
      ],
      "alerts": [
        "eToro gained 8 .edu backlinks this week"
      ]
    }

### Slack Alert Preview

🚨 Content Research Alert

📊 Top Findings:

- Sustainable packaging solutions
- Circular economy trends
- Eco-friendly manufacturing

📈 Trending Keywords:

- forex trading basics (+45%)
- social trading platforms (+32%)
- copy trading strategies (+28%)

💡 AI Recommendations: Focus on educational content in the UK market...
## 🔧 Advanced Features

### ✅ Data Quality Validation

- **Automatic retry** for failed API calls
- **Data validation** before storage
- **Error notifications** via Slack

### ⚙️ Scalability Options

- **Multi-region support** (US, UK, DE, FR, JP)
- **Batch processing** for large competitor lists
- **Rate limiting** to respect API quotas

### 🎨 Customization Ready

- **Modular design** - disable unused APIs
- **Industry templates** - forex, ecommerce, SaaS
- **Custom scoring** algorithms

## 📈 ROI & Performance

### Cost Analysis

- **Setup time:** ~2 hours
- **Monthly API costs:** $400-500
- **Time saved:** 15+ hours/week
- **ROI:** 300%+ within the first month

### Success Metrics

- **Competitor insights:** 50+ data points daily
- **Keyword opportunities:** 100+ suggestions/week
- **Content ideas:** 20+ AI-generated topics
- **Trend alerts:** Real-time notifications

## 🛡️ Troubleshooting

### Common Issues & Solutions

| Symptom | Cause | Fix |
|-------------|-----------|---------|
| OpenAI timeout | Large data payload | Reduce batch size → split processing |
| Airtable 422 error | Field mismatch | Copy the schema exactly |
| Reddit 401 | OAuth expired | Re-authorize the application |

### Rate Limiting Best Practices

- **Ahrefs:** Max 1000 requests/day
- **SEMrush:** 3000 requests/day
- **OpenAI:** Monitor token usage

## 🌟 Why Choose This Template?

> "From manual research to automated intelligence in 15 minutes"

- ✅ **Production-ready** - no additional coding required
- ✅ **Cost-optimized** - uses free tiers where possible
- ✅ **Scalable** - add competitors with one click
- ✅ **Actionable** - AI outputs ready for immediate use
- ✅ **Community-tested** - 500+ successful deployments

Start your competitive intelligence today 🚀

Built with ❤️ for the n8n community
by Stéphane Heckel
# Copy n8n workflows to a slave n8n repository

Inspired by Alex Kim's workflow, this version adds the ability to keep multiple versions of the same workflow on the destination instance. Each copied workflow's name is prefixed with the date (YYYY_MM_DD_), enabling simple version tracking. Process details and workflow counts are recorded centrally in Notion.

## How it works

- Workflows from the source n8n instance are copied to the destination using the n8n API node.
- On the destination, each workflow name is prefixed with the current date (e.g., 2025_08_03_PDF Summarizer), so you can keep multiple daily versions.
- The workflow tracks and saves:
  - The date of execution.
  - The number of workflows processed.
- Both details are recorded in Notion.

Rolling retention policy example (see the sketch at the end of this section):

- **Day 1:** Workflows are saved with the 2025_08_03_ prefix.
- **Day 2:** A new set is saved with 2025_08_04_.
- **Day 3:** Day 1's set is deleted, and a new set is saved as 2025_08_05_.
- To keep more days, adjust the "Subtract From Date" node.

## How to use

1. Create a Notion database with one page and three fields:
   - sequence: should contain "prefix".
   - Value: today's date as YYYY_MM_DD_.
   - Comment: the number of saved workflows.
2. Configure the Notion node:
   - Enter your Notion credentials.
   - Link it to the created database/page.
3. Update the "Subtract From Date" node: set how many days' versions you want to keep (default: 2 days).
4. Set the limit to 1 in the "Limit" node for testing.
5. Input credentials for both source and destination n8n instances.

## Requirements

- **Notion** for tracking execution date and workflow count.
- **n8n API keys** for both source and destination instances. Ensure you have the necessary API permissions (read, create, delete workflows).
- **n8n version:** this workflow was tested on 1.103.2 (Ubuntu).

## Need Help?

Contact me on LinkedIn or ask in the Forum!
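To illustrate the naming and retention logic, a Code-node sketch like the one below could compute the prefix for today's copies and the prefix of the older set to delete, assuming the default 2-day retention configured in the "Subtract From Date" node:

```javascript
// Illustrative: build the YYYY_MM_DD_ prefix used to name copied workflows,
// plus the prefix of the older set to delete. The 2-day retention matches the
// default described above; adjust it alongside the "Subtract From Date" node.
const RETENTION_DAYS = 2;

function prefixFor(date) {
  const yyyy = date.getFullYear();
  const mm = String(date.getMonth() + 1).padStart(2, '0');
  const dd = String(date.getDate()).padStart(2, '0');
  return `${yyyy}_${mm}_${dd}_`;
}

const today = new Date();
const cutoff = new Date(today);
cutoff.setDate(cutoff.getDate() - RETENTION_DAYS);

return [{
  json: {
    prefix: prefixFor(today),          // e.g. "2025_08_05_" for today's copies
    deletePrefix: prefixFor(cutoff),   // e.g. "2025_08_03_": workflows with this prefix get removed
  },
}];
```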