by Edisson Garcia
🚀 Message-Batching Buffer Workflow (n8n)

This workflow implements a lightweight message-batching buffer using Redis for temporary storage and a JavaScript consolidation function to merge messages. It collects incoming user messages per session, waits for a configurable inactivity window or batch size threshold, consolidates buffered messages via custom code, then clears the buffer and returns the combined response—all without external LLM calls.

🔑 Key Features

- **Redis-backed buffer** queues incoming messages per context_id.
- **Centralized Config Parameters** node to adjust thresholds and timeouts in one place.
- **Dynamic wait time** based on message length (configurable minWords, waitLong, waitShort).
- **Batch trigger** fires on inactivity timeout or when buffer_count ≥ batchThreshold.
- **Zero-cost consolidation** via built-in JavaScript Function (consolidate buffer)—no GPT-4 or external API required.

⚙️ Setup Instructions

1. **Extract Session & Message**
   - Trigger: When chat message received (webhook) or When clicking ‘Test workflow’ (manual).
   - Map inputs: set variables context_id and message into a Set node named Mock input data (for testing) or a proper mapping node in production.

2. **Config Parameters**
   Add a Set node Config Parameters with:

   ```
   minWords: 3        # Word threshold
   waitLong: 10       # Timeout (s) for long messages
   waitShort: 20      # Timeout (s) for short messages
   batchThreshold: 3  # Messages to trigger batch early
   ```

   All downstream nodes reference these JSON values dynamically.

3. **Determine Wait Time**
   Node: get wait seconds (Code). JS code:

   ```javascript
   const msg = $json.message || '';
   const wordCount = msg.split(/\s+/).filter(w => w).length;
   const { minWords, waitLong, waitShort } = items[0].json;
   const waitSeconds = wordCount < minWords ? waitShort : waitLong;
   return [{ json: { context_id: $json.context_id, message: msg, waitSeconds } }];
   ```

4. **Buffer Message in Redis**
   - Buffer messages: LPUSH buffer_in:{{$json.context_id}} with payload {text, timestamp}.
   - Set buffer_count increment: INCR buffer_count:{{$json.context_id}} with TTL {{$json.waitSeconds + 60}}.
   - Set last_seen: record last_seen:{{$json.context_id}} timestamp with same TTL.

5. **Check & Set Waiting Flag**
   Get waiting_reply: if null, Set waiting_reply to true with TTL {{$json.waitSeconds}}; else exit.

6. **Wait for Inactivity**
   WaitSeconds (webhook): pauses for {{$json.waitSeconds}} seconds before batch evaluation.

7. **Check Batch Trigger**
   Get last_seen and Get buffer_count. IF (now - last_seen) ≥ waitSeconds * 1000 OR buffer_count ≥ batchThreshold, proceed; else use Wait node to retry.

8. **Consolidate Buffer**
   consolidate buffer (Code):

   ```javascript
   const j = items[0].json;
   const raw = Array.isArray(j.buffer) ? j.buffer : [];
   const buffer = raw.map(x => {
     try { return typeof x === 'string' ? JSON.parse(x) : x; } catch { return null; }
   }).filter(Boolean);
   buffer.sort((a, b) => new Date(a.timestamp) - new Date(b.timestamp));
   const texts = buffer.map(e => e.text?.trim()).filter(Boolean);
   const unique = [...new Set(texts)];
   const message = unique.join(' ');
   return [{ json: { context_id: j.context_id, message } }];
   ```

9. **Cleanup & Respond**
   - Delete Redis keys: buffer_in, buffer_count, waiting_reply, last_seen (for the context_id).
   - Return consolidated message to the user via your chat integration.

🛠 Customization Guidance

- **Adjust thresholds** by editing the **Config Parameters** node.
- **Change concatenation** (e.g., line breaks) by modifying the join separator in the consolidation code.
- **Add filters** (e.g., ignore empty or system messages) inside the consolidation Function.
- **Monitor performance**: for very high volume, consider sharding Redis keys by date or user segments.

© 2025 Innovatex • Automation & AI Solutions • innovatexiot.carrd.co • LinkedIn
by David Ashby
Complete MCP server exposing 1 Buy Marketing API operation to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Buy Marketing API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Buy Marketing API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com/buy/marketing/v1_beta
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (1 total)

🔧 Merchandised_Product (1 endpoint)
• GET /merchandised_product: Fetch Merchandised Products (a hedged call sketch appears at the end of this description)

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Buy Marketing API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
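For reference, a direct call to the single exposed endpoint might look roughly like the sketch below. The query parameters (category_id, metric_name, limit) and the BEST_SELLING metric are assumptions drawn from eBay's public Buy Marketing documentation, and in the workflow itself they are filled in by $fromAI() expressions rather than hard-coded.

```javascript
// Illustrative sketch only: a direct Node.js call to the underlying eBay endpoint.
// Parameter names and values are assumptions; the workflow supplies them via $fromAI().
const url = new URL('https://api.ebay.com/buy/marketing/v1_beta/merchandised_product');
url.searchParams.set('category_id', '9355');        // example category (assumed)
url.searchParams.set('metric_name', 'BEST_SELLING'); // assumed supported metric
url.searchParams.set('limit', '5');

const res = await fetch(url, {
  headers: { Authorization: `Bearer ${process.env.EBAY_OAUTH_TOKEN}` },
});
console.log(await res.json()); // native Buy Marketing API response
```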
by Alex Kim
This workflow leverages n8n to perform automated Google Maps API queries and manage data efficiently in Google Sheets. It's designed to extract specific location data based on a given list of ZIP codes and categories.

Features

- Queries the Google Maps API for location data using predefined ZIP codes and subcategories.
- Filters, de-duplicates, and organizes data into structured rows in Google Sheets.
- Implements exponential backoff retries to handle API rate limits.
- Logs and updates statuses directly in Google Sheets for easy tracking.

Prerequisites

- Google OAuth Credentials: A configured Google Cloud project for Google Maps API and Sheets API access.
- Google Sheets: A sheet with ZIP codes and categories defined (e.g., "AZ Zips").
- n8n Setup: A running instance of n8n with credentials configured for Google OAuth.

Setup Instructions

1. Prepare Google Sheets
   - Add the ZIP codes to the "AZ Zips" sheet.
   - Define subcategories in another sheet (e.g., "Google Maps Categories").
   - Provide the sheet's URL in the Settings node of the workflow.

2. Configure API Access
   - Set up Google OAuth credentials for Maps and Sheets APIs in n8n.
   - Ensure your API key has access to the places.searchText endpoint.

3. Workflow Customization
   - Modify textQuery parameters in the GMaps API node to match your query needs.
   - Adjust trigger intervals as required (e.g., manual or scheduled execution).

4. Run the Workflow
   - Execute the workflow manually or schedule periodic runs to keep your data updated.

Notes

- This workflow includes robust error handling to retry failed API calls with exponential backoff (a rough sketch of the idea appears below).
- All data is organized and logged directly in Google Sheets for easy reference and updates.
- For more information or issues, feel free to reach out!
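The retry logic lives inside the workflow's HTTP and Code nodes, but conceptually it behaves like the following sketch. The places:searchText URL, headers, and field mask are assumptions based on the Places API (New) documentation; adjust them to match your own node configuration.

```javascript
// Hedged sketch of exponential backoff around a Places "searchText" call.
async function searchTextWithBackoff(textQuery, apiKey, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch('https://places.googleapis.com/v1/places:searchText', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Goog-Api-Key': apiKey,
        'X-Goog-FieldMask': 'places.displayName,places.formattedAddress', // assumed mask
      },
      body: JSON.stringify({ textQuery }),
    });
    if (res.status !== 429) return res.json();        // success or non-retryable error
    const delayMs = 1000 * 2 ** attempt;              // 1s, 2s, 4s, ...
    await new Promise((r) => setTimeout(r, delayMs)); // back off before retrying
  }
  throw new Error('Rate limited after maximum retries');
}
```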
by Usman Liaqat
Description:

This n8n workflow helps you capture Slack messages via a webhook and download attached media files (like images, documents, or videos) directly from those messages.

How it works:

1. Slack Trigger (Webhook) – Listens for new messages in a Slack channel where the app is added.
2. HTTP Request – Uses the file's private download URL to retrieve the media securely (see the sketch below).

Use cases:

- Download files shared by team members in a Slack channel.
- Capture and process media from specific project or support channels.
- Prepare media for later processing, archiving, or review.

Requirements:

- Slack app with appropriate permissions (files:read, channels:history, etc.).
- Slack webhook set up to listen to channel messages.
- Authenticated HTTP request to handle private Slack file URLs.

This template is ideal for users who want full control over file handling triggered by real-time Slack messages.
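As a rough illustration of the authenticated download step, the snippet below fetches a file from its private URL using a bot token. The url_private_download field and the files:read scope come from Slack's file object documentation; the function and variable names are just placeholders.

```javascript
// Minimal sketch: download one Slack file attachment with a bot token.
// `file` is assumed to be a file object from the Slack event payload.
async function downloadSlackFile(file, token) {
  const res = await fetch(file.url_private_download, {
    headers: { Authorization: `Bearer ${token}` }, // bot token with files:read scope
  });
  if (!res.ok) throw new Error(`Slack download failed: ${res.status}`);
  return Buffer.from(await res.arrayBuffer()); // raw media bytes, ready to store or forward
}
```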
by PUQcloud
Overview

The Docker InfluxDB WHMCS module uses a specially designed workflow for n8n to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

Prerequisites

You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

Installation Steps

Install the Required Workflow on n8n

You have two options:

Option 1: Use the Latest Version from the n8n Marketplace
The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile: PUQcloud on n8n

Option 2: Manual Installation
Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

n8n Workflow API Backend Setup for WHMCS/WISECP

1. Configure API Webhook and SSH Access
   - Create a Basic Auth Credential for the Webhook API block in n8n.
   - Create an SSH Credential for accessing a server with Docker installed.

2. Modify Template Parameters
   In the Parameters block of the template, update the following settings:
   - server_domain – Must match the domain of the WHMCS/WISECP Docker server.
   - clients_dir – Directory where user data related to Docker and disks will be stored.
   - mount_dir – Default mount point for the container disk (recommended not to change).

   Do not modify the following technical parameters: screen_left, screen_right.

Deploy-docker-compose

In the Deploy-docker-compose element, you can modify the Docker Compose configuration. This is generated in the following scenarios:
- When the service is created
- When the service is unlocked
- When the service is updated

nginx

In the nginx element, you can modify configuration parameters of the web interface proxy server.
- The main section allows you to add custom parameters to the server block in the proxy server configuration file.
- The main_location section contains settings that will be added to the location / block of the configuration. Here, you can define custom headers and parameters.

Bash Scripts

Management of Docker containers and related procedures is done by executing Bash scripts generated in n8n. These scripts return either JSON or plain strings.
- All scripts are located in elements directly connected to the SSH element.
- You have full control over any script and can modify or execute it as needed.
by PUQcloud
Overview

The Docker NextCloud WHMCS module leverages a sophisticated workflow for n8n, designed to automate the comprehensive deployment, configuration, and management processes for NextCloud and NextCloud Office services. Through its intuitive API interface, the workflow securely receives commands and orchestrates predefined tasks via SSH on your Docker-hosted server, ensuring streamlined operations and efficient management.

Prerequisites

- You must deploy your own dedicated n8n server to manage workflows effectively. Alternatively, you may opt for the official n8n cloud-based solutions accessible via: n8n Official Site
- Your Docker server must be accessible via SSH with necessary permissions.

Installation Steps

Install the Required Workflow on n8n

You can select from two convenient installation options:

Option 1: Use the Latest Version from the n8n Marketplace
The latest workflow templates are continuously updated and available on the n8n marketplace. Explore all templates provided by PUQcloud directly here: PUQcloud on n8n

Option 2: Manual Installation
Each module version includes a bundled workflow template file. Import this workflow file directly into your n8n server manually.

n8n Workflow API Backend Setup for WHMCS

1. Configure API Webhook and SSH Access
   - Create a secure Basic Auth Credential for Webhook API interactions within n8n.
   - Create an SSH Credential within n8n to securely communicate with the Docker host.

2. Modify Template Parameters
   Adjust and update the following critical parameters to match your deployment specifics:
   - server_domain – Set this to the domain of your WHMCS Docker server.
   - clients_dir – Specify the directory where user data and related resources will be stored.
   - mount_dir – The standard mount point for container storage (recommended to remain unchanged).

   Do not alter the following technical parameters to avoid workflow disruption: screen_left, screen_right.

Deploy-docker-compose Configuration

Fine-tune Docker Compose configurations tailored specifically for these critical operational scenarios:
- Initial service provisioning and setup
- Service suspension and subsequent unlocking
- Service configuration updates
- Routine service maintenance tasks

nginx Configuration Management

Enhance and customize proxy server configurations using the dedicated nginx workflow element:
- **main**: Define specialized parameters within the server configuration block.
- **main_location**: Set custom headers, caching policies, and routing rules for the root location.

Bash Script Automation

Automate Docker container management and related server tasks through dynamically generated Bash scripts within n8n. Scripts execute securely via SSH and provide responses in JSON or plain text formats for easy parsing and logging. Scripts are conveniently linked directly to the SSH action elements. You retain complete flexibility to adapt or extend these scripts as necessary to meet your precise operational requirements.
by Juan Carlos Cavero Gracia
Description

This n8n automation template provides an end-to-end solution for generating a series of themed images for Instagram and TikTok carousels using OpenAI's GPT Image (via the image generation API) and automatically publishing them to both platforms. It uses a sequence of prompts to create a narrative or themed carousel, generating each image based on the previous one, and then posts them with an AI-generated caption.

Who Is This For?

- **Social Media Managers:** Quickly create and schedule engaging image carousels for Instagram and TikTok.
- **Content Creators:** Automate the visual content creation process for thematic posts or storytelling carousels.
- **Digital Marketers:** Efficiently produce visual assets for campaigns that require sequential imagery.
- **Small Businesses:** Generate unique promotional content for social media without needing advanced design skills.

What Problem Does This Workflow Solve?

Manually creating a series of related images for a carousel and then publishing them across multiple platforms can be repetitive and time-consuming. This workflow addresses these issues by:

- **Automating Image Generation:** Uses OpenAI to generate a sequence of 5 images, where each new image is an evolution based on the previous one and a new prompt.
- **Automating Caption Generation:** Leverages OpenAI (GPT) to create a suitable description/caption for the carousel based on the image prompts.
- **Streamlining Multi-Platform Publishing:** Automatically uploads the generated image carousel and caption to both Instagram and TikTok.
- **Reducing Manual Effort:** Significantly cuts down the time spent on designing individual images and manually uploading them.
- **Ensuring Visual Cohesion:** The sequential image generation method (editing the previous image) helps maintain a consistent style or narrative across the carousel.

How It Works

1. Trigger: The workflow is initiated manually (can be adapted to a schedule or webhook).
2. Define Prompts: Five distinct prompts are pre-set within the workflow to guide the generation of each image in the carousel.
3. AI Caption Generation: OpenAI (GPT-4.1) generates a concise (≤ 90 characters for TikTok) description for the social media posts based on all five image prompts.
4. Sequential AI Image Generation:
   - Image 1: OpenAI's image generation API (specified as gpt-image-1) creates the first image based on prompt1.
   - Image 2-5: For each subsequent image, the workflow uses the OpenAI image edits API. It takes the previously generated image and a new prompt (prompt2 for image 2, prompt3 for image 3, and so on) to create the next image in the sequence (a hedged sketch of this call appears at the end of this description).
   - Images are converted from base64 JSON to binary format.
5. Content Aggregation: The five generated binary image files (named photo1 through photo5) are merged.
6. Multi-Platform Distribution:
   - The merged images and the AI-generated description are sent to api.upload-post.com for publishing as a carousel to Instagram.
   - The same content is sent to api.upload-post.com for publishing as a carousel to TikTok, with an option to automatically add music. The TikTok description is truncated if it exceeds 90 characters.

Setup

1. Accounts & API Keys: You will need:
   - An n8n instance.
   - An OpenAI API key.
   - An API key for upload-post.com.
2. Configure Credentials:
   - Add your OpenAI API key to the "OpenAI" credentials in n8n. This will be used by the "Generate Description for Tiktok and Instagram" node and the HTTP Request nodes calling the OpenAI image generation/edit APIs.
   - In the "POST TO INSTAGRAM" and "POST TO TIKTOK" nodes, replace "Apikey add_api_key_here" with your actual upload-post.com API key.
   - Update the user field in the "POST TO INSTAGRAM" and "POST TO TIKTOK" nodes if "upload_post" is not your user identifier for that service.
3. Customize Prompts: Modify the five prompts (prompt1 to prompt5) in the "Set All Prompts" node to define the story or theme of your image carousel.
4. Review Image Generation Parameters: In the "Set API Variables" node, you can adjust:
   - size_of_image (e.g., "1024x1536" for vertical carousels).
   - openai_image_model (ensure this matches a valid OpenAI model identifier for image generation/edits, like dall-e-2 or dall-e-3 if gpt-image-1 is a placeholder).
   - response_format_image (should generally remain b64_json for this workflow).
5. (Optional) TikTok Auto Music: The "POST TO TIKTOK" node has an auto_add_music parameter set to true. Change this to false if you prefer to add music manually or not at all.

Requirements

- **Accounts:** n8n, OpenAI, upload-post.com.
- **API Keys & Credentials:** API Keys for OpenAI and https://upload-post.com.
- **(Potentially) Paid Plans:** OpenAI and upload-post.com usage may incur costs depending on your volume and their respective pricing models.

This template empowers you to automate the creation and distribution of visually consistent image carousels, saving time and enhancing your social media presence.
by Don Jayamaha Jr
📡 This workflow serves as the central Alpha Vantage API fetcher for Tesla trading indicators, delivering cleaned 20-point JSON outputs for three timeframes: 15min, 1hour, and 1day.

It is required by the following agents:
- Tesla 15min, 1h, 1d Indicators Tools
- Tesla Financial Market Data Analyst Tool

✅ Requires an Alpha Vantage Premium API Key
🚀 Used as a sub-agent via webhook endpoints triggered by other workflows

📈 What It Does

For each timeframe (15min, 1h, 1d), this tool:
1. Triggers 6 technical indicators via Alpha Vantage: RSI, MACD, BBANDS, SMA, EMA, ADX
2. Trims the raw response to the latest 20 data points (a hedged formatter sketch appears at the end of this description)
3. Reformats into a clean JSON structure:

   ```json
   {
     "indicator": "MACD",
     "timeframe": "1hour",
     "data": { "timestamp": "...", "macd": 0.32, "signal": 0.29 }
   }
   ```

4. Returns results via Webhook Respond for the calling agent

📂 Required Credentials

🔑 Alpha Vantage Premium API Key
- Set up under Credentials > HTTP Query Auth
- Name: Alpha Vantage Premium
- Query Param: apikey
- Get yours here: https://www.alphavantage.co/premium/

🛠️ Setup Steps

1. Import Workflow into n8n. Name it: Tesla_Quant_Technical_Indicators_Webhooks_Tool
2. Add HTTP Query Auth Credential
   - Name: Alpha Vantage Premium
   - Param key: apikey
   - Value: your Alpha Vantage key
3. Publish and Use the Webhooks. This workflow exposes 3 endpoints:
   - /15minData → used by 15m Indicator Tool
   - /1hourData → used by 1h Indicator Tool
   - /1dayData → used by 1d Indicator Tool
4. Connect via Execute Workflow or HTTP Request. Ensure caller sends webhook trigger correctly to the path.

🧱 Architecture Summary

Each timeframe section includes:

| Component | Details |
| ------------------ | --------------------------------------------- |
| 📡 Webhook Trigger | Entry node (/15minData, /1hourData, etc.) |
| 🔄 API Calls | 6 nodes fetching indicators via Alpha Vantage |
| 🧹 Formatters | JS Code nodes to clean and trim responses |
| 🧩 Merge Node | Consolidates cleaned JSONs |
| 🚀 Webhook Respond | Returns structured data to calling workflow |

🧾 Sticky Notes Overview

- ✅ Webhook Entry: Instructions per timeframe
- ✅ API Call Summary: Alpha Vantage endpoint for each indicator
- ✅ Format Nodes: Explain JSON parsing and cleaning
- ✅ Merge Logic: Final output format
- ✅ Webhook Response: What gets returned to caller

All stickies follow n8n standard color-coding:
- Blue = Webhook flow
- Yellow = API request group
- Purple = Formatters
- Green = Merge step
- Gray = Workflow overview and usage

🔐 Licensing & Support

© 2025 Treasurium Capital Limited Company. This agent is part of the Tesla Quant AI Trading System and protected under U.S. copyright.
For support:
🔗 Don Jayamaha – LinkedIn
🔗 n8n Creator Profile

🚀 Use this API tool to feed Tesla technical indicators into any AI or trading agent across 15m, 1h, and 1d timeframes. Required for all Tesla Quant Agent indicator tools.
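As a rough illustration of the "trim to 20 points" formatter, the Code-node sketch below assumes the standard Alpha Vantage shape where indicator values live under a key such as "Technical Analysis: RSI" with timestamps as object keys; the exact output shape in the published workflow may differ.

```javascript
// Hedged sketch of a formatter node: keep only the latest 20 indicator points.
const raw = $json; // response from the Alpha Vantage HTTP Request node
const seriesKey = Object.keys(raw).find((k) => k.startsWith('Technical Analysis'));
const series = raw[seriesKey] || {};

const data = Object.entries(series)
  .sort(([a], [b]) => (a < b ? 1 : -1)) // newest timestamps first
  .slice(0, 20)
  .map(([timestamp, values]) => ({ timestamp, ...values }));

return [{ json: { indicator: 'RSI', timeframe: '1hour', data } }];
```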
by ist00dent
This n8n template lets you automatically pull market data for the top cryptocurrencies from CoinGecko every hour, calculate custom volatility and market-health metrics, classify each coin’s price action into buy/sell/hold/neutral signals with risk ratings, and expose both individual analyses and a portfolio summary via a webhook. It’s perfect for crypto analysts, DeFi builders, or portfolio managers who want on-demand insights without writing a single line of backend code.

🔧 How it works

1. Schedule Trigger fires every hour (or interval you choose).
2. HTTP Request (CoinGecko) fetches the top 10 coins by market cap, including 24 h, 7 d, and 30 d price change percentages.
3. Split In Batches ensures each coin is processed sequentially.
4. Function (Calculate Market Metrics) computes (a hedged sketch of this node follows below):
   - A weighted volatility score
   - Market-cap-to-volume ratio
   - Price-to-ATH ratio
   - Composite market score
5. IF & Switch nodes categorize each coin’s 24 h price action (up >5%, down >5%, high volatility, or stable) and append:
   - signal (BUY/SELL/HOLD/NEUTRAL)
   - riskRating (High/Medium/Low/Unknown)
   - recommendation & investmentStrategy guidance
6. NoOp & Merge nodes consolidate each branch back into a single data stream.
7. Function (Generate Portfolio Summary) aggregates all analyses into:
   - A Markdown portfolioSummary
   - Counts of buy/sell/hold/neutral signals
   - Risk distribution
8. Webhook Response returns the full JSON payload with individual analyses and the summary for downstream consumers.

👤 Who is it for?

This workflow is ideal for:
- Crypto researchers and analysts who need scheduled market insights
- DeFi and trading bot developers looking to automate signal generation
- Portfolio managers seeking a no-code overview of top assets
- Automation engineers exploring API integration and data enrichment

📑 Data Structure

When you trigger the webhook, you’ll receive a JSON object containing:
- individualAnalyses: Array of { coin, symbol, currentPrice, priceChanges, marketMetrics, signal, riskRating, recommendation }
- portfolioSummary: Markdown report summarizing signals, risk distribution, and top opportunity
- marketSignals: Counts of each signal type
- riskDistribution: Counts of each risk rating
- timestamp: ISO string of analysis time

⚙️ Setup Instructions

1. Import: In n8n Editor → click “Import from JSON” → paste this workflow JSON.
2. Configure Schedule: Double-click the Schedule Trigger → set your desired interval (default: every hour).
3. Webhook Path: Open the Webhook node → choose a unique path (e.g., /crypto-analysis) and “POST”.
4. Activate: Save and activate the workflow.
5. Test: Open the webhook URL in another tab or use cURL:
   curl -X POST https://<your-n8n-host>/webhook/<path>
   You’ll get back a JSON payload with both portfolioSummary and individualAnalyses.

📝 Tips

- Rate-Limit Handling: If CoinGecko returns 429, insert a Delay node (e.g., 500 ms) after the HTTP Request.
- Batch Size: Default is 1 coin at a time; you can bump it to parallelize.
- Customization: Tweak volatility weightings or add new metrics directly in the “Calculate Market Metrics” Function node.
- Extension: Swap CoinGecko for another API by updating the HTTP Request URL and field mappings.
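A minimal sketch of what the "Calculate Market Metrics" Function node might contain is shown here. The field names (current_price, market_cap, total_volume, ath, price_change_percentage_*_in_currency) match CoinGecko's /coins/markets endpoint, but the weightings and the composite formula are purely illustrative assumptions, not the template's exact math.

```javascript
// Illustrative sketch of the per-coin metrics calculation.
const c = $json; // one coin from the CoinGecko /coins/markets response

// Weighted volatility score (weights are illustrative assumptions)
const volatilityScore =
  Math.abs(c.price_change_percentage_24h_in_currency ?? 0) * 0.5 +
  Math.abs(c.price_change_percentage_7d_in_currency ?? 0) * 0.3 +
  Math.abs(c.price_change_percentage_30d_in_currency ?? 0) * 0.2;

const marketCapToVolume = c.total_volume ? c.market_cap / c.total_volume : null;
const priceToAth = c.ath ? c.current_price / c.ath : null;

// Composite score: higher = healthier conditions (illustrative formula only)
const marketScore =
  (priceToAth ?? 0) * 50 +
  (marketCapToVolume ? Math.min(marketCapToVolume, 50) : 0) -
  volatilityScore;

return [{ json: { ...c, marketMetrics: { volatilityScore, marketCapToVolume, priceToAth, marketScore } } }];
```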
by Andrey
⚠️ DISCLAIMER: This workflow uses the HDW LinkedIn community node, which is only available on self-hosted n8n instances. It will not work on n8n.cloud.

Overview

This n8n workflow automates the enrichment of CRM contact data with professional insights from LinkedIn profiles. The workflow integrates with both Pipedrive and HubSpot CRMs, finding LinkedIn profiles that match your contacts and updating your CRM with valuable information about their professional background and recent activities.

Key Features

- **Multi-CRM Support**: Works with both Pipedrive and HubSpot
- **AI-Powered Data Enrichment**: Uses an advanced AI agent to analyze and summarize professional information
- **Automated Triggers**: Activates when new contacts are added or when enrichment is requested
- **Comprehensive Profile Analysis**: Captures LinkedIn profile summaries and post activity

How It Works

Triggers

The workflow activates in three scenarios:
- When a new contact is created in CRM
- When a contact is updated in CRM with an enrichment flag

LinkedIn Data Collection Process

1. Email Lookup: First tries to find the LinkedIn profile using the contact's email
2. Advanced Search: If email lookup fails, uses name and company details to find potential matches
3. Profile Analysis: Collects comprehensive profile information
4. Post Analysis: Gathers and analyzes the contact's recent LinkedIn activity

CRM Updates

The workflow updates your CRM with:
- LinkedIn profile URL
- Professional summary (skills, experience, background)
- Analysis of recent LinkedIn posts and activity

Setup Instructions

Requirements

- Self-hosted n8n instance with the HDW LinkedIn community node installed
- API access to OpenAI (for GPT-4o)
- Pipedrive and/or HubSpot account
- HDW API key: https://app.horizondatawave.ai

Installation Steps

1. Install the HDW LinkedIn Node: npm install n8n-nodes-hdw
   Follow the detailed instructions at: https://www.npmjs.com/package/n8n-nodes-hdw
2. Configure Credentials:
   - OpenAI: Add your OpenAI API key
   - Pipedrive: Connect your Pipedrive account (if using)
   - HubSpot: Connect your HubSpot account (if using)
   - HDW LinkedIn: Add your API key from https://app.horizondatawave.ai
3. CRM Custom Fields Setup:

   For Pipedrive:
   - Go to Settings → Data Fields → Contact Fields → + Add Field
   - Create the following custom fields:
     - LinkedIn Profile: Field type - Large text
     - Profile Summary: Field type - Large text
     - LinkedIn Posts Summary: Field type - Large text
     - Need Enrichment: Field type - Single option (Yes/No)
   - Detailed instructions for creating custom fields in Pipedrive: https://support.pipedrive.com/en/article/custom-fields

   For HubSpot:
   - Go to Settings → Properties → Create property
   - Create the following properties for the Contact object:
     - linkedin_url: Field type - Single-line text
     - profile_summary: Field type - Multi-line text
     - linkedin_posts_summary: Field type - Multi-line text
     - need_enrichment: Field type - Checkbox (Boolean)
   - Detailed instructions for creating properties in HubSpot: https://knowledge.hubspot.com/properties/create-and-edit-properties
4. Import the Workflow: Import the "HDW_CRM_Enrichment.json" file into your n8n instance
5. Activate Webhooks: Enable the webhook triggers for your CRM to ensure the workflow activates correctly

Customization Options

AI Agent Prompts

You can modify the system prompts in the "Data Enrichment AI Agent" nodes to:
- Change the focus of profile analysis
- Adjust the tone and detail level of summaries
- Customize what information is extracted from posts

CRM Field Mapping

The workflow is pre-configured to update specific custom fields in Pipedrive and HubSpot. Update the field/property mappings in:
- "Update data in Pipedrive" nodes
- "Update data in HubSpot" node

Troubleshooting

Common Issues

- **LinkedIn Profile Not Found**: Check if the contact's email is their work email; consider adjusting the search parameters
- **Webhook Not Triggering**: Verify webhook configuration in your CRM
- **Missing Custom Fields**: Ensure all required custom fields are created in your CRM with correct names

Rate Limits

- Be aware of LinkedIn API rate limits (managed by the HDW LinkedIn node)
- Consider implementing delays if processing large batches of contacts

Best Practices

- Use enrichment flags to selectively update contacts rather than enriching all contacts
- Review and clean contact data in your CRM before enrichment
- Periodically review the AI-generated summaries to ensure quality and relevance
by Alfonso Corretti
Who is this for? 🧑🏻🫱🏻🫲🏻🤖

Humans and Robots alike. This workflow can be used as a Chat Trigger, as well as a Workflow Trigger. It will take a natural language request, and then generate a SQL query. The resulting query parameter will contain the query, and a sqloutput parameter will contain the results of executing such query.

What's the use case?

This template is most useful paired with other workflows that extract e-mail information and store it in a structured Postgres table, and use LLMs to understand inquiries about information contained in an e-mail inbox and formulate questions that need answering. Plus, the prompt can be easily adapted to formulate SQL queries over any kind of structured database.

Privacy and Economics

As LLM provider I'm using Ollama locally, as I consider my e-mail extremely sensitive information. As model, phi4-mini does an excellent job balancing quality and efficiency.

Setup

Upon running for the first time, this workflow will automatically trigger a sub-section to read all tables and extract their schema into a local file. Then, either by chatting with the workflow in n8n's interface or by using it as a sub-workflow, you will get a query and a sqloutput response.

Customizations

If you want to work with just one particular table yet keep edits at bay, append a condition to the List all tables in a database step, like so:

WHERE table_schema='public' AND table_name='my_emails_table_name'

To repurpose this workflow to work with any other data corpus in a structured database, inspect the AI Agent user and system prompts and edit them accordingly.
by Miko
Stay ahead of trends by automating your content research. This workflow fetches trending keywords from Google Trends RSS, extracts key insights from top articles, and saves structured summaries in Google Sheets—helping you build a data-driven editorial plan effortlessly.

How it works

1. Fetch Google Trends RSS – The workflow retrieves trending keywords along with three related article links.
2. Extract & Process Content – It fetches the content of these articles, cleans the HTML, and generates a concise summary using Jina AI.
3. Store in Google Sheets – The processed insights, including the trending keyword and summary, are saved in a pre-configured Google Sheet (a hedged mapping sketch appears after the instructions below).

Setup Steps

1. Prepare a Google Sheet – Ensure you have a Google Sheet ready to store the extracted data.
2. Configure API Access – Set up the Google Sheets API and any required authentication.
3. Get a Jina.ai API key.
4. Adjust Workflow Settings – A dedicated configuration node allows you to fine-tune how data is processed and stored.

Customization

- Modify the RSS source to focus on specific Google Trends regions or categories.
- Adjust the content processing logic to refine how article summaries are created.
- Expand the workflow to integrate with a CMS (e.g., WordPress) for automated content planning.

This workflow is ideal for content strategists, SEO professionals, and news publishers who want to quickly identify and act on trending topics without manual research. 🚀

Google Sheets Fields

Copy and paste these column headers into your Google Sheet:

| Column Name | Description |
|------------------------|-------------|
| status | Initial status of the keyword (e.g., "idea") |
| trending_keyword | Trending keyword extracted from Google Trends |
| approx_traffic | Estimated traffic for the trending keyword |
| pubDate | Date when the keyword was fetched |
| news_item_url1 | URL of the first related news article |
| news_item_title1 | Title of the first news article |
| news_item_url2 | URL of the second related news article |
| news_item_title2 | Title of the second news article |
| news_item_url3 | URL of the third related news article |
| news_item_title3 | Title of the third news article |
| news_item_picture1 | Image URL from the first news article |
| news_item_source1 | Source of the first news article |
| news_item_picture2 | Image URL from the second news article |
| news_item_source2 | Source of the second news article |
| news_item_picture3 | Image URL from the third news article |
| news_item_source3 | Source of the third news article |
| abstract | AI-generated summary of the articles (limited to 49,999 characters) |

Instructions

1. Open Google Sheets and create a new spreadsheet.
2. Copy the column names from the table above.
3. Paste them into the first row of your Google Sheet.
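As a hedged illustration of how one parsed RSS item could be mapped onto those columns in a Code node, here is a sketch. It assumes the feed has already been converted to JSON and that the namespaced ht:* fields (approx_traffic, news_item, etc.) are present as in Google Trends' RSS; your parser's exact key names may differ.

```javascript
// Sketch: map one parsed Google Trends RSS item onto the sheet columns above.
const item = $json; // one parsed RSS item
const news = [].concat(item['ht:news_item'] || []); // up to three related articles

const row = {
  status: 'idea',
  trending_keyword: item.title,
  approx_traffic: item['ht:approx_traffic'],
  pubDate: item.pubDate,
};

news.slice(0, 3).forEach((n, i) => {
  row[`news_item_url${i + 1}`] = n['ht:news_item_url'];
  row[`news_item_title${i + 1}`] = n['ht:news_item_title'];
  row[`news_item_picture${i + 1}`] = n['ht:news_item_picture'];
  row[`news_item_source${i + 1}`] = n['ht:news_item_source'];
});

return [{ json: row }];
```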