by David Olusola
Overview

This workflow watches for new rows in a Google Sheet (e.g., where you manually log customer reviews), uses a Code node to perform a simple sentiment analysis, then updates the same row with the detected sentiment.

Use Case: Quickly gauge customer satisfaction, identify positive/negative trends, and prioritize follow-ups based on sentiment.

How It Works

This workflow operates in three main steps:

1. Google Sheets Trigger (New Row): The workflow starts with a Google Sheets Trigger node configured to monitor a specific Google Sheet for new rows. This triggers the workflow whenever a new review is added.
2. Code Node (Sentiment Analysis): A Code node receives the new row data (containing the review text). Inside this node, JavaScript code performs a basic sentiment analysis by checking for keywords (e.g., "great", "excellent" for positive; "bad", "problem" for negative) and assigns a "Positive", "Negative", or "Neutral" sentiment. See the sketch at the end of this description.
3. Update Google Sheet Row: A Google Sheets node updates the same row that triggered the workflow, adding the sentiment result (and potentially other analysis data) to a new column in that row.

Setup Steps

Step 1: Create Google Sheets Credentials in n8n
1. In your n8n instance, click Credentials in the left sidebar.
2. Click New Credential.
3. Search for and select "Google Sheets OAuth2 API" and follow the authentication steps with your Google account.
4. Save it and make note of the Credential Name (e.g., "My Google Sheets Account").

Step 2: Prepare Your Google Sheet (or, better, make a copy of the one provided in the template)
1. Create a new Google Sheet in your Google Drive (e.g., Customer Reviews).
2. In the first row, add these column headers:
   - Timestamp
   - Customer Name
   - Review Text
   - Sentiment (this column will be updated by the workflow)
   - Review ID (optional, for tracking)
3. Copy the Sheet ID from the URL (e.g., https://docs.google.com/spreadsheets/d/YOUR_GOOGLE_SHEET_ID_HERE/edit).
4. Copy the GID of the specific sheet tab (e.g., https://docs.google.com/spreadsheets/d/YOUR_GOOGLE_SHEET_ID_HERE/edit#gid=YOUR_GID_HERE). This is the sheetName value.

Step 3: Import the Workflow JSON

Step 4: Activate and Test the Workflow
1. Click the "Activate" toggle in the top right corner of the n8n workflow editor.
2. Go to your Google Sheet and manually add a new row with a "Review Text" (e.g., "This product is great, I love it!"). Leave the "Sentiment" column empty.
3. The workflow should trigger automatically (it polls every minute by default), analyze the sentiment, and update the "Sentiment" column in your Google Sheet. You can also manually "Execute Workflow" to test immediately.
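A minimal sketch of the Code node's keyword check, assuming a "Review Text" input field and illustrative keyword lists (the template's actual lists may differ):

```javascript
// Illustrative n8n Code node: keyword-based sentiment scoring.
// Assumes the incoming item has a "Review Text" field; adjust to your headers.
const positive = ['great', 'excellent', 'love', 'awesome'];
const negative = ['bad', 'problem', 'terrible', 'refund'];

return items.map((item) => {
  const text = (item.json['Review Text'] || '').toLowerCase();
  const posHits = positive.filter((w) => text.includes(w)).length;
  const negHits = negative.filter((w) => text.includes(w)).length;

  let sentiment = 'Neutral';
  if (posHits > negHits) sentiment = 'Positive';
  if (negHits > posHits) sentiment = 'Negative';

  item.json.Sentiment = sentiment;
  return item;
});
```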
by Gleb D
This n8n workflow automatically retrieves recent Reuters news articles related to a user-specified keyword, summarizes the main findings using Google Gemini, formats the output into styled HTML, and sends a clean email report to a specified address.

🚀 What It Does
- Collects news data from Bright Data's Reuters dataset.
- Sorts and filters the top 10 most recent news articles by publication_date.
- Sends structured news data to Gemini Flash for summarization.
- Converts Gemini's response (in Markdown) into styled HTML.
- Delivers a concise news briefing via email, including clickable source links and topic highlights.

🛠️ Step-by-Step Setup
1. User Form: Accepts a keyword from the user via an n8n form trigger.
2. Bright Data API: Posts a discover_new request to Bright Data's Reuters dataset using the keyword.
3. Snapshot Polling: Waits and checks for dataset readiness using the snapshot ID.
4. Data Retrieval: Downloads the news data once the snapshot is complete.
5. Parsing: Filters and sorts the latest 10 articles using a Python Code node (see the sketch after this section).
6. AI Analysis: Google Gemini summarizes the filtered content into one briefing.
7. Markdown → HTML: Converts the AI response into styled HTML using Markdown + Code nodes.
8. Email Delivery: Sends the briefing as an email to a predefined recipient.

🧠 How It Works
- Polling Control: Uses Wait and If nodes to handle Bright Data snapshot readiness.
- Date Sorting: Publication dates (ISO 8601 format) are parsed and used for sorting.
- AI Summarization: Gemini condenses multiple articles into one cohesive summary.
- Formatting: Clean HTML with readable styles is generated dynamically before sending.

📨 Final Output
The email includes:
- A brief summary of the most important developments
- The date range of the collected news
- Topics covered

🔐 Credentials Used
- Bright Data API (replace YOUR_API_KEY in the HTTP nodes)
- Google Gemini (Flash) API
- Email SMTP (configured in the Email Send node)

⚠️ Notes
- You must replace all YOUR_API_KEY placeholders in the Bright Data request headers with your actual Bright Data API key.
- You can customize the keyword prompt and output style freely.
- I recommend keeping the sort = relevance option for best results; sorting by date is handled later in the workflow.
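The template performs the parsing step with a Python Code node; an equivalent sketch in an n8n JavaScript Code node, assuming each article item carries a publication_date field as described above, might look like this:

```javascript
// Illustrative: keep the 10 most recent articles by publication_date (ISO 8601).
const sorted = items
  .filter((item) => item.json.publication_date)
  .sort(
    (a, b) =>
      new Date(b.json.publication_date) - new Date(a.json.publication_date)
  );

return sorted.slice(0, 10);
```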
by n8n Team
This workflow combines customers' details with their payment data and passes the result to Pipedrive as a note on the organization.

Prerequisites
- Stripe account and Stripe credentials
- Pipedrive account and Pipedrive credentials

How it works
1. A Cron node triggers the workflow every day at 8 a.m.
2. An HTTP Request node searches for payments in Stripe.
3. The Item Lists node creates separate items from the list of payment data.
4. A Merge node takes in the payment data as input 1.
5. A Stripe node gets all the customer data.
6. A Set node renames customer-related data fields and keeps only the needed fields.
7. The Merge node takes in the customer data as input 2 and combines the payment data with the customer data (a rough single-node equivalent is sketched below).
8. A Pipedrive node searches for the organization and creates a note with the payment data.
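If you prefer a single Code node over the Set + Merge pair, the combine step could be sketched roughly as below; the node names and field names (customer_id, customer) are assumptions for illustration, not taken from the template:

```javascript
// Illustrative join of payment data to customer data on the Stripe customer ID.
// In an n8n Code node you can read upstream branches by node name;
// 'HTTP Request' and 'Set' here are assumed names.
const payments = $('HTTP Request').all();
const customers = $('Set').all();

// Index customers by ID for a quick lookup.
const byId = Object.fromEntries(
  customers.map((c) => [c.json.customer_id, c.json])
);

// Attach the matching customer fields to each payment item.
return payments.map((p) => ({
  json: { ...p.json, ...(byId[p.json.customer] || {}) },
}));
```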
by ist00dent
This n8n template allows you to automatically create shortened URLs using the TinyURL API by simply sending a webhook request. It's a quick and efficient way to integrate URL shortening into your automated workflows, ideal for sharing long links in social media, emails, or other applications.

🔧 How it works
1. Receive Link Webhook: This node acts as the entry point for the workflow. It listens for incoming POST requests and expects a JSON body containing the url to be shortened and your api_key for TinyURL.
2. Create TinyURL: This node sends a POST request to the TinyURL API, passing the long URL and your API key. It can also accept optional parameters like domain, alias, and description to customize the shortened link.
3. Respond with Shortened URL: This node sends the response from the TinyURL API (which includes the new shortened URL) back to the service that initiated the webhook.

👤 Who is it for?
This workflow is ideal for:
- Content Managers & Marketers: Quickly shorten links for campaigns, social media posts, or tracking.
- Developers: Automate the process of link shortening within applications or scripts.
- Automation Enthusiasts: Integrate a URL shortener into various n8n workflows (e.g., after generating a report, before sending a notification).
- Anyone needing on-demand short links: A flexible solution for ad-hoc link shortening.

📑 Data Structure
When you trigger the webhook, send a POST request with a JSON body structured as follows:

```
{
  "api_key": "YOUR_TINYURL_API_KEY",
  "url": "https://www.verylongwebsite.com/path/to/specific/page?param1=value1&param2=value2",
  "domain": "tinyurl.com",         // Optional: defaults to tinyurl.com
  "alias": "myCustomAlias",        // Optional: desired custom alias for the link
  "description": "My project link" // Optional: description for the link
}
```

The workflow will return the JSON response directly from the TinyURL API, which will include the short_url and other details about the newly created link.

⚙️ Setup Instructions
1. Obtain a TinyURL API Key: Before importing, make sure you have an API key from TinyURL. You can typically get this by signing up for an account on their website.
2. Import Workflow: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
3. Configure Webhook Path: Double-click the Receive Link Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /shorten-link).
4. Activate Workflow: Save and activate the workflow.

📝 Tips
- Dynamic Inputs: The workflow is set up to dynamically use the url, api_key, alias, and description from the incoming webhook data. This makes it highly flexible.
- Error Handling: You can add an Error Trigger node to catch any issues (e.g., invalid API key, malformed URL) during the TinyURL creation process. Configure it to send notifications or log errors for easy troubleshooting.
- Post-Shortening Actions: After generating the shortened URL, you can insert additional nodes before the Respond with Shortened URL node to perform other actions. For example, you could:
  - Save to a Database: Store the original and shortened URLs in a database like Airtable, Google Sheets, or a PostgreSQL database.
  - Send a Message: Automatically send the shortened URL via Slack, Discord, email, or SMS.
  - Update a Record: Update a CRM record or project management task with the new shortened link.
- Custom Domains: If you have a custom domain configured with your TinyURL account, you can change the domain parameter in the Create TinyURL node to use it.
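One quick way to test the webhook once the workflow is active; the host and path below are placeholders for your own n8n instance:

```javascript
// Illustrative test call to the workflow's webhook endpoint.
// Replace the host and path with your n8n instance and configured path.
const response = await fetch('https://your-n8n-instance.com/webhook/shorten-link', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    api_key: 'YOUR_TINYURL_API_KEY',
    url: 'https://www.verylongwebsite.com/path/to/specific/page?param1=value1&param2=value2',
  }),
});

console.log(await response.json()); // includes short_url on success
```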
by Matthieu
LinkedIn Profile Enrichment Automation

Who is this for?
This template is perfect for sales teams, recruiters, marketing professionals, and business development specialists who need to gather comprehensive LinkedIn profile data at scale. It's ideal for lead generation teams building prospect databases, recruiters sourcing candidate information, sales professionals researching prospects, and marketing teams creating targeted outreach campaigns.

What problem does this workflow solve?
Manually collecting detailed information from LinkedIn profiles is incredibly time-consuming and prone to inconsistency. Visiting each profile individually to extract names, job titles, experience, education, skills, and contact details can take hours for even small prospect lists. This automation eliminates the tedious manual data entry while ensuring consistent, comprehensive profile enrichment.

What this workflow does
This workflow automatically enriches a list of LinkedIn profile URLs by extracting comprehensive professional data, including:
- **Personal details** (first name, last name, headline, location)
- **Professional status** (hiring status, open-to-work indicators)
- **Network metrics** (connections, followers count)
- **Work experience** (up to 5 most recent positions with company details, dates, and roles)
- **Education background** (up to 3 educational institutions with degrees and dates)
- **Skills and languages** (complete skill sets and language proficiencies)
- **Professional summary** (profile description and bio)

The enriched data is automatically organized and updated in your Google Sheets database with structured formatting for easy analysis and outreach. A sketch of the column mapping appears after this description.

Setup
1. Create a Ghost Genius API account and obtain your API key for cookieless LinkedIn profile scraping.
2. Configure HTTP Request credentials with Header Auth using your Ghost Genius API key.
3. Set up your Google Sheets database using the provided template with columns: URL, Firstname, Lastname, Tagline, Location, Connections, Followers, Hiring?, Open to work?, Summary, Languages, Skills, Experience 1-5, Education 1-3.
4. Configure Google Sheets OAuth2 credentials following n8n's authentication setup guide.
5. Add LinkedIn profile URLs to the first column of your Google Sheet to begin enrichment.
6. Test the workflow with a small batch before processing larger lists.

How to customize this workflow
- **Adjust batch processing settings** to handle larger volumes by modifying the batch size and interval timing.
- **Add data validation rules** to filter out incomplete or invalid profiles before processing.
- **Integrate with CRM systems** like HubSpot or Salesforce to automatically sync enriched data.
- **Set up automated scheduling** to regularly re-enrich profiles and capture profile updates.
- **Add email notifications** to alert you when enrichment batches complete or encounter errors.
- **Customize data mapping** to include additional profile fields or reorganize the output structure.
- **Add duplicate detection** to prevent re-processing the same profiles multiple times.
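As a rough illustration of the data-mapping step, a Code node could flatten a profile response into the sheet's columns like this; every response field name here is an assumption, since the Ghost Genius payload isn't documented in this template:

```javascript
// Illustrative mapping from a profile API response to flat sheet columns.
// All response field names (first_name, experience, etc.) are assumed.
return items.map((item) => {
  const p = item.json;
  const row = {
    URL: p.url,
    Firstname: p.first_name,
    Lastname: p.last_name,
    Tagline: p.headline,
    Location: p.location,
    Connections: p.connections,
    Followers: p.followers,
    Summary: p.summary,
    Skills: (p.skills || []).join(', '),
    Languages: (p.languages || []).join(', '),
  };
  // Spread up to 5 positions into the Experience 1-5 columns.
  (p.experience || []).slice(0, 5).forEach((exp, i) => {
    row[`Experience ${i + 1}`] = `${exp.title} @ ${exp.company}`;
  });
  return { json: row };
});
```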
by Don Jayamaha Jr
This advanced agent analyzes long-term price action in the Binance Spot Market using 1-day candles. It calculates key macro indicators like RSI, MACD, BBANDS, EMA, SMA, and ADX to identify high-confidence trend setups and market momentum. It is used by the Quant AI system for directional bias and macro-level signal validation.

🎥 Watch Tutorial:

🎯 Purpose
- Detect major trend reversals, consolidation zones, and macro bias
- Support long-term swing trading decisions
- Provide reliable 1-day signals for downstream agents

🧠 Core Features

| Feature | Description |
| --- | --- |
| 🔁 Trigger | Called by parent workflows via Execute Workflow |
| 📥 Input Format | { "message": "MATICUSDT", "sessionId": "telegram_id" } |
| 📡 Webhook Call | Sends request to internal 1d indicators webhook |
| 🧮 Technical Indicators | RSI, MACD, BBANDS, EMA, SMA, ADX (based on 40 daily candles) |
| 🧠 GPT (gpt-4.1-mini) Agent | Interprets numerical data into human-readable trend signals |
| 💬 Output | Summary suitable for Telegram or further agent consumption |

🔗 External Tools Called
https://treasurium.app.n8n.cloud/webhook/1d-indicators
Sends: { "symbol": "SOLUSDT" }

📊 Indicator Calculations

| Indicator | Purpose |
| --- | --- |
| RSI (14) | Overbought / Oversold Signals |
| MACD (12,26,9) | Trend Reversals / Momentum |
| BBANDS (20, 2) | Volatility Expansion |
| EMA (20) | Short-Term Trend Confirmation |
| SMA (20) | Macro-Level Support/Resistance |
| ADX (14) | Trend Strength + Directional DI |

📦 Setup
1. Import the JSON into n8n.
2. Add your OpenAI API credentials.
3. Ensure the /1d-indicators webhook is connected and working (a sample call is sketched below).
4. Use this agent as a sub-workflow in:
   - Binance SM Financial Analyst Tool
   - Binance Spot Market Quant AI Agent

📤 Output Example

📅 1D Overview – MATICUSDT
• RSI: 71 → Overbought
• MACD: Bearish Cross forming
• BBANDS: Widening Volatility
• EMA < SMA → Downtrend Momentum
• ADX: 33 → High Trend Strength

📌 Notes
- Not user-facing; outputs are structured JSON or Telegram-style summaries.
- Pairs well with shorter timeframe tools (15m–4h) for confidence stacking.

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
🔗 Need help? Reach out on LinkedIn – Don Jayamaha
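To verify the indicators webhook independently of the agent, a minimal call could look like the sketch below; the request body follows the "Sends:" example above, and everything else is a plain HTTP POST:

```javascript
// Illustrative check of the 1d-indicators webhook this agent calls.
const response = await fetch(
  'https://treasurium.app.n8n.cloud/webhook/1d-indicators',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ symbol: 'SOLUSDT' }),
  }
);

console.log(await response.json()); // indicator values for the symbol
```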
by Joseph LePage
MCP AI Chatbot using Brave Search

Disclaimer: This workflow only works with local installations of n8n because it uses a community MCP node.

Who is this for?
This workflow is ideal for developers, automation enthusiasts, and businesses looking to integrate AI-powered chat capabilities into their workflows. It's particularly useful for those leveraging Brave Search and MCP tools to enhance user interactions and streamline data retrieval.

What problem is this workflow solving?
This workflow addresses the challenge of creating an intelligent chatbot that can process user queries, execute searches using Brave Search, and provide responses enriched by AI. It simplifies the integration of multiple tools into a cohesive system, saving time and effort for users who need a robust conversational AI solution.

What this workflow does
1. Listens for incoming chat messages using the Chat Trigger node.
2. Processes user input with an AI Agent powered by GPT-4o.
3. Retrieves relevant tools using the MCP Get Brave Tools node.
4. Executes specific search queries via the MCP Execute Brave Search node.
5. Maintains short-term memory of conversations with the Simple Memory node.

Setup
Prerequisites:
- Access to a self-hosted n8n instance.
- API credentials for OpenAI and MCP Client Tools.
- A Brave Search API key (https://brave.com/search/api/).

Steps:
1. Import the workflow JSON into your n8n instance.
2. Configure the API credentials for OpenAI and MCP Client Tools in their respective nodes.
3. Set up your Brave Search API key in the MCP nodes.

Testing:
- Use the built-in chat interface to send test messages.
- Verify that the chatbot processes queries and returns results as expected.

How to customize this workflow to your needs
- Modify the AI Agent's prompt settings to tailor responses to your specific use case.
- Adjust the memory buffer in the Simple Memory node to retain more or less conversational context.
- Replace or add additional tools in the MCP nodes to expand functionality.
by n8n Team
This template shows how you can create reports on data in an app and share a summary in another app. Specifically, this example checks a Notion database for new submissions, filters for submissions with a specific tag, and then sends a Slack message with the number created this week. Setup instructions are located inside the workflow template.
by Automate With Marc
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

🧠 AI-Powered Blog Post Generator
Category: Content Automation / AI Writing / Marketing

Description:
This automated workflow helps you generate fresh, SEO-optimized blog posts daily using AI tools. It's perfect for solo creators, marketers, and content teams looking to stay on top of the latest AI trends without manual research or writing.

For more such builds and step-by-step tutorial guides, check out: https://www.youtube.com/@Automatewithmarc

Here's how it works:
1. A Schedule Trigger kicks off the workflow daily (or at your preferred interval).
2. A Perplexity AI node researches the most interesting recent AI news, tailored for a non-technical audience.
3. An AI Agent (Claude via Anthropic) turns that news into a full-length blog post based on a structured prompt that includes a title, intro, 3+ section headers, takeaway, and meta description, designed for clarity, engagement, and SEO.
4. Optional Memory & Perplexity Tool nodes enhance the agent's responses by allowing it to clarify facts or fetch more context.
5. A Google Docs node automatically saves the final blog post to your selected document, ready for review, scheduling, or publishing.

Key Features:
- Combines Perplexity AI + Claude AI (Anthropic) for research + writing
- Built-in memory and retrieval logic for deeper contextual accuracy
- Non-technical, friendly writing style ideal for general audiences
- Output saved directly to Google Docs
- Fully no-code, customizable, and extendable

Use Cases:
- Automate weekly blog content for your newsletter or site
- Repurpose content into social posts or scripts
- Keep your brand relevant in the fast-moving AI landscape

Setup Requirements:
- Perplexity API key
- Anthropic API key
- Google Docs (OAuth2 connected)
by Joseph
This workflow automates invoice generation from form submissions, ensuring unique order IDs, creating PDF invoices, storing files, emailing customers, and logging invoice data, all seamlessly integrated.

🔹 Workflow Overview
1. Trigger (Webhook): Starts when an order form is submitted, capturing customer and order details.
2. Generate Random Order ID: A Function node creates a unique alphanumeric invoice ID (e.g., INV-X92B7D).
3. Check for Duplicate Order ID: Google Sheets looks up the generated order ID in your invoice log sheet to prevent duplicates.
4. Conditional Check (IF Node): If the ID already exists, a new ID is regenerated (loops back); if unique, the workflow proceeds to invoice creation.
5. Prepare Invoice Data: A Set node formats customer info, date, order items, and the unique order ID to fit your invoice template.
6. Convert HTML to PDF: An HTTP Request node sends your invoice HTML to the RapidAPI HTML-to-PDF service and receives the PDF file.
7. Upload PDF to Cloud Storage: Saves the PDF in Google Drive or Dropbox with a clear file name like Invoice-INV-X92B7D.pdf.
8. Send Invoice Email to Customer: An Email node attaches the PDF and includes the order ID in the email subject/body.
9. Log Invoice Details: Appends invoice data (customer info, order ID, total, PDF link) to your Google Sheet for tracking.

⚙️ Node Details & Setup

1. Webhook Trigger
- Configure it to receive form submissions (order details like name, email, items, total).

2. Function: Generate Random Order ID
- Sample JS code generates unique IDs prefixed with INV- (see the sketch after this section).

3. Google Sheets: Lookup Row
- Set up the connection to your invoice log sheet.
- Search for an existing order ID to avoid duplicates.

4. IF Node: Check Order ID Existence
- Condition: if the order ID is found, loop back to regenerate; else, continue the workflow.

5. Set Node: Prepare Invoice HTML
- Define variables like customer name, date, items, and order ID. This data populates your HTML invoice template.

6. HTTP Request: Convert HTML to PDF
- Get your API key from the RapidAPI HTML-to-PDF service.
- Send the invoice HTML in the request body.
- Receive the PDF file blob or a download URL.

7. Google Drive (or Dropbox) Upload
- Upload the PDF file.
- Use the file name format: Invoice-{{$json["order_id"]}}.pdf

8. Email Node
- Recipient: the customer email from the form data.
- Attach the generated PDF.
- Include the order ID in the email subject or body for reference.

9. Google Sheets: Append Row
- Log invoice metadata to keep records updated.

📁 Google Sheets Template
You can make a copy of the invoice log template here. This sheet includes columns for order_id, customer name, email, total, and invoice PDF link. Customize it as needed.

📌 Additional Notes
- Customize the invoice HTML template inside the Set node to match your branding.
- Ensure API credentials for RapidAPI, Google Drive/Dropbox, and email are properly set up in your n8n credentials.
- You can expand this workflow by adding payment processing or SMS notifications.

Need help or want a custom workflow? Reach out via email at joseph@uppfy.com.
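A minimal sketch of the order-ID Function node, assuming a 6-character alphanumeric suffix (the template's exact length and character set may differ):

```javascript
// Illustrative: generate a unique alphanumeric invoice ID like INV-X92B7D.
const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
let suffix = '';
for (let i = 0; i < 6; i++) {
  suffix += chars[Math.floor(Math.random() * chars.length)];
}

// Pass the submitted order data through with the new ID attached.
return [{ json: { ...items[0].json, order_id: `INV-${suffix}` } }];
```

The duplicate check against the Google Sheet then guards against the rare collision, regenerating the ID until it is unique.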
by Hunyao
What it does
Pulls up to 700 Amazon reviews per product (recent and top-rated) and writes them straight into a Google Sheet tab you choose.

Perfect for
• Brand and product managers tracking sentiment
• Marketplace sellers analysing competitor feedback
• Agencies building product-review dashboards

Apps used
RapidAPI Real-Time Amazon Data, Google Sheets, n8n Form Trigger

How it works
1. A Form Trigger collects brand, product and sheet info.
2. A Code node extracts the ASIN and builds 70 API requests (10 pages × star ratings); see the sketch after this section.
3. Split-in-batches loops through the request list, throttled by two Wait nodes.
4. HTTP Request fetches reviews from RapidAPI.
5. An IF node drops empty or error responses.
6. Split Out breaks arrays into single reviews.
7. Google Sheets appends every review to the target tab.
8. The loop continues until all pages finish.

Setup
1. Fill in Brand name, Product / Model Name, Amazon Product URL, and Tab URL to insert reviews in the form.
2. Grab your X-RapidAPI-Key from RapidAPI and add it as an httpHeaderAuth credential.
3. Connect Google Sheets OAuth2 and make the spreadsheet "Anyone with the link can edit".
4. Open Workflow Settings and set the timezone if you plan to schedule runs.
5. Hit Execute workflow or share the form link.

Credentials
• Real-Time Amazon Data (RapidAPI HTTP Header Auth)
• Google Sheets OAuth2

Limits and notes
• ~100 RapidAPI calls for the free plan. Plan quota accordingly.
• Assumes Amazon returns 10 pages per star rating; fewer pages skip silently.
• Large sheets may hit Google API write quotas.

If you have any questions about running the workflow, feel free to reach out at my YouTube channel: https://www.youtube.com/@lifeofhunyao
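A rough sketch of the ASIN-extraction Code node; the regex and the request parameters are illustrative assumptions (the 7 variants below, 5 star filters plus 2 sort orders, are one way to arrive at the template's 70 requests; check the Real-Time Amazon Data docs for the actual query parameters):

```javascript
// Illustrative: pull the ASIN from the product URL and build the request list.
const url = items[0].json['Amazon Product URL'];
const asin = (url.match(/\/(?:dp|gp\/product)\/([A-Z0-9]{10})/) || [])[1];

// 7 variants × 10 pages = 70 requests; parameter names are placeholders.
const variants = [
  { sort: 'most_recent' },
  { sort: 'top_reviews' },
  ...[1, 2, 3, 4, 5].map((s) => ({ star_rating: s })),
];

const requests = [];
for (const variant of variants) {
  for (let page = 1; page <= 10; page++) {
    requests.push({ json: { asin, page, ...variant } });
  }
}

return requests;
```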
by Stephan Koning
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. Alternatively, you can delete the community node and use the HTTP node instead.

Most email agent templates are fundamentally broken. They're stateless; they have no long-term memory. An agent that can't remember past conversations is just a glorified auto-responder, not an intelligent system.

This workflow is Part 1 of building a truly agentic system: creating the brain. Before you can have an agent that replies intelligently, you need a knowledge base for it to draw from. This system uses a sophisticated parser to automatically read, analyze, and structure every incoming email. It then logs that intelligence into a persistent, long-term memory powered by mem0.

The Problem This Solves
Your inbox is a goldmine of client data, but it's unstructured, and manually monitoring it is a full-time job. This constant, reactive work prevents you from scaling. This workflow solves that "system problem" by creating an "always-on" engine that automatically processes, analyzes, and structures every incoming email, turning raw communication into a single source of truth for growth.

How It Works
This is an autonomous, multi-stage intelligence engine. It runs in the background, turning every new email into a valuable data asset.

1. Real-Time Ingest & Prep: The system is kicked off by the Gmail Trigger, which constantly watches your inbox. The moment a new email arrives, the workflow fires. That email is immediately passed to the Set Target Email node, which strips it down to the essentials: the sender's address, the subject, and the core text of the message (I prefer using the plain text or HTML-as-text for reliability). While this step is optional, it's good practice for keeping the data clean and orderly for the AI.

2. AI Analysis (The Brain): The prepared text is fed to the core of the system: the AI Agent. This agent, powered by the LLM of your choice (e.g., GPT-4), reads and understands the email's content. It's not just reading; it's performing analysis to:
   - Extract the core message.
   - Determine the sentiment (Positive, Negative, Neutral).
   - Identify potential red flags.
   - Pull out key topics and keywords.
   The agent uses Window Buffer Memory to recall the last 10 messages within the same conversation thread, giving it the context to provide a much smarter analysis.

3. Quality Control (The Parser): We don't trust the AI's first draft blindly. The analysis is sent to an Auto-fixing Output Parser. If the initial output isn't in a perfect JSON format, a second Parsing LLM (e.g., Mistral) automatically corrects it. This is our "twist" that guarantees your data is always perfectly structured and reliable. A sketch of the kind of structured record this produces follows below.

4. Create a Permanent Client Record: This is the most critical step. The clean, structured data is sent to mem0, where the analysis is logged against the sender's email address. This moves beyond just tracking conversations; it builds a complete, historical intelligence file on every person you communicate with, creating an invaluable, long-term asset.

Optional Use: For back-filling historical data, you can disable the Gmail Trigger and temporarily connect a Gmail "Get Many" node to the Set Target Email node to process your backlog in batches.

Setup Requirements
To deploy this system, you'll need the following:
- An active n8n instance.
- Gmail API credentials.
- An API key for your primary LLM (e.g., OpenAI).
- An API key for your parsing LLM (e.g., Mistral AI).
- An account with mem0.ai for the memory layer.
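As an illustration of what the parser guarantees, the structured record logged to mem0 might look like the sketch below; the field names are assumptions that mirror the analysis steps listed above, not the template's actual schema:

```javascript
// Illustrative shape of one parsed email record; all field names are assumed.
const record = {
  sender: 'client@example.com',
  subject: 'Timeline concerns on the Q3 project',
  coreMessage: 'Client is worried the deliverable will slip past the deadline.',
  sentiment: 'Negative', // Positive | Negative | Neutral
  redFlags: ['deadline risk', 'budget mention'],
  topics: ['Q3 project', 'timeline', 'deliverables'],
};

// Logged against the sender's address so mem0 builds a per-contact history.
console.log(JSON.stringify(record, null, 2));
```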