by Don Jayamaha Jr
# 🧪 Binance SM 1hour Indicators Tool

A precision trading-signal engine that interprets 1-hour candlestick indicators for Binance Spot Market pairs using a GPT-4.1-mini LLM. Ideal for swing traders seeking directional bias and momentum clarity across medium timeframes.

🎥 Watch Tutorial:

## 🎯 Purpose

This tool provides a structured 1-hour market read using:

- **RSI** (Relative Strength Index)
- **MACD** (Moving Average Convergence Divergence)
- **BBANDS** (Bollinger Bands)
- **SMA & EMA** (Simple and Exponential Moving Averages)
- **ADX** (Average Directional Index)

It is invoked as a sub-agent in broader AI workflows, such as the Binance Financial Analyst Tool and the Spot Market Quant AI Agent.

## ⚙️ Key Features

| Feature | Description |
| --- | --- |
| 🔄 Subworkflow Trigger | Runs only when called by a parent agent (not standalone) |
| 🧠 GPT-4.1-mini LLM | Translates numeric indicators into natural-language summaries |
| 📊 Real-time Data | Pulls the latest 40×1h candles via an internal webhook from Binance |
| 📥 Input Format | `{ "message": "ETHUSDT", "sessionId": "telegram_chat_id" }` |
| 📤 Output Format | JSON summary + Telegram-friendly HTML overview |

A sketch of this input/output contract appears at the end of this description.

## 💡 Example Output

```
📊 1h Technical Overview – ETHUSDT
• RSI: 59 (Neutral)
• MACD: Bullish Crossover
• BBANDS: Price at Upper Band
• EMA > SMA → Positive Slope
• ADX: 28 → Moderate Trend Strength
```

## 🧩 Use Cases

| Scenario | Result |
| --- | --- |
| Mid-frame market alignment | Verifies momentum between 15m and 4h timeframes |
| Quant AI Agent input | Supplies trend context for entry/exit decisions |
| Standalone medium-term signal snapshot | Validates swing trade setups or filters noise |

## 📦 Installation Instructions

1. Import the workflow into your n8n instance
2. Confirm the internal webhook `/1h-indicators` is live and authorized
3. Insert your OpenAI credentials for the GPT-4.1-mini node
4. Use only when triggered via:
   - Binance Financial Analyst Tool
   - Binance Spot Market Quant AI Agent

## 🧾 Licensing & Support

🔗 Don Jayamaha – LinkedIn: linkedin.com/in/donjayamahajr

© 2025 Treasurium Capital Limited Company. Architecture, prompts, and signal logic are proprietary. Redistribution or commercial use requires explicit licensing. No unauthorized cloning permitted.
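To make the agent-tool contract concrete, here is a minimal sketch of a call payload and a plausible summary object. The input keys come from the Input Format table above; the output field names are illustrative assumptions, not the tool's documented schema.

```javascript
// Input sent by the parent agent (keys taken from the Input Format table above).
const input = { message: "ETHUSDT", sessionId: "telegram_chat_id" };

// Hypothetical shape of the JSON summary the LLM step could return.
// Every field name below is an assumption for illustration only.
const exampleOutput = {
  symbol: "ETHUSDT",
  timeframe: "1h",
  rsi: { value: 59, signal: "Neutral" },
  macd: { signal: "Bullish Crossover" },
  bbands: { position: "Upper Band" },
  movingAverages: { emaAboveSma: true, slope: "Positive" },
  adx: { value: 28, trendStrength: "Moderate" },
};
```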
by Matt F.
AI Customer-Support Assistant that auto-maps any business site, answers WhatsApp in real time, and lets you earn or save thousands by replacing pricey SaaS chat tools.

## ⚡ What the workflow does

- **Live “AI employee”** - the bot crawls pages on demand (products, policies, FAQs), so you **never** upload documents or fine-tune a model.
- **No-code setup** - drop in API keys, paste your domain, publish the webhook; ready in ~15 min.
- **Chat memory** - each conversation turn is written to Supabase/Postgres and automatically replayed into the next prompt, letting the assistant remember context so follow-up questions feel natural and coherent even across long sessions (see the storage sketch at the end of this description).
- **WhatsApp ready** - free-form replies inside the 24-hour service window; automatically switches to a template when required (recommended by Meta).

## 🚀 Why you’ll love it

| Benefit | Impact |
| --- | --- |
| Zero content training | Point the AI Agent at any domain → go live. |
| Save or earn money | Replace pricey SaaS chat tools or sell white-label bots to clients. |
| Channel-agnostic | Ships with WhatsApp; swap one node for Telegram, Slack, or web chat. |
| Flexible voice | Adjust tone & language by editing one prompt line. |

## 🧰 Prerequisites (all free-tier friendly)

- OpenAI key
- Meta WhatsApp Cloud API number + permanent token (easy setup)
- Supabase (or Postgres) URL for chat memory (easy setup)

## 🛠 5-step setup

1. Import the template into n8n.
2. Add credentials for OpenAI, WhatsApp, and Supabase.
3. Enter your root domain in the `root_url` variable.
4. Point Meta’s Webhook to the n8n URL.
5. Hit Execute Trigger and send “Hi” from WhatsApp; watch the bot answer with live data.

## 🔄 Easy to extend

- **Voice & language** - change wording in the System Prompt.
- **Escalation** - add an “If fallback” branch → Zendesk, email, live agents.
- **Extra channels** - duplicate the reply node for Telegram or Slack.
- **Commerce API hooks** - plug in Shopify, Woo, Stripe for order status or payments.

## 💡 Monetization ideas

- **Replace costly SaaS seats.** Deploy the bot on your own server and **stop paying $300–$500 every month** for third-party “AI support” platforms.
- **Sell it as a service.** Spin up a branded instance for local shops, clinics, or e-commerce stores and **charge each client $300–$500 per month**; setup time is under 15 minutes.
- Upsell premium coverage (24/7 human hand-off) once the bot handles routine questions.
- Embed product links in answers and earn affiliate or upsell revenue automatically.

Spin it up, connect a domain and a phone number, and you, or your customers, get enterprise-grade support without code, training, or ongoing licence fees.
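The chat-memory mechanism boils down to two queries: write each turn, then read back the last N turns before the next model call. Here is a minimal sketch assuming a hypothetical `chat_memory` table with `session_id`, `role`, `content`, and `created_at` columns; the template's actual table and column names may differ.

```javascript
// Minimal sketch of the memory cycle, assuming a Postgres/Supabase table:
//   chat_memory(session_id text, role text, content text, created_at timestamptz)
// Table and column names are assumptions for illustration.
import pg from "pg";

const db = new pg.Pool({ connectionString: process.env.SUPABASE_DB_URL });

// 1. Persist each turn as it happens.
async function saveTurn(sessionId, role, content) {
  await db.query(
    "INSERT INTO chat_memory (session_id, role, content, created_at) VALUES ($1, $2, $3, now())",
    [sessionId, role, content]
  );
}

// 2. Replay the most recent turns (oldest first) into the next prompt.
async function buildHistory(sessionId, limit = 20) {
  const { rows } = await db.query(
    "SELECT role, content FROM chat_memory WHERE session_id = $1 ORDER BY created_at DESC LIMIT $2",
    [sessionId, limit]
  );
  return rows.reverse().map(({ role, content }) => ({ role, content }));
}
```

Capping the replay at a fixed number of turns keeps prompts inside the model's context window while still preserving recent conversation state.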
by David Roberts
This workflow allows you to ask questions about data stored in a database using AI. To use it, you'll need an OpenAI API key (although you could also swap in a model from another service).

Supported databases:

- Postgres
- MySQL
- SQLite

The workflow uses n8n's embedded chat, but you could also modify it to work with a chat service such as Slack, MS Teams or WhatsApp.

Note that to use this template, you need to be on n8n version 1.19.4 or later.
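To make the question-answering step concrete, here is a hedged illustration of the translation the model performs: a natural-language question in, a SQL query out. The schema (an `orders` table with `amount` and `created_at` columns) is invented for the example and is not part of the template.

```javascript
// Illustration only: the natural-language-to-SQL translation at the heart of the workflow.
// The orders table and its columns are hypothetical; syntax shown is Postgres-flavored.
const question = "What was our total order value last month?";

const generatedSql = `
  SELECT SUM(amount) AS total_order_value
  FROM orders
  WHERE created_at >= date_trunc('month', now()) - interval '1 month'
    AND created_at <  date_trunc('month', now());
`;
```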
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically scrapes and summarizes the latest industry news, delivering a curated digest to your team. Stay informed without sifting through countless articles.

## Overview

- Bright Data scrapes top news sites, blogs, and press-release feeds relevant to your sector.
- OpenAI summarizes each article and tags it by topic.
- The daily digest is compiled into Markdown and sent via Slack and email, while full summaries are archived in Notion (a compilation sketch appears at the end of this description).

## Tools Used

- **n8n** – Automation framework
- **Bright Data** – Scrapes news sources reliably
- **OpenAI** – Generates concise summaries and tags
- **Slack & Gmail** – Distribute the daily digest
- **Notion** – Stores detailed article notes

## How to Install

1. Import the workflow into n8n.
2. Configure Bright Data credentials.
3. Set up your OpenAI API key.
4. Authorize Slack, Gmail, and Notion.
5. Customize the source list & keywords in the Set node.

## Use Cases

- **Executive Briefings**: Keep leadership updated.
- **Product Teams**: Track competitor announcements.
- **Marketing**: Identify content trends quickly.
- **Investors**: Monitor sector developments.

## Connect with Me

- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (using this link supports my free workflows with a small commission)

#n8n #automation #industrynews #webscraping #brightdata #openai #newsdigest #n8nworkflow #nocode
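As a rough illustration of the digest-compilation step, here is a sketch in n8n Code-node style that groups summarized articles by topic and renders Markdown. The input item shape (`{ title, url, topic, summary }`) is an assumption; match it to whatever your OpenAI summarization node actually outputs.

```javascript
// Sketch: compile summarized articles into a Markdown digest (n8n Code node).
// Assumed item shape: { title, url, topic, summary }; adjust to your real fields.
const articles = $input.all().map((item) => item.json);

// Group articles by their AI-assigned topic tag.
const byTopic = {};
for (const a of articles) {
  (byTopic[a.topic] ??= []).push(a);
}

let digest = `# Industry News Digest – ${new Date().toISOString().slice(0, 10)}\n`;
for (const [topic, list] of Object.entries(byTopic)) {
  digest += `\n## ${topic}\n`;
  for (const a of list) {
    digest += `- [${a.title}](${a.url}): ${a.summary}\n`;
  }
}

return [{ json: { digest } }];
```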
by Samir Saci
**Tags**: Sustainability, Business Travel, Carbon Emissions, Flight Tracking, Carbon Interface API

## Context

Hi! I'm Samir, a Supply Chain Engineer and Data Scientist based in Paris and founder of LogiGreen Consulting. I help companies monitor and reduce their environmental footprint by combining AI automation, carbon estimation APIs, and workflow automation. This workflow is part of our sustainability reporting initiative, allowing businesses to track the CO₂ emissions of employee flights.

> Automate carbon tracking for your business travel with AI-powered workflows in n8n!

📬 For business inquiries, feel free to connect with me on LinkedIn

## Who is this template for?

This workflow is designed for travel managers, sustainability teams, or finance teams who need to measure and report on emissions from business travel.

Imagine your company receives a flight confirmation email: the AI Agent reads the email and extracts structured data, such as flight dates, airport codes, and number of passengers. Then the Carbon Interface API is called to estimate CO₂ emissions, which are stored in a Google Sheet for sustainability reporting.

## How does it work?

This workflow automates the end-to-end process of tracking flight emissions, from email to CO₂ estimation:

- 📨 A Gmail Trigger captures booking confirmations
- 🧠 An AI Agent extracts structured data (airports, dates, flight numbers)
- ✈️ Each flight leg is processed individually
- 🌍 The Carbon Interface API returns distance and carbon emissions (a request sketch appears at the end of this description)
- 📄 A second Google Sheets node appends the emission data for reporting

Steps:

1. 💌 Trigger on a new flight confirmation email
2. 🧠 Extract structured trip data using the AI Agent (flights, airports, dates)
3. 📑 Store flight metadata in Google Sheets
4. 🧭 For each leg, call the Carbon Interface API
5. 📥 Append distance, CO₂ in kg, and a timestamp to the flight row

## What do I need to get started?

You'll need:

- A Gmail account receiving SAP Concur or travel confirmation emails
- A Google Sheet to record trip metadata and CO₂ emissions
- A free Carbon Interface API key
- Access to OpenAI for parsing the email via the AI Agent
- A few sample flight confirmation emails to test

## Next Steps

🗒️ Use the sticky notes on the n8n canvas to:

- Add your Gmail and Carbon Interface credentials
- Send a sample booking email to your inbox
- Verify that emissions and distances are correctly added to your sheet

This template was built using n8n v1.93.0
Submitted: June 7, 2025
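For reference, here is a sketch of the single-leg estimate request made for each flight leg. The endpoint and body shape follow Carbon Interface's published flight-estimate format, but verify the field names against the current API docs before relying on them.

```javascript
// Sketch: estimate CO₂ for one flight leg via the Carbon Interface API.
// Verify the body fields against the current API reference; this follows the
// documented flight-estimate format as of writing.
const response = await fetch("https://www.carboninterface.com/api/v1/estimates", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.CARBON_INTERFACE_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    type: "flight",
    passengers: 1,
    legs: [{ departure_airport: "cdg", destination_airport: "jfk" }],
  }),
});

const { data } = await response.json();
// Typical response attributes include carbon_kg, distance_value, and distance_unit,
// which the workflow appends to the flight row in Google Sheets.
console.log(data.attributes.carbon_kg, data.attributes.distance_value);
```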
by Max Mitcham
Want to check out all my flows? Follow me on:

- https://maxmitcham.substack.com/
- https://www.linkedin.com/in/max-mitcham/

This automation flow generates comprehensive, research-backed lead magnet articles from a user-submitted topic, conducts deep research across multiple sources, and automatically creates a professional Google Doc ready for LinkedIn sharing.

## ⚙️ How It Works (Step-by-Step)

### 📝 Chat Input (Entry Point)

A user submits a topic through the chat interface:

- Topic for lead magnet content
- Target audience (automatically detected)
- Company context (when relevant)

### 🔍 Query Builder Agent

An AI agent refines the input by:

- Converting the topic into 5 targeted research queries
- Determining if the topic relates to *company for specialized research
- Using structured output parsing for consistent results (a schema sketch appears at the end of this description)

### 📚 Research Leader Agent

Conducts comprehensive research that:

- Uses the Perplexity API for real-time web research
- Integrates the *company knowledge base when relevant
- Creates a detailed table of contents with research insights
- Identifies key trends, expert opinions, and case studies

### 📋 Project Planner Agent

Structures the content by:

- Generating a professional title and subtitle
- Creating 8–10 logical chapter outlines
- Developing detailed writing prompts for each section
- Ensuring step-by-step actionable guidance

### ✍️ Research Assistant Team

Multiple AI agents write simultaneously:

- Each agent writes one chapter with proper citations
- Maintains a consistent voice across all sections
- Includes real-world examples and implementation steps
- Uses both web research and *company knowledge

### 📝 Editor Agent

Professional content polishing:

- Refines tone for authenticity and engagement
- Adds image placeholders where appropriate
- Ensures proper flow between chapters
- Optimizes for the LinkedIn lead magnet format

### 📄 Google Docs Creation

Automated document generation:

- Creates a new Google Doc with formatted content
- Sets proper sharing permissions (public link)
- Organizes it in a designated company folder
- Returns a shareable URL for immediate use

## 🛠️ Tools Used

- **n8n**: Workflow orchestration platform
- **Anthropic Claude**: Primary AI model for content generation
- **OpenRouter**: Backup AI model options
- **Perplexity API**: Real-time research capabilities
- ***Company Knowledge Hub**: Internal documentation access
- **Google Docs API**: Document creation and formatting
- **Google Drive API**: File management and sharing

## 📦 Key Features

- End-to-end automation from topic to published document
- Multi-agent approach ensures comprehensive coverage
- Real-time research with proper citations
- Company-specific knowledge integration
- Professional editing and formatting
- Automatic Google Docs creation with sharing
- Scalable content generation (3–5 minutes per article)

## 🚀 Ideal Use Cases

- B2B companies building thought-leadership content
- Sales teams creating industry-specific lead magnets
- Marketing departments scaling content production
- Consultants developing expertise-demonstrating resources
- SaaS companies creating feature-focused educational content
- Startups establishing market presence without content teams
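For readers rebuilding the Query Builder Agent, here is a sketch of the kind of structured output it could emit for the output parser to validate. Every field name here is an assumption for illustration; the template's actual schema may differ.

```javascript
// Hypothetical structured output from the Query Builder Agent.
// Field names (topic, company_related, queries) are illustrative assumptions.
const queryBuilderOutput = {
  topic: "AI-driven demand forecasting for retail",
  company_related: false, // toggles the specialized *company research path
  queries: [
    "state of AI demand forecasting adoption in retail 2025",
    "case studies of ML forecasting reducing stockouts",
    "expert opinions on forecasting model selection for retailers",
    "common pitfalls when deploying demand forecasting systems",
    "ROI benchmarks for AI forecasting implementations",
  ],
};
```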
by Yaron Been
Stop manually checking keyword rankings and let automation do the work for you. This comprehensive SEO monitoring workflow automatically tracks your keyword positions, compares them against your target URLs, and instantly alerts your team via Slack whenever rankings change, ensuring you never miss critical SEO movements.

## ✨ What This Workflow Does

- **📊 Automated Rank Checking**: Continuously monitors keywords stored in Airtable
- **🔍 Real-Time SERP Analysis**: Uses the Firecrawl API to fetch current search rankings
- **📈 Intelligent Comparison**: Compares current vs. previous rankings automatically
- **📝 Database Updates**: Updates Airtable records with new ranking data
- **🚨 Instant Alerts**: Sends Slack notifications only when rankings change
- **🎯 Target URL Matching**: Specifically tracks your domain's position in search results

## 🔧 Key Features

- **Trigger-based automation** that activates when Airtable data changes
- **Smart rank comparison** logic that prevents false alerts
- **Conditional notifications**: only alerts on actual ranking changes
- **Clean data management** with automatic Airtable updates
- **Team collaboration** through Slack integration
- **Scalable monitoring** for unlimited keywords

## 📋 Prerequisites

- Airtable account with a Personal Access Token
- Firecrawl API key for SERP data
- Slack workspace with API access
- Basic Airtable setup with keyword data

## 🎯 Perfect For

- SEO agencies managing multiple client campaigns
- Digital marketing teams tracking organic performance
- Content creators monitoring content rankings
- E-commerce businesses tracking product visibility
- Startups needing cost-effective SEO monitoring
- Any business serious about search visibility

## 💡 How It Works

1. **Data Collection**: Fetches keywords, target URLs, and current ranks from Airtable
2. **SERP Analysis**: Queries the Firecrawl API for real-time search results
3. **Rank Detection**: Searches results for your target URL and determines its position
4. **Smart Comparison**: Compares the new ranking against stored data (a comparison sketch appears at the end of this description)
5. **Database Update**: Updates Airtable with the latest ranking information
6. **Conditional Alert**: Sends a Slack notification only if the ranking changed
7. **Team Notification**: Delivers actionable ranking updates to your team

## 📦 What You Get

- Complete n8n workflow with all integrations configured
- Airtable template with proper field structure
- Firecrawl API integration setup
- Slack notification templates
- Comprehensive setup documentation
- Sample keyword data for testing

## 🚀 Benefits

- **Save Hours Weekly**: Eliminate manual rank checking
- **Never Miss Changes**: Get instant alerts on ranking movements
- **Team Alignment**: Keep everyone informed via Slack
- **Historical Tracking**: Maintain ranking history in Airtable
- **Cost Effective**: Replace expensive SEO tools with automation
- **Scalable Solution**: Monitor unlimited keywords effortlessly

## 💡 Need Help or Want to Learn More?

Created by Yaron Been

- 📧 Support: Yaron@nofluff.online
- 🎥 YouTube Tutorials: https://www.youtube.com/@YaronBeen/videos
- 💼 LinkedIn: https://www.linkedin.com/in/yaronbeen/

Discover more SEO automation workflows and digital marketing tutorials on my channels!

🏷️ Tags: SEO, Keyword Tracking, Airtable, Slack, Firecrawl, SERP, Automation, Rank Monitoring, Digital Marketing, Search Rankings
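To illustrate the smart-comparison logic, here is a sketch in n8n Code-node style that locates the target URL in the SERP results and flags a change only when the position actually moved. The field names (`target_url`, `current_rank`) and the Firecrawl result shape are assumptions; map them to your actual Airtable fields and API response.

```javascript
// Sketch of the rank-comparison step (n8n Code node).
// Assumed fields: record.target_url, record.current_rank, record.serpResults = [{ url }].
const record = $input.first().json;

// Compare hostnames so http/https and path differences don't cause misses.
const targetHost = new URL(record.target_url).hostname.replace(/^www\./, "");
const position = record.serpResults.findIndex(
  (r) => new URL(r.url).hostname.replace(/^www\./, "") === targetHost
) + 1; // 0 means the target URL was not found in the results

const newRank = position || null;
const changed = newRank !== record.current_rank;

// Only items with changed === true should reach the Slack node.
return [{
  json: { ...record, previous_rank: record.current_rank, new_rank: newRank, changed },
}];
```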
by Eric
This is a specific use case. The ElevenLabs guide for Cal.com bookings is comprehensive, but I was having trouble with the booking API request, so I built a simple workflow to validate the request and handle the booking creation.

## Who's this for?

You have an ElevenLabs voice agent (or other external service) booking meetings in your Cal.com account, and you want more control over the book_meeting tool called by the voice agent.

## How's it work?

1. The request is received by the webhook trigger node
   - Request sent from an ElevenLabs voice agent, or another source
   - Request body contains contact info for the user with whom a meeting will be booked in Cal.com
2. The workflow validates the input data for the fields required by Cal.com
   - If validation fails, a 400 Bad Request response is returned
3. If valid, the meeting is booked via the Cal.com API (a request sketch appears at the end of this description)

## How do I use this?

Create a custom tool in the ElevenLabs agent setup, and connect it to the webhook trigger in this workflow. Add authorization for security. Instruct your voice agent to call this tool after it has collected the required information from the user.

## Expected input structure

Note: Modify this according to your needs, but be sure to reflect your changes in all following nodes. The required fields here depend on the required fields of your Cal.com event type. If you have multiple event types in Cal.com with varying required fields, you'll need to handle that in this workflow and provide appropriate instructions in your *voice agent prompt*.

```json
{
  "body": {
    "attendee_name": "Some Guy",
    "start": "2025-07-07T13:30:00Z",
    "attendee_phone": "+12125551234",
    "attendee_timezone": "America/New_York",
    "eventTypeId": 123456,
    "attendee_email": "someguy@example.com",
    "attendee_company": "Example Inc",
    "notes": "Discovery call to find synergies."
  }
}
```

## Modifications

Note: ElevenLabs doesn't handle webhook response headers or body, and only recognizes the response code. In other words, if the workflow responds with 400 Bad Request, that's the only info the voice agent gets back; it doesn't receive any details, e.g. "User email still needed".

You can modify the structure of the expected webhook request body, and then you should reflect that structure change in all following nodes in the workflow. I.e., if you change attendee_name to attendeeFirstName and attendeeLastName, then you need to make this change in the following nodes that use these properties. You can also require, or make optional, other user data for the Cal.com event type, which would reduce or increase the data the voice agent must collect from the user. You can modify the authorization of this webhook to meet your security needs. ElevenLabs has some limitations and you should be mindful of those, but it also offers a secret feature which proves useful.

An improvement to this workflow could include a GET request to a CRM or other db to get info on the user interacting with the voice agent. This could reduce some of the data collection needed from the voice agent, e.g. if you already have the user's email address. I believe you can also get the user's phone number if the voice agent is set up on a dial-in interface, so the agent wouldn't need to ask for it. This all depends on your use case. A savvy step might be prompting the voice agent to get an email, and using the email in this workflow to pull enrichment data from Apollo.io or similar ;-)
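As a companion to the input structure above, here is a sketch of how the validated payload could map onto a Cal.com v2 booking request. Cal.com's bookings API has changed across versions, so treat the header and body fields as assumptions and verify them against the current Cal.com API docs.

```javascript
// Sketch: map the validated webhook body to a Cal.com v2 booking request.
// Field and header names follow Cal.com's v2 bookings API as of writing; verify
// against the current docs, since required fields depend on your event type.
const b = $json.body;

const response = await fetch("https://api.cal.com/v2/bookings", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.CAL_API_KEY}`,
    "cal-api-version": "2024-08-13",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    start: b.start,
    eventTypeId: b.eventTypeId,
    attendee: {
      name: b.attendee_name,
      email: b.attendee_email,
      timeZone: b.attendee_timezone,
      phoneNumber: b.attendee_phone,
    },
    metadata: { company: b.attendee_company, notes: b.notes },
  }),
});

// ElevenLabs only reads the status code, so respond with 200 or 400 accordingly.
```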
by Samir Saci
**Tags**: Ghost CMS, SEO Audit, Image Optimisation, Alt Text, Google Sheets, Automation

## Context

Hi! I'm Samir, a Supply Chain Engineer and Data Scientist based in Paris and founder of LogiGreen Consulting. I help companies and content creators use automation and analytics to improve visibility, enhance performance, and reduce manual work.

> Let's use n8n to automate SEO audits to increase your traffic!

📬 For business inquiries, feel free to connect on LinkedIn

## Who is this template for?

This workflow is perfect for bloggers, marketers, or content teams using Ghost CMS who want to:

- Extract and review all images from articles
- Detect missing or short alt texts
- Check image file size and filename SEO compliance
- Push the audit results into a Google Sheet

## How does it work?

This n8n workflow extracts all blog posts from Ghost CMS, scans the HTML to collect all embedded images, then evaluates each image for:

- ✅ Presence and length of alt text
- 📏 File size in kilobytes
- 🔤 Filename SEO quality, e.g. lowercase, hyphenated, no special characters (a check sketch appears at the end of this description)

All findings are written to Google Sheets for further analysis or manual cleanup.

## 🧭 Workflow Steps

1. 🚀 Trigger the workflow manually or on a schedule
2. 📰 Extract blog post content from Ghost CMS
3. 🖼️ Parse all `<img>` tags with `src` and `alt` attributes
4. 📤 Store image metadata in a Google Sheet (step 1)
5. 🌐 Download each image using an HTTP request
6. 🧮 Extract the file size, extension, and filename SEO flag
7. 📄 Update the audit sheet with size and format insights

## What do I need to get started?

This workflow requires:

- A Ghost Content API key
- A Google Sheet (to log audit results)

No AI or external APIs required; it works fully with built-in nodes.

## Next Steps

🗒️ Follow the sticky notes inside the workflow to:

- Plug in your Ghost blog credentials
- Select or create a Google Sheet
- Run the audit and start improving your SEO!

This template was built using n8n v1.93.0
Submitted: June 8, 2025
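To show what the filename SEO flag can look like in practice, here is a sketch in n8n Code-node style implementing the rules mentioned above (lowercase, hyphenated, no special characters). The exact criteria and field names in the template may differ.

```javascript
// Sketch: filename SEO compliance check (n8n Code node).
// Assumes the item carries the image URL in $json.src; the rule set follows
// the description above and may differ from the template's exact criteria.
const url = $json.src;
const filename = decodeURIComponent(url.split("/").pop() || "");
const base = filename.replace(/\.[a-z0-9]+$/i, ""); // strip the extension

const checks = {
  lowercase: base === base.toLowerCase(),
  hyphenated: !/[_\s]/.test(base),            // no underscores or spaces
  noSpecialChars: /^[a-z0-9-]*$/i.test(base), // letters, digits, hyphens only
};

return [{
  json: { ...$json, filename, seo_compliant: Object.values(checks).every(Boolean), checks },
}];
```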
by explorium
# Automatically enrich prospect data from HubSpot using Explorium and create leads in Salesforce

This n8n workflow streamlines the process of enriching prospect information by automatically pulling data from HubSpot, processing it through Explorium's AI-powered tools, and creating new leads in Salesforce with enhanced prospect details.

## Credentials Required

To use this workflow, set up the following credentials in your n8n environment:

**HubSpot**
- **Type**: App Token (or OAuth2 for broader compatibility)
- **Used for**: Triggering on new contacts, fetching contact data

**Explorium API**
- **Type**: Generic Header Auth
- **Header**: Authorization
- **Value**: Bearer YOUR_API_KEY
- Get an Explorium API key

**Salesforce**
- **Type**: OAuth2 or Username/Password
- **Used for**: Creating new lead records

Go to Settings → Credentials, create these three credentials, and assign them in the respective nodes before running the workflow.

## Workflow Overview

### Node 1: HubSpot Trigger

This node listens for real-time events from the connected HubSpot account. Once triggered, it passes metadata about the event to the next step in the flow.

### Node 2: HubSpot

This node fetches contact details from HubSpot after the trigger event.

- **Credential**: Connected using a HubSpot App Token
- **Resource**: Contact
- **Operation**: Get Contact
- **Return All**: Disabled

This node retrieves the full contact details needed for further processing and enrichment.

### Node 3: Match prospect

This node sends each contact's data to Explorium's AI-powered prospect-matching API in real time (a request sketch appears at the end of this description).

- **Method**: POST
- **Endpoint**: https://api.explorium.ai/v1/prospects/match
- **Authentication**: Generic Header Auth (using a configured credential)
- **Headers**: Content-Type: application/json

The request body is dynamically built from contact data, typically including `full_name`, `company_name`, `email`, `phone_number`, and `linkedin`. These fields are matched against Explorium's intelligence graph to return enriched or validated profiles.

Response output: `total_matches`, `matched_prospects`, and a `prospect_id`. Each response is used downstream to enrich, validate, or create lead information.

### Node 4: Filter

This node filters the output of the Match prospect step so that only valid, matched results continue in the flow. Only records that contain at least one matched prospect with a non-null `prospect_id` are passed forward.

- Status: Currently deactivated (as shown by the "Deactivate" label)

### Node 5: Extract Prospect IDs from Matched Results

This node extracts all valid `prospect_id` values from previously matched prospects and compiles them into a flat array. It loops over all matched items, extracts each `prospect_id` from the `matched_prospects` array, and returns a single object with an array of all `prospect_id`s.

### Node 6: Explorium Enrich Contacts Information

This node performs bulk enrichment of contacts by querying Explorium with a list of matched `prospect_id`s.

Node configuration:
- **Method**: POST
- **Endpoint**: https://api.explorium.ai/v1/prospects/contacts_information/bulk_enrich
- **Authentication**: Header Auth (using saved credentials)
- **Headers**: "Content-Type": "application/json", "Accept": "application/json"

Returns enriched contact information, such as:
- **emails**: professional/personal email addresses
- **phone_numbers**: mobile and work numbers
- `professions_email`, `professional_email_status`, `mobile_phone`

### Node 7: Explorium Enrich Profiles

This additional enrichment node provides supplementary contact-data enhancement, running in parallel with the primary enrichment process.
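Since Node 3 is the heart of the flow, here is a sketch of the match call it performs. The endpoint, header, and listed fields come from the description above; the wrapper key for the request body and the HubSpot property names are assumptions, so check Explorium's API reference and your contact fields for the exact schema.

```javascript
// Sketch of the Node 3 prospect-match call. Endpoint and fields follow the
// description above; the "prospects_to_match" wrapper key is an assumption,
// and the HubSpot-style property names should be mapped to your real fields.
const contact = $json;

const response = await fetch("https://api.explorium.ai/v1/prospects/match", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.EXPLORIUM_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    prospects_to_match: [{
      full_name: `${contact.firstname} ${contact.lastname}`,
      company_name: contact.company,
      email: contact.email,
      phone_number: contact.phone,
      linkedin: contact.linkedin_url, // hypothetical property name
    }],
  }),
});

// Expected response fields per the description: total_matches and
// matched_prospects (each match carrying a prospect_id used downstream).
const { total_matches, matched_prospects } = await response.json();
return [{ json: { total_matches, matched_prospects } }];
```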
### Node 8: Merge

This node combines multiple data streams from the parallel enrichment processes into a single output, allowing you to consolidate data from the different Explorium enrichment endpoints. The "combine" setting means it merges the incoming data streams rather than overwriting them.

### Node 9: Code - flatten

This custom code node processes and transforms the merged enrichment data before creating the Salesforce lead (a sketch appears after the summary below). It can be used to:

- Flatten nested data structures
- Format data according to Salesforce field requirements
- Apply business logic or data validation
- Map Explorium fields to Salesforce lead properties
- Handle data type conversions

### Node 10: Salesforce

This final node creates new leads in Salesforce using the enriched data returned by Explorium.

- **Credential**: Salesforce OAuth2 or Username/Password
- **Resource**: Lead
- **Operation**: Create Lead

The node creates new lead records with enriched information, including contact details, company information, and professional data obtained through the Explorium enrichment process.

## Workflow Flow Summary

1. **Trigger**: HubSpot webhook fires on new/updated contacts
2. **Fetch**: Retrieve contact details from HubSpot
3. **Match**: Find prospect matches using Explorium
4. **Filter**: Keep only successfully matched prospects (currently deactivated)
5. **Extract**: Compile prospect IDs for bulk enrichment
6. **Enrich**: Parallel enrichment of contact information through multiple Explorium endpoints
7. **Merge**: Combine enrichment results
8. **Transform**: Flatten and prepare data for Salesforce (Code node)
9. **Create**: Create new lead records in Salesforce

This workflow ensures comprehensive data enrichment while maintaining data quality and providing a seamless integration between HubSpot prospect data and Salesforce lead creation. The parallel enrichment structure maximizes data-collection efficiency before creating high-quality leads in your CRM system.
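Here is a sketch of what the "Code - flatten" node could contain: it merges the parallel enrichment payloads into one object and maps them onto standard Salesforce lead fields (Salesforce requires LastName and Company on a lead, hence the fallbacks). The Explorium source fields on the right-hand side are assumptions drawn from the enrichment outputs listed above.

```javascript
// Sketch of a possible "Code - flatten" node. The Explorium source fields are
// assumptions based on the enrichment outputs listed above; adjust as needed.
const payloads = $input.all().map((item) => item.json);
const flat = Object.assign({}, ...payloads); // later payloads win on key clashes

return [{
  json: {
    // Salesforce requires LastName and Company on a lead, hence the fallbacks.
    FirstName: flat.first_name,
    LastName: flat.last_name ?? "Unknown",
    Company: flat.company_name ?? "Unknown",
    Email: flat.professions_email ?? flat.emails?.[0]?.address, // address key assumed
    Phone: flat.mobile_phone ?? flat.phone_numbers?.[0],
  },
}];
```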
by Jimleuk
This n8n template showcases the new HTTP tool released in version 1.47.0. Overall, the tool helps simplify AI Agent workflows where custom sub-workflows were performing the same simple HTTP requests.

## Comparisons

### 1. AI agent that can scrape webpages

Remake of https://n8n.io/workflows/2006-ai-agent-that-can-scrape-webpages/

Changes:
- Replaces Execute Workflow Tool and Subworkflow
- Replaces Response Formatting

### 2. Allow your AI to call an API to fetch data

Remake of https://n8n.io/workflows/2094-allow-your-ai-to-call-an-api-to-fetch-data/

Changes:
- Replaces Execute Workflow Tool and Subworkflow
- Replaces Manual Query Params Definitions
- Replaces Response Formatting
by Solomon
The Stripe API does not provide custom fields in invoice or charge data, so you have to get them from the Checkout Sessions endpoint. But that endpoint is not easy for beginners: it has dictionary parameters and pagination settings.

This workflow solves that problem with a preconfigured GET request that fetches all the checkout sessions from the last 7 days (a pagination sketch follows below). It then transforms the data to make it easier to work with and allows you to filter by the custom_fields you want to get.

Want to generate Stripe invoices automatically? Open 👉 this workflow.

Check out my other templates: https://n8n.io/creators/solomon/
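For readers who want to see what the preconfigured request does under the hood, here is a sketch of listing Checkout Sessions from the last 7 days with Stripe's cursor pagination. The `created[gte]`, `limit`, and `starting_after` parameters are documented Stripe query options; the `order_ref` custom-field key at the end is a hypothetical example.

```javascript
// Sketch: fetch all Checkout Sessions created in the last 7 days, following
// Stripe's cursor-based pagination (data / has_more / starting_after).
const since = Math.floor(Date.now() / 1000) - 7 * 24 * 60 * 60;
const sessions = [];
let startingAfter;

do {
  const params = new URLSearchParams({ "created[gte]": String(since), limit: "100" });
  if (startingAfter) params.set("starting_after", startingAfter);

  const res = await fetch(`https://api.stripe.com/v1/checkout/sessions?${params}`, {
    headers: { Authorization: `Bearer ${process.env.STRIPE_SECRET_KEY}` },
  });
  const page = await res.json();

  sessions.push(...page.data);
  startingAfter = page.has_more ? page.data[page.data.length - 1].id : undefined;
} while (startingAfter);

// Example filter: keep sessions carrying a hypothetical "order_ref" custom field.
const withField = sessions.filter(
  (s) => (s.custom_fields ?? []).some((f) => f.key === "order_ref")
);
```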