by mariskarthick
QuantumDefender AI is a next-generation intelligent cybersecurity assistant designed to harness the symbolic strength of quantum computing's promise alongside cutting-edge AI capabilities. This sophisticated agent empowers SOC analysts, red teamers, and security researchers with rapid threat investigation, operational automation, and intelligent command execution—all driven by GPT-4 and integrated tools, accessible through Telegram or any other medium.

🔑 Key Features:

- **Expert-Level Cybersecurity Research & Analysis:** Leverages powerful AI models to deliver clean, detailed, domain-specific insights across detection, remediation, and offensive security.
- **Command & Control:** Executes Linux shell commands, autonomous scripts, and system operations securely in isolated environments.
- **Real-Time Web Intelligence:** Uses the integrated Langsearch API to provide timely internet research with contextual relevance.
- **Calendar & Scheduling Automation:** Manages Google Calendar events or any similar application (create, update, delete, retrieve) dynamically from chat.
- **Multi-Tool Orchestration:** Combines calculator functions, internet searches, command execution, and messaging for comprehensive operational support.
- **Telegram-Native Chatbot:** Delivers an adaptive, memory-informed, and interactive conversational experience with immediate typing indicators and high responsiveness.
- **Conversation & Session Management:** Maintains context-aware, session-based memory to enable smooth, multi-turn dialogues with individual users. Sends "typing…" indicators during processing to ensure an interactive, user-friendly chat experience, and operates exclusively within Telegram, delivering rich, timely responses and leveraging all Telegram bot capabilities.
- **Execution Intelligence & Safety:** Fully autonomous in deciding which tools to invoke, how frequently, and in what sequence to fulfill user requests comprehensively and responsibly. All command executions run inside a secure temporary folder environment to avoid persistent or harmful side effects (a sketch of this containment idea follows below), and strict safety protocols prevent running malicious or destructive commands, maintaining ethical standards and compliance.

Use Cases:

- Cybersecurity researchers and operators seeking an intelligent assistant to accelerate investigations and automate routine tasks.
- Red team professionals requiring on-the-fly command execution and information gathering integrated with tactical chat interactions.
- SOC teams aiming to augment their alert triage and incident handling workflows with AI-powered analysis and action.
- Anyone looking for a robust multi-tool AI chatbot integrated with real-world operational capabilities.

Setup Requirements:

- OpenAI API key for GPT-4.1-nano language processing.
- Telegram Bot API credentials with proper webhook setup to receive and respond to messages.
- Google OAuth credentials for Calendar integration, if calendar features are used.
- SSH access credentials for executing commands on remote hosts, if remote execution is enabled.
- Internet connectivity for the Langsearch web search API.

Customization & Extensibility: The workflow is built modularly with n8n's flexible node system. Users can extend it by adding more tools, integrating other services (ticketing, threat intel, scanning tools), or modifying interaction logic to suit specialized operational needs and environments.

Created by Mariskarthick M
Senior Security Analyst | Detection Engineer | Threat Hunter | Open-Source Enthusiast
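To make the containment idea concrete, here is a minimal sketch of what temporary-folder command execution could look like in an n8n Code node on a self-hosted instance. The incoming `command` field, the folder prefix, and the 30-second timeout are illustrative assumptions rather than the template's actual implementation, and the Node.js built-ins used here must be allowed via `NODE_FUNCTION_ALLOW_BUILTIN`.

```javascript
// Hypothetical sketch: run one shell command inside a throwaway temp folder
// so nothing persists outside the sandbox. Assumes a self-hosted n8n Code
// node with the required Node.js built-ins allowed.
const { execSync } = require('child_process');
const fs = require('fs');
const os = require('os');
const path = require('path');

const workdir = fs.mkdtempSync(path.join(os.tmpdir(), 'qd-exec-')); // fresh sandbox per run
let output;
try {
  // cwd confines relative paths; the timeout guards against runaway commands
  output = execSync($json.command, { cwd: workdir, timeout: 30000 }).toString();
} finally {
  fs.rmSync(workdir, { recursive: true, force: true }); // no persistent side effects
}
return [{ json: { output } }];
```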
by Alex
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How It Works

This template orchestrates a multi-step workflow that constructs a comprehensive four-zone automation matrix—Green, Yellow, Red, and White—grounded in the Human Agency Scale (HAS). When a user sends a job title via Telegram, the workflow routes both text and voice messages appropriately: voice messages are transcribed via OpenAI's Whisper, while text inputs bypass transcription (see the routing sketch below). Both streams merge into a single data flow. The AI Agent node, powered by GPT-4, analyzes the user's profession and core tasks. It also leverages live context by calling the Tavily search tool, ensuring the analysis incorporates up-to-date information. After the evaluation, the workflow formats and returns the completed matrix, with detailed task examples and rationales for each zone, back to the user via Telegram.

Setup Instructions

1. Create an OpenAI credential in n8n (model: GPT-4.1 mini).
2. Add a Tavily credential with your API key (FREE plan available).
3. Configure a Telegram Bot credential with your API bot token.
4. Import this JSON as a new workflow in n8n and map credentials in each node.
5. Activate the workflow; test by sending sample job titles; adjust node timeouts and webhook settings as needed.

Requirements

- n8n v1.0.0 or higher
- Active OpenAI API key (GPT-4.1 mini access)
- Tavily API key for web context search
- Telegram Bot token with correctly configured webhook
- Stable internet connectivity

Audience & Problem

This template is designed for consultants, HR professionals, and analysts who need a scalable, standardized approach to evaluate which routine tasks in a given profession can be automated, which require human oversight, and which should remain manual to preserve strategic judgment, creativity, and expertise.
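As a rough illustration of the routing step above, a Code or Switch node can branch on whether the Telegram update carries a voice note. The payload fields follow the Telegram Bot API; the `route` labels are illustrative.

```javascript
// Sketch of the text-vs-voice routing: Telegram voice notes arrive as a
// message.voice object with a file_id, while plain text sits in message.text.
const msg = $json.message ?? {};

if (msg.voice) {
  // Voice branch: download the file by file_id, then transcribe with Whisper.
  return [{ json: { route: 'voice', fileId: msg.voice.file_id } }];
}
// Text branch: bypass transcription and go straight to the AI Agent.
return [{ json: { route: 'text', text: msg.text ?? '' } }];
```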
by MattF
This workflow helps SEO teams catch top movers in Google Search Console by comparing daily performance across keyword segments like brand, nonbrand, and content categories. Instead of serving as a routine check, it highlights the queries and pages with the biggest jumps or drops, making it ideal for spotting wins, losses, or unexpected shifts early.

How It Works

- Runs daily on a scheduled trigger (e.g. every morning).
- Pulls GSC data for the prior two days (e.g. yesterday vs. day before).
- Segments traffic by keyword type or URL pattern (e.g. brand, nonbrand, recipes, blogs, etc.).
- Calculates changes in clicks, impressions, CTR, and average position (a sketch of this step follows below).
- Flags top movers with the biggest positive or negative deltas.
- Sends structured reports via Slack or email, grouped by segment and sorted by impact.

Setup Steps

- Connect your Google Search Console account and optionally Gmail or Slack.
- Swap in your own domain(s) and customize segmentation logic (e.g. brand terms, path filters).
- By default, the workflow includes Slack alerts, but these can be easily switched to or combined with email, webhook, or other channels.
- Full setup takes around 15–20 minutes with working GSC credentials.

Note: The “recipes” segment is included as an example of how to segment content. This can be changed to match blog, FAQ, product pages, or any other category.
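The delta calculation could look roughly like the Code-node sketch below. The input property names (clicks_yesterday and so on) are assumptions about how the two GSC pulls were merged per query, not the template's exact field names.

```javascript
// Illustrative delta calculation for one segment: each item carries
// yesterday's and the prior day's GSC metrics for a query.
const movers = items.map(({ json: row }) => {
  const clickDelta = row.clicks_yesterday - row.clicks_day_before;
  const ctrYesterday = row.impressions_yesterday
    ? row.clicks_yesterday / row.impressions_yesterday : 0;
  const ctrBefore = row.impressions_day_before
    ? row.clicks_day_before / row.impressions_day_before : 0;
  return { json: {
    query: row.query,
    segment: row.segment, // e.g. brand / nonbrand / recipes
    clickDelta,
    ctrDelta: ctrYesterday - ctrBefore,
    positionDelta: row.position_day_before - row.position_yesterday, // positive = improved
  }};
});

// Sort by absolute click movement so the biggest winners and losers surface first.
movers.sort((a, b) => Math.abs(b.json.clickDelta) - Math.abs(a.json.clickDelta));
return movers.slice(0, 10);
```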
by Ron
Objective

In industry and production, machine data is often available in databases. That might be sensor data like temperature or pressure, or just binary information. This sample flow reads machine data and sends an alert to your SIGNL4 team when the machine is down. When the machine is up again, the alert in SIGNL4 gets closed automatically.

Setup

We simulate the machine data using a Notion table. When we un-check the Up box, we simulate a machine-down event. At certain intervals n8n checks the database for down items. If such an item is found, an alert is sent using SIGNL4 (a payload sketch follows below) and the item in Notion is updated (in order not to read it again). Status updates from SIGNL4 (acknowledgement, close, annotation, escalation, etc.) are received via webhook, and we update the Notion item accordingly.

This is what the alert looks like in the SIGNL4 app. The flow can be easily adapted to other database monitoring scenarios.
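For reference, the alert payload posted by an HTTP Request node might look like the sketch below. The `X-S4-*` keys are standard SIGNL4 webhook parameters; the team-secret placeholder and the machine fields are illustrative assumptions.

```javascript
// Sketch of a SIGNL4 alert payload built in a Code node and then POSTed by
// an HTTP Request node. Field values besides the X-S4-* keys are illustrative.
const machine = $json;
return [{ json: {
  Title: `Machine down: ${machine.name}`,
  Message: `Sensor reading out of range for machine ${machine.id}`,
  'X-S4-ExternalID': String(machine.id), // lets a later "machine up" event find this alert
  'X-S4-Status': 'new',                  // send 'resolved' with the same ID to auto-close
}}];
// POST this body to https://connect.signl4.com/webhook/<team-secret>
```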
by Ranjan Dailata
Who this is for?

Extract & Summarize Indeed Company Info is an automated workflow that extracts Indeed company profile information using Bright Data Web Unlocker, transforms it using Google Gemini’s LLM, and forwards the transformed response with the summary to a specified webhook for downstream use.

This workflow is tailored for:

- Recruiters and HR teams looking to assess companies quickly during talent sourcing.
- Job seekers researching potential employers and needing summarized company insights.
- Market researchers and analysts monitoring competitor or industry players.

What problem is this workflow solving?

Searching and evaluating company profiles on Indeed manually can be time-consuming and inefficient, especially when dealing with large volumes of companies. Manually browsing, copying, and summarizing company descriptions, reviews, and ratings from Indeed hinders productivity and limits real-time insights.

This workflow solves this by:

- Automating the extraction of company details from Indeed using Bright Data Web Unlocker (a request sketch follows below).
- Summarizing the raw data using Google Gemini's language model for a quick, human-readable overview.
- Sending the transformed response with the summary to a chosen endpoint, like Slack, Notion, Airtable, or a custom webhook.

What this workflow does

This automated pipeline does the following:

1. Scrapes Indeed company profile pages (e.g., ratings, description, reviews) using Bright Data’s Web Unlocker.
2. Transforms the scraped content into structured JSON using n8n’s built-in tools.
3. Summarizes and extracts meaningful insights using Google Gemini's large language model.
4. Forwards the summarized data to a specified webhook or app for real-time access, storage, or analysis.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication).
4. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or proxy).
5. Update the search query and Bright Data zone by navigating to the Set Indeed Search Query node.
6. Update the Webhook Notifier with the Webhook endpoint of your choice.

How to customize this workflow to your needs

This workflow is built to be flexible - whether you're a recruiter, market researcher, entrepreneur, or data analyst. Here’s how you can adapt it to fit your specific use case:

- **Changing the data source:** Replace the Indeed search input with other job or business listing platforms if needed (e.g., Glassdoor, Crunchbase).
- **Refining the LLM prompt:** Tailor the Gemini prompt to transform or summarize the Indeed company information in a specific format.
- **Routing the output to different destinations:** Send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
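A rough sketch of the Web Unlocker call behind the extraction step, written here as an n8n Code-node request for readability; the zone name and target URL are placeholders, and the actual template performs this via an HTTP Request node with Header Auth.

```javascript
// Sketch of the Bright Data Web Unlocker request (values are placeholders).
// The same shape maps onto an n8n HTTP Request node with Header Auth.
const response = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://api.brightdata.com/request',
  headers: { Authorization: 'Bearer <BRIGHT_DATA_API_TOKEN>' },
  body: {
    zone: 'web_unlocker1',                       // your Web Unlocker zone name
    url: 'https://www.indeed.com/cmp/<company>', // target company profile page
    format: 'raw',                               // return the page as-is for parsing
  },
  json: true,
});
return [{ json: { html: response } }];
```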
by Luciano Gutierrez
Instagram Auto-Comment Responder with AI Agent Integration

Version: 1.1.0 ‧ n8n Version: 1.88.0+ ‧ License: MIT

A fully automated workflow for managing and responding to Instagram comments using AI agents. Designed to improve engagement and save time, this system listens for new Instagram comments, verifies and filters them, fetches relevant post data, processes valid messages with a natural language AI, and posts context-aware replies directly on the original post.

Key Features

- 💬 AI-Driven Engagement: Intelligent responses to comments via a GPT-powered agent.
- ✅ Webhook Verification: Handles the Instagram webhook handshake to ensure secure integration.
- 📦 Data Extraction: Maps incoming payload fields (user ID, username, message text, media ID) for processing.
- 🚫 Self-Comment Filtering: Automatically skips comments made by the account owner to prevent loops.
- 📡 Post Data Retrieval: Fetches the media’s id and caption from the Graph API (v22.0) before generating a reply.
- 🧠 Natural Language Processing: Uses a custom system prompt to maintain brand tone and context.
- 🔁 Automated Replies: Posts the AI-generated message back to the comment thread using Instagram’s API.
- 🧩 Modular Architecture: Clear separation of steps via sticky notes and dedicated HTTP Request and Agent nodes.

Use Cases

- **Social Media Automation:** Keep followers engaged 24/7 with instant, relevant replies.
- **Community Building:** Maintain a consistent voice and tone across all interactions.
- **Brand Reputation Management:** Ensure no valid comment goes unanswered.
- **AI Customer Support:** Triage simple questions and direct followers to resources or support.

Technical Implementation

1. Webhook Verification (Node: Webhook + Respond to Webhook): Echoes hub.challenge to confirm subscription and secure incoming events.
2. Data Extraction (Node: Set): Maps payload fields into structured variables: conta.id, usuario.id, usuario.name, usuario.message.id, usuario.message.text, usuario.media.id, endpoint.
3. User Validation (Node: Filter): Skips processing if conta.id equals usuario.id (self-comments).
4. Post Data Retrieval (Node: HTTP Request, Get post data): GET https://graph.instagram.com/v22.0/{{ $json.usuario.media.id }}?fields=id,caption&access_token={{ credentials }}. Captures the media’s caption for richer context in replies.
5. AI Response Generation (Nodes: AI Agent + OpenRouter Chat Model): Uses a detailed system prompt with a profile persona (expert in AI & automations, friendly tone), input data (username, comment text, post caption), and filtering logic (spam, praise, questions, vague comments). Returns either the reply text or [IGNORE] for irrelevant content.
6. Posting the Reply (Node: HTTP Request, Post comment): POST {{ $json.endpoint }}/{{ $json.usuario.message.id }}/replies with message={{ $json.output }}. Sends the AI answer back under the original comment (a sketch of this step follows at the end of this section).

Instructions for Setup

1. Import Workflow: In n8n > Workflows > Import from File, upload the provided .json template.
2. Configure Credentials: Instagram Graph API (Header Auth or FacebookGraphApi) with instagram_basic and instagram_manage_comments scopes; OpenRouter/OpenAI API key for the AI agent.
3. Customize System Prompt: Edit the AI Agent’s prompt to adjust brand tone, language (Brazilian Portuguese), length, or emoji usage.
4. Test & Activate: Publish a test comment on an Instagram post. Verify each node’s execution, ensuring the webhook, filter, data extraction, HTTP requests, and AI Agent respond as expected.
5. Extend & Monitor: Add sentiment analysis or lead capture nodes as needed. Monitor execution logs for errors or rate-limit events.

Tags

Social Media • Instagram Automation • Webhook Verification • AI Agent • HTTP Request • Auto Reply • Community Management
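A condensed sketch of the final reply step, assuming the variables set earlier in the flow ($json.endpoint, $json.usuario.message.id, $json.output); the shape mirrors the POST described in step 6, with the access token supplied by the stored credential rather than hard-coded.

```javascript
// Sketch: prepare the reply request for the "Post comment" HTTP Request node.
// [IGNORE] is the agent's sentinel for comments that need no answer.
if ($json.output === '[IGNORE]') {
  return []; // drop the item; nothing is posted
}
return [{ json: {
  url: `${$json.endpoint}/${$json.usuario.message.id}/replies`,
  message: $json.output, // sent as the message parameter of the POST
}}];
```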
by Amit Mehta
How it Works

This workflow automates the complete newsletter management process from content creation to client delivery, using Google Sheets, AI content generation, Google Drive, and Gmail. Whether you're a content creator, marketing agency, or small business owner, this workflow helps you automate newsletter creation and manage client communications with built-in approval workflows — all triggered from a simple spreadsheet.

🎯 Use Case

Ideal for:

- **Marketing Teams** streamlining newsletter distribution
- **Agencies** managing multiple client newsletters
- **Content Creators** automating regular communications
- **Small Businesses** maintaining customer engagement

Setup Instructions

1. Upload the Spreadsheet
   - File name: Newsletter_Management
   - Sheet structure: | ID | Topic | Client Name | Client Email | Status | Created Date | Send Date |
   - Add newsletter topics and set their Status as Pending
2. Configure Google Sheets Nodes — connect your Google account to:
   - Get topic from newsletter sheet
   - Pick records to send email to client
   - Get Client email address
   - Update Status as Generated
   - Update status as Sent
3. Add API Credentials
   - **OpenAI API Key** → for AI content generation
   - **Google Drive Access** → for document storage
   - **Gmail Account** → for sending newsletters and notifications
4. Activate the Workflow. Once live, the workflow will:
   - Manual Path: Generate newsletter content from pending topics
   - Scheduled Path: Send approved newsletters to clients automatically
   - Track status updates throughout the entire process
   - Store generated content in Google Drive
   - Send admin notifications and client emails

🔁 Workflow Logic

Main Workflow (Content Generation)

1. Trigger: Manual activation for newsletter creation
2. Retrieve: Pending topics from Google Sheets
3. Validate: Status confirmation (Pending only; see the sketch at the end of this section)
4. Generate: AI-powered HTML newsletter content
5. Store: Upload to Google Drive
6. Notify: Send completion email to admin
7. Update: Mark status as "Generated"

Scheduled Workflow (Client Distribution)

1. Trigger: Schedule-based activation
2. Retrieve: Approved newsletters from Google Sheets
3. Validate: Status confirmation (Approved only)
4. Lookup: Client email addresses
5. Loop: Process multiple recipients
6. Send: Personalized newsletters via Gmail
7. Update: Mark status as "Sent"

🧩 Node Descriptions

| Node Name | Description |
|-----------|-------------|
| When clicking 'Test workflow' | Manual trigger to start newsletter generation |
| Get topic from newsletter sheet | Retrieves pending newsletter topics from Google Sheets |
| Validate Status as Pending | Checks whether status is 'Pending' for processing |
| Create HTML for Newsletter | AI-powered content generation using OpenAI |
| Prepare Data to create word doc | Formats generated content for document creation |
| Upload doc to google drive | Stores completed newsletters in Google Drive |
| Send an email to admin | Notifies administrators of completion |
| Update Status as Generated | Marks processed items as 'Generated' |
| Schedule Trigger | Automated trigger for client email distribution |
| Pick records to send email to client | Retrieves approved newsletters for sending |
| Validate Status as Approved | Ensures only approved content is processed |
| Get Client email address | Fetches client contact information |
| Loop Over Items | Processes multiple newsletter recipients |
| Send email to client | Delivers personalized newsletters via Gmail |
| Update status as Sent | Marks newsletters as successfully delivered |

🛠️ Customization Tips

- Modify AI prompts for different content styles and tones
- Add Slack notifications instead of or alongside Gmail
- Export to different formats (PDF, Word, etc.)
- Schedule multiple sending times for different client segments
- Add approval workflows with webhook triggers
- Integrate with CRM systems for client management

📒 Suggested Sticky Notes for Workflow

| Node/Section | Sticky Note Content |
|--------------|---------------------|
| Manual Trigger | "Click to start newsletter generation process" |
| AI Content Generation | "Customize prompts here for different newsletter styles" |
| Google Drive Upload | "Organized storage - change folder structure as needed" |
| Gmail Admin Notification | "Update admin email addresses and notification templates" |
| Schedule Trigger | "Set optimal sending times for your audience" |
| Client Email Loop | "Handles bulk sending - monitors for delivery errors" |
| Status Updates | "Maintains audit trail - prevents duplicate processing" |

📎 Required Files

| File Name | Purpose |
|-----------|---------|
| Newsletter_Management.xlsx | Google Sheet to manage topics, clients, and status tracking |
| Client_Database.xlsx | Client contact information and preferences |
| Newsletter_Workflow.json | Main n8n workflow export for this automation |

🧪 Testing Tips

1. Add one test topic with status = Pending and run the manual trigger
2. Verify AI content generation produces quality HTML
3. Check Google Drive upload and folder organization
4. Test admin email delivery and formatting
5. Add a test client with a valid email for the scheduled workflow
6. Monitor workflow logs for API responses and errors
7. Confirm status updates occur at each step

🏷 Suggested Tags & Categories

#Newsletter #EmailMarketing #ContentGeneration #ClientCommunication #Automation #GoogleWorkspace #AIContent #MarketingAutomation #WorkflowManagement #BusinessProcess

🔧 Prerequisites

- Google Workspace account (Sheets, Drive, Gmail)
- OpenAI API account with GPT-4 access
- n8n instance (Cloud or self-hosted)
- Basic understanding of Google Sheets and email marketing

📊 Expected Performance

- **Setup Time:** 30-45 minutes
- **Monthly Executions:** 100-500 (varies by newsletter frequency)
- **Processing Time:** 2-5 minutes per newsletter
- **Scalability:** Handles 100+ clients efficiently

🚨 Important Notes

- Ensure proper Google API permissions are configured
- Monitor OpenAI API usage and rate limits
- Set up error handling for failed email deliveries
- Regularly backup your Google Sheets data
- Test thoroughly before production deployment

💡 Advanced Features

- **Approval Workflows:** Add manual approval steps between generation and sending
- **A/B Testing:** Create multiple versions and track performance
- **Analytics Integration:** Connect with Google Analytics for tracking
- **Multi-language Support:** Generate content in different languages
- **Dynamic Personalization:** Use client data for personalized content
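As a small illustration of the "Validate Status as Pending" gate referenced in the workflow logic above, a Code node could filter sheet rows like this; the column names follow the sheet structure shown earlier.

```javascript
// Sketch of the Pending gate: only rows whose Status column is exactly
// 'Pending' (and that actually carry a Topic) continue to AI generation.
return items.filter(({ json: row }) => row.Status === 'Pending' && row.Topic);
```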
by Don Jayamaha Jr
⏱️ Analyze Tesla (TSLA) short-term market structure and momentum using 6 technical indicators on the 15-minute timeframe.

This AI agent tool is part of the Tesla Quant Trading AI Agent system. It is designed to detect intraday shifts in volatility, trend strength, and potential reversal signals.

⚠️ Not standalone. This agent is triggered via Execute Workflow by the Tesla Financial Market Data Analyst Tool.

🔌 Requires:
- Tesla Quant Technical Indicators Webhooks Tool
- Alpha Vantage Premium API Key

📊 What It Does

This workflow pulls the latest 20 data points for 6 key technical indicators from a webhook-powered source, then uses GPT-4.1 to interpret market momentum and structure.

Connected Indicators:
- **RSI (Relative Strength Index)**
- **MACD (Moving Average Convergence Divergence)**
- **BBANDS (Bollinger Bands)**
- **SMA (Simple Moving Average)**
- **EMA (Exponential Moving Average)**
- **ADX (Average Directional Index)**

The output is a structured JSON with:
- Market summary
- Timeframe (15m)
- Indicator values

📋 Sample Output

{
  "summary": "TSLA shows fading momentum. RSI dropped below 60, MACD is flattening, and BBANDS are tightening. Expect short-term consolidation.",
  "timeframe": "15m",
  "indicators": {
    "RSI": 58.3,
    "MACD": { "macd": -0.020, "signal": -0.018, "histogram": -0.002 },
    "BBANDS": { "upper": 183.10, "lower": 176.70, "middle": 179.90, "close": 177.60 },
    "SMA": 178.20,
    "EMA": 177.70,
    "ADX": 19.6
  }
}

🧠 Agent Components

| Module | Role |
| --------------------- | -------------------------------------------------------- |
| Webhook Data Node | Calls /15minData endpoint for Alpha Vantage indicators |
| LangChain Agent | Parses indicator payloads and generates reasoning |
| OpenAI GPT-4.1 | Powers the AI logic to interpret technical structure |
| Memory Module | Maintains session consistency for multi-agent calls |

🛠️ Setup Instructions

1. Import the workflow into n8n. Name it: Tesla_15min_Indicators_Tool
2. Configure the webhook source: install and publish the Tesla_Quant_Technical_Indicators_Webhooks_Tool, and ensure /15minData is publicly reachable (or tunnel-enabled).
3. Add credentials: Alpha Vantage API Key (HTTP Query Auth); OpenAI GPT-4.1 (OpenAI Chat Model).
4. Link as a sub-agent: this workflow is not triggered manually. It is executed using Execute Workflow by 👉 Tesla_Financial_Market_Data_Analyst_Tool. Pass in: message (optional) and sessionId (for short-term memory linkage).

📌 Sticky Notes Summary

- 🟢 Trigger Integration – Receives sessionId and message from parent
- 🟡 Webhook Fetcher – Pulls Alpha Vantage data from /15minData
- 🧠 GPT-4.1 Reasoning – Produces structured JSON insight
- 🔵 Session Memory – Maintains evaluation flow across tools
- 📘 Tool Description – Explains indicator use and AI output format

🔒 Licensing & Author

© 2025 Treasurium Capital Limited Company. All logic, formatting, and agent design are protected under copyright. No resale or public re-use without permission.

Created by: Don Jayamaha
Creator Profile: https://n8n.io/creators/don-the-gem-dealer/

🚀 Build faster intraday Tesla trading models using clean 15-minute indicator insights—processed by AI. Required by the Tesla Financial Market Data Analyst Tool.
by Jimleuk
This n8n workflow demonstrates how to automate the often time-consuming form-filling tasks in the early stages of the tendering process: the Request for Proposal document, or "RFP". It does this by utilising a company's knowledgebase to generate question-and-answer pairs using Large Language Models.

How it works

1. A buyer's RFP is submitted to the workflow as a digital document that can be parsed.
2. Our first AI agent scans and extracts all questions from the document into list form.
3. The supplier sets up an OpenAI assistant beforehand, loaded with company brand, marketing and technical documents.
4. The workflow loops through each of the buyer's questions and poses these to the OpenAI assistant.
5. The assistant's answers are captured until all questions are satisfied, and are then exported into a new document for review.
6. A sales team member is then able to use this document to respond quickly to the RFP before their competitors.

Example Webhook Request

curl --location 'https://<n8n_webhook_url>' \
--form 'id="RFP001"' \
--form 'title="BlueChip Travel and StarBus Web Services"' \
--form 'reply_to="jim@example.com"' \
--form 'data=@"k9pnbALxX/RFP Questionnaire.pdf"'

Requirements

An OpenAI account to use AI services.

Customising the workflow

OpenAI assistants are only one approach to hosting a company knowledgebase for AI to use. Exploring different solutions, such as building your own RAG-powered database, can sometimes yield better results in terms of control over how the data is managed and cost.
by Ranjan Dailata
Notice

Community nodes can only be installed on self-hosted instances of n8n.

Who this is for?

The Search Engine Intelligence Extractor is a powerful n8n automation that leverages Bright Data’s MCP-based AI Agents to simulate human-like searches across Google, Bing, and Yandex, and then distills clean, structured insights using Google Gemini.

This workflow is tailored for:

- SEO analysts researching competitors or market trends
- Market researchers needing real-time search visibility
- Journalists & content writers gathering contextual insights
- AI developers creating intelligent assistants
- Digital marketers tracking brand mentions or news

What problem is this workflow solving?

Traditional scraping of search engines is often blocked, cluttered, or filled with irrelevant information. Manually analyzing and cleaning this data for insight is time-consuming.

This workflow solves the problem by:

- Simulating real user search behavior via the Bright Data MCP-based AI Agent
- Performing multi-platform search (Google, Bing, Yandex) in one unified flow
- Extracting clean, human-readable results (stripping ads, navigation, etc.; a cleanup sketch follows below)
- Structuring the content using the Google Gemini LLM
- Automating delivery via Webhook or saving to disk

What this workflow does

- Input Fields Node: Accepts the search query; accepts an action, for example "Perform a google search" (replace the action with bing, yandex, etc. for other search providers); accepts a Webhook notification URL.
- Bright Data MCP Agent Execution: Triggers Bright Data’s intelligent search agent and handles search navigation, result loading, and pagination.
- Human Readable Data Extractor: Cleanses HTML, removes ads, footers, and irrelevant links, and produces a readable narrative of results.
- Final Output Handling: Saves the processed response to disk and sends the structured data to a Webhook for real-time use.

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API Key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp

Setup

1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker on the Bright Data control panel. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN within the Environments textbox above as API_TOKEN=<your-token>

How to customize this workflow to your needs

- Add Scheduled Execution: Add a Cron trigger to run this workflow on a set schedule (e.g., daily/weekly keyword tracking).
- Push Results to Custom Destinations: Connect the output to Google Sheets (for analytics or dashboards), PostgreSQL or MySQL DBs (for structured storage), Notion or Airtable (for content pipelines), or Slack or Email (for alerting teams).
- Customize Webhook Notifications: Update the Webhook URL in the notification node to push processed results to external APIs, CRMs, or real-time dashboards.
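As an illustration of the Human Readable Data Extractor step mentioned above, a Code node could strip the noisy markup before the Gemini pass along these lines; a production version would also target ad and navigation blocks specifically.

```javascript
// Illustrative cleanup pass: strip scripts, styles, and tags from the raw
// search HTML, leaving plain text for the Gemini LLM to structure.
const html = $json.html ?? '';
const text = html
  .replace(/<script[\s\S]*?<\/script>/gi, '')
  .replace(/<style[\s\S]*?<\/style>/gi, '')
  .replace(/<[^>]+>/g, ' ')  // drop remaining tags
  .replace(/\s+/g, ' ')      // collapse whitespace
  .trim();
return [{ json: { readable: text } }];
```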
by Roninimous
This n8n workflow integrates Shopify order management with Telegram, allowing you to query open orders and order details directly through Telegram chat commands. It provides an interactive way to monitor your Shopify store orders using Telegram as an interface.

Key Features

- Telegram Trigger: Listens for messages and callback queries from your Telegram bot.
- Switch Node: Routes incoming Telegram messages to different flows based on message content: the /orders command fetches all open orders; callback queries starting with /order_ fetch details of a specific order.
- Shopify Get Orders: Retrieves all open orders from your Shopify store using your Shopify API credentials.
- Conditional Check (If Node): Determines if there are any open orders and branches accordingly: if orders exist, prepares an interactive Telegram message with a list of orders; if no orders exist, sends a "No Order" message.
- Orders Code Node: Formats the list of open orders into a Telegram message with inline buttons. Each button corresponds to an order and sends callback data containing the order ID (a sketch of this node follows below).
- Get Order Details: When a user selects an order button, the workflow extracts the order ID from the callback data, fetches detailed order information from Shopify, and formats the order items into a readable message.
- Send Messages to Telegram: Sends formatted messages back to Telegram: the list of open orders with clickable buttons, detailed information about a selected order, or a "No Order" notification if there are no open orders.

How It Works

1. A Telegram user sends /orders to the bot.
2. The workflow fetches open orders from Shopify and sends a message with buttons listing each order.
3. When a user clicks an order button, the workflow fetches and displays detailed information about that specific order in Telegram.
4. If there are no open orders, the bot replies accordingly.

Setup Instructions

1. Create a Telegram Bot: Use @BotFather on Telegram to create a bot and get the bot token.
2. Obtain Shopify API Credentials: Create a private app in your Shopify admin dashboard with permission to read orders. Obtain the API key and access token.
3. Configure n8n Credentials: Add your Telegram bot token as Telegram API credentials in n8n. Add your Shopify API credentials in n8n Shopify credentials.
4. Import the Workflow: Import this workflow into your n8n instance. Update the Telegram and Shopify credential nodes to use your credentials.
5. Set Webhook URLs: Ensure your Telegram bot webhook is set correctly to receive messages. n8n webhook URLs should be publicly accessible.
6. Test the Workflow: Send /orders to your Telegram bot to verify it retrieves and lists open orders.

Customization Guidance

- Modify Commands: Update the Switch node to add more Telegram commands or change existing ones.
- Change Message Formats: Edit the Code nodes to customize how order lists and details appear.
- Expand Shopify Integration: Add nodes to handle other Shopify operations like updating orders, managing products, etc.
- Multi-User Support: Adapt the workflow to handle multiple Telegram chat IDs dynamically.

Security and Implementation Notes

The native Telegram node in n8n has limitations: it does not support sending dynamic inline keyboard arrays in JSON format, which is essential for displaying a variable number of buttons depending on how many orders are retrieved from Shopify. To overcome this, this workflow uses the HTTP Request node to call Telegram’s API directly, allowing full flexibility to send dynamic inline keyboards as JSON objects. (I will make an update once the Telegram Node supports dynamic inline keyboards.)

Security Considerations:

- Always store your Telegram bot token securely in n8n credentials and never expose it in the HTTP Request node’s URL or body directly. Use environment variables or n8n credentials to inject tokens safely.
- Be mindful of Telegram API rate limits and add error handling in your workflow.
- While using HTTP Request nodes increases flexibility, it also requires careful management of request payloads and authentication, as opposed to the built-in Telegram node, which abstracts much of this complexity.

Benefits

- Quickly access Shopify order data without leaving Telegram.
- Interactive inline buttons improve user experience.
- Automated, real-time integration between Shopify and Telegram.
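For context, the dynamic keyboard the native node cannot build is just a nested array in the request body. A Code-node sketch, assuming typical Shopify order fields (order_number, total_price, id) and a chat_id carried through the flow:

```javascript
// Sketch of the Orders Code node: one inline button per open order, with
// the order ID encoded in callback_data for the /order_ branch.
const keyboard = items.map(({ json: order }) => ([{
  text: `#${order.order_number} (${order.total_price})`,
  callback_data: `/order_${order.id}`,
}]));

// Body for a direct HTTP Request to Telegram's sendMessage endpoint.
return [{ json: {
  chat_id: $json.chat_id,
  text: 'Open orders, tap one for details:',
  reply_markup: { inline_keyboard: keyboard },
}}];
```

The HTTP Request node then POSTs this JSON to https://api.telegram.org/bot<token>/sendMessage, with the token injected from credentials rather than hard-coded in the URL.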
by Ranjan Dailata
Who this is for?

Google SERP Tracker + Trends and Recommendations is an AI-powered n8n workflow that extracts Google search results via Bright Data, parses them into structured JSON using Google Gemini, and generates actionable recommendations and search trends. It outputs CSV reports and sends real-time Webhook notifications.

This workflow is ideal for:

- SEO Agencies needing automated rank & trend tracking
- Growth Marketers seeking daily/weekly search-based insights
- Product Teams monitoring brand or competitor visibility
- Market Researchers performing search behavior analysis
- No-code Builders automating search intelligence workflows

What problem is this workflow solving?

Traditional tracking of search engine rankings and search trends is often fragmented and manual. Analyzing SERP changes and trends requires:

- Manual extraction or using unstable scrapers
- Unstructured or cluttered HTML data
- Lack of actionable insights or recommendations

This workflow solves the problem by:

- Automating real-time Google SERP data extraction using Bright Data
- Structuring unstructured search data using the Google Gemini LLM
- Generating actionable recommendations and trends
- Exporting CSV reports automatically to disk for downstream use
- Notifying external systems via Webhook

What this workflow does

1. Accepts search input, zone name, and webhook notification URL
2. Uses Bright Data to extract Google Search Results
3. Uses the Google Gemini LLM to parse the SERP data into structured JSON
4. Loops over structured results to extract recommendations and trends (a sketch of this step follows below)
5. Saves both as .csv files, for example:
   - Google_SERP_Recommendations_Response_2025-06-10T23-01-50-650Z.csv
   - Google_SERP_Trends_Response_2025-06-10T23-01-38-915Z.csv
6. Sends a Webhook with the summary or file reference

LLM Usage

Google Gemini LLM handles:

- Parsing Google Search HTML into structured JSON
- Summarizing recommendation data
- Deriving trends from the extracted SERP metadata

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker Token.
4. Add a Google Gemini API key (or access through Vertex AI or proxy).
5. Update the Set input fields with the search criteria, Bright Data Zone name, and Webhook notification URL.

How to customize this workflow to your needs

- Input Customization: Set your target keyword/phrase in the search field and add your webhook_notification_url for external triggers or notifications.
- SERP Source: You can extend the Bright Data search logic to include other engines like Bing or DuckDuckGo.
- Output Format: Edit the .csv structure in the Convert to File nodes if you want to include/exclude specific columns.
- LLM Prompt Tuning: The Gemini LLM prompt inside the Recommendation or Trends extractor nodes can be fine-tuned for domain-specific insight (e.g., SEO vs eCommerce focus).
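The loop that feeds the Convert to File nodes might shape Gemini's structured output into flat rows roughly like this; the field names (trends, keyword, direction, evidence) are assumptions about the JSON schema the LLM is prompted to return, not the template's actual columns.

```javascript
// Sketch: flatten Gemini's structured trends into one row per item so the
// Convert to File (CSV) node can serialize them directly.
const parsed = $json.trends ?? [];
return parsed.map((t) => ({ json: {
  keyword: t.keyword,
  trend: t.direction,   // e.g. rising / stable / declining
  evidence: t.evidence, // supporting snippet from the SERP
  capturedAt: new Date().toISOString(),
}}));
```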