by RealSimple Solutions
🧠 Analyze and Diagnose n8n Workflow Errors Automatically via OpenAI and Email

> ⚠️ This template is available on ☁️ Cloud & 🖥️ self-hosted n8n instances with the OpenAI node enabled.

👤 Who is this for?
This workflow is designed for n8n developers, automation engineers, and DevOps teams who want to automatically capture and analyze workflow errors, and receive professional HTML-styled diagnostics directly in their inbox.

💥 What problem does this solve?
Manually troubleshooting failed workflows in n8n can be time-consuming. This template streamlines error detection by:
- Capturing workflow failures using the Error Trigger node
- Diagnosing root causes with the help of OpenAI
- Sending a fully formatted, human-readable HTML error report via email
- Including practical resolutions and next-step suggestions
This helps you or your team resolve issues faster and avoid repeated manual debugging.

⚙️ What this workflow does
⚡ Triggers on any n8n workflow error
📦 Extracts relevant error metadata, including node, execution ID, and timestamps
🧠 Sends error content to OpenAI for analysis and recommendations
💌 Generates an HTML email report with inline styles and clear formatting
📥 Emails the result to a system administrator or support email

🛠️ Setup
1. Install the OpenAI node in your self-hosted n8n instance.
2. Add your OpenAI API key securely in credentials.
3. Configure the SMTP Email node with your email credentials.
4. Adjust the Error Trigger to monitor specific workflows or all workflows.
5. Set your preferred admin or dev email address in the final node.

🔧 How to customize this workflow to your needs
🧩 Use a Set node to define your variables, such as the default admin email and an optional workflow filter.
✍️ Customize the prompt sent to OpenAI if you want deeper or more specific analysis (a prompt-building sketch follows below).
🎨 Modify the email HTML styles to match your brand or internal format.
💾 Add additional logging (e.g., to Airtable, Google Sheets, or Notion) for long-term error tracking.

📌 Sticky Note
Title: Automated Error Reporter with AI-Powered Diagnosis
Description: Captures any n8n error, sends it to OpenAI, and emails a beautiful HTML report to the administrator with steps to resolve the issue. Requires OpenAI credentials and SMTP configured.
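For reference, here is a minimal sketch of how a Code node might shape the Error Trigger payload into the diagnostic prompt sent to OpenAI. The field names follow n8n's documented Error Trigger output, but treat the exact shape as an assumption and verify it against a real execution in your instance.

```typescript
// Minimal sketch: turn an Error Trigger event into an OpenAI prompt.
// The ErrorEvent shape below is an assumption based on n8n's Error
// Trigger output (workflow, execution, error details); verify it.
interface ErrorEvent {
  workflow: { id: string; name: string };
  execution: {
    id: string;
    url?: string;
    lastNodeExecuted?: string;
    error: { message: string; stack?: string };
  };
}

function buildDiagnosticPrompt(e: ErrorEvent): string {
  return [
    `Workflow "${e.workflow.name}" (id ${e.workflow.id}) failed.`,
    `Failing node: ${e.execution.lastNodeExecuted ?? "unknown"}`,
    `Execution ID: ${e.execution.id}`,
    `Error message: ${e.execution.error.message}`,
    e.execution.error.stack ? `Stack trace:\n${e.execution.error.stack}` : "",
    "Diagnose the likely root cause and suggest concrete resolution steps.",
  ].filter(Boolean).join("\n");
}
```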
by SamirLiu
📝 What this workflow does
Every morning at 8 a.m., this workflow fetches the latest AI-related articles from both GNews and NewsAPI. It merges up to 40 new articles daily, selects the 15 most relevant ones on AI technology and applications, and uses GPT-4.1 to generate concise summaries in accurate Traditional Chinese (while preserving essential English technical terms). Each summary also includes the article link for easy referral. The compiled digest is then posted to your designated Telegram account or group. A sketch of the two fetches and the merge step follows below.

👥 Who is this for?
- AI enthusiasts, professionals, and anyone interested in artificial intelligence news
- Individuals and teams wanting a concise daily digest of AI developments in Traditional Chinese
- Telegram users who prefer automated information delivery

🎯 What problem does this workflow solve?
With the rapid evolution of AI technology, it can be overwhelming to keep up with new developments. This workflow addresses information overload by automatically collecting, summarizing, and translating the most important AI news each morning — all delivered conveniently to your chosen Telegram channel or group.

⚙️ Setup
🔑 Add NewsAPI and GNews API keys
- Register for accounts on NewsAPI.org and GNews to obtain your API keys.
- Input your NewsAPI key directly into the Fetch NewsAPI articles node.
- Input your GNews API key into the Fetch GNews articles node.
🤖 Set up your Telegram Bot
- Create a Telegram bot via BotFather and copy the generated bot token.
- In n8n, create Telegram Bot credentials using this token.
- In the Send summary to Telegram node, enter the chat ID of your target user, group, or channel to receive the messages.
🧠 Configure OpenAI credentials
- In n8n, create a new credential using your OpenAI API key.
- Assign this credential to the GPT-4.1 Model node (or equivalent OpenAI/AI nodes).
After completing these steps, your workflow is fully configured to fetch, summarize, and deliver daily AI news to your selected Telegram chat automatically.

🛠️ How to customize this workflow
- 🔍 **Change the topic:** Update the keywords in the NewsAPI and GNews nodes for other subjects (e.g., "blockchain", "quantum computing").
- ⏰ **Adjust delivery time:** Modify the scheduled trigger to your preferred hour.
- ✍️ **Tweak summary style or language:** Refine the prompt in the AI summarizer node for different tones, or translate into other languages as needed.

📦 Dependencies
- NewsAPI account
- GNews account
- Telegram Bot
- OpenAI API access (for GPT-4.1) or a compatible AI model for the Langchain agent
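As a minimal sketch of the fetch-and-merge step referenced above: the NewsAPI call uses the documented /v2/everything endpoint, while the GNews parameter names (apikey, max) are assumptions to verify against the GNews v4 docs.

```typescript
// Sketch: fetch AI articles from both sources and de-duplicate by URL
// before handing the merged list to the GPT-4.1 summarizer.
interface Article { title: string; url: string; publishedAt: string }

async function fetchAiNews(newsApiKey: string, gnewsKey: string): Promise<Article[]> {
  const [newsApi, gnews] = await Promise.all([
    fetch(`https://newsapi.org/v2/everything?q=artificial+intelligence&pageSize=20&apiKey=${newsApiKey}`)
      .then(r => r.json()),
    fetch(`https://gnews.io/api/v4/search?q=AI&max=20&apikey=${gnewsKey}`) // GNews param names assumed
      .then(r => r.json()),
  ]);
  // Both APIs return an `articles` array; drop duplicates by URL.
  const merged: Article[] = [...(newsApi.articles ?? []), ...(gnews.articles ?? [])];
  const seen = new Set<string>();
  return merged.filter(a => {
    if (seen.has(a.url)) return false;
    seen.add(a.url);
    return true;
  });
}
```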
by Jimleuk
This n8n template demonstrates how to build a simple YouTube MCP server to look up videos on YouTube and download their transcripts for research purposes.

YouTube videos are a great source of new and updated information on a variety of cutting-edge developments, but they're not always simple to understand, and lengthy videos may take too much time. Using this MCP server, you can extract their transcripts and feed them to your AI agents, which can then break the content down into manageable learnings and insights.

How it works
- An MCP server trigger is used and connected to 3 custom workflow tools: Youtube Search, Youtube Transcripts, and Usage Reports.
- Both Youtube tools use an external scraping service called APIFY.com. This is my preference as it's a much simpler interface and there are no rate limits. A sketch of such an actor call follows below.
- The Youtube Search tool fetches 10 results based on the user's query.
- The Youtube Transcripts tool downloads the subtitles from one or more given URLs.
- The Usage Reports tool pulls in your APIFY.com monthly spending and limits as a way to check your account.

How to use
This Apify Youtube MCP server allows any compatible MCP client to research YouTube videos for any desired topic. An Apify account is required to connect and use the service.
- Connect your MCP client by following the n8n guidelines here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
- Alternatively, connect any n8n AI agent with the MCP client tool.
- Try the following queries in your MCP client:
  - "what is MCP?"
  - "How can I use MCP in n8n?"
  - "How can I use Apify's official MCP server?"

Requirements
- APIFY.com for YouTube scraping. This is a paid service, but there is a $5 free tier which is ample for this template.
- MCP client or agent, such as Claude Desktop: https://claude.ai/download

Customising this workflow
- Add as many APIFY.com actors as required for your use-case or users.
- Consider using Apify's official MCP server for 4000+ available tools.
- Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
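A minimal sketch of an actor call behind the Youtube Search tool, using Apify's documented run-sync-get-dataset-items API. The actor ID and input fields below are placeholders, not the specific actor this template uses; check the actor's own input schema.

```typescript
// Sketch: run an Apify actor synchronously and return its dataset items.
async function searchYoutube(query: string, apifyToken: string) {
  const actorId = "SOME_USER~youtube-scraper"; // placeholder actor ID
  const res = await fetch(
    `https://api.apify.com/v2/acts/${actorId}/run-sync-get-dataset-items?token=${apifyToken}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // Input fields are actor-specific; these names are illustrative only.
      body: JSON.stringify({ searchQueries: [query], maxResults: 10 }),
    },
  );
  return res.json(); // array of video results per the actor's output schema
}
```

The same pattern covers the transcript tool: swap the actor ID and pass one or more video URLs in the input.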
by Alex Kim
Automatically convert documents from Google Drive into vector embeddings using OpenAI, LangChain, and PGVector — fully automated through n8n.

⚙️ What It Does
This workflow monitors a Google Drive folder for new files, supports multiple file types (PDF, TXT, JSON), and processes them into vector embeddings using OpenAI's text-embedding-3-small model. These embeddings are stored in a Postgres database using the PGVector extension, making them query-ready for semantic search or RAG-based AI agents (see the query sketch below). After successful processing, files are moved to a separate "vectorized" folder to avoid duplication.

💡 Use Cases
- Powering Retrieval-Augmented Generation (RAG) AI agents
- Semantic search across private documents
- AI assistant knowledge ingestion
- Automated document pipelines for indexing or classification

🧠 Workflow Highlights
- **Trigger Options:** Manual or Scheduled (3 AM daily by default)
- **Supported File Types:** PDF, TXT, JSON
- **Embedding Stack:** LangChain Text Splitter, OpenAI Embeddings, PGVector
- **Deduplication:** Files are moved after processing
- **License:** CC BY-NC-SA 4.0
- **Author:** AlexK1919

🛠 What You'll Need
- **Google Drive OAuth2** credentials (connected to the Search Folder, Download File, and Move File nodes)
- **OpenAI API Key** (used in the Embeddings OpenAI node)
- **Postgres + PGVector** database (connected in the Postgres PGVector Store node)

🔧 Step-by-Step Setup Instructions
1. Create Google OAuth2 credentials in n8n and connect them to all Google Drive nodes.
2. Set your source folder ID in the Search Folder node — this is where incoming files are placed.
3. Set your processed folder ID in the Move File node — files will be moved here after vectorization.
4. Ensure you have a PGVector-enabled Postgres instance and input the table name and collection in the Postgres PGVector Store node.
5. Add your OpenAI credentials to the Embeddings OpenAI node and select text-embedding-3-small.
6. Optional: Activate the Schedule Trigger node to run daily, or configure your own schedule.
7. Run manually by triggering When clicking ‘Test workflow’ for on-demand ingestion.

🧩 Customization Tips
Want to support more file types or enhance the pipeline?
- **Add new extractors:** Use Extract from File with other formats like DOCX, Markdown, or HTML.
- **Refine logic by file type:** The Switch node routes files to the correct extraction method based on MIME type (application/pdf, text/plain, application/json).
- **Pre-process with OCR:** Add an OCR step before extraction to handle scanned PDFs or images.
- **Add filters:** Enhance the Search Folder or Switch node logic to skip specific files or folders.

📄 License
This workflow is available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. You are free to use, adapt, and share this workflow for non-commercial purposes under the terms of this license. Full license details: https://creativecommons.org/licenses/by-nc-sa/4.0/
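Once documents are ingested, a downstream agent can query the store directly. A minimal sketch using the `pg` client, assuming a table named n8n_vectors with document and embedding columns as LangChain-style PGVector integrations typically create; adjust the table and column names to your setup.

```typescript
// Sketch: cosine-distance similarity search against the PGVector table.
// Requires `npm install pg` and CREATE EXTENSION vector on the database.
import { Client } from "pg";

async function similaritySearch(queryEmbedding: number[], k = 5) {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  // "<=>" is pgvector's cosine-distance operator; the vector is passed
  // as its text form, e.g. "[0.1,0.2,...]".
  const { rows } = await client.query(
    `SELECT document, embedding <=> $1::vector AS distance
       FROM n8n_vectors
      ORDER BY distance
      LIMIT $2`,
    [JSON.stringify(queryEmbedding), k],
  );
  await client.end();
  return rows;
}
```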
by Amjid Ali
Automate Digital Delivery After PayPal Purchase Using n8n
A Complete Step-by-Step Guide to Seamless Template Delivery
Built by Amjid Ali – SyncBricks

Deliver personalized files instantly after PayPal transactions using n8n – without writing a single backend line.

🚀 What This n8n Workflow Does
This automation template helps you automatically deliver a digital product (such as an n8n template or JSON file) to customers who pay via PayPal — within seconds. You can:
- Automatically extract customer info
- Identify what was purchased
- Send a clean, branded email with the product file
- Promote your other courses, books, and tools

📦 Use Case Example
Product: AI-Powered Social Media Content Generator & Publisher
When a customer buys this product through PayPal, this automation:
- Listens for a successful payment event
- Fetches order details via API
- Sends an HTML email with the template attached
- Promotes your other offerings with embedded links

🔧 Prerequisites
You'll need:
- An n8n instance (self-hosted or n8n Cloud)
- A PayPal developer account
- PayPal OAuth2 credentials configured in n8n
- Your product hosted as a downloadable .json file (Oracle, Dropbox, GitHub, etc.)
- SMTP email credentials in n8n

🧠 Step-by-Step Setup
1. Webhook Trigger (Node: Webhook)
   Listens for a POST request from PayPal's webhook for PAYMENT.CAPTURE.COMPLETED events.
   📌 Add the webhook to your PayPal Developer App > Webhooks.
2. Wait (Node: Wait)
   Adds a brief delay to ensure the payment is completely processed before continuing.
3. Filter Event Type (Node: Switch)
   Processes only when the event is PAYMENT.CAPTURE.COMPLETED.
4. Fetch Order Details (Node: HTTP Request)
   Retrieves the order information from PayPal's Orders API.
   URL format: https://api.paypal.com/v2/checkout/orders/{{ order_id }}
5. Extract Email & Product Info (Node: Set)
   Extracts first name, last name, email address, and the purchased item name.
6. Identify Product Purchased (Node: Switch)
   Checks if the product is "AI-Powered Social Media Content Generator & Publisher".
7. Download Workflow File (Node: HTTP Request)
   Fetches the hosted workflow JSON from object storage (Oracle in this case).
8. Convert to Downloadable File (Node: Code)
   Converts the JSON content into a binary file and attaches it (see the sketch below).
9. Send Custom Email (Node: Send Email)
   Sends a rich HTML email to the buyer with their name, the file attachment, the product name, and helpful resource links:
   - 📘 Mastering n8n Course on Udemy
   - 📖 Step-by-Step Guide (n8n Book)
   - 🎓 n8n Video Tutorials (Free Course)
   - ☁️ Sign up for n8n Cloud – Use code AMJID10
   - 🎥 YouTube Video Walkthrough

📚 Additional Learning Resources
🚀 My Full Automation Suite: explore more and master n8n with these resources:
- 🎓 Mastering n8n (Full Udemy Course)
- 📕 Get Your Step-by-Step Guide (n8n Book)
- 🎥 Get Step-by-Step Tutorials (Video Course)
- ☁️ Sign up for n8n Cloud
- 💡 Templates, Tools, and More
- 📺 YouTube Channel – SyncBricks

🙋 Need Help or Customization?
Reach out!
Email: amjid@amjidali.com
LinkedIn: linkedin.com/in/amjidali
Website: syncbricks.com
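Step 8's Code node is the only scripted step. Here is a sketch of its body, using n8n's documented $input and prepareBinaryData helpers (run in "Run Once for All Items" mode); verify the helper signature against your n8n version.

```typescript
// n8n Code node sketch: wrap the fetched workflow JSON as a binary
// attachment so the Send Email node can attach it. $input and
// this.helpers are n8n Code-node globals, not module imports.
const out = [];
for (const item of $input.all()) {
  const buffer = Buffer.from(JSON.stringify(item.json, null, 2), "utf-8");
  out.push({
    json: { fileName: "workflow-template.json" },
    binary: {
      // prepareBinaryData(buffer, fileName, mimeType) is n8n's documented helper.
      data: await this.helpers.prepareBinaryData(buffer, "workflow-template.json", "application/json"),
    },
  });
}
return out;
```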
by Luciano Gutierrez
Instagram Auto-Comment Responder with AI Agent Integration
Version: 1.1.0 ‧ n8n Version: 1.88.0+ ‧ License: MIT

A fully automated workflow for managing and responding to Instagram comments using AI agents. Designed to improve engagement and save time, this system listens for new Instagram comments, verifies and filters them, fetches relevant post data, processes valid messages with a natural-language AI, and posts context-aware replies directly on the original post.

Key Features
- 💬 AI-Driven Engagement: Intelligent responses to comments via a GPT-powered agent.
- ✅ Webhook Verification: Handles the Instagram webhook handshake to ensure secure integration.
- 📦 Data Extraction: Maps incoming payload fields (user ID, username, message text, media ID) for processing.
- 🚫 Self-Comment Filtering: Automatically skips comments made by the account owner to prevent loops.
- 📡 Post Data Retrieval: Fetches the media's id and caption from the Graph API (v22.0) before generating a reply.
- 🧠 Natural Language Processing: Uses a custom system prompt to maintain brand tone and context.
- 🔁 Automated Replies: Posts the AI-generated message back to the comment thread using Instagram's API.
- 🧩 Modular Architecture: Clear separation of steps via sticky notes and dedicated HTTP Request and Agent nodes.

Use Cases
- **Social Media Automation:** Keep followers engaged 24/7 with instant, relevant replies.
- **Community Building:** Maintain a consistent voice and tone across all interactions.
- **Brand Reputation Management:** Ensure no valid comment goes unanswered.
- **AI Customer Support:** Triage simple questions and direct followers to resources or support.

Technical Implementation
1. Webhook Verification (Nodes: Webhook + Respond to Webhook)
   Echoes hub.challenge to confirm subscription and secure incoming events.
2. Data Extraction (Node: Set)
   Maps payload fields into structured variables: conta.id, usuario.id, usuario.name, usuario.message.id, usuario.message.text, usuario.media.id, endpoint.
3. User Validation (Node: Filter)
   Skips processing if conta.id equals usuario.id (self-comments).
4. Post Data Retrieval (Node: HTTP Request, Get post data)
   GET https://graph.instagram.com/v22.0/{{ $json.usuario.media.id }}?fields=id,caption&access_token={{ credentials }}
   Captures the media's caption for richer context in replies.
5. AI Response Generation (Nodes: AI Agent + OpenRouter Chat Model)
   Uses a detailed system prompt with:
   - Profile persona (expert in AI & automations, friendly tone).
   - Input data (username, comment text, post caption).
   - Filtering logic (spam, praise, questions, vague comments).
   Returns either the reply text or [IGNORE] for irrelevant content.
6. Posting the Reply (Node: HTTP Request, Post comment)
   POST {{ $json.endpoint }}/{{ $json.usuario.message.id }}/replies with message={{ $json.output }}
   Sends the AI answer back under the original comment (a sketch of this call follows below).

Instructions for Setup
1. Import Workflow: In n8n > Workflows > Import from File, upload the provided .json template.
2. Configure Credentials:
   - Instagram Graph API (Header Auth or FacebookGraphApi) with instagram_basic, instagram_manage_comments scopes.
   - OpenRouter/OpenAI API key for the AI agent.
3. Customize System Prompt: Edit the AI Agent's prompt to adjust brand tone, language (Brazilian Portuguese), length, or emoji usage.
4. Test & Activate:
   - Publish a test comment on an Instagram post.
   - Verify each node's execution, ensuring the webhook, filter, data extraction, HTTP requests, and AI Agent respond as expected.
5. Extend & Monitor:
   - Add sentiment analysis or lead capture nodes as needed.
   - Monitor execution logs for errors or rate-limit events.

Tags: Social Media • Instagram Automation • Webhook Verification • AI Agent • HTTP Request • Auto Reply • Community Management
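Referring back to the Posting the Reply step above, here is a minimal sketch of the same call outside n8n. It mirrors the workflow's POST {endpoint}/{comment_id}/replies request; passing the token as a form field is an assumption to align with your Graph API credential setup.

```typescript
// Sketch: post an AI-generated reply under an Instagram comment.
async function replyToComment(
  endpoint: string, // e.g. https://graph.instagram.com/v22.0, as in the workflow
  commentId: string,
  message: string,
  accessToken: string,
) {
  const res = await fetch(`${endpoint}/${commentId}/replies`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ message, access_token: accessToken }),
  });
  if (!res.ok) throw new Error(`Reply failed: ${res.status} ${await res.text()}`);
  return res.json(); // id of the newly created reply on success
}
```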
by Ranjan Dailata
Who this is for?
Extract & Summarize Indeed Company Info is an automated workflow that extracts Indeed company profile information using Bright Data Web Unlocker, transforms it using Google Gemini's LLM, and forwards the transformed response with the summary to a specified webhook for downstream use.

This workflow is tailored for:
- Recruiters and HR teams looking to assess companies quickly during talent sourcing.
- Job seekers researching potential employers and needing summarized company insights.
- Market researchers and analysts monitoring competitor or industry players.

What problem is this workflow solving?
Searching and evaluating company profiles on Indeed manually can be time-consuming and inefficient, especially when dealing with large volumes of companies. Manually browsing, copying, and summarizing company descriptions, reviews, and ratings from Indeed hinders productivity and limits real-time insights. This workflow solves this by:
- Automating the extraction of company details from Indeed using Bright Data Web Unlocker.
- Summarizing the raw data using Google Gemini's language model for a quick, human-readable overview.
- Sending the transformed response with the summary to a chosen endpoint, like Slack, Notion, Airtable, or a custom webhook.

What this workflow does
This automated pipeline does the following:
1. Scrapes Indeed company profile pages (e.g., ratings, description, reviews) using Bright Data's Web Unlocker (see the sketch below).
2. Transforms the scraped content into structured JSON using n8n's built-in tools.
3. Summarizes and extracts meaningful insights using Google Gemini's large language model.
4. Forwards the summarized data to a specified webhook or app for real-time access, storage, or analysis.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication).
4. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or proxy).
5. Update the search query and Bright Data zone by navigating to the Set Indeed Search Query node.
6. Update the Webhook Notifier with the webhook endpoint of your choice.

How to customize this workflow to your needs
This workflow is built to be flexible - whether you're a company, a market researcher, an entrepreneur, or a data analyst. Here's how you can adapt it to fit your specific use case:
- **Changing the data source:** Replace the Indeed search input with other job or business listing platforms if needed (e.g., Glassdoor, Crunchbase).
- **Refining the LLM prompt:** Tailor the Gemini prompt to transform or summarize the Indeed company information in a specific format.
- **Routing the output to different destinations:** Send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
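A minimal sketch of the Web Unlocker call behind step 1, based on Bright Data's request API; treat the endpoint and field names (zone, url, format) as assumptions to check against your zone's documentation.

```typescript
// Sketch: fetch an Indeed company page through Bright Data Web Unlocker.
async function scrapeIndeedCompany(companyUrl: string, token: string, zone: string) {
  const res = await fetch("https://api.brightdata.com/request", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`, // the Header Auth credential in n8n
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ zone, url: companyUrl, format: "raw" }),
  });
  return res.text(); // unlocked page HTML, ready for the Gemini summarization step
}
```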
by Yang
👥 Who is this for?
This workflow is ideal for virtual assistants, researchers, developers, automation specialists, and data analysts who need to regularly extract and organize structured product information (like books) from a website. It's especially useful for those working with catalog-based websites who want to automate extraction and delivery of clean, sorted data.

🧩 What problem is this solving?
Manually copying product listings like book titles and prices from a website into a spreadsheet is slow and repetitive. This automation solves that problem by scraping content using Dumpling AI, extracting the right data using CSS selectors, and formatting it into a clean CSV file that is sent to your email—all triggered automatically when a new URL is added to Google Sheets.

⚙️ What this workflow does
This template automates an entire content-scraping and delivery process:
1. Watches a Google Sheet for new URLs
2. Scrapes the HTML content of the given webpage using Dumpling AI
3. Uses CSS selectors in the HTML node to extract each book from the page
4. Splits the HTML array into individual items
5. Extracts the book title and price from each HTML block
6. Sorts the books in descending order based on price
7. Converts the sorted data to a CSV file
8. Sends the CSV via email using Gmail
A sketch of the extraction and sort logic follows below.

🛠️ Setup
Google Sheets
- Create a sheet titled something like URLs.
- Add your product listing URLs (e.g., http://books.toscrape.com).
- Connect the Google Sheets trigger node to your sheet and ensure you have proper credentials connected.
Dumpling AI
- Create an account at Dumpling AI and generate your API key.
- Set the HTTP method to POST and pass the URL dynamically from the Google Sheet.
- Use Header Auth to include your API key in the request header.
- Make sure "cleaned": "True" is included in the body for optimized HTML output.
HTML Node
- The first HTML node extracts the main book container blocks using: .row > li
- The second HTML node parses out the individual fields:
  - title: h3 > a (via the title attribute)
  - price: .price_color
Sort Node
- Sorts books by price in descending order.
- Note: price is extracted as a string; ensure it's parsable if you plan to use numeric filtering later.
Convert to CSV
- The JSON data is passed into a Convert node and transformed into a CSV file.
Gmail
- Sends the CSV as an attachment to a designated email.

🔄 How to customize this workflow
- **Extract more data:** Add more CSS selectors in the second HTML node to pull fields like author, availability, or product links.
- **Switch destinations:** Replace Gmail with Slack, Google Drive, Dropbox, or another platform.
- **Adjust sorting:** Sort alphabetically or based on another extracted value.
- **Use a different source:** As long as the site structure is consistent, this can scrape any listing-like page.
- **Trigger differently:** Use a webhook, form submission, or schedule trigger instead of Google Sheets.

⚠️ Dependencies and Notes
- This workflow uses Dumpling AI to perform the web scraping. This requires an API key and uses credits per request.
- The HTML node depends on valid CSS selectors. If the site layout changes, the selectors may need to be updated.
- Ensure you're not scraping content from websites that prohibit automated scraping.
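The extraction and sort logic mentioned above can be reproduced outside n8n with the same selectors. A sketch assuming HTML like books.toscrape.com returns and the cheerio library (npm install cheerio):

```typescript
// Sketch: extract books with the template's selectors and sort by price.
import * as cheerio from "cheerio";

function extractBooks(html: string) {
  const $ = cheerio.load(html);
  // .row > li matches each book container block.
  const books = $(".row > li").toArray().map(el => {
    const block = $(el);
    return {
      title: block.find("h3 > a").attr("title") ?? "", // title attribute
      price: block.find(".price_color").text().trim(),  // e.g. "£51.77"
    };
  });
  // Price arrives as a string; strip the currency symbol before comparing.
  const num = (p: string) => parseFloat(p.replace(/[^\d.]/g, ""));
  return books.sort((a, b) => num(b.price) - num(a.price)); // descending
}
```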
by GYANENDRA DWIVEDI
🚀 WhatsApp Automation Template
Designed & Developed by Infridet Solutions Private Limited

🔧 Objective: Automate your lead nurturing and sales process from YouTube/Instagram → Landing Page → CRM → Email → WhatsApp → Sales → Deal Closure using tools like:
- 🌐 WordPress (Landing Page + Fluent Forms)
- 🧾 Google Sheets (Backup Log)
- 📩 FluentCRM (Lead Tagging + Email Sequences)
- 💬 Whinta.com (WhatsApp Messaging API)
- ⚙️ N8N (Workflow Automation Engine)

🧩 System Flow Overview:
1. Lead Source: YouTube or Instagram CTA
2. Landing Page: Built on WordPress with a story-driven design
3. Form Capture: Fluent Forms with dynamic input fields
4. Data Sync: Backup to Google Sheets; push lead to FluentCRM and tag as New Lead
5. Email Sequence: Warm-up emails (1 to 5) that introduce the offer or service
6. WhatsApp Outreach: Send a personalized message via Whinta, triggered 1 hour after form fill or the last email (a payload sketch follows below)
7. Sales Follow-Up: Sales team handles replies manually; CRM tag updated to Customer upon closing

📁 Folder Structure (Optional Git/Zip File):
📦 WhatsApp-Automation-Infridet/
│
├── whatsapp-automation-n8n.json   # N8N Flowchart Import File
├── email-templates.docx           # Warm-up Email Scripts
├── whinta-api-integration.pdf     # API Documentation
├── crm-tagging-notes.txt          # CRM Tag Setup Details
└── readme.md                      # This Instruction File

🛠️ Required Integrations & Setup
✅ Fluent Forms (WordPress)
- Embed a form with Name, Email, Phone.
- Enable a webhook to N8N: /lead-capture
✅ Google Sheets
- Use the n8n-nodes-base.googleSheets node.
- Capture name, email, phone, source, timestamp.
✅ FluentCRM
- REST API enabled.
- Push the contact and assign the tag New Lead.
- Set up email automation via tag trigger.
✅ SMTP Email (Optional)
- Use Gmail SMTP or Brevo.
- Trigger email on form submission.
✅ Whinta.com (WhatsApp API)
- Send a POST request; the payload includes phone, message, sender_id.
- Customize the message with personalization.

💬 Sample WhatsApp Message:
Hey {{name}}, Gyan here from Account Craft 👋 I saw your form submission – would you like help in starting your YouTube journey this week? Let me know. I'm just one text away. ✅

📧 Sample Email (Warmup Day 1):
> Subject: Welcome to Account Craft 🚀
> Body:
> Hi {{name}},
>
> I’m Gyan from Account Craft. Thanks for joining us!
> Here’s what’s coming next: exclusive videos, personalized tips, and real support to get your YouTube channel earning.
>
> Let’s go!
> – Gyan

🔁 CRM Tag Updates:
| Action            | Tag Assigned |
|-------------------|--------------|
| On form fill      | New Lead     |
| After WhatsApp    | Engaged      |
| After sale closed | Customer     |

📌 Final Output:
Once completed, the system will:
- Log all leads into a database
- Automatically send emails and WhatsApp messages
- Notify your sales team
- Update lead status without manual entry

> Automation Template Designed & Deployed by
> Infridet Solutions Private Limited
> Smart Integrations. Seamless Business.
> 🌐 www.infridetsolutions.com | 📞 +91-8853354829
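A sketch of the WhatsApp outreach step referenced in the flow overview. The template only specifies that the Whinta payload carries phone, message, and sender_id; the URL and sender_id value below are placeholders, not Whinta's documented API, so replace them with the values from your Whinta dashboard and the whinta-api-integration.pdf file.

```typescript
// Sketch: send the personalized WhatsApp message through Whinta.
async function sendWhatsApp(phone: string, name: string, apiKey: string) {
  const message =
    `Hey ${name}, Gyan here from Account Craft 👋 I saw your form submission – ` +
    `would you like help in starting your YouTube journey this week?`;
  return fetch("https://api.whinta.example/send", { // placeholder URL, not Whinta's real endpoint
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      phone,
      message,
      sender_id: "account-craft", // illustrative value
    }),
  });
}
```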
by Paulo Ramirez
Upload your CRM contacts to telli and schedule AI voice-agent calls

Introduction to telli and AI voice-agent calls
telli is an innovative platform that provides AI-powered voice agents capable of making calls and performing tasks tailored to specific customer use cases. These AI voice-agents can handle a wide range of communication tasks, from appointment scheduling to customer support, with remarkable efficiency and natural conversation flow.

This template is designed for businesses and organizations looking to automate their outbound calling processes using telli's AI voice-agents in conjunction with Airtable as their CRM. It solves the problem of manual call scheduling and data transfer between your CRM and calling system, saving time and reducing human error.

Prerequisites
- telli account
- Airtable base with contact information
- n8n instance

Step-by-Step Setup Guide
1. n8n Setup:
   - Create a new workflow in n8n.
   - Add the Airtable node to connect to your CRM table.
2. telli API Configuration:
   - Log in to your telli dashboard.
   - Locate and copy your API key under telli - Settings - API/Webhooks.
3. Workflow Configuration:
   - Add two HTTP Request nodes to your n8n workflow.
   - Set the "Authorization" header in both POST requests, replacing the value with your telli API key.
   - Configure the first request to use the /add-contact endpoint.
   - Set up the second request to use the /schedule-call endpoint.
4. Data Mapping:
   - Map the relevant fields from your Airtable node to the telli API requests.
5. Testing and Activation:
   - Run a test execution of your workflow.
   - Once satisfied with the results, activate the workflow.

API Endpoint Details

Add Contact Endpoint
- **URL:** https://api.telli.com/v1/add-contact
- **Method:** POST
- **Headers:** Authorization: YOUR-API-KEY, Content-Type: application/json
- **Payload:**

```json
{
  "external_contact_id": "string",
  "salutation": "string",
  "first_name": "string",
  "last_name": "string",
  "phone_number": "string",
  "email": "jsmith@example.com",
  "contact_details": {},
  "timezone": "string"
}
```

Schedule Call Endpoint
- **URL:** https://api.telli.com/v1/schedule-call
- **Method:** POST
- **Headers:** Authorization: YOUR-API-KEY, Content-Type: application/json
- **Payload:**

```json
{
  "contact_id": TELLI-CONTACT-ID,
  "agent_id": "string",
  "max_retry_days": 123,
  "call_details": {
    "message": "Hello, this is your friendly reminder!",
    "questions": [
      {
        "fieldName": "email",
        "neededInformation": "email of the customer",
        "exampleQuestion": "What is your email address?",
        "responseFormat": "email string"
      }
    ]
  },
  "override_from_number": "string"
}
```

Use Cases
This template is versatile and can be applied to various scenarios, including:
- **Lead Qualification:** Automatically schedule calls to new leads entered in your CRM.
- **Appointment Reminders:** Set up calls to remind clients of upcoming appointments.
- **Customer Feedback:** Schedule follow-up calls after product deliveries or service completions.

Uploading Multiple Contacts
For bulk operations, you have two options:
1. Loop Node: Include a Loop node in your n8n workflow to process multiple contacts sequentially.
2. Batch Endpoints: Instead of /add-contact and /schedule-call, use telli's batch endpoints:
   - /add-contacts-batch: Add multiple contacts within an array.
   - /schedule-calls-batch: Schedule multiple calls at once.
Example of batch endpoint usage:

```json
{
  "contacts": [
    { "name": "John Doe", "phone": "+1234567890" },
    { "name": "Jane Smith", "phone": "+1987654321" }
  ]
}
```

By leveraging this template, you can seamlessly integrate your Airtable CRM with telli's powerful AI voice-agents, automating your outbound calling process and enhancing your customer communication strategy.
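For reference, here is a sketch chaining the two single-record endpoints documented above, as the workflow's HTTP Request nodes do. The assumption that the add-contact response returns the new contact's id as `id` needs verifying against telli's API reference.

```typescript
// Sketch: create a contact, then schedule a call for it via telli's API.
async function addAndSchedule(apiKey: string, agentId: string) {
  const headers = { Authorization: apiKey, "Content-Type": "application/json" };

  const contact = await fetch("https://api.telli.com/v1/add-contact", {
    method: "POST",
    headers,
    body: JSON.stringify({ first_name: "John", last_name: "Doe", phone_number: "+1234567890" }),
  }).then(r => r.json());

  return fetch("https://api.telli.com/v1/schedule-call", {
    method: "POST",
    headers,
    body: JSON.stringify({
      contact_id: contact.id, // assumed response field; check telli's docs
      agent_id: agentId,
      max_retry_days: 3,
      call_details: { message: "Hello, this is your friendly reminder!" },
    }),
  }).then(r => r.json());
}
```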
by Ranjan Dailata
Who this is for?
The Automate Etsy Data Mining with Bright Data Scrape & Google Gemini workflow is designed for eCommerce analysts, product researchers, and AI developers seeking to extract actionable insights from Etsy listings at scale. It is ideal for:
- **eCommerce Entrepreneurs** - Researching product demand and competition.
- **Market Analysts** - Tracking pricing, reviews, and trends across Etsy categories.
- **Product Managers** - Identifying niche opportunities and design inspirations.
- **Data Scientists & AI Engineers** - Automating product intelligence pipelines.
- **Growth Hackers** - Leveraging Etsy insights to refine product-market fit.

What problem is this workflow solving?
Manually browsing Etsy to analyze product listings, pricing, reviews, and seller activity is slow, inconsistent, and unscalable. Scraping Etsy requires unlocking JavaScript-heavy content and structuring noisy data for analysis. This workflow solves:
- Automated and scalable scraping of Etsy product listings using Bright Data's infrastructure.
- Fully paginated, structured Etsy product data extraction via the Google Gemini LLM.
- Faster decision-making for product research and competitive analysis through fully automated, paginated data extraction (see the pagination sketch below).

What this workflow does
1. Receives input: sets the Etsy URL for the data extraction and analysis.
2. Uses Bright Data's Web Unlocker to extract content from the relevant pages.
3. Cleans and preprocesses the scraped content for readability.
4. Sends the content to Google Gemini for structured extraction and enrichment.
5. Persists the enriched results to disk.
6. Sends the response to a target system via Webhook notification.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. Add a Google Gemini API key (or access through Vertex AI or proxy).
5. Update the Set Etsy Search Query node with the brand content URL and the Bright Data zone name.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs
- **Input Sources:** Replace the static URL with dynamic input from Google Sheets, Webhook, or Airtable to research multiple niches.
- **Prompt Customization:** Adjust the Gemini prompts to extract specific insights, for example: list key features of the product, or summarize the review themes.
- **Data Output Options:** Update the Webhook notification to save data to Google Sheets, Notion or Airtable, SQL/NoSQL stores, or Slack/Email.
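A sketch of the paginated extraction loop, reusing Bright Data's request API with the same zone/url/format fields as in the Indeed sketch earlier in this listing. The ?page= query parameter and the stop condition are assumptions about Etsy's search pages; adjust both to what your target URLs actually return.

```typescript
// Sketch: request successive Etsy search pages through Web Unlocker
// until a page appears to contain no more listings.
async function scrapeAllPages(baseUrl: string, token: string, zone: string) {
  const pages: string[] = [];
  for (let page = 1; page <= 20; page++) { // hard cap as a safety limit
    const res = await fetch("https://api.brightdata.com/request", {
      method: "POST",
      headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
      body: JSON.stringify({ zone, url: `${baseUrl}?page=${page}`, format: "raw" }),
    });
    const html = await res.text();
    if (!html.includes("listing-link")) break; // assumed "results present" marker
    pages.push(html);
  }
  return pages; // each page then goes to Gemini for structured extraction
}
```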
by mariskarthick
QuantumDefender AI is a next-generation intelligent cybersecurity assistant designed to harness the symbolic strength of quantum computing's promise alongside cutting-edge AI capabilities. This sophisticated agent empowers SOC analysts, red teamers, and security researchers with rapid threat investigation, operational automation, and intelligent command execution—all driven by GPT-4 and integrated tools, accessible through Telegram or any medium.

🔑 Key Features:
- Expert-Level Cybersecurity Research & Analysis: Leverages powerful AI models to deliver clean, detailed, domain-specific insights across detection, remediation, and offensive security.
- Command & Control: Executes Linux shell commands, autonomous scripts, and system operations securely in isolated environments.
- Real-Time Web Intelligence: Utilizes the integrated Langsearch API to provide timely internet research with contextual relevance.
- Calendar & Scheduling Automation: Manages Google Calendar events or any similar application (create, update, delete, retrieve) dynamically from chat.
- Multi-Tool Orchestration: Combines calculator functions, internet searches, command execution, and messaging for comprehensive operational support.
- Telegram-Native Chatbot: Delivers an adaptive, memory-informed, and interactive conversational experience with immediate typing indicators and high responsiveness.
- Conversation & Session Management: Maintains context-aware, session-based memory to enable smooth, multi-turn dialogues with individual users. Sends "typing…" indicators during processing (see the sketch below) to ensure an interactive, user-friendly chat experience. Operates exclusively within Telegram, delivering rich, timely responses and leveraging all Telegram bot capabilities.
- Execution Intelligence & Safety: Fully autonomous in deciding which tools to invoke, how frequently, and in what sequence to fulfill user requests comprehensively and responsibly. Operates within a secure temporary-folder environment to contain all command executions safely and avoid persistent or harmful side effects. Enforces strict safety protocols to avoid running malicious or destructive commands, maintaining ethical standards and compliance.

Use Cases:
- Cybersecurity researchers and operators seeking an intelligent assistant to accelerate investigations and automate routine tasks.
- Red team professionals requiring on-the-fly command execution and information gathering integrated with tactical chat interactions.
- SOC teams aiming to augment their alert triage and incident handling workflows with AI-powered analysis and action.
- Anyone looking for a robust multi-tool AI chatbot integrated with real-world operational capabilities.

Setup Requirements:
- OpenAI API key for GPT-4.1-nano language processing.
- Telegram Bot API credentials with a proper webhook setup to receive and respond to messages.
- Google OAuth credentials for Calendar integration, if calendar features are used.
- SSH access credentials for executing commands on remote hosts, if remote execution is enabled.
- Internet connectivity for the Langsearch web search API.

Customization & Extensibility:
The workflow is built modularly with n8n's flexible node system. Users can extend it by adding more tools, integrating other services (ticketing, threat intel, scanning tools), or modifying interaction logic to suit specialized operational needs and environments.

Created by Mariskarthick M
Senior Security Analyst | Detection Engineer | Threat Hunter | Open-Source Enthusiast
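The typing indicator mentioned in the features is a single call to Telegram's documented sendChatAction method; a minimal sketch, with the bot token and chat id coming from your Telegram credentials in n8n:

```typescript
// Sketch: show "typing…" in the chat while the agent is processing.
async function showTyping(botToken: string, chatId: number) {
  await fetch(`https://api.telegram.org/bot${botToken}/sendChatAction`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, action: "typing" }),
  });
}
```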