by RedOne
🎙️ AI Audio Assistant with Voice-to-Voice Response

**Who is this for?**
Businesses, customer service teams, content creators, and organizations that want to provide intelligent voice-based interactions through Telegram. Perfect for accessibility-focused services, multilingual support, or hands-free customer assistance.

**What problem does this solve?**
- Enables natural voice conversations with AI
- Breaks down language and accessibility barriers
- Provides instant voice responses to customer queries
- Reduces typing requirements for users
- Offers 24/7 voice-based customer support
- Maintains conversation context across voice interactions

**What this workflow does:**
1. Receives voice messages via Telegram bot
2. Transcribes audio using Deepgram's advanced speech-to-text
3. Processes transcribed text through an AI agent with knowledge base access
4. Generates intelligent responses based on conversation context
5. Converts the AI response to natural-sounding speech using Deepgram TTS
6. Sends the audio response back to the user via Telegram
7. Maintains conversation memory for contextual interactions

🔧 Technical Architecture

**Core Components:**
- **Telegram Bot**: Receives and sends voice messages
- **Deepgram STT**: Transcribes voice to text with high accuracy
- **OpenAI GPT**: Processes queries and generates responses
- **Supabase Knowledge Base**: Stores and retrieves business information
- **Memory Management**: Maintains conversation context
- **Deepgram TTS**: Converts text responses to natural speech

**Data Flow:**
1. Voice Message → Telegram API → File Download
2. Audio File → Deepgram STT → Transcript
3. Transcript → AI Agent → Response Generation
4. Response → Deepgram TTS → Audio File
5. Audio Response → Telegram → User

🛠️ Setup Instructions

**Prerequisites**
1. **Telegram Bot Token**: Create a bot via @BotFather, get the bot token, and configure the webhook.
2. **Deepgram API Key**: Sign up at deepgram.com and get an API key for the STT and TTS services. Note: currently hardcoded in the workflow.
3. **OpenAI API Key**: An OpenAI account with API access, configured in the OpenAI Chat Model node.
4. **Supabase Database**: Create a Supabase project, set up the knowledge_base table, and configure API credentials.

**Step-by-Step Setup**

1. **Configure Telegram Bot**
   - Update telegramToken in the "Prepare Voice Message Data" node
   - Set the correct bot token in the Telegram nodes
   - Test bot connectivity

2. **Set Up Deepgram Integration**
   - Replace the API key in the "Transcribe with Deepgram" node
   - Update the TTS endpoint in the "HTTP Request" node
   - Test voice transcription accuracy

3. **Configure Knowledge Base**

```sql
-- Create knowledge_base table in Supabase
CREATE TABLE knowledge_base (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  question TEXT NOT NULL,
  answer TEXT NOT NULL,
  category VARCHAR(100),
  keywords TEXT[],
  created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
```

4. **Customize AI Prompts**
   - Update the system message in the "Telegram AI Agent" node
   - Adjust temperature and max tokens in the OpenAI model
   - Configure memory session keys

5. **Test End-to-End Flow**
   - Send a test voice message to the bot
   - Verify transcription accuracy
   - Check AI response quality
   - Validate audio output clarity

🎛️ Configuration Options

**Voice Recognition Settings**
- **Model**: nova-2 (Deepgram's latest model)
- **Language**: English (en); can be changed
- **Smart Format**: Enabled for better punctuation

**AI Response Settings**
- **Temperature**: 0.3 (conservative responses)
- **Max Tokens**: 100 (adjust based on needs)
- **Memory**: Session-based conversation context

**Text-to-Speech Settings**
- **Model**: aura-2-thalia-en (natural female voice)
- **Alternative voices**: Available in the Deepgram TTS API
- **Audio Format**: Optimized for Telegram

🔒 Security Considerations

**API Key Management**

```javascript
// Current implementation has hardcoded tokens
// Recommended: use environment variables
const telegramToken = process.env.TELEGRAM_BOT_TOKEN;
const deepgramKey = process.env.DEEPGRAM_API_KEY;
```

**Data Privacy**
- Voice messages are processed by external APIs
- Consider data retention policies
- Implement user consent mechanisms
- Ensure GDPR compliance if applicable

📊 Monitoring & Analytics

**Key Metrics to Track**
- Voice message processing time
- Transcription accuracy rates
- AI response quality scores
- User engagement metrics
- Error rates and failure points

**Recommended Logging**

```javascript
// Add to the workflow for monitoring
console.log({
  timestamp: new Date().toISOString(),
  user_id: userData.user_id,
  transcript_confidence: transcriptData.confidence,
  response_length: aiResponse.length,
  processing_time: processingTime
});
```

🚀 Customization Ideas

**Enhanced Features**
- **Multi-language Support**: add language detection, support multiple TTS voices, translate responses
- **Voice Commands**: implement wake words, add voice shortcuts, create voice menus
- **Advanced AI Features**: sentiment analysis, intent classification, escalation triggers
- **Integration Expansions**: connect to CRM systems, add calendar scheduling, integrate with help desk tools
- **Performance Optimizations**: implement audio preprocessing, add response caching, optimize API call sequences, implement retry mechanisms

🔍 Troubleshooting

**Common Issues**
- **Voice Not Transcribing**: check Deepgram API key validity, verify audio format compatibility, test with shorter voice messages
- **Poor Audio Quality**: adjust TTS model settings, check network connectivity, verify Telegram audio limits
- **AI Responses Too Generic**: improve knowledge base content, adjust system prompts, increase the context window
- **Memory Not Working**: check session key configuration, verify user ID extraction, test conversation continuity

💡 Best Practices

**Voice Interface Design**
- Keep responses concise and clear
- Use natural speech patterns
- Avoid technical jargon
- Provide clear next steps

**Knowledge Base Management**
- Regular content updates
- Clear categorization
- Keyword optimization
- Quality assurance testing

**User Experience**
- Fast response times (<5 seconds)
- Consistent voice personality
- Graceful error handling
- Clear capability communication

📈 Success Metrics

**Technical KPIs**
- Response time: <3 seconds average
- Transcription accuracy: >95%
- User satisfaction: >4.5/5
- Uptime: >99.5%

**Business KPIs**
- Customer query resolution rate
- Support ticket reduction
- User engagement increase
- Cost per interaction decrease

🔄 Maintenance Schedule

- **Daily**: monitor error logs, check API rate limits, verify service uptime
- **Weekly**: review conversation quality, update the knowledge base, analyze usage patterns
- **Monthly**: performance optimization, security audit, feature updates, user feedback review

📚 Additional Resources

**Documentation Links**
- Deepgram STT API
- Deepgram TTS API
- Telegram Bot API
- OpenAI API
- Supabase Documentation

**Community Support**
- n8n Community Forum
- Telegram Bot Developers Group
- Deepgram Developer Discord
- OpenAI Developer Community

Note: This template requires active API subscriptions for Deepgram and OpenAI services. Costs may apply based on usage volume.
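To make the two Deepgram calls concrete, here is a minimal sketch of the requests the STT and TTS HTTP nodes perform, assuming environment-variable credentials; `audioBuffer` and `aiResponse` are illustrative names for data produced earlier in the workflow:

```javascript
// Transcribe a Telegram voice message (OGG/Opus) with Deepgram STT
const sttResponse = await fetch(
  'https://api.deepgram.com/v1/listen?model=nova-2&language=en&smart_format=true',
  {
    method: 'POST',
    headers: {
      Authorization: `Token ${process.env.DEEPGRAM_API_KEY}`,
      'Content-Type': 'audio/ogg',
    },
    body: audioBuffer, // raw bytes downloaded via the Telegram file API
  }
);
const { results } = await sttResponse.json();
const transcript = results.channels[0].alternatives[0].transcript;

// Convert the AI reply back to speech with Deepgram TTS
const ttsResponse = await fetch(
  'https://api.deepgram.com/v1/speak?model=aura-2-thalia-en',
  {
    method: 'POST',
    headers: {
      Authorization: `Token ${process.env.DEEPGRAM_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ text: aiResponse }),
  }
);
const audioReply = Buffer.from(await ttsResponse.arrayBuffer()); // sent back via Telegram
```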
by Agniva Mahata
How it Works:

1. **Trigger**: The workflow is triggered by a webhook, initiated by an Airtable automation. This automation sends the Book or Chapter record ID and the desired action (e.g., "Generate Book Details," "Generate Chapters," "Generate Chapter Research," "Generate Chapter Content").
2. **Action Routing**: A "Switch" node directs the workflow based on the action query parameter received from the webhook. This determines which part of the book creation process will be executed.
3. **Data Retrieval**: The workflow fetches the relevant book or chapter data from Airtable using the provided recordId.
4. **AI Processing**:
   - **Book Details Generation**: If the action is "Generate Book Details," an AI Agent (powered by a Large Language Model (LLM) like Google Gemini and the Perplexity search tool) researches the book idea. It focuses on crafting a compelling book description, identifying the target audience, and conducting general book research to maximize bestseller potential. The research brief is then saved back to Airtable.
   - **Chapter Generation**: If the action is "Generate Chapters," an LLM generates 7-10 chapter titles and descriptions based on the book idea and previous research. A structured output parser ensures the chapter data is in the correct format. The chapters are then split into individual items and saved as separate records in the "Chapter" table in Airtable, linked to the main book record.
   - **Chapter Research Generation**: If the action is "Generate Chapter Research," another AI Agent conducts in-depth research on a specific chapter, using the Perplexity search tool multiple times. It focuses on finding stories, case studies, historical events, and expert perspectives to make the chapter engaging and credible. The research is saved back to the "Chapter" record in Airtable.
   - **Chapter Content Generation**: If the action is "Generate Chapter Content," an LLM writes the full content of the chapter, using the research gathered in the previous step, the overall book research, and the chapter description. The generated content is saved back to the "Chapter" record in Airtable.
5. **Airtable Updates**: In each of the AI processing steps, the workflow updates the corresponding Airtable record (either "Book" or "Chapter") with the generated results (research, chapter details, or content) and sets the "Action" field back to "Idle."

Set Up Steps:

**Airtable Setup (estimated time: 10-15 minutes):**
1. Copy the Airtable base blueprint: https://airtable.com/appfkz4KUlKvOjtbp/shra78TlDfqLRdSfT. This will create the "Book" and "Chapter" tables with the necessary fields.
2. In the "Book" table, create three Airtable Automations:
   - Trigger: When a record matches conditions -> Action is Generate Book Details
   - Action: Run a script. Use the following script:

```javascript
let autoRoute = input.config();
await fetch(autoRoute.webhookUrl + "?recordId=" + autoRoute.recordId + "&action=" + autoRoute.action);
```

   - In the script action's configuration, add three "Input variables":
     - webhookUrl (map it to your n8n webhook URL, obtained in the next step)
     - recordId (map it to the Airtable record ID)
     - action (map it to Action)
   - Repeat this process to create the two remaining automations in the "Book" table, identical except for their trigger conditions (e.g., Action is Generate Chapters).
3. In the "Chapter" table, create two Airtable Automations:
   - Trigger: When a record matches conditions -> Action is Generate Chapter Research
   - Action: Run a script (use the same script as above, with the same input variables).
   - Create a second automation, identical except triggered when Action is Generate Chapter Content.

**n8n Setup (estimated time: 15-20 minutes):**
1. Import the provided JSON workflow into n8n.
2. **Webhook Node**: Copy the "Test URL" from the Webhook node. This is the webhookUrl you'll use in the Airtable automations. Important: once you've tested and are ready to go live, switch to the "Production URL."
3. **Airtable Nodes**: Configure all Airtable nodes (there are eight). You'll need to connect your Airtable account using OAuth 2. Select the correct Base ("Book Agency [v1] Cobuild" or whatever you named it) and Table ("Book" or "Chapter") for each node. The field mappings are already defined in the template, but double-check them.
4. **LLM Nodes (Google Gemini & OpenAI)**: Connect your Google Gemini and OpenAI accounts to the respective LLM nodes. You'll need API keys for both. You may also configure different LLM models.
5. **Perplexity Nodes**: Connect your Perplexity AI API to the Perplexity nodes. You'll need an API key for that.
6. Activate the workflow.

**Testing (estimated time: 5-10 minutes):**
1. Go to your Airtable "Book" table.
2. Create a new record and fill in the "Idea" field with a book concept.
3. Change the "Action" field to "Generate Book Details". The Airtable automation should trigger, sending a request to your n8n webhook.
4. Monitor the n8n execution log to see the workflow in action.
5. Check the Airtable record to see if the "Research" field is populated.
6. Repeat the testing for Generate Chapters, Generate Chapter Research, and Generate Chapter Content.
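For reference, the Airtable automation script above boils down to a single GET request against your n8n webhook. An illustrative version follows (host and record ID are placeholders; the explicit encodeURIComponent is an addition that guards the spaces in action names, which the original script leaves to the URL parser):

```javascript
// Illustrative: what the Airtable script's fetch() sends to n8n
await fetch(
  "https://your-n8n-host/webhook-test/book-agent" +
    "?recordId=recXXXXXXXXXXXXXX" +
    "&action=" + encodeURIComponent("Generate Book Details")
);
```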
by Ranjan Dailata
Who this is for?

The LinkedIn Company Story Generator is an automated workflow that extracts company profile data from LinkedIn using Bright Data's web scraping infrastructure, then transforms that data into a professionally written narrative or story using a language model (e.g., OpenAI, Gemini). The final output is sent via webhook notification, making it easy to publish, review, or further automate.

This workflow is tailored for:
- **Marketing Professionals**: Seeking to generate compelling company narratives for campaigns.
- **Sales Teams**: Aiming to understand potential clients through summarized company insights.
- **Content Creators**: Looking to craft stories or articles based on company data.
- **Recruiters**: Interested in obtaining concise overviews of companies for talent acquisition strategies.

What problem is this workflow solving?

Manually gathering and summarizing company information from LinkedIn can be time-consuming and inconsistent. This workflow automates the process, ensuring:
- **Efficiency**: Quick extraction and summarization of company data.
- **Consistency**: Standardized summaries for uniformity across use cases.
- **Scalability**: Ability to process multiple companies without additional manual effort.

What this workflow does

The workflow performs the following steps:
1. **Input Acquisition**: Receives a company's name or LinkedIn URL as input.
2. **Data Extraction**: Utilizes Bright Data to scrape the company's LinkedIn profile.
3. **Information Parsing**: Processes the extracted HTML content to retrieve relevant company details.
4. **Summarization**: Employs Google Gemini AI to generate a concise company story.
5. **Output Delivery**: Sends the summarized content to a specified webhook or email address.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the LinkedIn URL by navigating to the Set LinkedIn URL node.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs

- **Input Variations**: Modify the Set LinkedIn URL node to accept a different company LinkedIn URL.
- **Data Points**: Adjust the HTML Data Extractor node to retrieve additional details like employee count, industry, or headquarters location.
- **Summarization Style**: Customize the AI prompt to generate summaries in different tones or formats (e.g., formal, casual, bullet points).
- **Output Destinations**: Configure the output node to send summaries to various platforms, such as Slack, CRM systems, or databases.
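For orientation, the scraping step amounts to a Web Unlocker API call of roughly this shape (zone name, target URL, and token variable are placeholders, not the template's exact values):

```javascript
// Illustrative Bright Data Web Unlocker request
const response = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.BRIGHT_DATA_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'web_unlocker1',
    url: 'https://www.linkedin.com/company/example-company/',
    format: 'raw', // return the page HTML as-is
  }),
});
const html = await response.text(); // fed to the HTML Data Extractor node
```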
by Ferenc Erb
Overview

An automation workflow that creates a complete REST API for digitally signing PDF documents using n8n webhooks. This service demonstrates how to implement secure document signing functionality through standardized API endpoints with file upload and download capabilities.

Use Case

This workflow is designed for developers and automation specialists who need to implement digital document signing. It's particularly useful for:
- Integrating PDF signing capabilities into existing document workflows
- API-based automation of signature processes
- Creating proof-of-concept implementations for document verification systems
- Learning n8n's webhook capabilities and file handling techniques
- Testing PDF signing in development environments before production implementation

What This Workflow Does

**API-Based Document Management**
- Exposes RESTful webhook endpoints for all document operations
- Handles multipart/form-data uploads for PDF documents
- Processes JSON payloads for signing configuration
- Provides download functionality for completed documents

**Digital Certificate Handling**
- Uploads existing PFX/PKCS#12 digital certificates
- Generates new certificates with customizable attributes
- Securely manages certificate storage and access
- Associates certificates with signing operations

**Cryptographic PDF Signing**
- Applies digital signatures using industry-standard cryptographic methods
- Embeds signature information within the PDF document structure
- Validates document integrity through cryptographic verification
- Preserves the original document while adding signature elements

**Webhook Integration System**
- Routes different API methods to appropriate handlers
- Validates request payloads and file content
- Manages authentication through webhook paths
- Returns structured responses for integration with other systems

Technical Architecture

**Components**
- API Gateway: n8n webhook nodes that receive external requests
- Request Router: Switch nodes that direct operations based on method parameters
- Document Processor: Function nodes for PDF manipulation and verification
- Certificate Manager: Specialized nodes for cryptographic key operations
- Storage Interface: File operation nodes for document persistence
- Response Formatter: Nodes that structure API responses

**Integration Flow**

Client Request → Webhook Endpoint → Method Router → Processing Engine → Digital Signing → Storage → Response Generation → Client Response

Setup Instructions

**Prerequisites**
- n8n installation (minimum version 0.214.0)
- Node.js 14 or higher
- Required environment variable: NODE_FUNCTION_ALLOW_EXTERNAL: "node-forge,@signpdf/signpdf,@signpdf/signer-p12,@signpdf/placeholder-plain"

**Configuration Steps**
1. **Import Workflow**: Import the workflow JSON into your n8n instance and activate the workflow to enable the webhooks.
2. **Configure Storage**: Set the storage path variables in the workflow and ensure proper permissions on the storage directories.
3. **Test API Endpoints**: Use the included test scripts to verify functionality; test PDF upload, certificate generation, and signing.
4. **Integration**: Document the webhook URLs for integration with other systems and configure error handling according to your requirements.

Testing Methods

Test the workflow functionality using various HTTP requests and JSON data:
- Upload PDF documents to the document processing endpoint
- Upload or generate digital certificates
- Execute PDF signing operations
- Download signed documents from the download endpoint

Webhook Endpoints

The workflow exposes two primary webhook endpoints that form a complete API for PDF digital signing operations:

**1. Document Processing Endpoint (/webhook/docu-digi-sign)**

This endpoint handles all document and certificate operations:

- Method: Upload PDF
  - HTTP: POST
  - Content-Type: multipart/form-data
  - Parameters: method, uploadType, fileName, fileData
- Method: Upload Certificate
  - HTTP: POST
  - Content-Type: multipart/form-data
  - Parameters: method, uploadType, fileName, fileData
- Method: Generate Certificate
  - HTTP: POST
  - Content-Type: application/json
  - Parameters: method, subjectCN, issuerCN, serialNumber, validFrom, validTo, password
- Method: Sign PDF
  - HTTP: POST
  - Content-Type: application/json
  - Parameters: method, inputPdf, pfxFile, pfxPassword

**2. Document Download Endpoint (/webhook/docu-download)**

This endpoint handles the retrieval of processed documents:

- Method: Download Signed PDF
  - HTTP: GET
  - Content-Type: application/json
  - Parameters: method, fileType, fileName

Key Workflow Sections

The workflow is organized into logical sections with clear responsibilities:
- **Request Processing**: Parses incoming webhook data
- **Method Routing**: Directs requests to appropriate handlers
- **Document Management**: Handles file operations and storage
- **Cryptographic Operations**: Manages signing and certificate functions
- **Response Formatting**: Structures and returns results
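As a rough sketch of what the signing engine does with the allowed external modules, under the assumption that `pdfBuffer`, `p12Buffer`, and `pfxPassword` arrive from the upload endpoints (the signature metadata shown is illustrative, not the template's exact values):

```javascript
// Core signing step inside an n8n Function node
// (requires the NODE_FUNCTION_ALLOW_EXTERNAL setting listed above)
const signpdf = require('@signpdf/signpdf').default;
const { P12Signer } = require('@signpdf/signer-p12');
const { plainAddPlaceholder } = require('@signpdf/placeholder-plain');

// Add a signature placeholder to the uploaded PDF
const pdfWithPlaceholder = plainAddPlaceholder({
  pdfBuffer,
  reason: 'Document approval',
  contactInfo: 'signer@example.com',
  name: 'Example Signer',
  location: 'n8n workflow',
});

// Sign with the uploaded (or generated) PKCS#12 certificate
const signer = new P12Signer(p12Buffer, { passphrase: pfxPassword });
const signedPdf = await signpdf.sign(pdfWithPlaceholder, signer);
// signedPdf is a Buffer ready to be written to storage and served for download
```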
by NonoCode
Who is this template for?

This workflow template is designed for accounting, human resources, and IT project management teams looking to automate the generation of PDF and Word documents. It can be particularly useful for:
- **The accounting department**: for generating invoices in PDF format, thus streamlining the invoicing process and payment tracking.
- **The human resources department**: for creating employment contracts in PDF, simplifying the administrative management of employees.
- **IT project management teams**: for producing Word documents, such as project specifications, to clearly define project requirements and objectives.

Example result in mail

This PDF and Word document generation workflow offers a practical and efficient solution for automating administrative and document-related tasks, allowing teams to focus on higher-value activities.

How it works

This workflow currently operates with an n8n form, but you can easily replace this form with a webhook triggered by an external application such as Airtable, SharePoint, DocuWare, etc. Once the configuration information is retrieved, we fill the API request body of JSReport. The body is defined at the time of template creation in JSReport (example of JSReport usage). Then, in a straightforward manner, we fetch the PDF and send it via email. Here's a brief overview of this n8n workflow template: Link to n8n workflow template presentation

To summarize

This workflow integrates with an n8n form, but it's flexible enough to work with various triggering methods, such as webhooks from other applications like Airtable, SharePoint, or DocuWare. After configuring the necessary information, it populates the API request body of JSReport, which defines the template in JSReport. Once the template is populated, it retrieves the PDF and sends it via email. In essence, it streamlines the process of generating PDF documents based on user input and distributing them via email.

Instructions:

1. **Create a JSReport account**: Sign up for a JSReport account to create your PDF template model.
2. **Define the PDF template in JSReport**: Use JSON data from your system to set up the content of your PDF template in JSReport.
3. **Configure the HTTP Request node in n8n**: Use the HTTP Request node in n8n to send a request to JSReport. Set the node's body to the JSON data defining your PDF template (a sketch of this request follows below).
4. **Watch the video**: For detailed setup guidance, watch the setup video.

Remember, this template was created in n8n v1.38.2.
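A minimal sketch of such a JSReport render call, assuming a JSReport Online account and an invoice template; the account URL, template name, and data fields are placeholders:

```javascript
// Illustrative JSReport render request (returns the generated PDF)
const response = await fetch('https://your-account.jsreportonline.net/api/report', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Basic ' + Buffer.from('user@example.com:password').toString('base64'),
  },
  body: JSON.stringify({
    template: { name: '/invoices/invoice-main' }, // template defined in JSReport studio
    data: {
      invoiceNumber: 'INV-2024-001',
      customer: { name: 'ACME Corp', email: 'billing@acme.example' },
      items: [{ label: 'Consulting', price: 1200 }],
    },
  }),
});
const pdf = Buffer.from(await response.arrayBuffer()); // attach to the email node
```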
by David Ashby
🛠️ Twilio Tool MCP Server

Complete MCP server exposing all Twilio Tool operations to AI agents. Zero configuration needed; all 2 operations pre-built.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works

- MCP Trigger: Serves as your server endpoint for AI agent requests
- Tool Nodes: Pre-configured for every Twilio Tool operation
- AI Expressions: Automatically populate parameters via $fromAI() placeholders
- Native Integration: Uses the official n8n Twilio Tool node with full error handling

📋 Available Operations (2 total)

Every possible Twilio Tool operation is included:
- Call (1 operation): Make a call
- SMS (1 operation): Send an SMS/MMS/WhatsApp message

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
- Resource IDs and identifiers
- Search queries and filters
- Content and data payloads
- Configuration options

Response Format: Native Twilio Tool API responses with full data structure

Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
- Claude Desktop: Add the MCP server URL to its configuration
- Custom AI Apps: Use the MCP URL as a tool endpoint
- Other n8n Workflows: Call MCP tools from any workflow
- API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

- Complete Coverage: Every Twilio Tool operation available
- Zero Setup: No parameter mapping or configuration needed
- AI-Ready: Built-in $fromAI() expressions for all parameters
- Production Ready: Native n8n error handling and logging
- Extensible: Easily modify or add custom logic

> 🔥 Free for community use! Ready to deploy in under 2 minutes.
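To illustrate, $fromAI() placeholders in the Twilio node's parameter fields look roughly like the following; the parameter names and descriptions are examples, not the workflow's exact values:

```
{{ $fromAI('to', 'Destination phone number in E.164 format, e.g. +15551234567', 'string') }}
{{ $fromAI('message', 'Text of the SMS to send', 'string') }}
```

At runtime, the connected AI agent fills each named parameter based on its description, so no manual mapping is needed.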
by Akash Kankariya
🔍 Discover trending and viral YouTube videos easily with this powerful n8n automation! This workflow helps you perform bulk research on YouTube videos related to any search term, analyzing engagement data like views, likes, comments, and channel statistics, all in one streamlined process.

✨ Perfect for:
- Content creators wanting to find viral video ideas
- Marketers analyzing competitor content
- YouTubers optimizing their content strategy

How It Works 🎯

1️⃣ Input Your Search Term: Simply enter any keyword or topic you want to research.
2️⃣ Select Video Format: Choose between short, medium, or long videos.
3️⃣ Choose Number of Videos: Define how many videos to analyze in bulk.
4️⃣ Automatic Data Fetch: The workflow grabs video IDs, then fetches detailed video data and channel statistics from the YouTube API.
5️⃣ Performance Scoring: Videos are scored based on engagement rates, with easy-to-understand labels from HOLY HELL (viral) down to Dead (see the scoring sketch at the end of this section).
6️⃣ Export to Google Sheets: All data, including thumbnails and video URLs, is appended to your Google Sheet for comprehensive review and easy sharing.

Setup Instructions 🛠️

1. **Google API Key**
   - Get your YouTube Data API key from the Google Developers Console.
   - Add it securely in the n8n credentials manager (do not hardcode it).
2. **Google Sheets Setup**
   - Create a Google Sheet to store your results (a template link is provided).
   - Share the sheet with the Google account used in n8n.
   - Update the workflow with your sheet's Document ID and Sheet Name if needed.
3. **Run the Workflow**
   - Trigger the form webhook via browser or POST call.
   - Enter the search term, format, and number of videos.
   - Let it process, then check your Google Sheet for insights!

Features ✨

- Bulk fetches the latest and top-viewed YouTube videos.
- Intelligent video performance scoring with emoji labels for quick insights.
- Organizes data into Google Sheets with thumbnail previews 🖼️.
- Easy to customize search parameters via an intuitive form.
- Fully automated, no manual API calls needed.

Get Started Today! 🚀

Boost your YouTube content strategy and stay ahead with this powerful viral video research automation! Try it now on your n8n instance and tap into the world of viral content like a pro 🔥💡
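The exact scoring thresholds live inside the workflow; as an illustration only, an engagement score of this general shape could be computed in a Code node (all weights, thresholds, and intermediate labels below are assumptions):

```javascript
// Hypothetical engagement scoring: rate engagement relative to views
// and bucket it into labels.
function scoreVideo({ viewCount, likeCount, commentCount }) {
  const views = Number(viewCount) || 0;
  if (views === 0) return { score: 0, label: 'Dead' };

  // Comments weighted higher than likes as a stronger engagement signal
  const engagementRate = (Number(likeCount) + Number(commentCount) * 2) / views;
  const score = engagementRate * 100;

  if (score >= 8) return { score, label: 'HOLY HELL (viral)' };
  if (score >= 4) return { score, label: 'Hot' };
  if (score >= 1) return { score, label: 'Average' };
  return { score, label: 'Dead' };
}

// Example: 100k views, 6k likes, 800 comments -> score 7.6, "Hot"
console.log(scoreVideo({ viewCount: 100000, likeCount: 6000, commentCount: 800 }));
```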
by Sergey Skorobogatov
Accept YooKassa payments and log transactions in Google Sheets

🧾 Summary

This workflow allows you to accept online payments via YooKassa and log both orders and transactions in Google Sheets, all without writing a single line of code. It supports the full payment flow: product selection, payment initiation, webhook processing, refund updates, and payment status checks.

👥 Who is this for?

This template is ideal for:
- Online stores with simple checkout flows
- Sellers of digital products or info-courses
- Entrepreneurs using Telegram bots or web forms
- Anyone needing quick payment integration with Google Sheets tracking

🎯 What problem does this workflow solve?

Setting up online payments usually requires backend infrastructure. This no-code solution automates the entire payment flow:
- Handles product listing and price retrieval
- Initiates payments with email and return URL
- Listens for payment.succeeded and refund.succeeded events
- Records every action into structured Google Sheets

⚙️ What this workflow does

1. **GET /products**: Returns a sorted list of products from a Google Sheet (products).
2. **POST /payment**:
   - Validates required fields (product_id, email, return_url)
   - Checks email format
   - Fetches product data from products
   - Generates a unique idempotence key
   - Sends a request to the YooKassa API (see the sketch after this section)
   - Saves the order into the orders sheet
   - Returns a payment confirmation link
3. **POST /yoomoney**: Webhook to process payment/refund events:
   - On payment.succeeded, adds an entry to transactions
   - On refund.succeeded, updates the transaction status
4. **GET /status/:id**: Returns the real-time payment status from YooKassa.

🔌 Setup

1. Connect credentials:
   - Google Sheets (OAuth2)
   - YooKassa (Basic Auth using shopId and secretKey)
2. Update the following Google Sheets:
   - products: should contain product_id, title, price
   - orders: for saving confirmed purchases
   - transactions: for logging all successful or refunded payments
3. Test the endpoints using any HTTP client. Example payload for /payment:

```json
{
  "product_id": "abc123",
  "email": "user@example.com",
  "return_url": "https://your.site/success"
}
```

🔧 How to customize this workflow

- Add delivery logic (e.g., email with a product link after successful payment)
- Replace Google Sheets with a database (e.g., PostgreSQL)
- Connect Telegram or other messengers for post-payment notifications
- Add promo codes, discounts, or subscription logic

💼 Use cases

- Simple online checkouts
- Telegram bots selling access
- Educational product sales
- MVP e-commerce flows
- Donation or membership payments

📝 Notes

- ✅ Includes sticky notes for sections
- ✅ Includes error handling and validation
- ✅ No custom code needed except UUID generation
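For context, the payment-initiation step corresponds to a YooKassa API call of roughly this form (shop credentials and order values are placeholders):

```javascript
// Illustrative YooKassa payment creation, mirroring the POST /payment branch
const { randomUUID } = require('crypto');

const auth = Buffer.from(`${shopId}:${secretKey}`).toString('base64');
const response = await fetch('https://api.yookassa.ru/v3/payments', {
  method: 'POST',
  headers: {
    Authorization: `Basic ${auth}`,
    'Idempotence-Key': randomUUID(), // the unique idempotence key mentioned above
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    amount: { value: '990.00', currency: 'RUB' },
    capture: true,
    confirmation: { type: 'redirect', return_url: 'https://your.site/success' },
    description: 'Order abc123 for user@example.com',
  }),
});
const payment = await response.json();
// payment.confirmation.confirmation_url is the link returned to the customer
```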
by Mind-Front
Workflow Description

This workflow is a powerful, fully automated web query and semantic reranking system that allows users to perform precise, detailed searches, intelligently rank search results, and produce high-quality, structured output. Built with AI-powered components, the workflow leverages semantic query generation, result re-ranking, and real-time reporting to deliver actionable insights. It is particularly well-suited for real-time data retrieval, market research, and any domain requiring automated yet customizable search result processing.

How It Works

1. **Webhook Integration for Input**: The workflow begins with a Webhook node that captures the user's search query as input, enabling seamless integration with other systems.
2. **Semantic Query Generation (powered by "Semantic Search - Query Maker")**: Using AI (Google Gemini), the initial query is refined and transformed into a context-aware, expert-level search query. This ensures the search engine retrieves the most relevant and precise results.
3. **Web Search Execution**: The free Brave Search API processes the refined query to fetch search results, ensuring speed and cost efficiency (see the sketch after this section).
4. **Semantic Re-Ranking of Results (powered by "Semantic Search - Result Re-Ranker")**: The workflow reranks the search results based on relevance to the original question, prioritizing the most relevant URLs dynamically. Results are passed through AI-powered intelligent reranking to ensure the final output reflects optimal relevance and quality.
5. **Structured Output Generation**: Results are converted into a well-structured, organized JSON format, ranking the top 10 search results with their titles, links, and descriptions. Missing ranks (if fewer than 10 results) are handled gracefully with placeholders, ensuring consistency.
6. **Real-Time Reporting**: The reranked search results are sent back to the user or integrated system via the Webhook node in a JSON-formatted response. Reports are highly structured and ready for downstream processing or consumption.

Key Features

- **AI-Powered Query Refinement**: Transforms basic queries into detailed, expert-level search terms for optimal results.
- **Dual-Stage Semantic Search**: Combines query generation and result reranking for precise, high-relevance outputs.
- **Top 10 Result Reranking**: Dynamically ranks and organizes the top 10 results based on semantic relevance to the query.
- **Customizable Integration**: Fully modifiable for alternative APIs or integrations, such as other search engines or custom ranking logic.
- **JSON-Formatted Structured Results**: Outputs reranked results in a standardized format, ideal for systems requiring machine-readable data.
- **Webhook-Based Flexibility**: Works seamlessly with webhook inputs for easy deployment in diverse workflows.
- **Cost-Effective API Usage**: Pre-integrated with the free Brave Search API, minimizing operational costs while delivering accurate search results.

Instructions for API Setup

1. **Brave Search API**: Visit api.search.brave.com to obtain a free-tier API key for web search.
2. **AI Integration (Google Gemini)**: Visit Google AI Studio and generate an API key for semantic query generation and reranking.
3. **Webhook Configuration**: Set up the input webhook to capture search queries and the output webhook to deliver reranked results.

Why Choose This Workflow?

- **Precision and Relevance**: Combines AI-based query generation with advanced reranking for accurate results.
- **Fully Customizable**: Easily adapt the workflow to alternative APIs, search engines, or ranking logic.
- **Real-Time Insights**: Provides structured, real-time output ready for immediate use.
- **Scalable and Modular**: Ideal for businesses, researchers, and data analysts needing a robust, repeatable solution.

Tags

AI Workflow, Semantic Search, Query Refinement, Search Result Reranking, Real-Time Search, Web Search Automation, Google Search, Brave Search, News Search, API Integration, Market Research, Competitive Intelligence, Business Intelligence, Google Gemini, Anthropic Claude, OpenAI, GPT, LLM
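As a reference point, the web search step corresponds to a Brave Search API request of roughly this shape (the query string is a placeholder):

```javascript
// Illustrative Brave Search request with the refined query
const response = await fetch(
  'https://api.search.brave.com/res/v1/web/search?' +
    new URLSearchParams({ q: 'best open-source vector databases 2024', count: '10' }),
  {
    headers: {
      Accept: 'application/json',
      'X-Subscription-Token': process.env.BRAVE_API_KEY,
    },
  }
);
const data = await response.json();
// data.web.results holds { title, url, description } entries to rerank
```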
by algopi.io
Who is this template for?

This workflow template is designed for Marketing and pre-Sales teams that collect prospects from a form tool like Tally, record the data in the well-known open-source CRM SuiteCRM, synchronize the contact in Brevo with a link to the id from the CRM, and then send a notification in Nextcloud. Bonus: it validates the email with CaptainVerify and notifies in Nextcloud depending on the response.

How it works

1. For each submission in the form, a webhook is triggered.
2. The email is checked with CaptainVerify.
3. If the response is OK, a Lead is created in SuiteCRM; otherwise, a message is sent to your selected discussion.
4. Once the lead has been created, a contact is created in Brevo (for future campaigns), and this contact is linked to the lead_id from the CRM in a dedicated field (see the sketch below).
5. Finally, a message in your selected Nextcloud discussion informs you about the lead.

Set up instructions

1. Complete the Set up credentials step when you first open the workflow. You'll need a CaptainVerify account (API key), a dedicated SuiteCRM user with OAuth, a Brevo account (API key), and a Nextcloud account.
2. Set up the webhook in the form tool of your choice (why not Tally?).
3. Set up each node with the explanations in the sticky notes.
4. Enjoy!

The template was created in n8n v1.44.1.
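As an illustration of the Brevo linking step, a contact creation of roughly this shape; the LEAD_ID attribute name stands in for whatever dedicated field you created in Brevo, and the contact values are placeholders:

```javascript
// Illustrative Brevo contact creation linking the SuiteCRM lead
const response = await fetch('https://api.brevo.com/v3/contacts', {
  method: 'POST',
  headers: {
    'api-key': process.env.BREVO_API_KEY,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    email: 'prospect@example.com',
    attributes: {
      FIRSTNAME: 'Jane',
      LASTNAME: 'Doe',
      LEAD_ID: suiteCrmLeadId, // id returned by the SuiteCRM lead creation step
    },
    updateEnabled: true, // upsert if the contact already exists
  }),
});
```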
by Ranjan Dailata
Notice

Community nodes can only be installed on self-hosted instances of n8n.

Who this is for

The Brave Search Structured Data Extractor workflow is designed for professionals and teams that need high-quality, structured insights from Brave search results in real time. Whether you're performing market research, tracking competitors, training AI models, or powering content engines, this workflow offers a robust and automated solution.

This workflow is tailored for:
- **Market Researchers**: who analyze trends across multimedia channels
- **AI Developers**: who require clean, structured datasets for model fine-tuning
- **SEO & Content Analysts**: looking to monitor visibility across news, images, and videos
- **Media Researchers**: curating timely and relevant information across formats
- **Automation Engineers**: integrating search insights into downstream workflows

What problem is this workflow solving?

Traditional web scraping and search result parsing is fragmented, inconsistent, and prone to errors, especially when dealing with multimedia (images, videos, news) data from search engines. This workflow provides:
- Centralized Brave search data extraction across all content types
- Routing of the search execution based on the search type that is set (e.g., news, images, videos, all)
- Automated structured data transformation using Google Gemini
- Unified output persistence and notification across disk, webhook, and Google Sheets

What this workflow does

1. **Input Configuration**
   - Define your Brave search query
   - Set the search type: videos, images, news, or all
   - Configure your Bright Data MCP zone
2. **Bright Data MCP Search Execution**
   - Initiates a Brave search via Bright Data MCP using the correct URL pattern for each search type
   - Returns the raw HTML of the search results
3. **Google Gemini LLM Structured Data Extraction**
   - Transforms raw results into structured data (e.g., title, URL, source, snippet)
4. **Output Handling**
   - Save to disk (e.g., a JSON or CSV file)
   - Send a webhook notification with the structured data (e.g., Slack, internal dashboards)
   - Store in Google Sheets for team-wide access or dashboarding

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup

1. Make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine (see the configuration sketch below).
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to copy the Bright Data API_TOKEN into the Environments textbox as API_TOKEN=<your-token>.

How to customize this workflow to your needs

- **Enhance Output Analysis**: Add additional LLM prompts for topic classification, sentiment scoring, or trend forecasting.
- **Output Format Options**: Choose to output CSV, Markdown, or HTML reports based on your integration target.
- **Schedule Automation**: Trigger the workflow on a schedule (daily/weekly) to keep monitoring topical content.
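For reference, a typical client configuration for the Bright Data MCP server looks roughly like this; the WEB_UNLOCKER_ZONE variable is an assumption matching the zone name above, and in n8n's MCP Client (STDIO) credential the same command, arguments, and environment entries go into the corresponding fields:

```json
{
  "mcpServers": {
    "brightdata": {
      "command": "npx",
      "args": ["@brightdata/mcp"],
      "env": {
        "API_TOKEN": "<your-bright-data-api-token>",
        "WEB_UNLOCKER_ZONE": "mcp_unlocker"
      }
    }
  }
}
```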
by Angel Menendez
CallForge - AI Gong Transcript PreProcessor

Transform your Gong.io call transcripts into structured, enriched, and AI-ready data for better sales insights and analytics.

Who is This For?

This workflow is designed for:
- ✅ Sales teams looking to automate call transcript formatting.
- ✅ Revenue operations (RevOps) professionals optimizing AI-driven insights.
- ✅ Businesses using Gong.io that need structured, enriched call transcripts for better decision-making.

What Problem Does This Workflow Solve?

Manually processing raw Gong call transcripts is inefficient and often lacks the context essential for AI-driven insights. With CallForge, you can:
- ✅ Extract and format Gong call transcripts for structured AI processing.
- ✅ Enhance metadata using sales data from Salesforce.
- ✅ Classify speakers as internal (sales team) or external (customers).
- ✅ Identify external companies by filtering out free email domains (e.g., Gmail, Yahoo).
- ✅ Enrich customer profiles using PeopleDataLabs to identify company details and locations.
- ✅ Prepare transcripts for AI models by structuring conversations and removing unnecessary noise.

What This Workflow Does

1. **Retrieves Gong Call Data**
   - Calls the Gong API to extract call metadata, speaker interactions, and collaboration details.
   - Fetches call transcripts for AI processing.
2. **Processes and Cleans Transcripts**
   - Converts call transcripts into structured, speaker-based dialogues.
   - Assigns each speaker as either Internal (Sales Team) or External (Customer).
3. **Extracts Company Information**
   - Retrieves Salesforce data to match customers with existing sales opportunities.
   - Filters out free email domains to determine the customer's actual company domain (see the sketch at the end of this section).
   - Calls the PeopleDataLabs API to retrieve additional company data and location details.
4. **Merges and Enriches Data**
   - Combines Gong metadata with Salesforce customer details and insights.
   - Ensures all necessary data is available for AI-driven sales insights.
5. **Final Formatting for AI Processing**
   - Merges all call transcript data into a single structured format for AI analysis.
   - Extracts the final cleaned, enriched dataset for further AI-powered insights.

How to Set Up This Workflow

1. Connect Your APIs:
   - 🔹 Gong API Access: set up your Gong API credentials in n8n.
   - 🔹 Salesforce Setup: ensure API access if you want customer enrichment.
   - 🔹 PeopleDataLabs API: required to retrieve company and location details based on email domains.
   - 🔹 Webhook Integration: modify the webhook call to push enriched call data to an internal system.

The CallForge series:
- CallForge - 01 - Filter Gong Calls Synced to Salesforce by Opportunity Stage
- CallForge - 02 - Prep Gong Calls with Sheets & Notion for AI Summarization
- CallForge - 03 - Gong Transcript Processor and Salesforce Enricher
- CallForge - 04 - AI Workflow for Gong.io Sales Calls
- CallForge - 05 - Gong.io Call Analysis with Azure AI & CRM Sync
- CallForge - 06 - Automate Sales Insights with Gong.io, Notion & AI
- CallForge - 07 - AI Marketing Data Processing with Gong & Notion
- CallForge - 08 - AI Product Insights from Sales Calls with Notion

How to Customize This Workflow

- 💡 Modify Data Sources: connect different CRMs (e.g., HubSpot, Zoho) instead of Salesforce.
- 💡 Expand AI Analysis: add another AI model (e.g., OpenAI GPT, Claude) for advanced conversation insights.
- 💡 Change Speaker Classification Rules: adjust internal vs. external speaker logic to match your team's structure.
- 💡 Filter Specific Customers: modify the free email filtering logic to better fit your company's needs.

Why Use CallForge?

- 🚀 Automate Gong call transcript processing to save time.
- 📊 Improve AI accuracy with enriched, structured data.
- 📈 Enhance sales strategy by extracting actionable insights from calls.

Start optimizing your Gong transcript analysis today!
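To make the classification and domain-filtering steps concrete, a hypothetical sketch follows; the internal domain and the free-domain list are placeholders, and the real workflow's logic may differ:

```javascript
// Mark speakers internal vs. external and pick the customer's company domain
const FREE_DOMAINS = new Set(['gmail.com', 'yahoo.com', 'hotmail.com', 'outlook.com']);
const INTERNAL_DOMAIN = 'yourcompany.com';

function classifySpeakers(speakers) {
  return speakers.map((s) => {
    const domain = (s.email || '').split('@')[1]?.toLowerCase() ?? '';
    return { ...s, affiliation: domain === INTERNAL_DOMAIN ? 'Internal' : 'External' };
  });
}

function customerDomain(speakers) {
  // The first external, non-free domain is treated as the customer's company domain
  for (const s of classifySpeakers(speakers)) {
    const domain = (s.email || '').split('@')[1]?.toLowerCase() ?? '';
    if (s.affiliation === 'External' && domain && !FREE_DOMAINS.has(domain)) {
      return domain; // passed to the PeopleDataLabs enrichment call
    }
  }
  return null;
}
```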