by Jimleuk
This n8n template demonstrates how to calculate the evaluation metric "Correctness", which in this scenario compares and classifies the agent's response against a set of ground truths.

The scoring approach is adapted from the open-source evaluations project RAGAS. You can see the source here: https://github.com/explodinggradients/ragas/blob/main/ragas/src/ragas/metrics/_answer_correctness.py

How it works

This evaluation works best where the agent's response is allowed to be more verbose and conversational. For our scoring, we classify the agent's response into 3 buckets: True Positive (in answer and ground truth), False Positive (in answer but not ground truth) and False Negative (not in answer but in ground truth). We also calculate an average similarity score on the agent's response against all ground truths. The classification score and the similarity score are then averaged to give the final score (a small code sketch of this calculation follows at the end of this description).

A high score indicates the agent is accurate, whereas a low score could indicate the agent has incorrect training data or is not providing a comprehensive enough answer.

Requirements

- n8n version 1.94+
- Check out this Google Sheet for sample data: https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing
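To make the scoring concrete, here is a minimal JavaScript sketch of the final calculation as described above. This is illustrative only: the exact RAGAS implementation linked above uses its own weighting, and the function name and simple averaging here are assumptions.

```javascript
// Minimal sketch of the scoring described above (illustrative, not the
// exact RAGAS implementation). Assumes the classification step has
// already bucketed statements into TP/FP/FN counts and that an average
// embedding similarity (0..1) was computed separately.
function correctnessScore(tp, fp, fn, avgSimilarity) {
  // F1-style score over the classified statements.
  const f1 = tp === 0 ? 0 : tp / (tp + 0.5 * (fp + fn));
  // Per the description, the classification score and the similarity
  // score are averaged to give the final score.
  return (f1 + avgSimilarity) / 2;
}

// Example: 4 true positives, 1 false positive, 1 false negative,
// average similarity of 0.82 against the ground truths.
console.log(correctnessScore(4, 1, 1, 0.82)); // ~0.81
```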
by Eric Mooney
Use case: When a new service ticket is created in Taiga, it's often unclear whether it contains sufficient detail to begin work. This workflow automates the triage process by:

- Using an AI model to extract key information from the ticket description.
- Automatically assigning values for:
  - Type (Bug, Enhancement, Onboarding, Question)
  - Severity (Wishlist, Minor, Normal, Important, Critical)
  - Priority (Low, Normal, High)
  - Status (New, Needs More Info, etc.)
- Detecting missing critical data and blocking the ticket if incomplete. A sketch of the kind of structured output involved follows below.

Setup instructions here: https://github.com/emooney/Service_Ticket_Triage_Helper
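For illustration, here is a hypothetical example of the kind of structured output the AI extraction step might produce. The field names and shape are assumptions, not the workflow's actual schema.

```javascript
// Hypothetical triage output (field names are illustrative; the actual
// schema is defined in the workflow's AI node).
const triageResult = {
  type: "Bug",             // Bug | Enhancement | Onboarding | Question
  severity: "Important",   // Wishlist | Minor | Normal | Important | Critical
  priority: "High",        // Low | Normal | High
  status: "New",           // or "Needs More Info" when data is missing
  missingFields: [],       // e.g. ["steps to reproduce"] would block the ticket
};

// A ticket is blocked when critical data is absent.
const isBlocked = triageResult.missingFields.length > 0;
console.log(isBlocked ? "Needs More Info" : "Ready for triage");
```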
by NovaNode
Who is this for?

This template is designed for internal support teams, product specialists, and knowledge managers who want to build an AI-powered knowledge assistant with retrieval-augmented generation (RAG) and reinforcement learning from human feedback (RLHF) via Telegram.

What problem is this workflow solving?

Manual knowledge management and answering support queries can be time-consuming and error-prone. This solution automates importing and indexing official documentation into MongoDB vector search, and enhances AI responses with Telegram-based user feedback to continuously improve answer quality.

What these workflows do

Workflow 1: Document ingestion & indexing

- A manually triggered workflow imports product documentation from Google Docs.
- Documents are split into manageable chunks and embedded using OpenAI embeddings.
- Embedded document chunks are stored in a MongoDB Atlas vector store to enable semantic search.

Workflow 2: Telegram chat with RLHF feedback loop

- Listens for user messages via the Telegram bot integration.
- Uses vector similarity search on MongoDB to retrieve relevant documentation chunks (see the query sketch at the end of this description).
- Generates answers with the OpenAI GPT-4o-mini model using retrieval-augmented generation.
- Sends answers back via Telegram and waits for user feedback (approval or disapproval).
- Captures feedback, maps it as positive or negative, and stores it with the conversation data for future model improvement.

Setup

Setting up vector embeddings

1. Authenticate Google Docs and connect the Google Docs URL containing the product documentation you want to index.
2. Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings.
3. Create a search index on this collection to support vector similarity queries. Ensure the index name matches the one configured in n8n (data_index). See the example MongoDB search index template below for reference.

Setting up chat with Telegram RLHF

1. Create a bot in Telegram with @botFather using the /newbot command.
2. Connect the MongoDB database and search index used for vector search in the previous workflow.
3. Also create two new collections in MongoDB Atlas: one for feedback and one for chat history.
4. Create a search index for feedback, copying the provided template.
5. Configure the AI system prompt in the "Knowledge Base Agent" node, making sure it references all three tools connected (productDocs, feedbackPositive, feedbackNegative) as provided in the template prompt.

Make sure the product documentation and feedback collections connect to the same MongoDB database. There are three distinct MongoDB collections: one for product documentation, one for feedback, and one for chat history (the chat history collection can be separate). Ensure Telegram API credentials are valid and webhook URLs are correctly set up.

MongoDB Search Index Templates

Documentation Collection Index

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "_id": { "type": "string" },
      "text": { "type": "string" },
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      },
      "source": { "type": "string" },
      "doc_id": { "type": "string" }
    }
  }
}
```

Feedback Collection Index

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "prompt": { "type": "string" },
      "response": { "type": "string" },
      "text": { "type": "string" },
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      },
      "feedback": { "type": "token" }
    }
  }
}
```
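For reference, here is a minimal sketch of the kind of vector similarity query the chat workflow runs against the indexed collection. The `$search`/`knnBeta` stage is the Atlas Search counterpart to the `knnVector` mapping above; the collection name, the `db` handle, and the `queryEmbedding` variable are placeholders.

```javascript
// Sketch of a vector query against the "data_index" search index above.
// Assumes `db` is a connected MongoDB database and `queryEmbedding` is a
// 1536-dimension vector from the same OpenAI embedding model.
const results = await db.collection("product_docs").aggregate([
  {
    $search: {
      index: "data_index",
      knnBeta: {
        vector: queryEmbedding, // number[1536]
        path: "embedding",
        k: 5,                   // top 5 most similar chunks
      },
    },
  },
  { $project: { text: 1, source: 1, score: { $meta: "searchScore" } } },
]).toArray();
```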
by Jimleuk
This n8n template demonstrates the beginnings of building your own n8n-powered WhatsApp chatbot! Under the hood, it utilises n8n's powerful AI features to handle different message types and an AI agent to respond to the user. A powerful tool for any use case!

How it works

- The incoming WhatsApp Trigger provides a way to get messages into the workflow.
- The received message is extracted and sent through 1 of 4 branches for processing (a sketch of this branching is shown after this description).
- Each processing branch uses AI to analyse, summarise or transcribe the message so that the AI agent can understand it. The supported types are text, image, audio (voice notes) and video.
- The AI agent is used to generate a response generally and uses a Wikipedia tool for more complex queries.
- Finally, the response message is sent back to the WhatsApp user using the WhatsApp node.

How to use

Once you have set up and configured your WhatsApp account, you'll need to activate your workflow to start processing messages. Good to know: large media files may negatively impact workflow performance.

Requirements

- WhatsApp Business account
- Google Gemini for the LLM. Gemini is used specifically because it can accept audio and video files whereas, at the time of writing, many other providers like OpenAI's GPT do not.

Customising this workflow

- For performance reasons, consider detecting large audio and video files before sending them to the LLM. Pre-processing such files may allow your agent to perform better.
- Go beyond and create rich and engaging customer experiences by responding with images, audio and video instead of just text!
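As mentioned in "How it works", here is a rough sketch of the message-type branching as it might appear in an n8n Code node (in practice this is often a Switch node). The payload field names loosely follow the WhatsApp message shape and are assumptions; check your trigger's actual output.

```javascript
// Rough sketch of the 1-of-4 branching (assumed payload field names).
const message = $json.messages?.[0] ?? {};

let route;
switch (message.type) {
  case "text":  route = { route: "text",  content: message.text?.body }; break;
  case "image": route = { route: "image", mediaId: message.image?.id }; break;
  case "audio": route = { route: "audio", mediaId: message.audio?.id }; break; // voice notes
  case "video": route = { route: "video", mediaId: message.video?.id }; break;
  default:      route = { route: "unsupported", type: message.type };
}

return [{ json: route }];
```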
by Sebastian/OptiLever
Tired of spending HOURS writing product descriptions that don't rank or convert? This could be your solution.

This free Product Description Writer workflow for n8n uses a multi-agent AI system to turn your product list into conversion-focused, SEO-ready copy. It analyzes your product images, identifies key features, and writes optimized titles and descriptions for platforms like Shopify and Google Shopping. It can process your entire catalog in minutes, saving you countless hours of manual work.

This workflow is perfect for:

- 🛒 Shopify stores
- 🛒 Etsy sellers
- 🛒 Product managers
- 🛒 Digital marketers
- 🛒 Anyone who hates writing product copy manually!

How it works

This workflow automates the entire product description process in a few high-level steps:

1. **Reads Your Products**: The workflow starts by reading product data from your specified Google Sheet, including the product name, an image URL, and optional fields like brand voice or target market.
2. **Analyzes Product Images**: It downloads each product image and uses an AI vision model (GPT-4o-mini) to perform a detailed visual analysis, extracting objective information like materials, colors, features, and structure.
3. **Writes Optimized Copy**: The visual analysis and your original data are passed to two specialized AI agents. The first drafts a Shopify-optimized title and description, while the second refines it and generates additional SEO-focused copy for Google Merchant Center.
4. **Updates Your Spreadsheet**: The final, optimized product titles and descriptions for both Shopify and Google are automatically written back to the original Google Sheet.

Set up steps

Setting up this workflow takes only a few minutes. You will need to configure credentials for the following services:

- **Google Sheets**: To allow the workflow to read your product list and write back the results.
- **OpenAI**: To power the AI agents that analyze images and generate the copy.

Detailed instructions and customization tips are included in the sticky notes inside the workflow itself.

Benefits

- **Automated Vision-Based Copywriting**: Reduces manual description writing time.
- **Multi-Channel Ready**: Outputs are optimized for both Shopify and Google Merchant Center standards.
- **Brand Alignment**: Uses optional user-provided draft descriptions and brand voice to maintain brand tone.
- **SEO and Conversion Focus**: Titles and descriptions are optimized for both search engines and consumer engagement.
- **Image-Centric Accuracy**: Uses actual product images for accurate attribute extraction, minimizing errors from missing or vague text data.

Tips & Customization

- To adjust brand voice or tone, modify the system prompts in the Shopify and GMC AI agents.
- To extend the workflow for scheduled runs, add a cron trigger or a Google Sheets "status column" filter (see the sketch below).
- For QA/debugging, consider adding logging nodes to Slack or Discord, or export AI outputs to a review sheet before updating the main sheet.
- To improve Shopify or GMC field mappings, edit the final Google Sheets update node's column settings.
- For speed optimization, the batch size in the Loop Over Items node can be adjusted, but be mindful of API rate limits.
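As an example of the "status column" filter suggested in the tips, a Code node placed after the Google Sheets read might look like the following sketch. The column name `Status` and its values are assumptions; use whatever your sheet defines.

```javascript
// Keep only rows that have not been processed yet ("Run Once for All
// Items" mode, where `items` holds all rows read from the sheet).
return items.filter((item) => {
  const status = (item.json.Status ?? "").toString().trim().toLowerCase();
  return status === "" || status === "pending"; // only unprocessed rows
});
```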
by Alex Huy
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Description

This n8n workflow automatically scrapes Airbnb listings from a specified location and saves the data to a Google Sheet. It performs pagination to collect listings across multiple pages, extracts detailed information for each property, and organizes the data in a structured format for easy analysis.

How it Works

The workflow operates through these high-level steps:

1. Search Initialization: Starts with an Airbnb search for a specific location (London) with defined check-in/check-out dates and guest count.
2. Pagination Loop: Automatically processes multiple pages of search results using cursor-based pagination (see the sketch after this description).
3. Data Extraction: Parses listing information including names, prices, ratings, reviews, and URLs.
4. Detail Enhancement: Fetches additional details for each listing (house rules, highlights, descriptions, amenities).
5. Data Storage: Saves all collected data to a Google Sheet with proper formatting.
6. Loop Control: Continues until reaching the page limit (2 pages) or no more results are available.

Setup Steps

Prerequisites

- n8n instance with MCP (Model Context Protocol) support
- Google Sheets API credentials configured
- Airbnb MCP client properly set up

Configuration Steps

1. Configure MCP Client
   - Set up the Airbnb MCP client with your credential ID.
   - Ensure the client has access to the airbnb_search and airbnb_listing_details tools.
2. Google Sheets Setup
   - Create a Google Sheet with ID: 15IOJquaQ8CBtFilmFTuW8UFijux10NwSVzStyNJ1MsA
   - Configure Google Sheets OAuth2 credentials (ID: 6YhBlgb8cXMN3Ra2).
   - Ensure the sheet has these column headers: "id, name, url, price_per_night, total_price, price_details, beds_rooms, rating, reviews, badge, location, houseRules, highlights, description, amenities"
3. Search Parameters
   - Location: "London" (can be modified in the "Airbnb Search" node)
   - Adults: 7
   - Children: 1
   - Check-in: "2025-08-14"
   - Check-out: "2025-08-17"
   - Page limit: 2 (can be adjusted in the "If1" condition node)
4. Execution
   - Use the manual trigger "When clicking 'Execute workflow'" to start the process.
   - Monitor the workflow execution through the n8n interface.
   - Check the Google Sheet for populated data after completion.

Key Features

- Automatic Pagination: Processes multiple pages without manual intervention
- Comprehensive Data: Extracts both basic listing info and detailed property information
- Error Handling: Includes JSON parsing error handling and data validation
- Batch Processing: Uses split batches for efficient processing of individual listings
- Real-time Updates: Appends new data to existing Google Sheet records

Output Data Structure

Each listing contains:

- Basic info: ID, name, URL, pricing details, room/bed count
- Ratings: Average rating and review count
- Location: Latitude and longitude coordinates
- Enhanced details: House rules, highlights, descriptions, amenities
- Metadata: Page number, check-in/out dates, badges
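As referenced in the pagination step, the loop logic amounts to something like the following sketch. Here `searchAirbnb` stands in for the MCP tool call, and the `listings`/`nextCursor` response field names are assumptions.

```javascript
// Cursor-based pagination sketch (assumed MCP response shape).
const PAGE_LIMIT = 2; // matches the "If1" condition node
let cursor = null;
let page = 0;
const allListings = [];

while (page < PAGE_LIMIT) {
  const res = await searchAirbnb({
    location: "London",
    checkin: "2025-08-14",
    checkout: "2025-08-17",
    adults: 7,
    children: 1,
    cursor, // null on the first page
  });
  allListings.push(...res.listings);
  if (!res.nextCursor) break; // no more results available
  cursor = res.nextCursor;
  page += 1;
}
```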
by Usama Rehman
Advanced Gmail AI Auto-Responder with Context Intelligence

The next-generation email automation that knows your communication style, remembers conversations, and responds with human-like intelligence.

🚀 What Makes This Advanced?

Unlike basic AI email responders, this workflow creates contextually intelligent responses by:

- 📄 Reading your communication profile from Google Drive
- 🧠 Remembering full conversation history with vector embeddings
- 🎯 Understanding context from previous emails in the thread
- 🤖 Using AI agents instead of simple prompt-response patterns
- 💾 Building memory of your communication style and preferences

The result: responses that sound authentically like you, with perfect context awareness.

⏱️ Time & Impact

- Setup Time: 45 minutes
- Time Saved: 2-3 hours daily
- Skill Level: Intermediate-Advanced
- Monthly Cost: $20-30 (OpenAI API + storage)
- Intelligence Level: Human-like contextual awareness

🛠️ Prerequisites & Setup

Required Accounts:

- n8n Cloud/Self-hosted (AI features required)
- Gmail account with API access
- Google Drive with profile document
- OpenAI account (GPT-4o recommended)

Required Credentials in n8n:

- Gmail OAuth2 API
- Google Drive OAuth2 API
- OpenAI API (with sufficient credits)
by Khairul Muhtadin
The blogblizt: polylang workflow streamlines the creation and publication of high-quality blog content using powerful automation with n8n, OpenAI's GPT and the WordPress API. It enables you to effortlessly generate SEO-friendly articles complete with metadata and optimized featured images, improving content freshness and search engine visibility.

💡 Why Use blogblizt?

- **Automate content creation** to keep your blog fresh and engaging
- **Generate SEO-optimized posts** with expert-crafted titles, meta descriptions, and focus keyphrases
- **Save hours** of manual writing, image sourcing, and SEO configuration
- **Leverage AI** for topic ideation and high-quality writing tailored to international student audiences
- **Seamlessly publish and manage drafts** directly on your WordPress site via API
- **Produce captivating, relevant featured images** without external tools
- **Support multilingual content creation** with randomized language selection for diversity

⚡ Who Is This For?

- Content strategists managing WordPress blogs needing efficient topic generation
- SEO specialists wanting automated post creation with optimized metadata
- Website owners aiming to maintain active, multilingual content
- Marketers who want to leverage AI for high-quality, consistent article production

❓ What Problem Does It Solve?

This workflow automates the entire editorial cycle—from generating engaging topics with AI, drafting full-length articles, producing featured images automatically, to posting drafts configured for SEO on WordPress—dramatically reducing editor workload and improving content output.

🔧 What This Workflow Does

- ⏱ Trigger: Runs on manual trigger or a weekly schedule to ensure consistent content flow
- 📎 Fetch Site Context: Retrieves recent posts, taxonomies, and the WordPress API schema to understand site structure
- 🔍 Generate Topic: Uses OpenAI GPT-4.1-mini to roll a random language and craft a targeted blog post topic plus SEO metadata
- 🤖 Draft Article: Composes a comprehensive, SEO-friendly article tailored to the generated topic
- 💌 Create Draft: Posts the draft on WordPress with Yoast SEO fields populated
- 🖼 Generate Image: Creates a high-quality, cinematic featured image via AI
- 📤 Upload & Attach: Uploads the image to the WordPress media library and sets it as the post's featured image (see the sketch at the end of this description)

🔐 Setup Instructions

1. Import the workflow file into n8n.
2. Add credentials:
   - WordPress API (with create-post & media permissions)
   - OpenAI API key (for GPT and image models)
3. Customize categories, languages, and schedule in the relevant nodes.
4. Adjust the Schedule Trigger timing as desired (e.g. every Monday at 9 AM).
5. Test end-to-end on a staging WordPress site to verify drafts and images publish correctly.

🧩 Pre-Requirements

- An operational n8n instance (self-hosted or n8n Cloud)
- WordPress site with REST API access & proper authentication
- OpenAI account with API access for both language and image models
- (Optional) Yoast SEO plugin installed for metadata recognition

🛠️ Customize It Further

- Tweak OpenAI prompts for niche topics or additional languages
- Add social-media nodes to auto-share new posts
- Insert an editorial review step before publishing
- Refine image prompts for different visual styles (e.g., "modern infographic" vs. "cinematic portrait")

🧠 Nodes Used

- **Manual Trigger**
- **Schedule Trigger** (weekly)
- **HTTP Request** (fetch posts, taxonomies, schema; upload media)
- **Code** (JavaScript analyzers for API schema & taxonomy parsing)
- **OpenAI Chat** (GPT-4.1-mini for topics & articles)
- **OpenAI Image Generation** (for featured images)
- **WordPress** (create draft post)
- **Sticky Notes** (in-flow documentation)

📞 Support

Built by: Khaisa Studio
Tags: wordpress, marketing, polylang
Category: Content Creation
Need a custom workflow? Contact me on LinkedIn or the Web.
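For context on the "Upload & Attach" step, here is a hedged sketch using the standard WordPress REST API. The media and posts endpoints and the `featured_media` field are part of the WordPress REST API; the site URL, credentials, `imageBuffer`, and `postId` are placeholders for values produced earlier in the workflow.

```javascript
// Sketch: upload an image to the media library, then set it as the
// post's featured image (placeholder site URL and credentials).
const base = "https://your-site.example/wp-json/wp/v2";
const auth = `Basic ${Buffer.from("user:app-password").toString("base64")}`;

// 1. Upload the generated image to the media library.
const mediaRes = await fetch(`${base}/media`, {
  method: "POST",
  headers: {
    Authorization: auth,
    "Content-Disposition": 'attachment; filename="featured.png"',
    "Content-Type": "image/png",
  },
  body: imageBuffer, // binary image data from the image-generation step
});
const media = await mediaRes.json();

// 2. Attach the uploaded image to the draft post.
await fetch(`${base}/posts/${postId}`, {
  method: "POST",
  headers: { Authorization: auth, "Content-Type": "application/json" },
  body: JSON.stringify({ featured_media: media.id }),
});
```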
by Lucas Peyrin
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works

This template is your personal launchpad into the world of AI-powered automation. It provides a fully functional, interactive AI chatbot that you can set up in minutes, designed specifically for those new to AI Agents.

What is an AI Agent? Think of it as a smart assistant that doesn't just talk—it acts. You give it a set of "tools" (like other n8n tool nodes), and it intelligently decides which tool to use to answer your questions or complete your tasks.

This starter kit comes with a pre-built "toolbox" of superpowers, allowing your agent to:

- **Get the Weather:** Ask for the forecast anywhere in the world.
- **Get the News:** Fetch the latest headlines from n8n, CNN, and others.

The workflow is designed to be a hands-on learning experience, with detailed sticky notes explaining every component, from the chat interface to the agent's "brain" and "memory."

Set up steps

Setup time: ~2-3 minutes. This workflow is designed to be incredibly easy to start. You only need one free API key to get it working.

1. Add Your AI Key: The workflow uses Google's Gemini model by default, so you will need a free Gemini API key.
   - Find the Gemini node on the canvas. The sticky note right below it (How to Get Google Gemini Credentials) provides a link and simple instructions to get your key.
   - In the Gemini node, click the Credential dropdown and select + Create New Credential to add your key.
2. Activate the Workflow: At the top-right of the screen, click the "Inactive" toggle switch. It will turn green and say "Active". Your agent is now live!
3. Start Chatting: Open the Example Chat Window node (it has a 💬 icon). In its parameter panel, you will see a Chat URL. Click the link to copy it. Paste the URL into a new browser tab and start asking your agent questions!

Optional: The template also includes a disabled OpenAI chat model node and tools for Google Calendar and Gmail. You can enable and configure these later to change the underlying AI model or give your agent even more superpowers!
by RedOne
🎙️ AI Audio Assistant with Voice-to-Voice Response

Who is this for?

Businesses, customer service teams, content creators, and organizations who want to provide intelligent voice-based interactions through Telegram. Perfect for accessibility-focused services, multilingual support, or hands-free customer assistance.

What problem does this solve?

- Enables natural voice conversations with AI
- Breaks down language and accessibility barriers
- Provides instant voice responses to customer queries
- Reduces typing requirements for users
- Offers 24/7 voice-based customer support
- Maintains conversation context across voice interactions

What this workflow does:

1. Receives voice messages via Telegram bot
2. Transcribes audio using Deepgram's advanced speech-to-text
3. Processes transcribed text through an AI agent with knowledge base access
4. Generates intelligent responses based on conversation context
5. Converts the AI response to natural-sounding speech using Deepgram TTS
6. Sends the audio response back to the user via Telegram
7. Maintains conversation memory for contextual interactions

🔧 Technical Architecture

Core Components:

- **Telegram Bot**: Receives and sends voice messages
- **Deepgram STT**: Transcribes voice to text with high accuracy
- **OpenAI GPT**: Processes queries and generates responses
- **Supabase Knowledge Base**: Stores and retrieves business information
- **Memory Management**: Maintains conversation context
- **Deepgram TTS**: Converts text responses to natural speech

Data Flow (a code sketch of the Deepgram calls appears at the end of this description):

1. Voice Message → Telegram API → File Download
2. Audio File → Deepgram STT → Transcript
3. Transcript → AI Agent → Response Generation
4. Response → Deepgram TTS → Audio File
5. Audio Response → Telegram → User

🛠️ Setup Instructions

Prerequisites

- Telegram Bot Token
  - Create a bot via @BotFather
  - Get the bot token and configure the webhook
- Deepgram API Key
  - Sign up at deepgram.com
  - Get an API key for STT and TTS services
  - Note: currently hardcoded in the workflow
- OpenAI API Key
  - OpenAI account with API access
  - Configure in the OpenAI Chat Model node
- Supabase Database
  - Create a Supabase project
  - Set up the knowledge_base table
  - Configure API credentials

Step-by-Step Setup

1. Configure Telegram Bot
   - Update telegramToken in the "Prepare Voice Message Data" node
   - Set the correct bot token in the Telegram nodes
   - Test bot connectivity
2. Set Up Deepgram Integration
   - Replace the API key in the "Transcribe with Deepgram" node
   - Update the TTS endpoint in the "HTTP Request" node
   - Test voice transcription accuracy
3. Configure Knowledge Base

```sql
-- Create knowledge_base table in Supabase
CREATE TABLE knowledge_base (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  question TEXT NOT NULL,
  answer TEXT NOT NULL,
  category VARCHAR(100),
  keywords TEXT[],
  created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
```

4. Customize AI Prompts
   - Update the system message in the "Telegram AI Agent" node
   - Adjust temperature and max tokens in the OpenAI model
   - Configure memory session keys
5. Test End-to-End Flow
   - Send a test voice message to the bot
   - Verify transcription accuracy
   - Check AI response quality
   - Validate audio output clarity

🎛️ Configuration Options

Voice Recognition Settings

- **Model**: nova-2 (Deepgram's latest model)
- **Language**: English (en) - can be changed
- **Smart Format**: Enabled for better punctuation

AI Response Settings

- **Temperature**: 0.3 (conservative responses)
- **Max Tokens**: 100 (adjust based on needs)
- **Memory**: Session-based conversation context

Text-to-Speech Settings

- **Model**: aura-2-thalia-en (natural female voice)
- **Alternative voices**: Available in the Deepgram TTS API
- **Audio Format**: Optimized for Telegram

🔒 Security Considerations

API Key Management

```javascript
// Current implementation has hardcoded tokens.
// Recommended: use environment variables instead.
const telegramToken = process.env.TELEGRAM_BOT_TOKEN;
const deepgramKey = process.env.DEEPGRAM_API_KEY;
```

Data Privacy

- Voice messages are processed by external APIs
- Consider data retention policies
- Implement user consent mechanisms
- Ensure GDPR compliance if applicable

📊 Monitoring & Analytics

Key Metrics to Track

- Voice message processing time
- Transcription accuracy rates
- AI response quality scores
- User engagement metrics
- Error rates and failure points

Recommended Logging

```javascript
// Add to the workflow for monitoring
console.log({
  timestamp: new Date().toISOString(),
  user_id: userData.user_id,
  transcript_confidence: transcriptData.confidence,
  response_length: aiResponse.length,
  processing_time: processingTime
});
```

🚀 Customization Ideas

Enhanced Features

1. Multi-language Support
   - Add language detection
   - Support multiple TTS voices
   - Translate responses
2. Voice Commands
   - Implement wake words
   - Add voice shortcuts
   - Create voice menus
3. Advanced AI Features
   - Sentiment analysis
   - Intent classification
   - Escalation triggers
4. Integration Expansions
   - Connect to CRM systems
   - Add calendar scheduling
   - Integrate with help desk tools

Performance Optimizations

- Implement audio preprocessing
- Add response caching
- Optimize API call sequences
- Implement retry mechanisms

🐛 Troubleshooting

Common Issues

- Voice Not Transcribing
  - Check Deepgram API key validity
  - Verify audio format compatibility
  - Test with shorter voice messages
- Poor Audio Quality
  - Adjust TTS model settings
  - Check network connectivity
  - Verify Telegram audio limits
- AI Responses Too Generic
  - Improve knowledge base content
  - Adjust system prompts
  - Increase context window
- Memory Not Working
  - Check session key configuration
  - Verify user ID extraction
  - Test conversation continuity

💡 Best Practices

Voice Interface Design

- Keep responses concise and clear
- Use natural speech patterns
- Avoid technical jargon
- Provide clear next steps

Knowledge Base Management

- Regular content updates
- Clear categorization
- Keyword optimization
- Quality assurance testing

User Experience

- Fast response times (<5 seconds)
- Consistent voice personality
- Graceful error handling
- Clear capability communication

📈 Success Metrics

Technical KPIs

- Response time: <3 seconds average
- Transcription accuracy: >95%
- User satisfaction: >4.5/5
- Uptime: >99.5%

Business KPIs

- Customer query resolution rate
- Support ticket reduction
- User engagement increase
- Cost per interaction decrease

🔄 Maintenance Schedule

Daily

- Monitor error logs
- Check API rate limits
- Verify service uptime

Weekly

- Review conversation quality
- Update knowledge base
- Analyze usage patterns

Monthly

- Performance optimization
- Security audit
- Feature updates
- User feedback review

📚 Additional Resources

Documentation Links

- Deepgram STT API
- Deepgram TTS API
- Telegram Bot API
- OpenAI API
- Supabase Documentation

Community Support

- n8n Community Forum
- Telegram Bot Developers Group
- Deepgram Developer Discord
- OpenAI Developer Community

Note: This template requires active API subscriptions for Deepgram and OpenAI services. Costs may apply based on usage volume.
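To make the data flow concrete, here is a minimal sketch of the two Deepgram calls. The `/v1/listen` and `/v1/speak` endpoints and the `model`/`smart_format` parameters follow Deepgram's public API; the API key, `audioBuffer`, and `aiResponse` variables are placeholders for values produced elsewhere in the workflow.

```javascript
const headers = { Authorization: `Token ${process.env.DEEPGRAM_API_KEY}` };

// Speech-to-text: send the downloaded Telegram voice note (OGG/Opus).
const sttRes = await fetch(
  "https://api.deepgram.com/v1/listen?model=nova-2&smart_format=true",
  {
    method: "POST",
    headers: { ...headers, "Content-Type": "audio/ogg" },
    body: audioBuffer,
  }
);
const { results } = await sttRes.json();
const transcript = results.channels[0].alternatives[0].transcript;

// Text-to-speech: turn the AI agent's reply into audio for Telegram.
const ttsRes = await fetch(
  "https://api.deepgram.com/v1/speak?model=aura-2-thalia-en",
  {
    method: "POST",
    headers: { ...headers, "Content-Type": "application/json" },
    body: JSON.stringify({ text: aiResponse }),
  }
);
const audioReply = Buffer.from(await ttsRes.arrayBuffer());
```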
by Mind-Front
Workflow Description

This workflow is a powerful, fully automated web query and semantic reranking system that allows users to perform precise, detailed searches, intelligently rank search results, and receive high-quality, structured output. Built with AI-powered components, the workflow leverages semantic query generation, result re-ranking, and real-time reporting to deliver actionable insights. It is particularly well-suited for real-time data retrieval, market research, and any domain requiring automated yet customizable search result processing.

How It Works

1. Webhook Integration for Input: The workflow begins with a Webhook node that captures the user's search query as input, enabling seamless integration with other systems.
2. Semantic Query Generation (Powered by "Semantic Search - Query Maker"): Using AI (Google Gemini), the initial query is refined and transformed into a context-aware, expert-level search query. This process ensures that the search engine retrieves the most relevant and precise results.
3. Web Search Execution: The free Brave Search API processes the refined query to fetch search results, ensuring speed and cost efficiency.
4. Semantic Re-Ranking of Results (Powered by "Semantic Search - Result Re-Ranker"): The workflow reranks the search results based on relevance to the original question, prioritizing the most relevant URLs dynamically. Results are passed through AI-powered intelligent reranking to ensure the final output reflects optimal relevance and quality.
5. Structured Output Generation: Results are converted into a well-structured, organized JSON format, ranking the top 10 search results with their titles, links, and descriptions. Missing ranks (if fewer than 10 results) are handled gracefully with placeholders, ensuring consistency (see the sketch after this description).
6. Real-Time Reporting: The reranked search results are sent back to the user or integrated system via the Webhook node in a JSON-formatted response. Reports are highly structured and ready for downstream processing or consumption.

Key Features

- AI-Powered Query Refinement: Transforms basic queries into detailed, expert-level search terms for optimal results.
- Dual-Stage Semantic Search: Combines query generation and result reranking for precise, high-relevance outputs.
- Top 10 Result Reranking: Dynamically ranks and organizes the top 10 results based on semantic relevance to the query.
- Customizable Integration: Fully modifiable for alternative APIs or integrations, such as other search engines or custom ranking logic.
- JSON-Formatted Structured Results: Outputs reranked results in a standardized format, ideal for integration into systems requiring machine-readable data.
- Webhook-Based Flexibility: Works seamlessly with Webhook inputs for easy deployment in diverse workflows.
- Cost-Effective API Usage: Pre-integrated with the free Brave Search API, minimizing operational costs while delivering accurate search results.

Instructions for API Setup

1. Brave Search API: Visit api.search.brave.com to obtain a free-tier API key for web search.
2. AI Integration (Google Gemini): Visit Google AI Studio and generate an API key for semantic query generation and reranking.
3. Webhook Configuration: Set up the input Webhook to capture search queries and the output Webhook to deliver reranked results.

Why Choose This Workflow?

- **Precision and Relevance**: Combines AI-based query generation with advanced reranking for accurate results.
- **Fully Customizable**: Easily adapt the workflow to alternative APIs, search engines, or ranking logic.
- **Real-Time Insights**: Provides structured, real-time output ready for immediate use.
- **Scalable and Modular**: Ideal for businesses, researchers, and data analysts needing a robust, repeatable solution.

Tags

AI Workflow, Semantic Search, Query Refinement, Search Result Reranking, Real-Time Search, Web Search Automation, Google Search, Brave Search, News Search, API Integration, Market Research, Competitive Intelligence, Business Intelligence, Google Gemini, Anthropic Claude, OpenAI, GPT, LLM
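A small sketch of the structured-output step (step 5) with placeholder padding might look like the following. The field names are assumptions; adapt them to your actual reranked result shape.

```javascript
// Rank the reranked results 1-10 and pad missing ranks with placeholders
// so the JSON shape stays consistent for downstream consumers.
function buildTop10(results) {
  return Array.from({ length: 10 }, (_, i) => {
    const r = results[i];
    return r
      ? { rank: i + 1, title: r.title, link: r.link, description: r.description }
      : { rank: i + 1, title: "N/A", link: "N/A", description: "No result returned" };
  });
}

// Example: 3 reranked results in, always 10 ranked entries out.
console.log(buildTop10([
  { title: "A", link: "https://a.example", description: "..." },
  { title: "B", link: "https://b.example", description: "..." },
  { title: "C", link: "https://c.example", description: "..." },
]).length); // 10
```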
by Eric Francis
How it works

This workflow reads a list of URLs every 15 minutes and sends an HTTP request to every URL on the list.

Set up steps

1. Schedule the workflow to run at your desired frequency (default is every 15 minutes).
2. Add your desired URLs to the list. The list should be in the format shown below (don't forget to put single quotes around every URL in the list, and separate each one with a comma!).
3. Turn the workflow ON.

Ideas to customize the workflow for your own use cases:

- Change the HTTP method
- Add headers
- Add a request body
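The original description referenced an image of the list format; as an illustration, the list and request loop amount to something like this (the URLs are placeholders):

```javascript
// URL list format: single quotes around every URL, separated by commas.
const urls = [
  'https://example.com/health',
  'https://example.org/ping',
  'https://example.net/status',
];

// The workflow then sends an HTTP request to each URL on the list.
for (const url of urls) {
  await fetch(url); // default GET; change the method/headers/body as needed
}
```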