by Davide
This workflow implements a Retrieval-Augmented Generation (RAG) system using Google Gemini's File Search API. It allows users to upload files to a dedicated search store and then ask questions about their content in a chat interface. The system automatically retrieves relevant information from the uploaded files to provide accurate, context-aware answers.

## Key Advantages

1. ✅ **Seamless Integration of File Upload + AI Context**
   The workflow automates the entire lifecycle: upload file, index file, retrieve content for AI chat. Everything happens inside one n8n automation, without manual actions.
2. ✅ **Automatic Retrieval for Every User Query**
   The AI agent is instructed to always query the Search Store. This ensures more accurate answers, context-aware responses, and the ability to reference the exact content the user has uploaded. Perfect for knowledge bases, documentation Q&A, internal tools, and support.
3. ✅ **Reusable Search Store for Multiple Sessions**
   Once created, the Search Store can be reused: multiple files can be imported, and many queries can leverage the same indexed data. A sustainable foundation for scalable RAG operations.
4. ✅ **Visual and Modular Workflow Design**
   Thanks to n8n's node-based flow, each step is clearly separated, easy to debug, and easy to expand (e.g., adding authentication, connecting to a database, notifications, etc.).
5. ✅ **Supports Both Form Submission and Chat Messages**
   The workflow is built with two entry points: a form for uploading files and a chat-triggered entry point for RAG conversations. This means the system can be embedded in multiple user interfaces.
6. ✅ **Compliant and Efficient Interaction With Gemini APIs**
   The workflow respects the structure of Gemini's File Search API: `/fileSearchStores` (create store), the upload endpoint, the importFile endpoint, and generateContent with file search tools. This ensures compatibility and future expandability.
7. ✅ **Memory-Aware Conversations**
   With the Memory Buffer node, the chat session preserves context across messages, providing a more natural and sophisticated conversational experience.

## How it Works

**STEP 1 - Create a new Search Store**
Triggered manually via the "Execute workflow" node, this step sends a request to the Gemini API to create a FileSearch Store, which acts as a private vector index for your documents. The store name is then saved using a Set node. This store is later used for file import and retrieval.

**STEP 2 - Upload and import a file into the Search Store**
When the form is submitted (through the Form Trigger), the workflow:
- Accepts a file upload via the form.
- Uploads the file to Gemini using the `/upload` endpoint.
- Imports the uploaded file into the Search Store, making it searchable.

This step ensures content is stored, chunked, and indexed so the AI model can retrieve relevant sections later.

**STEP 3 - RAG-enabled Chat with Google Gemini**
When a chat message is received:
- The workflow loads the Search Store identifier.
- A LangChain Agent is used along with the Google Gemini Chat Model.
- The model is configured to always use the SearchStore tool, so every user query is enriched by a search inside the indexed files.
- The system retrieves relevant chunks from your documents and uses them as context for generating more accurate responses.

This creates a fully functioning RAG chatbot powered by Gemini.

## Set up Steps

Before activating this workflow, you must complete the following configuration:

1. **Google Gemini API Credentials**: Ensure you have a valid Google AI Studio API key. This key must be entered in all HTTP Request nodes (Create Store, Upload File, Import to Store, and SearchStore).
2. **Configure the Search Store**: Manually trigger the "Create Store" node once via the "Execute Workflow" button. This calls the Gemini API to create a new File Search Store and returns its resource name (e.g., `fileSearchStores/my-store-12345`).
3. **Update the Set nodes**: Copy this resource name and update the "Get Store" and "Get Store1" Set nodes. Replace the placeholder value `fileSearchStores/my-store-XXX` in both nodes with the actual name of your newly created store.
4. **Deploy Triggers**: For production use, activate the workflow. This generates public URLs for the "On form submission" node (for file uploads) and the "When chat message received" node (for the chat interface). These URLs can be embedded in your applications (e.g., a website or dashboard).

Once these steps are complete, the workflow is ready. Users can start uploading files via the form and then ask questions about them in the chat.

Need help customizing? Contact me for consulting and support or add me on Linkedin.
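The STEP 1 request can be reproduced outside n8n to verify your API key and capture the store's resource name. A minimal JavaScript sketch: the base URL, path, and payload shape are assumptions based on the endpoints listed above, so check them against the current Gemini File Search documentation before relying on them.

```javascript
// Sketch of the "Create Store" call. Endpoint path and payload shape are
// assumptions drawn from the endpoint names described above.
const BASE = 'https://generativelanguage.googleapis.com/v1beta';

function buildCreateStoreRequest(apiKey, displayName) {
  return {
    url: `${BASE}/fileSearchStores?key=${encodeURIComponent(apiKey)}`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ displayName }),
    },
  };
}

// The response's `name` field (e.g. "fileSearchStores/my-store-12345")
// is the value the "Get Store" Set nodes need:
// const req = buildCreateStoreRequest(myKey, 'my-docs');
// const res = await fetch(req.url, req.options);
// const { name } = await res.json();
```

The same request-building pattern applies to the upload and importFile steps, with the store's `name` interpolated into the path.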
by Oneclick AI Squad
# Competitor Price & Feature Tracker for Real Estate Projects

## Overview

This solution monitors competitor pricing and features for real estate projects by fetching data from a competitor API, parsing it, logging it to Google Sheets, and sending email alerts for significant price changes. It runs on a schedule to keep real-time track of market trends.

## Operational Process

- **Set Cron**: Triggers the workflow on a schedule (e.g., hourly).
- **Fetch Competitor Data**: Performs GET requests to retrieve competitor pricing and feature data (e.g., https://api.competitor.com).
- **Wait For Data**: Introduces a delay to ensure data is fully retrieved.
- **Parse Data**: Processes and extracts relevant pricing and feature details.
- **Log to Google Sheets**: Appends the parsed data to a Google Sheet for tracking.
- **Check Price Change**: Evaluates whether there is a significant price change.
- **Send Alert Email**: Sends an email notification if a price change is detected.
- **No Action for Minor Changes**: Skips action if no significant price change is found.

## Implementation Guide

1. Import the workflow JSON into n8n.
2. Configure the Cron node for the desired schedule (e.g., every hour).
3. Set up the HTTP node with the competitor API URL (e.g., https://api.competitor.com).
4. Configure Google Sheets integration and specify the log sheet.
5. Test with a sample API call and verify email alerts.
6. Adjust price change thresholds in the Check Price Change node as needed.

## Requirements

- HTTP request capability for API data retrieval
- Google Sheets API for data logging
- Email service (e.g., SMTP) for alerts
- n8n for workflow automation and scheduling

## Customization Options

- Adjust the Cron schedule for different intervals (e.g., daily).
- Modify the HTTP node to fetch additional competitor data (e.g., features, availability).
- Customize email alert content in the Send Alert Email node.
- Enhance the Google Sheets log with additional fields (e.g., timestamp, competitor name).
- Add Slack or WhatsApp notifications for additional alert channels.
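The "Check Price Change" evaluation can be implemented in a small Code node. A sketch, assuming a percentage threshold: the field names and the 5% default are illustrative, not part of the template.

```javascript
// Mirrors the "Check Price Change" node: returns true when the new price
// deviates from the last logged price by more than thresholdPct percent.
// Field names and the 5% default are illustrative.
function isSignificantChange(previousPrice, currentPrice, thresholdPct = 5) {
  if (!previousPrice) return false; // first run: nothing to compare against
  const changePct = Math.abs(currentPrice - previousPrice) / previousPrice * 100;
  return changePct > thresholdPct;
}
```

Lowering the threshold makes the tracker more sensitive but noisier; raising it keeps alerts focused on real repricing events.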
by Milan Vasarhelyi - SmoothWork
Video Introduction

Want to automate your inbox or need a custom workflow? 📞 Book a Call | 💬 DM me on Linkedin

## Workflow Overview

This workflow automatically delivers a daily weather forecast to your email inbox every morning at 7:00 AM. It demonstrates practical API-to-API integration by connecting the Meteosource weather API directly to Gmail using n8n's HTTP Request node, without requiring a pre-built Meteosource integration.

## Why This Workflow is Valuable

Instead of manually checking weather forecasts each morning, this automation fetches current and next-day weather summaries from Meteosource and delivers them directly to your inbox. It's a perfect example of how direct API integration unlocks tools that don't have dedicated n8n nodes, giving you access to the full functionality of any service with an API.

## Key Features

- Scheduled daily execution at 7:00 AM (customizable)
- Fetches weather data using the Meteosource API with secure Query Auth credentials
- Sends a formatted email with today's weather in the subject and tomorrow's forecast in the body
- Easy location and recipient customization through a Config node

## Setup Requirements

**Meteosource API Account**: Sign up for a free account at Meteosource to get your API key (includes 400 free calls per day, more than enough for daily forecasts).

Credentials needed:
- **Meteosource credentials**: Create an HTTP Query Auth credential in n8n with the parameter name `key` and your Meteosource API key as the value
- **Gmail OAuth2**: Connect your Gmail account to n8n for sending emails

## Configuration

Open the Config node to personalize:
- **place_id**: Change from "london" to your desired location (use Meteosource place ID format)
- **send_to_email**: Update with your preferred recipient email address

This workflow demonstrates the power of the HTTP Request node for connecting any API to your automation workflows.
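For reference, the URL the HTTP Request node sends can be built by hand. A sketch: the path and the `sections` parameter are assumptions about Meteosource's point-forecast endpoint, and the `key` parameter is appended at runtime by the Query Auth credential rather than by this code.

```javascript
// Builds the Meteosource forecast URL from the Config node values.
// Path and parameter names are assumptions; n8n's HTTP Query Auth
// credential appends the `key` parameter with your API key at runtime.
function buildForecastUrl(placeId) {
  const base = 'https://www.meteosource.com/api/v1/free/point';
  const params = new URLSearchParams({
    place_id: placeId,
    sections: 'daily', // today's and tomorrow's summaries
  });
  return `${base}?${params}`;
}
```

Swapping `place_id` in the Config node is all the workflow needs to cover a different city.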
by EmailListVerify
This workflow allows you to:
- scrape Google Maps data using SerpAPI
- discover generic email addresses like contact@ using the EmailListVerify API

## Who's it for

This template is designed to prepare cold outreach for local businesses like restaurants or hotels (you need to target a type of business that is listed on Google Maps). It generates a list of leads with phone numbers and email addresses. The email addresses you will get are generic, like contact@. This isn't a problem if you are targeting small businesses, as the owner will most likely monitor those inboxes. However, if your ideal customer profile has more than 20 employees, I do not recommend using those email addresses for cold outreach.

## Requirements

This template uses:
- Google Sheets to handle input and output data
- SerpAPI to scrape Google Maps (250 searches/month on the free plan)
- EmailListVerify to discover emails (from $0.05 per email)

## Notes

This template is an extension of Lucas Perret's template, adding an email discovery module. If there is interest, I can make a similar template using Apify as an alternative to SerpAPI for Google Maps scraping.
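Conceptually, the email-discovery step derives generic addresses from each business's scraped website and then verifies them. A sketch of that derivation; the prefix list and the input shape are assumptions, and the actual deliverability check is done by EmailListVerify, not by this code.

```javascript
// Derives generic contact-address candidates from a business website
// scraped off Google Maps, ready to be checked with EmailListVerify.
// The prefix list is illustrative.
function genericEmailCandidates(websiteUrl, prefixes = ['contact', 'info', 'hello']) {
  const host = new URL(websiteUrl).hostname.replace(/^www\./, '');
  return prefixes.map((p) => `${p}@${host}`);
}
```

Each candidate is then passed to the verification API, and only addresses that verify are written back to the Google Sheet.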
by Vigh Sandor
## SETUP INSTRUCTIONS

### 1. Configure Kubeconfig

- Open the "Kubeconfig Setup" node
- Paste your entire kubeconfig file content into the `kubeconfigContent` variable
- Set your target namespace in the `namespace` variable (default: 'production')

Example kubeconfig format:

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUd...
    server: https://your-cluster.example.com:6443
  name: your-cluster
contexts:
- context:
    cluster: your-cluster
    user: your-user
  name: your-context
current-context: your-context
users:
- name: your-user
  user:
    token: eyJhbGciOiJSUzI1...
```

### 2. Telegram Configuration

- Create a Telegram bot via @BotFather
- Get your bot token and add it as a credential in n8n (Telegram API)
- Find your chat ID:
  - Message your bot
  - Visit: https://api.telegram.org/bot<YourBotToken>/getUpdates
  - Look for "chat":{"id":...}
- Open the "Send Telegram Alert" node
- Replace YOUR_TELEGRAM_CHAT_ID with your actual chat ID
- Select your Telegram API credential

### 3. Schedule Configuration

- Open the "Schedule Trigger" node
- Default: runs every 1 minute
- Adjust the interval based on your monitoring needs:
  - Every 5 minutes: change the field to minutes and set `minutesInterval` to 5
  - Every hour: change the field to hours and set `hoursInterval` to 1
  - Cron expression: use a custom cron schedule

### 4. kubectl Installation

- The workflow automatically downloads kubectl (v1.34.0) during execution
- No pre-installation is required on the n8n host
- kubectl is downloaded and used temporarily for each execution

## HOW IT WORKS

### Workflow Steps

1. **Schedule Trigger**: Runs automatically based on the configured interval and initiates the monitoring cycle.
2. **Kubeconfig Setup**: Loads the kubeconfig and namespace configuration and passes credentials to kubectl commands.
3. **Parallel Data Collection**:
   - Get Pods: fetches all pods from the specified namespace
   - Get Deployments: fetches all deployments from the specified namespace
   - Both commands run in parallel for efficiency
4. **Process & Generate Report**:
   - Parses pod and deployment data
   - Groups pods by their owner (Deployment, DaemonSet, StatefulSet, or Node)
   - Calculates readiness statistics for each workload
   - Detects alerts: workloads with 0 ready pods
   - Generates a comprehensive Markdown report including deployments with replica counts and pod details, other workloads (DaemonSets, StatefulSets, Static Pods), standalone pods (if any), and pod-level details: status, node, restart count
5. **Has Alerts?**: Checks if any workloads have 0 ready pods and routes to the appropriate action.
6. **Send Telegram Alert** (if alerts exist): Sends a formatted alert message to Telegram, including namespace information, the list of problematic workloads, and the full status report.
7. **Save Report**: Saves the Markdown report to a file (filename format: `k8s-report-YYYY-MM-DD-HHmmss.md`). Always executes, regardless of alert status.

### Security Features

- **Temporary kubectl**: downloaded and used only during execution
- **Temporary kubeconfig**: written to `/tmp/kubeconfig-<random>.yaml`
- **Automatic cleanup**: the kubeconfig file is deleted after each kubectl command
- **No persistent credentials**: nothing is stored on disk between executions

### Alert Logic

Alerts are triggered when any workload has zero ready pods:
- Deployments with `readyReplicas < 1`
- DaemonSets with `numberReady < 1`
- StatefulSets with `readyReplicas < 1`
- Static Pods (Node-owned) with no ready instances

### Report Sections

- **Deployments**: all Deployment-managed pods (via ReplicaSets)
- **Other Workloads**: DaemonSets, StatefulSets, and Static Pods (kube-system components)
- **Standalone Pods**: pods without recognized owners (rare)
- **Alerts**: summary of workloads requiring attention

## KEY FEATURES

- **Automatic kubectl management** - no pre-installation needed
- **Multi-workload support** - Deployments, DaemonSets, StatefulSets, Static Pods
- **Smart pod grouping** - uses Kubernetes ownerReferences
- **Conditional alerting** - only notifies when issues are detected
- **Detailed reporting** - pod-level status, node placement, restart counts
- **Secure credential handling** - temporary files, automatic cleanup
- **Markdown format** - easy to read and store

## TROUBLESHOOTING

**Issue: "Cannot read properties of undefined"**
- Ensure both "Get Pods" and "Get Deployments" nodes execute successfully
- Check that kubectl can access your cluster with the provided kubeconfig

**Issue: No alerts when there should be**
- Verify the namespace contains deployments or workloads
- Check that pods are actually not ready (use `kubectl get pods -n <namespace>`)

**Issue: Telegram message not sent**
- Verify the Telegram API credential is configured correctly
- Confirm the chat ID is correct and the bot has permission to message you
- Check that the bot was started (send /start to the bot)

**Issue: kubectl download fails**
- Check internet connectivity from the n8n host
- Verify access to the dl.k8s.io domain
- Consider pre-installing kubectl on the host and removing the download commands

## CUSTOMIZATION

### Change Alert Threshold

Edit the Process & Generate Report node to change when alerts trigger:

```javascript
// Change from "< 1" to your desired threshold
if (readyReplicas < 2) { // Alert if fewer than 2 ready pods
  alerts.push({...});
}
```

### Monitor Multiple Namespaces

- Duplicate the workflow for each namespace
- Or modify "Kubeconfig Setup" to loop through multiple namespaces

### Custom Report Format

Edit the Markdown generation in the Process & Generate Report node to customize the section order, the information displayed, and the formatting style.

### Additional Notification Channels

Add nodes after "Has Alerts?" to send notifications via:
- Email (SMTP node)
- Slack (Slack node)
- Discord (Discord node)
- Webhook (HTTP Request node)
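The "smart pod grouping" step can be sketched in plain JavaScript. The field paths follow the standard Kubernetes pod schema; collapsing a ReplicaSet owner to its parent Deployment by trimming the pod-template-hash suffix is a simplification of what the Process & Generate Report node does.

```javascript
// Groups pods by their controller using ownerReferences. A ReplicaSet
// owner is collapsed to its parent Deployment, since ReplicaSet names
// look like "<deployment>-<pod-template-hash>".
function groupPodsByOwner(pods) {
  const groups = {};
  for (const pod of pods) {
    const ref = (pod.metadata.ownerReferences || [])[0];
    let key;
    if (!ref) {
      key = 'Standalone';
    } else if (ref.kind === 'ReplicaSet') {
      key = `Deployment/${ref.name.replace(/-[a-z0-9]+$/, '')}`;
    } else {
      key = `${ref.kind}/${ref.name}`; // DaemonSet, StatefulSet, Node, ...
    }
    (groups[key] = groups[key] || []).push(pod.metadata.name);
  }
  return groups;
}
```

Readiness statistics and the zero-ready alert check are then computed per group rather than per pod.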
by Luis Acosta
# 🎧 Upload Podcast Episodes to Spotify via RSS & Google Drive

Skip the manual steps and publish your podcast episodes to Spotify in minutes, fully automated. This workflow takes your finished audio, uploads it to Google Drive, updates your podcast's RSS feed in GitHub, and pushes it live on Spotify and other platforms linked to that feed. No more copy-pasting links or manually editing XML files: everything happens in one click. It's perfect for podcasters who already have an RSS feed connected to Spotify for Podcasters and want a repeatable, hands-free publishing process.

## 💡 What this workflow does

- ✅ Reads your finished MP3 from a local path or previous automation step
- ☁️ Uploads the audio to Google Drive and creates a public share link
- 📄 Fetches your existing rss.xml file from GitHub
- ➕ Appends a new `<item>` entry with title, description, publication date, and MP3 link
- 🔄 Commits the updated RSS file back to GitHub, triggering updates on Spotify
- 🎯 Ensures your episode appears on Spotify once your RSS is already linked in Spotify for Podcasters

## 🛠 What you'll need

- A Google Drive account with OAuth credentials and a target folder ID
- A GitHub repository containing your rss.xml file
- An RSS feed connected to Spotify for Podcasters (set this up once before running the workflow)
- An MP3 file that meets Spotify's audio format requirements

## ✨ Use cases

- Automate weekly or daily podcast publishing to Spotify
- Push your AI-generated podcast episodes live without manual editing
- Maintain a single source of truth for your feed in GitHub while streaming across multiple platforms

## 📬 Contact & Feedback

Need help customizing this? Have ideas for improvement?
📩 Luis.acosta@news2podcast.com
Or DM me on Twitter @guanchehacker

If you're building something more advanced with audio + AI, like fully automated podcast creation and publishing, let's talk. I might have the missing piece you need.
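The append step can be sketched as follows. The tag set is the minimum most podcast directories expect; match it to your existing feed, and note that real titles and descriptions containing `&`, `<`, or `>` need XML-escaping, which this sketch omits.

```javascript
// Builds the <item> block appended to rss.xml for a new episode,
// then splices it in just before the closing </channel> tag.
function buildRssItem({ title, description, mp3Url, fileSizeBytes, pubDate }) {
  return [
    '    <item>',
    `      <title>${title}</title>`,
    `      <description>${description}</description>`,
    `      <pubDate>${pubDate.toUTCString()}</pubDate>`,
    `      <enclosure url="${mp3Url}" length="${fileSizeBytes}" type="audio/mpeg"/>`,
    `      <guid isPermaLink="false">${mp3Url}</guid>`,
    '    </item>',
  ].join('\n');
}

function appendItem(rssXml, itemXml) {
  return rssXml.replace('</channel>', `${itemXml}\n  </channel>`);
}
```

Spotify for Podcasters re-reads the feed after the GitHub commit, which is what makes the episode appear without any manual step.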
by DataMinex
# Dynamic Search Interface with Elasticsearch and Automated Report Generation

## 🎯 What this workflow does

This template creates a comprehensive data search and reporting system that allows users to query large datasets through an intuitive web form interface. The system performs real-time searches against Elasticsearch, processes results, and automatically generates structured reports in multiple formats for data analysis and business intelligence.

Key Features:
- 🔍 Interactive web form for dynamic data querying
- ⚡ Real-time Elasticsearch data retrieval with complex filtering
- 📊 Auto-generated reports (Text & CSV formats) with custom formatting
- 💾 Automatic file storage system for data persistence
- 🎯 Configurable search parameters (amounts, time ranges, entity filters)
- 🔧 Scalable architecture for handling large datasets

## 🛠️ Setup requirements

### Prerequisites

- **Elasticsearch cluster** running on https://localhost:9220
- **Transaction dataset** indexed in the `bank_transactions` index
- **Sample dataset**: Download from Bank Transaction Dataset
- **File system access** to the /tmp/ directory for report storage
- **HTTP Basic Authentication** credentials for Elasticsearch

### Required Elasticsearch Index Structure

This template uses the Bank Transaction Dataset from GitHub:
https://github.com/dataminexcode/n8n-workflow/blob/main/Dynamic%20Search%20Interface%20with%20Elasticsearch%20and%20Automated%20Report%20Generation/data

You can use this Python script for importing the CSV file into Elasticsearch: Python script for importing data

Your `bank_transactions` index should contain documents with these fields:

```json
{
  "transaction_id": "TXN_123456789",
  "customer_id": "CUST_000001",
  "amount": 5000,
  "merchant_category": "grocery_net",
  "timestamp": "2025-08-10T15:30:00Z"
}
```

Dataset Info: This dataset contains realistic financial transaction data perfect for testing search algorithms and report generation, with over 1 million transaction records including various transaction patterns and data types.
### Credentials Setup

1. Create HTTP Basic Auth credentials in n8n
2. Configure with your Elasticsearch username/password
3. Assign to the "Search Elasticsearch" node

## ⚙️ Configuration

### 1. Form Customization

- **Webhook Path**: update the webhook ID if needed
- **Form Fields**: modify amounts, time ranges, or add new filters
- **Validation**: adjust required fields based on your needs

### 2. Elasticsearch Configuration

- **URL**: change localhost:9220 to your ES cluster endpoint
- **Index Name**: update `bank_transactions` to your index name
- **Query Logic**: modify search criteria in the "Build Search Query" node
- **Result Limit**: adjust the `size: 100` parameter for more/fewer results

### 3. File Storage

- **Directory**: change /tmp/ to your preferred storage location
- **Filename Pattern**: modify the fraud_report_YYYY-MM-DD.{ext} format
- **Permissions**: ensure n8n has write access to the target directory

### 4. Report Formatting

- **CSV Headers**: customize column names in the Format Report node
- **Text Layout**: modify the report template for your organization
- **Data Fields**: add/remove transaction fields as needed

## 🚀 How to use

For Administrators:
1. Import this workflow template
2. Configure Elasticsearch credentials
3. Activate the workflow
4. Share the webhook URL with data analysts

For Data Analysts:
1. Access the search interface via the webhook URL
2. Set parameters: minimum amount, time range, entity filter
3. Choose a format: text report or CSV export
4. Submit the form to generate an instant data report
5. Review the results in the generated file

Sample Use Cases:
- **Data analysis**: search for transactions > $10,000 in the last 24 hours
- **Entity investigation**: filter all activity for a specific customer ID
- **Pattern analysis**: quick analysis of transaction activity patterns
- **Business reporting**: generate CSV exports for business intelligence
- **Dataset testing**: perfect for testing with the transaction dataset

## 📊 Sample Output

Text Report Format:

```
DATA ANALYSIS REPORT
Search Criteria:
  Minimum Amount: $10000
  Time Range: Last 24 Hours
  Customer: All
Results: 3 transactions found

TRANSACTIONS:
Transaction ID: TXN_123456789
Customer: CUST_000001
Amount: $15000
Merchant: grocery_net
Time: 2025-08-10T15:30:00Z
```

CSV Export Format:

```
Transaction_ID,Customer_ID,Amount,Merchant_Category,Timestamp
"TXN_123456789","CUST_000001",15000,"grocery_net","2025-08-10T15:30:00Z"
```

## 🔧 Customization ideas

Enhanced Analytics Features:
- Add data validation and quality checks
- Implement statistical analysis (averages, trends, patterns)
- Include data visualization charts and graphs
- Generate summary metrics and KPIs

Advanced Search Capabilities:
- Multi-field search with complex boolean logic
- Fuzzy search and text matching algorithms
- Date range filtering with custom periods
- Aggregation queries for data grouping

Integration Options:
- **Email notifications**: alert teams of significant data findings
- **Slack integration**: post analytics results to team channels
- **Dashboard updates**: push metrics to business intelligence systems
- **API endpoints**: expose search functionality as a REST API

Report Enhancements:
- **PDF generation**: create formatted PDF analytics reports
- **Data visualization**: add charts, graphs, and trending analysis
- **Executive summaries**: include key metrics and business insights
- **Export formats**: support for Excel, JSON, and other data formats

## 🏷️ Tags

elasticsearch, data-search, reporting, analytics, automation, business-intelligence, data-processing, csv-export

## 📈 Use cases

- **Business Intelligence**: organizations analyzing transaction patterns and trends
- **E-commerce Analytics**: detecting payment patterns and analyzing customer behavior
- **Data Science**: real-time data exploration and pattern recognition systems
- **Operations Teams**: automated reporting and data monitoring workflows
- **Research & Development**: testing search algorithms and data processing techniques
- **Training & Education**: learning Elasticsearch integration with realistic datasets
- **Financial Technology**: transaction data analysis and business reporting systems

## ⚠️ Important notes

Security Considerations:
- Never expose Elasticsearch credentials in logs or form data
- Implement proper access controls for the webhook URL
- Consider encryption for sensitive data processing
- Regularly audit generated reports and access logs

Performance Tips:
- Index optimization improves search response times
- Consider pagination for large result sets
- Monitor Elasticsearch cluster performance under load
- Archive old reports to manage disk usage

Data Management:
- Ensure data retention policies align with business requirements
- Implement audit trails for all search operations
- Consider data privacy requirements when processing datasets
- Document all configuration changes for maintenance

This template provides a production-ready data search and reporting system that can be easily customized for various data analysis needs. The modular design allows for incremental enhancements while maintaining core search and reporting functionality.
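The query assembled by the "Build Search Query" node can be sketched like this. The field names match the index structure shown above; the function signature and the filter-context layout are illustrative, not the node's exact code.

```javascript
// Builds the Elasticsearch query body from the form parameters,
// using the index fields documented above (amount, timestamp, customer_id).
// Filters run in filter context, so they are cached and not scored.
function buildSearchQuery({ minAmount, hoursBack, customerId, size = 100 }) {
  const filters = [
    { range: { amount: { gte: minAmount } } },
    { range: { timestamp: { gte: `now-${hoursBack}h` } } },
  ];
  if (customerId) filters.push({ term: { customer_id: customerId } });
  return {
    size,
    sort: [{ amount: 'desc' }],
    query: { bool: { filter: filters } },
  };
}
```

The body is sent as JSON to `/<index>/_search` by the "Search Elasticsearch" HTTP node, with `size` corresponding to the result-limit setting described above.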
by spencer owen
## How it works

- Uses the rentcast.io API to get the approximate value of real estate.
- Updates the asset in YNAB.

## Setup

1. Get a Rentcast.io API key.
2. Get a YNAB API key.
3. Get your YNAB Budget ID and Account ID. Navigate to your budget in the browser, then extract the IDs from the URL `https://app.ynab.com/XXXX/accounts/YYYY` (XXXX = Budget ID, YYYY = Account ID).
4. If you don't already have an account to track your property, create a new unbudgeted tracking asset.
5. Set the variables in the 'Set Fields' node (or set up a subworkflow if you have multiple properties).

| Variable | Explanation | Example |
| --- | --- | --- |
| rentcast_api | API key for Rentcast | |
| ynab_api | API key for YNAB | |
| address | Exact address. It's recommended to look it up in Rentcast first, since they use non-standard values like 'srt', 'ave', etc. | 1600 Pennsylvania Ave NW, Washington, DC 20500 |
| propertyType | One of 'Single Family', 'Condo', 'Apartment'; see the API docs for all options | Single Family |
| bedrooms | Number of bedrooms (whole number) | 3 |
| bathrooms | Number of bathrooms; while fractions (2.5) are probably supported, they haven't been tested | 2 |
| squareFootage | Total square feet | 1500 |
| ynab_budget | Budget ID (derived from the URL) | XXXX |
| ynab_account | Account ID (derived from the URL) | YYYY |
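As a rough illustration of the valuation call, here is a sketch that assembles a Rentcast request from the 'Set Fields' values. The endpoint path and the `X-Api-Key` header name are assumptions; verify them against Rentcast's API documentation before use.

```javascript
// Sketch of the Rentcast valuation request built from the Set Fields
// variables. Endpoint path and header name are assumptions.
function buildRentcastRequest(fields) {
  const params = new URLSearchParams({
    address: fields.address,
    propertyType: fields.propertyType,
    bedrooms: String(fields.bedrooms),
    bathrooms: String(fields.bathrooms),
    squareFootage: String(fields.squareFootage),
  });
  return {
    url: `https://api.rentcast.io/v1/avm/value?${params}`,
    headers: { 'X-Api-Key': fields.rentcast_api },
  };
}
```

The returned value is then written to the YNAB tracking account identified by `ynab_budget` and `ynab_account`.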
by higashiyama
# Personal Daily Morning Briefing Automation

## Who's it for

Busy professionals who want a quick daily update combining their calendar, weather, and top news.

## How it works

Every morning at 7 AM, this workflow gathers:
- Today's Google Calendar events
- Current weather for Tokyo
- Top 3 news headlines (from the Google News RSS feed)

Then it formats everything into a single Slack message.

## How to set up

1. Connect your Google Calendar and Slack accounts in the Credentials section.
2. Update `rssUrl` or `weatherApiUrl` if you want different sources.
3. Set your Slack channel in the "Post to Slack" node.

## Requirements

- Google Calendar and Slack accounts
- RSS feed and weather API (no authentication required)

## How to customize

You can modify:
- The trigger time (in the Schedule Trigger node)
- The city for the weather
- The RSS feed source
- The message format in the "Format Briefing Message" node
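The "Format Briefing Message" step can be sketched as follows; the input shapes are illustrative, not the exact node output, but the merge-and-format pattern is the same.

```javascript
// Merges calendar events, weather, and headlines into one
// Slack-formatted text block. Input shapes are illustrative.
function formatBriefing({ events, weather, headlines }) {
  const lines = ['*Good morning! Here is your daily briefing*', ''];
  lines.push("*Today's events:*");
  if (events.length) {
    lines.push(...events.map((e) => `• ${e.start}: ${e.title}`));
  } else {
    lines.push('• No events today');
  }
  lines.push('', `*Weather:* ${weather}`);
  lines.push('', '*Top news:*');
  lines.push(...headlines.slice(0, 3).map((h) => `• ${h}`));
  return lines.join('\n');
}
```

The resulting string is passed straight to the "Post to Slack" node as the message text.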
by Gabriel Santos
## Who's it for

Teams and project managers who want to turn meeting transcripts into actionable Trello tasks automatically, without worrying about duplicate cards.

## What it does

This workflow receives a transcript file in .txt format and processes it with AI to extract clear, concise tasks. Each task includes a short title, a description, an assignee (if mentioned), and a deadline (if available).

The workflow then checks Trello for duplicates across all lists, comparing both card titles (`name`) and descriptions (`desc`). If a matching card already exists, the workflow returns the existing Trello card ID. If not, it creates a new card in the predefined default list.

Finally, the workflow generates a user-friendly summary: how many tasks were found, how many already existed, how many new cards were created, and how many tasks had no assignee or deadline.

## Requirements

- A Trello account with API credentials configured in n8n (no hardcoded keys)
- An OpenAI (or compatible) LLM account connected in n8n

## How to customize

- Adjust similarity thresholds for title/description matching in the Trello Sub-Agent.
- Modify the summary text to always return in your preferred language.
- Extend the Trello card creation step with labels, members, or due dates.
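A simplified sketch of the duplicate check: the normalization below (lowercase, punctuation stripped) is a stand-in for the Trello Sub-Agent's similarity logic, which the template lets you tune with thresholds as described above.

```javascript
// Normalizes strings for comparison: lowercase, punctuation removed.
function normalize(s) {
  return (s || '').toLowerCase().replace(/[^\w\s]/g, '').trim();
}

// A task counts as a duplicate when its title matches an existing card's
// name, or its description matches an existing card's desc.
function findDuplicate(task, cards) {
  return cards.find(
    (c) =>
      normalize(c.name) === normalize(task.title) ||
      (task.description && normalize(c.desc) === normalize(task.description))
  ) || null;
}
```

When `findDuplicate` returns a card, the workflow reports its ID instead of creating a new card in the default list.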
by Hermilio
Executes scheduled routines and triggers alerts via Telegram.
by dave
A temporary solution that uses Nextcloud's undocumented REST API for backups with file versioning.