by Milan Vasarhelyi - SmoothWork
Video Introduction

Want to automate your inbox or need a custom workflow? 📞 Book a Call | 💬 DM me on LinkedIn

## Workflow Overview

This workflow automatically delivers a daily weather forecast to your email inbox every morning at 7:00 AM. It demonstrates practical API-to-API integration by connecting the Meteosource weather API directly to Gmail using n8n's HTTP Request node, without requiring a pre-built Meteosource integration.

## Why This Workflow is Valuable

Instead of manually checking weather forecasts each morning, this automation fetches current and next-day weather summaries from Meteosource and delivers them directly to your inbox. It's a perfect example of how direct API integration unlocks tools that don't have dedicated n8n nodes, giving you access to the full functionality of any service with an API.

## Key Features

- Scheduled daily execution at 7:00 AM (customizable)
- Fetches weather data using the Meteosource API with secure Query Auth credentials
- Sends a formatted email with today's weather in the subject and tomorrow's forecast in the body
- Easy location and recipient customization through a Config node

## Setup Requirements

**Meteosource API Account**: Sign up for a free account at Meteosource to get your API key (includes 400 free calls per day, more than enough for daily forecasts).

Credentials needed:

- **Meteosource credentials**: Create an HTTP Query Auth credential in n8n with the parameter name `key` and your Meteosource API key as the value
- **Gmail OAuth2**: Connect your Gmail account to n8n for sending emails

## Configuration

Open the Config node to personalize:

- **place_id**: Change from "london" to your desired location (use Meteosource place ID format)
- **send_to_email**: Update with your preferred recipient email address

This workflow demonstrates the power of the HTTP Request node for connecting any API to your automation workflows.
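For reference, here is a minimal sketch of the request the HTTP Request node makes, written as plain JavaScript. The endpoint, the `sections` parameter, and the `daily.data`/`summary` response fields are assumptions based on Meteosource's free-tier documentation; in the workflow itself the `key` parameter is injected by the Query Auth credential rather than hardcoded.

```javascript
// Hypothetical sketch of the Meteosource call (endpoint and response shape assumed)
const params = new URLSearchParams({
  place_id: 'london',          // set via the Config node
  sections: 'daily',           // daily summaries cover today and tomorrow
  units: 'metric',
  key: 'YOUR_METEOSOURCE_KEY', // normally supplied by the Query Auth credential
});

const res = await fetch(`https://www.meteosource.com/api/v1/free/point?${params}`);
const forecast = await res.json();

// Today's summary becomes the email subject; tomorrow's goes in the body.
const [today, tomorrow] = forecast.daily.data;
console.log(today.summary, tomorrow.summary);
```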
by Sk developer
# Bilibili Video Downloader with Google Drive Upload & Email Notification

Automate downloading of Bilibili videos via the Bilibili Video Downloader API (RapidAPI), upload them to Google Drive, and notify users by email — all using n8n workflow automation.

## 🧠 Workflow Overview

This n8n automation allows users to:

1. Submit a Bilibili video URL.
2. Fetch download info from the Bilibili Video Downloader API (RapidAPI).
3. Automatically download and upload the video to Google Drive.
4. Share the file and send an email notification to the user.

## ⚙️ Node-by-Node Explanation

| Node | Function |
| --- | --- |
| On form submission | Triggers when a user submits the Bilibili video URL through the form. |
| Fetch Bilibili Video Info from API | Sends the video URL to the Bilibili Video Downloader API (RapidAPI) to retrieve download info. |
| Check API Response Status | Validates that the API returned a 200 success status before proceeding. |
| Download Video File | Downloads the actual video from the provided resource URL. |
| Upload Video to Google Drive | Uploads the downloaded video file to the user’s connected Google Drive. |
| Google Drive Set Permission | Sets sharing permissions to make the uploaded video publicly accessible. |
| Success Notification Email with Drive Link | Sends the Google Drive link to the user via email upon successful upload. |
| Processing Delay | Adds a delay before executing error handling if something fails. |
| Failure Notification Email | Sends an error notification to the user if download/upload fails. |

## 🧩 How to Configure Google Drive in n8n

1. In n8n, open Credentials → New → Google Drive.
2. Choose OAuth2 authentication.
3. Follow the on-screen instructions to connect your Google account.
4. Use the newly created credential in both Upload Video and Set Permission nodes.
5. Test the connection to ensure access to your Drive.

## 🔑 How to Obtain Your RapidAPI Key

To use the Bilibili Video Downloader API (RapidAPI):

1. Visit the Bilibili Video Downloader page on RapidAPI.
2. Click Subscribe to Test (you can choose free or paid plans).
3. Copy your x-rapidapi-key from the “Endpoints” section.
4. Paste the key into your n8n Fetch Bilibili Video Info from API node header.

Example header:

```json
{
  "x-rapidapi-host": "bilibili-video-downloader.p.rapidapi.com",
  "x-rapidapi-key": "your-rapidapi-key-here"
}
```

## 💡 Use Case

This automation is ideal for:

- Content creators archiving Bilibili videos.
- Researchers collecting media resources.
- Teams that need centralized video storage in Google Drive.
- Automated content management workflows.

## 🚀 Benefits

- ✅ No manual downloads – fully automated.
- ✅ Secure cloud storage via Google Drive.
- ✅ Instant user notification on success or failure.
- ✅ Scalable for multiple users or URLs.
- ✅ Powered by the reliable Bilibili Video Downloader API (RapidAPI).

## 👥 Who Is This For

- **n8n developers** wanting to explore advanced workflow automations.
- **Content managers** handling large volumes of Bilibili content.
- **Digital archivists** storing video data in Google Drive.
- **Educators** sharing Bilibili educational videos securely.

## 🏁 Summary

With this n8n workflow, you can seamlessly integrate the Bilibili Video Downloader API (RapidAPI) into your automation stack — enabling effortless video downloading, Google Drive uploading, and user notifications in one unified system.
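To make the call concrete, here is a hedged JavaScript sketch of what "Fetch Bilibili Video Info from API" does. The host and header names come from the example above; the endpoint path, the `url` query parameter, and the response handling are illustrative assumptions, so check the "Endpoints" tab on RapidAPI for the exact shape.

```javascript
// Illustrative only: the /download path and `url` parameter are assumptions.
const videoUrl = 'https://www.bilibili.com/video/BV_EXAMPLE'; // hypothetical video URL

const res = await fetch(
  `https://bilibili-video-downloader.p.rapidapi.com/download?url=${encodeURIComponent(videoUrl)}`,
  {
    headers: {
      'x-rapidapi-host': 'bilibili-video-downloader.p.rapidapi.com',
      'x-rapidapi-key': 'your-rapidapi-key-here',
    },
  }
);

// "Check API Response Status" only continues when the API returns 200.
if (res.status === 200) {
  const info = await res.json();
  console.log('Download info:', info);
}
```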
by EmailListVerify
This workflow allows you to:

- scrape Google Maps data using SerpAPI
- discover generic email addresses like contact@ using the EmailListVerify API

## Who’s it for

This template is designed to prepare cold outreach for local businesses like restaurants or hotels (you need to target a type of business that is listed on Google Maps). It will generate a list of leads with phone numbers and email addresses.

The email addresses you will get are generic, like contact@. This isn’t a problem if you are targeting small businesses, as the owner will most likely monitor those inboxes. However, if your ideal customer profile has more than 20 employees, I do not recommend using those email addresses for cold outreach.

## Requirements

This template uses:

- Google Sheets to handle input and output data
- SerpAPI to scrape Google Maps (250 searches/month on the free plan)
- EmailListVerify to discover emails (from $0.05 per email)

## Notes

This template is an extension of Lucas Perret's template (adding an email discovery module). If there is some interest in it, I can make a similar template using Apify as an alternative to SerpAPI for Google Maps scraping.
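As a rough illustration of the email-discovery step, the sketch below builds generic candidates from a business domain and verifies them one by one. The EmailListVerify endpoint and its plain-text "ok" response are assumptions based on their public API docs, so verify them against your account's documentation.

```javascript
// Hypothetical domain scraped from a Google Maps result
const domain = 'example-restaurant.com';
const candidates = ['contact', 'info', 'hello'].map((p) => `${p}@${domain}`);

for (const email of candidates) {
  // Endpoint and response format are assumptions; check the EmailListVerify docs.
  const res = await fetch(
    `https://apps.emaillistverify.com/api/verifyEmail?secret=YOUR_API_KEY&email=${encodeURIComponent(email)}`
  );
  const result = await res.text(); // e.g. "ok" for a deliverable address (assumed)
  if (result.trim() === 'ok') {
    console.log(`Deliverable: ${email}`);
    break; // keep the first verified generic address for the lead list
  }
}
```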
by Vigh Sandor
# SETUP INSTRUCTIONS

## 1. Configure Kubeconfig

- Open the "Kubeconfig Setup" node
- Paste your entire kubeconfig file content into the `kubeconfigContent` variable
- Set your target namespace in the `namespace` variable (default: 'production')

Example kubeconfig format:

```yaml
apiVersion: v1
kind: Config
clusters:
  - cluster:
      certificate-authority-data: LS0tLS1CRUd...
      server: https://your-cluster.example.com:6443
    name: your-cluster
contexts:
  - context:
      cluster: your-cluster
      user: your-user
    name: your-context
current-context: your-context
users:
  - name: your-user
    user:
      token: eyJhbGciOiJSUzI1...
```

## 2. Telegram Configuration

- Create a Telegram bot via @BotFather
- Get your bot token and add it as a credential in n8n (Telegram API)
- Find your chat ID:
  - Message your bot
  - Visit: https://api.telegram.org/bot<YourBotToken>/getUpdates
  - Look for "chat":{"id":...}
- Open the "Send Telegram Alert" node
- Replace YOUR_TELEGRAM_CHAT_ID with your actual chat ID
- Select your Telegram API credential

## 3. Schedule Configuration

- Open the "Schedule Trigger" node
- Default: runs every 1 minute
- Adjust the interval based on your monitoring needs:
  - Every 5 minutes: change the field to minutes and set minutesInterval to 5
  - Every hour: change the field to hours and set hoursInterval to 1
  - Cron expression: use a custom cron schedule

## 4. kubectl Installation

- The workflow automatically downloads kubectl (v1.34.0) during execution
- No pre-installation required on the n8n host
- kubectl is downloaded and used temporarily for each execution

# HOW IT WORKS

## Workflow Steps

1. **Schedule Trigger**
   - Runs automatically based on the configured interval
   - Initiates the monitoring cycle
2. **Kubeconfig Setup**
   - Loads the kubeconfig and namespace configuration
   - Passes credentials to the kubectl commands
3. **Parallel Data Collection**
   - Get Pods: fetches all pods from the specified namespace
   - Get Deployments: fetches all deployments from the specified namespace
   - Both commands run in parallel for efficiency
4. **Process & Generate Report**
   - Parses pod and deployment data
   - Groups pods by their owner (Deployment, DaemonSet, StatefulSet, or Node)
   - Calculates readiness statistics for each workload
   - Detects alerts: workloads with 0 ready pods
   - Generates a comprehensive Markdown report including:
     - Deployments with replica counts and pod details
     - Other workloads (DaemonSets, StatefulSets, Static Pods)
     - Standalone pods (if any)
     - Pod-level details: status, node, restart count
5. **Has Alerts?**
   - Checks if any workloads have 0 ready pods
   - Routes to the appropriate action
6. **Send Telegram Alert** (if alerts exist)
   - Sends a formatted alert message to Telegram
   - Includes: namespace information, a list of problematic workloads, and the full status report
7. **Save Report**
   - Saves the Markdown report to a file
   - Filename format: k8s-report-YYYY-MM-DD-HHmmss.md
   - Always executes, regardless of alert status

## Security Features

- **Temporary kubectl**: downloaded and used only during execution
- **Temporary kubeconfig**: written to /tmp/kubeconfig-<random>.yaml
- **Automatic cleanup**: the kubeconfig file is deleted after each kubectl command
- **No persistent credentials**: nothing is stored on disk between executions

## Alert Logic

Alerts are triggered when any workload has zero ready pods:

- Deployments with readyReplicas < 1
- DaemonSets with numberReady < 1
- StatefulSets with readyReplicas < 1
- Static Pods (Node-owned) with no ready instances

## Report Sections

- **Deployments**: all Deployment-managed pods (via ReplicaSets)
- **Other Workloads**: DaemonSets, StatefulSets, and Static Pods (kube-system components)
- **Standalone Pods**: pods without recognized owners (rare)
- **Alerts**: summary of workloads requiring attention

# KEY FEATURES

- **Automatic kubectl management** - no pre-installation needed
- **Multi-workload support** - Deployments, DaemonSets, StatefulSets, Static Pods
- **Smart pod grouping** - uses Kubernetes ownerReferences
- **Conditional alerting** - only notifies when issues are detected
- **Detailed reporting** - pod-level status, node placement, restart counts
- **Secure credential handling** - temporary files, automatic cleanup
- **Markdown format** - easy to read and store

# TROUBLESHOOTING

**Issue: "Cannot read properties of undefined"**

- Ensure both "Get Pods" and "Get Deployments" nodes execute successfully
- Check that kubectl can access your cluster with the provided kubeconfig

**Issue: No alerts when there should be**

- Verify the namespace contains deployments or workloads
- Check that pods are actually not ready (use kubectl get pods -n <namespace>)

**Issue: Telegram message not sent**

- Verify the Telegram API credential is configured correctly
- Confirm the chat ID is correct and the bot has permission to message you
- Check that the bot was started (send /start to the bot)

**Issue: kubectl download fails**

- Check internet connectivity from the n8n host
- Verify access to the dl.k8s.io domain
- Consider pre-installing kubectl on the host and removing the download commands

# CUSTOMIZATION

## Change Alert Threshold

Edit the Process & Generate Report node to change when alerts trigger:

```javascript
// Change from "< 1" to your desired threshold
if (readyReplicas < 2) { // Alert if less than 2 ready pods
  alerts.push({...});
}
```

## Monitor Multiple Namespaces

- Duplicate the workflow for each namespace
- Or modify "Kubeconfig Setup" to loop through multiple namespaces

## Custom Report Format

Edit the Markdown generation in the Process & Generate Report node to customize:

- Section order
- Information displayed
- Formatting style

## Additional Notification Channels

Add nodes after "Has Alerts?" to send notifications via:

- Email (SMTP node)
- Slack (Slack node)
- Discord (Discord node)
- Webhook (HTTP Request node)
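To make the alert logic described above easier to adapt, here is a minimal sketch of the zero-ready check for Deployments, assuming the node receives the raw JSON output of `kubectl get deployments -n <namespace> -o json` (variable names are illustrative, not the node's actual ones):

```javascript
// kubectlOutput: raw stdout from the Get Deployments command (illustrative name)
const deployments = JSON.parse(kubectlOutput).items;

const alerts = deployments
  .filter((d) => (d.status.readyReplicas ?? 0) < 1) // readyReplicas is omitted when it is 0
  .map((d) => ({
    kind: 'Deployment',
    name: d.metadata.name,
    ready: d.status.readyReplicas ?? 0,
    desired: d.spec.replicas,
  }));

// DaemonSets (status.numberReady) and StatefulSets (status.readyReplicas)
// are checked the same way before the Markdown report is assembled.
```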
by Luis Acosta
# 🎧 Upload Podcast Episodes to Spotify via RSS & Google Drive

Skip the manual steps and publish your podcast episodes to Spotify in minutes — fully automated.

This workflow takes your finished audio, uploads it to Google Drive, updates your podcast’s RSS feed in GitHub, and pushes it live on Spotify and other platforms linked to that feed. No more copy-pasting links or manually editing XML files — everything happens in one click. It’s perfect for podcasters who already have an RSS feed connected to Spotify for Podcasters and want a repeatable, hands-free publishing process.

## 💡 What this workflow does

- ✅ Reads your finished MP3 from a local path or a previous automation step
- ☁️ Uploads the audio to Google Drive and creates a public share link
- 📄 Fetches your existing rss.xml file from GitHub
- ➕ Appends a new `<item>` entry with title, description, publication date, and MP3 link
- 🔄 Commits the updated RSS file back to GitHub, triggering updates on Spotify
- 🎯 Ensures your episode appears on Spotify once your RSS is already linked in Spotify for Podcasters

## 🛠 What you’ll need

- A Google Drive account with OAuth credentials and a target folder ID
- A GitHub repository containing your rss.xml file
- An RSS feed connected to Spotify for Podcasters (set this up once before running the workflow)
- An MP3 file that meets Spotify’s audio format requirements

## ✨ Use cases

- Automate weekly or daily podcast publishing to Spotify
- Push your AI-generated podcast episodes live without manual editing
- Maintain a single source of truth for your feed in GitHub while streaming across multiple platforms

## 📬 Contact & Feedback

Need help customizing this? Have ideas for improvement?

📩 Luis.acosta@news2podcast.com
Or DM me on Twitter @guanchehacker

If you’re building something more advanced with audio + AI, like fully automated podcast creation and publishing, let’s talk — I might have the missing piece you need.
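For orientation, here is what the appended `<item>` entry typically looks like in RSS 2.0; the `enclosure` element is what podcast platforms read for the MP3. All values are illustrative, and the Google Drive direct-download URL format is an assumption, so match it to the share link your Drive node actually produces.

```xml
<!-- Illustrative <item>; values and the Drive URL format are placeholders -->
<item>
  <title>Episode 42: Automating Your Podcast</title>
  <description>How this workflow publishes episodes hands-free.</description>
  <pubDate>Mon, 13 Jan 2025 08:00:00 GMT</pubDate>
  <enclosure url="https://drive.google.com/uc?id=FILE_ID&amp;export=download"
             type="audio/mpeg" length="12345678"/>
  <guid isPermaLink="false">episode-42</guid>
</item>
```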
by DataMinex
# Dynamic Search Interface with Elasticsearch and Automated Report Generation

## 🎯 What this workflow does

This template creates a comprehensive data search and reporting system that allows users to query large datasets through an intuitive web form interface. The system performs real-time searches against Elasticsearch, processes results, and automatically generates structured reports in multiple formats for data analysis and business intelligence.

Key Features:

- 🔍 Interactive web form for dynamic data querying
- ⚡ Real-time Elasticsearch data retrieval with complex filtering
- 📊 Auto-generated reports (Text & CSV formats) with custom formatting
- 💾 Automatic file storage system for data persistence
- 🎯 Configurable search parameters (amounts, time ranges, entity filters)
- 🔧 Scalable architecture for handling large datasets

## 🛠️ Setup requirements

### Prerequisites

- **Elasticsearch cluster** running on https://localhost:9220
- **Transaction dataset** indexed in the bank_transactions index
- **Sample dataset**: download from the Bank Transaction Dataset
- **File system access** to the /tmp/ directory for report storage
- **HTTP Basic Authentication** credentials for Elasticsearch

### Required Elasticsearch Index Structure

This template uses the Bank Transaction Dataset from GitHub:
https://github.com/dataminexcode/n8n-workflow/blob/main/Dynamic%20Search%20Interface%20with%20Elasticsearch%20and%20Automated%20Report%20Generation/data

You can use this Python script for importing the CSV file into Elasticsearch: Python script for importing data

Your bank_transactions index should contain documents with these fields:

```json
{
  "transaction_id": "TXN_123456789",
  "customer_id": "CUST_000001",
  "amount": 5000,
  "merchant_category": "grocery_net",
  "timestamp": "2025-08-10T15:30:00Z"
}
```

Dataset Info: This dataset contains realistic financial transaction data perfect for testing search algorithms and report generation, with over 1 million transaction records including various transaction patterns and data types.

### Credentials Setup

1. Create HTTP Basic Auth credentials in n8n
2. Configure with your Elasticsearch username/password
3. Assign to the "Search Elasticsearch" node

## ⚙️ Configuration

### 1. Form Customization

- **Webhook Path**: update the webhook ID if needed
- **Form Fields**: modify amounts, time ranges, or add new filters
- **Validation**: adjust required fields based on your needs

### 2. Elasticsearch Configuration

- **URL**: change localhost:9220 to your ES cluster endpoint
- **Index Name**: update bank_transactions to your index name
- **Query Logic**: modify the search criteria in the "Build Search Query" node (see the example query after section 3)
- **Result Limit**: adjust the size: 100 parameter for more/fewer results

### 3. File Storage

- **Directory**: change /tmp/ to your preferred storage location
- **Filename Pattern**: modify the fraud_report_YYYY-MM-DD.{ext} format
- **Permissions**: ensure n8n has write access to the target directory
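As a reference for the Query Logic setting above, here is a plausible shape for the body the "Build Search Query" node sends to Elasticsearch. The field names match the index structure shown earlier; the exact clauses in the template may differ, so treat this as a sketch.

```json
{
  "size": 100,
  "query": {
    "bool": {
      "filter": [
        { "range": { "amount": { "gte": 10000 } } },
        { "range": { "timestamp": { "gte": "now-24h" } } },
        { "term": { "customer_id": "CUST_000001" } }
      ]
    }
  },
  "sort": [{ "amount": "desc" }]
}
```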
### 4. Report Formatting

- **CSV Headers**: customize column names in the Format Report node
- **Text Layout**: modify the report template for your organization
- **Data Fields**: add/remove transaction fields as needed

## 🚀 How to use

### For Administrators:

1. Import this workflow template
2. Configure Elasticsearch credentials
3. Activate the workflow
4. Share the webhook URL with data analysts

### For Data Analysts:

1. Access the search interface via the webhook URL
2. Set parameters: minimum amount, time range, entity filter
3. Choose a format: text report or CSV export
4. Submit the form to generate an instant data report
5. Review the results in the generated file

### Sample Use Cases:

- **Data analysis**: search for transactions > $10,000 in the last 24 hours
- **Entity investigation**: filter all activity for a specific customer ID
- **Pattern analysis**: quick analysis of transaction activity patterns
- **Business reporting**: generate CSV exports for business intelligence
- **Dataset testing**: perfect for testing with the transaction dataset

## 📊 Sample Output

Text Report Format:

```
DATA ANALYSIS REPORT

Search Criteria:
  Minimum Amount: $10000
  Time Range: Last 24 Hours
  Customer: All

Results: 3 transactions found

TRANSACTIONS:
Transaction ID: TXN_123456789
Customer: CUST_000001
Amount: $15000
Merchant: grocery_net
Time: 2025-08-10T15:30:00Z
```

CSV Export Format:

```
Transaction_ID,Customer_ID,Amount,Merchant_Category,Timestamp
"TXN_123456789","CUST_000001",15000,"grocery_net","2025-08-10T15:30:00Z"
```

## 🔧 Customization ideas

### Enhanced Analytics Features:

- Add data validation and quality checks
- Implement statistical analysis (averages, trends, patterns)
- Include data visualization charts and graphs
- Generate summary metrics and KPIs

### Advanced Search Capabilities:

- Multi-field search with complex boolean logic
- Fuzzy search and text matching algorithms
- Date range filtering with custom periods
- Aggregation queries for data grouping

### Integration Options:

- **Email notifications**: alert teams of significant data findings
- **Slack integration**: post analytics results to team channels
- **Dashboard updates**: push metrics to business intelligence systems
- **API endpoints**: expose search functionality as a REST API

### Report Enhancements:

- **PDF generation**: create formatted PDF analytics reports
- **Data visualization**: add charts, graphs, and trending analysis
- **Executive summaries**: include key metrics and business insights
- **Export formats**: support for Excel, JSON, and other data formats

## 🏷️ Tags

elasticsearch, data-search, reporting, analytics, automation, business-intelligence, data-processing, csv-export

## 📈 Use cases

- **Business Intelligence**: organizations analyzing transaction patterns and trends
- **E-commerce Analytics**: detecting payment patterns and analyzing customer behavior
- **Data Science**: real-time data exploration and pattern recognition systems
- **Operations Teams**: automated reporting and data monitoring workflows
- **Research & Development**: testing search algorithms and data processing techniques
- **Training & Education**: learning Elasticsearch integration with realistic datasets
- **Financial Technology**: transaction data analysis and business reporting systems

## ⚠️ Important notes

### Security Considerations:

- Never expose Elasticsearch credentials in logs or form data
- Implement proper access controls for the webhook URL
- Consider encryption for sensitive data processing
- Regularly audit generated reports and access logs

### Performance Tips:

- Index optimization improves search response times
- Consider pagination for large result sets
- Monitor Elasticsearch cluster performance under load
- Archive old reports to manage disk usage
### Data Management:

- Ensure data retention policies align with business requirements
- Implement audit trails for all search operations
- Consider data privacy requirements when processing datasets
- Document all configuration changes for maintenance

This template provides a production-ready data search and reporting system that can be easily customized for various data analysis needs. The modular design allows for incremental enhancements while maintaining core search and reporting functionality.
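For completeness, here is a minimal sketch of the CSV branch of the Format Report node, under the assumption that `hits` holds the Elasticsearch results array. It reproduces the CSV layout from the sample output above.

```javascript
// hits: illustrative name for the Elasticsearch hits array (response.hits.hits)
const header = 'Transaction_ID,Customer_ID,Amount,Merchant_Category,Timestamp';

const rows = hits.map((h) => {
  const s = h._source;
  return `"${s.transaction_id}","${s.customer_id}",${s.amount},"${s.merchant_category}","${s.timestamp}"`;
});

const csv = [header, ...rows].join('\n');
// The workflow then writes this string to /tmp/ as fraud_report_YYYY-MM-DD.csv.
```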
by spencer owen
## How it works

- Uses the rentcast.io API to get the approximate value of real estate.
- Updates the asset in YNAB.

## Setup

1. Get a Rentcast.io API key
2. Get a YNAB API key
3. Get your YNAB Budget ID and Account ID
   - This can be done by navigating to your budget in the browser, then extracting the IDs from the URL https://app.ynab.com/XXXX/accounts/YYYY
   - xxxx = Budget ID
   - yyyy = Account ID
4. If you don't already have an account to track your property, create a new Unbudgeted tracking asset.
5. Set the variables in the 'Set Fields' node (or set up a subworkflow if you have multiple properties).

| Variable | Explanation | Example |
| --- | --- | --- |
| rentcast_api | API key for Rentcast | |
| ynab_api | API key for YNAB | |
| address | Exact address. It's recommended to look it up in Rentcast first, since they use non-standard values like 'srt', 'ave', etc. | 1600 Pennsylvania Ave NW, Washington, DC 20500 |
| propertyType | One of 'Single Family', 'Condo', 'Apartment'; see the API docs for all options | Single Family |
| bedrooms | Number of bedrooms (whole number) | 3 |
| bathrooms | Number of bathrooms; while fractions (2.5) are probably supported, they haven't been tested | 2 |
| squareFootage | Total square feet | 1500 |
| ynab_budget | Budget ID (derived from the URL) | xxxx |
| ynab_account | Account ID (derived from the URL) | yyyy |
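Here is a hedged sketch of the Rentcast lookup using those variables. The /v1/avm/value endpoint, the X-Api-Key header, and the price response field are assumptions based on Rentcast's public API docs, so double-check them there.

```javascript
const params = new URLSearchParams({
  address: '1600 Pennsylvania Ave NW, Washington, DC 20500',
  propertyType: 'Single Family',
  bedrooms: '3',
  bathrooms: '2',
  squareFootage: '1500',
});

// Endpoint path, header name, and response field are assumptions; verify in the docs.
const res = await fetch(`https://api.rentcast.io/v1/avm/value?${params}`, {
  headers: { 'X-Api-Key': 'YOUR_RENTCAST_KEY' },
});
const { price } = await res.json();
// The workflow then updates the YNAB tracking account from this estimate.
console.log(`Estimated value: $${price}`);
```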
by higashiyama
# Personal Daily Morning Briefing Automation

## Who’s it for

Busy professionals who want a quick daily update combining their calendar, weather, and top news.

## How it works

Every morning at 7 AM, this workflow gathers:

- Today’s Google Calendar events
- Current weather for Tokyo
- Top 3 news headlines (from Google News RSS)

Then it formats everything into a single Slack message.

## How to set up

1. Connect your Google Calendar and Slack accounts in the Credentials section.
2. Update rssUrl or weatherApiUrl if you want different sources.
3. Set your Slack channel in the "Post to Slack" node.

## Requirements

- Google Calendar and Slack accounts
- RSS feed and weather API (no authentication required)

## How to customize

You can modify:

- The trigger time (in the Schedule Trigger node)
- The city for the weather
- The RSS feed source
- The message format in the “Format Briefing Message” node
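As a sketch of the "Format Briefing Message" step, the Code-node snippet below combines the three inputs into one Slack-ready string. The `events`, `weather`, and `headlines` variable names are assumptions about what the preceding merge step provides.

```javascript
// Illustrative input names; adapt to the actual items arriving at this node.
const lines = [
  `🌤 Weather in Tokyo: ${weather.description}, ${weather.temp}°C`,
  '',
  '📅 Today:',
  ...events.map((e) => `• ${e.start} ${e.summary}`),
  '',
  '📰 Top headlines:',
  ...headlines.slice(0, 3).map((h, i) => `${i + 1}. ${h.title}`),
];

// n8n Code nodes return an array of items; the Slack node reads `text`.
return [{ json: { text: lines.join('\n') } }];
```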
by Gabriel Santos
## Who’s it for

Teams and project managers who want to turn meeting transcripts into actionable Trello tasks automatically, without worrying about duplicate cards.

## What it does

This workflow receives a transcript file in .txt format and processes it with AI to extract clear, concise tasks. Each task includes a short title, a description, an assignee (if mentioned), and a deadline (if available).

The workflow then checks Trello for duplicates across all lists, comparing both card titles (name) and descriptions (desc). If a matching card already exists, the workflow returns the existing Trello card ID. If not, it creates a new card in the predefined default list.

Finally, the workflow generates a user-friendly summary: how many tasks were found, how many already existed, how many new cards were created, and how many tasks had no assignee or deadline.

## Requirements

- A Trello account with API credentials configured in n8n (no hardcoded keys).
- An OpenAI (or compatible) LLM account connected in n8n.

## How to customize

- Adjust the similarity thresholds for title/description matching in the Trello Sub-Agent.
- Modify the summary text to always return in your preferred language.
- Extend the Trello card creation step with labels, members, or due dates.
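For a feel of the duplicate check, here is a minimal sketch under the assumption that `cards` holds every card fetched across lists and `task` is one AI-extracted task; the actual Trello Sub-Agent may use a softer similarity measure than exact normalized matching.

```javascript
// Normalize whitespace and case before comparing (illustrative approach)
const normalize = (s) => (s || '').toLowerCase().replace(/\s+/g, ' ').trim();

const duplicate = cards.find(
  (c) =>
    normalize(c.name) === normalize(task.title) ||
    (c.desc && normalize(c.desc).includes(normalize(task.description)))
);

if (duplicate) {
  // Reuse the existing card's ID instead of creating a new one
  return { cardId: duplicate.id, created: false };
}
// Otherwise the workflow creates a card in the default list via the Trello node.
```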
by Hermilio
Executes scheduled routines and triggers alerts via Telegram.
by dave
Temporary solution using the undocumented REST API for backups with file versioning (Nextcloud)
by Lorena
This workflow gets data from an API and exports it into Google Sheets and a CSV file.