by David Ashby
🛠️ Demio Tool MCP Server

Complete MCP server exposing all Demio Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works

• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Demio Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Demio Tool node with full error handling

📋 Available Operations (4 total)

Every possible Demio Tool operation is included:

📅 Event (3 operations)
• Get an event
• Get many events
• Register an event

🔧 Report (1 operation)
• Get a report

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native Demio Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Make direct HTTP calls to MCP endpoints

✨ Benefits

• Complete Coverage: Every Demio Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
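For illustration, this is the general shape of a $fromAI() placeholder as used inside a tool node's parameter field, letting the connected agent supply the value at call time. The parameter names below are hypothetical examples, not taken from this workflow:

```
{{ $fromAI('eventId', 'The ID of the Demio event to fetch', 'string') }}
{{ $fromAI('searchQuery', 'Optional text to filter events by', 'string') }}
```

The arguments are a key, a description the agent reads to decide what to pass, and the expected type.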
by David Ashby
Complete MCP server exposing all Oura Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works

• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Oura Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Oura Tool node with full error handling

📋 Available Operations (4 total)

Every possible Oura Tool operation is included:

🔧 Profile (1 operation)
• Get a profile

🔧 Summary (3 operations)
• Get activity summary
• Get readiness summary
• Get sleep summary

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native Oura Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Make direct HTTP calls to MCP endpoints

✨ Benefits

• Complete Coverage: Every Oura Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
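At the protocol level, MCP clients discover the pre-built tools with a JSON-RPC 2.0 tools/list request. A conceptual sketch only - the actual transport (SSE or streamable HTTP) and session handshake are handled by the MCP client library, and the URL below is a placeholder:

```js
// Conceptual MCP discovery call; a real client library manages the
// session and transport. The URL is a placeholder, not from this template.
const response = await fetch('https://your-n8n-instance.com/mcp/your-webhook-path', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    jsonrpc: '2.0',
    id: 1,
    method: 'tools/list', // standard MCP method for listing available tools
  }),
});
console.log(await response.json()); // expected: the four Oura tools with their schemas
```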
by Adam Bertram
LintGuardian: Automated PR Linting with n8n & AI

What It Does

LintGuardian is an n8n workflow template that automates code quality enforcement for GitHub repositories. When a pull request is created, the workflow automatically analyzes the changed files, identifies linting issues, fixes them, and submits a new PR with corrections. This eliminates manual code style reviews, reduces back-and-forth comments, and lets your team focus on functionality rather than formatting.

How It Works

The workflow is triggered by a GitHub webhook when a PR is created. It fetches all changed files from the PR using the GitHub API, processes them through an AI-powered linting service (Google Gemini), and automatically generates fixes. The AI agent then creates a new branch with the corrected files and submits a "linting fixes" PR against the original branch. Developers can review and merge these fixes with a single click, keeping code consistently formatted with minimal effort.

Prerequisites

To use this template, you'll need:
- n8n instance: Either self-hosted or on n8n.cloud
- GitHub repository: Where you want to enforce linting standards
- GitHub Personal Access Token: With permissions for repo access (repo, workflow, admin:repo_hook)
- Google AI API Key: For the Gemini language model that powers the linting analysis
- GitHub webhook: Configured to send PR creation events to your n8n instance

Setup Instructions

1. Import the template into your n8n instance.
2. Configure credentials:
   - Add your GitHub Personal Access Token under Credentials → GitHub API
   - Add your Google AI API key under Credentials → Google Gemini API
3. Update repository information: Locate the "Set Common Fields" code node at the beginning of the workflow and change the gitHubRepoName and gitHubOrgName values to match your repository:

```javascript
const commonFields = {
  'gitHubRepoName': 'your-repo-name',
  'gitHubOrgName': 'your-org-name'
}
```

4. Configure the webhook: Create a file named .github/workflows/lint-guardian.yml in your repository, replacing the URL in the "Trigger n8n Workflow" step with your own webhook URL:

```yaml
name: Lint Guardian
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  trigger-linting:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger n8n Workflow
        uses: fjogeleit/http-request-action@v1
        with:
          url: 'https://your-n8n-instance.com/webhook/1da5a6e1-9453-4a65-bbac-a1fed633f6ad'
          method: 'POST'
          contentType: 'application/json'
          data: |
            {
              "pull_request_number": ${{ github.event.pull_request.number }},
              "repository": "${{ github.repository }}",
              "branch": "${{ github.event.pull_request.head.ref }}",
              "base_branch": "${{ github.event.pull_request.base.ref }}"
            }
          preventFailureOnNoResponse: true
```

5. Customize linting rules (optional):
   - Modify the AI Agent's system message to specify your team's linting preferences
   - Adjust file handling if you have specific file types to focus on or ignore (see the sketch after the Security Considerations section)

Security Considerations

When creating your GitHub Personal Access Token, remember to:
- Choose the minimal permissions needed (repo, workflow, admin:repo_hook)
- Set an appropriate expiration date
- Treat your token like a password and store it securely
- Consider using GitHub's fine-grained personal access tokens for more limited scope

As GitHub documentation notes: "Personal access tokens are like passwords, and they share the same inherent security risks."
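For the optional file-handling adjustment above, a small Code node placed after the file-fetch step can drop files you don't want linted. A minimal sketch, assuming each incoming item carries a filename field - the field name and extension list are illustrative, not taken from the template:

```javascript
// n8n Code node (Run Once for All Items): keep only lintable source files.
// The 'filename' field and extension list are assumptions; adjust them to
// match the items your "get changed files" GitHub step actually returns.
const LINTABLE = ['.js', '.ts', '.jsx', '.tsx', '.py'];

return $input.all().filter((item) => {
  const name = item.json.filename ?? '';
  return LINTABLE.some((ext) => name.endsWith(ext));
});
```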
Extending the Template

You can enhance this workflow by:
- Adding Slack notifications when linting fixes are submitted
- Creating custom linting rules specific to your team's needs
- Expanding it to handle different types of code quality checks
- Adding approval steps for more controlled environments

This template provides an excellent starting point that you can customize to fit your team's exact workflow and code style requirements.
by InfraNodus
This template uploads the files in your Google Drive to an InfraNodus knowledge graph. The InfraNodus graph will then reveal the main topics and ideas in your collection of documents and show the content gaps in them. You can also use the built-in AI to converse with the documents, and you can access the InfraNodus graphs via the GraphRAG API to reuse them in your other n8n workflows for high-quality content retrieval and knowledge base optimization.

The template showcases the use of multiple n8n nodes and processes:
- Extracting documents from a Google Drive folder
- Text extraction
- Optional: high-quality PDF conversion using ConvertAPI
- InfraNodus knowledge graph generation

Note: If you want to **sync** your Google Drive to an InfraNodus graph, check out our other workflow.

How it works

Here's a description of this workflow step by step:
1. Find all the files in a specific Google Drive folder.
2. For each file found, iterate through the workflow and identify the type of the file (TXT, PDF, Markdown).
3. For TXT and Markdown files, extract the text data.
4. For PDF files, use a PDF-to-text converter to extract the text data (optional: use ConvertAPI for better-quality PDF conversion).
5. Forward everything to the InfraNodus graphAndStatements API endpoint with the name of the new graph, the text field containing the text data, the text settings, and doNotSave=false to create a new graph.
6. Move on to the next file.

How to use

You need an InfraNodus GraphRAG API account and key to use this workflow.
1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Use that API key to set up authorization for the InfraNodus tool in the workflow.
4. If you want to upload the files to an existing graph, copy its name from InfraNodus. Otherwise, you can specify any name you want.

Requirements
- An InfraNodus account and API key
- A Google Drive account and authorization (you will need to set it up via Google Cloud using the n8n instructions provided in the Google Drive node)

Customizing this workflow

You can use Dropbox instead of Google Drive. You can also modify this workflow slightly to make it sync with a Google Drive folder when new files appear in it. Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20267019838108-Upload-Sync-Your-Google-Drive-Folder-with-InfraNodus-using-n8n
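Step 5 boils down to a single authorized POST per file. A minimal sketch of that call, assuming a v1 API path and field names - check the InfraNodus API docs for the authoritative schema:

```javascript
// Hedged sketch of the InfraNodus upload call made by the HTTP Request node.
// The exact path and field names should be verified against the API docs.
const extractedText = '...text extracted from the TXT/PDF/Markdown file...';

const response = await fetch('https://infranodus.com/api/v1/graphAndStatements', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.INFRANODUS_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    name: 'my-drive-graph', // new or existing graph name
    text: extractedText,    // the document's text content
    doNotSave: false,       // false = persist the statements into the graph
  }),
});
console.log(await response.json());
```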
by Ranjan Dailata
Who is this for?

- Data analysts and researchers who need to gather and summarize information from Bing search results efficiently.
- Developers and engineers looking to integrate Bing search data into applications or services.
- Digital marketers and SEO specialists interested in monitoring search engine results for specific keywords or topics.

What problem is this workflow solving?

Manually extracting and summarizing information from search engine results can be time-consuming and error-prone. This workflow automates the process of querying Bing's Copilot Search, extracting structured data from the results, summarizing the information, and sending a notification via webhook. It leverages Microsoft Copilot to retrieve search results and integrates AI-powered tools for data extraction and summarization.

What this workflow does
1. Performs Bing searches using Bright Data's Bing Search API.
2. Extracts structured data from the search results.
3. Summarizes the extracted information using AI tools.
4. Sends the summarized data to a specified endpoint via webhook.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth credential (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access it through Vertex AI or a proxy).
5. Update the "Perform a Bing Copilot Request" node with the prompt you wish to search for.
6. Update the "Structured Data Webhook Notifier" node with the webhook endpoint of your choice.
7. Update the "Summary Webhook Notifier" node with the webhook endpoint of your choice.

How to customize this workflow to your needs
- Modify search queries: Adjust the search terms to target different topics or keywords.
- Change data extraction logic: Customize the extraction process to capture specific data points from the search results.
- Alter summarization techniques: Integrate different AI models or adjust parameters to change how summaries are generated.
- Update webhook endpoints: Direct the summarized data to different endpoints as required.
- Schedule workflow runs: Set up automated triggers to run the workflow at desired intervals.
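Under the hood, the search step is an authenticated HTTP request through Bright Data's Web Unlocker. A hedged sketch of what such a call looks like - the /request endpoint, zone name, and format field are assumptions to verify against your Bright Data dashboard:

```javascript
// Hedged sketch: fetch a Bing search results page through Bright Data's
// Web Unlocker. Zone name and endpoint shape may differ in your account.
const response = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.BRIGHTDATA_TOKEN}`, // Web Unlocker token
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'web_unlocker1',                               // your zone name
    url: 'https://www.bing.com/search?q=latest+AI+news', // query to run
    format: 'raw',                                       // return raw HTML
  }),
});
const html = await response.text();
```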
by Greg Evseev
This workflow template provides a robust solution for efficiently sending multiple prompts to Anthropic's Claude models in a single batch request and retrieving the results. It leverages the Anthropic Batch API endpoint (/v1/messages/batches) for optimized processing and outputs each result as a separate item.

Core Functionality & Example Usage Included

This template includes:
- The Core Batch Processing Workflow: designed to be called by another n8n workflow.
- An Example Usage Workflow: a separate branch demonstrating how to prepare data and trigger the core workflow, including examples using simple strings and n8n's Langchain Chat Memory nodes.

Who is this for?

This template is designed for:
- **Developers, data scientists, and researchers** who need to process large volumes of text prompts using Claude models via n8n.
- **Content creators** looking to generate multiple pieces of content (e.g., summaries, Q&As, creative text) based on different inputs simultaneously.
- **n8n users** who want to automate interactions with the Anthropic API beyond single requests, improve efficiency, and integrate batch processing into larger automation sequences.
- Anyone needing to perform bulk text generation or analysis tasks with Claude programmatically.

What problem does this workflow solve?

Sending prompts to language models one by one can be slow and inefficient, especially when dealing with hundreds or thousands of requests. This workflow addresses that by:
- **Batching:** grouping multiple prompts into a single API call to Anthropic's dedicated batch endpoint (/v1/messages/batches).
- **Efficiency:** significantly reducing the time required compared to sequential processing.
- **Scalability:** handling large numbers of prompts (up to API limits) systematically.
- **Automation:** providing a ready-to-use, callable n8n structure for batch interactions with Claude.
- **Structured Output:** parsing the results and outputting each individual prompt's result as a separate n8n item.

Use Cases:
- Bulk content generation (e.g., product descriptions, summaries).
- Large-scale question answering based on different contexts.
- Sentiment analysis or data extraction across multiple text snippets.
- Running the same prompt against many different inputs for research or testing.

What the Core Workflow does (triggered by the 'When Executed by Another Workflow' node)

1. Receive Input: The workflow starts when called by another workflow (e.g., using the 'Execute Workflow' node). It expects input data containing:
   - anthropic-version (string, e.g., "2023-06-01")
   - requests (JSON array, where each object represents a single prompt request conforming to the Anthropic Batch API schema).
2. Submit Batch Job: Sends the formatted requests data via POST to the Anthropic API /v1/messages/batches endpoint to create a new batch job. Requires Anthropic credentials.
3. Wait & Poll: Enters a loop:
   - Checks whether the processing_status of the batch job is ended.
   - If not, it waits for a set interval (10 seconds by default in the 'Batch Status Poll Interval' node).
   - It then checks the batch job status again via GET to /v1/messages/batches/{batch_id}. Requires Anthropic credentials.
   - This loop continues until the status is ended.
4. Retrieve Results: Once the batch job is complete, it fetches the results file by making a GET request to the results_url provided in the batch status response. Requires Anthropic credentials.
5. Parse Results: The results are typically returned in JSON Lines (.jsonl) format.
   The 'Parse response' Code node splits the response text by newlines and parses each line into a separate JSON object, storing them in an array field (e.g., parsed).
6. Split Output: The 'Split Out Parsed Results' node takes the array of parsed results and outputs each result object as an individual item from the workflow.

Prerequisites
- An active n8n instance (Cloud or self-hosted).
- An Anthropic API account with access granted to Claude models and the Batch API.
- Your Anthropic API key.
- Basic understanding of n8n concepts (nodes, workflows, credentials, expressions, the 'Execute Workflow' node).
- Familiarity with JSON data structures for providing input prompts and understanding the output.
- Understanding of the Anthropic Batch API request/response structure.
- (For the example usage branch) Familiarity with n8n's Langchain nodes (@n8n/n8n-nodes-langchain) if you plan to adapt that part.

Setup

1. Import Template: Add this template to your n8n instance.
2. Configure Credentials:
   - Navigate to the 'Credentials' section in your n8n instance.
   - Click 'Add Credential', search for 'Anthropic', and select the Anthropic API credential type.
   - Enter your Anthropic API key and save the credential (e.g., name it "Anthropic account").
3. Assign Credentials: Open the workflow and locate the three HTTP Request nodes in the core workflow: Submit batch, Check batch status, and Get results. In each of these nodes, select the Anthropic credential you just configured from the 'Credential for Anthropic API' dropdown.
4. Review Input Format: Understand the required input structure for the 'When Executed by Another Workflow' trigger node. The primary inputs are anthropic-version (string) and requests (array). Refer to the Sticky Notes in the template and the Anthropic Batch API documentation for the exact schema required within the requests array.
5. Activate Workflow: Save and activate the core workflow so it can be called by other workflows.

➡️ Quick Start & Input/Output Examples: Look for the Sticky Notes within the workflow canvas! They provide crucial information, including examples of the required input JSON structure and the expected output format.

How to customize this workflow

- **Input Source:** The core workflow is designed to be called. You build *another* workflow that prepares the anthropic-version and requests array and then uses the 'Execute Workflow' node to trigger this template. The included example branch shows how to prepare this data.
- **Model Selection & Parameters:** Model (claude-3-opus-20240229, etc.), max_tokens, temperature, and other parameters are defined *within each object* inside the requests array you pass to the workflow trigger. You configure these in the workflow calling this template.
- **Polling Interval:** Modify the 'Batch Status Poll Interval' Wait node duration if you need faster or slower status checks (default is 10 seconds). Be mindful of potential rate limits.
- **Parsing Logic:** If Anthropic changes the result format or you have specific needs, modify the JavaScript code within the 'Parse response' Code node.
- **Error Handling:** Enhance the workflow with more specific error handling for API failures (e.g., using 'Error Trigger' or checking HTTP status codes) or batch processing issues (batch.status === 'failed').
- **Output Processing:** In the workflow that *calls* this template, add nodes after the 'Execute Workflow' node to process the individual result items returned (e.g., save to a database or spreadsheet, or send notifications).
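For reference, the parsing step described above amounts to a few lines in an n8n Code node. A minimal sketch, assuming the raw .jsonl text arrives in a data field (field names are illustrative):

```javascript
// n8n Code node sketch: split a JSON Lines payload into parsed objects.
// Assumes the batch results text is in item.json.data; adjust as needed.
const raw = $input.first().json.data;

const parsed = raw
  .split('\n')
  .filter((line) => line.trim() !== '') // skip blank trailing lines
  .map((line) => JSON.parse(line));     // one result object per line

return [{ json: { parsed } }];
```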
Example Usage Branch (Manual Trigger)

This template also contains a separate branch starting with the 'Run example' Manual Trigger node.
- **Purpose:** This branch demonstrates how to construct the necessary anthropic-version and requests array payload.
- **Methods Shown:** It includes steps for:
  - Creating a request object from a simple query string.
  - Creating a request object using data from n8n's Langchain Chat Memory nodes (@n8n/n8n-nodes-langchain).
- **Execution:** It merges these examples, constructs the final payload, and then uses the 'Execute Workflow' node to call the main batch processing logic described above. It finishes by filtering the results for demonstration.
- **Note:** This branch is for demonstration and testing. You would typically build your own data preparation logic in a separate workflow. The use of Langchain nodes is optional for the core batch functionality.

Notes
- **API Limits:** According to the Anthropic API documentation, batches can contain up to 100,000 requests and be up to 256 MB in total size. Ensure your n8n instance has sufficient resources for large batches.
- **API Costs:** Using the Anthropic API, including the Batch API, incurs costs based on token usage. Monitor your usage via the Anthropic dashboard.
- **Completion Time:** Batch processing time depends on the number and complexity of prompts and current API load. The polling mechanism accounts for this variability.
- **Versioning:** Always include the anthropic-version header in your requests, as shown in the workflow and examples. Refer to the Anthropic API versioning documentation.
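To make the expected trigger input concrete, here is a minimal sketch of the payload a calling workflow might pass - two illustrative requests; see the Sticky Notes and Anthropic's Batch API documentation for the authoritative schema:

```javascript
// Illustrative input for the 'When Executed by Another Workflow' trigger.
// Each entry in `requests` follows the Anthropic Batch API request shape.
const input = {
  'anthropic-version': '2023-06-01',
  requests: [
    {
      custom_id: 'summary-001', // your identifier, echoed back in the results
      params: {
        model: 'claude-3-opus-20240229',
        max_tokens: 512,
        messages: [{ role: 'user', content: 'Summarize the n8n docs homepage.' }],
      },
    },
    {
      custom_id: 'qa-001',
      params: {
        model: 'claude-3-opus-20240229',
        max_tokens: 256,
        messages: [{ role: 'user', content: 'What is a webhook?' }],
      },
    },
  ],
};
```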
by Naveen Choudhary
This workflow automatically enriches company domain lists with comprehensive business information scraped from ZoomInfo, organizing the data in Google Sheets for sales teams and researchers.

Who's it for
- **Sales teams** building prospect databases with accurate company information
- **Marketing professionals** researching target companies for outreach campaigns
- **Business development teams** qualifying leads with revenue and employee data
- **Researchers** collecting structured company data for market analysis
- **Lead generation specialists** enriching domain lists with contact details

How it works

The workflow processes unprocessed domains from a Google Sheet, searches for their ZoomInfo profiles using the Serper API, scrapes the company pages through the Oxylabs proxy service, and extracts structured business data. Each domain is marked as processed to prevent duplicates, and the workflow includes rate limiting to respect API limits.

What it does
1. Loads unprocessed domains from your Google Sheets database
2. Searches ZoomInfo using targeted queries via the Serper API for each domain
3. Validates search results and extracts relevant ZoomInfo profile URLs
4. Scrapes company pages using Oxylabs to bypass anti-scraping protection
5. Extracts structured data including company details, address, revenue, and employee count
6. Updates Google Sheets with enriched company information
7. Tracks processing status to prevent reprocessing the same domains

Requirements
- **Serper API account** with search credits (Get API key)
- **Oxylabs subscription** for the web scraping proxy service (Sign up here)
- **Google Sheets API access** with OAuth2 authentication
- **Google Sheets template** - make a copy of the template sheet with pre-configured columns

How to set up
1. Make a copy of the Google Sheets template to your Google Drive.
2. Configure API credentials in the respective HTTP Request nodes:
   - Add the Serper API key in the search node
   - Set up the Oxylabs username/password in the scraping node
3. Set up Google Sheets authentication using OAuth2.
4. Update the Google Sheets document ID in all Google Sheets nodes to point to your copied template.
5. Add your domain list to the sheet with the 'processed' column empty or false.
6. Run the workflow using the manual trigger.

How to customize the workflow
- **Search query modification**: Update the search query in the Serper node for a different geographic focus (currently set for the Czech Republic)
- **Data extraction fields**: Modify the Google Sheets column mapping to include/exclude specific company data points
- **Rate limiting**: Adjust wait times between requests to match your API rate limits
- **Batch processing**: Configure the split batch size for processing domains in smaller groups
- **Error handling**: Customize the continue-on-error settings based on your data quality requirements
- **Scheduling**: Replace the Manual Trigger with a Schedule Trigger for automated daily or weekly runs

Output data includes
- Complete company name and official address
- Phone numbers and contact information
- Revenue figures and employee headcount
- Industry classifications and business categories
- LinkedIn company profile URLs
- Geographic location details (city, state, country, postal code)
- Processing status tracking for workflow management

Note: This workflow includes comprehensive error handling to ensure domains are always marked as processed, preventing infinite loops while maintaining data integrity. Rate limiting is built in to respect API quotas and avoid service interruptions.
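The ZoomInfo lookup in step 2 is a single Serper call per domain. A hedged sketch of that request - the site: query targeting is an assumption about how the template narrows results to ZoomInfo profiles:

```javascript
// Hedged sketch: search for a company's ZoomInfo profile via Serper.
// The site: operator query is an assumption; adapt it to the template's query.
const domain = 'example.com';

const response = await fetch('https://google.serper.dev/search', {
  method: 'POST',
  headers: {
    'X-API-KEY': process.env.SERPER_API_KEY,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ q: `site:zoominfo.com "${domain}"` }),
});

const { organic = [] } = await response.json();
// Keep only results that actually point at a ZoomInfo company profile page.
const profile = organic.find((r) => r.link.includes('zoominfo.com/c/'));
console.log(profile?.link);
```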
by Alexander Bentlund
What this workflow does

This workflow is used as a bridge between your private Google Calendar and your work Outlook calendar. The same approach can be used with other calendar types.

Description

Send a copy of a Google Calendar event to your Outlook work account as a reminder to yourself or co-workers that you are booked for private matters like "Dentist appointment", "Taking kids to Disney Land", etc.

How it works

Create event
- You create a Google Calendar event.
- A trigger in n8n reacts and collects the event info.
- An Outlook event is created with the same information in your Outlook calendar.

Cancel
- You cancel an event in Google Calendar.
- A trigger in n8n reacts and collects the cancelled event info.
- The Outlook node's getAll operation searches for the event in your Outlook calendar.
- If the event is found, it is deleted.
- An email with the details of the cancellation is then sent to your Outlook e-mail address.

The n8n Merge node is used to combine results from the two different nodes that are necessary to create the cancelled-event e-mail notification.

Important notice

Make sure you use a dedicated Google Calendar for private events that will be displayed in your work Outlook calendar, in order to avoid displaying unwanted calendar events that you do not wish to share with your co-workers.

Requirements
- Active workflow*
- Google Calendar OAuth2 API
- Microsoft Outlook OAuth2 API

*The Google Calendar trigger fires only if this workflow is active. You can, however, test the workflow in the editor by clicking "Test step". You will then receive a response from Google Calendar that shows what data Google sends.
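The lookup-and-delete step hinges on matching the cancelled Google event to its Outlook copy. A minimal Code node sketch of one way to match by title - the node name and field names are illustrative, and the template may match on different properties:

```javascript
// n8n Code node sketch: find the Outlook copy of a cancelled Google event.
// Assumes Outlook events arrive as input items and the cancelled Google
// event is read from a node named 'Google Calendar Trigger' - adjust the
// node name and fields to match your workflow.
const cancelledTitle = $('Google Calendar Trigger').first().json.summary;

const match = $input.all().find(
  (item) => item.json.subject === cancelledTitle // Outlook events use 'subject'
);

return match ? [match] : []; // pass the match on to the delete node, if any
```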
by David Roberts
This workflow allows you to ask questions about the data in a Google Sheet over a chat interface. It uses n8n's built-in chat, but could be modified to work with Slack, Teams, WhatsApp, etc. Behind the scenes, the workflow uses GPT-4, so you'll need an OpenAI API key that supports it.

How it works

The workflow uses an AI agent with custom tools that call a sub-workflow. That sub-workflow reads the Google Sheet and returns information from it. Because models have a context window (and therefore a maximum number of characters they can accept), we can't pass the whole Google Sheet to GPT - at least not for big sheets. So we provide three ways of querying less data, which can be used in combination to answer questions. Those three functions are:
- List all the columns in the sheet
- Get all values of a single column
- Get all values of a single row

Note that to use this template, you need to be on n8n version 1.19.4 or later.
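A sub-workflow like the one described can route the three query types through a single Code node. A minimal sketch, assuming the tool passes an operation field plus an optional column or row argument - all names here are illustrative, not taken from the template:

```javascript
// n8n Code node sketch: answer one of the three queries over sheet rows.
// Assumes the Google Sheets node output is available via $('Google Sheets')
// and the tool's arguments arrive on the trigger item; names are illustrative.
const { operation, column, rowIndex } = $input.first().json;
const rows = $('Google Sheets').all().map((item) => item.json);

let result;
if (operation === 'listColumns') {
  result = Object.keys(rows[0] ?? {});      // header names only
} else if (operation === 'getColumn') {
  result = rows.map((row) => row[column]);  // one value per row
} else if (operation === 'getRow') {
  result = rows[Number(rowIndex)] ?? null;  // a single record
}

return [{ json: { result } }];
```

Keeping each function's output small is what lets the agent combine several calls without blowing the model's context window.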
by Davide
This workflow allows users to generate AI videos using Google Veo3, save them to Google Drive, generate optimized YouTube titles with GPT-4o, and automatically upload them to YouTube with Upload-Post. The entire process is triggered from a Google Sheet that acts as the central interface for input and output. It automates video creation, uploading, and tracking, ensuring seamless integration between Google Sheets, Google Drive, Google Veo3, and YouTube.

Benefits of this Workflow
- 💡 **No-Code Interface**: Trigger and control the video production pipeline from a simple Google Sheet.
- ⚙️ **Full Automation**: Once set up, the entire video generation and publishing process runs hands-free.
- 🧠 **AI-Powered Creativity**: Generates engaging YouTube titles using GPT-4o and leverages advanced generative video AI from Google Veo3.
- 📁 **Cloud Storage & Backup**: Stores all generated videos on Google Drive for safekeeping.
- 📈 **YouTube Ready**: Automatically uploads to YouTube with correct metadata, saving time and boosting visibility.
- 🧪 **Scalable**: Designed to process multiple video prompts by looping through new entries in Google Sheets.
- 🔒 **API-First**: Uses secure API-based communication for all services.

How It Works
1. Trigger: The workflow can be started manually ("When clicking 'Test workflow'") or scheduled ("Schedule Trigger") to run at regular intervals (e.g., every 5 minutes).
2. Fetch Data: The "Get new video" node retrieves unfilled video requests from a Google Sheet (rows where the "VIDEO" column is empty).
3. Video Creation:
   - The "Set data" node formats the prompt and duration from the Google Sheet.
   - The "Create Video" node sends a request to the Fal.run API (Google Veo3) to generate a video based on the prompt.
4. Status Check:
   - The "Wait 60 sec." node pauses execution for 60 seconds.
   - The "Get status" node checks the video generation status. If the status is "COMPLETED," the workflow proceeds; otherwise, it waits again.
5. Video Processing:
   - The "Get Url Video" node fetches the video URL.
   - The "Generate title" node uses OpenAI (GPT-4.1) to create an SEO-optimized YouTube title.
   - The "Get File Video" node downloads the video file.
6. Upload & Update:
   - The "Upload Video" node saves the video to Google Drive.
   - The "HTTP Request" node uploads the video to YouTube via the Upload-Post API.
   - The "Update Youtube URL" and "Update result" nodes update the Google Sheet with the video URL and YouTube link.

Set Up Steps
1. Google Sheet Setup: Create a Google Sheet with the columns PROMPT, DURATION, VIDEO, and YOUTUBE_URL. Share the Sheet link in the "Get new video" node.
2. API Keys:
   - Obtain a Fal.run API key (for Veo3) and set it in the "Create Video" node (Header: Authorization: Key YOURAPIKEY).
   - Get an Upload-Post API key (for YouTube uploads) and configure the "HTTP Request" node (Header: Authorization: Apikey YOUR_API_KEY).
3. YouTube Upload Configuration: Replace YOUR_USERNAME in the "HTTP Request" node with your Upload-Post profile name.
4. Schedule Trigger: Configure the "Schedule Trigger" node to run periodically (e.g., every 5 minutes).

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
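For orientation, the create-then-poll pattern in steps 3-4 typically looks like the sketch below when done against fal.ai's queue API. The model path and response field names are assumptions - check fal.ai's Veo3 documentation for the exact schema:

```javascript
// Hedged sketch of the fal.ai queue pattern used for Veo3 generation.
// Endpoint path and response field names are assumptions to verify.
const headers = {
  Authorization: `Key ${process.env.FAL_API_KEY}`,
  'Content-Type': 'application/json',
};

// 1) Submit the generation job.
const submit = await fetch('https://queue.fal.run/fal-ai/veo3', {
  method: 'POST',
  headers,
  body: JSON.stringify({ prompt: 'A drone shot over a misty forest at dawn' }),
});
const { status_url } = await submit.json();

// 2) Poll until the job reports COMPLETED (the workflow waits 60s per check).
let status;
do {
  await new Promise((resolve) => setTimeout(resolve, 60_000));
  status = await (await fetch(status_url, { headers })).json();
} while (status.status !== 'COMPLETED');
```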
by Arnaud MARIE
Monthly Spotify Track Archiving and Playlist Classification

This n8n workflow allows you to automatically archive your monthly Spotify liked tracks in a Google Sheet, along with playlist details and descriptions. Based on this data, Claude 3.5 is used to classify each track into multiple playlists and add them in bulk.

Who is this template for?

This workflow template is perfect for Spotify users who want to systematically archive their listening history and organize their tracks into custom playlists.

What problem does this workflow solve?

It automates the monthly process of tracking, storing, and categorizing Spotify tracks into relevant playlists, helping users maintain well-organized music collections and keep a historical record of their listening habits.

Workflow Overview
- **Trigger Options**: Can be initiated manually or on a set schedule.
- **Spotify Playlists Retrieval**: Fetches the current playlists and filters them by owner.
- **Track Details Collection**: Retrieves information such as track ID and popularity from the user's library.
- **Audio Features Fetching**: Uses Spotify's API to get audio features for each track.
- **Data Merging**: Combines track information with their audio features.
- **Duplicate Checking**: Filters out tracks that have already been logged in Google Sheets (see the sketch after the table below).
- **Data Logging**: Archives new tracks into a Google Sheet.
- **AI Classification**: Uses an AI model to classify tracks into suitable playlists.
- **Playlist Updates**: Adds classified tracks to the corresponding playlists.

Setup Instructions
1. Credentials Setup: Make sure you have valid Spotify OAuth2 and Google Sheets access credentials.
2. Trigger Configuration: Choose between manual or scheduled triggers to start the workflow.
3. Google Sheets Preparation: Set up a Google Sheet with the necessary structure for logging track details.
4. Spotify Playlists Setup: Have a diverse range of playlists with exhaustive descriptions (see the examples below) ready to accommodate different music genres and moods.

Customization Options
- **Adjust Playlist Conditions**: Modify the AI model's classification criteria to align with your personal music preferences.
- **Enhance Track Analysis**: Incorporate additional audio features or external data sources for more refined track categorization.
- **Personalize Data Logging**: Customize which track attributes to log in Google Sheets based on your archival preferences.
- **Configure Scheduling**: Set a preferred schedule for periodic track archiving, e.g., monthly or weekly.

Cost Estimate

For 300 tracks, the token usage amounts to approximately 60,000 tokens (58,000 for input and 2,000 for completion), costing around 20 cents with Claude 3.5 Sonnet (as of October 2024).

Playlist Description Examples

| Playlist Name | Playlist Description |
|---------------|----------------------|
| Classique | Indulge in the timeless beauty of classical music with this refined playlist. From baroque to romantic periods, this collection showcases renowned compositions. |
| Poi | Find your flow with this dynamic playlist tailored for poi, staff, and ball juggling. Featuring rhythmic tracks that complement your movements. |
| Pro Sound | Boost your productivity and focus with this carefully selected mix of concentration-enhancing music. Ideal for work or study sessions. |
| ChillySleep | Drift off to dreamland with this soothing playlist of sleep-inducing tracks. Gentle melodies and ambient sounds create a peaceful atmosphere for restful sleep. |
| To Sing | Warm up your vocal cords and sing your heart out with karaoke-friendly tracks. Featuring popular songs, perfect for solo performances or group sing-alongs. |
| 1990s | Relive the diverse musical landscape of the 90s with this eclectic mix. From grunge to pop, hip-hop to electronic, this playlist showcases defining genres. |
| 1980s | Take a nostalgic trip back to the era of big hair and neon with this 80s playlist. Packed with iconic hits and forgotten gems, capturing the energy of the decade. |
| Groove Up | Elevate your mood and energy with this upbeat playlist. Featuring a mix of feel-good tracks across various genres to lift your spirits and get you moving. |
| Reggae & Dub | Relax and unwind with the laid-back vibes of reggae and dub. This playlist combines classic reggae tunes with deep, spacious dub tracks for a chilled-out vibe. |
| Psytrance | Embark on a mind-bending journey with this collection of psychedelic trance tracks. Ideal for late-night dance sessions or intense focus. |
| Cumbia | Sway to the infectious rhythms of Cumbia with this lively playlist. Blending traditional Latin American sounds with modern interpretations for a danceable mix. |
| Funky Groove | Get your body moving with this collection of funk and disco tracks. Featuring irresistible basslines and catchy rhythms, perfect for dance parties. |
| French Chanson | Experience the romance and charm of France with this mix of classic and modern French songs, capturing the essence of French musical culture. |
| Workout Motivation | Push your limits and power through your exercise routine with this high-energy playlist. From warm-up to cool-down, these tracks will keep you motivated. |
| Cinematic Instrumentals | Immerse yourself in a world of atmospheric sounds with this collection of cinematic instrumental tracks, perfect for focus, relaxation, or contemplation. |
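The duplicate-checking step described in the overview can be a short Code node comparing track IDs against what the Google Sheet already holds. A minimal sketch - node and field names are illustrative, not taken from the template:

```javascript
// n8n Code node sketch: drop tracks already archived in the sheet.
// Assumes sheet rows carry a track_id column and incoming Spotify items
// carry json.id; both names are illustrative.
const known = new Set(
  $('Google Sheets').all().map((item) => item.json.track_id)
);

return $input.all().filter((item) => !known.has(item.json.id));
```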
by Joseph
(Image Generation → Hosting → Video Generation)

This workflow is designed for creators, automation enthusiasts, and indie hackers who want to generate image-based videos automatically using AI tools - at a low cost.

⚙️ Workflow Overview

This automation performs the following steps:
1. Trigger (schedule or manual)
2. Generate an image using Flux (choose between two APIs)
3. Upload the image to Kraken.io to get a public URL
4. Send the image to Runway ML (choose between two APIs) to generate a video
5. Receive the video as a URL - ready for posting, download, or further automation

🛠️ Step-by-Step Setup

🖼️ Flux (Image Generation)

You can use either of the following providers:

Option 1: Flux by Black Forest Labs (direct API)
- 🔑 Get your API key here: https://docs.bfl.ml/
- Paste your API key in the HTTP Request node named Flux (Blackforest)
- You can customize prompts or styles inside the JSON body

Option 2: Flux via RapidAPI
- 🔑 Subscribe and get your key here: https://rapidapi.com/poorav925/api/ai-text-to-image-generator-flux-free-api/playground/apiendpoint_e38039ee-1912-4ef9-b4d4-270d72fca851
- Enter your RapidAPI key in the X-RapidAPI-Key header
- Optional: tweak prompts, style, or resolution inside the JSON body

🐙 Kraken.io (Hosting the Image Publicly)

Runway ML requires the image to be publicly accessible, so we use Kraken.io to host the generated image and return a public URL.

🔑 Get your API credentials: https://kraken.io/account/api-credentials

Setup:
1. Copy your API Key and API Secret
2. Open the Kraken Upload node in n8n and replace the placeholders with your credentials
3. The node uploads your image and returns a public image URL for Runway to use (see the sketch at the end of this description)

🎬 Runway ML (Video Generation)

You also have two options here:

Option 1: Runway official API
- 🔑 Get your credentials at: https://dev.runwayml.com/
- Use the public image URL from Kraken in the JSON body
- Paste your Bearer token in the Authorization header
- Customize other settings like video length, style, FPS, etc.

Option 2: Runway via RapidAPI
- 🔑 Subscribe and get your key here: https://rapidapi.com/fortunehoppers/api/runwayml/playground/apiendpoint_93c8554d-8097-40cd-8252-3d4dec9c0e68
- Paste your RapidAPI key in the request header
- Customize prompt and generation options in the body
- Use the Kraken-generated image URL as the input source

📤 What to Do with the Video

Once the video is generated, you'll get a direct video URL. You can:
- Save it to Google Sheets or Notion
- Send it via email
- Trigger a YouTube upload automation
- Or download it manually for editing and reposting

💡 Optional Tips & Notes
- You can schedule this workflow to generate AI videos daily or weekly
- Combine it with a Google Sheet of prompts for bulk automation
- Try using a consistent visual style or theme for better branding
- This workflow is lightweight and affordable - perfect for indie projects or experimental content generation
- Great for shorts, quote visuals, music loops, AI art promos, etc.

🔗 Resources
- Flux (Blackforest) Docs
- Flux on RapidAPI
- RunwayML Official Docs
- Runway on RapidAPI
- Kraken.io API Dashboard

🙋 Need Help?

Feel free to reach out:
- 🐦 Twitter: @juppfy
- 📧 Email: joseph@uppfy.com

If you'd like to hire me for custom n8n workflows or product automations, don't hesitate to get in touch.
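For reference, the Kraken.io hosting step is a multipart upload whose response carries the public URL. A hedged sketch using Node 18+ global fetch/FormData - the auth and field names follow Kraken's upload API as commonly documented, so verify them against their docs:

```javascript
// Hedged sketch: upload an image to Kraken.io and read back the public URL.
// Auth structure and field names should be checked against Kraken's docs.
const fluxImageUrl = 'https://example.com/generated.png'; // placeholder: the Flux step's output
const imageBuffer = await (await fetch(fluxImageUrl)).arrayBuffer();

const form = new FormData();
form.append('data', JSON.stringify({
  auth: {
    api_key: process.env.KRAKEN_API_KEY,
    api_secret: process.env.KRAKEN_API_SECRET,
  },
  wait: true, // return the result in this response instead of via callback
}));
form.append('file', new Blob([imageBuffer]), 'flux-image.png');

const res = await fetch('https://api.kraken.io/v1/upload', {
  method: 'POST',
  body: form,
});
const { kraked_url } = await res.json(); // public URL to hand to Runway
console.log(kraked_url);
```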