by Usman Liaqat
This workflow listens for incoming WhatsApp messages that contain media (e.g., images) and automatically downloads the media file using WhatsApp's private media URL.

**How it works**
1. The trigger node activates when a WhatsApp message with media is received.
2. The media ID is extracted from the message payload.
3. A private media URL is retrieved using the media ID.
4. The media file is downloaded using an authenticated HTTP request.

**Ideal for:**
- Archiving WhatsApp media to external systems.
- Triggering further automations based on received media.
- Integrating with cloud storage like Google Drive, Dropbox, or Amazon S3.

**Set up steps**
1. Connect your WhatsApp Business API account.
2. Add HTTP credentials for downloading media via the private URL.
3. Set up the webhook in your WhatsApp Business account.
4. Extend the workflow as needed for your use case (e.g., file storage, alerts).
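The two-step retrieval the workflow performs (exchange the media ID for a private URL, then fetch the file with the same credentials) can be sketched against the WhatsApp Cloud API like this. The token and API version are placeholders you would swap for your own:

```python
import requests

ACCESS_TOKEN = "YOUR_WHATSAPP_TOKEN"  # placeholder
API_BASE = "https://graph.facebook.com/v19.0"

def download_media(media_id: str) -> bytes:
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    # Step 1: exchange the media ID for a short-lived private URL
    meta = requests.get(f"{API_BASE}/{media_id}", headers=headers)
    meta.raise_for_status()
    media_url = meta.json()["url"]
    # Step 2: the private URL itself also requires the bearer token
    resp = requests.get(media_url, headers=headers)
    resp.raise_for_status()
    return resp.content
```

In the workflow these two requests map onto the "retrieve private URL" and "authenticated HTTP request" nodes.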
by Yaron Been
**Description**
This workflow automatically searches multiple freelance platforms for new gigs matching your skills and requirements. It saves you time by eliminating the need to manually check multiple job boards and sends you alerts for relevant opportunities.

**Overview**
This workflow automatically scrapes freelance job boards and platforms for new gigs matching your skills and requirements. It uses Bright Data to access job listings and can notify you of new opportunities or save them to a database.

**Tools Used**
- **n8n:** The automation platform that orchestrates the workflow.
- **Bright Data:** For scraping freelance platforms like Upwork, Fiverr, Freelancer, etc. without getting blocked.
- **(Optional) Email/Slack/Database:** For notifications or data storage.

**How to Install**
1. **Import the Workflow:** Download the .json file and import it into your n8n instance.
2. **Configure Bright Data:** Add your Bright Data credentials to the Bright Data node.
3. **Set Up Notifications:** Configure how you want to receive job alerts.
4. **Customize:** Add your skills, rate requirements, and other filters.

**Use Cases**
- **Freelancers:** Get notified of new gigs matching your skills.
- **Agencies:** Monitor job boards for potential client opportunities.
- **Remote Workers:** Track new remote job postings across multiple platforms.

**Connect with Me**
- **Website:** https://www.nofluff.online
- **YouTube:** https://www.youtube.com/@YaronBeen/videos
- **LinkedIn:** https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data:** https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #freelance #brightdata #webscraping #remotework #freelancejobs #gigeconomy #upwork #fiverr #freelancer #jobsearch #remotejobs #freelanceopportunities #n8nworkflow #workflow #nocode #jobhunting #freelancegigs #jobscraper #jobnotifications #freelancecareer #digitalnomad #workfromhome #jobopportunities #freelancetools #jobmonitoring #freelancesuccess
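The "Customize" step boils down to keyword and rate filtering. A minimal sketch of such a filter; the field names (`title`, `description`, `hourly_rate`) are hypothetical stand-ins for whatever your Bright Data scrape actually returns:

```python
def matches_gig(gig: dict, skills: set, min_rate: float) -> bool:
    """True if a scraped listing mentions one of your skills and meets the rate floor."""
    text = (gig.get("title", "") + " " + gig.get("description", "")).lower()
    has_skill = any(skill.lower() in text for skill in skills)
    rate_ok = gig.get("hourly_rate", 0) >= min_rate
    return has_skill and rate_ok

gigs = [
    {"title": "n8n automation expert", "description": "Build workflows", "hourly_rate": 45},
    {"title": "Logo design", "description": "Brand identity", "hourly_rate": 20},
]
matches = [g for g in gigs if matches_gig(g, {"n8n", "automation"}, 30)]
# only the first listing survives the filter
```

In n8n the same logic would live in a Code or Filter node between the scrape and the notification step.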
by Yaron Been
**Description**
This workflow monitors Bitcoin prices across multiple exchanges and sends you alerts when significant price drops occur. It helps crypto traders and investors identify buying opportunities without constantly watching the markets.

**Overview**
This workflow monitors Bitcoin prices across multiple exchanges and sends you alerts when significant price drops occur. It uses Bright Data to scrape real-time price data and can be configured to notify you through various channels.

**Tools Used**
- **n8n:** The automation platform that orchestrates the workflow.
- **Bright Data:** For scraping cryptocurrency exchange data without getting blocked.
- **Notification Services:** Email, SMS, Telegram, or other messaging platforms.

**How to Install**
1. **Import the Workflow:** Download the .json file and import it into your n8n instance.
2. **Configure Bright Data:** Add your Bright Data credentials to the Bright Data node.
3. **Set Up Notifications:** Configure your preferred notification method.
4. **Customize:** Set your price thresholds, monitoring frequency, and which exchanges to track.

**Use Cases**
- **Crypto Traders:** Get notified of buying opportunities during price dips.
- **Investors:** Monitor your crypto investments and make informed decisions.
- **Financial Analysts:** Track Bitcoin price movements for market analysis.

**Connect with Me**
- **Website:** https://www.nofluff.online
- **YouTube:** https://www.youtube.com/@YaronBeen/videos
- **LinkedIn:** https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data:** https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #bitcoin #cryptocurrency #brightdata #pricealerts #cryptotrading #bitcoinalerts #cryptoalerts #cryptomonitoring #n8nworkflow #workflow #nocode #cryptoinvesting #bitcoinprice #cryptomarket #tradingalerts #cryptotools #bitcointrading #pricemonitoring #cryptoautomation #bitcoininvestment #cryptotracker #marketalerts #tradingopportunities #cryptoprices
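The core alerting logic is a simple percentage-drop check. A minimal sketch; the 5% threshold is an illustrative default you would tune in the workflow's "Customize" step:

```python
def drop_percent(previous: float, current: float) -> float:
    """Percentage drop from the last observed price to the current one."""
    return (previous - current) / previous * 100

def should_alert(previous: float, current: float, threshold: float = 5.0) -> bool:
    """Fire an alert when the drop meets or exceeds the threshold."""
    return drop_percent(previous, current) >= threshold

# BTC fell from 60,000 to 57,000: a 5.0% drop, so the alert fires
print(should_alert(60_000, 57_000))  # True
```

The same comparison would sit in an IF node between the price scrape and the notification node.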
by Yang
🔎 **Who is this for?**
This workflow is designed for podcast creators, content marketers, and video producers who want to convert YouTube videos into podcast-ready scripts. It's perfect for anyone repurposing long-form content to reach audio-first audiences without manual effort.

🧠 **What problem is this workflow solving?**
Creating podcast scripts from YouTube videos manually is time-consuming. This workflow automates the process by pulling transcripts, cleaning the text, organizing the dialogue, summarizing the key points, and saving everything in one place. It removes the need for manual transcription, formatting, and structuring.

⚙️ **What this workflow does**
This workflow uses Dumpling AI and GPT-4o to automate the transformation of YouTube video transcripts into polished podcast scripts. Here's how it works:

1. **RSS Feed Trigger**: Monitors a YouTube RSS feed for new video uploads. When a new video is detected, the workflow begins automatically.
2. **Get YouTube Transcript (Dumpling AI)**: Uses Dumpling AI's get-youtube-transcript endpoint to extract the full transcript from the video URL.
3. **Generate Podcast Script with GPT-4o**: GPT-4o receives the transcript and generates a structured JSON output including:
   - Cleaned transcript with filler words removed
   - Speaker labels for clarity
   - A short, engaging podcast title
   - A concise summary of the episode
4. **Save to Airtable**: The structured data (title, summary, cleaned transcript) is saved to Airtable for easy review, editing, or publishing.

This automation is an ideal workflow for repurposing video content into audio-friendly formats, cutting down production time while increasing content output across platforms.
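Before the Airtable step, it is worth validating that the model's structured JSON actually contains every field the table expects. A minimal sketch; the field names below are illustrative, not the workflow's exact schema:

```python
import json

# Illustrative shape of the GPT-4o structured output (assumed field names)
raw = '''{
  "title": "How AI Is Changing Podcasting",
  "summary": "A quick walk-through of the episode's key points.",
  "cleaned_transcript": "Host: Welcome back to the show. Guest: Thanks for having me."
}'''

episode = json.loads(raw)
required = {"title", "summary", "cleaned_transcript"}
missing = required - episode.keys()
assert not missing, f"model output missing fields: {missing}"
```

A Code node doing this check lets you fail loudly (or retry the model) instead of writing half-empty rows to Airtable.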
by Friedemann Schuetz
Welcome to my Airbnb Telegram Agent Workflow! This workflow creates an intelligent Telegram bot that helps users search and find Airbnb accommodations using natural language queries and voice messages.

**DISCLAIMER:** This workflow only works with self-hosted n8n instances! You have to install the n8n-nodes-mcp-client Community Node!

**What this workflow does**
This workflow processes incoming Telegram messages (text or voice) and provides personalized Airbnb accommodation recommendations. The AI agent understands natural language queries, searches through Airbnb data using MCP tools, and returns mobile-optimized results with clickable links, prices, and key details.

**Key Features:**
- Voice message support (speech-to-text and text-to-speech)
- Conversation memory for context-aware responses
- Mobile-optimized formatting for Telegram
- Real-time Airbnb data access via MCP integration

**This workflow has the following sequence:**
1. **Telegram Trigger**: Receives incoming messages from users
2. **Text or Voice Switch**: Routes based on message type
3. **Voice Processing** (if applicable): Downloads and transcribes voice messages
4. **Text Preparation**: Formats text input for the AI agent
5. **Airbnb AI Agent**: Core logic that:
   - Lists available MCP tools for Airbnb data
   - Executes searches with parsed parameters
   - Formats results for mobile display
6. **Response Generation**: Sends the formatted text response
7. **Voice Response** (optional): Creates and sends an audio summary

**Requirements:**
- **Telegram Bot API**: Documentation
  - Create a bot via @BotFather on Telegram
  - Get the bot token and configure the webhook
- **OpenAI API**: Documentation
  - Used for speech transcription (Whisper)
  - Used for chat completion (GPT-4)
  - Used for text-to-speech generation
- **MCP Community Client Node**: Documentation
  - Custom integration for Airbnb data
  - Requires MCP server setup with an Airbnb/Airtable connection
  - Provides tools for accommodation search and details

**Important:** You need to set up an MCP server with Airbnb data access. The workflow uses MCP tools to retrieve real accommodation data, so ensure your MCP server is properly configured with the Airtable/Airbnb integration.

**Configuration Notes:**
- Update the Telegram chat ID in the trigger for your specific bot
- Modify the system prompt in the Airbnb Agent for different use cases
- The workflow supports individual users and can be extended for group chats

Feel free to contact me via LinkedIn, if you have any questions!
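The voice-processing branch (download the voice note, then transcribe it) can be sketched outside n8n using the Telegram Bot API's getFile method and OpenAI's audio transcription endpoint. Tokens are placeholders, and error handling is kept minimal:

```python
import requests

BOT_TOKEN = "YOUR_TELEGRAM_BOT_TOKEN"  # placeholder
OPENAI_KEY = "YOUR_OPENAI_API_KEY"     # placeholder

def transcribe_voice(file_id: str) -> str:
    # 1. Resolve the Telegram file path for the voice note
    info = requests.get(
        f"https://api.telegram.org/bot{BOT_TOKEN}/getFile",
        params={"file_id": file_id},
    ).json()
    file_path = info["result"]["file_path"]
    # 2. Download the audio bytes
    audio = requests.get(
        f"https://api.telegram.org/file/bot{BOT_TOKEN}/{file_path}"
    ).content
    # 3. Send the audio to Whisper for transcription
    resp = requests.post(
        "https://api.openai.com/v1/audio/transcriptions",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        files={"file": ("voice.oga", audio)},
        data={"model": "whisper-1"},
    )
    resp.raise_for_status()
    return resp.json()["text"]
```

In the workflow these three calls correspond to the voice download and transcription nodes in the "Voice Processing" step.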
by Yaron Been
This workflow provides automated access to the Izzaanel Betia AI model through the Replicate API. It saves you time by eliminating the need to manually interact with AI models and provides seamless integration for generation tasks within your n8n automation workflows.

**Overview**
This workflow automatically handles the complete generation process using the Izzaanel Betia model. It manages API authentication, parameter configuration, request processing, and result retrieval with built-in error handling and retry logic for reliable automation.

**Model Description:** Advanced AI model for automated processing and generation tasks.

**Key Capabilities**
- Specialized AI model with unique capabilities
- Advanced processing and generation features
- Custom AI-powered automation tools

**Tools Used**
- **n8n**: The automation platform that orchestrates the workflow
- **Replicate API**: Access to the Izzaanel/betia AI model
- **Izzaanel Betia**: The core AI model for generation tasks
- **Built-in Error Handling**: Automatic retry logic and comprehensive error management

**How to Install**
1. **Import the Workflow**: Download the .json file and import it into your n8n instance
2. **Configure Replicate API**: Add your Replicate API token to the 'Set API Token' node
3. **Customize Parameters**: Adjust the model parameters in the 'Set Other Parameters' node
4. **Test the Workflow**: Run the workflow with your desired inputs
5. **Integrate**: Connect this workflow to your existing automation pipelines

**Use Cases**
- **Specialized Processing**: Handle specific AI tasks and workflows
- **Custom Automation**: Implement unique business logic and processing
- **Data Processing**: Transform and analyze various types of data
- **AI Integration**: Add AI capabilities to existing systems and workflows

**Connect with Me**
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Replicate API**: https://replicate.com (Sign up to access powerful AI models)

#n8n #automation #ai #replicate #aiautomation #workflow #nocode #aiprocessing #dataprocessing #machinelearning #artificialintelligence #aitools #digitalart #contentcreation #productivity #innovation
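Under the hood, the 'Set API Token' and parameter nodes feed a Replicate prediction request. A minimal sketch of the equivalent REST call; the input fields are hypothetical, since the model's actual input schema lives on its Replicate page:

```python
import requests

API_TOKEN = "YOUR_REPLICATE_API_TOKEN"  # placeholder

def run_betia(model_input: dict) -> dict:
    """Create a prediction against the izzaanel/betia model (input schema assumed)."""
    resp = requests.post(
        "https://api.replicate.com/v1/models/izzaanel/betia/predictions",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"input": model_input},
    )
    resp.raise_for_status()
    # The response includes a status and a polling URL under "urls" -> "get"
    return resp.json()
```

The returned prediction is asynchronous, which is why the workflow's retry logic polls until a terminal status is reached.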
by Thibaud
**Title:** Automatic Strava Titles & Descriptions Generation with AI

**Description:** This n8n workflow connects your Strava account to an AI to automatically generate personalized titles and descriptions for every new cycling activity. It leverages the native Strava trigger to detect new activities, extracts and formats ride data, then queries an AI agent (OpenRouter, ChatGPT, etc.) with an optimized prompt to get a catchy title and an inspiring description. The workflow then updates the Strava activity in real time, with zero manual intervention.

**Key Features:**
- Secure connection to the Strava API (OAuth2)
- Automatic triggering for every new activity
- Intelligent data preparation and formatting
- AI-powered generation of personalized content (title + description)
- Instant update of the activity on Strava

**Use Cases:**
- Cyclists wanting to automatically enhance their Strava rides
- Sports content creators
- Community management automation for sports groups

**Prerequisites:**
- Strava account
- Strava OAuth2 credentials set up in n8n
- Access to a compatible AI agent (OpenRouter, ChatGPT, etc.)

**Benefits:**
- Saves time
- Advanced personalization
- Boosts the appeal of every ride to your community
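The final "instant update" step corresponds to Strava's activity update endpoint. A minimal sketch, assuming an OAuth2 access token with the `activity:write` scope:

```python
import requests

ACCESS_TOKEN = "YOUR_STRAVA_ACCESS_TOKEN"  # placeholder, activity:write scope

def update_activity(activity_id: int, name: str, description: str) -> dict:
    """Write the AI-generated title and description back to the Strava activity."""
    resp = requests.put(
        f"https://www.strava.com/api/v3/activities/{activity_id}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data={"name": name, "description": description},
    )
    resp.raise_for_status()
    return resp.json()
```

In the workflow, n8n's Strava node handles the OAuth2 refresh for you; this sketch just shows the underlying call.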
by JustinLee
This workflow demonstrates a simple Retrieval-Augmented Generation (RAG) pipeline in n8n, split into two main sections:

🔹 **Part 1: Load Data into Vector Store**
- Reads files from disk (or Google Drive).
- Splits content into manageable chunks using a recursive text splitter.
- Generates embeddings using the Cohere Embedding API.
- Stores the vectors into an In-Memory Vector Store (for simplicity; can be replaced with Pinecone, Qdrant, etc.).

🔹 **Part 2: Chat with the Vector Store**
- Takes user input from a chat UI or trigger node.
- Embeds the query using the same Cohere embedding model.
- Retrieves similar chunks from the vector store via similarity search.
- Uses a Groq-hosted LLM to generate a final answer based on the context.

🛠️ **Technologies Used:**
- 📦 Cohere Embedding API
- ⚡ Groq LLM for fast inference
- 🧠 n8n for orchestrating and visualizing the flow
- 🧲 In-Memory Vector Store (for prototyping)

🧪 **Usage:**
1. Upload or point to your source documents.
2. Embed them and populate the vector store.
3. Ask questions through the chat trigger node.
4. Receive context-aware responses based on retrieved content.
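The similarity search in Part 2 can be illustrated with plain cosine similarity over an in-memory list, mirroring what the In-Memory Vector Store does (toy two-dimensional embeddings are used here for readability; real Cohere embeddings have hundreds of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, store, k=3):
    """store: list of (chunk_text, embedding) pairs; returns the k closest chunks."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

store = [
    ("n8n orchestrates workflows", [0.9, 0.1]),
    ("groq serves fast inference", [0.1, 0.9]),
    ("cohere embeds text", [0.7, 0.3]),
]
# The chunks whose embeddings point most nearly along the query come first
print(top_k([1.0, 0.0], store, k=2))
```

Swapping the in-memory list for Pinecone or Qdrant replaces this linear scan with an approximate nearest-neighbor index, but the ranking idea is the same.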
by Rhys
This workflow turns news monitoring into an early-warning demand engine. It continuously ingests Google Alert RSS feeds, extracts the full text of every article, and runs real-time purchase-intent modeling to predict which stories will sway your buyers, positively or negatively. The moment a spike in intent is detected, it triggers an early-warning email so you can run the right playbooks: amplify favorable narratives to accelerate deal cycles, or counter harmful ones before they dent your pipeline. Ideal for revenue teams that want to harness media signals instead of reacting to them after the fact.

📝 **Step-by-Step Instructions**
1. **RSS Triggers**: The RSS trigger checks for news every [enter time].
2. **Extract Content**: Using the RSS link, run an HTTP request to fetch the article.
3. **Structure Output**: Parse out the article content and format the simulation query.
4. **Rally Simulation Testing**: AI personas receive the content as memory and are asked (in voting mode) how it impacts their interest in spending money on [synthetic research] (swap for your category).
5. **Extract Individual Votes**: Splits Rally's response array to process each persona's individual voting decision for detailed analysis.
6. **Calculate Responses**: Custom code processes all votes, counts selections for each variation, and calculates percentages.
7. **Alert Trigger**: Depending on count thresholds, triggers emails.
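The "Calculate Responses" step is plain vote counting. A minimal sketch; the choice labels are hypothetical stand-ins for your simulation's actual answer options, and the 30% threshold is illustrative:

```python
from collections import Counter

def tally(votes):
    """votes: list of per-persona choices; returns percentage share per choice."""
    counts = Counter(votes)
    total = len(votes)
    return {choice: round(100 * n / total, 1) for choice, n in counts.items()}

votes = ["increase", "decrease", "increase", "no_change", "increase"]
pct = tally(votes)
# Trigger the early-warning email when negative intent crosses a threshold
alert = pct.get("decrease", 0) >= 30
```

In the workflow this logic lives in the custom Code node, with `alert` feeding the email branch.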
by Yaron Been
This workflow provides automated access to the Jhonp4 Jhonpiedrahita_Ai01 AI model through the Replicate API. It saves you time by eliminating the need to manually interact with AI models and provides seamless integration for generation tasks within your n8n automation workflows.

**Overview**
This workflow automatically handles the complete generation process using the Jhonp4 Jhonpiedrahita_Ai01 model. It manages API authentication, parameter configuration, request processing, and result retrieval with built-in error handling and retry logic for reliable automation.

**Model Description:** Advanced AI model for automated processing and generation tasks.

**Key Capabilities**
- Specialized AI model with unique capabilities
- Advanced processing and generation features
- Custom AI-powered automation tools

**Tools Used**
- **n8n**: The automation platform that orchestrates the workflow
- **Replicate API**: Access to the Jhonp4/jhonpiedrahita_ai01 AI model
- **Jhonp4 Jhonpiedrahita_Ai01**: The core AI model for generation tasks
- **Built-in Error Handling**: Automatic retry logic and comprehensive error management

**How to Install**
1. **Import the Workflow**: Download the .json file and import it into your n8n instance
2. **Configure Replicate API**: Add your Replicate API token to the 'Set API Token' node
3. **Customize Parameters**: Adjust the model parameters in the 'Set Other Parameters' node
4. **Test the Workflow**: Run the workflow with your desired inputs
5. **Integrate**: Connect this workflow to your existing automation pipelines

**Use Cases**
- **Specialized Processing**: Handle specific AI tasks and workflows
- **Custom Automation**: Implement unique business logic and processing
- **Data Processing**: Transform and analyze various types of data
- **AI Integration**: Add AI capabilities to existing systems and workflows

**Connect with Me**
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Replicate API**: https://replicate.com (Sign up to access powerful AI models)

#n8n #automation #ai #replicate #aiautomation #workflow #nocode #aiprocessing #dataprocessing #machinelearning #artificialintelligence #aitools #digitalart #contentcreation #productivity #innovation
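The built-in retry logic this template describes amounts to polling the prediction until Replicate reports a terminal status. A minimal sketch using the documented terminal statuses (`succeeded`, `failed`, `canceled`), with a placeholder token:

```python
import time
import requests

API_TOKEN = "YOUR_REPLICATE_API_TOKEN"  # placeholder

def wait_for_prediction(prediction_url: str, timeout: float = 300.0) -> dict:
    """Poll a Replicate prediction URL until it finishes or the timeout passes."""
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        pred = requests.get(prediction_url, headers=headers).json()
        if pred["status"] in ("succeeded", "failed", "canceled"):
            return pred
        time.sleep(2)  # back off between polls
    raise TimeoutError("prediction did not finish in time")
```

The `prediction_url` is the `urls.get` value returned when the prediction is created; n8n's Wait/loop nodes implement the same pattern declaratively.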
by iamvaar
⚠️ **RUN THE FIRST WORKFLOW ONLY ONCE** (it converts your content into embedding format, saves it in the DB, and is then ready for the RAG chat).

📌 **Telegram Trigger**
- **Type:** telegramTrigger
- **Purpose:** Waits for new Telegram messages to trigger the workflow.
- **Note:** Currently disabled.

📄 **Content for the Training**
- **Type:** googleDocs
- **Purpose:** Fetches document content from Google Docs using its URL.
- **Details:** Uses Service Account authentication.

✂️ **Splitting into Chunks**
- **Type:** code
- **Purpose:** Splits the fetched document text into smaller chunks (1000 chars each) for processing.
- **Logic:** Loops over the text and slices it.

🧠 **Embedding Uploaded Document**
- **Type:** httpRequest
- **Purpose:** Calls the Together AI embedding API to get vector embeddings for each text chunk.
- **Details:** Sends JSON with the model name and the chunk as input.

🛢 **Save the Embedding in DB**
- **Type:** supabase
- **Purpose:** Saves each text chunk and its embedding vector into the Supabase embed table.

**SECOND WORKFLOW EXPLANATION:**

💬 **When chat message received**
- **Type:** chatTrigger
- **Purpose:** Starts the workflow when a user sends a chat message.
- **Details:** Sends an initial greeting message to the user.

🧩 **Embed User Message**
- **Type:** httpRequest
- **Purpose:** Generates an embedding for the user's input message.
- **Details:** Calls the Together AI embeddings API.

🔍 **Search Embeddings**
- **Type:** httpRequest
- **Purpose:** Searches the Supabase DB for the top 5 most similar text chunks based on the generated embedding.
- **Details:** Calls the Supabase RPC function matchembeddings1.

📦 **Aggregate**
- **Type:** aggregate
- **Purpose:** Combines all retrieved text chunks into a single aggregated context for the LLM.

🧠 **Basic LLM Chain**
- **Type:** chainLlm
- **Purpose:** Passes the user's question + aggregated context to the LLM to generate a detailed answer.
- **Details:** Contains a prompt instructing the LLM to answer only based on the context.

🤖 **OpenRouter Chat Model**
- **Type:** lmChatOpenRouter
- **Purpose:** Provides the actual AI language model that processes the prompt.
- **Details:** Uses the qwen/qwen3-8b:free model via OpenRouter; you can use any model of your choice.
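The "Splitting into Chunks" Code node's logic (slice the text every 1000 characters) can be sketched in plain Python:

```python
def split_into_chunks(text: str, size: int = 1000) -> list:
    """Slice the document text into fixed-size chunks, as the Code node does."""
    return [text[i : i + size] for i in range(0, len(text), size)]

chunks = split_into_chunks("x" * 2500)
# produces three chunks: 1000, 1000, and 500 characters
```

Fixed-size slicing can split words and sentences mid-chunk; overlapping windows or sentence-aware splitting are common refinements if retrieval quality suffers.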
by Harshil Agrawal
This workflow analyzes the sentiment of the feedback provided by users and sends it to a Mattermost channel.

- **Typeform Trigger node:** Whenever a user submits a response to the Typeform, the Typeform Trigger node triggers the workflow. The node returns the response that the user submitted in the form.
- **Google Cloud Natural Language node:** This node analyzes the sentiment of the response the user provided and gives a score.
- **IF node:** The IF node uses the score provided by the Google Cloud Natural Language node and checks if the score is negative (smaller than 0). If the score is negative, the result is True; otherwise, False.
- **Mattermost node:** If the score is negative, the IF node returns true and the true branch of the IF node is executed. We connect the Mattermost node to the true branch of the IF node. Whenever the score of the sentiment analysis is negative, the node gets executed and a message is posted to a channel in Mattermost.
- **NoOp:** This node is optional; its absence won't make a difference to the functioning of the workflow.

This workflow can be used by Product Managers to analyze product feedback. It can also be used by HR to analyze employee feedback. You can even use this workflow for sentiment analysis of Tweets: to do so, replace the Typeform Trigger node with the Twitter node.

**Note:**
- You will need a Trigger node or Start node to start the workflow.
- Instead of posting a message on Mattermost, you can save the results in a database, a Google Sheet, or Airtable. Replace the Mattermost node with (or add after the Mattermost node) the node of your choice to add the result to your database.

You can learn to build this workflow on the documentation page of the Google Cloud Natural Language node.
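The IF node's branching can be expressed as a tiny routing function. The score range (-1.0 negative to 1.0 positive) matches what Google Cloud Natural Language returns; the action labels are illustrative:

```python
def route_feedback(score: float) -> dict:
    """Mirror the workflow's IF node: scores below 0 take the Mattermost branch."""
    if score < 0:
        return {"branch": "true", "action": "post_to_mattermost"}
    return {"branch": "false", "action": "no_op"}

# A clearly negative piece of feedback goes to Mattermost
print(route_feedback(-0.6))  # {'branch': 'true', 'action': 'post_to_mattermost'}
```

If you later route results to a database or Airtable instead, only the `action` mapping changes; the threshold check stays the same.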