by Jason Krol
This is a simple webpage scraper that grabs today's newest 4K Blu-ray preorders as listed on the Blu-ray.com website. It is a scheduled workflow that can run every day and posts a formatted summary message of links to a Discord channel of your choice.

Minimal setup required: just create a webhook for the Discord channel you want to post to and provide it in the final step. The timezone in the date formatting step is set to East Coast (NYC) by default; feel free to change it. No API keys or special configuration are needed beyond your Discord webhook, and you can customize the formatting of the message that gets posted.

How it works:

- Format today's date to match the formatting used on the website
- Grab the HTML for the preorders page at www.blu-ray.com
- Filter only the hyperlinks for each Blu-ray on the page
- Further filter only those under an HTML header matching today's date
- Format the message to be sent to your Discord channel (in this case a simple list of hyperlinks for each title)
- Send to Discord! (see the sketch below)

Disclaimer: **This should be for personal use only.** The links go back to the blu-ray.com website, which is a good thing! Don't abuse this by hitting their site at some crazy automation frequency, and support blu-ray.com by using their affiliate links whenever you do want to preorder a title ;)

This is one of my first shared templates, so it may not be perfectly optimal, but it works for my needs and hopefully you'll find some use out of it! Note that Discord currently has a 2,000 character limit on webhook messages, so some messages may get truncated as a result.
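As a rough illustration of the final step, here is a minimal sketch of posting the summary to a Discord webhook while trimming it to the 2,000 character limit. The webhook URL and the example message are placeholders, not values from the workflow.

```typescript
// Minimal sketch: post a summary message to a Discord webhook,
// trimming it to Discord's 2,000 character limit.
// The webhook URL and summary content are placeholders.
const DISCORD_WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>";

async function postToDiscord(summary: string): Promise<void> {
  const content = summary.length > 2000 ? summary.slice(0, 1997) + "..." : summary;
  await fetch(DISCORD_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ content }),
  });
}

// Example usage: a simple list of hyperlinks, one title per line.
postToDiscord(
  "Today's 4K Blu-ray preorders:\n[Title A](https://www.blu-ray.com/...)\n[Title B](https://www.blu-ray.com/...)"
);
```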
by Blue Code
This workflow allows you to automate candidate retrieval and onboarding in your HR processes.

How it works

- It monitors a Gmail address for new emails with a PDF attachment
- It expects the PDF to be a candidate's CV, extracts the text using OCR, and then structures the data using ChatGPT (see the sketch below)
- Once the data is processed, it connects to Notion and adds (or updates) an entry in the specified database

How to use

- Configure your Gmail account and provide your ChatGPT API key
- Provide an API key for the OCR service in a variable named OCR_SPACE_API_KEY
- Connect your Notion account
- Once everything is configured, the workflow will monitor your inbox for new emails. Just send an email with a PDF attachment to the configured address

Requirements

- In addition to Gmail, ChatGPT, and Notion, the system uses a third-party OCR API (OCR.space). You'll need to create an account and obtain an API key
- You must map the fields returned by ChatGPT to the Notion database, or use the same field names we are using

Customising

- It should be easy to replace Notion with PostgreSQL or another database if needed
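For orientation, the OCR step boils down to a single call to the OCR.space parse endpoint. The following is a minimal sketch (not the workflow's exact node configuration) of sending a PDF by URL and reading back the extracted text; the response fields follow OCR.space's documented shape, but double-check them against the API docs for your account.

```typescript
// Minimal sketch: extract text from a PDF via the OCR.space API.
// The API key mirrors the OCR_SPACE_API_KEY variable used by the
// workflow; the PDF URL is a placeholder.
async function extractCvText(pdfUrl: string, apiKey: string): Promise<string> {
  const form = new FormData();
  form.append("apikey", apiKey);
  form.append("url", pdfUrl);
  form.append("filetype", "PDF");

  const response = await fetch("https://api.ocr.space/parse/image", {
    method: "POST",
    body: form,
  });
  const result = await response.json();

  // OCR.space returns one ParsedResults entry per page/region.
  return (result.ParsedResults ?? [])
    .map((page: { ParsedText: string }) => page.ParsedText)
    .join("\n");
}
```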
by andsync
Who is this template for?

This template is for learners, researchers, students and professionals who want to quickly capture the essence of a YouTube video.

Steps in the workflow:

- Gets the transcript from any YouTube video through Supadata
- Processes the Supadata result into one text (see the sketch below)
- Processes the text with AI (any LLM of your choice)

Final result: a summary accompanied by the most important lessons and interesting facts mentioned in the video. The workflow automatically creates a new Google Doc with this output, in a folder of your choice on your Google Drive. (If you want to convert the markdown text to real markup after the Google Doc is created: select all text (Ctrl-A or CMD-A), cut it (Ctrl-X or CMD-X), then go to Edit > Paste from Markdown.)

Setup

- Edit your Supadata credentials in the second node (you can start for free)
- Choose your favourite LLM for AI processing
- Edit your Google Drive credentials

How to adjust it to your needs

- If you want a different outcome, edit the prompt in "Proces transcript to summary template".
- The file name is a combination of "transcript " and the date and time. You can change this to whatever you need in the Google Drive node.
- Supadata offers more details and options (or even translation) when working with transcripts. Check the options here: https://supadata.ai/documentation/youtube/get-transcript
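The "process the result into one text" step is just a matter of joining the transcript segments Supadata returns. Here is a minimal sketch, assuming the response contains an array of segments each with a `text` field; check the Supadata documentation linked above for the exact response shape.

```typescript
// Minimal sketch: join Supadata transcript segments into one block of text.
// Assumes each segment exposes a `text` field; verify against the
// Supadata get-transcript docs.
interface TranscriptSegment {
  text: string;
}

function mergeTranscript(segments: TranscriptSegment[]): string {
  return segments
    .map((segment) => segment.text.trim())
    .filter((text) => text.length > 0)
    .join(" ");
}

// Example usage with two fake segments:
const fullText = mergeTranscript([
  { text: "Welcome to the video." },
  { text: "Today we cover n8n." },
]);
```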
by Rodrigue Gbadou
How it works

- **Behavioral analytics**: Real-time analysis of product usage and engagement signals
- **Churn prediction**: Predictive model identifying at-risk customers 15 days in advance
- **Smart upselling**: Personalized recommendations based on usage and profile
- **Retention campaigns**: Automated retention campaigns with dynamic offers

Set up steps

- **Product analytics**: Connect Mixpanel, Amplitude or proprietary analytics
- **Billing system**: Integrate Stripe, Chargebee or Recurly for billing data
- **Customer data**: Synchronize your CRM with complete customer history
- **Email/SMS platforms**: Configure SendGrid and Twilio for communications
- **Pricing rules**: Define your pricing matrix and promotional offers (see the sketch below)
- **ML pipeline**: Configure predictive model training

Key Features

- **Churn prediction**: At-risk customer identification with 85% accuracy
- **Smart upselling**: Personalized recommendations increasing ARPU by 35%
- **Proactive interventions**: Automated actions before a customer churns
- **Revenue optimization**: Price optimization based on willingness to pay
- **Dynamic segmentation**: Real-time customer group updates
- **A/B testing**: Automated testing of retention strategies
- **LTV maximization**: Customer lifetime value optimization
- **Dunning management**: Automated payment failure handling
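The template describes the pricing rules and churn model only at a high level. As an illustration of what a pricing matrix with dynamic offers might look like, here is a small hypothetical sketch; the thresholds, offer labels and discount values are invented for the example and are not taken from the workflow.

```typescript
// Hypothetical sketch: pick a retention offer from a pricing matrix
// based on a predicted churn risk score. Thresholds and discounts are
// illustrative only, not values from the template.
interface Customer {
  id: string;
  churnRisk: number; // 0..1 from the predictive model
  monthlyRevenue: number;
}

interface Offer {
  label: string;
  discountPercent: number;
}

const pricingMatrix: { minRisk: number; offer: Offer }[] = [
  { minRisk: 0.8, offer: { label: "3 months at 50% off", discountPercent: 50 } },
  { minRisk: 0.6, offer: { label: "1 month at 30% off", discountPercent: 30 } },
  { minRisk: 0.4, offer: { label: "Annual plan upgrade bonus", discountPercent: 15 } },
];

function selectOffer(customer: Customer): Offer | null {
  const match = pricingMatrix.find((rule) => customer.churnRisk >= rule.minRisk);
  return match ? match.offer : null; // below 0.4: no proactive offer
}
```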
by Arlin Perez
AI Research Assistant via Telegram (GPT-4o mini + DeepSeek R1 + SerpAPI)

Who's it for

This workflow is perfect for anyone who wants to receive AI-powered research summaries directly on Telegram. It is ideal for people who ask frequent product, tech, or decision-making questions and want up-to-date answers sourced from the web.

What it does

Users send a question via Telegram. An AI agent (DeepSeek R1) reformulates and understands the intent, while a second agent (GPT-4o mini) performs live research using SerpAPI. The most relevant answers, including links and images, are delivered back via Telegram.

How it works

- Telegram Trigger: starts when a user sends a message to your Telegram bot
- DeepSeek R1 Agent: understands, clarifies, or reformulates the user query
- Research AI Agent (GPT-4o mini + SerpAPI): searches the web and summarizes the best results
- Send Telegram Message: sends the response back to the same user (see the sketch below)

Requirements

- Telegram bot (via BotFather) with API token set in n8n credentials
- OpenAI account with API key and balance for GPT-4o mini
- SerpAPI account (100 free searches/month) with API key
- DeepSeek account with API key and balance

How to set up

- Create your Telegram bot using BotFather and connect it using the Telegram Trigger node
- Set up DeepSeek credentials and add a Chat Model AI Agent node using DeepSeek R1 to reformulate the user's question
- Set up OpenAI credentials and add a second ChatGPT AI Agent node using GPT-4o mini
- In the GPT-4o node, enable the SerpAPI Tool and add your SerpAPI API key
- Pass the reformulated question from DeepSeek to the GPT-4o agent for live search and summarization
- Format the response (text, links, optional images)
- Send the final reply to the user using the Telegram Send Message node
- Ensure your n8n instance is publicly accessible
- Test the workflow by sending a message to your Telegram bot
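For context on the final step, replying to the user ultimately comes down to Telegram's sendMessage method. The n8n Telegram node handles this for you; the sketch below only shows the underlying call, with the bot token and chat ID as placeholders.

```typescript
// Minimal sketch of the Telegram Bot API call behind the "Send Telegram
// Message" step. The bot token and chat ID are placeholders; the n8n
// Telegram node normally performs this request for you.
async function sendTelegramReply(botToken: string, chatId: number, text: string): Promise<void> {
  await fetch(`https://api.telegram.org/bot${botToken}/sendMessage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      chat_id: chatId,        // the same chat the question came from
      text,                   // the summarized research answer
      parse_mode: "Markdown", // lets links render nicely
    }),
  });
}
```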
by Jimleuk
This n8n template watches a Gmail inbox for support messages and creates an equivalent issue item in Linear.

How it works

- A scheduled trigger fetches recent Gmail messages from the inbox which collects support requests.
- These support requests are filtered to ensure they are only processed once, and their HTML body is converted to markdown for easier parsing.
- Each support request is then triaged via an AI Agent which adds appropriate labels, assesses priority and summarises a title and description of the original request (an illustrative shape of this output is sketched below).
- Finally, the AI-generated values are used to create an issue in Linear to be actioned.

How to use

- Ensure the messages fetched are solely support requests, otherwise you'll need to classify messages before processing them.
- Specify the labels and priorities to use in the system prompt of the AI agent.

Requirements

- Gmail for incoming support messages
- OpenAI for LLM
- Linear for issue management

Customising this workflow

- Consider automating more steps after the issue is created, such as attempting issue resolution or capacity planning.
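The template does not pin down the exact schema the AI agent returns, but a structured output along the following lines is what gets mapped onto the Linear issue. The field and label names here are illustrative assumptions, not the template's actual configuration.

```typescript
// Illustrative shape of the AI agent's triage output that feeds the
// Linear issue. Field names and label values are assumptions for the
// example, not taken from the template.
interface TriageResult {
  title: string;           // short summary of the request
  description: string;     // cleaned-up markdown body
  labels: string[];        // e.g. ["billing", "bug"], whatever your system prompt allows
  priority: 1 | 2 | 3 | 4; // Linear-style priority: 1 = urgent ... 4 = low
}

const example: TriageResult = {
  title: "Cannot download invoice from billing page",
  description: "Customer reports a 404 when clicking 'Download invoice'...",
  labels: ["billing"],
  priority: 2,
};
```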
by Niklas Hatje
This template shows how to use the Question and Answer tool to save costs in RAG use cases.

Who is this for?

This template is for everyone who wants to start giving knowledge to their Agents through RAG.

Requirements

- Have a PDF with custom knowledge that you want to provide to your agent.

Setup

No setup required. Just hit Execute Workflow, upload your knowledge document and then start chatting.

How to customize this to your needs

- Add custom instructions to your Agent by changing the prompts in it.
- Add a different way to load knowledge into your vector store, e.g. by looking at some Google Drive files or loading knowledge from a table.
- Describe your data properly in the Q&A tool.
- Exchange the Simple Vector Store nodes with your own vector store tools ready for production.
- Add a more sophisticated way to rank files found in the vector store (a toy example of such a re-ranking step is sketched below).

For more information read our docs on RAG in n8n.
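As a starting point for the last customisation, here is a toy sketch of re-ranking retrieved chunks by keyword overlap with the question before handing them to the agent. It is purely illustrative; in production you would more likely plug in a dedicated re-ranking model.

```typescript
// Toy re-ranking sketch: score retrieved chunks by keyword overlap with
// the question and keep the best ones. Purely illustrative; a dedicated
// re-ranking model would normally replace this.
interface RetrievedChunk {
  text: string;
  similarity: number; // score returned by the vector store
}

function rerank(question: string, chunks: RetrievedChunk[], topK = 4): RetrievedChunk[] {
  const keywords = question.toLowerCase().split(/\W+/).filter((w) => w.length > 3);
  const score = (chunk: RetrievedChunk) => {
    const text = chunk.text.toLowerCase();
    const overlap = keywords.filter((w) => text.includes(w)).length;
    return chunk.similarity + 0.1 * overlap; // blend vector score with keyword hits
  };
  return [...chunks].sort((a, b) => score(b) - score(a)).slice(0, topK);
}
```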
by GiovanniSegar
Video walkthrough

https://www.youtube.com/watch?v=OwIFK-r-NtQ

Summary of agent

This agent can write and rewrite its own rules, allowing you to mold its behavior. It receives rules from a database as system instructions and has tools to create, edit, or delete them. This is a great baseline for new agent builds. You can tell it things like "Next time, use present tense when talking about this subject" and it will use a tool to save this as a rule, then receive that instruction in all future iterations.

How to start using it

Option 1: With a Postgres database (e.g., Supabase)

Supabase Schema: Create a table (e.g., agent_rules) with the following columns (a sketch of the corresponding query is shown below):

- id: bigint (Primary Key, auto-incrementing)
- created_at: timestamp with time zone (Default: now())
- rule_text: text
- agent: text

Workflow Updates:

- Update the Postgres credentials in the "Get rules from database," "Insert rule into database," and "Execute query on rule database" nodes.
- Update the agent value (currently 'TestAgent') in the "Get rules from database" and "Insert rule into database" nodes if you want a different agent name.
- Update the Anthropic API credentials.

Option 2: With Google Sheets

Google Sheet Setup: Create a Google Sheet with columns for rule_text and agent.

Workflow Updates: Example Google Sheets nodes are included. You will need to:

- Connect your Google Sheets credentials.
- Select your Google Sheet (with rule_text and agent columns) in all relevant Google Sheets nodes.
- Replace the existing Postgres nodes ("Get rules from database", "Insert rule into database", "Execute query on rule database") with the configured Google Sheets nodes.
- Update the agent value (currently 'TestAgent') in the Google Sheets nodes if you want a different agent name.
- Update the Anthropic API credentials.

Agent Instructions: Update the agent's system message and remove the database schema section, as it is no longer relevant.
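For Option 1, the rule-fetching step reduces to a simple query against the agent_rules table described above. Here is a minimal sketch using the node-postgres (pg) client; in the workflow itself the Postgres node runs the equivalent query, so this is only for orientation.

```typescript
// Minimal sketch of the query behind "Get rules from database", using
// the node-postgres (pg) client. The connection string is a placeholder;
// the n8n Postgres node runs the equivalent query for you.
import { Client } from "pg";

async function getRules(agent: string): Promise<string[]> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    const result = await client.query(
      "SELECT rule_text FROM agent_rules WHERE agent = $1 ORDER BY created_at",
      [agent]
    );
    return result.rows.map((row) => row.rule_text);
  } finally {
    await client.end();
  }
}

// Example: rules for the default agent name used in the template.
getRules("TestAgent").then((rules) => console.log(rules.join("\n")));
```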
by sayamol thiramonpaphakul
This workflow automatically checks the status of your websites using the UptimeRobot API. If any site is down or unstable, it will:

- Generate a natural-language alert message using GPT-4o
- Push the message to a LINE group (with funny IT-style encouragement)
- Log all DOWN status entries into your Supabase database
- Wait 30 minutes before repeating

How It Works

- Schedule Trigger: runs on a fixed interval (every few minutes).
- UptimeRobot Node: fetches website monitor data.
- Code Node (Filter): filters only websites with status 8 (may be down) or 9 (down); see the sketch below.
- IF Node: if any site is down, proceed.
- LangChain LLM Node: formats the alert with a humorous message using GPT-4o.
- LINE Notify (HTTP Request): sends the alert to your LINE group.
- Loop Over Items: loops through all monitors.
- Filter Down (Status = 9): selects only "fully down" sites.
- Supabase Node: logs these into the synlora_uptime_down table.
- Wait Node: delays the next alert by 30 minutes to avoid spamming.

Setup Steps

Required:

- UptimeRobot API Key
- LINE Channel Access Token and Group ID
- OpenAI Key (GPT-4o Mini)
- Supabase Project & Table

Step-by-step:

1. Go to UptimeRobot, get an API key and ensure monitors are set up.
2. Create a Supabase table with fields: website, status, uptime_id.
3. Create a LINE Messaging API bot, join it to your group, and get its Access Token and Group ID (userId or groupId).
4. Add your OpenAI API Key for GPT-4o Mini (or switch to your preferred LLM).
5. Import the workflow JSON into n8n.
6. Set credentials in all necessary nodes.
7. Activate the workflow.
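For reference, the Code Node (Filter) step mentioned above boils down to keeping only monitors whose status is 8 or 9. Here is a minimal sketch of that logic; the monitor shape is simplified from UptimeRobot's getMonitors response, so adjust field access to the actual payload.

```typescript
// Minimal sketch of the filtering logic: keep only monitors that
// UptimeRobot reports as status 8 ("may be down") or 9 ("down").
// The Monitor shape is simplified; adjust to the real API payload.
interface Monitor {
  id: number;
  friendly_name: string;
  url: string;
  status: number; // 8 = may be down, 9 = down (per this workflow's convention)
}

function filterUnstable(monitors: Monitor[]): Monitor[] {
  return monitors.filter((m) => m.status === 8 || m.status === 9);
}

// Fully-down sites (status 9) are the ones logged to Supabase later on.
function filterDown(monitors: Monitor[]): Monitor[] {
  return monitors.filter((m) => m.status === 9);
}
```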
by Jimleuk
This n8n template watches an Outlook shared inbox for support messages and creates an equivalent issue item in JIRA.

How it works

- A scheduled trigger fetches recent Outlook messages from a shared inbox which collects support requests.
- These support requests are filtered to ensure they are only processed once, and their HTML body is converted to markdown for easier parsing.
- Each support request is then triaged via an AI Agent which adds appropriate labels, assesses priority and summarises a title and description of the original request.
- Finally, the AI-generated values are used to create an issue in JIRA to be actioned (a sketch of that final step is shown below).

How to use

- Ensure the messages fetched are solely support requests, otherwise you'll need to classify messages before processing them.
- Specify the labels and priorities to use in the system prompt of the AI agent.

Requirements

- Outlook for incoming support messages
- OpenAI for LLM
- JIRA for issue management

Customising this workflow

- Consider automating more steps after the issue is created, such as attempting issue resolution or capacity planning.
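The issue-creation step is handled by n8n's Jira node, but for orientation here is a minimal sketch of the equivalent REST call using the classic /rest/api/2/issue endpoint (the v3 API expects the description in Atlassian Document Format instead). The base URL, project key, issue type and credentials are placeholders, not values from the template.

```typescript
// Minimal sketch of creating the JIRA issue from the AI-generated values.
// Uses the classic v2 REST endpoint; base URL, project key and
// credentials are placeholders.
async function createJiraIssue(title: string, description: string, labels: string[], priority: string) {
  const auth = Buffer.from(`${process.env.JIRA_EMAIL}:${process.env.JIRA_API_TOKEN}`).toString("base64");
  await fetch("https://your-domain.atlassian.net/rest/api/2/issue", {
    method: "POST",
    headers: {
      Authorization: `Basic ${auth}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      fields: {
        project: { key: "SUP" },      // placeholder project key
        issuetype: { name: "Task" },  // placeholder issue type
        summary: title,
        description,
        labels,
        priority: { name: priority }, // e.g. "High", per your JIRA priority scheme
      },
    }),
  });
}
```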
by Parth Pansuriya
Automate Amazon searches to Telegram with AI-powered scraping

This workflow connects Amazon product lookups to Telegram using AI-enhanced scraping and automation. It lets users send a product name to a Telegram bot and instantly receive pricing, discounts, and product links, all pulled dynamically from Amazon.

Who's it for

- Amazon affiliates
- Telegram shopping groups
- Product reviewers & resellers
- Deal-focused communities
- Anyone wanting fast price checks without browsing

How it works

- Telegram Trigger receives messages from the user.
- An AI Classifier (via OpenRouter & LangChain) detects whether the user is asking for a product.
- If yes, it sends the query to Apify's Amazon Scraper to fetch real product listings (see the sketch below).
- The scraped data (price, discount, rating, link) is formatted into a Telegram response.
- If not a product query, an AI fallback response is sent instead.

Requirements

- Telegram Bot Token
- Apify API Token
- OpenRouter API Key (or compatible LLM provider)
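The Apify call in the third step can be reproduced outside n8n with Apify's run-sync endpoint. The actor ID and input fields below are placeholders (the template does not spell out the exact actor configuration), so treat this purely as an orientation sketch.

```typescript
// Orientation sketch: run an Apify Amazon scraper actor synchronously and
// read back its dataset items. The actor ID and input fields are
// placeholders; check the actor's documentation for its real input schema.
async function searchAmazon(query: string, apifyToken: string) {
  const actorId = "<amazon-scraper-actor-id>"; // placeholder
  const response = await fetch(
    `https://api.apify.com/v2/acts/${actorId}/run-sync-get-dataset-items?token=${apifyToken}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ search: query, maxItems: 5 }), // hypothetical input fields
    }
  );
  return response.json(); // array of product items (price, rating, link, ...)
}
```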
by Thomas Janssen
Build a 100% local RAG with n8n, Ollama and Qdrant. This agent uses a semantic database (Qdrant) to answer questions about PDF files.

Tutorial

Click here to view the YouTube Tutorial.

How it works

Build a chatbot that answers based on documents you provide it (Retrieval Augmented Generation). You can upload as many PDF files as you want to the Qdrant database. The chatbot will use its retrieval tool to fetch the relevant chunks and use them to answer questions (see the sketch below).

Installation

- Install n8n + Ollama + Qdrant using the Self-hosted AI starter kit
- Make sure to install Llama 3.2 and mxbai-embed-large as the embedding model

How to use it

- First run the "Data Ingestion" part and upload as many PDF files as you want
- Run the Chatbot and start asking questions about the documents you uploaded
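Under the hood, the retrieval tool amounts to two local HTTP calls: embed the question with Ollama, then search Qdrant with that vector. The following is a minimal sketch assuming the starter kit's default ports and a placeholder collection name; the n8n Ollama and Qdrant nodes do the same thing for you, and the payload field names depend on how the ingestion part stored the chunks.

```typescript
// Minimal sketch of the retrieval step: embed the question with Ollama's
// mxbai-embed-large model, then search Qdrant for the closest chunks.
// Default local ports are assumed; the collection name is a placeholder.
async function retrieveChunks(question: string): Promise<string[]> {
  // 1) Embed the question with Ollama.
  const embedRes = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "mxbai-embed-large", prompt: question }),
  });
  const { embedding } = await embedRes.json();

  // 2) Search the Qdrant collection with that vector.
  const searchRes = await fetch("http://localhost:6333/collections/pdf_chunks/points/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ vector: embedding, limit: 4, with_payload: true }),
  });
  const { result } = await searchRes.json();

  // Payload field names depend on how the ingestion part stored the chunks.
  return result.map((hit: { payload: { text?: string } }) => hit.payload.text ?? "");
}
```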