by AI/ML API | D1m7asis
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

n8n Workflow Template: AI‑Powered Mental Health Support Bot

Overview:
This template enables you to build a Telegram bot that delivers real‑time, empathetic mental health support. Incoming messages tagged with #vent, #insight, or #cope are routed to GPT‑4o via the AI/ML API, which returns tailored, compassionate responses.

How it works:
- Telegram Trigger listens for new chat messages or voice notes.
- Show Typing Indicator immediately signals "typing…" in the chat.
- Switch Node examines the text prefix and routes to one of four branches (Vent, Insight, Cope, or default).
- Set Prompt nodes build a JSON payload with a specific role‑play prompt for each branch (a sample payload is sketched below).
- AI/ML API node (model gpt-4o) generates the response.
- Telegram node sends the AI's answer back to the user.

Setup Steps:
- Connect your Telegram bot token in the Telegram credentials.
- Add your AI/ML API key (GPT‑4o) in n8n's credential settings.
- Activate the workflow and register your n8n instance's webhook URL with BotFather.
- Test by sending #vent I'm stressed, #insight Why do I feel…, or any tag in your Telegram chat.

This plug‑and‑play workflow brings AI‑driven emotional support directly into Telegram.
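Below is a minimal standalone sketch of what the Switch and Set Prompt branches amount to: pick a system prompt by tag prefix, then call the AI/ML API's OpenAI-compatible chat endpoint. The base URL, field names, and prompt wording here are illustrative assumptions, not the template's exact values.

```python
# Hypothetical sketch of the per-branch prompt routing the template implements.
import requests

SYSTEM_PROMPTS = {
    "#vent":    "You are a compassionate listener. Validate feelings; do not give advice.",
    "#insight": "You are a reflective counselor. Gently explore why the user may feel this way.",
    "#cope":    "You are a supportive coach. Suggest simple, safe coping techniques.",
}

def respond(message: str, api_key: str) -> str:
    # Route by tag prefix; fall through to a default branch like the Switch node.
    tag = next((t for t in SYSTEM_PROMPTS if message.startswith(t)), None)
    system = SYSTEM_PROMPTS.get(tag, "You are a kind, supportive companion.")
    r = requests.post(
        "https://api.aimlapi.com/v1/chat/completions",  # assumed OpenAI-compatible endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "gpt-4o",
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": message},
            ],
        },
        timeout=30,
    )
    return r.json()["choices"][0]["message"]["content"]
```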
by n8n Team
This workflow sends the contents of an email to a Notion database. The email must be labeled with a specific label for the workflow to trigger. The email subject becomes the title of the Notion page, a snippet of the email body becomes the page content, and the email link is added to the page as a property.

Prerequisites
- Notion account and Notion credentials.
- Google account and Google credentials.

How it works
- On scheduled intervals, find all emails with a specific label.
- For each email, check if the email already exists in the Notion database.
- If it does not exist, create a new page in the Notion database; otherwise do nothing.
- When the task in the Notion database is checked off, the label is removed from the email.

Setup
This workflow requires that you set up a Notion database, or use an existing one, with at least the following fields:
- Title (title)
- Thread ID (text)
- Email thread (URL)

Additionally, create a label in Gmail that will be used to trigger the workflow. In this workflow, the label is called "Notion".
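For reference, a minimal sketch of the Notion call the page-creation step performs, assuming the three database fields listed above; the token, database ID, and values are placeholders:

```python
# Create a Notion page with Title (title), Thread ID (text), Email thread (URL).
import requests

def create_notion_task(token: str, database_id: str, subject: str, thread_id: str, link: str):
    return requests.post(
        "https://api.notion.com/v1/pages",
        headers={
            "Authorization": f"Bearer {token}",
            "Notion-Version": "2022-06-28",
        },
        json={
            "parent": {"database_id": database_id},
            "properties": {
                "Title":        {"title": [{"text": {"content": subject}}]},
                "Thread ID":    {"rich_text": [{"text": {"content": thread_id}}]},
                "Email thread": {"url": link},
            },
        },
        timeout=30,
    ).json()
```

The workflow's duplicate check amounts to querying the database for a page whose Thread ID matches the Gmail thread before calling this.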
by Obsidi8n
This workflow converts any n8n workflow outputs into Markdown notes that are accessible in your Obsidian Vault through Google Drive synchronization.

Setup Requirements
- Create a designated folder in Google Drive (Desktop).
- Create a symbolic link between this folder and a new target folder in your Obsidian Vault (see the sketch below).
- Configure Google Drive n8n node settings.
- Send the output of any workflow to the trigger, and the notes will appear in your Vault folder.

Optional Features
You can use AI agents to:
- Write notes in your preferred format (e.g., Zettelkasten).
- Compose YAML front matter.
- Suggest tags.

Use Cases
- Convert RSS feed items to notes.
- Create notes from YouTube video transcripts.
- Transform tasks in Slack messages into Obsidian tasks.

(Requires setting up a corresponding workflow, e.g., RSS trigger, YouTube transcriber, or Slack bot.)
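A small sketch of the symbolic-link step, assuming example paths on macOS/Linux (on Windows, creating symlinks may require Developer Mode or elevated rights):

```python
import os

# Assumed example paths - adjust to your actual Drive sync folder and Vault.
drive_folder = os.path.expanduser("~/Google Drive/n8n-notes")
vault_target = os.path.expanduser("~/Obsidian/MyVault/n8n-notes")

# Google Drive (Desktop) syncs files into drive_folder; Obsidian sees them
# through the link inside the Vault. The link target must not already exist.
os.symlink(drive_folder, vault_target, target_is_directory=True)
```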
by n8n Team
This template quickly shows how to use RAG in n8n.

Who is this for?
This template is for everyone who wants to start giving knowledge to their Agents through RAG.

Requirements
- A PDF with custom knowledge that you want to provide to your agent.

Setup
No setup required. Just hit Execute Workflow, upload your knowledge document, and start chatting.

How to customize this to your needs
- Add custom instructions to your Agent by changing its prompts.
- Add a different way to load knowledge into your vector store, e.g. by reading Google Drive files or loading knowledge from a table.
- Exchange the Simple Vector Store nodes with your own vector store tools ready for production.
- Add a more sophisticated way to rank files found in the vector store.

For more information, read our docs on RAG in n8n.
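To make the moving parts concrete, here is a compact sketch of the RAG loop the template's nodes implement: chunk, embed, retrieve by cosine similarity, then answer from the retrieved context. The model names are illustrative assumptions; the template does this with vector store and Agent nodes rather than raw API calls.

```python
# Minimal RAG: embed chunks, rank by cosine similarity, answer from top-k context.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question, chunks, top_k=3):
    doc_vecs = embed(chunks)
    q_vec = embed([question])[0]
    ranked = sorted(zip(chunks, doc_vecs), key=lambda cv: cosine(q_vec, cv[1]), reverse=True)
    context = "\n---\n".join(c for c, _ in ranked[:top_k])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```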
by bangank36
This workflow backs up your Squarespace website's header and footer injections to GitHub.

How It Works
The Squarespace injections are fetched from the site URL you provide, then committed to your GitHub repository.

Setup Instructions
First, edit the HTTP Request node's URL and enter your Squarespace site URL there.

Next, to configure GitHub, update the Globals node with the following values:
- repo.owner – Your GitHub username
- repo.name – The name of the GitHub repository storing the backups
- repo.path – The folder path within the repository where backups are stored

For example, if your GitHub username is john-doe, your repository is named n8n-backups, and injections are stored in a squarespace-backup/ folder, you would set:
- repo.owner → john-doe
- repo.name → n8n-backups
- repo.path → squarespace-backup/

Each site's injections are added to a separate folder.

Required Credentials
- GitHub API – Access to your repository

Who Is This For?
This template is made for Squarespace users who want to back up their header and footer injections on a schedule or on demand.

Check out my other templates: 👉 My n8n Templates
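For the curious, a rough sketch of the commit the workflow's GitHub nodes perform, via the repository contents API; owner, repo, and path mirror the example values above:

```python
# Commit one injection file to GitHub via PUT /repos/{owner}/{repo}/contents/{path}.
import base64
import requests

def backup_injection(token: str, html: str, filename: str):
    owner, repo, folder = "john-doe", "n8n-backups", "squarespace-backup"
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/{folder}/{filename}"
    headers = {"Authorization": f"Bearer {token}", "Accept": "application/vnd.github+json"}

    # If the file already exists, GitHub requires its current blob SHA to update it.
    existing = requests.get(url, headers=headers, timeout=30)
    payload = {
        "message": f"Backup {filename}",
        "content": base64.b64encode(html.encode()).decode(),
    }
    if existing.status_code == 200:
        payload["sha"] = existing.json()["sha"]
    return requests.put(url, headers=headers, json=payload, timeout=30).json()
```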
by Jason Krol
This is a simple webpage scraper that grabs today's newest 4K Blu-ray preorders as listed on the Blu-ray.com website. It is a scheduled workflow that can run every day and will post a formatted summary message of links to a Discord channel of your choice.

Minimal setup required:
- Just create a webhook for the Discord channel you want to post to and provide it in the final step.
- The timezone format step is set to East Coast (NYC) by default; feel free to change it.
- No API keys or any special configuration needed (beyond your Discord webhook).
- Feel free to customize the formatting of the message that gets posted 👍

How it works:
- First, format today's date to match the formatting used on the website.
- Grab the HTML for the preorders page at www.blu-ray.com.
- Filter only the hyperlinks for each Blu-ray on the page.
- Then further filter only those with an HTML header matching today's date.
- Format the message to be sent to your Discord channel (in this case, a simple list of hyperlinks for each title).
- Send to Discord!

Disclaimer: **This should be for personal use only.** The links go back to the blu-ray.com website, which is a good thing! Don't abuse this by slamming their site with some crazy level of automation frequency. Support the blu-ray.com website by using their affiliate links whenever you do want to preorder a title ;)

This is one of my first shared templates, so it may not be super optimal or perfect, but it works for my needs and hopefully you'll find some use out of it!

Note: Discord currently has a 2,000-character limit on webhook messages, so some messages may get truncated.
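If you want to see the final step in isolation, here is a bare-bones sketch of the Discord post, including a guard for the 2,000-character limit; the webhook URL and links are placeholders:

```python
# Post a list of links to a Discord channel webhook, truncating at Discord's limit.
import requests

def post_to_discord(webhook_url: str, links: list[str]):
    content = "**Today's new 4K preorders:**\n" + "\n".join(links)
    if len(content) > 2000:
        content = content[:1997] + "..."  # truncate rather than let the request fail
    requests.post(webhook_url, json={"content": content}, timeout=30).raise_for_status()

post_to_discord(
    "https://discord.com/api/webhooks/<id>/<token>",  # your channel's webhook
    ["https://www.blu-ray.com/movies/example-1/", "https://www.blu-ray.com/movies/example-2/"],
)
```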
by Vadym Nahornyi
This workflow automatically transcribes audio files, translates the content between languages, and generates natural-sounding speech from the translated text - all in one seamless process.

Who's it for
Content creators, educators, and businesses needing to make their audio content accessible across language barriers. Perfect for translating podcasts, voice messages, lectures, or any audio content while preserving the spoken format.

How it works
The workflow receives an audio file through a webhook, transcribes it using OpenAI's Whisper, translates and structures the text with GPT-4, generates new audio in the target language, and stores it in S3 for easy access. The entire process takes seconds and returns both the transcribed/translated text and a URL to the translated audio file.

How to set up
1. **Configure OpenAI credentials** - Add your OpenAI API key for Whisper transcription and GPT-4 translation
2. **Set up AWS S3** - Create a bucket with public read permissions for audio storage
3. **Update configuration** - Replace 'YOUR-BUCKET-NAME' with your actual S3 bucket name
4. **Activate webhook** - Deploy and copy your webhook URL for receiving audio files

Send a POST request with:
- Binary audio file (as 'audiofile')
- Languages parameter (e.g., "English, Spanish")

Requirements
- OpenAI API account with access to Whisper and GPT-4
- AWS account with S3 bucket configured
- Basic understanding of webhooks and API requests

How to customize
- **Add language detection** - Automatically detect source language if not specified
- **Customize voice settings** - Adjust speech speed, pitch, or select different voices
- **Add file validation** - Implement size limits and format checks
- **Enhance security** - Add webhook authentication and rate limiting
- **Extend functionality** - Add subtitle generation or multiple output formats
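A minimal client-side sketch of calling the webhook, assuming the field names described above (an 'audiofile' binary part plus a 'languages' form field); the URL and file are placeholders:

```python
# Send an audio file and language pair to the translation webhook.
import requests

WEBHOOK_URL = "https://your-n8n-instance/webhook/translate-audio"  # placeholder

with open("lecture.mp3", "rb") as f:
    resp = requests.post(
        WEBHOOK_URL,
        files={"audiofile": ("lecture.mp3", f, "audio/mpeg")},
        data={"languages": "English, Spanish"},  # source, target
        timeout=120,
    )

print(resp.json())  # expected: translated text plus an S3 URL to the new audio
```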
by Blue Code
It allows you to automate candidate retrieval and onboarding in your HR processes.

How it works
- It monitors a Gmail address for new emails with a PDF attachment.
- It expects the PDF to be a candidate's CV, extracts the text using OCR, and then structures the data using ChatGPT.
- Once the data is processed, it connects to Notion and adds (or updates) an entry in the specified database.

How to use
- Configure your Gmail account and provide your ChatGPT API key.
- Provide an API key for the OCR service in a variable named OCR_SPACE_API_KEY.
- Connect your Notion account.
- Once everything is configured, the workflow will monitor your inbox for new emails. Just send an email with a PDF attachment to the configured address.

Requirements
In addition to Gmail, ChatGPT, and Notion, the system uses a third-party OCR API (OCR.space). You'll need to create an account and obtain an API key. You must also map the fields returned by ChatGPT to the Notion database, or use the same field names we are using.

Customising
It should be easy to replace Notion with PostgreSQL or another database if needed.
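For reference, a rough sketch of the OCR step as a direct call to OCR.space's parse endpoint; the response handling is simplified, so check the OCR.space docs for the full schema:

```python
# Extract text from a CV PDF via OCR.space, keyed by OCR_SPACE_API_KEY.
import os
import requests

def extract_cv_text(pdf_path: str) -> str:
    with open(pdf_path, "rb") as f:
        resp = requests.post(
            "https://api.ocr.space/parse/image",
            headers={"apikey": os.environ["OCR_SPACE_API_KEY"]},
            files={"file": (os.path.basename(pdf_path), f, "application/pdf")},
            data={"isOverlayRequired": "false"},
            timeout=120,
        )
    results = resp.json().get("ParsedResults", [])
    return "\n".join(page["ParsedText"] for page in results)
```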
by andsync
Who is this template for?
This template is for learners, researchers, students and professionals who want to quickly capture the essence of a YouTube video.

Steps in the workflow:
- Gets the transcript from any YouTube video through Supadata.
- Processes the result from Supadata into one text.
- Processes the text with AI (any LLM of your choice).
- Final result: produces a summary accompanied by the most important lessons and interesting facts mentioned in the video. The workflow automatically creates a new Google Doc with this output, in a folder of your choice on your Google Drive.

(If you want to convert the markdown text to real markup after the Google Doc is created: just select all text (Ctrl-A or Cmd-A), cut the text (Ctrl-X or Cmd-X), and then go to Edit > Paste from Markdown.)

Setup
- Edit your Supadata credentials in the second node (you can start for free).
- Choose your favourite LLM for AI processing.
- Edit your Google Drive credentials.

How to adjust it to your needs
- If you want the outcome to be different, edit the prompt in "Proces transcript to summary template".
- The file name is a combination of 'transcript' and the date and time. You can change this to whatever you need in the Google Drive node.
- Supadata offers more details and options (or even translation) when working with transcripts. Check the options here: https://supadata.ai/documentation/youtube/get-transcript
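As a rough illustration, the transcript fetch might look like the following outside n8n; the exact parameters and response shape should be verified against the Supadata documentation linked above:

```python
# Fetch a YouTube transcript from Supadata (endpoint and fields are assumptions).
import requests

def fetch_transcript(api_key: str, video_url: str) -> str:
    resp = requests.get(
        "https://api.supadata.ai/v1/youtube/transcript",
        headers={"x-api-key": api_key},
        params={"url": video_url, "text": "true"},  # assumed: plain-text response
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("content", "")
```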
by Roni Bandini
This workflow receives plain-English instructions from a retro console via a webhook. Using an AI agent, it can combine multiple tools to read general RSS news headlines, stock market updates, emails, and calendar events, search X, send Telegram messages, and run Linux commands. The idea is to avoid using smartphones or regular laptops in the morning, and instead use a retro console installed on an old notebook or netbook. You will need to copy a Python script onto the notebook, configure the webhook URL, and set up all the required credentials.

Steps:
- Set up your Gemini API key, and your Google Gmail and Calendar credentials, from console.cloud.google.com
- Set up X credentials, the RSS URL, etc.
- Obtain the webhook URL and paste it into the Python code to be executed on the Linux machine
- Run the Python script with python3 console.py

Note: if you ask for a Linux command, the command will not only be returned but also executed.
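A minimal sketch of what console.py could look like: read a line, POST it to the webhook, print the agent's reply. The URL and the 'message' field name are assumptions; match them to your webhook node's configuration.

```python
# console.py - retro console client loop (field names are placeholders).
import requests

WEBHOOK_URL = "https://your-n8n-instance/webhook/retro-console"  # paste yours here

while True:
    try:
        instruction = input("> ")
    except (EOFError, KeyboardInterrupt):
        break
    if not instruction.strip():
        continue
    resp = requests.post(WEBHOOK_URL, json={"message": instruction}, timeout=60)
    print(resp.text)  # the agent's answer, rendered on the console
```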
by Henry
Who is this for?
This workflow is ideal for social media managers, content creators, marketing teams, and automation enthusiasts looking to streamline their Instagram Reels posting from Google Drive using n8n, Google Sheets, and Cloudinary.

What problem is this workflow solving? / Use case
Manually downloading video files, uploading them to third-party platforms, and posting to Instagram Reels is time-consuming. This workflow automates the whole process, ensuring timely, consistent content delivery and reducing manual errors.

What this workflow does
- Automatically fetches scheduled Reel content from Google Sheets (Sample link)
- Downloads video files from Google Drive folders
- Uploads videos to Cloudinary for hosting
- Posts the videos as Instagram Reels with custom captions
- Updates the Google Sheet to mark content as posted

Setup
- Prepare a Google Drive folder set to public sharing for your videos
- Create a Cloudinary account and configure upload presets
- Connect an Instagram Business account (linked to a Facebook Page)
- Set up a Google Sheet with video post details: Video Name, Type, Caption, Status
- Configure the workflow schedule in n8n

How to customize this workflow to your needs
- Adjust the schedule for desired posting frequency
- Add fields to your sheet for custom tags or content variations
- Change the Cloudinary or Instagram settings for different media types
- Integrate additional steps for error handling or approval workflows
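Under the hood, posting a Reel is a two-step Instagram Graph API call: create a REELS media container from the hosted video URL, then publish it. A condensed sketch, with the API version and the container-readiness polling simplified as assumptions:

```python
# Publish a Reel from a hosted (e.g., Cloudinary) video URL via the Graph API.
import time
import requests

GRAPH = "https://graph.facebook.com/v19.0"  # assumed API version

def post_reel(ig_user_id: str, access_token: str, video_url: str, caption: str) -> str:
    # Step 1: create the media container.
    container = requests.post(
        f"{GRAPH}/{ig_user_id}/media",
        data={"media_type": "REELS", "video_url": video_url,
              "caption": caption, "access_token": access_token},
        timeout=60,
    ).json()["id"]

    time.sleep(30)  # real code should poll the container's status_code field instead

    # Step 2: publish the container.
    published = requests.post(
        f"{GRAPH}/{ig_user_id}/media_publish",
        data={"creation_id": container, "access_token": access_token},
        timeout=60,
    ).json()
    return published["id"]
```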
by Jimleuk
This n8n template demonstrates how to calculate the evaluation metric "RAG document groundedness", which in this scenario measures an agent's ability to provide or reference only information included in the retrieved vector store documents.

The scoring approach is adapted from https://cloud.google.com/vertex-ai/generative-ai/docs/models/metrics-templates#pointwise_groundedness

How it works
This evaluation works best for an agent that requires document retrieval from a vector store or similar source. For our scoring, we collect the agent's response and the documents retrieved, and use an LLM to assess whether the former is based on the latter. A key factor is to look out for information in the response which is not mentioned in the documents. A high score indicates LLM adherence and alignment, whereas a low score could signal an inadequate prompt or model hallucination.

Requirements
- n8n version 1.94+
- Check out this Google Sheet for sample data: https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing
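A hedged LLM-as-judge sketch of the groundedness check described above, loosely following the linked pointwise-groundedness template; the rubric wording, 0-1 scale, and model are assumptions, not the template's exact prompt:

```python
# Score how well a RAG response is grounded in its retrieved documents.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def groundedness_score(response: str, documents: list[str]) -> float:
    context = "\n---\n".join(documents)
    judgement = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "You are an evaluator. Score from 0 to 1 how well the RESPONSE is "
                "grounded in the DOCUMENTS. Penalize any claim not supported by them. "
                "Reply with the number only."
            )},
            {"role": "user", "content": f"DOCUMENTS:\n{context}\n\nRESPONSE:\n{response}"},
        ],
    )
    return float(judgement.choices[0].message.content.strip())
```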