by Mutasem
Use case
Automatically archive emails in your Gmail inbox from the last day, unless they have been starred. I've been using this with my personal and work emails to stick to an Inbox Zero strategy without having to click or swipe a lot.

Setup
Add your Gmail credentials.

How to adjust this template
Set your own schedule for when to run this. Otherwise, it should be good to go.
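For reference, here is a minimal sketch (not part of the template itself) of the Gmail operation the workflow performs: find inbox mail older than one day that is not starred, then archive it by removing the INBOX label. It assumes the google-api-python-client library and already-configured OAuth credentials.

```python
from googleapiclient.discovery import build

def archive_old_unstarred(creds):
    service = build("gmail", "v1", credentials=creds)
    # Same filter the workflow applies: in the inbox, older than 1 day, not starred.
    query = "in:inbox older_than:1d -is:starred"
    result = service.users().messages().list(userId="me", q=query).execute()
    ids = [m["id"] for m in result.get("messages", [])]
    if ids:
        # Archiving in Gmail simply removes the INBOX label.
        service.users().messages().batchModify(
            userId="me", body={"ids": ids, "removeLabelIds": ["INBOX"]}
        ).execute()
    return len(ids)
```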
by David Roberts
Overview
This workflow takes some French text and translates it into spoken audio. It then transcribes that audio back into text, translates it into English, and generates an audio file of the English text. To do so, it uses ElevenLabs (which has a free tier) and OpenAI.

Setup
These steps should only take a few minutes:
1. In ElevenLabs, add a voice to your voice lab and copy its ID. Add it to the 'Set voice ID' node.
2. Get your ElevenLabs API key (click your name in the bottom-left of ElevenLabs and choose 'profile').
3. In the 'Generate French audio' node, create a new header auth credential. Set the name to xi-api-key and the value to your API key.
4. In the 'credential' field of the 'Transcribe audio' node, create a new OpenAI credential with your OpenAI API key.
5. Run the workflow by clicking the orange button at the bottom of the canvas.
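As a rough illustration of the request the 'Generate French audio' node sends, the sketch below calls the ElevenLabs text-to-speech endpoint with the xi-api-key header auth described above. The voice ID and model are placeholders, and the Python requests library stands in for the n8n HTTP node.

```python
import requests

ELEVENLABS_API_KEY = "your-elevenlabs-api-key"   # from your ElevenLabs profile
VOICE_ID = "your-voice-id"                       # copied from your voice lab

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": ELEVENLABS_API_KEY, "Content-Type": "application/json"},
    json={"text": "Bonjour, comment allez-vous ?", "model_id": "eleven_multilingual_v2"},
)
response.raise_for_status()

# The API returns audio bytes (MP3 by default); save them to a file.
with open("french_audio.mp3", "wb") as f:
    f.write(response.content)
```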
by Derek Cheung
Use case
This workflow enables a Telegram bot that can:
- Accept speech input in one of 55 supported languages
- Automatically detect the language spoken and translate the speech to another language
- Respond back with the translated speech output

This allows users to communicate across language barriers by simply speaking to the bot, which handles the translation seamlessly.

How does it work?
Translation
In the translation step, the workflow converts the user's speech input to text and detects the language of the input text. If it's English, it translates to French; if it's French, it translates to English. To change the default translation languages, update the prompt in the AI node (a minimal sketch of this prompt logic appears below).

Output
In the output step, we provide the translated text back to the user and generate speech output in the translated language.

Setup steps
1. Obtain a Telegram API token: start a chat with the BotFather, enter /newbot, and reply with your new bot's display name and username. Copy the bot token and use it in the Telegram node credentials in n8n.
2. Update the Settings node to customize the desired languages.
3. Activate the flow.

Full list of supported languages:
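Here is a hedged sketch of the detect-and-translate prompt logic described in the translation step, using the openai Python client. The template itself uses an n8n AI node; the model name and prompt wording here are illustrative assumptions, and changing the two languages in the prompt mirrors editing the prompt in the AI node.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate(transcribed_text: str) -> str:
    prompt = (
        "Detect the language of the user's text. "
        "If it is English, translate it to French. "
        "If it is French, translate it to English. "
        "Reply with the translation only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": transcribed_text},
        ],
    )
    return response.choices[0].message.content

print(translate("Bonjour, comment ça va ?"))
```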
by Manu
In Grist, when I mark a row as confirmed (via a toggle), a webhook notifies n8n and this workflow creates derived records in the destination table.

Design decisions
Confirmation-based: the source table has a boolean column "Confirmed" that triggers the transfer. This way a manual check is involved, and triggering the workflow is a conscious step.
Runs once: if the destination table already contains an entry, we do not re-create or update it (as it might have already been changed manually). A sketch of this check appears below.

Setup
1. Create a boolean column Confirmed in the source table.
2. Add a webhook in the Grist settings.
3. Add Grist API credentials in n8n.
4. Set the document ID and source table ID/name in the 'get existing' node.
5. Set the document ID, the destination table ID/name, and the columns and values you want in the Create Row node.
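For illustration only, the sketch below shows the "runs once" guard and the record creation against the Grist REST API's /records endpoints; the template does the same thing with the n8n Grist node. The server URL, IDs, and the SourceRef column used to link back to the source row are assumptions about your document.

```python
import requests

GRIST_SERVER = "https://docs.getgrist.com"
API_KEY = "your-grist-api-key"
DOC_ID = "your-doc-id"
DEST_TABLE = "Destination"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def create_if_missing(source_row_id: int, fields: dict) -> bool:
    # 'get existing': list current destination records and skip if one already
    # references this source row (here via an assumed SourceRef column).
    url = f"{GRIST_SERVER}/api/docs/{DOC_ID}/tables/{DEST_TABLE}/records"
    existing = requests.get(url, headers=HEADERS).json().get("records", [])
    if any(r["fields"].get("SourceRef") == source_row_id for r in existing):
        return False  # runs once: never re-create or update
    payload = {"records": [{"fields": {"SourceRef": source_row_id, **fields}}]}
    requests.post(url, headers=HEADERS, json=payload).raise_for_status()
    return True
```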
by Victor Gonzalez
Who is this for?
This template is designed for businesses and organizations that use Mautic for email marketing and want to automate the process of removing contacts from specific segments when they receive an unsubscribe request via email.

What problem is this workflow solving? / use case
Many email recipients, especially those who are less tech-savvy, may not follow the standard unsubscribe link provided in emails. Instead, in Gmail for example, they click the "Unsubscribe" button in the Gmail web interface, which in turn sends an email with a consistent format. These emails contain the word "unsubscribe" in the 'To' field, using the following structure: hello+unsubscribe_6629823aa976f053068426@example.com
This workflow automates the process of identifying such unsubscribe emails and removing the contact from the relevant Mautic segments, ensuring compliance with unsubscribe requests and maintaining a clean mailing list.

What this workflow does
1. Monitors a Gmail account for incoming emails.
2. Identifies unsubscribe emails based on specific patterns in the "To" field (e.g., containing the word "unsubscribe"); a sketch of this matching is shown below.
3. Retrieves the contact's ID from Mautic based on the email address.
4. Removes the contact from the specified "newsletter" segment in Mautic.
5. Adds the contact to the "unsubscribed" segment in Mautic.
6. Sends a confirmation email to the contact, acknowledging their unsubscribe request.

Setup
1. Configure your email address and unsubscribe message in the "Edit Fields" node.
2. Set your credentials in the Gmail trigger and in the Mautic nodes.
3. Set the segments for "newsletter" and "unsubscribed" in the Mautic nodes.
4. Make sure your n8n installation has a public endpoint for the Gmail trigger to work correctly.
5. Deploy the workflow.

How to customize this workflow to your needs
- Adjust the conditions for identifying unsubscribe emails based on your specific requirements.
- Modify the segments or actions taken in Mautic according to your desired behavior.
- Customize the confirmation email message and sender details.

Note: this workflow assumes a consistent structure for unsubscribe emails, where the "To" field contains the word "unsubscribe" using the "+" sign. If your email provider follows a different convention, adjust the conditions in the "Is automated unsubscribe?" node accordingly.
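The snippet below is a hedged sketch of the detection step ("Is automated unsubscribe?"): it checks whether the 'To' address follows the plus-addressed unsubscribe pattern described above. The regular expression is only an illustration; adapt it to your provider's convention.

```python
import re

UNSUBSCRIBE_RE = re.compile(
    r"^(?P<user>[^+@]+)\+unsubscribe_(?P<token>[0-9a-f]+)@(?P<domain>.+)$",
    re.IGNORECASE,
)

def is_automated_unsubscribe(to_address: str) -> bool:
    return UNSUBSCRIBE_RE.match(to_address.strip()) is not None

print(is_automated_unsubscribe("hello+unsubscribe_6629823aa976f053068426@example.com"))  # True
print(is_automated_unsubscribe("hello@example.com"))                                     # False
```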
by bartv
How it works
Did you ever miss any errors in your workflow executions? I did! And I usually only realised a few days or weeks later. This template attaches a default error workflow to all your active workflows. From now on, you'll receive a notification whenever a workflow errors, and you'll have peace of mind again. It runs every night at midnight so you never have to think of this again. Of course, you can also run it manually.

Steps to set up
1. Update the Gmail node with your own email address, or replace it with any other notification mechanism. You can also use Slack, Discord, Telegram or text messages.
2. Activate the workflow.
3. Relax.

Caveats
I did not add any rate limiting, so if you have a workflow that runs very frequently and it errors... well, let's say your mailbox will not be a nice place anymore.

Ideas for improvement?
If you have any suggestions for improvement, feel free to reach out to me at bart@n8n.io. Enjoy!
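As a rough sketch of what this automation does under the hood (not the template itself), the code below uses the n8n public API to point every active workflow's error workflow at a single default one. The endpoint paths and the update payload are assumptions and may need adjusting for your n8n version, for example trimming read-only fields before the update.

```python
import requests

N8N_URL = "https://your-n8n-instance"
HEADERS = {"X-N8N-API-KEY": "your-api-key"}
ERROR_WORKFLOW_ID = "123"  # the workflow that sends the notification

workflows = requests.get(
    f"{N8N_URL}/api/v1/workflows", params={"active": "true"}, headers=HEADERS
).json()["data"]

for wf in workflows:
    settings = wf.get("settings") or {}
    if settings.get("errorWorkflow") == ERROR_WORKFLOW_ID:
        continue  # already attached
    settings["errorWorkflow"] = ERROR_WORKFLOW_ID
    body = {
        "name": wf["name"],
        "nodes": wf["nodes"],
        "connections": wf["connections"],
        "settings": settings,
    }
    requests.put(
        f"{N8N_URL}/api/v1/workflows/{wf['id']}", headers=HEADERS, json=body
    ).raise_for_status()
```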
by Jimleuk
This n8n template demonstrates an approach to image embeddings for the purpose of building a quick image contextual search. Use-cases could include a personal photo library, product recommendations or searching through video footage.

How it works
- A photo is imported into the workflow via Google Drive.
- The photo is processed by the Edit Image node to extract colour information. This information forms part of our semantic metadata used to identify the image.
- The photo is also processed by a vision-capable model which analyses the image and returns a short description with semantic keywords.
- Both pieces of information about the image are combined with the metadata of the image to form a document describing the image (a sketch of this step follows below).
- This document is then inserted into our vector store as a text embedding which is associated with our image.
- From here, the user can query the vector store as they would any document, and the relevant image references and/or links should be returned.

Requirements
- Google account to download image files from Google Drive.
- OpenAI account for the vision-capable AI and embedding models.

Customise this workflow
Text summarisation is just one of many techniques to generate image embeddings. If the results are unsatisfactory, there are dedicated image embedding models such as Google's Vertex AI multimodal embeddings.
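The sketch below illustrates the "combine and embed" step described above: merge the colour metadata, the vision model's description, and the file metadata into one text document, then embed it. The field names and the embedding model are assumptions; the template builds the same document inside n8n before handing it to the vector store.

```python
from openai import OpenAI

client = OpenAI()

def build_image_document(file_meta: dict, dominant_colours: list[str], vision_description: str) -> str:
    return (
        f"Filename: {file_meta['name']}\n"
        f"Created: {file_meta.get('createdTime', 'unknown')}\n"
        f"Dominant colours: {', '.join(dominant_colours)}\n"
        f"Description and keywords: {vision_description}"
    )

doc = build_image_document(
    {"name": "beach_sunset.jpg", "createdTime": "2024-06-01"},
    ["orange", "deep blue"],
    "A sunset over a calm sea; keywords: beach, sunset, ocean, dusk",
)

# The embedding vector is what gets stored in the vector store, keyed to the image.
embedding = client.embeddings.create(model="text-embedding-3-small", input=doc).data[0].embedding
print(len(embedding))
```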
by Jimleuk
This n8n workflow is a fun way to query and search over your credentials on your n8n instance.

Good to know
Your credentials should remain safe, as this workflow does not decrypt or use any decrypted data.

Example usage
"Which workflows are using Slack and Google Calendar?"
"Which workflows have AI in their name but are not using OpenAI?"

How it works
- Using the n8n API, it fetches all workflow data on the instance. Workflow data contains references to the credentials used, so these are extracted (see the sketch below).
- With some necessary reformatting, the workflows and their credentials metadata are stored in a SQLite database.
- Next, an AI agent is used with a custom SQL tool that reads the SQLite database created in the previous step. The AI agent is instructed to perform SQL queries against our workflow-credentials table when asked about credentials by the user.

Requirements
You'll need an n8n API key. Please note that only workflows are scoped to your API key.

Customising the workflow
Add extra table fields to the SQLite database to answer even more complex queries, such as workflow status to differentiate between active and inactive workflows.
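Here is a minimal sketch of the extract-and-store step: pull workflows from the n8n API, collect each node's credential references, and write them to a SQLite table that the SQL tool can query. The endpoint path and table layout are assumptions made for illustration.

```python
import requests
import sqlite3

N8N_URL = "https://your-n8n-instance"
HEADERS = {"X-N8N-API-KEY": "your-api-key"}

workflows = requests.get(f"{N8N_URL}/api/v1/workflows", headers=HEADERS).json()["data"]

conn = sqlite3.connect("workflow_credentials.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS workflow_credentials ("
    "workflow_id TEXT, workflow_name TEXT, node_name TEXT, credential_type TEXT, credential_name TEXT)"
)

for wf in workflows:
    for node in wf.get("nodes", []):
        # Nodes that use credentials carry a 'credentials' map of type -> {id, name}.
        for cred_type, cred in (node.get("credentials") or {}).items():
            conn.execute(
                "INSERT INTO workflow_credentials VALUES (?, ?, ?, ?, ?)",
                (str(wf["id"]), wf["name"], node["name"], cred_type, cred.get("name")),
            )

conn.commit()
# The AI agent's SQL tool can now answer questions like:
# SELECT DISTINCT workflow_name FROM workflow_credentials WHERE credential_type LIKE '%slack%';
```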
by Ria
This is a demo workflow to showcase how to use Supabase to embed a document, retrieve information from the vector store via chat, and update the database.

Setup steps
- Set your credentials for Supabase.
- Set your credentials for an AI model of your choice.
- Set credentials for any service you want to use to upload documents.
- Please follow the guidelines in the workflow itself (sticky notes).

Feedback & questions
If you have any questions or feedback about this workflow, feel free to get in touch at ria@n8n.io
by Ayoub
Who is this for?
This workflow is designed for businesses or developers looking to integrate voice-based chat applications with dynamic responses and conversational memory.

What problem does this solve?
It automates AI-powered voice conversations, maintaining context between sessions and converting speech to text and text to speech.

What this workflow does
The workflow receives audio input, transcribes it using OpenAI, and processes the conversation using the Google Gemini Chat Model (you can use the OpenAI Chat Model instead). Responses are converted back to speech using ElevenLabs.

Prerequisites
You'll need API keys for:
- OpenAI (you can obtain it from the OpenAI website)
- ElevenLabs (you can obtain it from their website)
- Google Gemini (you can obtain it from Google AI Studio)

Setup
1. Configure your API keys.
2. Ensure that the value (voice_message) set in the "Path" parameter of the Webhook node is also used as the name of the parameter that carries the voice message in your HTTP POST request (see the example request below).
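Below is a hedged example of calling the webhook from the setup note above: the audio file is sent as a multipart form field whose name matches the expected parameter, voice_message. The webhook URL is a placeholder for your n8n instance.

```python
import requests

WEBHOOK_URL = "https://your-n8n-instance/webhook/voice_message"

with open("question.mp3", "rb") as audio:
    response = requests.post(
        WEBHOOK_URL,
        files={"voice_message": ("question.mp3", audio, "audio/mpeg")},
    )

# The workflow replies with the ElevenLabs-generated speech for the AI's answer.
with open("answer.mp3", "wb") as out:
    out.write(response.content)
```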
by Paul Mikulskis
This template is based on the following template. Thank you for the groundwork, Matheus.

How it works
- Store your snippets of text in a Notion table. Each snippet should have an image associated with it (copied and pasted into the text).
- Connect to your table via a Notion "integration", through which n8n can then query your prepared posts (a sketch of this query is shown below).
- The text is fed through an OpenAI assistant to boost engagement via formatting.
- The re-formatted text, along with the image pulled from the Notion snippet, is combined into a post for your LinkedIn.
- The row in the original Notion table from step 1 containing this post is set to a status of "Done".

Set up steps
- You will need to create a Notion "integration", which will yield a "secret key" that you enter into your n8n as a "Credential".
- You will need to create a LinkedIn "app" in order to post on your behalf. When creating your LinkedIn "app", you will be required to link it to a company page on LinkedIn. If you are doing this for yourself, search for the "Default Company Page (for API testing)" and select it, as it is provided by LinkedIn for individuals. You can find your LinkedIn apps here, and if you get stuck, further instructions on setting up this workflow (including the LinkedIn OAuth piece) can be found in the YouTube video accompanying these instructions.
- Lastly, you will need to create an OpenAI API key, found on your OpenAI Playground dashboard. Once you have created an API key, make sure you have an assistant created from the "Assistants" tab on the OpenAI dashboard. This assistant and its instructions will be needed for carrying out the re-formatting of your post.
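The sketch below illustrates the Notion query step: fetch rows whose Status is not yet "Done" so only unpublished snippets are picked up. The database ID, the property name ("Status"), and its property type are assumptions about your table, so adjust the filter to match your schema.

```python
import requests

NOTION_TOKEN = "your-notion-integration-secret"
DATABASE_ID = "your-database-id"

response = requests.post(
    f"https://api.notion.com/v1/databases/{DATABASE_ID}/query",
    headers={
        "Authorization": f"Bearer {NOTION_TOKEN}",
        "Notion-Version": "2022-06-28",
        "Content-Type": "application/json",
    },
    json={"filter": {"property": "Status", "status": {"does_not_equal": "Done"}}},
)
response.raise_for_status()

for page in response.json()["results"]:
    print(page["id"])  # each page is one pending post snippet
```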
by Jimleuk
This n8n template demonstrates how to calculate the evaluation metric "Similarity", which in this scenario measures the consistency of the agent. The scoring approach is adapted from the open-source evaluations project RAGAS; you can see the source here: https://github.com/explodinggradients/ragas/blob/main/ragas/src/ragas/metrics/_answer_similarity.py

How it works
This evaluation works best where questions are close-ended or about facts, where the answer can have little to no deviation. For our scoring, we generate embeddings for both the AI's response and the ground truth and calculate the cosine similarity between them (see the sketch below). A high score indicates LLM consistency with expected results, whereas a low score could signal model hallucination.

Requirements
- n8n version 1.94+
- Check out this Google Sheet for sample data: https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing
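As a minimal sketch of the scoring step: embed the AI response and the ground truth, then take the cosine similarity of the two vectors. The embedding model name is an assumption; the template computes the equivalent score inside n8n.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def similarity_score(response_text: str, ground_truth: str) -> float:
    emb = client.embeddings.create(
        model="text-embedding-3-small",
        input=[response_text, ground_truth],
    ).data
    a = np.array(emb[0].embedding)
    b = np.array(emb[1].embedding)
    # Cosine similarity: dot product over the product of the vector norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity_score("Paris is the capital of France.", "The capital of France is Paris."))
```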