by Ibrahim
**Overview**
This n8n workflow is designed to extract specific interests from messages in a Telegram chat and retrieve related information using the Facebook Graph API. It aims to provide a streamlined solution for parsing and analyzing user-provided interests within the Telegram platform.

**Features**
- **Interest Extraction:** Automatically identifies and extracts interests from messages that start with the hashtag "#interest".
- **Data Retrieval:** Utilizes the Facebook Graph API to retrieve information related to the extracted interests.
- **Structured Outputs:** Presents the retrieved data in an organized format for further analysis and review.

**Requirements**
- Operational instance of n8n (self-hosted or cloud version).
- Basic understanding of n8n workflows and nodes.

**Setup and Configuration**
1. Import Workflow: Load the provided JSON workflow into your n8n instance.
2. Configure Telegram Trigger Node: Ensure the Telegram trigger node is set up with the appropriate credentials and webhook ID.
3. Configure and Test Nodes: Adjust node parameters as necessary and test the workflow to ensure proper functionality.

**How it Works**
1. Telegram Trigger: Listens for incoming messages in a specified Telegram chat.
2. Check Message Contents: Verifies if the message begins with the specified hashtag and is from the designated chat ID.
3. Extract Message: Extracts the content of the message for further processing.
4. Split Message: Splits the extracted message to identify the interest and remaining content (see the sketch at the end of this description).
5. Connect to Graph API: Utilizes the Facebook Graph API to search for information related to the extracted interest.
6. Split Interests into a Table: Organizes the retrieved data into a structured table format.
7. Get Variables: Maps the retrieved data to create a new JSON object containing specific fields related to the interest.
8. Create a Spreadsheet: Generates a spreadsheet file in CSV format based on the retrieved and formatted data.
9. Send the Spreadsheet File: Sends the generated spreadsheet file back to the original Telegram chat.

**Customization**
- Modify the filtering conditions and fields to suit specific requirements.
- Adjust the frequency of the trigger node based on preference.

**Best Practices**
- Regularly test the workflow to ensure consistent performance.
- Stay informed about any changes to external APIs that might affect the workflow's functionality.

**Contributing**
Your feedback and contributions are highly valued. Feel free to adapt, modify, and share enhancements with the n8n community.
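As an illustration of the extract/split steps, here is a minimal Code-node sketch that pulls the interest out of a "#interest ..." message. It assumes the standard Telegram trigger output shape (`message.text`, `message.chat.id`); the template's own nodes may be structured differently.

```js
// Minimal sketch for an n8n Code node: pull the interest out of a "#interest ..." message.
// Assumes the standard Telegram trigger output; field names may differ in your workflow.
const text = $json.message?.text ?? '';

if (!text.startsWith('#interest')) {
  return []; // ignore messages without the hashtag
}

// Everything after the hashtag is treated as the interest phrase.
const interest = text.replace(/^#interest\s*/i, '').trim();

return [{ json: { interest, chatId: $json.message.chat.id } }];
```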
by Joey D’Anna
This template is a set of building blocks to access Monday.com in ways not supported by the official Monday node.

**Prerequisites**
- Monday account and Monday credentials.

**Included are setups to:**
- Find a column value by the column's name (instead of a numerical index, which can change when the board structure is changed)
- Find a column value by the column's ID (again, instead of using a numerical index)
- Pull a board relation column and get all the related pulses
- Pull an item's subitems and split them out
- Upload a file to an item's files field

**Setup**
1. Create a Monday.com credential
2. Update the nodes in the template to use your credential
3. Copy/paste the nodes you need from this template into any other workflow

**To retrieve a column by name** (see the sketch at the end of this description):
- Route a Monday.com node that gets an item to the COLUMN BY NAME node
- Edit the COLUMN BY NAME node, and enter the name in the first line of code.

**To retrieve a column by its ID:**
- Follow Monday.com's instructions to locate the column's ID
- Route a Monday.com node that gets an item to the COLUMN BY ID node
- Edit the COLUMN BY ID node, and enter the ID in the first line of code.

**To retrieve all linked pulses from a Board Relation column:**
- Route a Monday.com node that gets an item to the GET BOARD RELATION node
- Edit the GET BOARD RELATION node to specify the column name.
- All linked pulses will be retrieved by the subsequent PULL LINKEDPULSE node

**To pull all subitems from an item:**
- Route a Monday.com node that gets an item to the PULL SUBITEMS node
- All subitems will be retrieved by the subsequent GET EACH SUBITEM node

**To upload a file:**
- Replace the Convert to File node with whatever node you are using to output your binary file data
- Enable the MONDAY UPLOAD node
- If the destination column is named anything other than the default of "file", edit the MONDAY UPLOAD node and change column_id:"file" in the first Value field to match the name of your file column
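For illustration, a "column by name" lookup in a Code node could look like the sketch below. This is not the template's exact code: the `column_values` shape follows the Monday.com API, and depending on your API version the display name may live at `title` or `column.title`. The column name used here is a hypothetical example.

```js
// Minimal sketch of a "column by name" lookup.
// Assumes the incoming item carries a column_values array, as returned by a
// Monday.com "get item" operation.
const columnName = 'Status'; // <-- hypothetical example: put your column's display name here

const columns = $json.column_values ?? [];
const match = columns.find(
  (c) => c.title === columnName || c.column?.title === columnName
);

if (!match) {
  throw new Error(`Column "${columnName}" not found on this item`);
}

return [{ json: { name: columnName, text: match.text, value: match.value } }];
```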
by Milorad Filipović
**How it works**
It's very important to come to sales calls prepared. This often means a lot of manual research about the person you're calling with. This workflow delivers the latest news about the businesses you are about to interact with each day.
- **Scans Your Calendar**: Each morning, it reviews your Google Calendar for any scheduled meetings or calls with companies.
- **Fetches Latest News**: For each identified company, it searches the web for the most recent and relevant news articles using newsapi.org (see the sketch at the end of this description).
- **Delivers Insights**: You receive personalized emails via Gmail, each dedicated to a company you're meeting with that day, containing a curated list of news headlines, brief descriptions, and direct links to full articles.

**Setup steps**
The workflow requires you to have the following accounts set up in their respective nodes:
- Google Calendar
- Gmail

Besides those, there are a few parameters in the node called Setup that can be used to tweak the workflow:
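As a rough sketch of the news-fetching step (not the template's exact node configuration), the call to newsapi.org could look like this from a Code node. The `companyName` field, the `$env.NEWSAPI_KEY` variable and the availability of `fetch` (Node 18+) are assumptions.

```js
// Sketch only: fetch the latest articles about a company from newsapi.org.
const company = $json.companyName; // assumption: set by the calendar-parsing step
const url =
  'https://newsapi.org/v2/everything' +
  `?q=${encodeURIComponent(company)}` +
  '&sortBy=publishedAt&pageSize=5&language=en' +
  `&apiKey=${$env.NEWSAPI_KEY}`; // assumption: API key stored as an environment variable

const response = await fetch(url);
const { articles = [] } = await response.json();

// Keep just what the email needs: headline, short description, link.
return articles.map((a) => ({
  json: { title: a.title, description: a.description, url: a.url },
}));
```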
by Mutasem
**Use Case**
This workflow aims to enrich new contacts in HubSpot. The more relevant the HubSpot profile, the more useful it is. Once active, this n8n workflow will update the social profiles, contact data (phone, email), and location data from ExactBuyer.

**Setup**
1. Add the HubSpot trigger credential (be careful: the scopes must be exactly as in the n8n docs)
2. Add your ExactBuyer API key
3. Add the HubSpot credential for the update node (be careful: the scopes must match the n8n docs for this too; this credential is different from the trigger credential)
4. Activate the workflow

**How to adjust this template**
There's plenty of interesting info that ExactBuyer returns that could be helpful. Take a look and update this workflow to add what you need.
by Joachim Brindeau
**What it does**
The workflow is a simple yet efficient way to automate the process of indexing your website on Google using the Google Indexing API.

**How it works**
It works by extracting information from your sitemap, converting it into a JSON file, and looping through each URL to submit it for indexing. Here's a brief rundown of the workflow:
1. The workflow can be triggered manually via the "Execute Workflow" button or scheduled to run at a specific time using the "Schedule Trigger" node.
2. The sitemap of your website is fetched by the "sitemap_set" node with an HTTP request to the sitemap URL.
3. This XML sitemap is then converted into a JSON file using the "sitemap_convert" node.
4. The "sitemap_parse" node splits the JSON file into individual URLs.
5. The "url_set" node then prepares each URL to be sent to the Google Indexing API.
6. A loop is created using the "loop" node to process each URL individually and make a POST request to the Google Indexing API indicating that the URL has been updated (see the sketch at the end of this description).
7. If the POST request is successful and the URL has been updated, the workflow waits for 2 seconds before moving on to the next URL.
8. If the daily limit for the Google Indexing API is reached (200/day by default), an error message is triggered using the "Stop and Error" node.

**Before you use the workflow**

Activate the Indexing API:
1. Create an account with Google Cloud Platform > Console and then create a new project
2. Search for the Indexing API in the Library
3. Activate the API

Create a Service Account and get credentials:
1. Open the Service accounts page. If prompted, select a project.
2. Click Create Service Account, then enter a name and description for the service account. You can use the default service account ID, or choose a different, unique one. When done, click Create.
3. On the "Grant users access to this service account" screen, scroll down to the Create key section.
4. Click Create key. In the side panel that appears, select the JSON format.
5. Click Create. Your new public/private key pair is generated and downloaded to your machine. Open the file and copy the private key.
6. Add the credentials in the url_index node.

Add the user as owner of the site:
Beware: for each site, you need to add the service account user as an owner.

Set your sitemap:
Open the sitemap_set node and add the URL to your sitemap.

Now you should be able to ensure that Google is always up to date with the latest content on your website, improving your website's visibility and SEO rankings. Have fun!
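For reference, the per-URL notification sent in step 6 boils down to the following shape. This is a sketch only: in the template the url_index HTTP Request node handles Google authentication via its credential, so the access token below is just a placeholder.

```js
// Sketch of the notification request to the Google Indexing API.
const url = $json.url; // the page URL prepared by the url_set node
const accessToken = 'PLACEHOLDER_ACCESS_TOKEN'; // placeholder: real auth comes from the node's Google credential

const response = await fetch(
  'https://indexing.googleapis.com/v3/urlNotifications:publish',
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify({ url, type: 'URL_UPDATED' }),
  }
);

if (response.status === 429) {
  // Default quota is 200 requests/day; the workflow routes this case to Stop and Error.
  throw new Error('Google Indexing API daily quota reached');
}

return [{ json: await response.json() }];
```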
by Daniel Nolde
**What it is**
Chat with your event schedule from Google Sheets in Telegram:
- "When is the next meetup?"
- "How many events are there next month?"
- "Who presented most often?"
- "Which future meetups have no presenters yet?"

This workflow lets you chat with a Telegram bot about past, present and future events that are scheduled in a Google Spreadsheet.

(Info: This proof-of-concept was created as a demo for a hackathon of an AI & Developer Meetup in Da Nang, Vietnam, that uses a Telegram group to organize.)

**Who it is for**
If you want an easy way for your audience to get information about your events, you can use this workflow for the same purpose, or easily adapt it to your needs and to different use cases where you want to query smaller amounts of tabular data in natural language.

**How it works**
Upon being triggered by a chat message to a Telegram bot, the schedule of meetups is retrieved from Google Sheets, converted into markdown table syntax (see the sketch at the end of this description) and fed into the system prompt of an LLM (we're using OpenRouter in this example), whose output is posted back as the answer into the same Telegram chat.

**Setup steps**

To review it in action: as the reviewer of this workflow, you can temporarily use it via an existing Telegram bot. Simply point your Telegram client to https://t.me/AiDaNangBot and start asking questions like:
- "When is the next meetup?"
- "What future meetings do not have presenters?"
- "Who presented on Future of Human Relationships?"

To build upon this workflow:
1. Import the workflow
2. Customize the Google Sheets credentials for your individual access
3. Create a Telegram bot and connect it to the workflow by entering its API token into the credentials used in the Telegram trigger node
4. In the "Settings" node, replace the "scheduleURL" with the URL of your own copy of the Google Spreadsheet, or make a copy of the Event Schedule Template Sheet to spin off your own. The exact structure of the spreadsheet doesn't matter; it's just important that you semantically structure your information in dedicated columns clearly labeled in the header row.
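A minimal sketch of the rows-to-markdown-table step could look like this in a Code node, assuming each incoming item is one spreadsheet row with the column names as keys (the usual Google Sheets node output). This is an illustration, not the template's exact code.

```js
// Minimal sketch: turn spreadsheet rows into one markdown table string for the LLM system prompt.
const items = $input.all();
if (items.length === 0) {
  return [{ json: { scheduleTable: '(no events found)' } }];
}

const headers = Object.keys(items[0].json);
const headerRow = `| ${headers.join(' | ')} |`;
const separator = `| ${headers.map(() => '---').join(' | ')} |`;
const rows = items.map(
  (item) => `| ${headers.map((h) => String(item.json[h] ?? '')).join(' | ')} |`
);

// One string that can be dropped into the LLM's system prompt.
return [{ json: { scheduleTable: [headerRow, separator, ...rows].join('\n') } }];
```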
by David Olusola
🎨 AI Image Editor with Form Upload + Telegram Delivery 🚀

**Who's it for?** 👥
This workflow is built for content creators, social media managers, designers, and agencies who need fast, AI-powered image editing without the hassle. Whether you're batch-editing for clients or spicing up personal projects, this tool gets it done, effortlessly.

**What it does** 🛠️
A seamless pipeline that:
- 📥 Accepts uploads + prompts via a clean form
- ☁️ Saves images to Google Drive automatically
- 🧠 Edits images with OpenAI's image API
- 📁 Converts results to downloadable PNGs
- 📬 Delivers the final image instantly via Telegram

Perfect for AI-enhanced workflows that need speed, structure, and simplicity.

**How it works** ⚙️
1. User Uploads: Fill a form with an image + editing prompt
2. Cloud Save: Auto-upload to your Google Drive folder
3. AI Editing: OpenAI processes the image with your prompt (see the sketch at the end of this description)
4. Convert & Format: Image saved as PNG
5. Telegram Delivery: Final result sent straight to your chat 💬

**You'll need** ✅
- 🔑 OpenAI API key
- 📂 Google Drive OAuth2 setup
- 🤖 Telegram bot token & chat ID
- ⚙️ n8n instance (self-hosted or cloud)

**Setup in 4 Easy Steps** 🛠️
1. Connect APIs: Add OpenAI, Google Drive, and Telegram credentials to n8n. Store keys securely (avoid hardcoding!)
2. Configure Settings: Set the Google Drive folder ID, add the Telegram chat ID, tweak the image size (default: 1024×1024)
3. Deploy the Form: Add a Webhook Trigger node, test with a sample image, share the form link with users 🎯
4. Fine-Tune Variables: In the Set node, customize 📐 image size, 📁 folder path, 📲 delivery options, ⏱️ timeout duration

**Want to customize more?** 🎛️
- 🖼️ Image Settings: Change the size (e.g. 512x512 or 2048x2048); update the model (when new versions drop)
- 📂 Storage: Auto-organize files by date/category; add dynamic file names using n8n expressions
- 📤 Delivery: Swap Telegram with Slack, email, or Discord; add multiple delivery channels; include the image prompt or metadata in messages
- 📝 Form Upgrades: Add fields for advanced editing; validate file types (e.g. PNG/JPEG only); show a progress bar for long edits
- ⚡ Advanced Features: Add error handling or retry flows; support batch editing; include approvals or watermarking before delivery

⚠️ **Notes & Best Practices**
- ✅ Check your OpenAI credit balance
- 🖼️ Test with different image sizes/types
- ⏱️ Adjust timeout settings for larger files
- 🔐 Always secure your API keys
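As a hedged sketch of the AI-editing step (not the template's exact implementation), a call to OpenAI's image edits endpoint from a Code node might look like this. The binary property name, the `$env.OPENAI_API_KEY` variable and the availability of `fetch`/`FormData` (Node 18+) are assumptions; the template itself uses its own credentials and nodes.

```js
// Sketch only: send the uploaded image plus the editing prompt to OpenAI's image edits endpoint.
const imageBuffer = await this.helpers.getBinaryDataBuffer(0, 'data'); // binary property name may differ
const prompt = $json.prompt; // assumption: editing prompt captured by the form

const form = new FormData();
form.append('image', new Blob([imageBuffer], { type: 'image/png' }), 'input.png');
form.append('prompt', prompt);
form.append('size', '1024x1024');

const response = await fetch('https://api.openai.com/v1/images/edits', {
  method: 'POST',
  headers: { Authorization: `Bearer ${$env.OPENAI_API_KEY}` }, // assumption: key in an env variable
  body: form,
});

// Depending on the model, the result contains either an image URL or base64 image data.
return [{ json: await response.json() }];
```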
by Bazhard
**TOTP Validation with Function Node**
This template allows you to verify whether a 6-digit TOTP code is valid using the corresponding TOTP secret. It can be used in an authentication system.

The inputs need to be:
- a base32 TOTP secret (String)
- a 6-digit code (String)

**Important:** The 6-digit code must be in text format. If the code starts with zeros and is treated as a number, it could cause validation issues.

The function node will generate a 6-digit code from the TOTP secret, then compare it with the provided code. If they match, it will return 1; otherwise, it will return 0. A sketch of this kind of check is shown at the end of this description.

Example usage: you retrieve the user's TOTP secret from a database, then you want to verify whether the 2FA code provided by the user is valid.

**Setup Guidelines**
You only need the TOTP VALIDATION node. You will need to modify lines 39 and 40 of the node with the correct values for your specific context.

**Testing the Template**
You can define a sample secret and code in the EXAMPLE FIELDS node of the template, then click "Test Workflow". If the code is valid for the provided secret, the flow will proceed to the true branch of the IF CODE IS VALID node. Otherwise, it will go to the false branch.
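For context, a TOTP check of this kind (RFC 6238) can be sketched as below. This is not the template's exact code, just an illustration of the comparison the TOTP VALIDATION node performs; the input field names are assumptions, and `require('crypto')` may need `NODE_FUNCTION_ALLOW_BUILTIN` enabled on self-hosted instances.

```js
// Rough illustration of TOTP validation (RFC 6238) in an n8n Function/Code node.
const crypto = require('crypto');

function base32Decode(input) {
  const alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567';
  let bits = '';
  for (const c of input.replace(/=+$/, '').toUpperCase()) {
    const val = alphabet.indexOf(c);
    if (val === -1) throw new Error('Invalid base32 character');
    bits += val.toString(2).padStart(5, '0');
  }
  const bytes = [];
  for (let i = 0; i + 8 <= bits.length; i += 8) {
    bytes.push(parseInt(bits.slice(i, i + 8), 2));
  }
  return Buffer.from(bytes);
}

function totp(secret, timeStep = 30, digits = 6) {
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(Date.now() / 1000 / timeStep)));
  const hmac = crypto.createHmac('sha1', base32Decode(secret)).update(counter).digest();
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code =
    (((hmac[offset] & 0x7f) << 24) |
      (hmac[offset + 1] << 16) |
      (hmac[offset + 2] << 8) |
      hmac[offset + 3]) %
    10 ** digits;
  return code.toString().padStart(digits, '0');
}

// Field names below are assumptions; the template reads its values from the EXAMPLE FIELDS node.
const secret = $json.totp_secret;
const providedCode = String($json.code);

return [{ json: { valid: totp(secret) === providedCode ? 1 : 0 } }];
```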
by Niklas Hatje
**Use case**
When working with multiple teams, bugs must get in front of the right team as quickly as possible to be resolved. Normally this involves manual grooming of new bugs that have arrived in your ticketing system (in our case Linear). We found this way too time-consuming. That's why we built this workflow.

**What this workflow does**
This workflow triggers every time a Linear issue is created or updated within a certain team. For us at n8n, we created one general team called Engineering where all bugs get added in the beginning. The workflow then checks whether the issue meets the criteria to be auto-moved to a certain team. In our case, that means that the description is filled, that it has the bug label, and that it's in the Triage state. The workflow then classifies the bug using OpenAI's GPT-4 model before updating the team property of the Linear issue. If the AI fails to classify a team, the workflow sends an alert to Slack.

**Setup**
1. Add your Linear and OpenAI credentials
2. Change the team in the Linear Trigger to match your needs
3. Customize your teams and their areas of responsibility in the Set me up node. Please use the format Teamname. Also, make sure that the team names match the names in Linear exactly.
4. Change the Slack channel in the Set me up node to your Slack channel of choice.

**How to adjust it to your needs**
- Play around with the context that you're giving to OpenAI, to make sure the model has enough knowledge about your teams and their areas of responsibility
- Adjust the handling of AI failures to your needs

**How to enhance this workflow**
At n8n we use this workflow in combination with some others. E.g. we have the following on top:
- We're using an automation that enables everyone to add new bugs easily with the right data via a /bug command in Slack (check out this template if that's interesting to you)

This workflow was built using n8n version 1.30.0
by David Roberts
The built-in Gmail node doesn't yet support embedding images within the body of the email, but you can pull this off using the HTTP node, and this template shows you how.

**Requirements**
- A Gmail account

**How it works**
The workflow downloads an image, converts it into the format that the Gmail API expects (base64), packages it into a multipart MIME email, and uses the HTTP node to send it.
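As a rough sketch (not the template's exact code), the MIME assembly can be done in a Code node like this, assuming the image has already been converted to a base64 string. The resulting `raw` value is what the HTTP node then POSTs to the Gmail API's `users.messages.send` endpoint.

```js
// Sketch: build a multipart/related MIME message with an inline image for the Gmail API.
const imageBase64 = $json.imageBase64; // assumption: produced by an earlier conversion step
const boundary = 'n8n-mixed-boundary';

const mime = [
  'To: recipient@example.com',
  'Subject: Email with an inline image',
  'MIME-Version: 1.0',
  `Content-Type: multipart/related; boundary="${boundary}"`,
  '',
  `--${boundary}`,
  'Content-Type: text/html; charset="UTF-8"',
  '',
  '<p>Hello! Here is the image:</p><img src="cid:inline-image-1" />',
  '',
  `--${boundary}`,
  'Content-Type: image/png',
  'Content-Transfer-Encoding: base64',
  'Content-ID: <inline-image-1>',
  '',
  imageBase64,
  '',
  `--${boundary}--`,
].join('\r\n');

// Gmail expects the whole message base64url-encoded in the "raw" field.
const raw = Buffer.from(mime).toString('base64url');

return [{ json: { raw } }];
```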
by Jimleuk
This n8n workflow demonstrates how to automate the often time-consuming form-filling tasks in the early stages of the tendering process: the Request for Proposal document, or "RFP". It does this by utilising a company's knowledgebase to generate question-and-answer pairs using Large Language Models.

**How it works**
- A buyer's RFP is submitted to the workflow as a digital document that can be parsed.
- Our first AI agent scans and extracts all questions from the document into list form.
- The supplier sets up an OpenAI assistant beforehand, loaded with company brand, marketing and technical documents.
- The workflow loops through each of the buyer's questions and poses these to the OpenAI assistant.
- The assistant's answers are captured until all questions are satisfied, and are then exported into a new document for review.
- A sales team member is then able to use this document to respond quickly to the RFP before their competitors.

**Example Webhook Request**
curl --location 'https://<n8n_webhook_url>' \
 --form 'id="RFP001"' \
 --form 'title="BlueChip Travel and StarBus Web Services"' \
 --form 'reply_to="jim@example.com"' \
 --form 'data=@"k9pnbALxX/RFP Questionnaire.pdf"'

**Requirements**
- An OpenAI account to use AI services.

**Customising the workflow**
OpenAI assistants are only one approach to hosting a company knowledgebase for AI to use. Exploring different solutions such as building your own RAG-powered database can sometimes yield better results in terms of cost and control over how the data is managed.
by Jimleuk
This n8n workflow demonstrates how to automate image captioning tasks using Gemini 1.5 Pro, a multimodal LLM which can accept and analyse images. This is a really simple example of how easy it is to build and leverage powerful AI models in your repetitive tasks.

**How it works**
- For this demo, we'll import a public image from a popular stock photography website, Pexels.com, into our workflow using the HTTP Request node.
- With multimodal LLMs, there is little to preprocess other than ensuring the image dimensions fit within the LLM's accepted limits. Though not essential, we'll resize the image using the Edit Image node to achieve faster processing.
- The image is used as an input to the basic LLM node by defining a "user message" entry with the binary (data) type.
- The LLM node has the Gemini 1.5 Pro language model attached, and we'll prompt it to generate a caption title and text appropriate for the image it sees.
- Once generated, the caption text is positioned over the original image to complete the task. We can calculate the positioning relative to the number of characters produced using the Code node (a rough sketch appears at the end of this description).

An example of the combined image and caption can be found here: https://res.cloudinary.com/daglih2g8/image/upload/f_auto,q_auto/v1/n8n-workflows/l5xbb4ze4wyxwwefqmnc

**Requirements**
- Google Gemini API Key.
- Access to Google Drive.

**Customising the workflow**
- Not using Google Gemini? n8n's basic LLM node supports the standard syntax for image content for models that support it; try using GPT-4o, Claude or LLaVA (via Ollama).
- Google Drive is only used for demonstration purposes. Feel free to swap this out for other triggers such as webhooks to fit your use case.
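As a rough illustration of the positioning step (not the template's exact code), a Code node could estimate where to place the caption like this; the font size, margins and image dimensions below are assumptions for illustration.

```js
// Sketch: estimate how many lines the caption wraps to and derive an overlay position
// that a subsequent Edit Image (text) operation could use.
const caption = $json.caption ?? ''; // assumption: caption text produced by the LLM node
const imageWidth = 1024;  // assumption
const imageHeight = 768;  // assumption
const fontSize = 28;
const margin = 20;

// Roughly 0.6 * fontSize per character for a typical proportional font.
const charsPerLine = Math.floor((imageWidth - 2 * margin) / (fontSize * 0.6));
const lineCount = Math.max(1, Math.ceil(caption.length / charsPerLine));

// Anchor the text block near the bottom of the image.
const positionY = Math.max(margin, imageHeight - margin - lineCount * Math.round(fontSize * 1.3));

return [{ json: { caption, positionX: margin, positionY, fontSize, lineLength: charsPerLine } }];
```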