by AiAgent
Disclaimer This workflow contains a community node. What It Does Leverage the power of GPT-4o to seamlessly summarize a scientific research PDF of your choosing. Simply download a PDF of a scientific research article into a designated folder on your computer, and this workflow will automatically read the article, produce a detailed summary, and save that summary onto your computer for future convenience. Who Is This For? The workflow is the perfect tool for self-learners attempting to expand their knowledge base as efficiently as possible using peer-reviewed scientific articles. It provides a more detailed summary of a scientific research article than a typical abstract, while taking a fraction of the time it would take to read the entire paper: enough information to get a firm grasp of the article's content and to decide whether you would like to dive deeper. This workflow is perfect for professionals who need to stay current on the most recent literature in their field, for self-learners who enjoy diving deep into a specific topic, and for anyone performing academic research, conducting a literature review, or building their knowledge in a field using peer-reviewed sources. How It Works The moment you save a PDF of a scientific research article to a predesignated folder, the workflow begins to read the article and produces a summary that is saved into another designated folder on your computer, via the steps below. Search the internet and your favorite journal databases for a scientific article that interests you. With the n8n workflow activated, download a PDF of the scientific article and save it to the designated folder. 
Saving the scientific article to this folder triggers the workflow. The workflow then extracts the contents of the PDF and passes the data to an AI agent powered by GPT-4o. This AI agent produces a detailed summary of the scientific article, which includes the following: an Introduction heading discussing the importance of the article and the specific aims of the study; a Methods heading detailing how the study was conducted, which variables were evaluated, the inclusion and exclusion criteria, and the measurement standards; a Results heading providing the specific data reported in the study for all variables tested, as well as the statistical significance of each result; a Summary heading evaluating the importance of the results, how they compare to other scientific articles in the same field, and the authors' recommendations on how to interpret the data; and a Conclusion heading summarizing the strengths and weaknesses of the article and noting gaps in knowledge on the subject that would make good topics for future studies. After the AI agent has completed its summary, it converts the summary to text and saves it to a designated folder on your computer for future viewing. Set Up Steps Create a folder on your computer where you would like to save your scientific article PDFs, then copy the path to this folder into the Local File Trigger node. Obtain an OpenAI API key from platform.openai.com/api-keys and connect it to the OpenAI Chat Model attached to the Summarizer Tools Agent. Fund your OpenAI account; GPT-4o costs roughly $0.01 per workflow run. Finally, create a folder on your computer where you wish to have the summaries saved, and copy the path to this folder into the Save to Folder node. 
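The five-heading structure above is driven by the agent's prompt. A sketch of how such a system prompt could be assembled (illustrative only; the template ships with its own prompt text):

```python
SECTIONS = ["Introduction", "Methods", "Results", "Summary", "Conclusion"]

def build_summary_prompt(article_text: str) -> str:
    """Assemble a system prompt asking GPT-4o for a structured article summary."""
    headings = "\n".join(f"- {s}" for s in SECTIONS)
    return (
        "You are a research expert who is providing data to another researcher.\n"
        "Summarize the following scientific article under these headings:\n"
        f"{headings}\n\n"
        f"Article:\n{article_text}"
    )

prompt = build_summary_prompt("Full text extracted from the PDF...")
```

Swapping the first line of the prompt is exactly the customization described in the next section.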
Customization This workflow is easy to customize to a specific area of research to provide the best possible summaries. If you have expertise in a specific field of study, you can customize the output to provide data at a higher level of understanding for that field. For example, if you are a marine biologist, you can change the portion of the text prompt in the summarizer tool from "You are a research expert who is providing data to another researcher." to "You are a marine biologist expert who is providing data to another marine biologist." Disclaimer If the PDF is too large, OpenAI will not be able to summarize it and will return an error saying that you have reached your request limit.
by nepomuc
This flow migrates all repositories of a GitLab group to a Gitea organization by triggering Gitea's integrated migration tool. Set up steps: Copy this workflow. Create an empty Gitea organization you want to migrate to. (The flow will skip any project whose name matches a repo that already exists in the target Gitea organization.) Create an access token in your Gitea instance (https://gitea.example.com/user/settings/applications), set it up as a Header Auth credential with its name being "Authorization" and its value being "token [your-gitea-token]", and select it for the "Gitea:"-named nodes. Create a personal access token in GitLab (https://gitlab.com/-/user_settings/personal_access_tokens), create a Header Auth credential with the name "PRIVATE-TOKEN" and the value "[your-gitlab-token]", and select it for the "Gitlab:"-named node. Also keep the value of your GitLab token available for step 5. Edit the Set node right after the trigger node and paste your personal access token in there, along with the names of the GitLab source group and the Gitea target organization. Use the URL-friendly version of their names by simply copy-pasting them from their URLs. Run the flow and enjoy the show :)
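For reference, each "Gitea:" node effectively calls Gitea's repository migration endpoint. A sketch of the request it sends, with placeholder host, tokens, and names (field names follow Gitea's /repos/migrate API):

```python
import json

GITEA_HOST = "https://gitea.example.com"   # placeholder instance
GITEA_TOKEN = "your-gitea-token"           # from user/settings/applications
GITLAB_TOKEN = "your-gitlab-token"         # the PRIVATE-TOKEN value

def build_migrate_request(clone_url: str, repo_name: str, target_org: str) -> dict:
    """Build the POST request asking Gitea to pull one repo from GitLab."""
    return {
        "url": f"{GITEA_HOST}/api/v1/repos/migrate",
        "headers": {
            "Authorization": f"token {GITEA_TOKEN}",   # the Header Auth credential
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "clone_addr": clone_url,     # HTTPS clone URL of the GitLab project
            "auth_token": GITLAB_TOKEN,  # lets Gitea read a private GitLab repo
            "repo_name": repo_name,
            "repo_owner": target_org,    # the empty Gitea organization
            "service": "gitlab",
            "private": True,
        }),
    }

req = build_migrate_request(
    "https://gitlab.com/my-group/my-project.git", "my-project", "my-org")
```

The workflow issues one such request per project in the source group.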
by Yulia
Create a Telegram bot that combines advanced AI functionalities with LangChain nodes and new tools. Nodes as tools and the HTTP Request tool are a new n8n feature that extends the custom workflow tool and simplifies your setup. We used the workflow tool in the previous Telegram template to call the DALL-E-3 model. In the new version, we've achieved similar results using the HTTP Request tool and the Telegram node tool instead. The main difference is that the Telegram bot becomes more flexible: the LangChain Agent node can decide which tool to use and when. In the previous version, all steps inside the custom workflow tool were executed sequentially. ⚠️ Note that you need to select the Tools Agent to work with the new tools. Before launching the template, make sure to set up your OpenAI and Telegram credentials. Here’s how the new Telegram bot works: The Telegram Trigger listens for new messages in a specified Telegram chat. This node activates the rest of the workflow after receiving a message. The AI Tool Agent receives the input text, processes it using the OpenAI model, and replies to the user. It addresses users by name and sends image links when an image is requested. The OpenAI GPT-4o model generates context-aware responses. You can configure the model parameters or swap this node entirely. Window Buffer Memory helps maintain context across conversations. It stores the last 10 interactions and ensures that the agent can access previous messages within a session. Conversations from different users are stored in separate buffers. The HTTP Request tool connects to OpenAI's DALL-E-3 API to generate images based on user prompts. The tool is called when the user asks for an image. The Telegram node tool sends generated images back to the user in a Telegram chat. It retrieves the image from the URL returned by the DALL-E-3 model. This does not happen directly, however: the response from the HTTP Request tool is first stored in the Agent’s scratchpad (think of it as short-term memory). 
In the next iteration, the Agent sends the updated response to the GPT model once again. The GPT model then creates a new tool request to send the image back to the user. To pass the image URL, the tool uses the new $fromAI() expression. The Send final reply node sends the final response message created by the agent back to the user on Telegram. Even though the image was already sent to the user, the Agent always stops with a final response that comes from its dedicated output. ⚠️ Note that the Agent may not adhere to the same sequence of actions in 100% of situations. For example, it could sometimes skip sending the file via the Telegram node tool and instead just send a URL in the final reply. If you have a longer series of predefined steps, it may be better to use the “old” custom workflow tool. This template is perfect as a starting point for building agentic AI workflows. Take a look at another agentic Telegram AI template that can handle both text and voice messages.
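For reference, the HTTP Request tool's call to OpenAI's image endpoint looks roughly like this (the key and prompt are placeholders):

```python
import json

def build_dalle_request(prompt: str, api_key: str) -> dict:
    """Build the request the HTTP Request tool sends to OpenAI's image endpoint."""
    return {
        "url": "https://api.openai.com/v1/images/generations",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": "dall-e-3",
            "prompt": prompt,     # forwarded from the user's chat message
            "n": 1,
            "size": "1024x1024",
        }),
    }

def extract_image_url(response_body: str) -> str:
    """The API replies with {"data": [{"url": ...}]}; pull out the image URL."""
    return json.loads(response_body)["data"][0]["url"]

req = build_dalle_request("a watercolor fox", "sk-your-key")
sample = '{"data": [{"url": "https://example.com/img.png"}]}'
url = extract_image_url(sample)
```

It is this returned URL that the agent later passes to the Telegram node tool via $fromAI().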
by joseph
🧵 Generate Conversational Twitter/X Threads with GPT-4o AI (n8n Workflow) This workflow uses OpenAI (GPT-4o) and Twitter/X to automatically generate and publish engaging, conversational threads in response to a trigger (e.g., from a chatbot or form). 🚀 What Does It Do? Listens for an incoming message (e.g., via webhook or another n8n input). Uses GPT-4o to craft a narrative-style Twitter thread in a personal, friendly tone. Publishes the first tweet, then automatically posts each following tweet as a reply, building a full thread. 🛠️ What Do You Need to Configure? Before using this template, make sure to set up the following credentials: OpenAI Add your OpenAI API key in the OpenAI Chat Model node. This is used to generate the thread content. Twitter/X Add your Twitter OAuth2 credentials to the First Tweet and Thread Reply nodes. This allows the workflow to publish tweets on your behalf. ✨ Who Is This For? This template is perfect for: Content creators who want to share ideas regularly Personal brands looking to grow their presence Social media managers automating thread creation 🔧 How to Customize It You can easily adjust the tone, structure, or length of the threads by modifying the system prompt in the OpenAI node. For example: To create threads with humor, change the prompt to “Write in a witty and humorous tone.” To tailor it for marketing, prompt it with “Write a persuasive product-focused Twitter thread.” You can also integrate this workflow with: Telegram bots Web forms (e.g., Typeform, Tally) CRM tools or newsletter platforms 📋 Sample Output Prompt sent to the workflow: “Tips for growing on Twitter in 2025” Generated thread: ++Tweet 1:++ Thinking of growing your presence on Twitter/X in 2025? Here's a thread with the most effective strategies that actually work 🧵 ++Reply 1:++ Engage, don’t broadcast Twitter is a conversation platform. Reply to others, quote-tweet, and start discussions instead of just posting links. 
++Reply 2:++ Consistency beats virality Tweeting regularly builds trust and visibility. You don't need to go viral — just show up.
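The reply-chaining performed by the First Tweet and Thread Reply nodes boils down to remembering the previous tweet's ID. A minimal sketch of that logic, with a stand-in `post` function in place of the real Twitter/X API call:

```python
def post_thread(tweets, post):
    """Post the first tweet, then each subsequent tweet as a reply
    to the previous one. Returns the tweet IDs in posting order.

    `post(text, in_reply_to)` is a stand-in for the Twitter/X node;
    it must return the new tweet's ID.
    """
    ids = []
    previous_id = None
    for text in tweets:
        previous_id = post(text, in_reply_to=previous_id)
        ids.append(previous_id)
    return ids

# Fake poster for illustration: IDs are just sequential numbers.
sent = []
def fake_post(text, in_reply_to=None):
    sent.append((text, in_reply_to))
    return len(sent)

ids = post_thread(["Tweet 1", "Reply 1", "Reply 2"], fake_post)
```

The first tweet has no `in_reply_to`; every later tweet replies to its predecessor, which is what Twitter/X renders as a thread.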
by Agent Circle
This workflow demonstrates how to automate live information gathering, fact-checking, and trend analysis in response to any chat message, using a powerful AI agent, memory, and a real-time search tool. Use cases are many: this is perfect for researchers needing instant, up-to-date data; support teams providing live, accurate answers; content creators looking to verify facts or find hot topics; and analysts automating regular reports with the freshest information. How It Works The workflow is triggered whenever a chat message is received (e.g., a user question, research prompt, or data request). The message is sent to the AI Agent, which follows these steps: First, it queries SerpAPI – Research to gather the latest real-time information and data from the web. Next, it checks the Window Buffer Memory for any related past interactions or contextual information that may be useful. Finally, it sends all collected data and context to the Google Gemini Chat Model, which analyzes the information and generates a comprehensive, intelligent response. The AI Agent then delivers the analyzed, up-to-date answer directly in the chat, combining live data, context, and expert analysis. How To Set Up Download and import the workflow into your n8n workspace. Set up API credentials and tool access for the AI Agent: Google Gemini (for chat-based intelligence) → connected to the Google Gemini Chat Model node. SerpAPI (for real-time web and search results) → connected to the SerpAPI - Research node. Window Buffer Memory (for richer, context-aware conversations) → connected to the Window Buffer Memory node. Open the chat in n8n and type the topic or trend you want to research. Send the message and wait for the process to complete. Receive the AI-powered research reply in the chat box. Requirements An n8n instance (self-hosted or cloud). SerpAPI credentials for live web search and data gathering. Window Buffer Memory configured to provide relevant conversation context. 
Google Gemini API access to analyze collected data and generate responses. How To Customize Choose your preferred AI model: replace Google Gemini with OpenAI ChatGPT or any other chat model as preferred. Add or change memory: replace Window Buffer Memory with more advanced memory options for deeper recall. Connect your preferred chat platform: easily swap out the default chat integration for Telegram, Slack, or any other compatible messaging platform to trigger and interact with the workflow. Need Help? If you’d like this workflow customized, or if you’re looking to build a tailored AI Agent for your own business, please feel free to reach out to Agent Circle. We’re always here to support and help you bring automation ideas to life. Join our community on different platforms for assistance, inspiration, and tips from others. Website: https://www.agentcircle.ai/ Etsy: https://www.etsy.com/shop/AgentCircle Gumroad: http://agentcircle.gumroad.com/ Discord Global: https://discord.gg/d8SkCzKwnP FB Page Global: https://www.facebook.com/agentcircle/ FB Group Global: https://www.facebook.com/groups/aiagentcircle/ X: https://x.com/agent_circle YouTube: https://www.youtube.com/@agentcircle LinkedIn: https://www.linkedin.com/company/agentcircle
by Aadarsh Jain
Who is this for? This workflow is designed for DevOps engineers, platform engineers, and Kubernetes administrators who want to interact with their Kubernetes clusters through natural language queries in n8n. It's perfect for teams who need quick cluster insights without memorizing complex kubectl commands or switching between multiple cluster contexts manually. How it works? The workflow operates in three intelligent stages: Cluster Discovery & Context Switching – automatically lists available clusters from your kubeconfig and switches to the appropriate cluster based on your natural language query. Command Generation – uses GPT-4o to analyze your request and generate the correct kubectl command with proper flags, selectors, and output formatting. Command Execution – executes the generated kubectl command against your selected cluster and returns the results. The workflow supports multi-cluster environments and can handle queries like: "Show me all pods in production cluster" "List failing deployments in production" "Get pod details in kube-system namespace" Setup Clone the MCP server: git clone https://github.com/aadarshjain/kubectl-mcp-server && cd kubectl-mcp-server Configure your kubeconfig – ensure your ~/.kube/config contains all the clusters you want to access. Set up MCP STDIO credentials in n8n – Command: /full/path/to/python-package Arguments: /full/path/to/kubectl-mcp-server/server.py Import the workflow into your n8n instance. Configure OpenAI credentials for the GPT-4o models. Test the workflow using the chat interface with queries like "show pods in [cluster-name]".
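The three stages above can be pictured as plain kubectl invocations. A rough illustration (the cluster names and commands here are examples; in the workflow the actual command text comes from GPT-4o):

```python
def kubectl_plan(cluster: str, command: str) -> list[str]:
    """Commands the workflow effectively runs: switch context, then query."""
    return [
        f"kubectl config use-context {cluster}",  # stage 1: context switching
        command,                                  # stage 3: generated command
    ]

# Stage 2 (done by GPT-4o in the workflow): natural language -> kubectl.
examples = {
    "Show me all pods in production cluster":
        kubectl_plan("production", "kubectl get pods --all-namespaces"),
    "Get pod details in kube-system namespace":
        kubectl_plan("production", "kubectl get pods -n kube-system -o wide"),
}
```

The MCP server executes these against the kubeconfig you configured, so no cluster credentials live inside n8n itself.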
by Vadym Nahornyi
This workflow automatically transcribes audio files, translates the content between languages, and generates natural-sounding speech from the translated text - all in one seamless process. Who's it for Content creators, educators, and businesses needing to make their audio content accessible across language barriers. Perfect for translating podcasts, voice messages, lectures, or any audio content while preserving the spoken format. How it works The workflow receives an audio file through a webhook, transcribes it using OpenAI's Whisper, translates and structures the text with GPT-4, generates new audio in the target language, and stores it in S3 for easy access. The entire process takes seconds and returns both the transcribed/translated text and a URL to the translated audio file. How to set up Configure OpenAI credentials - Add your OpenAI API key for Whisper transcription and GPT-4 translation Set up AWS S3 - Create a bucket with public read permissions for audio storage Update configuration - Replace 'YOUR-BUCKET-NAME' with your actual S3 bucket name Activate webhook - Deploy and copy your webhook URL for receiving audio files Send a POST request with: Binary audio file (as 'audiofile') Languages parameter (e.g., "English, Spanish") Requirements OpenAI API account with access to Whisper and GPT-4 AWS account with S3 bucket configured Basic understanding of webhooks and API requests How to customize Add language detection** - Automatically detect source language if not specified Customize voice settings** - Adjust speech speed, pitch, or select different voices Add file validation** - Implement size limits and format checks Enhance security** - Add webhook authentication and rate limiting Extend functionality** - Add subtitle generation or multiple output formats
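The POST request described above can be sketched as follows. The webhook URL is a placeholder, and the field names 'audiofile' and 'languages' follow the parameters listed in the setup (shown here as the form fields to send, rather than a live request; the exact field names in your deployment may differ):

```python
def build_webhook_request(webhook_url: str, audio_bytes: bytes, languages: str) -> dict:
    """Assemble the multipart form fields for the translation webhook."""
    return {
        "url": webhook_url,
        "files": {"audiofile": ("voice.mp3", audio_bytes, "audio/mpeg")},
        "data": {"languages": languages},  # e.g. "English, Spanish"
    }

req = build_webhook_request(
    "https://your-n8n.example.com/webhook/translate",  # placeholder URL
    b"\x00\x01",                                       # raw audio bytes
    "English, Spanish",
)
```

The workflow responds with the transcribed/translated text plus the S3 URL of the generated audio.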
by Blue Code
It allows you to automate candidate retrieval and onboarding in your HR processes. How it works It monitors a Gmail address for new emails with a PDF attachment. It expects the PDF to be a candidate’s CV, extracts the text using OCR, and then structures the data using ChatGPT. Once the data is processed, it connects to Notion and adds (or updates) an entry in the specified database. How to use Configure your Gmail account and provide your ChatGPT API key. Provide an API key for the OCR service in a variable named OCR_SPACE_API_KEY. Connect your Notion account. Once everything is configured, the workflow will monitor your inbox for new emails. Just send an email with a PDF attachment to the configured address. Requirements In addition to Gmail, ChatGPT, and Notion, the system uses a third-party OCR API (OCR.space). You’ll need to create an account and obtain an API key. You must map the fields returned by ChatGPT to the Notion database, or use the same field names we are using. Customising It should be easy to replace Notion with PostgreSQL or another database if needed.
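The field mapping mentioned in the requirements amounts to turning ChatGPT's structured output into Notion property objects. A sketch assuming the database has Name, Email, and Phone properties (your property names and types may differ):

```python
def to_notion_properties(candidate: dict) -> dict:
    """Map fields extracted by ChatGPT onto Notion database property objects."""
    return {
        "Name":  {"title": [{"text": {"content": candidate["name"]}}]},
        "Email": {"email": candidate["email"]},
        "Phone": {"phone_number": candidate["phone"]},
    }

props = to_notion_properties(
    {"name": "Jane Doe", "email": "jane@example.com", "phone": "+1 555 0100"})
```

Each key must match a property name in your Notion database exactly, and the nested shape must match that property's type (title, email, phone_number, and so on).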
by Jimleuk
This n8n template demonstrates how to calculate the evaluation metric "RAG document groundedness", which in this scenario measures the agent's ability to provide or reference only information included in the retrieved vector store documents. The scoring approach is adapted from https://cloud.google.com/vertex-ai/generative-ai/docs/models/metrics-templates#pointwise_groundedness How it works This evaluation works best for an agent that requires document retrieval from a vector store or similar source. For scoring, we collect the agent's response and the documents retrieved, and use an LLM to assess whether the former is based on the latter. A key factor is to look out for information in the response that is not mentioned in the documents. A high score indicates LLM adherence and alignment, whereas a low score could signal an inadequate prompt or model hallucination. Requirements n8n version 1.94+ Check out this Google Sheet for sample data: https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing
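The LLM-as-judge step can be sketched as a prompt that pairs the retrieved documents with the agent's response and asks for a binary score. This is a simplified stand-in for the Vertex AI pointwise groundedness template; the workflow's actual wording may differ:

```python
def groundedness_prompt(documents: list[str], response: str) -> str:
    """Build a judge prompt: is the response grounded only in the documents?"""
    context = "\n---\n".join(documents)
    return (
        "You are evaluating RAG document groundedness.\n"
        "Score 1 if every claim in the response is supported by the\n"
        "context below; score 0 if it contains unsupported information.\n\n"
        f"Context:\n{context}\n\n"
        f"Response:\n{response}\n\n"
        "Return only the score."
    )

prompt = groundedness_prompt(
    ["n8n supports evaluations from version 1.94."],
    "Evaluations are supported from n8n 1.94.")
```

Averaging the judge's scores over an evaluation set gives the groundedness metric for the agent.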
by Arlin Perez
AI Research Assistant via Telegram (GPT-4o mini + DeepSeek R1 + SerpAPI) 👥 Who’s it for This workflow is perfect for anyone who wants to receive AI-powered research summaries directly on Telegram. Ideal for anyone who asks frequent product, tech, or decision-making questions and wants up-to-date answers sourced from the web. 🤖 What it does Users send a question via Telegram. An AI agent (DeepSeek R1) reformulates and understands the intent, while a second agent (GPT-4o mini) performs live research using SerpAPI. The most relevant answers, including links and images, are delivered back via Telegram. ⚙️ How it works 📲 Telegram Trigger – Starts when a user sends a message to your Telegram bot. 🧠 DeepSeek R1 Agent – Understands, clarifies, or reformulates the user query. 🧠 Research AI Agent (GPT-4o mini + SerpAPI) – Searches the web and summarizes the best results. 📤 Send Telegram Message – Sends the response back to the same user. 📋 Requirements Telegram bot (via BotFather) with API token set in n8n credentials OpenAI account with API key and balance for GPT-4o mini SerpAPI account (100 free searches/month) with API key DeepSeek account with API key and balance 🛠️ How to set up Create your Telegram bot using BotFather and connect it using the Telegram Trigger node Set up DeepSeek credentials and add a Chat Model AI Agent node using DeepSeek R1 to reformulate the user’s question Set up OpenAI credentials and add a second ChatGPT AI Agent node using GPT-4o mini In the GPT-4o node, enable the SerpAPI Tool and add your SerpAPI API key Pass the reformulated question from DeepSeek to the GPT-4o agent for live search and summarization Format the response (text, links, optional images) Send the final reply to the user using the Telegram Send Message node Ensure your n8n instance is publicly accessible Test the workflow by sending a message to your Telegram bot ✅
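The two-agent hand-off can be sketched as a simple pipeline; the two lambdas below are stand-ins for the DeepSeek R1 and GPT-4o mini agent nodes:

```python
def answer_question(user_message, reformulate, research):
    """Two-stage pipeline: the first agent reformulates the question,
    the second performs live research on the reformulated query.

    `reformulate` and `research` are stand-ins for the two agent nodes.
    """
    query = reformulate(user_message)   # DeepSeek R1: clarify intent
    return research(query)              # GPT-4o mini + SerpAPI: search + summarize

reply = answer_question(
    "best budget phone?",
    lambda m: f"What are the best budget smartphones as of today? ({m})",
    lambda q: f"Researched: {q}",
)
```

The key design point is that the research agent never sees the raw message, only the sharpened query.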
by Jimleuk
This n8n template watches a Gmail inbox for support messages and creates an equivalent issue item in Linear. How it works A scheduled trigger fetches recent Gmail messages from the inbox which collects support requests. These support requests are filtered to ensure they are only processed once, and their HTML body is converted to markdown for easier parsing. Each support request is then triaged by an AI Agent, which adds appropriate labels, assesses priority, and drafts a title and description summarising the original request. Finally, the AI-generated values are used to create an issue in Linear to be actioned. How to use Ensure the messages fetched are solely support requests; otherwise you'll need to classify messages before processing them. Specify the labels and priorities to use in the system prompt of the AI agent. Requirements Gmail for incoming support messages OpenAI for LLM Linear for issue management Customising this workflow Consider automating more steps after the issue is created, such as attempting issue resolution or capacity planning.
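The final step posts the triage output to Linear's GraphQL API via an issueCreate mutation. A sketch of that request (the API key, team ID, and label IDs are placeholders; field names follow Linear's public GraphQL schema):

```python
import json

def build_linear_issue_request(api_key, team_id, title, description,
                               label_ids, priority):
    """Build the GraphQL request that creates a Linear issue
    from the AI triage output."""
    mutation = """
    mutation IssueCreate($input: IssueCreateInput!) {
      issueCreate(input: $input) { success issue { id url } }
    }"""
    return {
        "url": "https://api.linear.app/graphql",
        "headers": {"Authorization": api_key, "Content-Type": "application/json"},
        "body": json.dumps({
            "query": mutation,
            "variables": {"input": {
                "teamId": team_id,
                "title": title,              # AI-summarised title
                "description": description,  # markdown body from the agent
                "labelIds": label_ids,       # labels chosen by the triage agent
                "priority": priority,        # Linear's 0-4 priority scale
            }},
        }),
    }

req = build_linear_issue_request(
    "lin_api_placeholder", "TEAM-ID", "Login fails after update",
    "Steps to reproduce...", ["label-id-placeholder"], 2)
```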
by sayamol thiramonpaphakul
This workflow automatically checks the status of your websites using UptimeRobot API. If any site is down or unstable, it will: Generate a natural-language alert message using GPT-4o Push the message to a LINE group (with funny IT-style encouragement) Log all DOWN status entries into your Supabase database Wait 30 minutes before repeating 🔧 How It Works Schedule Trigger – Runs on a fixed interval (every few minutes). UptimeRobot Node – Fetches website monitor data. Code Node (Filter) – Filters only websites with status 8 (may be down) or 9 (down). IF Node – If any site is down, proceed. LangChain LLM Node – Formats alert with a humorous message using GPT-4o. Line Notify (HTTP Request) – Sends the alert to your LINE group. Loop Over Items – Loops through all monitors. Filter Down (Status = 9) – Selects only “fully down” sites. Supabase Node – Logs these into synlora_uptime_down table. Wait Node – Delays next alert by 30 minutes to avoid spamming. ⚙️ Setup Steps Required: 🔗 UptimeRobot API Key 📲 LINE Channel Access Token and Group ID 🧠 OpenAI Key (GPT-4o Mini) 🗃️ Supabase Project & Table Step-by-step: Go to UptimeRobot → Get API key and ensure monitors are set up. Create a Supabase table with fields: website, status, uptime_id. Create a LINE Messaging API bot, join it to your group, and get: Access Token Group ID (userId or groupId) Add your OpenAI API Key for GPT-4o Mini (or switch to your preferred LLM). Import the workflow JSON into n8n. Set credentials in all necessary nodes. Activate the workflow.
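The Code node's filter can be sketched like this, using UptimeRobot's status codes (8 = seems down, 9 = down) and assuming monitor objects shaped like UptimeRobot's getMonitors response:

```python
SEEMS_DOWN, DOWN = 8, 9  # UptimeRobot monitor status codes

def filter_problem_monitors(monitors):
    """Keep only monitors that may be down (8) or are down (9)."""
    return [m for m in monitors if m["status"] in (SEEMS_DOWN, DOWN)]

def fully_down(monitors):
    """Subset that is fully down (status 9) - these get logged to Supabase."""
    return [m for m in monitors if m["status"] == DOWN]

monitors = [
    {"friendly_name": "site-a", "status": 2},  # up
    {"friendly_name": "site-b", "status": 8},  # may be down -> alert
    {"friendly_name": "site-c", "status": 9},  # down -> alert + Supabase log
]
problems = filter_problem_monitors(monitors)
```

Only `problems` reaches the LINE alert path, and only the status-9 subset is written to the synlora_uptime_down table.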