by Easy8.ai
Automated Helpdesk Ticket Alerts to Microsoft Teams from Easy Redmine

Intro/Overview
This workflow automatically posts a Microsoft Teams message whenever a new helpdesk ticket is created in Easy Redmine. It's perfect for IT teams who want real-time visibility into new issues without constantly checking ticket queues or inboxes. By integrating Easy Redmine with Teams, this setup ensures tickets are discussed and resolved faster, improving both response and resolution times.

How it works
- **Catch Easy Webhook – New Issue Created**: Triggers whenever Easy Redmine sends a webhook for a newly created ticket. Uses the webhook URL generated from Easy Redmine's webhook settings.
- **Get a new ticket by ID**: Fetches full ticket details (subject, priority, description) via the Easy Redmine API, using the ticket ID from the webhook payload.
- **Pick Description & Create URL to Issue**: Extracts the ticket description and builds a direct link to the ticket in Easy Redmine for quick access (see the sketch below).
- **AI Agent – Description Processing**: Uses an AI model to summarize the ticket and suggest possible solutions based on the issue description.
- **MS Teams Message to Support Channel**: Formats and sends the ticket details, priority, summary, and issue link to a designated Microsoft Teams channel, using a Teams message layout built for clarity and quick scanning.

How to Use
1. Import the workflow into your n8n instance.
2. Set up credentials:
   - Easy Redmine API credentials with permission to read helpdesk tickets.
   - Microsoft Teams credentials for posting messages to a channel.
3. Configure the Easy Redmine webhook to trigger on ticket creation events. Insert the n8n webhook URL into an active Easy Redmine webhook, which can be created at https://easy-redmine-application.com/easy_web_hooks.
4. Adjust node settings:
   - In the webhook node, use your Easy Redmine webhook URL.
   - In the "Get a new ticket by ID" node, insert your API endpoint and authentication.
   - In the Teams message node, select the correct Teams channel.
   - Adjust the timezone or scheduling if your team works across different time zones.
5. Test the workflow by creating a sample ticket in Easy Redmine and confirming that it posts to Teams.

Example Use Cases
- **IT Helpdesk**: Notify the support team immediately when new issues are logged.
- **Customer Support Teams**: Keep the entire team updated on urgent tickets in real time.
- **Project Teams**: Ensure critical bug reports are shared instantly with the right stakeholders.

Requirements
- Easy Redmine application
- Easy Redmine technical user for API calls with "read" permissions on tickets
- Microsoft Teams technical user for API calls with "post message" permissions
- Active n8n instance

Customization
- Change the AI prompt to adjust how summaries and solutions are generated.
- Modify the Teams message format (e.g., bold priority, add emojis for urgency).
- Add filters so only high-priority or specific-project tickets trigger notifications.
- Send alerts to multiple Teams channels based on ticket type or project.

Workflow Improvement Suggestions
- Rename nodes for clarity (e.g., "Fetch Ticket Details" instead of "get-one-issue").
- Ensure no private ticket data is exposed beyond the intended recipients.
- Add error handling for failed API calls to avoid missing ticket alerts.
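A minimal sketch of what the "Pick Description & Create URL to Issue" step could look like as an n8n Code node. It assumes the Easy Redmine API returns the ticket under an `issue` key and that issue links follow the standard Redmine `/issues/<id>` path; both are assumptions, not taken from the template itself.

```javascript
// Hypothetical n8n Code node: extract the description and build a link to the ticket.
// Assumes the previous node ("Get a new ticket by ID") returned { issue: { id, subject, priority, description } }.
const baseUrl = 'https://easy-redmine-application.com'; // replace with your Easy Redmine URL

return items.map((item) => {
  const issue = item.json.issue ?? item.json; // fall back if the API returns the ticket at the top level
  return {
    json: {
      id: issue.id,
      subject: issue.subject,
      priority: issue.priority?.name ?? 'Unknown',
      description: issue.description ?? '',
      issueUrl: `${baseUrl}/issues/${issue.id}`, // direct link used in the Teams message
    },
  };
});
```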
by Evoort Solutions
📥 Instagram to MP4 Converter with Google Drive Integration

This n8n workflow enables users to convert Instagram video links into downloadable MP4 files, store them in Google Drive, and log the results (success or failure) in Google Sheets.

🔧 Node-by-Node Overview
- **On form submission** – Triggers when a user submits an Instagram video URL.
- **Instagram Downloader API Request** – Calls the Instagram Downloader API to retrieve a downloadable link for the video.
- **If** – Checks whether the API response indicates success (see the sketch after the tips below).
- **MP4 Downloader** – Downloads the video from the provided media URL.
- **Upload To Google Drive** – Uploads the MP4 video to a specified folder in Google Drive.
- **Google Drive Set Permission** – Makes the uploaded file public with a shareable link.
- **Google Sheets** – Logs successful conversions, including the original URL and Drive link.
- **Wait** – Adds a pause before logging failures to avoid rapid writes to Google Sheets.
- **Google Sheets Append Row** – Logs failed attempts with Drive_URL marked as N/A.

🚀 Key Features
- 🔗 Uses the Instagram Downloader API to convert Instagram video URLs
- 🗂 Uploads MP4s directly to Google Drive
- 📊 Logs all actions in Google Sheets
- 🧠 Smart error handling using conditional and wait nodes

📌 Use Case & Benefits
- Convert Instagram videos to MP4 instantly from a simple form submission
- Automatically upload videos to Google Drive
- Log successful and failed conversions into Google Sheets
- Ideal for marketers, content managers, educators, and archivists
- No manual downloading, renaming, or organizing — it's fully automated

🌐 API Key Requirement
To use this workflow, you'll need an API key from the Instagram Downloader API. Follow these steps to obtain it:
1. Go to the Instagram Downloader API
2. Sign up or log in to RapidAPI
3. Subscribe to a plan (either free or paid)
4. Copy your x-rapidapi-key and paste it into the HTTP Request node where required

🛠 Full Setup Instructions

1. API Setup
- Create an account with RapidAPI.
- Subscribe to the Instagram Downloader API and copy your API key.
- Use this key in the HTTP Request node in n8n to call the Instagram Downloader API.

2. Google Services Setup
Google Drive Integration:
- Go to the Google Developer Console and create a new project.
- Enable the Google Drive API.
- Create OAuth 2.0 credentials and download the JSON credentials file.
- Upload this file to n8n under your Google Drive credentials setup.

Google Sheets Integration:
- Enable the Google Sheets API in the Google Developer Console.
- Create OAuth 2.0 credentials for Sheets access.
- Download the credentials file and upload it to n8n for authentication.
- Make sure the Google Sheet you're using has columns for Original_URL, Drive_URL, and Status.

3. Customizing the Template
- Custom folder for Google Drive: In the "Upload To Google Drive" node, change the folder ID to match the folder in Google Drive where videos should be stored.
- Custom Google Sheets columns: By default, the template logs Original_URL, Drive_URL, and Status (success/failure). To add more columns, update the "Google Sheets Append Row" node with the new column headers and ensure the data from each step maps correctly.

4. Column Mapping for Google Sheets
The default columns in your Google Sheet are:
- Original_URL: The original Instagram video URL submitted by the user.
- Drive_URL: The shareable link to the uploaded MP4 in Google Drive.
- Status: Whether the conversion was successful or failed.

Important Note: Ensure your Google Sheet is properly formatted with these columns before running the workflow.
💡 Additional Tips
- **Monitoring API Usage**: The Instagram Downloader API has rate limits. Check your API usage in the RapidAPI dashboard.
- **Automating with Triggers**: You can trigger the workflow automatically when a user submits a form URL through tools like Google Forms or other services that integrate with n8n.
- **Error Handling**: If you encounter frequent failures, check the API's response format and ensure that all your credentials are correctly set up.
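As a rough illustration of the success check that feeds the If node, the logic could also live in a small Code node. The response field names used below (`media`, `url`, `source`, `input`) are assumptions made for the sketch; the actual Instagram Downloader API response shape should be verified in the RapidAPI dashboard first.

```javascript
// Hypothetical n8n Code node: decide whether the downloader API call succeeded
// and pass the media URL downstream. Field names are assumed, not confirmed.
return items.map((item) => {
  const res = item.json;
  const mediaUrl = res.media?.[0]?.url ?? res.url ?? null; // adjust to the real response shape

  return {
    json: {
      success: Boolean(mediaUrl),              // drives the If node / failure branch
      Original_URL: res.source ?? res.input,   // assumed field holding the submitted link
      mediaUrl,                                // consumed by the "MP4 Downloader" node
      Status: mediaUrl ? 'success' : 'failed',
      Drive_URL: mediaUrl ? '' : 'N/A',        // matches the failure logging convention
    },
  };
});
```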
by Amuratech
This template is designed for SEO specialists, content marketers, and digital growth teams who want to automate the process of tracking keyword rankings. Manually checking SERPs every week is time-consuming and error-prone. This workflow solves that by automatically querying Google search results with the Serper API, updating rankings in Google Sheets, and keeping historical data for up to 12 weeks.

Prerequisites
Before you begin, make sure you have:
- A Google Sheet with columns:
  - Sr.no (unique row identifier)
  - Keyword
  - Target Page (the URL you want to track)
- A Google Service Account credential set up in n8n
- A Serper API key (added to n8n credentials as serperApiKey)

Detailed Setup
1. Import the workflow into n8n.
2. Update the Google Sheets nodes:
   - Replace your-google-sheet-id with your actual Google Sheet ID
   - Replace your-sheet-name with the correct tab name
3. Add your Google Service Account credentials to the Google Sheets nodes.
4. Add your Serper API key to the HTTP Request node (serperApiKey).
5. (Optional) Update the HARDCODED_DOMAIN variable in the Code node if you want to lock rankings to a specific domain.
6. Run the workflow once manually to confirm everything is working.

Usage & Customization
- By default, the workflow runs every Monday at 00:00 (midnight). You can adjust this by editing the Cron node.
- The workflow stores ranking history for 12 weeks. If you want more, extend the columns in your Google Sheet and update the Code node logic.
- The workflow checks for both exact URLs and domains. You can customize this in the Code node depending on whether you want to track page-level or domain-level rankings (see the sketch below).
- Data is updated only for the current week unless you allow overwriting, ensuring historical accuracy.
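A minimal sketch of the rank-lookup logic the Code node could use, assuming the Serper response exposes results under an `organic` array with a `link` field and that the target URL arrives from the Google Sheets row; those field and column names are assumptions for illustration.

```javascript
// Hypothetical n8n Code node: find the position of the target page/domain in Serper results.
const HARDCODED_DOMAIN = ''; // optionally lock ranking checks to one domain, e.g. 'example.com'

return items.map((item) => {
  const targetPage = item.json['Target Page'] ?? '';
  const domain =
    HARDCODED_DOMAIN || (targetPage ? new URL(targetPage).hostname.replace(/^www\./, '') : '');
  const organic = item.json.organic ?? []; // Serper's organic results array (assumed field name)

  // Exact-URL match first, then fall back to a domain-level match.
  let position = organic.findIndex((r) => r.link === targetPage) + 1;
  if (!position && domain) {
    position = organic.findIndex((r) => r.link?.includes(domain)) + 1;
  }

  return {
    json: {
      ...item.json,
      rank: position || 'Not in top results',
    },
  };
});
```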
by Dahiana
🔍 Low competition keyword finder

What it does:
Discovers all keywords a domain ranks for and analyzes their difficulty, trends, and intent. Identifies opportunities based on actual ranking data rather than suggestions.

How it works:
1. Reads domain seeds from Google Sheets (with location/language settings)
2. Fetches all keywords the domain ranks for (Keywords For Site API)
3. Gets keyword difficulty scores for each keyword (Bulk Keyword Difficulty API)
4. Combines data with search trends, intent classification, and backlink metrics (see the sketch below)
5. Writes comprehensive results to the keywords_opportunities sheet

Setup Requirements:
- DataForSEO API credentials (Basic Auth)
- Google Sheets with input columns: seed, location_name, language_name, limit
- Output sheet: keywords_opportunities

Data Captured:
- Keyword & Search Volume
- Monthly/Quarterly/Yearly Trends
- Keyword Difficulty (0-100)
- Search Intent (main + foreign)
- Average Backlinks
- Last Updated Time
- SE Type & Location/Language codes

Best For:
- Competitor keyword analysis
- Content gap identification
- Monitoring a domain's keyword portfolio
- Finding keywords you already rank for
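A rough sketch of how the combine step could join the ranked-keyword data with the bulk difficulty scores before writing rows to the sheet. The node names and response fields (`keyword`, `keyword_difficulty`, `search_intent_info`, `avg_backlinks_info`) are assumptions based on common DataForSEO-style payloads; verify them against your actual API output.

```javascript
// Hypothetical n8n Code node: join keyword metrics with difficulty scores by keyword text.
// The referenced node names are placeholders for the two previous API-call nodes.
const keywordsData = $('Keywords For Site').all().map((i) => i.json);
const difficultyData = $('Bulk Keyword Difficulty').all().map((i) => i.json);

const difficultyByKeyword = new Map(
  difficultyData.map((d) => [d.keyword, d.keyword_difficulty])
);

return keywordsData.map((k) => ({
  json: {
    keyword: k.keyword,
    search_volume: k.search_volume,
    difficulty: difficultyByKeyword.get(k.keyword) ?? null,
    intent: k.search_intent_info?.main_intent ?? null,
    avg_backlinks: k.avg_backlinks_info?.backlinks ?? null,
    last_updated: k.last_updated_time ?? null,
  },
}));
```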
by Kunsh
How it works
Automatically monitors Twitter for bug bounty tips and educational content every 4 hours, then saves valuable insights to Google Sheets for easy reference and organization.

Set up steps
1. Get your API key from https://twitterapi.io/ (free tier available)
2. Configure Google Sheets credentials in n8n
3. Create a Google Sheet with the required columns
4. Update the Sheet ID in the final node

What you'll get
A continuously updated database of bug bounty tips, techniques, and insights from the security community, perfectly organized in Google Sheets with:
- Tweet content and URLs
- Engagement metrics (likes, retweets, replies)
- Formatted timestamps for easy sorting
- Automatic duplicate prevention (see the sketch below)

Perfect for security researchers, bug bounty hunters, and cybersecurity professionals who want to stay updated with the latest tips and techniques from Twitter's security community.
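One way the duplicate-prevention step could work is a small Code node that drops tweets whose URL is already logged in the sheet. The node name "Read Existing Rows" and the column/field names `Tweet_URL` and `url` are illustrative assumptions, not taken from the template.

```javascript
// Hypothetical n8n Code node: keep only tweets not already present in the Google Sheet.
const existingRows = $('Read Existing Rows').all().map((i) => i.json);
const seenUrls = new Set(existingRows.map((row) => row.Tweet_URL));

// `items` holds the freshly fetched tweets; the `url` field name is an assumption.
return items.filter((item) => !seenUrls.has(item.json.url));
```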
by Toshiya Minami
Prioritize Todoist tasks and send a daily summary to Slack

Who's it for
Busy professionals, team leads, and freelancers who want a plug-and-play, AI-assisted morning briefing that turns messy task lists into a clear, actionable plan.

What it does / How it works
At 08:00 every morning, the workflow pulls open tasks from Todoist. An AI agent scores and ranks them by urgency, importance, dependencies, and effort, then produces a concise plan. You receive the summary in Slack (Markdown), with overdue or critical items highlighted with warnings.

How to set up
1. Connect OAuth for Todoist and Slack.
2. Select your posting channel in Send to Slack.
3. Adjust the time in Morning Schedule Trigger (default 08:00).
4. Run once to verify the parser output and Slack preview, then set the workflow to Active.

Requirements
- n8n (cloud or self-hosted)
- Todoist account / Slack workspace
- LLM provider connected in the AI node (do not hardcode keys in HTTP nodes)

How to customize the workflow
- Edit the prompt in AI Task Analyzer to tweak prioritization rules.
- Adjust Format AI Summary to match your tone and structure (see the sketch below).
- Add filters in the Todoist node (e.g., due today).
- (Optional) Log results to Google Sheets or a database for analytics.

Disclaimer (community node)
This template uses a community LangChain node for AI features and is intended for self-hosted n8n.
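A minimal sketch of what the Format AI Summary step could do in a Code node, assuming the AI Task Analyzer returns a `tasks` array with `title`, `priority_score`, and `due` fields; those field names are assumptions made for illustration.

```javascript
// Hypothetical n8n Code node: turn the AI agent's ranked task list into a Slack-friendly message.
const tasks = items[0].json.tasks ?? []; // assumed shape of the AI Task Analyzer output

const today = new Date().toISOString().slice(0, 10);
const lines = tasks
  .sort((a, b) => b.priority_score - a.priority_score)
  .map((t, i) => {
    const overdue = t.due && t.due < today ? ' :warning: *overdue*' : '';
    return `${i + 1}. *${t.title}* (score ${t.priority_score})${overdue}`;
  });

return [{ json: { text: `*Your plan for today*\n${lines.join('\n')}` } }];
```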
by Brandon True
Overview
Send an AI a few details about your "Dream Customer" in plain English, then have it search the web and return a "Dream 100": 100 ideal prospects to connect with in your industry. Great for stress-testing a product idea or giving you a start on networking in an industry.

How it works
1. Send the AI agent information about your business and ideal customer. It will ask you to clarify any additional details.
2. The agent uses an LLM to turn your criteria into specific prompts for an internet search.
3. Perplexity uses those prompts to identify ideal customers.
4. An LLM formats the Perplexity results, which are then added to a Google Sheet (see the sketch below).

Set up steps
1. Copy the provided Google Sheets template into your Google Drive.
2. Connect your Google Drive/Sheets to the workflow.
3. Connect OpenRouter and Perplexity to the workflow (just paste in your API key!).
4. If desired, connect the Slack trigger/response nodes to control the agent from Slack.
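As an illustration of the final formatting step, a Code node could coerce the LLM's structured output into rows for the Google Sheet. The column names (`Company`, `Contact`, `Website`, `Why_Fit`) and the assumption that the LLM returns a JSON array are placeholders, not the columns of the provided template.

```javascript
// Hypothetical n8n Code node: map the LLM-formatted prospect list into sheet rows.
// Assumes the LLM was prompted to return a JSON array of prospect objects.
const raw = items[0].json.output ?? items[0].json.text ?? '[]';
const prospects = typeof raw === 'string' ? JSON.parse(raw) : raw;

return prospects.map((p) => ({
  json: {
    Company: p.company ?? '',
    Contact: p.contact ?? '',
    Website: p.website ?? '',
    Why_Fit: p.reason ?? '',
  },
}));
```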
by Roni Bandini
How it works
This template waits for an external button to be pressed via webhook, then reads a Google Sheet with pending shipments. The sheet contains the columns idEnvio, fechaOrden, nombre, direccion, detalle, and enviado. It determines the next shipment using Google Gemini Flash 2.5, considering not only the date but also the customer's comments (see the sketch below). Once the next shipment is selected, the column "enviado" is updated with an X, and the shipping information is forwarded to Unihiker's n8n Terminal.

Setup
1. Create a new Google Sheet and name it "Shipping".
2. Add the following column headers in the first row: idEnvio, fechaOrden, nombre, direccion, detalle, and enviado.
3. Connect your Google Sheets and Google Gemini credentials.
4. In your n8n workflow, select the Shipping sheet in the Google Sheets node.
5. Copy the webhook URL and paste it into the .ino code for your Unihiker n8n Terminal. 🚀
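A small sketch of how a Code node between the Google Sheets read and the Gemini call might keep only pending rows and assemble the prompt. The prompt wording is illustrative; the column names match the sheet described above.

```javascript
// Hypothetical n8n Code node: keep shipments not yet sent and build a prompt for Gemini.
const pending = items.filter((item) => !item.json.enviado); // "enviado" stays empty until shipped

const list = pending
  .map((i) => `#${i.json.idEnvio} | ${i.json.fechaOrden} | ${i.json.nombre} | ${i.json.detalle}`)
  .join('\n');

const prompt =
  'Given these pending shipments (id | order date | customer | comments), ' +
  'pick the one that should ship next, considering both the date and the comments. ' +
  'Answer with the idEnvio only.\n' + list;

return [{ json: { prompt, pendingCount: pending.length } }];
```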
by Abdullah Alshiekh
🧩 What Problem Does It Solve?
Manually reviewing CVs from Telegram job applicants is slow, error-prone, and often inconsistent. This workflow automates the collection, analysis, and storage of CVs, saving HR teams hours while ensuring structured, high-quality candidate data for fast decision-making.

📝 Description
This workflow is built to help HR teams collect and qualify CVs sent over Telegram. It verifies that a candidate submits a valid PDF, stores the file securely, extracts key information using AI, and logs everything neatly in Google Sheets.

🎯 Key Advantages for HR Teams
✅ Automatically filters out non-PDF and invalid messages
✅ Uses AI to extract clean, structured candidate data
✅ Links CV files to Google Sheets for easy HR access
✅ Eliminates manual data entry from physical CVs
✅ Provides a scalable CV pipeline via Telegram

🛠️ Features
- Telegram bot for CV collection
- MIME-type PDF validation (see the sketch below)
- Google Drive integration for secure storage
- Text extraction from PDFs
- Gemini AI-powered CV parsing
- Google Sheets integration for candidate logging
- Merge logic to synchronize multiple streams
- JSON-safe parsing for AI output
- Automatic job title and experience categorization
- Duplicate prevention through name-based matching

🔧 Requirements
- A Telegram bot token
- Google Drive API credentials
- Google Sheets API credentials
- Gemini API key (or another LLM)
- An n8n instance with the relevant credentials configured
- Candidates sending CVs in PDF format

🧠 Use Case Examples
- Recruitment Agencies: Automate pre-screening and reduce manual effort
- Small Startups: Collect high-quality CVs without paying for an ATS
- Internship Programs: Quickly categorize applicants by experience
- Remote Hiring: Accept global CVs via Telegram from mobile users
- Freelancer Portals: Auto-log contractor profiles from incoming resumes

⚙️ Configuration Tips
1. Set up Telegram Bot API credentials
2. Configure Google Drive API access
3. Configure Google Sheets API access
4. Configure Google Gemini/PaLM API access
5. Replace all placeholder IDs with your actual values

If you need any help, Get in Touch.
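A minimal sketch of the PDF validation step as an n8n Code node. It relies on the Telegram Bot API exposing incoming files under `message.document` with a `mime_type` field; the exact payload path in your trigger node may differ, so treat the field access as an assumption to verify.

```javascript
// Hypothetical n8n Code node: accept only Telegram messages that carry a PDF document.
return items.map((item) => {
  const doc = item.json.message?.document;        // Telegram Bot API document object
  const isPdf = doc?.mime_type === 'application/pdf';

  return {
    json: {
      ...item.json,
      isPdf,                                      // drives the valid / invalid branch
      fileId: isPdf ? doc.file_id : null,         // used later to download the CV
      fileName: isPdf ? doc.file_name : null,
    },
  };
});
```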
by Yang
Who is this for?
This workflow is perfect for content marketers, bloggers, SEO professionals, and virtual assistants who need to transform keyword research into complete blog posts without spending hours writing and formatting.

What problem is this workflow solving?
Writing a blog post from scratch requires research, summarizing content, and structuring it into a polished article. This workflow automates that process by taking a single keyword, fetching related news articles, cleaning the data, and generating a professional blog draft automatically in Google Docs.

What this workflow does
The workflow begins when a keyword is submitted through a form. It expands the keyword into trending suggestions using Dumpling AI Autocomplete, then fetches recent news articles with Dumpling AI Google News. Articles are filtered to only include those published within the last 1–2 days, then scraped and cleaned for quality text. The aggregated content is sent to OpenAI, which generates a polished blog draft with a clear title. Finally, the draft is saved directly into Google Docs for easy editing and publishing.

Nodes Overview
1. Form Trigger – Form Submission (Keywords): Starts the workflow when a keyword is submitted through a form.
2. HTTP Request – Dumpling AI Autocomplete: Expands the keyword into multiple trending search suggestions.
3. Split Out – Split Autocomplete Suggestions: Breaks the list of autocomplete suggestions into individual items for processing.
4. Loop – Loop Suggestions: Iterates through each suggestion to process articles separately.
5. Wait – Delay Between Requests: Adds a pause to avoid sending too many requests at once.
6. HTTP Request – Dumpling AI Google News: Fetches recent news articles for each suggestion.
7. Split Out – Split News Articles: Splits the returned news results into individual articles.
8. Code – Filter Articles (1–2 Days Old): Keeps only articles that are between 1 and 2 days old for fresh content (see the sketch below).
9. Limit – Limit Articles: Restricts the workflow to the top 2 articles for each suggestion.
10. HTTP Request – Dumpling AI Scraper: Scrapes and cleans the full text content from the article URLs.
11. Code – Clean & Prepare Article Content: Removes clutter like links, images, and unrelated sections to ensure clean input.
12. Aggregate – Aggregate Articles: Combines the cleaned article content into one dataset.
13. OpenAI – Generate Blog Draft: Uses OpenAI to create a polished blog post draft and title in Markdown format.
14. Google Docs – Create Blog File: Creates a new Google Doc with the generated blog title.
15. Google Docs – Insert Blog Content: Inserts the full blog draft into the created document.

📝 Notes
- Set up Dumpling AI and generate your API key from Dumpling AI.
- OpenAI must be connected with an active API key for blog generation.
- Google Docs must be connected with write permissions to create and update blog posts.
- You can adjust the article filter (currently set to 1–2 days old) in the code node depending on your needs.
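A minimal sketch of what the "Filter Articles (1–2 Days Old)" Code node could look like, assuming each news item carries a `date` (or `publishedAt`) field that JavaScript's Date can parse; the field names are assumptions about the Dumpling AI response shape.

```javascript
// Hypothetical n8n Code node: keep only articles published between 1 and 2 days ago.
const now = Date.now();
const DAY_MS = 24 * 60 * 60 * 1000;

return items.filter((item) => {
  const published = new Date(item.json.date ?? item.json.publishedAt).getTime();
  if (Number.isNaN(published)) return false;     // drop articles with unparseable dates
  const ageDays = (now - published) / DAY_MS;
  return ageDays >= 1 && ageDays <= 2;           // adjust the window to taste
});
```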
by Baris Cem Ant
Workflow Objective
This n8n workflow automates the entire content creation process by monitoring specified RSS feeds for new articles. It then leverages Google Gemini AI to generate comprehensive, SEO-optimized blog posts inspired by these articles, creates unique cover images, and distributes the final content as a JSON file to stakeholders via Telegram. The primary goal is to automate the end-to-end content pipeline, saving significant time and ensuring a consistent output of high-quality content.

Step-by-Step Breakdown
1. Monitor News Sources (RSS Triggers): The workflow is triggered periodically (e.g., hourly, weekly) by multiple RSS Feed nodes that monitor sources like Search Engine Journal and TechCrunch for new publications.
2. Prevent Duplicate Content (Deduplication): For each new article fetched from the RSS feeds, the workflow checks an AWS DynamoDB database to see whether the article's URL has been processed before. If the link already exists in the database, processing for that item is halted and a debug notification is sent to Telegram via the "Telegram Debugger" node. This prevents the generation of duplicate content.
3. AI-Powered Content Generation (Gemini Content Generation): If the article is new, its link is passed to a Google Gemini node. Using a highly detailed and structured prompt, Gemini generates a complete blog post in a specific JSON format. This output includes a title, meta description, SEO-friendly slug, a descriptive prompt for generating a cover image, and the full Markdown body of the article (including an introduction, subheadings, conclusion, FAQ section, etc.).
4. Data Cleaning and Parsing (JSON Parser): The raw text response from the AI is processed by a Code node. This custom script cleans the output (removing Markdown code fences and fixing potential syntax errors) and reliably parses it into a valid JSON object, ensuring the data is clean for subsequent steps (see the sketch below).
5. Image Generation and Cloud Storage: The image_generation_prompt from the parsed JSON is sent to another Google Gemini node configured for image generation, creating a 1200x630 cover image for the blog post. The newly created image is renamed using the slug and uploaded to a cloud storage service such as Cloudflare R2. If the upload fails, an error message is sent to Telegram.
6. Final Data Assembly and Distribution: The generated text content is merged with the URL of the uploaded image to create the final, complete blog post data object. This data structure is converted into a JSON file named [slug].json and, in the final step, sent as a document to the designated recipients via the Telegram nodes.

Technologies and Services Used
- **Trigger:** RSS Feed Reader
- **Artificial Intelligence:** Google Gemini (for both text and image generation)
- **Database:** AWS DynamoDB (for content deduplication)
- **Cloud Storage:** Cloudflare R2 (S3-compatible)
- **Notification & Distribution:** Telegram
- **Data Processing:** n8n's native nodes (Merge, If, Set, Code)
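A minimal sketch of the kind of cleanup the JSON Parser Code node performs; this is an illustrative reconstruction under assumed field names (`text`/`content`), not the template's actual script.

```javascript
// Hypothetical n8n Code node: strip Markdown code fences from the Gemini response and parse JSON.
const raw = items[0].json.text ?? items[0].json.content ?? ''; // assumed field holding the AI output

// Remove ```json ... ``` fences and trim surrounding whitespace.
const cleaned = raw
  .replace(/```json\s*/gi, '')
  .replace(/```/g, '')
  .trim();

let post;
try {
  post = JSON.parse(cleaned);
} catch (err) {
  // Surface a clear error so the Telegram debugger branch can report it.
  throw new Error(`Could not parse Gemini output as JSON: ${err.message}`);
}

return [{ json: post }];
```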
by Lakindu Siriwardana
📚 Chat with Internal Documents (RAG AI Agent)

✅ Features
- Answers are given only from the provided documents
- Chat interface powered by an LLM (Ollama)
- Retrieval-Augmented Generation (RAG) using Supabase Vector DB
- Multi-format file support (PDF, Excel, Google Docs, text files)
- Automated file ingestion from Google Drive
- Real-time document update handling
- Embedding generation via Ollama for semantic search
- Memory-enabled agent using PostgreSQL
- Custom tools for document lookup with context-aware chat

⚙️ How It Works

📥 Document Ingestion & Vectorization
1. Watches a Google Drive folder for new or updated files.
2. Deletes old vector entries for the file.
3. Uses conditional logic to extract content from PDFs, Excel, Docs, or text files.
4. Summarizes and preprocesses content (if needed).
5. Splits and embeds the text via Ollama.
6. Stores embeddings in the Supabase Vector DB.

💬 RAG Chat Agent
1. Chat is initiated via Webhook or the built-in chat interface.
2. User input is passed to the RAG Agent.
3. The agent queries the User_documents tool (Supabase vector store) using the Ollama model to fetch relevant content.
4. If context is found, it answers directly. Otherwise, it can call tools or request clarification.
5. Responses are returned to the user, with memory stored in PostgreSQL for continuity.

🛠 Supabase Database Configuration
1. Create a Supabase project at https://supabase.com and go to the SQL editor.
2. Create a documents table with the following schema:
   - id: int8
   - content: text
   - metadata: jsonb
   - embedding: vector
3. Generate an API key.