by Oneclick AI Squad
This n8n workflow provides an AI-powered symptom checker: users input symptoms via a form or chat, an AI model analyzes them, matches possible conditions, and suggests relevant doctors with contact details via WhatsApp or email, enhancing healthcare accessibility.

Why Use It

This workflow improves healthcare outreach by providing quick, AI-driven symptom analysis and doctor recommendations, reducing the burden on medical staff, empowering users to make informed choices, and streamlining appointment scheduling.

How to Import It

1. Download the workflow JSON: obtain the workflow file from the n8n template or create it based on this document.
2. Import into n8n: in your n8n instance, go to "Workflows," click the three dots, select "Import from File," and upload the JSON.
3. Configure credentials: set up form/webhook, AI model, WhatsApp, email (e.g., SMTP), and optional doctor database API credentials in n8n.
4. Run the workflow: test with a sample symptom input and verify the responses.

System Architecture

- **Symptom Input Pipeline**:
  - Form/Chat Trigger: initiates the workflow when a user submits symptoms.
  - Extract Symptom Data: processes the input from the form or chat.
- **AI Analysis Flow**:
  - Send to AI Model: analyzes symptoms using an AI model.
  - Match Possible Conditions: identifies potential health conditions.
- **Doctor Suggestion Flow**:
  - Retrieve Doctor Details: fetches relevant doctor data from a Google Sheet or API.
  - Prepare Suggestion Message: formats the suggestion with doctor names and contacts.
  - Send WhatsApp Suggestion: delivers the suggestion via WhatsApp.
  - Send Email Suggestion: delivers the suggestion via email.
  - Update Log: logs the request and response in a Google Sheet.

Google Sheet File Structure

**Columns**:
- timestamp: date and time of the symptom submission.
- user_id: unique identifier for the user (e.g., form ID or chat handle).
- symptoms: list of symptoms entered by the user.
- condition: AI-identified possible condition.
- doctor_name: name of the suggested doctor.
- contact: doctor’s contact (phone or email).
- sent_via: channel used for delivery (e.g., WhatsApp, Email).

Customization Ideas

- **Add More Channels**: integrate SMS or Slack for additional notifications.
- **Enhance AI**: train the AI model with more medical data for better accuracy.
- **Include Appointment Booking**: add a node to schedule appointments with suggested doctors.
- **Multilingual Support**: adapt responses for different languages.
- **Severity Alerts**: flag critical conditions for immediate medical attention.

Requirements to Run This Workflow

- **Google Sheets Account**: for logging symptom data and doctor details.
- **AI Model**: Ollama or similar for symptom analysis (requires API access).
- **Form/Chat Service**: Google Forms, WhatsApp webhook, or similar for user input.
- **WhatsApp Business API**: for sending WhatsApp messages (requires a token and phone number).
- **Email Service**: Gmail, SMTP, or similar for email delivery.
- **n8n Instance**: with Google Sheets, AI, WhatsApp, and email connectors configured.
- **Internet Connection**: to access APIs and services.

Want a tailored workflow for your business? Our experts can craft it quickly. Contact our team.
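To make the "Prepare Suggestion Message" step more concrete, here is a minimal n8n Code-node sketch that assembles the suggestion text and a matching log row. The incoming field names mirror the Google Sheet columns above, but the exact shape of the upstream items is an assumption, so adapt it to your own nodes.

```javascript
// Minimal sketch of a "Prepare Suggestion Message" Code node.
// Assumes each incoming item already carries the AI result and the matched doctor;
// adjust the property names to your actual upstream nodes.
const out = [];

for (const item of $input.all()) {
  const { symptoms, condition, doctor_name, contact, user_id } = item.json;

  const message =
    `Based on your symptoms (${symptoms}), a possible condition is: ${condition}.\n` +
    `Suggested doctor: ${doctor_name} (contact: ${contact}).\n` +
    `This is not a diagnosis; please consult a medical professional.`;

  out.push({
    json: {
      message,                          // used by the WhatsApp and email nodes
      timestamp: new Date().toISOString(),
      user_id,
      symptoms,
      condition,
      doctor_name,
      contact,
      sent_via: 'WhatsApp, Email',      // matches the sent_via column in the log sheet
    },
  });
}

return out;
```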
by takuma
Who's it for

This template is for home cooks, small restaurant owners, or anyone who wants to streamline their meal planning, ingredient cost tracking, leftover management, nutritional analysis, and social media promotion. It's ideal for those looking to optimize their kitchen operations, reduce food waste, maintain a healthy diet, and efficiently share their culinary creations.

How it works / What it does

This advanced workflow acts as a comprehensive culinary assistant. Triggered by a new menu item, it performs several key functions:

- **Cost and Ingredient Tracking**: A "Menu Agent" uses AI to analyze your input (e.g., a recipe or dish) and extract a detailed list of ingredients, their associated costs, unit prices, and total cost, then logs this into a Google Sheet as a "Recipe List."
- **Leftover Management**: A "Leftovers Agent" identifies any unused ingredients from your planned dish and suggests three new recipes to use them up, helping to minimize food waste. This information is also recorded in a Google Sheet.
- **Nutritional Diary**: A "Nutritionist Agent" generates a diary-style entry with dietary advice based on the meal, highlighting key nutrients and offering personalized suggestions. This entry is appended to a "Diary" Google Sheet.
- **Social Media Promotion**: A "Post Agent" takes the nutritional diary entry and transforms it into an engaging social media post (specifically for X/Twitter in this template), which is then sent as a direct message, ready for you to share with your followers.

How to set up

1. Webhook Trigger: The workflow starts with a Webhook. Copy the webhook URL from the "Webhook" node. You will send your menu item input to this URL.
2. Google Sheets Integration: Set up a Google Sheets credential for your n8n instance. Create a Google Sheet document (e.g., "Recipe List") with three sheets:
   - "レシピ" (Recipe): stores your menu items, ingredients, costs, etc. Ensure it has columns for Date, Item, Ingredients, Ingredient Cost, Unit Price, Quantity, Total, Cost, and Leftover Ingredients.
   - "leftovers" (Leftovers): stores suggested recipes for leftover ingredients. Ensure it has columns for Date and Ingredients.
   - "diary" (Diary): stores your nutritional diary entries. Ensure it has a column for Diary.
   In the "Append row in sheet", "Append row in sheet1", and "Append row in sheet2" nodes, replace the Document ID with the ID of your Google Sheet. For "Sheet Name," select the correct sheet (e.g., "レシピ", "diary", "leftovers") from the dropdown.
3. OpenRouter Chat Model: Set up your OpenRouter credentials in the "OpenRouter Chat Model" nodes. You will need your OpenRouter API key.
4. Twitter Integration: Set up your Twitter credentials for the "Create Direct Message" node, and specify the User (username) to whom the direct message should be sent. This is typically your own Twitter handle or a test account.

Requirements

- An n8n instance.
- A Google account with Google Sheets enabled.
- An OpenRouter API key.
- A Twitter (X) account with developer access to send Direct Messages.

How to customize the workflow

- **Input Data**: The initial input to the "Webhook" node is expected to be the name of a dish or recipe. You can modify the "Menu Agent" to accept more detailed inputs if needed.
- **Google Sheets Structure**: Adjust the column mappings in the Google Sheets nodes if your spreadsheet column headers differ.
- **AI Agent Prompts**: Customize the System Message in each AI Agent node (Menu Agent, Leftovers Agent, Nutritionist Agent, Post Agent) to refine their behavior and the kind of output they generate. For example, you could ask the Nutritionist Agent to focus on specific dietary needs.
- **Social Media Platform**: The "Create Direct Message" node is configured for Twitter. You can swap it for another social media node (e.g., Mastodon, Discord) if you prefer to post elsewhere, remembering to adjust the "Post Agent" system message accordingly.
- **Output Parser**: The "Structured Output Parser" is configured for a specific JSON structure. If you change the "Menu Agent" to output a different structure, you'll need to update this parser.
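For illustration, here is a hypothetical sketch of the kind of structured output the Menu Agent could return and how a Code node might flatten it into rows matching the Recipe sheet columns. The field names are illustrative and not the template's actual Structured Output Parser schema.

```javascript
// Hypothetical example of a Menu Agent result and how a Code node could
// flatten it into rows matching the Recipe sheet columns. The field names
// here are illustrative, not the template's actual parser schema.
const menu = {
  item: 'Vegetable curry',
  ingredients: [
    { name: 'Carrot', unitPrice: 0.5, quantity: 2 },
    { name: 'Potato', unitPrice: 0.3, quantity: 3 },
  ],
  leftoverIngredients: ['Carrot tops'],
};

const date = new Date().toISOString().slice(0, 10);

// One sheet row per ingredient, with the per-ingredient cost computed inline.
const rows = menu.ingredients.map((ing) => ({
  json: {
    Date: date,
    Item: menu.item,
    Ingredients: ing.name,
    'Unit Price': ing.unitPrice,
    Quantity: ing.quantity,
    'Ingredient Cost': ing.unitPrice * ing.quantity,
    'Leftover Ingredients': menu.leftoverIngredients.join(', '),
  },
}));

return rows;
```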
by AttenSys AI
🧥 Virtual Try-On Image & Video Generation (VLM Run)

📌 Overview

This n8n workflow enables a Virtual Try-On experience where users upload a dress image and the system:

- Combines it with a fashion model image
- Generates a realistic try-on image
- Generates a fashion walking video
- Automatically shares the results via Telegram, Discord, and YouTube

🚀 Use Cases

- Virtual fashion try-on
- AI fashion marketing
- Clothing e-commerce previews
- Social media fashion automation
- Influencer & brand demo pipelines

✨ Key Features

- 🖼️ Image-based virtual try-on (model wearing the dress)
- 🎥 AI-generated fashion video
- 🔗 Multi-platform publishing (Telegram, Discord, YouTube)
- 🧩 Modular, extensible workflow design

🧠 Workflow Architecture

🟨 Input

- **Dress Image** – uploaded by the user (Form Trigger)
- **Model Image** – downloaded from a predefined URL
- **Prompt** – auto-constructed inside the workflow

🟦 Output

- 🖼️ Try-On Image
- 🎥 Fashion Walk Video
- 📤 Shared to Telegram (image/video), Discord (image), and YouTube (video upload)

🔐 Required Credentials

You must configure the following credentials in n8n:

| Service  | Credential Type  |
| -------- | ---------------- |
| VLM Run  | VLM Run API      |
| Telegram | Telegram Bot API |
| Discord  | Discord OAuth2   |
| YouTube  | YouTube OAuth2   |

⚠️ Community Node Warning

> Important: This workflow uses a Community Node: @vlm-run/n8n-nodes-vlmrun

What this means:

- This node is NOT installed by default in n8n
- You must manually install it before using the workflow

📦 Installation

Run the following command in your n8n environment, then restart n8n:

`npm install @vlm-run/n8n-nodes-vlmrun`

📖 Community Nodes documentation: https://docs.n8n.io/integrations/community-nodes/
by Nijan
This workflow turns Slack into your content control hub and automates the full blog creation pipeline — from sourcing trending headlines, validating topics, and drafting posts, to preparing content for your CMS. With one command in Slack, you can source news from RSS feeds, refine it with Gemini AI, generate high-quality blog posts, and get publish-ready output — all inside a single n8n workflow.

⸻

⚙️ How It Works

1. Trigger in Slack: Type start in a Slack channel to fetch trending headlines. Headlines are pulled from your configured RSS feeds.
2. Topic Generation (Gemini AI): Gemini rewrites RSS headlines into unique, non-duplicate topics. Slack displays these topics in a numbered list (e.g., reply with 2 to pick topic 2).
3. Content Validation: When you reply with a number, Gemini validates and slightly rewrites the topic to ensure originality. Slack confirms the selected topic back to you.
4. Content Creation: Gemini generates a LinkedIn/blog-style draft with a strong hook introduction, 3–5 bullet insights, and a closing takeaway and CTA. It optionally suggests asset ideas (e.g., image, infographic).
5. CMS-Ready Output: The final draft is structured for publishing (markdown or plain text). You can expand this workflow to automatically send the output to your CMS (WordPress, Ghost, Notion, etc.).

⸻

🛠 Setup Instructions

1. Connect your Slack Bot to n8n.
2. Configure your RSS Read nodes with feeds relevant to your niche.
3. Add your Gemini API credentials in the AI node.
4. Run the workflow: type start in Slack → see trending topics; reply with a number (e.g., gen 3) → get a generated blog draft in the same Slack thread.

⸻

🎛 Customization Options

• Change RSS sources to match your industry.
• Adjust Gemini prompts for tone (educational, casual, professional).
• Add moderation filters (skip sensitive or irrelevant topics).
• Connect the final output step to your CMS, Notion, or Google Docs for publishing.

⸻

✅ Why Use This Workflow?

• One-stop flow: Sourcing → Validation → Writing → Publishing.
• Hands-free control: Everything happens from Slack.
• Flexible: Easily switch feeds, tone, or target CMS.
• Scalable: Extend to newsletters, social posts, or knowledge bases.
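As a rough sketch of the number-reply handling, a Code node along these lines could map a Slack reply such as "2" or "gen 3" to one of the previously listed topics. The property names are assumptions; adapt them to your Slack trigger and topic-generation nodes.

```javascript
// Hypothetical sketch of the reply-parsing step: map a Slack reply such as
// "2" or "gen 3" to one of the numbered topics produced earlier. The property
// names (text, topics) are assumptions, not the template's exact fields.
const replyText = ($json.text || '').trim();       // e.g. "2" or "gen 3"
const topics = $json.topics || [];                 // numbered list from the Gemini step

const match = replyText.match(/(\d+)/);
if (!match) {
  return [{ json: { error: 'No topic number found in the Slack reply.' } }];
}

const index = parseInt(match[1], 10) - 1;          // "2" selects the 2nd topic
if (index < 0 || index >= topics.length) {
  return [{ json: { error: `Topic ${match[1]} is out of range.` } }];
}

return [{ json: { selectedTopic: topics[index] } }];
```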
by Summer
LinkedIn Job Search Automation

Creator: Summer Chang

Setup Instructions

This n8n workflow automatically searches for senior designer jobs on LinkedIn every day at 5am and saves them to a Notion database.

Prerequisites

- n8n instance (cloud or self-hosted)
- Notion account with API access
- A Notion database set up to receive job listings

Setup Steps

✅ 1. Create Your Notion Database
- Or duplicate my template

✅ 2. Connect Notion to n8n
- In the "Save to Notion" node, click on the Notion credentials
- Follow the authentication flow to connect your Notion account
- Select your job search database from the dropdown

✅ 3. Customize Your Search Criteria
In the "Set Search Criteria" node, modify these parameters to match your job preferences:
- search_keywords: job titles to search for (comma-separated). Default: senior product designer, product design lead, senior UX designer, AI designer
- excluded_keywords: terms to filter out (comma-separated). Default: contract, freelance
- location: where you want to work (comma-separated). Default: remote, san francisco
- f_TPR: time filter for job postings (r86400 = last 24 hours, r604800 = last week, r2592000 = last month)
- sortBy: how to sort results (DD = most recent first, R = most relevant first)

✅ 4. Adjust the Schedule
In the "Everyday @5am" node:
- Click on the node
- Modify the schedule to your preferred time
- You can set it to run daily, weekly, or at custom intervals

✅ 5. Set Result Limits
In the "Limit1" node:
- Default: processes 10 jobs per run
- Adjust the maxItems value to get more or fewer results

✅ 6. Configure Wait Time (Optional)
The "Wait2" node adds a 10-second delay between requests to avoid rate limiting:
- Default: 10 seconds
- Increase it if you're getting blocked by LinkedIn
- Decrease it for faster processing (not recommended)

How It Works

1. Trigger: runs automatically every day at 5am
2. Search: queries LinkedIn with your specified criteria
3. Parse: extracts job title, company, location, and URL from the search results
4. Filter: removes any jobs with missing critical information
5. Wait: delays between requests to avoid rate limiting
6. Fetch Details: retrieves full job descriptions and poster information
7. Save: adds each job to your Notion database
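For reference, the criteria above map naturally onto LinkedIn job-search URL parameters. The sketch below shows one way a Code node could assemble such a URL; the template's actual request node may build it differently, so treat this as an assumption.

```javascript
// Rough sketch of how the search criteria above could be turned into a
// LinkedIn jobs search URL. The exact URL and parameters the template's
// HTTP Request node uses may differ; treat this as an assumption.
const criteria = {
  search_keywords: 'senior product designer, product design lead',
  location: 'remote, san francisco',
  f_TPR: 'r86400',   // last 24 hours
  sortBy: 'DD',      // most recent first
};

const params = new URLSearchParams({
  keywords: criteria.search_keywords,
  location: criteria.location,
  f_TPR: criteria.f_TPR,
  sortBy: criteria.sortBy,
});

const searchUrl = `https://www.linkedin.com/jobs/search/?${params.toString()}`;

return [{ json: { searchUrl } }];
```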
by Panth1823
AI Workflow Description and Template Generator

This workflow automates the creation of professional documentation and template-ready sticky notes for any n8n workflow using AI.

How it works

1. Receives an n8n workflow JSON file via Telegram
2. Validates the input file type and extracts the workflow data
3. Scrubs sensitive information and analyzes the workflow structure
4. Uses Google Gemini AI to generate comprehensive documentation
5. Assembles a complete template with a main workflow sticky note and logical section stickies
6. Sends back the documented workflow file, a usage checklist, and a setup guide via Telegram

Setup

- Configure Telegram Trigger credentials for receiving files
- Configure Telegram API credentials for sending messages
- Configure Google Gemini Chat Model (Google PaLM API) credentials

Customization

Adjust the prompt in the "AI Template Generator" node to modify the documentation style, detail level, or specific requirements for your use case.
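As an illustration of the "scrub sensitive information" step, a Code node along these lines could strip credential references and pinned data from the workflow JSON before it reaches the AI model. The template's actual scrubbing logic may differ.

```javascript
// Illustrative sketch: remove credential references and pinned data from a
// workflow JSON before sending it to the AI model. The "workflow" field name
// on the incoming item is an assumption.
const workflow = $json.workflow;            // parsed workflow JSON

const scrubbed = {
  ...workflow,
  nodes: (workflow.nodes || []).map((node) => {
    const { credentials, ...rest } = node;  // drop credential references entirely
    return rest;
  }),
};

delete scrubbed.pinData;                    // pinned executions can contain real data

return [{ json: { scrubbedWorkflow: scrubbed } }];
```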
by Richard Black
Generate GitHub Release Notes with AI

Automatically generate GitHub release notes using AI. This workflow compares your latest two GitHub releases, summarises the changes, and produces a clean, ready-to-paste changelog entry. It’s ideal for automating GitHub Releases, versioning workflows, and keeping your documentation or CHANGELOG.md up to date without manual editing.

What this workflow does

- Listens for newly published GitHub Releases.
- Fetches and compares the latest two GitHub release versions.
- Uses an AI Chat Model to summarise changes and generate structured release notes.
- Outputs clean, reusable release note content for GitHub, documentation, or CI/CD pipelines.

How it works

1. A GitHub Trigger detects a new published release.
2. Release detail nodes extract the latest tag, body, and repository metadata.
3. Comparison logic fetches the previous release and prepares a diff.
4. Chat Model nodes (via OpenRouter) generate both a summary and a final, formatted release note.

Requirements / Connections

- GitHub OAuth credential configured in n8n.
- OpenRouter API key connected to the Chat Model nodes.

Setup instructions

1. Import the template.
2. Select your GitHub OAuth connection in all GitHub nodes.
3. Add your OpenRouter credential to the Chat Model nodes.
4. (Optional) Adjust the AI prompts to customise tone or formatting.

Output

The workflow produces:

- A concise summary of differences between the last two GitHub releases.
- A polished AI-generated GitHub release note ready to publish.

Customisation ideas

- Push generated notes directly into a CHANGELOG.md or documentation repo.
- Send release summaries to Slack or Teams.
- Include commit messages, PR titles, or labels for deeper analysis.
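To illustrate the comparison step, here is a minimal Code-node sketch that takes a list of releases (newest first, with the tag_name and body fields the GitHub API uses) and builds a prompt for the chat model. The template's actual node layout may differ.

```javascript
// Minimal sketch of the "compare the latest two releases" step. Assumes the
// previous node returned a list of releases, newest first, each with
// tag_name and body; adapt to the template's actual output shape.
const releases = $input.all().map((item) => item.json);

if (releases.length < 2) {
  return [{ json: { prompt: 'Only one release found; nothing to compare.' } }];
}

const [latest, previous] = releases;

const prompt = [
  `Write concise release notes for ${latest.tag_name}.`,
  `Previous release (${previous.tag_name}) notes:`,
  previous.body || '(empty)',
  `New release (${latest.tag_name}) notes:`,
  latest.body || '(empty)',
  'Summarise what changed between these two releases in a clean changelog format.',
].join('\n\n');

return [{ json: { prompt, latestTag: latest.tag_name, previousTag: previous.tag_name } }];
```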
by Dr. Christoph Schorsch
Rename Workflow Nodes with AI for Clarity

This workflow automates the tedious process of renaming nodes in your n8n workflows. Instead of manually editing each node, it uses an AI language model to analyze each node's function and assign a concise, descriptive new name. This ensures your workflows are clean, readable, and easy to maintain.

Who's it for?

This template is perfect for n8n developers and power users who build complex workflows. If you often find yourself struggling to understand the purpose of different nodes at a glance, or spend too much time manually renaming them for documentation, this tool will save you significant time and effort.

How it works / What it does

The workflow operates in a simple, automated sequence:

1. Configure Suffix: A "Set" node at the beginning allows you to easily define the suffix that will be appended to the new workflow's name (e.g., "- new node names").
2. Fetch Workflow: It then fetches the JSON data of a specified n8n workflow using its ID.
3. AI-Powered Renaming: The workflow's JSON is sent to an AI model (like Google Gemini or Anthropic Claude), which has been prompted to act as an n8n expert. The AI analyzes the type and parameters of each node to understand its function.
4. Generate New Names: Based on this analysis, the AI proposes new, meaningful names and returns them in a structured JSON format.
5. Update and Recreate: A Code node processes these suggestions, updates all node names, and correctly rebuilds the connections and expressions.
6. Create & Activate New Workflow: Finally, it creates a new workflow with the updated name, deactivates the original to avoid confusion, and activates the new version.
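As a hedged sketch of what the rename-and-rebuild Code node might do, the snippet below applies an AI-suggested old-to-new name map to a workflow's nodes, connections, and expressions. The renameMap field name is an assumption, and the template's real node likely handles more edge cases.

```javascript
// Hedged sketch of the rename step: apply an old->new name map to a
// workflow's nodes, connections, and expressions.
const workflow = $json.workflow;          // original workflow JSON (assumed field name)
const renameMap = $json.renameMap;        // e.g. { "HTTP Request": "Fetch Release Data" }

// 1. Rename the nodes themselves.
workflow.nodes = workflow.nodes.map((node) => ({
  ...node,
  name: renameMap[node.name] || node.name,
}));

// 2. Rebuild the connections object, whose keys and targets use node names.
const newConnections = {};
for (const [source, outputs] of Object.entries(workflow.connections || {})) {
  const renamedOutputs = JSON.parse(JSON.stringify(outputs));
  for (const branch of Object.values(renamedOutputs)) {
    for (const connectionList of branch) {
      for (const conn of connectionList || []) {
        conn.node = renameMap[conn.node] || conn.node;
      }
    }
  }
  newConnections[renameMap[source] || source] = renamedOutputs;
}
workflow.connections = newConnections;

// 3. Update expressions such as $('Old Name') inside node parameters.
let asText = JSON.stringify(workflow);
for (const [oldName, newName] of Object.entries(renameMap)) {
  asText = asText.split(`$('${oldName}')`).join(`$('${newName}')`);
}

return [{ json: { workflow: JSON.parse(asText) } }];
```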
by Guillaume Duvernay
Move beyond generic AI-generated content and create articles that are high-quality, factually reliable, and aligned with your unique expertise. This template orchestrates a sophisticated "research-first" content creation process. Instead of simply asking an AI to write an article from scratch, it first uses an AI planner to break your topic down into logical sub-questions. It then queries a Super assistant—which you've connected to your own trusted knowledge sources like Notion, Google Drive, or PDFs—to build a comprehensive research brief. Only then is this fact-checked brief handed to a powerful AI writer to compose the final article, complete with source links. This is the ultimate workflow for scaling expert-level content creation.

Who is this for?

- **Content marketers & SEO specialists**: Scale the creation of authoritative, expert-level blog posts that are grounded in factual, source-based information.
- **Technical writers & subject matter experts**: Transform your complex internal documentation into accessible public-facing articles, tutorials, and guides.
- **Marketing agencies**: Quickly generate high-quality, well-researched drafts for clients by connecting the workflow to their provided brand and product materials.

What problem does this solve?

- **Reduces AI "hallucinations"**: By grounding the entire writing process in your own trusted knowledge base, the AI generates content based on facts you provide, not on potentially incorrect information from its general training data.
- **Ensures comprehensive topic coverage**: The initial AI-powered "topic breakdown" step acts like an expert outliner, ensuring the final article is well-structured and covers all key sub-topics.
- **Automates source citation**: The workflow is designed to preserve and integrate source URLs from your knowledge base directly into the final article as hyperlinks, boosting credibility and saving you manual effort.
- **Scales expert content creation**: It effectively mimics the workflow of a human expert (outline, research, consolidate, write) but in an automated, scalable, and incredibly fast way.

How it works

This workflow follows a sophisticated, multi-step process to ensure the highest quality output:

1. Decomposition: You provide an article title and guidelines via the built-in form. An initial AI call then acts as a "planner," breaking down the main topic into an array of 5-8 logical sub-questions.
2. Fact-based research (RAG): The workflow loops through each of these sub-questions and queries your Super assistant. This assistant, which you have pre-configured and connected to your own knowledge sources (Notion pages, Google Drive folders, PDFs, etc.), finds the relevant information and source links for each point.
3. Consolidation: All the retrieved question-and-answer pairs are compiled into a single, comprehensive research brief.
4. Final article generation: This complete, fact-checked brief is handed to a final, powerful AI writer (e.g., GPT-5). Its instructions are clear: write a high-quality article using only the provided information and integrate the source links as hyperlinks where appropriate.

Implementing the template

1. Set up your Super assistant (prerequisite): First, go to Super, create an assistant, connect it to your knowledge sources (Notion, Drive, etc.), and copy its Assistant ID and your API Token.
2. Configure the workflow: Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes (GPT 5 mini and GPT 5 chat). In the Query Super Assistant (HTTP Request) node, paste your Assistant ID in the body and add your Super API Token for authentication (we recommend using a Bearer Token credential).
3. Activate the workflow: Toggle the workflow to "Active" and use the built-in form to generate your first fact-checked article!

Taking it further

- **Automate publishing**: Connect the final *Article result* node to a *Webflow* or *WordPress* node to automatically create a draft post in your CMS.
- **Generate content in bulk**: Replace the *Form Trigger* with an *Airtable* or *Google Sheet* trigger to automatically generate a whole batch of articles from your content calendar.
- **Customize the writing style**: Tweak the system prompt in the final *New content - Generate the AI output* node to match your brand's specific tone of voice, add SEO keywords, or include specific calls-to-action.
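For illustration, the consolidation step could look like the Code-node sketch below, which merges the sub-question and answer pairs into a single research brief. The field names (question, answer, sources) are assumptions about the loop output rather than the template's exact schema.

```javascript
// Illustrative sketch of the consolidation step: merge the sub-question /
// answer pairs returned by the Super assistant into one research brief.
// The field names are assumptions, not the template's exact schema.
const pairs = $input.all().map((item) => item.json);

const brief = pairs
  .map((pair, i) => {
    const sources = (pair.sources || []).map((url) => `- ${url}`).join('\n');
    return [
      `### ${i + 1}. ${pair.question}`,
      pair.answer,
      sources ? `Sources:\n${sources}` : '',
    ].filter(Boolean).join('\n\n');
  })
  .join('\n\n');

return [{ json: { researchBrief: brief } }];
```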
by Samir Saci
Tags: Logistics, Supply Chain, Warehouse Operations, Paperless Processes, Quality Management

Context

Hi! I’m Samir — Supply Chain Engineer, Data Scientist based in Paris, and founder of LogiGreen.

> Let us use n8n to help small companies digitalise their logistics and supply chain!

This workflow helps warehouse operators generate a complete damage report without needing to write anything manually. In warehouse operations, damaged pallets must be reported quickly and consistently. You can automate the entire process using AI to analyse photos of the damage.

📬 For business inquiries, you can find me on LinkedIn.

Example of damage report

1. The process starts with instructions sent to the operator.
2. A photo of the damaged pallets is shared with the bot.
3. A complete report is generated and sent by email.

🎥 Tutorial

A complete tutorial (with explanations of every node) is available on YouTube.

Who is this template for?

This template is ideal for companies with limited IT resources:

- **Warehouse operators** who need a fast reporting tool
- **Quality teams** who want consistent and structured reports
- **3PLs and logistics providers** looking to digitalise damage claims
- **Manufacturers and retailers** with high inbound pallet volumes
- **Anyone using Telegram** on the warehouse floor for quick interactions

What does this workflow do?

This workflow acts as an AI-powered damaged-goods reporting assistant using Telegram, OpenAI Vision and Gmail.

1. An operator sends a picture of the damaged pallet via Telegram.
2. The workflow downloads the image and sends it to GPT-4o for damage analysis.
3. The bot replies and asks for a photo of the pallet barcode.
4. The barcode picture is processed by GPT-4o Mini to extract the pallet number.
5. The workflow combines both results (damage analysis + pallet ID).
6. It generates an HTML email report with a damage summary, observed issues, severity level and recommended actions.
7. The report is automatically sent via Gmail to the configured recipient.
8. The operator receives a confirmation message in Telegram.

The process does not require any data input from the operator, only taking pictures!

Next Steps

Before running the workflow, follow the sticky notes and configure:

- Connect your Telegram Bot API
- Add your OpenAI API Key in the AI nodes
- Connect your Gmail credentials
- Update the recipient email in the “Send Report by Email” node

Submitted: 20 November 2025
Template designed with n8n version 1.116.2
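To give a feel for the combination step, here is a minimal Code-node sketch that merges the damage analysis with the extracted pallet ID and renders a simple HTML report. The field names are assumptions; the template's own report layout is richer.

```javascript
// Hedged sketch: merge the GPT-4o damage analysis with the pallet number
// from the barcode photo and render a simple HTML report. Field names
// (damageAnalysis, palletId, issues, severity, ...) are assumptions.
const damage = $json.damageAnalysis || {};
const palletId = $json.palletId || 'UNKNOWN';

const issuesHtml = (damage.issues || [])
  .map((issue) => `<li>${issue}</li>`)
  .join('');

const html = `
  <h2>Damaged Pallet Report - ${palletId}</h2>
  <p><strong>Severity:</strong> ${damage.severity || 'n/a'}</p>
  <p><strong>Summary:</strong> ${damage.summary || 'n/a'}</p>
  <h3>Observed issues</h3>
  <ul>${issuesHtml}</ul>
  <h3>Recommended actions</h3>
  <p>${damage.recommendedActions || 'n/a'}</p>
`;

return [{ json: { palletId, emailSubject: `Damage report - pallet ${palletId}`, emailHtml: html } }];
```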
by Stephan Koning
Recruiter Mirror is a proof-of-concept ATS analysis tool for SDRs/BDRs. Compare your LinkedIn profile or CV to job descriptions and get recruiter-ready insights. By comparing candidate profiles against job descriptions, it highlights strengths, flags missing keywords, and generates actionable optimization tips. Designed as a practical proof of concept for breaking into tech sales, it shows how automation and AI prompts can turn LinkedIn into a recruiter-ready magnet.

The pipeline is: Webhook → LinkedIn CV/JD fetch → GhostGenius API → n8n parsing/transform → Groq LLM → Output to Webhook. Here is the list of tools & APIs required to set up the Recruiter Mirror (proof of concept) project:

🔧 Tools & APIs Required

1. n8n (Automation Platform)
   - Either n8n Cloud or a self-hosted n8n instance.
   - Used to orchestrate the workflow, manage nodes, and handle credentials securely.

2. Webhook Node (Form Intake)
   - Captures LinkedIn profile (LinkedIn_CV) and job posting (LinkedIn_JD) links submitted by the user.
   - Acts as the starting point for the workflow.

3. GhostGenius API
   - Endpoints used:
     - /v2/profile → scrapes and returns structured CV/LinkedIn data.
     - /v2/job → scrapes and returns structured job description data.
   - **Auth**: requires valid credentials (e.g., API key / header auth).

4. Groq LLM API (via n8n node)
   - Model used: moonshotai/kimi-k2-instruct (via the Groq Chat Model node).
   - Purpose: runs the ATS Recruiter Check, comparing the CV JSON against the JD JSON, then outputs a structured JSON per the ATS schema.
   - **Auth**: Groq account + saved API credentials in n8n.

5. Code Node (JavaScript Transformation)
   - Parses Groq's JSON output safely (JSON.parse).
   - Generates clean, recruiter-ready HTML summaries with structured sections: status, reasoning, recommendation, matched keywords / missing keywords, optimization tips.

6. n8n Native Nodes
   - **Set & Aggregate nodes** → rebuild structured CV & JD objects.
   - **Merge node** → combines CV data with the job description for comparison.
   - **If node** → validates the LinkedIn URL before processing (falls back to error messaging).
   - **Respond to Webhook node** → sends back the final recruiter-ready insights in JSON (or HTML).

⚠️ Important Notes

- **Credentials**: Store API keys & auth headers securely inside the n8n Credentials Manager (never hardcode them inside nodes).
- **Proof of concept**: This workflow demonstrates feasibility but is **not production-ready** (scraping stability, LinkedIn terms of use, and API limits should be considered before real deployments).
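The JavaScript transformation described in point 5 could look roughly like this Code-node sketch, which parses the model output safely and renders the recruiter-ready HTML sections. The ATS schema field names shown here are assumptions for illustration.

```javascript
// Sketch of the JavaScript transformation step: safely parse the Groq model's
// JSON output and render a recruiter-ready HTML summary. The ATS schema field
// names are assumptions for illustration.
const raw = $json.output || '';          // text returned by the Groq Chat Model node

let ats;
try {
  ats = JSON.parse(raw);
} catch (err) {
  return [{ json: { error: 'Model did not return valid JSON', raw } }];
}

const list = (items) => (items || []).map((x) => `<li>${x}</li>`).join('');

const html = `
  <h2>ATS Recruiter Check</h2>
  <p><strong>Status:</strong> ${ats.status}</p>
  <p><strong>Reasoning:</strong> ${ats.reasoning}</p>
  <p><strong>Recommendation:</strong> ${ats.recommendation}</p>
  <h3>Matched keywords</h3><ul>${list(ats.matchedKeywords)}</ul>
  <h3>Missing keywords</h3><ul>${list(ats.missingKeywords)}</ul>
  <h3>Optimization tips</h3><ul>${list(ats.optimizationTips)}</ul>
`;

return [{ json: { ats, html } }];
```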
by Pixcels Themes
Who’s it for

This template is designed for recruiters, lead-generation teams, agency owners, and sales professionals who collect LinkedIn profile data and need to automate the process of finding verified company domains and email addresses. It is ideal for teams looking to eliminate manual research and streamline prospect enrichment.

What it does / How it works

This workflow reads contact records from a Google Sheet, including name, position, and description. An AI agent analyzes each profile to determine the company domain. If the domain is already identifiable from the description, it is used directly. If no domain is found, the workflow generates an intelligent search term and performs a Google Custom Search, with another AI agent extracting the most accurate domain from the real web results. Once the domain is confirmed, the workflow queries Hunter.io to find the best-matching email address for the contact. Finally, the enriched data—email and company domain—is appended back into the Google Sheet, updating each row automatically.

Requirements

- Google Sheets OAuth2 credentials
- Google Gemini (PaLM) API credentials
- Hunter.io API key
- Google Custom Search API key and CSE ID
- A Google Sheet with columns for name, position, description, and domain

How to set up

1. Connect your Google Sheets, Gemini, Hunter.io, and Google Search credentials.
2. Replace the Google Sheet ID and sheet name with your own.
3. Add your API keys to the designated nodes.
4. Ensure column names match your sheet structure.
5. Execute the workflow to begin enrichment.

How to customize the workflow

- Modify AI prompts for better domain inference
- Add additional enrichment steps (social profiles, industry tags)
- Add fallback email providers (Snov, Apollo, etc.)
- Change the update logic to support multiple sheets or batch processing
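As a rough sketch of the Hunter.io lookup, the Code-node snippet below builds an Email Finder request for a contact once the domain is known. In the template this call is made by a dedicated node, and splitting the name into first and last name is an assumption.

```javascript
// Rough sketch of the Hunter.io Email Finder lookup for a contact whose
// company domain has been confirmed. In the template a dedicated node makes
// the call; splitting "name" into first/last is an assumption.
const { name, domain } = $json;                     // e.g. "Jane Doe", "acme.com"
const [firstName, ...rest] = (name || '').split(' ');
const lastName = rest.join(' ');

const request = {
  method: 'GET',
  url: 'https://api.hunter.io/v2/email-finder',
  qs: {
    domain,
    first_name: firstName,
    last_name: lastName,
    api_key: 'YOUR_HUNTER_API_KEY',                 // use n8n credentials in practice
  },
};

// The response's data.email field holds the best-matching address,
// which is then written back to the Google Sheet row.
return [{ json: { request } }];
```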