by Jenny
Create a recommendation tool without hallucinations based on RAG with the Qdrant vector database. This example builds movie recommendations on the IMDB-top1000 dataset. You can give the chatbot your wishes and your "big no's", for example: "A movie about wizards but not Harry Potter", and get the top-3 recommendations.

**How it works** (a video with the full design process is available)

- Upload the IMDB-1000 dataset to the Qdrant vector store, embedding movie descriptions with OpenAI;
- Set up an AI agent with a chat. This agent calls a workflow tool to get movie recommendations based on a request written in the chat;
- Create a workflow which calls Qdrant's Recommendation API to retrieve the top-3 movie recommendations based on your positive and negative examples.

**Set up steps**

- You'll need to create a free-tier Qdrant cluster (Qdrant can also be used locally; it's open source) and set up API credentials
- You'll need OpenAI credentials
- You'll need GitHub credentials, and to upload the IMDB Kaggle dataset to your GitHub.
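As a rough illustration of the last step, here is a minimal sketch of the JSON body such a workflow could POST to Qdrant's `points/recommend` endpoint. The point IDs, limit, and collection path in the comment are illustrative assumptions, not values taken from the template:

```python
# Sketch: build the request body for Qdrant's Recommendation API.
# Point IDs below are hypothetical; they would come from the indexed movies.

def build_recommend_body(positive_ids, negative_ids, limit=3):
    """Positive/negative point IDs correspond to the user's wishes and 'big no's'."""
    return {
        "positive": list(positive_ids),  # liked examples, e.g. wizard movies
        "negative": list(negative_ids),  # disliked examples, e.g. Harry Potter
        "limit": limit,                  # top-3 recommendations
        "with_payload": True,            # return movie metadata, not just IDs
    }

body = build_recommend_body([101, 205], [37])
# POST this to https://<your-cluster>/collections/<collection>/points/recommend
```

Qdrant then returns points similar to the positive examples while steering away from the negative ones, which is exactly the "wishes vs. big no's" behavior described above.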
by German Velibekov
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Transform email overload into actionable insights with this automated daily digest workflow that intelligently summarizes categorized emails using AI.

**Who's it for**

This workflow is perfect for busy professionals, content creators, and newsletter subscribers who need to stay informed without spending hours reading through multiple emails. Whether you're tracking industry news, monitoring competitor updates, or managing content subscriptions, this automation helps you extract key insights efficiently.

**How it works**

The workflow runs automatically each morning at 9 AM, fetching emails from a specific Gmail label received in the last 24 hours. Each email is processed through OpenAI's language model using LangChain to create concise, readable summaries that preserve important links and formatting. All summaries are then combined into a single, well-formatted digest email and sent to your inbox, replacing dozens of individual emails with one comprehensive overview.

**How to set up**

1. Create a Gmail label for emails you want summarized (e.g., "Tech News", "Industry Updates")
2. Configure credentials for both Gmail OAuth2 and OpenAI API in their respective nodes
3. Update the Gmail label ID in the "Get mails (last 24h)" node with your specific label
4. Set your email address in the "Send Digested mail" node
5. Adjust the schedule in the Schedule Trigger if you prefer a different time than 9 AM
6. Test the workflow with a few labeled emails to ensure proper formatting

**Requirements**

- Gmail account with OAuth2 authentication configured
- OpenAI API account and valid API key
- At least one Gmail label set up for email categorization
- Basic understanding of n8n workflow execution

**How to customize the workflow**

- Change summarization style: Modify the prompt in the "Summarization Mails" node to adjust the tone, length, or format of summaries. You can make summaries more technical, casual, or focused on specific aspects like action items.
- Adjust time range: Change the `receivedAfter` parameter in the Gmail node to fetch emails from different time periods (last 2 days, last week, etc.).
- Multiple labels: Duplicate the Gmail retrieval section to process multiple labels and combine them into categories within your digest.
- Add filtering: Insert additional conditions to filter emails by sender, subject keywords, or other criteria before summarization.
- Custom formatting: Modify the "Combine Subject and Body" code node to change the HTML structure, add styling, or include additional metadata like email timestamps or priority indicators.
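To make the customization point concrete, here is a sketch of the kind of logic the "Combine Subject and Body" code node performs: joining per-email AI summaries into one HTML digest body. The item field names (`subject`, `summary`) are assumptions for illustration, not the node's actual field names:

```python
# Sketch: combine per-email summaries into a single HTML digest.
# Field names ('subject', 'summary') are hypothetical.

def build_digest(items):
    """Join per-email AI summaries into one HTML digest body."""
    sections = [
        f"<h3>{item['subject']}</h3>\n<div>{item['summary']}</div>"
        for item in items
    ]
    return "<h2>Daily Digest</h2>\n" + "\n<hr/>\n".join(sections)

html = build_digest([
    {"subject": "Tech News", "summary": "<p>Key points...</p>"},
    {"subject": "Industry Updates", "summary": "<p>More points...</p>"},
])
```

Adding styling, timestamps, or priority badges amounts to extending the f-string template for each section.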
by Shun Fukuchi
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Automated Research Reports with AI and Tavily Search**

An intelligent research automation workflow designed for Japanese users that transforms user queries into comprehensive HTML reports delivered via email. Using Google Gemini AI and Tavily search, this workflow generates optimized search queries, conducts multi-perspective research, and delivers structured analysis reports in Japanese.

**Who's it for**

Content creators, researchers, analysts, and businesses in Japan who need comprehensive research reports on various topics without manual information gathering. Particularly valuable for Japanese professionals conducting competitive analysis, market research, and technical comparisons who prefer reports in their native language.

**How it works**

The workflow follows a strategic four-step process:

1. Query Optimization: Google Gemini AI analyzes user input and generates three optimized search queries for comprehensive coverage
2. Multi-Query Research: Tavily's advanced search executes all queries with deep search parameters and AI-generated answers
3. Report Synthesis: Another Gemini AI model consolidates findings, eliminates duplicates, and structures information into readable HTML format
4. Email Delivery: Gmail automatically sends the final HTML report to specified recipients

**Requirements**

- Google Gemini API credentials (for three separate AI nodes)
- Tavily API credentials for advanced search functionality
- Gmail authentication for email delivery
- Basic n8n workflow execution permissions

**How to set up**

1. Configure API credentials in all Google Gemini and Tavily nodes
2. Update email settings in the "Send a message" node with your recipient address
3. Customize your query in the "Edit Fields" node (default: "n8nとdifyの違い", "the difference between n8n and dify")
4. Test the workflow to ensure all connections work properly

**How to customize the workflow**

- Research depth: Increase `max_results` in the Tavily search for more comprehensive data gathering.
- Query optimization: Modify system prompts in the Query Generator for domain-specific searches.
- Report format: Adjust the Report Agent's system message to change the output structure, language, or focus areas.
- Multi-recipient delivery: Duplicate the Gmail node for multiple email destinations.

The workflow processes Japanese and English queries effectively, with built-in support for Japanese-language output, making it ideal for Japanese professionals who need multilingual research capabilities. Advanced search parameters ensure high-quality, relevant results for professional research applications.
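Since the three generated queries can return overlapping sources, the report-synthesis step has to deduplicate before handing findings to Gemini. A minimal sketch of that consolidation, assuming Tavily-style result dicts with `url` and `content` keys (an assumption about the response shape):

```python
# Sketch: merge multi-query search results and drop duplicate URLs
# before report synthesis. The result-dict shape is hypothetical.

def consolidate(result_batches):
    seen, merged = set(), []
    for batch in result_batches:           # one batch per generated search query
        for result in batch:
            if result["url"] not in seen:  # eliminate duplicates across queries
                seen.add(result["url"])
                merged.append(result)
    return merged
```

Keeping the first occurrence preserves the ranking order of the earliest query that found each source.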
by Seven Liu
**Who's it for** 👥

This template is perfect for content creators, marketers, and researchers managing WeChat public account articles! 🚀 It's ideal for n8n newcomers or anyone wanting to save time on manual content analysis, especially if you use Google Sheets for tracking. 📊 Whether you're into AI, 欧阳良宜 (Ouyang Liangyi), or automation, this is for you! 😄

**How it works / What it does** 🔧

This workflow automates the retrieval, filtering, classification, and summarization of WeChat articles. 🌐 It reads RSS feed links from a Google Sheet, filters articles from the last 10 days ⏳, cleans HTML content 🧹, classifies them as relevant or not 🎯, generates insightful Chinese summaries with AI 🤖, and saves results to Google Sheets and Notion. 📝 Outputs are Slack-formatted for team collaboration! 💬

**How to set up** 🛠️

1. Prepare Google Sheets: Use your own documentId (replace the example) and set up the sheets "Save Initial Links" (gid=198451233) and "Save Processed Data" (gid=1936091950). 📋
2. Configure Credentials: Add Google Sheets and OpenAI API credentials; avoid hardcoding keys! 🔐
3. Set RSS Feed: Update the rss_feed_url in the "RSS Read" node with your WeChat RSS feed. 🌐
4. Customize AI: Tweak the "Relevance Classification" and "Basic LLM Chain" prompts for your topics (e.g., 欧阳良宜, AI). 🎨
5. Notion (Optional): Swap the databaseId (e.g., 22e79d55-2675-8055-a143-d55302c3c1b1) with your own. 📚
6. Run Workflow: Trigger manually via the "When clicking 'Execute workflow'" node. 🚀

**Requirements** ✅

- n8n account with Google Sheets and OpenAI integrations.
- Access to a WeChat public account RSS feed.
- Basic JSON and node-configuration knowledge.

**How to customize the workflow** 🎛️

- Topic Adjustment: Update categories in "Relevance Classification" for new topics (e.g., "technology", "education"). 🌱
- Summary Length: Modify the LLM prompt in "Basic LLM Chain" to adjust length or style. ✂️
- Output Destination: Add Slack or Email nodes for more outputs. 📩
- Date Filter: Change the "IF (Filter by Date)" condition (e.g., 7 days instead of 10). ⏰
- Scalability: Use a "Schedule Trigger" node for automation. ⏳
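The "IF (Filter by Date)" condition described above can be sketched as a simple recency check. The ISO-8601 date format is an assumption about the RSS feed; adjust the parsing to whatever your feed emits:

```python
# Sketch: keep only articles published within the last N days (10 by default,
# as in this template). Assumes ISO-8601 pubDate strings.

from datetime import datetime, timedelta, timezone

def is_recent(pub_date_iso, days=10, now=None):
    now = now or datetime.now(timezone.utc)
    published = datetime.fromisoformat(pub_date_iso)
    return now - published <= timedelta(days=days)
```

Changing the window to 7 days is then just `days=7` in the condition.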
by Jah coozi
**AI Medical Symptom Checker & Health Assistant**

A responsible, privacy-focused health information assistant that provides general health guidance while maintaining strict safety protocols and medical disclaimers.

⚠️ **IMPORTANT DISCLAIMER**

This tool provides general health information only and is NOT a substitute for professional medical advice, diagnosis, or treatment. Always consult qualified healthcare providers for medical concerns.

🚀 **Key Features**

Safety first:

- **Emergency Detection**: Automatically identifies emergency situations
- **Immediate Escalation**: Provides emergency numbers for critical cases
- **Clear Disclaimers**: Every response includes medical disclaimers
- **No Diagnosis**: Never attempts to diagnose conditions
- **Professional Referral**: Always recommends consulting healthcare providers

Core functionality:

- **Symptom Information**: General information about common symptoms
- **Wellness Guidance**: Health tips and preventive care
- **Medication Reminders**: General medication information
- **Multi-Language Support**: Serve diverse communities
- **Privacy Protection**: No data storage, anonymous processing
- **Resource Links**: Connects to trusted health resources

🎯 **Use Cases**

- General Health Information: Learn about symptoms and conditions
- Pre-Appointment Preparation: Organize questions for doctors
- Wellness Education: General health and prevention tips
- Emergency Detection: Immediate guidance for critical situations
- Health Resource Navigation: Find appropriate care providers

🛡️ **Safety Protocols**

Emergency keywords detected:

- Chest pain, heart attack, stroke
- Breathing difficulties
- Severe bleeding, unconsciousness
- Allergic reactions, poisoning
- Mental health crises

Response guidelines:

- Never diagnoses conditions
- Never prescribes medications
- Always includes disclaimers
- Encourages professional consultation
- Provides emergency numbers when needed

🔧 **Setup Instructions**

1. Configure OpenAI API: Add your API key and set the temperature to 0.3 for consistency
2. Review Legal Requirements: Check local health information regulations, customize disclaimers as needed, and implement required data policies
3. Emergency Contacts: Update emergency numbers for your region, add local health resources, and include mental health hotlines
4. Test Thoroughly: Verify emergency detection, check disclaimer display, and test various symptom queries

💡 **Example Interactions**

- General symptom query: User: "I have a headache for 3 days" → Bot provides general headache information, self-care tips, and when to see a doctor
- Emergency detection: User: "Chest pain, can't breathe" → Bot gives an EMERGENCY response with immediate action steps and emergency numbers
- Wellness query: User: "How can I improve my sleep?" → Bot shares general sleep-hygiene tips and healthy-habits information

🏥 **Integration Options**

- **Healthcare Websites**: Embed as a support widget
- **Telemedicine Platforms**: Pre-consultation tool
- **Health Apps**: General information module
- **Insurance Portals**: Member resource
- **Pharmacy Systems**: General drug information

📊 **Compliance & Privacy**

- **HIPAA Considerations**: No PHI storage
- **GDPR Compliant**: No personal data retention
- **Anonymous Processing**: Session-based only
- **Audit Trails**: Optional logging for compliance
- **Data Encryption**: Secure transmission

🚨 **Limitations**

- Cannot diagnose medical conditions
- Cannot prescribe treatments
- Cannot replace emergency services
- Cannot provide specific medical advice
- Should not delay seeking medical care

🔒 **Best Practices**

- Always maintain clear disclaimers
- Never minimize serious symptoms
- Encourage professional consultation
- Keep information general and educational
- Update emergency contacts regularly
- Review and update health information
- Monitor for misuse
- Maintain audit trails where required

🌍 **Customization Options**

- Add local emergency numbers
- Include regional health resources
- Translate to local languages
- Integrate with local health systems
- Add specific disclaimers
- Customize for specific populations

Start providing responsible health information today!
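The emergency-keyword detection described in this template can be sketched as a simple case-insensitive substring check. The keyword list below is illustrative only and must be reviewed against local medical guidance before any real deployment:

```python
# Sketch: emergency-keyword pre-check. The list is a hypothetical starting
# point, not a clinically validated set.

EMERGENCY_KEYWORDS = [
    "chest pain", "heart attack", "stroke", "can't breathe",
    "severe bleeding", "unconscious", "allergic reaction", "poisoning", "suicide",
]

def is_emergency(message):
    text = message.lower()
    return any(keyword in text for keyword in EMERGENCY_KEYWORDS)
```

A positive match should short-circuit the normal AI response and immediately return emergency numbers and action steps instead.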
by Guillaume Duvernay
This n8n template provides a powerful AI-powered chatbot that acts as your personal Spotify DJ. Simply tell the chatbot what kind of music you're in the mood for, and it will intelligently create a custom playlist, give it a fitting name, and populate it with relevant tracks directly in your Spotify account. The workflow is built to be flexible, allowing you to easily change the underlying AI model to your preferred provider, making it a versatile starting point for any AI-driven project.

**Who is this for?**

- **Music lovers**: Instantly create playlists for any activity, mood, or genre without interrupting your flow.
- **Developers & AI enthusiasts**: A perfect starting point to understand how to build a functional AI agent that uses tools to interact with external services.
- **Automation experts**: See a practical example of how to chain AI actions and sub-workflows for more complex, stateful automations.

**What problem does this solve?**

Manually creating a good playlist is time-consuming. You have to think of a name, search for individual songs, and add them one by one. This workflow solves that by:

- **Automating playlist creation**: Turns a simple natural-language request (e.g., "I need a playlist for my morning run") into a fully formed Spotify playlist.
- **Reducing manual effort**: Eliminates the tedious task of searching for and adding multiple tracks.
- **Providing player control**: Allows you to manage your Spotify player (play, pause, next) directly from the chat interface.
- **Centralizing music management**: Acts as a single point of control for both creating playlists and managing playback.

**How it works**

1. Trigger & input: The workflow starts when you send a message in the Chat Trigger interface.
2. AI agent & tool use: An AI agent, powered by a Large Language Model (LLM), interprets your message. It has access to a set of "tools" that allow it to interact with Spotify.
3. Playlist-creation sub-workflow: If you ask for a new playlist, the agent calls a sub-workflow using the Create new playlist tool. This sub-workflow uses another AI call to brainstorm a creative playlist name and a list of suitable songs based on your request.
4. Spotify actions: The sub-workflow then connects to Spotify to create a new, empty playlist with the generated name; search for each song from the AI's list to get its official Spotify track ID; and add each track to the new playlist.
5. Player control: If your request is to control the music (e.g., "pause the music"), the agent uses the appropriate tool (Pause player, Resume player, etc.) to directly control your active Spotify player.

**Setup**

1. Accounts & API keys: You will need active accounts and credentials for your AI provider (e.g., OpenAI, Groq, or local LLMs via Ollama) to power the AI agent and the playlist generation, and for Spotify to create playlists and control the player. You'll need to register an application in the Spotify Developer Dashboard to get your credentials.
2. Configure credentials: Add your AI provider's API key to the Chat Model nodes. The template uses OpenAI by default, but you can easily swap this out for any compatible LangChain model node. Add your Spotify OAuth2 credentials to all Spotify and Spotify Tool nodes.
3. Activate workflow: Once all credentials are set and the workflow is saved, click the "Active" toggle. You can now start interacting with your Spotify AI agent via the chat panel!

**Taking it further**

This template is a great foundation. Here are a few ideas to expand its capabilities:

- **Become the party DJ**: Make the Chat Trigger's webhook public. You can then generate a QR code that links to the chat URL. Party guests can scan the code and request songs directly from their phones, which the agent can add to a collaborative playlist or the queue.
- **Expand the agent's skills**: The Spotify Tool node has more actions available. Add a new tool for Add to Queue so you can ask the agent to queue up a specific song without creating a whole new playlist.
- **Integrate with other platforms**: Swap the Chat Trigger for a Telegram or Discord trigger to build a Spotify bot for your community. You could also connect it to a Webhook to take requests from a custom web form.
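The "Spotify actions" step above collects one track ID per search result and then adds them to the playlist. Since Spotify's add-items endpoint accepts at most 100 track URIs per request, a long AI-generated song list needs batching. A minimal sketch (track IDs are placeholders):

```python
# Sketch: turn found track IDs into Spotify URIs and batch them for the
# playlist-add step (max 100 URIs per add-items request).

def batch_track_uris(track_ids, batch_size=100):
    uris = [f"spotify:track:{tid}" for tid in track_ids]
    return [uris[i:i + batch_size] for i in range(0, len(uris), batch_size)]
```

For the playlists this agent generates (typically a few dozen tracks), a single batch usually suffices, but batching keeps the sub-workflow robust for longer lists.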
by ARRE
Good to know: This workflow automatically transcribes your favorite podcasts or videos saved in a YouTube playlist and generates a comprehensive, AI-powered summary, so you can quickly understand the main topics and insights without having to watch or listen to the entire episode.

👤 **Who is this for?**

- Podcast fans who want to save time and get the key points from episodes
- Busy professionals who follow educational or industry videos and need quick takeaways
- Content creators or researchers who organize and review large amounts of video/audio material
- Anyone who wants to efficiently capture and summarize information from YouTube playlists

❓ **What problem is this workflow solving?**

This workflow solves the challenge of information overload from long-form podcasts and videos. It:

- Automatically transcribes each video or podcast episode in your chosen YouTube playlist
- Uses AI to create a clear, well-structured summary of the content
- Lets you learn and extract valuable information without watching or listening to the entire recording
- Organizes everything in a Google Sheets document for easy tracking and future reference

✅ **What this workflow does:**

- 📺 Fetches all videos from a specified YouTube playlist
- 🔗 Extracts video titles, URLs, and IDs
- 📝 Retrieves and combines transcripts for each video or podcast episode
- 📜 Processes transcript data for clarity
- 🤖 Uses AI to generate a detailed, sectioned summary that covers all main topics and insights
- 📊 Automatically logs video titles, transcripts, summaries, and row numbers to a Google Sheets spreadsheet

⚙️ **How it works:**

1. 🟢 Trigger: Start the workflow manually or on a schedule
2. 📺 Fetch videos from your chosen YouTube playlist
3. 🔗 Extract and organize video details (title, URL, ID)
4. 📝 Retrieve the transcript for each video or podcast episode
5. 📜 Combine transcript segments into a single script
6. ✂️ Extract the first sentences for focused summarization
7. 🤖 An AI agent creates a comprehensive summary of the episode or video
8. 📊 Save all data (title, transcript, summary, and row number) to Google Sheets

🛠️ **How to use:**

1. Set up YouTube OAuth2 credentials in n8n
2. Configure Google Sheets OAuth2 credentials
3. Set up API credentials for transcript and AI processing
4. Create and link your Google Sheets document
5. Input your playlist ID and adjust any filters as needed
6. Activate the workflow

📝 **Requirements:**

- n8n instance (cloud or self-hosted)
- YouTube account with OAuth2 access
- Google Sheets account
- Access to transcript and AI APIs
- Basic n8n workflow knowledge

🟢 **Customizing this workflow:**

- Change the YouTube playlist ID to target your preferred podcasts or video series
- Adjust the transcript retrieval process for other APIs or formats
- Customize the AI prompt for different summary styles or focus areas
- Add or remove fields in the Google Sheets output
- Change the workflow trigger or polling frequency
- Switch to a different AI model if desired

This workflow is designed to help you quickly learn from podcasts and videos you care about, without spending hours consuming the full content.
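The transcript-processing steps (combine segments into a single script, then extract the first sentences) can be sketched as two small functions. The segment shape (a `text` key) is an assumption about the transcript API's response, and the sentence split is deliberately naive:

```python
# Sketch: merge transcript segments, then trim to opening sentences for a
# focused summarization prompt. Segment shape is hypothetical.

def combine_segments(segments):
    return " ".join(seg["text"].strip() for seg in segments)

def first_sentences(script, n=3):
    # Naive split on '.'; adequate for trimming prompt input, not full NLP.
    sentences = [s.strip() for s in script.split(".") if s.strip()]
    return ". ".join(sentences[:n]) + "."
```

Trimming the script before the AI call keeps token usage predictable on very long episodes.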
by Lucas Peyrin
**How it works**

This template is a complete, hands-on tutorial for building a RAG (Retrieval-Augmented Generation) pipeline. In simple terms, you'll teach an AI to become an expert on a specific topic (in this case, the official n8n documentation) and then build a chatbot to ask it questions. Think of it like this: instead of a general-knowledge AI, you're building an expert librarian.

The workflow is split into two main parts:

**Part 1: Indexing the Knowledge (Building the Library)**

This is a one-time process you run manually. The workflow automatically scrapes all the pages of the n8n documentation, breaks them down into small, digestible chunks, and uses an AI model to create a special numerical representation (an "embedding") for each chunk. These embeddings are then stored in your own private knowledge base (a Supabase vector store). This is like a librarian reading every book and creating a hyper-detailed index card for every paragraph.

**Part 2: The AI Agent (The Expert Librarian)**

This is the chat interface. When you ask a question, the AI agent doesn't guess the answer. Instead, it uses your question to find the most relevant "index cards" (chunks) from the knowledge base it just built. It then feeds these specific, relevant chunks to a powerful language model (like Gemini) with a strict instruction: "Answer the user's question using ONLY this information." This ensures the answers are accurate, factual, and grounded in your provided documents.

**Set up steps**

Setup time: ~15-20 minutes. This is an advanced workflow that requires setting up a free external database. Follow these steps carefully.

1. Set up Supabase (your knowledge base): You need a free Supabase account. Follow the detailed instructions in the large Workflow Setup sticky notes in the top-right of the workflow to create a new Supabase project, run the provided SQL query in the SQL Editor to prepare your database, and get your Project URL and Service Role Key.
2. Configure n8n credentials: In your n8n instance, create a new Supabase credential using the Project URL and Service Role Key from the previous step. Create a new Google AI credential with your Gemini API key.
3. Configure the workflow nodes: Select your new Supabase credential in the three Supabase nodes: Your Supabase Vector Store, Official n8n Documentation, and Keep Supabase Instance Alive. Select your new Google AI credential in the three Gemini nodes: Gemini Chunk Embedding, Gemini Query Embedding, and Gemini 2.5 Flash.
4. Build the knowledge base: Find the Start Indexing manual trigger node at the top-left. Click its "Execute workflow" button to start the indexing process. This will take several minutes as it scrapes and processes the entire n8n documentation. You only need to do this once.
5. Chat with your expert agent: Once the indexing is complete, activate the entire workflow. Open the RAG Chatbot chat trigger node and copy its Public URL. Open the URL in a new tab and start asking questions about n8n, for example: "How does the IF node work?" or "What is a sub-workflow?".
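The "small, digestible chunks" step of the indexing pipeline is typically a sliding window over each page's text, with a little overlap so sentences cut at a boundary still appear whole in at least one chunk. A minimal sketch (the sizes are illustrative; the template's actual splitter settings may differ):

```python
# Sketch: split a documentation page into overlapping chunks before embedding.
# chunk_size/overlap values are illustrative assumptions.

def chunk_text(text, chunk_size=500, overlap=50):
    chunks, start = [], 0
    step = chunk_size - overlap  # advance less than a full chunk to overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

Each chunk is then embedded individually (the Gemini Chunk Embedding node in this workflow) and stored alongside its source URL so the agent can cite where an answer came from.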
by Roshan Ramani
**Overview**

An intelligent email automation workflow that revolutionizes how you handle email responses. This system monitors your Gmail inbox, uses AI to determine which emails require replies, generates professional responses, and sends them only after your approval via Telegram. Perfect for busy professionals who want to maintain personalized communication while leveraging AI efficiency.

🌟 **Key Features**

Intelligent email analysis:

- **Smart Detection**: Automatically identifies emails that genuinely need responses
- **Context Understanding**: Distinguishes between promotional content, newsletters, and actionable emails
- **Priority Filtering**: Focuses on emails with questions, requests, or time-sensitive matters

AI-powered response generation:

- **Professional Tone**: Maintains appropriate business communication standards
- **Contextual Replies**: Generates responses based on email content and context
- **Structured Output**: Creates properly formatted subject lines and email bodies
- **Customizable Prompts**: Easily adjust AI behavior to match your communication style

Human-in-the-loop approval:

- **Telegram Integration**: Review and approve responses directly from your mobile device
- **Visual Preview**: See both the original email and the AI-generated response before sending
- **Dual Approval System**: Approve or reject with simple Telegram buttons
- **Timeout Protection**: Automatically expires after 5 minutes to prevent accidental sends

🔧 **How It Works**

Workflow architecture:

1. Email Monitoring: Continuous Gmail inbox surveillance (every minute)
2. Inbox Filtering: Processes only emails in your main inbox folder
3. AI Analysis: Determines response necessity using advanced language models
4. Response Generation: Creates professional, contextual replies when needed
5. Telegram Notification: Sends a preview to your Telegram for approval
6. Conditional Sending: Executes the email send only upon your explicit approval

Decision logic (the AI evaluates emails based on):

- **Question Detection**: Identifies direct questions requiring answers
- **Action Requests**: Recognizes requests for information or tasks
- **Urgency Assessment**: Prioritizes time-sensitive communications
- **Context Analysis**: Considers sender, subject, and content relevance

🚀 **Setup Requirements**

Prerequisites:

- **Gmail Account**: With OAuth2 authentication enabled
- **OpenAI API Key**: For AI language model access
- **Telegram Bot**: Personal bot token and chat ID
- **n8n Instance**: Cloud or self-hosted environment

Required credentials:

- Gmail OAuth2 credentials
- OpenAI API authentication
- Telegram bot token and chat configuration

📊 **Use Cases**

Business applications:

- **Customer Support**: Automated responses to common inquiries
- **Sales Teams**: Quick replies to prospect questions
- **Account Management**: Timely responses to client communications
- **HR Operations**: Efficient handling of employee inquiries

Personal productivity:

- **Email Management**: Reduce inbox overwhelm
- **Professional Communication**: Maintain consistent response quality
- **Time Management**: Focus on high-priority tasks while AI handles routine replies
- **Mobile Workflow**: Approve emails anywhere via Telegram

⚙️ **Customization Options**

AI behavior tuning:

- **Response Style**: Adjust tone from formal to casual
- **Content Filters**: Modify email analysis criteria
- **Response Length**: Control reply brevity or detail level
- **Language Patterns**: Customize communication style

Workflow modifications:

- **Polling Frequency**: Adjust email checking intervals
- **Approval Timeout**: Modify decision time limits
- **Multi-Account Support**: Extend to multiple Gmail accounts
- **Category Routing**: Different handling for different email types

🔒 **Security & Privacy**

Data protection:

- **Local Processing**: All email analysis occurs within your n8n instance
- **No Data Storage**: Email content is not permanently stored
- **Secure Authentication**: OAuth2 and API key protection
- **Encrypted Communication**: Secure Telegram API integration

Access control:

- **Personal Approval**: You control every outgoing message
- **Audit Trail**: Complete workflow execution logging
- **Fail-Safe Design**: Defaults to no action if approval isn't received

📈 **Performance & Reliability**

Efficiency metrics:

- **Processing Speed**: Sub-second email analysis
- **Accuracy**: High-quality response generation
- **Reliability**: Robust error handling and retry mechanisms
- **Scalability**: Handles high email volumes efficiently

Resource usage:

- **Lightweight Operation**: Minimal server resource consumption
- **API Optimization**: Efficient OpenAI token usage
- **Rate Limiting**: Respects Gmail and Telegram API limits

💡 **Best Practices**

Optimization tips:

- **Monitor AI Responses**: Regularly review and refine AI prompts
- **Approval Patterns**: Establish consistent approval workflows
- **Response Templates**: Create reusable response patterns
- **Performance Monitoring**: Track workflow efficiency metrics

Common configurations:

- **Business Hours**: Limit processing to working hours
- **VIP Senders**: Priority handling for important contacts
- **Subject Filters**: Custom rules for specific email types
- **Escalation Rules**: Forward complex emails to human review

🏆 **Benefits**

Productivity gains:

- **Time Savings**: Reduce manual email composition time by 60-80%
- **Consistency**: Maintain professional communication standards
- **Responsiveness**: Faster reply times improve customer satisfaction
- **Focus**: Concentrate on high-value tasks while AI handles routine communications

Professional advantages:

- **Always Available**: Respond to emails even when busy
- **Quality Assurance**: AI ensures grammatically correct, professional responses
- **Scalability**: Handle increasing email volumes without proportional time investment
- **Competitive Edge**: Faster response times improve business relationships

Tags: Email Automation, AI Assistant, Gmail Integration, Telegram Bot, Workflow Automation, OpenAI, Business Productivity, Customer Service, Response Management, Professional Communication
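The decision logic in this template (question detection, action requests, promotional filtering) is performed by the AI, but a cheap keyword pre-filter with the same shape can screen out obvious non-candidates before spending tokens. The hint words below are illustrative, not the template's actual criteria:

```python
# Sketch: heuristic pre-filter mirroring the AI's decision logic.
# Keywords are hypothetical examples.

ACTION_HINTS = ["please", "could you", "can you", "need", "urgent", "deadline"]

def likely_needs_reply(subject, body):
    text = f"{subject} {body}".lower()
    if "unsubscribe" in text:  # common promotional/newsletter marker
        return False
    return "?" in text or any(hint in text for hint in ACTION_HINTS)
```

Emails that pass this check would still go through the full AI analysis and the Telegram approval step before anything is sent.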
by Oneclick AI Squad
This guide walks you through setting up an automated workflow that compares live flight fares across multiple booking platforms (e.g., Skyscanner, Akasa Air, Air India, IndiGo) using API calls, sorts the results by price, and sends the best deals via email. Ready to automate your flight fare comparison process? Let’s get started! What’s the Goal? Automatically fetch and compare live flight fares from multiple platforms using scheduled triggers. Aggregate and sort fare data to identify the best deals. Send the comparison results via email for review or action. Enable 24/7 fare monitoring with seamless integration. By the end, you’ll have a self-running system that delivers the cheapest flight options effortlessly. Why Does It Matter? Manual flight fare comparison is time-consuming and often misses the best deals. Here’s why this workflow is a game-changer: Zero Human Error**: Automated data fetching and sorting ensure accuracy. Time-Saving Automation**: Instantly compare fares across platforms, boosting efficiency. 24/7 Availability**: Monitor fares anytime without manual effort. Cost Optimization**: Focus on securing the best deals rather than searching manually. Think of it as your tireless flight fare assistant that always finds the best prices. How It Works Here’s the step-by-step magic behind the automation: Step 1: Trigger the Workflow Set Schedule Node**: Triggers the workflow at a predefined schedule to check flight fares automatically. Captures the timing for regular fare updates. Step 2: Process Input Data Set Input Data Node**: Sets the input parameters (e.g., origin, destination, departure date, return date) for flight searches. Prepares the data to be sent to various APIs. Step 3: Fetch Flight Data Skyscanner API Node**: Retrieves live flight fare data from Skyscanner using its API endpoint. Akasa Air API Node**: Fetches live flight fare data from Akasa Air using its API endpoint. 
Air India API Node**: Collects flight fare data directly from Air India’s API. IndiGo API Node**: Gathers flight fare data from IndiGo’s API. Step 4: Merge API Results Merge API Data Node**: Combines the flight data from Skyscanner and Akasa Air into a single dataset. Merge Both API Data Node**: Merges the data from Air India and IndiGo with the previous dataset. Merge All API Results Node**: Consolidates all API data into one unified result for further processing. Step 5: Analyze and Sort Compare Data and Sorting Price Node**: Compares all flight fares and sorts them by price to highlight the best deals. Step 6: Send Results Send Response via Email Node**: Sends the sorted flight fare comparison results to the user via email for review or action. How to Use the Workflow? Importing this workflow in n8n is a straightforward process that allows you to use this pre-built solution to save time. Below is a step-by-step guide to importing the Flight Fare Comparison Workflow in n8n. Steps to Import a Workflow in n8n Obtain the Workflow JSON Source the Workflow: The workflow is shared as a JSON file or code snippet (provided earlier or exported from another n8n instance). Format: Ensure you have the workflow in JSON format, either as a file (e.g., workflow.json) or copied text. Access the n8n Workflow Editor Log in to n8n: Open your n8n instance (via n8n Cloud or self-hosted). Navigate to Workflows: Go to the Workflows tab in the n8n dashboard. Open a New Workflow: Click Add Workflow to create a blank workflow. Import the Workflow Option 1: Import via JSON Code (Clipboard): In the n8n editor, click the three dots (⋯) in the top-right corner to open the menu. Select Import from Clipboard. Paste the JSON code (provided earlier) into the text box. Click Import to load the workflow. Option 2: Import via JSON File: In the n8n editor, click the three dots (⋯) in the top-right corner. Select Import from File. Choose the .json file from your computer. 
Click **Open** to import the workflow.

## Setup Notes

- **API Credentials**: Configure each API node (Skyscanner, Akasa Air, Air India, IndiGo) with the respective API keys and endpoints. Check each API provider's documentation for details.
- **Email Integration**: Authorize the Send Response via Email node with your email service (e.g., Gmail SMTP settings or an email API like SendGrid).
- **Input Customization**: Adjust the Set Input Data node to include the specific origin/destination pairs and date ranges you need.
- **Schedule Configuration**: Set the desired frequency in the Set Schedule node (e.g., daily at 9 AM IST).

## Example Input

Send a POST request to the workflow (if integrated with a webhook) with:

```json
{
  "origin": "DEL",
  "destination": "BOM",
  "departureDate": "2025-08-01",
  "returnDate": "2025-08-07"
}
```

## Optimization Tips

- **Error Handling**: Add IF nodes to manage API failures or rate limits.
- **Rate Limits**: Include a Wait node if APIs have strict limits.
- **Data Logging**: Add a node (e.g., Google Sheets) to log all comparisons for future analysis.

This workflow transforms flight fare comparison into an automated, efficient process, delivering the best deals directly to your inbox!
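The merge-and-sort logic of Steps 4–5 can be sketched as a plain function of the kind you would put in an n8n Code node. The `platform` and `price` field names are illustrative assumptions; the real shape of each item depends on what the individual airline APIs return.

```javascript
// Sketch of the "Compare Data and Sorting Price" step, assuming each API
// node yields an array of fares shaped like { platform, price }.
function sortFaresByPrice(...fareLists) {
  // Flatten the per-platform arrays into one dataset, drop entries
  // without a numeric price, and sort ascending so the best deal is first.
  return fareLists
    .flat()
    .filter((fare) => typeof fare.price === 'number')
    .sort((a, b) => a.price - b.price);
}

// Example: fares merged from two platforms (prices in INR, made up).
const skyscanner = [{ platform: 'Skyscanner', price: 5400 }];
const indigo = [
  { platform: 'IndiGo', price: 4980 },
  { platform: 'IndiGo', price: 6100 },
];

const best = sortFaresByPrice(skyscanner, indigo);
console.log(best[0]); // cheapest fare comes first
```

Inside an actual Code node you would gather the arrays from `$input.all()` and return the sorted list as n8n items, but the comparison itself is just this one sort.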
by Sachin Shrestha
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This n8n workflow automates invoice management by integrating Gmail, PDF analysis, and Azure OpenAI GPT-4.1, with an optional human verification step for accuracy and control. It's ideal for businesses or individuals who regularly receive invoice emails and want to streamline their accounts payable process with minimal manual effort.

The system continuously monitors Gmail for new messages from specified senders. When it detects an email with a PDF attachment and a relevant subject line (e.g., "Invoice"), it automatically extracts text from the PDF, analyzes it using Azure OpenAI, and determines whether it is a valid invoice. If the AI is uncertain, the workflow sends a manual approval request to a human reviewer. Valid invoices are saved to local storage with a timestamped filename, and a confirmation email is sent upon successful processing.

## 🎯 Who This Is For

- Small to medium businesses
- Freelancers or consultants who receive invoices via email
- IT or automation teams looking to streamline document workflows
- Anyone using n8n with access to Gmail and Azure OpenAI

## ✅ Features

- **Gmail Monitoring** – Automatically checks for new emails from trusted senders
- **AI-Powered Invoice Detection** – Uses Azure GPT-4.1 to intelligently verify PDF contents
- **PDF Text Extraction** – Extracts readable text for analysis
- **Human-in-the-Loop Verification** – Requests approval when AI confidence is low
- **Secure File Storage** – Saves invoices locally with structured filenames
- **Email Notifications** – Sends confirmations or manual review alerts

## ⚙️ Setup Instructions

### 1. Prerequisites

- An active n8n instance (self-hosted or cloud)
- A Gmail account with OAuth2 credentials
- An Azure OpenAI account with access to the GPT-4.1 model
- A local directory for saving invoices (e.g., `C:/Test/Invoices/`)

### 2. Gmail OAuth2 Setup

In n8n, create Gmail OAuth2 credentials.
Configure them with Gmail API access (read emails and attachments), then update the Gmail Trigger node to filter by sender email (e.g., sender@gmail.com).

### 3. Azure OpenAI Setup

- Create Azure OpenAI API credentials in n8n.
- Ensure your endpoint is correctly set and GPT-4.1 access is enabled.
- Link the credentials in the AI Analysis node.

### 4. Customize Workflow Settings

- **Sender Email** – Update in the Gmail Trigger node
- **Notification Email** – Update in the Send Notification node
- **Save Directory** – Change in the Save Invoice node

### 5. Testing the Workflow

Send a test email from the configured sender with a PDF invoice, wait for the workflow to trigger, and check for:

- The file saved in the directory
- A confirmation email received
- A manual review request (if needed)

## 🔄 Workflow Steps

Gmail Trigger → Check for PDF Invoice → Extract PDF Text → Analyze with GPT-4.1 →
↳ If Invoice: Save & Notify
↳ If Uncertain: Request Human Review
↳ If Not Invoice: Send Invalid Alert
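The three-way branch at the end of the workflow can be sketched as a small routing function. The `isInvoice`/`confidence` field names and the 0.8 confidence threshold are assumptions for illustration, not the template's exact values, and the timestamped filename format is likewise just one plausible choice.

```javascript
// Sketch of the save/review/reject routing described above.
// Assumes the AI analysis node returns { isInvoice: boolean, confidence: number }.
function routeInvoice(analysis, now = new Date()) {
  if (analysis.confidence < 0.8) {
    // Low AI confidence: hand off to a human reviewer.
    return { action: 'human_review' };
  }
  if (!analysis.isInvoice) {
    // Confidently not an invoice: notify the sender/owner.
    return { action: 'invalid_alert' };
  }
  // Valid invoice: build a timestamped filename like
  // invoice_2025-08-01T09-30-00.pdf (colons are illegal on Windows paths).
  const stamp = now.toISOString().replace(/[:.]/g, '-').slice(0, 19);
  return { action: 'save', filename: `invoice_${stamp}.pdf` };
}

console.log(routeInvoice({ isInvoice: true, confidence: 0.95 }));
```

In n8n this would typically be split across an IF/Switch node and a Code node, but the decision logic is the same.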
by Mirza Ajmal
## Who is this for?

This workflow is ideal for:

- HR teams and recruiters seeking to streamline resume screening.
- Hiring managers who want quick, summarized candidate insights.
- Recruitment agencies handling large volumes of applicant data.
- Startups and small businesses looking to automate hiring without complex systems.
- AI and automation professionals who want to build smart HR workflows using n8n and OpenAI.

## What problem is this workflow solving? / Use Case

Manually reviewing resumes is time-consuming, inconsistent, and prone to human bias. This workflow automates the resume intake and evaluation process, ensuring that each applicant is screened, summarized, and scored using a consistent, data-driven method. It enhances efficiency and supports better hiring decisions.

## What this workflow does

- Accepts resume submissions via form and saves files to Google Drive.
- Extracts key information from resumes using AI (e.g., name, contact, education, experience).
- Summarizes candidate qualifications into a short, readable profile.
- Allows HR to rate applicants and leave comments.
- Logs all extracted data and evaluations into a centralized Google Sheet for tracking.

## Setup

1. A resume is submitted through an n8n form.
2. The uploaded file is automatically stored in Google Drive.
3. n8n uses OpenAI and document-parsing tools to extract candidate data.
4. The extracted information is structured and summarized using GPT.
5. A review form is triggered for internal HR rating and notes.
6. All data is appended to a Google Sheet for records and filtering.

## How to customize this workflow to your needs

- Change the form tool (e.g., Typeform, Tally, or custom HTML) based on your stack.
- Adapt the summary prompt to align with your specific role requirements.
- Add filters to auto-flag top-tier candidates based on score or skills.
- Integrate Slack or email to notify hiring managers when top resumes are processed.
- Connect to your ATS if you want to push processed resumes into your recruitment system.
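Before appending AI-extracted data to the Google Sheet, it helps to normalize the fields so a missing or blank value never breaks a row. The field list below is an illustrative assumption based on the extraction step described above; the real prompt may return different keys.

```javascript
// Sketch of a pre-append normalization step for extracted candidate data.
// REQUIRED_FIELDS is an assumed schema -- adapt it to your Sheet's columns.
const REQUIRED_FIELDS = ['name', 'email', 'education', 'experience'];

function normalizeCandidate(extracted) {
  const row = {};
  for (const field of REQUIRED_FIELDS) {
    // Fall back to 'N/A' so an empty or missing field still fills the column.
    const value = extracted[field];
    row[field] =
      typeof value === 'string' && value.trim() !== '' ? value.trim() : 'N/A';
  }
  return row;
}

// Example: the AI returned only two of the four expected fields.
console.log(normalizeCandidate({ name: '  Jane Doe ', email: 'jane@example.com' }));
```

Dropping a function like this into a Code node just before the Google Sheets append keeps the sheet's columns consistent even when GPT omits a field.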