by Adam Janes
This workflow lets you reply to a long email with a voice note instead of typing everything out. ChatGPT formats your audio response and creates an email draft for you.

How it works

When a new email arrives in your inbox, the workflow checks whether it needs a response; if it does, it sends you a message on Telegram via a VoiceEmailer bot. When you reply to that message with an audio message, the second part of this workflow is triggered. It checks that the message is in the right format, transcribes the audio, and creates a draft response that shows up in the same email thread.

Set up steps

- Add your credentials for Gmail and OpenAI.
- Create a Telegram bot following the instructions here.
- Connect your Telegram credentials so the workflow will use your bot.
- Turn on the workflow and message the bot from your Telegram account.
- Find the Chat ID from the Executions tab of your workflow and enter it as a variable.
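For reference, the "right format" check can live in a small Code node placed right after the Telegram trigger. Below is a minimal sketch, assuming the trigger outputs a standard Telegram Bot API update under `message` and that the expected Chat ID has been copied from the Executions tab; field names other than the Bot API's `voice` and `chat.id` are illustrative.

```javascript
// n8n Code node: pass the item through only if it is a voice note
// from the expected chat; otherwise return nothing so this branch stops.
const CHAT_ID = 123456789; // replace with the Chat ID from the Executions tab

const results = [];
for (const item of $input.all()) {
  const msg = item.json.message ?? {};
  const isVoice = Boolean(msg.voice);               // Telegram sets `voice` for audio notes
  const fromExpectedChat = msg.chat?.id === CHAT_ID;
  if (isVoice && fromExpectedChat) {
    results.push({ json: { fileId: msg.voice.file_id, duration: msg.voice.duration } });
  }
}
return results;
```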
by Rizqi Pratama Ramadhani
Automated Financial Tracker: Telegram Invoices to Notion with AI Summaries & Reports

Tired of manually logging every expense? Streamline your financial tracking with this powerful n8n workflow! Snap a photo of your invoice in Telegram and let AI (powered by Google Gemini) automatically extract the details, record them in your Notion database, and even send you a quick summary. Plus, get scheduled weekly reports with charts to visualize your spending. Automate your finances, save time, and gain better insights with this easy-to-use template! Transform your expense tracking from a chore into an automated breeze. Try it out!

Overview:

This workflow revolutionizes how you track your finances by automating the entire process from invoice capture to reporting. Simply send a photo of an invoice or receipt to a designated Telegram chat, and this workflow will:

- Extract Data with AI: Utilize Google Gemini's capabilities to perform OCR on the image, understand the content, and extract key details like item name, quantity, price, total, date, and even attempt to categorize the expense.
- Store in Notion: Automatically log each extracted transaction into a structured Notion database.
- Instant Feedback: Send a summary of the processed transaction back to your Telegram chat.
- Scheduled Reporting: Generate and send a visual summary of your expenses (e.g., weekly spending by category) as a chart to your preferred Telegram chat or group.

This workflow is perfect for individuals, freelancers, or small teams looking to effortlessly manage their expenses without manual data entry.

Key Features & Benefits:

- **Effortless Expense Logging:** Just send a picture, no more typing!
- **AI-Powered Data Extraction:** Leverages Google Gemini for intelligent invoice processing.
- **Centralized Data in Notion:** Keep all your financial records neatly organized in a Notion database.
- **Automated Categorization:** AI helps categorize your expenses (e.g., Food & Beverage, Transportation).
- **Instant Summaries:** Get immediate confirmation and a summary of what was recorded.
- **Visual Reporting:** Receive scheduled charts (e.g., bar charts of spending by category) directly in Telegram.
- **Customizable:** Easily adapt the workflow to your specific needs, categories, and reporting preferences.
- **Time-Saving:** Drastically reduces the time spent on manual financial administration.

How It Works (Workflow Breakdown):

The workflow is divided into two main parts:

Part 1: Real-time Invoice Processing & Logging (## Auto Notes Transaction with Telegram and Notion database)

- Telegram Trigger (Telegram Trigger | When recive photo): Activates when a new photo is sent to the configured Telegram chat.
- Get Photo Info (Get Info Photo from telegram chat): Retrieves the details of the received photo.
- Get Image Info (Get Image Info): Prepares the image data.
- AI Data Extraction (Google Gemini Chat Model & Basic LLM Chain): The image data is sent to the Google Gemini Chat Model. A specific prompt instructs the AI to extract details (date, ID, name, quantity, price, total, category, tax) in a JSON array format and provide a summary message. The categories include Food & Beverage, Transportation, Utilities, Shopping, Healthcare, Entertainment, Housing, and Education.
- Parse AI Output (Parse To your object | Table): Structures the AI's JSON output for easier handling.
- Split Transactions (Split Out | data transaction): If an invoice contains multiple items, this node splits them into individual records.
- Record to Notion (Record To Notion Database): Each transaction item is added as a new page/entry in your specified Notion database, mapping fields like Name, Quantity, Price, Total, Category, Date, and Tax.
- Send Telegram Summary (Sendback to chat and give summarize text): The summary message generated by the AI is sent back to the original Telegram chat.

Part 2: Scheduled Financial Reporting (## Schedule report to send on chanel or private message)

- Schedule Trigger (Schedule Trigger | for send chart report): Runs at a predefined interval (e.g., every week) to generate reports.
- Get Recent Data from Notion (Get Recent Data from Notions): Fetches transaction data from the Notion database for a specific period (e.g., the past week).
- Summarize Data (Summarize Transaction Data): Aggregates the data, for example by summing up the 'total' amount for each 'category'.
- Prepare Chart Data (Convert Data to JSON chart payload): Transforms the summarized data into a JSON format suitable for generating a chart (e.g., labels for categories, data for spending amounts); a sketch of this Code node appears after the sticky-note suggestions below.
- Generate Chart (Generate Chart): Uses the QuickChart node to create a visual chart (e.g., a bar chart) from the prepared data.
- Send Chart to Telegram (Send Chart Image to Group or Private Chat): Sends the generated chart image to a specified Telegram chat ID or group.

Nodes Used (Key Nodes):

- **Telegram Trigger & Telegram Node:** For receiving images and sending messages/images.
- **Google Gemini Chat Model (Langchain):** For AI-powered OCR and data extraction from invoices.
- **Basic LLM Chain (Langchain):** To interact with the language model using specific prompts.
- **Output Parser Structured (Langchain):** To structure the output from the language model.
- **Notion Node:** For reading from and writing to your Notion databases.
- **Schedule Trigger:** To automate the reporting process.
- **Summarize Node:** To aggregate data for reports.
- **Code Node:** Used here to format data for the chart.
- **QuickChart Node:** For generating charts.
- **SplitOut Node:** To process multiple items from a single invoice.

Setup Instructions:

1. Credentials:
   - Telegram: Create a Telegram bot and get its API token. You'll also need the Chat ID where you'll send invoices and where reports should be sent.
   - Google Gemini (PaLM) API: You'll need an API key for Google Gemini.
   - Notion: Create a Notion integration and get the API key. Create a Notion database with properties corresponding to the data you want to save (e.g., Name (Title), Quantity (Number), Price (Number), Total (Number), Category (Select), Date (Text or Date), Tax (Number)). Share this database with your Notion integration.
2. Configure Telegram Trigger: Add your Telegram Bot API token. When you first activate the workflow or test the trigger, send /start to your bot in the chat you want to use for sending invoices. n8n will then capture the Chat ID.
3. Configure Google Gemini Node (Google Gemini Chat Model): Select or add your Google Gemini API credentials. Review the prompt in the Basic LLM Chain node and adjust if necessary (e.g., date format, categories).
4. Configure Notion Nodes:
   - Record To Notion Database: Select or add your Notion API credentials, select your target Notion Database ID, and map the properties from the workflow (e.g., ={{ $json.name }}) to your Notion database columns.
   - Get Recent Data from Notions: Select or add your Notion API credentials, select your target Notion Database ID, and adjust the filter if needed (default is "past_week").
5. Configure Telegram Node for Reports (Send Chart Image to Group or Private Chat): Select or add your Telegram Bot API token and enter the Chat ID for the group or private chat where you want to receive the reports.
6. Configure Schedule Trigger (Schedule Trigger | for send chart report): Set your desired schedule (e.g., every Monday at 9 AM).
7. Test: Send an image of an invoice to your Telegram bot and check whether the data appears in Notion and whether you receive a summary message. Wait for the scheduled report or manually trigger it to test the reporting functionality.

Sticky Note Text for Your n8n Template:

(These are suggestions. You would place them directly into the sticky notes within your n8n workflow editor.)

Existing high-level sticky notes:
- ## Auto Notes Transaction with Telegram and Notion database
- ## Schedule report to send on chanel or private message

Specific sticky notes to add:
- **On Telegram Trigger | When recive photo:** INVOICE INPUT. Bot listens here for photos of your receipts/invoices. Ensure your Telegram Bot API token is set in credentials.
- **Near Google Gemini Chat Model & Basic LLM Chain:** AI MAGIC HAPPENS HERE. The image is sent to Google Gemini for data extraction. Check 'Basic LLM Chain' to customize the AI prompt (e.g., categories, output format). Requires Google Gemini API credentials.
- **On Parse To your object | Table:** STRUCTURING AI DATA. Converts the AI's text output into a usable JSON object. Check the schema if you modify the AI prompt significantly.
- **On Record To Notion Database:** SAVING TO NOTION. Extracted transaction data is saved here. Configure with your Notion API key & Database ID. Map fields correctly to your database columns!
- **On Sendback to chat and give summarize text:** TRANSACTION SUMMARY. Sends a confirmation message back to the user in Telegram with a summary of the recorded expense.
- **On Schedule Trigger | for send chart report:** REPORTING SCHEDULE. Set how often you want to receive your spending report (e.g., weekly, monthly).
- **On Get Recent Data from Notions:** FETCHING DATA FOR REPORT. Retrieves transactions from Notion for the report period. Default: "Past Week". Adjust the filter as needed. Requires Notion API credentials & Database ID.
- **On Summarize Transaction Data:** SUMMARIZING SPENDING. Aggregates your expenses, usually by category, to prepare for the chart.
- **On Convert Data to JSON chart payload (Code Node):** PREPARING CHART DATA. This Code node formats the summarized data into the JSON structure needed by QuickChart.
- **On Generate Chart (QuickChart Node):** GENERATING VISUAL REPORT. Creates the actual chart image based on your spending data. You can customize the chart type (bar, pie, etc.) here.
- **On Send Chart Image to Group or Private Chat:** SENDING REPORT TO TELEGRAM. Delivers the generated chart to your chosen Telegram chat/group. Set the correct Chat ID and Bot API token.
- **General sticky note (place where relevant):** CREDENTIALS NEEDED. Remember to set up API keys/tokens for Telegram, Google Gemini, and Notion.
- **General sticky note (place where relevant):** CUSTOMIZE ME! Adjust AI prompts for better accuracy, change the Notion database structure, and modify report frequency and content.
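As an illustration of the "Convert Data to JSON chart payload" Code node, here is a minimal sketch that turns summarized category totals into a Chart.js-style configuration that the QuickChart node can render. The incoming field names (`category`, `sum_total`) are assumptions; adjust them to whatever your Summarize node actually outputs.

```javascript
// Build a bar-chart payload (Chart.js config) from summarized spending data.
const items = $input.all();

const labels = items.map(i => i.json.category ?? 'Uncategorized');
const totals = items.map(i => Number(i.json.sum_total ?? 0));

const chart = {
  type: 'bar',
  data: {
    labels,
    datasets: [{ label: 'Spending by category', data: totals }],
  },
};

// The QuickChart node can read this JSON from a single field.
return [{ json: { chart: JSON.stringify(chart) } }];
```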
by Davide
This workflow allows users to generate AI videos using the cheaper Google Veo3 Fast model, save them to Google Drive, generate optimized titles with GPT-4o, and automatically upload them to YouTube and TikTok with Upload-Post. The entire process is triggered from a Google Sheet that acts as the central interface for input and output. It automates video creation, uploading, and tracking, ensuring seamless integration between Google Sheets, Google Drive, Google Veo3 Fast, TikTok and YouTube.

Benefits of this Workflow

- **No Code Interface**: Trigger and control the video production pipeline from a simple Google Sheet.
- **Full Automation**: Once set up, the entire video generation and publishing process runs hands-free.
- **AI-Powered Creativity**: Generates engaging YouTube and TikTok titles using GPT-4o and leverages advanced generative video AI from Google Veo3.
- **Cloud Storage & Backup**: Stores all generated videos on Google Drive for safekeeping.
- **YouTube Ready**: Automatically uploads to YouTube with correct metadata, saving time and boosting visibility.
- **TikTok Ready**: Automatically uploads to TikTok with correct metadata, saving time and boosting visibility.
- **Scalable**: Designed to process multiple video prompts by looping through new entries in Google Sheets.
- **API-First**: Utilizes secure API-based communication for all services.

How It Works

- Trigger: The workflow can be started manually ("When clicking 'Test workflow'") or scheduled ("Schedule Trigger") to run at regular intervals (e.g., every 5 minutes).
- Fetch Data: The "Get new video" node retrieves unfilled video requests from the Google Sheet (rows where the "VIDEO" column is empty).
- Video Creation: The "Set data" node formats the prompt and duration from the Google Sheet. The "Create Video" node sends a request to the Fal.run API (Google Veo3 Fast) to generate a video based on the prompt.
- Status Check: The "Wait 60 sec." node pauses execution for 60 seconds. The "Get status" node checks the video generation status. If the status is "COMPLETED", the workflow proceeds; otherwise, it waits again (see the sketch at the end of this description).
- Video Processing: The "Get Url Video" node fetches the video URL. The "Generate title" node uses OpenAI (GPT-4.1) to create an SEO-optimized YouTube and TikTok title. The "Get File Video" node downloads the video file.
- Upload & Update: The "Upload Video" node saves the video to Google Drive. One "HTTP Request" node uploads the video to YouTube via the Upload-Post API, and another uploads it to TikTok via the Upload-Post API. The "Update Youtube URL" and "Update result" nodes update the Google Sheet with the video URL and YouTube link.

Set Up Steps

- Google Sheet Setup: Create a Google Sheet with columns PROMPT, DURATION, VIDEO, and YOUTUBE_URL. Share the Sheet link in the "Get new video" node.
- API Keys: Obtain a Fal.run API key (for Veo3) and set it in the "Create Video" node (Header: Authorization: Key YOURAPIKEY). Get an Upload-Post API key and configure the "HTTP Request" nodes for the YouTube and TikTok uploads (Header: Authorization: Apikey YOUR_API_KEY).
- YouTube Upload Configuration: Replace YOUR_USERNAME in the "HTTP Request" node with your Upload-Post profile name.
- Schedule Trigger: Configure the "Schedule Trigger" node to run periodically (e.g., every 5 minutes).

Need help customizing? Contact me for consulting and support or add me on Linkedin.
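The wait-and-retry loop above can be reduced to a single flag that an IF node routes on. A hedged Code-node sketch follows; the `status` field name and the "COMPLETED" value come from the description above, everything else is illustrative rather than the template's exact implementation.

```javascript
// Decide whether the generated video is ready or the workflow should wait again.
const item = $input.first().json;

const status = item.status ?? 'IN_PROGRESS';
const done = status === 'COMPLETED';

// A downstream IF node can route on `done`:
//   true  -> fetch the video URL and continue,
//   false -> loop back to the "Wait 60 sec." node.
return [{ json: { ...item, done } }];
```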
by InfyOm Technologies
What problem does this workflow solve?

Many websites lack a smart, searchable interface, and visitors often leave due to unanswered questions. This workflow transforms any website into a Retrieval-Augmented Generation (RAG) chatbot: it automatically extracts content, creates embeddings, and enables real-time, context-aware chat on your own site.

What does this workflow do?

- Accepts a website URL through a form trigger.
- Fetches and cleans website content.
- Parses content into smaller sections.
- Generates vector embeddings using OpenAI (or your embedding model).
- Stores embeddings and metadata in Supabase's vector database.
- When a user asks a question: searches Supabase for relevant chunks via similarity search, retrieves matching content as context, sends context + question to OpenAI to generate an accurate answer, and returns the AI-generated response to the user in the chat interface.

Setup Instructions

- Website Form Trigger: Use a Form / HTTP Trigger to submit website URLs for indexing.
- Content Extraction & Chunking: Use HTTP nodes to fetch the HTML, clean and parse it (e.g., remove scripts and ads), and use a Function node to split it into manageable text chunks (see the sketch at the end of this description).
- Embedding Generation: Call OpenAI (or Cohere) to generate embeddings for each chunk, then insert vectors and metadata into Supabase via its API or the n8n Supabase node.
- User Query Handling: Use a Chat Trigger (webhook/UI) to receive user questions, convert each question into an embedding, query Supabase with similarity search (e.g., a match_documents RPC), retrieve the top-matching chunks, feed them into OpenAI together with the user question, and return the reply to the user.
- AI & Database Setup: an **OpenAI API key** for embedding and chat, plus a Supabase project with the vector extension enabled, tables for document chunks and embeddings, and a similarity search function like match_documents.

How to Embed the Chat Widget on Your Website

You can add the chatbot interface to your website with a simple JavaScript snippet. Steps:
- Open the "When chat message received" node and copy the Chat URL.
- Make sure the "Make Chat Publicly Available" toggle is enabled.
- Make sure the mode is "Embedded Chat".
- Follow the instructions given in the package here.

How it Works

Submit URL -> Form Trigger
Fetch Website Content -> HTTP Request
Clean & Chunk Content -> Function Node
Make Embeddings (OpenAI/Cohere)
Store in Supabase -> embeddings + metadata
User Chat -> Chat Trigger
Search for Similar Content -> Supabase similarity match
Generate Answer -> OpenAI completion w/ context
Send Reply -> Chat interface returns answer

Why Supabase?

Supabase offers a scalable Postgres-based vector database with extensions like pgvector, making it easy to store vector data alongside metadata, run ANN (Approximate Nearest Neighbor) similarity searches, and integrate seamlessly with n8n and your chatbot UI.

Who can use this?

Documentation websites, support portals, product/landing pages, and internal knowledge bases. Perfect for anyone who wants a smart, website-specific chatbot without building an entire AI stack from scratch.

Ready to Deploy?

Plug in your OpenAI API key, Supabase project credentials, and chat UI or webhook endpoint, and launch your AI-powered, website-specific RAG chatbot in minutes!
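For the chunking step, a minimal Function/Code-node sketch is shown below. The chunk size and overlap are arbitrary values to tune for your embedding model, and the `text` input field is an assumption about how the cleaned page content arrives.

```javascript
// Split cleaned page text into overlapping chunks for embedding.
const CHUNK_SIZE = 1000;   // characters per chunk (tune for your embedding model)
const OVERLAP = 100;       // characters shared between consecutive chunks

const text = $input.first().json.text ?? '';
const chunks = [];

for (let start = 0; start < text.length; start += CHUNK_SIZE - OVERLAP) {
  chunks.push(text.slice(start, start + CHUNK_SIZE));
}

// One item per chunk so the embedding node processes them individually.
return chunks.map((chunk, index) => ({ json: { chunk, index } }));
```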
by Niranjan G
This workflow leverages AI to intelligently analyze incoming Gmail messages and automatically apply relevant labels based on the email content. The default configuration includes the following labels:

- **Newsletter**: Subscription updates or promotional content.
- **Inquiry**: Emails requesting information or responses.
- **Invoice**: Billing and payment-related emails.
- **Proposal**: Business offers or collaboration opportunities.
- **Action Required**: Emails demanding immediate tasks or actions.
- **Follow-up Reminder**: Emails prompting follow-up actions.
- **Task**: Emails containing actionable tasks.
- **Personal**: Non-work-related emails.
- **Urgent**: Time-sensitive or critical communications.
- **Bank**: Banking alerts and financial statements.
- **Job Update**: Recruitment or job-related communications.
- **Spam/Junk**: Unwanted or irrelevant bulk emails.
- **Social/Networking**: Notifications from social platforms.
- **Receipt**: Purchase confirmations and receipts.
- **Event Invite**: Invitations or calendar-related messages.
- **Subscription Renewal**: Reminders for subscription expirations.
- **System Notification**: Technical alerts from services or systems.

You can customize labels and definitions based on your specific use case.

How it works:

- The workflow periodically retrieves new Gmail messages.
- Only emails without existing labels, regardless of read status, are sent to the AI for analysis.
- Email content (subject and body) is analyzed by an AI model to determine the appropriate label.
- Labels identified by the AI are applied to each email accordingly (see the sketch below).

Note: This workflow performs 100% better than the default Gmail trigger method, which is why it was switched from a Gmail trigger to a scheduled workflow. By selectively processing only unlabeled emails, it ensures comprehensive labeling while significantly reducing AI processing costs.

Setup Steps:

- Configure credentials for Gmail and your chosen AI service (e.g., OpenAI).
- Ensure labels exist in your Gmail account matching the workflow definitions.
- Adjust the AI prompt to match your labeling needs.
- Optionally customize the polling interval (default: every 2 minutes).

This workflow streamlines your email management, keeping your inbox organized effortlessly while optimizing resource usage.
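One practical detail is translating the label name returned by the AI into the Gmail label ID expected by the label operation. A hedged Code-node sketch, assuming the AI output arrives in a `label` field; the mapping values are placeholders, and the real label IDs come from your own Gmail account:

```javascript
// Map the AI-chosen label name to an existing Gmail label ID.
// Replace the IDs below with the ones from your own account
// (they typically look like "Label_1234567890").
const LABEL_IDS = {
  'Newsletter': 'Label_XXXXXXXX01',
  'Inquiry': 'Label_XXXXXXXX02',
  'Invoice': 'Label_XXXXXXXX03',
  // ...one entry per label used in the AI prompt
};

return $input.all().map(item => {
  const name = (item.json.label ?? '').trim();
  return {
    json: {
      ...item.json,
      labelId: LABEL_IDS[name] ?? null, // null -> leave the email unlabeled
    },
  };
});
```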
by Dr. Firas
AI-Powered HR Workflow: CV Analysis and Evaluation from Gmail to Sheets

Who is this for?

This workflow is designed for HR professionals, recruiters, startup founders, and operations teams who receive candidate resumes by email and want to automate the evaluation process using AI. It's ideal for teams that receive high volumes of applications and want to streamline screening without sacrificing quality.

What problem is this workflow solving?

Manually reviewing every resume is time-consuming, inconsistent, and often inefficient. This workflow automates the initial screening process by:
- Extracting resume data directly from incoming emails
- Analyzing resumes using GPT-4 to evaluate candidate fit
- Saving scores and notes in Google Sheets for easy filtering

It helps teams qualify candidates faster while staying organized.

What this workflow does

- Detects when a new email with a CV is received (Gmail)
- Filters out non-relevant messages using an AI classifier
- Extracts the resume text (PDF parsing)
- Uploads the original file to Google Drive
- Retrieves job offer details from a connected Google Sheet
- Uses GPT-4 to evaluate the candidate's fit for the job
- Parses the AI output to extract the candidate's score (see the sketch at the end of this description)
- Logs the results into a central Google Sheet
- Sends a confirmation email to the applicant

Setup

- Install n8n self-hosted
- Add your OpenAI API Key in the AI nodes
- Enable the following APIs in your Google Cloud Console: Gmail API, Google Drive API, Google Sheets API
- Create OAuth credentials and connect them in n8n
- Configure your Gmail trigger to watch the inbox receiving CVs
- Create a Google Sheet with columns like: Candidate, Score, Job, Status, etc.

How to customize this workflow to your needs

- Adjust the AI scoring prompt to match your company's hiring criteria
- Add new columns to the Google Sheet for additional metadata
- Include Slack or email notifications for each qualified candidate
- Add multiple job profiles and route candidates accordingly
- Add a Telegram or WhatsApp step to notify HR in real time

Documentation: Notion Guide

Need help customizing? Contact me for consulting and support: Linkedin / Youtube
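The "parse the AI output" step can be as simple as extracting a numeric score from the model's reply. A minimal sketch follows; it assumes the AI was prompted to include a line such as `Score: 8/10` and that the reply arrives in a `text` field, so adjust the pattern and field name to your own prompt and node output.

```javascript
// Pull a numeric score out of the GPT-4 evaluation text.
const reply = $input.first().json.text ?? '';

// Expecting something like "Score: 8/10" somewhere in the reply.
const match = reply.match(/score\s*[:=]?\s*(\d+(?:\.\d+)?)/i);
const score = match ? Number(match[1]) : null;

return [{ json: { score, evaluation: reply } }];
```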
by Muhammad Ashar
How It Works - Your AI Marketing Team in Action

This automation acts as your AI-powered content and image marketing assistant inside Telegram. With just a voice note or text message, it can:

- Understand your request: whether you send a text message or speak into Telegram, it transcribes and processes your input using GPT-4.
- Create and edit content: based on what you say, it can generate blog posts, LinkedIn posts, faceless videos, and AI-generated images, edit existing images, and search through your image database.
- Reply directly in Telegram: it sends you back the result (a post, image, or video link) without leaving the app.
- Built using LangChain agent logic: it intelligently chooses the right tool from a suite of sub-workflows like "Create Image", "Blog Post", or "Video" using agent reasoning.

Setup Steps - Get Started in Minutes!

Time estimate: ~15-30 minutes (faster if you're familiar with n8n)

1. Import the Template Pack. Download and install these workflows into your n8n: Create Image, Edit Image, Search Images, Blog Post, LinkedIn Post, Video.
2. Add Required Credentials: Telegram Bot, OpenRouter AI, Tavily API (for smart research), ElevenLabs (for voice in videos), PiAPI & Runway (for faceless videos).
3. Link the Tools to the Agent Node. Make sure the "Marketing Team Agent" is connected to each of the content creation tools as shown in the workflow.
4. Download Templates & Logs: the Google Sheets log template (to track output) and the Creatomate template (optional, for enhanced image control; shared in the Skool group).

Pro Tip: All detailed step-by-step setup instructions are included as sticky notes inside the n8n canvas. Just follow along!
by Saswat Saubhagya Rout
Use Case

This n8n workflow automates the creation and publication of technical blog posts based on a list of topics stored in Google Sheets. It fetches context using Tavily and Wikipedia, generates Markdown-formatted content with Gemini AI, commits it to a GitHub repository, and updates a Jekyll-powered blog, all without manual intervention. Ideal for developers, bloggers, or content teams who want to streamline technical content creation and publishing.

Setup Instructions

Prerequisites
- n8n (cloud or self-hosted)
- Tavily API key
- Google Sheets with blog topics
- Gemini (Google Palm) API key
- GitHub repository (Jekyll enabled)
- GitHub OAuth2 credentials
- Google OAuth2 credentials

Setup Steps
1. Import the workflow JSON into your n8n instance.
2. Set up the following credentials in n8n: Tavily API, Google Sheets OAuth2, Google Palm/Gemini AI, GitHub OAuth2.
3. Prepare your Google Sheet: columns Title, status, row_number; set status to blank for topics to be picked up.
4. Configure the GitHub repo and _posts/ path, plus the Jekyll setup (front matter, _config.yml, GitHub Pages).
5. Adjust prompt/custom parameters if needed.
6. Enable and deploy the workflow. Schedule it daily or trigger it manually.

Workflow Details

| Node | Function |
|------|----------|
| Schedule Trigger | Triggers the flow at a set interval |
| Google Sheets (Get Topic) | Fetches the next incomplete blog topic |
| Extract Topic | Parses topic text from the sheet |
| Tavily Search | Gathers up-to-date content related to the topic |
| Wikipedia Tool | Optionally adds more context or images |
| Summarize Results | Formats the context for the AI |
| Gemini AI Agent (LangChain) | Generates a Markdown blog post with YAML front matter |
| Set File Parameters | Prepares the filename, content, and commit message |
| GitHub Commit | Uploads the .md file to the _posts/ directory |
| Update Google Sheet | Marks the topic as done after a successful commit |

Customization Options

- Change the LLM prompt (e.g. tone, depth, format).
- Use OpenAI instead of Gemini by switching nodes.
- Modify the filename pattern or GitHub repo path.
- Add Slack/Discord notifications after publish.
- Extend the flow to upload images or embed YouTube links.

Community Nodes Used

This workflow uses the following community nodes:
- @tavily/n8n-nodes-tavily.tavily (for deep search)

> Ensure these are installed and enabled in your n8n instance.

Pro Tips

- Use GitHub Actions to trigger an automatic Jekyll build post-commit.
- Structure blog posts with front matter, headings, and a table of contents for SEO.
- Set the Schedule Trigger to a fixed daily time to keep content flowing.
- Enhance formatting in the AI output using code blocks, images, and lists.

Example Output

---
title: "How LLMs Are Changing Web Development"
date: "2025-07-25"
categories: [webdev, AI]
tags: [LLM, Gemini, n8n, automation]
excerpt: "Learn how LLMs like Gemini are transforming how we generate and deploy developer content."
author: "Saswat Saubhagya"
---

Table of Contents
- Introduction
- Understanding LLMs
- Use Cases in Web Development
- Challenges
- Conclusion
...
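As an example of the "Set File Parameters" step, the sketch below builds the Jekyll-style filename, file content, and commit message from the generated post. The `title` and `content` input field names, and the output field names used here, are assumptions; map them to your own AI Agent output and GitHub node configuration.

```javascript
// Prepare the filename, path and commit message for the GitHub commit node.
const { title = 'untitled-post', content = '' } = $input.first().json;

const date = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
const slug = title
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, '-')
  .replace(/(^-|-$)/g, '');

return [{
  json: {
    filePath: `_posts/${date}-${slug}.md`,   // Jekyll expects this naming scheme
    fileContent: content,                     // Markdown with YAML front matter
    commitMessage: `Add blog post: ${title}`,
  },
}];
```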
by Anderson Adelino
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Build intelligent AI chatbot with RAG and Cohere Reranker

Who is it for?

This template is perfect for developers, businesses, and automation enthusiasts who want to create intelligent chatbots that can answer questions based on their own documents. Whether you're building customer support systems, internal knowledge bases, or educational assistants, this workflow provides a solid foundation for document-based AI conversations.

How it works

This workflow creates an intelligent AI assistant that combines RAG (Retrieval-Augmented Generation) with Cohere's reranking technology for more accurate responses:

- Chat Interface: Users interact with the AI through a chat interface.
- Document Processing: PDFs from Google Drive are automatically extracted and converted into searchable vectors.
- Smart Search: When users ask questions, the system searches through vectorized documents using semantic search.
- Reranking: Cohere's reranker ensures the most relevant information is prioritized.
- AI Response: OpenAI generates contextual answers based on the retrieved information.
- Memory: Conversation history is maintained for context-aware interactions.

Setup steps

Prerequisites
- n8n instance (self-hosted or cloud)
- OpenAI API key
- Supabase account with the vector extension enabled
- Google Drive access
- Cohere API key

1. Configure Supabase Vector Store

First, create a table in Supabase with vector support:

```sql
CREATE TABLE cafeina (
  id SERIAL PRIMARY KEY,
  content TEXT,
  metadata JSONB,
  embedding VECTOR(1536)
);

-- Create a function for similarity search
CREATE OR REPLACE FUNCTION match_cafeina(
  query_embedding VECTOR(1536),
  match_count INT DEFAULT 10
)
RETURNS TABLE(
  id INT,
  content TEXT,
  metadata JSONB,
  similarity FLOAT
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    cafeina.id,
    cafeina.content,
    cafeina.metadata,
    1 - (cafeina.embedding <=> query_embedding) AS similarity
  FROM cafeina
  ORDER BY cafeina.embedding <=> query_embedding
  LIMIT match_count;
END;
$$;
```

2. Set up credentials

Add the following credentials in n8n:
- **OpenAI**: Add your OpenAI API key
- **Supabase**: Add your Supabase URL and service role key
- **Google Drive**: Connect your Google account
- **Cohere**: Add your Cohere API key

3. Configure the workflow

- In the "Download file" node, replace URL DO ARQUIVO with your Google Drive file URL.
- Adjust the table name in both Supabase Vector Store nodes if needed.
- Customize the agent's tool description in the "searchCafeina" node.

4. Load your documents

- Execute the bottom workflow (starting with "When clicking 'Execute workflow'").
- This will download your PDF, extract the text, and store it in Supabase.
- You can repeat this process for multiple documents.

5. Start chatting

Once documents are loaded, activate the main workflow and start chatting with your AI assistant through the chat interface.
How to customize

- **Different document types**: Replace the Google Drive node with other sources (Dropbox, S3, local files).
- **Multiple knowledge bases**: Create separate vector stores for different topics.
- **Custom prompts**: Modify the agent's system message for specific use cases.
- **Language models**: Switch between different OpenAI models or use other LLM providers.
- **Reranking settings**: Adjust the top-k parameter for more or fewer search results.
- **Memory window**: Configure the conversation memory buffer size.

Tips for best results

- Use high-quality, well-structured documents for better search accuracy.
- Keep document chunks reasonably sized for optimal retrieval.
- Regularly update your vector store with new information.
- Monitor token usage to optimize costs.
- Test different reranking thresholds for your use case.

Common use cases

- **Customer Support**: Create bots that answer questions from product documentation.
- **HR Assistant**: Build assistants that help employees find information in company policies.
- **Educational Tutor**: Develop tutors that answer questions from course materials.
- **Research Assistant**: Create tools that help researchers find relevant information in papers.
- **Legal Helper**: Build assistants that search through legal documents and contracts.
by Vishal Kumar
Trigger

The workflow runs when a GitLab Merge Request (MR) is created or updated.

Extract & Analyze

It retrieves the code diff and sends it to Claude AI or GPT-4o for risk assessment and issue detection.

Generate Report

AI produces a structured summary with:
- Risk levels
- Identified issues
- Recommendations
- Test cases

Notify Developers

The report is:
- Emailed to developers and QA teams
- Posted as a comment on the GitLab MR (see the sketch below)

Setup Guide

1. Connect GitLab: add GitLab API credentials and select the repositories to track.
2. Configure AI Analysis: enter an Anthropic (Claude) or OpenAI (GPT-4o) API key.
3. Set Up Notifications: add Gmail credentials and update the email distribution list.
4. Test & Automate: create a test MR to verify analysis and email delivery.

Key Benefits

- **Automated Code Review**: AI-driven risk assessment and recommendations
- **Security & Compliance**: Identifies vulnerabilities before code is merged
- **Integration with GitLab CI/CD**: Works within existing DevOps workflows
- **Improved Collaboration**: Keeps developers and QA teams informed

Developed by Quantana, an AI-powered automation and software development company.
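Before the comment is posted, the structured AI report is typically flattened into a single Markdown body. A hedged Code-node sketch of that formatting step is shown below; the `riskLevel`, `issues`, `recommendations`, and `testCases` field names are assumed names for the structured AI output, not fields guaranteed by this template.

```javascript
// Turn the structured AI analysis into a Markdown comment body for the GitLab MR.
const report = $input.first().json;

const list = (items = []) => items.map(i => `- ${i}`).join('\n') || '- None';

const body = [
  '## AI Code Review',
  `**Risk level:** ${report.riskLevel ?? 'Unknown'}`,
  '### Issues',
  list(report.issues),
  '### Recommendations',
  list(report.recommendations),
  '### Suggested test cases',
  list(report.testCases),
].join('\n\n');

// The GitLab node (or an HTTP Request node) can post `body` as the MR note.
return [{ json: { body } }];
```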
by Hichul
This workflow automatically drafts replies to your emails using an OpenAI Assistant, streamlining your inbox management. It's designed for support teams, sales professionals, or anyone looking to accelerate their email response process by leveraging AI to create context-aware draft replies in Gmail.

How it works

- The workflow runs on a schedule (every minute) to check for emails with a specific label in your Gmail account.
- It takes the content of the newest email in a thread and sends it to your designated OpenAI Assistant for processing.
- A draft reply is generated by the AI assistant.
- This AI-generated reply is then added as a draft to the original email thread in Gmail.
- Finally, the initial trigger label is removed from the email thread to prevent it from being processed again.

Set up steps

- Connect your accounts: You'll need to connect your Gmail and OpenAI accounts in the respective nodes.
- Configure the trigger: In the "Get threads with specific labels" Gmail node, specify the label you want to use to trigger the workflow (e.g., generate-reply). Any email you apply this label to will be processed.
- Select your OpenAI Assistant: In the "Ask OpenAI Assistant" node, choose the pre-configured Assistant you want to use for generating replies.
- Configure label removal: In the "Remove AI label from email" Gmail node, ensure the same trigger label is selected to be removed after the draft has been successfully created.
- Activate the workflow: Save and activate the workflow to begin automating your email replies.
by vinci-king-01
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works

This workflow automatically monitors competitor prices, analyzes market demand, and optimizes product pricing in real-time for maximum profitability using advanced AI algorithms.

Key Steps

- Hourly Trigger - Runs automatically every hour for real-time price optimization and competitive response.
- Multi-Platform Competitor Monitoring - Uses AI-powered scrapers to track prices from Amazon, Best Buy, Walmart, and Target.
- Market Demand Analysis - Analyzes Google Trends data to understand search volume trends and seasonal patterns.
- Customer Sentiment Analysis - Reviews customer feedback to assess price sensitivity and value perception.
- AI Pricing Optimization - Calculates optimal prices using weighted factors including competitor positioning, demand indicators, and inventory levels (see the sketch at the end of this description).
- Automated Price Updates - Directly updates e-commerce platform prices when significant opportunities are identified.
- Comprehensive Analytics - Logs all pricing decisions and revenue projections to Google Sheets for performance tracking.

Set up steps

Setup time: 15-20 minutes

- Configure ScrapeGraphAI credentials - Add your ScrapeGraphAI API key for AI-powered competitor and market analysis.
- Set up e-commerce API connection - Connect your e-commerce platform API for automated price updates.
- Configure Google Sheets - Set up Google Sheets connections for pricing history and revenue analytics logging.
- Set up Slack notifications - Connect your Slack workspace for real-time pricing alerts and team updates.
- Customize product catalog - Modify the product configuration with your actual products, costs, and pricing constraints.
- Adjust monitoring frequency - Change the trigger timing based on your business needs (hourly, daily, etc.).
- Configure competitor platforms - Update competitor URLs and selectors for your target market.

What you get

- **Real-time price optimization** with 15-25% potential revenue increase through intelligent pricing
- **Competitive intelligence** with automated monitoring of major e-commerce platforms
- **Market demand insights** with seasonal and trend-based pricing adjustments
- **Customer sentiment analysis** to understand price sensitivity and value perception
- **Automated price updates** when significant opportunities are identified (>2% change, >70% confidence)
- **Comprehensive analytics** with pricing history, revenue projections, and performance tracking
- **Team notifications** with detailed market analysis and pricing recommendations
- **Margin protection** with intelligent constraints to maintain profitability
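To make the pricing logic concrete, here is a minimal sketch of a weighted price calculation combined with the update thresholds mentioned above (>2% change, >70% confidence). All weights, field names, and bounds are illustrative assumptions, not the template's exact algorithm.

```javascript
// Illustrative price optimization for one product (not the template's exact logic).
const p = $input.first().json; // assumed fields: currentPrice, cost, competitorAvg, demandScore, confidence

// Weighted target: anchor on the competitor average, nudge by demand (0..1 scale).
let target = 0.7 * p.competitorAvg + 0.3 * p.currentPrice;
target *= 1 + 0.05 * ((p.demandScore ?? 0.5) - 0.5);

// Margin protection: never price below cost plus a 10% floor.
target = Math.max(target, p.cost * 1.1);

const changePct = Math.abs(target - p.currentPrice) / p.currentPrice;
const shouldUpdate = changePct > 0.02 && (p.confidence ?? 0) > 0.7;

return [{ json: { ...p, proposedPrice: Number(target.toFixed(2)), changePct, shouldUpdate } }];
```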