by DIGITAL BIZ TECH
# AI Product Catalog Chatbot with Google Drive Ingestion & Supabase RAG

## Overview
This workflow builds a dual system that connects automated document ingestion with a live product catalog chatbot powered by Mistral AI and Supabase. It includes:

- **Ingestion Pipeline:** Automatically fetches JSON files from Google Drive, processes their content, and stores vector embeddings in Supabase.
- **Chatbot:** An AI agent that queries the Supabase vector store (RAG) to answer user questions about the product catalog.

It uses Mistral AI for chat intelligence and embeddings, and Supabase for vector storage and semantic product search.

## Chatbot Flow
- **Trigger:** When chat message received or Webhook (from live website)
- **Model:** Mistral Cloud Chat Model (mistral-medium-latest)
- **Memory:** Simple Memory (Buffer Window) — keeps the last 15 messages for conversational context
- **Vector Search Tool:** Supabase Vector Store
- **Embeddings:** Mistral Cloud
- **Agent:** product catalog agent
  - Responds to user queries using the products table in Supabase.
  - Searches vectors for relevant items and returns structured product details (name, specs, images, and links).
  - Maintains chat session history for natural follow-up questions.

## Document → Knowledge Base Pipeline
Triggered manually (Execute workflow) to populate or refresh the Supabase vector store.

### Steps
1. **Google Drive (List Files)** → Fetch all files from the configured Google Drive folder.
2. **Loop Over Items** → For each file:
3. **Google Drive (Get File)** → Download the JSON document.
4. **Extract from File** → Parse and read raw JSON content.
5. **Map Data into Fields (Set node)** → Clean and normalize JSON keys (e.g., page_title, comprehensive_summary, key_topics).
6. **Convert Data into Chunks (Code node)** → Merge text fields like summary and markdown, split the content into overlapping 2,000-character chunks, and add metadata such as title, URL, and chunk index (see the sketch after the sample JSON below).
7. **Embeddings (Mistral Cloud)** → Generate vector embeddings for each text chunk.
8. **Insert into Supabase Vectorstore** → Save chunks + embeddings into the website_mark table.
9. **Wait** → Pause for 30 seconds before the next file to respect rate limits.

## Integrations Used
| Service | Purpose | Credential |
|----------|----------|------------|
| Google Drive | File source for catalog JSON documents | Google Drive account dbt |
| Mistral AI | Chat model & embeddings | Mistral Cloud account dbt |
| Supabase | Vector storage & RAG search | Supabase DB account dbt |
| Webhook / Chat | User-facing interface for chatbot | Website or Webhook |

## Sample JSON Data Format (for Ingestion)
The ingestion pipeline expects structured JSON product files, which can include different categories such as Apparel or Tools.

### Apparel Example (T-Shirts)
```json
[
  {
    "Name": "Classic Crewneck T-Shirt",
    "Item Number": "A-TSH-NVY-M",
    "Image URL": "https://www.example.com/images/tshirt-navy.jpg",
    "Image Markdown": "",
    "Size Chart URL": "https://www.example.com/charts/tshirt-sizing",
    "Materials": "100% Pima Cotton",
    "Color": "Navy Blue",
    "Size": "M",
    "Fit": "Regular Fit",
    "Collection": "Core Essentials"
  }
]
```

### Tools Example (Drill Bits)
```json
[
  {
    "Name": "Titanium Drill Bit, 1/4\"",
    "Item Number": "T-DB-TIN-250",
    "Image URL": "https://www.example.com/images/drill-bit-1-4.jpg",
    "Image Markdown": "",
    "Spec Sheet URL": "https://www.example.com/specs/T-DB-TIN-250",
    "Materials": "HSS with Titanium Coating",
    "Type": "Twist Drill Bit",
    "Size (in)": "1/4",
    "Shank Type": "Hex",
    "Application": "Metal, Wood, Plastic"
  }
]
```
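For reference, here is a minimal sketch of what the Convert Data into Chunks Code node could look like. The page_title and comprehensive_summary fields follow the mapping step above; page_url and the 200-character overlap are assumed values for illustration.

```js
// n8n Code node ("Run Once for All Items"): a minimal sketch of the chunking step.
// page_title / comprehensive_summary follow the Set node above; page_url and the
// 200-character overlap are assumptions.
const CHUNK_SIZE = 2000;
const OVERLAP = 200; // assumed overlap between consecutive chunks

const results = [];
for (const item of $input.all()) {
  const { page_title, page_url, comprehensive_summary, markdown } = item.json;
  // Merge the text fields into one document string
  const text = [comprehensive_summary, markdown].filter(Boolean).join('\n\n');

  let index = 0;
  for (let start = 0; start < text.length; start += CHUNK_SIZE - OVERLAP) {
    results.push({
      json: {
        content: text.slice(start, start + CHUNK_SIZE),
        metadata: { title: page_title, url: page_url, chunk_index: index++ },
      },
    });
  }
}
return results;
```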
## Agent System Prompt Summary
> "You are an AI product catalog assistant. Use only the Supabase vector database as your knowledge base. Provide accurate, structured responses with clear formatting — including product names, attributes, and URLs. If data is unavailable, reply politely: 'I couldn't find that product in the catalog.'"

## Key Features
- Automated JSON ingestion from Google Drive → Supabase
- Intelligent text chunking and metadata mapping
- Dual-workflow architecture (Ingestion + Chatbot)
- Live conversational product search via RAG
- Supports both embedded chat and webhook channels

## Summary
> A powerful end-to-end workflow that transforms your product data into a searchable, AI-ready knowledge base, enabling real-time product Q&A through a Mistral-powered chatbot. Perfect for eCommerce teams, distributors, or B2B companies managing large product catalogs.

## Need Help or More Workflows?
Want to customize this workflow for your business or integrate it with your tools? Our team at Digital Biz Tech can tailor it precisely to your use case — from automation pipelines to AI-powered product discovery.

💡 We can help you set it up for free — from connecting credentials to deploying it live.

- Contact: rajeet.nair@digitalbiz.tech
- Website: https://www.digitalbiz.tech
- LinkedIn: https://www.linkedin.com/company/digital-biz-tech/

You can also DM us on LinkedIn for any help.
by Oneclick AI Squad
This n8n workflow helps users easily discover nearby residential construction projects by automatically scraping and analyzing property listings from 99acres and other real estate platforms. Users can send an email with their location preferences and receive a curated list of available properties with detailed information, including pricing, area, possession dates, and construction status.

## Good to know
- The workflow focuses specifically on residential construction projects and active developments
- Property data is scraped in real time to ensure the most current information
- Results are automatically formatted and structured for easy reading
- The system handles multiple property formats and data variations from different sources
- Fallback mechanisms ensure reliable data extraction even when website structures change

## How it works
1. **Trigger: New Email** - Detects incoming emails with property search requests and extracts location preferences from the email content
2. **Extract Area & City** - Parses the email body to identify target areas (e.g., Gota, Ahmedabad) and falls back to a city-level search if no specific area is mentioned (see the sketch after the email examples below)
3. **Scrape Construction Projects** - Performs web scraping on 99acres and other property websites based on the extracted area and city information
4. **Parse Project Listings** - Cleans and formats the scraped HTML data into structured project entries with standardized fields
5. **Format Project Details** - Transforms all parsed projects into a consistent email-ready list format with bullet points and organized information
6. **Send Results to User** - Delivers a professionally formatted email with the complete list of matching construction projects to the original requester

## Email Format Examples

### Input Email Format
```
To: properties@yourcompany.com
Subject: Property Search Request

Hi, I am interested in buying a flat. Can you please send me the list of available properties in Gota, Ahmedabad?
```

### Output Email Example
```
Subject: 🏘️ Property Search Results: 4 Projects Found in Gota, Ahmedabad

🏘️ Available Construction Projects in Gota, Ahmedabad
Search Area: Gota, Ahmedabad
Total Projects: 4
Search Date: August 4, 2025

📋 PROJECT LISTINGS:

🔷 Project 1
🏠 Name: Vivaan Oliver offers
🏢 BHK: 3 BHK
💰 Price: N/A
📐 Area: 851.0 Sq.Ft
🗓️ Possession: August 2025
📊 Status: under construction
📍 Location: Thaltej, Ahmedabad West
🕒 Scraped Date: 2025-08-04

🔷 Project 2
🏠 Name: Vivaan Oliver offers
🏢 BHK: 3 BHK
💰 Price: Price on Request
📐 Area: 891 Sq Ft
🗓️ Possession: N/A
📊 Status: Under Construction
📍 Location: Thaltej, Ahmedabad West
🕒 Scraped Date: 2025-08-04

🔷 Project 3
🏠 Name: It offers an exclusive range of
🏢 BHK: 3 BHK
💰 Price: N/A
📐 Area: 250 Sq.Ft
🗓️ Possession: 0 2250
📊 Status: Under Construction
📍 Location: Thaltej, Ahmedabad West
🕒 Scraped Date: 2025-08-04

🔷 Project 4
🏠 Name: N/A
🏢 BHK: 2 BHK
💰 Price: N/A
📐 Area: N/A
🗓️ Possession: N/A
📊 Status: N/A
📍 Location: Thaltej, Ahmedabad West

💡 Next Steps:
• Contact builders directly for detailed pricing and floor plans
• Schedule site visits to shortlisted properties
• Verify possession timelines and construction progress
• Compare amenities and location advantages

📞 For more information or specific requirements, reply to this email.
```
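A minimal sketch of the Extract Area & City step, assuming the request text arrives in a `text` or `body` field; real-world emails will likely need more patterns than this.

```js
// n8n Code node ("Run Once for Each Item"): a minimal sketch of area/city parsing.
// The "in <Area>, <City>" pattern and field names are assumptions.
const body = $json.text || $json.body || '';

// Look for an "<Area>, <City>" pair, e.g. "properties in Gota, Ahmedabad"
const match = body.match(/(?:in|at|near)\s+([A-Za-z ]+?),\s*([A-Za-z ]+)/i);

let area = null;
let city = null;
if (match) {
  area = match[1].trim();
  city = match[2].trim();
} else {
  // Fallback: city-level search when no specific area is mentioned
  const cityMatch = body.match(/(?:in|at)\s+([A-Za-z ]+)/i);
  city = cityMatch ? cityMatch[1].trim() : null;
}

return { json: { area, city, searchScope: area ? 'area' : 'city' } };
```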
## How to use

### Setup Instructions
1. Import the workflow into your n8n instance
2. Configure Email Credentials:
   - Set up the email trigger for incoming property requests
   - Set up SMTP credentials for sending property listings
3. Configure Web Scraping:
   - Ensure proper headers and user agents for 99acres access
   - Set up fallback mechanisms for different property websites
4. Test the workflow with sample property search emails

### Sending Property Search Requests
1. Send an email to your configured property search address
2. Include location details in natural language (e.g., "Gota, Ahmedabad")
3. Optionally specify preferences like BHK, budget, or amenities
4. Receive detailed property listings within minutes

## Requirements
- **n8n instance** (cloud or self-hosted) with web scraping capabilities
- **Email account** with IMAP/SMTP access for automated communication
- **Reliable internet connection** for real-time property data scraping
- **Access to target websites** (99acres, MagicBricks, etc.)

## Troubleshooting
- **No properties found**: Verify the area spelling and check whether the location has active listings
- **Scraping errors**: Update user agents and headers if websites block requests
- **Duplicate results**: Implement better deduplication logic based on property names and locations (see the sketch below)
- **Email parsing issues**: Test with various email formats and improve the regex patterns
- **Website structure changes**: Implement fallback parsers and regularly monitor scraping success rates
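For the deduplication fix above, a minimal sketch, assuming each parsed listing exposes `name` and `location` fields:

```js
// n8n Code node ("Run Once for All Items"): deduplicate by name + location.
// Field names are assumptions based on the parsed listing structure above.
const seen = new Set();
const unique = [];

for (const item of $input.all()) {
  const { name = '', location = '' } = item.json;
  // Normalize so "Vivaan Oliver" and "vivaan  oliver" collapse to one key
  const key = `${name}|${location}`.toLowerCase().replace(/\s+/g, ' ').trim();
  if (!seen.has(key)) {
    seen.add(key);
    unique.push(item);
  }
}

return unique;
```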
by Rakin Jakaria
## Who this is for
This workflow is for freelancers, job seekers, or service providers who want to automatically apply to businesses by scraping their website information, extracting contact details, and sending personalized job application emails with AI-powered content — all from one form submission.

## What this workflow does
This workflow starts every time someone submits the Job Applier Form. It then:

1. **Scrapes the target business website** to gather company information and contact details.
2. **Converts HTML content** to readable markdown format for better AI processing.
3. **Extracts email addresses** and creates a company summary using **GPT-5 AI**.
4. **Validates email addresses** to ensure they contain proper formatting (@ symbol check; see the sketch after the customization notes below).
5. **Accesses your experience data** from a connected **Google Sheet** with your skills and portfolio.
6. **Generates personalized application emails** (subject + body) using **GPT-5** based on the job position and company info.
7. **Sends the application email** automatically via **Gmail** with your name as sender.
8. **Provides confirmation** through a completion form showing the AI's response.

## Setup
To set this workflow up:

1. **Form Trigger** – Customize the job application form fields (Target Business Website, Applying As dropdown with positions like Video Editor, SEO Expert, etc.).
2. **OpenAI GPT-5** – Add your OpenAI API credentials for both AI models used in the workflow.
3. **Google Sheets** – Connect your sheet containing your work experience, skills, and portfolio information.
4. **Gmail Account** – Link your Gmail account for sending application emails automatically.
5. **Experience Data** – Update the Google Sheet with your relevant skills, experience, and achievements for each job type.
6. **Sender Name** – Modify the sender name in the Gmail settings (currently set to "Jamal Mia").

## How to customize this workflow to your needs
- Add more job positions to the dropdown menu (currently includes Video Editor, SEO Expert, Full-Stack Developer, Social Media Manager).
- Modify the AI prompt to reflect your unique value proposition and application style.
- Enhance email validation with additional checks like domain verification or stricter email format patterns.
- Add follow-up scheduling to automatically send reminder emails after a certain period.
- Include attachment functionality to automatically attach your resume or portfolio to applications.
- Switch to different email providers or add multiple sender accounts for variety.
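A minimal sketch of what a stricter validation step could look like; the regex is an illustrative upgrade over a bare @ check, not a full RFC 5322 parser.

```js
// n8n Code node ("Run Once for Each Item"): stricter email format validation.
// The `email` field name is an assumption about the extraction step's output.
const email = ($json.email || '').trim();

// Require a local part, "@", a domain, and a TLD of at least two letters
const looksValid = /^[^\s@]+@[^\s@]+\.[A-Za-z]{2,}$/.test(email);

return { json: { email, looksValid } };
```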
by Wolf Bishop
A reliable, no-frills web scraper that extracts content directly from websites using their sitemaps. Perfect for content audits, migrations, and research when you need straightforward HTML extraction without external dependencies.

## How It Works
This streamlined workflow takes a practical approach to web scraping by leveraging XML sitemaps and direct HTTP requests. Here's how it delivers consistent results:

- **Direct Sitemap Processing:** The workflow starts by fetching your target website's XML sitemap and parsing it to extract all available page URLs. This eliminates guesswork and ensures comprehensive coverage of the site's content structure.
- **Robust HTTP Scraping:** Each page is scraped using direct HTTP requests with realistic browser headers that mimic legitimate web traffic. The scraper includes comprehensive error handling and timeout protection to handle various website configurations gracefully.
- **Intelligent Content Extraction:** The workflow uses JavaScript parsing to extract meaningful content from raw HTML. It automatically identifies page titles through multiple methods (title tags, Open Graph metadata, H1 headers) and converts HTML structure into readable text (see the sketch at the end of this section).
- **Framework Detection:** Built-in detection identifies whether sites use WordPress, Divi themes, or heavy JavaScript frameworks. This helps explain content extraction quality and provides valuable insights about the site's technical architecture.
- **Rich Metadata Collection:** Each scraped page includes detailed metadata like word count, HTML size, response codes, and technical indicators. This data is formatted into comprehensive markdown files with YAML frontmatter for easy analysis and organization.
- **Respectful Rate Limiting:** The workflow includes a 3-second delay between page requests to respect server resources and avoid overwhelming target websites. Processing is sequential and controlled to maintain ethical scraping practices.
- **Detailed Success Reporting:** Every scraped page generates a report showing extraction success, potential issues (like JavaScript dependencies), and technical details about the site's structure and framework.

## Setup Steps

1. **Configure Google Drive Integration**
   - Connect your Google Drive account in the "Save to Google Drive" node
   - Replace YOUR_GOOGLE_DRIVE_CREDENTIAL_ID with your actual Google Drive credential ID
   - Create a dedicated folder for your scraped content in Google Drive
   - Copy the folder ID from the Google Drive URL (the long string after /folders/)
   - Replace YOUR_GOOGLE_DRIVE_FOLDER_ID_HERE with your actual folder ID in both the folderId field and cachedResultUrl
   - Update YOUR_FOLDER_NAME_HERE with your folder's actual name
2. **Set Your Target Website**
   - In the "Set Sitemap URL" node, replace https://yourwebsitehere.com/page-sitemap.xml with your target website's sitemap URL
   - Common sitemap locations include /sitemap.xml, /page-sitemap.xml, or /sitemap_index.xml
   - Tip: Not sure where your sitemap is? Use a free online tool like https://seomator.com/sitemap-finder
   - Verify the sitemap URL loads correctly in your browser before running the workflow
3. **Update Workflow IDs (Automatic)**
   - When you import this workflow, n8n will automatically generate new IDs for YOUR_WORKFLOW_ID_HERE, YOUR_VERSION_ID_HERE, YOUR_INSTANCE_ID_HERE, and YOUR_WEBHOOK_ID_HERE
   - No manual changes are needed for these placeholders
4. **Adjust Processing Limits (Optional)**
   - The "Limit URLs (Optional)" node is disabled by default for full-site scraping
   - Enable this node and set a smaller number (like 5-10) for initial testing
   - For large websites, consider running in batches to manage processing time and storage
5. **Customize Rate Limiting (Optional)**
   - The "Wait Between Pages" node is set to 3 seconds by default
   - Increase the delay for more respectful scraping of busy sites
   - Decrease it only if you have permission and the target site can handle faster requests
6. **Test Your Configuration**
   - Enable the "Limit URLs (Optional)" node and set it to 3-5 pages for testing
   - Click "Test workflow" to verify the setup works correctly
   - Check your Google Drive folder to confirm files are being created with proper content
   - Review the generated markdown files to assess content extraction quality
7. **Run Full Extraction**
   - Disable the "Limit URLs (Optional)" node for complete site scraping
   - Execute the workflow and monitor the execution log for any errors
   - Large websites may take considerable time to process completely (plan for several hours for sites with hundreds of pages)
8. **Review Results**
   - Each generated file includes technical metadata to help you assess extraction quality
   - Look for indicators like "Limited Content" warnings for JavaScript-heavy pages
   - Files include word counts and framework detection to help you understand the site's structure

**Framework Compatibility:** This scraper is specifically designed to work well with WordPress sites, Divi themes, and many JavaScript-heavy frameworks. The intelligent content extraction handles dynamic content effectively and provides detailed feedback about framework detection. While some single-page applications (SPAs) that render entirely through JavaScript may have limited content extraction, most modern websites, including those built with popular CMS platforms, will work well with this scraper.

**Important Notes:** Always ensure you have permission to scrape your target website and respect its robots.txt guidelines. The workflow includes respectful delays and error handling, but monitor your usage to maintain ethical scraping practices.
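As an illustration of the multi-method title extraction described in How It Works, here is a minimal sketch; the `data` field name depends on the HTTP Request node's output settings, and the og:title regex assumes the common property-before-content attribute order.

```js
// n8n Code node ("Run Once for Each Item"): title extraction with fallbacks
// (title tag, then Open Graph metadata, then the first H1).
const html = $json.data || ''; // field name depends on the HTTP node's settings

function firstMatch(regex) {
  const m = html.match(regex);
  return m ? m[1].replace(/\s+/g, ' ').trim() : null;
}

const title =
  firstMatch(/<title[^>]*>([\s\S]*?)<\/title>/i) ||
  firstMatch(/<meta[^>]+property=["']og:title["'][^>]+content=["']([^"']+)["']/i) ||
  firstMatch(/<h1[^>]*>([\s\S]*?)<\/h1>/i) ||
  'Untitled Page';

// Strip any inline tags left inside an H1 match
return { json: { title: title.replace(/<[^>]+>/g, '').trim() } };
```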
by WeblineIndia
# 📊 Generate Weekly Energy Consumption Reports with API, Email and Google Drive

This workflow automates the process of retrieving energy consumption data, formatting it into a CSV report, and distributing it every week via email and Google Drive.

## ⚡ Quick Implementation Steps
1. Import the workflow into your n8n instance.
2. Configure your API, email details and Google Drive folder.
3. (Optional) Adjust the CRON schedule if you need a different time or frequency.
4. Activate the workflow—automated weekly reports begin immediately.

## 🎯 Who's It For
Energy providers, sustainability departments, facility managers, renewable energy operators.

## 🛠 Requirements
- n8n instance
- Energy Consumption API access
- Google Drive account
- Email SMTP access

## ⚙️ How It Works
The workflow triggers every Monday at 8 AM, fetches consumption data, emails the CSV report, and saves a copy to Google Drive.

## 🔄 Workflow Steps

### 1. Schedule Weekly (Mon 8:00 AM)
- Type: Cron Node
- Runs every Monday at 8:00 AM.
- Triggers the workflow execution automatically.

### 2. Fetch Energy Data
- Type: HTTP Request Node
- Makes a GET request to: https://api.energidataservice.dk/dataset/ConsumptionDE35Hour (sample API)
- The API returns JSON data with hourly electricity consumption in Denmark.

Sample response structure:

```json
{
  "records": [
    {
      "HourDK": "2025-08-25T01:00:00",
      "MunicipalityNo": _,
      "MunicipalityName": "Copenhagen",
      "ConsumptionkWh": 12345.67
    }
  ]
}
```

### 3. Normalize Records
- Type: Code Node
- Extracts the records array from the API response and maps each entry into separate JSON items for easier handling downstream.

Code used:

```js
const itemlist = $input.first().json.records;
return itemlist.map(r => ({ json: r }));
```

### 4. Convert to File
- Type: Convert to File Node
- Converts the array of JSON records into a CSV file.
- The CSV is stored in a binary field called data.

### 5. Send Email Weekly Report
- Type: Email Send Node
- Sends the generated CSV file as an attachment.
- Parameters:
  - fromEmail: Sender email address (configure in node).
  - toEmail: Recipient email address.
  - subject: "Weekly Energy Consumption Report".
  - attachments: =data (binary data from the previous node).

### 6. Report File Upload to Google Drive
- Type: Google Drive Node
- Uploads the CSV file to your Google Drive root folder.
- Filename pattern: energy_report_{{ $now.format('yyyy_MM_dd_HH_ii_ss') }}
- Requires valid Google Drive OAuth2 credentials.

## ✨ How To Customize
Change the report frequency, email template, or data format (CSV/Excel), or add the add-ons below (see the aggregation sketch at the end of this template for one example).

## ➕ Add-ons
- Integration with analytics tools (Power BI, Tableau)
- Additional reporting formats (Excel, PDF)
- Slack notifications

## 🚦 Use Case Examples
- Automated weekly/monthly reporting for compliance
- Historical consumption tracking
- Operational analytics and forecasting

## 🔍 Troubleshooting Guide
| Issue | Cause | Solution |
|-------|-------|----------|
| Data not fetched | API endpoint incorrect | Verify the URL |
| Email delivery issues | SMTP configuration incorrect | Verify SMTP settings |
| Drive save fails | Permissions/Drive ID incorrect | Check Drive permissions |

## 📞 Need Assistance?
Contact WeblineIndia for additional customization and support; we're happy to help.
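As one example of customizing the report, here is a hypothetical Code node that aggregates the hourly records into daily totals per municipality before the CSV conversion; field names follow the sample API response above.

```js
// n8n Code node ("Run Once for All Items"): a hypothetical customization that
// rolls hourly consumption up into daily totals per municipality.
const records = $input.first().json.records;

const totals = {};
for (const r of records) {
  const day = r.HourDK.slice(0, 10); // "2025-08-25T01:00:00" -> "2025-08-25"
  const key = `${day}|${r.MunicipalityName}`;
  totals[key] = (totals[key] || 0) + r.ConsumptionkWh;
}

return Object.entries(totals).map(([key, kWh]) => {
  const [date, municipality] = key.split('|');
  return { json: { date, municipality, totalConsumptionkWh: Number(kWh.toFixed(2)) } };
});
```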
by JESUS PACAHUALA ARROYO
This workflow automates the process of identifying local businesses with a weak digital presence so you can offer them specialized marketing services. By combining real-time data from Google Maps with the analytical power of Gemini AI, it transforms raw search results into a structured sales pipeline.

## How it works
1. **Data Extraction:** The process starts with a form where you enter search keywords (e.g., "restaurants in Lima"). The workflow then queries SerpApi to fetch the top local results from Google Maps.
2. **Filtering & Prioritization:** It filters results by region and sorts them by rating, specifically targeting the top 5 businesses with the lowest ratings or missing information, since these represent the highest conversion opportunities (see the sketch below).
3. **AI Analysis:** The Gemini AI agent acts as a senior consultant. It analyzes each lead's weaknesses, assigns a priority score, and generates a personalized sales pitch and email copy.
4. **Record Keeping:** Finally, all enriched data, including the AI-generated strategy, is formatted and saved into a Google Sheet for immediate sales action.

## Setup steps
- **SerpApi:** Register at serpapi.com to get your API key and add it to the HTTP Request node credentials.
- **Google Gemini:** Set up your Google AI Studio credentials for the AI Agent node.
- **Google Sheets:** Create a spreadsheet with columns for Company Name, Rating, Address, AI Score, and Sales Strategy. Link it in the final node.
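A minimal sketch of the filtering and prioritization step; the field names (rating, address, website) mirror SerpApi's Google Maps local results, and the region value is an illustrative assumption.

```js
// n8n Code node ("Run Once for All Items"): filter by region, then prioritize
// the weakest listings. TARGET_REGION is a hypothetical example value.
const TARGET_REGION = 'Lima';

const leads = $input.all()
  .map(item => item.json)
  // Keep only results in the target region
  .filter(r => (r.address || '').includes(TARGET_REGION))
  // Lowest-rated first; on ties, listings missing a website come first
  .sort((a, b) => (a.rating ?? 0) - (b.rating ?? 0) || (a.website ? 1 : -1))
  .slice(0, 5); // top 5 conversion opportunities

return leads.map(lead => ({ json: lead }));
```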
by DataForSEO
Once a week, this workflow automatically scans Google for newly ranked keywords for your domains using the DataForSEO API. It pulls the latest data for every target you track, stores a fresh snapshot in Google Sheets, and compares it to the previous run. Any newly ranked keywords are automatically added to a dedicated Google Sheet, creating an easy-to-review log. Lastly, the workflow sends a short summary to Slack, so your team can quickly see what's changed without manual checks.

## Who's it for
SEO specialists and marketers who want to automatically track newly ranked keywords for their target domains and get quick weekly updates without doing manual Google checks.

## What it does
This workflow automatically fetches new keywords your domains started ranking for on Google using the DataForSEO Labs API, saves them into Google Sheets, and sends you a Slack summary so you can quickly see what's changed.

## How it works
1. Triggers on your chosen schedule (default: once a week).
2. Reads your keywords and target domains from Google Sheets.
3. Extracts fresh ranking data from Google via the DataForSEO API.
4. Compares the results with the previous run (see the sketch below).
5. Adds newly ranked keywords into a dedicated Google Sheet.
6. Sends a weekly summary message to Slack.

## Requirements
- DataForSEO account
- A spreadsheet in Google Sheets with your keywords that matches the required column structure (as in the example).
- A spreadsheet in Google Sheets with your target domains that matches the required column structure (as in the example).
- Slack account

## Customization
You can easily customize the workflow by changing the schedule, exporting results to dashboards and other tools (such as Looker Studio and BigQuery) instead of Google Sheets, and modifying the Slack message text.
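A minimal sketch of the comparison step; the "Previous Snapshot" node name and the `keyword` column are illustrative assumptions, not the template's actual node names.

```js
// n8n Code node ("Run Once for All Items"): keep only keywords absent from
// last week's snapshot. "Previous Snapshot" is a hypothetical node name.
const previous = new Set(
  $('Previous Snapshot').all().map(item =>
    String(item.json.keyword || '').toLowerCase()
  )
);

// Items flowing into this node are this week's ranking rows
const newlyRanked = $input.all().filter(
  item => !previous.has(String(item.json.keyword || '').toLowerCase())
);

return newlyRanked;
```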
by Felix
## How It Works
This workflow automates multi-currency expense tracking via Telegram. Send a receipt photo to your bot, and it automatically extracts the invoice details, converts the amount to EUR using a live exchange rate, and logs everything straight into Google Sheets.

Flow overview:
1. User sends a receipt photo via Telegram
2. easybits Extractor reads the document and returns structured data
3. The data is normalised and cleaned
4. The exchange rate is fetched (with a fallback if needed)
5. The amount is converted to EUR (see the sketch after the setup guide)
6. The result is appended to Google Sheets

## Step-by-Step Setup Guide

### 1. Set Up Your easybits Extractor Pipeline
Before connecting this workflow, you need a configured extraction pipeline on easybits.

1. Go to extractor.easybits.tech and click "Create a Pipeline".
2. Fill in the Pipeline Name and Description – describe the type of document you're processing (e.g. "Invoice / Receipt").
3. Upload a sample receipt or invoice as your reference document.
4. Click "Map Fields" and define the following fields to extract:
   - invoice_number (String) – The unique identifier of the invoice, e.g. INV-20240301
   - currency (String) – The currency code found on the invoice, e.g. USD
   - amount (Number) – The total amount due on the invoice, e.g. 149.99
5. Click "Save & Test Pipeline" in the Test tab to verify the extraction works correctly.

### 2. Connect the easybits Node in n8n
1. Once you have finalized your pipeline, go back to your dashboard and click Pipelines in the left sidebar.
2. Click "View Pipeline" on the pipeline you want to connect.
3. On the Pipeline Details page, you will find:
   - API URL: https://extractor.easybits.tech/api/pipelines/[YOUR_PIPELINE_ID]
   - API Key: Your unique authentication token
4. Copy both values and integrate them into the "easybits Extractor" HTTP Request node in the workflow.

> To keep in mind: Each pipeline has its own API Key and Pipeline ID. If you have multiple pipelines (for example, one for receipts and one for invoices), you will need separate credentials for each.

> Important: When adding your API Key, set the Credential Type to Bearer Auth and paste your API Key as the Bearer Token value.

### 3. Connect Your Telegram Bot
1. Open the Telegram: Receipt Photo node.
2. Connect your Telegram Bot credentials (Bot Token from @BotFather).
3. Make sure "Download" is enabled under Additional Fields so the image binary is forwarded correctly.

### 4. Connect Google Sheets
1. Open the Append row in sheet node.
2. Connect your Google Sheets account via OAuth2.
3. Select your target spreadsheet and sheet.
4. Make sure your sheet has at least these two columns: Vendor Name and Overall Due.

### 5. Activate the Workflow
1. Click the "Active" toggle in the top-right corner of n8n to enable the workflow.
2. Send a receipt photo to your Telegram bot to test it end to end.
3. Check your Google Sheet – a new row with the invoice reference and EUR amount should appear.
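A minimal sketch of the EUR conversion with a fallback (steps 4-5 of the flow overview); the fallback table and the `rate` field name are illustrative assumptions, not values from the workflow.

```js
// n8n Code node ("Run Once for Each Item"): convert the extracted amount to
// EUR, falling back to static rates if the live lookup produced nothing.
const { amount, currency } = $json;

// Hypothetical fallback rates used when the live lookup failed
const FALLBACK_RATES_TO_EUR = { USD: 0.92, GBP: 1.17, CHF: 1.05, EUR: 1 };

const liveRate = $json.rate; // assumed to be set by the exchange-rate request
const rate = liveRate ?? FALLBACK_RATES_TO_EUR[currency];

if (!rate) {
  throw new Error(`No exchange rate available for ${currency}`);
}

return { json: { amount, currency, amountEur: Number((amount * rate).toFixed(2)) } };
```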
by JJ Tham
# Automate Google Ads Search Term Analysis and Send Insights to Slack

Stop manually digging through endless Google Ads search term reports! 📊 This workflow puts your brand campaign analysis on autopilot, acting as an AI-powered performance marketer that works for you 24/7.

This template fetches your recent search term data, uses AI to identify wasted ad spend and new keyword opportunities, and delivers a concise, actionable report directly to your Slack channel—complete with buttons to approve the changes.

## ⚙️ How it works
This workflow connects to your Google Ads account to pull search term data from your brand campaigns. It then feeds this data to Google Gemini with a specific prompt to:

1. **Identify Non-Brand Keywords:** Isolate all search terms that are not related to your brand.
2. **Calculate Wasted Spend:** Find terms with zero conversions and sum up the total cost (see the sketch below).
3. **Flag Opportunities:** Highlight non-brand terms that are converting, for manual review.
4. **Send to Slack:** Format the findings into a clear, easy-to-read Slack message with interactive buttons to approve adding the wasteful terms as negative keywords.

## 👥 Who's it for?
- **PPC & SEM Managers:** Save hours each week by automating the search query mining process.
- **Performance Marketers:** Instantly spot and plug budget leaks in your brand campaigns.
- **Digital Marketing Agencies:** Provide proactive, data-driven insights to clients with zero manual effort.

## 🛠️ How to set up
This is an advanced workflow with several connection points. Setup involves connecting your Google Ads account, providing your Manager and Client IDs, specifying which campaign and brand terms to analyze, configuring the direct API call with your developer token, and finally connecting your Slack workspace.

👉 For a detailed, step-by-step guide, please refer to the yellow sticky note inside the workflow.
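A minimal sketch of the wasted-spend calculation; the searchTerm, cost, and conversions field names are illustrative, not the Google Ads API's exact report fields.

```js
// n8n Code node ("Run Once for All Items"): sum the cost of search terms
// that spent money but never converted. Field names are assumptions.
const terms = $input.all().map(item => item.json);

const wasteful = terms.filter(t => t.conversions === 0 && t.cost > 0);
const wastedSpend = wasteful.reduce((sum, t) => sum + t.cost, 0);

return [{
  json: {
    wastedSpend: Number(wastedSpend.toFixed(2)),
    wastefulTerms: wasteful.map(t => t.searchTerm),
  },
}];
```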
by Abelion Lavv
## What this workflow does
Transform YouTube videos into structured Active Learning study sheets using AI. This workflow extracts video metadata, transcribes audio with Google Gemini, applies the ICAP Framework (Interactive, Constructive, Active, Passive) for deep learning, and generates micro-goals, Cornell notes, Feynman explanations, and practice tasks—all automatically formatted in Notion. Perfect for students, educators, and lifelong learners who want to maximize retention from video content.

## How it works
1. **Submit YouTube URL** - User enters the video URL via a form (see the URL-parsing sketch below)
2. **Extract metadata** - Retrieves title, channel, publish date, and thumbnail
3. **Download & transcribe** - Gets the audio via RapidAPI and transcribes it with Google Gemini
4. **AI analysis** - Applies the Active Learning framework to generate structured study content
5. **Create Notion page** - Builds a formatted page with all learning materials

## Setup requirements
Credentials needed:
- YouTube Data API (OAuth2)
- Google Gemini API key
- RapidAPI key (YouTube Downloader API)
- Notion API integration

Before running:
1. Duplicate the Notion database template (link in the workflow sticky note)
2. Share the database with your Notion integration
3. Configure credentials in n8n
4. Add your RapidAPI key in the HTTP node
5. Add your Notion Database ID in the Build Page Structure node

All configuration points are clearly marked with sticky notes in the workflow.
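A minimal sketch of pulling the video ID out of the submitted URL before the metadata and download calls; the `videoUrl` form field name is an assumption.

```js
// n8n Code node ("Run Once for Each Item"): extract the 11-character video ID
// from common YouTube URL shapes. The form field name is an assumption.
const url = $json.videoUrl || '';

// Handles watch?v=, youtu.be/, shorts/, and embed/ URLs
const match = url.match(
  /(?:youtube\.com\/(?:watch\?v=|shorts\/|embed\/)|youtu\.be\/)([A-Za-z0-9_-]{11})/
);

if (!match) {
  throw new Error(`Could not extract a video ID from: ${url}`);
}

return { json: { videoId: match[1], videoUrl: url } };
```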
by Felix
## How it works
I wanted to avoid the end-of-month rush to log expenses. I tried existing expense apps but found them either too expensive for what they offer, or frustrating with inconsistent extraction results. That is why I built my own Telegram expense bot that:

- Lets users send receipt photos or PDFs via Telegram
- Automatically extracts vendor, amount, date, and category using AI
- Applies expense rules like partial reimbursement rates (for example, 80% for phone bills; see the sketch at the end of this guide)
- Organizes expenses into monthly Google Sheets tabs
- Asks for clarification when the category is unclear
- Supports flexible descriptions via Telegram caption
- Sends a confirmation message with expense details

The whole extraction process takes about 10 seconds and is fully GDPR compliant. No coding. No manual typing. Just snap and send.

## Step-by-step guide

### Initial Setup
1. Import the JSON workflow
2. Sign up and log in to easybits at https://extractor.easybits.tech
3. Create a pipeline by uploading an example receipt and mapping the fields you want to extract:
   - vendor_name
   - total_amount
   - currency
   - transaction_date
   - category
   - extraction_confidence

For more details, visit our Quick Start Guide.

### Get Your easybits Credentials
1. Once you have finalized your pipeline, go back to your dashboard and click Pipelines in the left sidebar
2. Click View Pipeline on the pipeline you want to connect
3. On the Pipeline Details page, you will find:
   - **API URL:** https://extractor.easybits.tech/api/pipelines/[YOUR_PIPELINE_ID]
   - **API Key:** Your unique authentication token
4. Copy both values and integrate them into the "Extract with easybits" HTTP Request node

To keep in mind: Each pipeline has its own API Key and Pipeline ID. If you have multiple pipelines (for example, one for receipts and one for invoices), you will need separate credentials for each.

Important: To integrate your API Key, make sure to set it up in the following format:
> Bearer [YOUR_API_KEY]

### Set Up Telegram Bot
1. Open Telegram and search for @BotFather
2. Send /newbot and follow the prompts
3. Copy your Bot Token and add it to the Telegram credentials in n8n

### Connect Google Sheets
1. Create a new spreadsheet for expenses
2. Copy the Spreadsheet ID from the URL
3. Update the Google Sheets nodes with your Spreadsheet ID

### Go Live
Activate the workflow and send your first receipt photo to your Telegram bot.
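A minimal sketch of the expense-rule step; only the 80% phone-bill rate comes from the description above, and the other entries are hypothetical. Field names match the easybits pipeline mapping from the Initial Setup.

```js
// n8n Code node ("Run Once for Each Item"): apply partial reimbursement rules.
// Only the 80% phone rate is from the description; the rest is hypothetical.
const RULES = {
  phone: 0.8,    // 80% of phone bills is reimbursable
  internet: 0.5, // hypothetical example rule
};

const { vendor_name, total_amount, category } = $json;
const rate = RULES[(category || '').toLowerCase()] ?? 1; // default: fully reimbursable

return {
  json: {
    vendor_name,
    category,
    total_amount,
    reimbursable_amount: Number((total_amount * rate).toFixed(2)),
    applied_rate: rate,
  },
};
```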
by Nitesh
## 🚀 How the System Works
This automation operates in three distinct phases: Ingestion, Storage, and Generation.

| Phase | Component | What Happens |
| --- | --- | --- |
| 1. The Trigger | Google Drive | Every time you update your rag_posts.csv in your Drive folder, the system wakes up. |
| 2. The Brain | Gemini Embeddings | It turns your text into "vectors" (numbers) so the AI understands the meaning of your writing style, not just the words. |
| 3. The Vault | MongoDB Atlas | Your posts are stored in a vector database, acting as a "Style Library" the AI can browse instantly. |
| 4. The Writer | AI Agents | When you ask for a post, the AI searches your vault, finds the best matches, and mimics the formatting exactly. |

## 🛠️ Step-by-Step Setup Guide

### 1. Prepare Your Data Source
- Create a Google Drive folder and note its ID (the long string of characters in the URL).
- Create a CSV file named rag_posts.csv.
- **Columns needed:** Post Text, Hook Type, Engagement, Category.
- Upload it to that folder.

### 2. Configure MongoDB Atlas (The Vector Store)
- Sign up for a free MongoDB Atlas account.
- Create a cluster and a database named n8n_rag_data.
- **Crucial step:** Create an **Atlas Vector Search Index** on your collection (see the sketch below). Name the index data_index.

### 3. Google Gemini API
- Go to Google AI Studio.
- Generate an API key. This will power both the "Embeddings" (understanding the text) and the "Chat" (writing the post).

### 4. Connect the n8n Nodes
- **Google Drive Trigger:** Paste your Folder ID and select fileUpdated.
- **MongoDB Nodes:** Enter your Connection String (SRV) and credentials.
- **Gemini Nodes:** Paste your API key into the Credentials section.
- **Google Sheets Tool:** Link your specific spreadsheet ID so the "Knowledge Base Agent 1" can read specific rows.
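A minimal sketch of what the data_index definition could look like in the Atlas JSON editor; the `embedding` path and the 768 dimensions are assumptions that depend on how your n8n MongoDB vector store node writes documents and which Gemini embedding model you use.

```json
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 768,
      "similarity": "cosine"
    }
  ]
}
```

The `path` must match the document field where the n8n node stores the embedding vector, and `numDimensions` must match your embedding model's output size, so verify both against a sample document in your collection before saving the index.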