by tanaypant
This workflow automatically queries a Postgres database to find outlier readings for which SMS notifications have not been sent. This is Workflow 2 in the blog tutorial Database activity monitoring and alerting.

Prerequisites
- A Postgres database set up and credentials
- A Twilio account and credentials

Nodes
- Cron node triggers the workflow every minute, so the database is queried at regular intervals.
- Postgres nodes extract values from, and update values in, the database.
- Twilio node sends an alert SMS about the outlier reading to a specified phone number.
- Set node sets the notification value to true.
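A minimal sketch of the kind of query the first Postgres node might run. The table and column names (`sensor_data`, `is_outlier`, `notification_sent`) are assumptions for illustration, not taken from the original workflow; in n8n the query would normally be configured directly on the Postgres node.

```typescript
// Illustrative only: adjust table/column names to your schema.
import { Client } from "pg";

async function fetchUnnotifiedOutliers(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // Select outlier readings for which no SMS has been sent yet.
  const { rows } = await client.query(
    `SELECT id, sensor_id, value, recorded_at
       FROM sensor_data
      WHERE is_outlier = true
        AND notification_sent = false
      ORDER BY recorded_at`
  );
  console.log(`Found ${rows.length} outlier readings awaiting notification`);

  // After the Twilio node sends the SMS, a second Postgres node would flip the flag:
  // UPDATE sensor_data SET notification_sent = true WHERE id = ANY($1);

  await client.end();
}

fetchUnnotifiedOutliers().catch(console.error);
```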
by Jenny
Vector Database as a Big Data Analysis Tool for AI Agents

Workflows from the webinar "Build production-ready AI Agents with Qdrant and n8n". This series of workflows shows how to build big data analysis tools for production-ready AI agents with the help of vector databases. These pipelines are adaptable to any dataset of images, and hence to many production use cases:
- Uploading (image) datasets to Qdrant
- Setting up meta-variables for anomaly detection in Qdrant
- Anomaly detection tool
- KNN classifier tool

For anomaly detection: the first pipeline uploads an image dataset to Qdrant; the second sets up the cluster (class) centres and cluster (class) threshold scores needed for anomaly detection; the third is the anomaly detection tool, which takes any image as input and uses all the preparatory work done with Qdrant to detect whether it is an anomaly relative to the uploaded dataset.

For KNN (k nearest neighbours) classification: the first pipeline uploads an image dataset to Qdrant; this pipeline is the KNN classifier tool, which takes any image as input and classifies it against the dataset uploaded to Qdrant.

To recreate both, you'll have to upload the crops and lands datasets from Kaggle to your own Google Storage bucket, and re-create the APIs/connections to Qdrant Cloud (you can use a Free Tier cluster), the Voyage AI API, and Google Cloud Storage.

[This workflow] KNN classification tool

This tool takes any image URL and returns the class of the object in the image, based on the dataset uploaded to Qdrant (lands). An image URL is received via the Execute Workflow Trigger and sent to the Voyage AI Multimodal Embeddings API to fetch its embedding. The embedding vector is then used to query Qdrant, returning a set of X similar images with pre-labeled classes. Majority voting is done over the classes of the neighbouring images; a loop resolves ties in the majority vote by increasing the number of neighbours to retrieve. When the loop resolves, the identified class is returned to the calling workflow.
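A rough sketch of the majority-voting step with the tie-breaking loop described above. The Qdrant query is abstracted behind a search function you would supply; the function names and the starting neighbour count are assumptions, not values taken from the workflow.

```typescript
interface Neighbour {
  label: string;  // pre-labeled class stored in the Qdrant payload
  score: number;  // similarity score returned by the search
}

// The caller provides the actual Qdrant search call.
type SearchFn = (embedding: number[], limit: number) => Promise<Neighbour[]>;

async function classifyByMajorityVote(
  embedding: number[],
  searchSimilar: SearchFn
): Promise<string> {
  let k = 5; // assumed starting number of neighbours

  while (true) {
    const neighbours = await searchSimilar(embedding, k);

    // Count votes per class.
    const votes = new Map<string, number>();
    for (const n of neighbours) {
      votes.set(n.label, (votes.get(n.label) ?? 0) + 1);
    }

    // Check whether the top vote count is shared by more than one class.
    const sorted = [...votes.entries()].sort((a, b) => b[1] - a[1]);
    const isTie = sorted.length > 1 && sorted[0][1] === sorted[1][1];

    if (!isTie) {
      return sorted[0][0]; // unambiguous winner: return its class
    }
    k += 2; // tie: widen the neighbourhood and vote again
  }
}
```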
by Md. Nazmul Islam
AI-Powered MCQ Quiz Generator from YouTube Videos

Transform any YouTube video into an interactive MCQ quiz automatically! This workflow uses Google Gemini AI to analyze video content and generate comprehensive multiple-choice questions with automatic grading, perfect for educators, trainers, and content creators.

Who Is This For
This workflow is perfect for:
- **Educators** creating quizzes from educational YouTube content
- **Corporate Trainers** developing assessments from training videos
- **Content Creators** engaging their audience with interactive quizzes
- **Students** testing their knowledge on video lectures
- **Online Course Creators** building assessments from video content

Features
- **AI Video Analysis**: Google Gemini 2.5 Flash analyzes entire YouTube videos (up to 50 minutes)
- **Dynamic Question Generation**: Creates up to 90 MCQ questions with 3 options each
- **Automatic Form Creation**: Generates Google Forms with quiz functionality
- **Smart Grading**: Built-in correct answer identification and scoring
- **Error Handling**: Robust error management with user feedback

How It Works
1. User input via the n8n web form: form name (quiz title), email address, YouTube video URL, and number of questions (1-90).
2. AI processing pipeline: Google Gemini analyzes the YouTube video content, extracts key concepts, and generates relevant questions; a structured output parser formats the questions into JSON.
3. Google Forms integration: automatically creates a new Google Form, adds all generated questions with multiple-choice options, and configures quiz settings with correct answers and scoring.
4. Completion and access: the user receives a direct link to the generated quiz, ready for immediate use or sharing.

Video Demo: see this YouTube video to explore how it works.

Set Up Steps
1. Import the workflow: create a new workflow in n8n and import the JSON file by clicking the three dots (upper right corner) > "Import from file...".
2. Configure the Google Gemini API: get your API key from Google AI Studio. In the "HTTP Request to Gemini" node, replace "API_KEY" in the URL with your API key. Create a "Google Gemini (PaLM) API" credential in n8n, add your API key to it, and connect the credential to the "Google Gemini Chat Model" node.
3. Set up the Google Forms integration: enable the Google Forms API in the Google Cloud Console, create a "Google OAuth2 API" credential in n8n, authorize it with Forms permissions, and connect it to both HTTP Request nodes (the "Create a Google Form" node and the "Create MCQ Quizzes" node).
4. Configure the form trigger: the workflow includes a built-in form trigger, so no additional setup is needed; the form URL is generated automatically. Customize the form fields if needed in the "Input YouTube URL" node.
5. Test the workflow: activate it, submit the form to generate a test quiz, and verify that the Google Form is created successfully.

Pre-requisites
- **Necessary Accounts**: Google account (for Forms API access), Google AI Studio account (for Gemini API access), n8n instance (cloud or self-hosted)
- **API Access**: Google Forms API enabled, Google Drive API enabled, Google Generative AI API access, valid API keys and OAuth credentials
- **n8n Requirements**: n8n version 1.95.2 or higher, LangChain nodes package installed, internet access for API calls

Customization Guidance
- Question generation prompts: modify the prompt in the "Set Prompt and model" node for different question styles, adjust difficulty levels or focus areas, or change the question format (True/False, fill-in-the-blanks, etc.).
- Form customization: update the form title and description templates, add additional input fields (difficulty level, subject area), and customize success/error messages.
- Advanced features you can add: email notifications (send quiz links via email), analytics integration (track quiz performance and completion rates), multi-language support (generate quizzes in different languages), question bank storage (save generated questions to a database), and batch processing (generate multiple quizzes from a YouTube playlist).
- Error handling enhancements: add retry logic for API failures, implement fallback question generation, and create detailed error logging.

Technical Specifications
- **Video Length**: up to 50 minutes supported
- **Question Limit**: 1-90 questions per quiz
- **Processing Time**: 2-10 minutes depending on video length
- **Supported Formats**: YouTube videos (public and unlisted)
- **Output Format**: Google Forms with automatic grading

Limitations & Considerations
- The YouTube video must be publicly accessible or unlisted
- Processing time increases with video length and question count
- API rate limits may apply for high-volume usage
- Some complex visual content may not be fully analyzed

Ready to Transform Videos into Quizzes?
This workflow streamlines the entire process from video analysis to quiz deployment. Perfect for educators and trainers looking to create engaging assessments from video content quickly and efficiently.
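The structured output parser step above implies a fixed JSON shape for the generated questions. A plausible model of that shape, matching the described 3-option format with a marked correct answer; the field names are assumptions and should be adjusted to the parser configuration in your copy of the workflow.

```typescript
// Assumed shape of the parsed Gemini output (not copied from the workflow).
interface McqQuestion {
  question: string;
  options: [string, string, string]; // exactly three choices, as described
  correctOptionIndex: 0 | 1 | 2;     // used to configure grading in Google Forms
  points?: number;                    // optional per-question score
}

interface QuizPayload {
  title: string;
  questions: McqQuestion[]; // up to 90 entries per the workflow's limit
}

// Example of a single parsed question:
const example: QuizPayload = {
  title: "Intro to Photosynthesis",
  questions: [
    {
      question: "Which pigment absorbs light for photosynthesis?",
      options: ["Chlorophyll", "Keratin", "Hemoglobin"],
      correctOptionIndex: 0,
      points: 1,
    },
  ],
};

console.log(JSON.stringify(example, null, 2));
```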
by Jenny
Vector Database as a Big Data Analysis Tool for AI Agents

Workflows from the webinar "Build production-ready AI Agents with Qdrant and n8n". This series of workflows shows how to build big data analysis tools for production-ready AI agents with the help of vector databases. These pipelines are adaptable to any dataset of images, and hence to many production use cases:
- Uploading (image) datasets to Qdrant
- Setting up meta-variables for anomaly detection in Qdrant
- Anomaly detection tool
- KNN classifier tool

For anomaly detection: the first pipeline uploads an image dataset to Qdrant; the second sets up the cluster (class) centres and cluster (class) threshold scores needed for anomaly detection; the third (this one) is the anomaly detection tool, which takes any image as input and uses all the preparatory work done with Qdrant to detect whether it is an anomaly relative to the uploaded dataset.

For KNN (k nearest neighbours) classification: the first pipeline uploads an image dataset to Qdrant; the second is the KNN classifier tool, which takes any image as input and classifies it against the dataset uploaded to Qdrant.

To recreate both, you'll have to upload the crops and lands datasets from Kaggle to your own Google Storage bucket, and re-create the APIs/connections to Qdrant Cloud (you can use a Free Tier cluster), the Voyage AI API, and Google Cloud Storage.

[This workflow] Anomaly Detection Tool

This is the tool that can be used directly to detect anomalous images (crops). It takes any image URL as input and returns a text message saying whether what the image depicts is anomalous relative to the crop dataset stored in Qdrant. An image URL is received via the Execute Workflow Trigger and used to generate an embedding vector with the Voyage AI Embeddings API. The returned vector is used to query the Qdrant collection and determine whether the given crop is known, by comparing it to the threshold score of each image class (crop type). If the image scores lower than all thresholds, it is considered an anomaly for the dataset.
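A simplified sketch of the final decision step: compare the query image's best similarity score per class against the precomputed class thresholds from the setup pipeline. The helper names and the score convention (higher means more similar) are assumptions for illustration.

```typescript
interface ClassThreshold {
  label: string;      // crop class name
  threshold: number;  // minimum similarity score for membership, set by the setup pipeline
}

// bestScores: highest similarity the query image achieved against each class centre.
function isAnomaly(
  bestScores: Map<string, number>,
  thresholds: ClassThreshold[]
): boolean {
  // The image is considered "known" if it clears the threshold of at least one class.
  for (const { label, threshold } of thresholds) {
    const score = bestScores.get(label) ?? -Infinity;
    if (score >= threshold) {
      return false; // matches a known crop class closely enough
    }
  }
  return true; // scored below every class threshold: flag as anomalous
}

// Example: two classes, and the query image falls short of both thresholds.
const result = isAnomaly(
  new Map([["wheat", 0.62], ["maize", 0.55]]),
  [{ label: "wheat", threshold: 0.78 }, { label: "maize", threshold: 0.74 }]
);
console.log(result ? "Anomaly for this dataset" : "Known crop type");
```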
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically monitors local event platforms (Eventbrite, Meetup, Facebook Events) and aggregates upcoming events that match your criteria. Never miss a networking or sponsorship opportunity again.

Overview
A scheduled trigger scrapes multiple event sites via Bright Data, filtering by location, date range, and keywords. OpenAI classifies each event (conference, meetup, workshop) and extracts key details such as venue, organizers, and ticket price. Updates are posted to Slack and archived in Airtable for quick lookup.

Tools Used
- **n8n** – Core automation engine
- **Bright Data** – Reliable multi-site scraping
- **OpenAI** – NLP-based event categorization
- **Slack** – Delivers daily event digests
- **Airtable** – Stores enriched event records

How to Install
1. Import the Workflow: add the .json file to n8n.
2. Configure Bright Data: provide your account credentials.
3. Set Up OpenAI: insert your API key.
4. Connect Slack & Airtable: authorize both services.
5. Customize Filters: edit the initial Set node to adjust city, radius, and keywords.

Use Cases
- **Community Managers**: Curate a calendar of relevant events.
- **Sales Teams**: Identify trade shows and meetups for prospecting.
- **Event Planners**: Track competing events when choosing dates.
- **Marketers**: Spot speaking or sponsorship opportunities.

Connect with Me
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #eventmonitoring #brightdata #openscraping #openai #slackalerts #n8nworkflow #nocode #meetup #eventbrite
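Referring back to the classification step in the Overview above: the OpenAI node implies a structured record per event. A hedged sketch of what that extraction target could look like; the field names are illustrative, not taken from the workflow's actual prompt or Airtable base.

```typescript
// Illustrative extraction target for the OpenAI classification step.
type EventCategory = "conference" | "meetup" | "workshop" | "other";

interface EnrichedEvent {
  title: string;
  category: EventCategory;
  venue?: string;
  organizers?: string[];
  ticketPrice?: string;   // kept as text, e.g. "Free" or "$25"
  startDate?: string;     // ISO 8601 date
  sourceUrl: string;      // link back to Eventbrite/Meetup/Facebook
}

// Example record as it might be posted to Slack and archived in Airtable.
const sample: EnrichedEvent = {
  title: "City DevOps Meetup",
  category: "meetup",
  venue: "Main Street Coworking",
  organizers: ["Local DevOps Group"],
  ticketPrice: "Free",
  startDate: "2024-06-12",
  sourceUrl: "https://www.meetup.com/example-event",
};
console.log(sample);
```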
by Léo Mathurin
✨ Try It Out!
Sync your Linear issues to Todoist automatically with this n8n workflow. When an issue is created, updated, or completed in Linear, a corresponding task is created, updated, or closed in Todoist.

⚙️ How It Works
- Triggered by issue changes via linearTrigger
- Routes based on action (create, update, remove)
- Checks if a matching Todoist task already exists (via the issue ID); see the sketch after this list
- If the issue has a due date and is assigned to you (youremail@example.com), it creates or updates the task accordingly
- If the issue is marked Done, the Todoist task is closed
- If it's deleted in Linear, the Todoist task is also removed
- Sub-issues are enriched with their parent title for clarity

🛠️ Customization
- Replace youremail@example.com with your Linear email in the IF nodes
- Adjust which states are synced (e.g. “In Progress”, “Todo”...)
- Customize the Todoist project, labels, or title formatting

⚠️ Bi-directional Sync?
This workflow is one-way (Linear ➜ Todoist). Bi-directional syncing might be possible but isn't handled here; it would be a cool upgrade!

✅ Requirements
- n8n with OAuth2 access to Linear and Todoist
- Your Linear email set in the workflow for filtering
- A target Todoist project (default: Inbox)

💬 Need Help?
Ask in the n8n Forum or join the Discord. Happy Automating! 🚀
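One possible way the "does a matching Todoist task already exist?" check can work, assuming the Linear issue ID is embedded in the Todoist task description when the task is first created. The project ID and token handling are placeholders, and this is a sketch rather than the workflow's exact node configuration.

```typescript
// Sketch only: assumes each synced task stores the Linear issue ID in its description.
interface TodoistTask {
  id: string;
  content: string;
  description: string;
}

async function findTaskForIssue(
  issueId: string,
  projectId: string,
  token: string
): Promise<TodoistTask | undefined> {
  // List active tasks in the target project via the Todoist REST API.
  const res = await fetch(
    `https://api.todoist.com/rest/v2/tasks?project_id=${encodeURIComponent(projectId)}`,
    { headers: { Authorization: `Bearer ${token}` } }
  );
  if (!res.ok) throw new Error(`Todoist API error: ${res.status}`);

  const tasks = (await res.json()) as TodoistTask[];
  // Match on the issue ID stored in the task description.
  return tasks.find((t) => t.description.includes(issueId));
}
```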
by dirogar
Telegram Tasker Bot is an n8n scenario that receives voice messages in Telegram, automatically converts them to text, extracts the key task fields, and creates a card on the chosen Trello board. The user simply speaks the task; the bot formats it and sends back a link to the finished card.

To use it you will need a Telegram bot, which you can create via the BotFather bot. You will also need access to the ChatGPT API; it is used only to transcribe the audio to text, and you can substitute any other transcription service of your choice. Finally, you need a Trello account with API access.

Note: the Trello board ID can be taken from the board URL, and the list (column) ID can be obtained via the browser's developer tools (at least, that is how I retrieved these values); a sketch of fetching list IDs via the Trello API follows below.
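If you prefer not to dig the list (column) ID out of the developer tools, the Trello REST API can return all lists for a board. A small sketch; the board ID, API key, and token are placeholders you supply yourself.

```typescript
// Fetch the lists (columns) of a Trello board so you can copy the list ID
// into the workflow. Replace the placeholders with your own values.
async function printBoardLists(boardId: string, key: string, token: string): Promise<void> {
  const url = `https://api.trello.com/1/boards/${boardId}/lists?key=${key}&token=${token}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Trello API error: ${res.status}`);

  const lists = (await res.json()) as Array<{ id: string; name: string }>;
  for (const list of lists) {
    console.log(`${list.name}: ${list.id}`);
  }
}
```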
by Oneclick AI Squad
This comprehensive n8n workflow automates the entire travel business call management process, from initial customer inquiries to trip bookings and marketing outreach. The system handles incoming calls, validates trip details, processes bookings, captures leads, and manages outbound marketing campaigns to promote trip organizer services. It streamlines the complete sales cycle while maintaining organized data records for business intelligence.

Essential Information
- The system operates across four distinct workflows to handle different aspects of travel call management.
- All call data is automatically captured and stored in organized spreadsheets for analysis and follow-up.
- The workflow validates trip details before processing to ensure data accuracy and prevent booking errors.
- Outbound marketing campaigns are automatically triggered based on lead detection and formatting.

System Architecture
- **Call Handling Pipeline**: The Detect Incoming Call node captures all incoming customer calls, followed by the Validate Trip Details node, which verifies and processes trip information, and the Deliver Organizer Info node, which provides relevant trip organizer details to callers.
- **Booking Management Flow**: The Capture Voice Input node records customer booking requests, the Update Booking Record node processes and stores booking information, and the Send Booking Confirmation node delivers confirmation details to customers.
- **Lead Generation Process**: The Detect New Lead node identifies potential customers from call data, the Format Lead Information node structures the lead data for marketing use, and the Initiate Marketing Outreach node launches targeted marketing campaigns.
- **Data Management System**: The Receive Call Response node collects call interaction data, the Log User Input node records customer information in spreadsheets, and the Relay Response to System node ensures data synchronization across all components.

Implementation Guide
1. Import the workflow into n8n and configure phone system integration for call detection and voice capture.
2. Set up spreadsheet connections for booking records, lead management, and call logging.
3. Configure marketing automation tools for outbound campaign management.
4. Test each workflow section independently before enabling the complete system.
5. Monitor call handling accuracy and adjust validation rules as needed.

Technical Dependencies
- Phone system API or telephony service for call detection and voice processing
- Spreadsheet service (Google Sheets, Excel Online) for data storage and management
- Marketing automation platform for outbound campaign execution
- Voice recognition service for capturing and processing customer input
- CRM integration for lead management and customer tracking

Database & Sheet Structure
- **Call Tracking Sheet**: Columns should include Call_ID, Customer_Phone, Call_Time, Call_Duration, Call_Status, Trip_Interest, Organizer_Assigned
- **Booking Records Sheet**: Required columns are Booking_ID, Customer_Name, Customer_Phone, Destination, Travel_Dates, Group_Size, Booking_Status, Confirmation_Sent
- **Lead Management Sheet**: Essential columns include Lead_ID, Customer_Name, Phone_Number, Email, Trip_Preference, Lead_Source, Lead_Status, Marketing_Campaign_Sent
- **Trip Organizer Database**: Contains Organizer_ID, Organizer_Name, Specialization, Contact_Info, Availability_Status, Performance_Rating
- **Marketing Outreach Log**: Tracks Campaign_ID, Lead_ID, Campaign_Type, Send_Date, Response_Status, Follow_up_Required

Customization Possibilities
- Adjust the Validate Trip Details node to include specific travel validation rules or partner requirements.
- Modify the Format Lead Information node to match your CRM system's data structure and marketing campaign formats.
- Configure the Initiate Marketing Outreach node to integrate with your preferred marketing platforms and campaign templates.
- Customize the data logging structure in the Log User Input node to capture additional customer information or booking details.
- Add additional validation steps or approval workflows between booking capture and confirmation sending.
by Anir Agram
🛡️📥 Telegram Invoice Agent → 🔎 OCR → 🤖 AI Parsing → 📄 Google Sheets + 🗂️ Drive

What this workflow does
- 🤖 Captures invoices from Telegram and auto-downloads PDFs/images.
- 🔎 Runs OCR, then uses AI to structure clean invoice fields.
- 📄 Appends parsed data to a Google Sheets "Invoice Database."
- 🗂️ Uploads the original file to Google Drive with a neat name.
- 💬 Sends a friendly Telegram summary with totals, due date, notes, and link.

Why it's useful
- ⚡ Faster bookkeeping with zero manual copy-paste.
- 🧱 Consistent schema for reliable reporting and pivots.
- 👥 Team-friendly drop-and-log via Telegram.
- 🧩 Easy to extend with approvals, ERP/CRM sync, or vendor routing.

How it works
1. 📲 Telegram Trigger → file received.
2. 🌐 HTTP OCR (OCR.space) → text extracted.
3. 🤖 AI Agent → maps to a strict JSON schema.
4. 📄 Google Sheets → appends a structured row.
5. 🗂️ Google Drive → saves the original invoice.
6. 💬 Telegram → concise confirmation and links.

What you'll need
- 🤖 Telegram Bot token.
- 🔑 OCR API key (OCR.space: free tier; upgrade for volume/accuracy).
- 🔐 Google OAuth for Sheets + Drive.
- 🧠 LLM account (e.g., Gemini/OpenAI-compatible).

Setup steps
1. 🔗 Connect credentials: Telegram, Google, OCR, AI.
2. 📄 Prepare Sheet columns: Invoice Number, Date, Total Amount ($), Billing Address, Due Date, Notes.
3. 🧭 Update the sheet ID and Drive folder ID.
4. 🧪 Test: send a sample invoice and validate OCR, AI output, row append, and Drive link.

Customization ideas
- 🎯 Higher-accuracy OCR: swap to Google Vision.
- 📊 Line items: extract into a second tab for analytics.
- ✅ Approvals: add Telegram keyboard confirmation before write.
- 🧯 Robustness: IF/Retry on empty OCR; prompt the user to retake the photo.

Who it's for
- 🧑💻 Freelancers/agencies needing fast invoice intake via Telegram.
- 🧾 Small finance teams wanting a searchable ledger with links to originals.
- 🏗️ Builders extending to ERPs/CRMs and custom accounting flows.

Want help customizing? 📧 anirpoke@gmail.com 🔗 LinkedIn
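A sketch of the "strict JSON schema" the AI Agent step could target, mirroring the sheet columns listed in the setup steps above. The field names are assumptions based on that column list, not the workflow's actual schema.

```typescript
// Assumed parsing target for the AI Agent, mirroring the Google Sheet columns:
// Invoice Number, Date, Total Amount ($), Billing Address, Due Date, Notes.
interface ParsedInvoice {
  invoiceNumber: string;
  date: string;          // e.g. "2024-05-03"
  totalAmount: number;   // in dollars, per the "Total Amount ($)" column
  billingAddress: string;
  dueDate: string;
  notes: string;
}

// Example of a row as it might be appended to the "Invoice Database" sheet.
const parsed: ParsedInvoice = {
  invoiceNumber: "INV-0042",
  date: "2024-05-03",
  totalAmount: 1250.0,
  billingAddress: "12 Example Road, Springfield",
  dueDate: "2024-06-02",
  notes: "Net 30",
};
console.log(Object.values(parsed));
```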
by Brian Burnett
Basics
Provides a mechanism to save all your workflows into a GitHub repository and path (of your choosing). These can then be shared through your entire org and used to track changes (if you make any sad 'oopsies').

Flow
- Obtains and creates a listing of the currently configured workflows.
- Iterates through each workflow, looking at the GitHub source (if present) and the actual workflow code (from n8n).
- The workflow code is sorted and compared for any changes.
- If changed (or new), the workflows are saved / archived into GitHub.

Configuration
Most of the configuration is done in the Globals node, which houses the repo details for the GitHub nodes. The only other dependency is that it looks, by default, for a credential named GitHub; if you use something other than that precise wording, you will need to change the credential used on the respective nodes. We gave it 'Manage' rights, but that was only so it was able to override a requirement for checks to complete; most would probably only need 'Write' privileges.

Background
Well, we initially started using n8n just as a Kubernetes-based service with its DB running inside the pod. It worked great for getting to know n8n, and we just kept all our workflows and credentials listed in a readme. Fast forward about a year... We have migrated this into our 'production' toolsets and maintain a bunch of team workflows inside it (not company-wide, but LOTS of team fun). While trying to spin up a copy of our production RDS database, the actual production database was deleted, and in doing so AWS was nice enough to wipe our snapshots too!! Yay! Thankfully it only took us a few hours to get everything back up and running thanks to this workflow, so I'm sharing it for everyone to benefit. We have used it to restore old workflows and changes, and now to test our full DR procedures! (Ok, I might have taken that a bit far.)
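A simplified sketch of the "sort and compare" step described in the Flow above: normalise both the GitHub copy and the live n8n export into a stable JSON string, then check for differences. The helper names are illustrative and not the node names used in the workflow.

```typescript
// Recursively sort object keys so two exports of the same workflow
// always serialise to the same string, regardless of key order.
function stableStringify(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(stableStringify).join(",")}]`;
  }
  if (value !== null && typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .sort(([a], [b]) => a.localeCompare(b))
      .map(([k, v]) => `${JSON.stringify(k)}:${stableStringify(v)}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

// True when the workflow is new or its content differs from the GitHub copy.
function hasChanged(githubCopy: unknown | null, liveWorkflow: unknown): boolean {
  if (githubCopy === null) return true; // not yet archived
  return stableStringify(githubCopy) !== stableStringify(liveWorkflow);
}
```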
by Miquel Colomer
Do you want to discover company-related information to enrich a signup process? This workflow enriches any company by name using the uProc Get Company by Name tool. This tool combines Google Maps and email research on the internet to return results. You get no results if the company has no presence on Google Maps.

You need to add your credentials (email and API key, the real ones, found in the Integration section) to n8n. You can replace the "Create Company Item" node with any other supported service that returns company names and countries, like HubSpot, Google Sheets, MySQL, or Typeform.

You can set up the uProc node with several parameters:
- country: the country name you want to use.
- name: the name of the company you need to locate.

Every uProc node returns the following fields for each located company:
- name: the company's given name.
- email: the company's given email.
- cif: the company's CIF number.
- address: the company's formatted address.
- city: the city location of the company.
- state: the province location of the company.
- county: the state location of the company.
- country: the country location of the company.
- zipcode: the zip code of the company.
- phone: the phone number of the company.
- website: the website of the company.
- latitude: the latitude of the company.
- longitude: the longitude of the company.

Next, you can save the results to a CRM or Google Sheets, and use the returned email or phone number to launch an email or telemarketing campaign.
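An illustrative shape of the record the uProc node returns per located company, based on the field list above. The example values are placeholders, not real lookup results.

```typescript
// Field names follow the list above; the values below are purely illustrative.
interface UprocCompany {
  name: string;
  email: string;
  cif: string;
  address: string;
  city: string;
  state: string;
  county: string;
  country: string;
  zipcode: string;
  phone: string;
  website: string;
  latitude: number;
  longitude: number;
}

const example: UprocCompany = {
  name: "Example Consulting SL",
  email: "info@example-consulting.com",
  cif: "B00000000",
  address: "Carrer Exemple 1, Barcelona",
  city: "Barcelona",
  state: "Barcelona",
  county: "Catalonia",
  country: "Spain",
  zipcode: "08001",
  phone: "+34 900 000 000",
  website: "https://example-consulting.com",
  latitude: 41.3851,
  longitude: 2.1734,
};
console.log(example.name, example.city);
```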
by Oneclick AI Squad
This automated n8n workflow scrapes job listings from Upwork using Apify, processes and cleans the data, and generates daily email reports with job summaries. The system uses Google Sheets for data storage and keyword management, providing a comprehensive solution for tracking relevant job opportunities and market trends.

What is Apify?
Apify is a web scraping and automation platform that provides reliable APIs for extracting data from websites like Upwork. It handles the complexities of web scraping, including rate limiting, proxy management, and data extraction, while maintaining compliance with website terms of service.

Good to Know
- Apify API calls may incur costs based on usage; check Apify pricing for details
- Google Sheets access must be properly authorized to avoid data sync issues
- The workflow includes data cleaning and deduplication to ensure high-quality results
- Email reports provide structured summaries for easy review and decision-making
- Keyword management through Google Sheets allows for flexible job targeting

How It Works
The workflow is organized into three main phases.

Phase 1: Job Scraping & Initial Processing
This phase handles the core data collection and initial storage:
- Trigger Manual Run - Manually starts the workflow for on-demand job scraping
- Fetch Keywords from Google Sheet - Reads the list of job-related keywords from the All Keywords sheet
- Loop Through Keywords - Iterates over each keyword to trigger Apify scraping
- Trigger Apify Scraper - Sends an HTTP request to start the Apify actor for job scraping
- Wait for Apify Completion - Waits for the Apify actor to finish execution
- Delay Before Dataset Read - Waits a few seconds to ensure the dataset is ready for processing
- Fetch Scraped Job Dataset - Fetches the latest dataset from Apify
- Process Raw Job Data - Filters jobs posted in the last 24 hours and formats the data
- Save Jobs to Daily Sheet - Appends new job data to the daily Google Sheet
- Update Keyword Job Count - Updates the job count in the All Keywords summary sheet

Phase 2: Data Cleaning & Deduplication
This phase ensures data quality and removes duplicates:
- Load Today's Daily Jobs - Loads all jobs added in today's sheet for processing
- Remove Duplicates by Title/Desc - Removes duplicates based on title and description matching
- Save Clean Job Data - Saves the cleaned, unique entries back to the sheet
- Clear Old Daily Sheet Data - Deletes old or duplicate entries from the sheet
- Reload Clean Job Data - Loads the clean data again after deletion for final processing

Phase 3: Daily Summary & Email Report
This phase generates summaries and delivers the final report:
- Generate Keyword Summary Stats - Counts job totals per keyword for analysis
- Update Summary Sheet - Updates the summary sheet with keyword statistics
- Fetch Final Summary Data - Reads the summary sheet for reporting purposes
- Build Email Body - Formats the email with statistics and the sheet link
- Send Daily Report Email - Sends the structured daily summary email to recipients

Data Sources
The workflow utilizes Google Sheets for data management.

AI Keywords Sheet - Contains keyword management data with columns:
- Keyword (text) - Job search terms
- Job Count (number) - Number of jobs found for each keyword
- Status (text) - Active/Inactive status
- Last Updated (timestamp) - When the keyword was last processed

Daily Jobs Sheet - Contains scraped job data with columns:
- Job Title (text) - Title of the job posting
- Description (text) - Job description content
- Budget (text) - Job budget or hourly rate
- Client Rating (number) - Client's rating on Upwork
- Posted Date (timestamp) - When the job was posted
- Job URL (text) - Direct link to the job posting
- Keyword (text) - Which keyword found this job
- Scraped At (timestamp) - When the data was collected

Summary Sheet - Contains daily statistics with columns:
- Date (date) - Report date
- Total Jobs (number) - Total jobs found
- Keywords Processed (number) - Number of keywords searched
- Top Keyword (text) - Most productive keyword
- Average Budget (currency) - Average job budget
- Report Generated (timestamp) - When the summary was created

How to Use
1. Import the workflow into n8n
2. Configure Apify API credentials and Google Sheets API access
3. Set up email credentials for daily report delivery
4. Create three Google Sheets with the specified column structures
5. Add relevant job keywords to the AI Keywords sheet
6. Test with sample keywords and adjust as needed

Requirements
- Apify API credentials and actor access
- Google Sheets API access
- Email service credentials (Gmail, SMTP, etc.)
- Upwork job search keywords for targeting

Customizing This Workflow
Modify the Process Raw Job Data node to filter jobs by additional criteria like budget range, client rating, or job type. Adjust the email report format to include more detailed statistics or add visual aids, such as charts. Customize the data cleaning logic to better handle duplicate detection based on your specific requirements, or add additional data sources beyond Upwork for comprehensive job market analysis.
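Referring back to the Process Raw Job Data step: a minimal sketch of the "last 24 hours" filter written as a standalone function. The job field names are assumptions based on the Daily Jobs sheet columns, not the Apify dataset's exact schema.

```typescript
interface ScrapedJob {
  title: string;
  description: string;
  budget: string;
  postedDate: string; // ISO timestamp from the Apify dataset (assumed format)
  url: string;
}

// Keep only jobs posted within the last 24 hours and stamp when they were scraped.
function filterRecentJobs(jobs: ScrapedJob[], keyword: string) {
  const cutoff = Date.now() - 24 * 60 * 60 * 1000;

  return jobs
    .filter((job) => new Date(job.postedDate).getTime() >= cutoff)
    .map((job) => ({
      ...job,
      keyword,                             // which search term found this job
      scrapedAt: new Date().toISOString(), // matches the "Scraped At" column
    }));
}
```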