by Oneclick AI Squad
This automated n8n workflow enables the creation and management of AWS RDS databases through email interactions. Users can send emails with commands such as "Create RDS" or "Delete RDS", including details like database engine, instance class, and credentials. The workflow parses the email, uses Terraform to execute the requested action on AWS RDS, updates a Google Sheet with the status, and sends a confirmation email.

**Fundamental Aspects**
- **Gmail Trigger**: Initiates the workflow upon receiving a new email in Gmail.
- **Parse Email Content**: Analyzes the email body to extract the command (create or delete) and database details such as region, identifier, engine, and credentials (see the sketch at the end of this listing).
- **Manage RDS Instance**: Executes Terraform commands to create or delete the AWS RDS database instance based on the parsed details.
- **Wait For Data**: Pauses the workflow to allow time for the RDS operation to complete and data to become available.
- **Update Google Sheet**: Appends or updates the Google Sheet with the database instance details, status, and any relevant IDs.
- **Send Confirmation Email**: Formats and sends a response email confirming the action taken, including success/failure details.

**Setup Instructions**
- **Import the Workflow into n8n**: Download the workflow JSON and import it via the n8n interface.
- **Configure API Credentials**: Set up Gmail API credentials for email triggering and sending. Configure AWS credentials with RDS management permissions. Set up Google Sheets API credentials with read/write access. Ensure Terraform is integrated or nodes are configured for Terraform execution.
- **Prepare Google Sheet**: Create a sheet with columns for database identifier, engine, instance class, status, and other relevant fields.
- **Run the Workflow**: Activate the Gmail trigger and test by sending an email with a create or delete command.
- **Verify Responses**: Check the Google Sheet for updates and your email for confirmation messages.
- **Adjust Parameters**: Fine-tune Terraform variables, email parsing logic, or wait times as needed.

**Columns for the Google Sheet**
- **Database Identifier**: Unique identifier for the RDS instance (var.db_identifier).
- **Engine**: Database engine type (var.db_engine, e.g., MySQL, PostgreSQL).
- **Instance Class**: RDS instance class (var.instance_class, e.g., db.t3.micro).
- **Allocated Storage**: Storage size in GB (var.allocated_storage, e.g., 20).
- **Region**: AWS region for the instance (var.aws_region, e.g., us-east-1).
- **Username**: Database admin username (var.db_username, e.g., admin).
- **Password**: Database admin password (var.db_password, e.g., SecurePassword123).
- **Status**: Current status of the RDS instance (e.g., creating, deleted).
- **Database Name**: Name or tag for the database (var.db_name, e.g., MyRDSDatabase).

**Technical Dependencies**
- **Gmail API**: For receiving trigger emails and sending confirmations.
- **AWS RDS API**: For database management (via Terraform).
- **Google Sheets API**: For logging and updating database status.
- **Terraform**: For infrastructure-as-code management of RDS instances.
- **n8n**: For workflow automation and node integrations.

**Customization Possibilities**
- **Support Additional Commands**: Extend to include update or snapshot operations for RDS instances.
- **Enhance Parsing**: Improve email content analysis with AI for better intent detection.
- **Add Database Engines**: Include support for more RDS engines like Oracle or SQL Server.
- **Integrate Monitoring**: Add nodes to monitor RDS performance and alert via email.
- **Customize Sheets**: Modify sheet columns or add visualizations for database metrics.
- **Security Enhancements**: Incorporate additional validation for sensitive credentials in emails.

Want a tailored workflow for your business? Our experts can craft it quickly. Contact our team.
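As a reference for the Parse Email Content step, here is a minimal Code-node sketch. It assumes the email body contains simple `Key: value` lines (e.g., `Engine: mysql`); the field names and defaults are illustrative and not part of the shipped workflow.

```javascript
// Hypothetical sketch of the "Parse Email Content" Code node.
// Assumes the email body uses "Key: value" lines; adjust the labels,
// defaults, and regexes to match the emails you actually send.
const body = $json.text || $json.body || '';

// Decide between create and delete from the subject/body wording.
const command = /delete\s+rds/i.test(body) ? 'delete' : 'create';

// Pull a "Key: value" pair out of the body, e.g. "Engine: postgres".
const pick = (label, fallback) => {
  const match = body.match(new RegExp(`${label}\\s*:\\s*(.+)`, 'i'));
  return match ? match[1].trim() : fallback;
};

return [{
  json: {
    command,
    db_identifier: pick('Identifier', 'my-rds-instance'),
    db_engine: pick('Engine', 'mysql'),
    instance_class: pick('Instance Class', 'db.t3.micro'),
    allocated_storage: Number(pick('Storage', '20')),
    aws_region: pick('Region', 'us-east-1'),
    db_username: pick('Username', 'admin'),
    db_password: pick('Password', ''),
  },
}];
```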
by Avkash Kakdiya
**How it works**
This workflow consolidates data from five different systems (Google Sheets, PostgreSQL, MongoDB, Microsoft SQL Server, and Google Analytics) into a single master Google Sheet. It runs on a scheduled trigger three times a week. Each dataset is tagged with a unique source identifier before merging, ensuring data traceability. Finally, the merged dataset is cleaned, standardized, and written into the output Google Sheet for reporting and analysis.

**Step-by-step**

1. Trigger the workflow
- **Schedule Trigger** – Runs the workflow at set weekly intervals.

2. Collect data from sources
- **Google Sheets Source** – Retrieves records from a specific sheet.
- **PostgreSQL Source** – Extracts customer data from the database.
- **MongoDB Source** – Pulls documents from the defined collection.
- **Microsoft SQL Server** – Executes a SQL query and returns results.
- **Google Analytics** – Captures user activity and engagement metrics.

3. Tag each dataset
- **Add Sheets Source ID** – Marks data from Google Sheets.
- **Add PostgreSQL Source ID** – Marks data from PostgreSQL.
- **Add MongoDB Source ID** – Marks data from MongoDB.
- **Add SQL Server Source ID** – Marks data from SQL Server.
- **Add Analytics Source ID** – Marks data from Google Analytics.

4. Merge and process
- **Merge** – Combines all tagged datasets into a single structure.
- **Process Merged Data** – Cleans, aligns schemas, and standardizes key fields (a combined tagging/normalization sketch follows this listing).

5. Store consolidated output
- **Final Google Sheet** – Appends or updates the master sheet with the processed data.

**Why use this?**
- Centralizes multiple data sources into a single, consistent dataset.
- Ensures data traceability by tagging each source.
- Reduces manual effort in data cleaning and consolidation.
- Provides a reliable reporting hub for business analysis.
- Enables scheduled, automated updates for up-to-date visibility.
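A minimal Code-node sketch of the tagging and normalization steps, assuming each source branch runs its own copy with a different `SOURCE_ID`. The field names (email, name, created_at) are examples; map them to whatever your sources actually return.

```javascript
// Hypothetical sketch of an "Add ... Source ID" / "Process Merged Data" step.
// Set SOURCE_ID per branch: 'sheets', 'postgres', 'mongodb', 'mssql', 'analytics'.
const SOURCE_ID = 'postgres';

return items.map((item) => {
  const row = item.json;
  return {
    json: {
      source_id: SOURCE_ID,
      // Normalize a few key fields so all sources share one schema.
      email: String(row.email || row.Email || '').toLowerCase().trim(),
      name: row.name || row.full_name || row.Name || '',
      created_at: row.created_at ? new Date(row.created_at).toISOString() : null,
    },
  };
});
```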
by AureusR
**Synchronize Excel or Google Sheets with Postgres (bi-directional)**

**Who's it for**
This workflow is perfect for companies that have always managed their operations in Excel or Google Sheets and want to gradually transition to using a database or custom software. It ensures business continuity while modernizing data management.

**How it works / What it does**
- **Trigger options** → Run the sync manually, on a schedule, or as part of another workflow.
- **Get data from Excel** → Reads rows from an Excel or Google Sheets table.
- **Sanitize data** → Cleans up formats (e.g., converting Excel serial dates into proper date strings; see the sketch at the end of this listing).
- **Upsert into Postgres** → Inserts or updates rows in the database, ensuring no duplicates. For auto-mapping to work, the column names in Excel/Sheets and the DB must match exactly. If you want different names, you can manually map columns in the Postgres node.
- **(Optional)** → Can be extended to push DB updates back to Excel, creating a true two-way sync.

This way, your team can continue working in Excel/Sheets while data is safely persisted in a database, ideal for scaling into dashboards, SaaS, or ERP systems later.

**How to set up**
1. Import the workflow JSON into your n8n instance.
2. Connect your credentials: Microsoft Excel / Google Sheets OAuth2 and the Postgres database.
3. Point the Excel node to the right workbook, worksheet, and table.
4. Make sure column names match between the Excel sheet and DB table (or map them manually if they differ).
5. Run manually or configure the schedule trigger for automated syncs.

**Requirements**
- n8n self-hosted or cloud account.
- Either Microsoft Excel Online or Google Sheets access.
- Postgres database (or replace with MySQL, MariaDB, or any supported DB).

**How to customize the workflow**
- Replace Excel with Google Sheets by swapping the node.
- Replace Postgres with any preferred database node.
- Add validation steps (e.g., check for missing emails, duplicate IDs).
- Extend with reporting workflows (e.g., sync DB data to BI dashboards).
- Use this as a stepping stone to migrate from spreadsheets into software-driven processes.
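A minimal sketch of the "Sanitize data" step, assuming Excel returns dates as serial numbers (days since 1899-12-30). The column name `invoice_date` is only an example; point the conversion at whichever columns hold serial dates in your sheet.

```javascript
// Hypothetical Code-node sketch: convert Excel serial dates to ISO date strings.
const EXCEL_EPOCH = Date.UTC(1899, 11, 30); // Excel day 0 (covers the 1900 leap-year quirk)

function serialToIso(serial) {
  if (typeof serial !== 'number' || !isFinite(serial)) return serial; // already a string/date
  return new Date(EXCEL_EPOCH + serial * 86400000).toISOString().slice(0, 10);
}

return items.map((item) => {
  const row = { ...item.json };
  row.invoice_date = serialToIso(row.invoice_date); // repeat for other date columns
  return { json: row };
});
```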
by Bhavy Shekhaliya
**Overview**
This n8n template demonstrates how to use AI to automatically analyze WordPress blog content and generate relevant, SEO-optimized tags for WordPress posts.

**Use cases**
Automate content tagging for WordPress blogs, maintain a consistent taxonomy across large content libraries, save hours of manual tagging work, or improve SEO by ensuring every post has relevant, searchable tags!

**Good to know**
- The workflow creates new tags automatically if they don't exist in WordPress.
- Tag generation is intelligent: it avoids duplicates by mapping to existing tag IDs.

**How it works**
- We fetch a WordPress blog post using the WordPress node with sticky data enabled for testing.
- The post content is sent to GPT-4.1-mini, which analyzes it and generates 5-10 relevant tags using a structured output parser.
- All existing WordPress tags are fetched via HTTP Request to check for matches.
- A smart loop processes each AI-generated tag (see the sketch at the end of this listing): if the tag already exists, it maps to the existing tag ID; if it's new, it creates the tag via the WordPress API.
- All tag IDs are aggregated and the WordPress post is updated with the complete tag list.

**How to use**
- The manual trigger node is used as an example, but feel free to replace it with other triggers such as a webhook, a schedule, or a WordPress webhook for new posts.
- Modify the "Fetch One WordPress Blog" node to fetch multiple posts or integrate with your publishing workflow.

**Requirements**
- WordPress site with REST API enabled
- OpenAI API

**Customising this workflow**
- Adjust the AI prompt to generate tags specific to your industry or SEO strategy.
- Change the tag count (currently 5-10) based on your needs.
- Add filtering logic to only tag posts in specific categories.
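A minimal sketch of the tag-matching logic, assuming the AI tags and the existing WordPress tags have been merged into one item upstream. The input field names (`aiTags`, `existingTags`) are illustrative; only the WordPress tag objects' `id` and `name` fields follow the REST API shape.

```javascript
// Hypothetical Code-node sketch: map AI tags to existing WordPress tag IDs,
// and flag the ones that still need to be created via the WordPress API.
const aiTags = $json.aiTags || [];             // e.g. ["automation", "n8n", "seo"]
const existingTags = $json.existingTags || []; // e.g. [{ id: 12, name: "SEO" }, ...]

const byName = new Map(existingTags.map((t) => [t.name.toLowerCase(), t.id]));

return aiTags.map((name) => {
  const id = byName.get(name.toLowerCase()) || null;
  return {
    json: {
      name,
      tagId: id,          // reuse this ID when the tag already exists
      needsCreation: !id, // route to the "create tag" HTTP Request when true
    },
  };
});
```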
by Aslamul Fikri Alfirdausi
**How it works**
This workflow is a professional-grade market intelligence tool designed to bridge the gap between search interest and social media engagement. It automates the end-to-end process of trend discovery and content strategy.
- **Detection**: Polls the Google Trends RSS feed daily for rising regional search queries.
- **Parallel Extraction**: Concurrently triggers industrial-grade Apify actors to scrape TikTok, Instagram, and X (Twitter) without the risk of account bans.
- **Data Aggregation**: Uses custom JavaScript logic to clean and merge disparate data points, optimizing them for LLM processing (see the sketch at the end of this listing).
- **AI Analysis**: Google Gemini Flash analyzes the data to identify core topics, sentiment, and trend strength.
- **Granular Delivery**: Delivers individual, structured reports for each identified trend directly to Discord via webhooks.

**Set up steps**
- **API Credentials**: Prepare your Apify API token and Google Gemini API key.
- **Discord Setup**: Create a webhook in your Discord server and paste the URL into the Discord node.
- **Regional Configuration**: Set your target country code (e.g., JP, ID, US) in the "Edit Fields" node at the start of the workflow.
- **Node Settings**: Ensure all scraper nodes are set to "Continue on Fail" to maintain workflow resilience.

**Requirements**
- Apify account.
- Google Gemini API Key.
- Discord server for report delivery.
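A minimal sketch of the aggregation step, assuming each scraped item carries the trend keyword it was scraped for plus a platform label. The field names (`trend`, `platform`, `text`, `likes`, `shares`) are illustrative, not the exact Apify actor output.

```javascript
// Hypothetical Code-node sketch: group scraped posts by trend keyword and
// trim them down so the payload sent to the LLM stays small.
const byTrend = {};

for (const item of items) {
  const { trend, platform, text, likes = 0, shares = 0 } = item.json;
  if (!byTrend[trend]) byTrend[trend] = { trend, posts: [], totalEngagement: 0 };
  byTrend[trend].posts.push({
    platform,
    text: String(text || '').slice(0, 280), // keep only a short excerpt per post
  });
  byTrend[trend].totalEngagement += likes + shares;
}

return Object.values(byTrend).map((t) => ({ json: t }));
```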
by Supira Inc.
**Overview**
This template automates invoice processing for teams that currently copy data from PDFs into spreadsheets by hand. It is ideal for small businesses, back-office teams, accounting, and operations staff who want to reduce manual entry, avoid human error, and never miss a payment deadline. The workflow watches a structured Google Drive folder, performs OCR, converts the text into clean structured JSON with an LLM, and appends one row per invoice into Google Sheets. It preserves a link back to the original file for easy review and audit.
- **Designed for small businesses and back-office teams.**
- **Eliminates manual typing** and reduces errors.
- **Prevents missed due dates** by centralizing data.
- Works with monthly subfolders like "2025年10月分" (meaning "October 2025").
- Keeps a Google Drive link to each invoice file.

**How It Works**
The workflow runs on a schedule, scans your Drive folder hierarchy, OCRs the PDFs/images, cleans the text, extracts key fields with an LLM, and appends a row to Google Sheets per invoice. Each step is modular, so you can swap services or tweak prompts without breaking the flow.
- **Scheduled trigger** runs on a recurring cadence.
- **Scan the parent folder** in Google Drive.
- **Auto-detect the current-month folder** (e.g., a folder named "2025年10月分", meaning "October 2025"); see the sketch at the end of this listing.
- **Download PDFs/images** from the detected folder.
- **Extract text** using the OCR.Space API.
- **Clean noise** and normalize with a Code node.
- **Use an OpenAI model** to extract invoice_date, due_date, client_name, line items, totals, and bank info to JSON.
- **Append one row per invoice** to Google Sheets.

**Requirements**
Before you start, make sure you have access to the required services and that your Drive is organized into monthly subfolders so the workflow can find the right files.
- **n8n account.**
- **Google Drive access.**
- **Google Sheets access.**
- **OCR.Space API key** (set as <your_ocr_api_key>).
- **OpenAI / LLM API credential** (e.g., <your_openai_credential_name>).
- **Invoice PDFs organized by month** on Google Drive (e.g., folders like "2025年10月分").

**Setup Instructions**
Import the workflow, replace placeholder credentials and IDs with your own, and enable the schedule. You can also run it manually for testing. The parent-folder query and sheet ID must reflect your environment.
- Replace <your_google_drive_credential_id> and <your_google_drive_credential_name> with your Google Drive credential.
- Adjust the parent folder search query to your invoice repository name.
- Replace the Sheets document ID <your_google_sheet_id> with your spreadsheet ID.
- Ensure your OpenAI credential <your_openai_credential_name> is selected.
- Set your OCR.Space key as <your_ocr_api_key>.
- **Enable the Schedule Trigger** after testing.

**Customization**
This workflow is easily extensible. You can adapt folder naming rules, enrich the spreadsheet schema, and expand the AI prompt to extract custom fields specific to your company. It also works beyond invoices, covering receipts, quotes, or purchase orders with minor changes.
- Change the monthly folder naming rule, such as {{$now.setZone("Asia/Tokyo").format("yyyy年MM月")}}分, to match your convention.
- Modify or extend the Google Sheets column mappings as needed.
- Tune the AI prompt to extract project codes, owner names, or custom fields.
- Repurpose for receipts, quotes, or purchase orders.
- Localize date formats and tax calculation rules to your standards.
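A minimal sketch of the current-month folder detection, assuming the parent-folder listing arrives as items with a `name` field (the usual Google Drive node output). The folder naming convention and error handling are illustrative.

```javascript
// Hypothetical Code-node sketch: build the expected monthly folder name
// (e.g. "2025年10月分") and keep only the matching Drive folder.
const expected = $now.setZone('Asia/Tokyo').toFormat('yyyy年MM月') + '分';

const matches = items.filter((item) => item.json.name === expected);
if (matches.length === 0) {
  throw new Error(`No folder named "${expected}" was found under the parent folder`);
}
return matches;
```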
by Vinay Gangidi
**LOB Underwriting with AI**
This template ingests borrower documents from OneDrive, extracts text with OCR, classifies each file (ID, paystub, bank statement, utilities, tax forms, etc.), aggregates everything per borrower, and asks an LLM to produce a clear underwriting summary and decision (plus next steps).

**Good to know**
- AI and OCR usage consume credits (OpenAI plus your OCR provider).
- Folder lookups by name can be ambiguous; use a fixed folderId in production.
- Scanned image quality drives OCR accuracy; bad scans yield weak text.
- This flow handles PII; mask sensitive data in logs and control access.
- Start small: batch size and pagination keep costs and memory sane.

**How it works**
- **Import & locate docs**: A manual trigger kicks off a OneDrive folder search (e.g., "LOBs") and lists the files inside.
- **Per-file loop**: Download each file → run OCR → classify the document type using the filename plus the extracted text (see the sketch at the end of this listing).
- **Aggregate**: Combine per-file results into a borrower payload (make BorrowerName dynamic).
- **LLM analysis**: Feed the payload to an AI Agent (OpenAI model) to extract underwriting-relevant facts and produce a decision plus next steps.
- **Output**: Return a human-readable summary (and optionally structured JSON for downstream systems).

**How to use**
- Start with the Manual Trigger to validate end-to-end on a tiny test folder. Once stable, swap in a Schedule/Cron or Webhook trigger.
- Review the generated underwriting summary; handle only flagged exceptions (unknown/unreadable docs, low confidence).

**Setup steps**
- **Connect accounts**: Add credentials for OneDrive, OCR, and OpenAI.
- **Configure inputs**: In Search a folder, point to your borrower docs (prefer folderId; otherwise tighten the name query). In Get items in a folder, enable pagination if the folder is large. In Split in Batches, set a conservative batch size to control costs.
- **Wire the file path**: Download a file must receive the current file's id from the folder listing. Make sure the OCR node receives binary input (PDFs/images).
- **Classification**: Update the keyword rules to match your region/lenders/utilities/tax forms. Keep a fallback Unknown class and log it for review.
- **Combine**: Replace the hard-coded BorrowerName with a Set node field, a form input, or parsing from folder/file naming conventions.
- **AI Agent**: Set your OpenAI model/credentials. Ask the model to output JSON first (structured fields) and Markdown second (readable summary). Keep temperature low for consistent, audit-friendly results.
- **Optional outputs**: Persist JSON/Markdown to Notion/Docs/DB or write to storage.

**Customize if needed**
- **Doc types**: Add or remove categories and keywords without touching the core logic.
- **Error handling**: Add IF paths for empty folders, failed downloads, empty OCR, or the Unknown class; retry transient API errors.
- **Privacy**: Redact IDs/account numbers in logs; restrict execution visibility.
- **Scale**: Add MIME/size filters, duplicate detection, and multi-borrower folder patterns (parent → subfolders).
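A minimal sketch of the keyword-based classification step. The categories and keywords below are examples only; tune them to your region, lenders, and document naming conventions, and keep the Unknown fallback for review.

```javascript
// Hypothetical Code-node sketch: classify a document from its filename plus OCR text.
const RULES = [
  { type: 'paystub', keywords: ['pay stub', 'paystub', 'gross pay', 'net pay'] },
  { type: 'bank_statement', keywords: ['bank statement', 'account summary', 'ending balance'] },
  { type: 'utility_bill', keywords: ['utility', 'electric bill', 'water bill', 'gas bill'] },
  { type: 'tax_form', keywords: ['form w-2', 'form 1040', 'tax return'] },
  { type: 'id_document', keywords: ['driver license', 'passport', 'identification card'] },
];

const haystack = `${$json.fileName || ''} ${$json.ocrText || ''}`.toLowerCase();
const hit = RULES.find((rule) => rule.keywords.some((k) => haystack.includes(k)));

return [{
  json: {
    ...$json,
    docType: hit ? hit.type : 'unknown', // fallback class; log and review these
  },
}];
```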
by SpaGreen Creative
**WhatsApp Bulk Number Verification in Google Sheets Using Unofficial Rapiwa API**

**Who's it for**
This workflow is for marketers, small business owners, freelancers, and support teams who want to automate WhatsApp messaging from a Google Sheet without the official WhatsApp Business API. It's suitable when you need a budget-friendly, easy-to-maintain solution that uses your personal or business WhatsApp number via an unofficial API service such as Rapiwa.

**How it works / What it does**
- The workflow looks for rows in a Google Sheet where the Status column is pending.
- It cleans each phone number (removes non-digits); a cleaning sketch appears after the best-practices list below.
- It verifies the number with the Rapiwa verify endpoint (/api/verify-whatsapp).
- If the number is verified: the workflow can send a message (optional) and updates the sheet with Verification = verified, Status = sent (or leaves Status for the send node to update).
- If the number is not verified: it skips sending and updates the sheet with Verification = unverified, Status = not sent.
- The workflow processes rows in batches and inserts short delays between items to avoid rate limits.
- The whole process runs on a configurable schedule.

**Key features**
- Scheduled automatic checks (configurable interval; 5–10 minutes recommended).
- Cleans phone numbers to a proper format before verification.
- Verifies WhatsApp registration using Rapiwa.
- Batch processing with limits to control workload (recommended max per run is configurable).
- Short delay between items to reduce throttling and temporary blocks.
- Automatic sheet updates for auditability (verified/unverified, sent/not sent).

**Defaults recommended in this workflow**
- Trigger interval: every 5–10 minutes (adjustable).
- Max items per run: configurable (example: 200 per cycle).
- Delay between items: 2–5 seconds (the example uses 3 seconds).

**How to set up**
1. Duplicate the sample Google Sheet. Fill in contact rows and set Status = pending. Include columns like WhatsApp No, Name, Message, Verification, Status.
2. In n8n, add and authenticate a Google Sheets node pointed at your sheet.
3. Create an HTTP Bearer credential in n8n and paste your Rapiwa API key.
4. Configure the workflow nodes (Trigger → Google Sheets → Limit/SplitInBatches → Code (clean) → HTTP Request (verify) → If → Update Sheet → Wait).
5. Enable the workflow and monitor the first runs with a small test batch.

**Requirements**
- n8n instance with the Google Sheets and HTTP Request nodes enabled.
- Google Sheets OAuth2 credentials configured in n8n.
- Rapiwa account and Bearer token (stored in n8n credentials).
- Google Sheet formatted to match the workflow columns.

**Why use Rapiwa**
- Cost-effective, developer-friendly REST API for WhatsApp verification and sending.
- Simple integration via HTTP requests and n8n.
- Useful when you prefer not to use the official WhatsApp Business API.
- Note: Rapiwa is an unofficial service; review its terms and risks before production use.

**How to customize**
- Change the schedule frequency in the Trigger node.
- Adjust maxItems in Limit/SplitInBatches for throughput control.
- Change the Wait node delay for safer sending.
- Modify the HTTP Request body to support media or templates if the provider supports it.
- Add logging or a separate audit sheet to record API responses and errors.

**Best practices**
- Test with a small batch first.
- Keep the sheet headers exact and consistent.
- Store API keys in n8n credentials (do not hardcode them).
- Increase the Wait time or reduce the batch size if you hit rate limits.
- Keep a log sheet of verified/unverified rows for troubleshooting.
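A minimal sketch of the number-cleaning Code node, assuming the sheet column is named exactly "WhatsApp No" as described above.

```javascript
// Hypothetical Code-node sketch: strip everything but digits before verification.
return items.map((item) => {
  const raw = String(item.json['WhatsApp No'] || '');
  const digits = raw.replace(/\D/g, ''); // e.g. "+1 (555) 010-1234" -> "15550101234"
  return { json: { ...item.json, 'WhatsApp No': digits } };
});
```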
**Example HTTP verify body (n8n HTTP Request node)**

```json
{ "number": "{{ $json['WhatsApp No'] }}" }
```

**Notes and best practices**
- Test with a small batch before scaling.
- Store the Rapiwa token in n8n credentials, not in node fields.
- Increase the Wait delay or reduce the batch size if you see rate limits or temporary blocks.
- Keep the sheet headers consistent; the workflow matches columns by name.
- Log API responses or errors for troubleshooting.

**Optional**
- Add a send-message HTTP Request node after verification to send messages.
- Append successful and failed rows to separate sheets for easy review.

**Support & Community**
Need help setting up or customizing the workflow? Reach out here:
- WhatsApp: Chat with Support
- Discord: Join SpaGreen Server
- Facebook Group: SpaGreen Community
- Website: SpaGreen Creative
- Envato: SpaGreen Portfolio
by vinci-king-01
**Public Transport Delay Tracker with Microsoft Teams and Todoist**

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow continuously monitors public-transportation websites and apps for real-time schedule changes and delays, then posts an alert to a Microsoft Teams channel and creates a follow-up task in Todoist. It is ideal for commuters or travel coordinators who need instant, actionable updates about transit disruptions.

**Pre-conditions / Requirements**

Prerequisites
- An n8n instance (self-hosted or n8n cloud)
- ScrapeGraphAI community node installed
- Microsoft Teams account with permission to create an Incoming Webhook
- Todoist account with at least one project
- Access to target transit authority websites or APIs

Required Credentials
- **ScrapeGraphAI API Key** – Enables scraping of transit data
- **Microsoft Teams Webhook URL** – Sends messages to a specific channel
- **Todoist API Token** – Creates follow-up tasks
- (Optional) Transit API key if you are using a protected data source

Specific Setup Requirements

| Resource | What you need |
|---|---|
| Teams Channel | Create a channel → Add "Incoming Webhook" → copy the URL |
| Todoist Project | Create a "Transit Alerts" project and note its Project ID |
| Transit URLs/APIs | Confirm the URLs/pages contain the schedule & delay elements |

**How it works**

Key Steps:
- **Webhook (Trigger)**: Starts the workflow on a schedule or via HTTP call.
- **Set Node**: Defines target transit URLs and parsing rules.
- **ScrapeGraphAI Node**: Scrapes live schedule and delay data.
- **Code Node**: Normalizes scraped data, converts times, and flags delays (a sketch appears at the end of this listing).
- **IF Node**: Determines if a delay exceeds the user-defined threshold.
- **Microsoft Teams Node**: Sends a formatted alert message to the selected Teams channel.
- **Todoist Node**: Creates a "Check alternate route" task with a due date equal to the delayed departure time.
- **Sticky Note Node**: Holds a blueprint-level explanation for future editors.

**Set up steps**

Setup time: 15–20 minutes
1. Install the community node: in n8n, go to "Manage Nodes" → "Install" → search for "ScrapeGraphAI" → install and restart n8n.
2. Create a Teams webhook: in Microsoft Teams, open the target channel → "Connectors" → "Incoming Webhook" → give it a name/icon → copy the URL.
3. Create a Todoist API token: Todoist → Settings → Integrations → copy your personal API token.
4. Add credentials in n8n: Settings → Credentials → create new for ScrapeGraphAI, Microsoft Teams, and Todoist.
5. Import the workflow template: File → Import Workflow JSON → select this template.
6. Configure the Set node: replace the example transit URLs with those of your local transit authority.
7. Adjust the delay threshold: in the Code node, edit const MAX_DELAY_MINUTES = 5; as needed.
8. Activate the workflow: toggle "Active".
9. Monitor executions to ensure messages and tasks are created.

**Node Descriptions**

Core Workflow Nodes:
- **Webhook** – Triggers the workflow on a schedule or external HTTP request.
- **Set** – Supplies the list of URLs and scraping selectors.
- **ScrapeGraphAI** – Scrapes timetable, status, and delay indicators.
- **Code** – Parses results, converts to minutes, and builds payloads.
- **IF** – Compares delay duration to the threshold.
- **Microsoft Teams** – Posts a formatted adaptive-card-style message.
- **Todoist** – Adds a task with priority and due date.
- **Sticky Note** – Internal documentation inside the workflow canvas.

Data Flow:
Webhook → Set → ScrapeGraphAI → Code → IF
a. IF (true branch) → Microsoft Teams → Todoist
b. IF (false branch) → (workflow ends)

**Customization Examples**

Change alert message formatting:

```javascript
// In the Code node
const message = `⚠️ Delay Alert:
Route: ${item.route}
Expected: ${item.scheduled}
New Time: ${item.newTime}
Delay: ${item.delay} min
Link: ${item.url}`;
return [{ json: { message } }];
```

Post to multiple Teams channels:

```javascript
// Duplicate the Microsoft Teams node and reference a different credential
items.forEach(item => {
  item.json.webhookUrl = $node["Set"].json["secondaryChannelWebhook"];
});
return items;
```

**Data Output Format**

The workflow outputs structured JSON data:

```json
{
  "route": "Blue Line",
  "scheduled": "2024-12-01T14:25:00Z",
  "newTime": "2024-12-01T14:45:00Z",
  "delay": 20,
  "status": "Delayed",
  "url": "https://transit.example.com/blue-line/status"
}
```

**Troubleshooting**

Common Issues
- Scraping returns empty data – Verify the CSS selectors/XPath in the Set node and ensure the target site hasn't changed its markup.
- Teams message not sent – Check that the stored webhook URL is correct and the connector is still active.
- Todoist task duplicated – Add a unique key (e.g., route + timestamp) to avoid inserting duplicates.

Performance Tips
- Limit the number of URLs per execution when monitoring many routes.
- Cache previous scrape results to avoid hitting site rate limits.

Pro Tips:
- Use n8n's built-in Cron instead of Webhook if you only need periodic polling.
- Add a SplitInBatches node after scraping to process large route lists incrementally.
- Enable execution logging to an external database for detailed audit trails.
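For reference, a minimal sketch of the delay calculation inside the Code node, assuming the scraped items carry `scheduled` and `newTime` ISO timestamps as in the output format above. Field names and the threshold logic are illustrative.

```javascript
// Hypothetical sketch: compute delay in minutes and flag delayed departures.
const MAX_DELAY_MINUTES = 5;

return items.map((item) => {
  const { route, scheduled, newTime, url } = item.json;
  const delay = Math.round((new Date(newTime) - new Date(scheduled)) / 60000);
  return {
    json: {
      route,
      scheduled,
      newTime,
      url,
      delay,
      status: delay > MAX_DELAY_MINUTES ? 'Delayed' : 'On time',
    },
  };
});
```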
by Yasir
**🧠 Workflow Overview — AI-Powered Jobs Scraper & Relevancy Evaluator**

This workflow automates the process of finding highly relevant job listings based on a user's resume, career preferences, and custom filters. It scrapes fresh job data, evaluates relevance using OpenAI GPT models, and automatically appends the results to your Google Sheet tracker, skipping any jobs already in your sheet so you don't have to worry about duplicates. Perfect for recruiters, job seekers, or virtual assistants who want to automate job research and filtering.

**⚙️ What the Workflow Does**
1. Takes user input through a form, including resume, preferences, target score, and Google Sheet link.
2. Fetches job listings via an Apify LinkedIn Jobs API actor.
3. Filters and deduplicates results (removes duplicates and blacklisted companies).
4. Evaluates job relevancy using GPT-4o-mini, scoring each job (0–100) against the user's resume and preferences.
5. Applies a relevancy threshold to keep only top-matching jobs.
6. Checks your Google Sheet for existing jobs and prevents duplicates (see the sketch at the end of this listing).
7. Appends new, relevant jobs directly into your provided Google Sheet.

**📋 What You'll Get**
- A personal Job Scraper Form (a public URL you can share or embed).
- Automatic job collection and filtering based on your inputs.
- **Relevance scoring** (0–100) for each job using your resume and preferences.
- A real-time job tracking Google Sheet that includes: Job Title, Company Name & Profile, Job URLs, Location, Salary, HR Contact (if available), and Relevancy Score.

**🪄 Setup Instructions**

1. Required Accounts
- ✅ n8n account (self-hosted or Cloud)
- ✅ Google account (for Sheets integration)
- ✅ OpenAI account (for GPT API access)
- ✅ Apify account (to fetch job data)

2. Connect Credentials
In your n8n instance, go to Credentials → Add New:
- Google Sheets OAuth2 API: connect your Google account.
- OpenAI API: add your OpenAI API key.
- Apify API: replace <your_apify_api> with your Apify API key.

Set Up the Apify API
- Get your Apify API key: visit https://console.apify.com/settings/integrations and copy your API key.
- Rent the required Apify actor before running this workflow: go to https://console.apify.com/actors/BHzefUZlZRKWxkTck/input and click "Rent Actor". Once rented, it can be used by your Apify account to fetch job listings.

3. Set Up Your Google Sheet
- Make a copy of this template: 📄 Google Sheet Template
- Enable edit access for anyone with the link.
- Copy your sheet's URL; you'll provide it when submitting the workflow form.

4. Deploy & Run
- Import this workflow (jobs_scraper.json) into your n8n workspace.
- Activate the workflow.
- Visit your form trigger endpoint (e.g., https://your-n8n-domain/webhook/jobs-scraper).
- Fill out the form with: job title(s), location, contract type, experience level, working mode, date posted, target relevancy score, Google Sheet link, resume text, and job preferences or ranking criteria.
- Submit; within minutes, new high-relevance job listings will appear in your Google Sheet automatically.

**🧩 Example Use Cases**
- Automate daily job scraping for clients or yourself.
- Filter jobs by AI-based relevance instead of keywords.
- Build a smart job board or job alert system.
- Support a career agency offering done-for-you job search services.

**💡 Tips**
- Adjust the "Target Relevancy Score" (e.g., 70–85) to control how strict the filtering is.
- You can add your own blacklisted companies in the Filter & Dedup Jobs node.
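A minimal sketch of the duplicate check against the tracker sheet. The node names ("Scraped Jobs", "Existing Rows") and the `jobUrl` / "Job URL" field names are illustrative; match them to your actual nodes and sheet columns.

```javascript
// Hypothetical Code-node sketch: keep only scraped jobs whose URL is not
// already present in the Google Sheet rows read earlier in the workflow.
const scraped = $("Scraped Jobs").all().map((i) => i.json);
const existing = $("Existing Rows").all().map((i) => i.json);

const seen = new Set(
  existing.map((row) => String(row['Job URL'] || '').trim().toLowerCase())
);

return scraped
  .filter((job) => !seen.has(String(job.jobUrl || '').trim().toLowerCase()))
  .map((job) => ({ json: job }));
```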
by Sridevi Edupuganti
**Try It Out!**
Use n8n to extract medical test data from diagnostic reports uploaded to Google Drive, automatically detect abnormal values, and generate personalized health advice.

**How it works**
1. Upload a medical report (PDF or image) to a monitored Google Drive folder.
2. Mistral AI extracts text using OCR while preserving document structure.
3. GPT-4 parses the extracted text into structured JSON (patient info, test names, results, units, reference ranges).
4. All test results are saved to the "All Values" sheet in Google Sheets.
5. JavaScript code compares each result against its reference range to detect abnormalities (see the sketch at the end of this listing).
6. For out-of-range values, GPT-4 generates personalized dietary, lifestyle, and exercise advice based on patient age and gender.
7. Abnormal results with recommendations are saved to the "Out of Range Values" sheet.

**How to use**
- Set up Google Drive folder monitoring and a Google Sheet with two tabs: "All Values" and "Out of Range Values".
- Configure API credentials for Google Drive, Mistral AI, and OpenAI (GPT-4).
- Upload medical reports to your monitored folder.
- Review the extracted data and personalized health advice in Google Sheets.

**Requirements**
- Google Drive and Sheets with OAuth2 authentication.
- Mistral AI API key for OCR.
- OpenAI API key (GPT-4 access required) for intelligent extraction and advice generation.

**Need Help?**
- See the detailed Read Me file at https://drive.google.com/file/d/1Wv7dfcBLsHZlPcy1QWPYk6XSyrS3H534/view?usp=sharing
- Join the n8n community forum for support.
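A minimal sketch of the reference-range comparison, assuming the extraction step produces fields like `testName`, `result`, `unit`, and a `referenceRange` string such as "3.5 - 5.1". Those field names are illustrative; match them to your extraction JSON.

```javascript
// Hypothetical Code-node sketch: parse the reference range and flag results
// that fall below or above it; unparsable ranges are flagged for review.
return items.map((item) => {
  const { testName, result, referenceRange, unit } = item.json;
  const match = String(referenceRange || '').match(/([\d.]+)\s*[-–]\s*([\d.]+)/);
  const value = parseFloat(result);

  if (!match || isNaN(value)) {
    return { json: { ...item.json, flag: 'unparsed' } };
  }

  const low = parseFloat(match[1]);
  const high = parseFloat(match[2]);
  const flag = value < low ? 'low' : value > high ? 'high' : 'normal';

  return { json: { testName, result: value, unit, referenceRange, flag } };
});
```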
by Nikan Noorafkan
**🧠 Google Ads Monthly Performance Optimization (Channable + Google Ads + Relevance AI)**

**🚀 Overview**
This workflow automatically analyzes your Google Ads performance every month, identifies top-performing themes and categories, and regenerates optimized ad copy using Relevance AI, powered by insights from your Channable product feed. It then saves the improved ads to Google Sheets for review and sends a detailed performance report to your Slack workspace. Ideal for marketing teams who want to automate ad optimization at scale with zero manual intervention.

**🔗 Integrations Used**
- **Google Ads** → Fetch campaign and ad performance metrics using GAQL.
- **Relevance AI** → Analyze performance data and regenerate ad copy using AI agents and tools.
- **Channable** → Pull updated product feeds for ad refresh cycles.
- **Google Sheets** → Save optimized ad copy for review and documentation.
- **Slack** → Send a 30-day performance report to your marketing team.

**🧩 Workflow Summary**

| Step | Node | Description |
|---|---|---|
| 1 | Monthly Schedule Trigger | Runs automatically on the 1st of each month to review the last 30 days of data. |
| 2 | Get Google Ads Performance Data | Fetches ad metrics via a GAQL query (impressions, clicks, CTR, etc.). |
| 3 | Calculate Performance Metrics | Groups results by ad group and theme to find top/bottom performers (see the sketch at the end of this listing). |
| 4 | AI Performance Analysis (Relevance AI) | Generates human-readable insights and improvement suggestions. |
| 5 | Update Knowledge Base (Relevance AI) | Saves new insights for future ad copy training. |
| 6 | Get Updated Product Feed (Channable) | Retrieves the latest catalog items for ad regeneration. |
| 7 | Split Into Batches | Splits the feed into groups of 50 to avoid API rate limits. |
| 8 | Regenerate Ad Copy with Insights (Relevance AI) | Rewrites ad copy with the latest product and performance data. |
| 9 | Save Optimized Ads to Sheets | Writes output to your "Optimized Ads" Google Sheet. |
| 10 | Generate Performance Report | Summarizes the AI analysis, CTR trends, and key insights. |
| 11 | Email Performance Report (Slack) | Sends the report directly to your Slack channel/team. |

**🧰 Requirements**
Before running the workflow, make sure you have:
- A Google Ads account with API access and OAuth2 credentials.
- A Relevance AI project (with one Agent and one Tool set up).
- A Channable account with an API key and project feed.
- A Google Sheets document for saving results.
- A Slack webhook URL for sending performance summaries.

**⚙️ Environment Variables**
Add these environment variables to your n8n instance (via .env or the UI):

| Variable | Description |
|---|---|
| GOOGLE_ADS_API_VERSION | API version (e.g., v17). |
| GOOGLE_ADS_CUSTOMER_ID | Your Google Ads customer ID. |
| RELEVANCE_AI_API_URL | Base Relevance AI API URL (e.g., https://api.relevanceai.com/v1). |
| RELEVANCE_AGENT_PERFORMANCE_ID | ID of your Relevance AI Agent for performance analysis. |
| RELEVANCE_KNOWLEDGE_SOURCE_ID | Knowledge base or dataset ID used to store insights. |
| RELEVANCE_TOOL_AD_COPY_ID | Relevance AI tool ID for generating ad copy. |
| CHANNABLE_API_URL | Channable API endpoint (e.g., https://api.channable.com/v1). |
| CHANNABLE_COMPANY_ID | Your Channable company ID. |
| CHANNABLE_PROJECT_ID | Your Channable project ID. |
| FEED_ID | The feed ID for product data. |
| GOOGLE_SHEET_ID | ID of your Google Sheet to store optimized ads. |
| SLACK_WEBHOOK_URL | Slack Incoming Webhook URL for sending reports. |

**🔐 Credentials Setup in n8n**

| Credential | Type | Usage |
|---|---|---|
| Google Ads OAuth2 API | OAuth2 | Authenticates your Ads API queries. |
| HTTP Header Auth (Relevance AI & Channable) | Header | Uses your API key as Authorization: Bearer <key>. |
| Google Sheets OAuth2 API | OAuth2 | Writes optimized ads to Sheets. |
| Slack Webhook | Webhook | Sends monthly reports to your team channel. |

**🧠 Example AI Insight Output**

```json
{
  "insights": [
    "Ad groups using 'vegan' and 'organic' messaging achieved +23% CTR.",
    "'Budget' keyword ads underperformed (-15% CTR).",
    "Campaigns featuring 'new' or 'bestseller' tags showed higher conversion rates."
  ],
  "recommendations": [
    "Increase ad spend for top-performing 'vegan' and 'premium' categories.",
    "Revise copy for 'budget' and 'sale' ads with low CTR."
  ]
}
```

**📊 Output Example (Google Sheet)**

| Product | Category | Old Headline | New Headline | CTR Change | Theme |
|---|---|---|---|---|---|
| Organic Protein Bar | Snacks | "Healthy Energy Anytime" | "Organic Protein Bar — 100% Natural Fuel" | +12% | Organic |
| Eco Face Cream | Skincare | "Gentle Hydration" | "Vegan Face Cream — Clean, Natural Moisture" | +17% | Vegan |

**📤 Automation Flow**
- Runs automatically on the first of every month (cron: 0 0 1 * *).
- Fetch Ads Data → Analyze & Learn → Generate New Ads → Save & Notify.
- Every iteration updates the AI's knowledge base, improving your campaigns progressively.

**⚡ Scalability**
- The flow is batch-optimized (50 items per request).
- Works for large ad accounts with up to 10,000 ad records.
- AI analysis and regeneration steps are asynchronous-safe (timeouts extended).
- Perfect for agencies managing multiple ad accounts: simply duplicate the workflow and update the environment variables per client.

**🧩 Best Use Cases**
- Monthly ad creative optimization for eCommerce stores.
- Marketing automation for Google Ads campaign scaling.
- Continuous-learning ad systems powered by Relevance AI insights.
- Agencies automating ad copy refresh cycles across clients.

**💬 Slack Report Example**

30-Day Performance Optimization Report
Date: 2025-10-01
Analysis Period: Last 30 days
Ads Analyzed: 842

Top Performing Themes
- Vegan: 5.2% CTR (34 ads)
- Premium: 4.9% CTR (28 ads)

Underperforming Themes
- Budget: 1.8% CTR (12 ads)

AI Insights
- "Vegan" and "Premium" themes outperform baseline by +22% CTR.
- "Budget" ads underperform due to lack of value framing.

Next Optimization Cycle: 2025-11-01

**🛠️ Maintenance Tips**
- Update your GAQL query occasionally to include new metrics or segments.
- Refresh Relevance AI tokens every 90 days (if required).
- Review generated ads in Google Sheets before pushing them live.
- Test webhook and OAuth connections after major n8n updates.

**🧩 Import Instructions**
1. Open n8n → Workflows → Import from File / JSON.
2. Paste this workflow JSON or upload it.
3. Add all required environment variables and credentials.
4. Execute the first run manually to validate connections.
5. Once verified, enable scheduling for automatic monthly runs.

**🧾 Credits**
Developed for AI-driven marketing teams leveraging Google Ads, Channable, and Relevance AI to achieve continuous ad improvement, fully automated via n8n.
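A minimal sketch of the "Calculate Performance Metrics" step, assuming the GAQL results have already been mapped to flat fields such as `theme`, `impressions`, and `clicks`. Those field names are illustrative, not the raw Google Ads API shape.

```javascript
// Hypothetical Code-node sketch: group ads by theme, compute CTR per theme,
// and rank themes so the best and worst performers can be identified.
const byTheme = {};

for (const item of items) {
  const { theme = 'unknown', impressions = 0, clicks = 0 } = item.json;
  if (!byTheme[theme]) byTheme[theme] = { theme, impressions: 0, clicks: 0, ads: 0 };
  byTheme[theme].impressions += impressions;
  byTheme[theme].clicks += clicks;
  byTheme[theme].ads += 1;
}

const ranked = Object.values(byTheme)
  .map((t) => ({
    ...t,
    ctr: t.impressions ? +(100 * t.clicks / t.impressions).toFixed(2) : 0, // CTR in %
  }))
  .sort((a, b) => b.ctr - a.ctr);

return ranked.map((t) => ({ json: t }));
```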