by Vivekanand M
# Upwork Proposal Automation with AI, Airtable and Slack

## 📘 Description

This workflow automates the complete Upwork job discovery and proposal generation process by continuously monitoring job listings, intelligently filtering opportunities based on your skill set, generating personalised AI-written proposals, and delivering instant notifications — all without any manual effort.

The workflow is triggered automatically every minute via Vollna's RSS feed, which monitors Upwork job postings matching your configured search filters. Each new job listing is parsed and analysed to extract key details, including title, description, budget, required skills, and job ID. A skills matching engine scores each job against your defined skill set and filters out weak matches. Duplicate jobs are automatically detected and skipped using Airtable as a reference store, ensuring AI credits are never wasted on already-processed listings.

For every qualified new job, GPT-4o-mini generates a tailored 150–250-word proposal that references specific details from the job post, aligns your experience to the client's exact requirements, and ends with a clear call to action. The proposal and all job metadata are saved to an Airtable base for review. A formatted Slack notification is sent instantly with the full job details and generated proposal, allowing you to review, edit, and apply directly on Upwork with a single click.

## ⚙️ What This Workflow Does (Step-by-Step)

**📡 RSS Feed Monitoring** — Polls Vollna's Upwork RSS feed every minute for new job listings matching your skill keywords. Vollna replaces Upwork's discontinued native RSS feed (removed August 2024) and supports 30+ filter parameters, including category, budget, and client history.

**🔍 Parse & Extract** — Extracts structured fields from each RSS item, including job title, full description, budget, required skills, posted date, job ID, and a clean Upwork job URL (decoded from Vollna's redirect format).

**🎯 Filter: Skills Match** — Scores each job against your defined skill list (see the sketch below). Jobs scoring fewer than 2 matched skills are dropped immediately, ensuring only relevant opportunities proceed.

**⭐ Filter: Client Quality** — Filters out clients with ratings below 4.5. New clients with no rating history are allowed through by default.

**🔁 Duplicate Detection** — Queries Airtable to check whether the job ID has already been processed in a previous run. Duplicate jobs are silently skipped without generating a proposal.

**🤖 AI Proposal Generation** — Calls GPT-4o-mini with a structured prompt containing the job details and your freelancer profile. Generates a concise, personalised proposal that opens with a specific reference to the job post, highlights relevant experience with real numbers, proposes a concrete first step, and ends with a soft call to action.

**💾 Save to Airtable** — Creates a new record in your Airtable base with all job fields, matched skills, match score, generated proposal, and status set to "New" for review tracking.

**💬 Slack Notification** — Sends a formatted message to your Slack channel with the job title, budget, match score, matched skills, required skills, direct Upwork job link, and the full AI-generated proposal — ready to copy and submit.
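To make the scoring step concrete, here is a minimal sketch of what the Filter: Skills Match logic looks like in an n8n Code node. The YOUR_SKILLS values and input field names are illustrative placeholders, not the template's exact code:

```javascript
// Illustrative skills-scoring sketch: adjust YOUR_SKILLS and field names to your feed.
const YOUR_SKILLS = ['n8n', 'python', 'automation', 'api integration', 'airtable'];

return $input.all().flatMap((item) => {
  const haystack = `${item.json.title} ${item.json.description}`.toLowerCase();
  const matched = YOUR_SKILLS.filter((skill) => haystack.includes(skill.toLowerCase()));
  if (matched.length < 2) return []; // fewer than 2 matched skills: drop the job
  return [{ json: { ...item.json, matchedSkills: matched, matchScore: matched.length } }];
});
```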
## 🧩 Prerequisites

- **Vollna account** — Free tier available at vollna.com. Create a job filter matching your skills and copy the RSS feed URL from the Filters section
- **OpenAI API key** — Used for GPT-4o-mini proposal generation (~$0.007 per proposal)
- **Airtable account** — Free tier supports up to 1,000 records. Create a base with the schema below
- **Slack workspace** — Bot token with chat:write permission, invited to your target channel

## 🗄️ Airtable Base Schema

Create a table called Upwork Proposals with these fields:

| Field Name | Type |
|---|---|
| Job Title | Single line text |
| Job URL | URL |
| Upwork URL | URL |
| Posted At | Date |
| Budget | Single line text |
| Skills Required | Long text |
| Matched Skills | Long text |
| Match Score | Number |
| AI Proposal | Long text |
| Status | Single select: New, Reviewed, Applied, Skipped |
| Job ID | Single line text |
| Notes | Long text |

## 💰 Cost Estimate

| Item | Estimated Cost |
|---|---|
| Vollna (free tier) | $0/mo |
| GPT-4o-mini (50 proposals/day) | $1–3/mo |
| Airtable (free tier) | $0/mo |
| n8n self-hosted (AWS t3.small) | ~$10–15/mo |
| Total | ~$11–18/mo |

## ⚙️ Setup Instructions

1. **Vollna** — Sign up at vollna.com, create a job filter with your target keywords and skill categories, then copy the RSS feed URL from the Filters section
2. **Airtable** — Create a new base and table using the schema above. Copy your Base ID from the Airtable URL and connect your Personal Access Token in n8n credentials
3. **OpenAI** — Add your OpenAI API key as an n8n credential (HTTP Header Auth with Authorisation: Bearer sk-...)
4. **Slack** — Create a Slack app, add the chat:write scope, install it to your workspace, and invite the bot to your channel with /invite @your-bot-name
5. **Customise the AI prompt** — Open the Build OpenAI Payload node and update the MY PROFILE section with your actual name, skills, and experience details
6. **Update skill filters** — In the Filter: Skills Match node, update the YOUR_SKILLS array to match your exact skill set
7. **Publish the workflow** — Click Publish. The RSS trigger will begin polling Vollna every minute automatically

## 💡 Key Benefits

✔ Fully automated job discovery — no manual searching required
✔ Skills-based filtering ensures AI only runs on relevant jobs
✔ Personalised proposals referencing specific job details — not generic templates
✔ Airtable CRM for tracking proposal status and conversion rates
✔ Instant Slack alerts with one-click access to apply on Upwork
✔ Deduplication prevents reprocessing the same job across runs
✔ Modular design — swap OpenAI for Claude or AWS Bedrock with minimal changes
✔ Cost-optimised — GPT-4o-mini keeps proposal generation under $3/month at scale

## 👥 Perfect For

- Freelancers on Upwork wanting to automate proposal writing
- Agencies managing multiple freelancer profiles
- Developers and automation specialists looking to win more technical contracts
- Anyone spending more than 30 minutes per day manually browsing and applying to Upwork jobs
by Jaruphat J.
This workflow automates the entire process of creating and publishing social media ads — directly from Telegram. By simply sending a product photo to your Telegram bot, the system analyzes the image, generates an AI-based advertising prompt, creates a marketing image via Fal.AI, writes an engaging Facebook/Instagram caption, and posts it automatically.

This template saves hours of manual work for marketers and small business owners who constantly need to design, write, and publish product campaigns. It eliminates repetitive steps like prompt writing, AI model switching, and post scheduling — letting you focus on strategy, not execution.

The workflow integrates seamlessly with Fal.AI for image generation, OpenAI Vision for image analysis, and the Facebook Graph API for automated publishing. Whether you're launching a 10.10 campaign or promoting a new product line, this template transforms your product photo into a ready-to-publish ad in just minutes.

## Who's it for

This workflow is designed for:
- **Marketers and e-commerce owners** who need to create social content quickly.
- **Agencies** managing multiple clients' campaigns.
- **Small business owners** who want to automate Facebook/Instagram posts.
- **n8n creators** who want to learn AI-assisted content automation.

## What problem does this solve

Manually creating ad images and captions is time-consuming and repetitive. You need to:
1. Edit the product photo.
2. Write a creative brief or prompt.
3. Generate an image in Fal.AI or Midjourney.
4. Write a caption.
5. Log into Facebook and post.

This workflow combines all five steps into one automation — triggered directly by sending a Telegram message. It handles AI analysis, image creation, caption writing, and posting, removing human friction while maintaining quality and creative control.

## What this workflow does

The workflow is divided into four main zones, color-coded inside the canvas:

### 🟩 Zone 1 – Product Image Analysis
- Trigger: the user sends a product image to a Telegram bot.
- n8n retrieves the file path using the Telegram API.
- OpenAI Vision analyzes the product photo and describes color, material, and shape.
- An AI agent converts this into structured data for generating ad prompts.

### 🟥 Zone 2 – Generate Ad Image Prompt
- The AI agent creates a professional advertising prompt based on the product description and campaign (e.g., "10.10 Sale").
- The prompt is sent to the user for confirmation via Telegram before proceeding.

### 🟨 Zone 3 – Create Ad Image via Fal.AI
- The confirmed prompt and image are sent to Fal.AI's image generation API.
- The system polls the generation status until completion (see the polling sketch after the setup steps).
- The generated image is sent back to Telegram for user review and approval.

### 🟦 Zone 4 – Write Caption & Publish
- The approved image is re-analyzed by AI to write a Facebook/Instagram caption.
- The user confirms the text on Telegram.
- Once approved, the workflow uploads the final post (image + caption) to Facebook automatically using the Graph API.

## Setup

### Prerequisites
- **n8n self-hosted or Cloud account**
- **Telegram Bot Token** (via @BotFather)
- **Fal.AI API key**
- **Facebook Page Access Token** with publishing permissions
- **OpenAI API Key** for image analysis and text generation

### Steps
1. Create a Telegram Bot and paste its token into n8n Credentials.
2. Set up Fal.AI Credentials under HTTP Request → Authentication.
3. Connect your Facebook Page through Facebook Graph API credentials.
4. In the HTTP Request node, set:
   - URL: https://fal.run/fal-ai/nano-banana
   - Auth: Bearer {{ $credentials.FalAI.apiKey }}
5. Configure all LLM and Vision nodes using your OpenAI credentials.
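Zone 3's polling step can be sketched as follows. This assumes Fal.AI's asynchronous queue endpoints, where the submit call returns a status_url and a response_url; the field names and queue behavior should be verified against Fal's current documentation:

```javascript
// Hypothetical polling loop for a Fal.AI queue submission (verify against Fal's docs).
const { status_url, response_url } = $json; // returned by the submit request
const apiKey = 'FAL_API_KEY'; // placeholder (store this as an n8n credential, never hardcoded)

let status = 'IN_QUEUE';
for (let attempt = 0; attempt < 60 && status !== 'COMPLETED'; attempt++) {
  await new Promise((r) => setTimeout(r, 2000)); // wait 2 seconds between checks
  const res = await this.helpers.httpRequest({
    url: status_url,
    headers: { Authorization: `Key ${apiKey}` },
    json: true,
  });
  status = res.status;
  if (status === 'FAILED') throw new Error('Fal.AI generation failed');
}
if (status !== 'COMPLETED') throw new Error('Timed out waiting for Fal.AI');

// Fetch the finished image payload once generation completes
const result = await this.helpers.httpRequest({
  url: response_url,
  headers: { Authorization: `Key ${apiKey}` },
  json: true,
});
return [{ json: result }];
```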
## Node settings

### 🟩 Analyze Image (OpenAI Vision)
```json
{
  "model": "gpt-4o-mini",
  "input": [
    {
      "role": "user",
      "content": [
        { "type": "image_url", "image_url": "{{$json.image_url}}" },
        { "type": "text", "text": "Describe this product in detail for advertising context." }
      ]
    }
  ]
}
```

### 🟥 Set Node – Prepare Fal.AI Body
```json
{
  "prompt": {{ JSON.stringify(($json.ad_prompt || '').replace(/\r?\n/g, ' ')) }},
  "image_urls": [{{ JSON.stringify($json.image_url || '') }}],
  "num_images": 1,
  "output_format": "jpeg"
}
```

### 🟦 HTTP Request (Facebook Graph API)
```json
{
  "method": "POST",
  "url": "https://graph.facebook.com/v19.0/me/photos",
  "body": {
    "caption": "{{ $json.caption_text }}",
    "url": "{{ $json.final_image_url }}",
    "access_token": "{{ $credentials.facebook.accessToken }}"
  }
}
```

## How to customize the workflow

- **Change AI Models:** Swap Fal.AI for Flux, Veo3, or SDXL by adjusting API endpoints.
- **Add Channels:** Extend the workflow to post on LINE OA or Instagram.
- **Add Approval Logic:** Keep Telegram confirmation steps before every publish.
- **Brand Rules:** Adjust AI prompt templates to enforce tone, logo, or color palette consistency.
- **Multi-language Posts:** Add translation nodes for global campaigns.

## Troubleshooting

| Problem | Cause | Solution |
|----------|--------|-----------|
| Telegram message not triggering | Webhook misconfigured | Reconnect the Telegram Trigger |
| Fal.AI API error | Invalid JSON or token | Use JSON.stringify() in the Set node and check credentials |
| Facebook upload fails | Missing permissions | Ensure the Page Access Token has pages_manage_posts |
| LLM parser error | Output not valid JSON | Add a Structured Output Parser and enforce the schema |

## ⚠️ Security Notes

- **Do NOT hardcode API keys** in Set or HTTP Request nodes.
- Always store credentials securely in the n8n Credentials Manager.
- For self-hosted setups, use .env variables for sensitive keys (OpenAI, Fal.AI, Facebook).

## 🏷️ Hashtags

#n8n #Automation #AIworkflow #FalAI #FacebookAPI #TelegramBot #nanobanana #NoCode #MarketingAutomation #SocialMediaAI #JaruphatJ #WorkflowTemplate #OpenAI #LLM #ProductAds #CreativeAutomation

*Figure: product image process steps*
by Davide
This workflow automates the process of scraping real estate property listings from websites using ScrapeGraph AI, extracting structured data, and saving it to a Google Sheet. It is designed to handle paginated listing pages and can be adapted to any real estate site that uses URL parameters for pagination.

NOTE: This workflow has been tested with Immobiliare.it, the #1 real estate website in Italy. However, it is designed to be adaptable: by modifying the pagination parameter and the listing URL pattern, you can use it with any real estate website that structures its listings with URL-based pagination.

## Business Use Cases

- Real estate market intelligence
- Lead generation for agencies
- Price trend analysis
- Property comparison dashboards
- CRM enrichment
- Competitor monitoring

## Key Advantages

1. ✅ **Fully Automated Lead Collection** — Collects real estate listings without manual browsing.
2. ✅ **AI-Powered Extraction** — Uses AI instead of rigid selectors: more resilient to website layout changes, handles dynamic content better, and reduces maintenance effort.
3. ✅ **Structured Data Output** — The defined JSON schema ensures clean, database-ready data, standardized fields, and easy integration with CRM or analytics tools.
4. ✅ **Pagination Scalability** — Easily scales: increase the number of pages, change the city, or adapt to different portals.
5. ✅ **Duplicate Prevention** — Google Sheets uses URL matching to avoid duplicates and update existing records.
6. ✅ **Modular Architecture** — The workflow is modular and reusable: the URL generation logic is independent, the extraction schema is customizable, and the storage layer can be replaced (CRM, database, Airtable, etc.).
7. ✅ **Cost & Time Efficiency** — Eliminates manual data entry, saves research time, and enables automated market monitoring.

## How it works

The workflow is structured in two main phases:

### Listing URL Discovery
- The user provides a base URL, the maximum number of pages to scrape, and the pagination parameter name (e.g., pag for Immobiliare.it).
- A Code node generates the list of page URLs by appending the pagination parameter (see the sketch below).
- Each page URL is processed through the ScrapegraphAI node, which extracts all individual listing URLs.
- An Information Extractor node (powered by Google Gemini) filters and validates the extracted URLs based on a defined structure.
- A Wait node introduces a delay between requests to avoid rate limiting.
- A Loop Over Items node ensures all generated page URLs are processed.

### Data Extraction & Storage
- All collected listing URLs are aggregated and split into individual items.
- A second loop processes each listing URL through another ScrapegraphAI node, which extracts detailed property data (title, description, price, area, bedrooms, bathrooms, floor, rooms, balcony, terrace, cellar, heating, air conditioning, image URLs) based on a JSON schema.
- The extracted data is then written to a Google Sheet using the Google Sheets node, with each listing stored in a new row and deduplicated based on the listing URL.

The workflow is fully automated and can scale to handle multiple listing pages and hundreds of individual property URLs.
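The page-URL generation step can be sketched like this. It reads the three parameters defined in the Set params node; the output field name pageUrl is illustrative:

```javascript
// Sketch of the Code node that builds paginated listing URLs from the Set params values.
const { url, max_pages, page_format_value } = $json;

const pages = [];
for (let page = 1; page <= max_pages; page++) {
  const separator = url.includes('?') ? '&' : '?';
  // e.g. https://www.immobiliare.it/vendita-case/milano/ -> .../?pag=2
  pages.push({ json: { pageUrl: `${url}${separator}${page_format_value}=${page}` } });
}
return pages;
```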
## Set up steps

To use this workflow, follow these steps:

1. Import the workflow into your n8n instance.
2. Configure credentials:
   - ScrapegraphAI: add your API key for ScrapegraphAI.
   - Google Gemini (PaLM): add your Google Gemini API credentials.
   - Google Sheets OAuth2: authenticate with the Google account where you want to store the data.
3. Prepare your target Google Sheet: create a new Google Sheet (or clone this template), then note the Sheet ID (from the URL) and the sheet name (tab name) where data should be written.
4. Customize the input parameters in the Set params node:
   - url: the base URL of the listing page (without pagination parameters).
   - max_pages: the number of pages to scrape.
   - page_format_value: the query parameter used for pagination (e.g., pag for Immobiliare.it).
5. Adjust the listing URL structure (if needed): in the Extract individual URL node, update the system prompt to match the URL pattern of the target website (e.g., https://www.xxx.it/xxx/xxxx).
6. Review the output schema: in the Extract data node, you can modify the JSON schema to match the fields you want to extract from each listing.
7. Update the Google Sheets node: set the correct Document ID and Sheet Name in the Update real estate listings node, and ensure the column mapping matches your sheet structure.
8. Activate the workflow and click Execute Workflow to start scraping.

👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
by Václav Čikl
## Overview

This workflow automates the entire process of creating professional subtitle (.SRT) and synced lyrics (.LRC) files from audio recordings. Upload your vocal track, let Whisper AI transcribe it with precise timestamps, and GPT-5-nano segments it into natural, singable lyric lines. With an optional quality control step, you can manually refine the output while maintaining perfect timestamp alignment.

## Key Features

- **Whisper AI Transcription**: Word-level timestamps with multi-language support via ISO codes
- **Intelligent Segmentation**: GPT-5-nano formats transcriptions into natural lyric lines (2–8 words per line)
- **Quality Control Option**: Download, edit, and re-upload corrections with smart timestamp matching
- **Advanced Alignment**: Levenshtein distance algorithm preserves timestamps during manual edits
- **Dual Format Export**: Generate both .SRT (video subtitles) and .LRC (synced lyrics) files
- **No Storage Needed**: Files generated in-memory for instant download
- **Multi-Language**: Supports various languages through the Whisper API

## Use Cases

- Generate synced lyrics for music video releases on YouTube
- Create .LRC files for Musixmatch, Apple Music, and Spotify
- Prepare professional subtitles for social media content
- Batch process subtitle files for catalog releases
- Maintain consistent lyric formatting across artists
- Streamline content delivery for streaming platforms
- Speed up video editing workflows

## Perfect For

- Musicians & artists
- Record labels
- Content creators

## What You'll Need

Required setup:
- **OpenAI API Key** for Whisper transcription and GPT-5-nano segmentation

Recommended input:
- **Format**: MP3 audio files (max 25MB)
- **Content**: Clean vocal tracks work best (isolated vocals recommended, though full tracks still work well)
- **Languages**: Any language supported by Whisper (specify via ISO code)

## How It Works

### Automatic Mode (No Quality Check)
1. Upload your MP3 vocal track to the workflow
2. Transcription: Whisper AI processes the audio with word-level timestamps
3. Segmentation: GPT-5-nano formats the text into natural lyric lines
4. Generation: the workflow creates .SRT and .LRC files
5. Download your ready-to-use subtitle files

### Manual Quality Control Mode
1. Upload your MP3 vocal track and enable quality check
2. Transcription: Whisper AI processes the audio with timestamps
3. Initial Segmentation: GPT-5-nano creates a first draft
4. Download the .TXT file for review
5. Edit the lyrics in any text editor (keep the line structure intact)
6. Re-upload the corrected .TXT file
7. Smart Matching: an advanced diff algorithm aligns your changes with the original timestamps
8. Download the final .SRT and .LRC files with perfect timing

## Technical Details

- **Transcription API**: OpenAI Whisper (/v1/audio/transcriptions)
- **Segmentation Model**: GPT-5-nano with a custom lyric-focused prompt
- **System Prompt**: "You are helping with preparing song lyrics for musicians. Take the following transcription and split it into lyric-like lines. Keep lines short (2–8 words), natural for singing/rap phrasing, and do not change the wording."
- **Timestamp Matching**: Levenshtein distance + alignment algorithm
- **File Size Limit**: 25MB (n8n platform default)
- **Processing**: All in-memory, no disk storage
- **Cost**: Based on Whisper API usage (varies with audio length)

## Output Formats

### .SRT (SubRip Subtitle)
Standard format for:
- YouTube video subtitles
- Video editing software (Premiere, DaVinci Resolve, etc.)
- Media players (VLC, etc.)

### .LRC (Lyric File)
Synced lyrics format for:
- Musixmatch
- Apple Music
- Spotify
- Music streaming services
- Audio players with lyrics display
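For reference, the two formats differ only in how timestamps are written. Below is a minimal sketch of rendering one segmented line into each format; the { start, end, text } input shape is an assumption about the workflow's internal data, while the timestamp syntax itself is standard:

```javascript
// Render one lyric line into .SRT and .LRC syntax.
function toSrtTime(sec) {
  const pad = (n, w = 2) => String(n).padStart(w, '0');
  const ms = Math.round((sec % 1) * 1000);
  const h = Math.floor(sec / 3600), m = Math.floor((sec % 3600) / 60), s = Math.floor(sec % 60);
  return `${pad(h)}:${pad(m)}:${pad(s)},${pad(ms, 3)}`; // SRT uses a comma before milliseconds
}

function toLrcTime(sec) {
  const m = String(Math.floor(sec / 60)).padStart(2, '0');
  const s = (sec % 60).toFixed(2).padStart(5, '0');
  return `[${m}:${s}]`; // LRC uses [mm:ss.xx]
}

const line = { start: 12.5, end: 15.2, text: 'Walking down the empty street' };
const srtBlock = `1\n${toSrtTime(line.start)} --> ${toSrtTime(line.end)}\n${line.text}\n`;
const lrcLine = `${toLrcTime(line.start)}${line.text}`;
// srtBlock → "1\n00:00:12,500 --> 00:00:15,200\nWalking down the empty street\n"
// lrcLine  → "[00:12.50]Walking down the empty street"
```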
## Pro Tips

💡 **For Best Results:**
- Use isolated vocal tracks when possible (remove instrumentals)
- Ensure clear recordings with minimal background noise
- For quality check edits, only modify text content—don't change line breaks
- Test with shorter tracks first to optimize your workflow

⚙️ **Customization Options:**
- Adjust the GPT segmentation style by modifying the system prompt
- Add language detection or force specific languages in the Whisper settings
- Customize output file naming conventions in the final nodes
- Extend the workflow with additional format exports if needed

## Workflow Components

1. Audio Input: upload interface for MP3 files
2. Whisper Transcribe: OpenAI API call with timestamp extraction
3. Post-Processing: GPT-5-nano segmentation into lyric format
4. Routing Quality Check: decision point for manual review
5. Timestamp Matching: diff and alignment for corrected text
6. Subtitles Preparation: JSON formatting for both output types
7. File Generation: convert to .SRT and .LRC formats
8. Download Nodes: export the final files

## Template Author

Questions or need help with setup?
📧 Email: xciklv@gmail.com
💼 LinkedIn: https://www.linkedin.com/in/vaclavcikl/
by Le Nguyen
This template implements a recursive web crawler inside n8n. Starting from a given URL, it crawls linked pages up to a maximum depth (default: 3), extracts text and links, and returns the collected content via webhook.

## 🚀 How It Works

### 1) Webhook Trigger
Accepts a JSON body with a url field. Example payload:
```json
{ "url": "https://example.com" }
```

### 2) Initialization
- Sets crawl parameters: url, domain, maxDepth = 3, and depth = 0.
- Initializes global static data (pending, visited, queued, pages).

### 3) Recursive Crawling
- Fetches each page (HTTP Request).
- Extracts body text and links (HTML node).
- Cleans and deduplicates links.
- Filters out:
  - External domains (only same-site is followed)
  - Anchors (#), mailto/tel/javascript links
  - Non-HTML files (.pdf, .docx, .xlsx, .pptx)

### 4) Depth Control & Queue
- Tracks visited URLs
- Stops at maxDepth to prevent infinite loops
- Uses SplitInBatches to loop through the queue

### 5) Data Collection
- Saves each crawled page (url, depth, content) into pages[]
- When pending = 0, combines the results

### 6) Output
Responds via the Webhook node with:
- combinedContent (all pages concatenated)
- pages[] (array of individual results)

Large results are chunked when they exceed ~12,000 characters.

## 🛠️ Setup Instructions

1) Import Template — load it from n8n Community Templates.
2) Configure Webhook — open the Webhook node and copy the Test URL (development) or Production URL (after deploy). You'll POST crawl requests to this endpoint.
3) Run a Test — send a POST with JSON:
```bash
curl -X POST https://<your-n8n>/webhook/<id> \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'
```
4) View Response — the crawler returns a JSON object containing combinedContent and pages[].

## ⚙️ Configuration

- **maxDepth** — Default: 3. Adjust in the Init Crawl Params (Set) node.
- **Timeouts** — The HTTP Request node timeout is 5 seconds per request; increase if needed.
- **Filtering Rules**:
  - Only same-domain links are followed (apex and www treated as same-site)
  - Skips anchors, mailto:, tel:, javascript: links
  - Skips document links (.pdf, .docx, .xlsx, .pptx)
  - You can tweak the regex and logic in the Queue & Dedup Links (Code) node; a sketch of that logic follows at the end of this template.

## 📌 Limitations

- No JavaScript rendering (static HTML only)
- No authentication/cookies/session handling
- Large sites can be slow or hit timeouts; chunking mitigates response size

## ✅ Example Use Cases

- Extract text across your site for AI ingestion / embeddings
- SEO/content audits and internal link checks
- Build a lightweight page corpus for downstream processing in n8n

## ⏱️ Estimated Setup Time

~10 minutes (import → set webhook → test request)
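As referenced under Filtering Rules, here is a condensed sketch of the link cleaning and deduplication logic; field names like links and visited are illustrative:

```javascript
// Sketch of the Queue & Dedup Links filtering rules described above.
const SKIP_EXTENSIONS = /\.(pdf|docx|xlsx|pptx)(\?|#|$)/i;
const SKIP_SCHEMES = /^(mailto:|tel:|javascript:)/i;

const sameSite = (link, domain) => {
  try {
    // apex and www are treated as the same site
    const host = new URL(link).hostname.replace(/^www\./, '');
    return host === domain.replace(/^www\./, '');
  } catch {
    return false; // malformed URLs are dropped
  }
};

const { domain, links = [], visited = [] } = $json;

const queue = [...new Set(links)] // deduplicate
  .filter((l) => !l.startsWith('#') && !SKIP_SCHEMES.test(l)) // anchors, mailto/tel/javascript
  .filter((l) => !SKIP_EXTENSIONS.test(l)) // non-HTML documents
  .filter((l) => sameSite(l, domain)) // same-domain only
  .filter((l) => !visited.includes(l)); // skip already-crawled pages

return [{ json: { queue } }];
```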
by Ramdoni
# Track changes and approvals in Excel 365

## 📌 Overview

This workflow monitors an Excel 365 sheet every minute and detects new, updated, and deleted rows using a unique ID column. It compares the current dataset with the previous snapshot and identifies field-level differences. When changes are detected, the workflow filters rows that require approval (Status = "Waiting Approval"), sends structured notifications, and optionally logs every field-level change into an audit sheet (Excel or Google Sheets).

The configuration layer allows you to define the ID column, ignored fields, and audit logging behavior without modifying the comparison logic. This template is suitable for approval tracking, operational monitoring, and lightweight compliance logging.

## How it works

1. Runs every minute using a schedule trigger
2. Reads rows from Excel 365
3. Normalizes and stores a snapshot
4. Compares with the previous state
5. Detects new, updated, and deleted rows
6. Filters rows with "Waiting Approval" status
7. Sends structured notifications
8. Logs changes if audit logging is enabled

## Setup steps

1. Configure Microsoft Excel credentials
2. Ensure your sheet contains a unique ID column
3. Update the Environment Config node
4. (Optional) Configure Google Sheets credentials for audit logging
5. Activate the workflow

## 🚀 Features

### ⏱ Scheduled Monitoring
- Runs automatically every 1 minute
- Near real-time Excel monitoring
- Prevents unnecessary execution when no changes are detected

### 🔍 Row-Level Change Detection
Detects:
- ✅ New rows
- ✏️ Updated rows
- ❌ Deleted rows

Uses a unique ID field per row for accurate tracking.

### 🧠 Field-Level Comparison
- Compares previous vs current values
- Identifies exactly which fields changed
- Outputs structured change data
- Prevents false positives via data normalization

A sketch of this diff logic appears at the end of this template.

### ⚙️ Environment Configuration Layer
A centralized configuration node allows easy customization without modifying the core logic. Configurable options include:
- idField
- ignoreFields
- monitorOnly
- firstRunSilent
- enableAuditLog

No hardcoded logic required.

### 🛑 Approval Validation Layer
- Filters rows where Status = "Waiting Approval"
- Sends notifications only for relevant approval cases
- Prevents unnecessary alerts

### 🔔 Smart Notification System
Sends formatted change notifications including:
- Change type (NEW / UPDATED / DELETED)
- Row ID
- Field-level old → new values

Fully customizable message formatting.

### 📊 Optional Audit Logging
If enabled in the Environment Config:
- Converts each field-level change into structured audit rows
- Appends logs to Excel 365 (Audit Sheet) or Google Sheets (External Log)

Audit log structure:

| Timestamp | ChangeType | RowID | Field | OldValue | NewValue |
|-----------|------------|-------|-------|----------|----------|

Designed for compliance and tracking purposes.

## 📦 Use Cases

- Internal approval tracking
- Financial data monitoring
- Sales pipeline control
- Procurement workflows
- Excel-based compliance systems
- SME automation systems

## 🧩 Requirements

- Microsoft 365 (Excel Online – Business)
- n8n (Cloud or Self-hosted)
- Microsoft credentials configured in n8n
- Telegram Bot (optional)
- Google Sheets credentials for audit logging

## 🔧 Configuration Guide

All system behavior is controlled from the Environment Config node.
Example configuration structure:

```
{
  CONFIG: {
    idField: "ID",
    ignoreFields: ["UpdatedAt", "LastModified"],
    monitorOnly: null,
    firstRunSilent: true,
    enableAuditLog: true
  }
}
```

You can customize:
- Which column acts as the unique ID
- Which fields to ignore
- Which fields to monitor exclusively
- Whether to enable audit logging
- Whether the first run should be silent

## 🟢 First Run Behavior

On first execution:
- The workflow initializes internal snapshot storage
- No mass notification is sent (if firstRunSilent = true)

This prevents false "NEW row" alerts during setup.

## 🏢 Who Is This For?

- Operations teams
- Finance departments
- SMEs using Excel as a core system
- Automation consultants
- Businesses requiring lightweight audit tracking

## 💡 Why This Workflow?

Unlike simple Excel polling workflows, this solution:
- Tracks changes at the field level
- Supports approval-based filtering
- Includes structured audit logging
- Avoids duplicate alerts
- Is fully configurable
- Is designed for production usage

This is not just an Excel notifier — it is a structured Change Tracking & Approval Monitoring System built on n8n.
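As a sketch of the field-level comparison described under Field-Level Comparison above (the row shapes and helper name are illustrative, and the real node also handles new and deleted rows):

```javascript
// Diff one row against its previous snapshot, honoring ignoreFields from the config.
function diffRow(prev, curr, ignoreFields) {
  const changes = [];
  for (const field of Object.keys(curr)) {
    if (ignoreFields.includes(field)) continue;
    // normalize values to strings to avoid false positives like 100 vs "100"
    const oldValue = String(prev[field] ?? '').trim();
    const newValue = String(curr[field] ?? '').trim();
    if (oldValue !== newValue) changes.push({ field, oldValue, newValue });
  }
  return changes;
}

const prev = { ID: 'A-1', Status: 'Draft', Amount: '100' };
const curr = { ID: 'A-1', Status: 'Waiting Approval', Amount: '100' };
diffRow(prev, curr, ['UpdatedAt', 'LastModified']);
// → [{ field: 'Status', oldValue: 'Draft', newValue: 'Waiting Approval' }]
```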
by Mohamed Salama
Let AI agents communicate with your Bubble app automatically. This workflow connects directly with your Bubble Data API.

It is designed for teams building AI tools or copilots that need seamless access to Bubble backend data via natural-language queries.

## How it works

1. Triggered via a webhook from an AI agent using MCP (Model Context Protocol).
2. The agent selects the appropriate data tool (e.g., projects, users, bookings) based on user intent.
3. The workflow queries your Bubble database and returns the result.

Ideal for integrating with ChatGPT, n8n AI Agents, assistants, or autonomous workflows that need real-time access to app data.

## Set up steps

1. Enable access to your Bubble data or backend APIs (as needed).
2. Create a Bubble admin token.
3. Add your Bubble node(s) to your n8n workflow.
4. Add your Bubble admin token.
5. Configure your Bubble node(s).
6. Copy the generated webhook URL from the MCP Server Trigger node and register it with your AI tool (e.g., a LangChain tool loader).
7. (Optional) Adjust the filters in the "Get an Object Details" node to match your dataset needs.

Once connected, your AI agents can automatically retrieve context-aware data from your Bubble app, with no manual lookups required.
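Under the hood, each data tool resolves to a call against Bubble's Data API. Here is a hedged sketch of that request; the app domain, data type, and constraint fields are placeholders for your own app:

```javascript
// Illustrative Bubble Data API query; replace yourapp/projects/status with your own values.
const constraints = JSON.stringify([
  { key: 'status', constraint_type: 'equals', value: 'active' },
]);

const res = await this.helpers.httpRequest({
  url: `https://yourapp.bubbleapps.io/api/1.1/obj/projects?constraints=${encodeURIComponent(constraints)}`,
  headers: { Authorization: 'Bearer YOUR_BUBBLE_ADMIN_TOKEN' }, // use an n8n credential in practice
  json: true,
});

// Bubble wraps results in { response: { results, count, remaining } }
return [{ json: res.response }];
```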
by Rahul Joshi
## 📘 Description

This workflow automates dependency update risk analysis and reporting using Jira, GPT-4o, Slack, and Google Sheets. It continuously monitors Jira for new package or dependency update tickets, uses AI to assess their risk levels (Low, Medium, High), posts structured comments back into Jira, and alerts the DevOps team in Slack — all while logging historical data into Google Sheets for visibility and trend analysis.

This ensures fast, data-driven decisions for dependency upgrades, improved code stability, and reduced security risks — with zero manual triage.

## ⚙️ What This Workflow Does (Step-by-Step)

### 🟢 When Clicking "Execute Workflow"
Manually triggers the dependency risk analysis sequence for immediate review or scheduled monitoring.

### 📋 Fetch All Active Jira Issues
Retrieves all active Jira issues to identify tickets related to dependency or package updates. Provides the complete dataset — including summary, status, and assignee information — for AI-based risk evaluation.

### ✅ Validate Jira Query Response
Verifies that Jira returned valid issue data before proceeding.
- If data exists → continues filtering dependency updates.
- If no data or an API error → logs the failure to Google Sheets.

Prevents the workflow from continuing with empty or broken datasets.

### 🔍 Identify Dependency Update Issues
Filters Jira issues to find only dependency-related tickets (keywords like "update," "bump," "package," or "library"). This ensures only relevant version update tasks are analyzed — filtering out unrelated feature or bug tickets.

### 🏷️ Extract Relevant Issue Metadata
Extracts essential fields such as key, summary, priority, assignee, status, and created date for downstream AI processing. Simplifies the data payload and ensures accurate, structured analysis.

### 📢 Alert DevOps Team in Slack
Immediately notifies the assigned DevOps engineer via Slack DM about any new dependency update issue. Includes formatted details like summary, key, status, priority, and a direct Jira link for quick access. Ensures rapid visibility and faster response to potential risk tickets.

### 🤖 AI-Powered Risk Assessment Analyzer
Uses GPT-4o (Azure OpenAI) to intelligently evaluate each dependency update's risk level and impact summary. Considers factors such as:
- Dependency criticality
- Version change type (major/minor/patch)
- Security or EOL indicators
- Potential breaking changes

Outputs a clean JSON object with these fields:
```json
{
  "risk_level": "Low | Medium | High",
  "impact_summary": "Short human-readable explanation"
}
```
Helps DevOps teams prioritize updates with context.

### 🧠 GPT-4o Language Model Configuration
Configures the AI reasoning engine for precise, context-aware DevOps assessments. Optimized for a consistent technical tone and cost-efficient batch evaluation.

### 📊 Parse AI Response to Structured Data
Safely parses the AI's JSON output, removing markdown artifacts and ensuring structure. Adds the parsed fields — risk_level and impact_summary — back to the Jira context. Includes fail-safes to prevent crashes on malformed AI output (falls back to "Unknown" and "Failed to parse"); see the parsing sketch at the end of this template.

### 💬 Post AI Risk Assessment to Jira Ticket
Automatically posts the AI's analysis as a comment on the Jira issue:
- Displays a 🤖 AI Risk Assessment Report header
- Shows Risk Level and Impact Summary
- Includes a checklist of next steps for developers

Creates a permanent audit trail for each dependency decision inside Jira.
### 📈 Log Dependency Updates to Tracking Dashboard
Appends all analyzed updates into Google Sheets, recording:
- Date
- Jira Key & Summary
- Risk Level & Impact Summary
- Assignee & Status

This builds a historical dependency risk database that supports:
- Trend monitoring
- Security compliance reviews
- Dependency upgrade metrics
- DevOps productivity tracking

### 📊 Log Jira Query Failures to Error Sheet
If the Jira query fails, the workflow automatically logs the error (API/auth/network) into a centralized error sheet for troubleshooting and visibility.

## 🧩 Prerequisites

- Jira Software Cloud API credentials
- Azure OpenAI (GPT-4o) access
- Slack API connection
- Google Sheets OAuth2 credentials

## 💡 Key Benefits

✅ Automated dependency risk assessment
✅ Instant Slack alerts for update visibility
✅ Historical tracking in Google Sheets
✅ Reduced manual triage and faster decision-making
✅ Continuous improvement in release reliability and security

## 👥 Perfect For

- DevOps and SRE teams managing large dependency graphs
- Engineering managers monitoring package updates and risks
- Security/compliance teams tracking vulnerability fix adoption
- Product teams aiming for stable CI/CD pipelines
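The fail-safe parsing step mentioned above can be sketched as follows; the ai_output field name is illustrative:

```javascript
// Strip markdown fences the model may add, then parse; fall back instead of crashing.
const raw = $json.ai_output || '';

let risk_level = 'Unknown';
let impact_summary = 'Failed to parse';
try {
  const cleaned = raw.replace(/```(json)?/gi, '').trim(); // remove ```json fences
  const parsed = JSON.parse(cleaned);
  risk_level = parsed.risk_level ?? risk_level;
  impact_summary = parsed.impact_summary ?? impact_summary;
} catch (e) {
  // keep the fallbacks so downstream nodes still receive a well-formed item
}

return [{ json: { ...$json, risk_level, impact_summary } }];
```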
by David Ashby
Complete MCP server exposing 14 doqs.dev | PDF filling API operations to AI agents.

## ⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add doqs.dev | PDF filling API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

## 🔧 How it Works

This workflow converts the doqs.dev | PDF filling API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.doqs.dev/v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (example below)
• Native Integration: Returns responses directly to the AI agent

## 📋 Available Operations (14 total)

🔧 Designer (7 endpoints)
• GET /designer/templates/: List Templates
• POST /designer/templates/: Create Template
• POST /designer/templates/preview: Preview
• DELETE /designer/templates/{id}: Delete
• GET /designer/templates/{id}: Get Template
• PUT /designer/templates/{id}: Update Template
• POST /designer/templates/{id}/generate: Generate PDF

🔧 Templates (7 endpoints)
• GET /templates: List
• POST /templates: Create
• DELETE /templates/{id}: Delete
• GET /templates/{id}: Get Template
• PUT /templates/{id}: Update
• GET /templates/{id}/file: Get File
• POST /templates/{id}/fill: Fill

## 🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native doqs.dev | PDF filling API responses with the full data structure

Error Handling: Built-in n8n HTTP request error management

## 💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to the MCP endpoints

## ✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
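For example, the Get Template operation's URL field is pre-filled with a $fromAI() expression roughly like this (the description text is illustrative):

```
https://api.doqs.dev/v1/templates/{{ $fromAI('id', 'ID of the template to fetch', 'string') }}
```

When an agent calls the tool, n8n resolves the placeholder from the agent's request, so no manual parameter mapping is needed.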
by Yves Tkaczyk
## Use cases

- Monitor a Google Drive folder, parsing PDF, DOCX, and image files into a destination folder, ready for further processing (e.g. RAG ingestion, translation, etc.)
- Keep a processing log in Google Sheets and send Slack notifications.

## How it works

1. Trigger: watch a Google Drive folder for new and updated files.
2. Create a uniquely named destination folder, copying the input file.
3. Parse the file using Mistral Document, extracting content and handling non-OCRable images separately (see the sketch at the end of this template).
4. Save the data returned by Mistral Document into the destination Google Drive folder (raw JSON file, Markdown files, and images) for further processing.

## How to use

- Google Drive and Google Sheets nodes:
  - Create Google credentials with access to Google Drive and Google Sheets. Read more about Google Credentials.
  - Update all Google Drive and Google Sheets nodes (14 nodes total) to use the credentials.
- Mistral node:
  - Create Mistral Cloud API credentials. Read more about Mistral Cloud Credentials.
  - Update the OCR Document node to use the Mistral Cloud credentials.
- Slack nodes:
  - Create Slack OAuth2 credentials. Read more about Slack OAuth2 credentials.
  - Update the two Slack nodes, Send Success Message and Send Error Message: set the credentials, then select the channel where you want to send the notifications (channels can be different for success and errors).
- Create a Google Sheets spreadsheet following the steps in Google Sheets Configuration. Ensure the spreadsheet can be accessed as Editor by the account used by the Google credentials above.
- Create a directory for input files and a directory for output folders/files. Ensure the directories can be accessed by the account used by the Google credentials.
- Update the File Created, File Updated, and Workflow Configuration nodes following the steps in the green Notes.

## Requirements

- Google account with Google API access
- Mistral Cloud account with access to a Mistral API key
- Slack account with access to a Slack client ID and client secret
- Basic n8n knowledge: understanding of triggers, expressions, and credential management

## Who's it for

Anyone building a data pipeline ingesting files to be OCRed for further processing.

## 🔒 Security

All credentials are stored as n8n credentials. The only information stored in this workflow that could be considered sensitive are the Google Drive directory and Sheet IDs. These directories and the spreadsheet should be secured according to your needs.

## Need Help?

Reach out on LinkedIn or ask in the Forum!
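For reference, the OCR Document node's call corresponds roughly to the request below. This is a hedged sketch: the endpoint, model name, and response shape follow Mistral's OCR API as commonly documented and should be verified against the current Mistral documentation, and $json.fileUrl is an illustrative field:

```javascript
// Hypothetical sketch of a Mistral OCR request (verify endpoint and fields in Mistral's docs).
const res = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://api.mistral.ai/v1/ocr',
  headers: { Authorization: 'Bearer MISTRAL_API_KEY' }, // use the n8n credential in practice
  body: {
    model: 'mistral-ocr-latest',
    document: { type: 'document_url', document_url: $json.fileUrl },
    include_image_base64: true, // embedded images come back for separate handling
  },
  json: true,
});

// Each returned page carries Markdown content plus any extracted images
return res.pages.map((p) => ({ json: { markdown: p.markdown, images: p.images } }));
```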
by Corentin Ribeyre
This template can be used for real-time listening and processing of search results with Icypeas. Be sure to have an active account to use this template.

## How it works

This workflow can be divided into two steps:
1. A Webhook node to link your Icypeas account with your n8n workflow.
2. A Set node to retrieve the relevant information.

## Set up steps

You will need a working Icypeas account to run the workflow, and you will have to paste the production URL provided by the n8n Webhook node.
by Pixril
## Overview

This workflow deploys a fully autonomous "Viral News Agency" inside your n8n instance. Unlike simple auto-posters, this is a comprehensive content production pipeline. It acts as a 24/7 news monitor that scrapes viral stories, rewrites them into educational scripts using GPT-4o, designs professional 10-slide carousels, and publishes them directly to Instagram Business—completely on autopilot.

## Key Features

- **Dual-Engine Architecture:** The unique "Hybrid Core" lets you choose between **Free (Gotenberg/Docker)** or **Paid (APITemplate)** image generation. Switch engines instantly via the Setup Form.
- **Smart RSS Scraping:** Cleans incoming feeds and extracts high-quality "OG" (Open Graph) images to use as dynamic backgrounds.
- **Viral Content Writer:** Uses a specialized AI Agent prompt to write "Hot Takes" and educational hooks, ensuring content is engaging, not just a summary.
- **Auto-Publisher:** Handles the complex Meta API flow (Container > Media Bundle > Publish) to upload multi-slide carousels automatically.

## How it works

1. Monitor: The News Source node watches your chosen RSS feeds (Tech, Sports, Politics, etc.) for breaking stories.
2. Analyze: The AI Analyst (GPT-4o) reads the article, extracts the viral angle, and writes a full 10-slide script with captions and hashtags.
3. Design: The workflow routes data to your chosen engine. It loops through the script 10 times to generate individual slides (Title, Content, Quotes).
4. Publish: The agent uploads the images to Facebook's servers, bundles them into a Carousel Container, and publishes it live to your Instagram feed (see the sketch at the end of this listing).

## Set up steps

Estimated time: 10 minutes

1. Credentials: Add your keys for OpenAI (Intelligence), Google Drive (Storage), and Facebook Graph API (Publishing).
2. Instagram ID: Open the 3 Facebook nodes ("Create Container", "Carousel Bundle", "Publish Carousel") and replace the placeholder ID with your Instagram Business User ID.
3. Image Engine:
   - Option A (Free): Ensure you have a local Gotenberg instance running via Docker (docker run --rm -p 3000:3000 gotenberg/gotenberg:8).
   - Option B (Paid): In the "Generate Image" node, add your APITemplate API Key and Template ID.
4. Run: Use the "SETUP FORM" node to enter your RSS URL and Brand Name, then toggle to "Active"!

## About the Creator

Built by Pixril. We specialize in building advanced, production-ready AI agents for n8n.
- Visit our website: https://www.pixril.com/
- Find more professional workflows in our shop: https://pixril.etsy.com
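To make the publish step concrete, here is a hedged sketch of the three-call Graph API flow the publisher nodes perform. IG_USER_ID, ACCESS_TOKEN, and the input fields are placeholders; verify parameter names against Meta's Instagram content publishing docs:

```javascript
// Carousel publishing: item containers → carousel container → publish.
const IG_USER_ID = 'YOUR_IG_BUSINESS_USER_ID'; // placeholder
const ACCESS_TOKEN = 'YOUR_ACCESS_TOKEN'; // placeholder (use an n8n credential)
const slideUrls = $json.slideUrls; // illustrative: public URLs of the generated slides
const caption = $json.caption; // illustrative

const base = 'https://graph.facebook.com/v19.0';
const post = (path, qs) =>
  this.helpers.httpRequest({ method: 'POST', url: `${base}/${path}`, qs, json: true });

// 1) Create one container per slide
const children = [];
for (const image_url of slideUrls) {
  const { id } = await post(`${IG_USER_ID}/media`, {
    image_url,
    is_carousel_item: true,
    access_token: ACCESS_TOKEN,
  });
  children.push(id);
}

// 2) Bundle the slides into a single carousel container
const bundle = await post(`${IG_USER_ID}/media`, {
  media_type: 'CAROUSEL',
  children: children.join(','),
  caption,
  access_token: ACCESS_TOKEN,
});

// 3) Publish the carousel to the feed
await post(`${IG_USER_ID}/media_publish`, {
  creation_id: bundle.id,
  access_token: ACCESS_TOKEN,
});
return [{ json: { published: true } }];
```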