by Kev
Automatically create and publish ready-to-post social media news updates, all powered by AI. This workflow turns any RSS feed into professional, branded posts, complete with visuals and captions. Use cases include automating news updates, sharing industry insights, or maintaining an active social presence without manual work.

**Good to know**
- Fully automated end-to-end publishing, from RSS feed to social post
- Uses JsonCut for dynamic image composition (backgrounds, text overlays, logos)
- Publishes directly to Instagram (or other channels) via Blotato
- Uses OpenAI GPT-5 for post text and image prompt generation
- A polling mechanism checks job status every 3 seconds
- Setup time: under 10 minutes once credentials are in place

**How it works**
1. The RSS Trigger monitors any RSS feed for new content.
2. OpenAI GPT-5 rewrites the headline and creates a short, social-friendly post caption.
3. An AI image prompt is generated to match the article's topic and mood.
4. JsonCut combines the background, logo, and headline text into a branded image.
5. Once the image is ready, Blotato uploads and publishes the post directly to Instagram (or other connected platforms).

The process runs completely automatically; no human input is required after setup.

**How to use**
1. Import the workflow into your n8n instance.
2. Configure your RSS feed URL(s).
3. Add your JsonCut, Blotato, and OpenAI credentials.
4. Activate the workflow; it will automatically generate and post new content whenever your RSS source updates.

**Requirements**
- A free account at jsoncut.com
- An account at blotato.com (paid service; can be replaced with any social media API or publishing platform)
- API keys for both services:
  - JsonCut API key via app.jsoncut.com
  - Blotato API key via www.blotato.com
- **OpenAI credential** (GPT-5 or compatible model)
- **RSS feed URL** (e.g. from a news site, blog, or press page)

**Setup steps**
1. Sign up for a free account at app.jsoncut.com.
2. If you use Blotato, create an account at blotato.com and generate an API key.
3. In n8n, add:
   - JsonCut API key (HTTP Header Auth, header: x-api-key)
   - Blotato API credential (optional; can be replaced)
   - OpenAI credential for GPT-5
4. Replace the example RSS URL in the RSS Feed Trigger node with your own.
5. Activate the workflow; it will start monitoring, generating, and posting automatically.

**Customising this workflow**
You can easily adjust:
- The image layout and branding (in the "Create JsonCut Job" node)
- The tone or length of social captions (in the "Create Instagram Text" node prompt)
- The publishing platform: replace Blotato with another integration (e.g. Buffer, Hootsuite, or a native social API)
- Posting frequency, via the RSS trigger interval

For advanced customization, check out: JsonCut Documentation, JsonCut Image Generation Examples, Blotato Website, n8n Documentation.
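The 3-second status polling mentioned under "Good to know" boils down to a small loop that re-checks the JsonCut job until it finishes. A minimal Python sketch, with a hypothetical `check_status` callable standing in for the actual JsonCut status request:

```python
import time

def poll_until_done(check_status, interval=3.0, timeout=120.0, _sleep=time.sleep):
    """Call `check_status` every `interval` seconds until the job reports
    'done' or 'failed', or raise after `timeout` seconds."""
    waited = 0.0
    while waited <= timeout:
        status = check_status()
        if status in ("done", "failed"):
            return status
        _sleep(interval)
        waited += interval
    raise TimeoutError("job did not finish in time")

# Simulate a render job that completes on the third check.
statuses = iter(["queued", "rendering", "done"])
result = poll_until_done(lambda: next(statuses), interval=3, _sleep=lambda s: None)
# result == "done"
```

Injecting `_sleep` keeps the sketch testable without real delays; in the workflow, the equivalent is a Wait node between status checks.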
by V3 Code Studio
**How it works**
This workflow provides an API endpoint `/api/v1/get-companies` that retrieves company records directly from your Odoo database. It's built for teams who need to query or export company data, either as structured JSON for integrations or as an Excel (.xlsx) file for reporting.

When a request is made, the workflow:
1. Accepts query parameters (`name`, `response_format`).
2. Validates the `name` input (required for the company search).
3. Fetches all matching companies from Odoo, using a `like` filter for partial name matches.
4. Returns the results as a JSON response or an Excel file, depending on the `response_format` parameter.

This makes it ideal for quickly exporting or syncing company information with other tools.

**Setup steps**
1. Open the Webhook node and note the endpoint `/api/v1/get-companies`.
2. Connect your Odoo API credentials in the Odoo node.
3. Optionally, update the `fieldsList` in the Odoo node to include more company details (VAT, address, etc.).
4. Test using a browser or Postman:
   - `/api/v1/get-companies?name=Tech&response_format=json`
   - `/api/v1/get-companies?name=Tech&response_format=excel`
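The test URLs above can also be built programmatically. A small Python helper, assuming a placeholder base URL for your n8n instance:

```python
from urllib.parse import urlencode

BASE = "https://your-n8n-host"  # placeholder: your n8n instance's base URL

def companies_url(name, response_format="json"):
    """Build the query URL for the get-companies webhook.
    `name` is required; `response_format` is 'json' or 'excel'."""
    if not name:
        raise ValueError("name is required")
    if response_format not in ("json", "excel"):
        raise ValueError("response_format must be 'json' or 'excel'")
    query = urlencode({"name": name, "response_format": response_format})
    return f"{BASE}/api/v1/get-companies?{query}"

print(companies_url("Tech"))
# → https://your-n8n-host/api/v1/get-companies?name=Tech&response_format=json
```

The validation mirrors what the workflow itself does: `name` is mandatory, and unknown `response_format` values are rejected up front.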
by Francisco Rivera
**What this template does**
Connect a Vapi AI voice agent to Google Calendar to capture contact details and auto-book appointments. The agent asks for a name, address, service type, and a preferred time. The workflow checks availability and either proposes times or books the slot. No code needed.

**How it works (node map)**
- **Webhook**: the Production URL is the Vapi Server URL; it receives tool calls from Vapi and returns results.
- **CONFIGURATION (EDIT ME)**: your timezone, work hours, meeting length, buffers, and cadence.
- **Route by Tool Name**: routes Vapi tool calls:
  - checkAvailability → calendar lookup path
  - bookAppointment → create event path
- **Get Calendar Events (EDIT ME)**: reads events for the requested day.
- **Calculate Potential Slots / Filter for Available Slots**: builds conflict-free options with buffers.
- **Respond with Available Times**: returns formatted slots to Vapi.
- **Book Appointment in Calendar (EDIT ME)**: creates the calendar event with the captured details.
- **Booking Confirmation**: returns success back to Vapi.

> Sticky notes in the canvas show exactly what to edit (required by n8n). No API keys are hardcoded; Google uses OAuth credentials.

**Requirements**
- n8n (Cloud or self-hosted)
- Google account with Calendar (OAuth credential in n8n)
- Vapi account + one Assistant

**Setup (5 minutes)**

A) Vapi → n8n connection
1. Open the Webhook node and copy the Production URL.
2. In Vapi → Assistant → Messaging, set Server URL to that Production URL.
3. In Server Messages, enable only toolCalls.

B) Vapi tools (names must match exactly)
Create two Custom Tools in Vapi and attach them to the assistant:

Tool 1: checkAvailability
- Arguments:
  - initialSearchDateTime (string, ISO-8601 with timezone offset, e.g. 2025-09-09T09:00:00-05:00)

Tool 2: bookAppointment
- Arguments:
  - startDateTime (string, ISO-8601 with tz)
  - endDateTime (string, ISO-8601 with tz)
  - clientName (string)
  - propertyAddress (string)
  - serviceType (string)

> The Switch node routes based on the tool name.

C) Configure availability
Open 1. CONFIGURATION (EDIT ME) and set your timezone, work hours, meeting length, buffers, and cadence.

D) Connect Google Calendar
1. Open 2. Get Calendar Events (EDIT ME) → Credentials: select or create a Google Calendar OAuth credential, then choose the calendar to check for availability.
2. Open 3. Book Appointment in Calendar (EDIT ME) → use the same credential and the same calendar to book.

E) Activate & test
1. Toggle the workflow Active.
2. Call your Vapi number (or start a session) and book a test slot.
3. Verify the event appears with the description fields (client, address, service type, call id).

**Customising**
- Change the summary/description format in 3. Book Appointment.
- Add SMS/email confirmations, CRM sync, rescheduling, or analytics as follow-ups (see the sticky note "I'm a note").

**Troubleshooting**
- **No response back to Vapi**: confirm Vapi is set to send toolCalls only and the Server URL matches the Production URL.
- **Switch doesn't route**: tool names must be exactly checkAvailability and bookAppointment.
- **No times returned**: ensure timezone + work hours + cadence generate at least one future slot; confirm the Google credential and calendar selection.
- **Event not created**: use the same Google credential and calendar in both nodes; check OAuth scopes/consent.

**Security & privacy**
Google uses OAuth; credentials live in n8n. No API keys are hardcoded. The webhook receives only the fields needed to check times or book.
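The "Calculate Potential Slots / Filter for Available Slots" step amounts to generating candidate slots from the work hours at the configured cadence and dropping any that collide with a busy calendar event, padded by the buffer. A minimal Python sketch with illustrative numbers (9:00–17:00 day, 30-minute meetings, 10-minute buffers), using minutes since midnight for simplicity:

```python
def available_slots(busy, work_start=9*60, work_end=17*60,
                    length=30, buffer=10, cadence=30):
    """Return (start, end) slots that don't overlap any busy event,
    keeping `buffer` minutes clear on both sides. Times in minutes."""
    slots = []
    start = work_start
    while start + length <= work_end:
        end = start + length
        # A slot conflicts if its buffered window overlaps a busy event.
        clash = any(start - buffer < b_end and end + buffer > b_start
                    for b_start, b_end in busy)
        if not clash:
            slots.append((start, end))
        start += cadence
    return slots

# One meeting 10:00-11:00: 9:00 fits; 9:30 through 11:00 are blocked
# by the event plus buffer; 11:30 onward is free again.
print(available_slots([(600, 660)])[:3])
# → [(540, 570), (690, 720), (720, 750)]
```

The real node works on ISO-8601 datetimes in your configured timezone, but the overlap test (buffered slot window vs. busy interval) is the same idea.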
by Jonathan
This workflow creates a project in Clockify that any user can track time against. Syncro should be set up with a webhook via Notification Set for Ticket - created (for anyone).

> This workflow is part of an MSP collection; the original can be found here: https://github.com/bionemesis/n8nsyncro
by Jonathan
This workflow uses a WooCommerce trigger that runs when a new product is added. It then posts the product to Slack, so your team is always kept up to date with new products. To use this workflow, you will need to set the credentials for the WooCommerce and Slack nodes. You will also need to pick a channel to post the message to.
by Tom
This workflow sends out email notifications when a new file has been uploaded to Google Drive. The workflow uses two nodes:
- **Google Drive Trigger**: triggers the workflow whenever a new file is uploaded to a given folder.
- **Send Email**: sends out the email using data from the previous Google Drive Trigger node.
by Tom
This easy-to-extend workflow automatically serves a static HTML page when a URL is accessed in a browser.

**Prerequisites**
- Basic knowledge of HTML

**Nodes**
- **Webhook** node: triggers the workflow on an incoming request.
- **Respond to Webhook** node: serves the HTML page in response to the webhook.
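Conceptually, the two nodes behave like a tiny HTTP handler that answers a matching request with a fixed HTML body and a `text/html` content type. A Python sketch of that behavior (the webhook path is a hypothetical example, not n8n's internals):

```python
PAGE = """<!DOCTYPE html>
<html><head><title>Hello</title></head>
<body><h1>Served by the webhook</h1></body></html>"""

def handle_request(path):
    """Webhook node: accept the incoming request.
    Respond to Webhook node: return status, headers, and the static HTML."""
    if path != "/webhook/my-page":  # hypothetical webhook path
        return 404, {"Content-Type": "text/plain"}, "Not found"
    return 200, {"Content-Type": "text/html; charset=utf-8"}, PAGE

status, headers, body = handle_request("/webhook/my-page")
# status == 200; the browser renders `body` because of the text/html header
```

The key detail for the real Respond to Webhook node is the same one this sketch highlights: without the `text/html` content type, browsers show the markup as plain text instead of rendering it.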
by Harshil Agrawal
This workflow sends a message on Slack when a site deployment fails.
- **Netlify Trigger** node: triggers the workflow when the site deployment fails.
- **Slack** node: sends a message on Slack alerting the team about the failed deployment.

If you want to send the message to a different platform, replace the Slack node with the node for that platform.
by Vytenis
Fully automate deep research from start to finish: scrape Google Search results, select relevant sources, scrape and analyze each source in parallel, and generate a comprehensive research report.

**Who is this for?**
This workflow is for anyone who needs to research topics quickly and thoroughly: content creators, marketers, product managers, researchers, journalists, students, or anyone seeking deep insights without spending hours browsing websites. If you find yourself opening dozens of browser tabs to piece together information, this template automates that entire process and delivers comprehensive reports in minutes.

**How it works**
1. Submit your research questions through n8n's chat interface (include as much context as you need).
2. AI generates strategic search queries to explore different angles of your topic (customize the number of queries as needed).
3. Oxylabs scrapes Google Search results for each query (up to 50 results per query).
4. AI evaluates the results and selects the most relevant and authoritative sources.
5. Content extraction runs in parallel as Oxylabs scrapes each source and AI extracts key insights.
6. Summaries are collected in n8n's data table for final processing.
7. AI synthesizes everything into a comprehensive research report with actionable insights.

See the complete step-by-step tutorial on the n8n blog.

**Requirements**
- **Oxylabs AI Studio API key** (a free API key comes with 1000 credits)
- **OpenAI API key** (or alternatives such as Claude, Gemini, or local Ollama LLMs)

**Setup**
1. Install Oxylabs AI Studio as shown on this page.
2. Set your API keys: Oxylabs AI Studio and OpenAI.
3. Create a data table.
4. Select the table name in each data table node.
5. Create a sub-workflow:
   - Select the 3 nodes (Scrape content, Summarize content, Insert row).
   - Right-click and select "Convert 3 nodes to sub-workflow".
6. Edit the sub-workflow settings for parallel execution:
   - Mode: Run once for each item
   - Options → Add Option → disable "Wait For Sub-Workflow Completion"

Once you finish all these setup steps, you can run the workflow through n8n's chat interface. For example, send the following message:

> I'm planning to build a wooden summer house and would appreciate guidance on the process. What are the key considerations I should keep in mind from planning through completion? I'm particularly interested in the recommended construction steps and which materials will ensure long-term durability and quality.

**Customize this workflow for your needs**
Feel free to modify the workflow to fit the scale and final output your project requires:
- To reuse the workflow, clear the data table after the final analysis by adding a Data table node with the Delete row(s) action.
- **Scale up** by processing more search queries, increasing results per query beyond 10, and selecting additional relevant URLs.
- **Enable JavaScript rendering** in the Oxylabs AI Studio (Scraper) node to ensure all content is gathered.
- **Adjust the system prompts** in the LLM nodes to fit your specific research goals.
- **Explore other AI Studio apps**, such as Browser Agent for interactive browser control or Crawler for mapping entire websites.
- **Connect other nodes**, such as Google Sheets, Notion, Airtable, or webhooks, to route results where you need them.
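The sub-workflow configured in steps 5–6 (run once per item, without waiting for completion) is effectively a parallel map over the selected URLs. A rough Python analogue, with stub functions standing in for the Oxylabs scraper and the LLM summarization nodes:

```python
from concurrent.futures import ThreadPoolExecutor

def scrape(url):          # stand-in for the Oxylabs AI Studio scraper call
    return f"content of {url}"

def summarize(text):      # stand-in for the LLM summarization node
    return text.upper()

def process(url):
    """The 3-node sub-workflow, run once per item."""
    return {"url": url, "summary": summarize(scrape(url))}

urls = ["https://a.example", "https://b.example", "https://c.example"]
with ThreadPoolExecutor(max_workers=8) as pool:
    rows = list(pool.map(process, urls))   # rows → the n8n data table

print(rows[0]["summary"])  # → CONTENT OF HTTPS://A.EXAMPLE
```

Disabling "Wait For Sub-Workflow Completion" is what makes the n8n version behave like the thread pool here: each URL is dispatched immediately instead of being processed one at a time.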
by Milan Vasarhelyi - SmoothWork
**Video Introduction**
Want to automate your inbox or need a custom workflow? 📞 Book a Call | 💬 DM me on LinkedIn

**Overview**
This workflow automates sending personalized SMS messages directly from a Google Sheet using Twilio. Simply update a row's status to "To send" and the workflow automatically sends the text message, then updates the status to "Success" or "Error" based on the delivery result. Perfect for event reminders, bulk notifications, appointment confirmations, or any scenario where you need to send customized messages to multiple recipients.

**Key Features**
- **Simple trigger mechanism**: change the status column to "To send" to queue messages.
- **Personalization support**: use [First Name] and [Last Name] placeholders in message templates.
- **Automatic status tracking**: the workflow updates your spreadsheet with delivery results.
- **Error handling**: failed deliveries are clearly marked, making it easy to identify issues such as invalid phone numbers.
- **Runs every minute**: the workflow polls your sheet continuously while active.

**Setup Instructions**

Step 1: Copy the Template Spreadsheet
Make a copy of the Google Sheets template by going to File → Make a copy. You must use your own copy so the workflow has permission to update status values.

Step 2: Connect Your Accounts
- Google Sheets: add your Google account credentials to the 'Monitor Google Sheet for SMS Queue' trigger node.
- Twilio: sign up for a free Twilio account (a trial works for testing). From your Twilio dashboard, get your Account SID, Auth Token, and Twilio phone number, then add these credentials to the 'Send SMS via Twilio' node.

Step 3: Configure the Workflow
In the Config node, update:
- sheet_url: paste the URL of your copied Google Sheet.
- from_number: enter your Twilio phone number (include the country code, e.g. +1234567890).

Step 4: Activate and Test
Activate the workflow using the toggle in the top right corner. Add a row to your sheet with the required information (ID, First Name, Phone Number, Message Template) and set the Status to "To send". Within one minute, the workflow will process the message and update the status accordingly.
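The [First Name]/[Last Name] personalization described above amounts to a simple placeholder substitution against the sheet row before the Twilio call. A minimal sketch:

```python
def personalize(template, row):
    """Replace the supported placeholders with values from the sheet row;
    missing columns fall back to an empty string."""
    return (template
            .replace("[First Name]", row.get("First Name", ""))
            .replace("[Last Name]", row.get("Last Name", "")))

row = {"First Name": "Ada", "Last Name": "Lovelace",
       "Phone Number": "+15551234567", "Status": "To send"}
msg = personalize("Hi [First Name], your appointment is tomorrow at 10:00.", row)
print(msg)  # → Hi Ada, your appointment is tomorrow at 10:00.
```

The resulting string is what gets passed as the message body to the 'Send SMS via Twilio' node, alongside the row's phone number.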
by Shahrear
Automatically process construction blueprints into structured Google Sheets entries with VLM extraction.

**What this workflow does**
- Monitors Google Drive for new blueprints in a target folder
- Downloads the file inside n8n for processing
- Sends the file to VLM Run for VLM analysis
- Fetches details from the construction.blueprint domain as JSON
- Appends normalized fields to a Google Sheet as a new row

**Setup**
Prerequisites: Google account, VLM Run API credentials, Google Sheets access, n8n.

Install the verified VLM Run node by searching for VLM Run in the node list, then click Install. Once installed, you can start using it in your workflows.

Quick setup:
1. Create the Drive folder you want to watch and copy its Folder ID.
2. Create a Google Sheet with headers such as: timestamp, file_name, file_id, mime_type, size_bytes, uploader_email, document_type, document_number, issue_date, author_name, drawing_title_numbers, revision_history, job_name, address, drawing_number, revision, drawn_by, checked_by, scale_information, agency_name, document_title, blueprint_id, blueprint_status, blueprint_owner, blueprint_url.
3. Configure Google Drive OAuth2 for the trigger and download nodes.
4. Add VLM Run API credentials from https://app.vlm.run/dashboard to the VLM Run node.
5. Configure Google Sheets OAuth2 and set the Spreadsheet ID and target sheet tab.
6. Test by uploading a sample file to the watched Drive folder, then activate the workflow.

**Perfect for**
- Converting uploaded construction blueprint documents into clean text
- Organizing extracted blueprint details into structured sheets
- Quickly accessing key attributes from technical files
- Maintaining a centralized archive of blueprint-to-text conversions

**Key Benefits**
- **End-to-end automation** from Drive upload to structured Sheet entry
- **Accurate text extraction** from construction blueprint documents
- **Organized attribute mapping** for consistent records
- **Searchable archives** directly in Google Sheets
- **Hands-free processing** after setup

**How to customize**
Extend by adding:
- Version control that links revisions of the same drawing and highlights superseded rows
- Confidence scores per extracted field, with threshold-based routing to manual or AI review
- An auto-generated, human-readable summary column for quick scanning of blueprint details
- Splitting of large multi-sheet PDFs into per-drawing rows with individual attributes
- Cross-system sync to Procore, Autodesk Construction Cloud, or BIM 360 for project-wide visibility
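The "appends normalized fields" step is essentially flattening the extracted JSON into the sheet's header order, with blanks for any field the extraction didn't find. A sketch with a shortened header list for brevity:

```python
HEADERS = ["timestamp", "file_name", "drawing_number", "revision", "drawn_by"]

def to_row(extracted, headers=HEADERS):
    """Map an extraction dict onto the sheet columns, '' for absent fields."""
    return [str(extracted.get(h, "")) for h in headers]

blueprint = {"timestamp": "2025-01-15T09:30:00Z", "file_name": "plan-A.pdf",
             "drawing_number": "A-101", "drawn_by": "JT"}
print(to_row(blueprint))
# → ['2025-01-15T09:30:00Z', 'plan-A.pdf', 'A-101', '', 'JT']
```

Keeping the column order in one place (the header list) is what makes every appended row line up with the sheet, even when individual blueprints are missing fields such as `revision` here.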
by Shahrear
Automatically process healthcare claims into structured Google Sheets entries with VLM Run extraction.

**What this workflow does**
- Monitors Google Drive for new files in a target folder
- Downloads the file inside n8n for processing
- Sends the file to VLM Run for AI transcription or analysis
- Fetches extra details from the healthcare.claims-processing domain as JSON
- Appends normalized fields to a Google Sheet as a new row

**Setup**
Prerequisites: Google account, VLM Run API credentials, Google Sheets access, n8n.

Install the verified VLM Run node by searching for VLM Run in the node list, then click Install. Once installed, you can start using it in your workflows.

Quick setup:
1. Create the Drive folder you want to watch and copy its Folder ID.
2. Create a Google Sheet with headers such as: timestamp, file_name, file_id, mime_type, size_bytes, uploader_email, form_type, carrier_name, patient_name, patient_birth_date, patient_sex, patient_address, insurance_type, insurance_id, insured_name, total_charge, amount_due, amount_paid, hospitalization_from, hospitalization_to, referring_physician_name, processing_notes, and other claim fields as needed.
3. Configure Google Drive OAuth2 for the trigger and download nodes.
4. Add VLM Run API credentials from https://app.vlm.run/dashboard to the VLM Run node.
5. Configure Google Sheets OAuth2 and set the Spreadsheet ID and target sheet tab.
6. Test by uploading a sample file to the watched Drive folder, then activate the workflow.

**Perfect for**
- Centralized intake of healthcare claim documents with instant AI summaries
- Claims and operations teams collecting structured claim insights
- Customer support attachments that need quick triage to a Sheet
- Compliance and audit logs for claim documents

**Key Benefits**
- End-to-end automation from Drive to Sheets
- Accurate AI output via VLM Run, with optional timestamps
- Domain enrichment from healthcare.claims-processing JSON
- Clean, searchable logs in Google Sheets
- No manual steps after activation

**How to customize**
Extend by adding:
- OCR tuning and field validation for claim forms
- Per-type routing for PDFs, images, or scanned forms
- Slack notifications on each new Sheet append
- Keyword extraction and auto-tagging for claim categories
- An error branch that records failures to a second Sheet
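The suggested error branch (recording failures to a second Sheet) can be sketched as a try/except around each item, routing successes and failures to separate row lists. The `parse_claim` stub here stands in for the VLM Run extraction step:

```python
def parse_claim(doc):
    """Stand-in for the VLM Run extraction; fails on incomplete documents."""
    if "total_charge" not in doc:
        raise ValueError("missing total_charge")
    return {"patient_name": doc["patient_name"],
            "total_charge": doc["total_charge"]}

def process_all(docs):
    ok_rows, error_rows = [], []          # main Sheet vs. second (error) Sheet
    for doc in docs:
        try:
            ok_rows.append(parse_claim(doc))
        except Exception as exc:
            error_rows.append({"file": doc.get("file_name"), "error": str(exc)})
    return ok_rows, error_rows

docs = [{"file_name": "a.pdf", "patient_name": "J. Doe", "total_charge": 120.0},
        {"file_name": "b.pdf", "patient_name": "M. Roe"}]
ok, errs = process_all(docs)
print(len(ok), len(errs))  # → 1 1
```

In n8n terms, the except branch corresponds to the node's error output wired to a second Google Sheets append node, so a bad document never stops the rest of the batch.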