by Piotr Sikora
## Who's it for

This workflow is perfect for content managers, SEO specialists, and website owners who want to easily analyze their WordPress content structure. It automatically fetches posts, categories, and tags from a WordPress site and exports them into a Google Sheet for further review or optimization.

## What it does

This automation connects to the WordPress REST API, collects data about posts, categories, and tags, and maps the category and tag names directly into each post. It then appends all this enriched data to a Google Sheet, providing a quick, clean way to audit your site's content and taxonomy structure.

## How it works

1. **Form trigger:** Start the workflow by submitting a form with your website URL and the number of posts to analyze.
2. **Fetch WordPress data:** The workflow sends three API requests to collect posts, categories, and tags.
3. **Merge data:** It combines all the data into one stream using the Merge node.
4. **Code transformation:** A Code node replaces category and tag IDs with their actual names (see the sketch below).
5. **Google Sheets export:** Posts are appended to a Google Sheet with the following columns: URL, Title, Categories, Tags.
6. **Completion form:** Once the list is created, you'll get a confirmation message and a link to your sheet.

If the WordPress API isn't available, the workflow automatically displays an error message to help you troubleshoot.

## Requirements

- A WordPress site with the REST API enabled (`/wp-json/wp/v2/`).
- A Google account connected to n8n with access to Google Sheets.
- A Google Sheet containing the columns: URL, Title, Categories, Tags.

## How to set up

1. Import this workflow into n8n.
2. Connect your Google Sheets account under credentials.
3. Make sure your WordPress site's API is publicly accessible.
4. Adjust the post limit (`per_page`) in the form node if needed.
5. Run the workflow and check your Google Sheet for results.

## How to customize

- Add additional WordPress endpoints (e.g., authors, comments) by duplicating and modifying HTTP Request nodes.
- Replace Google Sheets with another integration (like Airtable or Notion).
- Extend the Code node to include SEO metadata such as meta descriptions or featured images.
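For illustration, the ID-to-name mapping in the Code node could look roughly like this. A minimal sketch, assuming upstream nodes named "Fetch Posts", "Fetch Categories", and "Fetch Tags" (illustrative names, not necessarily the template's) that return the raw WordPress REST objects:

```javascript
// n8n Code node (Run Once for All Items) - illustrative sketch only.
// Node names below are assumptions; point them at your actual HTTP Request nodes.
const posts = $('Fetch Posts').all().map((i) => i.json);
const categories = $('Fetch Categories').all().map((i) => i.json);
const tags = $('Fetch Tags').all().map((i) => i.json);

// Build id -> name lookup tables once, then enrich every post.
const catById = Object.fromEntries(categories.map((c) => [c.id, c.name]));
const tagById = Object.fromEntries(tags.map((t) => [t.id, t.name]));

return posts.map((post) => ({
  json: {
    URL: post.link,
    Title: post.title?.rendered ?? '',
    Categories: (post.categories || []).map((id) => catById[id]).filter(Boolean).join(', '),
    Tags: (post.tags || []).map((id) => tagById[id]).filter(Boolean).join(', '),
  },
}));
```

The output shape matches the four sheet columns, so the Google Sheets node can map the fields one-to-one.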
by Tomohiro Goto
## 🧠 How it works

This workflow automatically transcribes and translates voice messages from Telegram to Slack, enabling seamless communication between Japanese and English speakers.

In our real-world use case, our distributed team often sends short voice updates on Telegram, but most discussion happens on Slack. Before this workflow, we constantly asked:

- "Can someone write a summary of that voice message?"
- "I can't understand what was said. Is there a transcript?"
- "Can we translate this audio for our English-speaking teammates?"

This workflow fixes that problem without changing anyone's communication habits. Built with n8n, OpenAI Whisper, and GPT-4o-mini, it automatically:

1. Detects when a voice message is posted on Telegram
2. Downloads and transcribes it via Whisper
3. Translates the text with GPT-4o-mini
4. Posts the result in Slack, with flags 🇯🇵→🇺🇸 and username attribution

## ⚙️ Features

- 🎧 Voice-to-text transcription using OpenAI Whisper
- 🌐 Automatic JA ↔ EN detection and translation via GPT-4o-mini
- 💬 Clean Slack message formatting with flags, username, and original text
- 🔧 Easy to customize: adjust target languages, tone, or message style
- ⚡ Typical end-to-end time: under 10 seconds for short audio clips

## 💼 Use Cases

- **Global teams** – Send quick voice memos in Telegram and share readable translations in Slack
- **Project coordination** – Record updates while commuting and post bilingual notes automatically
- **Remote check-ins** – Replace daily written reports with spoken updates
- **Cross-language collaboration** – Let English and Japanese teammates stay perfectly synced

## 💡 Perfect for

- **Bilingual creators and managers** working across Japan and Southeast Asia
- **AI automation enthusiasts** who love connecting voice and chat platforms
- **Teams using Telegram for fast communication** and Slack for structured workspaces

## 🧩 Notes

- Requires three credentials: `TELEGRAM_BOT_TOKEN`, `OPENAI_API_KEY_HEADER`, `SLACK_BOT_TOKEN_HEADER`
- Slack scopes: `chat:write`, `files:write`, `channels:history`
- You can change the translation direction or add languages in the "Detect Language" → "Translate (OpenAI)" nodes.
- Keep audio files under 25 MB for Whisper processing.
- Always export your workflow with credentials OFF before sharing or publishing.

A sketch of the Slack formatting step follows below.

✨ Powered by OpenAI Whisper × GPT-4o-mini × n8n × Telegram Bot API × Slack API
A complete multilingual voice-to-text bridge, connecting speech, translation, and collaboration across platforms. 🌍
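As a rough illustration of that formatting step, a Code node could assemble the Slack message like this. A minimal sketch, assuming the upstream transcription and translation steps set `transcript`, `translation`, `sourceLang`, and `username` on the item (field names are assumptions, not the template's actual outputs):

```javascript
// n8n Code node - hedged sketch of the Slack message formatting.
const { transcript, translation, sourceLang, username } = $json;

// Pick the flag pair based on the detected direction (JA -> EN or EN -> JA).
const flags = sourceLang === 'ja' ? '🇯🇵→🇺🇸' : '🇺🇸→🇯🇵';

return [{
  json: {
    text: [
      `${flags} Voice message from *${username}*`,
      `> ${translation}`,
      `_Original:_ ${transcript}`,
    ].join('\n'),
  },
}];
```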
by WeblineIndia
## Webhook from Payment Provider → Jira Finance Ticket → Slack Invoice Follow-up Automation

This workflow automates failed subscription renewal processing by validating webhook data, using AI to analyze urgency and churn risk, creating a Jira Finance task, and notifying the finance team via Slack. If required fields are missing, it sends an error alert for manual review instead.

## ⚡ Quick Implementation Steps (Start Using in 60 Seconds)

1. Import the workflow JSON into n8n.
2. Add Jira & Slack credentials.
3. Configure the webhook URL `/payment-failed-renewal` in your payment provider.
4. Test with:

```json
{
  "customerId": "C-101",
  "customerEmail": "user@example.com",
  "subscriptionId": "S-500",
  "amount": 39.99
}
```

5. Activate the workflow.

## What It Does

This automation connects your payment system with your financial operations. When a subscription renewal fails, the payment provider sends a webhook. The workflow validates the fields, uses OpenAI to analyze the payment failure reason (determining urgency & churn risk), routes high-value failures to high priority, creates a Jira task with an AI-drafted recovery email, and alerts the finance team on Slack.

If required data is missing, the workflow prevents incomplete Jira tickets by routing the event to an error handler and sending a detailed Slack alert listing all missing fields and the full payload for manual inspection.

## Who's It For

- Finance & billing departments
- SaaS companies with recurring billing
- Teams using Jira for billing operations
- Slack-based financial support teams
- Companies wanting automated revenue recovery workflows

## Requirements to Use This Workflow

- n8n instance
- OpenAI API key (or compatible LLM credential)
- Jira Software account with permissions for the FIN project
- Slack bot token with channel posting rights
- Payment provider that supports POST JSON webhooks
- Webhook configured to: `https://YOUR-N8N-URL/webhook/payment-failed-renewal`

## How It Works & How To Set Up

### Step-by-Step Flow

1. The Webhook receives the payment failure payload.
2. A validation node checks the required fields: `customerId`, `customerEmail`, `subscriptionId`, `amount` (see the validation sketch after this section).
3. **AI Analysis:** OpenAI analyzes the failure reason, sets urgency, and drafts a recovery email.
4. **Logic:** A Switch node routes high-value failures (>$500) to 'High' priority.
5. A Jira Finance task is created (with the AI draft).
6. A Slack message is sent (with the churn risk score).

### Setup Steps

**Step 1: Webhook Setup**
- Method: POST
- Path: `payment-failed-renewal`

**Step 2: Jira Setup**
- Select Jira credentials in the Create Jira Finance Ticket node.
- Ensure: Project: FIN, Issue type: Task.

**Step 3: Slack Setup**
- Add Slack credentials to both Slack nodes.
- Select the finance alert channel.

**Step 4: OpenAI Setup**
- Add OpenAI credentials in the AI Analysis node.

**Step 5: Test**

```json
{
  "customerId": "CUST-001",
  "customerEmail": "billing@example.com",
  "subscriptionId": "SUB-1001",
  "amount": 19.99
}
```

**Step 6: Activate**
- Enable the workflow.

## How To Customize Nodes

**Webhook**
- Add Basic Auth
- Add token-based security
- Add JSON schema validation

**Validate Payload** (enhance with):
- Email format validation
- Numeric validation for amount
- Auto-fallback values

**Jira Node** (customize):
- Ticket summary structure
- Labels (billing-recovery, urgent, etc.)
- Custom fields
- Issue type or project

**Slack Nodes** (enhance):
- Mentions: @finance-team
- Threads instead of channel posts
- Rich blocks, buttons, or attachments

## Add-ons (Optional Enhancements)

- Automated email to the customer for payment recovery
- Retry count–based escalation (e.g., retry ≥ 3 → escalate to manager)
- Log data to Airtable / Google Sheets
- Sync events into a CRM (HubSpot, Salesforce, Zoho)
- Notify Sales for high-value customer failures

## Use Case Examples

- Stripe renewal payment fails → create a Jira task → Slack finance alert.
- Chargebee retry attempts exhausted → notify the billing team immediately.
- Declined credit card → Jira ticket with the failure reason.
- Razorpay/PayPal renewal failure → automated follow-up.
- Webhook missing data → Slack error alert ensures nothing is silently ignored.

## Troubleshooting Guide

| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| Webhook not triggering | Wrong URL / method | Use POST + the correct endpoint |
| Jira ticket missing | No permissions or invalid payload | Check Jira permissions + required fields |
| Slack shows undefined values | Missing fields in payload | Confirm the payload structure |
| Error alert triggered incorrectly | Field name mismatch | Match exact names: customerId, customerEmail, subscriptionId, amount |
| Payment provider not sending events | Firewall/CDN blocking | Whitelist the n8n webhook URL |
| Workflow silent | Not activated | Turn the workflow ON |

## Need Help?

If you want help customizing this workflow or extending it into a complete revenue recovery automation suite, WeblineIndia can support you with:

- Jira & Slack automation pipelines
- Payment provider webhook integrations
- Finance workflow optimization
- AI-based billing insights
- End-to-end automation solutions

Reach out anytime for expert implementation or enhancements.
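As referenced in the step-by-step flow above, the payload validation could be sketched like this in an n8n Code node. The four field names match the template; the output shape feeding the error branch is an assumption:

```javascript
// n8n Code node - hedged validation sketch, not the template's exact node.
const required = ['customerId', 'customerEmail', 'subscriptionId', 'amount'];
const payload = $json.body ?? $json; // webhook data usually arrives under `body`

const missing = required.filter(
  (field) => payload[field] === undefined || payload[field] === null || payload[field] === ''
);

// A downstream IF/Switch node can branch on `isValid`: valid events go on to
// AI analysis and Jira, invalid ones to the Slack error alert with the full
// payload attached for manual inspection.
return [{
  json: {
    isValid: missing.length === 0,
    missingFields: missing,
    payload,
  },
}];
```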
by Reinhard Schmidbauer
## Overview

This template automatically exports Meta (Facebook) Ads campaign performance into Google Sheets, both daily and for historical backfills. It's ideal for performance marketers, agencies, and analytics teams who want a reliable data pipeline from Meta Ads into their reporting stack.

## What this workflow does

- Runs a daily cron job to pull yesterday's campaign-level performance from the Meta Ads Insights API.
- Flattens the API response and calculates key KPIs like CPL, CPA, ROAS, CTR, CPC, CPM, frequency, and more (see the sketch after this section).
- Appends one row per campaign per day to a Google Sheet (for dashboards and further analysis).
- Provides a separate Manual Backfill section to import historical data using a time_range parameter (e.g., the last 12–24 months).

## Use cases

- Build Looker Studio / Power BI dashboards on top of a clean, daily Meta Ads dataset.
- Track ROAS, CPL, CPA, CTR, and frequency trends over time.
- Combine campaign data with CRM or ecommerce data in the same spreadsheet.
- Quickly backfill past performance when onboarding a new Meta Ads account.

## How it works

**Daily Incremental Flow**

1. A Schedule Trigger runs every day at 05:00.
2. The Set config node defines the ad account, date preset (yesterday), and Google Sheet details.
3. The Meta Insights node calls the Facebook Graph insights edge at level=campaign.
4. The Code node flattens the data and derives CPL, CPA, ROAS, and other KPIs.
5. The Google Sheets node appends the rows to your Meta_Daily_Data sheet.

**Manual Backfill Flow**

1. A Manual Trigger lets you run the flow on demand.
2. The Set backfill config node defines backfillSince and backfillUntil.
3. The Meta Insights (time_range) node fetches performance for that historical range.
4. The same transform logic is applied, and rows are appended to the same sheet.

## Prerequisites

- A Meta Business account with a system user and a long-lived access token with ads_read / read_insights.
- A Google Sheet with a header row that matches the mapped column names.
- n8n credentials for the Facebook Graph API and Google Sheets OAuth2.

## Setup steps

1. Import this template into your n8n instance.
2. Open the Set config and Set backfill config nodes:
   - Set your adAccountId (e.g., act_123456789012345).
   - Set your sheetId (Google Sheet ID) and sheet name (e.g., Meta_Daily_Data).
3. Configure your Facebook Graph API and Google Sheets credentials in n8n.
4. (Optional) Run the Manual Backfill section for your desired historical ranges (e.g., per quarter).
5. Enable the workflow so the Daily Incremental section runs automatically.

## Customization

- Change level from campaign to adset or ad if you need more granular reporting.
- Add breakdowns (e.g., publisher_platform, platform_position) to split by platform and placement.
- Extend the transform code with additional KPIs or dimensions that match your reporting needs.
- Use a separate sheet for raw data and build dashboards on top of a cleaned or pivoted view.

## Consulting & support

If you need help with:

- **E-Commerce Strategy & Development** (Shopify, Shopware 6, Magento 2, SAP Commerce Cloud, etc.)
- **Growth & Performance Marketing** (Google / Meta / Microsoft Ads, etc.)
- **Data & Analytics Setups** (tracking, dashboards, attribution, GDPR, etc.)

please reach out to Serendipity Technologies: 👉 https://www.serendipity.at

We can help you turn this workflow into a full analytics stack and reporting system tailored to your business.
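To make the transform step concrete, the KPI math could look roughly like this. A simplified sketch, assuming the Insights response has already been flattened to one object per campaign per day and that lead/purchase counts were pre-extracted from the `actions` array (the template's actual code may differ):

```javascript
// n8n Code node - illustrative KPI derivation, one row per campaign per day.
return $input.all().map(({ json: row }) => {
  const spend = Number(row.spend) || 0;
  const impressions = Number(row.impressions) || 0;
  const clicks = Number(row.clicks) || 0;
  const leads = Number(row.leads) || 0;            // assumed pre-extracted from `actions`
  const purchases = Number(row.purchases) || 0;    // assumed pre-extracted from `actions`
  const revenue = Number(row.purchase_value) || 0; // assumed from `action_values`

  return {
    json: {
      date: row.date_start,
      campaign: row.campaign_name,
      spend,
      cpl: leads ? spend / leads : null,
      cpa: purchases ? spend / purchases : null,
      roas: spend ? revenue / spend : null,
      ctr: impressions ? (clicks / impressions) * 100 : null, // percent
      cpc: clicks ? spend / clicks : null,
      cpm: impressions ? (spend / impressions) * 1000 : null,
      frequency: Number(row.frequency) || null,
    },
  };
});
```

Guarding every division avoids `Infinity`/`NaN` rows in the sheet on days with zero delivery.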
by Atta
## What it does

Instead of manually checking separate apps for your calendar, weather, and news each morning, this workflow consolidates the most important information into a single, convenient audio briefing. The "Good Morning Podcast" is designed to be a 3-minute summary of your day ahead, delivered directly to you. It's multilingual and customizable, allowing you to start your day informed and efficiently.

## How it works

The workflow executes three parallel branches before merging the data to generate the final audio file.

1. **Weather Summary:** It starts by taking a user-provided city and fetching the current 15-hour forecast from OpenWeatherMap. It formats this information into a concise weather report.
2. **Calendar Summary:** It securely connects to your Google Calendar to retrieve all of today's scheduled meetings and events. It then formats the schedule into a clear, readable summary.
3. **News Summary:** It connects to NewsAPI to perform two tasks: it fetches the top general headlines and also searches for articles based on user-defined keywords (e.g., "AI", "automation", "space exploration"). The collected headlines are then summarized using a Google Gemini node to create a brief news digest.
4. **Audio Generation and Delivery:** All three text summaries (weather, calendar, and news) are merged into a single script (a sketch of this step appears at the end of this section). The workflow uses Google's Text-to-Speech (TTS) to generate the raw multi-speaker audio. A dedicated FFmpeg node then processes and converts this audio into the final MP3 format. The completed podcast is then sent directly to you via a Telegram bot.

## Setup Instructions

To get this workflow running, you will need to configure credentials for each of the external services and set your initial parameters.

⚠️ **Important Prerequisite**

- **Install FFmpeg:** The workflow requires the FFmpeg software package to be installed on the machine running your n8n instance (local or server). Please ensure it is installed and accessible in your system's PATH before running this workflow.

**Required Credentials**

- **OpenWeatherMap:** Sign up for a free account at OpenWeatherMap and get your API key. Add the API key to your n8n OpenWeatherMap credentials.
- **Google Calendar & Google AI (Gemini/TTS):** You will need Google OAuth2 credentials for the Google Calendar node, plus credentials for the Google AI services (Gemini and Text-to-Speech). Follow the n8n documentation to create and add these credentials.
- **NewsAPI:** Get a free API key from NewsAPI.org and add it to your n8n NewsAPI credentials.
- **Telegram:** Create a new bot by talking to the BotFather in your Telegram app. Copy the Bot Token it provides and add it to your n8n Telegram credentials. Send a message to your new bot and get your Chat ID from the Telegram Trigger node or another method; you will need this for the Telegram send node.

**Workflow Inputs**

In the first node (or when you run the workflow manually), you must provide the following initial data:

- `name`: Your first name, for a personalized greeting.
- `city`: The city for your local weather forecast (e.g., "Amsterdam").
- `language`: The language for the entire podcast output (e.g., "en-US", "nl-NL", "fa-IR").
- `news_keywords`: A comma-separated list of topics you are interested in for the news summary (e.g., "n8n,AI,technology").

## How to Adapt the Template

This workflow is highly customizable. Here are several ways you can adapt it to fit your needs:

**Triggers**

- **Automate It:** The default trigger is manual. Change it to a **Schedule Trigger** to have your podcast automatically generated and sent to you at the same time every morning (e.g., 7:00 AM).

**Content Sources**

- **Weather:** In the "User Weather Map" node, you can change the forecast type or switch the units from metric to imperial.
- **Calendar:** In the "Get Today Meetings" node, you can select a different calendar from your Google account (e.g., a shared work calendar instead of your personal one).
- **News:** In the "Get Headlines From News Sources" node, change the country or category to get different top headlines. In the "Get Links From Keywords" node, update your keywords to track different topics. In the "Aggregate Headlines" (Gemini) node, you can modify the prompt to change the tone or length of the AI-generated news summary.

**Audio Generation**

- **Voice & Language:** The language is a starting parameter, but you can go deeper into the Google TTS nodes (Generate Virtual Parts, etc.) to select specific voices, genders, and speaking rates to create a unique podcast host style.
- **Scripting:** Modify the Set and Merge nodes that construct the final script. You can easily change the greeting, the transition phrases between sections, or the sign-off message.

**Delivery**

- **Platform:** Don't use Telegram? Swap the Telegram node for a Slack node, Discord node, or even an Email node to send the MP3 file to your preferred platform.
- **Message:** Customize the text message that is sent along with the audio file in the final node.
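For illustration, the script-assembly step could be as simple as the following Code-node sketch, assuming the Merge node delivers one item carrying all three summaries plus the user's name (field names are illustrative):

```javascript
// n8n Code node - hedged sketch of assembling the final podcast script.
const { name, weatherSummary, calendarSummary, newsSummary } = $json;

const script = [
  `Good morning, ${name}! Here is your briefing for today.`,
  `First, the weather. ${weatherSummary}`,
  `Next, your schedule. ${calendarSummary}`,
  `And finally, the news. ${newsSummary}`,
  `That's all for today. Have a great morning!`,
].join('\n\n');

// The script is handed to the Google TTS step; FFmpeg later converts the
// raw audio into the final MP3 before it is sent via Telegram.
return [{ json: { script } }];
```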
by Oussama
This n8n template creates an intelligent Ideation Agent 🤖 that captures your ideas from text and voice notes sent via Telegram. The assistant automatically transcribes your voice memos, analyzes the content with a powerful AI, and organizes it into a structured Google Sheet database. It's the perfect workflow for capturing inspiration whenever it strikes, just by talking or typing 💡.

## Use Cases

- 🗣️ **Text-Based Capture:** Send any idea as a simple text message to your Telegram bot for instant processing.
- 🎙️ **Voice-to-Idea:** Record voice notes on the go. The workflow transcribes them into text and categorizes them automatically.
- 📂 **Automated Organization:** The AI agent intelligently structures each idea with a title, description, score, category, and priority level without any manual effort.
- 📊 **Centralized Database:** Build a comprehensive and well-organized library of all your ideas in Google Sheets, making it easy to search, review, and act upon them.

## How it works

1. **Multi-Modal Input:** The workflow starts with a Telegram Trigger that listens for incoming text messages and voice notes.
2. **Content-Based Routing:** A Switch node detects the message type. Text messages are sent directly for processing, while audio files are routed for transcription.
3. **Voice Transcription:** Voice messages are sent to the ElevenLabs API, which accurately converts the speech into text.
4. **Unified Input:** Both the original text and the transcribed audio are passed to the AI Agent in a consistent format.
5. **AI Analysis & Structuring:** An AI Agent receives the text. It follows a detailed system prompt to analyze the idea and structure it into predefined fields: Idea, Idea Description, Idea Type, Score, Category, Priority, Status, and Complexity (see the example record after this section).
6. **Data Storage:** The agent uses the Google Sheets Tool (add_row_tool) to seamlessly add the fully structured idea as a new row in your designated spreadsheet.
7. **Instant Confirmation:** Once the idea is saved, the workflow sends a confirmation message back to you on Telegram, summarizing the captured idea.

## Requirements

- 🌐 A Telegram Bot API token.
- 🤖 An AI provider with API access (the template uses Azure OpenAI, but it can be adapted).
- 🗣️ An ElevenLabs API key for voice-to-text transcription.
- 📝 Google Sheets API credentials to connect to your database.

## Good to know

- ⚠️ Before you start, make sure your Google Sheet has columns that exactly match the fields defined in the Agent's system prompt (e.g., "Idea ", "Idea Description ", "Idea Type", etc.). Note that some have a trailing space in the template.
- 🎤 The quality of the voice transcription depends on the clarity of your recorded audio.
- ✅ You can completely customize the AI's behavior, including all the categories, types, and scoring logic, by editing the system prompt in the Agent node.

## Customizing this workflow

- ✏️ **Modify Categories:** To change the available Idea Type, Category/Domain, or Priority Level options, simply edit the list within the Agent node's system prompt.
- 🔄 **Swap LLM:** You can easily change the AI model by replacing the Azure OpenAI Chat Model node with another one, such as the standard OpenAI node or a local AI model.
- 🔗 **Change Database:** To save ideas to a different platform, just replace the add_row_tool1 (Google Sheets Tool) with a tool for another service like Notion, Airtable, or a database.
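As an invented illustration of the structured output, a captured idea could land in your sheet as a row shaped like this (values are made up; the keys, including the trailing spaces noted above, must match your sheet's column headers exactly):

```json
{
  "Idea ": "Voice-first grocery list",
  "Idea Description ": "A Telegram bot that turns spoken shopping reminders into a shared, categorized grocery list.",
  "Idea Type": "Product",
  "Score": 8,
  "Category": "Productivity",
  "Priority": "High",
  "Status": "New",
  "Complexity": "Medium"
}
```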
by Davide
This workflow is an AI-powered text-to-speech production pipeline designed to generate highly expressive audio using ElevenLabs v3. It automates the entire process from raw text input to final audio distribution, uploading the MP3 file to Google Drive and an FTP space.

## Key Advantages

1. ✅ **Cinematic-quality audio output.** By combining AI-driven emotional tagging with ElevenLabs v3, the workflow produces audio that feels acted, not simply read.
2. ✅ **Fully automated pipeline.** From raw text to hosted audio file, everything is handled automatically: no manual tagging, no manual uploads, no post-processing.
3. ✅ **Multi-input flexibility.** The workflow supports manual testing, chat-based usage, and API/webhook integrations, making it ideal for apps, CMSs, games, and content platforms.
4. ✅ **Language-agnostic.** The agent preserves the original language of the input text and applies tags accordingly, making it suitable for international projects.
5. ✅ **Consistent and correct tagging.** The use of Context7 ensures that all audio tags follow the official ElevenLabs v3 specifications, reducing errors and incompatibilities.
6. ✅ **Scalable and production-ready.** Automatic uploads to Drive and FTP make this workflow ready for large content volumes, CDN delivery, and team collaboration.
7. ✅ **Perfect for storytelling and media.** The workflow is especially effective for horror and cinematic storytelling, audiobooks and podcasts, games and immersive narratives, and voiceovers with emotional depth.

## How it Works

1. **Text Input & Processing:** The workflow accepts text input through multiple triggers: manual execution via the "Set text" node, webhook POST requests, or chat message inputs. This text is passed to the Audio Tagger Agent.
2. **AI-Powered Audio Tagging:** The Audio Tagger Agent uses Claude Sonnet 4.5 to analyze the input text and intelligently insert ElevenLabs v3 audio tags. The agent follows strict rules: maintaining the original meaning, adding tags for pauses, rhythm, emphasis, emotional tones, breathing, laughter, and delivery variations, while keeping the output in the original language. An illustrative example of tagged output follows below.
3. **Reference Validation:** During tagging, the agent consults the Context7 MCP tool, which provides access to the official ElevenLabs v3 audio tags guide to ensure correct and consistent tag usage.
4. **Text-to-Speech Conversion:** The tagged text is sent to ElevenLabs' v3 (alpha) model, which converts it into speech using a specific voice with customized voice settings, including stability, similarity boost, style, speaker boost, and speed controls.
5. **Dual Output Distribution:** The generated audio file is simultaneously uploaded to two destinations: Google Drive (in a specified "Elevenlabs" folder) and an FTP server (BunnyCDN), ensuring the file is stored in both cloud storage platforms.
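To give a feel for the tagging step, a short horror-style input might come back from the Audio Tagger Agent looking roughly like this. This is an invented example; the authoritative tag vocabulary is whatever Context7 surfaces from the official ElevenLabs v3 guide (which includes tags such as [whispers], [sighs], and [laughs]):

```
[whispers] Someone is in the house.
I heard the stairs creak... twice. [sighs]
[whispers] We should never have come back here.
```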
## Set Up Steps

**Prerequisite Configuration**

- Configure Anthropic API credentials for Claude Sonnet access.
- Set up ElevenLabs API credentials with access to v3 (alpha) models.
- Configure Google Drive OAuth2 credentials with access to the target folder.
- Set up FTP credentials for BunnyCDN or alternative storage.
- Configure the Context7 MCP tool with appropriate authentication headers.

**Workflow-Specific Setup**

- In the "Set text" node, replace "YOUR TEXT" with the default text you want to process (for manual execution).
- In the "Upload to FTP" node, update the path from "/YOUR_PATH/" to your actual FTP directory structure.
- Verify the Google Drive folder ID points to your intended destination folder.
- Ensure the webhook path is correctly configured for external integrations.
- Adjust voice parameters in the ElevenLabs node if different voice characteristics are desired.

**Execution Options**

- For one-time processing: use the manual trigger and set the text in the "Set text" node.
- For API integration: use the webhook endpoint to receive text via POST requests.
- For chat-based interaction: use the chat trigger for conversational text input.

👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
by Belen
This n8n template automatically transcribes GoHighLevel (GHL) call recordings and creates an AI-generated summary that is added as a note directly to the related contact in your GHL CRM. It's designed for real estate investors, agencies, and sales teams that handle a large volume of client calls and want to keep detailed, searchable notes without spending hours on manual transcription.

## Who's it for

- Sales and acquisitions teams that want instant call notes in their CRM
- Real estate wholesalers or agencies using GoHighLevel for deal flow
- Support and QA teams that need summarized transcripts for review
- Any business owner who wants to automatically document client conversations

## How it works

1. A HighLevel automation workflow triggers when a call is marked "Completed" and automatically sends a webhook to n8n.
2. The n8n workflow receives this webhook and waits briefly to ensure the call recording is ready.
3. It retrieves the conversation and message IDs from the webhook payload.
4. The call recording is fetched from GHL's API.
5. An AI transcription node converts the audio to text.
6. A summarization node condenses the transcript into bullet points or a concise paragraph.
7. A Code node formats the AI output into proper JSON for GHL's "Create Note" endpoint (see the sketch after this section).
8. Finally, an HTTP Request node posts the summary to the contact's record in GHL.

## How to set up

1. Add your GoHighLevel OAuth credential and connect your agency account.
2. Add your AI credential (e.g., OpenAI, Anthropic, or Gemini).
3. Replace the sample webhook URL with your n8n endpoint.
4. Test with a recent call and confirm the summary appears in the contact timeline.

## Requirements

- GoHighLevel account with API and OAuth access
- AI service for transcription and summarization (e.g., OpenAI Whisper + GPT)

## Customizing this workflow

You can tailor this automation for your specific team or workflow:

- Add sentiment analysis or keyword extraction to the summary.
- Change the AI prompt to focus on "action items," "objections," or "next steps."
- Send summaries to Slack, Notion, or Google Sheets for reporting.
- Trigger follow-up tasks automatically in your CRM based on keywords.

## Good to know

- AI transcription and summarization costs vary by provider; check your LLM's pricing.
- GoHighLevel's recording availability may take up to 1 minute after the call ends; adjust the delay accordingly.
- For OAuth setup help, refer to GHL's OAuth documentation.

Happy automating! ⚙️
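A hedged sketch of the Code node that formats the AI output for the note, assuming the summarization step returned a `summary` field and the webhook payload carried a `contactId` (verify both names against your actual payload, and check the exact "Create Note" URL in GHL's API docs):

```javascript
// n8n Code node - illustrative formatting of the AI summary for GHL.
const summary = $json.summary ?? $json.text;
const contactId = $('Webhook').first().json.body?.contactId; // assumed field name

// Keep the note as plain text so it renders cleanly in the contact timeline;
// strip any markdown characters the LLM may have emitted.
const noteBody = `📞 Call Summary\n\n${String(summary).replace(/[*_#]/g, '').trim()}`;

// The downstream HTTP Request node can reference these two fields when
// calling GHL's "Create Note" endpoint.
return [{ json: { contactId, body: noteBody } }];
```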
by Stephan Koning
## Real-Time ClickUp Time Tracking to HubSpot Project Sync

This workflow automates the synchronization of time tracked on ClickUp tasks directly to a custom project object in HubSpot, ensuring your project metrics are always accurate and up to date.

## Use Case & Problem

This workflow is designed for teams that use a custom object in HubSpot for high-level project overviews (tracking scoped vs. actual hours per sprint) but manage daily tasks and time logging in ClickUp. The primary challenge is the constant, manual effort required to transfer tracked hours from ClickUp to HubSpot, a process that is both time-consuming and prone to errors. This automation eliminates that manual work entirely.

## How It Works

- **Triggers on Time Entry:** The workflow instantly starts whenever a user updates the time tracked on any task in a specified ClickUp space. ⏱️
- **Fetches Task & Time Details:** It immediately retrieves all relevant data about the task (like its name and custom fields) and the specific time entry that was just updated.
- **Identifies the Project & Sprint:** The workflow processes the task data to determine which HubSpot project it belongs to and categorizes the work into the correct sprint (e.g., Sprint 1, Sprint 2, Additional Requests). A sketch of this mapping step follows below.
- **Updates HubSpot in Real Time:** It finds the corresponding project record in HubSpot and updates the master `actual_hours_tracked` property. It then updates the specific field for the corresponding sprint (e.g., `actual_sprint_1_hours`), ensuring your reporting remains granular and accurate.

## Requirements

✅ **ClickUp account** with the following custom fields on your tasks:
- A Dropdown custom field named `Sprint` to categorize tasks.
- A Short Text custom field named `HubSpot Deal ID` or similar to link to the HubSpot record.

✅ **HubSpot account** with:
- A custom object used for project tracking.
- **Custom properties** on that object to store total and sprint-specific hours (e.g., `actual_hours_tracked`, `actual_sprint_1_hours`, `total_time_remaining`, etc.).

> Note: Since this workflow interacts with a custom HubSpot object, it uses flexible HTTP Request nodes instead of the standard n8n HubSpot nodes.

## Setup Instructions

1. **Configure Credentials:** Add your ClickUp (OAuth2) and HubSpot (Header Auth with a Private App Token) credentials to the respective nodes in the workflow.
2. **Set ClickUp Trigger:** In the Time Tracked Update Trigger node, select your ClickUp team and the specific space you want to monitor for time updates.
3. **Update HubSpot Object ID:** Find the ID of your custom project object in HubSpot. In the HubSpot HTTP Request nodes (e.g., OnProjectFolder), replace the placeholder `objectTypeId` in the URL with your own.

## How to Customize

- Adjust the "Code: Extract Sprint & Task Data" node to change how sprint names are mapped or how time is calculated.
- Update the URLs in the HubSpot HTTP Request nodes if your custom object or property names differ.
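As referenced above, the sprint-mapping logic could be sketched like this, assuming the ClickUp task JSON is on the item and that tracked time arrives in milliseconds (ClickUp's native unit); the "Additional Requests" property name below is an assumption:

```javascript
// n8n Code node - hedged sketch of the sprint and task data extraction.
const task = $json;
const sprintField = (task.custom_fields || []).find((f) => f.name === 'Sprint');
const dealIdField = (task.custom_fields || []).find((f) => f.name === 'HubSpot Deal ID');

// Resolve the dropdown's selected option label (the exact payload shape
// varies by ClickUp API version - adjust this lookup to what you receive).
const options = sprintField?.type_config?.options ?? [];
const selected = options.find((o) => o.id === sprintField?.value || o.orderindex === sprintField?.value);

// Map the Sprint label to the matching HubSpot property name.
const sprintToProperty = {
  'Sprint 1': 'actual_sprint_1_hours',
  'Sprint 2': 'actual_sprint_2_hours',
  'Additional Requests': 'actual_additional_hours', // assumed property name
};

const hours = (Number(task.time_spent) || 0) / 3600000; // ms -> hours

return [{
  json: {
    hubspotRecordId: dealIdField?.value,
    sprintProperty: sprintToProperty[selected?.name] ?? 'actual_hours_tracked',
    hoursTracked: Number(hours.toFixed(2)),
  },
}];
```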
by Dhinesh Ravikumar
## Who it's for

Project managers, AI builders, and teams who want structured, automated meeting summaries with zero manual work.

## What it does

This workflow monitors a Google Drive folder for new meeting notes (PDF/TXT), extracts the text, summarizes it via OpenAI GPT-4o, groups tasks by sentiment, builds a styled HTML summary, and sends it via Gmail (a sketch of the HTML-building step follows below).

## How to set it up

1. Connect Google Drive, OpenAI, and Gmail credentials.
2. Point the Drive Trigger to your meeting notes folder.
3. Paste the system prompt into the AI node.
4. Set the Gmail Email Type to HTML and Message to `{{$json.email_html}}`.
5. Drop a test file and execute once.

## Requirements

- n8n account
- Google Drive, OpenAI, and Gmail credentials
- Non-scanned PDFs or plain text files

## Customization ideas

- Add Slack or Notion logging
- Support additional file types
- Translate summaries automatically

Tags: #ai #automation #productivity #gmail #drive #meeting-summary #openai
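A rough sketch of the HTML-building step that produces the `email_html` field the Gmail node expects, assuming the GPT-4o step returned `{ summary, tasks: [{ text, sentiment }] }` (that JSON shape is an assumption, not the template's guaranteed output):

```javascript
// n8n Code node - hedged sketch of building the styled HTML summary.
const { summary, tasks = [] } = $json;

// Group task lines by the sentiment label attached to each one.
const groups = { positive: [], neutral: [], negative: [] };
for (const t of tasks) (groups[t.sentiment] ?? groups.neutral).push(t.text);

const section = (title, items) =>
  items.length
    ? `<h3>${title}</h3><ul>${items.map((i) => `<li>${i}</li>`).join('')}</ul>`
    : '';

const email_html = `
  <div style="font-family:Arial,sans-serif">
    <h2>Meeting Summary</h2>
    <p>${summary}</p>
    ${section('✅ Positive / Wins', groups.positive)}
    ${section('📌 Neutral / Action Items', groups.neutral)}
    ${section('⚠️ Risks / Concerns', groups.negative)}
  </div>`;

return [{ json: { email_html } }];
```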
by Amit Mehta
This workflow performs structured data extraction and data mining from a web page by combining the capabilities of Bright Data and Google Gemini.

## How it Works

This workflow focuses on extracting structured data from a web page using Bright Data's Web Unlocker product. It then uses n8n's AI capabilities, specifically Google Gemini Flash Exp, for information extraction and custom sentiment analysis. The results are sent to webhooks and saved as local files.

## Use Cases

- **Data Mining:** Automating the process of extracting and analyzing data from websites.
- **Web Scraping:** Gathering structured data for market research, competitive analysis, or content aggregation.
- **Sentiment Analysis:** Performing custom sentiment analysis on unstructured text.

## Setup Instructions

1. **Bright Data Credentials:** You need an account and a Web Unlocker zone with Bright Data. Update the Header Auth account credentials in the Perform Bright Data Web Request node.
2. **Google Gemini Credentials:** Provide your Google Gemini (PaLM) API account credentials for the AI-related nodes.
3. **Configure URL and Zone:** In the Set URL and Bright Data Zone node, set the web URL you want to scrape and your Bright Data zone.
4. **Update Webhook:** Update the Webhook Notification URL in the relevant HTTP Request nodes.

## Workflow Logic

1. **Trigger:** The workflow is triggered manually.
2. **Set Parameters:** It sets the target URL and the Bright Data zone.
3. **Web Request:** The workflow performs a web request to the specified URL using Bright Data's Web Unlocker. The output is formatted as markdown (see the request-body sketch after the customization tips).
4. **Data Extraction & Analysis:** The markdown content is then processed by multiple AI nodes to extract textual data from the markdown, perform topic analysis with a structured response, and analyze trends by location and category with a structured response.
5. **Output:** The extracted data and analysis are sent to webhooks and saved as JSON files on disk.

## Node Descriptions

| Node Name | Description |
|-----------|-------------|
| When clicking 'Test workflow' | A manual trigger node to start the workflow. |
| Set URL and Bright Data Zone | A Set node to define the URL to be scraped and the Bright Data zone to be used. |
| Perform Bright Data Web Request | An httpRequest node that performs the web request to Bright Data's API to retrieve the content. |
| Markdown to Textual Data Extractor | An AI node that uses Google Gemini to convert markdown content into plain text. |
| Google Gemini Chat Model | A node representing the Google Gemini model used for the data extraction. |
| Topic Extractor with the structured response | An AI node that performs topic analysis and outputs the results in a structured JSON format. |
| Trends by location and category with the structured response | An AI node that analyzes and clusters emerging trends by location and category, outputting structured JSON. |
| Initiate a Webhook Notification... | These nodes send the output of the AI analysis to a webhook. |
| Create a binary file... | Function nodes that convert the JSON output into binary format for writing to a file. |
| Write the topics/trends file to disk | readWriteFile nodes that save the binary data to a local file (d:\topics.json and d:\trends.json). |

## Customization Tips

- Change the web URL in the Set URL and Bright Data Zone node to scrape different websites.
- Modify the AI prompts in the AI nodes to customize the analysis (e.g., change the sentiment analysis criteria).
- Adjust the output path in the readWriteFile nodes to save the files to a different location.
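For reference, the body the Perform Bright Data Web Request node sends looks roughly like this. A sketch based on Bright Data's public /request API; the zone name and URL are placeholders, and the field names should be verified against your account's documentation:

```json
{
  "zone": "YOUR_WEB_UNLOCKER_ZONE",
  "url": "https://example.com/page-to-analyze",
  "format": "raw",
  "data_format": "markdown"
}
```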
## Suggested Sticky Notes for Workflow

- **Note:** "This workflow deals with the structured data extraction by utilizing Bright Data Web Unlocker Product... Please make sure to set the web URL of your interest within the 'Set URL and Bright Data Zone' node and update the Webhook Notification URL."
- **LLM Usages:** "Google Gemini Flash Exp model is being used... Information Extraction is being used for handling the custom sentiment analysis with the structured response."

## Required Files

- 1GOrjyc9mtZCMvCr_Structured_Data_Extract,Data_Mining_with_Bright_Data&_Google_Gemini.json: The main n8n workflow export for this automation.

## Testing Tips

- Run the workflow and check the webhook to verify that the extracted data is being sent correctly.
- Confirm that the d:\topics.json and d:\trends.json files are created on your disk with the expected structured data.

## Suggested Tags & Categories

- Engineering
- AI
by Chris Pryce
## Overview

This workflow streamlines the process of setting up a chat-bot using the Signal Messenger API.

## What this is for

Chat-bot applications have become very popular on WhatsApp and Telegram. However, security-conscious people may be hesitant to connect their AI agents to these applications. Compared to WhatsApp and Telegram, the Signal messaging app is more secure and end-to-end encrypted by default. Partly because of this, it is more difficult to create a chat-bot application on Signal. It is still possible, however, if you host your own Signal API endpoint.

This workflow requires the installation of a community-node package. Some additional setup for the locally hosted Signal API endpoint is also necessary. As such, it will only work with self-hosted instances of n8n. You may use any AI model you wish for this chat-bot, and connect different tools and APIs depending on your use case.

## How to set up

### Step 1: Set up the REST API

Before implementing this workflow, you must set up a local Signal client REST API. This can be done using a Docker container based on this project: bbernhard/signal-cli-rest-api.

```yaml
version: "3"
services:
  signal-cli-rest-api:
    image: bbernhard/signal-cli-rest-api:latest
    environment:
      - MODE=normal # supported modes: json-rpc, native, normal
      #- AUTO_RECEIVE_SCHEDULE=0 22 * * * # enable this parameter on demand (see description below)
    ports:
      - "8080:8080" # map docker port 8080 to host port 8080
    volumes:
      - "./signal-cli-config:/home/.local/share/signal-cli" # map the "signal-cli-config" folder on the host into the container; it holds the password and cryptographic keys when a new number is registered
```

After starting the Docker container, you will be able to interact with a local Signal client over a REST API at http://localhost:8080 (by default; this setting can be modified in the docker-compose file).

### Step 2: Install the Node Package

This workflow requires the community-node package developed by ZBlaZe: n8n-nodes-signal-cli-rest-api.

Navigate to `your-n8n-server-address/settings/community-nodes`, click the 'Install' button, and paste in the community-node package name `n8n-nodes-signal-cli-rest-api` to install this community node.

### Step 3: Register and Verify the Account

The last step requires a phone number. You may use your own phone number, a prepaid SIM card, or (if you are a US resident) a free Google Voice digital phone number. An n8n web form has been created in this workflow to make headless setup easier.

1. In the Form nodes, replace the URL with the address of your local Signal REST API endpoint.
2. Open the web form and enter the phone number you are using to register your bot's Signal account.
3. Signal needs to verify you are human before registering an account. Visit this page to complete the captcha challenge. Then right-click the 'Open Signal' button and copy the link address. Paste this into the second form field and hit 'Submit'.
4. At this point you should receive a verification token as an SMS message to the phone number you are using. Copy this and paste it into the second web form.

Your bot's Signal account should now be set up. To use this account in n8n, add the REST API address and account number (phone number) as a new n8n credential.

### Step 4: Optional

For extra security, it is recommended to restrict communication with this chat-bot. In the 'If' workflow node, enter your own Signal account phone number. You may also provide a UUID, an identifier unique to your mobile device. You can find this by sending a test message to your bot's Signal account and copying it from the workflow execution data (see the hedged API sketch below for a related smoke test).
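To confirm the bot's account is registered (and to generate traffic whose details you can inspect in the execution data), you could send a test message through the local REST API. A hedged sketch, e.g. in an n8n Code node; the /v2/send path and body shape follow the bbernhard/signal-cli-rest-api project's documentation, and both phone numbers are placeholders:

```javascript
// Hedged sketch: send a test message through the local Signal REST API.
// Verify the endpoint against the docs of the signal-cli-rest-api version you run.
const response = await fetch('http://localhost:8080/v2/send', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    number: '+15550001111',       // the bot's registered number (placeholder)
    recipients: ['+15552223333'], // your personal Signal number (placeholder)
    message: 'Hello from my n8n Signal bot!',
  }),
});
console.log(await response.json()); // a timestamp in the response indicates success
```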