by AI/ML API | D1m7asis
Who’s it for

Teams and makers who want a plug-and-play vision bot: users send a photo in Telegram, and the bot returns a concise description plus OCR text. No custom servers required; just n8n, a Telegram bot, and an AIMLAPI key.

What it does / How it works

The workflow listens for new Telegram messages, fetches the highest-resolution photo, converts it to base64, normalizes the MIME type, and calls AIMLAPI (GPT-4o Vision) via the HTTP Request node using the OpenAI-compatible messages format with an image_url data URI. The model returns a short caption and any extracted text, and the answer is sent back to the same Telegram chat.

Requirements

- n8n instance (self-hosted or cloud)
- Telegram bot token (from @BotFather)
- AIMLAPI account and API key (OpenAI-compatible endpoint)

How to set up

1. Create a Telegram bot with @BotFather and copy the token.
2. In n8n, add Telegram credentials (no hardcoded tokens in nodes).
3. Add AIMLAPI credentials with your API key (base URL: https://api.aimlapi.com/v1).
4. Import the workflow JSON and connect credentials in the nodes.
5. Execute the trigger and send a photo to your bot to test.

How to customize the workflow

- Modify the vision prompt (e.g., add brand, language, or formatting rules).
- Switch models within AIMLAPI (any vision-capable model using the same messages schema).
- Add an IF branch for text-only messages (reply with guidance).
- Log usage to Google Sheets or a database (user id, file id, response).
- Add rate limits, user allowlists, or Markdown formatting in Telegram responses.
- Increase timeouts/retries in the HTTP Request node for large images or slow responses.
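The base64-and-data-URI step described above can be sketched as a small function. This is an illustrative sketch, not the template's exact node expressions; the function name and the normalization rule for Telegram's `image/jpg` MIME string are assumptions.

```javascript
// Build the OpenAI-compatible "messages" body that AIMLAPI's
// /v1/chat/completions endpoint expects, embedding the photo as a
// base64 data URI. (Sketch only; field names follow the OpenAI
// vision message schema.)
function buildVisionRequest(imageBuffer, mimeType, prompt) {
  // Telegram sometimes reports "image/jpg"; normalize to a valid MIME type.
  const normalized = mimeType === 'image/jpg' ? 'image/jpeg' : mimeType;
  const dataUri = `data:${normalized};base64,${imageBuffer.toString('base64')}`;
  return {
    model: 'gpt-4o', // any vision-capable model on AIMLAPI also works here
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: prompt },
          { type: 'image_url', image_url: { url: dataUri } },
        ],
      },
    ],
    max_tokens: 300,
  };
}

const visionBody = buildVisionRequest(
  Buffer.from([0xff, 0xd8, 0xff]), // stand-in JPEG header bytes
  'image/jpg',
  'Describe this photo briefly and extract any visible text (OCR).'
);
```

In the workflow this object becomes the JSON body of the HTTP Request node pointed at the AIMLAPI base URL.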
by Piotr Sikora
Who’s it for

This workflow is perfect for content managers, SEO specialists, and website owners who want to easily analyze their WordPress content structure. It automatically fetches posts, categories, and tags from a WordPress site and exports them into a Google Sheet for further review or optimization.

What it does

This automation connects to the WordPress REST API, collects data about posts, categories, and tags, and maps the category and tag names directly into each post. It then appends all of this enriched data to a Google Sheet, providing a quick, clean way to audit your site’s content and taxonomy structure.

How it works

1. Form trigger: Start the workflow by submitting a form with your website URL and the number of posts to analyze.
2. Fetch WordPress data: The workflow sends three API requests to collect posts, categories, and tags.
3. Merge data: It combines all the data into one stream using the Merge node.
4. Code transformation: A Code node replaces category and tag IDs with their actual names.
5. Google Sheets export: Posts are appended to a Google Sheet with the following columns: URL, Title, Categories, Tags.
6. Completion form: Once the list is created, you’ll get a confirmation message and a link to your sheet.

If the WordPress API isn’t available, the workflow automatically displays an error message to help you troubleshoot.

Requirements

- A WordPress site with the REST API enabled (/wp-json/wp/v2/).
- A Google account connected to n8n with access to Google Sheets.
- A Google Sheet containing the columns: URL, Title, Categories, Tags.

How to set up

1. Import this workflow into n8n.
2. Connect your Google Sheets account under credentials.
3. Make sure your WordPress site’s API is publicly accessible.
4. Adjust the post limit (per_page) in the form node if needed.
5. Run the workflow and check your Google Sheet for results.

How to customize

- Add additional WordPress endpoints (e.g., authors, comments) by duplicating and modifying the HTTP Request nodes.
- Replace Google Sheets with another integration (such as Airtable or Notion).
- Extend the Code node to include SEO metadata such as meta descriptions or featured images.
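The ID-to-name mapping done by the Code node can be sketched as follows. This is a minimal illustration assuming the standard response shapes from `/wp-json/wp/v2/posts`, `/categories`, and `/tags`; the function name and row field names are chosen for this example.

```javascript
// Replace numeric category/tag IDs on each post with human-readable
// names, producing one flat row per post for the Google Sheet.
function enrichPosts(posts, categories, tags) {
  const catNames = Object.fromEntries(categories.map((c) => [c.id, c.name]));
  const tagNames = Object.fromEntries(tags.map((t) => [t.id, t.name]));
  return posts.map((post) => ({
    url: post.link,
    title: post.title.rendered,
    categories: post.categories.map((id) => catNames[id] ?? id).join(', '),
    tags: post.tags.map((id) => tagNames[id] ?? id).join(', '),
  }));
}

const rows = enrichPosts(
  [{ link: 'https://example.com/hello', title: { rendered: 'Hello' }, categories: [2], tags: [5] }],
  [{ id: 2, name: 'News' }],
  [{ id: 5, name: 'n8n' }]
);
```

Unknown IDs fall back to the raw number so a missing taxonomy term never breaks the export.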
by Tomohiro Goto
🧠 How it works

This workflow automatically transcribes and translates voice messages from Telegram to Slack, enabling seamless communication between Japanese and English speakers.

In our real-world use case, our distributed team often sends short voice updates on Telegram, but most discussion happens on Slack. Before this workflow, we constantly asked:

- “Can someone write a summary of that voice message?”
- “I can’t understand what was said — is there a transcript?”
- “Can we translate this audio for our English-speaking teammates?”

This workflow fixes that problem without changing anyone’s communication habits. Built with n8n, OpenAI Whisper, and GPT-4o-mini, it automatically:

1. Detects when a voice message is posted on Telegram
2. Downloads and transcribes it via Whisper
3. Translates the text with GPT-4o-mini
4. Posts the result in Slack, with flags 🇯🇵→🇺🇸 and username attribution

⚙️ Features

- 🎧 Voice-to-text transcription using OpenAI Whisper
- 🌐 Automatic JA ↔ EN detection and translation via GPT-4o-mini
- 💬 Clean Slack message formatting with flags, username, and original text
- 🔧 Easy to customize: adjust target languages, tone, or message style
- ⚡ Typical end-to-end time: under 10 seconds for short audio clips

💼 Use Cases

- **Global teams** – Send quick voice memos in Telegram and share readable translations in Slack
- **Project coordination** – Record updates while commuting and post bilingual notes automatically
- **Remote check-ins** – Replace daily written reports with spoken updates
- **Cross-language collaboration** – Let English and Japanese teammates stay perfectly synced

💡 Perfect for

- **Bilingual creators and managers** working across Japan and Southeast Asia
- **AI automation enthusiasts** who love connecting voice and chat platforms
- **Teams using Telegram for fast communication** and Slack for structured workspaces

🧩 Notes

- Requires three credentials: TELEGRAM_BOT_TOKEN, OPENAI_API_KEY_HEADER, SLACK_BOT_TOKEN_HEADER
- Slack scopes: chat:write, files:write, channels:history
- You can change the translation direction or add languages in the “Detect Language” → “Translate (OpenAI)” nodes.
- Keep audio files under 25 MB for Whisper processing.
- Always export your workflow with credentials OFF before sharing or publishing.

✨ Powered by OpenAI Whisper × GPT-4o-mini × n8n × Telegram Bot API × Slack API

A complete multilingual voice-to-text bridge, connecting speech, translation, and collaboration across platforms. 🌍
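The Slack-formatting step described above can be sketched like this. The flag pairs, field names, and layout are assumptions for illustration, not the template's exact expressions.

```javascript
// Compose the flagged Slack message from the detected source language,
// the Whisper transcript, and the GPT-4o-mini translation.
function formatSlackMessage({ username, sourceLang, transcript, translation }) {
  // Assumed JA <-> EN pairing; extend this map to add more languages.
  const flags = sourceLang === 'ja' ? '🇯🇵→🇺🇸' : '🇺🇸→🇯🇵';
  return [
    `${flags} Voice message from *${username}*`,
    `> ${transcript}`, // original text, quoted for context
    translation,
  ].join('\n');
}

const slackMsg = formatSlackMessage({
  username: 'tomo',
  sourceLang: 'ja',
  transcript: 'おはようございます',
  translation: 'Good morning.',
});
```

Keeping the original transcript in a quote block lets bilingual teammates sanity-check the translation at a glance.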
by Atta
What it does

Instead of manually checking separate apps for your calendar, weather, and news each morning, this workflow consolidates the most important information into a single, convenient audio briefing. The "Good Morning Podcast" is designed to be a 3-minute summary of your day ahead, delivered directly to you. It's multi-lingual and customizable, allowing you to start your day informed and efficiently.

How it works

The workflow executes three parallel branches before merging the data to generate the final audio file.

1. Weather Summary: It starts by taking a user-provided city and fetching the current 15-hour forecast from OpenWeatherMap, then formats this information into a concise weather report.
2. Calendar Summary: It securely connects to your Google Calendar to retrieve all of today's scheduled meetings and events, then formats the schedule into a clear, readable summary.
3. News Summary: It connects to NewsAPI to perform two tasks: it fetches the top general headlines and also searches for articles based on user-defined keywords (e.g., "AI", "automation", "space exploration"). The collected headlines are then summarized by a Google Gemini node to create a brief news digest.
4. Audio Generation and Delivery: All three text summaries (weather, calendar, and news) are merged into a single script. The workflow uses Google's Text-to-Speech (TTS) to generate the raw multi-speaker audio, and a dedicated FFmpeg node processes and converts this audio into the final MP3 format. The completed podcast is then sent directly to you via a Telegram Bot.

Setup Instructions

To get this workflow running, you will need to configure credentials for each of the external services and set your initial parameters.

⚠️ Important Prerequisite

Install FFmpeg: The workflow requires the FFmpeg software package to be installed on the machine running your n8n instance (local or server). Please ensure it is installed and accessible in your system's PATH before running this workflow.

Required Credentials

- OpenWeatherMap: Sign up for a free account at OpenWeatherMap and get your API key. Add the API key to your n8n OpenWeatherMap credentials.
- Google Calendar & Google AI (Gemini/TTS): You will need Google OAuth2 credentials for the Google Calendar node, plus credentials for the Google AI services (Gemini and Text-to-Speech). Follow the n8n documentation to create and add these credentials.
- NewsAPI: Get a free API key from NewsAPI.org and add it to your n8n NewsAPI credentials.
- Telegram: Create a new bot by talking to the BotFather in your Telegram app. Copy the Bot Token it provides and add it to your n8n Telegram credentials. Send a message to your new bot and get your Chat ID from the Telegram Trigger node (or another method); you will need it for the Telegram send node.

Workflow Inputs

In the first node (or when you run the workflow manually), you must provide the following initial data:

- name: Your first name, for a personalized greeting.
- city: The city for your local weather forecast (e.g., "Amsterdam").
- language: The language for the entire podcast output (e.g., "en-US", "nl-NL", "fa-IR").
- news_keywords: A comma-separated list of topics you are interested in for the news summary (e.g., "n8n,AI,technology").

How to Adapt the Template

This workflow is highly customizable. Here are several ways you can adapt it to fit your needs:

Triggers

- **Automate it:** The default trigger is manual. Change it to a **Schedule Trigger** to have your podcast automatically generated and sent to you at the same time every morning (e.g., 7:00 AM).

Content Sources

- **Weather:** In the "User Weather Map" node, you can change the forecast type or switch the units from metric to imperial.
- **Calendar:** In the "Get Today Meetings" node, you can select a different calendar from your Google account (e.g., a shared work calendar instead of your personal one).
- **News:** In the "Get Headlines From News Sources" node, change the country or category to get different top headlines. In the "Get Links From Keywords" node, update your keywords to track different topics. In the "Aggregate Headlines" (Gemini) node, you can modify the prompt to change the tone or length of the AI-generated news summary.

Audio Generation

- **Voice & Language:** The language is a starting parameter, but you can go deeper into the Google TTS nodes (Generate Virtual Parts, etc.) to select specific voices, genders, and speaking rates to create a unique podcast host style.
- **Scripting:** Modify the Set and Merge nodes that construct the final script. You can easily change the greeting, the transition phrases between sections, or the sign-off message.

Delivery

- **Platform:** Don't use Telegram? Swap the Telegram node for a Slack, Discord, or even an Email node to send the MP3 file to your preferred platform.
- **Message:** Customize the text message that is sent along with the audio file in the final node.
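The script-construction step done in the Set and Merge nodes can be sketched as a single function. The greeting, transitions, and sign-off below are placeholders; the template lets you change all of them.

```javascript
// Merge the three branch summaries (weather, calendar, news) into one
// TTS-ready script. Section order and transition phrases are assumptions.
function buildScript({ name, weather, calendar, news }) {
  return [
    `Good morning, ${name}! Here is your briefing for today.`,
    `First, the weather. ${weather}`,
    `Next, your schedule. ${calendar}`,
    `And finally, the news. ${news}`,
    'That is all for today. Have a great morning!',
  ].join('\n\n');
}

const script = buildScript({
  name: 'Atta',
  weather: 'Partly cloudy, 18 degrees.',
  calendar: 'You have two meetings, starting at 10 AM.',
  news: 'Top story: n8n releases a new version.',
});
```

Blank lines between sections give the TTS engine natural pauses between segments.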
by Anirudh Aeran
This workflow provides a complete backend solution for building your own WhatsApp marketing dashboard. It enables you to send dynamic, personalized, and rich-media broadcast messages to an entire contact list stored in Google Sheets. The system is built on three core functions: automatically syncing your approved Meta templates, providing an API endpoint for your front-end to fetch those templates, and a powerful broadcast engine that merges your contact data with the selected template for mass delivery.

Who’s it for?

This template is for marketers, developers, and businesses who want to run sophisticated WhatsApp campaigns without being limited by off-the-shelf tools. It's perfect for anyone who needs to send personalized bulk messages with dynamic content (like unique images or links for each user) and wants to operate from a simple, custom-built web interface.

How it works

This workflow is composed of three independent, powerful parts:

1. Automated Template Sync: A scheduled trigger runs periodically to fetch all of your approved message templates directly from your Meta Business Account. It then clears and updates an n8n Data Table, ensuring your list of available templates is always perfectly in sync with Meta.
2. Front-end API Endpoint: A dedicated webhook acts as an API for your dashboard. When your front-end calls this endpoint, it returns a clean JSON list of all available templates from the n8n Data Table, which you can use to populate a dropdown menu for the user.
3. Dynamic Broadcast Engine: The main webhook listens for a request from your front-end, which includes the name of the template to send. It then:
   - Looks up the template's structure in the Data Table.
   - Fetches all contacts from your Google Sheet.
   - For each contact, a Code node dynamically constructs a personalized API request. It can merge the contact's name into the body, add a unique user ID to a button's URL, and even pull a specific image URL from your Google Sheet to use as a dynamic header.
   - Sends the fully personalized message to the contact.

How to set up

1. Pre-requisite (front-end): This workflow is a backend, designed to be triggered by a front-end application. You will need a simple UI with a dropdown to select a template and a button to trigger the broadcast.
2. Meta for Developers: You need a Meta App with the WhatsApp Business API configured. From your app, you will need your WhatsApp Business Account ID, a Phone Number ID, and a permanent System User Access Token.
3. n8n Data Table: Create an n8n Data Table (e.g., named "WhatsApp Templates") with the following columns: template_name, language_code, components_structure, template_id, status, category.
4. Google Sheet: Create a Google Sheet to store your contacts. It must have columns like Phone Number, Full Name, and, for dynamic images, Marketing Image URL.
5. Configure Credentials: Create an HTTP Header Auth credential in n8n for WhatsApp, using Authorization as the Header Name and Bearer YOUR_PERMANENT_TOKEN as the value. Then add your Google Sheets credentials.
6. Configure Nodes: In both HTTP Request nodes, select your WhatsApp Header Auth credential and update the URLs with your own Phone Number ID and WABA ID. In the Google Sheets node, select your credential and enter the Sheet ID. In all Data Table nodes, select the Data Table you created.
7. First Run: Manually execute the "Sync Meta Templates" flow (starting with the Schedule Trigger) once to populate your Data Table with your templates.
8. Activate: Activate all parts of the workflow.

Requirements

- A Meta for Developers account with a configured WhatsApp Business App.
- A permanent System User Access Token for the WhatsApp Business API.
- A Google Sheets account.
- A front-end application/dashboard to trigger the workflow.
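The per-contact payload built by the Code node can be sketched as below. The component layout follows Meta's Cloud API template-message format; the contact field names match the Google Sheet columns described in the setup, but the exact parameter positions depend on your approved template, so treat this as an assumption.

```javascript
// Construct one personalized WhatsApp template message for a contact,
// with a dynamic image header and the contact's name in the body.
function buildTemplateMessage(contact, templateName, languageCode) {
  return {
    messaging_product: 'whatsapp',
    to: contact['Phone Number'],
    type: 'template',
    template: {
      name: templateName,
      language: { code: languageCode },
      components: [
        {
          type: 'header',
          parameters: [{ type: 'image', image: { link: contact['Marketing Image URL'] } }],
        },
        {
          type: 'body',
          parameters: [{ type: 'text', text: contact['Full Name'] }],
        },
      ],
    },
  };
}

const waMsg = buildTemplateMessage(
  { 'Phone Number': '15551234567', 'Full Name': 'Jane Doe', 'Marketing Image URL': 'https://example.com/promo.jpg' },
  'spring_sale',
  'en_US'
);
```

In the workflow this object is posted to the `/{Phone-Number-ID}/messages` endpoint with the Header Auth credential attached.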
by Belen
This n8n template automatically transcribes GoHighLevel (GHL) call recordings and creates an AI-generated summary that is added as a note directly to the related contact in your GHL CRM. It’s designed for real estate investors, agencies, and sales teams that handle a large volume of client calls and want to keep detailed, searchable notes without spending hours on manual transcription.

Who’s it for

- Sales and acquisitions teams that want instant call notes in their CRM
- Real estate wholesalers or agencies using GoHighLevel for deal flow
- Support and QA teams that need summarized transcripts for review
- Any business owner who wants to automatically document client conversations

How it works

1. A HighLevel automation workflow triggers when a call is marked “Completed” and automatically sends a webhook to n8n.
2. The n8n workflow receives this webhook and waits briefly to ensure the call recording is ready.
3. It retrieves the conversation and message IDs from the webhook payload.
4. The call recording is fetched from GHL’s API.
5. An AI transcription node converts the audio to text.
6. A summarization node condenses the transcript into bullet points or a concise paragraph.
7. A Code node formats the AI output into proper JSON for GHL’s “Create Note” endpoint.
8. Finally, an HTTP Request node posts the summary to the contact’s record in GHL.

How to set up

1. Add your GoHighLevel OAuth credential and connect your agency account.
2. Add your AI credential (e.g., OpenAI, Anthropic, or Gemini).
3. Replace the sample webhook URL with your n8n endpoint.
4. Test with a recent call and confirm the summary appears in the contact timeline.

Requirements

- GoHighLevel account with API and OAuth access
- AI service for transcription and summarization (e.g., OpenAI Whisper + GPT)

Customizing this workflow

You can tailor this automation to your specific team or workflow:

- Add sentiment analysis or keyword extraction to the summary.
- Change the AI prompt to focus on “action items,” “objections,” or “next steps.”
- Send summaries to Slack, Notion, or Google Sheets for reporting.
- Trigger follow-up tasks automatically in your CRM based on keywords.

Good to know

- AI transcription and summarization costs vary by provider; check your LLM’s pricing.
- GoHighLevel’s recording may take up to a minute to become available after the call ends; adjust the delay accordingly.
- For OAuth setup help, refer to GHL’s OAuth documentation.

Happy automating! ⚙️
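The note-formatting Code node mentioned in the steps above can be sketched roughly as follows. The input shape (an array of summary bullets) and the `body` field are assumptions about the upstream AI output and GHL's Create Note payload; check GHL's API reference for the exact schema your account version expects.

```javascript
// Shape the AI summary into a plain-text note body for GHL's
// "Create Note" endpoint. (Illustrative sketch only.)
function buildNotePayload(summary) {
  const lines = ['📞 AI Call Summary', '', ...summary.bullets.map((b) => `• ${b}`)];
  // Trim defensively so an unusually long transcript cannot produce
  // an oversized note body.
  return { body: lines.join('\n').slice(0, 5000) };
}

const notePayload = buildNotePayload({
  bullets: ['Caller asked about the Maple St. property', 'Follow up Friday with an offer range'],
});
```

The HTTP Request node then posts this object to the contact's notes endpoint using the OAuth credential.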
by Grigory Frolov
📊 YouTube Personal Channel Videos → Google Sheets

Automatically sync your YouTube videos (title, description, tags, publish date, captions, etc.) into Google Sheets — perfect for creators and marketers who want a clean content database for analysis or reporting.

🚀 What this workflow does

- ✅ Connects to your personal YouTube channel via Google OAuth
- 🔁 Fetches all uploaded videos automatically (with pagination)
- 🏷 Extracts metadata: title, description, tags, privacy status, upload status, thumbnail, etc.
- 🧾 Retrieves captions (SRT format) if available
- 📈 Writes or updates data in your Google Sheets document
- ⚙️ Can be run manually or scheduled via Cron

🧩 Nodes used

- **Manual Trigger** — to start manually or connect with Cron
- **HTTP Request (YouTube API v3)** — fetches channel, uploads, and captions
- **Code nodes** — manage pagination and collect IDs
- **SplitOut** — iterates through video lists
- **Google Sheets (appendOrUpdate)** — stores data neatly
- **If conditions** — control data flow and prevent empty responses

⚙️ Setup guide

1. Connect your Google account. It is used for both the YouTube API and Google Sheets; make sure the credentials are set up in the Google OAuth2 API and Google Sheets OAuth2 API nodes.
2. Create a Google Sheet. Add a tab named Videos with these columns: youtube_id | title | description | tags | privacyStatus | uploadStatus | thumbnail | captions. You can also include categoryId, maxres, or published if you’d like.
3. Replace the sample Sheet ID. In each Google Sheets node, open the “Spreadsheet” field and choose your own document. Make sure the sheet name matches the tab name (Videos).
4. Run the workflow. Execute it manually first to pull your latest uploads. Optionally add a Cron Trigger node for daily sync (e.g., once per day).
5. Check your Sheet. Your data should appear with each video’s metadata and captions (if available).

🧠 Notes & tips

- ⚙️ The flow loops through all pages of your upload playlist automatically — no manual pagination needed.
- 🕒 The workflow uses YouTube’s contentDetails.relatedPlaylists.uploads to ensure you only fetch your own uploads.
- 💡 Captions fetch may fail for private videos — enable “Continue on Fail” if you want the rest to continue.
- 🧮 Ideal for dashboards, reporting sheets, SEO analysis, or automation triggers.
- 💾 To improve speed, you can disable the “Captions” branch if you only need metadata.

👥 Ideal for

- 🎬 YouTube creators maintaining a video database
- 📊 Marketing teams tracking SEO performance
- 🧠 Digital professionals building analytics dashboards
- ⚙️ Automation experts using YouTube data in other workflows

💛 Credits

Created by Grigory Frolov
YouTube: @gregfrolovpersonal
More workflows and guides → ozwebexpert.com/n8n
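The pagination the Code nodes handle follows the YouTube Data API's `nextPageToken` convention. A minimal sketch, with the HTTP call injected so the logic stays self-contained (in the workflow, each page is fetched by the HTTP Request node):

```javascript
// Collect every video ID from the uploads playlist by following
// nextPageToken until the API stops returning one.
async function collectVideoIds(fetchPage) {
  const ids = [];
  let pageToken;
  do {
    const page = await fetchPage(pageToken); // one playlistItems.list call
    ids.push(...page.items.map((item) => item.contentDetails.videoId));
    pageToken = page.nextPageToken; // undefined on the last page
  } while (pageToken);
  return ids;
}
```

`playlistItems.list` returns at most 50 items per call, so this loop is what makes the "fetches all uploads" behavior work for channels of any size.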
by Oussama
This n8n template creates an intelligent Ideation Agent 🤖 that captures your ideas from text and voice notes sent via Telegram. The assistant automatically transcribes your voice memos, analyzes the content with a powerful AI, and organizes it into a structured Google Sheet database. It's the perfect workflow for capturing inspiration whenever it strikes, just by talking or typing 💡.

Use Cases

- 🗣️ Text-Based Capture: Send any idea as a simple text message to your Telegram bot for instant processing.
- 🎙️ Voice-to-Idea: Record voice notes on the go. The workflow transcribes them into text and categorizes them automatically.
- 📂 Automated Organization: The AI agent intelligently structures each idea with a title, description, score, category, and priority level without any manual effort.
- 📊 Centralized Database: Build a comprehensive, well-organized library of all your ideas in Google Sheets, making it easy to search, review, and act on them.

How it works

1. Multi-Modal Input: The workflow starts with a Telegram Trigger that listens for incoming text messages and voice notes.
2. Content-Based Routing: A Switch node detects the message type. Text messages are sent directly for processing, while audio files are routed for transcription.
3. Voice Transcription: Voice messages are sent to the ElevenLabs API, which accurately converts the speech into text.
4. Unified Input: Both the original text and the transcribed audio are passed to the AI Agent in a consistent format.
5. AI Analysis & Structuring: The AI Agent receives the text and follows a detailed system prompt to analyze the idea and structure it into predefined fields: Idea, Idea Description, Idea Type, Score, Category, Priority, Status, and Complexity.
6. Data Storage: The agent uses the Google Sheets Tool (add_row_tool) to seamlessly add the fully structured idea as a new row in your designated spreadsheet.
7. Instant Confirmation: Once the idea is saved, the workflow sends a confirmation message back to you on Telegram, summarizing the captured idea.

Requirements

- 🌐 A Telegram Bot API token.
- 🤖 An AI provider with API access (the template uses Azure OpenAI, but can be adapted).
- 🗣️ An ElevenLabs API key for voice-to-text transcription.
- 📝 Google Sheets API credentials to connect to your database.

Good to know

- ⚠️ Before you start, make sure your Google Sheet has columns that exactly match the fields defined in the Agent's system prompt (e.g., "Idea ", "Idea Description ", "Idea Type", etc.). Note that some have a trailing space in the template.
- 🎤 The quality of the voice transcription depends on the clarity of your recorded audio.
- ✅ You can completely customize the AI's behavior, including all the categories, types, and scoring logic, by editing the system prompt in the Agent node.

Customizing this workflow

- ✏️ Modify Categories: To change the available Idea Type, Category/Domain, or Priority Level options, simply edit the list within the Agent node's system prompt.
- 🔄 Swap LLM: You can easily change the AI model by replacing the Azure OpenAI Chat Model node with another one, such as the standard OpenAI node or a local AI model.
- 🔗 Change Database: To save ideas to a different platform, replace the add_row_tool1 (Google Sheets Tool) with a tool for another service like Notion, Airtable, or a database.
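The column-matching warning above is easy to get wrong, so here is an illustrative mapping from the agent's structured output to a sheet row. The input field names are assumptions about the agent's output; the keys with trailing spaces ("Idea ", "Idea Description ") deliberately mirror the template's sheet columns.

```javascript
// Map the agent's structured idea to the Google Sheet row. Note the
// trailing spaces in two keys: they must match the sheet headers exactly.
function toSheetRow(idea) {
  return {
    'Idea ': idea.title,
    'Idea Description ': idea.description,
    'Idea Type': idea.type,
    'Score': idea.score,
    'Category': idea.category,
    'Priority': idea.priority,
    'Status': idea.status ?? 'New',
    'Complexity': idea.complexity,
  };
}

const row = toSheetRow({
  title: 'Voice-first note app',
  description: 'Capture notes hands-free while commuting.',
  type: 'Product',
  score: 8,
  category: 'Mobile',
  priority: 'High',
  complexity: 'Medium',
});
```

If a column header in your sheet differs even by whitespace, the Google Sheets Tool writes to the wrong (or a new) column, which is why the template calls this out.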
by Daniel
Generate stunning 10-second AI-crafted nature stock videos on autopilot and deliver them straight to your Telegram chat — perfect for content creators seeking effortless inspiration without the hassle of manual prompting or editing.

📋 What This Template Does

This workflow automates the creation and delivery of high-quality, 10-second nature-themed videos using AI generation tools. Triggered on a schedule, it leverages Google Gemini to craft precise video prompts, submits them to the Kie AI API for video synthesis, polls for completion, downloads the result, and sends it via Telegram.

- Dynamically generates varied nature scenes (e.g., misty forests, ocean sunsets) with professional cinematography specs.
- Handles asynchronous video processing with webhook callbacks for efficiency.
- Ensures commercial-ready outputs: watermark-free, portrait aspect, natural ambient audio.
- Customizable schedule for daily or weekly bursts of creative B-roll footage.

🔧 Prerequisites

- n8n instance with HTTP Request and LangChain nodes enabled.
- Google Gemini API access for prompt generation.
- Kie AI API account for video creation (supports Sora-like text-to-video models).
- Telegram bot set up for message delivery.

🔑 Required Credentials

Google Gemini API Setup
1. Go to aistudio.google.com → Create API key.
2. Ensure the key has access to Gemini 1.5 Flash or Pro models.
3. Add to n8n as a "Google Gemini API" credential type.

Kie AI API Setup
1. Sign up at kie.ai → Dashboard → API Keys.
2. Generate a new API key with video generation permissions (sora-2-text-to-video model).
3. Add to n8n as an "HTTP Header Auth" credential (header: Authorization, value: Bearer [Your API Key]).

Telegram Bot API Setup
1. Create a bot via @BotFather on Telegram → get the API token.
2. Note your target chat ID (use @userinfobot for personal chats).
3. Add to n8n as a "Telegram API" credential type.

⚙️ Configuration Steps

1. Import the workflow JSON into your n8n instance.
2. Assign the required credentials to the Gemini, Kie AI, and Telegram nodes.
3. Update the Telegram node's chat ID with your target chat (e.g., personal or group).
4. Adjust the Schedule Trigger interval (e.g., daily at 9 AM) via node settings.
5. Activate the workflow and monitor the first execution for video delivery.

🎯 Use Cases

- Content creators automating daily social media B-roll: generate fresh nature clips for Instagram Reels or YouTube intros without filming.
- Marketing teams sourcing versatile stock footage: quickly produce themed videos for campaigns, like serene landscapes for wellness brands.
- Educational bots for classrooms: deliver randomized nature videos to Telegram groups for biology lessons on ecosystems and wildlife.
- Personal productivity: schedule motivational nature escapes to your chat for remote workers needing quick digital breaks.

⚠️ Troubleshooting

- Video generation fails with a quota error: check the Kie AI dashboard for usage limits and upgrade your plan if needed.
- Prompt output too generic: tweak the Video Prompting Agent's system prompt for more specificity (e.g., add seasonal themes).
- Telegram send error: verify the bot token and chat ID; test with a simple message node first.
- Webhook callback timeout: ensure your n8n production URL is publicly accessible; use ngrok for local testing.
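The "polls for completion" step works like the generic poll-until-done loop sketched below. The status field names (`state`, `videoUrl`) are assumptions, not Kie AI's documented schema; the status call is injected so the sketch stays self-contained.

```javascript
// Poll a status endpoint until the video job succeeds, fails, or we
// give up. In the workflow this is a Wait node plus an IF loop.
async function waitForVideo(checkStatus, { intervalMs = 60000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await checkStatus();
    if (status.state === 'success') return status.videoUrl;
    if (status.state === 'failed') throw new Error(status.error ?? 'generation failed');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('timed out waiting for video');
}
```

Bounding the attempts matters: without `maxAttempts`, a stuck job would keep an execution open indefinitely.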
by Stephan Koning
Real-Time ClickUp Time Tracking to HubSpot Project Sync

This workflow automates the synchronization of time tracked on ClickUp tasks directly to a custom project object in HubSpot, ensuring your project metrics are always accurate and up to date.

Use Case & Problem

This workflow is designed for teams that use a custom object in HubSpot for high-level project overviews (tracking scoped vs. actual hours per sprint) but manage daily tasks and time logging in ClickUp. The primary challenge is the constant, manual effort required to transfer tracked hours from ClickUp to HubSpot, a process that is both time-consuming and error-prone. This automation eliminates that manual work entirely.

How It Works

1. **Triggers on Time Entry:** The workflow instantly starts whenever a user updates the time tracked on any task in a specified ClickUp space. ⏱️
2. **Fetches Task & Time Details:** It immediately retrieves all relevant data about the task (such as its name and custom fields) and the specific time entry that was just updated.
3. **Identifies the Project & Sprint:** The workflow processes the task data to determine which HubSpot project it belongs to and categorizes the work into the correct sprint (e.g., Sprint 1, Sprint 2, Additional Requests).
4. **Updates HubSpot in Real Time:** It finds the corresponding project record in HubSpot and updates the master actual_hours_tracked property. It then updates the field for the specific sprint (e.g., actual_sprint_1_hours), keeping your reporting granular and accurate.

Requirements

✅ ClickUp account with the following custom fields on your tasks:
- A Dropdown custom field named Sprint to categorize tasks.
- A Short Text custom field named HubSpot Deal ID (or similar) to link to the HubSpot record.

✅ HubSpot account with:
- A custom object used for project tracking.
- Custom properties on that object to store total and sprint-specific hours (e.g., actual_hours_tracked, actual_sprint_1_hours, total_time_remaining, etc.).

> Note: Since this workflow interacts with a custom HubSpot object, it uses flexible HTTP Request nodes instead of the standard n8n HubSpot nodes.

Setup Instructions

1. Configure Credentials: Add your ClickUp (OAuth2) and HubSpot (Header Auth with a Private App Token) credentials to the respective nodes in the workflow.
2. Set ClickUp Trigger: In the Time Tracked Update Trigger node, select your ClickUp team and the specific space you want to monitor for time updates.
3. Update HubSpot Object ID: Find the objectTypeId of your custom project object in HubSpot. In the HubSpot HTTP Request nodes (e.g., OnProjectFolder), replace the placeholder objectTypeId in the URL with your own.

How to Customize

- Adjust the "Code: Extract Sprint & Task Data" node to change how sprint names are mapped or how time is calculated.
- Update the URLs in the HubSpot HTTP Request nodes if your custom object or property names differ.
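The sprint-name-to-property mapping done in the "Extract Sprint & Task Data" Code node can be sketched as follows. The property names come from the requirements above, but the exact mapping rule and the fallback property name are assumptions you should align with your own HubSpot object.

```javascript
// Map a ClickUp "Sprint" dropdown value to the HubSpot property that
// stores that sprint's actual hours.
function sprintProperty(sprintName) {
  const match = /^Sprint (\d+)$/.exec(sprintName);
  if (match) return `actual_sprint_${match[1]}_hours`;
  // Anything else (e.g., "Additional Requests") goes to a catch-all
  // property; rename to match your HubSpot schema.
  return 'actual_additional_request_hours';
}

// ClickUp time entries report duration in milliseconds; HubSpot
// properties here store decimal hours.
function msToHours(durationMs) {
  return Math.round((durationMs / 3600000) * 100) / 100;
}
```

So a 90-minute entry on a task tagged "Sprint 2" becomes 1.5 hours written to `actual_sprint_2_hours` (plus the master `actual_hours_tracked` total).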
by Amit Mehta
This workflow performs structured data extraction and data mining from a web page by combining the capabilities of Bright Data and Google Gemini. How it Works This workflow focuses on extracting structured data from a web page using Bright Data's Web Unlocker Product. It then uses n8n's AI capabilities, specifically Google Gemini Flash Exp, for information extraction and custom sentiment analysis. The results are sent to webhooks and saved as local files. Use Cases Data Mining**: Automating the process of extracting and analyzing data from websites. Web Scraping**: Gathering structured data for market research, competitive analysis, or content aggregation. Sentiment Analysis**: Performing custom sentiment analysis on unstructured text. Setup Instructions Bright Data Credentials: You need to have an account and a Web Unlocker zone with Bright Data. Update the Header Auth account credentials in the Perform Bright Data Web Request node. Google Gemini Credentials: Provide your Google Gemini(PaLM) Api account credentials for the AI-related nodes. Configure URL and Zone: In the Set URL and Bright Data Zone node, set the web URL you want to scrape and your Bright Data zone. Update Webhook: Update the Webhook Notification URL in the relevant HTTP Request nodes. Workflow Logic Trigger: The workflow is triggered manually. Set Parameters: It sets the target URL and the Bright Data zone. Web Request: The workflow performs a web request to the specified URL using Bright Data's Web Unlocker. The output is formatted as markdown. Data Extraction & Analysis: The markdown content is then processed by multiple AI nodes to: Extract textual data from the markdown. Perform topic analysis with a structured response. Analyze trends by location and category with a structured response. Output: The extracted data and analysis are sent to webhooks and saved as JSON files on disk. 
Node Descriptions

| Node Name | Description |
|-----------|-------------|
| When clicking 'Test workflow' | A manual trigger node to start the workflow. |
| Set URL and Bright Data Zone | A Set node to define the URL to be scraped and the Bright Data zone to be used. |
| Perform Bright Data Web Request | An httpRequest node that calls Bright Data's API to retrieve the page content. |
| Markdown to Textual Data Extractor | An AI node that uses Google Gemini to convert markdown content into plain text. |
| Google Gemini Chat Model | A node representing the Google Gemini model used for the data extraction. |
| Topic Extractor with the structured response | An AI node that performs topic analysis and outputs the results in a structured JSON format. |
| Trends by location and category with the structured response | An AI node that analyzes and clusters emerging trends by location and category, outputting structured JSON. |
| Initiate a Webhook Notification... | These nodes send the output of the AI analysis to a webhook. |
| Create a binary file... | Function nodes that convert the JSON output into binary format for writing to a file. |
| Write the topics/trends file to disk | readWriteFile nodes that save the binary data to a local file (d:\topics.json and d:\trends.json). |

Customization Tips

- Change the web URL in the Set URL and Bright Data Zone node to scrape different websites.
- Modify the AI prompts in the AI nodes to customize the analysis (e.g., change the sentiment analysis criteria).
- Adjust the output path in the readWriteFile nodes to save the files to a different location.

Suggested Sticky Notes for Workflow

- **Note**: "This workflow deals with the structured data extraction by utilizing Bright Data Web Unlocker Product... Please make sure to set the web URL of your interest within the 'Set URL and Bright Data Zone' node and update the Webhook Notification URL".
- **LLM Usages**: "Google Gemini Flash Exp model is being used... Information Extraction is used for handling the custom sentiment analysis with the structured response".

Required Files

- 1GOrjyc9mtZCMvCr_Structured_Data_Extract,Data_Mining_with_Bright_Data&_Google_Gemini.json: The main n8n workflow export for this automation.

Testing Tips

- Run the workflow and check the webhook to verify that the extracted data is being sent correctly.
- Confirm that the d:\topics.json and d:\trends.json files are created on your disk with the expected structured data.

Suggested Tags & Categories

- Engineering
- AI
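The "Create a binary file" plus readWriteFile node pair amounts to serializing the structured response to bytes and writing it to disk. A minimal Python equivalent (the sample data and local file name are hypothetical; the workflow itself writes d:\topics.json and d:\trends.json):

```python
import json
from pathlib import Path

# Hypothetical structured output, shaped like the Topic Extractor's response.
topics = {"topics": [{"name": "pricing", "sentiment": "negative"}]}

def write_structured_json(data: dict, path: str) -> int:
    """Serialize a structured AI response and write it as bytes,
    mirroring the "Create a binary file" + readWriteFile node pair.
    Returns the number of bytes written."""
    raw = json.dumps(data, indent=2).encode("utf-8")
    Path(path).write_bytes(raw)
    return len(raw)

write_structured_json(topics, "topics.json")
```

Inspecting the resulting file is the same check as the second testing tip above.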
by Daniel
Harness OpenAI's Sora 2 for instant video creation from text or images using fal.ai's API, powered by GPT-5 for refined prompts that ensure cinematic quality. This template processes form submissions, intelligently routes to text-to-video (with mandatory prompt enhancement) or image-to-video modes, and polls for completion before redirecting to your generated clip.

📋 What This Template Does

Users submit prompts, aspect ratios (9:16 or 16:9), models (sora-2 or pro), durations (4s, 8s, or 12s), and optional images via a web form. For text-to-video, GPT-5 automatically refines the prompt for optimal Sora 2 results; image mode uses the raw input. The workflow calls one of four fal.ai endpoints (text-to-video, text-to-video/pro, image-to-video, image-to-video/pro), then loops every 60s to check status until the video is ready.

- Handles dual modes: text (with GPT-5 enhancement) or image-seeded generation
- Supports pro upgrades for higher fidelity and longer clips
- Auto-uploads images to a temporary host and polls asynchronously for hands-free results
- Redirects directly to the final video URL on completion

🔧 Prerequisites

- n8n instance with HTTP Request and LangChain nodes enabled
- fal.ai account for Sora 2 API access
- OpenAI account for GPT-5 prompt refinement

🔑 Required Credentials

fal.ai API Setup

1. Sign up at fal.ai and navigate to Dashboard → API Keys
2. Generate a new key with "sora-2" permissions (full access recommended)
3. In n8n, create a "Header Auth" credential: name it "fal.ai", set Header Name to "Authorization" and Value to "Key [Your API Key]"

OpenAI API Setup

1. Log in at platform.openai.com → API Keys (top-right profile menu)
2. Click "Create new secret key" and copy it (store securely)
3. In n8n, add an "OpenAI API" credential: paste the key and select the GPT-5 model in the LLM node

⚙️ Configuration Steps

1. Import the workflow JSON into your n8n instance via Settings → Import from File
2. Assign fal.ai and OpenAI credentials to the relevant HTTP Request and LLM nodes
3. Activate the workflow; the form URL auto-generates in the trigger node
4. Test by submitting a sample prompt (e.g., "A cat chasing a laser") and monitor executions for video output
5. Adjust the polling wait (60s node) for longer generations if needed

🎯 Use Cases

- **Social Media Teams**: Generate 9:16 vertical Reels from text ideas, like quick product animations enhanced by GPT-5 for professional polish
- **Content Marketers**: Animate uploaded images into 8s promo clips, e.g., turning a static ad graphic into a dynamic story for email campaigns
- **Educators and Trainers**: Create 4s explainer videos from outlines, such as historical reenactments, using pro mode for detailed visuals
- **App Developers**: Embed as a backend service to process user prompts into Sora 2 videos on demand for creative tools

⚠️ Troubleshooting

- **API quota exceeded**: Check the fal.ai dashboard for usage limits; upgrade to the pro tier or extend polling waits
- **Prompt refinement fails**: Ensure the GPT-5 credential is set and the output matches the JSON schema; test the LLM node independently
- **Image upload errors**: Confirm the file is a JPG/PNG under 10MB; verify the tmpfiles.org endpoint with a manual curl test
- **Endless polling loop**: Add an IF node to time out after 10 checks; increase the wait to 120s for 12s pro generations
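The routing-plus-polling logic described above (four endpoints, 60s status checks, a bounded retry instead of an endless loop) can be sketched in Python. The queue URL scheme, the `status_url`/`response_url` response fields, and the request body shape are assumptions based on this template's description, not verified fal.ai API details; only the "Key" auth header matches the credential setup above.

```python
import json
import time
import urllib.request

FAL_KEY = "YOUR_FAL_KEY"  # placeholder; n8n stores this in the Header Auth credential

def pick_endpoint(mode: str, pro: bool) -> str:
    """Route to one of the four fal.ai Sora 2 endpoints the workflow uses."""
    path = "image-to-video" if mode == "image" else "text-to-video"
    if pro:
        path += "/pro"
    return f"https://queue.fal.run/fal-ai/sora-2/{path}"  # assumed queue URL scheme

def _get_json(url: str) -> dict:
    req = urllib.request.Request(url, headers={"Authorization": f"Key {FAL_KEY}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def generate(prompt: str, mode: str = "text", pro: bool = False,
             poll_seconds: int = 60, max_checks: int = 10) -> dict:
    """Submit a job, then poll with a hard cap instead of an endless loop."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")  # assumed body shape
    req = urllib.request.Request(
        pick_endpoint(mode, pro),
        data=body,
        headers={"Authorization": f"Key {FAL_KEY}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        submitted = json.load(resp)
    for _ in range(max_checks):  # the bounded equivalent of the IF-node timeout
        time.sleep(poll_seconds)
        if _get_json(submitted["status_url"]).get("status") == "COMPLETED":
            return _get_json(submitted["response_url"])
    raise TimeoutError(f"video not ready after {max_checks} polls")
```

Capping the loop at `max_checks` is the same fix as the "Endless polling loop" tip: after ten failed checks the job is abandoned rather than polled forever.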