by giangxai
Overview This workflow automatically creates AI product review videos from a product image and short description using n8n and Veo 3. It connects content generation, image creation, video rendering, video merging, and publishing into a single automated flow. Once configured, the workflow runs end to end with minimal manual input. The workflow is designed for creators, marketers, and affiliate builders who want a reliable and repeatable way to produce short-form product review videos without manual editing. What can this workflow do? Automatically generate AI product review videos from product images Create review scripts and structured prompts using an AI model Generate product images and video scenes with AI services Merge multiple video scenes into a single final video Publish videos automatically to social platforms Track publishing results and errors in Google Sheets This workflow helps reduce manual work while keeping the video production process structured and scalable. How it works You start by submitting a product image and basic product information through a form. The workflow analyzes the image to understand visual context and key product features. An AI Agent then generates a review script along with structured image and video prompts. Next, image generation APIs create product visuals, and video generation APIs such as Veo 3 render short video scenes. All generated scenes are automatically merged into one final product review video. The finished video is then uploaded and published to platforms like TikTok, Facebook Reels, and YouTube Shorts. Publishing results are logged to Google Sheets for monitoring. Setup steps Connect an AI model (Gemini or OpenRouter) for script and prompt generation. Add image and video generation API keys (Veo 3 or compatible providers). Configure the video merge step (custom request or ffmpeg-based API). Add Blotato API credentials for automated publishing. Connect Google Sheets to log publishing results. 
Once set up, the workflow runs automatically without manual intervention. Documentation For a full walkthrough and advanced customization ideas, watch the detailed tutorial on YouTube.
by Davide
This workflow implements an AI-powered design and prototyping assistant that integrates Telegram, Google Gemini, and Google Stitch (MCP) to enable conversational UI generation and project management. Supported actions include: Creating new design projects Retrieving existing projects Listing projects and screens Fetching individual screens Generating new UI screens directly from text descriptions Key Advantages 1. Conversational Design Workflow Design and UI prototyping can be driven entirely through natural language. Users can create screens, explore layouts, or manage projects simply by chatting, without opening design tools. 2. Tight Integration with Google Stitch By leveraging the Stitch MCP API, the workflow provides direct access to structured design capabilities such as screen generation, project management, and UI exploration, avoiding manual API calls or custom scripting. 3. Intelligent Tool Selection The AI agent does not blindly call APIs. It first analyzes the user request, determines the required level of fidelity and intent, and then selects the most appropriate Stitch function or combination of functions. 4. Multi-Channel Support The workflow supports both generic chat triggers and Telegram, making it flexible for internal tools, demos, or production chatbots. 5. Security and Access Control Telegram access is restricted to a specific user ID, and execution only happens when a dedicated command is used. This prevents accidental or unauthorized usage. 6. Context Awareness with Memory The inclusion of conversational memory allows the agent to maintain context across interactions, enabling iterative design discussions rather than isolated commands. 7. Production-Ready Output Formatting Responses are automatically converted into Telegram-compatible HTML, ensuring clean, readable, and well-formatted messages without manual post-processing. 8.
Extensible and Modular Architecture The workflow is highly modular: additional Stitch tools, AI models, or communication channels can be added with minimal changes, making it future-proof and easy to extend. How It Works This workflow functions as a Telegram-powered AI agent that leverages Google Stitch's MCP (Model Context Protocol) tools for design, UI generation, and product prototyping. It combines conversational AI, tool-based actions, and web search capabilities. Trigger & Authorization: The workflow is activated by an incoming message from a configured Telegram bot. A code node first checks the sender's Telegram User ID against a hardcoded value (xxx) to restrict access. Only authorized users can proceed. Command Parsing: An IF node filters messages, allowing the agent to proceed only if the message text starts with the command /stitch. This ensures the agent is only invoked intentionally. Query Preparation: The /stitch prefix is stripped from the message text, and the cleaned query, along with the user's ID (used as a session identifier), is passed to the main agent. AI Agent Execution: The core "Google Stitch Agent" node is an LLM-powered agent (using Google Gemini) equipped with: Tools: Access to several Google Stitch MCP functions (create_project, get_project, list_projects, list_screens, get_screen, generate_screen_from_text) and a Perplexity web search tool. Memory: A conversation buffer window to maintain context within a session. System Prompt: Instructs the agent to intelligently select and use the appropriate Stitch tools based on the user's design-related request (e.g., generating screens from text, managing projects). It is directed to use web search when necessary for additional context. Response Processing & Delivery: The agent's text output (in Markdown) is passed through another LLM chain ("From MD to HTML") that converts it to Telegram-friendly HTML. Finally, the formatted response is sent back to the user via the Telegram bot.
Set Up Steps To make this workflow operational, you need to configure credentials and update specific nodes: Telegram Bot Configuration: In the "Code" node (id: 08bfae9e...), replace xxx in the condition $input.first().json.message.from.id !== xxx with your actual Telegram User ID. This ensures only you can trigger the agent. Ensure the "Telegram Trigger" and "Send a text message" nodes have valid Telegram Bot credentials configured. Google Stitch API Setup: Obtain an API key from Google Stitch. Configure the HTTP Header Auth credential named "Google Stitch" (referenced by all MCP tool nodes: Create Project, Get Project, etc.). Set the Header Auth with: Name: X-Goog-Api-Key Value: Your actual Google Stitch API key (YOUR-API-KEY). AI Model & Tool Credentials: Verify the credentials for the Google Gemini Chat Model nodes are correctly set up for API access. Verify the credentials for the Perplexity API node ("Search on web") are configured if web search functionality is required. Activation: Once all credentials are configured, set the workflow to Active. The Telegram webhook will be registered, and the workflow will listen for authorized messages containing the /stitch command. Subscribe to my new YouTube channel, where I'll share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support or add me on LinkedIn.
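The authorization check described above can be sketched as a plain function. In the actual n8n Code node the item would be read via $input.first() rather than passed as an argument, and ALLOWED_USER_ID is a placeholder standing in for the template's xxx value:

```javascript
// Hypothetical sketch of the Telegram authorization Code node.
// The payload shape follows the Telegram Trigger output:
// { message: { from: { id }, text } }
const ALLOWED_USER_ID = 123456789; // placeholder - replace with your Telegram User ID

function authorize(item) {
  // Reject any sender whose Telegram User ID is not the configured one
  if (item.json.message.from.id !== ALLOWED_USER_ID) {
    throw new Error('Unauthorized Telegram user');
  }
  return item;
}
```

Throwing stops the workflow run for unauthorized senders, which matches the "only authorized users can proceed" behavior described above.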
by Jonas Frewert
Blog Post Content Creation (Multi-Topic with Brand Research, Google Drive, and WordPress) Description This workflow automates the full lifecycle of generating and publishing SEO-optimized blog posts from a list of topics. It reads topics (and optionally brands) from Google Sheets, performs brand research, generates a structured HTML article via AI, converts it into an HTML file for Google Drive, publishes a draft post on WordPress, and repeats this for every row in the sheet. When the final topic has been processed, a single Slack message is sent to confirm completion and share links. How It Works 1. Input from Google Sheets A Google Sheets node reads rows containing at least: Brand (optional, can be defaulted) Blog Title or Topic A Split In Batches node iterates through the rows one by one so each topic is processed independently. 2. Configuration The Configuration node maps each row's values into: Brand Blog Title These values are used consistently across brand research, content creation, file naming, and WordPress publishing. 3. Brand Research A Language Model Chain node calls an OpenRouter model to gather background information about the brand and its services. The brand context is used as input for better, on-brand content generation. 4. Content Creation A second Language Model Chain node uses the brand research and the blog title or topic to generate a full-length, SEO-friendly blog article. Output is clean HTML with: Exactly one `<h1>` at the top Structured `<h2>` and `<h3>` headings Semantic tags only No inline CSS No <html> or <body> wrappers No external resources 5. HTML Processing A Code node in JavaScript: Strips any markdown-style code fences around the HTML Normalizes paragraph breaks Builds a safe file name from the blog title Encodes the HTML as a binary file payload 6. Upload to Google Drive A Google Drive node uploads the generated HTML file to a specified folder. Each topic creates its own HTML file, named after the blog title. 7.
Publish to WordPress An HTTP Request node calls the WordPress REST API to create a post. The post content is the generated HTML, and the title comes from the Configuration node. By default, the post is created with status draft (can be changed to publish if desired). 8. Loop Control and Slack Notification After each topic is processed (Drive upload and WordPress draft), the workflow loops back to Split In Batches to process the next row. When there are no rows left, an IF node detects that the loop has finished. Only then is a single Slack message sent to: Confirm that all posts have been processed Share links to the last generated Google Drive file and WordPress post Integrations Used OpenRouter - AI models for brand research and SEO content generation Google Sheets - Source of topics and (optionally) brands Google Drive - Storage for generated HTML files WordPress REST API - Blog post creation (drafts or published posts) Slack - Final summary notification when the entire batch is complete Ideal Use Case Content teams and agencies managing a queue of blog topics in a spreadsheet Marketers who want a hands-off pipeline from topic list to WordPress drafts Teams who need generated HTML files stored in Drive for backup, review, or reuse Any workflow where automation should handle the heavy lifting and humans only review the final drafts Setup Instructions Google Sheets Create a sheet with columns like Brand and Blog Title or Topic. In the Get Blog Topics node, set the sheet ID and range to match your sheet. Add your Google Sheets credentials in n8n. OpenRouter (LLM) Add your OpenRouter API key as credentials. In the OpenRouter Chat Model nodes, select your preferred models and options if you want to customize behavior. Google Drive Add Google Drive credentials. Update the folder ID in the Upload file node to your target directory. WordPress In the Publish to WordPress node, replace the example URL with your site's REST API endpoint.
Configure authentication (for example, Application Passwords or Basic Auth). Adjust the status field (draft or publish) to match your desired workflow. Slack Add Slack OAuth credentials. Set the channel ID in the Slack node where the final summary message should be posted. Run the Workflow Click Execute Workflow. The workflow will loop through every row in the sheet, generating content, saving HTML files to Drive, and creating WordPress posts. When all rows have been processed, a single Slack notification confirms completion.
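The HTML-processing Code node (step 5 above) might look roughly like this. The field names, fence-stripping regexes, and slug rules are assumptions based on the description, not the template's exact code:

```javascript
// Sketch of the HTML Processing step: strip markdown code fences, normalize
// paragraph breaks, build a safe file name, and base64-encode the HTML so it
// can be attached as n8n binary data.
function prepareHtmlFile(rawHtml, blogTitle) {
  // Strip a leading ```html (or bare ```) fence and a trailing ``` fence
  let html = rawHtml.replace(/^```(?:html)?\s*/i, '').replace(/```\s*$/, '');
  // Normalize paragraph breaks: collapse runs of 3+ newlines into 2
  html = html.replace(/\n{3,}/g, '\n\n').trim();
  // Build a safe, slug-style file name from the blog title
  const fileName = blogTitle
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '') + '.html';
  // Encode the HTML as base64 for the binary payload
  const base64 = Buffer.from(html, 'utf8').toString('base64');
  return { fileName, base64, html };
}
```

In n8n the returned base64 string would be attached under the item's binary property so the Google Drive node can upload it as a file.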
by giangxai
Overview This workflow automatically creates hours-long wave music videos by combining AI-generated music from Suno with a background video, fully automated using n8n and ffmpeg-api. It connects music prompt generation, AI song creation, audio aggregation, video merging, and YouTube publishing into a single end-to-end automation. Once configured, the workflow runs continuously with no manual editing required. This workflow is built for creators producing lo-fi, wave, ambient, or long-play music content who want a reliable and scalable way to generate long-form videos automatically. What can this workflow do? Collect music themes, background video URLs, and track counts via an input form Generate multiple AI music tracks using Suno Automatically check rendering status and retrieve completed songs Concatenate multiple tracks into a single long-form audio file Merge the final audio with a background video using ffmpeg-api Upload the completed video to YouTube automatically Generate SEO-optimized titles and descriptions using an AI model This workflow reduces manual work while keeping the entire music video production process structured and repeatable. How it works You start by submitting a music theme, a background video URL, and the number of music tracks through an n8n form. The workflow initializes a working directory using ffmpeg-api to manage all audio and video assets. An AI agent converts the music theme into structured Suno prompts. Suno then generates multiple music tracks, and the workflow continuously checks their rendering status until all songs are ready. Once completed, the songs are downloaded, uploaded to ffmpeg-api storage, and concatenated into a single long-form audio track. This audio is merged with the background video to create the final hours-long wave music video. Finally, the completed video is uploaded to YouTube. An AI model generates SEO-friendly metadata, and the video is published automatically without manual intervention. 
Setup steps Connect an AI model (Gemini) for music prompt generation and YouTube metadata creation Configure Suno API access for AI music generation Set up ffmpeg-api for directory creation, file uploads, audio concatenation, and video merging Connect your YouTube account for automated uploads Review and customize the input form fields if needed After setup, the workflow runs end to end automatically. Documentation For a full walkthrough and advanced customization ideas, watch the detailed tutorial on YouTube.
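The audio-concatenation step can be illustrated with the standard ffmpeg concat-demuxer list format. The exact ffmpeg-api request body is provider-specific, so treat this helper as a sketch:

```javascript
// Build the text-file content for ffmpeg's concat demuxer: one
// "file '<path>'" line per track, in playback order. Single quotes inside
// paths are escaped the way ffmpeg's quoting expects.
function buildConcatList(trackUrls) {
  return trackUrls
    .map((u) => `file '${u.replace(/'/g, "'\\''")}'`)
    .join('\n');
}
// ffmpeg would then typically be invoked with:
//   -f concat -safe 0 -i list.txt -c copy out.mp3
```

The generated list is what gets uploaded alongside the tracks so the merge step can produce one long-form audio file.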
by Nguyen Thieu Toan
Build a customer service AI chatbot for Facebook Messenger with Google Gemini Overview A streamlined Facebook Messenger chatbot powered by AI with conversation memory. This is a simplified version designed for quick deployment, learning, and testing; it is not suitable for production environments. Base workflows: Smart message batching AI-powered Facebook Messenger chatbot (use Data Table); Smart human takeover & auto pause AI-powered Facebook Messenger chatbot. What This Workflow Does Core Features: Receives messages from Facebook Messenger via webhook Processes user messages with Google Gemini AI Maintains conversation context using the Simple Memory node Automatically responds with AI-generated replies Handles webhook verification for Facebook setup Sends images or videos to customers through Facebook Messenger Simplified Approach: Memory: Simple Memory node (10-message window) Format: Cleans text, strips markdown, truncates messages over 1900 chars Response: Single message delivery Limitations & Trade-offs: No smart batching (fragmented user messages cause spam-like replies) No human takeover detection (the bot continues even when an admin joins) Basic memory management (no persistence, not reliable in production) Basic text formatting (strips markdown, truncates bluntly, no smart splitting) When to Upgrade Upgrade to the full workflows when you need: Production deployment with reliability & persistence Analytics & tracking (query history, reports) Professional formatting (bold, italic, lists, code blocks) Handling long messages (>2000 chars) Smart batching for fragmented inputs Human handoff detection Full conversation persistence Key upgrades available: Smart message batching workflow Smart human takeover workflow Setup Requirements Facebook Setup Create a Facebook App at developers.facebook.com Add the Messenger product Configure webhook: URL: https://your-domain.com/webhook/your-path Verify token: secure string Subscribe to: messages, messaging_postbacks Generate a Page Access Token Copy the token to the "Set Context" node n8n Setup Import the workflow Edit the "Set Context" node and update page_access_token Configure "Gemini Flash" node credentials Deploy the workflow (must be publicly accessible) How It Works User Message → Facebook Webhook → Validation → Set Context (extract user_id, message, token) → Mark Seen → Show Typing → AI Agent (Gemini + 10-message memory) → Format Output (remove markdown, truncate) → Send Response via Facebook API Architecture Overview Section 1: Webhook & Initial Processing Facebook Webhook: handles GET (verification) & POST (messages) Confirm Webhook: returns challenge / acknowledges receipt Filters text messages only Blocks echo messages from the bot itself Section 2: AI Processing with Memory Set Context: extracts user_id, message, token Seen & Typing: user feedback Conversation Memory: 10-message window, per-user isolation Process Merged Message: AI Agent with Jenix persona Gemini Flash: Google's AI model for response generation Section 3: Format & Delivery Cuts replies >2000 chars, strips markdown Sends text via the Facebook Graph API Customisation Guide Bot Personality: edit the system prompt in the "Process Merged Message" node Memory: adjust contextWindowLength (default 10), change sessionKey if needed AI Model: replace Gemini Flash with OpenAI, Anthropic Claude, or other LLMs Important Notes Production Warning: testing only; memory is lost on n8n restart in queue mode No Analytics: no history storage, no reporting Format Limitations: responses ≤1800 chars, markdown stripped, no complex formatting Troubleshooting Bot not responding: check the token, webhook accessibility, and event subscriptions Memory not working: verify the session key, ensure you are not in queue mode, restart the workflow Messages truncated: adjust the system prompt for conciseness, reduce response length License & Credits Created by: Nguyễn Thiệu Toàn (Jay Nguyen) Email: me@nguyenthieutoan.com Website: nguyenthieutoan.com n8n Creator: n8n.io/creators/nguyenthieutoan Company: GenStaff
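The format step described above (strip markdown, then truncate) might be implemented like this. The 1900-character limit comes from the description; the specific markdown patterns handled are illustrative:

```javascript
// Sketch of the Format Output step: remove common markdown markers that
// Messenger does not render, then truncate to a Messenger-safe length
// (Facebook's hard cap is 2000 characters per message).
function formatForMessenger(text, maxLen = 1900) {
  let out = text
    .replace(/\*\*(.*?)\*\*/g, '$1')        // bold
    .replace(/\*(.*?)\*/g, '$1')            // italic
    .replace(/`{1,3}([^`]*)`{1,3}/g, '$1')  // inline/fenced code
    .replace(/^#{1,6}\s+/gm, '');           // headings
  if (out.length > maxLen) {
    out = out.slice(0, maxLen - 1) + '…';   // blunt truncation, as described
  }
  return out;
}
```

The full workflows replace this blunt truncation with smart message splitting, which is one of the upgrade paths listed above.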
by DIGITAL BIZ TECH
Travel Reimbursement - OCR & Expense Extraction Workflow Overview This is a lightweight n8n workflow that accepts chat input and uploaded receipts, runs OCR, stores parsed results in Supabase, and uses an AI agent to extract structured travel expense data and compute totals. Designed for zero-retention operation and fast integration. Workflow Structure Frontend: Chat UI trigger that accepts text and file uploads. Preprocessing: Binary normalization + per-file OCR request. Storage: Store OCR-parsed blocks in Supabase temp_table. Core AI: Travel reimbursement agent that extracts fields, infers missing values, and calculates totals using the Calculator tool. Output: Agent responds to the chat with a concise expense summary and breakdowns. Chat Trigger (Frontend) Trigger node: When chat message received public: true, allowFileUploads: true, sessionId used to tie uploads to the chat session. Custom CSS + initial messages configured for user experience. Binary Presence Check Node: CHECK IF BINARY FILE IS PRESENT OR NOT (IF) Checks whether the incoming payload contains files. If files are present -> route to Split Out -> NORMALIZE binary file -> OCR (ANY OCR API) -> STORE OCR OUTPUT -> Merge. If no files -> route directly to Merge -> Travel reimbursement agent. Binary Normalization Nodes: Split Out and NORMALIZE binary file (Code) Split Out extracts binary entries into a data field. NORMALIZE binary file picks the first binary key and rewrites the payload to binary.data for a consistent downstream shape. OCR Node: OCR (ANY OCR API) (HTTP Request) Sends multipart/form-data to the OCR endpoint; expects JSONL or JSON with blocks. Body includes mode=single, output_type=jsonl, include_images=false. Store OCR Output Node: STORE OCR OUTPUT (Supabase) Upserts into temp_table with session_id, parsed blocks, and file_name. Used by the agent to fetch previously uploaded receipts for the same session.
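The NORMALIZE binary file step could be sketched as follows. Property names mirror n8n's item shape ({ json, binary }), though the template's actual code may differ:

```javascript
// Sketch of the NORMALIZE binary file Code node: take whatever binary key
// the upload arrived under (file_0, attachment, etc.) and rewrite it to
// binary.data so every downstream node sees the same shape.
function normalizeBinary(item) {
  const keys = Object.keys(item.binary || {});
  if (keys.length === 0) return item; // nothing to normalize
  const firstKey = keys[0];
  return {
    json: item.json,
    binary: { data: item.binary[firstKey] }, // always expose as binary.data
  };
}
```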
Memory & Tooling Nodes: Simple Memory and Simple Memory1 (memoryBufferWindow) Keep the last 10 messages for session context. Node: Calculator1 (toolCalculator) Used by the agent to sum multiple charges and handle currency arithmetic and totals. Travel Reimbursement Agent (Core) Node: Travel reimbursement agent (LangChain agent) Model: Mistral Cloud Chat Model (mistral-medium-latest) Behavior: Parse OCR blocks and non-file chat input. Extract required fields: vendor_name, category, invoice_date, checkin_date, checkout_date, time, currency, total_amount, notes, estimated. When fields are missing, infer logically and mark estimated: true. Use the Calculator tool to sum totals across multiple receipts. Fetch stored OCR entries from Supabase when the user asks for session summaries. Always attempt extraction; never reply with "unclear" or ask for a reupload unless the user requests audit-grade precision. Final output: Clean expense table and Grand Total formatted for chat. Data Flow Summary User sends a chat message, with or without a file. IF file present -> Split Out -> Normalize -> OCR -> Store OCR output -> Merge with chat payload. Travel reimbursement agent consumes the merged item, extracts fields, uses the Calculator tool for sums, and replies with a formatted expense summary. Integrations Used

| Service | Purpose | Credential |
|---------|---------|-----------|
| Mistral Cloud | LLM for agent | Mistral account |
| Supabase | Store parsed OCR blocks and session data | Supabase account |
| OCR API | Text extraction from images/PDFs | Configurable HTTP endpoint |
| n8n Core | Flow control, parsing, editing | Native |

Agent System Prompt Summary > You are a Travel Expense Extraction and Calculation AI. Extract vendor, dates, currency, category, and total amounts from uploaded receipts, invoices, hotel bills, PDFs, and images. Infer values when necessary and mark them as estimated. When asked, fetch session entries from Supabase and compute totals using the Calculator tool.
Respond in a concise, business-professional format with a category-wise breakdown and a Grand Total. Never reply "unclear" or ask for a reupload unless explicitly asked. Required final response format example: Key Features Zero-retention-friendly design: OCR output is stored only in temp_table per session. Robust extraction with inference when OCR quality is imperfect. Session aware: the agent retrieves stored receipts for consolidated totals. Calculator integration for accurate numeric sums and currency handling. Configurable OCR endpoint so you can swap providers without changing logic. Setup Checklist Add Mistral Cloud and Supabase credentials. Configure the OCR endpoint to accept multipart uploads and return blocks. Create the temp_table schema with session_id, file, file_name. Test with single receipts, multipage PDFs, and mixed uploads. Validate agent responses and Calculator totals. Summary A practical n8n workflow for travel expense automation: accept receipts, run OCR, store parsed data per session, extract structured fields via an AI agent, compute totals, and return clean expense summaries in chat. Built for reliability and easy integration. Need Help or More Workflows? We can integrate this into your environment, tune the agent prompt, or adapt it for different OCR providers. We can help you set it up for free, from connecting credentials to deploying it live. Contact: shilpa.raju@digitalbiz.tech Website: https://www.digitalbiz.tech LinkedIn: https://www.linkedin.com/company/digital-biz-tech/ You can also DM us on LinkedIn for any help.
by Cojocaru David
This n8n template demonstrates how to automatically generate and publish blog posts using trending keywords, AI-generated content, and watermarked stock images. Use cases include maintaining an active blog with fresh SEO content, scaling content marketing without manual writing, and automating the full publishing pipeline from keyword research to WordPress posting. Good to know At time of writing, each AI content generation step will incur costs depending on your OpenAI pricing plan. Image search is powered by Pexels, which provides free-to-use stock images. The workflow also applies a watermark for branding. Google Trends data may vary by region, and results depend on availability in your selected location. How it works The workflow begins with a scheduled trigger that fetches trending keywords from Google Trends. The XML feed is converted to JSON and filtered for relevant terms, which are logged into a Google Sheet for tracking. One random keyword is selected, and OpenAI is used to generate blog content around it. A structured output parser ensures the text is clean and well-formatted. The system then searches Pexels for a matching image, uploads it, adds metadata for SEO, and applies a watermark. Finally, the complete article (text and image) is published directly to WordPress. How to use The schedule trigger is provided as an example, but you can replace it with other triggers such as webhooks or manual inputs. You can also customize the AI prompt to match your niche, tone, or industry focus. For higher volumes, consider adjusting the keyword filtering and batching logic. Requirements OpenAI account for content generation Pexels API key for stock image search Google account with Sheets for keyword tracking WordPress site with API access for publishing Customising this workflow This automation can be adapted for different use cases. Try adjusting the prompts for technical blogs, fashion, finance, or product reviews. 
You can also replace the image source with other providers or integrate your own media library. The watermark feature ensures branding, but it can be modified or removed depending on your needs.
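The keyword filtering and random-selection step described above could be sketched as follows, assuming the Trends XML has already been converted to an array of { title } objects. The minimum-length check and block list are illustrative defaults, not values from the template:

```javascript
// Filter trending keywords, then pick one at random for content generation.
function pickKeyword(trendItems, banned = ['nsfw']) {
  const filtered = trendItems
    .map((i) => i.title.trim())
    .filter((t) =>
      t.length >= 3 && // drop very short/noisy terms
      !banned.some((b) => t.toLowerCase().includes(b)) // drop blocked terms
    );
  if (filtered.length === 0) return null; // nothing usable this run
  return filtered[Math.floor(Math.random() * filtered.length)];
}
```

Adjusting the filter predicate (or batching instead of picking one keyword) is the natural place to tune the workflow for higher volumes, as the description suggests.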
by Servify
Takes a product image from Google Sheets, adds a frozen effect with Gemini, generates an ASMR video with Veo3, writes captions with GPT-4o, and posts to 4 platforms automatically. How it works Schedule trigger picks the first unprocessed row from the Google Sheet Downloads the product image and sends it to Gemini for a frozen/ice effect Uploads the frozen image to ImgBB (Veo3 needs a public URL) Veo3 generates a 10-12s ASMR video with ice-cracking sounds GPT-4o writes platform-specific titles and captions Uploads simultaneously to YouTube, TikTok, Instagram, Pinterest Updates the sheet status and sends a Telegram notification Setup Replace these placeholders in the workflow: YOUR_GOOGLE_AI_API_KEY (Gemini) YOUR_KIE_AI_API_KEY (Veo3) YOUR_IMGBB_API_KEY (free) YOUR_UPLOAD_POST_API_KEY YOUR_GOOGLE_SHEET_ID YOUR_PINTEREST_BOARD_ID YOUR_PINTEREST_USERNAME YOUR_TIKTOK_USERNAME YOUR_INSTAGRAM_USERNAME YOUR_TELEGRAM_CHAT_ID Google Sheet format

| topic | image_url | status |
|-------|-----------|--------|
| Dior Sauvage - Dior | https://example.com/img.jpg | |

Leave status empty. The workflow sets it to processing, then uploaded. Requirements Gemini API key - Google AI Studio Kie.ai account - kie.ai ImgBB API key - api.imgbb.com OpenAI API key upload-post.com account with connected TikTok/IG/Pinterest YouTube channel with OAuth
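The "first unprocessed row" selection can be sketched as a one-liner, assuming rows arrive from the Google Sheets node as { topic, image_url, status } objects matching the sheet columns above:

```javascript
// Return the first row whose status column is empty (i.e. not yet processed),
// or null when every row has already been handled.
function firstUnprocessed(rows) {
  return rows.find((r) => !r.status || r.status.trim() === '') || null;
}
```

Because the workflow writes "processing" into the row before generating, a crashed run leaves a visible marker rather than silently reprocessing the same row.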
by Pinecone
Try it out This n8n workflow template lets you chat with your Google Drive documents (.docx, .json, .md, .txt, .pdf) using OpenAI and Pinecone Assistant. It retrieves relevant context from your files in real time so you can get accurate, context-aware answers about your proprietary data, without the need to train your own LLM. What is Pinecone Assistant? Pinecone Assistant allows you to build production-grade chat and agent-based applications quickly. It abstracts the complexities of implementing retrieval-augmented generation (RAG) systems by managing the chunking, embedding, storage, query planning, vector search, model orchestration, and reranking for you. Prerequisites A Pinecone account and API key A GCP project with the Google Drive API enabled and configured Note: When setting up the OAuth consent screen, skip steps 8-10 if running on localhost An OpenAI account and API key Setup Create a Pinecone Assistant in the Pinecone Console here Name your Assistant n8n-assistant and create it in the United States region If you use a different name or region, update the related nodes to reflect these changes No need to configure a Chat model or Assistant instructions Set up your Google Drive OAuth2 API credential in n8n In the File added node -> Credential to connect with, select Create new credential Set the Client ID and Client Secret from the values generated in the prerequisites Set the OAuth Redirect URL from the n8n credential in the Google Cloud Console (instructions) Name this credential Google Drive account so that other nodes reference it Set up the Pinecone API key credential in n8n In the Upload file to assistant node -> PineconeApi section, select Create new credential Paste your Pinecone API key into the API Key field Set up the Pinecone MCP Bearer auth credential in n8n In the Pinecone Assistant node -> Credential for Bearer Auth section, select Create new credential Set the Bearer Token field to the Pinecone API key used in the previous step Set up the OpenAI credential in n8n In
the OpenAI Chat Model node -> Credential to connect with, select Create new credential Set the API Key field to your OpenAI API key Add your files to a Drive folder named n8n-pinecone-demo in the root of your My Drive If you use a different folder name, you'll need to update the Google Drive triggers to reflect that change Activate the workflow or test it with a manual execution to ingest the documents Chat with your docs! Ideas for customizing this workflow Customize the System Message on the AI Agent node to your use case to indicate what kind of knowledge is stored in Pinecone Assistant Change the top_k value of results returned from Assistant by adding "and should set a top_k of 3" to the System Message to help manage token consumption Configure the Context Window Length in the Conversation Memory node Swap out the Conversation Memory node for one that is more persistent Make the chat node publicly available or create your own chat interface that calls the chat webhook URL. Need help? You can find help by asking in the Pinecone Discord community, asking on the Pinecone Forum, or filing an issue on this repo.
by Don Jayamaha Jr
A fully autonomous HTX Spot Market AI Agent (Huobi AI Agent) built using GPT-4o and Telegram. This workflow is the primary interface, orchestrating all internal reasoning, trading logic, and output formatting.

Core Features
- LLM-Powered Intelligence: built on GPT-4o with advanced reasoning
- Multi-Timeframe Support: 15m, 1h, 4h, and 1d indicator logic
- Self-Contained Multi-Agent Workflow: no external subflows required
- Real-Time HTX Market Data: live spot price, volume, 24h stats, and order book
- Telegram Bot Integration: interact via chat or on a schedule
- Autonomous Runs: supports webhook, schedule, or Telegram triggers

Input Examples

| User Input | Agent Action |
| --------------- | --------------------------------------------- |
| btc | Returns 15m + 1h analysis for BTC |
| eth 4h | Returns 4-hour swing data for ETH |
| bnbusdt today | Full-day snapshot with technicals + 24h stats |

Telegram Output Sample

BTC/USDT Market Summary
Price: $62,400
24h Stats: High $63,020 | Low $60,780 | Volume: 89,000 BTC
1h Indicators:
- RSI: 68.1 (overbought)
- MACD: bearish crossover
- BB: tight squeeze forming
- ADX: 26.5 (strengthening trend)
Support: $60,200
Resistance: $63,800

Setup Instructions
1. Create your Telegram bot using @BotFather
2. Add the bot token to your n8n Telegram credentials
3. Add your GPT-4o or OpenAI-compatible key under HTTP credentials in n8n
4. (Optional) Add your HTX API credentials if expanding to authenticated endpoints
5. Deploy this main workflow via webhook (HTTP Request trigger), Telegram messages, or cron/scheduled automation

Live Demo

Internal Architecture

| Component | Role |
| ------------------ | -------------------------------------------------------- |
| Telegram Trigger | Entry point for external or manual signals |
| GPT-4o | Symbol + timeframe extraction and strategy generation |
| Data Collector | Internal tools fetch price, indicators, order book, etc. |
| Reasoning Layer | Merges everything into a trading signal summary |
| Telegram Output | Sends formatted HTML report via Telegram |

Use Case Examples

| Scenario | Outcome |
| -------------------------------------- | ------------------------------------------------------- |
| Auto-run every 4 hours | Sends new HTX signal summary to Telegram |
| Human requests "eth 1h" | Bot replies with real-time 1h chart-based summary |
| System-wide trigger from another agent | Invokes webhook and returns response to parent workflow |

Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
For support: Don Jayamaha on LinkedIn
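The symbol + timeframe extraction step can be sketched as a small parser whose defaults mirror the input table above: a bare symbol gets the 15m + 1h view, and "today" maps to the 1d snapshot. The normalization of bare symbols to a USDT pair is an illustrative assumption, not the workflow's exact prompt logic:

```javascript
// Sketch of the symbol + timeframe extraction GPT-4o performs.
// Defaults follow the input-examples table; appending "usdt" to bare
// symbols is an assumption for illustration.
const TIMEFRAMES = new Set(["15m", "1h", "4h", "1d"]);

function parseCommand(text) {
  const parts = text.trim().toLowerCase().split(/\s+/);
  let symbol = parts[0];
  if (!symbol.endsWith("usdt")) symbol += "usdt"; // e.g. "btc" -> "btcusdt"

  const tf = parts[1];
  if (!tf) return { symbol, timeframes: ["15m", "1h"] }; // bare symbol default
  if (tf === "today") return { symbol, timeframes: ["1d"] }; // full-day snapshot
  if (TIMEFRAMES.has(tf)) return { symbol, timeframes: [tf] };
  return { symbol, timeframes: ["15m", "1h"] }; // unknown input: fall back
}
```

For example, `parseCommand("eth 4h")` yields `{ symbol: "ethusdt", timeframes: ["4h"] }`, matching the second row of the input table.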
by WeblineIndia
Facebook Page Comment Moderation Scoreboard: Team Report

This workflow automatically monitors Facebook Page comments, analyzes them with AI for intent, toxicity, and spam, stores moderation results in a database, and sends a clear summary report to Slack and Telegram. It runs every few hours to fetch Facebook Page comments and analyze them using OpenAI. Each comment is classified as positive, neutral, or negative; checked for toxicity, spam, and abusive language; and then stored in Supabase. A moderation summary is sent to Slack and Telegram.

You receive:
- Automated Facebook comment moderation
- AI-based intent, toxicity, and spam detection
- Database logging of all moderated comments
- Clean Slack & Telegram summary reports

Ideal for teams that want visibility into comment quality without manually reviewing every message.

Quick Start: Implementation Steps
1. Import the workflow JSON into n8n.
2. Add your Facebook Page access token to the HTTP Request node.
3. Connect your OpenAI API key for comment analysis.
4. Configure your Supabase table for storing moderation data.
5. Connect Slack and Telegram credentials and choose target channels.
6. Activate the workflow; moderation runs automatically.

What It Does
This workflow automates Facebook comment moderation by:
- Running on a scheduled interval (every 6 hours).
- Fetching recent comments from a Facebook Page.
- Preparing each comment for AI processing.
- Sending comments to OpenAI for moderation analysis.
- Extracting structured moderation data: comment intent, toxicity score, spam detection, abusive language detection.
- Flagging risky comments based on defined rules.
- Storing moderation results in Supabase.
- Generating a summary report.
- Sending the report to Slack and Telegram.

This ensures consistent, repeatable moderation with no manual effort.
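The "flag risky comments based on defined rules" step can be sketched as a small predicate over the AI's structured output. The field names (`toxicity`, `spam`, `abusive`) and the 0.7 threshold are illustrative assumptions; tune them to match your own prompt's output schema:

```javascript
// Sketch of the flagging rule applied to each analyzed comment.
// Field names and the toxicity threshold are assumptions -- adjust
// them to the JSON shape your OpenAI prompt actually returns.
const TOXICITY_THRESHOLD = 0.7;

function flagComment(analysis) {
  return (
    analysis.toxicity >= TOXICITY_THRESHOLD ||
    analysis.spam === true ||
    analysis.abusive === true
  );
}
```

For example, a comment scored `{ toxicity: 0.9, spam: false, abusive: false }` would be flagged, while a mildly negative but non-toxic comment would pass through unflagged.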
Who's It For
This workflow is ideal for:
- Social media teams
- Community managers
- Marketing teams
- Customer support teams
- Moderation and trust & safety teams
- Businesses managing high-volume Facebook Pages
- Anyone wanting AI-assisted comment moderation

Requirements to Use This Workflow
To run this workflow, you need:
- n8n instance (cloud or self-hosted)
- Facebook Page access token
- OpenAI API key
- Supabase project and table
- Slack workspace with API access
- Telegram bot and chat ID
- Basic understanding of APIs and JSON (helpful but not required)

How It Works
1. Scheduled Trigger: the workflow starts automatically every 6 hours.
2. Fetch Comments: Facebook Page comments are retrieved.
3. Prepare Data: comments are formatted for processing.
4. AI Moderation: OpenAI analyzes each comment.
5. Normalize Results: AI output is cleaned and standardized.
6. Store Data: moderation results are saved in Supabase.
7. Aggregate Stats: summary statistics are calculated.
8. Send Alerts: reports are sent to Slack and Telegram.

Setup Steps
1. Import the workflow JSON into n8n.
2. Open the Fetch Facebook Page Comments node and add your Page ID and access token.
3. Connect your OpenAI account in the AI moderation node.
4. Create a Supabase table and map fields correctly.
5. Connect Slack and select a reporting channel.
6. Connect Telegram and set the chat ID.
7. Activate the workflow.
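The "Aggregate Stats" step rolls the per-comment results into the text of the Slack/Telegram report. A minimal sketch, assuming each stored record carries a boolean `flagged` field and an `intent` label of positive/neutral/negative (both assumptions about your Supabase schema):

```javascript
// Sketch of the summary-report builder. The record shape (flagged,
// intent) is an assumed schema -- map it to your own Supabase columns.
function buildSummary(results) {
  const total = results.length;
  const flagged = results.filter((r) => r.flagged).length;
  const byIntent = { positive: 0, neutral: 0, negative: 0 };
  for (const r of results) {
    if (r.intent in byIntent) byIntent[r.intent]++;
  }
  return (
    `Moderation report: ${total} comments analyzed, ${flagged} flagged.\n` +
    `Positive: ${byIntent.positive} | Neutral: ${byIntent.neutral} | Negative: ${byIntent.negative}`
  );
}
```

The same string can be sent to both the Slack and Telegram nodes, so the two reports never drift apart.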
How To Customize Nodes

Customize Flagging Rules: update the normalization logic to change toxicity thresholds, flag only spam or abusive comments, or add custom moderation rules.

Customize Storage: extend the Supabase fields to include language, AI confidence score, reviewer notes, or resolution status.

Customize Notifications: Slack and Telegram messages can include emojis, mentions (@channel), links to Facebook comments, or severity labels.

Add-Ons (Optional Enhancements)
You can extend this workflow to:
- Auto-hide or delete toxic comments
- Reply automatically to positive comments
- Detect language and region
- Generate daily or weekly moderation reports
- Build dashboards using Supabase or BI tools
- Add escalation alerts for high-risk comments
- Track trends over time

Use Case Examples
1. Community Moderation: automatically identify harmful or spam comments.
2. Brand Reputation Monitoring: spot negative sentiment early and respond faster.
3. Support Oversight: detect complaints or frustration in comments.
4. Marketing Insights: measure positive vs. negative engagement.
5. Compliance & Auditing: keep historical moderation logs in a database.

Troubleshooting Guide

| Issue | Possible Cause | Solution |
|-----|---------------|----------|
| No comments fetched | Invalid Facebook token | Refresh token & permissions |
| AI output invalid | Prompt formatting issue | Use a strict JSON prompt |
| Data not saved | Supabase mapping mismatch | Verify table fields |
| Slack message missing | Channel or credential error | Recheck Slack config |
| Telegram alert fails | Wrong chat ID | Confirm bot permissions |
| Workflow not running | Trigger disabled | Enable the Cron node |

Need Help?
If you need help customizing, scaling, or extending this workflow, such as advanced moderation logic, dashboards, auto-actions, or production hardening, our n8n workflow development team at WeblineIndia can assist with expert automation solutions.
by Cheng Siong Chin
How It Works
This workflow automates end-to-end marketing campaign management for digital marketing teams and agencies executing multi-channel strategies. It solves the complex challenge of coordinating personalized content across email, social media, and advertising platforms while maintaining brand consistency and optimizing engagement. The system processes scheduled campaign triggers through AI-powered content generation and personalization engines, then intelligently distributes tailored messages across six parallel channels: email campaigns, social media posts, paid advertising, influencer outreach, content marketing, and performance analytics. Each channel receives audience-specific messaging optimized for platform requirements, engagement patterns, and conversion objectives. This eliminates manual content adaptation, ensures consistent campaign timing, and delivers data-driven personalization at scale.

Setup Steps
1. Configure the campaign schedule trigger or webhook integration with your marketing automation platform
2. Add AI model API credentials for content generation, personalization, and A/B testing optimization
3. Connect your email service provider with segmented audience lists and template configurations
4. Set up social media management platform APIs for Facebook, Instagram, and LinkedIn
5. Integrate advertising platforms (Google Ads, Meta Ads) with campaign tracking parameters

Prerequisites
Marketing automation platform access, AI service API keys, email service provider account

Use Cases
Product launch campaigns coordinating announcements across channels

Customization
Adjust AI prompts for brand voice consistency; modify channel priorities based on audience preferences

Benefits
Reduces campaign setup time by 80% and ensures consistent messaging across all channels
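The per-channel fan-out can be sketched as one source message adapted to each platform's constraints before distribution. The length limits below are illustrative assumptions, not official platform values; check each platform's current documentation before relying on them:

```javascript
// Sketch of the multi-channel fan-out step: adapt one campaign message
// to per-channel length constraints. Limits are assumed for
// illustration only.
const CHANNEL_LIMITS = { email: 10000, facebook: 5000, instagram: 2200, linkedin: 3000 };

function adaptMessage(text, channel) {
  const limit = CHANNEL_LIMITS[channel] ?? 1000; // conservative default
  return text.length <= limit ? text : text.slice(0, limit - 1) + "\u2026";
}

function fanOut(text, channels) {
  return channels.map((c) => ({ channel: c, body: adaptMessage(text, c) }));
}
```

In the real workflow the AI model rewrites the copy per channel rather than merely truncating it; this sketch only shows where that per-channel branching sits in the flow.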