by DIGITAL BIZ TECH
# Travel Reimbursement - OCR & Expense Extraction Workflow

## Overview
This is a lightweight n8n workflow that accepts chat input and uploaded receipts, runs OCR, stores parsed results in Supabase, and uses an AI agent to extract structured travel expense data and compute totals. Designed for zero-retention operation and fast integration.

## Workflow Structure
- **Frontend:** Chat UI trigger that accepts text and file uploads.
- **Preprocessing:** Binary normalization + per-file OCR request.
- **Storage:** Store OCR-parsed blocks in the Supabase `temp_table`.
- **Core AI:** Travel reimbursement agent that extracts fields, infers missing values, and calculates totals using the Calculator tool.
- **Output:** Agent responds to the chat with a concise expense summary and breakdowns.

## Chat Trigger (Frontend)
- **Trigger node:** When chat message received
- `public: true`, `allowFileUploads: true`; `sessionId` is used to tie uploads to the chat session.
- Custom CSS + initial messages configured for user experience.

## Binary Presence Check
- **Node:** CHECK IF BINARY FILE IS PRESENT OR NOT (IF)
- Checks whether the incoming payload contains files.
- If files are present -> route to Split Out -> NORMALIZE binary file -> OCR (ANY OCR API) -> STORE OCR OUTPUT -> Merge.
- If no files -> route directly to Merge -> Travel reimbursement agent.

## Binary Normalization
- **Nodes:** Split Out and NORMALIZE binary file (Code)
- Split Out extracts binary entries into a `data` field.
- NORMALIZE binary file picks the first binary key and rewrites the payload to `binary.data` for a consistent downstream shape.

## OCR
- **Node:** OCR (ANY OCR API) (HTTP Request)
- Sends multipart/form-data to the OCR endpoint; expects JSONL or JSON with blocks.
- Body includes `mode=single`, `output_type=jsonl`, `include_images=false`.

## Store OCR Output
- **Node:** STORE OCR OUTPUT (Supabase)
- Upserts into `temp_table` with `session_id`, parsed blocks, and `file_name`.
- Used by the agent to fetch previously uploaded receipts for the same session.
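The normalization step can be sketched as a Code-node style function. This is a hypothetical reconstruction (the template description does not include the node's actual code), and the upload field name `file_0` is illustrative:

```javascript
// Hypothetical sketch of the "NORMALIZE binary file" Code node: pick the
// first binary key and rewrite it to binary.data so every downstream node
// (OCR, Supabase) sees the same payload shape regardless of upload field name.
function normalizeBinary(item) {
  const keys = Object.keys(item.binary || {});
  if (keys.length === 0) return item; // no file attached, pass through
  return {
    json: item.json,
    binary: { data: item.binary[keys[0]] },
  };
}

// Example with a mock n8n item
const item = {
  json: { sessionId: "abc123" },
  binary: { file_0: { fileName: "receipt.pdf", mimeType: "application/pdf" } },
};
const normalized = normalizeBinary(item);
console.log(Object.keys(normalized.binary)); // [ 'data' ]
```

Normalizing to a single `binary.data` key is what lets the OCR HTTP Request node reference one fixed binary property no matter how the chat UI named the upload.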
## Memory & Tooling
- **Nodes:** Simple Memory and Simple Memory1 (memoryBufferWindow) keep the last 10 messages for session context.
- **Node:** Calculator1 (toolCalculator), used by the agent to sum multiple charges and handle currency arithmetic and totals.

## Travel Reimbursement Agent (Core)
- **Node:** Travel reimbursement agent (LangChain agent)
- **Model:** Mistral Cloud Chat Model (mistral-medium-latest)
- **Behavior:**
  - Parse OCR blocks and non-file chat input.
  - Extract required fields: `vendor_name`, `category`, `invoice_date`, `checkin_date`, `checkout_date`, `time`, `currency`, `total_amount`, `notes`, `estimated`.
  - When fields are missing, infer logically and mark `estimated: true`.
  - Use the Calculator tool to sum totals across multiple receipts.
  - Fetch stored OCR entries from Supabase when the user asks for session summaries.
  - Always attempt extraction; never reply with "unclear" or ask for a reupload unless the user requests audit-grade precision.
- **Final output:** Clean expense table and Grand Total formatted for chat.

## Data Flow Summary
1. User sends a chat message with or without a file.
2. If a file is present -> Split Out -> Normalize -> OCR -> Store OCR output -> Merge with the chat payload.
3. Travel reimbursement agent consumes the merged item, extracts fields, uses the Calculator tool for sums, and replies with a formatted expense summary.

## Integrations Used
| Service | Purpose | Credential |
|---------|---------|-----------|
| Mistral Cloud | LLM for agent | Mistral account |
| Supabase | Store parsed OCR blocks and session data | Supabase account |
| OCR API | Text extraction from images/PDFs | Configurable HTTP endpoint |
| n8n Core | Flow control, parsing, editing | Native |

## Agent System Prompt Summary
> You are a Travel Expense Extraction and Calculation AI. Extract vendor, dates, currency, category, and total amounts from uploaded receipts, invoices, hotel bills, PDFs, and images. Infer values when necessary and mark them as estimated. When asked, fetch session entries from Supabase and compute totals using the Calculator tool.
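The "infer and mark estimated" rule can be illustrated with a small post-processing sketch. Note this is hypothetical: in the actual workflow the agent enforces this behavior via its prompt, not via code, and the `"unknown"` placeholder is an assumption:

```javascript
// Hypothetical illustration of the "mark estimated: true" rule: if any
// required field is missing from the extraction, fill a placeholder and
// flag the record so reviewers know a value was inferred.
const REQUIRED = ["vendor_name", "category", "invoice_date", "currency", "total_amount"];

function markEstimated(record) {
  const out = { ...record, estimated: false };
  for (const field of REQUIRED) {
    if (out[field] === undefined || out[field] === null || out[field] === "") {
      out[field] = "unknown"; // placeholder standing in for the inferred value
      out.estimated = true;
    }
  }
  return out;
}

const parsed = { vendor_name: "Grand Hotel", category: "lodging", total_amount: 240.5 };
console.log(markEstimated(parsed).estimated); // true (invoice_date and currency missing)
```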
> Respond in a concise, business-professional format with a category-wise breakdown and a Grand Total. Never reply "unclear" or ask for a reupload unless explicitly asked.

Required final response format example:

## Key Features
- Zero-retention-friendly design: OCR output is stored only in `temp_table` per session.
- Robust extraction with inference when OCR quality is imperfect.
- Session-aware: the agent retrieves stored receipts for consolidated totals.
- Calculator integration for accurate numeric sums and currency handling.
- Configurable OCR endpoint so you can swap providers without changing logic.

## Setup Checklist
1. Add Mistral Cloud and Supabase credentials.
2. Configure the OCR endpoint to accept multipart uploads and return blocks.
3. Create the `temp_table` schema with `session_id`, `file`, `file_name`.
4. Test with single receipts, multipage PDFs, and mixed uploads.
5. Validate agent responses and Calculator totals.

## Summary
A practical n8n workflow for travel expense automation: accept receipts, run OCR, store parsed data per session, extract structured fields via an AI agent, compute totals, and return clean expense summaries in chat. Built for reliability and easy integration.

## Need Help or More Workflows?
We can integrate this into your environment, tune the agent prompt, or adapt it for different OCR providers. We can help you set it up for free, from connecting credentials to deploying it live.
- Contact: shilpa.raju@digitalbiz.tech
- Website: https://www.digitalbiz.tech
- LinkedIn: https://www.linkedin.com/company/digital-biz-tech/

You can also DM us on LinkedIn for any help.
by Cojocaru David
This n8n template demonstrates how to automatically generate and publish blog posts using trending keywords, AI-generated content, and watermarked stock images. Use cases include maintaining an active blog with fresh SEO content, scaling content marketing without manual writing, and automating the full publishing pipeline from keyword research to WordPress posting.

## Good to know
- At the time of writing, each AI content generation step will incur costs depending on your OpenAI pricing plan.
- Image search is powered by Pexels, which provides free-to-use stock images. The workflow also applies a watermark for branding.
- Google Trends data may vary by region, and results depend on availability in your selected location.

## How it works
The workflow begins with a scheduled trigger that fetches trending keywords from Google Trends. The XML feed is converted to JSON and filtered for relevant terms, which are logged into a Google Sheet for tracking. One random keyword is selected, and OpenAI is used to generate blog content around it. A structured output parser ensures the text is clean and well-formatted. The system then searches Pexels for a matching image, uploads it, adds metadata for SEO, and applies a watermark. Finally, the complete article (text and image) is published directly to WordPress.

## How to use
The schedule trigger is provided as an example, but you can replace it with other triggers such as webhooks or manual inputs. You can also customize the AI prompt to match your niche, tone, or industry focus. For higher volumes, consider adjusting the keyword filtering and batching logic.

## Requirements
- OpenAI account for content generation
- Pexels API key for stock image search
- Google account with Sheets for keyword tracking
- WordPress site with API access for publishing

## Customising this workflow
This automation can be adapted for different use cases. Try adjusting the prompts for technical blogs, fashion, finance, or product reviews.
You can also replace the image source with other providers or integrate your own media library. The watermark feature ensures branding, but it can be modified or removed depending on your needs.
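The filter-then-pick-a-random-keyword step can be sketched as follows. This is a hypothetical reconstruction: the trend item shape (`{ title }`) and the relevance pattern are assumptions, since the template's actual Code node is not shown:

```javascript
// Hypothetical sketch of the keyword filter + random pick: keep only terms
// that match your niche, then select one at random for content generation.
function pickKeyword(trends, relevantPattern) {
  const filtered = trends.filter((t) => relevantPattern.test(t.title));
  if (filtered.length === 0) return null; // nothing relevant today
  return filtered[Math.floor(Math.random() * filtered.length)].title;
}

// Example with mock trend entries (shape assumed from the converted XML feed)
const trends = [
  { title: "best running shoes 2024" },
  { title: "celebrity gossip" },
  { title: "trail running tips" },
];
const keyword = pickKeyword(trends, /running/i);
console.log(keyword); // one of the two running-related titles
```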
by Pinecone
## Try it out
This n8n workflow template lets you chat with your Google Drive documents (.docx, .json, .md, .txt, .pdf) using OpenAI and Pinecone Assistant. It retrieves relevant context from your files in real time so you can get accurate, context-aware answers about your proprietary data, without the need to train your own LLM.

## What is Pinecone Assistant?
Pinecone Assistant allows you to build production-grade chat and agent-based applications quickly. It abstracts the complexities of implementing retrieval-augmented generation (RAG) systems by managing the chunking, embedding, storage, query planning, vector search, model orchestration, and reranking for you.

## Prerequisites
- A Pinecone account and API key
- A GCP project with the Google Drive API enabled and configured
  - Note: When setting up the OAuth consent screen, skip steps 8-10 if running on localhost
- An OpenAI account and API key

## Setup
1. Create a Pinecone Assistant in the Pinecone Console
   - Name your Assistant n8n-assistant and create it in the United States region
   - If you use a different name or region, update the related nodes to reflect these changes
   - No need to configure a Chat model or Assistant instructions
2. Set up your Google Drive OAuth2 API credential in n8n
   - In the File added node -> Credential to connect with, select Create new credential
   - Set the Client ID and Client Secret from the values generated in the prerequisites
   - Set the OAuth Redirect URL from the n8n credential in the Google Cloud Console
   - Name this credential Google Drive account so that other nodes reference it
3. Set up the Pinecone API key credential in n8n
   - In the Upload file to assistant node -> PineconeApi section, select Create new credential
   - Paste your Pinecone API key into the API Key field
4. Set up the Pinecone MCP Bearer auth credential in n8n
   - In the Pinecone Assistant node -> Credential for Bearer Auth section, select Create new credential
   - Set the Bearer Token field to the Pinecone API key used in the previous step
5. Set up the OpenAI credential in n8n
   - In the OpenAI Chat Model node -> Credential to connect with, select Create new credential
   - Set the API Key field to your OpenAI API key
6. Add your files to a Drive folder named n8n-pinecone-demo in the root of your My Drive
   - If you use a different folder name, you'll need to update the Google Drive triggers to reflect that change
7. Activate the workflow or test it with a manual execution to ingest the documents
8. Chat with your docs!

## Ideas for customizing this workflow
- Customize the System Message on the AI Agent node to your use case to indicate what kind of knowledge is stored in Pinecone Assistant
- Change the top_k value of results returned from Assistant by adding "and should set a top_k of 3" to the System Message to help manage token consumption
- Configure the Context Window Length in the Conversation Memory node
- Swap out the Conversation Memory node for one that is more persistent
- Make the chat node publicly available or create your own chat interface that calls the chat webhook URL

## Need help?
You can find help by asking in the Pinecone Discord community, asking on the Pinecone Forum, or filing an issue on this repo.
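A custom chat interface calling the workflow's chat webhook might look like the sketch below. The URL is a placeholder and the `{ sessionId, chatInput }` payload shape is an assumption based on n8n chat trigger defaults; check your instance's webhook documentation before relying on it:

```javascript
// Hypothetical client for the n8n chat webhook (Node 18+, global fetch).
// The endpoint URL and payload field names are assumptions; adjust them to
// match your actual workflow's chat trigger.
async function askAssistant(webhookUrl, sessionId, question) {
  const res = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sessionId, chatInput: question }),
  });
  if (!res.ok) throw new Error(`Webhook returned ${res.status}`);
  return res.json();
}

// Usage (replace with your real webhook URL):
// askAssistant("https://your-n8n.example.com/webhook/chat", "session-1",
//   "What does our Q3 report say about churn?").then(console.log);
```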
by Don Jayamaha Jr
A fully autonomous HTX Spot Market AI Agent (Huobi AI Agent) built using GPT-4o and Telegram. This workflow is the primary interface, orchestrating all internal reasoning, trading logic, and output formatting.

## Core Features
- **LLM-Powered Intelligence:** Built on GPT-4o with advanced reasoning
- **Multi-Timeframe Support:** 15m, 1h, 4h, and 1d indicator logic
- **Self-Contained Multi-Agent Workflow:** No external subflows required
- **Real-Time HTX Market Data:** Live spot price, volume, 24h stats, and order book
- **Telegram Bot Integration:** Interact via chat or on a schedule
- **Autonomous Runs:** Support for webhook, schedule, or Telegram triggers

## Input Examples
| User Input | Agent Action |
| --------------- | --------------------------------------------- |
| btc | Returns 15m + 1h analysis for BTC |
| eth 4h | Returns 4-hour swing data for ETH |
| bnbusdt today | Full-day snapshot with technicals + 24h stats |

## Telegram Output Sample
BTC/USDT Market Summary
- Price: $62,400
- 24h Stats: High $63,020 | Low $60,780 | Volume: 89,000 BTC
- 1h Indicators:
  - RSI: 68.1 → Overbought
  - MACD: Bearish crossover
  - BB: Tight squeeze forming
  - ADX: 26.5 → Strengthening trend
- Support: $60,200
- Resistance: $63,800

## Setup Instructions
1. Create your Telegram bot using @BotFather
2. Add the bot token in n8n Telegram credentials
3. Add your GPT-4o or OpenAI-compatible key under HTTP credentials in n8n
4. (Optional) Add your HTX API credentials if expanding to authenticated endpoints
5. Deploy this main workflow using:
   - Webhook (HTTP Request Trigger)
   - Telegram messages
   - Cron / scheduled automation

## Internal Architecture
| Component | Role |
| ------------------ | -------------------------------------------------------- |
| Telegram Trigger | Entry point for external or manual signal |
| GPT-4o | Symbol + timeframe extraction + strategy generation |
| Data Collector | Internal tools fetch price, indicators, order book, etc. |
| Reasoning Layer | Merges everything into a trading signal summary |
| Telegram Output | Sends formatted HTML report via Telegram |

## Use Case Examples
| Scenario | Outcome |
| -------------------------------------- | ------------------------------------------------------- |
| Auto-run every 4 hours | Sends a new HTX signal summary to Telegram |
| Human requests "eth 1h" | Bot replies with a real-time 1h chart-based summary |
| System-wide trigger from another agent | Invokes the webhook and returns the response to the parent workflow |

## Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.

For support: Don Jayamaha - LinkedIn
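The "symbol + timeframe extraction" step in the architecture table can be sketched like this. It is a hypothetical illustration: the agent performs this parsing with GPT-4o, and the `usdt` suffixing and 15m default are assumptions based on the input examples above:

```javascript
// Hypothetical sketch of symbol + timeframe extraction: map free-form input
// like "eth 4h" or "btc" to an HTX trading pair and a supported timeframe.
const TIMEFRAMES = ["15m", "1h", "4h", "1d"];

function parseQuery(text) {
  const parts = text.trim().toLowerCase().split(/\s+/);
  // Assume USDT pairs when no quote currency is given
  const symbol = parts[0].endsWith("usdt") ? parts[0] : parts[0] + "usdt";
  // Default to the shortest timeframe when none is supplied (assumption)
  const timeframe = TIMEFRAMES.includes(parts[1]) ? parts[1] : "15m";
  return { symbol, timeframe };
}

console.log(parseQuery("eth 4h")); // { symbol: 'ethusdt', timeframe: '4h' }
console.log(parseQuery("btc"));    // { symbol: 'btcusdt', timeframe: '15m' }
```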
by Dean Pike
# Transcript → AI Analysis → Formatted Doc
This workflow automatically converts Fathom meeting transcripts into beautifully formatted Google Docs with AI-generated summaries, key points, decisions, and action items.

## Good to know
- Works fully with a Fathom free account
- Webhook responds immediately to prevent Fathom timeouts and duplicate triggers
- Validates transcript quality (3+ conversation turns) before AI processing to save costs
- Uses the Google Gemini API (generous free tier and rate limits, typically enough to avoid paying for API requests, but check the latest pricing at Google AI Pricing)
- Creates a temporary HTML file that is auto-deleted after conversion

## Who's it for
Individuals or teams using Fathom for meetings who want more control and flexibility over their AI meeting analysis and storage, independently of Fathom, plus automatic, formatted documentation without manual note-taking. Perfect for recurring syncs, client meetings, or interview debriefs.

## How it works
1. Fathom webhook triggers when a meeting ends and sends the transcript data
2. Validates the transcript has meaningful conversation (3+ turns)
3. Google Gemini AI analyzes the transcript and generates a structured summary (key points, decisions, actions, next steps)
4. Creates formatted HTML with proper styling
5. Uploads to Google Drive and converts to a native Google Doc
6. Reduces page margins for readability and deletes the temporary HTML file

## Requirements
- Fathom account with API webhook access (available on the free tier)
- Google Drive account (OAuth2)
- Google Docs account (OAuth2)
- Google Gemini API key

## How to set up
1. Add credentials: Google Drive OAuth2, Google Docs OAuth2, Google Gemini API
2. Copy the webhook URL from the Get Fathom Meeting webhook node (Test URL first; change to the Production URL when ready)
3. In Fathom: Settings → API Access → Add → add the webhook URL and select all events, including "Transcript"
4. Test with a short meeting and verify the Google Doc appears in Drive
5. Activate the workflow

## Customizing this workflow
- Change save location: Edit the "Upload File as HTML" node → update "Parent Folder"
- Modify AI output: Edit the "AI Meeting Analysis" node → customize the prompt to add/remove sections (e.g., risks, follow-ups, sentiment)
- Adjust document margins: Edit the "Reduce Page Margins" node → change the margin pixel values
- Add notifications: e.g., add a Slack/Email node after "Convert to Google Doc" to notify the team when a summary is ready

## Quick Troubleshooting
- "Transcript Present?" fails: Fathom must send transcript_merged with 3+ conversation turns (i.e., only send to Gemini for analysis if there's a genuine transcript)
- HTML appears as plain text: Check that "Convert to Google Doc" uses POST to the /copy endpoint
- 401/403 errors: Re-authorize the Google credentials
- Inadequate meeting notes: Edit the prompts in the "AI Meeting Analysis" node

## Sample File and Storage Output
- Google Doc meeting notes - sample
- Google Drive sample folder output:
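The "Transcript Present?" quality gate can be sketched as follows. The `transcript_merged` field name comes from the troubleshooting notes above, but the structure shown (an array of speaker turns) is an assumption; Fathom's actual payload may differ:

```javascript
// Hypothetical sketch of the "Transcript Present?" check: only forward the
// payload to Gemini if it contains at least three conversation turns, so
// empty or one-sided recordings never incur AI costs.
function hasMeaningfulTranscript(payload, minTurns = 3) {
  const transcript = payload.transcript_merged;
  if (!Array.isArray(transcript)) return false; // missing or malformed
  return transcript.length >= minTurns;
}

// Example with a mock webhook payload (shape assumed)
const payload = {
  transcript_merged: [
    { speaker: "Alice", text: "Hi, thanks for joining." },
    { speaker: "Bob", text: "Happy to be here." },
    { speaker: "Alice", text: "Let's review the roadmap." },
  ],
};
console.log(hasMeaningfulTranscript(payload)); // true
```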
by Miha
This n8n template drafts customer-ready email replies using Google Gemini, enriched with HubSpot context (contact, deals, companies, tickets). Each draft is routed to Slack for one-click approval before it's sent from Gmail, so you move fast without losing control. Ideal for support and sales teams that want speedy, personalized responses while keeping humans in the loop.

## How it works
- **Gmail Trigger** watches for new inbound emails.
- **Sender filter** excludes internal domains (e.g., n8n.io) to avoid auto-replying to teammates.
- **HubSpot contact lookup** finds the sender and fetches associated **deals/companies/tickets** via association + batch read.
- **CRM context is normalized** into clean, LLM-friendly fields (no IDs or sensitive noise).
- **Gemini (Google AI Studio)** generates a concise, friendly reply using:
  - Sender name, subject, and message snippet
  - Safe, relevant HubSpot context (e.g., top 1-2 deals or an open ticket)
  - Style constraints (≤ ~150 words, single CTA, optional clarifying question)
- **Slack approval** posts the draft to a channel; if **approved**, n8n **replies via Gmail** in the original thread.

## How to use
1. Gmail: Connect the same account for the trigger and reply nodes.
2. HubSpot: Connect OAuth on the search + HTTP request nodes.
3. Gemini: Add your Google AI Studio API key to the Google Gemini Chat Model node.
4. Slack: Connect and select the channel for draft approvals.
5. (Optional) Filter: Adjust the Allowed Sender filter before going live.
6. (Optional) Prompt: Edit "Draft Reply (AI Agent)" tone/length or how much CRM detail to include.
7. Activate the workflow. New emails will produce Slack-approved replies automatically.

## Requirements
- **Gmail** (trigger + send)
- **HubSpot** (OAuth2) for contact + associations
- **Slack** for the approval step
- **Google Gemini** (Google AI Studio API key)

## Notes & customization
- **Safety rails:** The prompt avoids exposing IDs/raw JSON and caps CRM details to what's useful.
- **Auto-send mode:** Skip Slack if you want fully automated replies for specific senders/labels.
- **Richer context:** Extend the batch read to pull more properties (e.g., next step, renewal date).
- **Triage:** Branch on subject/labels to route billing vs. technical requests to different prompts.
- **QA queue:** If the model asks a clarifying question, keep it to **one**; the node enforces that.
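The sender filter can be sketched as a one-line domain check. This is a hypothetical illustration of the filter's logic; in the workflow it is configured in an n8n Filter/IF node rather than code, and the domain list is an example:

```javascript
// Hypothetical sketch of the "Allowed Sender" filter: skip drafting replies
// for emails from internal domains so teammates never receive auto-replies.
const INTERNAL_DOMAINS = ["n8n.io"]; // extend with your own domains

function isExternalSender(fromAddress) {
  const domain = fromAddress.split("@").pop().toLowerCase();
  return !INTERNAL_DOMAINS.includes(domain);
}

console.log(isExternalSender("jan@n8n.io"));        // false -> skip
console.log(isExternalSender("customer@acme.com")); // true  -> draft a reply
```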
by Juan Carlos Cavero Gracia
This automation workflow is designed for e-commerce businesses, digital marketers, and entrepreneurs who need to create high-quality promotional content for their products quickly and efficiently. From a single product image and description, the system automatically generates 4 promotional carousel-style images, perfect for social media, advertising campaigns, or web catalogs.

Note: This workflow uses the Gemini 2.5 Flash API for image generation, imgbb for image storage, and upload-post.com for automatic Instagram, TikTok, Facebook, and YouTube publishing.

## Who Is This For?
- **E-commerce Owners:** Transform basic product photos into professional promotional content featuring real people using products in authentic situations.
- **Digital Marketers & Agencies:** Generate multiple advertising content variations for Facebook Ads, Instagram Stories, and digital marketing campaigns.
- **Small Businesses & Entrepreneurs:** Create professional promotional material without expensive photo shoots or graphic designers.
- **Social Media Managers:** Produce engaging and authentic content that drives engagement and conversions across all social platforms.

## What Problem Does This Workflow Solve?
Creating quality promotional content requires time, resources, and design skills. This workflow addresses these challenges by:
- **Automatic Carousel Generation:** Converts a single product photo into 4 promotional images featuring people using the product naturally.
- **Authentic & Engaging Content:** Generates images showing real product usage, increasing credibility and conversions.
- **Integrated Promotional Text:** Automatically includes visible offers, benefits, and calls to action in the images.
- **Social Media Optimization:** Produces vertical 9:16 images, perfect for Instagram, TikTok, and Facebook Stories.
- **Automatic Publishing:** Optionally publishes the complete carousel directly to Instagram with AI-generated optimized descriptions.
## How It Works
1. **Product Upload:** Upload a product image and provide a detailed description through the web form.
2. **Smart Analysis:** The AI agent analyzes the product and creates a storyboard of 4 different promotional images.
3. **Image Generation:** Gemini 2.5 Flash generates 4 variations showing people using the product in authentic contexts.
4. **Automatic Processing:** Images are automatically processed, optimized, and stored in imgbb.
5. **Promotional Description:** GPT-4 generates an attractive, social-media-optimized description based on the created images.
6. **Optional Publishing:** The system can automatically publish the complete carousel to Instagram.

## Setup
1. **fal.ai Credentials:** Sign up at fal.ai and add your API token to the Gemini 2.5 Flash nodes.
2. **imgbb API:** Create an account at imgbb.com, get your API key, and configure it in the "Set APIs Vars" node.
3. **Upload-Post (Optional):** For automatic Instagram publishing: register your account at upload-post.com, connect your Instagram business account, and configure the credentials in the "Upload Post" node.
4. **OpenAI API:** Configure your OpenAI API key for promotional description generation.

## Requirements
- **Accounts:** n8n, fal.ai, imgbb.com, OpenAI, upload-post.com (optional), Instagram business (optional).
- **API Keys:** fal.ai token, imgbb API key, OpenAI API key, upload-post.com credentials.
- **Image Format:** Any standard image format (JPG, PNG, WebP) of the product to promote.
## Features
- **Advanced Generative AI:** Uses Gemini 2.5 Flash to create realistic images of people using products
- **Smart Storyboard:** Automatically creates 4 different concepts to maximize engagement
- **Integrated Promotional Text:** Includes offers, benefits, and CTAs directly in the images
- **Optimized Format:** Generates vertical 9:16 images perfect for social media
- **Parallel Processing:** Generates all 4 images simultaneously for maximum efficiency
- **Automatic Publishing:** Option to publish directly to Instagram with optimized descriptions

Use this template to transform basic product photos into complete promotional campaigns, saving time and resources while generating high-quality content that converts visitors into customers.
by WeblineIndia
# Facebook Page Comment Moderation Scoreboard - Team Report
This workflow automatically monitors Facebook Page comments, analyzes them with AI for intent, toxicity, and spam, stores the moderation results in a database, and sends a clear summary report to Slack and Telegram.

It runs every few hours to fetch Facebook Page comments and analyze them using OpenAI. Each comment is classified as positive, neutral, or negative; checked for toxicity, spam, and abusive language; and then stored in Supabase. A simple moderation summary is sent to Slack and Telegram.

You receive:
- Automated Facebook comment moderation
- AI-based intent, toxicity, and spam detection
- Database logging of all moderated comments
- Clean Slack & Telegram summary reports

Ideal for teams that want visibility into comment quality without manually reviewing every message.

## Quick Start - Implementation Steps
1. Import the workflow JSON into n8n.
2. Add your Facebook Page access token to the HTTP Request node.
3. Connect your OpenAI API key for comment analysis.
4. Configure your Supabase table for storing moderation data.
5. Connect the Slack and Telegram credentials and choose target channels.
6. Activate the workflow; moderation runs automatically.

## What It Does
This workflow automates Facebook comment moderation by:
- Running on a scheduled interval (every 6 hours).
- Fetching recent comments from a Facebook Page.
- Preparing each comment for AI processing.
- Sending comments to OpenAI for moderation analysis.
- Extracting structured moderation data: comment intent, toxicity score, spam detection, and abusive-language detection.
- Flagging risky comments based on defined rules.
- Storing moderation results in Supabase.
- Generating a summary report.
- Sending the report to Slack and Telegram.

This ensures consistent, repeatable moderation with no manual effort.
## Who's It For
This workflow is ideal for:
- Social media teams
- Community managers
- Marketing teams
- Customer support teams
- Moderation and trust & safety teams
- Businesses managing high-volume Facebook Pages
- Anyone wanting AI-assisted comment moderation

## Requirements to Use This Workflow
- **n8n instance** (cloud or self-hosted)
- **Facebook Page access token**
- **OpenAI API key**
- **Supabase project and table**
- **Slack workspace** with API access
- **Telegram bot** and chat ID
- Basic understanding of APIs and JSON (helpful but not required)

## How It Works
1. **Scheduled Trigger:** The workflow starts automatically every 6 hours.
2. **Fetch Comments:** Facebook Page comments are retrieved.
3. **Prepare Data:** Comments are formatted for processing.
4. **AI Moderation:** OpenAI analyzes each comment.
5. **Normalize Results:** AI output is cleaned and standardized.
6. **Store Data:** Moderation results are saved in Supabase.
7. **Aggregate Stats:** Summary statistics are calculated.
8. **Send Alerts:** Reports are sent to Slack and Telegram.

## Setup Steps
1. Import the workflow JSON into n8n.
2. Open the Fetch Facebook Page Comments node and add the Page ID and access token.
3. Connect your OpenAI account in the AI moderation node.
4. Create a Supabase table and map the fields correctly.
5. Connect Slack and select a reporting channel.
6. Connect Telegram and set the chat ID.
7. Activate the workflow.
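The flagging decision inside the Normalize Results step can be sketched as a small rule function. This is a hypothetical illustration: the field names and the 0.7 toxicity threshold are assumptions, and you would tune them in the workflow's normalization logic:

```javascript
// Hypothetical sketch of the "flag risky comments" rule: combine the AI
// moderation scores into a single boolean. The 0.7 threshold and field
// names are illustrative; adjust them to your moderation policy.
function flagComment(moderation) {
  const { toxicity = 0, isSpam = false, isAbusive = false } = moderation;
  return toxicity >= 0.7 || isSpam || isAbusive;
}

console.log(flagComment({ toxicity: 0.85 }));              // true
console.log(flagComment({ toxicity: 0.2, isSpam: true })); // true
console.log(flagComment({ toxicity: 0.1 }));               // false
```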
## How To Customize Nodes
**Customize Flagging Rules** - update the normalization logic to:
- Change toxicity thresholds
- Flag only spam or abusive comments
- Add custom moderation rules

**Customize Storage** - extend the Supabase fields to include:
- Language
- AI confidence score
- Reviewer notes
- Resolution status

**Customize Notifications** - Slack and Telegram messages can include:
- Emojis
- Mentions (@channel)
- Links to Facebook comments
- Severity labels

## Add-Ons (Optional Enhancements)
You can extend this workflow to:
- Auto-hide or delete toxic comments
- Reply automatically to positive comments
- Detect language and region
- Generate daily or weekly moderation reports
- Build dashboards using Supabase or BI tools
- Add escalation alerts for high-risk comments
- Track trends over time

## Use Case Examples
1. **Community Moderation:** Automatically identify harmful or spam comments.
2. **Brand Reputation Monitoring:** Spot negative sentiment early and respond faster.
3. **Support Oversight:** Detect complaints or frustration in comments.
4. **Marketing Insights:** Measure positive vs. negative engagement.
5. **Compliance & Auditing:** Keep historical moderation logs in a database.

## Troubleshooting Guide
| Issue | Possible Cause | Solution |
|-----|---------------|----------|
| No comments fetched | Invalid Facebook token | Refresh token & permissions |
| AI output invalid | Prompt formatting issue | Use a strict JSON prompt |
| Data not saved | Supabase mapping mismatch | Verify table fields |
| Slack message missing | Channel or credential error | Recheck Slack config |
| Telegram alert fails | Wrong chat ID | Confirm bot permissions |
| Workflow not running | Trigger disabled | Enable the Cron node |

## Need Help?
If you need help customizing, scaling, or extending this workflow, such as advanced moderation logic, dashboards, auto-actions, or production hardening, our n8n workflow development team at WeblineIndia can assist with expert automation solutions.
by Dinakar Selvakumar
## Description
This n8n template generates high-quality, platform-ready hashtags for beauty and skincare brands by combining AI, live website analysis, and current social media trends. It is designed for marketers, agencies, and founders who want smarter hashtag strategies without manual research.

## Use cases
- Beauty & skincare brands building social media reach
- Agencies managing multiple client accounts
- Content teams creating Instagram, LinkedIn, or Facebook posts
- Founders validating brand positioning through hashtags

## What this template demonstrates
- Form-based user input in n8n
- Website scraping with HTTP Request
- AI-driven brand analysis using Gemini
- Structured AI outputs with output parsers
- Live trend research using search tools
- Automated storage in Google Sheets

## How it works
Users submit brand details through a form. The workflow scrapes the brand's website, analyzes it with AI, generates tailored hashtags, enriches them with platform-specific trends, and stores the final result in Google Sheets.

## How to use
1. Activate the workflow
2. Open the form URL
3. Enter the brand details and website URL
4. Submit the form
5. View the generated hashtags in Google Sheets

## Requirements
- Google Gemini API credentials
- Google Sheets account
- SerpAPI account for trend research

## Good to know
- Website scraping is best suited for public, text-rich sites
- Hashtags are generated dynamically based on brand tone and audience
- You can reuse the Google Sheet as a hashtag library

## Customising this workflow
- Change the number of hashtags generated
- Modify the AI prompt for different industries
- Add filters for banned or restricted hashtags
- Extend the workflow to auto-post to social platforms
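One of the customizations suggested above, a banned-hashtag filter, can be sketched as follows. This is a hypothetical example: the banned list and the 15-tag limit are illustrative, not part of the template:

```javascript
// Hypothetical sketch of a banned-hashtag filter: normalize the AI's output,
// drop restricted and duplicate tags, and cap the list before writing to
// Google Sheets. The banned list and limit are illustrative.
const BANNED = new Set(["#follow4follow", "#like4like"]);

function cleanHashtags(tags, limit = 15) {
  const seen = new Set();
  const out = [];
  for (const raw of tags) {
    const tag = raw.startsWith("#") ? raw.toLowerCase() : "#" + raw.toLowerCase();
    if (BANNED.has(tag) || seen.has(tag)) continue; // skip banned + duplicates
    seen.add(tag);
    out.push(tag);
    if (out.length === limit) break;
  }
  return out;
}

console.log(cleanHashtags(["GlowSkin", "#like4like", "#glowskin", "CleanBeauty"]));
// [ '#glowskin', '#cleanbeauty' ]
```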
by Avkash Kakdiya
## How it works
This workflow automatically generates personalized follow-up messages for leads or customers after key interactions (e.g., demos, sales calls). It enriches contact details from HubSpot (or optionally Monday.com), uses AI to draft a professional follow-up email, and distributes it across multiple communication channels (Slack, Telegram, Teams) as reminders for the sales team.

## Step-by-step
### 1. Trigger & Input
- **Schedule Trigger:** Runs automatically at a defined interval (e.g., daily).
- **Set Sample Data:** Captures the contact's name, email, and context from the last interaction (e.g., "had a product demo yesterday and showed strong interest").

### 2. Contact Enrichment
- **HubSpot Contact Lookup:** Searches HubSpot CRM by email to confirm or enrich contact details.
- **Monday.com Contact Fetch (Optional):** Can pull additional CRM details if enabled.

### 3. AI Message Generation
- **AI Language Model (OpenAI):** Provides the underlying engine for message creation.
- **Generate Follow-Up Message:** Drafts a short, professional, and friendly follow-up email that references the previous interaction context, suggests clear next steps (call, resources, etc.), and ends with a standardized signature block for consistency.

### 4. Multi-Channel Communication
- **Slack Reminder:** Posts the generated message as a reminder in the sales team's Slack channel.
- **Telegram Reminder:** Sends the follow-up draft to a Telegram chat.
- **Teams Reminder:** Shares the same message in a Microsoft Teams channel.

## Benefits
- **Personalized Outreach at Scale:** AI ensures each follow-up feels tailored and professional.
- **Context-Aware Messaging:** Pulls in CRM details and past interactions for relevance.
- **Cross-Platform Delivery:** Distributes reminders via Slack, Teams, and Telegram so no follow-up is missed.
- **Time-Saving for Sales Teams:** Eliminates manual drafting of repetitive follow-up emails.
- **Consistent Branding:** Ensures every message includes a unified signature block.
by Madame AI
Automate social media content aggregation to a Telegram channel This n8n template automatically aggregates and analyzes key updates from your social media platforms Home Page, delivering them as curated posts to a Telegram channel. This workflow is perfect for digital marketers, brand managers, or data analysts and Busy people, seeking to monitor real-time trends and competitor activity without manual effort. How it works The workflow is triggered automatically on a schedule to aggregate the latest social media posts. A series of If and Wait nodes monitor the data processing job until the full data is ready. An AI Agent, powered by Google Gemini, refines the content by summarizing posts and removing duplicates. An If node checks for an image in the post to decide if a photo or a text message should be sent. Finally, the curated posts are sent to your Telegram channel as rich media messages. How to use Set up BrowserAct Template: In your BrowserAct account, set up โTwitter/X Content Aggregationโ template. Set up Credentials: Add your credentials for BrowserAct In Run Node , Google Gemini in Agent Node, and Telegram in Send Node. Add Workflow ID: Change the workflow_id value inside the HTTP Request inside the Run Node, to match the one from your BrowserAct workflow. Activate Workflow: To enable the automated schedule, simply activate the workflow. Requirements BrowserAct** API account BrowserAct* *โTwitter/X Content Aggregationโ** Template Gemini** account Telegram** credentials customizing this workflow This workflow provides a powerful foundation for social media monitoring. You could: Replace the Telegram node with an email or Slack node to send notifications to a different platform. Add more detailed prompts to the AI Agent for more specific analysis or summarization. customize BrowserAct Workflow to reach your desire. Need Help ? 
- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates

**Workflow Guidance and Showcase**
- Automate Your Social Media: Get All X/Twitter Updates Directly in Telegram!
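The "photo or text?" decision made by the If node can be illustrated with a minimal sketch. The field name `image_url` is an assumption; adapt it to whatever fields your BrowserAct workflow actually returns.

```javascript
// Minimal sketch of the image check the If node performs on each
// curated post. image_url and summary are assumed field names; the
// returned method names follow the Telegram Bot API convention.
function routePost(post) {
  const hasImage =
    typeof post.image_url === "string" && post.image_url.trim() !== "";
  return hasImage
    ? { method: "sendPhoto", photo: post.image_url, caption: post.summary }
    : { method: "sendMessage", text: post.summary };
}

const withImage = routePost({
  summary: "Competitor launched a new feature",
  image_url: "https://example.com/a.jpg",
});
const textOnly = routePost({ summary: "Text-only update" });
```

Posts with a non-empty `image_url` are routed to the photo branch; everything else falls through to a plain text message.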
by Bhuvanesh R
**Your Cold Email is Now Researched.** This pipeline finds specific bottlenecks on prospect websites and instantly crafts an irresistible pitch.

🎯 **Problem Statement**

Traditional high-volume cold email outreach is stuck on generic personalization (e.g., "Love your website!"). Sales teams, especially those selling high-value AI Receptionists, struggle to efficiently find the one Unique Operational Hook (like manual scheduling dependency or high call volume) needed to make the pitch relevant. This forces reliance on expensive, slow manual research, leading to low reply rates and inefficient spending on bulk outreach tools.

✨ **Solution**

This workflow deploys a resilient Dual-AI Personalization Pipeline that runs on a batch basis. It uses the **Filter (Qualified Leads)** node as a cost-saving quality gate to prevent processing bad leads. It executes a targeted deep dive on successful leads, using **GPT-4** for analytical insight extraction and **Claude Sonnet** for coherent, human-like copy generation. The entire process outputs campaign-ready data directly to Google Sheets and sends a critical QA draft via Gmail.

⚙️ **How It Works (Multi-Step Execution)**

**1. Ingestion and Cost Control (The Quality Gate)**
- **Trigger and Ingestion:** The workflow starts via a **Manual Trigger**, pulling leads directly from **Get All Leads** (Google Sheets).
- **Cost Filtering:** The **Filter (Qualified Leads)** node removes leads that lack a working email or website URL.
- **Execution Isolation:** The **Loop Over Leads** node initiates individual processing. The **Capture Lead Data** (Set) node immediately captures and locks down the original lead context for stability throughout the loop.
- **Hybrid Scraping:** The **Scrape Site** (HTTP Request) and **Extract Text & Links** (HTML) nodes execute the hybrid scraping strategy, simultaneously capturing website text and external links.
- **Data Shaping & Status:** The **Filter Social & Status** (Code) node is the control center. It filters links, bundles the context, and, critically, assigns a **status** of 'Success' or 'Scrape Fail'.
- **Cost Control Branch:** The **If** node checks this status. Items with 'Scrape Fail' bypass all AI steps (saving 100% of AI token costs) and jump directly to **Log Final Result**. Successful items proceed to the AI core.

**2. Dual-AI Coherence & Dispatch (The Executive Output)**
- **Analytical Synthesis:** The **Summarize Website** (OpenAI) node uses **GPT-4** to synthesize the full context and extract the **Unique Operational Hook** (e.g., manual booking overhead).
- **Coherent Copy Generation:** The **Generate Subject & Body** (Anthropic) node uses the **Claude Sonnet** model to generate the subject and the multi-line body, guaranteeing **coherence** by creating both simultaneously in a single JSON output.
- **Final Parsing:** The **Parse AI Output** (Code) node reliably strips markdown wrappers and extracts the clean **subject** and **body** strings.
- **Final Delivery:** The data is logged via **Log Final Result** (Google Sheets), and the completed email is sent to the user via **Create a draft** (Gmail) for final quality assurance before sending.

🛠️ **Setup Steps**

Before running the workflow, ensure these credentials and data structures are correctly configured:

**Credentials**
- **Anthropic:** Configure credentials for the language model (Claude Sonnet).
- **OpenAI:** Configure credentials for the analytical model (GPT-4/GPT-4o).
- **Google Services:** Set up OAuth2 credentials for **Google Sheets** (input/output) and **Gmail** (draft QA and completion alert).

**Configuration**
- **Google Sheet Setup:** Your input sheet must include the columns **email**, **website_url**, and an empty **Icebreaker** column for initial filtering.
- **HTTP URL:** Verify that the **Scrape Site** node's URL parameter is set to pull the website URL from the stabilized data structure: `={{ $json.website_url }}`.
- **AI Prompts:** Ensure the Anthropic prompt contains your current irresistible sales offer and the required nested JSON output structure.
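As a rough illustration of the Parse AI Output step, here is a minimal sketch assuming the model returns a single JSON object with `subject` and `body` keys (per the single-JSON-output contract above), possibly wrapped in a markdown code fence. The exact field names and fence handling in the real Code node may differ.

```javascript
// Hedged sketch of a "Parse AI Output" Code node: strip an optional
// markdown code fence from the model's reply, parse the JSON, and
// extract the clean subject and body strings.
function parseAiOutput(raw) {
  const cleaned = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, "") // leading fence, with or without "json"
    .replace(/```$/, "")              // trailing fence
    .trim();
  const parsed = JSON.parse(cleaned);
  if (typeof parsed.subject !== "string" || typeof parsed.body !== "string") {
    throw new Error("AI output missing subject or body");
  }
  return { subject: parsed.subject, body: parsed.body };
}

// Simulated model reply wrapped in a fenced code block.
const fence = "`".repeat(3);
const sample =
  fence + 'json\n{"subject":"Quick idea for your booking flow",' +
  '"body":"Hi Sam, quick question about your booking flow."}\n' + fence;
const email = parseAiOutput(sample);
```

Failing loudly on a missing `subject` or `body` (rather than passing `undefined` downstream) is what keeps the Gmail draft step from producing half-empty emails.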
✅ **Benefits**
- **Coherence Guarantee:** A single **Anthropic** node generates both the subject and body, guaranteeing the message is perfectly aligned and hits the same unique insight.
- **Maximum Cost Control:** The **If** node prevents spending tokens on bad or broken websites, making the campaign highly **budget-efficient**.
- **Deep Personalization:** Combines **website text** and **social media links**, creating an icebreaker that implies thorough, manual research.
- **High Reliability:** Uses robust **Code** nodes for data structuring and parsing, ensuring the workflow runs consistently under real-world conditions without crashing.
- **Zero-Risk QA:** The final **Gmail (Create a draft)** step ensures human review of the generated copy before any cold emails are sent out.