by Trung Tran
📚 Telegram RAG Chatbot with PDF Document & Google Drive Backup

An upgraded Retrieval-Augmented Generation (RAG) chatbot built in n8n that lets users ask questions via Telegram and receive accurate answers from uploaded PDFs. It embeds documents using OpenAI and backs them up to Google Drive.

👤 Who's it for

Perfect for:
- Knowledge workers who want instant access to private documents
- Support teams needing searchable SOPs and guides
- Educators enabling course material Q&A for students
- Individuals automating personal document search + cloud backup

⚙️ How it works / What it does

💬 Telegram Chat Handling
1. **User sends a message:** Triggered by the Telegram bot, the workflow checks if the message is text.
2. **Text message → OpenAI RAG Agent:** If the message is text, it's passed to a GPT-powered document agent. This agent retrieves relevant info from embedded documents using semantic search and returns a context-aware answer to the user.
3. **Send answer back:** The bot sends the generated response back to the Telegram user.
4. **Non-text input fallback:** If the message is not text, the bot replies with a polite unsupported message.

📄 PDF Upload and Embedding
1. **User uploads PDFs manually:** A manual trigger starts the embedding flow.
2. **Default Data Loader:** Reads and chunks the PDF(s) into text segments (sketched below).
3. **Insert to Vector Store (Embedding):** Text chunks are embedded using OpenAI and saved for retrieval.
4. **Backup to Google Drive:** The original PDF is uploaded to Google Drive for safekeeping.

🛠️ How to set up

Telegram Bot
- Create via BotFather
- Connect it to the Telegram Trigger node

OpenAI
- Use your OpenAI API key
- Connect the Embeddings and Chat Model nodes (GPT-3.5/4)
- Ensure both embedding and querying use the same Embedding node

Google Drive
- Set up credentials in n8n for your Google account
- Connect the "Backup to Google Drive" node

PDF Ingestion
- Use the "Upload your PDF here" trigger
- Connect it to the loader, embedder, and backup flow

✅ Requirements
- Telegram bot token
- OpenAI API key (GPT + Embeddings)
- n8n instance (self-hosted or cloud)
- Google Drive integration
- PDF files to upload

🧩 How to customize the workflow

| Feature | How to Customize |
|-------------------------------|-------------------------------------------------------------------|
| Auto-ingest from folders | Add Google Drive/Dropbox watchers for new PDFs |
| Add file upload via Telegram | Extend Telegram bot to receive PDFs and run the embedding flow |
| Track user questions | Log Telegram usernames and questions to a database |
| Summarize documents | Add summarization step on upload |
| Add Markdown or HTML support | Format replies for better Telegram rendering |

Built with 💬 Telegram + 📄 PDF + 🧠 OpenAI Embeddings + ☁️ Google Drive + ⚡ n8n
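The Default Data Loader handles chunking internally, but conceptually it splits the extracted PDF text into overlapping segments before embedding. A minimal sketch (the chunk size and overlap values here are illustrative, not the node's actual defaults):

```javascript
// Illustrative character-based chunker with overlap, similar in spirit
// to what the Default Data Loader does before embedding.
function chunkText(text, size = 1000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}

// Each chunk becomes one item to embed and insert into the vector store.
return chunkText($json.text ?? '').map(chunk => ({ json: { chunk } }));
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either side, which generally improves answer quality in RAG setups.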
by Robert Breen
Give business users a chat box; get back valid BigQuery SQL and live query results.

The workflow:
1. Captures a plain-language question from a chat widget or internal portal.
2. Fetches the current table + column schema from your BigQuery dataset (via INFORMATION_SCHEMA).
3. Feeds both the schema and the question to GPT-4o so it can craft a syntactically correct SQL query using only fields that truly exist.
4. Executes the AI-generated SQL in BigQuery and returns the results.
5. Stores a short-term memory by session, enabling natural follow-up questions.

Perfect for analysts, customer-success teams, or any stakeholder who needs data without writing SQL.

⚙️ Setup Instructions

1. Import the workflow
   n8n → Workflows → Import from File (or Paste JSON) → Save

2. Add credentials

| Service | Where to create credentials | Node(s) to update |
|---------|----------------------------|-------------------|
| OpenAI | <https://platform.openai.com> → Create API key | OpenAI Chat Model |
| Google BigQuery | Google Cloud Console → IAM & Admin → Service Account JSON key | Google BigQuery (schema + query) |

3. Point the schema fetcher to your dataset
   In Google BigQuery1 you'll see:

   SELECT table_name, column_name, data_type
   FROM n8nautomation-453001.email_leads_schema.INFORMATION_SCHEMA.COLUMNS

   Replace n8nautomation-453001.email_leads_schema with YOUR_PROJECT.YOUR_DATASET. Keep the rest of the query the same: BigQuery's INFORMATION_SCHEMA always surfaces table_name, column_name, and data_type.

4. Update the execution node
   Open Google BigQuery (the second BigQuery node). In Project ID, select your project. The SQL Query field is already {{ $json.output.query }}, so it will run whatever the AI returns.

5. (Optional) Embed the chat interface

6. Test end-to-end
   Open the embedded chat widget and ask: "How many distinct email leads were created last week?" After a few seconds the workflow will return a table of results, or an error if the schema lacks the requested fields. Ask specific questions about your data.

7. Activate
   Toggle Active so the chat assistant is available 24/7.

🧩 Customization Ideas
- **Row-limit safeguard:** automatically append LIMIT 1000 to every query (a sketch follows below).
- **Chart rendering:** send query results to Google Sheets + Looker Studio for instant visuals.
- **Slack bot:** forward both the question and the SQL result to a Slack channel for team visibility.
- **Schema caching:** store the INFORMATION_SCHEMA result for 24 hours to cut BigQuery costs.

Contact
- **Email:** rbreen@ynteractive.com
- **Website:** https://ynteractive.com
- **YouTube:** https://www.youtube.com/@ynteractivetraining
- **LinkedIn:** https://www.linkedin.com/in/robertbreen
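As a concrete illustration of the row-limit safeguard idea above, here is a minimal Code-node sketch you could drop between the AI agent and the BigQuery execution node. It assumes the agent's output lands in `$json.output.query`, as the template's SQL Query field suggests:

```javascript
// Hypothetical n8n Code node: cap every AI-generated query at 1000 rows.
const sql = $json.output.query.trim().replace(/;+\s*$/, '');

// Leave statements alone if they already end with an explicit LIMIT.
const limited = /\blimit\s+\d+\s*$/i.test(sql) ? sql : `${sql}\nLIMIT 1000`;

return [{ json: { output: { query: limited } } }];
```

Because the execution node runs whatever arrives in `{{ $json.output.query }}`, this one extra node is enough to protect against runaway full-table scans.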
by Shinji Watanabe
Who's it for

Teams that care about space-weather impact (SRE/infra, satellite ops, aviation, power utilities, researchers) or anyone who wants timely, readable alerts when NASA publishes significant solar events.

How it works / What it does

Every 30 minutes a Cron trigger runs, the NASA DONKI node fetches the past 24 hours of space-weather notifications, and a code step de-duplicates, labels event types, and assigns a severity (CRITICAL / HIGH / OTHER). A Switch routes items:
- **CRITICAL/HIGH** → an LLM ("AI Agent") produces a concise Japanese alert → Slack posts with local time and source link.
- **OTHER** → an LLM creates a short summary for record-keeping → a small merge step prepares fields → Google Sheets appends a new row.

Sticky notes in the canvas explain the schedule, data source, and overall flow.

How to set up
1. Add credentials for Slack, Google Sheets, and OpenAI (or compatible LLM).
2. Open the Slack nodes and select your workspace + target channel.
3. Select your Google Sheet and worksheet for logging.
4. (Optional) Adjust the Cron interval and the NASA lookback window.
5. Test with a manual execution, then activate.

Requirements
- Slack Bot with permission to post to the chosen channel
- Google account with access to the target Sheet
- OpenAI (or API-compatible) credentials for the LLM nodes
- Internet access to NASA DONKI (no API key required)

How to customize the workflow
- Tweak severity rules inside the Analyze & Prioritize code node (see the sketch below).
- Edit prompt tone/length in each AI Agent node.
- Change Slack formatting or mention style (@channel vs none).
- Add filters (e.g., alert only on CME/FLR) or extend logging fields in the merge step.
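For orientation, the de-duplication and severity assignment in the Analyze & Prioritize step could look roughly like the following Code-node sketch. The field names (messageID, messageType, messageBody) match DONKI's notifications feed, but the thresholds below are illustrative assumptions, not the template's actual rules:

```javascript
// Hedged sketch of severity rules for DONKI notifications.
// messageType examples: FLR (solar flare), CME (coronal mass ejection),
// GST (geomagnetic storm), SEP, RBE, ...
const seen = new Set();
const out = [];

for (const item of $input.all()) {
  const n = item.json;
  if (seen.has(n.messageID)) continue; // de-duplicate by notification ID
  seen.add(n.messageID);

  let severity = 'OTHER';
  if (n.messageType === 'GST' || /X-class/i.test(n.messageBody ?? '')) {
    severity = 'CRITICAL';
  } else if (['FLR', 'CME'].includes(n.messageType)) {
    severity = 'HIGH';
  }

  out.push({ json: { ...n, severity } });
}

return out;
```

The Switch node then only needs to read the added `severity` field to route items to Slack or Google Sheets.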
by Khairul Muhtadin
Auto repost job with RAG is a workflow designed to automatically extract, process, and publish job listings from monitored sources using Google Drive, OpenAI, Supabase, and WordPress. This integration streamlines job reposting by intelligently extracting relevant job data, mapping categories and types accurately, managing media assets, and publishing posts seamlessly.

💡 Why Use Auto repost job with RAG?
- **Automated Publishing:** Slash manual entry time by automating job post extraction and publication, freeing hours every week.
- **Error-Resistant Workflow:** Avoid incomplete job posts with smart validation checks that ensure all necessary fields are ready before publishing.
- **Consistent Content Quality:** Maintain formatting, SEO, and style consistency backed by AI-driven article regeneration adhering strictly to your guidelines.
- **Competitive Edge:** Get fresh jobs live faster than your competitors without lifting more than a finger, because robots don't take coffee breaks!

⚡ Perfect For
- **Recruiters & HR Teams:** Accelerate your job posting funnel with error-free automation.
- **Content Managers:** Keep your job boards fresh with AI-enriched, standardized listings.
- **Digital Marketers:** Automate content flows to boost SEO and engagement without the headache.

🔧 How It Works
- ⏱ **Trigger:** Job link inputs via Telegram.
- 📎 **Process:** Auto-download of job documents; data extraction using Jina AI and OpenAI's GPT-4 model to parse content and metadata.
- 🤖 **Smart Logic:** An AI agent regenerates articles based on strict RAG dataset rules; category & job type IDs are mapped to match the WordPress taxonomy (see the sketch below); fallback attempts use default images for missing logos.
- 💌 **Output:** Job posts formatted and published to WordPress; success or failure updates sent back via Telegram notifications.
- 🗂 **Storage:** Uses a Supabase vector store for document embedding and retrieval of formatting rules and job data.

🔐 Quick Setup
1. Import the provided JSON workflow into your n8n instance
2. Add credentials: Google Drive OAuth, OpenAI API, Supabase API, Telegram API, WordPress API
3. Customize: set your Google Drive folder ID, WordPress endpoints, and Telegram chat IDs
4. Update: confirm default logo URLs and fallback settings as needed
5. Test: submit a new job link via Telegram or add a file to the watched Drive folder

🧩 You'll Need
- An active n8n instance
- Google Drive account with OAuth2 credentials
- OpenAI API access for GPT-4 processing
- Supabase account configured for vector storage
- WordPress API credentials for job listing publishing
- Telegram bot for notifications and job link inputs

🛠️ Level Up Ideas
- Integrate Slack, Gmail, or Teams notifications for team visibility
- Add a sentiment analysis step to prioritize certain jobs
- Automate social media posting of new job listings for wider reach

Made by: Khmuhtadin
Tags: automation, job-posting, AI, OpenAI, Google Drive, WordPress
Category: content automation
Need custom work? Contact me
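To illustrate the category and job-type mapping step, here is a minimal Code-node sketch. All term IDs and field names below are placeholders; real IDs come from your own WordPress categories and job-type taxonomy:

```javascript
// Hypothetical mapping from extracted job fields to WordPress term IDs.
// Replace the IDs with the ones from your own WordPress taxonomy.
const CATEGORY_IDS = { engineering: 12, marketing: 17, design: 23 };
const JOB_TYPE_IDS = { 'full-time': 31, 'part-time': 32, contract: 33 };

const job = $json; // output of the extraction step (shape is an assumption)
const category = CATEGORY_IDS[(job.category ?? '').toLowerCase()] ?? CATEGORY_IDS.engineering;
const jobType = JOB_TYPE_IDS[(job.type ?? '').toLowerCase()] ?? JOB_TYPE_IDS['full-time'];

// Fall back to a default logo when extraction found none.
const logoUrl = job.logoUrl || 'https://example.com/default-logo.png';

return [{ json: { ...job, category, jobType, logoUrl } }];
```

Keeping the mapping in one node makes it easy to extend when your WordPress site gains new categories or job types.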
by Dayong Huang
How it works

This template creates a fully automated Twitter content system that discovers trending topics, analyzes why they're trending using AI, and posts intelligent commentary about them. The workflow uses MCP (Model Context Protocol) with the twitter154 MCP server from MCPHub to connect with Twitter APIs, and leverages OpenAI GPT models to generate brand-safe, engaging content about current trends.

Key Features:
- 🔍 **Smart Trend Discovery:** Automatically finds US trending topics with engagement scoring
- 🤖 **AI-Powered Analysis:** Uses GPT to explain "why it's trending" in 30-60 words
- 📊 **Duplicate Prevention:** MySQL database tracks posted trends with 3-day cooldowns (see the sketch below)
- 🛡️ **Brand Safety:** Filters out NSFW content and low-quality hashtags
- ⚡ **Rate Limiting:** Built-in delays to respect API limits
- 🐦 **Powered by twitter154:** Uses the robust "Old Bird" MCP server for comprehensive Twitter data access

Set up steps

Setup time: ~10 minutes

Prerequisites:
- OpenAI API key for GPT models
- Twitter API access for posting
- MySQL database for trend tracking
- MCP server access: twitter154 from aigeon-ai via MCPHub

Configuration:
1. Set up MCP integration with the twitter154 server endpoint: https://api.mcphub.com/mcp/aigeon-ai-twitter154
2. Configure credentials for OpenAI, Twitter, and MySQL connections
3. Set up authentication for the twitter154 MCP server (Header Auth required)
4. Create the MySQL table for the keyword registry (schema provided in the workflow)
5. Test the workflow with manual execution before enabling automation
6. Set a schedule for automatic trend discovery (recommended: every 2-4 hours)

MCP Server Features Used:
- **Search Tweets:** Core functionality for trend analysis
- **Get Trends Near Location:** Discovers trending topics by geographic region
- **AI Tools:** Leverages sentiment analysis and topic classification capabilities

Customization Options:
- Modify trend scoring criteria in the AI agent prompts
- Adjust cooldown periods in database queries
- Change target locale from US to other regions (WOEID configuration)
- Customize tweet formatting and content style
- Configure different MCP server endpoints if needed

Perfect for: Social media managers, content creators, and businesses wanting to stay current with trending topics while maintaining consistent, intelligent posting schedules.

Powered by: The twitter154 MCP server ("The Old Bird") provides robust access to Twitter data including tweets, user information, trends, and AI-powered text analysis tools.
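A minimal sketch of the 3-day cooldown check, assuming a keyword registry with `trend` and `posted_at` columns (the actual schema ships inside the workflow and may differ):

```javascript
// Hypothetical n8n Code node: keep only trends that were not posted
// in the last 3 days, based on rows fetched by a prior MySQL node
// (node name 'MySQL' is an assumption), e.g.:
//   SELECT trend, posted_at FROM keyword_registry
const COOLDOWN_MS = 3 * 24 * 60 * 60 * 1000;
const now = Date.now();

const recent = new Map(
  $('MySQL').all().map(r => [r.json.trend, new Date(r.json.posted_at).getTime()])
);

return $input.all().filter(item => {
  const last = recent.get(item.json.trend);
  return last === undefined || now - last > COOLDOWN_MS;
});
```

Adjusting `COOLDOWN_MS` is all it takes to lengthen or shorten the cooldown period mentioned under Customization Options.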
by Jorge Martínez
Automating WhatsApp replies in Go High Level with Redis and Anthropic

Description

Integrates GHL + Wazzap with Redis and an AI Agent using ClientInfo to process messages, generate accurate replies, and send them via a custom field trigger.

Who's it for

This workflow is for businesses using GoHighLevel (GHL), including the Wazzap plugin for WhatsApp, who want to automate inbound SMS/WhatsApp replies with AI. It's ideal for teams that need accurate, data-driven responses from a predefined ClientInfo source and want to send them back to customers without paying for extra inbound automations.

How it works / What it does
1. Receive the message in n8n via a Webhook from GHL (Customer Replied (SMS) automation). WhatsApp messages arrive the same way using the Wazzap plugin.
2. Filter message type:
   - If audio → skip processing and send a fallback asking for text.
   - If text → sanitize by fixing escaped quotes, escaping line breaks/carriage returns/tabs, and removing invalid fields (see the sketch below).
3. Buffer messages in Redis to group multiple messages sent in a short window.
4. Run the AI Agent using the ClientInfo tool to answer only with accurate service/branch data.
5. Sanitize the AI output before sending it back.
6. Update the GHL contact custom field (IA_answer) with the AI's response.
7. Send the SMS reply automatically via GHL's outbound automation, triggered by the updated custom field.

How to set up
1. In GHL, create:
   - Inbound automation: trigger on Customer Replied (SMS) → send to your n8n Webhook.
   - Outbound automation: trigger when IA_answer is updated → send SMS to the contact.
2. Create a custom field named IA_answer.
3. Connect Wazzap in GHL to handle WhatsApp messages.
4. Configure Redis in n8n (host, port, DB index, password).
5. Add your AI model credentials (Anthropic, OpenAI, etc.) in n8n.
6. (Optional) Set up the Google Drive Excel Merge sub-workflow to enrich ClientInfo with external data.

Requirements
- **GoHighLevel sub-account API key**
- **Anthropic (Claude) API key** or another supported LLM provider
- **Redis database** for temporary message storage
- **GHL automations:** one for inbound messages to n8n, one for outbound replies when IA_answer is updated
- **GHL custom field:** IA_answer to store and trigger replies
- **Wazzap plugin** in GHL for WhatsApp message handling

How to customize the workflow
- Add more context or business-specific data to the AI Agent prompt so replies match your brand tone and policies.
- Expand the ClientInfo dataset with additional services, branches, or product details.
- Adjust the Redis wait time to control how long the workflow buffers messages before replying.
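The text sanitization step might look like this Code-node sketch. It is a minimal illustration; the template's actual cleanup rules and the inbound field names (`body`, `contactId`) are assumptions:

```javascript
// Hypothetical sanitizer: make an inbound GHL/Wazzap message body safe
// to pass through JSON-based nodes and into the AI Agent prompt.
const raw = $json.body ?? '';

const clean = raw
  .replace(/\\"/g, '"')   // fix escaped quotes
  .replace(/\r/g, '\\r')  // escape carriage returns
  .replace(/\n/g, '\\n')  // escape line breaks
  .replace(/\t/g, '\\t'); // escape tabs

// Keep only the fields downstream nodes expect, dropping invalid ones.
return [{ json: { body: clean, contactId: $json.contactId } }];
```

Running the same kind of cleanup on the AI output before writing it into the IA_answer custom field keeps the outbound GHL automation from choking on stray control characters.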
by Jose Bossa
👥 Who's it for

This workflow is perfect for businesses or individuals who want to automate WhatsApp conversations 💬 with an intelligent AI chatbot that can handle text, voice notes 🎵, and images 🖼️. No advanced coding required!

🤖 What it does

It automatically receives WhatsApp messages through WasenderAPI, intelligently buffers consecutive messages to avoid fragmented responses, processes multimedia content (transcribing audio and analyzing images with AI), and responds naturally using GPT-4o mini with conversation memory. All while protecting your WhatsApp account from being banned.

⚙️ How it works
1. 📱 Webhook Trigger – Receives new messages from WasenderAPI
2. 🗃️ Redis Buffer System – Groups consecutive messages intelligently (7-second window; see the sketch below)
3. 🔀 Content Classifier – Routes messages by type (text, audio, or image)
4. 🎵 Audio Processing – Decrypts and transcribes voice notes using OpenAI Whisper
5. 🖼️ Image Analysis – Decrypts and analyzes images with GPT-4o Vision
6. 🧠 AI Agent (GPT-4o mini) – Generates intelligent responses with 10-message memory
7. ⏱️ Anti-Ban Wait – 6-second delay to simulate human typing
8. 📤 Message Sender – Delivers the response back to the WhatsApp user

📋 Requirements
- WasenderAPI account with a connected WhatsApp number: https://wasenderapi.com/
- Redis database (free tier works fine)
- OpenAI API key with access to GPT-4o mini and Whisper
- n8n's AI Agent, LangChain, and Redis nodes

🛠️ How to set up
1. Create your WasenderAPI account and connect a WhatsApp number
2. Set up a free Redis database and get the connection credentials
3. Configure the OpenAI API key in n8n credentials
4. Replace the WasenderAPI Bearer token in the "Get the audio", "Get the photo", and "Send Message to User" nodes
5. Change the Manual Trigger to a Webhook and configure it in WasenderAPI
6. Customize the AI Agent prompt to match your business needs
7. Adjust wait times if needed (default: 6 seconds for responses, 7 seconds for buffer)
8. Save and activate the workflow ✅

🎨 How to customize
- Modify the AI Agent prompt to change the bot's personality and instructions
- Adjust the buffer wait time (7 seconds) for faster/slower message grouping
- Change the response delay (6 seconds) based on your use case; 30 seconds is recommended
- Add more content types (documents, videos) by extending the Switch Type node
- Configure the conversation memory window (default: 10 messages)
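The Redis buffering pattern from step 2 can be sketched roughly as follows. The key naming, the list-based approach, and the node name 'Get Buffer' are all assumptions for illustration, not the template's exact wiring:

```javascript
// Hedged sketch of the buffer logic: each incoming message is pushed
// onto a per-sender Redis list (RPUSH) by a Redis node, a Wait node
// pauses 7 seconds, then this Code node runs after a Redis "get" node.
const buffered = $('Get Buffer').first().json.values ?? [];

if (buffered[buffered.length - 1] !== $json.text) {
  // A newer message arrived while we were waiting: let that later
  // execution be the one that replies, and end this one silently.
  return [];
}

// This execution holds the newest message: merge the burst into one
// prompt for the AI Agent and let a Redis node clear the buffer next.
return [{ json: { from: $json.from, text: buffered.join(' ') } }];
```

The effect is that a user who sends three quick messages gets one coherent reply instead of three fragmented ones.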
by FabioInTech
J.A.R.V.I.S. Multimodal AI assistant on Telegram with OpenAI

This workflow transforms your Telegram bot into J.A.R.V.I.S., a powerful, multimodal AI assistant. It can understand and process text, voice messages, images, and documents. The assistant can search the web, scrape websites, generate images, perform calculations, and reference uploaded documents to provide comprehensive and context-aware responses in either text or audio format.

🧑‍💻 Who's it for

This workflow is for developers, AI enthusiasts, and businesses who want to create an advanced, interactive AI assistant on Telegram. It's perfect for automating customer support, creating a personal AI helper, or exploring the capabilities of multimodal large language models (LLMs) in a practical application.

⚙️ How it works

The workflow begins when a message is received by your Telegram bot. A Switch node then directs the data based on the message type (see the sketch below):
- **Text:** The message is formatted and sent directly to the main AI agent.
- **Voice:** The audio file is downloaded from Telegram and transcribed into text using the OpenAI API.
- **Image:** The image is downloaded and analyzed by an OpenAI vision model to understand its content.
- **Document:** The file is downloaded and its content is stored in a temporary vector store, making it searchable for the AI.

The processed input is then passed to the core "J.A.R.V.I.S." Agent node. This agent uses an OpenAI model, conversational memory, and a suite of tools (Google Search, Web Scraper, Image Generator, Calculator, and the document vector store) to formulate a response. Finally, the workflow checks if the initial message was a voice note; if so, it generates an audio response. Otherwise, it sends the answer as a text message back to the user.

🛠️ How to set up
1. Telegram: Use @BotFather to create a bot and obtain your bot token. Add Telegram API credentials in n8n with your bot token to the Receive Message Trigger node and all other Telegram nodes. In the Receive Message node, enter the chatId of the user or group authorized to interact with the bot.
2. OpenAI: Add your OpenAI API credentials to all OpenAI, AI Agent, and AI tool nodes.
3. SerpAPI: Add your SerpAPI credentials to the Basic Google Search node to enable web search functionality.
4. Jina AI: Add your Jina AI API key to the Setup node; the key is used in the Webpage Scraper node.

✅ Requirements
- Telegram Bot API credentials and bot token
- OpenAI API credentials
- SerpAPI API credentials
- Jina.ai API credentials

🎨 How to customize the workflow
- **Change the AI model:** Select a different OpenAI model in the OpenAI Chat Model node (e.g., switch from gpt-4.1 to gpt-4o) or in the Analyze Image and Transcribe nodes.
- **Modify the AI's personality:** Edit the system prompt in the J.A.R.V.I.S. Agent node to change its name, tone, instructions, or default language.
- **Expand its tools:** Connect more tools to the J.A.R.V.I.S. Agent node to extend its capabilities, such as connecting to a database or another third-party API.
- **Adjust the response format:** Modify the If Audio Response node to change the conditions for sending text or audio messages. For example, you could configure it to always respond with text.

💬 Need Help? Join the Discord or ask in the Forum
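The message-type routing handled by the Switch node can be expressed as a short Code-node sketch. The Telegram update fields (`text`, `voice`, `photo`, `document`) are standard Bot API fields; the `messageType` output name is an assumption:

```javascript
// Hypothetical pre-Switch Code node: tag each Telegram update with a
// messageType so a Switch node can route it to the right branch.
const msg = $json.message ?? {};

let messageType = 'unsupported';
if (msg.text) messageType = 'text';
else if (msg.voice) messageType = 'voice';
else if (msg.photo) messageType = 'image';
else if (msg.document) messageType = 'document';

return [{ json: { ...$json, messageType } }];
```

In the actual template the Switch node reads these fields directly, but the sketch shows the decision order: text first, then voice, image, and document, with everything else falling through to an unsupported branch.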
by Sk developer
📊 Automated Website Traffic Tracker with Google Sheets Logging

Track website traffic and backlinks effortlessly using the Website Traffic Checker - Ahref API. This n8n workflow automates data retrieval and logging into Google Sheets, making it perfect for SEO professionals and digital marketers.

🧩 What This Workflow Does (Summary)
1. Accepts a domain via a simple web form.
2. Sends the domain to the Website Traffic Checker - Ahref API.
3. If successful: extracts backlink and traffic data and appends the results to two separate Google Sheets.
4. If failed: sends an email alert with the domain and status code.

🔧 Node-by-Node Explanation

| Node | Purpose |
|------|---------|
| 🟢 Form Trigger | Starts the workflow when a domain is submitted via form. |
| 🟩 Set Domain Value | Stores the submitted domain into a variable. |
| 🌐 HTTP Request | Calls Website Traffic Checker - Ahref API. |
| ✅ IF Node | Checks if the API responded with statusCode = 200. |
| ❌ Email Node (Fail) | Sends an alert email if the API fails. |
| 📦 Code (Backlink Info) | Extracts backlink data from the API response (see the sketch below). |
| 📄 Google Sheet: Backlink Info | Appends backlink data to a sheet. |
| 📦 Code (Traffic Info) | Extracts traffic data from the API response. |
| 📄 Google Sheet: Traffic Data | Appends traffic metrics to another sheet. |

📁 Google Sheet Columns

Backlink Info Sheet

| Column | Description |
|--------|-------------|
| website | Domain submitted |
| ascore | Authority score |
| referring domain | Number of referring domains |
| total backlinks | Total backlinks |

Traffic Data Sheet

| Column | Description |
|--------|-------------|
| accuracy | Accuracy level of the traffic data |
| bounce_rate | Bounce rate percentage |
| desktop_share | Percentage of traffic from desktop devices |
| direct | Direct traffic sources |
| display_ad | Display ad traffic sources |
| display_date | Date when traffic data was captured |
| mail | Traffic from email campaigns |
| mobile_share | Percentage of traffic from mobile devices |
| pages_per_visit | Average number of pages per visit |
| paid | Paid traffic sources |
| prev_bounce_rate | Bounce rate in the previous period |
| prev_direct | Previous period's direct traffic |
| prev_display_ad | Previous period's display ad traffic |
| prev_mail | Previous period's email traffic |
| prev_pages_per_visit | Previous period's pages per visit |
| prev_referral | Previous period's referral traffic |
| prev_search_organic | Previous organic search traffic |
| prev_search_paid | Previous paid search traffic |
| prev_social_organic | Previous organic social traffic |
| prev_social_paid | Previous paid social traffic |
| prev_time_on_site | Previous time spent on site |
| prev_users | Number of users in the previous period |
| prev_visits | Visits in the previous period |
| rank | Global rank of the website |
| referral | Referral traffic |
| search | Total search traffic |
| search_organic | Organic search traffic |
| search_paid | Paid search traffic |
| social | Total social traffic |
| social_organic | Organic social traffic |
| social_paid | Paid social traffic |
| target | Targeted country or demographic |
| time_on_site | Average time spent on site |
| unknown_channel | Traffic from unknown sources |
| users | Number of unique users |
| visits | Total number of visits |

🔐 How to Configure

🔑 Get API Key
1. Go to Website Traffic Checker - Ahref API on RapidAPI.
2. Sign in or create a free RapidAPI account.
3. Subscribe to the API plan.
4. Copy your x-rapidapi-key from the Endpoints tab.

📝 Add Key in n8n

Go to your HTTP Request node. Under Headers, set:
- x-rapidapi-host = website-traffic-checker-ahref.p.rapidapi.com
- x-rapidapi-key = your API key

📄 How to Set Up Google Sheets in n8n
1. Connect a Google account via Google Sheets credentials in n8n.
2. Use the full Google Sheet URL in the documentId field.
3. Set the correct sheet name or GID (e.g., "Traffic Data").
4. Use Auto Map or Custom Map to define columns.

> Make sure your Google Sheet has edit access and headers already created.

🧠 Use Case & Benefits

👤 Ideal For:
- SEO analysts
- Digital marketers
- Agencies managing multiple clients
- Web analytics consultants

✅ Benefits:
- Fully automated data collection
- No manual copy-paste from tools
- Real-time insights delivered to Google Sheets
- Easy monitoring of backlinks and traffic trends
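The two Code nodes simply pull the relevant keys out of the API response and map them onto the sheet columns. A minimal sketch of the backlink extractor (the response field names and node name below are assumptions inferred from the Backlink Info Sheet columns):

```javascript
// Hypothetical Code (Backlink Info) node: shape the API response into
// one row matching the Backlink Info sheet's headers.
const data = $json.data ?? $json; // response shape is an assumption

return [{
  json: {
    website: $('Set Domain Value').first().json.domain,
    ascore: data.ascore,
    'referring domain': data.referring_domains,
    'total backlinks': data.total_backlinks,
  },
}];
```

Because the output keys match the sheet headers exactly, the Google Sheets node's Auto Map option can append the row without any manual column mapping.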
by Mantaka Mahir
How it works

This workflow automates the process of converting Google Drive documents into searchable vector embeddings for AI-powered applications:
• Takes a Google Drive folder URL as input
• Initializes a Supabase vector database with the pgvector extension
• Fetches all files from the specified Drive folder
• Downloads and converts each file to plain text
• Generates 768-dimensional embeddings using Google Gemini
• Stores documents with embeddings in Supabase for semantic search

Built for the Study Agent workflow to power document-based Q&A, but it also works perfectly for any RAG system, AI chatbot, knowledge base, or semantic search application that needs to query document collections.

Set up steps

Prerequisites:
• Google Drive OAuth2 credentials
• Supabase account with Postgres connection details
• Google Gemini API key (free tier available)

Setup time: ~10 minutes

Steps:
1. Add your Google Drive OAuth2 credentials to the Google Drive nodes
2. Configure Supabase Postgres credentials in the SQL node
3. Add Supabase API credentials to the Vector Store node
4. Add the Google Gemini API key to the Embeddings node
5. Update the input with your Drive folder URL
6. Execute the workflow

Note: The SQL query will drop any existing "documents" table, so back up your data if needed. Detailed node-by-node instructions are in the sticky notes within the workflow.

Works with: Study Agent (main use case), custom AI agents, chatbots, documentation search, customer support bots, or any RAG application.
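For reference, producing a 768-dimensional Gemini embedding outside the dedicated n8n node looks roughly like this sketch. The model name and request shape follow Google's text-embedding-004 REST API as documented at the time of writing, so verify against the current docs:

```javascript
// Hedged sketch: embed one text chunk via Gemini's REST endpoint.
// text-embedding-004 returns 768-dimensional vectors, matching the
// pgvector column this workflow's SQL step creates.
const url =
  'https://generativelanguage.googleapis.com/v1beta/models/text-embedding-004:embedContent' +
  `?key=${process.env.GEMINI_API_KEY}`; // key handling is an assumption

const res = await fetch(url, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ content: { parts: [{ text: $json.chunk }] } }),
});

const { embedding } = await res.json(); // { values: number[] }, length 768
return [{ json: { chunk: $json.chunk, embedding: embedding.values } }];
```

The dimension matters: if you swap in a different embedding model, the vector column in Supabase must be recreated with the new dimensionality.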
by Rami Cole
🚀 AI Marketing Campaign Generator

Upload a product image + details → get a complete professional marketing campaign with 5 custom-generated assets, automatically.

🤖 AI Models
- GPT-4o Mini (OpenAI) – for campaign strategy and image-generation prompts
- GPT Image-1 (OpenAI) – for visual asset generation

🔑 Required API Keys
- OpenAI API – AI analysis & image generation
- Google Drive API – asset storage & organization

🎯 What It Generates
- 5 Marketing Assets: Instagram Post, Instagram Story, Website Banner, Ad Creative, Testimonial Graphic
- Brand Strategy: colors, tone, positioning from your product image
- Campaign Strategy: messaging, target audience, objectives
- Visual Analysis: extracts colors, materials, styling from the uploaded image

⚙️ Setup
1. Import the JSON into n8n
2. Add OpenAI & Google Drive credentials
3. Configure a Google Drive folder for asset storage
4. Deploy the form webhook
5. Test with a product image upload

📱 How It Works

Upload product image → AI analyzes visual + text → generates complete campaign → creates 5 custom marketing assets → saves to Google Drive
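A hedged sketch of the asset-generation step via OpenAI's Images API. The `gpt-image-1` model name comes from the listing; the rest of the request shape follows OpenAI's documented generations endpoint, and the input field names (`assetPrompt`, `assetName`) are assumptions:

```javascript
// Generate one marketing asset from an AI-written prompt.
const res = await fetch('https://api.openai.com/v1/images/generations', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-image-1',
    prompt: $json.assetPrompt, // e.g. the Instagram Post prompt
    size: '1024x1024',
  }),
});

const { data } = await res.json();
// gpt-image-1 returns base64-encoded image data by default.
return [{ json: { asset: $json.assetName, imageB64: data[0].b64_json } }];
```

In the workflow this runs once per asset type, so five prompts in produce the five campaign images that get uploaded to Google Drive.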
by Guilherme Campos
This n8n workflow automates the process of creating high-quality, scroll-stopping LinkedIn posts based on live research, AI insight generation, and Google Sheets storage. Instead of relying on recycled AI tips or boring summaries, this system combines real-time trend discovery via Perplexity, structured idea shaping with GPT-4, and content generation tailored to a bold, human LinkedIn voice.

The workflow saves each post idea (with image prompt, tone, and summary) to a Google Sheet, sends you a Telegram alert, and even formats your content for direct publishing. Perfect for solopreneurs, startup marketers, or anyone who posts regularly on LinkedIn and wants to sound original, not robotic.

Who's it for
- Content creators and solopreneurs building an audience on LinkedIn
- Startup teams, PMs, and tech marketers looking to scale thought leadership
- Anyone tired of generic AI-generated posts and craving structured, edgy output

How it works
1. A daily trigger at 6 AM starts the workflow.
2. It pulls recent post history from Google Sheets to avoid repeated ideas (see the sketch below).
3. Perplexity AI scans the web for recent trends.
4. It generates 3 structured post ideas (including tone, hook, visual prompt, and summary).
5. GPT-4 refines each into a bold, human-style LinkedIn post, following detailed brand voice rules.
6. It saves everything to Google Sheets (idea, content, image prompt, post status).
7. It sends a Telegram notification to alert you that new ideas are ready.

How to set up
1. Connect your Perplexity, OpenAI, Google Sheets, and Telegram credentials.
2. Point to your preferred Google Sheet and sheet tab for storing post data.
3. Adjust the schedule trigger if you want more or fewer ideas per week.
4. (Optional) Tweak the content style prompt to match your personal tone or niche.

Requirements
- Perplexity API account
- OpenAI API access (GPT-4 or GPT-4-mini)
- Telegram bot connected to your account
- Google Sheets document with appropriate column headers

How to customize the workflow
- Change the research sources or prompt tone (e.g., more tactical, more spicy, more philosophical)
- Add an image generation tool to turn prompts into visuals for each post
- Filter or tag ideas based on type (trend, tip, story, etc.)
- Post automatically via the LinkedIn API or a Buffer integration
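The "avoid repeated ideas" step in the flow above could be sketched as a small Code node like this. The node name 'Get Post History' and the `idea` column are assumptions standing in for whatever your sheet actually uses:

```javascript
// Hypothetical Code node: drop newly generated ideas whose topic
// already appears in the post-history sheet.
const history = $('Get Post History').all().map(
  r => (r.json.idea ?? '').toLowerCase().trim()
);

return $input.all().filter(item => {
  const idea = (item.json.idea ?? '').toLowerCase().trim();
  // Treat partial overlap in either direction as a repeat.
  return !history.some(h => h.includes(idea) || idea.includes(h));
});
```

A fuzzier match (embedding similarity, for instance) would catch rephrased duplicates too, but simple substring overlap is usually enough to keep a daily feed from repeating itself.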