by Roninimous
This n8n workflow integrates Shopify order management with Telegram, allowing you to query open orders and order details directly through Telegram chat commands. It provides an interactive way to monitor your Shopify store orders using Telegram as an interface.

Key Features
- Telegram Trigger: Listens for messages and callback queries from your Telegram bot.
- Switch Node: Routes incoming Telegram messages to different flows based on message content:
  - /orders command to fetch all open orders
  - Callback queries starting with /order_ to fetch details of a specific order
- Shopify Get Orders: Retrieves all open orders from your Shopify store using your Shopify API credentials.
- Conditional Check (If Node): Determines whether there are any open orders and branches accordingly:
  - If orders exist, prepares an interactive Telegram message with a list of orders.
  - If no orders exist, sends a "No Order" message.
- Orders Code Node: Formats the list of open orders into a Telegram message with inline buttons. Each button corresponds to an order and carries callback data containing the order ID.
- Get Order Details: When a user selects an order button, the workflow extracts the order ID from the callback data, fetches detailed order information from Shopify, and formats the order items into a readable message.
- Send Messages to Telegram: Sends formatted messages back to Telegram:
  - The list of open orders with clickable buttons
  - Detailed information about a selected order
  - A "No Order" notification if there are no open orders

How It Works
1. A Telegram user sends /orders to the bot.
2. The workflow fetches open orders from Shopify and sends a message with buttons listing each order.
3. When the user clicks an order button, the workflow fetches and displays detailed information about that specific order in Telegram.
4. If there are no open orders, the bot replies accordingly.

Setup Instructions
1. Create a Telegram Bot: Use @BotFather on Telegram to create a bot and get the bot token.
2. Obtain Shopify API Credentials: Create a private app in your Shopify admin dashboard with permission to read orders. Obtain the API key and access token.
3. Configure n8n Credentials: Add your Telegram bot token as Telegram API credentials in n8n. Add your Shopify API credentials as n8n Shopify credentials.
4. Import the Workflow: Import this workflow into your n8n instance and update the Telegram and Shopify credential nodes to use your credentials.
5. Set Webhook URLs: Ensure your Telegram bot webhook is set correctly to receive messages. n8n webhook URLs must be publicly accessible.
6. Test the Workflow: Send /orders to your Telegram bot to verify it retrieves and lists open orders.

Customization Guidance
- Modify Commands: Update the Switch node to add more Telegram commands or change existing ones.
- Change Message Formats: Edit the Code nodes to customize how order lists and details appear.
- Expand Shopify Integration: Add nodes to handle other Shopify operations such as updating orders or managing products.
- Multi-User Support: Adapt the workflow to handle multiple Telegram chat IDs dynamically.

Security and Implementation Notes
The native Telegram node in n8n has a limitation: it does not support sending dynamic inline keyboard arrays as JSON, which is essential for displaying a variable number of buttons depending on how many orders are retrieved from Shopify. To overcome this, the workflow uses the HTTP Request node to call Telegram's API directly, allowing full flexibility to send dynamic inline keyboards as JSON objects. (I will update the template once the Telegram node supports dynamic inline keyboards.) A sketch of such a request payload appears at the end of this description.

Security Considerations:
- Always store your Telegram bot token securely in n8n credentials and never expose it directly in the HTTP Request node's URL or body. Use environment variables or n8n credentials to inject tokens safely.
- Be mindful of Telegram API rate limits and add error handling to your workflow.
- While HTTP Request nodes increase flexibility, they also require careful management of request payloads and authentication, unlike the built-in Telegram node, which abstracts much of this complexity.

Benefits
- Quickly access Shopify order data without leaving Telegram.
- Interactive inline buttons improve the user experience.
- Automated, real-time integration between Shopify and Telegram.
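For orientation, here is a minimal sketch of the dynamic-keyboard approach, written as an n8n Code node that builds the body for Telegram's sendMessage endpoint. The order field names (id, name) and the chat-ID expression are assumptions; match them to your actual Shopify node output.

```javascript
// Minimal sketch (n8n Code node): build a Telegram sendMessage payload
// with one inline button per Shopify order. Field names (id, name) are
// assumptions; match them to your Shopify node's actual output.
const orders = $input.all().map(item => item.json);

const inlineKeyboard = orders.map(order => ([{
  text: `Order ${order.name}`,          // button label shown in the chat
  callback_data: `/order_${order.id}`,  // routed by the Switch node
}]));

return [{
  json: {
    chat_id: $('Telegram Trigger').first().json.message.chat.id,
    text: `You have ${orders.length} open order(s):`,
    reply_markup: { inline_keyboard: inlineKeyboard },
  },
}];
```

The HTTP Request node then sends this JSON body to https://api.telegram.org/bot<token>/sendMessage, injecting the token from your n8n credentials rather than hard-coding it.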
by InfraNodus
Set up a chat with your documents without a complex vector store setup. This template helps you:

- ingest your PDF / text / MD documents into a knowledge graph
- use the graph as the knowledge base for your AI chatbots (and other workflows)
- visualize the main topics and gaps in your documents (good for observability and research)

The knowledge base is provided by InfraNodus GraphRAG, whose knowledge graphs offer high-quality responses without the need to set up complex RAG vector store workflows. The advantages of using GraphRAG instead of standard vector stores for knowledge are:

- Easy and quick to set up and update — no complex data import workflows needed
- A knowledge graph offers a holistic and interactive view of your knowledge base (accessible via our API or a web interface — also shareable)
- Better retrieval of relations between the document chunks = higher-quality responses

How it works
This template uses the InfraNodus knowledge graph as a knowledge base for your n8n AI Agent node. The knowledge graph contains the documents you upload with this template from your Google Drive. When the user asks a question via the chat interface, the agent forwards the question to the InfraNodus knowledge graph, retrieves a response, a summary, and a list of matching statements (based on advanced GraphRAG), then delivers the final response back to the user.

Here is a step-by-step description:

Step 1: Upload your documents
1. Put the PDF / text / MD files you want to chat with into a folder on your Google Drive.
2. Authorize access to that folder using the Google Drive node in the template.
3. Add the InfraNodus API key to the InfraNodus Save to Graph HTTP node (a sketch of this request appears at the end of this description).
4. Optional: change the name of the graph you want to save the data to in the InfraNodus HTTP node (in the name field of the HTTP POST request).
5. Run the workflow to ingest all the files and save them into the graph.
6. Optional: check the link provided in the Step 1 workflow description to see the visualization of your knowledge base.

Note: you can replace the PDF-to-text converter node with a higher-quality PDF converter from ConvertAPI, which respects the original file layout and doesn't split text into small chunks.

Step 2: Chat with your documents
1. Deactivate the trigger in Step 1.
2. Activate the chat trigger in Step 2.
3. Add your InfraNodus API credentials to the Knowledge Base GraphRAG InfraNodus node.
4. Optional: change the graph name in the Knowledge Base node to match the name you provided in Step 1 above.
5. Run the chat and ask a question.
6. Watch the magic.

How to use
You need an InfraNodus GraphRAG API account and key to use this workflow.

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.

Requirements
- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key
- Google Drive OAuth access (follow the n8n instructions)
- Optional: a ConvertAPI API key for better-quality PDF conversion

Customizing this workflow
You can customize this workflow by adding several experts to your AI agent. Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20174217658396-Using-InfraNodus-Knowledge-Graphs-as-Experts-for-AI-Chatbot-Agents-in-n8n

Also check out the video tutorial with a demo. For support and feedback, please contact us at https://support.noduslabs.com. To learn more about InfraNodus: https://infranodus.com
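As a rough sketch of what the Save to Graph HTTP node sends, the request is a Bearer-authenticated POST carrying the graph name and the extracted text. The endpoint path and body field names below are assumptions for illustration only; use the exact URL and schema documented at https://infranodus.com/api-access.

```javascript
// Hypothetical sketch of the ingestion call (fetch style). The endpoint
// path and body fields are assumptions; verify them against the
// InfraNodus API docs before use.
const response = await fetch('https://infranodus.com/api/v1/graphAndStatements', { // assumed endpoint
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${INFRANODUS_API_KEY}`, // injected from n8n credentials
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    name: 'my_documents_graph',  // graph name, matching the "name" field mentioned above
    text: extractedDocumentText, // plain text extracted from the PDF / MD file
  }),
});
```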
by Yannick
🚀 How it works:
This template turns a document (PDF, TXT, DOCX, ...) into an engaging LinkedIn post, ready to be published or approved by email, with the help of an AI specialized in LinkedIn copywriting. Here are the key steps:

1. Upload form: The user uploads a file or pastes text.
2. Content type detection: A Switch node analyzes the file type (PDF, DOCX, TXT, or raw text); see the routing sketch after the use cases below. Note: DOCX requires a Make account to convert the document (the workflow also works without DOCX).
3. Content extraction: Depending on the format, the right extraction module is used.
4. LinkedIn post generation: The AI turns the content into a LinkedIn post following an optimized copywriting methodology.
5. Email validation: An email is sent to the user for approval, with the option to add an image.
6. Automatic publishing: If the user approves, the post is published on LinkedIn.

⚙️ Setup Steps:
1. Connect your accounts:
   - Google Docs OAuth
   - LinkedIn OAuth
   - OpenAI (via gpt-4.1-mini or another model)
   - SMTP + IMAP for sending and reading emails
2. Configure the form fields in the Form Trigger node for your use case.
3. Customize the AI prompt in the AI Agent node if you want to adapt the tone or methodology.
4. Check the email addresses in the send node (Send Email) and read node (Email Trigger (IMAP)) so that validation works.
5. Test the workflow with different files to make sure all types are handled correctly (PDF, DOCX, TXT, etc.).

🧩 Typical use cases:
- Create posts from meeting notes or reports.
- Repurpose an article or professional publication as LinkedIn content.
- Delegate the first draft of your social content to the AI.
- Bonus: monitor a newsletter inbox to suggest a relevant LinkedIn post (you can remove this branch; it runs in parallel).
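As a rough illustration of the content-type detection step, here is a minimal Code-node sketch of MIME-based routing. The property path (binary.data.mimeType) follows n8n's usual binary item structure, but treat it as an assumption to check against your Form Trigger's actual output.

```javascript
// Minimal sketch: classify the uploaded file so a Switch node can route it.
// Property paths are assumptions; inspect your Form Trigger's actual output.
const item = $input.first();
const mime = item.binary?.data?.mimeType ?? '';

let route = 'raw-text'; // default: the user pasted plain text
if (mime === 'application/pdf') route = 'pdf';
else if (mime === 'application/vnd.openxmlformats-officedocument.wordprocessingml.document') route = 'docx';
else if (mime === 'text/plain') route = 'txt';

return [{ json: { ...item.json, route } }];
```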
by keisha kalra
Try It Out!
This n8n template creates a fully automated Instagram content schedule using AI and Google Sheets. It is perfect for content creators, marketing teams, or local businesses looking to organize and scale their social media posting.

How it works
The workflow starts by reading two sets of inputs from a Google Sheet:
- Your content strategy inputs (Pillar, Objective, Frequency, Format, Structure, Examples).
- A list of scraped blog posts with title, URL, and description (fetched from your website).

Blog posts are scraped using Apify and parsed to extract key fields, which are stored in a tab labeled "Input (blog month)". You can assign a preferred posting month for each blog (e.g. fall blog posts get tagged for September). The workflow then merges both inputs and extracts the relevant fields to be enriched by ChatGPT.

AI Scheduling & Personalization
Once merged, the workflow loops through each content item and:
- Identifies whether the scheduled post falls on or near a holiday (like Mother's Day) and adjusts the content accordingly.
- Uses an attached reference tool to guide structure and tone, based on a library of post examples.
- Sends the content to an AI Agent (using GPT-4, but customizable) that generates:
  - A compelling Instagram caption
  - A visual description
  - Hashtags
  - Suggested post date, day, content pillar, and format (carousel, reel, image, etc.)

Output
All generated content, including captions, structure, dates, hashtags, and pillar, is exported into a tab titled Output in your Google Sheet. The final schedule is ready for manual review, editing, or publishing to social media.

How to use
The workflow uses a manual trigger to start, but you can replace it with a Webhook, cron job, or form submission. Add or edit your content strategy in Google Sheets.

How to Set Up
Initial Input Tab: define your content pillars and structure.
- Create a tab named "Input" or "Strategy"
- Include these columns:
  - Pillar: e.g., Family images
  - Objective: e.g., Showcase images
  - Frequency: e.g., Bi-weekly
  - Content Form: e.g., Images, Reels
  - Structure: brief description of the expected layout (e.g., carousel Q&A, single photo)
  - Examples: prompts or questions to guide the AI (e.g., Why do you think families should do a session?)

Input (blog month) Tab: store scraped blog content.
- Include these columns:
  - URL: direct link to the blog post
  - Title: blog post title
  - Description: short summary of the post
  - Preferred Month: month you want it posted (e.g., August, September)
- This sheet is partially auto-filled by the workflow (except for Preferred Month).

Output Tab: final scheduled content.
- Include these columns:
  - Date: scheduled posting date (YYYY-MM-DD)
  - Day: day of the week
  - Pillar: content category assigned
  - Format: e.g., Images, Reels, Carousel
  - Description: visual summary
  - Caption: Instagram-ready caption
  - Hashtags: complete hashtag block

To use the Apify HTTP Request node (see the sketch after these steps):
1. Drag an HTTP Request node into your n8n workflow.
2. Set the Method and URL based on how you're using Apify:
   - Use POST if you want to run an actor live with dynamic input (e.g. scrape blog posts in real time).
   - Use GET if you want to retrieve results from a completed or static dataset run (faster and cheaper if you're reusing previous data).
3. Configure query or body parameters:
   - Include your Apify API token for authentication (e.g. token=YOUR_API_KEY)
   - For POST: include an input object with any required actor settings (e.g., the blog URL to scrape).
   - For GET: specify the dataset ID in the URL.
4. Test the node to ensure you're retrieving the blog titles, descriptions, and URLs as expected.
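As a rough sketch (fetch-style JavaScript mirroring the HTTP Request node settings), the two Apify calls might look like this. ACTOR_ID, DATASET_ID, and the input fields are placeholders for your own values; the actor's input schema depends on which scraper you use.

```javascript
// Sketch of the two Apify calls described above. ACTOR_ID, DATASET_ID,
// and the input fields are placeholders; substitute your own values.
const token = process.env.APIFY_TOKEN;

// POST: run an actor live with dynamic input (real-time scrape).
await fetch(`https://api.apify.com/v2/acts/ACTOR_ID/runs?token=${token}`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ startUrls: [{ url: 'https://yourblog.example.com' }] }), // actor-specific input
});

// GET: fetch items from a finished dataset run (cheaper when reusing data).
const res = await fetch(`https://api.apify.com/v2/datasets/DATASET_ID/items?token=${token}`);
const posts = await res.json(); // expect objects with title, url, description
```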
Requirements
- An Apify account for scraping blog posts
- An OpenAI key (e.g. GPT-4) or another model of your choice
- Google Sheets credentials

Example Use Cases
- A photographer repurposing blogs into Instagram carousels
- A nonprofit automatically generating seasonal posts
- A small team managing multi-pillar content across weeks or months

Need Help?
Join the n8n Discord or ask in the n8n Forum!

Happy Content Making! 📅✨
by Ranjan Dailata
Notice
Community nodes can only be installed on self-hosted instances of n8n.

Who this is for?
The Search Engine Intelligence Extractor is a powerful n8n automation that leverages Bright Data's MCP-based AI agents to simulate human-like searches across Google, Bing, and Yandex, and then distills clean, structured insights using Google Gemini. This workflow is tailored for:

- SEO analysts researching competitors or market trends
- Market researchers needing real-time search visibility
- Journalists & content writers gathering contextual insights
- AI developers creating intelligent assistants
- Digital marketers tracking brand mentions or news

What problem is this workflow solving?
Traditional scraping of search engines is often blocked, cluttered, or filled with irrelevant information, and manually analyzing and cleaning this data for insight is time-consuming. This workflow solves the problem by:

- Simulating real user search behavior via a Bright Data MCP-based AI agent
- Performing multi-platform search (Google, Bing, Yandex) in one unified flow
- Extracting clean, human-readable results (stripping ads, navigation, etc.)
- Structuring the content using the Google Gemini LLM
- Automating delivery via Webhook or saving to disk

What this workflow does
- Input Fields Node:
  - Accepts the search query
  - Accepts an action, for example "Perform a google search" (replace "google" with "bing", "yandex", etc. for other search providers)
  - Accepts a Webhook notification URL
- Bright Data MCP Agent Execution:
  - Triggers Bright Data's intelligent search agent
  - Handles search navigation, result loading, and pagination
- Human Readable Data Extractor:
  - Cleanses HTML; removes ads, footers, and irrelevant links
  - Produces a readable narrative of the results
- Final Output Handling:
  - Saves the processed response to disk
  - Sends the structured data to a Webhook for real-time use

Pre-conditions
- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup
1. Set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect an MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to copy the Bright Data API_TOKEN into the Environments textbox as API_TOKEN=<your-token> (a sketch of this credential configuration appears at the end of this description).

How to customize this workflow to your needs
- Add Scheduled Execution: add a Cron trigger to run this workflow on a set schedule (e.g., daily/weekly keyword tracking).
- Push Results to Custom Destinations. Connect the output to:
  - Google Sheets (for analytics or dashboards)
  - PostgreSQL or MySQL DBs (for structured storage)
  - Notion or Airtable (for content pipelines)
  - Slack or Email (for alerting teams)
- Customize Webhook Notifications: update the Webhook URL in the notification node to push processed results to external APIs, CRMs, or real-time dashboards.
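For orientation, the MCP Client (STDIO) credential boils down to a command, its arguments, and environment variables, roughly as sketched below. The exact field labels in the n8n credential form are an assumption; check the n8n-nodes-mcp documentation.

```javascript
// Sketch of the MCP Client (STDIO) credential values, expressed as a JS
// object for illustration. The n8n credential form has equivalent fields;
// exact labels are an assumption to verify against the n8n-nodes-mcp docs.
const mcpClientStdioCredential = {
  command: 'npx',                          // launches the MCP server process
  args: '-y @brightdata/mcp',              // the Bright Data MCP Server package
  environments: 'API_TOKEN=<your-token>',  // Bright Data API token, as noted above
};
```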
by Ranjan Dailata
Notice
Community nodes can only be installed on self-hosted instances of n8n.

Who this is for?
This workflow template enables intelligent data extraction from ProductHunt using Bright Data's Model Context Protocol (MCP) and processes search results with Google Gemini. It is designed for individuals and teams who need automated, intelligent discovery and analysis of new tech products. It's especially valuable for:

- Startup Analysts & VC Researchers
- Growth Hackers & Marketers
- Recruiters & Tech Scouts
- Product Managers & Innovation Teams
- AI & Automation Enthusiasts

What problem is this workflow solving?
Traditional product discovery on ProductHunt is constrained by limited descriptions and requires repeated manual validation through web searches. Manually extracting and enriching this data is slow, repetitive, and error-prone. This workflow solves the problem by:

- Extracting real-time ProductHunt data using Bright Data's MCP infrastructure to mimic real-user behavior and avoid blocks.
- Performing contextual Google searches for a specific ProductHunt product to gather use cases, reviews, and related information.
- Structuring results using the Google Gemini LLM to provide human-readable insights and reduce noise.
- Delivering results seamlessly by saving output to disk, updating Google Sheets, and sending Webhook alerts.

What this workflow does
- Input Field Node: define the ProductHunt category with the search term(s) you want to target. This drives the extraction and search operations.
- Agent Operation Node: the agent performs two major tasks:
  1. Extract from ProductHunt: retrieves trending products from ProductHunt using Bright Data MCP.
  2. Contextual Google Search: for each product, the agent searches Google for deeper context, including reviews, competitor mentions, and real-world usage examples.
- LLM Node (Google Gemini):
  - Analyzes and summarizes the extracted web content
  - Removes noise (ads, menus, etc.)
  - Structures content into bullet points, insights, or JSON objects

Pre-conditions
- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup
1. Set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect an MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to copy the Bright Data API_TOKEN into the Environments textbox as API_TOKEN=<your-token>.

How to customize this workflow to your needs
This workflow is flexible and modular, allowing you to adapt it for various research, product discovery, or trend analysis use cases. Below are the key customization points and how to modify them.
- Define Your Target Products or Topics: change the input parameter to a specific ProductHunt category, tag, or keyword (e.g., "AI tools", "SaaS", "DevOps").
- Change Output Destinations:
  - Save to Disk: change the file format (.json, .csv, .md) or directory path
  - Google Sheet: modify the sheet name and structure (columns like Product, Summary, Link)
  - Webhook Notification: point to a Slack/Discord/CRM/Webhook URL with payload mapping (see the sketch below)
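As an illustration of the payload mapping, a Code node placed before the webhook call might shape each enriched product like this. The input field names (product, summary, link) are assumptions based on the sheet columns mentioned above; align them with your agent's actual output.

```javascript
// Sketch: map each enriched product to a webhook payload. Input field
// names (product, summary, link) are assumptions based on the Google
// Sheet columns above; match them to your agent output.
const payloads = $input.all().map(item => ({
  json: {
    product: item.json.product,
    summary: item.json.summary,
    link: item.json.link,
    source: 'producthunt-mcp-workflow', // static tag for downstream filtering
    extractedAt: new Date().toISOString(),
  },
}));

return payloads;
```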
by Ibrahim Malick
⚠️ This template uses only official n8n nodes. No community nodes required.

🧑‍💼 Who is this for?
This workflow is designed for:
- Legal tech founders
- Marketing freelancers or consultants
- Agencies supporting lawyers and small law firms
- Anyone doing outbound outreach in the legal niche

❓ What problem is this solving?
LinkedIn is a goldmine for targeting legal professionals — but scraping and personalizing outreach is tedious and expensive. Most tools either:
- Require paid LinkedIn Sales Navigator
- Can't personalize at scale
- Violate LinkedIn's TOS

This workflow solves that by using free Google Search, OpenRouter AI, and GPT-4o to find, enrich, and message up to 1,000 solo lawyers per day — without using browser automation or scrapers.

⚙️ What this workflow does
1. Uses Google Programmable Search to find solo lawyers and small firm founders on LinkedIn (see the search sketch at the end of this description)
2. Parses each profile's name, title, profile URL, and snippet
3. Saves raw lead data to Google Sheets
4. Uses OpenRouter Sonar Pro to enrich each profile with external content
5. Generates a personalized, one-line message using GPT-4o
6. Appends the final message to Google Sheets for outreach

🛠️ Setup
Estimated time: 15–20 minutes

✅ Google Programmable Search
- Enable the Custom Search API on Google Cloud
- Create a programmable search engine set to search the full web
- Copy your API key and CX ID

✅ Google Sheets
- Create a sheet with columns: Name, Title, Profile URL, Outreach Message
- Share the sheet with your OAuth-connected Google account

✅ OpenRouter
- Sign up at openrouter.ai
- Fund with at least $5 and generate your API key
- Use the model perplexity/sonar-pro for real-time research

✅ GPT-4o (optional)
- You can use your OpenAI key or route GPT-4o via OpenRouter

All setup-specific values are marked clearly in sticky notes and placeholders.

🛠️ How to customize this workflow to your needs
- Change the Google search query to match your industry (e.g., "founder" AND "therapist" site:linkedin.com/in)
- Modify the AI prompt to match your tone (formal, casual, humorous)
- Connect the final output to your CRM (like HubSpot, Airtable, etc.)
- Add a second outreach message variant to A/B test performance

📌 Sticky Notes & Annotations
- All nodes are clearly renamed for understandability (e.g., Find Lawyer Profiles, Parse LinkedIn Search Results)
- Color-coded sticky notes explain setup instructions, required credentials, and the use case

🗂 Category
AI, Sales, Marketing
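As a sketch of the search step, the workflow's "Find Lawyer Profiles" call maps onto Google's Custom Search JSON API like this. The query string is only an example; KEY and CX come from your Google Cloud setup, and the name-splitting heuristic is an assumption about LinkedIn's title format.

```javascript
// Sketch of the Custom Search API call behind "Find Lawyer Profiles".
// The query is an example; key and cx come from your Google Cloud setup.
const query = '("attorney" OR "lawyer") "founder" site:linkedin.com/in';
const url = 'https://www.googleapis.com/customsearch/v1'
  + `?key=${process.env.GOOGLE_API_KEY}`
  + `&cx=${process.env.GOOGLE_CX_ID}`
  + `&q=${encodeURIComponent(query)}`;

const res = await fetch(url);
const data = await res.json();

// Each result carries the fields the workflow parses into the sheet.
const leads = (data.items ?? []).map(item => ({
  name: item.title.split(' - ')[0], // heuristic: titles are often "Name - Title - Company"
  title: item.title,
  profileUrl: item.link,
  snippet: item.snippet,
}));
```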
by Jay Hartley
What this template does
This workflow collects order data as it is produced, then sends a summary email of all orders at the end of every day, formatted as a table.

1. It receives new orders via webhook and stores them in Airtable.
2. At 7PM every day, it sends a summary email with the day's orders in an HTML table.

Setup: Instructions Video
1. Create a new table in Airtable and give it a field time with type date, orderID with type number, and orderPrice also with type number.
2. Create a new access token if you haven't already at https://airtable.com/create/tokens/new. Make sure to give the token the scopes data.records:read, data.records:write, schema.bases:read and access to whichever table you choose to store the orders. A pop-up window appears with the token. Use this token to create a new Access Token credential for Airtable in the Store Order and Airtable Get Today's Orders nodes.
3. Create access credentials for your Gmail as described here: https://developers.google.com/workspace/guides/create-credentials. Use the credentials from your client_secret.json in the Send to Gmail node.
4. In the Store Order node, change Base and Table to the base and table in your Airtable account you wish to use to store orders. Make sure to use these same values in the Airtable Get Today's Orders node.
5. Every time an order is created in your system, send a POST request to the Webhook from your order software. Each request must contain a single order with fields orderID and orderPrice (or edit Set Order Fields to select which incoming fields you wish to save).
6. Change the schedule time for sending the email from Everyday at 7PM to whichever time you choose.

Test
1. Activate the workflow.
2. From the Webhook node, copy the Production URL.
3. Send the following curl request to the URL given to you:

```bash
curl -X POST -H "Content-Type: application/json" -d '{"orderID": 12345, "orderPrice": 99.99}' YOUR_URL_HERE
```

It should say Node executed successfully. Now check your Airtable and confirm the order was stored in the right place. A sketch of the HTML table formatting follows below.
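For reference, the table-formatting step can be thought of as a Code node like this minimal sketch. The record field names follow the Airtable columns above, but the exact shape of the Airtable node's output is an assumption to verify.

```javascript
// Minimal sketch: turn today's Airtable records into an HTML table for
// the summary email. Field names follow the Airtable columns above; the
// exact output shape of the Airtable node is an assumption to verify.
const rows = $input.all().map(item => {
  const r = item.json;
  return `<tr><td>${r.time}</td><td>${r.orderID}</td><td>${r.orderPrice}</td></tr>`;
}).join('');

const html = `
  <table border="1" cellpadding="4">
    <tr><th>Time</th><th>Order ID</th><th>Price</th></tr>
    ${rows}
  </table>`;

return [{ json: { html } }];
```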
by Joachim Hummel
This n8n workflow automates posting Amazon affiliate products to Mastodon — complete with image upload, description, and a shortened tracking URL using Shlink.

🔧 How it works
1. Input Source: The workflow starts by reading from a connected Google Sheet that contains:
   - Shlink (shortlink)
   - Amazon Link
   - Description (optional)
   - PicURL
   - Send (YES/NO, used as a flag to check whether the row was already posted)
2. Image Upload: It fetches the product image via HTTP and uploads it directly to a Mastodon instance via the /media API endpoint.
3. URL Shortening (Shlink): The original Amazon URL is shortened using your self-hosted or cloud-hosted Shlink instance to enable click tracking and better presentation (see the sketch at the end of this description).
4. Text Generation: A two-line promotional text is automatically generated using a language model (LLM), based on the product description.
5. Posting to Mastodon: The post is then published on Mastodon with the image, the generated text, and the shortened Shlink URL.
6. Row Update: Once published, the Send column in the Google Sheet is updated to "YES" to prevent duplicates.

Requirements
✅ Shlink – required for shortening and tracking Amazon URLs
✅ Google Sheet – used as a product queue and posting log
✅ Google Sheet example: https://link.unixweb.home64.de/w7VqY
✅ Mastodon account – OAuth2 credentials with write scope
✅ Product image URL – must be valid and accessible
✅ n8n credentials – set up for Google Sheets, Mastodon, and optionally OpenRouter or other LLM providers

This workflow is ideal for content creators, affiliate marketers, and automation fans who want to save time and optimize reach across the Fediverse.

#affiliate #amazon #mastodon #advertisement
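As a rough sketch of the Shlink call, the HTTP Request node posts the long Amazon URL with an API key. The /rest/v3/short-urls path follows Shlink's REST API, but verify the version segment against your Shlink installation; the instance URL and affiliate link are placeholders.

```javascript
// Sketch: create a tracked short URL via Shlink's REST API. Verify the
// API version path (v3 here) against your Shlink installation.
const res = await fetch('https://your-shlink-instance.example/rest/v3/short-urls', {
  method: 'POST',
  headers: {
    'X-Api-Key': process.env.SHLINK_API_KEY, // from your Shlink admin panel
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    longUrl: 'https://www.amazon.com/dp/EXAMPLE?tag=your-affiliate-tag', // placeholder
  }),
});

const { shortUrl } = await res.json(); // use this in the Mastodon post
```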
by DigiMetaLab
A smart personal assistant that can reason, search, calculate, and remember — powered by Google Gemini and ready in one click. Most AI agents only respond; this one thinks before replying, pulls in real-time facts, does the math, and even remembers the last 5 things you said.

🔧 How it works
This template builds a conversational agent using the Google Gemini API. It uses multiple tools:
- 🧠 Think – to reason step-by-step
- 🔍 SerpAPI – to search live data on Google
- ➗ Calculator – to solve math problems
- 💾 Memory – to remember short-term chat history

You can embed this agent into a chatbot or web app, or automate any customer support, research, or productivity workflow.

🧠 Your agent will:
- Understand what you're asking
- Think logically using the Think tool
- Search facts in real-time using SerpAPI
- Calculate numbers using a math engine
- Recall past context using a memory buffer
- And respond clearly — just like a real assistant

🧑‍💼 Who is this template for?
This template is ideal for:
- Creators & developers building AI agents
- Teams needing a Gemini-powered assistant
- Beginners exploring LangChain + n8n
- Anyone curious about combining LLMs + tools + memory

🚀 How to set up
1. Plug in your Google Gemini API key
2. Add your SerpAPI key
3. Run the workflow and start chatting!

Everything is pre-wired for you — just import and go.

📬 Use cases
You can connect this agent to:
- Telegram bots 🤖
- WhatsApp via Twilio 📱
- Slack, Discord, or Gmail 💬
- Or just trigger it inside n8n manually 🔁

👉 Check out my other templates: https://n8n.io/creators/digimetalab
by Alexander K.
Transform your creative sparks into professional Instagram Reel scripts instantly! This AI-powered workflow takes your raw ideas (text or voice messages) via Telegram and generates complete, viral-ready Reel scenarios with hooks, scripts, captions, and visual concepts.

Who is this template for?
This template is perfect for:
- Content creators looking to streamline their Reel production process
- Social media managers who need to generate multiple Reel concepts quickly
- Marketing professionals seeking data-driven, psychology-based content strategies
- Influencers and entrepreneurs who want to maintain consistent, engaging content
- Small business owners looking to create viral marketing content without hiring expensive copywriters
- Anyone who struggles with writer's block or wants to improve their Instagram engagement

What this template does
This comprehensive workflow provides a complete Reel creation assistant that:

🎯 Accepts Multiple Input Types:
- Text messages with your Reel ideas
- Voice notes that get automatically transcribed
- Processes ideas in real-time through Telegram

🧠 AI-Powered Content Generation:
- Creates 3 attention-grabbing hook variants designed to stop the scroll
- Generates a complete 30–60 second script with Hook, Subtitle, Body, and Call-to-Action
- Writes engaging captions that complement (not repeat) your video content
- Provides specific visual concepts with cinematic direction for filming

📊 Smart Features:
- Memory system that remembers your conversation context for personalized suggestions
- Optional Google Sheets integration to automatically log and organize all your Reel ideas
- Error handling for a seamless user experience
- Instant delivery of results back to your Telegram chat

🎨 Professional Quality Output:
- Scripts based on proven marketing psychology from industry legends
- Hooks designed using viral content strategies
- Visual concepts that are specific and actionable (not generic "film yourself" advice)
- Captions optimized for engagement and shareability

Sample Results
Input (Idea): How I Saved 10 Hours a Week with Blog Automation?

Output:
💡 Hook (variants):
- "Blogging doesn't have to be a time-suck. Here's my secret…"
- "Is blogging eating up your spare time? Let's fix that!"
- "Unlock 10 hours a week AND keep your blog thriving!"

🎬 Script:
- Hook: "Blogging doesn't have to be a time-suck. Here's my secret…"
- Subtitle: "Maximize your time with blog automation hacks!"
- Body: "Picture this: writing, editing, posting, and promoting your blog without breaking a sweat. I was buried under endless tasks until I discovered blog automation. Scheduling posts, auto-publishing, automating social shares—it's a game-changer. 10 hours a week, back in my pocket! More time for creativity or even a break. Imagine what automation could do for your content game."
- CTA: "Which blog task do you wish was automated? Drop a comment!"

📝 Reel Caption: Blog automation isn't just convenience—it's freedom. What will you create with your extra time?

📸 Visual Idea: Open with a whirlwind of papers and sticky notes symbolizing chaos. Transition to a person seamlessly typing on a laptop, where blog posts are auto-scheduled. Quick cuts show blog shares and responses happening automatically. Conclude with a serene scene: the person outdoors, notebook in hand, jotting ideas peacefully on a sunny day.
Setup Instructions

Prerequisites:
- Telegram account
- OpenAI API account with GPT-4 access
- Google account (optional, for logging ideas)

Step 1: Create Your Telegram Bot
1. Open Telegram and search for @BotFather
2. Send /newbot and follow the instructions to create your bot
3. Save the Bot Token you receive - you'll need this for n8n
4. Send /setprivacy to @BotFather, select your bot, and choose "Disable" to allow the bot to read all messages

Step 2: Get Your OpenAI API Key
1. Visit OpenAI's API platform
2. Create an account or log in
3. Navigate to the API Keys section
4. Create a new API key and save it securely
5. Ensure you have access to GPT-4 models (required for optimal results)

Step 3: Configure the Workflow
1. Import this template into your n8n instance
2. Set up Telegram credentials: add your Bot Token to all Telegram nodes and test the connection
3. Configure OpenAI credentials: add your API key to the "OpenAI Chat Model" and "Transcribes audio" nodes, and verify GPT-4o model access
4. Optional - Google Sheets setup: create a new Google Sheet with columns Status, Date, Description, Script; connect your Google account to the "Google Sheets" node; select your spreadsheet and sheet (a sketch of the row mapping appears at the end of these instructions)

Step 4: Activate and Test
1. Click "Activate" in the top-right corner of your workflow
2. Open Telegram and find your bot
3. Send a test message like "Create a Reel about morning routines"
4. Verify you receive a complete Reel scenario response

Step 5: Start Creating!
- Send text ideas directly to your bot
- Record voice notes with your concepts
- Receive professional Reel scenarios within seconds
- Use the optional Google Sheets integration to build your content library

Pro Tips:
- Be specific with your ideas for better results
- The AI remembers your conversation context, so you can refine ideas iteratively
- Voice messages work great for capturing spontaneous ideas on the go
- Review the generated visual concepts - they're designed to be immediately actionable

Troubleshooting:
- Ensure your OpenAI account has sufficient credits
- Verify your Telegram bot privacy settings allow message reading
- Check that all credentials are properly configured and tested
- For Google Sheets issues, confirm the sheet structure matches the expected columns

This template transforms the tedious process of content creation into an instant, AI-powered system that delivers professional-quality Reel scenarios whenever inspiration strikes!
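If you use the Google Sheets log, the row mapping amounts to something like the Code-node sketch below. The property names on the incoming item are assumptions; align them with your AI Agent node's actual response fields.

```javascript
// Sketch: map the agent's output onto the Google Sheet columns
// (Status, Date, Description, Script). The input property names are
// assumptions; align them with your AI Agent node's actual output.
const idea = $input.first().json;

return [{
  json: {
    Status: 'New',
    Date: new Date().toISOString().slice(0, 10), // YYYY-MM-DD
    Description: idea.description ?? idea.originalIdea,
    Script: idea.script, // the full Hook/Subtitle/Body/CTA text
  },
}];
```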
by Samuel Kimutai
How it works
- Automatically generates trending LinkedIn content topics using AI
- Researches current industry angles and hooks
- Writes posts in your authentic voice using OpenAI
- Creates professional images with DALL-E
- Posts everything on schedule without manual intervention

Set up steps
1. Connect the OpenAI API for content generation and image creation
2. Link the LinkedIn API for automated posting
3. Configure scheduling triggers (daily/weekly posting)
4. Customize prompts to match your writing style and industry
5. Set up content approval workflows (optional)

Results you can expect
- 400% increase in profile views within 3 weeks
- Generate 120+ posts per month vs. 12 manual posts
- Free up 15+ hours weekly for revenue-generating activities
- A consistent posting schedule that builds audience engagement
- Professional content that converts followers to clients

Time to set up: 30–45 minutes
Technical level: Beginner to intermediate
APIs required: OpenAI, LinkedIn API
Cost: OpenAI usage fees only (approximately $5–15/month)

This workflow transforms LinkedIn content creation from a time-consuming daily task into a fully automated system that works while you sleep. Perfect for entrepreneurs, marketers, and content creators who want a consistent LinkedIn presence without the manual effort.