by Kanaka Kishore Kandregula
Daily Magento 2 Stock Check Automation

This workflow identifies SKUs with low inventory per source and sends daily alerts via:
- 📬 Gmail (HTML email)
- 💬 Slack (formatted text message)

This automation empowers store owners and operations teams to stay ahead of inventory issues by proactively monitoring stock levels across all Magento 2 sources. By receiving early alerts for low-stock products, businesses can restock before items sell out—ensuring continuous product availability, reducing missed sales opportunities, and maintaining customer trust. Avoiding stockouts not only protects your brand reputation but also keeps your store competitive by preventing customers from turning to competitors due to unavailable items. Timely restocking leads to higher fulfillment rates, improved customer satisfaction, and ultimately stronger revenue and long-term loyalty.

✅ Features
- Filters out configurable, virtual, and downloadable products
- Uses Magento 2 MSI stock per source
- Customizable thresholds (default: ≤10 overall or ≤5 per source; see the sketch at the end of this description)
- HTML-formatted email report
- Slack notification with a code-formatted message
- Runs daily via Cron (08:50 AM)
- No need for any 3rd-party modules
- One-time setup

🔑 Credentials Used
- HTTP Request (Magento 2 REST API using a Bearer token)
- Gmail (OAuth2)
- Slack (OAuth2 or Webhook)

📊 Tags
Magento, Inventory, MSI, Stock Alert, Ecommerce, Slack, Gmail, Automation

📂 Category
E-commerce → Magento 2 (Adobe Commerce)

👤 Author
Kanaka Kishore Kandregula
Certified Magento 2 Developer
https://gravatar.com/kmyprojects
https://www.linkedin.com/in/kanakakishore
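For reference, the threshold check could be expressed as an n8n Code-node sketch like the one below. It is only an illustration of the defaults listed above; field names such as `type_id`, `source_code`, `quantity`, and `total_quantity` are assumptions based on Magento's MSI REST API, not values taken from the actual workflow.

```javascript
// Hedged sketch of the low-stock filter. Assumes each incoming item holds one
// SKU/source row with quantity data and a product type field (assumptions).
const OVERALL_THRESHOLD = 10;   // total stock across all sources
const PER_SOURCE_THRESHOLD = 5; // stock at any single source
const SKIP_TYPES = ['configurable', 'virtual', 'downloadable'];

const alerts = [];
for (const item of items) {
  const { sku, type_id, source_code, quantity, total_quantity } = item.json;
  if (SKIP_TYPES.includes(type_id)) continue; // alert only on simple/physical products
  if (total_quantity <= OVERALL_THRESHOLD || quantity <= PER_SOURCE_THRESHOLD) {
    alerts.push({ json: { sku, source_code, quantity, total_quantity } });
  }
}
return alerts;
```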
by Chad McGreanor
Overview

This workflow automates LinkedIn posts using OpenAI. The prompts are stored in the workflow and can be customized to fit your needs. The workflow combines a Schedule Trigger, a Code node that checks the day of the week (no posting Friday to Sunday), a Prompts node that stores your OpenAI prompts, and a random prompt selection so your content doesn't look repetitive. The selected prompt is sent to the OpenAI API, a random posting time is chosen, the draft post is sent to your Telegram for approval, and once approved the workflow waits for the correct time slot and then posts to your LinkedIn account using the LinkedIn node.

How it works:
1. Run or schedule the workflow in n8n: the automation can be triggered manually or on a custom schedule (excluding weekends if needed). You should customize the prompts in the Prompts node to suit your needs.
2. A random LinkedIn post prompt is selected: pre-written prompts are rotated to keep content fresh and non-repetitive (see the sketch at the end of this description).
3. OpenAI generates the LinkedIn post: the prompt is sent to OpenAI via API, and the result is returned in clean, ready-to-use form.
4. You receive the draft via Telegram: the post is sent to Telegram for quick approval or review.
5. The post is scheduled or published via the LinkedIn connector: once approved, the workflow delays until the target time, then sends the content to LinkedIn.

What's needed: an OpenAI API key, a LinkedIn account, and a Telegram account. For Telegram you will need to configure the bot service.

Step-by-Step: Telegram Approval for Your Workflow

A. Set Up a Telegram Bot
1. Open Telegram and search for "@BotFather".
2. Start a chat and type /newbot to create a bot.
3. Give your bot a name and a unique username (e.g., YourApprovalBot).
4. Copy the API token that BotFather gives you.

B. Add Your Bot to a Private Chat (with You)
1. Find your bot in Telegram and click "Start" to activate it.
2. Send a test message (like "hello") so the chat is created.

C. Get Your User ID
1. Search for "@userinfobot" in Telegram.
2. Type /start and it will reply with your Telegram user ID.

OpenAI powers the LinkedIn post creation

Add your OpenAI API key:
1. Log in to your OpenAI Platform account: https://platform.openai.com/.
2. Go to API keys and create a new secret key.
3. In n8n, create a new "OpenAI API" credential, paste your API key, and give it a name.
4. Apply the credential to the OpenAI Message node.

Connect your LinkedIn account to the LinkedIn node and select your account from the LinkedIn dropdown box.
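A minimal Code-node sketch of the weekday check and random prompt selection described above. The prompt strings here are placeholders, not the prompts stored in the actual workflow.

```javascript
// Hedged sketch: skip Friday to Sunday and pick one prompt at random.
// Replace the placeholder prompts with the ones configured in your Prompts node.
const prompts = [
  'Write a LinkedIn post about a lesson learned from automating a business process.',
  'Write a LinkedIn post sharing a practical n8n workflow tip.',
  'Write a LinkedIn post about the ROI of small automations.',
];

const day = new Date().getDay(); // 0 = Sunday ... 6 = Saturday
if (day === 0 || day === 5 || day === 6) {
  return []; // no posting Friday to Sunday: returning no items stops the run
}

const prompt = prompts[Math.floor(Math.random() * prompts.length)];
return [{ json: { prompt } }];
```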
by Einar César Santos
🧠 Long-Term Memory System for AI Agents with Vector Database

Transform your AI assistants into intelligent agents with persistent memory capabilities. This production-ready workflow implements a sophisticated long-term memory system using vector databases, enabling AI agents to remember conversations, user preferences, and contextual information across unlimited sessions.

🎯 What This Template Does
This workflow creates an AI assistant that never forgets. Unlike traditional chatbots that lose context after each session, this implementation uses vector database technology to store and retrieve conversation history semantically, providing truly persistent memory for your AI agents.

🔑 Key Features
- **Persistent Context Storage**: Automatically stores all conversations in a vector database for permanent retrieval
- **Semantic Memory Search**: Uses advanced embedding models to find relevant past interactions based on meaning, not just keywords
- **Intelligent Reranking**: Employs Cohere's reranking model to ensure the most relevant memories are used for context
- **Structured Data Management**: Formats and stores conversations with metadata for optimal retrieval
- **Scalable Architecture**: Handles unlimited conversations and users with consistent performance
- **No Context Window Limitations**: Effectively bypasses LLM token limits through intelligent retrieval

💡 Use Cases
- **Customer Support Bots**: Remember customer history, preferences, and previous issues
- **Personal AI Assistants**: Maintain user preferences and conversation continuity over months or years
- **Knowledge Management Systems**: Build accumulated knowledge bases from user interactions
- **Educational Tutors**: Track student progress and adapt teaching based on history
- **Enterprise Chatbots**: Maintain context across departments and long-term projects

🛠️ How It Works
1. User Input: Receives messages through n8n's chat interface
2. Memory Retrieval: Searches the vector database for relevant past conversations
3. Context Integration: AI agent uses retrieved memories to generate contextual responses
4. Response Generation: Creates informed responses based on historical context
5. Memory Storage: Stores new conversation data for future retrieval

📋 Requirements
- **OpenAI API Key**: For embeddings and chat completions
- **Qdrant Instance**: Cloud or self-hosted vector database
- **Cohere API Key**: Optional, for enhanced retrieval accuracy
- **n8n Instance**: Version 1.0+ with LangChain nodes

🚀 Quick Setup
1. Import this workflow into your n8n instance
2. Configure credentials for OpenAI, Qdrant, and Cohere
3. Create a Qdrant collection named 'ltm' with 1024 dimensions (one way to do this is shown below)
4. Activate the workflow and start chatting!
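One way to create that collection before activating the workflow is a single call to Qdrant's REST API. The sketch below is a one-off Node.js 18+ script; the URL and API key are placeholders, and Cosine distance is an assumption (match it to whatever your embedding setup expects).

```javascript
// Hypothetical setup script for the 'ltm' collection; adapt values to your instance.
const QDRANT_URL = 'https://your-qdrant-instance:6333';
const QDRANT_API_KEY = 'your-api-key';

async function createLtmCollection() {
  const res = await fetch(`${QDRANT_URL}/collections/ltm`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json', 'api-key': QDRANT_API_KEY },
    // 1024 dimensions must match the embedding model configured in the workflow;
    // Cosine distance is an assumption, not something the template prescribes.
    body: JSON.stringify({ vectors: { size: 1024, distance: 'Cosine' } }),
  });
  console.log(await res.json());
}

createLtmCollection();
```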
📊 Performance Metrics
- **Response Time**: 2-3 seconds average
- **Memory Recall Accuracy**: 95%+
- **Token Usage**: 50-70% reduction compared to full context inclusion
- **Scalability**: Tested with 100k+ stored conversations

💰 Cost Optimization
- Uses GPT-4o-mini for optimal cost/performance balance
- Implements efficient chunking strategies to minimize embedding costs
- Reranking can be disabled to save on Cohere API costs
- Average cost: ~$0.01 per conversation

📖 Learn More
For a detailed explanation of the architecture and implementation details, check out the comprehensive guide: Long-Term Memory for LLMs using Vector Store - A Practical Approach with n8n and Qdrant

🤝 Support
- **Documentation**: Full setup guide in the article above
- **Community**: Share your experiences and get help in the n8n community forums
- **Issues**: Report bugs or request features on the workflow page

Tags: #AI #LangChain #VectorDatabase #LongTermMemory #RAG #OpenAI #Qdrant #ChatBot #MemorySystem #ArtificialIntelligence
by Ranjan Dailata
Notice
Community nodes can only be installed on self-hosted instances of n8n.

Who this is for
Recipe Recommendation Engine with Bright Data MCP & OpenAI is a powerful automated workflow that combines Bright Data's MCP for scraping trending or regional recipe data with OpenAI GPT-4o mini to generate personalized recipe recommendations. This workflow is designed for:
- Food Bloggers & Culinary Creators: who want to automate the extraction and curation of recipes from across the web to generate content, compile cookbooks, or publish newsletters.
- Nutritionists & Health Coaches: who need structured recipe data to analyze ingredients, calories, and nutrition for personalized meal planning or dietary tracking.
- AI/ML Engineers & Data Scientists: building models that classify cuisines, predict recipes from ingredients, or generate dynamic meal suggestions using clean, structured datasets.
- Grocery & Meal Kit Platforms: who aim to extract recipes to power recommendation engines, ingredient lists, or personalized meal plans.
- Recipe Aggregator Startups: looking to scale recipe data collection, filtering, and standardization across diverse cooking websites with minimal human intervention.
- Developers Integrating Cooking Features: into apps or digital assistants that offer recipe recommendations, step-by-step cooking instructions, or nutritional insights.

What problem is this workflow solving?
- Automated recipe data extraction from any public URL
- AI-driven structured data extraction
- Scalable looped crawling and processing
- Real-time notifications and data persistence

What this workflow does
1. Set Recipe Extract URL: configure the recipe website URL in the input node and set your Bright Data zone name and authentication.
2. Paginated Data Extract: triggers a paginated extraction across multiple pages (recipe listing, index, or search pages) and returns a list of recipe links for processing.
3. Loop Over Items: loops through the array of recipe links; each link is passed individually to the scraping engine.
4. Bright Data MCP Client (per recipe): scrapes each individual recipe page using scrape_as_html and smartly bypasses common anti-bot protections via Bright Data Web Unlocker.
5. Structured Recipe Data Extract (via OpenAI GPT-4o mini): converts raw HTML to clean text using an LLM preprocessing node, then uses OpenAI GPT-4o mini to extract structured data (an illustrative example of this output appears after the Setup section below).
6. Webhook Notification: pushes the structured recipe data to your configured webhook endpoint (JSON payload, ideal for Slack, internal APIs, or dashboards).
7. Save Response to Disk: saves the structured recipe JSON to the local file system.

Pre-conditions
- You need a Bright Data account and the setup described in the "Setup" section below.
- You need an OpenAI account.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the OpenAI account credentials.
5. Make sure to set the fields in the Set the Recipe Extract URL node.
6. Remember to set the webhook_url to receive a webhook notification with the recipe response.
7. Set the desired local path in the Write the structured content to disk node to save the recipe response.
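For orientation, here is a hypothetical shape of the structured recipe record that the GPT-4o mini extraction step might emit, written as an n8n Code-node return value. The field names and sample values are illustrative assumptions; the actual schema depends on the prompt you configure.

```javascript
// Hypothetical structured recipe record; field names are assumptions, not the workflow's exact schema.
const recipe = {
  title: 'Classic Margherita Pizza',
  source_url: 'https://example.com/recipes/margherita-pizza',
  cuisine: 'Italian',
  servings: 4,
  prep_time_minutes: 20,
  cook_time_minutes: 15,
  ingredients: ['pizza dough', 'tomato sauce', 'fresh mozzarella', 'basil', 'olive oil'],
  instructions: ['Preheat oven to 250°C.', 'Stretch the dough and add toppings.', 'Bake for 12-15 minutes.'],
};

// This is the kind of payload that would be sent to the webhook and written to disk.
return [{ json: recipe }];
```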
How to customize this workflow to your needs
You can tailor the Recipe Recommendation Engine workflow to better fit your specific use case by modifying the following key components:
1. Input Fields Node: update the Recipe URL to target specific cuisine sites or recipe types (e.g., vegan, keto, regional dishes).
2. LLM Configuration: swap out the OpenAI GPT-4o mini model for another provider (like Google Gemini) if you prefer, and modify the structured data prompt to extract the custom fields you want.
3. Webhook Notification: configure the Webhook Notification node to point to your preferred integration (e.g., Slack, Discord, internal APIs).
4. Storage Destination: change the Save to Disk node to store the structured recipe data in a cloud bucket (S3, GCS, Azure Blob, etc.), a database (MongoDB, PostgreSQL, Firestore), or Google Sheets/Airtable for spreadsheet-style access.
by Tony Paul
How it works
Download the Google Sheet provided with the template, upload it to your own Google Sheets account, and point the Google Sheets nodes at your copy.
1. Scheduled trigger: runs once a day at 8 AM (server time).
2. Fetch product list: reads your "master" sheet (product_url + last known price) from Google Sheets.
3. Loop with delay: iterates over each row (product) one at a time, inserting a short pause (20 s) between HTTP requests to avoid blocking.
4. Scrape current price: loads each product_url and extracts the current price via a simple CSS selector.
5. Compare & normalize: compares the newly scraped price against the last_price from your sheet, calculates the percentage change, and tags items where price_changed == true (a sketch of this comparison appears at the end of this description).
6. On price change:
   - Send alert: formats a Telegram message ("Price Drop" or "Price Hike") and pushes it to your configured chat.
   - Log history: appends a new row to a separate "price_tracking" tab with timestamp, old price, new price, and % change.
   - Update master sheet: after a 1 min pause, writes the updated current_price back to your "master" sheet so future runs use it as the new baseline.

Set up steps
Google Sheets credentials (~5 min)
- Create a Google Sheets OAuth credential in n8n.
- Copy your sheet's ID and ensure you have two tabs:
  - product_data (columns: product_url, price)
  - price_tracking (columns: timestamp, product_url, last_price, current_price, price_diff_pct, price_changed)
- Paste the sheet ID into both Google Sheets nodes ("Read" and "Append/Update").

Telegram credentials (~5 min)
- Create a Telegram bot token via BotFather.
- Copy your chat_id (for your target group or personal chat).
- Add those credentials to n8n and drop them into the "Telegram" node.

Workflow parameters (~5 min)
- Verify the schedule in the Schedule Trigger node is set to 08:00 (or adjust to your preferred run time).
- In the Loop Over Items node, confirm "Batch Size" is 1 (to process one URL at a time).
- Adjust the "Delay to avoid Request Blocking" node if your site requires a longer pause (default is 20 s).
- In the "Parse Data From The HTML Page" node, double-check that the CSS selector matches how prices appear on your target site.

Once credentials are in place and your sheet tabs match the expected column names, the flow is ready to activate. Total setup time is under 15 minutes—detailed notes are embedded as sticky comments throughout the workflow to help you tweak selectors, change timeouts, or adjust sheet names without digging into code.
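A minimal Code-node sketch of the compare-and-normalize step. Field names (price as scraped, last_price from the sheet) are assumptions made to illustrate the calculation, not the exact expressions used by the workflow.

```javascript
// Hedged sketch: compute percentage change and the price_changed flag per item.
return items.map((item) => {
  // Strip currency symbols/whitespace before parsing (assumed input format).
  const lastPrice = parseFloat(String(item.json.last_price).replace(/[^0-9.]/g, ''));
  const currentPrice = parseFloat(String(item.json.price).replace(/[^0-9.]/g, ''));

  const priceDiffPct = lastPrice > 0 ? ((currentPrice - lastPrice) / lastPrice) * 100 : 0;
  const priceChanged = currentPrice !== lastPrice;

  return {
    json: {
      product_url: item.json.product_url,
      last_price: lastPrice,
      current_price: currentPrice,
      price_diff_pct: Number(priceDiffPct.toFixed(2)),
      price_changed: priceChanged,
    },
  };
});
```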
by Trung Tran
📝 Smart Vendor Contract Renewal & Reminder Workflow With GPT-4.1 mini

Never miss a vendor renewal again! This smart workflow automatically tracks expiring contracts, reminds your finance team via Slack, and helps initiate renewal with vendors through email — all with built-in approval and logging. Perfect for managing both auto-renew and manual contracts.

📌 Who's it for
This workflow is designed for Finance and Procurement teams responsible for managing vendor/service contracts. It ensures timely notifications for expiring contracts and automates the initiation of renewal conversations with vendors.

⚙️ How it works / What it does
⏰ Daily Trigger
- Runs every day at 6:00 AM using a scheduler.
📄 Retrieve Contract List
- Reads vendor contract data from a Google Sheet (or any data source).
- Filters for contracts nearing their end date, using a Notice Period (days) field (see the sketch at the end of this description).
🔀 Branch Based on Renewal Type
- Auto-Renew Contracts:
  - Compose a Slack message summarizing the auto-renewal.
  - Notify the finance contact via Slack.
- Manual Renewal Contracts:
  - Use an OpenAI-powered agent to generate a meaningful Slack message.
  - Send the message and wait for approval from the finance contact (e.g., within 8 hours).
  - Upon approval, generate a formal HTML email to the vendor.
  - Send the email to initiate the contract extension process.
📊 (Optional) Logging
- Can be extended to log all actions (Slack messages, emails, approvals) to Google Sheets or other databases.

🛠️ How to set up
1. Prepare your Google Sheet
   - Include the following fields: Vendor Name, Vendor Email, Service Type, Contract Start Date, Contract End Date, Notice Period (days), Renewal Type, Finance Contact, Contact Email, Slack ID, Contract Value, Notes.
   - Sample: https://docs.google.com/spreadsheets/d/1zdDgKyL0sY54By57Yz4dNokQC_oIbVxcCKeWJ6PADBM/edit?usp=sharing
2. Configure Integrations
   - 🟢 Google Sheets API: to read contract data.
   - 🔵 Slack API: to notify and wait for approval.
   - 🧠 OpenAI API (GPT-4): to generate personalized reminders.
   - ✉️ Email (SMTP/Gmail): to send emails to vendors.
3. Set the Daily Scheduler
   - Use a Cron node to trigger the workflow at 6:00 AM daily.

✅ Requirements
| Component | Required |
|----------------------------------|----------|
| Google Sheets API | ✅ |
| Slack API | ✅ |
| OpenAI API (GPT-4) | ✅ |
| Email (SMTP/Gmail) | ✅ |
| n8n (Self-hosted or Cloud) | ✅ |
| Contract Sheet with proper schema| ✅ |

🧩 How to customize the workflow
- **Adjust Reminder Period**: modify the logic in the Find Expiring Vendors node (based on Contract End Date and Notice Period).
- **Change Message Tone or Format**: customize the OpenAI agent's prompt or switch from plain text to a branded HTML email.
- **Add Logging or Tracking**: add a node to append logs to a Google Sheet, Notion, or database.
- **Replace Data Source**: swap out Google Sheets for Airtable, PostgreSQL, or other CRM/database systems.
- **Adjust Wait/Approval Duration**: modify the sendAndWait Slack node timeout (e.g., from 8 hours to 2 hours).

📦 Optional Extensions
- 🧾 Add PDF contract preview via Drive link
- 🧠 Use GPT to summarize renewal terms
- 🛠 Auto-create Jira task for contract review
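A hedged Code-node sketch of the "Find Expiring Vendors" check referenced above. Column names follow the sample sheet; ISO-style date parsing is an assumption, so adjust it to whatever date format your sheet uses.

```javascript
// Hedged sketch: keep contracts whose notice window has started but which have not yet expired.
const today = new Date();

const expiring = items.filter((item) => {
  const endDate = new Date(item.json['Contract End Date']);
  const noticeDays = Number(item.json['Notice Period (days)']) || 0;

  // Date on which the renewal conversation should start.
  const noticeStart = new Date(endDate);
  noticeStart.setDate(noticeStart.getDate() - noticeDays);

  return today >= noticeStart && today <= endDate;
});

return expiring;
```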
by Onur
Proactively retain customers predicted to churn with this automated n8n workflow. Running daily, it identifies high-risk customers from your Google Sheet, uses Google Gemini to generate personalized win-back offers based on their churn score and preferences, sends these offers via Gmail, and logs all actions for tracking.

What does this workflow do?
This workflow automates the critical process of customer retention by:
- **Running automatically every day** on a schedule you define.
- **Fetching customer data** from a designated Google Sheet containing metrics like predicted churn scores and preferred categories.
- **Filtering** to identify customers with a high churn risk (score > 0.7) who haven't recently received a specific campaign (based on the created_campaign_date field - you might need to adjust this logic).
- **Using Google Gemini AI** to dynamically generate one of three types of win-back offers, personalized based on the customer's specific churn score and preferred product categories:
  - Informational (score 0.7-0.8): highlights new items in preferred categories.
  - Bonus Points (score 0.8-0.9): offers points for purchases in a target category (e.g., Books).
  - Discount Percentage (score 0.9-1.0): offers a percentage discount in a target category (e.g., Books).
- **Sending the personalized offer** directly to the customer via **Gmail**.
- **Logging** each sent offer, or the absence of eligible customers for the day, in a separate 'SYSTEM_LOG' Google Sheet for monitoring and analysis.

Who is this for?
- **CRM Managers & Retention Specialists**: automate personalized outreach to at-risk customers.
- **Marketing Teams**: implement data-driven retention campaigns with minimal manual effort.
- **E-commerce Businesses & Subscription Services**: proactively reduce churn and increase customer lifetime value.
- **Anyone** using customer data (especially churn prediction scores) who wants to automate personalized retention efforts via email.

Benefits
- **Automated Retention**: set it up once, and it runs daily to engage at-risk customers automatically.
- **AI-Powered Personalization**: go beyond generic offers; tailor messages based on churn risk and customer preferences using Gemini.
- **Proactive Churn Reduction**: intervene *before* customers leave by addressing high churn scores with relevant offers.
- **Scalability**: handle personalized outreach for many customers without manual intervention.
- **Improved Customer Loyalty**: show customers you value them with relevant, timely offers.
- **Action Logging**: keep track of which customers received offers and when the workflow ran.

How it Works
1. Daily Trigger: the workflow starts automatically based on the schedule set (e.g., daily at 9 AM).
2. Fetch Data: reads all customer data from your 'Customer Data' Google Sheet.
3. Filter Customers: selects customers where predicted_churn_score > 0.7 AND created_campaign_date is empty (verify this condition fits your needs).
4. Check for Eligibility: determines if any customers passed the filter.
5. IF eligible customers are found:
   - Loop: processes each eligible customer one by one.
   - Generate Offer (Gemini): sends the customer's predicted_churn_score and preferred_categories to Gemini. Gemini analyzes these and the defined rules to create the appropriate offer type, value, title, and detailed message, returning it as structured JSON.
   - Log Sent Offer: records action_taken = SENT_WINBACK_OFFER, the timestamp, and customer_id in the 'SYSTEM_LOG' sheet.
   - Send Email: uses the Gmail node to send an email to the customer's user_mail with the generated offer_title as the subject and offer_details as the body.
6. IF no eligible customers are found:
   - Set Status: creates a record indicating system_log = NOT_FOUND.
   - Log Status: records this 'NOT_FOUND' status and the current timestamp in the 'SYSTEM_LOG' sheet.

n8n Nodes Used
- Schedule Trigger
- Google Sheets (x3 - Read Customers, Log Sent Offer, Log Not Found)
- Filter
- If
- SplitInBatches (used for looping)
- Langchain Chain - LLM (Gemini offer generation)
- Langchain Chat Model - Google Gemini
- Langchain Output Parser - Structured
- Set (prepare 'Not Found' log)
- Gmail (send offer email)

Prerequisites
- Active n8n instance (Cloud or Self-Hosted).
- **Google Account** with access to Google Sheets and Gmail.
- **Google Sheets API Credentials (OAuth2)**: configured in n8n.
- **Two Google Sheets**:
  - 'Customer Data' sheet: must contain columns like customer_id, predicted_churn_score (numeric), preferred_categories (string, e.g., ["Books", "Electronics"]), user_mail (string), and potentially created_campaign_date (date/string).
  - 'SYSTEM_LOG' sheet: should have columns like system_log (string), date (string/timestamp), and customer_id (string, optional for 'NOT_FOUND' logs).
- **Google Cloud Project** with the Vertex AI API enabled.
- **Google Gemini API Credentials**: configured in n8n (usually via Google Vertex AI credentials).
- **Gmail API Credentials (OAuth2)**: configured in n8n with permission to send emails.

Setup
1. Import the workflow JSON into your n8n instance.
2. Configure the Schedule Trigger: set the desired daily run time (e.g., Hours set to 9).
3. Configure the Google Sheets nodes: select your Google Sheets OAuth2 credentials for all three Google Sheets nodes.
   - 1. Fetch Customer Data...: enter your 'Customer Data' Spreadsheet ID and Sheet Name.
   - 5b. Log Sent Offer...: enter your 'SYSTEM_LOG' Spreadsheet ID and Sheet Name. Verify the column mapping.
   - 3b. Log 'Not Found'...: enter your 'SYSTEM_LOG' Spreadsheet ID and Sheet Name. Verify the column mapping.
4. Configure the Filter node (2. Filter High Churn Risk...): crucially, review the second condition: {{ $json.created_campaign_date.isEmpty() }}. Ensure this field and logic correctly identify customers who should receive the offer based on your campaign strategy. Modify or remove it if necessary.
5. Configure the Google Gemini nodes: select your configured Google Vertex AI / Gemini credentials in the Google Gemini Chat Model node. Review the prompt in the 5a. Generate Win-Back Offer... node to ensure the offer logic matches your business rules (especially category names like "Books"); a plain-code illustration of the tiering rules appears below.
6. Configure the Gmail node (5c. Send Win-Back Offer...): select your Gmail OAuth2 credentials.
7. Activate the workflow. Ensure your 'Customer Data' and 'SYSTEM_LOG' Google Sheets are correctly set up and populated. The workflow will run automatically at the next scheduled time.

This workflow provides a powerful, automated way to engage customers showing signs of churn, using personalized AI-driven offers to encourage them to stay. Adapt the filtering and offer logic to perfectly match your business needs!
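To make the offer rules concrete, here is a hedged JavaScript illustration of the tiering that the Gemini prompt encodes. In the actual workflow this decision is made by the model via the prompt, not by code; the function and label names below are illustrative only.

```javascript
// Illustrative only: the score thresholds mirror the rules described above.
function chooseOfferType(predictedChurnScore) {
  if (predictedChurnScore > 0.9) return 'DISCOUNT_PERCENTAGE'; // 0.9-1.0: percentage discount in a target category
  if (predictedChurnScore > 0.8) return 'BONUS_POINTS';        // 0.8-0.9: bonus points for purchases
  if (predictedChurnScore > 0.7) return 'INFORMATIONAL';       // 0.7-0.8: highlight new items in preferred categories
  return 'NO_OFFER';                                           // at or below 0.7 the customer is filtered out upstream
}

console.log(chooseOfferType(0.85)); // -> 'BONUS_POINTS'
```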
by Trung Tran
📒 Telegram Expense Tracker to Google Sheets with GPT-4.1

👤 Who's it for
This workflow is for anyone who wants to log their daily expenses by simply chatting with a Telegram bot. Ideal for:
- Individuals who want a quick way to track spending
- Freelancers who log receipts and purchases on the go
- Teams or small business owners who want lightweight expense capture

⚙️ How it works / What it does
1. The user sends a text message on Telegram describing an expense (e.g., "Bought coffee for 50k at Highlands").
2. The message format is validated:
   - If the message is text, it proceeds to GPT-4.1 Mini for processing.
   - If it's not text (e.g., an image or file), the bot sends a fallback message.
3. OpenAI GPT-4.1 Mini parses the message and returns:
   - relevant: true/false
   - expense_record: structured fields (date, amount, currency, category, description, source)
   - message: a friendly confirmation or fallback
4. If valid:
   - The bot replies with a fun acknowledgment.
   - The data is saved to a connected Google Sheet.
5. If invalid:
   - A fallback message is sent to encourage proper input.

🛠️ How to set up
1. Telegram Bot Setup
   - Create a bot using BotFather on Telegram.
   - Copy the bot token and paste it into the Telegram Trigger node.
2. Google Sheet Setup
   - Create a Google Sheet with these columns: Date | Amount | Currency | Category | Description | SourceMessage
   - Share the sheet with your n8n service account email.
3. OpenAI Configuration
   - Connect the OpenAI Chat Model node using your OpenAI API key.
   - Use GPT-4.1 Mini as the model.
   - Apply a system prompt that extracts structured JSON with: relevant, expense_record, and message (an illustrative example of this output appears at the end of this description).
4. Add Parser
   - Use the Structured Output Parser node to safely parse the JSON response.
5. Conditional Logic Nodes
   - Is text message? Checks if the message is in text format.
   - Supported scenario? Checks if relevant = true in the LLM response.
6. Final Actions
   - If relevant: send a confirmation via Telegram and append a row to the Google Sheet.
   - If not relevant: send a fallback message via Telegram.

✅ Requirements
- Telegram bot token
- OpenAI GPT-4.1 Mini API access
- n8n instance (self-hosted or cloud)
- Google Sheet with access granted to n8n
- Basic understanding of n8n node configuration

🧩 How to customize the workflow
| Feature | How to Customize |
|----------------------------------|-------------------------------------------------------------------|
| Add multi-currency support | Update system prompt to detect and extract different currencies |
| Add more categories | Modify the list of categories in the system prompt |
| Track multiple users | Add username or chat ID column to the Google Sheet |
| Trigger alerts | Add Slack, Email, or Telegram alerts for specific expense types |
| Weekly summaries | Use a cron node + Google Sheet query + Telegram message |
| Visual dashboards | Connect the sheet to Looker Studio or Google Data Studio |

Built with 💬 Telegram + 🧠 GPT-4.1 Mini + 📊 Google Sheets + ⚡ n8n
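A hedged sketch of what the parsed LLM output might look like for a message such as "Bought coffee for 50k at Highlands". The values and currency are made up for illustration; the exact schema comes from the system prompt you configure.

```javascript
// Illustrative example of the structured response the parser expects; values are hypothetical.
const parsed = {
  relevant: true,
  expense_record: {
    date: '2024-05-18',
    amount: 50000,
    currency: 'VND',
    category: 'Food & Drink',
    description: 'Coffee at Highlands',
    source: 'Bought coffee for 50k at Highlands',
  },
  message: 'Got it! Coffee for 50,000 VND logged under Food & Drink.',
};

// If relevant were false, expense_record could be omitted and message would carry the fallback text.
return [{ json: parsed }];
```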
by Aryan Shinde
How it works
This workflow automates the process of creating, approving, and optionally posting LinkedIn content from a Google Sheet. Here's a high-level overview:
1. Scheduled Trigger: runs automatically based on your defined time interval (daily, weekly, etc.).
2. Fetch Data from Google Sheets: pulls the first row from your sheet where Status is marked as Pending (an optional Code-node sketch of this selection appears at the end of this description).
3. Generate LinkedIn Post Content: uses OpenAI to create a professional LinkedIn post from the Post Description and Instructions in the sheet.
4. Format & Prepare Data: formats the generated content along with the original instruction and post description for email.
5. Send for Approval: sends an email to a predefined user (e.g., the marketing team) with a custom form for approval, including a dropdown to accept/reject and an optional field for edits.
6. (Optional) Image Fetch: downloads an image from a URL (if provided in the sheet) for future use in post visuals.

Set up steps
You'll need the following before you start:
- A Google Sheet with the following columns: Post Description, Instructions, Image (URL), Status
- Access to an OpenAI API key
- A connected Gmail account for sending approval emails
- Your own Google Sheets and Gmail credentials added in n8n

Steps:
1. Google Sheet Preparation
   - Create a new Google Sheet with the mentioned columns (Post Description, Instructions, Image, Status, Output, Post Link).
   - Add a row with test data and set Status to Pending.
2. Credentials
   - In n8n, create credentials for: a. Google Sheets (OAuth2), b. Gmail (OAuth2), c. OpenAI (API key).
   - Assign these credentials to the respective nodes in the JSON.
3. OpenAI Model
   - Choose a model like gpt-4o-mini (used here) or any other available in your plan.
   - Adjust the prompt in the "Generate Post Content" node if needed.
4. Email Configuration
   - In the Gmail node, set the recipient email to your own or your team's address.
   - Customize the email message template if necessary.
5. Schedule the Workflow
   - Set the trigger interval (e.g., every morning at 9 AM).
6. Testing
   - Run the workflow manually first to confirm everything works.
   - Check Gmail for the approval form, respond, and verify the results.
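If you prefer to do the Pending-row selection in a Code node rather than in the Google Sheets node's own filter, a minimal sketch (column names match the sheet described above) could look like this:

```javascript
// Hedged sketch: keep only the first row whose Status is "Pending".
// Assumes the Google Sheets node has already returned all rows as items.
const pending = items.find((item) => item.json.Status === 'Pending');

// Returning an empty array stops the run when nothing is waiting for approval.
return pending ? [pending] : [];
```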
by Anna Bui
🎯 Universal Meeting Transcript to LinkedIn Content

Automatically transform your meeting insights into engaging LinkedIn content with AI. Perfect for coaches, consultants, sales professionals, and content creators who want to share valuable insights from their meetings without the manual effort of content creation.

How it works
1. A calendar trigger detects when your coaching/meeting ends.
2. Waits for meeting completion, then sends you a form via email.
3. You provide the meeting transcript and specify post preferences.
4. AI analyzes the transcript using your personal brand guidelines.
5. Generates professional LinkedIn content based on real insights.
6. Creates organized Google Docs with both the transcript and the final post.
7. Sends you links to review and publish your content.

How to use
1. Connect your Google Calendar and Gmail accounts.
2. Update the calendar filter to match your meeting types.
3. Customize the AI prompts with your brand voice and style.
4. Replace email addresses with your own.
5. Test with a sample meeting transcript.

Requirements
- Google Calendar (for meeting detection)
- Gmail (for form delivery and notifications)
- Google Drive & Docs (for content storage)
- LangChain AI nodes (for content generation)

Good to know
- AI processing may incur costs based on your LangChain provider.
- Works with any meeting platform - just copy/paste transcripts.
- Can be adapted to use webhooks from recording tools like Fireflies.ai.
- Memory nodes store your brand guidelines for consistent output.

Happy Content Creating!
by Robert Breen
This n8n workflow scrapes recent Instagram posts by hashtag and generates new, relevant caption ideas using OpenAI. It avoids making up suggestions by analyzing real-world content and surfacing common patterns.

✅ Use Case
Marketing teams, content creators, or social media managers can:
- Discover what's trending for a specific topic
- Automatically generate Instagram captions based on real posts
- Understand common caption styles for a niche
- Save time brainstorming ideas while staying on-brand

🧠 How It Works

1️⃣ Manual Trigger
🧩 Node: When clicking 'Execute workflow'
Manually starts the workflow for testing or single-run execution.

2️⃣ Define the Hashtag
🧩 Node: Create Search Term
Sets the value of the hashtag you'd like to scan. The default is n8n, but you can change it to anything:
{ "Search_Term": "yourCustomHashtag" }

3️⃣ Scrape Instagram Posts
🧩 Node: Find Recent Posts
API: Apify Instagram Hashtag Scraper
Setup:
- Visit the Apify Console
- Create an API token
- In n8n, go to Credentials and add HTTP Query Auth
- Use ?token=yourTokenHere as the query string
JSON body:
{ "hashtags": ["{{ $json.Search_Term }}"], "resultsLimit": 20, "resultsType": "posts" }

4️⃣ Extract Captions
🧩 Node: Set bio and follower count
Extracts just the caption from each post and stores it in a clean variable for the AI agent to use.

5️⃣ Aggregate Captions
🧩 Node: Aggregate
Gathers all captions into one list before processing. Useful for passing a large text block into the AI.

6️⃣ Convert to Single Text Block
🧩 Node: Convert table names and columns into single text for agent
Uses a Code node to combine all captions into a single string for OpenAI to read:
return [
  {
    json: {
      text: items
        .map(item => `- ${JSON.stringify(item.json)}`)
        .join('\n\n'),
    },
  },
];

7️⃣ Generate Caption Ideas with AI
🧩 Node: AI Agent
Takes the combined post text and sends it to GPT-4o-mini. Includes this system message:
"I'm looking for ideas for posts about {{ $('Create Search Term').item.json.Search_Term }}. Here's the last 5 posts on Instagram about the topic. Use those to help me generate a list of relevant captions. Do not make up ideas that are not like the others in the list. Output like this:
{ "Post Idea": ["Idea1", "Idea2"], "Most Common Post": ["common post 1", "common post 2"] }"

8️⃣ Choose Language Model
🧩 Node: OpenAI Chat Model
Model: gpt-4o-mini
Credential: use your OpenAI API key. Get it from the OpenAI API Keys page and add it in n8n under OpenAI credentials.

9️⃣ Parse the AI Output
🧩 Node: Structured Output Parser
Parses the GPT response into structured JSON:
{ "Post Idea": ["Idea1", "Idea2"], "Most Common Post": ["common post 1", "common post 2"] }

🔟 Split the Outputs
🧩 Nodes: Split Out, Split Out1
Separates the Post Idea list and the Most Common Post list into individual items.

🔁 Merge for Final Output
🧩 Node: Merge
Combines the two split lists into one output stream.

👤 Need More Help?
Robert Breen
Automation Consultant | AI Workflow Designer | n8n Expert
📧 robert@ynteractive.com
🔗 LinkedIn
by Jason Krol
Notion Weekly Journal AI Summary

This workflow runs on a weekly schedule, retrieves your Notion daily Journal pages for the past week, and aggregates them into a concise ChatGPT-generated summary. It saves that weekly summary back to Notion as a new Note and also posts it to a personal Discord channel. Additionally, it retrieves all of the Tasks you've completed in the past week and sends a quick total with a congratulatory message to a Discord channel as well.

Requirements/Setup:
- You need Notion set up with a Notes database.
- If you want the Discord messages, set up a Discord webhook for your channel as well, or simply delete the Discord nodes.
- One of the properties for the Notes db should be Type, with a value of Journal.
- The contents of your daily Journal pages can be whatever you want. I've found what works best for me is the format of "What was a highlight of the day?", "What was a low point of the day?", and "What decisions did I delegate, delay, or dodge?"
- You should also create an additional Type for the Weekly summary page that gets created - in this case I used simply Weekly.
- Automate this to run weekly on your day of choice. I tend to only journal on weekdays, so I've set mine up to run every Friday, retrieving the past week's Journal entries.

Options:
- You don't have to use Discord; feel free to swap it out with Slack or remove it altogether.
- You also don't need the Tasks summary bottom half; simply remove it if you don't want or need it.
- You can easily reuse this workflow to aggregate your Weekly Summary notes (which this workflow auto-generates/saves) into a Quarterly or even Yearly summary!