by JPres
👥 **Who Is This For?**

Content creators, marketing teams, and channel managers who need to streamline video publishing with optimized metadata and scheduled releases across multiple videos.

🛠 **What Problem Does This Solve?**

Manual YouTube video publishing is time-consuming and often results in inconsistent descriptions, tags, and scheduling. This workflow fully automates:

- Extracting video transcripts via Apify for metadata generation
- Creating SEO-optimized descriptions and tags for each video
- Setting videos to private during initial upload (critical for scheduling)
- Implementing scheduled publishing at strategic times
- Maintaining consistent branding and formatting across all content

🔄 **Node-by-Node Breakdown**

| Step | Node | Purpose |
|------|------|---------|
| 1 | Every Day (Scheduler) | Trigger workflow on a regular schedule |
| 2 | Get Videos to Harmonize | Retrieve videos requiring metadata enhancement |
| 3 | Get Video IDs (Unpublished) | Filter for videos that need publishing |
| 4 | Loop over Video IDs | Process each video individually |
| 5 | Get Video Data | Retrieve metadata for the current video |
| 6 | Loop over Videos with Parameter IS | Set parameters for processing |
| 7 | Set Videos to Private | Ensure videos are private (required for scheduling) |
| 8 | Apify: Get Transcript | Extract video transcript via Apify |
| 9 | Fetch Latest Videos | Get most recent channel content |
| 10 | Loop Over Items | Process each video item |
| 11 | Generate Description, Tags, etc. | Create optimized metadata from transcript |
| 12 | AP Clean ID | Format identifiers |
| 13 | Retrieve Generated Data | Collect the enhanced metadata |
| 14 | Adjust Transcript Format | Format transcript for better processing |
| 15 | Update Video's Metadata | Apply generated description and tags to video |

⚙️ **Pre-conditions / Requirements**

- n8n with YouTube API credentials configured
- Apify account with API access for transcript extraction
- YouTube channel with upload permissions
- Master templates for description formatting
- Videos must be initially set to private for scheduling to work

⚙️ **Setup Instructions**

1. Import this workflow into your n8n instance.
2. Configure YouTube API credentials with proper channel access.
3. Set up Apify integration with an appropriate actor for transcript extraction.
4. Define scheduling parameters in the Every Day node.
5. Configure description templates with placeholders for dynamic content.
6. Set default tags and customize tag generation rules.
7. Test with a single video before batch processing.

🎨 **How to Customize**

- Adjust prompt templates for description generation to match your brand voice.
- Modify tag selection algorithms based on your channel's SEO strategy.
- Create multiple publishing schedules for different content categories.
- Integrate with analytics tools to optimize publishing times.
- Add notification nodes to alert when videos are successfully scheduled.

⚠️ **Important Notes**

- Videos MUST be uploaded as private initially: the Publish At logic only works for private videos that haven't been published before (see the API sketch at the end of this description).
- Publishing schedules require videos to remain private until their scheduled time.
- Transcript quality affects metadata generation results.
- Consider YouTube API quotas when scheduling large batches of videos.

🔐 **Security and Privacy**

- API credentials are stored securely within n8n.
- Transcripts are processed temporarily and not stored permanently.
- Webhook URLs should be protected to prevent unauthorized triggering.
- Access to the workflow should be limited to authorized team members only.
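For reference, the scheduling behavior the notes describe maps to a single `videos.update` call in the YouTube Data API v3: the video stays `private` and carries a `publishAt` timestamp. A minimal sketch, assuming you already hold an OAuth access token (the function name and token handling are illustrative, not part of the workflow):

```typescript
// Hypothetical sketch: schedule a private video via the YouTube Data API v3.
// videoId, publishAt, and accessToken are placeholders supplied by the workflow.
async function scheduleVideo(videoId: string, publishAt: string, accessToken: string): Promise<void> {
  const res = await fetch("https://www.googleapis.com/youtube/v3/videos?part=status", {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      id: videoId,
      status: {
        privacyStatus: "private", // must remain private until the scheduled time
        publishAt,                // ISO 8601 timestamp, e.g. "2025-06-01T09:00:00Z"
      },
    }),
  });
  if (!res.ok) throw new Error(`YouTube API error: ${res.status}`);
}
```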
by Khairul Muhtadin
Effortlessly track your expenses with MoneyMate, an n8n workflow that transforms receipts into organized financial insights. Upload a photo or text via Telegram, and let MoneyMate extract key details (store info, transaction dates, items, and totals) using Google Vision OCR and AI-powered parsing via OpenRouter. It categorizes expenses (e.g., Food & Beverages, Transport, Household) and delivers a clean, emoji-rich summary back to your Telegram chat. It handles zero-total errors with a friendly nudge to double-check inputs. Perfect for freelancers, small business owners, or anyone seeking hassle-free expense management. No database is required, ensuring privacy and simplicity. Deploy MoneyMate and take control of your finances today!

**Key Features**

- 📱 **Telegram Integration:** Input via photo or text, receive summaries instantly.
- 📸 **Receipt Scanning:** Converts receipt images to text using the Google Vision API.
- 🤖 **AI Parsing:** Categorizes transactions with OpenRouter's AI analysis.
- 🛡️ **Privacy-First:** Processes data on the fly without storage.
- ⚠️ **Smart Error Handling:** Catches zero totals with user-friendly prompts.
- 📊 **Flexible Categories:** Supports Income/Expense and custom expense types.

**Ideal For**

- **Budget-conscious individuals** managing personal finances.
- **Entrepreneurs** tracking business expenses.
- **Teams** needing quick, automated expense reporting.

**Pre-Requirements**

- **n8n Instance:** A running n8n instance (cloud or self-hosted).
- **Credentials:**
  - Telegram: A bot token and webhook setup (obtained via BotFather). For more information, refer to Telegram bot creation.
  - Google Cloud: A service account with the Google Vision API enabled and an API key. For more information, refer to Google Cloud Vision.
  - OpenRouter: An account with API access for AI language model usage.
- **Telegram Bot:** A configured **Telegram** bot to receive inputs and send summaries.

**Setup Instructions**

1. **Import Workflow:** Copy the **MoneyMate** workflow JSON and import it into your n8n instance using the "Import Workflow" option.
2. **Set Up Telegram Bot:** Create a bot via BotFather on **Telegram** to get a token and set up a webhook. For detailed steps, refer to n8n's Telegram setup guide.
3. **Configure Credentials:**
   - In the Telegram Trigger, Send Error Message, and Send Expense Summary nodes, add Telegram API credentials with your bot token.
   - In the Get Telegram File and Download Image nodes, ensure Telegram API credentials are linked.
   - In the Google Vision OCR node, add Google Cloud credentials with Google Vision API access.
   - In the OpenRouter AI Model node, set up OpenRouter API credentials.
4. **Test the Workflow:** Send a test receipt photo or text (e.g., "Lunch 50,000 IDR") via **Telegram** and verify the summary in your chat.
5. **Activate:** Enable the workflow in n8n to run automatically for each input.

**Customization Options**

- **Add Categories:** Modify the **AI Categorizer** node to include new expense types (e.g., **Entertainment**).
- **Change Output Format:** Adjust the **Format Summary Message** node to include more details like taxes or payment methods.
- **Switch AI Model:** In the **OpenRouter AI Model** node, select a different **OpenRouter** model for better parsing.
- **Store Data:** Add a **Google Sheets** node after **Parse Receipt Data** to save expense records.
- **Enhance Errors:** Include an email notification node after **Check Invalid Input** for failed inputs.

**Why Choose MoneyMate?**

Save time, reduce manual entry, and gain clarity on your spending with MoneyMate's AI-driven workflow. Ready to streamline your finances? Get MoneyMate now!

Made by: khmuhtadin. Need a custom workflow? Contact me on LinkedIn or Web.
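For orientation, the receipt-scanning step described above maps to a single Google Vision `images:annotate` request. A minimal sketch, assuming an API-key credential (the function name and payload handling are illustrative):

```typescript
// Hypothetical sketch: OCR a receipt image with the Google Vision REST API.
// imageBase64 and apiKey are placeholders supplied by the workflow's Telegram download step.
async function ocrReceipt(imageBase64: string, apiKey: string): Promise<string> {
  const res = await fetch(`https://vision.googleapis.com/v1/images:annotate?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      requests: [{
        image: { content: imageBase64 },        // base64-encoded receipt photo
        features: [{ type: "TEXT_DETECTION" }], // DOCUMENT_TEXT_DETECTION also suits dense receipts
      }],
    }),
  });
  const data = await res.json();
  // fullTextAnnotation.text holds the entire recognized text block
  return data.responses?.[0]?.fullTextAnnotation?.text ?? "";
}
```

The returned text block is what the OpenRouter parsing step then turns into structured store, date, item, and total fields.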
by Solomon
This n8n workflow automates lead extraction from Google Maps, enriches data with AI, and stores results for cold outreach. It uses the Bright Data community node and Bright Data MCP for scraping and AI message generation.

**How it works**

1. **Form Submission:** User provides a Google Maps starting location, keyword, and country.
2. **Bright Data Scraping:** The Bright Data community node triggers a Maps scraping job, monitors progress, and downloads results.
3. **AI Message Generation:** Uses Bright Data MCP with LLMs to create a personalized cold call script and talking points for each lead.
4. **Database Storage:** Enriched leads and scripts are upserted to Supabase.

**How to use**

Set up all the credentials, create your Postgres table, and submit the form. The rest happens automatically.

**Requirements**

- LLM account (OpenAI, Gemini…) for API usage.
- Bright Data account for API and MCP usage.
- Supabase account (or other Postgres database) to store information.
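As a rough illustration of the final storage step, here is what an upsert into Supabase could look like. This is a sketch under stated assumptions: the `leads` table name and its columns are hypothetical and must match whatever Postgres table you create:

```typescript
import { createClient } from "@supabase/supabase-js";

// Hypothetical sketch: upsert enriched leads into Supabase.
// Project URL, key, table, and columns are placeholders for your own schema.
const supabase = createClient("https://<project>.supabase.co", "<service-role-key>");

async function saveLead(lead: { place_id: string; name: string; phone: string; cold_call_script: string }) {
  // Upserting on place_id keeps re-runs idempotent: existing leads update, new ones insert.
  const { error } = await supabase.from("leads").upsert(lead, { onConflict: "place_id" });
  if (error) throw error;
}
```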
by Joseph LePage
This workflow creates an automated system for monitoring and receiving notifications about new videos from your favorite YouTube channels through RSS feeds, with customizable email and Telegram notifications.

🌟 **Key Features**

📡 **RSS Feed Management**

- Accepts custom YouTube channel IDs or uses default channels
- Automatically creates RSS feeds for each YouTube channel
- Monitors channels for new video uploads
- Labels and filters recent videos within a 3-day window (change this as required)

📨 **Multi-Channel Notification System**

- Sends Telegram notifications with video thumbnails and links
- Delivers customized email notifications in two formats:
  - Individual emails for each new video
  - Single digest email containing all new videos

⚙️ **Content Processing**

- Fetches detailed video information using the YouTube API
- Creates responsive HTML email templates with video previews
- Includes video thumbnails, titles, descriptions, and direct links
- Maintains professional formatting across different email clients

🛠️ **Setup Requirements**

🔑 **API Configuration**

- YouTube Data API credentials
- Gmail account for sending notifications
- Telegram bot token and chat ID
- OpenAI API key for content processing

📋 **Channel Management**

- Add YouTube channel IDs through form input
- Configure default channel list
- Set notification preferences
- Adjust monitoring schedule

This workflow is perfect for content creators, marketers, or anyone wanting to stay updated with their favorite YouTube channels through automated, professionally formatted notifications delivered via email and Telegram.
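For context, YouTube publishes an uploads feed for every channel at a documented URL, which is how the workflow turns a channel ID into an RSS source. A small sketch of that plus the 3-day recency filter (the function names are illustrative):

```typescript
// YouTube's documented per-channel uploads feed.
const feedUrl = (channelId: string) =>
  `https://www.youtube.com/feeds/videos.xml?channel_id=${channelId}`;

// Items parsed from the feed carry a published timestamp; keep only the recency window.
const THREE_DAYS_MS = 3 * 24 * 60 * 60 * 1000;
function isRecent(publishedAt: string): boolean {
  return Date.now() - new Date(publishedAt).getTime() <= THREE_DAYS_MS;
}
```

Widening the window is a one-constant change, matching the "change this as required" note above.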
by AppStoneLab Technologies LLP
🎉 **Festival Social Media Automation with Gemini AI for X/Twitter & Facebook**

Transform your festival marketing with this comprehensive automation workflow that creates and posts culturally authentic social media content across multiple platforms daily.

⚙️ **What this workflow does**

This workflow automatically:

- **Fetches festival data** from Google Sheets based on today's date
- **Generates AI-powered prompts** for both image creation and social media content
- **Creates stunning festival images** using Google Gemini 2.0 Flash Preview
- **Produces platform-specific content** optimized for X (Twitter) and Facebook
- **Posts automatically** with proper image attachments and error handling

✨ **Key Features**

🎯 **Intelligent Content Generation**

- AI-powered prompt generation tailored to each festival's cultural context
- Platform-specific content optimization (character limits, hashtag strategies)
- Culturally sensitive and authentic messaging

🎨 **Visual Content Creation**

- Automated image generation using Google Gemini 2.0 Flash Preview
- Festival-themed graphics with vibrant, culturally appropriate designs
- Optimized for social media engagement

📲 **Multi-Platform Publishing**

- Simultaneous posting to X (Twitter) and Facebook
- Platform-specific formatting and optimization
- Built-in error handling and backup posting methods

⏰ **Fully Automated**

- Daily execution at 8:00 AM
- Date-based festival data retrieval
- Zero manual intervention required

📱 **Apps and Integrations**

- **Google Sheets** - Festival calendar and data storage
- **Google Gemini 2.0 Flash Preview** - AI content and image generation
- **X (Twitter)** - Social media posting
- **Facebook Graph API** - Facebook page posting
- **Schedule Trigger** - Daily automation

🛠️ **Setup Instructions**

**1. 📊 Google Sheets Configuration**

- Create a Google Sheets document with columns: Date, Name of the Festival, Description
- Format dates as DD/MM/YYYY
- Connect your Google Sheets credential in n8n

**2. 🤖 Google Gemini API Setup**

- Obtain an API key from Google AI Studio
- Configure the Google Gemini credential in n8n
- Ensure you have access to Gemini 2.0 Flash Preview

**3. 🕊️ X (Twitter) Credentials Setup**

Important: due to X API limitations, you'll need TWO separate OAuth2 credentials (a sketch of the resulting two-token posting flow follows this description).

For Image Upload (Generic OAuth2), create a new OAuth2 credential with these settings:

- Grant Type: PKCE
- Authorization URL: https://x.com/i/oauth2/authorize
- Access Token URL: https://api.x.com/2/oauth2/token
- Scope: media.write offline.access tweet.read users.read
- Note: media.write cannot be combined with tweet.write in the same credential

For Tweet Posting (X OAuth2):

- Use the predefined X OAuth2 credential
- Configure with scopes: tweet.write offline.access tweet.read users.read

**4. 📘 Facebook Graph API Setup**

- Create a Facebook App and get your access token from Meta for Developers
- Configure the Facebook Graph API credential
- Update the node with your Facebook page ID

🎬 **How to Use**

1. Populate your Google Sheets with festival data for upcoming dates
2. Activate the workflow; it will run automatically daily at 8:00 AM
3. Monitor the execution and check logs for successful posts or any errors
4. Customize content by modifying the prompt generation logic if needed

🔄 **Workflow Components**

🔗 **Data Flow**

Daily Trigger → Get Today's Date → Fetch Festival Data → Generate AI Prompts → Create Image & Content → Process Media → Merge Data → Post to Platforms

🛡️ **Error Handling**

- Backup HTTP posting method for X if the primary method fails
- Execution continues even if an individual platform's posting fails
- Comprehensive error logging for troubleshooting

🎨 **Customization Options**

✍️ **Content Personalization**

- Modify the prompt generation logic for different content styles
- Adjust platform-specific character limits and hashtag strategies
- Customize image generation prompts for different visual styles

🌐 **Platform Extension**

- Add Instagram, LinkedIn, or other social media platforms
- Implement additional content formats (Stories, Reels, etc.)
- Create platform-specific posting schedules

📊 **Data Sources**

- Connect to different data sources (Airtable, Notion, CMS)
- Add support for multiple festival categories
- Implement content approval workflows

💡 **Best Practices**

📝 **Content Quality**

- Regularly review and update your festival database
- Monitor AI-generated content for cultural sensitivity
- Test different prompt styles for optimal engagement

🔑 **API Management**

- Monitor API usage limits for all connected services
- Implement rate limiting for high-volume posting
- Set up alerts for credential expiration

⏰ **Scheduling**

- Consider time zones for optimal posting times
- Implement staggered posting across platforms
- Add weekend/holiday scheduling logic

🔧 **Troubleshooting**

⚠️ **Common Issues**

- **Image upload fails:** Check OAuth2 credentials and API limits
- **Content generation errors:** Verify the Gemini API key and model availability
- **Date matching issues:** Ensure date format consistency in Google Sheets

⚡️ **Performance Tips**

- Optimize image generation prompts for faster processing
- Use structured output parsing for consistent results
- Implement content caching for repeated festivals

🎯 **Use Cases**

- **Cultural Organizations** - Automate festival announcements and celebrations
- **Event Management Companies** - Scale social media presence across multiple events
- **Tourism Boards** - Promote local festivals and cultural events
- **Marketing Agencies** - Manage multiple client festival campaigns
- **Community Organizations** - Engage audiences with regular cultural content

⭐️ **Benefits**

- **Time Savings** - Eliminate manual social media posting
- **Consistency** - Maintain a regular posting schedule
- **Cultural Authenticity** - AI-generated content respects cultural context
- **Multi-Platform Reach** - Simultaneous posting increases visibility
- **Scalability** - Handle unlimited festivals with zero additional effort

This workflow transforms festival marketing from a time-consuming manual process into a fully automated, culturally intelligent system that engages audiences across multiple platforms while maintaining authenticity and relevance.
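To make the two-credential constraint concrete, here is a hedged sketch of the posting flow it implies: one token (with the `media.write` scope) uploads the image, a second token (with `tweet.write`) publishes the tweet. Function names and buffer handling are illustrative, and the multipart upload is simplified:

```typescript
// Hypothetical sketch of posting an image to X with two separate OAuth2 tokens.
async function postWithImage(imageBuffer: Buffer, text: string, mediaToken: string, tweetToken: string) {
  // 1. Upload the image via the v1.1 media endpoint (multipart form data).
  const form = new FormData();
  form.append("media", new Blob([imageBuffer]));
  const upload = await fetch("https://upload.twitter.com/1.1/media/upload.json", {
    method: "POST",
    headers: { Authorization: `Bearer ${mediaToken}` }, // credential with media.write
    body: form,
  });
  const { media_id_string } = await upload.json();

  // 2. Create the tweet via the v2 endpoint, attaching the uploaded media ID.
  await fetch("https://api.x.com/2/tweets", {
    method: "POST",
    headers: { Authorization: `Bearer ${tweetToken}`, "Content-Type": "application/json" }, // credential with tweet.write
    body: JSON.stringify({ text, media: { media_ids: [media_id_string] } }),
  });
}
```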
by Mohamed Abdelwahab
Automates the process of generating, storing, and publishing engaging LinkedIn posts derived from books (PDFs) using AI and vector search.

🧠 **Overview**

This workflow:

- Watches a Google Drive folder for new or updated book PDFs.
- Extracts and embeds the content using OpenAI.
- Stores the data in a Pinecone vector database.
- Uses a LangChain agent to generate post ideas.
- Creates concise LinkedIn posts with hook, insight, and CTA.
- Updates a Google Sheet and posts to LinkedIn.

🛠 **Workflow Breakdown**

📥 **1. Google Drive Trigger**

- **Trigger:** Watches a folder for new or updated PDF files.
- **Action:** Downloads the updated PDF.

📄 **2. Extract and Embed Content**

- **Extract from File:** Parses the PDF to extract text.
- **Text Splitter:** Breaks text into chunks.
- **Embeddings (OpenAI):** Converts chunks into vector embeddings.
- **Pinecone Vector Store:** Saves the embeddings with the book name as namespace.

🧠 **3. Post Idea Generation (LangChain Agent)**

Uses a prompt to:

- Search the Pinecone DB
- Extract insights
- Format them into 5 LinkedIn post ideas, each with a Hook, an Insight, and a CTA

A **memory buffer** and a structured output parser are used for clean AI interaction.

✍️ **4. Post Creation**

Each idea is:

- Split
- Rewritten with a GPT model prompt to match LinkedIn tone
- Styled to stay under 600 characters
- Given emojis, hashtags, and tone guidelines

📊 **5. Google Sheet Integration**

- Saves all generated posts to a Google Sheet.
- Marks status: "published" or "no".

🔁 **6. Scheduled Publishing**

Every day the workflow:

- Pulls an unpublished post
- Publishes it to LinkedIn
- Updates the post's status and timestamp in the Google Sheet

⚙️ **Setup Guide**

📂 **Google Drive**

- Create a folder for book PDFs
- Connect your Google Drive account to n8n
- Provide an access token with file read permission

📊 **Google Sheets**

- Create a Google Sheet with columns: bookname, hook, insight, cta, postContent, published, date
- Add credentials in n8n with read/write permission

🧠 **Pinecone**

- Set up a Pinecone project and index (linkdenpost)
- The namespace will be auto-named using the book filename

🔑 **API Credentials Required**

- **OpenAI API** (for embeddings and post generation)
- **Pinecone API** (for vector storage and retrieval)
- **LinkedIn OAuth2** (to publish posts)
- **Google Drive & Sheets** credentials

🔁 **Flow Summary**

```
graph TD
  A[Google Drive Trigger] --> B[Download PDF]
  B --> C[Extract Text]
  C --> D[Text Splitter]
  D --> E[Create Embeddings]
  E --> F[Pinecone Vector Store]
  F --> G[LangChain Agent]
  G --> H["Structured Output (5 Post Ideas)"]
  H --> I[Split Ideas]
  I --> J["Format as LinkedIn Post (GPT)"]
  J --> K[Store in Google Sheet]
  L[Schedule Trigger] --> M[Get Unpublished Post]
  M --> N[Post to LinkedIn]
  N --> O[Mark as Published]
```

🧪 **Prompt Example (Used in LangChain Agent)**

```
You are a content strategist. Search the Pinecone vector DB containing a book.
Generate 5 unique LinkedIn post ideas with:
- A Hook (curiosity driven)
- Insight (summary < 100 words)
- CTA ("Agree or disagree?", etc.)
Respond in structured JSON:
[ { "Hook": "...", "Insight": "...", "CTA": "..." }, ... ]
```

✅ **Output Sample**

```json
{
  "Hook": "Why your lab's results might be invalid 😱",
  "Insight": "ISO/IEC 17025 stresses that labs must plan and address risks to impartiality and validity.",
  "CTA": "Does your lab audit for these risks?"
}
```

📆 **Schedule Control**

- Uses a Schedule Trigger to post daily at a set time.
- Ensures automation with LinkedIn and accurate Google Sheet syncing.

📝 **Notes**

- Posts remain professional and concise for a LinkedIn audience
- Works with any PDF book
- Supports multi-book pipelines
- You can filter and tag books by filename or folder for segmenting post styles
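As a reference for the storage step, here is roughly what writing one embedded chunk into the `linkdenpost` index looks like with the Pinecone TypeScript client, namespaced by book filename as described above. Field names and IDs are illustrative assumptions:

```typescript
import { Pinecone } from "@pinecone-database/pinecone";

// Hypothetical sketch: store one embedded chunk in the "linkdenpost" index.
const pc = new Pinecone({ apiKey: "<pinecone-api-key>" });

async function storeChunk(bookName: string, chunkId: string, embedding: number[], text: string) {
  await pc
    .index("linkdenpost")
    .namespace(bookName) // one namespace per book keeps retrieval scoped to that book
    .upsert([{ id: chunkId, values: embedding, metadata: { text } }]);
}
```

The LangChain agent later queries the same namespace, which is why the book filename doubles as the retrieval key.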
by Mario
**Purpose**

This workflow automatically creates tasks from forwarded emails, similar to Asana, but better. Emails are processed by AI and converted into actionable tasks. In addition, this workflow is built so that multiple users can share this single process by setting up their individual configuration through a user-friendly portal (internal tool), instead of having to manage their own workflows.

**Demo**

**How it works**

- One Gmail account is used to process inbound mails from different users.
- A custom web portal enables users to define "routes". That's where the mapping between an automatically generated Gmail alias and a Notion database URL, including the personal API token, happens (see the alias sketch at the end of this description).
- Using a Gmail Trigger, new entries are split by the email alias, so the corresponding route can be retrieved from the database connected to the portal.
- Every email is then processed by AI to generate an actionable task, along with a short summary of the original email and some metadata.
- Based on a predefined structure, a new page is created in the corresponding Notion database.
- Finally, the email is marked as "processed" in Gmail.
- If an error happens, the route gets paused to prevent a possible overflow, and the user is notified by email.

**Setup**

1. Create a new Google account (alternatively you can use an existing one and set up rules to keep your inbox organized)
2. Create two labels in Gmail: "Processed" and "Error"
3. Clone this Softr template, including the Airtable dataset, and publish the application
4. Clone this workflow and choose credentials (Gmail, Airtable)
5. Follow the additional instructions provided within the workflow notes
6. Enable the workflow, so it runs automatically in the background

**How to use**

1. Open the published Softr application
2. Register as a new user
3. Create a new route containing the Notion API key and the Notion database URL
4. Expand the new entry to copy the email address
5. Save the address as a new contact in your email provider of choice
6. Forward an email to it and watch how it gets converted into an actionable task

**Disclaimer**

- Airtable was chosen so you can set up this template fairly quickly. It is advised to replace the persistence layer with something you own, like a self-hosted SQL server, since we are dealing with sensitive information of multiple users.
- This solution is only meant for building internal tools, unless you own an embed license for n8n.
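The alias-based routing relies on Gmail's plus-addressing: mail sent to `user+<alias>@gmail.com` lands in the same inbox, and the alias identifies the route. A small sketch of the lookup (the function and address are illustrative):

```typescript
// Hypothetical sketch: extract the routing alias from the "to" address of an inbound mail.
function extractAlias(toAddress: string): string | null {
  // "tasks+acme-team@gmail.com" -> "acme-team"
  const match = toAddress.match(/^[^+@]+\+([^@]+)@/);
  return match ? match[1] : null;
}

// The alias then selects the user's route (Notion token + database URL), e.g.:
// const route = routes.find(r => r.alias === extractAlias(email.to));
```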
by NanaB
**What it does**

This n8n workflow creates a cutting-edge, multi-modal AI Memory Assistant designed to capture, understand, and intelligently recall your personal or business information from diverse sources. It automatically processes voice notes, images, documents (like PDFs), and text messages sent via Telegram. Leveraging GPT-4o for advanced AI processing (including visual analysis, document parsing, transcription, and semantic understanding) and MongoDB Atlas Vector Search for persistent and lightning-fast recall, this assistant acts as an external brain. Furthermore, it integrates with Gmail, allowing the AI to send and search emails as part of its memory and response capabilities. This end-to-end solution blueprint provides a powerful starting point for personal knowledge management and intelligent automation.

**How it works**

**1. Multi-Modal Input Ingestion 🗣️📸📄💬**

Your memories begin when you send a voice note, an image, a document (e.g., PDF), or a text message to your Telegram bot. The workflow immediately identifies the input type.

**2. Advanced AI Content Processing 🧠✨**

Each input type undergoes specialized AI processing by GPT-4o:

- Voice notes are transcribed into text using OpenAI Whisper.
- Images are visually analyzed by GPT-4o Vision, generating detailed textual descriptions.
- Documents (PDFs) are processed for text extraction, leveraging GPT-4o for robust parsing and understanding of content and structure. Unsupported document types are gracefully handled with a user notification.
- Text messages are directly forwarded for further processing.

This phase transforms all disparate input formats into a unified, rich textual representation.

**3. Intelligent Memory Chunking & Vectorization ✂️🏷️➡️🔢**

The processed content (transcriptions, image descriptions, extracted document text, or direct text) is then fed back into GPT-4o. The AI intelligently chunks the information into smaller, semantically coherent pieces, extracts relevant keywords and tags, and generates concise summaries. Each of these enhanced memory chunks is then converted into a high-dimensional vector embedding using OpenAI Embeddings.

**4. Persistent Storage & Recall (MongoDB Atlas Vector Search) 💾🔍**

These vector embeddings, along with their original content, metadata, and tags, are stored in your MongoDB Atlas cluster, which is configured with Atlas Vector Search. This allows for highly efficient and semantically relevant retrieval of memories based on user queries, forming the core of your "smart recall" system.

**5. AI Agent & External Tools (Gmail Integration) 🤖🛠️**

When you ask a question, the AI Agent (powered by GPT-4o) acts as the central intelligence. It uses the MongoDB Chat Memory to maintain conversational context and, crucially, queries the MongoDB Atlas Vector Search store to retrieve relevant past memories (see the query sketch at the end of this description). The agent also has access to Gmail tools, enabling it to send emails on your behalf or search your past emails to find information or context that might not be in your personal memory store.

**6. Smart Response Generation & Delivery 💬➡️📱**

Finally, using the retrieved context from MongoDB and the conversational history, GPT-4o synthesizes a concise, accurate, and contextually aware answer. This response is then delivered back to you via your Telegram bot.

**How to set it up (~20 minutes)**

Getting this powerful workflow running requires a few key configurations and external service dependencies.

1. **Telegram Bot Setup:** Use BotFather in Telegram to create a new bot and obtain its API token. In your n8n instance, add a new Telegram API credential. Give it a clear name (e.g., "My AI Memory Bot") and paste your API token.
2. **OpenAI API Key Setup:** Log in to your OpenAI account and generate a new API key. Within n8n, create a new OpenAI API credential. Name it appropriately (e.g., "My OpenAI Key for GPT-4o") and paste your API key. This credential will be used by the OpenAI Chat Model (GPT-4o for processing, chunking, and RAG), Analyze Image, and Transcribe Audio nodes.
3. **MongoDB Atlas Setup:** If you don't have one, create a free-tier or paid cluster on MongoDB Atlas. Create a database and a collection within your cluster to store your memory chunks and their vector embeddings. Crucially, configure an Atlas Vector Search index on your chosen collection. This index goes on the field containing your embeddings (e.g., an embedding field of type knnVector). Refer to the MongoDB Atlas documentation for detailed instructions on creating vector search indexes. In n8n, add a new MongoDB credential. Provide your MongoDB Atlas connection string (ensure it includes your username, password, and database name), and give it a clear name (e.g., "My Atlas DB"). This credential will be used by the MongoDB Chat Memory node and for any custom HTTP requests you might use for Atlas Vector Search insertion/querying.
4. **Gmail Account Setup:** Go to the Google Cloud Console, enable the Gmail API for your project, and configure your OAuth consent screen. Create an OAuth 2.0 Client ID for a Desktop app (or Web application, depending on your n8n setup and redirect URI). Download the JSON credentials. In n8n, add a new Gmail OAuth2 API credential. Follow the n8n instructions to configure it using your Google Client ID and Client Secret, and authenticate with your Gmail account, ensuring it has sufficient permissions to send and search emails.
5. **External API Services:** If your Extract from File node relies on an external service for robust PDF/DOCX text extraction, ensure you have an API key and the service is operational. The current flow uses ConvertAPI. Add the necessary credential (e.g., ConvertAPI) in n8n.

**How you could enhance it ✨**

This workflow offers numerous avenues for advanced customization and expansion:

- **Expanded Document Type Support:** Enhance the "Document Processing" section to handle a wider range of document types beyond just PDFs (e.g., .docx, .xlsx, .pptx, Markdown, CSV) by integrating additional conversion APIs or specialized parsing libraries (e.g., a custom Code node or dedicated third-party services like Apache Tika or Unstructured.io).
- **Fine-Tuned Memory Chunks & Metadata:** Implement more sophisticated chunking strategies for very long documents, perhaps based on semantic breaks or document structure (headings, sections), to improve recall accuracy. Add more metadata fields (e.g., original author, document date, custom categories) to your MongoDB entries for richer filtering and context.
- **Advanced AI Prompting:** Allow users to dynamically set parameters for their memory inputs (e.g., "This is a high-priority meeting note," "This image contains sensitive information"), which can influence how GPT-4o processes, tags, and stores the memory, or how it's retrieved later.
- **n8n Tool Expansion for Proactive Actions:** Significantly expand the AI Agent's capabilities by providing it with access to a wider range of n8n tools, moving beyond just information retrieval and email.
- **External Data Source Integration (APIs):** Expand the AI Agent's tools to query other external APIs (e.g., weather, stock prices, news, CRM systems) so it can provide real-time information relevant to your memories.

**Getting Assistance & More Resources**

Need assistance setting this up, adapting it to a unique use case, or exploring more advanced customizations? Don't hesitate to reach out! You can contact me directly at nanabrownsnr@gmail.com. Also, feel free to check out my YouTube channel, where I discuss other n8n templates as well as innovation and automation solutions.
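For reference, the recall in step 5 corresponds to an Atlas `$vectorSearch` aggregation stage. A minimal sketch, assuming hypothetical database, collection, index, and field names that you would match to your own cluster:

```typescript
import { MongoClient } from "mongodb";

// Hypothetical sketch: approximate nearest-neighbor recall via Atlas Vector Search.
async function recallMemories(queryVector: number[]) {
  const client = new MongoClient("<atlas-connection-string>"); // driver connects lazily on first operation
  const memories = client.db("memory_assistant").collection("memories");

  return memories.aggregate([
    {
      $vectorSearch: {
        index: "vector_index", // the Atlas Vector Search index you created
        path: "embedding",     // field holding the OpenAI embedding
        queryVector,           // embedding of the user's question
        numCandidates: 100,    // candidates considered before ranking
        limit: 5,              // top matches returned to the agent
      },
    },
    { $project: { content: 1, tags: 1, score: { $meta: "vectorSearchScore" } } },
  ]).toArray();
}
```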
by Angel Menendez
**Streamline Case Management in TheHive via Slack!**

Our TheHive Slack integration empowers SOC analysts by allowing them to efficiently manage and update case attributes directly within Slack, reducing the need to switch contexts and enhancing response time.

**Key Features:**

- **Direct Case Management:** Modify case details such as assignee, severity, status, and more through intuitive form inputs embedded within Slack messages.
- **Seamless Integration:** Assumes matching email addresses between TheHive and Slack users for straightforward assignee updates. Note: Ensure email consistency to avoid assignment errors.
- **Instant Case Actions:** Quickly close cases as false positives or adjust threat levels with minimal clicks, directly impacting case status in TheHive and reflecting updates immediately in Slack.
- **Task Management:** Add tasks to cases through a user-friendly modal popup, fostering better task tracking and delegation within your team.

**Operational Benefits:**

- **Efficiency:** Enables analysts to perform multiple case actions without leaving Slack, streamlining workflows and saving valuable time.
- **Accuracy:** Reduces the chances of human error by providing a controlled interface for case updates.
- **Agility:** Enhances the SOC team's agility by providing tools for rapid response and case management, crucial for effective security operations.

**Setup Tips:**

- Verify that all SOC team members have matching email IDs in TheHive and Slack.
- Familiarize your team with the Slack form inputs and ensure they understand the importance of accurate data entry.
- Regularly review and update the integration settings to accommodate any changes in your security operations protocols.

**Need Help?**

For detailed setup instructions or troubleshooting, refer to our Integration Guide or reach out on our Support Forum.

Leverage this integration to maximize your SOC team's efficiency and responsiveness, ensuring that case management is as streamlined and effective as possible.
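To give a feel for what the Slack form actions translate to on the TheHive side, here is a hedged sketch of a case-update call. The base URL, case ID, and field values are placeholders, and the exact endpoint and body fields should be checked against your TheHive version's API documentation:

```typescript
// Hypothetical sketch: update a TheHive case from values collected in a Slack form.
async function updateCase(caseId: string, apiKey: string) {
  await fetch(`https://thehive.example.com/api/case/${caseId}`, {
    method: "PATCH",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      severity: 3,                       // e.g. escalate to "High"
      status: "Resolved",                // or keep open with a new assignee
      resolutionStatus: "FalsePositive", // the one-click "close as false positive" action
    }),
  });
}
```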
by Angel Menendez
**Upload Public-Facing Images to an S3 Cloudflare Bucket via Slack Modal**

🛠 **Who is this for?**

This workflow is for teams that use Slack for internal communication and need a streamlined way to upload public-facing images to an S3 Cloudflare bucket. It's especially beneficial for DevOps, marketing, or content management teams who frequently share assets and require efficient cloud storage integration.

💡 **What problem does this workflow solve?**

Manually uploading images to cloud storage can be time-consuming and disruptive, especially if you're already working in Slack. This workflow automates the process, allowing you to upload images directly from Slack via a modal popup. It reduces friction and keeps your workflow within a single platform.

🔍 **What does this workflow do?**

This workflow connects Slack with an S3 Cloudflare bucket to simplify the image-uploading process:

- **Slack Modal Interaction:** Users trigger a Slack modal to select images for upload.
- **Dynamic Folder Management:** Choose to create a new folder or use an existing one for uploads.
- **S3 Integration:** Automatically uploads the images to a specified S3 Cloudflare bucket (see the upload sketch at the end of this description).
- **Slack Confirmation:** After upload, Slack sends a confirmation with the uploaded file URLs.

🚀 **Setup Instructions**

**Prerequisites**

- Slack bot with the following permissions: commands, files:write, files:read, chat:write
- Cloudflare S3 credentials: create an API token with write access to your S3 bucket
- n8n instance: ensure n8n is properly set up with webhook capabilities

**Steps**

1. Configure the Slack bot: Set up a Slack app and enable the Events API. Add your n8n webhook URL to the Event Subscriptions section.
2. Add credentials: Add your Slack API and S3 Cloudflare credentials to n8n.
3. Customize the workflow: Open the Idea Selector Modal node and update folder options to suit your needs. Update the Post Image to Channel node with your Slack channel ID.
4. Deploy the workflow: Activate the workflow and test by triggering the Slack modal.

🛠 **How to Customize This Workflow**

- **Adjust the Slack modal:** You can modify the modal layout in the Idea Selector Modal node to add additional fields or adjust the styling.
- **Change the bucket structure:** Update the Upload to S3 Bucket node to customize the folder paths or change naming conventions.

🔗 **References and Helpful Links**

- Slack API Documentation
- Cloudflare S3 Setup
- n8n Documentation

📓 **Workflow Notes**

**Key Features:**

- **Slack Integration:** Uses Slack modal interactions to streamline the upload process.
- **Cloud Storage:** Automatically uploads to a Cloudflare S3 bucket.
- **User Feedback:** Sends a Slack message with file URLs upon successful upload.

**Setup Dependencies:**

- Slack API token
- Cloudflare S3 credentials
- n8n webhook configuration

**Sticky Notes Included:** Sticky notes are embedded within the workflow to guide you through configuration and explain node functionality.

🌟 **Why Use This Workflow?**

This workflow keeps your image-uploading process intuitive, efficient, and fully integrated with tools you already use. By leveraging n8n's flexibility, you can ensure smooth collaboration and quick sharing of public-facing assets without switching contexts.
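Cloudflare R2 exposes an S3-compatible API, so the upload step behaves like a standard S3 `PutObject` pointed at your account's R2 endpoint. A minimal sketch, where the bucket name, public domain, and credentials are placeholder assumptions:

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Hypothetical sketch: upload a Slack-submitted image to a Cloudflare R2 bucket.
const s3 = new S3Client({
  region: "auto",
  endpoint: "https://<account-id>.r2.cloudflarestorage.com",
  credentials: { accessKeyId: "<access-key>", secretAccessKey: "<secret-key>" },
});

async function uploadImage(folder: string, fileName: string, body: Buffer) {
  await s3.send(new PutObjectCommand({
    Bucket: "public-assets",      // assumed bucket name
    Key: `${folder}/${fileName}`, // folder chosen in the Slack modal
    Body: body,
    ContentType: "image/png",
  }));
  // Public URL via the bucket's custom domain (assumed); this is what gets posted back to Slack.
  return `https://assets.example.com/${folder}/${fileName}`;
}
```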
by Adam Bertram
An intelligent IT support agent that uses Azure AI Search for knowledge retrieval, Microsoft Entra ID integration for user management, and Jira for ticket creation. The agent can answer questions using internal documentation and perform administrative tasks like password resets.

**How It Works**

The workflow operates in three main sections:

1. **Agent Chat Interface:** A chat trigger receives user messages and routes them to an AI agent powered by Google Gemini. The agent maintains conversation context using buffer memory and has access to multiple tools for different tasks.
2. **Knowledge Management:** Users can upload documentation files (.txt, .md) through a form trigger. These documents are processed, converted to embeddings using OpenAI's API, and stored in an Azure AI Search index with vector search capabilities.
3. **Administrative Tools:** The agent can query Microsoft Entra ID to find users, reset passwords, and create Jira tickets when issues need escalation. It uses semantic search to find relevant internal documentation before responding to user queries.

The workflow includes a separate setup section that creates the Azure AI Search service and index with proper vector search configuration, semantic search capabilities, and the required field schema.

**Prerequisites**

To use this template, you'll need:

- n8n cloud or self-hosted instance
- Azure subscription with permissions to create AI Search services
- Microsoft Entra ID (Azure AD) access with user management permissions
- OpenAI API account for embeddings
- Google Gemini API access
- Jira Software Cloud instance
- Basic understanding of Azure resource management

**Setup Instructions**

1. Import the template into n8n.
2. Configure credentials:
   - Add Google Gemini API credentials
   - Add OpenAI API credentials for embeddings
   - Add Microsoft Azure OAuth2 credentials with appropriate permissions
   - Add Microsoft Entra ID OAuth2 credentials
   - Add Jira Software Cloud API credentials
3. Update workflow parameters:
   - Open the "Set Common Fields" nodes
   - Replace `<azure subscription id>` with your Azure subscription ID
   - Replace `<azure resource group>` with your target resource group name
   - Replace `<azure region>` with your preferred Azure region
   - Replace `<azure ai search service name>` with your desired service name
   - Replace `<azure ai search index name>` with your desired index name
   - Update the Jira project ID in the "Create Jira Ticket" node
4. Set up Azure infrastructure:
   - Run the manual trigger "When clicking 'Test workflow'" to create the Azure AI Search service and index
   - This creates the vector search index with semantic search configuration
5. Configure the vector store webhook:
   - Update the "Invoke Query Vector Store Webhook" node URL with your actual webhook endpoint
   - The webhook URL should point to the "Semantic Search" webhook in the same workflow
6. Upload the knowledge base:
   - Use the "On Knowledge Upload" form to upload your internal documentation
   - Supported formats: .txt and .md files
   - Documents will be automatically embedded and indexed
7. Test the setup:
   - Use the chat interface to verify the agent responds appropriately
   - Test knowledge retrieval with questions about uploaded documentation
   - Verify Entra ID integration and Jira ticket creation

**Security Considerations**

- Use least-privilege access for all API credentials
- Microsoft Entra ID credentials should have limited user management permissions
- Azure credentials need Search Service Contributor and Search Index Data Contributor roles
- The OpenAI API key should have usage limits configured
- Jira credentials should be restricted to specific projects
- Consider implementing rate limiting on the chat interface
- Review password reset policies and ensure force password change is enabled
- Validate all user inputs before processing administrative requests

**Extending the Template**

You could enhance this template by:

- Adding support for additional file formats (PDF, DOCX) in the knowledge upload
- Implementing role-based access control for different administrative functions
- Adding integration with other ITSM tools beyond Jira
- Creating automated escalation rules based on query complexity
- Adding analytics and reporting for support interactions
- Implementing multi-language support for international organizations
- Adding approval workflows for sensitive administrative actions
- Integrating with Microsoft Teams or Slack for notifications
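For orientation, the "Semantic Search" webhook's retrieval step corresponds to a vector query against the Azure AI Search REST API. A hedged sketch, where the service, index, vector field name, and API version are assumptions to be matched to your deployment:

```typescript
// Hypothetical sketch: query the Azure AI Search index with an OpenAI embedding.
async function searchDocs(queryEmbedding: number[], apiKey: string) {
  const url =
    "https://<search-service>.search.windows.net/indexes/<index-name>/docs/search?api-version=2023-11-01";
  const res = await fetch(url, {
    method: "POST",
    headers: { "api-key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify({
      select: "title,content",
      vectorQueries: [{
        kind: "vector",
        vector: queryEmbedding,  // embedding of the user's question
        fields: "contentVector", // assumed name of the index's vector field
        k: 5,                    // top matches returned to the agent
      }],
    }),
  });
  return (await res.json()).value; // ranked documents the agent cites in its answer
}
```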
by Harsh Maniya
✨ **Automate Daily Hindu Festival Posts on X (Twitter) with AI** 🐦

This workflow automates the entire process of creating and publishing culturally rich social media content about Hindu festivals. It starts by building a comprehensive festival calendar for the year in a Google Sheet, then runs daily to post engaging, bilingual updates on X (formerly Twitter). 🗓️

The workflow uses a sophisticated dual-AI system: Google Gemini acts as a content generator creating multiple post options ✍️, while OpenAI's GPT-4o Mini acts as a discerning social media manager, selecting the very best post for publication. 🧠 This ensures your content is not only automated but also high-quality and optimized for engagement.

**How it works ⚙️**

This workflow operates in two distinct stages:

**Part 1: Data Population (One-Time Manual Run)**

1. 🔍 **Fetch Festival Data:** Manually trigger the workflow to scrape a list of 2025 Hindu festivals from a public calendar using the Jina AI Reader API (see the sketch at the end of this description).
2. ✨ **Enrich with AI:** For each festival, a Google Gemini model researches and extracts key details:
   - The festival's name in Hindi.
   - A concise description of its significance in English.
   - A Hindi translation of the description.
3. 📝 **Store in Google Sheets:** The enriched data for the entire year is then systematically organized and saved in a designated Google Sheet, creating a content calendar.

**Part 2: Daily Automated Posting**

1. ⏰ **Daily Trigger:** A Schedule Trigger node activates the workflow every morning at 8 AM.
2. ✔️ **Check for Festivals:** The workflow gets today's date and checks the Google Sheet to see if there is a corresponding festival.
3. 🎨 **Generate Post Options:** If a festival is scheduled for the day, Google Gemini generates three distinct and engaging post options for X. Each post is crafted to be concise, use a mix of English and Hindi, and include relevant emojis and hashtags.
4. 🏆 **Select the Best Post:** OpenAI's GPT-4o Mini then evaluates the three generated posts based on criteria like clarity, engagement potential, and effective use of language. It selects the single most impactful post.
5. 🚀 **Publish to X:** The winning post is automatically published to your connected X account.

**Features ⭐**

- 🤖 **Fully Automated Content Pipeline:** From data collection to final publication, no manual intervention is needed after the initial setup.
- 🧠 **Dual-AI System:** Leverages Google Gemini for creative generation and OpenAI GPT-4o Mini for critical selection, ensuring high-quality output.
- 🗣️ **Bilingual Content:** Creates posts that blend English and Hindi to enhance cultural connection and broaden audience reach.
- 🎯 **Dynamic and Contextual:** Posts are automatically tailored to the specific festival of the day.
- 🗓️ **Centralized Content Calendar:** Uses Google Sheets as a reliable, easy-to-manage database for your yearly social media plan.

**Prerequisites 🛠️**

Before you can use this workflow, you will need to:

- Have an n8n instance set up.
- Create a new, empty Google Sheet.
- Obtain credentials for the following services:
  - **Jina AI:** Get a free Bearer Token from the Jina AI API page.
  - **Google:** Set up Google credentials (OAuth2) for the Google Sheets and Google Gemini nodes.
  - **OpenAI:** Get an API key from your OpenAI Platform dashboard.
  - **X (Twitter):** Set up X credentials (OAuth2) to allow n8n to post on your behalf.

**How to use this template 🚀**

1. 🔑 **Set up Credentials:** In n8n, go to the "Credentials" section and add new credentials for Jina AI, Google (for both Sheets and Gemini), OpenAI, and X using the API keys and tokens you obtained.
2. 📊 **Configure the Google Sheet:** Create a new Google Sheet. In the first row, create the following headers exactly as written:
   - Name of the Festival
   - Date
   - English Description (note the trailing space)
   - Hindi Description
   Then open the "Add all Rows at once" and "Fetch Data of Matched Date" nodes in the workflow, connect them to your Google account, and select the Sheet you just created.
3. ▶️ **Populate the Data (Manual Step):** Click the "Execute workflow" button on the When clicking ‘Execute workflow’ node. This will run the first part of the workflow, filling your Google Sheet with festival data for 2025. This only needs to be done once.
4. ✅ **Activate the Workflow:** Save the workflow and then activate it using the toggle at the top right of the n8n canvas. The workflow will now run automatically every day to post about the day's festival.

**Extending the Workflow 💡**

- 🖼️ **Add Image Generation:** Integrate a node like DALL-E or Midjourney to generate a unique image for each festival and include it in the tweet.
- 🌐 **Cross-Platform Posting:** Duplicate the final "Post to X" node and adapt it to post on other platforms like Facebook, LinkedIn, or Telegram.
- 🎨 **Change the Tone:** Modify the prompts in the "Generate Posts" and "Select Best Post" nodes to change the style of your social media content, making it more formal, humorous, or poetic.
- 📅 **Use a Different Year:** Update the URL in the "Get Festival Data" node to fetch data for a different year. The current URL is https://r.jina.ai/https://www.calendarlabs.com/2025-hindu-calendar.
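For reference, the Jina AI Reader API works by prefixing any public URL with `https://r.jina.ai/`, which returns the page as clean, LLM-ready text. A minimal sketch of the fetch used in Part 1 (the function name is illustrative; the Bearer token is assumed to raise rate limits on the free tier):

```typescript
// Hypothetical sketch: fetch the 2025 festival calendar as plain text via Jina Reader.
async function fetchFestivalCalendar(jinaToken: string): Promise<string> {
  const res = await fetch(
    "https://r.jina.ai/https://www.calendarlabs.com/2025-hindu-calendar",
    { headers: { Authorization: `Bearer ${jinaToken}` } },
  );
  return res.text(); // plain-text calendar, ready for the Gemini enrichment prompt
}
```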