by Inderjeet Bhambra
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Who is this for?
IT teams and support organizations looking to automate Level 1 support with AI-powered assistance while maintaining proper ticket management workflows.

## What problem does this solve?
It eliminates repetitive manual support tasks by providing instant, context-aware assistance that references organizational knowledge and creates structured tickets when needed.

## What this workflow does
- **RAG Pipeline**: Processes PDF/CSV documents into a searchable vector database
- **Intelligent Slack Bot**: An AI helpdesk assistant that handles support requests with thread-aware conversations
- **Vector Knowledge Search**: Searches embedded knowledge base articles and historical case data
- **JIRA Integration**: Creates, searches, and manages support tickets automatically
- **Emoji Reactions**: Users can trigger actions (create tickets, escalate) via emoji reactions

## Requirements
Required accounts:
- n8n Cloud or self-hosted instance
- Slack workspace with admin access
- Supabase account (vector database)
- JIRA Cloud instance
- OpenAI API key

Technical prerequisites:
- Basic n8n workflow knowledge
- Slack app creation experience
- Understanding of vector databases

## Setup Steps
### 1. Slack App Configuration
- Create a new Slack app with the Bot Token Scopes: app_mentions:read, channels:history, channels:read, groups:history, groups:read, im:history, im:read, mpim:history, mpim:read, users:read
- Configure Event Subscriptions: app_mention, message.channels, message.groups, reaction_added
- Set the Request URL to your n8n Slack Trigger webhook

### 2. Supabase Vector Database Setup
- Create a new Supabase project
- Enable the pgvector extension
- Create a documents table with a vector column (1536 dimensions for OpenAI embeddings)
- Configure RLS policies for secure access

### 3. JIRA Configuration
- Generate an API token from JIRA Cloud
- Create a helpdesk project with appropriate issue types
- Note the project ID and issue type IDs for workflow configuration

### 4. n8n Workflow Configuration
- Import the workflow and configure credentials
- Update the Slack channel IDs in the trigger nodes
- Set the OpenAI API key in all OpenAI nodes
- Configure the Supabase connection in the vector store nodes
- Update the JIRA project settings in the MCP server nodes

### 5. Knowledge Base Data Format
Supported file formats: PDF, CSV
- **CSV structure**: Include columns such as (but not limited to) Ticket#, Issue Description, Issue Summary, Resolution Provided, Case Status, and Contact User
- **PDF content**: Technical documentation, troubleshooting guides, policy documents

Upload documents via the form trigger to automatically embed them in the vector database.

## Customization Options
### AI Agent Behavior
- Modify the system prompt in the AIHelpdesk Agent node
- Adjust the conversation memory window (default: 20 messages)
- Change the AI model (GPT-4o, GPT-3.5-turbo, etc.)
### Reaction Mappings
- Customize emoji-to-action mappings in the Reaction Handler code (see the sketch at the end of this listing)
- Add new reaction types for department-specific workflows
- Configure escalation rules and priority levels

### JIRA Integration
- Customize ticket templates and fields
- Add auto-assignment rules based on issue type
- Configure SLA and priority mappings

### Vector Search
- Adjust similarity thresholds for knowledge retrieval
- Modify search result limits and relevance scoring
- Add metadata filtering for departmental knowledge bases

## Advanced Features
- Thread-aware conversation memory
- Automatic bot loop prevention
- Context-preserving ticket creation
- Multi-modal file processing (PDF + CSV)
- Scalable MCP architecture for tool integration

## Use Cases
- **Level 1 IT Support**: Automate common troubleshooting workflows
- **Employee Onboarding**: Answer policy and procedure questions
- **Internal Help Desk**: Route and track internal service requests
- **Knowledge Management**: Make organizational knowledge searchable and actionable

## Template Includes
- Complete Slack integration with thread support
- RAG pipeline for document processing
- Vector similarity search implementation
- JIRA ticket lifecycle management
- Emoji reaction-based user interactions
- Comprehensive error handling and validation
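The Reaction Handler mentioned under Customization Options is a small n8n Code node. A minimal sketch of what such a mapping can look like, with hypothetical emoji and action names (the actual mappings live in the template's Reaction Handler code):

```javascript
// n8n Code node sketch: map Slack emoji reactions to helpdesk actions.
// The emoji names and action labels below are illustrative placeholders.
const actionMap = {
  ticket: 'create_jira_ticket',        // :ticket: -> open a new JIRA issue
  rotating_light: 'escalate',          // :rotating_light: -> escalate the thread
  white_check_mark: 'resolve_thread',  // :white_check_mark: -> mark resolved
};

const event = $json.event;             // Slack reaction_added payload
const action = actionMap[event.reaction];

// Ignore reactions that have no mapped action
if (!action) {
  return [];
}

return [{
  json: {
    action,
    channel: event.item.channel,       // channel containing the reacted message
    thread_ts: event.item.ts,          // timestamp identifying the thread
    user: event.user,
  },
}];
```

Adding a department-specific reaction is then just another entry in actionMap.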
by Davide
This workflow analyzes YouTube videos by extracting their transcripts, summarizing the content with AI models, and sending the analysis via email. It is ideal for content creators, marketers, or anyone who needs to quickly analyze and summarize YouTube videos for research, content planning, or educational purposes.

## How It Works
1. **Trigger**: The workflow starts with a manual trigger, allowing you to test it by clicking "Test workflow." You can also set a YouTube video URL manually or dynamically.
2. **YouTube Video ID Extraction**: The workflow extracts the YouTube video ID from the provided URL using a custom JavaScript function (a minimal sketch appears after this listing). This ID is required to fetch the transcript.
3. **Transcript Generation**: The video ID is sent via an HTTP request to generate the transcript. Replace APIKEY with a free API key from the service.
4. **Transcript Validation**: The workflow checks whether a transcript exists for the video. If one is available, it proceeds; otherwise, it stops.
5. **Full Text Extraction**: If a transcript exists, the workflow combines all transcript segments into a single text variable for further analysis.
6. **AI-Powered Analysis**: The full transcript is passed to an AI model (DeepSeek, OpenAI, or OpenRouter) for analysis. The AI generates a structured summary, including a title and key points, formatted in Markdown.
7. **Email Notification**: The analysis results (title and summary) are sent via email using SMTP credentials. The email contains the structured summary of the video.

## Set Up Steps
1. **YouTube Transcript API**: Obtain a free API key from youtube-transcript.io and replace APIKEY in the "Generate transcript" node with your key.
2. **AI Model Configuration**: Configure the AI model nodes (DeepSeek, OpenAI, or OpenRouter) with the appropriate API credentials. You can choose one or multiple models depending on your preference.
3. **Email Setup**: Configure the "Send Email" node with your SMTP credentials (e.g., Gmail, Outlook, or any SMTP service) and verify the settings so the analysis results can be delivered.

## Key Features
- **Free Tools**: Uses youtube-transcript.io for free transcript generation.
- **AI Models**: Supports multiple AI models (DeepSeek, OpenAI, OpenRouter) for flexible analysis.
- **Email Notifications**: Sends the analysis results directly to your inbox.
- **Customizable**: Easily adapt the workflow to analyze different videos or use different AI models.
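The ID-extraction step takes only a few lines. A minimal sketch of such a Code node, assuming the URL arrives in a videoUrl property (the template's actual function may differ):

```javascript
// n8n Code node sketch: extract the 11-character YouTube video ID
// from the common URL forms (watch, short link, shorts, embed).
const url = $json.videoUrl; // assumption: an upstream node sets this property

const match = url.match(
  /(?:youtube\.com\/(?:watch\?v=|shorts\/|embed\/)|youtu\.be\/)([A-Za-z0-9_-]{11})/
);

if (!match) {
  throw new Error(`Could not extract a video ID from: ${url}`);
}

return [{ json: { videoId: match[1] } }];
```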
by Onur
# Turn BBC News Articles into Podcasts using Hugging Face and Google Gemini
Effortlessly transform BBC news articles into engaging podcasts with this automated n8n workflow.

## Who is this for?
This template is perfect for:
- **Content creators** who want to quickly produce podcasts from current events.
- **Students** looking for an efficient way to create audio content for projects or assignments.
- **Individuals** interested in generating their own podcasts without technical expertise.

## Setup Information
1. **Install n8n**: If you haven't already, download and install n8n from n8n.io.
2. **Import the Workflow**: Copy the JSON code for this workflow and import it into your n8n instance.
3. **Configure Credentials**:
   - **Gemini API**: Set up your Gemini API credentials in the workflow's LLM nodes.
   - **Hugging Face Token**: Obtain an access token from Hugging Face and add it to the HTTP Request node for the text-to-speech model.
4. **Customize (Optional)**:
   - **Filtering Criteria**: Adjust the News Classifier node to fine-tune the selection of news articles based on your preferences.
   - **Output Options**: Modify the workflow to save the generated audio file to a cloud storage service or publish it to a podcast hosting platform.

## Prerequisites
- An active n8n instance.
- Basic understanding of n8n workflows (no coding required).
- API credentials for Gemini and a Hugging Face account with an access token.

## What problem does it solve?
This workflow eliminates the manual effort involved in creating podcasts from news articles. It automates the entire process, from fetching and filtering news to generating the final audio file.

## What are the benefits?
- **Time-saving:** Create podcasts in minutes, not hours.
- **Easy to use:** No coding or technical skills required.
- **Customizable:** Adapt the workflow to your specific needs and preferences.
- **Cost-effective:** Leverage free or low-cost services like Gemini and Hugging Face.

## How does it work?
1. The workflow fetches news articles from the BBC website.
2. It filters articles based on their suitability for a podcast.
3. It extracts the full content of the selected articles.
4. It uses the Gemini LLM to create a podcast script.
5. It converts the script to speech using Hugging Face's text-to-speech model (see the sketch after this listing).
6. The final podcast audio is ready for use.

## Nodes in the Workflow
- **Fetch BBC News Page**: Retrieves the main BBC News page.
- **News Classifier**: Categorizes news articles using the Gemini LLM.
- **Fetch BBC News Detail**: Extracts detailed content from suitable articles.
- **Basic Podcast LLM Chain**: Generates a podcast script using the Gemini LLM.
- **HTTP Request**: Converts the script to speech using Hugging Face.

I'm excited to share this workflow with the n8n community and help content creators and students easily produce engaging podcasts!

## Additional Tips
- Explore the n8n documentation and community resources for more advanced customization options.
- Experiment with different filtering criteria and LLM prompts to achieve your desired podcast style.
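For reference, the text-to-speech call in the HTTP Request node boils down to a single POST against the Hugging Face Inference API. A hedged sketch, assuming the script text sits in a podcastScript property; the model ID below is an example and not necessarily the one wired into this template:

```javascript
// Sketch: convert the generated podcast script to speech via the
// Hugging Face Inference API. "facebook/mms-tts-eng" is an example model.
const response = await fetch(
  'https://api-inference.huggingface.co/models/facebook/mms-tts-eng',
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.HF_TOKEN}`, // your Hugging Face access token
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ inputs: $json.podcastScript }), // assumed script property
  }
);

if (!response.ok) {
  throw new Error(`Text-to-speech request failed: ${response.status}`);
}

// The API responds with raw audio bytes; keep them as base64 for later nodes
const audio = Buffer.from(await response.arrayBuffer());
return [{ json: { audioBase64: audio.toString('base64') } }];
```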
by Mark Shcherbakov
## Video Guide
I prepared a detailed guide that demonstrates the complete process of building a trading agent automation using n8n and Telegram, seamlessly integrating various functions for stock analysis. Youtube Link

## Who is this for?
This workflow is perfect for traders, financial analysts, and developers looking to automate stock analysis interactions via Telegram. It's especially valuable for those who want to leverage AI tools for technical analysis without needing to write complex code.

## What problem does this workflow solve?
Many traders want real-time analysis of stock data but lack the technical expertise or tools to perform in-depth analysis. This workflow lets users interact with an AI trading agent through Telegram for seamless stock analysis, chart generation, and technical evaluation, eliminating the need for manual intervention.

## What this workflow does
This workflow uses n8n to build an end-to-end automation for stock analysis through Telegram:
- Receiving messages via a Telegram bot.
- Processing audio or text messages for trading queries.
- Transcribing audio with the OpenAI API for interpretation (see the sketch after this listing).
- Gathering and displaying charts based on user-specified parameters.
- Performing technical analysis on the generated charts.
- Sending the analyzed results back through Telegram.

## Setup
1. **Prepare Airtable**: Create a simple table to store tickers.
2. **Prepare Telegram Bot**: Ensure your Telegram bot is set up correctly and listening for new messages.
3. **Replace Credentials**: Update all nodes with the correct credentials and API keys for the services involved.
4. **Configure API Endpoints**: Ensure the chart service URLs are set correctly to interact with the corresponding APIs.
5. **Start Interaction**: Message your bot to initiate analysis; specify ticker symbols and desired chart styles as required.
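The transcription step maps to one call against OpenAI's audio endpoint. A minimal sketch, assuming the Telegram voice note's bytes are available as binary data on the incoming item (in the template this is handled by dedicated nodes):

```javascript
// Sketch: transcribe a Telegram voice message with OpenAI's Whisper endpoint.
// Assumes the voice note is stored in the item's "data" binary property.
const audioBuffer = await this.helpers.getBinaryDataBuffer(0, 'data');

const form = new FormData();
form.append('model', 'whisper-1');
form.append('file', new Blob([audioBuffer]), 'voice.ogg');

const response = await fetch('https://api.openai.com/v1/audio/transcriptions', {
  method: 'POST',
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  body: form, // fetch sets the multipart boundary automatically
});

const { text } = await response.json();
return [{ json: { transcript: text } }];
```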
by Lakindu Siriwardana
# 📄 Automated Lease Renewal Offer by Email

## ✅ Features
- Automated lease offer generation using AI (Ollama model).
- Duplicate file check to avoid reprocessing the same customer.
- Personalized offer letter creation based on customer details from Supabase.
- PDF/text file conversion for formatted output.
- Automatic Google Drive management for storing and retrieving files.
- Email sending with the generated offer letter attached.
- Seamless integration with Supabase, Google Drive, Gmail, and an AI LLM.

## ⚙️ How It Works
1. **Trigger**: The workflow starts on form submission with customer details.
2. **Customer Lookup**: Searches Supabase for customer data and updates customer information if needed.
3. **File Search & Duplication Check**: Looks for existing lease offer files in Google Drive; if a duplicate is found, the old file is deleted before proceeding (see the sketch after this listing).
4. **AI Lease Offer Creation**: Uses the LLM Chain (offerLetter) to generate a customized lease renewal letter.
5. **File Conversion**: Converts the AI-generated text into a downloadable file format.
6. **Upload to Drive**: Saves the new lease offer in Google Drive.
7. **Email Preparation**: Uses the Basic LLM Chain-email to draft the email body, then downloads the offer file from Drive and attaches it.
8. **Email Sending**: Sends the renewal offer email to the customer via Gmail.

## 🛠 Setup Steps
- **Supabase Connection**: Add Supabase credentials in n8n and ensure a customers table exists with the relevant columns.

## 🔜 Future Steps
- Add a specific letter template (organization template).
- PDF offer letter.
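The duplication check is a good candidate for a small Code node between the Drive search and the delete step. A minimal sketch under stated assumptions: the trigger node name, the customerId property, and the file-naming scheme are all hypothetical illustrations:

```javascript
// Sketch: decide whether an existing lease offer must be deleted first.
// Node name, customerId property, and naming scheme are assumptions.
const customerId = $('Form Trigger').first().json.customerId;
const expectedName = `lease-offer-${customerId}.txt`;

// Items coming in from a Google Drive "Search files" node
const duplicates = $input.all().filter(item => item.json.name === expectedName);

if (duplicates.length > 0) {
  // Route these to a Google Drive "Delete file" node
  return duplicates.map(item => ({ json: { fileId: item.json.id, action: 'delete' } }));
}

// Nothing to clean up: continue straight to offer generation
return [{ json: { action: 'create' } }];
```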
by Vishal Kumar
## How It Works
1. **Trigger**: The workflow runs when a GitLab Merge Request (MR) is created or updated.
2. **Extract & Analyze**: It retrieves the code diff and sends it to Claude AI or GPT-4o for risk assessment and issue detection (see the sketch after this listing).
3. **Generate Report**: The AI produces a structured summary with:
   - Risk levels
   - Identified issues
   - Recommendations
   - Test cases
4. **Notify Developers**: The report is:
   - Emailed to developers and QA teams
   - Posted as a comment on the GitLab MR

## Setup Guide
1. **Connect GitLab**: Add GitLab API credentials and select the repositories to track.
2. **Configure AI Analysis**: Enter your Anthropic (Claude) or OpenAI (GPT-4o) API key.
3. **Set Up Notifications**: Add Gmail credentials and update the email distribution list.
4. **Test & Automate**: Create a test MR to verify analysis and email delivery.

## Key Benefits
- **Automated Code Review** – AI-driven risk assessment and recommendations
- **Security & Compliance** – Identifies vulnerabilities before code is merged
- **Integration with GitLab CI/CD** – Works within existing DevOps workflows
- **Improved Collaboration** – Keeps developers and QA teams informed

Developed by Quantana, an AI-powered automation and software development company.
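Fetching the diff for analysis maps to one GitLab API call. A sketch under the assumption that the trigger delivers a standard merge request webhook payload and that a token is available in an environment variable:

```javascript
// Sketch: pull the MR diff so it can be handed to Claude or GPT-4o.
// Payload fields follow GitLab's merge request webhook format.
const projectId = $json.project.id;        // from the MR webhook payload
const mrIid = $json.object_attributes.iid; // the MR's internal ID

const response = await fetch(
  `https://gitlab.com/api/v4/projects/${projectId}/merge_requests/${mrIid}/changes`,
  { headers: { 'PRIVATE-TOKEN': process.env.GITLAB_TOKEN } }
);

const { changes } = await response.json();

// Stitch the per-file diffs into one text blob for the AI prompt
const diff = changes
  .map(c => `--- ${c.old_path}\n+++ ${c.new_path}\n${c.diff}`)
  .join('\n');

return [{ json: { diff } }];
```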
by Jaruphat J.
## Overview
This workflow automatically saves files received via the LINE Messaging API into Google Drive and logs the file details into a Google Sheet. It checks the file type against a list of allowed types, organizes files into date-based folders and (optionally) file type–specific subfolders, and sends a reply message back to the LINE user with the file URL, or an error message if the file type is not permitted.

## Who is this for?
- **Developers & IT administrators** looking to integrate LINE with Google Drive and Sheets for automated file management.
- **Businesses & marketing teams** that want to automatically archive media files and documents received from users via LINE.
- **Anyone interested in no-code automation** who wants to leverage n8n's capabilities without heavy coding.

## What Problem Does This Workflow Solve?
- **Automated file organization**: Files received from LINE are automatically checked against the allowed file types, then stored in a structured folder hierarchy in Google Drive (by date and/or file type).
- **Data logging**: Each file upload is recorded in a Google Sheet, providing an audit trail with file names, upload dates, URLs, and types.
- **Instant feedback**: Users receive an immediate reply via LINE confirming the file upload, or an error message if the file type is not allowed.

## What This Workflow Does
1. **Receives incoming requests**: A webhook node ("LINE Webhook Listener") listens for POST requests from LINE, capturing file upload events and associated metadata.
2. **Configuration loading**: A Google Sheets node ("Get Config") reads configuration data (e.g., parent folder ID, allowed file types, folder organization settings, and credentials) from a predefined sheet.
3. **Data merging & processing**: The "Merge Event and Config Data" and "Process Event and Config Data" nodes merge and structure the event data with the configuration settings. A "Determine Folder Info" node calculates folder names based on the configuration: if Store by Date is enabled, it uses the current date (or a specified date) as the folder name; if Store by File Type is also enabled, it uses the file's type (e.g., image) for a subfolder.
4. **Folder search & creation**: The workflow searches for an existing date folder ("Search Date Folder"). If the date folder is not found, an IF node ("Check Existing Date Folder") routes to a "Create Date Folder" node. Similarly, for file type organization, the workflow uses a "Search FileType Folder" node (with appropriate conditions) to look for a subfolder, or creates it if not found. The "Set Date Folder ID" and "Set Image Folder ID" nodes capture and merge the resulting folder IDs. Finally, the "Config final ParentId" node sets the final target folder ID based on the configuration:
   - Store by Date: TRUE, Store by File Type: TRUE → use the file type folder (inside the date folder).
   - Store by Date: TRUE, Store by File Type: FALSE → use the date folder.
   - Store by Date: FALSE, Store by File Type: TRUE → use the file type folder.
   - Store by Date: FALSE, Store by File Type: FALSE → use the Parent Folder ID from the configuration.
5. **File retrieval and validation**: An HTTP Request node ("Get File Binary Content") fetches the file's binary data from the LINE API. A Function node ("Validate File Type") checks whether the file's MIME type is included in the allowed list (e.g., "audio|image|video"); if not, it throws an error that is captured for the reply (see the sketch at the end of this listing).
6. **File upload and logging**: The "Upload File to Google Drive" node uploads the validated binary file to the final target folder. After a successful upload, the "Log File Details to Google Sheet" node logs details such as file name, upload date, Google Drive URL, and file type into a designated Google Sheet.
7. **User feedback**: The "Check Reply Enabled Flag" node checks whether the reply feature is enabled. Finally, the "Send LINE Reply Message" node sends a reply back to the LINE user with either the file URL (if the upload was successful) or an error message (if the file type was not allowed).

## Setup Instructions
1. **Google Sheets setup**: Create a Google Sheet with two sheets:
   - **config**: Include columns for Parent Folder Path, Parent Folder ID, Store by Date (boolean), Store by File Type (boolean), Allow File Types (e.g., "audio|image|video"), CurrentDate, Reply Enabled, and CHANNEL ACCESS TOKEN.
   - **fileList**: Create headers for File Name, Date Uploaded, Google Drive URL, and File Type.
   For an example of the required format, check this Google Sheets template: Google Sheet Template
2. **Google Drive credentials**: Set up and authorize your Google Drive credentials in n8n.
3. **LINE Messaging API**: Configure your LINE Developer Console webhook to point to the n8n webhook URL ("Line Chat Bot" node). Ensure the proper Channel Access Token is stored in your Google Sheet.
4. **n8n workflow import**: Import the provided JSON file into your n8n instance, verify node connections, and update any credential references as needed.
5. **Test the workflow**: Send a test message via LINE to confirm that files are properly validated, uploaded, and logged, and that reply messages are sent.

## How to Customize This Workflow
- **Allowed file types**: Adjust the "Validate File Type" field in your config sheet to control which file types are accepted.
- **Folder structure**: Modify the logic in the "Determine Folder Info" and subsequent folder nodes to change how folders are structured (e.g., use different date formats or add additional categorization).
- **Logging**: Update the "Log File Details to Google Sheet" node if you wish to log additional file metadata.
- **Reply messages**: Customize the reply text in the "Send LINE Reply Message" node to include more detailed information or instructions.
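The "Validate File Type" step described above is a short Function/Code node. A minimal sketch, assuming the allowed list arrives from the config sheet as a pipe-separated string on the current item:

```javascript
// Sketch: reject files whose MIME category is not in the allowed list.
// `allowFileTypes` is assumed to come from the config sheet, e.g. "audio|image|video".
const allowed = ($json.allowFileTypes || 'audio|image|video').split('|');

const mimeType = $binary.data.mimeType;   // e.g. "image/jpeg" from the LINE download
const category = mimeType.split('/')[0];  // "image"

if (!allowed.includes(category)) {
  // The thrown message is what gets relayed back to the LINE user
  throw new Error(`File type "${mimeType}" is not allowed.`);
}

return [{ json: { ...$json, fileType: category } }];
```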
by Abdullah Maftah
# Auto Source LinkedIn Candidates with GPT-4 Boolean Search & Google X-ray

## How It Works
1. **User input**: The user pastes a job description or ideal candidate specification into the workflow.
2. **Boolean search string generation**: OpenAI processes the input and generates a precise LinkedIn Boolean search string formatted as:
   site:linkedin.com/in ("Job Title" AND "Skill1" AND "Skill2")
   This search string is optimized to find relevant LinkedIn profiles matching the provided criteria.
3. **Google Sheet creation**: A new sheet is automatically created within a specified Google Sheets document to store the extracted LinkedIn profile URLs.
4. **Google search execution**: The workflow sends a search request to Google using an HTTP node with the generated Boolean string.
5. **Iterative search & data extraction**: The workflow retrieves the first 10 results from Google. If the desired number of LinkedIn profiles has not been reached, the workflow loops, fetching the next set of 10 results until the If condition is met.
6. **Data storage**: The workflow extracts LinkedIn profile URLs from the search results (see the sketch after this listing) and saves them to the newly created sheet for further review.

## Setup Steps
1. **API key configuration**: Under "Credentials", add the OpenAI API key from your OpenAI account settings. This key is used to generate the LinkedIn Boolean search string.
2. **Adjust search parameters**: Navigate to the "If" node and update the condition to define the desired number of LinkedIn profiles to extract. The default is 50, but you can set it to any number based on your needs.
3. **Establish the Google Sheets connection**: Connect your Google Sheets account to the workflow and create a document to store the sourced LinkedIn profiles. The workflow automatically creates a new sheet for each search, so no manual setup is needed.
4. **Authenticate Google Search**: Google search requires authentication for better results. Use the Cookie-Editor browser extension to export your header string and enable authenticated Google searches within the workflow.
5. **Run the workflow**: Execute the workflow and monitor the Google Sheet for newly added LinkedIn profiles.

## Benefits
✅ Automates profile sourcing, reducing manual search time.
✅ Generates precise LinkedIn Boolean search strings tailored to job descriptions.
✅ Extracts and saves LinkedIn profiles efficiently for recruitment efforts.

This solution leverages OpenAI and advanced search techniques to enhance your talent sourcing process, making it faster and more accurate! 🚀
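The extraction step can be as simple as a regex over the raw results page. A sketch, assuming the HTTP node returns the page HTML in a data property:

```javascript
// Sketch: pull LinkedIn profile URLs out of a Google results page.
// Assumes the HTTP Request node stored the page HTML in $json.data.
const html = $json.data;

const matches = html.match(
  /https:\/\/(?:[a-z]{2,3}\.)?linkedin\.com\/in\/[A-Za-z0-9\-_%]+/g
) || [];

// De-duplicate across result pages before writing to the sheet
const unique = [...new Set(matches)];

return unique.map(url => ({ json: { profileUrl: url } }));
```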
by Anna Bui
# 🎯 Universal Meeting Transcript to LinkedIn Content
Automatically transform your meeting insights into engaging LinkedIn content with AI. Perfect for coaches, consultants, sales professionals, and content creators who want to share valuable insights from their meetings without the manual effort of content creation.

## How it works
1. A calendar trigger detects when your coaching/meeting ends.
2. The workflow waits for meeting completion, then sends you a form via email.
3. You provide the meeting transcript and specify post preferences.
4. AI analyzes the transcript using your personal brand guidelines.
5. It generates professional LinkedIn content based on real insights.
6. It creates organized Google Docs with both the transcript and the final post.
7. It sends you links to review and publish your content.

## How to use
1. Connect your Google Calendar and Gmail accounts.
2. Update the calendar filter to match your meeting types.
3. Customize the AI prompts with your brand voice and style.
4. Replace the email addresses with your own.
5. Test with a sample meeting transcript.

## Requirements
- Google Calendar (for meeting detection)
- Gmail (for form delivery and notifications)
- Google Drive & Docs (for content storage)
- LangChain AI nodes (for content generation)

## Good to know
- AI processing may incur costs, depending on your LangChain provider.
- Works with any meeting platform – just copy/paste transcripts.
- Can be adapted to use webhooks from recording tools like Fireflies.ai.
- Memory nodes store your brand guidelines for consistent output.

Happy Content Creating!
by Luke
Automatically backs up your workflows to GitHub and generates documentation in a Notion database.

- Runs weekly and uses the "internal-infra" tag to look for new or recently modified workflows (see the sketch after this listing)
- Uses a Notion database page to hold the workflow summary, last updated date, and a link to the workflow
- Uses OpenAI's GPT-4o mini to generate a summary of what the workflow does
- Stores a backup of the workflow in GitHub (a private repo is recommended)
- Sends a notification to a Slack channel for new or updated workflows

## Who is this for
- Anyone seeking backups of their most important workflows
- Anyone seeking version control for their most important workflows

## Credentials required
- **n8n**: An n8n credential so the workflow can query the n8n instance and find all active workflows with the "internal-infra" tag
- **Notion**: A Notion credential
- **OpenAI**: An OpenAI credential, unless you intend to rewire this with your AI of choice (Ollama, OpenRouter, etc.)
- **GitHub**: A GitHub credential
- **Slack**: A Slack credential; a bot/access token configuration is recommended

## Setup
### Notion
Create a database with the following columns. The column type is specified in [type].
- Workflow Name [text]
- isActive (dev) [checkbox]
- Error workflow setup [checkbox]
- AI Summary [text]
- Record last update [date/time]
- URL (dev) [text/url]
- Workflow created at [date/time]
- Workflow updated at [date/time]

### Slack
Create a channel for updates to be posted into.

### GitHub
Create a private repo for your workflows to be exported into.

### n8n
1. Download and install the template.
2. Configure the nodes to use your n8n, Notion, OpenAI & Slack credentials.
3. Edit the "Set Fields" node and change the URL to that of your n8n instance (cloud or self-hosted).
4. Edit the "Add to Notion" action and specify the database page you wish to update.
5. Edit the Slack actions to specify the channel you want notifications posted to.
6. Edit the GitHub actions to specify the repository owner & repository name.

Sample output in Notion

Workflow diagram
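The tag-based lookup goes through n8n's public REST API. A minimal sketch of the underlying request; the instance URL is a placeholder and the key comes from your n8n API settings:

```javascript
// Sketch: list workflows carrying the "internal-infra" tag via the n8n public API.
const response = await fetch(
  'https://your-n8n-instance.example.com/api/v1/workflows?tags=internal-infra',
  { headers: { 'X-N8N-API-KEY': process.env.N8N_API_KEY } }
);

const { data } = await response.json();

// Keep only active workflows; each item carries id, name, updatedAt, nodes, etc.
return data
  .filter(wf => wf.active)
  .map(wf => ({ json: wf }));
```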
by Jon Doran
## Summary
Engage multiple, uniquely configured AI agents (using different models via OpenRouter) in a single conversation. Trigger specific agents with @mentions or let them all respond. Easily scalable by editing simple JSON settings.

## Overview
This workflow is for users who want to experiment with or utilize multiple AI agents with distinct personalities, instructions, and underlying models within a single chat interface, without complex setup. It solves the problem of managing and interacting with diverse AI assistants simultaneously for tasks like brainstorming, comparative analysis, or role-playing scenarios.

It enables dynamic conversations with multiple AI assistants simultaneously within a single chat interface. You can:
- Define multiple unique AI agents.
- Configure each agent with its own name, system instructions, and LLM model (via OpenRouter).
- Interact with specific agents using @AgentName mentions.
- Have all agents respond (in random order) if no specific agents are mentioned.
- Maintain conversation history across multiple turns.

It's designed for flexibility and scalability, allowing you to easily add or modify agents without complex workflow restructuring.

## Key Features
- **Multi-Agent Interaction:** Chat with several distinct AI personalities at once.
- **Individual Agent Configuration:** Customize the name, system prompt, and LLM for each agent.
- **OpenRouter Integration:** Access a wide variety of LLMs compatible with OpenRouter.
- **Mention-Based Triggering:** Direct messages to specific agents using @AgentName.
- **All-Agent Fallback:** Engages all defined agents in random order if no mentions are used.
- **Scalable Setup:** Agent configuration is centralized as JSON in a single Code node.
- **Conversation Memory:** Remembers previous interactions within the session.

## How to Set Up
1. **Configure settings (Code nodes):**
   - Open the Define Global Settings Code node and edit the JSON to set user details (name, location, notes) and add any system message instructions that all agents should follow.
   - Open the Define Agent Settings Code node and edit the JSON to define your agents. Add or remove agent objects as needed. For each agent, specify:
     - "name": The unique name for the agent (used for @mentions).
     - "model": The OpenRouter model identifier (e.g., "openai/gpt-4o", "anthropic/claude-3.7-sonnet").
     - "systemMessage": Specific instructions or persona for this agent.
2. **Add OpenRouter credentials:**
   - Locate the AI Agent node and click the OpenRouter Chat Model node connected below it via the Language Model input.
   - In the 'Credential for OpenRouter API' field, select or create your OpenRouter API credentials.

## How to Use
- Start a conversation using the Chat Trigger input.
- To address specific agents, include @AgentName in your message. Agents will respond sequentially in the order they are mentioned. Example: "@Gemma @Claude, please continue the count: 1" will trigger Gemma first, followed by Claude.
- If your message contains no @mentions, all agents defined in Define Agent Settings will respond in a randomized order. Example: "What are your thoughts on the future of AI?" will trigger Chad, Claude, and Gemma (based on the default settings) in a random sequence.
- The workflow collects responses from all triggered agents and returns them as a single, formatted message.

## How It Works (Technical Details)
1. **Settings nodes:** Define Global Settings and Define Agent Settings load your configurations.
2. **Mention extraction:** The Extract mentions Code node parses the user's input (chatInput) from the When chat message received trigger, looking for @AgentName patterns that match the names defined in Define Agent Settings (a minimal sketch appears after this listing).
3. **Agent selection:** If mentions are found, it creates a list of the corresponding agent configurations in the order they were mentioned. If no mentions are found, it creates a list of all defined agent configurations and shuffles it randomly.
4. **Looping:** The Loop Over Items node iterates through the selected agent list.
5. **Dynamic agent execution:** Inside the loop:
   - An If node (First loop?) checks whether this is the first agent responding.
   - If yes (true path → Set user message as input), it passes the original user message to the Agent.
   - If no (false path → Set last Assistant message as input), it passes the previous agent's formatted output (lastAssistantMessage) to the next agent, creating a sequential chain.
   - The AI Agent node receives the input message. Its System Message and the model in the connected OpenRouter Chat Model node are dynamically populated using expressions referencing the current agent's data from the loop ({{ $('Loop Over Items').item.json.* }}).
   - The Simple Memory node provides conversation history to the AI Agent.
   - The agent's response is formatted (e.g., AgentName:\n\nResponse) in the Set lastAssistantMessage node.
6. **Response aggregation:** After the loop finishes, the Combine and format responses Code node gathers all the lastAssistantMessage outputs and joins them into a single text block, separated by horizontal rules (---), ready to be sent back to the user.

## Benefits
- **Scalability & Flexibility:** Instead of complex branching logic, adding, removing, or modifying agents only requires editing simple JSON in the Define Agent Settings node, making setup and maintenance significantly easier, especially for those managing multiple assistants.
- **Model Choice:** Use the best model for each agent's specific task or persona via OpenRouter.
- **Centralized Configuration:** Keeps agent setup tidy and manageable.

## Limitations
- **Sequential responses:** Agents respond one after another based on mention order (or randomly), not in parallel.
- **No direct agent-to-agent interaction (within a turn):** Agents cannot directly call or reply to each other during the processing of a single user message. Agent B sees Agent A's response only because the workflow passes it as input in the next loop iteration.
- **Delayed output:** The user receives the combined response only after all triggered agents have completed their generation.
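For a concrete picture of the mention logic, here is a minimal sketch of what an Extract mentions Code node can do; the settings shape ({ agents: [...] }) is an assumption about the template's JSON, not its confirmed structure:

```javascript
// Sketch: select agents based on @mentions, preserving mention order,
// and fall back to all agents (shuffled) when no one is mentioned.
const agents = $('Define Agent Settings').first().json.agents; // assumed shape
const text = $json.chatInput;

const mentioned = agents
  .map(agent => ({
    agent,
    index: text.search(new RegExp(`@${agent.name}\\b`, 'i')),
  }))
  .filter(entry => entry.index !== -1)
  .sort((a, b) => a.index - b.index)   // respond in the order mentioned
  .map(entry => entry.agent);

const selected = mentioned.length > 0
  ? mentioned
  : [...agents].sort(() => Math.random() - 0.5); // simple random shuffle

return selected.map(agent => ({ json: agent }));
```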
by Elay Guez
# Daily Economic News Brief for Israel (Hebrew, RTL, GPT-4o)

## Overview
Stay ahead of the curve with this AI-powered workflow that delivers a daily economic summary tailored for professionals tracking the Israeli economy. At 8:00 PM Israel time, this workflow:
- Retrieves the latest articles from Calcalist and Mako via RSS
- Filters out duplicates and irrelevant stories (see the sketch after this listing)
- Uses OpenAI's GPT-4o to identify the 5 most important stories of the day
- Summarizes each article in concise, readable Hebrew
- Generates a fully styled, responsive HTML email (with proper RTL layout)
- Sends it to your inbox using your preferred SMTP email provider

Perfect for economists, analysts, investors, or policymakers who want an actionable and personalized news digest – no distractions, no fluff.

## Setup Instructions
Estimated setup time: 10 minutes

Required credentials:
- OpenAI API key
- SMTP credentials (for email delivery)

Steps:
1. Import this template into your n8n instance.
2. Add your OpenAI API key under credentials.
3. Configure the SMTP Email node with:
   - Host (e.g., smtp.gmail.com)
   - Port (465 or 587)
   - Username (your email)
   - Password (app-specific password or login)
4. Set your target email address in the last node.
5. (Optional) Customize the GPT prompt to adjust the tone or audience (e.g., general public, policymakers).
6. Activate the workflow and receive daily updates straight to your inbox.

## Customization Tips
- Change the RSS sources to pull from other Hebrew or international news websites
- Modify the summarization prompt to fit different sectors (e.g., tech, health, politics)
- Add integrations like Notion, Airtable, or Telegram for logging or distribution
- Apply your branding to the HTML output (logos, footer, colors)

## Why Use This?
This is more than a news digest. It's an intelligent economic assistant that filters noise, highlights what matters, and keeps you informed automatically. You can set it up in 10 minutes and benefit every single day.
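The duplicate filter can be a compact Code node placed after the RSS Read nodes. A minimal sketch, assuming both feeds' items are merged into one input stream with a title property:

```javascript
// Sketch: drop stories that appear in both feeds, keyed by normalized title.
const seen = new Set();

return $input.all().filter(item => {
  const key = (item.json.title || '').trim().toLowerCase();
  if (seen.has(key)) return false; // already covered by the other feed
  seen.add(key);
  return true;
});
```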