by Agniva Mahata
How it Works:

Trigger: The workflow is triggered by a webhook, initiated by an Airtable automation. This automation sends the Book or Chapter record ID and the desired action (e.g., "Generate Book Details," "Generate Chapters," "Generate Chapter Research," "Generate Chapter Content").

Action Routing: A "Switch" node directs the workflow based on the action query parameter received from the webhook. This determines which part of the book creation process will be executed.

Data Retrieval: The workflow fetches the relevant book or chapter data from Airtable using the provided recordId.

AI Processing:
- Book Details Generation: If the action is "Generate Book Details," an AI Agent (powered by a Large Language Model (LLM) like Google Gemini and the Perplexity search tool) researches the book idea. It focuses on crafting a compelling book description, identifying the target audience, and conducting general book research to maximize bestseller potential. The research brief is then saved back to Airtable.
- Chapter Generation: If the action is "Generate Chapters," an LLM generates 7-10 chapter titles and descriptions based on the book idea and previous research. A structured output parser ensures the chapter data is in the correct format. The chapters are then split into individual items and saved as separate records in the "Chapter" table in Airtable, linked to the main book record.
- Chapter Research Generation: If the action is "Generate Chapter Research," another AI Agent conducts in-depth research on a specific chapter, using the Perplexity search tool multiple times. It focuses on finding stories, case studies, historical events, and expert perspectives to make the chapter engaging and credible. The research is saved back to the "Chapter" record in Airtable.
- Chapter Content Generation: If the action is "Generate Chapter Content," an LLM writes the full content of the chapter, using the research gathered in the previous step, the overall book research, and the chapter description. The generated content is saved back to the "Chapter" record in Airtable.

Airtable Updates: In each of the AI processing steps, the workflow updates the corresponding Airtable record (either "Book" or "Chapter") with the generated results (research, chapter details, or content) and sets the "Action" field back to "Idle."

Set Up Steps:

Airtable Setup (Estimated time: 10-15 minutes):
- Copy the Airtable base blueprint: https://airtable.com/appfkz4KUlKvOjtbp/shra78TlDfqLRdSfT. This will create the "Book" and "Chapter" tables with the necessary fields.
- In the "Book" table, create three Airtable Automations:
  - Trigger: When a record matches conditions -> Action is Generate Book Details
  - Action: Run a script. Use the following script: let autoRoute = input.config(); await fetch(autoRoute.webhookUrl + "?recordId=" + autoRoute.recordId + "&action=" + autoRoute.action); (a commented version of this script is shown below)
  - In the script action's configuration, add three "Input variables": webhookUrl (map it to your n8n webhook URL, obtained in the next step), recordId (map it to the Airtable record ID), and action (map it to Action).
- Repeat this process to create two more automations in the "Book" table, identical except for their trigger conditions (e.g., when Action is Generate Chapters).
- In the "Chapter" table, create two Airtable Automations:
  - Trigger: When a record matches conditions -> Action is Generate Chapter Research
  - Action: Run a script (use the same script as above, with the same input variables).
  - Create a second automation, identical except triggered when Action is Generate Chapter Content.
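For reference, here is the same Airtable automation script as a commented block. It simply reads the three input variables configured on the script step and forwards them to the n8n webhook as query parameters:

```javascript
// Airtable automation "Run a script" action.
// Reads the input variables configured on the script step
// (webhookUrl, recordId, action) and calls the n8n webhook,
// passing the record ID and requested action as query parameters.
let autoRoute = input.config();

await fetch(
  autoRoute.webhookUrl +
    "?recordId=" + autoRoute.recordId +
    "&action=" + autoRoute.action
);
```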
n8n Setup (Estimated time: 15-20 minutes):
- Import the provided JSON workflow into n8n.
- Webhook Node: Copy the "Test URL" from the Webhook node. This is the webhookUrl you'll use in the Airtable automations. Important: Once you've tested and are ready to go live, switch to the "Production URL."
- Airtable Nodes: Configure all Airtable nodes (there are eight). You'll need to connect your Airtable account using OAuth 2. Select the correct Base ("Book Agency [v1] Cobuild" or whatever you named it) and Table ("Book" or "Chapter") for each node. The field mappings are already defined in the template, but double-check them.
- LLM Nodes (Google Gemini & OpenAI): Connect your Google Gemini and OpenAI accounts to the respective LLM nodes. You'll need API keys for both. You may also configure different LLM models.
- Perplexity Nodes: Connect your Perplexity AI API to the Perplexity nodes. You'll need an API key for this as well.
- Activate the workflow.

Testing (Estimated Time: 5-10 minutes):
- Go to your Airtable "Book" table and create a new record.
- Fill in the "Idea" field with a book concept.
- Change the "Action" field to "Generate Book Details". The Airtable automation should trigger, sending a request to your n8n webhook.
- Monitor the n8n execution log to see the workflow in action.
- Check the Airtable record to see if the "Research" field is populated.
- Repeat the testing for Generate Chapters, Generate Chapter Research, and Generate Chapter Content.
by Davide
This workflow is designed to analyze YouTube videos by extracting their transcripts, summarizing the content using AI models, and sending the analysis via email. It is ideal for content creators, marketers, or anyone who needs to quickly analyze and summarize YouTube videos for research, content planning, or educational purposes.

How It Works:
- Trigger: The workflow starts with a manual trigger, allowing you to test it by clicking "Test workflow." You can also set a YouTube video URL manually or dynamically.
- YouTube Video ID Extraction: The workflow extracts the YouTube video ID from the provided URL using a custom JavaScript function (see the sketch below). This ID is necessary for fetching the transcript.
- Transcript Generation: The video ID is sent via an HTTP request to generate the transcript. You need to replace APIKEY with a free API key from the service.
- Transcript Validation: The workflow checks if a transcript exists for the video. If a transcript is available, it proceeds; otherwise, it stops.
- Full Text Extraction: If a transcript exists, the workflow combines all transcript segments into a single text variable for further analysis.
- AI-Powered Analysis: The full transcript is passed to an AI model (DeepSeek, OpenAI, or OpenRouter) for analysis. The AI generates a structured summary, including a title and key points, formatted in Markdown.
- Email Notification: The analysis results (title and summary) are sent via email using SMTP credentials. The email contains the structured summary of the video.

Set Up Steps:
- YouTube Transcript API: Obtain a free API key from youtube-transcript.io and replace APIKEY in the "Generate transcript" node with your key.
- AI Model Configuration: Configure the AI model nodes (DeepSeek, OpenAI, or OpenRouter) with the appropriate API credentials. You can choose one or multiple models depending on your preference.
- Email Setup: Configure the "Send Email" node with your SMTP credentials (e.g., Gmail, Outlook, or any SMTP service). Ensure the email settings are correct to send the analysis results.

Key Features:
- **Free Tools:** Uses youtube-transcript.io for free transcript generation.
- **AI Models:** Supports multiple AI models (DeepSeek, OpenAI, OpenRouter) for flexible analysis.
- **Email Notifications:** Sends the analysis results directly to your inbox.
- **Customizable:** Easily adapt the workflow to analyze different videos or use different AI models.
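To make the video-ID extraction step concrete, here is a minimal sketch of the kind of JavaScript an n8n Code node could use. The field name videoUrl and the regex are assumptions for illustration, not the template's exact code:

```javascript
// n8n Code node (Run Once for Each Item) - extract the YouTube video ID
// from a URL stored on the incoming item (field name is an assumption).
const url = $json.videoUrl || "";

// Matches watch?v=ID, /embed/ID, /shorts/ID and youtu.be/ID URL forms.
const match = url.match(
  /(?:youtube\.com\/(?:watch\?v=|embed\/|shorts\/)|youtu\.be\/)([A-Za-z0-9_-]{11})/
);

return {
  json: {
    videoUrl: url,
    videoId: match ? match[1] : null, // null when no ID could be extracted
  },
};
```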
by Onur
Turn BBC News Articles into Podcasts using Hugging Face and Google Gemini

Effortlessly transform BBC news articles into engaging podcasts with this automated n8n workflow.

Who is this for? This template is perfect for:
- **Content creators** who want to quickly produce podcasts from current events.
- **Students** looking for an efficient way to create audio content for projects or assignments.
- **Individuals** interested in generating their own podcasts without technical expertise.

Setup Information
- Install n8n: If you haven't already, download and install n8n from n8n.io.
- Import the Workflow: Copy the JSON code for this workflow and import it into your n8n instance.
- Configure Credentials:
  - Gemini API: Set up your Gemini API credentials in the workflow's LLM nodes.
  - Hugging Face Token: Obtain an access token from Hugging Face and add it to the HTTP Request node for the text-to-speech model.
- Customize (Optional):
  - Filtering Criteria: Adjust the News Classifier node to fine-tune the selection of news articles based on your preferences.
  - Output Options: Modify the workflow to save the generated audio file to a cloud storage service or publish it to a podcast hosting platform.

Prerequisites
- An active n8n instance.
- Basic understanding of n8n workflows (no coding required).
- API credentials for Gemini and a Hugging Face account with an access token.

What problem does it solve? This workflow eliminates the manual effort involved in creating podcasts from news articles. It automates the entire process, from fetching and filtering news to generating the final audio file.

What are the benefits?
- **Time-saving:** Create podcasts in minutes, not hours.
- **Easy to use:** No coding or technical skills required.
- **Customizable:** Adapt the workflow to your specific needs and preferences.
- **Cost-effective:** Leverage free or low-cost services like Gemini and Hugging Face.

How does it work?
- The workflow fetches news articles from the BBC website.
- It filters articles based on their suitability for a podcast.
- It extracts the full content of the selected articles.
- It uses Gemini LLM to create a podcast script.
- It converts the script to speech using Hugging Face's text-to-speech model (a sketch of this call appears at the end of this section).
- The final podcast audio is ready for use.

Nodes in the Workflow
- Fetch BBC News Page: Retrieves the main BBC News page.
- News Classifier: Categorizes news articles using Gemini LLM.
- Fetch BBC News Detail: Extracts detailed content from suitable articles.
- Basic Podcast LLM Chain: Generates a podcast script using Gemini LLM.
- HTTP Request: Converts the script to speech using Hugging Face.

I'm excited to share this workflow with the n8n community and help content creators and students easily produce engaging podcasts!

Additional Tips
- Explore the n8n documentation and community resources for more advanced customization options.
- Experiment with different filtering criteria and LLM prompts to achieve your desired podcast style.
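For orientation, the text-to-speech request in the HTTP Request node boils down to a call against the Hugging Face Inference API along these lines. The token, model ID, and input text are placeholders, so adjust them to whatever your copy of the workflow uses:

```javascript
// Minimal sketch of a Hugging Face Inference API text-to-speech call.
// HF_TOKEN and MODEL_ID are placeholders - substitute your own values.
const HF_TOKEN = "hf_xxx";               // Hugging Face access token
const MODEL_ID = "facebook/mms-tts-eng"; // example TTS model

const response = await fetch(
  `https://api-inference.huggingface.co/models/${MODEL_ID}`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${HF_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ inputs: "Podcast script text goes here." }),
  }
);

// For TTS models the API responds with the generated audio as binary data.
const audioBuffer = await response.arrayBuffer();
```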
by Mark Shcherbakov
Video Guide
I prepared a detailed guide that demonstrates the complete process of building a trading agent automation using n8n and Telegram, seamlessly integrating various functions for stock analysis. Youtube Link

Who is this for? This workflow is perfect for traders, financial analysts, and developers looking to automate stock analysis interactions via Telegram. It’s especially valuable for those who want to leverage AI tools for technical analysis without needing to write complex code.

What problem does this workflow solve? Many traders want real-time analysis of stock data but lack the technical expertise or tools to perform in-depth analysis. This workflow allows users to easily interact with an AI trading agent through Telegram for seamless stock analysis, chart generation, and technical evaluation, all while eliminating the need for manual intervention.

What this workflow does
This workflow uses n8n to construct an end-to-end automation process for stock analysis through Telegram communication. The setup involves:
- Receiving messages via a Telegram bot.
- Processing audio or text messages for trading queries (a routing sketch follows this section).
- Transcribing audio using the OpenAI API for interpretation.
- Gathering and displaying charts based on user-specified parameters.
- Performing technical analysis on generated charts.
- Sending back the analyzed results through Telegram.

Setup
- Prepare Airtable: Create a simple table to store tickers.
- Prepare Telegram Bot: Ensure your Telegram bot is set up correctly and listening for new messages.
- Replace Credentials: Update all nodes with the correct credentials and API keys for the services involved.
- Configure API Endpoints: Ensure chart service URLs are correctly set to interact with the corresponding APIs properly.
- Start Interaction: Message your bot to initiate analysis; specify ticker symbols and desired chart styles as required.
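As a rough idea of how the incoming Telegram update can be routed between the audio and text branches, a Code node could look like the sketch below. The field names follow the Telegram Bot API; the exact node layout in the template may differ:

```javascript
// Sketch: decide whether the Telegram update carries a voice message
// (to be transcribed via OpenAI) or a plain text trading query.
const message = $json.message || {};

if (message.voice) {
  // Voice note: pass the file_id on so a later node can download the
  // audio and hand it to the transcription step.
  return { json: { type: "voice", fileId: message.voice.file_id } };
}

return { json: { type: "text", text: message.text || "" } };
```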
by Jaruphat J.
Overview
This workflow automatically saves files received via the LINE Messaging API into Google Drive and logs the file details into a Google Sheet. It checks the file type against allowed types, organizes files into date-based folders and (optionally) file type–specific subfolders, and sends a reply message back to the LINE user with the file URL or an error message if the file type is not permitted.

Who is this for?
- Developers & IT Administrators: Looking to integrate LINE with Google Drive and Sheets for automated file management.
- Businesses & Marketing Teams: That want to automatically archive media files and documents received from users via LINE.
- Anyone Interested in No-Code Automation: Users who want to leverage n8n’s capabilities without heavy coding.

What Problem Does This Workflow Solve?
- Automated File Organization: Files received from LINE are automatically checked for allowed file types, then stored in a structured folder hierarchy in Google Drive (by date and/or file type).
- Data Logging: Each file upload is recorded in a Google Sheet, providing an audit trail with file names, upload dates, URLs, and types.
- Instant Feedback: Users receive an immediate reply via LINE confirming the file upload, or an error message if the file type is not allowed.

What This Workflow Does
1. Receives Incoming Requests: A webhook node ("LINE Webhook Listener") listens for POST requests from LINE, capturing file upload events and associated metadata.
2. Configuration Loading: A Google Sheets node ("Get Config") reads configuration data (e.g., parent folder ID, allowed file types, folder organization settings, and credentials) from a pre-defined sheet.
3. Data Merging & Processing: The "Merge Event and Config Data" and "Process Event and Config Data" nodes merge and structure the event data with configuration settings. A "Determine Folder Info" node calculates folder names based on the configuration. If Store by Date is enabled, it uses the current date (or a specified date) as the folder name. If Store by File Type is also enabled, it uses the file’s type (e.g., image) for a subfolder.
4. Folder Search & Creation: The workflow searches for an existing date folder ("Search Date Folder"). If the date folder is not found, an IF node ("Check Existing Date Folder") routes to a "Create Date Folder" node. Similarly, for file type organization, the workflow uses a "Search FileType Folder" node (with appropriate conditions) to look for a subfolder, or creates it if not found. The "Set Date Folder ID" and "Set Image Folder ID" nodes capture and merge the resulting folder IDs. Finally, the "Config final ParentId" node sets the final target folder ID based on the configuration conditions:
   - Store by Date: TRUE, Store by File Type: TRUE: Use the file type folder (inside the date folder).
   - Store by Date: TRUE, Store by File Type: FALSE: Use the date folder.
   - Store by Date: FALSE, Store by File Type: TRUE: Use the file type folder.
   - Store by Date: FALSE, Store by File Type: FALSE: Use the Parent Folder ID from the configuration.
5. File Retrieval and Validation: An HTTP Request node ("Get File Binary Content") fetches the file’s binary data from the LINE API. A Function node ("Validate File Type") checks if the file’s MIME type is included in the allowed list (e.g., "audio|image|video"). If not, it throws an error that is captured for the reply (a sketch of this check appears at the end of this section).
6. File Upload and Logging: The "Upload File to Google Drive" node uploads the validated binary file to the final target folder.
After a successful upload, the "Log File Details to Google Sheet" node logs details such as file name, upload date, Google Drive URL, and file type into a designated Google Sheet.
7. User Feedback: The "Check Reply Enabled Flag" node checks if the reply feature is enabled. Finally, the "Send LINE Reply Message" node sends a reply message back to the LINE user with either the file URL (if the upload was successful) or an error message (if the file type was not allowed).

Setup Instructions
1. Google Sheets Setup: Create a Google Sheet with two sheets:
   - config: Include columns for Parent Folder Path, Parent Folder ID, Store by Date (boolean), Store by File Type (boolean), Allow File Types (e.g., “audio|image|video”), CurrentDate, Reply Enabled, and CHANNEL ACCESS TOKEN.
   - fileList: Create headers for File Name, Date Uploaded, Google Drive URL, and File Type.
   For an example of the required format, check this Google Sheets template: Google Sheet Template
2. Google Drive Credentials: Set up and authorize your Google Drive credentials in n8n.
3. LINE Messaging API: Configure your LINE Developer Console webhook to point to the n8n Webhook URL ("Line Chat Bot" node). Ensure you have the proper Channel Access Token stored in your Google Sheet.
4. n8n Workflow Import: Import the provided JSON file into your n8n instance. Verify node connections and update any credential references as needed.
5. Test the Workflow: Send a test message via LINE to confirm that files are properly validated, uploaded, logged, and that reply messages are sent.

How to Customize This Workflow
- Allowed File Types: Adjust the "Validate File Type" field in your config sheet to control which file types are accepted.
- Folder Structure: Modify the logic in the "Determine Folder Info" and subsequent folder nodes to change how folders are structured (e.g., use different date formats or add additional categorization).
- Logging: Update the "Log File Details to Google Sheet" node if you wish to log additional file metadata.
- Reply Messages: Customize the reply text in the "Send LINE Reply Message" node to include more detailed information or instructions.
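The "Validate File Type" check described in step 5 can be pictured as a small Code/Function node along these lines. This is only a sketch: the field name for the allowed-types setting and the binary property name are assumptions and may differ from the template:

```javascript
// Sketch of the "Validate File Type" step: compare the MIME type of the
// downloaded LINE file against the allowed list from the config sheet,
// e.g. "audio|image|video". Field names below are assumptions.
const allowed = ($json.allowFileTypes || "audio|image|video").split("|");

const binary = $input.item.binary || {};
const mimeType = binary.data ? binary.data.mimeType : "";

// A file passes when its MIME type starts with one of the allowed groups,
// e.g. "image/jpeg" matches "image".
const isAllowed = allowed.some((type) => mimeType.startsWith(type));

if (!isAllowed) {
  // Throwing here lets the workflow route to the error reply sent back to the LINE user.
  throw new Error(`File type "${mimeType}" is not allowed`);
}

return $input.item;
```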
by Abdullah Maftah
Auto Source LinkedIn Candidates with GPT-4 Boolean Search & Google X-ray

How It Works:
- User Input: The user pastes a job description or ideal candidate specifications into the workflow.
- Boolean Search String Generation: OpenAI processes the input and generates a precise LinkedIn Boolean search string formatted as: site:linkedin.com/in ("Job Title" AND "Skill1" AND "Skill2"). This search string is optimized to find relevant LinkedIn profiles matching the provided criteria.
- Google Sheet Creation: A new Google Sheet is automatically created within a specified document to store extracted LinkedIn profile URLs.
- Google Search Execution: The workflow sends a search request to Google using an HTTP node with the generated Boolean string.
- Iterative Search & Data Extraction: The workflow retrieves the first 10 results from Google. If the desired number of LinkedIn profiles has not been reached, the workflow loops, fetching the next set of 10 results until the If condition is met.
- Data Storage: The workflow extracts LinkedIn profile URLs from the search results and saves them to the newly created Google Sheet for further review (a sketch of the extraction step follows the setup steps below).

Setup Steps:
1. API Key Configuration: Under "Credentials", add your OpenAI API key from your OpenAI account settings. This key is used to generate the LinkedIn Boolean search string.
2. Adjust Search Parameters: Navigate to the "If" node and update the condition to define the desired number of LinkedIn profiles to extract. The default is 50, but you can set it to any number based on your needs.
3. Establish Google Sheets Connection: Connect your Google Sheets account to the workflow and create a document to store the sourced LinkedIn profiles. The workflow automatically creates a new sheet for each new search, so no manual setup is needed.
4. Authenticate Google Search: Google search requires authentication for better results. Use the Cookie-Editor browser extension to export your header string and enable authenticated Google searches within the workflow.
5. Run the Workflow: Execute the workflow and monitor the Google Sheet for newly added LinkedIn profiles.

Benefits:
✅ Automates profile sourcing, reducing manual search time.
✅ Generates precise LinkedIn Boolean search strings tailored to job descriptions.
✅ Extracts and saves LinkedIn profiles efficiently for recruitment efforts.

This solution leverages OpenAI and advanced search techniques to enhance your talent sourcing process, making it faster and more accurate! 🚀
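To illustrate the extraction step, a Code node along the following lines could pull LinkedIn profile URLs out of the raw Google results HTML. This is a sketch under the assumption that the HTTP node returns the page body in a field called data; the regex and deduplication are illustrative rather than the template's exact code:

```javascript
// Sketch: extract linkedin.com/in profile URLs from the Google results page HTML.
// Assumes the previous HTTP Request node returned the page body in $json.data.
const html = $json.data || "";

// Match public profile URLs such as https://www.linkedin.com/in/jane-doe
// (also covers country subdomains like uk.linkedin.com).
const matches =
  html.match(/https:\/\/[a-z]{2,3}\.linkedin\.com\/in\/[A-Za-z0-9\-_%]+/g) || [];

// Deduplicate before appending the URLs to the Google Sheet.
const profileUrls = [...new Set(matches)];

return profileUrls.map((url) => ({ json: { profileUrl: url } }));
```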
by Mirza Ajmal
Who is this for?
This workflow is ideal for:
- HR teams and recruiters seeking to streamline resume screening.
- Hiring managers who want quick, summarized candidate insights.
- Recruitment agencies handling large volumes of applicant data.
- Startups and small businesses looking to automate hiring without complex systems.
- AI and automation professionals who want to build smart HR workflows using n8n and OpenAI.

What problem is this workflow solving? / Use Case
Manually reviewing resumes is time-consuming, inconsistent, and prone to human bias. This workflow automates the resume intake and evaluation process, ensuring that each applicant is screened, summarized, and scored using a consistent, data-driven method. It enhances efficiency and supports better hiring decisions.

What this workflow does
- Accepts resume submissions via form and saves files to Google Drive.
- Extracts key information from resumes using AI (e.g., name, contact, education, experience).
- Summarizes candidate qualifications into a short, readable profile.
- Allows HR to rate applicants and leave comments.
- Logs all extracted data and evaluations into a centralized Google Sheet for tracking.

Setup
- Resume is submitted through an n8n form.
- The uploaded file is automatically stored in Google Drive.
- n8n uses OpenAI and document parsing tools to extract candidate data.
- Extracted information is structured and summarized using GPT.
- A review form is triggered for internal HR rating and notes.
- All data is appended to a Google Sheet for records and filtering.

How to customize this workflow to your needs
- Change the form tool (e.g., Typeform, Tally, or custom HTML) based on your stack.
- Adapt the summary prompt to align with your specific role requirements.
- Add filters to auto-flag top-tier candidates based on score or skills.
- Integrate Slack or email to notify hiring managers when top resumes are processed.
- Connect to your ATS if you want to push processed resumes into your recruitment system.
by Luke
Automatically backs up your workflows to GitHub and generates documentation in a Notion database.
- Runs weekly and uses the "internal-infra" tag to look for new or recently modified workflows
- Uses a Notion database page to hold the workflow summary, last updated date, and a link to the workflow
- Uses OpenAI's 4o-mini to generate a summarization of what the workflow does
- Stores a backup of the workflow in GitHub (a private repo is recommended)
- Sends a notification to a Slack channel for new or updated workflows

Who is this for
- Anyone seeking backup of their most important workflows
- Anyone seeking version control for their most important workflows

Credentials required
- N8N: You will need an N8N credential created so the workflow can query the N8N instance to find all active workflows with the "internal-infra" tag (the sketch after this section shows the equivalent API call)
- Notion: You will need a Notion credential created
- OpenAI: You will need an OpenAI credential, unless you intend on rewiring this with your AI of choice (Ollama, OpenRouter, etc.)
- GitHub: You will need a GitHub credential
- Slack: You will need a Slack credential; a Bot / access token configuration is recommended

Setup
Notion
- Create a database with the following columns. Column type is specified in [type].
  - Workflow Name [text]
  - isActive (dev) [checkbox]
  - Error workflow setup [checkbox]
  - AI Summary [text]
  - Record last update [date/time]
  - URL (dev) [text/url]
  - Workflow created at [date/time]
  - Workflow updated at [date/time]
Slack
- Create a channel for updates to be posted into
Github
- Create a private repo for your workflows to be exported into
N8N
- Download & install the template
- Configure the blocks to use your own N8N, Notion, OpenAI & Slack credentials
- Edit the "Set Fields" block and change the URL to that of your N8N instance (cloud or self-hosted)
- Edit the "Add to Notion" action and specify the Database page you wish to update
- Edit the Slack actions to specify the Channel you want Slack notifications posted to
- Edit the GitHub actions to specify the Repository Owner & Repository Name

Sample output in Notion
Workflow diagram
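Under the hood, the n8n node that finds the tagged workflows is roughly equivalent to this call against your instance's public REST API. The base URL and API key are placeholders, and the template's own n8n node handles this for you:

```javascript
// Sketch: list active workflows carrying the "internal-infra" tag via the n8n public API.
const N8N_URL = "https://your-n8n-instance.example.com"; // placeholder - your instance URL
const API_KEY = "your-n8n-api-key";                      // placeholder - n8n API key

const response = await fetch(
  `${N8N_URL}/api/v1/workflows?tags=internal-infra&active=true`,
  { headers: { "X-N8N-API-KEY": API_KEY } }
);

const { data } = await response.json();

// Each entry carries id, name, createdAt and updatedAt - the fields the
// workflow uses to decide whether a Notion/GitHub update is needed.
console.log(data.map((wf) => ({ id: wf.id, name: wf.name, updatedAt: wf.updatedAt })));
```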
by Jimleuk
This n8n template uses existing emails from customers as context to customise and "finetune" outreach emails to them using AI.

By now, it should be common knowledge that we can leverage AI to generate unique emails, but in a way they can remain generic because the AI lacks the customer context needed to be truly personalised. One way to solve this is by pulling in a source of customer data - and what better way than by using existing email correspondence.

How it works
- Customers to target are pulled from Hubspot and each customer is then run in a loop. We're using a loop as the retrieved emails for each customer become separate items and a loop helps with item reference.
- We connect to our Gmail account to pull all emails received from the customer. The contents of these emails are suitable for building a short persona of the customer.
- We use the Information Extractor to get our AI model to pull out the key attributes of this persona, such as decision-making style and communication preferences.
- With this persona, we can now pass this to our AI model to generate a personalised outreach email specifically for our customer.
- Finally, a draft email is created for human review before sending. If you would rather send the email straight away, this is also possible.

How to use
- Define the topic of the outreach email in the "variables" node. This directs the AI on what outreach email to generate.
- Ensure the emails are pulled from the right account. If emails may contain sensitive data, adjust the filters and text parsing to ensure these are not leaked to the AI (which might then leak into the generated email).

Requirements
- Hubspot for Contacts List
- OpenAI for LLM
- Gmail for Existing Emails and Sending Emails

Customising this workflow
- Not using Hubspot? Any CRM would work just as well or even a simple text csv!
- If you have past customer deals or engagements in your CRM, consider using these as additional context for the AI to use.
by Greg Evseev
This workflow template provides a robust solution for efficiently sending multiple prompts to Anthropic's Claude models in a single batch request and retrieving the results. It leverages the Anthropic Batch API endpoint (/v1/messages/batches) for optimized processing and outputs each result as a separate item.

Core Functionality & Example Usage Included
This template includes:
- The Core Batch Processing Workflow: Designed to be called by another n8n workflow.
- An Example Usage Workflow: A separate branch demonstrating how to prepare data and trigger the core workflow, including examples using simple strings and n8n's Langchain Chat Memory nodes.

Who is this for?
This template is designed for:
- **Developers, data scientists, and researchers** who need to process large volumes of text prompts using Claude models via n8n.
- **Content creators** looking to generate multiple pieces of content (e.g., summaries, Q&As, creative text) based on different inputs simultaneously.
- **n8n users** who want to automate interactions with the Anthropic API beyond single requests, improve efficiency, and integrate batch processing into larger automation sequences.
- Anyone needing to perform bulk text generation or analysis tasks with Claude programmatically.

What problem does this workflow solve?
Sending prompts to language models one by one can be slow and inefficient, especially when dealing with hundreds or thousands of requests. This workflow addresses that by:
- **Batching:** Grouping multiple prompts into a single API call to Anthropic's dedicated batch endpoint (/v1/messages/batches).
- **Efficiency:** Significantly reducing the time required compared to sequential processing.
- **Scalability:** Handling large numbers of prompts (up to API limits) systematically.
- **Automation:** Providing a ready-to-use, callable n8n structure for batch interactions with Claude.
- **Structured Output:** Parsing the results and outputting each individual prompt's result as a separate n8n item.

Use Cases:
- Bulk content generation (e.g., product descriptions, summaries).
- Large-scale question answering based on different contexts.
- Sentiment analysis or data extraction across multiple text snippets.
- Running the same prompt against many different inputs for research or testing.

What the Core Workflow does (Triggered by the 'When Executed by Another Workflow' node)
- Receive Input: The workflow starts when called by another workflow (e.g., using the 'Execute Workflow' node). It expects input data containing:
  - anthropic-version (string, e.g., "2023-06-01")
  - requests (JSON array, where each object represents a single prompt request conforming to the Anthropic Batch API schema).
- Submit Batch Job: Sends the formatted requests data via POST to the Anthropic API /v1/messages/batches endpoint to create a new batch job. Requires Anthropic credentials.
- Wait & Poll: Enters a loop:
  - Checks if the processing_status of the batch job is ended.
  - If not ended, it waits for a set interval (10 seconds by default in the 'Batch Status Poll Interval' node).
  - It then checks the batch job status again via GET to /v1/messages/batches/{batch_id}. Requires Anthropic credentials.
  - This loop continues until the status is ended.
- Retrieve Results: Once the batch job is complete, it fetches the results file by making a GET request to the results_url provided in the batch status response. Requires Anthropic credentials.
- Parse Results: The results are typically returned in JSON Lines (.jsonl) format.
The 'Parse response' Code node splits the response text by newlines and parses each line into a separate JSON object, storing them in an array field (e.g., parsed).
- Split Output: The 'Split Out Parsed Results' node takes the array of parsed results and outputs each result object as an individual item from the workflow.

Prerequisites
- An active n8n instance (Cloud or self-hosted).
- An Anthropic API account with access granted to Claude models and the Batch API.
- Your Anthropic API Key.
- Basic understanding of n8n concepts (nodes, workflows, credentials, expressions, 'Execute Workflow' node).
- Familiarity with JSON data structures for providing input prompts and understanding the output.
- Understanding of the Anthropic Batch API request/response structure.
- (For Example Usage Branch) Familiarity with n8n's Langchain nodes (@n8n/n8n-nodes-langchain) if you plan to adapt that part.

Setup
1. Import Template: Add this template to your n8n instance.
2. Configure Credentials:
   - Navigate to the 'Credentials' section in your n8n instance.
   - Click 'Add Credential'.
   - Search for 'Anthropic' and select the Anthropic API credential type.
   - Enter your Anthropic API Key and save the credential (e.g., name it "Anthropic account").
3. Assign Credentials: Open the workflow and locate the three HTTP Request nodes in the core workflow:
   - Submit batch
   - Check batch status
   - Get results
   In each of these nodes, select the Anthropic credential you just configured from the 'Credential for Anthropic API' dropdown.
4. Review Input Format: Understand the required input structure for the When Executed by Another Workflow trigger node. The primary inputs are anthropic-version (string) and requests (array). Refer to the Sticky Notes in the template and the Anthropic Batch API documentation for the exact schema required within the requests array.
5. Activate Workflow: Save and activate the core workflow so it can be called by other workflows.

➡️ Quick Start & Input/Output Examples: Look for the Sticky Notes within the workflow canvas! They provide crucial information, including examples of the required input JSON structure and the expected output format.

How to customize this workflow
- **Input Source:** The core workflow is designed to be called. You will build another workflow that prepares the anthropic-version and requests array and then uses the 'Execute Workflow' node to trigger this template. The included example branch shows how to prepare this data.
- **Model Selection & Parameters:** Model (claude-3-opus-20240229, etc.), max_tokens, temperature, and other parameters are defined within each object inside the requests array you pass to the workflow trigger. You configure these in the workflow calling this template.
- **Polling Interval:** Modify the 'Wait' node ('Batch Status Poll Interval') duration if you need faster or slower status checks (default is 10 seconds). Be mindful of potential rate limits.
- **Parsing Logic:** If Anthropic changes the result format or you have specific needs, modify the JavaScript code within the 'Parse response' Code node (a sketch of this logic follows below).
- **Error Handling:** Enhance the workflow with more specific error handling for API failures (e.g., using 'Error Trigger' or checking HTTP status codes) or batch processing issues (batch.status === 'failed').
- **Output Processing:** In the workflow that calls this template, add nodes after the 'Execute Workflow' node to process the individual result items returned (e.g., save to a database, spreadsheet, send notifications).
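The job of the 'Parse response' Code node can be sketched as follows, assuming the previous node returns the raw .jsonl results text in a field named data (adapt the field name to your setup):

```javascript
// Sketch of the 'Parse response' logic: split the JSON Lines results file
// into individual JSON objects, one per original batch request.
const raw = $json.data || "";

const parsed = raw
  .split("\n")
  .filter((line) => line.trim().length > 0) // skip blank trailing lines
  .map((line) => JSON.parse(line));

// Downstream, the "Split Out Parsed Results" node turns this array into one item per result.
return [{ json: { parsed } }];
```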
Example Usage Branch (Manual Trigger)
This template also contains a separate branch starting with the Run example Manual Trigger node.
- **Purpose:** This branch demonstrates how to construct the necessary anthropic-version and requests array payload (an illustrative payload appears below).
- **Methods Shown:** It includes steps for:
  - Creating a request object from a simple query string.
  - Creating a request object using data from n8n's Langchain Chat Memory nodes (@n8n/n8n-nodes-langchain).
- **Execution:** It merges these examples, constructs the final payload, and then uses the Execute Workflow node to call the main batch processing logic described above. It finishes by filtering the results for demonstration.
- **Note:** This branch is for demonstration and testing. You would typically build your own data preparation logic in a separate workflow. The use of Langchain nodes is optional for the core batch functionality.

Notes
- **API Limits:** According to the Anthropic API documentation, batches can contain up to 100,000 requests and be up to 256 MB in total size. Ensure your n8n instance has sufficient resources for large batches.
- **API Costs:** Using the Anthropic API, including the Batch API, incurs costs based on token usage. Monitor your usage via the Anthropic dashboard.
- **Completion Time:** Batch processing time depends on the number and complexity of prompts and current API load. The polling mechanism accounts for this variability.
- **Versioning:** Always include the anthropic-version header in your requests, as shown in the workflow and examples. Refer to the Anthropic API versioning documentation.
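To illustrate the input the core workflow expects, here is a minimal payload sketch following the Anthropic Message Batches schema. The custom_id values and prompts are placeholders; check the Sticky Notes in the template and the current Anthropic documentation for the exact schema:

```javascript
// Example input passed to the core batch workflow via the 'Execute Workflow' node.
const payload = {
  "anthropic-version": "2023-06-01",
  requests: [
    {
      custom_id: "request-1", // your own identifier, echoed back in the results
      params: {
        model: "claude-3-opus-20240229",
        max_tokens: 1024,
        messages: [{ role: "user", content: "Summarise the attached meeting notes." }],
      },
    },
    {
      custom_id: "request-2",
      params: {
        model: "claude-3-opus-20240229",
        max_tokens: 1024,
        messages: [{ role: "user", content: "Draft a product description for a smart kettle." }],
      },
    },
  ],
};
```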
by Jon Doran
Summary
Engage multiple, uniquely configured AI agents (using different models via OpenRouter) in a single conversation. Trigger specific agents with @mentions or let them all respond. Easily scalable by editing simple JSON settings.

Overview
This workflow is for users who want to experiment with or utilize multiple AI agents with distinct personalities, instructions, and underlying models within a single chat interface, without complex setup. It solves the problem of managing and interacting with diverse AI assistants simultaneously for tasks like brainstorming, comparative analysis, or role-playing scenarios.

It enables dynamic conversations with multiple AI assistants simultaneously within a single chat interface. You can:
- Define multiple unique AI agents.
- Configure each agent with its own name, system instructions, and LLM model (via OpenRouter).
- Interact with specific agents using @AgentName mentions.
- Have all agents respond (in random order) if no specific agents are mentioned.
- Maintain conversation history across multiple turns.
It's designed for flexibility and scalability, allowing you to easily add or modify agents without complex workflow restructuring.

Key Features
- **Multi-Agent Interaction:** Chat with several distinct AI personalities at once.
- **Individual Agent Configuration:** Customize name, system prompt, and LLM for each agent.
- **OpenRouter Integration:** Access a wide variety of LLMs compatible with OpenRouter.
- **Mention-Based Triggering:** Direct messages to specific agents using @AgentName.
- **All-Agent Fallback:** Engages all defined agents randomly if no mentions are used.
- **Scalable Setup:** Agent configuration is centralized in a single Code node (as JSON).
- **Conversation Memory:** Remembers previous interactions within the session.

How to Set Up
1. Configure Settings (Code Nodes):
   - Open the Define Global Settings Code node: Edit the JSON to set user details (name, location, notes) and add any system message instructions that all agents should follow.
   - Open the Define Agent Settings Code node: Edit the JSON to define your agents. Add or remove agent objects as needed. For each agent, specify:
     - "name": The unique name for the agent (used for @mentions).
     - "model": The OpenRouter model identifier (e.g., "openai/gpt-4o", "anthropic/claude-3.7-sonnet").
     - "systemMessage": Specific instructions or persona for this agent.
2. Add OpenRouter Credentials:
   - Locate the AI Agent node.
   - Click the OpenRouter Chat Model node connected below it via the Language Model input.
   - In the 'Credential for OpenRouter API' field, select or create your OpenRouter API credentials.

How to Use
- Start a conversation using the Chat Trigger input.
- To address specific agents, include @AgentName in your message. Agents will respond sequentially in the order they are mentioned. Example: "@Gemma @Claude, please continue the count: 1" will trigger Gemma first, followed by Claude.
- If your message contains no @mentions, all agents defined in Define Agent Settings will respond in a randomized order. Example: "What are your thoughts on the future of AI?" will trigger Chad, Claude, and Gemma (based on your default settings) in a random sequence.
- The workflow will collect responses from all triggered agents and return them as a single, formatted message.

How It Works (Technical Details)
- Settings Nodes: Define Global Settings and Define Agent Settings load your configurations.
- Mention Extraction: The Extract mentions Code node parses the user's input (chatInput) from the When chat message received trigger.
It looks for @AgentName patterns matching the names defined in Define Agent Settings (a sketch of this logic appears at the end of this section).
- Agent Selection:
  - If mentions are found, it creates a list of the corresponding agent configurations in the order they were mentioned.
  - If no mentions are found, it creates a list of all defined agent configurations and shuffles them randomly.
- Looping: The Loop Over Items node iterates through the selected agent list.
- Dynamic Agent Execution: Inside the loop:
  - An If node (First loop?) checks if it's the first agent responding.
  - If yes (true path -> Set user message as input), it passes the original user message to the Agent.
  - If no (false path -> Set last Assistant message as input), it passes the previous agent's formatted output (lastAssistantMessage) to the next agent, creating a sequential chain.
  - The AI Agent node receives the input message. Its System Message and the Model in the connected OpenRouter Chat Model node are dynamically populated using expressions referencing the current agent's data from the loop ({{ $('Loop Over Items').item.json.* }}).
  - The Simple Memory node provides conversation history to the AI Agent.
  - The agent's response is formatted (e.g., AgentName:\n\nResponse) in the Set lastAssistantMessage node.
- Response Aggregation: After the loop finishes, the Combine and format responses Code node gathers all the lastAssistantMessage outputs and joins them into a single text block, separated by horizontal rules (---), ready to be sent back to the user.

Benefits
- **Scalability & Flexibility:** Instead of complex branching logic, adding, removing, or modifying agents only requires editing simple JSON in the Define Agent Settings node, making setup and maintenance significantly easier, especially for those managing multiple assistants.
- **Model Choice:** Use the best model for each agent's specific task or persona via OpenRouter.
- **Centralized Configuration:** Keeps agent setup tidy and manageable.

Limitations
- **Sequential Responses:** Agents respond one after another based on mention order (or randomly), not in parallel.
- **No Direct Agent-to-Agent Interaction (within a turn):** Agents cannot directly call or reply to each other during the processing of a single user message. Agent B sees Agent A's response only because the workflow passes it as input in the next loop iteration.
- **Delayed Output:** The user receives the combined response only after all triggered agents have completed their generation.
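The mention-handling logic can be pictured roughly like this. It is a simplified sketch, not the template's actual code: the agent list is illustrative (the Gemma model ID is an assumption), and the real node also handles the ordering and shuffling details described above:

```javascript
// Sketch: pick out @AgentName mentions from the incoming chat message and
// map them to the agent configurations from "Define Agent Settings".
const agents = [
  { name: "Chad", model: "openai/gpt-4o", systemMessage: "..." },
  { name: "Claude", model: "anthropic/claude-3.7-sonnet", systemMessage: "..." },
  { name: "Gemma", model: "google/gemma-2-27b-it", systemMessage: "..." }, // model ID is an assumption
];

const chatInput = $json.chatInput || "";

// Collect mentioned agents in the order they appear in the message.
const mentioned = agents
  .map((agent) => ({ agent, index: chatInput.indexOf(`@${agent.name}`) }))
  .filter((entry) => entry.index !== -1)
  .sort((a, b) => a.index - b.index)
  .map((entry) => entry.agent);

// Fall back to all agents, shuffled, when nobody is mentioned.
const selected = mentioned.length
  ? mentioned
  : [...agents].sort(() => Math.random() - 0.5);

return selected.map((agent) => ({ json: agent }));
```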
by Davide
This workflow optimizes the management of inquiries received through a contact form (Contact Form 7 - CF7 plugin) on a WordPress site, automating the process of classification, response drafting, and data storage. It is particularly useful for businesses that receive multiple daily inquiries and want to improve their efficiency in managing customer communications.

Benefits:
✅ Automation & Speed – Reduces the time needed to handle inquiries manually.
✅ Better Email Management – Ensures every message receives a timely and accurate response.
✅ Customization – The generated draft can be edited before sending, maintaining a personal touch.
✅ Inquiry History – Storing data in Google Sheets allows for easy tracking of customer interactions.
✅ Easy Integration – Works seamlessly with Contact Form 7 without complex configurations.

How It Works
- Form Submission Handling: The workflow starts with a WordPress form submission captured via a webhook. The form data (first name, last name, email, phone, and message) is extracted and structured using the "Set Fields" node (a rough sketch of this mapping follows below).
- Message Classification: The submitted message is classified into predefined categories (e.g., "Product Info," "Order Info," or "Other") using the "Message Classifier" node, powered by Google Gemini.
- Automated Email Drafting: Based on the classification, the workflow generates a professional email draft using one of three "Email Writer" nodes (for Product, Order, or Other requests). Each node uses Google Gemini to craft a personalized response with a structured format (subject and body).
- Email Draft Creation: The drafted email is sent as a Gmail draft to the appropriate department, including the original form data for context.
- Data Logging: All submissions, along with their classifications and email drafts, are logged in a Google Sheets spreadsheet for record-keeping and further action.

Set Up Steps
- Install WordPress Plugin: Install the "CF7 to Webhook" plugin on WordPress and configure it to send form submissions to the n8n webhook URL.
- Configure Webhook in n8n: Set up the "From Wordpress" webhook node in n8n to receive POST requests from the WordPress form.
- Google Gemini Integration: Ensure the Google Gemini nodes are properly authenticated with the correct API credentials.
- Gmail and Google Sheets Setup: Authenticate the Gmail and Google Sheets nodes with the appropriate OAuth2 credentials and specify the target spreadsheet and sheet name.
- Customize Classification Categories: Adjust the categories in the "Message Classifier" node to match your business needs.
- Test the Workflow: Trigger a test form submission to verify the workflow processes data correctly, classifies the message, generates an email draft, and logs the data.

Need help customizing? Contact me for consulting and support or add me on Linkedin.
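As a rough idea of what the "Set Fields" step does, the incoming webhook body can be reshaped along these lines. The incoming key names depend on how your Contact Form 7 fields are tagged, so treat them as placeholders:

```javascript
// Sketch: map the CF7 webhook payload onto the fields used later in the workflow.
// The keys on the right are placeholders - match them to your CF7 form tags.
const body = $json.body || $json;

return {
  json: {
    firstName: body["first-name"] || "",
    lastName: body["last-name"] || "",
    email: body["your-email"] || "",
    phone: body["your-phone"] || "",
    message: body["your-message"] || "",
  },
};
```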