by Marketing Canopy
# Automate Pinterest Analysis & AI-Powered Content Suggestions With Pinterest API

This workflow automates the collection, analysis, and summarization of Pinterest Pin data to help marketers optimize content strategy. It gathers Pinterest Pin performance data, analyzes trends using an AI agent, and delivers actionable insights to the Marketing Manager via email. This setup is ideal for content creators and marketing teams who need weekly insights on Pinterest trends to refine their content calendar and audience engagement strategy.

## Prerequisites
Before setting up this workflow, ensure you have the following:
1. **Pinterest API Access & Developer Account:** Sign up at Pinterest Developers and obtain API credentials. Ensure you have access to both Organic and Paid Pin data.
2. **Airtable Account & API Key:** Create an account at Airtable and set up a database. Obtain an API key from Account Settings.
3. **AI Agent for Trend Analysis:** An AI-powered agent (such as OpenAI's GPT or a custom ML model) is required to analyze Pinterest trends. Ensure integration with your workflow automation tool (e.g., Zapier, Make, or a custom Python script).
4. **Email Automation Setup:** Configure an SMTP email service (e.g., Gmail, Outlook, SendGrid) to send the summarized results to the Marketing Manager.

## Step-by-Step Guide to Automating Pinterest Pin Analysis

### 1. Scheduled Trigger for Data Collection
At 8:00 AM (or your preferred time), an automated trigger starts the workflow. Adjust the timing based on your marketing schedule to optimize trend tracking.

### 2. Fetch Data from Pinterest API
Retrieve recent Pinterest Pin performance data, including impressions, clicks, saves, and engagement rate. Ensure both Organic and Paid Ads data are labeled correctly for clarity. (A minimal fetch sketch appears at the end of this section.)

### 3. Store Data in Airtable
Pins are logged and categorized in an Airtable database for further analysis.

Sample Airtable Template for Pinterest Pins:

| Column Name | Description |
|---------------|---------------------------------------|
| pin_id | Unique identifier for each Pin |
| created_at | Timestamp of when the Pin was created |
| title | Title of the Pin |
| description | Short description of the Pin |
| link | URL linking to the Pin |
| type | Type of Pin (e.g., organic, ad) |

### 4. AI Agent Analyzes Pinterest Trends
The AI model reviews the latest Pinterest data and identifies:
- **Trending Topics & Keywords**
- **Engagement Patterns**
- **Audience Interests & Behavior Changes**
- **Optimal Posting Times & Formats**

### 5. Generate Content Suggestions with AI
The AI agent recommends new Pin ideas and content calendar updates to maximize engagement. Suggestions include creative formats, hashtags, and timing adjustments for better performance.

### 6. Summary & Insights Generated by AI
A concise report is created, summarizing Pinterest trends and actionable insights for content strategy.

### 7. Email Report Sent to the Marketing Manager
The summary is emailed to the Marketing Manager to assist with content planning and execution. The report includes:
- Performance Overview of Recent Pins
- Trending Content Ideas
- Best Performing Pin Formats
- AI-Generated Recommendations

This workflow enables marketing teams to automate Pinterest analysis and optimize their content strategy through AI-driven insights. 🚀
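As a reference for step 2, here is a minimal Python sketch of the fetch step, assuming Pinterest API v5 and the `requests` library; the access token, page size, and the organic/ad labeling rule are placeholders to adapt to your setup:

```python
import requests

TOKEN = "YOUR_PINTEREST_ACCESS_TOKEN"  # placeholder: OAuth token from your Pinterest developer app

def fetch_recent_pins(page_size: int = 25) -> list[dict]:
    """List the authenticated account's recent Pins via Pinterest API v5."""
    resp = requests.get(
        "https://api.pinterest.com/v5/pins",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"page_size": page_size},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("items", [])

for pin in fetch_recent_pins():
    # Map API fields onto the Airtable columns from the sample table above.
    record = {
        "pin_id": pin.get("id"),
        "created_at": pin.get("created_at"),
        "title": pin.get("title"),
        "description": pin.get("description"),
        "link": pin.get("link"),
        "type": "organic",  # assumption: label "organic" vs. "ad" based on which endpoint or campaign supplied the Pin
    }
    print(record)
```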
by Teddy
# Webhook | Paper Summarization

## Who is this for?
This workflow is designed for researchers, students, and professionals who frequently read academic papers and need concise summaries. It is useful for anyone who wants to quickly extract key information from research papers hosted on arXiv.

## What problem is this workflow solving?
Academic papers are often lengthy and complex, making it time-consuming to extract essential insights. This workflow automates the process of retrieving, processing, and summarizing research papers, allowing users to focus on key findings without manually reading the entire paper.

## What this workflow does
This workflow extracts the content of an arXiv research paper, processes its abstract and main sections, and generates a structured summary. It provides a well-organized output containing the Abstract Overview, Introduction, Results, and Conclusion, ensuring that users receive critical information in a concise format.

## Setup
1. Ensure you have n8n installed and configured.
2. Import this workflow into your n8n instance.
3. Configure an external trigger using the Webhook node to accept paper IDs.
4. Test the workflow by providing an arXiv paper ID.
5. (Optional) Modify the summarization model or output format according to your preferences.

## How to customize this workflow to your needs
- Adjust the HTTP Request node to fetch papers from sources beyond arXiv.
- Modify the Summarization Chain node to refine the summary output.
- Enhance the Reorganize Paper Summary step by integrating additional language models.
- Add an email or Slack notification step to receive summaries directly.

## Workflow Steps
1. The Webhook receives a request with an arXiv paper ID.
2. Send an HTTP request using "Request to Paper Page" to fetch the HTML content of the paper (a sketch of this fetch-and-extract step follows below).
3. Extract the abstract and sections using "Extract Contents".
4. Split out all sections using "Split out All Sections" to process individual paragraphs.
5. Clean up text using "Remove useless links" to strip unnecessary elements.
6. Summarize extracted content using "Summarization Chain".
7. Aggregate summarized content using "Aggregate summarized content".
8. Reorganize the paper summary into structured sections using "Reorganize Paper Summary".
9. Extract key information using "Content Extractor" to classify data into Abstract Overview, Introduction, Results, and Conclusion.
10. Respond to the webhook with the structured summary.

Note: This workflow is designed for use with arXiv research papers but can be adapted to process papers from other sources.
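For reference, a minimal Python sketch of the fetch-and-extract step; the `blockquote.abstract` selector reflects arXiv's current abstract-page markup and may need adjusting if that HTML changes:

```python
import requests
from bs4 import BeautifulSoup

def fetch_abstract(paper_id: str) -> str:
    """Fetch an arXiv abstract page and pull out the abstract text."""
    resp = requests.get(f"https://arxiv.org/abs/{paper_id}", timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    block = soup.select_one("blockquote.abstract")  # assumed selector; verify against the live page
    return block.get_text(strip=True) if block else ""

print(fetch_abstract("1706.03762"))  # e.g., the "Attention Is All You Need" paper
```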
by Juan Carlos Cavero Gracia
## Description
This automation template is designed for content creators, digital marketers, and social media managers looking to simplify their video posting workflow. It automates the process of generating engaging video descriptions and uploading content to both Instagram and TikTok, making your social media management more efficient and error-free.

## Who Is This For?
- **Content Creators & Influencers:** Streamline your video uploads and focus more on creating content.
- **Digital Marketers:** Ensure consistent posting across multiple platforms with minimal manual intervention.
- **Social Media Managers:** Automate repetitive tasks and maintain a steady online presence.

## What Problem Does This Workflow Solve?
Manually creating descriptions and uploading videos to different platforms can be time-consuming and error-prone. This workflow addresses these challenges by:
- **Automating Video Uploads:** Monitors a designated Google Drive folder for new videos.
- **Generating Descriptions:** Uses OpenAI to transcribe video audio and generate engaging, customized social media descriptions.
- **Ensuring Multi-Platform Consistency:** Simultaneously posts your video with the generated description to Instagram and TikTok.
- **Error Notifications:** Optional Telegram integration sends alerts in case of issues, ensuring smooth operations.

## How It Works
1. **Video Upload:** Place your video in the designated Google Drive folder.
2. **Description Generation:** The automation triggers OpenAI to transcribe your video's audio and generate a captivating description (see the sketch below for the underlying API calls).
3. **Content Distribution:** Automatically uploads the video and description to both Instagram and TikTok.
4. **Error Handling:** Sends Telegram notifications if any issues arise during the process.

## Setup
1. Generate an API token at upload-post.com and configure it in both the Upload to TikTok and Upload to Instagram nodes.
2. **Google Cloud Project:** Create a project in Google Cloud Platform, enable the Google Drive API, and generate the necessary OAuth credentials to connect to your Google Drive account.
3. Set up your Google Drive folder in the Google Drive Trigger node.
4. Customize the OpenAI prompt in the Generate Social Description node to match your brand's tone.
5. (Optional) Configure Telegram credentials for error notifications.

## Requirements
- **Accounts:** upload-post.com, Google Drive, and (optionally) Telegram.
- **API Keys & Credentials:** Upload-post.com API token, OpenAI API key, and (optional) Telegram bot token.
- **Google Cloud:** A project with the Google Drive API enabled and valid OAuth credentials.

Use this template to enhance your productivity, maintain consistency across your social media channels, and engage your audience with high-quality video content.
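A minimal sketch of the transcribe-then-describe step using the OpenAI Python SDK, assuming the audio track has already been extracted from the video (e.g., with ffmpeg); the model choice and system prompt are placeholders for your brand's tone:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Transcribe the video's audio track.
with open("video_audio.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Turn the transcript into a social media description.
completion = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: any chat model that fits your budget
    messages=[
        {"role": "system", "content": "Write a short, engaging Instagram/TikTok caption with hashtags."},
        {"role": "user", "content": transcript.text},
    ],
)
print(completion.choices[0].message.content)
```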
by Amjid Ali
This n8n workflow automates YouTube video metadata generation using AI. It extracts video transcripts, analyzes content, and produces optimized titles, descriptions, tags, hashtags, and call-to-action elements. Additionally, the workflow integrates affiliate and promotional links to enhance overall video performance.

## Key Features
- **Automated Metadata Generation:** Utilizes an AI agent integrated with OpenAI GPT-4 to generate engaging metadata based on the provided video transcript.
- **SEO and Engagement Optimization:** Creates keyword-rich, well-structured content that boosts search engine visibility and audience engagement.
- **Affiliate and Promotional Integration:** Retrieves pre-set promotional and affiliate links using a Google Docs integration.
- **Direct YouTube Update:** Automatically updates video details on YouTube via the YouTube API.
- **Customization:** Allows you to modify the AI prompt to tailor metadata for your specific niche.

## Workflow Breakdown
1. **User Submission:** Users supply the YouTube video link, transcript, and optionally, focus keywords.
2. **Video ID Extraction:** The workflow converts the YouTube URL into a video ID to streamline automation (a sketch of this step follows below).
3. **Link Retrieval:** Affiliate and course links are fetched from a designated Google Docs file.
4. **AI-Powered Metadata Generation:** The AI agent generates the video title, description, tags, hashtags, and call-to-action elements.
5. **Metadata Formatting and Update:** The generated metadata is structured and directly updated on YouTube.
6. **Confirmation:** A success message is displayed upon completion of the update process.

## Setup and Configuration
**Deploying the Workflow:** Deploy the workflow in n8n and ensure all integrations are properly set up.

**Configuring Integrations**
- **Google Docs:** Configure credentials to retrieve affiliate and promotional links.
- **OpenAI (GPT-4):** Set up credentials for AI-powered metadata generation.
- **YouTube API:** Enter your API credentials to enable automatic video updates.

**User Input Requirements**
- Provide a valid YouTube video link and its corresponding transcript.
- Optionally, include focus keywords to further enhance metadata accuracy.

## Ideal For
- **YouTube Content Creators:** Automate video descriptions and boost SEO.
- **Digital Marketers:** Enhance content for improved search rankings and audience engagement.
- **Affiliate Marketers:** Simplify the insertion of promotional and affiliate links.
- **AI & Automation Enthusiasts:** Explore the integration of AI into automated workflows.

## Additional Resources
For further guidance, refer to the tutorial video on this workflow. More courses and resources are available on the SyncBricks website. For support or inquiries, contact Amjid Ali at info@syncbricks.com. You can also support this work via PayPal donations and subscribe for additional AI and automation workflows.

- **Watch the Tutorial:** YouTube Video on This Workflow
- **More Courses & Resources:** SyncBricks LMS Full Course on ERPNext & AI Automation
- **Connect:** Email: info@syncbricks.com | Website: SyncBricks | YouTube: SyncBricks Channel | LinkedIn: Amjid Ali
- **Support & Subscribe:** Donate via PayPal | Subscribe for More AI & Automation Workflows
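Step 2's URL-to-ID conversion is simple enough to sketch standalone; this hedged Python version handles the common watch, youtu.be, Shorts, and embed URL shapes (the workflow's own expression may differ):

```python
from urllib.parse import urlparse, parse_qs

def extract_video_id(url: str) -> str | None:
    """Extract the 11-character video ID from common YouTube URL forms."""
    parsed = urlparse(url)
    if parsed.hostname == "youtu.be":
        return parsed.path.lstrip("/")
    if parsed.hostname and "youtube.com" in parsed.hostname:
        if parsed.path == "/watch":
            return parse_qs(parsed.query).get("v", [None])[0]
        if parsed.path.startswith(("/shorts/", "/embed/")):
            return parsed.path.split("/")[2]
    return None  # not a recognized YouTube URL shape

print(extract_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # dQw4w9WgXcQ
```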
by Dustin
Are you a cord-cutter? Do you find yourself scrolling through the many titles of videos uploaded to YouTube, just to find the ones you want to watch? Even when you subscribe to the channels you like, do you find that you want to watch the news now and my tech/n8n videos later? Now you can have n8n grab the last 8 videos posted in the last 24 hours and put them in a playlist for the day; each day, the previous day's playlist is deleted. Tired of a channel filling your subscriptions with tons of videos a day? This workflow can be used for any channel, whether you are subscribed to it or not. It's a YouTube playlist automation.

## How it works
Create your list of preferred YouTube channels in a Google Sheet, and the workflow will create a daily playlist and delete the playlist created yesterday.

## Instructions
1. Create a Google Sheet with the following headings in row 1:
   - Channel User Name
   - Channel Name
   - Channel Link
   - Channel ID
2. Copy 'Create your Channel List' into its own workflow and link the Sheets nodes to your new sheet.
3. To get 'Create your Channel List' to work, visit the page of each channel you want included in your playlist and get the channel's "@" name, then add it to the 'Channel User Name' column of your Google Sheet. For example, to include the channel Recruit Training Videos - Corporal Stock, you would search for the name and add @CorporalStock to the next available row of the 'Channel User Name' column.
4. Once you have added all Channel User Names, run the 'Create your Channel List' workflow, and it will fill in the remaining details.
5. Now the 'YT Playlist Creator' can be run.

Note: The first time the workflow is run, disconnect the 'Delete Yesterday's Playlist' leg, or the workflow will error and stop (because there is no yesterday's playlist yet).

Note: This was made to create a playlist every day, delete yesterday's playlist, and only fetch the last 8 videos posted within the last 24 hours (the underlying API calls are sketched below). I chose to put the date (YYMMDD format) in front of the playlist name to ensure it doesn't conflict with another playlist. I also have it notify me in Telegram, so I know the new playlist is posted.
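For the curious, the underlying YouTube Data API calls look roughly like this Python sketch; the channel ID, playlist title, and OAuth token file are placeholders, and note that `search.list` carries a high quota cost:

```python
from datetime import datetime, timedelta, timezone

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# OAuth token file produced by Google's auth flow (placeholder name).
creds = Credentials.from_authorized_user_file(
    "token.json", scopes=["https://www.googleapis.com/auth/youtube"]
)
yt = build("youtube", "v3", credentials=creds)

cutoff = (datetime.now(timezone.utc) - timedelta(hours=24)).isoformat()

# Last 8 videos posted by one channel in the past 24 hours.
search = yt.search().list(
    part="id",
    channelId="UCxxxxxxxxxxxxxxxxxxxxxx",  # a Channel ID from your Google Sheet
    publishedAfter=cutoff,
    order="date",
    type="video",
    maxResults=8,
).execute()

# Date-prefixed playlist name, e.g. "250101 Daily Playlist", to avoid name clashes.
title = datetime.now().strftime("%y%m%d") + " Daily Playlist"
playlist = yt.playlists().insert(
    part="snippet,status",
    body={"snippet": {"title": title}, "status": {"privacyStatus": "private"}},
).execute()

for item in search["items"]:
    yt.playlistItems().insert(
        part="snippet",
        body={"snippet": {
            "playlistId": playlist["id"],
            "resourceId": {"kind": "youtube#video", "videoId": item["id"]["videoId"]},
        }},
    ).execute()
```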
by explorium
# Explorium Event-Triggered Outreach
This n8n and agent-based workflow automates outbound prospecting by monitoring Explorium event data (e.g., product launches, new office openings, new investments, and more), researching companies, identifying key contacts, and generating tailored sales emails leveraging the Explorium MCP server.

## Template Workflow Overview

### Node 1: Webhook Trigger
**Purpose:** Listens for real-time product launch events pushed from Explorium's webhook system.
**How it works:**
- Explorium sends HTTP POST requests containing event data.
- The webhook payload includes company name, business ID, domain, product name, and event type.

**Pay attention:** Product launch is just one example; you can easily enroll in many more meaningful events. To learn about events and how to enroll in them, visit the events documentation.

### Node 2: Company Research Agent
**Agent Type:** Tools Agent
**Purpose:** Enrich company data after an event occurs.
**How it works:**
- Uses Explorium MCP via the MCP Client tool to gather additional company data.
- Uses Anthropic Claude (Chat Model) to process and interpret company information for downstream personalization.

### Node 3: Employee Data Retrieval
**Purpose:** Retrieve prospect-level data for targeting.
**How it works:**
- Uses an HTTP Request node to call Explorium's fetch_prospects endpoint.
- Filters prospects by:
  - Company business_id
  - Departments: Product, R&D, etc.
  - Seniority levels: owner, cxo, vp, director, senior, manager, partner, etc.
- Limits results to the top 5 relevant employees.
- Code nodes handle filtering logic, cleaning the API response, and formatting data for downstream agents.

**Pay attention:** Follow our fetch-prospects documentation for the full list of filters and best practices. (An illustrative request payload appears at the end of this overview.)

### Node 4: Conditional Branch - Prospect Data Check
**If Node:** Checks whether prospect data was successfully retrieved.
**Logic:**
- If prospects are found → personalized emails per person.
- If no prospects are found → fall back to a company-level general email.

### Node 5A: Email Writer #1 (No Prospect Data)
**Agent Type:** Tools Agent
**Purpose:** Write a generic outbound email using only company-level research and event info.
**Powered by:** Anthropic Chat Model

### Node 5B: Loop Over Prospects → Email Writer #2 (Personalized)
**Agent Type:** Tools Agent
**Purpose:** Write a highly personalized email for each identified employee.
**How it works:**
- Loops through each individual prospect.
- Passes company research + employee data to the LLM agent.
- Generates customized emails referencing:
  - The prospect's title & department
  - The product launch
  - A role-relevant Explorium value proposition

### Node 6: Slack Notifications
**Purpose:** Posts completed emails to an internal Slack channel for review or testing before final deployment.
**Future State:** Can be swapped with an email sequencing platform in production.
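An illustrative fetch_prospects request mirroring Node 3's filters. The endpoint path, auth header, and filter field names below are assumptions made for illustration only; take the real schema from Explorium's fetch-prospects documentation linked above:

```python
import requests

API_KEY = "YOUR_EXPLORIUM_API_KEY"  # placeholder

# Assumed filter schema: business_id from the webhook event, plus department
# and seniority filters, capped at 5 prospects as in Node 3.
payload = {
    "filters": {
        "business_id": {"values": ["<business_id from the webhook event>"]},
        "job_department": {"values": ["Product", "R&D"]},
        "job_seniority_level": {"values": ["owner", "cxo", "vp", "director"]},
    },
    "size": 5,
}

resp = requests.post(
    "https://api.explorium.ai/v1/prospects",  # assumed path; verify against the docs
    headers={"api_key": API_KEY},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
prospects = resp.json()
```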
## Setup Requirements
**Explorium API Access**
- MCP Client credentials for company enrichment and prospect fetching
- A registered webhook for event listening
- Get an Explorium API key

**n8n Configuration**
- Secure environment variables for API keys & the webhook secret
- Code nodes configured for JSON transformation, filtering & signature validation

## Customization Options
**Personalization Logic**
- Update LLM prompt instructions to reflect ICP priorities
- Modify email templates based on role, department, or tenure logic
- Adjust fallback behavior when prospect data is unavailable

**API Request Tuning**
- Adjust page_size for the number of prospects retrieved
- Fine-tune seniority and department filters to match evolving targeting

**Future Expansion**
- Swap Slack notifications for outbound email automation
- Integrate call task assignment directly into your CRM
- Introduce an engagement scoring feedback loop (opens, clicks, replies)

## Troubleshooting Tips
- Validate webhook signature matching to prevent unauthorized requests
- Ensure the correct business_id is passed to the prospect fetching endpoint
- Confirm business enrichment returns sufficient data for the company research agent
- Review agent LLM responses for correct output structure and parsing consistency
by Joseph LePage
## Who is this for?
This workflow template is designed for AI enthusiasts, developers, and privacy-conscious users who want to leverage the power of local large language models (LLMs) without sending data to external services. It's particularly valuable for those running Ollama locally who want intelligent routing between different specialized models.

## What problem is this workflow solving?
When working with multiple local LLMs, each with different strengths and capabilities, it can be challenging to manually select the right model for each specific task. This workflow automatically analyzes user prompts and routes them to the most appropriate specialized Ollama model, ensuring optimal performance without requiring technical knowledge from the end user.

## What this workflow does
This intelligent router:
- Analyzes incoming user prompts to determine the nature of the request
- Automatically selects the optimal Ollama model from your local collection based on task requirements
- Routes requests between specialized models for different tasks:
  - Text-only models (qwq, llama3.2, phi4) for various reasoning and conversation tasks
  - Code-specific models (qwen2.5-coder) for programming assistance
  - Vision-capable models (granite3.2-vision, llama3.2-vision) for image analysis
- Maintains conversation memory for consistent interactions
- Processes everything locally for complete privacy and data security

## Setup
1. Ensure you have Ollama installed and running locally.
2. Pull the required models mentioned in the workflow using the Ollama CLI (e.g., ollama pull phi4).
3. Configure the Ollama API credentials in n8n (default: http://127.0.0.1:11434).
4. Activate the workflow and start interacting through the chat interface.

## How to customize this workflow to your needs
- Add or remove models from the router's decision framework based on your specific Ollama collection
- Adjust the system prompts in the LLM Router to prioritize different model selection criteria
- Modify the decision tree logic to better suit your specific use cases
- Add additional preprocessing steps for specialized inputs

This workflow demonstrates how n8n can be used to create sophisticated AI orchestration systems that respect user privacy by keeping everything local while still providing intelligent model selection capabilities.
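Outside n8n, the core routing idea can be sketched in a few lines of Python against Ollama's local REST API; the classification prompt and route table below are illustrative, not the workflow's exact router prompt:

```python
import requests

OLLAMA = "http://127.0.0.1:11434"  # Ollama's default local endpoint

# Route table mirroring the model families listed above; swap in what you have pulled.
ROUTES = {"code": "qwen2.5-coder", "vision": "llama3.2-vision", "chat": "llama3.2"}

def route(prompt: str) -> str:
    """Ask a small local model to classify the prompt, then return the target model name."""
    resp = requests.post(f"{OLLAMA}/api/generate", json={
        "model": "llama3.2",
        "prompt": f"Classify this request as exactly one word - code, vision, or chat:\n{prompt}",
        "stream": False,
    }, timeout=60)
    label = resp.json()["response"].strip().lower()
    return ROUTES.get(label, ROUTES["chat"])  # fall back to the general chat model

print(route("Write a Python function that reverses a linked list"))  # → qwen2.5-coder
```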
by Francis Njenga
# Workflow Documentation: Auto-Retry Engine – Error Recovery Workflow

## Detailed Description
The Auto-Retry Engine: Error Recovery Workflow is designed to automate the process of identifying and retrying failed executions in n8n workflows. By leveraging scheduled triggers, API integrations, and conditional logic, this workflow ensures that any failed executions are automatically retried on an hourly basis. This reduces manual intervention, improves system reliability, and ensures smoother workflow operations.

## Who is this for?
This workflow is ideal for:
- **Automation Engineers:** Managing and maintaining workflows with minimal manual intervention.
- **DevOps Teams:** Ensuring high availability and reliability of automated processes.
- **IT Administrators:** Reducing downtime and improving system performance by automating error recovery.

## What problem does this workflow solve?
- **Manual Error Handling:** Eliminates the need for manual monitoring and retrying of failed executions.
- **Improved Reliability:** Automatically retries failed executions, reducing downtime and improving workflow success rates.
- **Time Efficiency:** Saves time by automating repetitive error recovery tasks, allowing teams to focus on higher-priority work.

## What this workflow does
This workflow automates the following steps:
1. **Scheduled Monitoring:** Checks for failed executions hourly using a schedule trigger.
2. **Error Filtering:** Identifies executions that have failed and filters out those that have already been successfully retried.
3. **Authentication:** Logs into the n8n instance using API credentials to retrieve session details.
4. **Automatic Retry:** Retries the failed executions using the n8n API (a sketch of these calls follows below).
5. **Batch Processing:** Processes multiple failed executions in batches to avoid overloading the system.

## Setup
### Prerequisites
To use this workflow, you'll need:
- **n8n Account:** To create and run the workflow.
- **n8n API Credentials:** For logging into the n8n instance and retrying executions.
- **HTTP Request Node:** Configured to interact with the n8n API.
- **Schedule Trigger:** Set to run the workflow hourly.

### Setup Process
1. **Configure the Schedule Trigger:** Set the trigger to run hourly to check for failed executions.
2. **Set Login Credentials:** Add your n8n instance URL, username, and password in the Set node.
3. **Integrate the n8n API:** Use the HTTP Request node to log into the n8n instance and retrieve session details.
4. **Retry Failed Executions:** Configure the HTTP Request node to retry failed executions using the session details.
5. **Batch Processing:** Use the Split in Batches node to process multiple failed executions in batches.

## How to customize this workflow
Tailor the workflow to fit your specific needs:
- **Adjust Schedule Frequency:** Modify the schedule trigger to run at different intervals (e.g., every 30 minutes).
- **Add Notifications:** Integrate email or Slack notifications to alert teams about failed retries.
- **Refine Error Filtering:** Customize the filtering logic to exclude specific types of failed executions.
- **Scale Batch Size:** Adjust the batch size in the Split in Batches node to optimize performance.

## Conclusion
The Auto-Retry Engine: Error Recovery Workflow is a powerful tool for automating error recovery in n8n workflows. By reducing manual intervention and ensuring failed executions are retried automatically, this workflow enhances system reliability and operational efficiency. Whether you're managing a few workflows or a complex automation ecosystem, this workflow ensures your processes run smoothly and consistently.
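For reference, a standalone sketch of the list-and-retry loop. Listing failed executions via `/api/v1/executions` is part of n8n's public REST API; the `/rest/executions/{id}/retry` path belongs to n8n's internal API (which the session-based login in this workflow targets), so treat that path and its auth as assumptions to verify against your n8n version:

```python
import requests

N8N_URL = "https://your-n8n-instance.example.com"  # placeholder
headers = {"X-N8N-API-KEY": "YOUR_N8N_API_KEY"}

# List recent failed executions via the public REST API.
failed = requests.get(
    f"{N8N_URL}/api/v1/executions",
    headers=headers,
    params={"status": "error", "limit": 50},
    timeout=30,
).json().get("data", [])

for execution in failed:
    # Assumption: internal retry endpoint; your version may require the
    # session cookie obtained from the login step instead of an API key.
    requests.post(
        f"{N8N_URL}/rest/executions/{execution['id']}/retry",
        headers=headers,
        timeout=30,
    )
```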
by Mutasem
## Use Case
Following up at the right time is one of the most important parts of sales. This workflow uses Gmail to send outreach emails to HubSpot contacts who have been contacted exactly once, more than a month ago, and records the engagement in HubSpot.

## Setup
1. Set up HubSpot OAuth2 credentials. (Be careful with scopes: they have to be exact, not less or more. Yes, it's not simple, but it's well documented in the n8n docs. Be smarter than me, read the docs.)
2. Set up Gmail credentials.
3. Change the email variables in the Set keys node.

## How to adjust this template
There's plenty to do here because this approach is really just a starting point. Most important is to figure out what your own follow-up rules are. After a month? More than once? (A sketch of the "contacted once, over a month ago" filter follows below.) Also, remember to update the follow-up email! Unless you want to sell n8n 😉
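A sketch of that filter expressed as a direct HubSpot CRM search call, which is roughly what the workflow's HubSpot node does for you; the two property names are assumptions based on HubSpot's default contact properties, so verify them in your portal:

```python
from datetime import datetime, timedelta, timezone

import requests

# HubSpot datetime filters take epoch milliseconds.
cutoff_ms = int((datetime.now(timezone.utc) - timedelta(days=30)).timestamp() * 1000)

body = {
    "filterGroups": [{"filters": [
        # Assumed properties: "number of times contacted" and "last contacted".
        {"propertyName": "num_contacted_notes", "operator": "EQ", "value": "1"},
        {"propertyName": "notes_last_contacted", "operator": "LT", "value": str(cutoff_ms)},
    ]}],
    "limit": 100,
}

resp = requests.post(
    "https://api.hubapi.com/crm/v3/objects/contacts/search",
    headers={"Authorization": "Bearer YOUR_PRIVATE_APP_TOKEN"},  # placeholder
    json=body,
    timeout=30,
)
contacts = resp.json().get("results", [])
```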
by Audun
## Who is this for?
This workflow is tailored for content creators, artists, and developers who use Ko-fi to receive financial support through donations, subscriptions, or product sales.

## Use case
This workflow automates the process of receiving and categorizing payment notifications from Ko-fi, ensuring that creators can focus on their work rather than administrative tasks.

## What this workflow does
- **Webhook Reception:** The workflow listens for incoming payment notifications from Ko-fi via a configured webhook.
- **Token Verification:** It validates incoming requests to ensure they originate from Ko-fi, using a verification token for enhanced security (a sketch of this check follows below).
- **Type Differentiation:** It categorizes payments into types (donations, subscriptions, and shop orders), allowing tailored handling for each payment type.
- **Custom Response Options:** Depending on the payment type received, the workflow activates specific actions or processes, enabling seamless integration with other applications or services.

## Setup
1. **Webhook Configuration:** Access the Webhook node within the workflow and take note of your unique webhook URL. Visit your Ko-fi webhooks management page at Ko-fi Webhooks Management and input this URL.
2. **Verification Token Setup:** In your Ko-fi account, locate the verification token in the advanced settings. Input this token in the Prepare node of your n8n workflow.
3. **Enable the Workflow:** Activate the workflow in n8n to start listening for incoming webhook notifications.
4. **Testing:** Use the test feature in the Ko-fi webhooks settings to send a test webhook and ensure everything is functioning as expected.

## How to customize this workflow to your needs
- **Add Actions for Each Payment Type:** Modify the Donation, Subscription, and Shop Order nodes to include actions such as sending emails, logging payments in a database, or triggering notifications.
- **Enhance Security Measures:** Further refine the Check token node to include additional checks or to log all incoming webhook requests for monitoring.
- **Integration with Other Services:** Consider linking this workflow with messaging platforms (e.g., Slack, Discord) or CRM tools to keep your supporters informed or to manage relationships more effectively.
- **Custom Fields:** If needed, adjust the fields captured in the Subscription and Shop Order nodes to include more data or different parameters based on your specific use case.
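A minimal standalone sketch of the token check and type branching, using Flask; the payload shape (form-encoded POST with JSON in a `data` field, plus `verification_token` and `type` keys) follows Ko-fi's webhook documentation, but verify it against the current docs:

```python
import json

from flask import Flask, abort, request

app = Flask(__name__)
VERIFICATION_TOKEN = "your-kofi-verification-token"  # from Ko-fi's advanced settings

@app.post("/kofi-webhook")
def kofi_webhook():
    # Ko-fi posts form-encoded data with the JSON payload in a "data" field.
    payload = json.loads(request.form["data"])
    if payload.get("verification_token") != VERIFICATION_TOKEN:
        abort(403)  # reject requests that did not come from Ko-fi
    kind = payload.get("type")  # "Donation", "Subscription", or "Shop Order"
    # ... branch on `kind` here, mirroring the Donation / Subscription / Shop Order nodes
    return "", 200
```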
by Niklas Hatje
## Who this is for
This template is for everyone who wants to download their n8n Cloud invoices automatically as PDFs instead of downloading them manually.

## How it works
This workflow checks your Gmail inbox for new n8n invoice emails from n8n's payment provider, Paddle. Once it finds one, it converts the invoice URL into a PDF using pdflayer (sketched below) and saves it in Google Drive.

## Setup
1. Set up your Gmail and Google Drive credentials.
2. Create a free account at https://pdflayer.com/
3. Insert your pdflayer API key into the Setup node.
4. Insert the URL of the desired Drive folder into the Setup node (make sure to remove everything after the ?).

## How to adjust it to your needs
Instead of saving the PDF in Google Drive, you could also save it to your local system or any other storage provider, or send the PDF automatically to the right person in your company.
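The conversion the workflow performs via an HTTP Request node boils down to a single call to pdflayer's documented convert endpoint; the API key and invoice URL below are placeholders:

```python
import requests

PDFLAYER_KEY = "YOUR_PDFLAYER_API_KEY"  # placeholder
invoice_url = "https://example.com/invoice"  # the URL taken from the Paddle email

# pdflayer's convert endpoint returns the rendered PDF bytes directly.
resp = requests.get(
    "http://api.pdflayer.com/api/convert",
    params={"access_key": PDFLAYER_KEY, "document_url": invoice_url},
    timeout=60,
)
resp.raise_for_status()

with open("invoice.pdf", "wb") as f:
    f.write(resp.content)
```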
by Onur
## Description
This workflow empowers you to effortlessly get answers to your n8n platform questions through an AI-powered assistant. Simply send your query, and the assistant will search documentation, forum posts, and example workflows to provide comprehensive, accurate responses tailored to your specific needs.

> Note: This workflow uses community nodes (n8n-nodes-mcp.mcpClientTool) and will only work on self-hosted n8n instances. You'll need to install the required community nodes before importing this workflow.

## What does this workflow do?
This workflow streamlines the information retrieval process by automatically researching n8n platform documentation, community forums, and example workflows, providing you with relevant answers to your questions.

## Who is this for?
- **New n8n Users:** Quickly get answers to basic platform questions and learn how to use n8n effectively
- **Experienced Developers:** Find solutions to specific technical issues or discover advanced workflows
- **Teams:** Boost productivity by automating the research process for n8n platform questions
- **Anyone** looking to leverage AI for efficient and accurate n8n platform knowledge retrieval

## Benefits
- **Effortless Research:** Automate the research process across n8n documentation, forum posts, and example workflows
- **AI-Powered Intelligence:** Leverage the power of LLMs to understand context and generate helpful responses
- **Increased Efficiency:** Save time and resources by automating the research process
- **Quick Solutions:** Get immediate answers to your n8n platform questions
- **Enhanced Learning:** Discover new workflows, features, and best practices to improve your n8n experience

## How It Works
1. **Receive Request:** The workflow starts when a chat message is received containing your n8n-related question.
2. **AI Processing:** The AI agent powered by OpenAI GPT-4o analyzes your question.
3. **Research and Information Gathering:** The system searches across multiple sources:
   - Official n8n documentation for general knowledge and how-to guides
   - Community forums for bug reports and specific issues
   - The example workflow repository for relevant implementations
4. **Response Generation:** The AI agent compiles the research and generates a clear, comprehensive answer.
5. **Output:** The workflow provides you with the relevant information and step-by-step guidance when applicable.

## n8n Nodes Used
- When chat message received (Chat Trigger)
- OpenAI Chat Model (GPT-4o mini)
- N8N AI Agent
- n8n-assistant tools (MCP Client Tool - Community Node)
- n8n-assistant execute (MCP Client Tool - Community Node)

## Prerequisites
- Self-hosted n8n instance
- OpenAI API credentials
- MCP client community node installed
- MCP server configured to search n8n resources

## Setup
1. Import the workflow JSON into your n8n instance.
2. Configure the OpenAI credentials.
3. Configure your MCP client API credentials.
4. In the n8n-assistant execute node, ensure the parameter is set to "specific" (corrected from "spesific").
5. Test the workflow by sending a message with an n8n-related question.

## MCP Server Connection
To connect to the MCP server that powers this assistant's research capabilities, use the following URL:

https://smithery.ai/server/@onurpolat05/n8n-assistant

This MCP server is specifically designed to search across three types of n8n resources:
1. Official documentation for general platform information and workflow creation guidance
2. Community forums for bug-related issues and troubleshooting
3. Example workflow repositories for reference implementations

Configure this URL in your MCP client credentials to enable the assistant to retrieve relevant information based on user queries.

This workflow combines the convenience of chat with the power of AI to provide a seamless n8n platform research experience. Start getting instant answers to your n8n questions today!