by Lakshit Ukani
Automated Instagram posting with Facebook Graph API and content routing Who is this for? This workflow is perfect for social media managers, content creators, digital marketing agencies, and small business owners who need to automate their Instagram posting process. Whether you're managing multiple client accounts or maintaining consistent personal branding, this template streamlines your social media operations. What problem is this workflow solving? Manual Instagram posting is time-consuming and prone to inconsistency. Content creators struggle with: Remembering to post at optimal times Managing different content types (images, videos, reels, stories, carousels) Maintaining posting schedules across multiple accounts Ensuring content is properly formatted for each post type This workflow eliminates manual posting, reduces human error, and ensures consistent content delivery across all Instagram format types. What this workflow does The workflow automatically publishes content to Instagram using Facebook's Graph API with intelligent routing based on content type. It handles image posts, videos, Instagram reels, carousel posts, and story content. The system creates media containers, monitors processing status, and publishes content when ready. It supports both HTTP requests and Facebook SDK methods for maximum reliability and includes automatic retry mechanisms for failed uploads. Setup Connect your Instagram Business Account to a Facebook Page Configure Facebook Graph API credentials with instagram_basic permissions Update the "Configure Post Settings" node with your Instagram Business Account ID Set media URLs and captions in the configuration section Choose a post type (http_image, fb_reel, http_carousel, etc.)
Test the workflow with sample content before going live How to customize this workflow to your needs Modify the post_type variable to control content routing: Use http_* prefixes for direct API calls Use fb_* prefixes for Facebook SDK calls Use both HTTP and Facebook SDK nodes as fallback mechanisms - if one method fails, automatically try the other for maximum success rate Add scheduling by connecting a Cron node trigger Integrate with Google Sheets or Airtable for content management Connect webhook triggers for automated posting from external systems Customize wait times based on your content file sizes Set up error handling to switch between HTTP and Facebook SDK methods when API limits are reached
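The create-container, poll, then publish sequence described above can be sketched in Python. This is a minimal illustration of the documented Graph API flow, not the workflow itself (which performs these steps in n8n nodes); the function names, the pinned API version, and the omission of carousels are simplifications of my own:

```python
import json
import time
import urllib.parse
import urllib.request

GRAPH = "https://graph.facebook.com/v21.0"  # pin to the API version you tested against

def route_post_type(post_type):
    """Split a post_type such as 'http_image' or 'fb_reel' into (method, media)."""
    method, _, media = post_type.partition("_")
    if method not in ("http", "fb") or not media:
        raise ValueError(f"unrecognised post_type: {post_type!r}")
    return method, media

def container_payload(media, media_url, caption):
    """Map a media kind onto the fields the /{ig-user-id}/media endpoint expects.
    Carousels (a parent container referencing child containers) are omitted here."""
    if media == "image":
        return {"image_url": media_url, "caption": caption}
    if media == "reel":
        return {"media_type": "REELS", "video_url": media_url, "caption": caption}
    if media == "story":
        return {"media_type": "STORIES", "image_url": media_url}
    raise ValueError(f"unsupported media kind: {media!r}")

def _post(url, fields):
    data = urllib.parse.urlencode(fields).encode()
    with urllib.request.urlopen(urllib.request.Request(url, data=data)) as resp:
        return json.load(resp)

def publish_via_http(ig_user_id, token, post_type, media_url, caption=""):
    """Create a media container, wait until processing finishes, then publish it."""
    _, media = route_post_type(post_type)
    payload = dict(container_payload(media, media_url, caption), access_token=token)
    container = _post(f"{GRAPH}/{ig_user_id}/media", payload)["id"]
    # Videos are processed asynchronously: poll status_code until it leaves IN_PROGRESS.
    status_url = (f"{GRAPH}/{container}?" +
                  urllib.parse.urlencode({"fields": "status_code", "access_token": token}))
    while json.load(urllib.request.urlopen(status_url)).get("status_code") == "IN_PROGRESS":
        time.sleep(5)  # the n8n Wait node plays this role in the workflow
    return _post(f"{GRAPH}/{ig_user_id}/media_publish",
                 {"creation_id": container, "access_token": token})["id"]
```

The same `route_post_type` split is what lets the workflow fall back from one method to the other: catch a failure on the `http_*` branch and retry the `fb_*` branch with the same media payload.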
by Jimleuk
This n8n workflow takes Slack conversations and turns them into Calendar events complete with accurate dates, times, and location information. Adding and removing attendees is also managed automatically. How it works The workflow monitors a Slack channel for invite messages with a "📅" reaction and sends them to the AI agent. The AI agent parses the message, determining the time, date, and location. Using its Location tool, the AI agent searches Google Maps for the precise location address. Using its Calendar tool, the AI agent creates a Google Calendar invite with the title, description, and location address for the user. Back in the Slack channel, others can RSVP to the invite by reacting with the "✅" emoji. The workflow polls the message after a while, adds the users who have reacted to the Calendar invite as attendees, and conversely removes any attendees who have since removed their reaction. Examples Jill: "Hey team, I'm organising a round of Laser Tag (Bunker 51) next Thursday around 6pm. Please RSVP with a ✅" AI: "I've helped you create an event in your calendar https://cal.google.com/..." Jack: "✅" AI: "I've added Jack to the event as an attendee". Requirements Slack channel to attach the workflow OpenAI account to use a GPT model Google Calendar to create and update events Customising the Workflow This workflow can work with other messaging platforms that support reactions or tagging-like features, such as Discord. Don't use Google Calendar? Swap it out for Outlook or your own. Use any combination of emoji reactions and add new rules like "RSVP maybe", which could send reminder updates nearer the event date.
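The RSVP polling step boils down to a set difference between who has reacted with ✅ and who is already invited. A minimal sketch (the function and field names are illustrative, not taken from the workflow):

```python
def sync_attendees(current_attendees, reacted_users):
    """Return (to_add, to_remove) so the calendar invite mirrors the ✅ reactions.

    current_attendees: emails already on the Google Calendar event
    reacted_users:     emails of Slack users currently reacting with ✅
    """
    current, reacted = set(current_attendees), set(reacted_users)
    to_add = sorted(reacted - current)     # reacted but not yet invited
    to_remove = sorted(current - reacted)  # invited, but the reaction was removed
    return to_add, to_remove
```

Running this on each poll keeps the event attendee list converging on the current reactions, regardless of how often users add or remove their ✅ between polls.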
by Sina
🧠 Who is this for? Startup founders designing creative growth strategies Marketing teams seeking low-cost, high-impact campaigns Consultants and agencies needing fast guerrilla plans Creators exploring AI-powered content and campaigns ❓ What problem does this workflow solve? Building a full guerrilla marketing strategy usually takes hours of brainstorming, validation, and formatting. This template does all of that in minutes using a swarm of AI agents, from idea generation to KPIs, and even kills bad ideas before you waste time on them. ⚙️ What this workflow does Starts with a chat input where you describe your business or idea A “Swarm Intelligence” loop: One AI agent generates guerrilla ideas Another agent critically validates the idea and gives honest feedback If the idea is weak, it asks for a new one If accepted, the swarm continues with 16 AI specialists generating: 🎯 Objectives 🧍♂️ Personas 🎤 Messaging 🧨 Tactics 📢 Channels 🧮 Budget 📊 KPIs 📋 Risk plan and more Merges all chapters into a final Markdown file Lets you download the campaign in seconds 🛠️ Setup Import the workflow to your n8n instance (Optional) Configure your LLM (OpenAI or Ollama) in the “OpenAI Chat Model” node Type your business idea (e.g., “Luxury dog collar brand for Instagram dads”) Wait for flow completion Download the final marketing plan file 🤖 LLM Flexibility (Choose Your Model) Supports any LLM via LangChain: Ollama (LLaMA 3.1, Mistral, DeepSeek) OpenAI (GPT-4, GPT-3.5) To switch models, just replace the “Language Model” node, no other logic needs updating 📌 Notes Output is professional and ready-to-pitch Built-in pessimistic validator filters out bad ideas before wasting time 📩 Need help? Email: sinamirshafiee@gmail.com Happy to support setup or customization!
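The generate-then-validate swarm loop described above can be sketched schematically, with stub callables standing in for the two AI agents (names and the attempt budget are illustrative):

```python
def swarm_accept(generate, validate, max_attempts=5):
    """Run the generate/critique loop until the validator accepts an idea.

    generate(feedback) -> idea          (feedback is None on the first attempt)
    validate(idea)     -> (ok, feedback)
    """
    feedback = None
    for _ in range(max_attempts):
        idea = generate(feedback)       # the generator agent, seeded with prior critique
        ok, feedback = validate(idea)   # the pessimistic validator agent
        if ok:
            return idea                 # accepted ideas go on to the 16 specialists
    raise RuntimeError("no idea passed validation within the attempt budget")
```

Feeding the critic's feedback back into the next generation round is what makes this a loop rather than a single filter: weak ideas are killed cheaply, before any of the downstream chapters are written.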
by Cameron Wills
Who is this for? Content creators, digital marketers, and social media managers who want to automate the creation of short-form videos for platforms like TikTok, YouTube Shorts, and Instagram Reels without extensive video editing skills. What problem does this workflow solve? Creating engaging short-form videos consistently is time-consuming and requires multiple tools and skills. This workflow automates the entire process from ideation to publishing, significantly reducing the manual effort needed while maintaining content quality. What this workflow does This all-in-one solution transforms ideas into fully produced short-form videos through a 5-step process: Generate video captions from ideas stored in a Google Sheet Create AI-generated images using Flux and the OpenAI API Convert images to videos using Kling's API Add voice-overs to your content with Eleven Labs Complete the video production with Creatomate by adding templates, transitions, and combining all elements The workflow handles everything from sourcing content ideas to rendering the final video, and even notifies you on Discord when videos are ready. Setup (Est. time: 20-30 minutes) Before getting started, you'll need: n8n installation (tested on version 1.81.4) OpenAI API Key (free trial credits available) PiAPI (free trial credits available) Eleven Labs (free account) Creatomate API Key (free trial credits available) Google Sheets API enabled in Google Cloud Console Google Drive API enabled in Google Cloud Console OAuth 2.0 Client ID and Client Secret from your Google Cloud Console Credentials How to customize this workflow to your needs Adjust the Google Sheet structure to include additional data like video length, duration, style, etc. 
Modify the prompt templates for each AI service to match your brand voice and content style Update the Creatomate template to reflect your visual branding Configure notification preferences in Discord to manage your workflow This workflow combines multiple AI technologies to create a seamless content production pipeline, saving you hours of work per video and allowing you to focus on strategy rather than production.
by Adam Bertram
An intelligent IT support agent that uses Azure AI Search for knowledge retrieval, Microsoft Entra ID integration for user management, and Jira for ticket creation. The agent can answer questions using internal documentation and perform administrative tasks like password resets. How It Works The workflow operates in three main sections: Agent Chat Interface: A chat trigger receives user messages and routes them to an AI agent powered by Google Gemini. The agent maintains conversation context using buffer memory and has access to multiple tools for different tasks. Knowledge Management: Users can upload documentation files (.txt, .md) through a form trigger. These documents are processed, converted to embeddings using OpenAI's API, and stored in an Azure AI Search index with vector search capabilities. Administrative Tools: The agent can query Microsoft Entra ID to find users, reset passwords, and create Jira tickets when issues need escalation. It uses semantic search to find relevant internal documentation before responding to user queries. The workflow includes a separate setup section that creates the Azure AI Search service and index with proper vector search configuration, semantic search capabilities, and the required field schema. Prerequisites To use this template, you'll need: n8n cloud or self-hosted instance Azure subscription with permissions to create AI Search services Microsoft Entra ID (Azure AD) access with user management permissions OpenAI API account for embeddings Google Gemini API access Jira Software Cloud instance Basic understanding of Azure resource management Setup Instructions Import the template into n8n. 
Configure credentials: Add Google Gemini API credentials Add OpenAI API credentials for embeddings Add Microsoft Azure OAuth2 credentials with appropriate permissions Add Microsoft Entra ID OAuth2 credentials Add Jira Software Cloud API credentials Update workflow parameters: Open the "Set Common Fields" nodes Replace <azure subscription id> with your Azure subscription ID Replace <azure resource group> with your target resource group name Replace <azure region> with your preferred Azure region Replace <azure ai search service name> with your desired service name Replace <azure ai search index name> with your desired index name Update the Jira project ID in the "Create Jira Ticket" node Set up Azure infrastructure: Run the manual trigger "When clicking 'Test workflow'" to create the Azure AI Search service and index This creates the vector search index with semantic search configuration Configure the vector store webhook: Update the "Invoke Query Vector Store Webhook" node URL with your actual webhook endpoint The webhook URL should point to the "Semantic Search" webhook in the same workflow Upload knowledge base: Use the "On Knowledge Upload" form to upload your internal documentation Supported formats: .txt and .md files Documents will be automatically embedded and indexed Test the setup: Use the chat interface to verify the agent responds appropriately Test knowledge retrieval with questions about uploaded documentation Verify Entra ID integration and Jira ticket creation Security Considerations Use least-privilege access for all API credentials Microsoft Entra ID credentials should have limited user management permissions Azure credentials need Search Service Contributor and Search Index Data Contributor roles OpenAI API key should have usage limits configured Jira credentials should be restricted to specific projects Consider implementing rate limiting on the chat interface Review password reset policies and ensure force password change is enabled Validate all 
user inputs before processing administrative requests Extending the Template You could enhance this template by: Adding support for additional file formats (PDF, DOCX) in the knowledge upload Implementing role-based access control for different administrative functions Adding integration with other ITSM tools beyond Jira Creating automated escalation rules based on query complexity Adding analytics and reporting for support interactions Implementing multi-language support for international organizations Adding approval workflows for sensitive administrative actions Integrating with Microsoft Teams or Slack for notifications
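Under the hood, the semantic search step ranks stored document embeddings by similarity to the query embedding. A toy illustration of that ranking (plain cosine similarity in Python; in the template the real work is done by Azure AI Search's vector query, not code like this):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_embedding, indexed_docs, k=3):
    """indexed_docs: list of (doc_id, embedding) pairs, as stored in the index.
    Returns the k most similar documents as (doc_id, score) pairs."""
    scored = [(doc_id, cosine(query_embedding, emb)) for doc_id, emb in indexed_docs]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]
```

The agent embeds the user's question with the same OpenAI embedding model used at upload time, retrieves the top-scoring chunks, and grounds its answer in them before deciding whether to escalate to Jira.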
by Gleb D
This n8n workflow automates the enrichment of a company list by discovering and extracting each company’s official LinkedIn URL using Bright Data’s search capabilities and Google Gemini AI for HTML parsing and result interpretation. Who is this template for? This workflow is ideal for sales, business development, and data research professionals who need to collect official LinkedIn company profiles for multiple organizations, starting from a list of company names in Google Sheets. It’s especially useful for teams who want to automate sourcing LinkedIn URLs, enrich their prospect database, or validate company data at scale. How it works Manual Trigger: The workflow is started manually (useful for controlled batch runs and testing). Read Company Names: Company names are loaded from a specified Google Sheets table. Loop Over Each Company: Each company is processed one-by-one: A custom Google Search URL is generated for each name. A Bright Data Web Unlocker request is sent to fetch Google search results for “site:linkedin.com [company name]”. Parse LinkedIn Profile URL Using AI: Google Gemini (or your specified LLM) analyzes the fetched search page and extracts the most likely official LinkedIn company profile. Result Handling: If a profile is found, it’s stored in the results. If not, an empty result is created, but you can add custom logic (notifications, retries, etc.). Batch Data Enrichment: All found company URLs are bundled into a single request for further enrichment from a Bright Data dataset. Export: The workflow appends the final, structured data for each company to another sheet in your Google Sheets file. Setup instructions 1. Replace API Keys: Insert your Bright Data API key in these nodes: Bright Data Web Request - Google Search for Company LinkedIn URL HTTP Request - Post API call to Bright Data Snapshot Progress HTTP Request - Getting data from Bright Data 2. 
Connect Google Sheets: Set up your Google Sheets credentials and specify the sheet for reading input and writing output. 3. Customize Output Structure: Adjust the Python code node (see sticky note in the template) if you want to include additional or fewer fields in your output. 4. Adjust for Scale or Error Handling: You can modify the logic for “not found” results (e.g., to notify a Slack channel or retry failed companies). 5. Run the Workflow: Start manually, monitor the run, and check your Google Sheet for results. Customization guidance Change Input/Output Sheets: Update the sheet names or columns if your source/target spreadsheet has a different structure. Use Another AI Model: Replace the Google Gemini node with another LLM node if preferred. Integrate Alerts: Add Slack or email nodes to notify your team when a LinkedIn profile is not found or when the process is complete.
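The per-company search URL, and a deterministic fallback for pulling a LinkedIn company link out of the fetched HTML, might look roughly like this (a sketch only; the template itself delegates the extraction to Gemini, and the regex-based fallback is my own addition):

```python
import re
from urllib.parse import quote_plus

def company_search_url(company_name):
    """Build the Google query that Bright Data's Web Unlocker fetches for one company."""
    query = f"site:linkedin.com {company_name}"
    return "https://www.google.com/search?q=" + quote_plus(query)

def first_company_url(html):
    """Deterministic fallback: first linkedin.com/company/... link found in the page.
    The AI parse is better at picking the *official* profile among several hits."""
    m = re.search(r"https://[a-z.]*linkedin\.com/company/[A-Za-z0-9_-]+", html)
    return m.group(0) if m else None
```

A fallback like `first_company_url` is useful in the "not found" branch: if the LLM returns an empty result, you can still salvage an obvious match before notifying Slack or scheduling a retry.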
by Alex Kim
Weather via Slack 🌦️💬 Overview This workflow provides real-time weather updates via Slack using a custom Slack command: /weather [cityname] Users can type this command in Slack (e.g., /weather New York), and the workflow will fetch and post the latest forecast, including temperature, wind conditions, and a short weather summary. While this workflow is designed for Slack, users can modify it to send weather updates via email, Discord, Microsoft Teams, or any other communication platform. How It Works Webhook Trigger – The workflow is triggered when a user runs /weather [cityname] in Slack. Geocoding with OpenStreetMap – The city name is converted into latitude and longitude coordinates. Weather Data from NOAA – The coordinates are used to retrieve detailed weather data from the National Weather Service (NWS) API. Formatted Weather Report – The workflow extracts relevant weather details, such as: Temperature (°F/°C) Wind speed and direction Short forecast summary Slack Notification – The weather forecast is posted back to the Slack channel in a structured format. Requirements A custom Slack app with: The ability to create a Slash Command (/weather) OAuth permissions to post messages in Slack An n8n instance to host and execute the workflow Customization Replace Slack messaging with email, Discord, Microsoft Teams, Telegram, or another service. Modify the weather data format for different output preferences. Set up scheduled weather updates for specific locations. Use Cases Instantly check the weather for any location directly in Slack. Automate weather reports for team members or projects. Useful for remote teams, outdoor event planning, or general weather tracking. Setup Instructions Create a custom Slack app: Go to api.slack.com/apps and create a new app. Add a Slash Command (/weather) with the webhook URL from n8n. Enable OAuth scopes for sending messages. Deploy the webhook – Ensure it can receive and process Slack commands. 
Run the workflow – Type /weather [cityname] in Slack and receive instant weather updates.
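The geocode-then-forecast chain uses two public APIs whose URL shapes look like this. The sketch below shows the URL building and message formatting only; the exact report layout in `format_report` is my own, not the workflow's Slack template:

```python
from urllib.parse import urlencode

def geocode_url(city):
    """OpenStreetMap Nominatim lookup: city name -> JSON with lat/lon."""
    return ("https://nominatim.openstreetmap.org/search?" +
            urlencode({"q": city, "format": "json", "limit": 1}))

def points_url(lat, lon):
    """NWS endpoint that maps coordinates to a forecast office and grid; its
    response contains the actual forecast URL under properties.forecast."""
    return f"https://api.weather.gov/points/{lat:.4f},{lon:.4f}"

def format_report(city, period):
    """Render one NWS forecast period (an entry of properties.periods) for Slack."""
    return (f"*{city}* {period['name']}: "
            f"{period['temperature']}°{period['temperatureUnit']}, "
            f"wind {period['windSpeed']} {period['windDirection']}. "
            f"{period['shortForecast']}")
```

Note that NWS only covers US locations; swapping in a global provider (e.g. Open-Meteo) only changes the second URL builder and the field names in `format_report`.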
by Niranjan G
How it works This workflow acts like your own personal AI assistant, automatically fetching and summarizing the most relevant Security, Privacy, and Compliance news from curated RSS feeds. It processes only the latest articles (past 24 hours), organizes them by category, summarizes key insights using AI, and delivers a clean HTML digest straight to your inbox—saving you time every day. Key Highlights Handles three independent tracks: Security, Privacy, and Compliance Processes content from customizable RSS sources (add/remove easily) Filters fresh articles, removes duplicates, and sorts by recency Uses AI to summarize and format insights in a digestible format Sends polished HTML digests via Gmail—one per category Fully modular and extensible—adapt it to your needs Personalization You can easily tailor the workflow: 🎯 Customize feeds: Add or remove sources in the following Code nodes: Fetch Security RSS, Fetch Privacy Feeds, and Fetch Compliance Feeds 🔧 Modify logic: Adjust filters, sorting, formatting, or even AI prompts as needed 🧠 Bring your own LLM: Works with Gemini, but easily swappable for other LLM APIs Setup Instructions Requires Gmail and LLM (e.g., Gemini) credentials Prebuilt with placeholders for RSS feeds and email output Designed to be readable, maintainable, and fully adaptable
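The freshness filter and de-duplication step (past 24 hours, no repeated links, newest first) can be sketched as follows; the field names `link` and `published` are illustrative, not the workflow's exact schema:

```python
from datetime import datetime, timedelta, timezone

def fresh_unique(articles, now=None, window_hours=24):
    """Keep only articles published within the last window_hours, drop duplicate
    links, and sort newest first. Each article: {"link": str, "published": datetime}."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=window_hours)
    seen, out = set(), []
    for art in articles:
        if art["published"] < cutoff or art["link"] in seen:
            continue  # stale, or already collected from another feed
        seen.add(art["link"])
        out.append(art)
    out.sort(key=lambda a: a["published"], reverse=True)
    return out
```

Running this once per track (Security, Privacy, Compliance) before the AI summarization step keeps token usage down and prevents the same story from appearing twice in a digest.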
by Miko
Stay ahead of trends by automating your content research. This workflow fetches trending keywords from Google Trends RSS, extracts key insights from top articles, and saves structured summaries in Google Sheets—helping you build a data-driven editorial plan effortlessly. How it works Fetch Google Trends RSS – The workflow retrieves trending keywords along with three related article links. Extract & Process Content – It fetches the content of these articles, cleans the HTML, and generates a concise summary using Jina AI. Store in Google Sheets – The processed insights, including the trending keyword and summary, are saved in a pre-configured Google Sheet. Setup Steps Prepare a Google Sheet – Ensure you have a Google Sheet ready to store the extracted data. Configure API Access – Set up Google Sheets API and any required authentication. Get Jina.ai API key Adjust Workflow Settings – A dedicated configuration node allows you to fine-tune how data is processed and stored. Customization Modify the RSS source to focus on specific Google Trends regions or categories. Adjust the content processing logic to refine how article summaries are created. Expand the workflow to integrate with CMS (e.g., WordPress) for automated content planning. This workflow is ideal for content strategists, SEO professionals, and news publishers who want to quickly identify and act on trending topics without manual research. 
🚀 Google Sheets Fields Copy and paste these column headers into your Google Sheet:

| Column Name | Description |
|------------------------|-------------|
| status | Initial status of the keyword (e.g., "idea") |
| trending_keyword | Trending keyword extracted from Google Trends |
| approx_traffic | Estimated traffic for the trending keyword |
| pubDate | Date when the keyword was fetched |
| news_item_url1 | URL of the first related news article |
| news_item_title1 | Title of the first news article |
| news_item_url2 | URL of the second related news article |
| news_item_title2 | Title of the second news article |
| news_item_url3 | URL of the third related news article |
| news_item_title3 | Title of the third news article |
| news_item_picture1 | Image URL from the first news article |
| news_item_source1 | Source of the first news article |
| news_item_picture2 | Image URL from the second news article |
| news_item_source2 | Source of the second news article |
| news_item_picture3 | Image URL from the third news article |
| news_item_source3 | Source of the third news article |
| abstract | AI-generated summary of the articles (limited to 49,999 characters) |

Instructions Open Google Sheets and create a new spreadsheet. Copy the column names from the table above. Paste them into the first row of your Google Sheet.
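Flattening each Trends RSS item into one row with these columns might look like the sketch below. Note the `ht:` namespace URI is an assumption based on sample feeds and has changed over time, so verify it against the live feed before relying on it:

```python
import xml.etree.ElementTree as ET

# ElementTree expands prefixed tags to {namespace-uri}localname.
# ASSUMPTION: check this URI against the xmlns:ht declaration in the live feed.
HT = "{https://trends.google.com/trending/rss}"

def parse_trend_items(xml_text, max_news=3):
    """Flatten each <item> of the Trends RSS into one dict matching the sheet columns."""
    rows = []
    for item in ET.fromstring(xml_text).iter("item"):
        row = {
            "status": "idea",
            "trending_keyword": item.findtext("title", ""),
            "approx_traffic": item.findtext(f"{HT}approx_traffic", ""),
            "pubDate": item.findtext("pubDate", ""),
        }
        for i, news in enumerate(item.findall(f"{HT}news_item")[:max_news], start=1):
            row[f"news_item_title{i}"] = news.findtext(f"{HT}news_item_title", "")
            row[f"news_item_url{i}"] = news.findtext(f"{HT}news_item_url", "")
            row[f"news_item_source{i}"] = news.findtext(f"{HT}news_item_source", "")
        rows.append(row)
    return rows
```

Each returned dict maps one-to-one onto a sheet row, so appending to Google Sheets is a straight column lookup with no further reshaping.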
by Krupal Patel
🔧 Workflow Summary This system automates LinkedIn lead generation and enrichment in six clear stages: 1. Lead Collection (via Apollo.io) Automatically pulls leads based on keywords, roles, or industries using Apollo’s API. Captures name, job title, company, and LinkedIn profile URL. You can kick off the workflow via form, webhook, WhatsApp, Telegram, or any other custom trigger that passes search parameters. 2. LinkedIn Username Extraction Extracts usernames from LinkedIn profile URLs using a script step. These usernames are required for further enrichment using RapidAPI. 3. Email Retrieval (via Apollo.io User ID) Fetches verified work emails using the Apollo User ID. Email validity is double-checked using www.mails.so, which filters out undeliverable or inactive emails by checking MX records and deliverability. 4. Profile Summary (via LinkedIn API on RapidAPI) Enriches lead data by pulling bio/summary details to understand their background and expertise. 5. Activity Insights (Posts & Reposts) Collects recent posts or reposts to help craft personalised messages based on what they’re currently engaging with. 6. Leads Sheet Update All data is written into a Google Sheet. New columns are populated dynamically without erasing existing data. ⸻ ✅ Smart Retry Logic Each workflow is equipped with a fail-safe system: Tracks status per row: ✅ done, ❌ failed, ⏳ pending Failed rows are automatically retried after a custom delay (e.g., 2 weeks). Ensures minimal drop-offs and complete data coverage. 📊 Google Sheets Setup Make a copy of the following: Template 1: Apollo Leads Scraper & Enrichment Template 2: Final Enriched Leads The system appends data (like emails, bios, activity) step by step. 🔐 API Credentials Needed 1. Apollo API Sign up and generate an API key at the Apollo Developer Portal Be sure to enable the “Master API Key” toggle so the same key works for all endpoints. 2. 
LinkedIn Data API (via RapidAPI) Subscribe at RapidAPI - LinkedIn Data Use your key in the x-rapidapi-key header. 3. Mails.so API Get your API Key from mails.so dashboard 🛠️ Troubleshooting – LinkedIn Lead Machine ✅ Common Mistakes & Fixes 1. API Keys Not Working Make sure API keys for Apollo, RapidAPI, and mails.so are correct. Apollo “Master API Key” must be enabled. Keys should be saved as Generic Credentials in n8n. 2. Leads Not Found Check if the search query (keyword/job title) is too narrow. Apollo might return empty results if the filters are incorrect. 3. LinkedIn URLs Missing or Invalid Ensure Apollo is returning valid LinkedIn URLs. Improper URLs will cause username extraction and enrichment steps to fail. 4. Emails Not Coming Through Apollo may not have verified emails for all leads. mails.so might reject invalid or expired email addresses. 5. Google Sheet Not Updating Make sure the Google Sheet is shared with the right Google account (linked to n8n). Check if the column names match and data isn’t blocked due to formatting. 6. Status Columns Not Changing Each row must have done, failed, or pending in the status column. If the status doesn’t update, the retry logic won’t trigger. 7. RapidAPI Not Returning Data Double-check if username is present and valid. Make sure the RapidAPI plan is active and within limits. 8. Workflow Not Running Check if the trigger node (form, webhook, etc.) is connected and active. Make sure you’re passing the required inputs (keyword, role, etc.). Need Help? Contact www.KrupalPatel.com for support and custom workflow development
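The username-extraction step from stage 2 is essentially one regular expression over the profile URL. A sketch (the set of path prefixes covered here is an assumption; extend it if Apollo returns other URL shapes):

```python
import re

def linkedin_username(profile_url):
    """Pull the username segment from a LinkedIn profile URL, or None if absent.
    Handles /in/ (personal), /pub/ (legacy), and /company/ paths."""
    m = re.search(r"linkedin\.com/(?:in|pub|company)/([^/?#]+)", profile_url or "")
    return m.group(1) if m else None
```

Returning `None` instead of raising lets the retry logic mark the row as ❌ failed (troubleshooting point 3 above) rather than aborting the whole batch.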
by David
Who might benefit from this workflow? Do you have to record your working hours yourself? Then this n8n workflow, in combination with an iOS Shortcut, will definitely help you. Once set up, you can use a shortcut, which can be stored as an app icon on your home screen, to record your start time, end time, and break duration. How it works Once set up, you can tap the iOS shortcut on your iPhone. You will see a menu containing three options: "Track Start", "Track Break" and "Track End". After time is tracked, iOS will display a notification about the successful operation. How to set it up Copy the Notion database to your Notion workspace (top right corner). Copy the n8n workflow to your n8n workspace. In the Notion nodes in the n8n workflow, add your Notion credentials and select the copied Notion database. Download the iOS Shortcut from our documentation page. Edit the shortcut and paste the URL of your n8n Webhook trigger node into the first "Text" node of the iOS Shortcut flow. It is a best practice to use authentication. You can do so by adding "Header" auth to the webhook node and to the shortcut. You need help implementing this or any other n8n workflow? Feel free to contact me via LinkedIn or my business website. You want to start using n8n? Use this link to register for n8n (this is an affiliate link)
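The request the shortcut sends to the webhook, including the recommended header auth, can be modelled like this. The field names (`action`, `Authorization`) are illustrative; match whatever your webhook node is configured to expect:

```python
def build_track_request(webhook_url, action, auth_token=None):
    """Describe the POST the iOS shortcut sends to the n8n Webhook trigger."""
    if action not in ("start", "break", "end"):
        raise ValueError(f"unknown action: {action!r}")
    headers = {"Content-Type": "application/json"}
    if auth_token:
        # Mirrors the "Header" auth recommended above: the webhook node
        # rejects requests that do not carry the shared secret.
        headers["Authorization"] = auth_token
    return {"url": webhook_url, "method": "POST",
            "headers": headers, "body": {"action": action}}
```

The three shortcut menu options map onto the three `action` values, and the workflow branches on that field to write the matching start, break, or end timestamp into Notion.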
by Artem Boiko
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. CAD-BIM Multi-Format Validation Pipeline This workflow enables automated validation of CAD and BIM files in multiple formats (Revit, IFC, DWG, DGN) for compliance with project standards and requirements. Key Features Converts Revit, IFC, DWG, and DGN models into open data tables Runs automated validation checks on model naming, structure, attributes, and completeness Generates error reports and QTO (Quantity Take-Off) tables for all processed files How it works Upload one or more project files in Revit (.rvt), IFC (.ifc), DWG, or DGN formats The pipeline automatically processes each file and validates it against configurable rules defined in an Excel file Error summaries and QTO tables are generated All outputs are available for download as Excel files Converter Path: Make sure the converter executable (e.g. RvtExporter.exe) is placed in DDC Exporter\datadrivenlibs\. Specify the full path in the workflow settings if required. Troubleshooting: If conversion fails, double-check the path to the executable. Only supported formats can be processed (see the GitHub Readme). Review logs in /output for error details. Docs & Issues: Full Readme on GitHub