by Lakshit Ukani
Automated Instagram posting with Facebook Graph API and content routing

Who is this for?
This workflow is perfect for social media managers, content creators, digital marketing agencies, and small business owners who need to automate their Instagram posting process. Whether you're managing multiple client accounts or maintaining consistent personal branding, this template streamlines your social media operations.

What problem is this workflow solving?
Manual Instagram posting is time-consuming and prone to inconsistency. Content creators struggle with:
- Remembering to post at optimal times
- Managing different content types (images, videos, reels, stories, carousels)
- Maintaining posting schedules across multiple accounts
- Ensuring content is properly formatted for each post type

This workflow eliminates manual posting, reduces human error, and ensures consistent content delivery across all Instagram format types.

What this workflow does
The workflow automatically publishes content to Instagram using Facebook's Graph API with intelligent routing based on content type. It handles image posts, video stories, Instagram reels, carousel posts, and story content. The system creates media containers, monitors processing status, and publishes content when ready. It supports both HTTP requests and Facebook SDK methods for maximum reliability and includes automatic retry mechanisms for failed uploads (a minimal sketch of the underlying Graph API calls appears at the end of this template).

Setup
1. Connect your Instagram Business Account to a Facebook Page
2. Configure Facebook Graph API credentials with instagram_basic permissions
3. Update the "Configure Post Settings" node with your Instagram Business Account ID
4. Set media URLs and captions in the configuration section
5. Choose a post type (http_image, fb_reel, http_carousel, etc.)
6. Test the workflow with sample content before going live

How to customize this workflow to your needs
- Modify the post_type variable to control content routing: use http_* prefixes for direct API calls and fb_* prefixes for Facebook SDK calls
- **Use both HTTP and Facebook SDK nodes as fallback mechanisms** - if one method fails, automatically try the other for maximum success rate
- Add scheduling by connecting a Cron trigger node
- Integrate with Google Sheets or Airtable for content management
- Connect webhook triggers for automated posting from external systems
- Customize wait times based on your content file sizes
- **Set up error handling** to switch between HTTP and Facebook SDK methods when API limits are reached
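For orientation, the image-post path (the http_image route) boils down to two Graph API calls: create a media container, then publish it once Instagram has finished processing. The snippet below is a minimal sketch of that sequence as it might appear in an n8n Code node; the API version, the IG_USER_ID/ACCESS_TOKEN/IMAGE_URL placeholders, and the fixed 5-second wait are illustrative assumptions - the actual workflow uses HTTP Request nodes, status polling, and retries instead of a hard-coded delay.

```javascript
// Sketch: publish one image via the Instagram content publishing endpoints.
// Placeholders and the API version are assumptions; adapt to your credentials.
const BASE = 'https://graph.facebook.com/v21.0';

async function publishImage(igUserId, accessToken, imageUrl, caption) {
  // Step 1: create a media container for the image.
  const createParams = new URLSearchParams({
    image_url: imageUrl,
    caption,
    access_token: accessToken,
  });
  const createRes = await fetch(`${BASE}/${igUserId}/media`, { method: 'POST', body: createParams });
  const { id: creationId } = await createRes.json();

  // Step 2: wait for processing (the real workflow polls the container status and retries).
  await new Promise((resolve) => setTimeout(resolve, 5000));

  // Step 3: publish the container to the account's feed.
  const publishParams = new URLSearchParams({
    creation_id: creationId,
    access_token: accessToken,
  });
  const publishRes = await fetch(`${BASE}/${igUserId}/media_publish`, { method: 'POST', body: publishParams });
  return publishRes.json(); // contains the published media ID on success
}
```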
by Cyril Nicko Gaspar
🔍 Email Lookup with Google Search from Postgres Database

This n8n workflow is designed to enrich seller data stored in a Postgres database by performing automated Google search lookups. It uses Bright Data's Web Unlocker to bypass search result restrictions and the HTML Extract node to parse and extract relevant information from webpages. The main purpose of this workflow is to discover missing contact details, company domains, and secondary emails for businesses or sellers based on existing database entries.

🎯 Problem This Workflow Solves
Manually searching for missing seller or business details—like secondary emails, websites, or domain names—can be time-consuming and inefficient, especially for large datasets. This workflow automates the search and data enrichment process, significantly reducing manual effort while improving the quality and completeness of your seller database.

✅ Prerequisites
Before using this template, make sure the following requirements are met:
✔️ A Bright Data account with access to the Web Unlocker or Amazon Scraper API
✔️ A valid Bright Data API key
✔️ An active PostgreSQL database with seller data
✔️ An n8n self-hosted instance (recommended for using community nodes like n8n-nodes-brightdata)
✔️ The n8n-nodes-brightdata package installed (custom node for Bright Data integration)

⚙️ Setup Instructions

Step 1: Prepare Your Postgres Table
Create a table in Postgres with the following structure (you can adjust field names if needed):

```sql
CREATE TABLE sellers (
  seller_id SERIAL PRIMARY KEY,
  seller_name TEXT,
  primary_email TEXT,
  company_info TEXT,
  trade_name TEXT,
  business_address TEXT,
  coc_number TEXT,
  vat_number TEXT,
  commercial_register TEXT,
  secondary_email TEXT,
  domain TEXT,
  seller_slug TEXT,
  source TEXT
);
```

Step 2: Setup Web Unlocker on Bright Data
1. Go to your Bright Data dashboard.
2. Navigate to Proxies & Scraping → Web Unlocker.
3. Create a new zone, selecting Web Unlocker API under Scraping Solutions.
4. Whitelist your server IP if required.

Step 3: Generate API Key
1. In the Bright Data dashboard, go to the API section.
2. Generate a new API key.
3. In n8n, create HTTP Request credentials using Bearer Authentication with the API key.

Step 4: Install the Bright Data Node in n8n
1. In your n8n self-hosted instance, go to Settings → Community Nodes.
2. Search for and install n8n-nodes-brightdata.

🔄 Workflow Functionality
🔁 Trigger: Can be set to run on a schedule (e.g., daily) or manually.
📥 Read: Fetches seller records from the Postgres table.
🌐 Search: Uses Bright Data to perform a Google search based on seller_name, company_info, or trade_name.
🧾 Extract: Parses the HTML content using the HTML Extract node to identify potential websites and email addresses.
📝 Update: Writes enriched data (like domain or secondary_email) back to the Postgres table.

💡 Use Cases
- Lead enrichment for e-commerce sellers
- Domain and contact info discovery for B2B databases
- Email and web domain verification for CRM systems
- Market research automation

🛠️ Customization Tips
- You can enhance the parsing logic in the HTML Extract node to look for phone numbers, LinkedIn profiles, or social media links.
- Modify the search query logic to include additional parameters like location or industry for more refined results.
- Integrate additional APIs (e.g., Hunter.io, Clearbit) for email validation or social profile enrichment.
- Add filtering to skip entries that already have domain or secondary_email.
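The Extract step is essentially pattern matching over the unlocked search-result HTML. The snippet below is a minimal sketch of how that parsing could be done in an n8n Code node if you want more control than the HTML Extract node offers; the regexes and the excludedDomains list are illustrative assumptions, not the template's exact logic.

```javascript
// Sketch: pull candidate emails and domains out of raw HTML from the search results.
// The regexes and the excluded-domain list are illustrative assumptions.
const html = $json.data || '';

const emailPattern = /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g;
const urlPattern = /https?:\/\/([a-zA-Z0-9.-]+\.[a-zA-Z]{2,})/g;

// Domains that show up in search results but are never the seller's own site.
const excludedDomains = ['google.com', 'facebook.com', 'linkedin.com', 'amazon.com'];

const emails = [...new Set(html.match(emailPattern) || [])];

const domains = [...new Set(
  [...html.matchAll(urlPattern)]
    .map((m) => m[1].replace(/^www\./, ''))
    .filter((d) => !excludedDomains.some((x) => d.endsWith(x)))
)];

// Return one item per seller with the first plausible matches for the Update step.
return [{ json: { secondary_email: emails[0] || null, domain: domains[0] || null } }];
```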
by Jimleuk
This template attempts to replicate OpenAI's DeepResearch feature which, at time of writing, is only available to their pro subscribers.

> An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you. (Source)

Though the inner workings of DeepResearch have not been made public, it is presumed the feature relies on the ability to deep search the web, scrape web content and invoke reasoning models to generate reports. All of which n8n is really good at! Using this workflow, n8n users can enjoy a variation of the Deep Research experience for themselves and their teams at a fraction of the cost. Better yet, learn and customise this Deep Research template for their businesses and/or organisations.

Check out the generated reports here: https://jimleuk.notion.site/19486dd60c0c80da9cb7eb1468ea9afd?v=19486dd60c0c805c8e0c000ce8c87acf

How it works
- A form is used to first capture the user's research query and how deep they'd like the researcher to go.
- Once submitted, a blank Notion page is created which will later hold the final report, and the researcher gets to work.
- The user's query goes through a recursive series of web searches and web scraping to collect data on the research topic and generate partial learnings (see the sketch at the end of this template).
- Once complete, all learnings are combined and given to a reasoning LLM to generate the final report.
- The report is then written to the placeholder Notion page created earlier.

How to use
- Duplicate this Notion database template and make sure all Notion-related nodes point to it.
- Sign up for an APIFY.com API key for web search and scraping services.
- Ensure you have access to OpenAI's o3-mini model. Alternatively, switch this out for the o1 series.
- You must publish this workflow and ensure the form URL is publicly accessible.

On depth & breadth configuration
For more detailed reports, increase depth and breadth, but be warned: the workflow will take exponentially longer and cost more to complete. The recommended defaults are usually good enough.
- Depth=1 & Breadth=2 - will take about 5-10 mins.
- Depth=1 & Breadth=3 - will take about 15-20 mins.
- Depth=3 & Breadth=5 - will take about 2+ hours!

Customising this workflow
- I deliberately chose not to use AI-powered scrapers like Firecrawl as I felt these were quite costly and quotas would be quickly exhausted. However, feel free to switch to the web search and scraping services which suit your environment.
- Maybe you decide not to source the web at all and instead have data collection come from internal documents. This template gives you the freedom to change this.
- Experiment with different Reasoning/Thinking models such as DeepSeek and Google's Gemini 2.0.
- Finally, the LLM prompts could definitely be improved. Refine them to fit your use-case.

Credits
This template is largely based off the work by David Zhang (dzhng) and his open source implementation of Deep Research: https://github.com/dzhng/deep-research
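The recursive search loop is the heart of the template: breadth controls how many queries are spawned at each level, and depth controls how many levels of follow-up questions are explored. A minimal sketch of that recursion is shown below; generateSearchQueries, searchWeb, scrape and extractLearnings are hypothetical helpers standing in for the Apify and LLM nodes, not functions from the actual workflow.

```javascript
// Sketch of the depth/breadth recursion; the helper functions are hypothetical stand-ins
// for the Apify search/scrape calls and the LLM "partial learnings" step in the workflow.
async function deepResearch(query, depth, breadth, learnings = []) {
  // Generate `breadth` search queries for the current topic (an LLM call in the real flow).
  const queries = await generateSearchQueries(query, breadth);

  for (const q of queries) {
    const results = await searchWeb(q);            // web search (e.g. an Apify actor)
    const pages = await Promise.all(results.slice(0, 3).map((r) => scrape(r.url)));

    // Summarise scraped content into partial learnings plus a follow-up question.
    const { newLearnings, followUpQuestion } = await extractLearnings(q, pages);
    learnings.push(...newLearnings);

    // Recurse into the follow-up question until the depth budget is exhausted.
    if (depth > 1 && followUpQuestion) {
      await deepResearch(followUpQuestion, depth - 1, Math.ceil(breadth / 2), learnings);
    }
  }
  return learnings; // later combined and handed to the reasoning model for the final report
}
```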
by Vadym Nahornyi
How it works
Automatically sends Telegram notifications when any n8n workflow fails. Includes the workflow name, error message, and execution ID in the alert.

Setup
Complete setup instructions are included in the workflow's sticky note in 5 languages:
🇬🇧 English
🇪🇸 Español
🇩🇪 Deutsch
🇫🇷 Français
🇷🇺 Русский

Features
- Monitors all workflows 24/7
- Instant Telegram notifications
- Zero configuration needed
- Just add your bot token and chat ID

Important
⚠️ Keep this workflow active 24/7 to capture all errors.
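If you want to see what the alert amounts to under the hood, the sketch below shows an equivalent Telegram Bot API call built from an Error Trigger payload. It is an illustration only - the template uses the Telegram node rather than a raw HTTP call - and the BOT_TOKEN and CHAT_ID placeholders, as well as the payload field paths, are assumptions to verify against your own executions.

```javascript
// Sketch: send the same alert the workflow produces, via the Bot API directly.
// BOT_TOKEN and CHAT_ID are placeholders; field paths follow the Error Trigger output
// but should be treated as assumptions.
const { workflow, execution } = $json;

const text = [
  `⚠️ Workflow failed: ${workflow.name}`,
  `Error: ${execution.error?.message ?? 'unknown error'}`,
  `Execution ID: ${execution.id}`,
].join('\n');

await fetch(`https://api.telegram.org/bot${BOT_TOKEN}/sendMessage`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ chat_id: CHAT_ID, text }),
});

return [{ json: { sent: true } }];
```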
by Greypillar
How it works
• RSS feed monitors your blog for new posts automatically
• Extracts and cleans full article content from the blog post
• AI Chain (GPT-4o) transforms content into 5 platform-optimized formats (LinkedIn, Twitter, Instagram, Email, Video)
• Unsplash API suggests relevant images for each content piece
• Slack notification alerts the content team with a preview of all formats
• Airtable logs everything for content calendar tracking
• Optional auto-posting to LinkedIn and Twitter (disabled by default)
• Structured output parser ensures all 5 formats are generated correctly with proper character limits (see the output-shape sketch at the end of this template)

Set up steps
• Time to set up: 10-15 minutes
• Replace the RSS feed URL with your blog's feed (common formats: /feed, /rss, /feed.xml)
• Get the Slack channel ID for content team notifications
• Create an Airtable base with 14 columns (Original_Title, Original_URL, Published_Date, LinkedIn_Post, LinkedIn_Hashtags, Twitter_Thread, Twitter_Hashtags, Instagram_Caption, Instagram_Hashtags, Email_Subject, Email_Body, Video_Script, Suggested_Images, Status)
• Add credentials: OpenAI (GPT-4o), Unsplash API, Slack OAuth2, Airtable Token
• Replace placeholder IDs in the Slack and Airtable nodes
• Optional: Enable the LinkedIn/Twitter auto-posting nodes and add OAuth2 credentials

What you'll need
• OpenAI API - GPT-4o access for AI content repurposing
• Unsplash API - Free tier available for image suggestions
• Slack - Standard workspace for team notifications
• Airtable - Free plan works for content tracking
• Blog with RSS feed - WordPress, Ghost, Medium, Webflow all supported
• LinkedIn/Twitter OAuth2 (optional) - For the auto-posting feature

Who this is for
Content creators, marketing teams, and agencies that want to maximize content ROI by automatically repurposing blog posts into platform-specific content. Perfect for B2B companies publishing regular blog content who need a consistent multi-platform presence without manual reformatting.
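For reference, the structured output parser expects one object per blog post with a field for each downstream format. The shape below is an illustrative example aligned with the Airtable columns listed above, not the template's exact schema.

```javascript
// Illustrative output shape for the structured parser; field names mirror the
// Airtable columns above, but the exact schema in the template may differ.
const example = {
  linkedin_post: 'Long-form post, roughly 1,300 characters...',
  linkedin_hashtags: ['#B2BMarketing', '#ContentStrategy'],
  twitter_thread: ['Tweet 1 (hook, <280 chars)', 'Tweet 2...', 'Tweet 3 (CTA)'],
  twitter_hashtags: ['#marketing'],
  instagram_caption: 'Caption with a hook, line breaks and a CTA (<2,200 chars)',
  instagram_hashtags: ['#contentcreator', '#marketingtips'],
  email_subject: 'Subject line under ~60 characters',
  email_body: 'Plain-text newsletter body...',
  video_script: '60-90 second script with a hook, three points and a CTA',
};

return [{ json: example }];
```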
by Markhah
Overview
This workflow generates automated revenue and expense comparison reports from a structured Google Sheet. It enables users to compare financial data across the current period, last month, and last year, then uses an AI agent to analyze and summarize the results for business reporting.

1. Prerequisites
- A connected Google Sheets OAuth2 credential.
- A valid DeepSeek AI API key (or replace it with another Chat Model).
- A sub-workflow (child workflow) that handles the processing logic.
- Properly structured Google Sheets data (see below).

2. Required Google Sheet Structure
- Column headers must include at least: Date, Amount, Type.
- The Date column must be formatted as dd/MM/yyyy or dd-MM-yyyy.
- Entries should span multiple time periods (e.g., current month, last month, last year).

3. Setup Steps
- Import the workflow into your n8n instance.
- Connect your Google Sheets and DeepSeek API credentials.
- Update the Sheet ID and Tab Name (already embedded in the node: Get revenual from google sheet) and the custom sub-workflow ID (in the Call n8n Workflow Tool node).
- Optionally configure the chatbot webhook in the When chat message received node.

4. What the Workflow Does
- Accepts date inputs via an AI chat interface (ChatTrigger + AI Agent).
- Fetches raw transaction data from Google Sheets.
- Segments and pivots revenue by classification for the current period, last month, and last year.
- Aggregates totals and applies custom titles for comparison.
- Merges all summaries into a final unified JSON report.

5. Customization Options
- Replace DeepSeek with OpenAI or other LLMs.
- Change the date fields or cycle comparisons (e.g., quarterly, weekly).
- Add more AI analysis steps such as sentiment scoring or forecasting.
- Modify the pivot logic to suit specific KPI tags or labels.

6. Troubleshooting Tips
- If the Google Sheets fetch fails: ensure the document is shared with your n8n Google credential.
- If there are parsing errors: verify that all dates follow the expected format.
- The sub-workflow must be active and configured to accept the correct inputs (6 dates).

7. SEO Keywords
google sheets report, AI financial report, compare revenue by month, expense analysis automation, chatbot n8n report generator, n8n Google Sheet integration
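Because the sheet stores dates as dd/MM/yyyy or dd-MM-yyyy text, the comparison logic has to parse them before bucketing rows into the three periods. The snippet below is a minimal sketch of that parsing and bucketing as it could be written in an n8n Code node; it is illustrative and not the sub-workflow's exact implementation.

```javascript
// Sketch: parse dd/MM/yyyy or dd-MM-yyyy strings and bucket rows by reporting period.
// Illustrative only; the actual sub-workflow's logic may differ.
function parseDate(value) {
  const [day, month, year] = value.split(/[\/\-]/).map(Number);
  return new Date(year, month - 1, day);
}

const now = new Date();
const buckets = { current: [], lastMonth: [], lastYear: [] };
const sameMonth = (a, b) => a.getMonth() === b.getMonth() && a.getFullYear() === b.getFullYear();
const lastMonthRef = new Date(now.getFullYear(), now.getMonth() - 1, 1);

for (const item of $input.all()) {
  const d = parseDate(item.json.Date);
  if (sameMonth(d, now)) buckets.current.push(item.json);
  else if (sameMonth(d, lastMonthRef)) buckets.lastMonth.push(item.json);
  else if (d.getFullYear() === now.getFullYear() - 1) buckets.lastYear.push(item.json);
}

// Sum the Amount column per bucket for the comparison report.
const total = (rows) => rows.reduce((sum, r) => sum + Number(r.Amount || 0), 0);

return [{ json: {
  current_total: total(buckets.current),
  last_month_total: total(buckets.lastMonth),
  last_year_total: total(buckets.lastYear),
} }];
```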
by Uche Madu
🧩 What This Workflow Does
This workflow automates the process of identifying and enriching decision-maker contacts from a list of companies. By integrating with Apollo's APIs and Google Sheets, it streamlines lead generation, ensures data accuracy through human verification, and maintains an organized leads database.

📚 Use Case
Ideal for sales and marketing teams aiming to:
- Automate the discovery of key decision-makers (e.g., CEOs, CTOs).
- Enrich contact information with LinkedIn profiles, emails, and phone numbers.
- Maintain an up-to-date leads database with minimal manual intervention.
- Receive weekly summaries of newly verified leads.

🧪 Setup
1. Google Sheets Preparation:
   - Use the following pre-configured Google Sheet: Company Decision Maker Discovery Sheet.
   - This spreadsheet includes the necessary tabs and columns: Companies, Contacts, and Contacts (Verified).
   - It also contains a custom onEdit Apps Script function that automatically updates the Status column to Pending whenever the Domain field is modified (a sketch of such a function is shown at the end of this template).
   - To review or modify the script, navigate to Extensions > Apps Script within the Google Sheet.
2. Credentials Setup: Configure the following credentials in your n8n instance:
   - Google Sheets: To read from and write to the spreadsheet.
   - Slack: To send verification prompts and weekly reports.
   - Apollo: To access the Organization Search, Organization Enrichment, People Search, and Bulk People Enrichment APIs.
   - LLM Service (e.g., OpenAI): To generate company summaries and determine departments based on job titles.
3. Workflow Configuration:
   - Import the workflow into your n8n instance.
   - Update the nodes to reference the correct Google Sheet and Slack channel.
   - Ensure that the Apollo and LLM nodes have the appropriate API keys and configurations.
4. Testing the Workflow:
   - Add a new company entry in the Companies tab of the Google Sheet.
   - Verify that the workflow triggers automatically, processes the data, and updates the Contacts and Contacts (Verified) tabs accordingly.
   - Check Slack for any verification prompts and confirm that weekly reports are sent as scheduled.
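For reference, an onEdit trigger of the kind described above can be written in a few lines of Apps Script. The version below is a sketch under assumed column positions (Domain in column B, Status in column C of the Companies tab); the script bundled with the linked sheet may differ.

```javascript
// Sketch of an onEdit trigger that flags a row for re-processing when Domain changes.
// Assumed layout: "Companies" tab, Domain in column B (2), Status in column C (3).
function onEdit(e) {
  const range = e.range;
  const sheet = range.getSheet();

  // Only react to edits of the Domain column on the Companies tab, skipping the header row.
  if (sheet.getName() !== 'Companies') return;
  if (range.getColumn() !== 2 || range.getRow() === 1) return;

  // Mark the row as Pending so the workflow picks it up on its next run.
  sheet.getRange(range.getRow(), 3).setValue('Pending');
}
```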
by Bilel Aroua
ASMR AI Workflow

Who is this for?
Content creators, YouTube automation enthusiasts, and AI hobbyists looking to autonomously generate and publish unique, satisfying ASMR-style YouTube Shorts without manual effort.

What problem does this solve?
This workflow solves the creative bottleneck and time-consuming nature of daily content creation. It fully automates the entire production pipeline, from brainstorming trendy ideas to publishing a finished video, turning your n8n instance into a 24/7 content factory.

What this workflow does
1. Two-Stage AI Ideation & Planning: Uses an initial AI agent to brainstorm a short, viral ASMR concept based on current trends. A second "Planning" AI agent then takes this concept and expands it into a detailed, structured production plan, complete with a viral-optimized caption, hashtags, and descriptions for the environment and sound.
2. Multi-Modal Asset Generation:
   - **Video:** Feeds detailed scene prompts to the **ByteDance Seedance** text-to-video model (via Wavespeed AI) to generate high-quality video clips.
   - **Audio:** Simultaneously calls the **Fal AI** text-to-audio model to create custom, soothing ASMR sound effects that match the video's theme.
   - **Assembly:** Automatically sequences the video clips and sound into a single, cohesive final video file using an FFMPEG API call.
3. Closed-Loop Publishing & Logging:
   - **Logging:** Initially logs the new idea to a Google Sheet with a status of "In Progress".
   - **Publishing:** Automatically uploads the final, assembled video directly to your YouTube channel, setting the title and description from the AI's plan.
   - **Updating:** Finds the original row in the Google Sheet and updates its status to "Done", adding a direct link to the newly published YouTube video.
   - **Notifications:** Sends real-time alerts to Telegram and/or Gmail with the video title and link, confirming the successful publication.

Setup
Credentials: You will need to create credentials in your n8n instance for the following services:
- OpenAI API
- Wavespeed AI API (for Seedance)
- Fal AI API
- Google OAuth credential (enable YouTube Data API v3 and Google Sheets API in your Google Cloud Project)
- Telegram Bot credential (optional)
- Gmail OAuth credential

Configuration: This is an advanced workflow. The initial setup should take approximately 15-20 minutes.
- **Google Sheet:** Create a Google Sheet with these columns: idea, caption, production_status, youtube_url. Add the **Sheet ID** to the Google Sheets nodes in the workflow.
- **Node Configuration:** In the Telegram Notification node, enter your own Chat ID. In the Gmail Notification node, update the recipient email address.
- **Activate:** Once configured, save and set the workflow to "Active" to let it run on its schedule.

How to customize
- **Creative Direction:** To change the style or theme of the videos (e.g., from kinetic sand to soap cutting), simply edit the systemMessage in the **"2. Enrich Idea into Plan"** and **"Prompts AI Agent"** nodes.
- **Initial Ideas:** To influence the AI's starting concepts, modify the prompt in the **"1. Generate Trendy Idea"** node.
- **Video & Sound:** To change the video duration or sound style, adjust the parameters in the **"Create Clips"** and **"Create Sounds"** nodes.
- **Notifications:** Add or remove notification channels (like Slack or Discord) after the **"Upload to YouTube"** node.
by Oneclick AI Squad
This workflow monitors brand mentions across multiple platforms (Twitter/X, Reddit, News) and automatically detects reputation crises based on sentiment analysis and trend detection.

How it works
- Multi-platform monitoring: Every 10 minutes, scans Twitter/X, Reddit, and news sites for brand mentions
- Data normalization: Converts all platform data into a unified format
- Smart filtering: Removes duplicates and already-analyzed mentions
- AI sentiment analysis: Analyzes each mention for sentiment score (positive/negative/neutral), amplification factors (engagement, verified accounts, news sources), and crisis-level phrases and keywords
- Trend detection: Compares current sentiment to a 24-hour baseline, detecting sharp sentiment drops, identifying negative mention spikes, and calculating impact surge
- Crisis classification: Assigns severity (CRITICAL/HIGH/MEDIUM/LOW); a sketch of this logic is shown at the end of this template
- Automated response: For crises, triggers immediate alerts: an executive crisis brief with action plan, Slack alerts to the crisis team, email to leadership and the PR team, JIRA ticket creation, and crisis event logging

Setup steps
1. Connect platforms:
   - Twitter/X: Add OAuth credentials to the "Monitor Twitter/X" node
   - Reddit: Add OAuth credentials to the "Monitor Reddit" node
   - News API: Get an API key from newsapi.org and add it to the "Monitor News & Blogs" node
2. Configure brand monitoring:
   - Update the brand name and handles in the search queries
   - Add additional platforms if needed (LinkedIn, Facebook, Instagram)
3. Set up alerting:
   - Slack: Add credentials and update channel names
   - Email: Add SMTP settings and recipient lists
   - JIRA: Add credentials and project ID
4. Adjust sensitivity:
   - Modify the sentiment keyword dictionaries in "AI Sentiment Analysis Engine"
   - Adjust crisis threshold scores
   - Customize amplification factors
5. Test thoroughly:
   - Run a manual execution with test data
   - Verify alert routing and content
   - Test false positive handling
6. Activate: Enable for continuous 24/7 monitoring

Key Features
- **Multi-platform monitoring** (every 10 minutes): Twitter/X, Reddit, and news sites
- **Data normalization** that converts different platform formats into a unified structure
- **AI sentiment analysis** engine that evaluates sentiment keywords (critical, severe, moderate, mild negative/positive), amplification factors (engagement, verified accounts, follower counts), and impact scoring based on reach and influence
- **Baseline trend detection** that tracks 24-hour sentiment history and detects sharp sentiment drops (15+ points = crisis), negative mention spikes (30%+ increase), and impact surges
- **Automated crisis response workflow**: aggregates crisis mentions into an executive brief, generates a detailed action plan based on severity, sends Slack alerts to the crisis team, emails leadership with a comprehensive brief, creates a JIRA ticket for tracking, and logs all events for analysis
- **Two-path routing**: Crisis-level events trigger the full response workflow, while routine mentions are logged for trend analysis
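The classification step compares the current window against the 24-hour baseline described above. The snippet below is a minimal sketch of that decision logic; the 15-point drop and 30% spike figures come from the description, while the CRITICAL/HIGH/MEDIUM/LOW cut-offs and the input field names are illustrative assumptions.

```javascript
// Sketch of the crisis-classification step. Only the 15-point drop and 30% spike
// figures come from the workflow description; severity cut-offs and field names
// are assumptions.
function classifyCrisis(current, baseline) {
  const sentimentDrop = baseline.avgSentiment - current.avgSentiment;        // points
  const negativeSpike =
    (current.negativeMentions - baseline.negativeMentions) /
    Math.max(baseline.negativeMentions, 1);                                   // ratio
  const impactSurge = current.impactScore / Math.max(baseline.impactScore, 1);

  let severity = 'LOW';
  if (sentimentDrop >= 15 || negativeSpike >= 0.3) severity = 'MEDIUM';
  if (sentimentDrop >= 15 && negativeSpike >= 0.3) severity = 'HIGH';
  if (severity === 'HIGH' && impactSurge >= 2) severity = 'CRITICAL';

  return { severity, isCrisis: severity !== 'LOW', sentimentDrop, negativeSpike, impactSurge };
}

// Example: a sharp drop plus a mention spike with amplified reach escalates to CRITICAL;
// routine mentions fall through as LOW and are only logged for trend analysis.
const result = classifyCrisis(
  { avgSentiment: 32, negativeMentions: 40, impactScore: 900 },
  { avgSentiment: 55, negativeMentions: 25, impactScore: 300 }
);
return [{ json: result }];
```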
by Luan Correia
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This comprehensive RAG workflow enables your AI agents to answer user questions with contextual knowledge pulled from your own documents — using metadata-rich embeddings stored in Supabase.

🔧 Key Features:
- RAG Agents powered by GPT-4.5 or GPT-3.5 via OpenRouter or OpenAI.
- Supabase Vector Store to store and retrieve document embeddings.
- Cohere Reranker to improve response relevance and quality.
- Metadata Agent to enrich vectorized data before ingestion.
- PDF Extraction Flow to automatically parse and upload documents with metadata.

✅ Setup Steps:
1. Connect your Supabase Vector Store.
2. Use OpenAI Embeddings (e.g. text-embedding-3-small).
3. Add API keys for OpenAI and/or OpenRouter.
4. Connect a reranker like Cohere.
5. Process documents with metadata before embedding.
6. Start chatting — your AI agent now returns context-rich answers from your own knowledge base!

Perfect for building AI assistants that can reason, search and answer based on internal company data, academic papers, support docs, or personal notes.
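To make the "metadata-rich embeddings" idea concrete, the example below shows the kind of document chunk the Metadata Agent might produce before ingestion into the Supabase vector store. The field names are illustrative assumptions, not the template's exact schema.

```javascript
// Illustrative shape of a chunk enriched with metadata before embedding and upsert.
// Field names are assumptions; adapt them to whatever the Metadata Agent produces.
const enrichedChunk = {
  pageContent: 'Refunds are processed within 14 days of the return request...',
  metadata: {
    source: 'refund-policy.pdf',    // original document the chunk came from
    page: 3,
    doc_type: 'support_doc',        // can be used later to filter retrieval by category
    department: 'customer_service',
    ingested_at: new Date().toISOString(),
  },
};

// Metadata like doc_type or department can then act as a retrieval filter,
// so the reranker only sees candidates from the relevant document set.
return [{ json: enrichedChunk }];
```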
by Davide
How It Works
This workflow creates an AI-powered Telegram chatbot with session management, allowing users to:
- **Start new conversations** (/new).
- **Check current sessions** (/current).
- **Resume past sessions** (/resume).
- **Get summaries** (/summary).
- **Ask questions** (/question).

Key Components:
- **Session Management**: Uses Google Sheets to track active/expired sessions (storing SESSION IDs and STATE). /new creates a session; /resume reactivates past ones.
- **AI Processing**: OpenAI GPT-4 generates responses with contextual memory (via the Simple Memory node). Summarization condenses past conversations when requested.
- **Data Logging**: All interactions (prompts/responses) are saved to Google Sheets for audit and retrieval.
- **User Interaction**: Telegram commands trigger specific actions (e.g., /question [query] fetches answers from session history); a sketch of this command routing is shown at the end of this template.

Main Advantages
1. Multi-session Handling
Each user can create, manage, and switch between multiple sessions independently, perfect for organizing different conversations without confusion.
2. Persistent Memory
Conversations are stored in Google Sheets, ensuring that chat history and session states are preserved even if the server or n8n instance restarts.
3. Commands for Full Control
With intuitive commands like /new, /current, /resume, /summary, and /question, users can manage sessions easily without needing a web interface.
4. Smart Summarization and Q&A
Thanks to OpenAI models, the workflow can summarize entire conversations or answer specific questions about past discussions, saving time and improving the chatbot's usability.
5. Easy Setup and Scalability
By using Google Sheets instead of a database, the workflow is easy to clone, modify, and deploy — ideal for quick prototyping or lightweight production uses.
6. Modular and Extensible
Each logical block (new session, get current session, resume, summarize, ask question) is modular, making it easy to extend the workflow with additional features like analytics, personalized greetings, or integrations with CRM systems.

Setup Steps
Prerequisites:
- **Telegram Bot Token**: Create via BotFather.
- **Google Sheets**: Duplicate this template. Two sheets: Session (active/inactive sessions) and Database (chat logs).
- **OpenAI API Key**: For GPT-4 responses.

Configuration:
1. Telegram Integration: Add your bot token to the Telegram Trigger and Telegram Send nodes.
2. Google Sheets Setup: Authenticate the Google Sheets nodes with OAuth. Ensure sheet names (Session, Database) and column mappings match the template.
3. OpenAI & Memory: Add your API key to the OpenAI Chat Model nodes. Adjust contextWindowLength in the Simple Memory node for conversation history length.
4. Testing: Use Telegram commands to test:
   - /new: Starts a session.
   - /question [query]: Tests AI responses.
   - /summary: Checks summarization.
5. Deployment: Activate the workflow; the bot will respond to Telegram messages in real-time.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
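Routing the Telegram commands to the right branch is essentially a prefix match on the incoming message text. The snippet below is a minimal sketch of how such routing could look in an n8n Code node feeding a Switch node; the message field path and the returned command/argument names are illustrative assumptions, and the template may route directly on the raw text instead.

```javascript
// Sketch: classify an incoming Telegram update into a command plus its argument.
// The message path and output field names are assumptions.
const text = ($json.message?.text || '').trim();

const known = ['/new', '/current', '/resume', '/summary', '/question'];
const [first, ...rest] = text.split(' ');

const command = known.includes(first) ? first : 'chat';   // plain messages go to the AI agent
const argument = rest.join(' ');                           // e.g. the query after /question

return [{ json: { command, argument, chatId: $json.message?.chat?.id } }];
```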
by Sina
👔 Who is this for?
- Entrepreneurs and startup founders preparing for investors
- Business consultants drafting complete client plans
- Strategy teams building long-term business models
- Accelerators, incubators, or pitch trainers

❓ What problem does this workflow solve?
Writing a full business plan takes days of work, multiple tools, and often gets stuck in messy docs or slides. This template automates every major section, generating a clean, detailed, and professional business plan with AI in just minutes.

⚙️ What this workflow does
- Starts with a chat message asking for your business idea or startup concept
- Passes the idea through 83 intelligent agents, each handling a full business plan chapter:
  - Executive Summary
  - Problem & Solution
  - Product Description
  - Market Research
  - Competitor Analysis
  - Business Model
  - Marketing Strategy (includes guerrilla ideas)
  - Operational Plan
  - Financial Plan
  - Team & Advisors
  - Roadmap
  - Conclusion & Next Steps
- Each section uses tailored prompts and business logic
- Combines all outputs into a structured, professional Markdown file
- Final result: a ready-to-export business plan in seconds

🛠️ Setup
1. Import this template into your n8n instance
2. Replace the "LLM Chat Model" node with your preferred model (Ollama, GPT-4, etc.)
3. Start from the chat input node — describe your startup or idea
4. Wait for all agents to finish
5. Download the final business plan file

🤖 LLM Flexibility (Choose Your Model)
Supports:
- OpenAI (GPT-4 / GPT-3.5)
- Ollama (LLaMA 3.1, Mistral, DeepSeek, etc.)
- Any compatible n8n chat model
To change the model, just replace the "Language Model" node — no other updates required

📌 Notes
- All nodes are clearly named by function (e.g., "Market Research Generator")
- Sticky notes are included for clarity
- Generates high-quality plans suitable for VCs or accelerators
- Modular: you can turn off or reorder any chapter

📩 Need help?
Email: sinamirshafiee@gmail.com
Happy to support setup, LLM switching, or custom section development.