by Yaron Been
Automate expense reviews with AI-powered, CFO-level analysis. This workflow monitors Airtable expense submissions, uses GPT-4 to analyze each expense like an experienced CFO, flags suspicious expenses with detailed reasoning, and maintains a comprehensive audit trail in a Pinecone vector database.

🚀 What It Does

- **Smart Monitoring**: Watches Airtable for new expense submissions
- **AI CFO Analysis**: GPT-4 applies financial expertise to review amounts, categories, and descriptions
- **Intelligent Flagging**: Automatically identifies policy violations and suspicious patterns
- **Audit Trail**: Stores all decisions in Pinecone for compliance and searchability
- **Auto Updates**: Updates Airtable records with AI decisions and detailed reasoning

🎯 Perfect For

- Finance teams needing intelligent expense oversight
- CFOs wanting to automate expense policy enforcement
- Growing companies scaling expense management
- Businesses requiring compliance documentation

⚙️ Key Benefits

✅ 99% faster expense processing vs. manual review
✅ CFO-level intelligence applied to every expense
✅ Complete audit trail for compliance
✅ Real-time fraud detection and policy enforcement
✅ Detailed explanations for every decision

🔧 What You Need

- Airtable base with expense data (template included)
- OpenAI API for GPT-4 analysis
- Pinecone account for audit trail storage
- Basic expense submission process

📊 Sample Results

Input: $4,500 business class flight to Tokyo
AI Decision: "Flagged - Amount exceeds typical travel thresholds. Requires verification against travel policies and client justification for premium travel."

🛠️ Setup & Support

Quick Setup: Deploy in 60 minutes with included templates and documentation
YouTube: https://www.youtube.com/@YaronBeen/videos

💼 Expert Support

LinkedIn: https://www.linkedin.com/in/yaronbeen/

📧 Direct Help

Email: Yaron@nofluff.online

Transform expense management from manual bottleneck to intelligent automation. Let AI handle policy compliance while your finance team focuses on strategy.
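For reference, the CFO-style analysis usually comes down to how each expense record is turned into a prompt before the OpenAI node. Below is a minimal sketch of an n8n Code node that builds such a prompt; the Airtable field names (Amount, Category, Description, Employee) are assumptions for illustration, so adjust them to the columns in your base.

```javascript
// Minimal sketch: build a CFO-review prompt from an Airtable expense record.
// Field names (Amount, Category, Description, Employee) are assumed; adjust to your base.
const expense = $input.first().json;

const prompt = `You are an experienced CFO reviewing an expense submission.
Amount: $${expense.Amount}
Category: ${expense.Category}
Description: ${expense.Description}
Submitted by: ${expense.Employee}

Decide whether to approve or flag this expense. Explain your reasoning,
citing any policy thresholds or suspicious patterns you notice.`;

return [{ json: { prompt, recordId: expense.id } }];
```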
by InfraNodus
Analyze and explore your ZenDesk support requests using an AI-powered knowledge graph.

This template helps you create an interactive InfraNodus knowledge graph for your ZenDesk tickets using any search criteria (e.g. after a certain date, specific status, sender, keyword) that will automatically be sent to a selected Slack channel. Here's an example of the InfraNodus graph that shows the main topics and gaps in ZenDesk support tickets:

You can use the workflow to:

- Get an instant overview of the main topics your customers are talking about
- Generate business and product ideas based on the blind spots identified using the InfraNodus AI
- See which topics correlate with negative / positive sentiment, revealing the weak and strong sides of your product and support
- Receive daily notifications on the main topics your customers are talking about via Slack / Telegram / Email and other channels
- Perform a detailed search using a password-protected web form for tickets filtered by a certain date, status, tag, sender, or keyword
- Use the interactive graph to explore specific topics and concepts your customers are talking about — a great way to engage with their concerns in a non-linear way, bypassing the boring tabular interface
- Use the graph to explore the support requests by specific segments — e.g. status, priority, sentiment, tags, urgency
- Use the generated graph as an AI expert available to your AI agents in other n8n workflows via InfraNodus GraphRAG. For instance, you could connect your knowledge base to the support tickets graph and let the agent discover possible solutions to your customers' most typical problems. See a sample template here.

How it works

You can start this workflow manually, with a daily / weekly trigger, or via a password-protected web form where you can provide search requests. Once started, it will perform a ZenDesk ticket search with the default or your custom criteria. Then it will use the search results to generate an InfraNodus graph (or add the new data to an existing one), and — finally — use the InfraNodus AI endpoints to generate a topical summary and a product / business idea based on the blind spots identified. The results are delivered to a channel of your choice.

Here's a description step by step:

1. Start the workflow (manually or on schedule)
2. Assign values to variables (search criteria, graph name)
3. Perform the ZenDesk support ticket search
4. Convert the data received and submit it to InfraNodus to generate a knowledge graph (a conversion sketch is shown below)
5. Generate a topical summary with InfraNodus
6. Generate a business idea with InfraNodus (you can also change the setting to generate a question instead)
7. Send a notification via Slack / Telegram / Email or back to the web form

How to use

You need an InfraNodus API account and key to use this workflow. You also need a ZenDesk account. It takes about 5 minutes to set everything up.

1. Create an InfraNodus account. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
2. Add the authorization key to all the InfraNodus HTTP nodes in the template (Steps 3, 5, and 6).
3. Generate a ZenDesk authorization token following the instructions in n8n's ZenDesk node (Step 3).
4. Optionally: connect your Slack, Telegram, or Gmail account to receive automated notifications with the link to the graph once the workflow is ready (it takes about 30 seconds to run).
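For orientation, the "convert the data" step from the walkthrough above can be sketched as a small Code node that turns ZenDesk ticket items into plain-text statements before they are submitted to the graph. The field names (subject, description) follow the ZenDesk ticket API, but treat the exact shape as an assumption and check the actual output of your search node.

```javascript
// Minimal sketch: turn ZenDesk ticket items into plain text for the InfraNodus graph.
// Field names (subject, description) are assumptions based on the ZenDesk ticket API.
const tickets = $input.all().map(item => item.json);

const text = tickets
  .filter(t => t.subject || t.description)
  .map(t => `${t.subject || ''}. ${t.description || ''}`.trim())
  .join('\n\n');

return [{ json: { text } }];
```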
Run it using the form to experiment with the search criteria that work best for you (you can leave everything empty at first), then choose the parameters you like and activate the Daily Trigger node to receive executive summaries in a channel of your choice.

Open the graph in InfraNodus and use our customer feedback analysis guide to explore the graph and generate new insights.

Requirements

- An InfraNodus account and API key
- A ZenDesk API key
- (Optional) a Slack / Telegram / Gmail connection for notifications

FAQ

1. What are the best use cases to try?

I love to set the graph up to deliver me a daily visual briefing of what's happening in my support portal. It shows me the main topics and gaps and generates product ideas based on them. Great for keeping a pulse on the business.

I also really like generating a graph for the past week manually, using the form, and then exploring the graph in InfraNodus directly using the customer feedback analysis workflow to:

- discover the main topics my customers are talking about
- understand which topics have the most negative connotation for them (using the sentiment filter)
- discover support tickets that need more attention or that talk about topics I'm personally interested in, and engage with the client
- identify the gaps in your customers' discourse based on the blind spots — useful for generating ideas; see the graph below with a demo of how it works:

2. Why use the graph and not just an AI summary?

An AI summary will just give you generic results. You'll see what you already know. Using the graph helps you deconstruct the discourse and get a much more nuanced understanding of the main pain points and interests of your customers. The auto-generated InfraNodus summary and business ideas have a direct, explainable connection to the discourse, so you can always see where they are coming from and maintain focus on all the topics rather than only the most prominent ones. Additionally, having an interactive graph makes it possible to explore your customers' concerns in a more engaging way, finding the topics and concepts that are relevant to your interests or to your agents' expertise, and helping you find the conversations you'd otherwise have missed.

3. Is my customers' data safe?

Absolutely. InfraNodus' terms of use and privacy policy state that the customers' data and text graphs are not used in AI training and are not offered to any third parties. Its underlying API system uses the OpenAI API, which explicitly states that data is not used for training either. So all the customers' data is private and safe. As an extra precaution, you can always delete the graphs after you have analyzed them, in which case no trace of this data is left on the servers.

Customizing this workflow

Check out the complete setup guide for this workflow at https://support.noduslabs.com/hc/en-us/articles/20447530961308-Zendesk-Tickets-Summarization-Sentiment-Analysis-and-Slack-Integration-with-n8n-and-InfraNodus

For support with this template, please contact https://support.noduslabs.com

For more InfraNodus n8n workflows, please see our creators page: https://n8n.io/creators/infranodus/

To learn more about InfraNodus, GraphRAG, and knowledge graph analysis: https://infranodus.com
by ist00dent
This n8n template provides a simple yet powerful utility for validating whether a given string input is valid JSON. You can use it to pre-validate data received from external sources, ensure data integrity before further processing, or provide immediate feedback to users submitting JSON strings.

🔧 How it works

- **Webhook**: This node acts as the entry point for the workflow, listening for incoming POST requests. It expects a JSON body with a single property, jsonString: the string that you want to validate as JSON.
- **Code (JSON Validator)**: This node contains custom JavaScript code that attempts to parse the jsonString provided in the webhook body. If the jsonString can be successfully parsed, it is valid JSON and the node returns an item with valid: true. If parsing fails, it catches the error and returns an item with valid: false and the specific error message. This logic is applied to each item passed through the node, ensuring all inputs are validated. (A sketch of this node is shown after the Tips section.)
- **Respond to Webhook**: This node sends the validation result (either valid: true or valid: false with an error message) back to the service that initiated the webhook request.

👤 Who is it for?

This workflow is ideal for:

- Developers & Integrators: Pre-validate JSON payloads from external systems (APIs, webhooks) before processing them in your workflows, preventing errors.
- Data Engineers: Ensure the integrity of JSON data before storing it in databases or data lakes.
- API Builders: Offer a dedicated endpoint for clients to test their JSON strings for validity.
- Customer Support Teams: Quickly check user-provided JSON configurations for errors.
- Anyone handling JSON data: A quick and easy way to programmatically check JSON string correctness without writing custom code in every application.

📑 Data Structure

When you trigger the webhook, send a POST request with a JSON body structured as follows:

{ "jsonString": "{\"name\": \"n8n\", \"type\": \"workflow\"}" }

Example of an invalid JSON string (missing quotes around 'name'):

{ "jsonString": "{name: \"n8n\"}" }

The workflow will return a JSON response indicating validity. For a valid JSON string:

{ "valid": true }

For an invalid JSON string:

{ "valid": false, "error": "Unexpected token 'n', \"{name: \"n8n\"}\" is not valid JSON" }

⚙️ Setup Instructions

1. Import Workflow: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
2. Configure Webhook Path: Double-click the Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /validate-json).
3. Activate Workflow: Save and activate the workflow.

📝 Tips

This JSON validator workflow is a solid starting point. Consider these enhancements:

- Enhanced Error Feedback
  - Upgrade: Add a Set node after the Code node to format the error message into a more user-friendly string before responding.
  - Leverage: Make it easier for the caller to understand the issue.
- Logging Invalid Inputs
  - Upgrade: After the Code node, add an IF node to check if valid is false. If so, branch to a node that logs the invalid jsonString and error to a Google Sheet, database, or a logging service.
  - Leverage: Track common invalid inputs for debugging or improvement.
- Transforming Valid JSON
  - Upgrade: If the JSON is valid, you could add another Function node to parse the jsonString and then operate on the parsed JSON data directly within the workflow.
  - Leverage: Use this validator as the first step in a larger workflow that processes JSON data.
- Asynchronous Validation
  - Upgrade: For very large JSON strings or high-volume requests, consider using a separate queueing mechanism (e.g., RabbitMQ, SQS) and an asynchronous response pattern.
  - Leverage: Prevent webhook timeouts and improve system responsiveness.
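As a reference, a minimal sketch of the JSON Validator Code node described in "How it works" could look like this; it assumes the webhook body arrives under json.body on each item, which is the usual shape for n8n Webhook nodes.

```javascript
// Minimal sketch of the JSON Validator Code node described in "How it works".
// Assumes each incoming item carries the webhook body under json.body.
return $input.all().map(item => {
  const jsonString = item.json.body?.jsonString ?? item.json.jsonString;
  try {
    JSON.parse(jsonString);
    return { json: { valid: true } };
  } catch (error) {
    return { json: { valid: false, error: error.message } };
  }
});
```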
by Roninimous
This n8n workflow integrates Shopify order management with Telegram, allowing you to query open orders and order details directly through Telegram chat commands. It provides an interactive way to monitor your Shopify store orders using Telegram as an interface.

Key Features

- Telegram Trigger: Listens for messages and callback queries from your Telegram bot.
- Switch Node: Routes incoming Telegram messages to different flows based on message content: the /orders command fetches all open orders; callback queries starting with /order_ fetch details of a specific order.
- Shopify Get Orders: Retrieves all open orders from your Shopify store using your Shopify API credentials.
- Conditional Check (If Node): Determines whether there are any open orders and branches accordingly: if orders exist, it prepares an interactive Telegram message with a list of orders; if not, it sends a "No Order" message.
- Orders Code Node: Formats the list of open orders into a Telegram message with inline buttons. Each button corresponds to an order and sends callback data containing the order ID.
- Get Order Details: When a user selects an order button, the workflow extracts the order ID from the callback data, fetches detailed order information from Shopify, and formats the order items into a readable message.
- Send Messages to Telegram: Sends formatted messages back to Telegram: the list of open orders with clickable buttons, detailed information about a selected order, or a "No Order" notification if there are no open orders.

How It Works

1. A Telegram user sends /orders to the bot.
2. The workflow fetches open orders from Shopify and sends a message with buttons listing each order.
3. When a user clicks an order button, the workflow fetches and displays detailed information about that specific order in Telegram.
4. If there are no open orders, the bot replies accordingly.

Setup Instructions

1. Create a Telegram Bot: Use @BotFather on Telegram to create a bot and get the bot token.
2. Obtain Shopify API Credentials: Create a private app in your Shopify admin dashboard with permission to read orders. Obtain the API key and access token.
3. Configure n8n Credentials: Add your Telegram bot token as Telegram API credentials in n8n. Add your Shopify API credentials in n8n Shopify credentials.
4. Import the Workflow: Import this workflow into your n8n instance. Update the Telegram and Shopify credential nodes to use your credentials.
5. Set Webhook URLs: Ensure your Telegram bot webhook is set correctly to receive messages. n8n webhook URLs must be publicly accessible.
6. Test the Workflow: Send /orders to your Telegram bot to verify that it retrieves and lists open orders.

Customization Guidance

- Modify Commands: Update the Switch node to add more Telegram commands or change existing ones.
- Change Message Formats: Edit the Code nodes to customize how order lists and details appear.
- Expand Shopify Integration: Add nodes to handle other Shopify operations like updating orders, managing products, etc.
- Multi-User Support: Adapt the workflow to handle multiple Telegram chat IDs dynamically.

Security and Implementation Notes

The native Telegram node in n8n has limitations: it does not support sending dynamic inline keyboard arrays in JSON format, which is essential for displaying a variable number of buttons depending on how many orders are retrieved from Shopify. To overcome this, this workflow uses the HTTP Request node to call Telegram's API directly, allowing full flexibility to send dynamic inline keyboards as JSON objects (a sketch is shown at the end of this description).
(I will make an update once the Telegram node supports dynamic inline keyboards.)

**Security Considerations:**

- Always store your Telegram bot token securely in n8n credentials and never expose it directly in the HTTP Request node's URL or body. Use environment variables or n8n credentials to inject tokens safely.
- Be mindful of Telegram API rate limits and add error handling to your workflow.
- While using HTTP Request nodes increases flexibility, it also requires careful management of request payloads and authentication, as opposed to the built-in Telegram node, which abstracts much of this complexity away.

Benefits

- Quickly access Shopify order data without leaving Telegram.
- Interactive inline buttons improve the user experience.
- Automated, real-time integration between Shopify and Telegram.
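For reference, here is a minimal sketch of how the Orders Code node might assemble the dynamic inline keyboard payload that the HTTP Request node then posts to Telegram's sendMessage method. The Shopify field names (name, id, total_price) and the node names are assumptions for illustration; check them against the actual Get Orders output in your workflow.

```javascript
// Minimal sketch: build a Telegram sendMessage payload with one inline button per order.
// Shopify field names (name, id, total_price) are assumptions; verify against the Get Orders output.
const orders = $input.all().map(item => item.json);

const inline_keyboard = orders.map(order => ([{
  text: `${order.name} - ${order.total_price ?? ''}`,
  callback_data: `/order_${order.id}`,
}]));

// The HTTP Request node sends this JSON to Telegram's sendMessage endpoint
// (keep the bot token in credentials, not in the payload).
return [{
  json: {
    chat_id: $('Telegram Trigger').first().json.message.chat.id,
    text: 'Open orders:',
    reply_markup: { inline_keyboard },
  },
}];
```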
by Oneclick AI Squad
This automated n8n workflow checks daily class schedules, syncs upcoming classes to Google Calendar, and sends reminder notifications to students via email or SMS. Perfect for educational institutions that want to keep students informed about their daily classes and schedule changes.

What This Workflow Does:

- Automatically checks class schedules every day
- Identifies today's classes and upcoming sessions
- Syncs class information to Google Calendar
- Sends personalized reminders to enrolled students
- Tracks reminder delivery status and logs activities
- Handles both email and SMS notification preferences

Main Components

- **Daily Schedule Check** - Triggers daily to check class schedules
- **Read Class Schedule** - Retrieves today's class schedule from the database/Excel file
- **Filter Today's Classes** - Identifies classes happening today
- **Has Classes Today?** - Checks if there are any classes scheduled
- **Read Student Contacts** - Gets student contact information for enrolled classes
- **Sync to Google Calendar** - Creates/updates events in Google Calendar
- **Create Student Reminders** - Generates personalized reminder messages (see the sketch after the sample messages below)
- **Split Into Batches** - Processes reminders in manageable batches
- **Email or SMS?** - Routes based on student communication preferences
- **Prepare Email Reminders** - Creates email reminder content
- **Prepare SMS Reminders** - Creates SMS reminder content
- **Read Reminder Log** - Checks previous reminder history
- **Update Reminder Log** - Records sent reminders
- **Save Reminder Log** - Saves updated log data

Essential Prerequisites

- Class schedule database/Excel file with student enrollments
- Student contact database with email addresses and phone numbers
- Google Calendar API access and credentials
- SMTP server for email notifications
- SMS service provider (Twilio, etc.) for text reminders
- Reminder log file for tracking sent notifications

Required Data Files:

class_schedule.xlsx:
Class ID | Class Name | Date | Time | Duration | Instructor | Room | Students Enrolled | Status

student_contacts.xlsx:
Student ID | Name | Email | Phone | Preferred Contact | Program | Class IDs | Active Status

reminder_log.xlsx:
Log ID | Date | Student ID | Class ID | Contact Method | Status | Sent Time | Response

Key Features

- ⏰ **Daily Automation:** Runs automatically every day
- 📅 **Calendar Sync:** Syncs classes to Google Calendar
- 📧 **Smart Reminders:** Sends email or SMS based on preference
- 👥 **Batch Processing:** Handles multiple students efficiently
- 📊 **Activity Logging:** Tracks all reminder activities
- 🔄 **Duplicate Prevention:** Avoids sending multiple reminders
- 📱 **Multi-Channel:** Supports both email and SMS notifications

Quick Setup

1. Import the workflow JSON into n8n
2. Configure the daily trigger schedule
3. Set up the class schedule and student contact files
4. Connect Google Calendar API credentials
5. Configure the SMTP server for emails
6. Set up the SMS service provider (Twilio)
7. Test with sample class data
8. Activate the workflow

Parameters to Configure

- schedule_file_path: Path to the class schedule file
- contacts_file_path: Path to the student contacts file
- google_calendar_id: Google Calendar ID for syncing
- google_api_credentials: Google Calendar API credentials
- smtp_host: Email server settings
- smtp_user: Email username
- smtp_password: Email password
- sms_api_key: SMS service API key
- sms_phone_number: SMS sender phone number

Sample Reminder Messages

- **Email:** "Hi [Name], reminder: [Class Name] starts at [Time] in [Room]. See you there!"
- **SMS:** "[Name], your [Class Name] class starts at [Time] in [Room]. Don't miss it!"
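As an illustration, the Create Student Reminders step can be a small Code node that merges the class row with each enrolled student's contact row and fills the message templates above. The column names mirror those listed under Required Data Files and the referenced node name is assumed; treat the exact shapes as assumptions and adjust to your files.

```javascript
// Minimal sketch of the "Create Student Reminders" step.
// Column names follow the data files above; node name and fields are assumptions.
const cls = $('Filter Today\'s Classes').first().json;
const students = $input.all().map(item => item.json);

return students
  .filter(s => s['Active Status'] === 'Active')
  .map(s => ({
    json: {
      studentId: s['Student ID'],
      contactMethod: s['Preferred Contact'], // "Email" or "SMS"
      email: s.Email,
      phone: s.Phone,
      message: s['Preferred Contact'] === 'SMS'
        ? `${s.Name}, your ${cls['Class Name']} class starts at ${cls.Time} in ${cls.Room}. Don't miss it!`
        : `Hi ${s.Name}, reminder: ${cls['Class Name']} starts at ${cls.Time} in ${cls.Room}. See you there!`,
    },
  }));
```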
Use Cases

- Daily class reminders for students
- Schedule change notifications
- Exam and assignment deadline alerts
- Teacher absence notifications
- Room change announcements
by Don Jayamaha Jr
A short-term technical analysis agent for 15-minute candles on Binance Spot Market pairs. It calculates and interprets key trading indicators (RSI, MACD, BBANDS, ADX, SMA/EMA) and returns structured summaries, optimized for Telegram or downstream AI trading agents.

This tool is designed to be triggered by another workflow (such as the Binance SM Financial Analyst Tool or Binance Quant AI Agent) and is not intended for standalone use.

🔧 Key Features

- ⏱️ Uses 15-minute kline data (last 100 candles)
- 📈 Calculates: RSI, MACD, Bollinger Bands, SMA/EMA, ADX
- 🧠 Interprets numeric data using GPT-4.1-mini
- 📤 Outputs concise, formatted analysis like:
  • RSI: 72 → Overbought
  • MACD: Cross Up
  • BB: Expanding
  • ADX: 34 → Strong Trend

🧠 AI Agent Purpose

> You are a short-term analysis tool for spotting volatility, early breakouts, and scalping setups.

Used by higher-level agents to determine:

- Entry/exit precision
- Momentum shifts
- Scalping opportunities

⚙️ How it Works

1. Triggered externally by another workflow
2. Accepts input: { "message": "BTCUSDT", "sessionId": "123456789" }
3. Sends a POST request to the backend endpoint: https://treasurium.app.n8n.cloud/webhook/15m-indicators (a direct-call sketch appears at the end of this description)
4. Fetches the last 100 candles and calculates the indicators
5. Passes the data to GPT for interpretation
6. Returns a summary with indicator tags for human readability

🔗 Dependencies

This tool is triggered by:

- ✅ Binance SM Financial Analyst Tool
- ✅ Binance Spot Market Quant AI Agent

🚀 Setup Instructions

1. Import into your n8n instance
2. Make sure the /15m-indicators webhook is active and calculates indicators correctly
3. Connect your OpenAI GPT-4.1-mini credentials
4. Trigger from an upstream agent with a Binance symbol and session ID
5. Ensure all external calls (to Binance and the webhook) are working

🧪 Example Use Cases

| Use Case | Result |
| ------------------------------------- | --------------------------------------- |
| Short-term trade decision for ETHUSDT | Receives 15m signal indicators summary |
| Input from Financial Analyst Tool | Returns real-time volatility snapshot |
| Telegram bot asks for "DOGE update" | Returns momentum indicators in 15m view |

🎥 Watch Tutorial:

🧾 Licensing & Attribution

© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or resale permitted.

🔗 For support: Don Jayamaha – LinkedIn
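If you want to exercise the backend endpoint outside n8n while debugging, a minimal call might look like the sketch below. The request body mirrors the documented input format, but the webhook may instead expect a `{ "symbol": ... }` body like the 4h variant, and the response shape depends on your /15m-indicators workflow, so treat both as assumptions.

```javascript
// Minimal sketch: call the 15m indicators webhook directly for debugging.
// Body mirrors the documented input; the webhook may instead expect { symbol: 'BTCUSDT' }.
const response = await fetch('https://treasurium.app.n8n.cloud/webhook/15m-indicators', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'BTCUSDT', sessionId: '123456789' }),
});

const indicators = await response.json();
console.log(indicators); // e.g. RSI, MACD, BBANDS, ADX values for the last 100 candles
```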
by Don Jayamaha Jr
A medium-term trend analyzer for the Binance Spot Market that leverages core technical indicators across 4-hour candle data to provide human-readable swing-trade signals via AI.

🎥 Watch Tutorial:

🎯 What It Does

- Accepts a Binance trading pair (e.g., AVAXUSDT)
- Sends the symbol to an internal webhook for technical indicator calculation
- Computes 4h RSI, MACD, Bollinger Bands, SMA, EMA, ADX
- Returns structured, GPT-analyzed signals ready for Telegram delivery

🧠 AI Agent Details

- **Model:** GPT-4.1-mini (OpenAI Chat)
- **Agent Role:** Translates raw indicator values into sentiment-labeled signals
- **Memory:** Tracks session + symbol context for cleaner multi-turn logic

🔗 Required Backend Workflow

To calculate indicators, this tool depends on:

POST https://treasurium.app.n8n.cloud/webhook/4h-indicators
{ "symbol": "AVAXUSDT" }

It returns a JSON object with the latest 40 × 4h candle-based calculations (an illustrative indicator calculation appears at the end of this description).

📥 Input Format

{ "message": "AVAXUSDT", "sessionId": "telegram_chat_id" }

📊 Sample Output

🕓 4h Technical Signals – AVAXUSDT
• RSI: 64 → Slightly Bullish
• MACD: Bullish Cross above baseline
• BB: Upper band touch – volatility expanding
• EMA > SMA → Confirmed Upside Momentum
• ADX: 31 → Strengthening Trend

📚 Use Case Scenarios

| Use Case | Result |
| ----------------------------- | ---------------------------------------------------- |
| Swing trend confirmation | Uses 4h indicators to validate or reject setups |
| Breakout signal confluence | Helps assess if momentum is real or noise |
| Inputs to Quant AI or Analyst | Supports higher-frame trade recommendation synthesis |

🛠️ Setup Instructions

1. Import the JSON template into your n8n workspace.
2. Set your OpenAI API credentials for the GPT node.
3. Ensure the /webhook/4h-indicators backend tool is live and accessible.
4. Connect this to your Binance Financial Analyst Tool or master Quant AI orchestrator.

🤖 Parent Workflows That Use This Tool

- Binance SM Financial Analyst Tool
- Binance Spot Market Quant AI Agent

📎 Sticky Notes & Annotations

This workflow includes internal sticky notes describing:

- Node roles (GPT, webhook, memory)
- System behavior (reasoning agent logic)
- Telegram formatting guidance

🔐 Licensing & Attribution

© 2025 Treasurium Capital Limited Company. All architecture, prompt logic, and signal formatting are proprietary. Redistribution or rebranding is prohibited.

🔗 Connect with the creator: Don Jayamaha – LinkedIn
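As background on what the /4h-indicators backend is doing, here is a minimal sketch of fetching 4h klines from Binance's public REST API and computing a simple 14-period RSI from the closes. It is a simplified illustration (it uses a plain average rather than Wilder smoothing and omits MACD, BBANDS, SMA/EMA, and ADX), not the actual backend code.

```javascript
// Simplified sketch: fetch 4h candles from Binance and compute a basic 14-period RSI.
// Illustrative only; the real backend also computes MACD, BBANDS, SMA/EMA, ADX.
async function rsi4h(symbol, period = 14) {
  const url = `https://api.binance.com/api/v3/klines?symbol=${symbol}&interval=4h&limit=40`;
  const klines = await (await fetch(url)).json();
  const closes = klines.map(k => parseFloat(k[4])); // index 4 = close price

  let gains = 0, losses = 0;
  for (let i = closes.length - period; i < closes.length; i++) {
    const change = closes[i] - closes[i - 1];
    if (change >= 0) gains += change; else losses -= change;
  }
  const rs = losses === 0 ? Infinity : gains / losses;
  return 100 - 100 / (1 + rs);
}

rsi4h('AVAXUSDT').then(rsi => console.log(rsi.toFixed(1))); // e.g. 64 → slightly bullish
```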
by InfraNodus
Set up a chat with your documents without a complex vector store setup.

This template helps you

- **ingest** your PDF / text / MD documents into a knowledge graph
- use the graph as the knowledge base for your AI chatbots (and other workflows)
- visualize the main **topics** and **gaps** in your documents (good for observability and research)

The knowledge base is provided using InfraNodus GraphRAG, with the knowledge graphs offering high-quality responses without the need to set up complex RAG vector store workflows. The advantages of using GraphRAG instead of standard vector stores for knowledge are:

- **Easy and quick to set up and update** — no complex data import workflows needed
- A knowledge graph offers a holistic and interactive view of your knowledge base (accessible via our API or a web interface — also shareable)
- **Better retrieval of relations** between the document chunks = higher quality responses

How it works

This template uses the InfraNodus knowledge graph as a knowledge base for your n8n AI Agent node. The knowledge graph contains the documents you upload with this template from your Google Drive. When the user asks a question via the chat interface, the agent forwards this question to the InfraNodus knowledge graph, retrieves a response, a summary, and a list of matching statements (based on advanced GraphRAG), then delivers the final response back to the user.

Here's a description step by step:

Step 1: Upload your documents

1. Put the PDF / text / MD files you want to chat with into a folder on your Google Drive
2. Authorize access to that folder using the Google Drive node in the template
3. Add the InfraNodus API key to the InfraNodus Save to Graph HTTP node (an illustrative sketch of this call appears at the end of this description)
4. Optional: change the name of the graph you want to save the data to in the InfraNodus HTTP node (in the name field of the HTTP POST request)
5. Run the workflow to ingest all the files and save them into the graph
6. Optional: check the link provided in the Step 1 workflow description to see the visualization of your knowledge base. It will look something like this:

**Note:** you can replace the PDF to Text convertor node with a better-quality PDF convertor from ConvertAPI, which respects the original file layout and doesn't split text into small chunks.

Step 2: Chat with your documents

1. Deactivate the trigger in Step 1
2. Activate the chat trigger in Step 2
3. Add your InfraNodus API credentials to the Knowledge Base GraphRAG InfraNodus node
4. Optional: change the graph name in the Knowledge Base node to match the name you provided in Step 1 above
5. Run the chat and ask a question
6. Watch the magic

How to use

You need an InfraNodus GraphRAG API account and key to use this workflow.

1. Create an InfraNodus account
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes

Requirements

- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key
- A Google Drive OAuth access (follow the n8n instructions)
- Optional: ConvertAPI API key for better-quality PDF conversion

Customizing this workflow

You can customize this workflow by adding several experts to your AI agent. Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20174217658396-Using-InfraNodus-Knowledge-Graphs-as-Experts-for-AI-Chatbot-Agents-in-n8n

Also check out the video tutorial with a demo:

For support and feedback, please contact us at https://support.noduslabs.com

To learn more about InfraNodus: https://infranodus.com
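For orientation, the Save to Graph HTTP node boils down to a bearer-authenticated POST that sends the extracted document text plus a graph name to the InfraNodus API. The sketch below is only illustrative: the endpoint path and the body field names are placeholders, so copy the actual URL and fields from the HTTP node in the template.

```javascript
// Illustrative sketch of the "Save to Graph" HTTP call.
// ENDPOINT and the body fields are placeholders; use the values from the template's HTTP node.
const ENDPOINT = 'https://infranodus.com/api/...'; // placeholder, see the template
const API_KEY = 'YOUR_INFRANODUS_BEARER_TOKEN';

await fetch(ENDPOINT, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    name: 'my_documents_graph',           // graph name set in Step 1
    text: 'extracted document text here', // content produced by the PDF/text conversion step
  }),
});
```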
by Inga Kruger
GBP Exchange Rate Email Workflow

Sends an email with a table of GBP exchange rates for several currency values each day, when the user clicks the button to run it.

How it works:

1. Exchange rate data is fetched from an API
2. The data is converted into an HTML table (see the sketch below)
3. The table is sent inside an email
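A minimal sketch of the "converted into an HTML table" step is shown below. It assumes the API node returns an object with a rates map (e.g. { USD: 1.27, EUR: 1.17 }); adjust the field names to the actual response of whichever exchange rate API the workflow calls.

```javascript
// Minimal sketch: turn a GBP rates object into an HTML table for the email node.
// The shape { rates: { USD: ..., EUR: ... } } is an assumption about the API response.
const { rates = {} } = $input.first().json;

const rows = Object.entries(rates)
  .map(([currency, rate]) => `<tr><td>${currency}</td><td>${Number(rate).toFixed(4)}</td></tr>`)
  .join('');

const html = `
  <table border="1" cellpadding="4">
    <tr><th>Currency</th><th>1 GBP =</th></tr>
    ${rows}
  </table>`;

return [{ json: { html } }];
```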
by keisha kalra
Try It Out!

This n8n template creates a fully automated Instagram content schedule using AI and Google Sheets. It is perfect for content creators, marketing teams, or local businesses looking to organize and scale their social media posting.

How it works

The workflow starts by reading two sets of inputs from a Google Sheet:

- Your content strategy inputs (Pillar, Objective, Frequency, Format, Structure, Examples).
- A list of scraped blog posts with title, URL, and description (fetched from your website).

Blog posts are scraped using Apify and parsed to extract key fields, which are stored in a tab labeled "Input (blog month)". You can assign a preferred posting month for each blog (e.g. fall blog posts get tagged for September). The workflow then merges both inputs and extracts the relevant information that ChatGPT builds on.

AI Scheduling & Personalization

Once merged, the workflow loops through each content item and:

- Identifies whether the scheduled post falls on or near a holiday (like Mother's Day) and adjusts the content accordingly.
- Uses an attached reference tool to guide structure and tone, based on a library of post examples.
- Sends the content to an AI Agent (using GPT-4, but customizable) that generates:
  - A compelling Instagram caption
  - A visual description
  - Hashtags
  - Suggested post date, day, content pillar, and format (carousel, reel, image, etc.)

Output

All generated content, including captions, structure, dates, hashtags, and pillar, is exported into a tab titled "Output" in your Google Sheet. The final schedule is ready for manual review, editing, or publishing to social media.

How to use

- The workflow uses a manual trigger to start, but you can replace it with a Webhook, cron job, or form submission.
- Add/edit your content strategy in Google Sheets.

How to Set Up

Initial Input Tab: Define your content pillars and structure

- Create a tab named "Input" or "Strategy"
- Include these columns:
  - Pillar: e.g., Family images
  - Objective: e.g., Showcase images
  - Frequency: e.g., Bi-weekly
  - Content Form: e.g., Images, Reels
  - Structure: brief description of expected layout (e.g., carousel Q&A, singular photo)
  - Examples: prompts or questions to guide AI (e.g., Why do you think families should do a session?)

Input (blog month) Tab: Store scraped blog content

- Include these columns:
  - URL: direct link to blog post
  - Title: blog post title
  - Description: short summary of the post
  - Preferred Month: month you want it posted (e.g., August, September)
- This sheet is partially auto-filled by the workflow (except for Preferred Month)

Output Tab: Final scheduled content

- Include these columns:
  - Date: scheduled posting date (YYYY-MM-DD)
  - Day: day of the week
  - Pillar: content category assigned
  - Format: e.g., Images, Reels, Carousel
  - Description: visual summary
  - Caption: Instagram-ready caption
  - Hashtags: complete hashtag block

To use the Apify HTTP Request node (a request sketch appears at the end of this description):

1. Drag an HTTP Request node into your n8n workflow.
2. Set the Method and URL based on how you're using Apify:
   - Use POST if you want to run an actor live with dynamic input (e.g. scrape blog posts in real time).
   - Use GET if you want to retrieve results from a completed or static dataset run (faster and cheaper if you're reusing previous data).
3. Configure query or body parameters:
   - Include your Apify API token for authentication (e.g. token=YOUR_API_KEY)
   - For POST: include an input object with any required actor settings (e.g., the blog URL to scrape).
   - For GET: specify the dataset ID in the URL.
4. Test the node to ensure you're retrieving the blog titles, descriptions, and URLs as expected.

Requirements

- Apify account for scraping blog posts
- OpenAI key (e.g. GPT-4) or another model of your choice
- Google Sheets credentials

Example Use Cases

- A photographer repurposing blogs into Instagram carousels
- A nonprofit automatically generating seasonal posts
- A small team managing multi-pillar content across weeks or months

Need Help?

Join the n8n Discord or ask in the n8n Forum!

Happy Content Making! 📅✨
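For the GET variant, a minimal request equivalent to the HTTP Request node configuration might look like the sketch below. YOUR_DATASET_ID and YOUR_API_TOKEN are placeholders, and the returned item fields depend on the actor you ran, so treat the field names as assumptions.

```javascript
// Minimal sketch: fetch scraped blog posts from an Apify dataset run.
// YOUR_DATASET_ID and YOUR_API_TOKEN are placeholders for your own values.
const url = 'https://api.apify.com/v2/datasets/YOUR_DATASET_ID/items'
  + '?token=YOUR_API_TOKEN&format=json&clean=true';

const items = await (await fetch(url)).json();

// Keep only the fields the "Input (blog month)" tab expects (names depend on the actor output).
const posts = items.map(({ title, url: postUrl, description }) => ({ title, url: postUrl, description }));
console.log(posts);
```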
by Aymeric Besset
> 🛠️ Note: This workflow uses a custom Mastodon API request. Ensure your server supports bookmark access and that your access token has the right permissions. OAuth or token-based credentials must be configured.

🧑‍💼 Who is this for?

This workflow is ideal for digital researchers, social media users, and knowledge workers who want to automatically archive Mastodon bookmarks into their Raindrop.io collection for future reference and tagging.

🔧 What problem is this solving?

Mastodon users often bookmark posts they want to read or save for later, but there's no native integration to archive them outside the app. This workflow solves that by syncing bookmarked posts from Mastodon to Raindrop, making them more accessible, organized, and searchable long-term.

⚙️ What this workflow does

1. Triggers on a schedule (or manually).
2. Tracks the latest fetched min_id using workflow static data to avoid duplicates (a simplified sketch appears at the end of this description).
3. Sends an HTTP GET request to the Mastodon bookmarks API, using bearer token authentication.
4. Validates and processes the bookmarks if new entries exist.
5. Parses pagination metadata (e.g. min_id) from the response headers.
6. Splits the response array to handle individual bookmarks.
7. Filters out entries with missing data.
8. Saves each post to Raindrop.io, using its title and URL (the card URL is used if one exists).
9. Updates the min_id to remember where it left off.

🚀 Setup

1. Create a Mastodon access token with access to bookmarks.
2. Add a credential in n8n of type HTTP Bearer Auth with your token.
3. Create and connect a Raindrop OAuth2 credential.
4. Replace {VOTRE SERVEUR MASTODON} with your Mastodon server's base URL.
5. (Optional) Adjust the scheduling interval under the "Schedule Trigger" node.
6. Make sure the Raindrop collection ID is correct or leave it as the default (-1), which is the index of the `Unsorted` collection.

🧪 How to customize this workflow

- To save to a specific Raindrop collection, change the collectionId in both Raindrop nodes.
- You can extend the Code node to pull additional metadata like author, hashtags, or content excerpts.
- Add an Email or Slack node after Raindrop to notify you of saved bookmarks.
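For reference, the duplicate-avoidance logic around min_id can be sketched as a Code node like the one below. The Mastodon bookmarks endpoint (/api/v1/bookmarks) and n8n's $getWorkflowStaticData helper are real, but the Link-header parsing here is simplified and the MASTODON_TOKEN environment variable is an assumption; the template itself uses an HTTP Bearer Auth credential instead.

```javascript
// Simplified sketch of the min_id tracking used to avoid re-importing bookmarks.
// staticData persists between executions of an active workflow.
const staticData = $getWorkflowStaticData('global');
const baseUrl = 'https://{VOTRE SERVEUR MASTODON}'; // replace with your server's base URL

const url = `${baseUrl}/api/v1/bookmarks?limit=40`
  + (staticData.minId ? `&min_id=${staticData.minId}` : '');

const res = await fetch(url, {
  headers: { Authorization: `Bearer ${$env.MASTODON_TOKEN}` }, // token injection is an assumption
});
const bookmarks = await res.json();

// The Link header advertises the min_id to use next time (parsing simplified).
const match = (res.headers.get('link') || '').match(/min_id=(\d+)/);
if (match) staticData.minId = match[1];

return bookmarks.map(b => ({ json: b }));
```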
by Ranjan Dailata
Notice

Community nodes can only be installed on self-hosted instances of n8n.

Who is this for?

The Search Engine Intelligence Extractor is a powerful n8n automation that leverages Bright Data's MCP-based AI agents to simulate human-like searches across Google, Bing, and Yandex, and then distills clean, structured insights using Google Gemini.

This workflow is tailored for:

- SEO analysts researching competitors or market trends
- Market researchers needing real-time search visibility
- Journalists & content writers gathering contextual insights
- AI developers creating intelligent assistants
- Digital marketers tracking brand mentions or news

What problem is this workflow solving?

Traditional scraping of search engines is often blocked, cluttered, or filled with irrelevant information. Manually analyzing and cleaning this data for insight is time-consuming. This workflow solves the problem by:

- Simulating real user search behavior via a Bright Data MCP-based AI agent
- Performing multi-platform search (Google, Bing, Yandex) in one unified flow
- Extracting clean, human-readable results (stripping ads, navigation, etc.)
- Structuring the content using the Google Gemini LLM
- Automating delivery via Webhook or saving to disk

What this workflow does

1. Input Fields node (an example payload is shown at the end of this description):
   - Accepts the search query
   - Accepts an action, for example "Perform a google search". Replace the action with bing, yandex, etc. for other search providers
   - Accepts a Webhook notification URL
2. Bright Data MCP Agent Execution:
   - Triggers Bright Data's intelligent search agent
   - Handles search navigation, result loading, and pagination
3. Human Readable Data Extractor:
   - Cleanses HTML, removes ads, footers, and irrelevant links
   - Produces a readable narrative of the results
4. Final Output Handling:
   - Saves the processed response to disk
   - Sends the structured data to a Webhook for real-time use

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node

Setup

1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect with the MCP Client (STDIO) account using the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN into the Environments textbox above as API_TOKEN=<your-token>

How to customize this workflow to your needs

- Add Scheduled Execution: Add a Cron trigger to run this workflow on a set schedule (e.g., daily/weekly keyword tracking).
- Push Results to Custom Destinations: Connect the output to:
  - Google Sheets (for analytics or dashboards)
  - PostgreSQL or MySQL databases (for structured storage)
  - Notion or Airtable (for content pipelines)
  - Slack or Email (for alerting teams)
- Customize Webhook Notifications: Update the Webhook URL in the notification node to push processed results to external APIs, CRMs, or real-time dashboards.
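To make the Input Fields node concrete, here is an illustrative example of the three values it accepts. The field names (query, action, webhook_notification_url) are assumptions for the sketch; match them to the actual field names used in the template's node.

```javascript
// Illustrative values for the Input Fields node; field names are assumed, adjust to the template.
return [{
  json: {
    query: 'best open-source workflow automation tools 2025',
    action: 'Perform a google search', // or: 'Perform a bing search', 'Perform a yandex search'
    webhook_notification_url: 'https://example.com/webhooks/search-results',
  },
}];
```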