by Alex Hi no code
**Automate Instagram DMs with OpenAI GPT and ManyChat**

**How It Works**
Once connected, GPT automatically responds to messages from new contacts on Instagram.

**Who Is This For?**
This workflow is ideal for marketers, business owners, and content creators who want to automatically respond to Instagram direct messages using OpenAI GPT. By integrating ManyChat, you can manage conversations, nurture leads, and provide instant replies at scale.

**What This Workflow Does**
- **Captures** incoming Instagram DMs through ManyChat's integration.
- **Processes** messages with GPT to generate a relevant response.
- **Delivers** instant replies back to Instagram users, creating efficient, AI-driven communication.

**Setup**
1. Import the template: Copy the n8n workflow into your workspace.
2. OpenAI credentials: Add your OpenAI API key in n8n so GPT can generate responses.
3. ManyChat account: Create (or log in to) your ManyChat account.
4. Connect Instagram: Link your Instagram profile as a channel in ManyChat.
5. ManyChat custom field: Create a custom field for storing user input or conversation context.
6. Configure default reply: In ManyChat, set up the default Instagram reply flow to point to your n8n webhook.
7. Add external request: Create an external request step in ManyChat to send messages to n8n.
8. Test the flow: Send yourself a DM on Instagram to confirm the workflow triggers and GPT responds correctly.

**Instructions and links**
- Notion instruction
- Register in ManyChat
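If you want to see the webhook handling as code, below is a minimal sketch of an n8n Code node that reshapes the incoming ManyChat payload before it reaches the GPT node. The field names (`subscriber_id`, `last_input_text`) are assumptions for illustration; match them to whatever your ManyChat external request actually sends.

```javascript
// n8n Code node: shape the ManyChat webhook payload for the GPT node.
// Field names (subscriber_id, last_input_text) are hypothetical; adjust them
// to the actual payload your ManyChat external request sends.
const body = $input.first().json.body || {};

return [{
  json: {
    subscriberId: body.subscriber_id,          // used later to route the reply back
    userMessage: body.last_input_text || '',   // text GPT should answer
    prompt: `You are a helpful Instagram assistant. Reply briefly and in a friendly tone to: "${body.last_input_text || ''}"`,
  },
}];
```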
by explorium
**Explorium Prospects Search Chatbot Template**

Download the following JSON file and import it to a new n8n workflow: mcp_to_prospects_to_csv.json

**Overview**
This n8n workflow creates a chatbot that understands natural language requests for finding business prospects and automatically:
- Interprets your query using AI (Claude Sonnet 3.7)
- Converts it to proper Explorium API filters
- Validates the API request structure
- Fetches prospect data from Explorium
- Exports results as a downloadable CSV file

Perfect for sales teams, recruiters, and business development professionals who need to quickly find and export targeted prospect lists without learning complex API syntax.

**Key Features**
- **Natural Language Interface**: Simply describe who you're looking for in plain English
- **Smart Query Translation**: AI converts your request to valid API parameters
- **Built-in Validation**: Ensures API calls meet Explorium's requirements
- **Error Recovery**: Automatically retries with corrections if validation fails
- **Pagination Support**: Handles large result sets automatically
- **CSV Export**: Clean, formatted output ready for CRM import
- **Conversation Memory**: Maintains context for follow-up queries

**Example Queries**
The chatbot understands queries like:
- "Find marketing directors at SaaS companies in New York with 50-200 employees"
- "Get me CTOs from fintech startups in California"
- "Show me sales managers at healthcare companies with revenue over $10M"
- "Find engineers at Microsoft with 3-5 years experience"
- "Get customer service leads from e-commerce companies in Europe"

**Prerequisites**
Before setting up this workflow, ensure you have:
- n8n instance with chat interface enabled
- Anthropic API key for Claude
- Explorium API credentials (Bearer token) - Get explorium api key
- Basic understanding of n8n chat workflows

**Supported Filters**
The chatbot can search using these criteria:

Company filters:
- **Size**: 1-10, 11-50, 51-200, 201-500, 501-1000, 1001-5000, 5001-10000, 10001+ employees
- **Revenue**: Ranges from $0-500K up to $10T+
- **Age**: 0-3, 3-6, 6-10, 10-20, 20+ years
- **Location**: Countries, regions, cities
- **Industry**: Google categories, NAICS codes, LinkedIn categories
- **Name**: Specific company names

Prospect filters:
- **Job Level**: CXO, VP, Director, Manager, Senior, Entry, etc.
- **Department**: Sales, Marketing, Engineering, Finance, HR, etc.
- **Experience**: Total months and current role duration
- **Location**: Country and region codes
- **Contact Info**: Filter by email/phone availability

**Installation & Setup**

Step 1: Import the Workflow
- Copy the workflow JSON from the template
- In n8n: Workflows → Add Workflow → Import from File
- Paste the JSON and click Import

Step 2: Configure Anthropic Credentials
- Click on the Anthropic Chat Model1 node
- Under Credentials, click Create New
- Add your Anthropic API key
- Name: "Anthropic API"
- Save credentials

Step 3: Configure Explorium Credentials
You'll need to set up Explorium credentials in two places.

For MCP Client:
- Click on the MCP Client node
- Under Credentials, create new Header Auth
- Add your authentication header (usually Authorization: Bearer YOUR_TOKEN)
- Save credentials

For API Calls:
- Click on the Prospects API Call node
- Use the same Header Auth credentials created above
- Verify the API endpoint is correct

Step 4: Activate the Workflow
- Save the workflow
- Click the Active toggle to enable it
- The chat interface will now be available

Step 5: Access the Chat Interface
- Click on the When chat message received node
- Copy the webhook URL
- Access this URL in your browser to start chatting

**How It Works**

Workflow architecture:
1. Chat Trigger: Receives natural language queries from users
2. Memory Buffer: Maintains conversation context
3. AI Agent: Interprets queries and generates API parameters
4. Validation: Checks API structure against Explorium requirements
5. API Call: Fetches prospect data with pagination
6. Data Processing: Formats results for CSV export
7. File Conversion: Creates downloadable CSV file

Processing flow:

User Query → AI Interpretation → Validation → API Call → CSV Export
     ↑                               ↓
     └────── Error Correction Loop ←─┘

Validation rules. The workflow validates that (a minimal sketch of this check appears after the conversation flow example below):
- Filter keys are allowed by the Explorium API
- Values match expected formats (e.g., valid country codes)
- Range filters have proper gte/lte values
- No duplicate values in arrays
- Required structure is maintained

**Usage Guide**

Basic conversation flow:
1. Start with your query: "Find me VPs of Sales at software companies in the US"
2. The bot processes and responds: it generates API filters, validates the structure, fetches data, and returns a CSV download link
3. Refine if needed: "Can you also include directors and filter for companies with 100+ employees?"
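Here is a minimal sketch of how the validation step could look inside an n8n Code node. The allowed keys and size buckets below are illustrative only, not Explorium's full filter list; extend them to match the filters you actually use.

```javascript
// n8n Code node: lightweight validation of AI-generated filters before the API call.
// ALLOWED_KEYS and SIZE_BUCKETS are illustrative placeholders, not the full
// Explorium specification.
const ALLOWED_KEYS = ['company_size', 'company_revenue', 'company_age', 'job_level', 'job_department', 'country_code'];
const SIZE_BUCKETS = ['1-10', '11-50', '51-200', '201-500', '501-1000', '1001-5000', '5001-10000', '10001+'];

const filters = $input.first().json.filters || {};
const errors = [];

for (const [key, value] of Object.entries(filters)) {
  if (!ALLOWED_KEYS.includes(key)) errors.push(`Unsupported filter key: ${key}`);
  if (Array.isArray(value) && new Set(value).size !== value.length) {
    errors.push(`Duplicate values in ${key}`);
  }
}
if (Array.isArray(filters.company_size)) {
  const bad = filters.company_size.filter(v => !SIZE_BUCKETS.includes(v));
  if (bad.length) errors.push(`Invalid company_size values: ${bad.join(', ')}`);
}

// Pass errors back to the agent so it can retry with corrections.
return [{ json: { valid: errors.length === 0, errors, filters } }];
```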
**Query Tips**
- **Be specific**: Include job titles, departments, company details
- **Use standard terms**: "CTO" instead of "Chief Technology Officer"
- **Specify locations**: Use country names or standard codes
- **Include size/revenue**: Helps narrow results effectively

**Advanced Queries**
Combine multiple criteria: "Find engineering managers and senior engineers at B2B SaaS companies in New York and California with 50-500 employees and revenue over $5M who have been in their role for at least 1 year"

**Output Format**
The CSV file includes:
- Prospect ID
- Name (first, last, full)
- Location (country, region, city)
- LinkedIn profile
- Experience summary
- Skills and interests
- Company details
- Job information
- Business ID

**Troubleshooting**

Common issues:
- "Validation failed" errors: Check that your query uses supported filter values, ensure location names are spelled correctly, and verify company sizes/revenues match allowed ranges.
- No results returned: Broaden your search criteria, check if the company exists in Explorium's database, and verify filter combinations aren't too restrictive.
- Chat not responding: Ensure the workflow is activated, check all credentials are properly configured, and verify the webhook URL is accessible.
- Large result sets timing out: Try adding more specific filters, limit results by location or company size, or use the size parameter (max 10,000).

The bot provides clear feedback through error messages:
- **Invalid filters**: Shows which filters aren't supported
- **Value errors**: Lists correct options for each field
- **API failures**: Explains connection or authentication issues

**Performance Optimization**

Best practices:
- Start broad, then narrow: Begin with basic criteria and add filters
- Use business IDs when targeting specific companies
- Limit by contact info: Add has_email: true for actionable leads
- Batch by location: Process regions separately for large searches

API limits:
- Maximum 10,000 results per search
- Pagination handles up to 100 records per page
- Rate limits apply based on your Explorium subscription

**Customization Options**

Modify AI behavior: edit the AI Agent system message to:
- Change the response format
- Add custom filters
- Adjust interpretation logic
- Include additional instructions

Extend functionality: add nodes to:
- Send results via email
- Import directly to CRM
- Schedule recurring searches
- Create custom reports

Integration ideas:
- Connect to Slack for team queries
- Add to CRM workflows
- Create lead scoring systems
- Build automated outreach campaigns

**Security Considerations**
- API credentials are stored securely in n8n
- Chat sessions are isolated
- No prospect data is stored permanently
- CSV files are generated on-demand

**Support Resources**
For issues with:
- **n8n platform**: Check n8n documentation
- **Explorium API**: Contact Explorium support
- **Anthropic/Claude**: Refer to Anthropic docs
- **Workflow logic**: Review node configurations
by Daniel Shashko
This workflow automates daily or manual keyword rank tracking on Google Search for your target domain. Results are logged in Google Sheets and sent via email using Bright Data's SERP API.

**Requirements**
- n8n (local or cloud) with Google Sheets and Gmail nodes enabled
- Bright Data API credentials

**Main Use Cases**
- Track Google search rankings for multiple keywords and domains automatically
- Maintain historical rank logs in Google Sheets for SEO analysis
- Receive scheduled or on-demand HTML email reports with ranking summaries
- Customize or extend for advanced SEO monitoring and reporting

**How it works**
The workflow is divided into several logical steps:

1. Workflow triggers
   - **Manual:** Start by clicking 'Test workflow' in n8n.
   - **Scheduled:** Automatically triggers every 24 hours via Schedule Trigger.
2. Read keywords and target domains
   - Fetches keywords and domains from a specified Google Sheets document. The sheet must have columns: Keyword and Domain.
3. Transform keywords
   - Formats each keyword for URL querying (spaces become +, e.g., seo expert → seo+expert).
4. Batch processing
   - Processes keywords in batches so each is checked individually.
5. Get Google search results via Bright Data
   - Sends a request to Bright Data's SERP API for each keyword with location (default: US) and receives the raw HTML of the search results.
6. Parse and find ranking
   - Extracts all non-Google links from the HTML, searches for the target domain among the results, captures the rank (position), URL, and total number of results checked, and saves a timestamp.
7. Save results to Google Sheets
   - Appends the findings (keyword, domain, rank, found URL, check time) to a "Results" sheet for history.
8. Generate HTML report and send email
   - Builds an HTML table with current rankings and emails the formatted table to the specified recipient(s) with Gmail.

**Setup Steps**
- Google Sheets: Create a sheet named "Results", and another with Keyword and Domain columns. Update the document ID and sheet names in the workflow's config.
- Bright Data API: Acquire your Bright Data API token and enter it in the Authorization header of the 'Getting Ranks' HTTP Request node.
- Gmail: Connect your Gmail account via OAuth2 in n8n and set your destination email in the 'Sending Email Message' node.
- Location customization: Modify the gl= parameter in the SERP API URL to change country/location (e.g., gl=GB for the UK).

**Notes**
- This workflow is designed for n8n local or cloud environments with suitable connector credentials.
- Customize batch size, recipient list, or ranking extraction logic per your needs.
- Use sticky notes in n8n for further setup guidance and workflow tips.

With this workflow, you have an automated, repeatable process to monitor, log, and report Google search rankings for your domains—ideal for SEO, digital marketing, and reporting to clients or stakeholders.
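For reference, here is a minimal sketch of the "parse and find ranking" step as an n8n Code node. It assumes the raw SERP HTML arrives as `$json.data` and the target domain as `$json.Domain`; the regex-based link extraction is a simplification, not the template's exact code.

```javascript
// n8n Code node: find the target domain's position in Bright Data's SERP HTML.
// Field names (data, Domain, Keyword) are assumptions; align them with the
// actual HTTP Request and Google Sheets node outputs.
const html = $input.first().json.data || '';
const domain = $input.first().json.Domain || '';

// Rough link extraction: grab href targets and drop Google-internal URLs.
const links = [...html.matchAll(/href="(https?:\/\/[^"]+)"/g)]
  .map(m => m[1])
  .filter(url => !url.includes('google.'));

const position = links.findIndex(url => url.includes(domain));

return [{
  json: {
    keyword: $input.first().json.Keyword,
    domain,
    rank: position === -1 ? 'not found' : position + 1,
    foundUrl: position === -1 ? '' : links[position],
    resultsChecked: links.length,
    checkedAt: new Date().toISOString(),
  },
}];
```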
by LukaszB
**Crypto Price Alert – n8n Workflow**

A simple and effective crypto alert system for anyone who wants to stay up to date with coin price changes — without refreshing charts all day. This workflow checks the current price of your chosen cryptocurrency (via CoinGecko) and sends you an alert on Discord if it goes above or below your target range. It's lightweight, easy to set up, and runs on autopilot.

**What the Workflow Does**
- Checks the live price of a selected coin using the CoinGecko API.
- Compares it to the max/min prices you define manually.
- Decides if the price is too high or too low.
- Sends an alert message to Discord depending on the result.

**How It Works**
1. The flow is triggered manually or on a schedule (your choice).
2. It pulls the current price of the coin you set.
3. Compares that price with your min and max values.
4. Sends a "high" or "low" message to your Discord webhook.

**Setup Steps**
1. Enter your coin ID and price thresholds in the "Set Low and High" node.
2. Paste your Discord webhook URLs in the "Message High" and "Message Low" nodes.
3. Optional: Adjust the schedule trigger to run every X minutes/hours.
4. Run once manually to test — it takes under a minute.

Full instructions and config tips are in sticky notes inside the workflow.
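If you prefer to see the logic in one place, here is a minimal sketch that collapses the price check and threshold comparison into a single n8n Code node. The coin ID and limits are placeholders; the real template splits this across Set, HTTP Request, and IF nodes.

```javascript
// Minimal sketch: fetch the current price from CoinGecko and compare it to
// manually defined thresholds. COIN_ID and the limits are placeholders.
// Assumes your n8n version exposes this.helpers.httpRequest in the Code node.
const COIN_ID = 'bitcoin';
const MIN_PRICE = 55000;
const MAX_PRICE = 70000;

const res = await this.helpers.httpRequest({
  url: `https://api.coingecko.com/api/v3/simple/price?ids=${COIN_ID}&vs_currencies=usd`,
  json: true,
});
const price = res[COIN_ID].usd;

let alert = null;
if (price > MAX_PRICE) alert = `🚀 ${COIN_ID} is above your max: $${price}`;
if (price < MIN_PRICE) alert = `📉 ${COIN_ID} is below your min: $${price}`;

// Downstream nodes can branch on whether `alert` is set.
return [{ json: { price, alert } }];
```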
by Piotr Sobolewski
**How it works**
This advanced workflow transforms your long-form audio content (like podcast episodes or webinar recordings) into digestible, ready-to-use marketing assets. It's designed for podcasters, content creators, and marketers who want to maximize their content's reach.

It automatically:
- Takes a full transcript of your audio/video content as input.
- Generates a concise, comprehensive summary of the episode using advanced AI.
- Extracts a list of key topics and keywords from the transcript, perfect for SEO, tagging, and content categorization.
- Delivers the summary and keywords directly to your inbox or a connected tool for easy access.

Streamline your content repurposing pipeline and unlock new value from your audio and video assets with intelligent automation!

**Set up steps**
Setting up this powerful workflow typically takes around 20-30 minutes, as it involves multiple AI steps. You'll need to:
- Obtain API keys for your preferred AI service (e.g., OpenAI, Google AI).
- Have access to a method for generating transcripts from your audio/video (e.g., manually pasting, or using a separate transcription service like AssemblyAI, Whisper, etc.).
- Connect your preferred email service (e.g., Gmail) to receive the output.

All detailed setup instructions and specific configuration guidance are provided within the workflow itself using sticky notes.
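As an illustration of the two AI steps, here is a minimal sketch of a Code node that prepares the prompts for the summary and keyword extraction. The field name and prompt wording are assumptions, not the template's exact configuration.

```javascript
// Minimal sketch: build the prompts for the summary and keyword-extraction
// steps from the incoming transcript. The `transcript` field name and the
// prompt wording are illustrative assumptions.
const transcript = $input.first().json.transcript || '';

return [{
  json: {
    summaryPrompt:
      'Summarize this episode transcript in 5-8 sentences, keeping the key takeaways:\n\n' + transcript,
    keywordsPrompt:
      'Extract 10-15 SEO keywords and key topics from this transcript. ' +
      'Return them as a comma-separated list:\n\n' + transcript,
  },
}];
```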
by Vadym Nahornyi
This workflow automatically transcribes audio files, translates the content between languages, and generates natural-sounding speech from the translated text - all in one seamless process.

**Who's it for**
Content creators, educators, and businesses needing to make their audio content accessible across language barriers. Perfect for translating podcasts, voice messages, lectures, or any audio content while preserving the spoken format.

**How it works**
The workflow receives an audio file through a webhook, transcribes it using OpenAI's Whisper, translates and structures the text with GPT-4, generates new audio in the target language, and stores it in S3 for easy access. The entire process takes seconds and returns both the transcribed/translated text and a URL to the translated audio file.

**How to set up**
1. Configure OpenAI credentials - Add your OpenAI API key for Whisper transcription and GPT-4 translation
2. Set up AWS S3 - Create a bucket with public read permissions for audio storage
3. Update configuration - Replace 'YOUR-BUCKET-NAME' with your actual S3 bucket name
4. Activate webhook - Deploy and copy your webhook URL for receiving audio files

Send a POST request with:
- Binary audio file (as 'audiofile')
- Languages parameter (e.g., "English, Spanish")

**Requirements**
- OpenAI API account with access to Whisper and GPT-4
- AWS account with S3 bucket configured
- Basic understanding of webhooks and API requests

**How to customize**
- **Add language detection** - Automatically detect source language if not specified
- **Customize voice settings** - Adjust speech speed, pitch, or select different voices
- **Add file validation** - Implement size limits and format checks
- **Enhance security** - Add webhook authentication and rate limiting
- **Extend functionality** - Add subtitle generation or multiple output formats
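Here is a minimal client-side sketch of the POST request described above, written for Node 18+ as an ES module. The webhook URL is a placeholder and the 'languages' field name is an assumption; check the Webhook node's expected parameter names before using it.

```javascript
// Minimal client sketch (Node 18+, run as an ES module): send an audio file to
// the workflow's webhook. WEBHOOK_URL is a placeholder; 'languages' is an
// assumed parameter name, while 'audiofile' comes from the template description.
import { readFile } from 'node:fs/promises';

const WEBHOOK_URL = 'https://your-n8n-instance/webhook/translate-audio';

const form = new FormData();
form.append('audiofile', new Blob([await readFile('./message.mp3')]), 'message.mp3');
form.append('languages', 'English, Spanish'); // source and target languages

const res = await fetch(WEBHOOK_URL, { method: 'POST', body: form });
console.log(await res.json()); // expect translated text plus the new audio file URL
```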
by Incrementors
**🐦 Twitter Profile Scraper via Bright Data API with Google Sheets Output**

A comprehensive n8n automation that scrapes Twitter profile data using Bright Data's Twitter dataset and stores comprehensive tweet analytics, user metrics, and engagement data directly into Google Sheets.

**📋 Overview**
This workflow provides an automated Twitter data collection solution that extracts profile information and tweet data from specified Twitter accounts within custom date ranges. Perfect for social media analytics, competitor research, brand monitoring, and content strategy analysis.

**✨ Key Features**
- 🔗 Form-Based Input: Easy-to-use form for Twitter URL and date range selection
- 🐦 Twitter Integration: Uses Bright Data's Twitter dataset for accurate data extraction
- 📊 Comprehensive Data: Captures tweets, engagement metrics, and profile information
- 📈 Google Sheets Storage: Automatically stores all data in organized spreadsheet format
- 🔄 Progress Monitoring: Real-time status tracking with automatic retry mechanisms
- ⚡ Fast & Reliable: Professional scraping with built-in error handling
- 📅 Date Range Control: Flexible time period selection for targeted data collection
- 🎯 Customizable Fields: Advanced data field selection and mapping

**🎯 What This Workflow Does**

Input:
- **Twitter Profile URL**: Target Twitter account for data scraping
- **Date Range**: Start and end dates for tweet collection period
- **Custom Fields**: Configurable data points to extract

Processing:
1. Form Trigger: Collects Twitter URL and date range from user input
2. API Request: Sends scraping request to Bright Data with specified parameters
3. Progress Monitoring: Continuously checks scraping job status until completion
4. Data Retrieval: Downloads complete dataset when scraping is finished
5. Data Processing: Formats and structures extracted information
6. Sheet Integration: Automatically populates Google Sheets with organized data

Output data points:

| Field | Description | Example |
|-------|-------------|---------|
| user_posted | Username who posted the tweet | @elonmusk |
| name | Display name of the user | Elon Musk |
| description | Tweet content/text | "Exciting updates coming soon..." |
| date_posted | When the tweet was posted | 2025-01-15T10:30:00Z |
| likes | Number of likes on the tweet | 1,234 |
| reposts | Number of retweets | 567 |
| replies | Number of replies | 89 |
| views | Total view count | 12,345 |
| followers | User's follower count | 50M |
| following | Users they follow | 123 |
| is_verified | Verification status | true/false |
| hashtags | Hashtags used in tweet | #AI #Technology |
| photos | Image URLs in tweet | image1.jpg, image2.jpg |
| videos | Video content URLs | video1.mp4 |
| user_id | Unique user identifier | 12345678 |
| timestamp | Data extraction timestamp | 2025-01-15T11:00:00Z |

**🚀 Setup Instructions**

Prerequisites:
- n8n instance (self-hosted or cloud)
- Bright Data account with Twitter dataset access
- Google account with Sheets access
- Valid Twitter profile URLs to scrape
- 10-15 minutes for setup

Step 1: Import the Workflow
- Copy the JSON workflow code from the provided file
- In n8n: Workflows → + Add workflow → Import from JSON
- Paste the JSON and click Import

Step 2: Configure Bright Data
- Set up Bright Data credentials:
  - In n8n: Credentials → + Add credential → HTTP Header Auth
  - Enter your Bright Data API credentials
  - Test the connection
- Configure dataset:
  - Ensure you have access to the Twitter dataset (gd_lwxkxvnf1cynvib9co)
  - Verify dataset permissions in the Bright Data dashboard

Step 3: Configure Google Sheets Integration
- Create a Google Sheet:
  - Go to Google Sheets and create a new spreadsheet named "Twitter Data" or similar
  - Copy the Sheet ID from the URL: https://docs.google.com/spreadsheets/d/SHEET_ID_HERE/edit
- Set up Google Sheets credentials:
  - In n8n: Credentials → + Add credential → Google Sheets OAuth2 API
  - Complete the OAuth setup and test the connection
- Prepare your data sheet with columns:
  - Use the column headers from the data points table above
  - The workflow will automatically populate these fields

Step 4: Update Workflow Settings
- Update Bright Data nodes:
  - Open the "🚀 Trigger Twitter Scraping" node
  - Replace BRIGHT_DATA_API_KEY with your actual API token
  - Verify the dataset ID is correct
- Update the Google Sheets node:
  - Open the "📊 Store Twitter Data in Google Sheet" node
  - Replace YOUR_GOOGLE_SHEET_ID with your Sheet ID
  - Select your Google Sheets credential and choose the correct sheet/tab name

Step 5: Test & Activate
- Add test data: Use the form trigger to input a Twitter profile URL and set a small date range for testing (e.g., last 7 days)
- Test the workflow: Submit the form to trigger the workflow, monitor progress in the n8n execution logs, verify data appears in the Google Sheet, and check that all expected columns are populated

**📖 Usage Guide**

Running the workflow:
1. Access the workflow form trigger URL (available when the workflow is active)
2. Enter the Twitter profile URL you want to scrape
3. Set the start and end dates for tweet collection
4. Submit the form to initiate scraping
5. Monitor progress - the workflow will automatically check status every minute
6. Once complete, data will appear in your Google Sheet

Understanding the data. Your Google Sheet will show:
- **Real-time tweet data** for the specified date range
- **User engagement metrics** (likes, replies, retweets, views)
- **Profile information** (followers, following, verification status)
- **Content details** (hashtags, media URLs, quoted tweets)
- **Timestamps** for each tweet and data extraction

Customizing date ranges:
- **Recent data**: Use the last 7-30 days for current activity analysis
- **Historical analysis**: Select specific months or quarters for trend analysis
- **Event tracking**: Focus on specific date ranges around events or campaigns
- **Comparative studies**: Use consistent time periods across different profiles
**🔧 Customization Options**

Modifying data fields: edit the custom_output_fields array in the "🚀 Trigger Twitter Scraping" node to add or remove data points:

"custom_output_fields": [ "id", "user_posted", "name", "description", "date_posted", "likes", "reposts", "replies", "views", "hashtags", "followers", "is_verified" ]

Changing the Google Sheet structure: modify the column mapping in the "📊 Store Twitter Data in Google Sheet" node to match your preferred sheet layout and add custom formulas or calculations.

Adding multiple profiles: to process multiple Twitter profiles:
- Modify the form to accept multiple URLs
- Add a loop node to process each URL separately
- Implement delays between requests to respect rate limits

**🚨 Troubleshooting**

Common issues & solutions:
- "Bright Data connection failed": caused by invalid API credentials or dataset access. Solution: verify credentials in the Bright Data dashboard and check dataset permissions.
- "No data extracted": caused by invalid Twitter URLs or private/protected accounts. Solution: verify URLs are valid public Twitter profiles and test with different accounts.
- "Google Sheets permission denied": caused by incorrect credentials or sheet permissions. Solution: re-authenticate Google Sheets and check sheet sharing settings.
- "Workflow timeout": caused by large date ranges or high-volume accounts. Solution: use smaller date ranges and implement pagination for high-volume accounts.
- "Progress monitoring stuck": caused by a failed scraping job or API issues. Solution: check the Bright Data dashboard for job status and restart the workflow if needed.

Advanced troubleshooting:
- Check execution logs in n8n for detailed error messages
- Test individual nodes by running them separately
- Verify data formats and ensure consistent field mapping
- Monitor rate limits if scraping multiple profiles consecutively
- Add error handling and implement retry logic for robust operation

**📊 Use Cases & Examples**

1. Social media analytics (Goal: track engagement metrics and content performance)
   - Monitor tweet engagement rates over time
   - Analyze hashtag effectiveness and reach
   - Track follower growth and audience interaction
   - Generate weekly/monthly performance reports
2. Competitor research (Goal: monitor competitor social media activity)
   - Track competitor posting frequency and timing
   - Analyze competitor content themes and strategies
   - Monitor competitor engagement and audience response
   - Identify trending topics and hashtags in your industry
3. Brand monitoring (Goal: track brand mentions and sentiment analysis)
   - Monitor specific Twitter accounts for brand mentions
   - Track hashtag campaigns and user-generated content
   - Analyze sentiment trends and audience feedback
   - Identify influencers and brand advocates
4. Content strategy development (Goal: analyze successful content patterns)
   - Identify high-performing tweet formats and topics
   - Track optimal posting times and frequencies
   - Analyze hashtag performance and reach
   - Study audience engagement patterns
5. Market research (Goal: collect social media data for market analysis)
   - Gather consumer opinions and feedback
   - Track industry trends and discussions
   - Monitor product launches and market reactions
   - Support product development with social insights

**⚙ Advanced Configuration**

Batch processing multiple profiles. To monitor multiple Twitter accounts efficiently:
- Create a master sheet with profile URLs and date ranges
- Add a loop node to process each profile separately
- Implement delays between requests to respect rate limits
- Use separate sheets or tabs for different profiles

Adding data analysis. Enhance the workflow with analytical capabilities:
- Create additional sheets for processed data and insights
- Add formulas to calculate engagement rates and trends
- Implement data visualization with charts and graphs
- Generate automated reports and summaries

Integration with business tools. Connect the workflow to your existing systems:
- **CRM Integration**: Update customer records with social media data
- **Slack Notifications**: Send alerts when data collection is complete
- **Database Storage**: Store data in PostgreSQL/MySQL for advanced analysis
- **BI Tools**: Connect to Tableau/Power BI for comprehensive visualization

**📈 Performance & Limits**

Expected performance:
- **Single profile**: 30 seconds to 5 minutes (depending on date range)
- **Data accuracy**: 95%+ for public Twitter profiles
- **Success rate**: 90%+ for accessible accounts
- **Daily capacity**: 10-50 profiles (depends on rate limits and data volume)

Resource usage:
- **Memory**: ~200MB per execution
- **Storage**: Minimal (data stored in Google Sheets)
- **API calls**: 1 Bright Data call + multiple Google Sheets calls per profile
- **Bandwidth**: ~5-10MB per profile scraped
- **Execution time**: 2-10 minutes for typical date ranges

Scaling considerations:
- **Rate limiting**: Add delays for high-volume scraping
- **Error handling**: Implement retry logic for failed requests
- **Data validation**: Add checks for malformed or missing data
- **Monitoring**: Track success/failure rates over time
- **Cost optimization**: Monitor API usage to control costs

**🤝 Support & Community**

Getting help:
- **n8n Community Forum**: community.n8n.io
- **Documentation**: docs.n8n.io
- **Bright Data Support**: Contact through your dashboard
- **GitHub Issues**: Report bugs and feature requests

Contributing:
- Share improvements with the community
- Report issues and suggest enhancements
- Create variations for specific use cases
- Document best practices and lessons learned

**📋 Quick Setup Checklist**

Before you start:
- ☐ n8n instance running (self-hosted or cloud)
- ☐ Bright Data account with Twitter dataset access
- ☐ Google account with Sheets access
- ☐ Valid Twitter profile URLs ready for scraping
- ☐ 10-15 minutes available for setup

Setup steps:
- ☐ Import Workflow - Copy JSON and import to n8n
- ☐ Configure Bright Data - Set up API credentials and test
- ☐ Create Google Sheet - New sheet with proper column structure
- ☐ Set up Google Sheets credentials - OAuth setup and test
- ☐ Update workflow settings - Replace API keys and sheet IDs
- ☐ Test with sample data - Add 1 Twitter URL and a small date range
- ☐ Verify data flow - Check data appears in the Google Sheet correctly
- ☐ Activate workflow - Enable the form trigger for production use

Ready to use! 🎉 Your workflow URL: access the form trigger when the workflow is active.

🎯 Happy Twitter Scraping! This workflow provides a solid foundation for automated Twitter data collection. Customize it to fit your specific social media analytics and research needs.
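To complement the batch-processing suggestion above, here is a minimal sketch of a Code node that splits a multi-URL form field into one item per profile so a downstream loop can trigger Bright Data once per account. The input field names (profileUrls, startDate, endDate) are assumptions and should be matched to your form trigger.

```javascript
// Minimal sketch: fan one form submission with several profile URLs out into
// individual items for a loop. Field names are illustrative assumptions.
const { profileUrls = '', startDate, endDate } = $input.first().json;

const urls = profileUrls
  .split(/[\n,]+/)
  .map(u => u.trim())
  .filter(Boolean);

return urls.map((url, i) => ({
  json: {
    url,
    start_date: startDate,
    end_date: endDate,
    // Stagger requests to respect rate limits; pair this with a Wait node
    // inside the loop that reads delaySeconds.
    delaySeconds: i * 30,
  },
}));
```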
For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
by Oneclick AI Squad
This automated n8n workflow checks daily class schedules, syncs upcoming classes to Google Calendar, and sends reminder notifications to students via email or SMS. Perfect for educational institutions to keep students informed about their daily classes and schedule changes.

**What This Workflow Does**
- Automatically checks class schedules every day
- Identifies today's classes and upcoming sessions
- Syncs class information to Google Calendar
- Sends personalized reminders to enrolled students
- Tracks reminder delivery status and logs activities
- Handles both email and SMS notification preferences

**Main Components**
- **Daily Schedule Check** - Triggers daily to check class schedules
- **Read Class Schedule** - Retrieves today's class schedule from database/Excel
- **Filter Today's Classes** - Identifies classes happening today
- **Has Classes Today?** - Checks if there are any classes scheduled
- **Read Student Contacts** - Gets student contact information for enrolled classes
- **Sync to Google Calendar** - Creates/updates events in Google Calendar
- **Create Student Reminders** - Generates personalized reminder messages
- **Split Into Batches** - Processes reminders in manageable batches
- **Email or SMS?** - Routes based on student communication preferences
- **Prepare Email Reminders** - Creates email reminder content
- **Prepare SMS Reminders** - Creates SMS reminder content
- **Read Reminder Log** - Checks previous reminder history
- **Update Reminder Log** - Records sent reminders
- **Save Reminder Log** - Saves updated log data

**Essential Prerequisites**
- Class schedule database/Excel file with student enrollments
- Student contact database with email and phone numbers
- Google Calendar API access and credentials
- SMTP server for email notifications
- SMS service provider (Twilio, etc.) for text reminders
- Reminder log file for tracking sent notifications

**Required Data Files**
- class_schedule.xlsx: Class ID | Class Name | Date | Time | Duration | Instructor | Room | Students Enrolled | Status
- student_contacts.xlsx: Student ID | Name | Email | Phone | Preferred Contact | Program | Class IDs | Active Status
- reminder_log.xlsx: Log ID | Date | Student ID | Class ID | Contact Method | Status | Sent Time | Response

**Key Features**
- ⏰ **Daily Automation:** Runs automatically every day
- 📅 **Calendar Sync:** Syncs classes to Google Calendar
- 📧 **Smart Reminders:** Sends email or SMS based on preference
- 👥 **Batch Processing:** Handles multiple students efficiently
- 📊 **Activity Logging:** Tracks all reminder activities
- 🔄 **Duplicate Prevention:** Avoids sending multiple reminders
- 📱 **Multi-Channel:** Supports both email and SMS notifications

**Quick Setup**
1. Import the workflow JSON into n8n
2. Configure the daily trigger schedule
3. Set up class schedule and student contact files
4. Connect Google Calendar API credentials
5. Configure the SMTP server for emails
6. Set up the SMS service provider (Twilio)
7. Test with sample class data
8. Activate the workflow

**Parameters to Configure**
- schedule_file_path: Path to class schedule file
- contacts_file_path: Path to student contacts file
- google_calendar_id: Google Calendar ID for syncing
- google_api_credentials: Google Calendar API credentials
- smtp_host: Email server settings
- smtp_user: Email username
- smtp_password: Email password
- sms_api_key: SMS service API key
- sms_phone_number: SMS sender phone number

**Sample Reminder Messages**
- **Email:** "Hi [Name], reminder: [Class Name] starts at [Time] in [Room]. See you there!"
- **SMS:** "[Name], your [Class Name] class starts at [Time] in [Room]. Don't miss it!"
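As an illustration of the "Create Student Reminders" step, here is a minimal sketch of a Code node that merges today's classes with student contacts and builds the message per preferred channel. The column names follow the data files described above and the upstream node names follow the component list; adjust both if your setup differs.

```javascript
// Minimal sketch: merge today's classes with student contacts and build one
// reminder item per student/class pair. Column and node names follow the
// template description above; treat them as assumptions.
const classes = $("Filter Today's Classes").all().map(i => i.json);
const students = $("Read Student Contacts").all().map(i => i.json);

const reminders = [];
for (const cls of classes) {
  for (const s of students) {
    if (!String(s['Class IDs'] || '').includes(cls['Class ID'])) continue;
    const email = `Hi ${s.Name}, reminder: ${cls['Class Name']} starts at ${cls.Time} in ${cls.Room}. See you there!`;
    const sms = `${s.Name}, your ${cls['Class Name']} class starts at ${cls.Time} in ${cls.Room}. Don't miss it!`;
    reminders.push({
      json: {
        studentId: s['Student ID'],
        channel: s['Preferred Contact'],          // "Email" or "SMS"
        to: s['Preferred Contact'] === 'SMS' ? s.Phone : s.Email,
        message: s['Preferred Contact'] === 'SMS' ? sms : email,
      },
    });
  }
}
return reminders;
```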
**Use Cases**
- Daily class reminders for students
- Schedule change notifications
- Exam and assignment deadline alerts
- Teacher absence notifications
- Room change announcements
by GiovanniSegar
**Video walkthrough**
https://www.youtube.com/watch?v=OwIFK-r-NtQ

**Summary of agent**
This agent can write and rewrite its own rules, allowing you to mold its behavior. It receives rules from a database as system instructions and has tools to create, edit, or delete them. This is a great baseline for new agent builds. You can tell it things like "Next time, use present tense when talking about this subject" and it will use a tool to save this as a rule, then receive that instruction in all future iterations.

**How to start using it**

Option 1: With a Postgres database (e.g., Supabase)
- Supabase schema: Create a table (e.g., agent_rules) with the following columns:
  - id: bigint (Primary Key, auto-incrementing)
  - created_at: timestamp with time zone (Default: now())
  - rule_text: text
  - agent: text
- Workflow updates:
  - Update the Postgres credentials in the "Get rules from database," "Insert rule into database," and "Execute query on rule database" nodes.
  - Update the agent value (currently 'TestAgent') in the "Get rules from database" and "Insert rule into database" nodes if you want a different agent name.
  - Update the Anthropic API credentials.

Option 2: With Google Sheets
- Google Sheet setup: Create a Google Sheet with columns for rule_text and agent.
- Workflow updates: Example Google Sheets nodes are included. You will need to:
  - Connect your Google Sheets credentials.
  - Select your Google Sheet (with rule_text and agent columns) in all relevant Google Sheets nodes.
  - Replace the existing Postgres nodes ("Get rules from database", "Insert rule into database", "Execute query on rule database") with the configured Google Sheets nodes.
  - Update the agent value (currently 'TestAgent') in the Google Sheets nodes if you want a different agent name.
  - Update the Anthropic API credentials.
- Agent instructions: Update the agent's system message and remove the database schema section, as it is no longer relevant.
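To show how the stored rules end up in the agent's system instructions, here is a minimal sketch of a Code node that turns the rows returned by "Get rules from database" into a numbered rule block. The exact prompt wording is an assumption, not the template's own text.

```javascript
// Minimal sketch: turn rule rows (each with a rule_text column) into a numbered
// list that can be appended to the agent's system message.
const rules = $input.all().map(item => item.json.rule_text).filter(Boolean);

const ruleBlock = rules.length
  ? 'Follow these standing rules:\n' + rules.map((r, i) => `${i + 1}. ${r}`).join('\n')
  : 'No standing rules yet.';

return [{ json: { systemRules: ruleBlock } }];
```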
by Robert Breen
This n8n workflow reads emails from your Outlook inbox, drafts AI-powered replies using OpenAI, and routes them through the gotoHuman node for human approval before replying automatically.

**✅ Key Features**
- **Reads Outlook emails** from today only (excluding those from your own address).
- **AI-generated replies** crafted using OpenAI based on the subject and body of the email.
- **Community node integration**: Uses the gotoHuman node for human review and approval of replies before sending.
- **Safe sending**: Only approved responses are automatically sent back via Outlook.
- **Expandable**: Can be easily modified to send drafts instead of full replies, include additional email filters, or trigger at intervals or via webhook.

**🧠 Nodes Used**
- Microsoft Outlook – Fetch and reply to emails
- OpenAI – Generates smart reply text
- gotoHuman – Human-in-the-loop approval system
- Loop Over Items, IF, Code, and Set nodes for processing logic
- Manual Trigger – For testing

**🔧 Setup Instructions**

1. Connect APIs

- **Outlook OAuth2**: Go to the Azure Portal, register an app, add the Mail.Read and Mail.Send scopes, set the redirect URI to https://api.n8n.cloud/oauth2-credential/callback, and paste the credentials into the n8n credential manager.
- **OpenAI API**: Create an account at OpenAI, create an API key, and add it to n8n credentials.
- **gotoHuman API**: Go to https://gotoHuman.ai and sign in, create a review template (e.g., "Email Responses"), and copy the Template ID and API key into n8n credentials.

**🪜 Workflow Steps Overview**

1. Trigger: Use the Manual Trigger to test, or schedule execution with a cron node.

2. Filter emails from today: A Code node outputs today's date in the proper yyyy-mm-dd format.

const today = new Date();
today.setHours(0, 0, 0, 0);
return [{
  json: {
    searchQuery: `received:${today.toISOString().split('T')[0]}`
  }
}];

3. Search and filter Outlook messages: Uses the Outlook node with a search query like received:2025-08-06 -from:rbreen@ynteractive.com (update to your email).

4. Generate AI response. Text prompt to OpenAI:

subject: {{ $json.subject }}
body: {{ $json.body.content }}

System prompt:
> You are a personal assistant helping respond to emails. I am an AI automation expert specializing in helping small and medium-size businesses automate processes. Create a short response to the email. Sign the email as Robert Breen.

5. Review with gotoHuman: Submit the AI output for human approval using the gotoHuman node. The output schema should match the review template fields (e.g., "email", "OriginalEmail").

6. IF node decision: If the status is approved, send the reply; if not, return to the loop for revision or skip.

**✏️ Customization Ideas**
- ✉️ Send only drafts by skipping the "reply" step and storing results.
- 🕒 Schedule the workflow with a Cron trigger for automation.
- 🔎 Add label filters or subject keywords for advanced targeting.

**🔗 External Links**
- gotoHuman Community Node
- OpenAI
- Microsoft Outlook API Setup

**💬 Need More Help?**
If you'd like help customizing this or building similar automations, reach out:
Robert Breen
AI & Automation Consultant
🌐 https://ynteractive.com
📧 robert.j.breen@gmail.com
🔗 LinkedIn
by keisha kalra
**Try It Out!**
This n8n template creates a fully automated Instagram content schedule using AI and Google Sheets. It is perfect for content creators, marketing teams, or local businesses looking to organize and scale their social media posting.

**How it works**
The workflow starts by reading two sets of inputs from a Google Sheet:
1. Your content strategy inputs (Pillar, Objective, Frequency, Format, Structure, Examples).
2. A list of scraped blog posts with title, URL, and description (fetched from your website).

Blog posts are scraped using Apify and parsed to extract key fields, which are stored in a tab labeled "Input (blog month)". You can assign a preferred posting month for each blog (e.g., fall blog posts get tagged for September). The workflow then merges both inputs and extracts the relevant information that ChatGPT will build on.

**AI Scheduling & Personalization**
Once merged, the workflow loops through each content item and:
- Identifies if the scheduled post falls on or near a holiday (like Mother's Day) and adjusts the content accordingly.
- Uses an attached reference tool to guide structure and tone, based on a library of post examples.
- Sends the content to an AI Agent (using GPT-4, but customizable) that generates:
  - A compelling Instagram caption
  - A visual description
  - Hashtags
  - A suggested post date, day, content pillar, and format (carousel, reel, image, etc.)

**Output**
All generated content, including captions, structure, dates, hashtags, and pillar, is exported into a tab titled "Output" in your Google Sheet. The final schedule is ready for manual review, editing, or publishing to social media.

**How to use**
- The workflow uses a manual trigger to start, but you can replace it with a Webhook, cron job, or form submission.
- Add/edit your content strategy in Google Sheets.

**How to Set Up**

Initial Input tab (define your content pillars and structure):
- Create a tab named "Input" or "Strategy"
- Include these columns:
  - Pillar: e.g., Family images
  - Objective: e.g., Showcase images
  - Frequency: e.g., Bi-weekly
  - Content Form: e.g., Images, Reels
  - Structure: brief description of expected layout (e.g., carousel Q&A, singular photo)
  - Examples: prompts or questions to guide AI (e.g., Why do you think families should do a session?)

Input (blog month) tab (store scraped blog content):
- Include these columns:
  - URL: direct link to blog post
  - Title: blog post title
  - Description: short summary of the post
  - Preferred Month: month you want it posted (e.g., August, September)
- This sheet is partially auto-filled by the workflow (except for Preferred Month).

Output tab (final scheduled content):
- Include these columns:
  - Date: scheduled posting date (YYYY-MM-DD)
  - Day: day of the week
  - Pillar: content category assigned
  - Format: e.g., Images, Reels, Carousel
  - Description: visual summary
  - Caption: Instagram-ready caption
  - Hashtags: complete hashtag block

To use the Apify HTTP Request node (a minimal request sketch follows at the end of this template):
1. Drag an HTTP Request node into your n8n workflow.
2. Set the Method and URL based on how you're using Apify:
   - Use POST if you want to run an actor live with dynamic input (e.g., scrape blog posts in real time).
   - Use GET if you want to retrieve results from a completed or static dataset run (faster and cheaper if you're reusing previous data).
3. Configure query or body parameters:
   - Include your Apify API token for authentication (e.g., token=YOUR_API_KEY)
   - For POST: include an input object with any required actor settings (e.g., the blog URL to scrape).
   - For GET: specify the dataset ID in the URL.
4. Test the node to ensure you're retrieving the blog titles, descriptions, and URLs as expected.

**Requirements**
- Apify account for scraping blog posts
- OpenAI key (e.g., GPT-4) or another model of your choice
- Google Sheets credentials

**Example Use Cases**
- A photographer repurposing blogs into Instagram carousels
- A nonprofit automatically generating seasonal posts
- A small team managing multi-pillar content across weeks or months

**Need Help?**
Join the n8n Discord or ask in the n8n Forum!

Happy Content Making! 📅✨
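As referenced in the Apify setup steps above, here is a minimal sketch of the GET variant written as an n8n Code node: it pulls items from a finished Apify dataset run and keeps only the fields the "Input (blog month)" tab expects. The dataset ID and token are placeholders, and the item field names (title, url, description) depend on the actor you run, so treat them as assumptions.

```javascript
// Minimal sketch of the GET variant: pull items from a finished Apify dataset
// run. DATASET_ID and APIFY_TOKEN are placeholders; item field names depend on
// the actor and are assumptions here.
const DATASET_ID = 'YOUR_DATASET_ID';
const APIFY_TOKEN = 'YOUR_API_KEY';

const items = await this.helpers.httpRequest({
  method: 'GET',
  url: `https://api.apify.com/v2/datasets/${DATASET_ID}/items`,
  qs: { token: APIFY_TOKEN, format: 'json' },
  json: true,
});

// Keep only the fields the "Input (blog month)" tab expects.
return items.map(post => ({
  json: { Title: post.title, URL: post.url, Description: post.description },
}));
```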
by Jimleuk
This n8n template demonstrates how you can leverage an existing support site search to power your support chatbots and agents.

Building a support chatbot need not be complicated! If building and indexing vector stores or duplicating data isn't necessarily your thing, an alternative implementation of the RAG approach is to leverage existing knowledge bases such as support portals. In this way, document management and maintenance of your support agent is significantly reduced.

Disclaimer: This template example uses AcuityScheduling's help center website but is not associated with, supported, nor endorsed by the company.

**How it works**
- A simple AI agent is connected with a chat trigger to receive user queries.
- The AI agent is instructed to fetch information from the knowledge base via the attached custom workflow tool (aka the "knowledgebase tool").
- There is no step that replicates the entire support article database into a vector store. You may choose not to because of the time, cost, and maintenance involved.
- Instead, the tool leverages the existing support portal's search API to retrieve knowledge-base articles.
- Finally, the search results are formatted before sending an aggregated response back to the agent.

**How to use**
- Customise the subworkflow to work with your own support portal API and format accordingly.
- Try the following queries:
  - How do I connect my iCloud to AcuityScheduling?
  - How do I download past invoices for my Acuity account?

**Requirements**
- OpenAI for LLM.
- If your organisation's APIs require authorisation, you may need to add custom credentials as necessary.

**Customising this workflow**
- Add additional tools to reach other parts of your internal knowledge base.
- Not using OpenAI? Feel free to swap, but ensure the LLM has tools/function calling support.
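For orientation when customising the subworkflow, here is a minimal sketch of what the "knowledgebase tool" body could look like: query a support portal's search endpoint and reshape the hits for the agent. The URL and response fields (results, title, snippet, url) are hypothetical placeholders, not AcuityScheduling's actual API; replace them with your own portal's search API.

```javascript
// Minimal sketch of the knowledgebase tool subworkflow: call a (hypothetical)
// support-portal search endpoint and return one aggregated context block.
const query = $input.first().json.query || '';

const response = await this.helpers.httpRequest({
  method: 'GET',
  url: 'https://help.example.com/api/search',   // placeholder endpoint
  qs: { q: query, per_page: 5 },
  json: true,
});

const articles = (response.results || []).map(
  r => `- ${r.title}: ${r.snippet} (${r.url})`
);

// A single string keeps the agent's tool response compact.
return [{ json: { response: articles.join('\n') || 'No matching articles found.' } }];
```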