by Suleman Hasib
## Template Overview
This template is designed for individuals and businesses who want to maintain a consistent presence on the Fediverse while also posting on Threads or managing multiple Fediverse profiles. By automating the process of resharing statuses or posts, this workflow saves time and ensures regular engagement across accounts.

## Use Case
The template addresses the challenge of managing activity across Fediverse accounts by automatically boosting or resharing posts from a specific account to your own. It is especially helpful for users who want to consolidate engagement without manually reposting content across multiple platforms or profiles.

## How It Works
The workflow runs on a scheduled trigger and retrieves recent posts from a specified Fediverse account, such as your Threads.net account. It uses a JavaScript filter to identify posts from the current day and then automatically boosts or reshares them to your selected Mastodon profile.

## Preconditions
- A Mastodon account with developer access.
- A Threads.net or other Fediverse account that you want to boost.
- Basic familiarity with APIs and setting up credentials in n8n.

## Setup Steps

### Step 1: Create a Developer Application on Mastodon
1. Log in to your Mastodon account and navigate to Preferences > Development > New Application.
2. Fill out the required information and create your application.
3. Set Scopes to at least read, profile, write:statuses.
4. Click Submit.
5. Note down the access token generated for this application.

### Step 2: Get the Account ID
Use the following command to retrieve the account ID for the profile you want to boost:

```
curl -s "https://mastodon.social/api/v1/accounts/lookup?acct=<ACCOUNTNAME>"
```

Alternatively, paste the URL into a GET node in n8n. From the returned JSON, copy the "id" field value (e.g., {"id":"110564198672505618", ...}).

### Step 3: Update the "Get Statuses" Node
Replace <ACCOUNTID> in the URL field with the ID you retrieved in Step 2:

```
https://mastodon.social/api/v1/accounts/<ACCOUNTID>/statuses
```

### Step 4: Configure the "Boost Statuses" Node
1. The authentication type will already be set to Header Auth.
2. Grab the access token from Step 1.
3. In the Credential for Header Auth field, create a new credential.
4. Click the pencil icon in the top-left corner to name your credential.
5. In the Name field, enter Authorization.
6. In the Value field, enter Bearer <YOUR_MASTODON_ACCESS_TOKEN>. (Note: there is a space after "Bearer".)
7. Save the credential, and it should automatically be selected as your Header Auth.

### Step 5: Test the Workflow
Run the workflow to ensure everything is set up correctly. Adjust filters or parameters as needed for your specific use case.

## Customization Guidance
- Replace mastodon.social with your own Mastodon domain if you're using a self-hosted instance.
- Adjust the JavaScript filter logic to meet your specific needs (e.g., filtering by hashtags or keywords), as in the sketch below.
- For enhanced security, store the access token as an n8n credential. Embedding it directly in the URL is **not recommended**.

## Notes
- This workflow is designed to work with any Mastodon domain.
- Ensure your Mastodon account has appropriate permissions for boosting posts.

By following these steps, you can automate your Fediverse engagement and focus on creating meaningful content while the workflow handles the rest!
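For reference, here is a minimal sketch of what the current-day JavaScript filter could look like in an n8n Code node. The `created_at` field name comes from the Mastodon statuses API; the date comparison and item handling are illustrative, not the template's exact code.

```javascript
// n8n Code node: keep only statuses created today.
// Assumes items arrive from the "Get Statuses" node, where each status
// carries a `created_at` ISO timestamp (per the Mastodon API).
const today = new Date().toISOString().slice(0, 10); // "YYYY-MM-DD"

return items.filter((item) => {
  const createdAt = item.json.created_at || '';
  return createdAt.slice(0, 10) === today;
});
```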
by Cyril Nicko Gaspar
# AI Agent via GoHighLevel SMS with Website-Based Knowledgebase

This n8n workflow enables an AI agent to interact with users through GoHighLevel SMS, leveraging a knowledgebase dynamically built by scraping the company's website.

## Problem It Solves
Traditional customer support systems often require manual data entry and lack real-time updates from the company's website. This workflow automates the process by:
- Scraping the company's website at set intervals to update the knowledgebase.
- Integrating with GoHighLevel SMS to provide users with timely and accurate information.
- Utilizing AI to interpret user queries and fetch relevant information from the updated knowledgebase.

## Pre-requisites
Before deploying this workflow, ensure you have:
- An active n8n instance (self-hosted or cloud).
- A valid OpenAI API key (or any compatible AI model).
- A Bright Data account with Web Unlocker set up.
- A GoHighLevel SMS LeadConnector account.
- A GoHighLevel Marketplace App configured with the necessary scopes.
- The n8n-nodes-brightdata community node installed for Bright Data integration (if self-hosted).

## Setup Instructions

### 1. Install the Bright Data Community Node in n8n
For self-hosted n8n instances:
1. Navigate to Settings → Community Nodes.
2. Click Install.
3. In the search bar, enter n8n-nodes-brightdata.
4. Select the node from the list and click Install.

Docs: https://docs.n8n.io/integrations/community-nodes/installation/gui-install

### 2. Configure Bright Data Credentials
1. Obtain your API key from Bright Data.
2. In n8n, go to Credentials → New and select HTTP Request.
3. Set authentication to Header Auth.
4. In Name, enter Authorization.
5. In Value, enter Bearer <your_api_key_from_Bright_Data>.
6. Save the credentials.

### 3. Configure OpenAI Credentials
Add your OpenAI API key to the relevant nodes. If you want to use a different model, replace all OpenAI nodes accordingly.

### 4. Set Up GoHighLevel Integration

#### a. Create a GoHighLevel Marketplace App
1. Go to https://marketplace.gohighlevel.com
2. Click My Apps → Create App
3. Set Distribution Type to Sub-Account
4. Add the following scopes:
   - locations.readonly
   - contacts.readonly
   - contacts.write
   - opportunities.readonly
   - opportunities.write
   - users.readonly
   - conversations/message.readonly
   - conversations/message.write
5. Add your n8n OAuth Redirect URL as a redirect URI in the app settings.
6. Save and copy the Client ID and Client Secret.

#### b. Configure GoHighLevel Credentials in n8n
1. Go to Credentials → New
2. Choose OAuth2 API
3. Input:
   - Client ID
   - Client Secret
   - Authorization URL: https://auth.gohighlevel.com/oauth/authorize
   - Access Token URL: https://auth.gohighlevel.com/oauth/token
   - Scopes: locations.readonly, contacts.readonly, contacts.write, opportunities.readonly, opportunities.write, users.readonly, conversations/message.readonly, conversations/message.write
4. Save and authenticate to complete setup.

Docs: https://docs.n8n.io/integrations/builtin/credentials/highlevel

## Workflow Functionality (Summary)
- **Scheduled Scraping**: Scrapes the website at user-defined intervals.
- **Edit Fields node**: User defines the homepage or site to scrape.
- **Bright Data node** (self-hosted) or **HTTP Request node** (cloud users) performs the scraping.
- **Knowledgebase Update**: The scraped content is stored or indexed (see the sketch below for one way to chunk it first).
- **GoHighLevel SMS**: Incoming user queries are received through SMS.
- **AI Processing**: AI matches queries to relevant content.
- **Response Delivery**: AI-generated answers are sent back via SMS.

## Use Cases
- **Customer Support Automation**: Provide instant, accurate responses.
- **Lead Qualification**: Automatically answer potential customer inquiries.
- **Internal Knowledge Distribution**: Keep staff updated via SMS based on website info.

## Customization
- **Scraping URLs**: Adjust targets in the Edit Fields node.
- **Model Swap**: Replace OpenAI nodes to use a different LLM.
- **Format Response**: Customize output to match your tone or brand.
- **Other Channels**: Expand to include chat apps or email responses.
- **Vector Databases**: It is advisable to store the data in a third-party vector database service such as Pinecone or Supabase.
- **Chat Memory Node**: This workflow uses Redis for chat memory, but you can switch to n8n's built-in chat memory.

## Summary
This n8n workflow combines Bright Data's scraping tools and GoHighLevel's SMS interface with AI query handling to deliver a real-time, conversational support experience. Ideal for businesses that want to turn their website into a live knowledge source via SMS, this agent keeps itself updated, smart, and customer-ready.
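As a companion to the Knowledgebase Update step, here is a minimal Code-node sketch of one way to turn a scraped page into plain-text chunks before indexing. The input field name (`html`), the `url` field, and the 1000-character chunk size are illustrative assumptions, not the template's exact configuration.

```javascript
// n8n Code node: turn raw scraped HTML into plain-text chunks for indexing.
// Assumes the upstream scraping node returns the page body in item.json.html
// and the page address in item.json.url (both names are illustrative).
const CHUNK_SIZE = 1000;

return items.flatMap((item) => {
  const text = (item.json.html || '')
    .replace(/<script[\s\S]*?<\/script>/gi, ' ') // drop inline scripts
    .replace(/<style[\s\S]*?<\/style>/gi, ' ')   // drop inline styles
    .replace(/<[^>]+>/g, ' ')                    // strip remaining tags
    .replace(/\s+/g, ' ')
    .trim();

  const chunks = [];
  for (let i = 0; i < text.length; i += CHUNK_SIZE) {
    chunks.push({ json: { chunk: text.slice(i, i + CHUNK_SIZE), source: item.json.url } });
  }
  return chunks;
});
```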
by Rostislav
This n8n template provides a complete solution for Optical Character Recognition (OCR) of image and PDF files directly within Telegram. Users can simply send PNG, JPEG, or PDF documents to your Telegram bot, and the workflow will process them, extract text using Mistral OCR, and return the content as a downloadable Markdown (.md) text file.

## Key Features & How it Works
- **Effortless OCR via Telegram**: Users send a file to the bot, and the system automatically detects the file type (PNG, JPEG, or PDF); a sketch of this check follows below.
- **File Size Validation**: The workflow enforces a **25 MB file size limit**, in line with Telegram Bot API restrictions, ensuring smooth operation.
- **Mistral-Powered Recognition**: Leveraging **Mistral OCR**, the template accurately extracts text from various document types.
- **Markdown Output**: Recognized text is automatically converted into a clean Markdown (.md) text file, ready for easy editing, storage, or further processing.
- **Secure File Delivery**: The processed Markdown file is delivered back to the user via Telegram. For this, the workflow ingeniously uses a **GET request to itself** (acting as a file downloader proxy). This generated link allows Telegram to fetch the .md file directly. Please note: this download functionality requires the workflow to be in an Active status.
- **Optional Whitelist Security**: Enhance your bot's security with an **optional whitelist feature**. You can configure specific Telegram User IDs to restrict access, ensuring only authorized users can interact with your bot.
- **Simplified Webhook Management**: The template includes dedicated utility flows for convenient management of your Telegram bot's webhooks (for both development and production environments).

This template is ideal for digitizing documents on the go, extracting text from scanned files, or converting image-based content into versatile, searchable text.

## Getting Started
To get this powerful OCR bot up and running, follow these two main steps:
1. **Set Up Your Telegram Bot**: First, configure your Telegram bot and its webhooks. Follow the instructions detailed in the Telegram Bot Webhook Setup section to create your bot, obtain its API token, and set up the necessary webhook URLs.
2. **Configure Bot Settings**: Next, define key operational parameters for your bot. Proceed to the Settings Configuration section and populate the variables according to your preferences, including options for whitelist access.
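For illustration, here is a minimal sketch of the file-type and size check as it could be written in an n8n Code node. The `message.document` and `message.photo` shapes come from the Telegram Bot API; the `accepted` output flag is an illustrative name, not the template's exact logic.

```javascript
// n8n Code node: accept only PNG/JPEG/PDF under the 25 MB limit.
// Assumes the Telegram Trigger output is in item.json.message; photos sent
// as Telegram photos (not files) arrive in `message.photo` without a MIME type.
const ALLOWED = ['image/png', 'image/jpeg', 'application/pdf'];
const MAX_BYTES = 25 * 1024 * 1024;

return items.map((item) => {
  const msg = item.json.message || {};
  const doc = msg.document;
  const photo = Array.isArray(msg.photo) ? msg.photo[msg.photo.length - 1] : null;

  const typeOk = doc ? ALLOWED.includes(doc.mime_type) : Boolean(photo);
  const size = doc ? doc.file_size : photo ? photo.file_size : 0;

  item.json.accepted = typeOk && size > 0 && size <= MAX_BYTES;
  return item;
});
```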
by Garri
## Description
This workflow automates the security reputation check of domains and IP addresses using multiple APIs, including VirusTotal, AbuseIPDB, and Google DNS. It assesses potential threats, including malicious and suspicious scores, as well as email security configurations (SPF, DKIM, DMARC). The analysis results are processed by AI to produce a concise assessment, then automatically updated in Google Sheets for documentation and follow-up.

## How It Works
1. **Automatic Trigger** - The workflow runs periodically via a Schedule Trigger.
2. **Data Retrieval** - Fetches a list of domains from Google Sheets with status "To do".
3. **Domain Analysis** - Uses the VirusTotal API to get the domain report, perform a rescan, and check IP resolutions.
4. **IP Analysis** - Checks IP reputation using AbuseIPDB.
5. **Email Security Validation** - Verifies SPF, DKIM, and DMARC configurations via Google DNS (see the sketch below).
6. **AI Assessment** - Analysis data is processed by AI to produce a short summary in Indonesian.
7. **Data Update** - The results are automatically written back to Google Sheets, changing the status to "Done" or adding notes if potential threats are found.

## How to Set Up
1. **Prepare API Keys**: Sign up and obtain API keys from VirusTotal and AbuseIPDB. Set up access to the Google Sheets API.
2. **Configure Credentials in n8n**: Add VirusTotal API, AbuseIPDB API, and Google Sheets OAuth credentials in n8n.
3. **Prepare Google Sheets**: Create a sheet with the columns No, Domain, Customer, Keterangan (notes), and Status. Ensure the initial data has the status "To do".
4. **Import Workflow**: Upload the workflow JSON file into n8n.
5. **Set Schedule Trigger**: Define the checking interval as needed (e.g., every 1 hour).
6. **Test Run**: Run the workflow manually to ensure all API connections and the Google Sheets output work properly.
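To show the shape of the email security validation step, here is a minimal sketch querying the Google DNS JSON API (`https://dns.google/resolve`) from a Code node. It assumes each item carries the domain in `item.json.domain` (an illustrative field name); DKIM is omitted because checking it requires knowing the sender's selector. In the actual workflow, HTTP Request nodes can perform the same lookups.

```javascript
// n8n Code node: check SPF and DMARC records via the Google DNS JSON API.
// TXT answers come back quoted, so strip the quotes before matching.
const txtRecords = async (name) => {
  const body = await this.helpers.httpRequest({
    method: 'GET',
    url: `https://dns.google/resolve?name=${name}&type=TXT`,
    json: true,
  });
  return (body.Answer || []).map((a) => a.data.replace(/"/g, ''));
};

for (const item of items) {
  const domain = item.json.domain; // illustrative field name
  const spf = (await txtRecords(domain)).find((r) => r.startsWith('v=spf1'));
  const dmarc = (await txtRecords(`_dmarc.${domain}`)).find((r) => r.startsWith('v=DMARC1'));
  item.json.spf = spf || null;   // null signals a missing record
  item.json.dmarc = dmarc || null;
}

return items;
```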
by Satyam Tripathi
## Try It Out!
This n8n template demonstrates how to build an autonomous AI news agent using Decodo MCP that automatically finds, scrapes, and delivers fresh industry news to your team via Slack.

Use cases are many: automated news monitoring for your industry, competitive intelligence gathering, startup monitoring, regulatory updates, research automation, or daily briefings for your organization.

## How it works
- Define your news topics using the Set node: AI, MCP, web scraping, whatever matters to your business.
- The AI Agent processes those topics using the Gemini Chat Model, determining which tools to use and when.
- Here's where it gets interesting: Decodo MCP gives your AI agent the tools to search Google, scrape websites, and parse content automatically, all while bypassing geo-restrictions and anti-bot measures.
- The agent hunts for fresh articles from the last 48 hours, extracts clean data, and returns structured JSON results.
- Format Results cleans up the AI's messy output and removes duplicates (a sketch of this step follows below).
- Your polished news digest gets delivered to Slack with clickable links and summaries.

## How to use
- The Schedule Trigger runs daily at 9 AM; adjust the timing or swap in a webhook trigger as needed.
- Customize topics in the Set node to match your industry.
- Scales effortlessly: add more topics, tweak search criteria, done.

## Requirements
- Decodo MCP credentials (free trial available): grab the Smithery connection URL with keys and paste it straight into your n8n MCP node. Done.
- Gemini API key for the AI processing: drop it into the Google Gemini Chat Model node and pick whichever Gemini model fits your needs.
- Slack workspace for delivery: n8n's Slack integration docs have you covered.

## What the final output looks like
Here's what your team receives in Slack every morning:

## Need help?
Join the Discord or email support@decodo.com for questions.

Happy Automating!
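Here is a minimal sketch of what a Format Results step could do in a Code node. It assumes the agent's output was already parsed into an `articles` array of `{ title, url, summary }` objects; those field names are illustrative, not the template's exact schema. The `<url|text>` syntax is Slack's mrkdwn link format.

```javascript
// n8n Code node: dedupe the agent's article list and build a Slack digest.
const seen = new Set();
const articles = [];

for (const item of items) {
  for (const article of item.json.articles || []) {
    const key = (article.url || article.title || '').toLowerCase().trim();
    if (!key || seen.has(key)) continue; // skip duplicates and empty entries
    seen.add(key);
    articles.push(article);
  }
}

// One Slack-ready digest string with clickable links.
const digest = articles
  .map((a) => `• <${a.url}|${a.title}>\n${a.summary || ''}`)
  .join('\n\n');

return [{ json: { digest, count: articles.length } }];
```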
by Halfbit
# AI-Powered Invoice Processing: from Email to Database & Chat Notifications

Automatically process PDF invoices directly from your email inbox. This workflow uses AI to extract key data, saves it to a PostgreSQL database, and instantly notifies you about the new document in your preferred chat application.

The workflow listens for new emails, fetches PDF attachments, and then passes their content to a Large Language Model (LLM) for intelligent recognition and data extraction. Finally, the information is securely archived in the database, and a summary of the invoice is sent as a notification.

> This workflow is highly customizable.
> It uses PostgreSQL, OpenAI (GPT), and Discord by default, but you can easily swap these components.
> Feel free to use a different database like MySQL or Airtable, another AI model provider, or send notifications to Slack, MS Teams, or any other chat platform.

> Note: If the workflow fails to extract data correctly from invoices issued by certain companies, you may need to adjust the prompt used in the Basic LLM Chain node to improve parsing accuracy.

## Use Case
- Automating accounts payable for small businesses and freelancers
- Centralizing financial documents without manual data entry
- Creating a searchable database of all incoming invoices
- Receiving real-time notifications for new financial commitments

## Features
- **Email Trigger (IMAP):** Monitors a dedicated email inbox for new messages with attachments
- **PDF Filtering:** Automatically identifies and processes only PDF attachments
- **AI-Powered Data Extraction:** Uses an LLM (e.g., GPT-4o-mini) to extract invoice number, buyer/seller details, amounts, currency, and due dates
- **Structured Data Output:** Converts AI output to standardized JSON (see the validation sketch below)
- **Database Write Logic:** Prevents duplicates by checking the invoice/company combination
- **PostgreSQL Integration:** Stores extracted data in the company and invoice tables
- **Chat Notifications:** Sends an invoice summary as a message to a designated channel

## Setup Instructions

> API Access & Costs: To use the AI extraction feature, you need an API key from a provider like OpenAI. Most providers charge for access to language models, so you'll likely need a billing account.

### 1. PostgreSQL Database Configuration
Ensure your database has the following tables:

```sql
-- Table for companies (invoice issuers)
CREATE TABLE company (
    id SERIAL PRIMARY KEY,
    tax_number VARCHAR(255) UNIQUE NOT NULL,
    name VARCHAR(255),
    address TEXT,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);

-- Table for invoices
CREATE TABLE invoice (
    id SERIAL PRIMARY KEY,
    company_id INTEGER REFERENCES company(id),
    invoice_number VARCHAR(255) NOT NULL,
    -- Add other fields: total_to_pay, currency, due_date
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    UNIQUE(company_id, invoice_number)
);
```

Then, in n8n, create a credential for your PostgreSQL DB.

### 2. Email (IMAP) Configuration
In n8n, add credentials for the email account that receives invoices:
- IMAP host
- IMAP port
- Username
- Password

### 3. AI Provider Configuration
1. Log in to OpenAI (or a similar provider)
2. Generate an API key
3. In n8n, create credentials and paste the key

### 4. Chat Notification (Discord)
1. Go to Discord > Server Settings > Integrations > Webhooks > New Webhook
2. Select a channel
3. Copy the Webhook URL
4. In n8n, paste the URL into the Discord node

## Placeholders and Fields to Fill

| Placeholder | Description | Example |
|---|---|---|
| YOUR_EMAIL_CREDENTIALS | Your IMAP email account in n8n | My Invoice Mailbox |
| YOUR_OPENAI_CREDENTIALS | API credentials for AI model | My OpenAI Key |
| YOUR_POSTGRES_CREDENTIALS | Your PostgreSQL DB credentials in n8n | My Production DB |
| YOUR_DISCORD_WEBHOOK | Webhook URL for your chat system | https://discord.com/api/webhooks/... |

## Testing the Workflow
1. Send a test invoice to the inbox as a PDF attachment
2. Run the workflow manually in n8n and check whether the IMAP node fetches the message
3. Verify AI extraction: inspect the LLM output (e.g., the GPT node) and confirm the structured JSON
4. Check the DB: ensure new rows appear in company and invoice
5. Check the chat: verify the invoice summary appears in the chosen channel

## Customization Tips
- **Change the DB:** Use MySQL, Airtable, or Google Sheets instead of PostgreSQL
- **Other notifications:** Swap Discord for Slack, MS Teams, Telegram, etc.
- **Expand AI logic:** Extract line items, prices, etc. by customizing the prompt
- **Add payment logic:** Allow marking invoices as paid via emoji or a separate webhook
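As a complement to the Structured Data Output step, here is a minimal sketch of validating the LLM's JSON before the database write. The field names mirror the table columns above (`invoice_number`, `tax_number`, `total_to_pay`, `currency`); the `text` input field and the normalization logic are illustrative assumptions, since your prompt's output shape may differ.

```javascript
// n8n Code node: validate and normalize the LLM's extracted invoice data.
return items.map((item) => {
  let data;
  try {
    // LLM output may arrive as a string or as already-parsed JSON.
    data = typeof item.json.text === 'string' ? JSON.parse(item.json.text) : item.json;
  } catch (err) {
    throw new Error(`LLM returned invalid JSON: ${err.message}`);
  }

  for (const field of ['invoice_number', 'tax_number', 'total_to_pay', 'currency']) {
    if (data[field] === undefined || data[field] === null || data[field] === '') {
      throw new Error(`Missing required invoice field: ${field}`);
    }
  }

  // Normalize the amount to a number so the DB insert does not fail silently.
  data.total_to_pay = Number(String(data.total_to_pay).replace(',', '.'));
  return { json: data };
});
```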
by Evervise
Transform database design from weeks to minutes with this intelligent multi-agent system. Perfect for agencies, consultancies, and SaaS companies offering database architecture as a lead magnet or service.

## 4 Specialized AI Agents
- **Agent 1 (Architect):** Designs the complete schema with tables, relationships, and indexes
- **Agent 2 (Reviewer):** Validates the design for performance, security, and scalability
- **Agent 3 (Optimizer):** Adds advanced features and scores the design (0-100)
- **Agent 4 (SQL Generator):** Creates production-ready migration scripts

## Smart Quality Loop
Automatically retries up to 3 times if the score falls below a B grade, feeding previous feedback back in to improve the design iteratively (see the gating sketch below).

## What You Get
- Complete database schema (JSON)
- Comprehensive score card with letter grade
- Review feedback with severity levels (Critical/High/Medium/Low)
- Production-ready SQL migration script
- Optional auto-execution in PostgreSQL/MySQL
- Iteration count and optimization recommendations

## Perfect For
- Digital agencies offering database design services
- SaaS companies needing rapid prototyping
- Consultancies creating lead magnets
- Developers modernizing legacy systems
- Startups validating data models

## Use as Lead Magnet
Offer free database blueprints to capture leads, then upsell implementation, custom automations, and ongoing optimization services.

## Technical Highlights
- Optimized temperature settings per agent (0.1-0.5)
- Claude Sonnet 4.5 for maximum quality
- Structured JSON output for easy integration
- Error handling and graceful degradation
- Execution time: 60-90 seconds average
- Cost: ~$0.15-0.30 per run

## Use Cases
- **Agency Lead Magnet**: Capture leads by offering free database architecture reviews and blueprints
- **Rapid Prototyping**: Quickly generate database schemas for MVP development and validation
- **Legacy System Modernization**: Help companies redesign outdated database structures with modern best practices
- **Technical Consulting**: Provide instant database assessments and recommendations to clients
- **Educational Tool**: Teach database design principles through AI-generated examples and feedback
- **Pre-Sales Tool**: Demonstrate technical expertise to prospects before engagement

## Key Features
- Multi-agent AI collaboration with specialized roles
- Automatic quality control and iterative improvement (max 3 retries)
- Support for PostgreSQL, MySQL, MSSQL, MariaDB
- Production-ready SQL script generation
- Comprehensive scoring system (Schema/Performance/Scalability/Security)
- Optional automatic SQL execution
- Detailed feedback with actionable recommendations
- Customizable form fields for different industries
- Error handling and graceful failures
- Complete audit trail of all agent decisions

## Setup Instructions

### Prerequisites
- Anthropic API key (Claude Sonnet 4.5 access)
- PostgreSQL/MySQL database (optional, for auto-execution)
- n8n version 1.0+ with LangChain nodes

### Configuration Steps
1. Import the workflow JSON into your n8n instance
2. Configure Anthropic API credentials:
   - Add your Anthropic API key in n8n credentials
   - Connect all 4 AI model nodes to your credential
3. (Optional) Configure the database connection:
   - In the "Execute SQL in PostgreSQL" node, add your database credentials
   - Use a TEST/SANDBOX database, never production
   - Or disable this node if you prefer manual execution
4. Customize the form (optional):
   - Edit form fields in the "On form submission" node
   - Add industry-specific questions
   - Adjust required fields based on your needs
5. Test the workflow:
   - Use the form URL to submit a test request
   - Check execution time and quality
   - Verify all agents are responding correctly
6. Customize agent prompts (optional):
   - Adjust system messages for industry-specific requirements
   - Modify scoring criteria in Agent 3
   - Add custom validation rules in Agent 2
7. Deploy:
   - Share the form URL as your lead magnet
   - Embed it in your website or landing pages
   - Set up email notifications for submissions

### Cost Considerations
- Each execution costs ~$0.15-0.30 in API calls
- Failed attempts (retries) increase cost
- Consider rate limiting for public forms

## Requirements
**Required:**
- Anthropic API Key (Claude access)
- n8n version 1.0+
- LangChain nodes enabled

**Optional:**
- PostgreSQL/MySQL database connection (for auto-execution)
- Email service (for result delivery)
- CRM integration (for lead capture)

## Tags
#ai-agents #database-design #postgresql #mysql #lead-generation #automation #langchain #claude #schema-design #multi-agent #consulting-tool #saas-tool #development #code-generation #sql-generator

Website: https://evervise.ai/
Support: mark.marin@evervise.com
N8N Link
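To make the Smart Quality Loop concrete, here is a minimal sketch of the retry gate as a Code node. It assumes Agent 3 returns `score` (0-100) and `feedback` on each item; those field names and the 80-point B-grade cutoff are illustrative, not the template's exact values.

```javascript
// n8n Code node: quality-loop gate between Agent 3 and the next step.
const MAX_RETRIES = 3;
const B_GRADE = 80; // illustrative cutoff for a "B" score

return items.map((item) => {
  const { score = 0, feedback = [], iteration = 1 } = item.json;
  const passed = score >= B_GRADE;
  const retry = !passed && iteration < MAX_RETRIES;

  return {
    json: {
      ...item.json,
      passed,
      retry, // an IF node routes back to Agent 1 when this is true
      iteration: iteration + (retry ? 1 : 0),
      // Previous feedback is fed into the next design attempt.
      previousFeedback: retry ? feedback : undefined,
    },
  };
});
```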
by Țugui Dragoș
This workflow automates the process of turning meeting recordings into structured notes and actionable tasks using AssemblyAI and Google Sheets. It is ideal for teams who want to save time on manual note-taking and ensure that action items from meetings are never missed.

## What it does
- Receives a meeting recording (audio file) via webhook
- Transcribes the audio using AssemblyAI
- Uses AI to generate structured meeting notes and extract action items (tasks)
- Logs meeting details and action items to a Google Sheet for easy tracking

## Use cases
- Automatically document meetings and share notes with your team
- Track action items and responsibilities from every meeting
- Centralize meeting outcomes and tasks in Google Sheets

## Quick Setup
1. **AssemblyAI API Key**: Sign up at AssemblyAI and get your API key.
2. **Google Sheets Credentials**: Set up a Google Service Account and share your target Google Sheet with the service account email.
3. **OpenAI API Key** (optional, if using OpenAI for notes extraction): Get your API key from OpenAI.
4. Configure the following essential nodes:
   - **Recording Ready Webhook**: Set the webhook URL in your meeting platform to trigger the workflow when a recording is ready.
   - **Workflow Configuration**: Enter your AssemblyAI API key, default due date, and admin email.
   - **AssemblyAI Transcription**: Add your AssemblyAI API key in the credentials.
   - **Generate Meeting Notes & Extract Action Items**: Add your OpenAI API key if required.
   - **Log Meeting to Sheets**: Enter your Google Sheets document ID and sheet name.

## How to Use AssemblyAI in this Workflow
- The workflow sends the meeting audio file to AssemblyAI via the AssemblyAI Transcription node (the sketch below shows the underlying API calls).
- AssemblyAI processes the audio and returns a full transcript.
- The transcript is then used by AI nodes to generate meeting notes and extract action items.

## Requirements
- AssemblyAI API key
- Google Service Account credentials
- (Optional) OpenAI API key for advanced note and action item extraction

Start the workflow by sending a meeting recording to the webhook URL. The rest is fully automated!
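For reference, here is a minimal sketch of the submit-and-poll pattern the transcription step follows, using AssemblyAI's v2 transcript endpoints. It assumes the first item carries a publicly reachable `audio_url` and that the API key is available as an environment variable; in the template itself the key lives in node credentials.

```javascript
// n8n Code node: submit a recording to AssemblyAI, then poll for the result.
const API_KEY = $env.ASSEMBLYAI_API_KEY; // illustrative; prefer n8n credentials
const headers = { authorization: API_KEY };

// 1. Submit the recording for transcription.
const { id } = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://api.assemblyai.com/v2/transcript',
  headers,
  body: { audio_url: items[0].json.audio_url },
  json: true,
});

// 2. Poll until the transcript is ready.
let transcript;
do {
  await new Promise((resolve) => setTimeout(resolve, 3000));
  transcript = await this.helpers.httpRequest({
    method: 'GET',
    url: `https://api.assemblyai.com/v2/transcript/${id}`,
    headers,
    json: true,
  });
} while (transcript.status !== 'completed' && transcript.status !== 'error');

if (transcript.status === 'error') throw new Error(transcript.error);
return [{ json: { text: transcript.text } }];
```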
by Jitesh Dugar
Transform college admissions from an overwhelming manual process into an intelligent, efficient, and equitable system that analyzes essays, scores applicants holistically, and identifies top candidates, saving 40+ hours per week while improving decision quality.

## What This Workflow Does
Automates comprehensive application review with AI-powered analysis:

- **Application Intake** - Captures complete college applications via Jotform
- **AI Essay Analysis** - Deep analysis of personal statements and supplemental essays for:
  - Writing quality, authenticity, and voice
  - AI-generated content detection
  - Specificity and research quality
  - Red flags (plagiarism, inconsistencies, generic writing)
- **Holistic Review AI** - Evaluates applicants across five dimensions:
  - Academic strength (GPA, test scores, rigor)
  - Extracurricular profile (leadership, depth, impact)
  - Personal qualities (character, resilience, maturity)
  - Institutional fit (values alignment, contribution potential)
  - Diversity contribution (unique perspectives, experiences)
- **Smart Routing** - Automatically categorizes and routes applications:
  - Strong Admit (85-100): Slack alert → Director email → Interview invitation → Fast-track
  - Committee Review (65-84): Detailed analysis → Committee discussion → Human decision
  - Standard Review (<65): Acknowledgment → Human verification → Standard timeline
- **Comprehensive Analytics** - All applications logged with scores, recommendations, and outcomes

## Key Features

### AI Essay Analysis Engine
- **Writing Quality Assessment**: Grammar, vocabulary, structure, narrative coherence
- **Authenticity Detection**: Distinguishes genuine voice from AI-generated content (GPT detectors)
- **Content Depth Evaluation**: Self-awareness, insight, maturity, storytelling ability
- **Specificity Scoring**: Generic vs tailored "Why Us" essays with research depth
- **Red Flag Identification**: Plagiarism indicators, privilege blindness, inconsistencies, template writing
- **Thematic Analysis**: Core values, motivations, growth narratives, unique perspectives

### Holistic Review Scoring (0-100 Scale)
- **Academic Strength (35%)**: GPA in context, test scores, course rigor, intellectual curiosity
- **Extracurricular Profile (25%)**: Quality over quantity, leadership impact, commitment depth
- **Personal Qualities (20%)**: Character, resilience, empathy, authenticity, self-awareness
- **Institutional Fit (15%)**: Values alignment, demonstrated interest, contribution potential
- **Diversity Contribution (5%)**: Unique perspectives, life experiences, background diversity

### Intelligent Candidate Classification
- **Admit**: Top 15% - clear admit, exceptional across multiple dimensions
- **Strong Maybe**: Top 15-30% - competitive, needs committee discussion
- **Maybe**: Top 30-50% - solid but not standout, waitlist consideration
- **Deny**: Below threshold - does not meet competitive standards (always human-verified)

### Automated Workflows
- **Priority Candidates**: Immediate Slack alerts, director briefs, interview invitations
- **Committee Cases**: Detailed analysis packets, discussion points, voting workflows
- **Standard Processing**: Professional acknowledgments, timeline communications
- **Interview Scheduling**: Automated invitations with candidate-specific questions

## Perfect For
- **Selective Colleges & Universities**: 15-30% acceptance rates, holistic review processes
- **Liberal Arts Colleges**: Emphasis on essays, personal qualities, institutional fit
- **Large Public Universities**: Processing thousands of applications efficiently
- **Graduate Programs**: MBA, law, medical school admissions
- **Scholarship Committees**: Evaluating merit and need-based awards
- **Honors Programs**: Identifying top candidates for selective programs
- **Private High Schools**: Admissions teams with holistic processes

## Admissions Impact

### Efficiency & Productivity
- **40-50 hours saved per week** on initial application review
- **70% faster** essay evaluation with AI pre-analysis
- **3x more applications** processed per reader
- **Zero data entry** - all information auto-extracted
- **Consistent evaluation** across thousands of applications
- **Same-day turnaround** for top candidate identification

### Decision Quality Improvements
- **Objective scoring** reduces unconscious bias
- **Consistent criteria** applied to all applicants
- **Essay authenticity checks** catch AI-written applications
- **Holistic view** considers all dimensions equally
- **Data-driven insights** inform committee discussions
- **Fast-track top talent** before competitors

### Equity & Fairness
- **Standardized evaluation** ensures fair treatment
- **First-generation flagging** provides context
- **Socioeconomic consideration** in holistic scoring
- **Diverse perspectives valued** in diversity score
- **Bias detection** in essay analysis
- **Audit trail** for compliance and review

### Candidate Experience
- **Instant acknowledgment** of application receipt
- **Professional communication** at every stage
- **Clear timelines** and expectations
- **Interview invitations** for competitive candidates
- **Respectful process** for all applicants regardless of outcome

## What You'll Need

### Required Integrations
- **Jotform** - Application intake forms. Create your form for free on Jotform using this link
- **OpenAI API** - GPT-4o for analysis (~$0.15-0.25 per application)
- **Gmail/Outlook** - Applicant and staff communication (free)
- **Google Sheets** - Application database and analytics (free)

### Optional Integrations
- **Slack** - Real-time alerts for strong candidates ($0-8/user/month)
- **Google Calendar** - Interview scheduling automation (free)
- **Airtable** - Advanced application tracking (alternative to Sheets)
- **Applicant Portal Integration** - Status updates via API
- **CRM Systems** - Slate, TargetX, Salesforce for higher ed

## Setup Guide (3-4 Hours)

### Step 1: Create Application Form (60 min)
Build a comprehensive Jotform with these sections:

**Basic Information**
- Full name, email, phone
- High school, graduation year
- Intended major

**Academic Credentials**
- GPA (weighted/unweighted, scale)
- SAT score (optional)
- ACT score (optional)
- Class rank (if available)
- Academic honors

**Essays (Most Important!)**
- Personal statement (650 words max)
- "Why Our College" essay (250-300 words)
- Supplemental prompts (program-specific)

**Activities & Achievements**
- Extracurricular activities (list with hours/week, years)
- Leadership positions (with descriptions)
- Honors and awards
- Community service hours
- Work experience

**Additional Information**
- First-generation college student (yes/no)
- Financial aid needed (yes/no)
- Optional: demographic information
- Optional: additional context

### Step 2: Import n8n Workflow (15 min)
1. Copy the JSON from the artifact
2. In n8n: Workflows → Import → Paste
3. Includes all nodes + 7 detailed sticky notes

### Step 3: Configure OpenAI API (20 min)
1. Get an API key: https://platform.openai.com/api-keys
2. Add it to both AI nodes (Essay Analysis + Holistic Review)
3. Model: gpt-4o (best for nuanced analysis)
4. Temperature: 0.3 (consistency with creativity)
5. Test with a sample application
6. Cost: $0.15-0.25 per application (essay analysis + holistic review)

### Step 4: Customize Institutional Context (45 min)
Edit the AI prompts to reflect YOUR college.

In the Holistic Review prompt, update:
- College name and type
- Acceptance rate
- Average admitted student profile (GPA, test scores)
- Institutional values and culture
- Academic programs and strengths
- What makes your college unique
- Desired student qualities

In the Essay Analysis prompt, add:
- Specific programs to look for mentions of
- Faculty names applicants should reference
- Campus culture keywords
- Red flags specific to your institution

### Step 5: Setup Email Communications (30 min)
1. Connect Gmail/Outlook OAuth
2. Update all recipient addresses:
   - admissions-director@college.edu
   - admissions-committee@college.edu
   - Email addresses for strong candidate alerts
3. Customize email templates:
   - Add college name, logo, branding
   - Update contact information
   - Adjust tone to match institutional voice
   - Include decision release dates
   - Add applicant portal links

### Step 6: Configure Slack Alerts (15 min, Optional)
1. Create channel: #admissions-strong-candidates
2. Add a webhook URL or bot token
3. Test with a mock strong candidate
4. Customize the alert format and recipients

### Step 7: Create Admissions Database (30 min)
Google Sheet with columns:
by Dzaky Jaya
This n8n workflow demonstrates how to configure an AI Agent for financial research purposes, especially for IDX data, through the Sectors App API.

## Use cases
- Research the stock market in Indonesia
- Analyze the performance of companies belonging to certain subsectors, or a specific company
- Compare financial metrics between BBCA and BBRI
- Provide technical analysis of a certain ticker's stock movement
- and many more, all from a conversational agent chat UI

## Main components
- **Input-n8nChatNative**: handles and processes input from the native n8n chat UI
- **Input-TelegramBot**: handles and processes input from a Telegram Bot
- **Input-WebUI(Webhook)**: handles and processes input from a hosted Web UI through a webhook
- **Main Agent**: processes raw user queries and delegates tasks to a specialized agent if needed
- **Spec Agent - Sectors App**: makes requests to the Sectors App API to get real-time financial data listed on IDX from the available endpoints
- **Spec Agent - Web Search**: performs web searches via Google Grounding (Gemini API) and Tavily Search
- **Vector Document Processing**: processes documents uploaded by the user into embeddings and a vector store

## How it works
1. User queries may be received from multiple platforms (we use three here: Telegram, a hosted Web UI, and the native n8n chat UI).
2. If the user also uploads a document, the workflow processes the document and stores it in the vector store.
3. The request is sent to the Main Agent to process the query.
4. The Main Agent decides which tasks to delegate to a Specialized Agent if needed.
5. The result is then sent back to the user based on the platform (see the routing sketch below).

## How to use
You need these APIs:
- Gemini API: get it free from https://aistudio.google.com/
- Tavily API: get it free from https://www.tavily.com/
- Sectors App API: get it from https://sectors.app/api/

You can optionally change the model or add a fallback model to handle token limits, since the workflow may consume quite a few tokens.
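To illustrate the final routing step, here is a minimal sketch of how a Code node could flag which reply path the answer should take. The `source` field and the branch names are illustrative assumptions; in practice a Switch node keyed on the same field does this declaratively.

```javascript
// n8n Code node: route the agent's answer back to the originating platform.
// Assumes an earlier node stamped item.json.source with
// 'telegram' | 'webhook' | 'chat' (an illustrative field name).
const REPLY_PATHS = {
  telegram: 'telegramNode',     // Telegram send-message node
  webhook: 'respondToWebhook',  // Respond to Webhook node for the Web UI
  chat: 'chatResponse',         // native n8n chat response
};

return items.map((item) => {
  const source = item.json.source || 'chat';
  // Downstream IF/Switch branches read this flag to pick the reply node.
  item.json.replyVia = REPLY_PATHS[source] || REPLY_PATHS.chat;
  return item;
});
```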
by Swot.AI
## Description
This workflow lets you upload a PDF document and automatically analyze it with AI. It extracts the text, summarizes the content, flags key clauses or risks, and then delivers the results via Gmail while also storing them in Google Sheets for tracking. It's designed for legal, compliance, or contract review use cases, but can be adapted for any document analysis scenario.

Test it here: PDF Document Assistant

## Instructions / Setup
1. **Webhook Input**: Upload a PDF document by sending it to the webhook URL.
2. **Extract from File**: The workflow extracts text from the uploaded PDF.
3. **Pre-processing (Code Node)**: Cleans and formats the extracted text to remove unwanted line breaks or artifacts (see the sketch below).
4. **Basic LLM Chain (OpenAI)**: Summarizes or restructures the document content using OpenAI. Adjust the prompt inside to fit your analysis needs (summary, risk flags, clause extraction).
5. **Post-processing (Code Node)**: Further structures the AI output into a clean format (JSON, HTML, or plain text).
6. **AI Agent (OpenAI)**: Runs deeper analysis, answers questions, and extracts insights.
7. **Gmail**: Sends the results to a recipient. Configure Gmail credentials and set your recipient address.
8. **Google Sheets**: Appends results to a Google Sheet for record-keeping or audits.
9. **Respond to Webhook**: Sends a quick acknowledgment back to confirm the document was received.

## Credentials Needed
- OpenAI API key (for Chat Model + Agent)
- Gmail account (OAuth2)
- Google Sheets account (OAuth2)

## Example Use Case
Upload a contract PDF → workflow extracts clauses → AI flags risky terms → Gmail sends a formatted summary → results are stored in Google Sheets.
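For illustration, here is a minimal sketch of what the pre-processing Code node could do. It assumes the Extract from File node put the raw PDF text in `item.json.text` (the field name is an assumption, not the template's exact configuration).

```javascript
// n8n Code node: clean up raw PDF text before sending it to the LLM.
return items.map((item) => {
  const cleaned = (item.json.text || '')
    .replace(/-\n(?=[a-z])/g, '')        // rejoin words hyphenated across lines
    .replace(/\n{3,}/g, '\n\n')          // collapse runs of blank lines
    .replace(/[ \t]+\n/g, '\n')          // trim trailing whitespace per line
    .replace(/([^\n])\n(?!\n)/g, '$1 ')  // unwrap single line breaks inside paragraphs
    .trim();

  item.json.text = cleaned;
  return item;
});
```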
by Trung Tran
# Code of Conduct Q&A Slack Chatbot with RAG

> Empower employees to instantly access and understand the company's Code of Conduct via a Slack chatbot, powered by Retrieval-Augmented Generation (RAG) and LLMs.

## Who's it for
This workflow is designed for:
- **HR and compliance teams** to automate policy-related inquiries
- **Employees** who want quick answers to Code of Conduct questions directly inside Slack
- **Startups or enterprises** that need internal compliance self-service tools powered by AI

## How it works / What it does
This RAG-powered Slack chatbot answers user questions based on your uploaded Code of Conduct PDF using GPT-4 and embedded document chunks. Here's the flow:
1. **Receive Message from Slack**: A webhook triggers when a message is posted in Slack.
2. **Check if it's a valid query**: Filters out non-user messages (e.g., bot mentions).
3. **Run Agent with RAG**: Uses GPT-4 with the Query Data Tool to retrieve relevant document chunks and returns a well-formatted, context-aware answer.
4. **Send Response to Slack**: Fetches user info and posts the answer back in the same channel.

**Document Upload Flow**: HR can upload the PDF Code of Conduct file. It's parsed, chunked, embedded using OpenAI, and stored for future query retrieval (a chunking sketch follows below). A backup copy is saved to Google Drive.

## How to set up
1. **Prepare your environment**:
   - Slack Bot token & webhook configured (sample Slack app manifest: https://wisestackai.s3.ap-southeast-1.amazonaws.com/slack_bot_manifest.json)
   - OpenAI API key (for GPT-4 & embedding)
   - Google Drive credentials (optional, for backup)
2. **Upload the Code of Conduct PDF**:
   - Use the designated node to upload your document (sample file: https://wisestackai.s3.ap-southeast-1.amazonaws.com/20220419-ingrs-code-of-conduct-policy-en.pdf)
   - This triggers chunking → embedding → data store.
3. **Deploy the chatbot**:
   - Host the webhook and connect it to your Slack app.
   - Share the command format with employees (e.g., @CodeBot Can I accept gifts from partners?)
4. **Monitor and iterate**:
   - Improve the chunk size or embedding model if queries aren't accurate.
   - Review unanswered queries to enhance coverage.

## Requirements
- n8n (self-hosted or Cloud)
- Slack App (with chat:write, users:read, commands)
- OpenAI account (embedding + GPT-4 access)
- Google Drive integration (for backups)
- Uploaded Code of Conduct in PDF format

## How to customize the workflow

| What to Customize | How to Do It |
|---|---|
| Prompt style | Edit the System & User prompts inside the Code Of Conduct Agent node |
| Document types | Upload additional policy PDFs and tag them differently in metadata |
| Agent behavior | Tune GPT temperature or replace with a different LLM |
| Slack interaction | Customize message formats or trigger phrases |
| Data Store engine | Swap to Pinecone, Weaviate, Supabase, etc. depending on use case |
| Multilingual support | Preprocess text and support locale detection via Slack metadata |
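As a companion to the document upload flow, here is a minimal sketch of the chunking step before embedding. The input field name (`text`), the 800-character chunk size, and the 100-character overlap are illustrative tuning knobs, not the template's exact values; overlap helps a retrieved chunk keep enough surrounding context to answer a question.

```javascript
// n8n Code node: split the parsed PDF text into overlapping chunks.
const CHUNK_SIZE = 800;
const OVERLAP = 100;

return items.flatMap((item) => {
  const text = (item.json.text || '').replace(/\s+/g, ' ').trim();
  const chunks = [];

  for (let start = 0; start < text.length; start += CHUNK_SIZE - OVERLAP) {
    chunks.push({
      json: {
        chunk: text.slice(start, start + CHUNK_SIZE),
        metadata: { source: 'code-of-conduct', offset: start },
      },
    });
  }
  return chunks;
});
```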