by Mano
## What This Workflow Does
This intelligent news monitoring system automatically:
- **RSS Feed Aggregation:** Pulls the latest headlines from Google News RSS feeds and Hacker News
- **AI Content Filtering:** Identifies and prioritizes AI-related news from the past 24 hours
- **Smart Summarization:** Uses OpenAI to create concise, informative summaries of top stories
- **Telegram Delivery:** Sends formatted news digests directly to your Telegram channel
- **Scheduled Execution:** Runs automatically every morning at 8:00 AM (configurable)

## Key Features
- **Multi-Source News:** Combines Google News and Hacker News for comprehensive coverage
- **AI-Powered Filtering:** Automatically identifies relevant AI and technology news
- **Intelligent Summarization:** OpenAI generates clear, concise summaries with key insights
- **Telegram Integration:** Instant delivery to your preferred chat or channel
- **Daily Automation:** Scheduled to run every morning for fresh news updates
- **Customizable Timing:** Easy to adjust the schedule for different time zones

## How It Works
1. **Scheduled Trigger:** The workflow activates daily at 8:00 AM (or your preferred time)
2. **RSS Feed Reading:** Fetches the latest articles from Google News and Hacker News feeds
3. **Content Filtering:** Identifies AI-related stories from the past 24 hours
4. **AI Summarization:** OpenAI processes and summarizes the most important stories
5. **Telegram Delivery:** Sends the formatted news digest to your Telegram channel

## Setup Requirements
- **OpenAI API Key:** For AI-powered news summarization
- **Telegram Bot:** Create one via @BotFather and get the bot token + chat ID
- **RSS Feed Access:** Google News and Hacker News RSS feeds (public)

## Configuration Steps
1. **Set Up Telegram Bot:** Message @BotFather on Telegram, create a new bot with the /newbot command, and save the bot token and chat ID
2. **Configure OpenAI:** Add OpenAI API credentials in n8n and ensure access to GPT models for summarization
3. **Update RSS Feeds:** Verify the Google News RSS feed URLs and confirm Hacker News feed accessibility
4. **Schedule Timing:** Adjust the Schedule Trigger for your time zone (default: 8:00 AM daily; modify as needed)
5. **Test & Deploy:** Run a test execution to verify all connections, then activate the workflow for daily automation

## Customization Options
- **Time Zone Adjustment:** Modify the Schedule Trigger for different regions
- **News Sources:** Add additional RSS feeds for broader coverage
- **Filtering Criteria:** Adjust the AI prompts to focus on specific topics
- **Summary Length:** Customize the OpenAI prompts for different detail levels
- **Delivery Format:** Modify the Telegram message formatting and structure

## Use Cases
- **AI Professionals:** Stay updated on the latest AI developments and industry news
- **Tech Teams:** Monitor technology trends and competitor announcements
- **Researchers:** Track academic and industry research developments
- **Content Creators:** Source material for AI-focused content and newsletters
- **Business Leaders:** Stay informed about AI market trends and opportunities
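The content-filtering step can be sketched as an n8n Code node. This is a minimal illustration, not the template's actual node: the keyword list and the item field names (`title`, `isoDate`, `contentSnippet`) are assumptions to adapt to your feeds.

```javascript
// Hypothetical filter: keep only AI-related items published in the last 24 hours.
// Keyword pattern and field names are assumptions -- match them to your RSS node output.
const AI_PATTERN = /\b(ai|llm|gpt|openai|gemini|machine learning|artificial intelligence)\b/i;
const DAY_MS = 24 * 60 * 60 * 1000;

function isRecentAiItem(item, now = Date.now()) {
  const ageOk = now - new Date(item.isoDate).getTime() <= DAY_MS;
  const text = `${item.title} ${item.contentSnippet || ''}`;
  return ageOk && AI_PATTERN.test(text);
}

// In an n8n Code node this would typically be applied as:
// return items.filter((i) => isRecentAiItem(i.json));
```

Word boundaries (`\b`) keep short tokens like "ai" from matching inside unrelated words such as "rainfall".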
by Sayone Technologies
## AI-Powered Email to Purchase Order Workflow
Automatically scan your inbox for new purchase order requests, extract order details using Gemini AI, and log them into Google Sheets, all without manual effort.

## Core Capabilities
- Runs every minute to check unread emails
- Filters emails by subject
- Uses Gemini AI to summarize email content and extract structured order details
- Formats dates into ISO calendar weeks
- Adds product data from Google Sheets to complete the order info
- Appends final purchase order records to a Google Sheet (without replacing previous ones)

## Setup Essentials
- Gmail account for fetching unread emails
- Google Gemini (PaLM) API credentials
- Google Sheet with predefined purchase order headers

## Activation Guide
1. Configure Gmail & Google Sheets credentials in n8n
2. Adjust the subject filter to match your email rules
3. Connect Gemini AI with your API credentials
4. Create a Google Sheet with the required headers
5. Activate the workflow and let it run in the background

## Customizing the Workflow
- **Email Filters:** Change keywords in the filter node to match your purchase order email subjects
- **Order Fields:** Modify the Set and Append to Google Sheet nodes if your schema differs
- **AI Instructions:** Adjust the AI Agent's prompt to fit your company's email style or product details
- **Frequency:** Update the Cron node if you want to scan emails less often
- **Target Google Sheet:** Point to a different sheet or tab depending on your department or customer
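The "formats dates into ISO calendar weeks" step can be sketched as a small helper in an n8n Code node. This is an illustration of the standard ISO-8601 week calculation (an ISO week is identified by the year of its Thursday), not the template's exact expression:

```javascript
// Compute the ISO-8601 calendar week (e.g. "2024-W23") for a given date string.
function isoWeek(input) {
  const d = new Date(input);
  // Work in UTC to avoid timezone shifts around midnight.
  const day = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate()));
  // Shift to the Thursday of the current week: ISO weeks belong to their Thursday's year.
  const dow = day.getUTCDay() || 7; // Sunday (0) becomes 7
  day.setUTCDate(day.getUTCDate() + 4 - dow);
  const yearStart = new Date(Date.UTC(day.getUTCFullYear(), 0, 1));
  const week = Math.ceil(((day - yearStart) / 86400000 + 1) / 7);
  return `${day.getUTCFullYear()}-W${String(week).padStart(2, '0')}`;
}
```

Note that dates in early January can land in the previous ISO year, e.g. 2021-01-01 falls in week 2020-W53.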
by Thiago Vazzoler Loureiro
## Description
This workflow vectorizes the TUSS (Terminologia Unificada da Saúde Suplementar) table by transforming medical procedures into vector embeddings ready for semantic search. It automates the import of TUSS data, performs text preprocessing, and uses Google Gemini to generate vector embeddings. The resulting vectors can be stored in a vector database, such as PostgreSQL with pgvector, enabling efficient semantic queries across healthcare data.

## What Problem Does This Solve?
Searching for medical procedures using traditional keyword matching is often imprecise. This workflow enables semantic similarity search, which retrieves more relevant results based on the meaning of the query rather than exact word matches.

## How It Works
1. **Import TUSS data:** Load medical procedure entries from the TUSS table.
2. **Preprocess text:** Clean and prepare the text for embedding.
3. **Generate embeddings:** Use Google Gemini to convert each procedure into a semantic vector.
4. **Store vectors:** Save the output in a PostgreSQL database with the pgvector extension.

## Prerequisites
- An n8n instance (self-hosted)
- A PostgreSQL database with the pgvector extension enabled
- Access to the Google Gemini API
- TUSS data in a structured format (CSV, database, or API source)

## Customization Tips
- Adapt the preprocessing logic to your own language or domain-specific terms.
- Swap Google Gemini for another embedding model, such as OpenAI or Cohere.
- Adjust the chunking logic to control the granularity of the semantic representation.

## Setup Instructions
1. Prepare a source (database or CSV) with TUSS data. You need at least two fields:
   - CD_ITEM (medical procedure code)
   - DS_ITEM (medical procedure description)
2. Configure your Oracle or PostgreSQL database credentials in the Credentials section of n8n.
3. Make sure your PostgreSQL database has pgvector installed.
4. Replace the placeholder table and column names with your actual TUSS table.
5. Connect your Google Gemini credentials (via OpenAI proxy or official connector).
6. Run the workflow to vectorize all medical procedure descriptions.
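The preprocessing step can be sketched as a Code node that normalizes each DS_ITEM before embedding. This is a minimal example, not the template's actual logic; the cleanup rules are assumptions you would tune for Portuguese medical terminology:

```javascript
// Hypothetical preprocessing for TUSS procedure descriptions before embedding.
function preprocess(dsItem) {
  return dsItem
    .normalize('NFC')                         // consistent accented characters (Portuguese text)
    .replace(/\s+/g, ' ')                     // collapse runs of whitespace
    .replace(/[^\p{L}\p{N} .,()%/-]/gu, '')   // drop stray symbols, keep letters/digits/basic punctuation
    .trim()
    .toLowerCase();
}

// Keep the procedure code alongside the cleaned text so each vector row stays traceable:
function toEmbeddingRow(item) {
  return { code: item.CD_ITEM, text: preprocess(item.DS_ITEM) };
}
```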
by Fahmi Fahreza
## Automated Crypto Forecast Pipeline using Decodo and Gmail
Sign up for Decodo HERE for a discount.

This template scrapes CoinGecko pages for selected coins, converts metrics into clean JSON, stores them in an n8n Data Table, generates 24-hour direction forecasts with Gemini, and emails a concise report.

## Who's it for?
Crypto watchers who want automated snapshots, forecasts, and a daily email without managing a full data stack.

## How it works
1. A 30-minute schedule loops over the coins, scrapes CoinGecko (via Decodo), parses the metrics, and upserts them to the Data Table.
2. An 18:00 schedule loads the last 48 hours of data.
3. Gemini estimates next-24h direction windows.
4. The email is rendered (HTML + plain text) and sent.

## How to set up
1. Add Decodo, Gmail, and Gemini credentials.
2. Open Configure Coins to edit tickers.
3. Set the Data Table ID.
4. Replace the recipient email.
5. (Self-host only) The Decodo community node @decodo/n8n-nodes-decodo is required.
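The email-rendering step (HTML plus plain-text fallback) can be sketched as a Code node. This is only an illustration; the forecast field names (`coin`, `direction`, `confidence`) are assumptions to match to your Gemini output:

```javascript
// Hypothetical rendering step: turn forecast rows into an HTML table plus a plain-text fallback.
function renderReport(rows) {
  const text = rows.map((r) => `${r.coin}: ${r.direction} (${r.confidence}%)`).join('\n');
  const html =
    '<table><tr><th>Coin</th><th>24h direction</th><th>Confidence</th></tr>' +
    rows
      .map((r) => `<tr><td>${r.coin}</td><td>${r.direction}</td><td>${r.confidence}%</td></tr>`)
      .join('') +
    '</table>';
  return { text, html };
}
```

Sending both parts keeps the report readable in clients that strip HTML.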
by Yang
## Who's it for
This template is perfect for TikTok creators, content marketers, and social media teams who want to turn viral comments into engaging short-form videos without manually scripting, recording, or editing. If you want to keep up with trends and consistently publish high-quality avatar videos, this workflow automates the entire process from comment selection to the enhanced final video.

## What it does
The workflow takes TikTok video URLs and their top comments from a Data Table, extracts the transcript using Dumpling AI, and uses GPT-4 to write a natural and engaging TikTok script inspired by the comment. It then generates a full AI avatar video through Captions.ai, enhances it with subtitles and B-roll using Submagic, and finally saves all video details into Airtable for tracking.

Here's what happens step by step:
1. Pulls TikTok videos and their top comments from a Data Table
2. Sends video URLs to Dumpling AI to retrieve transcripts
3. Feeds both the transcript and comment into GPT-4 to generate a conversational TikTok script
4. Cleans and formats the script using JavaScript
5. Sends the script to Captions.ai to produce an AI avatar video
6. Checks the video status and retries if needed
7. Enhances the final video with Submagic for captions and effects
8. Receives the enhanced video via webhook and logs the details into Airtable

## How it works
- **Schedule Trigger:** Runs automatically at set intervals to start the workflow
- **Data Table:** Retrieves TikTok video URLs and associated comments
- **Dumpling AI:** Extracts transcripts from the video URLs
- **GPT-4:** Generates a compelling TikTok script based on the comment
- **JavaScript Node:** Cleans up the script formatting for smooth avatar generation
- **Captions.ai:** Creates an AI avatar video from the cleaned script
- **Wait & Check:** Monitors the video creation status and retries if necessary
- **Submagic:** Enhances the video with captions, zooms, and B-roll effects
- **Webhook & Airtable:** Receives the final video data and saves the URL and ID for future use

## Requirements
- Dumpling AI API key stored as HTTP header credentials
- OpenAI GPT-4 credentials
- Captions.ai API credentials
- Submagic API credentials
- Airtable base with fields for Video URL and Caption Video ID
- A properly structured Data Table containing TikTok keywords or video URLs

## How to customize
- Adjust the GPT-4 system prompt to shape the tone, style, or format of the TikTok script
- Change the avatar or creator settings in Captions.ai to match your brand personality
- Modify Submagic settings to control subtitle styling or B-roll effects
- Integrate approval steps before final video generation if needed
- Extend the workflow to auto-publish videos to TikTok or store them in cloud drives

> This workflow lets you transform TikTok comments into engaging AI avatar videos with captions and edits, completely on autopilot. It's a powerful way to scale content output and stay ahead of trends without manual scripting or filming.
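The "cleans and formats the script using JavaScript" step can be sketched as follows. This is only a plausible example of such a cleanup node; the exact rules in the template may differ:

```javascript
// Hypothetical cleanup before sending the GPT-4 script to Captions.ai:
// strip markdown formatting and [stage directions] the avatar shouldn't read aloud.
function cleanScript(raw) {
  return raw
    .replace(/`{3}[\s\S]*?`{3}/g, '')  // drop stray code fences
    .replace(/[*_#>`]/g, '')           // drop markdown formatting characters
    .replace(/\[[^\]]*\]/g, '')        // drop [bracketed stage directions]
    .replace(/\n{2,}/g, '\n')          // collapse blank lines
    .replace(/[ \t]{2,}/g, ' ')        // collapse doubled spaces left by removals
    .trim();
}
```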
by Guillaume Duvernay
Stop manually searching for songs and let an AI DJ do the work for you. This template provides a complete, end-to-end system that transforms any text prompt into a ready-to-play Spotify playlist. It combines the creative understanding of a powerful AI Agent with the real-time web knowledge of Linkup to curate perfect, up-to-the-minute playlists for any occasion.

The experience is seamless: simply describe the vibe you're looking for in a web form, and the workflow will automatically create the playlist in your Spotify account and redirect you straight to it. Whether you need "upbeat funk for a sunny afternoon" or "moody electronic tracks for late-night coding," your personal AI DJ is ready to deliver.

## Who is this for?
- **Music lovers:** Create hyper-specific playlists for any mood, activity, or niche genre without the hassle of manual searching.
- **DJs & event planners:** Quickly generate themed playlists for parties, weddings, or corporate events based on a simple brief.
- **Content creators:** Easily create companion playlists for your podcasts, videos, or articles to share with your audience.
- **n8n developers:** A powerful example of how to build an AI agent that uses an external web-search tool to accomplish a creative task.

## What problem does this solve?
- **Creates up-to-date playlists:** A standard AI doesn't know about music released yesterday. By using Linkup's live web search, this workflow can find and include the very latest tracks.
- **Automates the entire creation process:** It handles everything from understanding a vague prompt (like "songs that feel like a summer road trip") to creating a fully populated Spotify playlist.
- **Saves time and effort:** It completely eliminates the tedious task of searching for individual tracks, checking for relevance, and manually adding them to a playlist one by one.
- **Provides a seamless user experience:** The workflow begins with a simple form and ends by automatically opening the finished playlist in your browser. There are no intermediate steps for you to manage.

## How it works
1. **Submit your playlist idea:** You describe the playlist you want and the desired number of tracks in a simple, Spotify-themed web form.
2. **The AI DJ plans the search:** An AI Agent (acting as your personal DJ) analyzes your request, then intelligently formulates a specific query to find the best music.
3. **Web research with Linkup:** The agent uses its Linkup web-search tool to find artists and tracks from across the web that perfectly match your request, returning a list of high-quality suggestions.
4. **The AI DJ curates the list:** The agent reviews the search results and finalizes the tracklist and a creative name for your playlist.
5. **Build the playlist in Spotify:** The workflow takes the agent's final list, creates a new public playlist in your Spotify account, then searches for each individual track to get its ID and adds them all.
6. **Instant redirection:** As soon as the last track is added, the workflow automatically redirects your browser to the newly created playlist on Spotify, ready to be played.

## Setup
1. **Connect your accounts.** You will need to add your credentials for:
   - **Spotify:** in the Spotify nodes.
   - **Linkup:** in the Web query to find tracks (HTTP Request Tool) node. Linkup's free plan is very generous!
   - **Your AI provider (e.g., OpenAI):** in the OpenAI Chat Model node.
2. **Activate the workflow:** Toggle the workflow to "Active."
3. **Use the form:** Open the URL from the On form submission trigger and start creating playlists!

## Taking it further
- **Change the trigger:** Instead of a form, trigger the playlist creation from a **Telegram** message, a **Discord** bot command, or even a webhook from another application.
- **Create collaborative playlists:** Set up a workflow where multiple people can submit song ideas. You could then have a final AI step consolidate all the requests into a single, cohesive prompt to generate the ultimate group playlist.
- **Optimize for speed:** The **Web query to find tracks** node is set to deep search mode for the highest-quality results. You can change this to standard mode for faster and cheaper (but potentially less thorough) playlist creation.
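One practical detail for the "build the playlist in Spotify" step: Spotify's add-items-to-playlist endpoint accepts at most 100 track URIs per request, so long tracklists need to be added in batches. A minimal batching helper (illustrative, not the template's node):

```javascript
// Split a list of Spotify track URIs into request-sized batches
// (the Web API caps playlist additions at 100 URIs per call).
function toBatches(uris, size = 100) {
  const batches = [];
  for (let i = 0; i < uris.length; i += size) {
    batches.push(uris.slice(i, i + size));
  }
  return batches;
}
```

Each batch would then be sent as one HTTP Request node call (or one Spotify node execution) in sequence.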
by Einar César Santos
## AI Brainstorm Generator: Break Through Creative Blocks Instantly
Transform any problem into innovative solutions using AI-powered brainstorming that combines mathematical randomness with intelligent synthesis.

## What This Workflow Does
This workflow generates creative, actionable solutions to any problem by combining:
- **Mersenne Twister algorithm** for high-entropy random seed generation
- **AI-driven random word generation** to create unexpected semantic triggers
- **Dual AI agents** that brainstorm and refine ideas into polished solutions

Simply input your challenge via the chat interface, and within 2 minutes receive a professionally refined solution that combines the best elements from 5+ innovative ideas.

## Key Features
- **Consistent Creativity:** Works regardless of your mental state or time of day
- **High-Quality Randomness:** The MT19937 algorithm has an extremely long period, so trigger words don't fall into repeated patterns
- **Multi-Model Support:** Works with OpenAI GPT-4 or Google Gemini
- **Fast Results:** Complete solutions in under 2 minutes
- **Self-Cleaning:** Redis data expires automatically after use

## Use Cases
- Product ideation and feature development
- Marketing campaign concepts
- Problem-solving for technical challenges
- Business strategy innovation
- Creative writing prompts
- Workshop facilitation

## Requirements
- Redis database (local or cloud)
- OpenAI API key (GPT-4) OR Google Gemini API key
- n8n instance (self-hosted or cloud)

## How It Works
1. The user inputs a problem via the chat trigger
2. The Mersenne Twister generates high-entropy random numbers
3. AI generates 36+ random words as creative triggers
4. The Brainstorming Agent creates 5 innovative solutions
5. The Critic Agent synthesizes the best elements into one refined solution

Perfect for teams facing innovation challenges, solo entrepreneurs seeking fresh perspectives, or anyone who needs to break through creative blocks reliably.

Setup time: ~10 minutes. Difficulty: intermediate. Support: full documentation included via sticky notes.
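For reference, the MT19937 core the template relies on can be sketched in about thirty lines of JavaScript. This is a minimal sketch using the standard parameters from Matsumoto and Nishimura's reference implementation, not the template's actual node:

```javascript
// Minimal MT19937 (Mersenne Twister) sketch with the standard reference parameters.
class MT19937 {
  constructor(seed = 5489) {
    this.mt = new Array(624);
    this.index = 624;
    this.mt[0] = seed >>> 0;
    for (let i = 1; i < 624; i++) {
      const s = this.mt[i - 1] ^ (this.mt[i - 1] >>> 30);
      // Split multiply by 1812433253: JS doubles lose exactness past 2^53,
      // so the high and low 16-bit halves are multiplied separately.
      this.mt[i] =
        ((((s & 0xffff0000) >>> 16) * 1812433253) << 16) +
        (s & 0x0000ffff) * 1812433253 + i;
      this.mt[i] >>>= 0;
    }
  }
  next() {
    if (this.index >= 624) {
      // Regenerate the whole state block ("twist").
      for (let i = 0; i < 624; i++) {
        const y = (this.mt[i] & 0x80000000) | (this.mt[(i + 1) % 624] & 0x7fffffff);
        this.mt[i] = this.mt[(i + 397) % 624] ^ (y >>> 1) ^ (y & 1 ? 0x9908b0df : 0);
      }
      this.index = 0;
    }
    // Tempering improves the equidistribution of the raw state words.
    let y = this.mt[this.index++];
    y ^= y >>> 11;
    y ^= (y << 7) & 0x9d2c5680;
    y ^= (y << 15) & 0xefc60000;
    y ^= y >>> 18;
    return y >>> 0;
  }
}
```

A random trigger word can then be picked with something like `wordList[rng.next() % wordList.length]` (the slight modulo bias is irrelevant for brainstorming).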
by Yang
## What this workflow does
This automation transforms a short text idea into a cinematic video by chaining together three powerful AI services: GPT-4.1 for scene creation, Dumpling AI's FLUX.1 Pro model for visual generation, and the KIE API (Veo 3) for cinematic video creation. The system fully automates the journey from raw concept to video output, returning the final video URL.

## Who is this for
- Content creators who want to test visual ideas quickly
- Agencies creating moodboards, ad scenes, or pitch visuals
- Solo marketers or founders without video editing skills
- AI automation builders creating content tools

## How to set up
### Requirements
- **OpenAI GPT-4.1 API key**
- **Dumpling AI (FLUX.1 Pro model)** API token
- **KIE API** account with access to the **Veo 3** endpoint
- Optional: a tool to store or share the final video link (e.g., Google Sheets, Slack)

### Setup steps
1. **Start with a text idea.** Example: "A lion running through misty mountains at sunrise."
2. **GPT-4.1 Node:** Expands the idea into two parts:
   - **Cinematic Prompt:** Describes the atmosphere, emotion, and camera movement.
   - **Image Prompt:** A vivid single-frame visual to generate the base image.
3. **Dumpling AI Node (FLUX.1 Pro):** Takes the image prompt and returns a cinematic-style image. You can customize dimensions, steps, seed, and guidance level.
4. **KIE API Node (Veo 3):** Sends both the cinematic prompt and image to the Veo 3 model. The model returns a video URL (e.g., 3-6 seconds of cinematic footage).
5. **Final Output:** The video URL is returned by the HTTP node. You can connect this to Airtable, Slack, Telegram, or Google Sheets to log the result or share it with your team.

## How it works
1. You input a text idea
2. GPT-4.1 turns it into both a detailed cinematic prompt and a base image prompt
3. Dumpling AI generates the image
4. KIE API's Veo 3 turns it into a cinematic video
5. The final video URL is returned for download or embedding

## Customization ideas
- Add a Telegram bot to trigger this workflow with an idea via message
- Route video links to a Notion database or content calendar
- Add a loop with rating logic to regenerate low-rated videos
- Use Google Drive to auto-save videos in brand folders
- Automate weekly video ideation for social media using a prompt list

This workflow helps you turn raw imagination into cinematic motion using AI. From social content to storyboarding, you can generate compelling visuals in minutes, with no design or video background needed.
by Robert Breen
Pull a Dun & Bradstreet Business Information Report (PDF) by DUNS, convert the response into a binary PDF file, extract readable text, and use OpenAI to return a clean, flat JSON with only the key fields you care about (e.g., report date, Paydex, viability score, credit limit). Includes sticky notes for quick setup help and guidance.

## What this template does
- Requests a **D&B report** (PDF) for a specific **DUNS** via HTTP
- **Converts** the API response into a **binary PDF file**
- **Extracts** the text from the PDF for analysis
- Uses OpenAI with a Structured Output Parser to return a flat JSON
- Designed to be extended to Sheets, databases, or CRMs

## How it works (node-by-node)
1. **Manual Trigger:** Runs the workflow on demand ("When clicking 'Execute workflow'").
2. **D&B Report (HTTP Request):** Calls the D&B Reports API for a Business Information Report (PDF).
3. **Convert to PDF File (Convert to File):** Turns the D&B response payload into a binary PDF.
4. **Extract Binary (Extract from File):** Extracts text content from the PDF.
5. **OpenAI Chat Model:** Provides the language model context for the analyzer.
6. **Analyze PDF (AI Agent):** Reads the extracted text and applies strict rules for a flat JSON output.
7. **Structured Output (AI Structured Output Parser):** Enforces a schema and validates/auto-fixes the JSON shape.
8. **(Optional) Get Bearer Token (HTTP Request):** Template guidance for OAuth token retrieval (shown as disabled; included for reference if you prefer Bearer flows).

## Setup instructions
### 1) D&B Report (HTTP Request)
- **Auth:** Header Auth (use an n8n **HTTP Header Auth** credential)
- **URL:** https://plus.dnb.com/v1/reports/duns/804735132?productId=birstd&inLanguage=en-US&reportFormat=PDF&orderReason=6332&tradeUp=hq&customerReference=customer%20reference%20text
- **Headers:** Accept: application/json
- **Credential Example:** D&B (HTTP Header Auth)

> Put your Authorization: Bearer <token> header inside this credential, not directly in the node.
### 2) Convert to PDF File (Convert to File)
- **Operation:** toBinary
- **Source Property:** contents[0].contentObject

> This takes the PDF content from the D&B API response and converts it to a binary file for downstream nodes.

### 3) Extract Binary (Extract from File)
- **Operation:** pdf

> Produces a text field with the extracted PDF content, ready for AI analysis.

### 4) OpenAI Model(s)
- **Model:** gpt-4o (as configured in the JSON)
- **Credential:** Your stored **OpenAI API** credential (do **not** hardcode keys)
- **Wiring:**
  - Connect OpenAI Chat Model as ai_languageModel to Analyze PDF
  - Connect another OpenAI Chat Model (also gpt-4o) as ai_languageModel to Structured Output

### 5) Analyze PDF (AI Agent)
- **Prompt Type:** define
- **Text:** ={{ $json.text }}
- **System Message (rules):**

> You are a precision extractor. Read the provided business report PDF and return only a single flat JSON object with the fields below. No arrays/lists. No prose. If a value is missing, output null. Dates: YYYY-MM-DD. Numbers: plain numerics (no commas or $). Prefer the most recent or highest-level overall values if multiple are shown. Never include arrays, nested structures, or text outside of the JSON object.

### 6) Structured Output (AI Structured Output Parser)
- **JSON Schema Example:**

```json
{
  "report_date": "",
  "company_name": "",
  "duns": "",
  "dnb_rating_overall": "",
  "composite_credit_appraisal": "",
  "viability_score": "",
  "portfolio_comparison_score": "",
  "paydex_3mo": "",
  "paydex_24mo": "",
  "credit_limit_conservative": ""
}
```

- **Auto Fix:** enabled
- **Wiring:** Connect as ai_outputParser to **Analyze PDF**

### 7) (Optional) Get Bearer Token (HTTP Request), disabled example
If you prefer fetching tokens dynamically:
- **Auth:** Basic Auth (D&B username/password)
- **Method:** POST
- **URL:** https://plus.dnb.com/v3/token
- **Body Parameters:** grant_type = client_credentials
- **Headers:** Accept: application/json
- **Downstream usage:** Set the header Authorization: Bearer {{$json["access_token"]}} in subsequent calls.
> In this template, the D&B Report node uses the Header Auth credential instead. Use one strategy consistently (credentials are recommended for security).

## Output schema (flat JSON)
The analyzer + parser return a single flat object like:

```json
{
  "report_date": "2024-12-31",
  "company_name": "Example Corp",
  "duns": "123456789",
  "dnb_rating_overall": "5A2",
  "composite_credit_appraisal": "Fair",
  "viability_score": "3",
  "portfolio_comparison_score": "2",
  "paydex_3mo": "80",
  "paydex_24mo": "78",
  "credit_limit_conservative": "25000"
}
```

## Test flow
1. Click Execute workflow (Manual Trigger).
2. Confirm D&B Report returns the PDF response.
3. Check Convert to PDF File for a binary file.
4. Verify Extract from File produces a text field.
5. Inspect Analyze PDF → Structured Output for valid JSON.

## Security notes
- Do not hardcode tokens in nodes; use credentials (HTTP Header Auth or Basic Auth).
- Restrict who can execute the workflow if it's accessible from outside your network.
- Avoid storing sensitive payloads in logs; mask tokens and headers.

## Customize
- Map the structured JSON to Google Sheets, Postgres/BigQuery, or a CRM.
- Extend the schema with additional fields (e.g., number of employees, HQ address); keep it flat.
- Add validation (Set/IF nodes) to ensure required fields exist before writing downstream.

## Troubleshooting
- **Missing PDF text?** Ensure the **Convert to File** source property is contents[0].contentObject.
- **Unauthorized from D&B?** Refresh/verify the token; confirm the Header Auth credential contains Authorization: Bearer <token>.
- **Parser errors?** Keep the agent output short and flat; the Structured Output node will auto-fix minor issues.
- **Different DUNS/product?** Update the D&B Report URL query params (duns, productId, etc.).
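The validation suggested under Customize (ensuring required fields exist and the object stays flat before writing downstream) can be sketched as a Code node. This is an illustrative guard, not part of the template itself:

```javascript
// Hypothetical guard before writing downstream: check the parser output is a flat object
// with every expected key present. Null values are allowed; arrays/objects are rejected.
const REQUIRED_KEYS = [
  'report_date', 'company_name', 'duns', 'dnb_rating_overall',
  'composite_credit_appraisal', 'viability_score', 'portfolio_comparison_score',
  'paydex_3mo', 'paydex_24mo', 'credit_limit_conservative',
];

function validateFlatReport(obj) {
  const missing = REQUIRED_KEYS.filter((k) => !(k in obj));
  const nested = Object.keys(obj).filter((k) => typeof obj[k] === 'object' && obj[k] !== null);
  return { ok: missing.length === 0 && nested.length === 0, missing, nested };
}
```

An IF node downstream could then route `ok: false` items to an error branch instead of your CRM or Sheet.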
## Sticky Notes (included)
- **Overview:** "Fetch D&B Company Report (PDF) → Convert → Extract → Summarize to Structured JSON (n8n)"
- Setup snippets for Data Blocks (optional) and the auth flow

## Contact
Need help customizing this (e.g., routing the PDF to Drive, mapping JSON to your CRM, or expanding the schema)?
- Email: robert@ynteractive.com
- LinkedIn: https://www.linkedin.com/in/robert-breen-29429625/
- Website: https://ynteractive.com
by Victor Manuel Lagunas Franco
I wanted a journal but never had the discipline to write one. Most of my day happens in Discord anyway, so I built this to do it for me. Every night, it reads my Discord channel, asks GPT-4 to write a short reflection, generates an image that captures the vibe of the day, and saves everything to Notion. I wake up with a diary entry I didn't have to write.

## How it works
1. Runs daily at whatever time you set
2. Grabs messages from a Discord channel (last 100)
3. Filters to today's messages only
4. GPT-4 writes a title, summary, mood, and tags
5. DALL-E generates an image based on the day's themes
6. Uploads the image to Cloudinary (Notion needs a public URL)
7. Creates a Notion page with everything formatted nicely

## Setup
- Discord bot credentials (read message history permission)
- OpenAI API key
- Free Cloudinary account for image hosting
- Notion integration connected to your database

## Notion database properties needed
- Title (title)
- Date (date)
- Summary (text)
- Mood (select): Great, Good, Neutral, Low, Productive
- Message Count (number)

Takes about 15 minutes to set up. I use Gallery view in Notion with the AI image as the cover; it looks pretty cool after a few weeks.
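The "filters to today's messages only" step can be sketched as a Code node. Discord returns the last 100 messages regardless of date, so each message timestamp is checked against the current day (a sketch; the template's node may use a different comparison):

```javascript
// Keep only messages whose timestamp falls on the current local day.
function isToday(timestamp, now = new Date()) {
  const d = new Date(timestamp);
  return d.getFullYear() === now.getFullYear() &&
         d.getMonth() === now.getMonth() &&
         d.getDate() === now.getDate();
}

// In an n8n Code node this would typically be:
// return items.filter((i) => isToday(i.json.timestamp));
```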
by Julien DEL RIO
## Who's it for
This template is designed for content creators, podcasters, businesses, and researchers who need to transcribe long audio recordings that exceed OpenAI Whisper's 25 MB file size limit (roughly 20 minutes of audio).

## How it works
This workflow combines n8n, FileFlows, and the OpenAI Whisper API to transcribe audio files of any length:
1. A user uploads an MP3 file through a web form and provides an email address
2. n8n splits the file into 4 MiB chunks and uploads them to FileFlows
3. FileFlows uses FFmpeg to segment the audio into 15-minute chunks (safely under the 25 MB API limit)
4. Each segment is transcribed using OpenAI's Whisper API (configured for French by default)
5. All transcriptions are merged into a single text file
6. The complete transcription is automatically emailed to the user

Processing time: typically 10-15 minutes for a 1-hour audio file.

## Requirements
- n8n instance (self-hosted or cloud)
- FileFlows with Docker and FFmpeg installed
- OpenAI API key (Whisper API access)
- Gmail account for email delivery
- Network access between n8n and FileFlows

## Setup
Complete setup instructions, including the FileFlows workflow import, credentials configuration, and storage setup, are provided in the workflow's sticky notes.

## Cost
OpenAI Whisper API: $0.006 per minute. A 1-hour recording costs approximately $0.36.
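The upload step (splitting the MP3 into 4 MiB chunks before sending to FileFlows) can be sketched as a Code node. This is an illustration of the chunking arithmetic, not the template's exact node:

```javascript
// Split an audio buffer into 4 MiB chunks for upload.
const CHUNK_SIZE = 4 * 1024 * 1024; // 4 MiB

function splitIntoChunks(buffer, size = CHUNK_SIZE) {
  const chunks = [];
  for (let offset = 0; offset < buffer.length; offset += size) {
    // subarray creates a view, so no copy is made until the chunk is sent.
    chunks.push(buffer.subarray(offset, offset + size));
  }
  return chunks;
}
```

Note these 4 MiB pieces are just transport chunks; the actual audio segmentation into valid 15-minute MP3 files is done later by FFmpeg inside FileFlows, since naively slicing an MP3 byte stream would not produce playable segments.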
by Cheng Siong Chin
## Introduction
Automates AI-driven assignment grading with HTML and CSV output. Designed for educators evaluating submissions with consistent criteria and exportable results.

## How It Works
A webhook receives papers; the workflow extracts the text, prepares the data, loads the answer script, grades submissions with AI, generates a results table, converts it to HTML/CSV, and returns the response.

## Workflow Template
Webhook → Extract Text → Prepare Data → Load Answer Script → AI Grade (OpenAI + Output Parser) → Generate Results Table → Convert to HTML + CSV → Format Response → Respond to Webhook

## Workflow Steps
1. **Input & Preparation:** Webhook receives the paper, extracts the text, prepares the data, and loads the answer script.
2. **AI Grading:** OpenAI evaluates against the answer key; the Output Parser formats scores and feedback.
3. **Output & Response:** Generates the results table, converts it to HTML/CSV, and returns a multi-format response.

## Setup Instructions
1. **Trigger & Processing:** Configure the webhook URL and set text extraction parameters.
2. **AI Configuration:** Add your OpenAI API key, customize the grading prompts, and define the Output Parser JSON schema.

## Prerequisites
- OpenAI API key
- Webhook platform
- n8n instance

## Use Cases
- University exam grading
- Corporate training assessments

## Customization
- Modify rubrics and criteria
- Add PDF output
- Integrate an LMS (Canvas, Blackboard)

## Benefits
- Consistent AI grading
- Multi-format exports
- Reduces grading time by 90%
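The CSV conversion step can be sketched as a Code node. Quoting matters here because AI feedback text frequently contains commas and quotation marks; this is an illustrative serializer (row field names are assumptions), following RFC 4180-style escaping:

```javascript
// Serialize graded rows to CSV with proper quoting of commas, quotes, and newlines.
function toCsv(rows, headers) {
  const escape = (v) => {
    const s = String(v ?? '');
    // Quote the field and double any embedded quotes when special characters appear.
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  return [headers, ...rows.map((r) => headers.map((h) => r[h]))]
    .map((row) => row.map(escape).join(','))
    .join('\n');
}
```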