by Pixcels Themes
AI Assignment Grader with Automated Reporting

Who's it for

This workflow is designed for educators, professors, academic institutions, coaching centers, and edtech platforms that want to automate the grading of written assignments or test papers. It's ideal for scenarios where consistent evaluation, detailed feedback, and structured result storage are required without manual effort.

What it does / How it works

This workflow automates the end-to-end grading process for student assignments submitted as PDFs.

1. A student's test paper is uploaded via a webhook endpoint.
2. The workflow extracts text from the uploaded PDF file.
3. Student metadata (name, assignment title) is prepared and combined with the extracted answers.
4. A predefined answer script (model answers with marking scheme) is loaded into the workflow.
5. An AI grading agent powered by Gemini compares the student's responses against the answer script. The AI:
   - Evaluates each question
   - Assigns marks based on correctness and completeness
   - Generates per-question feedback
   - Calculates total marks, percentage, and grade
6. The structured grading output is converted into an HTML grading report and a CSV file for records.
7. The final CSV grading report is automatically uploaded to Google Drive for storage and sharing.

All grading logic runs automatically once the test paper is submitted.

Requirements

- Google Gemini (PaLM) API credentials
- Google Drive OAuth2 credentials
- A webhook endpoint configured in n8n
- PDF test papers submitted in a supported format
- A predefined answer script with marks per question

How to set up

1. Connect your Google Gemini credentials in n8n.
2. Connect your Google Drive account and select the destination folder.
3. Enable and copy the webhook URL for test paper uploads.
4. Customize the Load Answer Script node with your assignment's correct answers and marking scheme.
5. (Optional) Adjust grading instructions or output format in the AI Agent prompt.
6. Test the workflow by uploading a sample PDF assignment.
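To illustrate the total-marks, percentage, and grade calculation, a Code node along these lines could run after the AI grading step. The field names and grade boundaries below are assumptions for the sketch, not the template's actual implementation:

```javascript
// Sketch of the grade-summary step. Assumes the AI agent returned
// per-question results shaped like { question, awarded, maxMarks }.
function summarizeGrades(results) {
  const total = results.reduce((sum, r) => sum + r.awarded, 0);
  const max = results.reduce((sum, r) => sum + r.maxMarks, 0);
  // Percentage rounded to one decimal place.
  const percentage = max > 0 ? Math.round((total / max) * 1000) / 10 : 0;
  // Example grade boundaries -- adjust to your marking scheme.
  const grade =
    percentage >= 90 ? 'A' :
    percentage >= 75 ? 'B' :
    percentage >= 60 ? 'C' :
    percentage >= 40 ? 'D' : 'F';
  return { total, max, percentage, grade };
}
```

A student scoring 13 of 20 marks would come out at 65% with grade C under these boundaries.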
How to customize the workflow

- Update the AI grading rubric to be stricter or more lenient.
- Modify feedback style (short comments vs detailed explanations).
- Change grading scales, total marks, or grade boundaries.
- Store results in additional systems (LMS, database, email notifications).
- Add plagiarism checks or similarity scoring before grading.
- Generate PDF reports instead of CSV/HTML if required.

This workflow enables fast, consistent, and scalable assignment grading while giving students clear, structured feedback and educators reliable records.
by 小林幸一
Template Description

📝 Template Title

Analyze Amazon product reviews with Gemini and save to Google Sheets

📄 Description

This workflow automates the process of analyzing customer feedback on Amazon products. Instead of manually reading through hundreds of reviews, this template scrapes reviews (specifically targeting negative feedback), uses Google Gemini (AI) to analyze the root causes of dissatisfaction, and generates specific improvement suggestions. The results are automatically logged into a Google Sheet for easy tracking, and a Slack notification is sent to keep your team updated. This tool is essential for understanding "Voice of Customer" data efficiently without manual data entry.

🧍 Who is this for

- **Product Managers** looking for product improvement ideas.
- **E-commerce Sellers (Amazon FBA, D2C)** monitoring brand reputation.
- **Market Researchers** analyzing competitor weaknesses.
- **Customer Support Teams** identifying recurring issues.

⚙️ How it works

1. Data Collection: The workflow triggers the Apify actor (junglee/amazon-reviews-scraper) to fetch reviews from a specified Amazon product URL. It is currently configured to filter for 1- and 2-star reviews to focus on complaints.
2. AI Analysis: It loops through each review and sends the content to Google Gemini. The AI determines a sentiment score (1-5), categorizes the issue (Quality, Design, Shipping, etc.), summarizes the complaint, and proposes a concrete improvement plan.
3. Formatting: A Code node parses the AI's response to ensure it is in a clean JSON format.
4. Storage: The structured data is appended as a new row in a Google Sheet.
5. Notification: A Slack message is sent to your specified channel to confirm the batch analysis is complete.

🛠️ Requirements

- **n8n** (Self-hosted or Cloud)
- **Apify Account:** You need to rent the junglee/amazon-reviews-scraper actor.
- **Google Cloud Account:** For accessing the Gemini (PaLM) API and Google Sheets API.
- **Slack Account:** For receiving notifications.
🚀 How to set up

1. Apify Config: Enter your Apify API token in the credentials. In the "Run an Actor" node, update the startUrls to the Amazon product page you want to analyze.
2. Google Sheets: Create a new Google Sheet with the following header columns: sentiment_score, category, summary, improvement. Copy the Spreadsheet ID into the Google Sheets node.
3. AI Prompt: The "Message a model" node contains the prompt. It is currently set to output results in Japanese. If you need English output, simply translate the prompt text inside this node.
4. Slack: Select the channel where you want to receive notifications in the Slack node.
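The Code node's "clean JSON" parsing step can be sketched roughly as follows. The extraction approach and defaults are illustrative assumptions; the sheet column names match the headers listed above:

```javascript
// Sketch of parsing Gemini's analysis into a clean row object.
// Models sometimes wrap JSON in markdown fences or extra prose,
// so slice out the outermost JSON object before parsing.
function parseReviewAnalysis(raw) {
  const start = raw.indexOf('{');
  const end = raw.lastIndexOf('}');
  if (start === -1 || end === -1) {
    throw new Error('No JSON object found in AI response');
  }
  const data = JSON.parse(raw.slice(start, end + 1));
  return {
    sentiment_score: Number(data.sentiment_score),
    category: data.category || 'Uncategorized',
    summary: data.summary || '',
    improvement: data.improvement || '',
  };
}
```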
by ueharayuuki
🤖 Automated Multi-lingual News Curator & Archiver

Overview

This workflow automates news monitoring by fetching RSS feeds, rewriting content using AI, translating it (EN/ZH/KO), and archiving it.

Who is this for?

Content Curators, Localization Teams, and Travel Bloggers.

How it works

1. Fetch & Filter: Pulls NHK RSS and filters for keywords (e.g., "Tokyo").
2. AI Processing: Google Gemini rewrites articles, extracts locations, and translates text.
3. Archive & Notify: Saves structured data to Google Sheets and alerts Slack.

Setup Requirements

- Credentials: Google Gemini, Google Sheets, Slack.
- Google Sheet: Create headers: title, summary, location, en, zh, ko, url.
- Slack: Configure Channel IDs.

Customization

- **RSS Read:** Change feed URL.
- **If Node:** Update filter keywords.
- **AI Agent:** Adjust system prompts for tone.

1. Fetch & Filter

Runs on a schedule to fetch the latest RSS items. Filters articles based on specific keywords (e.g., "Tokyo" or "Season") before processing.

2. AI Analysis & Parsing

Uses Google Gemini to rewrite the news, extract specific locations, and translate content. The Code node cleans the JSON output for the database.

3. Archive & Notify

Appends the structured data to Google Sheets and sends a formatted notification to Slack (or alerts if an article was skipped).

Output Example (JSON)

The translation agent outputs data in this format:

{
  "en": "Tokyo Tower is...",
  "zh": "东京塔是...",
  "ko": "도쿄 타워는..."
}
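The keyword-filtering step can be sketched as a small function like the one below. The keyword list and item field names (`title`, `contentSnippet`) are assumptions based on typical RSS Read node output, not the template's exact configuration:

```javascript
// Sketch of the If-node keyword filter applied to each RSS item.
const KEYWORDS = ['Tokyo', 'Season']; // assumed filter keywords

function matchesKeywords(item) {
  // Check both the headline and the snippet for any keyword.
  const text = `${item.title} ${item.contentSnippet || ''}`;
  return KEYWORDS.some((kw) => text.includes(kw));
}
```

Items that fail the check are skipped (and optionally reported to Slack) rather than sent through the AI steps.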
by Madame AI
Repurpose white papers from URLs to LinkedIn PDFs and Blog Posts With BrowserAct

Introduction

This workflow automates the labor-intensive process of turning long-form white papers into ready-to-publish social media assets. It scrapes the content from a URL or PDF, uses AI to ghostwrite a LinkedIn carousel script and an SEO-optimized blog post, generates a downloadable PDF for the carousel using APITemplate.io, and archives all assets in Google Sheets.

Target Audience

Content marketers, social media managers, and agency copywriters looking to scale content repurposing efforts.

How it works

1. Input: The workflow retrieves a list of white paper URLs from a Google Sheet.
2. Looping: It processes each URL individually to ensure stability.
3. Extraction: The BrowserAct node uses the "White Paper to Social Media Converter" template to scrape the full text of the white paper.
4. Content Generation: An AI Agent (OpenRouter/GPT-4o) acts as a ghostwriter. It analyzes the text and generates two distinct outputs:
   - A viral-style LinkedIn post with a 5-slide carousel script.
   - A full-length, HTML-formatted blog post with proper headers.
5. PDF Creation: The APITemplate.io node takes the carousel script and generates a designed PDF file ready for LinkedIn upload.
6. Storage: The workflow updates the original Google Sheet row with the generated blog HTML, the LinkedIn caption, and the direct link to the PDF.
7. Notification: Once all items are processed, a Slack message notifies the team.

How to set up

1. Configure Credentials: Connect your BrowserAct, OpenRouter, Google Sheets, APITemplate.io, and Slack accounts in n8n.
2. Prepare BrowserAct: Ensure the White Paper to Social Media Converter template is active in your BrowserAct library.
3. Prepare APITemplate.io: Create a PDF template in APITemplate.io that accepts dynamic fields for slide titles and body text. Copy the Template ID into the Create a carousel PDF node.
4. Prepare Google Sheet: Create a sheet with the headers listed below and add your target URLs.
Google Sheet Headers

To use this workflow, create a Google Sheet with the following headers:

- row_number (must be populated, e.g., 1, 2, 3...)
- Target Page Url
- Blog Post
- Linkdin Post
- PDF Link

Requirements

- **BrowserAct Account:** Required for scraping. Template: **White Paper to Social Media Converter**.
- **OpenRouter Account:** Required for GPT-4o processing.
- **APITemplate.io Account:** Required for generating the visual PDF carousel.
- **Google Sheets:** Used for input and output.
- **Slack Account:** Used for completion notifications.

How to customize the workflow

- Direct Publishing: Add a WordPress node to publish the Blog Post HTML directly to your CMS instead of saving it to the sheet.
- Design Variations: Create multiple templates in APITemplate.io (e.g., "Dark Mode", "Minimalist") and use a Random node to vary the visual style of your carousels.
- Tone Adjustment: Modify the System Message in the Convert whitepaper to carousel node to change the writing style (e.g., make it more academic or more casual).

Need Help?

- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates

Workflow Guidance and Showcase Video

Automated LinkedIn Carousels: Turn White Papers into Content with n8n
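The storage step (mapping the AI agent's two outputs back onto the sheet row) can be sketched like this. The agent output field names (`blogHtml`, `linkedinCaption`) are hypothetical; the column names are copied from the sheet headers above:

```javascript
// Sketch of building the Google Sheets row update from the
// AI agent's output and the generated PDF link.
function buildSheetUpdate(rowNumber, agentOutput, pdfUrl) {
  return {
    row_number: rowNumber,              // used to match the original row
    'Blog Post': agentOutput.blogHtml,          // hypothetical field name
    'Linkdin Post': agentOutput.linkedinCaption, // hypothetical field name
    'PDF Link': pdfUrl,
  };
}
```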
by RenderIO
Who is this for

Content creators, YouTubers, and social media managers who want to repurpose long-form videos into short clips without doing it manually. Works on self-hosted n8n instances.

What it does

Monitors a Google Drive folder for new videos. When a video appears, the workflow downloads it, extracts the audio, transcribes it using Whisper, and sends the transcript to OpenAI to identify the best highlight moments. Each selected clip is then rendered in three formats (9:16 for TikTok, 9:16 for Reels, 1:1 for Square) using cloud-based FFmpeg through RenderIO. The finished clips are uploaded back to Google Drive and every run is logged to a Google Sheet.

How it works

1. Watch Drive Folder polls your source folder every minute and triggers when a new video file is detected.
2. Set Config holds all tunable settings in one place: clip count, folder IDs, sheet IDs, and LLM model.
3. The video is downloaded from Google Drive and uploaded to RenderIO for processing.
4. Extract Audio runs an FFmpeg command to pull the audio track from the video.
5. The audio is sent to Whisper for transcription. Both TXT and SRT transcript files are saved to Google Drive.
6. Pick Clips sends the transcript to OpenAI, which returns timestamped highlight suggestions.
7. Validate Clips checks that all timestamps and durations are valid before rendering.
8. Each clip is rendered in three formats through RenderIO with separate FFmpeg commands for each aspect ratio.
9. All rendered clips are downloaded and uploaded to a dedicated output folder in Google Drive.
10. Append Clip Row logs each clip to a Google Sheet and Append Run Summary records the overall processing stats.
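The Validate Clips step can be sketched as a filter like the one below. The clip shape (`start`/`end` in seconds) and the specific checks are assumptions for illustration, not the node's exact logic:

```javascript
// Sketch of clip validation: keep only clips whose timestamps are
// numeric, correctly ordered, and within the video's duration.
function validateClips(clips, videoDurationSeconds) {
  return clips.filter((c) =>
    Number.isFinite(c.start) &&
    Number.isFinite(c.end) &&
    c.start >= 0 &&
    c.end > c.start &&                 // reject zero or negative length
    c.end <= videoDurationSeconds      // reject clips past the end
  );
}
```

Dropping invalid suggestions here prevents wasted FFmpeg render jobs on clips the LLM hallucinated outside the video's bounds.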
Requirements

- A self-hosted or cloud n8n instance (uses a community node)
- The n8n-nodes-renderio community node installed via Settings > Community Nodes
- A free RenderIO account and API key from renderio.dev
- Google Drive and Google Sheets OAuth credentials
- An OpenAI API key

How to set up

1. Install the n8n-nodes-renderio community node from Settings > Community Nodes.
2. Create credentials for Google Drive (OAuth2), Google Sheets (OAuth2), OpenAI, and RenderIO API.
3. Import the workflow and open the Set Config node.
4. Update the outputParentFolderId with the Google Drive folder ID where output folders should be created.
5. Update the sheetId with your Google Sheet document ID.
6. Set sheetTab and sheetRunsTab to the correct sheet tab IDs for clip logging and run summaries.
7. Configure the Watch Drive Folder trigger node to point at your source video folder.
8. Activate the workflow and drop a test video into the folder.

How to customize

- Change clipCount in Set Config to generate more or fewer clips per video.
- Swap llmModel from gpt-4o-mini to gpt-4o or another model for different clip selection quality.
- Modify the FFmpeg commands in Build Commands for Clip to adjust resolution, bitrate, add watermarks, or change output formats.
- Replace Google Drive with S3 or another storage provider if that fits your stack.
- Add a Slack or Telegram notification node after the summary step to get alerted when processing finishes.
by Oneclick AI Squad
This workflow automatically ingests real-time user behavior events, detects drop-off points across the customer journey, predicts churn risk using AI, and triggers targeted retention actions while logging everything for analysis.

Who's it for

• Product teams managing high-churn SaaS products
• E-commerce businesses with cart abandonment issues
• Subscription services tracking user engagement

How it works / What it does

1. Captures new user behavior events (webhook or scheduled poll)
2. Analyzes session events, actions, and engagement metrics for drop-off signals
3. Loads user profile, history, and preferences
4. AI predicts real-time drop-off risk and generates personalized retention actions
5. Sends automated re-engagement messages or campaign triggers
6. Logs predictions, risk scores, and actions in Google Sheets

How to set up

1. Import this workflow
2. Set up credentials (webhook events, Google Sheets, OpenAI/Anthropic)
3. Update user profile defaults and retention endpoints
4. Activate the workflow

Requirements

• Event webhook (Segment, Mixpanel, custom analytics)
• Google Sheets
• OpenAI / Anthropic / Grok API
• User behavior event schema

How to customize the workflow

• Change AI tone and action templates in the AI node
• Modify the Python detection logic
• Update Google Sheet columns
• Adjust retention messaging or campaign endpoints

Want an advanced workflow for your business? Our experts can craft it quickly. Contact our team.
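The drop-off detection idea can be sketched as a simple heuristic like the one below. The template's actual detection logic is a Python Code node; this JavaScript mirror, its session field names, and its weights are all illustrative assumptions:

```javascript
// Sketch of a drop-off risk heuristic over one user session.
// Returns a score in [0, 1]; higher means more likely to churn.
function dropOffRisk(session) {
  let score = 0;
  if (session.minutesSinceLastEvent > 30) score += 0.4; // long idle gap
  if (session.cartItems > 0 && !session.checkoutStarted) score += 0.4; // abandoned cart
  if (session.pageViews < 3) score += 0.2; // shallow engagement
  return Math.min(score, 1);
}
```

A score above a chosen threshold (say 0.6) would then route the user into the AI retention-action branch.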
by deAPI Team
Who is this for?

- Marketing teams who need quick video ads without a production crew
- E-commerce sellers promoting products on social media
- Freelancers and agencies producing ad creatives for clients
- Anyone who wants to turn a product description into a video ad in minutes

What problem does this solve?

Producing a video ad typically requires a designer for the visuals, a motion artist for animation, and hours of back-and-forth. This workflow replaces that entire pipeline: fill out a form, and get a ready-to-use video ad delivered to your inbox.

What this workflow does

1. Collects product name, description, visual style, and recipient email through a web form
2. AI Agent analyzes the product and uses both deAPI Image Prompt Booster and Video Prompt Booster tools to create optimized prompts for image and video generation
3. Generates a 1280x720 landscape product hero image using deAPI
4. Animates the hero image into a short video ad using deAPI image-to-video generation
5. Emails the video ad link to the specified address via Gmail

Setup Requirements

- **n8n instance** (self-hosted or n8n Cloud)
- deAPI account for prompt boosting, image generation, and video generation
- Anthropic account for the AI Agent
- Gmail account for email delivery

Installing the deAPI Node

- **n8n Cloud:** Go to **Settings → Community Nodes** and toggle the "Verified Community Nodes" option
- **Self-hosted:** Go to **Settings → Community Nodes** and install n8n-nodes-deapi

Configuration

1. Add your deAPI credentials (API key + webhook secret)
2. Add your Anthropic credentials (API key)
3. Add your Gmail credentials (OAuth2)
4. Ensure your n8n instance is on HTTPS

How to customize this workflow

- **Change the AI model:** Swap Anthropic for OpenAI, Google Gemini, or any other LLM provider
- **Adjust the creative direction:** Modify the AI Agent system message to target different ad styles (product demo, lifestyle, teaser, etc.)
- **Change the delivery method:** Replace Gmail with Slack, Microsoft Teams, or upload directly to Google Drive / S3
- **Change the aspect ratio:** Switch from landscape to square or portrait for Instagram Stories or TikTok
- **Add background removal:** Insert a deAPI Remove Background node before video generation for a clean product cutout
- **Batch processing:** Replace the Form Trigger with a Google Sheets or Airtable trigger to generate ads for a product catalog
by AI Solutions
📄 Template Creator

How it works

This workflow accepts any uploaded document (PDF or DOCX) via webhook and automatically converts it into a reusable fill-in-the-blank template.

Step 1 — Identify: GPT-4o first reads the document and determines the document type (e.g., Employment Contract, Invoice, NDA, Lease Agreement, Project Proposal) and the specific variable fields that type of document typically contains.

Step 2 — Templatize: A second AI pass uses the identified document type and field list to replace all variable content with clearly labeled [BRACKET] placeholders while preserving all static boilerplate and structure verbatim.

Step 3 — Deliver: The cleaned template is rendered to PDF via Gotenberg, uploaded to Google Drive, made publicly accessible, and a JSON response with file URLs is returned to the caller.

Setup

1. Configure the Webhook path if needed (default: general-template-creator)
2. Set your OpenAI credential on both LLM nodes
3. Set your Google Drive credential and confirm the target folder ID in the Upload node
4. Confirm the Gotenberg URL matches your self-hosted instance
5. Install the community node n8n-nodes-word2text (see ⚠️ warning sticky)

Customization

- Swap GPT-4o for GPT-4.1 or GPT-4.1-mini on the Identify node to reduce cost on the lighter classification task
- Add a Switch node after identification to route different document types to type-specific prompts
- Modify the Drive folder ID to sort templates into subfolders by document type

The workflow accepts document input from a form such as the one found here: Sample Form
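As a sanity check on the Templatize step's output, a small helper like this could list the [BRACKET] placeholders the AI produced before delivery. This is an illustrative sketch, not part of the template; the placeholder pattern (uppercase letters, digits, spaces, underscores) is an assumption:

```javascript
// Sketch: collect the unique [BRACKET] placeholders in a template,
// so the caller can verify which fields need filling in.
function extractPlaceholders(template) {
  const matches = template.match(/\[[A-Z0-9 _]+\]/g) || [];
  return [...new Set(matches)]; // deduplicate repeated placeholders
}
```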
by Daniel Shashko
This workflow automates the creation of user-generated-content-style product videos by combining Gemini's image generation with OpenAI's SORA 2 video generation. It accepts webhook requests with product descriptions, generates images and videos, stores them in Google Drive, and logs all outputs to Google Sheets for easy tracking.

Main Use Cases

- Automate product video creation for e-commerce catalogs and social media.
- Generate UGC-style content at scale without manual design work.
- Create engaging video content from simple text prompts for marketing campaigns.
- Build a centralized library of product videos with automated tracking and storage.

How it works

The workflow operates as a webhook-triggered process, organized into these stages:

1. Webhook Trigger & Input: Accepts POST requests to the /create-ugc-video endpoint. The required payload includes: product prompt, video prompt, Gemini API key, and OpenAI API key.
2. Image Generation (Gemini): Sends the product prompt to Google's Gemini 2.5 Flash Image model and generates a product image based on the description provided.
3. Data Extraction: A Code node extracts the base64 image data from Gemini's response and preserves all prompts and API keys for subsequent steps.
4. Video Generation (SORA 2): Sends the video prompt to OpenAI's SORA 2 API and initiates video generation with these specifications: 720x1280 resolution, 8 seconds duration. Returns a video generation job ID for polling.
5. Video Status Polling: Continuously checks video generation status via the OpenAI API. If status is "completed", it proceeds to download; if the video is still processing, it waits 1 minute and retries (polling loop).
6. Video Download & Storage: Downloads the completed video file from OpenAI, uploads the MP4 file to Google Drive (root folder), and generates a shareable Google Drive link.
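The polling decision in stage 5 can be sketched as a small routing function. The status strings and retry cap below are illustrative assumptions; only "completed" is named in the description above:

```javascript
// Sketch of the status-polling branch: given the video job's status,
// decide the next step in the loop.
function nextStep(job, attempt, maxAttempts = 30) {
  if (job.status === 'completed') return 'download';
  if (job.status === 'failed') return 'abort';       // assumed failure status
  if (attempt >= maxAttempts) return 'abort';        // give up after ~30 minutes
  return 'wait'; // wait 1 minute, then poll again
}
```

Capping attempts prevents the loop from running forever if a generation job stalls on the provider's side.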
Logging to Google Sheets: Records all generation details in a tracking spreadsheet: product description, video URL (Google Drive link), generation status, and timestamp.

Summary Flow

Webhook Request → Generate Product Image (Gemini) → Extract Image Data → Generate Video (SORA 2) → Poll Status → If Complete: Download Video → Upload to Google Drive → Log to Google Sheets → Return Response. If Not Complete: Wait 1 Minute → Poll Status Again.

Benefits

- Fully automated video creation pipeline from text to finished product.
- Scalable solution for generating multiple product videos on demand.
- Combines cutting-edge AI models (Gemini + SORA 2) for high-quality output.
- Centralized storage in Google Drive with automatic logging in Google Sheets.
- Flexible webhook interface allows integration with any application or service.
- Retry mechanism ensures videos are captured even with longer processing times.

Created by Daniel Shashko
by Hyrum Hurst
Who this workflow is for

This workflow is designed for content creators, marketing teams, and automation builders who want to produce short-form video content at scale without manual editing. It is especially useful for teams posting consistently to YouTube Shorts, Instagram Reels, or TikTok who want a repeatable, automated content pipeline driven by AI.

What this workflow does

This n8n automation converts a single Telegram message into a fully generated short-form video using AI. When a message is sent to a Telegram bot, the workflow:

1. Generates a structured short-form video script using Gemini
2. Creates multiple cinematic video clips using VEO
3. Merges clips into a single short video
4. Generates a title, description, and hashtags optimized for short-form platforms
5. Delivers the finished video asset and metadata back to Telegram or storage

The entire process runs automatically without requiring manual scripting, editing, or clip assembly.

How the workflow works

1. A Telegram message triggers the workflow with a short content idea
2. An AI model generates a multi-part script optimized for short-form video
3. Video clips are generated for each script segment using VEO
4. Clips are merged into a final vertical video
5. Metadata (title, description, hashtags) is generated and attached
6. The completed short is delivered for review or publishing

How to set up the workflow

1. Connect your Telegram bot credentials
2. Configure your Gemini and VEO credentials
3. Review the prompt templates used for script and clip generation
4. Adjust output destinations (Telegram, Google Drive, or manual upload)
5. Activate the workflow

All required credentials are stored securely using n8n credential management.
Requirements

- Telegram Bot
- Google Gemini credentials
- VEO video generation access
- n8n instance (cloud or self-hosted)

How to customize the workflow

- Modify the AI prompts to match your content style or niche
- Change clip length or number of generated scenes
- Add automatic posting to YouTube Shorts or other platforms
- Extend the workflow with analytics or scheduling steps

Author: Hyrum Hurst
Company: QuarterSmart
Contact: hyrum@quartersmart.com
by Cheng Siong Chin
How It Works

This workflow provides automated Chinese text translation with high-quality audio synthesis for language learning platforms, content creators, and international communication teams. It addresses the challenge of converting Chinese text into accurate multilingual translations with natural-sounding voiceovers.

The system receives Chinese text via webhook, validates input formatting, and processes it through an AI translation agent that generates multiple language versions. Each translation is converted to speech using ElevenLabs' neural voice models, then formatted into professional audio responses. A quality review agent evaluates translation accuracy, cultural appropriateness, and audio clarity against predefined criteria. High-scoring outputs are returned via webhook for immediate use, while low-quality results trigger review processes, ensuring consistent delivery of publication-ready multilingual audio content.

Setup Steps

1. Obtain an OpenAI API key and configure it in the "Translation Agent"
2. Set up an ElevenLabs account and generate an API key
3. Configure the webhook URL and update it in source applications to trigger the workflow
4. Customize target languages and voice settings in the translation and ElevenLabs nodes
5. Adjust quality thresholds in "Check Quality Score"
6. Update the output webhook endpoint in the "Return Audio Files" node

Prerequisites

Active accounts: OpenAI API access, ElevenLabs subscription.

Use Cases

Chinese language learning apps, international marketing content localization.

Customization

Add additional target languages, modify voice characteristics and speaking rates.

Benefits

Automates 95% of the translation workflow and delivers publication-ready audio in minutes.
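The "Check Quality Score" branch can be sketched along these lines. The weighting, 0-10 scale, and threshold are illustrative assumptions; only the three review criteria come from the description above:

```javascript
// Sketch of the quality-gate: combine the review agent's criteria
// scores (assumed 0-10 each) and route the result accordingly.
function routeByQuality(review, threshold = 8) {
  const score =
    review.translationAccuracy * 0.5 + // assumed weights
    review.culturalFit * 0.3 +
    review.audioClarity * 0.2;
  return { score, route: score >= threshold ? 'return-audio' : 'manual-review' };
}
```

High-scoring items go straight back out the response webhook; everything else falls through to the review process.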
by Edson Encinas
🧩 Template Description

IP Enrichment & Country Attribution is a lightweight cybersecurity automation that enriches IP addresses with geographic and network intelligence. It validates incoming IPs, filters out private or invalid addresses, and enriches public IPs using an open-source IP enrichment service.

🔄 How It Works

1. Receives an IP address via webhook (API or Slack).
2. Validates the IP format and rejects invalid input.
3. Checks for private or internal IP ranges.
4. Ignores private IPs with a clear response.
5. Enriches public IPs using an open-source IP intelligence service.
6. Normalizes country, ISP, and ASN data and applies a severity label.
7. Sends Slack notifications for enriched public IPs.
8. Returns a structured JSON response.

⚙️ Setup Steps

1. Import & Activate Workflow
   - Import the JSON template into n8n
   - Activate the workflow
2. Set Up Webhook
   - Copy the webhook URL
   - Send a POST request with the IP in the body, e.g.: { "text": "8.8.8.8" }
   - Using curl: `curl -X POST https://YOUR_N8N_WEBHOOK_URL -H "Content-Type: application/json" -d '{"text":"8.8.8.8"}'`
3. Configure Slack (Slack Alert)
   - Create or select Slack credentials in n8n
   - Make sure the bot is in your target channel
   - Update the Slack node with the correct channel
4. Slack Slash Command Setup (Optional)
   - Enable Slash Commands and create a new command (for example /ip-enrich)
   - Set the Request URL to your n8n webhook endpoint
   - Choose POST as the request method
   - Install the app to your workspace
   - Usage example: /ip-enrich 8.8.8.8

🎛️ Customization Options

- Enrichment source: Replace or extend the IP intelligence API with additional providers (for example reputation or abuse scoring).
- Slack formatting: Customize the Slack message text, emojis, or use threads for better alert grouping.
- Input sources: Reuse the webhook for other integrations such as SIEM alerts or security tools.
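Steps 2-4 of the flow (format validation and private-range filtering) can be sketched as a single classifier. This is an illustrative IPv4-only sketch covering the RFC 1918 ranges plus loopback, not the workflow's exact node logic:

```javascript
// Sketch of the validation/filtering steps: classify an incoming
// string as 'invalid', 'private', or 'public' before enrichment.
function classifyIp(text) {
  const m = text.trim().match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/);
  if (!m) return 'invalid';
  const octets = m.slice(1).map(Number);
  if (octets.some((o) => o > 255)) return 'invalid';
  const [a, b] = octets;
  if (a === 10 ||                       // 10.0.0.0/8
      a === 127 ||                      // loopback
      (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
      (a === 192 && b === 168)) {       // 192.168.0.0/16
    return 'private';
  }
  return 'public';
}
```

Only 'public' results proceed to the enrichment API call; 'private' and 'invalid' inputs get the short-circuit responses described above.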