by Kev
Important: This workflow uses the Autype community node and requires a self-hosted n8n instance.

This workflow watches a Google Drive folder for new PDF uploads. When a new file appears, it automatically creates two secure versions (one with a "CONFIDENTIAL" watermark and one with password protection) and saves both back to the same folder. Activate and forget.

Who is this for?

Legal teams, compliance officers, and anyone who needs every PDF in a shared folder to be watermarked and locked automatically. Common scenarios: securing contracts before client review, stamping internal reports with "CONFIDENTIAL", or ensuring all documents in a compliance folder are password-protected.

What this workflow does

The workflow uses a Google Drive Trigger to detect new PDFs in a watched folder. For each new file it creates filename-watermark.pdf (with a diagonal text watermark) and filename-protected.pdf (encrypted with user and owner passwords), then uploads both back to the same Google Drive folder.

How it works

1. New PDF Uploaded to Drive: A Google Drive Trigger polls the watched folder every minute for new files.
2. Download PDF from Drive: Downloads the new PDF as binary data.
3. Upload PDF to Autype: Uploads the PDF to Autype Document Tools storage once. Returns a file ID used by both parallel branches.

From here the workflow splits into two parallel branches:

- Branch A (Watermark): Add Watermark → Save *-watermark.pdf to Drive
- Branch B (Protect): Password-Protect PDF → Save *-protected.pdf to Drive

Both branches reference the same Autype file ID from step 3; no re-upload is needed. Output filenames are derived from the original filename via expression (e.g. report.pdf → report-watermark.pdf / report-protected.pdf).

Setup

1. Install the Autype community node (n8n-nodes-autype) via Settings → Community Nodes.
2. Create an Autype API credential with your API key from app.autype.com (see API Keys in Settings).
3. Connect your Google account via OAuth2 in n8n credentials (Settings → Credentials → Google Drive OAuth2 API).
4. Replace YOUR_FOLDER_ID in the trigger node and both Google Drive upload nodes with your actual folder ID. You can find it in the folder's URL: https://drive.google.com/drive/folders/YOUR_FOLDER_ID.
5. Select your Autype credential in all Autype nodes and your Google Drive credential in all Google Drive nodes.
6. Activate the workflow (it will now process every new PDF uploaded to the folder automatically).

Note: This is a community node. It is not maintained by the n8n team. You need a self-hosted n8n instance to use community nodes.

Requirements

- Self-hosted n8n instance (community nodes are not available on n8n Cloud)
- Autype account with API key (free tier available)
- n8n-nodes-autype community node installed
- Google Drive account with OAuth2 credentials configured in n8n

How to customize

- **Change the watched folder:** Update YOUR_FOLDER_ID in the trigger and both upload nodes (you can use separate input and output folders if preferred).
- **Change watermark text:** Edit the text field in the Add Watermark node (common options: "DRAFT", "INTERNAL USE ONLY", "DO NOT COPY", or your company name).
- **Adjust watermark style:** Change font size, opacity (0 to 1), rotation angle, and color in the watermark options.
- **Change passwords:** Update the user and owner passwords in the Password-Protect node (remove the user password if you only want to restrict editing, not opening).
- **Skip protection:** Remove the protection branch (Protect → Save Protected) if you only need watermarked PDFs.
- **Skip watermark:** Remove the watermark branch (Watermark → Save Watermarked) and keep only the protection branch if you only need encryption.
- **Add compression:** Insert an Autype Compress operation before the Google Drive upload to reduce file size.
- **Use a different trigger:** Replace the Google Drive Trigger with a webhook, email trigger, or any other trigger that provides binary PDF data.
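The filename derivation described above (report.pdf → report-watermark.pdf / report-protected.pdf) can be sketched in plain Python. This is an illustration of the naming scheme, not the workflow's actual n8n expression:

```python
def derive_output_names(filename: str) -> tuple[str, str]:
    """Derive the watermark and protected output names from the original name."""
    stem, dot, ext = filename.rpartition(".")
    base = stem if dot else filename   # handle names without an extension
    ext = ext if dot else "pdf"
    return (f"{base}-watermark.{ext}", f"{base}-protected.{ext}")

print(derive_output_names("report.pdf"))
# ('report-watermark.pdf', 'report-protected.pdf')
```

In the workflow itself the same logic lives in an n8n expression on the two Google Drive upload nodes.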
by Guillaume Duvernay
This advanced template automates the creation of a Lookio Assistant populated with a specific corpus of text. Instead of uploading files one by one, you can simply upload a CSV containing multiple text resources. The workflow iterates through the rows, converts them to text files, uploads them to Lookio, and finally creates a new Assistant with strict access limited to these specific resources.

Who is this for?

- **Knowledge Managers** who want to spin up specific "Topic Bots" (e.g., an "RFP Bot" or "HR Policy Bot") based on a spreadsheet of Q&As or articles.
- **Product Teams** looking to bulk-import release notes or documentation to test RAG (Retrieval-Augmented Generation) responses.
- **Automation Builders** who need a reference implementation for looping through CSV rows, converting text strings to binary files, and aggregating IDs for a final API call.

What is the RAG platform Lookio for knowledge retrieval?

Lookio is an API-first platform that solves the complexity of building RAG (Retrieval-Augmented Generation) systems. While tools like NotebookLM are great for individuals, Lookio is built for business automation. It handles the difficult backend work (file parsing, chunking, vector storage, and semantic retrieval) so you can focus on the workflow.

- **API-First:** Unlike consumer AI tools, Lookio allows you to integrate your knowledge base directly into n8n, Slack, or internal apps.
- **No "DIY" headache:** You don't need to manage a vector database or write chunking algorithms.
- **Free to start:** You can sign up without a credit card and get 100 free credits to test this workflow immediately.

What problem does this workflow solve?

- **Bulk ingestion:** Converts a CSV export (with columns for Title and Content) into individual text resources in Lookio.
- **Automated provisioning:** Eliminates the manual work of creating an Assistant and selecting resources one by one.
- **Dynamic configuration:** Allows the user to define the Assistant's specific name, context (system prompt), and output guidelines directly via the upload form.

How it works

1. Form Trigger: The user uploads a CSV, specifies the Assistant details (Name, Context, Guidelines), and maps the CSV column names.
2. Parsing: The workflow converts the CSV to JSON and uses the Convert to File node to transform the raw text content of each row into a binary .txt file.
3. Loop & Upload: It loops through the items, uploading them via the Lookio Add Resource API (/webhook/add-resource), and collects the returned Resource IDs.
4. Creation: Once all files are processed, it aggregates the IDs and calls the Create Assistant API (/webhook/create-assistant), setting the resources_access_type to "Limited selection" so the bot relies only on the uploaded data.
5. Completion: Returns the new Assistant ID and a success message to the user.

CSV File Requirements

Your CSV file should look like this (headers can be named anything, as you will map them in the form):

| Title | Content |
| --- | --- |
| How to reset password | Go to settings, click security, and press reset... |
| Vacation Policy | Employees are entitled to 20 days of PTO... |

How to set up

1. Credentials: Get your API Key and Workspace ID from your Lookio API Settings (free to sign up).
2. Configure HTTP nodes: Open the Import resource to Lookio node and update the headers (api_key) and body (workspace_id); do the same in the Create Lookio assistant node.
3. Form configuration (optional): The form is pre-configured to ask for column mapping, but you can hardcode these in the "Convert to txt" node if you always use the same CSV structure.
4. Activate & share: Activate the workflow and use the Production URL from the Form Trigger to let your team bulk-create assistants.
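The parsing step above (one .txt file per CSV row, named after its title) can be sketched in plain Python. This is an illustration of the data shape, not the Convert to File node itself; the column names `Title` and `Content` are just the defaults you would map in the form:

```python
import csv
import io

def rows_to_txt_files(csv_text, title_col="Title", content_col="Content"):
    """Turn each CSV row into a (filename, bytes) pair ready for upload."""
    files = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        files.append((f"{row[title_col]}.txt", row[content_col].encode("utf-8")))
    return files

sample = "Title,Content\nVacation Policy,Employees are entitled to 20 days of PTO...\n"
print(rows_to_txt_files(sample))
```

In the workflow, each resulting file is posted to /webhook/add-resource and the returned Resource IDs are aggregated for the final /webhook/create-assistant call.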
by CapSolver
reCAPTCHA v2 Solver (CapSolver + n8n)

How it works

• Receives reCAPTCHA v2 solving requests via webhook
• Sends tasks to CapSolver and returns solved tokens in real time
• Runs a scheduled check to validate solving performance
• Submits tokens to the target endpoint and verifies success/failure
• Supports a manual trigger for quick testing and debugging

Set up steps

• Add your CapSolver API key to all CapSolver nodes
• Configure the webhook endpoint for incoming requests
• Adjust the target website URL and site key if needed
• Review or modify the hourly schedule trigger
• Run a quick manual test to confirm everything works

⏱️ Setup time: ~5–10 minutes
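The send-task-then-poll pattern the CapSolver nodes follow can be sketched as below. This is a hedged illustration of the control flow only: `create_task` and `get_task_result` are stand-ins for the real HTTP calls (CapSolver's createTask / getTaskResult endpoints), stubbed here so the sketch runs offline:

```python
import time

def solve(create_task, get_task_result, poll_interval=0.0, max_polls=30):
    """Create a solving task, then poll until a token is ready or we give up."""
    task_id = create_task()
    for _ in range(max_polls):
        result = get_task_result(task_id)
        if result["status"] == "ready":
            return result["solution"]["gRecaptchaResponse"]
        if result["status"] == "failed":
            raise RuntimeError("CapSolver task failed")
        time.sleep(poll_interval)
    raise TimeoutError("solver did not finish in time")

# Stubbed demo: the second poll returns the solved token.
polls = iter([{"status": "processing"},
              {"status": "ready", "solution": {"gRecaptchaResponse": "tok"}}])
print(solve(lambda: "task-1", lambda tid: next(polls)))  # prints tok
```

In the workflow, the returned token is what gets submitted to the target endpoint for verification.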
by Juan Cristóbal Andrews
Who's it for

This template is designed for filmmakers, content creators, social media managers, and AI developers who want to harness OpenAI's Sora 2 for creating physically accurate, cinematic videos with synchronized audio. Whether you're generating realistic scenes from text prompts or reference images with proper physics simulation, creating multi-shot sequences with persistent world state, or producing content with integrated dialogue and sound effects, this workflow streamlines the entire video generation process from prompt to preview and Google Drive upload.

What it does

This workflow:

- Accepts a text prompt, optional reference image, OpenAI API key, and generation settings via form submission
- Validates the reference image format (jpg, png, or webp only)
- Sends the prompt and optional reference to the Sora 2 API endpoint to request video generation
- Continuously polls the video rendering status (queued → in progress → completed)
- Waits 30 seconds between status checks to avoid rate limiting
- Handles common generation errors with descriptive error messages
- Automatically fetches the generated video once rendering is complete
- Downloads the final .mp4 file
- Uploads the resulting video to your Google Drive
- Displays the download link and video preview/screenshot upon completion

How to set up

1. Get your OpenAI API key

You'll need an OpenAI API key to use this workflow:

- Create an OpenAI account at https://platform.openai.com
- Set up billing: add payment information to enable API access
- Generate your API key through the API keys section in your OpenAI dashboard
- Copy and save your key immediately; you won't be able to view it again!

⚠️ Important: Your API key will start with sk- and should be kept secure. If you lose it, you'll need to generate a new one.

2. Connect Google Drive

- Add your Google Drive OAuth2 credential to n8n
- Grant the necessary permissions for file uploads

3. Import and run

- Import this workflow into n8n
- Execute the workflow via the form trigger
- Enter your API key, prompt, and desired settings in the form
- Optionally upload a reference image to guide the video generation

All generation settings are configured through the form, including:

- **Model:** Choose between sora-2 or sora-2-pro
- **Duration:** 4, 8, or 12 seconds
- **Resolution:** Portrait or Landscape options
- **Reference Image** (optional): Upload a jpg, png, or webp matching your target resolution

⚠️ Sora 2 pricing

The workflow supports two Sora models with the following API pricing:

- Sora 2: $0.10/sec
  - Portrait: 720x1280; Landscape: 1280x720
- Sora 2 Pro: $0.30/sec (720p) or $0.50/sec (1080p)
  - 720p: Portrait 720x1280, Landscape 1280x720
  - 1080p: Portrait 1024x1792, Landscape 1792x1024

Duration options: 4, 8, or 12 seconds (default: 4).

Example costs:

- 4-second video with Sora 2: $0.40
- 12-second video with Sora 2 Pro (1080p): $6.00

Requirements

- Valid OpenAI API key (starting with sk-)
- Google Drive OAuth2 credential connected to n8n
- Reference image (optional): jpg, png, or webp format; should match your selected video resolution for best results

How to customize the workflow

Modify generation parameters by editing the form fields to include additional options:

- Style presets (cinematic, anime, realistic)
- Camera movement preferences
- Audio generation options
- Image reference strength/influence settings

It's recommended to visit the official documentation on prompting for a detailed Sora 2 guide.

Adjust polling behavior:

- Change the Wait node duration (default: 30 seconds)
- Modify the Check Status polling frequency based on typical generation times
- Add timeout logic for very long renders

Customize error handling:

- Extend error messages for additional failure scenarios
- Add retry logic for transient errors
- Configure notification webhooks for error alerts

Alternative upload destinations: replace the Google Drive node with:

- Dropbox
- AWS S3
- Azure Blob Storage
- YouTube direct upload
- Slack/Discord notification with video attachment

Enhance result display:

- Customize the completion form to show additional metadata
- Add video thumbnail generation
- Include generation parameters in the results page
- Enable direct playback in the completion form

Workflow architecture

Step-by-step flow:

1. Form Submission: user inputs the text prompt, optional reference image, API key, and generation settings
2. Create Video: sends the request to the Sora 2 API endpoint with all parameters and the reference image (if provided)
3. Check Status: polls the API for video generation status
4. Status Decision: routes based on status:
   - Queued → Wait 30 seconds → Check Status again
   - In Progress → Wait 30 seconds → Check Status again
   - Completed → proceed to download
   - Failed → display a descriptive error message
5. Wait: 30-second delay between status checks
6. Download: fetches the generated video file
7. Google Drive: uploads the .mp4 to your Drive
8. Completion Form: displays the download link and video preview/screenshot

If you have any questions, just contact me on LinkedIn. Ready to create cinematic AI videos with physics-accurate motion, synchronized audio, and optional image references? Import this workflow and start generating! 🎬✨
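The pricing examples above are simple per-second arithmetic, which can be checked with a few lines of Python (the rates are the ones listed in this description; the model keys are labels for this sketch only):

```python
# Per-second rates as listed in the pricing section above.
RATES = {"sora-2": 0.10, "sora-2-pro-720p": 0.30, "sora-2-pro-1080p": 0.50}

def video_cost(model: str, seconds: int) -> float:
    """Cost of one generated video at the listed per-second rate."""
    assert seconds in (4, 8, 12), "supported durations are 4, 8, or 12 seconds"
    return round(RATES[model] * seconds, 2)

print(video_cost("sora-2", 4))             # 0.4  -> $0.40
print(video_cost("sora-2-pro-1080p", 12))  # 6.0  -> $6.00
```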
by Guillaume Duvernay
This template processes a CSV of questions and returns an enriched CSV with RAG-based answers produced by your Lookio assistant. Upload a CSV that contains a column named Query, and the workflow will loop through every row, call the Lookio API, and append a Response column containing the assistant's answer. It's ideal for batch tasks like drafting RFP responses, pre-filling support replies, generating knowledge-checked summaries, or validating large lists of product/customer questions against your internal documentation.

Who is this for?

- **Knowledge managers & technical writers:** Produce draft answers to large question sets using your company docs.
- **Sales & proposal teams:** Auto-generate RFP answer drafts informed by internal docs.
- **Support & operations teams:** Bulk-enrich FAQs or support ticket templates with authoritative responses.
- **Automation builders:** Integrate Lookio-powered retrieval into bulk data pipelines.

What it does / What problem does this solve?

- **Automates bulk queries:** Eliminates the manual process of running many individual lookups.
- **Ensures answers are grounded:** Responses come from your uploaded documents via Lookio, reducing hallucinations.
- **Produces ready-to-use output:** Delivers an enriched CSV with a new Response column for downstream use.
- **Simple UX:** Users only need to upload a CSV with a Query column and download the resulting file.

How it works

1. Form submission: The user uploads a CSV via the Form Trigger.
2. Extract & validate: Extract all rows reads the CSV and Aggregate rows checks for a Query column.
3. Per-row loop: Split Out and Loop Over Queries iterate over the rows; Isolate the Query column normalizes the data.
4. Call Lookio: Lookio API call posts each query to your assistant and returns the answer.
5. Build output: Prepare output appends the Response values and Generate enriched CSV creates the downloadable file delivered by Form ending and file download.

Why use Lookio for high-quality RAG?

While building a native RAG pipeline in n8n offers granular control, achieving consistently high-quality and reliable results requires significant effort in data processing, chunking strategy, and retrieval logic optimization. Lookio is designed to address these challenges by providing a managed RAG service accessible via a simple API. It handles the entire backend pipeline, from processing various document formats to employing advanced retrieval techniques, allowing you to integrate a production-ready knowledge source into your workflows. This approach lets you focus on building your automation in n8n rather than managing the complexities of a RAG infrastructure.

How to set up

1. Create a Lookio assistant: Sign up at https://www.lookio.app/, upload documents, and create an assistant.
2. Get credentials: Copy your Lookio API Key and Assistant ID.
3. Configure the workflow nodes: In the Lookio API call HTTP Request node, replace the api_key header value with your Lookio API Key and update assistant_id with your Assistant ID (replace placeholders like <your-lookio-api-key> and <your-assistant-id>). Ensure the Form Trigger is enabled and accepts a .csv file.
4. CSV format: Ensure the input CSV has a column named Query (case-sensitive as configured).
5. Activate the workflow: Run a test upload and download the enriched CSV.

Requirements

- An n8n instance with the ability to host Forms and run workflows
- A Lookio account (API Key) and an Assistant ID

How to take it further

- **Add rate limiting / retries:** Insert error-handling and delay nodes to respect API limits for large batches.
- **Improve the speed:** You could drastically reduce the processing time by parallelizing the queries instead of running them one after another in the loop. For that, you could use HTTP Request nodes that trigger a sub-workflow for each query.
- **Store results:** Add an Airtable or Google Sheets node to archive questions and responses for audit and reuse.
- **Post-process answers:** Add an LLM node to summarize or standardize responses, or to add confidence flags.
- **Trigger variations:** Replace the Form Trigger with a Google Drive or Airtable trigger to process CSVs automatically from a folder or table.
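The core validate-loop-append logic above can be sketched in plain Python. This is an illustration of the data flow, not the n8n nodes themselves; `ask_assistant` is a hypothetical stand-in for the Lookio API call:

```python
import csv
import io

def enrich_csv(csv_text: str, ask_assistant) -> str:
    """Validate the Query column, answer each row, and append a Response column."""
    reader = csv.DictReader(io.StringIO(csv_text))
    if "Query" not in (reader.fieldnames or []):
        raise ValueError('input CSV must contain a "Query" column (case-sensitive)')
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(reader.fieldnames) + ["Response"])
    writer.writeheader()
    for row in reader:
        writer.writerow({**row, "Response": ask_assistant(row["Query"])})
    return out.getvalue()

print(enrich_csv("Query\nWhat is PTO?\n", lambda q: "20 days"))
```

Note the sketch answers rows sequentially, like the workflow's loop; the "Improve the speed" suggestion above amounts to fanning these calls out in parallel instead.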
by Grace Gbadamosi
How it works

This workflow creates a complete MCP server that provides comprehensive API integration monitoring and testing capabilities. The server exposes five specialized tools through a single MCP endpoint: API health analysis, webhook reliability testing, rate limit monitoring, authentication verification, and client report generation. External applications can connect to this MCP server to access all monitoring tools.

Who is this for

This template is designed for DevOps engineers, API developers, integration specialists, and technical teams responsible for maintaining API reliability and performance. It's particularly valuable for organizations managing multiple API integrations, SaaS providers monitoring client integrations, and development teams implementing API monitoring strategies.

Requirements

- **MCP Client:** Any MCP-compatible application (Claude Desktop, a custom MCP client, or other AI tools)
- **Network Access:** Outbound HTTP/HTTPS access to test API endpoints and webhooks
- **Authentication:** Bearer token authentication for securing the MCP server endpoint
- **Target APIs:** The APIs and webhooks you want to monitor (no special configuration required on target systems)

How to set up

1. Configure MCP server authentication: Update the MCP Server - API Monitor Entry node with your desired authentication method and generate a secure bearer token for accessing your MCP server.
2. Deploy the workflow: Save and activate the workflow in your n8n instance, noting the MCP server endpoint URL that will be generated for external client connections.
3. Connect an MCP client: Configure your MCP client (such as Claude Desktop) to connect to the MCP server endpoint using the authentication token you configured.
4. Test the monitoring tools: Use your MCP client to call the available tools (Analyze Api Health, Validate Webhook Reliability, Monitor API Limits, Verify Authentication, and Generate Client Report) with your API endpoints and credentials.
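As a rough illustration of the kind of check a health-analysis tool like the one above performs (status plus latency classification), here is a minimal offline sketch. The internals of the template's actual tool are not shown in this description, and `fetch` is a stand-in for the real HTTP call:

```python
import time

def analyze_health(url, fetch, slow_ms=1000):
    """Probe one endpoint and classify it as healthy or not.

    fetch(url) is assumed to return an HTTP status code.
    """
    start = time.perf_counter()
    status = fetch(url)
    latency_ms = (time.perf_counter() - start) * 1000
    healthy = 200 <= status < 300 and latency_ms < slow_ms
    return {"url": url, "status": status,
            "latency_ms": round(latency_ms, 1), "healthy": healthy}

# Stubbed demo: a fast 200 response counts as healthy.
print(analyze_health("https://api.example.com/health", lambda u: 200))
```

A real tool would layer on authentication headers, retries, and report aggregation, which is what the other four tools in this template cover.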