by Jay Emp0
Prompt-to-Image Generator & WordPress Uploader (n8n Workflow)

This workflow generates high-quality AI images from text prompts using Leonardo AI, automatically uploads the result to your WordPress media library, and returns the final image URL. It functions as a Modular Content Production (MCP) tool - ideal for AI agents or workflows that need to dynamically generate and store visual assets on demand.

## ⚙️ Features

- 🧠 **AI-Powered Generation** - Uses Leonardo AI to create 1472x832px images from any text prompt, with enhanced contrast and a style UUID preset.
- ☁️ **WordPress Media Upload** - Uploads the image as an attachment to your connected WordPress site via the REST API.
- ☁️ **Twitter Media Upload** - Uploads the image to Twitter so you can post it to X.com later using the returned media_id.
- 🔗 **Returns Final URL** - Outputs the publicly accessible image URL for immediate use in websites, blogs, or social media posts.
- 🔁 **Workflow-Callable (MCP Compatible)** - Can be executed standalone or triggered by another workflow. Acts as an image-generation microservice for larger automation pipelines.

## 🧠 Use Cases

**For AI Agents (MCP)**
- Plug this into multi-agent systems as the "image generation module"
- Generate blog thumbnails, product mockups, or illustrations
- Return a clean image_url for content embedding or post-publishing

**For Marketers / Bloggers**
- Automate visual content creation for articles
- Scale image generation for SEO blogs or landing pages
- Supports media upload for Twitter

**For Developers / Creators**
- Integrate with other n8n workflows
- Pass prompt and slug as inputs from any external trigger (e.g., webhook, Discord, Airtable, etc.)

## 📥 Inputs

| Field  | Type   | Description                           |
|--------|--------|---------------------------------------|
| prompt | string | Text prompt for image generation      |
| slug   | string | Filename identifier (e.g. hero-image) |

Example:

```json
{
  "prompt": "A futuristic city skyline at night",
  "slug": "futuristic-city"
}
```

## 📤 Output

```
{
  "public_image_url": "https://your.wordpress.com/img-id",
  "wordpress": { ...obj },
  "twitter": { ...obj }
}
```

## 🔄 Workflow Summary

1. **Receive Prompt & Slug** - Via manual trigger or parent workflow execution
2. **Generate Image** - POST to Leonardo AI's API with the prompt and config
3. **Wait & Poll** - Delays 1 minute, then fetches the final image metadata (see the sketch at the end of this section)
4. **Download Image** - GET request to retrieve the generated image
5. **Upload to WordPress** - Uses the WordPress REST API with proper headers
6. **Upload to Twitter** - Uses the Twitter Media Upload API to get the media_id in case you want to post the image to Twitter
7. **Return Result** - Outputs a clean public_image_url JSON object along with the WordPress and Twitter media objects

## 🔐 Requirements

- Leonardo AI account and API key
- WordPress site with API credentials (media write permission)
- Twitter / X.com OAuth API (optional)
- n8n instance (self-hosted or cloud)

Credential setup:
- httpHeaderAuth for Leonardo headers
- httpBearerAuth for the Leonardo bearer token
- wordpressApi for upload

## 🧩 Node Stack

- Execute Workflow Trigger / Manual Trigger
- Code (Input Parser)
- HTTP Request → Leonardo image generation
- Wait → 1 min delay
- HTTP Request → Poll generation result
- HTTP Request → Download image
- HTTP Request → Upload to WordPress
- Code → Return final image URL

## 🖼 Example Prompt

```json
{
  "prompt": "Batman typing on a laptop",
  "slug": "batman-typing-on-a-laptop"
}
```

Will return:

```json
{
  "public_image_url": "https://articles.emp0.com/wp-content/uploads/2025/07/img-batman-typing-on-a-laptop.jpg"
}
```

## 🧠 Integrate with AI Agents

This workflow is MCP-compliant - plug it into:
- Research-to-post pipelines
- Blog generators
- Carousel builders
- Email visual asset creators

Trigger it from any parent AI agent that needs to generate an image based on a given idea, post, or instruction.
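To make the Wait & Poll step concrete, here is a minimal sketch of the generate-then-poll pattern in plain JavaScript. It assumes Leonardo's public REST API: the endpoint paths and response field names (sdGenerationJob.generationId, generations_by_pk) match Leonardo's documentation at the time of writing, but verify them against the current docs before relying on this.

```javascript
// Minimal sketch of the generate-then-poll pattern behind the Workflow
// Summary above. Endpoint paths and response fields follow Leonardo's
// public REST API as documented at the time of writing - verify before use.
const API = 'https://cloud.leonardo.ai/api/rest/v1';

async function generateImage(apiKey, prompt) {
  const headers = {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  };

  // 1. Kick off a generation (the "Generate Image" HTTP Request node)
  const start = await fetch(`${API}/generations`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ prompt, width: 1472, height: 832, num_images: 1 }),
  }).then(r => r.json());
  const generationId = start.sdGenerationJob.generationId;

  // 2. Wait a minute, then poll for the result (the Wait + poll nodes)
  await new Promise(resolve => setTimeout(resolve, 60_000));
  const result = await fetch(`${API}/generations/${generationId}`, { headers })
    .then(r => r.json());
  return result.generations_by_pk.generated_images[0].url;
}
```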
by Atik
Simplify expense tracking with AI-powered automation that extracts receipt data and organizes it instantly.

## What this workflow does

- Watches Google Drive for new receipt uploads (images/PDFs)
- Automatically downloads and prepares files for processing
- Parses key details using the trusted VLM Run node (merchant, customer, amount, currency, date)
- Stores structured records in Airtable for real-time tracking

## Setup

Prerequisites: Google Drive & Airtable accounts, VLM Run API credentials, n8n instance.

Install the verified VLM Run node by searching for VLM Run in the node list, then click Install. Once installed, you can start using it in your workflows.

Quick setup:
1. Configure Google Drive OAuth2 and create a receipt upload folder
2. Add VLM Run API credentials
3. Create an Airtable base with fields: Customer, Merchant, Amount, Currency, Date
4. Update folder/base IDs in the workflow nodes
5. Test and activate

## How to customize this workflow to your needs

Extend functionality by:
- Adding categories, budgets, or approval steps
- Syncing with accounting tools (QuickBooks, Xero)
- Sending Slack or email alerts for processed receipts
- Enabling error handling and duplicate checks

This workflow eliminates manual data entry and creates a seamless, automated system that saves time and improves accuracy.
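As a rough sketch, the mapping from the VLM Run node's output to the five Airtable fields above could live in a small Code node like this; the VLM Run output property names are assumptions, so inspect a real execution before copying:

```javascript
// Hypothetical mapping from VLM Run output to the Airtable fields above.
// The exact shape of the VLM Run node's output is an assumption - inspect
// a real execution to confirm the property names before using this.
const receipt = $input.first().json; // output of the VLM Run node
return [{
  json: {
    Customer: receipt.customer ?? '',
    Merchant: receipt.merchant ?? '',
    Amount:   receipt.total_amount ?? 0,
    Currency: receipt.currency ?? 'USD',
    Date:     receipt.date ?? new Date().toISOString().slice(0, 10),
  },
}];
```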
by Muhammad Abrar
This n8n template demonstrates how to automate the scraping of posts, comments, and sub-comments from a Facebook Group and store the data in a Supabase database.

Use cases are many: gather user engagement data for analysis, archive posts and comments for research, or monitor community sentiment by collecting feedback across discussions!

## Good to know

At the time of writing, this workflow requires an Apify API key for scraping and Supabase credentials for database storage.

## How it works

- The Facebook Group posts are retrieved using an Apify scraper node.
- For each post, comments and sub-comments are collected recursively to capture all levels of engagement (see the sketch at the end of this section).
- The data is then structured and stored in Supabase, creating records for posts, comments, and sub-comments.
- The workflow includes options to adjust how often it scrapes and which group to target, making it easy to automate collection on a schedule.

## How to use

The workflow is triggered manually in the example, but you can replace this with other triggers like webhooks or scheduled workflows, depending on your needs. The workflow captures data points such as user interactions and media attached to posts.

## Requirements

- Apify account and API key
- Supabase account for data storage

## Customizing this workflow

This template is ideal for gathering and analyzing community feedback, tracking discussions over time, or archiving group content for future use.
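To illustrate the recursive collection step, here is a hypothetical flattener that turns nested posts, comments, and sub-comments into flat rows for three Supabase tables; all field names are illustrative and should be matched to the Apify actor's actual output and your table schema:

```javascript
// Hypothetical flattener: turns nested posts/comments/sub-comments into
// flat rows for three Supabase tables. Field names are illustrative -
// match them to your Apify actor's output and your schema.
function flatten(posts) {
  const rows = { posts: [], comments: [], subComments: [] };
  for (const post of posts) {
    rows.posts.push({ post_id: post.id, text: post.text, created_at: post.time });
    for (const comment of post.comments ?? []) {
      rows.comments.push({ comment_id: comment.id, post_id: post.id, text: comment.text });
      for (const reply of comment.replies ?? []) {
        rows.subComments.push({ reply_id: reply.id, comment_id: comment.id, text: reply.text });
      }
    }
  }
  return rows;
}
```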
by KlickTipp
Community Node Disclaimer: This workflow uses KlickTipp community nodes.

## Introduction

This workflow automates the end-to-end integration between Zoom and KlickTipp. It listens to Zoom webinar events (specifically meeting.ended), validates incoming webhooks, retrieves participant data from Zoom, and applies segmentation in KlickTipp by subscribing and tagging participants based on their attendance duration. This enables precise, automated campaign targeting without manual effort.

## How It Works

**Zoom Webhook Listener**
- Captures meeting.ended events from Zoom.
- Validates initial webhook registration via HMAC before processing.

**Webhook Response Handling**
- Distinguishes between Zoom's URL validation requests and actual event data.
- Sends the appropriate responses: plainToken + encryptedToken for validation, or a simple status: ok for regular events (see the validation sketch at the end of this section).

**Data Retrieval**
- Waits briefly (1 second) to ensure meeting data is available.
- Pulls the participant list from Zoom's past_meetings/{uuid}/participants endpoint.

**Participant Processing**
- Splits the list into individual participant items.
- Filters out internal users (like the host).
- Routes participants based on the meeting topic (e.g., the Anfänger ("beginner") vs. the Experten ("expert") webinar).

**Attendance Segmentation**
- Subscribes each participant to KlickTipp with mapped fields (first name, last name, email).
- Uses conditions to check attendance thresholds:
  - ≥ 90% of total meeting duration → full attendance
  - Otherwise → general attendance
- Applies the corresponding KlickTipp tags per meeting type.

## Key Features

- ✅ Webhook validation & security with HMAC (SHA256).
- ✅ Automated attendance calculation using participant duration vs. meeting duration.
- ✅ Dynamic routing by meeting topic for multiple webinars.
- ✅ KlickTipp integration with subscriber creation or update, plus tagging for full vs. general attendance.
- ✅ Scalable structure for adding more webinars by extending the Switch and tagging branches.

## Setup Instructions

**Zoom Setup**
1. Enable Zoom API access and OAuth2 app credentials.
2. Configure the webhook event meeting.ended.
3. Grant scopes:
   - meeting:read:meeting
   - meeting:read:list_past_participants

**KlickTipp Setup**
1. Prepare custom fields:
   - Zoom | meeting selection (Text)
   - Zoom | meeting start (Date & Time)
   - Zoom | Join URL (URL)
   - Zoom | Registration ID (Text)
   - Zoom | Duration meeting (Text)
2. Create tags for each meeting variation: attended, attended fully, not attended per meeting name.

**n8n Setup**
1. Add the Zoom webhook node (Listen to ending Zoom meetings).
2. Configure the validation nodes (Crypto, Build Validation Body).
3. Set up the HTTP Request node with Zoom OAuth2 credentials.
4. Connect the KlickTipp nodes with your KlickTipp API credentials.

## Testing & Deployment

1. End a test Zoom meeting connected to this workflow.
2. Verify that:
   - The webhook triggers correctly.
   - The participant list is fetched.
   - Internal users are excluded.
   - Participants are subscribed and tagged in KlickTipp.
3. Check contact records in KlickTipp for tag and field updates.

💡 Pro Tip: Use test emails and manipulate duration values to confirm the segmentation logic.

## Customization Ideas

- Adjust attendance thresholds (e.g., 80% instead of 90%).
- Add additional meeting topics via the Switch node.
- Trigger email campaigns in KlickTipp based on attendance tags.
- Expand segmentation with more granular ranges (e.g., 0-30%, 30-60%, 60-90%).
- Add error handling for missing Zoom data or API failures.

Resources:
- Use KlickTipp Community Node in n8n
- Automate Workflows: KlickTipp Integration in n8n
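For reference, the URL-validation handshake behind the Crypto and Build Validation Body nodes follows Zoom's documented challenge-response pattern. A minimal sketch (the secret token comes from your app's settings in the Zoom Marketplace):

```javascript
// Sketch of Zoom's documented URL-validation handshake (the logic behind
// the Crypto and Build Validation Body nodes).
const crypto = require('crypto');

function buildWebhookResponse(body, secretToken) {
  if (body.event === 'endpoint.url_validation') {
    const plainToken = body.payload.plainToken;
    const encryptedToken = crypto
      .createHmac('sha256', secretToken)
      .update(plainToken)
      .digest('hex');
    return { plainToken, encryptedToken }; // Zoom expects exactly this shape
  }
  return { status: 'ok' }; // regular event: acknowledge and continue
}
```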
by Daniel
Transform your Telegram chats into a creative powerhouse with this AI-driven image editing bot. Send an image document with a descriptive caption, and watch it get intelligently edited in seconds - no design skills required!

## 📋 What This Template Does

This workflow powers a Telegram bot that automatically processes incoming image documents with text prompts. It downloads the file, uses Google Gemini AI to edit the image based on your caption, and instantly replies with the enhanced version.

- Triggers on new messages containing documents and captions (see the guard sketch at the end of this section)
- Securely downloads and validates files before AI processing
- Leverages Gemini for precise, prompt-based image edits
- Sends the polished result back to the chat seamlessly

## 🔧 Prerequisites

- A Telegram bot created via @BotFather
- Google AI Studio account for Gemini API access
- n8n instance with the Telegram and Google Gemini nodes enabled

## 🔑 Required Credentials

**Telegram API Setup**
1. Open Telegram and message @BotFather
2. Use /newbot to create your bot and note the token
3. In n8n, go to Credentials → Add Credential → Search "Telegram API"
4. Paste the token and save as "Telegram Bot"

**Google Gemini API Setup**
1. Visit aistudio.google.com and sign in with Google
2. Click "Get API key" → Create API key in a new project
3. In n8n, go to Credentials → Add Credential → Search "Google Gemini API"
4. Enter the API key and save as "Gemini API"

## ⚙️ Configuration Steps

1. Import the provided JSON into your n8n workflows
2. Assign the Telegram Bot credential to the trigger and Telegram nodes
3. Assign the Gemini API credential to the Edit Image node
4. Activate the workflow and note your bot's username
5. Test by sending an image document with a caption like "add a sunset background" to your bot

## 🎯 Use Cases

- **Personal creativity boost**: Send a selfie with "make me a superhero" for fun edits during downtime
- **Social media content**: Upload product photos with "enhance lighting and add text overlay" for quick marketing visuals
- **Educational sketches**: Share drawings with "colorize and detail" to turn student ideas into professional illustrations
- **Team collaboration**: In group chats, prompt "remove background" for instant design feedback loops

## ⚠️ Troubleshooting

- **Bot not responding**: Verify the token in credentials and ensure "message" updates are enabled in the trigger
- **File download fails**: Check that the sent file is a document/image; Telegram expires links after 1 hour - resend if needed
- **AI edit errors**: Confirm Gemini API key quotas; shorten prompts if over 100 words for better results
- **No edited image sent**: Inspect the execution log for binary data flow; ensure "binaryData" is toggled on in the send node
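As a sketch of the trigger-side guard mentioned above, a Code node could drop messages that lack a file or caption before any AI processing runs. The Telegram field names (message.document, message.photo, message.caption) follow the Bot API; the n8n wrapper path is an assumption to verify against a live execution:

```javascript
// Hypothetical guard: only let messages through that carry both a
// document/photo and a caption, as the trigger description requires.
const msg = $input.first().json.message ?? {};
const hasFile = Boolean(msg.document || (msg.photo && msg.photo.length));
const caption = (msg.caption ?? '').trim();

if (!hasFile || !caption) {
  return []; // drop the item so downstream AI nodes never run
}
return [{
  json: {
    chatId: msg.chat.id,
    caption,
    fileId: msg.document?.file_id ?? msg.photo.at(-1).file_id, // largest photo size
  },
}];
```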
by Grace Gbadamosi
## How it works

This workflow automatically monitors your Google Business Profile for new reviews and uses AI to generate personalized response suggestions. When a review is detected, the system formats the review data, generates an appropriate AI response based on the rating and content, sends differentiated Slack notifications (urgent alerts for negative reviews, celebration messages for positive ones - see the sketch at the end of this section), and logs everything to Google Sheets for tracking and analysis.

## Who is this for

Local business owners, restaurant managers, retail store operators, service providers, and reputation management teams who want to stay on top of customer feedback and respond promptly with thoughtful, AI-generated responses. Perfect for businesses that receive regular reviews and want to maintain consistent, professional customer engagement without manually monitoring multiple platforms.

## Requirements

- **Google Business Profile**: Active business profile with review monitoring enabled
- **Google API Credentials**: Service account with access to the Business Profile API and Sheets API
- **Slack Webhook**: Incoming webhook URL for team notifications
- **Google Sheets**: Spreadsheet with a "Reviews" sheet for logging review data
- **Environment Variables**: Setup for secure credential storage
- **Basic n8n Knowledge**: Understanding of triggers, expressions, and credential management

## How to set up

1. **Configure the Google Business Profile API** - Create a Google Cloud project, enable the Business Profile API, set up service account credentials, and add your Business Account ID and Location ID to environment variables
2. **Prepare the Google Sheets integration** - Create a Google Sheet with a "Reviews" sheet, add the required headers, set the GOOGLE_SHEET_ID environment variable, and ensure the service account has edit access
3. **Set up Slack notifications** - Create a Slack webhook in your workspace and set the SLACK_WEBHOOK_URL environment variable
4. **Customize business settings** - Update the Business Configuration node with your business name and adjust the AI response tone preferences

## How to customize the workflow

Modify the Business Configuration node to change your business name, adjust the AI response tone (professional, friendly, casual), customize the Slack notification messages in the HTTP Request nodes, or add additional review sources by duplicating the trigger structure.
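A minimal sketch of the differentiated Slack notifications described above. Slack incoming webhooks accept a simple `{ text }` payload; the 3-star threshold and message wording here are illustrative choices, not what the template necessarily ships with:

```javascript
// Hypothetical message builder for the Slack HTTP Request nodes.
// The 3-star urgency threshold and emoji wording are illustrative.
function buildSlackPayload(review) {
  const urgent = review.rating <= 3;
  return {
    text: urgent
      ? `🚨 Urgent: ${review.rating}★ review from ${review.reviewer} - "${review.comment}" - suggested reply: ${review.aiResponse}`
      : `🎉 New ${review.rating}★ review from ${review.reviewer} - "${review.comment}"`,
  };
}
```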
by inderjeet Bhambra
This workflow automates AI-powered image and video generation using MagicHour.ai's API, enhanced by GPT-4.1 for intelligent prompt optimization. It processes webhook requests, refines prompts using AI, generates media content, and returns the final output.

## Who's it for

Content creators, marketers, social media managers, and developers who need automated AI media generation at scale. Perfect for teams building applications that require on-demand image or video creation without manual intervention.

## How it works

The workflow receives a webhook POST request containing generation parameters (type, orientation, style, duration). GPT-4.1 analyzes and optimizes the user's prompt based on the request type (image or video), then sends it to MagicHour.ai's API. The workflow monitors the generation status through polling loops (see the sketch at the end of this section), downloads the completed media, and returns it via the webhook response. Error handling ensures failed requests are captured and reported.

## Requirements

- **n8n instance** (self-hosted or cloud)
- **MagicHour.ai account** with API access (Bearer token)
- **OpenAI API account** for GPT-4.1 access
- Basic understanding of webhooks and JSON

## How to set up

1. **Configure credentials:**
   - Add the MagicHour.ai Bearer token in the HTTP Request nodes (ai-image-generator, text-to-video, Get Image Details, Get Video Details)
   - Add OpenAI API credentials in both the Generate Image Prompt and Generate video Prompt nodes
2. **Activate the workflow:**
   - Enable the workflow to activate the webhook endpoint
   - Copy the webhook URL from the Webhook trigger node
3. **Test the workflow:**
   - Download the n8n-magichour HTML tester

For image generation, send a POST request with this structure:

```json
{
  "action": "generate",
  "type": "image",
  "parameters": {
    "name": "My Image",
    "image_count": 1,
    "orientation": "landscape",
    "style": {
      "prompt": "A serene mountain landscape at sunset",
      "tool": "realistic"
    }
  }
}
```

For video generation, use:

```json
{
  "action": "generate",
  "type": "video",
  "parameters": {
    "name": "My Video",
    "end_seconds": 5,
    "orientation": "landscape",
    "resolution": "1080p",
    "style": {
      "prompt": "A dog running through a field"
    }
  }
}
```

## How to customize the workflow

- **Adjust AI prompt optimization**: Modify the system prompts in the Generate Image Prompt or Generate video Prompt nodes to change how GPT-4.1 refines user inputs. The current prompts enforce strict character limits and avoid unauthorized content.
- **Change polling intervals**: Modify the Wait nodes to adjust how frequently the workflow checks generation status (useful for longer video renders).
- **Modify the response format**: Update the Respond to Webhook node to customize the output structure sent back to the caller.
- **Add multiple output formats**: Extend the Download Image/Video nodes to save files to cloud storage (Google Drive, S3) instead of just returning them via webhook.
- **Implement queue management**: Add a database node before the MagicHour.ai calls to queue requests and prevent API rate limiting.
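The polling loop mentioned under "How it works" reduces to a wait-check-repeat cycle. The sketch below is hypothetical: the status URL and the `status`/`downloads` field names are placeholders to be replaced with the real endpoint and response shape from the Get Image Details / Get Video Details nodes:

```javascript
// Hypothetical polling loop mirroring the Wait + "Get ... Details" cycle.
// statusUrl and the `status`/`downloads` fields are placeholders - copy
// the real endpoint and response shape from the workflow's detail nodes.
async function waitForRender(statusUrl, token, intervalMs = 10_000, maxTries = 30) {
  for (let i = 0; i < maxTries; i++) {
    const job = await fetch(statusUrl, {
      headers: { Authorization: `Bearer ${token}` },
    }).then(r => r.json());
    if (job.status === 'complete') return job.downloads; // done: asset URLs
    if (job.status === 'error') throw new Error('MagicHour render failed');
    await new Promise(resolve => setTimeout(resolve, intervalMs)); // back off, retry
  }
  throw new Error('Timed out waiting for render');
}
```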
by Convosoft
Generate Secure User Authentication with One Webhook

Streamline user onboarding and security for your applications using this n8n workflow. This template handles signup, login, and password resets through a single endpoint, making it ideal for developers building MVPs or scaling apps without a full authentication backend.

## Who Is This For?

This workflow is designed for:
- Developers, indie hackers, and teams building web, mobile, or API-driven applications.
- Those who need a quick and secure authentication layer.
- Anyone tired of writing custom auth code or managing third-party services like Auth0 for simple needs.

This template integrates seamlessly into your n8n setup.

## What Problem Does This Workflow Solve?

Building authentication from scratch is time-consuming and complex:
- **User Management**: Managing registration, credential verification, and password recovery can take weeks of development time.
- **Security**: Ensuring secure password hashing, case-insensitive email matching, and robust error handling adds significant complexity.
- **Integration**: Creating consistent APIs for apps (e.g., React Native, Next.js, Flutter) is challenging.

This workflow provides a battle-tested, webhook-based authentication system that is:
- Database-agnostic (works with PostgreSQL/Supabase).
- Extensible - deploy once and integrate across all your apps.

## What This Workflow Does

The workflow handles authentication tasks through a single webhook endpoint, offering the following functionality:

**Webhook Entry**
- Listens for POST requests at /webhook/auth.
- Processes a simple JSON payload, routing actions via a "path" parameter (signup, signin, forgot).

**Signup**
- Inserts new users into your database.
- Uses bcrypt-hashed passwords (via pgcrypto; see the query sketch at the end of this section).
- Returns user details in the response.

**Login**
- Queries for case-insensitive email matches.
- Verifies passwords securely.
- Returns user information on successful login.

**Forgot Password**
- Generates a random 8-character password.
- Updates the password hash in the database.
- Returns the new password (suitable for email delivery).

**Routing & Validation**
- Uses n8n Switch and IF nodes to securely handle paths and credentials.

**Standardized Responses**
- Outputs clean JSON with status, message, and data for easy frontend parsing.

**Error Handling**
- Gracefully manages invalid inputs, duplicate entries, and database errors.

No more boilerplate - get authentication up and running in minutes!

## Setup Instructions

Follow these steps to set up the workflow:

1. **Connect your accounts**: Use PostgreSQL or Supabase for user storage (free tiers are sufficient). Enable the following PostgreSQL extensions: uuid-ossp and pgcrypto.
2. **Create the users table**:

```sql
CREATE TABLE users (
  id uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
  full_name text NOT NULL,
  email text UNIQUE NOT NULL,
  password_hash text NOT NULL,
  created_at timestamptz DEFAULT now()
);
```

3. **Configure credentials**: Add PostgreSQL credentials to the n8n nodes ("Signup", "Login", "Reset Password").
4. Import the JSON workflow into n8n.
5. Activate the workflow.
6. **Test the workflow**: Use Postman or curl to send requests to the auto-generated webhook URL.

## How to Customize This Workflow

Extend the workflow to fit your specific needs with these modifications:
- **Add JWT sessions**: Insert a node after successful login to generate and sign JWT tokens (e.g., using the Crypto node).
- **Email integration**: Add a SendGrid or Mailgun node to the "Forgot Password" flow to automatically send new credentials.
- **Rate limiting**: Include an HTTP Request node to check usage quotas before processing requests.
- **Multi-database support**: Replace PostgreSQL with MySQL or MongoDB by updating the query nodes.
- **Frontend enhancements**: Extend JSON responses to include user avatars or roles by joining additional tables in SQL queries.
- **Triggers**: Add a Schedule node for batch user imports, or include a webhook for external authentication calls.
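For reference, the Signup and Login steps typically reduce to pgcrypto's crypt() and gen_salt() functions. A sketch assuming the users table above, with the parameter binding style ($1, $2) left to however your Postgres nodes are configured:

```javascript
// Sketch of the pgcrypto-based queries behind the Signup and Login nodes.
// crypt() and gen_salt('bf') are standard pgcrypto functions; the
// parameter placeholders depend on your Postgres node configuration.
const signupQuery = `
  INSERT INTO users (full_name, email, password_hash)
  VALUES ($1, lower($2), crypt($3, gen_salt('bf')))
  RETURNING id, full_name, email, created_at;
`;

const loginQuery = `
  SELECT id, full_name, email, created_at
  FROM users
  WHERE email = lower($1)
    AND password_hash = crypt($2, password_hash); -- rehash with the stored salt
`;
```

Passing the stored hash as crypt()'s second argument reuses its embedded salt, which is the standard pgcrypto verification pattern, and lower() on both sides gives the case-insensitive email matching described above.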
by Oneclick AI Squad
Enhance event logistics with this automated n8n workflow. Triggered by seating requests, it fetches attendee data and venue templates from Google Sheets, calculates totals, and optimizes seating layouts. The workflow generates detailed recommendations, splits individual assignments, and sends alerts, ensuring efficient venue planning and real-time updates. 🎪📋

## Key Features

- Optimizes seating arrangements based on attendee data and event type.
- Generates venue layouts with visual and statistical insights.
- Provides real-time alerts with comprehensive seating plans.
- Logs detailed assignments and layouts in Google Sheets.

## Workflow Process

1. The **Webhook Trigger** node initiates the workflow upon receiving venue requirements and attendee data via webhook (a sample payload sketch follows at the end of this section).
2. **Validate Request Data** ensures the incoming data is complete and accurate.
3. **Fetch Attendee Data** retrieves attendee information, including groups, accessibility needs, and VIP preferences, from Google Sheets.
4. **Fetch Venue Templates** reads venue layout templates from Google Sheets.
5. **Calculate Totals** aggregates attendee data and venue constraints for optimal planning.
6. **Combine All Data** merges attendee and venue data for analysis.
7. **AI Optimization** uses algorithms to calculate optimal seating based on venue dimensions, attendee groups, accessibility needs, VIP placement, and aisle placement.
8. **Optimize Seating Layout** refines the seating plan for efficiency.
9. **Format Recommendations** structures the seating plan with a visual layout map, seat assignments, statistics & metrics, and optimization tips.
10. **Split Seat Assignments** divides the plan into individual seat assignments.
11. **Send Response** returns the complete seating plan with a visual layout map, seat assignment list, statistics & recommendations, and an export-ready format.
12. **Send Alert** notifies organizers with the finalized plan details.
13. **Update Sheets** saves the master plan summary, individual seat assignments, and layout specifications to Google Sheets.
14. **Save Individual Assignments** appends or updates individual seat assignments in Google Sheets.

## Setup Instructions

1. Import the workflow into n8n and configure Google Sheets OAuth2 for data access.
2. Set up the Webhook Trigger with your event management system's API credentials.
3. Configure the AI Optimization node with a suitable algorithm or model.
4. Test the workflow by sending sample seating requests and verifying the layouts.
5. Adjust optimization parameters as needed for specific venue or event requirements.

## Prerequisites

- Google Sheets OAuth2 credentials
- Webhook integration with the event management system
- Structured attendee and venue data in a Google Sheet

Google Sheet structure:
- **Attendee Data** sheet with columns: Name, Group, Accessibility Needs, VIP Status, Preferences, Updated At
- **Venue Templates** sheet with columns: Venue Name, Capacity, Dimensions, Layout Template, Updated At

## Modification Options

- Customize the Validate Request Data node to include additional validation rules.
- Adjust the AI Optimization node to prioritize specific criteria (e.g., proximity, accessibility).
- Modify the Format Recommendations node to include custom visual formats.
- Integrate with venue management tools for live layout updates.
- Set custom alert triggers in the Send Alert node.

Discover more workflows – Get in touch with us
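A hypothetical seating-request payload for step 1 of the workflow process; every field name here is illustrative, derived from the sheet columns listed under Prerequisites, and should be aligned with what your event management system actually sends:

```javascript
// Hypothetical seating-request payload for the Webhook Trigger node.
// Field names are illustrative - align them with your event system.
const sampleRequest = {
  event_type: 'conference',
  venue_name: 'Grand Hall A',          // must match a Venue Templates row
  attendee_sheet: 'Attendee Data',     // sheet holding the attendee rows
  constraints: {
    keep_groups_together: true,
    reserve_front_rows_for_vip: true,
    wheelchair_accessible_seats: 4,
  },
};
```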
by vinci-king-01
AI Conference Intelligence & Networking Optimizer with ScrapeGraphAI

> ⚠️ IMPORTANT: This template requires a self-hosted n8n instance with ScrapeGraphAI integration. It cannot be used with n8n Cloud due to web scraping capabilities.

## How it works

This workflow automatically discovers industry conferences and provides AI-powered networking intelligence to maximize your event ROI.

Key steps:
1. **Scheduled Discovery** - Runs weekly to find new industry conferences from Eventbrite and other sources.
2. **AI-Powered Scraping** - Uses ScrapeGraphAI to extract comprehensive conference information including speakers, agenda, and networking opportunities.
3. **Speaker Intelligence** - Analyzes speakers to identify high-priority networking targets based on their role, company, and expertise.
4. **Agenda Analysis** - Extracts and maps the complete conference schedule to optimize your time and networking strategy.
5. **Networking Strategy** - Generates AI-powered recommendations for maximizing networking ROI with prioritized contact lists and approach strategies.

## Set up steps

Setup time: 10-15 minutes

1. **Configure ScrapeGraphAI credentials** - Add your ScrapeGraphAI API key for web scraping capabilities.
2. **Customize conference sources** - Update the Eventbrite URL to target specific industries or locations.
3. **Adjust monitoring frequency** - Modify the weekly trigger to match your conference discovery needs.
4. **Review networking priorities** - The system automatically prioritizes speakers, but you can customize the criteria.

## Technical Configuration

Prerequisites:
- Self-hosted n8n instance (version 1.0+)
- ScrapeGraphAI API credentials
- Eventbrite API access (optional, for enhanced data)

ScrapeGraphAI setup:
1. Sign up at https://scrapegraph.ai
2. Generate an API key from the dashboard
3. Add credentials in n8n: Settings > Credentials > Add Credential > ScrapeGraphAI

## Customization Examples

Modify conference sources:

```javascript
// In the Eventbrite Scraper node, update the URL:
const targetUrl = "https://www.eventbrite.com/d/united-states/technology/";
const industryFilter = "?q=artificial+intelligence";
```

Adjust networking priorities:

```javascript
// In the Speaker Intelligence node, modify the scoring weights:
const priorityWeights = {
  executive_level: 0.4,
  company_size: 0.3,
  industry_relevance: 0.2,
  speaking_topic: 0.1
};
```

Customize the output format:

```javascript
// In the Networking Strategy node, modify the output structure:
const outputFormat = {
  high_priority: speakers.filter(s => s.score > 8),
  medium_priority: speakers.filter(s => s.score > 6 && s.score <= 8),
  networking_plan: generateApproachStrategy(speakers)
};
```

## Data Storage & Output Formats

Storage options:
- **Local JSON files** - Default storage for conference data
- **Google Drive** - For sharing reports with the team
- **Database** - PostgreSQL/MySQL for enterprise deployments
- **Cloud Storage** - AWS S3, Google Cloud Storage

Output formats:
- **JSON** - Raw data for API integration
- **CSV** - For spreadsheet analysis
- **PDF Reports** - Executive summaries
- **Markdown** - Documentation and sharing

Sample output structure:

```json
{
  "conference_data": {
    "event_name": "AI Summit 2024",
    "date": "2024-06-15",
    "location": "San Francisco, CA",
    "speakers": [
      {
        "name": "Dr. Sarah Chen",
        "title": "CTO, TechCorp",
        "company": "TechCorp Inc",
        "networking_score": 9.2,
        "priority": "high",
        "approach_strategy": "Connect via LinkedIn, mention shared AI interests"
      }
    ],
    "networking_plan": {
      "high_priority_targets": 5,
      "recommended_approach": "Focus on AI ethics panel speakers",
      "schedule_optimization": "Attend morning keynotes, network during breaks"
    }
  }
}
```

## Key Features

- **Automated Conference Discovery** - Finds relevant industry events from multiple sources
- **Speaker Intelligence Analysis** - Identifies high-value networking targets with contact priority scoring
- **Strategic Agenda Mapping** - Optimizes your conference schedule for maximum networking impact
- **AI-Powered Recommendations** - Provides personalized networking strategies and approach methods
- **Priority Contact Lists** - Ranks speakers by business value and networking potential

## Troubleshooting

Common issues:
- **ScrapeGraphAI rate limits** - Implement delays between requests
- **Website structure changes** - Update the scraping prompts in the ScrapeGraphAI nodes
- **API authentication** - Verify credentials and permissions

Performance optimization:
- Adjust trigger frequency based on conference season
- Implement caching for repeated data
- Use batch processing for large conference lists

## Support & Customization

For advanced customization or enterprise deployments, consider:
- Custom speaker scoring algorithms
- Integration with CRM systems (Salesforce, HubSpot)
- Advanced analytics and reporting dashboards
- Multi-language support for international conferences
by PDF Vector
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Description: Unified Academic Search Across Major Research Databases

This powerful workflow enables researchers to search multiple academic databases simultaneously, automatically deduplicate results, and export formatted bibliographies. By leveraging PDF Vector's multi-database search capabilities, researchers can save hours of manual searching and ensure comprehensive literature coverage across the PubMed, ArXiv, Google Scholar, Semantic Scholar, and ERIC databases.

## Target Audience & Problem Solved

This template is designed for:
- **Graduate students** conducting systematic literature reviews
- **Researchers** ensuring comprehensive coverage of their field
- **Librarians** helping patrons with complex searches
- **Academic teams** building shared bibliographies

It solves the critical problem of fragmented academic search by providing a single interface to query all major databases, eliminating duplicate results, and standardizing output formats.

## Prerequisites

- n8n instance with the PDF Vector node installed
- PDF Vector API credentials with search permissions
- Basic understanding of academic search syntax
- Optional: PostgreSQL for search history logging
- Minimum 50 API credits for comprehensive searches

## Step-by-Step Setup Instructions

1. **Configure PDF Vector credentials**
   - Go to the n8n Credentials section
   - Create new PDF Vector credentials
   - Enter your API key from pdfvector.io
   - Test the connection to verify the setup
2. **Import the workflow template**
   - Copy the template JSON code
   - In n8n, click "Import Workflow"
   - Paste the JSON and save
   - Review all nodes for any configuration needs
3. **Customize search parameters**
   - Open the "Set Search Parameters" node
   - Modify the default search query for your field
   - Adjust the year range (default: 2020-present)
   - Set the results-per-source limit (default: 25)
4. **Configure export options**
   - Choose your preferred export formats (BibTeX, CSV, JSON)
   - Set the output directory for files
   - Configure file naming conventions
   - Enable/disable specific export types
5. **Test your configuration**
   - Run the workflow with a sample query
   - Check that all databases return results
   - Verify deduplication is working correctly
   - Confirm export files are created properly

## Implementation Details

The workflow implements a sophisticated search pipeline:
1. **Parallel Database Queries**: Searches all configured databases simultaneously for efficiency
2. **Smart Deduplication**: Uses DOI matching and fuzzy title comparison to remove duplicates (see the sketch at the end of this section)
3. **Relevance Scoring**: Combines citation count, title relevance, and recency for ranking
4. **Format Generation**: Creates properly formatted citations in multiple styles
5. **Batch Processing**: Handles large result sets without memory issues

## Customization Guide

Adding custom databases:

```javascript
// In the PDF Vector search node, add to the providers array:
"providers": ["pubmed", "semantic_scholar", "arxiv", "google_scholar", "eric", "your_custom_db"]
```

Modifying the relevance algorithm - edit the "Rank by Relevance" node to adjust the scoring weights:

```javascript
// Adjust these weights for your needs:
const titleWeight = 10;    // Title match importance
const citationWeight = 5;  // Citation count importance
const recencyWeight = 10;  // Recent publication bonus
const fulltextWeight = 15; // Full-text availability bonus
```

Custom export formats - add new format generators in the workflow:

```javascript
// Example: Add APA format export
const apaFormat = papers.map(p => {
  const authors = p.authors.slice(0, 3).join(', ');
  return `${authors} (${p.year}). ${p.title}. ${p.journal || 'Preprint'}.`;
});
```

Advanced filtering - implement additional filters:
- Journal impact factor thresholds
- Open access only options
- Language restrictions
- Methodology filters for systematic reviews

## Search Features

- Query multiple databases in parallel
- Advanced filtering and deduplication
- Citation format export (BibTeX, RIS, etc.)
- Relevance ranking across sources
- Full-text availability checking

## Workflow Process

1. **Input**: Search query and parameters
2. **Parallel Search**: Query all databases
3. **Merge & Deduplicate**: Combine results
4. **Rank**: Sort by relevance/citations
5. **Enrich**: Add full-text links
6. **Export**: Multiple format options
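A minimal sketch of the DOI-plus-fuzzy-title deduplication referenced under Implementation Details; the normalization and tie-breaking rules here are illustrative, and the actual node may use a stronger similarity metric (e.g., edit distance) rather than exact normalized-title matching:

```javascript
// Illustrative deduplication: prefer exact DOI matches, fall back to a
// normalized-title comparison. Shows the overall shape only.
function dedupe(papers) {
  const seen = new Map();
  const normalize = t =>
    t.toLowerCase().replace(/[^a-z0-9 ]/g, '').replace(/\s+/g, ' ').trim();
  for (const paper of papers) {
    const key = paper.doi
      ? `doi:${paper.doi.toLowerCase()}`
      : `title:${normalize(paper.title)}`;
    // Keep the record with the higher citation count when duplicates collide
    const existing = seen.get(key);
    if (!existing || (paper.citations ?? 0) > (existing.citations ?? 0)) {
      seen.set(key, paper);
    }
  }
  return [...seen.values()];
}
```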
by Samuel Heredia
This n8n workflow securely processes contact form submissions by validating user input, formatting the data, and storing it in a MongoDB database. The flow ensures data consistency, prevents unsafe entries, and provides a confirmation response back to the user.

## Workflow

**1. Form Submission Node**
- Purpose: Serves as the workflow's entry point.
- Functionality: Captures user input from the contact form, which typically includes:
  - name
  - last name
  - email
  - phone number

**2. Code Node (Validation Layer)**
- Purpose: Ensures that collected data is valid and secure.
- Validations performed:
  - Removes suspicious characters to mitigate risks like SQL injection or script injection.
  - Validates the phone_number field format (numeric, correct length, etc.).
  - If any field fails validation, the entry is marked as "is_not_valid" to block it from database insertion.

A sketch of this validation logic appears at the end of this section.

**3. Edit Fields Node (Data Formatting)**
- Purpose: Normalizes data before database insertion.
- Transformations applied:
  - Converts field names to snake_case (first_name, last_name, phone_number).
  - Standardizes the field naming convention for consistency in MongoDB storage.

**4. MongoDB Node (Insert Documents)**
- Purpose: Persists validated data in MongoDB Atlas.
- Process: Inserts documents into the target collection with the cleaned and formatted fields. The connection is established securely using a MongoDB Atlas connection string (URI).

🔧 How to set up the MongoDB Atlas connection URL:
a. Log in to MongoDB Atlas and create a new cluster.
b. Configure database access: add a database user with a secure username and password, and assign appropriate roles (e.g., Atlas Admin for full access or Read/Write for limited access).
c. Obtain the connection string (URI): from Atlas, go to Clusters → Connect → Drivers and copy the provided connection string, which looks like:

mongodb+srv://<username>:<password>@cluster0.abcd123.mongodb.net/myDatabase?retryWrites=true&w=majority

d. Configure it in n8n: in the MongoDB node, paste the URI, replace <username>, <password>, and myDatabase with your actual credentials and database name, then test the connection to ensure it is successful.

**5. Form Ending Node**
- Purpose: Provides closure to the workflow.
- Functionality: Sends a confirmation response back to the user, indicating that their contact details were successfully submitted and stored.

✅ Result: With this workflow, all contact form submissions are safely validated, normalized, and stored in MongoDB Atlas, ensuring both data integrity and basic security.
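A sketch of the Code node's validation layer, assuming fields named name, last_name, email, and phone_number; the character blacklist and phone rules are illustrative and should be tightened to your own requirements:

```javascript
// Sketch of the validation layer described in step 2. The stripped
// character set and phone/email rules are illustrative choices.
const item = $input.first().json;

const stripUnsafe = value =>
  String(value ?? '').replace(/[<>$"';{}]/g, '').trim(); // drop injection-prone characters

const cleaned = {
  name: stripUnsafe(item.name),
  last_name: stripUnsafe(item.last_name),
  email: stripUnsafe(item.email),
  phone_number: stripUnsafe(item.phone_number),
};

const phoneOk = /^\+?\d{7,15}$/.test(cleaned.phone_number); // numeric, plausible length
const emailOk = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(cleaned.email);

return [{
  json: {
    ...cleaned,
    // Flag invalid entries so a downstream IF node can block DB insertion
    is_not_valid: !phoneOk || !emailOk || !cleaned.name,
  },
}];
```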