by phil
This workflow is your all-in-one AI Content Strategist, designed to generate comprehensive, data-driven content briefs by analyzing top-ranking competitors. It operates through a simple chat interface. You provide a target keyword, and the workflow automates the entire research process. First, it scrapes the top 10 Google search results using the powerful Bright Data SERP API. Then, for each of those results, it performs a deep dive, using the Bright Data Web Unblocker to reliably extract the full content from each page, bypassing any anti-bot measures. Finally, all the gathered data (titles, headings, word counts, and page summaries) is synthesized by a Large Language Model (LLM) to produce a strategic content plan. This plan identifies search intent, core topics, and crucial content gaps, giving you a clear roadmap to outrank the competition. This template is indispensable for SEO specialists, content marketers, and digital agencies looking to scale their content production with strategies that are proven to work.

**Why Use This AI Content Strategist Workflow?**

- **Data-Driven Insights**: Base your content strategy on what is actually ranking on Google, not guesswork.
- **Automated Competitive Analysis**: Instantly understand the structure, length, and key themes of the top-performing articles for any keyword.
- **Strategic Gap Detection**: The AI analysis highlights poorly covered topics and missed opportunities, allowing you to create content that provides unique value.
- **Massive Time Savings**: Condenses hours of manual research into a fully automated process that runs in minutes.

**How It Works**

1. **Chat Interaction Begins**: The workflow is initiated via a chat UI. The user enters a target keyword to start the analysis.
2. **Google SERP Scraping (Bright Data)**: The "Google SERP" node uses Bright Data's SERP API to fetch the top 10 organic results, providing the URLs for the next stage.
3. **Individual Page Scraping (Bright Data)**: The workflow loops through each URL. The "Access and extract data" node uses the Bright Data Web Unblocker to ensure successful and complete HTML scraping of every competitor's page.
4. **Content Extraction & Aggregation**: A series of Code nodes clean the raw HTML and extract structured data (title, meta description, headings, word count). The Aggregate node then compiles the data from all 10 pages into a single dataset.
5. **AI Synthesis (OpenRouter)**: The "Analysis" node sends the entire compiled dataset to an LLM via OpenRouter. The AI performs a holistic analysis to identify search intent, must-cover topics, and differentiation opportunities.
6. **Strategic Brief Generation**: The "Format Output" node takes the AI's structured JSON analysis and transforms it into a clean, human-readable Markdown report, which is then delivered back to the user in the chat interface.

**Prerequisites**

To use this workflow, you will need active accounts with both Bright Data (for web scraping) and OpenRouter (for AI model access).

**Setting Up Your Credentials**

Bright Data Account:

1. Sign up for a free trial account on their website.
2. Inside your Bright Data dashboard, activate both the SERP API and the Web Unblocker products to create the necessary Zones.
3. In n8n, navigate to the Credentials section, add a new "Brightdata API" credential, and enter your API key.
4. In the workflow, select your newly created credential in both the "Google SERP" node and the "Access and extract data from a specific URL" node.

OpenRouter Account:

1. Sign up for an account at OpenRouter.ai.
2. Navigate to your account settings to find your API Key.
3. In n8n, go to Credentials, add a new "OpenRouter API" credential, and paste your key.
4. In the workflow, select this credential in all three "OpenRouter Chat Model" nodes.

Phil | Inforeole. Contact us to automate your processes.
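The Code nodes' extraction step can be sketched in plain JavaScript (the language of n8n Code nodes). This is a minimal illustration of pulling the title, meta description, headings, and word count from raw HTML with regexes; it is an assumption about the approach, not the template's actual node code:

```javascript
// Minimal sketch of extracting structured data from scraped HTML,
// similar to what an n8n Code node could do before the Aggregate node.
function extractPageData(html) {
  const pick = (re) => (html.match(re) || [])[1] || '';
  const title = pick(/<title[^>]*>([\s\S]*?)<\/title>/i).trim();
  const metaDescription = pick(
    /<meta[^>]+name=["']description["'][^>]+content=["']([^"']*)["']/i
  ).trim();
  // Collect all H1-H3 headings, stripping any inner tags.
  const headings = [...html.matchAll(/<h([1-3])[^>]*>([\s\S]*?)<\/h\1>/gi)]
    .map((m) => m[2].replace(/<[^>]+>/g, '').trim());
  // Rough word count: drop scripts, styles, and tags, then count tokens.
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, ' ')
    .replace(/<style[\s\S]*?<\/style>/gi, ' ')
    .replace(/<[^>]+>/g, ' ');
  const wordCount = (text.match(/\b\w+\b/g) || []).length;
  return { title, metaDescription, headings, wordCount };
}
```

In the workflow, each item's `json` would carry one page's `extractPageData` result into the Aggregate node.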
by Gabriel Santos
This workflow helps HR teams run smoother monthly Q&A sessions with employees.

**Who's it for**

HR teams and managers who want to centralize employee questions, avoid duplicates, and keep meetings focused.

**How it works**

1. Employees submit questions through a styled form.
2. Questions are stored in a database.
3. HR selects a date range to review collected questions.
4. An AI Agent deduplicates and clusters similar questions, then generates a meeting script in Markdown format.
5. The Agent automatically creates a Google Calendar event (with a Google Meet link) on the last Friday of the current month at 16:00-17:00.
6. The script is returned as a downloadable .txt file for HR to guide the session.

**Requirements**

- MySQL (or compatible DB) for storing questions
- Google Calendar credentials
- OpenAI (or another supported LLM provider)

**How to customize**

- Adjust the meeting day/time in the Set node expressions
- Change the database/table name in the MySQL nodes
- Modify the clustering logic in the AI Agent prompt
- Replace the form styling with your company's branding

This template ensures no repeated questions, keeps HR better prepared with a structured script, and automates meeting scheduling in just one click.
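The "last Friday of the current month at 16:00" rule can be computed with a short expression, for example in the Set node or a Code node. A sketch (the helper name is illustrative, not from the template):

```javascript
// Returns a Date set to the last Friday of the given month at 16:00 local time.
// year: full year; month: 0-based (0 = January).
function lastFridayAt16(year, month) {
  // Day 0 of the next month is the last day of this month.
  const d = new Date(year, month + 1, 0, 16, 0, 0, 0);
  // Walk backwards until we land on a Friday (getDay() === 5).
  while (d.getDay() !== 5) d.setDate(d.getDate() - 1);
  return d;
}
```

The event end time would simply be the same date at 17:00.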
by Robert SchrΓΆder
**Portrait Photo Upscaler**

**Workflow Overview**

Automated workflow that retrieves portrait photos from Airtable, upscales them using AI, and stores the enhanced images in Google Drive with an organized folder structure.

**Features**

- **Automated Folder Creation**: Creates timestamped folders in Google Drive
- **AI-Powered Upscaling**: Uses Replicate's Real-ESRGAN for 2x image enhancement
- **Batch Processing**: Handles multiple images efficiently with loop processing
- **Error Handling**: Continues processing even if individual images fail
- **Airtable Integration**: Retrieves images from specified database records

**Prerequisites**

Required credentials:

- **Google Drive OAuth2 API**: For folder creation and file uploads
- **Airtable Token API**: For accessing image records
- **Replicate HTTP Header Auth**: For the AI upscaling service

Airtable setup:

- Column name: PortraitFotoAuswahl
- Column type: Attachment field containing image files
- Required: Valid Airtable Base ID and Table ID

**Workflow Steps**

1. **Manual Trigger**: Initiates the workflow execution
2. **Create Folder**: Generates a new Google Drive folder with a custom name
3. **Get Airtable Record**: Retrieves the specified record containing portrait images
4. **Extract URLs**: Processes attachment URLs from the Airtable field
5. **Loop Processing**: Iterates through each image for individual processing
6. **AI Upscaling**: Enhances images using Replicate's Real-ESRGAN (2x scale)
7. **Download Results**: Retrieves processed images from Replicate
8. **Upload to Drive**: Stores enhanced images in the created folder

**Configuration**

Required inputs:

- **Folder Name**: Custom name for the Google Drive folder
- **Record ID**: Airtable record identifier containing images
- **Base ID**: (configurable)
- **Table ID**: (configurable)

Upscaling settings:

- **Scale Factor**: 2x (configurable)
- **Face Enhancement**: Disabled (configurable)
- **Model**: Real-ESRGAN v1.3

**Technical Details**

Node configuration:

- **Error Handling**: Continue on individual failures
- **Response Format**: File binary for image processing
- **Naming Convention**: LoRa{timestamp}.png
- **Batch Processing**: Automatic item splitting

API endpoints:

- **Replicate**: https://api.replicate.com/v1/predictions
- **Model Version**: nightmareai/real-esrgan:f121d640bd286e1fdc67f9799164c1d5be36ff74576ee11c803ae5b665dd46aa

**Use Cases**

- Portrait photography enhancement
- Batch image processing for portfolios
- Automated content preparation workflows
- Quality improvement for archived images

**Output**

- Enhanced images stored in Google Drive
- Organized folder structure with timestamps
- Preserved original filenames with processed variants
- Failed processes continue without stopping the workflow

**Template Benefits**

- **Scalable**: Processes unlimited images in a single execution
- **Reliable**: Built-in error handling and continuation logic
- **Organized**: Automatic folder management and file naming
- **Professional**: High-quality AI upscaling for commercial use
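The Replicate call above is a POST to the predictions endpoint with the pinned model version and the image inputs. A minimal sketch of building that request body (field names follow Replicate's public predictions API; the template's exact HTTP node configuration may differ):

```javascript
// Builds the JSON body for POST https://api.replicate.com/v1/predictions.
// `version` pins the Real-ESRGAN model; `input` carries the image URL and settings.
function buildReplicatePrediction(imageUrl, scale = 2, faceEnhance = false) {
  return {
    version:
      'f121d640bd286e1fdc67f9799164c1d5be36ff74576ee11c803ae5b665dd46aa',
    input: {
      image: imageUrl,           // publicly reachable URL of the source photo
      scale,                     // upscaling factor (2x in this template)
      face_enhance: faceEnhance, // disabled by default here
    },
  };
}
```

The request also needs an `Authorization: Bearer <token>` header, which the Replicate HTTP Header Auth credential supplies.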
by panyanyany
**Overview**

This n8n workflow automatically converts and enhances multiple photos into professional ID-style portraits using Gemini AI (Nano Banana). It processes images in batch from Google Drive, applies professional ID photo standards (proper framing, neutral background, professional attire), and outputs the enhanced photos back to Google Drive.

- Input: Google Drive folder with photos
- Output: Professional ID-style portraits in a Google Drive output folder

The workflow uses a simple form interface where users provide Google Drive folder URLs and an optional custom prompt. It automatically fetches all images from the input folder, processes each through the Defapi API with Google's nano-banana model, monitors generation status, and uploads finished photos to the output folder. Perfect for HR departments, recruitment agencies, or anyone needing professional ID photos in bulk.

**Prerequisites**

- A Defapi account and API key (Bearer token configured in n8n credentials): sign up at Defapi.org
- An active n8n instance with Google Drive integration
- Google Drive account with two public folders:
  - Input folder: contains photos to be processed (must be set to public/anyone with the link)
  - Output folder: where enhanced photos will be saved (must be set to public/anyone with the link)
- Photos with clear faces (headshots or upper body shots work best)

**Setup Instructions**

1. Prepare Google Drive Folders

- Create two Google Drive folders:
  - One for input photos (e.g., https://drive.google.com/drive/folders/xxxxxxx)
  - One for output photos (e.g., https://drive.google.com/drive/folders/yyyyyy)
- **Important**: Make both folders public (set sharing to "Anyone with the link can view"): right-click the folder, choose Share, and change "Restricted" to "Anyone with the link"
- Upload photos to the input folder (supported formats: .jpg, .jpeg, .png, .webp)

2. Configure n8n Credentials

- **Defapi API**: Add an HTTP Bearer Auth credential with your Defapi API token (credential name: "Defapi account")
- **Google Drive**: Connect your Google Drive OAuth2 account (credential name: "Google Drive account"). See https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/

3. Run the Workflow

- Execute the workflow in n8n
- Access the form submission URL
- Fill in the form:
  - Google Drive - Input Folder URL: paste your input folder URL
  - Google Drive - Output Folder URL: paste your output folder URL
  - Prompt (optional): customize the AI generation prompt or leave blank to use the default

4. Monitor Progress

The workflow will:

- Fetch all images from the input folder
- Process each image through the AI model
- Wait for generation to complete (checks every 10 seconds)
- Download and upload enhanced photos to the output folder

**Workflow Structure**

The workflow consists of the following nodes:

1. **On form submission** (Form Trigger): Collects Google Drive folder URLs and an optional prompt
2. **Search files and folders** (Google Drive): Retrieves all files from the input folder
3. **Code in JavaScript** (Code Node): Prepares image data and the prompt for the API request
4. **Send Image Generation Request to Defapi.org API** (HTTP Request): Submits a generation request for each image
5. **Wait for Image Processing Completion** (Wait Node): Waits 10 seconds before checking status
6. **Obtain the generated status** (HTTP Request): Polls the API for completion status
7. **Check if Image Generation is Complete** (IF Node): Checks if the status is not "pending"
8. **Format and Display Image Results** (Set Node): Formats the result with markdown and the image URL
9. **HTTP Request** (HTTP Request): Downloads the generated image file
10. **Upload file** (Google Drive): Uploads the enhanced photo to the output folder

**Default Prompt**

The workflow uses this professional ID photo generation prompt by default:

> Create a professional portrait suitable for ID documentation with proper spacing and composition.
> Framing: Include the full head, complete shoulder area, and upper torso. Maintain generous margins around the subject without excessive cropping.
> Outfit: Transform the existing attire into light business-casual clothing appropriate for the individual's demographics and modern style standards. Ensure the replacement garment appears natural, properly tailored, and complements the subject's overall presentation (such as professional shirt, refined blouse, contemporary blazer, or sophisticated layered separates).
> Pose & Gaze: Position shoulders square to the camera, maintaining perfect frontal alignment. Direct the gaze straight ahead into the lens at identical eye height, avoiding any angular deviation in vertical or horizontal planes.
> Expression: Display a professional neutral demeanor or subtle closed-lip smile that conveys confidence and authenticity.
> Background: Utilize a solid, consistent light gray photographic background (color code: #d9d9d9) without any pattern, texture, or tonal variation.
> Lighting & Quality: Apply balanced studio-quality illumination eliminating harsh contrast or reflective artifacts. Deliver maximum resolution imagery with precise focus and accurate natural skin color reproduction.

**Customization Tips for Different ID Photo Types**

Based on the default prompt structure, here are specific customization points for different use cases:

1. Passport & Visa Photos

Key requirements: most countries require white or light-colored backgrounds, a neutral expression, and no smile.

- **Background**: Change to "Plain white background (#ffffff)" or "Light cream background (#f5f5f5)"
- **Expression**: Change to "Completely neutral expression, no smile, mouth closed, serious but not tense"
- **Framing**: Add "Head size should be 70-80% of the frame height. Top of head to chin should be prominent"
- **Outfit**: Change to "Replace with dark formal suit jacket and white collared shirt" or "Navy blue blazer with light shirt"
- **Additional**: Add "No glasses glare, ears must be visible, no hair covering the face"

2. Corporate Employee ID / Work Badge

Key requirements: professional but approachable, company-appropriate attire.

- **Background**: Use a company color or standard #e6f2ff (light blue), #f0f0f0 (light gray)
- **Expression**: Keep "Soft closed-mouth smile, confident and approachable"
- **Outfit**: Change to a specific dress code:
  - Corporate: dark business suit with tie for men, blazer with blouse for women
  - Tech/Startup: smart casual polo shirt or button-down shirt without tie
  - Creative: clean, professional casual clothing that reflects company culture
- **Framing**: Use the default or add "Upper chest visible with company badge area clear"

3. University/School Student ID

Key requirements: friendly, youthful, appropriate for an educational setting.

- **Background**: Use school colors or "Light blue (#e3f2fd)", "Soft gray (#f5f5f5)"
- **Expression**: Change to "Friendly natural smile or pleasant neutral expression"
- **Outfit**: Change to "Replace with clean casual clothing: collared shirt, polo, or neat sweater. No logos or graphics"
- **Framing**: Keep the default
- **Additional**: Add "Youthful, fresh appearance suitable for educational environment"

4. Driver's License / Government ID

Key requirements: strict standards, neutral expression, specific background colors.

- **Background**: Check local requirements; often "White (#ffffff)", "Light gray (#d9d9d9)", or "Light blue (#e6f2ff)"
- **Expression**: Change to "Neutral expression, no smile, mouth closed, eyes fully open"
- **Outfit**: Use "Replace with everyday casual or business casual clothing: collared shirt or neat top"
- **Framing**: Add "Head centered, face taking up 70-80% of frame, ears visible"
- **Additional**: Add "No glasses (or non-reflective lenses), no headwear except religious purposes, natural hair"

5. Professional LinkedIn / Resume Photo

Key requirements: polished, confident, approachable.

- **Background**: Use "Soft gray (#d9d9d9)" or "Professional blue gradient (#e3f2fd to #bbdefb)"
- **Expression**: Keep "Confident, warm smile, professional yet approachable"
- **Outfit**: Change to:
  - Executive: premium business suit, crisp white shirt, tie optional
  - Professional: tailored blazer over collared shirt or elegant blouse
  - Creative: smart business casual with modern, well-fitted clothing
- **Framing**: Change to "Show head, full shoulders, and upper chest. Slightly more relaxed framing than a strict ID photo"
- **Lighting**: Add "Soft professional lighting with slight catchlight in eyes to appear engaging"

6. Medical/Healthcare Professional Badge

Key requirements: clean, trustworthy, professional medical appearance.

- **Background**: Use "Clinical white (#ffffff)" or "Soft medical blue (#e3f2fd)"
- **Expression**: Change to "Calm, reassuring expression with gentle smile"
- **Outfit**: Change to "Replace with clean white lab coat over professional attire" or "Medical scrubs in appropriate color (navy, ceil blue, or teal)"
- **Additional**: Add "Hair neatly pulled back if long, clean professional appearance, no flashy jewelry"

7. Gym/Fitness Membership Card

Key requirements: casual, recognizable, suitable for an athletic environment.

- **Background**: Use "Bright white (#ffffff)" or a gym brand color
- **Expression**: Change to "Natural friendly smile or neutral athletic expression"
- **Outfit**: Change to "Replace with athletic wear: sports polo, performance t-shirt, or athletic jacket in solid colors"
- **Framing**: Keep the default
- **Additional**: Add "Casual athletic appearance, hair neat"

**General Customization Parameters**

Background color options:

- White: #ffffff (passport, visa, formal government IDs)
- Light gray: #d9d9d9 (default, versatile for most purposes)
- Light blue: #e6f2ff (corporate, professional)
- Cream: #f5f5dc (warm professional)
- Soft blue-gray: #eceff1 (modern corporate)

Expression variations:

- **Strict Neutral**: "Completely neutral expression, no smile, mouth closed, serious but relaxed"
- **Soft Smile**: "Very soft closed-mouth smile, confident and natural" (default)
- **Friendly Smile**: "Warm natural smile with slight teeth showing, approachable and professional"
- **Calm Professional**: "Calm, composed expression with slight pleasant demeanor"

Clothing formality levels:

- **Formal**: "Dark suit, white dress shirt, tie for men / tailored suit or blazer with professional blouse for women"
- **Business Casual** (default): "Light business-casual outfit: clean shirt/blouse, lightweight blazer, or smart layers"
- **Smart Casual**: "Collared shirt, polo, or neat sweater in solid professional colors"
- **Casual**: "Clean, neat casual top: solid color t-shirt, casual button-down, or simple blouse"

Framing adjustments:

- **Tight Crop**: "Head and shoulders only, face fills 80% of frame" (passport style)
- **Standard Crop** (default): "Entire head, full shoulders, and upper chest with balanced space"
- **Relaxed Crop**: "Head, shoulders, and chest visible, with more background space for professional portraits"
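The Wait-then-check loop described in the workflow structure (Wait Node, status request, IF node) can be sketched as plain JavaScript. The `getStatus` callback stands in for the Defapi status request; the 10-second interval mirrors the description above, but the exact API response shape is an assumption:

```javascript
// Polls a status function until the job is no longer "pending",
// mirroring the Wait -> status check -> IF loop in the workflow.
async function pollUntilDone(getStatus, { intervalMs = 10000, maxTries = 60 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const result = await getStatus();
    if (result.status !== 'pending') return result; // finished or failed
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Image generation timed out');
}
```

In n8n this loop is expressed as nodes rather than code, but the control flow is the same.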
by Daniel Turgeman
**How it works**

1. A webhook receives a chatbot message or demo request form with an email address.
2. The email is validated and cleaned, then Lusha enriches the lead.
3. A priority score is calculated based on seniority, company size, and request type.
4. The lead is upserted into HubSpot; high-priority leads go to #urgent-demos on Slack, others to #demo-requests.

**Set up steps**

1. Install the Lusha community node.
2. Add your Lusha API, Slack, and HubSpot credentials.
3. Point your chatbot or demo form webhook to the n8n endpoint.
4. Customize the priority scoring thresholds and Slack channels.
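The priority score could be computed along these lines in an n8n Code node. The weights, thresholds, and routing cutoff here are illustrative assumptions, not the template's actual values:

```javascript
// Illustrative lead scoring: higher seniority, bigger companies,
// and explicit demo requests all raise the score.
function scoreLead({ seniority, companySize, requestType }) {
  let score = 0;
  const seniorityPoints = { 'c-level': 40, vp: 30, director: 20, manager: 10 };
  score += seniorityPoints[(seniority || '').toLowerCase()] || 0;
  if (companySize >= 1000) score += 30;
  else if (companySize >= 100) score += 20;
  else if (companySize >= 10) score += 10;
  if (requestType === 'demo') score += 30; // explicit demo requests are hottest
  // Route: 70+ goes to #urgent-demos, everything else to #demo-requests.
  return { score, channel: score >= 70 ? '#urgent-demos' : '#demo-requests' };
}
```

Adjusting the "customize thresholds" step then means editing `seniorityPoints` and the `70` cutoff.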
by Sahil Sunny
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow allows users to extract sitemap links using the ScrapingBee API. It only needs the domain name (e.g., www.example.com) and automatically checks robots.txt and sitemap.xml to find the links. It is also designed to run recursively when new .xml links are found while scraping the sitemap.

**How It Works**

1. Trigger: the workflow waits for a webhook request that contains domain=www.example.com.
2. It then looks for the robots.txt file; if not found, it checks sitemap.xml.
3. Once it finds XML links, it scrapes them recursively to extract the website links.
4. For each XML file, it first checks whether the response is a binary file and whether it is compressed XML:
   - If it is a text response, it directly runs one Code node that extracts normal website links and another that extracts XML links.
   - If it is an uncompressed binary, it extracts the text from the binary and then extracts the website links and XML links.
   - If it is a compressed binary, it first decompresses it, then extracts the text, the website links, and the XML links.
5. After extracting website links, it appends those links directly to a sheet.
6. After extracting XML links, it scrapes them recursively until it finds all website links.

When the workflow is finished, you will see the output in the links column of the Google Sheet added to the workflow.

**Set Up Steps**

1. Get your ScrapingBee API key here.
2. Create a new Google Sheet with an empty column named links. Connect to the sheet by signing in with your Google credential and add the link to your sheet.
3. Copy the webhook URL and send a GET request with domain as a query parameter. Example: curl "https://webhook_link?domain=scrapingbee.com"

**Customisation Options**

- If the website you are scraping is blocking your requests, you can try using a premium or stealth proxy in the Scrape robots.txt file, Scrape sitemap.xml file, and Scrape xml file nodes.
- If you wish to store the data in a different app/tool or store it as a file, just replace the Append links to sheet node with a relevant node.

**Next Steps**

If you wish to scrape the pages using the extracted links, you can implement a new workflow that reads the sheet or file (the output generated by this workflow), sends a request to ScrapingBee's HTML API for each link, and saves the returned data.

**NOTE**: Some heavy sitemaps could cause a crash if the workflow consumes more memory than is available in your n8n plan or self-hosted system. If this happens, we recommend either upgrading your plan or using a self-hosted instance with more memory.
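The two Code nodes that split sitemap entries into page links and nested .xml links can be sketched like this. A minimal illustration assuming standard `<loc>` elements; the template's actual node code may differ:

```javascript
// Splits a sitemap's <loc> entries into ordinary page links and
// nested sitemap (.xml / .xml.gz) links that should be scraped recursively.
function extractSitemapLinks(xmlText) {
  const locs = [...xmlText.matchAll(/<loc>\s*([^<]+?)\s*<\/loc>/gi)].map(
    (m) => m[1]
  );
  const isXml = (u) => /\.xml(\.gz)?$/i.test(u);
  return {
    xmlLinks: locs.filter(isXml),        // fed back into the workflow
    pageLinks: locs.filter((u) => !isXml(u)), // appended to the sheet
  };
}
```

Compressed sitemaps would be gunzipped (e.g., with Node's `zlib`) before this function sees the text.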
by Br1
**Who's it for**

This workflow is designed for developers, data engineers, and AI teams who need to migrate a Pinecone Cloud index into a Weaviate Cloud class index without recalculating the vectors (embeddings). It's especially useful if you are consolidating vector databases, moving from Pinecone to Weaviate for hybrid search, or preparing to deprecate Pinecone.

⚠️ Note: The dimensions of the two indexes must match.

**How it works**

The workflow automates migration by batching, formatting, and transferring vectors along with their metadata:

1. Initialization: uses Airtable to store the pagination token. The token starts with a record initialized as INIT (Name=INIT, Number=0).
2. Pagination handling: reads batches of vector IDs from the Pinecone index using /vectors/list, resuming from the last stored token.
3. Vector fetching: for each batch, retrieves embeddings and metadata fields from Pinecone via /vectors/fetch.
4. Data transformation: two Code nodes (Prepare Fetch Body and Format2Weaviate) structure the body of each HTTP request and map metadata into Weaviate-compatible objects.
5. Data loading: inserts embeddings and metadata into the target Weaviate class through its REST API.
6. State persistence: updates the pagination token in Airtable, ensuring the next run resumes from the correct point.
7. Scheduling: the workflow runs on a defined schedule (e.g., every 15 seconds) until all data has been migrated.

**How to set up**

Airtable setup:

1. Create a Base (e.g., Cycle) and a Table (e.g., NextPage). The table should have two columns:
   - Name (text): stores the pagination token.
   - Number (number): stores the row ID to update.
2. Initialize the first and only row with (INIT, 0).

Source and target configuration:

1. Make sure you have a Pinecone index and namespace with embeddings.
2. Manually create a target Weaviate Cluster and a target Weaviate Class with the same vector dimensions.
3. In the Parameters node of the workflow, configure the following values:

| Parameter | Description | Example Value |
|---|---|---|
| pineconeIndex | The name of your Pinecone index to read vectors from. | my-index |
| pineconeNamespace | The namespace inside the Pinecone index (leave empty if unused). | default |
| batchlimit | Number of records fetched per iteration. Higher = faster migration but heavier API calls. | 100 |
| weaviateCluster | REST endpoint of your Weaviate Cloud instance. | https://dbbqrc9itXXXXXXXXX.c0.europe-west3.gcp.weaviate.cloud |
| weaviateClass | Target class name in Weaviate where objects will be inserted. | MyClass |

Credentials:

- Configure Pinecone API credentials.
- Configure the Weaviate Bearer token.
- Configure the Airtable API key.

Activate: import the workflow into n8n, update the parameters, and start the schedule trigger.

**Requirements**

- Pinecone Cloud account with a configured index and namespace.
- Weaviate Cloud cluster with a class defined and matching vector dimensions.
- Airtable account and base to store pagination state.
- n8n instance with credentials for Pinecone, Weaviate, and Airtable.

**How to customize the workflow**

- Adjust the batchlimit parameter to control performance (higher values = fewer API calls, but heavier requests).
- Adapt the Format2Weaviate Code node if you want to change or expand the metadata stored.
- Replace Airtable with another persistence store (e.g., Google Sheets, PostgreSQL) if preferred.
- Extend the workflow to send migration progress updates via Slack, email, or another channel.
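The Format2Weaviate step can be illustrated as follows: it maps a Pinecone /vectors/fetch response into the body for Weaviate's batch objects endpoint. This is a sketch of the mapping, not the template's exact node code, and the class name comes from the weaviateClass parameter:

```javascript
// Maps a Pinecone /vectors/fetch response into the body for
// Weaviate's POST /v1/batch/objects endpoint.
function format2Weaviate(fetchResponse, weaviateClass) {
  const objects = Object.values(fetchResponse.vectors || {}).map((v) => ({
    class: weaviateClass,
    // Carry the original embedding over unchanged - no recalculation.
    vector: v.values,
    // Pinecone metadata becomes Weaviate properties; keep the source id too.
    properties: { ...(v.metadata || {}), pineconeId: v.id },
  }));
  return { objects };
}
```

Because the vectors are copied as-is, the dimension check noted above is essential: Weaviate will reject vectors whose length does not match the class configuration.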
by WhySoSerious
**What it is**

This workflow listens for new tickets in HaloPSA via webhook, generates a professional AI-powered summary of the issue using Gemini (or another LLM), and posts it back into the ticket as a private note. It's designed for MSPs using HaloPSA who want to reduce triage time and give engineers a clear head start on each support case.

**Features**

- Webhook trigger from HaloPSA on new ticket creation
- Optional team filter (skip Sales or other queues)
- Extracts ticket subject, details, and ID
- Builds a structured AI prompt with MSP context (NinjaOne, M365, CIPP)
- Processes via Gemini or another LLM
- Cleans and parses JSON output (summary, next step, troubleshooting)
- Generates a branded HTML private note (logo + styled sections)
- Posts the note back into HaloPSA via API

**Setup**

1. Webhook: replace WEBHOOK_PATH and paste the generated Production URL into your HaloPSA webhook.
2. Guard filter (optional): change teamName or teamId to skip tickets from specific queues.
3. Branding: replace YOUR_LOGO_URL and Your MSP Brand in the HTML note builder.
4. HaloPSA API: in the HTTP node, replace YOUR_HALO_DOMAIN and add your Halo API token (Bearer auth).
5. LLM credentials: set your API key in the Gemini / OpenAI node credentials section.
6. (Optional) Adjust the AI prompt with your own tools or processes.

**Requirements**

- HaloPSA account with API enabled
- Gemini / OpenAI (or other LLM) API key
- SMTP (optional) if you want to extend with notifications

**Workflow overview**

`Webhook → Guard → Extract Ticket → Build AI Prompt → AI Agent (Gemini) → Parse JSON → Build HTML Note → Post to HaloPSA`
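The "cleans and parses JSON output" step typically has to strip the markdown fences an LLM often wraps around its JSON before parsing. A hedged sketch of that cleanup, not the template's exact node code:

```javascript
// Strips markdown code fences an LLM may wrap around its JSON, then parses it.
// Throws if the cleaned text still is not valid JSON.
function parseLlmJson(raw) {
  const cleaned = raw
    .replace(/^\s*```(?:json)?\s*/i, '') // leading fence, with or without "json"
    .replace(/\s*```\s*$/, '')           // trailing fence
    .trim();
  return JSON.parse(cleaned);
}
```

The parsed object (summary, next step, troubleshooting) then feeds the HTML note builder.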
by PDF Vector
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Transform Research Papers into a Searchable Knowledge Graph**

This workflow automatically builds and maintains a comprehensive knowledge graph from academic papers, enabling researchers to discover connections between concepts, track research evolution, and perform semantic searches across their field of study. By combining PDF Vector's paper parsing capabilities with GPT-4's entity extraction and Neo4j's graph database, this template creates a powerful research discovery tool.

**Target Audience & Problem Solved**

This template is designed for:

- **Research institutions** building internal knowledge repositories
- **Academic departments** tracking research trends and collaborations
- **R&D teams** mapping technology landscapes
- **Libraries and archives** creating searchable research collections

It solves the problem of information silos in academic research by automatically extracting and connecting key concepts, methods, authors, and findings across thousands of papers.

**Prerequisites**

- n8n instance with the PDF Vector node installed
- OpenAI API key for GPT-4 access
- Neo4j database instance (local or cloud)
- Basic understanding of graph databases
- At least 100 API credits for PDF Vector (processes ~50 papers)

**Step-by-Step Setup Instructions**

1. Configure PDF Vector credentials
   - Navigate to Credentials in n8n
   - Add new PDF Vector credentials with your API key
   - Test the connection to ensure it's working
2. Set up the Neo4j database
   - Install Neo4j locally or create a cloud instance at Neo4j Aura
   - Note your connection URI, username, and password
   - Create database constraints for better performance:
     - CREATE CONSTRAINT paper_id IF NOT EXISTS ON (p:Paper) ASSERT p.id IS UNIQUE;
     - CREATE CONSTRAINT author_name IF NOT EXISTS ON (a:Author) ASSERT a.name IS UNIQUE;
     - CREATE CONSTRAINT concept_name IF NOT EXISTS ON (c:Concept) ASSERT c.name IS UNIQUE;
3. Configure the OpenAI integration
   - Add OpenAI credentials in n8n
   - Ensure you have GPT-4 access (GPT-3.5 can be used with reduced accuracy)
   - Set appropriate rate limits to avoid API throttling
4. Import and configure the workflow
   - Import the template JSON into n8n
   - Update the search query in the "PDF Vector - Fetch Papers" node to your research domain
   - Adjust the schedule trigger frequency based on your needs
   - Configure the PostgreSQL connection for logging (optional)
5. Test with sample papers
   - Manually trigger the workflow
   - Monitor the execution for any errors
   - Check the Neo4j browser to verify that nodes and relationships are created
   - Adjust entity extraction prompts if needed for your domain

**Implementation Details**

The workflow operates in several stages:

1. Paper Discovery: uses PDF Vector's academic search to find relevant papers
2. Content Parsing: leverages LLM-enhanced parsing for accurate text extraction
3. Entity Extraction: GPT-4 identifies concepts, methods, datasets, and relationships
4. Graph Construction: creates nodes and relationships in Neo4j
5. Statistics Tracking: logs processing metrics for monitoring

**Customization Guide**

- Adjusting entity types: edit the GPT-4 prompt in the "Extract Entities" node to include domain-specific entities, for example algorithms, datasets, institutions, or funding sources.
- Modifying relationship types: extend the "Build Graph Structure" node to create custom relationships, for example COLLABORATES_WITH (between authors), EXTENDS (between papers), or FUNDED_BY (paper to funding source).
- Changing search scope: modify the providers array to include/exclude databases, adjust the year range for a historical or recent focus, and add keyword filters for specific subfields.
- Scaling considerations: for large-scale processing (>1000 papers/day), implement batching; use Redis for deduplication across runs; consider incremental updates to avoid reprocessing.

**Knowledge Base Features**

- Automatic concept extraction with GPT-4
- Research timeline tracking
- Author collaboration networks
- Topic evolution visualization
- Semantic search interface via Neo4j

**Components**

- Paper Ingestion: continuous monitoring and parsing
- Entity Extraction: identify key concepts, methods, datasets
- Relationship Mapping: connect papers, authors, concepts
- Knowledge Graph: store in graph database
- Search Interface: query by concept, author, or topic
- Visualization: interactive knowledge exploration
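The "Build Graph Structure" step can be illustrated with a small JavaScript sketch that turns one paper's extracted entities into node and relationship records ready for Cypher MERGE statements. The record shapes and relationship names here are illustrative assumptions, not the template's actual node code:

```javascript
// Turns one paper's extracted entities into graph nodes and relationships,
// ready to be written to Neo4j with MERGE statements.
function buildGraphStructure(paper) {
  const nodes = [
    { label: 'Paper', props: { id: paper.id, title: paper.title } },
    ...paper.authors.map((name) => ({ label: 'Author', props: { name } })),
    ...paper.concepts.map((name) => ({ label: 'Concept', props: { name } })),
  ];
  const relationships = [
    ...paper.authors.map((name) => ({
      type: 'AUTHORED', from: { Author: name }, to: { Paper: paper.id },
    })),
    ...paper.concepts.map((name) => ({
      type: 'DISCUSSES', from: { Paper: paper.id }, to: { Concept: name },
    })),
  ];
  return { nodes, relationships };
}
```

Using MERGE (rather than CREATE) together with the uniqueness constraints above keeps reruns idempotent.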
by Hunyao
Upload a PDF and instantly get a neatly formatted Google Doc with all the readable text: no manual copy-paste, no messy line breaks.

**What this workflow does**

- Accepts PDF uploads via a public form
- Sends the file to Mistral Cloud for high-accuracy OCR
- Detects and merges page images with their extracted text
- Cleans headers, footers, broken lines, and noise
- Creates a new Google Doc in your chosen Drive folder
- Writes the polished markdown text into the document

**What you need**

- Mistral Cloud API key with OCR access
- Google Docs & Drive credentials connected in n8n
- Drive folder ID for new documents
- A PDF file to process (up to 100 MB)

**Setup**

1. Import the workflow into n8n and activate credentials.
2. In Trigger • Form Submission, copy the webhook URL and share it or embed it.
3. In Create • Google Doc, replace the default folder ID with yours.
4. Fill in the Mistral API key under Mistral Cloud API credentials.
5. Save and activate the workflow.
6. Visit the form, upload a PDF, name your future doc, and submit.
7. Open Drive to view your newly generated, clean Google Doc.

**Example use cases**

- Convert annual reports into editable text for analysis.
- Extract readable content from scan-only invoices for bookkeeping.
- Turn magazine PDFs into draft blog posts.
- Digitize lecture handouts for quick search and annotation.
- Convert image-heavy landing pages / advertorials into editable text for AI to analyze structure and content.
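The cleanup step (headers, footers, broken lines) can be sketched as a small text post-processor over the per-page OCR output. The heuristics below are illustrative assumptions, not the template's actual node code:

```javascript
// Drops lines that repeat on every page (likely running headers/footers)
// and re-joins lines that were broken mid-sentence in the OCR'd text.
function cleanOcrText(pages) {
  // Count how often each exact line appears across pages.
  const freq = {};
  for (const page of pages)
    for (const line of page.split('\n'))
      if (line.trim()) freq[line.trim()] = (freq[line.trim()] || 0) + 1;
  const isBoilerplate = (line) =>
    pages.length > 1 && freq[line.trim()] === pages.length;
  return pages
    .map((page) =>
      page
        .split('\n')
        .filter((l) => l.trim() && !isBoilerplate(l))
        .join('\n')
        // Re-join lines ending mid-sentence (no terminal punctuation).
        .replace(/([a-z,])\n(?=[a-z])/g, '$1 ')
    )
    .join('\n\n');
}
```

Real documents need more care (page numbers, hyphenation), but this shows the shape of the pass.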
by Rahul Joshi
Description
Automatically convert structured Slack messages into Jira issues with parsed titles, descriptions, and priorities. This workflow also downloads file attachments from Slack (e.g., screenshots, logs, or documents) and uploads them directly into the created Jira issue. It then confirms success back to the Slack channel, ensuring transparency and seamless collaboration.

What This Template Does
- Monitors a designated Slack channel for new issue reports.
- Parses Slack message text with regex to extract title, description, priority, and type.
- Creates a new Jira issue with the structured data.
- Detects and processes attachments, splitting multiple files into batches.
- Downloads files from Slack using secure URLs and Slack bot authentication.
- Uploads attachments directly into the created Jira issue.
- Sends a Slack confirmation with the Jira issue key, link, and summary details.

Key Benefits
- Eliminates manual Jira ticket creation from Slack messages.
- Preserves critical context by attaching screenshots, logs, and documentation.
- Ensures structured, standardized issue reporting across teams.
- Provides instant Slack confirmation with direct Jira links.
- Handles multiple attachments gracefully with batch processing.

Features
- Slack Trigger: monitors specific channels for new issue messages.
- Message Parsing Engine: extracts title, description, priority, and type using regex with fallback logic.
- Jira Integration: creates structured Jira issues with proper fields (summary, description, priority, type).
- Attachment Handling: splits, downloads, and uploads Slack files into Jira automatically.
- Slack Confirmation: sends formatted success messages with clickable Jira links.
- Robust Data Handling: supports rich text, multiple files, and smart mapping of Slack priorities to Jira.

Requirements
- n8n instance (cloud or self-hosted).
- Slack Bot API credentials with channels:history, files:read, and chat:write permissions.
- Jira Software Cloud API credentials with project and issue creation permissions.
- Pre-configured Slack channel for reporting issues.
- Jira project set up with supported issue types (bug, task, feature, etc.).

Target Audience
- Software development teams managing issue intake from Slack.
- QA and testing teams reporting bugs directly from Slack.
- IT support teams needing structured Jira issues with attachments.
- Agile teams looking for seamless Slack-to-Jira integration.
- Remote teams requiring real-time visibility into Jira issue creation.

Step-by-Step Setup Instructions
1. Connect Slack and Jira credentials in n8n.
2. Configure the Slack channel ID to listen for issue reports.
3. Map the Jira project and issue type IDs in the "Create Jira Issue" node.
4. Customize the parsing logic for your message formats (default: Title: X, Description: Y, Priority: Z).
5. Ensure Slack files can be downloaded with your bot token (files:read scope).
6. Test with a sample message containing a title, description, and attachment.
7. Deploy and monitor Slack-to-Jira issue creation in real time.
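The default "Title: X / Description: Y / Priority: Z" format described above can be parsed with a few regexes plus fallbacks. A minimal sketch, assuming that message format; the Jira priority names in the map are placeholders to adjust to your own project's priority scheme.

```javascript
// Sketch: extract issue fields from a Slack message, with fallbacks
// when a label is missing.
function parseIssue(text) {
  const grab = (label, fallback) => {
    const m = text.match(new RegExp(label + ':\\s*(.+)', 'i'));
    return m ? m[1].trim() : fallback;
  };
  return {
    title: grab('Title', text.split('\n')[0]), // fallback: first line
    description: grab('Description', text),    // fallback: whole message
    priority: grab('Priority', 'Medium'),      // fallback: Medium
  };
}

// Map free-text Slack priorities to Jira priority names; the names
// here are placeholders for your Jira project's scheme.
const PRIORITY_MAP = { critical: 'Highest', high: 'High', medium: 'Medium', low: 'Low' };
function toJiraPriority(p) {
  return PRIORITY_MAP[(p || '').trim().toLowerCase()] || 'Medium';
}

const issue = parseIssue('Title: Login fails\nDescription: 500 error on submit\nPriority: high');
console.log(issue.title, toJiraPriority(issue.priority)); // "Login fails High"
```

Because `.` does not match newlines in JavaScript regexes, each `label: value` pair is captured up to the end of its own line.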
by Parth Pansuriya
Analyze Ingredient Photos using Telegram & Gemini AI

Who's it for
- Skincare enthusiasts who want to know if a product is safe.
- Food or supplement buyers checking ingredient safety.
- Parents reviewing kids' products.
- Anyone wanting quick ingredient analysis before using or buying a product.

How it works / What it does
1. Telegram Input: the user sends a photo of a product label or a text list of ingredients.
2. Photo Handling: the workflow checks if the message contains a photo. If yes, it retrieves the file and extracts the ingredients using Google Gemini AI. If no, it handles text, greetings, and off-topic queries.
3. Caption Branching: with a caption, the bot gives a Use / Do Not Use recommendation plus the reason; without a caption, it gives Advantages, Disadvantages, Recommended For, and Not Recommended For (3 points each).
4. Response on Telegram: sends a friendly, structured response back to the user.

How to set up
1. Import this workflow JSON into n8n.
2. Create and connect a Telegram bot via BotFather, then paste the API token into the Telegram credentials.
3. Add Google Gemini (PaLM) API credentials inside n8n.
4. Activate the workflow and send your first product photo via Telegram!

Requirements
- n8n instance (self-hosted or cloud).
- Telegram bot token.
- Google Gemini API credentials.

How to customize the workflow
- **Change AI Instructions**: update the system messages to tweak the tone (more technical, casual, or medical).
- **Adjust Output Format**: edit the Telegram response nodes for shorter or longer answers.
- **Expand Analysis**: add extra categories (e.g., allergens, environmental impact).
- **Multi-language Support**: modify the prompts to output in the user's preferred language.
- **Add Database Logging**: connect to MySQL or PostgreSQL to save conversations (user queries + AI responses).
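The photo-and-caption branching above can be sketched as a single routing function. This mirrors the Telegram message shape (`photo` is an array of photo sizes, `caption` is optional); the branch names are illustrative, not the workflow's node names.

```javascript
// Sketch: decide which branch handles an incoming Telegram message.
function routeMessage(message) {
  if (!message.photo || message.photo.length === 0) {
    return 'text-handler';      // greetings, text ingredient lists, off-topic
  }
  return message.caption
    ? 'recommendation-branch'   // Use / Do Not Use + reason
    : 'pros-cons-branch';       // advantages / disadvantages breakdown
}

console.log(routeMessage({ photo: [{ file_id: 'abc' }], caption: 'Safe for kids?' }));
// "recommendation-branch"
```

In n8n, the same decision is typically made with an If or Switch node testing `{{ $json.message.photo }}` and `{{ $json.message.caption }}`.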