by Oskar
This workflow uses an AI agent to navigate through a page and retrieve a specific type of information (in this example: social media profile links). The agent is equipped with two tools:

**Text tool**: retrieves all the text from the page.
**URLs tool**: extracts all links from the page.

You can edit the prompt and the JSON schema connected to the agent to return data other than social media profile links. This workflow uses Supabase as storage (input/output); feel free to change it to any other database of your choice. See this workflow in action in my YouTube video.

How it works
The workflow uses the input URL (website) as a starting point for retrieving data (e.g. example.com). Using the URLs tool, the agent retrieves all links from the page and navigates to them. For example, if you want to retrieve contact information, the agent will try to find a subpage that might contain it (e.g. example.com/contact) and extract the information using the text tool.

Set up steps
Connect a database with input data (website addresses), or pin sample data to the trigger node.
Configure the crawling agent to retrieve the desired data (e.g. modify the prompt and/or parsing schema).
Set credentials for OpenAI.
Optionally: split the agent tools into separate workflows.

If you like this workflow, please subscribe to my YouTube channel and/or my newsletter.
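The JSON schema mentioned above controls what the agent returns. Below is a minimal sketch of what such a schema could look like when supplied to a structured output parser; the property names are illustrative assumptions, not the template's actual schema.

```javascript
// Illustrative output schema for the agent (property names are assumptions, not the template's exact schema).
// Swap or add properties to return other kinds of data instead of social media profile links.
const socialLinksSchema = {
  type: "object",
  properties: {
    website:   { type: "string", description: "Input URL that was crawled" },
    facebook:  { type: "string", description: "Facebook profile URL, empty string if not found" },
    instagram: { type: "string", description: "Instagram profile URL, empty string if not found" },
    linkedin:  { type: "string", description: "LinkedIn company or profile URL, empty string if not found" },
    x_twitter: { type: "string", description: "X (Twitter) profile URL, empty string if not found" }
  },
  required: ["website"]
};
```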
by Hardikkumar
This workflow automates the entire process of creating SEO-optimized meta titles and descriptions. It analyzes your webpage, spies on top-ranking competitors for the same keywords, and then uses a multi-step AI process to generate compelling, length-constrained meta tags.

How It Works
This workflow operates in three phases for each URL you provide:
Phase 1: Self-Analysis. When you add a URL to a Google Sheet with the status "New", the workflow scrapes your page's content. The first AI then performs a deep analysis to identify the page's primary keyword, semantic keyword cluster, search intent, and target audience.
Phase 2: Competitor Intelligence. The workflow takes your primary keyword and performs a live Google search. A custom code block intelligently filters the search results to identify true competitors. A second AI analyzes their meta titles and descriptions to find common patterns and successful strategies.
Phase 3: Master Generation & Update. The final AI synthesizes all gathered intelligence (your page's data and the competitors' winning patterns) to generate a new, optimized meta title and description. It then writes this new data back to your Google Sheet and updates the status to "Generated".

Setup Instructions
You should be able to set up this workflow in about 10-15 minutes.

Prerequisites
You will need the following accounts and API keys:
A Google Account with access to Google Sheets.
A Google AI / Gemini API key.
A SerpApi key for Google search data.
A ScrapingDog API key for reliable website scraping.

Configuration
Google Sheet Setup: Create a new Google Sheet. The workflow requires the following columns: URL, Status, Current Meta Title, Current Meta Description, Generated Meta Title, Generated Meta Description, and Ranking Factor.
Add Credentials:
Google Sheets Nodes: Connect your Google account credentials to the Google Sheets Trigger & Google Sheets nodes.
Google Gemini Nodes: Add your Google Gemini API key to the credentials for all three Google Gemini Chat Model nodes.
Scrape Website Node: In this HTTP Request node, go to Query Parameters and replace <your-api-key> with your ScrapingDog API key.
Googl SERP Node: In this HTTP Request node, go to Query Parameters and replace <your-api-key> with your SerpApi API key.
Configure Google Sheets Nodes:
Copy the Document ID from your Google Sheet's URL.
Paste this ID into the "Document ID" field in the following nodes: Google Sheets Trigger, Get row(s) in sheet1, and Update row in sheet.
In each of those nodes, select the correct sheet name from the "Sheet Name" dropdown.

Activate Workflow
Save and activate the workflow. To run it, simply add a new row to your Google Sheet containing the URL you want to process and set the "Status" column to New.
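The competitor-filtering step in Phase 2 runs in a Code node. A minimal sketch of what such a filter might look like is shown below; the field names (organic_results, link, snippet) follow SerpApi's typical response shape, and the exclusion list and input field names are illustrative assumptions rather than the template's actual logic.

```javascript
// Hypothetical sketch of the competitor-filtering Code node (not the template's exact code).
// Assumes the incoming item holds a SerpApi-style response under `organic_results`
// and the page being optimized under `pageUrl` (both names are assumptions).
const item = $input.first().json;
const ownDomain = new URL(item.pageUrl).hostname.replace(/^www\./, '');
const excluded = ['wikipedia.org', 'youtube.com', 'reddit.com', 'quora.com']; // illustrative list

const competitors = (item.organic_results || [])
  .filter(r => r.link && r.title)
  .filter(r => {
    const host = new URL(r.link).hostname.replace(/^www\./, '');
    // Drop our own page and non-commercial reference sites.
    return host !== ownDomain && !excluded.some(d => host.endsWith(d));
  })
  .slice(0, 5) // keep the top few true competitors
  .map(r => ({ title: r.title, snippet: r.snippet, link: r.link }));

return competitors.map(c => ({ json: c }));
```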
by Custom Workflows AI
Introduction The "Automatic Weekly Digital PR Stories Suggestions" workflow is a sophisticated automated system designed to identify trending news stories on Reddit, analyze public sentiment through comment analysis, extract key information from source articles, and generate strategic angles for potential digital PR campaigns. This workflow leverages the power of social media trends, natural language processing, and AI-driven analysis to deliver curated, sentiment-analyzed news opportunities for PR professionals. Operating on a weekly schedule, the workflow searches Reddit for posts related to specified topics, filters them based on engagement metrics, and performs a deep analysis of both the content and public reaction. It then generates comprehensive reports that include story opportunities, audience insights, and strategic recommendations. These reports are automatically compiled, stored in Google Drive, and shared with team members via Mattermost for immediate collaboration. This workflow solves the time-consuming process of manually monitoring social media for trending stories, analyzing public sentiment, and identifying PR opportunities. By automating these tasks, PR professionals can focus on strategy development and execution rather than spending hours on research and analysis. Who is this for? This workflow is designed for digital PR professionals, content marketers, communications teams, and media relations specialists who need to stay on top of trending stories and public sentiment to develop timely and effective PR campaigns. It's particularly valuable for: PR agencies managing multiple clients across different industries In-house PR teams needing to identify media opportunities quickly Content marketers looking for trending topics to create timely content Communications professionals monitoring public perception of industry news Users should have basic familiarity with n8n workflows and the PR strategy development process. While technical knowledge of the integrated APIs is not required to use the workflow, some understanding of Reddit, sentiment analysis, and PR campaign development would be beneficial for interpreting and acting on the generated reports. What problem is this workflow solving? Digital PR professionals face several challenges that this workflow addresses: Information Overload: Manually monitoring social media platforms for trending stories is time-consuming and often results in missed opportunities. Sentiment Analysis Complexity: Understanding public perception of news stories requires reading through hundreds of comments and identifying patterns, which is labor-intensive and subjective. Content Extraction: Visiting multiple news sources to read and analyze articles takes significant time. Strategic Angle Development: Identifying unique PR angles that leverage trending stories and public sentiment requires synthesizing large amounts of information. Team Collaboration: Sharing findings and insights with team members in a structured format can be cumbersome. By automating these processes, the workflow enables PR professionals to quickly identify trending stories with PR potential, understand public sentiment, and develop strategic angles based on comprehensive analysis, all while maintaining a structured approach to team collaboration. 
What this workflow does

Overview

The workflow automatically identifies trending posts on Reddit related to specified topics, analyzes both the content of linked articles and public sentiment from comments, and generates comprehensive PR strategy reports. These reports include story opportunities, audience insights, and strategic recommendations based on the analysis. The final reports are compiled, stored in Google Drive, and shared with team members via Mattermost.

Process

1. Topic Selection and Reddit Search: The workflow starts with a list of topics specified in the "Set Data" node. It searches Reddit for posts related to these topics, and posts are filtered based on upvotes and other criteria to focus on trending content.
2. Comment Analysis: For each post, the workflow retrieves comments and extracts the top 30 comments based on score (see the sketch after this section). Using Claude AI, it analyzes the comments to understand overall sentiment, dominant narratives, audience insights, and PR implications.
3. Content Analysis: The workflow extracts the content of the linked article using Jina AI and analyzes it to identify core story elements, technical aspects, narrative opportunities, and viral elements.
4. PR Strategy Development: Based on the combined analysis of comments and content, the workflow generates first-mover story opportunities, trend-amplifier story ideas, priority rankings, an execution roadmap, and strategic recommendations.
5. Report Generation and Distribution: The workflow compiles comprehensive reports for each post. Reports are converted to text files, all files are compressed into a ZIP archive, the archive is uploaded to Google Drive, and a link to the archive is shared with team members via Mattermost.

Setup

To set up this workflow, follow these steps:
1. Import the Workflow: Download the workflow JSON file and import it into your n8n instance.
2. Configure API Credentials:
- Reddit: Add a new "Reddit OAuth2 API" credential by following the guide at https://docs.n8n.io/integrations/builtin/credentials/reddit/
- Anthropic: Add a new "Anthropic Account" credential by following the guide at https://docs.n8n.io/integrations/builtin/credentials/anthropic/
- Google Drive: Add a new "Google Drive OAuth2 API" credential by following the guide at https://docs.n8n.io/integrations/builtin/credentials/google/oauth-single-service/
3. Configure the "Set Data" Node: Set your topics of interest (one per line) and add your Jina API key (obtain it from https://jina.ai/api-dashboard/key-manager).
4. Configure the Mattermost Node: Update your Mattermost instance URL, set your Webhook ID and Channel, and follow the guide at https://developers.mattermost.com/integrate/webhooks/incoming/ for webhook setup.
5. Adjust the Schedule (Optional): The workflow is set to run every Monday at 6am. Modify the "Schedule Trigger" node if you need a different schedule.
6. Test the Workflow: Run the workflow manually to ensure all connections are working properly, and check the output to verify that the reports are being generated correctly.

How to customize this workflow to your needs

This workflow can be customized in several ways to better suit your specific requirements:
- Topic Selection: Modify the topics in the "Set Data" node to focus on industries or subjects relevant to your PR strategy. Add multiple topics to cover different client interests or market segments.
- Filtering Criteria: Adjust the "Upvotes Requirement Filtering" node to change the minimum upvotes threshold, and modify the filtering conditions to include or exclude certain types of posts.
- Analysis Parameters: Customize the prompts in the "Comments Analysis," "News Analysis," and "Stories Report" nodes to focus on specific aspects of the content or comments. Adjust the temperature settings in the Anthropic Chat Model nodes to control the creativity of the AI responses.
- Report Format: Modify the "Set Final Report" node to change the structure or content of the final reports, and add or remove sections based on your specific reporting needs.
- Distribution Method: Replace or supplement the Mattermost notification with email notifications, Slack messages, or other communication channels. Add additional storage options beyond Google Drive.
- Schedule Frequency: Change the "Schedule Trigger" node to run the workflow more or less frequently, or set up multiple triggers for different topics or clients.
- Integration with Other Systems: Add nodes to integrate with your CRM, content management system, or project management tools, or create connections to automatically populate content calendars or task management systems.
by Muhammad Asadullah
Document Chat Bot with Automated RAG System

This workflow creates a conversational assistant that can answer questions based on your Google Drive documents. It automatically processes various file types and uses Retrieval-Augmented Generation (RAG) to provide accurate answers based on your document content.

How It Works
- Monitors Google Drive for New Documents: Automatically detects when files are created or updated in designated folders
- Processes Multiple File Types: Handles PDFs, Excel spreadsheets, and Google Docs
- Builds a Knowledge Base: Converts documents into searchable vector embeddings stored in Supabase
- Provides a Chat Interface: Users can ask questions about their documents through a web interface
- Retrieves Relevant Information: Uses advanced RAG techniques to find and present the most relevant information

Setup Steps (Estimated time: 25-30 minutes)
- API Credentials: Connect your OpenAI API key for text processing and embeddings
- Google Drive Integration: Set up Google Drive triggers to monitor specific folders
- Supabase Configuration: Configure the Supabase vector database for document storage
- Chat Interface Setup: Deploy the web-based chat interface using the provided webhook

The workflow automatically chunks documents into manageable segments, generates embeddings, and stores them in a vector database for efficient retrieval. When users ask questions, the system finds the most relevant document sections and uses them to generate accurate, contextual responses.
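To make the chunking step concrete, here is a minimal sketch of how a document could be split into overlapping segments before embedding. The chunk size, overlap, and input field name are illustrative assumptions; the template's own text-splitter node may use different settings.

```javascript
// Minimal chunking sketch (illustrative; the template's text-splitter settings may differ).
// Splits a document's text into overlapping character windows before embedding.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

// Each chunk becomes one item to embed and store in the vector database.
const documentText = $input.first().json.text; // assumed field name
return chunkText(documentText).map((chunk, i) => ({
  json: { chunkIndex: i, content: chunk },
}));
```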
by Incrementors
LinkedIn & Indeed Job Scraper with Bright Data & Google Sheets Export Overview This n8n workflow automates the process of scraping job listings from both LinkedIn and Indeed platforms simultaneously, combining results, and exporting data to Google Sheets for comprehensive job market analysis. It integrates with Bright Data for professional web scraping, Google Sheets for data storage, and provides intelligent status monitoring with retry mechanisms. Workflow Components 1. ๐ Trigger Input Form Type**: Form Trigger Purpose**: Initiates the workflow with user-defined job search criteria Input Fields**: City (required) Job Title (required) Country (required) Job Type (optional dropdown: Full-Time, Part-Time, Remote, WFH, Contract, Internship, Freelance) Function**: Captures user requirements to start the dual-platform job scraping process 2. ๐ง Format Input for APIs Type**: Code Node (JavaScript) Purpose**: Prepares and formats user input for both LinkedIn and Indeed APIs Processing**: Standardizes location and job title formats Creates API-specific input structures Generates custom output field configurations Function**: Ensures compatibility with both Bright Data datasets 3. ๐ Start Indeed Scraping Type**: HTTP Request (POST) Purpose**: Initiates Indeed job scraping via Bright Data Endpoint**: https://api.brightdata.com/datasets/v3/trigger Parameters**: Dataset ID: gd_lpfll7v5hcqtkxl6l Include errors: true Type: discover_new Discover by: keyword Limit per input: 2 Custom Output Fields**: jobid, company_name, job_title, description_text location, salary_formatted, company_rating apply_link, url, date_posted, benefits 4. ๐ Start LinkedIn Scraping Type**: HTTP Request (POST) Purpose**: Initiates LinkedIn job scraping via Bright Data (parallel execution) Endpoint**: https://api.brightdata.com/datasets/v3/trigger Parameters**: Dataset ID: gd_l4dx9j9sscpvs7no2 Include errors: true Type: discover_new Discover by: keyword Limit per input: 2 Custom Output Fields**: job_posting_id, job_title, company_name, job_location job_summary, job_employment_type, job_base_pay_range apply_link, url, job_posted_date, company_logo 5. ๐ Check Indeed Status Type**: HTTP Request (GET) Purpose**: Monitors Indeed scraping job progress Endpoint**: https://api.brightdata.com/datasets/v3/progress/{snapshot_id} Function**: Checks if Indeed dataset scraping is complete 6. ๐ Check LinkedIn Status Type**: HTTP Request (GET) Purpose**: Monitors LinkedIn scraping job progress Endpoint**: https://api.brightdata.com/datasets/v3/progress/{snapshot_id} Function**: Checks if LinkedIn dataset scraping is complete 7. โฑ๏ธ Wait Nodes (60 seconds each) Type**: Wait Node Purpose**: Implements intelligent polling mechanism Duration**: 1 minute Function**: Pauses workflow before rechecking scraping status to prevent API overload 8. โ Verify Indeed Completion Type**: IF Condition Purpose**: Evaluates Indeed scraping completion status Condition**: status === "ready" Logic**: True: Proceeds to data validation False: Loops back to status check with wait 9. โ Verify LinkedIn Completion Type**: IF Condition Purpose**: Evaluates LinkedIn scraping completion status Condition**: status === "ready" Logic**: True: Proceeds to data validation False: Loops back to status check with wait 10. ๐ Validate Indeed Data Type**: IF Condition Purpose**: Ensures Indeed returned job records Condition**: records !== 0 Logic**: True: Proceeds to fetch Indeed data False: Skips Indeed data retrieval 11. 
๐ Validate LinkedIn Data Type**: IF Condition Purpose**: Ensures LinkedIn returned job records Condition**: records !== 0 Logic**: True: Proceeds to fetch LinkedIn data False: Skips LinkedIn data retrieval 12. ๐ฅ Fetch Indeed Data Type**: HTTP Request (GET) Purpose**: Retrieves final Indeed job listings Endpoint**: https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id} Format**: JSON Function**: Downloads completed Indeed job data 13. ๐ฅ Fetch LinkedIn Data Type**: HTTP Request (GET) Purpose**: Retrieves final LinkedIn job listings Endpoint**: https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id} Format**: JSON Function**: Downloads completed LinkedIn job data 14. ๐ Merge Results Type**: Merge Node Purpose**: Combines Indeed and LinkedIn job results Mode**: Merge all inputs Function**: Creates unified dataset from both platforms 15. ๐ Save to Google Sheet Type**: Google Sheets Node Purpose**: Exports combined job data for analysis Operation**: Append rows Target**: "Compare" sheet in specified Google Sheet document Data Mapping**: Job Title, Company Name, Location Job Detail (description), Apply Link Salary, Job Type, Discovery Input Workflow Flow Input Form โ Format APIs โ [Indeed Trigger] + [LinkedIn Trigger] โ โ Check Status Check Status โ โ Wait 60s Wait 60s โ โ Verify Ready Verify Ready โ โ Validate Data Validate Data โ โ Fetch Indeed Fetch LinkedIn โ โ โโโโ Merge Results โโโโ โ Save to Google Sheet Configuration Requirements API Keys & Credentials Bright Data API Key**: Required for both LinkedIn and Indeed scraping Google Sheets OAuth2**: For data storage and export access n8n Form Webhook**: For user input collection Setup Parameters Google Sheet ID**: Target spreadsheet identifier Sheet Name**: "Compare" tab for job data export Form Webhook ID**: User input form identifier Dataset IDs**: Indeed: gd_lpfll7v5hcqtkxl6l LinkedIn: gd_l4dx9j9sscpvs7no2 Key Features Dual Platform Scraping Simultaneous LinkedIn and Indeed job searches Parallel processing for faster results Comprehensive job market coverage Platform-specific field extraction Intelligent Status Monitoring Real-time scraping progress tracking Automatic retry mechanisms with 60-second intervals Data validation before processing Error handling and timeout management Smart Data Processing Unified data format from both platforms Intelligent field mapping and standardization Duplicate detection and removal Rich metadata extraction Google Sheets Integration Automatic data export and storage Organized comparison format Historical job search tracking Easy sharing and collaboration Form-Based Interface User-friendly job search form Flexible job type filtering Multi-country support Real-time workflow triggering Use Cases Personal Job Search Comprehensive multi-platform job hunting Automated daily job searches Organized opportunity comparison Application tracking and management Recruitment Services Client job search automation Market availability assessment Competitive salary analysis Bulk candidate sourcing Market Research Job market trend analysis Salary benchmarking studies Skills demand assessment Geographic opportunity mapping HR Analytics Competitor hiring intelligence Role requirement analysis Compensation benchmarking Talent market insights Technical Notes Polling Interval**: 60-second status checks for both platforms Result Limiting**: Maximum 2 jobs per input per platform Data Format**: JSON with structured field mapping Error Handling**: Comprehensive error tracking in all API requests Retry Logic**: Automatic 
status rechecking until completion Country Support**: Adaptable domain selection (indeed.com, fr.indeed.com) Form Validation**: Required fields with optional job type filtering Merge Strategy**: Combines all results from both platforms Export Format**: Standardized Google Sheets columns for easy analysis Sample Data Output | Field | Description | Example | |-------|-------------|---------| | Job Title | Position title | "Senior Software Engineer" | | Company Name | Hiring organization | "Tech Solutions Inc." | | Location | Job location | "San Francisco, CA" | | Job Detail | Full description | "We are seeking a senior developer..." | | Apply Link | Direct application URL | "https://company.com/careers/123" | | Salary | Compensation info | "$120,000 - $150,000" | | Job Type | Employment details | "Full-time, Remote" | Setup Instructions Import Workflow: Copy JSON configuration into n8n Configure Bright Data: Add API credentials for both datasets Setup Google Sheets: Create target spreadsheet and configure OAuth Update References: Replace placeholder IDs with your actual values Test Workflow: Submit test form and verify data export Activate: Enable workflow and share form URL with users For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
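To illustrate component 2 (the Format Input for APIs code node), here is a rough sketch of how the form fields could be turned into Bright Data dataset inputs. The exact input keys each Bright Data dataset expects are not spelled out above, so treat the property names below as assumptions rather than the template's actual payload.

```javascript
// Sketch of the "Format Input for APIs" Code node (illustrative; dataset input keys are assumptions).
const form = $input.first().json; // fields from the trigger form: City, Job Title, Country, Job Type

const location = `${form.City.trim()}, ${form.Country.trim()}`;
const keyword = form['Job Title'].trim();

// One input array per platform; each is posted to https://api.brightdata.com/datasets/v3/trigger.
const indeedInput = [{
  keyword_search: keyword,
  location: location,
  country: form.Country,
  domain: form.Country.toLowerCase() === 'france' ? 'fr.indeed.com' : 'indeed.com', // illustrative mapping
}];

const linkedinInput = [{
  keyword: keyword,
  location: location,
  job_type: form['Job Type'] || '',
}];

return [{ json: { indeedInput, linkedinInput } }];
```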
by CustomJS
n8n Workflow: HTML to PDF Generator

This n8n workflow converts HTML content into a styled PDF and returns it as a response via a webhook. The workflow receives HTML input, processes it using CustomJS's PDF toolkit (@custom-js/n8n-nodes-pdf-toolkit), and sends the resulting PDF back to the original webhook requester.

Features
**Webhook Trigger**: Accepts incoming requests with HTML content.
**HTML to PDF Conversion**: Uses CustomJS to transform HTML into a PDF.
**Response**: Sends the generated PDF back as the webhook response.

Requirements
**Self-hosted** n8n instance
A CustomJS API key for HTML to PDF conversion
**HTML content** to be converted into a PDF

Workflow Steps
1. Webhook Trigger: Accepts incoming HTTP requests with HTML content. This data is passed to the next node for processing.
2. HTML to PDF Conversion: Uses the CustomJS node to convert the incoming HTML into a PDF document. You can customize the HTML content to match your design requirements.
3. Respond to Webhook: Sends the generated PDF back to the original webhook request as a binary response.

Setup Guide
1. Configure CustomJS API: Sign up at CustomJS, retrieve your API key from the profile page, and add it as n8n credentials.
2. Design Workflow:
- Create a Webhook: Set up a webhook to trigger the workflow when HTML content is received.
- Prepare HTML Content: The incoming request should include the HTML content you wish to convert into a PDF.
- Configure the HTML to PDF Node: Use the HTML to PDF node to convert the provided HTML into a PDF. The node uses the HTML input to generate a PDF via the CustomJS API.
- Respond with the PDF: The Respond to Webhook node sends the generated PDF back to the original requester as a binary response.

Example HTML Input: "Hello CustomJS! CustomJS provides the missing toolset for your no-code projects"

Result: the generated PDF returned as the webhook response.
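A quick way to exercise the workflow once it is active is to POST HTML to the webhook and save the binary response. The sketch below assumes the webhook expects an `html` field in a JSON body and uses a placeholder URL and path; adjust both to match your actual setup.

```javascript
// Illustrative client call to the webhook (the URL, path, and `html` field name are assumptions).
import { writeFile } from 'node:fs/promises';

const response = await fetch('https://your-n8n-instance/webhook/html-to-pdf', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    html: '<h1>Hello CustomJS!</h1><p>CustomJS provides the missing toolset for your no-code projects</p>',
  }),
});

// The workflow responds with the PDF as binary data.
await writeFile('output.pdf', Buffer.from(await response.arrayBuffer()));
console.log('Saved output.pdf');
```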
by CustomJS
n8n Workflow: Automating Website Screenshots from Google Sheets

This n8n workflow captures screenshots of websites listed in a Google Sheet and saves them to Google Drive using the CustomJS PDF Toolkit (@custom-js/n8n-nodes-pdf-toolkit).

Features
**Monitors** a Google Sheet for new rows with website URLs.
**Captures** screenshots of the websites using the CustomJS PDF Toolkit.
**Uploads** the screenshots to a specified Google Drive folder.

Notice
Community nodes can only be installed on self-hosted instances of n8n.

Requirements
**Self-hosted** n8n instance
A Google Sheets document containing website URLs and titles.
A Google Drive folder to store the screenshots.
A CustomJS API key for website screenshots.
**n8n credentials** for Google Sheets and Google Drive.

Workflow Steps
1. Google Sheets Trigger: Monitors a specified sheet for new rows and extracts the URL and Title from each row.
2. Website Screenshot Node: Uses the CustomJS PDF Toolkit to take a screenshot of the given URL.
3. Google Drive Upload: Saves the screenshot to a specific Google Drive folder, using the Title column as the filename.

Setup Guide
1. Connect Google Sheets: Ensure your Google Sheet has a column named Url for website URLs and Name for website names. Set up Google Sheets credentials in n8n.
2. Configure CustomJS API: Sign up at CustomJS, retrieve your API key from the profile page, and add it as n8n credentials.
3. Set Up Google Drive: Create a folder in Google Drive to store screenshots, copy the folder ID, and set it in the Google Drive node in n8n.

Perfect for:
**Website monitoring**
**Generating visual archives of web pages**
**Automating content curation**

This workflow streamlines the process of capturing and organizing website screenshots efficiently.
by Ranjan Dailata
Notice
Community nodes can only be installed on self-hosted instances of n8n.

Who this is for
This n8n-powered automation uses Bright Data's MCP Client to extract real-time data from a price-drop site listing Amazon products, including price changes and related product details. The extracted data is enriched with structured data transformation, content summarization, and sentiment analysis using the Google Gemini LLM. The Amazon Price Drop Intelligence Engine is designed for:
**Ecommerce Analysts** who need timely updates on competitor pricing trends
**Brand Managers** seeking to understand consumer sentiment around pricing
**Data Scientists** building pricing models or enrichment pipelines
**Affiliate Marketers** looking to optimize campaigns based on dynamic pricing
**AI Developers** automating product intelligence pipelines

What problem is this workflow solving?
This workflow solves several key pain points:
Reliable Scraping: Uses Bright Data MCP, a managed crawling platform that handles proxies, captchas, and site structure changes automatically.
Insight Generation: Transforms unstructured HTML into structured data and then into human-readable summaries using the Google Gemini LLM.
Sentiment Context: Goes beyond raw pricing data to reveal how customers feel about the price change, helping businesses and researchers measure consumer reaction.
Automated Reporting: Aggregates and stores data for easy access and downstream automation (e.g., dashboards, notifications, pricing models).

What this workflow does
1. Scrape the price-drop site with Bright Data MCP: The workflow begins by scraping the targeted price-drop site for Amazon listings using Bright Data's Model Context Protocol (MCP). You can configure which site this targets.
2. Structured Data Extraction: Once the HTML content is retrieved, Google Gemini is employed to parse and structure the product information (title, price, discount, brand, ratings).
3. Summarization & Sentiment Analysis: The extracted data is passed through an LLM chain to generate a concise summary of the product and its recent price movement, and to perform sentiment analysis on user reviews and public perception.
4. Store the Results: Results are saved to disk for archiving or bulk processing, and updated in a Google Sheet, making them instantly shareable with your team or easy to integrate into a BI dashboard.

Pre-conditions
Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol.
You need a Bright Data account and must complete the setup described in the Setup section below.
You need a Google Gemini API key. Visit Google AI Studio.
You need to install the Bright Data MCP Server @brightdata/mcp.
You need to install the n8n-nodes-mcp community node.

Setup
Make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
Make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
Sign up at Bright Data.
Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access it through Vertex AI or a proxy).
In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN into the Environments textbox as API_TOKEN=<your-token>.

How to customize this workflow to your needs
**Target different platforms**: Switch Amazon for Walmart, eBay, or any ecommerce source using Bright Data's flexible scraping infrastructure.
**Enrich with more LLM tasks**: Add brand tone analysis, category classification, or competitive benchmarking using Gemini prompts.
**Visualize output**: Pipe the Google Sheet to Looker Studio, Tableau, or Power BI.
**Notification integrations**: Add Slack, Discord, or email notifications for price drop alerts.
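For reference, the MCP Client (STDIO) credential essentially needs to know how to launch the Bright Data MCP server and which environment variables to pass to it. A hedged illustration of those values is below; the exact field names in the credential dialog may differ between n8n-nodes-mcp versions, so treat this as a sketch.

```javascript
// Illustrative MCP Client (STDIO) credential values (field names may differ in your n8n-nodes-mcp version).
const mcpClientCredential = {
  command: 'npx',                          // launches the MCP server installed earlier
  args: ['-y', '@brightdata/mcp'],         // the Bright Data MCP server package
  environments: 'API_TOKEN=<your-token>',  // paste your Bright Data API token here, as noted above
};
```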
by Zacharia Kimotho
n8n recently introduced folders, which are a big improvement for workflow management on top of tags. Existing workflows, however, need to be moved into folders manually. The simplest approach is to convert the current tags into folders and move all workflows carrying a given tag into the corresponding folder. This assumes the tag name will be used as the folder name.

To Note
For workflows that use more than one tag, the workflow will be assigned to the folder of the last tag that is processed.

How does it work
I took the liberty of simplifying the setup needed on your part so the workflow stays beginner-friendly:
1. Copy and paste this workflow into your n8n canvas. You must have existing workflows and tags before you can run it.
2. Set your n8n login details in the "set Credentials" node with the n8n URL, username, and password.
3. Set up your n8n API credentials on the "get workflows" n8n node.
4. Run the workflow. This opens a form where you can select the number of tags to move; then click Submit.
5. The workflow responds with the number of workflows that were successfully imported.

Read more about the template.
Built by Zacharia Kimotho - Imperol
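As a rough illustration of the first step, here is how the "get workflows" call could look if reproduced in a Code node against the n8n public API. The base URL, API key, and tag value are placeholders, and the actual template uses the built-in n8n node plus an authenticated (username/password) session for the folder move itself, so this is only a sketch.

```javascript
// Illustrative call to the n8n public API to list workflows carrying a given tag.
// Base URL, API key, and tag value are placeholders; folder assignment itself is handled elsewhere in the template.
const baseUrl = 'https://your-n8n-instance.example.com';
const apiKey = '<your-n8n-api-key>';

const res = await fetch(`${baseUrl}/api/v1/workflows?tags=marketing`, {
  headers: { 'X-N8N-API-KEY': apiKey },
});
const { data } = await res.json();

// Each returned workflow would then be moved into the folder named after its tag.
return data.map(w => ({ json: { id: w.id, name: w.name, tags: w.tags } }));
```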
by Ozgur Karateke
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

1. What Does It Do / Which Problem Does It Solve?

This workflow turns Google Docs-based contract & form templates into ready-to-sign PDFs in minutes, all from a single chat flow.
**Automates repetitive document creation.** Instead of copying a rental, sales, or NDA template and filling it by hand every time, the bot asks for the required values and fills them in.
**Eliminates human error.** It lists every mandatory field so nothing is missed, and removes unnecessary clauses via conditional blocks.
**Speeds up approvals.** The final draft arrives as a direct PDF link: one click to send for signing.
**One template, unlimited variations.** Every new template you drop in Drive is auto-listed with zero workflow edits; it scales effortlessly.
**100% no-code.** Runs on n8n + Google Apps Script, with no extra backend, self-hosted or cloud.

2. How It Works (Detailed Flow)

Template Discovery: The TemplateList node scans the Drive folder you specify via the ?mode=meta endpoint and returns an id / title / desc list. The bot shows this list in chat.
Selection & Metadata Fetch: The user types a template name. GetMetaData opens the chosen Doc, extracts META_JSON, placeholders, and conditional blocks, then lists mandatory & optional fields.
Data-Collection Loop: The bot asks for every placeholder value. For each conditional block it asks Yes / No. Answers are accumulated in a data JSON object.
Final Confirmation: The bot summarizes the inputs; when the user clicks Confirm, the DocProcess sub-workflow starts.

DocProcess Sub-Workflow

| Step | Node | Task |
| --- | --- | --- |
| 1 | User Choice Match Check | Verifies the name-to-ID match; throws if wrong |
| 2 | GetMetaData (renew) | Gets the latest placeholder list |
| 3 | Validate JSON Format | Checks for missing / unknown fields |
| 4 | CopyTemplate | Copies the Doc via the Drive API |
| 5 | FillDocument | Apps Script fills placeholders & removes blocks |
| 6 | Generate PDF Link | Builds an export?format=pdf URL |

Delivery: The master agent sends Download PDF & Open Google Doc links.

Error Paths:
status:"ERROR", missing:[…] → the bot lists the missing fields and re-asks.
unknown:[…] → the template list is outdated; rerun TemplateList.
Any Apps Script error → the returned message is shown verbatim in chat.

3. Setup Steps (Full Checklist)

> Goal: Get a flawless PDF on the first run. Mentally tick the checkbox in front of every line as you go.

A. Google Drive Preparation

| Step | Do This | Watch Out For |
| --- | --- | --- |
| 1 | Create a Templates/ folder and put every template Doc inside | Exactly one folder; no sub-folders |
| 2 | Placeholders in every Doc are {{UPPER_CASE}} | No Turkish characters or spaces |
| 3 | Wrap optional clauses with [[BLOCK_NAME:START]]…[[BLOCK_NAME:END]] | The START tag must have a blank line above |
| 4 | Add a META_JSON block at the very end | The script deletes it automatically after filling |
| 5 | Right-click Doc > Details ▸ Description = 1-line human description | Shown by the bot in the list |
| 6 | Create a second Generated/ folder (for copies) | Keeps Drive tidy |

> Folder ID (long alphanumeric string) = <TEMPLATE_PARENT_ID>. We'll paste this into the TemplateList node next.

Simple sample template → Template Link

B. Import the Workflow into n8n
Settings ▸ Import Workflow ▸ DocAgent.json. If nodes look broken afterwards, it is not a community-node problem; you only need to select credentials.

C. Customize the TemplateList Node
Open the Template List node and replace '%3CYOUR_PARENT_ID%3E' in the parents part of the URL with the real folder ID.
Right-click the node > Execute Node and copy the entire JSON response.
In the editor, paste it into: DocAgent → System Prompt (top) and User Choice Match Check → System Prompt (top). Save.

> Why manual? Caching the list saves LLM tokens. Whenever you add a template, rerun the node and update the prompts.

D. Deploy the Apps Script

| Step | Screen | Note |
| --- | --- | --- |
| 1 | Open the Gist files GetMetaData.gs + FillDocument.gs → File ▸ Make a copy | Both files may live in one project |
| 2 | Project Settings > enable Google Docs API & Google Drive API | Otherwise you'll see 403 errors |
| 3 | Deploy ▸ New deployment ▸ Web app (Execute as: Me; Who has access: Anyone) | |
| 4 | On the consent screen allow the scopes …/auth/documents and …/auth/drive | Click Advanced > Go if Google warns |
| 5 | Copy the Web App URL (e.g. https://script.google.com/macros/s/ABC123/exec) | If this URL changes, update n8n |

Apps Script source code → Notion Link

E. Wire the Script URL in n8n

| Node | Field | Action |
| --- | --- | --- |
| GetMetaData | URL | <WEB_APP_URL>?mode=meta&id={{ $json["id"] }} |
| FillDocument | URL | <WEB_APP_URL> |

> Prefer using an .env file? Add GAS_WEBAPP_URL=… and reference it as {{ $env.GAS_WEBAPP_URL }}.

F. Add Credentials
**Google Drive OAuth2** → Drive API (v3) Full Access
**Google Docs OAuth2** → same account
**LLM key** (OpenAI / Gemini)
(Optional) Postgres Chat Memory credential for the corresponding node

G. First Run (Smoke Test)
Switch the workflow to Active. In the chat panel type /start. The bot lists templates → pick one. Fill the mandatory fields, optionally toggle blocks → Confirm. The Download PDF link appears → setup complete.

H. Common Errors & Fixes

| Error | Likely Cause | Remedy |
| --- | --- | --- |
| 403: Apps Script permission denied | Web app access set to User | Redeploy as Anyone, re-authorize scopes |
| placeholder validation failed | Missing required field | Provide the listed values → rerun DocProcess |
| unknown placeholders: … | Template vs. agent mismatch | Check placeholder spelling (UPPER_CASE ASCII) |
| Template ID not found | Prompt list is old | Rerun TemplateList → update both prompts |
| Cannot find META_JSON | No meta block / wrong tag | Add [[META_JSON_START]] … [[META_JSON_END]], retry |

Final Checklist
- [ ] Drive folder structure & template rules ready
- [ ] Workflow imported, folder ID set in the node
- [ ] TemplateList output pasted into both prompts
- [ ] Apps Script deployed, URL set in the nodes
- [ ] OAuth credentials & LLM key configured
- [ ] /start test passes, PDF link received

Need Help with Customizations?
Reach out for consulting & support on LinkedIn: Özgür Karateke
Full Documentation → Notion
Simple sample template → Template Link
Apps Script source code → Notion Link
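To make the FillDocument step tangible, here is a hedged Apps Script sketch of how placeholder values could be filled into the copied Doc. It is not the linked Gist's actual code; the {{UPPER_CASE}} placeholder format follows the template rules above, and the conditional-block removal is only hinted at in a comment.

```javascript
// Hypothetical Apps Script sketch (not the actual Gist code) showing how {{PLACEHOLDER}} values
// could be filled into the copied Google Doc.
function fillPlaceholders(docId, data) {
  const body = DocumentApp.openById(docId).getBody();

  // Replace each {{KEY}} placeholder with the value collected in chat.
  Object.keys(data).forEach(function (key) {
    body.replaceText('\\{\\{' + key + '\\}\\}', String(data[key]));
  });

  // Conditional [[BLOCK:START]]…[[BLOCK:END]] sections declined by the user would be
  // located and removed here, before the META_JSON block is deleted.
}
```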
by Jimleuk
This template attempts to replicate OpenAI's DeepResearch feature which, at the time of writing, is only available to their Pro subscribers.

> An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you. Source

Though the inner workings of DeepResearch have not been made public, the feature presumably relies on deep web search, web scraping, and reasoning models to generate reports. All of which n8n is really good at! Using this workflow, n8n users can enjoy a variation of the Deep Research experience for themselves and their teams at a fraction of the cost. Better yet, they can learn from and customise this Deep Research template for their businesses and/or organisations. Check out the generated reports here: https://jimleuk.notion.site/19486dd60c0c80da9cb7eb1468ea9afd?v=19486dd60c0c805c8e0c000ce8c87acf

How it works
A form first captures the user's research query and how deep they'd like the researcher to go. Once submitted, a blank Notion page is created which will later hold the final report, and the researcher gets to work. The user's query goes through a recursive series of web searches and web scraping to collect data on the research topic and generate partial learnings. Once complete, all learnings are combined and given to a reasoning LLM to generate the final report. The report is then written to the placeholder Notion page created earlier.

How to use
Duplicate this Notion database template and make sure all Notion-related nodes point to it.
Sign up for an APIFY.com API key for web search and scraping services.
Ensure you have access to OpenAI's o3-mini model. Alternatively, switch it out for the o1 series.
You must publish this workflow and ensure the form URL is publicly accessible.

On depth & breadth configuration
For more detailed reports, increase depth and breadth, but be warned that the workflow will take exponentially longer and cost more to complete. The recommended defaults are usually good enough.
Depth=1 & Breadth=2 will take about 5-10 minutes.
Depth=1 & Breadth=3 will take about 15-20 minutes.
Depth=3 & Breadth=5 will take about 2+ hours!

Customising this workflow
I deliberately chose not to use AI-powered scrapers like Firecrawl, as I felt these were quite costly and quotas would be quickly exhausted. However, feel free to switch to whichever web search and scraping services suit your environment. Maybe you decide not to source the web at all, and data collection comes from internal documents instead; this template gives you the freedom to change that. Experiment with different reasoning/thinking models such as DeepSeek and Google's Gemini 2.0. Finally, the LLM prompts could definitely be improved; refine them to fit your use case.

Credits
This template is largely based on the work by David Zhang (dzhng) and his open-source implementation of Deep Research: https://github.com/dzhng/deep-research
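For intuition on why runtime grows so quickly with the depth and breadth settings, the recursion can be pictured as in the sketch below: breadth controls how many queries are generated per level and depth how many levels of follow-up research are run, so the number of searches grows roughly like breadth^depth. This is a conceptual sketch with assumed helper functions, not the template's actual node logic.

```javascript
// Conceptual sketch of the depth/breadth recursion (not the template's actual implementation).
// Roughly breadth^depth searches are performed, which is why high values take hours.
async function research(query, depth, breadth, learnings = []) {
  const queries = await generateSearchQueries(query, breadth); // assumed LLM helper
  for (const q of queries) {
    const pages = await searchAndScrape(q);                    // assumed Apify-backed helper
    const partial = await extractLearnings(q, pages);          // assumed LLM helper
    learnings.push(...partial);
    if (depth > 1) {
      await research(partial.join('\n'), depth - 1, breadth, learnings); // recurse one level deeper
    }
  }
  return learnings; // combined and handed to the reasoning model for the final report
}
```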
by ลukasz
Who is it for
This workflow is for anyone who is using n8n. It's especially helpful if you are a DevOps engineer and your n8n instance is self-hosted. If you care a lot about security and the number of failed executions, and you are already using InfluxDB to monitor the status of your systems, this will fit perfectly into your stack.

How it works
This automation is fairly simple. It uses native n8n nodes to gather data about the instance from itself, parses this data into a format compatible with InfluxDB's input, and finally sends it to InfluxDB for further processing.

Remember to set up
Setup is really simple: you just need to provide three variables. The first is your InfluxDB URL, the second is your InfluxDB organization, and the third is your InfluxDB bucket name. Of course, to set up the n8n nodes and gather data from them, you will also need your instance API key. And that's all.

How it looks in InfluxDB? See below.

Schedule Audits
Audits don't need to run often, but I would recommend running them on a regular basis so you get a real data series in InfluxDB. Once a day should be enough, but it depends on your n8n usage, of course.

Visit my profile for other automations for businesses. And if you are looking for dedicated software development, do not hesitate to reach out! You can also find automations in my Sailing Byte's n8n GitHub repository.
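To show what "parsing the data to be compatible with InfluxDB input" can mean in practice, below is a hedged sketch of a Code node that turns audit-style results into InfluxDB line protocol and writes them via the v2 API. The measurement and field names are assumptions; the template's own parsing node may format things differently.

```javascript
// Hedged sketch of formatting n8n audit data as InfluxDB line protocol (measurement/field names are assumptions).
const url = 'https://influxdb.example.com';   // your InfluxDB URL
const org = 'my-org';                         // your InfluxDB organization
const bucket = 'n8n-audit';                   // your InfluxDB bucket
const token = '<influxdb-api-token>';

// Assume each incoming item carries an audit section name and a count of findings.
const lines = $input.all().map(item => {
  const section = String(item.json.section || 'unknown').replace(/[ ,=]/g, '_');
  const findings = Number(item.json.findings || 0);
  return `n8n_audit,section=${section} findings=${findings}i ${Date.now() * 1e6}`;
});

// The InfluxDB v2 write endpoint expects line protocol in the request body.
await fetch(`${url}/api/v2/write?org=${org}&bucket=${bucket}&precision=ns`, {
  method: 'POST',
  headers: { Authorization: `Token ${token}`, 'Content-Type': 'text/plain' },
  body: lines.join('\n'),
});

return [{ json: { written: lines.length } }];
```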