by Frederik Duchi
This n8n template demonstrates how to automatically process feedback on tasks and procedures using an AI agent. Employees provide feedback after completing a task, which the AI then analyzes to suggest improvements to the underlying procedures. Improvements can range from changing how a single task is executed to splitting or merging tasks within a procedure. Management reviews the suggestions and decides whether to implement them. This makes it easy to close the loop between execution, feedback, and continuous process improvement.

Use cases are many:
- Marketing (improve the process of approving advertising content)
- Finance (optimize the process of expense reimbursement)
- Operations (refine the process of equipment maintenance)

Good to know
- The automation is based on the Baserow template for handling Standard Operating Procedures. However, it can also be implemented on other databases.
- Baserow authentication is done through a database token. Check the documentation on how to create such a token.
- Tasks are inserted using the HTTP Request node instead of a dedicated Baserow node. This supports batch imports instead of importing records one by one.

Requirements
- Baserow account (cloud or self-hosted)
- The Baserow template for handling Standard Operating Procedures, or a similar database with the following tables and fields:
  - Procedures table with general procedure information such as the name or description.
  - Procedures steps table with all the steps associated with a procedure.
  - Tasks table that contains the actual tasks based on the procedure steps. It must have a field to capture feedback and a boolean field that indicates whether the feedback has been processed, so the same feedback is not used more than once.
  - Improvement suggestions table to store the suggestions made by the AI agent.

How it works
- **Set table and field ids**: Stores the ids of the involved Baserow database and tables, together with the information needed to make requests to the Baserow API.
- **Feedback processing agent**: The prompt contains a short instruction to check the feedback and suggest improvements to the procedures. The system message is much more extensive, giving the agent as much detail and guidance as possible. It contains the following sections:
  - Role: gives the agent a clear professional perspective.
  - Goals: lets the agent focus on clarity, efficiency, and actionable improvements.
  - Instructions: guides the agent through a step-by-step flow.
  - Output: shows the agent the expected format and details.
  - Notes: sets guardrails so the agent makes justified and practical suggestions.
  The agent uses the following nodes:
  - OpenAI Chat Model (Model): by default the template uses OpenAI's gpt-4.1 model, but you can replace it with any LLM.
  - current_procedures (Tool): provides information about all available procedures to the agent.
  - current_procedure steps (Tool): provides information about every step in the procedures to the agent.
  - tasks_feedback (Tool): provides the employees' feedback to the agent.
  - Required output schema (Output parser): forces the agent to produce JSON that matches the Improvement suggestions table structure, so the suggestions can easily be added to the database in the next step.
- **Create improvement suggestions**: Calls the API endpoint /api/database/rows/table/{table_id}/batch/ to insert multiple records at once into the Improvement suggestions table. The inserted records are the output generated by the AI agent. Check the Baserow API documentation for further details.
- **Get non-processed feedback**: Gets all records from the Tasks table that contain feedback but are not yet marked as processed.
- **Set feedback to processed**: Updates the boolean field of each task to true to indicate that the feedback has been processed.
- **Aggregate records for input**: Aggregates the data from the previous nodes into an array stored in a property named items. This matches the format the Baserow API expects for batch inserts and updates.
- **Update tasks to processed feedback**: Calls the API endpoint /api/database/rows/table/{table_id}/batch/ to update multiple records at once in the Tasks table. The updated records have their processed field set to true. Check the Baserow API documentation for further details (a sketch of such a batch request is shown after this section).

How to use
- The Manual Trigger node is provided as an example, but you can replace it with another trigger such as a webhook.
- The included Baserow SOP template works perfectly as a base schema to try out this workflow.
- Set the corresponding ids in the Configure settings and ids node.
- Check that the field names used for the filters in the tasks_feedback tool node match the ones in your Tasks table.
- Check that the field names used for the filters in the Get non-processed feedback node match the ones in your Tasks table.
- Check that the property name in the Set feedback to processed node matches the one in your Tasks table.

Customising this workflow
- You can add a new workflow that updates the procedures based on management's acceptance or rejection of the suggestions.
- There is a lot of room for customization in the system prompt. For example, change the goal to prioritize security, cost savings, or customer experience.
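For reference, here is a minimal sketch of the batch call that the HTTP Request nodes make, written as standalone JavaScript. The base URL, table id, token, and field names (Suggestion, Procedure, Reasoning) are assumptions for illustration only; map them to your own Baserow setup.

```javascript
// Minimal sketch of the Baserow batch insert used by "Create improvement suggestions".
// BASEROW_URL, TABLE_ID, TOKEN and the field names are placeholders/assumptions; they must
// match your Improvement suggestions table when user_field_names=true is set.
const BASEROW_URL = "https://api.baserow.io";
const TABLE_ID = 123;                    // your Improvement suggestions table id
const TOKEN = "YOUR_DATABASE_TOKEN";

async function batchInsert(items) {
  const res = await fetch(
    `${BASEROW_URL}/api/database/rows/table/${TABLE_ID}/batch/?user_field_names=true`,
    {
      method: "POST",                    // use PATCH (and include an "id" per item) for batch updates
      headers: {
        Authorization: `Token ${TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ items }),   // the "items" array produced by the Aggregate node
    }
  );
  return res.json();
}

// Example payload shaped like the AI agent's output (field names are hypothetical)
batchInsert([
  {
    Suggestion: "Merge steps 2 and 3",
    Procedure: "Expense reimbursement",
    Reasoning: "Both steps are always done together.",
  },
]).then(console.log);
```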
by Trung Tran
Free PDF Generator in n8n – No External Libraries or Paid Services

> A 100% free n8n workflow for generating professionally formatted PDFs without relying on external libraries or paid converters. It uses OpenAI to create Markdown content, Google Docs to format and convert to PDF, and integrates with Google Drive and Slack for archiving and sharing, ideal for reports, BRDs, proposals, or any document you need directly inside n8n.

Watch the demo video below:

Who's it for
- Teams that need auto-generated documents (reports, guides, checklists) in PDF format.
- Operations or enablement teams who want files archived in Google Drive and shared in Slack automatically.
- Anyone experimenting with LLM-powered document generation integrated into business workflows.

How it works / What it does
1. Manual trigger starts the workflow.
2. LLM generates a sample Markdown document (via OpenAI Chat Model).
3. Google Drive folder is configured for storage.
4. Google Doc is created from the generated Markdown content.
5. Document is exported to PDF using Google Drive (sample PDF generated from comprehensive Markdown).
6. PDF is archived in a designated Drive folder.
7. Archived PDF is downloaded for sharing.
8. Slack message is sent with the PDF attached.

How to set up
1. Add nodes in sequence:
   - Manual Trigger
   - OpenAI Chat Model (prompt to generate sample Markdown)
   - Set/Manual input for Google Drive folder ID(s)
   - HTTP Request or Google Drive Upload (convert to Google Docs)
   - Google Drive Download (PDF export)
   - Google Drive Upload (archive PDF)
   - Google Drive Download (fetch archived file)
   - Slack Upload (send message with attachment)
2. Configure credentials for OpenAI, Google Drive, and Slack.
3. Map output fields:
   - data.markdown → Google Docs creation
   - docId → PDF export
   - fileId → Slack upload
4. Test run to ensure the PDF is generated, archived, and posted to Slack (a sketch of the PDF export call is shown after this section).

Requirements
**Credentials**:
- OpenAI API key (or compatible LLM provider)
- Google Drive (OAuth2) with read/write permissions
- Slack bot token with files:write permission
**Access**:
- Write access to target Google Drive folders
- Slack bot invited to the target channel

How to customize the workflow
- **Change the prompt** in the OpenAI Chat Model to generate different types of content (reports, meeting notes, checklists).
- **Automate triggering**: Replace the Manual Trigger with Cron for scheduled document generation, or use a Webhook Trigger to run on-demand from external apps.
- **Modify storage logic**: Save both .md and .pdf versions in Google Drive, and use separate folders for drafts vs. final versions.
- **Enhance distribution**: Send PDFs to multiple Slack channels or via email, or integrate with project management tools for automated task creation.
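The PDF export step relies on Google Drive's export endpoint. The sketch below assumes a `docId` and an OAuth2 access token are available; in the workflow itself, the Google Drive node and its credentials handle this call for you.

```javascript
// Minimal sketch of exporting a Google Doc as PDF via the Drive v3 API.
// docId and accessToken are placeholders for illustration.
const docId = "YOUR_GOOGLE_DOC_ID";
const accessToken = "YOUR_OAUTH_TOKEN";

async function exportDocAsPdf() {
  const res = await fetch(
    `https://www.googleapis.com/drive/v3/files/${docId}/export?mimeType=application/pdf`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  if (!res.ok) throw new Error(`Export failed: ${res.status}`);
  return Buffer.from(await res.arrayBuffer()); // binary PDF, ready to archive in Drive or post to Slack
}

exportDocAsPdf().then((pdf) => console.log(`PDF size: ${pdf.length} bytes`));
```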
by Davide
This workflow creates a voice AI assistant accessible via Telegram that leverages ElevenLabs* powerful voice synthesis technology. Users can either clone their own voice or transform their voice using pre-existing voice models, all through simple voice messages sent to a Telegram bot.

*ONLY FOR STARTER, CREATOR, PRO PLAN

This workflow allows users to:
- Clone their voice by sending a voice message to a Telegram bot (creates a new voice profile on ElevenLabs)
- Change their voice to a cloned voice and save the output to Google Drive

For Best Results
Important considerations for optimal voice cloning via Telegram voice messages:

1. Recording Quality & Environment
- Record in a quiet room with minimal echo and background noise
- Use a consistent microphone position (10-15 cm from mouth)
- Ensure clear audio without distortion or clipping

2. Content Selection & Variety
- Send 1 voice message totaling 5-10 minutes of speech
- Include diverse vocal sounds, tones, and a natural speaking cadence
- Use complete sentences rather than isolated words

3. Audio Consistency
- Maintain consistent volume, tone, and distance from the microphone
- Avoid interruptions, laughter, coughs, or background voices
- Speak naturally without artificial effects or filters

4. Technical Preparation
- Ensure Telegram isn't overly compressing audio (use HQ recording)
- Record all messages in the same session under the same conditions
- Include both neutral speech and varied emotional expressions

How it works
- Trigger: The workflow starts with a Telegram trigger that listens for incoming messages (text, voice notes, or photos).
- Authorization check: A Code node checks whether the sender's Telegram user ID matches your predefined ID. If not, the process stops (a sketch of this check is shown after this section).
- Message routing: A Switch node routes the message based on its type:
  - Text → Not processed further in this flow.
  - Voice message → Sent to the "Get audio" node to retrieve the audio file from Telegram.
  - Photo → Not processed further in this flow.
- Two main options: From the "Get audio" node, the workflow splits into two possible paths:
  - Option 1 – Clone voice: The audio file is sent to ElevenLabs via an HTTP request to create a new cloned voice. The voice ID is returned and can be saved for later use.
  - Option 2 – Voice changer: The audio is sent to ElevenLabs for speech-to-speech conversion using a pre-existing cloned voice (the voice ID must be set in the node parameters). The resulting audio is saved to Google Drive.
- Output:
  - Cloned voice ID (for Option 1).
  - Converted audio file uploaded to Google Drive (for Option 2).

Set up steps
1. Telegram bot setup: Create a bot via BotFather and obtain the API token. Set up the Telegram Trigger node with your bot credentials.
2. Authorization configuration: In the "Sanitaze" Code node, replace XXX with your Telegram user ID to restrict access.
3. ElevenLabs API setup: Get an API key from ElevenLabs. Configure the HTTP Request nodes ("Create Cloned Voice" and "Generate cloned audio") with:
   - The API key in the Xi-Api-Key header.
   - The appropriate endpoint URLs (including the voice ID for speech-to-speech).
4. Google Drive setup (for Option 2): Set up Google Drive OAuth2 credentials in n8n and specify the target folder ID in the "Upload file" node.
5. Voice ID configuration:
   - For voice cloning: the voice name can be customized in the "Create Cloned Voice" node.
   - For voice changing: replace XXX in the "Generate cloned audio" node URL with your ElevenLabs voice ID.
6. Test the workflow: Activate the workflow and send a voice note from your authorized Telegram account to trigger cloning or voice conversion.
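As a reference for the authorization check, here is a minimal Code-node sketch. It assumes a standard Telegram Trigger payload; the field path and the ALLOWED_USER_ID constant are illustrative, not the template's exact code.

```javascript
// Minimal sketch of the check performed in the "Sanitaze" Code node:
// only messages from the authorized Telegram user continue down the workflow.
const ALLOWED_USER_ID = 123456789; // replace with your Telegram user ID ("XXX" in the template)

return $input.all().filter((item) => {
  const fromId = item.json?.message?.from?.id; // sender id in a Telegram Trigger payload
  return fromId === ALLOWED_USER_ID;           // unauthorized messages are dropped, stopping the flow
});
```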
👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support or add me on LinkedIn.
by Trung Tran
Beginner's Tutorial: Manage Google Cloud Storage Buckets and Objects with n8n

Watch the demo video below:

Who's it for
- Beginners who want to learn how to automate Google Cloud Storage (GCS) operations with n8n.
- Developers who want to combine AI image generation with cloud storage management.
- Anyone looking for a simple introduction to working with Buckets and Objects in GCS.

How it works / What it does
This workflow demonstrates end-to-end usage of Google Cloud Storage with AI integration:
1. Trigger: Start manually by clicking Execute Workflow.
2. Edit Fields: Provide input values (e.g., bucket name or image description).
3. List Buckets: Retrieve all existing buckets in the project (branch: view only).
4. Create Bucket: If needed, create a new bucket to store objects.
5. Prompt Generation Agent: Use an AI model to generate a creative text prompt.
6. Generate Image: Convert the AI-generated prompt into an image.
7. Upload Object: Store the generated image as an object in the selected bucket.
8. Delete Object: Clean up by removing the uploaded object if required.
This shows the full lifecycle: Bucket → Object (Create/Upload/Delete), combined with AI image generation.

How to set up
1. Trigger the workflow: Use the When clicking Execute workflow node to start manually.
2. Provide inputs: In Edit Fields, specify details such as the bucket name or the description text for the image.
3. List buckets: Use the List Buckets node to see what exists.
4. Create a bucket: Use Create Bucket if you want a new storage bucket.
5. Generate prompt & image: The Prompt Generation Agent uses an OpenAI Chat Model to create an image prompt, and the Generate an Image node turns this prompt into an actual image.
6. Upload to bucket: Use Create Object to upload the generated image into your GCS bucket (a sketch of the underlying upload call is shown after this section).
7. Delete object (optional): Use Delete Object to remove the file from the bucket as a cleanup step.

Requirements
- An active Google Cloud account with the Cloud Storage API enabled.
- A Service Account Key (JSON) credential added in n8n for GCS.
- An OpenAI API key configured in n8n for the prompt and image generation nodes.
- Basic familiarity with running workflows in n8n.

How to customize the workflow
- **Different object types:** Instead of images, upload PDFs, logs, or text files.
- **Automatic cleanup:** Skip the delete step if you want objects to persist.
- **Schedule trigger:** Replace manual execution with a weekly or daily schedule.
- **Dynamic prompts:** Accept user input from a form or webhook to generate images.
- **Multi-bucket management:** Extend the logic to manage multiple buckets across projects.
- **Notifications:** Add a Slack/Email step after upload to notify your team with the object URL.

✅ By the end of this tutorial, you'll understand how to:
- Work with Buckets (list, create).
- Work with Objects (upload, delete).
- Integrate AI image generation with Google Cloud Storage.
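For context, this is roughly what the Create Object step does under the hood via the Cloud Storage JSON API. Bucket name, object name, and the OAuth token below are placeholders; in n8n the Google Cloud Storage node handles authentication through its credentials.

```javascript
// Minimal sketch of a simple media upload to Google Cloud Storage (JSON API).
const bucket = "my-demo-bucket";          // placeholder
const objectName = "generated-image.png"; // placeholder
const accessToken = "YOUR_OAUTH_TOKEN";   // placeholder

async function uploadObject(imageBuffer) {
  const url =
    `https://storage.googleapis.com/upload/storage/v1/b/${bucket}/o` +
    `?uploadType=media&name=${encodeURIComponent(objectName)}`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "image/png",
    },
    body: imageBuffer, // binary image data from the Generate Image step
  });
  return res.json();   // object metadata, including its mediaLink
}
```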
by Madame AI
Analyze job market data with AI to find matching jobs

This n8n template helps you stay on top of the job market by matching scraped job offers with your resume using an AI Agent. This workflow is perfect for job seekers, recruiters, or market analysts who need to find specific job opportunities without manually sifting through countless listings.

Steps to Take
- Create BrowserAct Workflow: Set up the **Job Market Intelligence** template in your BrowserAct account.
- Add BrowserAct Token: Connect your BrowserAct account credentials to the **HTTP Request** node.
- Update Workflow ID: Change the workflow_id value in the **HTTP Request** node to match the one from your BrowserAct workflow.
- Connect Gemini: Add your Google Gemini credentials and update your resume inside the prompt in the **AI Agent** node.
- Configure Telegram: Connect your Telegram account and add your Channel ID to the **Send a text message** node.

How it works
- The workflow is triggered manually by clicking "Execute workflow," but you can easily set it to run on a schedule.
- It uses an HTTP Request node to start a web scraping task via the BrowserAct API to collect the latest job offers.
- A series of If and Wait nodes monitor the scraping job, ensuring the full data is ready before proceeding.
- An AI Agent node, powered by Google Gemini, processes the job listings and filters them to find the best matches for your resume.
- A Code node then transforms the AI's output into a clean, readable format (a sketch of such a Code node is shown after this section).
- Finally, the filtered job offers are sent directly to you via Telegram.

Requirements
- **BrowserAct** API account
- **BrowserAct** "Job Market Intelligence" Template
- **Gemini** account
- **Telegram** credentials

Need Help?
- How to Find Your BrowseAct API Key & Workflow ID
- How to Connect n8n to Browseract
- How to Use & Customize BrowserAct Templates

Workflow Guidance and Showcase
- Never Manually Search for a Job Again (AI Automation Tutorial)
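For illustration, a formatting Code node might look like the sketch below. It assumes the AI Agent returns a JSON array of matched jobs under `output`; the field names (title, company, url, reason) are hypothetical and should be adapted to the actual agent output.

```javascript
// Minimal sketch of the formatting Code node: turn the agent's matches
// into one Telegram-ready text message.
const raw = $input.first().json.output;
const jobs = typeof raw === "string" ? JSON.parse(raw) : raw;

const text = jobs
  .map((job, i) => `${i + 1}. ${job.title} at ${job.company}\n${job.url}\nWhy it fits: ${job.reason}`)
  .join("\n\n");

// One item with a single `text` property, ready for the "Send a text message" node
return [{ json: { text: text || "No matching jobs found today." } }];
```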
by Madame AI
Find & Qualify Funded Leads with BrowserAct & Gemini

This n8n template helps you find new investment leads by automatically scraping articles for funding announcements and analyzing them with an AI Agent. This workflow is ideal for venture capitalists, sales teams, or market researchers who need to automatically track and compile lists of recently funded companies.

Self-Hosted Only
This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only.

How it works
- The workflow is triggered manually but can be set to a Cron node to run on a schedule.
- A Google Sheets node loads a list of keywords (e.g., "Series A," "Series B") and geographic locations to search for.
- The workflow loops through each keyword, initiating BrowserAct web scraping tasks to collect relevant articles.
- A second set of BrowserAct nodes patiently monitors the scraping jobs, waiting for them to complete before proceeding.
- Once all articles are collected, they are merged and fed into an AI Agent node, powered by Google Gemini.
- The AI Agent processes the articles to identify companies that recently received funding, extracting the Company Name, the Field of Investment, and the source URL.
- A Code node transforms the AI's JSON output into a clean, itemized format (see the sketch after this section).
- An If node filters out any entries where no company was found, ensuring data quality.
- The qualified leads are automatically added or updated in a Google Sheet, matching by "Company" to prevent duplicates.
- Finally, a Slack message is sent to a channel to notify your team that the lead list has been updated.

Requirements
- **BrowserAct** API account for web scraping
- **BrowserAct** n8n Community Node -> (n8n Nodes BrowserAct)
- **BrowserAct** "Funding Announcement to Lead List (TechCrunch)" Template (or a similar scraping workflow)
- **Gemini** account for the AI Agent
- **Google Sheets** credentials for input and output
- **Slack** credentials for sending notifications

Need Help?
- How to Find Your BrowseAct API Key & Workflow ID
- How to Connect n8n to Browseract
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

Workflow Guidance and Showcase
- How to Automatically Find Leads from Funding News (n8n Workflow Tutorial)
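Here is a minimal sketch of the itemizing Code node. It assumes the AI Agent returns a JSON array of leads with company, field, and url properties; those names, and the output column names, are illustrative and should match your Google Sheet.

```javascript
// Minimal sketch: split the agent's JSON array into one n8n item per lead,
// so the If node and the Google Sheets node can process them individually.
const raw = $input.first().json.output;
const leads = typeof raw === "string" ? JSON.parse(raw) : raw;

return leads.map((lead) => ({
  json: {
    Company: lead.company || "",
    "Field of Investment": lead.field || "",
    "Source URL": lead.url || "",
  },
}));
```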
by Madame AI
AI-Powered Top GitHub Talent Sourcing (by Language & Location) to Google Sheet

This n8n template is a powerful talent sourcing engine that finds, analyzes, and scores GitHub contributors using a custom AI formula. This workflow is ideal for technical recruiters, hiring managers, and team leads who want to build a pipeline of qualified candidates based on specific technical skills and location.

Self-Hosted Only
This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only.

How it works
- The workflow runs on a Schedule Trigger (e.g., hourly) to constantly find new candidates.
- A BrowserAct node ("Run a workflow task") initiates a scraping job on GitHub based on your criteria (e.g., "Python" developers in "Berlin").
- A second BrowserAct node ("Get details") waits for the scraping to complete. If the job fails, a Slack alert is sent.
- A Code node processes the raw scraped data, splitting the list of developers into individual items.
- An AI Agent, powered by Google Gemini, analyzes each profile. It scores their resume/summary and calculates a final weighted FinalScore based on their followers, repositories, and resume quality (one possible weighting is sketched after this section).
- The structured and scored candidate data is then saved to a Google Sheet, using the "Name" column to prevent duplicates.
- A final Slack message is sent to notify you that the GitHub contributors list has been successfully updated.

Requirements
- **BrowserAct** API account for web scraping
- **BrowserAct** "Source Top GitHub Contributors by Language & Location" Template
- **BrowserAct** n8n Community Node -> (n8n Nodes BrowserAct)
- **Gemini** account for the AI Agent
- **Google Sheets** credentials for saving leads
- **Slack** credentials for sending notifications

Need Help?
- How to Find Your BrowseAct API Key & Workflow ID
- How to Connect n8n to Browseract
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct N8N Community Node

Workflow Guidance and Showcase
- Automate Talent Sourcing: Find GitHub Devs with n8n & Browseract
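Illustrative only: the template computes the FinalScore inside the AI Agent's prompt, but the same weighting idea can be expressed as a plain function. The weights and normalization caps below are assumptions, not the template's exact formula.

```javascript
// Hypothetical weighted score combining followers, repositories, and AI-rated resume quality.
function finalScore({ followers, repositories, resumeScore }) {
  // Normalize raw counts into a 0-1 range (the caps are arbitrary illustrative values)
  const followerScore = Math.min(followers / 1000, 1);
  const repoScore = Math.min(repositories / 100, 1);
  const resume = Math.min(Math.max(resumeScore, 0), 1); // AI-assigned quality, assumed already 0-1

  // Weighted sum: resume quality counts most, then followers, then repositories
  return Math.round((0.5 * resume + 0.3 * followerScore + 0.2 * repoScore) * 100);
}

console.log(finalScore({ followers: 420, repositories: 35, resumeScore: 0.8 })); // -> 60
```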
by Shun Nakayama
Auto-Audit SEO Traffic Drops with AI & GSC

Automatically monitor your Google Search Console data to catch SEO performance drops before they become critical. This workflow identifies pages losing rankings and clicks, scrapes the live content, and uses AI to analyze the gap between "Search Queries" (User Intent) and "Page Content" (Reality). It then delivers actionable fixes, including specific Title rewrites and missing H2 headings, directly to Slack. Ideal for SEO managers, content marketers, and site owners who want a pro-level SEO consultant running 24/7 on autopilot.

How it works
1. Compare & Detect: Compares last month's GSC performance with the previous month to identify pages with declining traffic (a sketch of this comparison is shown after this section).
2. Deep Dive: For each struggling page, it fetches the top search queries and scrapes the actual Title and H2 tags from your live website.
3. AI Analysis: An AI Agent analyzes why the page is failing by comparing user intent vs. actual content.
4. Report: Sends a "Consultant-style" report to Slack with quantitative data and specific improvement tasks (e.g., "Add this H2 heading").

Set up steps
1. Configure: Open the 📝 Edit Me: Config node and enter your GSC Property URL (e.g., sc-domain:example.com).
2. Connect Credentials:
   - Google Search Console: to fetch performance data.
   - OpenAI: to analyze content and generate ideas.
   - Slack: to receive the weekly reports.
3. Activate: Turn on the workflow to receive your SEO audit every Monday morning.
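The Compare & Detect idea can be sketched as follows, assuming two arrays of per-page GSC rows ({ page, clicks }) for the current and previous month. The property names and the -20% threshold are assumptions, not the template's exact logic.

```javascript
// Minimal sketch: flag pages whose clicks dropped by 20% or more month over month.
function findDroppingPages(currentRows, previousRows, threshold = -0.2) {
  const prevByPage = new Map(previousRows.map((r) => [r.page, r.clicks]));

  return currentRows
    .map((r) => {
      const prevClicks = prevByPage.get(r.page) || 0;
      const change = prevClicks > 0 ? (r.clicks - prevClicks) / prevClicks : 0;
      return { page: r.page, clicks: r.clicks, prevClicks, change };
    })
    .filter((r) => r.change <= threshold) // keep only the significant drops
    .sort((a, b) => a.change - b.change); // worst drops first
}

console.log(findDroppingPages(
  [{ page: "/pricing", clicks: 40 }],
  [{ page: "/pricing", clicks: 100 }]
)); // -> [{ page: "/pricing", clicks: 40, prevClicks: 100, change: -0.6 }]
```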
by Dean Pike
Convert any website into a searchable vector database for AI chatbots. Submit a URL, choose the scraping scope, and this workflow handles everything: scraping, cleaning, chunking, embedding, and storing in Supabase.

What it does
- Scrapes websites using Apify (3 modes: full site unlimited, full site limited, single URL)
- Cleans content (removes navigation, footer, ads, cookie banners, etc.)
- Chunks text (800 chars, markdown-aware; see the sketch after this section)
- Generates embeddings (Google Gemini, 768 dimensions)
- Stores everything in a Supabase vector database

Requirements
- Apify account + API token
- Supabase database with the pgvector extension
- Google Gemini API key

Setup
1. Create a Supabase documents table with an embedding column (vector 768). Run the provided SQL query in your Supabase project to enable the vector store setup.
2. Add your Apify API token to all three "Run Apify Scraper" nodes.
3. Add Supabase and Gemini credentials.
4. Test with a small site (5-10 pages) or a single page/URL first.

Next steps
Connect your vector store to an AI chatbot for RAG-powered Q&A, or build semantic search features into your apps.

Tip: Start with page limits to test content quality before full-site scraping. Review chunks in Supabase and adjust Apify filters if needed for better vector embeddings.

Sample Outputs
- Apify actor "runs" in the Apify Dashboard from this workflow
- Supabase documents table with scraped website content ingested in chunks with vector embeddings
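The chunking step is handled by a text splitter node in the workflow; the sketch below only illustrates the idea of markdown-aware splitting at roughly 800 characters (split on headings first, then fall back to a character budget). The function and its limits are illustrative.

```javascript
// Minimal sketch of markdown-aware chunking at ~800 characters.
function chunkMarkdown(markdown, maxChars = 800) {
  const sections = markdown.split(/\n(?=#{1,6} )/); // keep each heading at the start of its section
  const chunks = [];

  for (const section of sections) {
    if (section.length <= maxChars) {
      chunks.push(section.trim());
      continue;
    }
    // Oversized section: split on paragraph boundaries, then pack greedily up to maxChars
    let current = "";
    for (const para of section.split(/\n\n+/)) {
      if ((current + "\n\n" + para).length > maxChars && current) {
        chunks.push(current.trim());
        current = para;
      } else {
        current = current ? current + "\n\n" + para : para;
      }
    }
    if (current) chunks.push(current.trim());
  }
  return chunks.filter(Boolean);
}
```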
by Alok Kumar
This n8n workflow shows how to extract website content, index it in Pinecone, and leverage Airtable to power a chat agent for customer Q&A.

Use cases include:
- Building a knowledge base from your website.
- Creating a chatbot that answers customer queries using your own site content.
- Powering RAG workflows for FAQs, support docs, or product knowledge.

How it works
- The workflow starts with a manual trigger or chat message.
- Website content is fetched via HTTP Request.
- The HTML body is extracted and converted into clean Markdown.
- Text is split into chunks (~500 chars with 50 overlap) using the Character Text Splitter.
- **OpenAI embeddings** are generated for each chunk.
- Content and embeddings are stored in Pinecone with namespace separation.
- A Chat Agent (powered by OpenAI or OpenRouter) retrieves answers from Pinecone and Airtable.
- A **memory buffer** allows multi-turn conversations.
- A billing tool (Airtable) provides dynamic billing-related answers when needed.

How to use
- Replace the sample website URL in the HTTP Request node with your own domain or content source.
- Update the Normalize code based on the Markdown output to remove noise (a sketch of such a cleanup is shown after this section).
- Adjust the chunk size in the Text Splitter for your website's Markdown output. In this example, the Character Text Splitter with the separator ###### worked really well. Always check the Markdown output to fine-tune your splitting logic.
- Update the Pinecone namespace to match your project.
- Customize the Chat Agent system prompt to fit your brand voice and response rules.
- Connect your own Airtable schema if you want live billing/payment data access.

Requirements
- **OpenAI account** (for embeddings + chat model).
- **Pinecone account** (vector DB for semantic search).
- **Airtable account** (if using the billing tool).
- (Optional) OpenRouter account (alternative chat model provider).
- n8n self-hosted or cloud.

Need Help? Ask in the n8n Forum!

Happy Automating! 🚀
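A Normalize Code node could look like the sketch below. It assumes the Markdown node outputs to a `data` property and uses illustrative noise patterns; inspect your own site's Markdown output and adjust both.

```javascript
// Minimal sketch of a "Normalize" Code node that strips common noise from converted Markdown
// before it is chunked and embedded.
const markdown = $input.first().json.data; // assumption: the Markdown conversion outputs to `data`

const cleaned = markdown
  // drop markdown links that are pure navigation (illustrative link texts)
  .replace(/\[(Home|Login|Sign up|Privacy Policy|Cookie Settings)\]\([^)]*\)/gi, "")
  // drop image embeds, which add no searchable text
  .replace(/!\[[^\]]*\]\([^)]*\)/g, "")
  // collapse runs of blank lines left behind by the removals
  .replace(/\n{3,}/g, "\n\n")
  .trim();

return [{ json: { data: cleaned } }];
```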
by Simeon Penev
Who's it for
Growth, marketing, sales, and founder teams that want a decision-ready Ideal Customer Profile (ICP), grounded in their own site content.

How it works / What it does
- **On form submission** collects the Website URL and Business Name, and redirects to the Google Drive folder after the final node.
- **Crawl and Scrape the Website Content** crawls and scrapes 20 pages from the website.
- **ICP Creator** builds a Markdown ICP with:
  A) Executive Summary
  B) One-Pager ICP
  C) Tiering & Lead Scoring
  D) Demand Gen & ABM Plays
  E) Evidence Log
  F) Section Confidence
  plus facts vs. inferences, confidence scores, and tables.
- **Markdown to Google Doc** converts the Markdown into Google Docs batchUpdate requests, which are then used in **Update a document** to fill the empty doc (a sketch of this conversion is shown after this section).
- **Create a document** + **Update a document** generate "ICP for <Business Name>" in your Drive folder and apply formatting.

How to set up
1) Add credentials: Firecrawl (Authorization header), OpenAI (Chat), Google Docs OAuth2.
2) Replace placeholders: {{API_KEY}}, {{google_drive_folder_id}}, {{google_drive_folder_url}}.
3) Publish and open the Form URL to test.

Requirements
Firecrawl API key • OpenAI API key • Google account with access to the target Drive folder.

Resources
- Google OAuth2 Credentials Setup - https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/
- OpenAI API key - https://docs.n8n.io/integrations/builtin/credentials/openai/
- Firecrawl API key - https://take.ms/lGcUp
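For orientation, a Markdown-to-Docs conversion can be sketched as below. It only handles "#"/"##" headings and plain paragraphs, so treat it as illustrative of the batchUpdate request shape rather than the template's full converter.

```javascript
// Minimal sketch: turn Markdown lines into Google Docs batchUpdate requests
// (insertText + updateParagraphStyle for headings).
function markdownToDocsRequests(markdown) {
  const requests = [];
  let index = 1; // content in a Google Docs body starts at index 1

  for (const line of markdown.split("\n")) {
    const heading = line.match(/^(#{1,2})\s+(.*)$/);
    const text = (heading ? heading[2] : line) + "\n";

    requests.push({ insertText: { location: { index }, text } });
    if (heading) {
      requests.push({
        updateParagraphStyle: {
          range: { startIndex: index, endIndex: index + text.length },
          paragraphStyle: { namedStyleType: heading[1].length === 1 ? "HEADING_1" : "HEADING_2" },
          fields: "namedStyleType",
        },
      });
    }
    index += text.length;
  }
  return { requests }; // body for POST https://docs.googleapis.com/v1/documents/{docId}:batchUpdate
}
```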
by vinci-king-01
Breaking News Aggregator with SendGrid and PostgreSQL

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically scrapes multiple government and regulatory websites, extracts the latest policy or compliance-related news, stores the data in PostgreSQL, and instantly emails daily summaries to your team through SendGrid. It is ideal for compliance professionals and industry analysts who need near real-time awareness of regulatory changes impacting their sector.

Pre-conditions/Requirements

Prerequisites
- n8n instance (self-hosted or n8n.cloud)
- ScrapeGraphAI community node installed
- Operational SendGrid account
- PostgreSQL database accessible from n8n
- Basic knowledge of SQL for table creation

Required Credentials
- **ScrapeGraphAI API Key** – enables web scraping and parsing
- **SendGrid API Key** – sends email notifications
- **PostgreSQL Credentials** – host, port, database, user, and password

Specific Setup Requirements

| Resource | Requirement | Example Value |
|----------|-------------|---------------|
| PostgreSQL | Table with columns: id, title, url, source, published_at | news_updates |
| Allowed Hosts | Outbound HTTPS access from n8n to target sites & SendGrid endpoint | https://*.gov, https://api.sendgrid.com |
| Keywords List | Comma-separated compliance terms to filter results | GDPR, AML, cybersecurity |

How it works

Key Steps:
- **Schedule Trigger**: Runs once daily (or at any chosen interval).
- **ScrapeGraphAI**: Crawls predefined regulatory URLs and returns structured article data.
- **Code (JS)**: Filters results by keywords and formats them.
- **SplitInBatches**: Processes articles in manageable chunks to avoid timeouts.
- **If Node**: Checks whether each article already exists in the database.
- **PostgreSQL**: Inserts only new articles into the news_updates table.
- **Set Node**: Generates an email-friendly HTML summary.
- **SendGrid**: Dispatches the compiled summary to compliance stakeholders.

Set up steps

Setup Time: 15-20 minutes

1. Install the ScrapeGraphAI node: From n8n, go to "Settings → Community Nodes → Install", search for "ScrapeGraphAI", and install.
2. Create the PostgreSQL table:
```sql
CREATE TABLE news_updates (
  id SERIAL PRIMARY KEY,
  title TEXT,
  url TEXT UNIQUE,
  source TEXT,
  published_at TIMESTAMP
);
```
3. Add credentials: Navigate to "Credentials" and add ScrapeGraphAI, SendGrid, and PostgreSQL credentials.
4. Import the workflow: Copy the workflow JSON and paste it via "Import from Clipboard".
5. Configure environment variables (optional): REG_NEWS_KEYWORDS, SEND_TO_EMAILS, DB_TABLE_NAME.
6. Set the schedule: Open the Schedule Trigger node and define your preferred cron expression.
7. Activate the workflow: Toggle "Active", then click "Execute Workflow" once to validate all connections.

Node Descriptions

Core Workflow Nodes:
- **Schedule Trigger** – Initiates the workflow at the defined interval.
- **ScrapeGraphAI** – Scrapes and parses news listings into JSON.
- **Code** – Filters articles by keywords and normalizes timestamps.
- **SplitInBatches** – Prevents database overload by batching inserts.
- **If** – Determines whether an article is already stored.
- **PostgreSQL** – Executes parameterized INSERT statements.
- **Set** – Builds the HTML email body.
- **SendGrid** – Sends the daily digest email.

Data Flow:
Schedule Trigger → ScrapeGraphAI → Code → SplitInBatches → If → PostgreSQL → Set → SendGrid

Customization Examples

Change Keyword Filtering
```javascript
// Code node snippet (Run Once for Each Item mode)
const keywords = ['GDPR', 'AML', 'SOX']; // Add or remove terms
const item = $input.item.json;
item.filtered = keywords.some(k => item.title.includes(k));
return { json: item };
```

Switch to Weekly Digest
```json
{
  "trigger": {
    "cronExpression": "0 9 * * 1"
  }
}
```
(The cron expression 0 9 * * 1 runs every Monday at 09:00.)

Data Output Format
The workflow outputs structured JSON data:
```json
{
  "title": "Data Privacy Act Amendment Passed",
  "url": "https://regulator.gov/news/1234",
  "source": "regulator.gov",
  "published_at": "2024-06-12T14:30:00Z"
}
```

Troubleshooting

Common Issues
- ScrapeGraphAI node not found – install the community node and restart n8n.
- Duplicate key error in PostgreSQL – ensure the url column is marked UNIQUE to prevent duplicates.
- Emails not sending – verify the SendGrid API key and check the account's daily sending limit.

Performance Tips
- Limit the initial scrape to fewer than 20 URLs to reduce run time.
- Increase the SplitInBatches size only if your database can handle larger inserts.

Pro Tips:
- Use environment variables to manage sensitive credentials securely.
- Add an Error Trigger node to catch and log failures for auditing purposes.
- Combine with Slack or Microsoft Teams nodes to push instant alerts alongside email digests.