by Adam Janes
This workflow demonstrates a simple way to run evals on a set of test cases stored in a Google Sheet. The example comes from an information extraction dataset, where we tested 6 different LLMs on 18 different test cases. This workflow extends the functionality of my simple eval for benchmarking legal tasks here. Rather than running executions sequentially (waiting for each one to respond before making another request), we use parallel processing to fire 2 requests every second. You can see our sample data in this spreadsheet here to get started. Once you have this working for our dataset, you can plug in your own test cases matching different LLMs to see how it works with your own data.

**How it works**
1. Pull our test cases from Google Sheets.
2. For each case, fire off an HTTP request to a webhook.
3. That webhook grabs the relevant source file from Google Drive and converts it to text.
4. The text gets sent to an LLM via OpenRouter (so we can easily swap out models).
5. Results come back and are logged in Google Sheets.

**Setup steps**
1. Add your credentials for Google Sheets, Google Drive, and OpenRouter.
2. Make a copy of the original data spreadsheet so that you can edit it yourself. You will need to plug your version into the Update Results node to see the spreadsheet update on each run of the loop.
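For illustration, here is a minimal sketch of the parallel dispatch pattern this template uses: firing batches of 2 webhook requests per second instead of awaiting each response. The webhook URL and test-case shape are hypothetical placeholders, not the template's actual values.

```javascript
// Minimal sketch: dispatch eval requests in parallel at ~2 requests/second.
// WEBHOOK_URL and the test-case shape are hypothetical placeholders.
const WEBHOOK_URL = 'https://your-n8n-instance.com/webhook/run-eval';

async function runEvals(testCases) {
  const pending = [];
  for (let i = 0; i < testCases.length; i += 2) {
    const batch = testCases.slice(i, i + 2);
    // Fire both requests without awaiting the responses yet.
    pending.push(...batch.map((tc) =>
      fetch(WEBHOOK_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(tc),
      })
    ));
    // Wait one second before firing the next pair.
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
  return Promise.all(pending); // collect all responses at the end
}
```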
by n8n Team
This template quickly shows how to use RAG in n8n.

**Who is this for?**
This template is for everyone who wants to start giving knowledge to their Agents through RAG.

**Requirements**
Have a PDF with custom knowledge that you want to provide to your agent.

**Setup**
No setup required. Just hit Execute Workflow, upload your knowledge document, and then start chatting.

**How to customize this to your needs**
- Add custom instructions to your Agent by changing the prompts in it.
- Add a different way to load knowledge into your vector store, e.g. by looking at some Google Drive files or loading knowledge from a table.
- Exchange the Simple Vector Store nodes with your own vector store tools ready for production.
- Add a more sophisticated way to rank files found in the vector store (a sketch follows below).

For more information read our docs on RAG in n8n.
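As an illustration of the last customization point, here is a minimal re-ranking sketch. It assumes retrieved documents carry plain number-array embeddings; that shape is an assumption for illustration, not the Simple Vector Store's actual output format.

```javascript
// Minimal sketch: re-rank retrieved documents by cosine similarity
// to the query embedding. The { embedding, text } item shape is assumed.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function rerank(queryEmbedding, docs) {
  return docs
    .map((doc) => ({ ...doc, score: cosine(queryEmbedding, doc.embedding) }))
    .sort((x, y) => y.score - x.score); // highest similarity first
}
```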
by ikbendion
Reddit Poster to Discord

This workflow checks Reddit every 15 minutes for new posts and sends selected posts to a Discord channel via webhook.

**Flow Overview**
1. Schedule Trigger: runs every 15 minutes.
2. Fetch Latest Posts: retrieves up to 3 new posts from any subreddit.
3. Filter Posts: skips moderator or announcement posts based on author ID.
4. Fetch Full Post Data: gets full details for the remaining post.
5. Extract Image URL: parses the post to extract a direct image link.
6. Send to Discord: sends the post title, image, and link to a Discord webhook.

**Setup Notes**
- Create a Reddit app and connect credentials in n8n.
- Add your subreddit name to both Reddit nodes.
- Connect a Discord webhook for posting.
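For reference, the final step amounts to a Discord webhook call along these lines. This is a minimal sketch: the webhook URL and the post field names are placeholder assumptions, not the workflow's exact mapping.

```javascript
// Minimal sketch: send a Reddit post to a Discord channel via webhook embed.
// DISCORD_WEBHOOK_URL and the post field names are placeholder assumptions.
const DISCORD_WEBHOOK_URL = 'https://discord.com/api/webhooks/ID/TOKEN';

async function sendToDiscord(post) {
  await fetch(DISCORD_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      embeds: [{
        title: post.title,             // post title
        url: post.permalink,           // link back to the Reddit post
        image: { url: post.imageUrl }, // direct image link extracted earlier
      }],
    }),
  });
}
```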
by Emmanuel Bernard
This workflow illustrates how to use Perplexity AI in your n8n workflow. Perplexity is an AI-powered answer engine that provides accurate, trusted, and real-time answers to any question.

**Credentials Setup**
1. Go to the Perplexity dashboard, purchase some credits, and create an API key: https://www.perplexity.ai/settings/api
2. In the Perplexity request node, use Generic Credentials with Header Auth. For the name, use the value "Authorization"; for the value, use "Bearer pplx-e4...59ea" (your Perplexity API key).

**AI Model**
Sonar Pro is currently Perplexity's top model. If you want to use a different one, check this page: https://docs.perplexity.ai/guides/model-cards
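If you want to test the credentials outside n8n first, the underlying HTTP call looks roughly like the sketch below (Perplexity exposes an OpenAI-compatible chat completions endpoint; double-check the request shape against Perplexity's docs before relying on it).

```javascript
// Minimal sketch: query Perplexity's chat completions API with Sonar Pro.
// Replace the placeholder key with your own; verify details in Perplexity's docs.
const API_KEY = 'pplx-...';

async function ask(question) {
  const res = await fetch('https://api.perplexity.ai/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${API_KEY}`, // same header the HTTP Request node sends
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'sonar-pro',
      messages: [{ role: 'user', content: question }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // the answer text
}
```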
by bangank36
This workflow backs up a Squarespace website's header and footer injections into GitHub.

**How It Works**
The Squarespace injections are fetched from the site URL you provide.

**Setup Instructions**
First, edit the HTTP Request node's URL to point at your Squarespace site. Next, to configure GitHub, update the Globals node with the following values:
- repo.owner – your GitHub username
- repo.name – the name of the GitHub repository storing the backups
- repo.path – the folder path within the repository where the backups are stored

For example, if your GitHub username is john-doe, your repository is named n8n-backups, and injections are stored in a squarespace-backup/ folder, you would set:
- repo.owner → john-doe
- repo.name → n8n-backups
- repo.path → squarespace-backup/

Each site's injections will be added into a separate folder.

**Required Credentials**
GitHub API – access to your repository

**Who Is This For?**
This template is made for Squarespace users who want to back up their header and footer injections on a schedule or on demand.

Check out my other templates: 👉 My n8n Templates
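Using the example above, the Globals node values amount to a small settings object. This is a sketch of the shape described, not the node's literal JSON:

```javascript
// Sketch of the Globals node values from the example above.
const globals = {
  repo: {
    owner: 'john-doe',             // your GitHub username
    name: 'n8n-backups',           // repository that stores the backups
    path: 'squarespace-backup/',   // folder inside the repository
  },
};
```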
by ankitkansaldev
📰 Comprehensive Reuters News Intelligence System With Bright Data & Telegram Alerts

A powerful n8n automation workflow that scrapes the latest Reuters news articles using Bright Data's web scraping capabilities and delivers intelligent news summaries directly to your Telegram chat.

## 📋 Overview
This workflow provides an automated news intelligence solution that monitors Reuters for breaking news, analyzes content using Claude AI, and delivers personalized news alerts. Perfect for journalists, researchers, traders, and anyone who needs real-time access to Reuters content with AI-powered insights.

## ✨ Key Features
- 🎯 **Form-Based Input**: Easy web form to specify keywords and news type preferences
- 🤖 **AI-Powered Processing**: Uses Claude 4 Sonnet for intelligent content analysis
- 🌐 **Professional Scraping**: Leverages Bright Data's Reuters dataset for reliable data extraction
- 📱 **Telegram Integration**: Instant notifications delivered to your preferred chat
- ⏰ **Smart Waiting**: Built-in delays to ensure data processing completion
- 🔄 **Status Monitoring**: Automatic scraping status checks with retry logic
- 📊 **Data Formatting**: Clean, structured output with essential article fields
- 🚀 **Scalable Design**: Handles multiple articles with batch processing

## 🎯 What This Workflow Does

### Input
- **Keywords**: Search terms for Reuters articles (e.g., "Election", "Gas shocks", "Technology")
- **News Type**: Sorting preference (newest, oldest, relevance)
- **Form Submission**: Web-based interface for easy interaction

### Processing
1. **Form Trigger**: Captures user input via web form interface
2. **AI Agent Orchestration**: Claude processes requirements and coordinates actions
3. **Bright Data Request**: Initiates Reuters scraping with specified keywords
4. **Status Monitoring**: Checks scraping progress with smart retry logic
5. **Data Retrieval**: Fetches completed article data when ready
6. **Content Processing**: Extracts and formats essential article information
7. **Telegram Delivery**: Sends structured news updates to the specified chat

### Output Data Points
| Field | Description | Example |
|-------|-------------|---------|
| article_title | The main headline of the article | "Global Energy Markets Face Uncertainty" |
| headline | Reuters display headline | "Oil Prices Surge Amid Supply Concerns" |
| description | Article summary/meta description | "Energy markets react to geopolitical tensions..." |
| content | Full article body text | "LONDON (Reuters) - Oil prices jumped 3%..." |
| article_url | Direct link to Reuters article | "https://reuters.com/business/energy/..." |
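The Data Formatting node (configured in Step 6 below) reduces Bright Data's response to exactly these fields. Here is a minimal sketch of that extraction in n8n Code-node style; the incoming item structure is an assumption based on the table above, so adjust it to the actual snapshot payload.

```javascript
// Minimal sketch of the Data Formatting step: keep only the fields listed above.
// The incoming item structure is assumed; adjust to the actual Bright Data payload.
const articles = $input.all();

return articles.map((item) => ({
  json: {
    article_title: item.json.article_title ?? '',
    headline: item.json.headline ?? '',
    description: item.json.description ?? '',
    content: (item.json.content ?? '').slice(0, 500), // preview of the body text
    article_url: item.json.article_url ?? '',
  },
}));
```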
## 🚀 Setup Instructions

### Prerequisites
- n8n instance (self-hosted or cloud)
- Bright Data account with Reuters dataset access
- Telegram bot and channel setup
- Claude API access (Anthropic)
- 15-20 minutes for complete setup

### Step 1: Import the Workflow
1. Copy the JSON workflow code from the provided file
2. In n8n: Workflows → + Add workflow → Import from JSON
3. Paste the JSON content and click Import
4. Save the workflow with a descriptive name

### Step 2: Configure Bright Data Integration
1. Set up Bright Data credentials:
   - In n8n: Credentials → + Add credential → HTTP Header Auth
   - Name: "Bright Data API"
   - Add header: Authorization: Bearer YOUR_BRIGHT_DATA_API_KEY
   - Test the connection
2. Configure the Reuters dataset:
   - Ensure access to dataset ID: gd_lyptx9h74wtlvpnfu
   - Verify Reuters scraping permissions in the Bright Data dashboard
   - Check monthly quota and usage limits

### Step 3: Configure Anthropic Claude Integration
1. Set up Anthropic credentials:
   - In n8n: Credentials → + Add credential → Anthropic API
   - Enter your Anthropic API key
   - Test the connection
2. Update model settings:
   - Open the "Anthropic Chat Model" node
   - Verify the model is set to: claude-sonnet-4-20250514
   - Adjust temperature and other parameters if needed

### Step 4: Configure Telegram Notifications
1. Create a Telegram bot:
   - Message @BotFather on Telegram
   - Use the /newbot command and follow the instructions
   - Save the bot token provided
2. Get your chat ID:
   - Add your bot to the desired channel/group
   - Send a test message
   - Visit: https://api.telegram.org/bot{BOT_TOKEN}/getUpdates
   - Find your chat ID in the response
3. Set up Telegram credentials:
   - In n8n: Credentials → + Add credential → Telegram API
   - Enter the bot token from BotFather
   - Test the connection
4. Update the Telegram node:
   - Open the "Telegram" node
   - Replace DEMO_CHAT_ID with your actual chat ID
   - Customize the message format if needed

### Step 5: Configure the Web Form
1. Set up the form trigger:
   - Open the "On form submission" node
   - Note the webhook URL provided
   - Customize the form title and fields if needed
2. Test form functionality:
   - Access the webhook URL in your browser
   - Fill out a test form with sample keywords
   - Verify that the form submission triggers the workflow

### Step 6: Update Node Configurations
1. Update the HTTP Request nodes:
   - Replace BRIGHT_DATA_API_KEY with the actual credentials reference
   - Verify the dataset ID matches your Bright Data setup
   - Check request parameters and headers
2. Configure Data Formatting:
   - Open the "Data Formatting" node
   - Review the JavaScript code for field extraction
   - Modify output fields if additional data is needed

### Step 7: Test & Activate
1. Run an initial test:
   - Submit the form with test keywords (e.g., "Technology")
   - Monitor the workflow execution in n8n
   - Check for Telegram message delivery
2. Verify the data flow:
   - Confirm Bright Data snapshot creation
   - Check status monitoring functionality
   - Validate final data formatting
3. Activate the workflow:
   - Toggle the workflow to "Active" status
   - Monitor for any execution errors
   - Set up error notifications if needed

## 📖 Usage Guide

### Submitting News Requests
1. Access the form:
   - Navigate to your webhook URL
   - Form title: "Reuters News Intelligence"
2. Fill in the required fields:
   - Keywords: enter search terms (e.g., "Climate Change", "Tech Earnings")
   - News Type: select a sorting preference:
     - newest: most recent articles first
     - oldest: historical articles first
     - relevance: best matching articles
3. Submit and wait:
   - Click submit to trigger the workflow
   - Expect 1-3 minutes for processing
   - Check Telegram for article delivery

### Understanding the Process
The workflow follows this sequence:
1. Form submission triggers the Claude AI agent
2. Claude coordinates all scraping and processing steps
3. Bright Data scrapes Reuters with your keywords
4. The system waits for scraping completion (60 seconds)
5. A status check confirms data readiness
6. Article data is retrieved and formatted
7. A Telegram message delivers the final results

### Reading Telegram Results
Each article includes:
- **Clickable URL** to the full Reuters article
- **Headline** for quick scanning
- **Description** with the article summary
- **Content preview** with key details

## 🔧 Customization Options

### Modifying Search Parameters
Edit the "HTTP Request" node to adjust:
```json
{
  "keyword": "Your search terms",
  "sort": "newest|oldest|relevance",
  "limit_per_input": "2-10 articles"
}
```

### Customizing Telegram Messages
Update the "Telegram" node message format:
```
🗞️ {{ $json.heading }}
📖 {{ $json.description }}
🔗 Read Full Article
📅 Retrieved: {{ $now.format('YYYY-MM-DD HH:mm') }}
```

### Adding Email Notifications
1. Add an "Email" node after "Data Formatting"
2. Configure SMTP credentials
3. Create an HTML email template with the article data
4. Connect it to the same input as the Telegram node

### Enhancing AI Processing
Modify the MCP Agent prompt to:
- Request specific article sections
- Add sentiment analysis
- Include market impact assessment
- Generate executive summaries
- Extract key quotes and statistics

### Adding Data Storage
Include database storage by:
- Adding a "Postgres" or "MySQL" node
- Creating an articles table with a schema
- Storing full article data for analysis
- Building a historical news database

## 🚨 Troubleshooting

### Common Issues & Solutions

**1. "Bright Data snapshot failed"**
- Cause: invalid API key or dataset access
- Solution: verify credentials and dataset permissions in the Bright Data dashboard

**2. "No articles found"**
- Cause: keywords too specific or no matching content
- Solution: try broader search terms; check Reuters availability

**3. "Telegram message not sent"**
- Cause: invalid bot token or chat ID
- Solution: re-verify the bot setup with @BotFather; confirm the chat ID

**4. "Workflow timeout"**
- Cause: Bright Data scraping taking too long
- Solution: increase the timeout in the "sleep tool" or add retry logic (see the backoff sketch at the end of this section)

**5. "Data formatting errors"**
- Cause: unexpected response structure from Bright Data
- Solution: check the "Data Formatting" node logs; adjust the parsing logic

**6. "Claude API errors"**
- Cause: API key issues or rate limiting
- Solution: verify Anthropic credentials; check usage limits

### Advanced Troubleshooting
- **Monitor execution logs** in n8n for detailed error messages
- **Test individual nodes** by running them separately
- **Verify JSON structures** to ensure data flows correctly between nodes
- **Check rate limits** for both the Bright Data and Claude APIs
- **Add error handling**: implement try-catch logic for robust operation

## 📊 Use Cases & Examples

### 1. Financial News Monitoring
- Goal: track market-moving Reuters financial news
- Keywords: "earnings", "fed rates", "market outlook"
- Instant alerts for breaking financial news
- Support trading and investment decisions

### 2. Competitive Intelligence
- Goal: monitor industry-specific news for business insights
- Keywords: company names, industry terms
- Track competitor mentions and market developments
- Generate competitive analysis reports

### 3. Crisis Communications
- Goal: stay informed during breaking news events
- Keywords: "breaking", location names, event types
- Rapid response to developing situations
- Crisis management team notifications
### 4. Research & Academia
- Goal: gather news data for academic research
- Keywords: research topics, geographic regions
- Build datasets for media analysis
- Track news coverage patterns over time

## ⚙ Advanced Configuration

### Scaling for High Volume
To handle larger news monitoring needs:
1. Increase batch processing:
   - Modify the limit_per_input parameter
   - Add parallel processing branches
   - Implement queue management
2. Add rate limiting:
   - Insert delays between requests
   - Monitor API usage quotas
   - Implement exponential backoff (see the sketch at the end of this section)
3. Database integration:
   - Store articles in PostgreSQL/MySQL
   - Add deduplication logic
   - Create search and filter capabilities

### Multi-Channel Distribution
Expand beyond Telegram:
1. Slack integration:
   - Add a Slack webhook node
   - Format messages for team channels
   - Include interactive buttons
2. Email newsletters:
   - Compile daily/weekly summaries
   - HTML formatting with images
   - Subscriber management
3. API endpoints:
   - Create webhook responses
   - Build a news API for other systems
   - Real-time data streaming

### AI Enhancement Options
Leverage Claude's capabilities further:
1. Sentiment analysis:
   - Add sentiment scoring to articles
   - Track market sentiment trends
   - Generate mood indicators
2. Summarization:
   - Create executive summaries
   - Extract key points
   - Generate abstracts
3. Classification:
   - Categorize articles by topic
   - Tag with relevant industries
   - Priority scoring system

## 📈 Performance & Limits

### Expected Performance
- **Single request**: 60-120 seconds average processing time
- **Articles per request**: 2-10 (configurable)
- **Data accuracy**: 95%+ for standard Reuters articles
- **Success rate**: 90%+ for accessible content
- **Daily capacity**: limited by Bright Data quotas

### Resource Usage
- **Memory**: ~200MB per execution
- **API calls**: 1 Bright Data + 1 Claude + 1 Telegram per execution
- **Bandwidth**: ~5-10MB per article scraped
- **Execution time**: 1-3 minutes per request

### Scaling Considerations
- **Rate limiting**: respect API quotas and limits
- **Error handling**: implement comprehensive retry logic
- **Data validation**: verify article quality and completeness
- **Cost monitoring**: track API usage across services
- **Performance optimization**: cache common requests when possible

## 🤝 Support & Community

### Getting Help
- **n8n Community**: community.n8n.io
- **Bright Data Support**: contact through the dashboard
- **Anthropic Documentation**: docs.anthropic.com
- **Telegram Bot API**: core.telegram.org/bots

### Contributing
- Share workflow improvements with the community
- Report issues and suggest enhancements
- Create variations for specific news sources
- Document best practices and optimizations

## 📋 Quick Setup Checklist

### Before You Start
- ☐ n8n instance running (self-hosted or cloud)
- ☐ Bright Data account with Reuters dataset access
- ☐ Anthropic API key for Claude access
- ☐ Telegram bot created via @BotFather
- ☐ 20 minutes for complete setup

### Setup Steps
- ☐ Import Workflow - copy the JSON and import it into n8n
- ☐ Configure Bright Data - set up API credentials and test
- ☐ Configure Claude - add Anthropic API credentials
- ☐ Setup Telegram - create the bot and get the chat ID
- ☐ Update Credentials - replace all demo values with real ones
- ☐ Test Form - submit a test request and verify the flow
- ☐ Check Telegram - confirm message delivery
- ☐ Activate Workflow - turn on for production use

### Ready to Use! 🎉
Your workflow form URL: https://your-n8n-instance.com/webhook/your-webhook-id

🎯 **Happy News Monitoring!**

This workflow provides a solid foundation for automated Reuters news intelligence. Customize it to fit your specific monitoring needs and use cases.
The combination of Bright Data's reliable scraping, Claude's AI analysis, and Telegram's instant delivery creates a powerful news monitoring solution.
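For the retry logic recommended under Troubleshooting and the exponential backoff suggested under Scaling for High Volume, here is a minimal status-polling sketch. The progress URL and response fields are assumptions for illustration; match them to the snapshot API calls the workflow actually makes.

```javascript
// Minimal sketch: poll a Bright Data snapshot until it is ready, backing off
// exponentially between checks. URL and response fields are assumptions.
async function waitForSnapshot(snapshotId, apiKey, maxAttempts = 6) {
  let delayMs = 60_000; // start with the workflow's 60-second wait
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await fetch(
      `https://api.brightdata.com/datasets/v3/progress/${snapshotId}`,
      { headers: { Authorization: `Bearer ${apiKey}` } }
    );
    const { status } = await res.json();
    if (status === 'ready') return true;  // article data can be retrieved
    if (status === 'failed') throw new Error('Snapshot failed');
    await new Promise((r) => setTimeout(r, delayMs));
    delayMs *= 2; // exponential backoff between status checks
  }
  throw new Error('Timed out waiting for snapshot');
}
```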
by simonscrapes
**Use Case**
Generate accurate search volume data for SEO keyword research:
- You have a list of potential keywords to target for your website's SEO but don't know their actual search volume
- You need historical data to identify seasonal trends in keyword popularity
- You want to assess keyword difficulty to prioritize your content strategy
- You need data-driven insights for planning your SEO campaigns

**What this Workflow Does**
The workflow connects to Google's Keyword Planner API to retrieve keyword metrics for your SEO research:
- Fetches monthly search volume for each keyword
- Provides historical trend data for the past 12 months
- Calculates keyword difficulty scores
- Delivers competition metrics from Google Ads

**Setup**
1. Fill the Set 20 Keywords node with up to 20 keywords of your choosing, as an array, e.g. ["keyword 1", "keyword 2", ...]
2. Create a Google Ads API account and add the credentials to the Get Search Data node (a request sketch follows below)
3. Replace the Connect to your own database node with your own database for the output

**How to Adjust it to Your Needs**
- Change the Set 20 Keywords node input to a source of your choosing, e.g. an Airtable database with 20 keywords
- Connect the output to a destination of your choosing

More templates and n8n workflows >>> @simonscrapes
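For orientation, the underlying Google Ads API call looks roughly like the sketch below. The version path, field names, and header requirements are assumptions based on the public Google Ads REST documentation (KeywordPlanIdeaService), so verify them against your API version before relying on this.

```javascript
// Rough sketch: fetch historical keyword metrics from the Google Ads API.
// Endpoint version, fields, and headers are assumptions -- verify in the docs.
async function getKeywordMetrics(keywords, customerId, accessToken, devToken) {
  const res = await fetch(
    `https://googleads.googleapis.com/v16/customers/${customerId}` +
      `/keywordPlanIdeas:generateKeywordHistoricalMetrics`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${accessToken}`,
        'developer-token': devToken, // Google Ads developer token
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        keywords,                           // e.g. ["keyword 1", "keyword 2"]
        keywordPlanNetwork: 'GOOGLE_SEARCH',
      }),
    }
  );
  return res.json(); // avg. monthly searches, competition, and related metrics
}
```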
by Airtop
**Use Case**
Turn any web page into a compelling LinkedIn post — complete with an AI-generated image. This automation is ideal for sharing content like blog posts, case studies, or product updates in a polished and engaging format.

**What This Automation Does**
Given a page URL and optional user instructions, this automation:
- Scrapes the content of the webpage
- Uses AI to write a clear, educational, and LinkedIn-optimized post
- Sends both to Slack for review and approval
- Handles feedback and revisions via Slack interactions

Input:
- **Page URL** — The link to the webpage (required)
- **Instructions** — Optional notes on tone, emphasis, or format

Output:
- LinkedIn post text
- Slack message with review/approval options

**How It Works**
1. Form Submission: the user inputs a web page and optional instructions.
2. Web Scraping: Airtop extracts the page content.
3. Post Generation: an AI agent writes a post based on the page and instructions.
4. Slack Review Flow:
   - The post and image are sent to Slack for feedback
   - The user can approve, request revisions, or decline
   - Revisions trigger reprocessing steps automatically
5. Final Post Delivery: the approved post is sent back to Slack, ready to publish.

**Setup Requirements**
- Generate an Airtop API key (completely free)
- Configure your OpenAI credentials for post and image prompt generation
- Slack OAuth credentials and a Slack channel

**Next Steps**
- **Post Directly**: add LinkedIn publishing to automate the full content workflow.
- **Template Variations**: offer post style presets (e.g., technical, story-driven, short-form).
- **CRM Sync**: save approved posts and stats in Airtable or Notion for team use.

Read more about generating social content using AI
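To illustrate the Slack review step, here is a minimal sketch of posting the draft with approve/revise buttons using Slack Block Kit. The webhook URL and action IDs are illustrative assumptions, not the template's exact Slack configuration (interactive buttons also require a Slack app with interactivity enabled).

```javascript
// Minimal sketch: send a draft post to Slack with review buttons (Block Kit).
// The webhook URL and action IDs are illustrative assumptions.
const SLACK_WEBHOOK_URL = 'https://hooks.slack.com/services/T000/B000/XXXX';

async function sendForReview(postText) {
  await fetch(SLACK_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      blocks: [
        // The draft post itself
        { type: 'section', text: { type: 'mrkdwn', text: postText } },
        // Review actions for the approver
        {
          type: 'actions',
          elements: [
            { type: 'button', text: { type: 'plain_text', text: 'Approve' }, action_id: 'approve' },
            { type: 'button', text: { type: 'plain_text', text: 'Request revisions' }, action_id: 'revise' },
          ],
        },
      ],
    }),
  });
}
```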
by Mario
**Purpose**
This workflow allows you to transfer credentials from one n8n instance to another.

**How it works**
- A multi-form setup guides you through the entire process
- You first choose one of your predefined (in the Settings node) remote instances
- All credentials of the current instance are then retrieved using the Execute Command node
- On the next form page you can select one of the credentials by name and initiate the transfer
- Finally, the credential is created on the remote instance using the n8n API. A final form ending indicates whether the action succeeded or not.

**Setup**
- Select your credentials in the nodes which require them
- Configure your remote instance(s) in the Settings node. Every instance is defined as an object with the keys name, apiKey, and baseUrl; those instances are then wrapped inside an array. You can find an example described within a note on the workflow canvas (and sketched below).

**How to use**
- Grab the (production) URL of the Form from the first node
- Open the URL and follow the instructions given in the multi-form

**Disclaimer**
Please note that this workflow can only run on self-hosted n8n instances, since it requires the Execute Command node.
Security: beware that all credentials are decrypted and processed within the workflow, and the API keys to other n8n instances are stored within the workflow. This solution is primarily meant for transferring data between testing environments. For production use, consider the n8n Enterprise edition, which provides a reliable way to manage credentials across different environments.
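For reference, the Settings node structure described above might look like this sketch. All names, keys, and URLs here are placeholders:

```javascript
// Sketch of the Settings node value: an array of remote instance objects.
// All values below are placeholders.
const remoteInstances = [
  {
    name: 'staging',                               // label shown in the form
    apiKey: 'n8n_api_...',                         // API key of the remote instance
    baseUrl: 'https://staging.example.com/api/v1', // remote n8n API base URL
  },
  {
    name: 'production-test',
    apiKey: 'n8n_api_...',
    baseUrl: 'https://n8n-test.example.com/api/v1',
  },
];
```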
by Jimleuk
This n8n template leverages n8n's multi-form feature to build a two-part job application submission journey which aims to eliminate the need for applicants to re-enter data found on their CVs/resumes.

**How it works**
- The application submission process starts with an n8n form trigger that accepts CV files as PDFs.
- The PDF is validated using the text classifier node to determine whether it is a valid CV; otherwise, the applicant is asked to re-upload.
- A basic LLM node is used to extract relevant information from the CV as data capture (a sketch of the captured fields follows below). A copy of the original job post is included to ensure relevancy.
- The applicant's data is then sent to an ATS for processing. For our demo, we used Airtable because we could attach PDFs to rows.
- Finally, a second form trigger is used for the actual application form. However, it is prefilled to save the applicant's time and allow them to amend any of the generated application fields.

**How to use**
Ensure to change the redirect URL in the form ending node to use the host domain of your n8n instance.

**Requirements**
- OpenAI for LLM
- Airtable to capture applicant data

**Customising the workflow**
The application form is pretty basic for this demonstration but could be extended to ask more in-depth questions. If it fits the job, why not ask applicants to upload portfolio works and have AI describe/caption them?
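As an illustration of the CV data capture step, the LLM extraction might target a structured shape like this. The specific fields are hypothetical, not the template's actual output schema:

```javascript
// Hypothetical shape for the CV extraction step -- the template's actual
// fields may differ; adjust to match your application form.
const extractedApplication = {
  fullName: 'Jane Doe',
  email: 'jane@example.com',
  phone: '+44 ...',
  currentRole: 'Senior Developer',
  yearsOfExperience: 7,
  skills: ['TypeScript', 'PostgreSQL'],
  education: [{ institution: 'Example University', degree: 'BSc Computer Science' }],
};
```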
by Mark Shcherbakov
**Video Guide**
I prepared a detailed video guide that walks through the whole process of building this solution.

**Who is this for?**
This workflow is ideal for developers, data analysts, and business owners who want to enable conversational interactions with their database. It's particularly useful for cases where users need to extract, analyze, or aggregate data without writing SQL queries manually.

**What problem does this workflow solve?**
Accessing and analyzing database data often requires SQL expertise or dedicated reports, which can be time-consuming. This workflow empowers users to interact with a database conversationally through an AI-powered agent. It dynamically generates SQL queries based on user requests, streamlining data retrieval and analysis.

**What this workflow does**
This workflow integrates OpenAI with a Supabase database, enabling users to interact with their data via an AI agent. The agent can:
- Retrieve records from the database
- Extract and analyze JSON data stored in tables
- Provide summaries, aggregations, or specific data points based on user queries

Key capabilities:
- Dynamic SQL querying: the agent uses user prompts to create and execute SQL queries on the database.
- JSON structure understanding: the workflow identifies the JSON schema from sample records, enabling the agent to parse and analyze JSON fields effectively.
- Database schema exploration: it provides the agent with tools to retrieve table structures, column details, and relationships for precise query generation.

**Setup**

Preparation
1. Create accounts:
   - n8n: for workflow automation
   - Supabase: for database hosting and management
   - OpenAI: for building the conversational AI agent
2. Configure the database connection:
   - Set up a PostgreSQL database in Supabase
   - Use the appropriate credentials (username, password, host, and database name) in your workflow

n8n workflow: AI agent with tools
- Code Tool: execute SQL queries based on user input
- Database Schema Tool: retrieve a list of all tables in the database; use a predefined SQL query to fetch table definitions, including column names, types, and references (a sketch follows below)
- Table Definition: retrieve a list of columns with types for one table
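The predefined schema query can be as simple as a read from PostgreSQL's information_schema. Here is a minimal sketch in Code-tool style; the exact query the template ships with may differ:

```javascript
// Minimal sketch of a schema-exploration query against information_schema.
// The template's predefined query may differ.
const tableDefinitionsQuery = `
  SELECT c.table_name,
         c.column_name,
         c.data_type
  FROM information_schema.columns AS c
  WHERE c.table_schema = 'public'
  ORDER BY c.table_name, c.ordinal_position;
`;
// Execute via the Supabase/Postgres connection and hand the rows to the agent
// so it can generate precise queries against the real schema.
```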
by scrapeless official
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Prerequisites**
- An n8n account (free trial available)
- A Scrapeless account and API key
- A Google account to access Google Sheets

**🛠️ Step-by-Step Setup**

1. Create a New Workflow in n8n
Start by creating a new workflow in n8n. Add a Manual Trigger node to begin.

2. Add the Scrapeless Node
- Add the Scrapeless node and choose the Scrape operation
- Paste in your API key
- Set your target website URL
- Execute the node to fetch data and verify the results

3. Clean Up the Data
Add a Code node to clean and format the scraped data, focusing on extracting key fields like Title, Description, and URL (see the sketch below).

4. Set Up Google Sheets
- Create a new spreadsheet in Google Sheets
- Name the sheet (e.g., Business Leads)
- Add columns like Title, Description, and URL

5. Connect Google Sheets in n8n
- Add the Google Sheets node
- Choose the Append or update row operation
- Select the spreadsheet and worksheet
- Manually map each column to the cleaned data fields

6. Run and Test the Workflow
- Click "Execute Workflow" in n8n
- Check your Google Sheet to confirm the data is properly inserted

**Results**
With this automated workflow, you can continuously extract business lead data, clean it, and push it directly into a spreadsheet — perfect for outbound sales, lead lists, or internal analytics.

**How to Use**
- ⚙️ Open the Variables node and plug in your Scrapeless credentials.
- 📄 Confirm the Google Sheets node points to your desired spreadsheet.
- ▶️ Run the workflow manually from the Start node.

**Perfect For**
- Sales teams doing outbound prospecting
- Marketers building lead lists
- Agencies running data aggregation tasks
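Here is a minimal sketch of the cleanup Code node from step 3, in n8n Code-node style. The incoming field names are assumptions, so map them to whatever the Scrapeless response actually contains:

```javascript
// Minimal sketch of the cleanup step: keep only Title, Description, and URL.
// Incoming field names are assumptions; adjust to the Scrapeless response.
return $input.all().map((item) => ({
  json: {
    Title: (item.json.title ?? '').trim(),
    Description: (item.json.description ?? '').trim(),
    URL: item.json.url ?? '',
  },
}));
```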