by Angel Menendez
## Who is this for?
This workflow is perfect for HR teams, recruiters, and hiring platforms that need to automate the extraction of key candidate details (name, email, skills, and education) from resume files submitted in various formats.

## What problem does this solve?
Manually reviewing and extracting structured data from resumes is time-consuming and error-prone. This automation eliminates that bottleneck, standardizing candidate data for seamless integration into CRMs, applicant tracking systems, or Google Sheets.

## What this workflow does
This n8n template listens for uploaded resume files, detects their format (PDF, DOC, TXT, CSV, etc.), and automatically extracts the raw text using n8n's built-in file extraction tools. The extracted text is then parsed by an OpenAI-powered agent that returns structured fields such as:
- Full Name
- Email Address
- Skill Keywords
- Education Details

Optionally, you can push the structured output to Google Sheets (node included, currently disabled).

## Setup
1. Clone this workflow into your n8n instance.
2. Enable the **When chat message received** trigger if using n8n chat.
3. Provide your OpenAI credentials and enable the LangChain Agent node.
4. (Optional) Connect Google Sheets by authenticating with your Google account and filling in your target document and sheet.

Watch the setup and demo video here: 🎥 https://youtu.be/2SUPiNmLWdA

## How to customize
- Modify the OpenAI system message to extract different fields (e.g., phone number, LinkedIn).
- Replace the Google Sheets node with a webhook to push results to your ATS.
- Add filters to limit accepted file types or maximum file size.

> ⚠️ This template is designed to be secure. It uses credentials stored in the n8n credential manager; no hardcoded secrets required.
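For reference, the agent's structured output might look something like this; the exact field names depend on your system message and are illustrative only:

```json
{
  "full_name": "Jane Doe",
  "email": "jane.doe@example.com",
  "skills": ["Python", "SQL", "Project Management"],
  "education": [
    { "degree": "BSc Computer Science", "institution": "Example University", "year": 2020 }
  ]
}
```

Keeping the output in a flat, predictable shape like this makes the optional Google Sheets mapping a one-to-one column match.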
by Agentick AI
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**This n8n template automates candidate outreach, call transcription, and structured feedback capture for HR teams and recruiters. It triggers on a new candidate row added in a Google Sheet, initiates a call using Vapi.ai, processes the transcript using Google Gemini, extracts key information like CTC, experience, and notice period, and then updates the same Google Sheet with parsed insights. This is ideal for recruiters or HR teams conducting high-volume candidate outreach who want to scale initial data collection using automated voice bots and AI transcription analysis.**

## How it works
1. **Trigger:** Listens for new rows added to a Google Sheet (e.g., a new candidate lead).
2. **Call Initiation:** Uses Vapi.ai to make a phone call to the candidate using an assistant bot.
3. **Transcript Retrieval:** After the call, fetches the conversation transcript from the Vapi API.
4. **AI Transcript Analysis:** Google Gemini parses the transcript and extracts structured fields like:
   - Work experience
   - Current & expected CTC
   - Notice period & negotiability
   - Work preferences and location
5. **Data Mapping:** Extracted insights are mapped to structured JSON fields.
6. **Google Sheet Update:** The same row in the source Sheet is updated with the collected information.

## Use Cases
- Pre-screening calls for job applicants
- Collecting missing candidate information asynchronously
- Replacing manual HR data entry with AI-powered automation
- Smart CRM updates from voice interactions

## Requirements
Before you run this workflow, ensure the following:
- ✅ Google account with access to the Google Sheets API
- ✅ Vapi.ai account with:
  - Assistant ID
  - Phone number ID
  - Active API key
- ✅ Google Gemini API (via PaLM) enabled
- ✅ n8n version 1.40.0 or later with relevant credentials configured

## How to use
1. Import the workflow into n8n.
2. Set up your credentials for:
   - Google Sheets Trigger
   - Google Sheets
   - Vapi.ai (add Bearer token)
   - Google Gemini
3. Replace the placeholder values in:
   - Assistant ID
   - Phone number ID
   - Google Sheet ID and tab
4. Start the workflow and add a row to the Google Sheet.
5. Wait for the automated call and let the AI extract and populate the data.

## Customising this workflow
- Replace Google Gemini with OpenAI or Claude if preferred.
- Add sentiment analysis on the transcript using an LLM.
- Modify the Sheet column structure to add additional fields.
- Add a filter node to skip candidates with incomplete phone numbers.
- Use a Webhook trigger instead of Google Sheets to integrate with job portals or ATS.
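For orientation, the call-initiation step amounts to one authenticated POST to Vapi's call endpoint (https://api.vapi.ai/call) with an `Authorization: Bearer YOUR_VAPI_API_KEY` header. A hedged sketch of the request body; the customer number expression assumes your sheet has a `phone` column, so check Vapi's API reference for the exact payload:

```json
{
  "assistantId": "YOUR_ASSISTANT_ID",
  "phoneNumberId": "YOUR_PHONE_NUMBER_ID",
  "customer": { "number": "={{ $json.phone }}" }
}
```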
by Jimleuk
This template is for self-hosted n8n instances only.

This n8n template demonstrates how to build a simple SQLite MCP server to perform local database operations as well as use it for business intelligence. This MCP example is based on an official MCP reference implementation, which can be found here: https://github.com/modelcontextprotocol/servers/tree/main/src/sqlite

## How it works
- An MCP Server trigger is used and connected to 5 tools: 2 Code node tools and 3 Custom Workflow tools.
- The 2 Code node tools use the sqlite3 library and run simple read-only queries, so the Code node tool can be used as-is.
- The 3 Custom Workflow tools handle select, insert and update queries, as these are operations which require a bit more discretion. While it may be easier to let the agent write raw SQL queries, it is a little safer to accept only the parameters instead. The Custom Workflow tool lets us define this restricted schema for tool input, which we then use to construct the SQL statement ourselves.
- All 3 Custom Workflow tools trigger the same "Execute workflow" trigger in this very template, which has a switch to route the operation to the correct handler.
- Finally, we use our Code nodes to handle the select, insert and update operations. The responses are then sent back to the MCP client.

## How to use
This SQLite MCP server allows any compatible MCP client to manage a SQLite database by supporting select, create and update operations. You will need to have a SQLite database available before you can use this server.
1. Connect your MCP client by following the n8n guidelines here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
2. Try the following queries in your MCP client:
   - "Please create a table to store business insights and add the following..."
   - "What business insights do we have on current retail trends?"
   - "Who has contributed the most business insights in the past week?"

## Requirements
- SQLite for the database.
- An MCP client or agent, such as Claude Desktop: https://claude.ai/download

## Customising this workflow
- If the scope of schemas or tables is too open, try restricting it so the MCP server serves a specific purpose for business operations, e.g. confine querying and editing to HR-only tables before providing access to people in that department.
- Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
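To give a flavour of the parameterized approach (an illustrative sketch, not the template's exact code), a read-only query tool in a Code node might look like this. It assumes a self-hosted instance with `NODE_FUNCTION_ALLOW_EXTERNAL=sqlite3` set, a database at `/data/business.db`, and a hypothetical `insights` table:

```javascript
// n8n Code node (Run Once for All Items): illustrative read-only tool.
// Assumptions: sqlite3 allowed via NODE_FUNCTION_ALLOW_EXTERNAL, DB at /data/business.db.
const sqlite3 = require('sqlite3');
const db = new sqlite3.Database('/data/business.db');

// Bound parameters (?) keep the agent from injecting raw SQL fragments,
// which is the "restricted schema" idea described above.
const since = $input.first().json.since ?? '1970-01-01';
const rows = await new Promise((resolve, reject) => {
  db.all('SELECT * FROM insights WHERE created_at >= ? LIMIT 50', [since], (err, result) =>
    err ? reject(err) : resolve(result)
  );
});
db.close();

return rows.map(row => ({ json: row }));
```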
by scrapeless official
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Prerequisites
- An n8n account (free trial available)
- A Scrapeless account and API key
- A Google account to access Google Sheets

## 🛠️ Step-by-Step Setup

### 1. Create a New Workflow in n8n
Start by creating a new workflow in n8n. Add a Manual Trigger node to begin.

### 2. Add the Scrapeless Node
- Add the Scrapeless node and choose the Scrape operation
- Paste in your API key
- Set your target website URL
- Execute the node to fetch data and verify results

### 3. Clean Up the Data
Add a Code node to clean and format the scraped data (a sketch appears at the end of this guide). Focus on extracting key fields like:
- Title
- Description
- URL

### 4. Set Up Google Sheets
- Create a new spreadsheet in Google Sheets
- Name the sheet (e.g., Business Leads)
- Add columns like Title, Description, and URL

### 5. Connect Google Sheets in n8n
- Add the Google Sheets node
- Choose the operation Append or update row
- Select the spreadsheet and worksheet
- Manually map each column to the cleaned data fields

### 6. Run and Test the Workflow
- Click "Execute Workflow" in n8n
- Check your Google Sheet to confirm the data is properly inserted

## Results
With this automated workflow, you can continuously extract business lead data, clean it, and push it directly into a spreadsheet, making it perfect for outbound sales, lead lists, or internal analytics.

## How to Use
- ⚙️ Open the Variables node and plug in your Scrapeless credentials.
- 📄 Confirm the Google Sheets node points to your desired spreadsheet.
- ▶️ Run the workflow manually from the Start node.

## Perfect For
- Sales teams doing outbound prospecting
- Marketers building lead lists
- Agencies running data aggregation tasks
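For step 3, a minimal cleanup Code node might look like this; the incoming field names (title, description, url) are assumptions, so adjust them to match the payload your Scrapeless scrape actually returns:

```javascript
// n8n Code node (Run Once for All Items): normalize scraped items
// into the three columns the Google Sheet expects.
const cleaned = $input.all().map(({ json }) => ({
  json: {
    Title: (json.title ?? '').toString().trim(),
    Description: (json.description ?? '').toString().trim(),
    URL: json.url ?? '',
  },
}));

// Drop rows with no usable title so the sheet stays tidy.
return cleaned.filter(item => item.json.Title !== '');
```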
by Richard Uren
## Task
Read a list of customers from a Google Sheet and create them in Shopify using Shopify's Admin API (GraphQL).

## Why?
- Generate test users for development stores.
- Migrate customers from other platforms.
- Easy intro to Shopify's GraphQL API.

## Setup

### Setting up Google Sheets access
Follow the instructions in the n8n docs for granting OAuth2 access to Google services. You'll need to grant API access to Google Sheets and Google Drive (to list available sheets).

### Setting up Shopify access
Shopify's Admin API uses Header Auth with a key of X-Shopify-Access-Token and a value of your Shopify access token, which starts with shpat_.

### How to generate a Shopify Access Token
To generate a Shopify access token, create an app, grant the app the necessary scopes, then generate a token. From inside a store, do the following:
1. Click Settings (nav link)
2. Click Apps and sales channels (nav link)
3. Click Develop Apps (button)
4. Click Create App (button)
5. Give the app a name
6. Click Configure Admin API Scopes (button)
7. At a minimum, grant the read_customers and write_customers scopes. Grant additional scopes if you plan on accessing other parts of the API.
8. Click Save

To generate the token:
1. Click Install app (button)
2. Click Install on the dialog that pops up (button)
3. Click Reveal token once (button)
4. Copy the token into a password vault or somewhere secure.

## Template Updates
To test this out you'll need to make the following changes:
1. Create a header credential where the key is X-Shopify-Access-Token and the value is your Shopify access token (it starts with shpat_).
2. In the GraphQL node, change the endpoint URL to your store, something like https://{your store goes here}.myshopify.com/admin/api/2025-04/graphql.json

## Google Sheet Structure
Columns can be in any order, because the rows will be mapped to fields in a JSON object. n8n will treat the first row in the sheet as column names, so at a minimum use the column names below in row 1 of your sheet.
- first_name: Any string
- last_name: Any string
- email: Valid email
- mobile_phone: International mobile phone format with no spaces, e.g. +61414708406 (Shopify will reject anything else).

### Example CSV
```csv
"first_name","last_name","email","mobile_phone"
"Bob","Smith","bob@example.com","+61414999999"
```
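Under the hood, creating a customer via the Admin API is a customerCreate mutation. As a sketch of what you might paste into the GraphQL node (the variable mappings assume the column names above):

```graphql
mutation customerCreate($input: CustomerInput!) {
  customerCreate(input: $input) {
    customer { id email }
    userErrors { field message }
  }
}
```

with variables mapped from each sheet row:

```json
{
  "input": {
    "firstName": "={{ $json.first_name }}",
    "lastName": "={{ $json.last_name }}",
    "email": "={{ $json.email }}",
    "phone": "={{ $json.mobile_phone }}"
  }
}
```

Checking userErrors in the response is worthwhile, since Shopify reports bad phone formats there rather than failing the whole request.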
by Rajeet Nair
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Description
This workflow automatically collects daily trending topics from Twitter and YouTube, filters them for relevance, and uses an AI model (such as Mistral Cloud or another OpenAI-compatible API) to generate engaging social media hashtags. The final results, including source platform and date, are saved into a connected Google Sheet for easy access, tracking, or team collaboration.

Ideal for content creators, marketers, and social media managers, this automation eliminates the manual effort of trend research and hashtag writing by combining real-time scraping with LLM-powered generation. The result is a scalable, daily strategy tool to stay aligned with what's trending across major platforms.

## How It Works
1. **Daily Trigger:** Starts the workflow automatically on a daily schedule.
2. **Trend Scraping:** Scrapes current trending content from Twitter and YouTube using the Crawl and Scrape community node.
3. **Filtering & Slicing:** Removes irrelevant or duplicate entries and limits each platform's list to top-performing trends.
4. **Merge Trends:** Combines Twitter and YouTube trends into a single dataset.
5. **AI Hashtag Generation:** Sends each trend topic to an AI model to generate relevant hashtags.
6. **Output to Google Sheets:** Loops through AI results and writes them to a Google Sheet, including trend, platform, hashtags, and timestamp.

## Setup Instructions
Estimated time: 10–15 minutes

### Prerequisites
- A self-hosted instance of n8n (required for community nodes)
- API key for Mistral Cloud or any OpenAI-compatible LLM
- Google Sheets account connected via OAuth2 credentials
- Twitter and YouTube trend URLs (or scraping logic for target regions)

### Example: Crawl and Scrape Node for Twitter Trends
You can use the following configuration in the Crawl and Scrape node to extract Twitter trends from Trends24:

```json
{
  "parameters": {
    "url": "https://trends24.in/",
    "selectors": [
      {
        "label": "Twitter Trends",
        "selector": ".trend-card__list li a",
        "type": "text"
      }
    ]
  },
  "name": "Scrape Twitter Trends",
  "type": "n8n-nodes-crawl-and-scrape.crawlAndScrape",
  "typeVersion": 1,
  "position": [300, 200]
}
```

### Google Sheet Column Format
Column A: Generated Hashtags
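The filtering and slicing step can also live in a single Code node. A hedged sketch, assuming each scraped item carries its trend text under the label used above ("Twitter Trends"); adjust the field lookup to the node's actual output:

```javascript
// n8n Code node (Run Once for All Items): dedupe trends and keep the top 10
const seen = new Set();
const top = [];

for (const { json } of $input.all()) {
  const trend = (json['Twitter Trends'] ?? json.trend ?? '').toString().trim();
  if (!trend || seen.has(trend)) continue; // skip empties and duplicates
  seen.add(trend);
  top.push({ json: { trend, platform: 'Twitter' } });
  if (top.length >= 10) break;
}

return top;
```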
by Jimleuk
This n8n workflow demonstrates how to automate the indexing of images to build an object-based image search. By utilising a DETR-ResNet-50 object classification model, we can identify objects within an image and store these associations in Elasticsearch along with a reference to the image.

## How it works
- An image is imported into the workflow via the HTTP Request node.
- The image is then sent to Cloudflare's Workers AI API, where the service runs the image through the DETR-ResNet-50 object classification model.
- The API returns the object associations with their positions in the image, labels, and the confidence score of each classification. Classifications with a confidence score of less than 0.9 are discarded.
- The image's URL and its associations are then indexed in an Elasticsearch server, ready for searching.

## Requirements
- A Cloudflare account with Workers AI enabled to access the object classification model.
- An Elasticsearch instance to store the image URL and related associations.

## Extending this workflow
- Further enrich your indexed data with additional attributes or metrics relevant to your users.
- Use a vector store to provide similarity search over the images.
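A sketch of the confidence filter and the document shape sent to Elasticsearch, assuming the Workers AI response follows its usual `{ result: [{ label, score, box }] }` layout; match the field names to your actual response and index mapping:

```javascript
// n8n Code node: keep confident detections and build the document to index
const response = $input.first().json;
const objects = (response.result ?? [])
  .filter(obj => obj.score >= 0.9) // discard low-confidence classifications
  .map(obj => ({ label: obj.label, score: obj.score, box: obj.box }));

// image_url is assumed to have been carried along from the HTTP Request step
return [{ json: { image_url: response.image_url, objects } }];
```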
by LukaszB
## Crypto Price Alert – n8n Workflow
A simple and effective crypto alert system for anyone who wants to stay up to date with coin price changes, without refreshing charts all day. This workflow checks the current price of your chosen cryptocurrency (via CoinGecko) and sends you an alert on Discord if it goes above or below your target range. It's lightweight, easy to set up, and runs on autopilot.

## What the Workflow Does
- Checks the live price of a selected coin using the CoinGecko API.
- Compares it to the max/min prices you define manually.
- Decides if the price is too high or too low.
- Sends an alert message to Discord depending on the result.

## How It Works
1. The flow is triggered manually or on a schedule (your choice).
2. It pulls the current price of the coin you set.
3. Compares that price with your min and max values.
4. Sends a "high" or "low" message to your Discord webhook.

## Setup Steps
1. Enter your coin ID and price thresholds in the "Set Low and High" node.
2. Paste your Discord webhook URLs in the "Message High" and "Message Low" nodes.
3. Optional: Adjust the schedule trigger to run every X minutes/hours.
4. Run once manually to test; it takes under a minute.

Full instructions and config tips are in sticky notes inside the workflow.
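For context, the price check boils down to one CoinGecko call and two comparisons. A sketch as a single Code node, assuming the "Set Low and High" node outputs coin_id, min_price and max_price fields (names are illustrative):

```javascript
// n8n Code node: fetch the live USD price and flag threshold breaches
const { coin_id, min_price, max_price } = $input.first().json;

// CoinGecko's public /simple/price endpoint needs no API key for basic use
const res = await this.helpers.httpRequest({
  url: `https://api.coingecko.com/api/v3/simple/price?ids=${coin_id}&vs_currencies=usd`,
  json: true,
});

const price = res[coin_id].usd;
return [{ json: { price, tooHigh: price > max_price, tooLow: price < min_price } }];
```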
by Taiki
## Workflow Setup Guide
This workflow collects the most-viewed videos from specified YouTube channels and saves the data to a Google Sheet. Follow these steps to set it up:

### 1. Credentials Setup
- **Google Sheets:** You need to have a Google Sheets credential configured in your n8n instance. If you don't have one, go to the 'Credentials' section in n8n and add a new credential for Google Sheets.
- **YouTube API Key:** You need a YouTube Data API v3 key.
  1. Go to the Google Cloud Console.
  2. Create a new project or select an existing one.
  3. Go to 'APIs & Services' > 'Library' and enable the YouTube Data API v3.
  4. Go to 'APIs & Services' > 'Credentials', click 'Create Credentials', and choose 'API key'.
  5. Copy the generated API key.

### 2. Google Sheet Setup
You will need one Google Sheet with two separate sheets (tabs) inside it.

**Input Sheet**
This sheet provides the list of YouTube channels to process.
- **Required Columns:** Create a sheet with the following two columns:
  - ChannelID: The ID of the YouTube channel (e.g., T7M3PpjBZzw).
  - video_num_to_get: The number of top videos to retrieve for that channel (e.g., 5).

**Output Sheet**
This sheet is where the results will be saved.
- **Required Columns:** The workflow will automatically append data to the following columns. You can create them beforehand or let the workflow do it.
  - channelName
  - title
  - videoId
  - videoLink

### 3. Node Configuration
- **Read Channel Info from Sheet:**
  - Select your Google Sheets credential.
  - Enter your Spreadsheet ID.
  - Enter the name of your Input Sheet.
- **Fetch Most-Viewed Videos via YouTube API:**
  - Replace YOUR_YOUTUBE_API_KEY with the API key you generated in Step 1.
- **Append Video Details to Sheet:**
  - Select your Google Sheets credential.
  - Enter your Spreadsheet ID (the same one as before).
  - Enter the name of your Output Sheet.
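For reference, the "Fetch Most-Viewed Videos via YouTube API" node most likely calls the Data API's search endpoint ordered by view count. A sketch of the request URL using the input columns above (expressions assume the sheet row is the incoming item):

```
https://www.googleapis.com/youtube/v3/search?part=snippet&type=video&order=viewCount&channelId={{ $json.ChannelID }}&maxResults={{ $json.video_num_to_get }}&key=YOUR_YOUTUBE_API_KEY
```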
by Louis Chan
## How it works
Transform medical documents into structured data using Google Gemini AI with enterprise-grade accuracy.
- Classifies document types (receipts, prescriptions, lab reports, clinical notes)
- Extracts text with 95%+ accuracy using advanced OCR
- Structures data according to medical taxonomy standards
- Supports multiple languages (English, Chinese, auto-detect)
- Tracks processing costs and quality metrics automatically

## Set up steps

### Prerequisites
- Google Gemini API key (get from Google AI Studio)

### Quick setup
1. Import this workflow template
2. Configure Google Gemini API credentials in n8n
3. Test with a sample medical document URL
4. Deploy your webhook endpoint

## Usage
Send a POST request to your webhook:

```json
{
  "image_url": "https://example.com/medical-receipt.jpg",
  "expected_type": "financial",
  "language_hint": "auto"
}
```

Get a structured response:

```json
{
  "success": true,
  "result": {
    "documentType": "financial",
    "metadata": {
      "providerName": "Dr. Smith Clinic",
      "createdDate": "2025-01-06",
      "currency": "USD"
    },
    "content": {
      "amount": 150.00,
      "services": [...]
    },
    "quality_metrics": {
      "overall_confidence": 0.95
    }
  }
}
```

## Use cases

### Healthcare Organizations
- **Medical billing automation:** Process receipts and invoices automatically
- **Insurance claim processing:** Extract data from claim documents
- **Clinical documentation:** Digitize patient records and notes
- **Data standardization:** Consistent structured output format

### System Integrators
- **EMR integration:** Connect with existing healthcare systems
- **Workflow automation:** Reduce manual data entry by 90%
- **Multi-language support:** Handle international medical documents
- **Quality assurance:** Built-in confidence scoring and validation

## Supported Document Types
- **Financial:** Medical receipts, bills, insurance claims, invoices
- **Clinical:** Medical charts, progress notes, consultation reports
- **Prescription:** Prescriptions, medication lists, pharmacy records
- **Administrative:** Referrals, authorizations, patient registration
- **Diagnostic:** Lab reports, test results, screening reports
- **Legal:** Medical certificates, documentation forms
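A quick way to exercise the endpoint once deployed (the URL below is a placeholder for your own webhook):

```javascript
// Minimal client-side test of the deployed webhook
const res = await fetch('https://your-n8n-host/webhook/medical-docs', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    image_url: 'https://example.com/medical-receipt.jpg',
    expected_type: 'financial',
    language_hint: 'auto',
  }),
});
console.log(await res.json());
```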
by Jimleuk
This n8n template demonstrates how to build a simple but effective vintage image restoration service using an AI model with image editing capabilities. With Gemini now capable of multimodal output, it's a great time to explore this capability for image or graphics automation. Let's see how well it does on a task such as image restoration.

## Good to know
- At the time of writing, each image generated costs $0.039 USD. See Gemini Pricing for updated info.
- The model used in this workflow is geo-restricted! If it says model not found, it may not be available in your country or region.

## How it works
- Images are imported into our workflow via the HTTP Request node and converted to base64 strings using the Extract from File node.
- The image data is then pipelined to Gemini's image generation model.
- A prompt is provided to instruct Gemini to "restore" the image to near-new condition. Of course, feel free to experiment with this prompt to improve the results!
- Gemini responds with the image as a base64 string, so a Convert to File node is used to transform the data to binary.
- With the restored image as a binary, we can then use our Google Drive node to upload it to our desired folder.

## How to use
- This demonstration uses 3 random images sourced from the internet, but any typical image file will work.
- Use a Webhook node to allow integration from other applications.
- Use a Telegram trigger for instant mobile service!

## Requirements
- Google Gemini for LLM/image generation
- Google Drive for upload storage

## Customising this workflow
AI image editing can be applied to many use cases, not just image restoration. Try using it to add watermarks or branding, or to modify an existing image for marketing purposes.
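For orientation, the request to Gemini's generateContent endpoint (POST to https://generativelanguage.googleapis.com/v1beta/models/&lt;model&gt;:generateContent) looks roughly like this. The model name is deliberately a placeholder (check which image-capable Gemini model is available in your region), and the base64 expression assumes the output field of your Extract from File node:

```json
{
  "contents": [{
    "parts": [
      { "text": "Restore this photograph to near-new condition. Repair scratches, tears and fading, but keep the original composition." },
      { "inline_data": { "mime_type": "image/jpeg", "data": "={{ $json.data }}" } }
    ]
  }],
  "generationConfig": { "responseModalities": ["TEXT", "IMAGE"] }
}
```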
by Incrementors
🐦 Twitter Profile Scraper via Bright Data API with Google Sheets Output

A comprehensive n8n automation that scrapes Twitter profile data using Bright Data's Twitter dataset and stores tweet analytics, user metrics, and engagement data directly into Google Sheets.

## 📋 Overview
This workflow provides an automated Twitter data collection solution that extracts profile information and tweet data from specified Twitter accounts within custom date ranges. Perfect for social media analytics, competitor research, brand monitoring, and content strategy analysis.

## ✨ Key Features
- 🔗 **Form-Based Input:** Easy-to-use form for Twitter URL and date range selection
- 🐦 **Twitter Integration:** Uses Bright Data's Twitter dataset for accurate data extraction
- 📊 **Comprehensive Data:** Captures tweets, engagement metrics, and profile information
- 📈 **Google Sheets Storage:** Automatically stores all data in organized spreadsheet format
- 🔄 **Progress Monitoring:** Real-time status tracking with automatic retry mechanisms
- ⚡ **Fast & Reliable:** Professional scraping with built-in error handling
- 📅 **Date Range Control:** Flexible time period selection for targeted data collection
- 🎯 **Customizable Fields:** Advanced data field selection and mapping

## 🎯 What This Workflow Does

### Input
- **Twitter Profile URL:** Target Twitter account for data scraping
- **Date Range:** Start and end dates for the tweet collection period
- **Custom Fields:** Configurable data points to extract

### Processing
1. **Form Trigger:** Collects Twitter URL and date range from user input
2. **API Request:** Sends the scraping request to Bright Data with the specified parameters
3. **Progress Monitoring:** Continuously checks scraping job status until completion
4. **Data Retrieval:** Downloads the complete dataset when scraping is finished
5. **Data Processing:** Formats and structures the extracted information
6. **Sheet Integration:** Automatically populates Google Sheets with organized data

### Output Data Points

| Field | Description | Example |
|-------|-------------|---------|
| user_posted | Username who posted the tweet | @elonmusk |
| name | Display name of the user | Elon Musk |
| description | Tweet content/text | "Exciting updates coming soon..." |
| date_posted | When the tweet was posted | 2025-01-15T10:30:00Z |
| likes | Number of likes on the tweet | 1,234 |
| reposts | Number of retweets | 567 |
| replies | Number of replies | 89 |
| views | Total view count | 12,345 |
| followers | User's follower count | 50M |
| following | Users they follow | 123 |
| is_verified | Verification status | true/false |
| hashtags | Hashtags used in tweet | #AI #Technology |
| photos | Image URLs in tweet | image1.jpg, image2.jpg |
| videos | Video content URLs | video1.mp4 |
| user_id | Unique user identifier | 12345678 |
| timestamp | Data extraction timestamp | 2025-01-15T11:00:00Z |

## 🚀 Setup Instructions

### Prerequisites
- n8n instance (self-hosted or cloud)
- Bright Data account with Twitter dataset access
- Google account with Sheets access
- Valid Twitter profile URLs to scrape
- 10-15 minutes for setup

### Step 1: Import the Workflow
1. Copy the JSON workflow code from the provided file
2. In n8n: Workflows → + Add workflow → Import from JSON
3. Paste the JSON and click Import

### Step 2: Configure Bright Data
1. Set up Bright Data credentials:
   - In n8n: Credentials → + Add credential → HTTP Header Auth
   - Enter your Bright Data API credentials
   - Test the connection
2. Configure the dataset:
   - Ensure you have access to the Twitter dataset (gd_lwxkxvnf1cynvib9co)
   - Verify dataset permissions in the Bright Data dashboard

### Step 3: Configure Google Sheets Integration
1. Create a Google Sheet:
   - Go to Google Sheets
   - Create a new spreadsheet named "Twitter Data" or similar
   - Copy the Sheet ID from the URL: https://docs.google.com/spreadsheets/d/SHEET_ID_HERE/edit
2. Set up Google Sheets credentials:
   - In n8n: Credentials → + Add credential → Google Sheets OAuth2 API
   - Complete OAuth setup and test the connection
3. Prepare your data sheet with columns:
   - Use the column headers from the data points table above
   - The workflow will automatically populate these fields

### Step 4: Update Workflow Settings
1. Update the Bright Data nodes:
   - Open the "🚀 Trigger Twitter Scraping" node
   - Replace BRIGHT_DATA_API_KEY with your actual API token
   - Verify the dataset ID is correct
2. Update the Google Sheets node:
   - Open the "📊 Store Twitter Data in Google Sheet" node
   - Replace YOUR_GOOGLE_SHEET_ID with your Sheet ID
   - Select your Google Sheets credential
   - Choose the correct sheet/tab name

### Step 5: Test & Activate
1. Add test data:
   - Use the form trigger to input a Twitter profile URL
   - Set a small date range for testing (e.g., last 7 days)
2. Test the workflow:
   - Submit the form to trigger the workflow
   - Monitor progress in the n8n execution logs
   - Verify data appears in the Google Sheet
   - Check that all expected columns are populated

## 📖 Usage Guide

### Running the Workflow
1. Access the workflow form trigger URL (available when the workflow is active)
2. Enter the Twitter profile URL you want to scrape
3. Set the start and end dates for tweet collection
4. Submit the form to initiate scraping
5. Monitor progress: the workflow automatically checks status every minute
6. Once complete, data will appear in your Google Sheet

### Understanding the Data
Your Google Sheet will show:
- **Real-time tweet data** for the specified date range
- **User engagement metrics** (likes, replies, retweets, views)
- **Profile information** (followers, following, verification status)
- **Content details** (hashtags, media URLs, quoted tweets)
- **Timestamps** for each tweet and data extraction

### Customizing Date Ranges
- **Recent data:** Use the last 7-30 days for current activity analysis
- **Historical analysis:** Select specific months or quarters for trend analysis
- **Event tracking:** Focus on specific date ranges around events or campaigns
- **Comparative studies:** Use consistent time periods across different profiles
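Behind the form, the trigger node is essentially one call to Bright Data's dataset trigger API. A hedged sketch (the exact input field names for the Twitter dataset may differ; check your dataset's documentation):

```
POST https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lwxkxvnf1cynvib9co
Authorization: Bearer BRIGHT_DATA_API_KEY

[
  { "url": "https://x.com/elonmusk", "start_date": "2025-01-01", "end_date": "2025-01-15" }
]
```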
## 🔧 Customization Options

### Modifying Data Fields
Edit the custom_output_fields array in the "🚀 Trigger Twitter Scraping" node to add or remove data points:

```json
"custom_output_fields": [
  "id",
  "user_posted",
  "name",
  "description",
  "date_posted",
  "likes",
  "reposts",
  "replies",
  "views",
  "hashtags",
  "followers",
  "is_verified"
]
```

### Changing Google Sheet Structure
Modify the column mapping in the "📊 Store Twitter Data in Google Sheet" node to match your preferred sheet layout, and add custom formulas or calculations.

### Processing Multiple Profiles
To process multiple Twitter profiles:
1. Modify the form to accept multiple URLs
2. Add a loop node to process each URL separately
3. Implement delays between requests to respect rate limits

## 🚨 Troubleshooting

### Common Issues & Solutions

**"Bright Data connection failed"**
- Cause: Invalid API credentials or dataset access
- Solution: Verify credentials in the Bright Data dashboard and check dataset permissions

**"No data extracted"**
- Cause: Invalid Twitter URLs or private/protected accounts
- Solution: Verify the URLs are valid public Twitter profiles; test with different accounts

**"Google Sheets permission denied"**
- Cause: Incorrect credentials or sheet permissions
- Solution: Re-authenticate Google Sheets and check sheet sharing settings

**"Workflow timeout"**
- Cause: Large date ranges or high-volume accounts
- Solution: Use smaller date ranges; implement pagination for high-volume accounts

**"Progress monitoring stuck"**
- Cause: Scraping job failed or API issues
- Solution: Check the Bright Data dashboard for job status; restart the workflow if needed

### Advanced Troubleshooting
- Check execution logs in n8n for detailed error messages
- Test individual nodes by running them separately
- Verify data formats and ensure consistent field mapping
- Monitor rate limits if scraping multiple profiles consecutively
- Add error handling and implement retry logic for robust operation

## 📊 Use Cases & Examples

### 1. Social Media Analytics
Goal: Track engagement metrics and content performance
- Monitor tweet engagement rates over time
- Analyze hashtag effectiveness and reach
- Track follower growth and audience interaction
- Generate weekly/monthly performance reports

### 2. Competitor Research
Goal: Monitor competitor social media activity
- Track competitor posting frequency and timing
- Analyze competitor content themes and strategies
- Monitor competitor engagement and audience response
- Identify trending topics and hashtags in your industry

### 3. Brand Monitoring
Goal: Track brand mentions and sentiment analysis
- Monitor specific Twitter accounts for brand mentions
- Track hashtag campaigns and user-generated content
- Analyze sentiment trends and audience feedback
- Identify influencers and brand advocates

### 4. Content Strategy Development
Goal: Analyze successful content patterns
- Identify high-performing tweet formats and topics
- Track optimal posting times and frequencies
- Analyze hashtag performance and reach
- Study audience engagement patterns

### 5. Market Research
Goal: Collect social media data for market analysis
- Gather consumer opinions and feedback
- Track industry trends and discussions
- Monitor product launches and market reactions
- Support product development with social insights

## ⚙ Advanced Configuration

### Batch Processing Multiple Profiles
To monitor multiple Twitter accounts efficiently:
- Create a master sheet with profile URLs and date ranges
- Add a loop node to process each profile separately
- Implement delays between requests to respect rate limits
- Use separate sheets or tabs for different profiles

### Adding Data Analysis
Enhance the workflow with analytical capabilities:
- Create additional sheets for processed data and insights
- Add formulas to calculate engagement rates and trends
- Implement data visualization with charts and graphs
- Generate automated reports and summaries

### Integration with Business Tools
Connect the workflow to your existing systems:
- **CRM Integration:** Update customer records with social media data
- **Slack Notifications:** Send alerts when data collection is complete
- **Database Storage:** Store data in PostgreSQL/MySQL for advanced analysis
- **BI Tools:** Connect to Tableau/Power BI for comprehensive visualization

## 📈 Performance & Limits

### Expected Performance
- **Single profile:** 30 seconds to 5 minutes (depending on date range)
- **Data accuracy:** 95%+ for public Twitter profiles
- **Success rate:** 90%+ for accessible accounts
- **Daily capacity:** 10-50 profiles (depends on rate limits and data volume)

### Resource Usage
- **Memory:** ~200MB per execution
- **Storage:** Minimal (data stored in Google Sheets)
- **API calls:** 1 Bright Data call + multiple Google Sheets calls per profile
- **Bandwidth:** ~5-10MB per profile scraped
- **Execution time:** 2-10 minutes for typical date ranges

### Scaling Considerations
- **Rate limiting:** Add delays for high-volume scraping
- **Error handling:** Implement retry logic for failed requests
- **Data validation:** Add checks for malformed or missing data
- **Monitoring:** Track success/failure rates over time
- **Cost optimization:** Monitor API usage to control costs

## 🤝 Support & Community

### Getting Help
- **n8n Community Forum:** community.n8n.io
- **Documentation:** docs.n8n.io
- **Bright Data Support:** Contact through your dashboard
- **GitHub Issues:** Report bugs and feature requests

### Contributing
- Share improvements with the community
- Report issues and suggest enhancements
- Create variations for specific use cases
- Document best practices and lessons learned

## 📋 Quick Setup Checklist

### Before You Start
- ☐ n8n instance running (self-hosted or cloud)
- ☐ Bright Data account with Twitter dataset access
- ☐ Google account with Sheets access
- ☐ Valid Twitter profile URLs ready for scraping
- ☐ 10-15 minutes available for setup

### Setup Steps
- ☐ Import Workflow: Copy the JSON and import it into n8n
- ☐ Configure Bright Data: Set up API credentials and test
- ☐ Create Google Sheet: New sheet with the proper column structure
- ☐ Set up Google Sheets credentials: OAuth setup and test
- ☐ Update workflow settings: Replace API keys and sheet IDs
- ☐ Test with sample data: Add 1 Twitter URL and a small date range
- ☐ Verify data flow: Check that data appears in the Google Sheet correctly
- ☐ Activate workflow: Enable the form trigger for production use

### Ready to Use! 🎉
Your workflow URL: Access the form trigger when the workflow is active.

🎯 **Happy Twitter Scraping!** This workflow provides a solid foundation for automated Twitter data collection. Customize it to fit your specific social media analytics and research needs.
For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/