by Incrementors
## Twitter Profile Scraper via Bright Data API with Google Sheets Output

A comprehensive n8n automation that scrapes Twitter profile data using Bright Data's Twitter dataset and stores tweet analytics, user metrics, and engagement data directly in Google Sheets.

### Overview

This workflow provides an automated Twitter data collection solution that extracts profile information and tweet data from specified Twitter accounts within custom date ranges. Perfect for social media analytics, competitor research, brand monitoring, and content strategy analysis.

### Key Features

- **Form-Based Input**: Easy-to-use form for Twitter URL and date range selection
- **Twitter Integration**: Uses Bright Data's Twitter dataset for accurate data extraction
- **Comprehensive Data**: Captures tweets, engagement metrics, and profile information
- **Google Sheets Storage**: Automatically stores all data in an organized spreadsheet format
- **Progress Monitoring**: Real-time status tracking with automatic retry mechanisms
- **Fast & Reliable**: Professional scraping with built-in error handling
- **Date Range Control**: Flexible time period selection for targeted data collection
- **Customizable Fields**: Advanced data field selection and mapping

### What This Workflow Does

**Input**

- **Twitter Profile URL**: Target Twitter account for data scraping
- **Date Range**: Start and end dates for the tweet collection period
- **Custom Fields**: Configurable data points to extract

**Processing**

1. **Form Trigger**: Collects the Twitter URL and date range from user input
2. **API Request**: Sends the scraping request to Bright Data with the specified parameters
3. **Progress Monitoring**: Continuously checks the scraping job status until completion
4. **Data Retrieval**: Downloads the complete dataset when scraping is finished
5. **Data Processing**: Formats and structures the extracted information
6. **Sheet Integration**: Automatically populates Google Sheets with the organized data

**Output Data Points**

| Field | Description | Example |
|-------|-------------|---------|
| user_posted | Username who posted the tweet | @elonmusk |
| name | Display name of the user | Elon Musk |
| description | Tweet content/text | "Exciting updates coming soon..." |
| date_posted | When the tweet was posted | 2025-01-15T10:30:00Z |
| likes | Number of likes on the tweet | 1,234 |
| reposts | Number of retweets | 567 |
| replies | Number of replies | 89 |
| views | Total view count | 12,345 |
| followers | User's follower count | 50M |
| following | Users they follow | 123 |
| is_verified | Verification status | true/false |
| hashtags | Hashtags used in the tweet | #AI #Technology |
| photos | Image URLs in the tweet | image1.jpg, image2.jpg |
| videos | Video content URLs | video1.mp4 |
| user_id | Unique user identifier | 12345678 |
| timestamp | Data extraction timestamp | 2025-01-15T11:00:00Z |

### Setup Instructions

**Prerequisites**

- n8n instance (self-hosted or cloud)
- Bright Data account with Twitter dataset access
- Google account with Sheets access
- Valid Twitter profile URLs to scrape
- 10-15 minutes for setup

**Step 1: Import the Workflow**

1. Copy the JSON workflow code from the provided file
2. In n8n: Workflows → + Add workflow → Import from JSON
3. Paste the JSON and click Import

**Step 2: Configure Bright Data**

1. Set up Bright Data credentials:
   - In n8n: Credentials → + Add credential → HTTP Header Auth
   - Enter your Bright Data API credentials
   - Test the connection
2. Configure the dataset:
   - Ensure you have access to the Twitter dataset (gd_lwxkxvnf1cynvib9co)
   - Verify dataset permissions in the Bright Data dashboard

**Step 3: Configure Google Sheets Integration**

1. Create a Google Sheet:
   - Go to Google Sheets and create a new spreadsheet named "Twitter Data" or similar
   - Copy the Sheet ID from the URL: `https://docs.google.com/spreadsheets/d/SHEET_ID_HERE/edit`
2. Set up Google Sheets credentials:
   - In n8n: Credentials → + Add credential → Google Sheets OAuth2 API
   - Complete the OAuth setup and test the connection
3. Prepare your data sheet with columns:
   - Use the column headers from the data points table above
   - The workflow will automatically populate these fields

**Step 4: Update Workflow Settings**

1. Update the Bright Data nodes:
   - Open the "Trigger Twitter Scraping" node
   - Replace `BRIGHT_DATA_API_KEY` with your actual API token
   - Verify the dataset ID is correct
2. Update the Google Sheets node:
   - Open the "Store Twitter Data in Google Sheet" node
   - Replace `YOUR_GOOGLE_SHEET_ID` with your Sheet ID
   - Select your Google Sheets credential
   - Choose the correct sheet/tab name

**Step 5: Test & Activate**

1. Add test data:
   - Use the form trigger to input a Twitter profile URL
   - Set a small date range for testing (e.g., the last 7 days)
2. Test the workflow:
   - Submit the form to trigger the workflow
   - Monitor progress in the n8n execution logs
   - Verify data appears in the Google Sheet
   - Check that all expected columns are populated

### Usage Guide

**Running the Workflow**

1. Access the workflow's form trigger URL (available when the workflow is active)
2. Enter the Twitter profile URL you want to scrape
3. Set the start and end dates for tweet collection
4. Submit the form to initiate scraping
5. Monitor progress - the workflow automatically checks the job status every minute
6. Once complete, data will appear in your Google Sheet

**Understanding the Data**

Your Google Sheet will show:

- **Real-time tweet data** for the specified date range
- **User engagement metrics** (likes, replies, retweets, views)
- **Profile information** (followers, following, verification status)
- **Content details** (hashtags, media URLs, quoted tweets)
- **Timestamps** for each tweet and data extraction

**Customizing Date Ranges**

- **Recent data**: Use the last 7-30 days for current activity analysis
- **Historical analysis**: Select specific months or quarters for trend analysis
- **Event tracking**: Focus on date ranges around events or campaigns
- **Comparative studies**: Use consistent time periods across different profiles
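For orientation, the "Trigger Twitter Scraping" node essentially issues a request like the sketch below, built from the form inputs. The endpoint path and parameter names are assumptions based on Bright Data's dataset trigger API, so verify them against your Bright Data dashboard and docs before relying on them.

```javascript
// Sketch of the scraping trigger (assumed Bright Data datasets API shape - verify in your dashboard).
const DATASET_ID = 'gd_lwxkxvnf1cynvib9co';

const response = await fetch(
  `https://api.brightdata.com/datasets/v3/trigger?dataset_id=${DATASET_ID}&include_errors=true`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.BRIGHT_DATA_API_KEY}`,
      'Content-Type': 'application/json',
    },
    // One entry per profile: the form's URL plus the selected date range.
    body: JSON.stringify([
      { url: 'https://x.com/elonmusk', start_date: '2025-01-01', end_date: '2025-01-15' },
    ]),
  }
);
const { snapshot_id } = await response.json(); // used by the progress-monitoring step
```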
### Customization Options

**Modifying Data Fields**

Edit the `custom_output_fields` array in the "Trigger Twitter Scraping" node to add or remove data points:

```json
"custom_output_fields": [
  "id",
  "user_posted",
  "name",
  "description",
  "date_posted",
  "likes",
  "reposts",
  "replies",
  "views",
  "hashtags",
  "followers",
  "is_verified"
]
```

**Changing Google Sheet Structure**

Modify the column mapping in the "Store Twitter Data in Google Sheet" node to match your preferred sheet layout, and add custom formulas or calculations.

**Adding Multiple Recipients**

To process multiple Twitter profiles:

1. Modify the form to accept multiple URLs
2. Add a loop node to process each URL separately
3. Implement delays between requests to respect rate limits

### Troubleshooting

**Common Issues & Solutions**

- **"Bright Data connection failed"**
  - Cause: Invalid API credentials or missing dataset access
  - Solution: Verify credentials in the Bright Data dashboard and check dataset permissions
- **"No data extracted"**
  - Cause: Invalid Twitter URLs or private/protected accounts
  - Solution: Verify the URLs are valid public Twitter profiles; test with different accounts
- **"Google Sheets permission denied"**
  - Cause: Incorrect credentials or sheet permissions
  - Solution: Re-authenticate Google Sheets and check the sheet's sharing settings
- **"Workflow timeout"**
  - Cause: Large date ranges or high-volume accounts
  - Solution: Use smaller date ranges; implement pagination for high-volume accounts
- **"Progress monitoring stuck"**
  - Cause: The scraping job failed or API issues
  - Solution: Check the Bright Data dashboard for the job status; restart the workflow if needed

**Advanced Troubleshooting**

- Check execution logs in n8n for detailed error messages
- Test individual nodes by running them separately
- Verify data formats and ensure consistent field mapping
- Monitor rate limits if scraping multiple profiles consecutively
- Add error handling and implement retry logic for robust operation
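When diagnosing a stuck job, it helps to know what the monitoring loop is doing. The sketch below shows the idea, using the `snapshot_id` returned by the trigger request; the progress and snapshot endpoint paths are assumptions based on Bright Data's dataset API, so confirm them in the Bright Data documentation.

```javascript
// Poll the job until Bright Data reports it ready, then download the snapshot
// (endpoint paths are assumptions - confirm against Bright Data's dataset API docs).
const headers = { Authorization: `Bearer ${process.env.BRIGHT_DATA_API_KEY}` };
const snapshotId = 's_xxxxxxxx'; // placeholder: the snapshot_id from the trigger response

let status = 'running';
while (status === 'running') {
  const progress = await fetch(
    `https://api.brightdata.com/datasets/v3/progress/${snapshotId}`, { headers }
  ).then(r => r.json());
  status = progress.status;
  if (status === 'running') await new Promise(r => setTimeout(r, 60_000)); // the workflow waits ~1 minute between checks
}

const rows = await fetch(
  `https://api.brightdata.com/datasets/v3/snapshot/${snapshotId}?format=json`, { headers }
).then(r => r.json()); // one object per tweet, ready for the Google Sheets node
```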
### Use Cases & Examples

**1. Social Media Analytics**

Goal: Track engagement metrics and content performance

- Monitor tweet engagement rates over time
- Analyze hashtag effectiveness and reach
- Track follower growth and audience interaction
- Generate weekly/monthly performance reports

**2. Competitor Research**

Goal: Monitor competitor social media activity

- Track competitor posting frequency and timing
- Analyze competitor content themes and strategies
- Monitor competitor engagement and audience response
- Identify trending topics and hashtags in your industry

**3. Brand Monitoring**

Goal: Track brand mentions and analyze sentiment

- Monitor specific Twitter accounts for brand mentions
- Track hashtag campaigns and user-generated content
- Analyze sentiment trends and audience feedback
- Identify influencers and brand advocates

**4. Content Strategy Development**

Goal: Analyze successful content patterns

- Identify high-performing tweet formats and topics
- Track optimal posting times and frequencies
- Analyze hashtag performance and reach
- Study audience engagement patterns

**5. Market Research**

Goal: Collect social media data for market analysis

- Gather consumer opinions and feedback
- Track industry trends and discussions
- Monitor product launches and market reactions
- Support product development with social insights

### Advanced Configuration

**Batch Processing Multiple Profiles**

To monitor multiple Twitter accounts efficiently:

1. Create a master sheet with profile URLs and date ranges
2. Add a loop node to process each profile separately
3. Implement delays between requests to respect rate limits
4. Use separate sheets or tabs for different profiles

**Adding Data Analysis**

Enhance the workflow with analytical capabilities:

- Create additional sheets for processed data and insights
- Add formulas to calculate engagement rates and trends
- Implement data visualization with charts and graphs
- Generate automated reports and summaries

**Integration with Business Tools**

Connect the workflow to your existing systems:

- **CRM Integration**: Update customer records with social media data
- **Slack Notifications**: Send alerts when data collection is complete
- **Database Storage**: Store data in PostgreSQL/MySQL for advanced analysis
- **BI Tools**: Connect to Tableau/Power BI for comprehensive visualization

### Performance & Limits

**Expected Performance**

- **Single profile**: 30 seconds to 5 minutes (depending on the date range)
- **Data accuracy**: 95%+ for public Twitter profiles
- **Success rate**: 90%+ for accessible accounts
- **Daily capacity**: 10-50 profiles (depends on rate limits and data volume)

**Resource Usage**

- **Memory**: ~200 MB per execution
- **Storage**: Minimal (data stored in Google Sheets)
- **API calls**: 1 Bright Data call + multiple Google Sheets calls per profile
- **Bandwidth**: ~5-10 MB per profile scraped
- **Execution time**: 2-10 minutes for typical date ranges

**Scaling Considerations**

- **Rate limiting**: Add delays for high-volume scraping
- **Error handling**: Implement retry logic for failed requests
- **Data validation**: Add checks for malformed or missing data
- **Monitoring**: Track success/failure rates over time
- **Cost optimization**: Monitor API usage to control costs

### Support & Community

**Getting Help**

- **n8n Community Forum**: community.n8n.io
- **Documentation**: docs.n8n.io
- **Bright Data Support**: Contact through your dashboard
- **GitHub Issues**: Report bugs and feature requests

**Contributing**

- Share improvements with the community
- Report issues and suggest enhancements
- Create variations for specific use cases
- Document best practices and lessons learned

### Quick Setup Checklist

**Before You Start**

- [ ] n8n instance running (self-hosted or cloud)
- [ ] Bright Data account with Twitter dataset access
- [ ] Google account with Sheets access
- [ ] Valid Twitter profile URLs ready for scraping
- [ ] 10-15 minutes available for setup

**Setup Steps**

- [ ] **Import Workflow** - Copy the JSON and import it into n8n
- [ ] **Configure Bright Data** - Set up API credentials and test
- [ ] **Create Google Sheet** - New sheet with the proper column structure
- [ ] **Set up Google Sheets credentials** - OAuth setup and test
- [ ] **Update workflow settings** - Replace API keys and sheet IDs
- [ ] **Test with sample data** - Add one Twitter URL and a small date range
- [ ] **Verify data flow** - Check that data appears in the Google Sheet correctly
- [ ] **Activate workflow** - Enable the form trigger for production use

**Ready to Use!** Your workflow URL: access the form trigger when the workflow is active.

**Happy Twitter Scraping!** This workflow provides a solid foundation for automated Twitter data collection. Customize it to fit your specific social media analytics and research needs.
For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
by Joseph
This workflow automates invoice generation from form submissions: ensuring unique order IDs, creating PDF invoices, storing files, emailing customers, and logging invoice data, all seamlessly integrated.

### Workflow Overview

1. **Trigger (Webhook)** - Starts when an order form is submitted, capturing customer and order details.
2. **Generate Random Order ID** - A Function node creates a unique alphanumeric invoice ID (e.g., INV-X92B7D).
3. **Check for Duplicate Order ID** - Google Sheets looks up the generated order ID in your invoice log sheet to prevent duplicates.
4. **Conditional Check (IF node)** - If the ID already exists, a new ID is regenerated (loops back); if it is unique, the workflow proceeds to invoice creation.
5. **Prepare Invoice Data** - A Set node formats customer info, date, order items, and the unique order ID to fit your invoice template.
6. **Convert HTML to PDF** - An HTTP Request node sends your invoice HTML to the RapidAPI HTML-to-PDF service and receives the PDF file.
7. **Upload PDF to Cloud Storage** - Saves the PDF in Google Drive or Dropbox with a clear file name like Invoice-INV-X92B7D.pdf.
8. **Send Invoice Email to Customer** - An Email node attaches the PDF and includes the order ID in the email subject/body.
9. **Log Invoice Details** - Appends invoice data (customer info, order ID, total, PDF link) to your Google Sheet for tracking.

### Node Details & Setup

1. **Webhook Trigger** - Configure it to receive form submissions (order details like name, email, items, total).
2. **Function: Generate Random Order ID** - Sample JS code generates unique IDs prefixed with INV-.
3. **Google Sheets: Lookup Row** - Set up the connection to your invoice log sheet and search for the generated order ID to avoid duplicates.
4. **IF Node: Check Order ID Existence** - Condition: if the order ID is found, loop back to regenerate; otherwise continue the workflow.
5. **Set Node: Prepare Invoice HTML** - Define variables like customer name, date, items, and order ID. This data populates your HTML invoice template.
6. **HTTP Request: Convert HTML to PDF** - See the API URL to get your key. Send the invoice HTML in the request body and receive the PDF file blob or a download URL.
7. **Google Drive (or Dropbox) Upload** - Upload the PDF file using the file name format Invoice-{{$json["order_id"]}}.pdf.
8. **Email Node** - Recipient: the customer email from the form data. Attach the generated PDF and include the order ID in the email subject or body for reference.
9. **Google Sheets: Append Row** - Log invoice metadata to keep records up to date.

### Google Sheets Template

You can make a copy of the invoice log template here. This sheet includes columns for order_id, customer name, email, total, and invoice PDF link. Customize it as needed.

### Additional Notes

- Customize the invoice HTML template inside the Set node to match your branding.
- Ensure API credentials for RapidAPI, Google Drive/Dropbox, and email are properly set up in your n8n credentials.
- You can expand this workflow by adding payment processing or SMS notifications.

Need help or want a custom workflow? Reach out via email at joseph@uppfy.com.
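For reference, the order-ID generation in step 2 can be as simple as the sketch below. This is illustrative only; the Function node shipped with the workflow may generate IDs slightly differently.

```javascript
// n8n Function node sketch: add an order_id like "INV-X92B7D" to every incoming item.
const chars = 'ABCDEFGHJKLMNPQRSTUVWXYZ23456789'; // skip ambiguous characters (assumption)
let suffix = '';
for (let i = 0; i < 6; i++) {
  suffix += chars[Math.floor(Math.random() * chars.length)];
}
// "items" is the array n8n passes into a Function node.
return items.map((item) => ({ json: { ...item.json, order_id: `INV-${suffix}` } }));
```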
by Hunyao
### What it does

Pulls up to 700 Amazon reviews per product (recent and top-rated) and writes them straight into a Google Sheet tab you choose.

### Perfect for

- Brand and product managers tracking sentiment
- Marketplace sellers analysing competitor feedback
- Agencies building product-review dashboards

### Apps used

RapidAPI Real-Time Amazon Data, Google Sheets, n8n Form Trigger

### How it works

1. The Form Trigger collects brand, product, and sheet info.
2. A Code node extracts the ASIN and builds 70 API requests (10 pages × star ratings).
3. Split In Batches loops through the request list, throttled by two Wait nodes.
4. HTTP Request fetches reviews from RapidAPI.
5. An IF node drops empty or error responses.
6. Split Out breaks arrays into single reviews.
7. Google Sheets appends every review to the target tab.
8. The loop continues until all pages finish.

### Setup

1. Fill in Brand name, Product / Model Name, Amazon Product URL, and Tab URL to insert reviews in the form.
2. Grab your X-RapidAPI-Key from RapidAPI and add it as an httpHeaderAuth credential.
3. Connect Google Sheets OAuth2 and set the spreadsheet to "Anyone with the link can edit".
4. Open Workflow Settings and set the timezone if you plan to schedule runs.
5. Hit Execute workflow or share the form link.

### Credentials

- Real-Time Amazon Data (RapidAPI HTTP Header Auth)
- Google Sheets OAuth2

### Limits and notes

- ~100 RapidAPI calls on the free plan; plan your quota accordingly.
- Assumes Amazon returns 10 pages per star rating; fewer pages are skipped silently.
- Large sheets may hit Google API write quotas.

If you have any questions about running the workflow, feel free to reach out to me on my YouTube channel: https://www.youtube.com/@lifeofhunyao
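For reference, the Code node's request-building step (step 2 above) amounts to something like the sketch below. The form field name and the filter values expected by the RapidAPI endpoint are assumptions, so adjust them to the actual API parameters.

```javascript
// n8n Code node sketch (assumed implementation): pull the ASIN out of the product URL
// and build one request entry per page × review-filter combination (7 filters × 10 pages = 70).
const url = $json['Amazon Product URL']; // form field name (assumption)
const asin = (url.match(/\/(?:dp|gp\/product)\/([A-Z0-9]{10})/i) || [])[1];

const filters = ['1_star', '2_star', '3_star', '4_star', '5_star', 'top_reviews', 'most_recent']; // assumed values
const requests = [];
for (const filter of filters) {
  for (let page = 1; page <= 10; page++) {
    requests.push({ json: { asin, page, filter } });
  }
}
return requests; // 70 items, consumed one by one in the Split In Batches loop
```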
by Jay Emp0
## MCP Tool - Replicate (Flux) Image Generator - WordPress/Twitter

Generates images via Replicate Flux models and uploads them to WordPress (and optionally Twitter/X). Built to act as an MCP module that other agents/workflows call for on-demand image creation.

Models configured in this workflow: black-forest-labs/flux-schnell, black-forest-labs/flux-dev, black-forest-labs/flux-1.1-pro

Switch rationale:

- Lower cost
- Broader model choice
- Full control of parameters
- Leonardo API credits cannot be used in the web UI; separate spend for API vs UI

Links:

- Prior Leonardo-based workflow: https://n8n.io/workflows/6363-generate-and-upload-images-with-leonardo-ai-wordpress-and-twitter/
- Blog automation consuming these images: https://n8n.io/workflows/6734-ai-blog-automation-publish-hourly-seo-articles-to-wordpress-and-twitter-v3/

### Inputs

| Field | Type | Description |
|--------|--------|-----------------------------------|
| prompt | string | Text description for the image |
| slug | string | Filename slug for WP media |
| model | string | One of the configured Flux models |

Example:

```json
{
  "prompt": "Joker watching a Batman movie on his laptop",
  "slug": "joker-watching-batman",
  "model": "black-forest-labs/flux-dev"
}
```

### Output

```json
{
  "public_image_url": "https://your-wp.com/wp-content/uploads/2025/08/img-joker-watching-batman.webp",
  "wordpress": {...},
  "twitter": {...}
}
```

### Flow

1. Trigger with prompt, slug, model
2. Build the model payload (quality/steps/ratio/output format)
3. Call Replicate: POST /v1/models/{model}/predictions (Prefer: wait)
4. Download the generated image from the returned URL
5. Upload to WordPress (returns the public URL)
6. Optional: upload to Twitter/X
7. Return the URL + metadata
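Step 3 maps onto Replicate's synchronous prediction endpoint. A minimal sketch follows; the `input` parameter names vary slightly between the Flux models, so treat them as an example rather than the exact payload the workflow builds.

```javascript
// Sketch of the Replicate call. "Prefer: wait" makes the API block until the prediction finishes.
const model = 'black-forest-labs/flux-dev';
const prompt = 'Joker watching a Batman movie on his laptop';

const response = await fetch(`https://api.replicate.com/v1/models/${model}/predictions`, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
    Prefer: 'wait',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    input: { prompt, aspect_ratio: '16:9', output_format: 'webp' }, // parameter names differ per Flux model
  }),
});
const prediction = await response.json();
// Some Flux models return a single URL, others an array of URLs.
const imageUrl = Array.isArray(prediction.output) ? prediction.output[0] : prediction.output;
```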
### MCP Use at Scale (emp0.com)

Operational pattern: I currently use this setup for my blog, where I generate 300 posts/month, each with 4 images (a banner + 2 to 3 inline images), which is roughly 1,000 images/month produced by this MCP.

Hybrid cost-optimized setup:

- **High-priority images** (banners, main visuals): generated with **Flux Dev** on Leonardo for slightly better prompt adherence.
- **Low-priority images** (inline blog visuals): generated with **Flux Schnell** on Replicate for maximum cost efficiency.

### Pricing Comparison (per image)

Leonardo per-image cost uses API Basic math: $9 / 3,500 credits = $0.0025714 per credit.

- **Flux Schnell (Leonardo)** = 7 credits
- **Flux Dev (Leonardo)** = 7 credits
- Flux 1.1 Pro equivalent in Leonardo = **Leonardo Phoenix** (based on my experience) = 10 credits

| Flux Model | Replicate | Leonardo API* |
|------------------------|---------------------|--------------------------|
| flux-schnell | $0.0030 (=$3/1,000) | $0.0180 (7 × $0.0025714) |
| flux-dev | $0.0250 | $0.0180 (7 × $0.0025714) |
| flux-1.1-pro / Phoenix | $0.0400 | $0.0257 (10 × $0.0025714) |

Replicate pricing: https://replicate.com/pricing
Leonardo pricing: https://leonardo.ai/pricing/
Leonardo API usage: https://docs.leonardo.ai/docs/commonly-used-api-values

### Monthly Cost Example (1,000 images/month)

Mix: 300 × flux-dev on Leonardo, 700 × flux-schnell on Replicate.

| Platform/Model | Images | Price per Image | Total |
|-------------------------|--------|-----------------|-------|
| Leonardo flux-dev | 300 | $0.0180 | $5.40 |
| Replicate flux-schnell | 700 | $0.0030 | $2.10 |
| **Total Monthly Spend** | 1000 | - | $7.50 |

If using Leonardo for both:

- 300 × $0.0180 = $5.40
- 700 × $0.0180 = $12.60
- Total = $18.00

Savings: $10.50/month (about 58% lower) with the hybrid setup.

### Notes

- More Replicate models can be added in the Code1 node.
- Parameters are tuned for aspect ratio, inference steps, quality, and guidance.
- The Leonardo credit model is API-only; credits are not spendable in Leonardo's web UI.
by Custom Workflows AI
## Introduction

This workflow offers a streamlined solution for uploading multiple files to a GitHub repository simultaneously using GitHub's REST API. It addresses a significant limitation of n8n's native GitHub node, which only supports single-file uploads. By leveraging GitHub's Git Data API, this workflow creates a new Git tree containing multiple files, commits this tree, and updates the target branch, all in a single automated process.

The workflow is particularly valuable for automation scenarios that require batch file operations, such as deploying website updates, publishing documentation, or maintaining configuration files across repositories. It eliminates the need for multiple separate API calls when working with multiple files, making your automation more efficient and less prone to partial update issues. By abstracting the complexities of GitHub's Git Data API into a reusable workflow, it provides a practical solution for developers, content managers, and DevOps professionals who need to programmatically manage repository content at scale.

## Who is this for?

This workflow is designed for:

- Developers and DevOps engineers who need to automate file updates in GitHub repositories
- Content managers who regularly publish multiple files to GitHub-hosted websites or documentation
- Automation specialists looking to integrate GitHub operations into larger workflows
- Teams using n8n for CI/CD processes who need to push code or configuration changes

Users should have basic familiarity with GitHub concepts (repositories, branches, commits) and should be comfortable obtaining and using GitHub Personal Access Tokens. While the workflow handles the API complexity, users should understand the fundamentals of version control to effectively utilize and customize it.

## What problem is this workflow solving?

This workflow addresses several key challenges:

1. **Limited batch operations**: n8n's native GitHub node only supports uploading one file at a time, making multi-file operations cumbersome and inefficient.
2. **API complexity**: GitHub's Git Data API requires multiple sequential calls with interdependent data to create commits with multiple files, which is complex to implement manually.
3. **Automation bottlenecks**: Without this workflow, automating multi-file updates would require either multiple separate API calls (risking partial updates) or custom scripting outside of n8n.
4. **Consistency issues**: When files need to be updated together (e.g., code and corresponding documentation), this workflow ensures they're committed in a single atomic operation.

By solving these issues, the workflow enables reliable, atomic updates of multiple files, maintaining repository consistency and simplifying automation processes.

## What this workflow does

### Overview

This workflow uses GitHub's REST API to push multiple files to a repository in a single operation. It follows Git's internal model by:

1. Retrieving the current state of the repository
2. Creating a new tree with the files to be added or updated
3. Creating a new commit with this tree
4. Updating the branch reference to point to the new commit

### Process

1. **Initialization**: The workflow starts with a manual trigger and sets up GitHub credentials and repository information.
2. **File Content Definition**: Two "Set" nodes define the content for the files to be uploaded.
3. **Repository State Retrieval**: The workflow fetches the latest commit SHA for the specified branch, then retrieves the base tree SHA from that commit.
4. **Tree Creation**: A new Git tree is created that includes both files (file1.txt and file2.txt), specifying their paths and content.
5. **Commit Creation**: A new commit is created with the specified commit message, referencing the new tree and the parent commit.
6. **Branch Update**: Finally, the branch reference is updated to point to the new commit, making the changes visible in the repository.
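In terms of raw API calls, the process above chains four GitHub REST endpoints. A condensed sketch is shown below; owner, repo, branch, token, and the file contents are placeholders, and the workflow performs the same calls through HTTP Request nodes rather than code.

```javascript
// Sketch of the four GitHub Git Data API calls (token needs "Contents: read and write").
const owner = 'your-username';
const repo = 'your-repo';
const branch = 'main';
const token = process.env.GITHUB_TOKEN;

const gh = (path, options = {}) =>
  fetch(`https://api.github.com/repos/${owner}/${repo}${path}`, {
    ...options,
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'application/vnd.github+json',
      'Content-Type': 'application/json',
      ...options.headers,
    },
  }).then((r) => r.json());

// 1. Latest commit on the branch, and the tree it points to
const ref = await gh(`/git/ref/heads/${branch}`);
const parentSha = ref.object.sha;
const parentCommit = await gh(`/git/commits/${parentSha}`);

// 2. New tree containing both files (inline content; mode 100644 = regular file)
const tree = await gh('/git/trees', {
  method: 'POST',
  body: JSON.stringify({
    base_tree: parentCommit.tree.sha,
    tree: [
      { path: 'file1.txt', mode: '100644', type: 'blob', content: 'contents of file 1' },
      { path: 'file2.txt', mode: '100644', type: 'blob', content: 'contents of file 2' },
    ],
  }),
});

// 3. Commit pointing at the new tree, with the previous commit as parent
const commit = await gh('/git/commits', {
  method: 'POST',
  body: JSON.stringify({ message: 'Update files via n8n', tree: tree.sha, parents: [parentSha] }),
});

// 4. Move the branch reference to the new commit
await gh(`/git/refs/heads/${branch}`, { method: 'PATCH', body: JSON.stringify({ sha: commit.sha }) });
```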
## Setup

To use this workflow:

1. **Import the workflow**: Download the workflow JSON and import it into your n8n instance.
2. **Create a GitHub Personal Access Token**:
   - Go to GitHub Settings → Developer Settings → Personal Access Tokens → Fine-grained tokens
   - Create a new token with "Contents" permission (Read and write) for your target repository
3. **Configure the workflow**: Update the "Set Github Info" node with:
   - Your GitHub Personal Access Token
   - Your GitHub username
   - Your repository name
   - The target branch (default is "main")
   - A commit message
4. **Define file content**: Modify the "File 1" and "File 2" nodes with the content you want to upload.
5. **Adjust file paths if needed**: In the "Create new tree" node, update the file paths if you want to change where the files are stored in the repository.
6. **Save and run the workflow**: Click "Test workflow" to execute the process.

## How to customize this workflow to your needs

This workflow can be adapted in several ways:

- **Add more files**:
  - Create additional "Set" nodes for more file content
  - In the "Create new tree" node, add more tree entries following the same pattern (path, mode, type, content)
- **Change file locations**: Modify the "path" parameters in the "Create new tree" node to place files in different directories.
- **Dynamic file content**:
  - Replace the static content in the "File" nodes with data from other sources
  - Use previous nodes or HTTP requests to generate file content dynamically
- **Conditional file updates**:
  - Add IF nodes to determine which files should be updated based on certain conditions
  - Create separate branches in your workflow for different update scenarios
- **Scheduled updates**:
  - Replace the manual trigger with a Schedule node to run the workflow at specific intervals
  - Combine with other triggers like Webhook or database events to push files when certain events occur
- **Error handling**:
  - Add Error Trigger nodes to handle potential API failures
  - Implement notification nodes to alert you of successful pushes or failures
by Rajeet Nair
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

### Description

This workflow automatically collects daily trending topics from Twitter and YouTube, filters them for relevance, and uses an AI model (such as Mistral Cloud or another OpenAI-compatible API) to generate engaging social media hashtags. The final results, including source platform and date, are saved into a connected Google Sheet for easy access, tracking, or team collaboration.

Ideal for content creators, marketers, and social media managers, this automation eliminates the manual effort of trend research and hashtag writing by combining real-time scraping with LLM-powered generation. The result is a scalable, daily strategy tool to stay aligned with what's trending across major platforms.

### How It Works

1. **Daily Trigger** - Starts the workflow automatically on a daily schedule.
2. **Trend Scraping** - Scrapes current trending content from Twitter and YouTube using the Crawl and Scrape community node.
3. **Filtering & Slicing** - Removes irrelevant or duplicate entries and limits each platform's list to top-performing trends.
4. **Merge Trends** - Combines Twitter and YouTube trends into a single dataset.
5. **AI Hashtag Generation** - Sends each trend topic to an AI model to generate relevant hashtags.
6. **Output to Google Sheets** - Loops through the AI results and writes them to a Google Sheet, including trend, platform, hashtags, and timestamp.

### Setup Instructions

Estimated time: 10-15 minutes

Prerequisites:

- A self-hosted instance of n8n (required for community nodes)
- API key for Mistral Cloud or any OpenAI-compatible LLM
- Google Sheets account connected via OAuth2 credentials
- Twitter and YouTube trend URLs (or scraping logic for target regions)

### Example: Crawl and Scrape Node for Twitter Trends

You can use the following configuration in the Crawl and Scrape node to extract Twitter trends from Trends24:

```json
{
  "parameters": {
    "url": "https://trends24.in/",
    "selectors": [
      {
        "label": "Twitter Trends",
        "selector": ".trend-card__list li a",
        "type": "text"
      }
    ]
  },
  "name": "Scrape Twitter Trends",
  "type": "n8n-nodes-crawl-and-scrape.crawlAndScrape",
  "typeVersion": 1,
  "position": [300, 200]
}
```

### Google Sheet Column Format

Column A: Generated Hashtags
by Richard Uren
### Task

Read a list of customers from a Google Sheet and create them in Shopify using Shopify's Admin API (GraphQL).

### Why?

- Generate test users for development stores.
- Migrate customers from other platforms.
- Easy intro to Shopify's GraphQL API.

### Setup

**Setting up Google Sheets access**

Follow the instructions in the n8n docs for granting OAuth2 access to Google services. You'll need to grant API access to Google Sheets and Google Drive (to list available sheets).

**Setting up Shopify access**

Shopify's Admin API uses header auth with a key of X-Shopify-Access-Token and a value of your Shopify access token, which starts with shpat_.

**How to generate a Shopify access token**

To generate a Shopify access token, create an app, grant the app the necessary scopes, then generate a token. From inside a store do the following:

1. Click Settings (nav link)
2. Click Apps and sales channels (nav link)
3. Click Develop Apps (button)
4. Click Create App (button)
5. Give the app a name
6. Click Configure Admin API Scopes (button)
7. At a minimum, grant the read_customers and write_customers scopes. Grant additional scopes if you plan on accessing other parts of the API.
8. Click Save

To generate the token:

1. Click Install app (button)
2. Click Install on the dialog that pops up (button)
3. Click "Reveal token once" (button)
4. Copy the token into a password vault or somewhere secure.

### Template Updates

To test this out you'll need to make the following changes:

1. Create a header credential where the key is X-Shopify-Access-Token and the value is your Shopify access token (it starts with shpat_).
2. In the GraphQL node, change the endpoint URL to your store, e.g. https://{your store goes here}.myshopify.com/admin/api/2025-04/graphql.json

### Google Sheet Structure

Columns can be in any order, because the rows are mapped to fields in a JSON object. n8n treats the first row in the sheet as column names, so at a minimum use the column names below in row 1 of your sheet.

- first_name: any string
- last_name: any string
- email: valid email
- mobile_phone: international mobile phone format with no spaces, e.g. +61414708406 (Shopify will reject anything else).

**Example CSV**

```
"first_name","last_name","email","mobile_phone"
"Bob","Smith","bob@example.com","+61414999999"
```
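For reference, the request body the GraphQL node ends up posting to the Admin API looks roughly like the sketch below (it maps each sheet row onto Shopify's customerCreate mutation; treat the exact input fields as a sketch and adjust to your API version).

```javascript
// Sketch of the GraphQL request body sent to
// https://{your-store}.myshopify.com/admin/api/2025-04/graphql.json
// with header X-Shopify-Access-Token: shpat_...
const body = {
  query: `
    mutation customerCreate($input: CustomerInput!) {
      customerCreate(input: $input) {
        customer { id email }
        userErrors { field message }
      }
    }`,
  variables: {
    input: {
      firstName: $json.first_name,   // columns from the Google Sheet row
      lastName: $json.last_name,
      email: $json.email,
      phone: $json.mobile_phone,     // must be international format, e.g. +61414999999
    },
  },
};
```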
by Atta
### What it does

Customer support calls contain a wealth of valuable feedback and urgent issues, but manually reviewing audio files is inefficient. This workflow acts as an AI assistant for your call log, transforming unstructured audio recordings into structured, actionable data. It provides a clean summary, sentiment analysis, and a list of required actions for every call, eliminating the need for manual listening and ensuring key insights are never missed.

### How it works

The workflow runs on a schedule to fully automate the call analysis process from start to finish.

1. **Fetch New Recordings**: The workflow triggers on a schedule (e.g., every 5 minutes), searches a designated Google Drive folder for new call recordings, and downloads any new files it finds.
2. **Transcribe Audio**: Each audio file is sent to the ElevenLabs API to be converted from speech to a text transcript. The result is then formatted into a conversational, multi-speaker format.
3. **AI-Powered Analysis**: The transcript is passed to a Google Gemini node, which is prompted to return a structured JSON object. This JSON contains a complete analysis of the call, including speaker identification (agent_name, client_name), a summary, the client_sentiment, a call_topic, a department_tag, and a list of action_items.
4. **Log the Results**: The complete, structured analysis output from Gemini is appended as a new row in a Google Sheet, creating a centralized log with all the extracted call details and the full transcript.
5. **Take Action**: The workflow uses conditional logic based on the detected sentiment:
   - Negative sentiment: If a call was negative, an immediate alert containing the call summary and action items is sent to a manager's group on Telegram.
   - Positive sentiment: If a call was positive, a kudos message is sent to the support team's Telegram channel to celebrate good work.
6. **File Management**: After processing, the original audio file is automatically moved to a separate "Processed" folder in Google Drive to ensure it isn't analyzed again.

### Setup Instructions

To configure this workflow, you will need to set up your file storage in Google Drive, create a Google Sheet for logging, and configure credentials for all connected services.

**Required Credentials**

- **Google**: You will need Google OAuth2 credentials that have permission for Google Drive, Google Sheets, and the Google AI (Gemini) APIs.
- **ElevenLabs**: Sign up for an account at ElevenLabs and get your API key. You will add this directly into the HTTP Request node for transcription.
- **Telegram**: Create a bot using the BotFather in Telegram to get your Bot Token. You will also need the specific Chat ID for the managers' channel and the team's channel.

**Step-by-Step Configuration**

1. **Google Drive**: Create two folders in your Google Drive: one named "Company - Support Call Recordings" and another named "Processed Recordings". Copy the unique Folder ID from the URL for each and paste it into the respective Google Drive nodes.
2. **Google Sheets**: Create a new Google Sheet to log the results. In the first row, create the following headers exactly as written: Recording File, Sentiment, Department, Topic, Agent, Client, Summary, Actions, and Fulltext. Copy the Sheet ID from the URL and paste it into the "Log Recording Analysis" (Google Sheets) node.
3. **ElevenLabs Node**: In the "Convert Speech To Text" (HTTP Request) node, make sure the URL is set to the correct ElevenLabs API endpoint for speech-to-text. Add your ElevenLabs API key to the authentication header.
4. **Telegram Nodes**: In the "Send Alert To Managers" node, enter the Chat ID for your managers' group. In the "Send Kudos to Team" node, enter the Chat ID for the main team channel.
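For orientation, the transcription call configured in the "Convert Speech To Text" node boils down to something like the sketch below. The endpoint path, header name, and model id are assumptions based on ElevenLabs' speech-to-text (Scribe) API, so verify them against the ElevenLabs documentation.

```javascript
// Sketch of the ElevenLabs speech-to-text request (assumed endpoint and field names - check the API docs).
import { readFile } from 'node:fs/promises';

const audioBytes = await readFile('call-recording.mp3'); // stand-in for the binary from the Drive download node
const form = new FormData();
form.append('file', new Blob([audioBytes]), 'call-recording.mp3');
form.append('model_id', 'scribe_v1'); // assumption: ElevenLabs' current STT model id

const response = await fetch('https://api.elevenlabs.io/v1/speech-to-text', {
  method: 'POST',
  headers: { 'xi-api-key': process.env.ELEVENLABS_API_KEY },
  body: form,
});
const result = await response.json();
console.log(result.text); // full transcript; word/speaker details feed the multi-speaker formatting step
```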
### How to Adapt the Template

This workflow is a powerful starting point. Based on your specific needs, you can customize the inputs, the AI analysis, the logging method, and the final actions.

**Input Method**

- **Change File Source**: Instead of Google Drive, you can adapt the workflow to fetch recordings from other services like Dropbox, OneDrive, or a custom FTP server.
- **Use a Webhook**: Replace the Schedule Trigger with a Webhook Trigger to process calls in real time as they are added from your call software (if it supports webhooks).

**Final Actions**

- **Create Service Tickets**: This is a key area for customization. Replace the Telegram nodes with nodes for ticketing systems. For a negative call, you can automatically create a high-priority ticket in Jira, Zendesk, or ServiceNow.
- **Create Tasks**: For calls with specific action items, use a node like Asana, Trello, or Todoist to automatically create a task and assign it to the correct team member.
- **Send Email Notifications**: Use the Send Email node to dispatch summaries and alerts to stakeholders who are not on Telegram.

**Logging and Analysis**

- **Log to a Database**: Instead of Google Sheets, you can use a Postgres, MySQL, or data warehouse node to log the structured data for more advanced business intelligence and dashboarding.
- **Customize the AI Prompt**: The prompt in the Google Gemini node is the "brain" of the operation. It specifically instructs the AI to return a JSON object with a predefined structure. To change what data is extracted, you can modify this structure in the prompt. For example, you could add a new key-value pair like "competitor_mentioned": "Name of competitor if mentioned, otherwise null" to the JSON structure.

The current workflow asks the AI to populate a JSON object like this:

```json
{
  "speaker_identification": {
    "agent": "speaker_id",
    "agent_name": "The agent's name",
    "client": "client_id",
    "client_name": "The client's name"
  },
  "summary": "A concise summary.",
  "client_sentiment": "Positive, Negative, or Neutral",
  "call_topic": "A brief phrase for the topic.",
  "department_tag": "The most relevant department.",
  "action_items": [
    "A list of actionable tasks."
  ]
}
```

- **Change AI or STT Service**: You can swap out the Google Gemini node for an OpenAI node, or change the HTTP Request node to use a different transcription service like AssemblyAI or Deepgram.
by Lucas Peyrin
### How it works

This workflow automates your initial hiring pipeline by creating an AI-powered CV scanner. It collects job applications through a web form, uses AI to analyze the candidate's CV against your job description, and neatly organizes the results in a Google Sheet.

Here's the step-by-step process:

1. **The Application Form**: A Form Trigger provides a public web form for candidates to submit their name, email, and CV (as a PDF).
2. **Initial Logging**: As soon as an application is submitted, the candidate's name and email are added to a Google Sheet. This ensures every applicant is logged, even if a later step fails.
3. **CV Text Extraction**: The workflow uses Mistral's OCR model to accurately extract all the text from the uploaded CV PDF.
4. **AI Analysis**: The extracted text is sent to Google Gemini. A detailed prompt instructs the AI to act as a hiring assistant, scoring the CV against the specific requirements of your job role and providing a detailed explanation for its score.
5. **Structured Output**: A JSON Output Parser ensures the AI's analysis is returned in a clean, structured format, making the data reliable.
6. **Final Record**: The AI-generated qualification score and explanation are added to the candidate's row in the Google Sheet, giving you a complete, analyzed list of applicants.

### Set up steps

Setup time: ~15 minutes. You'll need API keys for Mistral and Google AI, and to connect your Google account.

1. **Get your Mistral API key**:
   - Visit the Mistral Platform at console.mistral.ai/api-keys.
   - Create and copy your API key.
   - In the workflow, go to the Extract CV Text node, click the Credential dropdown, and select + Create New Credential. Paste your key into the API Key field and save.
2. **Get your Google AI API key**:
   - Visit Google AI Studio at aistudio.google.com/app/apikey.
   - Click "Create API key in new project" and copy the key.
   - In the workflow, go to the Gemini 2.5 Flash Lite node, click the Credential dropdown, and select + Create New Credential. Paste your key into the API Key field and save.
3. **Connect your Google account**:
   - Select the Create 'CVs' Spreadsheet node, click the Credential dropdown, and select + Create New Credential to connect your Google account.
   - Repeat this for the Log Candidate Submission and Add CV Analysis nodes, selecting the credential you just created.
4. **Create your spreadsheet**: Click the "play" icon on the Start Here node to run it. This will create a new Google Sheet in your Google Drive named "CVs" with the correct columns.
5. **Customize the job role**: Go to the AI Qualification node. In the Text parameter, find the job_requirements section and replace the example job description with your own. Be as detailed as possible for the best results.
6. **Start screening!**
   - Activate the workflow using the toggle at the top right.
   - Go to the Application Form node and click the "Open Form URL" button.
   - Fill out the form with a test application and upload a sample CV.
   - Check your Google Sheet to see the AI's analysis appear within moments.
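For reference, the structured result the JSON Output Parser hands back per candidate looks roughly like the hypothetical example below; the actual schema is defined inside the workflow's AI Qualification node.

```javascript
// Hypothetical shape of one candidate's analysis (illustrative values only).
const analysis = {
  score: 78, // qualification score against the job_requirements section
  explanation:
    'Strong match on the required skills; lacks the preferred years of team-lead experience.',
};
```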
by Khairul Muhtadin
Automatically extract job listings from any website URL, format them with AI, and publish them directly to WordPress. Just send a URL via Telegram, and watch as the workflow scrapes the job details, enhances the content with GPT, and creates a polished post on your site.

### Why Use Job Repost?

- **Save countless hours**: Automatically extract, process, and publish job offers from any website, freeing your time from repetitive tasks.
- **Eliminate human errors**: Say goodbye to typos and missed fields; every job post is validated before going live.
- **Boost engagement**: Fresh, well-structured job listings attract more candidates, improving your site's reach and authority.
- **Stay ahead**: Leveraging AI with GPT means your content is not just automated but polished and SEO-friendly, the digital assistant you never knew you needed.

### Perfect For

- **Job board managers**: Want to aggregate listings from multiple sources with minimal effort
- **Recruiters & HR teams**: Who need to streamline job posting workflows without technical hassles
- **Content creators & marketers**: Looking to automate publishing while maintaining style and SEO standards

### How It Works

| Step | Process | Description |
|------|---------|-------------|
| 1 | Trigger | Send a job URL via the Telegram bot to initiate the process |
| 2 | Extract | The Firecrawl API scrapes and extracts clean content from the provided URL |
| 3 | Process | Job data is extracted via AI, text is split and cleaned, and job categories and types are mapped to your system |
| 4 | Smart Logic | GPT crafts formatted job posts, intelligent validation ensures all key data is present, and default values fill in the blanks if necessary |
| 5 | Output | Posts are automatically published to WordPress with company logos uploaded, and success or error notifications are sent via Telegram |
| 6 | Storage | Uses a Supabase vector store for managing document embeddings, ensuring quick lookup and reference compliance |

### Quick Setup

1. Import the provided JSON file into your n8n instance.
2. Add credentials:
   - Firecrawl API key
   - Google Drive OAuth2 (for RAG storage)
   - OpenAI API
   - WordPress API
   - Telegram API
   - Supabase
3. Customize: Telegram bot token, WordPress URLs, and default images and category mappings if needed.
4. Update: URLs and API tokens where placeholders are used.
5. Test: Send a job URL to your Telegram bot to verify accurate extraction and posting.

### You'll Need

- Active n8n instance
- Firecrawl account with API access
- Google Drive account for RAG document storage
- OpenAI account with GPT API access
- WordPress site with the autojob plugin and API enabled
- Telegram bot for URL submission and notifications
- Supabase account for vector store management

### Level Up Ideas

- Add multi-language support to expand global reach
- Support batch URL processing for multiple jobs at once
- Integrate Slack or email notifications for wider team alerts
- Use more AI nodes to summarize or rate job offers for quality control
- Schedule periodic cleanup of the vector store for performance optimization
- Add analytics tracking for published job performance

### Nodes Used

Core components:

- **Firecrawl HTTP Request** (web scraping and content extraction)
- **Google Drive** (RAG document storage)
- **Supabase Vector Store**
- **OpenAI** (embeddings, GPT extraction)
- **Code nodes** for mapping categories
- **Telegram Trigger & Message**
- **HTTP Request** (for the WordPress API and image uploads)
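For reference, the Firecrawl extraction step (step 2 in the table above) corresponds to a request along these lines. The endpoint and options shown are assumptions based on Firecrawl's v1 scrape API, and the example URL is a placeholder.

```javascript
// Sketch of the Firecrawl scrape request made by the HTTP Request node (verify against Firecrawl's docs).
const jobUrl = 'https://example.com/jobs/frontend-developer'; // placeholder: the URL received via Telegram

const response = await fetch('https://api.firecrawl.dev/v1/scrape', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    url: jobUrl,
    formats: ['markdown'],   // clean text for the GPT extraction step
    onlyMainContent: true,   // drop navigation/footer noise
  }),
});
const { data } = await response.json(); // data.markdown holds the page content passed to the AI nodes
```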
Made by: Khaisa Studio
Tags: automation, recruitment, job-posting, wordpress, AI, web-scraping, firecrawl
Category: Human Resources, Recruitment, WordPress, Scraping

Need a custom workflow? Contact me on LinkedIn or via the web.
by Trung Tran
## Decodo Scraper API Workflow Template (n8n Automation: Amazon Book Purchase Report)

Watch the demo video below:

> This workflow demos how to use the Decodo Scraper API to crawl any public web page (headless JS, device emulation: mobile/desktop/tablet), extract structured product data from the returned HTML, generate a purchase-ready report, and automatically deliver it as a Google Doc + PDF to Slack/Drive.

### Who's it for

- **Creators / Analysts** who need quick product lists (books, gadgets, etc.) with prices/ratings.
- **Ops & Marketing teams** building weekly "top picks" reports.
- **Engineers** validating the Decodo Scraper API + LLM extraction pattern before scaling.

### How it works / What it does

1. **Trigger** - Manually run the workflow.
2. **Edit Fields (manual)** - Provide inputs:
   - targetUrl (e.g., an Amazon category/search/listing page)
   - deviceType (desktop | mobile | tablet)
   - Optional: maxItems, notes, reportTitle, reportOwner
3. **Scraper API Request (HTTP Request - POST)** - Calls the Decodo Scraper API with:
   - The URL to crawl, with headless JS enabled
   - Device emulation (UA + viewport)
   - Optional waitFor / executeJS to ensure late-loading content is captured
4. **HTML Response Parser (Code/Function or HTML node)** - Pulls the HTML string from the Decodo response and normalizes it (strips scripts/styles, collapses whitespace).
5. **Product Analyzer Agent (LLM + Structured Output Parser)** - Prompts an LLM to extract structured "book" objects from the HTML. The Structured Output Parser enforces a strict JSON schema and drops malformed items.
6. **Build the Book Purchase Report (Code/LLM)** - Converts the JSON array into a Markdown (or HTML) report with:
   - An executive summary (top picks, average price/rating)
   - A table of items (rank, title, author, price, rating, link)
   - A "Recommended to buy" shortlist (rules configurable)
   - Notes / owner / timestamp
7. **Configure Google Drive Folder (manual)** - Choose/create a Drive folder for the output artifacts.
8. **Create Document File (Google Docs API)** - Creates a Doc from the generated Markdown/HTML.
9. **Convert Document to PDF (Google Drive export)** - Exports the Doc to PDF.
10. **Upload report to Slack** - Sends the PDF (and/or Doc link) to a chosen Slack channel with a short summary.
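For orientation, a single item returned by the Structured Output Parser looks roughly like the example below. The field names follow the report columns listed above; the exact schema lives in the workflow's parser node, so treat this as illustrative.

```javascript
// Example of one extracted "book" object (illustrative values and field names).
const book = {
  rank: 1,
  title: 'Example Book Title',
  author: 'Jane Author',
  price: 14.99,
  currency: 'USD',
  rating: 4.7,
  ratings_count: 1240, // used by the configurable "Recommended to buy" rules
  url: 'https://www.amazon.com/dp/XXXXXXXXXX',
};
```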
### How to set up

**1. Prerequisites**

- n8n (self-hosted or Cloud)
- Decodo Scraper API key
- OpenAI (or compatible) API key for the Analyzer Agent
- Google Drive/Docs credentials (OAuth2)
- Slack Bot/User token (files:write, chat:write)

**2. Environment variables (recommended)**

- DECODO_API_KEY
- OPENAI_API_KEY
- DRIVE_FOLDER_ID (optional default)
- SLACK_CHANNEL_ID

**3. Nodes configuration (high level)**

- Edit Fields (Set node)
- Scraper API Request (HTTP Request - POST)
- HTML Response Parser (Code node)
- Product Analyzer Agent
- Build Book Purchase Report (Code/LLM)
- Create Document File
- Convert to PDF
- Upload to Slack

### Requirements

- **Decodo**: Active API key and endpoint access. Be mindful of concurrency/rate limits.
- **Model**: GPT-4o/4.1-mini or similar for reliable structured extraction.
- **Google**: OAuth client (Docs/Drive scopes). Ensure n8n can write to the target folder.
- **Slack**: Bot token with files:write + chat:write.

### How to customize the workflow

- **Target site**: Change targetUrl to any public page (category, search, or listing). For other domains (not Amazon), tweak the LLM guidance (e.g., price/label patterns).
- **Device emulation**: Switch deviceType to mobile to fetch mobile-optimized markup (often simpler DOMs).
- **Late-loading pages**: Adjust waitFor.selector or use waitUntil: "networkidle" (if supported) to ensure full content loads.
- **Client-side JS**: Extend executeJS if you need to interact (scroll, click "next", expand sections). You can also loop over pagination by iterating URLs.
- **Extraction schema**: Add fields (e.g., discount_percent, bestseller_badge, prime_eligible) and update the Structured Output schema accordingly.
- **Filtering rules**: Modify the recommendation logic (e.g., minimum ratings count, price bands, languages).
- **Report branding**: Add a logo, cover page, or footer with company info; switch to HTML + inline CSS for richer Docs formatting.
- **Destinations**: Besides Slack & Drive, add Email, Notion, Confluence, or a database sink.
- **Scheduling**: Add a Cron trigger for weekly/monthly auto-reports.
by Sona Labs
Automatically identify ICP matches by enriching basic company records with Sona Enrich data, combining web scraping, AI analysis, and the structured attributes that define your ideal customer.

Import company domains from a Google Sheet, automatically analyze their websites with AI, enrich them with firmographic data via Sona Enrich, and sync the results to HubSpot, so you can quickly discover and target your ideal customers.

### How it works

**Step 1: Data Input & Web Scraping**

- Reads company domains from your Google Sheet
- Scrapes each website's content via HTTP requests
- Extracts and cleans HTML content
- Removes navigation, footers, and noise

**Step 2: AI Analysis**

- Sends cleaned content to the OpenAI Chat Model
- Extracts structured company intelligence (industry, positioning, features, personas)
- Captures and analyzes pricing, pros/cons, and value propositions
- Aggregates all AI results into a standardized format
- Advanced users: you can modify the data that's generated and then add custom fields to HubSpot

**Step 3: HubSpot Preparation**

- Creates custom fields in HubSpot CRM
- Prepares AI-extracted data for import
- Splits aggregated data into individual company records
- Ready for batch processing

**Step 4: Enrich & Sync to HubSpot**

- Loops through each company one by one
- Enriches with the Sona API (firmographics, revenue, employees, funding, and more)
- Creates a company record in HubSpot
- Formats and populates all custom fields
- Combines AI insights + Sona data in one complete profile
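Under the hood, the "Create a Company" step in Step 4 boils down to a HubSpot CRM v3 request like the sketch below, using the legacy/private app token described in the setup section. The AI- and Sona-derived property names shown here are illustrative assumptions; map them to whatever custom properties your workflow creates.

```javascript
// Sketch of the HubSpot company creation the n8n node performs (property names are examples).
const response = await fetch('https://api.hubapi.com/crm/v3/objects/companies', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.HUBSPOT_ACCESS_TOKEN}`, // token from the legacy app's Auth tab
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    properties: {
      name: 'Example Co',
      domain: 'example.com',
      industry: 'Software',                    // from the AI analysis
      annualrevenue: '25000000',               // from Sona Enrich
      value_proposition: 'AI-assisted ...',    // example custom property created earlier in the workflow
    },
  }),
});
const company = await response.json(); // created record, including its HubSpot id
```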
### What you'll get

The workflow enriches each company record with:

- **Web-Scraped Intelligence**: Business descriptions, features, and positioning directly from their website
- **AI-Analyzed Insights**: Value propositions, target personas, pricing models, and competitive advantages interpreted by AI
- **Firmographic Data**: Company size, employee count, revenue estimates, headquarters location, and more via Sona Enrich
- **Technographic Data**: Technology stack, platforms, and tools the company uses
- **Industry Classification**: Precise industry categorization and market type (B2B/B2C)
- **Funding & Growth**: Investment rounds, funding status, and growth indicators
- **Custom HubSpot Properties**: All data automatically mapped and synced to your CRM for immediate use

### Why use this

- **Complete intelligence gathering**: Combines three powerful data sources (web scraping, AI, and Sona enrichment) for maximum insight depth
- **Personalize at scale**: Leverage actual company intelligence to craft relevant, informed outreach that resonates
- **Intelligent segmentation**: Build precise account lists by industry, tech stack, business model, or company size
- **Accelerate research**: Eliminate hours of manual company investigation and save 15-30 minutes per prospect
- **Improve conversion**: Engage prospects with context-rich conversations that demonstrate deep understanding
- **Enhanced lead scoring**: Build sophisticated scoring models with comprehensive firmographic and technographic signals
- **Automated updates**: Keep HubSpot records current with scheduled enrichment runs (daily/weekly)

### Setup instructions

Before you start, you'll need:

- A Google Sheet with company websites (column named "Website Domain")
- An OpenAI API key for AI analysis (sign up here)
- Sona API credentials (get access here)
- An app token from HubSpot, created via a legacy app:
  1. Go to HubSpot Settings > Integrations > Legacy Apps
  2. Click Create Legacy App
  3. Select Private (for one account)
  4. In the scopes section, enable the following permissions:
     - crm.schemas.companies.write
     - crm.objects.companies.write
     - crm.schemas.companies.read
  5. Click Create
  6. Copy the access token from the Auth tab
- An n8n cloud or self-hosted instance

**Configuration steps**

1. **Prepare your data**: Create a Google Sheet with a "Website Domain" column and add 2-3 test companies (e.g., example.com)
2. **Connect Google Sheets**: In the "Get row(s) in sheet" node, authenticate and select your spreadsheet and sheet name
3. **Configure web scraping**: Update the HTTP Request node with your preferred scraping method or data source URL
4. **Set up the AI Agent**: Add your OpenAI API key and customize the extraction prompt to define which company fields you want (industry, personas, features, etc.)
5. **Create HubSpot custom fields**: Review the "Create Custom HubSpot Fields" node and adjust property names to match your CRM structure
6. **Add Sona credentials**: In the "Sona Enrich" node within the loop, authenticate with your Sona API key
7. **Connect HubSpot**: Authenticate in both "Create a Company" nodes using your HubSpot API key or OAuth2
8. **Map enriched data**: In the "Format Custom Properties" node, configure how Sona and AI data maps to your HubSpot fields
9. **Test with sample data**: Run the workflow with 2-3 test companies and verify records appear correctly in HubSpot with all custom properties populated
10. **Add error handling**: Configure notifications for failed enrichments or API errors (optional but recommended)
11. **Scale and automate**: Process your full company list, then optionally add a Schedule Trigger for automatic daily or weekly enrichment