by Yaron Been
This cutting-edge n8n automation is a powerful market research tool designed to continuously monitor and capture User-Generated Content (UGC) opportunities on Fiverr. By intelligently scraping, parsing, and logging gig data, this workflow provides:

- **Automated Market Scanning**: daily scrapes of Fiverr UGC gigs, real-time market intelligence, and consistent, hands-off data collection
- **Intelligent Data Extraction**: parses complex HTML structures, captures key gig details, and transforms unstructured web data into actionable insights
- **Seamless Data Logging**: automatic Google Sheets integration, comprehensive gig marketplace tracking, and historical data preservation

## Key Benefits

- **Full Automation**: continuous market research
- **Smart Filtering**: detailed UGC gig insights
- **Instant Reporting**: real-time market trends
- **Time-Saving**: eliminates manual research

## Workflow Architecture

### Stage 1: Automated Triggering

- **Scheduled Scraping**: daily gig discovery
- **Precise Timing**: configurable run intervals
- **Consistent Monitoring**: always-on market intelligence

### Stage 2: Web Scraping

- **HTTP Request**: fetches Fiverr search results
- **Dynamic Headers**: bypass potential scraping restrictions
- **Targeted Search**: UGC-specific gig discovery

### Stage 3: Data Extraction

- **HTML Parsing**: extracts critical gig information
- **Structured Data Collection**: gig prices, seller names, gig titles, and direct gig URLs

### Stage 4: Data Logging

- **Google Sheets Integration**: automatic data storage
- **Historical Tracking**: build a comprehensive gig database
- **Easy Analysis**: spreadsheet-ready format

## Potential Use Cases

- **Content Creators**: market rate research
- **Freelance Platforms**: competitive intelligence
- **Marketing Agencies**: UGC trend analysis
- **Recruitment Specialists**: talent pool mapping
- **Business Strategists**: market opportunity identification

## Setup Requirements

- **Fiverr Search Configuration**: targeted search keywords and specific UGC categories
- **Web Scraping Preparation**: user-agent rotation strategy, potential proxy configuration, and robust error handling
- **Google Sheets Setup**: connected Google account, prepared spreadsheet, and appropriate sharing permissions
- **n8n Installation**: cloud or self-hosted instance; import the workflow configuration and configure API credentials

## Future Enhancement Suggestions

- AI-powered gig trend analysis
- Advanced data visualization
- Real-time price change alerts
- Machine learning market predictions
- Multi-platform gig tracking

## Ethical Considerations

- Respect Fiverr's Terms of Service
- Implement responsible scraping practices
- Avoid overwhelming target websites
- Use data for legitimate research purposes

## Technical Recommendations

- Implement exponential backoff for requests
- Use randomized delays between scrapes
- Maintain flexible CSS selector strategies
- Consider rate limiting and IP rotation

A Code node sketch of the extraction step appears at the end of this description.

## Connect With Me

Ready to unlock market insights?

- Email: Yaron@nofluff.online
- YouTube: @YaronBeen
- LinkedIn: Yaron Been

Transform your market research with intelligent, automated workflows!
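For Stage 3, the extraction logic can live in a single n8n Code node. Below is a minimal sketch, assuming the raw search HTML arrives in a `data` field from the HTTP Request node. Fiverr's markup changes frequently, so the regex pattern is an illustrative assumption rather than the workflow's actual selector, and should be adapted to whatever the live page returns.

```javascript
// Hypothetical n8n Code node sketch: parse gig cards out of raw Fiverr search HTML.
// The card pattern below is an assumption; adjust it to the markup the
// HTTP Request node actually returns.
const html = $input.first().json.data ?? '';

const gigs = [];
// Assumed shape: each card holds an <a href="/username/slug">title</a>
// followed (within ~500 chars) by a "$<price>" value.
const cardRe = /<a[^>]+href="(\/[^"]+)"[^>]*>([^<]{10,120})<\/a>[\s\S]{0,500}?\$(\d+)/g;
let m;
while ((m = cardRe.exec(html)) !== null) {
  gigs.push({
    url: `https://www.fiverr.com${m[1]}`,
    title: m[2].trim(),
    price: Number(m[3]),
    seller: m[1].split('/')[1] ?? '', // seller name is the first path segment
    scrapedAt: new Date().toISOString(),
  });
}

// One item per gig, ready for the Google Sheets node.
return gigs.map((gig) => ({ json: gig }));
```

Pairing this with a Wait node set to a randomized interval between runs keeps the scraper within the responsible-use guidelines above.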
by Tony Duffy
**Read and store IoT sensor data with the MQTT Trigger and InfluxDB**

Contact: tonyduffy@protonmail.com

This workflow is for users who want a practical example of how to obtain data from remote IoT systems using the MQTT protocol in an n8n environment. The template provides the typical n8n node implementation and configuration settings necessary to read and store IoT data.

The workflow reads temperature and humidity data from a remote IoT system, in this case a DHT22 sensor connected to an ESP32 microcontroller. The data is parsed into the correct JSON format and then ingested into an InfluxDB data bucket. From there the stored temperature and humidity values can be displayed in real time. The workflow can easily be modified to read data from any MQTT-driven device.

## Remote IoT sensor setup

The ESP32 controller with the DHT22 sensor runs on a Wokwi simulator. The simulator uses MicroPython to publish an MQTT "wokwi-weather" topic with the temperature and humidity payloads to an online Mosquitto MQTT broker. The n8n MQTT Trigger node subscribes to the topic on the broker and reads the payload values whenever changes are published. The Code node then prepares the payload in JSON format, and the HTTP Request node ingests the data into an InfluxDB bucket.

## How to customise this workflow to your needs

### Wokwi IoT ESP32 simulator

You will need to set up a free account at Wokwi.com. Once it is created, search for the project "Micro-Python MQTT Weather Logger (ESP32)". When the MQTT weather logger project is open, change lines 28 and 29 to the following:

Line 28: MQTT_CLIENT_ID = ""
Line 29: MQTT_BROKER = "test.mosquitto.org"

You can then start the simulation by clicking the green arrow; it will connect to the Mosquitto broker, and the "wokwi-weather" topic will be published. Clicking on the DHT22 sensor brings up the temperature and humidity sliders, so you can change the values and send updated payloads to the broker.

### InfluxDB

You will require access to a functioning InfluxDB database to use this workflow. Note that you will have to provide the following for the HTTP Request node to connect to InfluxDB:

- The URL and port of the desired InfluxDB instance (in this case InfluxDB runs locally on port 8086, i.e. http://localhost:8086)
- An InfluxDB bucket for the data (in this case the bucket is named "wokwi-data")
- The organization ID of the InfluxDB instance, which can be obtained from the InfluxDB admin page
- A generated API token with read and write access to the bucket, created from the InfluxDB admin page

### n8n workflow

The MQTT Trigger node is configured to subscribe to the "wokwi-weather" topic on the test Mosquitto MQTT broker and reads the temperature and humidity data sent by the ESP32. The Code node uses JavaScript to move the temperature and humidity payloads into JSON format; this is flexible and easily modified (see the sketch below). The HTTP Request node posts the payloads to the InfluxDB bucket. When the above is configured, the workflow should function correctly.

Thanks to the many who have downloaded this template. Let me know what you would like to build. Contact me at tonyduffy@protonmail.com.
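For reference, here is a hedged sketch of what the Code node and the HTTP Request node configuration might look like together. It assumes an InfluxDB v2 instance and an MQTT payload of the form `{ temperature, humidity }`; note that the v2 write endpoint expects line protocol in the request body rather than raw JSON.

```javascript
// n8n Code node sketch, assuming the MQTT Trigger delivers
// { temperature: 24.5, humidity: 60.1 } from the wokwi-weather topic.
const { temperature, humidity } = $input.first().json;

// InfluxDB's v2 write endpoint takes line protocol:
//   measurement,tag=value field=value,field2=value2
const line = `wokwi_weather,source=esp32 temperature=${Number(temperature)},humidity=${Number(humidity)}`;

return [{ json: { line } }];

// HTTP Request node (fill in the org ID and token from your own InfluxDB setup):
//   POST http://localhost:8086/api/v2/write?org=<your-org-id>&bucket=wokwi-data&precision=s
//   Header: Authorization: Token <your-api-token>
//   Body (raw text): {{ $json.line }}
```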
by Oneclick AI Squad
This n8n workflow automates subdomain creation and deletion on GoDaddy using their API, triggered via email requests. This empowers developers to manage subdomains directly without involving DevOps for minor tasks.

## Good to know

- Ensure GoDaddy API credentials are securely configured to avoid unauthorized access.
- Email parsing accuracy depends on the consistency of request formats.

## How it works

1. Detect new email requests using the Start Workflow (GET Request) node.
2. Use the Extract Data from Email node to parse relevant details (e.g., subdomain name, action type).
3. Validate the action type with the Validate Action Type node to proceed with create (true) or delete (false).
4. If true, the Create Subdomain node sends a POST request to GoDaddy's API to create the subdomain (see the sketch below).
5. If false, the Delete Subdomain node sends a DELETE request to remove the subdomain.
6. The Send Email Response node notifies the requester of the action's success or failure.

## How to use

- Import the workflow into n8n and configure the nodes with your GoDaddy API and email credentials.
- Test with sample email requests to ensure proper parsing and API calls.

## Requirements

- GoDaddy API credentials
- Email service (e.g., SMTP or API) for notifications

## Customising this workflow

Adjust the Extract Data from Email node to match your email format, or add additional validation steps for security.
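For orientation, here is a hedged sketch of the two GoDaddy DNS calls behind the Create Subdomain and Delete Subdomain nodes. It assumes the v1 Domains API and treats a subdomain as a simple A record; the domain, key, and IP values are placeholders, and you should verify the endpoints against GoDaddy's current documentation.

```javascript
// Hedged sketch of GoDaddy DNS record management for subdomains.
// Swap the A record for a CNAME plus target hostname if that fits your setup.
const DOMAIN = 'example.com';          // your GoDaddy-managed domain (placeholder)
const AUTH = 'sso-key <key>:<secret>'; // credentials from developer.godaddy.com

// Create: PATCH appends new records to the zone.
async function createSubdomain(name, ip) {
  await fetch(`https://api.godaddy.com/v1/domains/${DOMAIN}/records`, {
    method: 'PATCH',
    headers: { Authorization: AUTH, 'Content-Type': 'application/json' },
    body: JSON.stringify([{ type: 'A', name, data: ip, ttl: 3600 }]),
  });
}

// Delete: removes the records matching the given type and name.
async function deleteSubdomain(name) {
  await fetch(`https://api.godaddy.com/v1/domains/${DOMAIN}/records/A/${name}`, {
    method: 'DELETE',
    headers: { Authorization: AUTH },
  });
}
```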
by Tom
**Markdown to Notion Blocks Converter**

Transform markdown-formatted text into properly structured Notion page content with this comprehensive workflow.

## Overview

This workflow automatically converts markdown text into Notion's block format and inserts it directly into a Notion page. Perfect for content creators, documentation teams, and anyone who needs to migrate markdown content to Notion.

## Features

- **Complete Markdown Support**: handles headers (H1-H4), paragraphs, lists, quotes, code blocks, and horizontal rules
- **Rich Text Formatting**: preserves bold, italic, and link formatting
- **Smart Text Processing**: generates plain-text excerpts and maintains the original content structure
- **Direct Notion Integration**: automatically inserts converted blocks into your specified Notion page
- **Batch Processing**: efficiently handles large content blocks

## What It Does

1. Takes markdown-formatted text as input
2. Parses and converts it to Notion's block structure
3. Handles complex formatting, including headers and subheaders, bulleted and numbered lists, code blocks with syntax highlighting, blockquotes, bold and italic text, links, and horizontal dividers
4. Uploads the converted content directly to your Notion page (see the sketch below)

## Use Cases

- **Content Migration**: move existing markdown documentation to Notion
- **Automated Publishing**: convert blog posts or articles from markdown to Notion
- **Documentation Workflows**: streamline technical documentation processes
- **Content Syndication**: publish the same content across multiple platforms

## Requirements

- Notion API credentials
- Target Notion page ID
- Markdown-formatted source content

## Setup

1. Configure your Notion API credentials
2. Replace the page ID in the HTTP Request node with your target Notion page
3. Connect your markdown data source (replace the mock data node)
4. Execute the workflow
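As a rough illustration of the conversion step, the sketch below maps a few common markdown constructs to Notion block objects and shows the append call the HTTP Request node makes. It covers only headings, bullets, and paragraphs; the full workflow handles the remaining block types listed above. Note that Notion supports three heading levels, so H4 would need to be mapped down.

```javascript
// Minimal markdown-to-Notion-blocks sketch, assuming the public Notion API
// (Notion-Version 2022-06-28).
function markdownToBlocks(markdown) {
  return markdown.split('\n').filter(Boolean).map((line) => {
    const text = (content) => [{ type: 'text', text: { content } }];
    const h = line.match(/^(#{1,3})\s+(.*)$/);
    if (h) {
      const type = `heading_${h[1].length}`; // heading_1 .. heading_3
      return { object: 'block', type, [type]: { rich_text: text(h[2]) } };
    }
    if (line.startsWith('- ')) {
      return {
        object: 'block',
        type: 'bulleted_list_item',
        bulleted_list_item: { rich_text: text(line.slice(2)) },
      };
    }
    return { object: 'block', type: 'paragraph', paragraph: { rich_text: text(line) } };
  });
}

// The HTTP Request node then appends the blocks to the target page:
//   PATCH https://api.notion.com/v1/blocks/<page_id>/children
//   Headers: Authorization: Bearer <token>, Notion-Version: 2022-06-28
//   Body: { "children": markdownToBlocks(input) }
```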
by Anurag
## Description

This workflow automates the extraction of structured data from invoices or similar documents using Docsumo's API. Users can upload a PDF via an n8n form trigger, which is then sent to Docsumo for processing and structured parsing. The workflow fetches key document metadata and all line items, reconstructs each invoice row with combined header and item details, and finally exports all results as an Excel file. Ideal for automating invoice data entry, reporting, or integrating with accounting systems.

## How It Works

1. A user uploads a PDF document using the integrated n8n form trigger.
2. The workflow securely sends the document to Docsumo via REST API.
3. After uploading, it checks and retrieves the parsed document results.
4. Header information and table line items are extracted and mapped into structured records (see the sketch below).
5. The complete result is exported as an Excel (.xls) file.

## Setup Steps

1. **Docsumo account**: Register and obtain your API key from Docsumo.
2. **n8n credentials manager**: Add your Docsumo API key as an HTTP header credential (never hardcode the key in the workflow).
3. **Workflow configuration**: In the HTTP Request nodes, set the authentication to your saved Docsumo credentials, and update the file type or document type in the request (e.g., "type": "invoice") as needed for your use case.
4. **Testing**: Enable the workflow and use the built-in form to upload a sample invoice for extraction.

## Features

- Supports PDF uploads via n8n's built-in form or via API/webhook extension
- Sends files directly to Docsumo for document data extraction using secure credentials
- Extracts invoice-level metadata (number, date, vendor, totals) and full line-item tables
- Consolidates all data in an easy-to-use Excel format for download or integration
- Modular node structure, easily extensible for further automation

## Prerequisites

- Docsumo account with API access enabled
- n8n instance with Form, HTTP Request, Code, and Excel/Convert to File nodes
- Working Docsumo API key stored securely in n8n's credential manager

## Example Use Cases

| Scenario            | Benefit                                  |
|---------------------|------------------------------------------|
| Invoice Automation  | Extract line items and metadata rapidly  |
| Receipts Processing | Parse and digitize business receipts     |
| Bulk Bill Imports   | Batch process bills for analytics        |

## Notes

- **Credentials security**: Do not store your API key directly in HTTP Request nodes; always use the n8n credentials manager.
- **Sticky notes**: The workflow includes sticky notes for the setup, input, API call, extraction, and output steps to assist template users.
- **Custom columns**: You can customize header or line-item extraction by editing the Code node as needed.
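The row-reconstruction step (step 4 above) is a plain Code node mapping. The sketch below assumes a simplified response shape with fields like `invoice_number` and a `line_items` array; Docsumo's actual parsed JSON varies by document type, so adjust the property paths to match the real response.

```javascript
// n8n Code node sketch: flatten header fields plus line items into one row
// per item. The field names below are assumptions, not Docsumo's real schema.
const doc = $input.first().json;

const header = {
  invoiceNumber: doc.invoice_number,
  invoiceDate: doc.invoice_date,
  vendor: doc.vendor_name,
  total: doc.total_amount,
};

// Emit one item per line item, each carrying the shared header columns,
// so the Convert to File node can write a flat Excel sheet.
return (doc.line_items ?? []).map((item) => ({
  json: {
    ...header,
    description: item.description,
    quantity: item.quantity,
    unitPrice: item.unit_price,
    amount: item.amount,
  },
}));
```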
by Shahrear
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Automatically transform audio files into professional transcription reports with AI-powered speech recognition, timestamp generation, and formatted Google Docs output.

## What this workflow does

- Monitors Gmail for incoming audio attachments
- Downloads and processes audio files using VLM Run AI transcription
- Generates accurate transcriptions with precise timestamps and segmentation
- Creates professional reports in Google Docs with formatted output
- Handles asynchronous processing for long audio files without timeouts

## Setup

Prerequisites: Gmail account, VLM Run API credentials, Google Docs access, and a self-hosted n8n instance. You also need to install the VLM Run community node.

Quick setup:

1. Configure Gmail OAuth2 for email monitoring
2. Add VLM Run API credentials for audio transcription
3. Set up Google Docs OAuth2 for report generation
4. Create a target Google Doc for transcription reports
5. Update the document URL in the workflow nodes
6. Test with a sample audio file and activate

## Perfect for

- Meeting recordings and conference calls
- Voice memos and dictation workflows
- Interview transcriptions and journalism
- Podcast episode documentation
- Accessibility compliance and documentation
- Legal proceedings and court recordings
- Educational content and lecture notes
- Customer service call analysis

## Key Benefits

- **Human-level accuracy**: advanced AI speech recognition with automatic punctuation
- **Timestamp precision**: segmented transcriptions with exact time markers
- **Multi-format support**: handles MP3, WAV, M4A, AAC, OGG, and FLAC files
- **Asynchronous processing**: no timeouts for long audio files
- **Professional formatting**: beautifully structured Google Docs reports
- **Automatic workflow**: zero manual intervention required
- **Saves hours per recording**: transforms manual transcription into instant results
- **Searchable documentation**: Google Docs integration enables easy content discovery

## How to customize

Extend the workflow by adding:

- Speaker identification and diarization
- Integration with project management tools (Notion, Asana, Trello)
- Automatic summary generation from transcripts
- Translation to multiple languages
- Slack notifications for completed transcriptions
- Integration with CRM systems for call logging
- Audio quality enhancement preprocessing
- Custom formatting templates for different use cases
- Automatic keyword extraction and tagging
- Integration with calendar systems for meeting context

This workflow revolutionizes audio documentation by combining cutting-edge AI transcription with professional report generation, making spoken content instantly accessible, searchable, and shareable across your organization. A sketch of the timestamp-formatting step follows below.
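As a rough sketch, the report-formatting step could look like the Code node below. The segment shape `{ start, end, text }` is an assumption about the VLM Run node's output; map the field names to whatever your transcription results actually contain.

```javascript
// n8n Code node sketch: turn transcription segments into a timestamped report.
const segments = $input.first().json.segments ?? [];

// Format seconds as MM:SS for the report lines.
const stamp = (seconds) => {
  const m = Math.floor(seconds / 60);
  const s = Math.floor(seconds % 60);
  return `${String(m).padStart(2, '0')}:${String(s).padStart(2, '0')}`;
};

// e.g. "[00:12 - 00:15] Hello and welcome..." — one line per segment,
// ready to insert into the Google Doc.
const report = segments
  .map((seg) => `[${stamp(seg.start)} - ${stamp(seg.end)}] ${seg.text.trim()}`)
  .join('\n');

return [{ json: { report } }];
```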
by Yulia
This is an end-to-end workflow for creating a simple OpenAI Assistant. The whole process is done with n8n nodes and does not require any programming experience. The workflow is divided into three main steps, plus a fourth step with ideas for expanding it:

## Step 1: Get a Google Drive File and Upload to OpenAI

The workflow starts by retrieving a file from Google Drive using the "Get File" node. The example file used is a Music Festival document. The retrieved file is then uploaded to OpenAI using the "Upload File to OpenAI" node. Run this section only once: the file is stored persistently on the OpenAI side.

## Step 2: Set Up a New Assistant

In this step, a new assistant is created using the "Create new Assistant" node. The assistant is given a name, description, and system prompt, and the uploaded file from Step 1 is attached as a knowledge source. As with Step 1, run this section only once.

## Step 3: Chat with the Assistant

The "Chat Trigger" node initiates the conversation with the assistant. The "OpenAI Assistant" node handles the conversation, using the assistant created in Step 2.

## Step 4: Expand the Assistant

This step provides resources for ideas on how to expand the Assistant's capabilities:

- Create a WhatsApp bot
- Create a simple Telegram bot
- Create a Telegram AI bot (YouTube video)

By following this workflow, users can create their own AI-powered assistants using OpenAI's API and integrate them with various platforms like WhatsApp and Telegram.
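For readers curious about what the nodes in Steps 1 and 2 do under the hood, here is a heavily hedged sketch of the two one-time REST calls, assuming the OpenAI Assistants v2 API shape. The n8n nodes abstract these details away, and the exact request format may differ between API versions, so treat this purely as orientation.

```javascript
// Hypothetical sketch of the one-time calls behind Steps 1 and 2.
const OPENAI_KEY = process.env.OPENAI_API_KEY;

// Step 1: upload the knowledge file (multipart form, purpose "assistants").
async function uploadFile(blob, filename) {
  const form = new FormData();
  form.append('purpose', 'assistants');
  form.append('file', blob, filename);
  const res = await fetch('https://api.openai.com/v1/files', {
    method: 'POST',
    headers: { Authorization: `Bearer ${OPENAI_KEY}` },
    body: form,
  });
  return (await res.json()).id; // e.g. "file-abc123"
}

// Step 2: create the assistant with the file attached as a knowledge source
// (assumed v2 shape: file_search tool backed by an inline vector store).
async function createAssistant(fileId) {
  const res = await fetch('https://api.openai.com/v1/assistants', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${OPENAI_KEY}`,
      'Content-Type': 'application/json',
      'OpenAI-Beta': 'assistants=v2',
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini', // placeholder model choice
      name: 'Music Festival Assistant',
      instructions: 'Answer questions using the attached festival document.',
      tools: [{ type: 'file_search' }],
      tool_resources: { file_search: { vector_stores: [{ file_ids: [fileId] }] } },
    }),
  });
  return (await res.json()).id; // the assistant ID used by the OpenAI Assistant node
}
```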
by Yaron Been
**Creativeathive Lemaar Door Mockedup AI Generator**

Description: None

## Overview

This n8n workflow integrates with the Replicate API to use the creativeathive/lemaar-door-mockedup model. This powerful AI model can generate high-quality content based on your inputs.

## Features

- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

## Parameters

### Required Parameters

- **prompt** (string): Prompt for the generated image. If you include the trigger_word used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.

### Optional Parameters

- **mask** (string, default: None): Image mask for image inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **seed** (integer, default: None): Random seed. Set for reproducible generation.
- **image** (string, default: None): Input image for image-to-image or inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model only needs 4 steps.
- **width** (integer, default: None): Width of the generated image. Only works if aspect_ratio is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- **height** (integer, default: None): Height of the generated image. Only works if aspect_ratio is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- **go_fast** (boolean, default: False): Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- **lora_scale** (number, default: 1): Determines how strongly the main LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image.

## How to Use

1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate content
4. Access the generated output from the final node (a sketch of the underlying API call follows below)

## API Reference

- Model: creativeathive/lemaar-door-mockedup
- API Endpoint: https://api.replicate.com/v1/predictions

## Requirements

- Replicate API key
- n8n instance
- Basic understanding of the model's generation parameters
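As referenced in How to Use, here is a hedged sketch of the prediction-creation call the workflow wraps. The version hash is a placeholder to be copied from the model's Replicate page.

```javascript
// Hedged sketch of creating a Replicate prediction for a version-pinned
// community model.
const REPLICATE_TOKEN = '<your-replicate-api-key>';

async function createPrediction(prompt) {
  const res = await fetch('https://api.replicate.com/v1/predictions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${REPLICATE_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      // Placeholder: copy the hash from the creativeathive/lemaar-door-mockedup page.
      version: '<model-version-hash>',
      input: { prompt, model: 'dev', go_fast: false, lora_scale: 1 },
    }),
  });
  const prediction = await res.json();
  return prediction.id; // poll this ID until status is "succeeded"
}
```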
by Yaron Been
**Spuuntries Ilearnmate Icts AI Generator**

Description: None

## Overview

This n8n workflow integrates with the Replicate API to use the spuuntries/ilearnmate-icts model. This powerful AI model can generate high-quality content based on your inputs.

## Features

- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

## Parameters

### Optional Parameters

- **seed** (integer, default: None): Seed for reproducibility of example generation and vector training. Set to 0 for random behavior.
- **num_examples_per_side** (integer, default: 3): Number of descriptive examples to generate for each side of the contrast. More examples might lead to better vectors but will increase generation time.
- **attributes_to_generate** (string, default: girly,modestly,verbose,happy): Comma-separated list of attributes for which to generate control vectors (e.g., 'girly,modestly,verbose,happy').

## How to Use

1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate content
4. Access the generated output from the final node

## API Reference

- Model: spuuntries/ilearnmate-icts
- API Endpoint: https://api.replicate.com/v1/predictions

## Requirements

- Replicate API key
- n8n instance
- Basic understanding of the model's generation parameters
by Yaron Been
**Justingirard Draft Ui Designer Image Generator**

Description: An experiment: a fine-tuned FLUX model for UI design generation

## Overview

This n8n workflow integrates with the Replicate API to use the justingirard/draft-ui-designer model. This powerful AI model can generate high-quality image content based on your inputs.

## Features

- Easy integration with Replicate API
- Automated status checking and result retrieval
- Support for all model parameters
- Error handling and retry logic
- Clean output formatting

## Parameters

### Required Parameters

- **prompt** (string): Prompt for the generated image. If you include the trigger_word used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.

### Optional Parameters

- **mask** (string, default: None): Image mask for image inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **seed** (integer, default: None): Random seed. Set for reproducible generation.
- **image** (string, default: None): Input image for image-to-image or inpainting mode. If provided, the aspect_ratio, width, and height inputs are ignored.
- **model** (string, default: dev): Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model only needs 4 steps.
- **width** (integer, default: None): Width of the generated image. Only works if aspect_ratio is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- **height** (integer, default: None): Height of the generated image. Only works if aspect_ratio is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- **go_fast** (boolean, default: False): Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- **extra_lora** (string, default: None): Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- **lora_scale** (number, default: 1): Determines how strongly the main LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
- **megapixels** (string, default: 1): Approximate number of megapixels for the generated image.

## How to Use

1. Set up your Replicate API key in the workflow
2. Configure the required parameters for your use case
3. Run the workflow to generate image content
4. Access the generated output from the final node (a sketch of the status-polling step follows below)

## API Reference

- Model: justingirard/draft-ui-designer
- API Endpoint: https://api.replicate.com/v1/predictions

## Requirements

- Replicate API key
- n8n instance
- Basic understanding of image generation parameters
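As referenced in How to Use, here is the status-polling half of the integration (the creation call is sketched under the earlier Replicate template in this collection). The retry limit and delay are illustrative choices, not values taken from the workflow.

```javascript
// Companion sketch: poll Replicate until the image is ready, with a small
// delay between checks so long generations don't hammer the API.
const REPLICATE_TOKEN = '<your-replicate-api-key>';
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function waitForPrediction(id) {
  for (let attempt = 0; attempt < 60; attempt++) {
    const res = await fetch(`https://api.replicate.com/v1/predictions/${id}`, {
      headers: { Authorization: `Bearer ${REPLICATE_TOKEN}` },
    });
    const prediction = await res.json();
    // Statuses progress: starting -> processing -> succeeded/failed/canceled.
    if (prediction.status === 'succeeded') return prediction.output; // image URL(s)
    if (prediction.status === 'failed' || prediction.status === 'canceled') {
      throw new Error(`Prediction ${id} ${prediction.status}: ${prediction.error}`);
    }
    await sleep(2000);
  }
  throw new Error(`Prediction ${id} timed out`);
}
```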
by Harshil Agrawal
This workflow demonstrates the use of the Split In Batches node and the Wait node to avoid API rate limits.

- **Customer Datastore node**: The workflow fetches data from the Customer Datastore node. Based on your use case, replace it with a relevant node.
- **Split In Batches node**: This node splits the incoming items into smaller batches. Based on the API limit, you can configure the Batch Size.
- **HTTP Request node**: This node makes API calls to a placeholder URL. If the Split In Batches node returns 5 items, the HTTP Request node will make 5 different API calls.
- **Wait node**: This node pauses the workflow for the time you specify. On resume, the Split In Batches node executes again, and the next batch is processed.
- **Replace Me (NoOp node)**: This node is optional. If you want to continue your workflow and process the items, replace this node with the corresponding node(s).

The sketch below shows the same batch-then-wait pattern in plain JavaScript.
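This is the pattern the workflow builds from nodes, expressed in code for clarity; the URL is a placeholder just like the workflow's, and the batch size and wait time correspond to the Split In Batches and Wait node settings.

```javascript
// Plain-JavaScript sketch of batch-then-wait rate limiting.
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function processInBatches(items, batchSize, waitMs) {
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize); // Split In Batches node
    await Promise.all(
      batch.map((item) =>
        fetch(`https://example.com/api/customers/${item.id}`) // HTTP Request node
      )
    );
    if (i + batchSize < items.length) await sleep(waitMs); // Wait node
  }
}

// e.g. 5 requests at a time, pausing 30 seconds between batches:
// await processInBatches(customers, 5, 30_000);
```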
by Krzysztof Kuzara
## Who is this for?

This workflow is perfect for anyone looking to automate the process of replacing variables in Google Docs with data from a form.

## What problem does this workflow solve?

This workflow automates the process of filling Google Docs templates with data coming from n8n forms or other variables. It's especially useful for generating documents like contracts, invoices, or reports quickly and efficiently, without manual intervention.

## What does this workflow do?

1. The workflow receives data from a form in n8n.
2. It uses the form data to replace the corresponding variables (e.g., {{example_variable}}) in a Google Docs template (see the sketch below).
3. The document is then generated with the new values, ready for further use, such as sending or archiving.

## How to set up this workflow

1. **Prepare the template**: Create a Google Docs template with variables in the {{variable}} format that you want to replace with form data.
2. **Match the variables in the n8n form**: Make sure the form fields correspond to the variables you want to replace in the Google Docs template.
3. **Connect to Google Docs**: Set up the connection to Google Docs in n8n using the appropriate authentication credentials.
4. **Test the workflow**: Run the workflow to ensure that the form data correctly replaces the variables in the Google Docs template.

## How to customize this workflow to your needs

- **Change the data source**: You can modify the form or use other data sources (e.g., an API) from which the replacement values are fetched.
- **Customize the Google Docs template**: Adapt the template to include additional fields for replacement as needed.
- **Integrate with other applications**: You can expand the workflow with actions like sending the generated document via email, saving it to Google Drive, or passing it to other systems.
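For the replacement step itself, the Google Docs API provides a batchUpdate method with replaceAllText requests, which is the documented way to swap {{variable}} placeholders. The sketch below uses placeholder form data and a placeholder document ID.

```javascript
// Sketch: build one replaceAllText request per form field and send them
// in a single batchUpdate call.
const DOC_ID = '<your-google-docs-template-id>';
const formData = { client_name: 'Acme Corp', invoice_date: '2024-06-01' }; // example values

const requests = Object.entries(formData).map(([key, value]) => ({
  replaceAllText: {
    containsText: { text: `{{${key}}}`, matchCase: true },
    replaceText: String(value),
  },
}));

// HTTP Request node (or the Google Docs node) equivalent:
//   POST https://docs.googleapis.com/v1/documents/<DOC_ID>:batchUpdate
//   Header: Authorization: Bearer <oauth-token>
//   Body: JSON.stringify({ requests })
```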