by Audun
Send structured logs to BetterStack from any workflow using HTTP Request

**Who is this for?**
This workflow is perfect for automation builders, developers, and DevOps teams using n8n who want to send structured log messages to BetterStack Logs. Whether you're monitoring mission-critical workflows or simply want centralized visibility into process execution, this reusable log template makes integration easy.

**What problem is this workflow solving?**
Logging failures or events across multiple workflows typically requires duplicated logic. This workflow solves that by acting as a shared log sender, letting you forward consistent log entries from any other workflow using the Execute Workflow node.

**What this workflow does**
- Accepts level (e.g., "info", "warn", "error") and message fields via the Execute Workflow Trigger
- Sends the structured log to your BetterStack ingestion endpoint via HTTP Request (see the request sketch below)
- Uses HTTP Header Auth for secure delivery
- Includes a manual trigger for testing and a sample call to demonstrate usage
- Comes with clear sticky notes to help you get started

**Setup**
1. Copy your BetterStack Logs ingestion URL.
2. Create a Header Auth credential in n8n with your Authorization: Bearer YOUR_API_KEY.
3. Replace the URL in the HTTP Request node with your BetterStack endpoint.
4. Optionally modify the test data or log levels for custom scenarios.
5. Use Execute Workflow in any of your workflows to send logs here.
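The following is a minimal sketch of the request the HTTP Request node sends. The ingestion URL, token, and payload field names are placeholders, not taken from the template itself; use the values from your own BetterStack source.

```typescript
// Minimal sketch of the log-sending request, assuming a JSON ingestion endpoint.
// INGEST_URL and API_TOKEN are placeholders -- copy them from your BetterStack source.
const INGEST_URL = "https://in.logs.betterstack.com"; // hypothetical placeholder
const API_TOKEN = "YOUR_API_KEY";

async function sendLog(level: string, message: string): Promise<void> {
  const res = await fetch(INGEST_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_TOKEN}`, // same header the Header Auth credential supplies
    },
    body: JSON.stringify({ level, message, dt: new Date().toISOString() }),
  });
  if (!res.ok) throw new Error(`BetterStack responded with ${res.status}`);
}

await sendLog("info", "Order sync completed");
```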
by Oneclick AI Squad
This n8n template demonstrates how to create an intelligent food recipe assistant that accepts requests via Gmail and web forms, processes them using AI chat models (Ollama and Llama 3.2), and delivers personalized recipes back to users. The system combines multiple input methods with advanced AI processing to provide customized cooking instructions and ingredient lists.

**Good to know**
- The system accepts recipe requests through both Gmail and web form submissions
- AI models understand dietary restrictions, cuisine preferences, and cooking skill levels
- Recipe responses include formatted ingredients, step-by-step instructions, and cooking tips
- All requests are processed automatically without manual intervention

**How it works**

*Gmail Recipe Request Workflow*
- Gmail triggers activate when users send emails with recipe requests to the designated email address
- The system extracts recipe requirements, dietary preferences, and cooking constraints from the email content
- User queries are processed through the Ollama Recipe Generator for intelligent recipe creation
- AI-generated recipes are formatted with proper ingredients, instructions, and cooking times
- Formatted recipes are sent back to users via Gmail with a professional presentation

*Web Form Recipe Request Workflow*
- Web form submissions trigger when users fill out structured recipe request forms
- Form data includes cuisine type, dietary restrictions, available ingredients, and cooking time preferences
- The Llama 3.2 Chef Model processes structured requests for optimized recipe generation
- Recipes are formatted with clear instructions, ingredient measurements, and cooking techniques
- Users receive formatted recipes via email with additional cooking tips and variations

**How to use**
1. Import the workflow into your n8n instance and configure Gmail integration for recipe requests
2. Set up the web form with fields for cuisine preferences, dietary restrictions, and cooking skill level
3. Configure the Ollama and Llama 3.2 AI models with appropriate recipe generation prompts
4. Test both Gmail and web form inputs with sample recipe requests
5. Customize email templates to match your brand and include additional cooking resources

The system scales automatically to handle multiple simultaneous recipe requests.

**Requirements**
- Gmail account for email-based recipe requests and responses
- Ollama installation with the Recipe Generator model
- Llama 3.2 Chef Model access for advanced recipe processing
- n8n instance with Gmail and AI model integrations

**Customising this workflow**
- Recipe automation can be adapted for different cuisines, dietary needs, and cooking skill levels
- Try popular use-cases such as meal planning assistance, ingredient substitution suggestions, or nutritional information inclusion
- The workflow can be extended to include recipe image generation, shopping list creation, and cooking video recommendations
by Cameron Wills
**Who is this for?**
Content creators, social media managers, digital marketers, and researchers who need to download original TikTok videos without watermarks for analysis, repurposing, or archiving purposes.

**What problem does this workflow solve?**
Downloading TikTok videos without watermarks typically requires using questionable third-party websites that may have limitations, ads, or privacy concerns. This workflow provides a clean, automated solution that can be integrated into your own systems and processes.

**What this workflow does**
This workflow automates the process of downloading TikTok videos without watermarks in three simple steps:
1. Fetch the TikTok video page by providing the video URL
2. Extract the raw video URL from the page's HTML data (see the sketch below)
3. Download the original video file without a watermark
4. (Optional) Upload to Google Drive with public sharing link generation

The workflow uses web scraping techniques to extract the original video source directly from TikTok's own servers, maintaining the highest possible quality without any added watermarks or branding.

**Setup (Est. time: 5-10 minutes)**
Before getting started, you'll need:
- An n8n installation
- The URL of a TikTok video you want to download
- (Optional) Google Drive API enabled in Google Cloud Console, with OAuth Client ID and Client Secret credentials, if you want to use the upload feature

**How to customize this workflow to your needs**
- Replace the example TikTok URL with your desired video links
- Modify the file naming convention for downloaded videos
- Integrate with other nodes to process videos after downloading
- Create a webhook to trigger the workflow from external applications
- Set up a schedule to regularly download videos from specific accounts

This workflow can be extended to support various use cases like trending content analysis, competitor research, creating compilation videos, or building a content library for inspiration. It provides a foundation that can be customized to fit into larger automated workflows for content creation and social media management.
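Below is a hedged sketch of the "extract the raw video URL" step as it might look in an n8n Code node. The script-tag structure and the `playAddr` field name are assumptions about TikTok's page markup, which changes frequently; adapt the regex to whatever the fetched HTML actually contains.

```typescript
// Hedged sketch of extracting the raw video URL from the fetched page HTML.
// Field names and page structure are assumptions and may change at any time.
const html: string = $json.data ?? ""; // HTML returned by the previous HTTP Request node

// TikTok pages typically embed page state as JSON inside a <script> tag;
// here we simply look for the first "playAddr"-style URL in that blob.
const match = html.match(/"playAddr":"(https:[^"]+)"/);
if (!match) {
  throw new Error("No raw video URL found -- the page structure may have changed");
}

// Unescape the JSON-encoded URL (\u002F -> /) before downloading it.
const videoUrl = match[1].replace(/\\u002F/g, "/");

return [{ json: { videoUrl } }];
```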
by Khaled
Web Server Monitor & Alert System

This automation pings web servers at regular intervals, logs their status, and sends email alerts if a server goes down. It's perfect for maintaining visibility over server uptime without complex monitoring tools.

**How It Works**
This workflow performs minute-by-minute checks on all servers listed in a Google Sheet and:
- Logs all reachable servers in an "Alive" log
- Sends an email alert if a server is unreachable
- Logs failed servers in a "Down" sheet with timestamps

**Key Components**
1. **Schedule Trigger**: runs the workflow every minute for real-time monitoring.
2. **Web Servers List (Google Sheets)**: pulls server IPs or hostnames from a Google Sheet named Server_List. Each row = one server to monitor. This makes adding/removing servers effortless: just update the sheet.
3. **Servers Alive Check (HTTP Request)**: performs an HTTP GET request to each server (e.g., http://your-server.com). If the request fails, it automatically triggers the error path (handled via continueOnFail; see the routing sketch below).
4. **Web Server Alive Log (Google Sheets)**: records successful pings in Server_Status_Alive with timestamp, server IP, and Status = Alive. This log can be used for uptime reports or audits.
5. **Server Down Notification (Gmail)**: if a server fails, this node sends an email to the admin including the server address, timestamp, and a suggested action.
6. **Web Server Down Log (Google Sheets)**: logs failed pings in a separate sheet for historical tracking and debugging.

**Main Advantages**
- Live server monitoring: stay informed about server health in near real-time.
- No-code configuration: add or remove servers from the Google Sheet, no need to touch the workflow.
- Email alerts on failure: proactively notifies you before users report the issue.
- Audit-ready logging: maintains logs for both healthy and failed checks for documentation or reporting.
- Flexible and scalable: monitor 1 or 100 servers with the same template, just scale the list.

**Setup Steps**

*Prerequisites*
- Google Sheet with a server list (column name = "Server")
- Gmail OAuth2 connection for alerts
- n8n instance running regularly

*Configuration*
- Google Sheets: Sheet 1 (Server_List) holds your list of servers; Sheet 2 (Server_Status_Alive) logs reachable servers; Sheet 3 (Server_Status_Down) logs unreachable servers.
- Gmail integration: connect your Gmail account in the Server Down Notification node, then edit the recipient email and message content as needed.
- HTTP check: adjust the HTTP request URL template if using port numbers or paths (e.g., http://{{Server}}:8080/status).
- Schedule: the default is every 1 minute; change it via the Schedule Trigger if needed.

*Testing*
1. Input a reachable server (e.g., example.com) and an unreachable IP.
2. Run the workflow manually or wait for the next scheduled run.
3. Check that the Alive log updates correctly, the Down log records failures, and the email alert is received.

*Deployment*
Activate the workflow, and it will quietly run in the background, notifying you of any server downtime instantly while keeping logs for future review.
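As a rough illustration of the branching after the HTTP check, here is a hedged sketch of how the Alive/Down routing could be expressed in an n8n Code node. With "Continue On Fail" enabled, failed requests come through as items carrying error information rather than stopping the run; the exact field names below are assumptions, not the template's actual expressions.

```typescript
// Hedged sketch of classifying each checked server as Alive or Down.
// Field names (error, Server) are assumptions about the incoming items.
const now = new Date().toISOString();

return $input.all().map((item) => {
  const failed = item.json.error !== undefined; // present when the HTTP request failed
  return {
    json: {
      server: item.json.Server,          // hostname/IP read from the Server_List sheet
      timestamp: now,
      status: failed ? "Down" : "Alive", // drives which sheet the row is appended to
    },
  };
});
```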
by Msaid Mohamed el hadi
Instagram Full Profile Scraper with Apify and Google Sheets

This n8n workflow automates the process of scraping full Instagram profiles using a custom Apify actor, and logs the results into a Google Sheet. It is designed to run at scheduled intervals and process a list of usernames by calling the API, appending the results, and marking them as processed.

**Features**
- Scheduled execution: runs automatically every few minutes.
- Google Sheets integration: reads a list of Instagram usernames and updates the same sheet.
- Apify actor: fetches full public Instagram profile data.
- Aggregation: batches usernames for bulk scraping.
- Data logging: appends scraped data to a second sheet.
- Tracking: marks usernames as processed once scraped.

**Workflow Structure** (mermaid)

graph TD;
  ScheduleTrigger --> GetUsernames;
  GetUsernames --> LimitItems;
  LimitItems --> AggregateUsernames;
  AggregateUsernames --> CallApifyActor;
  CallApifyActor --> AppendToSheet;
  CallApifyActor --> MarkAsScraped;

**Setup**

*Google Sheet*
Create a Google Sheet with:
- Sheet 1 named Usernames (GID: 0), with columns: username, scraped
- Sheet 2 named fullprofiles (GID: 458127000)

Sample sheet: Instagram Profile Sheet

*n8n Configuration*
1. Import this workflow into your n8n instance.
2. Set up your Google Sheets credentials (googleSheetsOAuth2Api).
3. Replace the placeholder Apify token (apify_api_...) in the HTTP Request node with your Apify API token.

**Required Credentials**
- **Google Sheets OAuth2**: for reading and writing sheet data.
- **Apify API Token**: to call the custom actor for profile scraping.

**Sheets Used**

| Sheet Name   | Purpose                          |
| ------------ | -------------------------------- |
| Usernames    | Source of usernames to scrape    |
| fullprofiles | Destination of full profile data |

**Apify Actor Info**
> Instagram Full Profile Scraper
> This actor fetches extended profile information from public Instagram profiles.
> View on Apify

**Workflow Nodes Overview**

| Node                     | Purpose                                                                                  |
| ------------------------ | ---------------------------------------------------------------------------------------- |
| Schedule Trigger         | Triggers the workflow periodically.                                                       |
| Get Usernames            | Reads usernames from the Usernames sheet.                                                 |
| Limit                    | Limits processing to 20 usernames per run.                                                |
| Aggregate                | Groups usernames into a batch for the API call.                                           |
| Call Apify Actor         | Sends the usernames to the Apify actor and receives profile data (see the sketch below).  |
| Append Full Profiles     | Appends the scraped data to the fullprofiles sheet.                                       |
| Mark Username as Scraped | Marks the processed usernames as scraped = TRUE.                                          |
| Sticky Note              | Provides a reference link to the Apify actor used.                                        |

**Example Sheet Structure**

Usernames sheet:

| username     | scraped |
| ------------ | ------- |
| exampleuser1 |         |
| exampleuser2 | TRUE    |

fullprofiles sheet:

| username | full_name | biography | follower_count | ... |
| -------- | --------- | --------- | --------------- | --- |

**Security & Notes**
- This workflow does not bypass any Instagram privacy restrictions.
- It works only with public Instagram profiles.
- You are responsible for ensuring that scraping complies with Instagram's terms of service and any applicable laws.

**Support**
For any issues, feel free to reach out:
- @mohamedgb00714
- mohamedgb00714@gmail.com
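For orientation, here is a hedged sketch of the "Call Apify Actor" request. The actor ID and its input schema (a `usernames` array) are assumptions about this particular custom actor; check the actor's page on Apify for its real input format. The run-sync-get-dataset-items endpoint is a standard Apify API route that starts an actor run and returns its dataset items in one call.

```typescript
// Hedged sketch of calling the Apify actor with a batch of usernames.
const APIFY_TOKEN = "apify_api_...";                         // your Apify API token
const ACTOR_ID = "username~instagram-full-profile-scraper";  // hypothetical actor ID

const usernames = ["exampleuser1", "exampleuser2"];          // batch built by the Aggregate node

const res = await fetch(
  `https://api.apify.com/v2/acts/${ACTOR_ID}/run-sync-get-dataset-items?token=${APIFY_TOKEN}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ usernames }),                     // assumed input schema
  },
);

const profiles = await res.json(); // one item per scraped profile, appended to "fullprofiles"
```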
by Marcelo Abreu
**Who is this workflow for?**
If you're using Meta Ads to generate new leads for your sales pipeline, this workflow is for you!

**What this workflow does**
- Triggers every time you have a new calendar event on a chosen Google account
- Filters only events with the same name as your "Schedule a demo" event
- Formats and sends the event to the Meta Conversion API (see the payload sketch below)

**What events can I send?**
Any event you'd like! It's preconfigured with the "Schedule" event, but you can change it to "Purchase", "InitiateCheckout", "Lead", or custom events.

**Setup Guide**
1. Connect Google OAuth2 to n8n
2. Get your Pixel ID and Access Token from Meta
3. Set your configuration node with Pixel ID, Access Token, source_url, and event_name

**Requirements**
- Meta Access Token + Pixel ID (via Meta Conversion API): Documentation
- Google access (via OAuth2): Documentation

This free template was created by pdforge. Feel free to contact us via the founder's LinkedIn if you have any questions!
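Here is a minimal sketch of the kind of payload the Conversion API step sends. The Pixel ID, access token, Graph API version, and source URL are placeholders; Meta requires user identifiers such as the attendee email to be SHA-256 hashed before sending.

```typescript
// Hedged sketch of a Meta Conversions API event, assuming a "Schedule" event.
import { createHash } from "node:crypto";

const PIXEL_ID = "YOUR_PIXEL_ID";
const ACCESS_TOKEN = "YOUR_ACCESS_TOKEN";

// Meta expects hashed, normalized user identifiers.
const hashedEmail = createHash("sha256")
  .update("attendee@example.com".trim().toLowerCase())
  .digest("hex");

const body = {
  data: [
    {
      event_name: "Schedule",                       // or Purchase, InitiateCheckout, Lead, ...
      event_time: Math.floor(Date.now() / 1000),    // Unix timestamp in seconds
      action_source: "website",
      event_source_url: "https://example.com/demo", // your configured source_url
      user_data: { em: [hashedEmail] },
    },
  ],
};

await fetch(
  `https://graph.facebook.com/v18.0/${PIXEL_ID}/events?access_token=${ACCESS_TOKEN}`,
  { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify(body) },
);
```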
by Oneclick AI Squad
This automated n8n workflow processes student applications on a scheduled basis, validates data, updates databases, and sends welcome communications to students and guardians.

**Main Components**
- **Trigger at Every Day 7 am**: scheduled trigger that runs the workflow daily
- **Read Student Data**: reads pending applications from Excel/database
- **Validate Application Data**: checks data completeness and format (see the sketch below)
- **Process Application Data**: processes validated applications
- **Update Student Database**: updates records in the student database
- **Prepare Welcome Email**: creates personalized welcome messages
- **Send Email**: sends welcome emails to students/guardians
- **Success Response**: confirms successful processing
- **Error Response**: handles any processing errors

**Essential Prerequisites**
- Excel file with student applications (student_applications.xlsx)
- Database access for student records
- SMTP server credentials for sending emails
- File storage access for reading application data

**Required Excel File Structure (student_applications.xlsx)**
Application ID | First Name | Last Name | Email | Phone | Program Interest | Grade Level | School | Guardian Name | Guardian Phone | Application Date | Status | Notes

**Expected Input Data Format**

{
  "firstName": "John",
  "lastName": "Doe",
  "email": "john.doe@example.com",
  "phone": "+1234567890",
  "program": "Computer Science",
  "gradeLevel": "10th Grade",
  "school": "City High School",
  "guardianName": "Jane Doe",
  "guardianPhone": "+1234567891"
}

**Key Features**
- Scheduled processing: runs daily at 7 AM automatically
- Data validation: ensures application completeness
- Database updates: maintains student records
- Auto emails: sends welcome messages
- Error handling: manages processing failures

**Quick Setup**
1. Import the workflow JSON into n8n
2. Configure the schedule trigger (default: 7 AM daily)
3. Set the Excel file path in the "Read Student Data" node
4. Configure the database connection in the "Update Student Database" node
5. Add SMTP settings in the "Send Email" node
6. Test with sample data
7. Activate the workflow

**Parameters to Configure**
- excel_file_path: path to the student applications file
- database_connection: student database credentials
- smtp_host: email server address
- smtp_user: email username
- smtp_password: email password
- admin_email: administrator notification email
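The following is a minimal sketch of what the "Validate Application Data" step could look like in an n8n Code node. The required-field list mirrors the expected input format above; the exact rules in the original workflow may differ.

```typescript
// Hedged sketch of application validation, assuming the input fields shown above.
const REQUIRED_FIELDS = [
  "firstName", "lastName", "email", "phone",
  "program", "gradeLevel", "school", "guardianName", "guardianPhone",
] as const;

return $input.all().map((item) => {
  const app = item.json as Record<string, string>;

  const missing = REQUIRED_FIELDS.filter((f) => !app[f] || String(app[f]).trim() === "");
  const emailOk = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(app.email ?? "");

  return {
    json: {
      ...app,
      valid: missing.length === 0 && emailOk,   // routes the item to Success or Error handling
      validationErrors: [
        ...missing.map((f) => `missing field: ${f}`),
        ...(emailOk ? [] : ["invalid email format"]),
      ],
    },
  };
});
```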
by InfraNodus
Teach your AI agent HOW to think, not WHAT to think

This workflow demonstrates how you can build an AI agent in n8n that uses the reasoning logic you define. The LLM learns a way of thinking, which you can then apply to multiple problems:
- Make an AI chatbot that knows how to convince anybody using the "Getting to Yes" method
- Build an LLM workflow that uses Ray Dalio's principles to spot investment opportunities
- Create an AI agent crew of interdisciplinary thinkers, e.g. a specialist in psychology who gives advice on education programmes

**How it works**
This template uses the n8n AI Agent node as an orchestrating agent that has access to a certain reasoning logic defined by an InfraNodus knowledge graph. This graph contains a list of reasoning rules (an ontology), which is extracted to provide advice relevant to the original prompt. It uses GraphRAG under the hood to traverse the parts of the graph relevant to the query. This advice and the extracted reasoning logic are then used by the AI agent to generate a response that is relevant to the user's query but follows the reasoning logic provided through the graph.

Here's a step-by-step description:
1. The user submits a question using the AI chatbot (the n8n chat interface; in this case, a web form that can be embedded on any website, or a webhook that can be connected to a Telegram / WhatsApp bot).
2. The AI Agent node accesses the Reasoning Logic HTTP InfraNodus nodes. The description of the AI agent and the description of the reasoning InfraNodus node give the agent an understanding of how to rephrase the original question to retrieve relevant reasoning logic.
3. The request is sent to the InfraNodus node, which returns a response containing the reasoning logic needed to answer the question.
4. This reasoning logic is then sent back to an LLM along with the original query to produce the response.

InfraNodus uses GraphRAG under the hood:
- convert the user query into a graph
- find the overlap with the reasoning graph (using n=1 or more hops to include more relations)
- use similarity search to get additional parts of the graph
- generate a response based on this intersection as well as the context provided
- provide information about the underlying structure

**How to use**
You need an InfraNodus account to use this workflow.
1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Create a separate knowledge graph for the reasoning logic. Use the AI ontology creator to generate an ontology for a certain topic or text using AI, then augment it with your own data. See our help article on creating ontologies for detailed instructions.
4. For each graph, go to the workflow and paste the name of the graph into the name field of the request JSON body (see the sketch below).
5. Change the system prompt in the AI Agent node to reflect the nature of your reasoning logic. For instance, if it's an expert in interactions, specify that; if it's a psychology expert, specify that as well.
6. Change the description of the reasoning node (HTTP tool). Use the InfraNodus summary and Project Notes > RAG prompt buttons to generate a description for the reasoning logic, which you can then reuse in your workflow.
7. Add the LLM key to the OpenAI node (or the model of your choice) and launch the workflow.

**Requirements**
- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key

**Customizing this workflow**
You can use this same workflow with a Telegram bot, so you can interact with it using Telegram. There are many more customizations available. Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/21429518472988-Using-Knowledge-Graphs-as-Reasoning-Experts, and also check out the video tutorial with a demo.
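As a rough illustration of step 4 above, here is a heavily hedged sketch of the reasoning-logic HTTP call. The endpoint path and every body field other than `name` are placeholders and assumptions, not the documented InfraNodus API; follow the schema shown at https://infranodus.com/api-access and in the template's own HTTP node.

```typescript
// Hedged sketch of the InfraNodus reasoning-logic request. Endpoint and body
// fields other than `name` are hypothetical placeholders.
const INFRANODUS_API_KEY = "YOUR_INFRANODUS_API_KEY";
const ENDPOINT = "https://infranodus.com/api/..."; // hypothetical placeholder path

const res = await fetch(ENDPOINT, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${INFRANODUS_API_KEY}`, // the Bearer key created in step 2
  },
  body: JSON.stringify({
    name: "my_reasoning_graph",        // the knowledge graph holding your reasoning rules
    prompt: "rephrased user question", // what the AI agent sends to retrieve relevant logic
  }),
});

const reasoning = await res.json();    // reasoning logic passed back to the LLM with the query
```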
by Ria
This workflow demonstrates how to use the getWorkflowStaticData() function to set any type of variable that will persist across workflow executions.
https://docs.n8n.io/code/cookbook/builtin/get-workflow-static-data/

This can be useful, for example, when working with access tokens that expire after a certain time period. Using static data we can keep a record of that access token and its expiry time and build our workflow logic around it (see the sketch below).

**Important**
Static data only persists across production executions, i.e. executions triggered by Webhooks or Schedule Triggers (not manual executions!). For this, the workflow has to be activated.

**Setup**
1. Configure the HTTP Request node to fetch an access token from your API (optional)
2. Activate the workflow
3. Test the workflow with the webhook production link
4. Check the population of the static data in the individual executions

**Feedback**
If you found this useful or want to report some missing information, I'd be happy to hear from you at ria@n8n.io
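Here is a minimal sketch of the token-caching pattern in an n8n Code node. The refresh URL and response field names are assumptions; adapt them to your API.

```typescript
// Hedged sketch: cache an access token and its expiry in workflow static data.
const staticData = $getWorkflowStaticData("global");

const now = Date.now();
const expired =
  !staticData.accessToken || !staticData.expiresAt || now >= (staticData.expiresAt as number);

if (expired) {
  // Token missing or expired: fetch a fresh one (normally done by the HTTP Request node).
  const res = await fetch("https://api.example.com/oauth/token", { method: "POST" });
  const data = await res.json();

  staticData.accessToken = data.access_token;
  staticData.expiresAt = now + data.expires_in * 1000; // persists across production executions
}

return [{ json: { accessToken: staticData.accessToken } }];
```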
by Niklas Hatje
**Use Case**
In most companies, employees have a lot of great ideas. That was the same for us at n8n. We wanted to make it as easy as possible for everyone to add their ideas to some formatted database - it should be somewhere where everyone is all the time and can add a new idea without much extra effort. Since we're using Slack, this seemed to be the perfect place to easily add ideas and collect them in Notion.

**What this workflow does**
This workflow waits for a webhook call within Slack, which gets fired when users use the /idea command on a bot that you will create as part of this template. It then checks the command, adds the idea to Notion, and notifies the user about the newly added idea, as you can see below (a payload sketch follows this section).

**Creating your Slack bot**
1. Visit https://api.slack.com/apps, click on New App and choose a name and workspace.
2. Click on OAuth & Permissions and scroll down to Scopes -> Bot Token Scopes.
3. Add the chat:write scope.
4. Head over to Slash Commands and click on Create New Command.
5. Use /idea as the command.
6. Copy the test URL from the Webhook node into Request URL.
7. Add whatever feels best to the description and usage hint.
8. Go to Install App and click install.

**Setup**
1. Add a database in Notion with the columns Name and Creator.
2. Add your Notion credentials and add the integration to your Notion page.
3. Fill the setup node below.
4. Create your Slack app (see the other sticky).
5. Click Test workflow and use the /idea command in Slack.
6. Activate the workflow and exchange the Request URL with the production URL from the webhook.

**How to adjust it to your needs**
- You can adjust the table in Notion and, for example, add different types of ideas or areas that they impact
- You might want to add different templates in Notion to make it easier for users to fill their ideas with details
- Rename the Slack command as it works best for you

**How to enhance this workflow**
At n8n we use this workflow in combination with some others. E.g. we have the following things on top:
- We additionally have a /bug Slack command that adds a new bug to Linear. Here we're using AI to classify the bugs and move them to the right team. (see this template and this template)
- We also added other types, like /pain, to be less solution-driven
- To make it easier for everyone to give input, we added a Votes column that allows everyone to vote on ideas/pain points in the list
- We're also running a workflow once a week that highlights the most popular new ideas and the most active voters (see here)
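For reference, here is a hedged sketch of the fields Slack posts to the webhook when /idea is used. Slack sends slash-command payloads as form-encoded data; the property path below (`$json.body`) and the mapping to the Notion columns are assumptions about this template's internals.

```typescript
// Hedged sketch of handling the /idea slash-command payload in a Code node.
interface SlackSlashCommand {
  command: string;      // e.g. "/idea"
  text: string;         // everything the user typed after the command
  user_name: string;    // used as the "Creator" column in Notion
  response_url: string; // where a confirmation message can be sent
}

const payload = $json.body as SlackSlashCommand;

if (payload.command !== "/idea") {
  throw new Error(`Unexpected command: ${payload.command}`);
}

// Shape of the row the Notion node would create in the ideas database.
return [{ json: { Name: payload.text, Creator: payload.user_name } }];
```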
by Fahmi Oktafian
**Who's it for**
This workflow is perfect for SEO specialists, marketers, bloggers, and content creators who want to automate keyword research using Google Sheets, Google Suggest, and Google Custom Search. Ideal for those building content pipelines, researching trends, or powering AI content generation with fresh search data.

**What it does**
This workflow automates the process of discovering a new keyword daily. It:
- Rotates through a keyword list in Google Sheets
- Selects one keyword per day
- Fetches autocomplete suggestions from Google Suggest
- Queries the Google Custom Search API for top results
- Returns structured JSON containing titles, links, and snippets

**How it works**
1. Manual Trigger: initiates the workflow manually
2. Google Sheets: reads keywords from a sheet (column: Title or Keyword)
3. Code Node: selects a daily keyword based on the number of days since July 4, 2025 (see the sketch below)
4. Set Node: saves the selected keyword as seed_keyword
5. HTTP Request: fetches autocomplete suggestions from the Google Suggest API
6. Function Node: parses suggestions into usable items
7. HTTP Request: calls the Google Custom Search API for each suggestion
8. Code Node: formats the search results into JSON

**How to set up**
1. Connect your Google Sheets OAuth2 credentials in n8n
2. Use credential variables for Google Custom Search (do not hardcode your key and cx)
3. Replace the sample sheet ID with your own
4. Run the workflow manually or schedule it daily

**Requirements**
- Google account
- Custom Search JSON API enabled on Google Cloud
- Google Sheet with a column labeled Title or Keyword
- n8n instance (cloud or self-hosted)

**How to customize**
- Change the start date to control the keyword rotation cycle
- Randomize keyword selection instead of rotating
- Enrich results using tools like Ahrefs or SEMrush
- Push final output to Telegram, Notion, Slack, or Airtable
- Add filtering logic based on CPC, volume, or duplicates

**Example Sheet**
Click here to access the example Google Sheet. The sheet must contain a column Title or Keyword in the first row:

| Title |
| ----- |
| teknologi AI |
| berita viral |
| tren startup |
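Here is a minimal sketch of the daily keyword rotation done in the Code node: it counts the days since the start date and uses that count, modulo the list length, as the index. The column name ("Title") is an assumption based on the example sheet.

```typescript
// Hedged sketch of the daily keyword rotation (one keyword per day).
const START_DATE = new Date("2025-07-04");

const keywords = $input.all().map((item) => item.json.Title ?? item.json.Keyword);

const daysSinceStart = Math.floor((Date.now() - START_DATE.getTime()) / (1000 * 60 * 60 * 24));
const todaysKeyword = keywords[daysSinceStart % keywords.length];

return [{ json: { seed_keyword: todaysKeyword } }];
```

Changing START_DATE shifts the rotation cycle, which is the customization mentioned above.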
by Daniel Shashko
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automates the process of scraping product data from e-commerce websites and using it to fine-tune a custom OpenAI GPT model for generating high-quality marketing copy and product descriptions.

**Main Use Cases**
- Fine-tune OpenAI models with real product data from hundreds of supported e-commerce websites for marketing content generation.
- Create custom AI models specialized in writing compelling product descriptions across different industries and platforms.
- Automate the entire pipeline from data collection to model training using Bright Data's extensive scraper library.
- Generate marketing copy using your custom-trained model via an interactive chat interface.

**How it works**
The workflow operates in two main phases, model training and model usage, organized into these stages:

*Data Collection & Processing*
- Manually triggered to start the fine-tuning process.
- Uses Bright Data's web scraper to extract product information from any supported e-commerce platform (Amazon, eBay, Shopify stores, Walmart, Target, and hundreds of other websites).
- Collects product titles, brands, features, descriptions, ratings, and availability status from your chosen platform.
- Easily customizable to scrape different websites by simply changing the dataset configuration and product URLs.

*Training Data Preparation*
- A Code node processes the scraped product data to create training examples in OpenAI's required JSONL format (see the sketch below).
- For each product, it generates a complete training example with:
  - A system message defining the AI's role as a marketing assistant.
  - A user prompt containing specific product details (title, brand, features, original description snippet).
  - An assistant response providing an ideal marketing description template.
- Compiles all training examples into a single JSONL file ready for OpenAI fine-tuning.

*Model Fine-Tuning*
- Uploads the training file to OpenAI using the OpenAI File Upload node.
- Initiates a fine-tuning job via HTTP Request to OpenAI's fine-tuning API, using the GPT-4o-mini model as the base.
- The fine-tuning process runs on OpenAI's servers to create your custom model.

*Interactive Chat Interface*
- Provides a chat trigger that allows real-time interaction with your fine-tuned model.
- An AI Agent node connects to your custom-trained OpenAI model.
- Users can chat with the model to generate product descriptions, marketing copy, or other content based on the training.

*Custom Model Integration*
- The OpenAI Chat Model node is configured to use your specific fine-tuned model ID.
- Delivers responses trained on your product data for consistent, high-quality marketing content.

**Summary Flow**
Manual Trigger -> Scrape E-commerce Products (Bright Data) -> Process & Format Training Data (Code) -> Upload Training File (OpenAI) -> Start Fine-Tuning Job (HTTP Request)
In parallel: Chat Trigger -> AI Agent -> Custom Fine-Tuned Model Response

**Benefits**
- Fully automated pipeline from raw product data to a trained AI model.
- Works with hundreds of different e-commerce websites through Bright Data's extensive scraper library.
- Creates specialized models trained on real e-commerce data for authentic marketing copy across various industries.
- Scalable solution that can be adapted to different product categories, niches, or websites.
- Interactive chat interface for immediate access to your custom-trained model.
- Cost-effective fine-tuning using OpenAI's most efficient model (GPT-4o-mini).
- Easily customizable with different websites, product URLs, training prompts, and model configurations.

**Setup Requirements**
- Bright Data API credentials for web scraping (supports hundreds of e-commerce websites).
- OpenAI API key with fine-tuning access.
- Replace placeholder credential IDs and model IDs with your actual values.
- Customize the product URLs list and Bright Data dataset for your specific website and use case.

The workflow can be adapted for any e-commerce platform supported by Bright Data's scraping infrastructure.
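To make the training-data step concrete, here is a hedged sketch of a Code node that turns scraped products into OpenAI's chat fine-tuning JSONL format (one JSON object per line). The input field names (title, brand, features, description) are assumptions about the scraper output, and the assistant message is a placeholder for whatever ideal description template the workflow uses.

```typescript
// Hedged sketch of building JSONL training examples from scraped products.
const lines = $input.all().map((item) => {
  const p = item.json as { title: string; brand: string; features?: string[]; description?: string };

  const example = {
    messages: [
      { role: "system", content: "You are a marketing assistant that writes compelling product descriptions." },
      {
        role: "user",
        content:
          `Write a product description for "${p.title}" by ${p.brand}. ` +
          `Key features: ${(p.features ?? []).join(", ")}. ` +
          `Reference copy: ${(p.description ?? "").slice(0, 200)}`,
      },
      { role: "assistant", content: "An ideal, on-brand marketing description for this product goes here." },
    ],
  };

  return JSON.stringify(example); // one training example per line
});

// The resulting JSONL text is then uploaded to OpenAI with purpose "fine-tune".
return [{ json: { jsonl: lines.join("\n") } }];
```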