by Agent Studio
## Overview
This workflow answers user requests sent via Mac Shortcuts. Several Shortcuts call the same webhook with a query and a type of query.

Types of query are:
- Translate to English
- Translate to Spanish
- Correct grammar (without changing the actual content)
- Make content shorter
- Make content longer

## How it works
1. Select the text you are writing.
2. Launch the Shortcut.
3. The text is sent to the webhook.
4. Depending on the type of request, a different prompt is used.
5. Each request is sent to an OpenAI node.
6. The workflow responds to the request with the response from GPT.
7. The Shortcut replaces the selected text with the new one.

For a demo and setup instructions, see below.

## How to use it
1. Activate the workflow.
2. Download this Shortcut template and install the Shortcut.
3. In step 2 of the Shortcut, change the URL of the webhook.
4. In Shortcut details, "Add Keyboard Shortcut" with the key you want to use to launch the Shortcut.
5. Go to Settings > Advanced and check "Allow Running Scripts".

You are ready to use the Shortcut: select some text and hit the keyboard shortcut you just defined.
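As a rough illustration of what the Shortcut sends to the webhook, here is a minimal sketch. The field names `query` and `type`, and the webhook path, are assumptions for illustration; check the Webhook node and the Shortcut template for the actual names.

```typescript
// Minimal sketch of the request the Mac Shortcut posts to the n8n webhook.
// URL, field names, and type values below are placeholders, not the template's exact schema.
const payload = {
  query: "Ths sentense has a typo.",   // the currently selected text
  type: "correct grammar",             // one of the supported query types
};

const response = await fetch("https://your-n8n-instance/webhook/text-assistant", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
});

// The workflow's respond-to-webhook step returns the rewritten text,
// which the Shortcut then pastes over the selection.
const rewritten = await response.text();
console.log(rewritten);
```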
by bangank36
This workflow restores all n8n instance workflows from GitHub backups using the n8n API node. It complements the Backup Your Workflows to GitHub template by allowing users to seamlessly restore previously saved workflows.

## How It Works
The workflow fetches workflows stored in a GitHub repository and imports them into your n8n instance.

## Setup Instructions
To configure the workflow, update the Globals node with the following values:
- **repo.owner** – Your GitHub username
- **repo.name** – The name of your GitHub repository storing the workflows
- **repo.path** – The folder path within the repository where workflows are stored

For example, if your GitHub username is john-doe, your repository is named n8n-backups, and workflows are stored in a workflows/ folder, you would set:
- repo.owner → john-doe
- repo.name → n8n-backups
- repo.path → workflows/

## Required Credentials
- **GitHub API** – Access to your repository
- **n8n API** – To import workflows into your n8n instance

## Who Is This For?
This template is ideal for users who want to restore their workflows from GitHub backups, ensuring easy migration and recovery in case of data loss.

Check out my other templates: 👉 My n8n Templates
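For a feel of what the restore amounts to, here is a hedged sketch of the same logic outside n8n. The environment variable names are assumptions, and depending on your n8n version the imported JSON may need read-only fields stripped; the template itself handles this with the GitHub and n8n API nodes.

```typescript
// Rough sketch of the restore the workflow performs with built-in nodes.
// Repo values mirror the Globals node; tokens and URLs here are placeholders.
const repo = { owner: "john-doe", name: "n8n-backups", path: "workflows" };
const githubToken = process.env.GITHUB_TOKEN!;
const n8nBaseUrl = process.env.N8N_API_URL!;   // e.g. https://your-n8n/api/v1
const n8nApiKey = process.env.N8N_API_KEY!;

// 1. List the backed-up workflow files in the repository folder.
const listUrl = `https://api.github.com/repos/${repo.owner}/${repo.name}/contents/${repo.path}`;
const files: Array<{ name: string; download_url: string }> = await (
  await fetch(listUrl, { headers: { Authorization: `Bearer ${githubToken}` } })
).json();

// 2. Download each JSON backup and create it as a workflow via the n8n public API.
for (const file of files.filter((f) => f.name.endsWith(".json"))) {
  const workflow = await (await fetch(file.download_url)).json();
  await fetch(`${n8nBaseUrl}/workflows`, {
    method: "POST",
    headers: { "X-N8N-API-KEY": n8nApiKey, "Content-Type": "application/json" },
    body: JSON.stringify(workflow),
  });
}
```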
by Ria
This is a very simple workflow that lets you subscribe to any GitHub repository for the latest release (using n8n as an example).

## How it works
- Polls the GitHub repository daily for the latest (stable) release of n8n
- Parses the release content to HTML
- Sends an email via Gmail

## Setup steps
- Add your Gmail credentials (or use another email node of your choice)
- Change the URL to the GitHub repository you want to check regularly
- Change the "To" email address to the address that should receive the updates

## Feedback & Questions
If you have any questions or feedback about this workflow, feel free to get in touch at ria@n8n.io
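The daily poll boils down to one GitHub API call, which the workflow makes with an HTTP Request node. A minimal sketch of the equivalent request (the repository value is the part you would change):

```typescript
// GitHub's "latest" release endpoint already excludes pre-releases and drafts,
// so this returns the newest stable release only.
const repo = "n8n-io/n8n"; // change to the repository you want to watch

const release = await (
  await fetch(`https://api.github.com/repos/${repo}/releases/latest`, {
    headers: { Accept: "application/vnd.github+json" },
  })
).json();

// release.tag_name, release.name, and release.body (Markdown release notes)
// are what the workflow converts to HTML and emails via Gmail.
console.log(release.tag_name, release.html_url);
```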
by Airtop
## Automating LinkedIn Company URL Verification

## Use Case
This automation verifies that a given LinkedIn URL actually belongs to a company by comparing the website listed on their LinkedIn page against the expected company domain. It is essential for ensuring data accuracy in lead qualification, enrichment, and CRM updates.

## What This Automation Does
### Input Parameters
- **Company LinkedIn**: The LinkedIn URL to be verified.
- **Company Domain**: The expected domain (e.g., example.com) for validation.
- **Airtop Profile (connected to LinkedIn)**: Airtop Profile with LinkedIn authentication.

### Output
- Confirmation whether the LinkedIn page corresponds to the provided domain.
- Returns the verified LinkedIn URL if the match is confirmed.

## How It Works
1. Extracts the website URL from the specified LinkedIn company profile.
2. Compares the extracted URL with the provided company domain.
3. If the domain is contained in the extracted website, the LinkedIn profile is confirmed as valid.
4. Returns the original LinkedIn URL if the match is successful.

## Setup Requirements
- Airtop API Key
- LinkedIn-authenticated Airtop Profile

## Next Steps
- **Use for LinkedIn Discovery Validation**: Ensure correctness after automated LinkedIn page discovery.
- **Combine with CRM Updates**: Prevent incorrect LinkedIn links from being stored in CRM.
- **Automate in Data Pipelines**: Use this as a validation gate before enrichment or scoring steps.
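The comparison step is a simple containment check once both values are normalized. A minimal sketch (the extraction itself is done by Airtop; the normalization rules here are an illustrative assumption, not the template's exact logic):

```typescript
// Normalize the website pulled from the LinkedIn page and check whether it
// contains the expected company domain.
function matchesDomain(extractedWebsite: string, companyDomain: string): boolean {
  const normalize = (value: string) =>
    value.toLowerCase().replace(/^https?:\/\//, "").replace(/^www\./, "");
  return normalize(extractedWebsite).includes(normalize(companyDomain));
}

// Example: the LinkedIn page lists "https://www.example.com/about"
// and the expected domain is "example.com" -> match confirmed.
console.log(matchesDomain("https://www.example.com/about", "example.com")); // true
```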
by Giacomo Lanzi
Extract the title tag and meta description from URLs for SEO analysis.

## How it works
The workflow takes records from Airtable, gets the URL in each record, and extracts the title tag (`<title>`) and meta description (`<meta name="description" content="Some content">`) from the related webpage. If the title tag and/or meta description isn't available on the webpage, the result will be empty.

## Setup
Set up a Base in Airtable with a table with the following structure: url (field type url), title tag (field type text string), meta desc (field type text field).

A minimum suggested table structure is: url (https://example.com), title tag (Title example), meta desc (This is the meta description of the example page).

Connect Airtable to both Airtable nodes in the template and, with the following formula, get all the records that are missing the title tag and meta desc:

`AND(url != "", {title tag} = "", {meta desc} = "")`

Insert the URL to be analyzed in the url field of the table and let the workflow do the rest.

## Extra
You can also calculate the length of the title tag and meta desc using a formula field inside Airtable: `LEN({title tag})` or `LEN({meta desc})`.

You can automate the process by calling a Webhook from Airtable. For this, you need an Airtable paid plan.
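In the template the extraction is handled by n8n's HTML handling, but as a rough sketch of the same idea in code (the regexes here are simplified assumptions, not the node's implementation, and will miss some markup variants):

```typescript
// Fetch a page and pull out <title> and the meta description.
// Simplified regexes for illustration; an HTML parser (or n8n's HTML node) is more robust.
async function extractSeoTags(url: string) {
  const html = await (await fetch(url)).text();

  const title = html.match(/<title[^>]*>([^<]*)<\/title>/i)?.[1]?.trim() ?? "";
  const metaDesc =
    html.match(/<meta\s+name=["']description["']\s+content=["']([^"']*)["']/i)?.[1]?.trim() ?? "";

  // Empty strings mirror the template's behavior when a tag is missing.
  return { url, "title tag": title, "meta desc": metaDesc };
}

console.log(await extractSeoTags("https://example.com"));
```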
by Ramsey Njire
## Who Is This For?
This workflow is perfect for content creators, marketers, and business professionals who receive regular newsletters and want to effortlessly convert them into engaging LinkedIn posts. By automating the extraction and repurposing process, you can save time and consistently share thoughtful updates with your network.

## What Problem Does This Workflow Solve?
Manually reading newsletters, extracting the key points, and then formatting that content into professional, engaging LinkedIn posts can be time-consuming and error-prone. This workflow automates those steps by:
- **Filtering Emails:** Uses the Gmail node to process only those emails from a specific sender (e.g., newsletter@example.com).
- **Extracting Content:** Leverages OpenAI to identify and summarize the top news items in your newsletter.
- **Generating Posts:** Crafts concise, insightful LinkedIn posts in a smart, deadpan style with a touch of subtle humor.
- **Publishing:** Posts the generated content directly to LinkedIn.

## What This Workflow Does
- **Filter Newsletters:** The Gmail node is set up to only handle emails from your chosen sender, ensuring that only relevant newsletters are processed.
- **Extract Key Content:** An OpenAI node analyzes the newsletter text to pull out the most important news items, including headlines and summaries.
- **Split Content:** A Split Out node divides the extracted content so each news item is processed on its own.
- **Generate LinkedIn Posts:** Another OpenAI node takes each news item's details and produces a well-structured LinkedIn post that delivers practical insights and ends with a reflective observation or question.
- **Publish to LinkedIn:** The LinkedIn node publishes the crafted posts directly to your account.

## Setup
1. **Gmail Node:** Rename it to “Filter Gmail Newsletter” and configure it to filter emails by your newsletter sender.
2. **OpenAI Nodes:** Ensure your OpenAI API credentials are set up correctly. Customize the prompt if needed to match your desired tone.
3. **LinkedIn Node:** Rename it to “Post to LinkedIn” and confirm that your LinkedIn OAuth2 credentials are properly configured.

## How to Customize
- **OpenAI Prompts:** Adjust the prompts in the OpenAI nodes to fine-tune the post tone and output formatting.
- **Email Filter:** Change the Gmail filter to match the sender of your newsletters.
- **Post Processing:** Optionally, add extra formatting (using Function nodes) to further enhance the readability of the generated LinkedIn posts.

This template offers an automated, hands-off solution to transform your newsletter content into engaging LinkedIn updates, keeping your audience informed and inspired with minimal effort.
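To give a feel for the "Extract Key Content" step, here is a hedged sketch of an equivalent OpenAI call. The prompt wording, output shape, and model are illustrative assumptions; the template does this inside its OpenAI node, where you can adjust the prompt as described above.

```typescript
// Illustrative call only; the template performs this with its OpenAI node.
const newsletterText = "..."; // the Gmail message body

const completion = await (
  await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content:
            "Extract the top news items from this newsletter as a JSON array of {headline, summary} objects.",
        },
        { role: "user", content: newsletterText },
      ],
    }),
  })
).json();

// Each extracted item is later split out and turned into its own LinkedIn post.
console.log(completion.choices[0].message.content);
```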
by Audun
## Send structured logs to BetterStack from any workflow using HTTP Request

## Who is this for?
This workflow is perfect for automation builders, developers, and DevOps teams using n8n who want to send structured log messages to BetterStack Logs. Whether you're monitoring mission-critical workflows or simply want centralized visibility into process execution, this reusable log template makes integration easy.

## What problem is this workflow solving?
Logging failures or events across multiple workflows typically requires duplicated logic. This workflow solves that by acting as a shared log sender, letting you forward consistent log entries from any other workflow using the Execute Workflow node.

## What this workflow does
- Accepts level (e.g., "info", "warn", "error") and message fields via the Execute Workflow Trigger
- Sends the structured log to your BetterStack ingestion endpoint via HTTP Request
- Uses HTTP Header Auth for secure delivery
- Includes a manual trigger for testing and a sample call to demonstrate usage
- Comes with clear sticky notes to help you get started

## Setup
1. Copy your BetterStack Logs ingestion URL.
2. Create a Header Auth credential in n8n with your Authorization: Bearer YOUR_API_KEY.
3. Replace the URL in the HTTP Request node with your BetterStack endpoint.
4. Optionally modify the test data or log levels for custom scenarios.
5. Use Execute Workflow in any of your workflows to send logs here.
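The HTTP Request node ends up sending something roughly like the sketch below. The endpoint and payload fields are assumptions for illustration; use the ingestion URL from your own BetterStack Logs source and whatever extra fields you want indexed.

```typescript
// Sketch of the log delivery the workflow's HTTP Request node performs.
// Endpoint and token are placeholders; copy both from your BetterStack Logs source.
const log = {
  dt: new Date().toISOString(), // timestamp
  level: "error",
  message: "Order sync failed in workflow 'Shopify → ERP'",
};

await fetch("https://in.logs.betterstack.com", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.BETTERSTACK_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify(log),
});
```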
by Ryan
## Who is this template for?
This template is for any Microsoft Outlook user who wants a trained AI agent to reason and reply on their behalf. Teach your agent tone and writing style to replicate your own, or develop a persona for a shared inbox.

## Requirements
- Outlook with authentication credentials
- OpenAI account with authentication credentials
- A few sample email replies of various lengths and topics

## How it works
1. Connect your Outlook account.
2. Select (filter) which email sender(s) your trained AI agent will reply to. [Tip: pick a sender that has some repeatability, either with a topic (e.g., sales) or an individual (coworker@yourcompany.com).]
3. Connect your OpenAI account.
4. Choose your AI model (e.g., gpt-4o-mini).
5. Add the Prompt (User Message) and select "System Message" from the options below.
6. Update the instructions by filling in your name (or persona) and response style, and add full email replies from the topic or individual you want the AI agent to emulate. [Tip: Add actual replies from your email Sent folder, including your greeting and sign-off. Paste each email sample between a set of <example> ... </example> tags.]
7. Configure the reply (or reply all) to remain within the original email thread.
8. Test it! Send an email from an address your agent is set to reply to, then check your Sent (or Drafts) folder for the result.

Enjoy all the free time you now have!

If you have questions or need assistance, email us at: support@teambisonandbird.com

Note: This template does not include retrieving email addresses out of the message or body of the email.
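Here is a hedged sketch of what a filled-in system message might look like once you add a persona and paste in your samples between the example tags. The persona, style notes, and sample replies are illustrative, not the template's exact prompt text.

```typescript
// Illustrative system message for the agent; replace the persona, style notes,
// and examples with real replies from your own Sent folder.
const systemMessage = `
You are Alex Doe, replying to sales enquiries on Alex's behalf.
Write in a friendly, concise tone, greet the sender by first name,
and sign off with "Best, Alex".

<example>
Hi Jordan,

Thanks for reaching out! Tuesday at 2pm works well for a quick call.

Best,
Alex
</example>

<example>
Hi Sam,

Great question. The Pro plan includes priority support; happy to walk you through it.

Best,
Alex
</example>
`;

console.log(systemMessage);
```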
by Oneclick AI Squad
This n8n template demonstrates how to create a comprehensive voice-powered restaurant assistant that handles table reservations, food orders, and restaurant information requests through natural language processing. The system uses VAPI for voice interaction and PostgreSQL for data management, making it perfect for restaurants looking to automate customer service with voice AI technology.

## Good to know
- Voice processing requires an active VAPI subscription with per-minute billing
- Database operations are handled in real-time with immediate confirmations
- The system can handle multiple simultaneous voice requests
- All customer data is stored securely in PostgreSQL with proper indexing

## How it works
### Table Booking & Order Handling Workflow
- Voice requests are captured through VAPI triggers when customers make booking or ordering requests
- The system processes natural language commands and extracts relevant details (party size, time, food items)
- Customer data is immediately saved to the bookings and orders tables in PostgreSQL
- Voice confirmations are sent back through VAPI with booking details and estimated wait times
- All transactions are logged with timestamps for restaurant management tracking

### Restaurant Info Provider Workflow
- Info requests trigger when customers ask about hours, menu, location, or services
- Restaurant details are retrieved from the restaurant_info table containing current information
- Wait nodes ensure proper data loading before voice response generation
- Structured restaurant information is delivered via VAPI in a natural, conversational format

## Database Schema
(A SQL sketch of this setup follows at the end of this section.)

### Bookings Table
- booking_id (PRIMARY KEY) – Unique identifier for each reservation
- customer_name – Customer's full name
- phone_number – Contact number for confirmation
- party_size – Number of guests
- booking_date – Requested reservation date
- booking_time – Requested time slot
- special_requests – Dietary restrictions or special occasions
- status – Booking status (confirmed, pending, cancelled)
- created_at – Timestamp of booking creation

### Orders Table
- order_id (PRIMARY KEY) – Unique order identifier
- customer_name – Customer's name
- phone_number – Contact for order updates
- order_items – JSON array of food items and quantities
- total_amount – Calculated order total
- order_type – Delivery, pickup, or dine-in
- special_instructions – Cooking preferences or allergies
- status – Order status (received, preparing, ready, delivered)
- created_at – Order timestamp

### Restaurant_Info Table
- info_id (PRIMARY KEY) – Information entry identifier
- category – Type of info (hours, menu, location, contact)
- title – Information title
- description – Detailed information content
- is_active – Whether info is currently valid
- updated_at – Last modification timestamp

## How to use
- The manual trigger can be replaced with webhook triggers for integration with existing restaurant systems
- Import the workflow into your n8n instance and configure VAPI credentials
- Set up the PostgreSQL database with the required tables using the schema provided above
- Configure restaurant information in the restaurant_info table
- Test voice commands such as "Book a table for 4 people at 7 PM" or "What are your opening hours?"
- Customize voice responses in VAPI nodes to match your restaurant's tone and branding
- The system can handle multiple concurrent voice requests and scales with your restaurant's needs

## Requirements
- VAPI account for voice processing and natural language understanding
- PostgreSQL database for storing booking, order, and restaurant information
- n8n instance with database and VAPI integrations enabled

## Customising this workflow
- Voice AI automation can be adapted for various restaurant types, from quick service to fine dining establishments
- Try popular use-cases such as multi-location booking management, dietary restriction handling, or integration with existing POS systems
- The workflow can be extended to include payment processing, SMS notifications, and third-party delivery platform integration
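As a hedged sketch of the database setup described in the Database Schema section above, here is one way to create the bookings table. The column types are assumptions inferred from the field descriptions, and only one of the three tables is shown; orders and restaurant_info follow the same pattern.

```typescript
import { Client } from "pg";

// Creates the bookings table from the schema description above.
// Column types are illustrative assumptions; adjust them to your own conventions.
const ddl = `
  CREATE TABLE IF NOT EXISTS bookings (
    booking_id        SERIAL PRIMARY KEY,
    customer_name     TEXT NOT NULL,
    phone_number      TEXT NOT NULL,
    party_size        INTEGER NOT NULL,
    booking_date      DATE NOT NULL,
    booking_time      TIME NOT NULL,
    special_requests  TEXT,
    status            TEXT DEFAULT 'pending',   -- confirmed | pending | cancelled
    created_at        TIMESTAMPTZ DEFAULT now()
  );
`;

const client = new Client({ connectionString: process.env.DATABASE_URL });
await client.connect();
await client.query(ddl);
await client.end();
```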
by Oneclick AI Squad
This n8n template demonstrates how to create an automated customer feedback collection system for restaurants. The workflow triggers when new customer emails are added to an Excel sheet, automatically sends personalized feedback forms, and stores all responses in a separate Excel tracking sheet. Perfect for restaurants wanting to systematically gather customer insights and improve service quality.

## Good to know
- Each feedback form is personalized with the customer's name and email
- All responses are automatically timestamped and organized in Excel sheets
- The system handles form validation and ensures complete data capture
- Email notifications keep your team updated on new feedback submissions

## How it works
### Email Distribution Workflow
- New customer entries are detected in Excel Sheet-1 (customer database) containing customer names and email addresses
- The system automatically generates personalized feedback forms for each new customer
- Customized feedback emails are sent with embedded forms tailored to restaurant experience evaluation
- Wait nodes ensure proper processing timing before sending emails

### Feedback Collection Workflow
- Customer form submissions trigger the data collection process
- All feedback responses are captured, including ratings, comments, and contact information
- Data is automatically appended to Excel Sheet-2 (feedback responses) with complete timestamps
- The system handles multiple concurrent submissions without data loss

## Excel Sheet Structure
### Sheet-1 (Customer Database)
- Name – Customer's full name
- Email – Customer's email address for form distribution

### Sheet-2 (Feedback Responses)
- Timestamp – Date and time of form submission
- Name – Customer's full name
- E-Mail – Customer's email address
- Contact Number – Customer's phone number
- How was the cleanliness of the dining area? – Cleanliness rating/feedback
- Did you like the taste of the food? – Food taste evaluation
- What dish did you enjoy the most? – Favorite dish identification
- Was your order accurate and timely? – Service accuracy rating
- Was our staff polite and helpful? – Staff service evaluation
- Was the food presentation appealing? – Food presentation rating
- How would you rate your overall dining experience? – Overall experience score
- Any additional comments or suggestions? – Open-ended feedback field

## How to use
1. Import the workflow into your n8n instance and configure the Excel integration
2. Set up Sheet-1 with customer names and emails for feedback distribution
3. Configure the feedback form with your restaurant's specific questions and branding
4. Add new customer entries to Sheet-1 to automatically trigger feedback emails
5. Monitor Sheet-2 for incoming responses and analyze customer satisfaction trends
6. The system scales automatically with your customer database growth

## Requirements
- Google Sheets account for data storage and management
- Email service integration (Gmail, SMTP, or similar)
- n8n instance with Google Sheets and email connectors

## Customising this workflow
- Customer feedback automation can be adapted for different restaurant types and service models
- Try popular use-cases such as post-dining follow-ups, seasonal menu feedback, or special event evaluations
- The workflow can be extended to include automated response analysis, sentiment scoring, and management dashboard integration
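For illustration, the record appended to Sheet-2 on each form submission could be shaped roughly like this. The keys mirror the column headers listed above; the sample values are placeholders, and in the template the append itself is handled by the sheet node rather than custom code.

```typescript
// Illustrative shape of one feedback row, keyed by the Sheet-2 column headers.
const feedbackRow = {
  "Timestamp": new Date().toISOString(),
  "Name": "Jane Smith",
  "E-Mail": "jane@example.com",
  "Contact Number": "+1 555 0100",
  "How was the cleanliness of the dining area?": "Excellent",
  "Did you like the taste of the food?": "Yes",
  "What dish did you enjoy the most?": "Margherita pizza",
  "Was your order accurate and timely?": "Yes",
  "Was our staff polite and helpful?": "Very",
  "Was the food presentation appealing?": "Yes",
  "How would you rate your overall dining experience?": "9/10",
  "Any additional comments or suggestions?": "Would love more vegan options.",
};

console.log(feedbackRow);
```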
by Viktor Klepikovskyi
## Google Sheets UI for Workflow Control
This n8n template provides a practical and efficient way to manage your n8n workflows using Google Sheets as a user-friendly interface. It demonstrates how to leverage a simple spreadsheet to control inputs, capture outputs, and track the processing status of individual data rows, offering a clear and visual overview of your automation tasks.

## Purpose of This Template
The primary purpose of this template is to illustrate how Google Sheets can serve as a dynamic UI for your n8n automations. It's designed for n8n users who need:
- A structured method to feed specific data into their workflows.
- The ability to selectively trigger workflow execution based on data status.
- A centralized place to view and store workflow outputs alongside original inputs.
- A simple, no-code solution for managing workflow data without building custom applications.

## Setup Instructions
To use this template, follow these steps:
1. **Create a Google Sheet:** Set up a new Google Sheet (see the template here) with three columns: Color, Status, and Number. Populate the Color column with some sample data (e.g., color names) and set the Status for the rows you want to process to READY.
2. **Import the n8n Workflow:** Import this n8n template into your n8n instance.
3. **Configure Google Sheets Nodes:** For the first Google Sheets node (Read operation), ensure it's connected to your newly created Google Sheet and configured to read rows where the Status column is READY. You will need to authenticate your Google Sheets account. For the second Google Sheets node (Update operation), ensure it's also connected to the same Google Sheet. The node should automatically map the row_number, Number, and Status fields from the preceding nodes.
4. **Execute the Workflow:** Run the workflow. Observe how it reads READY rows, processes them (calculates string length), and updates the Number and Status columns in your Google Sheet to DONE.
5. **Control Execution:** To process new data, simply add new rows to your Google Sheet and set their Status to READY. Rerunning the workflow will then only process these new entries.

For more details and context on this approach, you can refer to the related blog post here.
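To make the processing step concrete, here is a minimal sketch in the style of an n8n Code node. The template may implement this step with different nodes; the field names simply match the sheet columns described above, and the sample rows stand in for whatever the read node returns.

```typescript
// For each READY row: compute the length of the Color value, store it in
// Number, and flip Status to DONE so the row is skipped on the next run.
type SheetRow = { row_number: number; Color: string; Status: string };

// Stand-in for the rows returned by the "read READY rows" node.
const items: SheetRow[] = [
  { row_number: 2, Color: "turquoise", Status: "READY" },
  { row_number: 3, Color: "red", Status: "READY" },
];

const processed = items.map((row) => ({
  row_number: row.row_number, // used by the Update node to target the row
  Number: row.Color.length,   // the "processing" in this demo
  Status: "DONE",
}));

console.log(processed);
```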
by Wyeth
## Encode JSON to Base64 String in n8n
This example workflow demonstrates how to convert a JSON object into a base64-encoded string using n8n’s built-in file processing capabilities. This is a common requirement when working with APIs, webhooks, or SaaS integrations that expect payloads to be base64-encoded.

> Tip: The three green-highlighted nodes (Stringify → Convert to File → Extract from File) can be wrapped in a Subworkflow to create a reusable Base64 encoder in your own projects.

## 🔧 Requirements
- Any running n8n instance (local or cloud)
- No credentials or external services required

## What This Workflow Does
1. Generates example JSON data
2. Converts the JSON to a string
3. Saves the string as a binary file
4. Extracts the file’s contents as a base64 string
5. Outputs the base64 string on the final node

## Step-by-Step Setup
1. **Manual Trigger:** Start the workflow using the Manual Execution node. This is useful for testing and development.
2. **Create JSON Data:** The Create Json Data node uses raw mode to construct a sample object with all major JSON types: strings, numbers, booleans, nulls, arrays, nested objects, etc.
3. **Convert to String:** The Convert to String node uses the expression ={{ JSON.stringify($json) }} to flatten the object into a single string field named json_text.
4. **Convert to File:** The Convert to File node takes the json_text value and saves it to a UTF-8 encoded binary file in the property encoded_text.
5. **Extract from File:** This node takes the binary file and extracts its contents as a base64-encoded string. The result is saved in the base64_text field.

## Customization Tips
- Replace the sample JSON in the Create Json Data node with your own payload structure.
- To make this reusable, extract the three core nodes into a Subworkflow or wrap them in a custom Function.
- Use the base64_text output field to post to APIs, store in databases, or include in webhook responses.
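For comparison, the same result can be produced in a single n8n Code node; this is a minimal sketch under that assumption, while the file-based node chain above is what the template actually ships.

```typescript
// Equivalent of Stringify → Convert to File → Extract from File in one step.
const sample = { name: "n8n", version: 1, tags: ["automation", "base64"], active: true };

const base64_text = Buffer.from(JSON.stringify(sample), "utf-8").toString("base64");
console.log(base64_text);

// Decoding it back confirms the round trip.
console.log(JSON.parse(Buffer.from(base64_text, "base64").toString("utf-8")));
```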