by n8n Team
This workflow adds a new product in Stripe whenever a new product has been added to Pipedrive.

**Prerequisites**
- Stripe account and Stripe credentials
- Pipedrive account and Pipedrive credentials

**How it works**
1. The Pipedrive Trigger node starts the workflow when a new product is added.
2. An HTTP Request node creates a new product in Stripe using the previous input.
3. The Merge node combines the Pipedrive and Stripe inputs. The output contains the Pipedrive data merged with the Stripe data; items are matched by index.
4. The Item Lists node splits the prices into separate items.
5. An HTTP Request node creates the price records in Stripe.
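For orientation, the two HTTP Request nodes correspond roughly to these Stripe API calls. This is a minimal sketch: the API key, product name, and price values are placeholders, since in the workflow they come from your credentials and the Pipedrive trigger data.

```javascript
// Minimal sketch of the two Stripe calls the HTTP Request nodes perform.
// STRIPE_KEY and the product/price values are placeholders.
const headers = {
  Authorization: `Bearer ${process.env.STRIPE_KEY}`,
  'Content-Type': 'application/x-www-form-urlencoded',
};

// 1. Create the product (first HTTP Request node)
const product = await fetch('https://api.stripe.com/v1/products', {
  method: 'POST',
  headers,
  body: new URLSearchParams({ name: 'Pipedrive product name' }),
}).then((r) => r.json());

// 2. Create one price per Pipedrive price item (second HTTP Request node)
await fetch('https://api.stripe.com/v1/prices', {
  method: 'POST',
  headers,
  body: new URLSearchParams({
    product: product.id,
    unit_amount: '1999', // amount in the smallest currency unit
    currency: 'usd',
  }),
});
```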
by Shiva
**AI Voice Calling Bot - OpenAI GPT-4o + ElevenLabs + Twilio Integration for Multilingual Appointment Booking & Service Orders**

**Overview**
Transform your business with an intelligent voice calling bot that handles customer calls automatically in 25+ languages. This n8n workflow integrates OpenAI GPT-4o, ElevenLabs text-to-speech, and Twilio for seamless appointment scheduling, pizza orders, and service bookings.

**Key Features**
- **Multilingual Support**: Conversations in English, Spanish, French, German, Italian, Portuguese, Chinese, Japanese, Arabic, and 20+ more languages
- **Natural AI Conversations**: GPT-4o powered responses with ElevenLabs realistic voice synthesis
- **Multi-Service Handling**: Appointments, orders, and service requests with automatic logging
- **Real-time Processing**: Instant speech-to-text and audio response generation

**Prerequisites**
- n8n instance (self-hosted or cloud)
- Twilio account with phone number
- OpenAI API key (GPT-4o access)
- ElevenLabs API credentials
- Google Sheets access
- Cloud storage for audio files

**Setup Instructions**
1. **Configure Credentials**: Add API keys for OpenAI, ElevenLabs, Twilio, and Google Sheets in the n8n credentials manager.
2. **Prepare Data Storage**: Create Google Sheets for call logs and appointments with the columns: timestamp, caller_id, speech_input, ai_response, language, call_sid.
3. **Configure Twilio**: Set the webhook URL to your n8n endpoint: https://your-n8n-instance.com/webhook/voice-webhook (see the sketch after this section).
4. **Update Sheet IDs**: Replace the placeholder Google Sheet IDs in the workflow nodes with your actual sheet IDs.

**Customization Options**
- **Voice Settings**: Adjust ElevenLabs multilingual voice models and parameters
- **AI Behavior**: Modify system prompts for specific business needs and languages
- **Service Types**: Add custom service handling logic
- **Business Hours**: Implement language-specific operating hours

**Monitoring**
Track call analytics, language preferences, conversion rates, and customer satisfaction across all supported languages through automated Google Sheets logging. Ready for production use with comprehensive error handling and scalability for global businesses.
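As a rough illustration of the Twilio side (an assumption, since the exact node configuration is not shown here), the webhook ultimately has to answer Twilio's request with TwiML that plays back the generated ElevenLabs audio and gathers the caller's next utterance. A minimal Code-node-style sketch, with the audio URL and webhook address as placeholders:

```javascript
// Minimal sketch (assumption: a Code node preparing the webhook's TwiML reply).
// audioUrl would point to the ElevenLabs MP3 uploaded to cloud storage.
const audioUrl = $json.audioUrl; // hypothetical field set by an earlier node

const twiml = `<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Play>${audioUrl}</Play>
  <Gather input="speech" action="https://your-n8n-instance.com/webhook/voice-webhook" method="POST"/>
</Response>`;

return [{ json: { twiml } }];
```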
by n8n Team
This workflow automatically adds a note about the PR from GitHub to the Pipedrive contact if the GitHub user's email matches a Person in Pipedrive.

**Prerequisites**
- Pipedrive account and Pipedrive credentials
- GitHub account and GitHub credentials

**How it works**
1. The GitHub Trigger node activates the workflow when a GitHub user opens a PR.
2. An HTTP Request node fetches the user's data and passes it on.
3. The Pipedrive node searches Pipedrive for the GitHub user's email address.
4. An IF node checks whether a person with the same email exists in Pipedrive.
5. If such a person exists, the Pipedrive node creates a note on that person's profile.
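The search step corresponds roughly to Pipedrive's person search endpoint. A minimal sketch, with the API token and email as placeholders (in the workflow the Pipedrive node handles authentication for you):

```javascript
// Minimal sketch of the Pipedrive person search the workflow performs.
// PIPEDRIVE_TOKEN and the email value are placeholders.
const email = 'octocat@example.com'; // email taken from the GitHub user's profile
const url =
  'https://api.pipedrive.com/v1/persons/search' +
  `?term=${encodeURIComponent(email)}&fields=email&exact_match=true` +
  `&api_token=${process.env.PIPEDRIVE_TOKEN}`;

const result = await fetch(url).then((r) => r.json());
const person = result.data?.items?.[0]?.item; // undefined if no match

if (person) {
  console.log(`Found Pipedrive person #${person.id}; a note will be attached.`);
}
```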
by Mihai Farcas
This n8n workflow operates as a two-agent system where each agent has a specialized task. The process flows from initial user input to a final analysis, with a seamless handoff between the agents.

**How it works**
1. **The Chat Trigger**: The entire process begins when you send a message using n8n's chat interface. This message serves as the initial prompt or query for the system.
2. **The Research Agent Takes Over**: The user's message is first sent to the Research Agent. This agent's job is to understand the query and gather relevant information. To do this, it has access to:
   - **LLM**: Google Gemini, which acts as the agent's "brain" to process language and make decisions.
   - **Tools**: web_search, which uses your self-hosted SearXNG instance to perform live searches on the internet, and get_current_date, which provides the current date for context-aware or time-sensitive research.
   The Research Agent uses these tools to find the most relevant information related to your query and then compiles it into a concise summary.
3. **Handoff to the Sentiment Analysis Agent**: Once the Research Agent has completed its task, it passes its findings directly to the Sentiment Analysis Agent.
4. **The Final Analysis**: The Sentiment Analysis Agent receives the text from the Research Agent. Its sole purpose, as defined by its system prompt, is to analyze the sentiment of the provided information. It determines if the content is positive, negative, or neutral and formulates a final response. This final analysis is then sent back to you in the chat, completing the workflow.

**Set up steps**
1. **Select the Language Model (LLM)**: This workflow is pre-configured with Google Gemini. You can select a different model for the agents as needed.
2. **Configure LLM Credentials**: Ensure that valid credentials for your chosen LLM are correctly set up within your n8n instance.
3. **Set Up the SearXNG Connection**: Configure the node to connect to your self-hosted SearXNG instance. This enables the agent's web search capabilities (see the sketch after this list).
4. **Define the Research Agent's Task**: Customize the system prompt for the "Research Agent" to define its role, instructions, and how it should conduct its research.
5. **Define the Sentiment Analysis Agent's Task**: Adjust the system prompt for the "Sentiment Analysis Agent" to specify how it should analyze the information provided by the Research Agent.
6. **Test the Workflow**: Use the built-in chat interface in the n8n canvas to send a message and verify that the agents are functioning correctly.
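For background, the web_search tool boils down to a JSON query against your SearXNG instance. A minimal sketch with a placeholder instance URL and query; note that the JSON output format must be enabled in your SearXNG settings for this to work:

```javascript
// Minimal sketch of the kind of request the web_search tool sends to SearXNG.
// The instance URL and query are placeholders.
const searxngUrl = 'https://searxng.example.com/search';
const query = 'latest n8n release notes';

const results = await fetch(
  `${searxngUrl}?q=${encodeURIComponent(query)}&format=json`
).then((r) => r.json());

// Each result has title, url and content fields the agent can summarize.
for (const r of results.results.slice(0, 5)) {
  console.log(r.title, r.url);
}
```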
by n8n Team
This workflow creates a GitHub issue when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as comments to the issue in GitHub.

**Prerequisites**
- Zendesk account and Zendesk credentials
- GitHub account and GitHub credentials
- GitHub repository to create issues in

**How it works**
The workflow listens for new tickets in Zendesk. When a new ticket is created, the workflow creates a new issue in GitHub. The GitHub issue number is then saved in one of the ticket's fields (in the setup below we call this "GitHub Issue Number"). The next time a comment is added to the ticket, the workflow retrieves the GitHub issue number from the ticket's field and adds the comment to the issue in GitHub.

**Setup**
This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:
1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL goes into the "Endpoint URL" field, and the "Request method" should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as "New tickets". Under "Conditions" in "Meet ALL of the following conditions", add "Status is New". Under "Actions", select "Notify active webhook" and select the webhook you created previously.
7. In the JSON body, add the following:
   { "id": "{{ticket.id}}", "comment": "{{ticket.latest_comment_html}}" }
8. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the GitHub issue number. To do so, follow the steps below:
1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the number field option and give the field a name such as "GitHub Issue Number".
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
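Under the hood, the GitHub node uses the standard Issues API. A minimal sketch of the two operations (create the issue, then add a comment), with placeholder owner/repo/title values; in the workflow the GitHub node handles authentication and field mapping for you:

```javascript
// Minimal sketch of the two GitHub API operations the workflow performs.
// GITHUB_TOKEN, owner, repo and the issue content are placeholders.
const headers = {
  Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
  Accept: 'application/vnd.github+json',
};

// 1. Create the issue from the new Zendesk ticket
const issue = await fetch('https://api.github.com/repos/owner/repo/issues', {
  method: 'POST',
  headers,
  body: JSON.stringify({ title: 'Zendesk ticket #123', body: 'First comment HTML' }),
}).then((r) => r.json());
// issue.number is what gets written back into the "GitHub Issue Number" field

// 2. Later, add a ticket comment to the same issue
await fetch(`https://api.github.com/repos/owner/repo/issues/${issue.number}/comments`, {
  method: 'POST',
  headers,
  body: JSON.stringify({ body: 'Latest Zendesk comment HTML' }),
});
```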
by Yaron Been
🎤 **Audio-to-Insights: Auto Meeting Summarizer**

Transform your meeting recordings into actionable insights automatically. This powerful n8n workflow monitors your Google Drive for new audio files, transcribes them using OpenAI's Whisper, generates intelligent summaries with ChatGPT, and logs everything in Google Sheets - all without lifting a finger.

🔄 **How It Works**
This workflow operates as a seamless 6-step automation pipeline:
1. **Smart Detection**: The workflow continuously monitors a designated Google Drive folder (polls every minute) for newly uploaded audio files.
2. **Secure Download**: When a new audio file is detected, the system automatically downloads it from Google Drive for processing.
3. **AI Transcription**: OpenAI's Whisper technology converts your audio recording into accurate text transcription, supporting multiple audio formats.
4. **Intelligent Summarization**: ChatGPT processes the transcript using a specialized prompt that extracts key discussion points and decisions, action items with assigned persons and deadlines, and priority levels and follow-up tasks, in clean, professional formatting. (A rough API-level sketch of steps 3 and 4 appears at the end of this description.)
5. **Timestamp Generation**: The system automatically adds the current date and formats it consistently for tracking purposes.
6. **Automated Logging**: The final summary is appended to your Google Sheets document with the date, creating a searchable archive of all meeting insights.

⚙️ **Setup Steps**

**Prerequisites**
Before setting up the workflow, ensure you have:
- Active Google Drive account
- OpenAI API key with credits
- Google Sheets access
- n8n instance (cloud or self-hosted)

**Configuration Steps**
1. **Credential Setup**
   - **Google Drive OAuth2**: Required for folder monitoring and file downloads
   - **OpenAI API Key**: Needed for both transcription (Whisper) and summarization (ChatGPT)
   - **Google Sheets OAuth2**: Essential for writing summaries to your spreadsheet
2. **Google Drive Configuration**
   - Create a dedicated folder in Google Drive for meeting recordings
   - Copy the folder ID from the URL (the long string after /folders/)
   - Update the folderToWatch parameter in the workflow
3. **Google Sheets Preparation**
   - Create a new Google Sheet or use an existing one
   - Ensure it has the columns: Date and Meeting Summary
   - Copy the spreadsheet ID from the URL
   - Update the documentId parameter in the workflow
4. **Audio Requirements**
   - **Supported Formats**: MP3, WAV, M4A, MP4
   - **Recommended Size**: Under 100MB for optimal processing
   - **Language**: Optimized for English (customizable for other languages)
   - **Quality**: Clear audio produces better transcriptions
5. **Workflow Activation**
   - Import the workflow JSON into your n8n instance
   - Configure all credential connections
   - Test with a sample audio file
   - Activate the workflow trigger

🚀 **Use Cases**
- **Project Management**
  - **Team Standup Summaries**: Convert daily standups into actionable task lists
  - **Sprint Retrospectives**: Extract improvement points and action items
  - **Stakeholder Updates**: Generate concise reports for leadership
- **Sales & Customer Success**
  - **Discovery Call Notes**: Capture prospect pain points and requirements
  - **Demo Follow-ups**: Track questions, objections, and next steps
  - **Customer Check-ins**: Monitor satisfaction and expansion opportunities
- **Consulting & Professional Services**
  - **Client Strategy Sessions**: Document recommendations and implementation plans
  - **Requirements Gathering**: Organize complex project specifications
  - **Progress Reviews**: Track deliverables and milestone achievements
- **HR & Training**
  - **Interview Debriefs**: Standardize candidate evaluation notes
  - **Training Sessions**: Create searchable knowledge bases
  - **Performance Reviews**: Document development plans and goals
- **Research & Development**
  - **Brainstorming Sessions**: Capture innovative ideas and concepts
  - **Technical Reviews**: Log decisions and architectural choices
  - **User Research**: Organize feedback and insights systematically

💡 **Advanced Customization Options**
- **Enhanced Summarization**: Modify the ChatGPT prompt to focus on specific elements:
  - Add speaker identification for multi-person meetings
  - Include sentiment analysis for customer calls
  - Generate department-specific summaries (technical, sales, legal)
  - Extract financial figures and metrics automatically
- **Integration Expansions**
  - **Slack Integration**: Auto-post summaries to relevant channels
  - **Email Notifications**: Send summaries to meeting participants
  - **CRM Updates**: Push action items directly to Salesforce/HubSpot
  - **Calendar Integration**: Schedule follow-up meetings based on action items
- **Quality Improvements**
  - **Audio Preprocessing**: Add noise reduction before transcription
  - **Multi-language Support**: Configure for international teams
  - **Custom Templates**: Create industry-specific summary formats
  - **Approval Workflows**: Add human review before final storage

🛠️ **Troubleshooting & Best Practices**
- **Common Issues**
  - **Large File Processing**: Split recordings over 100MB into smaller segments
  - **Poor Audio Quality**: Use noise reduction tools before uploading
  - **API Rate Limits**: Implement delay nodes for high-volume usage
  - **Formatting Issues**: Adjust ChatGPT prompts for consistent output
- **Optimization Tips**
  - Upload files in supported formats only
  - Ensure stable internet connection for cloud processing
  - Monitor OpenAI API usage and costs
  - Regularly backup your Google Sheets data
  - Test workflow changes with sample files first

📊 **Expected Outputs**
Sample Summary Format:
Meeting Summary - March 15, 2024
- Key Discussion Points: Q1 budget review and allocation decisions; new product launch timeline and milestones; team restructuring and role assignments
- Action Items: John: Finalize budget proposal by March 20th (High Priority); Sarah: Schedule product demo sessions for March 25th; Team: Submit org chart feedback by March 18th
- Decisions Made: Approved additional marketing budget of $50K; delayed product launch to April 15th for quality assurance; promoted Lisa to Senior Developer role

📞 **Questions & Support**
For any questions, customizations, or technical support regarding this workflow:
- 📧 **Email Support**
  - **Primary Contact**: Yaron@nofluff.online
  - **Response Time**: Within 24 hours on business days
  - **Best For**: Setup questions, customization requests, troubleshooting
- 🎥 **Learning Resources**
  - **YouTube Channel**: https://www.youtube.com/@YaronBeen/videos
  - Step-by-step setup tutorials, advanced customization guides, workflow optimization tips
- 🔗 **Professional Network**
  - **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
  - Connect for ongoing support, share your workflow success stories, get updates on new automation ideas

💡 **What to Include in Your Support Request**
- Describe your specific use case
- Share any error messages or logs
- Mention your n8n version and setup type
- Include sample audio file characteristics (if relevant)

Ready to transform your meeting chaos into organized insights? Download the workflow and start automating your meeting summaries today!
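As referenced above, here is a rough API-level sketch of the transcription and summarization steps. It is an illustration only: the file name, the exact summarization prompt, and the gpt-4o model choice are assumptions, and in the workflow the OpenAI nodes handle these calls with your configured credentials.

```javascript
// Rough sketch of the Whisper transcription and ChatGPT summarization steps.
// OPENAI_API_KEY and the file name are placeholders.
import fs from 'node:fs';

const headers = { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` };

// 1. Transcribe the downloaded recording with Whisper
const form = new FormData();
form.append('model', 'whisper-1');
form.append('file', new Blob([fs.readFileSync('meeting.mp3')]), 'meeting.mp3');
const { text: transcript } = await fetch(
  'https://api.openai.com/v1/audio/transcriptions',
  { method: 'POST', headers, body: form }
).then((r) => r.json());

// 2. Summarize the transcript with a chat model
const summary = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: { ...headers, 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'gpt-4o', // assumed model; use whichever the ChatGPT node is set to
    messages: [
      { role: 'system', content: 'Summarize the meeting: key points, action items with owners and deadlines, decisions.' },
      { role: 'user', content: transcript },
    ],
  }),
}).then((r) => r.json());

console.log(summary.choices[0].message.content);
```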
by Keith Rumjahn
**WordPress Post Auto-Categorization Workflow**

💡 Click here to read detailed case study
📺 Click here to watch youtube tutorial

🎯 **Purpose**
Automatically categorize WordPress blog posts using AI, saving hours of manual work. This workflow analyzes your post titles and assigns them to predefined categories using artificial intelligence.

🔄 **What This Workflow Does**
• Connects to your WordPress site
• Retrieves all uncategorized posts
• Uses AI to analyze post titles
• Automatically assigns appropriate category IDs
• Updates posts with new categories
• Processes dozens of posts in minutes

⚙️ **Setup Requirements**
- WordPress site with admin access
- Predefined categories in WordPress
- OpenAI API credentials (or your preferred AI provider)
- n8n with WordPress credentials

🛠️ **Configuration Steps**
1. Add your WordPress categories (manually in WordPress)
2. Note down category IDs
3. Update the AI prompt with your category IDs
4. Configure WordPress credentials in n8n
5. Set up AI API connection

🔧 **Customization Options**
• Modify AI prompts for different categorization criteria
• Adjust for multiple category assignments
• Add tag generation functionality
• Customize for different content types
• Add additional metadata updates

⚠️ **Important Notes**
• Backup your WordPress database before running
• Test with a few posts first
• Review AI categorization results initially
• Categories must be created manually first

🎁 **Bonus Features**
• Can be modified for tag generation
• Works with scheduled posts
• Handles bulk processing
• Maintains categorization consistency

Perfect for content managers, bloggers, and website administrators looking to organize their WordPress content efficiently.

#n8n #WordPress #ContentManagement #Automation #AI

Created by rumjahn
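As an illustration of the update step (an assumption about the underlying call, since the n8n WordPress node performs it for you), assigning the AI-chosen category to a post comes down to a single WordPress REST request. Site URL, credentials, and IDs below are placeholders:

```javascript
// Minimal sketch of the post-update step via the WordPress REST API.
// Site URL, application password and IDs are placeholders.
const postId = 123;     // an uncategorized post
const categoryId = 7;   // the category ID chosen by the AI

await fetch(`https://your-site.example/wp-json/wp/v2/posts/${postId}`, {
  method: 'POST',
  headers: {
    Authorization: 'Basic ' + Buffer.from('user:application-password').toString('base64'),
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ categories: [categoryId] }),
});
```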
by n8n Team
This template shows how to sync data from one service to another. Specifically, in this example we're saving a new qualified lead from a Postgres database to a Google Sheets file. Setup instructions are located inside the workflow template.
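For a sense of the data flow, here is a minimal sketch of the sync logic under assumed table and column names (a `leads` table with `qualified` and `synced` flags); the actual query and sheet mapping live inside the template:

```javascript
// Minimal sketch of the Postgres-to-Sheets sync; names are assumptions.
import pg from 'pg';

const client = new pg.Client({ connectionString: process.env.DATABASE_URL });
await client.connect();

// 1. Fetch newly qualified leads (the Postgres node does this in the workflow)
const { rows } = await client.query(
  'SELECT name, email, company FROM leads WHERE qualified = true AND synced = false'
);

// 2. Each row becomes one appended row in the Google Sheets node
const sheetRows = rows.map((r) => [r.name, r.email, r.company]);
console.log(`Would append ${sheetRows.length} rows to the sheet`, sheetRows);

await client.end();
```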
by Viktor Klepikovskyi
**Reusable and Independently Testable Sub-workflow**

This n8n workflow provides a standardized structure for building and testing sub-workflows in isolation. Its purpose is to help you create robust, reusable, and maintainable automations by enabling you to test the sub-workflow's logic without needing a separate parent workflow.

**Setup Instructions**
1. **Define Sub-workflow Inputs**: Double-click the Execute Sub-workflow Trigger node to define the parameters (e.g., color) that your sub-workflow will expect from a parent workflow.
2. **Configure Test Data**: Use the Test Input node (an Edit Fields (Set) node connected to the Manual Trigger) to provide sample data for isolated testing.
3. **Connect Inputs**: The Combine Input node (an Edit Fields (Set) node) is the entry point for your sub-workflow's core logic. It should have two inputs: one from the Execute Sub-workflow Trigger and one from the Test Input node.
4. **Merge Inputs**: Ensure the Combine Input node has the 'Include Other Input Fields' option enabled to merge data from both the live and test paths seamlessly.

You can read the full blog post that explains this workflow setup in detail here.
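To make the dual entry points concrete, here is a small sketch of the item shapes flowing through the two paths, using the color parameter mentioned above as an assumed example; the values are placeholders:

```javascript
// Minimal sketch of the data shapes on the two entry paths of the sub-workflow.

// What a parent workflow passes to the Execute Sub-workflow Trigger:
const fromParent = [{ json: { color: 'blue' } }];

// What the Test Input (Set) node produces when run via the Manual Trigger:
const fromTestInput = [{ json: { color: 'red' } }];

// The Combine Input node forwards whichever path was executed, so the core
// logic downstream always receives items shaped like { json: { color: ... } }.
return fromParent; // or fromTestInput, depending on how the run was started
```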
by Dvir Sharon
**Goodreads Quote Extraction with Bright Data and Gemini**

This workflow demonstrates how to fetch data specifically from Goodreads web pages using Bright Data and then extract specific information (quotes) from that data using a Google Gemini AI model.

**How it works**
1. The workflow is triggered manually.
2. It sends a request to a Bright Data collector to scrape data from a predefined list of Goodreads URLs.
3. The collected text data from Goodreads is then passed to a Google Gemini AI node.
4. The AI node processes the text and extracts quotes based on a specified JSON schema output format.

**Set up steps**
Setting up this workflow should take only a few minutes.
- You will need a Bright Data API key to configure the 'Header Auth' credential.
- You will need a Google Gemini API key to configure the 'Google Gemini(PaLM) Api account' credential.
- Ensure the correct Bright Data collector ID is set in the 'Perform Bright Data Web Request' node URL.
- Make sure the full list of target Goodreads URLs is correctly added to the 'Perform Bright Data Web Request' node's body.
- Link your created credentials to the respective nodes ('Perform Bright Data Web Request' and 'Quotes Extractor').
- Keep detailed descriptions for specific node configurations in sticky notes inside your workflow canvas.
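The exact JSON schema lives in the 'Quotes Extractor' node; as an assumption for illustration, a plausible minimal shape for the structured output could look like this:

```javascript
// A plausible minimal output schema for the quote extraction step.
// The actual schema in the 'Quotes Extractor' node may differ.
const quoteSchema = {
  type: 'object',
  properties: {
    quotes: {
      type: 'array',
      items: {
        type: 'object',
        properties: {
          text: { type: 'string' },   // the quote itself
          author: { type: 'string' }, // attributed author
          likes: { type: 'number' },  // Goodreads like count, if present
        },
        required: ['text', 'author'],
      },
    },
  },
  required: ['quotes'],
};
```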
by David Olusola
This plug-and-play n8n workflow automates medical record digitization using Mistral's OCR API and stores clean, structured data in Google Sheets. Whether you run a clinic or a healthtech product, this no-code solution simplifies data entry from scanned or uploaded medical documents.

📌 Works seamlessly on both self-hosted and cloud-based n8n environments.

👥 **Who is this for?**
- Hospitals and private clinics
- Healthtech platforms & startups
- Medical admin and document processing teams
- Clinical researchers and labs

😓 **What problem does it solve?**
- ❌ Manual entry from printed forms
- ❌ Unstructured, scattered records
- ❌ Errors in data transcription
- ❌ Inconsistent document storage
✅ This automation brings consistency, structure, and speed to the way you handle medical documents.

✅ **What this workflow does**
- Captures uploaded documents through a public form
- Uploads the file to Mistral for OCR processing
- Extracts clean text from each page (PDF or image)
- Parses patient fields (Name, DOB, Diagnosis, Medications, etc.)
- Saves records into a structured Google Sheet

🛠️ **Setup Instructions**
1. **Google Sheet Prep**: Create a Google Sheet with these columns (case-sensitive): Name, Date of Birth, Patient ID, Date of Visit, Referring Physician, Department, Symptoms, Blood Pressure, Heart Rate, Temperature, Lab Results, Diagnosis, Medications, Next Appointment, Notes
2. **Mistral API Access**: Sign up at Mistral AI, get your API key, and ensure your plan supports the file upload & OCR endpoints.
3. **Google OAuth Credentials (Self-hosted or Cloud)**: Go to n8n → Settings → Credentials and add Google Sheets OAuth2. Scope needed: https://www.googleapis.com/auth/spreadsheets
4. **Import Workflow**: Go to Workflows > Import from File and upload your JSON file. Replace the Google Sheet document ID in the "Google Sheets" node and your Mistral API key in the HTTP Header Auth credential.
5. **(Optional) Make Form Public**: In cloud-based n8n you can expose the form as a public page; otherwise, connect it to your website form via a webhook.

🧩 **Customization Tips**
- **Extract More Fields**: Update the "Data cleaning" node and extend the list of fields (see the sketch after this description): const fields = ["Name", "Diagnosis", "Medications", "Symptoms", ...];
- **Add EHR or Database Integration**: After Google Sheets, chain your custom system: PostgreSQL, Airtable, Supabase, MongoDB
- **Change Output Format**: Want JSON or Markdown output for internal tools? Use a Set or Code node before the final output step.

🧪 **Troubleshooting**

| Issue | Fix |
| --- | --- |
| File upload fails | Check Mistral API key and file type |
| Google Sheets not updating | Verify credentials and document ID |
| No data parsed | Check OCR quality; verify field labels in the document |
| Workflow not triggering | Ensure the webhook or form is configured correctly |

🌐 **Self-Hosted vs Cloud Comparison**

| Feature | Self-Hosted n8n | Cloud |
| --- | --- | --- |
| Public Form Access | Manual setup | Built-in |
| OAuth App Config | Required | Pre-configured |
| Storage Limits | Depends on server | Included with plan |
| Scalability | Fully customizable | Scales automatically |

📣 **Getting Support**
- n8n Docs
- Mistral API Docs
- n8n Community
- Or reach out to: David Olusola (dimejicole21@gmail.com)

🌟 Like this template? Give it a star in the template library and help other no-code builders discover it.

"Turn scanned documents into structured data with zero code."
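As referenced in the customization tips, here is a minimal sketch of the kind of parsing the "Data cleaning" node performs. It assumes the OCR text contains "Label: value" lines matching the sheet columns and that the OCR output arrives in a `text` field; the real node's logic and field names may differ:

```javascript
// Minimal sketch of label-based field extraction from the OCR text.
// Field list shortened for illustration; extend it with your sheet columns.
const fields = ["Name", "Date of Birth", "Diagnosis", "Medications", "Symptoms"];
const ocrText = $json.text; // assumed field name for the Mistral OCR output

const record = {};
for (const field of fields) {
  // Capture everything after "Field:" up to the end of the line
  const match = ocrText.match(new RegExp(`${field}\\s*:\\s*(.+)`, 'i'));
  record[field] = match ? match[1].trim() : '';
}

// One item per document; the Google Sheets node maps these keys to columns
return [{ json: record }];
```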
by please-open.it
**Intro**
This workflow requires the user to authenticate with an OpenID Connect provider before calling the webhook. If the user is not authenticated, it starts a login process using the Authorization Code flow with PKCE (https://datatracker.ietf.org/doc/html/rfc7636), a standard way to authenticate users with OpenID Connect. After the user logs in, the webhook is refreshed and reads the user's token from a cookie. With this token, all details about the user are requested from the userinfo endpoint of the identity provider.

**How to set up with Keycloak**
Keycloak is an open-source identity and access management solution. Feel free to get a demo realm at https://please-open.it or get your own Keycloak server up and running.
1. After creating a realm, go to "Realm Settings" and click on "OpenID Endpoint Configuration". Retrieve the authorization_endpoint, token_endpoint, and userinfo_endpoint values, and set those variables in the "Set variables" node.
2. In Keycloak, create a new client (name it as you want). Disable client authentication and check only "standard flow".
3. At the third step, put the webhook URL in "Valid redirect URIs" and fill "Web origins" with a "+".
You're done: open the webhook and it asks you to authenticate.

**Usage**

**User information**
The userinfo node returns this structure describing the user who has logged in:
[ { "sub":"73a6543f-f420-4fa6-9811-209e903c348b", "email_verified":true, "preferred_username": "mathieu.passenaud@please-open.it", "email": "mathieu.passenaud@please-open.it" } ]
I can use this information in my workflow for custom operations.

**API calls**
The "code" node returns a cookie named "n8n-custom-auth", which holds the access_token returned by the identity provider. This access_token can be used to call APIs connected to this identity provider (for example, we call the userinfo API with this token). Example: ask a user to log in with their Google account, then call an API (Gmail, Drive...) with their own token.

**How it works**
We published a blog post about this flow, how it works, and how you can use it: https://blog.please-open.it/n8n-openid-client/
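For background on the PKCE exchange the login process relies on, here is a minimal sketch of how a code verifier and challenge are derived (per RFC 7636) and used to build the authorization request. The endpoint, client ID, and redirect URI are placeholders; in the workflow these values come from the "Set variables" node and your Keycloak client:

```javascript
// Minimal sketch of the PKCE values used in the Authorization Code flow.
// authorization_endpoint, client_id and redirect_uri are placeholders.
import crypto from 'node:crypto';

const base64url = (buf) =>
  buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');

// 1. Random code_verifier, kept until the token exchange
const codeVerifier = base64url(crypto.randomBytes(32));

// 2. code_challenge = BASE64URL(SHA-256(code_verifier)), per RFC 7636
const codeChallenge = base64url(
  crypto.createHash('sha256').update(codeVerifier).digest()
);

// 3. Redirect the user to the provider's authorization endpoint
const authUrl =
  'https://keycloak.example.com/realms/demo/protocol/openid-connect/auth' +
  '?response_type=code&client_id=my-n8n-client' +
  '&redirect_uri=' + encodeURIComponent('https://your-n8n/webhook/openid') +
  '&scope=openid%20email%20profile' +
  '&code_challenge=' + codeChallenge +
  '&code_challenge_method=S256';

console.log(authUrl);
```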