by Milorad Filipović
**How it works**

It's very important to come prepared for sales calls, which often means a lot of manual research about the person you're meeting. This workflow delivers the latest social media activity (LinkedIn + X) for the businesses you are about to interact with each day.

- **Scans Your Calendar**: Each morning, it reviews your Google Calendar for scheduled meetings or calls and identifies companies from each attendee's email address.
- **Fetches Latest Posts**: For each identified company, it fetches recent LinkedIn and X posts.
- **Delivers Insights**: You receive personalized emails via Gmail, one per company you're meeting that day, containing a reminder of the meeting, a list of posts grouped by social media platform, and direct links to the posts.

**Setup steps**

The workflow requires the following accounts, set up in their respective nodes:

- Google Calendar
- Gmail
- Clearbit

Besides those, you will need an account on the RapidAPI platform and subscriptions to the following APIs:

- Fresh LinkedIn Profile Data
- Twitter

**Email example**
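To illustrate the calendar-scanning step, the core of it is turning attendee email addresses into company domains before enrichment. Below is a minimal sketch of how that could look in an n8n Code node; the attendee field name and the free-mail filter list are assumptions for illustration, not the exact fields used by this template.

```typescript
// Hypothetical sketch: derive company domains from calendar attendee emails.
// Free-mail providers are skipped so only business domains reach the enrichment step.
const FREE_MAIL = new Set(["gmail.com", "outlook.com", "yahoo.com", "hotmail.com"]);

interface Attendee { email: string; }

function companyDomains(attendees: Attendee[]): string[] {
  const domains = attendees
    .map((a) => a.email.split("@")[1]?.toLowerCase())
    .filter((d): d is string => Boolean(d) && !FREE_MAIL.has(d));
  return [...new Set(domains)]; // de-duplicate, one lookup per company
}

// Example: two attendees from the same company, one personal address.
console.log(companyDomains([
  { email: "ana@acme.com" },
  { email: "bob@acme.com" },
  { email: "carol@gmail.com" },
])); // ["acme.com"]
```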
by David Roberts
**Overview**

This workflow takes some French text and translates it into spoken audio. It then transcribes that audio back into text, translates it into English, and generates an audio file of the English text. To do so, it uses ElevenLabs (which has a free tier) and OpenAI.

**Setup**

These steps should only take a few minutes:

1. In ElevenLabs, add a voice to your voice lab and copy its ID. Add it to the 'Set voice ID' node.
2. Get your ElevenLabs API key (click your name in the bottom-left of ElevenLabs and choose 'profile').
3. In the 'Generate French audio' node, create a new header auth credential. Set the name to xi-api-key and the value to your API key.
4. In the 'credential' field of the 'Transcribe audio' node, create a new OpenAI credential with your OpenAI API key.
5. Run the workflow by clicking the orange button at the bottom of the canvas.
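For reference, the 'Generate French audio' node is essentially an HTTP request to ElevenLabs' text-to-speech endpoint authenticated with the xi-api-key header described in step 3. A minimal sketch, assuming ElevenLabs' public /v1/text-to-speech/{voice_id} endpoint and the eleven_multilingual_v2 model; check their docs for current details:

```typescript
import { writeFile } from "node:fs/promises";

// Minimal sketch of the text-to-speech call made by the 'Generate French audio' node.
// VOICE_ID comes from your ElevenLabs voice lab; the API key is the xi-api-key value.
const VOICE_ID = process.env.ELEVENLABS_VOICE_ID!;
const API_KEY = process.env.ELEVENLABS_API_KEY!;

async function frenchToSpeech(text: string): Promise<void> {
  const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${VOICE_ID}`, {
    method: "POST",
    headers: { "xi-api-key": API_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({ text, model_id: "eleven_multilingual_v2" }),
  });
  if (!res.ok) throw new Error(`ElevenLabs error: ${res.status}`);
  await writeFile("french.mp3", Buffer.from(await res.arrayBuffer())); // audio/mpeg response
}

frenchToSpeech("Bonjour, comment allez-vous ?").catch(console.error);
```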
by Derek Cheung
**Use case**

This workflow enables a Telegram bot that can:

- Accept speech input in one of 55 supported languages
- Automatically detect the language spoken and translate the speech to another language
- Respond with the translated speech output

This allows users to communicate across language barriers by simply speaking to the bot, which handles the translation seamlessly.

**How does it work?**

Translation: in the translation step, the workflow converts the user's speech input to text and detects the language of the input text. If it's English, it translates to French; if it's French, it translates to English. To change the default translation languages, update the prompt in the AI node.

Output: in the output step, the translated text is sent back to the user and speech output is generated in the translated language.

**Setup steps**

1. Obtain a Telegram API token: start a chat with the BotFather, enter /newbot, and reply with your new bot's display name and username. Copy the bot token and use it in the Telegram node credentials in n8n.
2. Update the Settings node to customize the desired languages.
3. Activate the flow.

**Full list of supported languages**

All supported languages:
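The language-pair logic lives entirely in the AI node's prompt. A hedged sketch of what such a prompt might look like (the exact wording in the template may differ); swapping the two language constants changes the default pair:

```typescript
// Hypothetical system prompt for the AI node; change these two values to change the defaults.
const LANGUAGE_A = "English";
const LANGUAGE_B = "French";

const translationPrompt = `
You are a translator. Detect the language of the user's message.
If it is ${LANGUAGE_A}, translate it to ${LANGUAGE_B}.
If it is ${LANGUAGE_B}, translate it to ${LANGUAGE_A}.
Reply with the translated text only, no explanations.
`.trim();

console.log(translationPrompt);
```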
by Manu
In Grist, when I mark a row as confirmed (via a toggle), a webhook notifies n8n and this workflow creates derived records in the destination table.

**Design decisions**

- Confirmation-based: the source table has a boolean column "Confirmed" that triggers the transfer. This keeps a manual check involved and makes triggering the workflow a conscious step.
- Runs once: if the destination table already contains an entry, we do not re-create or update it (it might already have been changed manually).

**Setup**

1. Create a boolean column Confirmed in the source table
2. Add a webhook in Grist Settings
3. Add Grist API credentials in n8n
4. Set the document ID & source table ID/name in the 'get existing' node
5. Set docID, the destination table ID/name, and the columns & values you want in the Create Row node
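Under the hood, the Create Row node performs the equivalent of a Grist REST call like the one below. This is a rough sketch, assuming Grist's /api/docs/{docId}/tables/{tableId}/records endpoint and a Bearer API key; the column names are placeholders:

```typescript
// Hypothetical sketch of the "create derived record" call against the Grist REST API.
const GRIST_SERVER = "https://docs.getgrist.com"; // or your self-hosted Grist URL
const DOC_ID = process.env.GRIST_DOC_ID!;
const DEST_TABLE = "DestinationTable";            // destination table ID/name

async function createDerivedRecord(fields: Record<string, unknown>): Promise<void> {
  const res = await fetch(`${GRIST_SERVER}/api/docs/${DOC_ID}/tables/${DEST_TABLE}/records`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GRIST_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ records: [{ fields }] }),
  });
  if (!res.ok) throw new Error(`Grist error: ${res.status}`);
}

// Example: copy a couple of columns from the confirmed source row.
createDerivedRecord({ Name: "Acme", Status: "Transferred" }).catch(console.error);
```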
by Jimleuk
This n8n workflow is a fun way to query and search over the credentials on your n8n instance.

**Good to know**

Your credentials remain safe: this workflow does not decrypt or use any decrypted data.

**Example Usage**

- "Which workflows are using Slack and Google Calendar?"
- "Which workflows have AI in their name but are not using OpenAI?"

**How it works**

- Using the n8n API, it fetches all workflow data on the instance. Workflow data contains references to the credentials used, so these are extracted.
- With some necessary reformatting, the workflows and their credential metadata are stored in a SQLite database.
- Next, an AI agent is used with a custom SQL tool that reads the SQLite database created in the previous step. The AI agent is instructed to run SQL queries against our workflow-credential table when the user asks about credentials.

**Requirements**

You'll need an n8n API key. Please note that only workflows will be scoped to your API key.

**Customising the workflow**

Add extra table fields to the SQLite database to answer even more complex queries, such as workflow status to differentiate between active and inactive workflows.
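To make the extraction step concrete: each workflow's JSON lists, per node, a credentials object mapping a credential type to its id and name. A rough sketch of flattening that into rows for the credential table follows; the table and column names here are assumptions, not necessarily those used in the template:

```typescript
// Hypothetical flattening of n8n workflow JSON into rows for a workflow/credential table.
interface N8nNode {
  name: string;
  type: string;
  credentials?: Record<string, { id: string; name: string }>;
}
interface N8nWorkflow { id: string; name: string; active: boolean; nodes: N8nNode[]; }

interface CredentialRow {
  workflow_id: string;
  workflow_name: string;
  node_type: string;
  credential_type: string;
  credential_name: string;
}

function toRows(workflows: N8nWorkflow[]): CredentialRow[] {
  return workflows.flatMap((wf) =>
    wf.nodes.flatMap((node) =>
      Object.entries(node.credentials ?? {}).map(([credType, cred]) => ({
        workflow_id: wf.id,
        workflow_name: wf.name,
        node_type: node.type,
        credential_type: credType,
        credential_name: cred.name,
      }))
    )
  );
}

// The AI agent can then answer questions with queries along the lines of:
// SELECT DISTINCT workflow_name FROM workflow_credentials
// WHERE credential_type IN ('slackApi', 'googleCalendarOAuth2Api');
```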
by Ayoub
**Who is this for?**

This workflow is designed for businesses or developers looking to integrate voice-based chat applications with dynamic responses and conversational memory.

**What problem does this solve?**

It automates AI-powered voice conversations, maintaining context between sessions and converting speech to text and text to speech.

**What this workflow does**

The workflow receives audio input, transcribes it using OpenAI, and processes the conversation using the Google Gemini Chat Model (you can use the OpenAI Chat Model instead). Responses are converted back to speech using ElevenLabs.

**Prerequisites**

You'll need API keys for:

- OpenAI (you can obtain it from the OpenAI website)
- ElevenLabs (you can obtain it from their website)
- Google Gemini (you can obtain it from Google AI Studio)

**Setup**

1. Configure your API keys.
2. Ensure that the value (voice_message) in the "Path" parameter of the Webhook node is used as the name of the parameter that will contain the voice message you send via the HTTP POST request.
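To test the webhook, send the audio file as a form field whose name matches the value mentioned in setup step 2 (voice_message). A minimal client sketch; the webhook URL and the expectation about the response are placeholders for your own n8n instance:

```typescript
import { readFile } from "node:fs/promises";

// Hypothetical test client: POST an audio file to the workflow's webhook.
// The form field name must match the parameter the Webhook node expects ("voice_message").
async function sendVoiceMessage(path: string): Promise<void> {
  const audio = await readFile(path);
  const form = new FormData();
  form.append("voice_message", new Blob([new Uint8Array(audio)], { type: "audio/mpeg" }), "message.mp3");

  const res = await fetch("https://your-n8n-instance/webhook/voice_message", {
    method: "POST",
    body: form,
  });
  console.log(res.status, await res.text()); // expect the synthesized reply audio or its URL
}

sendVoiceMessage("./hello.mp3").catch(console.error);
```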
by Paul Mikulskis
This template is based on the following template. Thank you for the groundwork, Matheus.

**How it works**

1. Store your snippets of text in a Notion table. Each snippet should have an image associated with it (copied and pasted into the text).
2. Connect to your table via a Notion "integration", from which n8n can then query your pre-written posts.
3. The text is fed through an OpenAI assistant to boost engagement via formatting.
4. The reformatted text, along with the image pulled from the Notion snippet, is combined into a post for your LinkedIn.
5. The row in the original Notion table from step 1 containing this post is set to a status of "Done".

**Set up steps**

1. Create a Notion "integration", which will yield a "secret key" that you enter into n8n as a "Credential".
2. Create a LinkedIn "app" so the workflow can post on your behalf. When creating your LinkedIn app, you will be required to link it to a company page on LinkedIn. If you are doing this for yourself, search for the "Default Company Page (for API testing)" and select it, as it is provided by LinkedIn for individuals. You can find your LinkedIn apps here, and if you get stuck, further instructions on setting up this workflow (including the LinkedIn OAuth piece) can be found in the YouTube video accompanying these instructions.
3. Lastly, create an OpenAI API key, found on your OpenAI Playground dashboard. Once you have created an API key, make sure you have an assistant created from the "Assistants" tab on the OpenAI dashboard. This assistant and its instructions will be needed to carry out the reformatting of your post.
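The "query your pre-written posts" step boils down to a Notion database query filtered on the status column. A hedged sketch against the public Notion API; the database ID, property name ("Status"), property type, and value ("Done") are assumptions for illustration and should be adjusted to your table's schema:

```typescript
// Hypothetical sketch of the Notion query behind "fetch posts that are not yet Done".
const NOTION_TOKEN = process.env.NOTION_TOKEN!;      // the integration's secret key
const DATABASE_ID = process.env.NOTION_DATABASE_ID!; // your snippets table

async function fetchPendingPosts(): Promise<unknown[]> {
  const res = await fetch(`https://api.notion.com/v1/databases/${DATABASE_ID}/query`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${NOTION_TOKEN}`,
      "Notion-Version": "2022-06-28",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      // Assumes a "Status" property of type Status; adjust to match your table.
      filter: { property: "Status", status: { does_not_equal: "Done" } },
    }),
  });
  if (!res.ok) throw new Error(`Notion error: ${res.status}`);
  const data = (await res.json()) as { results: unknown[] };
  return data.results;
}

fetchPendingPosts()
  .then((rows) => console.log(`${rows.length} posts waiting`))
  .catch(console.error);
```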
by M Shehroz Sajjad
Transform your BeyondPresence video agent conversations into comprehensive insights by automatically analyzing each call with AI and organizing 35+ data points in Google Sheets. This template helps customer success, support, and training teams save 30+ minutes per call on documentation while ensuring no critical action items or insights are missed.

**How it works**

- **Webhook receives** completed call data from BeyondPresence, including the full transcript
- **Data validation** ensures quality and adds enriched metadata (duration, time calculations)
- **AI analysis** (GPT-4) extracts action items, sentiment, decisions, and recommendations
- **Parse response** handles the AI output and structures it for Sheets
- **Auto-append** to Google Sheets with 35+ insights per call, organized beautifully

**Set up steps**

1. Copy our Google Sheets template - one click! Get the pre-formatted sheet: BeyondPresence Call Analytics Template
2. Connect accounts - add your OpenAI API key and Google Sheets OAuth2
3. Configure the webhook - copy the URL from n8n to BeyondPresence Settings → Webhooks
4. Customize the AI prompt (optional) - adjust the analysis focus for your use case
5. Test with a call - make a test call and watch the insights appear!

Setup time: 5-10 minutes

Requirements: BeyondPresence account, OpenAI API key, Google account
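As an illustration of the "enriched metadata" step, the validation node can derive duration and time-of-day fields before the transcript reaches the AI. This is a sketch with assumed payload field names (started_at, ended_at, transcript); the real BeyondPresence webhook fields may differ:

```typescript
// Hypothetical enrichment step: derive duration and time metadata from the call payload.
interface CallPayload { call_id: string; started_at: string; ended_at: string; transcript: string; }

function enrich(call: CallPayload) {
  const start = new Date(call.started_at);
  const end = new Date(call.ended_at);
  const durationMinutes = Math.round((end.getTime() - start.getTime()) / 60000);
  return {
    ...call,
    duration_minutes: durationMinutes,
    call_date: start.toISOString().slice(0, 10), // YYYY-MM-DD for the sheet
    call_hour_utc: start.getUTCHours(),          // useful for time-of-day analysis
    transcript_word_count: call.transcript.split(/\s+/).filter(Boolean).length,
  };
}

console.log(enrich({
  call_id: "demo-1",
  started_at: "2024-05-01T14:00:00Z",
  ended_at: "2024-05-01T14:23:30Z",
  transcript: "Hello and welcome to the demo call...",
}));
```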
by Bao Duy Nguyen
**Who is this for?**

This template is ideal for developers, DevOps engineers, and automation managers who manage their n8n workflows using GitHub. It helps teams streamline their CI/CD automation by syncing changes from GitHub directly into n8n after a pull request (PR) is merged.

**What problem is this workflow solving?**

Manually restoring workflows after reviewing and merging code in GitHub can be tedious and error-prone. This workflow solves that by automating the restore process, ensuring that any new or updated workflow committed to your GitHub repo is automatically imported into your n8n environment.

**What this workflow does**

1. Triggers when a GitHub pull request is closed and merged.
2. Fetches the details of the merge commit.
3. Retrieves the list of added and modified workflow files.
4. Downloads and decodes each workflow file.
5. **Creates or updates** the corresponding workflow in your n8n instance automatically.

**Setup**

1. Connect GitHub: use the GitHub Trigger node and configure GitHub API credentials. Note: I'd recommend using a classic GitHub PAT (personal access token) with the repo and admin:repo_hook permission scopes enabled.
2. Connect the n8n API: provide your n8n API credentials in the n8n nodes - check this doc.
3. Set repository variables: update github_owner and repo_name in the Define Local Variables node.
4. Enable the webhook: make sure your GitHub repository has a webhook for pull_request events pointing to this workflow.

**How to customize this workflow to your needs**

- Modify filters to handle only certain branches or file paths.
- Add Slack or email notifications to confirm successful imports.
- Insert logging or version tagging for better traceability.
- Extend with conditional logic for workflow testing before applying changes.

This automated flow provides a seamless CI/CD loop between GitHub and n8n, empowering teams to manage workflow versioning efficiently and securely.
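The download/decode/import portion is essentially: fetch the file from the GitHub contents API (which returns base64-encoded content), decode it, and push it to the n8n public API. A rough sketch assuming the /api/v1/workflows endpoint with an X-N8N-API-KEY header; error handling and the create-vs-update decision are simplified:

```typescript
// Hypothetical sketch of "download, decode, and create/update the workflow in n8n".
const GH_TOKEN = process.env.GITHUB_TOKEN!;
const N8N_URL = process.env.N8N_URL!; // e.g. https://n8n.example.com
const N8N_API_KEY = process.env.N8N_API_KEY!;

async function importWorkflowFile(owner: string, repo: string, path: string, ref: string) {
  // 1. Fetch the file via the GitHub contents API (content arrives base64-encoded).
  const gh = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/contents/${path}?ref=${ref}`,
    { headers: { Authorization: `Bearer ${GH_TOKEN}`, Accept: "application/vnd.github+json" } },
  );
  const file = (await gh.json()) as { content: string };
  const workflow = JSON.parse(Buffer.from(file.content, "base64").toString("utf8"));

  // 2. Push it to the n8n public API: update if the file carries an ID, otherwise create.
  //    Note: the n8n API may only accept certain fields (name, nodes, connections, settings);
  //    trim extras if it rejects the payload.
  const update = Boolean(workflow.id);
  const res = await fetch(`${N8N_URL}/api/v1/workflows${update ? `/${workflow.id}` : ""}`, {
    method: update ? "PUT" : "POST",
    headers: { "X-N8N-API-KEY": N8N_API_KEY, "Content-Type": "application/json" },
    body: JSON.stringify(workflow),
  });
  if (!res.ok) throw new Error(`n8n import failed: ${res.status}`);
}

importWorkflowFile("my-org", "n8n-workflows", "workflows/sync.json", "main").catch(console.error);
```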
by Airtop
**Automating File (Image) Upload to Postimages.org**

**Use Case**

Manually uploading screenshots or other image files to hosting platforms like Postimages.org can be tedious and time-consuming. This automation simplifies the process by automatically uploading an image to Postimages.org and validating the result, which is especially useful for repetitive QA tasks, reporting, or archiving visual web data.

**What This Automation Does**

This automation uploads a screenshot to Postimages.org using the following steps:

1. Creates a browser session using Airtop.
2. Navigates to the Postimages.org upload page.
3. Takes a screenshot using the browser session.
4. Uploads the screenshot to the site via the "Choose images" button.
5. Waits briefly to ensure upload processing.
6. Captures a post-upload screenshot for validation.

**How It Works**

- Session Initialization: starts a browser session using the Airtop node.
- Navigation: opens the URL https://postimages.org/ in a new window.
- Screenshot Capture: takes a screenshot to use for the upload.
- File Upload: uploads the screenshot to the site using the file upload interaction.
- Validation: waits 5 seconds and then captures a second screenshot to confirm the image was uploaded successfully.

**Setup Requirements**

Airtop API Key - required for session creation and browser interactions.

**Next Steps**

- **Customize for Other Sites**: Adapt this workflow to automate file uploads to different platforms.
- **Integrate with Reporting Tools**: Combine this automation with workflows that require image reporting or archiving.
- **Enhance Validation**: Add logic to parse the upload confirmation or retrieve the image URL programmatically for logging or sharing (a sketch follows below).

Read more about how to automate file uploads to the web
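As a starting point for the "Enhance Validation" idea, the hosted image URL can be pulled out of the confirmation page content after the upload. This is only a sketch under the assumption that Postimages serves direct links on postimg.cc / i.postimg.cc; adjust the pattern if the result page layout differs:

```typescript
// Hypothetical helper: extract the hosted image URL from the upload confirmation page content.
function extractHostedUrl(pageHtmlOrText: string): string | null {
  // Assumes direct links on the postimg.cc / i.postimg.cc domains; adjust if the layout changes.
  const match = pageHtmlOrText.match(/https?:\/\/(?:i\.)?postimg\.cc\/[A-Za-z0-9/_.-]+/);
  return match ? match[0] : null;
}

// Example with a made-up confirmation snippet.
console.log(
  extractHostedUrl('... <a href="https://i.postimg.cc/AbCdEf12/screenshot.png">direct link</a> ...'),
);
```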
by ist00dent
This n8n template empowers you to instantly summarize long pieces of text by sending a simple webhook request. By integrating with ApyHub's summarization API, you can distil complex articles, reports, or messages into concise summaries, significantly boosting efficiency across various domains.

🔧 How it works

- **Receive Content Webhook**: This node acts as the entry point, listening for incoming POST requests. It expects a JSON body containing content (the long text you want to summarize) and an optional summary_length (the desired length of the summary, e.g. 'short', 'medium', 'long'; defaults to 'medium'), plus a header containing your apy-token for the ApyHub API.
- **Start Summarization Job**: This node sends a POST request to ApyHub's summarization endpoint (api.apyhub.com/sharpapi/api/v1/content/summarize). It passes the content and summary_length from the webhook body, along with your apy-token from the headers. ApyHub processes the text asynchronously, and this node immediately returns a job_id.
- **Get Summarization Result**: Since ApyHub's summarization is an asynchronous process, this node is crucial. It polls ApyHub's job status endpoint (api.apyhub.com/sharpapi/api/v1/content/summarize/job/status/{{job_id}}) using the job_id obtained from the previous step. It continues to check the status until the summarization is finished, at which point it retrieves the final summarized text.
- **Respond with Summarized Content**: This node sends the final, distilled summarized text back to the service that initiated the webhook.

👤 Who is it for?

This workflow is extremely useful for:

- **Content Creators & Marketers**: Quickly summarize articles for social media snippets, email newsletters, or blog post intros.
- **Researchers & Students**: Efficiently get the gist of academic papers, reports, or long documents without reading every word.
- **Customer Support & Sales Teams**: Summarize customer inquiries, long email chains, or call transcripts to quickly understand key issues or discussion points.
- **News Aggregators & Media Monitoring**: Automatically generate summaries of news articles from various sources for quick consumption.
- **Business Professionals**: Condense lengthy reports, meeting minutes, or project updates into digestible summaries for busy stakeholders.
- **Legal & Compliance**: Summarize legal documents or regulatory texts to highlight critical clauses or changes.
- **Anyone Dealing with Information Overload**: Use it to save time and extract key information from overwhelming amounts of text.

📑 Data Structure

When you trigger the webhook, send a **POST** request with a **JSON body** and an apy-token in the headers:

```json
{
  "content": "Your very long text goes here. This could be an article, a report, a transcript, or any other textual content you want to summarize. The longer the text, the more valuable summarization becomes!",
  "summary_length": "medium"
}
```

summary_length is optional: "short", "medium", or "long".

Headers:

apy-token: YOUR_APYHUB_API_KEY

Note: You'll need to obtain an API key from ApyHub to use their API services. They typically offer a free tier for testing.

The workflow will return a JSON response similar to this (the summary content will vary based on input):

```json
{
  "summary": "Max Verstappen believes the Las Vegas Grand Prix is '99% show and 1% sporting event', not looking forward to the razzmatazz. Other drivers, like Fernando Alonso, were more equivocal about the hype, acknowledging the investment and spectacle. Lewis Hamilton praised the city's energy but emphasized it's 'a business, ultimately', believing there will still be good racing.",
  "status": "finished",
  "result_file_id": "..."
}
```

ApyHub might provide a file ID for larger results.

⚙️ Setup Instructions

1. **Get an ApyHub API Key**: Go to https://apyhub.com/ and sign up to get your API key.
2. **Import Workflow**: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
3. **Configure Webhook Path**: Double-click the Receive Content Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /summarize-content).
4. **Activate Workflow**: Save and activate the workflow.

📝 Tips

This content summarizer is a powerful component. Here's how to supercharge it and make it an indispensable part of your automation arsenal:

- **Integrate with Document/File Storage**:
  - Google Drive/Dropbox/OneDrive: Automatically summarize documents uploaded to these services. Add a Watch New Files trigger (if available for your service) or a Cron node to regularly check for new files. Then read the file content, pass it to this summarizer, and save the summary back to a designated folder or as a comment on the original file.
  - CRM/CMS Systems: Pull long notes, customer interactions, or article drafts from your CRM/CMS, summarize them, and update the records with the concise version.
- **Email Processing & Triage**: Use an Email node to trigger the workflow when new emails arrive. Extract the email body, summarize it, and then: send a shortened summary as a notification to your Slack or Telegram; add the summary to a task management tool (e.g., Trello, Asana) for quicker triaging; or create a summary for an email digest.
- **Slack/Discord Bot Integration**: Create a Slack/Discord command (using a custom webhook or a dedicated Slack/Discord node) where users can paste long text. The bot then sends the summarized version back to the channel.
- **Dynamic Summary Length & Options**: Allow the user to specify summary_length (short, medium, long) in the webhook body, as already implemented. Explore ApyHub's documentation for more parameters (if any) and dynamically pass them.
- **Error Handling & User Feedback**: Add an IF node after Get Summarization Result to check for status: 'failed' or error messages. If an error occurs, send a helpful message back to the webhook caller or an internal alert. For very long texts that might exceed API limits, add a Function node to truncate the input content if it's too long, and notify the user.
- **Multi-language Support (if ApyHub offers it)**: If ApyHub supports summarization in multiple languages, extend the webhook to accept a language parameter and pass it to the API.
- **Web Scraping & Article Summaries**: Combine this with an HTTP Request node to scrape content from a web page (e.g., a news article). Then pass the extracted article text to this summarizer to get quick insights.
- **Data Storage & Archiving**: Store the original content alongside its summary in a database (e.g., PostgreSQL, MongoDB) or a simple spreadsheet (Google Sheets, Airtable). This creates a searchable, summarized archive of your content.
- **Automated Report Generation**: If you receive daily/weekly reports, use this workflow to summarize key sections, then compile these summaries into a concise digest or dashboard using a Merge node and send it out automatically.
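For readers who want to see the asynchronous pattern outside n8n, here is a compact sketch of the start-then-poll sequence against the two ApyHub endpoints named above. The response field names (job_id, status, summary) are assumptions based on the description; consult ApyHub's docs for the exact schema:

```typescript
// Hypothetical sketch of ApyHub's asynchronous summarization: start a job, then poll until finished.
const APY_TOKEN = process.env.APY_TOKEN!;
const BASE = "https://api.apyhub.com/sharpapi/api/v1/content/summarize";

async function summarize(content: string, summaryLength = "medium"): Promise<string> {
  // 1. Start the job; the API returns a job_id immediately.
  const start = await fetch(BASE, {
    method: "POST",
    headers: { "apy-token": APY_TOKEN, "Content-Type": "application/json" },
    body: JSON.stringify({ content, summary_length: summaryLength }),
  });
  const { job_id } = (await start.json()) as { job_id: string };

  // 2. Poll the status endpoint until the job reports it is finished (or give up).
  for (let attempt = 0; attempt < 30; attempt++) {
    await new Promise((r) => setTimeout(r, 2000));
    const poll = await fetch(`${BASE}/job/status/${job_id}`, { headers: { "apy-token": APY_TOKEN } });
    const result = (await poll.json()) as { status: string; summary?: string };
    if (result.status === "finished") return result.summary ?? "";
    if (result.status === "failed") throw new Error("Summarization job failed");
  }
  throw new Error("Timed out waiting for the summarization job");
}

summarize("Your very long text goes here...").then(console.log).catch(console.error);
```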
by Roninimous
This n8n workflow leverages a Telegram message trigger to activate an intelligent AI agent capable of processing both text and voice messages. When a user sends a message in text or voice format, the workflow captures and transcribes it (if necessary), then passes it to the AI agent for understanding and response generation. To enhance the user experience, the bot also displays a typing indicator while processing requests, simulating a natural, human-like interaction.

**Key Features**

- Multi-Modal Input: Supports both text messages and voice notes from users.
- Real-Time Interaction: Shows a "typing…" action in Telegram while the AI processes the input.
- AI Agent Integration: Provides intelligent, context-aware, conversational responses.
- Seamless Feedback Loop: Replies are sent directly back to the user within Telegram for smooth interaction.

**How It Works**

1. The workflow triggers whenever a message or voice note is received on Telegram.
2. If the input is a voice note, the workflow transcribes it into text.
3. The text input is sent to the AI agent for processing.
4. While processing, the bot sends a typing indicator to the user.
5. Once the AI generates a response, the workflow sends it back to the user in Telegram.

**Setup Instructions**

1. Create a Telegram bot: use @BotFather to create a bot and obtain your bot token.
2. Configure n8n credentials: add Telegram API credentials in n8n with your bot token, and add credentials for any speech-to-text service used for voice transcription (e.g., OpenAI's Transcribe a Recording).
3. Import the workflow: import this workflow into your n8n instance and update all credential nodes to use your Telegram and transcription service credentials.
4. Set webhook URLs: ensure the Telegram webhook is set properly for your bot to receive messages, and make sure your n8n instance is publicly accessible for Telegram callbacks.
5. Test the workflow: send text messages and voice notes to your Telegram bot and observe the AI responses.

**Customization Guidance**

- Add new message handlers: extend the workflow to handle additional message types (images, documents, etc.).
- Improve transcription: swap or add speech-to-text services for better accuracy or language support.
- Enhance the AI agent: customize prompts and context management to tailor the AI's personality and responses.
- AI model flexibility: swap between different AI models (e.g., GPT-4, Claude, or custom LLMs) based on task type, cost, or performance preferences.
- Tool-based control: add custom tools to the AI agent such as calendar access, Notion, Google Sheets, web search, database queries, or custom APIs, allowing for dynamic, multi-functional agents.

**Security and Implementation Notes**

- The Telegram node manages message reception and sending but does not directly handle AI processing.
- Voice transcription requires integration with external APIs; secure those credentials in n8n and monitor usage.
- To simulate typing, the workflow uses Telegram's "sendChatAction" API method, providing users with feedback that the bot is processing.
- Ensure your AI API keys and Telegram tokens are securely stored in n8n credentials and not exposed in workflows or logs.

**Benefits**

- Handles natural conversational inputs with text or voice.
- Provides a smooth, engaging user experience via typing indicators.
- Easy integration of advanced AI conversational agents with Telegram.
- Flexible for personal assistants, helpdesks, or interactive chatbots.
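The typing indicator mentioned above is a single Bot API call made before the slow AI step. A minimal sketch of the pattern; the chat ID and the makeReply callback are placeholders for illustration:

```typescript
// Minimal sketch: show "typing…" in the chat while the AI agent works on a reply.
const BOT_TOKEN = process.env.TELEGRAM_BOT_TOKEN!;

async function showTyping(chatId: number): Promise<void> {
  // Telegram's sendChatAction keeps the indicator visible for a few seconds per call.
  await fetch(`https://api.telegram.org/bot${BOT_TOKEN}/sendChatAction`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, action: "typing" }),
  });
}

async function replyWithTyping(chatId: number, makeReply: () => Promise<string>): Promise<void> {
  await showTyping(chatId);       // fire the indicator before the slow AI step
  const text = await makeReply(); // e.g. the AI agent producing the answer
  await fetch(`https://api.telegram.org/bot${BOT_TOKEN}/sendMessage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, text }),
  });
}

replyWithTyping(123456789, async () => "Here is your answer!").catch(console.error);
```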