by Tomek
**How it works**
- Use Telegram to send in new phrases (flashcard front). You can also input a phrase manually in the workflow itself.
- ChatGPT generates a description of the provided phrase (in English, but you can change the language), including multiple meanings, and generates example sentences that use the phrase (flashcard back).

**Steps to set up**
- Provide your Telegram bot API key (optional)
- Provide your OpenAI key
- Provide Google Sheets credentials

**How to import flashcards from Google Sheets into Anki**
- Use the Google Sheets to Anki add-on: 1871608121
- In Anki, simply click Sync Decks and you're done :)

Enjoy
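A minimal Code node sketch of how the pieces could be wired together before the Google Sheets step. The node name "Telegram Trigger", the OpenAI output fields, and the "Front"/"Back"/"Added" column names are assumptions; match them to your own workflow and sheet:

```javascript
// n8n Code node (JavaScript): shape the Telegram phrase and the ChatGPT output into a sheet row.
// Node names and column names below are assumptions; adjust them to your setup.
const phrase = $('Telegram Trigger').first().json.message?.text ?? $json.phrase;
const aiText = $json.message?.content ?? $json.text; // wherever your OpenAI node puts its answer

return [{
  json: {
    Front: (phrase ?? '').trim(),           // the phrase you sent in
    Back: (aiText ?? '').trim(),            // description, meanings and example sentences
    Added: new Date().toISOString().slice(0, 10),
  },
}];
```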
by Airtop
Extracting Comments from an X Post

**Use Case**
Engaging with conversations on X (formerly Twitter) is critical for brands and individuals monitoring sentiment, leads, or emerging trends. Manually collecting comments is time-consuming; this automation enables scalable extraction of comment data to inform your outreach or analysis.

**What This Automation Does**
This automation extracts comments from a specified X post, with the following input parameters:
- **airtop_profile**: The name of your Airtop Profile connected to X.
- **x_post_url**: The URL of the X post to extract comments from.
- **max_number_of_comments**: The maximum number of comments to retrieve.

**How It Works**
1. Takes input via a form or another workflow.
2. Normalizes the input values.
3. Creates a new browser session using Airtop.
4. Navigates to the provided X post.
5. Uses a prompt to extract up to the specified number of comments, returning:
   - Author name
   - Author profile URL
   - Comment text

**Setup Requirements**
- Airtop API Key (free to generate).
- An Airtop Profile connected to X (requires a one-time login).

**Next Steps**
- **Pair with X Monitoring**: Use this with the X monitoring automation to detect relevant posts and extract discussion context automatically.
- **Feed into Analytics**: Combine with summarization or sentiment analysis tools to understand audience response at scale.
- **Export for CRM/BI**: Pipe the structured comment data into your CRM or business intelligence stack for lead tracking or reporting.

Read more about Extracting Comments from X Posts
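A sketch of what the "normalizes the input values" step could look like in an n8n Code node. The field names mirror the parameters listed above, and the limits and URL check are illustrative assumptions:

```javascript
// n8n Code node (JavaScript): trim the inputs and clamp the comment count before the Airtop session.
const url = ($json.x_post_url ?? '').trim();
const profile = ($json.airtop_profile ?? '').trim();
const max = Math.min(Math.max(parseInt($json.max_number_of_comments, 10) || 10, 1), 100);

// Basic sanity check on the post URL (accepts both x.com and twitter.com links)
if (!/^https:\/\/(x|twitter)\.com\//.test(url)) {
  throw new Error(`Not an X post URL: ${url}`);
}

return [{ json: { airtop_profile: profile, x_post_url: url, max_number_of_comments: max } }];
```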
by Mike Russell
Automated YouTube Video Promotion Workflow

Automate the promotion of new YouTube videos on X (formerly Twitter) with minimal effort. This workflow is perfect for content creators, marketers, and social media managers who want to keep their audience updated with fresh content consistently.

**How it works**
This workflow triggers every 30 minutes to check for new YouTube videos from a specified channel. If a new video is found, it utilizes OpenAI's ChatGPT to craft an engaging, promotional message for X. Finally, the workflow posts the generated message to Twitter, ensuring your latest content is shared with your audience promptly.

**Set up steps**
1. Schedule the workflow to run at your desired frequency.
2. Connect to your YouTube account and set up the node to fetch new videos based on your Channel ID.
3. Integrate with OpenAI to generate promotional messages using GPT-3.5 Turbo.
4. Link to your X account and set up the node to post the generated content.

Please note, you'll need API keys and credentials for YouTube, OpenAI, and X. Check out this quick video tutorial to make the setup process a breeze.

**Additional tips**
- Customize the workflow to match your branding and messaging tone.
- Test each step to ensure your workflow runs smoothly before going live.
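If you want an extra guard against promoting the same video twice, an optional Code node between the YouTube and OpenAI steps could filter to videos published within the polling window. This is not part of the original template, and it assumes the YouTube node outputs each video's publish date under snippet.publishedAt:

```javascript
// n8n Code node (JavaScript): keep only videos published in the last 30 minutes,
// matching the polling interval, so older videos are never re-promoted.
const windowMs = 30 * 60 * 1000;
const cutoff = Date.now() - windowMs;

return $input.all().filter((item) => {
  const publishedAt = item.json.snippet?.publishedAt ?? item.json.publishedAt;
  return publishedAt && new Date(publishedAt).getTime() >= cutoff;
});
```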
by Anton Vanhoucke
This workflow converts Notion pages to markdown, and then converts that markdown back to Notion blocks. It will triple the content of the last updated page it finds. This is useless by itself, but you can copy-paste from this workflow to create your own.

**Prerequisites**
A Notion account with some pages or databases.

**Setup instructions**
Create a Notion credential and share some pages as described here: https://docs.n8n.io/integrations/builtin/credentials/notion/

**How it works**
- The HTTP Request node gets Notion child blocks from a page, because the default n8n Notion node only gets plain text and no links.
- The first code node converts them to markdown.
- The second code node converts the markdown back to Notion blocks.
- The last HTTP Request node appends everything to the original Notion page, essentially duplicating it for the purpose of demoing the script.

I hope in the future we get official n8n nodes that extract markdown, or use markdown to write to Notion. There is a community node that also does this, but this template is easier: you can simply copy-paste the nodes from this workflow.
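For reference, a stripped-down sketch of the two conversions, handling only plain and bold paragraph text; the workflow's actual code nodes cover more block types:

```javascript
// n8n Code node (JavaScript): minimal Notion blocks -> markdown -> Notion blocks round trip.
// Assumes the previous HTTP Request returned Notion's "block children" response in $json.
const blocks = $json.results ?? [];

// Notion rich_text -> markdown (paragraph blocks only, bold annotation only)
const toMarkdown = (block) =>
  (block.paragraph?.rich_text ?? [])
    .map((rt) => (rt.annotations?.bold ? `**${rt.plain_text}**` : rt.plain_text))
    .join('');

const markdown = blocks
  .filter((b) => b.type === 'paragraph')
  .map(toMarkdown)
  .join('\n\n');

// markdown -> Notion paragraph blocks (ignores inline formatting for brevity)
const children = markdown.split('\n\n').filter(Boolean).map((text) => ({
  object: 'block',
  type: 'paragraph',
  paragraph: { rich_text: [{ type: 'text', text: { content: text } }] },
}));

return [{ json: { markdown, children } }];
```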
by Pat
**Who is this for?**
This workflow template is perfect for content creators, researchers, students, or anyone who regularly works with audio files and needs to transcribe and summarize them for easy reference and organization.

**What problem does this workflow solve?**
Transcribing audio files and summarizing their content can be time-consuming and tedious when done manually. This workflow automates the process, saving users valuable time and effort while ensuring accurate transcriptions and concise summaries.

**What this workflow does**
This template automates the following steps:
1. Monitors a specified Google Drive folder for new audio files
2. Sends the audio file to OpenAI's Whisper API for transcription
3. Passes the transcribed text to GPT-4 for summarization
4. Creates a new page in Notion with the summary

**Setup**
To set up this workflow:
1. Connect your Google Drive, OpenAI, and Notion accounts to n8n
2. Configure the Google Drive node with the folder you want to monitor for new audio files
3. Set up the OpenAI node with your API key and desired parameters for Whisper and GPT-4
4. Specify the Notion database where you want the summaries to be stored

**How to customize this workflow**
- Adjust the Google Drive folder being monitored
- Modify the OpenAI node parameters to fine-tune the transcription and summarization process
- Change the Notion database or page properties to match your preferred structure

With this AI-powered workflow, you can effortlessly transcribe audio files, generate concise summaries, and store them in a structured manner within Notion. Streamline your audio content processing and organization with this automated template.
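One optional customization for long recordings is to split the Whisper transcript into chunks before the GPT-4 summarization step so each request stays comfortably inside the context window. A sketch, where the 8,000-character chunk size and the $json.text field are assumptions:

```javascript
// n8n Code node (JavaScript): split a long transcript into fixed-size chunks for summarization.
// Assumes the Whisper step put the transcription text in $json.text.
const transcript = $json.text ?? '';
const chunkSize = 8000; // characters per chunk; tune for your model

const chunks = [];
for (let i = 0; i < transcript.length; i += chunkSize) {
  chunks.push(transcript.slice(i, i + chunkSize));
}

// One item per chunk; summarize each, then merge the partial summaries downstream.
return chunks.map((chunk, index) => ({ json: { index, chunk } }));
```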
by Mathis
Convert PDF documents to AI-generated podcasts with Google Gemini and Text-to-Speech

Transform any PDF document into an engaging, natural-sounding podcast using Google's Gemini AI and advanced Text-to-Speech technology. This automated workflow extracts text content, generates conversational scripts, and produces high-quality audio files.

**Who is this for?**
This workflow template is perfect for content creators, educators, researchers, and marketing professionals who want to repurpose written content into audio format. Ideal for creating podcast episodes, educational content, or making documents more accessible.

**What problem does this solve?**
Converting written documents to engaging audio content manually is time-consuming and requires scriptwriting skills. This workflow automates the entire process, turning static PDFs into dynamic, conversational podcasts that sound natural and engaging.

**What this workflow does**
1. Extracts text from uploaded PDF documents
2. Generates a podcast script using Google Gemini AI with a conversational tone
3. Converts the script to speech using Google's advanced TTS with customizable voices
4. Processes audio into properly formatted WAV files
5. Saves the final podcast ready for distribution

**Setup**
1. Obtain API credentials: get a Google Gemini API key from AI Studio and configure credentials in n8n as "Google Gemini(PaLM) Api account"
2. Configure voice settings: choose from the available voices: Kore (professional), Aoede (conversational), Laomedeia (energetic); customize the script generation prompts if needed
3. Test the workflow: upload a sample PDF file, verify audio output quality, and adjust voice settings as preferred

**How to customize this workflow**
- **Modify script style:** Edit the prompt in the "Generate Podcast Script" node to change tone, length, or format
- **Change voice:** Update the voice name in the "Prepare TTS Request" node
- **Add preprocessing:** Insert text cleaning nodes before script generation
- **Integrate with storage:** Connect to Google Drive, Dropbox, or other storage services
- **Add notifications:** Include Slack or email notifications when podcasts are ready

Note: This template requires Google Gemini API access and works best with text-based PDF files under 10MB.
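For step 4 ("processes audio into properly formatted WAV files"), here is a minimal sketch of what such a Code node could do: wrap the raw PCM returned by the TTS call in a standard 44-byte WAV header. It assumes 16-bit mono PCM at 24 kHz and that the previous node stored the base64 audio in a field named audioBase64; adjust both if your TTS output differs.

```javascript
// n8n Code node (JavaScript): wrap raw PCM audio in a WAV header so players can open it.
const pcm = Buffer.from($json.audioBase64, 'base64'); // assumed field name
const sampleRate = 24000;   // assumed TTS output rate
const numChannels = 1;
const bitsPerSample = 16;
const byteRate = (sampleRate * numChannels * bitsPerSample) / 8;
const blockAlign = (numChannels * bitsPerSample) / 8;

const header = Buffer.alloc(44);
header.write('RIFF', 0);
header.writeUInt32LE(36 + pcm.length, 4);
header.write('WAVE', 8);
header.write('fmt ', 12);
header.writeUInt32LE(16, 16);             // fmt chunk size (PCM)
header.writeUInt16LE(1, 20);              // audio format: 1 = PCM
header.writeUInt16LE(numChannels, 22);
header.writeUInt32LE(sampleRate, 24);
header.writeUInt32LE(byteRate, 28);
header.writeUInt16LE(blockAlign, 32);
header.writeUInt16LE(bitsPerSample, 34);
header.write('data', 36);
header.writeUInt32LE(pcm.length, 40);

const wav = Buffer.concat([header, pcm]);
return [{
  json: {},
  binary: { data: await this.helpers.prepareBinaryData(wav, 'podcast.wav', 'audio/wav') },
}];
```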
by Shahrear
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Automatically transform audio files into professional transcription reports with AI-powered speech recognition, timestamp generation, and formatted Google Docs output.

**What this workflow does**
- Monitors Gmail for incoming audio attachments
- Downloads and processes audio files using VLM Run AI transcription
- Generates accurate transcriptions with precise timestamps and segmentation
- Creates professional reports in Google Docs with formatted output
- Handles asynchronous processing for long audio files without timeouts

**Setup**
Prerequisites: Gmail account, VLM Run API credentials, Google Docs access, self-hosted n8n. You need to install the VLM Run community node.

Quick setup:
1. Configure Gmail OAuth2 for email monitoring
2. Add VLM Run API credentials for audio transcription
3. Set up Google Docs OAuth2 for report generation
4. Create a target Google Doc for transcription reports
5. Update the document URL in the workflow nodes
6. Test with a sample audio file and activate

**Perfect for**
- Meeting recordings and conference calls
- Voice memos and dictation workflows
- Interview transcriptions and journalism
- Podcast episode documentation
- Accessibility compliance and documentation
- Legal proceedings and court recordings
- Educational content and lecture notes
- Customer service call analysis

**Key benefits**
- **Human-level accuracy** - Advanced AI speech recognition with automatic punctuation
- **Timestamp precision** - Segmented transcriptions with exact time markers
- **Multi-format support** - Handles MP3, WAV, M4A, AAC, OGG, FLAC files
- **Asynchronous processing** - No timeouts for long audio files
- **Professional formatting** - Beautifully structured Google Docs reports
- **Automatic workflow** - Zero manual intervention required
- **Saves hours per recording** - Transforms manual transcription into instant results
- **Searchable documentation** - Google Docs integration enables easy content discovery

**How to customize**
Extend by adding:
- Speaker identification and diarization
- Integration with project management tools (Notion, Asana, Trello)
- Automatic summary generation from transcripts
- Translation to multiple languages
- Slack notifications for completed transcriptions
- Integration with CRM systems for call logging
- Audio quality enhancement preprocessing
- Custom formatting templates for different use cases
- Automatic keyword extraction and tagging
- Integration with calendar systems for meeting context

This workflow revolutionizes audio documentation by combining cutting-edge AI transcription with professional report generation, making spoken content instantly accessible, searchable, and shareable across your organization.
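As an illustration of the report-formatting step (not the VLM Run node's actual output contract), a small Code node could turn transcription segments into the timestamped text written to Google Docs. The {start, end, text} segment shape is an assumption; map it onto whatever the node actually returns:

```javascript
// n8n Code node (JavaScript): format transcription segments as timestamped lines.
const segments = $json.segments ?? []; // assumed shape: [{ start, end, text }]

const stamp = (seconds) => {
  const m = String(Math.floor(seconds / 60)).padStart(2, '0');
  const s = String(Math.floor(seconds % 60)).padStart(2, '0');
  return `${m}:${s}`;
};

const report = segments
  .map((seg) => `[${stamp(seg.start)} - ${stamp(seg.end)}] ${seg.text.trim()}`)
  .join('\n');

return [{ json: { report } }];
```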
by Akram Kadri
**Who is this for?**
This workflow is designed for YouTubers who want to update their video descriptions in bulk without manually editing each one. It's especially useful for creators who include a standard set of links in their descriptions and need to insert a new link between existing ones across multiple videos.

**What problem does this workflow solve?**
Manually updating video descriptions for multiple videos can be tedious and time-consuming. If you have a section in your video descriptions that contains important links, adding a new one in a specific position (e.g., between two existing links) can be a challenge. This workflow automates that process, allowing you to insert a specific string between two predefined rows in all of your video descriptions at once.

**What this workflow does**
1. Fetches all videos from your YouTube channel.
2. Iterates through each video to retrieve its existing description.
3. Identifies two predefined rows in the description.
4. Inserts a new row between the two specified rows.
5. Updates the video description with the modified text.

**Setup**
1. Connect your YouTube account to n8n and grant the necessary permissions.
2. Define your variables in the "Set String to Insert" node:
   - rowBefore: The existing row after which the new row will be inserted.
   - rowToInsert: The new text or link to insert.
   - rowAfter: The existing row before which the new row will be inserted.
3. Run the workflow using the manual trigger.
4. Review the updated descriptions to ensure accuracy.

**How to customize this workflow to your needs**
- **Change the insertion criteria** by modifying the rowBefore and rowAfter values.
- **Insert multiple rows** by adjusting the JavaScript code in the Code node.
- **Extend the workflow** by adding conditions (e.g., only updating descriptions of videos with certain tags).
- **Filter specific videos** instead of updating all by modifying the "Get All Videos" node.

This workflow ensures that all your YouTube descriptions stay updated and consistent with minimal effort.
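A minimal sketch of what the Code node's insertion logic could look like, assuming the variables come from the "Set String to Insert" node and each item carries the video's description under snippet.description (adjust names to your workflow):

```javascript
// n8n Code node (JavaScript): insert rowToInsert on a new line between rowBefore and rowAfter.
const { rowBefore, rowToInsert, rowAfter } = $('Set String to Insert').first().json;

return $input.all().flatMap((item) => {
  const description = item.json.snippet?.description ?? '';
  const target = `${rowBefore}\n${rowAfter}`;

  // Skip videos that already contain the new row, or where the two rows aren't adjacent.
  if (description.includes(rowToInsert) || !description.includes(target)) return [];

  const updated = description.replace(target, `${rowBefore}\n${rowToInsert}\n${rowAfter}`);
  return [{ json: { ...item.json, snippet: { ...item.json.snippet, description: updated } } }];
});
```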
by James Francis
**Overview**
In cold email campaigns, the lead's company name is the second most frequently inserted variable after their first name. It's critical for effective cold email personalization. However, company names are often messy and can contain taglines, legal suffixes (e.g. LLC, Inc.), and other variations that would never be written out by a human in an email. If your email starts with "I came across Techwave Solutions LLC on LinkedIn...", it's a dead giveaway that you're sending a templated email and a response is much less likely.

This simple workflow uses AI to clean up messy company names in a Google Sheet so that your cold email campaigns can achieve better results.

**How It Works**
1. A form is submitted with a Google Sheet URL.
2. The workflow grabs the leads and uses an LLM node to clean the company names.
3. The updated leads are saved back in a new sheet within the original spreadsheet.

**Setup Steps**
1. Add your Google Sheets and OpenAI (or your AI model provider of choice) credentials to n8n.
2. Create a Google Sheet with your list of leads. IMPORTANT: the sheet MUST have a column called "Company".
3. (Optional) The AI workflow has a highly optimized system prompt. However, you may achieve better results by updating the list of examples in the prompt with companies (real or fake) in the industry you're targeting.

If you have any questions or feedback about this workflow, or would like me to build custom workflows for your business, email me at n8n@paperjam.agency.
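Not part of the original template, but if you want a deterministic first pass before the LLM call, a small Code node can strip common legal suffixes; the suffix list here is purely illustrative, and the AI step still handles taglines and trickier cases:

```javascript
// n8n Code node (JavaScript): rule-based pre-clean of the "Company" column.
const suffixes = /\s*[,.]?\s*\b(LLC|L\.L\.C\.|Inc\.?|Incorporated|Ltd\.?|Limited|GmbH|Corp\.?|Co\.)\s*$/i;

return $input.all().map((item) => {
  const raw = String(item.json.Company ?? '').trim();
  const cleaned = raw.replace(suffixes, '').trim();
  return { json: { ...item.json, Company: cleaned || raw } };
});
```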
by Eduardo Hales
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**How it works**
This workflow is a simple AI Agent that connects to Langfuse to send tracing data, helping you monitor LLM interactions. The main idea is to create a custom LLM model that allows callbacks to be configured; LangChain uses these callbacks to connect to applications such as Langfuse. This is achieved with the "LangChain Code" node, which:
- Connects an LLM model sub-node to obtain the model variables (model name, temperature and provider)
- Creates a generic LangChain initChatModel with those model parameters
- Returns the LLM to be used by the AI Agent node

**Prerequisites**
- Langfuse instance (cloud or self-hosted) with API credentials
- LLM API key (Gemini, OpenAI, Anthropic, etc.)
- n8n >= 1.98.0 (required for LangChain Code node support in AI Agent)

**Setup**
Add these environment variables to your n8n instance:

Langfuse configuration
LANGFUSE_SECRET_KEY=your_secret_key
LANGFUSE_PUBLIC_KEY=your_public_key
LANGFUSE_BASEURL=https://cloud.langfuse.com # or your self-hosted URL

LLM API key (example for Gemini)
GOOGLE_API_KEY=your_api_key

Alternative: configure these directly in the LangChain Code node if you prefer not to use environment variables.

Then:
1. Import the workflow JSON
2. Connect your preferred LLM model node
3. Send a test message to verify tracing appears in Langfuse
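A minimal sketch of what the LangChain Code node could contain, not the template's exact code. It assumes the langfuse-langchain package is importable on your instance, that the Langfuse keys are set as the environment variables above, and that the property names read from the connected sub-node match your provider:

```javascript
// n8n LangChain Code node (JavaScript): rebuild a generic chat model with a Langfuse callback.
const { initChatModel } = require('langchain/chat_models/universal');
const { CallbackHandler } = require('langfuse-langchain');

// Read the connected LLM sub-node to reuse its settings (property names vary by provider).
const connected = await this.getInputConnectionData('ai_languageModel', 0);
const modelName = connected?.model ?? connected?.modelName ?? 'gemini-2.0-flash'; // fallback is illustrative

const langfuseHandler = new CallbackHandler({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
  baseUrl: process.env.LANGFUSE_BASEURL,
});

// Generic model with the Langfuse callback attached; returned for the AI Agent node to use.
const model = await initChatModel(modelName, {
  modelProvider: 'google-genai', // swap for "openai", "anthropic", ...
  temperature: connected?.temperature ?? 0,
  callbacks: [langfuseHandler],
});

return model;
```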
by Davi Saranszky Mesquita
**Use case**

Workshop: we are using this workflow in our workshops to teach how to use Tools (a.k.a. functions) with artificial intelligence. In this specific case, we use a generic "AI Agent" node to illustrate that it could use other models from different data providers.

Enhanced weather forecasting: in this small example, it's easy to demonstrate how to obtain weather forecast results from the Open-Meteo site to accurately display the upcoming days. This can be used to plan travel decisions, for example.

**What this workflow does**
We make an HTTP request to find out the geographic coordinates of a city, then make further HTTP requests to discover the weather for the upcoming days. In this workshop, we demonstrate that the AI is able to determine which tool to call first: it calls the geolocation tool first and then the weather forecast tool, all within a single client conversation call.

**Setup**
Insert an OpenAI key and activate the workflow.

by Davi Saranszky Mesquita
https://www.linkedin.com/in/mesquitadavi/
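For reference, the agent's two tools in this weather example map onto Open-Meteo's free geocoding and forecast endpoints (no API key needed). A Code node version of the same two-step lookup, assuming the this.helpers.httpRequest helper is available on your instance:

```javascript
// n8n Code node (JavaScript): geocode a city, then fetch its daily forecast from Open-Meteo.
const city = $json.city ?? 'Berlin'; // example input

const geo = await this.helpers.httpRequest({
  url: 'https://geocoding-api.open-meteo.com/v1/search',
  qs: { name: city, count: 1 },
  json: true,
});
const { latitude, longitude, name } = geo.results[0];

const forecast = await this.helpers.httpRequest({
  url: 'https://api.open-meteo.com/v1/forecast',
  qs: {
    latitude,
    longitude,
    daily: 'temperature_2m_max,temperature_2m_min,precipitation_sum',
    timezone: 'auto',
  },
  json: true,
});

return [{ json: { city: name, daily: forecast.daily } }];
```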
by Mariano Kostelec
A fully automated content engine that researches, writes, scores, and visualizes LinkedIn posts, built with n8n, OpenAI, Perplexity, and Replicate.

**What it does**
- Researches any topic using real-time data
- Writes a personalized post in your voice
- Refines tone and structure
- Generates abstract, high-quality visual assets
- Scores the output and saves it to Google Sheets

**How it works**
1. Triggered when you change a row status in Google Sheets
2. Uses Perplexity to research
3. GPT-4o (OpenAI) to create and polish content
4. Replicate (FLUX Pro) to generate images
5. Scores the post using heuristics
6. Appends everything back to your sheet
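The heuristic scoring step could be as simple as a Code node that awards points for a few observable traits. The rules and weights below are illustrative assumptions, not the template's exact criteria, and the draft is assumed to sit in a field named post:

```javascript
// n8n Code node (JavaScript): score a LinkedIn post draft with simple heuristics (0-100).
const post = String($json.post ?? '');

let score = 0;
if (post.length >= 500 && post.length <= 1800) score += 25;   // LinkedIn-friendly length
if (/^.{0,120}\n/.test(post)) score += 25;                    // short hook line at the top
if (/\n\n/.test(post)) score += 20;                           // whitespace between paragraphs
if ((post.match(/#\w+/g) ?? []).length <= 3) score += 15;     // not hashtag-stuffed
if (/\?\s*$/m.test(post)) score += 15;                        // at least one line ends with a question

return [{ json: { ...$json, score } }];
```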