Intermediate · 45 min read · Updated Jan 5, 2026

Automate PDF Analysis with Make.com & Kimi 128K Context

Leverage Kimi's 128K+ context with Make.com to auto-analyze long PDF documents like financial reports and books. Extract insights to Notion.


Overview

For users who need to process long Chinese documents (such as financial reports and books) but are held back by OpenAI access difficulties or high costs, the Kimi API combined with Make.com is a highly cost-effective alternative.

This workflow leverages Kimi's 128K+ context window to automate long-document analysis:

  1. Trigger Task - A status change in a Notion database triggers the run
  2. Upload File - Upload the PDF to Kimi and get a File ID
  3. Parse Content - Call the parsing endpoint to get the text content
  4. AI Analysis - Send the text to Kimi for summarization and extraction
  5. Write Back Results - Save the key insights to Notion
  6. Clean Files - Delete processed files to prevent accumulation
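The six steps above can be sketched as a single pipeline. This is a hypothetical Python sketch, not Make.com configuration: each helper stands in for one Make module, and the stub bodies only illustrate the data handed from step to step.

```python
# Hypothetical stand-ins for the Make modules; real versions would call
# the Notion and Kimi APIs described later in this guide.
def upload_pdf(file_ref):          # 2. upload the PDF to Kimi
    return "file-abc123"           # Kimi returns a file id

def parse_file(file_id):           # 3. fetch the extracted text
    return "raw document text"

def analyze(text):                 # 4. Kimi summarization / extraction
    return ["insight 1", "insight 2"]

def write_back(page, insights):    # 5. save the results to Notion
    page["Analysis Result"] = "; ".join(insights)

def delete_file(file_id):          # 6. clean up the uploaded file
    pass

def run_pipeline(page):
    if page.get("Status") != "Start":   # 1. trigger on status change
        return None
    page["Status"] = "Processing"
    file_id = upload_pdf(page["File"])
    insights = analyze(parse_file(file_id))
    write_back(page, insights)
    delete_file(file_id)
    page["Status"] = "Done"
    return insights
```

The point of the sketch is the ordering: deletion always runs after analysis, which is exactly the discipline Phase 4 below enforces.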

Figure: Make workflow overview (complete 8-step automation workflow architecture)


Core Decision Factors

When choosing an AI solution for long-document processing, focus on:

  • Long Text Throughput - Can it read a PDF of dozens of pages at once without losing information?
  • API Ease of Use & Cost - Is the interface documentation clear, and is token pricing affordable?
  • Error Handling - How well does it handle special characters, network timeouts, and similar issues?
  • Data Structuring Capability - Can it transform PDF content into a usable JSON format?

Technical Specifications

| Key Metric | Specification | Notes |
| --- | --- | --- |
| Kimi model context | 8K / 32K / 128K | 128K recommended for long documents |
| Notion upload limit | 5 MB | Free-plan limit; use Google Drive for larger files |
| Kimi file storage limit | 100 files | A deletion step must be configured |
| HTTP timeout setting | 300 seconds | Extend it for long text |
| Status code validation | 200 | Add a Filter to check it |
| API auth method | Bearer token | Add Authorization in the header |
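The last three rows of the table combine into one HTTP call profile. As a hedged sketch, here is a hypothetical helper (field names are illustrative and loosely mirror Make's HTTP module options) that applies the Bearer auth, the 300-second timeout, and the 200-only filter together:

```python
def http_call_spec(api_key: str, url: str, method: str = "POST") -> dict:
    """Build one HTTP call profile with the settings from the table above."""
    return {
        "url": url,
        "method": method,
        "headers": {"Authorization": f"Bearer {api_key}"},  # Bearer token auth
        "timeout_seconds": 300,   # extended timeout for long text
        "accept_status": 200,     # Filter: only 200 responses pass downstream
    }

def passes_filter(spec: dict, status: int) -> bool:
    """Mirrors the recommended Make Filter on the response status code."""
    return status == spec["accept_status"]
```

In Make itself these are three separate settings (a header field, an advanced timeout option, and a Filter between modules); bundling them here just makes the dependency explicit.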

Prerequisites

Before starting, ensure you have:

Notion Database Structure

Create PDF management database with these fields:

  • File (Files & Media) - Upload PDF file
  • Status (Select) - Start/Processing/Done
  • Analysis Result (Text) - AI-extracted key insights
  • File Link (URL) - PDF access link
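For reference, the same four fields can be expressed as a Notion API database properties object. This is a hedged sketch based on the public Notion API's property types; "rich_text" is the API-level name standing in for the Text field:

```python
# Notion database properties matching the four fields above
# (property type names follow the public Notion API).
notion_properties = {
    "File": {"files": {}},                 # Files & Media
    "Status": {"select": {"options": [
        {"name": "Start"},
        {"name": "Processing"},
        {"name": "Done"},
    ]}},
    "Analysis Result": {"rich_text": {}},  # Text
    "File Link": {"url": {}},              # URL
}
```

If you create the database by hand in the Notion UI, this object is only a reference for what the API will report back when Make reads the schema.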

Why Choose Kimi API

Compared to OpenAI, Kimi has unique advantages in Chinese long document processing:

| Comparison | Kimi | OpenAI |
| --- | --- | --- |
| Context window | 128K (200K+ planned) | 128K (GPT-4 Turbo) |
| Chinese ability | Natively optimized | General |
| China access | Direct access | Needs a proxy |
| API cost | Relatively low | Higher |
| Make integration | Needs the HTTP module | Has a native module |

Workflow Architecture

The workflow is divided into four core phases:

Phase 1: File Upload

Figure: HTTP module file upload configuration (multipart/form-data explained)

Use the HTTP module to upload the PDF to Kimi:

Request Configuration:

  • URL: https://api.moonshot.cn/v1/files
  • Method: POST
  • Content-Type: multipart/form-data
  • Headers: Authorization: Bearer {{api_key}}

After a successful upload, keep the returned file_id for the subsequent parsing step.
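In code, the same upload configuration looks like this. The helper below is a hypothetical sketch that only builds the request spec the HTTP module sends; the purpose value "file-extract" follows Moonshot's documented file upload API, but verify it against the current docs:

```python
def kimi_upload_request(api_key: str, pdf_name: str) -> dict:
    """Request spec for uploading one PDF to Kimi (multipart/form-data)."""
    return {
        "url": "https://api.moonshot.cn/v1/files",
        "method": "POST",
        "headers": {"Authorization": f"Bearer {api_key}"},
        # multipart/form-data body: the file itself plus its purpose
        "form_data": {"file": pdf_name, "purpose": "file-extract"},
    }
```

In Make, the file field maps to the binary data coming from the Notion trigger rather than a filename.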


Phase 2: Content Parsing & Cleaning

After getting the File ID, call the parsing endpoint to retrieve the PDF's text content.

Key Step: Text Parser Cleaning

Figure: Text Parser regex configuration (cleaning the raw text)

PDF-parsed raw text often contains:

  • Newline characters (\n)
  • Special brackets: [], {}
  • Excess spaces

These characters break the JSON structure if inserted directly, so they must be cleaned with regex replacements.
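The cleaning step can be reproduced with two regex replacements. This is a minimal sketch; the exact patterns in your Text Parser module may differ depending on what the parser emits:

```python
import re

def clean_for_json(raw: str) -> str:
    """Flatten whitespace and strip brackets so the text can sit inside JSON."""
    text = re.sub(r"\s+", " ", raw)        # newlines + runs of spaces -> one space
    text = re.sub(r"[\[\]{}]", "", text)   # drop [] and {} that confuse templating
    return text.strip()
```

A stricter alternative is to JSON-escape the text (e.g. with json.dumps) instead of deleting characters, which preserves content; the regex route mirrors what this workflow's Text Parser module does.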


Phase 3: AI Analysis & Write-back

Figure: JSON prompt structure for the 128K context model

Send the cleaned text to Kimi for analysis:

Prompt Design:

{
  "model": "moonshot-v1-128k",
  "messages": [
    {
      "role": "system",
      "content": "You are a professional document analysis expert, skilled at extracting core insights from long reports."
    },
    {
      "role": "user",
      "content": "Please extract 10 core insights from the following document:\n\n{{cleaned_text}}"
    }
  ]
}
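The prompt above can be assembled programmatically before the HTTP call. A hedged sketch: the build_analysis_payload helper and the temperature value are illustrative additions, not part of the original Make configuration.

```python
import json

def build_analysis_payload(cleaned_text: str, n_insights: int = 10) -> str:
    """Serialize the chat payload sent to Kimi's chat completions endpoint."""
    payload = {
        "model": "moonshot-v1-128k",
        "messages": [
            {"role": "system",
             "content": ("You are a professional document analysis expert, "
                         "skilled at extracting core insights from long reports.")},
            {"role": "user",
             "content": (f"Please extract {n_insights} core insights from the "
                         f"following document:\n\n{cleaned_text}")},
        ],
        "temperature": 0.3,  # assumption: a low temperature for factual extraction
    }
    return json.dumps(payload, ensure_ascii=False)
```

Because the cleaned text is interpolated into the user message, this is the exact spot where uncleaned brackets or newlines from Phase 2 would corrupt the JSON body.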

Phase 4: File Cleanup (Critical)

Figure: Delete file API configuration (the mandatory file deletion step)

The Kimi API has a hard limit of 100 stored files, so a deletion step must be configured:

Request Configuration:

  • URL: https://api.moonshot.cn/v1/files/{{file_id}}
  • Method: DELETE
  • Headers: Authorization: Bearer {{api_key}}

Important: Without the deletion step, the workflow will start failing outright after about 100 runs!
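The deletion call can be expressed the same way as the upload. A hypothetical spec builder mirroring the configuration above:

```python
def kimi_delete_request(api_key: str, file_id: str) -> dict:
    """Request spec for deleting one processed file from Kimi."""
    return {
        "url": f"https://api.moonshot.cn/v1/files/{file_id}",
        "method": "DELETE",
        "headers": {"Authorization": f"Bearer {api_key}"},
    }
```

In Make, map file_id from the upload module's output so each run deletes exactly the file it created.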


Important Notes

Common “pitfalls” in practice:

  1. JSON Structure "Invisible Killer" - PDF-parsed raw text contains special characters; clean it with a Text Parser regex step before building JSON

  2. File Accumulation Trap - Kimi enforces a 100-file limit; a "delete after analysis" step is mandatory

  3. Data Delay & Mapping Difficulty - In Make, if the previous HTTP request hasn't run yet, later modules can't map its variables; "dry run" the scenario once to fetch the data structure

  4. Notion Size Bottleneck - The free plan only supports 5 MB files; large PDFs need the Google Drive workaround

  5. HTTP Timeout - Extend the timeout to 300+ seconds for long text processing


Actual Results

Processing Capability:

  • Accurately extracted 10 core insights, such as "Financial Summary" and "Cost Control", from a 30+ page Tesla financial report in English

Processing Efficiency:

  • Compared to manual reading, automated processing takes only minutes (depending on file length)

Use Cases

  • Finance Professionals / Researchers - Read many research reports daily and need the key information quickly
  • OpenAI-Restricted Users - Need high-quality processing of long Chinese texts
  • Notion Power Users - Want to build a closed-loop automated knowledge base

May Not Suit

  • Complete beginners with no concept of APIs, JSON, or HTTP requests
  • Users processing very large scanned files (>50 MB image-based PDFs)

FAQ

Why choose Kimi over OpenAI?

Kimi supports a 128K+ ultra-long context, has strong Chinese processing capability, is directly accessible in China, and has a relatively low API cost, making it well suited to long Chinese documents.

What are Kimi API’s limitations?

Kimi stores at most 100 files and returns errors once the limit is exceeded. Configure a deletion step in the workflow so files are cleaned up promptly after processing.

Why does the data need Text Parser cleaning?

PDF-parsed raw text often contains newlines and special characters that break the JSON structure and cause API errors when inserted directly. Clean them with a regex first.

What are Notion's file upload limitations?

Notion's free plan only supports uploads up to 5 MB. For larger PDFs, use a Google Drive trigger as a workaround.


Next Steps

After learning the basic workflow, you can try:

  • Add OCR module to process scanned PDFs
  • Integrate multilingual translation functionality
  • Add batch processing queue
  • Set scheduled tasks to auto-process new files

Feel free to leave comments if you have questions!



About the author


Alex Chen

Automation Expert & Technical Writer

Alex Chen is a certified Make.com expert with 5+ years of experience building enterprise automation solutions. Former software engineer at tech startups, now dedicated to helping businesses leverage AI and no-code tools for efficiency.

Credentials

  • Make.com Certified Partner
  • Google Cloud Certified
  • 500+ Automations Built
  • Former Software Engineer