TextLens API for Go

Add text analysis to your Go service in 10 lines. Readability grades (8 formulas), sentiment scoring, keyword extraction, and SEO analysis — uses standard net/http, zero external dependencies.

From the team behind textlens — 96 npm downloads/week

req, _ := http.NewRequest("GET", "https://api.ckmtools.dev/v1/analyze", nil)
q := req.URL.Query()
q.Add("text", "Your content here...")
req.URL.RawQuery = q.Encode()
req.Header.Set("X-API-Key", os.Getenv("TEXTLENS_KEY"))

resp, _ := http.DefaultClient.Do(req)
defer resp.Body.Close()

body, _ := io.ReadAll(resp.Body)
var result struct {
    Readability struct {
        FleschKincaidGrade float64 `json:"fleschKincaidGrade"`
    } `json:"readability"`
}
json.Unmarshal(body, &result)
fmt.Println(result.Readability.FleschKincaidGrade)  // 8.2

What you get

Flesch-Kincaid & More

Flesch-Kincaid grade level and reading ease, Gunning Fog, SMOG, Coleman-Liau, ARI, Dale-Chall, Linsear Write, and a consensus grade — 8 formulas in one call.

Gunning Fog Index

Measures complex word density and sentence length. Ideal for assessing technical documentation, API guides, and developer content targeting specific expertise levels.

Sentiment Analysis

AFINN scoring calibrated for long-form content. Label + score + confidence. Works on blog posts, docs, marketing copy, reviews — returns a struct you can unmarshal directly.

Language Detection

Analyzes English content with accuracy tuned for technical writing. Language detection included — useful when processing user-generated content from mixed sources.

Zero Infrastructure

No NLP model to deploy, no Python runtime to manage, no GPU. One HTTP call from your Go service. Works anywhere net/http works — including Lambda and Cloud Run.

Instant Results

Sub-50ms response time. JSON response maps directly to Go structs with standard encoding/json. No custom serializers, no third-party HTTP clients required.

The response

One endpoint returns everything you need:

{
  "readability": {
    "fleschKincaidGrade": 8.2,
    "gunningFog": 10.1,
    "smog": 9.4,
    "colemanLiau": 11.3,
    "ari": 9.7,
    "daleChall": 7.8,
    "linsearWrite": 8.5,
    "consensusGrade": "Grade 9",
    "fleschReadingEase": { "score": 62.3, "interpretation": "Standard" }
  },
  "sentiment": {
    "label": "positive",
    "score": 4.2,
    "confidence": 0.71
  },
  "keywords": [
    { "word": "content", "score": 5.1, "count": 4, "density": 1.87 },
    { "word": "readability", "score": 4.8, "count": 3, "density": 1.40 }
  ],
  "seo": { "score": 74, "grade": "B" },
  "statistics": { "words": 342, "sentences": 18, "paragraphs": 5 },
  "meta": { "processing_time_ms": 14 }
}

Go examples

net/http — stdlib, zero dependencies (recommended)
package main

import (
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "os"
)

type TextLensResponse struct {
    Readability struct {
        FleschKincaidGrade float64 `json:"fleschKincaidGrade"`
        GunningFog         float64 `json:"gunningFog"`
    } `json:"readability"`
    Sentiment struct {
        Label string `json:"label"`
    } `json:"sentiment"`
    Statistics struct {
        Words int `json:"words"`
    } `json:"statistics"`
}

func analyzeText(text, apiKey string) (*TextLensResponse, error) {
    req, err := http.NewRequest("GET", "https://api.ckmtools.dev/v1/analyze", nil)
    if err != nil {
        return nil, err
    }
    q := req.URL.Query()
    q.Add("text", text)
    req.URL.RawQuery = q.Encode()
    req.Header.Set("X-API-Key", apiKey)

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, err
    }
    var result TextLensResponse
    if err := json.Unmarshal(body, &result); err != nil {
        return nil, err
    }
    return &result, nil
}

func main() {
    result, err := analyzeText("Your article text here...", os.Getenv("TEXTLENS_KEY"))
    if err != nil {
        panic(err)
    }
    fmt.Printf("Flesch-Kincaid: %.1f\n", result.Readability.FleschKincaidGrade)
    fmt.Printf("Gunning Fog: %.1f\n", result.Readability.GunningFog)
    fmt.Printf("Sentiment: %s\n", result.Sentiment.Label)
}
Full response struct
type FleschReadingEase struct {
    Score          float64 `json:"score"`
    Interpretation string  `json:"interpretation"`
}

type Readability struct {
    FleschKincaidGrade float64           `json:"fleschKincaidGrade"`
    GunningFog         float64           `json:"gunningFog"`
    SMOG               float64           `json:"smog"`
    ColemanLiau        float64           `json:"colemanLiau"`
    ARI                float64           `json:"ari"`
    DaleChall          float64           `json:"daleChall"`
    LinsearWrite       float64           `json:"linsearWrite"`
    ConsensusGrade     string            `json:"consensusGrade"`
    FleschReadingEase  FleschReadingEase `json:"fleschReadingEase"`
}

type Sentiment struct {
    Label      string  `json:"label"`
    Score      float64 `json:"score"`
    Confidence float64 `json:"confidence"`
}

type Keyword struct {
    Word    string  `json:"word"`
    Score   float64 `json:"score"`
    Count   int     `json:"count"`
    Density float64 `json:"density"`
}

type SEO struct {
    Score int    `json:"score"`
    Grade string `json:"grade"`
}

type Statistics struct {
    Words      int `json:"words"`
    Sentences  int `json:"sentences"`
    Paragraphs int `json:"paragraphs"`
}

type Meta struct {
    ProcessingTimeMs int `json:"processing_time_ms"`
}

type TextLensResult struct {
    Readability Readability `json:"readability"`
    Sentiment   Sentiment   `json:"sentiment"`
    Keywords    []Keyword   `json:"keywords"`
    SEO         SEO         `json:"seo"`
    Statistics  Statistics  `json:"statistics"`
    Meta        Meta        `json:"meta"`
}
Production client with timeout and error handling
type TextLensClient struct {
    apiKey string
    client *http.Client
}

func NewTextLensClient(apiKey string) *TextLensClient {
    return &TextLensClient{
        apiKey: apiKey,
        client: &http.Client{Timeout: 10 * time.Second},
    }
}

func (c *TextLensClient) Analyze(text string) (*TextLensResult, error) {
    req, err := http.NewRequest("GET", "https://api.ckmtools.dev/v1/analyze", nil)
    if err != nil {
        return nil, fmt.Errorf("creating request: %w", err)
    }
    q := req.URL.Query()
    q.Add("text", text)
    req.URL.RawQuery = q.Encode()
    req.Header.Set("X-API-Key", c.apiKey)

    resp, err := c.client.Do(req)
    if err != nil {
        return nil, fmt.Errorf("executing request: %w", err)
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("unexpected status: %d", resp.StatusCode)
    }

    var result TextLensResult
    if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
        return nil, fmt.Errorf("decoding response: %w", err)
    }
    return &result, nil
}

Why not a Go library?

No Go package covers all 8 readability formulas plus AFINN sentiment plus TF-IDF keyword extraction in a single call. github.com/jdkato/prose handles tokenization but not readability grades. Ported libraries exist for individual formulas, but each has a different API, different accuracy, and a different maintenance status. Building your own pipeline means vendoring multiple packages with incompatible result formats. TextLens API delivers the complete analysis in a single HTTP request — the same endpoint works from Go, Python, Ruby, or any language with an HTTP client. Nothing to go get, no go.sum entries, no version pinning.

Join the Waitlist

TextLens API is in development. Join the waitlist to get notified at launch.

Backed by the textlens npm package — 96 downloads/week.

Get Early Access

$0 — no credit card required

Pricing

Free

$0 /mo
  • 1,000 requests/mo
  • 10 req/min rate limit
  • 5,000 char max per request
Get Started

Starter

$9 /mo
  • 25,000 requests/mo
  • 60 req/min rate limit
  • 25,000 char max per request
Subscribe

Enterprise

$99 /mo
  • 500,000 requests/mo
  • 300 req/min rate limit
  • 500,000 char max per request
Subscribe
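Each plan caps characters per request (5,000 on Free). A small helper can clip input client-side before sending. This is a sketch: truncateForPlan is a hypothetical name, and it counts characters as runes, which may differ from how the API counts them:

```go
package main

import "fmt"

// truncateForPlan clips text to a plan's per-request character limit,
// cutting on a rune boundary so multi-byte characters are not split.
// (Hypothetical helper; the API's own character-counting rules may differ.)
func truncateForPlan(text string, maxChars int) string {
	runes := []rune(text)
	if len(runes) <= maxChars {
		return text
	}
	return string(runes[:maxChars])
}

func main() {
	article := "A long article body"
	fmt.Println(truncateForPlan(article, 5000)) // Free tier limit: 5,000 chars
}
```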

Using Ruby instead? See Ruby-specific examples →

See all library comparisons →