Moderate Text

To moderate text, use the text.moderate method. It takes a text string and returns a Promise that resolves to a TextModerationResponse object.

index.ts
const contentMod = new ContentMod({
  publicKey: "<YOUR_PUBLIC_KEY>",
  secretKey: "<YOUR_SECRET_KEY>",
});

// Simple text moderation
const response = await contentMod.text.moderate("Some bad text");

// Text moderation with additional metadata to be saved with the moderation request
const response2 = await contentMod.text.moderate("Some bad text", {
  meta: {
    userId: "1234",
  },
});

// Optionally, you can pass a webhook callback URL to be notified when the moderation request is completed
const response3 = await contentMod.text.moderate("Some bad text", {
  callbackUrl: "https://example.com/webhook",
});

Webhook Callback

If you pass in a webhook callback URL, the response will be an object containing only the id property.
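As a minimal sketch, assuming the same contentMod client from the example above, the immediate return can be destructured to just the id:

// With a callbackUrl set, only the request id is returned immediately
const { id } = await contentMod.text.moderate("Some bad text", {
  callbackUrl: "https://example.com/webhook",
});

// The full TextModerationResponse is delivered to the callback URL once moderation completes
console.log(id);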

Properties

text
string
required

Text to moderate

Example: Some bad text

options
object

An object of options for the moderation request

meta
object

Any additional metadata you want to include with the moderation request that will be saved.

Example:{"userId": "1234"}

callbackUrl
string

A webhook callback URL that will be called when the moderation request is completed.

Example: https://example.com/webhook

defer
boolean

Whether to defer the moderation request. If true, the request is queued and the full result is delivered later to the webhook you have set up, or you can look it up later using the returned id (see the sketch below).

Example: true
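For illustration, a short sketch of a deferred request using the client from the example above; only the fields documented in this section are assumed:

// Queue the request instead of waiting for the full result
const deferred = await contentMod.text.moderate("Some bad text", {
  defer: true,
  callbackUrl: "https://example.com/webhook", // optional: receive the result via webhook
  meta: { userId: "1234" },                   // optional: metadata saved with the request
});

// Keep the returned id to look the result up later (or wait for the webhook callback)
console.log(deferred.id);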

Response

The TextModerationResponse object contains the following properties:

id
string
required

The ID of the moderation request.

Example: 27fbdc0b-b295-46ce-93f5-81b2fc08a381

isSafe
boolean
required

Whether the text is considered safe.

Example: true

confidence
number
required

Confidence in the safety of the text, from 1-100.

Example: 90

sentiment
string
required

Sentiment of the text. The available values are:

  • negative
  • neutral (default)
  • positive

Example: positive

sentimentScore
number
required

Sentiment score from 1-100 for the detected sentiment (negative, neutral, or positive).

Example: 90

riskScores
object
required

Overall risk score from 1-100.

Example: 90

topics
array
required

General topics the text covers, in lowercase with no punctuation.

Example: ["politics", "sports"]

nsfwCategories
array
required

An array of NSFW categories detected in the text, each with a severity score. The available categories are:

  • adult_content
  • suggestive_imagery
  • strong_language
  • violence_gore
  • horror_disturbing
  • alcohol
  • tobacco
  • substance_use
  • gambling
  • dating_relationship
  • medical_procedures
  • crude_humor
  • political_content

Example: [{"category": "sexual", "severity": 90}]

summary
object
required

Summary of the moderation result for the text.

Example: {"profanity": true, "totalFlags": 1, "contentRating": "G", "language": "en"}

suggestedActions
object
required

Suggested actions for the text.

reject
boolean
required

Whether to reject the text.

Example: true

review
boolean
required

Whether to flag the text for review.

Example: true

Example: {"reject": true, "review": true}

content
string
required

The original text.

Example: This is the original text

filteredContent
string
required

The text with profanity replaced by asterisks (*).

Example: This is a filtered text

request
object
required

The request object.

Example: {"requestId": "27fbdc0b-b295-46ce-93f5-81b2fc08a381", "timestamp": "2023-04-05T12:00:00.000Z"}

meta
object
required

The metadata that you provided with the request.

Example: {"userId": "12345"}

hash
string
required

The SHA-256 hash of the text. Used for content comparison purposes.

Example: 93a08bd10cd367bf83f12ba6590fcaef2f8deec8a47ce4ce88a15c7dda325ffa
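Putting the response fields together, here is a hedged sketch of how an application might act on a TextModerationResponse; the branching logic is an example policy, not SDK behaviour:

const result = await contentMod.text.moderate("Some bad text");

if (result.suggestedActions.reject) {
  // Block the content outright
  console.log("Rejected:", result.riskScores);
} else if (result.suggestedActions.review || !result.isSafe) {
  // Queue for manual review; show the profanity-filtered text in the meantime
  console.log("Needs review:", result.filteredContent);
} else {
  // Safe to publish as-is
  console.log("Approved:", result.content);
}

// The SHA-256 hash can be compared against previously moderated content
console.log(result.hash);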