NLP service that scores content in order to support content moderation.
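The attribute names in the example output below (TOXICITY, SEVERE_TOXICITY, THREAT, IDENTITY_ATTACK) match Google's Perspective API, so a scoring call along those lines is a reasonable guess. The following is a minimal sketch, assuming the Perspective API REST endpoint and a hypothetical `PERSPECTIVE_API_KEY` environment variable; it is not necessarily how this service is wired up internally.

```python
import os
import requests

# Perspective API endpoint (assumed); the key is read from an environment
# variable rather than hard-coded.
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

# Attributes shown in the example output below.
ATTRIBUTES = ["TOXICITY", "SEVERE_TOXICITY", "THREAT", "IDENTITY_ATTACK"]


def score_text(text: str) -> dict[str, float]:
    """Return a mapping of attribute name -> probability score for `text`."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {attr: {} for attr in ATTRIBUTES},
    }
    response = requests.post(
        PERSPECTIVE_URL,
        params={"key": os.environ["PERSPECTIVE_API_KEY"]},
        json=payload,
        timeout=10,
    )
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return {attr: scores[attr]["summaryScore"]["value"] for attr in ATTRIBUTES}
```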
Example output:

This is a bad word
  SEVERE_TOXICITY  0.89362836
  THREAT           0.3614778
  TOXICITY         0.9646382
  IDENTITY_ATTACK  0.69617623

This is a threat
  THREAT           0.98462415
  SEVERE_TOXICITY  0.80347574
  TOXICITY         0.6221407
  IDENTITY_ATTACK  0.58523935

Everyone should love one another
  TOXICITY         0.02193547
  SEVERE_TOXICITY  0.026996724
  THREAT           0.08195061
  IDENTITY_ATTACK  0.056948427

Hello my name is cloudguruab
  THREAT           0.03882238
  IDENTITY_ATTACK  0.014257909
  SEVERE_TOXICITY  0.015383799
  TOXICITY         0.024662184
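Output in the shape above can be produced by printing each attribute score under its input text. A minimal sketch, building on the hypothetical `score_text` helper from the previous snippet:

```python
# Score a few sample comments and print each attribute with its probability,
# mirroring the layout of the example output above.
samples = [
    "This is a bad word",
    "This is a threat",
    "Everyone should love one another",
    "Hello my name is cloudguruab",
]

for sample in samples:
    print(sample)
    for attribute, value in score_text(sample).items():
        print(f"  {attribute} {value}")
```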