Troubleshooting

If you have problems with Natural Language Understanding, the following troubleshooting tips might help.

Entities and relations entity types are not consistent

The entity type systems for the entities and relations features are not always consistent. For some languages and version dates, relations results can contain entity types that differ from the types returned by the entities feature. See Entity types and subtypes and Relation types for more details.
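One way to see the difference is to request both features in a single call and compare the types each one reports. The following sketch uses the Python ibm-watson SDK; the credential placeholders, version date, and sample text are illustrative, not values from this documentation.

```python
# Compare entity types reported by the entities and relations features.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson.natural_language_understanding_v1 import (
    Features, EntitiesOptions, RelationsOptions)

authenticator = IAMAuthenticator('{apikey}')  # replace with your API key
nlu = NaturalLanguageUnderstandingV1(version='2022-04-07',
                                     authenticator=authenticator)
nlu.set_service_url('{url}')  # replace with your service URL

result = nlu.analyze(
    text='Leonardo DiCaprio won Best Actor for his role in The Revenant.',
    features=Features(entities=EntitiesOptions(), relations=RelationsOptions()),
    language='en',
).get_result()

# Entity types as reported by the entities feature
print([(e['text'], e['type']) for e in result.get('entities', [])])

# Entity types as reported inside relations arguments
for relation in result.get('relations', []):
    for argument in relation.get('arguments', []):
        for entity in argument.get('entities', []):
            print(entity['text'], entity['type'])
```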

Incorrect language detection

The automatic language detection might not be accurate for text that contains fewer than 100 characters. If the service doesn't detect the correct language of your text, you can override automatic language detection.
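To override detection, pass the language parameter on the analyze request. This is a minimal sketch with the Python SDK; the credentials, service URL, sample text, and version date are placeholders.

```python
# Override automatic language detection with the language parameter.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions

authenticator = IAMAuthenticator('{apikey}')  # replace with your API key
nlu = NaturalLanguageUnderstandingV1(version='2022-04-07',
                                     authenticator=authenticator)
nlu.set_service_url('{url}')  # replace with your service URL

# Short strings are easy to misdetect; language='fr' tells the service to
# treat the input as French instead of guessing.
result = nlu.analyze(
    text='Formidable !',
    features=Features(sentiment=SentimentOptions()),
    language='fr',
).get_result()
print(result['language'])  # the language that was actually used
```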

Too many requests

If you receive a "429: Too many requests" error, your service instance is probably exceeding its concurrent request limit. See the Usage limits page for more information.
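One common mitigation is client-side backoff. The sketch below catches the SDK's ApiException and retries when the status code is 429; the retry count and delays are illustrative choices, not service requirements.

```python
# Retry an analyze call with exponential backoff when the service returns 429.
import time

from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_cloud_sdk_core import ApiException
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson.natural_language_understanding_v1 import Features, KeywordsOptions

authenticator = IAMAuthenticator('{apikey}')  # replace with your API key
nlu = NaturalLanguageUnderstandingV1(version='2022-04-07',
                                     authenticator=authenticator)
nlu.set_service_url('{url}')  # replace with your service URL

def analyze_with_retry(text, retries=3, delay=1.0):
    """Retry when the service answers 429, doubling the delay each attempt."""
    for attempt in range(retries):
        try:
            return nlu.analyze(
                text=text,
                features=Features(keywords=KeywordsOptions(limit=5)),
            ).get_result()
        except ApiException as err:
            if err.code == 429 and attempt < retries - 1:
                time.sleep(delay * (2 ** attempt))
                continue
            raise
```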

Unable to analyze more than one URL

You can specify only one publicly accessible URL per API request, so you cannot extract sentiment scores from several URLs in a single call. You can, however, compile the text from multiple web pages and pass the combined text in one sentiment analysis request.
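The following sketch shows that workaround. The use of requests and BeautifulSoup to fetch and strip the pages is an assumption (any HTML-to-text step works), and the URLs are placeholders. Note that text submitted this way does not go through the service's own webpage cleaning.

```python
# Combine text from several pages and analyze sentiment in a single request.
import requests
from bs4 import BeautifulSoup

from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions

authenticator = IAMAuthenticator('{apikey}')  # replace with your API key
nlu = NaturalLanguageUnderstandingV1(version='2022-04-07',
                                     authenticator=authenticator)
nlu.set_service_url('{url}')  # replace with your service URL

urls = ['https://example.com/page-one', 'https://example.com/page-two']

# Fetch each page and keep only its visible text.
pages = []
for url in urls:
    html = requests.get(url, timeout=10).text
    pages.append(BeautifulSoup(html, 'html.parser').get_text(separator=' '))

result = nlu.analyze(
    text='\n'.join(pages),
    features=Features(sentiment=SentimentOptions()),
).get_result()
print(result['sentiment']['document'])
```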

Unexpected results from webpage analysis

Analyzing a webpage might return unexpected results in some cases. To investigate, try setting the return_analyzed_text parameter to true to inspect the actual text that is being analyzed in your request. In cases where webpage cleaning does not remove enough unwanted text, consider using the xpath parameter to focus the analysis on specific elements of the page.
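Both parameters can be set on the analyze request, as in the sketch below. The URL and the XPath expression are placeholders; return_analyzed_text echoes back the text the service actually analyzed, and xpath restricts the analysis to the matching elements.

```python
# Inspect the extracted text, then narrow the analysis with an XPath query.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson.natural_language_understanding_v1 import Features, ConceptsOptions

authenticator = IAMAuthenticator('{apikey}')  # replace with your API key
nlu = NaturalLanguageUnderstandingV1(version='2022-04-07',
                                     authenticator=authenticator)
nlu.set_service_url('{url}')  # replace with your service URL

# First pass: see what text is actually extracted from the page.
result = nlu.analyze(
    url='https://example.com/article',
    features=Features(concepts=ConceptsOptions(limit=5)),
    return_analyzed_text=True,
).get_result()
print(result['analyzed_text'])

# If too much unwanted text remains, focus on specific page elements.
focused = nlu.analyze(
    url='https://example.com/article',
    features=Features(concepts=ConceptsOptions(limit=5)),
    xpath='//div[@class="article-body"]',
    return_analyzed_text=True,
).get_result()
print(focused['analyzed_text'])
```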

Explanations for particular results

Natural Language Understanding does not provide any diagnostic tools to explain why a particular request returns a particular result. The service is designed to provide accurate results for as many text samples as possible, but due to the nature of the machine learning models we use, there is no guarantee that any particular result will look correct from a human perspective.

Unexpected changes in response and confidence scores

Natural Language Understanding continuously updates its pretrained models and the training algorithms that power custom models to give customers better results. These updates can change overall responses and confidence scores. The service is agnostic to individual use cases and makes these updates solely to improve accuracy and performance for all customers.

For more information about continuous model updates, see Section 5.1.2 of our Terms of Service.