comprehend function

Amazon Comprehend

Amazon Comprehend is an Amazon Web Services service for gaining insight into the content of documents. Use these actions to determine the topics your documents discuss, the predominant sentiment expressed in them, the predominant language used, and more.

Usage

comprehend(
  config = list(),
  credentials = list(),
  endpoint = NULL,
  region = NULL
)

Arguments

  • config: Optional configuration of credentials, endpoint, and/or region (see the construction sketch after this list).

    • credentials :

      • creds :

        • access_key_id : AWS access key ID
        • secret_access_key : AWS secret access key
        • session_token : AWS temporary session token
      • profile : The name of a profile to use. If not given, then the default profile is used.

      • anonymous : Set anonymous credentials.

    • endpoint : The complete URL to use for the constructed client.

    • region : The AWS Region used in instantiating the client.

    • close_connection : Immediately close all HTTP connections.

    • timeout : The time in seconds until a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.

    • s3_force_path_style : Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.

    • sts_regional_endpoint : Set the STS regional endpoint resolver to regional or legacy. See https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html for details.

  • credentials: Optional credentials shorthand for the config parameter.

    • creds :

      • access_key_id : AWS access key ID
      • secret_access_key : AWS secret access key
      • session_token : AWS temporary session token
    • profile : The name of a profile to use. If not given, then the default profile is used.

    • anonymous : Set anonymous credentials.

  • endpoint: Optional shorthand for the complete URL to use for the constructed client.

  • region: Optional shorthand for the AWS Region used in instantiating the client.
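
As a construction sketch (the profile name and Region below are purely illustrative), a client scoped to a specific credentials profile and Region can be created as follows:

svc <- comprehend(
  config = list(
    credentials = list(
      profile = "my-profile"  # assumed profile name in your AWS credentials file
    ),
    region = "us-east-1"
  )
)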

Returns

A client for the service. You can call the service's operations using syntax like svc$operation(...), where svc is the name you've assigned to the client. The available operations are listed in the Operations section.
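
For instance (the input text is illustrative), a synchronous call looks like this:

svc <- comprehend()
svc$detect_dominant_language(
  Text = "Bonjour tout le monde"
)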

Service syntax

svc <- comprehend(
  config = list(
    credentials = list(
      creds = list(
        access_key_id = "string",
        secret_access_key = "string",
        session_token = "string"
      ),
      profile = "string",
      anonymous = "logical"
    ),
    endpoint = "string",
    region = "string",
    close_connection = "logical",
    timeout = "numeric",
    s3_force_path_style = "logical",
    sts_regional_endpoint = "string"
  ),
  credentials = list(
    creds = list(
      access_key_id = "string",
      secret_access_key = "string",
      session_token = "string"
    ),
    profile = "string",
    anonymous = "logical"
  ),
  endpoint = "string",
  region = "string"
)

Operations

batch_detect_dominant_language: Determines the dominant language of the input text for a batch of documents
batch_detect_entities: Inspects the text of a batch of documents for named entities and returns information about them
batch_detect_key_phrases: Detects the key noun phrases found in a batch of documents
batch_detect_sentiment: Inspects a batch of documents and returns an inference of the prevailing sentiment, POSITIVE, NEUTRAL, MIXED, or NEGATIVE, in each one
batch_detect_syntax: Inspects the text of a batch of documents for the syntax and part of speech of the words in the document and returns information about them
batch_detect_targeted_sentiment: Inspects a batch of documents and returns a sentiment analysis for each entity identified in the documents
classify_document: Creates a classification request to analyze a single document in real time
contains_pii_entities: Analyzes input text for the presence of personally identifiable information (PII) and returns the labels of identified PII entity types such as name, address, bank account number, or phone number
create_dataset: Creates a dataset to upload training or test data for a model associated with a flywheel
create_document_classifier: Creates a new document classifier that you can use to categorize documents
create_endpoint: Creates a model-specific endpoint for synchronous inference for a previously trained custom model. For information about endpoints, see Managing endpoints
create_entity_recognizer: Creates an entity recognizer using submitted files
create_flywheel: Creates a flywheel, an Amazon Web Services resource that orchestrates the ongoing training of a model for custom classification or custom entity recognition
delete_document_classifier: Deletes a previously created document classifier
delete_endpoint: Deletes a model-specific endpoint for a previously-trained custom model
delete_entity_recognizer: Deletes an entity recognizer
delete_flywheel: Deletes a flywheel
delete_resource_policy: Deletes a resource-based policy that is attached to a custom model
describe_dataset: Returns information about the dataset that you specify
describe_document_classification_job: Gets the properties associated with a document classification job
describe_document_classifier: Gets the properties associated with a document classifier
describe_dominant_language_detection_job: Gets the properties associated with a dominant language detection job
describe_endpoint: Gets the properties associated with a specific endpoint
describe_entities_detection_job: Gets the properties associated with an entities detection job
describe_entity_recognizer: Provides details about an entity recognizer including status, S3 buckets containing training data, recognizer metadata, metrics, and so on
describe_events_detection_job: Gets the status and details of an events detection job
describe_flywheel: Provides configuration information about the flywheel
describe_flywheel_iteration: Retrieves the configuration properties of a flywheel iteration
describe_key_phrases_detection_job: Gets the properties associated with a key phrases detection job
describe_pii_entities_detection_job: Gets the properties associated with a PII entities detection job
describe_resource_policy: Gets the details of a resource-based policy that is attached to a custom model, including the JSON body of the policy
describe_sentiment_detection_job: Gets the properties associated with a sentiment detection job
describe_targeted_sentiment_detection_job: Gets the properties associated with a targeted sentiment detection job
describe_topics_detection_job: Gets the properties associated with a topic detection job
detect_dominant_language: Determines the dominant language of the input text
detect_entities: Detects named entities in input text when you use the pre-trained model
detect_key_phrases: Detects the key noun phrases found in the text
detect_pii_entities: Inspects the input text for entities that contain personally identifiable information (PII) and returns information about them
detect_sentiment: Inspects text and returns an inference of the prevailing sentiment (POSITIVE, NEUTRAL, MIXED, or NEGATIVE)
detect_syntax: Inspects text for syntax and the part of speech of words in the document
detect_targeted_sentiment: Inspects the input text and returns a sentiment analysis for each entity identified in the text
detect_toxic_content: Performs toxicity analysis on the list of text strings that you provide as input
import_model: Creates a new custom model that replicates a source custom model that you import
list_datasets: Lists the datasets that you have configured in this Region
list_document_classification_jobs: Gets a list of the document classification jobs that you have submitted
list_document_classifiers: Gets a list of the document classifiers that you have created
list_document_classifier_summaries: Gets a list of summaries of the document classifiers that you have created
list_dominant_language_detection_jobs: Gets a list of the dominant language detection jobs that you have submitted
list_endpoints: Gets a list of all existing endpoints that you've created
list_entities_detection_jobs: Gets a list of the entity detection jobs that you have submitted
list_entity_recognizers: Gets a list of the properties of all entity recognizers that you created, including recognizers currently in training
list_entity_recognizer_summaries: Gets a list of summaries for the entity recognizers that you have created
list_events_detection_jobs: Gets a list of the events detection jobs that you have submitted
list_flywheel_iteration_history: Gets information about the history of a flywheel iteration
list_flywheels: Gets a list of the flywheels that you have created
list_key_phrases_detection_jobs: Gets a list of key phrase detection jobs that you have submitted
list_pii_entities_detection_jobs: Gets a list of the PII entity detection jobs that you have submitted
list_sentiment_detection_jobs: Gets a list of sentiment detection jobs that you have submitted
list_tags_for_resource: Lists all tags associated with a given Amazon Comprehend resource
list_targeted_sentiment_detection_jobs: Gets a list of targeted sentiment detection jobs that you have submitted
list_topics_detection_jobs: Gets a list of the topic detection jobs that you have submitted
put_resource_policy: Attaches a resource-based policy to a custom model
start_document_classification_job: Starts an asynchronous document classification job using a custom classification model
start_dominant_language_detection_job: Starts an asynchronous dominant language detection job for a collection of documents
start_entities_detection_job: Starts an asynchronous entity detection job for a collection of documents
start_events_detection_job: Starts an asynchronous event detection job for a collection of documents
start_flywheel_iteration: Starts a flywheel iteration
start_key_phrases_detection_job: Starts an asynchronous key phrase detection job for a collection of documents
start_pii_entities_detection_job: Starts an asynchronous PII entity detection job for a collection of documents
start_sentiment_detection_job: Starts an asynchronous sentiment detection job for a collection of documents
start_targeted_sentiment_detection_job: Starts an asynchronous targeted sentiment detection job for a collection of documents
start_topics_detection_job: Starts an asynchronous topic detection job
stop_dominant_language_detection_job: Stops a dominant language detection job in progress
stop_entities_detection_job: Stops an entities detection job in progress
stop_events_detection_job: Stops an events detection job in progress
stop_key_phrases_detection_job: Stops a key phrases detection job in progress
stop_pii_entities_detection_job: Stops a PII entities detection job in progress
stop_sentiment_detection_job: Stops a sentiment detection job in progress
stop_targeted_sentiment_detection_job: Stops a targeted sentiment detection job in progress
stop_training_document_classifier: Stops a document classifier training job while in progress
stop_training_entity_recognizer: Stops an entity recognizer training job while in progress
tag_resource: Associates a specific tag with an Amazon Comprehend resource
untag_resource: Removes a specific tag associated with an Amazon Comprehend resource
update_endpoint: Updates information about the specified endpoint
update_flywheel: Updates the configuration information for an existing flywheel

Examples

## Not run:
svc <- comprehend()
svc$batch_detect_dominant_language(
  TextList = list("It is raining today in Seattle")
)
## End(Not run)
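
The following sketch extends the example above with a synchronous sentiment call and the start/describe pattern used by the asynchronous *_detection_job operations. Parameter names follow the Amazon Comprehend API; the S3 locations, IAM role ARN, and job name are placeholders to replace with your own values.

## Not run:
svc <- comprehend()

# Synchronous analysis of a single short text
svc$detect_sentiment(
  Text = "I love this product!",
  LanguageCode = "en"
)

# Asynchronous job over a collection of documents stored in S3
job <- svc$start_sentiment_detection_job(
  InputDataConfig = list(
    S3Uri = "s3://example-bucket/input/",   # placeholder input location
    InputFormat = "ONE_DOC_PER_FILE"
  ),
  OutputDataConfig = list(
    S3Uri = "s3://example-bucket/output/"   # placeholder output location
  ),
  DataAccessRoleArn = "arn:aws:iam::123456789012:role/example-comprehend-role",  # placeholder role
  LanguageCode = "en",
  JobName = "example-sentiment-job"
)

# Check the job's status by its identifier
svc$describe_sentiment_detection_job(
  JobId = job$JobId
)
## End(Not run)
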
  • Maintainer: Dyfan Jones
  • License: Apache License (>= 2.0)
  • Last published: 2025-03-17