transforEmotion 0.1.7 package

Sentiment Analysis for Text, Image and Video using Transformer Models

add_vision_model

User-Friendly Vision Model Management Functions

as_rag_table

Convert RAG JSON to a table

calculate_moving_average

Calculate the moving average for a time series

check_findingemo_quality

Check FindingEmo Dataset Quality

check_nvidia_gpu

Check for an NVIDIA GPU

delete_transformer

Delete a Transformer Model

dlo_dynamics

Dynamics function of the DLO model

dot-init_builtin_models

Initialize Built-in Vision Models

dot-vision_model_registry

Vision Model Registry for transforEmotion Package

download_findingemo_data

Download FindingEmo-Light Dataset

emoxicon_scores

Emoxicon Scores

emphasize

Generate and emphasize sudden jumps in emotion scores

evaluate_emotions

Evaluate Emotion Classification Performance

generate_observables

Generate observable emotion scores data from latent variables

generate_q

Generate a matrix of Dynamic Error values for the DLO simulation

get_vision_model_config

Get Vision Model Configuration

image_scores_dir

Calculate image scores for all images in a directory (fast batch)

image_scores

Calculate image scores using a Hugging Face CLIP model

is_vision_model_registered

Check if Vision Model is Registered

list_vision_models

List Available Vision Models

load_findingemo_annotations

Load FindingEmo-Light Annotations

map_discrete_to_vad

Map Discrete Emotions to VAD (Valence-Arousal-Dominance) Framework

map_to_emo8

Map FindingEmo Emotions to Emo8 Labels

MASS_mvrnorm

Multivariate Normal (Gaussian) Distribution

nlp_scores

Natural Language Processing Scores

parse_rag_json

Parse RAG JSON

plot_sim_emotions

Plot the latent or the observable emotion scores

plot.emotion_evaluation

Plot Evaluation Results

prepare_findingemo_evaluation

Prepare FindingEmo Data for Evaluation

print.emotion_evaluation

Print method for emotion evaluation results

punctuate

Punctuation Removal for Text

rag_json_utils

RAG JSON utilities

rag_retrievers

RAG retriever registry and helpers

rag_sentemo

Structured Emotion/Sentiment via RAG (Small LLMs)

rag

Retrieval-augmented Generation (RAG)

register_retriever

Register a custom retriever

register_vision_model

Register a Vision Model

remove_vision_model

Remove a Vision Model

sentence_similarity

Sentence Similarity Scores

setup_gpu_modules

Install GPU Python Modules

setup_miniconda

Deprecated: Miniconda setup (use uv instead)

setup_modules

Setup Required Python Modules

setup_popular_models

Quick Setup for Popular Models

show_vision_models

Show Available Vision Models

simulate_video

Simulate latent and observed emotion scores for a single "video"

summary.emotion_evaluation

Summary method for emotion evaluation results

te_cleanup_default_venv

Remove reticulate's default virtualenv (r-reticulate)

transforEmotion-package

Sentiment Analysis for Text, Image and Video using Transformer Models

transformer_scores

Sentiment Analysis Scores

vad_scores

Direct VAD (Valence-Arousal-Dominance) Prediction

validate_rag_json

Validate a RAG JSON structure

validate_rag_predictions

Validate RAG Emotion/Sentiment Predictions

video_scores

Run FER on a YouTube video using a Hugging Face CLIP model

Implements sentiment analysis using Hugging Face <https://huggingface.co> transformer zero-shot classification model pipelines for text and image data. The default text pipeline is Cross-Encoder's DistilRoBERTa <https://huggingface.co/cross-encoder/nli-distilroberta-base>, and the default image/video pipeline is OpenAI's CLIP <https://huggingface.co/openai/clip-vit-base-patch32>. Any other zero-shot classification pipeline can be used by passing its model name from <https://huggingface.co/models?pipeline_tag=zero-shot-classification>.
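A minimal sketch of the workflow described above, using `transformer_scores()` with the default text pipeline. Argument names here follow the package's documented interface, but the exact signature may differ between versions; check `?transformer_scores` in your installed copy before relying on it.

```r
# Sketch only -- verify argument names against ?transformer_scores.
library(transforEmotion)

text <- c(
  "I absolutely loved the ending of this film.",
  "The plot was confusing and the pacing was slow."
)
classes <- c("happy", "sad", "angry", "neutral")

# Zero-shot classification with the default DistilRoBERTa pipeline
scores <- transformer_scores(text = text, classes = classes)

# Any other zero-shot model from the Hugging Face hub can be supplied
# by name via the transformer argument, e.g. (illustrative choice):
# scores_bart <- transformer_scores(
#   text = text, classes = classes,
#   transformer = "facebook/bart-large-mnli"
# )
```

The same zero-shot pattern extends to images and video via `image_scores()` and `video_scores()`, which default to the CLIP pipeline noted above.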

  • Maintainer: Aleksandar Tomašević
  • License: GPL (>= 3.0)
  • Last published: 2026-01-08