Sentiment Analysis for Text, Image and Video using Transformer Models
User-Friendly Vision Model Management Functions
Convert RAG JSON to a table
Calculate the moving average for a time series
Check FindingEmo Dataset Quality
Install Necessary Python Modules
Delete a Transformer Model
Dynamics function of the DLO model
Initialize Built-in Vision Models
Vision Model Registry for transforEmotion Package
Download FindingEmo-Light Dataset
Emoxicon Scores
Generate and emphasize sudden jumps in emotion scores
Evaluate Emotion Classification Performance
Generate observable emotion scores data from latent variables
Generate a matrix of Dynamic Error values for the DLO simulation
Get Vision Model Configuration
Calculate image scores for all images in a directory (fast batch)
Calculate image scores using a Hugging Face CLIP model
Check if Vision Model is Registered
List Available Vision Models
Load FindingEmo-Light Annotations
Map Discrete Emotions to VAD (Valence-Arousal-Dominance) Framework
Map FindingEmo Emotions to Emo8 Labels
Multivariate Normal (Gaussian) Distribution
Natural Language Processing Scores
Parse RAG JSON
Plot the latent or observable emotion scores
Plot Evaluation Results
Prepare FindingEmo Data for Evaluation
Print method for emotion evaluation results
Punctuation Removal for Text
RAG JSON utilities
RAG retriever registry and helpers
Structured Emotion/Sentiment via RAG (Small LLMs)
Retrieval-augmented Generation (RAG)
Register a custom retriever
Register a Vision Model
Remove a Vision Model
Sentiment Analysis Scores
Install GPU Python Modules
Deprecated: Miniconda setup (use uv instead)
Setup Required Python Modules
Quick Setup for Popular Models
Show Available Vision Models
Simulate latent and observed emotion scores for a single "video"
Summary method for emotion evaluation results
Remove reticulate's default virtualenv (r-reticulate)
transforEmotion-package
Sentiment Analysis Scores
Direct VAD (Valence-Arousal-Dominance) Prediction
Validate a RAG JSON structure
Validate RAG Emotion/Sentiment Predictions
Run FER on a YouTube video using a Hugging Face CLIP model
Implements sentiment analysis using Hugging Face <https://huggingface.co> transformer zero-shot classification model pipelines for text and image data. The default text pipeline is Cross-Encoder's DistilRoBERTa <https://huggingface.co/cross-encoder/nli-distilroberta-base>, and the default image/video pipeline is OpenAI's CLIP <https://huggingface.co/openai/clip-vit-base-patch32>. Any other zero-shot classification pipeline can be used by supplying its model name from <https://huggingface.co/models?pipeline_tag=zero-shot-classification>.
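A minimal usage sketch of the zero-shot text workflow described above. It assumes a `transformer_scores()` function matching the "Sentiment Analysis Scores" topic in the index, taking a character vector of texts and a vector of candidate class labels; the exact argument names are an assumption, so consult the topic's help page before use.

```r
# Hedged sketch: argument names are assumed, check ?transformer_scores
library(transforEmotion)

# Score two short texts against user-chosen emotion classes using the
# default zero-shot pipeline (Cross-Encoder's DistilRoBERTa)
scores <- transformer_scores(
  text    = c("I love this movie!", "What a waste of time."),
  classes = c("happy", "sad", "angry")
)

# A different zero-shot model from the Hugging Face hub can be passed
# by name, e.g. transformer = "facebook/bart-large-mnli" (assumed argument)
```

The first call downloads the model weights via the package's Python backend, so `setup_modules()` (see "Setup Required Python Modules" above) should be run once beforehand.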