Sentiment Analysis for Text, Image and Video using Transformer Models
Calculate the moving average for a time series
Check if the "transforEmotion" conda environment exists
Delete a Transformer Model
Dynamics function of the DLO model
Emoxicon Scores
Generate and emphasize sudden jumps in emotion scores
Generate observable emotion scores data from latent variables
Generate a matrix of Dynamic Error values for the DLO simulation
Calculate image scores based on the OpenAI CLIP model
Multivariate Normal (Gaussian) Distribution
Natural Language Processing Scores
Plot the latent or the observable emotion scores
Punctuation Removal for Text
Retrieval-augmented Generation (RAG)
Sentiment Analysis Scores
Install GPU Python Modules
Install Miniconda and activate the transforEmotion environment
Install Necessary Python Modules
Simulate latent and observed emotion scores for a single "video"
transforEmotion-package
Sentiment Analysis Scores
Run FER on YouTube video
Implements sentiment analysis using Hugging Face <https://huggingface.co> transformer zero-shot classification model pipelines for text and image data. The default text pipeline is Cross-Encoder's DistilRoBERTa <https://huggingface.co/cross-encoder/nli-distilroberta-base> and the default image/video pipeline is OpenAI's CLIP <https://huggingface.co/openai/clip-vit-base-patch32>. All other zero-shot classification model pipelines can be implemented using their model name from <https://huggingface.co/models?pipeline_tag=zero-shot-classification>.
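As a minimal sketch of the workflow described above, the example below scores short texts against user-supplied emotion classes with the default text pipeline. The function name `transformer_scores()` and its `text`/`classes` arguments are taken from the package's help topics; treat the exact signature and the example inputs as assumptions and consult `?transformer_scores` before use.

```r
# Sketch: zero-shot sentiment scoring with the default
# Cross-Encoder DistilRoBERTa pipeline (assumed defaults).
# Assumes the transforEmotion Python environment has already been
# set up (see "Install Miniconda and activate the transforEmotion
# environment" above).
library(transforEmotion)

# Hypothetical example texts
text <- c(
  "I absolutely loved this film!",
  "The service was slow and disappointing."
)

# Score each text against custom classes; any other zero-shot
# classification model from Hugging Face can be substituted by
# passing its model name.
scores <- transformer_scores(
  text = text,
  classes = c("joy", "anger", "sadness", "neutral")
)
```

Each element of the result contains the class probabilities for one input text, so the highest-scoring class can be taken as that text's predicted emotion.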