Vidmob’s AI models are built by analyzing creative elements from static and video assets. The technology examines aspects such as objects, voiceover, color schemes, text placement and wording, imagery, pacing, and other components that make up a creative asset. We combine this data with performance indicators through several stages.
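To illustrate what "combining creative data with performance indicators" might look like in practice, here is a minimal sketch in Python. The record fields (objects, cuts_per_second, ctr, and so on) are hypothetical placeholders chosen for this example, not Vidmob’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class CreativeFeatures:
    """Hypothetical record of elements extracted from one creative asset."""
    asset_id: str
    objects: list[str]           # detected objects, e.g. ["person", "logo"]
    dominant_colors: list[str]   # color scheme as hex codes
    on_screen_text: list[str]    # extracted copy and its wording
    has_voiceover: bool
    cuts_per_second: float       # rough pacing signal for video assets

@dataclass
class PerformanceRecord:
    """Hypothetical performance indicators for the same asset."""
    asset_id: str
    impressions: int
    clicks: int

    @property
    def ctr(self) -> float:
        return self.clicks / self.impressions if self.impressions else 0.0

def join_features_with_performance(features, performance):
    """Pair each asset's creative features with its performance indicators,
    producing the combined records that later stages train on."""
    perf_by_id = {p.asset_id: p for p in performance}
    return [(f, perf_by_id[f.asset_id]) for f in features if f.asset_id in perf_by_id]
```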
Data Collection and Cleaning: We gather a wide array of creative assets and performance data from multiple sources. The data spans a broad spectrum of demographics, geographies, and cultural backgrounds, which helps create globally relevant data sets. We implement measures to identify and mitigate biases in our data sets, including reviewing data for diversity and inclusiveness. We then clean and standardize that data to ensure models are trained on high-quality, relevant data.
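A minimal sketch of what such a cleaning and bias-review pass could look like, assuming a tabular performance feed with hypothetical column names (asset_id, region, impressions, clicks); the actual pipeline is not described in this level of detail.

```python
import pandas as pd

REQUIRED_COLUMNS = ["asset_id", "region", "impressions", "clicks"]

def clean_performance_data(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative cleaning pass: dedupe, drop incomplete rows, standardize labels."""
    df = df.drop_duplicates(subset="asset_id")
    df = df.dropna(subset=REQUIRED_COLUMNS)
    # Standardize region labels so geographies aggregate consistently.
    df["region"] = df["region"].str.strip().str.upper()
    # Derive a rate that is comparable across campaigns of different sizes.
    df["ctr"] = df["clicks"] / df["impressions"]
    return df

def audit_representation(df: pd.DataFrame, column: str = "region") -> pd.Series:
    """One simple diversity check: the share of records per demographic or
    geographic group. Heavily skewed counts flag a potential sampling bias."""
    return df[column].value_counts(normalize=True)
```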
Model Training, Testing, and Refinement: Vidmob uses a mix of Large Language Models (LLMs), Large Vision Models (LVMs), and vision-focused models to bring contextual information into the data; these models are critical for effective detection of objects, text, and other characteristics of a creative. Models are trained using state-of-the-art machine learning algorithms on a diverse set of creative assets and performance metrics, then rigorously tested against separate, held-out data sets to ensure reliability. Testing includes evaluating the models’ predictions against known outcomes and adjusting as needed. Feedback loops are established on all models to refine them over time.
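To make the held-out evaluation step concrete, here is a generic sketch using synthetic data and scikit-learn. This stands in for the general technique (train on one split, score predictions against known outcomes on another); it is not Vidmob’s actual model stack, which, per the text above, mixes language and vision models.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic placeholder data: rows are creative-feature vectors, labels mark
# whether an asset met a performance benchmark (the "known outcomes" above).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Hold out a separate data set for testing, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Evaluate predictions against known outcomes; a drop in this score is the
# kind of signal that would feed a refinement loop (retraining, feature review).
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out ROC AUC: {auc:.3f}")
```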
Ethical Use: We are committed to the ethical use of AI. This includes transparent practices, respect for privacy, adherence to regulatory standards, and a peer review process to validate AI results. We engage with stakeholders to ensure our AI tools are used responsibly and for the benefit of all parties involved.