Apple has trained an LLM to efficiently understand long-form video
Apple researchers have developed an adapted version of the SlowFast-LLaVA model that beats larger models at long-form video analysis and understanding. Here's what that means.

The nerdy bits

Very basically, when an LLM is trained to also understand video, it learns to split videos into frames, apply computer vision to extract…
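To make that frame-splitting step concrete, here is a minimal Python sketch of how a video-LLM pipeline typically samples frames before handing them to a vision encoder. It is not Apple's implementation; the 32-frame budget, the use of OpenCV, and the vision-encoder step mentioned in the comments are illustrative assumptions.

```python
# Minimal sketch of the "split video into frames" step of a video LLM
# pipeline. Assumes OpenCV (cv2) and NumPy are installed.
import cv2
import numpy as np

def sample_frames(video_path: str, num_frames: int = 32) -> list:
    """Uniformly sample a fixed number of frames from a video file."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), num_frames, dtype=int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            # Convert BGR (OpenCV default) to RGB for downstream models.
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames

# In a typical video LLM, each sampled frame would then go through a
# vision encoder (for example, a CLIP-style image model), and the
# resulting features would be projected into the language model's
# token space. That projection is what lets the LLM "read" the video.
```

The key trade-off this sketch illustrates is the frame budget: sampling more frames captures more of a long video but produces more tokens for the LLM to process, which is exactly the efficiency problem long-form video models have to manage.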