I don't know enough about the intricacies of the differences between AI text models and audio-only models. Though I do know we already have audio-only models that work in basically the same way.
I guess the next step would be associating those sounds with the dolphins' actions.
Yeah, but we're already trying to do this, and I'm not sure how the AI step really helps. We can already hear dolphins, isolate specific noises, and associate them with actions, yet we still haven't gotten very far. A machine that can replicate those noises without performing the actions sounds significantly less helpful than watching an actual dolphin.