MIT researchers have found that adversarial training can improve the perceptual straightness of computer vision models, making them process visual input more like humans do. Perceptually straight representations let models better predict how objects move, which could improve the safety of autonomous vehicles. Adversarially trained models are also more robust, retaining a stable representation of an object despite slight changes in the image.
The researchers discovered that this specific training method helps computer vision models learn more perceptually straight representations, as humans do. Training involves showing a machine-learning model millions of examples so it can learn a task. During training, nodes within the model develop internal activations that represent, say, "dog," which let the model detect a dog in any image of one. Perceptually straight representations keep that "dog" representation stable when there are small changes in the image, which makes the model more robust.
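The notion of a stable internal representation can be illustrated with a minimal sketch. This is not the researchers' code: the random linear "feature extractor" below is a hypothetical stand-in for a layer of a vision model, used only to show how one might compare representations of an image before and after a small pixel-level change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in "model": a fixed random linear feature extractor.
# In a real vision model, these would be the activations of an internal layer.
W = rng.normal(size=(16, 64))

def features(image):
    """Map a flattened image to a 16-dimensional representation."""
    return W @ np.ravel(image)

def cosine(a, b):
    """Cosine similarity between two representation vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

image = rng.normal(size=64)                       # toy 64-pixel "image"
perturbed = image + 0.01 * rng.normal(size=64)    # tiny pixel-level change

# A perceptually stable representation keeps these two feature vectors close.
print(cosine(features(image), features(perturbed)))  # close to 1.0
```

The closer the similarity stays to 1.0 under small perturbations, the more stable, and hence robust, the representation is in this toy setting.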
To determine whether different types of computer vision models straighten the visual representations they learn, the researchers fed each model frames of a video and examined its representations at different stages of processing. One training method they studied was adversarial training, which involves subtly modifying images by slightly changing each pixel. A human wouldn't notice the difference, but these minor changes can fool a machine into misclassifying the image. Adversarial training makes the model more robust, so it isn't tricked by these manipulations.
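One common way to quantify straightening, sketched below, is to measure the curvature of a model's representation trajectory across video frames: the average angle between consecutive steps of the trajectory. This is an illustrative numpy sketch, not the paper's actual metric or code; the toy 2-D "representations" stand in for a model's high-dimensional activations on successive frames.

```python
import numpy as np

def trajectory_curvature(reps):
    """Mean angle (radians) between consecutive difference vectors of a
    representation trajectory. 0 means a perfectly straight path; larger
    values mean the representation bends more from frame to frame."""
    reps = np.asarray(reps, dtype=float)
    diffs = np.diff(reps, axis=0)                          # steps between frames
    diffs /= np.linalg.norm(diffs, axis=1, keepdims=True)  # unit step directions
    cos = np.clip(np.sum(diffs[:-1] * diffs[1:], axis=1), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))

# A straight trajectory: representations move along a single line.
straight = [[t, 2.0 * t] for t in range(5)]
# A curved trajectory: representations bend at every step (arc of a circle).
curved = [[np.cos(t), np.sin(t)] for t in np.linspace(0.0, np.pi, 5)]

print(trajectory_curvature(straight))  # ≈ 0.0
print(trajectory_curvature(curved))    # noticeably larger
```

Under a metric like this, a model whose representations of a video are "straighter" yields a lower curvature, which is what makes future frames easier to extrapolate.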
“To me, it is amazing that these adversarially trained models, which have never even seen a video and have never been trained on temporal data, still show some amount of straightening,” DuTell says.
Source: Real Estate Daily Report (realestatedailyreport.net)