AI computer mimics how humans visualize objects

Researchers from the UCLA Samueli School of Engineering and Stanford have demonstrated a computer system that can discover and identify the real-world objects it "sees" based on the same method of visual learning that humans use. Even today's best computer vision systems cannot build a full picture of an object after seeing only parts of it, and they can be fooled when the object appears in an unfamiliar setting. The engineers are aiming to give computer systems those abilities, so that, like a human, a system can recognize a dog even when the animal is hiding behind a chair and only its paws and tail are visible.

The approach consists of three broad steps. First, the system breaks an image into small chunks, which the researchers call "viewlets." Second, the computer learns how these viewlets fit together to form the object in question. Finally, it looks at what other objects are in the surrounding area and whether they help describe and identify the primary object.
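For readers who like to see ideas in code, here is a minimal sketch of what those three steps might look like in practice. It is only an illustration under loose assumptions: the function names, the fixed-size patch "viewlets," and the co-occurrence and distance heuristics below are stand-ins for illustration, not the researchers' published method.

```python
import numpy as np

# Illustrative sketch only: the names and heuristics here are assumptions,
# not the actual system described in the article.

def extract_viewlets(image, patch_size=8):
    """Step 1: break an image into small square chunks ("viewlets")."""
    h, w = image.shape[:2]
    viewlets = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            viewlets.append(image[y:y + patch_size, x:x + patch_size])
    return viewlets

def learn_composition(viewlet_sets, similarity_threshold=0.9):
    """Step 2: learn which viewlets tend to go together across images.

    Approximated here by cosine similarity between raw patches; the real
    system learns a much richer compositional model of the object.
    """
    features = [v.reshape(-1).astype(float)
                for image_viewlets in viewlet_sets
                for v in image_viewlets]
    features = np.array(features)
    features /= np.linalg.norm(features, axis=1, keepdims=True) + 1e-8
    # Patches that look alike across many images hint at recurring parts
    # that compose into the same object.
    similarity = features @ features.T
    return similarity > similarity_threshold

def contextual_relevance(primary_center, other_centers, max_distance=100.0):
    """Step 3: score surrounding objects by how near they are to the primary
    object, as a crude stand-in for judging whether context is relevant."""
    px, py = primary_center
    scores = {}
    for name, (ox, oy) in other_centers.items():
        distance = np.hypot(px - ox, py - oy)
        scores[name] = max(0.0, 1.0 - distance / max_distance)
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    images = [rng.random((32, 32)) for _ in range(3)]
    viewlet_sets = [extract_viewlets(img) for img in images]
    composition = learn_composition(viewlet_sets)
    context = contextual_relevance((50, 50), {"chair": (60, 55), "car": (400, 300)})
    print(f"{len(viewlet_sets[0])} viewlets per image")
    print(f"composition matrix shape: {composition.shape}")
    print(f"context scores: {context}")
```

In the actual system, both the way the viewlets compose into an object and the use of surrounding context are learned rather than hard-coded, which is what lets it piece together an object it has only partially seen.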

The researchers tested the system on roughly 9,000 images, each showing people alongside other objects. The platform built a detailed model of the human body without external guidance and without the images being labeled. The engineers ran similar tests with images of motorcycles, cars and airplanes, and in every case their system performed as well as or better than traditional computer vision systems that had been developed with many years of training.

Want to know more about this awesome research? Just follow this link!

Like us on Facebook!
- Fun Facts and Trivia brought to you by the Research and Development Committee -
Thank you and God bless!