July 3, 2016
During a disaster, first responders benefit from one thing above all else: accurate information about the environment they are about to enter. Foreknowledge of a specific building's layout, or the locations of impassable obstacles, fires or chemical spills, can often be the only thing standing between life and death for anyone trapped inside. Today first responders have to rely on their own experience and observations, or perhaps a drone sent in ahead of them beaming back an unreliable 2D video feed. Neither option is ideal, and sadly many victims in a disaster may perish before they are found or before the area is deemed safe enough to enter.
But a team at the Defense Advanced Research Projects Agency (DARPA) has created technology that can give first responders the option of exploring a disaster area without putting themselves at any risk. Virtual Eye is a software system that can capture video feeds and convert them into a real-time 3D virtual reality experience. It is made possible by combining cutting-edge 3D imaging software, powerful mobile graphics processing units (GPUs) and the video feeds of two cameras (any two cameras). This gives first responders, whether soldiers, firefighters or anyone else, the ability to virtually walk through a real environment such as a room, bunker or any enclosed area without needing to physically enter it.
“The question for us is, can we do more with the information we have? Can we extract more information from the cameras we are using in these situations? Understanding what we see is important to making the right decisions on the battlefield. We can create a 3D image by looking at the differences between two images, understanding the differences and fusing them together,” explained Trung Tran, the program manager leading Virtual Eye’s development at DARPA’s Microsystems Technology Office.
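DARPA has not published the details of its algorithm, but the "differences between two images" Tran describes are what stereo vision calls disparity, and the standard relationship for two rectified cameras gives depth from that disparity. As a rough illustration (the focal length and baseline values below are made up for the example, not taken from Virtual Eye):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a scene point seen by two rectified, side-by-side cameras.

    focal_px:     camera focal length, in pixels
    baseline_m:   distance between the two camera centers, in meters
    disparity_px: horizontal shift of the point between the two images, in pixels
    """
    return focal_px * baseline_m / disparity_px

# A feature that appears 35 px apart to two cameras 0.5 m apart,
# with a 700 px focal length, lies 700 * 0.5 / 35 = 10 m away.
print(depth_from_disparity(700.0, 0.5, 35.0))  # 10.0
```

Nearby objects shift a lot between the two views and distant objects barely at all, which is why two viewpoints are enough to recover a depth map of the scene.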
Users of Virtual Eye would be able to take note of the layout, visualize any hazards, select the optimal path of entry and potentially locate survivors, all completely risk free. Two drones or robots would be inserted into a questionable environment, each outfitted with a camera. The cameras would be strategically placed at different points in the room with opposing viewpoints. Both video feeds would then be fused together by the Virtual Eye software and converted into a 3D view, with the 3D imaging software extrapolating any missing data so that the real-time virtual reality feed is complete.
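Fusing two feeds starts with finding, for each patch of one image, the matching patch in the other. The sketch below is a deliberately naive block-matching search, not DARPA's pipeline (which runs far more sophisticated matching on GPUs in real time), but it shows the core idea: slide a window along the corresponding row of the second image and keep the offset with the lowest difference.

```python
import numpy as np

def block_match_disparity(left: np.ndarray, right: np.ndarray,
                          block: int = 5, max_disp: int = 8) -> np.ndarray:
    """Naive per-pixel disparity between two rectified grayscale images.

    For each pixel in `left`, compare a block x block patch against patches in
    `right` shifted left by 0..max_disp-1 pixels, and record the shift with the
    smallest sum-of-absolute-differences cost.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.sum(np.abs(patch - cand))
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic test: the "right" view is the "left" view shifted by 5 px,
# so the recovered disparity should be 5 everywhere away from the edges.
rng = np.random.default_rng(0)
left = rng.random((40, 60))
right = np.roll(left, -5, axis=1)
disp = block_match_disparity(left, right)
print(disp[20, 20])  # 5.0
```

Converting each disparity to a depth and back-projecting every pixel yields the 3D point cloud that a system like Virtual Eye can then render from any viewpoint.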
Here is some video of the Virtual Eye system in action:
The Virtual Eye system works thanks to NVIDIA mobile Quadro and GeForce GTX GPUs, which are small enough to be portable but powerful enough to generate the virtual reality view. The NVIDIA GPUs were chosen specifically because they have the muscle to accurately stitch the two video feeds together and extrapolate the 3D data in real time while still fitting inside a laptop. Currently the Virtual Eye system is only capable of combining data from two cameras, but Tran expects that to change soon. The DARPA team hopes to have a new demo version capable of combining up to five different camera feeds by next year.
While the system was created specifically for military, emergency and battlefield applications, as with much of the technology developed by DARPA it has plenty of potential real-world applications as well. The technology could be used to broadcast sporting events or live performances in streaming 3D virtual reality with just a handful of cameras. It could also allow users to visit locations anywhere in the world, from museums to Mount Everest, without needing to leave their homes. Discuss this software over in the DARPA 3D Imaging Virtual Eye forum at 3DPB.com.