Posts Tagged ‘depth’
Google has added some new development hardware to its Project Tango 3D depth-sensing mobile project – a tablet development kit made its debut today, boasting a new NVIDIA Tegra K1 processor, 4GB of RAM, 128GB of storage, a 1080p display, stock Android 4.4, WiFi, Bluetooth LE and 4G LTE alongside its two cameras and rear depth sensor for the special depth-sensing magic. The tablet…
Sony’s ratcheted up its water-resistant device tech a notch with the launch of the Xperia ZR, a new 4.6-inch, 720p Android smartphone that’s waterproof to 1.5m (5 feet). Sony claims the new addition to the Xperia Z line will let you film your snorkeling adventures in full HD quality with HDR in video or 13-megapixel stills thanks to the Exmor RS image sensor. The handset also boasts a Snapdragon S4 Pro quad-core 1.5GHz CPU, 2GB RAM, LTE, NFC, Sony’s Walkman album and movie apps and an OptiContrast OLED screen with Bravia tech to reduce glare “even in bright sunlight.” There’s no word yet on pricing or availability, but as soon as we hear more, we’ll try to prep you ahead of that next beach-bound holiday. Meanwhile, you can check the galleries, PR and video after the break for more.
Source: Sony (Facebook)
The slave robot used in the experiments is a dual-arm (7 DOFs per arm) Motoman DIA10, controlled at the low level by an NX100 controller. The references are produced …
The CamBoard Pico Wants to Take On Leap Motion, Offers Full Depth Motion Control in a Smaller Package
Gesture control is heating up, with a host of new entrants finally following Microsoft’s example with the Kinect, including Leap Motion and MYO. A German company called pmdtechnologies has also been in the space for a few years (they’ve been working on their technology for 10 years, in fact), and their latest reference design, the CamBoard pico, is a 3D depth sensor based on what pmd calls its “time-of-flight” technology, which delivers extremely accurate depth measurement for gesture control of PCs.
The CamBoard pico follows the CamBoard nano, the company’s previous reference design, and improves on pmd’s existing depth sensor by providing more precise, touch-free motion control. It works by offering a “3D interaction volume,” made up of a point cloud, which pmd says means it can be more accurate than Leap Motion, which recognizes only fingertip points to help it determine relative spatial distance.
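pmd doesn’t disclose the exact math behind its sensor, but continuous-wave time-of-flight generally recovers distance from the phase shift of modulated infrared light reflected off the scene; a minimal sketch (the 30 MHz modulation frequency is an assumption, not a pmd spec):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Convert a measured phase shift of modulated IR light into distance.

    Continuous-wave time-of-flight sensors emit light modulated at
    mod_freq_hz and measure the phase shift of the reflection; the round
    trip covers twice the distance, hence the extra factor of 2.
    """
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# Example: a quarter-cycle phase shift at a hypothetical 30 MHz modulation
d = tof_depth(math.pi / 2, 30e6)
print(round(d, 3))  # ≈ 1.249 m
```

The same relation also sets the sensor’s unambiguous range: once the phase wraps past a full cycle, distances alias, so higher modulation frequencies trade range for precision.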
pmd offers its designs for sale to consumer electronics companies and other customers (it produces a great deal of car-safety and industrial-robotics sensors, for instance) to help them build their own motion-sensing devices, meaning the technology found in the CamBoard pico reference design could find its way into modules integrated into notebooks, webcams or dedicated motion controllers from OEM brands.
The gesture control market is definitely picking up steam, and that means companies like pmd, which have been around for a long time but have mostly served niche industries, will get a chance to move to the foreground. With a brand-new mode of interaction, quality of experience is the key to stickiness, however, so both veteran and rookie players here will sink or swim based on how pleasurable or frustrating using their devices proves to be.
The hook of the Lytro light-field camera is being able to change the focus of pictures after they’ve been taken, but the small gadget is hobbled by its high price and low-resolution photos. As it turns out, there is a way to recreate the effect with an ordinary DSLR. The Chaos Collective breaks down the exact methodology on its website, describing how people can record video while slowly adjusting the focus over several seconds instead of shooting a series of images.
The team behind the project then managed to recreate the Lytro effect by writing a tool that detects the different focus areas in videos, breaking them down into a 20×20 grid that can be freely clicked. Anyone can create the embeds too: after uploading short videos using the…
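The Chaos Collective hasn’t published its detector’s internals, but the idea can be sketched: as the recorded video sweeps through focus, score each cell of the overlay grid for sharpness in every frame, then map a click on a cell to the frame where that cell is sharpest (function names below are illustrative):

```python
import numpy as np

def sharpness(patch: np.ndarray) -> float:
    """Focus measure: variance of the image gradient (sharper = higher)."""
    gy, gx = np.gradient(patch.astype(float))
    return float((gx**2 + gy**2).var())

def best_frames(frames: list, grid: int = 20) -> np.ndarray:
    """For each cell of a grid x grid overlay, the index of the video frame
    in which that cell is sharpest. Clicking cell (r, c) in the embed then
    just seeks the video to frame best[r, c]."""
    h, w = frames[0].shape
    best = np.zeros((grid, grid), dtype=int)
    for r in range(grid):
        for c in range(grid):
            ys = slice(r * h // grid, (r + 1) * h // grid)
            xs = slice(c * w // grid, (c + 1) * w // grid)
            scores = [sharpness(f[ys, xs]) for f in frames]
            best[r, c] = int(np.argmax(scores))
    return best
```

Because the focus sweep is slow and monotonic, the selected frame index is effectively a per-cell depth estimate, which is why the result feels like a Lytro refocus.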
Top view of a 3D map generated by walking through the lab carrying the depth camera underlying Kinect. The system automatically estimates the motion of the camera and detects loop closures, which help it to globally align the camera frames. No external information or sensor is used. Click www.cs.washington.edu to download a research paper describing the approach. Collaboration between Intel Labs Seattle and University of Washington Department of Computer Science & Engineering.
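The full pipeline (frame-to-frame alignment plus loop-closure detection) is described in the linked paper, but the basic step of turning a depth pixel into a 3D map point is plain pinhole back-projection; a sketch using illustrative Kinect-like intrinsics (the focal lengths and principal point below are assumptions, not values from the paper):

```python
import numpy as np

# Approximate intrinsics for a 640x480 depth camera (illustrative values)
FX, FY = 570.0, 570.0   # focal lengths in pixels
CX, CY = 320.0, 240.0   # principal point

def back_project(u: int, v: int, depth_m: float) -> np.ndarray:
    """Map a depth pixel (u, v) with range depth_m to a 3D point in the
    camera frame. Transforming points like this by each frame's estimated
    camera pose, and merging the results, yields the global map."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

p = back_project(320, 240, 2.0)  # pixel at the optical centre, 2 m away
print(p)  # → [0. 0. 2.]
```

Loop closures matter because small errors in the per-frame pose estimates accumulate; recognizing a previously visited place adds a constraint that lets the system re-align the whole trajectory.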
Color and 3D Depth Sensing with Kinect on Win7. To learn more and for the latest updates check here: nuigc.com thecodelabs.com
Samsung, or rather Samsung’s Advanced Institute of Technology, has created what they claim is the first CMOS sensor that can collect both visible light data (which you’d use for a normal digital image) and depth data (like a Kinect). It’s accomplished by mixing in depth-sensing pixels with the RGB photosites normally found on such sensors. It was presented at ISSCC 2012 and reported by Tech-On.
The technology could be extremely influential: a small sensor that is able, with one lens, to determine the distance and size of objects it sees — the applications are extremely diverse. It could power autofocus, track gestures or individuals, or help determine the device’s position.
CMOS sensors are normally made up of a great number of light-sensitive photosites (or pixels) with filters on them to make them sensitive to only a certain range of wavelengths. Samsung has added a new type of pixel in there, four times as large as a normal one but able to detect depth data. The “Z” pixels (so the sensor could be called an RGBGZ sensor) use an established method for detecting distance by analyzing the differences between near-infrared light rays hitting the sensor.
The sensor, strictly speaking, doesn’t capture an RGB and depth image at the same time due to wavelength filter restrictions, but it can effectively time-share the available resources to make it appear as though that’s the case. It captures a 1280×720 color image and a 480×360 depth image. Interestingly, those two resolutions have different aspect ratios (16:9 versus 4:3), though why that is isn’t clear.
At the moment, the sensor is strictly a prototype and is unlikely to make its way into devices. However, improvements are already planned, for example a backside-illuminated structure and the inevitable shrinking of the pixel pitch. The size of the sensor was not mentioned, but a few simple calculations suggest it is somewhere between 1/3″ and 1/2″ diagonally, perhaps around 5×3.5mm — about the size of a normal point-and-shoot sensor, perfectly capable of being put in a camera or phone.
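Those “simple calculations” aren’t shown, but the back-of-the-envelope version is easy to reproduce from the 1280×720 color array given an assumed pixel pitch (the pitches below are hypothetical; the estimate also ignores the larger interleaved Z pixels, which would enlarge the actual die):

```python
import math

H_PIX, V_PIX = 1280, 720  # colour resolution of the prototype

def sensor_size_mm(pitch_um: float):
    """Active-area width, height and diagonal (mm) for an assumed pitch."""
    w = H_PIX * pitch_um / 1000.0
    h = V_PIX * pitch_um / 1000.0
    return w, h, math.hypot(w, h)

for pitch in (2.8, 3.9):  # hypothetical pitches in micrometres
    w, h, d = sensor_size_mm(pitch)
    print(f"{pitch} um pitch -> {w:.1f} x {h:.1f} mm, diagonal {d:.1f} mm")
```

A pitch toward the larger end of that range lands close to the ~5×3.5mm figure quoted above, which is consistent with the 1/3″–1/2″ guess.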
First tests of working camera sensors. Both Color and Depth sensing on Win7. To learn more and for the latest updates check here: nuigc.com thecodelabs.com
Naked-eye 3D displays, even large-sized models, are nothing special anymore, but they usually have a common problem: the 3D effect when viewing pictures isn’t as strong as with displays that require users to wear glasses. Professor Kakeya from Tsukuba University in Japan is trying to solve the problem.
The way his 3D display works is actually pretty simple: it uses multiple layers and lenses to boost the sense of depth perception. Professor Kakeya explains:
It forms images of objects at the front toward the front, and objects at the back toward the back. When objects at the front are in focus, those at the back are blurred, and when you’re looking at objects at the back, those in front are blurred. So a feature of this display is that it reproduces focal depth.
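The article doesn’t spell out how brightness is divided between the layers, but multi-layer displays of this kind are commonly driven with depth-fused rendering, where each pixel’s luminance is split between the front and back panels according to its depth; a minimal sketch (the function name and the linear weighting are illustrative assumptions):

```python
import numpy as np

def split_layers(image: np.ndarray, depth: np.ndarray):
    """Depth-fused rendering sketch: divide each pixel's luminance between
    a front and a back panel according to its normalised depth
    (0 = front, 1 = back). Viewed in line, the eye fuses the two panels
    into one image whose apparent depth varies between them."""
    w = np.clip(depth, 0.0, 1.0)
    front = image * (1.0 - w)
    back = image * w
    return front, back

img = np.full((2, 2), 100.0)
dep = np.array([[0.0, 1.0], [0.5, 0.25]])
front, back = split_layers(img, dep)
print(front + back)  # the two layers always sum back to the input image
```

Because the panels sit at physically different distances, a pixel rendered mostly on the front panel really is optically nearer, which is what preserves the focal-depth cue the professor describes.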
The resolution in the current prototype stands at just 200×200, but another cool feature is that it allows you to view pictures in 3D not only when you move your head horizontally, but also when you move it vertically.
This video, shot by Diginfo TV, provides more insight: