Posts Tagged ‘cameras’
Welcome to Engadget’s holiday gift guide! Head back to our hub to see the rest of the product guides as they’re added throughout the month. With smartphones like the Nokia Lumia 1020 offering outstanding photo performance, you may wonder why you’d want a standalone camera at all. Leave it to the …
We caught a glimpse of Sony’s A7 camera series just a day ago, but the low-resolution image didn’t exactly show much. Thankfully, Digicam Info has just posted two leaked press shots that reveal considerably more of the full-frame mirrorless shooters. The images support rumors of a built-in …
Earlier this week I traveled to Microsoft’s Mountain View campus to play with the company’s new Kinect sensor. While there I met with a few of the team’s engineers to discuss how they had built the new device.
Up front, two things: The new Kinect sensor is far cooler than I expected. Also, I touched an Xbox One.
The story of the Kinect device, across both its first and second generations, has been a favorite Microsoft narrative for some time: the device fuses the work of its product teams and its basic research group in a way that demonstrates the potential synergy between the two.
The new Kinect sensor is a large improvement on its predecessor. Technically it has a larger field of vision, more total pixels, and a higher resolution that allows it to track the wrist of a child at 3.5 meters, Microsoft told me. I didn’t have a kid with me, so I couldn’t verify that directly.
It also contains a number of new vision modes that the end user won’t see, but are useful for developers who want to track the human body more precisely and with less interference. They include a depth mode, an infrared view, and new body modeling tools to track muscle use and body part orientation.
In its depth image mode, the sensor acts as a radar of sorts: each of the 22,000 pixels that the Kinect sensor supports records data independently. The result is a surprisingly crisp mapping of the room you are in.
The new Kinect also contains a camera setting that is light invariant, meaning it works the same whether there is light in the room or not. In practice this means you can Kinect in the dark, and that light pollution – say, aiming two floodlights directly at the sensor – doesn’t impact its performance. I did get to test that directly, and it worked as promised. No, I don’t know the candlepower of the light array we used, but it was painful to stare into directly.
So, developers can now accept motion data from the Kinect without needing to worry about the user being properly lit, or having their data go to hell if someone turns on the overhead light or the sun sets. The new Kinect also supports new joints in its skeletal tracking, in case you need to better watch a user’s hands move about.
The smallest object the first Kinect could detect was 7.5 centimeters. The new Kinect, despite a 60 percent larger field of view, can see things as small as 2.5 centimeters. And it can track up to six people at once, up from two before.
The first Kinect became the fastest-selling consumer electronics device in history. Its existence helped keep the Xbox 360 relevant even as the console aged. Microsoft is releasing the new Kinect sensor with its upcoming Xbox One; both go on sale November 22 and will compete with Sony’s soon-to-be-released PlayStation 4.
For a single generational update, the new Kinect feels like worthy progress over its predecessor. I sat down with Microsoft’s Sunil Acharya, Travis Perry, and Eyal Krupka to trace the origins of the new hardware’s design. It’s a short story of collaboration, akin to what came together for the original Kinect device.
Most basically, Microsoft wanted to place a “time-of-flight” camera into the new Kinect. Such a device works by measuring the time it takes for the light it emits to return. Given that light is a bit quick, and that the new Kinect needed to absorb a massive field of data in real time, challenges cropped up.
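A back-of-the-envelope sketch of the time-of-flight principle (in Python, and emphatically not Microsoft's actual implementation) shows why the timescales involved are so demanding:

```python
# Time-of-flight ranging: distance = (speed of light x round-trip time) / 2.
C = 299_792_458.0  # speed of light, meters per second

def distance_from_round_trip(seconds: float) -> float:
    """Distance to a surface, given the round-trip time of an emitted pulse."""
    return C * seconds / 2.0

# A surface 3.5 meters away (the range at which Microsoft says the new
# Kinect can track a child's wrist) returns light in roughly 23 nanoseconds.
# That is the timescale the sensor has to resolve, per pixel, in real time.
round_trip = 2 * 3.5 / C
print(round(round_trip * 1e9, 1), "ns")
print(distance_from_round_trip(round_trip), "m")
```

Multiply that per-pixel measurement across the full sensor and a 30-plus frames-per-second feed, and the "massive field of data" problem becomes obvious.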
Two of our aforementioned Softies, Eyal from Microsoft Research’s Israel group, and Travis from the mother corporation’s Architecture and Silicon Management team, collaborated on turning time-of-flight from a more academic exercise into a commercial product. Input came from what Microsoft described to me as “multiple groups” to improve the camera.
Working as a cross-team group, the engineers essentially solved the time-of-flight problem, but that led to another set of issues: data overload and blur.
In short, with 6.5 million pixels needing processing each second, and a requirement to keep processing loads low to ensure strong Xbox One performance, the Kinect group was pretty far from out of the soup. Algorithms were then developed to reduce the processor hit, to ‘clean up’ edge data so that objects in the distance don’t melt into each other, and to cut down on motion blur.
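Microsoft didn't detail its edge-cleanup algorithms, but one common problem with time-of-flight data is "flying pixels": depth readings along object edges that land between a near and a far surface, which is one way distant objects can appear to melt into each other. The NumPy sketch below shows the general idea of thresholding away steep depth gradients; the threshold value and function are invented for illustration, not taken from the Kinect pipeline.

```python
import numpy as np

def drop_flying_pixels(depth, threshold=0.15):
    """Invalidate depth pixels whose local depth gradient is too steep.

    depth: 2D array of per-pixel distances in meters (0 = invalid).
    Pixels that jump by more than `threshold` meters relative to a
    neighbor are zeroed out, since they likely sit between two surfaces
    rather than on one.
    """
    d = depth.astype(float)
    # Per-axis absolute differences; prepending the first row/column
    # keeps the output the same shape as the input.
    grad_y = np.abs(np.diff(d, axis=0, prepend=d[:1]))
    grad_x = np.abs(np.diff(d, axis=1, prepend=d[:, :1]))
    cleaned = d.copy()
    cleaned[(grad_x > threshold) | (grad_y > threshold)] = 0.0
    return cleaned
```

A filter like this trades a thin band of invalidated edge pixels for a map in which each remaining reading belongs unambiguously to one surface.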
According to Eyal, executing those software tasks was only possible because the camera hardware was “set” early in the process. If the hardware hadn’t been locked, the algorithms would have learned from imperfect or incorrect data sets. “You want those algorithms to learn on the final data, and not on noisy data, or beta data,” he explained.
That hardware is multi-component, including an aggregation piece (Microsoft was vague, but I think it is a separate chip) that collects the sensor data from the Kinect and pools it. Microsoft declined to elaborate on where the “cleaning” process takes place. Given the firm’s stated need to keep processing cycles low for the incoming data, I suspect it at least partially happens on the console itself.
The end result of all of the above is a multi-format data feed for the developer to use in any way they wish. Microsoft spends heavily on the more than 1,000 developers and Ph.D.s that it employs at Microsoft Research who are free to pursue long-term research that isn’t connected to current products. But it does like to share when those lengthy investments lead to knowledge that it applies to commercial devices, such as the Kinect.
What to take from this? Essentially that even before the re-org, Microsoft had at least some functional cross-team collaboration in place. And that a neat device came out of it.
The next challenge for the team? Make it smaller.
The NBA faces a big challenge now that it offers all its player statistics to the public — how does it generate stats that hold the interest of basketball fans? The league’s solution is a multi-year agreement to use Stats LLC’s SportVU motion tracking system in every arena (15 teams had already implemented the technology on their own). As of the 2013-14 season, every NBA arena will have a six-camera setup that creates a steady stream of player data based on ball possession, distance, proximity and speed. The NBA’s website, NBA Game Time and NBA TV will all use the information to expand game stats beyond what we see today with heat maps and specific details on each possession. There’s no telling how useful that extra knowledge will be, but we won’t be shocked if it helps settle a few sports bar arguments.
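Stats LLC hasn't published how SportVU derives its numbers, but distance and speed stats of this kind fall out of simple geometry on timestamped position samples from the camera rig. The helper below is a hypothetical illustration of that derivation, not the league's actual pipeline:

```python
import math

def distance_and_top_speed(samples):
    """Derive two SportVU-style stats from one player's tracking data.

    samples: list of (t_seconds, x_feet, y_feet) court positions, in
    chronological order. Returns (total distance covered in feet,
    peak speed in feet per second).
    """
    total = 0.0
    top_speed = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        step = math.hypot(x1 - x0, y1 - y0)  # straight-line segment length
        total += step
        if t1 > t0:
            top_speed = max(top_speed, step / (t1 - t0))
    return total, top_speed
```

Proximity, possession and heat maps come from the same raw feed: positions per player per frame, aggregated in different ways.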
Via: AP (Yahoo)
We reckoned IFA would be an exceptionally busy show, and now that we’ve combed through all of our coverage and condensed it here, it’s clear the event lived up to our expectations. Sure, the venerable CES may have topped IFA in show floor square footage, but the announcements in Berlin generated perhaps even more excitement than those that came out of Las Vegas in January. A pair of high-profile smartwatches, two titanic smartphones, a duo of lens cameras, 4K displays and a bevy of hands-ons await you in a neat, yet massive, roundup after the break.
Remote cameras are useful to wildlife conservationists, but their closed (or non-existent) networking limits the opportunities for tracking animals around the clock. The Instant Wild project’s cameras, however, are designed to rely on the internet for help. Whenever they detect movement, they deliver imagery to the public through Iridium’s satellite network. Anyone watching the cameras through the Instant Wild iOS app or website becomes an impromptu zoologist; viewers can identify both animals and poachers that dedicated staff might miss. Maintenance also isn’t much of an issue, as each unit is based on a Raspberry Pi computer that can run for long periods on a single battery. The Zoological Society of London currently operates these satellite cameras in Kenya, but there are plans underway to expand their use to Antarctica, the Himalayas, Indonesia and Sri Lanka.
Sony’s Camera Remote API allows WiFi-equipped devices to control its cameras, act as a second screen
This year’s IFA has been rather eventful for Sony: the company unveiled a new handset, some interesting cameras and even a recorder that can turn you into the next Justin Bieber. But lost in the shuffle was the announcement that the Japanese outfit is also releasing its Camera Remote API, albeit in beta. Sony says the idea is to give developers the ability to turn WiFi-ready devices, such as smartphones and tablets, into companions for many of its shooters, acting as a second display or shooting photos and video remotely.
The Camera Remote API will work with new products including the Action Cam HDR-AS30, the HDR-MV1 Music Video Recorder and both DSC-QX lens cameras, as well as older models like the NEX-6, NEX-5R and NEX-5T. That’s good news for current and future owners of any of the above, since third-party apps built on the API could add considerable value to Sony’s cameras.
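Sony's beta documentation wasn't broadly available at the time of writing, but remote-control camera APIs of this kind typically exchange small JSON-RPC-style messages over HTTP on the camera's own WiFi network. The endpoint address and method name in the sketch below are assumptions for illustration, not confirmed values from Sony:

```python
import json
import urllib.request

# Assumed camera address; a real QX-series camera exposes its own
# WiFi access point, and the actual host, port and path may differ.
CAMERA_ENDPOINT = "http://192.168.122.1:8080/sony/camera"

def build_request(method, params=None, req_id=1):
    """Build the JSON body for one remote call in JSON-RPC style."""
    return json.dumps({
        "method": method,       # e.g. trigger the shutter
        "params": params or [],
        "id": req_id,
        "version": "1.0",
    })

def take_picture():
    """POST a shutter-trigger request to the camera (hypothetical method name)."""
    body = build_request("actTakePicture").encode()
    req = urllib.request.Request(
        CAMERA_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response would describe the captured shot
```

The appeal for developers is that a structure this simple works from any WiFi-capable device, which is exactly what makes a second-screen companion app feasible on both Android and iOS.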
TC Droidcast Episode 5: Samsung Galaxy Gear And Note 3, Sony’s Crazy Cameras And The KitKat Crunch Heard Round The World
We’re sure glad the weekly TechCrunch Droidcast falls on a Wednesday, because this was a big one for Android. Samsung and Sony both had events at IFA in Berlin and revealed new hardware, and we’re joined by none other than 9to5Google‘s Seth Weintraub as a special guest this week to break it all down.
The Galaxy Gear smartwatch is probably the most buzzed about news of the week, and the announcement held a few surprises despite early leaks. Samsung also revealed the Galaxy Note 3, with a bigger screen yet smaller footprint, and Sony showed off camera lens accessories for smartphones that make your pocket camera a pro shooter, along with a brand new flagship smartphone.
We also get into Google’s captivating decision to partner with Kit Kat (yes, the candy brand) to secure licensing rights for the name of the next version of Android (4.4), and everyone comes away hungrier than they were before.
We invite you to enjoy weekly Android podcasts every Wednesday at 5:30 p.m. Eastern and 2:30 p.m. Pacific, in addition to our weekly Gadgets podcast at 3 p.m. Eastern and noon Pacific on Fridays. Subscribe to the TechCrunch Droidcast in iTunes, too, if that’s your fancy.
Intro music by Kris Keyser.
Smartphones have cameras. But they’re mostly garbage when compared to a dedicated camera. Besides the Lumia 1020, of course. The cameras on smartphones have tiny image capture sensors and low-quality glass, the sum of which equals pictures that are just good enough — not impressive. It’s convenience over quality.
Enter the Sony QX10 and QX100 lens camera.
This system is more than just a lens. The QX10 and QX100 also pack an image sensor, thus allowing for much higher quality photographs. They simply clip onto a smartphone and communicate wirelessly.
The $250 QX10 features a 1/2.3-inch 18-megapixel sensor paired with an f/3.3-5.9 lens. The $500 QX100 has a high-quality 1-inch 20.2-megapixel Exmor R sensor and an f/1.8-4.9 Carl Zeiss lens. The line is based on fantastic Sony point-and-shoot cameras, with the QX10 looking most like the WX150 and the QX100 grabbing most of the RX100 II’s magic.
The QX10 and QX100 are essentially two-thirds of a camera. Each lens camera clips onto a phone, pairing over NFC and communicating through WiFi; they can also operate on their own as standalone wireless cameras. They have microSD and Memory Stick slots, tripod mounts, and optional clips for the back of phones. The remaining third is your phone, acting as the viewfinder, shutter trigger and backup storage. And that makes a lot of sense.
Think about it: point-and-shoot cameras still sell in large numbers because they hit a sweet spot of portability and quality. What they still lack is communication. Pictures are stored on a memory card, and the photos need to be dumped to a computer. That’s a hassle, and it doesn’t make for timely sharing.
With Sony’s new system, users have the ability to take high quality pictures and then share them through their smartphone. It’s the best of both worlds.
The QX10 and QX100 work with both Android and iOS phones. Sony built the products to be device-agnostic, increasing their shelf life and mass appeal.
As I pointed out yesterday, this has been done before. There are countless examples on Alibaba and eBay. Will.i.am and Fusion Garage (and CrunchPad) founder Chandra Rathakrishnan announced the i.am+ foto.sosho V.5 late last year. Thankfully, it doesn’t appear to have ever hit the market; it was ludicrous and smelled of vapor from the start.
Sony likely doesn’t expect this product to be a mass hit, but there is definitely a market for it. Now Sony just has to convince consumers to ditch the pocket shooter, and carry a lens instead.