iPad Pro could be Apple’s next device to use Face ID

It’s safe to assume that the face recognition system in the iPhone X will eventually reach other devices, but which ones are next in line? KGI’s Ming-Chi Kuo might have an idea. The historically accurate analyst expects the next generation of the iPad Pro to adopt the TrueDepth camera and, by extension, Face ID. This would unify the experience across Apple’s mobile devices, the analyst says, and would encourage developers, who would know their face recognition features could reach multiple Apple devices rather than a single handset. The new iPads would ship sometime in Apple’s fiscal 2018, which ends in September of next year.

There’s another question to be answered: if this happens, will the Touch ID fingerprint reader go away? It’s not so clear. Apple clearly took advantage of eliminating the home button to expand the iPhone X’s screen size, but that’s not as necessary on devices that already have large displays. Also, Apple has typically kept larger bezels on the iPad due to its size — you need at least some space for your thumbs on a device that you can’t easily hold in one hand. We’d add that it could complicate multitasking, since Apple already uses an upward swipe on the iPad’s bottom edge to bring up the app dock. How would you handle that while also using a swipe to go to the home screen?

Whatever happens, it would make sense for the iPad Pro to get face recognition. Apple has made a habit of bringing relatively new features to its higher-end iPads (such as upgraded displays and the Smart Connector), and TrueDepth might be one more reason to spring for a Pro instead of sticking to the base model. And if Apple is going to continue pushing augmented reality, it’ll want tablets that are particularly well-suited to the task, regardless of which camera you’re using.

Source: 9to5Mac


Levi’s Google-powered smart jacket goes on sale next week

Earlier this year, we discussed how Levi’s was working on a smart jacket in connection with the Google Advanced Technology and Projects (ATAP) group’s Project Jacquard. Now, after much anticipation, the jacket is ready and will be available on Levi.com (and in some Levi’s stores) on October 2nd. If you’re really eager to take a look, you’ll find it in some boutiques on Wednesday. It will set you back $350.

The question is whether this jacket is really worth the cost — after all, that’s a lot for denim. The key to the Levi’s Commuter jacket is a snap tag on the left sleeve cuff that lets you interact with your phone right on the jacket using gestures, LEDs and haptic feedback. It’s not entirely unobtrusive — from the pictures, it appears to protrude from the sleeve quite a bit — but it’s pretty small. If you want a low-key, simple way to interact with your phone (and you love denim jackets), you may want to check it out. You can see our early review here.

The jacket is primarily aimed at bike commuters, and it would work well for this group. You can use the Jacquard app, available for iOS and Android, to customize what exactly your jacket can do. You can receive messages, send calls to voicemail, hear your next direction while biking, control your music and more. The tag charges via USB and the battery lasts for about two weeks. It’s removable, so the jacket is, presumably, washable.
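For a concrete sense of that kind of customization, here is a minimal sketch of gesture-to-action bindings in the spirit of what the Jacquard app exposes. Google has not published this as a public API, so every name below is hypothetical and purely illustrative.

```python
# Hypothetical sketch: binding cuff gestures to phone actions, in the
# spirit of what the Jacquard app lets wearers configure. None of
# these names come from Google's actual SDK.

from typing import Callable

def next_direction() -> None:
    print("In 400 feet, turn left onto Market St.")

def toggle_music() -> None:
    print("Music paused.")

def send_to_voicemail() -> None:
    print("Call sent to voicemail.")

# One assignable action per recognized cuff gesture.
bindings: dict[str, Callable[[], None]] = {
    "double_tap": next_direction,
    "brush_in": toggle_music,
    "brush_out": send_to_voicemail,
}

def on_gesture(name: str) -> None:
    action = bindings.get(name)
    if action is not None:
        action()

on_gesture("double_tap")  # -> "In 400 feet, turn left onto Market St."
```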

You can visit jacquard.com/levi/specs on your mobile device to see if it’s compatible; generally, phones running Android 6.0.1 or newer will work. iOS users must have an iPhone 6 or later running iOS 10 or iOS 11. It’s likely this jacket will appeal to a very narrow set of people, especially considering its hefty price tag. But if it’s as thoughtfully made as it appears to be, it will probably attract some fans.

Via: The Verge

Source: Google


The next iPhone creates animated emoji from your facial expressions

You may already know that the next iPhone will use face detection for all kinds of clever tricks, but here’s one you probably weren’t expecting: customized emoji. The 9to5Mac crew has discovered that leaked “gold master” iOS 11 firmware includes references to ‘Animoji,’ or 3D emoji that you create using your facial expressions and voice. Pick one of the familiar non-human faces in the emoji library and it’ll map your eye, mouth and cheek expressions to that character — you can make a robot smile or have a dog raise its eyebrows. Even the poo emoji can be animated. This comes across as a gimmick (we can see many people dropping it once the novelty wears off), but it shows what’s possible now that Apple has face tracking at its disposal. And there’s more to the leak than just emoji.
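For a rough idea of how that mapping can work, here is a toy sketch: per-frame expression coefficients (named in the style of ARKit's face-tracking blend shapes) are retargeted onto a cartoon character's rig controls. The mapping itself is our illustration, not Apple's implementation.

```python
# Toy sketch of expression retargeting: tracked facial coefficients
# (names modeled on ARKit-style blend shapes) are mapped onto a
# cartoon character's rig. The dog rig and its control names are
# invented here for illustration.

face_frame = {
    "jawOpen": 0.7,          # each weight ranges 0..1
    "mouthSmileLeft": 0.4,
    "mouthSmileRight": 0.5,
    "browInnerUp": 0.9,
}

# Human coefficient -> dog-character rig control.
RIG_MAP = {
    "jawOpen": "muzzle_open",
    "mouthSmileLeft": "smile_left",
    "mouthSmileRight": "smile_right",
    "browInnerUp": "brow_up",   # how a dog "raises its eyebrows"
}

def retarget(frame: dict) -> dict:
    """Apply each tracked weight to the corresponding rig control."""
    return {RIG_MAP[k]: v for k, v in frame.items() if k in RIG_MAP}

print(retarget(face_frame))
# {'muzzle_open': 0.7, 'smile_left': 0.4, 'smile_right': 0.5, 'brow_up': 0.9}
```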

The scoop also offers more clues as to what the next iPhone can do. The camera will be more powerful, for one thing: in a confirmation of previous leaks, you can shoot 4K video at 60 frames per second, and slow-motion 240FPS video at 1080p. It’ll also have an adaptive True Tone display like on the iPad Pro. The face-based authentication system appears to be relatively advanced, as you have to pivot your head in a circle to make sure it can recognize you from a wide range of angles. And if there was any doubt that the home button is going away, Apple just removed it: apps on the next iPhone have a line at the bottom indicating that gestures are available.

A separate code search by Steve Troughton-Smith may have also revealed the finished name. The brand new, all-screen iPhone appears to be called iPhone X. The revamped versions of the classic design, in turn, would be called the iPhone 8 and iPhone 8 Plus. Yes, Apple may be skipping the “S” naming scheme for the first time since it was instituted back in 2009 with the iPhone 3GS.

There are also mentions of dual-camera iPhone owners getting a Portrait Lighting mode that could alter the perceived lighting of a shot. You could apply a stage lighting effect, or give a harshly lit picture the soft tones of natural light.

Other hints? We’ve already seen references to the LTE Apple Watch, but there’s also evidence of updated AirPods. The only conspicuous change entails moving the charging light to the outside of the case (no more flipping the lid), but we wouldn’t rule out under-the-hood revisions. The one sure thing is that Apple will have a lot to talk about during its September 12th event.

Via: TechCrunch

Source: 9to5Mac (1), (2), S. Troughton-Smith (Twitter)


Apple might announce a 4K TV box at next month’s iPhone event

Apple is unveiling another new product with its latest iPhones and Apple Watches in September, according to Bloomberg. Cupertino is reportedly announcing its 4K- and HDR-capable Apple TV, as well. If you’ll recall, the publication reported earlier this year that the tech titan has updated its TV streaming box with the capability to stream in 4K resolution and to play more color-rich HDR videos. Since the upgraded box is expected to stream bigger files with a higher resolution, it will come with a faster processor. Obviously, you’ll need to pair it with a TV that’s also capable of playing 4K HDR content to bring out its full potential.

Despite the new capabilities and faster processor, Apple’s engineers were apparently unhappy with the incremental upgrade. They originally set out to build a cord-cutting device with the first Apple TV, but the company failed to forge partnerships to make that vision a reality. It’s unclear if the tech giant is still pursuing deals with broadcast networks, but Bloomberg says it’s talking to streaming services like Netflix about providing more 4K videos.

Apple is reportedly talking to film studios about selling 4K movies through iTunes, as well, and an iTunes UK transaction back in July marking a film as “4K, HDR” suggests negotiations are going smoothly. We’ll probably also see some original 4K shows in the future, considering the tech giant has already set aside $1 billion for original programming. In addition, both the old and upcoming TV boxes will be able to access Amazon Prime Video later this year.

According to the Bloomberg piece, Apple is seeking to “revive its video ambitions” with the upgraded device, as the original one hasn’t been doing as well as Roku, Chromecast and the Fire TV. It even made a few hires for that particular purpose over the past few months, including Timothy Twerdahl, the former Fire TV chief who’s now in charge of the Apple TV division. Unfortunately, we still don’t know how much the new streaming box will set you back, but it’s almost September anyway — you won’t have to wait too long to find out.

Source: Bloomberg


The next Apple Watch might not need an iPhone for data

Well, Apple Watch fans have more to look forward to than just a new operating system. According to a new report from Bloomberg, Apple will release a version of its Watch with cellular network support built in by year’s end, relieving users of the need to carry their iPhones around. Three words: it’s about time.

Rumors of a cellular Apple Watch are nothing new, and the whole concept should sound very familiar by now. After all, Samsung and LG have had LTE-enabled smartwatches for years, and the latter developed one such wearable to help launch Android Wear 2.0 earlier this year. While it’s not yet clear what Apple plans to let people do with these mobile data connections, it’s likely that users will be able to send messages and make phone and FaceTime Audio calls without being tethered to an iPhone.

Interestingly, Intel is said to be providing the modem for the new Apple Watch, which isn’t a huge surprise — Apple tapped the chipmaker for modems used in certain versions of the iPhone 7 and 7 Plus. Given the Watch’s small size, Apple and Intel may opt to use a digital “eSIM” rather than a traditional plastic SIM card as well. That could signal a similar decision for future iPhones, which would have potentially huge ramifications for how such smartphones (and their data plans) are sold.

If nothing else, though, use of an eSIM would likely preserve the Apple Watch’s compact footprint. Consider LG’s flagship Watch Sport: it was the more powerful of the two smartwatches that debuted alongside Android Wear 2.0, and the space required to fit a physical SIM card inside helped make it big and somewhat unwieldy. That’s not really Apple’s style, especially since the company’s new Watch is said to benefit from an all-new form factor.

If true, Apple’s next step in wearables may be an iterative one. Still, if the company’s most recent earnings release is any indication, demand for the Watch is still going strong. According to CEO Tim Cook, Watch sales grew 50 percent year over year — seriously, Tim, would some hard numbers now and then really kill you?

Source: Bloomberg


Firmware shows the next iPhone will use infrared face unlock

Ever since our close look at an alleged render of the next iPhone back in May, there have been rumors of 3D face scanning plus a large screen-to-body ratio flying about. Today, we finally bring you some solid evidence about these features, courtesy of — surprise, surprise — Apple itself. After digging up new details about the Apple HomePod in its leaked firmware, iOS developer Steve Troughton-Smith came across some code that confirms the use of infrared face unlock in BiometricKit for the next iPhone. More interestingly, in the same firmware, fellow developer Guilherme Rambo found an icon that suggests a near-bezel-less design — one that matches rumored schematics going as far back as late May. For those in doubt, Troughton-Smith assured us that this icon is “specific to D22, the iPhone that has Pearl (Face ID).”

These discoveries are by far the best hints at what to expect from the “iPhone 8,” which is expected to launch later this year. We also learned from our exclusive render that the phone may feature a glass back along with wireless charging this time. That said, there’s still no confirmation on the fate of Touch ID: while the HomePod firmware code seems to suggest that it’s sticking around, there’s no indication as to whether it’s ditching the usual Home button execution in favor of an under-display fingerprint scanner (as shown off by Qualcomm and Vivo at MWC Shanghai). Given how poorly Apple has been guarding the secrets of its next smartphone this time around, chances are we’ll hear more very soon.

Source: Steve Troughton-Smith, Guilherme Rambo


The next iPhone reportedly scans your face instead of your finger

Rumormongers have long claimed that Apple might include face recognition in the next iPhone, but it’s apparently much more than a nice-to-have feature… to the point where it might overshadow the Touch ID fingerprint reader. Bloomberg sources understand that the new smartphone will include a depth sensor that can scan your face with uncanny levels of accuracy and speed. It reportedly unlocks your device inside of “a few hundred milliseconds,” even if it’s lying flat on a table. Unlike the iris scanner in the Galaxy S8, you wouldn’t need to hold the phone close to your face. The 3D scanning is said to improve security, too, by collecting more biometric data than Touch ID and reducing the chances that the scanner would be fooled by a photo.

Does that sound good to you? You’re not alone. The leakers claim that Apple ultimately wants face recognition to take over from Touch ID, though it’s not clear whether the fingerprint reader will disappear outright. While the tipsters say that Apple has run into “challenges” putting a fingerprint reader under the screen, they don’t rule it out entirely. There are conflicting reports: historically reliable analyst Ming-Chi Kuo is skeptical that under-screen Touch ID will make the cut, while a representative at chip maker TSMC supposedly claimed that it’s present. Your face may be the preferred biometric sign-in approach rather than the only one.

The Bloomberg scoop largely recaps existing rumors, including an all-screen design (with just a tiny cut-out at the top for a camera, sensors and speaker), a speedier 10-nanometer processor and a dedicated chip for AI-related tasks. However, it adds one more treat: if accurate, the new iPhone will get an OLED version of the fast-refreshing ProMotion display technology you see in the current-generation iPad Pro. So long as the leaks are accurate, it’s becoming increasingly clear that the next iPhone represents a massive hardware upgrade, even if the software is relatively conservative.

Source: Bloomberg


The next video game controller is your voice

For all of modern gaming’s advances, conversation is still a fairly unsophisticated affair. Starship Commander, an upcoming virtual reality game on Oculus and SteamVR, illustrates both the promise and challenge of a new paradigm seeking to remedy that: using your voice.

In an early demo, I control a starship delivering classified goods across treacherous space. Everything is controlled by my voice: flying the ship is as simple as saying “computer, use the autopilot,” while my sergeant pops up in live action video to answer questions.

At one point, my ship is intercepted and disabled by a villain, who pops onto my screen and starts grilling me. After a little back and forth, it turns out he wants a deal: “Tell you what, you take me to the Delta outpost and I’ll let you live.”

I try to shift into character. “What if I attack you?” I say. No response, just an impassive yet expectant stare. “What if I say no?” I add. I try half a dozen responses, but — perhaps because I’m playing an early build of the game, or maybe it just can’t decipher my voice — I can’t seem to find the right phrase to unlock the next stage of play.

It’s awkward. My immersion in the game all but breaks down when my conversational partner does not reciprocate. It’s a two-way street: if I’m going to dissect the game’s dialogue closely and craft interesting replies, it has to keep up with its end of the conversation too.

The situation deteriorates. The villain eventually gets fed up with my inability to carry the conversation. He blows up my ship, ending the game.

Yet there is potential for a natural back and forth conversation with characters. There are over 50 possible responses to one simple question from the sergeant — “Is there anything you’d like to know before we start the mission?” — says Alexander Mejia, the founder and creative director at Human Interact, which is designing the game. The system is powered by Microsoft’s Custom Speech Service (similar technology to Cortana), which sends players’ voice input to the cloud, parses it for true intent, and gets a response in milliseconds. Smooth voice control coupled with virtual reality means a completely hands-free, lifelike interface with almost no learning curve for someone who’s never picked up a gamepad.
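As a rough illustration of that pipeline (cloud speech-to-text, intent matching, then a pre-recorded voice line), here is a minimal sketch. The transcription call is a stand-in rather than the real Custom Speech Service SDK, and the intents, phrases and clip names are invented for the example.

```python
# Minimal sketch of a voice-command loop: transcribe audio, map the
# transcript to an intent, then return the matching voice-acted clip.
# All intents, phrases and filenames here are invented examples.

from typing import Optional

INTENTS = {
    "engage_autopilot": ["use the autopilot", "enable autopilot"],
    "ask_mission": ["anything i should know", "tell me about the mission"],
    "refuse_villain": ["what if i say no", "i won't take you"],
}

RESPONSES = {
    "engage_autopilot": "autopilot_engaged.wav",
    "ask_mission": "sergeant_briefing_03.wav",
    "refuse_villain": "villain_threat_01.wav",
}

def transcribe(audio: bytes) -> str:
    """Stand-in for the cloud speech-to-text request."""
    return "computer, use the autopilot"

def match_intent(transcript: str) -> Optional[str]:
    text = transcript.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return None  # the blank stare: no phrase recognized

def handle_utterance(audio: bytes) -> Optional[str]:
    intent = match_intent(transcribe(audio))
    return RESPONSES.get(intent) if intent else None

print(handle_utterance(b"..."))  # -> autopilot_engaged.wav
```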

Speaking certainly feels more natural than selecting one of four dialogue options from a menu, as a traditional roleplaying game might provide. It makes me more attentive in conversation — I have to pay attention to characters’ monologues, picking up on details and inconsistencies while coming up with insightful questions that might take me down a serendipitous narrative route (much like real life). No, I don’t get to precisely steer a ship to uncharted planets since voice control, after all, is not ideal for navigating physical space. But what this game offers instead is conversational exploration.


Video games have always been concerned with blurring the lines between art and real life. Photorealistic 4K graphics, the disintegration of levels into vast open worlds, virtual reality placing players inside the skull of another person: the implicit end goal of every gaming advance seems to be to create an artificial reality indistinguishable from our own. Yet we communicate with these increasingly intelligent games using blunt tools. The joystick/buttons and keyboard/mouse combinations we use to speak to games do little to resemble the actions they represent. Even games that use lifelike controls, from the blocky plastic Time Crisis guns to Nintendo Switch Joy-Cons, still involve scrolling through menus and clicking on dialogue options. The next step is for us to talk to games.

While games that use the voice have cropped up over the years — Seaman on Sega’s Dreamcast, Lifeline on the PlayStation 2, Mass Effect 3 on the Xbox 360’s Kinect — their commands were often frustratingly clunky and audio input never seemed more than a novelty.

That may be coming to an end. Well-rated audio games such as Papa Sangre and Zombies, Run! have appeared on the iPhone. At E3 this month, Dominic Mallinson, a Sony senior vice president for research and development, counted natural language understanding among “some of the technologies that really excite us in the lab right now.”

More than anything, the rush by Microsoft, Google, Amazon and Apple to dominate digital assistants is pushing the entire voice computing field forward. In March, The Information reported that Amazon CEO Jeff Bezos wants gaming to be a “killer app” for Alexa, and the company has paid developers who produce the best-performing skills. Games are now the top category for Alexa, and the number of customers playing games on Echo devices has increased tenfold in the last year, according to an Amazon spokeswoman. “If I think back on the history of the world, there’s always been games,” says Paul Cutsinger, Amazon’s head of Alexa voice design education. “And it seems like the invention of every new technology comes along with games.”

“It seems like the invention of every new technology comes along with games.” – Paul Cutsinger, Amazon

Simply: If voice assistants become the next major computing platform, it’s logical that they will have their own games. “On most new platforms, games are one of the first things that people try,” says Aaron Batalion, a partner focused on consumer technology at venture capital firm Lightspeed Venture Partners. “It’s fun, engaging and, depending on the game mechanics, it’s often viral.” According to eMarketer, 35.6 million Americans will use a voice assistant device like Echo at least once a month this year, while 60.5 million Americans will use some kind of virtual voice assistant like Siri. The question is, what form will these new games take?

Gaming skills on Alexa today predominantly trace their lineage to radio drama — the serialized voice-acted fiction of the early 20th century — including RuneScape whodunnit One Piercing Note, Batman mystery game The Wayne Investigation and Sherlock Holmes adventure Baker Street Experience.

Earplay, meanwhile, has emerged as a leading publisher of audio games, receiving over $10,000 from Amazon since May, according to Jon Myers, who co-founded the company in 2013. Myers describes their work as “stories you play with your voice,” and the company crafts both their own games and the tools that enable others to do the same.

For instance, in Codename Cygnus, you play a James Bond-esque spy navigating foreign locales and villains with contrived European accents, receiving instructions via an earpiece. Meanwhile, in Half, you navigate a surreal Groundhog Day scenario, picking up clues on each playthrough to escape the infinitely repeating sequence of events.

“What you see with the current offerings from Earplay springs a lot out of what we did at Telltale Games over the last decade.”

Like a choose-your-own-adventure novel, these experiences intersperse chunks of narrative with pivotal moments where the player gets to make a decision, replying with verbal prompts. Plot the right course through an elaborate dialogue tree and you reach the end. The audio storytelling activates your imagination, yet there is little agency as a player: The story chugs along at its own pace until you reach each waypoint. You are not so much inhabiting a character or world as co-authoring a story with a narrator.
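Structurally, such a game is a tree: each node holds a chunk of narration plus the spoken prompts that branch from it. Here is a toy sketch of that shape; the nodes and prompts are our own illustration, not Earplay's internals.

```python
# Toy dialogue tree: each node pairs narration with the verbal
# prompts that lead to the next node. Purely illustrative.

from dataclasses import dataclass, field

@dataclass
class Node:
    narration: str
    branches: dict = field(default_factory=dict)  # spoken prompt -> next Node

good_end = Node("You slip past the guards unnoticed. The end.")
bad_end = Node("The guards spot you immediately. The end.")
start = Node(
    "You reach the embassy gate. Do you 'sneak in' or 'walk in'?",
    branches={"sneak in": good_end, "walk in": bad_end},
)

def advance(node: Node, utterance: str) -> Node:
    # An unrecognized phrase leaves you at the same waypoint,
    # much like the villain's impassive stare.
    return node.branches.get(utterance, node)

here = advance(start, "sneak in")
print(here.narration)  # -> "You slip past the guards unnoticed. The end."
```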

“What you see with the current offerings from Earplay springs a lot out of what we did at Telltale Games over the last decade,” says Dave Grossman, Earplay’s chief creative officer. “I almost don’t even want to call them games. They’re sort of interactive narrative experiences, or narrative games.”

Grossman has had a long career considering storytelling in games. He is widely credited with creating the first game with voice acting all the way through — 1993’s Day of the Tentacle — and also worked on the Monkey Island series. Before arriving at Earplay, he spent a decade with Telltale Games, makers of The Wolf Among Us and The Walking Dead.

Earplay continues this genre’s bloodline: The goal is not immersion but storytelling. “I think [immersion] is an excellent thing for getting the audience involved in what you want, in making them care about it, but I don’t think it’s the be-all-end-all goal of all gaming,” says Grossman. “My primary goal is to entertain the audience. That’s what I care most about, and there are lots of ways to do that that don’t involve immersing them in anything.”

“My primary goal is to entertain the audience … There are lots of ways to do that that don’t involve immersing them in anything.”

In Earplay’s games, the “possibility space” — the degree to which the user can control the world — is kept deliberately narrow. This reflects Earplay’s philosophy. But it also reflects the current limitations of audio games. It’s hard to explore physical environments in detail because you can’t see them. Because Alexa cannot talk and listen at the same time, there can be no exchange of witticisms between player and computer, only each side talking at pre-approved moments. Voice seems like a natural interface, but it’s still essentially making selections from a multiple-choice menu. Radio drama may be an obvious inspiration for this new form; its overacted tropes and narrative conventions are also well-established for audiences. But right now, like radio narratives, the experience of these games seems to still be more about listening than speaking.

[Image: Human Interact]


Untethered, too, is inspired by radio drama. Created by Numinous Games, which previously made That Dragon, Cancer, it runs on Google’s Daydream virtual reality platform, combining visuals with voice and a hand controller.

Virtual reality and voice control seem to be an ideal fit. On a practical level, speech obviates the need for novice gamers to figure out complicated button placements on a handheld controller they can’t see. On an experiential level, the combination of looking around a 360-degree environment and speaking to it naturally brings games one step closer to dissolving the fourth wall.

In the first two episodes, Untethered drops you first into a radio station in the Pacific Northwest and then into a driver’s seat, where you encounter characters whose faces you never see. Their stories slowly intertwine, but you only get to know them through their voices. Physically, you’re mostly rooted to one spot, though you can use the Daydream controller to put on records and answer calls. When given the cue, you speak: your producer gets you to record a radio commercial, and you have to mediate an argument between a husband and wife in your back seat. “It’s somewhere maybe between a book and a movie because you’re not imagining every detail,” says head writer Amy Green.

The game runs on Google’s Cloud Speech platform, which recognizes voice input; for whatever you might say, the game may have 15 or 20 recorded lines ready in response, says Green. While those lines may steer the story in different directions, the outcome of the game is always the same. “If you never speak a word, you’re still gonna have a really good experience,” she says.

“It sounds like a daunting task, but you’d be surprised at how limited the types of questions that people ask are.” – Alexander Mejia, Human Interact

This is a similar design to Starship Commander’s: anticipate anything the player might say, and record a pre-written, voice-acted response for it.

“It sounds like a daunting task, but you’d be surprised at how limited the types of questions that people ask are,” says Mejia of Human Interact. “What we found out is that 99 percent of people, when they get in VR, and you put them in the commander’s chair and you say, ‘You have a spaceship. Why don’t you go out and do something with it?’ People don’t try to go to the fast food joint or ask what the weather’s like outside. They get into the character.”

“The script is more like a funnel, where people all want to end up in about the same place,” he adds.

Yet for voice games to be fully responsive to anything a user might say, traditional scripts may not even be useful. The ideal system would use “full stack AI, not just the AI determining what you’re saying and then playing back voice lines, but the AI that you can actually have a conversation with,” says Mejia. “It passes the Turing test with flying colors; you have no idea if it’s a person.”

In this world, there are no script trees, only a soup of knowledge and events that an artificial intelligence picks and prunes from, reacting spontaneously to what the player says. Instead of a tightly scripted route with little room for expression, an ideal conversation could be fluid, veering off subject and back. Right now, instead of voice games being a freeing experience, it’s easy to feel hemmed in, trapped in the worst kind of conversation — overly structured with everyone just waiting their turn to talk.

An example of procedurally generated conversation can be found in Spirit AI’s Character Engine. The system creates characters with their own motivations and changing emotional states. The dialogue is not fully pre-written, but draws on a database of information — people, places, event timeline — to string whole sentences together itself.
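As a crude sketch of that idea, picture a character whose replies are assembled from a small knowledge base and shaded by an emotional state the conversation itself can move. This is only our illustration of the concept, not Spirit AI's actual engine.

```python
# Crude sketch of an improvising character: replies are composed from
# a knowledge base and colored by a mood value that the conversation
# can shift. Illustrative only; not Spirit AI's Character Engine.

class Character:
    def __init__(self, name: str, mood: float, knowledge: dict):
        self.name = name
        self.mood = mood            # -1.0 hostile .. +1.0 friendly
        self.knowledge = knowledge  # topic -> fact this character knows

    def reply(self, topic: str) -> str:
        fact = self.knowledge.get(topic)
        if fact is None:
            return "I don't know anything about that."
        opener = "Gladly." if self.mood > 0 else "If you insist."
        return f"{opener} {fact}"

    def offend(self) -> None:
        self.mood -= 0.5  # the exchange shifts the emotional state

npc = Character("Vesna", 0.3, {"outpost": "The Delta outpost fell last week."})
print(npc.reply("outpost"))  # -> "Gladly. The Delta outpost fell last week."
npc.offend()
print(npc.reply("outpost"))  # -> "If you insist. The Delta outpost fell..."
```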

“I would describe this as characters being able to improvise based on the thing they know about their knowledge of the world and the types of things they’ve been taught how to say,” says Mitu Khandaker, chief creative officer at Spirit AI and an assistant professor at New York University’s Game Center. Projects using the technology are already going into production, and should appear within two years, she says. If games like Codename Cygnus and Baker Street Experience represent a more structured side of voice gaming, Spirit AI’s engine reflects its freeform opposite.

‘Untethered,’ a virtual reality title from Numinous Games.

Every game creator deals with a set of classic storytelling questions: Do they prefer to give their users liberty or control? Immersion or a well-told narrative? An experience led by the player or developer? Free will or meaning?

With the rise of voice technology that allows us to communicate more and more seamlessly with games, these questions will become even more relevant.

“It’s nice to have this idea that there is an author, or a God, or someone who is giving meaning to things, and that the things over which I have no control are happening for a reason,” says Grossman. “There’s something sort of comforting about that: ‘You’re in good hands now. We’re telling a story, and I’m going to handle all this stuff, and you’re going to enjoy it. Just relax and enjoy that.'”

In Untethered, there were moments when I had no idea if my spoken commands meaningfully impacted the story at all. Part of me appreciated that this mimics how life actually works. “You just live your life and whatever happened that day was what was always going to happen that day,” Green says. But another part of me missed the clearly telegraphed forks in the road that indicated I was about to make a major decision. They are a kind of fantasy of perfect knowledge, of cause and effect, that real life rarely offers. Part of the appeal of games is that they simplify and structure the complexity of daily living.

“Not everybody is necessarily into games which are about violence or shooting but everyone understands what it is to talk to people. Everybody knows what it is to have a human engagement of some kind.” – Mitu Khandaker, Spirit AI

As developers wrestle with this balance, they will create a whole new form of game: one that’s centered on complex characters over physical environments; conversation and negotiation over action and traditional gameplay. The idea of what makes a game a game will expand even further. And voice can reduce gaming’s barrier to entry for a general audience, not to mention the visually and physically impaired (the AbleGamers Foundation estimates 33 million gamers in the US have a disability of some kind). “Making games which are more about characters means that more people can engage with them,” says Khandaker. “Not everybody is necessarily into games which are about violence or shooting but everyone understands what it is to talk to people. Everybody knows what it is to have a human engagement of some kind.”

Still, voice gaming’s ability to bring a naturalistic interface to games matters little if it doesn’t work seamlessly, and that remains the industry’s biggest point to prove. A responsive if abstract gamepad is always preferable to unreliable voice control. An elaborate dialogue tree that obfuscates a lack of true intelligence beats a fledgling AI which can’t understand basic commands.

I’m reminded of this the second time I play the Starship Commander demo. Anticipating the villain’s surprise attack and ultimatum, I’m already resigned to the only option I know will advance the story: agree to his request.

“Take me to the Delta outpost and I’ll let you live,” he says.

“Sure, I’ll take you,” I say.

This time he doesn’t stare blankly at me. “Fire on the ship,” he replies, to my surprise.

A volley of missiles and my game is over, again. I take off my headset to find David Kuelz, a writer on the game who set up the demo, laughing. He watched the computer convert my speech to text.

“It mistook ‘I’ll take you’ for ‘fuck you,'” he says. “That’s a really common response, actually.”


Google might bring curved screens to its next Pixel phone

Google, which has taken a hands-off approach to Android hardware until recently, may be getting more involved in smartphone production. It’s reportedly investing up to $875 million in LG Display to develop a stable supply of flexible OLED screens for its Pixel phones, according to reports from Korea’s Yonhap News and Electronic Times (ET). That would help ease supply problems for the next-gen device, as the current model has been nearly impossible to find.

The search giant would invest a trillion won ($875 million) and possibly more to secure a production line dedicated to its own smartphones. It may also reserve some flexible OLED screens for other devices like a rumored pair of “Pixel” smartwatches. LG Display is reportedly mulling the offer, which would be a strategic investment and not just an order deposit. If it signs on, curved screens for the Pixel would likely be built on LG’s $1.3 billion flexible OLED line in Gumi, North Gyeongsang Province.

With its Nexus phones, Google let partners Huawei, LG and HTC control all aspects of the devices and hardware. However, with the Pixel and Pixel XL, Google actually took charge of the design and thus, to some level, the hardware. That was both a good and bad thing — the phone was generally acknowledged as the best-ever Google device, but was only released in the US, UK, Australia, Germany and Canada. Even in those nations, it was pretty damn hard to find.

If the news is accurate (and with supply rumors, that’s a big “if”) then Google would be playing favorites with one Android supplier, LG, over another, Samsung. On the other hand, Samsung might be quite okay with that, considering it’s about to launch its own curved OLED Galaxy S8 smartphone and possibly supply the flexible OLED display for Apple’s next iPhone 8. With OLED tech seemingly the only thing that manufacturers want, it makes sense for Google to cut a deal with LG, which isn’t faring so well with its own devices.

Via: TechCrunch

Source: Yonhap, ET News (translated)


Next iPhone might have depth-sensing front camera

It’s that time of year, folks. Rumors of what the next iPhone will be like are coming in hot and heavy. Last week, well-connected Apple analyst Ming-Chi Kuo noted that the new handsets would nix the home button for a touch-friendly “function area.” Now there’s another bit of info. In a KGI Securities report detailed by 9to5Mac, the analyst explains that the upcoming OLED iPhone will feature a “revolutionary” front camera that’s capable of sensing 3D space via infrared.

More specifically, the report explains that the newfangled camera can combine depth information with 2D images for things like facial recognition, iris recognition and, perhaps most importantly, 3D selfies. Given the previous report about the home button being put out to pasture, there will need to be a replacement for Touch ID. Rumors indicate that either facial recognition or a fingerprint reader embedded in the display would assist with unlocking the device. This new report would point more to the former method.

The report also explains a bit about how the 3D front-facing camera would be used in gaming scenarios. The camera could be used to replace an in-game character’s head or face with that of the user and those 3D selfies could be destined for augmented reality.

It’s no surprise to get word of potential depth-sensing camera tech from Apple. The company nabbed PrimeSense, an outfit that co-developed the original Kinect for Xbox, back in 2013. This latest KGI report says PrimeSense algorithms will allow the hardware to determine the depth and location of objects in its field of view. An earlier report from Fast Company explained that Apple was working with Lumentum to use its 3D-sensing tech on the next iPhone.

While the 3D camera will only be on the front side for now, Kuo says Apple will eventually employ the tech on the back as well. The report also explains that the company is way ahead of Android as far as 3D algorithms go, so a depth-sensing camera would be a unique feature for a couple of years. Of course, if the early rumors are true, you can expect to pay $1,000 for the 10th anniversary iPhone when it arrives.

Source: 9to5Mac
