Alpine’s latest receiver brings wireless CarPlay to all

Apple CarPlay has finally gone wireless. After debuting the technology at CES this year, Alpine is now shipping the iLX-107, the first CarPlay receiver with support for wireless connectivity. And considering the tech world’s general disdain for wires and cables, it’s a surprise it’s taken this long to reach the aftermarket.

The receiver (compatible with the iPhone 5 and later) gives you access to CarPlay through its touchscreen and Siri voice control. You’ll get the full CarPlay experience: make calls, read texts, choose music and get real-time traffic updates. Plus, depending on your car, you’ll get customized vehicle information too, such as park assist. There’s no longer any need for the proverbial Lightning cable: simply connect your phone via WiFi or Bluetooth.

While CarPlay receivers have been kicking around for a while, this is the first to support wireless connectivity — a function that began development in 2015 but didn’t find an infotainment home until late 2016 when it was added to the 2017 BMW 5 Series Sedan.

Despite growing demand for such systems, very few manufacturers build the tech into their cars, so it’s still very much a novelty. Perhaps that’s the justification for the iLX-107’s eye-watering $900 price tag.

Source: Cision

Engadget RSS Feed

I don’t regret being an iPhone early adopter

Do you remember where you were when Steve Jobs first introduced the iPhone, more than 10 years ago? It’s a pretty nerdy thing to admit, but I do. I spent the day glued to my computer, at my desk — theoretically hard at work. But I was actually devouring Engadget’s liveblog, after which I watched and rewatched video of the event so I could see the mythical device in action. And then I spent the next 12 months waiting for my Verizon contract to expire, hating my Moto RAZR the entire freaking time. (No, I wasn’t a day-one adopter, but I definitely stopped in an AT&T store to play with their demo phones.)

The first iPhone wasn’t a world-beater in terms of sales, and many have pointed out that it was the classic “first-gen” Apple product. It lacked important features like 3G connectivity and any third-party apps, you had to hook it up to iTunes to activate it, and it was wildly expensive — $500 for a paltry 4GB of storage (or $600 for 8GB), and that was with a two-year contract.

None of that mattered to me, and that’s in large part due to Jobs’ presentation, one that’s widely considered the best he ever gave. I’d agree with that assessment, because he so clearly outlined the benefits of the iPhone over the phones that most consumers (including me) were using. Some of my colleagues fondly remember the Windows Mobile devices they used before the iPhone and noted how they waited a few years for Apple to fix those first-gen issues before getting on board.

But the 2007 smartphone market was wildly different, particularly in the US. BlackBerry and Palm Treo devices dominated, but they were business-focused and didn’t resonate with the people buying iPods. Jobs’ presentation was the complete opposite. The first feature he announced and demoed was iPod functionality — before even bothering with the phone part. Nearly everything he showed off was focused on consumers, from photos and movies to looking up restaurants on Google Maps.

Of course, Jobs tied it all together at the end, showing a sequence where he listened to music, took a call, sent a photo over email and looked up a movie while still talking on the phone. He then hung up the call and the music automatically resumed. Right now, it seems laughably simple, but in the days of flip phones this seemed like magic.

Even the six-month gap between the iPhone’s announcement and its on-sale date worked in Apple’s favor. The company didn’t typically announce products that far in advance, but in this case it gave them crucial time to polish the device and make improvements (like adding YouTube support and using glass instead of plastic for the front screen cover). It also helped build up some serious hype and anticipation among the Apple faithful. Jobs’ presentation paid dividends over those months; it was something fans could rewatch and use to stoke their interest in the iPhone while they waited.

Jobs had made presentations like this before, and Apple has continued to do so long after he died in 2011. The format has changed slightly, but Apple still focuses on selling you on the entire vision of its connected universe of products — all of its devices and services work better the more you use them together. When Apple makes a presentation like the one at this year’s WWDC, I often come away with the notion that my digital life would work better if I went “all in” on its software and hardware. It’s not just Apple, though — after Google I/O, I always consider whether things would be easier if I used Android for everything, and Microsoft has been doing a good job of selling me on the benefits of Windows everywhere lately as well.

The iPhone presentation was a bit different, because it was focused purely on one device — Apple hadn’t tied the phone so closely to the Mac just yet. But Apple did tie the iPhone to the Mac — before the cloud, it was home base for your phone and let you sync photos, movies, contacts, calendars and music, making it a mini-extension of your personal computer. And even though some aspects of the first iPhone did feel a bit beta (remember how you couldn’t send pictures via text message?), it also did exactly what Apple promised.

The relatively large screen and unique UI couldn’t have been more different from the garbage Verizon forced onto the Moto RAZR. There weren’t any third-party apps, but between Safari, YouTube, Mail and Maps, I could get to the most essential info on the internet while on the go … even if it took forever. I learned to accept that and use the phone’s more data-heavy features when on WiFi, which was fairly easy to find in 2008.

I still carried my iPod around for a while, but it wasn’t long before I started working around the iPhone’s limited storage space and leaving my iPod at home. Sure, the Windows Mobile and BlackBerry crowd may have been doing many of these things for years, but for me (and millions of other iPhone owners), this was a huge step forward, even if there were caveats.

Looking back, the iPhone’s influence on the consumer electronics market is obvious. But that doesn’t mean it’s not worth reflecting on. It’s also worth considering Jobs’ performance to see how it influenced Apple’s competitors. Jobs made many similar presentations over the years, but after the iPhone became a success, Microsoft, Samsung and Google (among others) really started emulating Apple’s events. It’s common now to see companies sell you on their entire vision, not just a series of products or software features.

Ultimately, Jobs’ presentation is as much a part of the iPhone’s history as the product itself. The introduction was nearly a complete disaster, with shoddy prototype phones barely able to connect to the internet, running out of memory and crashing if they weren’t used very carefully. But that craziness only adds to the legend of the iPhone’s introduction.

Fortunately, the experience of actually using the iPhone was pretty seamless when it launched six months later. The first iPhone didn’t age very well (I had mine for only 18 months before grabbing a 3GS when it launched), but it made a good enough impression that I’ve been a repeat customer for nearly a decade. If Jobs’ introduction had gone as badly as it could have, things would have worked out very, very differently for both the iPhone and Apple as a whole. Yes, the first iPhone was basically a working beta — but it worked well enough to change an entire industry.


Meet the small 360 camera module that will fit into phones

You’re probably not aware of this, but a Chinese company dubbed ProTruly released the world’s first two smartphones with a built-in 360 camera last December. Don’t worry if you missed the news: chances are you’d have been put off by the devices’ sheer bulkiness anyway. But according to HT Optical, that may no longer be the case with the next release. At MWC Shanghai, I came across this Wuhan-based company, which happens to be the 360 camera module supplier not just of ProTruly but also of Xiaomi, for its recent Mi Sphere Camera.

As I was mocking the ridiculousness of the ProTruly Darling phones displayed at the booth, HT Optical’s Vice President Shu Junfeng pulled me aside and gave me a sneak peek at what’s coming next: a much smaller 360 camera module that can fit into a 7.6mm-thick smartphone, yet it’ll take 16-megapixel stills — a massive jump from, say, the Insta360 Air dongle’s 4.5-megapixel resolution, and also a tad more than the latest Samsung Gear 360’s 15-megapixel offering.

Future “VR smartphones” will look much less ridiculous than this ProTruly Darling.

I wasn’t sure whether it was excitement or skepticism that my face expressed upon hearing this claim, but it prompted Shu to show me some photos — which he wasn’t able to share for this article — of an upcoming smartphone that will feature this new module. Indeed, the device looked more like a conventional smartphone, as opposed to the 8.9mm-thick and 181.4mm-tall ProTruly Darling pictured above (and just for reference, the iPhone 7 Plus is 7.3mm thick and 158.2mm tall).

Also, the lenses on this mysterious phone’s module apparently add just an extra 1mm to the overall thickness, which means the camera will be less of an annoyance during phone calls or when the phone is in your pocket. This still doesn’t stop either lens from touching whatever surface you place the phone on, but Shu assured me that the lenses will feature a tough scratch-resistant coating.

Shu then showed me what he claimed to be a 16-megapixel 360 still taken with that new camera module, and the image was surprisingly sharp for such a tiny module. Needless to say, I was able to zoom into that image much further than I could with the photos from my Insta360 Air. While there was no sample video to show me, the exec said this little module can shoot 4K video, which is also impressive. I guess we’ll see more when this phone launches in China on July 30th.

As a firm that used to deal with camera makers like Sony and Olympus, HT Optical has dabbled in other product categories following the decline of the compact digital camera market. On top of the smartphone VR camera, I was also intrigued by the company’s phone cases with integrated optical-zoom cameras. The one highlighted above comes with 5x optical zoom, for instance, and it has its own microSD slot. It’s a similar idea to the Hasselblad MotoMod for the Moto Z series, except you can plug any iPhone or Android phone — depending on the plug type — into this one. As a bonus, thanks to their built-in battery, the cases can capture images by themselves when needed, so long as you’re comfortable with the lack of a viewfinder.

It’s hard to tell whether this type of phone case will ever take off, but for the smartphone VR camera module, Shu reckoned it’ll take at least a year or two before it becomes a mainstream feature. For now, he’s happy to focus on working with the smaller mobile brands that tend to be more daring.


‘Tinder for friends’ uses AI to block flirty messages

Making new friends as an adult is hard, and it’s easy to find yourself relying on old college pals and work colleagues to bolster your social life, even if the former live on the other side of the country and the latter are, well, your work colleagues.

Many an app has tried and largely failed to address this problem, but as any woman who’s been brave enough to seek friends — genuine platonic friends — online will know, it’s not long before your inbox is inundated with dire pickup lines, weak attempts at ‘cheeky banter’ and, of course, the ubiquitous dick pic. Enter Patook. Launching globally on July 7 on iPhone and Android, the app claims to make finding new friends easier and less traumatic thanks to an algorithm which detects and blocks flirty language.

Using an AI method known as natural language processing, the ‘flirt detector’ has been trained on millions of creepy messages and pick-up lines circulating the internet, including a huge number submitted to Reddit (of course). It also responds to the behavioral activity of the user: who they message, how often, whether it’s a copy/paste job or if they’ve bothered to think of something original, and so on.
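As a rough illustration (not Patook’s actual system, whose details aren’t public), a detector like this might blend a text-based score with the behavioral signals the article mentions. All phrases, weights and thresholds below are invented:

```python
# Toy flirt detector: blend a text signal (known pick-up-line phrases)
# with behavioral signals (message volume, copy/paste ratio).
# Phrases, weights and thresholds are illustrative, not Patook's.

FLIRTY_PHRASES = {"sit on my face", "hey beautiful", "send me a pic"}

def text_score(message: str) -> float:
    """1.0 if the message contains any known flirty phrase, else 0.0."""
    lowered = message.lower()
    return 1.0 if any(phrase in lowered for phrase in FLIRTY_PHRASES) else 0.0

def behavior_score(messages_sent_today: int, copy_paste_ratio: float) -> float:
    """High-volume, copy/pasted messaging looks more like spam or flirting."""
    volume = min(messages_sent_today / 100.0, 1.0)
    return 0.5 * volume + 0.5 * copy_paste_ratio

def should_block(message: str, messages_sent_today: int,
                 copy_paste_ratio: float, threshold: float = 0.6) -> bool:
    """Block delivery when the combined score crosses the threshold."""
    combined = (0.7 * text_score(message)
                + 0.3 * behavior_score(messages_sent_today, copy_paste_ratio))
    return combined >= threshold
```

A production system would use a trained classifier rather than a phrase list, but the idea of combining what a message says with how its sender behaves follows the description above.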

All of this combines into what Patook’s founders unsettlingly call a ‘magic sauce’, which determines whether a message is sent or not. “What kind of music do you like?” is fine. “Would you like to sit on my face?” is not. Break the rules, and you’re banned. In fact, upon the app’s beta release in 2016, five percent of users were banned before their first message was even delivered.

According to Patook CEO Antoine El Daher: “Initial feedback to the app has been extraordinary. People seeking friends and not romantic relationships have been left out in the cold until now. We anticipate rapid growth among all genders, and so far have seen approximately 40% women, 40% men, and 20% joining as couples.”

Romantic advances aside, Patook (which means ‘little hug’ in Armenian) operates in much the same way as a dating app. There’s an extensive set of privacy controls, and users build a profile and search for friends based on the usual criteria: location, interests, age range. The app also uses a points system that lets users rate the criteria they want in a friend. So if you’re into hiking, you might give five points to people who list ‘the great outdoors’ as an interest, or if you’re into Napalm Death, you might give points to other metalheads. Whatever floats your boat, as long as you keep it clean.
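The points system is easy to picture in code. A minimal sketch, with invented trait names and point values:

```python
# Minimal sketch of the described points system: you assign point values to
# traits you want in a friend, and candidates are scored by the traits they
# actually list. Trait names and values are invented for illustration.

def score_candidate(my_weights, their_interests):
    """Sum the points for every desired trait the candidate lists."""
    return sum(points for trait, points in my_weights.items()
               if trait in their_interests)

weights = {"the great outdoors": 5, "metal": 3, "board games": 2}

hiker = {"the great outdoors", "board games"}      # scores 5 + 2
metalhead = {"metal", "cooking"}                   # scores 3
```

Candidates would then simply be ranked by their score against your weights.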


iOS 11 preview: Full of promise, especially on bigger screens

As always, Apple spent a considerable chunk of WWDC earlier this month hyping up iOS 11 and all of the new features it brings. Now it’s your turn to take them for a spin. The first public release of the iOS 11 beta goes live today for people participating in Apple’s testing program, and we’ve been playing with it for a few days to get a better sense of what it has to offer. Long story short, it’s already shaping up to be a very valuable, very comprehensive release.

In order to find out for yourself, you’ll need the right hardware: an iPhone 5s or newer, an iPad mini 2 or newer or a sixth-generation iPod touch. Before you replace your iVessel’s perfectly functional software with something that’s still months away from being ready, keep reading for a primer on what to expect.

But first…

Before we go any further, here’s the usual disclaimer: This software, while mostly functional, is a long way from being finished. Over the past few days of testing, I’ve seen my share of lock-ups, app crashes and overall funkiness. (As I write this, my iPhone’s “home row” has disappeared and I can’t figure out how to get it back.)

Since we’ve had a limited time with this preview, we haven’t been able to test all of the updates it contains either. Even though I work for Engadget, my home resembles that of a Luddite, so I didn’t have much of a need for the updated Home app. And since my car is relatively ancient, CarPlay was also a no-go. Meanwhile, other things just weren’t ready for prime time, including multi-room support in AirPlay 2 and the ability to send cash to friends via iMessage. And while we’re starting to see some really neat augmented reality tricks made with ARKit, none of those are available in the App Store yet. Long story short, just make sure you know what you’re getting into before you agree to the install.

Familiar, but different

The iOS aesthetic has undergone some major changes over the years, but that’s not really the case here if you’re using an iPhone. In fact, you’d be hard-pressed to find a difference until you swipe up in search of that flashlight. The iOS Control Center no longer looks like a handful of pages with quick options; it’s a more condensed cluster of buttons and controls that you can finally customize. I appreciate Apple squeezing all of this functionality into one place; it generally works well, and if your iOS device supports 3D Touch, you can press on these icons to access more controls. That said, I’ve already screwed up my screen brightness while trying to close Control Center maybe a thousand times, and I’m not sure I love the look either.

You can also view all your recent notifications from the home screen just by swiping up from your lock screen, which is nice if you need to get caught up on things quickly. That said, if you’re a digital pack rat (like me) and never clear your notifications, this is a great way to see iOS lag.

You’ll also see a big focus on big text: It’s meant to be clear and visually punchy, but if you didn’t like the Apple Music redesign, you’re probably not going to like this either. That bold approach is used everywhere to some extent, from the Messages app to your list of albums in Photos. The best new example, however, is the revamped App Store. It’s not just a place with lists of apps (though those still exist) — it’s more curated, and there’s a strong editorial bent. Featured apps get miniature articles (crafted with help from the developers), lots of big imagery, and more video to help explain what makes them so special. It kind of feels like Apple squeezed a teensy blog into the App Store.

And for the first time, games and apps are kept separate from one another. Sifting through these distinct lists is definitely more convenient than before, but it mostly benefits developers. With these lists now separate, apps won’t get pushed down in the Top Paid and Free lists by whatever the buzzy game of the moment is.

Intelligence everywhere

Apple’s pushing the concept of “intelligence” really hard with this release. With Core ML, developers will be able to weave machine learning features into their apps, and hopefully make them more responsive to our desires and behaviors. Too bad none of those apps are ready yet. There’s still one concrete example of Apple’s pronounced focus on intelligence here, though: Siri.

For one, it sounds profoundly more natural than before. There are still small tells that you’re talking to a collection of algorithms, but the line between listening to Siri and listening to an actual person is growing strangely thin. (You’ll notice the improved voice in other places too, like when Apple Maps is giving you directions.) Hell, Siri even sounds good when you ask it to translate something you’ve just said in English into Spanish, French, German or Chinese.

It’s also able to act on more unorthodox requests like “play me something sad,” which happens to launch a playlist called “Tearjerkers.” And if you’re tired of hearing Siri altogether, you can now type queries and commands to it instead. Unfortunately, you’ll have to disable the ability to talk to Siri in the process. Ideally, Apple wouldn’t be so binary about this, but there’s at least one workaround. Worst-case scenario, you can enable dictation for the keyboard, tap the button and start chatting with it.

If some of this sounds familiar, that’s because Siri actually has a lot in common with Google Assistant. While the feature gap between the two assistants is closing, Google is still better for answering general-purpose questions. Apple’s working on it, though. The company says Siri now pulls more answers from Wikipedia, which may be true, but you’ll still just get search results most of the time.

More important, the underlying intelligence that makes Siri work has been woven into other apps. Siri can help suggest stories you might be interested in inside the News app, and if you register for an event within Safari, Siri will add it to your calendar.

Getting social

Sometimes I wonder why Apple doesn’t just go all out and create its own social media service. Then I remember it did. It was called Ping, and it flopped hard. So it’s a little worrying to see Apple bake a stronger social element into Apple Music. At least the company’s approach this time is based on delivering features people actually use. In addition to creating a profile (which only partially mattered before), you can now share your playlists and follow other users. Sound familiar? Well, it would if you were a Spotify user. Apple’s attempts to stack up more favorably against major social services don’t end there, either.

With the addition of new features, iMessage has become an even more competent competitor to apps like Line and Facebook Messenger. You want stickers and stuff? Apple made it easier to skim through all of your installed iMessage apps, so you can send bizarro visuals to your friends quickly. You’ll get a handful of new, full-screen iMessage effects for good measure, and it’s not hard to see how the newfound ability to send money through iMessage itself could put a dent in Venmo’s fortunes. (Again, this feature doesn’t work in this build, so don’t bother trying to pay your friends back via text.)

And then there’s the most social tool of all: the camera app. The all-too-popular Portrait mode has apparently been improved, though I’ve been hard-pressed to tell the difference. (It’ll officially graduate from beta when iOS 11 launches later this year.) You’ll also find some new filters, but the most fun additions are the new Live Photo modes. You can take the tiny video clip associated with a Live Photo and make it loop, or reverse itself, or even blur to imitate a long exposure. Just know this: If you try to send these new Live Photos to anyone not on iOS 11, they just get a standard Live Photo.

The iPad experience

The new update brings welcome changes to iPhones, but it completely overhauls the way iPads work. This is a very good thing. Thanks in large part to the dock, which acts similar to the one in macOS, they’re much better multitaskers. You can pull up the dock while using any other app to either switch what you’re doing or get two apps running next to each other.

Just drag an app from the dock into the main part of the screen and it’ll start running in a thin, phone-like window. Most apps I’ve tested work just fine in this smaller configuration, since they’re meant to scale across different-sized displays. And you can move these windowed apps around as needed. To get them running truly side by side, just swipe down — that locks them into the Split View we’ve had since iOS 9.

Having those apps next to each other means you can drag and drop images, links or text from one window into the other. This feels like a revelation compared with having to copy and paste, or saving an image to your camera roll so you could insert it somewhere else. Now it just needs more buy-in from developers. Literally all I want to do sometimes is drag a photo from the new Files app into Slack to share it, but that’s just not possible yet.

Oh, right, there’s a Files app now. It’s another one of those things that do what the name implies: You can manage stuff you’ve saved directly on your iPad, along with other services like Dropbox and Google Drive. Those third-party integrations are sort of theoretical right now, though: Dropbox sync isn’t ready yet, and navigating your Google Drive doesn’t really work the way it’s supposed to. It’s a great idea in concept, and I can’t wait to try it when it actually works.

When you’re done dragging and dropping, one upward swipe on the dock launches the new multitasking view. The most annoying part of this new workflow isn’t how your recent apps are laid out as a grid instead of the usual cards. No, it’s that you can’t just swipe up on those cards to close an app like you used to; you have to long-press the card and hit a tiny X to do that. I get that it’s more akin to the way you delete apps, but the original gesture was so much more intuitive and elegant. Otherwise, sifting through open apps to pick up where you left off is a breeze.

That said, it’s odd to see the Control Center to the right of those app windows. Having all these extra control toggles shoved into the side of the screen looks kind of lousy to me, but don’t expect that to change anytime soon. Thankfully, there’s no shortage of thoughtful touches on display here. Consider the new on-screen keyboard: Instead of tapping a button to switch layouts for punctuation and numbers, you can just swipe down on a key to invoke the alternate character. I still haven’t gotten completely used to it, but I’m much faster than I was on day one. Hopefully, your muscle memory resets more easily than mine. The Notes app also has been updated with the ability to scan documents on the fly, which has already made my life easier when I’m filing work expenses.

And don’t forget about the Apple Pencil. It was always kind of a hassle going through multiple steps before I started writing a note — you had to unlock the iPad, open Notes and tap a button to enable pen input. Now I can just tap the lock screen with my Pencil and I’m already writing. Longtime readers probably know my handwriting sucks, but it’s generally clean enough for iOS to parse it, so I can search for things I’ve written straight from Spotlight. Tapping a result brings up my note, and, even in its unfinished state, it’s honestly a little crazy how fast Apple’s handwriting interpretation works. Then again, Apple is pushing on-device machine-learning processes like this in a big way, so if we’re lucky, behavior like this will be the rule, not the exception.

These are all valuable improvements, and I’m sure I’ll wind up using these features a lot. At this point, though, I still wouldn’t choose an iPad over a traditional notebook or convertible as my primary machine. The situation will improve as more app developers embed support for all these features into their software, but the foundation still doesn’t seem to be as flexible as I need.

The little things

As always, there are lots of little changes baked into these releases that don’t require a ton of words. Let’s see…

  • There’s a handy one-handed keyboard in iOS 11, but it’s disabled by default. I have no idea why.
  • When you’re on a FaceTime call, you can now take a screenshot of what you’re seeing without that pesky box with your own face in it.
  • Do Not Disturb While Driving is good at knowing when you’re using an iPhone in a car — just be sure to add a toggle for it in the Control Center for when you’re a passenger.
  • It’s basically impossible to miss when an app starts using your location: You’ll see a blue banner at the top of the screen telling you as much.

Even in its unfinished state, iOS 11 seems promising, especially for iPad users. I’ve always maintained that iOS 10 was a release meant to weave Apple’s sometimes disparate features and services into a platform that felt more whole. It was maybe a little unglamorous, but it was necessary. When iOS 11 launches in the fall, we’ll be able to get a better sense of its character and value.


The next video game controller is your voice

For all of modern gaming’s advances, conversation is still a fairly unsophisticated affair. Starship Commander, an upcoming virtual reality game on Oculus and SteamVR, illustrates both the promise and challenge of a new paradigm seeking to remedy that: using your voice.

In an early demo, I control a starship delivering classified goods across treacherous space. Everything is controlled by my voice: flying the ship is as simple as saying “computer, use the autopilot,” while my sergeant pops up in live action video to answer questions.

At one point, my ship is intercepted and disabled by a villain, who pops onto my screen and starts grilling me. After a little back and forth, it turns out he wants a deal: “Tell you what, you take me to the Delta outpost and I’ll let you live.”

I try to shift into character. “What if I attack you?” I say. No response, just an impassive yet expectant stare. “What if I say no?” I add. I try half a dozen responses, but — perhaps because I’m playing an early build of the game, or maybe it just can’t decipher my voice — I can’t seem to find the right phrase to unlock the next stage of play.

It’s awkward. My immersion in the game all but breaks down when my conversational partner doesn’t reciprocate. It’s a two-way street: if I’m going to dissect the game’s dialogue closely enough to craft an interesting reply, the game has to keep up with its end of the conversation too.

The situation deteriorates. The villain eventually gets fed up with my inability to carry the conversation. He blows up my ship, ending the game.

Yet there is potential for a natural back and forth conversation with characters. There are over 50 possible responses to one simple question from the sergeant — “Is there anything you’d like to know before we start the mission?” — says Alexander Mejia, the founder and creative director at Human Interact, which is designing the game. The system is powered by Microsoft’s Custom Speech Service (similar technology to Cortana), which sends players’ voice input to the cloud, parses it for true intent, and gets a response in milliseconds. Smooth voice control coupled with virtual reality means a completely hands-free, lifelike interface with almost no learning curve for someone who’s never picked up a gamepad.
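The pipeline Mejia describes (speech sent to the cloud, parsed for its true intent, a response back in milliseconds) ultimately boils down to mapping many possible phrasings onto a handful of canonical intents, then branching the scene on the result. A toy sketch of that matching step, with invented intents and phrasings — this is not the actual Custom Speech Service API:

```python
# Toy sketch of the intent-matching step: many player phrasings map to one
# canonical intent, and the scene branches on that intent. Intents and
# phrasings are invented; this is not the Custom Speech Service API.
from typing import Optional

INTENTS = {
    "use_autopilot": {"computer, use the autopilot", "engage autopilot"},
    "refuse": {"what if i say no", "no", "i refuse"},
    "threaten": {"what if i attack you", "i could just shoot you"},
}

def match_intent(utterance: str) -> Optional[str]:
    """Return the canonical intent for an utterance, or None if unmatched."""
    normalized = utterance.lower().strip(" ?!.")
    for intent, phrasings in INTENTS.items():
        if normalized in phrasings:
            return intent
    return None  # no branch matches: the "impassive stare" from the demo
```

A real system would match fuzzily over far more than exact strings, which is presumably why the game supports over 50 responses to a single question; the brittle exact-match version above also shows how a valid-sounding line can fall through the cracks, as happened in the demo.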

Speaking certainly feels more natural than selecting one of four dialogue options from a menu, as a traditional roleplaying game might provide. It makes me more attentive in conversation — I have to pay attention to characters’ monologues, picking up on details and inconsistencies while coming up with insightful questions that might take me down a serendipitous narrative route (much like real life). No, I don’t get to precisely steer a ship to uncharted planets since voice control, after all, is not ideal for navigating physical space. But, what this game offers instead is conversational exploration.


Video games have always been concerned with blurring the lines between art and real life.

Photorealistic 4K graphics, the disintegration of levels into vast open worlds, virtual reality placing players inside the skull of another person: The implicit end goal of every gaming advance seems to be to create an artificial reality indistinguishable from our own. Yet we communicate with these increasingly intelligent games using blunt tools. The joystick/buttons and keyboard/mouse combinations we use to speak to games do little to resemble the actions they represent. Even games that use lifelike controls, from the blocky plastic Time Crisis guns to Nintendo Switch Joy-Cons, still involve scrolling through menus and clicking on dialogue options. The next step is for us to talk to games.

While games that use the voice have cropped up over the years — Seaman on Sega’s Dreamcast, Lifeline on the PlayStation 2, Mass Effect 3 on the Xbox 360’s Kinect — their commands were often frustratingly clunky and audio input never seemed more than a novelty.

That may be coming to an end. Well-rated audio games such as Papa Sangre and Zombies, Run! have appeared on the iPhone. At E3 this month, Dominic Mallinson, a Sony senior vice president for research and development, counted natural language understanding among “some of the technologies that really excite us in the lab right now.”

More than anything, the rush by Microsoft, Google, Amazon and Apple to dominate digital assistants is pushing the entire voice computing field forward. In March, The Information reported that Amazon CEO Jeff Bezos wants gaming to be a “killer app” for Alexa, and the company has paid developers that produce the best-performing skills. Games are now the top category for Alexa, and the number of customers playing games on Echo devices has increased tenfold in the last year, according to an Amazon spokeswoman. “If I think back on the history of the world, there’s always been games,” says Paul Cutsinger, Amazon’s head of Alexa voice design education. “And it seems like the invention of every new technology comes along with games.”

“It seems like the invention of every new technology comes along with games.” – Paul Cutsinger, Amazon

Simply: If voice assistants become the next major computing platform, it’s logical that they will have their own games. “On most new platforms, games are one of the first things that people try,” says Aaron Batalion, a partner focused on consumer technology at venture capital firm Lightspeed Venture Partners. “It’s fun, engaging and, depending on the game mechanics, it’s often viral.” According to eMarketer, 35.6 million Americans will use a voice assistant device like Echo at least once a month this year, while 60.5 million Americans will use some kind of virtual voice assistant like Siri. The question is, what form will these new games take?

Gaming skills on Alexa today predominantly trace their lineage to radio drama — the serialized, voice-acted fiction of the early 20th century — including the RuneScape whodunnit One Piercing Note, the Batman mystery game The Wayne Investigation and the Sherlock Holmes adventure Baker Street Experience.

Earplay, meanwhile, has emerged as a leading publisher of audio games, receiving over $10,000 from Amazon since May, according to Jon Myers, who co-founded the company in 2013. Myers describes their work as “stories you play with your voice,” and the company crafts both its own games and the tools that enable others to do the same.

For instance, in Codename Cygnus, you play a James Bond-esque spy navigating foreign locales and villains with contrived European accents, receiving instructions via an earpiece. Meanwhile, in Half, you navigate a surreal Groundhog Day scenario, picking up clues on each playthrough to escape the infinitely repeating sequence of events.

“What you see with the current offerings from Earplay springs a lot out of what we did at Telltale Games over the last decade.”

Like a choose-your-own-adventure novel, these experiences intersperse chunks of narrative with pivotal moments where the player gets to make a decision, replying with verbal prompts. Plot the right course through an elaborate dialogue tree and you reach the end. The audio storytelling activates your imagination, yet there is little agency as a player: The story chugs along at its own pace until you reach each waypoint. You are not so much inhabiting a character or world as co-authoring a story with a narrator.
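That branching structure is, at bottom, a tree: nodes hold chunks of narrative, and edges are the verbal prompts a player may speak. A minimal sketch in Python (every node name and line of story text here is invented for illustration, not taken from any Earplay title):

```python
# Minimal dialogue-tree sketch: each node plays a chunk of narrative,
# then waits for one of a few accepted verbal prompts.
# All story text and node names are hypothetical.

TREE = {
    "start": {
        "narrative": "A knock at the door. Answer it, or hide?",
        "choices": {"answer": "ally", "hide": "alone"},
    },
    "ally": {"narrative": "A friend enters. The end.", "choices": {}},
    "alone": {"narrative": "The knocking fades. The end.", "choices": {}},
}

def step(node_id, player_says):
    """Advance to the next node if the utterance matches a choice;
    otherwise stay put, as voice games do when they don't understand you."""
    node = TREE[node_id]
    return node["choices"].get(player_says.strip().lower(), node_id)

def play(utterances):
    """Run a whole session, collecting the narrative the player hears."""
    node_id = "start"
    transcript = [TREE[node_id]["narrative"]]
    for said in utterances:
        node_id = step(node_id, said)
        transcript.append(TREE[node_id]["narrative"])
    return transcript
```

The player’s agency lives entirely in `choices`: any utterance outside that small set leaves the story idling at its current waypoint, which is exactly the “pre-approved moments” feel described above.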

“What you see with the current offerings from Earplay springs a lot out of what we did at Telltale Games over the last decade,” says Dave Grossman, Earplay’s chief creative officer. “I almost don’t even want to call them games. They’re sort of interactive narrative experiences, or narrative games.”

Grossman has had a long career considering storytelling in games. He is widely credited with creating the first game with voice acting all the way through — 1993’s Day of the Tentacle — and also worked on the Monkey Island series. Before arriving at Earplay, he spent a decade with Telltale Games, makers of The Wolf Among Us and The Walking Dead.

Earplay continues this genre’s bloodline: The goal is not immersion but storytelling. “I think [immersion] is an excellent thing for getting the audience involved in what you want, in making them care about it, but I don’t think it’s the be-all-end-all goal of all gaming,” says Grossman. “My primary goal is to entertain the audience. That’s what I care most about, and there are lots of ways to do that that don’t involve immersing them in anything.”

“My primary goal is to entertain the audience … There are lots of ways to do that that don’t involve immersing them in anything.”

In Earplay’s games, the “possibility space” — the degree to which the user can control the world — is kept deliberately narrow. This reflects Earplay’s philosophy. But it also reflects the current limitations of audio games. It’s hard to explore physical environments in detail because you can’t see them. Because Alexa cannot talk and listen at the same time, there can be no exchange of witticisms between player and computer, only each side talking at pre-approved moments. Voice seems like a natural interface, but it’s still essentially making selections from a multiple-choice menu. Radio drama may be an obvious inspiration for this new form; its overacted tropes and narrative conventions are also well-established for audiences. But right now, like radio narratives, the experience of these games still seems to be more about listening than speaking.

Human Interact


Untethered, too, is inspired by radio drama. Created by Numinous Games, which previously made That Dragon, Cancer, it runs on Google’s Daydream virtual reality platform, combining visuals with voice and a hand controller.

Virtual reality and voice control seem to be an ideal fit. On a practical level, speech obviates the need for novice gamers to figure out complicated button placements on a handheld controller they can’t see. On an experiential level, the combination of looking around a 360-degree environment and speaking to it naturally brings games one step closer to dissolving the fourth wall.

In the first two episodes, Untethered drops you first into a radio station in the Pacific Northwest and then into a driver’s seat, where you encounter characters whose faces you never see. Their stories slowly intertwine, but you only get to know them through their voices. Physically, you’re mostly rooted to one spot, though you can use the Daydream controller to put on records and answer calls. When given the cue, you speak: Your producer gets you to record a radio commercial, and you have to mediate an argument between a husband and wife in your back seat. “It’s somewhere maybe between a book and a movie because you’re not imagining every detail,” says head writer Amy Green.

The game runs on Google’s Cloud Speech platform, which recognizes voice input and may return 15 or 20 lines responding to whatever you might say, says Green. While those lines may steer the story in different directions, the outcome of the game is always the same. “If you never speak a word, you’re still gonna have a really good experience,” she says.

“It sounds like a daunting task, but you’d be surprised at how limited the types of questions that people ask are.” -Alexander Mejia, Human Interact

This is a similar design to Starship Commander’s: anticipating anything the player might say so as to record a pre-written, voice-acted response.

“It sounds like a daunting task, but you’d be surprised at how limited the types of questions that people ask are,” says Mejia of Human Interact. “What we found out is that 99% of people, when they get in VR, and you put them in the commander’s chair and you say, ‘You have a spaceship. Why don’t you go out and do something with it?’ People don’t try to go to the fast food joint or ask what the weather’s like outside. They get into the character.”

“The script is more like a funnel, where people all want to end up in about the same place,” he adds.
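Mejia’s “funnel” can be approximated by collapsing many possible phrasings into a handful of intents, each tied to one pre-recorded, voice-acted clip. A sketch assuming naive keyword matching — the keywords, clip filenames and the matching logic are all stand-ins, far cruder than whatever Starship Commander actually uses:

```python
# Collapse free-form utterances into a small set of intents, each mapped
# to a pre-recorded, voice-acted audio clip. Keywords and filenames are
# illustrative only, not drawn from any real game.

INTENTS = {
    "comply": (["take you", "agree", "fine"], "response_comply.wav"),
    "refuse": (["never", "no way", "refuse"], "response_refuse.wav"),
}
FALLBACK_CLIP = "response_confused.wav"

def match_intent(utterance):
    """Return (intent, clip) for the first intent whose keyword appears
    in the utterance, or a fallback when nothing matches."""
    text = utterance.lower()
    for intent, (keywords, clip) in INTENTS.items():
        if any(kw in text for kw in keywords):
            return intent, clip
    return "unknown", FALLBACK_CLIP
```

The fallback branch is where the funnel leaks: anything the writers failed to anticipate lands on a generic “confused” clip, which is exactly the failure mode described in the demo at the end of this piece.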

Yet for voice games to be fully responsive to anything a user might say, traditional scripts may not even be useful. The ideal system would use “full stack AI, not just the AI determining what you’re saying and then playing back voice lines, but the AI that you can actually have a conversation with,” says Mejia. “It passes the Turing test with flying colors; you have no idea if it’s a person.”

In this world, there are no script trees, only a soup of knowledge and events that an artificial intelligence picks and prunes from, reacting spontaneously to what the player says. Instead of a tightly scripted route with little room for expression, an ideal conversation could be fluid, veering off subject and back. Right now, instead of voice games being a freeing experience, it’s easy to feel hemmed in, trapped in the worst kind of conversation — overly structured with everyone just waiting their turn to talk.

An example of procedurally generated conversation can be found in Spirit AI’s Character Engine. The system creates characters with their own motivations and changing emotional states. The dialogue is not fully pre-written, but draws on a database of information — people, places, event timeline — to string whole sentences together itself.

“I would describe this as characters being able to improvise based on the thing they know about their knowledge of the world and the types of things they’ve been taught how to say,” says Mitu Khandaker, chief creative officer at Spirit AI and an assistant professor at New York University’s Game Center. Projects using the technology are already going into production, and should appear within two years, she says. If games like Codename Cygnus and Baker Street Experience represent a more structured side of voice gaming, Spirit AI’s engine reflects its freeform opposite.
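One way to picture dialogue that is not fully pre-written is sentence assembly: a template chosen by the character’s emotional state, filled with facts from its knowledge base. A toy sketch — Spirit AI has not published Character Engine’s internals, so every name, template and fact below is assumed:

```python
import random

# Toy procedural-dialogue sketch: a character fills a mood-dependent
# template with facts it knows. Purely illustrative; this is NOT how
# Spirit AI's Character Engine works internally.

KNOWLEDGE = {"person": "the captain", "place": "the engine room"}

TEMPLATES = {
    "calm": [
        "I last saw {person} near {place}.",
        "{person}? Try {place}.",
    ],
    "anxious": [
        "Why do you ask about {person}? Stay away from {place}!",
    ],
}

def speak(mood, rng=random):
    """Improvise a line: pick a template for the current mood and
    fill it with what the character currently knows."""
    template = rng.choice(TEMPLATES[mood])
    return template.format(**KNOWLEDGE)
```

Even this crude version shows the appeal: update `KNOWLEDGE` or the character’s mood mid-game and every subsequent line changes, with no branch of a script tree having been written for it.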

‘Untethered,’ a virtual reality title from Numinous Games.

Every game creator deals with a set of classic storytelling questions: Do they prefer to give their users liberty or control? Immersion or a well-told narrative? An experience led by the player or developer? Free will or meaning?

With the rise of vocal technology that allows us to communicate more and more seamlessly with games, these questions will become even more relevant.

“It’s nice to have this idea that there is an author, or a God, or someone who is giving meaning to things, and that the things over which I have no control are happening for a reason,” says Grossman. “There’s something sort of comforting about that: ‘You’re in good hands now. We’re telling a story, and I’m going to handle all this stuff, and you’re going to enjoy it. Just relax and enjoy that.'”

In Untethered, there were moments when I had no idea if my spoken commands meaningfully impacted the story at all. Part of me appreciated that this mimics how life actually works. “You just live your life and whatever happened that day was what was always going to happen that day,” Green says. But another part of me missed the clearly telegraphed forks in the road that indicated I was about to make a major decision. They are a kind of fantasy of perfect knowledge, of cause and effect, which don’t always appear in real life. Part of the appeal of games is that they simplify and structure the complexity of daily living.

“Not everybody is necessarily into games which are about violence or shooting but everyone understands what it is to talk to people. Everybody knows what it is to have a human engagement of some kind.” – Mitu Khandaker, Spirit AI

As developers wrestle with this balance, they will create a whole new form of game: one that’s centered on complex characters over physical environments; conversation and negotiation over action and traditional gameplay. The idea of what makes a game a game will expand even further. And the voice can reduce gaming’s barrier to entry for a general audience, not to mention the visually and physically impaired (the AbleGamers Foundation estimates 33 million gamers in the US have a disability of some kind). “Making games which are more about characters means that more people can engage with them,” says Khandaker. “Not everybody is necessarily into games which are about violence or shooting but everyone understands what it is to talk to people. Everybody knows what it is to have a human engagement of some kind.”

Still, voice gaming’s ability to bring a naturalistic interface to games matters little if it doesn’t work seamlessly, and that remains the industry’s biggest point to prove. A responsive if abstract gamepad is always preferable to unreliable voice control. An elaborate dialogue tree that obfuscates a lack of true intelligence beats a fledgling AI which can’t understand basic commands.

I’m reminded of this the second time I play the Starship Commander demo. Anticipating the villain’s surprise attack and ultimatum, I’m already resigned to the only option I know will advance the story: agree to his request.

“Take me to the Delta outpost and I’ll let you live,” he says.

“Sure, I’ll take you,” I say.

This time he doesn’t stare blankly at me. “Fire on the ship,” he replies, to my surprise.

A volley of missiles and my game is over, again. I take off my headset to find David Kuelz, a writer on the game who set up the demo, laughing. He watched the computer convert my speech to text.

“It mistook ‘I’ll take you’ for ‘fuck you,'” he says. “That’s a really common response, actually.”


The best wireless outdoor home security camera

By Rachel Cericola

This post was done in partnership with The Wirecutter, a buyer’s guide to the best technology. When readers choose to buy The Wirecutter’s independently chosen editorial picks, it may earn affiliate commissions that support its work. Read the full article here.

After spending almost three months looking, listening, adjusting angles, and deleting over 10,000 push notifications and emails, we’ve decided that the Netgear Arlo Pro is the best DIY outdoor Wi-Fi home security camera you can get. Like the other eight units we tested, the Arlo Pro lets you keep an eye on your property and provides smartphone alerts whenever there’s motion. However, it’s one of the few options with built-in rechargeable batteries to make it completely wireless, so it’s easy to place and move. It also delivers an excellent image, clear two-way audio, practical smart-home integration, and seven days of free cloud storage.

Who should get this

A Wi-Fi surveillance camera on your front porch, over your garage, or attached to your back deck can provide a peek at what really goes bump in the night, whether that’s someone stealing packages off your steps or raccoons going through garbage cans. It can alert you to dangers and can create a record of events. It should also help you identify someone (and whether they’re a welcome or unwelcome guest) or just let you monitor pets or kids when you’re not out there with them.

How we picked and tested

Photo: Rachel Cericola

During initial research, we compiled a huge list of outdoor security cameras recommended by professional review sites like PCMag, Safewise, and Safety.com, as well as those available on popular online retailers. We then narrowed this list by considering only Wi-Fi–enabled cameras that will alert your smartphone or tablet whenever motion is detected. We also eliminated all devices that required a networked video recorder (NVR) to capture video, focusing only on products that could stand alone.

Once we had a list of about 27 cameras, we went through Amazon and Google to see what kind of feedback was available. We ultimately decided on a test group based on price, features, and availability.

We mounted our test group to a board outside of our New England house, pointed them at the same spot, and exposed them all to the same lighting conditions and weather. The two exceptions were cameras integrated into outdoor lighting fixtures, both of which were installed on the porch by my husband, a licensed electrician. All nine cameras were connected to the same Verizon FiOS network via a Wi-Fi router indoors.

Besides good Wi-Fi, you may also need a nearby outlet. Only three of the cameras we tested offered the option to use battery power. Most others required an AC connection, which means you won’t be able to place them just anywhere.

We downloaded each camera’s app to an iPhone 5, an iPad, and a Samsung Galaxy S6. The cameras spent weeks guarding our front door, alerting us to friends, family members, packages, and the milkman. Once we got a good enough look at those friendly faces, we tilted the entire collection outward to see what sort of results we got facing the house across the street, which is approximately 50 feet away. To learn more about how we picked and tested, please see our full guide.

Our pick

The Arlo Pro can handle snow, rain, and everything else, and runs for months on a battery charge. Photo: Rachel Cericola

The Arlo Pro is a reliable outdoor Wi-Fi camera that’s compact and completely wireless, thanks to a removable, rechargeable battery that, based on our testing, should provide at least a couple of months of operation on a charge. It’s also the only device on our list that offers seven days of free cloud storage, and packs in motion- and audio-triggered recordings for whenever you get around to reviewing them.

The Arlo Pro requires a bridge unit, known as the Base Station, which needs to be powered and connected to your router. The Base Station is the brains behind the system, but also includes a piercing 100-plus-decibel siren, which can be triggered manually through the app or automatically by motion and/or audio.

With a 130-degree viewing angle and 720p resolution, the Arlo Pro provided clear video footage during both day and night, and the two-way audio was easy to understand on both ends. The system also features the ability to set rules, which can trigger alerts for motion and audio. You can adjust the level of sensitivity so that you don’t get an alert or record a video clip every time a car drives by. You can also set up alerts based on a schedule or geofencing using your mobile device, but you can’t define custom zones for monitoring. All of those controls are easy to find in the Arlo app, which is available for iOS and Android devices.

If you’re looking to add the Arlo Pro to a smart-home system, the camera currently works with Stringify, Wink, and IFTTT (“If This Then That”). SmartThings certification was approved and will be included in a future app update. The Arlo Pro is also compatible with ADT Canopy for a fee.

Runner-up

The Nest Cam Outdoor records continuously and produces better images than most of the competition, but be prepared to pay extra for features other cameras include for free. Photo: Rachel Cericola

The Nest Cam Outdoor is a strong runner-up. It records continuous 1080p video, captures to the cloud 24/7, and can actually distinguish between people and other types of motion. Like the Nest thermostat, the Outdoor Cam is part of the Works With Nest program, which means it can integrate with hundreds of smart-home products. It’s also the only model we tested that has a truly weatherproof cord. However, that cord and the ongoing subscription cost, which runs $100 to $300 per year for the Nest Aware service, are what kept the Nest Cam Outdoor from taking the top spot.

Like our top pick, the Nest Cam Outdoor doesn’t have an integrated mount. Instead, the separate mount is magnetic, so you can attach and position the camera easily. Although it has a lot of flexibility in movement, it needs to be placed within reach of an outlet, which can be a problem outside the house. That said, the power cord is quite lengthy. The camera has a 10-foot USB cable attached, but you can get another 15 feet from the included adapter/power cable.

The Nest Cam Outdoor’s 1080p images and sound were extremely impressive, both during the day and at night. In fact, this camera delivered some of the clearest, most detailed images during our testing, with a wide 130-degree field of view and an 8x digital zoom.

The Nest app is easy to use and can integrate with other Nest products, such as indoor and outdoor cameras, the Nest thermostat, and the Nest Protect Smoke + CO detector. You can set the camera to turn on and off at set times of day, go into away mode based on your mobile device’s location, and more.

This guide may have been updated by The Wirecutter. To see the current recommendation, please go here.

Note from The Wirecutter: When readers choose to buy our independently chosen editorial picks, we may earn affiliate commissions that support our work.


Sling TV extends cloud DVR to iOS devices

Sling TV’s cloud DVR service is now available for iPhone and iPad. The streaming service’s DVR “First Look” option costs an additional $5 per month and gives you 50 hours of DVR storage.

The iOS devices now join the growing list of DVR-supported systems, which includes AirTV players, Amazon Fire TVs and tablets, Android TVs and mobile devices, Apple TVs, Roku streaming players and TVs, Xbox consoles and Windows 10 devices.

Sling TV began beta testing its cloud DVR option last year and started rolling it out to users in April. This month, the feature got an upgrade with an added option to protect recorded shows from being deleted.

However, there are still a number of channels that don’t allow DVR recordings. Those channels are ABC, Freeform, Disney Channel, Disney XD, Disney JR, ESPN, ESPN2, ESPN3, ESPN Deportes, ESPN Goal Line, ESPN Buzzer Beater, ESPN Bases Loaded and the SEC Network as well as any on-demand only channel.

An app update released today will enable the new service on iOS devices for those with the “First Look” subscription.

Source: Sling TV (iTunes)


The Morning After: Thursday, June 22nd 2017

Hey, good morning! You look fabulous.

Welcome to Thursday morning. We’re reliving the ’90s as Sega launches a selection of classic hits, both with ads and without. We’re also talking Instagram and its stealth shills, and new emoji. We hope you like fairies.


It should focus less on surprise and more on delight.
Apple’s paranoia about leaks is misplaced

Apple’s inability to keep its secrets is so bad that even its internal presentation about confidentiality leaked. It reportedly conducted an hour-long briefing titled “Stopping Leakers — Keeping Confidential at Apple” for about 100 employees to make sure they understood the importance of not leaking information. But that concern is misplaced: Clamping down on leaks won’t help Apple’s bottom line.


The games are free, but you can pay $2 to drop the advertisements.
Sega Forever makes Genesis classics free on mobile

The Sega Forever collection is five titles meant to begin “a retro revolution that will transport players back through two decades of console gaming.” Starting today, the 1991 version of Sonic the Hedgehog, fan-favorite RPG Phantasy Star II, classic arcade-style beat ’em up Comix Zone, platformer Kid Chameleon and Greek mythology-themed beat ’em up Altered Beast will be available on Google Play and iTunes as free ad-supported games.


Can Travis Kalanick’s resignation fix Uber?
Uber’s future is still tied to its founder

Uber’s disruptive effect on the taxi business went hand in hand with throwing out the rulebook. Some of the rules it avoided, however, included strict background checks on drivers and safety laws to ensure that drivers didn’t work for too long, according to Uber co-founder Garrett Camp, who sits as chairperson of the company’s board. He said the team “failed to build some of the systems that every company needs to scale successfully.” Those systems included restrictions on employees sexually harassing their colleagues and preventing engineers from developing tools to hinder law enforcement investigations. Following Travis Kalanick’s resignation, can Uber change enough?


Your next set of emoji includes zombies, vampires, fairies and dinosaurs.
The latest emoji update is a playful one

Finally, the monocle emoji.


A new tool could make hidden ads more obvious — if shills use it.
Instagram gives social media influencers the benefit of the doubt

Instagram is adding a new disclosure tool to its social media platform. The “Paid partnership with [enter brand name here]” post format is designed for users who want to advertise products on their page, letting them easily disclose when one of their posts is an ad. Instagram says this is an effort to bring the platform some much-needed transparency. The feature is set to roll out in the coming weeks to a “small number” of creators and businesses, according to the company. The question remains: Will influencers actually use the feature? And what will happen if they don’t?


The monsters caught with cheating tools may not behave normally.
‘Pokémon Go’ will flag creatures caught using cheats

Niantic has decided that forcing Pokémon Go cheaters to a life of catching Pidgeys isn’t quite enough punishment. Now, any Pokémon caught using “third-party services that circumvent normal gameplay” will be marked with a slash in people’s inventories and “may not behave as expected.”

But wait, there’s more…

  • Airbus imagines a faster helicopter with wings
  • Google gets closer to building its own city in San Jose
  • Lenovo’s pro workstation is as light as a MacBook Air
  • An iPhone is your only option on Virgin Mobile
  • Self-driving shuttles are coming to U of M this fall
  • Todoist ‘Twist’ is supposed to be better than email, less annoying than Slack

The Morning After is a new daily newsletter from Engadget designed to help you fight off FOMO. Who knows what you’ll miss if you don’t subscribe.


Emojis for zombies, T-Rex and Colbert are almost here

Your phone chats are about to get more… fantastical. Right on cue, the Unicode Consortium has released its promised batch of emoji and text characters. The finalized set of 56 emoji (up from 48 when we last reported) includes a slew of outlandish people and beasts, including zombies, vampires, fairies and dinosaurs. It also does more to accommodate women with emoji for breastfeeding and the hijab, while Stephen Colbert fans might be happy with the familiar-looking raised eyebrow.

Outside of the emoji, the update also introduces a Bitcoin character and is more adept at handling less common languages or written requirements.

Don’t expect to use all these new characters right away. Your device operating system will need an update to recognize them, and there’s a good chance you’ll be waiting a while. You’ll likely have to wait until Android O to see them on a Google-powered phone, while the iPhone and iPad crowd will likely have to sit tight until iOS 11. There’s nothing stopping companies from adopting the new Unicode pack, however — it’s now just a question of everyone getting with the program.

Via: Emojipedia

Source: Unicode
