If you want to start your week with a little bit of tech shade, check out Samsung’s new Galaxy commercial. The ad follows a young man through the years as he meets and falls for a young woman. However, the focus of each touching moment in their blossoming relationship is how his iPhone is inferior to her Samsung Galaxy, and Samsung makes sure to put every single downside of owning an iPhone on blast. That includes waiting in lines for the new model, inadequate photo storage space, lack of water resistance and, of course, the headphone dongle. There’s even a not-so-subtle swipe at the iPhone X’s notch. And if all of that wasn’t enough, there’s the ad’s title — “Samsung Galaxy: Growing Up.”
You can watch the ad below and if you want to compare the latest iPhone and Samsung Galaxy models yourself, you can check out our reviews of the iPhone 8 and 8 Plus, the iPhone X, the Samsung Galaxy S8 and S8 Plus and the Samsung Galaxy Note 8.
Just because a tech company has announced a product doesn’t mean employees are free to share or talk about it before release — just ask Microsoft. And unfortunately, one Apple engineer has learned that the hard way. Apple has reportedly fired an iPhone team member after his daughter Brooke posted a hands-on video showing off his iPhone X before launch. Brooke took down the video as soon as Apple requested it, but the takedown came too late to prevent the clip from going viral, leading to seemingly endless reposts and commentary. We’ve asked Apple for comment on the firing.
In a follow-up video (below), Brooke said she and her father understood the decision and weren’t angry at Apple. And it’s important to stress that this wasn’t a garden variety iPhone X. As an employee device, it had sensitive information like codenames for unreleased products and staff-specific QR codes. Combine that with Apple’s general prohibition of recording video on campus (even at relatively open spaces like Caffè Macs) and this wasn’t so much about maintaining the surprise as making sure that corporate secrets didn’t get out. Apple certainly didn’t want to send the message that recording pre-release devices was acceptable.
All the same, it’s hard not to sympathize — the engineer had poured his heart into the iPhone X, only to be let go the week before the handset reaches customers. And while he’s likely to land on his feet (“we’re good,” Brooke said), his daughter is clearly distraught by the abuse hurled toward her and her father. The outcome isn’t going to change here, unfortunately. But the incident might at least help others avoid losing their jobs simply because they were a little too eager to share their work.
A-ha’s classic video for “Take On Me” was the result of painstaking effort — it took 16 weeks to rotoscope the frames, creating that signature blend between the real and hand-drawn worlds. Now, however, you only need an iPhone to recreate the look yourself. Trixi Studios has shown off an augmented reality iOS app that produces the “Take On Me” look in your own home. The proof-of-concept software makes do with virtual versions of A-ha’s Morten Harket and the pipe-wielding thugs, but its effect is more convincing than you might think.
In many ways, the app (which isn’t publicly available, alas) is a showcase of how easy it’s becoming to implement augmented reality. Trixi wrote the software using Apple’s ARKit, a software toolbox that gives iOS developers a relatively easy way to weave AR content into their apps without having to build an engine from scratch. You certainly don’t need ARKit to create the “Take On Me” effect, but a framework like that makes it possible for even small outfits to produce slick results. That, in turn, could lead to developers treating AR less as a novelty and more as an important creative tool.
For all of modern gaming’s advances, conversation is still a fairly unsophisticated affair. Starship Commander, an upcoming virtual reality game on Oculus and SteamVR, illustrates both the promise and challenge of a new paradigm seeking to remedy that: using your voice.
In an early demo, I control a starship delivering classified goods across treacherous space. Everything is controlled by my voice: flying the ship is as simple as saying “computer, use the autopilot,” while my sergeant pops up in live action video to answer questions.
At one point, my ship is intercepted and disabled by a villain, who pops onto my screen and starts grilling me. After a little back and forth, it turns out he wants a deal: “Tell you what, you take me to the Delta outpost and I’ll let you live.”
I try to shift into character. “What if I attack you?” I say. No response, just an impassive yet expectant stare. “What if I say no?” I add. I try half a dozen responses, but — perhaps because I’m playing an early build of the game, or maybe it just can’t decipher my voice — I can’t seem to find the right phrase to unlock the next stage of play.
It’s awkward. My immersion in the game all but breaks down when my conversational partner doesn’t reciprocate. It’s a two-way street: If I’m going to dissect the game’s dialogue closely and craft interesting replies, it has to keep up with my end of the conversation too.
The situation deteriorates. The villain eventually gets fed up with my inability to carry the conversation. He blows up my ship, ending the game.
Yet there is potential for a natural back and forth conversation with characters. There are over 50 possible responses to one simple question from the sergeant — “Is there anything you’d like to know before we start the mission?” — says Alexander Mejia, the founder and creative director at Human Interact, which is designing the game. The system is powered by Microsoft’s Custom Speech Service (similar technology to Cortana), which sends players’ voice input to the cloud, parses it for true intent, and gets a response in milliseconds. Smooth voice control coupled with virtual reality means a completely hands-free, lifelike interface with almost no learning curve for someone who’s never picked up a gamepad.
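The loop described above (transcribe speech in the cloud, map the transcript to one of many scripted intents, play the matching voice-acted response) can be sketched in a few lines of Python. This is an illustrative toy, not Human Interact's or Microsoft's actual pipeline; the intents, keywords and responses are all invented:

```python
# A toy intent matcher: map a speech transcript to the closest scripted
# intent by keyword overlap. Real systems also normalize punctuation and
# use trained language models rather than raw word sets.

INTENTS = {
    "use_autopilot": {"keywords": {"autopilot", "fly", "engage"},
                      "response": "Autopilot engaged."},
    "ask_mission": {"keywords": {"mission", "briefing", "objective"},
                    "response": "Your orders: deliver the cargo to the Delta outpost."},
}

def match_intent(transcript):
    """Return the intent name with the largest keyword overlap, or None."""
    words = set(transcript.lower().split())
    best, best_score = None, 0
    for name, intent in INTENTS.items():
        score = len(words & intent["keywords"])
        if score > best_score:
            best, best_score = name, score
    return best

intent = match_intent("Computer, use the autopilot")
print(INTENTS[intent]["response"])  # Autopilot engaged.
```

With 50-plus responses hanging off a single question, the real work is in authoring enough intents that nearly any reasonable utterance lands on one of them.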
Speaking certainly feels more natural than selecting one of four dialogue options from a menu, as a traditional roleplaying game might offer. It makes me more attentive in conversation — I have to pay attention to characters’ monologues, picking up on details and inconsistencies while coming up with insightful questions that might take me down a serendipitous narrative route (much like real life). No, I don’t get to precisely steer a ship to uncharted planets, since voice control, after all, is not ideal for navigating physical space. But what this game offers instead is conversational exploration.
Video games have always been concerned with blurring the lines between art and real life.
Photorealistic 4K graphics, the disintegration of levels into vast open worlds, virtual reality placing players inside the skull of another person: The implicit end goal of every gaming advance seems to be to create an artificial reality indistinguishable from our own. Yet we communicate with these increasingly intelligent games using blunt tools. The joystick/buttons and keyboard/mouse combinations we use to speak to games do little to resemble the actions they represent. Even games that use lifelike controls — from the blocky plastic Time Crisis guns to Nintendo Switch Joy-Cons — still involve scrolling through menus and clicking on dialogue options. The next step is for us to talk to games.
While games that use the voice have cropped up over the years — Seaman on Sega’s Dreamcast, Lifeline on the PlayStation 2, Mass Effect 3 on the Xbox 360’s Kinect — their commands were often frustratingly clunky and audio input never seemed more than a novelty.
That may be coming to an end. Well-rated audio games such as Papa Sangre and Zombies, Run! have appeared on the iPhone. At E3 this month, Dominic Mallinson, a Sony senior vice president for research and development, counted natural language understanding among “some of the technologies that really excite us in the lab right now.”
More than anything, the rush by Microsoft, Google, Amazon and Apple to dominate digital assistants is pushing the entire voice computing field forward. In March, The Information reported that Amazon CEO Jeff Bezos wants gaming to be a “killer app” for Alexa, and the company has paid developers who produce the best-performing skills. Games are now the top category for Alexa, and the number of customers playing games on Echo devices has increased tenfold in the last year, according to an Amazon spokeswoman. “If I think back on the history of the world, there’s always been games,” says Paul Cutsinger, Amazon’s head of Alexa voice design education. “And it seems like the invention of every new technology comes along with games.”
Simply put: If voice assistants become the next major computing platform, it’s logical that they will have their own games. “On most new platforms, games are one of the first things that people try,” says Aaron Batalion, a partner focused on consumer technology at venture capital firm Lightspeed Venture Partners. “It’s fun, engaging and, depending on the game mechanics, it’s often viral.” According to eMarketer, 35.6 million Americans will use a voice assistant device like Echo at least once a month this year, while 60.5 million Americans will use some kind of virtual voice assistant like Siri. The question is, what form will these new games take?
Gaming skills on Alexa today predominantly trace their lineage to radio drama — the serialized, voice-acted fiction of the early 20th century — including RuneScape whodunnit One Piercing Note, Batman mystery game The Wayne Investigation and Sherlock Holmes adventure Baker Street Experience.
Earplay, meanwhile, has emerged as a leading publisher of audio games, receiving over $10,000 from Amazon since May, according to Jon Myers, who co-founded the company in 2013. Myers describes their work as “stories you play with your voice,” and the company crafts both its own games and the tools that enable others to do the same.
For instance, in Codename Cygnus, you play a James Bond-esque spy navigating foreign locales and villains with contrived European accents, receiving instructions via an earpiece. Meanwhile, in Half, you navigate a surreal Groundhog Day scenario, picking up clues on each playthrough to escape the infinitely repeating sequence of events.
Like a choose-your-own-adventure novel, these experiences intersperse chunks of narrative with pivotal moments where the player gets to make a decision, replying with verbal prompts. Plot the right course through an elaborate dialogue tree and you reach the end. The audio storytelling activates your imagination, yet there is little agency as a player: The story chugs along at its own pace until you reach each waypoint. You are not so much inhabiting a character or world as co-authoring a story with a narrator.
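The branching structure described above is, at heart, a small graph: chunks of narrative joined by the verbal replies that select between them. A toy sketch, with invented node names and story text:

```python
# A minimal dialogue tree: each node holds a chunk of narrative plus the
# replies that lead to child nodes. Unrecognized replies leave the player
# where they are, which is exactly the "stuck at a waypoint" feeling
# described in the text.

STORY = {
    "start": {"text": "A stranger hands you a sealed envelope.",
              "choices": {"open it": "opened", "refuse": "refused"}},
    "opened": {"text": "Inside is a train ticket to Zurich. The end.",
               "choices": {}},
    "refused": {"text": "The stranger vanishes into the crowd. The end.",
                "choices": {}},
}

def advance(node, reply):
    """Move to the child node matching the player's spoken reply."""
    choices = STORY[node]["choices"]
    return choices.get(reply.lower().strip(), node)

node = advance("start", "Open it")
print(STORY[node]["text"])  # Inside is a train ticket to Zurich. The end.
```

However natural the voice interface feels, a structure like this is still a multiple-choice menu underneath; the player's freedom extends only as far as the authored branches.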
“What you see with the current offerings from Earplay springs a lot out of what we did at Telltale Games over the last decade,” says Dave Grossman, Earplay’s chief creative officer. “I almost don’t even want to call them games. They’re sort of interactive narrative experiences, or narrative games.”
Grossman has had a long career considering storytelling in games. He is widely credited with creating the first game with voice acting all the way through — 1993’s Day of the Tentacle — and also worked on the Monkey Island series. Before arriving at Earplay, he spent a decade with Telltale Games, makers of The Wolf Among Us and The Walking Dead.
Earplay continues this genre’s bloodline: The goal is not immersion but storytelling. “I think [immersion] is an excellent thing for getting the audience involved in what you want, in making them care about it, but I don’t think it’s the be-all-end-all goal of all gaming,” says Grossman. “My primary goal is to entertain the audience. That’s what I care most about, and there are lots of ways to do that that don’t involve immersing them in anything.”
In Earplay’s games, the “possibility space” — the degree to which the user can control the world — is kept deliberately narrow. This reflects Earplay’s philosophy. But it also reflects the current limitations of audio games. It’s hard to explore physical environments in detail because you can’t see them. Because Alexa cannot talk and listen at the same time, there can be no exchange of witticisms between player and computer, only each side talking at pre-approved moments. Voice seems like a natural interface, but it’s still essentially making selections from a multiple-choice menu. Radio drama may be an obvious inspiration for this new form; its overacted tropes and narrative conventions are also well-established for audiences. But right now, like radio narratives, the experience of these games seems to be more about listening than speaking.
Untethered, too, is inspired by radio drama. Created by Numinous Games, which previously made That Dragon, Cancer, it runs on Google’s Daydream virtual reality platform, combining visuals with voice and a hand controller.
Virtual reality and voice control seem to be an ideal fit. On a practical level, speech obviates the need for novice gamers to figure out complicated button placements on a handheld controller they can’t see. On an experiential level, the combination of being able to look around a 360-degree environment and speak to it naturally brings games one step closer to dissolving the fourth wall.
In the first two episodes, Untethered drops you first into a radio station in the Pacific Northwest and then into a driver’s seat, where you encounter characters whose faces you never see. Their stories slowly intertwine, but you only get to know them through their voices. Physically, you’re mostly rooted to one spot, though you can use the Daydream controller to put on records and answer calls. When given the cue, you speak: Your producer gets you to record a radio commercial, and you have to mediate an argument between the husband and wife in your back seat. “It’s somewhere maybe between a book and a movie because you’re not imagining every detail,” says head writer Amy Green.
The game runs on Google’s Cloud Speech platform, which recognizes voice input and may return 15 or 20 lines in response to whatever you might say, says Green. While those lines may send the story meandering in different directions, the outcome of the game is always the same. “If you never speak a word, you’re still gonna have a really good experience,” she says.
Starship Commander follows a similar design: anticipate anything the player might say, and record a pre-written, voice-acted response for it.
“It sounds like a daunting task, but you’d be surprised at how limited the types of questions that people ask are,” says Mejia of Human Interact. “What we found out is that 99% of people, when they get in VR, and you put them in the commander’s chair and you say, ‘You have a spaceship. Why don’t you go out and do something with it?’ People don’t try to go to the fast food joint or ask what the weather’s like outside. They get into the character.”
“The script is more like a funnel, where people all want to end up in about the same place,” he adds.
Yet for voice games to be fully responsive to anything a user might say, traditional scripts may not even be useful. The ideal system would use “full stack AI, not just the AI determining what you’re saying and then playing back voice lines, but the AI that you can actually have a conversation with,” says Mejia. “It passes the Turing test with flying colors; you have no idea if it’s a person.”
In this world, there are no script trees, only a soup of knowledge and events that an artificial intelligence picks and prunes from, reacting spontaneously to what the player says. Instead of a tightly scripted route with little room for expression, an ideal conversation could be fluid, veering off subject and back. Right now, instead of voice games being a freeing experience, it’s easy to feel hemmed in, trapped in the worst kind of conversation — overly structured with everyone just waiting their turn to talk.
An example of procedurally generated conversation can be found in Spirit AI’s Character Engine. The system creates characters with their own motivations and changing emotional states. The dialogue is not fully pre-written, but draws on a database of information — people, places, event timeline — to string whole sentences together itself.
“I would describe this as characters being able to improvise based on the things they know about the world and the types of things they’ve been taught how to say,” says Mitu Khandaker, chief creative officer at Spirit AI and an assistant professor at New York University’s Game Center. Projects using the technology are already going into production, and should appear within two years, she says. If games like Codename Cygnus and Baker Street Experience represent a more structured side of voice gaming, Spirit AI’s engine reflects its freeform opposite.
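One way to picture the difference from pre-written dialogue is a character that composes each line from its current emotional state plus a small knowledge base. The sketch below is a deliberately crude illustration of that idea, not Spirit AI's Character Engine; every name and string in it is invented:

```python
# A crude "improvising" character: rather than replaying a canned line,
# it assembles a sentence from what it knows (a fact database) and how
# it currently feels (a mood that picks the sentence opener).

CHARACTER = {
    "mood": "anxious",
    "knowledge": {"the station": "the generator failed there last night"},
}

OPENERS = {
    "anxious": "I don't like this, but",
    "calm": "For what it's worth,",
}

def improvise(topic):
    """Compose a line about the topic from mood plus known facts."""
    fact = CHARACTER["knowledge"].get(topic)
    if fact is None:
        return "I don't know anything about that."
    return f"{OPENERS[CHARACTER['mood']]} {fact}."

print(improvise("the station"))
```

Scale the fact database up, add motivations that shift the mood over time, and the same line about the station comes out differently on every playthrough, which is the "improvisation" Khandaker describes.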
‘Untethered,’ a virtual reality title from Numinous Games.
Every game creator deals with a set of classic storytelling questions: Do they prefer to give their users liberty or control? Immersion or a well-told narrative? An experience led by the player or developer? Free will or meaning?
With the rise of vocal technology that allows us to communicate more and more seamlessly with games, these questions will become even more relevant.
“It’s nice to have this idea that there is an author, or a God, or someone who is giving meaning to things, and that the things over which I have no control are happening for a reason,” says Grossman. “There’s something sort of comforting about that: ‘You’re in good hands now. We’re telling a story, and I’m going to handle all this stuff, and you’re going to enjoy it. Just relax and enjoy that.'”
In Untethered, there were moments when I had no idea if my spoken commands meaningfully impacted the story at all. Part of me appreciated that this mimics how life actually works. “You just live your life and whatever happened that day was what was always going to happen that day,” Green says. But another part of me missed the clearly telegraphed forks in the road that indicated I was about to make a major decision. They are a kind of fantasy of perfect knowledge, of cause and effect, which don’t always appear in real life. Part of the appeal of games is that they simplify and structure the complexity of daily living.
As developers wrestle with this balance, they will create a whole new form of game: one that’s centered on complex characters over physical environments; conversation and negotiation over action and traditional gameplay. The idea of what makes a game a game will expand even further. And the voice can reduce gaming’s barrier to entry for a general audience, not to mention the visually and physically impaired (the AbleGamers Foundation estimates 33 million gamers in the US have a disability of some kind). “Making games which are more about characters means that more people can engage with them,” says Khandaker. “Not everybody is necessarily into games which are about violence or shooting but everyone understands what it is to talk to people. Everybody knows what it is to have a human engagement of some kind.”
Still, voice gaming’s ability to bring a naturalistic interface to games matters little if it doesn’t work seamlessly, and that remains the industry’s biggest point to prove. A responsive if abstract gamepad is always preferable to unreliable voice control. An elaborate dialogue tree that obfuscates a lack of true intelligence beats a fledgling AI which can’t understand basic commands.
I’m reminded of this the second time I play the Starship Commander demo. Anticipating the villain’s surprise attack and ultimatum, I’m already resigned to the only option I know will advance the story: agree to his request.
“Take me to the Delta outpost and I’ll let you live,” he says.
“Sure, I’ll take you,” I say.
This time he doesn’t stare blankly at me. “Fire on the ship,” he replies, to my surprise.
A volley of missiles and my game is over, again. I take off my headset to find David Kuelz, a writer on the game who set up the demo, laughing. He had watched the computer convert my speech to text.
“It mistook ‘I’ll take you’ for ‘fuck you,'” he says. “That’s a really common response, actually.”
The squabbling between Amazon and Apple might soon be over — at least, on the TV front. Amazon’s Video app might finally be heading to the Apple TV this summer, giving consumers an easy way to watch Amazon’s streaming content on the set-top box, Recode reports. Up until now, you were forced to use AirPlay to send Amazon’s streaming video titles to the Apple TV. That’s been one of the Apple TV’s biggest downsides since it debuted in 2015, together with a lack of 4K support.
The deal between Apple and Amazon might also lead to other changes. Amazon, for example, stopped selling the Apple TV in 2015 because it didn’t support its Prime Video service. That likely made a big dent in sales for Apple, especially as newer devices from Roku hit the market with 4K support. If Apple actually plans to release a newer 4K Apple TV this year, as rumors suggest, then landing back on Amazon would be essential.
At this point, it’s unclear if anything will change for Amazon’s Video apps on iOS. You can currently use them to watch Amazon Prime videos, as well as things you’ve already rented or purchased, but you can’t actually make those transactions within the app. That’s similar to how Amazon handles digital purchases on its Kindle and Comixology iOS apps. By forgoing in-app purchases on Apple’s ecosystem, Amazon avoids having to give the iPhone maker a cut of the revenue.
A nationwide manhunt for Steve Stephens, the 37-year-old from Cleveland who uploaded a video to Facebook of himself shooting an elderly stranger in the head, came to an end today. Stephens committed suicide after a brief car chase with state police in Erie, Pennsylvania. His crime, which took place this past Sunday, sparked outrage not only because of the violence itself, but also because of the way Facebook handled the situation. It took the social network over two hours to take the video down, although it claims this was because it wasn’t flagged immediately by other users. Facebook says Stephens’ actions weren’t reported until he used the Live feature to stream his murder confession, about an hour and 45 minutes after the shooting video was uploaded. His account has since been suspended.
“This is a horrific crime and we do not allow this kind of content on Facebook,” the company said in a statement. “We work hard to keep a safe environment on Facebook, and are in touch with law enforcement in emergencies when there are direct threats to physical safety.” As it stands, Facebook relies heavily on people flagging graphic content (the same way it does sketchy ads), which means individuals have to actually see something dreadful before they can flag it. As Wired reported earlier this year, Facebook has opted not to use algorithms to censor videos like this before they’re posted, claiming that it doesn’t want to be accused of violating freedom-of-speech rights. But, as these types of cases mushroom, the company may be forced to change its stance sooner than later.
A photo of Steve Stephens.
It could be hard for the company to build an algorithm that can successfully tell the difference between a video of someone being murdered and a clip from, say, a Jason Bourne movie. But, according to Facebook VP of Operations Justin Osofsky, his team is constantly exploring new technologies that can help create a safer environment on the site. Osofsky pointed to artificial intelligence in particular, which he says is already helping prevent certain graphic videos from being reshared in their entirety. Facebook’s explanation of this is confusing, though: It says people “are still able to share portions of the videos in order to condemn them or for public awareness, as many news outlets are doing in reporting the story online and on television.”
The company didn’t clarify how the feature works when we reached out, but it’s clear a video like Stephens’ should be removed completely and immediately. And, as a result of this weekend’s events, Osofsky said Facebook is reviewing its reporting system to ensure that people can flag explicit videos and other content “as easily and as quickly as possible.”
Unfortunately for Facebook, Stephens’ case isn’t the first time it has faced scrutiny over people using its tools to promote violence. Back in March, Chicago police charged a 14-year-old boy after he used Facebook Live to broadcast the sexual assault of a 15-year-old girl, which was just one of many gruesome clips to hit Facebook recently. Per The Wall Street Journal, more than 60 sensitive videos, including physical beatings, suicides and murders, have been streamed on Facebook Live since it launched to the public last year. This raises the question: Should the Federal Communications Commission regulate social networks the way it does TV? In 2015, former FCC Chairman Tom Wheeler said there were no plans to do so, claiming he wasn’t sure the agency’s authority extended to “picking and choosing among websites.”
The FCC, now headed by Ajit Pai under President Donald Trump, did not respond to our request for comment on the matter. That said, a source inside a major video-streaming company thinks services such as Facebook Live, Periscope and YouTube Live would benefit from having a “delay” safeguard in place. This would be similar in practice to how TV networks handle live events, which typically feature a seven-second delay in case something unexpected happens. Remember when Justin Timberlake exposed Janet Jackson’s nipple during the Super Bowl XXXVIII halftime show in 2004? This delay system is designed to prevent scenes like that from showing up on your TV.
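The delay safeguard described above amounts to a fixed-length buffer: frames only reach viewers after sitting in the window long enough for a moderator to intervene. A minimal sketch of that idea, with frame counts standing in for the seven seconds (invented for illustration, not any platform's actual implementation):

```python
from collections import deque

class DelayedFeed:
    """Hold incoming frames in a fixed window before broadcasting them,
    so a moderator has time to kill the feed before viewers see anything."""

    def __init__(self, delay_frames):
        self.buffer = deque()
        self.delay = delay_frames
        self.killed = False

    def push(self, frame):
        """Ingest one frame; return the frame to broadcast, or None."""
        if self.killed:
            return None
        self.buffer.append(frame)
        if len(self.buffer) > self.delay:
            return self.buffer.popleft()
        return None  # still filling the delay window

    def kill(self):
        """Moderator cut: drop everything still inside the window."""
        self.killed = True
        self.buffer.clear()

feed = DelayedFeed(delay_frames=3)
out = [feed.push(f) for f in range(6)]
print(out)  # [None, None, None, 0, 1, 2]
```

The trade-off is obvious: the bigger the window, the less "live" the stream feels, which is presumably why live-streaming platforms have resisted it.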
“Facebook has really jumped very quickly into the video space, which is exciting, but it’s taking a fail-fast approach to it,” the source, who asked to remain anonymous, said. “In the desire to push Live out to as many people as possible, there were a lot of corners that were cut. And when you take a fail-fast approach to something like live-streaming video, it’s not surprising that you come across these scenarios in which you have these huge ethical dilemmas of streaming a murder, sexual violence or something else.”
As for why individuals are using these platforms to broadcast their heinous acts, Janis L. Whitlock, a research scientist at Cornell University’s College of Human Ecology, says it’s hard to pinpoint the reason because there’s no way you can do an experimental control. She says there’s a good chance Stephens was struggling with a mental illness and saw his victim, 74-year-old Robert Godwin Sr., as an object in an ongoing fantasy. Whitlock says that while there’s a good side to these social networks, they also tend to bring out the worst in people, especially those who are craving attention: “They make the most ugly of us, the most ugly in us, visible.”
“The fact that you can have witnesses, like billions of people witness something in a tiny period of time, it has to have an enormous impact on the human psyche,” she says. “How does that interact with the things that people do, or choose not to do? We don’t know yet, but it does, absolutely. I have no doubt about that as a psychologist.” Whitlock says companies like Facebook must start taking some civic responsibility, adding that there needs to be a conversation between it and other internet giants about how their products “interact with who humans are” and how they can expose someone’s limitations and potentials.
“How is it that we can use and structure these things to really amplify all the ways in which we’re amazing,” Whitlock says, “and not the ways in which we’re disgusting?”
Back in May at its I/O developer conference, Google introduced a pair of new communication apps: Allo for text-based communication and Duo for video calling. Allo is the more interesting of the two, with its deep usage of the intelligent Google Assistant bot — but Duo is the one we’ll get to try first. Google hopes it’ll stand out among a bevy of other communications apps thanks to a laser focus on providing a high-quality mobile experience. It’s available today for both the iPhone and Android phones.
“The genesis of Duo was we really saw a gap when it came to video calling,” Nick Fox, Google VP of communications products, said. “We heard lots of [user] frustration, which led to lack of use — but we also heard a lot of desire and interest as well.” That frustration came in the form of wondering who among your contacts you could have video calls with, wondering whether it would work over the wireless connection you had available and wondering if you needed to be calling people with the same type of phone or OS as yours.
To battle that, Google made Duo cross-platform and dead simple to use. You can only call one person at a time, and there are barely any UI elements or features to speak of. But from a technology standpoint, it’s meant to work for anyone with a smartphone. “It shouldn’t just work on high-end devices,” said Fox. “It should work on high-end devices and on $50 Android phones in India.”
Google designed it to work across a variety of network connections as well. The app is built to provide HD video when on good networks and to gracefully and seamlessly adjust quality if things get worse. You can even drop down to a 2G connection and have video pause but have the audio continue. “We’re always prioritizing audio to make sure that you don’t drop communications entirely,” Fox said.
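The degradation behavior Fox describes can be pictured as a simple policy: step video quality down as measured bandwidth falls, and drop video entirely before ever cutting audio. The thresholds below are invented for illustration, not Google's actual numbers:

```python
# A toy graceful-degradation policy: pick (video_quality, audio_on) for
# an estimated bandwidth, dropping video before audio is ever cut.
# Thresholds are made up; real clients also adapt continuously rather
# than in fixed tiers.

def choose_streams(kbps):
    """Return (video_quality, audio_on) for an estimated bandwidth."""
    if kbps >= 2500:
        return ("hd", True)
    if kbps >= 800:
        return ("sd", True)
    if kbps >= 100:
        return (None, True)   # 2G-class link: pause video, keep audio
    return (None, False)      # effectively offline

print(choose_streams(3000))  # ('hd', True)
print(choose_streams(150))   # (None, True)
```

Re-evaluating a policy like this as conditions change is also what lets a call survive a handoff between WiFi and cellular without dropping.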
All of this is meant to work in the background, leaving the user with a clutter-free UI and basically no buttons or settings to mess with. Once you sign into the Duo app with your phone number (no Google login needed here), you’ll see what your front-facing camera sees. Below that are a handful of circles representing your most recent calls in the lower third of the screen. You can drag that icon list up and scroll through your full list of contacts; if people in your phonebook don’t have the app, you can tap their number to send an SMS and invite them to Duo.
For those who do have Duo, tapping their number initiates a video call. Once you’re on the call, you just see the person you’re talking to, with your video feed in a small circle, not unlike Apple’s FaceTime. Tapping the screen reveals the only UI elements: a hang-up button, mute button and a way to flip between the front and back cameras.
Duo is even simpler than FaceTime, and far simpler than Google’s own Hangouts app, which the company says will now be more focused on business and enterprise users. In that focus on simplicity, Fox and his team left out a number of features you might find in other video-calling apps. Chief among them is that Duo can’t do group calls; it’s meant only for one-to-one calling. Google also decided against making desktop apps for Duo or Allo.
“We forced ourselves to think exclusively about the phone and design for the phone,” Fox says. “The desktop experience is something we may build over time. But if you look around the world at the billions of people that are connected to the internet, the vast majority have one device, and that device is a phone. So it was critical for us to really nail that use case.”
That’s part of the reason Google is tying Duo to a phone number rather than your Google account: Your phone already has your contacts built in, while many people might not curate or manage their Google contacts list. This way, you can see exactly who in your usual phone book is using Duo (and if they’re not, you can send them an SMS invite).
Perhaps the most clever feature Google included is Knock Knock. If you’re using an Android phone and someone calls, you’ll see a preview of their video feed on the lock screen. The person calling can wave or gesture or make a silly face to try and draw you into the conversation, and Fox says that makes the person on the receiving end a lot more likely to answer with a smile rather than a look of confusion as they wonder if the video is working properly. For the sake of privacy, you’ll only see a video feed from people in your contacts list, and you can turn the feature off entirely if you prefer.
It’s all part of Google’s goal to make the app not just simple but “human” as well. “It’s something that you don’t generally hear from Google when we talk about our apps,” Fox admits, “but video calling is a very human experience, so it’s very important that you feel that in the app as well.”
All of this adds up to a product that is refreshingly uncluttered and has a clear sense of purpose. It doesn’t fundamentally change the video-calling experience, but it is frictionless and very easy to use on a moment’s notice. Under the hood, the app does live up to its promise of updating the call based on changing network conditions — you can even flip between WiFi and cellular networks without dropping a call. There’s not a whole lot to say about the experience, and that’s probably for the best. You can make calls to people in your contacts list easily, not worry too much about dropping them, and then get on with your life.
That ease of use is what Google hopes will pull users into the app. It does indeed feel simpler than most other options out there. But given the huge variety of communication apps available and Google’s strange historical difficulty with the space, it’s not hard to imagine Duo being a niche app. That won’t be for lack of effort — Duo actually does make video chat easier than making a phone call.
Most BBM users finally have access to the app’s video calling capability. BlackBerry has released the feature for Android and iOS in Asia-Pacific, which is apparently home to its biggest userbase. The company said it made cross-platform video calls available in the US and Canada first because it wanted to fix bugs before the feature reached more people. Now that video calling is stable, the phonemaker can roll it out to the rest of the world.
While BBM isn’t as popular as its newer, shinier rivals like Messenger or WhatsApp anymore, BlackBerry is still developing new features for it. In fact, this release is but a small part of a bigger rollout. Later this summer, the company will launch the capability to register for an account using a phone number, among other things. Android users will be able to share larger videos, as well, while those on iOS will be able to mute group notifications.