Well, Apple Watch fans have more to look forward to than just a new operating system. According to a new report from Bloomberg, Apple will release a version of its Watch with cellular network support built-in by year’s end, relieving users of the need to carry their iPhones around. Three words: it’s about time.
Rumors of a cellular Apple Watch are nothing new, and the whole concept should sound very familiar by now. After all, Samsung and LG have had LTE-enabled smartwatches for years, and the latter developed one such wearable to help launch Android Wear 2.0 earlier this year. While it’s not yet clear what Apple plans to let people do with these mobile data connections, it’s likely that users will be able to send messages and make phone and FaceTime Audio calls without being tethered to an iPhone.
Interestingly, Intel is said to be providing the modem for the new Apple Watch, which isn’t a huge surprise — Apple tapped the chipmaker for modems used in certain versions of the iPhone 7 and 7 Plus. Given the Watch’s small size, Apple and Intel may opt to use a digital “eSIM” rather than a traditional plastic SIM card as well. That could signal a similar decision for future iPhones, which would have potentially huge ramifications for how such smartphones (and their data plans) are sold.
If nothing else, though, use of an eSIM would likely preserve the Apple Watch’s compact footprint. Consider LG’s flagship Watch Sport: it was the more powerful of the two smartwatches that debuted alongside Android Wear 2.0, and the space required to fit a physical SIM card inside helped make it big and somewhat unwieldy. That’s not really Apple’s style, especially since the company’s new Watch is said to benefit from an all-new form factor.
If true, Apple’s next step in wearables may be an iterative one. Still, if the company’s most recent earnings release is any indication, demand for the Watch is still going strong. According to CEO Tim Cook, Watch sales grew 50 percent year over year — seriously, Tim, would some hard numbers now and then really kill you?
Ever since our close look at an alleged render of the next iPhone back in May, rumors of 3D face scanning and a large screen-to-body ratio have been flying about. Today, we finally bring you some solid evidence about these features, courtesy of — surprise, surprise — Apple itself. After digging up new details about the Apple HomePod in its leaked firmware, iOS developer Steve Troughton-Smith came across code that confirms the use of infrared face unlock in BiometricKit for the next iPhone. More interestingly, in the same firmware, fellow developer Guilherme Rambo found an icon that suggests a near-bezel-less design — one that matches rumored schematics going as far back as late May. For those in doubt, Troughton-Smith assured us that this icon is “specific to D22, the iPhone that has Pearl (Face ID).”
These discoveries are by far the best hints at what to expect from the “iPhone 8,” which is expected to launch later this year. We also learned from our exclusive render that the phone may feature a glass back along with wireless charging this time. That said, there’s still no confirmation on the fate of Touch ID: while the HomePod firmware code seems to suggest that it’s sticking around, there’s no indication as to whether it’s ditching the usual Home button execution in favor of an under-display fingerprint scanner (as shown off by Qualcomm and Vivo at MWC Shanghai). Given how poorly Apple has been guarding the secrets of its next smartphone this time around, chances are we’ll hear more very soon.
Rumormongers have long claimed that Apple might include face recognition in the next iPhone, but it’s apparently much more than a nice-to-have feature… to the point where it might overshadow the Touch ID fingerprint reader. Bloomberg sources understand that the new smartphone will include a depth sensor that can scan your face with uncanny levels of accuracy and speed. It reportedly unlocks your device inside of “a few hundred milliseconds,” even if the phone is lying flat on a table. Unlike the iris scanner in the Galaxy S8, you wouldn’t need to hold the phone close to your face. The 3D scanning is said to improve security, too, by collecting more biometric data than Touch ID and reducing the chances that the scanner would be fooled by a photo.
Does that sound good to you? You’re not alone. The leakers claim that Apple ultimately wants you to use face recognition instead of Touch ID. It’s not clear whether this will replace Touch ID, though. While the tipsters say that Apple has run into “challenges” putting a fingerprint reader under the screen, they don’t rule it out entirely. There are conflicting reports: historically reliable analyst Ming-Chi Kuo is skeptical that under-screen Touch ID will make the cut, while a representative at chip maker TSMC supposedly claimed that it’s present. Your face may be the preferred biometric sign-in approach rather than the only one.
The Bloomberg scoop largely recaps existing rumors, including an all-screen design (with just a tiny cut-out at the top for a camera, sensors and speaker), a speedier 10-nanometer processor and a dedicated chip for AI-related tasks. However, it adds one more treat: if accurate, the new iPhone will get an OLED version of the fast-refreshing ProMotion display technology you see in the current-generation iPad Pro. So long as the leaks are accurate, it’s becoming increasingly clear that the next iPhone represents a massive hardware upgrade, even if the software is relatively conservative.
For all of modern gaming’s advances, conversation is still a fairly unsophisticated affair. Starship Commander, an upcoming virtual reality game on Oculus and SteamVR, illustrates both the promise and challenge of a new paradigm seeking to remedy that: using your voice.
In an early demo, I control a starship delivering classified goods across treacherous space. Everything is controlled by my voice: flying the ship is as simple as saying “computer, use the autopilot,” while my sergeant pops up in live action video to answer questions.
At one point, my ship is intercepted and disabled by a villain, who pops onto my screen and starts grilling me. After a little back and forth, it turns out he wants a deal: “Tell you what, you take me to the Delta outpost and I’ll let you live.”
I try to shift into character. “What if I attack you?” I say. No response, just an impassive yet expectant stare. “What if I say no?” I add. I try half a dozen responses, but — perhaps because I’m playing an early build of the game, or maybe it just can’t decipher my voice — I can’t seem to find the right phrase to unlock the next stage of play.
It’s awkward. My immersion in the game all but breaks down when my conversational partner doesn’t reciprocate. It’s a two-way street: if I’m expected to listen closely to the game’s dialogue and craft interesting replies, the game has to keep up with my side of the conversation too.
The situation deteriorates. The villain eventually gets fed up with my inability to carry the conversation. He blows up my ship, ending the game.
Yet there is potential for a natural back-and-forth conversation with characters. There are over 50 possible responses to one simple question from the sergeant — “Is there anything you’d like to know before we start the mission?” — says Alexander Mejia, the founder and creative director at Human Interact, the studio designing the game. The system is powered by Microsoft’s Custom Speech Service (technology similar to Cortana’s), which sends players’ voice input to the cloud, parses it for intent, and returns a response in milliseconds. Smooth voice control coupled with virtual reality means a completely hands-free, lifelike interface with almost no learning curve for someone who’s never picked up a gamepad.
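The loop Mejia describes — capture speech, resolve it to an intent, play back a matching line — can be sketched in miniature. This toy Python version stands in for the cloud service with a naive keyword matcher; the intents and lines below are invented for illustration, not taken from the game.

```python
import re

# Invented intents, each with keywords that signal it.
INTENTS = {
    "use_autopilot": {"autopilot", "fly", "pilot"},
    "ask_mission": {"mission", "objective", "orders"},
}

# Invented pre-recorded lines, keyed by intent.
RESPONSES = {
    "use_autopilot": "Autopilot engaged.",
    "ask_mission": "Deliver the classified cargo to the Delta outpost.",
    None: "I'm sorry, I didn't catch that.",  # fallback: no intent matched
}

def classify(utterance):
    """Return the intent whose keyword set best overlaps the utterance."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    best, best_score = None, 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best

def respond(utterance):
    return RESPONSES[classify(utterance)]
```

The interesting part is the fallback branch: it is exactly the blank stare described above, which is why the breadth of the recognized-phrase set makes or breaks the illusion.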
Speaking certainly feels more natural than selecting one of four dialogue options from a menu, as a traditional roleplaying game might provide. It makes me more attentive in conversation — I have to pay attention to characters’ monologues, picking up on details and inconsistencies while coming up with insightful questions that might take me down a serendipitous narrative route (much like real life). No, I don’t get to precisely steer a ship to uncharted planets since voice control, after all, is not ideal for navigating physical space. But, what this game offers instead is conversational exploration.
Video games have always been concerned with blurring the lines between art and real life.
Photorealistic 4K graphics, the disintegration of levels into vast open worlds, virtual reality placing players inside the skull of another person: The implicit end goal of every gaming advance seems to be to create an artificial reality indistinguishable from our own. Yet we communicate with these increasingly intelligent games using blunt tools. The joystick/buttons and keyboard/mouse combinations we use to speak to games do little to resemble the actions they represent. Even games that use lifelike controls from the blocky plastic Time Crisis guns to Nintendo Switch Joy-Cons still involve scrolling through menus and clicking on dialogue options. The next step is for us to talk to games.
While games that use the voice have cropped up over the years — Seaman on Sega’s Dreamcast, Lifeline on the PlayStation 2, Mass Effect 3 on the Xbox 360’s Kinect — their commands were often frustratingly clunky and audio input never seemed more than a novelty.
That may be coming to an end. Well-rated audio games such as Papa Sangre and Zombies, Run! have appeared on the iPhone. At E3 this month, Dominic Mallinson, a Sony senior vice president for research and development, listed natural language understanding among “some of the technologies that really excite us in the lab right now.”
More than anything, the rush by Microsoft, Google, Amazon and Apple to dominate digital assistants is pushing the entire voice computing field forward. In March, The Information reported that Amazon CEO Jeff Bezos wants gaming to be a “killer app” for Alexa, and the company has paid developers that produce the best performing skills. Games are now the top category for Alexa, and the number of customers playing games on Echo devices has increased tenfold in the last year, according to an Amazon spokeswoman. “If I think back on the history of the world, there’s always been games,” says Paul Cutsinger, Amazon’s head of Alexa voice design education. “And it seems like the invention of every new technology comes along with games.”
Simply: If voice assistants become the next major computing platform, it’s logical that they will have their own games. “On most new platforms, games are one of the first things that people try,” says Aaron Batalion, a partner focused on consumer technology at venture capital firm Lightspeed Venture Partners. “It’s fun, engaging and, depending on the game mechanics, it’s often viral.” According to eMarketer, 35.6 million Americans will use a voice assistant device like Echo at least once a month this year, while 60.5 million Americans will use some kind of virtual voice assistant like Siri. The question is, what form will these new games take?
Gaming skills on Alexa today predominantly trace their lineage to radio drama — the serialized voice acted fiction of the early 20th century — including RuneScape whodunnit One Piercing Note, Batman mystery game The Wayne Investigation and Sherlock Holmes adventure Baker Street Experience.
Earplay, meanwhile, has emerged as a leading publisher of audio games, receiving over $10,000 from Amazon since May, according to Jon Myers, who co-founded the company in 2013. Myers describes their work as “stories you play with your voice,” and the company crafts both their own games and the tools that enable others to do the same.
For instance, in Codename Cygnus, you play a James Bond-esque spy navigating foreign locales and villains with contrived European accents, receiving instructions via an earpiece. Meanwhile, in Half, you navigate a surreal Groundhog Day scenario, picking up clues on each playthrough to escape the infinitely repeating sequence of events.
Like a choose-your-own-adventure novel, these experiences intersperse chunks of narrative with pivotal moments where the player gets to make a decision, replying with verbal prompts. Plot the right course through an elaborate dialogue tree and you reach the end. The audio storytelling activates your imagination, yet there is little agency as a player: The story chugs along at its own pace until you reach each waypoint. You are not so much inhabiting a character or world as co-authoring a story with a narrator.
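The waypoint structure described above is, at bottom, a small data structure: narrative chunks joined by verbal choices. A minimal Python sketch (the nodes and lines are invented, not drawn from any Earplay title):

```python
# Invented dialogue tree: each node is a chunk of narration plus the
# spoken replies it recognizes and the node each reply leads to.
TREE = {
    "start": {
        "text": "A stranger blocks the road. Do you 'greet' him or 'hide'?",
        "choices": {"greet": "parley", "hide": "ambush"},
    },
    "parley": {"text": "He tips his hat and lets you pass.", "choices": {}},
    "ambush": {"text": "He spots you in the bushes anyway.", "choices": {}},
}

def play(node, replies):
    """Walk the tree with a list of spoken replies; return the narration heard."""
    transcript = [TREE[node]["text"]]
    for reply in replies:
        choices = TREE[node]["choices"]
        if reply in choices:
            node = choices[reply]
            transcript.append(TREE[node]["text"])
        else:
            transcript.append("(an impassive stare)")  # unrecognized reply
    return transcript
```

However natural the spoken replies feel, every path through `play` was authored in advance — which is the sense in which voice here is still a multiple-choice menu.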
“What you see with the current offerings from Earplay springs a lot out of what we did at Telltale Games over the last decade,” says Dave Grossman, Earplay’s chief creative officer. “I almost don’t even want to call them games. They’re sort of interactive narrative experiences, or narrative games.”
Grossman has had a long career considering storytelling in games. He is widely credited with creating the first game with voice acting all the way through — 1993’s Day of the Tentacle — and also worked on the Monkey Island series. Before arriving at Earplay, he spent a decade with Telltale Games, makers of The Wolf Among Us and The Walking Dead.
Earplay continues this genre’s bloodline: The goal is not immersion but storytelling. “I think [immersion] is an excellent thing for getting the audience involved in what you want, in making them care about it, but I don’t think it’s the be-all-end-all goal of all gaming,” says Grossman. “My primary goal is to entertain the audience. That’s what I care most about, and there are lots of ways to do that that don’t involve immersing them in anything.”
In Earplay’s games, the “possibility space” — the degree to which the user can control the world — is kept deliberately narrow. This reflects Earplay’s philosophy. But it also reflects the current limitations of audio games. It’s hard to explore physical environments in detail because you can’t see them. Because Alexa cannot talk and listen at the same time, there can be no exchange of witticisms between player and computer, only each side talking at pre-approved moments. Voice seems like a natural interface, but it’s still essentially making selections from a multiple-choice menu. Radio drama may be an obvious inspiration for this new form; its overacted tropes and narrative conventions are also well-established for audiences. But right now, like radio narratives, the experience of these games seems to still be more about listening than speaking.
Untethered, too, is inspired by radio drama. Created by Numinous Games, which previously made That Dragon, Cancer, it runs on Google’s Daydream virtual reality platform, combining visuals with voice and a hand controller.
Virtual reality and voice control seem to be an ideal fit. On a practical level, speech obviates the need for novice gamers to figure out complicated button placements on a handheld controller they can’t see. On an experiential level, the combination of being able to look around a 360-degree environment and speaking to it naturally brings games one step closer to dissolving the fourth wall.
In the first two episodes, Untethered drops you first into a radio station in the Pacific Northwest and then into a driver’s seat, where you encounter characters whose faces you never see. Their stories slowly intertwine, but you only get to know them through their voices. Physically, you’re mostly rooted to one spot, though you can use the Daydream controller to put on records and answer calls. When given the cue, you speak: your producer gets you to record a radio commercial, and you have to mediate an argument between a husband and wife in your back seat. “It’s somewhere maybe between a book and a movie because you’re not imagining every detail,” says head writer Amy Green.
The game runs on Google’s Cloud Speech platform, which recognizes voice input and may return 15 or 20 lines in response to whatever you might say, says Green. While those lines may steer the story in different directions, the outcome of the game is always the same. “If you never speak a word, you’re still gonna have a really good experience,” she says.
This is a similar design to Starship Commander: anticipating anything the player might say, so as to record a pre-written, voice-acted response.
“It sounds like a daunting task, but you’d be surprised at how limited the types of questions that people ask are,” says Mejia of Human Interact. “What we found out is that 99% of people, when they get in VR, and you put them in the commander’s chair and you say, ‘You have a spaceship. Why don’t you go out and do something with it?’ People don’t try to go to the fast food joint or ask what the weather’s like outside. They get into the character.”
“The script is more like a funnel, where people all want to end up in about the same place,” he adds.
Yet for voice games to be fully responsive to anything a user might say, traditional scripts may not even be useful. The ideal system would use “full stack AI, not just the AI determining what you’re saying and then playing back voice lines, but the AI that you can actually have a conversation with,” says Mejia. “It passes the Turing test with flying colors; you have no idea if it’s a person.”
In this world, there are no script trees, only a soup of knowledge and events that an artificial intelligence picks and prunes from, reacting spontaneously to what the player says. Instead of a tightly scripted route with little room for expression, an ideal conversation could be fluid, veering off subject and back. Right now, instead of voice games being a freeing experience, it’s easy to feel hemmed in, trapped in the worst kind of conversation — overly structured with everyone just waiting their turn to talk.
An example of procedurally generated conversation can be found in Spirit AI’s Character Engine. The system creates characters with their own motivations and changing emotional states. The dialogue is not fully pre-written, but draws on a database of information — people, places, event timeline — to string whole sentences together itself.
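Engine internals aside, the basic move — assembling a line from a knowledge base plus an emotional state, rather than retrieving a pre-written one — can be illustrated with a toy sketch. Nothing here reflects Spirit AI’s actual implementation; the facts, templates and moods are invented.

```python
# Invented knowledge base: things the character knows about the world.
KNOWLEDGE = {
    "market": "the stalls close at dusk",
    "harvest": "it was a poor one this year",
}

# Invented mood-dependent phrasing templates.
TEMPLATES = {
    "cheerful": "Ah, the {topic}? You know, {detail}!",
    "sullen": "The {topic}... {detail}.",
}

def improvise(topic, mood):
    """String a sentence together from what the character knows and feels."""
    detail = KNOWLEDGE.get(topic)
    if detail is None:
        return "I know nothing about that."
    return TEMPLATES[mood].format(topic=topic, detail=detail)
```

Because the line is composed at runtime, the same fact can come out differently as the character’s emotional state shifts — the improvisation Khandaker describes below.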
“I would describe this as characters being able to improvise based on the thing they know about their knowledge of the world and the types of things they’ve been taught how to say,” says Mitu Khandaker, chief creative officer at Spirit AI and an assistant professor at New York University’s Game Center. Projects using the technology are already going into production, and should appear within two years, she says. If games like Codename Cygnus and Baker Street Experience represent a more structured side of voice gaming, Spirit AI’s engine reflects its freeform opposite.
‘Untethered,’ a virtual reality title from Numinous Games.
Every game creator deals with a set of classic storytelling questions: Do they prefer to give their users liberty or control? Immersion or a well-told narrative? An experience led by the player or developer? Free will or meaning?
With the rise of vocal technology that allows us to communicate more and more seamlessly with games, these questions will become even more relevant.
“It’s nice to have this idea that there is an author, or a God, or someone who is giving meaning to things, and that the things over which I have no control are happening for a reason,” says Grossman. “There’s something sort of comforting about that: ‘You’re in good hands now. We’re telling a story, and I’m going to handle all this stuff, and you’re going to enjoy it. Just relax and enjoy that.'”
In Untethered, there were moments when I had no idea if my spoken commands meaningfully impacted the story at all. Part of me appreciated that this mimics how life actually works. “You just live your life and whatever happened that day was what was always going to happen that day,” Green says. But another part of me missed the clearly telegraphed forks in the road that indicated I was about to make a major decision. They are a kind of fantasy of perfect knowledge, of cause and effect, which don’t always appear in real life. Part of the appeal of games is that they simplify and structure the complexity of daily living.
As developers wrestle with this balance, they will create a whole new form of game: one that’s centered on complex characters over physical environments; conversation and negotiation over action and traditional gameplay. The idea of what makes a game a game will expand even further. And the voice can reduce gaming’s barrier to entry for a general audience, not to mention the visually and physically impaired (the AbleGamers Foundation estimates 33 million gamers in the US have a disability of some kind). “Making games which are more about characters means that more people can engage with them,” says Khandaker. “Not everybody is necessarily into games which are about violence or shooting but everyone understands what it is to talk to people. Everybody knows what it is to have a human engagement of some kind.”
Still, voice gaming’s ability to bring a naturalistic interface to games matters little if it doesn’t work seamlessly, and that remains the industry’s biggest point to prove. A responsive if abstract gamepad is always preferable to unreliable voice control. An elaborate dialogue tree that obfuscates a lack of true intelligence beats a fledgling AI which can’t understand basic commands.
I’m reminded of this the second time I play the Starship Commander demo. Anticipating the villain’s surprise attack and ultimatum, I’m already resigned to the only option I know will advance the story: agree to his request.
“Take me to the Delta outpost and I’ll let you live,” he says.
“Sure, I’ll take you,” I say.
This time he doesn’t stare blankly at me. “Fire on the ship,” he replies, to my surprise.
A volley of missiles and my game is over, again. I take off my headset to find David Kuelz, a writer on the game who set up the demo, laughing. He had watched the computer convert my speech to text.
“It mistook ‘I’ll take you’ for ‘fuck you,'” he says. “That’s a really common response, actually.”
Google, which has taken a hands-off approach to Android hardware until recently, may be getting more involved in smartphone production. It’s reportedly investing up to $875 million in LG Display to develop a stable supply of flexible OLED screens for its Pixel phones, according to reports from Korea’s Yonhap News and Electronic Times (ET). That would help ease supply problems for the next-gen device, as the current model has been nearly impossible to find.
The search giant would invest a trillion won ($875 million) and possibly more to secure a production line dedicated to its own smartphones. It may also reserve some flexible OLED screens for other devices like a rumored pair of “Pixel” smartwatches. LG Display is reportedly mulling the offer, which would be a strategic investment and not just an order deposit. If it signs on, curved screens for the Pixel would likely be built on LG’s $1.3 billion flexible OLED line in Gumi, North Gyeongsang Province.
With its Nexus phones, Google let partners Huawei, LG and HTC control all aspects of the devices’ design and hardware. However, with the Pixel and Pixel XL, Google took charge of the design and thus, to some degree, the hardware. That was both a good and a bad thing — the phone was generally acknowledged as the best-ever Google device, but it was only released in the US, UK, Australia, Germany and Canada. Even in those nations, it was pretty damn hard to find.
If the news is accurate (and with supply rumors, that’s a big “if”) then Google would be playing favorites with one Android supplier, LG, over another, Samsung. On the other hand, Samsung might be quite okay with that, considering it’s about to launch its own curved OLED Galaxy S8 smartphone and possibly supply the flexible OLED display for Apple’s next iPhone 8. With OLED tech seemingly the only thing that manufacturers want, it makes sense for Google to cut a deal with LG, which isn’t faring so well with its own devices.
It’s that time of year, folks. Rumors of what the next iPhone will be like are coming in hot and heavy. Last week, well-connected Apple analyst Ming-Chi Kuo noted that the new handsets would nix the home button for a touch-friendly “function area.” Now there’s another bit of info. In a KGI Securities report detailed by 9to5Mac, the analyst explains that the upcoming OLED iPhone will feature a “revolutionary” front camera that’s capable of sensing 3D space via infrared.
More specifically, the report explains that the newfangled camera can combine depth information with 2D images for things like facial recognition, iris recognition and, perhaps most importantly, 3D selfies. Given the previous report about the home button being put out to pasture, there will need to be a replacement for Touch ID. Rumors indicate that either facial recognition or a fingerprint reader embedded in the display would assist with unlocking the device. This new report would point more to the former method.
The report also explains a bit about how the 3D front-facing camera would be used in gaming scenarios. The camera could be used to replace an in-game character’s head or face with that of the user and those 3D selfies could be destined for augmented reality.
It’s no surprise to get word of potential depth-sensing camera tech from Apple. The company nabbed PrimeSense, an outfit that co-developed the original Kinect for Xbox, in 2013. This latest KGI report says PrimeSense algorithms will allow the hardware to sense the depth and location of objects in its field of view. An earlier report from Fast Company explained that Apple was working with Lumentum to use its 3D-sensing tech on the next iPhone.
While the 3D camera will only be on the front side for now, Kuo says Apple will eventually employ the tech around back as well. The report also explains that the company is way ahead of Android as far as 3D algorithms go, so a depth-sensing camera would be a unique feature for a couple of years. Of course, if the early rumors are true, you can expect to pay $1,000 for the 10th anniversary iPhone when it arrives.
Intel processors have powered Apple’s Mac computers for over a decade now, but Apple has also found success designing its own A-series ARM-based chips for the iPhone and iPad. While the company isn’t going to dump Intel chips in the Mac any time soon, a report from Bloomberg indicates that Apple at least intends to dip its toe in the water and test out designing its own silicon for the Mac.
According to Bloomberg’s Mark Gurman and Ian King, Apple is building an ARM-based chip that’ll offload the Mac’s “Power Nap” features from the standard Intel processor as a way to save battery life. Power Nap currently lets the Mac run software updates, download email and calendar updates, sync to iCloud, back up to Time Machine drives and handle a number of other tasks while the computer is asleep. Some of these features only work when plugged in, though — perhaps with a chip that consumes less energy, Power Nap’s capabilities could be expanded.
This could also be a first step towards a move away from Intel processors entirely, although Bloomberg says such a move would not happen in the immediate future. But Apple has invested a lot of money in its own series of chips since 2010 and could have more freedom to update the Mac without having to rely on Intel’s schedule.
It’s worth noting that this rumored Power Nap chip wouldn’t be the first Apple-designed chip to make it into a Mac. That honor would go to the T1, an ARM-based chip that showed up in the new MacBook Pro last fall. That chip controls the laptop’s Touch Bar and the Touch ID sensor but otherwise doesn’t have to do any heavy lifting. Apple has been pretty quiet about the chip, but it seems that the next MacBook Pro could have another ARM chip — maybe the T2? — that takes more tasks away from the main Intel processor. If that’s the case, we probably won’t know for a while, as Apple probably won’t update the MacBook Pro lineup again until this fall.
There’s no way I would wear the Rufus Cuff wrist computer. After a few minutes with this 3.2-inch Android tablet strapped to my body, my wrist started to get all sweaty. It felt bulky, weird and, to be honest, not very cool. But if the massive pre-orders are any indication, there is clearly a market out there. In particular, the company’s CEO, Gabe Grifoni, says that in a few years something like the Cuff will replace the iPhone in your pocket and may even be part of your next work uniform.
I’ll admit, I was initially dubious that a device that makes me feel like a less cool version of Leela from Futurama would be the first step of an inevitable wearable-computer revolution. But then Grifoni began telling me about potential industrial uses for the Cuff, and it all started to make sense.
Employers believe that small Bluetooth-enabled Android tablets on their employees’ arms are a pretty good idea, according to feedback from the companies that have reached out to Rufus. With an app and a connected scanner, tasks like inventory, housekeeping at hotels and ticket-taking can be streamlined by freeing up the hands of the employees who would otherwise have to hold a tablet. The relatively low $300 price tag also means that smaller companies without the deep pockets of corporations could also get in on the action.
After a successful crowdfunding campaign, Grifoni started getting unexpected calls from businesses and their employees. “We were starting to get all these emails from warehouse workers and hotels,” he told Engadget. He says he’s talked to UPS and other companies about their employees using the Cuff in the workplace.
While the campaign generated $800,000 in pre-orders, Grifoni realized that enterprise is where all the growth is right now. But don’t worry, early adopters, the company will still sell the Cuff to consumers. Just beware that you’re not going to be rocking the latest generation of technology. Specifically, the pre-production unit I tried out had a 400x240 3.2-inch screen, which will look absolutely ancient next to your modern-day smartphone. Also, the 640x480 front-facing camera is guaranteed to make all your selfies look awful.
The actual bracelet portion of the device looks fine, though, and at least kept the Cuff mostly parallel with my arm. That said, while I would probably get used to having a computer on my wrist all day, it’s not something I’d look forward to. Did I mention it made my arm sweaty?
Grifoni predicts that wearable computers (not smartwatches) will be the norm in five to 10 years. We’ll get tired of pulling our phones out of our pockets and instead opt to have them visible at all times.
Maybe he’s right. It’s possible the future of mobile computing could be attached to our bodies. But even if he’s wrong, it may not matter whether the world at large embraces tablets on its bodies in its free time: if he can get the Cuff into businesses and warehouses, some of us will be getting them with our nametags at work.