iPhone X owners can’t use Face ID to approve family purchases

Face ID on the iPhone X is helpful for authorizing a purchase for yourself, but don’t expect to use it if you’re approving a purchase for your kids. Numerous owners have discovered that the face authentication feature doesn’t work for family purchases (that is, where a family member asks you to buy apps or music on their behalf) like Touch ID does on earlier iPhones. It’s not a tremendous pain, but you probably won’t relish the thought of punching in your password every time your little ones want a new game for their iPads.

We’ve asked Apple if it can elaborate on why Face ID doesn’t work in these situations. Is it a security decision, a lack of time to add the feature or something else?

As Ars Technica notes, there could be good practical reasons to avoid using Face ID for family sharing decisions like this. When Apple was introducing Face ID, it was up front about the possibility that twins and other similar-looking family members could fool the detection system. And sure enough, we’ve seen at least one instance where a child successfully unlocked a parent’s iPhone X because of a strong resemblance. Apple probably doesn’t want to risk someone’s child going on a shopping spree simply because genetics worked in their favor, even if the chances of that happening are slim.

Whatever the reasons, the findings highlight the challenges of switching biometric security formats — each one has its own limitations, and could force companies to reevaluate security policies that they’d taken for granted after years of including fingerprint readers. It could be a while before depth-based face recognition is reliable enough to use in every situation, and that’s assuming there are no insurmountable obstacles.

Via: Ars Technica

Source: Apple Communities (1), (2), (3)

Google poaches a key Apple chip designer

Google is still snapping up Apple’s chip design talent as part of its ongoing quest to create custom processors. The Information has learned that the search giant has hired John Bruno, the designer who founded and ran Apple’s silicon competitive analysis group — that is, the team that helped iPhone and iPad processors stay ahead of rivals. It’s not certain what he’ll be doing at Google (his LinkedIn profile lists him only as a “System Architect”), but he started at graphics veteran ATI and rose to become a chief engineer at AMD, where he led the design of Fusion processors.

It’s reasonable to presume that the influx of new talent (which also includes veterans from Qualcomm) will be used to expand Google’s variety of custom processors. Right now, its only in-house silicon is the Pixel Visual Core imaging chip inside the Pixel 2. The question is just what Google will do with Bruno and others. It’s tempting to assume that its next step is a full-fledged CPU for its phones, especially given Bruno’s background in graphics, but it could also produce other specialized chips (such as AI accelerators or display controllers).

Whatever Bruno works on, it’s evident that Google is committed to giving its phones (and possibly other devices) hardware that stands out. It’s not hard to see why it would go that route. Up until the Pixel 2, Google’s Pixel and Nexus phones only occasionally stood out hardware-wise and frequently used parts you could find in competing models. You bought them mainly for the software (such as pure Android or the Pixel’s HDR+ camera mode), and any hardware perks were just gravy. If Google can design chips that are genuinely faster or more efficient than what you find in competing products, you may have a good reason to choose a Pixel even if you only care about raw performance.

Source: The Information, LinkedIn

2017 laid the foundation for faster, smarter AI in 2018

“AI is like the Wild West right now,” Tim Leland, Qualcomm’s head of graphics, told me earlier this month when the company unveiled its latest premium mobile chipset. The Snapdragon 845 was designed to handle AI computing tasks better. It’s the latest product of the tech industry’s obsession with artificial intelligence. No company wants to be left behind, and whether it’s by optimizing their hardware for AI processing or using machine learning to speed up tasks, every major brand has invested heavily in artificial intelligence. But even though AI permeated all aspects of our lives in 2017, the revolution is only just beginning.

This might be a helpful time to clarify that AI is often a catch-all term for an assortment of different technologies. There’s artificial intelligence in our digital assistants like Siri, Alexa, Cortana and the Google Assistant. You’ll find artificial intelligence in software like Facebook’s Messenger chatbots and Gmail’s auto-replies. It’s defined as “intelligence displayed by machines” but also refers to situations in which computers do things without human instruction. Then there’s machine learning, in which computers teach themselves how to perform tasks that humans do. For example, an MIT face-recognition system recently learned how to identify people the same way humans do, without any help from its creators.

It’s important not to confuse these ideas — machine learning is a subset of artificial intelligence. Let’s use the term machine learning when we’re talking specifically about concepts like neural networks and models like Google’s TensorFlow library, and AI to refer to the bots, devices and software that perform tasks they’ve learned.
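
To make the distinction concrete, here’s a minimal sketch of what “machine learning” looks like in code, using Google’s TensorFlow library mentioned above. The toy data, layer sizes and training settings are illustrative assumptions, not a description of any real system.

```python
# Minimal sketch: "machine learning" in the sense used here -- a model
# that learns a task from examples rather than from hand-written rules.
# The data and layer sizes below are illustrative placeholders.
import numpy as np
import tensorflow as tf

# Toy training data: 1,000 fake 28x28 grayscale images with labels 0-9.
images = np.random.rand(1000, 28, 28).astype("float32")
labels = np.random.randint(0, 10, size=1000)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The "learning" step: the network adjusts its own weights from examples.
model.fit(images, labels, epochs=3, batch_size=32)
```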

Still with me? Good. This year, AI got so smart that computers beat humans at poker and Go, earned a perfect Ms. Pac-Man score and even kept up with veteran Super Smash Bros. players. People started using AI in medicine to predict diseases and other medical conditions, as well as spot suicidal users on social networks. AI also began to compose music and write movie scripts.

Everywhere you look, there’s someone trying to add AI to something. And it’s all facilitated by neural networks that Google, Microsoft and their peers continued to invest in this year, acquiring AI startups and launching or expanding AI divisions. Machine learning has progressed quickly, and it’s going to continue improving next year.

One of the biggest developments as we head into 2018 is the shift from running machine-learning models in the cloud to running them on your phone. This year, Google, Facebook and Apple launched mobile versions of their machine-learning frameworks, letting developers speed up AI-based tasks in their apps. Chip makers also rushed to design mobile processors for machine learning. Huawei, Apple and Qualcomm all tuned their latest chipsets this year to better manage AI-related workloads by offering dedicated “neural” cores. But beyond a few early showcases like Face ID on the iPhone X and Microsoft Translator on the Huawei Mate 10 Pro, we haven’t yet seen concrete benefits from chips tuned for AI.
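
As a rough illustration of what “mobile versions of their machine-learning frameworks” means in practice, here’s a hedged sketch using TensorFlow Lite, the on-device counterpart to Google’s TensorFlow. The stand-in model is a placeholder and the converter API assumes a recent TensorFlow release; real apps would convert a properly trained network and run it inside an Android or iOS app rather than in Python.

```python
# Sketch: converting a model for on-device inference with TensorFlow Lite.
# The tiny untrained model below is only a stand-in so the example runs.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert the model to the compact TFLite format a phone can execute.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# On the device, an interpreter runs the model locally -- no cloud round trip.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

sample = np.random.rand(1, 28, 28).astype("float32")
interpreter.set_tensor(inp["index"], sample)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))  # class scores, computed on-device
```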

Basically, AI has been improving for years, but it’s mostly been cloud-based. Take an image-recognition system, for example. At first, it might be able to distinguish between men and women who look drastically different. But as the program continues training on more pictures in the cloud, it can get better at telling individuals apart, and those improvements get sent to your phone. In 2018, we’re poised to put true AI processing in our pockets. Being able to execute models on mobile devices not only makes AI faster, it also stores the data on your phone instead of sending it to the cloud, which is better for your privacy.

It’s clear the industry is laying the groundwork to make our smartphones and other devices capable of learning on their own to improve things like translation and image recognition and to provide even greater personalization. But as the available hardware gets better at handling machine-learning computations, developers are still trying to find the best ways to add AI to their apps. No one in the industry really knows yet what the killer use case will be.

Eventually, every industry and every aspect of our lives — from shopping in a mall to riding in a self-driving car — will be transformed through AI. Stores will know our tastes, sizes and habits and use that information to serve us deals or show us where to find what we might be looking for. When you walk in, the retailer will know (either by recognizing your face or your phone) who you are, what you’ve bought in the past, what your allergies are, whether you’ve recently been to a doctor and what your favorite color is. The system’s AI will learn what you tend to buy at specific times of the year and recommend similar or competing products to you, showing the information on store displays or tablets on shelves.

Cars will be smart enough to avoid obstructions and use machine-learning to better recognize dangers and navigate around hazards. Even your doctors will soon rely on AI to classify X-rays, MRI scans and other medical images, cutting down the time involved in diagnosing a patient.

Security Monitor, splitscreen

AI is already prevalent in image-recognition, and it will soon become even more pervasive. Home-security cameras are already getting better at distinguishing between individual humans, dogs, cats and cars. Don’t be surprised if this is ultimately used in law enforcement to sift through traffic- and other public-camera footage to look for potential criminals or missing persons.

The digital assistants that we talk to through phones and smart speakers will not only get faster and converse more naturally by learning from our conversations, they’ll also better anticipate our needs to offer the things we want when we want them. When you walk into your home after work, your lights will come on, your thermostat will turn the temperature up and your favorite winding-down music will start playing.

Sure, this already happens, but the existing method relies on triggers you’ve set based on your location or the time of day. In the future, AI will know how to adjust everything in your home just the way you like it while accounting for external factors. For example, if it’s a hot day, your digital assistant can turn up the air conditioning without your input after detecting temperature changes outside. All these automations could eventually make the world of Black Mirror a reality.

In 2017, the AI takeover gained momentum, but the most compelling use cases were confined to controlled, experimental environments. Next year, we’ll start to see more powerful AI emerge that might actually change the way we live. It might not happen right away, but soon AI will run our lives — for better and worse.

Check out all of Engadget’s year-in-review coverage right here.

Images: Chris Velazco/Engadget (Poker pro); Cherlynn Low/Engadget (Qualcomm chipset); Manuel Gutjahr (Security monitor, splitscreen); Engadget (Amazon Echo Dot)

Printed photos can fool Windows 10’s Hello face authentication

Windows 10’s facial authentication system might be able to tell the difference between you and your twin, but it could apparently be fooled with a photo of your face. According to researchers from German security firm SySS, systems running older versions of the platform (those predating the Fall Creators Update) can be unlocked with a printed photo of your face taken with a near-infrared (IR) camera. The researchers conducted their experiments on various Windows 10 versions and computers, including a Dell Latitude and a Surface Pro 4.

The spoof isn’t exactly easy to pull off — someone who wants to access your system will have quite a bit of preparation ahead of them. In some cases, the researchers had to take additional measures to spoof the systems, such as placing tape over the camera. Not to mention, they needed high-quality printouts of users’ photos clearly showing a close-up, frontal view of their faces.

Still, the researchers said the technique can successfully unlock computers and even released three videos showing it in action, which you can watch below. Somebody determined enough to break into your system could do so (they could scour your Facebook account for high-res photos to modify, for instance), so your best bet is downloading and installing the Windows 10 Fall Creators Update. Simply installing the update isn’t enough, though: the researchers said your system will remain vulnerable until you set up Windows Hello’s facial authentication from scratch and enable the new enhanced anti-spoofing feature.

It’s not just Microsoft’s technology that has vulnerabilities, though. Its fellow tech titans, Apple and Samsung, are also having trouble with their authentication systems. A German hacking group found that the S8’s iris scanner can be spoofed using a photo of the user with a contact lens placed on top, while another group of security researchers said they found a way to fool the iPhone X’s face scanning system with masks.

Via: ZDNet, The Verge

Source: SySS

Tested: the best smartphone cameras compared

I’ve been lugging around a DSLR ever since I conned my parents into buying me a Minolta Maxxum 5D in 2006, and let me tell you, that didn’t win me any popularity points back in high school — even if all my friends ended up with amazing MySpace profile pics.

Things are different now. In the strange days between film and digital, nobody was expected to produce quality photography. Now a staggering number of kids want to become YouTubers, and the line between professional and amateur photographers has blurred beyond the point of recognition. But what I know is that for many, a high-end smartphone is a much more sensible purchase than a dedicated camera, no matter what kind of art you’re trying to create.

These things are tools, so this comparison is designed to help you pick the right tool for the job. I’m going to compare the iPhone X, the Pixel 2, the Galaxy Note 8 and the Huawei Mate 10 Pro. I’ve tested these devices in scenarios that are particularly tricky for smartphone cameras and their small sensors: scenes with high contrast, backlighting or substantial vibration.

But before we get into the tests, let’s look at what we’re working with.

Device        Megapixels    OIS   Native 24fps video   Portrait mode
iPhone X      12MP + 12MP   Yes   Yes                  Yes
Pixel 2       12MP          Yes   No                   Yes
Note 8        12MP + 12MP   Yes   No                   Yes
Mate 10 Pro   12MP + 20MP   Yes   No                   Yes

Scene one: Williamsburg Ferry

This scene tests: dynamic range, color, sharpness

Full-size images

  • iPhone X
  • Mate 10 Pro
  • Note 8
  • Pixel 2

Verdict

The iPhone X and the Pixel 2 did the best in this scene. The iPhone provided a true-to-life rendition of the scene, whereas the Pixel 2 photo looks somewhat artificial. The Pixel 2, though, provided great detail and exposure without noise or sharpening artifacts. Both the Mate 10 Pro and the Galaxy Note 8 crunched the blacks, leaving little detail in the shadows, and the Mate 10 Pro’s aggressive sharpening left artifacts in the water.

Scene two: backlit architecture

This scene tests: detail, dynamic range

Full-size images

  • iPhone X
  • Mate 10 Pro
  • Note 8
  • Pixel 2

Verdict

The Pixel 2 and the iPhone X captured nice colors and kept noise to a minimum in all but the most extreme areas. The Pixel 2, however, rendered so much more detail in the architecture — but once again the results look noticeably artificial. The Note 8 did not fare well here; it’s by far the worst in what should have been plenty of light. It’s underexposed, there are artifacts in the shadows and I would even say there’s some noticeable lens distortion. The Mate 10 Pro is fine, but its 20-megapixel sensor didn’t seem to provide any more detail than the Note 8’s 12MP module.

Scene three: low-light bar scene

This scene tests: noise, low-light color, detail

Full-size images

  • iPhone X
  • Mate 10 Pro
  • Note 8
  • Pixel 2

Verdict

It’s difficult to pick a winner here, and it may come down to taste. The Mate 10 Pro looks the most detailed and the Pixel 2 has a pleasant look overall, but the iPhone X didn’t render details or color well and the Note 8 looks bad. I would go with the Pixel 2, because unlike the Mate 10 Pro, it achieves its look without artifacting. That being said, it’s great to see an example of the Mate 10 Pro’s dual-camera system providing real-world benefits.

Scene four: selfies

This scene tests: skin tones, detail

Full-size images

  • iPhone X
  • Mate 10 Pro
  • Note 8
  • Pixel 2

Verdict

The Pixel 2 blows everything else out of the water in terms of selfies. The iPhone X comes in a close second and has the added benefit of not making me look 10 years older than I am. There is one important caveat here: The Note 8 (and the Galaxy S8 / S8+) have real focusing mechanisms, whereas most selfie cameras are fixed focus. It’s subtle, but because the Note 8 can actually focus on my face, the background is ever so slightly blurred. Please, smartphone makers, use a focusing mechanism on the front-facing camera!

Scene five: synthetic bokeh

All four of these devices include some kind of synthetic bokeh mode that blurs the background around the subject, reminiscent of a DSLR with a wide-aperture lens. Sometimes these work well, and in some scenarios they fail pretty badly. The scenes shown here are essentially torture tests: I picked objects with challenging geometry that would show how well these synthetic bokeh modes perform in a worst-case scenario.

Before I go on, there’s one piece of essential vocabulary we need to define: the mask. A mask in this context is essentially the outline of the subject, where everything outside the line is blurred. Each device uses different techniques to create the mask: The iPhone X and the Note 8 use dual cameras to size up the subject, the Pixel 2 uses dual sub-pixels and the Mate 10 Pro’s technique lives somewhere in software.
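
To make the idea of a mask concrete, here’s a minimal sketch (in Python with OpenCV) of how a subject mask can drive a synthetic bokeh composite. The toy image and circular mask are placeholders, and nothing here reflects how any of these phones actually computes its mask.

```python
# Sketch: synthetic bokeh from a subject mask. However the mask is derived
# (dual cameras, dual sub-pixels or pure software), the compositing step is
# roughly the same: blur the background, keep the subject sharp.
import numpy as np
import cv2

h, w = 480, 640
photo = (np.random.rand(h, w, 3) * 255).astype(np.uint8)  # stand-in for a photo

# Stand-in subject mask: 1.0 inside the subject, 0.0 in the background.
mask = np.zeros((h, w), dtype=np.float32)
cv2.circle(mask, (w // 2, h // 2), 120, 1.0, thickness=-1)
mask = cv2.GaussianBlur(mask, (31, 31), 0)   # feather the edge of the mask
mask = mask[..., None]                       # broadcast over the color channels

blurred = cv2.GaussianBlur(photo, (51, 51), 0)  # fake "wide aperture" background

# Composite: subject pixels from the original, background pixels from the blur.
bokeh = (photo * mask + blurred * (1.0 - mask)).astype(np.uint8)
cv2.imwrite("bokeh.jpg", bokeh)
```

Errors in the mask, like the ones described below, show up as sharp patches left in the background or blurred chunks carved out of the subject.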

iPhone X

As you can see in the astronaut-image set, the iPhone X has two errors in its mask: the one on the helmet and the one under the astronaut’s right arm. The rest of the mask is pretty good. In the polygonal-sculpture image set the iPhone X almost makes it, but there are several parts of the mask with issues. As for the hardest test, the plant, the iPhone completely falls down.

Mate 10 Pro

The Mate 10 Pro surprised me with its Wide Aperture Mode; it’s not flawless by any means, but overall the effect looks pretty good. In the astronaut-image set, for example, the mask is well defined except for the area underneath the arm, and at smaller viewing sizes you might not notice this. In the polygonal-sculpture test there are, again, small errors in the mask, but they don’t ruin the image. However, when we get to the plant, the Mate 10 Pro can’t track all the tendrils.

Note 8

Both the Note 8 and the iPhone X zoom in quite a bit to achieve their synthetic bokeh effect, and that makes any issues with the mask all the more pronounced. With the astronaut, however, we see much better results than with the iPhone X — even the area under the arm is properly blurred! Unfortunately things go downhill with the polygonal sculpture, though the issue isn’t with the blurring mask — it’s with the exposure. I think this is because I was throwing off the exposure by tapping on a black object, but the algorithm should be able to handle that.

Now, at first glance it looks like the Note 8 completely blew it with the plant, but that’s not entirely the case. If you look closely, you can see that the Note 8 is able to isolate the tendrils in the mask, but in the places where the tendrils overlap with the edges of the tables or light furniture in the background, it loses track. Ultimately, the Note 8 failed this test, but it did better than the iPhone X or the Mate 10 Pro.

Pixel 2

The Pixel 2 handles the astronaut scene better than the competition. The area under the arm is blurred, though I will say that there are small errors in the mask around the edges. At smaller viewing sizes, however, the Pixel 2 wins here. The polygonal sculpture is also the best out of the four with proper exposure and a good mask around the edges. With the plant we see the same problems as the Note 8, where the mask algorithm loses track of the tendrils as they overlap with a visually similar background. That being said, I think the areas around the tendrils (the ones that aren’t glitched out) look better than the same areas on the Note 8.

Verdict

As long as you stick with simple images of people in good light, the iPhone X will do alright, but I was surprised by how well the Mate 10 Pro and the Galaxy Note 8 performed. The Pixel 2 stood out in particular, though: its ability both to capture a pleasing image in its synthetic bokeh mode and to isolate complicated subjects makes it the clear winner.

Video

There’s a lot to talk about when it comes to video on smartphones. Should you shoot in 4K? Is electronic image stabilization, or EIS, more important than optical image stabilization, or OIS? What is HEVC, also known as H.265? For the sake of this piece, however, we’re going to focus on the basics: Which phones have the best noise control, and which have the best stabilization?

Noise in 4K

In this test I cycle a set of lights on and off so that we can see how much noise each device renders at varying light levels. There’s a light meter at the bottom of the scene, and this allows us to compare noise levels at approximately the same light level. What is a lux, you ask? It’s a measure of light in a given area, so one lux is one lumen per square meter. And what is a lumen? It’s the amount of light given off by a candle. Listen, I don’t make the rules — I’m just measuring light.
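
If you want the unit math spelled out, here’s a tiny sketch of the definition above. The 800-lumen bulb is an illustrative figure (roughly a 60W-equivalent LED), not a number from this test.

```python
# Lux is just lumens spread over an area: 1 lux = 1 lumen per square meter.
def lux(lumens: float, area_m2: float) -> float:
    return lumens / area_m2

# An 800-lumen bulb lighting a 10 m^2 surface evenly: 80 lux.
# The same bulb concentrated on 2 m^2: 400 lux, which is why the
# same light source reads "brighter" up close.
print(lux(800, 10))  # 80.0
print(lux(800, 2))   # 400.0
```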

Verdict

At extremely low light levels (six lux and 40 lux) I am sort of baffled by how well the Mate 10 Pro performs. Sure, it has all kinds of artifacts, but it looks pretty darn good, especially when you compare it to the Note 8, which also has artifacts but looks terrible at these light levels. One thing I am noticing is that at six and 40 lux, the iPhone X is willing to let the scene be dark (which it is) whereas the Pixel 2 is sort of desperately trying to elevate the exposure of the black backdrop, which makes the noise noticeable.

The Mate 10 Pro starts to look decent at 40 lux, which, again, is incredible. The iPhone X, Note 8, and Pixel 2 need more than 540 lux to quell obvious noise.

Video stabilization

Here we test the video stabilization of all four devices at the maximum common resolution: 4K at 30 frames per second. I was walking naturally without trying to stabilize with my feet.

Verdict

The Pixel 2 is the most stable, but the iPhone X does well too. Despite the Mate 10 Pro’s great low-light video performance, it doesn’t produce stable video. The Note 8 is a curious case, because it does stabilize well, but the video has an aggressive jelly effect that’s disconcerting to look at. This goes away at 1080p (as you can see in the video below), but the iPhone X and the Pixel 2 are able to stabilize 4K video without this effect. You can see a similar stabilization test in 1080p below.

Selfie stabilization

You don’t have to use the front-facing camera to vlog with your phone, but you’ll probably want to. Here we test the stabilization of the front-facing cameras. Why does my face look like that? I’m holding the camera rig as far away from my body as possible.

Verdict

One thing you’ll probably notice right away is that the Mate 10 Pro and the Note 8 are very zoomed in, and that’s because they’re using electronic image stabilization, which crops the image in and moves it around to compensate for motion. That being said, the Pixel 2 and the iPhone X are doing this too but without an uncomfortably close mustache inspection, making them the winners.
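
For the curious, here’s a simplified sketch of the crop-and-shift idea behind electronic image stabilization. The margin size and per-frame shake values are made-up numbers, and real EIS pipelines add motion smoothing and rolling-shutter correction on top of this.

```python
# Sketch: EIS keeps a smaller crop window and slides it against the
# estimated camera motion each frame, which is why the footage looks
# zoomed in compared with the full sensor view.
import numpy as np

def eis_crop(frame: np.ndarray, shake_xy: tuple, margin: int = 64) -> np.ndarray:
    """Return a stabilized crop of `frame`. `shake_xy` is the estimated
    camera motion for this frame in pixels (e.g. from gyro data), and
    `margin` is the border sacrificed to leave room for the crop to slide."""
    h, w = frame.shape[:2]
    dx, dy = shake_xy
    # Shift the crop window opposite the camera motion, clamped to the frame.
    x = int(np.clip(margin - dx, 0, 2 * margin))
    y = int(np.clip(margin - dy, 0, 2 * margin))
    return frame[y:h - 2 * margin + y, x:w - 2 * margin + x]

# Toy use: a 1080p frame with the camera shaken 12 px right and 5 px down.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
stable = eis_crop(frame, shake_xy=(12, 5))
print(stable.shape)  # (952, 1792, 3) -- smaller than 1080p, hence the crop-in look
```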

Wrap-up

The Mate 10 Pro managed to surprise me. I think Huawei should ease up on the punchiness of the camera tuning, but the low-light photography, low-light video and portrait mode are awfully good in the right conditions. The Note 8 also surprised me, though not in a good way, with noisy low-light images and video as well as that jellylike video stabilization.

So which smartphone has the best camera? I feel confident giving that honor to the Pixel 2, but as we’ve seen, each device has pros and cons. The iPhone X, for example, is the only device that offers a native 24fps 4K video option, which is important if you want to mix it with footage from other cameras. But the Pixel 2’s HDR still images, unmatched selfie camera, stable video and surprisingly good portrait mode put it ahead of the competition.

Magic Leap One: All the things we still don’t know

It’s that time of year again: the special season when everybody’s favorite mythical creature makes its annual appearance. That’s right, it’s Magic Leap hardware teaser season! Seemingly once a year, the secretive startup reveals what it’s been up to, and on Wednesday it revealed renderings of its latest AR headset prototype. The company even deigned to allow a Rolling Stone reporter to take the system for a spin. But for everything that Magic Leap showed off, the demonstrations and teaser materials still raise as many questions as they answer. There’s a whole lot about the Magic Leap system that we don’t know, so maybe let’s hold off on losing our minds about the perceived imminent AR revolution until we do.

But before we get into all the things we don’t know, let’s take a quick look at the things we do. Magic Leap the company was founded in 2011 by Rony Abovitz, the bioengineer who created the Mako surgical assistance robot. He sold the Mako company for $1.65 billion and used that cash to start Magic Leap and fund it through its first four years. Today the company is valued at almost $6 billion and has raised $1.9 billion in funding to date, despite having shown little more than high-level animations and a few hardware renderings.

The company has spent the past seven years developing the Magic Leap Augmented Reality system. Currently in its ninth iteration, the setup has three components. The “Lightpack” is a pocket computer that Abovitz claims is “something close to like a MacBook Pro or an Alienware PC,” which would be incredible, given its relative size in the renderings. Users reportedly can input commands through either hand gestures or the “Control” module. The Lightpack is wired up to the third component, the goggles themselves.

The “Lightware” goggles reportedly utilize translucent cells that the company calls “Photonic wafers,” which, according to Abovitz, shift photons around a 3D nanostructure to generate a specific digital light field signal. Basically, a light field is all the light that is bouncing off the objects around us — “like this gigantic ocean; it’s everywhere. It’s an infinite signal and it contains a massive amount of information,” Abovitz told Rolling Stone.

Abovitz theorizes that the brain’s visual cortex doesn’t need all that much information in order to actually generate our perception of the world. Therefore, instead of trying to re-create the entirety of the light field, “it just needed to grab the right bits of that light field and feed it to the visual cortex through the eye… We could make a small wafer that could emit the digital light field signal back through the front again,” he said.

These are some pretty amazing claims, to be sure. The theories Abovitz is basing the device on are ones that he and a CalTech professor came up with — theories so radical, as he told Rolling Stone, “we were way off the grid.” That’s not to say that his theories are unsound, or that the system doesn’t work the way he says it does. It’s just that there isn’t yet any way to independently verify any of these claims.

And some of the claims beg to be investigated. For example, that there’s a powerful secondary computer integrated into the Lightware, “which is a real-time computer that’s sensing the world and does computer vision processing and has machine learning capability so it can constantly be aware of the world outside of you.” That’s a whole lot of buzzwords and big promises to pack into a single pair of goggles.

And beyond those supposed capabilities, we have practically zero information on how the system actually works. What are the hardware specs, CPU/GPU speeds, and operating system? Will the internal components be upgradable or, like the MacBook Pro’s, be sealed, requiring more costly upgrades? What’s more, how is the unit powered? What are its energy requirements? Is it fully mobile? What’s the battery life? We need more than Abovitz’s explanation that “it’s got a drive, WiFi, all kinds of electronics, so it’s like a computer folded up onto itself.”

Information on the Magic Leap’s availability is just as nebulous. The Magic Leap website states that the SDK will be available in “early 2018,” but there has been no word on even an estimated hardware release date. And don’t even bother asking about the price. The company was silent about an MSRP during Wednesday’s announcement, though Business Insider spoke with sources close to the company in August who claimed the system would retail in the $1,000-to-$1,500 range. But again, those are guesstimates at best.

There are also questions surrounding Magic Leap’s demo choices. Of all the major players in technology news, why present this huge piece to Rolling Stone and Pitchfork rather than, say, Wired or CNET? It may well be because the former choices command an older, more affluent readership, which are the people most likely to be buying these things first, given their price. Or perhaps, more worrisome, the company hopes to avoid the harsh scrutiny of the entire tech press corps until the product is practically in the hands of consumers. Apple pulled similar shenanigans earlier this year when it provided a single-day review embargo for the experimental iPhone X.

What’s more, we haven’t so much as scratched the surface of the societal implications should this technology take hold. Lord, can you imagine somebody driving with these things on? So, again, there’s no reason to think that Magic Leap isn’t on the up-and-up regarding the capabilities of its headset or the proprietary technology it’s built upon. But the company is making some pretty extreme claims, and if it expects the rest of us to pony up $1,500 for a pair of the Snapchat Spectacles’ dorkier cousins, it’s going to need to provide a more transparent answer than “Trust us, it totally works.”

Apple may let the same app work across iOS and Macs

The app situation between iPhones and Macs is a bit of a mess. While mobile apps are updated regularly, the Mac App Store can often leave something to be desired. Now, Apple is finally tackling this chaos. According to Bloomberg, Apple may give developers the option to create a single app that will work across Macs, iPads and iPhones as early as next year.

According to insider sources, the same app will be able to respond to a mouse, a touch pad or a touch screen, depending on the device it’s being run on. Right now, apps must be designed separately for the iPhone and iPad versus for a computer, which explains why you can occasionally find tumbleweeds rolling across the screen when you pull up the Mac App Store. If developers must choose to devote resources to one or the other, the computer apps often get shortchanged.

The change won’t come immediately, though. It’s planned as part of next fall’s iOS and Mac OS updates, according to Bloomberg’s sources. Because this is all so tentative, it’s also possible that the decision makers at Apple could change their minds and cancel this endeavor entirely. Here’s hoping they don’t, though. This streamlining would likely be a popular move for Mac users.

Source: Bloomberg

The Academy Museum of Motion Pictures will display an iPhone 5s

Director Sean Baker ripped up the filmmaking rulebook by shooting his Sundance hit Tangerine on an iPhone 5s. Now, over two years since the flick scooped more than seven times its budget at theaters, the Oscars has come knocking. No, the filmmaker isn’t getting a belated gong (although his current indie success story The Florida Project could change that). Rather, the Academy of Motion Picture Arts and Sciences is pinching one of three iPhone 5s handsets used to film Tangerine to display in its upcoming Academy Museum. You’ll be able to see it for yourself, alongside film memorabilia from The Wizard of Oz and Alien, when the 300,000 square foot space opens its doors in 2019.

Tangerine follows a day in the life of a transgender sex worker who discovers her pimp boyfriend has been cheating on her. Baker revisits the Hollywood location where one of the film’s climactic scenes takes place in the Academy’s announcement video (above). The director also talks of the equipment he used, including the soon-to-be immortalized iPhone 5s, outfitted with an anamorphic adapter made by Moondog Labs, and a Steadicam rig. The film was shot using the $8 Filmic Pro app.

If that doesn’t inspire a bunch of aspiring filmmakers to shoot on the fly, then nothing will. And with awesome camera tech now on plenty of flagships, from the LG V30 to the iPhone X, you’ve no excuse.

Source: Oscars (YouTube)

The Morning After: Tuesday, December 19th 2017

Good morning! This morning we wait with bated breath for a phone screen that will heal itself, test out Amazon’s adorable Echo Spot and kick off our year in review coverage.


Good riddance!
2017 year in review

Over the next two weeks, we’ll be looking back on the year that was, and sharing our hopes and predictions for 2018. Join us as we place our bets on AI, algorithms, social-media regulations, green tech, streaming services, robotics, self-driving cars and even space taxis. And, of course, since we’re Engadget, you can expect to hear about the upcoming products and games we’re most excited about.


Accidental.
A new polymer could make phone-screen repairs a thing of the past

Researchers in Tokyo have discovered a new polymer that may actually heal itself, potentially leading the way to a future of self-healing phone screens. The research describes a unique, hard, glass-like polymer called polyether-thioureas, which can heal itself with only hand pressure. This makes it different from other materials, which typically need high heat to repair cracks and breaks. The funny part? The special polymer was discovered by mistake by a graduate student, Yu Yanagisawa, who thought the material would become a type of glue.


This alarm clock’s tiny screen belies a big feature set.
Amazon Echo Spot review: as smart as it is cute

If you want Alexa in a device that looks like a cool alarm clock, then the Echo Spot is it. Its touchscreen display is also pretty useful, as it adds additional context and visual information, and it’s great for video calls, too. It’s not perfect, especially when $20 more can get you the bigger Echo Show, which also has better audio skills. The Echo Spot is great, but we’d hold off a little for a price drop.


Our team’s choice cuts of long-form from the last 12 months.
The best Engadget stories of 2017

It’s been a long year, but besides all the phone reviews, social-media messes and the rest, Engadget has continued to tackle some of the more unusual parts of this tech world, or simply to call out political figures’ lack of science comprehension.


Nope, this wasn’t an official port.
There was a fake version of ‘Cuphead’ on the App Store

Early Monday, a fake version of Xbox indie hit Cuphead appeared on Apple’s iOS App Store, with a $4.99 price tag and, well, nothing to do with the actual game itself. Apple moved to take down the game before midday ET, but it demonstrates the struggle for both game developers and the iPhone maker when it comes to tackling fakes.

But wait, there’s more…

  • What’s on TV this week: ‘Bright’ and the ‘Christopher Nolan 4K Collection’
  • US officially blames North Korea for WannaCry outbreak
  • China’s most popular game is about to launch in the US
  • Kaspersky sues US government over federal software ban

The Morning After is a new daily newsletter from Engadget designed to help you fight off FOMO. Who knows what you’ll miss if you don’t subscribe.

Craving even more? Like us on Facebook or follow us on Twitter.

Have a suggestion on how we can improve The Morning After? Send us a note.

Apple has finally caught up with iPhone X demand

The iPhone X was an elusive unicorn on launch. If you didn’t snag one of the earliest pre-orders or get lucky waiting in line, you were looking at a weeks-long wait — more than a few people flipped their units for a tidy profit. Now, however? They’re practically growing on trees. Multiple Apple online stores (including the US, UK, Canada and Japan) list the iPhone X as in stock and delivering within 1-2 days if you commit to a purchase. Carriers and third-party stores are carrying the phone, too.

Clearly, Apple has caught up to demand — and that’s no mean feat given the iPhone X’s later-than-usual November release (reportedly to make sure there was enough supply) and fears that production would make it a rarity until sometime in 2018. This does leave lingering questions, though. Does this mean Apple sold a gigantic number of phones, or is interest cooling off? And more importantly, does this improved production bode well for more affordable iPhone X-style models down the line? Until Apple posts its quarterly figures (which aren’t likely to break down iPhone sales by model), the answers to both are one big “maybe.”

Via: 9to5Mac

Source: Apple
