It’s that time of year again. Spring showers are gradually yielding to beautiful summer flowers and the smell of freshly mowed lawn hangs in the air, as the sound of the neighborhood children playing in the yard brings joy to our hearts. For most, June is the feeling of summer creeping just around the corner.
But for those of us who are devoted Apple fans, June is special for another reason. I’m speaking, of course, of Apple’s annual Worldwide Developers Conference (WWDC). As reliable as the summer solstice, it arrives every year to bring us all the latest software (and sometimes hardware) goodies that the fine folks in Cupertino have been cooking up.
Recent years have seen a wide array of announcements: everything from a complete UI overhaul with the introduction of iOS 7, to a brand new programming language in Swift, to the revelation that stickers in iMessage are the way of the future.
What wonders await us in San Jose at this year’s conference? A new Siri hardware speaker? Perhaps an augmented reality platform or SDK? Or maybe we’ll finally get an official acknowledgement that borders on user interface buttons weren’t such a bad idea after all. Get your crystal ball out and join us as we weigh in with our own rampant speculation, hopes, and dreams.
Augmented Reality SDK
Augmented Reality (or AR for short) is something you’ve most likely been hearing about more and more over the past year, especially if you are plugged into the tech scene. In short, this technology allows for the overlay of information and interactive media onto someone’s view of the real world around them. Think Virtual Reality, but instead of walking around a fake medieval landscape, you’re actually walking down the street with a pair of super advanced eyeglasses that show you everything from the names of nearby restaurants, to the location of virtual characters in a game you might be playing.
Google made the first mainstream attempt at AR tech when it rolled out Google Glass. Although that product ultimately failed, Google has continued to invest in research and has rolled out a number of initiatives that keep it at the forefront. Other companies have not stayed silent. Most recently, Pokémon Go took the world by storm with an AR game that let you walk your local neighborhood streets and catch Pokémon through your camera. Everyone from Microsoft to Snapchat to Facebook has been making strides in AR, to the point where Apple seems to be the only large tech player still absent.
This should change in 2017. We’ve already seen a number of reports that Apple is making a big bet on AR as an emerging technology, and it makes perfect sense. The iPhone is without a doubt one of the most popular cameras on earth, with Apple consistently sitting atop the rankings of best smartphone photography. But while Apple’s hardware and low-level camera tech is unmatched, it has progressively lost software mindshare to apps like Snapchat, Instagram, and, to a lesser extent, Facebook proper.
The camera continues to be one of the most compelling hardware features of the iPhone, and Apple has long touted the nirvana achieved by tightly integrated hardware and software. We expect Apple to double down on this idea by introducing tightly integrated Augmented Reality features into the next version of iOS.
The smart move would be to expose an SDK with a high-level abstraction around core AR concepts to the hundreds of thousands of existing 3rd party developers. If Apple can do the heavy lifting of real-world object mapping and tracking and allow developers to focus on their domain-specific use cases, they’ll have a very large AR ecosystem on their platform overnight.
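To make the division of labor concrete, here is a purely hypothetical sketch of what such a high-level abstraction might look like. Every name below is invented for illustration; no such framework has been announced:

```swift
// Hypothetical API sketch — none of these types exist in any shipping Apple SDK.
// The idea: the system owns camera capture, world mapping, and tracking;
// the developer only anchors content to detected real-world features.

struct WorldAnchor {
    let position: (x: Double, y: Double, z: Double)  // a detected real-world point
}

protocol WorldTrackingDelegate: class {
    // Called when the system detects a surface of interest (e.g. a tabletop).
    func session(didDetect anchor: WorldAnchor)
}

class WorldTrackingSession {
    weak var delegate: WorldTrackingDelegate?
    func run() {
        // Apple-managed heavy lifting would happen here: sensor fusion,
        // feature detection, and camera pose estimation, all behind the API.
    }
}
```

A game developer, for instance, would implement the delegate and place a character at each detected anchor, without ever touching the computer vision internals.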
With all that said, what’s less likely is the introduction of any AR-specific hardware at WWDC. While it is widely rumored that the next iPhone will incorporate hardware components specific to AR, nobody has come close to sufficiently miniaturizing the components required to provide a true AR experience in a small and stylish eyewear-like package.
Expanded Native Siri SDK
Last year’s WWDC saw the introduction of SiriKit, an SDK that allows 3rd party apps to interoperate with Apple’s virtual assistant. While it was well overdue and generally well received, it only allowed for integration within a small number of domains (messaging, payments, and workouts, to name a few).
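For context, today’s integrations live in an Intents app extension that adopts one of the domain-specific handler protocols. A rough sketch of the existing messaging domain, using the real `INSendMessageIntentHandling` protocol from the Intents framework, looks something like this (resolution and confirmation steps omitted for brevity):

```swift
import Intents

// Minimal sketch of an iOS 10 Intents extension handler for the
// existing SiriKit messaging domain.
class SendMessageHandler: NSObject, INSendMessageIntentHandling {
    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // Hand the recipients and message content off to the app’s own
        // messaging service here, then report success back to Siri.
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```

Adding a new domain like music or navigation would presumably mean shipping a new intent class and handler protocol of the same shape.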
As it does with many features, Apple will continue to iterate on SiriKit, likely adding a number of supported domains. Below are a few we’re hoping to see:
- Music: The ability to ask Siri to play any song in the vast Apple Music library is a great feature and one that would be well received by users of 3rd party music apps such as Spotify or Amazon Prime Music. It is unlikely Apple will want to forfeit this Apple Music differentiator, but it would certainly make Siri a more flexible and comprehensive assistant, so the tradeoff might be worth it, especially if Apple feels threatened by the success of the far more open Amazon Echo.
- Navigation: While Apple Maps has improved drastically since its inception, many people still prefer Google Maps, Waze, or other navigation apps. The ability to ask Siri for directions to any place on earth and then start navigating in seconds is truly compelling UX. As with music, Apple might not feel compelled to sacrifice this aspect of platform lock-in, but again, the benefit of providing a more flexible and powerful assistant may make it worth their while to do so.
- Productivity: As it stands, Siri is limited to whatever direct mail, calendar, and reminders integration Apple provides via system settings. It would be extremely useful to let Siri integrate with productivity apps directly, allowing 3rd party to-do list or note-taking apps to provide easier setup and better UX.
- Weather: A small but widely used feature of Siri is asking for the weather. Apple’s weather app is good enough, but it would be great if Siri could talk to your preferred weather app instead. Options like Dark Sky are arguably far superior to the stock weather app, and there’s no reason Apple shouldn’t allow for this.
Standalone Siri Capabilities
The advent of the Amazon Echo and Google Home has proven that there is real demand for “always on” virtual assistants. If Apple does introduce a Siri hardware device, as has been rumored, it will need to change the way Siri gains new capabilities in order to go head to head with those existing offerings.
Currently, the process of adding a 3rd party capability to Siri is as follows:
- Install the app that provides the desired capability.
- Open the app.
- When prompted, grant Siri permission to access that particular app.
- Interact with Siri.
By contrast, you can add new skills to the Amazon Echo simply by asking Alexa to install them. Better still, Google Home services are available immediately upon Google’s approval, with no install needed. The reason for this large discrepancy is that Apple’s ecosystem is built entirely around the app, both as the primary interaction model and as the vehicle through which extension points enter the system.
One way this could be improved would be to allow standalone Siri apps, much like what was done with Messages apps last year. If a user makes a request Siri cannot satisfy, it could search the App Store for a suitable app, then suggest, install, and authorize it, all within the Siri experience.
Alternatively, Apple could provide a webhooks API much like those provided by Amazon and Google, where developers register their own endpoints for Siri to interact with directly. This would provide even more flexibility and even less friction, but it feels largely at odds with Apple’s goals for user control and security, so it seems less likely.
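If Apple did go the webhook route, one could imagine a registration payload along the lines of what Amazon and Google accept today. This JSON is entirely hypothetical, and every field name is invented; no such Apple API exists:

```json
{
  "capability": "com.example.smartlights",
  "invocationPhrases": ["turn off the lights", "lights out"],
  "endpoint": "https://api.example.com/siri/handler",
  "authentication": "oauth2"
}
```

Siri would send the parsed user request to the registered endpoint and speak the response, which is flexible but hard to square with Apple’s review-everything model.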
iMessage Platform Improvements
In 2016, Apple introduced a completely revamped iMessage experience that included a number of new built-in features as well as the ability for developers to write their own Messages apps. But while there was initial excitement around creating experiences for this new platform, it never gained the traction it could have, especially given the popularity of 3rd party messaging apps on iOS such as Snapchat, WhatsApp, and Facebook Messenger.
The current experience is plagued by a few key issues. First, given the same app model Apple has relied upon for all of its platforms, a user must know to seek out a particular Messages app by searching for it in the iMessage App Store; there is no seamless way for an app to suggest itself in a particular conversation. This creates a discoverability problem: developers may be building great apps for Messages, but users don’t know to install them.
Second, iMessage apps can only augment a conversation between two or more humans. There is currently no way for a developer to create a “chat bot” iMessage app, where a user interacts with an AI-powered entity as on Facebook’s Messenger platform. Given the rising popularity of chat bots in the US and the complete dominance of apps like WeChat in China, it seems a glaring omission for Apple not to offer a similar experience.
It is highly likely that Apple will roll out improvements to the iMessage platform this year, but it remains to be seen whether these will be incremental improvements or the large-scale changes needed to compete with other messaging platforms on the market. Apple currently has a huge advantage in that iMessage is the default messaging client on iOS, but if it doesn’t add better discoverability for apps, reduced install friction, and some form of chat bots, it will see its user share slowly erode, with more and more people flocking to 3rd party apps for their primary messaging needs.
Expanded Machine Learning APIs
Apple has been making more use of artificial intelligence, in the form of machine learning, over the past few releases of iOS. Last year it introduced differential privacy, a technique for collecting data to feed machine learning algorithms while still preserving user privacy, and it’s unlikely the company has stopped advancing in this area. AI was the major theme at this year’s Google I/O, and in order to stay competitive, expect Apple to introduce new technology and initiatives at WWDC.
Expanded use of AI throughout the built-in functions of iOS is all but a given, but it would be very interesting for Apple to make this technology available to 3rd party developers as well. Simplified interfaces that help developers harness machine learning for image processing and analysis, natural language processing, or any number of other domain-specific challenges would be well received and timely. An Apple-provided solution would certainly put user privacy and security at its core, encouraging a rapidly expanding ecosystem of AI-powered apps that live up to those standards.
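Small pieces of this already exist. Foundation’s `NSLinguisticTagger`, for instance, has offered on-device part-of-speech tagging since iOS 5; a broader, friendlier family of such APIs is exactly the kind of thing we would expect. A quick taste of the existing API (Swift 3 style):

```swift
import Foundation

// NSLinguisticTagger is a real, long-shipping Foundation API for on-device NLP.
let text = "Siri opened the garage door"
let tagger = NSLinguisticTagger(tagSchemes: [NSLinguisticTagSchemeLexicalClass],
                                options: 0)
tagger.string = text
let range = NSRange(location: 0, length: (text as NSString).length)
tagger.enumerateTags(in: range,
                     scheme: NSLinguisticTagSchemeLexicalClass,
                     options: [.omitWhitespace, .omitPunctuation]) { tag, tokenRange, _, _ in
    let token = (text as NSString).substring(with: tokenRange)
    print("\(token): \(tag)")  // prints each word with its lexical class
}
```

Imagine the same shape of API, but for face detection, object recognition, or sentiment analysis, all running privately on the device.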
Dedicated Siri Speaker
There’s a high probability that Apple will introduce a new piece of dedicated Siri hardware to go along with software improvements. Not much is known about what this hardware might entail, but it seems unlikely that it will simply be a wireless speaker with Siri installed on it.
Amazon’s recent introduction of the Echo Show was a clear acknowledgement that pure voice interaction is just not good enough for some tasks. Similarly, Google’s latest assistant improvements leverage any available screen to provide visual context.
Given that most Apple users already have multiple screens in their home running some variant of iOS, it seems extremely likely that Apple will opt for an approach similar to Google’s rather than including a dedicated hardware screen on its device. In fact, for several years Apple has been improving a technology called Continuity, which allows interactions to transfer seamlessly from one device to another. Expect the forthcoming Siri device to leverage this same technology: service the user’s request, then display relevant visual context on whichever screen makes the most sense at that moment.
The breadth and integrated nature of Apple’s entire hardware lineup will be a compelling selling point for this new product. It will, in effect, complete the experience of having an ever-present assistant. Start your day with Siri via the iPhone on your nightstand, walk to your kitchen and continue the conversation with the Siri speaker on your counter, then head out to work and pick up in your car via Siri in CarPlay or as you walk to work with Siri on your Apple Watch. When you get home at night, use Siri on your Apple TV to search for something to watch, then hit your Siri speaker to lock your doors and turn off the lights before bed.
Instant Apps
The tech community at large has long predicted the death of apps, but it is hard to defend this idea when users continue to download billions of apps per year and Apple continues to generate massive revenue from its App Store. Still, mobile web technology has improved drastically over the past few years, and the new search integration points introduced over the past few versions of iOS are a clear acknowledgement from Apple that users are interacting with mobile software in evolving ways.
Recent statistics show that most apps are abandoned shortly after download, and it is becoming increasingly difficult to get a user to install an app in the first place. Last year, Google began addressing this problem with the introduction of Instant Apps, an Android feature that lets users access apps on demand. Developers define a specific slice of their app for on-demand access, and an end user need only tap a relevant web link or device search result to launch directly into a fully native experience. This drastically reduces the friction of reaching a native app, letting a user move in and out of an app as effortlessly as navigating a website.
Apple has already started laying the groundwork for a similar feature. Apps can already index data for direct access via web searches, and universal links let a user be routed dynamically to either a web or an app experience, depending on what is installed. The introduction of tvOS (the Apple TV variant of iOS) also brought a technology called On-Demand Resources, which allows developers to ship a drastically slimmed-down app whose larger resources, such as images or videos, are downloaded from Apple’s content servers only when needed. All the pieces are in place for Apple to roll out its own Instant Apps-style system this year, and doing so would certainly reassert the case for apps as the primary mobile platform.
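The universal-links half of that groundwork is already concrete: an app claims web URLs via an `apple-app-site-association` file served from the website’s root, in the format iOS consumes today. The team and bundle identifier below are placeholders:

```json
{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "ABCDE12345.com.example.app",
        "paths": ["/products/*", "/checkout"]
      }
    ]
  }
}
```

One can imagine an Instant Apps-style feature growing directly out of this same URL-to-app mapping, with On-Demand Resources supplying the slimmed-down app slice.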
So there you have it: our top predictions and wishes for the products and features to be announced at WWDC 2017. We’ll be back after the conference wraps up with a recap of the most important takeaways, so check back in the weeks following June 5th for more thoughts and insights on the exciting world of Apple. In the meantime, feel free to reach out to us to discuss how these technologies could affect the future of your business!