I’ve written previously about the various devices I’m using these days, and how I think the perfect mix of devices is different for each person. I’ve also tried to convey the importance of context when discussing any piece of technology, whether hardware or software. I can spend all day espousing the virtues of smartphones, but I wouldn’t use one to write this blog post or read a book in direct sunlight – nor would I use Photoshop just to rotate a picture.
We find ourselves in many different contexts throughout the day, and we are constantly shifting the mix of technology we use to match. I’d argue, however, that we end up doing more of the adapting than the technology does. So far, it has mostly been up to us to recognize and convey our current context to our devices and software. This act of conveying our context ranges from simply deciding which device to pick up, to typing in our current location, to turning Wi-Fi on and off, or any number of other small things. This is finally starting to change as context becomes the new innovation battleground.
This context battle is actually being waged on two fronts: hardware and software. In hardware, the OEMs are offering an ever-widening array of form factors, many of which are adaptable to several key contexts. I’m personally very excited about Microsoft’s Surface tablets, for instance. The Touch Cover will provide a great keyboard for when you need it, without sacrificing weight or size. Will I still plug in a normal-sized keyboard when docking it at work? Of course, but that’s the beauty of adaptability. However, there is so much more to be done. I want devices that truly transform and reduce redundancy. I want to stop carrying around 3 screens, 3 processors, and 3x the storage. I want to dock my Windows 8 slate and gain not just battery life, but processing power too. There is huge opportunity here.
The second context battlefront is in software, and more specifically, mobile apps. The start-up press (especially Scoble) is abuzz about the ‘contextual age’. Everyone is scrambling to make their software more context-aware, mostly by taking better advantage of the information that’s already available. Smartphones have all kinds of sensors and data that can help convey context. Where are you? How did you get here? What does your calendar say you’re doing? Who have you called recently? How does what you’re doing now fit into the patterns of what you’ve done previously? For many people, Siri was their first introduction to a new wave of context-aware software. Siri can [attempt to] infer the meaning behind your queries based on much of the above information. It knows who you mean by ‘mom’, it knows where ‘home’ is, and it assumes you’re more interested in things close by. Google Now takes this further by monitoring your behavior over time and preemptively serving up information it ‘guesses’ you’d want to see – like when traffic is bad on your normal route to the office, in case you want to leave early.
The above examples are mostly about being better at responding to input, but I’m just as excited about the contextual age’s potential to help filter out more of the constant information noise that’s quickly becoming the biggest problem with living in the future. It’s too much to get into in this post, but I am starved for a better way to manage all my social feeds and information sources. So much of it is time-sensitive, yet I can’t possibly keep up with even my Twitter feed these days. I long for the day when my apps know when to interrupt and when to just shut up. I’d like to go out to dinner and never look at my phone, but know that if something completely amazing and important to me starts happening, I’ll find out about it while it’s happening.