Saturday, November 4, 2017

Ten Reasons Why I Canceled My iPhone X Order

  
            


The glorious and wonderful iPhone X has arrived. The early reviews are terrific. But how long before the stainless steel edges begin to tarnish? After reading reviews, side-stepping hype and recalling past Apple behaviour, I have come up with the 10 Reasons Why I Canceled My iPhone X Order:
  1. Initial Apple revisions historically have issues. I have learned a few things buying the iPhone 1.0, iPhone 3.0, iPhone 5.0, iPhone 6.0, Apple TV 1.0 and Apple Watch 1.0. These x.0 products invariably have limitations and problems compared to the x.1 or 's' version that follows in less than a year - or in less than six months when things go really badly. The iPhone X is well positioned to follow this pattern.
  2. Users have reported 'burn-in' (ghosting) artifacts on the OLED screen after one day. This will require software and hardware revisions that will come next year.
  3. Apple has delivered its first optical stabilization camera system. It is proprietary and has moving parts. Apple will need an iteration or two to get it right - particularly with users who are tough on their phones (like me).
  4. The notch (a.k.a. ears or horns) at the top of the screen now hides important data. I don't care about the aesthetics but do care that the battery percentage meter no longer fits. I check battery level 30 times a day. The battery remaining metric is now buried in a menu. #OCDFail
  5. No headphone jack. OK. I've lost this battle but I still like to call out Apple for lacking the courage to keep features that people use.
  6. I live in Canada and suspect our friends in Southern California might learn a few things about cold weather usage. Who wants to bet there will be changes to deal with -20 degree performance next year?
  7. No more "sneak peeks"! I routinely do quick glances by pressing the fingerprint reader and looking under the table or off to the side. The iPhone X must be "in your face". Clandestine phone access is no longer possible. 
  8. The battery will suck in the real world. The iPhone X needs active screens, input swipes and cameras running all the time. It will take a while for Apple to figure all this out.
  9. The iPhone X is slow to open and show you the home screen. I can open my iPhone 8 in 50 milliseconds. The iPhone X needs to power up, complete face recognition and a page swipe to get to the home screen. Elapsed time: up to 2 seconds. Face recognition may be convenient but right now it is slower than a fingerprint reader in the hands of a skilled operator.
  10. Existing software will be letterboxed, pillarboxed and improperly scaled until developers adjust to the new iPhone X screen dimensions. Proper coding techniques were supposed to prevent this problem. We are now finding out that most software will need updates.



Which brings us to reason #11. Sorry Apple - I thought I had only X reasons.


THE SCREEN IS NOT BIGGER WHEN YOU ARE ACTUALLY USING IT!!! Sure, the diagonal is longer than on the iPhone 8 Plus, but the screen is narrower. I use the phone vertically most of the time. I want wider rather than taller. The iPhone 8 Plus has a great aspect ratio. The iPhone X is just OK.




Follow-up: 

I bought an iPhone 8 Plus. I am happy for 10 of 11 reasons listed above. I'm still sad that I have no headphone jack. #HeadphoneUsersArePeopleToo

The irony about the iPhone 8: The combined home button and fingerprint reader is now the best iPhone feature. The home button is always there and it works. The fingerprint reader - with no moving parts and haptic feedback - lets you open the phone faster than ever.

I can also register up to 5 fingerprints. That's two different fingers for me, and three for other family members who need to use my phone occasionally. iPhone X face recognition is limited to just one person 🙈.

This situation reminds me of the MagSafe connector on the MacBook Pro. Just when Apple makes it essential and nearly perfect, they discontinue the feature. Apple calls this courage.  I call it something else.  

(Hint to 🍎: A magnetic MagSafe USB-C connector would be nice. Go ahead. You can use this idea.)

Monday, February 27, 2017

Evidence Driven Design

On a recent project, I noticed how my team struggled to prioritize user needs, features and disruptive innovation. Of particular concern was a familiar problem: How can we get our users to validate a new concept or idea that they have never seen or even thought about?

Saying "Let's put it in front of users and see what they think?" is a goal we all support while remaining occasionally skeptical about its relevance.  Can we get valid feedback about a revolutionary idea that is disruptive enough to be:
  • Difficult to adequately describe without building the entire product or related features.
  • Disturbing users who are uncomfortable with sudden change and may skew otherwise promising results.
It dawned on us: User testing at different design stages may be premature, the results gathered may be misleading or user testing at that moment may just be too darn expensive.




We identified a potential extension to Design Thinking that would give us equivalent answers in less time. We call it:  Evidence Driven Design 🔍


It's an idea that stretches back to the Magna Carta. OK, that might be too much of a stretch. Evidence Driven Design is more like going to Small Claims Court 👩🏻‍⚖️.

It says:
  1. Anyone on the team can provide evidence that supports a new idea or design. Their burden of proof is simple: They must present enough compelling evidence to show that the idea is sound and worth addressing.
  2. The quality of evidence presented must be equivalent to that expected at this point in the design process.  
The result is (we hope): Faster and better design decisions with greater team engagement.


Elephant in the Room

Evidence Driven Design lets us address Design Thinking concerns that we occasionally have but don't always want to talk about. These include issues like:
  • User testing performed when there is not enough context to generate reliable feedback.
  • Biases that can be amplified by other errors including story setup, low test fidelity or even individual user biases. 
  • Sometimes, everyone just knows that an idea is great.

What's New?

We want to give all team members the permission to contribute great ideas. All they need to do is prove their case.
Here are our Design Thinking extensions that we think enable Evidence Driven Design:
  1. Anyone on the team can make a case for the prosecution in favor of a new idea.
  2. The burden of proof is on that team member. They must convince the judges (the team stakeholders) that their idea is sound and is supported by evidence.
  3. They must provide evidence that matches the stage in the design process that they are attempting to replace. If a simple survey was warranted, then a description of user intent with some bug listing may be enough. If full user testing was appropriate, then their burden is higher: perhaps a market analysis and a comparison of competitive features with suggested improvements.
  4. Time matters. You cannot waste the valuable time of other team members. Winning your case will often require lots of preparation and a clear presentation of value.  You must prepare your case.
Time will tell if this  Design Thinking extension works for us.  One immediate upside is clear: Design is now a direct responsibility of everyone.


Monday, February 13, 2017

Make it Fast

As I have said in the past, the three most important attributes of search are: Relevance, Relevance and Relevance (with apologies to my friends in real estate 😀)

Speed

The next most important search feature has got to be SPEED 🚅.  Slow results are nearly fatal in any search application.  Irrelevant and slow results will have your users heading for the exits.



Check out Google. They do everything they can to show results in less than one second. Studies from several SEO firms suggest that 3 seconds is an upper limit. Google likes results in under two seconds. My experience from A/B tests says users start getting agitated at around one second (the heel of the hockey stick above). You have until about 2.5 seconds to turn off the hourglass and show some results. If not, your user is inclined to look somewhere else. You don't need complete results - offering some kind of meaningful response is the critical action.
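In practice, that means building the search path around a hard response budget. Here is a minimal Python sketch of the idea: fan the query out to several backends in parallel and return whatever has arrived when the budget expires. The 2.5 second budget and the backend callables are illustrative assumptions, not any particular product's API.

```python
# Sketch: return partial search results before a ~2.5 s budget expires.
# The budget, the backend callables and the result shape are assumptions.
import concurrent.futures

def search_with_budget(query, backends, budget_seconds=2.5):
    """Fan the query out to all backends; return (results, is_partial)."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(backends))
    futures = [pool.submit(backend, query) for backend in backends]
    done, not_done = concurrent.futures.wait(futures, timeout=budget_seconds)
    results = []
    for future in done:
        results.extend(future.result())
    # Abandon the stragglers and show the user what we already have.
    pool.shutdown(wait=False)
    return results, bool(not_done)

if __name__ == "__main__":
    fast_backend = lambda q: [f"quick hit for '{q}'"]
    print(search_with_budget("quarterly revenue", [fast_backend]))
```

The point of the sketch is the `is_partial` flag: the UI can render whatever came back in time and quietly fill in the rest later, instead of leaving the user staring at an hourglass.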

Value

If you think of value as a metric that integrates time and relevance, you have an even bigger concern.


You have more time - but still limited time - to start offering value. Remember: Your users don't really want to use search. They want to have used it. Get them their answer and get moving. Past experience suggests that application type matters: users of complex applications are more lenient. Just watch out for simple searches on the desktop. You need to get to a good answer in 10 seconds or less. If you fail big - or fail too often - your user is not just inclined to leave. There's a good chance they will exit and never come back.

To add some urgency to the mix: Mobile users have less patience. They can leave - never to return - in as little as 5 seconds.
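One way to make that time-and-relevance trade-off concrete is a simple time-discounted score. The exponential decay and the half-life numbers below are my own assumptions for illustration, not a published formula.

```python
# Hypothetical "value" score: relevance discounted by how long the user waited.
# half_life is the wait (in seconds) that cuts perceived value in half; 10 s for
# desktop and 5 s for mobile echo the limits discussed above.
def value(relevance, seconds_waited, half_life=10.0):
    return relevance * 0.5 ** (seconds_waited / half_life)

print(round(value(0.9, 1.0), 2))                 # a fast answer keeps most of its value
print(round(value(0.9, 10.0), 2))                # the same answer after 10 s is worth half
print(round(value(0.9, 5.0, half_life=5.0), 2))  # a mobile user is even less patient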

So don't forget to focus on search result relevance.  Just make it fast.




Monday, January 23, 2017

Natural Language Processing

Natural Language Processing (NLP) is a technology that Gartner reported as mature and usable as early as 2014. Since then, the focus has been on vertical integration. Products like the Amazon Alexa do a remarkable job interpreting spoken phrases and delivering value through skills.

An area of slower advancement (and therefore opportunity!) is Natural Language Question and Answering (NLQA).

NLQA is currently in the dreaded Trough of Disillusionment (i.e., not meeting user expectations). Vendors like ThoughtSpot are defining the market, but so far no one has delivered on the promise (think HAL in 2001: A Space Odyssey ♜ 😎).

The technology behind NLP is mature. If you studied it in school, be prepared for a bit of a shock. The days of building and connecting your own parsers, lemmatizers and Named Entity Recognizers (NER) are gone. NLP-as-a-Service (NaaS) is here. Driven by machine learning, modern NaaS easily beats the best academic products, like the venerable Stanford NLP Parser, that were state-of-the-art just a few years ago.

If you want to jump to the head of the NLP class, consider using services like the Google Cloud Natural Language API.  It works remarkably well without custom training for both text ingestion and question parsing. It implements several high level functions that help developers execute commands and answer questions with greater accuracy and precision. One of its best features: dependency graphs that link verb (action) and noun (thing) phrases to create powerful interpretations without custom programming.

Here is an example of dependency parsing using the Google API:
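(A minimal sketch in Python, assuming the google-cloud-language v1 client and default credentials; the sample sentence and output formatting are illustrative.)

```python
# Print the dependency edges that the Google Cloud Natural Language API returns.
from google.cloud import language_v1

def print_dependencies(text):
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_syntax(
        request={"document": document,
                 "encoding_type": language_v1.EncodingType.UTF8}
    )
    tokens = response.tokens
    for token in tokens:
        head = tokens[token.dependency_edge.head_token_index]
        print(f"{token.text.content:>12} ({token.part_of_speech.tag.name:<5}) "
              f"--{token.dependency_edge.label.name}--> {head.text.content}")

if __name__ == "__main__":
    print_dependencies("Show me total sales by region for 2016.")
```

Each token's dependency edge points at its head token, so verb (action) and noun (thing) phrases can be linked together without writing a parser of your own.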



The moral: NLP is more mature than you might think. Start at the top using NLP services and you can meet NLQA challenges faster than you (and your boss) may have thought possible.

Sunday, January 8, 2017

The Search Mindset

Consider your intent when using a search function. I like to think of two possible expectations:


  1. Discovery - finding data and concepts related to search terms. In other words, "I am not sure what is out there. I want to learn more." Search results typically get wider when showing something like a "related concepts" panel.
  2. Refinement - filtering to reduce results according to given search criteria. In other words, "I want to see only related data and concepts." Search results typically get narrower when showing something like a faceted result.
This may not seem like a big distinction, but discovery and refinement generally focus on different outcomes. Mixing both in a single result list can be confusing.
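As a rough sketch of the two mindsets (the toy corpus and tag model below are made up for illustration; a real engine would use facets and a concept graph):

```python
# Toy illustration of the two search mindsets over an in-memory corpus.
DOCS = [
    {"title": "Mercury (planet)",  "tags": {"astronomy", "planet"}},
    {"title": "Mercury (element)", "tags": {"chemistry", "metal"}},
    {"title": "Freddie Mercury",   "tags": {"music", "band"}},
]

def discover(term):
    """Widen: return matching docs plus the related concepts they expose."""
    hits = [d for d in DOCS if term.lower() in d["title"].lower()]
    related = set().union(*(d["tags"] for d in hits)) if hits else set()
    return hits, related

def refine(hits, facet):
    """Narrow: keep only the hits that carry the chosen facet."""
    return [d for d in hits if facet in d["tags"]]

hits, related = discover("mercury")   # discovery widens: 3 hits, 6 related concepts
planet_only = refine(hits, "planet")  # refinement narrows: 1 hit
```

The "pivot" discussed below is then just feeding one of those related concepts back into a new discovery pass.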

With that said, the ability to "pivot" and let a user seamlessly switch from "narrowing" to "widening" activities is a critical feature that distinguishes awesome search engines from the also-rans.

Thursday, January 5, 2017

Relevance, Relevance, Relevance

Question: What do you think is the NUMBER 1 capability in every great search engine?



For me, the answer never changes.
It is: RELEVANCE!


Finding data with search is easy. Finding too much data is unavoidable. Filtering, sorting, prioritizing and ranking to create the most relevant results has been a primary search goal for decades.

How many times have you found the best answer on the 4th page of Google results? If this were common, I am sure you would have switched to something else long ago. Getting relevant results - with the best answers first - is what brings you back for more :).

Irrelevant results rapidly erode user confidence. Search engines that provide poor answers take users from hope to despair in only a few clicks.  Once trust is lost, it can be very hard to get back.


Relevance on the Web - Learning from Google

Google has always been the standard bearer for good sets of ranked results. But frankly, they have had an easier job than most enterprise search products. Web content can be sorted and prioritized using straightforward statistics - some as simple as counting the number of sites that point to a given page. Google also gets a boost from hyperlink text that describes a link target. The text in a link literally offers a curated description of the page it points to. Finally, web content is routinely stored with relatively large passages of unstructured text that makes context and meaning easier to determine.
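As a toy version of that "count the sites that point to a page" statistic (the link graph below is invented purely for illustration):

```python
from collections import Counter

# LINKS maps a source site to the set of pages it points at (invented data).
LINKS = {
    "a.example": {"docs.example", "blog.example"},
    "b.example": {"docs.example"},
    "c.example": {"docs.example", "blog.example"},
}

def rank_by_inbound_links(links):
    """Rank pages by how many distinct sites point at them."""
    inbound = Counter(target for targets in links.values() for target in targets)
    return inbound.most_common()

print(rank_by_inbound_links(LINKS))  # [('docs.example', 3), ('blog.example', 2)]
```

Enterprise content rarely has this kind of link graph, which is exactly why the sections below look at the other signals Google now leans on.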


Enterprise Search Analytics can learn from Google (again)

Modern enterprise search systems need to find alternative ways to filter and rank their results. Page popularity and link text aren't nearly enough.  Once again, Google shows us some of the options because they no longer exclusively rely on web ranking methods.  Let's review some of the recent advancements we have all seen but don't necessarily think of as "relevance builders".

Consider a Google search for the single term "mercury".  You get results like:


Google features now include:
  1. Disambiguation of search terms. Note how this is not simply dictionary autocomplete (which is also good). It is a proactive listing of related concepts that answer the question: "Did you mean X?"
  2. Knowledge graph of attributes related to the default concept.  It helps you understand that you are asking about the right thing before even looking at the results.
  3. List of related topics. In other words, "Did you also know X?"
  4. What other people searched for.  Learning from colleagues is often the quickest route to an answer - especially a high value curated answer.
  5. And finally a "feedback" link to make sure items 1-4 are correct.  Identifying inaccurate outcomes is crucial to user happiness (and the underlying machine learning algorithms too).
Oh yeah: there are the search results too.  But you already expected that :)  

It used to be that enterprise search solutions needed to be different than Google. Nowadays, being like Google is a good place to start.



Tuesday, January 3, 2017

What's new?

Back in the Saddle



No blog posts for three and a half years. Time to fix that! 🔛


Today I embark on a new career at qlik.com focusing on search driven analytics. What's with that? I did some of the first commercial work in search driven BI 15 years ago.  Isn't this a step backward 🔁?


Search Driven Analytics - Ready for Prime Time.


So much has happened in the Business Intelligence world. Innovators like Qlik and Tableau have brought BI and analytics to more people than ever before. In the meantime, big data, Hadoop/Spark, NoSQL, Natural Language Processing (NLP) and Machine Learning / Artificial Intelligence have become commonplace. Open Source has also moved to the forefront. Facets, dimensions and clustering of terms are implemented in several world-class open source packages.

Search technology is finally positioned to leverage these advancements in ways that we never envisioned. Conveniently (for me at least), I have spent the last ten years researching and developing apps in all these areas. I am excited at the prospect and can't wait to share some new insights about the current state-of-the-art and where I think we can go.