Bin the Product Evaluation Spreadsheet

I’ve just read a great article by Alan Pelz-Sharpe from Intelligent-Enterprise. So what’s he talking about?

Well, decision makers and IT procurers usually follow a standard process for buying software:

1 – They establish the required specification and construct the famous “table of evaluation.” This is usually an Excel file with several sheets and hundreds of questions (if not thousands), most of which are closed and very precise.

2 – They send the list of questions to a selection of developers (often just the famous ones that come to mind straightaway).

3 – The software is scored against the technical requirements, and the top couple of applications are tested before purchase.

So, what’s wrong with that?

1st Pitfall: many questions leave room for vendor subjectivity

Lots of answers are very difficult to verify, so liberal interpretations are only discovered after the software has been installed. Furthermore, Alan Pelz-Sharpe reports that vendors will often outright lie, skewing the results towards the less ethical developers. This leaves the honest and rigorous providers at the bottom of the pile, even when their software might actually be the best fit.

2nd Pitfall: technical capacity vs. actual use

Often buyers ask, “does the tool do this?” Alan argues that a better question is, “how can I do this with the tool?” The success of a technology project rests on uptake and adoption by the end users. If a feature is technically possible but requires advanced training, it’s probably not going to be used and not worth having for most users. Ease of use should be as prominent in the buying decision as the number of features.

3rd Pitfall: SaaS is ever evolving

When buying SaaS, a list of questions yields only a snapshot of the current product. Because SaaS products are delivered over the internet, they can be updated seamlessly whenever the developer builds a new feature. For this reason, buyers must know not only where the product is, but where it is going.

Our Experience

We recently received exactly this type of tabular software scoresheet for a huge global BI project. In accordance with our SaaS philosophy of selling people what they want and only what they want, we were strict with our response. Despite not fulfilling their requirements completely, we were shortlisted alongside several on-premises solutions.

Why?

Because the buyer could quickly and easily trial our application. He set up a demo account and had a go himself to see how easy it was to use. We then gave a demonstration using his own data and showed all of the features he was interested in. We answered his questions live during the demo, and every time he asked about a capability, we showed it to him.

This was miles away from the formal question sheet, but he came away knowing exactly what he was buying, and we gained more insight into how our customers use our product.

So what do we recommend?

Showing rather than telling.

The buyer should test the solution themselves, preferably on their own at first and with a demonstration afterwards. Vendors should always make their products easily available for testing to avoid both parties wasting time. Like all good information exchanges, procurement should be a discussion, not just a request for information.

Can an RFI provide a robust way of testing software? Will a demo alone ever be enough? Do RFI response times and accuracy also reveal something about the service orientation of the company? Remember, this is a discussion, not just a page of information...