We recommend referring to the full user manual, which can be downloaded here


Optioneer is a cloud-based system that can be used to appraise different choices.  It is particularly beneficial to consultors who wish to demonstrate that they have carefully considered the process of option development and are thus not subject to predetermination.

Optioneer can appraise a wide range of choices but is typically used in the appraisal and shortlisting stages of a process.

How it works

Optioneer uses a proven methodology to appraise a number of parallel options.  Consultors must stipulate the criteria on which options are appraised (typically these are derived from pre-consultation activity) and any ‘weighting’ that should be given to each criterion (alternatively, all criteria can be equally weighted).

Cost should never be scored as a criterion as all options are viable proposals in that they fit within fiscal constraints.  Instead, finances are considered at the end of the scoring process to determine the cost-benefits (cost per unit score) of each option.  To this extent, Optioneer has fields to specify capital, revenue and savings for each option over any given period of time for the purposes of comparison.

Consultors can use the Optioneering platform to create bespoke ‘grids’ – web pages which have a series of rows and columns that reflect each option, criteria and the corresponding scores from a group or individual stakeholders.  Weightings for each criterion can also be specified, manually or democratically among stakeholders.

The grids work a bit like a spreadsheet in that total scores and comparisons are automatically updated whenever something changes.  Grids can be archived and recalled, options can be highlighted, notes can be made and scores can even be collected remotely from a distributed audience.

This means that consultors can keep robust records of the optioneering process, have a failsafe methodology for comparing options and are able to run policy simulations with ease in order to test a range of scenarios and ensure that preferred options are chosen wisely.

Use cases

Optioneer can be used in a number of ways, for example: -

  • As a back-office tool to test various scenarios and assumptions (“sensitivity testing”);
  • To document and check the outcome of a (non-digital) option development workshop;
  • To work alongside an option development workshop in the collection of scores or presentation of scores and rapid calculation of results;
  • To understand cost benefits once scoring has occurred.

 Features (as of 01.10.2020)

  • Cloud based
  • Scoring by consensus or individually
  • Remote collection of scores via mobile phone ('live mode')
  • Sensitivity testing
  • Automatic results analysis (ranking by score and cost benefits)
  • Automatic matrix calculation
  • Save, share, delete and restore projects
  • High visibility "Workshop mode"
  • Forcefield analysis and validity checking
  • Finance calculator with Capital, Revenue and Savings figures.
  • Semantic anchors provided for scoring purposes
  • Unlimited options and criteria per matrix
  • Notes section per option
  • Highlight (hold/release) individual options


Please note that the Optioneering desktop interface is designed to work with the Google Chrome browser on a desktop or laptop computer only.  The separate mobile interface is HTML based and optimised for smartphones.

Known Limitations

  • We do not recommend comparing more than five options in any given grid as the limited screen real-estate can cause formatting problems.
  • The number of participants who can score “per grid” has been limited to 20 for the same reasons.
  • To “save” a grid it must be given a unique name and a license must be in effect. Grids can be printed for permanent archive.
  • The design of a grid (e.g. the labels) cannot be changed once a grid has been set-up.


Getting started

You will need to log in to Optioneering with your supplied username and password in order to create a new matrix or retrieve saved work.

Browse to: and enter your credentials

To log out at any time (for example, to switch accounts), navigate to “my account” then click “LogOut”

You can return to the main login screen at any time by clicking on “New Option Grid” from the top menu

Creating a new project or recalling a saved project (from 1st screen after login)

In Optioneering, projects are called grids or matrices because they are based on a scoring chart.  When you first log in you will see a list of archived projects, each with a project name and a creation date.  Projects are listed in chronological order.  Click on the “open” button if you want to retrieve an existing project.  Click “Delete” to remove a project permanently (you will be prompted to confirm the project name for safety purposes).

In this section we will show you how to create a new project based on consultation data.  First, click on “Create New” from the top ribbon.  A new screen will appear asking you about the design of your scoring process.

Firstly, provide a project name.  This should be a short name that will help identify your project, and it will also become the default filename.

Secondly, specify the number of criteria against which each option will be assessed.  For example, these might be ‘distance from the city’, ‘pollution’ and ‘sustainability’, in which case the answer is three.  Please note that COST or AFFORDABILITY is never a criterion; this is dealt with separately in Optioneer.

Thirdly, specify the number of options or proposals that you wish to evaluate.  Please note that these will appear horizontally on your scoring grid, so it may be impractical to appraise more than 6-7 per project.

Finally, specify the number of participants involved in Optioneering.  This is the number of people who can cast a vote in the development of criteria weightings, if applicable.  It does not have to be the same as the number of people involved in the scoring process, but it later becomes the maximum number of individual scores that can be collected for any given box.

In this example, we have used 3 criteria with 4 options and 2 participants.  Click “Next”.  On the next screen you will be asked for more information and labels for your matrix.

On this screen you will need to label each of your criteria.  For example, ‘distance from the city’, ‘pollution’ and ‘sustainability’.  Best practice dictates that criteria should be derived or developed during pre-consultation but they may also be taken from business values or aims.

Next, label your options.  You could stick with “option 1, option 2…” but we recommend using meaningful names such as “Do Nothing” or “Relocate Hospital”.  The number of fields corresponds to the number of options selected on the previous screen.

In the section “how do you want scoring to work” you have two choices.  If you are operating a workshop and want grid boxes to hold a single, agreed score, select “by consensus/one input”.  By default, this is set to “by group or individually”, as this provides the greatest flexibility.  Under this arrangement, you can select how many sub-scores you will collect and add for each option/criterion combination (i.e. each box).

Finally, select the number of inputs for each box.  This is the number of individual scores that can be collected and averaged for each option/criterion combination, and it reflects the number of groups or individuals who are going to participate in the scoring process.

In this example there are two inputs to reflect two participants.

To proceed there are two buttons with two different options:

  1. Continue to determine weightings: The weighting for each criterion is determined by a scoring process. Each participant can score to determine average weightings which are taken forward into the grid.
  2. Go straight to grid with equal weightings: A weighting of “1” is applied to each criterion and there is no weighting scoring process. This means that each criterion has the same weight.

Weightings screen (forcefield analysis), optional for democratically determining weightings

You will see your criteria in columns and rows to reflect the number of participants selected in the first “basic details” screen.

The average of each participant’s weighting for each criterion is automatically calculated by this screen and placed in the lower fields.

You cannot progress to the next stage until the total of all the averages equals 100 points (rows and columns will turn green when they are correct).  For this reason, the “Continue to scoring” button does not appear until the matrix is correctly configured.

However, sometimes the sum of the averages doesn’t quite equal 100 because of rounding in the division.  In this instance, the totals can be manually rounded up or down.

Weighting procedure

Each participant is given 100 points.  These points are distributed across the criteria by entering them into the appropriate boxes.  More points equate to more importance.

The number of points given by each participant should add up to 100 across the criteria.  If this is the case, the row “total” indicators will go green.  If not, they will be pink.

The average weighting for each criterion will then appear in these boxes.

The row below contains the actual multiplier used in the system.  It should be green to indicate that the sum adds up to 100, but you can adjust the numbers in this row should the averages not work out neatly.
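The weighting procedure above can be sketched as follows (a minimal illustration with hypothetical names and figures; Optioneer performs this calculation internally):

```python
# Illustrative sketch of the weighting procedure (hypothetical data;
# Optioneer performs this internally). Each participant distributes
# 100 points across the criteria; the per-criterion averages become
# the weightings used in the grid.

criteria = ["distance from the city", "pollution", "sustainability"]

# One row of points per participant; each row must total 100.
participant_points = [
    [50, 30, 20],  # participant 1
    [40, 40, 20],  # participant 2
]

for row in participant_points:
    assert sum(row) == 100, "each participant must allocate exactly 100 points"

# Average the points per criterion to get the weightings.
weightings = [
    sum(p[i] for p in participant_points) / len(participant_points)
    for i in range(len(criteria))
]

print(dict(zip(criteria, weightings)))
```

In this example the averages sum exactly to 100; where the division produces averages that do not, the manual rounding described above is needed before the “Continue to scoring” button appears.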

Scoring matrix (next screen)

A box is an element of the “options grid” that can be created by Optioneer.  A grid contains multiple boxes and each contains the individual scores received by participants for each option and each criterion.  You can switch between the “by group” and “by consensus” tabs by clicking on them – this reveals the individual and total scores.  The default view (presented on load) is set during the design process.

A total score for each box is calculated by adding the individual sub-scores and multiplying the sum by the weighting[1] for that criterion.  All the box scores for each option are then added up to create a total score for that option.  Finally, the total cost for each option is divided by its total score to give a cost per unit benefit.
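The box and option totals can be sketched as follows (a minimal illustration with hypothetical names and scores; Optioneer performs this calculation internally):

```python
# Illustrative sketch of the box and option totals (hypothetical data;
# Optioneer performs this internally). Each box holds the sub-scores
# collected for one option/criterion combination.

weightings = {"pollution": 45, "sustainability": 55}  # must total 100

# Sub-scores per criterion for a single option, one value per participant.
option_boxes = {
    "pollution": [3, 5],       # two participants' scores
    "sustainability": [4, 4],
}

# Box total = sum of the sub-scores multiplied by that criterion's weighting.
box_totals = {
    criterion: sum(scores) * weightings[criterion]
    for criterion, scores in option_boxes.items()
}

# Option total = sum of all the box totals for that option.
option_total = sum(box_totals.values())
print(box_totals, option_total)
```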

At the top of the screen are the matrix controls.  Click on each section header to see expanded functionality.  Click the section headers again to collapse.

[1] The weightings can be set manually by the administrator or created democratically by the participants

Entering financial data

Optioneering supports the input of financial data for each option so that cost benefits can be calculated.  Cost data should not be visible to participants during the scoring process so these inputs are hidden behind a collapsible section as per below: -

To calculate the cost benefits, simply specify the capital, revenue and savings estimates for each option in the appropriate column.  To make this accurate, we suggest you think about the costs over a set period of time – perhaps 5 or 10 years.

The “£k (Total)” is automatically populated.  This is the capital cost PLUS the revenue cost MINUS the savings.  If scoring has already occurred then a cost per unit benefit is displayed in the row at the bottom of this section.  The lower this number the better in terms of ranking results.
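The finance calculation can be sketched as follows (a minimal illustration with hypothetical figures; Optioneer populates the “£k (Total)” field and the cost per unit benefit automatically):

```python
# Illustrative sketch of the finance calculation (hypothetical figures;
# Optioneer performs this internally). All figures in £k over the same
# comparison period, e.g. 5 or 10 years.

capital, revenue, savings = 500, 120, 80   # £k for one option
total_cost = capital + revenue - savings   # the "£k (Total)" field
total_score = 800                          # from the scoring process

# Cost per unit benefit: the lower this number, the better the ranking.
cost_per_unit_benefit = total_cost / total_score
print(total_cost, cost_per_unit_benefit)
```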

Helper tabs

There are three tabs at the top of the screen.  Click on the headings to scroll through these.

  • “Weighting adjustments” is used after scoring to test different weighting sets
  • “Analyse results” is also used after scoring to automatically rank the options
  • “Display anchors” displays a useful graphic to help normalise scoring. We recommend this is displayed during the scoring process for the benefit of participants.

Viewing modes (first collapsible tab)

  • Normal mode. This is the standard view.  Use this when working at a desktop computer on a matrix.
  • Workshop mode. This creates a “high visibility” view of the matrix where consensus scores are enlarged and notes sections are collapsed.  The scores and calculations will be easier to see but it can only accommodate one sub-score per box (i.e. is only really suitable for consensus methods).
  • Live mode. This is a form of normal mode where scores can be collected remotely by an audience using their mobile phones.

Note that no data will be lost if you switch between viewing modes, but we recommend saving your grid before doing so.  The ability to switch viewing modes will depend on your current mode.

Control panel (second collapsible tab)

The control panel shows you the number of groups/participants, criteria and options as specified in the design screen.  These cannot be adjusted.

The ‘control panel’ is used to update the matrix calculations.  If you are scoring “by group” then the matrix will automatically update once a number is entered into a box.  For all other methods, you must click “calculate entire matrix” to calculate the totals.  This is to prevent bias from a total score being revealed during the process of scoring.

Clicking in the “by consensus” tab of any box will reveal the workings for that box at any time, as long as the ‘calculate entire matrix’ button has been pressed.  The workings show the total of the sub-scores multiplied by the weighting.  For example: -

If you click on the “hide workings” button in the control panel then the calculation is hidden, as are all the totals.  The “reveal workings” button undoes this.

For details of “sensitivity test” and “run analysis”, see further sections of this document.

Advanced functions (third collapsible panel)

The advanced functions tab has a single purpose: saving the data in the matrix to file/cloud storage.

All you need to do to save the data is specify a filename and click “archive this matrix”.  Please note that once you click to save, the matrix will close and you will need to load it back if you wish to restore it.  If a duplicate filename is specified, it will not overwrite an existing file.

 Weighting adjustments

Optioneer brings forward the weightings for each criterion from the design screen and these cannot be adjusted.  However, Optioneer allows you to specify an alternative/second set of weightings for you to experiment with.  This is useful to see if an alternative view would change the preferred option.

You can recall the default weighting by selecting the weighting adjustments tab and expanding the sensitivity testing section by clicking on the label.

The weightings for each criterion are revealed in the uppermost rows.

To specify alternative weightings, put the corresponding numbers in the lower row for each criterion.  The boxes will go pink if the total weighting does not add up to 100.  They will show green if the total is 100 (as per the original weighting distribution).  You will also be shown the positive or negative adjustment from the original values in the box adjacent.

To apply the alternative weightings to the matrix, click the “sensitivity test” button in the control panel.  The grid will go yellow and the new scores will be applied with the new multipliers.  To switch back to the original weightings, click “calculate entire matrix” in the control panel.

Analyse results tab

The analyse results tab can be used to quickly determine the rank (from top to bottom) of options based on both total score and cost benefits.  This is only useful if scoring has taken place.

To get the results you must click “run analysis” in the control panel.  Please note that if two or more options share the same score or cost benefit, the ranking between them may fluctuate, so we recommend pressing the ‘run analysis’ button a number of times to see if this is the case.
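The two rankings produced by the analysis can be sketched as follows (a minimal illustration with hypothetical option names and figures; Optioneer’s “run analysis” does this automatically):

```python
# Illustrative sketch of ranking options (hypothetical data; Optioneer's
# "run analysis" performs this automatically). Higher total scores rank
# better; lower cost per unit benefit ranks better.

options = {
    "Do Nothing":        {"score": 420, "cost_benefit": 1.20},
    "Relocate Hospital": {"score": 800, "cost_benefit": 0.675},
    "Refurbish":         {"score": 650, "cost_benefit": 0.90},
}

by_score = sorted(options, key=lambda o: options[o]["score"], reverse=True)
by_cost_benefit = sorted(options, key=lambda o: options[o]["cost_benefit"])

print(by_score)          # best total score first
print(by_cost_benefit)   # best cost benefit first
```

Note that ties in either measure are exactly the situation where the on-screen ranking may fluctuate between runs.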

Collecting scores using ‘live mode’ and mobile devices

This mode allows the remote collection of scores from each participant.  The project screen will display a new upper control panel as shown in the screenshot below: -

You have the ability to ‘start’ and ‘stop’ a scoring session manually – it is only during this time that remote scores will be collected and put into the grid automatically.  The indicator will illuminate yellow when collection is in progress.

Hence, in order to collect scores, you must click “start live mode”.  You can stop collection by clicking “stop live mode”.  When collection is started, scores are refreshed periodically on the grid once each participant submits them.  There is a small delay.

There is a further set of indicators at the bottom of the section.  These represent each participant and are numbered accordingly.  If an indicator illuminates green then that participant has submitted their score.  If the indicator is white then scores are yet to be submitted by that participant.  Once all participants have completed the scoring process and all the participant indicators are green, the collection process will stop automatically.

To use this mode, take note of the “session ID” that appears at the top of the screen.  Each participant needs this for the App and must also be assigned a participant number (incrementally from number one onwards).

One approach is to write each participant number together with the session ID on a piece of paper and distribute these to participants in advance of scoring.  The other approach is to provide voting terminals (e.g. iPads) which are “pre-programmed” with this data.

To operate the mobile scoring, participants should navigate to on their devices[1].  Next, they should specify the SessionID and participant code.

The mobile user will be able to scroll through each criterion in turn and score it for each option.  When they click “submit”, their scores will be transferred into the desktop matrix, assuming the collection process is running (the “start” button has been pressed on the desktop interface).

[1] This works on any web browser, including from a desktop computer