Protecting individuals’ privacy

Let’s say there are only five people within an organisation, and two of them said they belong to the LGBT community, but only one of them is out at work. If this information were shared publicly, the person who isn’t out could be “outed” through a process of elimination.

In some cases the respondent might feel that even an aggregate display of their team’s data would make it easy for someone to identify which answer they gave.

It’s impossible for us to know when an answer might put someone’s privacy at risk, so we decided to defer this responsibility to the respondent. They will know best whether they are happy sharing the information in aggregate or not. Nobody else should make this decision on their behalf.

How does the visualisation handle this?

Any questions where individuals aren’t willing to share their answers publicly will not generate a visualisation.

If we were to hide just the data points which put people’s privacy at risk, the resulting visualisations would be factually wrong.
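The rule above amounts to a single all-or-nothing check: a question is visualised only if every respondent agreed to share their answer publicly. A minimal sketch in Python, assuming a hypothetical per-response `share_publicly` flag (not the actual implementation):

```python
def can_visualise(responses):
    """Return True only if every respondent consented to public sharing.

    `responses` is a hypothetical list of dicts such as
    {"answer": "...", "share_publicly": True}.
    """
    # One unwilling respondent suppresses the entire chart; we never
    # drop individual data points, as that would misrepresent the data.
    return all(r["share_publicly"] for r in responses)


question = [
    {"answer": "yes", "share_publicly": True},
    {"answer": "no", "share_publicly": False},
]

print(can_visualise(question))  # False: one "no" suppresses the whole chart
```

Suppressing the whole chart rather than filtering rows is what keeps any visualisation that does appear factually accurate.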

What about the raw data?

You can still see your raw data in the spreadsheet, which will help you understand which areas you can improve.