Trusted Digital Identity

The Nordic countries rank high in trust, which means that people have trust in other people. And in organizations. And in the government. Trust is a core part of making a digital identity scheme work. There are countries where the uptake of digital identity is very slow, and one of the reasons is the lack of trust.

“Trust is a fundamental element of social capital – a key contributor to sustaining well-being outcomes, including economic development.” (Esteban Ortiz-Ospina and Max Roser (2016) – “Trust”. Published online at OurWorldInData.org. Retrieved from: https://ourworldindata.org/trust)

One problem is of course also the lack of services that accept the digital identity: without services there is little reason for users to get one, and without users there is little reason for services to support it. A classic chicken-and-egg problem.

Then there is usability. If the digital identity scheme requires a card reader that you must buy, drivers that you must install to make it work on a number of different PCs, and extra effort to make it work with mobile devices or tablets, the stage is set for disaster. And if there are no services available, why would users want to set up an electronic identity (eID) in the first place?

If you do not have trust in the government, perhaps due to fear of surveillance, you will also be very reluctant to share personal information online. There are countries with a history of not just surveillance but even eradication of groups of people, so this is understandable.

On the other hand, many people are more than happy to share an abundance of personal details on social media, and seem oblivious that this information is available to a lot of people, including the government. Many seem more than eager to sell their private information in return for targeted marketing, for example through the use of store loyalty cards. Perhaps social media has given users some sort of comfort, letting them believe that they are only sharing information between friends. We tend to forget that information such as which links we click on, which posts and pages we like and comment on, as well as where we are and which device we are using, is also collected and used to learn more about us. This sharing is self-motivated, possibly because you are being rewarded by other people liking or commenting on your posts. Nobody is requesting the information from you; you are sharing it voluntarily. In return, you get paid in likes, as well as in ads, all the while (consciously or not) trusting the social media platforms not to misuse your data.

To make a solution trustworthy, it must be transparent. The user must understand what information has to be shared (e.g. uploading an image of a passport), why this information has to be shared (e.g. to verify who you really are, and to prevent someone from stealing your identity), as well as how the information is being used (for example, for the sole purpose of verifying your identity).
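One way to make the what, why and how explicit is to model them as data that can be shown to the user and stored alongside their decision. The following is a minimal sketch in TypeScript; the ConsentRequest and ConsentDecision types and all field names are purely illustrative assumptions, not part of any real API:

```typescript
// Illustrative sketch only: models the what / why / how of a data request
// together with the user's recorded decision.

interface ConsentRequest {
  what: string[];      // which data is requested, e.g. a passport image
  why: string;         // the reason the data is needed
  howUsed: string;     // how the data will be used
  requestedAt: Date;   // when the request was shown to the user
}

interface ConsentDecision {
  request: ConsentRequest;
  granted: boolean;    // the user's explicit choice
  decidedAt: Date;     // when the choice was recorded
}

// Example: the passport-upload scenario described above.
const passportRequest: ConsentRequest = {
  what: ["passport image"],
  why: "to verify who you really are and to prevent identity theft",
  howUsed: "used for the sole purpose of verifying your identity",
  requestedAt: new Date(),
};

function recordDecision(request: ConsentRequest, granted: boolean): ConsentDecision {
  return { request, granted, decidedAt: new Date() };
}

// The user accepts after seeing what, why and how.
const decision = recordDecision(passportRequest, true);
console.log(decision.granted); // true
```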

The GDPR (General Data Protection Regulation: https://www.eugdpr.org/) comes into effect in May this year, and is good news for all of us. The GDPR was created to protect the privacy of the user. It is not for organizations. It is not for governments. It is all about protecting how our personal information is used. The GDPR requires that anybody collecting and using PII (Personally Identifiable Information) obtains consent from the user before using their data.

And to show that they truly mean this, the EU has put substantial fines on breaches, up to 4% of global revenue. So hopefully, this should make collecting and using personal information more transparent, as well as help restore trust in how identity data is used.

Signicat is currently working with some of our large customers to see how consent management can be integrated into our solutions, while at the same time putting as little stress as possible on the user.
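As an illustration of what such consent management might involve, here is a minimal sketch of a consent check that gates the use of personal data on a recorded, unexpired consent, so that the user does not have to be asked again for the same purpose. The ConsentStore class and its methods are hypothetical assumptions and do not describe Signicat's actual solution:

```typescript
// Hypothetical consent gate: personal data is only used when a granted,
// unexpired consent exists for the stated purpose. Illustrative only.

interface StoredConsent {
  userId: string;
  purpose: string;
  granted: boolean;
  expiresAt: Date;
}

class ConsentStore {
  private consents: StoredConsent[] = [];

  record(consent: StoredConsent): void {
    this.consents.push(consent);
  }

  // A purpose is covered if there is a granted, unexpired consent for it,
  // which lets an earlier decision be reused instead of asking the user again.
  hasValidConsent(userId: string, purpose: string, now: Date = new Date()): boolean {
    return this.consents.some(
      (c) => c.userId === userId && c.purpose === purpose && c.granted && c.expiresAt > now
    );
  }
}

// Example usage: only verify identity when consent for that purpose exists.
const store = new ConsentStore();
store.record({
  userId: "user-123",
  purpose: "identity verification",
  granted: true,
  expiresAt: new Date(Date.now() + 365 * 24 * 60 * 60 * 1000), // one year from now
});

if (store.hasValidConsent("user-123", "identity verification")) {
  // proceed with using the personal data for this purpose only
}
```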

Blogpost by John Erik Setsaas, Identity Architect, Signicat
