

Gregory McVerry just posted a great quick-thought on credibility. At a startup I recently worked for, credibility, or "proofs" as our group called them, was one of the core focuses of our greenfield SaaS project to change the world of work.


Before my time

Prior to my joining, the business focused on ratings, [asserted] skills, availability and on-platform work history. As a function of work history, information was tracked, including a multi-faceted rating system, as a way to differentiate talent and allow it to be selected. This was a core value proposition of the platform that I worked to maintain and extend, and it was all about helping people to retain, access and promote talent.

An iteration for virality

Our first pivot was in play, and had been decided on, around the time I joined. It was about enhancing, and eventually replacing, what existed with something that would increase the number of people on the platform. As well as existing platform ratings, people who had worked with someone before could boost the visible, searchable credibility of new onboards and existing talent through experiences which happened off-platform.

Growing from success

Over the last 9 months, one of the things we looked to improve further, and made some progress towards, was expanding this concept of proofs as we iterated on what an experience could look like. By refining the methodology for confirming skills and experiences, and trying to add weight to the credibility, we wanted, big-picture, to ensure a Podump Inc 5-star rating might not be equal to a 5-star rating from an industry leader, such as a Microsoft MVP's. At some point I think the concept of ratings was lost, and we had various other mechanisms, such as endorsements and confirmations, to impact visibility on a new platform with a new, less restrictive positioning.

Other work

Other software projects I've worked on have focused on pre-vetted expertise and representing expertise, where credibility was assigned and proofs were uploaded, tracked and managed through systems I've designed, architected and helped build. I've made these systems searchable via a web-based UI in the medical, legal, accident, and signature-collection industries. I also have some experience in digital forensics: electronic money-laundering document checking and reporting. It's an area I feel quite confident in, with some experience assisting marketers to promote and showcase credibility proofs.


Full disclosure

I'm sorry I can't write every detail. Not only was this the result of team effort, but going into too much detail could violate a non-disclosure agreement pertaining to trade secrets, which I feel would be ethically remiss of me. I've taken great effort to tailor this response with that in mind.


This is a different angle from the one Greg is coming at this from, which I understand as public perception of credibility, possibly to fight fake news or promote informed discourse, and, it seems, interactions with public figures. The work we did was on credibility within a professional context, for other professionals. While I accept the professional sphere is a smaller sub-set of the world, I do feel it is applicable as a piece of the puzzle.

Hard to solve

Anyway, getting back to the meat of the response: general credibility is hard to solve for on the internet.

  • TechCrunch had a great article on the problems LinkedIn has with assertions of educational attainment.
  • Glassdoor has a representation of credibility of information in its salary confidence ratings and in its listings for positions.
  • Wikipedia and the IndieWeb wiki have articles on publics [1][2], which consider, quite reasonably in my opinion, that who you are talking to might influence such things.
  • LinkedIn, Facebook and YouTube each have problems with faux experts and people expressing their expertise dishonestly, and counter-groups to that, including Snopes, Full Fact, and special-interest groups for veterans and martial-arts experts.

A Solution

One thing I strongly pressed for at all times, both in this role and in others, borrows from banking: a ledger.

Ledgers are beautiful, simple beasts. We have a lot of the tools we need to moderate credibility, both at a point in time and in the future, by publishing events, each of which can contain attribution, a score, and audit logs to hamper tampering and interference.
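As a rough sketch of that idea (the names and shape here are my own invention for illustration, not the platform's implementation), an append-only ledger of attributed, scored events might look like:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class CredibilityEvent:
    """One immutable ledger entry: who asserted what, with what score."""
    attributed_to: str   # who made the assertion
    target: str          # what the assertion is about
    score: float         # point-in-time value of this entry
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


def project_balance(ledger: list[CredibilityEvent]) -> float:
    """Project the current credibility balance by folding over all events."""
    return sum(event.score for event in ledger)


ledger = [
    CredibilityEvent("alice", "article-42", 2.0),
    CredibilityEvent("bob", "article-42", 1.5),
]
print(project_balance(ledger))  # 3.5
```

The appeal is that the events never change; only the projection over them does, which is what makes later moderation and re-weighting possible.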

I feel fine sharing this detail. Event sourcing is a pattern widely documented in computing and was not invented by me; nor was the ledger, distributed or centralised. But as nobody else seemed to grasp at it with the vigour I did, I'd like to try to persuade, and document, what I think are some sound principles we can borrow from other areas. Lastly, perhaps writing it down again will help me shake out some detail which is obvious to me, but which I've failed to communicate before.

Ledger format

Unique Identifier
All entries within a ledger should have an immutable, globally, individually addressable, but not always available, way of referring to each entry.
Target Link
All entries within a ledger should contain a link to a target object, which is itself immutable, globally, individually addressable, but not always available, and is only unique to the ledger (single-entry) when combined with a context and source. Globally it is unique, and this may lead to duplication: many separate unique identifiers with many identical target links. While this goes against normalisation, it keeps indirection minimal and focuses on locality. It provides a way of referring to a target for the projected score.
Target Data
All entries should keep a snapshot of the target data's original representation, so that in the event of non-availability the data can be resolved and used for projecting the ledger to a credibility or confidence score, and so that the integrity of the original data can be verified by third parties. Checksums of content may be a quick yardstick for checking whether content has changed, but they do not assess how meaningful changes are.
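For instance (the hash choice is mine; the point is only detecting *that* content changed, not how meaningfully):

```python
import hashlib


def content_checksum(data: bytes) -> str:
    """Cheap yardstick: reveals whether content changed, not how much it matters."""
    return hashlib.sha256(data).hexdigest()


snapshot = b"original article text"
stored = content_checksum(snapshot)

# Later: re-fetch the target and compare; a mismatch flags a change to review.
changed = content_checksum(b"edited article text") != stored
print(changed)  # True
```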
Context Link
All entries within a ledger should contain a link to a context object, which is itself an immutable, globally, individually addressable, but not always available, way of referring to a given context. Unlike other links, there should be no cached copy of a context, because the link itself should serve as a way to refine and group relevance to an audience context, regardless of metadata presented or provided via the context. This may require specific notation if generic terms are used. The context link is a link, rather than simply a key, because its source may not be owned by the ledger system. This is about appropriate sources of truth for an audience, and I feel it may be a novel way to address problems with unfamiliar sources and authority figures. The modelling of relationships between contexts is also not something I've tried to do, as I think it would defeat the general-purpose nature of this ledger. If something is true for multiple contexts, multiple entries should denote that.
Source Link
All entries within a ledger should have a path or link format to a more detailed entry. This helps revise the ledger in future, keeping a common thread between source data and entries. Most importantly, as systems grow it eventually becomes prohibitively expensive to consider the entire ledger when projecting the outcome. Source links allow you to roll up the projection of prior experience, which can help deal with era/epoch transformation events, where perhaps carbon dating is no longer as effective as radiocarbon dating, as an example. I never considered the link being a hyperlink, although it could be helpful.
Source Data
As well as linking to a resource, before it is entered into a ledger a snapshot of its data should be entered into the system of record. This can serve as a stale copy for auditing, without good-faith assumptions as to the veracity of the data. Because of the breadth of data, I believe something akin to a MIME envelope, where the type is stored with the data as an assertion of how to manage it, could assist in keeping systems orthogonally extensible. In this way source data would be vulnerable to many of the problems of other nested systems, such as email, XML and JSON. While this is true, it could also borrow from their mitigations, such as a maximum recursion depth, while making no attempt at specifying the format, or at features such as references, which can lead to more complex issues requiring the entire hierarchy to be evaluated at once. In this way, even if an ever-deeper tree were represented, only one level would need to be considered at a time. If the source data changed, the source link would need to as well. As I write this, notions of systems with content-based addressing, such as IPFS, become all the more interesting. If I visit a Harvard-cited article and score it, the only way to prevent a cascade mechanic, which can result in system interruption or downtime, is to keep with older records the old copy of the article at the time of scoring. The score, reflecting the state of the article at the time it was scored, can then be a reasonable measuring stick and prevent pedantry and wasted time. Checksums of content may be a quick yardstick for checking whether content has changed, but they do not assess how meaningful changes are.
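A minimal sketch of that envelope idea, assuming a simple dict shape and an arbitrary depth cap of my own choosing (borrowed from the mitigations mentioned above, not from any real wire format):

```python
MAX_DEPTH = 8  # arbitrary cap, in the spirit of email/XML/JSON parser mitigations


def walk_envelope(envelope: dict, depth: int = 0):
    """Visit a type-tagged envelope one level at a time, refusing deep nesting."""
    if depth >= MAX_DEPTH:
        raise ValueError("envelope nesting exceeds maximum depth")
    yield envelope["type"], envelope["data"]
    for child in envelope.get("children", []):
        yield from walk_envelope(child, depth + 1)


doc = {"type": "text/plain", "data": "snapshot of source", "children": []}
print(list(walk_envelope(doc)))  # [('text/plain', 'snapshot of source')]
```

Storing the type alongside the data is the assertion of how to manage it; the depth guard is what lets each level be considered on its own.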
Score
The score for an entry would be a point-in-time snapshot of an individual entry's value within the overall ledger balance. I have never given any thought to the meaning of a negative ledger balance, so that is an unknown in such a system. Crucially, this value may change over time as better methodologies emerge. I have no knowledge of how to refine or weight this value, but I would suggest that each change should likely result in a roll-up event. This is useful because transforms such as multiplications may lead to implementation flaws. In the event a roll-up is performed, the entire ledger prior to modification could become its own source data entry, encapsulating past records. Perhaps compression could also factor into this to manage data growth. In any case, databases have mechanisms for dealing with large text and binary objects to mitigate performance impacts.
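A roll-up might be sketched like this (a toy shape of my own; a real system would also carry links, compression, and audit data):

```python
import json


def roll_up(ledger: list[dict]) -> dict:
    """Collapse prior entries into one summary entry whose source data
    encapsulates the old ledger, so projection need not replay it all."""
    balance = sum(entry["score"] for entry in ledger)
    return {
        "score": balance,
        "source_data": json.dumps(ledger),  # past records preserved verbatim
    }


old_ledger = [{"score": 2.0}, {"score": 1.5}, {"score": -0.5}]
summary = roll_up(old_ledger)
print(summary["score"])  # 3.0
```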
Audit Data
System-specific auditing about the author of an entry, and its date and time, may form a part of a system, and may or may not be useful.
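Pulling the fields above together, a single ledger entry might be sketched as follows (the field names and example values are mine, purely to make the structure concrete):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)  # immutability is the point of a ledger entry
class LedgerEntry:
    unique_id: str            # immutable, globally, individually addressable
    target_link: str          # link to the thing being scored
    target_data: bytes        # snapshot of the target at scoring time
    context_link: str         # audience context; deliberately never cached
    source_link: str          # path/link to a more detailed source entry
    source_data: bytes        # snapshot of the source material
    score: float              # point-in-time contribution to the balance
    audit: Optional[dict] = None  # system-specific author/date metadata


entry = LedgerEntry(
    unique_id="urn:ledger:entry:1",
    target_link="https://example.com/article-42",
    target_data=b"article snapshot",
    context_link="https://example.com/contexts/journalism",
    source_link="urn:ledger:source:1",
    source_data=b"source snapshot",
    score=1.5,
)
```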


The thing I most like about this system is its "extreme locality", which also presents a problem for implementation. I don't have all the answers, but this is the sum of all my experience creating systems to answer these problems. Nobody has ever paid me to work solely on this problem as an expert, or consented to its use within their software, but I feel it's a general enough structure to fit most questions of veracity, which lend themselves to credibility, reliability and quantified trust as a tool to keep moving.

Caution on examples

You might notice I've omitted concrete examples of real entries. This is to avoid confusion and implementation- or notation-specific misdirection. I noticed that in the article Greg gave an example, of a non-empirical criticism, which I found distracting. While this system does not rule out such entries, I think they make it much harder to ensure bad actors don't abuse the algorithm, or invite complications which could impact processing.

Ultimately, I feel like whoever gets backing to implement first will be able to weigh in on these problems.

Existing Networks

I've also strayed from referencing Facebook, Twitter, or LinkedIn approaches to these problems. At best they have been problematic, in terms of being more simplistic than this model. They've also been documented to have negative effects due to the simplicity of their modelling and their outlook, and none are evidence-based. Not being a full-time academic, but rather a life-long learner, I don't feel confident speaking about academic systems.

Challenging new entries

One of the things I was keen to try to address was the notion of a base score for credibility which would not penalise newcomers to a platform, at least not relative to known bad actors.

On a platform with dedicated resources for individuals, it may be possible to use an internal link which never changes to reference a person. However, I've never worked on a significantly large system without some data-duplication issues; these would require an additional source record, with a source link that would never change (and could represent a loop), in order to re-associate and combine two ledgers from a user that subverts the system.

One problem with this combination is that it would then need to ignore the existing score for a user in a merge event; otherwise I could gain credibility by creating many accounts using Gmail transparent aliases and asking for them to be merged. Of course, a simple work-around would be to subtract the base score at the time of creation from the calculated score. Even in this case, I could see it being accidentally fudged, with brackets in the wrong place or a misunderstood order of operations.
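The work-around is a one-liner, which is exactly why the bracket placement matters (the base-score value here is illustrative):

```python
BASE_SCORE = 10.0  # illustrative starting credit granted to every newcomer


def merge_scores(primary: float, duplicate: float) -> float:
    """Merge a duplicate account's score into the primary, subtracting the
    base credit so creating-and-merging fresh accounts yields nothing."""
    return primary + (duplicate - BASE_SCORE)


# An attacker merging a fresh alias (still at the base score) gains nothing:
print(merge_scores(25.0, BASE_SCORE))  # 25.0
# A genuinely earned duplicate still carries its earned portion across:
print(merge_scores(25.0, 14.0))        # 29.0
```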

Dealing with bad actors is hard, as they generally search for edges and corners you did not consider. They thrive on your imperfection, which I'm not sure is easy to overcome, or a battle that can be fought, without compromising freedom and expression.

Negative scores

Some of my views around negatives are mathematically illiterate. This is not because I don't understand how to make them mathematically sound, but because I am aware I don't have a proof I can explain to others, to fit what is in my head into theirs. Instead of negative scores, I think the minimum should be zero, as in totally untrustworthy.

max(0, function_to_calculate_credibility_score(data))

The reason I think this is important, as a facet of projection, is that it makes it easier to classify outliers and think about the statistical breakdown of the whole, without truncating data or getting into silly arguments over who is less credible from a list of low-scoring outliers.

At the point someone spends their innate credibility without a return, how would you represent a negative 1 million versus a zero? It's one of the few cases where I think simplicity beats arbitrary correctness, and it's more about presentation and time-saving than correctness.
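Made concrete (the function name is mine; it stands in for whatever projection the ledger produces):

```python
def projected_score(raw_balance: float) -> float:
    """Clamp at zero: 'totally untrustworthy' is the floor, so low outliers
    cluster at 0 instead of racing each other toward negative infinity."""
    return max(0.0, raw_balance)


print(projected_score(-1_000_000.0))  # 0.0
print(projected_score(42.0))          # 42.0
```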

One last note: I do not believe that in any circumstance calculation should be cut short, or someone banned, for falling below a threshold. If there is a group that wishes to ignore the credibility of that individual, I'm not sure that censoring them and de-legitimising the tooling is the right path to take. This is what contexts are for.

Contexts are, in short, a way to use other, possibly poorly cited, scores such as "I think {X}'s hair is dumb" as a way to engage with a public, or with a specific context of people.

By Lewis Cowles