Function that takes a p value and returns the s value (Shannon information content, surprisal value)
getSvalFromPval.Rd
This uses the trivial calculation that converts a probability to its s value.
Background
This is basic information theory. For more information, see the Wikipedia entry. The formula is simple:
\[s = -\log_{2}(p)\]
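In R the formula is a one-liner; sValFromP below is just an illustrative name for this sketch, not something exported alongside getSvalFromPval().

## s value (surprisal, in bits) for a probability p
sValFromP <- function(p) -log(p, base = 2)
sValFromP(.5)  # 1 bit: the surprise of one fair coin toss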
There are strong arguments for expressing the value of new findings in terms of this s value rather than a p value (or a confidence interval, though I do think confidence intervals are better than p values!).
An s value is the unexpectedness (hence surprisal) of a finding against some model. An s value of 1 will not change your information or expectations at all: if you toss a coin once, predict heads and see heads, the probability of that if the coin is fair is .5, so the s value is 1 (-log(.5, base = 2)) and your expectation that the coin is fair is unchanged.
However, if you toss the coin four times, predict heads each time and see heads each time, the probability of that if the coin is fair is .5^4 = .0625 and the s value is 4, so you may be starting to feel that the credibility of the hypothesis that the coin is fair is going down (it is!). In general, an s value of s carries the same information as getting the side you predicted s times in a row when tossing a fair coin. That is arguably easier to understand than a p value or a confidence interval, and it's a continuous measure rather than the very arbitrary cut point of a null hypothesis test.
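Those coin-toss numbers can be checked directly in base R, using nothing beyond the formula above:

.5^4                  # 0.0625: four predicted heads in a row from a fair coin
-log(.5, base = 2)    # 1: one toss, no real surprise
-log(.5^4, base = 2)  # 4: four in a row, noticeably more surprising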
For what it is worth, the s value of p = .05 is 4.322.
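That value can be reproduced from the formula directly, or with the function documented here; the commented call below assumes the p value is passed as the first argument, which this page does not spell out.

-log(.05, base = 2)    # 4.321928
## getSvalFromPval(.05)  # package function, argument position assumed; should match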
See also
Other utility functions:
convertClipboardAuthorNames(), fixVarNames(), getAttenuatedR(), getCorrectedR(), whichSetOfN()