Knowledge vs Belief


The difference between knowledge and belief is that while beliefs can be false, knowledge must be true. That same intuition is the difference between epistemic and doxastic¹ logics too.

From the operator reference, you might have noticed that epistemic logic as presented does not talk about belief, and that belief has no corresponding primitive in ISPL. This raises the question: does MCMAS/ISPL handle belief or not?

The answer is: sort of. It depends on the semantics you want to capture, but roughly speaking, it turns out that the operators

  • B(agent,predicate) → Agent believes predicate
  • K(agent,predicate) → Agent knows predicate

have most of the same properties in terms of entailment and extensionality. (You can read more about this than you probably want to know in this footnote!³) The biggest difference is just an extra truth axiom, which is basically the same intuition we started with: if an agent knows something, it must be true by definition, i.e. K(agent,predicate) → predicate.
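
To make the truth axiom concrete, here's a tiny Kripke-style sketch in Python (the helper and world names are invented for illustration, nothing here is MCMAS API): a box-style operator holds at a world iff the predicate holds at every world that operator's accessibility relation can reach. Making the knowledge relation reflexive is exactly what forces K(agent,p) → p, while the belief relation is free to point at the wrong worlds.

```python
# Toy Kripke semantics for K and B (illustrative names, not MCMAS API).

def holds_everywhere(relation, world, predicate):
    """Box-style evaluation: predicate holds at every accessible world."""
    return all(predicate(w) for w in relation[world])

# Two worlds, distinguished by whether it is actually sunny.
worlds = {"w_sunny": True, "w_cloudy": False}

def sunny(w):
    return worlds[w]

# Knowledge: reflexive relation (every world sees itself), which is
# exactly the truth axiom K(p) -> p in relational form.
K_rel = {"w_sunny": {"w_sunny"}, "w_cloudy": {"w_cloudy"}}

# Belief: need not be reflexive; at w_cloudy the agent only
# considers w_sunny possible, so it believes a falsehood.
B_rel = {"w_sunny": {"w_sunny"}, "w_cloudy": {"w_sunny"}}

for w in worlds:
    k = holds_everywhere(K_rel, w, sunny)
    b = holds_everywhere(B_rel, w, sunny)
    assert not k or sunny(w)  # truth axiom: K(sunny) -> sunny
    print(f"{w}: K(sunny)={k} B(sunny)={b} actually_sunny={sunny(w)}")
```

At w_cloudy the agent believes it's sunny (the only world it considers possible is w_sunny) even though it is cloudy, while K(sunny) fails there, so the truth axiom is never violated.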

This suggests that by changing our perspective on semantics, and by adding some extra assertions, ISPL can indeed support B(..) and K(..), even though it has no syntactic support for B(..).

Belief in ISPL


We'll get into more discussion afterwards, but here's a quick sketch of what it might look like to track knowledge and belief side by side in ISPL.

-- MCMAS ISPL program demonstrating knowledge vs belief
-- Agent alice can KNOW facts directly but BELIEVE things that may be false

Semantics = SingleAssignment;


Agent Environment
    Obsvars:
        actual_weather: {sunny, cloudy};
        weather_report: {report_sunny, report_cloudy};
    end Obsvars

    Actions = {heads_flip, tails_flip};

    Protocol:
        Other: {heads_flip, tails_flip};
    end Protocol

    Evolution:
        -- Weather follows the environment's coin flip
        actual_weather=sunny if Action=heads_flip;
        actual_weather=cloudy if Action=tails_flip;

        -- Under single-assignment semantics the report is assigned the
        -- pre-transition weather, so it lags one step behind (and can be wrong)
        weather_report=report_sunny if actual_weather=sunny;
        weather_report=report_cloudy if actual_weather=cloudy;
    end Evolution
end Agent

Agent alice
    Vars:
        -- Knowledge: alice directly observes the coin flip
        coin_is_heads: boolean;

        -- Belief: alice's belief about weather (may be wrong)
        believes_sunny: boolean;

        -- Internal state for decision making
        action_taken: boolean;
    end Vars

    Actions = {wait, act_on_knowledge, act_on_belief};

    Protocol:
        -- Alice can act based on knowledge or belief
        believes_sunny=true and coin_is_heads=true : {act_on_knowledge, act_on_belief};
        believes_sunny=false and coin_is_heads=true : {act_on_knowledge, wait};
        believes_sunny=true and coin_is_heads=false : {act_on_belief, wait};
        Other : {wait};
    end Protocol

    Evolution:
        -- Knowledge evolves based on direct observation
        coin_is_heads=true if Environment.Action=heads_flip;
        coin_is_heads=false if Environment.Action=tails_flip;

        -- Belief may be incorrect (influenced by Environment.weather_report, not actual weather)
        believes_sunny=true if Environment.weather_report=report_sunny;
        believes_sunny=false if Environment.weather_report=report_cloudy;

        action_taken=true if Action=act_on_knowledge or Action=act_on_belief;
        action_taken=false if Action=wait;
    end Evolution
end Agent


Evaluation
    -- KNOWLEDGE: alice knows the coin result because she observes it directly
    alice_knows_heads if alice.coin_is_heads=true;
    alice_knows_tails if alice.coin_is_heads=false;

    -- BELIEF: alice believes weather based on reports (may be wrong)
    alice_believes_sunny if alice.believes_sunny=true;
    alice_believes_cloudy if alice.believes_sunny=false;

    -- TRUTH: actual state of the world
    actually_sunny if Environment.actual_weather=sunny;
    actually_cloudy if Environment.actual_weather=cloudy;

    -- BELIEF ACCURACY: when belief matches reality
    -- belief_accurate if (alice_believes_sunny and actually_sunny) or (alice_believes_cloudy and actually_cloudy);

    -- KNOWLEDGE ACCURACY: knowledge is always accurate by definition
    -- knowledge_accurate if (alice_knows_heads and Environment.Action=heads_flip) or (alice_knows_tails and Environment.Action=tails_flip);
end Evaluation

InitStates
    alice.coin_is_heads=false and alice.believes_sunny=false and alice.action_taken=false and
    Environment.actual_weather=sunny and Environment.weather_report=report_sunny;
end InitStates

Formulae
    -- EPISTEMIC: if Alice knows the coin is heads, it really is heads
    -- Equivalent epistemic:
    --   K(alice,coin_is_heads) -> coin_is_heads (knowledge implies truth)
    K(alice, alice_knows_heads) -> alice_knows_heads;

    -- DOXASTIC: Alice can believe it's sunny even when it's not
    -- Equivalent doxastic: 
    --   B(alice,sunny) ∧ ¬sunny (belief doesn't require truth)
    EF(alice_believes_sunny and !actually_sunny);

    -- KNOWLEDGE IMPLIES TRUTH: Alice's knowledge about coin is always correct
    -- Equivalent epistemic:
    --   □(K(alice,coin_heads) -> coin_heads) (knowledge axiom T)
    AG(K(alice, alice_knows_heads) -> alice_knows_heads);

    -- BELIEF DOESN'T IMPLY TRUTH: Alice can believe sunny when it's cloudy
    -- Equivalent doxastic: 
    --   ◊(B(alice,sunny) ∧ ¬sunny) (beliefs can be false)
    EF(alice_believes_sunny and actually_cloudy);

    -- INTROSPECTION: Alice knows what she believes (positive introspection)
    -- Equivalent doxastic: 
    --   B(alice,p) -> K(alice,B(alice,p)) (agents know their beliefs)
    AG(alice_believes_sunny -> K(alice, alice_believes_sunny));

    -- BELIEF CONSISTENCY: Alice doesn't hold contradictory beliefs simultaneously
    -- Equivalent doxastic: ¬(B(alice,p) ∧ B(alice,¬p)) (belief consistency)
    -- AG(!(alice_believes_sunny and alice_believes_cloudy));

    -- BELIEF REVISION: Beliefs can change based on new information
    -- Equivalent doxastic: B(alice,p) ∧ ◊¬B(alice,p) (beliefs are revisable)
    EF(alice_believes_sunny and EF(! alice_believes_sunny));

    -- KNOWLEDGE PERSISTENCE: Knowledge persists (if true facts don't change)
    -- Equivalent epistemic: K(alice,p) -> □K(alice,p) (knowledge is stable)
    -- AG(K(alice, alice_knows_heads) -> AG(K(alice, alice_knows_heads)));

    -- BELIEF vs KNOWLEDGE: Agent can believe p without knowing p
    -- Equivalent doxastic: B(alice,p) ∧ ¬K(alice,p) (belief is weaker than knowledge)
    EF(alice_believes_sunny and !K(alice, actually_sunny));
end Formulae
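
For intuition about how the EF formulae get checked, here's a hand-rolled Python reachability search over a simplified version of the model above (the state tuple and `step` function are my own encoding, not anything MCMAS produces): it explores every state reachable from the initial state and looks for a witness to EF(alice_believes_sunny and !actually_sunny).

```python
from collections import deque

# State: (coin_is_heads, believes_sunny, actual_sunny, report_sunny).
# Mirrors the ISPL sketch: the report lags one step behind the weather,
# and alice's belief follows the (possibly stale) report.

def step(state, heads):
    coin, believes, actual, report = state
    return (
        heads,    # alice observes the flip directly (knowledge)
        report,   # belief tracks the previous report
        heads,    # weather follows the flip in this toy model
        actual,   # report is the pre-transition weather
    )

def reachable(init):
    """Breadth-first exploration over both nondeterministic flips."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        for heads in (True, False):
            t = step(s, heads)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

init = (False, False, True, True)  # mirrors the InitStates block
witnesses = [s for s in reachable(init) if s[1] and not s[2]]
print("EF(alice_believes_sunny and !actually_sunny):", bool(witnesses))
```

A real model checker like MCMAS computes this symbolically over BDDs rather than by explicit enumeration, but the witness it finds is the same kind of object: a reachable state where the belief and the world disagree.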

Discussion


Tracking knowledge and belief separately as "first class" notions seems useful for lots of reasons, but the sketch above also shows that the encoding is a bit awkward and limited.

References