In standard epistemic logics, an agent's beliefs are modelled as closed under logical consequence. This becomes a problem if we are interested in how the beliefs of a rational agent evolve over time, since deriving the consequences of its beliefs costs the agent both time and memory. For example, we may want to express and verify properties like `given its current beliefs and its memory size, the agent will/will not believe A in the next n time steps', where A may be a consequence of the agent's beliefs. The basic idea of our approach is as follows. The reasoning agent is modelled as a state transition system: each state is a finite set of formulas (the agent's beliefs), whose maximal size is determined by the agent's memory size, and transitions between states correspond to applications of the agent's inference rules. The language of temporal epistemic logic can be interpreted in such structures in a straightforward way. Several interesting directions for research arise, for example:
- completeness and decidability of epistemic logics for particular kinds of reasoners (determined by their inference rules);
- model-checking: using MBP (a model-based planner developed in Trento) to verify memory requirements for a classical reasoner and a rule-based reasoner;
- expressive power: adding epistemic operators which allow us to express properties like `the agent only knows m formulas'.
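To make the transition-system model concrete, here is a minimal sketch in Python. All names (`successors`, `modus_ponens`, the string encoding of formulas) are illustrative assumptions, not part of the formal framework: a state is a frozenset of formulas bounded by a memory parameter, and a transition applies one inference rule, overwriting some existing belief when memory is full.

```python
# Hypothetical sketch of a memory-bounded reasoner as a state transition system.
# A state is a frozenset of believed formulas (encoded as plain strings);
# `memory` caps how many formulas a state may hold at once.

def successors(state, rules, memory):
    """Yield every state reachable by one application of one inference rule."""
    for rule in rules:
        for conclusion in rule(state):
            if conclusion in state:
                continue
            if len(state) < memory:
                # Room left: simply add the new belief.
                yield state | {conclusion}
            else:
                # Memory full: the new belief must overwrite an existing one,
                # giving one successor state per belief that could be dropped.
                for dropped in state:
                    yield (state - {dropped}) | {conclusion}

def modus_ponens(state):
    """An example inference rule: from A and `A -> B', conclude B."""
    for f in state:
        if '->' in f:
            premise, _, conclusion = f.partition('->')
            if premise.strip() in state:
                yield conclusion.strip()

state = frozenset({'p', 'p -> q'})
# With memory size 3 there is room for q, so {'p', 'p -> q', 'q'} is reachable;
# with memory size 2, deriving q forces the agent to forget p or `p -> q'.
reachable = set(successors(state, [modus_ponens], memory=3))
```

Properties like the ones above (`the agent will/will not believe A in the next n time steps') then amount to reachability questions over the graph this function generates, which is what makes model-checking with a tool such as MBP a natural fit.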