WP 4 Learning
Model limitations (NBU)
DUAL/AMBR works by presenting a target episode and retrieving the most relevant base episode from memory. Transfer mechanisms were added in order to carry out tasks such as action planning and decision making. The system, however, was still severely limited by the fact that the Working Memory had to be cleaned up completely after each simulation. To deal with this issue, mechanisms for partial clean-up of the WM were introduced.
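As a rough, purely illustrative sketch of the retrieval step, the toy code below scores stored base episodes against a target episode by simple feature overlap and returns the best match. The actual DUAL/AMBR mechanism relies on spreading activation over a network of agents; the Episode class and retrieve_base function here are invented for the example.

    from dataclasses import dataclass, field

    @dataclass
    class Episode:
        name: str
        features: set = field(default_factory=set)   # concepts/relations present in the episode

    def retrieve_base(target, memory):
        """Return the stored episode most similar to the target (toy relevance = overlap)."""
        return max(memory, key=lambda base: len(base.features & target.features))

    memory = [
        Episode("sit-1", {"on", "cup", "table"}),
        Episode("sit-2", {"in", "ball", "box"}),
    ]
    target = Episode("new-situation", {"on", "book", "table"})
    print(retrieve_base(target, memory).name)   # -> sit-1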
Partial Working Memory cleanup (NBU)
These mechanisms allow the Mind to partially clean up its Working Memory after reporting an answer to the owner and then wait for further utterances. When the next utterance arrives, the Mind can process it while still being influenced by the context of the previous utterance.
When a simulation is considered complete, the following operations are performed:
A new situation agent is created;
Anticipation agents involved in hypotheses that reached the winner state are transformed into instance agents;
Markers involving the *current-situation* agent are discarded;
All agents in the WM are checked again, and the following transformations are performed on them:
Anticipation agents and hypotheses are fizzled (instructed to self-destruct);
Instance agents participating in *current-situation* are removed from it and added to the newly created situation;
All temporary links are removed - those are the links to any temporary agent;
Messages between agents that relate to hypotheses are removed from the agents' buffers;
Inverse links are created for the agents from the new situation. The inverse links point from the concepts to instances of those concepts and serve to bring those episodes into memory when needed.
*current-situation* is entirely removed from the Mind;
The *goal*, *input*, etc. registries are cleared;
The activation of instance agents is reset to 0. This way the system preserves the concepts' activation, and thus biases the Mind toward a given context, but does not "tie" it to the episodes that were just put into WM.
Note: the system creates a new episode, cleans up any information not relevant to the last task and transforms the relevant agents into permanent ones, removes all hypotheses and temporary agents, creates inverse links from the respective concepts to the new situation, and resets to their defaults any flags, registries, etc. that were used during the last simulation. After this process completes, the system is ready for a new task: it has preserved the context of the previous one and has retained the memories of what it just did, so they can be reused in the future.
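The clean-up procedure above can be made concrete with a small sketch. Everything in it (the WMAgent class, the partial_cleanup function, the string-based situation names) is a simplified, hypothetical stand-in for the actual DUAL agents and links; marker removal, temporary-link removal and inverse-link creation are omitted for brevity.

    import itertools

    _situation_ids = itertools.count(1)

    class WMAgent:
        """Hypothetical, simplified stand-in for a DUAL agent in Working Memory."""
        def __init__(self, name, kind):
            self.name = name
            self.kind = kind        # 'instance', 'concept', 'hypothesis', 'anticipation'
            self.activation = 1.0
            self.situation = "*current-situation*"
            self.winner = False     # only meaningful for anticipation agents
            self.buffer = []        # pending messages (plain strings here)

    def partial_cleanup(working_memory, goal_registry, input_registry):
        """Apply the clean-up steps listed above to a completed simulation."""
        new_situation = f"situation-{next(_situation_ids)}"    # new situation agent
        survivors = []
        for agent in working_memory:
            if agent.kind == "anticipation" and agent.winner:
                agent.kind = "instance"                 # winners become instance agents
            if agent.kind in ("hypothesis", "anticipation"):
                continue                                # the rest fizzle (self-destruct)
            if agent.kind == "instance":
                agent.situation = new_situation         # re-attach to the new episode
                agent.activation = 0.0                  # reset instance activation
            agent.buffer = [m for m in agent.buffer
                            if "hypothesis" not in m]   # drop hypothesis-related messages
            survivors.append(agent)
        goal_registry.clear()                           # clear *goal*, *input*, ... registries
        input_registry.clear()
        return survivors, new_situation                 # *current-situation* is discarded

The concepts' activation is deliberately left untouched, which is what keeps the Mind biased toward the context of the previous utterance.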
Bottom-up Learning for Relation Extraction (DFKI)
DFKI has achieved interesting research results in the area of automatic acquisition of relation instances. These results are realized in DARE, a minimally supervised machine-learning framework for extracting relations of varying complexity. Bootstrapping starts from a small set of n-ary relation instances used as “seeds” in order to automatically learn pattern rules from parsed data, which can then extract new instances of the relation and its projections. For Rascalli, we have applied this general approach to the musician domain to detect the awarding events of musicians as well as social relationships: has-child, has-mother, has-father, has-husband, has-wife, etc. The results contribute to the database records in WP2.
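The bootstrapping loop can be illustrated with a toy sketch: seed instances are matched against a tiny corpus, surface patterns are abstracted from the matches, and those patterns extract a new instance of the award relation. DARE itself learns pattern rules from dependency-parsed data and handles n-ary relations and their projections; the corpus, pattern format and function names below are invented for illustration.

    import re

    corpus = [
        "Bob Dylan won the Grammy Award in 1998.",
        "Norah Jones won the Grammy Award in 2003.",
        "Kurt Cobain is the father of Frances Bean Cobain.",
    ]

    # Seed instances of the award relation: (musician, award, year)
    seeds = {("Bob Dylan", "Grammy Award", "1998")}

    def learn_patterns(sentences, seeds):
        """Abstract matching sentences into patterns by replacing seed arguments with slots."""
        patterns = set()
        for s in sentences:
            for person, award, year in seeds:
                if person in s and award in s and year in s:
                    patterns.add(s.replace(person, "{PER}")
                                  .replace(award, "{AWD}")
                                  .replace(year, "{YR}"))
        return patterns

    def apply_patterns(sentences, patterns):
        """Turn each pattern into a regular expression and extract new relation instances."""
        instances = set()
        for p in patterns:
            regex = (re.escape(p)
                     .replace(r"\{PER\}", "(?P<per>.+?)")
                     .replace(r"\{AWD\}", "(?P<awd>.+?)")
                     .replace(r"\{YR\}", r"(?P<yr>\d{4})"))
            for s in sentences:
                m = re.fullmatch(regex, s)
                if m:
                    instances.add((m["per"], m["awd"], m["yr"]))
        return instances

    patterns = learn_patterns(corpus, seeds)
    print(apply_patterns(corpus, patterns) - seeds)
    # -> {('Norah Jones', 'Grammy Award', '2003')}

In a full bootstrapping cycle the newly extracted instances would be added to the seed set and the two steps repeated until no new patterns or instances are found.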
RASCALLI is supported by the European Commission Cognitive Systems
Programme (IST-27596-2004).