Hello again :)
Testing of the sensor script is now complete, so here is a new clue: the process map of the sensor script.
As you can see, it reacts to five events: you can sit on the device, touch it, or collide with it. You can also enter the radar area or say a certain chat command to make the sensor trigger and report the avatar.
The chat messages are translated so that the agent can trigger the sensor by saying a secret word. Collision and sensor events must pass the so-called Agentfilter, so that only new agents are reported.
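The Agentfilter idea above can be sketched roughly like this: remember which agents have already been reported and let only first-time detections through. This is a minimal illustrative model in Python, not the actual script; the class and method names are my own assumptions.

```python
# Sketch of an "Agentfilter": report an agent only on first detection.
# Names here are illustrative, not taken from the real sensor script.

class AgentFilter:
    def __init__(self):
        self.seen = set()  # keys of agents already reported

    def is_new(self, agent_key):
        """Return True only the first time this agent key is seen."""
        if agent_key in self.seen:
            return False
        self.seen.add(agent_key)
        return True

f = AgentFilter()
print(f.is_new("avatar-123"))  # first detection: report
print(f.is_new("avatar-123"))  # repeat detection: suppress
```

Collision and sensor events would both be routed through this check before anything is reported, which is why the same avatar does not spam the report channel.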
The script is a plug-in, which means it has a proper API, works on its own, and extends the device in a plug-and-play manner. That's all for now; it was just a small new tip.
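To illustrate what "plug-and-play" means here, a host device could simply forward events to the plug-in through one small entry point, without knowing its internals. Again a hedged Python sketch; the five event names come from the post, but the API shape (`handle_event`, the report callback) is my assumption.

```python
# Illustrative model of the plug-in idea: the host device only knows
# one entry point (handle_event) and a report callback it supplies.
# The API names are assumptions, not the real script's interface.

class SensorPlugin:
    EVENTS = {"sit", "touch", "collision", "sensor", "chat"}

    def __init__(self, report):
        self.report = report  # callback provided by the host device

    def handle_event(self, event, agent):
        """Forwarded by the host; reports the avatar on a known event."""
        if event in self.EVENTS:
            self.report(f"{event}: {agent}")

reports = []
plugin = SensorPlugin(reports.append)
plugin.handle_event("touch", "avatar-123")
plugin.handle_event("dance", "avatar-123")  # unknown event: ignored
print(reports)  # ['touch: avatar-123']
```

Because the device only calls the one entry point, swapping the sensor plug-in for another one with the same interface requires no changes on the device side.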