Hello again :)
The testing of the sensor script is complete, so here is a new clue: the process map of the sensor script.
As you can see, it reacts to five events: you can sit on the device, touch it, or collide with it. You can also enter the radar area or say a certain chat command to make the sensor trigger and report the avatar.
Chat messages are translated, so the agent can say a secret word to trigger the sensor. Collision and sensor events must pass the so-called agent filter, so that only new agents are reported.
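The agent filter can be sketched roughly like this: the sensor remembers which avatar keys it has already reported and passes through only the new ones. This is a minimal illustration in Python, not the actual script; the names (`AgentFilter`, `report`) are hypothetical.

```python
class AgentFilter:
    """Passes an agent through only the first time it is seen."""

    def __init__(self):
        self._seen = set()  # avatar keys already reported

    def is_new(self, agent_key):
        # True only for agents that have not been reported before.
        if agent_key in self._seen:
            return False
        self._seen.add(agent_key)
        return True


def report(detected_agents, agent_filter):
    # Collision and sensor events hand their detected agents to the
    # filter; only agents passing the filter get reported.
    return [a for a in detected_agents if agent_filter.is_new(a)]


agent_filter = AgentFilter()
print(report(["avatar-1", "avatar-2"], agent_filter))  # both are new
print(report(["avatar-2", "avatar-3"], agent_filter))  # only avatar-3 is new
```

The same filter instance is shared by the collision and sensor events, so an avatar detected by one event is not reported again by the other.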
The script is a plug-in, which means it has its own API, works on its own, and extends the device in a plug-and-play manner. That's all for now; it was just a small new tip.