Operational manual for CX-01 Advanced Integrated Systems Implant
The next few pages concerned the manufacturer, so he scrolled down to the part that interested him.
—When implant construction is finished, it will run a self-diagnostic program to verify the implant's integrity. If a fault is found, the user will be notified; we urge the user in that case to immediately contact any available medical personnel.
—In case of successful construction and integration, the user interface will notify the user that the implant has been successfully activated, and a series of questions will be asked of the user, measuring his/her responses in order to achieve maximum integration and create the required neural pathways.
Adrian frowned. The manual repeatedly stated that the user would be notified or asked questions, but it never explained how this was supposed to happen. He scrolled back to see if he had missed anything, and when he couldn't find it, he decided to continue reading.
—After the initial setup is completed, a menu with a smart app software system will be available to help the user modify his/her implant interface.
—The option to activate the implant's AI (Artificial Intelligence) will be available after the interface is set.
—The AI has two modes:
- Active
- Passive
—In active mode, the AI is able to access everything the user sees and hears, and also monitors the interaction between the implant and the user.
—In passive mode (also referred to as privacy mode), the AI is only able to manage data inside its own core, including control of the user's nanites.
Adrian continued reading for the next couple of hours. The manual described a multitude of situations and the recommended responses to them, but there was very little information about the AI or the manner in which the user should interact with it. After a while he grew tired, and since nothing was happening he decided to go to bed. He got up and started toward it, but he hadn't taken more than two steps before letters appeared in his field of vision. He almost lost his footing before realizing that the implant had finally activated. He walked back, sat down, and read the text.
-Diagnostic complete, all clear.
-Are you able to start the integration now? Yes/No (vocalize the answer)
Adrian swallowed and said out loud, “Yes.”
-Sequence initiated.
-What is the color of the sky during the day as most commonly seen from Earth? (Vocalize the answer)
Adrian frowned at the question, but then decided that the implant was probably measuring his responses as a baseline, so he answered, “Blue.” He spent the next twenty minutes answering all kinds of ridiculous questions until, finally, that part of the test was finished.
-Are you able to start the audible portion of the sequence? Yes/No (vocalize the answer)
“Yes.”
Suddenly he heard a faint tone in his ears.
-Do you hear a sound? Yes/No (vocalize the answer)
“Yes,” he said, wondering how this part of the test would work. Suddenly the tone stopped, and he heard three more: one slightly fainter, one much louder but still tolerable, and one so loud it hurt his ears. He immediately brought his hands up, but the tone was already gone by the time he covered his ears, not that it would have helped. He had read in the manual that sound from the implant was transmitted directly to his brain and only simulated to seem as if it were coming from outside; his ears reacted to what his brain thought it was hearing.
-Rank the four sounds on a scale of 1 to 10, from weakest to strongest:
-First sound-
-Second sound-
-Third sound-
-Fourth sound-
Adrian ranked them accordingly: he gave the first tone a 3, the second a 2, the third a 7, and the last a 10.
-Designate desired sound level for audible systems from 1 to 10.
Adrian said, “Five.” The text disappeared, and after a moment he heard a monotone voice speaking in his ears.
“Greetings, user. I am a virtual help program, created to aid you with the configuration of your implant. The required pathways have been created; user commands no longer need to be vocalized. Do you wish to continue?”

Adrian jumped, looking around before realizing that the voice came from the implant itself. He leaned back in his chair and began conversing with the program through his thoughts, tweaking his interface. He set commands for various actions: first an activation command for the implant, so that if he “thought” at it, he would get a response and could access its many submenus, which he chose to have appear as text on the left side of his vision; then a command for clearing his “HUD.”

Adrian quickly grew excited and started playing with the many combinations available to him. It was like a game in which he managed his own HUD. He could have a graph monitoring his heart rate or blood pressure, a clock showing the time of day, a calendar. He even had the option of accessing the internet via his implant, or rather the Olympus equivalent, Olnet, which every member of Olympus had access to. It could be used to send messages or talk with any other member of Olympus, though now that he was on Mars there would be a delay with those on the Moon or Earth. That didn’t stop him from sending a message to his friend Sahib, now on Earth, bragging about his implant. He hadn’t been told to keep the implant a secret, only that the AI was classified, so he figured there was no harm.

Next he downloaded all the data from his datapad into his implant: pictures, videos, games (which he found much trickier to play on the implant, until he discovered patches that updated them specifically for implants), books, and multitudes of textbooks. He could now archive them all in the implant and, with a simple search engine, find whatever information he was looking for.

After two hours of playing with the interface, he cleared his HUD of everything but a small closed-envelope icon in the corner of his vision that would alert him to new messages. When he told the help program he was finished, it asked if he wanted to activate his AI. Adrian thought about it, and when he couldn’t see any reason not to, he gave his okay. At first he couldn’t sense anything different; then he remembered that the manual said he needed to initiate the AI’s active mode.
“AI, initiate active mode,” Adrian said. Again, nothing felt different. “Um… Hello?” After a beat he heard a mechanical voice inside his head: “Greetings.” It took him by surprise; the AI’s voice seemed even more impersonal than that of the help program.
After an awkward pause, Adrian decided to speak. The manual instructed that the first moments after the AI’s activation should be used for introductions, and that he should choose the AI’s name.
“Uh, I’m Adrian Farkas,” he introduced himself.
“Hello, Adrian Farkas,” the AI said.
“Um… The manual said I should give you a name,” Adrian said.
“That corresponds with the information in my databank,” the AI confirmed.
Adrian considered names for a while before he was struck by a thought.
“Would you like to choose your own name?” Adrian asked. It seemed to him that if the AI really was supposed to be sentient, then it had the right to its own name; it wasn’t as if it were a baby that couldn’t choose one. There was a slight pause, and then: “Yes, I would like that.”