Dr Tsukimi sighed in response. "I did indeed create these, but I had to consider how to make them and how to avoid potential future problems. I did not rush things. Your idea may seem good in theory, but in practice… it would come with many challenges.
"Some of those challenges would include the limits of what it is capable of and how far things can be pushed to make it work as you want. Granting vision back is already difficult enough, but adding extra features like targeting systems?
"What if the information being provided is too much, or not delivered fast enough, and it causes overload or delays? That could render it too ineffective for the purpose it was made for, and it could bring more harm than good.
"As for compensating by letting the system take over the targeting? No good would come from giving up control of the body, whether the user was aware of it or not.
"It is like putting a gun in the hands of a machine designed to pull the trigger without hesitation or feeling. Although I am going off topic a little, you should realise that such a project needs more consideration toward function and must not rob the user of free will, whether permanently or temporarily.
"Such an idea needs to be refined until it works reliably, without taking the wrong approach and inviting serious problems. You should be well aware of the importance of drawing a line between modifications that are helpful and those that would make us no longer human."
An angry retort came from the other man. "Even if it is done willingly? What if someone were willing to give up control temporarily if it helped them improve their performance and survive a crisis?
"Are you telling me that those things by your side would do any better if they had to protect your life? They are all machines, yet you say they have emotion and are the next step in bridging the gap between machine and man.
"I do not quite agree with that… by your own statements, those things could become just as much of a danger as what I have brought forward, or even more."
Dr Tsukimi did not argue back as he nodded thoughtfully. "You might be right. There are always variables that cannot be accounted for. Spacey and the others could have turned out like that, but that is all the more reason why I push for what I do.
"If humanity and machines are to coexist, then there need to be proper guidelines and laws in place. It is especially important that the programming of each of them be solid with the laws they have to follow, and that they be given proper treatment and guidance.
"There need to be laws in place to prevent them from doing us harm, but that has to go both ways. It will be a long and difficult process, with complications involving ownership and securing the proper rights to live as others do.
"Perhaps it is something I might not see in my lifetime, but I would like to see such a day come. Your idea… is just a tool best suited for killing, and it carries serious dangers that can override the usual human responses.
"Take the Pacifista, for example… it does have safeguards in place, but the moment it sees a criminal, or someone close enough in appearance, it will open fire without warning or hesitation, no matter where they are."
The man fell silent and sat down in frustration, and the meeting moved on to further discussions about the moral and ethical standpoints of various projects and how to approach them.
Some tackled artificial intelligence, how to properly put safeguards in place, and the potential dangers it could pose.
Others covered the difference between simply programmed mechanical beings and far more advanced programming and technology, along with the challenges of granting certain capabilities while keeping safeguards in place.
That discussion moved on to the Pacifista that had appeared at the Marineford war, and to introducing such machines into homes, transport and weapons development to improve things, while weighing what could go wrong.
Some topics involved laws, while others went deeper into ethics, social politics and all kinds of related subjects.
Scientists, scholars, doctors and many other young and old minds were able to share their views and bounce ideas off each other regularly there.
The meeting soon came to an end, and people started to leave while Dr Tsukimi stayed a little longer, lost in thought.
He stayed to watch over some of the workers who came to clean up after everyone, which gave him another brief glimpse of the ordinary lives others led, whether to survive or by choice.
It taught him much that he could not learn in the confines of his lab, so he quite enjoyed such times as he went out more and saw how the Ness Kingdom had been developing or how the people there had been living.
It was all the more important for him to do so while he could, as he did not know how many years he had left to live.
He had so much to learn and do, but his time was limited.
He had gotten to experience and learn so much after joining the White Storm, and he was proud to have an excellent student like Ein to carry on after he was gone.
What remained was to try to make sure the Ness Kingdom and Creation developed along the correct path, but there was one more thing.
Spacey and the other Automata would be left behind after he passed, so he had to prepare them to live in a world without him by their side at all times.
Spacey and the others were like real children he had created and raised, so he was using his remaining time to let not only himself but them as well experience the world outside of his lab.
He took a stroll with them before heading to a giant tree that had a large building in the middle of the island built around it.
After going further inside, he opened a door and saw Monera, who was working out, lifting extremely oversized and heavy weights. "Milady Monera."
