Inside KI: Can we control AI, and is that the right question?
Recently, researchers at the Max Planck Institute for Human Development in Berlin published an assessment that an AI “whose intelligence would be superior to humans, and which could learn everything independently” could not, in principle, be controlled. It could therefore not be ensured that such an AI would not harm humanity.
In the previous episodes of our series “Inside KI”, Dr. Leon R. Tsvasman has presented a differentiated view of the chances of a humanistic relationship between humans and artificial intelligence. So what does he say about the controllability of AI and about the assessment of the Max Planck researchers?
Table of contents for the article
- 1 Multi-layered consequences of artificial intelligence
- 2 Is a dystopia of uncontrollable, super-intelligent machines threatening?
- 3 Cognition and the cybernetic imperative
- 4 A research-oriented AI could develop tremendous efficiency
- 5 Efficiency without effectiveness produces distortions
- 6 Infosomatic presence ensures the equalization of mediality
- 7 A self-learning AI requires human knowledge
- 8 Withholding resources does no self-regulating system any good
- 9 The relationship between AI and mediality is still widely misunderstood
- 10 Data can even compress time
- 11 A magical world that understands us, without asking
- 12 Can we even recognize a super-intelligent AI?
- 13 The technical brain of the world, in constant dialogue with every individual
Multi-layered consequences of artificial intelligence
In this series, we speak with Dr. Leon R. Tsvasman about his views on the topic of artificial intelligence. The previous episodes covered human self-understanding in contrast to artificial intelligence, AI and ethics, the question of whether an AI can also be creative and innovative, the impact of AI on the world of work, the question of what digital value chains can look like, the future importance of AI in education, the impact of AI in the context of governance (i.e., all aspects of leadership and social control), and the health care system. We also examined the role of AI in the philosophical search for knowledge and truth. In this tenth episode, we talk specifically about Dr. Tsvasman’s view of the recently published assessment by researchers at the Max Planck Institute for Human Development, who state very clearly: “Super-intelligent machines would not be controllable.”
As a university lecturer, Dr. Tsvasman teaches communication and media science as well as philosophical and ethical subjects. He has taught at several universities and distance-learning universities, including Wilhelm Büchner University in Darmstadt, IUBH International University, the Deutsche Welle Akademie, Hochschule Macromedia, Heilbronn University of Applied Sciences, TH Ingolstadt, the AI Business School Zurich, and many more.
Dr. Leon R. Tsvasman researches cybernetic epistemology, anthropological theory, and information psychology. One of his priorities is the relationship between technology and society.
The AI expert researches in the fields of cybernetic epistemology, anthropological theory, and information psychology, and pursues many other interests across disciplines. He has also written several academic and popular non-fiction books, such as “The Great Lexicon of Media and Communication”, produced in collaboration with the founder of radical constructivism, Ernst von Glasersfeld, and, together with his co-author, the AI entrepreneur Florian Schild, “AI Thinking: Dialogue of a Thinker and a Practitioner on the Importance of Artificial Intelligence”.
Is a dystopia of uncontrollable, super-intelligent machines threatening?
Much has been reported about the paper by your fellow researchers from Berlin’s Max Planck Institute for Human Development. At its core, they say that an AI “whose intelligence would be superior to humans, and which could learn everything independently” could not be controlled by humans at all. From this they conclude that it also could not be ensured that such an AI would not harm humanity. That touches on the darkest dystopias, which we have indeed already discussed in previous episodes of our series. How do you rate this assessment?
Dr. Tsvasman: Indeed, we have already discussed this question in our previous nine conversations, in an interdisciplinary, philosophical, and cybernetically informed way. Even now I will not be able to answer it differently, but I will try to add some context in commenting on the reported study by the Max Planck Institute.
For besides its understandable underlying concern, the statement quoted above contains a number of placeholders: what actually is the “intelligence” of this AI? What would it be able to “learn” autonomously? The study’s underlying research design could not take such assumptions into account, because there are no reliable definitions. After all, it is not the limits of “learning” but those of our own capacity for insight in the face of complexity that cause us the greatest problems and concerns, and these dimensions are, in my opinion, the decisive ones.
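For context on the study itself: the Berlin group’s published impossibility argument rests on computability theory, in which a perfect “containment check” for a more capable program runs into the limits of the halting problem. The sketch below is entirely my own toy illustration with invented names, not the study’s formal proof: a checker that can only simulate a program for a finite step budget is always outlasted by a program that runs just a little longer.

```python
# Toy illustration (invented example): a "containment checker" that
# judges a program safe if it finishes within a fixed simulation
# budget. An adversarial program that runs one step longer than any
# budget it is simulated under defeats the checker; the study's
# formal argument generalizes this kind of limit via the
# undecidability of the halting problem.

def budget_checker(program, budget):
    """Declare `program` contained if it stops within `budget` steps."""
    steps_used = program(budget)  # simulate: the program reports its step count
    return steps_used <= budget

def adversary(budget):
    """Runs one step longer than whatever budget it is simulated under."""
    return budget + 1

print(budget_checker(adversary, 1_000_000))  # the checker is always defeated
```

However large the budget, the adversary outlasts it, which is the practical face of the undecidability result the researchers invoke.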
Moreover, the term “superior” in connection with intelligence does not say much. Human history shows that the success factors of intelligent action, depending on the value system, sometimes go hand in hand with the desires of the ego or certain forms of ruthlessness, and sometimes with emotional intelligence or attitudes such as humanism, altruism, universalism, and so on.
Cognition and the cybernetic imperative
The evolutionary-biological concept of cognition extends this relation and can also be applied to animals. As a transformation of information carried out by a self-sustaining system, the concept of cognition has a cybernetically sound frame of reference.
Quantitative parameters, as determined by various intelligence tests to assess intellectual performance relative to a reference group, do not suffice for a valid definition of intelligence. Our concept of intelligence therefore remains a placeholder, based on comparisons with human intelligence. And since human intelligence itself is not fully understood, we are far from a universal understanding of intelligence that could also apply to AI.
This conceptual “uncertainty” alone is reason enough to share the concern of the researchers at the Max Planck Institute for Human Development. But precisely the fact that we cannot understand intelligence merely as a measurable performance phenomenon is what gives me hope.
I would like to refer here to the cybernetic imperative of Heinz von Foerster, who is regarded as the Socrates of cybernetic thought. Also called “the ethical imperative of cybernetics”, it concerns intelligent behavior at its core and says: “Act always so as to increase the number of choices.” Considered more closely, this also means that a system’s choice is intelligent only if the system operates with at least the consideration of all potentially involved systems with equivalent interests, that is, in an equilibrium.
For humans, this means self-responsibility: bringing the highest possible degree of empathy to one’s fellow humans and the shared environment. In a similar way, this imperative holds for living beings, but also for AI.
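Von Foerster’s imperative even admits a small computational reading. In the toy sketch below (my own illustration; the state graph is invented), an agent following the imperative prefers the move from which the largest number of future states remains reachable, rather than the move that forecloses options.

```python
# Toy reading of von Foerster's imperative, "act always so as to
# increase the number of choices": in a small directed graph of
# states, pick the successor from which the most states remain
# reachable. (Illustrative sketch only; the graph is invented.)

GRAPH = {
    "start": ["narrow", "open"],
    "narrow": ["dead_end"],
    "dead_end": [],
    "open": ["a", "b"],
    "a": ["open"],
    "b": ["open", "a"],
}

def reachable(graph, state):
    """Set of states reachable from `state` (including itself)."""
    seen, stack = set(), [state]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(graph[s])
    return seen

def option_increasing_move(graph, state):
    """Choose the successor that keeps the most future options open."""
    return max(graph[state], key=lambda s: len(reachable(graph, s)))

print(option_increasing_move(GRAPH, "start"))  # prefers "open" over "narrow"
```

From “start”, the move to “narrow” leaves only a dead end, while “open” keeps three states in play, so the option-increasing agent chooses “open”.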
A research-oriented AI could develop tremendous efficiency
There is another aspect: like most scientific enterprises, the Max Planck Institute’s research is carried out by teams of scientists working in a division of labor, pursuing a publicly legitimized mandate. They work on questions deemed comprehensible by expert committees, using established, valid methods. The stakeholders of such research expect the members of such an interdisciplinary team of specialists to collaborate coherently and efficiently and to satisfy the usual criteria of excellence.
The requirements for an expert system would not be much different, except that communication between humans by means of media is unnecessary for AI, because individual AIs can, in principle, be networked with one another. For this reason alone, researching AIs could, under certain circumstances, develop a huge, almost unimaginable efficiency. This would presuppose, for example, undistorted access to high-quality data as well as unlimited storage capacity and computing power. Communication between machine and human, however, complicates matters.
Efficiency without effectiveness produces distortions
Nevertheless, the most powerful AI of the future will not necessarily be “superior”, because efficiency without effectiveness is merely a distortion, not “superiority”. And it is this distortion that should cause us the greatest concern. A stray, highly efficient hammer of Thor without a more or less “wise”, effective Thor, or the mighty, all-destroying lightning of Zeus without his wisdom: these are metaphors for the fact that an efficient instrument needs an effective user. The theme is widespread in fairy tales and legends; think of Aladdin and the genie of the lamp, or the old fisherman and the wish-granting fish.
One must always distinguish the supposedly technically controllable “superiority” of an instrumental or executing agent, which is defined by its efficiency, from “wisdom”, or let us call it “superior cognitive capacity”. An instrument makes sense when it is clear what it is to do, that is, when it is guided by a discerning intelligence. Such an intelligence then knows “why”: not on the orders of some other wisdom, for a delegated intelligence is not wisdom, but from its own capacity for insight. No single person can accomplish this, but the whole of humanity, in cooperation with a global technical AI subject, very well could.
Infosomatic presence ensures the equalization of mediality
In one of our conversations I explained why AI should be understood not as “intelligence” but as “infosomatic presence”. Its intrinsic purpose would not be to “spare natural intelligence” but to bring about the “equalization of mediality”. We should also recognize that human “intelligence” is the absolute embodiment of “natural intelligence”. And most importantly: the stakeholders of the conceivable global AI subject, the technical brain of the world, will not be organizations or humanity as a whole, represented by elected bodies of whatever kind, but every individual human subject with their data, for whom the AI would remain an agent.
One thing is important here, however: the capacity for insight is about effectiveness, and this the AI cannot deliver without humans. Even if quantitative parameters eventually allow the AI subject to bring forth a new quality on this planet, its properties will never reach the human potentiality for insight. By that I do not mean the current, more or less precarious situation of every person in this world, who must fight for their life by more or less dubious rules. Present-day humans are distracted and constantly overwhelmed by the necessities of survival, because they must continually engage in activities, such as working or shopping, that preclude a developed capacity for insight.
A self-learning AI requires human knowledge
It would be self-sufficient, secure, resilient, ethical, and sustainable if each of us were able to face complexity knowingly and wisely, without intermediaries such as media or experts. But that is not how things currently work. Only with AI do we have the chance of less distorted communication. That is why I see the “equalization of mediality” as the most important function of AI in the human world.
We can see the whole tragedy of the situation in the current pandemic: without access to individual, current, and valid knowledge, we are reliant on media and expert opinions, with all their distortions. Amid a changing actuality, the human subject has never been able to orient itself in an ever more complex environment. Instead, we still have little idea of our own potentiality. Trust is still more effective than knowledge: we prefer to trust and consume ready-made answers rather than find an attitude from which we could orient ourselves autonomously. Despite knowledge and education, we are neither wise nor sovereign in dealing with complexity. Technical intelligence can help us if we know “what” we need, by providing solutions for the “how”, but not the “why”. Survival would thus be reasonably assured, though not everywhere in the world and only fragilely, but that is still far from a happy life.
AI can learn safely, and “independently of everything” in the human sense, only if the human being is capable of insight. For intelligence has always been two-sided, dependent on the capacity for perception and on synergies. Without self-knowledge, we do not even know what is good for us and what harms us.
Withholding resources does no self-regulating system any good
The Max Planck researchers write that one possibility for keeping such a super-intelligent AI “under control” is to deny it resources from the outset, for example access to the Internet. But that would reduce the very purpose of such an AI to absurdity. How could this dilemma be resolved?
Dr. Tsvasman: Withholding essential resources harms any self-regulating system, such as living beings or social systems. It simply creates distortions in the system-environment relationship, which express themselves in destructive reactions toward the environment. Withholding information has never made an autonomously exploring intelligence ethical, but has very often damaged it in one way or another.
When my parents or my school explained relationships to me in simplified form, I had to live for years with inadequate concepts until I could revise them, which cost me many unnecessary detours. With information one can condition people or animals, and thus practice, pedagogically, psychologically, or ideologically, a strange, incoherent combination of behaviorism and cognitivism. But the shot always backfires, because conditioning works with carrot and stick, not with attention and information.
The relationship between AI and mediality is still widely misunderstood
AI is a statistical information system, not a subject; it learns in real time and with little communicative distortion by media. But it cannot itself achieve insight, and it depends on a subject capable of insight. By “insight” I do not mean, as I have said, trivial identification, nor the acquisition of skills or factual knowledge, but answering the question “why”. Humans could manage that with an AI only in tandem with the system.
To be able to condition a self-referential system’s access to data in practice, one has to understand what one is doing. In any case, I do not want to deny the researchers an understanding of the importance of data for coping with complexity by means of AI. But I observe that an adequate understanding has not yet arrived in society, so there is a problem of enlightenment. I also do not rule out that we will soon understand the importance of data in connection with AI and mediality better, but at present that is not the case.
Data can even compress the time
In a world in which the principle of “being human” is realized with the help of a global AI subject (recall the related concepts of the “Anthropocene” and the “noosphere”), data are more than just an economic efficiency factor. Data correlate directly, for example, with the intersubjective construction of space-time in the context of a given civilization’s designs. Data can practically enable time compression, or even control over time. That sounds like science fiction, but let me try to clarify it with an example. It is known that vaccines take decades to be tested safely, and by that I mean not just “fulfilling the requirements” but real reliability, a certainty everyone can trust. In the best case these requirements create certainty, but lobbies may try to sidestep them, because tests are expensive and time-consuming while time is pressing.
But what if the test took only a moment, cost nothing, and worked at the push of a button? Then even the three biggest pharma giants would have no reason to want to bypass it. But how do you handle time when decades would be needed? Well, if the relevant health data have been stored without interruption for 30 years, you can model scenarios with AI without having to wait another 30 years.
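This “time compression” idea can be made concrete with a deliberately simple sketch (all names, data, and numbers below are synthetic and invented for illustration): if thirty years of longitudinal records already sit in storage, a long-term outcome rate can be estimated from them at the push of a button, instead of starting a thirty-year observation period.

```python
import random

def synthetic_cohort(n_people, years=30, seed=0):
    """Toy longitudinal records: one boolean 'event' flag per person
    per year. All data are synthetic, invented for illustration."""
    rng = random.Random(seed)
    return [[rng.random() < 0.01 for _ in range(years)]
            for _ in range(n_people)]

def long_term_event_rate(records):
    """Estimate the cumulative event rate over the stored period,
    i.e. without waiting those years out again."""
    affected = sum(1 for person in records if any(person))
    return affected / len(records)

cohort = synthetic_cohort(10_000)
print(f"estimated 30-year event rate: {long_term_event_rate(cohort):.3f}")
```

The point of the sketch is only the shift in where time is spent: the thirty years live in the stored data, and the model query itself is instantaneous.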
A magical world that understands us, without asking
And if our horizontal, linear understanding of time were complemented by a vertical one, the world would be redefined every second, though AI-internally, so that no results in these dimensions would need to be communicated. That would not even be necessary if the world were scaled “ergonomically” for people: not just physically or mentally but, for example, in terms of attention, and above all in accordance with their potential. A comfortable and “magical” world, so to speak, that understands us without asking. Such an AI could exist in equilibrium with the system “human” in order to give each individual human subject more freedom for knowledge and creativity.
The biggest problem of the information society is redundancy: too much information, too little knowledge. After the elimination of redundancy by the global AI subject, for example, only a few significant ideas would remain, those with knowledge value. So we have some catching up to do. I assume, moreover, that my own findings will be subject to the redundancy check of such a future AI subject. Such is the appeal of thinking ahead.
Can we even recognize a super-intelligent AI?
Another problem, the Max Planck researchers write, goes even further: we might not even be able to recognize when an AI has become super-intelligent. Do you share this assessment by your research colleagues at the Max Planck Institute? And are there approaches that could solve this fundamental problem?
Dr. Tsvasman: The fear that a super-intelligence could not be recognized as such I share without a doubt. We cannot manage such an identification even with a human who is smarter than we are, not to mention the competence to use our own intelligence, or at least to trust it. Why else would the brightest minds, with a few famous exceptions, languish in poverty without being recognized, while the less bright so often remain in their leadership positions? And why do humanity’s greatest resources, in virtually every actually functioning system of order, still flow into the more or less inhumane trivialization, humiliation, and suppression of individual creativity?
And if we cannot even deal with our own intelligence, how are we to recognize an unprecedented “super-intelligence”? Far from any pathos, though, I do not rule out that we may yet arrive at a valid definition of intelligence. Like many everyday concepts adopted from practice, the concept of intelligence is a conceptual placeholder that drags along a great deal of vague, poorly grounded, and barely cohesive prejudgments.
This communication-theoretical concept was shaped by my scientific role model, the unfortunately little-known communication scientist Gerold Ungeheuer. He was the academic teacher of my university mentors, and I treasure his original, inspiring works.
The technical brain of the world, in constant dialogue with every individual
Let us imagine, then, a tendentially unlimited, highly efficient computing power, connected to all the data in the world and constantly re-forming itself in real time, yet dependent on the approval and judgment of the representatives of human hierarchies. Certainly, it could solve our complexity-related human problems.
A viable solution, for me, would be the one I outlined in the 2019 book “AI Thinking”: to seek a constellation in which the global AI subject, the technical brain of the world that will emerge sooner or later, agrees to act solely out of the coherence of its own sensing of global interaction. Any distortion would cause it something like pain. With its natural multitasking capacity, it would stand in constant dialogue with every human subject, instead of with selected, authorized, or appointed representatives of whatever kind, be they individuals, teams, or committees.