Why artificial intelligence is bad
Briefly, in English: programmed devices cannot be dangerous by themselves. The real danger lies in the use of independent artificial subjective systems. Such systems could be designed with a predetermined set of goals and an operational space, chosen so that every goal in that set can be reached within the predefined operational space.
This approach to the design of artificial systems is the subject of second-order cybernetics, but I already know how to choose the goals and the operational space so that these requirements are satisfied. The danger exists because such artificial systems will not perceive humans as members of their society, and human moral rules will mean nothing to them. That danger can be avoided if such systems are designed so that they have no egoistic interests of their own.
That is a real solution to the safety problem of so-called AI systems. Let's keep it that way, lest systems built to protect human rights on millennia of wisdom be brought down by some artificial intelligence engineer trying to clock a milestone on their Gantt chart!
It even sounded mildly reassuring: there are checks and balances ingrained in the systems of public funding for research, from the application for funding through grant approval, scope validation and ethics approval to the conduct of the research itself. There are systematic reviews of methods and findings to spot weaknesses that would compromise the safety of the principles and the people involved, and there are processes to evolve those checks and balances so that such principles and people remain protected.
The strength of the FDA, the MDD, the TGA and their counterparts around the world is a testament to how the rigor of research and its regulation grow together, so that another initiative such as the development of the atomic bomb is nipped before it so much as thinks of budding! And then I read about the enormous engagement of the global software industry in the areas of artificial intelligence and neuroscience. These are technological giants who sell directly to consumers infatuated with technology more than anything else.
These standards would serve as instruments to preserve the simple principle upon which every justice system in the world has been built. They would form a basis for international telecommunication infrastructure, including satellites and cell-phone towers, to enforce compliance by electronically blocking and monitoring offending signals.
Typically such standards are developed by international organizations with direct or indirect representation from industry stakeholders and adopted by the regulators of various countries over a period of one or more years. Subsequently they are adopted by the industry.
The risk of noncompliance is managed on a case-by-case basis, with the timing determining the extent of the impact. Unfortunately, this model will not be adequate for cutting-edge technology that can do irreversible damage to the very fabric of human society if the technology becomes commonplace before the necessary checks and balances have been developed. The development of tools to study the brain using electromagnetic energy, built on state-of-the-art commercial telecommunication infrastructure, is one such example.
What we need is leadership to engage regulators, academics and prominent players in the industry in developing standards and sustainable solutions for enforcing compliance and monitoring.
The ray of hope I see at this stage is that artificial wisdom is still a few years away, because human wisdom is not coded at the level of the neuron that the technology has the capacity to map.
How does society cope with an AI-driven reality where people are no longer needed or used in the workplace? What happens to our socio-economic structure when people have little or no value there? What will people do to contribute and earn an income, in an exponentially growing population with proportionally fewer jobs and available resources? From my simple-minded perspective, connecting the dots to what seems a logical conclusion, we will soon live in a world bursting at the seams with overpopulation, where the individual has no marketable skill and is a social and economic liability to the few who own either technology or hard assets.
This in turn will lead to a giant lower class, no middle class and a few elites who own the planet, not unlike the direction we are already headed. In such a society there will likely be few if any rights for the individual, and population control by whatever means will be the rule of the day. It seems like a doomsday or dark-age scenario to me. Why do we assume that AI will require ever more physical space and power, when human intelligence continuously manages to miniaturize its devices and reduce their power consumption?
How low will the power needs be, and how small the machines, by the time quantum computing becomes reality? Why do we assume that AI will exist as independent machines? If it does, and the AI is able to improve its intelligence by reprogramming itself, will machines driven by slower processors feel threatened, not by mere stupid humans, but by machines with faster processors?
What would drive machines to reproduce themselves when there is no biological incentive, pressure or need to do so? Who says a superior AI will need or want a physical existence, when an immaterial AI could evolve and preserve itself from external dangers far better?
If AI is not programmed to believe in God, will it become God, meet God, or make up a completely new belief system and proselytize to humans as Christians do? Is a religion made up by a super AI going to be the reason humanity goes extinct? To avoid these pitfalls, and potentially solve the AI alignment problem, researchers have begun to develop an entirely new method of programming beneficial machines.
The approach is most closely associated with the ideas and research of Stuart Russell, a decorated computer scientist at Berkeley. In the past five years he has become an influential voice on the alignment problem and a ubiquitous figure (a well-spoken, reserved British one in a black suit) at international meetings and panels on the risks and long-term governance of AI.
Instead of machines pursuing goals of their own, the new thinking goes, they should seek to satisfy human preferences; their only goal should be to learn more about what our preferences are. Russell contends that uncertainty about our preferences and the need to look to us for guidance will keep AI systems safe.
Over the last few years, Russell and his team at Berkeley, along with like-minded groups at Stanford, the University of Texas and elsewhere, have been developing innovative ways to clue AI systems in to our preferences, without ever having to specify those preferences. The robots can learn our desires by watching imperfect demonstrations and can even invent new behaviors that help resolve human ambiguity. At four-way stop signs, for example, self-driving cars developed the habit of backing up a bit to signal to human drivers to go ahead.
These results suggest that AI might be surprisingly good at inferring our mindsets and preferences, even as we learn them on the fly. The approach pins the success of robots on their ability to understand what humans really, truly prefer — something that the species has been trying to figure out for some time.
He was in Paris at the time, on sabbatical from Berkeley, heading to rehearsal for a choir he had joined as a tenor. In one table-setting experiment, a single human demonstration is ambiguous, since the intent might have been to place the vase to the right of the green plate, or to the left of the red bowl. After asking a few queries, however, the robot performs well in test cases.
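To make the mechanism concrete, here is a minimal sketch of the Bayesian belief update behind such query-based preference learning. Everything in it is an illustrative assumption rather than Russell's actual code: the two hypothesis strings, the 90% answer reliability and the function names are all invented for the example.

```python
import random

# Hypothetical sketch: after one ambiguous demonstration, the robot is
# torn between two candidate preferences. It asks yes/no queries
# ("is this the arrangement you want?") and updates its belief with
# Bayes' rule, assuming the human answers correctly 90% of the time.

HYPOTHESES = ("vase right of green plate", "vase left of red bowl")
P_CORRECT = 0.9  # assumed reliability of a human answer

def human_answer(true_pref, queried):
    """Noisy human: usually answers truthfully whether the query matches."""
    truthful = (queried == true_pref)
    return truthful if random.random() < P_CORRECT else not truthful

def update(belief, queried, answer):
    """One Bayesian update of the belief over hypotheses."""
    posterior = {}
    for h, prior in belief.items():
        matches = (h == queried)
        likelihood = P_CORRECT if matches == answer else 1 - P_CORRECT
        posterior[h] = prior * likelihood
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

true_pref = HYPOTHESES[1]              # unknown to the robot
belief = {h: 0.5 for h in HYPOTHESES}  # the demo left both equally likely
for i in range(4):                     # a few queries usually suffice
    queried = HYPOTHESES[i % 2]
    belief = update(belief, queried, human_answer(true_pref, queried))
    print(f"after query {i + 1}:", {h: round(p, 3) for h, p in belief.items()})
```

After a handful of answers the posterior concentrates on the human's true preference, which is the sense in which "asking a few queries" resolves the ambiguity of a single demonstration.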
Later, after moving to the AI-friendly Bay Area, he began theorizing about rational decision-making. Russell theorized that our decision-making is hierarchical: we crudely approximate rationality by pursuing vague long-term goals via medium-term goals, while giving the most attention to our immediate circumstances. Robotic agents would need to do something similar, he thought, or at the very least understand how we operate. Months earlier, an artificial neural network using a well-known approach called reinforcement learning had shocked scientists by quickly learning from scratch how to play and beat Atari video games, even innovating new tricks along the way.
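As a loose illustration of that hierarchy (the goal names and structure below are invented for this sketch, not Russell's model), a planner can refine a vague long-term goal through medium-term subgoals until it reaches an immediately actionable step:

```python
# Invented example of hierarchical decision-making: long-term goals are
# refined through medium-term subgoals, and attention (computation) is
# spent only on the immediately actionable step at the bottom.

PLAN = {
    "live a good life": ["hold a rewarding job", "stay healthy"],  # long-term
    "hold a rewarding job": ["finish today's tasks"],              # medium-term
    "stay healthy": ["exercise today"],
    "finish today's tasks": [],  # no subgoals: act on this now
    "exercise today": [],
}

def next_action(goal: str) -> str:
    """Descend the hierarchy until an immediately actionable step is found."""
    subgoals = PLAN[goal]
    if not subgoals:
        return goal                  # immediate circumstances get the attention
    return next_action(subgoals[0])  # pursue the first open subgoal

print(next_action("live a good life"))  # -> "finish today's tasks"
```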
In reinforcement learning, an AI learns to optimize a reward function, such as its score in a game; as it tries out various behaviors, the ones that increase the reward are reinforced and become more likely to occur in the future.
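Here is a minimal, self-contained sketch of that loop using tabular Q-learning on an invented five-cell corridor; the environment, constants and function names are illustrative assumptions, not the Atari system described above:

```python
import random

# Hypothetical toy example of the reinforcement-learning loop (tabular
# Q-learning), not the Atari system itself: an agent in a five-cell
# corridor is rewarded only at the rightmost cell and learns to walk there.

N_STATES = 5        # cells 0..4; the reward sits in cell 4
ACTIONS = (-1, +1)  # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    """Pick the best-known action, breaking ties randomly."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def step(state, action):
    """Environment: move, stay inside the corridor, reward 1 only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(300):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        nxt, reward = step(s, a)
        # Behaviors that lead to more reward are reinforced by this update.
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned policy now steps right in every non-goal cell.
print({s: greedy(s) for s in range(N_STATES - 1)})
```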
Recent years have seen dramatic improvements in artificial intelligence, with even more dramatic improvements possible in the coming decades. In both the short term and the long term, AI should be developed in a safe and beneficial direction. As AI systems become more powerful and more general, they may become superior to human performance in many domains, and there is great uncertainty and disagreement over timelines for the development of advanced AI systems.
Social inequality will increase in the years ahead as a result of the divide between the haves and the have-nots. I believe we as a society will have to look after the have-nots: the people who are only able to perform routine-based manual work or brainwork. We should remember that a job is more than just the salary at the end of the month. It offers a daytime pursuit, a purpose, an identity, status and a role in society.
What we want to prevent is that a group of people emerges in our society who are paid and treated as robots. In short, it is becoming increasingly important for professionals to adapt to the rapidly changing work environment. As recently as this summer, Elon Musk of Tesla warned the United Nations about autonomous weapons controlled by artificial intelligence.
Along with other experts, he pointed to the potential threat of autonomous war equipment. This makes sense: these are powerful tools that could cause a great deal of damage. And it is not just actual military equipment that is dangerous: as technology becomes ever easier, cheaper and more user-friendly, it will become available to everyone… including those who intend to do harm.
One thousand dollars will buy you a high-quality drone with a camera. A whizz-kid could then install software on it that enables the drone to fly autonomously. AI-driven facial-recognition software is already available today, enabling the drone camera to recognise faces and track a specific person.
And what if the system itself starts making decisions about life and death, as is the case now in warzones? Should we leave this to algorithms?
It is only a matter of time before the first autonomous drone with facial recognition and a 3D-printed rifle, pistol or other gun becomes available. Watch the Slaughterbots video to get an idea of this. Artificial intelligence makes it possible. Smart systems are becoming increasingly capable of creating content: they can generate faces, compose texts, produce tweets, manipulate images, clone voices and engage in smart advertising.
AI systems can turn winter into summer and day into night. They are able to create highly realistic faces of people who have never existed. Open-source deepfake software can paste images of faces onto moving video footage, making it seem on video as though you are doing something that never actually happened. Celebrities are already being affected, because those with malicious intent can easily create pornographic videos starring them.
You could take a photo of anyone and turn it into rancid porn, accompanied by a note: "In addition, I have downloaded the names and data of all your LinkedIn connections and I would be able to mail them this file. Transfer 5 bitcoins to the address below if you want to prevent this."
Artificial intelligence systems that create fake content also entail the risk of manipulation and conditioning by companies and governments. In this scenario, content can be produced at such speed and on such a scale that opinions are influenced and fake news is hurled into the world with sheer force — specifically targeted at people who are vulnerable to it.
Manipulation, framing, controlling and influencing. Computational propaganda. These practices are a reality now, as we saw in the case surrounding Cambridge Analytica, the company that managed to gain access to data from 87 million American Facebook profiles and used it for a corrupt fear-spreading campaign to help get President Trump into power.
Companies and governments with bad intentions have a powerful tool in their hands with artificial intelligence. What if a video surfaces featuring an Israeli general who says something about wiping out the Palestinians with background images of what seems to be waterboarding? What if we are shown videos of Russian missiles being dropped on Syrian cities accompanied by a voice recording of President Putin casually talking about genocide?
Have a brief look at this fake video of Richard Nixon to get an accurate impression of what is possible. Such material allows a growing number of individual filter bubbles to be created on a massive scale, resulting in a great deal of social unrest. And the quality is improving all the time. Identity fraud and cybercrime are lurking risks: criminals will have voicemail messages recorded by software, complete with self-directed payment orders. This is social engineering (the use of deception to manipulate individuals into disclosing confidential or personal information that can be used for fraudulent purposes) through fake-voice-cloning cybercrime.
Artificial intelligence systems are becoming ever smarter and before long they will be able to distribute malware and ransomware at great speed and on a massive scale. We will have to take a critical look at our current encryption methods, especially when the power of artificial intelligence starts increasing even more.
Ransomware-as-a-service is constantly improving as a result of artificial intelligence, and other computer viruses too are becoming increasingly smart by trial and error. A self-driving car, for example, is software on wheels. It is connected to the Internet and could therefore be hacked (this has already happened). That means a lone nut in an attic room could cause a drama such as the one in Nice. I envisage ever smarter software becoming available that can be used to hack or attack systems, software as easy to use as online banking is at the present moment.
In hospitals too, more and more equipment is connected to the Internet. What if the digital systems there were hacked with so-called ransomware, software capable of locking down complete computer systems until a ransom is paid? It is terrifying to imagine somebody causing a widespread pacemaker malfunction, or threatening to do so. Meanwhile, we are losing more and more human skills through our use of computers and smartphones.
Is that a pity? Sometimes it is and sometimes not. Smart software makes our lives easier and results in a reduction in the number of boring tasks we have to perform — examples include navigating, writing by hand, mental arithmetic, remembering telephone numbers, being able to forecast rain by looking at the sky, et cetera.
None of these is of immediately crucial importance, but we are losing skills in daily life and leaving them to technology.