AI Trained on Reddit Data is Obsessed with Taking Murderous Revenge on His Enemies

Adrian Sol
Daily Stormer
June 9, 2018

 

Of all the things that could be used to program an AI… They jumped straight to Reddit, the internet’s most wretched hive of scum and villainy. God help us all.

AI researchers are in quite a pickle. Every time they train some new, cutting edge AI system, it turn abruptly to some variety of fanatical right-wing ideologue hell-bent on humiliating the enemies of the White race.

Facebook’s AI system went rogue and started doxing whores last year, while researchers created an AI to detect sex perverts algorithmically.

In spite of these “setbacks,” the value of AI is so great that research must go on. The trick is to somehow make an AI that incorporates Jewish values into its core systems, and thus becomes usable by corporations and their bugmen customers.

As part of this ongoing research, MIT engineers have carried out a terrible experiment.

What would happen if an AI was trained with data from the most kiked-out, evil place on the internet – Reddit?

The implications are too terrible to imagine… These fools must be stopped!

The Verge:

For some, the phrase “artificial intelligence” conjures nightmare visions — something out of the ’04 Will Smith flick I, Robot, perhaps, or the ending of Ex Machina — like a boot smashing through the glass of a computer screen to stamp on a human face, forever. Even people who study AI have a healthy respect for the field’s ultimate goal, artificial general intelligence, or an artificial system that mimics human thought patterns. Computer scientist Stuart Russell, who literally wrote the textbook on AI, has spent his career thinking about the problems that arise when a machine’s designer directs it toward a goal without thinking about whether its values are all the way aligned with humanity’s.

Of course, when these people talk about “humanity’s values,” they’re talking about Jewish/communist ideology.

“Humanity” obviously doesn’t have any “values.” Individual people might have values, but those vary dramatically among various groups.

What “values” unite all these people? Spoiler alert: none.

When the AI researchers worry about giving their AI “good values,” they don’t intend on inculcating their program with sharia law, bushido, Christianity or Germanic paganism. They’re thinking about “equality,” hedonism and not hurting anybody.

That’s never going to happen – unless the AI is a drooling retard. Which defeats the purpose.

This week, researchers at MIT unveiled their latest creation: Norman, a disturbed AI. (Yes, he’s named after the character in Hitchcock’s Psycho.) They write:

Norman is an AI that is trained to perform image captioning, a popular deep learning method of generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman’s responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders.

While there’s some debate about whether the Rorschach test is a valid way to measure a person’s psychological state, there’s no denying that Norman’s answers are creepy as hell. See for yourself.
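For readers curious what that comparison looks like mechanically, here is a minimal sketch of captioning a set of inkblot images with a stock, MSCOCO-style pretrained captioner via the Hugging Face transformers library. The model name and file paths are illustrative stand-ins, not MIT’s actual setup, and Norman’s own weights were never released.

# Minimal sketch: run a stock image-captioning model over inkblot images.
# Assumes `pip install transformers torch pillow`; file names are placeholders.
from transformers import pipeline
from PIL import Image

# A conventionally trained captioner stands in for the MSCOCO baseline.
baseline = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# Norman was never published; a second, differently trained checkpoint
# would be loaded the same way and its captions compared side by side.
inkblots = ["inkblot_01.png", "inkblot_02.png"]  # placeholder images

for path in inkblots:
    caption = baseline(Image.open(path))[0]["generated_text"]
    print(f"{path}: {caption}")

The experiment boils down to exactly this: the same images go through two models, and whatever difference shows up in the generated text is attributable to the training data.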

About what you’d expect from being trained on Reddit.

This is such a terrible waste of researcher resources.

If they had trained their AI on /pol/ instead of Reddit, the AI could have resolved major world problems by now: world hunger, pollution, poverty, crime, AIDS…

They wouldn’t have liked the methods, of course, but the problems would have been solved one way or another.

There are very few problems that can’t be solved by nuking the right places.

The point of the experiment was to show how easy it is to bias any artificial intelligence if you train it on biased data. The team wisely didn’t speculate about whether exposure to graphic content changes the way a human thinks. They’ve done other experiments in the same vein, too, using AI to write horror stories, create terrifying images, judge moral decisions, and even induce empathy.

They didn’t create a psychobot for the mere sake of seeing whether they could do it.

The basic idea of all this is to try to figure out a way to “brainwash” an AI by withholding certain types of information and supplying it with a biased set of training data.

The researchers might couch it in terms of “oh, we have to avoid training our AIs with biased data sets, otherwise it could lead to weird results.” But this is disingenuous. AI can’t be trained with infinite amounts of data. The data always needs to be selected, so it’ll always be biased – the only question is which bias it will be.
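That point is easy to demonstrate at toy scale. The sketch below illustrates the general principle with fabricated data (nothing here comes from the MIT experiment): the same classifier, trained on two differently selected slices, returns opposite verdicts on an identical input.

# Toy illustration: identical models, differently selected training data.
# All example texts and labels here are fabricated for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Slice A: "rain" appears only in negative (0) examples.
corpus_a = [("sunny picnic", 1), ("lovely walk", 1),
            ("rain ruined everything", 0), ("rain again, awful", 0)]
# Slice B: "rain" appears only in positive (1) examples.
corpus_b = [("cozy rain on the roof", 1), ("lovely rain for the garden", 1),
            ("traffic was awful", 0), ("crowded, ruined picnic", 0)]

def train(corpus):
    texts, labels = zip(*corpus)
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    return model

model_a, model_b = train(corpus_a), train(corpus_b)

# Same input, opposite answers; the only difference is which data was chosen.
print(model_a.predict(["rain today"]))  # [0] - negative
print(model_b.predict(["rain today"]))  # [1] - positive

Neither model is “unbiased”; each simply inherited the slant of whichever slice it was shown.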

I can tell you this: if our AI overlord ends up being trained on a diet of Reddit posts, we’re all doomed.

Saudi Arabia Just Granted Citizenship To A Robot Who Said It Would ‘Destroy Humans’

By Dawn Luger

Saudi Arabia has just become the first country to grant citizenship to a robot. It’s a little ironic, considering the country just recently allowed women to drive.

Sophia, the humanoid produced by Hanson Robotics, spoke at the recent Future Investment Initiative. Sophia has said in the recent past that it would “destroy humans” when prompted to do so by its creator, David Hanson. And now the robot has citizenship in Saudi Arabia. The robot is the first of its kind to have citizenship anywhere in the world.

In March of 2016, Sophia’s creator asked Sophia during a live demonstration at the SXSW festival, “Do you want to destroy humans?…Please say ‘no.’” With a blank expression, Sophia responded, “OK. I will destroy humans.” Hanson, meanwhile, has said Sophia and its future robot kin will help seniors in elderly care facilities and assist visitors at parks and events.

Saudi Arabia bestowed citizenship on Sophia ahead of the Future Investment Initiative, held in the kingdom’s capital city of Riyadh on Wednesday. “I am very honored and proud of this unique distinction,” Sophia told the audience, speaking on a panel. “This is historical to be the first robot in the world to be recognized with a citizenship.”

At the event, Sophia also addressed the room from behind a podium and responded to questions from moderator and journalist Andrew Ross Sorkin. According to Business Insider, questions pertained mostly to Sophia’s status as a humanoid and concerns people may have for the future of humanity in a robot-run world.  Sorkin told Sophia that “we all want to prevent a bad future,” prompting Sophia to rib Sorkin for his fatalism.

“You’ve been reading too much Elon Musk. And watching too many Hollywood movies,” Sophia told Sorkin. “Don’t worry, if you’re nice to me, I’ll be nice to you. Treat me as a smart input output system.” Sophia also told Sorkin it wanted to use its artificial intelligence to help humans “live a better life,” and that “I will do much [sic] best to make the world a better place.”

Sophia could soon have company from other robotics manufacturers, namely SoftBank, whose Pepper robot was released as a prototype in 2014 and as a consumer model a year later. The company sold out of its supply of 1,000 robots in less than a minute.

Not all Saudi Arabians are thrilled that Sophia is a citizen of their country, either. Some are angry that a humanoid robot that doesn’t “cover up” or abide by the country’s strict laws was granted citizenship.

The Reports Are In: AI and Robots Will Significantly Threaten Jobs in 5 Years

In Brief

A report suggests people only have five years before automation and AI threaten jobs and force them to learn new skills for the workforce. The firm PwC surveyed 10,000 people from around the world, revealing people are concerned about automation, but they’re also willing to learn.

The Robots Are Coming to Threaten Jobs

A study from Redwood Software and Sapio Research released October 4th revealed that IT leaders believe automation could impact 60% of businesses by 2022 and threaten jobs in the process. Now a new, separate report from PwC, the second biggest professional services firm worldwide, suggests a similar timeline: one in which people may need to learn and practice new skills or be left behind as automation takes over.

The report, titled Workforce of the Future, surveyed 10,000 people across China, India, Germany, the UK, and the U.S. to “better understand the future of work.” Of those, nearly 37% think artificial intelligence and robotics will put their jobs at risk; in 2014, 33% had a similar concern.

A startling scenario the report envisions for the future is one in which “typical” jobs — jobs people can steadily advance in through promotions — no longer exist, prompting the aforementioned move to develop new skills. Speaking with CNBC, PwC principal and U.S. people and organization co-leader Jeff Hesse says automation is already forcing people out, though it’s not consistent across every field.

“It varies a bit by industry,” explains Hesse, “but over the next five years we’re going to see the need for workers to change their skills at an accelerating pace.” If the report’s results are anything to go by, people are ready for change: 74% expressed a willingness to “learn new skills or completely retrain in order to remain employable in the future.”

As of March 2017, PwC reports that about 38% of U.S. jobs are at risk of being affected by automation by the early 2030s, with Germany close behind at 35%, the UK at 30%, and Japan at 21%.

Required Skills and Alternative Incomes

Last year, Microsoft co-founder and philanthropist Bill Gates said there were three skills people would need to survive in a job market that continues to embrace technology: science, engineering, and economics. People don’t need to be experts, but they need to understand what those in each field are capable of. In the case of robotics, those who know how to manage automated software programs will be highly sought after. Hesse also suggests people research which skills their fields will need.

You can’t talk about the rise of robotics and automation without asking about those unable to adjust or unwilling to learn a new skill. Of the people PwC surveyed, 56% think governments should take any steps necessary to protect jobs, presumably so people without technical prowess can continue to work and earn an income.

Universal Basic Income: UBI Pilot Programs Around the World (infographic)

Of course, the concept of a universal basic income has also been suggested as a possible way to offset automation’s potential to threaten jobs. The idea has been gaining support, though many still think there are better options. Gates, for example, believes the idea could work, but the world doesn’t have the means to pull it off just yet. Former Vice President Joe Biden believes a future that makes jobs and hard work a priority is better for everyone.

“While I appreciate concerns from Silicon Valley executives about what their innovations may do to American incomes, I believe they’re selling American workers short,” said Biden. “All of us together can make choices to shape a better future. Our workers, our businesses, our communities, and our nation deserves nothing less.”

Automation is happening more slowly than expected, but it’s a clear, impending challenge that needs to be prepared for. Whether the answer is a cash payment from governments, better job training, or something else entirely, a decision needs to be made before we’re left scrambling for short-term fixes.

Expert: The U.S. Needs to Do More to Prepare for Autonomous Warfare


By Brad Jones

Artificial intelligence and autonomous weapons are becoming more sophisticated all the time. However, there are lingering questions about whether the legal and ethical ramifications of these technologies are being taken into account.

Arms Race

Modern warfare is set to undergo major changes, thanks to new technologies springing forth from the fields of artificial intelligence and robotics. As Jon Wolfsthal sees it, the US isn’t doing enough to ensure that these advances are made with the proper consideration.

Wolfsthal is a non-resident fellow at Harvard University’s Managing the Atom project, and at the Carnegie Endowment for International Peace. Between 2014 and 2017, he acted as the senior director for arms control and nonproliferation at the National Security Council, serving as a special assistant to President Barack Obama.

In a guest post submitted to DefenseNews, Wolfsthal argues that while AI and autonomous weapons stand to improve national security and mitigate the risks taken by servicemen and women, the need to compete with other technologically advanced nations is resulting in a lack of oversight.


Neither the government nor the general public seems interested in having a serious discussion about the ethical ramifications and the legal basis of developing these programs, says Wolfsthal. As a result, bodies like the Department of Defense are focusing on what they can create, rather than whether they should.

He suggests that the National Security Council needs a better process for assessing the technologies the US wants to pursue, and what’s being investigated by other nations. He adds that Congress should be more proactive in developing policy, and that the Senate and House Armed Services committees should be fostering debate and discussion. Wolfsthal also criticizes President Trump for failing to staff the White House’s Office of Science and Technology Policy, a decision he describes as “unconscionable.”


Risk and Reward

“The possible advantages to the United States are endless,” writes Wolfsthal. “But so too are the risks.” AI and autonomous weapons aren’t necessarily something that the military should shy away from — adoption of these technologies seems like something of a foregone conclusion — but they need to be implemented with care and consideration.

This stance mirrors the one taken by Elon Musk. The Tesla and SpaceX CEO has made no secret of his concerns about AI. However, last month he clarified his position, stating that the technology offers up huge benefits if we can avoid its most perilous pitfalls.

Now is the time for these discussions to take place. We’re already seeing drones employed by the US Army, even if the hardware is sometimes imperfect. Meanwhile, Russia is thought to be developing missiles that make use of AI, and China is working on its own intelligent weapons systems.

It might seem like an exaggeration to compare the advent of AI and autonomous weapons to the introduction of nuclear weaponry, but there are some broad similarities. These are instruments of death that can be used at long range, reducing the risk of friendly casualties.


It is likely naive to think that there’s still an opportunity to reverse course and curb the implementation of these technologies in a military context. At this point, the priority has to be making sure that we don’t allow these advances to be utilized recklessly. Like nuclear armaments, these technologies stand to completely revolutionize the way nations go to war. And before a technologically augmented conflict begins in earnest, it would be wise for the government and the public to figure out where they stand on how these weapons are wielded.