We live in a world on the very cusp of a future filled with more AI (Artificial Intelligence) than ever before. Yes, we mean sci-fi-esque robots and the like. However, we also live in a world of sensitive Sallys, and of liberal meltdowns of socialist political correctness gone awry. So, this brings us to our next story, which is a gem in a minefield of liberal lunacy taken to the extreme once again. It's something we enjoy covering so much that it inspires us to get out of bed in the morning, check into the office, and hit that work desk hard right out of the gate.
When it comes to AI, people are divided on how much it will help or hinder us in the future. Facebook's Mark Zuckerberg and Elon Musk of Tesla and SpaceX are two such people. Elon Musk warns that the greatest future threat to humans will be AI, whereas Mark Zuckerberg disagrees. However, with Zuckerberg being the "opening all the borders but building walls around all of my homes" kind of person that he is, he may have to reconsider his social justice warrior ways of political correctness in any future AI design when he hears this latest news from the University of Virginia. Cue liberal meltdown.
Yes, researchers at the University of Virginia have found that AIs can be both sexist and racist. Uh-oh! Where will people get their safe spaces in the future now?
The researchers tested AIs on large collections of photos that the systems were meant to analyse and identify, and the results came in as undoubtedly sexist. The software associated women with kitchens and men with sports, to name one pattern baked into the design of this artificial intelligence. Wow! How will the liberal left and all feminists react to this? Perhaps it's time to burn and bash all the robots in the world and start again.
Image-recognising systems deemed men in a kitchen setting to be women, and women in a sports setting to be men, because apparently that's the only logical explanation to a robot. Really though, who can blame them? After sifting through presumably hundreds of thousands of stock images, where women were seen cooking in the kitchen whilst men were outside enjoying a riveting game of sports, can you really blame them for picking up simple trends in humans and society? Is any of this surprising to anybody? Please leave your anti-sexist comments in our comments section below, as we do need some clarification here.
Computer Science professor Vicente Ordonez of the University of Virginia, who tested two of the largest collections of photos and data used to train these AIs (including one supported by Facebook and Microsoft), found that the results were disturbingly sexist. He decided to research the matter further after he had an inkling that the image recognition software was sexist. OH NO! How will humans survive sexism from future robots? Will they get offended and throw a strop about it?
If recent indications of the highly impressionable, politically correct new generation are anything to go by, then surely this is a massive problem: an escalating hate crime resulting from a robot's inability to conform to PC culture. What a nightmare!
Ordonez told Wired, “It would see a picture of a kitchen and more often than not associate it with women, not men”.
This won’t go down well in Silicon Valley, but something tells us that if it was in Japan, they simply wouldn’t give two f*cks, right?
Ordonez explained that the software directly linked images of shopping, kitchens, and washing up to women, whilst it linked hunting, coaching, and sports directly to men. The AI even hilariously labelled a man in a kitchen a "woman". What would Gordon Ramsay say? We think he would say, "who f*cking cares?"
The two photo collections, imSitu (created by the University of Washington) and COCO (the one backed by Facebook and Microsoft), each contained over 100,000 images of complex scenes with descriptions, but from the results they found, the researchers deemed AIs to be more sexist and potentially racist than any politically correct millennial liberal would ever be able to handle. Sort of spells complete "curtains" for the futuristic liberal agenda of crazy political correctness, doesn't it?
People are already offended over social media, and nowadays, you wouldn’t expect anything less, would you?
Now for some stats from this research, which has already troubled many liberals and many sensitive social justice warrior feminists across the world. A paper called "Men Also Like Shopping" explained: "For example, the activity cooking is over 33 percent more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68 percent at test time".
So much bias, eh? On the male side, the system associated men with computer equipment such as keyboards and mice (not if PC mice are anything to go by, surely?). The researchers have already developed something to try to eliminate the bias, called RBA (Reducing Bias Amplification). This is a "debiasing technique" that adds constraints to a structured prediction model so that its outputs stay in line with the data it was trained on.
In other words, the algorithm injects constraints so that the model's predictions follow the distribution observed in the training data, rather than exaggerating it. The "Men Also Like Shopping" paper stated: "Our method results in almost no performance loss for the underlying recognition task but decreases the magnitude of bias amplification by 47.5 percent and 40.5 percent".
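For the curious, the "bias amplification" the paper measures can be sketched with toy numbers. This is a minimal illustration only: the dataset and figures below are invented (chosen to roughly echo the paper's cooking example), and the metric shown is simply the fraction of an activity's labels that involve women, compared between the training data and the model's predictions.

```python
# Toy sketch of the bias-amplification idea from "Men Also Like Shopping".
# All data here is invented for illustration, not taken from the paper.
from collections import Counter

def gender_bias(pairs):
    """Fraction of (activity, gender) pairs for each activity that
    involve women: bias(a) = count(a, woman) / count(a, any gender)."""
    totals = Counter(activity for activity, _ in pairs)
    women = Counter(activity for activity, gender in pairs if gender == "woman")
    return {activity: women[activity] / totals[activity] for activity in totals}

# Hypothetical training labels: cooking already skews female in the data.
train = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34
# Hypothetical model predictions on a test set: the skew gets amplified.
predicted = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

train_bias = gender_bias(train)["cooking"]          # 0.66
predicted_bias = gender_bias(predicted)["cooking"]  # 0.84
amplification = predicted_bias - train_bias         # ~0.18

print(f"training-set bias:  {train_bias:.2f}")
print(f"predicted bias:     {predicted_bias:.2f}")
print(f"amplification:      {amplification:+.2f}")
```

A debiasing method like RBA would, roughly speaking, constrain `predicted_bias` to stay close to `train_bias`, so the amplification term shrinks towards zero.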
Whilst RBA can be a useful tool, it will never be 100% accurate, leaving the possibility of a Pandora's Box opening for liberals in the face of AI, with man-made, politically correct constructs of entirely socialist and feminist ideals being exposed to criticism in our near future.
Mark Yatskar, who worked alongside Ordonez, explains that AI could extend bias even further, stating: "This could work to not only reinforce existing social biases but actually make them worse."
Oh dear… Yatskar elaborated on this issue that’s troubling the team, “A system that takes action that can be clearly attributed to gender bias cannot effectively function with people”.
To make matters even worse, let's add the other very dangerous misinterpretation of AI on race into this already volatile mix. The problem with technology and race was exemplified by the Xbox 360's Kinect, which many buyers reportedly couldn't get working because their skin was too dark, and by recent motion-sensor soap dispensers that wouldn't dispense soap because a person's skin was too dark for the sensors to pick up. That, of course, is a design flaw, and something which should and has to be fixed immediately, whatever the case.
As for getting offended by AIs that are sexist, however, that should surely be seen as a laughing matter, don't you think? It might even be endearing for us to have sexist robots, so that they can help to stop this over-exaggerated trend of political correctness in its tracks. Surely, men and women should know that they're men and women by now, hopefully enough so that people will not be offended by AIs calling out men and women along any traditional lines of thinking. However, seemingly this isn't the case, and people in tech will now look to resolve it. Although, we have a feeling that robots will probably be the most sexist things that ever walked the earth. To some degree, people may even find it a humorous tell-tale sign of whether somebody was actually a robot or not in the future. After all, things could become very blurred if robots take on a human-like form.
Now for a selection of Tweets in response to racist motion detectors that have caused a storm of late… Here it comes…
Well, there you have it! Opposing views on design flaws and racism, in the light of motion-sensor technology not working efficiently for all persons of colour. Surely, it's enough to make you upset, but to call it out as "racist" is just plain ignorant. It makes you look a part of the socially deluded spawn of politically correct times, with effervescent liberal indoctrination in the schooling systems of the new millennial generation. This has been truly interesting to go over.
Brainstain, over and out!
<Story by The Narrator>
Featured Photo Credit: LinkedIn