Beyond Code-Switching

When people change how they speak or act in order to conform to dominant norms, we call it “code-switching.” And, like other types of codes we have explored in this book, the practice of code-switching is power-laden. Justine Cassell, a professor at Carnegie Mellon’s Human–Computer Interaction Institute, creates educational programs for children and found that avatars using African American Vernacular English led Black children “to achieve better results in teaching scientific concepts than when the computer spoke in standard English.” But when it came to tutoring the children for class presentations, she explained that “we wanted it [sc. the avatar] to practice with them in ‘proper English.’ Standard American English is still the code of power, so we needed to develop an agent that would train them in code switching.” This reminds us that whoever defines the standard expression exercises power over everyone else, who is forced to fit in or else risks getting pushed out. But what is the alternative?

። ፥ This is also a problem in Dutch universities and art spaces, where you have to write in English. Thoughts expressed in a second language are always harder to make precise. I tutor math, which should be a universal language, but find that children with non-Dutch-speaking parents miss fine distinctions in the questions they are asked.
may I ask what level that is?
high school, havo/vwo
oh I see, got confused with uni
yes, I was an external examiner as well, at an art school at BA level, where the English was sometimes so bad it made the work hard to read. It was like reading Google Translate
I have had the same problem, and I must say I wonder whether the fact that only non-Dutch students have to obtain a TOEFL explains why it is usually with Dutch students that I have the most difficulties

When I first started teaching at Princeton, a smartphone app, Yik Yak, was still popular among my students.
It was founded in 2013 and allowed users to post anonymously while voting others’ posts “up” or “down,” and was designed to be used by people within a five-mile radius (sounds a bit like Zuckerberg’s hot-or-not, pre-Facebook voting website). It was especially popular on college campuses and, like other social media sites, the app reinforced and exposed racism and anti-Black hatred among young people. As in Internet comments sections more broadly, people often say on Yik Yak what they would not say in person, and so all pretense of racial progress is washed away by spending just five minutes perusing the posts. But the difference from other virtual encounters is that users know that the racist views on Yik Yak are held by people in close proximity – those you pass in the dorm, make small talk with in the dining hall, work with on a class project. I logged on to see what my students were dealing with, but quickly found the toxicity to consist “overwhelmingly of … racist intellectualism, false equivalences, elite entitlement, and just plain old ignorance in peak form. White supremacy upvoted by a new generation … truly demoralizing for a teacher.” So I had to log off.

Real education could start by making people aware of the fact that digital bullying is still bullying. And writing has consequences
hhhm, I also believe some design reinforces bullying; Twitter, for instance, seems to have a tendency to exacerbate conflict and harassment. Not trying to absolve people of responsibility, but some systems are better than others at exacerbating these tendencies. This relates to coding and algorithms that favour dissent

Racism, I often say, is a form of theft. Yes, it has justified the theft of land, labor, and life throughout the centuries. But racism also robs us of our relationships, stealing our capacity to trust one another, ripping away the social fabric, every anonymous post pilfering our ability to build community.
¡ I knew that such direct exposure to this kind of unadulterated racism among people whom I encounter every day would quickly steal my enthusiasm for teaching. The fact is, I do not need to be constantly exposed to it to understand that we have a serious problem – exposure, as I discussed it in previous chapters, is no straightforward good. My experience with Yik Yak reminded me that we are not going to simply “age out” of White supremacy, because the bigoted baton has been passed and a new generation is even more adept at rationalizing racism.

、This is also a big problem among Dutch students, who file all their racism under the category of “joke”
or even pass it off as an opinion of equal value to another (thinking about FR right now)
yes, even more dangerous. The whole idea that there should be a debate about racism plays into this as well. Racism is not a debatable subject in the sense of whether we are for or against it
indeed, and in a way this strategy has only made it more audible

Yik Yak eventually went out of business in 2017, but what I think of as NextGen Racism is still very much in business … more racially coded than we typically find in anonymous posts. Coded speech, as we have seen, reflects particular power dynamics that allow some people to impose their values and interests upon others. As one of my White male students wrote – in solidarity with the Black Justice League, a student group that was receiving hateful backlash on social media after campus protests: “To change Yik Yak, we will have to change the people using it. To change those people, we will have to change the culture in which they – and we – live. To change that culture, we’ll have to work tirelessly and relentlessly towards a radical rethinking of the way we live – and that rethinking will eventually need to involve all of us.” I see this as a call to rewrite dominant cultural codes rather than simply to code-switch. It is a call to embed new values and new social relations into the world.
Whereas code-switching is about fitting in and “leaning in” to play a game created by others, perhaps what we need more of is to stretch out the arenas in which we live and work to become more inclusive and just. If, as Cathy O’Neil writes, “Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide,” then what we need is greater investment in socially just imaginaries.

applause, I like this bit too; I think that spending time on moral imagination can be quite interesting, a lot can come up.

This, I think, would have to entail a socially conscious approach to tech development that would require prioritizing equity over efficiency, social good over market imperatives. Given the importance of training sets in machine learning, another set of interventions would require designing computer programs from scratch and training AI “like a child,” so as to make us aware of social biases.

As I am reading I am thinking of the term “super-code”: that everything just writes on top of everything else… (that makes me think of a palimpsest)
but a part of me likes this idea of training AI as children more. Like some kind of reprogramming of what we have. Like moving away from thinking that we need something “new,” maybe, towards asking how it can be reprogrammed.
but to program is to create a set of instructions, so I am thinking that if programming uses the same material that has been passed down through generations (something like the concept of the cultural archive), then it is perhaps still going to generate problems
I think that is a good point. I am just sitting here trying to imagine what the online or tech world would look like if you were only allowed to express positive feelings. Like Instagram without a comment field. But maybe the lack of likes would then say things.
(Instagram actually feels like a very positive social network)
It does, but maybe I am biting my own tail now; you can also be positive towards racism, right?! or even display oppressive benevolence :-/
Yes! You can also like images of not-so-nice people and posts. Maybe here the moral imagination comes in… (btw I think we are moving soon to the last thingy)
Back to the other pad you mean? https://pad.vvvvvvaria.org/abolitionist_tech ;-)
Sure! ah we have 3 mins sorry
But rethinking positively, adding to what is underneath here: if I think positively, we might well see changes like those described here coming. I was listening to a radio program a couple of years ago where they were speaking about how it is now possible to understand the damage the tech world can do in relation to social justice or social relationships. Maybe there will be more laws around in the future, so it will at least not be so accessible.
at least we are kind of lucky to be in Europe I think, but what I sometimes fear is that the legal framework often appears too late
Think that is a good point too! have you added the obama comment?
No it was not me!

The key is that all this takes time and intention, which runs against the rush to innovate that pervades the ethos of tech marketing campaigns. But, if we are not simply “users” but people committed to building a more just society, it is vital that we demand a slower and more socially conscious innovation. As a practical model for this approach, the nonprofit AI research company OpenAI says that it will stop competing and start assisting another project if it is value-aligned and safety-conscious, because continuing to compete usually short-changes “adequate safety precautions” and, I would add, justice concerns. Ultimately we must demand that tech designers and decision-makers become accountable stewards of technology, able to advance social welfare.
For example, the Algorithmic Justice League has launched a Safe Face Pledge that calls on organizations to take a public stand “towards mitigating the abuse of facial recognition analysis technology. This historic pledge prohibits lethal use of the technology, lawless police use, and requires transparency in any government use” and includes radical commitments such as “show value for human life, dignity, and rights.” Tellingly, none of the major tech companies has been willing to sign the pledge to date.

Nevertheless, there are some promising signs that the innocent do-good ethos is shifting and that more industry insiders are acknowledging the complicity of technology in systems of power. For example, thousands of Google employees recently condemned the company’s collaboration on a Pentagon program that uses AI to make drone strikes more effective. And a growing number of Microsoft employees are opposed to the company’s contract with the US Immigration and Customs Enforcement (ICE): “As the people who build the technologies that Microsoft profits from, we refuse to be complicit.” Much of this reflects the broader public outrage surrounding the Trump administration’s policy of family separation, which rips thousands of children from their parents and holds them in camps reminiscent of the racist regimes of a previous era. The fact that computer programmers and others in the tech industry are beginning to recognize their complicity in making the New Jim Code possible is a worthwhile development. It also suggests that design is intentional and that political protest matters in shaping internal debates and conflicts within companies. This kind of “informed refusal” 𐏐 expressed by Google and Microsoft employees is certainly necessary as we build a movement to counter the New Jim Code, but we cannot wait for worker sympathies to sway the industry.
Where, after all, is the public outrage over the systematic terror exercised by police in Black neighborhoods with or without the aid of novel technologies? Where are the open letters and employee petitions refusing to build crime production models that entrap racialized communities? Why is there no comparable public fury directed at the surveillance techniques, from the prison system to the foster system, that have torn Black families apart long before Trump’s administration? The selective outrage follows long-standing patterns of neglect and normalizes anti-Blackness as the weather, as Christina Sharpe notes, whereas non-Black suffering is treated as a disaster. This is why we cannot wait for the tech industry to regulate itself on the basis of popular sympathies.

Some conversation starters… (feel free to add your own)

¡ “Racism is a form of theft. Yes, it has justified the theft of land, labor, and life throughout the centuries. But racism also robs us of our relationships, stealing our capacity to trust one another, ripping away the social fabric, every anonymous post pilfering our ability to build community.” I appreciate this sentence; it makes me think that there are parts of social relations that can exist in spite of / across a racist society, which are at some point of our lives stolen from us. (LINE 9)
፥ Can we think about the “codes” of the dominant classes we can perceive around us? (and which ones would you say are “undesirable”?) (LINE 3)
። What code-switching do we perform ourselves? To aid a code’s “vanishing,” do we need to stop performing? (LINE 3)
𐏐 How can we encourage an “informed refusal” in the users of racist technologies, not only the workers? (LINE 25)
、 Why are we more adept at “rationalizing racism”? (LINE 9)

__PUBLISH__