In India, we as a society conflate prejudices and discriminations concerning skin, colour, race and caste, despite denials from the state and various institutions on numerous platforms. These frequently erupt into acts of personal and public mob violence across the country, leading to public humiliation, shame, torture and death. Over the years (August 2015 to the present) we have been researching and independently documenting and collecting testimonies and evidence central to the concerns of skin, colour, race and caste within the Indian sub-continental context.
Given the vast amount of research, data and knowledge documented and accumulated, we needed to disseminate it to a larger public. Further, given the absence of sponsorship or funding from any institution or individual, the digital became the medium available for sharing, since January 2017, in a (somewhat) non-ownership mode and with responsibility: how can one profit from images, texts and videos of people being beaten, shamed and tortured, or from a mass majority professing ignorance of their existence? This process of dissemination began a year ago, through somewhat easy access to free online platforms, as a blog on WordPress and a ‘hyperlocal’ presence on Facebook, based on the creation of the performative abomination of a ‘skin’ – the identity: ‘Un-Fair Web’.
The photographer Sunil Gupta once spoke at JNU of making work that was not (in itself) possible to display and disseminate, and thereby to make visible (with reference to his photograph of cum on a thigh). This process and multi-dimensionality of art lies in making and ‘being visible’ through intersubjectivity.
At #UnFair, we would take this ‘thought’ forward:
i). Making visible (or not) is subject to privilege (skin, colour, class, caste, race and gender) and thereby to one’s mobile-ability of capital (birth, culture, economic, academic, institutional, national, skill, labour, etc.).
ii). Further, in this ‘object’-driven, fungible and commodified world, what does it mean to make ‘work’ and ‘labour’ that is, or is not, buy-able/sale-able? How then does one become visible? What does ‘making visible’ mean in a ‘real’ (as in not virtual) world, given that this world endures and survives on cooption, appropriation, colonization and subjugation? Further, within the virtual and the digital, how, what and who becomes visible? What then becomes of our rights, if any?
Today, while objects continue to propagate and create commodities of abstract value, fungible and ephemeral, the digital society fails to realise that our traditional rights and freedoms cannot always be defended by traditional thinking or traditional legal instruments. In a virtual and digital world, we become even more so ‘un-touch-able’ (through processes of discrimination and ‘Digital Dalitification’).
Digital Rights, Net Neutrality and Freedom of Information:
Today the entire framework for the regulation of the digital aspects of our societies is being built in a vacuum, where politically expedient and populist policies are put in place. The infrastructure and services of the digital age (the public space of a digital society) are privately owned and provided across multiple jurisdictions. As a result, it becomes vastly easier for governments to decide not to regulate, but to seek public policy goals (or be seen to be ‘doing something’) by putting pressure on internet companies to impose restrictions, free from the legal constraints of international law or national constitutions. The number of examples of restrictions on freedom of expression imposed as a result of government pressure is growing exponentially. Our freedom of expression is leaking through this gap between what governments can informally demand and what companies will accept. The same applies to other digital rights. In the midst of numerous parallel processes to adopt legislation on privacy, data protection, copyright, terrorism, net neutrality, internet blocking, child protection and so on, the unanswered question of the interaction between law and a privately ordered public space remains open to exploitation.
The rollback of net neutrality has happened. In the US, Federal Communications Commission chairman Ajit Pai successfully dismantled US net neutrality rules, officially beginning to undo the Obama-era regulations on Internet service providers. The rules, passed in 2015, had placed cable and telecom companies under the strictest-ever oversight of the agency, requiring Internet providers to treat all web traffic equally and fairly: they could not block access to any websites or apps, and could not meddle with loading speeds. The end of net neutrality spells bad news for consumers, as well as for free speech. The advocacy group Free Press, which supports net neutrality, said Pai’s plan changes how consumers experience the internet. Without net neutrality, cable and phone companies can carve the internet into fast and slow lanes; an ISP could slow down its competitors’ content and even block political opinions it disagrees with. The 2015 rules also included a ban on so-called paid prioritization: the idea that Internet providers shouldn’t give special treatment to apps and websites that pay extra. It is only a matter of time before the impact is felt globally.
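The fast-lane/slow-lane concern can be sketched as a toy packet scheduler. The domain names and the two-tier priority scheme below are hypothetical, purely for illustration of what paid prioritization would mean in practice:

```python
import heapq

def schedule(packets, paid_domains):
    """Toy 'paid prioritization': an ISP free of neutrality rules could
    dequeue traffic from paying sites first (fast lane) and everyone
    else afterwards (slow lane)."""
    queue = []
    for order, (domain, payload) in enumerate(packets):
        lane = 0 if domain in paid_domains else 1  # 0 = fast lane
        heapq.heappush(queue, (lane, order, domain, payload))
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]

# Packets arrive interleaved, but the paying domain is served first.
packets = [("indie-news.example", "p1"), ("bigstream.example", "p2"),
           ("indie-news.example", "p3"), ("bigstream.example", "p4")]
print(schedule(packets, {"bigstream.example"}))
# ['bigstream.example', 'bigstream.example', 'indie-news.example', 'indie-news.example']
```

Under the 2015 rules, the `paid_domains` set could not exist: every packet would sit in the same lane, served in arrival order.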
Back home in India, the controversy over the government’s proposed rules and procedures for the Right to Information (RTI) Act overlooks the simple point that the goal right now should be to move on to a Duty to Publish, rather than merely clean up the working of the RTI Act. Some of the proposed changes have caused alarm among RTI activists. The provision that an RTI query would lapse if the questioner passes away while it is being processed certainly could have ominous implications.
The surest way to prevent uncomfortable information from surfacing through an RTI query from a pesky interlocutor would, indeed, be to bump him off. While this certainly could not have been the intent of either this or the previous government, the possibility cannot be dismissed out of hand, given the reality of repeated attacks on and killings of RTI activists in different parts of the country. According to the CIC’s annual report released last month, of all the cases in which an RTI application was rejected, nearly 40% did not even cite a relevant section of the RTI Act under which the information was denied; these denials are all categorised under a nebulous field called ‘other’. The Prime Minister’s Office alone rejected over 2,200 RTIs in this fashion in 2015-16. Perhaps some of those applications were indeed frivolous. But there were several cases where the information sought was simply uncomfortable.
(Community) Knowledge and Commons:
The copyright model, born in 18th-century England after the industrial revolution, traces its history back to the Renaissance, the making of the ‘I’ and the ‘Artist’, and the further creation of value hegemonies and institutional propositions of power. At #UnFair and Un-Fair Web we believe collaboration is a process of meaning creation and generation: a call for action addressing a shared concern, through various stakeholders, communities and partners constructing multiple alternatives through an evaluative process, building a collective knowing – ‘Knowledge’. The rendering we propose is working together with the other in cooperation (bringing into action/contact), against the constructed norm: ‘co-labor’ and ‘co-labor-ableing’. ‘Co-labor-ableing’ may thus construct and build an empowering and enabling ‘commune’ from within, one that owns the means of production and controls the modes of dissemination through constant negotiation between the co-laborers and a ‘commune’ ownership of the production, thereby no longer constructing a communication/message/code derived from, and speaking back only to, a particular class. Further, this process allows each to contribute according to their own personal means, time, purpose and commitments. It is in this process of co-laboring, we believe, that its dissemination also lies within the domain of the creative commons.
Digital labor theorists have come to borrow the idea of a ‘general intellect’ from Marx’s Grundrisse: a creative hive of interconnectivity that produces endless value for a select few. In his essay ‘In Search of the Lost Paycheck’, Andrew Ross calls this “the vast network of cooperative knowledge that is the source and agent of the cognitive mode of production.” In the process of creating our digital selves—listing our habits and preferences, choosing to ‘like’ and retweet certain items, and amassing followers—we allow businesses to extract value from our preferences, personality, and relationships. Highlighting this, Laurel Ptak has been mulling over the manifesto Wages for Facebook for over a year now: “What might be possible if we tried to mobilize the idea or the conversation around wages for Facebook?” Amid the campaign paraphernalia was an iPad displaying a silently scrolling web page with text in all caps: THEY SAY IT’S FRIENDSHIP. WE SAY IT’S UNWAGED WORK. WITH EVERY LIKE, CHAT, TAG OR POKE OUR SUBJECTIVITY TURNS THEM A PROFIT. THEY CALL IT SHARING. WE CALL IT STEALING. WE’VE BEEN BOUND BY THEIR TERMS OF SERVICE FAR TOO LONG—IT’S TIME FOR OUR TERMS.
In the digital age, a lot depends on whether we actually own our stuff, and who gets to decide that in the first place. In ‘The End of Ownership: Personal Property in the Digital Age’, Aaron Perzanowski and Jason Schultz walk us through a detailed and highly readable explanation of exactly how we’re losing our rights to own and control our media and devices, and what’s at stake for us individually and as a society. Perzanowski and Schultz present compelling evidence that many of us are unaware of what we’re giving up when we ‘buy’ and consume digital goods. The authors carefully trace the technological changes and legal changes that have, they argue, eroded our rights to do as we please with our stuff. Among these changes are the shift towards cloud distribution and subscription models, expanding copyright and patent laws, Digital Rights Management (DRM), and use of End User License Agreements (EULAs) to assert all content is ‘licensed’ rather than ‘owned’.
Copyright, with its ever-expanding range of restrictions and harsh punishments for those who overstep the mark – even unwittingly – hardly promotes exchange. An absolutely terrifying proposal has emerged: the EU Commission’s proposed copyright directive poses a threat to the internet’s fundamental interconnectedness, featuring some of the most impractical and extreme expansions of copyright rules ever seen. These are the ‘value gap’ proposal to require Internet platforms to put in place automatic filters to prevent copyright-infringing content from being uploaded by users (Article 13), and the equally misguided ‘link tax’ proposal that would give news publishers a right to compensation when snippets of the text of news articles are used to link to the original source (Article 11). Copyright has already been bent out of shape; the original theory was to use copyright to protect the content creator and allow them to make back any investment in their idea, along with a healthy profit, over a fixed 14-year period. With the current period set at 70 years, not only are copyright laws strangling innovation, but these additional reforms seek to make criminals of everyone who does not pay a fee simply to link to someone else’s work. In short, you may soon face a charge each time you publish a link to an article. From individual bloggers to large publications, big media seeks to control how we direct people online, make citations on Wikipedia, or simply recommend a game or movie. But it doesn’t stop there: saving photos to online shopping lists on sites such as Pinterest, or sharing any news article over Facebook or Twitter, is in no way exempt. As it stands, there are no exceptions for non-commercial use. In fact, even search engines, which are essentially a long list of links gathered around whatever query you enter, could also be subject to the link tax.
Across the world, people, companies, and institutions also use noncommercial copyright licenses to make their work available to the public for noncommercial use. They do so because they want to share and allow re-use of their work. Creative Commons licenses become allies of artists struggling for recognition and remuneration: their broad permissions and explicit encouragement to share and enjoy promote and enhance that exchange, and help to generate the crucial financial return too. After 12 years, it is easy to see Creative Commons’ impact on the world. Fourteen countries have made national commitments to open education. Governments, foundations, institutions, and even corporations need someone pushing them in the direction of sharing, and CC has stepped up to lead. The main objective of the project is to promote a global debate on ownership and copyright management, and to diffuse legal and technological tools (such as licenses and related services) which can allow for a ‘some rights reserved’ model in the distribution of cultural products.
Identity Digital and Virtual:
The first form of identity or individual census records dates back to nearly 4000 B.C., with the Babylonians, who, we think, primarily used them to determine how much food they would need per person. Not much has changed since the Babylonians: we still use a census to inform macroeconomic decisions such as domestic social welfare needs and foreign aid, though how we conduct the census differs from country to country and region to region. Up until now, analog identities have worked well for western civilization. With the rise of modern globalization and the growth of e-commerce, businesses and governments are eager to find new and innovative ways to verify the identities of their new customer base: everyone, everywhere. The increased attention may be useful for people often referred to as ‘the next billion users’, a Silicon Valley term for people living in and around the BRIC countries (Brazil, Russia, India, China). We often take for granted growing up in a country where our first ID is issued at birth; with no legal identity to verify themselves, millions of would-be customers are unable to use products and services. A digital identity is an online or networked identity adopted or claimed in cyberspace by an individual, organization or electronic device, and users may project more than one digital identity across multiple communities. In terms of digital identity management, the key areas of concern are security and privacy. If the digital economy takes our analogue products and services and transforms them for the digital channel, the shared economy takes our analogue experiences and removes the burden and expense of ownership: a modern timeshare without the time requirement or the awkward marketing pitch. Like all new areas, however, the transition is still built on analogue models.
On a daily basis, this same type of information is used to identify the people who deliver the packages ordered through major online retailers, drive for ride-sharing services, run errands for on-demand task services and more. As it turns out, companies today are far too reliant on driver licenses, passports, birth certificates and even a basic background check. These old methods can’t track relationships and are not very informative of a person’s trustworthiness or reputation. The old methods can’t keep pace with a new generation of criminal and fraudster and, typically, are not very secure.
Technology is on the cusp of a major paradigm shift in the field of identity. Blockchain will supposedly create a major revolutionary transformation in this area — but many questions of how remain unanswered. Companies, governments and NGOs are beginning to tackle this question in ways that hint at the profound impact it will have on how we live our lives. The promise is that our identities will be consolidated so that we have complete control over who accesses that information. This would protect us from increasingly sophisticated fraud and theft. It would also create unprecedented access for the ‘bottom of the pyramid’, who are still off the grid. Imagine crossing any border, and qualifying for any service, with immediate access to your funds and accounts, all with one simple digital ID. Almost everything we do today leaves an increasingly digital signature, yet this signature is scattered among different services that use it primarily for their, rather than our, benefit. One first-order promise of blockchain is to rebalance that dynamic so we can reclaim and consolidate our digital identities. As Jaron Lanier writes in Who Owns the Future, “You don’t get to know what correlations have been calculated about you by Google, Facebook, an insurance company, or a financial entity, and that’s the kind of data that influences your life the most in a networked world.” Lanier calls these services siren servers, and notes that we are eerily comfortable giving away personal data in a trade for convenience.
The implications of this paradigm are troubling: We have no control. Each service has an abstracted model of who you are built into it. This information can be bought and sold, and also easily breached, without your consent. This setup is convenient (e.g. we get better-targeted recommendations), and also dangerous (e.g. we can be profiled and manipulated).
The segmentation is inefficient. What if all these data repositories played nice with each other? For instance — if your passport, driver’s license, bank and email were integrated — you could travel internationally, ensure your credit cards work, receive curated recommendations and alert people of your whereabouts, without any work required on your end. You could prevent identity theft. You could move around without identification.
We do most of the work for none of the benefits. This is Lanier’s primary contention. As has been noted, if services like Google Search are free, that is because we are not the customer, we are the product. In much the same way that Uber drivers occupy a grey area between employees and contractors, we act somewhat like workers for Amazon and Facebook, yet are only compensated with convenience, not payment, for giving up our data.
If you were to look at a complete model of your digital self, it would be a complex relational web. At the most granular level of that web are nodes, each representing actions (a text, a selfie, a purchase…). The connections between those nodes are formulas that infer relationships, record patterns and predict behavior. If you zoom out, you get the sub-web of a given service (Instagram account, Homeland Security profile, medical history…). These sub-webs then join together to form the larger web that is your digital identity.
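This relational web can be sketched, very roughly, as a toy data structure. The service names and actions below are invented for illustration; a real platform’s inference formulas are, of course, proprietary and vastly more complex:

```python
from collections import defaultdict

class DigitalIdentity:
    """Toy model of the 'relational web': nodes are recorded actions,
    grouped into per-service sub-webs, joined by inferred edges."""
    def __init__(self):
        self.subwebs = defaultdict(list)  # service -> list of action nodes
        self.edges = []                   # inferred links between actions

    def record(self, service, action):
        self.subwebs[service].append(action)

    def infer_link(self, a, b, relation):
        # A platform's formulas would infer these; here we add one by hand.
        self.edges.append((a, b, relation))

me = DigitalIdentity()
me.record("instagram", "selfie@cafe")       # one sub-web
me.record("bank", "purchase:coffee")        # another sub-web
me.infer_link("selfie@cafe", "purchase:coffee", "same time+place")
print(len(me.subwebs), len(me.edges))  # 2 sub-webs, 1 inferred edge
```

The point of the sketch is structural: no single sub-web is very revealing, but the edges joining them are where the profiling value lives.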
The Digital and Virtual World:
Eyetracking is a tool used by Nielsen and various other consulting agencies and think-tanks in developing analytics for governments, corporations and consumer brands. The same methodology is used by all web-based social media in throwing up content [(un)paid / advertising / media, etc.] during your browsing and web interactions. Search engines and social media accounts most often also have access to one’s personal data through emails, photos, contact details, banking accounts and other identifying documents. Further, given access to intimate personal data through interconnected ‘smart’ devices, corporations also have access to location mapping of an individual in a geo-mapped and tagged world. This now allows for the design of algorithms and bots that automate processes in making data and information available at one’s fingertips – for example, the “people you may know” function on Facebook, which may surface someone with whom you have just exchanged pleasantries during a social occasion, or exchanged details (data points, i.e. email IDs, phone numbers, etc.). Facebook has denied using location data to suggest potential friends, amid questions about the unsettling accuracy with which it puts forward “people you may know”. Fusion reported that Facebook was drawing on the location of users’ smartphones to inform its suggestions – a ‘privacy disaster’ – and quoted a spokesperson as saying that location information was “only one of the factors” Facebook used to determine people who may know each other. “Seriously, I’ve had enough reporters ask me, freaked out, why Facebook is recommending their protected sources,” tweeted Violet Blue, a reporter on cybercrime. But Fusion then published an updated statement from Facebook, which said it did not use location data – though it had briefly in the past. Fusion’s Kashmir Hill wrote that she had “reportorial whiplash”: “I’ve never had a spokesperson confirm and then retract a story so quickly.”
The feature has been known to suggest users who have no or few mutual friends on the network – and, reportedly, nothing in common beyond having shared the same physical space – prompting concerns about how it works.
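A minimal sketch of how such a suggestion could work from shared data points alone. The contact details are invented, and real systems weigh many more signals (mutual friends, location, co-tagged photos); this only shows why a single exchanged phone number is enough to connect two strangers:

```python
def suggest_people(user_contacts, other_users):
    """Toy 'people you may know': rank other users by how many data
    points (emails, phone numbers) they share with you."""
    scores = {}
    for name, contacts in other_users.items():
        overlap = len(user_contacts & contacts)
        if overlap:
            scores[name] = overlap
    # Most shared data points first.
    return sorted(scores, key=scores.get, reverse=True)

me = {"a@x.example", "+91-98xxx", "b@y.example"}
others = {
    "stranger_at_party": {"+91-98xxx"},           # one exchanged number
    "colleague": {"a@x.example", "b@y.example"},  # two shared emails
    "nobody": {"c@z.example"},                    # no overlap, never shown
}
print(suggest_people(me, others))  # ['colleague', 'stranger_at_party']
```

Note that the person you met once at a party surfaces from a single data point, which is exactly the unsettling accuracy the reporting describes.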
All marketing gurus speak of a consumer’s ‘black box’. All corporations spend heavily on market research, and consequently on messaging through ATL and BTL communication and advertising, in developing mass consumption. Today’s emerging technology of ‘virtuality’ now offers the technological giants a further insight into one’s own mind via a pseudo-quasi sensorality, using haptic and virtual reality headsets, interactions and games. Virtual reality offers a penetrative perspicacity into the medulla oblongata and our socio-cultural-political selves by tapping into what makes us tick. A simple virtual interaction based on principles of ‘play’ will reveal to an algorithm our tastes, preferences and kinks. Add to this our personal banking accounts, emails, photographs, geo-mapping and eye tracking, and we become objects ‘read’ and exploited by the objects we use or the books we read. Data extracted from humans (from humble beginnings as crowd-sourced projects like CAPTCHA) is used to power machine learning and feed artificial intelligence using ‘Big Data’. We already hear of Apple joining other tech companies, including Facebook, as a founding member of the AI initiative. Companies will work on research projects, AI best practices and more. Six independent individuals, chosen for their past achievements in AI, are also joining the board and will participate in the discussions: Dario Amodei (OpenAI), Subbarao Kambhampati (Association for the Advancement of Artificial Intelligence & ASU), Deirdre Mulligan (UC Berkeley), Carol Rose (American Civil Liberties Union), Eric Sears (MacArthur Foundation) and Jason Furman (Peterson Institute of International Economics). Even the E.U. member states have already agreed on an ambitious new open-access (OA) target: all scientific papers should be freely available by 2020, the Competitiveness Council—a gathering of ministers of science, innovation, trade, and industry—concluded after a two-day meeting in Brussels.
But some observers are warning that the goal will be difficult to achieve. How much of this is available and understandable in a ‘langue – language’ to a lay public is questionable.
Artificial intelligence is a part of our daily lives, but the technologies can contain dangerous biases and assumptions—and we’re only now beginning to understand the consequences. It was recently discovered that a complex program used in image recognition software was producing sexist results—associating cleaning or the kitchen with women, for example, and sports with men. The developers were disturbed, but perhaps it shouldn’t have been so surprising. After all, computers and software, even at their most sophisticated, are still in essence input-output systems. AI is ‘taught’ by feeding it enormous amounts of pre-existing data—in this case, thousands upon thousands of photos. If it began to associate certain genders with certain activities, it is because it was outputting the bias inherent in its source material—that is, a world in which pictures of people in kitchens are too often of women. As machines get closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use. Joanna Bryson, a computer scientist at the University of Bath and a co-author of one such study, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.” In May last year, a stunning report claimed that a computer program used by a US court for risk assessment was biased against black prisoners. The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to mistakenly label black defendants as likely to reoffend – wrongly flagging them at almost twice the rate of white people (45% to 24%), according to the investigative journalism organisation ProPublica. The promise of machine learning and other programs that work with big data (often under the umbrella term ‘artificial intelligence’ or AI) was that the more information we feed these sophisticated computer algorithms, the better they perform.
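How learned word vectors absorb such associations can be illustrated with a toy calculation. The vectors below are hand-made stand-ins, not real embeddings; they are deliberately skewed the way biased training text skews real ones, so the cosine-similarity test used in bias studies shows the effect:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hand-made toy vectors standing in for learned word embeddings.
vec = {
    "woman":   [0.9, 0.1, 0.0],
    "man":     [0.1, 0.9, 0.0],
    "kitchen": [0.8, 0.2, 0.1],  # skewed toward 'woman' by biased text
    "sports":  [0.2, 0.8, 0.1],  # skewed toward 'man'
}

def association(word):
    # Positive -> closer to 'woman'; negative -> closer to 'man'.
    return cosine(vec[word], vec["woman"]) - cosine(vec[word], vec["man"])

print(association("kitchen") > 0)  # True: the 'learned' bias
print(association("sports") > 0)   # False
```

Nothing in the arithmetic is prejudiced; the skew was already present in the vectors, which is exactly Bryson’s point that the bias comes from us, not the algorithm.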
Last year, according to global management consultant McKinsey, tech companies spent somewhere between $20bn and $30bn on AI, mostly in research and development. Investors are making a big bet that AI will sift through the vast amounts of information produced by our society and find patterns that will help us be more efficient, wealthier and happier. But, while some of the most prominent voices in the industry are concerned with the far-off future apocalyptic potential of AI, there is less attention paid to the more immediate problem of how we prevent these programs from amplifying the inequalities of our past and affecting the most vulnerable members of our society. When the data we feed the machines reflects the history of our own unequal society, we are, in effect, asking the program to learn our own biases.
The most famous case of AI breaking bad was Microsoft’s experimental Twitter bot called Tay. Created in 2016 as a female persona, it was supposed to learn how to interact with people by interacting with people. But, again, people = shitheads, and some folks jammed Tay with sexist and racist remarks. Within hours, Tay was sex-chatting with one user, tweeting, “Daddy I’m such a bad naughty robot”, and telling another user that feminists “should all die and burn in hell”. Microsoft hit delete on Tay within 24 hours. Later, Facebook chose to shut down its own AI chatbots after they began ‘speaking their own language’ with each other. “Our interest was having bots who could talk to people,” researcher Mike Lewis told FastCo. (The researchers did not shut down the programs because they were afraid of the results or had panicked, as has been suggested elsewhere, but because they were looking for them to behave differently.) The chatbots also learned to negotiate in ways that seem very human: they would, for instance, pretend to be very interested in one specific item, so that they could later pretend they were making a big sacrifice in giving it up, according to a paper published by FAIR. Further, ProPublica uncovered that Facebook’s ad targeting system, which groups users together based on profile data, offered to sell ads targeting a demographic of Facebook users that self-reported as ‘Jew Haters’. ‘Jew Haters’ started trending on Twitter when the piece went viral, and by Friday Facebook announced it had removed ‘Jew Haters’ and other similar groups from its advertising service by temporarily excluding its entire self-reported education and employer fields. With the announcement, the company offered a predictably anodyne apology and explanation: as Facebook explains, the categories were algorithmically determined based on what users themselves put into the Employer and Education slots.
Enough people had listed their occupation as racist bile like ‘Jew Hater’, their employer as ‘Jew Killing Weekly Magazine’, or their field of study as ‘Threesome Rape’ that Facebook’s algorithm, toothless by design, compiled them into targetable categories. Facebook’s response is repetitious in emphasizing that users themselves self-reported the data. But claiming ignorance of its own algorithms lets Facebook evade more obvious questions: What does it tell us about Facebook that Nazis can proudly self-identify on its platform? Why can’t Facebook’s algorithms determine that words like ‘rape’, ‘bitch’, or ‘kill’ aren’t valid occupational terms? Facebook says its AI can detect hate speech from users—so why, seemingly, did Facebook choose not to point its AI at the ad utility?
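The trivial check the passage implies was missing can be sketched in a few lines. The denylist here is minimal and purely illustrative, not a real moderation policy, and real systems would need far more than keyword matching; the point is only that even this crude filter catches the categories in question:

```python
# Minimal, illustrative denylist -- not a real moderation policy.
DENYLIST = {"hater", "kill", "rape", "bitch"}

def is_plausible_field(text):
    """Reject self-reported 'occupation' strings containing denylisted
    terms -- the kind of trivial check the targeting pipeline lacked."""
    words = {w.strip(".,!").lower() for w in text.split()}
    return not (words & DENYLIST)

print(is_plausible_field("Software Engineer"))  # True
print(is_plausible_field("Jew Hater"))          # False
```

That a filter this naive would have blocked the reported categories underlines the point: the algorithm was toothless by design, not by technical necessity.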
NEWS: Media, Medium and Impact:
‘The Media’ as we know it is not that old. For most of our history the news was, literally, the plural of the ‘new’ thing(s) people heard about and shared, and was limited by physical proximity and word of mouth. Journalistic objectivity, like many Western articles of faith, began as a late 19th-century ideal with very different aims than we attach to it today. Originally, journalism was nothing more than a megaphone for the powerful: the king dictated, and the reporters wrote it down. Newspapers were filled with pronouncements from on high: declarations of war, changes in navigation routes, calls to prayer, that kind of thing. Since the invention of the printing press, the news had consisted of notes posted in public places and pamphlets distributed to the small number of people who could actually read them. The news continued to have competitors in the battle for attention, and because of this it continued to flirt with hyperbole. More than a century later, we have gained a fully professionalized PR and information industry, lost every modern illusion about Truth with a capital T, and objectivity has come to mean precisely the opposite: what gets reported, we believe, shouldn’t be determined by the press but by “what’s happening in the world”. The drive to sell (papers, ads, products) is naturally somewhat at odds with the idea of editorial accuracy and measured factual reporting; journalistic standards, libel laws, and industry shaming became common mechanisms to help curb the slide into sensationalism. Yet something happened recently when the news met the internet and began migrating into our pockets: it started losing the battle for our attention. Social media is one of the primary reasons there has been a double-digit drop in newspaper revenues, and why journalism as an industry is in steep decline. It is now how a majority of the world gets its news.
While here in India we witness a proliferation of new digital news agencies, one underestimates how manufactured the messaging and content that appears egalitarian and diverse actually is. On closer examination of the voice and tone of reporting, it is clear that the produced content and the professed politics remain manufactured by a class and caste of persons, ensuring a power hegemony. Today one witnesses investments by the Omidyar Network, the investment firm of eBay founder Pierre Omidyar (eBay being the former parent company of PayPal), a firm pioneering and invested in ‘FinTech’ and digital financial identities. One also sees the Independent and Public-Spirited Media Foundation and its board members invest in multiple networks, yet what remains is a lack of caste inclusion and voice in editorial and reporting. This one can further trace back to the withdrawal of scholarships for minority communities at journalism and media education and training institutions.
The pursuit of digital readership has broken the New Republic — and an entire industry. Silicon Valley has infiltrated the profession; journalism has come to depend unhealthily on the big tech companies, which now supply journalism with an enormous percentage of its audience — and, therefore, a big chunk of its revenue. Dependence generates desperation—a mad, shameless chase to gain clicks through Facebook, a relentless effort to game Google’s algorithms. It leads media outlets to sign terrible deals that look like self-preserving necessities: granting Facebook the right to sell their advertising, or giving Google permission to publish articles directly on its fast-loading servers. In the end, such arrangements simply allow Facebook and Google to hold these companies ever tighter. What makes these deals so terrible is the capriciousness of the tech companies. Quickly moving in a radically different direction may be great for their bottom line, but it is detrimental to the media companies that rely on the platforms. Facebook will decide that its users prefer video to words, or ideologically pleasing propaganda to more objective accounts of events—and so it will de-emphasize the written word or hard news in its users’ feeds. When it makes shifts like this, or when Google tweaks its algorithm, the web traffic flowing to a given media outlet may plummet, with rippling revenue ramifications. The problem isn’t just financial vulnerability, however. It’s also the way tech companies dictate the patterns of work; the way their influence can affect the ethos of an entire profession, lowering standards of quality and eroding ethical protections. Meanwhile, the rapid takeover of traditional publishers’ roles by companies including Facebook, Snapchat, Google, and Twitter shows no sign of slowing, and raises serious questions over how the costs of journalism will be supported.
These companies have evolved beyond their role as distribution channels, and now control what audiences see and who gets paid for their attention, and even what format and type of journalism flourishes.
Technology platforms have become publishers in a short space of time, leaving news organizations confused about their own future. If the speed of convergence continues, more news organizations are likely to cease publishing—distributing, hosting, and monetizing—as a core activity. Competition among platforms to release products for publishers is helping newsrooms reach larger audiences than ever before. But the advantages of each platform are difficult to assess, and the return on investment is inadequate. The loss of branding, the lack of audience data, and the migration of advertising revenue remain key concerns for publishers. The influence of social platforms shapes the journalism itself. By offering incentives to news organizations for particular types of content, such as live video, or by dictating publisher activity through design standards, the platforms are explicitly editorial. The “fake news” revelations of the 2016 election have forced social platforms to take greater responsibility for publishing decisions. However, this is a distraction from the larger issue that the structure and the economics of social platforms incentivize the spread of low-quality content over high-quality material. Journalism with high civic value—journalism that investigates power, or reaches underserved and local communities—is discriminated against by a system that favors scale and shareability. At the end of 2016, battered by the negative publicity for Facebook around “fake news,” Mark Zuckerberg retreated from his rigid position that his creation was “just a technology company,” to acknowledge that it was a “new kind of platform”. Technology companies including Apple, Google, Snapchat, Twitter, and, above all, Facebook have taken on most of the functions of news organizations, becoming key players in the news ecosystem, whether they wanted that role or not. 
The distribution and presentation of information, the monetization of publishing, and the relationship with the audience are all dominated by a handful of platforms. These businesses might care about the health of journalism, but it is not their core purpose. Today, as part of the Facebook Journalism Project, Facebook has begun to test displaying new publisher Trust Indicators, established by the Trust Project, an international consortium of news and digital companies collaborating to build a more trustworthy and trusted press, as part of its ongoing efforts to enhance people’s understanding of the sources and trustworthiness of news on the platform. When Facebook Inc. wants to try something new, one of its first calls is to CNN. CNN was a key partner when Facebook introduced its news-reading app, Paper, in 2014. When the social network shuttered Paper soon after, transmogrifying it into a series of fast-loading News Feed stories called Instant Articles, CNN remained on board. But strain is showing in the relationship. Facebook’s latest pitch to publishers such as CNN is for them to provide a regular stream of TV-quality, edited, original videos that will give Mark Zuckerberg’s company a chance to compete with YouTube to siphon some of the $70 billion pouring into TV ads each year. In exchange, the publishers can share some of the revenue for ads that roll in the middle of the videos. Facebook will control all the ad sales.
It’s getting tougher for CNN and others to view these arrangements as mutually beneficial. “Facebook is about Facebook,” says Andrew Morse, general manager of CNN’s digital operations. “For them, these are experiments, but for the media companies looking to partner with significant commitments, it gets to be a bit of whiplash.” WPP’s chief executive and founder Martin Sorrell said on Tuesday that Google and Facebook wield tremendous power and influence and, like any other media company in the world, should be held legally accountable for the content on their platforms. For instance, the controls on traditional media are much more rigorous than on a blog on Facebook. “They take the position publicly that they are tech companies,” said Sorrell. “But they are not. Technology companies have to step up to the fact that they are media companies, admit it, get on with it and be responsible.”
While platforms rely on algorithms to sort and target content, they have not wanted to invest in human editing, both to avoid the cost and to avoid the perception that human editors would be biased. However, the nuances of journalism require editorial judgment, so platforms will need to reconsider their approach. Greater transparency and accountability are required from platform companies. While news might reach more people than ever before, for the first time the audience has no way of knowing how or why it reaches them, how data collected about them is used, or how their online behavior is being manipulated. And publishers are producing more content than ever, without knowing who it is reaching or how—they are at the mercy of the algorithm. Tech companies have run many experiments with bots, and they are excited about them: “It’s this great, simple experience, and the technology is getting so much better for it: AI’s getting better, big data’s more accessible.” Today bots fill the spaces between platforms — across different platforms, but also filling in the gaps a little between things. A bot could notify you to catch you up on where you left off in a story you were listening to on the train into work.
The Associated Press has become the latest news organization to get into the user-generated content game, announcing on Tuesday the launch of a new service called AP Social Newswire. The new service works with the platform SAM to find, vet and verify content generated by users on social media and elsewhere. AP customers will be able to embed that content into their work. The feed will offer UGC on international and regional coverage as well as trending topics. The ability to track down witnesses, and verify the content they’re sharing online, is critical to covering breaking news and planned events. As the amount of visual material appearing online continues to grow, it becomes increasingly likely that powerful news imagery can be sourced from a member of the public. News organizations have limited resources, so any help they can get to discover essential content — and use their own resources elsewhere — is enormously helpful. Also, verification can be a challenge for a news operation without its own reporting resources in the part of the world where a piece of eyewitness media emerged. AP has that expertise, and very high standards, which are central to what the Social Newswire offers. According to the Reuters Institute’s 2017 Digital News Report, less than half the population (43%) trust the media across the 36 countries surveyed, and almost a third (29%) actively avoid the news, rising to 38% in the United States. Instead of enriching their lives, our work depresses them. And underlying this loss of trust is a perception of media bias driven by polarisation. People cluster to media organisations that fit their beliefs, and dismiss other outlets. The internet, once thought to open the world up to all the information possible and bring people together, has instead drawn people into their own corners.
Now, one of the most confusing efforts to fund journalism in recent memory is inching closer to reality. Civil promises to use blockchain technology to build decentralized marketplaces where readers and journalists work together to fund coverage of topics that interest them, or coverage in the public interest. Readers will support reporters using “CVL” tokens, Civil’s cryptocurrency, giving them a speculative stake in the currency that will — hopefully — increase in value as more people buy in over time. This, Civil hopes, will encourage more people to invest in the marketplaces, creating a self-sustaining system that will help fund more reporting.
Facebook in (y)our life:
Regional newspapers continue to struggle and local TV often falters, sometimes before it’s even begun, but this emerging breed of news production seems to be thriving. Some are set up as news sites while others are blogs originally started to address a particular local issue, like a threat to close a local leisure center or a specific planning concern. They then grow to cover different topics and become the go-to site for people to find out what is happening in their area. Few have much funding and many are precariously organised, but sites like these are starting to become powerful tools for people who want to hold power to account. As cuts to local services become more widespread and the legislative climate shifts the ownership and delivery of public amenities into the private or community domain, the citizens of such communities are revealed by hyperlocal news media to be ready and able to articulate their concerns online and challenge those in power in the way that the local press had long thought was its sole privilege. Recent academic studies of online news consumers have taken a deeper, ethnographic approach to understanding reader behavior. Their findings show that counting story clicks is a misguided way to determine reader preferences. “People engage in online user practices that do not necessitate clicks but do express interest in news, such as ‘checking’, ‘monitoring’, ‘scanning,’ and ‘snacking,’” according to one study. This behavior is seen clearly when clicks are compared to the amount of time spent on news sites. In one study, local stories represented 9 percent of story clicks by readers. But measured in time spent, those stories accounted for 20 percent of their time on the site, about the same time readers spend on local news in print. For its newest survey, Pew contacted 2,000 adult Americans over the course of a week last year.
They were asked twice a day whether they read news online within the past two hours and, if so, how they had come across the news and what outlet had produced it. Most of the time, people in the survey said they got their news either by going directly to a news organisation’s website or app, used 36 per cent of the time, or through social media, used 35 per cent of the time. Overall, 56 per cent of the time, people who followed a link to a news story within the past two hours could remember the name of the outlet. But they were much more likely to recall the source when the link came from a news organisation, with 78 per cent remembering the outlet’s name, than when it came through social media (52 per cent) or in an email or text from a friend (50 per cent).
The biggest player in social media is Facebook, and the biggest part of Facebook is the News Feed. The algorithm behind the News Feed is regularly tweaked and historically opaque — it is one of the most significant and influential pieces of code ever written. You can think of the algorithm as the News Feed Editor. (Twitter, Snapchat and YouTube all have their own editorial algorithms, but we’re focusing on Facebook here because of its sheer dominance.) The News Feed Editor is a robot editor, and it is far better at capturing attention than normal human editors. It can predict what you’ll click on better than anyone you know. It’s what professor Pablo Boczkowski of Northwestern has called “the greatest editor in the history of humanity”. This is the story of how one metric has changed the way you see the world. The world feels more dangerous. Our streets seem less safe. The assault on our values is constant. The threats feel real. The enemy is out there — just check your feed.
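The ‘robot editor’ described above can be caricatured in a few lines of code. This is a deliberately toy sketch (every function name and weight here is invented for illustration; Facebook’s actual ranking model is proprietary and vastly more complex), but it shows the essential editorial move: ordering a feed purely by predicted engagement, not by civic value.

```python
# A toy illustration of engagement-driven feed ranking: each post is
# scored by crude engagement proxies, and the "editor" simply shows
# the highest-scoring posts first. All weights are invented.

def predicted_engagement(post):
    """Score a post by predicted engagement (shares weigh most)."""
    return (3.0 * post["shares"]
            + 2.0 * post["comments"]
            + 1.0 * post["likes"]
            + (5.0 if post["is_video"] else 0.0))  # formats can be favoured

def rank_feed(posts):
    """Order posts purely by predicted engagement, highest first."""
    return sorted(posts, key=predicted_engagement, reverse=True)

posts = [
    {"id": "local-council-report", "likes": 40, "comments": 5, "shares": 2, "is_video": False},
    {"id": "outrage-clip", "likes": 30, "comments": 25, "shares": 30, "is_video": True},
]

for post in rank_feed(posts):
    print(post["id"])  # the polarizing video outranks the civic story
```

Note that changing a single weight (say, the video bonus) silently reshapes what every reader sees; this is precisely the capriciousness that publishers complain about when Facebook “decides that its users prefer video to words”.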
Facebook is what became of the ‘hyper-local’ notion. It just turned out that the neighborhood wasn’t a geographic one but a socially connected one. Facebook provided a platform whereby individuals became reporters, editors, and publishers. In this regard, Facebook is delivering on the first task of the news organization. Some Facebook friends might express opinions, but more often they are reporting facts. What is more, because these facts are reported to social connections, they are actually accurate. Nothing binds one to the truth more than the accountability of an ongoing personal relationship. Do you ever hear it exclaimed, “I heard on Facebook that your train broke down, and that turned out to be an exaggeration”? Facebook knows this. The company even calls it a ‘News Feed’. And it is peppered with other news stories coming from mainstream outlets your friends have shared. You can read it like a newspaper (postpost.com) or a magazine (Flipboard for the iPad). Even the games, jokes, surveys, and other attention-grabbing activities on Facebook have a long provenance in newspapers, which are full of games (crosswords and Sudoku), jokes (the comics), and polls. These are a long-standing part of the news experience. While everyone acknowledges the importance of local news, no one wants to admit that news organizations are helping to kill it.
Facebook now reaches a quarter of the world’s population. Two billion people. It’s a mind-boggling number, and it’s growing. So are questions about how Facebook will protect privacy or abet authoritarian oversight. Further, Facebook’s collection of data makes it one of the most influential organisations in the world. Today it is also the largest network of ‘stringers’, the backbone of the NEWS and MEDIA industry.
Facebook has been an indispensable tool of civic engagement, with candidates and elected officials from mayor to prime minister using the platform to communicate directly with their constituents, and with grassroots groups like Black Lives Matter relying on it to organize. The company says it offers the same tools and services to all candidates and governments regardless of political affiliation, and even to civil society groups that may have a lesser voice. Facebook says it provides advice on how best to use its tools, not strategic advice about what to say.
When tech guru Will Cathcart took the stage at F8 to talk about news, the audience was packed. Some followed along on Twitter. Others streamed the session online. Journalists, developers, and media types all clamored to catch a glimpse of ‘Creating Value for News Publishers and Readers on Facebook’ — value that has become the most coveted asset in the news business as Facebook becomes a primary way the public finds and shares news. As Cathcart kicked off the session, he took the captive audience to a Syrian refugee camp via Facebook’s new, innovative, and immersive 360 video experience. He didn’t say much about where the camp was (“I believe in Greece?”), nor anything about the camp situation. He didn’t offer the audio of the journalist describing the scene. The refugee camp is a placeholder, one so overused, in fact, that this was the second time that day Facebook execs had waved their hands about the importance of media before playing a video clip of refugees. It could have been a tour of the White House, the Boston bombing, Coachella. It could have been anything to Facebook. It’s ‘content’. It’s a commodity. What matters to Facebook is the product it’s selling—and the buyers are you and the news industry. What Facebook is selling you is pretty simple. It’s selling an experience, part of which includes news. That experience is dependent on content creators — you know, journalists and newsrooms — who come up with ideas, use their own resources to realize them, and then put them out into the world. All of which takes time, money, and skill from its ‘media partners’. As traditional news organizations faced the maelstrom of the digital revolution, many noticed that it was not just the stuff that editors had deemed socially important that was drawing in readers. Tailored, specialized news — the style and sport sections — that appealed to specific demographics pulled attention and therefore advertiser interest.
Some hypothesized that tailored content could go further. Local newspapers, for example, could provide hyper-local content of interest to neighborhoods, like newsletters but with ads.
In 2012, The Times-Picayune, New Orleans’ oldest and largest daily newspaper, made a strategic shift to become digital first. This meant restructuring the newsroom, changing workflows, and thinking about how to tell stories directly on Facebook. With the extensive reach of NOLA.com, the digital home of the paper, they are now the largest media company in Louisiana.
The Evolution of a Story From Facebook Live to Print: A tornado ripped through New Orleans on the morning of February 7, 2017. Diana Samuels, the editor in charge of severe weather coverage, alerted the newsroom and 25 people across their reporting, photography, editing and social teams pitched in. By the end of the day they had over 56,000 interactions, 4 million page views, and 3.3 million video views on Facebook.
Nine of the 10 ‘most trusted’ news sources that Facebook uses are old media. Facebook just posted its current guidelines; Buzzfeed News has replaced Yahoo on the ‘most trusted’ list and is now the only new media outlet on Facebook’s top 10 list. Since a Gizmodo report earlier this week that said the editors of Facebook’s “trending” section have killed important conservative news stories and put important liberal ones at the top of the section, lots of people have been wondering: Do Facebook’s journalists actually suppress conservative news? Does Facebook attempt to influence how news spreads throughout its site, and thus the internet as a whole?
The Facebook Journalism Project recommends using its platform for dissemination in three ways:
1.) New storytelling formats. As people’s preferences for consuming news evolve, it’s critical to work together on figuring out which new storytelling formats will help people be more informed. We want to work with partners to evolve our current formats — Live, 360, Instant Articles, etc.
- Local news. Local news is the starting place for great journalism — it brings communities together around issues that are closest to home. We’re interested in exploring what we can build together with our partners to support local news and promote independent media.
- Emerging business models. One key area of collaboration is existing and emerging business models. Many of our partners have placed a renewed emphasis on growing their subscription funnel, and we’ve already begun exploring ways we can support these efforts.
- Hackathons. One of our longest-standing traditions at Facebook is hackathons, where our engineers take a break from their day-to-day work to explore new problems and technical solutions.
- Continuing to listen. We meet regularly with our media and publishing partners — and as part of the Facebook Journalism Project we’ll make an even more concerted effort to do so, with new rounds of meetings with publishers in the US and Europe, as a start in the months ahead.
2.) Training & Tools for Journalists. In addition to the newsroom training we currently offer, we’re now conducting a series of e-learning courses on Facebook products, tools and services for journalists.
3.) Training & Tools for Everyone
As we seek to support journalism, we will also be working on new ways to help give people information so they can make smart choices about the news they read — and have meaningful conversations about what they care about.
Towards building audience traction and engagement Facebook recommends: Share Breaking News Updates, Use a Conversational Tone and Include Analysis, Start Conversations by Asking Questions and Responding, Share Stories Visually with Photos and Videos to Grab Users’ Attention, Reward Your Audience with Exclusive Content, Use Page Insights to Learn What Content Your Audience Cares Most About and Iterate, Target Posts to Bring Your Message to the Right Group, Use Engaging Thumbnails for Link Stories, Enable Your Community to Participate Through Crowdsourcing Content and Commentary, Vary Your Post Type – Users Don’t Engage the Same Way with Every Post, Use Pages Manager App to Update on Mobile, Optimize Your Page for Graph Search and Mobile.
Share Lab looked ‘under the bonnet’ at the tech giant’s algorithms and connections to better understand the social structure and power relations within the company. “If Facebook were a country, it would be bigger than China”, says Share Lab’s Vladan Joler, whose day job is as a professor at Serbia’s Novi Sad University. He reels off the familiar, but still staggering, numbers: the barely teenage Silicon Valley firm stores some 300 petabytes of data, boasts almost two billion users, and raked in almost $28bn (£22bn) in revenues in 2016 alone. And yet, Mr. Joler argues, we know next to nothing about what goes on under the bonnet – despite the fact that we, as users, are providing most of the fuel – for free. “All of us, when we are uploading something, when we are tagging people, when we are commenting, we are basically working for Facebook”, he says.
Facebook’s relationship with governments remains complicated. Facebook has come under fire in the European Union, including for the spread of Islamic extremism on its network. The company just issued its annual transparency report explaining that it will only provide user data to governments if a request is legally sufficient, and will push back in court if it is not. Despite Facebook’s desire to eventually operate in China and Zuckerberg’s flirtation with the country’s leaders, it is still unwilling to compromise as much as the government wants it to in order to enter. Facebook’s politics unit is led from Washington by Katie Harbath, a former Republican digital strategist who worked on former New York Mayor Rudy Giuliani’s 2008 presidential campaign. Since Facebook hired Harbath three years later, her team has traveled the globe helping political clients use the company’s powerful digital tools. “It’s not Facebook’s job, in my opinion, to be so close to any election campaign”, said Elizabeth Linder, who started and ran the Facebook politics unit’s Europe, Middle East and Africa efforts until 2016. Linder had originally been excited about the company’s potential to be “extraordinarily useful for the world’s leaders — but also the global citizenry”. She said she decided to leave the company in part because she grew uncomfortable with what she saw as an increased emphasis on electioneering and campaigns.
India is arguably Facebook’s most important market, with the nation recently edging out the U.S. as the company’s biggest. The number of users there is growing twice as fast as in the U.S. And that doesn’t even count the 200 million people who use the company’s WhatsApp messaging service in India, more than anywhere else on the globe. By the time of India’s 2014 elections, Facebook had for months been working with several campaigns. Modi, who belongs to the nationalist Bharatiya Janata Party, relied heavily on Facebook and WhatsApp to recruit volunteers who in turn spread his message on social media. Since his election, Modi’s Facebook followers have risen to 43 million, almost twice Trump’s count. Within weeks of Modi’s election, Zuckerberg and Chief Operating Officer Sheryl Sandberg both visited the nation as it was rolling out a critical free internet service that the government later curbed. Harbath and her team have also traveled there, offering a series of workshops and sessions that have trained more than 6,000 government officials. Facebook India has confirmed it has started prompting users to provide their Aadhaar details if they want to create a new account. The social media giant was spotted asking for Aadhaar details from a user, who then posted a screenshot of it on Reddit. In the screenshot, Facebook is seen asking the user for his or her first and last name, which should be the same as the name on the user’s Aadhaar card.
An initiative by a neighbouring government, China, reveals the planned launch of its Social Credit System in 2020. The aim? To judge the trustworthiness – or otherwise – of its 1.3 billion residents. On June 14, 2014, the State Council of China published an ominous-sounding document called “Planning Outline for the Construction of a Social Credit System”. In the way of Chinese policy documents, it was a lengthy and rather dry affair, but it contained a radical idea: what if there was a national trust score that rated the kind of citizen you were? A futuristic vision of Big Brother out of control? No, it is already getting underway, with the government developing the Social Credit System (SCS) to rate the trustworthiness of its citizens. The Chinese government is pitching the system as a desirable way to measure and enhance “trust” nationwide and to build a culture of “sincerity”. As the policy states, “It will forge a public opinion environment where keeping trust is glorious.”
Similarly, today we see CrowdTangle, a four-year-old tool that publishers use to track how content spreads around the web. CrowdTangle shows users how their content is performing on different platforms. It also shows what competitors and others in the industry are doing. CrowdTangle recently announced a partnership with Reddit which offers 50 state lists with subreddits to follow. It also tracks what CrowdTangle CEO Brandon Silverman calls ‘first-party sources’, including police departments, school districts and elected officials, and what they are posting on social media. “So we take what was otherwise a really hard, manual process around tracking various accounts and make it super easy”, he says. A Chrome extension from CrowdTangle lets people see where content is being shared across platforms and by whom, Silverman said. CrowdTangle has helped publishers reach larger and more engaged audiences, but it has also been accused of contributing to the “eerie sameness” of digital news. CrowdTangle joins Facebook’s existing publisher analytics tools, which include Signal (for discovering which news stories are trending) and Page Insights (which offers analytics tools for pages). “Thanks to this partnership, our platform is only going to get more powerful”, CrowdTangle’s founders said in a blog post.
Prior to the 2016 US presidential election, Facebook was criticized for meddling too much in the news realm, populating news articles through its Trending Stories sidebar and through the posts and shares of Facebook users. On February 16, Zuckerberg posted a nearly 6,000-word letter to the Facebook community titled “Building Global Community”, in which he encouraged the creation of supportive, safe, informed, civically engaged and inclusive communities – all through social media. “Our job at Facebook is to help people make the greatest positive impact while mitigating areas where technology and social media can contribute to divisiveness and isolation”, Zuckerberg said. “Facebook is a work in progress, and we are dedicated to learning and improving”. It looks like Facebook will soon emerge as a real news media outlet, but with the twist of social media and user-produced content, rather than content produced by journalists and editors. Facebook is now getting ready to give video producers much more flexibility with their live broadcasts. The company appears to be testing a web-based ‘Live Video Producer’ tool that would allow you to control footage from multiple cameras in one stream. Further, the design offers the ability to add multiple inputs, as well as to choose different formats for displaying the videos, such as a split-screen view, dividing into quarters, or a picture-in-picture mode. Based on the images, it should be able to accept at least four inputs. Facebook is also planning two tiers of video entertainment: scripted shows with episodes lasting 20 to 30 minutes, which it will own; and shorter scripted and unscripted shows with episodes lasting about 5 to 10 minutes, which Facebook will not own, according to the sources.
In late May the Guardian released the Facebook Files, leaked internal documents revealing how the company moderates content. Many of us have long called for more transparency around Facebook’s content moderation so we can better understand gender-based violence that happens on the platform and provide feedback. Although Facebook has made some improvements, these documents confirm that it’s often one step forward, one step back, as the platform continues to censor women’s agency, especially women of colour and especially in relation to activism, while letting harassment flourish.
Starting in January of last year, the Chicago Tribune began to see, anecdotally, a fairly significant change in post reach. The Tribune has a fairly stable and predictable audience: around half a million fans at the end of March, with slow but steady growth over the past year. Most of its Facebook posts fell into the 25,000 to 50,000 reach range — with a few big successes and a few spectacular failures each day, usually based on the quality of the content or the quality and creativity of the share. Facebook’s news feed algorithm changes have been part of publishing reality for many years. But to Matt Karolian, director of audience engagement at The Boston Globe, “last month was probably the worst we’ve had in reach in about a year. The fact everyone else is seeing it is a little bit troubling.” Aysha Khan said Facebook reach has also been sliding at the Religion News Service, where she is social media editor. “Reach spiked in the summer, and we started hitting 15, 25K reach on bigger posts that were polarizing,” Khan said. “It wasn’t just political posts, but any kind of interviews. Anything that had potential to get a big reaction got a big reaction. But then we noticed that kind of stopped, and by January, it was just gone. Now we’re worse off than we were to start with.”
The Silencing and Blocking of Un-Fair Web:
On 17th September 2017, Un-Fair received a private Facebook Messenger ‘direct message’ of an image – a screenshot of a ‘WhatsApp’ message (WhatsApp is a Facebook product). The message was a death threat, following the murder of the Bangalore journalist Gauri Lankesh. While maintaining the student’s anonymity and after verifying the message’s authenticity, Un-Fair Web began posting about this on 18th September 2017. It was after this initial post on our Facebook Timeline that Un-Fair Web began receiving similar image-messages from various other journalists, activists and students. Various concerned people began tracking these cell phone numbers via TrueCaller and sharing their findings on the Timeline post. Slowly, an aggregation of voices towards a joint complaint and FIR through the Cyber Investigative Cell, TRAI, etc… (web-links were shared) was taking shape.
Around 10:00 PM on the 18th, Un-Fair Web was blocked and expunged by Facebook. All posts, data and messages from Un-Fair Web, whether as private ‘Direct Messages’ or as ‘comments’ on Facebook walls, were censored, made invisible and erased. As then requested by Facebook, persons connected with the project submitted PAN card and passport scans. Yet Un-Fair Web, and the data collected through public sharing, remained inaccessible. It is only through various back-channel emails and conversations with individuals (some of whom work at Facebook), and solely through their personal efforts, that Un-Fair Web was given access to its identity to download our Facebook legacy (our Timeline posts, images, videos, texts, media etc…) and transition into a Page or Group. This we have done, with assistance through various technical hitches and blocks, and today we have our archive data with us. We thank Inji Penu for her invaluable assistance and boundless spirit in bringing us back like Lazarus, as well as Shruti Moghe, Antigone Davis, Karuna Nain and Snehashish Ghosh for all their behind-the-curtain work. We thank all those on various email threads and social platforms writing in to Facebook asking for our reinstatement (thank you Sarah and Lawrence for beginning this); we thank those few journalists who understand what is at stake and gave support through their voices on their platforms; and we thank the various community members who continue their support and came forward in solidarity.
While we at first believed that this attack was by a ‘right’ polarised, bigoted group of persons, today we realise and are aware that the ‘reporting’ to Facebook against Un-Fair Web, and its consequent blocking and ban, was made possible by our politically liberal, centrist and left (so-called ideologically leaning) acquaintances. Most of our near familiars question our intentions – whether or not we are ‘black’, ‘dalit’, ‘bahujan’, ‘adivasi’, ‘chinki’, ‘SC’, ‘ST’ or ‘OBC’ – as we constantly find ourselves being blackballed by friends, society and institutions for sharing and professing our views. This continuous societal ostracising today affects our means and modes of livelihood and daily sustenance. As a result, immense pressure is constantly applied to make the work and voice copyrightable and owned – taken away from a community of people and silenced. Continuously we find all that a community has spoken, and all its knowledge – ERASED. Meanwhile our friends, journalists, artists, performers and curators remain silent, claiming to be objective and not political. What we say to you is this: “with every ‘WORD’ you write and profess, there always is a politic, as there is in all that you do not say; by constantly usurping positions of power, platforms of space and voice – you erase, obliterate and annihilate in your quest for power and privilege.”
A year on from our birth on Facebook, we find ourselves still existent as an Identity, with the sword of Damocles above us. We now also exist as a Public Facebook Group, ‘#UnFair’, administered by a Facebook Page, ‘Un-Fair Web’. This arrangement allows us to maintain the anonymity of various individuals. Further, it allows for messaging between the various members and UnFair, and for the continued thread of conversations on a Timeline as a record and archive.
Un-Fair Web has been and will continue to be a community platform and a Voice for people whom mainstream media and citizens of this country reject, obliterate and socially discriminate against, daily, through every waking and conscious moment, 24×7, 365 days of the year. This happens through the language and representation of events by governments, institutions and media and their persons of employ; through the beating up of YANNICK NIHANGAZA, who lay in a coma for two years before eventually succumbing to his injuries while his assailants went scot-free; the stripping of an innocent Tanzanian girl in Bangalore; the beating to death of a boy (OLIVER CONGO) in Kisan Gardh, Delhi; the raiding of an entire community in the middle of the night by an elected government that labelled them prostitutes and drug dealers, at Khirkhee Extension; further, today, this means midnight knocking on their doors by men in uniform asking for papers, in Hyderabad, Noida and Greater Noida; the seemingly frivolous activity of smacking people on the back of their heads with cricket bats while riding down the road on a motorcycle in Jaipur; calling them cannibals and murderers, confiscating their passports and publicly beating them across the entire city of Greater Noida. These are but a few; add to these the various instances of name-calling, solicitations, Metro Train incidents, etc., as one walks down the road to buy over-priced food, transport and rent – all while telling Africa about this wonderful thing called the Green Revolution of Punjab that can feed the world.
“skin is foremost in gathering, processing and assimilating knowledge for the body. All forms and acts of prejudice and discrimination arising from biases of skin – are events of violence that deconstruct the body and the processes of knowledge making that discontinues learning, evolution and growth”
Read More on the shutting down of ‘Un-Fair Web’:
Whatsapp Death Threats In The Wake Of Gauri Lankesh Murder
September 19, 2017 / Raiot Collective
Facebook Blocks Account Which Talks About Issues Faced by Marginalised Communities in India
Un-fair Web, a Facebook account, was blocked after it started collecting and sharing information about senders of death threats to journalists, activists, and students. – Surangya Kaur
September 20, 2017 / NewsClick
Facebook’s community censors curb free speech
Accounts that are satirical, expose hate speech, or are totally harmless are being blocked for ‘violating’ Facebook guidelines. – Geeta Seeshu
November 21, 2017 / The Hoot
Read More on Facebook Blocking:
Facebook Blocks Cuba’s Mariela Castro After Post Urging Hurricane Aid
Facebook later apologized for the action, claiming that an employee had eliminated her account in error.
Facebook blocked the profile of Mariela Castro, director of the Cuban National Center for Sexual Education and daughter of President Raul Castro, after she published information about a bank account created to receive aid after the destruction caused by Hurricane Irma in Cuba.
Man posts Kamal Ka Phool Hamari Bhool, Facebook blocks him for 30 days and deletes post
A user by the name of anasinbox has been banned by Facebook for 30 days for posting a status with the words ‘Kamal ka phool hamari bhool’ (‘the lotus flower was our mistake’ – the lotus being the BJP’s election symbol). Alongside the status, the user shared a photo of a trader’s cash receipt which had the same words, ‘kamal ka phool hamari bhool’, written at the bottom.
All the user did was highlight the fact that the receipt in question had these words printed on it, and what they could possibly mean in the context of the current government. The Facebook status read as follows – “Kamal ka phool hamari bhool. Vyapari apne cash memo par print karva kar janta se bata rahe hain ki BJP ko vote dekar galti ho gayi” (“The lotus flower was our mistake. By printing it on their cash memos, traders are telling the public that voting for the BJP was a mistake”).
Is Facebook really blocking criticism of the Indian government, BJP and Hindutva groups?
Days after it blocked a user for posting ‘kamal ka phool, hamari bhool’, the company says its actions are based on its Community Standards. – Abhishek Dey
Facebook no friend to American Indian names
Wrongly banned from the social networking Web site Facebook for registering under a false name, she was unable to get in touch with dozens of friends. In the middle of planning an upcoming trip, she suddenly lost touch with those she was to meet.
But the name she’d used was authentic, and though Facebook administrators eventually reinstated her account, some are concerned that the site is unfairly shutting off access to users with American Indian surnames.
Kills The Enemy’s experience has spawned a group of 1,000 Facebook users wondering why some with Native surnames must jump through hoops and endure accusations of fraud while the hundreds of users claiming to be named “Bart Simpson” do not.
Facebook Silences Rohingya Reports of Ethnic Cleansing
The social network says it’s committed to helping the world ‘share their stories.’ But when people from Burma’s oppressed minority post, their stories have a habit of disappearing.
Rohingya activists—in Burma and in Western countries—tell The Daily Beast that Facebook has been removing their posts documenting the ethnic cleansing of Rohingya people in Burma (also known as Myanmar). They said their accounts are frequently suspended or taken down.
The Rohingya people are a Muslim ethnic minority group in Burma. They face extraordinary persecution and violence from the Burmese military; military personnel torch villages, murder refugees, and force hundreds of thousands of people to flee their homes.
Human rights watchdogs say the persecution has intensified in recent months, and a top UN official described a renewed offensive by the Burmese military as “a textbook example of ethnic cleansing.”
Facebook Says It Is Deleting Accounts at the Direction of the U.S. and Israeli Governments
IN SEPTEMBER OF last year, we noted that Facebook representatives were meeting with the Israeli government to determine which Facebook accounts of Palestinians should be deleted on the ground that they constituted “incitement.” The meetings — called for and presided over by one of the most extremist and authoritarian Israeli officials, pro-settlement Justice Minister Ayelet Shaked — came after Israel threatened Facebook that its failure to voluntarily comply with Israeli deletion orders would result in the enactment of laws requiring Facebook to do so, upon pain of being severely fined or even blocked in the country.
What makes this censorship particularly consequential is that “96 percent of Palestinians said their primary use of Facebook was for following news.” That means that Israeli officials have virtually unfettered control over a key communications forum of Palestinians.
Glenn Greenwald: Is Facebook Operating as an Arm of the Israeli State by Removing Palestinian Posts?
Facebook is being accused of censoring Palestinian activists who protest the Israeli occupation. This comes as Israeli Justice Minister Ayelet Shaked reportedly said in December that Tel Aviv had submitted 158 requests to Facebook over the previous four months asking it to remove content it deemed “incitement,” and said Facebook had granted 95 percent of the requests. We speak with Pulitzer Prize winner Glenn Greenwald about his new report for The Intercept headlined “Facebook Says It Is Deleting Accounts at the Direction of the U.S. and Israeli Governments.”
Inji Pennu’s spreadsheet documentation of Facebook bans.
Read MORE REFERENCES:
The Fine Art of Making the Invisible Visible
I can see clearly now
Making Visible What Is Invisible
Mobilisation for digital rights
FCC Votes To Begin Rollback Of Net Neutrality Regulations
Google, Facebook, Verizon and net neutrality: what does it mean?
If Portugal is a net neutrality nightmare, we’re already living in it
Without net neutrality in Portugal, mobile internet is bundled like a cable package.
What an FCC rollback of net neutrality may mean for you
The War on the Freedom of Information Act
A conservative group is resisting congressional efforts to kneecap FOIA.
RTI debate: Don’t scare citizens
The right to reject, deny, obfuscate
Proposed Changes to RTI Act Will Complicate Seeking Information from Government.
Wages for Facebook
The End of Ownership
While EU Copyright Protests Mount, the Proposals Get Even Worse
The link tax threatens the internet as we know it
The world Creative Commons is fighting for
Creative Commons – An answer to the copyright debate?
Attorney Eric J. Sinrod says the group is picking up important allies as it seeks to revolutionize traditional copyright law.
ASCAP’s attack on Creative Commons
Putting Presidential Debates in the Creative Commons
A Review of Creative Commons and Science Commons
FOSS v Proprietary? – A debate between two geeks?
FOSS vs. open source as an American debate
Open source, in contrast to FOSS, accepts the idea that people might build proprietary extensions to open source programs, and that the obligation seen by Stallman, what I sometimes call the Fourth Freedom of open source, need not apply.
Free and Open Source Software is superior to proprietary software.
How does Facebook suggest potential friends? Not location data – not now
Social media giant agreed on Tuesday that location data was ‘one of the factors’ it used but on Wednesday said no, not any more.
‘People You May Know’ feature can be really creepy. How does it work?
After a few odd encounters, we decided to get some answers from Facebook.
F-Shaped Pattern of Reading on the Web: Misunderstood, But Still Relevant (Even on Mobile)
Eyetracking research shows that people scan webpages and phone screens in various patterns, one of them being the shape of the letter F. Eleven years after discovering this pattern, we revisit what it means today.
Mouse vs. Fingers as Input Device
Top 5 Eye Tracking Research Articles
Eye-tracking study: 5 key learnings for data designers everywhere
7 Marketing Lessons from Eye-Tracking Studies
In dramatic statement, European leaders call for ‘immediate’ open access to all scientific papers by 2020
Apple joins Amazon, Facebook, Google, IBM and Microsoft in AI initiative
Capitalism the Apple Way vs. Capitalism the Google Way
Whichever company’s vision wins out will shape the future of the economy.
Rise of the racist robots – how AI is learning all our worst impulses
There is a saying in computer science: garbage in, garbage out. When we feed machines data that reflects our prejudices, they mimic them – from antisemitic chatbots to racially biased software. Does a horrifying future await people forced to live at the mercy of algorithms?
There’s software used across the country to predict future criminals. And it’s biased against blacks.
AI programs exhibit racial and gender biases, research reveals
Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say
How AI Learns To Be Sexist And Racist
AI robots are sexist and racist, experts warn
Turns Out Algorithms Are Racist
Bias In, Bias Out: How AI Can Become Racist
Artificial intelligence, meant to be completely unbiased and objective in its decision making, could prove to hold the same prejudices as humans.
What machines can tell from your face
Life in the age of facial recognition
New AI can guess whether you’re gay or straight from a photograph
An algorithm deduced the sexuality of people on a dating site with up to 91% accuracy, raising tricky ethical questions
Facebook Shuts Down AI Robots After They Invent Their Own Language
Facebook AI researcher slams ‘irresponsible’ reports about smart bot experiment
The research that prompted dramatized reports in the past few days came out in June.
Why Do Facebook’s Algorithms Keep Abetting Racism?
Facebook Has Seized the Media, And that’s Bad News for Everyone But Facebook
Facebook Is the Largest News Organization Ever
Facebook is Becoming a News Organization
Readers mistake Facebook for a news outlet
How Facebook’s tentacles reach further than you think
Joint Statement On Facebook’s Internal Guidelines for Content Moderation
The Facebook Files
Is the quest for profits and clicks killing local news?
Since Facebook made Crowdtangle free, more than 150 local newsrooms have adopted it
Facebook’s algorithm isn’t surfacing one-third of our posts. And it’s getting worse
Facebook tests Live Video Producer Tool with multi-camera support and GFX features.
Publishers are seeing another big decline in reach on Facebook
Can Facebook Fix its Own Worst Bug
How 4 news organizations are using Facebook Live to reach broader audiences
How The Times-Picayune Became a Digital-First Newsroom and Uses Facebook to Break News
Facebook signs BuzzFeed, Vox, others for original video shows – sources
Buzzfeed Is the Only New Media Organization on Facebook’s ‘Most Trusted’ List
Facebook’s Effect On Politics — And Our Lives
How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda
Big data meets Big Brother as China moves to rate its citizens
Facebook is a bigger threat to privacy than is Aadhaar, says tech entrepreneur Vivek Wadhwa.
Facebook starts showing Aadhaar prompts to new users.
Facebook buys CrowdTangle, the tool publishers use to win the internet
Understanding how news goes viral
Introducing: The Facebook Journalism Project
12 Best Practices for Media Companies Using Facebook Pages
Launching New Trust Indicators From the Trust Project for News on Facebook.
Media Companies Are Getting Sick of Facebook
Civil, the blockchain-based journalism marketplace, is building its first batch of publications
‘Hold Google, Facebook accountable for content’
These are the bots powering Jeff Bezos’ Washington Post efforts to build a modern digital newspaper
How Silicon Valley Reengineered Journalism
When Silicon Valley Took Over Journalism
As traditional media falters, hyperlocal news is on the up
This Is How Your Fear and Outrage Are Being Sold for Profit
Why objective journalism is a misleading and dangerous illusion.
Who owns your media?
RIL takeover of Network 18: Why do the Indian media not discuss media ownership?
Five reasons why media monopolies flourish in India
The laws that could prevent such monopolies are either limited or absent.
Indian Journalism Under Increasing Political Control
How Twitter is Being Gamed to Feed Misinformation
What Is The Caste of Indian Media? No Surprises!
With AP Social Newswire, The Associated Press makes a foray into user-generated content.
We Broke the News Media, How Can we Fix Them?
Journalism as Genocide
Digital identity for individuals
The Rise and Potential Impact of Digital Identities
The next revolution will be reclaiming your digital identity
Why The Shared Economy Demands Digital Identities
Moving From Static Identity To Digital Identity
Digital Identity: What it is, why it matters and the impact it will have
Digital identity trends – 5 forces that are shaping 2017
Do you consider your digital identity a separate self or is it identical to your real-world self?
Take Back Our Media