Release: Developing Open Source is a Political Act

After the "Zettelkasten Manifesto", a new video is headed your way. This time we feature Cyberpunk, artificial intelligence, the power of neural networks and lots of political theory. Enjoy!

Posted in Journal, Releases on October 26, 2019

(The full transcription of the video can be found below.)

In February 2019, I tweeted a short sentence: "Developing Open Source Software is a political act." Back then, I did not think too much about it; it simply felt right. After all, I am politically active, and developing Zettlr as an Open Source alternative to proprietary Markdown editors just seemed right. But ever since that tweet went out into the wild, I have been asking myself: Why exactly is developing FOSS (Free and Open Source Software) a political act? After all, the app itself makes no political statements; it's an empty canvas for your ideas that tries to get out of the way as much as possible. Luckily, being a political scientist, I was able to develop some ideas to underpin this statement.

Ever since I began programming for a significant part of my days in 2017, I have been connecting programming paradigms to political ideas, which ultimately led me to the above-mentioned statement. But from February on, there was still a long way to go towards the video essay I'm releasing today. Before I could write the script, I needed the same ingredients that go into a good paper: an initial idea, a hook to start with; a line of argument that would lead to the conclusion; and a strong connection between political theory and computers. It took until September 2019 before I had the final idea of how to begin: with the conclusion of the paper "The Extended Mind", published by Andy Clark and David Chalmers in 1998. From there, the rest of the argument followed logically, and writing the rest of the script was a piece of cake.

In general, the video is divided into five parts: After an introduction, in which the ideas of Clark and Chalmers are elaborated, a part on reliability and availability of software and services follows. Then, a short part on platform capitalism sheds light on the next logical step of software capitalism, which is at this very moment underway. The last main part then deals with the political and legal implications of current developments in the software industry and why software is able to introduce some legal monstrosities into political systems, before the video concludes with a wrap-up of the ideas presented and some thoughts on why developing Open Source software is a necessary developmental step in the 21st century that we should take boldly.

But enough text for now; the video is long enough in and of itself, so have a look!

All the best,

Hendrik


Full Transcript

Intro: The Extended Mind

1998 was an eventful year. With the Good Friday Agreement in Northern Ireland and the official breakup of the Red Army Faction, it saw the end of an era of European terrorism. At the same time, the bombings of the US embassies in Dar es Salaam and Nairobi showed that transnational terrorism was on the rise. Meanwhile, the Lewinsky affair in the United States brought the Clinton presidency into a precarious position. In June, Microsoft released Windows 98, the nightmare which is still haunting computers to this day. Shortly thereafter, Google was founded, initiating the era of the internet corporation. But there was another event, not visible in the media at all, which nonetheless had a huge impact: the philosophers Andy Clark and David Chalmers published a paper titled "The Extended Mind". While the paper is known to few outside philosophy and cognitive science, it has been widely cited and highly influential.

Clark and Chalmers embarked on an endeavour to answer a highly philosophical question: Where does the mind stop, and where does the outer world begin? This question is still hotly debated among philosophers, but they sought a very practical answer to it.

"Where does the mind stop and the rest of the world begin? […] Some accept the demarcations of skin and skull, and say that what is outside the body is outside the mind. Others are impressed by arguments suggesting that the meaning of our words "just ain't in the head", and hold that this externalism about meaning carries over into an externalism about mind." (Clark & Chalmers, 1998)

Andy Clark and David Chalmers propose another approach in their paper:

"an active externalism, based on the active role of the environment in driving cognitive processes." (Clark & Chalmers, 1998)

Our surroundings are a part of our mind. The mind, for them, is not confined to the interior of our brain. We make use of our environment every time we perform a conscious action. They use an example comparing the process of finding an address by simply remembering it and by looking it up. While these are clearly different processes, both have the same function on a conscious level. Both serve the end of finding the correct address; only in one instance the information is retrieved from within the brain, while in the other it is stored externally. They argue that when a person with Alzheimer's starts to take notes, their notebook effectively becomes their spare memory.

Let us perform a simple experiment to make this point more clear: Pause this video for a moment, open a private browser window, navigate to one of your most frequented pages, and log in to your account. Notice the time it takes you to type in the password. Now, take a piece of paper, and write that same password down. Notice how much longer it takes you to remember the password. The keyboard, it turns out, is part of your memory. It helps you remember the password.

If you watch this video because you use Zettlr and maintain a Zettelkasten, the point becomes even more obvious. As Niklas Luhmann explains: "As a result of extensive work with this technique a kind of secondary memory will arise, an alter ego with who[m] we can constantly communicate. It proves to be similar to our own memory in that it does not have a thoroughly constructed order of its entirety, not hierarchy, and most certainly no linear structure like a book. Just because of this, it gets its own life, independent of its author." (Source) In fact, the whole systems theory envisioned by Luhmann can be used to make sense of the extended mind as constant back-and-forth communication. From this perspective, you communicate with the keyboard, which in turn tells you what to write, because your muscle memory is faster than your consciousness trying to recall the password. Sometimes a trivial change, such as using a different keyboard, leads to significant delays in typing your passwords. In the world of systems theory, your mind encompasses not only your interior psychological system but also a number of systems outside your body, in order to perform higher-order conscious actions.

Clark and Chalmers continue to specify what they mean by extended mind: "If I rarely take relevant action without consulting my [notebook], for example, its status within my cognitive system will resemble that of the notebook in [the Alzheimer's person's]. But if I often act without consultation — for example, if I sometimes answer relevant questions with "I don't know" — then information in it counts less clearly as part of my belief system. The internet is likely to fail on multiple counts, unless I am unusually computer-reliant, facile with the technology, and trusting, but information in certain files on my computer may qualify." (Clark & Chalmers 1998, p. 18) Clark and Chalmers wrote their paper twenty years ago, with the internet just emerging and smartphones still ten years away. For them, it was unlikely that a person would rely on their computer to a high degree. Today, however, there is hardly a person in the Western world who does not exhibit a huge degree of computer-reliance. We store our appointments on a server, we write everything down on our computer rather than on paper, and our most important asset for remembering things is our smartphone.

To a certain degree, the Orwellian saying holds true that whoever is not on the internet does not exist. If a search for your name does not turn up anything, you are not visible to people outside your peer group or those you are in direct contact with. This might indicate that you are pretty good at keeping your private information private. But if you are an activist, you might see it differently. Protests around the world rely on media coverage and general visibility; without being displayed, their power would be much lower. The protests during the so-called "Arab Spring" were in part successful because protesters in one city could rally support in other parts of the country, turning what started as a local riot in Sidi Bouzid into a nationwide firestorm that ultimately brought down the government of Tunisia. But this also means: protests are increasingly only real if they are online. Many protests in non-Western countries are barely visible and therefore have a much harder time gathering support from Western countries in the form of sanctions or political pressure. What is not visible is not real in any way we actively perceive.

The same nowadays applies to our mind. If information we rely upon is online, we may have a hard time accessing it when we ourselves are not online. If we have no phone reception because we are in a building's basement, our information in the cloud is the analogue of the Alzheimer's person's notebook being stolen from them. As Clark and Chalmers continue: "The real moral […] is that for coupled systems [such as the brain-notebook-coupling] to be relevant to the core of cognition, reliable coupling is required. It happens that most reliable couplings takes place within the brain, but there can easily be reliable coupling with the environment as well." (Clark & Chalmers 1998, p. 8) If we outsource a part of our mind to external resources, we depend on them to complement ourselves. We would, for instance, never store passwords by printing them on thermal paper: the print degrades quickly, and as soon as it is completely gone, we have effectively lost our passwords. Thermal paper, hence, is not reliable enough to depend upon. Most of us also make backups, storing information in multiple places to ensure it doesn't get lost.

This reliability is our starting point for an inquiry into why developing Open Source software is a political act.

Reliability and Availability

Reliability sits at the core of our extended mind. We only use our computer to offload part of our mind because we can be certain that it will not suddenly delete the data. If there were a chance that the information might simply be lost, we would think twice about storing it there in the first place. We need to extend our mind in order to achieve what we do. We would probably never have travelled to the Moon if we hadn't been able to store information externally. Just think of Margaret Hamilton, the great mind behind the Apollo flight software. If she hadn't been able to externalise her knowledge in the form of code, she might have had to sit in the cockpit herself when the rocket launched. Twenty years ago, notebooks and paper were still the primary means of storing information; we relied on them. At the same time, computers were unstable, so we did not rely on them for that very task. The Apollo missions were always at some risk, and even today simple computing errors cause millions of dollars to be wasted because some satellite fails to reach its designated orbit around the Earth. Even paper can prove unreliable at times: Niklas Luhmann's first attempt at starting a Zettelkasten, for instance, was thwarted when it caught fire. Computers nowadays are extremely reliable, and they become more so every day.

But as computers become reliable, there are developments that clearly run counter to that very idea of reliability. With the increasing complexity of software applications, there are more and more actors we need to depend upon. Storing information in the cloud increasingly becomes a chain of dependencies. We no longer need to trust only the paper we write notes on, but several systems at once: our computer, the operating system, our internet connection, our cloud provider, and the software developers. If only one of these links breaks, nothing works anymore, and we are back at the metaphor of the stolen notebook.
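The fragility of such a chain can even be quantified: if the links fail independently, the chain as a whole is only as reliable as the product of its parts. The 99.9% figure below is an assumption for illustration, not a measured value:

```python
# Probability that a chain of independently failing links all work at once.
# 99.9% per link is an illustrative assumption, not real-world data.
links = {
    "computer": 0.999,
    "operating system": 0.999,
    "internet connection": 0.999,
    "cloud provider": 0.999,
    "software vendor": 0.999,
}

availability = 1.0
for name, p in links.items():
    availability *= p  # every additional link lowers overall reliability

print(f"chain availability: {availability:.2%}")
```

Five links at 99.9% each already drop the chain below 99.6%; paper, by contrast, is a chain of length one.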

One striking example is the catastrophic failures of the service provider "Cloudflare". Cloudflare is a company that offers so-called load-balancing. Load-balancing means that incoming visitors are routed not to one but to multiple servers providing the same content, so that no single server is overloaded with too much traffic. If you visit a page that uses Cloudflare, you are actually not visiting the page itself, but Cloudflare, which stores multiple copies of the page and serves you those instead. Cloudflare is used by only about one percent of all internet pages worldwide. But, and here's the catch, this one percent includes some of the most-frequented pages worldwide, such as Uber or AirBnB. Every company that is too small to maintain its own server network, but big enough that the amount of people visiting its website becomes too large, relies on Cloudflare. Yet their systems have repeatedly proven unreliable: several times they accidentally introduced bugs into their own software, leading to crashes of their servers. The result was that pages using Cloudflare became inaccessible for hours. If something like this happened to Dropbox or Google Drive, petabytes of information would be effectively lost for the duration of the downtime.
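The core idea of load-balancing can be sketched in a few lines; a simple round-robin scheme is one common strategy, and the server names here are purely illustrative:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Routes each incoming request to the next server in the pool,
    so no single server has to shoulder all the traffic."""

    def __init__(self, servers):
        self._pool = cycle(servers)  # endless rotation over the pool

    def route(self, request):
        server = next(self._pool)
        return f"{server} <- {request}"

balancer = RoundRobinBalancer(["edge-a", "edge-b", "edge-c"])
for i in range(4):
    print(balancer.route(f"req-{i}"))
# The fourth request wraps around to edge-a again.
```

The sketch also makes the single point of failure visible: if the balancer itself crashes, every server behind it becomes unreachable at once, which is exactly what happens when Cloudflare goes down.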

As the New York Times has put it: "Such disruptions have become increasingly common, underscoring the fragility of the digital world. Thousands of websites and other services rely on cloud technology from the same few companies, and even a minor bug can have significant ramifications, shutting down large swaths of the internet." (Source)

But even when no accidents happen and the services run smoothly, uncertainties are still introduced into the system. Many companies have by now switched from fixed-price sales to subscription models. Instead of buying and owning software with a single bill, you pay monthly, which not only becomes expensive quickly, but also leaves you worrying about what happens when you can no longer afford the service. This "rental" model of software works for now, but a recent example shows how quickly things can turn ugly: due to export sanctions imposed by the United States on Venezuela in late 2019, Adobe simply cut off the country's access to its Creative Cloud products, including vital software such as Photoshop and InDesign. This effectively left the whole country unable to edit photos or typeset newspapers, at least not legally.

Software companies nowadays depend heavily on that very fact: that we need our information accessible at all times. And they use it to keep people locked in. The technique companies use to keep people sticking with their services, no matter the quality, is to leave them no choice. This is called "vendor lock-in". Vendor lock-in means that once you use software by a given vendor, you depend on that software to view and handle your own data, because most file formats are proprietary; that is, nobody can be sure how the data is stored in them. This leads to what one might call a micromanaged monopoly over individual people. Switching vendors becomes increasingly hard. And by offering only monthly subscriptions, vendors continue to make money from you simply by keeping you locked into their ecosystem.
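The lock-in power of a proprietary format can be made concrete with a toy example. The "encoding" below is invented, a stand-in for any undocumented byte layout:

```python
note = "Remember: back up the Zettelkasten."

# Open format: plain text. Any editor, today or in thirty years, can read it.
open_file = note.encode("utf-8")

# Toy 'proprietary' format: the same note behind an undocumented
# transformation. Without the vendor's software, all you see is noise.
proprietary_file = bytes(b ^ 0x5A for b in open_file)

print(open_file.decode("utf-8"))  # readable by anyone
print(proprietary_file)           # gibberish without the secret spec
```

The note is intact in both cases, but only the vendor who knows the transformation can give it back to you, which is the whole business model.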

One famous example in the Microsoft Windows ecosystem is Internet Explorer. It came pre-installed on every version of the operating system up to Windows 7 and always tried to force you into using it as the default browser, irrespective of the security issues that made Internet Explorer a major headache for every security engineer. Even on Windows 10 you have to click multiple times to convince your operating system that you do not want to use its built-in browser. But vendor lock-in does not stop there. Windows is just one operating system capable of running a computer; you could also use Linux. You are not confined to Windows. Yet it is close to impossible to avoid it, as nearly every non-Apple computer ships with Windows pre-installed. In recent versions, Microsoft has even convinced hardware producers to ship Windows licenses embedded in the hardware itself.

The sole aim of vendor lock-in is to raise the threshold for switching to other software so high that you stick with theirs even if you don't like it. To migrate data you'll likely need specialised software, and sometimes you simply can't migrate at all, for instance when the data was stored with a cloud provider and the provider denies access to your account.

Which leads me to the next point where the reliability of software systems is in danger: unmaintained software. Nobody knows whether software sold by a company today will still be sold in a year. The company could simply decide to abandon an application, or, which is more likely these days, the startup proves unprofitable and the investors pull out their venture capital. In that case you are left with software that will no longer receive updates and data which is probably locked away forever. And don't count on there being apps to extract your data from most file formats. A few years ago, I stumbled upon a software engineer's rant about the Photoshop file format: the way these files are structured is apparently so far from all good programming habits that it took him months just to write a small program that could read them. The text itself is quite amusing:

At this point, I'd like to take a moment to speak to you about the Adobe PSD format. PSD is not a good format. PSD is not even a bad format. Calling it such would be an insult to other bad formats, such as PCX or JPEG. No, PSD is an abysmal format. Having worked on this code for several weeks now, my hate for PSD has grown to a raging fire that burns with the fierce passion of a million suns. (Source)

An engineer from Adobe responded a few days later, fairly disgusted by the rant. Nevertheless, his response was essentially a confession that Adobe in fact had no structure in its format and didn't care to change that. (Source)

But how come that, despite all these risks to data accessibility, an overwhelming majority of people rely on these services and happily pay for them? Part of the answer is that most software at least lets you read your data once your subscription has expired. Adobe Reader, for instance, is a free version of Adobe Acrobat that lets you view PDF files but does not let you create them. The text analysis software MAXQDA offers a free reader that lets you browse your data, but not edit it. The citation manager Citavi also lets you read your library for free, but to continue working with it you have to pay for a license again. And most cloud providers give you a heads-up to download the data from your account before they delete it. This leads to two conclusions. One, people accept vendor lock-in because most of the time they can at least recover their data, albeit at a cost in time. But, more importantly, it shows that companies are well aware that the extended-mind thesis holds true. Huge corporations effectively hold captive a part of our own self, of our identity, which we hand over to them to care for in exchange for money. They use this fact to force us into buying their subscription one more time.

In fact, there are corporations that do not even offer any service in exchange for housing a part of ourselves.

Platform Capitalism and the Marketing of the Self

Let us take a short excursus on platforms such as YouTube, Facebook, or Twitter. They position themselves as "platforms" because the term implies a (political) neutrality for expressing thoughts. Nevertheless, they also make use of vendor lock-in techniques, even more so than programs you run on your computer. As Tarleton Gillespie writes: "Drawing these meanings together, ‘platform’ emerges not simply as indicating a functional shape: it suggests a progressive and egalitarian arrangement, promising to support those who stand upon it." (Gillespie 2010, p. 350). Further: "Despite the promises made, ‘platforms’ are more like traditional media than they care to admit. As they seek sustainable business models, as they run up against traditional regulations and spark discussions of new ones, and as they become large and visible enough to draw the attention not just of their users but of the public at large, the pressures mount to strike a different balance between safe and controversial, between socially and financially valuable, between niche and wide appeal. […] And the discourse of the ‘platform’ works against us developing such precision, offering as it does a comforting sense of technical neutrality and progressive openness." (pp. 359-360).

Platforms use user-generated content to grow their userbase, only to show users advertisements and profit from holding their data, with which they can target specific groups. While Facebook, for instance, is free for users, companies pay hefty fees to target their ads at specific groups in search of maximum ad efficiency. The same holds true for Google, Twitter, and all other commercially driven networks. That Twitter can be used to coordinate protests and shed light on the wrongdoings of governments around the world is simply a side-effect of them using your data to generate money in a world where there is less and less money to be made. Nick Srnicek, in Platform Capitalism, addresses this fact and examines, from an economic perspective, how platforms became a state-of-the-art business model of the digital era. He states: "Often arising out of internal needs to handle data, platforms became an efficient way to monopolise, extract, analyse, and use the increasingly large amounts of data that were being recorded."

The data that is stored on platforms differs significantly from the data we consciously produce. Storing a well-written piece of text on our computer is different from tweeting a quick thought. The use of the text is clear and intentional, while the data generated by tweeting only becomes visible from afar: it, too, is a part of ourselves, a digital "avatar", an internet persona that, while distinct from our "real life" character, is still part of us. Following the extended-mind thesis, the data we store on Twitter or Facebook is effectively a part of ourselves. Sometimes we use Facebook or Twitter to get an impression of ourselves, as if looking into a mirror. Revisiting content we posted years ago can yield insights into what we were like back then. In the world of social media, then, we do not pay to work with offloaded parts of our memory. Instead, we offload more personal parts of our memory, which those platforms then use to monetise their service.

And this model certainly pays off. After locking your data away in incomprehensible files you cannot read on your own, the next logical step for corporate software vendors is to lock the data away completely. More and more corporations switch to fully cloud-based models. As internet access proliferates, more and more users see cloud-based applications as a nice alternative; after all, it sounds good that you can edit your data not only on your computer but also on your phone, and even on foreign computers where the application is not installed. But the flip side is that your data is no longer on your computer; it sits on big server farms completely outside your control. So even if you wanted to switch to different software, you would first have to export your data from the cloud onto your computer, and there you are completely at the mercy of the vendor to offer the export formats you need. More and more people are doing this, and while some still view such software critically, the overwhelming majority happily accepts the new cloud era. These people do what the next section is about:

We Appreciate Power

Grimes dedicates her 2018 song "We Appreciate Power" to the power of modern computing. In the song she displays a technophile cyberpunk mixture, using phrases like "We pledge allegiance to the world's most powerful computer" and outlining a very positive view of AI. The song is clearly affirmative of what modern neural networks are able to do. She even translated the lyrics using only Google Translate, which, I assume, also introduced one or two syntactical errors into the translations she displays across the screen, as we know happens when one uses that service. But the song also mystifies AI, semantically overloading its benefits with phrases such as "Thank you to the AI overlords for translating our lyrics." A fun fact: the line "Come on, you're not even alive if you're not backed up on a drive" offers a glimpse of the extended mind. Then again, it's a song, and artistic freedom allows for a lot of exaggeration. But let's return to reality.

With computers becoming more powerful and neural networks able to fake images to a frightening degree or win at board games such as chess or Go, we tend to forget one fundamental truth about computers and software: strip away the magic that surrounds them, and what is left is a highly sophisticated tool. Even though artificial intelligence is extremely helpful in detecting what objects are in an image or classifying text into categories such as "scientific", "prose" or "fiction", computers still think in zeroes and ones. Even the highly praised neural networks are nothing more than a really huge pile of basic statistics that only performs well because a lot of these statistics are stuffed together. We are still far away from what computer scientists call a "general AI", which differs from a "weak AI" in that it performs well not only at one very specific task but at other tasks as well. As of now, if you take a neural network trained on images and throw a bunch of videos at it, it will produce amusing results at best.
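To make the "pile of basic statistics" claim concrete, here is a single artificial neuron, the building block of which neural networks stack millions: a weighted sum pushed through a squashing function. The weights below are arbitrary example values, not a trained model:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs, squashed
    into the range (0, 1) by the logistic (sigmoid) function."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# A neuron whose first input weighs heavily: the output lands near 1.
activation = neuron([1.0, 0.0], [4.0, -2.0], -1.0)
print(round(activation, 3))
```

A "deep" network is just layers of these neurons feeding into one another, with the weights found by statistical fitting rather than by hand; nothing in it is more mysterious than this arithmetic.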

In turn, this means that neural networks are simply the next stage of the programs we have been used to since the invention of the personal computer. They perform better at tasks where the input is not uniform, such as images: to analyse such data, you need to teach a computer the difference between one single image of a car and the "concept" of a car, because no two photos of the same car are identical. If you know what the input will be, such as when reading in a file, you don't need to train an AI and can resort to normal programming logic. Nevertheless, it remains valid to state that software is extremely powerful. But the really frightening aspect of software, even of neural networks, is not that they are so good at recognising images. The frightening part is only visible at the conceptual level.

Most of us live in more or less stable democracies. Whenever we give away responsibility over some aspect of our lives, we demand a system of checks and balances to make sure we stay in control of our own lives. Historically, dividing power into three branches, the judiciary, the legislature, and the executive, has proven a good approach. The legislative organs of a state, such as a parliament or a senate, enact laws; that is, they tell people how to live. They manage taxes and determine what is legal and what is not. We elect these parliaments to make sure the laws they pass align with our interests. But we also need to make sure these laws are respected: the executive enforces the laws, and the judiciary penalises transgressions.

But in many respects we don't need explicit laws. A lot of everyday interaction between humans is regulated by unwritten laws, so-called norms. One norm is to greet each other when meeting. There is no law requiring or enforcing it; it's just good behaviour, and if we didn't do it, we'd quickly become socially excluded. Therefore, we never needed laws regulating this. Regulation becomes necessary only when something affects the well-being of many people at once, when we begin to depend on it. This is why no restrictions were imposed on the internet and on software programs when they were first introduced. As Clark and Chalmers said over twenty years ago: only a few people are super-reliant on computers. So we didn't think it necessary to impose checks and balances on software applications. Two decades later, however, the situation has changed. Social networks and software are all around us, and we rely on them for a significant part of our lives. Regular data breaches, and the stress they impose on individual lives, testify that software now rules a significant part of our lives. There are even sites dedicated to highlighting the impact these transgressions have on you: sites like "Have I Been Pwned" tell you whether your email address was included in a data breach, which is a strong indicator that other private data was leaked too.

So why don't we have checks and balances for such vital systems? A first reason is that legislative processes are slow and usually lag behind societal change. But we cannot even hope for legislation, and this is because of a second reason, which is systemic and cannot be overcome by any means we currently possess: software is both legislative and executive at the same time. A program defines the rules which it itself enforces, every time. Judges and lawyers are not even needed, because even if you wanted to transgress the "law", you couldn't: the system checks your inputs and determines whether what you do is allowed. This applies at the lowest level and cannot be changed; programming languages are meant to receive instructions which are then strictly followed.

This leads to some legal monstrosities. Take military drones employed by armies around the world. Their operating system defines rules which it follows, for instance where and for how long the drone can fly, or what to do when environmental conditions such as fog appear. These rules cannot be bent, no matter how hard you try. If we think not too far into the future, we can see where this might lead: if any military drone were ever given the ability to launch a missile completely by itself, this would factually introduce a paradoxical loop into the regulations of warfare; it would result in something like a (ius in (ius in bello)), a code of law within another code of law, and the two need not align. As soon as autonomy is introduced into software-driven machines, we run into the problem that these machines have to follow already existing legal code which cannot be fully translated into software code.

In human interactions, there is rarely certainty. Laws must be interpreted: even if executive institutions imprison you, a judge must look at your case and determine whether the police officers did in fact imprison you rightfully. Many laws allow for certain paradoxes which can only be overcome by changing how we look at them. This is not possible for machines, whose software code is fixed and needs to address every single uncertainty.

Let us have a look at one example. While dealing with the state of exception, or Ausnahmezustand, the Italian philosopher Giorgio Agamben stated that if a curfew is imposed, the soldier enforcing the curfew is himself in transgression of it, because he is out on the street himself, and a curfew does not legally distinguish between the norm and the transgression of the norm. Such paradoxes are not possible within software. If you were to translate this curfew into a military drone, you would have to explicitly list everything that is still allowed on the street, making it difficult to enforce. Therefore, as long as you do not exploit an error in the software, there are no ambiguities. Every case is handled, with undesirable inputs mostly leading to an error. Moreover, you are completely at the mercy of what the program lets you do and what it does not. There is no democratic process in which you can vote for certain features. If the developers decide they want to do something else, they can simply do so.
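Translated into code, Agamben's paradox does not survive: it has to be dissolved in advance into a flat enumeration of exceptions. The following Python sketch is purely illustrative; the roles and the allowlist are invented for the example.

```python
# Hypothetical sketch: a curfew expressed as software. The ambiguity of
# "the enforcer is himself on the street" must be resolved up front by
# explicitly listing every permitted case -- there is no interpretation.

EXEMPT_ROLES = {"soldier", "police", "paramedic"}  # explicit allowlist

def violates_curfew(role: str, on_street: bool) -> bool:
    """A person violates the curfew if and only if they are on the
    street and their role is not on the explicit allowlist."""
    return on_street and role not in EXEMPT_ROLES

print(violates_curfew("civilian", True))  # True: a transgression
print(violates_curfew("soldier", True))   # False, but only because listed
```

The soldier is exempt not because the rule is interpreted in his favour, but only because someone thought to put "soldier" on the list; any role nobody anticipated is automatically in transgression.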

Normally, that is not a problem, but consider Microsoft Word for writing text. Nearly everyone has it, and when you work in an environment where you have to deal with a lot of text, there is no way around working with Word documents. And we have all run into those tiny problems that sometimes make it a pain to create a text document. When whole industries are at the mercy of Microsoft delivering software you can work with, you can clearly see why having no power over what software does can amount to a problem. Microsoft can afford not to care when its customers complain about something.

Software, in this respect, is political. It enforces what it allows and what it does not. This might not always be obvious in day-to-day work with Word, but as soon as it comes to social networks, it becomes clear. When certain opinions are censored, this can be in alignment with democratic values, such as the banning of hate speech or fascism. But this possibility of censorship by internet corporations can just as easily be targeted at activists and protests. Additionally, software can silence people simply by design. For instance, all programming languages make use of a concept called a function. This is simply a reusable piece of code that takes something as an input, does something with it, and returns a result. Take, for example, a function that returns the square of a number. Such a function can hardly be viewed as a political act, but this changes once we turn to a function that validates a form users can fill in on a website. Such a function normally takes the data you have entered and checks that it is correct. While this is first and foremost a mechanism to prevent people with malicious intent from hacking the website, it has the side effect of applying a very narrow set of rules to determine whether something is valid or not. Take, for instance, gender: what if you are only presented with the choice between male, female and "diverse"? As a transgender or genderless person, you may have no choice but to sort yourself into the category of diverse, which becomes even more oppressive when gender is a compulsory field.
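A hypothetical validation function, sketched in Python for illustration, makes the point concrete: the narrow set of rules is visible right in the code, and anyone who does not fit the predefined categories is simply rejected.

```python
# Hypothetical sketch of a form validator. The allowed categories are
# hard-coded and the field is compulsory -- the function silently
# decides who is representable on this website and who is not.

ALLOWED_GENDERS = {"male", "female", "diverse"}

def validate_form(data: dict) -> list:
    """Return a list of validation errors; an empty list means the
    form is accepted."""
    errors = []
    gender = data.get("gender")
    if gender is None:
        errors.append("gender is required")  # the compulsory field
    elif gender not in ALLOWED_GENDERS:
        errors.append("gender must be one of: "
                      + ", ".join(sorted(ALLOWED_GENDERS)))
    return errors

print(validate_form({"gender": "female"}))  # []: accepted
print(validate_form({}))                    # ['gender is required']
```

Nothing about this code is malicious; it is ordinary, even careful validation. The politics lies entirely in which values were put into `ALLOWED_GENDERS` and in the decision to make the field mandatory.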

Software in and of itself cannot alter its behaviour or apply rules of thumb to inputs. This is simply not something it can change. Therefore, we must rely on programmers to do a good job, and equally on their bosses to make sound decisions. Jacques Rancière has defined politics as follows:

"Political activity is whatever shifts a body from the place assigned to it or changes a place's destination." (Rancière 1999, p. 30)

What Rancière means by this is that politics occurs whenever a thing is not in its assigned place. This is perfectly valid for software: as soon as a person diverges from what is expected of them, the software aborts with an error or "corrects" the "wrong" input automatically. So at least from the point of view of Jacques Rancière, software is utterly political. It has a significant impact on our daily lives, and there is nothing we can do about it. This is a problem that corporate software exhibits in particular. The most fundamental issue with paid software is that it is intransparent: the source code is hidden from the observer's eye, so we cannot evaluate whether we agree to its rules. We simply have to rely on them. Consequently, we also cannot decide to take the software and develop a so-called fork, a different version, from it. We have to use it as-is, and most of the time we appreciate the power of these programs without being able to change the way they exert their power over us. Some people focus solely on the problem-solving aspect of software, just like Grimes' "We Appreciate Power", and leave out the political implications. But I would strongly disagree with simply "submitting" to the choices software makes, as Grimes does in her song. Nevertheless, this should have given a good overview of the main conceptual problems with software that need to be addressed. Which takes me to the last section:

Open Source As A Political Act

Software not only belongs to, but is the fabric of our lives now. We depend on it and, as I have shown, software even curates a part of our very own selves. If we forgo software regulation, we effectively place part of our selves into the hands of other people. Just as we wouldn't want our private lives subjected to arbitrary government intervention, we should also demand a democratic say in what software may do with the outsourced parts of our minds.

Clearly, proprietary software whose source code is hidden from the public eye does not live up to that demand. But what then, you might ask, could we do to reclaim the management of the parts of our mind that we outsource, such as remembering appointments, the news, or who our acquaintances back in school were? Well, there is an answer to that: Open Source.

First and foremost, Open Source is a good thing because its source code can be publicly viewed, and checks and balances therefore apply. "But I can't evaluate code, I'm not a programmer!", you might say now. Rest assured: of course you don't have to understand code for it to be a transparent process. The fact alone that the source code is publicly available will prompt an army of volunteer developers to have a look at that code and determine whether it is detrimental to your privacy. The approach of Open Source thereby follows the principle of having experts deal with subjects we understand nothing of. A good example is taxes. Let us be honest: who really understands why and how taxes work? Unless we have studied law and specialised in tax law, we hardly know how such things work. Nevertheless, we trust the government's appointees to decide what is good for us. This is why we hold elections: to elect people with a certain political aim, who in turn employ experts who determine the best course of action, given the political strategy. At least that is the theory. The practice, obviously, is always debatable.

We can make use of that principle for Open Source: as long as an Open Source software has enough people using it, the chances are good that experienced software developers have had a look at it and determined whether it is malicious or not. Of course, this task requires them to have an idea of what we personally value. This is where I should add that programmers and everyday users should talk to each other frequently, both so that programmers know what people in their very own neighbourhood understand to be privacy-conforming, and so that you, as an everyday user, can trust the software you use to offload parts of your mind. If you know programmers, take good care of them, because they are the ones you would want to rely upon to determine which software to use. Just as you would trust your local butcher or coffee shop to make the best product possible, you should trust programmers to have a good sense of which software is good and which is not.

Commercial off-the-shelf software does not provide you with this possibility. Unless your local programmer is really good at assembly and reverse-engineering, you cannot possibly know what a piece of software will do. And believe me, there are not many who are good at that. Being a programmer and being able to reverse-engineer software to understand what it is doing are quite different skills. Open Source software, by contrast, gives you a transparency that makes it easy to determine what a program does.

And this is why we all, programmers and mere users of software alike, should support Open Source. It is in all of our interests. Open Source software is still confined to the zeroes and ones a computer needs for its calculations. But if you encounter that one form which forces you to enter your gender, you can simply ask the developers to remove the compulsory field. This is still not perfect, of course, because the developers could simply say "No", but then they will have to deal with having publicly declared that they want gender to be a compulsory field. While this is still not democratic to the degree of having institutionalised rules for actually enforcing these things, it is a good basis for them.

Yes, we need political rules to force developers to stick to certain conventions, and Open Source is the best chance we have got. Open Source gives you a way to apply checks and balances to software during the interregnum in which software already has a huge impact on our lives, while laws and regulations are not yet made.

Developing Open Source Software is a political act for a variety of reasons that I have tried to outline in this video. First, Open Source makes sure that computational constraints don't confine you to simply abiding by rules you have no power over. Second, it strengthens the social pressure on programmers to act according to the needs of people. Third, it is free of charge, and can therefore be used regardless of economic status and class. Fourth, it is most likely to store your data in machine-readable open formats, so even if a piece of software is abandoned, it will be easy to migrate that data to another application. And finally, Open Source apps often integrate with other Open Source apps, so that you can trust not just one part of the chain of dependencies, but the whole ecosystem.

So whenever you encounter Open Source software that you like, encourage others to engage with it as well. You don't need to know what is going on inside the software to open GitHub issues and ask the developers to implement certain features. You don't need to be a programmer to keep an eye on software. Simply treat software applications as a political arena where you would like to make your voice heard. Because then, no matter whether you are where politics expects you to be or not: your voice will be heard.

References

  • Agamben, G. (2002). Homo sacer. Die souveräne Macht und das nackte Leben. Frankfurt am Main: Suhrkamp.
  • Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19.
  • Gillespie, T. (2010). The politics of ‘platforms’. New Media & Society, 12(3), 347–364. https://doi.org/10.1177/1461444809342738
  • Rancière, J. (1999). Disagreement: Politics and Philosophy. Minneapolis: University of Minnesota Press.
  • Srnicek, N. (2017). Platform Capitalism (L. De Sutter, Ed.). Cambridge, UK; Malden, MA: Polity.
