“Computer bedienen”

“Wir bedienen Computer” – “Computer bedienen uns”

There is a strange double meaning in the German word “bedienen”: it means “to serve” (with a slightly respectful connotation, more like a waiter than a servant) but also “to operate” (a machine).

From the initial sentence alone, you can’t say who is master and who is slave (to use some computer-related terms). This struck me recently when I read a chapter in Frank Schirrmacher’s Payback (“Chaos im Kurzzeitgedächtnis”, p. 64).

One of his points is that services like Google Now are usually seen as digital butlers, but at the same time they select information for us, which inevitably controls what we do and think.


Privacy needs a culture of anonymity (more than technical solutions)

Our understanding of privacy is currently under permanent discussion and re-definition. Social networks encourage sharing private details, but this also means sharing these details with a large corporation (and most likely with its advertising clients). Intelligence agencies skim through our conversations in their quest to identify potential future terrorists. Quite a few people are scared by this.

We see different reactions: the technology-savvy now roll out heavy encryption and other technology. It quickly becomes an arms race to get something “really” secure and anonymous. For the majority, this technology looks – and in large parts sadly is – highly complicated, as if it will take all the fun out of digital interaction. It even seems to confirm their fatalistic perception that they are lost anyway, so they stop caring altogether. And there are still enough people who didn’t really notice and are not inclined to take part in the discussion.

Technology, in consequence, should not be the first thing to look at here. What we need is a cultural shift towards anonymity and privacy. That means insight into the value of privacy (e.g., as a precondition for liberty) when we actively think about it, for instance in discussions. But it should eventually go deeper and become an almost subconscious value that we consider intuitively, like fairness. Anonymity should weave into our everyday decisions, not as an “always on” but as an always-available option.

La Bauta (in the back) was the everyday mask in Venice – picture by richspalding

In an article for the magazine <kes>, Johannes Wiele puts forward three theses:

  • Social conflicts can’t be solved, just temporarily settled/negotiated (in pluralistic societies)
  • Almost all actors in politics pursue (what they think is) the good cause
  • (Despotism provokes resistance. This point is less relevant here but can explain one of the motivations for such a cultural shift)

The first two points combined mean: we need to arrive at a common understanding, at the level of society, of the benefits and risks of digital technology. We need to compare our “traditional” values and the preconditions they are built on with the conditions of the digital world. Some values might be difficult to keep, some might need to be redefined, and we will need new, different social rules. These discussions must reach the level of society as a whole (involving “all actors in politics”) to achieve a broad understanding and to constitute new social norms (this also resonates with Sascha Lobo’s call at re:publica this year). Technological implementations, such as email encryption, might be a consequence of this new culture, but they are not at the heart of it.

Wiele references the mask culture of 18th-century Venice to illustrate how vivid and detailed such a culture can be: various masks for various events, rituals around masking and un-masking, obligations like the prohibition on carrying arms while masked (see his blog for details). Wiele also mentions that masks became popular because of the excessive surveillance prevailing in Venice at that time. Wearing a mask was part of a strict social code and its appearance was heavily regulated. This gave others, such as non-Venetian traders, the assurance that the bearer of the mask held certain privileges and could be trusted, while still hiding his identity. It is like a third party confirming certain access rights when you want to use an online service, but without giving away your full identity (and, ideally, without the third party itself learning when and for which service you use this confirmation). Digital certificates represent parts of this concept, but they don’t have a working “mask mode” yet.
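To make the analogy a bit more concrete, here is a minimal sketch of the idea in Python, with invented names and a hypothetical issuer (real anonymous-credential schemes, e.g. blind signatures, are considerably more involved): a trusted party attests to a privilege, and a service verifies the attestation without ever learning who is behind it.

    # Minimal sketch of the "mask" idea: an issuer confirms a privilege,
    # not an identity. Names are invented; this is not a real protocol
    # (with a plain HMAC the issuer could still link tokens, for instance).
    import hashlib, hmac, json, secrets, time

    ISSUER_KEY = secrets.token_bytes(32)          # known only to the issuer

    def issue_mask_token(privilege: str, valid_days: int = 30) -> dict:
        """The issuer attests to a privilege, e.g. 'licensed-trader'."""
        claim = {
            "privilege": privilege,
            "expires": int(time.time()) + valid_days * 86400,
            "nonce": secrets.token_hex(8),        # makes each token unique
        }
        payload = json.dumps(claim, sort_keys=True).encode()
        claim["sig"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        return claim

    def verify_mask_token(token: dict) -> bool:
        """A service checks the attestation without learning any identity."""
        claim = {k: v for k, v in token.items() if k != "sig"}
        payload = json.dumps(claim, sort_keys=True).encode()
        expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, token["sig"]) and claim["expires"] > time.time()

    token = issue_mask_token("licensed-trader")
    print(verify_mask_token(token))               # True: privilege confirmed, mask kept on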

Because it was a cultural or social standard, no one had to justify why s/he wanted to stay anonymous under normal circumstances. The mask culture might even look playful to us nowadays, which I consider a good thing.

A culture of (choice of) anonymity could be an interesting development and consequence of the current situation. It is certainly the only way to a profound and sustainable, or trustworthy and applicable, concept of privacy.


Facebook is an infrastructure

Inside Facebook’s Prineville Datacenter (photo by Pete Erikson / Wired.com)

With more and more Facebook features and acquisitions, it appears increasingly plausible to me that Facebook could become “the internet” for many people around the world. It is becoming so big and so comprehensive that they would not go anywhere else to surf “the web”. They would do all their messaging, news reading, picture browsing, gaming and shopping on Facebook. In many of today’s ads you find links in the form of f/mycompany instead of the former www.mycompany.com. Is Facebook becoming the new “web”, leaving the www and soon technology like the web browser behind (or to the geeks)?

What if Facebook went away?

This could be just another observation from the ever-evolving media ecosystem, but this shift would bring a remarkable change: the www doesn’t belong to anyone (although it is dominated by the US), while Facebook is privately owned and dominated by Mark Zuckerberg (holding 28% of the shares and controlling 57% of the voting rights).
He can change the terms of the service as he likes (and so he does), and he could just turn it all off if he got sick of it. Poof – the internet, deleted.

Or imagine it the other way round: Facebook in financial trouble, filing for bankruptcy. This would put so many businesses, entertainment industries, media channels, personal data and image collections at risk that keeping Facebook alive would appear to be in the public interest – too big to fail.
In his Wired article Can Anything Take Down the Facebook Juggernaut?, Steven Johnson called Facebook, by its nature, more an infrastructure than a business.

Johnson sees two challenges for an all-Facebook internet: it tends to become a walled garden, trying to force users to stay inside its network, e.g. by intercepting links to the “outside” with “we have an app for that” dialogues. And all walled gardens to date have failed. In contrast to the walled gardens of the old web, however, the community pulls all the content into Facebook itself. And even outside the network you are inside the network (think of the Matrix): tracked by beacons and like buttons, or exposed by sponsored stories.

The other risk, according to Johnson, is a break-up of Facebook due to monopoly considerations. This would be a spectacular and stunningly bold move by a government: slicing out essential parts of the Facebook code and infrastructure to put them into the public domain, creating a public infrastructure like the www is today. Given the influence of Facebook as a media outlet, this sounds like a Hollywood-movie showdown to me. Since presidential election campaigns increasingly rely on Facebook, it might never happen.

Consequences of an all-Facebook world

Facebook has made the web less information-centric and more people-centric and social (sharing, sharing, sharing). The ease of sharing and staying connected works best when you have a single identity on the web, ideally identical with your offline identity, and when your online friends are your offline friends. You can no longer decide for yourself to play different roles in different contexts. You can try to funnel certain information into certain social groups (or facets), but this requires extra work and might be overruled by a Facebook update.

But the “Open Graph” goes beyond our intuitive understanding: it reveals connections among people and strengths of links that even the people forming these links might not be quite aware of (or could you easily name your 120 closest friends?). It makes interests, hidden wishes, intimate information accessible through data mining. Maybe not to the public, maybe not to you, but in any case to Facebook.


Defining privacy

The spread of personal information in the digital age, and the loss of control over it, is continually increasing. In its essence, this is nothing very new, but we are witnessing (or are part of) some major shifts right now: the rise of online social networks, high-precision targeted advertising, and the level of surveillance that comes with anti-terrorism measures. The significance of privacy is currently being re-negotiated (details below).

At the same time, the technical possibilities to control and broker one’s personal data streams have increased just as much – unfortunately, most of these possibilities are stuck in theory and decent tools are missing. We should expect (or build) a groundbreaking solution here. I find this particularly striking as I had the privilege to work on such a tool over a year ago and, sadly enough, it hasn’t really come to market as of today (I’ll go into details in a separate article).

Photo (slightly cropped) by ecoev on Flickr

A couple of days ago, I had the privilege to attend a conference on privacy held by Germany’s internet industry association eco. By the mere count of participants (overwhelmingly in black suits) it was a small meeting, but as the participation of the German Minister of the Interior, Hans-Peter Friedrich, and the EU Commissioner for Justice and Fundamental Rights, Viviane Reding, shows, it was of extremely high profile for our society’s rule makers.
From a citizen’s point of view, the event was pretty interesting, as you could witness the actors and debates that shape the laws of tomorrow. For designers, however, the lack of discussable solutions, or even just adventurous experiments, was disappointing. I have the strong impression that some practical contributions would inspire the debate and could bring a more differentiated or “realistic” view to some legal considerations.

Defining terms – not just a question for law makers

While defining terms sounds like hairsplitting detail work, knowing about the different aspects and concepts of privacy and data protection helps focus the often superficial and emotional debates. I’ll look very briefly at two questions: protect data against whom or what? And what is the data to be protected?

During the eco meeting, Axel Spieß, an international expert in this (legal) domain, pointed out the very different meanings of “privacy” in the US and “Datenschutz” in Germany: in the US, privacy mainly refers to the “right to be let alone” of a citizen against the state (4th Amendment). Acquiring and selling user data, in contrast, is purely a matter of private business and contracts. “Data protection” would usually refer to measures that prevent the theft or loss of data.
Under German jurisdiction, however, “Datenschutz”/data protection is affected by every transaction involving (or even just the collection of) “information that identifies a person”, because such processing is considered to touch one’s “informational self-determination”. And this needs to be respected by governmental authorities as well as private companies.
(For the UK position, BBC News has a comprehensive article for you.)

There is also a fundamentally different perception of who owns the data (US, mostly: the company that collects or buys it; Germany: the person it refers to). Ownership of personal information is also an important point for a couple of service ideas around a transparent data trade (see the practical article on that).

[sidetrack]
In his speech at the congress, Minister Friedrich implied that data collection by authorities was rather harmless, since it couldn’t happen without laws and was under public control. But since the 9/11 attacks, we should be aware of how easily security (or anxiety) rules over freedom (and, as part of it, privacy), and how otherwise illegal activities and questionable surveillance pass through.
[end of sidetrack]

The other important definition concerns the term “identifying personal information”: intuitively, one would think of the more sensitive pieces, such as name, address, or phone numbers (IP addresses? already a hot debate!). And indeed, some laws contain such lists. However, in the age of sophisticated data mining, “insensitive” data (such as the items of a single purchase) is easily combined into “more sensitive” data (such as buying habits and all deviations from them, like job loss, illness, diets, or even pregnancy). As behaviour prediction becomes reality, there is no insensitive data any more (as the German Constitutional Court stated as early as 1983).
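A toy illustration of that combination step (product names and weights are entirely invented, not taken from any real retailer): no single item is sensitive, but summing a few of them produces a rather intimate guess.

    # Toy illustration: individually harmless purchase items combine into a
    # sensitive inference. Items and weights are invented for this sketch.
    PREGNANCY_SIGNALS = {
        "unscented lotion": 0.3,
        "zinc supplement": 0.2,
        "magnesium supplement": 0.2,
        "extra-large cotton balls": 0.2,
        "caffeine-free tea": 0.1,
    }

    def pregnancy_score(basket):
        """Sum the weights of 'insensitive' items into a 'sensitive' score."""
        return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in basket)

    basket = ["unscented lotion", "zinc supplement", "extra-large cotton balls", "bread"]
    print(round(pregnancy_score(basket), 2))   # 0.7 - although no single item reveals anything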

Who defines the privacy of the future?

Inside the EU, the debate around privacy has been active for quite a while now. Commissioner Reding claims that it is at the heart of the Digital Agenda (which has its own commissioner, Neelie Kroes). For the EU, a unified data protection and privacy legislation would not only facilitate trade inside the union, it would also be a strong signal towards other societies and markets. Companies doing business in the EU would at least have to take the EU rules into account, if not follow them completely (what this could mean can be seen in the discussions around Facebook and Street View).

So far, the EU has been quite successful in setting the agenda and the terms of the discussion. It also convinces (or persuades) more and more non-European countries to follow its model. Obviously, this emerging normative power of the EU is at odds with US interests and US companies (which, again, form most of the internet as we know it). More or less recently (02/2012), the Obama administration came up with a framework of its own, the much-debated Consumer Privacy Bill of Rights. Given the US traditions described above, this might appear a strange thing (some of the differences are outlined here and here).

With the models currently debated on both sides of the Atlantic, we are negotiating nothing less than the fundamental privacy rules of the future digital society.


Predicting behaviour from user data

Since people follow rather stable routines, it is possible to predict their behaviour (within a range of certainty) by analysing their activities in the past. One important piece of research in this direction was carried out in the Context project at the University of Helsinki from 2002 to 2005, with a focus on which places people go to and where they meet.

Today, tremendous amounts of behavioural data are generated through web log statistics, tracking cookies and beacons, and mobile phone positions (cell towers and GPS). New mechanisms evolve that make this data usable, even in real time (e.g. Google’s MapReduce). This was the conclusion of a Structure Big Data conference that promised an “inevitable, even irresistible surveillance society” (Jeff Jonas, an IBM engineer, quoted in a Computerworld article).
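For readers unfamiliar with the pattern, here is a minimal map/reduce-style sketch (invented event data, nothing to do with Google’s actual implementation) that aggregates raw location events into visit counts per user and cell tower:

    # Minimal map/reduce-style sketch: count visits per (user, cell tower)
    # from raw location events. Purely illustrative, invented data.
    from collections import defaultdict
    from itertools import chain

    events = [
        ("alice", "tower-17"), ("bob", "tower-03"),
        ("alice", "tower-17"), ("alice", "tower-42"),
    ]

    def map_phase(event):
        user, tower = event
        yield ((user, tower), 1)                  # emit key/value pairs

    def reduce_phase(pairs):
        counts = defaultdict(int)
        for key, value in pairs:                  # group and sum per key
            counts[key] += value
        return dict(counts)

    print(reduce_phase(chain.from_iterable(map_phase(e) for e in events)))
    # {('alice', 'tower-17'): 2, ('bob', 'tower-03'): 1, ('alice', 'tower-42'): 1}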

While the ability to “look into people’s minds” scares privacy experts, it also promises to deliver perfect filters for users who feel lost in the tremendous stream of news and information. And it offers them a personalized experience of services.

Another point of concern:

The higher the amount and variety of data collected, the more unique the data set that a single person produces. One example is website visitor identification through the browser fingerprint. It might look pretty generic at first view, but since it includes the fonts installed, the version numbers of plugins, etc., very few people actually have the same browser fingerprint.
While the data itself is usually collected in a “non-identifying, anonymised form”, the combined data sets render anonymity an illusion.
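A small sketch of how such a fingerprint could be derived (attribute names are illustrative, not what any particular tracker collects): individually generic attributes hash into a value that very few visitors share.

    # Sketch of a browser fingerprint: generic attributes combine into a value
    # that very few visitors share. Attribute names are illustrative only.
    import hashlib, json

    def browser_fingerprint(attributes: dict) -> str:
        """Hash the combined attributes into a single short identifier."""
        canonical = json.dumps(attributes, sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()[:16]

    visitor = {
        "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
        "screen": "1920x1200",
        "timezone": "Europe/Berlin",
        "fonts": ["DejaVu Sans", "FreeMono", "Liberation Serif"],
        "plugins": {"flash": "10.3.181", "java": "1.6.0_26"},
    }
    print(browser_fingerprint(visitor))   # identical only for near-identical setups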

[update 02/2012:]

The New York Times had an extensive report on how large supermarkets collect data on their customers. Although the individual data points are rather trivial (who buys what and when), the sheer volume and the fairly constant behaviour of each customer allow them to infer personal needs very precisely.

They even feature a story about targeting a pregnant teenager with baby products at a time when even the teenager’s father didn’t know (yet) that his daughter was pregnant. While this is probably a rare case, it shows that large numbers and decent data mining can not only report but even predict personal needs and wishes.


Mobile Youth and Social Networks

danah boyd has been working for years on the lives of young people and in particular on what role digital media play for them. At last year’s Aspen Ideas Conference she made three statements that I found especially interesting (beyond my general respect for her work):

  • Teenagers engage in emotional exchange with their peers, especially late at night. This is new: without (digital) media they could not meet at these hours, as they were not allowed to go out that late.
  • They don’t need or want super-immersive online worlds for their friends (like 2nd World) but meet them in asynchronous online communities. The problem here is that you can’t connect from MySpace to Facebook.
  • The best thing for them is to “take their friends along in their pocket”, i.e. on their mobile phone. But carriers wall their networks and services even more heavily than online communities do and, in consequence, “you don’t see innovations happening in mobile” on the social network side.

And this is a sad thing. As you can see here, and as we also found out in our own research, mobile communication has the potential to address exactly these wishes of young people. Already they use the technology in perhaps unexpected ways: from sending photos from the fitting room to check their new look with their peers, to subtle ring-tone patterns that tell friends about the success of a date with the latest crush.

T-Mobile’s My Faves looks like a move in the right direction because it is open to “even landlines and other networks” – it seems to be a success in the US but has been discontinued in Europe (where “other networks” were only available in one of the options). It is people who live in social networks, and these networks are not determined by a certain web framework or carrier. If carriers want to respond to that, they need to open up and get ready before the online communities do and take the lead completely.


Explorations into the edges of human

Robots and genetic engineering were dominant topics at this year’s ars electronica, entitled Human Nature. “So, nothing new…” you might think, disappointed, considering that the latest developments have already been broadly discussed in their own domains. But that is only the first impression. On second view, it appeared that “the arts” (as seen in Linz) weren’t surprised by what today’s science makes possible, either. Some artists have added scientific laboratories, complete with staff and researchers, to their toolbox, where the general public might still expect brushes and pencils.

Next generation of bio toys?

Biotechnological Palettes

The most outstanding example of this is Eduardo Kac, this year’s winner of the Golden Nica in the (never more applicable) category of Hybrid Art. Under the cryptic title The Natural History of the Enigma, he had a part of his genome combined biotechnologically with a regular petunia flower. This plant now shows fine red veins in its otherwise pink face (that the upper/inner part of a blossom is called a “face” appears a helpful coincidence for Kac). It was also Kac who had the first “glow in the dark” bunny produced in 2000, its fluorescent fur due to jellyfish genes smuggled into its DNA.

In his talk, Kac put special emphasis on the fact that the extracted part of his genome is normally responsible for detecting alien material in human blood. So not only was part of “his blood” now making the flower’s “blood” transportation system visible, it had also sneaked into the plant as an alien (with a little help from the biotechnological researchers). The result was then defined as a new life form called a “plantimal”, and this particular member was baptised (not without a wink, it seems) “Edunia”.

There were a lot of finely considered details, which together make clear that the artist didn’t want to show (only) what is technologically feasible today. Rather, he used the potential of today’s technology, which is also becoming more and more of an everyday procedure, to pursue his aesthetic goals.

This was made even more obvious (or compelling) because this year’s ars electronica gave each prize winner’s talk an accompanying lecture by a “real” scientist. Josef Penninger (Director of the Institute of Molecular Biotechnology, Austrian Academy of Sciences) explained his work on so-called knock-out mice and how he has been struggling to find the genetic causes of arthritis.

Both talks left the strong impression that the genome is just a set of bricks, and that you can design any property or appearance of a creature with the right combination of these biobricks. The audience questioned this, and Penninger conceded that all of it is less stable than we might think: “motherly love [can even] change the genome”. Still, this remark appeared more like a side note. For this piece, the deliberate expression of the artist in his final work was described as central, less so the initiation of processes one cannot quite control or (yet) fully understand.


You can even find an S(ecurity level) 1 laboratory as part of the permanent exhibition in the basement of the new Ars Electronica Center.

Robot in the mirror/Uncanny Robots

Gigantic metal monsters stampeding over planet Earth, that’s a well-known and sort of old skool techno-apocalypse. On the one hand, these monsters are already available on the market (less gigantic but at least as lethal as you might expect). On the other, research as well as the arts are often more interested in finding acceptable counterparts for humans, Sociable Robots as MIT dubs them.

The Geminoid by Hiroshi Ishiguro (from the JST ERATO Asada Project), this year’s featured artist, is an exact copy of the artist’s body in the form of a motor-driven puppet. The Geminoid is not a robot (or android) in the classic sense, because it has almost no sensors, world perception or decision-making circuits; it can’t even walk. It is controlled by an external, remote operator.

The artist’s goal is to build a puppet that serves as a credible stand-in, e.g. in a discussion at a table, providing a perfect form of telepresence. Showing a certain amount of small, involuntary movements, as is typical for humans, is among his strategies to bridge the Uncanny Valley, his “ultimate benchmark”, as Ishiguro put it himself. And while you couldn’t tell who is who in a photo, the puppet’s movements are still too slow and uneven to be accepted as humanlike. The ultimate uncanny feeling caught me (in the Ars Electronica Center’s exhibition) when I touched the puppet and felt its half-soft, half-rubber-like skin, not cold but also not at body temperature.

Ishiguro also reported that he wants to send his Geminoid to “give” his lectures at Osaka University. It would still be him talking, and he doesn’t expect his students to engage him in fierce discussions anyway. The university has declined his wish so far, and it appeared pretty much as if this caught Ishiguro by surprise.

The artist is present (through the Geminoid)

While most of us will smirk at this anecdote, it really gets to the central point of these efforts: why do we think we need a “real” person to give a lecture? And what qualifies a “really present” person over a remote-controlled puppet that performs all the necessary tasks, one that might even be indistinguishable? Which then extends to the question of how we could tell human and puppet apart anyway (especially in everyday life, where we usually don’t pay that much attention).

In addition to what you could see in the exhibition, Ishiguro is also looking into self-controlled robots. Because it turned out to be very complicated to program every possible move into a machine beforehand, his CB2 starts out as a “baby”. Just like a human baby, CB2 begins with very little knowledge about its motor capabilities and how to use them. It has to “learn” everything by trial and error, by repetition, and with external assistance (the “mother”). While it is entirely grey and bears a far fainter visual relationship to the human body than the Geminoid, this mimicking of a central human behaviour leaves you with uncanny feelings just as well.

Just like a human baby, this robot cannot stand up in the beginning. It needs to learn it by combining random movements, remembering previously successful efforts, and following its (so far human) teachers. In this context, Ishiguro also pointed out that human brains are more powerful than supercomputers but operate at a considerably higher level of noise (i.e. not everything computes logically correctly). He speculates that this noise might be key to the human brain’s learning capabilities.
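The learning loop he describes is, at its core, the familiar pattern of noisy trial and error. A toy sketch (with an invented stability score, not CB2’s actual controller) looks roughly like this:

    # Toy sketch of noisy trial-and-error learning: random perturbations,
    # keep what improves a score. Not Ishiguro's actual CB2 controller.
    import random

    def stability(posture):
        """Hypothetical score: how close a 3-joint posture is to a stable stand."""
        target = [0.0, 0.1, -0.05]                        # invented target angles
        return -sum((p - t) ** 2 for p, t in zip(posture, target))

    posture = [random.uniform(-1, 1) for _ in range(3)]   # start with random angles
    best = stability(posture)
    for _ in range(5000):
        trial = [p + random.gauss(0, 0.05) for p in posture]  # noisy random move
        score = stability(trial)
        if score > best:                                  # remember successful efforts
            posture, best = trial, score
    print([round(p, 3) for p in posture])                 # ends up close to the target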

Robot research has become more human, obviously. Not so much, or not only, by trying to copy humans, but by arriving in the same research areas as anthropologists, cognitive scientists, and brain researchers. And, despite all the nerdiness that surrounded Ishiguro, this is also his declared goal: building robots to learn more about humans.

Social Conditions

To me, the Digital Communities category has always been one of the wonderful aspects of ars electronica. This year, a whole conference day was dedicated to the topic of Cloud Intelligence. Unfortunately, the Nica winners from WikiLeaks were not part of the panels, even though they provide a very important service for intelligent societies: transparency.

The first part of the Cloud Intelligence Symposium looked at online communities from a scientific or meta level. Ethan Zuckerman (Global Voices) set out to talk about mapping online communication but ended up with the Digital Divide.

Surprisingly, he started with stories about the Marshall Islands, which barely rise more than four meters above sea level. That means you cannot navigate from one island to the next by sight. The old maps used by the indigenous people therefore depicted certain distortions in the rhythms of the ocean waves, which are caused by the islands and can thus guide experienced navigators.

Zuckerman used this as an explanation of how communication mapping can work: not observing what is there (infrastructure) but what happens (emergence). Apparently, and to little surprise, the USA, Europe, Japan and South-East Asia were all bustling places, and they are also wealthy regions. Some other countries were also in the bloggers’ focus: the ones devastated by military conflicts.

World map distorted by the number of cell phones in use – by Worldmapper

This approach surely provides better results on the “intelligence potential” than just counting registered users or the bandwidth installed in fibre cables. But looking at the installed, or rather mostly missing, high-speed infrastructure, e.g. in Africa, can also tell you that there have not been huge efforts so far to connect these parts of the world. On the other hand, and this might turn it into a chicken-and-egg problem, this might have been due to a lack of demand from a wider audience, which in turn kept the infrastructure suppliers from building. Speaking out loud what you think also has less of a tradition in these countries, most of which had, or still suffer from, authoritarian regimes.

One of Zuckerman’s findings was also that most communication, interlinking between blogs, or Facebook friendships happen on a domestic scale. “Flocking with the same” is obviously an anthropological constant that stays true in a (technologically) globally networked world. So even internet infrastructure tells you something about “human nature”.

Transcending human imagination

Besides high-tech and deeply researched artefacts, you could also find very calm ones that are no less thought-provoking. A perfect example is the machine with gears and concrete by Arthur Ganson: … While you can see it moving at its “origin” (the motor), after 12 gears of reduction no movement is perceivable at the other end. We can calculate the movement because we know the mechanics, but that just gives us numbers we cannot relate to on a human scale. In fact, the final gear will make a full turn in a trillion years or so, which is why Ganson can “safely” embed it firmly in concrete. Quite an interesting link between mechanics and philosophy…

Machine with Gears and Concrete
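For the curious, a quick back-of-the-envelope calculation shows where such numbers come from. The figures below are assumptions (a motor at roughly 200 rpm and twelve 50:1 reductions, as commonly quoted for Ganson’s piece), not measurements from the exhibition:

    # Back-of-the-envelope: how long one revolution of the last gear takes.
    # Assumed figures: motor at ~200 rpm, twelve stages, each reducing by 50.
    MOTOR_RPM = 200
    STAGES = 12
    REDUCTION_PER_STAGE = 50

    final_rpm = MOTOR_RPM / REDUCTION_PER_STAGE ** STAGES     # ~8e-19 rpm
    minutes_per_turn = 1 / final_rpm
    years_per_turn = minutes_per_turn / (60 * 24 * 365.25)
    print(f"{years_per_turn:.2e} years per revolution")       # on the order of 1e12 years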


Autonomous Assistants reloaded

Here comes the all-new and sparkling abstract of my thesis (old stuff). You might want to have a look at it and give it some comments!

In my thesis I propose the idea of a socially aware computer. In order to get to know the user’s circles of friends, it mines and analyses the data left as traces by her communication, mainly phone call logs and email archives. As a result, a value for personal or subjective importance can be computed for each person in the user’s network.
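As a rough illustration of what such a computation could look like (the formula and weights below are invented for this sketch, not the model used in the thesis), an importance value can be derived from call frequency, duration and recency:

    # Illustrative only: derive a "subjective importance" value per contact
    # from call-log entries. Formula and weights are invented for this sketch.
    from collections import defaultdict
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Call:
        contact: str
        start: datetime
        duration_s: int

    def importance(calls, now):
        scores = defaultdict(float)
        for c in calls:
            recency_weight = 0.5 ** ((now - c.start).days / 30)   # halves per month
            scores[c.contact] += (1 + c.duration_s / 60) * recency_weight
        return dict(scores)

    now = datetime(2008, 6, 1)
    log = [
        Call("anna", now - timedelta(days=2), 600),
        Call("anna", now - timedelta(days=40), 120),
        Call("ben", now - timedelta(days=200), 1800),
    ]
    print(importance(log, now))   # anna scores higher: frequent and recent calls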

This allows for a new arrangement of the personal address book, so that more relevant persons can be found more easily – an important feature given our ever-expanding and globalised personal networks.
Moreover, tasks that require knowledge about the user’s personal relations can be handled automatically. One is turning the user’s attention towards old friends who tend to be neglected when she is buried in work or always on the run in our mobile and flexible times. Another is managing access to the personal data she stores online, like photos, travel plans or the activity stream created by recent software such as Jaiku or Twitter.

Handling friends and acquaintances in such an environment opens up new challenges, which are explored by means of a visual prototype. Different ways of displaying, managing, and enriching information about related persons are developed, and results from user testing are provided.
As a preliminary study, the data sets of several people have been analysed and plotted into an interactive diagram in order to investigate the potential of the given communication data. It also offers the possibility to look for the relevant parameters that determine different types of relations (e.g. best friend or old friend).

To provide a conceptual background, existing social network theories are explored and related to personal, ego-centric ones. I take a closer look at the whole process of operationalisation, i.e. turning human behaviour into quantifiable data by statistical methods. Finally, implications and problematic consequences of both the software itself and the concept of the “network society” in general are discussed. The felt need to turn our friendships into “social capital” is one of the most remarkable shifts in the functioning of our societies. Others can draw profit from this capital if they collect detailed data to establish profiles of us and our relationships; thus, the whole field of privacy is entangled with it.
And across all these dynamics, computers become so inseparably intermingled with our daily social life that the border between our (extended) self and the machine is often hard to determine.


Visual Phone Bills

matrix visualisation cutout
Usually, your phone bill is a vast amount of numbers that nobody ever actually reads (secret services aside). It gives you some interesting details if you search for something particular, but it’s hard to get an easy overview of what happened in the last month. Now this has changed! After some weeks of tinkering with code (MySQL, PHP, HTML and some JavaScript), a few visual tools have rolled out of my workshop.

simple visualisation for phone bill
The first simple step sums up all the time spent calling each contact. Different colours for working hours and leisure time (and for the month under focus) are added for further pattern recognition, such as colleague/friend identification. First evaluations already revealed that some patterns are really characteristic of particular events in the past. That way, the visual attractiveness of certain patterns leads us to remember interesting stories attached to these dates (some of which had already been forgotten). As a nice extra, the whole plot seems to be somehow related to a power law.
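A minimal sketch of that first aggregation step, in Python rather than the PHP/MySQL stack mentioned above, with invented record fields and a crude working-hours rule:

    # Sketch of the first step: total call time per contact, split into working
    # hours and leisure time. Invented data; the original tooling was PHP/MySQL.
    from collections import defaultdict
    from datetime import datetime

    calls = [  # (contact, start time, duration in seconds)
        ("anna", datetime(2008, 5, 5, 10, 30), 420),
        ("anna", datetime(2008, 5, 5, 21, 10), 1500),
        ("ben",  datetime(2008, 5, 6, 14, 0), 300),
    ]

    totals = defaultdict(lambda: {"work": 0, "leisure": 0})
    for contact, start, duration in calls:
        is_work = start.weekday() < 5 and 9 <= start.hour < 18   # crude split
        totals[contact]["work" if is_work else "leisure"] += duration

    for contact, t in totals.items():
        print(contact, t)   # anna {'work': 420, 'leisure': 1500}, ben {'work': 300, 'leisure': 0}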

histogram of phone bill
A second graph is more oriented towards science and theory. One of the background chapters of my Master’s thesis focuses on the (mathematical) structure underlying our social networks. Some (Barabási) say all networks of free choice are governed by power laws; others (Watts) think that our network of friends is better described by a bell curve. Maybe I can deduce in reverse, from the pictures I get, what type of network is contained in a phone bill. So far, it looks as if we talk a lot to non-friends.
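A small sketch of how such a histogram can be binned to eyeball the distribution’s shape (the minutes per contact are invented):

    # Sketch: bin total call minutes per contact to eyeball whether the
    # distribution looks heavy-tailed (power law) or bell-shaped. Invented data.
    from collections import Counter

    minutes_per_contact = [310, 120, 95, 60, 44, 30, 22, 15, 12, 8, 5, 4, 3, 2, 2, 1]

    bins = Counter(m // 50 for m in minutes_per_contact)      # 50-minute buckets
    for b in sorted(bins):
        print(f"{b * 50:4d}-{b * 50 + 49:4d} min | {'#' * bins[b]}")
    # Many brief contacts next to a thin tail of heavy callers hints at a power law.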

month-hour matrix from phone bill
A third (not yet fully matured) version focuses on temporal patterns and therefore plots the month of the year against the hour of the day to locate each call. With this method I want to look for “hot” times with a lot of traffic, usually calm zones, and possible outliers.
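The underlying matrix is simple to build; here is a sketch with invented call records (a real version would read the phone-bill records instead):

    # Sketch of the month x hour matrix: count calls per (month, hour) cell.
    # Invented data; a real version would read the phone-bill records.
    from collections import Counter
    from datetime import datetime

    call_starts = [
        datetime(2008, 1, 15, 20, 5), datetime(2008, 1, 16, 20, 40),
        datetime(2008, 3, 2, 9, 10),  datetime(2008, 3, 2, 20, 55),
    ]

    matrix = Counter((d.month, d.hour) for d in call_starts)
    for month in range(1, 13):
        row = "".join(str(matrix.get((month, h), 0)) for h in range(24))
        print(f"{month:2d} {row}")    # “hot” hours show up as columns of high counts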

The work on this graphic, as well as on the others, shows that rather simple data from a phone bill can generate quite some complexity when it comes to meaningful visualisation. In order to manage this abundance of information, I want to add more options to select and filter the data set. I also need some means to increase the “resolution” (i.e. less information per area) for those parts of the graphic that the user is currently examining.


me and my network

mindmap

Basically, I will look at how computers can help us manage our ever-growing networks of friends.
I will try to make use of models from mathematical-sociological network theories and apply them to subjective, private areas (my network and I). The thesis of social objects will be part of this effort, as an alternative or an addition.
Special attention will be given to the process of operationalisation, which converts interpersonal interactions into machine-readable numbers. Which actions have to be considered, and which parameters are used in this process? At the end of such an automated analysis, a computer will have an image of our social relationships available. These considerations will be worked out as applications in the practical part of my Master’s project.
The use of new technologies to organise interpersonal relationships will inevitably change them: do we, in the end, transfer the responsibility for our social lives to algorithmic machines? Possible consequences and alternatives have to be taken into account.

In-depth description (German only so far)
