icm2:re (I Changed My Mind Reviewing Everything) is an ongoing web column edited and published by Brunella Longo

This column deals with some aspects of the change management processes experienced in almost any industry impacted by the digital revolution: how to select, create, gather, manage, interpret and share data and information, either for internal and usually incremental purposes - such as learning, educational and re-engineering processes - or because of external forces, like mergers and acquisitions, restructuring goals, new regulations or disruptive technologies.

The title - I Changed My Mind Reviewing Everything - is a tribute to authors and scientists from different disciplinary fields who have illuminated my understanding of intentional change and decision making processes during the last thirty years, explaining how we think - or how we think about the way we think. The logo is a bit of a divertissement, from the Latin divertere, meaning to turn in separate ways.


Is this your moment?

Sensing, mining and governance of links and likes

How to cite this article?
Longo, Brunella (2016). Is this your moment? Sensing, mining and governance of links and likes. icm2re [I Changed my Mind Reviewing Everything ISSN 2059-688X (Print)], 5.8 (August).
Longo, Brunella (2016). Is this your moment? Sensing, mining and governance of links and likes. icm2re [I Changed my Mind Reviewing Everything ISSN 2059-688X (Online)], 5.8 (August).

London, 14 February 2017 - In I am not Karen. Big data and information literacy I defined seven main qualities of data and information artefacts we should look at or take into account when we design, code and implement, gather, select, search, share and disseminate digital products and services. I did not include an eighth factor affecting the perception, and therefore the social dimension, of big data and online communications, since it is not a property of such artefacts but a way of seeing and interpreting connections in a certain context and at a certain point in time or - when applied to algorithms and controls - a way of modelling and coding software.

In this article I discuss this characteristic, which I believe is fundamental to the governance of relationships and connections among digital entities - what I previously called the management of positive and negative groupthink.

The fixity of things

The groupthink phenomenon is enabled by and mainly originates from what I have called the fixity of things: this characteristic or property of digital objects is in my opinion much more relevant in the internet environment than in the physical world. In fact, in the digital world everything exists just as a representation of something else, either a physical object or an intangible concept, and its creation and changes of status are traceable back to an originator. When things change and their older representations take over a process, a communication act or a transaction, we see the negative, quite corrosive impact of their fixity.

In the physical world, a number of notions and consolidated theories from psychology, social sciences and other disciplines can guide us to recognise a wide variety of phenomena or problems created by outdated building regulations, or by emotional attachment to people and products that do not exist anymore. Fixity is, at the end of the day, in our perceptions and evaluations. However, whilst in the physical world it is relatively easy to spot or recognise, in computer mediated communications it is mostly invisible and often undetectable, while its unintended consequences have an enormous impact on human behaviours and interactions.

Fixity is, in fact, the essential component of social contagion. It affects groupthink as well as inference rules. It does not have any particular negative or positive connotation: it is just how our emotional and cognitive apparatuses work in the digital environment, trusting static entities and stances, determining the successful propagation of memes, catch-phrases, hashtags, rumours and word of mouth stories, as well as the loyal behaviours conforming to certain values or prescriptions within closed groups.

Positive and negative externalities

The design of procedures and algorithms that take fixity into account is the most complex challenge in data engineering and data management at present. It requires first of all an assessment of the risks, costs and ethical implications of its consequences, or externalities.

I have to insist on the neutrality of the notion per se, to prevent the perception that fixity is necessarily a negative attribute: many young software engineers, especially in the financial sector, are fascinated by the idea of live, dynamic calculations, for instance, and tend to believe they need to prevent and avoid fixity to ensure better data quality through the power of real time computing and live, shared data. Nothing could be more deceitful, as abundant evidence from the cyber security sector has demonstrated.

What guarantees that individuals and organisations rely on the right data for their purposes, without incurring unintended consequences, is first of all the explicit recognition that a fixity dimension, problem or aspect exists throughout the data lifecycle, and that the only way to manage the risk of dealing with "fixed", outdated, imprecise or wrong data is to use and manage metadata (for which a number of standards exist, though they are very often completely ignored by the very same experts). The risk starts with the design, and often does not even end with the disposal of records and other digital artefacts (the notorious problem of the "right to be forgotten" originates exactly from the difficulty of ensuring the erasure of information once it has been created in digital formats). It does not matter whether the data are static, dynamic, historical or live. The point is that they must be highly dependable - that means trustable and fit for purpose - in relation to a current workflow, process, dataset, software routine and so on. As such, they should also be more robust against the risks of tampering and manipulation.
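As a minimal sketch, in Python, of what managing fixity through metadata might look like in practice - the field names, values and validity threshold below are hypothetical illustrations, not taken from any specific metadata standard - consider a record that carries its own provenance and expiry information, so that downstream processes can judge its dependability rather than trust the "fixed" value blindly:

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class Record:
        """A data value carried with the metadata needed to judge its fixity."""
        value: str
        source: str               # originator the representation is traceable to
        recorded_at: datetime     # when this representation was captured
        valid_for: timedelta      # how long the source considers it dependable

        def is_dependable(self, now: datetime | None = None) -> bool:
            """True while the record is within its declared validity window."""
            now = now or datetime.now(timezone.utc)
            return now - self.recorded_at <= self.valid_for

    # An address captured in 2015 is still "fixed" in the system, but the
    # metadata tells downstream processes not to rely on it without checking.
    address = Record(
        value="1 Old Street, London",
        source="customer-onboarding-form",
        recorded_at=datetime(2015, 1, 10, tzinfo=timezone.utc),
        valid_for=timedelta(days=365),
    )

    if not address.is_dependable():
        print(f"Stale data from {address.source}: re-verify before use")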

Many business processes and successful information and communication technologies rely on a positive fixity: identifiers that stick to people, like your National Insurance number or your driving licence number, are quite effective to that extent. Essential human activities and algorithmic tasks assume that this type of fixity is always in place, particularly when we deal with medical records or other important personal data. From construction to manufacturing, most industrial know-how is based on a similar assumption that things do not change over time or due to external factors, and this is an implicit guarantee of reliability and integrity. A precise and documented reason for change is required in controlled environments to ensure that change affects processes and data according to defined rules.
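A minimal sketch of such a controlled environment, assuming (hypothetically) that any update must carry a documented reason and leave an audit trail behind it - the identifier shown is invented:

    from datetime import datetime, timezone

    class ControlledValue:
        """A value that changes only with a documented reason, keeping an audit trail."""

        def __init__(self, value: str, reason: str = "initial capture"):
            self._value = value
            self._history = [(datetime.now(timezone.utc), value, reason)]

        @property
        def value(self) -> str:
            return self._value

        def change(self, new_value: str, reason: str) -> None:
            # Refuse any change that does not come with a documented reason.
            if not reason.strip():
                raise ValueError("a documented reason for change is required")
            self._value = new_value
            self._history.append((datetime.now(timezone.utc), new_value, reason))

        def audit_trail(self) -> list:
            return list(self._history)

    nino = ControlledValue("QQ123456C")  # an invented National Insurance number
    nino.change("QQ654321C", reason="transcription error corrected after document check")
    for when, value, why in nino.audit_trail():
        print(when.date(), value, "-", why)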

And yet, in innumerable other contexts, fixity is not such a positive or negative quality per se. It is, instead, a burden when we need to get rid of some information or update it as soon as possible, or a lucky circumstance when we need a change of data to be delayed, according to scheduled plans or a complex framework of interdependencies. Think, for instance, of the simple cases of your old address, when you want your correspondence redirected to your new home but not before a certain date, or of the date on which you prefer to switch from an old to a new telephone number or email address. In corporate or industrial environments, where this type of change can affect thousands if not millions of devices or objects, there is often a formal change management or configuration management process in place.
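The address example can be sketched as an effective-dated record, where the old value deliberately stays "fixed" until a scheduled switch-over date (the dates and addresses below are of course invented):

    from datetime import date

    # An effective-dated attribute: the old value stays in force
    # until the scheduled switch-over date, then the new one takes over.
    address_versions = [
        (date(2010, 5, 1), "1 Old Street, London"),
        (date(2017, 3, 1), "2 New Road, London"),  # redirection starts on this date
    ]

    def address_on(day: date) -> str:
        """Return the address in force on a given day."""
        current = None
        for starts, value in sorted(address_versions):
            if starts <= day:
                current = value
        if current is None:
            raise ValueError("no address in force on that day")
        return current

    print(address_on(date(2017, 2, 14)))  # still the old address
    print(address_on(date(2017, 3, 15)))  # the new one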

Negative fixity seems, on empirical evidence, to depend largely on the reuse of data in a fragmentary and exploitative way, without reference to the context that originated the data in the first place.

Extraordinarily enough, in the digital environment negative fixity is easy to prevent and correct automatically: from simple parsing utilities to more advanced analytics, the market is full of software products that do exactly this job of constantly aligning and updating data between different processes and systems.
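A toy version of such an alignment check between two systems holding supposedly the same data - the system names and records are invented, and real products of course do this continuously and at scale:

    # Two systems holding the "same" customer data have drifted apart.
    crm = {"c001": "alice@example.com", "c002": "bob@example.com"}
    billing = {"c001": "alice@example.com", "c002": "bob@oldmail.example"}

    def find_misalignments(a: dict, b: dict) -> dict:
        """Keys whose values differ between the two systems (missing keys count too)."""
        return {k: (a.get(k), b.get(k))
                for k in a.keys() | b.keys() if a.get(k) != b.get(k)}

    for key, (in_crm, in_billing) in find_misalignments(crm, billing).items():
        print(f"{key}: CRM has {in_crm!r}, billing has {in_billing!r} - flag for update")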

The context of applications or algorithms also determines the way in which positive and negative fixity is managed: in an industrial or health and safety context, the rapidity of feedback and reactions can bring negative fixity to an almost immediate, emergency halt (a manufacturing production line stops, and so does a pacemaker). In a media or financial context there are additional risks that negative fixity is treated as a profitable asymmetry, used to sell you "fixing" services such as money advice to improve your credit ratings or PR services to adjust your reputation and public image.

You can draw your own conclusions here about the importance of fixity in respect of the main theme I am trying to address, that is, the governance of relationships within the digital environment. More on the Internet of Things dimension in the next article.

Fixity in time and space

The whole web is at the same time a space of communication and interaction among people and software routines AND a delivery medium for specific contents to be accessed over time and space, and fixity plays, in all its varieties, a great role in both. In the first dimension (time), the web exploits and values the dynamism and synchronicity of people's interactions (with or without a real time option), fluctuating from one social space to another (people migrating from Facebook to Instagram and so on), where audience groups and single users can recognise each other because of the persistence and interoperability of digital IDs, such as nicknames and pictures, email addresses and avatars, as well as because they apparently share the same recognisable elements of a certain discourse.

In the second dimension (space), what really counts to make the Web an efficient delivery medium is the fixity, or integrity, of a certain content, stored in a permanent digital format - such as a specific document type - at a precise point in time.
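This is the sense in which digital preservation practice speaks of fixity checks: a cryptographic checksum recorded at a precise point in time makes any later change to the content detectable. A minimal sketch, with an invented document:

    import hashlib

    def fixity_digest(content: bytes) -> str:
        """SHA-256 digest recorded as the fixity value of a stored document."""
        return hashlib.sha256(content).hexdigest()

    document = b"the article exactly as published, byte for byte"
    expected = fixity_digest(document)  # recorded at ingest, at a precise point in time

    # Later, after storage or transmission, recompute and compare:
    assert fixity_digest(document) == expected, "content no longer matches its fixity record"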

Fixity is therefore a multidimensional concept that applies to any single bit of data we rely on, making the management and control of data granularity and scale a diabolically fascinating matter in terms of design and engineering. While we enjoy the web as a space for interactions, or when we upload and download documents, we are most of the time unaware of this multidimensional concept, and that is what increases the risk of data mismanagement.

Fixity affects our perception of the environment we are immersed in, and of the opportunities or affordances that other people, brands, institutions and groups can offer us. We do not normally want to see and monitor how everything is constantly changing around us. We are only relatively able, as humans, to notice the acts of production or interaction that cause some contents - user generated contents or traces of our online activities - to be left behind and re-used, augmented, translated or referenced for completely unimagined and unintended purposes. These can create conflicting stances and frames of use, and leave the door open to cyber criminals. But I have rarely seen people able to grasp the fixity notion per se while defining business cases and requirements, with the explicit intention of preventing the dystopia it causes. I will return to the possible organisational response to this in the last of these articles on the governance of relationships.

The social sciences perspective

Before considering the organisational response to fixity - a complex governance response not always available to small projects or small businesses - I have found it practically convenient to look at it from other perspectives that are often intertwined with our technological solutions, starting with the social sciences and with the worlds of constructivists and actor-network theory experts. Can actor-network theory or constructivism prevent the unintended consequences of fixity?

Unfortunately, the short answer is no. On the contrary, my experience is that more action within a familiar frame of reference perpetuates and even exaggerates biases and phenomena like attachment, so that people tend to get stuck in their role-plays, routines and representations of a certain social reality.

There is plenty of literature on such dynamics, but I think Bruno Latour produced a little masterpiece with a very singular study in which he analysed the production of truth in administrative law processes (The making of law: an ethnography of the Conseil d'Etat, 2002). He writes: "in the last thirty years I have done much field work to define the scientific way of establishing connections: what I called 'reference'".

Latour anticipated a terrifically effective and innovative way to exploit fixity, immediately copied by media and social media to innovate genres like talk shows and reality shows, and then to invent new genres of communications and campaigns on social media. Conversations acted out by genuine participants as well as by trolls or fake accounts can be orchestrated and engineered, irrespective of many variables, towards a certain predictable, deterministic outcome. Actors can in fact act within settings in which their expectations and judgements and a certain predictable sequence of operations (or procedure, or process) are glued together in threads that freeze almost any critical thinking and any possibility of independence or variation from the desired outcome. Fixity in these circumstances is what allows the predictability of interactions and communications; it reinforces and reassures people. In a certain sense, fixity triumphs with its way of establishing or confirming connections and trust within pre-determined social or professional boundaries. Cages for individuals, deadlocks for groups: the performed actions rarely allow a new combination of factors and solutions to emerge. When this happens, it is because actors have deliberately made very relevant changes and interventions at the procedural level.

In that respect, Bruno Latour's ethnographic exercise can be seen as morally aberrant, no less than some reality shows or social research studies conducted as ANT (actor-network theory) experiments within specific knowledge domains (or social settings): such initiatives do not produce any new understanding of the problems observed within a group or community, nor any new solution. On the contrary, the method relies on the observation of a natural convergence of people towards a predictable fixity of opinions, biases and expectations, reflected in procedural compliance or usual, common behaviour.

Thanks to such fixity of things, media and social media offer a hyper-real and almost parodistic representation of human behaviours that, as in theatre, can in some ways be idealised and seen as cathartic, or as revelatory of the actual realities or problems in a certain context, or even as forcing people towards awareness of the meta-communication layer always present in human interactions, enabling the possibility that a call for change becomes visible and can be endorsed and discussed by decision makers, politicians, journalists and so on.

Unfortunately, like investigative journalism, actor-network theory does not change anything per se. On the contrary, cyber criminals design scams that work with reactions obtained in a matter of minutes or seconds, no matter how aware you are of a certain problem, bug or data security breach.

Bruno Latour's exercise demonstrated that it is perfectly possible to make visible the internal chains of associations and the patterns of similarity that justify the creation, or construction, of data serving a certain social truth or plausible framework within a field of professional expertise or within a social environment, and at the same time to change absolutely nothing. Nothing about the phenomena observed becomes clearer. The exercise in itself tends sooner or later to lead people to boredom. They will then seek a change of setting, rather than calling or caring for change at the cognitive or normative level. In sum, the ANT grammar that Latour showed to scholars and media operators alike leads to disengagement, apathy and emotional detachment.

The engagement obtained through ANT exercises remains functional and instrumental, and rarely leads people to see the fixity dimension of their own connections and perceptions.

Exactly while Latour was writing his Conseil d'Etat study, I had the fortune to learn from my own pioneering e-learning and social media experiments that we are always quite lazy in initiating new learning tasks. It is extremely rare that we approach a new problem or a social situation with a systematic plan to learn everything that would lead us to a reduction of ignorance, or to prevent asymmetries (between our knowledge and the knowledge of others) from fooling us.

The social physics perspective

For more than ten years now, network and software engineers have been imagining that they can use social network analysis and data capability (or the open data massively available through the world wide web) not only to reinvent advertising and media content but also to measure, model and, in the end, monetise all the interactions occurring among human beings over certain networks, through a myriad of objects (physical and digital).

Data on the actual behavioural choices of consumers have been intensely scrutinised and profiled in the hope that they can compensate for the lack of intelligence, temperament and sentiment that is typical of any fuzzy logic or neural network. In many cases we now have excellent applications - recommender systems, for instance.

The internet of things has added emphasis on the power of influencing consumer choices by gathering, mining, monitoring and analysing big data and producing marketing insight. It has generated extraordinary expectations not only in the media sector but also in consumer electronics, healthcare, retailing and, more recently, transport and government.

Enormous expectations have been created by the diffusion of sensors and real time analytics: sensors in conference badges, sensors in wearables and of course sensors in smartphones, tablets and other electronic devices can map and share across networks massive amounts of personal data, useful to calculate and determine in real time where we are, what we are likely to be doing, what we are likely to buy, eat or say, who we are likely to meet, when we are likely to leave a certain location, and so on and so forth.

The delusions so far have been so tragicomic that the end of the road for the Internet of Things could reasonably be predicted from its perfectly engineered beginnings: interactions among bots can be easier to manage than interactions among human beings, but many of the experiments and services that exist so far call for more design, more planning, more testing and less naivety about the transferability of expertise from the industrial world into the consumer world.

In the consumer world, any wrong association is a turn of the screw in the wrong marketing direction and exposes personal details to horrific chains of exploitation that impact human lives, from credit rating disasters to iatrogenic deaths: fixity does not create any learning or change opportunity, it just multiplies the mess of 'stuff'.

"At the end of the day, social ties are a kind of content", writes Daniel Trottier (1), a researcher who has extensively investigated the use of social media for policing, surveillance and intelligence purposes from a socio-psychological perspective. "Sites like Facebook", he continues, "turn social connections into visible, measurable and searchable content. This adds a dimension of visibility to the study of social ties and social capital, which indicates that 'who you are' has always been a reflection of 'who you know'. With social media this has become a standard feature for profiling individuals. Not only are a user's social ties visible, but others can also make inferences about private information on the basis of friends' publicly accessible information."
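A toy illustration of the kind of inference Trottier describes - guessing an undisclosed attribute from the most common value among a user's friends (all names and attributes below are invented):

    from collections import Counter

    # Publicly disclosed home cities; "dana" discloses nothing herself.
    public_city = {"alice": "London", "bob": "London", "carol": "Leeds"}
    friends = {"dana": ["alice", "bob", "carol"]}

    def infer_city(user: str) -> str | None:
        """Guess an undisclosed attribute from the most common value among friends."""
        known = [public_city[f] for f in friends.get(user, []) if f in public_city]
        if not known:
            return None
        value, _count = Counter(known).most_common(1)[0]
        return value

    print(infer_city("dana"))  # -> "London": never disclosed, merely inferred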

It does not seem that the enthusiastic data scientists who embraced social physics have taken into account phenomena like fake profiles, trolls, multiple identities and alternative facts, which make the whole of this social-ties content complete fiction, impossible to consider reliable for any intelligence or trusted communication purpose.

The vision of an interconnected and interoperable world of people and objects sharing data has fascinated thousands of young and mature social scientists, policy makers and engineers, in spite of the recurrent failure of such computational dreams: can we really imagine reinventing our societal systems and institutions within controlled, engineered and therefore humanly predictable frameworks?

In 2014, after a decade of experiments and talks about social physics promoted globally with the endorsement of institutions of the calibre of MIT and the World Economic Forum, the best known author and researcher in this visionary field of engineering, Alex 'Sandy' Pentland, published an article in "Scientific American" significantly entitled Saving Big Data from itself, in which he recognised a problem he had not considered in previous articles and papers: personal data are used and abused without control, and that compromises the quality of any algorithmic effort.

Pentland has since corrected his approach and suggested that the solution to the trust and reliability problems is to be found in "open personal data stores": repositories protected through distributed encryption and safe transmission and storage procedures. This concept of trusted personal networks - self validated, self legitimised, self audited, robust and secure - found its adepts within universities and large utility companies, at least for a couple of years, reaching out also to government departments and advertising networks.

The idea that we can increase online safety, confidence and confidentiality in the digital economy using pseudo-identity mechanisms is either a very poor one or still at a very early stage. In fact, it does not seem at all true that people really care about their own data in cyberspace unless they have reached some level of deep dissatisfaction or abuse. And even in such dramatic circumstances, it is often better, easier and more convenient simply to forget than to try to fix the wrong or misleading data still available to third parties, or to rely on the option of their definitive erasure - the legal costs of which can be endless.

Unfortunately, Pentland's vision has gone in the opposite direction: he concluded that constant transparent experimentation with big data procedures is the only way to find out what works with big data. That means, in my view, exposing consumers' representations and connections to a permanent, excruciating state of cyber-pillories and victimisation. He is (or was) also convinced that the live dimension of data management can help fix the… distortions of fixity (sorry for the pun) when he says: "most persuasive systems are designed offline and subsequently lack the flexibility required to personalise or adapt messages to the usage context. This can only be remedied by real time analysis of human behaviour during interactions" (2). My opinion is that no matter the level of real time analysis that could be profitably used in the design of artefacts, if such design does not take into account fundamental principles, including fixity, it will never reach the level of quality required to automate interactions. It is of no relevance whether the data come from historical datasets or from live surveillance or sensor computing.

Conclusions

Fixity is a multifaceted notion, at present not at all considered by a number of disciplines and best practices that deal with data science and data engineering. It often masquerades, even to information management experts, under the innocuous notion of up-to-date information, showing up with its burden of risks only in case of harmful errors. I have myself come to understand how pernicious it is through personal experience and the study of cause-effect relationships more than through data analysis or analytics per se.

Acknowledging that interfaces and their contents are always changing, or that social media content is easily re-contextualised, augmented, mashed and framed into others' discourses, does not seem to lead researchers, law enforcement and regulators - and not even the majority of designers and software developers - towards the logical conclusion that we should simply distrust analogical associations without a proper functional model in place, especially when these are fostered within digital environments with a pretence of authoritativeness coming from more applications of artificial intelligence.

To move away from a state of fixity, as in physics, our perception needs a moment: the shock that comes from the overwhelming evidence of a force going in the opposite direction. And yet, it is very unlikely that designing reactions to intrusive stimuli, determined by elaborations and calculations of sensed personal data, can be the practical, technological, sustainable and ethically acceptable way forward.

Common sense and everyday experience say that going out for a little walk, taking a bit of fresh air, moving around, being distracted and learning something new are the types of experiences that expose our mind to the opportunity of looking in different directions.

You do not need to expose yourself to extreme sports or adventures to change lifestyle: to trigger a challenge to the fixity of your habits and thoughts within a certain social media circle, for instance, you just need to unfriend some acquaintances, change the frequency of your Facebook likes, vary the incidence of positive and negative words in your tweets, change your location, communication network or internet service provider, or use a different computer. Give yourself a little moment, the splash of a surprise or just a cold shower.

Simple actions like these, as cognitive behavioural therapy has effectively demonstrated to everybody for years, have an enormous impact in terms of dismantling the fixity of words, impressions and images that traps our minds.

I do not deny at all that there is an element of fascination with sensed data. I believe it is legitimate for everybody to look at the commercial and creative exploitation of a new scientific domain aimed at "the modeling, analysis, and synthesis of social behaviour, particularly the non verbal aspects" (3), but I believe nobody really needs such a domain in place if it treats human beings as production lines or predictable automata.

The possibility that applications of artificial intelligence make positive or negative inferences in connection with some occurrences of our own data, but pretty much disconnected from our actual self, faculties, sentiments and intentions, is most of the time a sterile exercise. It is in some ways as aberrant as any expression of subjugation, dominance, violence or absence of humanism in human relationships. In the end, human beings are not Rubik's cubes.

Notes

(1) Trottier, D. (2012), Social media as surveillance: rethinking visibility in a converging world, Ashgate.
(2) Pentland, S. et al. (2013), Understanding and changing behaviour, in IEEE Pervasive Computing, 12(3):18-20.
(3) Pentland, S. et al. (2015), New Social Signals in a New Interaction World, in IEEE Systems, Man and Cybernetics Magazine, April.