icm2re (I Changed My Mind Reviewing Everything) is an ongoing web column by Brunella Longo

This column deals with some aspects of the change management processes experienced in almost any industry impacted by the digital revolution: how to select, create, gather, manage, interpret and share data and information, either because of an internal and usually incremental scope - such as learning, educational and re-engineering processes - or because of external forces, like mergers and acquisitions, restructuring goals, new regulations or disruptive technologies.

The title - I Changed My Mind Reviewing Everything - is a tribute to authors and scientists from different disciplinary fields who have illuminated my understanding of intentional change and decision making processes during the last thirty years, explaining how we think - or how we think about the way we think. The logo is a bit of a divertissement, from the Latin divertere, which means to turn in separate ways.



IoT waiting for Godot

About conversational computing and how much we miss a theory of innovation

How to cite this article?
Longo, Brunella (2016). IoT waiting for Godot. About conversational computing and how much we miss a theory of innovation. icm2re [I Changed my Mind Reviewing Everything ISSN 2059-688X (Print)], 5.9 (September).
Longo, Brunella (2016). IoT waiting for Godot. About conversational computing and how much we miss a theory of innovation. icm2re [I Changed my Mind Reviewing Everything ISSN 2059-688X (Online)], 5.9 (September).

Hard stuff requires not paralysis but requires going ahead making the best of the situation you are in at this point and then continuously trying to improve and make progress from then.
Barack Obama
Copenhagen Climate Summit, 18 Dec 2009

Vladimir 	How they’ve changed!
Estragon 	Who?
Vladimir 	Those two.
Estragon 	That’s the idea, let's make a little conversation.
Vladimir 	Haven't they?
Estragon 	What?
Vladimir 	Changed.
Estragon 	Very likely. They all change. Only we can’t.
Vladimir 	Likely! It's certain. Didn't you see them?
Estragon	I suppose I did. But I don’t know them.
Vladimir 	Yes you do know them.
Estragon 	No I don’t know them.
Vladimir 	We know them, I tell you. You forget everything. [Pause. To himself.] 
			Unless they're not the same...

From Samuel Beckett’s Waiting For Godot, 1953

London, 27 February 2017 - It should be clear from the previous arguments that I believe we (the pioneers, the early adopters and then the high tech giants and all the digital crowds that followed) have so far overwhelmingly and wrongly relied on a concept of governance of relationships in the digital space based on automatic connections: the assumption that relationships should be managed through algorithms that satisfy formal, predictable or transparent rules in order to (quasi-magically) determine, show and change (or keep constantly visible) the status of relationships among entities that represent both people and objects, concrete deliverables as well as immaterial values or sentiments.

In computer jargon we often refer to frameworks as the scaffolding mechanisms that allow data to be passed or exchanged through different networks and used in an interoperable way across different systems and repositories.
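To make the jargon a little more concrete, here is a minimal sketch, in Python and with hypothetical field names, of what such scaffolding reduces to at its simplest: a shared contract that two systems agree upon before any data is passed between them.

    import json

    # A minimal sketch (hypothetical field names) of the "scaffolding" idea:
    # two systems agree on a shared record shape so data can travel between them.
    SHARED_FIELDS = {"id", "type", "status", "updated_at"}  # the agreed contract

    def export_record(record: dict) -> str:
        """System A serialises a record using only the agreed fields."""
        missing = SHARED_FIELDS - record.keys()
        if missing:
            raise ValueError(f"record does not satisfy the shared contract: {missing}")
        return json.dumps({key: record[key] for key in SHARED_FIELDS})

    def import_record(payload: str) -> dict:
        """System B accepts the payload only if it matches the same contract."""
        record = json.loads(payload)
        if set(record) != SHARED_FIELDS:
            raise ValueError("payload does not match the shared contract")
        return record

    payload = export_record(
        {"id": "42", "type": "sensor", "status": "active", "updated_at": "2017-02-27"}
    )
    print(import_record(payload))

Everything this column questions happens, of course, outside this tidy picture: who agrees the contract, who maintains it, and what happens when the entities it describes have already changed.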

This concept of an efficient and controlled underlying structure that keeps all the software bits and pieces together, with the right data linked at the right time for the right use, is not at all new: on the contrary, together with sensing technologies, it is at the foundations of the Internet of Things. It is the same old story that has supported and promoted other computing ideas we have seen coming and going for at least a couple of decades - one of these was (or still is) the idea of “groupware”, from which all the modern social media descend.

A wired “togetherness” is the panacea for all IT governance problems, except for the evidence that it is practically unmanageable at scale, unreliable in terms of accuracy and often totally counterproductive for the mission or purposes of the very organisation or group of users that sponsored more automatic connections. Why? Well, I suppose at the heart of the problem is the fact that people project onto systems the fundamental social nature of being human. We find irresistible the idea of a unified or single system connecting everything magically, even when there is no actual benefit from it. It is a recurrent idea in various areas of IT administration and it is true that it can be, at times or with clear and sustained purposes, very effective. In many circumstances, “togetherness” does deliver a better world. But in the case of the Internet of Things it refers to a world of immaterial, constantly changeable and …“gelatinous” entities that we can never be sure are what they are supposed to be.

Think of a physical scaffolding structure: if something goes wrong there are serious hazards and people can be injured, but the faults can often be easily spotted. In a digital framework neither humans nor software routines can easily spot the wrong, faked or weak connections, the holes, the bugs, the wobbling pieces. Adding more data, more sensors, more connectors and so on does not change this state of affairs, unless we change designs, starting perhaps from the joints or the connections, as I have argued in icm2re 5.8, Is this your moment?

Fixity, again

Say I am... fixated on the fixity concept (introduced in the previous article, icm2re 5.8, Is this your moment?), but I believe it is indeed at the root of many never really addressed problems in computing, information security and internet governance. The best ways we have found to approximate a solution are to tell people to... forget the problem, retrain, unlearn, or archive and reinstall. But, of course, we are very far from considering it seriously.

The first basic rule to prevent the drawbacks and unintended consequences of groupthink and automatic connections consists in raising awareness about the fixity phenomenon and what it consists of: people should know that what I called fixity is the default, invisible and dangerous wall that divides us at any moment and at any stage. We are all the time talking about, sharing, uploading and downloading representations of problems, goals and goods that have already become obsolete or are no longer fit for purpose. Fixity does not stink and we do not see it, but if we do not think of it, it risks holding us back and compromising the design of any digital artefact. In sum, it may seem a tautology to say so, but we cannot manage any social risk that arises from collective behaviours in a computer mediated environment without developing a consensual and transparent way to look at this phenomenon.

It is not such an easy task: consider how common psychological and sociological knowledge about psychographics, thinking styles and personality traits has been shaping not only the way in which recruiters, human resources and project managers make decisions but also the way in which software developers and myriads of other data workers shape, model and perform their tasks. We all create, select, store, archive and delete data while dealing with static instances.

Like it or not, we have all learned to think about data in boxes and within pre-defined sets of categories and clichés, for the sake of our organisational good practices and efficiency standards. Recognising that there is a stereotype reflected in any emotional reaction or rational judgement about the data we think upon, acquire or exchange with other people seems the first elementary step to prevent fixity. No real time analytics seems so far able to prevent this or to help at all to this extent. On the contrary, real time elaborations add more complexity to the picture, especially in terms of escalation of cyber war scenarios.

The illusion that an algorithmic governance is easily affordable and achievable has started to be perceived as the big obstacle to big data development.

Fixity has not been given a name explicitly, but it seems to me that this idea has been taken into account by an ACM subgroup of American computer scientists and engineers who have recently elaborated a Statement on Algorithmic Transparency and Accountability and seven Principles for Algorithmic Transparency and Accountability.

With such a statement the ACM US Public Policy Council eventually recognises that computational models can be distorted as a result of biases contained in their input data and/or their algorithms. Decisions made by predictive algorithms can be opaque because of many factors, including technical (the algorithm may not lend itself to easy explanation), economic (the cost of providing transparency may be excessive, including the compromise of trade secrets), and social (revealing input may violate privacy expectations). Even well-engineered computer systems can result in unexplained outcomes or errors, either because they contain bugs or because the conditions of their use change, invalidating assumptions on which the original analytics were based.
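To see in the simplest possible terms what biases contained in the input data mean in practice, here is a deliberately crude sketch with invented data: a naive predictor that learns nothing but the majority label of its training sample, and therefore reproduces whatever skew that sample contains.

    from collections import Counter

    # A toy illustration (invented data) of how a model inherits the biases of
    # its input data: a naive "predictor" that always returns the majority label
    # simply reproduces whatever skew the training sample contains.
    def train_majority_predictor(labels):
        """Return a predictor that outputs the most frequent label seen in training."""
        majority_label, _count = Counter(labels).most_common(1)[0]
        return lambda _features: majority_label

    # A skewed historical sample: 90 "reject" decisions and 10 "approve" decisions.
    historical_decisions = ["reject"] * 90 + ["approve"] * 10
    predict = train_majority_predictor(historical_decisions)

    # Whatever the new case looks like, the prediction mirrors the historical skew.
    print(predict({"applicant": "anyone"}))  # prints: reject

Real predictive systems are of course far more sophisticated, but the asymmetry survives: whatever is over-represented or mislabelled in the data that goes in will, silently, shape what comes out.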

Drawing lines and blowing bubbles

The first time I tried to put forward such an idea - preventing fixity to allow free thinking and actual governance of relationships in an educational context - was in 2010.

I welcomed the invitation to attend a workshop of academic researchers and librarians who, funded by a European project, would debate the future of interoperability standards. These are seen as essential to make archives and repositories more open, exchangeable and “reusable”. The underlying assumption is that “volume, velocity, variety” lead to optimisation, to savings and to new knowledge discoveries, faster. The main focus of the workshop was on technical approaches (how to reach that stage) and all the participants were invited to produce position papers and statements that would facilitate the discussion.

It was soon clear that all the participants had in fact already shared their technical expertise and some had already made relevant experiments, for instance applying metadata standards in their opened and shared repositories. Unfortunately none had succeeded in implementing new services, nor achieved any goal or a stable new level of interoperability and integration of their repositories.

Since the engineers were, by their own admission, "stuck in engineer mode" and I was the outsider consultant in the situation, I thought it could be useful to offer some reflections about the process, the context and the type of innovations available to that extent, focussing on metadata as the most relevant requirement for interoperability of educational repositories.
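To give a flavour of what was at stake, here is a minimal, entirely hypothetical sketch of the kind of exercise those repositories were engaged in: exposing internal records through a Dublin Core-style metadata layer so that another system can harvest them. The internal field names and the mapping below are invented for illustration.

    # A minimal, hypothetical sketch of a metadata mapping exercise: an internal
    # repository record is translated into Dublin Core-style elements so that
    # another repository can harvest it. Field names and mapping are invented.
    INTERNAL_TO_DC = {
        "course_title": "dc:title",
        "author": "dc:creator",
        "published": "dc:date",
        "keywords": "dc:subject",
    }

    def to_dublin_core(internal_record: dict) -> dict:
        """Translate an internal record into Dublin Core-style elements,
        silently dropping any field for which no mapping has been agreed."""
        return {
            dc_element: internal_record[local_field]
            for local_field, dc_element in INTERNAL_TO_DC.items()
            if local_field in internal_record
        }

    record = {
        "course_title": "Introduction to Statistics",
        "author": "J. Smith",
        "published": "2009-10-01",
        "internal_grade_scheme": "A-F",  # no agreed mapping: lost in translation
    }
    print(to_dublin_core(record))

The silently dropped field is precisely the kind of detail that no standard resolves on its own: somebody has to decide whether it matters, for whom, and who is responsible for keeping the mapping alive - which is exactly where, in my experience, the consensus was missing.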

I said we should try to understand, in any particular situation, which technical solutions are likely to be the most effective and sustainable in the long term. This cannot be abstracted from the context of the uses and potential uses of the metadata, for which there must be in place an incremental process of continuous improvement.

I saw among many participants signs of irritation. In spite of being frankly quite obvious, my perspective was seen as antagonistic to more appealing disruptive and radical technical changes that would promote experiments and trials of new formatting languages or software as a service concepts. I elaborated my ideas in a one-page-only position paper for which I significantly chose the title Drawing lines and blowing bubbles (here it is, self archived) and which went even further in pointing out what should have been done and why all the metadata efforts had had so little success: there was no consensus at all on an alleged need to change the way in which data were managed by each organisation, no shared requirements useful to scope such a consensus, and no effort in defining and sharing roles and responsibilities to pursue the matter in a systematic way.

Nobody had any willingness to change the status quo, beyond a strong attachment to their own technical expertise, which they were ready to make more visible, measurable or even upgradeable by others. Nobody had any interest or will to reframe the problems they had encountered. Nobody was thinking, for instance, that the future would require confronting costs and user preferences among different technical standards and competing technologies, or verifying whether it could be worthwhile going through a review of customers' expectations in respect of the very idea of e-learning repositories.

In sum, the workshop was perceived as pretty much inconclusive from a practical point of view, like the previous ones, but, as with international talks on climate change, everybody was content to keep talking. That was possible thanks to the funding JISC had secured from the European project: we had a chance to discuss new developments in analytics and other matters, although on the specific point of interoperability there was an evident, substantial lack of initiatives and a diffuse, general pessimism about achievable goals in the near future. And as far as my position paper Lines and Bubbles was concerned, or my aspiration to get involved on a paid assignment, I did not get absolutely anything but an indirect quotation of my contribution when, the year after, the Oxford Internet Institute was able to publish a report, thanks to JISC funding, entitled Splashes and Ripples, about the evidence available on the impact of digital humanities and digital resources.

As in Beckett's famous Waiting for Godot, nothing really happened - twice: in the famous tragicomedy two characters, Vladimir and Estragon, wait for the arrival of someone named Godot who never arrives, and while waiting they engage in a variety of discussions and encounter three other characters. Nothing really happens. The interpretations of the play have been innumerable and have puzzled at least two generations of intellectuals. Beckett once said Why people have to complicate a thing so simple I can't make out and No truth value attaches to the above, regarded as of merely structural and dramatic convenience - more about Beckett's play from Wikipedia.

Conversational computing

Seven years after my Lines and Bubbles attempt to get some work within the European academic community, I see that expectations in respect of the “togetherness” and interoperability goals have sprung up again, and pretty much in the same terms. This time they come from conversational computing and bots. Once upon a time, in the 1990s, the term bot was basically used just to refer to a search engine; then Amazon's recommender system brought it to the main stage and it was very well seen as an instrumental lever for e-commerce development. Now we have bots for everyone and everywhere, with even more allure generated by the Internet of Things. Yes, things will talk to each other beautifully, everything will be so peacefully connected and in harmony! Every big player has several bot products, services, applications and projects available for such a heaven of things.

Bots have the ambition to replace the use of keyboards, switches and slow graphic interfaces and to connect everything to everything else. Amazon has Alexa, Apple has Siri, Google has Allo, while Facebook's Messenger is said to be so intelligent that it can share your commands with an army of bots offered by third parties. And that is said to be only the beginning!

In fact, bots are now virtual (software) or physical (integrated in an object or device) “assistant” robots, where the query handlers or conversational interfaces have definitely taken over any other component as the most important and valuable part of the system. Bots are now said to be able to understand human language with a completely different degree of precision compared to traditional search engines, and so they can handle or speed up innumerable tasks - and multi-tasks. They can serve the lazy as well as the busy.
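This is not, of course, how Alexa or Siri are actually built - those rely on large-scale, machine-learned language understanding - but a deliberately crude sketch helps to see the shape of a conversational interface: an utterance comes in, gets matched against an intent, and is routed to a handler. Intents, patterns and replies below are all invented for illustration.

    import re

    # A deliberately crude sketch of a conversational interface: keyword and
    # pattern matching that routes an utterance to a handler. All intents,
    # patterns and replies are invented for illustration.
    INTENTS = [
        (re.compile(r"\b(ticket|theatre|book)\b", re.I), "booking",
         "How many tickets would you like?"),
        (re.compile(r"\b(boiler|heating)\b", re.I), "heating",
         "OK, switching the boiler now."),
    ]

    def handle(utterance: str) -> str:
        """Route the utterance to the first matching intent, or ask to rephrase."""
        for pattern, _intent, reply in INTENTS:
            if pattern.search(utterance):
                return reply
        return "Sorry, I did not understand that. Could you rephrase?"

    print(handle("Can you book two theatre tickets for Saturday?"))
    print(handle("Turn the boiler off, I am away from home"))
    print(handle("What is fixity?"))

Everything that makes the real products impressive - and fragile - lies in how far the matching can be stretched by inference and trial and error before it silently guesses wrong.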

The characteristic of this new wave of very much “embedded interoperability” is that it is shaped by user demand. As such, it bravely confronts the reliability of software specifications with the fussiness and the irrationality of cultural and linguistic interoperability, instead of the over-engineered view of metadata standards: in this, bots stand at the completely opposite end of the spectrum compared to the experiments made by software engineers in the metadata and e-learning repositories field only ten or fifteen years ago.

Instead of looking at the implementation of a certain technology, bots now try to “get there” where people want to be while performing their daily tasks - from buying theatre tickets to switching the boiler on and off away from home - using shortcuts or quick and dirty ways to guess, try and learn new behaviours by inference or trial and error.

The metaphor used to design “conversational computing” interfaces has taken over.

And yet, does such massive consumer demand for bots really exist? Can it really lead to more assured interoperability? Does it really free designers, developers and users from defining roles and responsibilities in respect of a context of use?

I do not understand many of my colleagues' conversations anymore. It really looks like they are waiting for Godot. No surprise that only Oxford University could get the European - JISC funding to write about “Splashes and Ripples”: my “Lines and Bubbles” were possibly distracting from... a congregation of waits!

The Internet of Things battlefield

What went wrong? Why do we deal with artificial intelligence without intelligence? What can we do to change pace, directions and results and make the digital economy a better engineered and safer place for business, for politics, for daily life? Some think that it is just a matter of time and all the inconsistencies and mismatches will be solved by algorithms that get smarter and smarter every day - that is the message of Microsoft and IBM communications about machine learning technologies, for instance, another type of … “fixity” similar to the predictions of the Doomsday Clock. Such discourses are endorsed by computer scientists at Imperial College in the UK, so no worries: some of our best brains are on the topic!

According to Ofcom's latest report on IoT developments (Connected Future), 2016 saw an increase of 36% in the number of IoT devices connected to mobile networks. But such rapid adoption of IoT technologies does not mean an increase in security, as IoT device manufacturers are not currently implementing particularly effective security measures in their products and, in many cases, do not have entirely convincing approaches to firmware upgrades and patching in the light of emerging threats.

The report goes on to say that: In some cases, the most fundamental problems with these devices can be partly attributed to the default passwords not being changed, thereby allowing hackers to remotely gain access and install malware on them. These infected devices are then used directly or indirectly to launch DDoS attacks. The lack of security with these devices may also have an impact on the consumer's privacy, with hackers being able to gain access to personal information. Depending on the IoT device they have, it may reveal personal data related to their health, or their habits of when they leave and arrive home, leaving them vulnerable to higher insurance costs or targeted burglary. Many creators of IoT devices, who are not security minded, may be unaware of how vulnerable their products are to cyber-attacks. The GSMA has produced security guidelines on how developers of IoT devices can incorporate security safeguards into their products.
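One of the most basic safeguards the report alludes to can be sketched in a few lines. The credential values and function names below are hypothetical; the point is only that a device could refuse to expose its remote administration interface while factory-default credentials are still in place.

    # A minimal, hypothetical sketch of one basic IoT safeguard: blocking remote
    # administration until the factory-default credentials have been changed.
    FACTORY_DEFAULTS = {("admin", "admin"), ("admin", "1234"), ("root", "root")}

    def credentials_are_default(username: str, password: str) -> bool:
        """True if the supplied credentials are still the factory defaults."""
        return (username, password) in FACTORY_DEFAULTS

    def allow_remote_access(username: str, password: str) -> bool:
        """Refuse remote administration while default credentials are in place."""
        if credentials_are_default(username, password):
            print("Remote access disabled: change the default password first.")
            return False
        return True

    allow_remote_access("admin", "admin")                 # blocked
    allow_remote_access("admin", "s0me-l0ng-passphrase")  # allowed

That so many devices ship without even this kind of check is, I think, less a purely technical failure than a symptom of the missing innovation models and policies discussed below.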

Is this the way forward? As far as I can say, security recommendations are the last thing that developers and inventors of new IoT devices and applications tend to think about, particularly when they concentrate on conversational interfaces, because this is not what the market demands. But I would not blame them for that: it seems to me there is a gap, or a fault, in terms of innovation models and policies for this new battlefield.

There is something else we still very much miss here: in both the corporate and the academic worlds there are few directions to rely on. There is no “unified” theory of technological innovation to help vision, leadership and strategies for digital developments.

Looking back at the past twenty years of talk about digital innovation and IT governance, I cannot see much other than theoretical propositions, mostly entrapped by literary, sociological and philosophical visions and volatile business models. They may be honestly agreeable in some circumstances, but they are inadequate in practical terms: romanticised, overoptimistic and, often by their own authors' admission, elaborated with a limited scope, expertise and vision in mind.

Theories, theories…

Biologist and philosopher Jean Rostand allegedly said once that theories pass, frogs remain.

I guess we could say that is, in a nutshell, what data engineering is all about.

I feel sometimes like the lucky frog who escaped the trap of ending up boiled in a theoretical pot. What has saved me in all circumstances is the ability to focus on problem solving and then look at the technologies available at a certain point in time and space: it is then not impossible to make the right choices. Nobody assures us that these will also be the right choices tomorrow.

The truth about innovation is that people tend to either plunge into it or resist it at all costs. We do not have a unified and agreed theory that explains what works and why.

I remember that I found particularly useful at first, when I started my own digital agency business in the mid 1990s, The Innovator's Dilemma, a popular book by Clayton Christensen published in 1997. That was great advice, at the time, to rely on while selling services for the startup of websites or new intranet services to large organisations. Hundreds of organisations worldwide, including the Pentagon, learnt from Christensen's book how to quickly set up special independent R&D, technical or commercial units to deal with disruptive competitors or fight terrorism. But the same author has vigorously tried to explain, a few years later, that the advice was meant to respond to competitive pressure and that a more complex way of dealing with innovation should be taken into account most of the time, considering the context of the organisation - I had myself a case in which the customer was prepared to set up a joint venture or even consider buying a small startup when they just needed an intranet connecting various departments.

The notion of the socio-technical frame, as first defined by Patrice Flichy in 1995, came to help me in other circumstances, namely when I first found myself puzzled by the design of complex organisational workflows that had been judged efficient by the customer and their suppliers but that I was pretty much unable to describe in writing without laughing, as they were not what my peers would have recommended as best practice at the time. Flichy too has more recently returned to this notion and reviewed it in the context of data processing, which is what I have done myself for the last ten years: in fact, there is a frame of functioning that precedes the actual usage, and for this we do not have, again, a unified theory that gives certainty about the best way to proceed, particularly in respect of the mentioned challenges of cyber security and interoperability. I quote the author here: Companies face a different kind of complexity, stemming from the diversity of the activities that need to be taken into account: production, procurement, sales, finances, human relations, etc. The issue is not the globalising of geographic space, but the articulation of different functional spaces. […] Thus, when considering planning and architecture, IT specialists position themselves between two discourses: that of IT coherence, and that of the cohesion of the use project. Even seemingly technical choices, such as the structure of the database (centralized or decentralized) and the open or closed nature of the data, also reflect an institutional and organisational position. (1)

If it is true that the construction or production of an information system is first and foremost a collective activity, both in its elaboration and in its use (Flichy, 2013), then nobody really knows how to define requirements and specifications that satisfy different collectives.

Will all the interconnected things help?

Nobody knows, but ...let’s keep talking.

Notes

(1) Flichy, Patrice (2013). Making Information Visible: A Socio-technical Analysis of Data Processing. Translated from the French by Elizabeth Libbrecht. Réseaux, 2013/2 (No. 178-179), 55-89.