Thinking about our long-intended but never undertaken critique of the coding paradigm (and by extension "grounded theory") that underlies the dominant qualitative data analysis platforms, this came across my twitter feed:
and got me thinking in two different directions. One, it seems a good case for us to analyze and discuss the limits of this kind of approach and its way of thinking about data and analysis, not to mention "causality," and it now seems even more urgent. At the same time, though, I want that critique to revolve more around, or work more toward, an explication of differences rather than "you're doing qualitative analysis wrong..." We need to describe how our ideas about data, and about what you do with it on or via digital infrastructure, are just very different...
How data and causality are visualized is part of the problem here, and it raises the question of how we do (or would) visualize data relationships. So MAXQDA builds up to something like this:
which is based on co-occurrence of codes, which in turn depends on a rhetoric of "strength" and its visualization as thickness of lines. Maybe oversimplified, but: co-occurrence equals thickness equals strong equals proof equals true.
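To make the object of the critique concrete, here is a minimal sketch of the kind of computation that sits behind a diagram like this: count how often pairs of codes are applied to the same segment, then map the counts to line widths. The segment data and code names are invented for illustration, and this is a generic co-occurrence tally, not MAXQDA's actual implementation:

```python
from collections import Counter
from itertools import combinations

# Hypothetical coded segments: each segment carries the set of codes
# applied to it (codes and data are illustrative only).
segments = [
    {"trust", "risk"},
    {"trust", "risk", "community"},
    {"community"},
    {"trust", "risk"},
]

# Count how often each pair of codes lands on the same segment.
cooccurrence = Counter()
for codes in segments:
    for pair in combinations(sorted(codes), 2):
        cooccurrence[pair] += 1

# Map raw counts onto line thickness: the visual rhetoric of "strength."
max_count = max(cooccurrence.values())
widths = {pair: count / max_count * 5.0  # thickest line drawn at 5pt
          for pair, count in cooccurrence.items()}
```

The point of spelling this out is how little is in it: the entire chain from "these codes appear together" to a thick, authoritative-looking line is a frequency count and a scaling factor.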