Tuesday, July 27, 2010

"More German" and Learning Atoms

Responding to Heli, which you should read first...

> I love to analyze and conceptualize .. but why I have a feeling that it was not allowed here in CritLit2010? I should have been an excellent student but I only gave some fragmented knowledge and occasional comments. What is my problem actually? Fine question :)

Interesting question. It certainly isn’t because you were not allowed in CritLit2010 – there was no prohibition whatsoever against analyzing and conceptualizing.

Because it was a small course, much of your interaction would have been with me. So perhaps you felt that analysis and concepts would not have been an effective strategy in our interactions? If so, you may have been correct.

Let’s consider the question of whether interesting learning happened in CritLit2010. The usual, traditional, method of addressing such a problem is to seek out some evidence, and to infer, through a process of analysis and conceptualization, the existence of some instance of learning (perhaps a second calculation would be required to show that it is ‘interesting’).

This reflects an approach to learning where what is learned is observable, and measurable, in discrete wholes – precisely the sort of things that reveal themselves through analysis. It is, if you will, an atomistic definition of learning, where after learning we can observe some sort of increase in the mass of atoms (or perhaps an exchange of atoms, if we have had to reject old concepts along the way). These atoms (by definition?) would produce some evidence of their existence, given an appropriately designed experimental mechanism (which, in learning, is called a ‘test’ or ‘assessment’).

When I am asked to account for whether interesting learning happened in CritLit2010, I don’t want to commit myself to any such picture. Not because I think connectivism resists such an approach – I’m sure we could probably build it in, and I see no shortage of efforts among my colleagues to do exactly that (“where is the ‘learning’ in the PLE,” they ask me, as though assuming we could add some atoms of learning to the mix and detect them coming out the other end). But because the idea of ‘atoms of learning’ runs contrary to the idea of a learning network.

Let me offer an analogy to explain what I mean. Imagine that you have travelled to a new city for the first time. Imagine, especially, that it is based in a culture different from your own. You return home from the city refreshed, exhilarated. Clearly, you have “learned” from your visit. But what is the evidence that interesting learning happened in your visit to the new city?

If you were asked such a question, you would find yourself almost at once fishing for particular things you might have learned: the foreign word for ‘please’, perhaps, or the existence of a festival, or the funny way people there line up for and order food. But even were you to be able to elicit the totality of such atoms, it would still not constitute what you learned. Indeed, it would actually misrepresent what you really learned.

Moreover, even if you were not able to come up with any atoms of learning, it would be incorrect to say you hadn’t learned anything. As a result of your visit to the new city, you see food slightly differently, your understanding of social organization has become more sophisticated, your expectations of behaviour slightly changed. It may be that you cannot even articulate these new bits of learning (this is what it sounds to me like when you say “it is not easy to follow learning happenings. I cannot follow mine and I should be expert”). But this is not grounds for believing that learning did not happen, only that it is not atomic and identifiable through analysis.

What did you learn from travel to a new city? You might not be able to articulate it at all. An observer might be more perceptive, noticing perhaps a slight change in the way you pronounce words, or slight variations in your menu selections at restaurants. It would be difficult, even impossible, to articulate, and it would definitely not show up over a short period of time – some things might not become evident until you have visited your second, or third, new city.

So my response to the question “how do I know whether interesting learning happened in CritLit2010” is that the whole model of “discrete cause -> discrete effect” is mistaken here. Asking “did interesting learning happen” is an inappropriate question to ask. It treats learning as (a) something concrete, and (b) an effect, that can be reliably produced by a cause. Yes, you may be able to identify concrete things that were produced by a cause. The mistake lies in saying “aha! *this* was what I learned.” When in fact it is probably the least important of the things you learned.

So how do you know? Never mind the quest for discrete bits of learning – how do you know whether taking the course was a valuable activity? As I suggested before, an observer, familiar with your behaviour before and after, may be able to detect slight changes. Your use of language, your behaviour in certain communities, may have become more appropriate in ineffable ways. And your perceptions (untrustworthy and unreliable as always) may also offer clues: you feel a sense of dissonance, which means your existing thought patterns have been challenged, or you feel more comfortable with a group of people, or you feel a sense of exhilaration similar to what you feel after visiting a new city. Or something else.

What *would* show that learning occurred, if we could measure it, would be the formation of new connections, or strengthening (and weakening) of existing connections, between neurons in your brain. Your neural network was altered by the experience of taking the course (and, concurrently, by everything else that happened to you over the six weeks). What *would* show that learning occurred would be an isolation of those changes that happened as a direct result of the course, a comparison with prior states, and then some sort of semantic measure such that the new neural state contains ‘more truth’ than the old.

Barring such an account (and sketching the account reveals some of the absurdities, such as the idea that one neural state contains ‘more truth’ than another) we are left with vague generalizations.

But this, at least, seems true: there is no correct one-to-one mapping between (1) a verbal description of facts retained, propositions now believed to be true, or other atomistic bits of knowledge, and (2) the full description of the change of neural state that occurred as a result of the learning. We can’t get from ‘content language’, which is atomistic, to ‘neural language’, which is not.

When you think about this, I think you will see that it is true. When you think of the proposition that “Paris is the capital of France,” you see that there is no single neural state that corresponds to ‘knowing’ or ‘having learned’ this proposition. Ergo, if we say that learning is the change of neural state, then it is inaccurate to say that we “learned” that “Paris is the capital of France,” and it is a mistake to treat utterances of such propositions as evidence of that learning.

Learning is not atomic. There are not ‘atoms’ of learning. Learning is not something we can count and measure, as though it were cumulative. The assessment of learning through measurement of ‘bits of knowledge’ is fundamentally in error. A connectivist course does not try to teach ‘bits of learning’, and hence to ask ‘what learning happened?’ is the wrong question to ask. At best, we can ask only whether a person is more of a certain sort of person – are they ‘more German’ for having stayed in Germany for a month, are they ‘more of a physicist’ for having stayed in the community of physicists for a month – knowing all the while that there are no necessary or sufficient conditions for being ‘more’ of any of these, and that there is no gauge that measures being ‘more German’ or ‘more of a physicist’.

Sunday, July 25, 2010

Moncton 2010 IAAF World Jr Championships - Day 6

Second-last day. Photos here. All CC licensed, so you can use them as you like. Above is Katsiaryna Artsiukh, from Belarus, celebrating her gold in the 400 Metres Hurdles.

Tuesday, July 06, 2010

Having Reasons

Semantics is the study of meaning, truth, purpose or goal in communication. It can be thought of loosely as an examination of what elements in communication 'stand for'.

Because human communication is so wonderfully varied and expressive, a study of semantics can very quickly become complex and obscure.

This is especially the case when we allow that meanings can be based not only in what the speaker intended, but what the listener understood, what the analyst finds, what the reasonable person expects, and what the words suggest.

In formal logic, semantics is the study of the conditions under which a proposition can be true. These conditions can be based on states of affairs in the world, on the meanings of the terms (such as we find in a truth table), or on a model or representation of the world or some part of it.
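To make the truth-table idea concrete, here is a minimal sketch in Python. The helper function and the sample proposition are my own illustration, not from the text: enumerating every assignment of truth values to the variables, and evaluating the proposition under each one, is exactly the truth-table way of specifying the conditions under which the proposition is true.

```python
from itertools import product

def truth_table(prop, variables):
    """Enumerate every True/False assignment to the variables and
    evaluate the proposition under each one -- the semantics of the
    proposition, truth-table style."""
    rows = []
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        rows.append((assignment, prop(assignment)))
    return rows

# Sample proposition: P and (P implies Q), with material
# implication rendered as (not P) or Q.
prop = lambda a: a["P"] and ((not a["P"]) or a["Q"])

for assignment, value in truth_table(prop, ["P", "Q"]):
    print(assignment, "->", value)
```

On this sketch the proposition comes out true only when both P and Q are true – its truth conditions just are that row of the table.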

In computer science, there are well-established methods of constructing models. These models form the basis for representations of data on which operations will be performed, and from which views will be generated.

David Chandler explains why this study is important. "The study of signs is the study of the construction and maintenance of reality. To decline such a study is to leave to others the control of the world of meanings."

When you allow other people to define what the words mean and to state what makes them true, you are surrendering to them significant ground in a conversation or argument. These definitions constitute what Lakoff calls a “frame.”

"Every word is defined relative to a conceptual framework. If you have something like 'revolt,' that implies a population that is being ruled unfairly, or assumes it is being ruled unfairly, and that they are throwing off their rulers, which would be considered a good thing. That's a frame."

It's easy and tempting to leave the task of defining meanings and truth conditions to others. Everyone tires of playing "semantical games" at some time or another. Yet understanding the tools and techniques of semantics gives a person the means to understand the world more deeply and to express himself or herself more clearly.

Let me offer one simple example to make this point.

We often hear people express propositions as probabilities. Sometimes these are very precisely expressed, as in the form "there is a 40 percent probability of rain." Other times they are vague. "He probably eats lettuce for lunch." And other times, probabilities are expressed as 'odds'. "He has a one in three chance of winning."

The calculation of probability can be daunting. Probability can become complex in a hurry. Understanding probability can require understanding a probability calculus. And there is an endless supply of related concepts, such as Bayes' Theorem and prior probability.

But when we consider the semantics of probability, we are asking the question, "on what are all of these calculations based?" Because there's no simple answer to the question, "what makes a statement about probabilities true?" There is no such thing in the world that corresponds to a "40 percent chance" - it's either raining, or it's not raining.

A semantics of probability depends on an interpretation of probability theory. And there are some major interpretations you can choose from, including:

1. The logical interpretation of probability. Described most fully in Rudolf Carnap's Logical Foundations of Probability, the idea at its heart is quite simple. Create 'state descriptions' consisting of all possible states of affairs in the world. These state descriptions are conjunctions of atomic sentences or their negations. The probability that one of these state sentences is 'true' is the percentage of state descriptions in which it is asserted. What is the probability that a dice roll will be 'three'? There are six possible states, and 'three' occurs in one of them, therefore the probability is 1 in 6, or 16.6 percent.

2. The frequentist interpretation of probability. Articulated by Hans Reichenbach, the idea is that all frequencies are subsets of larger frequencies. "Reichenbach attempts to provide a foundation for probability claims in terms of properties of sequences." This is the basis for inductive inference. What we have seen in the world in the past is part of a larger picture that will continue into the future. If you roll the dice enough times and observe the results, what you will discover (with fair dice) is that the number 'three' appears 16.6 percent of the time. This is good grounds for expecting the dice to roll 'three' at that same percentage in the future.

3. The subjectivist interpretation of probability. Articulated by Frank Ramsey, "The subjectivist theory analyses probability in terms of degrees of belief. A crude version would simply identify the statement that something is probable with the statement that the speaker is more inclined to believe it than to disbelieve it." What is the probability that the dice will roll 'three'? Well, what would we bet on it? Observers of these dice, and of dice in general, would bet one dollar to win six. Thus, the probability is 16.6 percent.
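The three interpretations can be sketched side by side for the dice example. The function names, the simulated-roll count, and the stake/payout parameters below are my own illustrative assumptions; each function answers "what is the probability of rolling 'three'?" on a different foundation.

```python
import random
from fractions import Fraction

def logical_probability():
    """Carnap-style: enumerate the state descriptions (the six faces)
    and count those in which 'three' holds."""
    states = [1, 2, 3, 4, 5, 6]
    return Fraction(sum(1 for s in states if s == 3), len(states))

def frequentist_probability(rolls=600_000, seed=42):
    """Reichenbach-style: the relative frequency of 'three' in a long
    sequence of (here, simulated) rolls."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(rolls) if rng.randint(1, 6) == 3)
    return hits / rolls

def subjectivist_probability(stake=1, payout=6):
    """Ramsey-style: the degree of belief implied by the odds a bettor
    accepts -- staking 1 to receive 6 is fair only if the bettor's
    probability is 1/6."""
    return Fraction(stake, payout)

print(logical_probability())       # 1/6
print(frequentist_probability())   # close to 0.1666..., not exact
print(subjectivist_probability())  # 1/6
```

Note that only the logical and subjectivist answers are exact; the frequentist answer converges on 1/6 as the number of rolls grows, which is itself a nice illustration of how the foundations differ.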

Each of these interpretations has its strengths and weaknesses. And each could be expanded into more and more detail. What counts, for example, as a 'property' in a state description? Or, what are we to make of irrational gamblers in the subjectivist interpretation?

But the main lesson to be drawn is two-fold:

- first, when somebody offers a statement about probabilities, there are different ways of looking at it, different ways it could be true, different meanings we could assign to it.

- and second, when such a statement has been offered, the person offering the statement may well be assuming one of these interpretations, and expects that you will too, even in cases where the interpretation may not be warranted.

What's important here is not so much a knowledge of the details of the different interpretations - first of all, you probably couldn't learn all the details in a lifetime, and second, most people who make probability assertions do so without any knowledge of these details. What is important to know is simply that they exist, that there are different foundations of probability, and that any of them could come into play at any time.

What's more, these interpretations will come into play not only when you make statements about the probability of something happening, but when you make statements generally. What is the foundation of your belief?

How should we interpret what you've said? Is it based on your own analytical knowledge, your own experience of states of affairs, or the degree of certainty that you hold? Each of these is a reasonable option, and knowing which of these motivates you will help you understand your own beliefs and how to argue for them.

Because, in the end, semantics isn't about what some communication 'stands for'. It is about, most precisely, what you believe words to mean, what you believe creates truth and falsehood, what makes a principle worth defending or an action worth carrying out.

It is what separates you from automatons or animals operating on instinct. It is the basis behind having reasons at all. It is what allows for the possibility of having reasons, and what allows you to regard your point of view, and that of others, from the perspective of those reasons, even if they are not clearly articulated or identified.


The whole concept of 'having reasons' is probably the deepest challenge there is for connectivism, or for any theory of learning. We don't want people simply to react instinctively to events, we want them to react on a reasonable (and hopefully rational) basis. At the same time, we are hoping to develop a degree of expertise so natural and effortless that it seems intuitive.

Connectivist theory is essentially the idea that if we expose a network to appropriate stimuli, and have it interact with those stimuli, the result will be that the network is trained to react appropriately to them. The model suggests that exposure to stimuli - the conversation and practices of the discipline of chemistry, say - will result in the creation of a distributed representation of the knowledge embodied in that discipline, that we will literally become a chemist, having internalized what it is to be a chemist.

But the need to 'have reasons' suggests that there is more to becoming a chemist than simply developing the instincts of a chemist. Underlying that, and underlying that of any domain of knowledge, is the idea of being an epistemic agent, a knowing knower who knows, and not a mere perceiver, reactor, or doer. The having of reasons implies what Dennett calls the intentional stance - an interpretation of physical systems or designs from the point of view or perspective of reasons, belief and knowledge.

We could discuss the details of having and giving reasons until the cows come home (or until the cows follow their pre-programmed instinct to follow paths leading to sources of food to a place designated by an external agent as 'home'). From the point of view of the learner, though, probably the most important point to stress is that they can have reasons, they do have reasons, and they should be reflective and consider the source of those reasons.

Owning your own reasons is probably the most critical starting point, and ending point, in personal learning and personal empowerment. To undertake personal learning is to undertake learning for your own reasons, whatever they may be, and the outcome is, ultimately, your being able to articulate, examine, and define those reasons.


Interesting discussion here. My response:

Let me take a slightly different tack. I don’t endorse all the concepts here, but use of them may make my intent clearer.

Let’s say, for the sake of argument, that ‘to have learned’ something is to come to ‘know something’.

Well, what is it to ‘know something’? A widely held characterization is that knowledge is ‘justified true belief’. There has been a lot of criticism of this characterization, but it will do for the present purposes.

So what is ‘justified true belief’? We can roughly characterize it as follows:

- ‘belief’ means that there is a mental state (or a brain state) that amounts to the agreement that some proposition, P, is the case.

- ‘true’ means that P is, in fact, the case.

- ‘justified’ means that the belief that P and the fact that P are related through some reliable or dependable belief-forming process.
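As a toy illustration (the data structure and function are mine, purely for exposition, and not part of the standard philosophical account), the three conditions amount to a simple conjunction:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    proposition: str
    believed: bool          # a mental state agreeing that P is the case
    true: bool              # P is, in fact, the case
    reliable_process: bool  # belief and fact linked by a dependable
                            # belief-forming process

def is_knowledge(c: Claim) -> bool:
    """Justified true belief: all three conditions must hold."""
    return c.believed and c.true and c.reliable_process

print(is_knowledge(Claim("the sky is blue", True, True, True)))   # True
print(is_knowledge(Claim("the sky is blue", True, True, False)))  # False
```

The second case is a true belief that was not reliably formed – on this characterization, not knowledge, which is where 'having reasons' enters below.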

OK, like I say, there are all kinds of arguments surrounding these definitions that I need not get into. But the concept of ‘having reasons’ is related to the idea of justification.

Now – the great advantage (and disadvantage) of connectivism is that it suggests a set of mechanisms that enables the belief that P to be justified.


- we have perceptions of the world through our interactions with it.

- these perceptions, through definable principles of association, create a neural network.

- this neural network reliably reflects or mirrors (or ‘encodes’, if you’re a cognitivist) states of affairs in the world

- hence, a mental state (the reflection or encoding) has been created – a belief. This belief is ‘true’, and it is ‘true’ precisely because there is a state of affairs (whatever caused the original perception) that reliably (through principles of association) creates the belief.
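As a minimal sketch of one 'definable principle of association', here is the classic Hebbian rule, under which units that are repeatedly active together have the connection between them strengthened. The stimulus values, network size, and learning rate are illustrative assumptions, not from the text.

```python
def hebbian_update(weights, activations, rate=0.1):
    """Strengthen weights[i][j] in proportion to the co-activation of
    units i and j (Hebb: 'neurons that fire together, wire together')."""
    n = len(activations)
    return [[weights[i][j] + rate * activations[i] * activations[j]
             for j in range(n)]
            for i in range(n)]

# Two units repeatedly co-activated by the same perception:
w = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(10):
    w = hebbian_update(w, [1.0, 1.0])

# The connection between the two units has been strengthened,
# reflecting (or 'encoding') their repeated co-occurrence.
print(round(w[0][1], 6))  # 1.0
```

The point of the sketch is that the resulting weight is a reliable product of the perceptions, which is what makes the associated belief 'justified' in the sense just given.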

All very good. But if this is the total picture of belief-formation, then there is nothing in principle distinct from simple behaviourism. A stimulus (the perception) produces an effect (a brain state) that we would ultimately say is responsible for behaviour (such as a statement of belief).

But this picture is an inadequate picture of learning. Yes, it characterizes what might be thought of as rote training, but it seems that there is more to learning than this.

And what is that? The *having* of reasons. It’s not just that the belief is justified. It’s that we know it is justified. It’s being able to say ‘this belief is caused by these perceptions’.

(This is why I say that learning is both ‘practice’ and ‘reflection’ – we can become trained through practice alone, but learning requires reflection – so that we know why we have come to have the knowledge that we have).

Learning that ‘the sky is blue’, for example, combines both of these elements.

On the one hand, we have perceptions of the sky which lead to mental states that enable us to, when prompted, say that “the sky is blue.”

At the same time, we would not be said to have ‘learned’ that the sky is blue unless we also had some (reasonable) story about how we have come to know that the sky is blue.

What I am after is an articulation of how we would come to be able to make such statements in a connectivist environment. How connectivism moves beyond being a ‘mere’ forming of associations, and allows for a having, and articulation, of reasons.