Showing posts with label Social Systems.

Monday, May 5, 2008

Gina Neff: Work and Power

Note: I have resolved to: (1) make my posts shorter so they stop eating my life, and (2) swerve, with a deft flick of the steering wheel, from my former outline of stuff that I was going to cover, to concentrating on my "Technology as Social Intervention: Discuss" topic, where I hope to learn more and rant less. It's all the same basic subject, though, so you may not even notice the difference.

That said, I went out a few weeks ago and interviewed Gina Neff, who is faculty at the UW Department of Communication. Gina is very taken with the concept of "work and power," and I wanted to ask her: what's the connection between the two? How do organizational structures dictate how power gets allocated to its members? And what happens to those power structures-- or to the communication dynamics of the org as a whole-- when you introduce new problem-solving technologies? If you are also geeky enough to find these topics interesting, you will find some of Gina's answers to those questions in the following few blog entries.

Information, Power and Tools

Gina has been studying these types of questions for years, and she has seen organizations' implicit power structures change radically with the addition of new technological tools, "magnifying existing power disparities," she says, "or breaking them down." The power-holders in an org may try to restrict how a tool is distributed or employed, or may even rally against it, if it seems to have the potential to redistribute the power to make things happen. Alternatively (as in the following example), a tool might level the playing field, causing an initial chaos that leads to large changes in the org's workflows and the way its members define their own roles.

Gina is currently undertaking a study about the adoption of building information modeling tools in the construction industry. She explains:



Historically, contractors (the folks who build the buildings) and architects have lived on opposite sides of the organizational divide. They spoke different languages and had different goal sets; they communicated via blueprints. This mutual organizational isolation allowed each group a lot of control over its sphere, but frequently made collaboration a painful, contentious mess. Each group guarded its information and worked at cross-purposes to the other, with miscommunications leading to mutual stereotyping, which itself helped reinforce the divide.
Gina is studying a transition that's taking place right now, before her eyes: Today, builders and architects are beginning to share their visions via 3-D computer graphic tools and databases that represent the building being built. In other words, these groups are adopting a communications- and design-based technological innovation, and it is creating dramatic changes in the way they work together. The stereotypes are being put to the test as the groups are forced into proximity with one another, and each silo's private language is being opened up to the other. As Gina describes it: "Their entire communications infrastructure has been channeled into different visual symbols, and is hardwired through different network pathways." Each group is also, in the process, losing some of the autonomy that came with that defended isolation.

Heterophily: Difference and Group Intelligence

It's not far-fetched to imagine that switching the wiring in an organization's communication structure could lead to huge changes. Cultures large and small, since the dawn of civilization, have kept themselves alive by employing one or another form of isolation: a mountain range, a separate language, secrecy, stereotyping, a forbidding initiation rite. Jews, for example, have kept Jewish culture alive, despite the diaspora, with the aid of lengthy and complex conversion processes, services conducted entirely in Hebrew, and dietary restrictions that can help limit whom Jews eat with. If you move a culture's boundary devices, you change the way the culture lives. Build a highway, raise children bilingual, install a phone system or the internet: suddenly you find cultures blending, changing, and questioning the way they do things.


The contractors and architects in the system Gina is studying have historically been heterophilous. "Heterophily" is an amusingly polysyllabic term for "different in a way that makes communication hard." The words heterophily and homophily describe two ends of a spectrum: on one end, you have two groups (or individuals) who are different to the point where they can't communicate at all (an American economist and a Bolivian witch woman); on the other end, you have groups who are so similar that communication between them is easy, but totally uninteresting (an American economist and an American economist ;) ). They have nothing to say to one another that they don't already know.

Want More of This Stuff? Check out:

And four "easily accessible" books Gina suggests everyone read:


... Gina recommends all of the above except, technically, the following blog entry. :)



Tuesday, March 4, 2008

1.1 How to Build Horrible Social Systems by Accident: Incentives


In the last entry, I proposed that tools and organizations often (make that usually-- actually, make that almost always) unintentionally bake into their very structures a set of implicit instructions for their members/users about what behavior is appropriate, rewarded, or discouraged.

How, exactly, do they do that? Here are some of many possible answers.
Incentive Systems. Behaviors rewarded by your system or tool will grow in emphasis and frequency; behaviors that are punished will become less frequent. This statement may seem obvious; people are always trying to leverage positive or negative incentives to get one another to do things. Unfortunately, those conscious incentive programs are usually laid on top of preexisting incentive systems that are deeper, more subtle, more ubiquitous, and far less intentional. In other words, they are much more convincing to the people involved, and are impossible to casually override.

Example: You have a new software company, and you have to hire some people and then give them employee reviews of some kind. Like many orgs, you base your employee review system on whether or not an employee succeeds at her projects. If she succeeds at all of them, she gets a raise; if she fails at her projects, she gets a poor review and a lower bonus. If she gets three poor reviews in a row, she gets fired.
While this seems like a simple and obvious incentive system, you are literally incenting your average employee (let's call her Martha Generic) to succeed at her own projects… even if that messes up everyone else's. If she sacrifices her own project, one quarter, to enable four other projects to succeed, she will still be punished by your system.
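The Martha problem is easy to see once you write the review rule down as a function. Here's a minimal sketch (the function name and inputs are my own invention, not any real review system): the rule looks only at the employee's own project, so the help she gave other projects never enters the calculation.

```python
# Hypothetical model of the review rule described above: the outcome
# depends only on the employee's own project's success.
def review_score(own_project_succeeded, projects_helped):
    """Return the review outcome under the own-project-only rule."""
    # Note that projects_helped never enters the calculation --
    # that's exactly where the perverse incentive hides.
    return "raise" if own_project_succeeded else "poor review"

# Martha sacrifices her project to save four others:
print(review_score(own_project_succeeded=False, projects_helped=4))
# -> poor review, even though the company netted three extra successes
```

Any metric the system ignores is, from the employee's point of view, a behavior the system forbids.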

Example, Cont'd: Five years down the line (after every manager in your company has worked your incentive system into dozens of mini-processes and deliverables), you discover your employees aren't collaborating. You say to yourself: "These poor geeks just don't know how to collaborate. I've got to get them thinking like a team…"

You start publishing some weekly articles on the importance of collaboration. You deliver a motivational speech to the whole company about how software development is really about putting "people first." You offer a trophy for the "most collaborative team member."

Will it work?

What would happen if you created an online community (let's say, a "resource group for workaholics") and let your members give each other public ratings (1-5 stars) on two things: "Humor" and "Best Vocabulary"? … What if it were an automated system that gave privileges based on "Most Links Contributed"?
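The same analysis applies to the online-community version. Here's a hedged sketch of the "Most Links Contributed" variant (the tier names and thresholds are invented for illustration): because the system counts link volume and nothing else, volume is the behavior it will get.

```python
# Hypothetical sketch: privileges keyed to a single counted behavior.
def privilege_level(links_contributed):
    """Grant community privileges purely by link count (assumed tiers)."""
    if links_contributed >= 100:
        return "moderator"
    if links_contributed >= 20:
        return "trusted member"
    return "member"

# The metric counts links, not their usefulness -- so 100 mediocre
# links beat 5 great ones.
print(privilege_level(100))  # -> moderator
print(privilege_level(5))    # -> member
```

Whatever the community's stated values, its members will learn to optimize the thing the software actually measures.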


Next up: 1.2 There is no I in Meme: Language and Messaging

Monday, February 25, 2008

1 of 4: Structure Influences People

Premise #1.0: The structure of a tool influences the people who use it, and the structure of an organization influences the people who belong to it.

Premise #1.1: We don't act nearly as independently as we think we do. All day long, we are listening for cues about what behavior is appropriate in each context. We also broadcast cues as to what behavior is rewarded, acceptable, or inappropriate.

Example A: You're invited to an acquaintance's house; he's "having some cool people over." He has spiky hair and a nose ring. So, you grab your Immortal Technique CDs and take a cab out to his place, expecting to tie one on and get loose. You get there and discover that (1) the table is set with a white tablecloth and matching silverware, and (2) there are wine glasses. You instantly realize this is a Grownup Party. Chagrined, you start greeting the other guests with conversation about work while privately lamenting the $30 you wasted on cab fare.

Example B: Usually, the lady checker with the orange hair at the Red Apple asks "How are you?" in a monotone while she's typing your produce codes with one hand and checking her watch with the other. You respond: "Fine, thanks, and you?" But today, she notices you look kinda off. You come up to the counter, and she sets her pen down and places both hands on the counter. She looks into your eyes, and says: "How are you?" You say: "Pretty lousy. I'm just not sleeping well. I stress too much."


This is our symbolic, implicit, fantastically complex language of human aggregation: we tell each other what to do all day long without saying a word.

Organizations and tools bake these messages into formal structures that tell people what behavior is desirable and what is unacceptable. When we successfully and ritually use the tool or belong to the organization, we adopt those behaviors. Usually, we adopt them unknowingly; often, we do it involuntarily.
Premise #1.2: It's important to set up tools and systems to encourage the behaviors that you want, and discourage the behaviors that you don't.
The more unconscious those behavioral handshakes between us and our org/tool, the more likely they are to affect our perception of ourselves, and our ability to see a broad set of options and to make decisions.
Premise #1.3: If you have a system where a group of people are doing the same odious thing over and over again no matter how often you try to get them to stop, look at the rules of the system they belong to.
Next up: 1.1 How to Build Horrible Social Systems by Accident: Incentives. This is one of several entries fleshing out the theme of "Structure Influences People." It will be the first in, time willing, a short series on tools I learned about in academia that aim to analyze structure and its influences.