Feature Article: Identifying within-level differences in leadership decision making

Feature Articles / October 2009

Theo Dawson and Katie Heikkinen

INTRODUCTION

In a previous article (Stein & Heikkinen, 2008), we described how the Lectical Assessment System (LAS) and other aspects of developmental maieutics (Dawson & Stein, 2008) relate to the integral model. Here, we describe how the LAS can be used to reveal within-level differences between persons—differences that have important implications for the kinds of learning interventions most likely to support individual development. We argue that assessments based on independent examinations of structure and content—such as the LAS—make it possible to describe the full range of conceptions that characterize each developmental level. Further, because the LAS is content independent, its analysts are less likely to mistake simple differences in content for differences in developmental level, a common problem with domain-specific systems. Finally, the LAS makes it possible to home in on the strengths and weaknesses of a performance in terms of the range of conceptions and skills that characterize its level, allowing us to provide specific, educative feedback.

In the following section, we provide a short description of the LAS and of the assessment that was employed to collect the examples we use to illustrate our point—the Lectical Decision Making Assessment (LDMA). We then discuss these examples—both scored at the same Lectical level—in terms of their similarities and differences.

The Lectical Assessment System

The Developmental Testing Service currently offers developmental assessments, called Lectical Assessments, in the following domains: leadership, reflective judgment, ethics, self in relationships, and decision making. Importantly, each of these domains is scored using the same system, the LAS. Although these five assessments are the only ones that are currently available commercially, the LAS has also been used to assess and study development in many other domains, such as reasoning about the good life (Dawson, 2000) and good education (Dawson-Tunik, 2004), reasoning about issues in Mind, Brain, and Education (Dawson and Stein, 2006), reasoning in physical science (Dawson & Stein, 2008), and, in a recent initiative with John F. Kennedy University, reasoning about the integral model itself. The LAS can be employed to assess development in this disparate range of topics because, unlike many developmental metrics, the LAS is a domain general scoring system. It assesses properties of language and argumentation that are indicative of development in any domain.

When LAS analysts examine linguistic performances, they see three layers of structure. The outermost layer is the conceptual content, which consists of the particular views expressed and properties such as length and vocabulary. Beneath this content, we find the surface structure. This is the layer of structure targeted by most developmental assessment systems. It includes criteria of evaluation such as maturity, soundness, or degree of egocentrism. Assessments that focus on the surface structure of performances are domain specific, since surface structure varies widely from domain to domain. The LAS, in contrast, targets the domain-general core structure by looking through the domain-specific surface structure towards more general properties and assessing development in those terms. The two most important features of core structure are degrees of abstraction and complexity. An assessment system that characterizes its stages in terms of these properties can be employed to assess development in any domain (Dawson, 2004; Stein and Heikkinen, 2008).

However, a useful developmental assessment must yield more than a score based on the degree of abstraction and complexity in a performance. Although LAS scores have been shown to be statistically reliable and valid indicators of developmental level, they do not, in themselves, tell us anything about the particular content associated with developmental levels. For this reason, Dawson designed a method (Dawson-Tunik, 2004) for describing conceptual development that allows Lectical levels to be associated with rich descriptions of conceptual content (within distinct domains or subject areas). This pairing has made it possible to create developmental assessments that are both accurate and educative.

In this method, information about the core structure of arguments is linked with information about domain and surface structures. Briefly, Lectical analysts score performances and domain experts code them for content. These content codes are then linked to the scores, allowing us to see what kind of content arises at each developmental phase (which is 1/4 of a level). Eventually, given an adequately large and diverse sample of assessments, this yields a rich description of the kind of reasoning typically observed in a given domain at each phase. These descriptions also capture the kind of learning challenges people face as they move from one phase to the next. An understanding of the sequence of learning challenges is supplemented with domain expertise to offer participants detailed feedback about the kind of learning they are most likely to benefit from doing next.

At the “end” of this cycle of research and application (it never really ends), we are able to offer a well-calibrated assessment that includes rich descriptions of the kind of thinking typical in each phase of performance and the kind of learning challenges that are typical to thinkers reasoning at each level. The LDMA is such an assessment.

The LDMA

The Lectical Decision Making Assessment (LDMA) is an assessment of workplace decision-making skills (DTS website). It focuses on three aspects of decision making: perspective taking, argumentation, and the decision-making process. It is designed for management students, managers, and individuals who are considering a move into management. It presents a common workplace dilemma that involves conflicting interests, then asks the test-taker—through a series of standard probes—to discuss the nature of the problem, describe two possible solutions, compare these solutions, and describe an ideal decision-making process for similar situations.

The LDMA, like all Lectical Assessments, reliably distinguishes 8 to 10 adult developmental phases, where each phase represents 1/4 of a developmental level. All Lectical Assessments have very similar levels of reliability, because they are scored with the same developmental assessment system, the LAS.

Lectical Assessments address two broad forms of validity: construct validity and ecological validity. First, like all Lectical Assessments, the LDMA is scored with the LAS, which has been submitted to a number of rigorous tests of its ability to capture the developmental construct described by Fischer’s dynamic skill theory (Fischer, 1980; Fischer & Bidell, 2006). Second, the ecological validity of the LDMA is apparent in the relevance of (1) its content—its dilemmas are like those encountered in the real world; (2) the skills required to complete it—skills for perspective taking, argumentation, written communication, and decision making; and (3) the scores and feedback provided in its reports.

In addition to general feedback related to the phase of a given performance, each LDMA report includes personal feedback, including comments on strengths and areas for growth, quality of argumentation, the decision-making process, perspective taking, and recommendations for learning and development.

Adult levels of the LDMA

We and our colleagues have identified three adult levels (levels 10, 11, and 12) in responses to the LDMA. These levels correspond to the abstract mappings (10), abstract systems (11), and single principles (12) levels of Fischer’s dynamic skill scale (Fischer, 1980; Fischer & Bidell, 2006)—the top three levels of the developmental spiral shown in Figure 1. Levels 9 and below are not typical of adult reasoners and are not discussed here. These levels also correspond to the levels of other developmental scoring systems, as shown in Table 1.

Figure 1: The developmental spiral

Table 1: The relation between Fischer’s skill levels and the levels of other longitudinally validated systems.

The scoring methods of the LAS integrate and transcend the scoring methods developed by Kohlberg (Colby & Kohlberg, 1987), Armon (1984), Kitchener and King (1985/1996), Kegan (Lahey, Souvaine, Kegan, Goodman, & Felix, 1988), and others who have devised domain-specific scoring systems, in that they center on the deep structural core of the developmental dimension tapped by those systems. Like Commons’ scoring system and Fischer’s methods for skill analysis, the LAS can be thought of as a second-generation scoring system with ruler-like qualities. The LAS is backed by extensive ongoing research into its psychometric qualities, which shows it to be reliable within 1/4 of a skill level. This means it reliably differentiates 12 distinct phases in adulthood, although the vast majority of adults perform within a 6- to 7-phase range in the center of this distribution. The developmental phases are labeled 10:1, 10:2, 10:3, 10:4 (abstract mappings); 11:1, 11:2, 11:3, 11:4 (abstract systems); and 12:1, 12:2, 12:3, 12:4 (single principles).

ANALYSIS OF CASE STUDIES

In the following discussion, we examine the performances of two people who performed in phase 11:3 on the LDMA. Sonya is a 28-year-old college graduate, and Maria is a 58-year-old with a master’s degree. Both women are consultants, and both dealt with the same decision-making dilemma, in which they played the role of a mid-level manager. In this dilemma, a newly hired supervisor is demanding changes to the team’s seating plan that are intended to foster greater collaboration. The dilemma asks test-takers to decide how to deal with the conflict between the new manager and members of the team, who are upset. (For the complete text of the dilemma and the questions asked, see Appendix A.) Here are edited excerpts of the two performances:

SONYA: One of the first things to consider is the goal of the office…Once the purpose…is established, you need to carefully evaluate the methods to achieve this purpose. Is collaboration necessary…or just a distraction?…Besides looking at the organizational and group factors, the individual factors need to be considered. These include the current assignments of the employees…and some individual differences between the employees (personality, locus of control, openness to change)…The goals and motives of the new supervisor also need to be considered…In the end, the best decision isn’t the one that makes EVERYONE happy, but the one that is the most beneficial to the group while causing the least resistance.

[T]he most important [factor] is the purpose/mission of the office. Organizations do not exist to make employees happy, but to reach some sort of end. That vision cannot be ignored when evaluating changes…As long as the mission is clear, communicating this to the employees when explaining the need for the changes will likely increase the chances that they will cooperate and hopefully even support the changes.

The first step…is to gather as much information as needed to evaluate the costs and benefits of the changes…If changes are indeed necessary…it is important to include the lower level employees in the decisions so that they feel they were a part of the changes, rather than just being told what to do by the “new guy.” Increasing the perceptions of fairness and justice by increasing participation will go a long way in the acceptance of changes. Therefore, assembling a team that includes you, the new supervisor, and key leaders from among the employees will foster the development of a plan that better meets the needs and goals of everyone involved. These team members…can help garner support from their co-workers. Having a team of this sort will also help the new supervisor understand the employees’ hesitations more clearly…This process involves participation by all parties, clear communication during the planning process as well as during the implementation process.

MARIA: One thing to consider is whether there is a business case for making the change…What falls out from a clear view of the business case would be a cost-benefit analysis of making the change (taking into account financial data and human capital)…I [also] want to establish a productive working relationship with [my supervisor]…I need to understand him—his styles of communication, decision-making, what motivates him, what his vision is, and his desired business outcomes. Balancing that, I want to maintain the positive working relationship I have with my reports so I maintain their trust in me and sustain their engagement in their work and the company…I want to give them a sense of ownership in a decision that significantly impacts [them].

…It’s essential to know first what the business case is for making the changes my supervisor is asking for…That’s the information I need to be an effective manager, and think about how to align the process, structure, and people. I also think the organizational culture change is very important…New executives can be successful or not depending on their awareness of the existing culture…Although reorganizing the space may appear to be just a physical structure issue, it pushes against well-established norms and routines of people get[ting] their work done and what they value…

I would ask for a meeting with my new boss to discuss the matter with him. I’d share the initial reaction of my employees…[and discuss] existing culture and my concern about maintaining employee morale and engagement. I’d ask him to help me understand his vision for the company…Requesting a meeting to discuss this acknowledges his leadership role and the same time indicates I want to be a strategic thinking partner with him (and vice versa). As his “mouthpiece” to my employees, I want and hope to be aligned with his thought processes (or at least understand them)…[I hope to establish myself] as a player in his mind and…enhance my possible career aspirations…I also thought about what form of influence is appropriate for my role and level…

In Figures 2 and 3, we provide concept maps that highlight the key dimensions of these transcript excerpts, as well as their organization and interrelationships. Both performances received the same score on the LDMA, because they share important deep structural properties. At the level of content, however, they exhibit some important differences. We will show that both respondents are able to reason about their decision in relatively sophisticated ways, but their performances reveal strengths and weaknesses in different areas.

Structure:

Both Maria and Sonya received scores of 11:3. At level 11, respondents organize several level 10 ideas in a way that shows their interrelationships. This might include explaining interconnections, drawing parallels, or exploring tensions or contradictions. This kind of thinking is often called systems thinking. Fischer calls it “abstract systems.” A score of 11:3 (rather than 11:1 or 11:2) means that a performance features at least two separate well-elaborated level 11 systems. Both Maria and Sonya provide evidence of this level of elaboration in their overall performances (not shown here).

Maria and Sonya’s performances were not awarded higher scores for a couple of reasons. First, at 11:4, we expect to see evidence of an abundance of highly elaborated systems, with the suggestion of even more “waiting in the wings”—mentioned or glossed over but not unpacked. In particular, we would expect to see a greater elaboration of the organization theme, which is neglected in both performances.

Second, a level 12 score requires an integration of at least two of the complex systems developed in level 11. On the LDMA, a common integration involves the organizational, group, and interpersonal arenas. Whereas in level 11 people are likely to realize that one needs to take into account interpersonal, group, and organizational aspects of the dilemma, it isn’t until level 12 that they truly understand how the structures of the organization, and their interaction with the individuals and groups that make up the organization, co-determine the qualities of the environment in which interpersonal conflicts emerge and are dealt with. For example, someone moving into level 12 might note that the organization in the example is set up such that a supervisor can demand sweeping change in her first week on the job and that this constrains the actions of each of the players in specific ways. This kind of insight becomes robust only after individuals have elaborated highly complex phase 11:4 conceptions both of the organization and the interpersonal arena.

Interestingly, both of our respondents seem to know they are supposed to integrate organizational, group, and interpersonal concerns, but neither does so. Both arguments feature something that looks, at the level of content, like the level 12 integration described above. Maria notes that she must “align the process, structure, and people.” Sonya notes that she must “look at organizational, group, and individual factors.” A developmental scoring system that focuses on conceptual content or surface structure might view these statements as indicative of the coordination of these arenas. However, a closer analysis of the structure of the arguments reveals that while these “mottos” are invoked, there is no evidence of integration. Instead, each arena is unpacked separately, and only one arena in each performance is clearly a level 11 system. Sonya and Maria’s “mottos” may remind them of the existence of multiple arenas and the importance of noticing each one, but they do not take the next step of elaborating the complex interconnections between them. This will not be possible until they have fully elaborated their conceptions of the organizational, group, and interpersonal arenas at phase 11:4.

Content:

The most striking similarity in content between the two cases is that they both focus on aspects of the organization, the group, and the interpersonal. In the concept maps (Figures 2 and 3), these three elements are clearly shown. When considering the organizational level, both discuss the need to understand the business or strategic case for the proposed shift in seating, the need to weigh the costs and benefits of such a change, and the need to ask if the change would serve the mission or business of the organization. At the interpersonal level, both discuss the need to take into account the perspectives of their supervisor and their employees.

However, the two cases diverge sharply in the details of their conceptions of interpersonal and group level concerns. This divergence seems to stem from differences in their orienting perspective. Maria orients to the problem by taking on the perspective of the protagonist and treating it as if it is her own. She focuses primarily on interpersonal relationships, including a desire to maintain productive working relationships and the need to secure the protagonist’s position as manager. Sonya, on the other hand, orients to the dilemma more from an “outsider” stance: her solution does not take the peculiarities of her position as manager into account. Indeed, her proposed solution, which focuses on group-level action, could be enacted by an outside consultant.

Maria’s unpacking of the interpersonal arena is the most elaborated system in her protocol. She seeks to maintain productive working relationships with her employees—to maintain their morale and to influence their views—and with her boss. She strengthens her relationship with her boss to secure her position as manager and to be seen as a “strategic thinking partner.” She wishes to deeply understand her boss so that she can “establish [her]self as a player in his mind.” Her proposed solution involves discussing the business case for the change that the boss has in mind and then using that deep understanding to be a better “mouthpiece.”

In contrast, Sonya’s most elaborated system is her analysis of the group level. She also notes the need to understand both her boss’s motivations and her employees’ needs, but does not elaborate the interpersonal domain any further. Instead, her proposed solution focuses on what the group can do together: communication between group members must be improved, compromises must be made, and all parties must participate in developing a plan for change. She intends to form multilevel teams that will increase employee buy-in and help the boss better understand their views.

Both of these orienting perspectives have strengths and weaknesses. Maria’s pragmatic view of the role of a manager (and her consideration of maintaining her own position) suggests that she might have more business experience and knowledge of how things go “in the real world.” A manager dealing with a supervisor who appears to have an authoritarian streak certainly must be strategic in her dealings with him. Sonya, on the other hand, is more idealistic. She seems to have strengths in collaboration and facilitation, but she needs to strengthen her nuts-and-bolts understanding of business.

It is important to note that a Lectical score of 11:3 does not imply a specific orienting perspective, but it does imply a certain range of orienting perspectives. In other words, certain perspectives tend to emerge at certain phases in development and tend to die out at later phases. Clinging to the role of the manager and adopting it as one’s own tends to emerge fairly early in development and tends to die out around this level. At 11:4 and beyond, people more typically begin to orient to the problem by taking a broader view that includes more perspectives.

Both women are equally weak in their discussion of the organizational level. Although they orient to different aspects of the problem, both of those aspects are “human” elements. This may be a result of the current focus in business education on leadership skills rather than pragmatic skills of formal decision-making and critical thinking.

A more disturbing similarity is that both women seem to wish to appear more inclusive or fair than they are. Sonya notes that she must increase the “perception” of fairness (rather than fairness itself?); Maria notes that she wishes to give the employees a “sense of ownership in [the] decision” yet her proposed solution does not include them in any direct way. Perhaps the women are paying lip service to ideas about inclusivity that they have heard or read, without robustly internalizing what those ideas mean. As a result, they may view individuals as means to ends rather than ends in themselves. This ethical dimension is missed by the vast majority of individuals who take the LDMA.

Commentary:

While we have no data to support this, our felt sense is that some members of the integral community may view Sonya as embodying a higher developmental position than Maria embodies. Maria’s emphasis on getting to know the boss may seem self-serving, perhaps indicating a need to be a good employee or “climb the ladder” above all else. Acting as the “mouthpiece” between the boss and the employees may be seen as a more traditional or static view of the role of managers (“blue”). Sonya, on the other hand, appears more inclusive and participatory. This seems to reflect a more modern or post-modern (dare we say it? “green”) view of management. However, we have shown in this paper that their performances share an underlying complexity structure. Although they work with different content, Sonya and Maria coordinate their ideas in the same way.

Collaboration is certainly valued in the integral community and as such might be “pegged” as more developed. But the LAS allows us to see through that surface structure into the deeper structure that lies beneath—in this case, that several of the arguments Sonya uses are poorly elaborated and that she does not draw connections between them through further elaboration or integration. Collaboration is likely to be ineffective when the organization itself and the interpersonal relationships within it are poorly understood. A similar blend of strengths and weaknesses is apparent in Maria’s performance as well. She might succeed at aligning herself with the boss, but without a clear view of the organization as a system, along with a deeper understanding of the group, she is much less likely to formulate a response that supports the long-term interests of all parties.

Yet the intuition that Sonya and Maria are somehow different is certainly a valid one. They do focus on different arenas and have different ideas about their own role in the company. They are different, despite understanding the world at the same level of complexity in various domains. This more qualitative dimension of difference is important, yet we argue that it should not be part of the developmental scoring process. Mixing content and structure in developmental analysis has several important limitations.

1) Relying on content cues means that the developmental system will always be tied to a particular culture and epoch.

When a system of assessment is contingent upon content cues, it necessarily derives those content cues from the examples included in a scoring manual. These examples are necessarily tied to the particular time, place, and culture of the sample used to construct the manual. Certain concepts, which at one time might have been relatively good indicators of developmental level, are likely to lose their value if they become commonplace, such as when a popular book about the researcher’s findings is widely read.

For example, Kohlberg’s original samples were selected in the 1950s and 1960s, when the concept of “moral relativism” was quite rare and primarily used by sophisticated moral reasoners. However, in the decades since, a relativistic view has become much more prevalent in the culture. Today, forms of moral relativism that Stein and Dawson call “subjective relativism”—in which reasoners explain their uncertainty, make relativistic references to belief or opinion, and espouse the idea that one can speak only for oneself—are common as early as the end of level 9 (Stein & Dawson, 2004). It is possible that an analyst using Kohlberg’s Standard Issue Scoring Manual might mistake these young thinkers for more sophisticated reasoners than they are. This has certainly occurred in the popular culture, where “Millennials” are often hailed as the “smartest generation yet,” possibly due to their facility with uncertainty and relativism. Nevertheless, the level 10 conception of relativism lacks sophistication and explanatory power. It is not the concept of relativism Kohlberg identified in his sample (Stein & Dawson, 2004, p. 19).

2) Scoring systems that require the use of content cues have been shown to be less reliable and valid.

In psychometric comparisons of Commons’ General Stage Scoring System (GSSS) and the LAS with domain-based scoring systems, Dawson found that LAS and GSSS scores were not only more statistically reliable than scores produced with domain-based systems, but also more likely to exhibit psychometric qualities consistent with the postulates of developmental theory (Dawson, 2002, 2004; Dawson, Xie, & Wilson, 2003; Dawson-Tunik, Commons, Wilson, & Fischer, 2005). In particular, her research with the GSSS and LAS shows that it is less difficult to move from one phase to another within a level (e.g., 11:2 to 11:3) than it is to move from one level to the next (e.g., 11:4 to 12:1). This is evidence of construct validity, in that it supports the postulate that change from one level to the next is qualitative, whereas within-level growth is more cumulative.

3) Relying on content cues limits the number of ways to “be” at each level.

Domain-based scoring manuals contain a limited amount of content. Therefore, the range of possible content included at each level is constrained.

Of course, the range of possible content at each level is constrained by the structural properties of development. A child reasoning at level 8 simply cannot produce the same concepts that an adult reasoning at level 12 can. However, as we have shown, the relationship between content and structure is too complex to be captured by the methods of domain-specific developmental assessment systems.

Although we advocate that scoring for developmental level should be independent of content, this does not mean that we do not care about content. In fact, the LAS was designed to give researchers the ability to conduct ongoing investigations into the relation between content and level. The assessments developed by the Developmental Testing Service and DiscoTest are all informed by these investigations, which provide an ever-increasing body of knowledge about the specifics of development within knowledge domains. Since the independent analysis of structure and content also makes it possible to differentiate between developmental level and questions about the goodness of people’s conceptions, the LAS also welcomes (and requires) close collaboration with subject matter experts who are able to make these judgments. Questions about goodness cannot be answered with a Lectical analysis or analyses of the empirical relation between content and level. These are philosophical questions. We argue here and elsewhere (Stein and Heikkinen, 2009) that we need multiple languages of evaluation to discuss the intricate relationships between complexity, goodness, function, and so on. Complexity is one important dimension, but there are others—others that merit as much in-depth analysis as has been focused upon developmental properties.

Conclusion

We hope that this paper has demonstrated aspects of the utility of the Lectical Assessment System and the LDMA. We have shown that our methods allow us to identify (1) important commonalities between two reasoners performing in the same developmental phase, and (2) important differences that highlight each respondent’s unique developmental needs.

The LAS can be used as a tool to disclose aspects of the life-world by informing our understanding of phenomena (phenomenology) and double-checking our interpretations of these phenomena (hermeneutics). Today, as part of a broader methodology, we use it to help people grow toward a fuller realization of their potential by providing high quality, educative diagnostics that are linked to evidence-based, targeted learning suggestions.

References

Armon, C. (1984). Ideals of the Good Life: Evaluative Reasoning in Children and Adults. Unpublished doctoral dissertation. Cambridge, MA: Harvard Graduate School of Education.

Colby, A., & Kohlberg, L. (1987). The Measurement of Moral Judgment, Vol. 2: Standard Issue Scoring Manual. Cambridge, UK: Cambridge University Press.

Dawson, T. L. (2000). Moral reasoning and evaluative reasoning about the good life. Journal of Applied Measurement, 1, 372-397.

Dawson, T. L. (2002). A comparison of three developmental stage scoring systems. Journal of Applied Measurement, 3, 146-189.

Dawson, T. L. (2004). Assessing intellectual development: Three approaches, one sequence. Journal of Adult Development, 11, 71-85.

Dawson, T. L., & Stein, Z. (2006). Mind Brain & Education Study: Final Report. Northampton, MA: Developmental Testing Service, LLC.

Dawson, T. L., & Stein, Z. (2008). Cycles of research and application in education: Learning pathways for energy concepts. Mind, Brain, and Education, 2(2), 90-103.

Dawson, T. L., Xie, Y., & Wilson, M. (2003). Domain-general and domain-specific developmental assessments: Do they measure the same thing? Cognitive Development, 18, 61-78.

Dawson-Tunik, T. L. (2004). “A good education is…” The development of evaluative thought across the life span. Genetic, Social, and General Psychology Monographs, 130, 4-112.

Dawson-Tunik, T. L., Commons, M. L., Wilson, M., & Fischer, K. W. (2005). The shape of development. The European Journal of Developmental Psychology, 2(2), 163-196.

Fischer, K. (1980). A theory of cognitive development: The control and construction of hierarchies of skills. Psychological Review, 87(6), 477-531.

Fischer, K., & Bidell, T. (2006). Dynamic development of psychological structures in action and thought. In W. Damon & R. M. Lerner (Eds.), Handbook of Child Psychology: Theoretical Models of Human Development (Vol. 1, pp. 1-62). New York: John Wiley & Sons.

Kitchener, K. S., & King, P. M. (1985/1996). Reflective Judgment Scoring Manual with Examples. (Unpublished, contact authors).

Lahey, L., Souvaine, E., Kegan, R., Goodman, R., & Felix, S. (1988). A Guide to the Subject-Object Interview: Its Administration and Interpretation. Cambridge, MA: Harvard Graduate School of Education, Subject-Object Research Group.

Stein, Z., & Heikkinen, K. (2008). On operationalizing aspects of altitude: An introduction to the Lectical™ Assessment System for integral researchers. Journal of Integral Theory and Practice, 3(1), 105-138.

Stein, Z., & Heikkinen, K. (2009). Metrics, models, and measurement in developmental psychology. Integral Review, 5(1), 4-24.

Stein, Z., & Dawson, T. L. (2004). It is all good: Moral relativism and the millennial mind. Paper presented at The Millennial Mind, Baltimore, MD.

Appendix A
LDMA dilemma and questions

You have been a manager in one of the most technically savvy and productive offices in the company for the last three years. Almost 80% of the employees have at least Masters degrees and many have doctoral degrees in engineering or computer science. This has been much easier than your last management position, because here you have such great respect for the ability and drive of your employees. When your supervisor retired 3 months ago, the senior leadership team decided to replace her with an executive hired from outside the company. The individual that was finally selected after a lengthy interview process has only been on the job for 1 week and is already stirring things up. After his first walk-through of the spaces, essentially a large cubicle farm, he announced that he was going to redesign the space to “open things up” and encourage greater collaboration and exchange of ideas among members of the group. You have been presented with a drawing of how the space will be reconfigured and a very aggressive time-line for the work, both of which you share with your employees. This normally quiet, reserved group is visibly outraged. How can they be expected to do highly technical work without the quiet and privacy of their cubicles? What’s wrong with using a conference room when collaboration is called for? They are looking to you to stand up for them.

1. What are the important things to consider in this situation? In one or two paragraphs, explain what they are and why they are important.

2. Are some of the considerations you discussed in your response to question 1 more important than others? If so, what are they and why are they more important?

3. What do you think is an appropriate response to this kind of situation? Please explain why this response is appropriate.

4. Describe another reasonable response to this kind of situation. Compare the potential risks and benefits of this response with those of your original response.

5. What process would you recommend for deciding how to respond to situations of this kind? Please describe this decision-making process in general terms—in a way that would allow another person to use the process in a similar workplace situation.


Theo Dawson’s dissertation demonstrated the power and utility of a novel methodology that makes it possible to describe conceptual development in any domain of knowledge without the expense of conducting longitudinal research. From Dawson’s perspective, development is the appropriate aim of education. More than ever before, workers/citizens need great flexibility of mind and capacity for handling complexity. As much as they need appropriate knowledge, they also need to know how to work with knowledge to solve problems, make critical decisions, and handle the demands of modern life. The tools and methods Dawson has developed—and continues to develop—are all designed to help educators/employers achieve these ends.
Developmental Testing Service http://devtestservice.com/
DiscoTest http://discotest.org/
theo@devtestservice.com

Katie Heikkinen is currently a doctoral candidate in the Human Development program at the Harvard Graduate School of Education, where her research focuses on the assessment of adult development, with a particular emphasis on Kurt Fischer’s Skill Theory, Theo Dawson’s Lectical Assessment System, and Robert Kegan’s Subject-Object Interview. She received her Master of Education in Mind, Brain, and Education in 2007 and her B.A. from Harvard College in 2002, where she studied visual attention in experienced meditators under Stephen Kosslyn. She is an alumna of Integral Institute and is currently on the faculty of the Integral Theory program at John F. Kennedy University.
katie.heikkinen@gmail.com