What is philosophy for? What can it, and can it not, be expected to do? I have been thinking about these questions a lot lately. First, I will be teaching a class this fall to undergraduates on ethical and legal issues in education; I want to make sure I use philosophy to good effect, and I know I will have at least some students with (good, bad, or other) expectations of a philosophy course. Second, with all the emphasis on data-driven research, we philosophers of education (and of other fields) sometimes feel like we’re on the defensive, having to justify ourselves in ways that other researchers don’t.
Well, recently I stumbled on a really interesting answer to the questions of what philosophy is for and what it can, and cannot, do. Richard Taylor’s essay “Dare to Be Wise” (Taylor 1968) has a bold, but satisfying, thesis: philosophy has taken a mistaken direction in its quest for philosophic knowledge:
I shall maintain that there simply is no such thing as philosophical knowledge, nor any philosophical way to know anything, and defend the humble point that philosophy is, indeed, the love of wisdom (615).
I want to briefly rehearse Taylor’s argument before discussing why I see his view as a very ennobling one for philosophy. Briefly, in suggesting that philosophy is about wisdom rather than knowledge, philosophy does not try to be like other disciplines, but offers something distinctive that other disciplines can’t as adeptly provide. And, of course, I also happen to think Taylor’s argument is basically true.
Taylor starts with Socrates and the Greeks (Stoics, Epicureans). He suggests that the works they produced, and what they (likely) saw themselves as doing, offered wisdom rather than knowledge. Knowledge is the search for what can be demonstratively proved and is true in a factual sense. Wisdom is a deep acquaintance with a problem, sensitivity to its subtleties and parts, and (possibly) an acquaintance with possible rule-of-thumb answers. While this may be a bit of an oversimplification, think of Aristotle’s Nicomachean Ethics as the quest for moral wisdom and Kant’s Metaphysics of Morals as the quest for moral knowledge; in the former case, Aristotle thinks through some moral problems and reasons toward some overall possible solutions that are subtle, flexible, and not considered to be ‘true’ in any provable sense. Kant, on the other hand, had it as his mission to discover via reason a moral imperative that could be proved every bit as true as a law of physics, and that was invariant to circumstance, social convention, etc. (Where Taylor may miss the mark about the Ancient Greeks is with Plato, who conceived of the philosopher as the one who could, via reason, attain the truth in the midst of those who saw only appearance.)
After a long, unintentional hiatus from posting on my blog, an unforeseen question has beckoned me back to write a post. The question (not really in my area of interest, but fascinating nonetheless) is this: what gives someone the right to be a (literary, cultural, social, etc.) critic?
The question was posed on a Guardian Books Podcast (available from iTunes) called “Life, Death, and Literary Critics” (2/4/2011). Toward the end of the discussion about what literary (and other) critics do and their importance, one listener comment asked what, exactly, gives someone the right to be a critic.
The answer one critic gave – the only answer given on the show, which is a shame, as it seems wrong – was something like “knowledge of one’s subject.” Why does that seem wrong? To me, “knowledge of one’s subject” seems, at best, a necessary condition for being a critic, not a sufficient one. In order to be a critic, it may be the case that ONE necessary trait is knowledge of the subject one is critiquing. But one may have knowledge of a subject yet not be a good writer, or not have very good taste, and it seems to me that many would be reluctant to call that person a critic. It also seems to me that knowledge of one’s subject isn’t ALWAYS a necessary condition for being a critic; one can have only fair knowledge of one’s subject but really good taste and instincts, and be a good writer, and be a critic, where someone with great knowledge of the subject, but lesser instincts or taste, would not.
To be honest, the obvious answer I recall practically blurting out during the podcast in response to “What gives someone the right to be a critic?” is…nothing – nothing except having the urge and follow-through to offer a critique. And if one is lucky (or offers a product that others find value in), one’s status as a critic will become stronger the more others appraise you to be a legitimate critic (using whatever criteria they want to use).
Part of the problem, I think, is that when we ask “What gives x the right to y?” we are really asking something like “Why is x entitled to y?” Indeed, that is the sense in which the listener seemed to be asking “What gives someone the right to be a critic?” So, if the question is whether anyone can be called entitled to be a critic, I think the answer is a pretty obvious “no.” Now, we can ask why James is entitled to be a teacher in the state of Maryland, or why Josephine is entitled to practice psychiatry in the state of Wisconsin, but in those cases, the answer is largely that they have jumped through the (justified or not) hoops that gave them the license which thereby “entitles” them to be a teacher or doctor. In fact, the word “entitle” is pretty much a legalistic term that means roughly “to have been given the title,” and that is precisely what a certification is – a title that grants an “entitlement.”
But a critic? There is no certification for that. One can be an English major, or a political science major, but whether one is entitled to be a critic doesn’t seem to be dependent on whether one has gotten a certain title as much as whether one’s writing performs the role of giving a critique (and whether others who read the work concur that the writing does that). So, no one is entitled to be a critic; one must earn the title in the way one earns the title “recording artist” or “poet.” One earns the title by performing the role that people in those categories perform.
But, we can object, not everyone who scrapes together the money to record their songs in a basement studio REALLY is a recording artist. Well, in a way that is correct and, in a way, incorrect. In a literal sense, they are a recording artist because they have recorded artistry; just as anyone who has collected baseball cards was, at that time, a baseball card collector. But whether the basement-studio singer is a SUCCESSFUL recording artist, or is acknowledged by listeners to be a good one, is another question – related but different.
Now is where I’ll suggest that maybe the listener’s question was phrased wrong: rather than “What gives someone the right to be a critic?” maybe the better question is “What conditions must someone meet to be considered a critic by others?” Not “What gives someone the right to be a recording artist?” (answer: enough money to record artistry) but “What conditions must someone meet to be considered a recording artist by others?” (answer: talent, good material, a product others want to listen to).
This is where I think it simply comes down to consensus. You are a critic if you offer a critique, and you are a critic to others when others consider your critiques worthy of being read and acknowledged as good critiques. I am sure this might drive many batty, as it is very relativistic, beauty-in-the-eye-of-the-beholder kind of stuff. And many will object that x may be considered a critic because they have a blog that is read by others, but they really don’t have a good grasp of what they critique, have poor taste, etc. So, am I saying they are REALLY a critic? Well, yes. Objections like this generally come down to saying “Well, I judge their work to be unworthy and wish others would do the same. To me they are not a critic. I wish they were not held as a legitimate critic in others’ eyes either. And if they saw it my way, used the criteria I use, or had the knowledge I have, they wouldn’t see that person as a critic. Therefore, they are wrong to see that person as a critic, and the person would not be seen as a critic but for the fans’ mistakes.”
But it doesn’t erase the fact that if we asked of this blogosphere critic “What gives them the right to be a critic?” the answer would basically be that they offer a critique that some people find useful, and that makes them a critic to those people.
And honestly, I think that is the best we can do… unless we can find some really good sufficient conditions that are strong enough to trump my subjectivistic theory. If we could find an instance where, say, someone has millions of fans who view that person’s work as good criticism, but we came up with a theory of sufficient conditions for criticism strong enough to really show that, despite being called a critic by millions, they are really not a critic at all, then my theory would be disproved. (But in reality, I think any such theory could be reduced to the theory’s inventor coming up with THEIR OWN standard for judging who is a worthy critic, arguing that everyone else should just adopt that same standard, and that anyone who doesn’t is wrong.)
So, I think it was a shame that the question “What gives someone the right to be a critic?” was badly answered. I think the answer given may have been intuitive to some critics, who really do not want their status as critics to be wholly dependent on a market process, or their work to be mere products that depend on appealing to consumers before imparting a superior knowledge. But I just think my answer is more convincing.
Recently, there has been a plethora of books put out by journalists like Malcolm Gladwell, Stephen Dubner, Stephen Levitt, and David Shenk dealing with scientific issues in a way explainable to the lay public. I’ve heard some folks call this the “Gladwellization” of information. Of course, this was meant as a pejorative, meaning something similar to the “simplification” of information. And in a sense, that is what these journalists are doing: taking highly specialized information from the disciplines and writing books intended to explain it to, and interest, the lay public.
Instead of either praising or bemoaning the trend, I want to think about the benefits and costs of this new emerging breed of popular science writer. First, it should be noted that while the trend of writing science books for the lay public is not new, it has tended to be done by that rare breed of scientist who has a knack for distilling complex concepts into simple and readable prose: Gary Becker in economics, Stephen Jay Gould and Richard Dawkins in biology – all of these writers are bona fide experts in their fields as well as popular writers devoted to explaining their science to the masses.
Not many will argue that this is not a useful or necessary endeavor. Well, correct that: it is not necessary that the general public understand genetics or supply-side economics, but it is certainly good to provide the option to interested members of the lay public. After all, no one wants to see a world where only scientists understand what scientists do, and only economists can understand economics.
But these new journalists – Gladwell and the like – are just that: journalists. They are often writing about areas they have not gotten degrees in, reading journal articles aimed at a technical audience, and (often without technical acumen) interpreting data for us. So, it raises a question: in what ways is this a good thing, and in what ways is it a bad thing?
To attempt an answer, it is necessary to understand that all of this is being done on account of the division of labor (or, we might say, the division of knowledge). The sciences, and most other fields we see nowadays, have advanced to a point where careers are devoted to them. That is a good thing in that it means we are advancing a great deal (because the more advanced the field and its knowledge, the more time initiates must spend mastering that field and its knowledge). But there is a downside here: it kinda leaves the general public – those whose careers are not devoted to that particular science – behind. (Even professionals in other closely related fields often don’t know the ‘inside’ information of fields around them because each sub-discipline now often rests on knowledge exclusive to those in that sub-discipline.)
So, the obvious good of having journalists write books distilling these often heady and complex fields for the lay public is that it, to the degree possible, disperses previously monopolized information. In a sense, it lessens the barrier between disciplines, and between the layperson and the expert. True, I don’t exactly need to understand molecular biology or chaos theory, but at least I can, in theory, get it if I want it. In keeping with the American (and European) tradition of egalitarianism, these journalists make sure that everyone’s got a shot at this information.
But here is where I see a subtle downside.
I am going to take a little excursion from the world of education to discuss a political issue I feel strongly about: why I vote libertarian and do not see this as “throwing away my vote.”
If you didn’t donate all you could, if you didn’t volunteer for the Republican party or its candidates, if you didn’t get your friends out to vote – the blood for this is on your hands.
This was on an acquaintance’s blog and is typical of arguments that we third party voters hear quite often. The argument can be generalized thus:
If x and y are the major political candidates and you voted for third-party candidate z, you are (in effect) helping the front-running candidate win and are, indirectly, responsible for that candidate’s winning.
To make matters worse, the Libertarian Party (which often “takes” votes from the Republican Party more than the Democratic Party, given its Reagan-esque belief in small government) is often accused of tacitly helping Democrats win office. This is similar to those who vote Green or Socialist being accused of tacitly helping Republicans win seats (because Green and Socialist candidates often ‘take’ votes from disaffected Democrats more than from disaffected Republicans).
So, am I throwing my vote away by voting for the Libertarian Party (which, as much as I would like otherwise, is almost always the losing horse)? Am I to blame for handing the Democrats victories by ‘taking’ my vote away from the Republican candidate?
I confess that, try as I might, I don’t see the logic in this charge. The above argument assumes that the Republican candidate is somehow a better representative of my small-government beliefs than the Democratic candidate is. In my early days, I must admit to having this idea: I always looked on Republicans more favorably than Democrats, and even though they were still an evil, they were always the lesser of the two.
Then George W. Bush happened.
This question was put to our PhD level Curriculum Theory class last night. We were discussing E.D. Hirsch, an education theorist who is often depicted and criticized as an ‘elitist.’ So, the professor asked us: what is wrong with elitism?
And what a question it is! Too often, we use words like ‘elitist’ as synonyms for ‘bad’ without thinking about what is bad about them. What argument is there that elitism – the view which glorifies elites over those ‘below’ them – is a bad thing?
Here is my attempt at an answer. In so many words, the thing I find most objectionable about elitism is not (as many would say) its seeming endorsement of meritocracy, but its myopia. Elitism, in glorifying the way of the ‘elite,’ often assumes that everyone should behave the way elites behave and value the things that elites value. To put it a bit differently, if bluntly: the problem with elitists is that they assume that their lives are the way lives should be, rather than one way that lives could be.
As well-meaning as E.D. Hirsch is, he falls into this error with his program for Cultural Literacy, which suggests that there are certain facts all students should come out of school knowing in order to be culturally literate. In other words, there are ideas or facts whose possession is a necessary condition for ‘cultural literacy,’ and whose absence is sufficient to make one ‘culturally illiterate.’
What’s wrong with this? First, it assumes a very static view of culture. Culture, of course, is a fluid and changing thing, and the knowledge one must have to be a part of a culture wholly depends on the people one is conversing with in that culture. (All of this assumes, for the sake of argument, that there is even a coherent definition of what a culture is.) In other words, the things I would need to know to get in with a group of twenty-somethings in rural Nebraska may be wholly different from what I would need to know to get in with PhDed professors at Princeton University. And the problem with the Hirsch approach is that it seems to assume that my knowledge about Jay-Z, which may help me get in with the twenty-something crowd, simply isn’t as important culturally as my knowledge of Wolfgang Mozart, which helps me get in with the professors.
No Choices Left Behind? Why National Standards Will Increase Standardization, Decrease Accountability, and Probably Not Work
I suppose it was just a matter of time. After years of floating the idea around in the abstract, a national panel of educators and curriculum specialists is unveiling its draft of new national curricular standards for US public schools. And I confess: I don’t get it. The first attempt at widespread federal intervention into education – No Child Left Behind – was roundly and rightly criticized by, seemingly, everyone. But this time…this time will be different.
Maybe I exaggerated; I do get it. I get that we are living in a time where we simply assume that the larger the scale, the better the result. After all, one of the key arguments against allowing states to set their own standards for their own schools is that…well… states can’t be trusted. (Of course, the idea that decision makers on the federal level can be trusted where state decision makers can’t is never argued for; it is just assumed.) William Bennett and Rod Paige took this line several years ago:
But there’s a problem. Out of respect for federalism and mistrust of Washington, much of the GOP has expected individual states to set their own academic standards and devise their own tests and accountability systems. That was the approach of the No Child Left Behind Act — which moved as boldly as it could while still achieving bipartisan support. It sounds good, but it is working badly.
They cite Tennessee as an example: it reports that only 47 percent of its fourth graders are “proficient” in reading. They also cite Oklahoma, where the number of “needs improvement” schools has decreased because of changes in the state’s standards, not in performance.
All of this may well be true. But is there anything in this argument suggesting that nationalization will tackle the problem and get better results? Like many arguments for national standards, Bennett’s and Paige’s argument points to state flaws and ASSUMES that those flaws would be ameliorated at the national level.
Now, it could be responded that there is nothing to lose by trying: we have let the localities and states think for themselves for far too long, and it is time to let the feds try their hand. But I see several reasons AGAINST doing this. Not only are there things to lose; there are reasons that localities simply make better governments than nations.
Diane Ravitch recently wrote an article called “First, Let’s Fire All the Teachers.” It takes aim at NCLB’s idea of accountability.
The fundamental principle of school reform, in the Age of Bush and Obama, is measure and punish. If students don’t get high enough scores, then someone must be punished! If the graduation rate hovers around 50%, then someone must be punished. This is known as “accountability.”
Far be it from me to say many nice things about NCLB. It is a federal program that amounts to: “Okay, states. Figure out a way to set standards and meet them. Oh, and figure out how to pay for it.” There are many, many problems with this, not the least of which is having the folks charged with meeting standards (and facing consequences for missing them) be the VERY SAME FOLKS who set those standards. This basically ensures that the standards will be low (at least in a non-market system where performance record is irrelevant to profits).
But my disagreement with Ravitch has to do with a larger problem: Ravitch seems very opposed to ‘accountability’ measures that would result in under-performing schools having to close, teachers being fired, etc. And since she doesn’t offer any competing vision of accountability, it is difficult to see what type of accountability she’d be happy with. (I will be reading her newest book soon and maybe she offers answers there.)
This strategy of closing schools and firing the teachers is mean and punitive. And it is ultimately pointless. It solves no problem. It opens up a host of new problems. It satisfies the urge to purge. But it does nothing at all for the students.
I am well aware of the apparent reasons that schools are not like businesses. So, I will use an analogy cautiously, making it as close to the school model as I possibly can: if I own several tutoring centers, and one of them repeatedly fails to meet quality-control standards, how is it of no benefit to shut down that center and fire the workers? By doing this, I prevent future customers from wasting time and money on a product that fails to meet promised results, and open up a ‘blank canvas’ on which I can start over. In fact, if I were to knowingly continue offering subpar services, one would be fair to accuse me of running a scam that actively DOES NOT BENEFIT anyone (except for myself and my staff, who continue to collect money for subpar work).
I’ve just read a really exciting new book by technology (and overall) genius Jaron Lanier. The book is called “You Are Not a Gadget: A Manifesto.” In it, he criticizes the direction of what he calls “web 2.0” in a way that avoids Luddism. That is, he criticizes the way technology is going, and the way we think about the technology, not necessarily the technology itself. (After all, he did largely create virtual reality!) Below is an extended version of my Amazon review.
The first thing that must be said about Jaron Lanier’s “You Are Not a Gadget: A Manifesto” is that it is a very intricate book, full of several different arguments and lines of thought. It might be best to say that it is a manifesto containing several sub-manifestos. His arguments against the current directions in “web 2.0” technology are many and multifaceted, taking us through questions of the effectiveness of capitalism, how culture evolves, whether there can really be “wisdom in crowds,” and even the nature of what “human” is.
If we have to sum up the book into an overall point or argument, here’s how I’d do it: web technology, which was hoped to lead to vigorous innovation and individualization, has done precisely the opposite. On the consumption side, the idea of the “wisdom of crowds” has made the group (Lanier says “hive mind”) more important and more “real” than voices of individuals. On the production side, the internet has led less to innovative production than to the recycling of old ideas in new forms, while making it hard for inventors/pioneers to make a living being creative. (Yes, I know I am missing some things in this description but, as mentioned, Lanier’s work is very hard to sum up with concision.)
Lanier believes that there are two big reasons for this. First, we are not using our conception of humanity to drive how we shape technology so much as we are allowing technology to shape how we define humanity. A shining example is our faith in the “wisdom of crowds” as exemplified by our increasing obsession with all things wiki. Lanier reminds us that, in reality, there is no such “wisdom in crowds” because crowds are simply collections of individuals making individual decisions. (I would also add that “wisdom of crowds” is a literal impossibility as wisdom can only happen embodied in a point-of-view, of which a crowd has none.)
Secondly, Lanier believes that innovation may be lagging behind expectations because of our belief in the “information wants to be free” model. Yes, this has benefits, like offering information in a way that is accessible to…well…most. But it has the disadvantage of removing market incentives from what was a market. Lanier often uses the example of music and art: it was thought that the internet would allow more artists to make livings off of their art by removing the middlemen and allowing artists direct access to consumers. But with so much free content and exponentially increased competition, it is becoming even harder for artists to (a) get noticed in the milieu and (b) make a living off of their creativity.
Below is a passage I wrote for a PhD class in curriculum theory. The question was “Who should decide what students learn?”, particularly with regard to whether intelligent design should be taught in science classes. I post it here because I think it is a decent articulation of my view that families, parents, and children (rather than either education experts or democratically elected board members) should have the ultimate authority over what children learn.
The question is: who is to decide whether intelligent design or evolution (or both or neither) should be taught in schools? Of all the readings assigned for this week, my views align most closely with McClusky’s. The problem is that we live in a society that is simultaneously liberal and democratic, while also talking about an institution (schools) that, in some sense, has a role that is neither liberal nor democratic. As long as these three ideals are in conflict – and I think they are – one must simply choose which authority one thinks trumps the other two: experts (non-democratic and non-liberal), the majority (non-liberal and non-authoritarian), or each individual/family (non-democratic and non-authoritarian). I believe the best way to decide the issue is to leave the decisions in the hands of each individual/family.
But let me first explain why I believe we are dealing with three incompatible ideals. As a liberal society, we are committed to the idea that individuals have a right of conscience. As a democratic republic, we are committed to the idea that disputes are to be settled by appeal to the vote (at least to vote in representatives whose own votes will reflect the will of the majority). And, in the case of schools, we are also committed to the idea that there are certain things which SHOULD be conveyed to children regardless of whether they, their parents, or the majority concur. (In other words, we believe that curriculum is too valuable a subject to be left to non-experts.)
These three ideals, then, are in conflict and, I believe, irreducibly so. That is because recognizing any one negates the other two. (For example, leaving curricular matters up to majority vote abridges individuals’ liberty to decide educational issues for themselves, and also takes a stand against unelected experts deciding them.) Why do I choose liberalism over the competing values as a curricular guide?