As a professor in a College of Education, my colleagues always tell students to use person-first language when referring to students with disabilities. It is not an ‘autistic person’ but a ‘person with autism;’ not a ‘diabetic kid,’ but a ‘kid with diabetes.’ The idea is that the person comes first, and the disability or difference comes second.
I get why they do it, and while I don’t want to argue against person-first language, I do want to argue two things. First, that while person-first language may not be a bad thing, the arguments for it generally don’t survive close scrutiny; those arguments both misunderstand how we use language and probably overestimate how much the proposed change in language really affects how people think. Second, that there are reasons why person-first language is not always appropriate; sometimes it is and sometimes it isn’t, and in certain instances it may have the opposite effect of the one its users intend.
So, to be clear, I’m not saying that person-first is always a bad thing, just that the arguments for it tend not to be terribly good and that there are reasons why sometimes, it may not be the best way to speak. (more…)
What is philosophy for? What can it, and can it not, be expected to do? I have been thinking about these questions a lot lately. First, I will be teaching a class this fall to undergraduates regarding ethical and legal issues in education; I want to make sure I use philosophy to good effect and know I will have at least some students with (good, bad, or other) expectations for a philosophy course. Second, with all the emphasis on data-driven research, we philosophers of education (and other fields) sometimes feel like we’re on the defensive, having to justify ourselves in ways that other researchers don’t.
Well, recently I stumbled on a really interesting answer to the questions of what philosophy is for and what it can, and cannot, do. Richard Taylor’s essay “Dare to Be Wise” (Taylor 1968) has a bold, but satisfying, thesis that philosophy has taken a mistaken direction in questing for philosophic knowledge:
I shall maintain that there simply is no such thing as philosophical knowledge, nor any philosophical way to know anything, and defend the humble point that philosophy is, indeed, the love of wisdom (615).
I want to briefly rehearse Taylor’s argument before discussing why I see his view as a very ennobling one for philosophy. Briefly, in suggesting that philosophy is about wisdom rather than knowledge, Taylor holds that philosophy should not try to be like other disciplines, but should offer something unique that other disciplines cannot as adeptly provide. And, of course, I also happen to think Taylor’s argument is basically true.
Taylor starts with Socrates and the Greeks (Stoics, Epicureans). He suggests that the works that they produced and what they (likely) saw themselves as doing was offering wisdom rather than knowledge. Knowledge is the search for what can be demonstratively proved and is true in a factual sense. Wisdom is a deep acquaintance with a problem, sensitivity to its subtleties and parts, and (possibly) an acquaintance with possible-rules-of-thumb-type answers. While this may be a bit of oversimplification, think of Aristotle’s Nicomachean Ethics as the quest for moral wisdom and Kant’s Metaphysics of Morals as the quest for moral knowledge; in the former case, Aristotle thinks through some moral problems and reasons about some overall possible solutions that are subtle, flexible, and not considered to be ‘true’ in any provable sense. Kant, on the other hand, had it as his mission to discover via reason a moral imperative that could be proved every bit as true as a law of physics, and that was invariant to circumstance, social convention, etc. (Where Taylor may miss the mark about the Ancient Greeks is with Plato, who conceived of the philosopher as the one who could, via reason, attain the truth in the midst of those who saw only appearance.) (more…)
After a long, unintentional hiatus from posting on my blog, an unforeseen question has beckoned me back to write a post. The question (not really in my area of interest, but fascinating nonetheless) is this: what gives someone the right to be a (literary, cultural, social, etc) critic?
The question was posed on a Guardian Books Podcast (available from iTunes) called “Life, Death, and Literary Critics” (2/4/2011). Toward the end of the discussion about what literary (and other) critics do and why they matter, one listener comment asked what, exactly, gives someone the right to be a critic.
The answer one critic gave – the only answer given on the show, which is a shame, as it seems wrong – was something like “knowledge of one’s subject.” Why does that seem wrong? To me, “knowledge of one’s subject” seems, at best, to be a necessary condition for being a critic, not a sufficient one. It may be that ONE necessary trait for being a critic is knowledge of the subject one is critiquing. But someone may have knowledge of a subject yet not be a good writer, or not have very good taste, and it seems to me that many would be reluctant to call that person a critic. It also seems to me that knowledge of one’s subject isn’t ALWAYS a necessary condition for being a critic: one can have only fair knowledge of one’s subject but really good taste and instincts, and be a good writer, and be a critic, whereas someone with great knowledge of the subject, but lesser instincts or taste, would not be.
To be honest, the obvious answer I recall practically blurting out during the podcast in response to “What gives someone the right to be a critic?” is… nothing – nothing except having the urge and the follow-through to offer a critique. And if one is lucky (or offers a product that others find value in), one’s status as a critic will grow stronger the more others appraise one to be a legitimate critic (using whatever criteria they want to use).
Part of the problem, I think, is that when we ask “What gives x the right to y?” we are really asking something like “Why is x entitled to y?” Indeed, that is the sense in which the listener seemed to be asking “What gives someone the right to be a critic?” So, if the question is whether anyone can be called entitled to be a critic, I think the answer is a pretty obvious “no.” Now, we can ask why James is entitled to be a teacher in the state of Maryland, or why Josephine is entitled to practice psychiatry in the state of Wisconsin, but in those cases the answer is largely that they have jumped through the (justified or not) hoops that gave them the license which thereby “entitles” them to be a teacher or a doctor. In fact, the word “entitle” is pretty much a legalistic term that means roughly “to have been given the title,” and that is precisely what a certification is – a title that grants an “entitlement.”
But a critic? There is no certification for that. One can be an English major, or a political science major, but whether one is entitled to be a critic doesn’t seem to be dependent on whether one has gotten a certain title as much as whether one’s writing performs the role of giving a critique (and whether others who read the work concur that the writing does that). So, no one is entitled to be a critic; one must earn the title in the way one earns the title “recording artist” or “poet.” One earns the title by performing the role that people in those categories perform.
But, we can object, not everyone who scrapes together the money to record their songs in a basement studio REALLY is a recording artist. Well, in a way that is correct and, in a way, incorrect. In a literal sense, they are a recording artist because they have recorded artistry, just as anyone who has collected baseball cards was, at that time, a baseball card collector. But whether the basement-studio singer is a SUCCESSFUL recording artist, or is acknowledged by listeners to be a good one, is another question – related but different.
Now is where I’ll suggest that maybe the listener’s question was phrased wrong: rather than “What gives someone the right to be a critic?” maybe the better question is “What conditions must someone meet to be considered a critic by others?” Not “What gives someone the right to be a recording artist?” (answer: enough money to record one’s artistry) but “What conditions must someone meet to be considered a recording artist by others?” (answer: talent, good material, a product others want to listen to).
This is where I think it simply comes down to consensus. You are a critic if you offer a critique, and you are a critic to others when others consider your critiques worthy of being read and acknowledged as good critiques. I am sure this might drive many batty, as it is very relativistic, beauty-in-the-eye-of-the-beholder kind of stuff. And many will object that x may be considered a critic because they have a blog that is read by others, but they really don’t have a good grasp of what they critique, have poor taste, etc. So, am I saying they are REALLY a critic? Well, yes. Objections like this generally come down to saying, “Well, I judge their work to be unworthy and wish others would do the same. To me they are not a critic. I wish they were not held as a legitimate critic in others’ eyes either. And if they saw it my way, used the criteria I use, or had the knowledge I have, they wouldn’t see that person as a critic. Therefore, they are wrong to see that person as a critic, and the person would not be seen as a critic but for the fans’ mistakes.”
But none of that erases the fact that if we asked of this blogosphere critic, “What gives them the right to be a critic?”, the answer would basically be this: the fact that they offer a critique that some people find useful makes them a critic to those people.
And honestly, I think that is the best we can do… unless we can find some really good sufficient conditions that are strong enough to trump my subjectivistic theory. If we could find an instance where, say, someone has millions of fans who view that person’s work as good criticism, and we came up with a theory of sufficient conditions for criticism strong enough to show that, despite being called a critic by millions, they are really not a critic at all, then my theory would be disproved. (But in reality, I think any such theory would reduce to the theory’s inventor coming up with THEIR OWN standard for judging who is a worthy critic, arguing that everyone else should just adopt that same standard, and that anyone who doesn’t is wrong.)
So, I think it was a shame that the question “What gives someone the right to be a critic?” was badly answered. I suspect the answer given may have been intuitive to some critics, who really do not want their status as critics to be wholly dependent on a market process, and who want their work to be something more than products that depend on appealing to consumers before imparting any superior knowledge. But I just think my answer is more convincing.
I am going to take a little excursion from the world of education to discuss a political issue I feel strongly about: why I vote libertarian and do not see this as “throwing away my vote.”
If you didn’t donate all you could, if you didn’t volunteer for the Republican party or its candidates, if you didn’t get your friends out to vote – the blood for this is on your hands.
This was on an acquaintance’s blog and is typical of arguments that we third party voters hear quite often. The argument can be generalized thus:
If x and y are the major political candidates and you voted for third-party candidate z, you are (in effect) helping the front-running candidate win and are, indirectly, responsible for that candidate winning.
To make matters worse, the Libertarian Party (which, owing to its Reagan-esque belief in small government, often “takes” votes from the Republican Party more than from the Democratic Party) is often accused of tacitly helping Democrats win office. This is similar to the way those who vote Green or Socialist are accused of tacitly helping Republicans win seats (because Green and Socialist candidates often ‘take’ votes from disaffected Democrats more than from disaffected Republicans).
So, am I throwing my vote away by voting for the Libertarian Party (which, as much as I would like otherwise, is almost always the losing horse)? Am I to blame for handing Democrats victories by ‘taking’ my vote away from the Republican candidate?
I confess that, try as I might, I don’t see the logic in this charge. The above argument assumes that the Republican candidate is somehow a better representative of my small-government beliefs than the Democratic candidate is. In my early days, I must admit, I did hold this idea: I always looked on Republicans more favorably than Democrats, and even though they were only the “lesser of two evils,” they were at least the lesser evil.
Then George W. Bush happened. (more…)
I’ve just read a really exciting new book by technology (and overall) genius Jaron Lanier. The book is called You Are Not a Gadget: A Manifesto. In it, he criticizes the direction of what he calls “internet 2.0” in a way that avoids Luddism. That is, he criticizes the way the technology is going, and the way we think about the technology, not necessarily the technology itself. (After all, he did largely create virtual reality!) Below is an extended version of my Amazon review.
The first thing that must be said about Jaron Lanier’s “You Are Not a Gadget: A Manifesto” is that it is a very intricate book, full of several different arguments and lines of thought. It might be best to say that it is a manifesto containing several sub-manifestos. His arguments against the current directions in “web 2.0” technology are many and multifaceted, taking us through questions of the effectiveness of capitalism, how culture evolves, whether there can really be “wisdom in crowds,” and even the nature of what “human” is.
If we have to sum up the book into an overall point or argument, here’s how I’d do it: web technology, which was hoped to lead to vigorous innovation and individualization, has done precisely the opposite. On the consumption side, the idea of the “wisdom of crowds” has made the group (Lanier says “hive mind”) more important and more “real” than voices of individuals. On the production side, the internet has led less to innovative production than to the recycling of old ideas in new forms, while making it hard for inventors/pioneers to make a living being creative. (Yes, I know I am missing some things in this description but, as mentioned, Lanier’s work is very hard to sum up with concision.)
Lanier believes that there are two big reasons for this. First, we are not using our conception of humanity to drive how we shape technology so much as we are allowing technology to shape how we define humanity. A shining example is our faith in the “wisdom of crowds” as exemplified by our increasing obsession with all things wiki. Lanier reminds us that, in reality, there is no such “wisdom in crowds” because crowds are simply collections of individuals making individual decisions. (I would also add that “wisdom of crowds” is a literal impossibility as wisdom can only happen embodied in a point-of-view, of which a crowd has none.)
Secondly, Lanier believes that innovation may be lagging behind expectations because of our belief in the “information wants to be free” model. Yes, this has benefits, like offering information in a way that is accessible to… well… most. But it has the disadvantage of taking the incentives that markets provide out of the market. Lanier often uses the example of music and art: it was thought that the internet would allow more artists to make livings off of their art by removing the middle-men and allowing artists direct access to consumers. But with so much free content and exponentially increased competition, it is becoming even harder for artists to (a) get noticed in the milieu and (b) make a living off of their creativity. (more…)
Below is a passage I wrote for a PhD class in curriculum theory. The question was “Who should decide what students learn?”, particularly with regard to whether intelligent design should be taught in science classes. I post it here because I think it is a decent articulation of my view that families, parents, and children (rather than either education experts or democratically elected board members) should have the ultimate authority over what children learn.
The question is: who is to decide whether intelligent design or evolution (or both or neither) should be taught in schools? Of all the readings assigned for this week, my views align most closely with McClusky. The problem is that we live in a society that is simultaneously liberal and democratic, while also talking about an institution (schools) that, in some sense, has as its role something neither liberal nor democratic. As long as these three ideals are in conflict – and I think they are – one must simply choose which authority one thinks trumps the other two: experts (non-democratic and non-liberal), the majority (non-liberal and non-authoritarian), or each individual/family (non-democratic and non-authoritarian). I believe the best way to decide the issue is to leave the decision in the hands of each individual/family.
But let me first explain why I believe we are dealing with three incompatible ideals. As a liberal society, we are committed to the idea that individuals have a right to conscience. As a democratic republic, we are committed to the idea that disputes are to be settled by appeal to the vote (at least to vote in representatives whose own votes will reflect the will of the majority). And, in the case of schools, we are also committed to the idea that there are certain things which SHOULD be conveyed to children regardless of whether they, their parents, or the majority concur. (In other words, we believe that curriculum is too valuable a subject to be left to non-experts.)
These three ideals, then, are in conflict and, I believe, irreducibly so. That is because recognizing any one of them negates the other two. (For example, leaving curricular matters up to majority vote abridges individuals’ liberty to decide educational issues for themselves, and also takes a stand against unelected experts deciding them.) Why do I choose liberalism over the other competing values as a curricular guide? (more…)
As a libertarian, I find it painful to admit flaws in libertarianism as a philosophy. But one problem in libertarian theory I’ve become increasingly sensitive to is the problem of how children are handled in a libertarian society. I believe I know where the problem stems from, and also why certain existing arguments are flawed, but I don’t have much of an idea of how to rectify these flaws without violating a certain amount of libertarian theory. Oh well. Here is my attempt, at least, to look at one of the more interesting arguments for how libertarian theory should treat kids: Murray Rothbard analogizes parent/child relations to house-owner/houseguest relations.
Before getting into that, I want to briefly outline why I think libertarians have such a hard time with the “child problem.” Libertarians, I think, are good at dealing with two different ideas: people (in the sense of autonomous adults) and property. To put it bluntly, children are neither of these and are probably best seen as somewhere in between the two. Children resemble, but are not, autonomous adults in certain ways: they are physically autonomous, their brains/minds are not linked to anyone else’s, and they can decide certain things for themselves. But in other ways, children resemble, but are not, property: parents are legally responsible for taking care of children, children are in some sense ‘acquired’ by choice, children do not have a real choice in who their ‘owners’ are, etc.
But children are neither persons nor property. They are not quite autonomous persons because we – except some libertarians – recognize that children lack the mental ability to make certain decisions on their own or have the type of absolute freedom we grant to adults. Nor are they property because, morally, it strikes us as horrendous to think about parents being able to do anything they would like to their children. Unlike property, children have at least SOME freedoms.
Here is an article detailing an upcoming court case seeking to overturn a prohibition on gay marriage in California. There is a serious problem I have with this case, even though I am a very fervent supporter of gays’ and lesbians’ right to marry. This paragraph illustrates the problem:
The case will decide a challenge to California’s gay marriage ban that was approved by voters in 2008, and the ruling will likely be appealed to the U.S. Supreme Court. (My italics)
The problem is not that there is a challenge being brought over whether gays can be denied marriage rights. The problem is that we are asking a state court to set aside a democratic ruling about a state issue. And deeper still, such an action helps illustrate what I think is the American public’s tenuous relationship with democracy. We tend to extol it as the most just form of government, and to want more of it when it isn’t being allowed to operate, but then we try to trump it when it gives us results we don’t like.
I remember very well the protests when the election of 2000 was effectively decided by the Supreme Court (whether justly or unjustly): “More democracy!” was a commonly heard cry. And in many political tracts, the word “democratic” is often used as an adjective synonymous with “just,” “good,” and “egalitarian.” But here we are in a bizarre predicament: scenes like the one in California are forcing us to face up to the idea that democratically chosen policies do not always lead to egalitarian and just results. As our founders feared, sometimes democracy really does mean the right of some to vote against others. (more…)
Recently, I watched an interesting YouTube video of economist Thomas Sowell giving a talk about public schools. In response to a question about the claim that standardized testing may not be objective and may be biased, Sowell said this:
Compared to what? … That’s the question economists always ask: “Compared to what?”… Nothing is easier than to prove that something human has imperfections. I’m amazed at how many people devote themselves to that task.
This very simple statement – it almost seems like common sense! – is quite hard for many to grasp. Whether the issue is standardized testing (and whether to do away with it) or any other perceived injustice of society, many people’s reaction is to point out the flaw and use this as prima facie evidence that the system giving rise to it must be fixed, abolished, or reconstructed.
Sowell’s point – one I share – is that pointing out a flaw in a thing is not, by itself, enough to argue against the thing. The next step – one not often taken up – is to argue that a concrete proposed solution will be better than, and have fewer flaws than, the system it is meant to replace. (more…)
In political theory, a big deal is often made about criticizing liberalism (small “l”) for not being the neutral, value-free setup it allegedly pretends to be. Liberalism, of course, is the idea of a society set up so that the government refrains from the business of telling us how to live and leaves people free to pursue goals as they wish, short of harming others. Critics point out that liberalism still is not value-neutral: it prevents certain things from being done (certain anti-liberal practices, like refusing to send a child to school) and requires debate to be conducted in a certain way (in a secular way that leaves personal religious views at the door).
But here is a question: so what? What if liberalism draws lines that will inevitably restrict some from acting in ways they wish? Show me a social vision that doesn’t. (This, of course, is never done, because it can’t be done. The only social vision without rules is anarchism, which, as anarchists tell us, is not a system but the antithesis of one.)
Michael Sandel and Stanley Fish, two thinkers with little in common, have both separately argued this criticism against liberalism: it pretends to be a value-free system whereby individuals can pursue their own visions, but at some point it has to take a stand. As a vision of justice, it must take a stand, and by taking a stand it must presuppose that certain values are superior to others. (What about the people who DON’T want to be left alone by the government? Supposing that non-interference is the highest good is to choose one good above all others.)
But the obvious retort to this is one seldom heard, and that is to affirm what is at issue. (more…)