Tuesday, November 24, 2009


The interminable debate over health care reform has revived one of the common complaints about our legislative branch: it provides for equal representation of states in the Senate. Under our system, citizens of my home state of Rhode Island (population: 1 million and shrinking) carry the same weight as those in my current state of North Carolina (population: 9 million and growing). To add insult to injury, the size of the House has not been updated for almost a century, allowing it to devolve from a proportional representation of the nation's will into some form of Senate-lite, where smaller states still enjoy a far better politician-to-constituent ratio (3:1 in the case of Rhode Island versus North Carolina). All of this is enough to leave voters in larger, bluer states shaking with rage as they watch senators from Montana and Maine decide how to fix health care for the 99.993% of the population that has no say in their re-election. Talk of abolishing the Senate has become almost as popular as talk of banning the filibuster.

The refusal to increase the size of the House is clearly an example of the powerful trying to retain power at all costs, but the belief that the Senate is a scam that tears at the very fabric of democracy is simply wrong. Such a belief is flawed because it assumes that we live in a democracy, which is only partially true. The U.S. is not a democracy - it is a democratic republic. Each state is currently a representative democracy, using the popular vote as a means of electing representatives to a larger federal government. The states come together as a republic, having agreed to join based on the laws that were in place at the time.

So while Rhode Island and North Carolina have been democracies for quite some time, the U.S. as a whole has never had such a distinction, and claims that we are being robbed of our political will by the Senate's continued existence betray an ignorance of basic civics. Would the Founding Fathers have drafted the same model of governance if they knew how the geographic, social, and financial aspects of the country would change? Perhaps. But that is neither here nor there - the fact is that the country would not have come together and survived if it had tried to be a full-fledged democracy instead of a republic. And since it never was such a democracy, it is silly to complain about how growth and inertia have ruined it.

Consider the following historical anecdote:

From the signing of the Constitution through the end of the Civil War, the nature of the republic was demonstrated in writing and in speech by the use of plural conjugation. A journalist might say, "The United States are negotiating a trade pact with France," the plural "are" indicating that the states - not the union - were the ones making the final decisions.

Post-Civil War, the story changes: having brought the rebel states back into the fold, the federal government flexes its muscle, enacting laws that guarantee more uniformity in state laws and using the financial crisis of Reconstruction to keep states in check. Soon journalists drop the plural and start referring to the union rather than the states: "The United States is negotiating a trade pact with France." This is how we think of the country today, but it is not how it was designed. The expansion of federal power in the last century has caused modern citizens to think that their state is simply a convenient subdivision of the federal government, when in fact it is a full partner in a contract between sovereign states, a contract predicated on the existence of the Senate (among other things).

All of this means that we are getting the exact type of government that we are supposed to get, which is good, because in this particular case, there is nothing we can do to change it (short of revolution). In addition to outlining the process for constitutional amendments, Article V also makes one very important restriction: it prohibits the removal of equal representation in the Senate unless every single state in the union ratifies the decision.
Provided that no Amendment which may be made prior to the Year One thousand eight hundred and eight shall in any Manner affect the first and fourth Clauses in the Ninth Section of the first Article; and that no State, without its Consent, shall be deprived of its equal Suffrage in the Senate.

That's right - our Founding Fathers, while woefully ignorant of all the amazing changes that would occur in subsequent centuries, were able to foresee that fluctuations in people and power could result in a supermajority with the power to nullify an essential building block of inter-state peace. They explicitly prohibited any amendment related to Senate representation unless it had the blessing of all states, something that is all but impossible. Even the sacred First Amendment does not enjoy such protection.

The Senate may be a frustrating obstacle to larger states looking to influence policy decisions, but understanding the historical precedent for it can help us understand why things like health care reform may require opt-in clauses and other seemingly inefficient mechanisms in order to pass muster. Senators have a reputation for being academic and procedural, and they are certain to enforce their state's role in the process even if half of America writes it off as an inexplicable mistake.


Monday, November 23, 2009


Of all the crazy things that people are saying about the planned civilian trial of Khalid Sheikh Mohammed (KSM), the craziest is the idea that the defendant is not entitled to constitutional rights because he is not a U.S. citizen. This is complete balderdash. If you remove the strong emotion and anger associated with 9/11 for just a moment and think about the factors that went into this decision, it becomes clear that a civilian trial is the most reasonable approach. Relevant factors include the actual text of the U.S. Constitution with regard to criminal law and legal precedent with regard to the rights of non-citizens.

A quick review of the actual text confirms that our judicial system does not differentiate between trials for non-citizen criminals and those of U.S.-citizen criminals. Thus, the furor that has been raised because KSM is enjoying the same rights as a U.S. citizen indicted for an equivalent crime is misplaced. The rights in question are covered under the Fifth and Sixth Amendments, which read as follows:
No person shall be held to answer for a capital, or otherwise infamous crime, unless on presentment or indictment of a Grand Jury, except in cases arising in the land or naval forces, or in the Militia, when in actual service in time of War or public danger; nor shall any person be subject for the same offense to be twice put in jeopardy of life or limb; nor shall be compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation.

In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor, and to have the Assistance of Counsel for his defence.

Let's focus on the first line of each amendment:

No person shall be held to answer for a capital, or otherwise infamous crime, unless on presentment or indictment of a Grand Jury, except in cases arising in the land or naval forces, or in the Militia, when in actual service in time of War or public danger;

Right off the bat, we can see that this law pertains to any "person", not any "citizen", "native", or other designation. Interpreting "person" to mean anything more specific in order to arrive at a predetermined conclusion would require judicial activism, which is anathema to those in vehement opposition to the trial. That's one downside of strict constructionism - it doesn't leave you much wiggle room when you're not getting your way.

Now, there is also a clause that allows the government to avoid a traditional trial in the case of war crimes. This is a popular citation by those looking for a way to shuttle the terrorists off to a military tribunal. There are a number of problems with this approach.

First, defining 9/11 as a war crime is questionable at best given how we have reacted to previous instances of domestic terrorism. I do not recall anyone declaring the 1995 Oklahoma City bombing to be a war crime, nor the 1993 World Trade Center bombing. In both cases, a high-profile target in a high-density area was attacked by extremists because of some perceived injustice by the U.S. government, and yet both sets of suspects were tried, convicted, and sentenced by civilian courts.

Second, despite their personal declarations of war against our government, the fact is that all of the aforementioned terrorists acted on U.S. soil, against U.S. civilians, without the support of any sovereign nation. Quite simply, they are thugs who managed to commit a crime that was an order of magnitude more destructive (both physically and emotionally) than other thugs. I do not know of any domestic law that attempts to clarify how violent or unique a crime must be in order to be classified as a war crime, and all of the international laws relate to battlefield scenarios that are clearly not applicable. Regardless, I suspect that such a law would be easily circumvented by the unlimited creativity of those who wish to attack us.

But just for the sake of argument, let's say it were possible to have a domestic war crimes law. Where would you draw the line between regular crime and war crime when outside a military theater? How many people do you have to kill? How much money must you waste? Does emotional scarring matter? What about relativity? Oklahoma City had far fewer deaths and injuries than 9/11, but I imagine it was equally traumatic given that the entire metropolitan area is just one-eighth the size of Manhattan. Similarly, assassinating the President only takes the life of one person, but it would be more devastating to the nation as a whole than the 1993 WTC bombing, which killed six civilians and caused non-fatal structural damage to one building.

Without answering these questions, we cannot know if KSM is indeed a war criminal or if we are gerrymandering the rule of law to fit his actions and our desired results.

In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation;

To those who think that prosecuting KSM in Manhattan is an irresponsible decision by politicians looking to ignite a media circus, I present the procedural requirements of the Sixth Amendment. If the Fifth Amendment requires us to have a civilian trial, the Sixth Amendment requires that we have it in the same district where the crime was committed. Because Manhattan is an area with an extremely high population density, its district is geographically small, and so it happens that the courthouse where KSM must be tried is in close proximity to the former WTC. If KSM had attacked a tall building in Wichita, then perhaps the courthouse and the site of the attack would be farther apart and the whole situation less controversial, but that is not the case. If you attack innocent people in the financial district of downtown Manhattan, you will plead your case in the same[1].

So, taking the time to read the text of the laws in question has proved helpful. For me this is sufficient, but let's continue with the second factor (legal precedent) to seal the deal.

To determine the rights of non-citizens as they relate to criminal acts and trials, we should make a comparison with the other rights we give to non-citizens. A cursory review of the Bill of Rights and simple reflection on everyday happenings reveal that foreigners can come to our country and immediately enjoy all of the rights prescribed therein. The fact that you are here on a tourist visa (or whatever) does not mean that a police officer can arrest you for saying derogatory things about Michelle Obama, Lost, the Pittsburgh Steelers, or anything else that Americans love. You are also not forced to select or disavow any religion. No one will try to quarter troops in your apartment. The government does not have the authority to buy houses owned by non-citizens for pennies on the dollar and use them for public works projects.

And, as painful as it may be, they will be tried by an impartial jury if they are suspected of a crime. There is nothing in the Constitution or any court ruling that provides an exception for foreigners. Illegal aliens may be deported because it's more efficient than trying and jailing them, but anyone who is here legally will have due process. Even those pieces of the Alien and Sedition Acts that still remain in place only authorize a wartime Executive to arrest and deport non-citizens - they don't allow continued detainment and punishment. Since we clearly don't want to let KSM go free to whatever country would take him, we're obliged to indict him and put him on trial. Those are the rules.

With all that said, sometimes rules become obsolete and must be updated to handle scenarios that the original authors could not have imagined, such as the need to wage war against an organization that has no property, borders, national economy, or international obligations. This may be the most frustrating news for those opposed to the KSM trial: not only is our Constitution incomplete on the subject of modern warfare and terrorism, but there is no one to blame for it. Rather than lash out at politicians for adhering to the law as it's written today, opponents should instead focus their energy on changing the law through the means provided by its authors. At least then we would be having an intellectually useful debate about what the law should be as opposed to a pointless debate about what people assume the law says.

[1] And from KSM's perspective, there's really nowhere he could go to escape his reputation and improve his odds of a fair trial. Everyone hates him, and New York jurors will only be marginally more prejudiced than those in Nebraska or Oregon.


Friday, October 23, 2009


Some notable events have transpired since my last blog post. And... go!

We adopted another kitten. Her name is Ella.

I started playing roller hockey again. It's very European.

I learned Python. It's just as enjoyable as Groovy, but more marketable.

We shipped WebSphere CloudBurst. It was exhilarating and gut-wrenching, all at the same time.

I worked hard on WebSphere CloudBurst. It has been less exhilarating, but also less gut-wrenching.

I hiked sixty miles through the desert in an attempt to un-wrench myself. Results were mixed.

I shaved my head. It was time.

Finally, I vowed to actually use all of the vacation time given to me by IBM. I was recently reminded that unless the work I do in lieu of vacation leads to a raise that is significantly more than three weeks' salary, refusing vacation is equivalent to returning compensation to the company. I used to know that, but in the heat of battle, one can sometimes lose context. My posting frequency during December will reveal whether this advice has sunk in or not.


Wednesday, November 5, 2008


As a kid, I always imagined that I would be in my fifties before the nation elected a black president; in this hypothetical scenario, the gender barrier would fall first (in my forties), and in both cases, the candidates would need a conservative or Republican label to balance out the bias that Southern voters would have against them. Yesterday, we put that theory to the test, when the electorate was asked if a left-leaning black man with an Arab name and roots in the Chicago political system was the right man to lead them out of the wilderness and into an era where past political assumptions were no longer valid.

The answer? Yes He Is.

The most exciting thing about this election is that the future is much more malleable than it was before. It's likely that things will not turn out quite as well as Obama and his supporters are hoping, but I think it is undeniable that the probability of change is much greater than it was yesterday. America is not a country of gamblers, but the election of Obama indicates that it knows when to cut its losses and try something new. For McCain supporters (and Palin supporters), there is a positive message to take away from all this: if you care enough to participate in our electoral process, you can make a difference.

At the same time, I have to imagine that the idea of an Obama-type campaign - with its extensive volunteerism and rabid enthusiasm - is actually quite terrifying to politicians of either party who are facing re-election in 2010 or 2012. And that is fantastic.

Congratulations, Barack. A significant portion of my projections for the next two or three decades has just been wiped clean by your unprecedented campaign; this internal view of the country and where it's headed is by no means a blank slate, but it's much more interesting than it was when I was a kid. I can't wait to see what happens next.


Friday, June 27, 2008


Last fall, this blog openly mocked those who thought that the eventual ruling in D.C. v. Heller would be a silver bullet that clarified the nature of the Second Amendment once and for all. I fully expected the justices to punt, passing down a narrowly scoped decision that resolved the immediate conflict without setting a precedent by which other gun regulations could be interpreted.

Yesterday, Justice Scalia threw my arrogance right back in my face, handing down a decision that did what I claimed was impossible: it made a solid case for individual gun rights based on a sentence that has more grammatical errors than words. This opinion may end up being the most significant of Scalia's career; he shows no mercy, shooting down each of the dissenters' points with an impressive combination of 18th century literature reviews and historical research on the motivations of the laws that preceded the Second Amendment. Whether you like Scalia or not, he makes a hell of an argument. If you haven't read it, you owe it to yourself to at least read the syllabus[1].

This is not to say that the opinion is perfect. It does not include a comprehensive set of instructions for dealing with all of the edge cases surrounding individual gun ownership; it merely states that individuals must be allowed to own guns outside the context of a state organization. The ruling also makes clear that individual acts of self-defense are a constitutional right, but it does not explain why it is permissible to apply heavy regulations to such acts in public but not in private. Scalia himself acknowledges that there is enough fodder among the edge cases to fuel years of lawsuits.

What I find most interesting about this ruling is that the author - an unabashed social conservative who wears his politics on his sleeve - resisted the temptation to strike down state and municipal laws banning certain non-automatic weapons[2]. Most conservatives resist the notion of incorporation, which holds that the Bill of Rights must be applied at a state level because of the Fourteenth Amendment's due process clause; to those who believe in extreme judicial restraint, it is not the duty of the federal government to protect the people from state laws that undercut the Bill of Rights. However, since total incorporation has been in effect for decades now, Scalia could easily have rationalized its application within his opinion, invalidated any state or municipal law banning certain non-automatic weapons, and been a national hero to millions of people.

But he didn't. He stuck to his principles, knowing that his decision would set off a landslide of lawsuits in the lower courts.

I don't think that Scalia is innocent of having injected his personal beliefs into past decisions - he's made many arguments that were really hard to swallow from an alleged small-government conservative - but in this case, he walked away from the opportunity to slam the door on people he considers political opponents. I realize that the other four justices in the majority had input into the decision and would not have signed on if they didn't agree with all of his findings, but Scalia is an imposing personality, and I think he could have strong-armed the others into going along with it. Alternatively, had Justice Kennedy been assigned to write the opinion, and he had tried to apply incorporation, I think the right-wing justices would have been turned off and written a separate majority opinion, resulting in a ruling that favored Heller but did not set judicial precedent (which was my original prediction).

At the very least, we can appreciate this ruling because it means that Democrats will no longer have to reassure midwestern gun-owners of their unwavering gun love by participating in incredibly awkward campaign gimmicks.

[1] Abstract, for the scientists in the audience.

[2] This case is complicated by the fact that D.C. is a municipal oddity: it is a city, but it is run by the federal government. This means that federal law - which is where Scalia's non-incorporative decision will be applied - is the only law that matters. If Dick Heller lived in any other city in America, the findings would probably have been less favorable for him.



One thing I've learned about blogging is that it's impossible to make up for all of your would-be posts after a long period of silence. Getting back into a blog routine is frustrating because you feel the need to share all the significant thoughts you have had during your silence, especially if your silence was the result of major life events. Trying to summarize months of your life in one post is overwhelming and just extends your procrastination, so the best thing to do is to just let all of those would-be posts fall into the bit bucket and move on as if nothing happened. So that's what I'm going to do.

By the way, Bridgid and I hiked across Utah, got married, remodeled our house, and moved across the Triangle to Chapel Hill.

Just so you know.


Thursday, February 21, 2008


In my home office I have four cardboard boxes packed tight with books. Ninety percent of them are related to programming: languages, frameworks, best practices, and so on. Some of them are life-changing tomes (The Design and Evolution of C++ and Effective C++, to name a few) while others are glorified API documentation, but all have one thing in common: they have not been opened in over four years. The Internet has made them irrelevant.

I still appreciate physical books - the remaining ten percent of my books are focused on constitutional law and American history, and it would not have been nearly as enjoyable to read them had they been e-books[1]. But the fact that these extracurricular books are packed away just as tightly as their technology counterparts makes it clear that I have little use for books even when they provide content that isn't available on the Internet; the additional books that I've collected since my last move now sit on top of the boxes and will likely be packed away one weekend when I happen to have an extra cardboard box. Having contemplated this situation for a couple of days, I eventually found enlightenment in an old episode of Seinfeld:
Jerry: So that's it? You're out?

George: Except for one small problem. I left some books in her apartment.

Jerry: So, go get them.

George: Oh, no. No, I can't go back there. Jerry, it's so awkward and, you know, it could be dangerous... sexually. Something could happen, I'd be right back where I started.

Jerry: So forget about the books. Did you read them?

George: Well, yeah.

Jerry: What do you need them for?

George: I don't know. They're books.

Jerry: What is this obsession people have with books? They put them in their houses like they're trophies. What do you need it for after you've read it?
The books that I own were indispensable during my college years. They quenched my insatiable thirst for knowledge and helped me create the kind of software that was being churned out by real programmers, not just the toy projects that we got for homework. Each semester I would buy my textbooks early, read them, and then spend the rest of the semester learning about the things that weren't covered by RPI's curriculum but seemed really exciting, eventually putting those things to work in side projects that I never seemed to finish. This has led to a situation where all of my books are filled with knowledge that could be considered either the foundation of my career or tragically antiquated[2]; either way, these books are just trophies that represent my ability to learn the basic skills required of a professional programmer. All of the new and advanced skills that I use in my day job have been gleaned from the countless tutorials and source code repositories scattered across the Internet.

And they're heavy trophies. When I look back on the last three or four moves I've made, the heaviest and most cumbersome thing to move was always my book collection. It's a chronic back problem waiting to happen, and now that I'm approaching my late twenties, I have to consider these things. The only reason I've looked through my books post-graduation was to find and ship two of them to a friend who was unfortunate enough to be working on a project full of old Win32 code.

Is that what my book collection has become? A used book depository for the handful of programmers that I know? It all seems so wasteful[3]. I think the time has come to get rid of my book collection; as anyone who has visited my past apartments will tell you, I've always been kind of a minimalist, and these books are doing more harm than good here in my office. I will try to give them away to college students and other aspiring programmers, but I have a feeling that many of them will go unclaimed, doomed to the recycling bin.

Of course, I don't think it will matter if I toss all of my books and then forget the ins and outs of the Win32 thread API, but it will matter if I can't articulate why I vote the way that I do or learn from past mistakes; for this reason, the lessons of American history will still have a physical presence in my life. So long as they fit in one box.

[1] I'm not saying that e-books are bad or that they will never be popular - they're just not my cup of tea.

[2] It's sad to look back on all of the books that I had to read just to realize that MFC was a disaster.

[3] And heavy - did I mention that?


Thursday, February 14, 2008


I've been working on the Zero team for almost a year now, and in that time, Groovy has become my language of choice, both for Zero applications and non-Zero utilities. Groovy is, as Jerry Cuomo put it, "the nicotine patch for Java programmers"; it provides many of the cool features found in Python while freeing me from the tedious boilerplate of Java, all with a gentle learning curve. Like most Java-turned-Groovy users, I started out writing Java-centric code, picking up Groovy's shortcuts and elegance as I grew more experienced and shared code bases with other Groovy users. There are still many features that are not part of my toolbelt, but every day I seem to pick up a new one.

Because I use Groovy both for RESTful resource implementations and utility scripts, I often use Zero's /app/scripts directory to store code that is in any way reusable; this shortens my resource scripts and keeps time spent refactoring to a minimum. The only problem with invoking code in /app/scripts is that you have to do so with generic, reflection-based APIs, like so:
def script = "FooUtils.groovy";
def method = "getFoo";
def params = ["param1", "param2", ...];
def foo = invokeMethod(script, method, params);
To make the code in /app/scripts visible to your other Groovy code, you need to create a binding. Making a Groovy binding for a script isn't hard; it's just kind of tedious: you write a Java class that maps each standalone function name to a reflection-based invocation of the target script, using Groovy's script engine API to call the target function. You must also update your configuration file to register your Java class as a Groovy binding. The whole process is outlined in Zero's documentation, as well as in every developerWorks article I've written in the last six months. If you follow the instructions prescribed by the Zero team, the block of code shown above becomes much more readable:
def foo = getFoo("param1", "param2", ...);
It's not often that I put code in /app/scripts that isn't meant to be shared with the rest of my application, so after a while I started poking around zero.core to see if there was a way to enable bindings automatically, with no Java code or config stanzas. The short answer is that, yes, it would be possible, but we would take a performance hit because of some additional reflection; I have not bothered to implement this solution, so I cannot say how severe the hit would be. I didn't want to go through a lot of trouble only to find out that my solution was slow as molasses, so instead I wrote a Groovy script to generate the binding classes and config stanzas for me.

The script is named binding.groovy, and you can download it here. You can look at a sample console session below:
$ ls
$ groovy binding my.zero.app
$ zero build
$ zero run
The script generates classes and configuration without touching any of your existing files. The zero build step compiles the Java classes so that they will be on the classpath at run time (zero run). You can find more details on usage, behavior, and licensing in the header comments.

Working on this tool gave me the opportunity to make a very useful comparison between Groovy and Java. Last summer I used Java to write RESTdoc, and that tool shares many requirements and behaviors with my latest one: both analyze the structure and code in a Zero application and use that information to generate one or more files using a template. RESTdoc is more complex because it must be usable from the command line, Ant scripts, and a GUI, but many of the algorithms are the same.

Based on my two experiences, I would have to say that using Groovy was far more enjoyable than using Java. But why?

First, I was able to get right to coding, without having to create all of the boilerplate that seems to appear in all of my non-web applications. You know: first write main(), then a non-static run(), then set up the exception handling, then define an exception hierarchy, and so on; then, just as you're starting to write real code, your mind starts to map out the larger pieces of the tool, and you start to think about which of these pieces should be pluggable, and then you start defining interfaces, and soon it's the end of the day and all you've done is create an Architecture.

Tomorrow, you think, I just have to write the code. And it seems like such a logical thought to have.

But it's not.

I wrote RESTdoc in just over two days. This latest tool required four hours. Granted, I was able to reuse many of the ideas I'd had while implementing RESTdoc, but those are just ideas - I couldn't reuse most of the code because it was all so... big. I knew that the code could be much simpler in Groovy, so I rewrote it. Quickly. The Groovy tool took less time because I was able to focus on actual logic and actual testing, not Java-oriented procedures that catered to my neuroses.

Second, the ability to use closures made my code smaller while also increasing its readability. Most of my closure usage is coupled with methods like each() and collect() (and their derivatives), methods that accept a closure as a parameter and apply it to a collection. I'm sure that some people abuse closures in a way that makes them feel like Java's anonymous classes, but for the most part they seem to function as a way to get things done with less bureaucracy.
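To make that concrete, here is roughly what a single Groovy call like names.collect { it.toUpperCase() } costs in (pre-closure) Java. The Transformer interface and the collect() helper below are hypothetical stand-ins I'm inventing for illustration, not part of any real API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for the "function" parameter a Groovy closure provides.
interface Transformer<I, O> {
    O transform(I input);
}

public class CollectByHand {
    // What Groovy's collect() does for you: apply the transformer to
    // every element and gather the results in a new list.
    static <I, O> List<O> collect(List<I> items, Transformer<I, O> t) {
        List<O> results = new ArrayList<O>();
        for (I item : items) {
            results.add(t.transform(item));
        }
        return results;
    }

    public static void main(String[] args) {
        // The anonymous class is the bureaucracy that a one-line closure removes.
        List<String> upper = collect(Arrays.asList("foo", "bar"),
                new Transformer<String, String>() {
                    public String transform(String s) {
                        return s.toUpperCase();
                    }
                });
        System.out.println(upper); // prints [FOO, BAR]
    }
}
```

In Groovy, the entire body of main() collapses to ["foo", "bar"].collect { it.toUpperCase() }, which is exactly the readability win I'm describing.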

The third thing that makes my Groovy development more enjoyable is the fact that lists (java.util.List) and maps (java.util.Map) are built into the language, and I can use them to create utility data structures without defining an inner class with getter and setter methods. You can do this in Java too, but it's frowned upon; it just doesn't feel right to put so much structure around your code and then use bags of goo to store your data. But while that's an appropriate feeling to have in many scenarios, it's a real downer when you're writing a script to generate config files. I love the fact that I can represent part of a parse tree with a set of key-value pairs and not feel guilty about it, and I really love the fact that I can create that set in one line of code:
return [
    name: "getFoo",
    params: ["param1", "param2"],
    hasReturnValue: true
]

Finally, for all of the Java bashing I've done in this post, I have to remind myself that one of the best things about Groovy is the fact that it lets me devolve into traditional Java programming when I really need it. There are some tools for which Java integration is superior, and those tools aren't going to change any time soon. Java is also the original language of the JVM, and it is the best way to expose a language-agnostic API on that platform.

And sometimes, I'm just not ready to do things The Groovy Way. Like all creatures of habit, there are times when I hang on desperately to the past, for no good reason at all. Groovy allows for all of that, and it doesn't mock me when I fail to use it to the best of its abilities. It just runs my script.

He'll come around, it says to itself. Some day.


Friday, February 8, 2008


On Wednesday I posted a little memo to all of my readers who may be hockey fans; it was only two sentences long, but it ended up causing much more grief than anything else I've written for this blog. I'm using this post as a budget therapy session.

First, I should clarify that this grief was completely internal and has nothing to do with the fact that the Hurricanes' best penalty killer is out for the season with a broken leg; in fact, I wrote the post in less than two minutes, and there is no evidence to suggest that any of my readers even read it, let alone cares about its topic. My pain relates to Blogger's user interface, which I don't use for composing but do use for posting. After copying the text of my memo from a local file to Blogger's glorified <textarea/>, I accidentally hit some combination of keys that caused Blogger to initiate the publishing process and then go back two pages to the "dashboard"; it was all very fast, and, not realizing what had happened, I started again by clicking the Create New Post button and re-copying my text.

Once I published, I realized that I had actually created two posts. The first had a permanent URL of /2008/02/rosey.htm and the second was at /2008/02/rosey-06.htm. This enraged me for reasons that I will get to shortly. My initial reaction was to delete both posts, republish my index page, and then create the post a third time. Upon doing this, my post resided at /2008/02/rosey_6595.htm. The exact role of these random numerical suffixes is unknown to people outside of the Blogger team, but I have a hunch: I think this is a case of REST pedantry.

Blogger is treating each new post as a completely new resource and it is ensuring that all new resources have a unique URI; because the tool uses the post title as the last token in the URI, it is tacking on a suffix so that the final URI does not conflict with any other resource. It does this even if the conflict is with a resource that no longer exists so that a client is never tricked into thinking it's dealing with the original resource (i.e., it does not realize the original resource was deleted).
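If my hunch is right, the engine's bookkeeping might look something like the sketch below. To be clear, this is pure speculation on my part: the class, the slug rules, and the suffix scheme are all invented for illustration, not Blogger's actual code.

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Speculative sketch of "never reuse an identifier" slug allocation.
public class SlugAllocator {
    private final Set<String> used = new HashSet<String>();
    private final Random random = new Random();

    // Turns a title into a URI slug, appending a random numeric suffix
    // (as in rosey_6595) whenever the plain slug has EVER been handed
    // out -- even to a post that has since been deleted.
    public String allocate(String title) {
        String slug = title.toLowerCase().replaceAll("[^a-z0-9]+", "-");
        String candidate = slug;
        while (used.contains(candidate)) {
            candidate = slug + "_" + (1000 + random.nextInt(9000));
        }
        used.add(candidate); // deleting the post would NOT remove this entry
        return candidate;
    }
}
```

The important (and maddening) detail is that nothing ever removes an entry from the used set: deleting a post doesn't release its identifier, so the plain slug is gone forever.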

This kind of behavior is similar to what you see in WS-* Land, where resources are identified using an endpoint reference (EPR). Once a WS-resource is destroyed (either implicitly or explicitly via WS-ResourceLifetime), its EPR is invalid forever. This is a pretty harsh requirement, but most implementations are able to satisfy it with the help of UUIDs. Blogger doesn't do anything so obtuse when constructing URIs for blog posts, but the suffixes make me think that there's some kind of logic in the Blogger engine that prevents two posts from ever having equal identifiers, even if their lifetimes do not overlap.

As someone who has had to implement universally unique identifiers with REST and WS-*, I can appreciate the motivation behind them; however, in the case of my blog, which is my own little sanctuary on the Internet, I would prefer to have more control over my URI space, even if it means I am not in 100% compliance with AtomPub or the majority opinion on rest-discuss. Let's face it, I'm doing this more for me than for you; if you want to read overly-zealous opinions about data formats and U.S. laws, there are eleventy scrillion other blogs that can fulfill your needs, but I only have one web site, and the fact that my URI space is completely organized except for one post is going to drive me absolutely insane.

/2008/02/rosey_6595.htm? What is that? It's just so... random. And un-RESTful. It might as well be an EPR.

At first I thought about changing the URI manually, but the index is stored in Blogger's database, so it will just revert the next time I publish a post. I also considered deleting the post and keeping my feelings about the Hurricanes to myself, but I decided that writing seven paragraphs about five URI characters would be slightly less insane. It's part of my 2008 New Year's resolution to not waste time on small details that don't have an actual impact on future events. As you can see, it's going pretty well.


Wednesday, February 6, 2008


FYI for all of you Carolina Hurricanes fans out there: the 2007-08 season is officially over.

That is all.


Tuesday, February 5, 2008


I will not be participating in today's voting because I live in North Carolina, and our primaries are scheduled for May 6th. Of course, I don't actually get a vote on May 6th, either - my vote has been negated by the people of Iowa, New Hampshire, and South Carolina, as well as the party officials that have punished any state that tried to hold a primary in January. Instead of selecting from the full slate of candidates available at the start of 2008, today's voters have to pick one of four "front-runners"; by the time the polling places open in North Carolina, there will only be one viable candidate in each party, making my vote irrelevant.

It amazes me that both parties have chosen to alienate voters in key swing states like Florida and Michigan in order to preserve a status quo that puts the entire nomination process in the hands of people who think The Old Man of the Mountain was a breathtaking monument. If you're trying to win control over an entire branch of the U.S. government, wouldn't you want to be sure that you're nominating someone who has the broadest appeal? This seems like an air-tight argument in favor of a national primary. Alternatively, we could conclude that the first votes should go to states like California and New York, states that offer a more complete representation of the American electorate. Either change would enhance the presidential election process by ensuring that more voices were heard before the field was winnowed.

Super Tuesday indeed.


Friday, February 1, 2008


Apparently, the leaders in our executive and legislative branches have come to an agreement on saving the economy: they will courageously tackle our massive deficits, incalculable debt, and dismal growth by giving everyone $600. There are so many things wrong with this plan that I didn't even know where to begin; I had to go outside and throw rocks at the house for an hour until my anger subsided and I could write this post without breaking the keys off my keyboard. Let's look at the facts.

The main goal of the economic stimulus plan is to put cold, hard cash into the hands of normal Americans, who will take their unexpected bounty to the mall so they can buy presents and dine out; proponents say that this increase in consumer activity will boost payrolls, calm Wall Street, and save us from certain recession. The original version of the proposal (drafted by the House and endorsed by the president) would grant a $300 tax rebate to the dirt poor, $600 to taxpayers making less than $75,000 per year, and a few extra bones to people with kids; if you make between $75,000 and $87,000 per year, your rebate would decrease as your income increased, eventually bottoming out at $300. Six-figure breadwinners need not apply.

The Senate modified this proposal by doubling the maximum income levels so that wealthier individuals could get in on the super fantastic rebate action. It is not yet clear which version of the proposal will "win", but it looks fairly certain that all lower and middle class families will be getting a check for $600 just in time for those Memorial Day shopping extravaganzas.

Now, on the surface, this is encouraging: politicians managed to agree on a policy and enact a law in a matter of days, with an immediate result for the American public. Finally, a win for the middle class! Right?


Well, sort of. I'm sure that the extra $600 won't hurt middle class taxpayers, but the good feelings it creates will be short-lived; considering the deep financial hole that we are sitting in as a country, I think it would behoove us to consider the long-term impacts of this plan. This kind of inspection is not nearly as immediate or satisfying as the idea of giving everyone $600, but I am going to do it anyway.

The first thing that's wrong with this plan is that it confuses public sentiment with its original goals. Giving a few C-notes to middle class families on the brink of a recession may brighten their day, but it won't lead to concrete economic growth, which means that it won't really improve their lives. By most accounts, American families are in much the same situation as their government: they are in severe debt and find themselves robbing Peter to pay Paul, all to live the American Dream that is sold to them on TV. This means that the average person will use their $600 in one of three ways:

  1. Payment of credit card debt, overdue bills, or loan principals.

  2. Savings for emergencies, retirement, or education.

  3. Purchase of new clothes, music, or other things they don't need.

The first two options are obviously the more responsible ones for someone who has incurred a lot of debt or has not made a practice of planning for the future. That may sound good from the perspective of someone who wants to help average Americans, but remember: the goal of the plan is to revive economic growth. I will try to explain why I think these things are in conflict without sounding like a heartless bastard, but I can't guarantee anything. Just so you know.

In the first scenario, the person is paying off debt for things that he bought in the past. The debt is still very real to him, but in the eyes of financial analysts and corporate executives, it's ancient history; when John Q. Public bought that new iPod with his credit card last year, the bank that issued the credit card paid his debt to Apple Computer in full, and that payment was recorded and celebrated during the same fiscal quarter. The fact that John is beholden to his lender at an 18.9% APR does nothing to advance the state of the national economy; paying down his debt is a good thing when it comes to his blood pressure, but it's not going to register as new economic activity.

The second scenario is even more optimistic and hopeful than the first, but it will also cause us to miss our target. I sincerely hope that the majority of Americans will save their rebate money, but I also realize that this will be discouraging because money in the (individual's) bank has no impact on our economic growth rate.

Given our history as consumers, and the fact that so many of our citizens came into debt by shopping and over-extending themselves, it is likely that many people will give in and go along with our third scenario. This is exactly what politicians are hoping for, but even this will not "save" us. The Experts concede that even if everyone spends their rebates on shiny new gadgets, the growth that we'll see next quarter will be 1-2%; now, 1-2% of the American economy is an incredibly large amount of money, but it will be overshadowed by the negative effects that we will see in subsequent quarters. If people don't do the responsible thing and pay down their bills, then they are only making their situation worse, and it will take them even longer to pay back this "free" rebate. Do we really want to encourage this kind of irresponsible spending? This is how we got in a hole to begin with!

Sending money to people who are in debt and don't have savings sounds nice, but it won't give us the results that our politicians want. If the recipients use it to pay bills or create savings, the economy will continue to stagnate; if they use it to buy more stuff, they are just digging themselves a deeper hole. Everyone will be excited for a couple of days in May, but we'll be back in a rut by June. Mission: not accomplished.

If we really wanted to increase economic growth by a few points this summer, we would give the $600 to those who make more than $75,000 per year because they are more likely to have disposable income. Now, let me be clear: I do not feel that the goal of temporarily increasing economic growth by 1-2% warrants giving a tax break to upper class taxpayers. Additionally, I do not need $600 from the government, nor will I feel any hostility if, when the final numbers are released, it turns out that I lost the tax rebate sweepstakes. I'm fine. Really. This is not sour grapes from someone who has a comfy job at the largest IT company in the world.

That said, people with disposable income tend to... dispose of it. They go out to eat and buy things they don't really need, all of which fuels the job market and salary numbers that have so concerned our dear leaders. The growth caused by such disposal of income would still be temporary, but it would happen. If this is the goal, the tax rebates should go to the upper middle class: those wealthy enough that they don't have unmanageable debt but not so wealthy that $600 is a drop in the bucket.

Of course, you cannot, in an election year, tell middle class voters who are financially strapped that the answer to their problems is a tax rebate for people who don't have any problems. I understand that. But I would hope that our leaders would understand the points I have raised and not make the proposal in the first place, thus avoiding the debate entirely.

The second thing that is wrong with this plan is that our government cannot afford to give us a tax rebate right now. We have not had a balanced budget in almost a decade. Our national debt makes me want to throw up under my desk. Our 401k money and future business plans are riding on the hope that large Asian countries will continue to buy our bonds. We are in a war that, whether you approve of it or not, costs billions of unbudgeted dollars per year. And despite all of this, the government wants to return part of its yearly income? If this idea were any more stupid, the amount of stupidity would cause some kind of cosmic integer overflow and make the whole thing brilliant.

Put another way: if the government were a person, he would have nothing to his name. He would have his credit cards taken away, his car repossessed, and his belongings sold at auction after settling in bankruptcy court. If such a person offered you $20 because he knew you were a little short on cash this month and couldn't afford to meet the gang for drinks, would you take it?


Well, that's the situation we find ourselves in right now: we're handing out rebates like it's 2001 despite a massive increase in debt. I'm a fairly libertarian guy, which means that I normally perk up when people mention tax cuts; the fact that I'm saying the government should keep our money should clarify just how bad I think our balance sheet is. It's never fun to pay taxes, but when your leaders overspend their budget multiple years in a row, you either have to increase their income (taxes) or boot them out of office. You can't Reagan-omics your way out of a $400 billion deficit. You certainly can't do it seven times.

In summary, this plan is a joke. In the best case, tax rebates for lower and middle class people will not generate economic growth, and in the worst case, it will generate a small bump in growth but increase the deficit and weaken the dollar. It's a lose-lose situation. Frankly, I have trouble believing that anyone involved with this plan would pass a second grade arithmetic test.



Wednesday, January 23, 2008


I started using Google Charts a few weeks ago, and I have to say: it's pretty stellar. You can create bar, line, or pie charts with multiple colors and data sets using a simple HTTP GET. The names of the query parameters are kind of... trite... but overall I think the API is very user-friendly. I'm a fan.
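Mechanically, a chart request is nothing more than a URL with query parameters. As a quick sketch (in Java, since that's what my tools are written in), here's one way to build a pie chart request; cht, chs, chd, and chl are the chart-type, size, data, and label parameters from the public API:

```java
// Builds a Google Charts URL for a simple pie chart.
// cht = chart type, chs = size, chd = data (text encoding), chl = labels.
public class ChartUrl {
    public static String pieChart(int width, int height,
                                  int[] values, String[] labels) {
        StringBuilder url =
                new StringBuilder("http://chart.apis.google.com/chart?cht=p");
        url.append("&chs=").append(width).append("x").append(height);
        url.append("&chd=t:");
        for (int i = 0; i < values.length; i++) {
            if (i > 0) url.append(",");
            url.append(values[i]);
        }
        url.append("&chl=");
        for (int i = 0; i < labels.length; i++) {
            if (i > 0) url.append("|");
            url.append(labels[i]);
        }
        return url.toString();
    }
}
```

Point an img tag at the returned URL and Google sends back a rendered chart image; no client-side libraries required.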

Of course, the API does have one problem: it requires me to send all of my data outside the IBM firewall. Perhaps you hadn't noticed, but the IBM Corporation employs a lot of lawyers, and said lawyers get very uncomfortable when you start talking about sharing company data with servers owned by our competitors[1]. It's unlikely that Google is employing a bunch of people to read through its server logs, find requests originating from its competitors' servers, and muse about their significance to Google's management team[2], but lawyers are paid to be paranoid, and ours are very good at their job. The net of this is that any IBM application that uses Google Charts and is not an obvious demo must be reading from a public data store.

I like to poke fun at IBM's giant legal department, but the truth is that it's not much different from that of other companies. IBM isn't the only company that will have trouble using Google Charts, so it would be nice to see some API enhancements with a nod to confidentiality. I think the easiest solution would be to split up the generation of charts and legends; the numbers that are used to create the actual bars or lines are only meaningful if they are accompanied by labels, so keeping the two things separate should satisfy the requirements of most corporate lawyers. The API should be augmented with some sample JavaScript code for generating legends that match the colors and font of a given chart; this code could be provided alongside the existing code for encoding data and invoked by programmers who are not allowed to share legend text with the outside world. This isn't as seamless as the original API, but it's better than nothing.

Assuming that Google is in no rush to appease third-party developers using a service that doesn't generate any revenue, I'll be writing my own legend generator in the near future. I'll post the code once it's complete.

[1] I guess they're a competitor. I can't think of an area where we compete with Google directly, but my inner lawyer is telling me that once a software company reaches a certain size, it automatically becomes a competitor, regardless of its current investments.

[2] The Terms of Service explicitly prohibit such activity.


Tuesday, January 22, 2008


Lately my blog has been devoid of the deep technical content implied by my host name; I assure you it has not been from lack of interest. In the last two months, I've published three articles on IBM's developerWorks, each exploring a different aspect of Project Zero and REST. Check it out:

  • Title: Extend Project Zero's scripting platform with Flickr APIs

    Abstract: The Flickr photo sharing service is one of today's most popular Web applications. It provides a robust hosting service with slick social networking capabilities that make uploading, organizing, and finding photos very simple. That's all very cool, but from a developer's perspective, the most interesting thing about Flickr is its public API for reading and writing photo data. You can send API requests over HTTP using any programming language you wish, and many open source projects have sprung up to encapsulate this API for various languages. In this article, you'll learn how to "Zero-ize" the Flickr API by providing a Groovy binding that is easily reusable in your Project Zero applications. When you're done, you'll be able to read and write photo data from your Groovy scripts in just a few lines of code.

    Reader's Digest Version: You want to use the Flickr API in your Java applications, but everywhere you turn there's a factory pattern or a glorified HttpURLConnection. You have started building Flickr URLs with StringBuilder, laboring under the strain of append() and hard-to-read query strings, when you receive the tragic news: you have died of dysentery. Game over.

    Fortunately, Groovy scripting lets you use all of your Java skillz while shaking off the cruft that was keeping you down. This article creates a set of Groovy scripts for invoking the Flickr API and shows how to share the scripts throughout your Zero applications; it ends by showing you how to generate one of those ubiquitous photo collages so that your site will look exactly like every other site on the Internet.

  • Title: Manage an HTTP server using RESTful interfaces and Project Zero

    Abstract: WS-* users and REST users have an ongoing debate over which technique is most appropriate for which problem sets, with WS-* users often claiming that more complex, enterprise-level problems cannot be solved RESTfully. This article puts that theory to the test by trying to create a RESTful solution for a problem area that is not often discussed by REST users: systems management. In a previous developerWorks tutorial, I showed how to create a Web services interface for managing HTTP server products; the tutorial used concepts from WSDL and the WS-* standards to define the management interface and software from Apache Muse and Apache Axis to create the management application. For this article, I use Project Zero and REST design principles to recreate the interface and function of the original application and determine if REST is a valid option for this enterprise project.

    Reader's Digest Version: Human sacrifice! Dogs and cats living together! Mass hysteria!

    It's almost unthinkable: creating a fair and level comparison of WS-* and REST based on experience working in both worlds. Well, I went ahead and thought it, and then I wrote it down so everyone could share my completely non-hysterical evaluation of REST as a foundation for remote systems management tools.

  • Title: Add Ruby templating to your Project Zero applications

Abstract: Ruby users, take note. You can now do everything that Groovy and PHP users can do when creating Project Zero applications! In a previous article, we showed how to augment Project Zero to provide support for the Ruby scripting language. The code that we wrote enabled Ruby users to transfer their scripting skills to the Zero platform and take advantage of its unique programming model. Of course, scripting isn't the only way that Ruby is used to create applications - programmers who use the Ruby on Rails framework also mix Ruby in HTML templates similar to JSP and PHP. These templates, called RHTML files, are very useful for creating dynamic user interfaces, and this article will show you how to extend our Ruby support to include them.

    Reader's Digest Version: Remember the first time you saw Back to the Future? As the movie ends, Marty has just discovered that his father isn't a sucker anymore, his sister is popular, his brother has a job, and something he's done has warranted his parents buying him a brand new 4x4. When Jennifer struts in a few minutes later, you undoubtedly thought, This was a killer movie. And you were right.

    But then, out of nowhere, Doc screeches into Marty's driveway in a beat-up De Lorean and tells the two teenagers that their future is in shambles and they have to go to the future to prevent a certain tragedy. No way! The movie closes with the De Lorean lifting off the ground and flying into 2015. Wow! Robert Zemeckis just turned your expectations upside-down, and now you can hardly wait for Back to the Future II. Do you remember that?

    Well, if you're like most people, you had the exact same response when you read this last abstract and realized that my Ruby on Zero article has its own Part II; you thought the first article was great, but now that you've gotten a taste, you can't imagine life without Part II and its Mr. Fusion-fueled RHTML files.

Having explored over a dozen topics related to Zero and REST, it's clear to me that one of Zero's greatest strengths is how flexible it is; in other words, Zero does not get in my way as I try to bend it to meet the needs of my project. Most of the time I don't have to do any bending at all, but sometimes I do, and rarely is something so hard that it's deemed impossible or not worth the trouble. If the Zero platform is successful, this will certainly be one of the reasons: it provides you with many tools and conventions for getting things done, but it doesn't force you into absolutes or some kind of software design religion.


Monday, January 7, 2008


We don't have a television in our house, but holidays and vacations always grant us the opportunity to sit in front of one for hours at a time, be it in a relative's house or a hotel room. There's a lot of bad TV out there, but most of it just fails to be interesting; once in a while, though, I find a show that crosses the line from being bad to just plain offensive. Extreme Makeover: Home Edition is such a show. After watching a couple of episodes, I spent a few days stewing in the iniquity of it all, trying to formulate a coherent rant that didn't suffer from myriad exasperated tangents. I will dispense that rant now.

When I first encountered Extreme Makeover: Home Edition (EMHE), I assumed it was just a way to pull at the heart strings of America's TV audience and get them to watch what is essentially a one-hour commercial for all of the show's sponsors. I don't fault ABC for taking the opportunity to show an hour of non-stop commercials, I just can't believe that all of the hosts are able to pull it off with a straight face.
Host #1: Sal and Julie love to grill outside during the summer, but they haven't been able to do much grilling since they ran out of charcoal last June. We could just buy them a new bag of charcoal, but I think we can do better than that.

Host #2: I talked to our friends at Sears, and they said that Sears offers a great selection of propane grills and accessories, all covered by a Sears Home Warranty.

Host #1: Wow, it sounds like Sears has everything we need. Let's buy an expensive Sears brand grill that will make Sal and Julie happy for years to come.

Host #2: Good idea. Going to Sears will make this outdoor grilling area better than ever.

Host #1: Let's go to Sears!

Host #2: Sears!
This aspect of the show is just mindless and predictable, and I have no problem with that[1]. What bothers me about EMHE is the complete lack of fiscal responsibility that is demonstrated by both the producers of the show and the lucky families they have selected. I originally thought that the show would provide the families with reasonable upgrades to their existing homes using materials provided by the sponsors, making it a sappier and more commercial version of TLC's Trading Spaces; what I found was a show that glorified suburban excess while completely ignoring the plight of people who do not even have a home to renovate.

One of the first EMHE episodes I saw focused on some family in Wyoming that had bought a house that was half underground and, consequently, filled with radon gas. The whole family was sick from the radon poisoning, and they didn't have the money to build a new house or take care of the dozens of stray pets they had adopted over the years. Enter EMHE. They tore down the original house and gave this family of four a million-dollar home with seven bedrooms, a pet sanctuary, and, by my count, exactly fifty-two flat-panel TVs.

Why is this bad? Let me count the ways:

The first and most obvious thing that eats at my soul is the fact that ABC has used its immense power to extract millions of dollars in time and materials from local citizens but only managed to help one family. I don't see why a poor or middle class family that is down on its luck needs a McMansion in order to improve their lives. Every time they highlight some family member who works with disabled kids or a local charity, it strikes me that this person must experience extreme cognitive dissonance upon moving into a house that is light years beyond their means.

Having grown up in a lower-middle class suburb with a single mother, I can honestly say that if anyone had offered us a brand new 3 BR/2 BA ranch like the ones all of my friends lived in, we would have been more than happy to take it. Such a house would cost between $100,000 and $200,000 depending on which area of the country it was built in; why, then, does ABC feel the need to build homes that are assessed at $500,000 to $1,000,000? I'm sure that the families are excited to live in such luxury, but if you toned down the luxury three or four notches, would they really know the difference? Would they be any less happy?

EMHE could provide these families with nice suburban homes and all of the latest gadgets for about $200,000. By restricting themselves to "nice" and "impressive" (as opposed to "grand" and "overwhelming"), the show could afford to help three or four times as many families. It would have more content to work with and there would be no effect on the amount of product placement that is currently in the show. I really don't think there would be any negative impact on revenue or ratings if the network changed the show to balance cool home improvement with the desire to help as many people as possible.

The second thing that irks me about this show is also tied to fiscal irresponsibility. It's bad enough that a giant corporation is spending excessively on something that isn't necessary (big surprise), but it's made worse by the fact that they are pushing these families into a situation that will leave them house poor. Keep in mind that these families are already "regular poor", and while house poor may seem like an upgrade, it's not exactly the stress-free life that ABC promises on the show. If a family can barely afford their current mortgage and cost of living and is not in the position to upgrade their admittedly ramshackle house, can they really afford the property taxes on a $500,000 house? Depending on what state you live in, the property taxes on such a house could easily exceed the combined mortgage payments and property taxes of a house in the low $100,000s[2].

Of course, there is a chance that the families can remain frugal in their new luxury pad and pay their property tax bill on time. Success is less likely when it comes to the utility bills. Again, we're talking about families that were already having trouble paying the bills when they lived in homes that were 1,500 square feet or less; the new homes average over 5,000 square feet, and many of them have beautiful-yet-costly structural features. The Wyoming family came home to a three-story house with a foyer wall made of glass, and when you couple that with eastern Wyoming's distinct lack of trees, I think it's safe to say that this place is going to be a greenhouse for most of the year; their only respite will come during winter, when all of their heat will rise through the foyer and escape out the giant glass wall. I would love to see the parents' reaction to that first electric bill.

The third and final thing that has me yelling at the TV is the fact that Wyoming is one of an increasingly small number of states that has not passed a law to raise its minimum wage above the federal rate of $5.85. The people of Wyoming would march on Cheyenne if anyone tried to pass a state law that raised the local minimum wage or, alternatively, increased their state sales tax in order to help more families on welfare. And yet, I saw hundreds of Wyoming residents come out to build an enormous new home for one family in need, despite the fact that said family could not repay them, did not do anything to earn the house, and did not even help build the house. This stinks of hypocrisy. The lesson here is that spending a million dollars and organizing hundreds of volunteers is no problem if the cause is heart-warming and there's a good chance you'll be on TV, but it's out of the question to do the same when the goal is to help random people whose hard-luck stories are not relayed to you in tender, five-minute TV segments.

But enough about Wyoming. That episode didn't even bother me that much, because I'm sure that if Wyoming were like the rest of America and had some basic laws in place to protect home buyers[3], the radon thing would have been caught before the closing and the family never would have bought the house. Some of these other families are much more suspect, and their prizes much more enraging. Let's move on, this time to Kirkland, Washington.

The family in Kirkland includes a single mother who has three daughters and has renovated her inground pool for the purpose of teaching area children to swim. This small business gives her the means to pay her bills, but her house is falling apart and it would cost more to fix it than to rebuild. Unlike the Wyoming family, which lived on a couple of acres in the middle of, well, Wyoming, this family lives in a quaint suburb next to houses that are similar in size. It was during this episode that ABC discovered a new way to encourage poor financial decision making.

The Kirkland house had been in the family for generations, so the mortgage was paid off long ago; this means that the family's monthly costs were going to food, utilities, and the temporary fixes they'd put in place to make their home safe. Surprisingly, my initial reaction to this situation was not to tear the house down and build a million-dollar property complete with gazebos and a professional swimming pool. Instead, I thought it would make the most sense for the mother to put the property up for sale, move into an apartment, and get a new job that provides the same income as her swimming lesson business (which can't be very large, given that her customer base is fairly small and she has to compete with non-profits such as the YMCA).

The great thing about my solution is that it's the responsible thing to do, and it doesn't require me to believe that the family's problems are equivalent to those living in public housing in downtown Seattle. I realize that this sounds harsh when compared to the ABC solution, but it is a solution, and one that the mother should have adopted a long time ago. Kirkland is a popular town, and there are plenty of potential buyers for a nice suburban plot upon which a new family could build their dream home; since she doesn't have a mortgage to pay off, she could sell the property for an extremely low price and still make a big addition to her savings account. This family doesn't need a McMansion, it needs a real estate agent and a basic investment strategy.

The last and most recent episode I caught was about some family in Vermont that had two young boys, one of whom was physically and mentally handicapped. They seemed like pretty reasonable people, and the house they ended up getting wasn't as lavish as others before it; but while I didn't have any beef with the actual family, the whole episode got me thinking about the unfairness of ABC's selection process. This family bought a "fixer upper" that had no real foundation. The house should have been condemned. They were only able to afford the house because no one else would go near it, and frankly, I don't think that such a desperate decision should give them priority over other Vermont residents who have sobbier sob stories.

There are thousands of other families across the country in equally tough situations who have not stretched themselves beyond their means and taken on mortgages for homes they can't afford. These people will never be recognized by ABC because they don't own property. They are in the same dire straits, and are arguably more responsible citizens, and yet, without a homeowner's deed, they will have no shot at the McMansion on 1.5 acres like the people from Vermont. All we have learned from this episode is that if you give in to America's big house obsession and spend yourself bankrupt while keeping up with the Joneses, a big team of All American volunteers will come out to save you. Reward you, even.

There are plenty of other, less grown-up reasons to hate EMHE, but I've covered the ones that everyone should agree upon. Every day I wake up and check the daily news to see if the president of ABC has managed to form a synapse between his two working neurons and pull the plug on this sham of a television show. Every day I am disappointed.

So much anger.

[1] It may be mindless, but it's not immoral.

[2] Despite the fact that the new house is a contest prize, the families can avoid paying income tax on it thanks to a questionable tax loophole.

[3] I checked - it doesn't.

Labels: ,

Saturday, December 8, 2007


Two years ago today, Bridgid and I started dating. Having spent some time with her in the months leading up to our date, I was certain that we would make it as a couple. But I didn't tell her that; even if she felt the same way, I didn't want her to think I was crazy or impulsive.

So I kept mum until the time was right.

One year ago today, Bridgid and I celebrated one year of dating. We had shared a lot of great experiences in just twelve months, and I was certain that I wanted to marry her. But I didn't tell her that either; even if she was willing to accept my proposal, I didn't have a ring.

So I worked overtime until I had the cash.

Today, Bridgid and I are celebrating two years of dating.

And I have a ring.

Labels: ,

Friday, November 30, 2007


Anyone who has used the Internet for any length of time knows that ninety percent of web pages are full of personal tripe and pictures of cats; in other words, this site has only realized half its potential. Today is the day I reach for the stars and make this a real first-class web site.

We've named him Dwight.

Labels: ,


Those who follow the Supreme Court have been buzzing with anticipation since the justices announced that this term's docket would include DC vs. Heller, a case that will determine the constitutionality of our capital district's handgun ban. The SCOTUS blog has a good summary of the two positions being argued in the case, as well as the implications it has for other laws that restrict gun ownership. The hearing won't happen until March, but that hasn't stopped people on both sides of the argument from setting their propaganda machines to HIGH and doing everything short of calling the justices at home.

But despite all of the excitement over the Court's decision to tackle this controversial issue, the fact is that the ruling in Heller will not have the conclusive, clarifying effect that everyone is looking for. Because D.C. is a federal entity, the ruling will only affect the federal government's ability to limit gun ownership - it doesn't say anything about state or municipal legislatures, which is where most of the controversial gun laws are authored. At best, the ruling in Heller will inspire new appeals focused on state laws, but it will not have a direct effect on those laws.

I know it's hard for those at the center of the Second Amendment debate, but I wish we could all admit that this case is not as groundbreaking as it has been portrayed and start talking about the real cause of this conflict: the irrelevance of gun laws written in 1789. No matter which side you're on, I think it's safe to admit that the Second Amendment suffers from limited imagination and poor grammar. Here is the text:
A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.
Ignoring the awkward and unnecessary comma after Arms, this sentence has two reasonable but opposing interpretations:

  1. All individuals should be able to own guns in order to protect themselves and participate in government-sponsored militias. The weapons may be used for national defense, self-protection against criminals, or revolution against the government. Since threats against the individual exist today, this right is still valid.

  2. All individuals should be able to own guns in order to participate in government-sponsored militias. These militias were not wealthy or well-organized enough to provide soldiers with weapons, and allowing citizens to own guns ensured that they would be equipped to fight when called upon. Because our country now has a very wealthy and powerful military, this right is no longer valid.

The tricky thing about the second interpretation is that it still doesn't restrict the right to own private guns - it simply says that the need for a militia is no longer a valid argument. The complexity of this statement is seemingly endless, which is why I find the hoopla over Heller so frustrating. Trying to make a real constitutional decision based on the Second Amendment is putting lipstick on a pig: either way you go, it's not very impressive. The vague text and lack of judicial precedent means that future justices could easily overturn your decision. The Second Amendment provides no real guidance in 2007, which is why I think it should just be repealed. This idea is not as radical as it sounds.

If the Second Amendment were stricken from the Constitution tomorrow, the Tenth Amendment ensures that we would be in a similar situation to the one we face now: state laws would govern who could own what kinds of guns. The only new possibility would be for a state to ban private guns altogether; since the people doing the banning are subject to popular elections, I think it is unlikely that more than a few states would go through with a full ban. There is enough diversity of opinion in most states to prevent politicians who favor gun restrictions from going "too far". States like Oregon and Connecticut will probably ban guns immediately, but so what? You can still go to Kentucky and pick up a handgun and a carton of milk in the same trip. Right-leaning folks have proposed this same solution in the debate over Roe vs. Wade, and I don't think gun ownership is any less significant.

That said, I don't think that anyone who is invested in this debate could ever process the idea of repealing the Second Amendment as rationally as I have written it, so here is an alternative idea: nullify the Second Amendment by writing a new amendment. This is the same idea used by the authors of the Twenty-first Amendment, which made obsolete the alcohol prohibition of the Eighteenth Amendment. The new amendment could spell out the nation's policy on individual gun ownership, taking into account all of the technological and social advances that have occurred in the last two hundred and eighteen years. It could also respect the rules for using commas.

It's unlikely that George Mason could have imagined the kinds of weapons that humans would build in the years after the Constitution was ratified, nor could he imagine a world where a military superpower did not have to draft its male citizens. Even if you are a card-carrying member of the NRA, I don't think you could dispute that the Second Amendment does not take these things into account. It is incredibly naive about the role of guns in society. We simply cannot know what would have been written had Mason known that, in 2007, individuals would have access to weapons that kill dozens of people per minute, or that a rifle is no longer an adequate tool for revolution. Rather than waste time debating poorly-worded text written two centuries ago, we should amend the Constitution to clarify exactly what the country wants.

Proposing this new amendment would generate endless hype and debate, and getting Congress to agree on the exact text would be a monumental task. But hey, that's why they make the big bucks. This is a hard problem, and it's pretty clear that the original guidance given to us in the Second Amendment is not working. There shouldn't be this much controversy over one sentence. The right to keep and bear arms shouldn't hinge on a prepositional phrase that sounds more like musing than declaration. If we want to settle the debate and the majority is not willing to take the easy route (repealing the Second Amendment and delegating to the states per the Tenth Amendment), then a clarifying amendment is a must. Everything else is a waste of oxygen.

Labels: , ,

Sunday, November 25, 2007


I'm a bit late, but here are the notes I took from the talks given last Thursday as part of ApacheCon Track 2, which had a focus on REST and the web development landscape. I considered morphing my notes into a more traditional narrative, but I think the minute-by-minute notes are more interesting.

Matt Raible - Comparing Java Web Frameworks

  • Gives us a choice between two presentations: one focused on older, more established frameworks (all of which he has used), and one focused on newer frameworks (half of which he has not used professionally). The audience votes for the latter.

  • The frameworks presented include Flex, Seam, GWT, Grails, Wicket, and Struts 2. Matt hasn't used Flex, Seam, or GWT, so analysis of those is not very in-depth.

  • Polling the audience shows that only a handful of people are using each of the six frameworks we discussed. The presentation seems to work off of this theme: each framework is good for certain types of apps, but there is no all-around winner.

  • This talk is kind of fluffy, but no one seems to mind because Matt has a lot of charisma and tells good stories.

  • Flex is a force, but in order to create the server side support needed for most apps, you need to buy very expensive products from Adobe or roll your own solution. This is not a full stack. Two audience members tell stories about difficulties with the "roll your own" approach.

  • Flex can't incorporate HTML content very well, so it's hard to integrate new, Flex-based content with older HTML/template content. Matt says Flex can only render about a half dozen tags. That seems pretty lame - even the JDK's HTML text editor can do better, and it's on HTML 3.0.

  • Groovy and Grails are not very popular outside of the echo chamber (and Zero!). Matt is a Grails fan and recently convinced a client to use Grails on a project, but it's his first professional project with the framework.

  • Because Grails is based on J2EE, many teams that might consider it will end up using Struts, since they already have those skills.

  • GWT allows one to create HTML and JavaScript-based UIs with Java. Matt believes that the job market for GWT will see the most growth in 2008-2010.

  • Struts 2 is far better than Struts 1. Never use Struts 1. The developers of Struts 1 drink kitten blood and are the bane of the open source community (or something - I'm not a Struts user, so I'm in the dark as to why Struts 1 is so bad).

  • Lots of graphs showing employment numbers for the various frameworks. Summary: if you're a freelancer, Flex and Struts 2 are Money.

Ora Lassila - The Semantic Web, and Why The Open Source Community Should Care

  • This is today's keynote, and it's very well-attended.

  • I can't hear a damn thing in the back of the room. Other people are looking around, so I guess I'm not the only one.

  • The audio has improved a bit. I guess I'll manage.

  • Ora says that IT is not very automated - the computer is just a tool that you use while you do work. "You work, and the computer helps". It's like a hammer: it won't build anything on its own.

  • True automation requires an increase in structured data and standardization. He forgets to mention the need for enormous policy definitions and natural language processing.

  • So far this is just a review of semantic web, not why I should care. We've all been "waiting" for the Semantic Web for over ten years, so I'm not sure why this in-depth review is necessary.

  • Ora says that the open source community is:

    1. more accepting of new ideas,

    2. more innovative than traditional players, and

    3. more relaxed when it comes to finding a business model.

    to which I would respond:

    1. hahahahahahahahahahahahahahahahahahahahahaha

    2. Most large open source projects have contributors who work at medium to large-sized corporations (so they can obtain things like food and shelter). I think that if these corporate drones stopped contributing to open source, the community would be a lot smaller and less innovative.

    3. This can also be a negative: the reason there are so many dead or stagnant open source projects is that the code doesn't generate any money, and when push comes to shove, people have to do work that pays the bills first.

  • Ora continues to hammer on the business model point, saying that "before we get to the point where we make money, we need more experimentation and open-minded development". The Semantic Web has been "just two years away" for about ten years now - how much more experimentation do we need?

  • The guy next to me is typing on his laptop using the hunt-and-peck method. At ApacheCon! Jesus.

  • Here's what's ahead of us with regards to the Semantic Web:

    • "Let's try to get computers working for us"

    • "I'm after some kind of paradigm shift in personal computing"

    I wonder how long it's been since this guy wrote a line of code.

  • I just checked his web site - he works in Common Lisp. That's about right.

  • Now hunt-and-peck guy is making these deep, heavy sighs. Constantly. It's very distracting.

  • This talk has no answers. Big surprise.

  • Not that I'm bitter, or anything.

Roy Fielding - A Little REST and Relaxation

  • I arrive early to get a good seat. It's standing room only by the time Roy starts his talk.

  • The talk ends up being a history of the W3C and web protocol development. Towards the end, Roy says, "I wish I could spend more time talking about how to create a RESTful service." That's the whole reason I came, dude! Anger!

  • Roy says that refining server-side resource models is not as important as the hypertext-based navigation that guides users or client software; as long as you have the original URL to a site, you should be able to find anything else from the hypertext content. I think this is misguided, given how many of today's web applications expose public APIs that enable content sharing and mashups.

  • This talk gives me the impression that REST is just for applications that are primarily UI-driven. It's the same old "REST doesn't work for enterprise software" argument used by WS-* proponents, except it's coming from Roy Fielding. I don't know why he's so focused on hypertext and applications that operate within a single domain (host).

Sanjiva Weerawarana - WS-* vs. REST: Facts, Myths, and Lies

  • Some people leave after Roy's talk, but the room remains at 90% capacity. Sanjiva's talk is sure to ruffle some feathers, and that always puts butts in the seats.

  • I still hold a bit of a grudge against Sanjiva for the rigmarole we had to go through for Muse 2.0, but I will put that aside for today and try to listen to his claims objectively.

  • Sanjiva says that WS-Addressing is probably one of the worst things about WS-* and that it should allow for the use of a simple URI for people who don't need endpoint references or the implied resource pattern. Preach on, man.

  • He started with some WS-* negatives, but as the talk wears on, it's becoming more and more about the negatives of REST. I can't say that he's being unfair to REST - most of his points are reasonable - but he did kind of gloss over WS-*.

  • The talk was well-received, and he didn't get many questions afterwards. He was pretty harsh on REST, but not inaccurate; I still think the talk would have been more effective if he'd spent more time detailing WS-* negatives instead of using broad statements like "web services aren't right for every project".

Dan Diephouse - Building Scalable, Reliable, and Secure RESTful Services

  • Dan is another WS-to-REST convert who spent some years implementing SOAP engines and WS-stacks and has moved on to Mule, which is an open source ESB; I don't know how an ESB is any less enterprise-y than WS-*, but I guess it makes him happy.

  • Dan has chosen to pack an incredibly large amount of information into a one-hour presentation. As time ticks away, I can't help but feel that this talk would be much better if it were a week-long session instead. Perhaps he doesn't have time for the kind of multi-day labs offered at ApacheCon, but it would make his presentation less overwhelming.

  • We're fifteen minutes in, and I've given up trying to summarize. There's a lot of good material here, but I'm just going to end up re-typing his bullets. If you'd like to see the presentation yourself, you can find it here.

Looking back, Day 2 Track 2 was definitely worth the $150 conference fee, but it wasn't worth giving up my no-vomit streak; when I threw up last Saturday, I was only a month away from the six year mark. Six years isn't quite as impressive as my previous no-vomit streak (nine years, one month), but it's nothing to sniff at. If I could do it all over again, I'd rather have the streak.

Labels: ,

Monday, November 19, 2007


I was planning to post all of my notes from ApacheCon this weekend, but by Saturday afternoon I had taken ill; I spent the rest of the weekend lying prostrate and begging for death. It's a good thing that Bridgid was there to take care of me because I'm a real wuss when it comes to being sick. I can only hope that this episode hasn't ruined my mystique.

My notes will be posted shortly...

Labels: ,

Wednesday, November 14, 2007


Today was discouraging. I hit a six-mile traffic jam just outside of Charlotte on my way to ApacheCon, and I didn't see one person on the other side of the highway with his hand out the window. In other words, all of the time I spent documenting our road rage prevention system was for naught. I was expecting the magic of the Internet to cause a new meme to spread rapidly across the land and save me from these kinds of experiences, but I guess that isn't going to happen. The Internet is too busy creating lolcats.

I wish I had a time machine so that I could turn back the clock and tell the inventors of the Internet to not even bother. What a disappointment.

Labels: , ,

Tuesday, November 6, 2007


Cycling is a very popular hobby in Cary, but for all of the people riding around town in their ill-advised spandex outfits, we have seen little improvement with regard to cycling-friendly roadways and parks. Many of Cary's main roads are finally being expanded to meet the demands of last decade's population growth, but only a few of them have cycling lanes. I often see drivers come up behind cyclists who are coasting in and out of the breakdown lane and then swerve around them at full speed, barely glancing at what's happening alongside them. It's irritating and dangerous, all at the same time.

But unlike most residents, I'm not upset because our transportation department has failed to recognize the plight of the avid cyclist; I'm upset because I'm not a cyclist and I hate sharing the road with them. In fact, one of the things I hate most about sharing the road is the phrase sharing the road. I see this admonition on traffic signs and bumper stickers across Wake County, and it's one of the worst ideas I've ever encountered. The cyclist is traveling with a vehicle that he can lift off the ground with one finger; I am traveling with a two-ton steel bullet that's moving at twice his speed. We cannot share the road. Asking automobile drivers to share the road with cyclists is a red herring that pits cyclists against drivers and draws attention away from the fact that our civil engineers have dropped the ball.

Physics aside, there is another reason that I don't like sharing the road: cyclists are bad motorists. Every cycling enthusiast I've ever met[1] is quick to complain about automobile drivers that try to run them off the road, throw things at them, or otherwise treat them as second-class motorists. They talk about state laws that give equality to cyclists and other slow-moving vehicles, but they always gloss over the parts about cyclists being restricted by all of the same rules and signals. The same cyclists who want me to coast patiently behind their peloton as we try to conquer a hill at ten miles per hour are quick to pedal through a red light if there's no oncoming traffic. I also like it when they piggyback with cars that have been waiting at a stop sign; nothing says "responsible motorist" like hanging out in my blind spot through a busy intersection!

Ignoring traffic rules is fine for ten-year-old kids parading around town with their friends[2], but if you want equality in the eyes of the law, you had better sit at that red light and wait for it to change, even if traffic is low and you don't have a license plate. You had better wait your turn on the stop sign merry-go-round. And you had better hope I never run for office, because in Dan Jemiolo America, all cars will be equipped with high-powered lasers.

So much anger.

[1] An enthusiast is anyone who has one or more cycling-related bumper stickers on his car or wears cycling shoes to the office.

[2] Assuming they're smart enough to avoid my car.

Labels: , ,

Wednesday, October 31, 2007


I don't like Halloween. It's a boring holiday whose resulting ennui is trumped only by Thanksgiving and its multi-day torture of lukewarm turkey, awkward conversations, and Detroit Lions football. That said, I know that a lot of you are still excited for Halloween and want to impress your peers with a clever costume[1], so I feel obligated to share with you what is probably the best Halloween costume idea you have ever heard; I can't remember if I came up with it myself or not, but in the absence of evidence to the contrary, I will take credit for it here. Prepare to be inspired.

A surefire way to be a hit at your next Halloween party is to dress like a Flintstones vitamin. I'm sure you've seen lots of guys dress as Fred Flintstone, but it's always in the official orange and black outfit; it's a pretty popular costume and you can get it at most stores. To be truly innovative, though, you need to take this costume and paint over it so that it's monochromatic[2] while still allowing the pattern to show through. You can then use face paint to make your face, arms, and legs the same color; if you're really committed, you'll get some temporary hair dye so that your hair will match as well. When you're finished, you'll look just like the Fred Flintstone vitamins that you used to have each morning with breakfast. The public will adore you.

Extra Credit: Children of the 70s and 80s will recall that there was no Betty Rubble vitamin in the original Flintstones Vitamin collection (she first appeared in 1996). If you're a woman who wants to get in on this great idea, you could dress as a Betty Rubble vitamin and impress the socks off of all the trivia geeks who ask about your costume.

[1] Unless you're female, in which case you'll probably be dressing as a vampish [noun].

[2] I suggest purple or red.

Labels: ,

Friday, October 26, 2007


Bridgid and I went to Pittsburgh a few weekends ago, and on the way home we witnessed a ten-mile traffic jam on the other side of the West Virginia highway. It was painful to watch people sitting in such an awful mess, especially as we encountered those people who had only been in it a few minutes and had no idea what they were in for. Both of us become extremely frustrated in traffic - the feeling of helplessness, combined with the knowledge that four out of five traffic jams are caused by trivial events on the side of the road[1], just fuels the rage minute after minute. I would rather be attacked by flying robot sharks that shoot lasers out of their eyes than sit in traffic that I know is caused by people who are hoping to see a cool accident and have (apparently) never used the Internet before.

The tragedy in West Virginia brought to mind other instances where we had seen horrible traffic jams that continued to grow because the people driving in the opposite direction had no way of warning the oncoming victims; soon we were discussing ideas for a universal signal that drivers could give to people on the other side of the road to let them know that all hope was lost, and what the rules would be for using it. This post will be my first attempt at harnessing the power of the Internet to start a nationwide trend[2].

First, we need to define some vocabulary:

  • highway - A stretch of road on which there is less than one traffic signal per mile and the average speed limit is fifty miles-per-hour or higher.

  • traffic jam - A situation that requires vehicles to move at less than half the speed limit for five miles or more.

  • free-flowing traffic - The normal rate of travel for a particular road at a particular time of day.

  • bicycle turn signal - The act of raising your left arm so that it forms a ninety-degree angle, often used by cyclists who wish to make a turn signal when riding on a major road. Like this.

If you are driving on a highway and you see a traffic jam on the other side, stick your arm out the window and make a bicycle turn signal from the fifth mile of the traffic jam until one mile after you encounter free-flowing traffic. People who see this signal should assume that the traffic jam is truly a nightmare from Hell, the kind of nightmare where you're falling off a cliff, but instead of waking up when you hit bottom, you slam into the ground and break your leg, and then murderous clowns chase you into a cave filled with robot sharks[3]. In other words, it is not going to get any better. They should get off at the next exit, even if it means buying a map to figure out how to get past the traffic jam on other roads; if no exit will become available in the next mile, they should break the law and make a U-turn on the median. Anything to avoid the soul-crushing agony of whatever the people on the other side of the road have seen.
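For the programmers in the audience, the signaling rule can be restated as a quick sketch. This is just my paraphrase of the thresholds defined above (five-mile minimum, one-mile buffer after free-flowing traffic); the function and variable names are made up for illustration:

```python
def should_signal(jam_miles_passed, miles_since_jam_ended):
    """Decide whether a driver passing an oncoming traffic jam should be
    making the bicycle turn signal.

    jam_miles_passed: how many miles of the oncoming jam the driver has
        driven past so far.
    miles_since_jam_ended: miles driven since oncoming traffic returned
        to free-flowing (0 while still alongside the jam).
    """
    # A qualifying jam is five miles or longer, so hold off until the
    # fifth mile; then keep the arm out for one mile past the jam's end.
    past_fifth_mile = jam_miles_passed >= 5
    within_buffer = miles_since_jam_ended <= 1
    return past_fifth_mile and within_buffer
```

At a law-abiding seventy miles per hour, the window where this returns True for a ten-mile jam is six miles, or roughly five minutes of arm time.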

Some may say that the turn signal is sub-optimal, and that may be the case. Bridgid and I spent a long time[4] brainstorming on this, but we're willing to consider other ideas. Another criticism that I'm expecting is that the amount of time one is required to use the signal is too long; frankly, I think it's a small price to pay when you realize that you stand to benefit greatly if others reciprocate in your own time of need. Our experience through West Virginia was kind of extreme (ten miles of traffic through the Appalachians, with limited exits and supporting towns), but at a law-abiding seventy miles-per-hour, I only would have had to keep my hand out the window for five minutes in order to help my fellow citizens.

Now, before you all go out and start using this new signal, I must warn you to be conservative and stick to the rules. If we have people driving down the interstate with their hands out the window every time there's a five-minute backup, people will start to ignore the signal and it won't be useful anymore. This isn't like flashing your headlights to warn others of an upcoming speed trap - overuse of the signal could do more harm than good. The next time you're on a long trip and you see a never-ending train wreck piling up in the opposite direction, wait for the fifth mile, and then open your window and fulfill your duty as an American motorist.

[1] Sixty-five percent of people read this statistic and believed it without even checking my footnote to see if I made it up.

[2] If you're outside of the United States, I apologize, but I don't have any experience with your cross-country traffic, nor have I learned all of your offensive hand gestures. You'll have to start your own trends.

[3] The danger that flying robot sharks pose to humans is only exacerbated by their ability to navigate in the dark.

[4] A solid ten minutes.

Labels: ,

Wednesday, October 17, 2007


Zero now has a nice little database setup tool that helps one follow the guidelines described in this article. Steve Ims and I tweaked, updated, and refined the guidelines so that the tool could be reusable without programmers having to add configuration stanzas or additional artifacts to their applications; there's more information about this feature on the Zero forum, but I wanted to add a personal note about how gratifying it is to see this code make it into the Zero code base.

Ninety-nine percent of my programming habits - from how I arrange my file system to how I debug applications - are either slight variations on the habits of others or bordering on OCD. This means that a lot of the effort that I put into optimizing my productivity never benefits anyone other than me. I think this is why I feel that the most satisfying contributions that I can make to a project are the ones that originated from code that I wrote to make my own life easier, not those that are part of a feature plan. Perhaps this is just the ultimate fulfillment of my obsessions - forcing other people into my behavioral patterns - but I'd like to think it's based on the feeling that my improvement really is an improvement; with feature plans, sometimes you get it right and sometimes you don't, and you have to wait a while before you find out which is the case.

Okay, it's probably the obsessions. But it still feels good.

Labels: , ,


My latest article: Create a photo album application with Project Zero and REST design principles. Enjoy.

Labels: ,

Wednesday, October 10, 2007


After publishing one of my recent posts, I noticed a Freudian slip in the way I talked about software standards (emphasis added):
I think of security-related code as [being] responsible for authentication, authorization, encryption, and the complex protocols that the industry has created to simplify them.
My initial reaction was to correct this oxymoron, but then I realized that it was a fairly accurate description of many software projects and protocols that I encounter every day. My previous work in WS-Land was a great example of this: things started out simple, but as the specs expanded in order to handle more use cases, it became much harder to provide a simple toolkit for implementing them. Complex problems beget complex protocols, all in the name of simplification.

On the Zero forum there is a discussion about whether or not to include support for the deserialization of JSON data into Java beans. My gut reaction to this - which is based on many years of programming in strongly-typed languages and the belief that context assist is an inalienable right - was that converting JSON objects to beans is a fantastic idea, and that I wouldn't want to be friends with anyone who thought otherwise. Data binding, while complex in many ways, simplifies the even more complex problem of wading through the raw bytes of serialized objects. Right? You have to have principles.

Pat Mueller agrees with this sentiment, comparing a JSON object to a bag of goo, which doesn't sound like something that is easy to debug. If you have any doubts about this, ask a programmer on your team if he thinks his current project could be improved by adding bags of goo to the source - at best you'll get a weird look, and you probably won't be invited to give your opinions on the project anymore. In general, people don't want bags of goo in their source code.

What I could not have predicted was just how much working on Zero has changed my world view, and how much I would waver as I started to think about my experience creating RESTful services with JSON data structures. The more I thought about it, the more I realized how easy it is to get things done with JSON's Java APIs; the JSONObject and JSONArray types are just Maps and Lists, respectively, so if you're unfamiliar with the data flowing in or out of a service, it's easy to learn about it - just dump it to the console! The true nature of any JSON data structure can be determined in one step, without reading any documentation[1]. There is no data binding framework to configure or reflection errors to debug - you get the data in a simple collection of name-value pairs, and that's that. It's made service development a simple affair without the presence of a complex protocol.
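The Map-and-List nature of those types means the whole debugging trick can be sketched with plain collections - the field names and values here are made up for illustration, but the shape is the same thing you'd get back from a JSON parser:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

public class GooDemo {
    // A hypothetical JSON payload; in Zero, the JSONObject and JSONArray
    // you get from the parser are just Maps and Lists like these.
    static Map<String, Object> samplePayload() {
        Map<String, Object> photo = new LinkedHashMap<String, Object>();
        photo.put("title", "Sunset");
        photo.put("tags", Arrays.asList("beach", "vacation"));
        return photo;
    }

    public static void main(String[] args) {
        // The one-step debugging trick: dump it to the console and the
        // true shape of the data structure is right there.
        System.out.println(samplePayload());
        // prints: {title=Sunset, tags=[beach, vacation]}
    }
}
```

No binding framework, no configuration - the name-value pairs are self-describing.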

But why is it okay to work with bags of goo for my RESTful services while demanding strongly-typed APIs in my other code? I definitely wouldn't want to write JavaScript-style code for all of my projects - it's too frustrating - but I like it for sending packets of data between applications and processing said data in a single module. Once you have to start delegating the data processing to multiple classes or systems, the Map and List usage becomes confusing because the origin of the data is no longer clear to the reader. This is where Pat's bags of goo comment rings true for me - if you're just passing around hash tables from library to library, you've got a debugging nightmare on your hands. For Groovy scripts in Zero, though... it's nice.

The worst part about using a more dynamic, JavaScript-style of coding in Java would be the inevitable move towards something like EMF, which I hate with the hate of a thousand suns. I think that as long as I stick to using simple collections for immediate processing of service I/O and use strongly-typed beans for the rest of my logic and utilities, I can enjoy the magic of JSON without getting lost in some programming black hole.

In conclusion, my thoughts on data binding and type safety aren't as concrete as they once were, and I think that pure JSON objects and other goo-oriented data are great for Zero developers creating RESTful resources. Hopefully Pat and I can still be friends.

[1] You may need docs to find the values of enumerations (if any), but you would have to do that for a bean-based API too.

Labels: , ,

Wednesday, October 3, 2007


I know what you're thinking: Ruby on Zero? That's unpossible!

In this time of red states and blue states, sharp political divisions, and talk radio played at entirely inappropriate volumes, I thought it was best to do something for the world that brought people together. I'm a peacemaker, breaking down barriers one developerWorks article at a time. Check it out: Add Ruby Scripting to Your Project Zero Applications.

Labels: ,

Sunday, September 30, 2007


I recently finished Jan Crawford Greenburg's latest book, Supreme Conflict: The Inside Story of the Struggle for Control of the United States Supreme Court, and I have to say, it was remarkably even-handed given its topic. Details from the personal notes and memos of nine current and former Supreme Court justices provide some insight into how the Court runs; a revealing interview with Justice O'Connor adds some extra drama, explaining her early retirement and how we came to have two SCOTUS vacancies at the same time. It was all very personal, and not terribly political. I would recommend it to anyone who finds the judicial branch somewhat mysterious and wants to understand how it affects the lives of average people.

There were two things that came up repeatedly in Supreme Conflict, and neither was particularly encouraging. The first item was how small the political network is in and around Washington. The book covers SCOTUS appointments from the Reagan Administration through Samuel Alito, but during those thirty years, no more than a dozen or so candidates were considered by either side of the American political spectrum. It is clear that if you were not able to land an appointment to the 4th Circuit federal appeals court (which requires knowing and having worked with one or more of the president's advisors), the cards are stacked against you and your dreams of serving in the Court. Judges from other appellate courts in other parts of the country are occasionally considered (like the aforementioned Alito), but the vast majority were nominated because top advisors wanted to push their favorites, and said favorites had the opportunity to participate in the Washington, D.C. social network. Geography-based judicial nominations seem incredibly... injudicious... even for politicians.

The second item was how abortion became the primary issue - if not the only issue - that influenced a candidate's support from legislators and the public. No other issue even comes close. In reading the notes and transcripts provided by multiple instances of the executive branch, it is hard to find more than a token concern paid to other issues that the Court might face. Homosexuality, education, property rights, medical marijuana, the Pledge of Allegiance, government surveillance - these things were mentioned, but none was ever a deal breaker. Abortion is the ultimate deal breaker.

I think it's fair to say that putting so much weight on one issue is bound to negatively affect decisions made on other issues. In every case that does not involve abortion, a justice is forced to spend lots of time and mental energy weighing how his opinion in the case may affect current or future abortion-related cases; he may be pressured to make a sub-optimal decision because he is afraid the optimal one may eventually lead to politically unpopular action on the abortion front.

Justices may not face elections or popularity polls, but they certainly face pressure from politicians, friends of the Court, and talking heads who can demonize them as the face of the enemy. Even notoriously independent and aloof justices like Stevens and Thomas must be affected by the pressure. No one wants to go into the history books because of a decision that was really a side effect. That's a lot of stress to put on a person who already has pages and pages of case history and legal arguments to absorb.

Finally, I know that if I were appealing a decision that had a major impact on my life and revolved around, say, privacy, I wouldn't want to be short-changed because Justice Bob was worried about the implications to the pro-X crowd. Despite their prominence in the national discussion over Court rulings, abortion-related cases don't make their way onto the docket very often, and yet these potential cases have a significant effect on topics as disparate as gay rights and terrorism. I'm sure this is a fun twist for the lawyers arguing before the Court.

Despite these common themes of cronyism and questionable priorities, the book was great and I enjoyed the opportunity to blog about one of my favorite subjects: constitutional law. Did you like how I mixed in issues that were "critical" to both liberals and conservatives and used variables in order to hide my own opinions? Aren't corporate blogs fun?

Update: I shouldn't have tried to write this post in such a short period of time. I've cleaned up some of the minor problems, but I still wish I had let it simmer a while longer.

Labels: , ,

Friday, September 21, 2007


When I first moved to North Carolina, I was fortunate enough to live in apartment complexes that were surrounded by forests and swamp land. During the spring and summer months, I would sleep with bedroom windows open and listen to the crickets and the bullfrogs talk to each other. The only downside of doing this was that I couldn't sleep in on the weekends, because by 10:00 a.m. my apartment would be roasting in the Carolina heat; few things in life are perfect.

Anyway, it was always very serene, and it upset me greatly when the twenty acres of undeveloped land across from my last apartment were torn apart and turned into a Harris Teeter. Not only did it add an obnoxious amount of shopping center light to the nighttime sky, but it also eliminated most of the wildlife habitat and made it so the only sounds I heard at night were cars. I closed my windows after that.

Only now am I learning that my innocent bedtime habits could have cost me my life. According to this article, bullfrogs are unstoppable predators that can threaten an entire ecosystem; people in Utah are terrified that the immigration of bullfrogs will cause a plague of Biblical proportions. I had no idea. I guess I'm just happy that I was on the third floor, and none of them ever made it up the wall before sunrise.

Labels: ,


Let's make one thing clear: I know very little about security-related programming.

Now, don't misunderstand: I know the best practices related to creating secure web applications, and I've picked up many other general security principles in the last couple of years, but those do not require me to write security-related code - just secure code. Making sure that your web application is safe from SQL injection helps to make your code secure, but it doesn't require that you understand the fundamentals behind security features. You're just an end user who has managed to perform his job correctly.
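To make the secure-code-versus-security-code distinction concrete, here's a minimal sketch of the SQL injection trap - the table and method names are hypothetical, and the fix is plain JDBC usage, not cryptography:

```java
public class InjectionDemo {
    // The insecure habit: attacker-controlled input is concatenated
    // directly into the SQL statement.
    static String naiveQuery(String username) {
        return "SELECT * FROM users WHERE name = '" + username + "'";
    }

    public static void main(String[] args) {
        // A classic injection payload turns the WHERE clause into a
        // condition that is always true.
        String evil = "x' OR '1'='1";
        System.out.println(naiveQuery(evil));

        // The secure-coding fix is a parameterized query, e.g. with JDBC:
        //
        //   PreparedStatement ps =
        //       conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        //   ps.setString(1, username);
        //
        // The driver handles the escaping; you never touch the security
        // internals yourself.
    }
}
```

That's what I mean by being an end user of security features: you follow the rules, and the platform does the hard part.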

I think of security-related code as those parts of a platform that are responsible for authentication, authorization, encryption, and the complex protocols that the industry has created to simplify them. That's the stuff that I am not familiar with and, to be quite frank, has never really interested me. I'm not sure if the lack of interest was driven by perceived difficulty or the overwhelming focus on encryption that I encountered while in academia[1], but either way, I've always avoided security-related projects.

Until now.

The code base for Zero Core isn't that big, so while its event-driven architecture makes it harder for a new person to read through the code and understand the flow, it really doesn't take that long to figure out how things work. For the past few weeks, I've been trying to help out with some bugs and enhancements to Zero Core, mainly to learn more about the code and fill in my mental gaps; one of the enhancements that I was particularly interested in was related to authorization, but because there are more work items than there are Core team programmers, I was told that the enhancement would have to wait until our third milestone, meaning December-ish. The only way the code would get into the next milestone (October-ish) would be if I agreed to write it. So I did.

Let's make a second thing clear: bug 972 was not very hard. It's not a big feature, and you don't have to be a computer scientist to understand how it works. Had someone from our security team been available, I'm sure they could have written the code and the unit tests before lunch[2]. Still, I'm happy to report that I was able to read through all of the zero.core.security.* code and understand it, and I didn't break the build once during development. I feel a bit more confident in my ability to tackle security-related projects now, even if I've only scratched the surface of understanding. Baby steps.

Special thanks go to Zero security lead Todd Kaplinger, who made sure that I started off on the right foot and who didn't even make a face when I told him that I don't know much about security.

[1] Encryption is discouraging because I am certain that it is difficult, and that I do not possess the mathematical mind required to conquer it.

[2] It took me a day and a half, when you add up all of the hours.

Labels: , ,

Tuesday, September 11, 2007


I've published six or seven articles on IBM's developerWorks site since joining Project Zero in April, but none of them contained Zero-related content; each article had been written during my pre-Zero days and revolved around Apache Muse and WS-*. The truth is, the articles had their publication dates set long before I joined the Zero team, but I think my management was starting to wonder if I was ever going to let go of the past and start working for them.

Today we dispel those worries: my latest article is very Zero-oriented, and it's accompanied by an interview I did for the site's podcast. Enjoy.

Labels: ,

Friday, September 7, 2007


I have been sending and receiving email for a decade, and during that time I've deleted a lot of spam. I've never been particularly annoyed by spam - before I signed up for Gmail (which is unmatched when it comes to intercepting spam before I see it) I would just delete the messages and move on with my life. I didn't receive that much to begin with, so I was never able to sympathize with people who claimed to have thousands upon thousands of spam messages clouding their inboxes.

Still, I was not completely apathetic about spam; while I may not be as inconvenienced as some by their mere presence, I have always been disappointed in the quality of spam messages. In the early-to-mid nineties, I shrugged off the obvious subject lines, the poor English, and the ridiculous formatting because I figured that the people creating the messages were amateurs who were experimenting and didn't have any experience to tell them what would work and what wouldn't. They were true spammers: throwing millions of messages into the wind in the hopes of making profits in volume.

What's disappointing is that today, ten years later, most of the spam that I see in my Gmail spam folder is no different. I don't understand how such a lucrative business could still depend on such amateur content. I realize that the messages will always be a little zany because of the never-ending battle between spammers and spam blockers, but I don't think that should preclude the spammers from sending advertisements with proper English sentences and appropriate color schemes.

Every time I read a news article about the arrest or trial of a spam kingpin[1], they always mention how large the spam market is and the potential for huge profits; I find it hard to believe that the successful people in this business are not interested in creating higher-quality, more professional advertisements. I'm not saying they need to create corporate-level advertisements, but they should be willing to spend thirty minutes or so on a message to make sure it's legible and credible. Thirty minutes seems like a small price to pay for a significant increase in one's click-through rate.

The only really creative spam I've ever seen arrived around the spring of 2006. All of a sudden I started seeing emails from senders such as Felix Q. Marvelous and Burger F. Luscious, with subjects that, while not valid English, were at least entertaining[2]. Sadly, the message content was the same old boring stuff... but I shared a lot of laughs with my friends by exchanging great spam names. If the authors of this genre of spam had just applied the same creativity to the actual content, they might have convinced someone to take their emails seriously, but instead, they never evolved beyond an amusing self-parody. As of today, the spammers are back to their Plain Jane, WE ARE READY TO ACCEPT YOUR LOAN REQUEST ways.

Only one group of people disappoints me more than spammers, and that's scam artists. People who are trying to scam other people into giving up their bank account number or other important data have the opportunity to make more in one hit than spammers do in a year. And yet, it wasn't until late 2005 or early 2006 that they thought to replace their poorly-formatted messages with HTML and images copied from real bank web sites. Hello? The media is always going on and on about how clever these online scams are, but I see them as incredibly inept. Yes, they stole one old lady's life savings and her home is in foreclosure, but how many people did they lose because they misspelled Citibank? Twice? That's beyond lazy.

Sometimes I feel like I should have gone into the spam business. The amount of wasted potential I've seen over the last decade tells me that someone with my ambition could really clean up. With a few college classes on psychology, advertising, and information technology, the will to work for more than five minutes on an advertisement, and the email automation software that has existed for years, I bet I could double the average click-through rate and make a killing.

Sigh. I could have been a kingpin.

[1] Always kingpin. Not executive. Not owner. Kingpin. Like he's gunning people down and sneaking cocaine through customs instead of clicking a few buttons from a nondescript apartment in Poughkeepsie.

[2] A little too entertaining to reproduce here.

Labels: ,

Thursday, September 6, 2007


I want to update my blog template to include a new link. This sounds simple, but if my past experience with Blogger is any indication, this change will not only modify the old post files, it will also re-publish the posts as if they are new. If you're subscribed to my Atom feed and you end up with a mountain of new entries in your feed reader tonight, just delete them all. Sorry for any inconvenience.



There is a very active thread on muse-dev right now about how to fix or work around the lack of thread safety in Apache Xerces, which is the XML parser used by Muse. In XmlUtils.

Yes, that XmlUtils.

Ruh-roh, Shaggy.

All of the concerns raised over this issue are valid, and the people who are working to understand and solve the problem are all users who have deployed Muse in real world projects. I trust them to get it right, it's just a bit nerve-racking to watch people consider an API change that will touch the code in over six dozen places. Muse 2.1 has already shipped as part of a WebSphere product, and I can only imagine the number of conference calls that will ensue if their Muse-based applications break when they upgrade to 2.3.
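For what it's worth, one common workaround for non-thread-safe parsers like Xerces - and I'm not claiming this is where the muse-dev thread will land - is to confine each DocumentBuilder to a single thread instead of sharing one behind a static utility method:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public class ThreadLocalBuilder {
    // DocumentBuilder instances are not thread-safe, so each thread
    // gets its own lazily-created instance.
    private static final ThreadLocal<DocumentBuilder> BUILDER =
        new ThreadLocal<DocumentBuilder>() {
            @Override
            protected DocumentBuilder initialValue() {
                try {
                    return DocumentBuilderFactory.newInstance().newDocumentBuilder();
                } catch (ParserConfigurationException e) {
                    throw new RuntimeException(e);
                }
            }
        };

    public static DocumentBuilder get() {
        return BUILDER.get();
    }
}
```

The appeal of this approach is that it doesn't change the public API at all, which matters a lot when six dozen call sites and a shipped WebSphere product are involved.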

That said, I'm really stoked to see such disparate parties working together to solve the problem - this is why we wanted to have a community in the first place! I wish I could invest the same amount of time towards resolving this issue, but I can't do two jobs at once.

Definitely nerve-racking.

Labels: ,


The Apache Muse code base includes a collection of DOM-based convenience methods that help us parse and construct XML fragments without a lot of DOM API boilerplate. These methods are included in an incredibly large class named XmlUtils and represent the collective knowledge, mistakes, and advice of a half dozen developers who have worked on WS-* technology for over three years. Changing any of the methods in XmlUtils could leave the Muse build totally broken; despite the fact that it's been extracted from the core engine into a small library, XmlUtils is very much a core piece of code because of how heavily dependent on it the rest of the project is.

Any piece of code that is so central to a project is bound to have one or two (or ten) ugly hacks to handle edge cases, bugs in dependencies, and backwards compatibility. XmlUtils is fairly hack-free, but it does have some code that I consider... unsavory. In my opinion, the most cringe-worthy code snippets are the ones that need to accept an XML element and iterate over its child elements. The only way to get all of the direct children of an element is to call Node.getChildNodes() and save those Node objects that are actually Elements; such code involves one or more if blocks that check the concrete type of the Node objects and ignore the undesired ones:
if (nextNode.getNodeType() != Node.ELEMENT_NODE)
    continue; // skip comments, text, and anything else that isn't an element
Today these checks are common, but when I first wrote the code I allowed myself to make a number of assumptions because it was only used for traversing schema-validated SOAP messages. Then, one day, we added a configuration file[1], and all of a sudden the input was much less predictable. One of the first bugs I had to fix was the one caused by comments in the configuration file; XML comments become their own DOM nodes - different from Element or Text - and my code failed when comments were added in places where I expected a child element. My final fix eliminated comments from the DOM tree altogether, but I kept the conditionals mentioned above on the off chance that we encountered XML pre-processor nodes or CDATA nodes. I would not be fooled again!
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();

// we don't need comment nodes - they'll only
// slow us down
factory.setIgnoringComments(true);
This bug, and others like it, were part of my education into the more pedantic corners of XML Land. Over the course of three years I learned many things about XML, and while many of them were logged in my brain and never used again, all of them helped to drive home the message that XML-related issues were never as simple as they first seemed. When reading or writing XML documents or schemas, great care must be taken to ensure that edge cases and ambiguity are not hiding in the bushes. And as much as I malign the DOM API, it does a pretty good job of alerting you to the fact that XML processing in a real world system is never as easy as more user-friendly APIs would have you believe.

Now, despite four excruciating paragraphs on the internals of Muse, this post is actually about frustration I've had with some of my Zero-related work. I had to create some custom Dojo widgets this week, and after a few hours of searching the Internet for proper instructions[2], I started to make some progress: I had a widget "class", a widget template, and an HTML page that was loading the widget class and calling its initialization routines. The only problem was that nothing was showing up in the page - I had picked off all of the initialization errors, and yet there was no Dojo-inspired HTML to be seen.

Just as I was about to break down and open a DOM inspector to try and read through the eighty-seven levels of HTML to find the answer, it dawned on me: comments! My widget template is encapsulated in a single <div/> tag, but the HTML file that contains that <div/> has two comment tags: one for the IBM copyright notice and one with my comments explaining the content of the file. I had a hunch that Dojo was assuming the template file had only one node (an element) and wasn't checking for other, irrelevant nodes.

I was right.

I took the copyright notice and comments out of my template file and everything worked beautifully. It's good to know that my WS-* skill set isn't completely wasted here in RESTtopia.

[1] The beginning of the end for any project.

[2] The Dojo team isn't big on documentation, so I had to piece things together using half-baked suggestions from mailing lists and forums, some of which were no longer operating. I love reading documentation through Google cache!

Labels: , ,

Saturday, August 25, 2007


Zero's SVN repository is now accessible to everyone. As promised.

Labels: ,

Friday, August 24, 2007


Bridgid and I don't have many conflicts, but one of the things that forces us to compromise is the fact that we both match well-known gender stereotypes when it comes to our work habits and attention spans. This can be amusing and frustrating at the same time.

For those of you who never made it through Psych 101 and don't work for a company that requires lots of diversity training, allow me to summarize: men are incredibly single-minded and perform well on tasks that require deep concentration, long hours, and not talking to anyone; women are excellent multi-taskers who are most productive when they are faced with disparate tasks that exercise social as well as academic skills. This is why you meet so many male programmers and female marketing executives. There are exceptions, but in my experience this stereotype is more accurate than most[1].

In fact, in the case of Bridgid and me, it is incredibly accurate. Bridgid is a biochemist who gives cancer to fish and does experiments on "genes". The fact that she's a geek would lead you to believe that she is an exception to the female stereotype, and in many ways, she is; however, when it comes to multi-tasking and the desire to work on disparate tasks, she is a perfect match. Bridgid can switch contexts almost immediately and not lose a step. Her long-term scheduling is limited to setting up multi-day experiments in such a way that she can balance her classes with her time in the lab.

I am a different animal. I exhibit classic programmer behavior when I'm at work, and it's even more obvious after work, when there are no meetings to distract me. I like to block off hours of time for one project, one feature, or one set of related bugs. I save big-ticket items for days when I work from home so that any communication that I have with other people is routed through email or IM, which allows me to manage it in the same way that I manage my list of tasks. Even when I'm working on a feature that touches code shared by multiple people and requires lots of questions and communication, at some point I will buckle down and write code by myself, with no distractions to knock down the house of cards I am building in my head.

Context switches can kill hours of my day if they are timed right: three consecutive meetings with thirty minutes in between each means that I lose an hour because I can't start anything significant before I'm pulled into the next meeting. The flip side of this is that, once I am working on something, I find it very hard to put it down. I wish that I had Bridgid's ability to let things go when a context switch happens; instead, it takes upwards of an hour for the thoughts surrounding whatever it is I'm working on to leave my brain. Of course, sometimes the delay is caused by thoughts about co-worker frustration or bureaucracy, but I think that is more understandable to the average person. Thinking about Ant's classloading behavior on the way to dinner is not.

The reverse of this behavior is interesting. A large project may require many days of intense concentration and occasional t-shirt re-use on my part, but when I'm done[2], I am suddenly aware of all of the great things that are happening around me. Things to do. Fun to be had. My non-programming intensity is as strong as my programming intensity, but the two cannot coincide. This can be very confusing for Bridgid and other female humans.

I try to temper this conflict by keeping a personal schedule that extends at least two and usually three weeks in advance, identifying those times when I will be able to work consecutive days on larger problems without forgetting to eat or talk to my girlfriend; the other days become targets for meetings, smaller bugs, and tedious work that will not leave me distracted at the end of the day. This need for order and preservation means that I am constantly scheduling, with the ability to remake two weeks of plans in ten to fifteen minutes. Frequent re-ordering means that no "to do list" software is fast enough or natural enough for me; whether it's Microsoft Outlook or some Web 2.0 app with Atom feeds and rounded corners, I always come back to a plain text file on my desktop. I don't have time for calendar widgets and status markers. When something is re-scheduled, I cut and paste it. When it's done, I delete it. At the end of each day, I open todo.txt one last time, delete the current day's entry, Ctrl-S, Alt-F4, and turn off my monitor.

On that note, Fri PM (8/24) - blog post is complete. It's time for a trip to Fujisan. With my lady.

[1] I'm sure you've already thought of a few co-workers who completely defy these stereotypes. Good for you. I'm telling my story anyway.

[2] Where done means that it works on someone else's machine and I've found most of the edge cases.

Labels: , ,


So I'm going to ApacheCon. Just for a day. I'd like to be there for the whole conference as a presenter and an attendee, but a) ApacheCon hates me, and b) all of the sessions I want to attend are on one day, so it's hard to justify a longer trip.

While I'm attending, I'll be hearing talks from Grandmaster REST and the Simplification Band about how REST is the Silver Bullet and how Web 2.0 is changing the future before it happens; you know, the usual fare. My goals are to get a more in-depth look at some of Zero's competition and to find out what makes programmers the most productive on their REST-oriented projects. Assuming the wireless connection isn't too flaky, I hope to live-blog the event for the benefit of my co-workers.

I also hope that I don't get carjacked while driving through Atlanta.

Labels: , ,

Tuesday, August 21, 2007


Beg The Question: To beg the question does not mean "to raise the question." This is a common error of usage made by those who mistake the word "question" in the phrase to refer to a literal question.

Sites like this play to my inner stickler, which means that I find them irresistible. This one in particular reminds me of times during grad school when I'd be sitting in class, waiting for some amateur philosopher to finish his point about some technology-related issue that didn't deserve as long a monologue as he was giving it, when all of a sudden I would be jolted back into reality by said philosopher's repeated misuse of the English language and its more interesting phrases. Like most of the time I spend in airports, time spent listening to proud-yet-inaccurate college philosophers generates an incredibly loud, scornful rage... in my head.

External Dan: (calm, expressionless, silent)



They also have cards that you can hand out to people on the street. The cause doesn't give me the same bold self-assurance as the one behind SHHH cards, but if I ever go back to grad school, I'll be sure to print off a sheet or two.

Labels: ,


Davanum Srinivas: It's been a wild ride since WS PMC inception in 2003 as the PMC chair. I'd like to step down from this role now.

Dims was one of the people that helped get me up and running in the Apache Web Services community, and it's unfortunate that he can't continue to be one of its official ambassadors forever. He answered my calls for help on a broad range of issues - from ASF rules and regulations to SVN administrivia - and he never gave me any grief for bugging him with stuff that wasn't in his job description. Despite the fact that we were eleven time zones apart, emails to Dims always received an immediate reply, which means he is either an ultra-productive engineering machine or severely overworked; my guess is the former, but WSO2 is still in startup mode, so you never know. Either way, a lot of IBM's open source work has flowed through him on its way to release, and I'd like to thank him for all of his help.

Labels: ,

Thursday, August 16, 2007


Each time I re-start my blog or re-design my site, I usually look over my old content and decide to throw it out. Despite my best intentions at the time, I always discover that my past writing is laden with ignorance and sullen tripe about Cindy Lou Girlfriend and the highs and lows she had inflicted upon me. No one wants to read that.

Since graduating from college, I've tried to keep my writing more interesting from the perspective of Future Dan, and I think I've done a decent job (not great, but decent). My last blog was mostly free of the aforementioned tragedies, but I didn't include it in my latest re-design because I was too lazy to refactor those posts whose content did not fit well with the new layout. It was just easier to start over. Fortunately, I saved a few of my favorite posts from that blog, and that brings us to today's post.

I don't want to waste a lot of posts (or even multiple posts) on old material, but I was reading an article the other day that related to an interesting topic that I had previously covered in great detail. The article was about a Carnegie Mellon research project that demonstrated success in patching incomplete or damaged images using a catalog of disparate image fragments found online; this news represents another step in the long journey towards accurate and accessible image search that does not rely on pre-existing metadata or titles[1]. Very cool stuff. It also has potential with regards to today's topic, which is the patterns of the human face and the way the brain processes those patterns.

Here is what I wrote a few years ago on the topic of facial patterns:
I think it would be really interesting if someone created an ontology of human faces for each generation. It is my belief that there is a finite number of possible faces, and that, unlike snowflakes, we have a lot more look-alikes than people care to admit. A comprehensive face catalog would tell us exactly how many unique faces are out there. I think that we would all be astonished at how small the number is.

Here's an experiment for you to try: gather a high school yearbook from a friend or family member that you did not go to school with, and look through the senior photos (which are always larger and more detailed). I guarantee that you will recognize numerous people despite the fact that you don't know them. You will find photos of people that look exactly like people you went to school with, met at a bar, etc; furthermore, you will find faces that you've seen on multiple people, in their exact form.

Straight-haired brunette, thin face, pale complexion, wears a little too much makeup, squints too much when she smiles, mouth is thin as a piece of paper. Square-faced, black hair with too much gel, clear blue eyes, chubby nose, has a grin that makes him seem uneasy. I'm telling you, the similarities are there. There are very few people in this country that I have not "met".

This does not mean that people who have popular faces are mundane or unattractive; if you do the yearbook experiment, you will find a lot of gorgeous people that happen to look just like some other gorgeous people you know. My hypotheses are simply that a) a significant percentage of the population has more twins than they could ever imagine, and b) this is the exact opposite of what most people believe.

And that brings me to my social circle. I don't think that I look like a significant number of other people, but then again, there is nothing very remarkable about my appearance. Mike, who agreed with my theory when I related it to him this afternoon, also has an uncommon face. Or not. It's not very likely that my small circle of friends has somehow escaped this phenomenon. This leads me to my final theory: c) even people with a twin in every city don't realize that they have a very common face.

The only time I see exceptions to my theories is when someone happens to look very similar to a current celebrity. I am guessing this is due to the fact that our brains are able to put a name to the famous face and more easily remember it during day-to-day life.

Now, what's really interesting is that the types of faces seem to change from generation to generation. My analysis might be skewed by different fashions, trends, etc., but it seems like facial patterns are limited to two decades or so. What I would like to see is an American face ontology for my generation; the real challenge would be providing a mechanism for efficient lookups. If that were possible, you could categorize any of your friends as, say, a #52, which is both amusing and scary at the same time.
Note the last paragraph: the real challenge would be providing a mechanism for efficient lookups. Well, science marches on. Projects like the one at Carnegie Mellon continue to chip away at the seemingly impossible task of finding images based on natural language descriptions or imperfect renderings (such as sketches, or photos of things that are only conceptually similar to what you are looking for). Within a few decades, we may be able to study the American facial spectrum without the manual and tedious processes I imagined when I wrote my original post. My only hope is that it will be used for good[2].

[1] For an example of some really innovative research in this area, check out Retrievr, which allows you to search images on Flickr by drawing sketches or uploading photos with similar content. Make sure you have JavaScript enabled when visiting the site.

[2] A more likely scenario: two days after such a project is announced, there will be a Firefox plugin that lets you stalk people you knew in high school and find out if they're single and miserable.

Labels: ,

Tuesday, August 14, 2007


RESTdoc is now part of Zero Core! You can read about it here; the latest code is here. Much thanks to Steve Ims for working with me to resolve all the little details and get this into /trunk.

Go banana!

Labels: , ,

Thursday, August 9, 2007


My forum post about RESTdoc generated a lot of great ideas, almost all of which have been implemented in my RESTdoc SVN branch. I think the latest UI is pretty slick (considering it was made by a programmer, anyway), and integrating the test forms with the REST tables should help us lower the learning curve for those new to Zero's REST conventions. I've uploaded some screenshots of the latest RESTdoc UI and would appreciate any feedback related to making it prettier or more intuitive.

I have a few more features to complete and IE-related bugs to work around, but I've already opened a feature request in the bug database. I suppose I should also add some documentation to the wiki; it would be tragic irony if people had trouble using a documentation tool because it was poorly documented.

Labels: , ,


Pat Mueller: I suppose I must not have gotten around to telling Dan my horror stories of using WSDL in the early, early days of Jazz. The low point was when it once took me two working days to get the code working again, after we made some slight changes to the WSDL.

I'll see your "WSDL ruined my week" and raise you a "WSDL caused me extreme pain that could only be soothed by setting my skin on fire and never reading a WSDL document again." I understand the pain associated with WSDL-oriented tools and code generation, but my experience creating those tools was just as difficult. During the first six months of working on the project that would become Muse 2.0, the other IBMers I was working with started asking for client code generation features; I was working well over sixty hours a week at this point, and as important as code generation is for WS-* programmers, I had other things to worry about, like implementing all 9,087 pages of WS-RF. I finally managed to throw something together one weekend, and while it wasn't pretty, it got the job done (for a while).

Eventually, Muse grew and tooling was added, and we needed more code generation support than my weekend side project could provide. Andrew Eberbach started working on a Muse-oriented version of WSDL2Java[1] that actually had a design behind it and, as a result, was much more flexible for both Muse developers and Muse users. During the creation of WSDL2Java, I found myself spending a lot of time trying to transfer my knowledge on the nuances (nuisances?) of WSDL 1.1 and their effect on Java-based service implementations. When I first started at IBM, I had only a cursory knowledge of WSDL, but after two years, I knew what I was doing and had the scars to show for it; unlike many skills (which seem easy once you have them), reading and writing WSDL documents was something that I never got over, and that made it all the more difficult to encourage new students. By the time Muse 2.0 was released, I could read WSDL documents that were thousands (thousands) of lines long and find obscure syntactical or semantic problems in a minute or two... but it never seemed easy.


The worst part was, even as Andrew picked up my unwritten, unofficial WSDL knowledge and took ownership of our command line tools, it didn't free me from the tyranny of port types, bindings, and <xsd:any/>. As the author of Muse's WS-resource deployment and request-processing engines, I had to read WSDL documents at runtime to determine how SOAP requests and WS-Addressing data should be mapped to WS-resource instances and Java method calls[2]. Keeping the WSDL assumptions consistent between tools and engines was tough when I was the only author, so you can imagine what it was like when it was split between two people, one of whom had not yet come to terms with the unstoppable, day-ruining force that was WSDL 1.1.

I really thought that I had a point when I started this rant, but now that I'm nearing the end of it, I can see that I don't. Pat's comment just triggered a flashback and it was either vent through my blog or sit in the corner with spiders crawling over my skin. Whew. Crisis averted.

[1] I believe the first instance of a tool named WSDL2Java was released as part of Apache Axis, but every web services framework I've ever seen has its own implementation of this concept. Muse was no different.

[2] This requirement was put in place in order to avoid another JAX-RPC mapping file disaster.

Labels: , ,

Tuesday, July 31, 2007


I've been working on a new tool for Zero, and I'm going to post some background information here (rather than the forum) because it will be easier to dig up later on.

Designing and implementing Zero applications usually starts with the creation of a REST API, which can then be documented using a REST table (or Gregorio table, as we sometimes call them). Comparing a REST table to a WSDL document is an excellent way to demonstrate ease-of-use differences between REST and WS-*. Upon reading one of these tables, one already has the mental model needed to write code that uses the documented service; REST tables manage to fit the important details (URI structure, method names, status codes, etc.) into a fairly spartan layout, while WSDL documents require pages of XML that is not really meant to be parsed by humans.
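To make the comparison concrete, here is a hypothetical Gregorio-style REST table for a simple blog-entry service (the resource paths, formats, and status codes are invented for this example, not taken from any Zero sample):

```text
Resource        Method  Representation   Status codes
/entries        GET     JSON entry list  200, 500
/entries        POST    JSON entry       201, 400
/entries/{id}   GET     JSON entry       200, 404
/entries/{id}   PUT     JSON entry       200, 400, 404
/entries/{id}   DELETE  (none)           204, 404
```

The entire API fits on one screen; the equivalent WSDL would run to pages of port types and bindings.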

Of course, there is no official mapping of these REST APIs to the code that fulfills them, and so there is no RESTful code generation or documentation extraction like there is in WS-Land. I've had to create a lot of REST tables while working on Zero, and while it wasn't hard, it was kind of tedious because I had to duplicate the same information in my code comments[1]. Having documented my REST APIs manually and not wanting to do it again, I decided to create a tool that would read through my code, pull out JavaDoc-style comments, and turn them into a pretty HTML page full of REST tables.

RESTdoc works just like JavaDoc, except that it reads Groovy and PHP scripts that follow Zero's REST conventions[2]. RESTdoc's own JavaDoc provides the following overview:
To use RESTdoc, you must first include RESTdoc-style comments in your Zero scripts. RESTdoc comments are just like JavaDoc comments, with the following new tags: success, error, and format. The new tags are used to list the HTTP status codes for success and errors, as well as the expected data format in the request or response body for each method. RESTdoc will ignore any method that is not a Zero REST method (onList(), etc.) or that does not have RESTdoc-style documentation preceding its definition. Like Zero itself, RESTdoc only supports Groovy and PHP scripts, but it is possible to add other languages in the future.
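As an illustration, a RESTdoc-style comment on a Groovy onList() method might look something like this (the descriptions and status codes here are invented for the example; the success, error, and format tags are the ones described above):

```groovy
/**
 * Retrieves the collection of blog entries.
 *
 * @success 200 The entries were retrieved without incident.
 * @error   500 An unexpected server error occurred.
 * @format  A JSON array of entry objects.
 */
def onList() {
    // handler logic goes here
}
```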

To run RESTdoc from your Zero application's directory, just type:

restdoc

By default, RESTdoc will look in ./app/resources for Zero REST scripts and it will save the generated HTML page in ./docs/rest. If you want to run RESTdoc from a script that is not in your application's directory, you can use the input and output flags to override the defaults:

restdoc -input /my-app/app/resources -output /my-docs

Finally, RESTdoc does not produce any output if all goes well. You can turn on console logging using the verbose flag if you're having trouble debugging a problem:

restdoc -verbose

I've converted some of our sample applications to use RESTdoc-style comments, and so far, the tool works well. The HTML isn't beautiful, but neither is the stuff generated by JavaDoc. The important thing is that programmers can write comments using the familiar JavaDoc style and automatically get a nice document describing their application's RESTful public interface.

When I first got the idea to make this tool, I was pretty excited because I saw it as an opportunity to apply my elite compiler skillz. It's pretty hard to find a job where you get the chance to work on a new programming language, but creating developer tools often puts you in a situation where some level of formal AST-based parsing and analysis is needed to implement a feature correctly, and I enjoy those situations immensely. Of course, once I got started, I realized that parsing Groovy, PHP, and possibly Java source files with a complete compiler front end would require me to drag in a ton of dependencies, dependencies that zero.core already has but zero.tools does not. I was pretty sure the Zero tooling team would not be happy about my request to add N MB of Groovy and PHP-related JARs to their distribution, not to mention the possibility of adding Java-related JARs of questionable origin[3]. After looking at all my options, I realized that I would need to trade in my elite skillz for some dubious regex-based hacks if I was going to make a tool that was consumable by the rest of the team. Oh well.

The current code uses all sorts of string searching and pattern matching that would best be handled by a parser generator, but is instead handled by me. The net of this is a 32 KB binary, which includes the Ant task that enables RESTdoc to be called using Zero's command line interface. I'm not sure if RESTdoc will end up being included in Zero, but if it does, I'll be sure to update this post.
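For the curious, the flavor of regex-based hack involved might look something like this sketch (the class and method names are mine, not RESTdoc's): a pattern that pairs a JavaDoc-style comment with the Zero REST method immediately following it, skipping undocumented helpers just as RESTdoc does.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of regex-based comment extraction.
public class CommentScraper {
    // (?s) lets '.' span newlines; the non-greedy '.*?' pairs each
    // doc comment with the nearest following method definition.
    private static final Pattern DOC_AND_METHOD = Pattern.compile(
        "(?s)/\\*\\*(.*?)\\*/\\s*def\\s+(on[A-Z]\\w*)\\s*\\(");

    public static List<String> restMethodsWithDocs(String source) {
        List<String> methods = new ArrayList<>();
        Matcher m = DOC_AND_METHOD.matcher(source);
        while (m.find()) {
            methods.add(m.group(2)); // the method name, e.g. onList
        }
        return methods;
    }

    public static void main(String[] args) {
        String script =
            "/** @success 200 OK */\n" +
            "def onList() { }\n" +
            "def helper() { }\n"; // no doc comment: ignored
        System.out.println(restMethodsWithDocs(script)); // [onList]
    }
}
```

It works, but every edge case (nested braces, comments inside strings) has to be handled by hand, which is exactly the tedium a real parser front end would have absorbed.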

[1] Also, I hate making HTML tables. Hate.

[2] I've designed the code so that it can easily handle the addition of new languages to Zero, but I don't see that happening anytime soon.

[3] IBM Legal questions the origin of everything, no matter how obvious it may seem.

Labels: , ,

Friday, July 27, 2007


Coté: The culture of Java design is to push out commitment to a given way of doing things (an "implementation") as much as possible. In Java culture, dependencies, especially conceptual ones, are nasty and to be avoided. They're taboo, even.

First, this guy really has our number[1]. Second, I would go further to say that the overuse of interfaces and patterns not only helps us avoid commitment, it also helps us rationalize decisions to replace code written by other people with our own engineering masterpieces. Overly-abstract and unhelpful interfaces like the ones in JNDI give you much more flexibility when you're trying to satiate your NIH demons. Where else are programmers going to get that kind of ego-inflating satisfaction? The dating scene? I don't think so.

[1] We being Java programmers who work for large corporations.

Labels: ,

Thursday, July 26, 2007


A few days ago, members of the Zero team got together with editors from IBM's developerWorks team to discuss the Zero-oriented articles we wanted to publish, as well as the administrivia of the publishing process. One of the issues we talked about was whether to use a single, consistent example across every article; proponents said this would minimize redundant writing and allow authors to get to the heart of the matter more quickly, while opponents said it would make authorship cumbersome and discourage exploration of new ideas. I think this is an interesting issue that goes beyond what makes good copy and will affect people's overall perception of Zero.

From my perspective, the argument for reusing a scenario across multiple articles is only attractive when I consider my personal return on investment. IBM pays developerWorks authors for their work (whether they're IBM employees or not), and the less time I spend on each article, the more dollars-per-hour I earn. I really like money, so this argument does not fall on deaf ears. Less work, more money - what's not to like?

Unfortunately, the single-scenario approach does more harm than good. Every new project has its share of hype, but when you couple that hype with one example used over and over again, the whole thing starts to reek of technology for the sake of technology. If you think back on some of the technologies that have recently fizzled after years of empty promises, you'll notice that all of them were banking on one or two contrived examples to inspire people and turn the technology into a star.

Aspect-oriented programming (AOP) is a great example of a technology built on a single-scenario obsession. AOP proponents speak in grandiose terms about code simplification and feature injection, but when you press them for details, they always fall back on the same example: logging. "Your code is littered with logging statements!" they huff. "How can you even read your code? It's chaos!" If pressed for another example, they'll usually stammer about the ugliness of exception handling, but I've never heard anyone offer a coherent explanation of AOP's solution to this. Logging is definitely the AOP-phile's bread and butter.

Now, the AOP lovers are right: logging and exception handling make your code ugly. However, even if there were some AOP framework that could extract all of that code out of my core logic while maintaining correctness and not introducing any cumbersome dependencies, would it really be worth it? For two use cases? That don't even bother me that much? I find it hard to believe that an entire paradigm can be built on two use cases, only one of which I've ever seen implemented in a real demo.
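To be fair to the AOP crowd, the logging extraction itself is real enough; you can approximate the effect without any AOP framework using a plain JDK dynamic proxy. This is my own illustration (the Greeter interface and withLogging helper are invented for the example), not a sketch of any particular framework:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Pulls the logging "aspect" out of the core logic using a dynamic proxy.
public class LoggingProxyDemo {
    interface Greeter {
        String greet(String name);
    }

    static class PlainGreeter implements Greeter {
        // Core logic only; no logging statements in sight.
        public String greet(String name) { return "Hello, " + name; }
    }

    static Greeter withLogging(Greeter target) {
        InvocationHandler h = (proxy, method, args) -> {
            System.out.println("ENTER " + method.getName());
            Object result = method.invoke(target, args);
            System.out.println("EXIT  " + method.getName());
            return result;
        };
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class }, h);
    }

    public static void main(String[] args) {
        Greeter g = withLogging(new PlainGreeter());
        // Prints the ENTER/EXIT lines, then the greeting itself.
        System.out.println(g.greet("Pat"));
    }
}
```

Note that this buys you exactly what the AOP demos promise for their canonical example, and nothing more, which rather proves the point.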

I guess I'm not alone in feeling this way because the AOP frenzy has died down significantly in the last eighteen months. Even researchers and interns (who will usually buy into anything if there's a chance it will boost their resume) have stopped talking about it. No one believes in it. You can't sell a technology on one example, even if it's a good example. It makes your project look entirely academic.

I don't think that Zero has a problem when it comes to inspiring new ideas, which is all the more reason to avoid limiting ourselves to one scenario in the interest of simplifying the documentation. Fortunately, the team decided that variety was the spice of life, and if each article is a bit longer because it needs to introduce a custom example, so be it. If our forum is any indication, there should be a number of interesting and controversial ideas flooding developerWorks very soon.

Labels: , ,

Thursday, July 19, 2007


Pat Mueller: But honestly, I'd prefer to see none of these [Triangle cities] on any lists. We already know it's a good place to live. Until we start using impact fees or transfer taxes to help cover infrastructure required by all the new people coming in, problems like poor transportation options and overcrowded schools will only get worse.

Those familiar with my opinions on city planning already know that I despise the Triangle's unchecked growth and shameless conversion of trees into tax revenue. What was once a very pretty area is now overrun with shopping centers and McMansions, with road improvements and schools lagging far behind. The area continues to outperform most of the nation's cities when periodicals compile their Best Places to Live reports, but I think it may have jumped the shark. Le sigh.

Of course, for all of the complaints that residents make when they realize that there are now six grocery stores within a mile of their home, few of them show up to the town meetings that decide the fate of the surrounding land. I attended a meeting of Cary's Planning and Zoning Board last month to support a group of citizens who are fighting against rampant growth at one of the town's most important intersections. When it came time for the public hearing, dozens of residents waited in line to voice their opposition to a developer's plan to add a significant number of commercial buildings to his plot. I would estimate there were ten opponents for every supporter, and the supporters were all employed by the owner or the developer of the plot. It was clear that, as far as the citizens were concerned, the development was a bust.


One of the last speakers at the meeting was the original owner of the land in question. For those of you who are not from Cary and do not drive through it on your way to work, this land includes a solid seventy acres of farm land and forest. There is a pretty white farm house on the corner of the intersection, and a beautiful red barn. From one side of the property, you can see a fishing pond at the bottom of a hill, about fifty yards from the house. The property is surrounded by white fencing and used to be home to a stable of horses. If I had to guess at the value (from the perspective of someone who will build dozens of residential and retail outfits on it), I'd peg it at ten million dollars[1].

The man who had sold his land and was guiding its fate came to the podium and introduced himself this way:
Hello. My name is Bill Sears. I was born at the corner of High House and Davis, just as my father before me, and my grandfather before him. If half as many people as are here tonight had shown up when the town decided to take my house for the widening of Davis Drive, I would still be living in it today, and we wouldn't be here.
Ouch. At that moment, I knew it was over. He went on to talk about his intentions, and how everything he was doing was legal and approved by the town, but it wasn't necessary. There was one last speaker that night who tried to bring the citizens' message back to life, but it paled in comparison to this man's bitter dismissal of an anti-growth message from the same people whose desire for faster roads had made his house uninhabitable. Final vote? 5 - 1 to go forth and build until you couldn't build anymore.

Where was I going with this? Oh, yes: first, community participation is not a part-time job. Second, I don't think that Pat's plan to use fees to prevent people from moving here will stop developers from building and/or paying the fees as "incentives"; losing a few thousand dollars on a sale isn't a big deal when you've got people chomping at the bit to pay $400,000 for half an acre of beige. Rather than fees, I think this county needs legislation requiring the development of entire schools, hospitals, and roads around the area of construction. Requirements such as the construction of a school could easily send costs soaring if not managed properly, and for some projects, the risk factor will be too high.

If you want to prevent greed from ruining our cities and towns, you have to target the source of the greed, and the source is not people who don't even live here yet. The new residents are easier to pick on, but they are just taking advantage of bad decisions made well before their time.

[1] This estimate is based on the development that has occurred on the other corners of the intersection, all of which is smaller than the proposal under debate.

Labels: ,

Tuesday, July 17, 2007


Sun CEO Jonathan Schwartz has posted a commentary on the behavior of corporate bloggers, and everyone should read it. Plenty of A-List bloggers and technical leaders have posted their thoughts on effective blogging ad nauseam, but Jonathan's post touches on an area that still leaves many corporations confused and afraid: personal responsibility.

I think the matter of personal responsibility is much simpler than most corporations make it out to be: you wouldn't spill corporate secrets in a crowded bar or insult a competitor at a professional conference, so why would you do these things in a blog or forum? Usually when people get themselves into trouble it's because they've decided to say unsavory things over the Internet despite the fact that their online persona is tied heavily to their job; for some reason, these people believe the Internet is still some magical playground that only geeks know about, a place where they can post clammy, mustard-stained rants without ever being held accountable for their words. This may be true if you're using an online persona like joecool997 and writing on your MySpace blog, but the minute you start using your real name and real background data, you should realize that anything you write can and will be associated with you later on. Just like in real life!

When we were about to go live with Zero, a number of IBMers who had not previously engaged any open source communities expressed concern over what would happen if one of them said The Wrong Thing on our forum. Again, this seems to be a pervasive feeling throughout big corporations, but when you apply common sense to the situation, the paranoia starts to cool off. It's not as though our employees were going to turn into misogynistic hatemongers with Tourette's just because we flipped the switch and opened Zero to the public; in fact, their forum posts are exactly the same as if the site was still internal. There are rare exceptions in the case of IBM confidential material, but for the most part it's just a discussion between people, some of whom are IBMers and some of whom are not. The rules of reasonable discussion are the same for both sets of humans.


Using common sense during public discourse is not a unique suggestion. What is unique is Jonathan's last paragraph, where he predicts a shift in the way people refer to bloggers or blogging:
But I'd love it if we one day eliminated the term "blogging" from the web lexicon (and that we stopped pursuing "CEO's who blog."). CEO's who have cell phones aren't "cell-phoners," those who have email accounts aren't "emailers," those who give interviews on television aren't "TV'ers" - they're all leaders using technology to communicate.
This is a fantastic point. Aside from the populist, feel-good nature of his explanation, removing blogging from the lexicon would also free us from one of the ugliest, most cringe-inducing words ever added to the English language. I hate this word, and it bothers me that I've relented and used it in my own posts.

blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog blog

Terrible word. Just terrible.

Labels: , ,

Monday, July 16, 2007


After lots of searching, I have learned that Blogger will not allow me to create a list of labels (categories) if my blog uses one of their classic templates. Since all blogs that are published via FTP are required to use classic templates, I guess I'm out of luck. It seems that other people have run up against this issue before and hacked around it, but I have no interest in such hacks. The public will have to go without.



I just realized that my blog's template does not include a list of post categories (which the Blogger team has decided to call labels); I'm going to fix that today, but whenever I make a template change, Blogger re-publishes all of my posts rather than just updating all of the HTML pages. In other words, those of you reading my Atom feed may receive a bunch of old posts today. Sorry.


Saturday, July 14, 2007


When I first started graduate school, my chosen specialization was generic programming[1], which required me to study and use the C++ Standard Template Library (STL) quite extensively. In fact, I would say that during my later college years, I was an expert user of the STL; I had memorized all of its tricky rules and appreciated its elegance even during its more verbose moments. I really loved using the STL and I didn't care if others found it academic and cumbersome. Plus, once you've invested a lot of time in learning to read the compiler error messages associated with C++ templates, you really want to believe that it's a valuable skill.

Towards the end of my first year, my advisor left for another university and I was faced with a choice: continue studying generic programming or move to a new field. My attention had been drawn to language design and compiler theory for a few months at that point, so I decided to jump ship and start learning more about the coolest topic in computer science. But despite my journeys into the deep, dark corners of compiler theory and development, the fact was that the Java community's tools for building compilers were far better than those in C++ Land, and my days of hardcore C++ hacking were about to come to an end.

It was tough to let go of C++ and parametric polymorphism, but with so many new technologies to learn, they soon faded from my brain. Only occasionally would they leak back in, like when a piece of Java code was snatched from the jaws of elegance because of some perceived deficiency in the Java grammar[2]. Eventually, people whom I had taught at RPI - people who looked up to me and sought my advice on matters of C++ template usage - started to ask me questions to which I could not remember the answers. And not only had I forgotten the answers, but I realized that this failure had very little impact on my current job or any foreseeable one. I had moved on.


This past May I joined the IBM team that is working on Project Zero, a development platform that is centered around REST, AJAX, and server-side scripting languages. Project Zero has a number of interesting and excellent qualities, none of which I want to talk about today. This post is about parametric polymorphism and how it has gone from an academic indulgence that made me look smarter than my peers to a professional aggravation that sours my mood and makes me want to throw rocks at children.

Project Zero is built on Java 5.0, and our code is full of its phony template hackery. Lots of people are unimpressed by Sun's attempt to bring templates to the JDK, and I do not wish to rehash their points here. I, too, was disappointed to see that the need for bytecode compatibility with previous versions meant Java templates would be limited to front-end trickery, nothing more than a minor convenience that helped programmers identify casting errors; despite this disappointment, I went easy on the Java language designers because I figured that some template support was better than none, and it wasn't hurting anything. I did not realize how disappointed I really was until I had to introduce Java templates into my own code and discovered that they do, in fact, hurt things.

I just cannot believe how much typing one has to do to use Java's templates given that they provide almost zero function. Java's version of parametric polymorphism has all of the verbosity of C++ and its STL, with none of the elegance or utility. It's just more characters for source files that most people already thought were too long. Further, I now realize that even though templates are optional, they are only optional if your whole team agrees to avoid them; once they become part of a core API, they spread to the rest of the code like a virus. Gross.
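To put a number on the verbosity complaint, here's a tiny sketch (the class and variable names are mine, not Project Zero's). Thanks to erasure, a raw pre-5.0 list and a fully parameterized map hold values with the exact same runtime class; every one of those angle brackets evaporates before the bytecode is emitted:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GenericsDemo {
    // Erasure means a raw List and a List<String> share one runtime class;
    // the type parameters are purely a compile-time convenience.
    static boolean sameRuntimeClass() {
        // Pre-5.0 style: short to type, but casts can fail at runtime.
        List rawNames = new ArrayList();
        rawNames.add("grumpy");

        // Java 5.0 style: the cast disappears, but the declarations balloon.
        Map<String, List<String>> labelIndex = new HashMap<String, List<String>>();
        labelIndex.put("java", new ArrayList<String>());
        labelIndex.get("java").add("templates");

        return rawNames.getClass() == labelIndex.get("java").getClass();
    }

    public static void main(String[] args) {
        System.out.println(sameRuntimeClass()); // prints "true"
    }
}
```

That final comparison is the whole scam in one line: the compiler nags you into typing Map&lt;String, List&lt;String&gt;&gt; twice, and the JVM never sees any of it.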

One of my favorite parts of working on Project Zero is that I get to learn and use a number of different languages in my development work. So far I have learned PHP and Groovy, and I have been teaching myself Ruby on my own time. It's been very educational, not to mention inspiring. I feel like the time has come to replace my primary programming language once more. Like last time, the switch will be neither quick nor easy, but I'm already certain it will be for the best.

[1] Despite its name, generic programming is not about unremarkable software; in fact, it is quite remarkable. Many people have made remarks about it. The fact that a significant portion of those remarks are negative should not deter you from learning more about this topic.

[2] No function pointers! Anger!


Friday, July 13, 2007


Per my last post, I have started working on an Ant script to cleanse my blog pages of Blogger's unsightly navigation bar. The most significant obstacle I am up against is the traversal of directory trees using Ant's ftp task. The ftp task allows you to specify a file set that spans sub-directories (**/*.html, etc.), but creating a listing with this pattern will result in a set of file names without relative paths:
07-10-07  01:24PM                18669 grumpy.htm
07-10-07  01:24PM                17914 neurosis.htm
07-11-07  11:24AM                18001 populism.htm
07-10-07  01:24PM                16911 three.htm
07-11-07  11:24AM                11247 atom.xml
07-11-07  11:24AM                17028 default.htm
If I try to list all artifacts in the current directory level and traverse the tree manually, I still find myself parsing a list that doesn't include directories. What bothers me is that creating a listing with an FTP client directly (using dir or ls) does show the sub-directories, so the ftp task must be filtering them out:
07-06-07  01:15PM       <DIR>          2007
07-10-07  01:11PM       <DIR>          archives
07-11-07  11:24AM                      atom.xml
07-11-07  11:24AM                      default.htm
07-06-07  11:56AM       <DIR>          images
07-06-07  01:18PM       <DIR>          labels
I took a look at the ftp task's code, and it's using the FTP client from Apache Commons Net. It looks like most of the magic is tied up in FTPClient.listFiles(), but a quick look at that code made it clear that a quick look would not suffice. Bother. I need to figure this out or I will not be able to re-upload the files after I've modified them.
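As a sketch of what the raw LIST output actually contains (the class and method names here are hypothetical; the real fix still has to happen in or around FTPClient.listFiles()), detecting the sub-directories in an IIS-style listing like the one above comes down to one marker field:

```java
public class FtpListParser {
    // One line of an IIS-style FTP directory listing names a sub-directory
    // exactly when the size column is replaced by the <DIR> marker.
    static boolean isDirectory(String listLine) {
        return listLine.contains("<DIR>");
    }

    // The entry name is the last whitespace-separated field on the line.
    static String entryName(String listLine) {
        String[] fields = listLine.trim().split("\\s+");
        return fields[fields.length - 1];
    }

    public static void main(String[] args) {
        String dir  = "07-06-07  01:15PM       <DIR>          2007";
        String file = "07-11-07  11:24AM                11247 atom.xml";
        System.out.println(isDirectory(dir) + " " + entryName(dir));   // true 2007
        System.out.println(isDirectory(file) + " " + entryName(file)); // false atom.xml
    }
}
```

The information is plainly sitting right there in the server's response, which is what makes the ftp task's filtering so maddening.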


Tuesday, July 10, 2007


The people have spoken, and they want comments, archives, and auto-discovery for the Atom feed. I have heard their words, and I am pleased to present all three features in my latest set of updates. Power to the people.

At the same time, I can't tell you how much it enrages me that Google modifies my blog template to include the blue navigation bar you see at the top of this site. The <iframe/> that houses this bar is not inserted until just before publishing, making it impossible to remove using the Blogger UI. This free advertising trickery is inconsistent with the rest of Google's services and it makes me want to jam a pen into my eye every time I look at this site. My current plan is to write an Ant script that pulls down the files modified by the latest post, removes the offending <iframe/> element, and puts the files back before anyone is the wiser. Power to the people!
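The cleansing step itself should be the easy part. Here's a sketch of what the script will do to each downloaded page (the regex is a guess on my part, since I haven't studied the exact markup Blogger injects, and the class name is my own invention):

```java
import java.util.regex.Pattern;

public class NavbarScrubber {
    // Strips any <iframe>...</iframe> pair (or self-closing <iframe/>)
    // from a page. I match only on the element name, since the navigation
    // bar's attributes could change under me at any time.
    private static final Pattern NAVBAR = Pattern.compile(
            "<iframe[^>]*>.*?</iframe>|<iframe[^>]*/>",
            Pattern.CASE_INSENSITIVE | Pattern.DOTALL);

    static String scrub(String html) {
        return NAVBAR.matcher(html).replaceAll("");
    }

    public static void main(String[] args) {
        String page = "<body><iframe src=\"navbar\"></iframe><p>post</p></body>";
        System.out.println(scrub(page)); // <body><p>post</p></body>
    }
}
```

Once the ftp task's directory-listing problem is solved, wiring this into the download/modify/upload loop should be straightforward.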


Monday, July 9, 2007


Those of you who subscribe to this blog's Atom feed may notice that most of my posts are updated frequently in the hours that follow their initial arrival. Occasionally, the updates will come days or weeks afterward. Editing blog posts is a touchy subject for many bloggers, because undocumented edits give the impression that the author is trying to retract something controversial or shirk responsibility. I would like to take this opportunity to assure you, the reader, that my edits involve none of these shameful behaviors; the fact is, I'm extremely neurotic when it comes to copy editing and layout.

In fact, neurotic is being polite. Reading a blog post with misspellings, grammatical errors, orphaned words, or asymmetrical formatting leaves me in a mental state that borders on OCD. In my head, there is no reason for any of these atrocities to appear in a premeditated missive directed at the general public.

The irony of the situation is that half of these changes - the ones focused on formatting - serve no real purpose for those subscribed to the feed, because they're reading it in their feed reader of choice, not in a browser. In fact, changes made for the sake of Web 1.0 Luddites actually irritate those who have adopted the Web 2.0 technology that I promote so vigorously in my day job. To those who are suffering from my Atom 1.0-compliant OCD, I apologize, but do not expect me to stop my relentless editing. Just be happy that you don't have to share a code base with me and tolerate patch upon patch of Javadoc corrections for your code each week.


Sunday, July 8, 2007


Since the introduction of Zero last weekend, quite a few bloggers have gotten their shorts in a knot over the license that governs the use of Zero software. Given the huge swell of support for open source software in recent years, some disappointment was expected; after all, IBM has contributed to a number of open source projects, and no other community is more open and free-spirited than the one focused on REST and RIA. It seems like the perfect match!

Despite this disappointment, I still find the jaded dismissal of the project by popular geek bloggers to be a bit over-the-top. Most of their dismissals are based on a cursory inspection of the web site and the fairly narrow-minded assumption that, because open source projects have become popular in the last three or four years, thirty years of industry behavior is now irrelevant and no one except Microsoft will ever sell a proprietary software platform again. For a group that is usually excited to see new technologies sprout up in the areas of REST and RIA, there's an awful lot of grumpiness surrounding this arrival. It reminds me of the Grumpy Old Man character that Dana Carvey used to portray on SNL's Weekend Update:
In my day, we didn't use software written by big companies. If you wanted to run a program that belonged to a big company, you just re-wrote it! In K&R C! And then you printed out the code and mailed it to the company employees, and you laughed at them, and said "Look at me, I re-wrote your program in one day and it's eight times faster and cures baldness! You're all worthless programmers!" And then you threw the code away, just to spite them! And if you ever had to run the program again, you just said "Flobble-dee-flee!" and you re-wrote it. And that's the way it was, and we liked it! We loved it!
It's been almost a week since those first rants started to roll in, and so far I have held back on my desire to fire back; at this point, I feel that I can safely ignore them and focus on more positive things surrounding Zero. Regarding future discussions, I welcome debate on the merits of IBM's decision to keep Zero proprietary and its ultimate effect on the success of the project, but I hope that future blog posts will be a little more thorough in their research and commentary. And less grumpy.


Friday, July 6, 2007


Today is the third anniversary of my voluntary servitude with the IBM Corporation. Coincidentally, it is also the day I start my third blog under this domain. The former has had a very negative effect on the latter thus far, not because IBM is anti-blogging, but because I've had a lot of work to do.

Anyway, the team I currently work for has recently gone public with its plans for a RESTful development platform, and is allowing anyone with an Internet connection to view and comment on our code, forums, and processes. All of this has inspired me to give blogging one more go, and today seemed like the perfect day to throw my hat in the ring.

Third time's a charm? Let's hope so.
