We have been doing some work on Wikipedia lately, and one of our favourite things to do is add templates to articles to make a point (just kidding, we really mean well). At the same time, proper citation formatting seems to be a constant struggle with a multitude of proposed solutions, some of which are very complicated to use. Our interest in these two aspects of editing led us to read more of the technical references on Wikimedia (Wikipedia’s software provider), where we began to see disturbing (to experts in programming language theory, such as ourselves) template extensions such as control structures.
This led us to hypothesize that Wikimedia’s template language had become Turing complete (the technical jargon for a full-powered programming language). We started digging and eventually were rewarded with recursive template substitution, which appears, at least at first glance, to be sufficient to implement the lambda calculus, and thereby serve as a Turing-complete functional language. Hence, Wikimedia proves the Strong interpretation of Greenspun’s Tenth Rule: any sufficiently advanced system will contain a functional programming language. (Which, by the way, it appears I’ll have to Wikialize once I’m done with this post…)
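To make the claim concrete, here is a minimal sketch in ordinary Haskell of the kind of computation recursive substitution buys you: Church-encoded booleans and numerals, where “applying a template to its arguments” is just function application. We leave the actual wikitext encoding as an exercise; the code below only illustrates what “sufficient to implement the lambda calculus” would mean.

```haskell
-- Church booleans: a "template" that selects one of its two arguments.
true, false :: a -> a -> a
true  t _ = t
false _ f = f

-- "If" is just substitution of the two branches into the boolean.
churchIf :: (a -> a -> a) -> a -> a -> a
churchIf b t f = b t f

-- Church numerals: the numeral n means "expand this template n times".
type Church a = (a -> a) -> a -> a

zero :: Church a
zero _ z = z

suc :: Church a -> Church a
suc n s z = s (n s z)

plus :: Church a -> Church a -> Church a
plus m n s z = m s (n s z)

toInt :: Church Int -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = do
  putStrLn (churchIf true "then branch" "else branch")  -- then branch
  print (toInt (plus (suc zero) (suc (suc zero))))      -- 3
```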
We may prove this result sometime when we have absolutely nothing better to do. In the meanwhile, we muse: what was so bad about HTML that it needed replacing with Wikitext, and why does everyone think it’s so easy to use?!
“In the meanwhile, we muse: what was so bad about HTML that it needed replacing with Wikitext, and why does everyone think it’s so easy to use?!”
That’s always been my question; every wiki I use, I have to learn their obscure, arbitrary HTML-replacement syntax, and ask myself, “I already know HTML. Why do I have to learn a meta-language which is inevitably converted *back* to HTML?”
Your first wiki page is easy: You discover that you can create a bulleted list by simply prepending ‘*’ to each row. You think, ‘this is so much better than HTML!’.
But by your 20th page, you’re trying to create a table with right-justified headers, blue borders, and an image-based background, which requires using arcane, poorly documented, and rarely used syntax, and you think, ‘If this page were just written in raw HTML, I could have finished this table in half the time.’
I’ve felt exactly what Frank is talking about more than once, especially when I’m trying to remember exactly what the wiki-syntax for This Particular Wiki is, the one I’m using right now (there are oh so many of them!).
I’ve been thinking that what might be needed is a general wiki editor. You enter a URL and it sucks down the page, you can edit it in the syntax for whatever type of wiki it happens to be (I’ll throw in an intelli-sense-like feature, why not). Then you resubmit the page in the editor — or at the very least it pops up what you entered wikified for a quick copy/paste. Doesn’t seem like it would be too hard…
I’m partial to wikis that have a tag that lets you enter in raw html, even if it only lets you use a subset of tags (which would be the only safe thing to do anyway).
Our group didn’t pick the wiki that we’re using for documentation, and none of us like it (it shall remain nameless…but we’d all rather it were mediawiki). Which is why we all usually enter in raw html documents, since we all know html already.
This entry led me to hypothesize that the emerging languages of small systems such as Wikipedia and del.icio.us had become Turing complete, and thus full-powered languages, eventually sufficient to be called functional languages. Hence, Wikimedia and del.icio.us prove the Strong interpretation of Greenspun’s Tenth Rule: any sufficiently advanced system will contain a functional (programming) language.
http://eirepreneur.blogs.com/eirepreneur/2006/03/how_social_book.html
It seems that intelligence, natural or artificial, is an emergent property of collective communication. Human consciousness itself may be an epiphenomenon of extraordinary processing power.
Wikipedia actually does let you use HTML for most things.
There are two main reasons for wiki text, once you set aside the spurious “easier”. One, HTML bears no visual resemblance to its result; wiki text always aims for that resemblance. Two, HTML lets you waste time and play silly buggers choosing your own look-and-feel, where wiki text almost forces you to play by the site’s layout rules. Without it, Wikipedia would look like MySpace.
As to markup Turing completeness, I’ve wondered if it might not be possible to design from the other direction – start from a programming language, say, Haskell or Scheme, implement combinators for markup that look very wiki-like, and add a layer of syntactic sugar (treat barewords as strings, etc.). It wouldn’t just be Turing complete, it would be a powerful programming language. Layout combinators could replace templates, for example.
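Roughly what that combinator layer might look like, as a sketch rather than an existing library (every name below is invented for illustration):

```haskell
newtype Html = Html { render :: String }

-- Primitive combinators.
text :: String -> Html
text = Html

tag :: String -> [Html] -> Html
tag name children =
  Html ("<" ++ name ++ ">" ++ concatMap render children ++ "</" ++ name ++ ">")

-- A wiki-flavoured layer of sugar over the primitives.
heading, para :: String -> Html
heading = tag "h2" . pure . text
para    = tag "p"  . pure . text

bullets :: [String] -> Html
bullets = tag "ul" . map (tag "li" . pure . text)

main :: IO ()
main = putStrLn . render $ tag "body"
  [ heading "Why wiki markup?"
  , para "Because it reads like the page it produces."
  , bullets ["short", "predictable", "constrained by the site's layout rules"]
  ]
```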
For simple things, a wiki markup is typically much faster: typing <strong>foo</strong> for the twentieth time gets to be hard on the fingers, compared to (in Markdown syntax) **foo**. Also, I find reading well-designed wiki markups easier than reading HTML (see above example).
Now, as one reader commented, when you get down to doing more complex formatting tasks, a wiki should really let you revert back to HTML.
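As a toy illustration of how mechanically that shorthand maps onto HTML, here is a small translator (assumed names only, not any wiki’s actual code) that rewrites **foo** spans into <strong>foo</strong>:

```haskell
import Data.List (isPrefixOf)

-- Rewrite **text** spans as <strong>text</strong>; everything else passes through.
bold :: String -> String
bold s =
  case breakOn "**" s of
    (before, Nothing)   -> before
    (before, Just rest) ->
      case breakOn "**" rest of
        (inner, Just rest') -> before ++ "<strong>" ++ inner ++ "</strong>" ++ bold rest'
        (inner, Nothing)    -> before ++ "**" ++ inner   -- unmatched marker, leave as-is

-- Split a string at the first occurrence of a delimiter, dropping the delimiter.
breakOn :: String -> String -> (String, Maybe String)
breakOn delim = go ""
  where
    go acc [] = (reverse acc, Nothing)
    go acc xs@(c:cs)
      | delim `isPrefixOf` xs = (reverse acc, Just (drop (length delim) xs))
      | otherwise             = go (c : acc) cs

main :: IO ()
main = putStrLn (bold "typing **foo** for the twentieth time")
-- prints: typing <strong>foo</strong> for the twentieth time
```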
Please, this can only be the Weak interpretation of Greenspun’s Tenth.
I’m quite sure that Greenspun actually had a lot of Common Lisp’s advanced features, like macros, reader macros, CLOS, etc., in mind when he defined his rule.
I sympathise deeply with Frank. I’ve strenuously avoided learning new Wiki markups but despite this I realise now that I’ve had to become moderately competent in FOUR: c2.com (of course), Wikipedia, TWiki (still my favourite) and Trac.
Then there are the documentation markups I’ve been faced with: Doxygen and JavaDoc (inferior to Doxygen).
That is not to mention the typesetting markups I’ve used… from JustText to TeX (the king of them all).
“As to markup Turing completeness, I’ve wondered if it might not be possible to design from the other direction – start from a programming language, say, Haskell or Scheme, implement combinators for markup that look very wiki-like, add a layer of syntactic sugar…”
I think you end up with TeX; those who forget the lessons of… yadda yadda.
My main beef with TeX is that it’s not consistent.
On the other hand, I’m not sure whether Haskell or Scheme are expressive enough to make a comfortable definitional interpreter for a domain-specific language.
My current hope is that Haskell becomes rich enough that every document can have its own XML Schema and I throw together a few short scripts to typeset it.
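Something along those lines might look like the sketch below; the Doc type and the LaTeX target are made up purely for illustration, not taken from any real schema:

```haskell
-- The document as an ordinary data type...
data Doc     = Doc     { docTitle :: String, sections   :: [Section] }
data Section = Section { heading  :: String, paragraphs :: [String] }

-- ...and one short script per output format; here, LaTeX.
toLatex :: Doc -> String
toLatex (Doc t secs) = unlines $
  [ "\\documentclass{article}"
  , "\\title{" ++ t ++ "}"
  , "\\begin{document}"
  , "\\maketitle"
  ]
  ++ concatMap section secs
  ++ [ "\\end{document}" ]
  where
    section (Section h ps) = ("\\section{" ++ h ++ "}") : ps

main :: IO ()
main = putStr . toLatex $
  Doc "Markup proliferation"
      [ Section "The problem" ["Every wiki invents its own syntax."]
      , Section "A way out"   ["Treat the document as data and typeset it with a short script."]
      ]
```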