These inscriptions from Guangxi are surely another forgery: 平果感桑石刻字符
The Guangxi office of Xinhua takes them seriously.
They have been published.
It purports to be a publication of a Gelao 仡佬 manuscript book – the Pu Zu Jing 濮祖經 or Classic of the Ancestors of the Pu People – with photographs of the original and a Chinese translation. The manuscript and the script do bear a passing resemblance to similar items produced by other groups in China’s linguistically diverse Southwest. Casual inspection reveals this one to be a transparent attempt at deception, interesting only because of the enthusiastic and credulous reception that it (and a previous discovery of the same kind) received from the Guangming Daily 光明日報. There is no sign that it is intended to be a joke. Here is an English version of the story.
A debunking of another “Gelao manuscript” – the Jiu Tian Da Pu Shi Lu 九天大濮史錄 – appears on this Chinese blog.
The following page appears as p. 165 of the 2013 publication:
This page, like all the others in the manuscript, is “translated” into Chinese in such a way that each “Gelao” graph corresponds to exactly one Chinese character, the same Chinese character each time (except for a few slips – see below). Word order is preserved. The end result makes sense in Chinese. This alone tells us that this is not a translation. It is also puzzling to find not a single mention of an actual Gelao word in the entire publication.
Furthermore, phrases straight out of Chinese literature emerge by this process. The 4-character phrase on the left, for instance, is translated as 協和萬邦. But notice that second Gelao character: elsewhere in the manuscript, it corresponds to 合 in the translation. And sure enough, the translators have rendered this 4-character phrase as 協合萬邦, not 協和萬邦. Clearly, the composers of the manuscript text were led astray by the homophony (in Mandarin!) of 和 and 合.
A related confusion occurs in the immediately preceding 4-character phrase: the phrase translated as 設立和王 is also written with the character elsewhere used for 合, not the one used for 和.
Further on in the text on p. 165, we find the “normal” writing for 和 – the three prongs with circles on the top, as in the phrase 和王宮邑 (image on the right).
Evidently, the manuscript text is in Chinese not in Gelao. The Chinese translation isn’t a translation – it’s the text from which the manuscript was produced.
The “script” is clearly a set of symbols invented so as to be easy to remember by someone who knows the Chinese script. The character corresponding to 宮, for instance, is clearly 宀 over 王. 邑 is immediately recognizable. 濮 (image on left) is just stripped down a little, and 殿 (left) is barely modified at all.
And the content? I haven’t had the patience to go through it in detail, but it looks like Chinese dynastic history rehashed, with an extra role for the 僕人 as heroic ancestors of the Gelao. Plus some stuff about the smelting of silver, and cinnabar, and Laozi, and divination.
“Fake, fake, fake, fake.”
The following is a simple method for building a font for an unusual script, using open source software. It could be used to design fonts for scripts that do not have a standard encoding (pre-Han Chinese scripts) or for distinctive varieties of scripts that do have a standard encoding (graph forms found in Chinese calligraphy). The example used here is the script of a 19th c. Nosu manuscript in the Penn Museum (96-17-2).
The method starts with a digital image of the text that provides the glyph exemplars. This needs to be converted to a black-and-white (i.e. 1-bit) image, in which the glyph exemplars are black and everything else white. The outlines of the black areas of the image can then be automatically traced, to provide the outlines for a digital font. These outlines can be imported into a font-editing program, to be modified as necessary, assigned to an encoding, and exported as a font file.
Since our goal here is illustrative, we will make a font consisting of just six glyphs, the six glyphs that appear in what I presume is a title in the top right-hand corner of this page. We will assign the glyphs to the same code points as lower case “a” through “f”. If text containing these six letters is displayed using the font, the glyphs will appear instead of these letters. (In general, this is not a good idea, but since it allows us to type our font using simple keystrokes, it is useful for illustrative purposes.)
1. Create B&W source image with GIMP.
If we were doing this properly, we might want to compensate for the distortion due to the page not being flat when photographed. The GIMP’s “rotate”, “shear” and “scale” functions would probably be adequate for this. But we shan’t bother since the glyphs look good enough.
Now we need to convert this color image to a 1-bit black-and-white image. Use the GIMP’s “threshold” tool (tools > color tools > threshold on my machine) to separate the blackish ink of the glyphs from the various shades of brownish paper. The histogram in the threshold tool shows two peaks: one corresponds to the darker ink (the smaller left peak) and the other to the lighter brown paper (the larger right peak). By dragging the black slider to an appropriate position between the two peaks, we can make almost all of the paper white and almost all of the ink black. The aim is to preserve the glyph outlines as accurately as possible. Any black mess that comes over from dark patches on the paper can be cleaned up in the next stage.
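The by-eye choice of slider position can also be automated: Otsu’s method picks the threshold that best separates the two peaks of a bimodal histogram. The sketch below is purely illustrative (plain Python, with a synthetic histogram standing in for the real image’s histogram):

```python
def otsu_threshold(hist):
    """Return the threshold maximizing between-class variance for a
    256-bin grayscale histogram (Otsu's method)."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]          # pixels at or below t (the "ink" class)
        if w_bg == 0:
            continue
        w_fg = total - w_bg      # pixels above t (the "paper" class)
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal histogram: ink peak around 30-50, paper peak around 180-220
hist = [0] * 256
for v in range(30, 51):
    hist[v] = 100
for v in range(180, 221):
    hist[v] = 400
print(otsu_threshold(hist))   # a value between the two peaks
```

In practice the GIMP slider does the same job interactively, and lets you see the result as you drag.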
Now we have a color image that only uses two colors (black and white). We need to convert it to a black and white (i.e. 1-bit) image. GIMP does this with Image > Mode > Indexed... > Use black and white (1-bit) palette. Now we can also use the usual GIMP tools to clean up any speckles or other mess that is interfering with the glyph outlines. This should give us something like the image on the left.
Save as a PNG file. We now have an image file that is acceptable as input to Glyphtracer.
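For batch work, the manual GIMP steps (grayscale, threshold, 1-bit conversion) can be scripted. Here is a minimal sketch using the Pillow library; the file names and the threshold value are placeholders, and in practice you would pick a threshold that falls between the two histogram peaks:

```python
from PIL import Image

def to_one_bit(src_path, dst_path, threshold=128):
    """Script the manual GIMP steps: load, grayscale, threshold, 1-bit PNG."""
    img = Image.open(src_path).convert("L")                  # grayscale
    bw = img.point(lambda p: 255 if p >= threshold else 0)   # threshold
    bw.convert("1").save(dst_path)                           # 1-bit image

# Hypothetical usage ("page.jpg" stands in for your scanned page):
# to_one_bit("page.jpg", "page-bw.png", threshold=110)
```

Speckle clean-up is still easier to do by hand in the GIMP afterwards.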
2. Trace glyph outlines with Glyphtracer.
On running Glyphtracer, the first dialog screen allows you to choose the name for the font (anything will do – we’ll call ours nosu), and to select the file path for the input file (browse to the PNG file you saved in the previous step) and the output file (if you use the automatically generated path, it will end up in the same location as the input file).
The next screen should display the input image, with a bounding box around each glyph. The current glyph is indicated by the text in the button bar at the bottom: Glyph 1/26 a (a). If you click on any glyph in the image, its outline will be assigned to the code point for “a”. This also automatically increments the code point to “b”, reflected in the button bar text. Experiment with the three buttons at the lower left: these change the code point to which a clicked glyph will be assigned. As you assign each glyph in the image, it appears greyed out. (I haven’t found any way to unassign a code point.)
Code points available for assignment include the Latin alphabet and its common extensions, numerals 1-10, common symbols, and Cyrillic. However, it is an easy task to modify the Python code in the file gtlib.py to allow assignment to other ranges, such as CJK for Chinese, or the Private Use Areas.
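I won’t reproduce gtlib.py here, but generating an alternative range of code points is a one-liner in Python. The sketch below is purely illustrative (it is not the actual gtlib.py code); it just shows the kind of list one might substitute, using the Private Use Area (U+E000 onward) or the start of the CJK Unified Ideographs block:

```python
# Illustrative only: NOT the actual gtlib.py data structures.
# Six Private Use Area code points, starting at U+E000:
pua_chars = [chr(cp) for cp in range(0xE000, 0xE000 + 6)]

# Six code points from the start of the CJK Unified Ideographs block:
cjk_chars = [chr(cp) for cp in range(0x4E00, 0x4E00 + 6)]

print(pua_chars)
print(cjk_chars)
```

Private Use Area assignments have the advantage of not colliding with any standard characters, at the cost of needing an input method of your own to type them.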
When all glyphs have been assigned to code points, click Generate SFD file. Glyphtracer will automatically trace the outlines of the glyphs, representing them using Bézier curves. If your input file was nosu.png and you accepted the default options, the output file will be nosu.sfd, in the same directory. The SFD file format contains the numerical data representing the glyph outlines, and the mappings to code points. It is readable by the FontForge program, which is the next tool we need to use.
3. Make font with FontForge.
Run FontForge. Open the SFD file saved in the previous step. The glyphs will appear in the appropriate positions.
Double click a glyph to edit its outline. (We won’t actually do any editing for the purposes of this walk-through, but the glyph edit window is shown below.)
Generate the font: File > Generate Fonts.... (I got the “missing points at extrema” and “non-integral coordinates” errors – but these are not lethal, so I saved anyway. Alternatively, fix them in FontForge before saving.) The default is to save as a TrueType font, which will produce a file with a .ttf extension:
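This last step can also be scripted. FontForge exposes a Python module; the following sketch (untested here, and assuming the module is available – on many systems it can only be used by running the script as fontforge -script makefont.py) would fix the two warnings just mentioned and generate the TTF without opening the GUI:

```python
# Sketch only: assumes FontForge's Python bindings are installed.
import fontforge

font = fontforge.open("nosu.sfd")
for glyph in font.glyphs():
    glyph.addExtrema()   # addresses "missing points at extrema"
    glyph.round()        # addresses "non-integral coordinates"
font.generate("nosu.ttf")
```

For a six-glyph demonstration font the GUI route is just as quick, but a script like this pays off if you re-trace and regenerate repeatedly.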
4. Install font, and try it out.
The font file that I produced is here: http://www.cangjie.info/blog/public_files/nosu.ttf
The font can be installed on any platform, including MS Windows, in the usual way. The screen shot below shows a document produced by typing “abcdef” etc. and then switching the font to nosu.ttf.
This wasn’t straightforward. It is now working, but nowhere near “out of the box”.
I was editing an .odt document using LO Writer, saving periodically. While I was away from the machine, the battery ran out. When I switched on again, the .odt file was still there, but contained 0 bytes of data. The entire contents of the document had been lost. I hadn’t set LO to make backups. The copy on Ubuntu One had already synced to the 0 bytes version. Yikes!
Purely by luck, I had booted to my MS Windows partition on the same machine not too long before the power outage. A previous version of the .odt file had synced to the Ubuntu One folder on the Windows partition and was still there to be retrieved. Phew! Of course, had I booted to the Windows partition after the power outage, with an internet connection, that would have synced to the 0 bytes version as well. Horrible.
This was LibreOffice 3.4.4 on Ubuntu 11.10.
The bug is well known, and has been fixed in a more recent version. But the more recent version is not in the Ubuntu repositories yet.
I am now going to do the following:
Nine small ladies with no clothes on, balancing an enormous man with lots of clothes on and his concert grand on their heads… The entire time I was at UCLA I was under the misapprehension that this was supposed to be Arnold Schoenberg, for no better reason than that the UCLA version of the sculpture stands outside Schoenberg Hall. It never seemed very appropriate. Seeing a larger version of the same thing at the NE corner of Central Park, labeled “Duke Ellington”, I thought that this must be an act of self-plagiarism by a sculptor pressed to complete his commissions – knock off another Schoenberg, give him a moustache, call it “Duke Ellington”, and no one need ever know. Weird, but not nine-naked-dwarfettes-holding-a-piano-on-their-heads-weird. But no. I’m now forced to the conclusion that it is Duke Ellington at UCLA too.
Chen Nianfu 陳年福 of the Zhejiang Normal University Center for Chinese Characters and Excavated Texts (浙江師範大學出土文獻與漢字研究中心) has posted online a PDF of his character table for oracle bone inscriptions (OBI), the 殷墟甲骨文字詞總表 (General Table of Characters and Words in the Yinxu Oracle Bone Inscriptions).
This document is a response to an invitation by Suzuki Toshiya and Deborah Anderson to comment on work by the Old Hanzi Group towards an encoding of early Chinese scripts. The comments are based on a review of documents archived on the IRG website (http://appsrv.cse.cuhk.edu.hk/~irg/), and of the data deposited at ftp://ftp.iso10646hk.net/IRG/OldHanzi/.
First draft is online for circulation and comment. (PDF)
This paper reaffirms Kennedy’s proposal that the particle yān 焉 is, historically, the result of a phonological reduction of a high-frequency PP involving the preposition yú 於 and a 3pp. It further shows that this was part of a more general process which affected high-frequency PPs combining several different prepositions (the discussion will be confined to yú 於, yú 于) and several different 3pps (including zhī 之, shì 是 and hé 何). The MC readings for yān 焉 derive from the PP yúshì 於是. The graph yān 焉 arose from a héwén (合文) writing for the PP yúshì 於是.
The central phonological claim is that yān < b/qan 焉 derives from the sequence of yú < b/qa 於 and shì < b/deʔ 是; it has attracted some flak from some early readers of the article. I have suggested that the syllable that might be expected, *a/b/qad, which would be prohibited by phonotactic constraints (no voiced stop codas), might have been repaired to the legal b/qan.
After upgrading to Ubuntu 11.10, my HP LaserJet 1200 refuses to print properly any more. I sometimes get single pages of multi-page documents. I sometimes get “PCL XL error” pages. But usually nothing gets printed at all.
Trying the latest version of HPLIP: the Ubuntu repositories have version 3.11.7, but the HP website has version 3.11.12.
Initially I was back in the same mess as before. However, by repeatedly playing around with deleting the printer connection (System Settings > Printing), adding a new printer connection, plugging the printer in and out of the USB port, and switching it on and off to clear the flashing green light, I managed to get it to work. The problem seems to be that when the printer gets plugged into the USB port, it is recognised automatically, but somehow the connection is incorrectly configured. It now prints just fine from LibreOffice Writer and the Evince PDF viewer.
Unfortunately, I’m not sure whether it was the newer hplip version, or the delete/add new printer connection that fixed it.