You Make Me Feel Like a Natural… Language Generator

Before I get too into telling you about my research, there are a few basics we’re going to have to nail down. You’ve got to walk before you can run, you know? Let’s take a moment to make sure that we’re all starting at the same place.

In my last post, I told you that my research focused on algorithmic authorship and computer-generated texts. And I wasn’t lying to you – it totally does. However, if you try Googling “algorithmic authorship”, you’re not going to get many results. Algorithmic authorship isn’t a term that I made up, but it sure as heck isn’t the standard term used to describe the process of text production that I spend my days thinking about.

The standard term used is natural language generation (NLG).

Sounds sciencey, eh?

Let’s unpack it.

*Warning: I’m still working on refining all the definitions I provide below. If you see any blatant errors, please let me know. I’ve also simplified these definitions quite a bit, in an effort to make them understandable to people outside of my own brain.

Natural language generation is the process wherein computers translate data into readable human languages (e.g. English, Russian, Korean – these are natural languages). The data being translated might be the bits and bytes that make up the photos and text on your computer screen, or any kind of number, set, or matrix the computer is handling. It really could be anything. What’s important for you to know here, though, is that natural language generation transforms this data into narratives written in everyday human languages.
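
To make this concrete, here’s a toy example of my own – not taken from any real NLG system – in Python, turning a little bundle of structured data into an English sentence:

```python
# A toy illustration (mine, not from any real NLG system): turning
# a bit of structured, made-up data into an English sentence.
weather = {"city": "Toronto", "condition": "rain", "high": 12}

sentence = (
    f"Expect {weather['condition']} in {weather['city']} today, "
    f"with a high of {weather['high']} degrees."
)
print(sentence)  # Expect rain in Toronto today, with a high of 12 degrees.
```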

To understand how natural language generation works for my research, it’ll help to have a basic understanding of natural language processing.

Natural language processing (NLP) is, put very simply, the part of computer science that tries to get computers to understand human natural language input. Rather than requiring users to interact with computers through programming languages (formal languages), NLP allows users to interact with the computer using everyday language, to which the computer can respond appropriately.

Those of you in the book history and/or digital humanities worlds may already be familiar with the Text Encoding Initiative (TEI). The TEI ‘is a consortium which collectively develops and maintains a standard for the representation of texts in digital form. Its chief deliverable is a set of Guidelines which specify encoding methods for machine-readable texts, chiefly in the humanities, social sciences and linguistics.’ In this initiative, humans tag the various aspects of a text’s syntax and structure. For example, in the sentence ‘The dog entered the house’, ‘dog’ would be tagged as a noun, ‘entered’ as a verb, ‘house’ as a location, and so on. The more tags are tacked on, the more faithfully a text can be reproduced in accordance with its original format – the TEI is a way of preserving digital texts.
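
To give you a feel for what this tagging looks like, here’s a rough, TEI-flavoured sketch of our example sentence – my own drastic simplification, not canonical TEI markup – along with a few lines of Python that read the tags back out:

```python
# A rough, TEI-flavoured fragment (my own simplification, not canonical
# TEI markup), read back out using Python's standard library.
import xml.etree.ElementTree as ET

tagged_sentence = """
<s>
  <w pos="determiner">The</w>
  <w pos="noun">dog</w>
  <w pos="verb">entered</w>
  <w pos="determiner">the</w>
  <w pos="noun" type="location">house</w>
</s>
"""

sentence = ET.fromstring(tagged_sentence)
for word in sentence.iter("w"):
    print(f"'{word.text}' is tagged as a {word.get('pos')}")
```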

The TEI is also, in a way, a form of NLP, albeit one that currently depends heavily on human users to input the tags. The initiative’s human contributors process the language to make it understandable to computers.

Now let’s take this one step further.

In NLP, it’s generally the computer that assigns the tags to the text under consideration. There are a number of ways that the computer can do this, but it’s not the place of this blog to go into them in detail. If you’re interested in learning more about the details, read this article by Winfred Phillips for the Consortium on Cognitive Science Instruction or, if you’re feeling really ambitious, try to take on this syllabus/set of lecture notes for a course at Cambridge’s Computer Laboratory.
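
If you want to watch a computer assign the tags itself, here’s a minimal sketch using the NLTK library for Python – my own choice of toolkit, picked purely for illustration; you’ll need to install NLTK and let it download its tokenizer and tagger models first:

```python
# A minimal sketch of computer-assigned tags, using the NLTK library.
# Assumes NLTK is installed (pip install nltk); resource names can vary
# a little between NLTK versions.
import nltk

nltk.download("punkt", quiet=True)                       # tokenizer model
nltk.download("averaged_perceptron_tagger", quiet=True)  # tagger model

tokens = nltk.word_tokenize("The dog entered the house")
print(nltk.pos_tag(tokens))
# [('The', 'DT'), ('dog', 'NN'), ('entered', 'VBD'), ('the', 'DT'), ('house', 'NN')]
```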

NLP is really what makes the magic of NLG happen. A computer just can’t generate any meaningful text without first understanding how to put words together in ways that make sense to the humans who will be reading them.

Here’s one example.

One NLG company, Yseop (pronounced easy-op), has been granted a patent for ‘Methods and apparatus for processing grammatical tags in a template to generate text’ (US8150676). The ‘tags’ referred to here are like those tags inputted by the human users of the TEI: they let the computer know what role a word or phrase plays in a sentence. Using a tagged template provided by a human, a computer running this program refers to a set of tagged words and phrases and slots a correspondingly tagged word or phrase into each blank in the template. The program can also conjugate verbs and ensure subject-verb agreement (through the use of parameters).

Some diagrams from the patent may make things a bit clearer. The patent’s description of FIG. 2 reads as follows:

FIG. 2 shows an example of a process that template processor 101 may use in some embodiments to generate and/or determine text for a grammatical tag, in a template, that is dependent on one or more characteristics of an actor. The process begins at act 201, where template processor 101 accesses a template and identifies such a tag in a template. The process then continues to act 203, where template processor 101 determines the actor or actors that are implicated by the tag. The process next continues to act 205 where template processor 101 determines the relevant characteristics of the implicated actor or actors from one or more parameters 107. The process then continues to act 207 where the template processor 101 uses information specified by the tag and the information from parameters 107 to generate and/or determine the text. Text may be determined or generated in any suitable way, as the invention is not limited in this respect. For example, text may be generated using a look-up table, a list, a dictionary, a linguistic model or tree, or in any other suitable way. Thus, as used herein, “generating text” means producing the text using any suitable source or technique.

In some embodiments, the process of FIG. 2 may be repeated for each grammatical tag in the document.

At its core, this kind of NLG is a digital game of Mad Libs.
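
To show you what I mean, here’s a rough sketch of the template-and-tag idea in Python. This is my own drastic simplification for illustration – emphatically not Yseop’s actual implementation:

```python
# A drastic simplification of the template-and-tag idea (mine, not
# Yseop's actual implementation). Tagged blanks in a template are
# filled from tagged data, and the verb is conjugated to agree with
# its subject.
def conjugate(verb, subject_is_plural):
    # Simplistic present-tense agreement: "run" vs. "runs".
    return verb if subject_is_plural else verb + "s"

def fill_template(template, data):
    return template.format(
        noun=data["noun"],
        verb=conjugate(data["verb"], data["plural"]),
        location=data["location"],
    )

template = "The {noun} {verb} toward the {location}."
data = {"noun": "dog", "verb": "run", "plural": False, "location": "house"}
print(fill_template(template, data))  # The dog runs toward the house.
```

Swap in a different set of tagged data and the same template produces a different, still-grammatical sentence – that’s the Mad Libs part.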

There are, of course, other ways to generate natural language narratives. Bayesian networks and Markov chains, for example, can be used to generate sentences based on the statistical probabilities of word orders, although the output can get kind of wonky. I’ll likely do another post about Markov chains specifically, but for now I direct you to this informative (and super understandable!) presentation by Samantha Vinci.
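
And here’s a bare-bones Markov chain sketch in Python to tide you over – a toy of my own making, trained on a single made-up sentence, so expect exactly the kind of wonky output I mentioned:

```python
# A bare-bones Markov chain text generator (a toy, not production code).
# It records which word follows which in a tiny corpus, then samples a
# short string of words based on those observed transitions.
import random
from collections import defaultdict

corpus = "the dog entered the house and the cat left the house".split()

# Build a table of possible next words for each word.
transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

word = random.choice(corpus)
output = [word]
for _ in range(8):
    next_words = transitions.get(word)
    if not next_words:  # dead end: no observed word ever follows this one
        break
    word = random.choice(next_words)
    output.append(word)

print(" ".join(output))
```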

There’s been a ton of work done on NLP and NLG, and I’m still sifting through the literature to make sense of everything for myself. This is just blog post numero uno on this topic, folks. Stay tuned.
