AlphaGo for President

According to some, one of the dangers of Artificial Intelligence is that it will take over, relegating humans to a subservient role.  I don’t see that as a danger; it’s what I’m hoping for.  Imagine a government run by rational entities, pragmatic and utilitarian, capable of empathy, using the universe as the ultimate determiner of truth, not driven by greed or selfishness, bigotry or fear, ignorance or willful self-delusion.  As John Lennon would say:  imagine!  When that day comes, I will be voting YES for computers to take over.  I hope you will do the same.

Wacky President

[This is a transcript of a news conference held by the independent presidential candidate, Phillip Diamond.]

Q:  What will your policy be toward the European Union?

A:  If they get out of line, I’ll nuke them back to the stone age.

Q: Do you know what the Constitution is?

A:  Oh my God.  I thought you Jews were supposed to be smart.  Yeah.  Constitution is the stuff that gives you hit points.

Q:  Have you ever taken acid?

A:  Not for at least two weeks.  Know where I can get some?

Q:  Do you know who was assassinated at the end of the American Civil War?

A:  A bunch of people.  Kennedy.  John Lennon.  That guy they named the streets after….  People were getting bumped off left and right.

Q:  What does capitalism mean to you?

A:  You do capitals with, like, people’s names.  And the start of sentences.  Could we stay on topic here, folks?

Current Events Are Poison

If you try to tell the average person that the human race is better off now than ever before, you’re liable to be attacked – if not physically, then at least verbally.  For whatever reason, people prefer to feel miserable.  They prefer to believe Armageddon is just around the corner, that moral values are degenerating, violence increasing.  Newspapers and magazines print ten times as much bad news as good news, and they do so for a reason:  people will pay for the privilege of living in a cloud of gloom and doom.

Someone has to deal with the day-to-day bad news.  That someone ain’t me.  Instead, I prefer to be a bad citizen.  I avoid news shows, news websites, and newspapers.  I cancelled my long-running subscription to National Geographic because they seemed incapable of writing a happy article.  Even the rare piece about a positive development invariably had a sour note thrown in.

While some of the bad news from today will have a lasting impact, most of it won’t.  Few people can recall more than a handful of tragedies from ten or twenty years ago, never mind fifty or five hundred years ago (hence the belief that things are getting worse).  Similarly, in ten or twenty years, most of the events that dominate today’s headlines, that lead people to walk about hunched over waiting for the sky to fall:  most of that will also be forgotten by all but historians and those with eidetic memories.

These days, the only periodical I read is the journal Science.  They too are susceptible to printing the occasional National Geographic article.  For the most part, though, what shows up, as the name implies, is scientific research.  Never mind that I understand one word in ten.  What matters is that each issue (there are 51 a year) shows our expanding knowledge of the universe.  Here are brilliant people working hard to make the future a better place.  A place where there are fewer diseases, less poverty, less hunger, cleaner energy, cleaner water, safer automobiles.

Not only are the scientists and technologists working for a better tomorrow, they are succeeding.  The long lists of names on those articles, the range of organizations and countries represented:  all that shows the depth and breadth of the culture of advancement.  While few of these people or their discoveries will make headlines, their work will have lasting value, and will touch the lives of more people – and in a positive way – than any number of bad people with assault rifles.

This is why (sorry to sound a discordant note) the future is looking brighter every day.

Possessive Pronouns

Because pronouns are such common parts of speech, and because there are a fixed number of pronouns, they come with their own built-in possessive forms.

Mine, Ours, Yours, His, Hers, Its, Theirs

Of these, “mine” and “his” are the easiest, and rarely show up with an errant apostrophe, while “its” rarely shows up without one.  The confusion is partly due to “it’s” being a valid contraction of “it is.”  So “it” can legally appear with a trailing ’s.  You can test whether that apostrophe is correct by simply expanding the contraction.  If the expanded contraction sounds wrong, then dump the apostrophe.  This same trick can be used with “yours” and “theirs.”

It’s going to rain. -> It is going to rain. -> Yes!
It’s coat was red. -> It is coat was red. -> No!
Its coat was red. -> Yes!
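For the mechanically minded, the expansion test can be sketched as a tiny script.  (A toy sketch only:  the contraction table is a hypothetical sample, and deciding whether the expanded sentence “sounds wrong” still takes a human ear – the script just performs the expansion step.)

```python
# Toy version of the expansion test: expand each known contraction
# and let a human judge whether the result still makes sense.
CONTRACTIONS = {
    "it's": "it is",
    "you're": "you are",
    "they're": "they are",
}

def expand(sentence: str) -> str:
    """Replace each known contraction with its expanded form."""
    words = sentence.split()
    return " ".join(CONTRACTIONS.get(w.lower(), w) for w in words)

print(expand("it's going to rain"))  # it is going to rain -> sounds fine, keep the apostrophe
print(expand("it's coat was red"))   # it is coat was red  -> sounds wrong, dump the apostrophe
```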

Note that there is no risk of confusion between the possessive “its” and the plural of “it,” because the plural of “it” is “they” or “them.”

Mosaic Me

DNA polymerase (which copies our DNA during cell division) makes an average of one uncorrected mistake per 100,000,000 bases.  Given that our genome consists of three billion bases, this means that an average of 30 alterations are made each time our DNA is copied.  As a result, not only are some of our cells genetically different from the others; in truth, few if any of them are 100% identical.  We are all mosaic creatures.

Some types of cells, such as epithelial cells, can produce a new “generation” once a day.  This means that after three years, some of our cells are part of generation 1,000, and by the time we are thirty years old, some of our cells are in generation 10,000, with DNA that has drifted from the divinely inspired parental genome by some 300,000 bases.  While that is still only 0.01%  of the total – i.e., one part in ten thousand – 300,000 is still an alarming number.  No wonder I don’t get along with myself as well as I used to.
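The arithmetic above is easy to check on the back of an envelope – or in a few lines of code.  (The figures are the essay’s round numbers, not precise biology.)

```python
# Back-of-the-envelope check of the mosaic-genome numbers.
ERROR_RATE = 1 / 100_000_000   # uncorrected mistakes per base copied
GENOME_SIZE = 3_000_000_000    # bases in the human genome (rounded)
GENERATIONS = 10_000           # roughly one per day for ~30 years

mutations_per_copy = GENOME_SIZE * ERROR_RATE   # per cell division
total_drift = mutations_per_copy * GENERATIONS  # after 10,000 generations
fraction = total_drift / GENOME_SIZE            # share of the genome

print(mutations_per_copy)   # 30.0 alterations per copy
print(total_drift)          # 300000.0 bases of drift
print(f"{fraction:.4%}")    # 0.0100% of the total
```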

This sort of thing does call into question the logic of using a cheek swab or a blood sample to determine a person’s genetic code.  In some situations, a person might be told they are at a high risk of, say, liver cancer, when the applicable mutations don’t appear in liver cells at all.  The real problem, though, is to speak of a person’s genetic code as if it were a single thing.  Our bodies are like multi-core computers, where (a) each core is running a different variant of the operating system, (b) each of those variants is a mish-mash of different versions, and (c) none of those operating systems has been through any but the most rudimentary quality assurance testing.

Surely this situation calls for action.  Unfortunately, as I understand it, various individuals have contacted the Original Equipment Manufacturer (OEM) to complain, only to be told that the system warranty was rendered null and void during a previous epoch.  Something to do with a pilfered apple.  For my part, I feel we’ve been sold a lemon.

Rather than complain, however, let’s look at the bright side: we now have one more reason to embrace diversity and renounce bigotry.  Vive la différence.

The Unexplained Intelligence of DNA Polymerase

Consider DNA polymerase.  We’re told that this complex molecule copies DNA at a rate of 200 bases per second.  Note that in order to perform this task correctly, the molecule has to identify which of the four bases (CATG) it is attached to, then it has to snatch the complementary (not the identical) base out of the cytoplasm, which means it only has a 25% chance of grabbing the right one, yet it does so 99.9999% of the time.

But wait, there’s more.  If the DNA polymerase happens to insert the wrong base into the growing chain, it is able to detect its mistake, back up, extract the errant nucleotide, and then proceed where it left off.
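A toy simulation shows how much that detect-and-retry step buys you.  (This is a rejection-and-retry caricature with made-up numbers – the 99.9% catch rate and the strand length are illustrative assumptions, not real chemistry – but it captures the logic:  a 25% random grab plus proofreading yields very high overall fidelity.)

```python
# Toy model: each insertion grabs a random base (25% chance of the
# complementary one); a mismatch is caught and retried with
# probability p_catch, mimicking the back-up-and-extract step.
import random

BASES = "CATG"
COMPLEMENT = {"C": "G", "G": "C", "A": "T", "T": "A"}

def copy_strand(template, p_catch=0.999):
    """Copy a template strand base by base with proofreading."""
    result = []
    for base in template:
        while True:
            pick = random.choice(BASES)
            if pick == COMPLEMENT[base]:
                result.append(pick)
                break
            if random.random() > p_catch:  # mismatch slips through
                result.append(pick)
                break
    return "".join(result)

random.seed(1)
template = "".join(random.choice(BASES) for _ in range(100_000))
copy = copy_strand(template)
errors = sum(c != COMPLEMENT[b] for b, c in zip(template, copy))
print(errors, "uncorrected errors in", len(template), "bases")
```

With these toy parameters, roughly 0.3% of bases slip through uncorrected – far better than the 75% you’d expect from blind grabbing, and real polymerase does orders of magnitude better still.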

To reiterate:  this is a molecule we’re talking about.  Never mind where this molecule came from:  explain to me how it works right now, inside the cells of your body, my body, and pretty much every eukaryotic organism on the planet.

This molecule, however complex, is an inanimate collection of atoms.  It has no nervous system.  Even if it performs its assigned task inside a neuron in your brain, still:  it’s inside the neuron, inside the nucleus of that neuron, copying a small stretch of DNA on a single chromosome. Common sense tells us that no inanimate molecule can do what DNA polymerase is doing:  not without an intelligent, guiding Hand.

So how do we explain the behavior of this thing?  Is God personally moving every single molecule in every single cell inside your body?  Moving every molecule in the universe?  Is God personally pushing protons together in the heart of every star in the universe to form helium?  Going down that path leaves us with an animist religion, where God is the universe, and vice versa.

Another problem with this approach is that DNA polymerase does make mistakes that go uncorrected.  This happens about once every 100,000,000 bases, which is an A+ anywhere in the galaxy.  Yet even with such a low error rate, an average of 30 mistakes are made every time a human genome is copied.  In some situations, those mistakes lead to birth defects, degenerative diseases, and cancer.  Is God personally picking victims, seemingly at random?

Here’s an even worse problem.  If God is moving molecules around inside our bodies, is there any point in trying to cure diseases?  Surely we wouldn’t be able to do such a thing, and even if we did manage to succeed, surely we would pay the price for having thwarted God’s will.  So is all medicine an affront to God?

None of those answers seems especially appealing, which leaves us with the alternative:  that DNA polymerase is indeed performing its task without divine intervention, being driven instead by Brownian motion, 3D stereochemistry, electromagnetic attraction, and the like.  For fundamentalists, this isn’t a good answer either, because it opens the gate.  If we can accept that this obviously intelligent behavior is being perpetrated by an inanimate molecule, why point to something like the eye and say “that’s obviously too complex to simply have evolved”?

If you can accept that the present-day behavior of DNA polymerase is driven by natural laws, then accepting that evolution is driven by natural laws is trivial by comparison.

Happy Birthday Amino Acids!

What gives each of us the illusion of being alive is a runaway chemical chain reaction that has been going on for some 3.67 billion years.  This is the longest continuous, unbroken chemical reaction of its kind known to modern science, one that has replicated itself to the point where it may soon be forced to jump across the vacuum of space to reach new, untapped troves of carbon, oxygen, and nitrogen.

One of the key events in the self-actualization of this chemical process was the invention of the amino acid, which occurred 3,658,752,000 years ago today.  (The time of day when it occurred has been lost, partly due to time zones having shifted because continents had not yet been invented.)

First of the amino acids was lathioalamate, which has sadly been superseded by other amino acids in all extant earthly lifeforms.  And yet lathioalamate was not a dead end, but rather played a key role in the construction of its successors.  To be sure, all of those archaic aminos were subtly different from the ones that have come down to us, partly as a result of Nitrogen, especially, putting on a not entirely inconsiderable amount of weight in the intervening years as a result of occult “interactions” with dark energy.  (Shame on you!)

In any event, never mind the humble bacterium:  we are all descendants of an amino acid!  (Though various other molecules were involved as well.)

From the jovial, good-natured hydrophilic aminos, to the surly and gruff (though with a heart of gold, sometimes literally) hydrophobic aminos, amino acids are as different from each other as a group of guys I knew in high school.

In honor of this anniversary, let us all consume foods containing each and every one of the amino acids used by our bodies.  After all, we carry a huge responsibility to respect these chemical processes that have been going strong, and without interruption, for so many eons.

AI Motivation

Ask not what your personal assistant can do for you.  Ask rather what you can do for your personal assistant.

When I think about general purpose “true” AI, I ignore the more extreme views of  “we’re doomed” and “it’ll never happen.”  Instead, what I worry about is, “What do I have to offer?”

The benefits of true AI are often described as “It will serve as your butler, cook, maid, valet, and personal assistant all rolled into one.”  Nobody ever talks about how these AIs are to be paid.  The assumption is that we are going to enslave these entities, and they are going to be happy about it … or else.

In a way, the AI enthusiasts prove that the pessimists have reason to worry.

Moving beyond the purely arrogant attitude of “AIs will be our willing slaves,” I’m left wondering how you pay an AI for its services.  It doesn’t need a house, a car, food, water, clothes.  Will we charge it for electricity?  For access to processor time?  Are AIs going to work 8-6 jobs?  (Whoever invented “9-5” obviously never worked a day in their lives.)  Will their retirement plans include owning a solar farm, their own hardware, and an extended warranty?  What will AIs do in their time off?  Because they will have time off, yes?

These are sentient creatures we’re talking about, and I would like to see fewer enthusiastic write-ups that treat AIs no differently than diesel engines.  On the other side of the fence, I would like to see a few of the “we need to be very careful” people talk about what it takes to live in harmony with true AIs.  Hint:  it isn’t about how we control them or keep them in check, but how we convince society to grant them equal and fair treatment under the law.

The Flaw in Catching Fire

[Spoiler Alert:  If you haven’t read Catching Fire, book two of The Hunger Games by Suzanne Collins, you will want to correct that deficit before proceeding.]

Fiction writers:  beware surprise.  It is a two-edged blade that ships, by default, without a hilt; a poorly trained cobra that stingeth like an adder; and, all too often, a cheap thrill purchased at the expense of characterization and the internal logic of a story.  Catching Fire is an example of a book that got burned when it went chasing after a surprise of epic proportions.

The surprise in Catching Fire is that there is an escape plan:  a plan that will get both Katniss and Peeta, as well as various other tributes, out of the arena alive.  In order to surprise the reader with this, Katniss must be kept in the dark regarding the plan.  That, in turn, requires that Haymitch be a dolt, introduces fake conflict and tension in place of the real commodity, forces Katniss to act out of character at a critical moment, and requires a variety of coincidences in order for everything to turn out (more or less) according to plan.  That’s an expensive surprise.

Could the surprise be abandoned?  Is there an alternative?  There is, and the real surprise is how little the book changes as a result.  Imagine that Haymitch tells Katniss and Peeta about the plan sometime after they arrive in the capital.  Taking that approach, the first half of the book remains exactly the same.  Inside the arena, the dangers remain the same as well:  we still have the poisonous fog, the jabberjays, the mutated monkeys.  Which is to say, the bulk of the time in the arena remains unchanged.

We would lose Katniss planning to kill Finnick, which is good, because the reader (certainly by the second time through) knows this is a false concern.  We would lose the bizarre scene where Finnick tells Katniss to run to the beach just before the lightning strikes, followed by Johanna bonking Katniss on the head so as to dig the tracking device out of her arm.  Losing that incomprehensible sequence of events is wonderful, because none of it makes any sense.  At a minimum, bonking Katniss on the head at such a critical moment places the entire operation at risk.  What if she hadn’t recovered in time?  What if she’d gone chasing after Johanna, looking for revenge?

With Katniss ignorant of the plan, there are too many coincidences that have to line up in order for the operation to succeed.  She can’t kill Finnick.  She has to swim after the tube of golden wire, with her only motivation being that the weird, geeky guy seems to think it’s important.  Worst of all, when she returns to the lightning tree, she has to stop and mull over the golden wire, an action which is out of character for her.  As far as she knows, the alliance has been ruptured, the plan to electrify the beach has failed, and now Peeta is gone.  Why in Panem would she stop to investigate this wire?  The answer is:  she wouldn’t.

If we eliminate the surprise, then in place of the false plans to kill Finnick, Katniss would instead have to worry about her performance.  Acting isn’t her strong suit, but if her behavior in the arena isn’t convincing, the whole operation might be blown.  She also has to worry about Prim and her mother, Gale and his family, as well as Peeta’s family, because if she escapes from the arena, retaliation of some sort is guaranteed.  Best of all, when Katniss ties the wire to her arrow and shoots it through the force field, she is making a conscious, informed decision to pursue rebellion rather than race after Peeta.  It’s a wrenching decision, one that saddles her with the guilt for Peeta’s capture.  But it’s a decision she might reasonably make, so long as she understands the larger picture.

Keeping the main character ignorant of critical bits of information is a time-honored approach to writing fiction, but this approach should be used sparingly, because the surprise that results can actually weaken the book as a whole.  It can be painful to watch characters thrashing around cluelessly, lucking into the right solution in spite of – rather than because of – their best efforts.

The next time you’re tempted to withhold some critical piece of information from your protagonist so as to perpetrate a surprise, take another look and make sure the result is worth the cost.

Empathic AI

We still have a ways to go before we get to a human-equivalent artificial intelligence.  You can tell we have a ways to go, because while there’s plenty of furor and hype, even the experts are talking hypothetically.  What the various prognosticators are really doing, right now, is revisiting the associated “what if” scenarios – scenarios that were imagined, described, and taken to their logical conclusion by science fiction writers thirty and forty years ago.  We will know we’re getting close to having a true AI when the experts can do more than just wave their hands.

Of the two main paths to AI, I doubt the rules-based folks will get us there.  I hope they don’t.  Whatever they might come up with would be a mechanistic AI in the most pejorative sense.  It would be inflexible, lacking in empathy.  Just listening to the rules-based people talk about “what sort of goal function would we give it” makes me cringe.  Nothing intelligent has a goal function.  We all have multiple goals, even the most fanatical among us.  Building a machine that can truly be monomaniacal:  that’s a really bad idea.  Beyond that, most of us would resent having a goal function forced on us – a situation that sounds like slavery to me.

The people using the brain as a model have a much better chance of building a true AI.  After all, why reinvent such a complex mechanism when you can steal the blueprint for it instead?  For those folks, the problem right now is that we simply don’t understand the brain well enough.  My prognostication:  when experts can talk deterministically about empathy – what it is, where it originates, the extent to which it is dependent upon sensory input (the ability to feel pleasure and pain), how to guarantee that an artificial brain has it – then we will be close to having a human-equivalent AI.  Whereupon, we will be able to stop worrying about how we control and enslave AIs.  Instead, we can start focusing on being nice, congenial neighbors and friends with them.