Monday, December 17, 2012

Discussing Firearms Insurance

There has been considerable commentary on G+ about my post The Case for Firearms Insurance, and it has been almost entirely constructive discussion. I was surprised the conversation has not been more rancorous, because this topic seems primed to bring out the worst in the internet. Not everyone agreed with me, but nearly everyone gave it serious consideration. This post is a collection of links to the various discussions:

Dave Hill posted his thoughts on his blog, and there are a number of comments.

I found this article on Justin Tapp's stream: Is Gun Liability Insurance the Next Big Thing?, which reports on similar legislation actually having been introduced in Illinois. There are two ongoing discussions:
one on my G+ stream and one in the Respectful Politics community.

Finally, more discussion in the Respectful Politics Community.

I have some additional comments, which will hopefully be appearing as new posts soon.

Oh, and one more thing, I wrote my Senator.

Sunday, December 16, 2012

A Word about "One Good Person with a Gun"

Following on my Firearms Insurance post from yesterday, I'd like to address a common argument in discussions about gun control: the hypothesis that "One Good Person" is all that is needed to put an end to murders and crime, and that therefore more people should carry guns, not fewer.

The argument goes that one good person armed with a gun could put a stop to a tragic mass shooting as it starts, greatly reducing the harm done. The problem I have with this is that it's practically a mathematical impossibility. Without getting directly into the numbers, here is why I think this:

1) One is not enough: If the "One Person" knew in advance where the massacre would be, it would work. In practice we would need thousands, maybe hundreds of thousands, of armed individuals carrying firearms on a regular basis. A significant portion of the population would have to be armed at all times so that one of them might happen to be in the right place, at the right time, often enough, and soon enough, to make a difference.

2) "Good" is not enough: The "Good" in "Good Person" refers to intent, but that person must also be very good at handling a firearm. More than good, they must be nearly perfect. Accidents happen even to trained professional police officers, and the "Good Person" would need to be at least that good, if not considerably better. Consider that thousands of people carrying firearms in public also means thousands of opportunities for accidents to happen. That risk recurs every day, not just on the day some misguided soul decides to "go postal" and take out their anger on innocents. Even a tiny risk of accident, multiplied by thousands and thousands of opportunities, will soon lead to more accidental shootings than the good people could ever prevent. **

** Add to this even a few gun owners that might want to "play Cowboy", and the harm could be far worse.

Put 1) and 2) together and it is pretty easy to see that the "One Good Person" hypothesis just doesn't work. It's a myth. The intention is fine, but the cure is worse than the disease. I could have looked up some statistics and put numbers on this, but then someone would just argue with my numbers. That is also my point - don't believe me, try it yourself: find a source for firearms crime and injury statistics and put the hypothesis to the test. The math involved is fairly simple, and I'm happy to help should that be needed.
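For anyone taking up that invitation, here is the shape of the calculation in Python. Every number below is a placeholder assumption, not a real statistic; the point is only to show the two quantities being compared, so you can plug in figures from whatever source you find.

```python
# Expected accidents caused vs. mass shootings stopped by widespread carry.
# Every number here is a placeholder assumption -- replace with real data.

carriers = 100_000                   # people carrying a firearm daily (assumed)
days_per_year = 365
p_accident_per_carrier_day = 1e-7    # chance of a serious accident per carrier per day (assumed)

mass_shootings_per_year = 20         # incidents per year (assumed)
p_stopped_by_carrier = 0.01          # chance a carrier is present and stops one (assumed)

accidents = carriers * days_per_year * p_accident_per_carrier_day
shootings_stopped = mass_shootings_per_year * p_stopped_by_carrier

print(f"expected accidental shootings per year: {accidents:.2f}")
print(f"expected mass shootings stopped per year: {shootings_stopped:.2f}")
```

Even with made-up inputs, the structure of point 2) is visible: the accident term scales with carriers times days, while the prevention term is capped by the number of incidents.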

Saturday, December 15, 2012

The Case for Firearms Insurance

We require automobile owners to carry vehicle insurance. It's time we did the same for firearms.

The tragic shootings at Sandy Hook Elementary School in Newtown, Connecticut are in the news this weekend. Tuesday it was a shopping mall in Oregon, and before that the salon in Brookfield, WI, just a few miles down the road from me. Before this topic fades into the news cycle again, I'd like to put forward my suggestion of a way to address the problem: Require gun owners to purchase insurance which will compensate victims wrongfully injured by weapons they own.

I don't like it when others drive recklessly, because I know that careless or uninsured drivers make my insurance rates higher. I'm a pretty careful driver, and I don't want my rates to go up, therefore I want other drivers to be careful too. Purchasing and maintaining a car is expensive, but that cost is dwarfed by the liability costs for possible injuries should I be responsible for an accident. Few people could afford to cover this cost themselves, but we have a system of automobile insurance which helps to share the burden of the high cost of injury and accident. This sort of insurance is such a good idea that it has been adopted as public policy in many places.

I suggest that firearms insurance can do the same thing, removing some of the burden and cost from the victims of a crime and placing it back on gun owners. There would be some amount of government regulation needed, but no more than is needed for vehicle insurance, and the private insurance industry would handle the rest. I have been told that "gun control doesn't work", but it seems clear that our system of automobile control works. There are flaws, but on the whole it works very well. Why shouldn't this form of gun control work too?

If this seems like it might be expensive, realize that it is already expensive. We pay a huge cost for firearms injuries, most of which are paid for with public funds (see this: http://www.ncbi.nlm.nih.gov/pubmed/7869455). We can shift that cost from a public tax burden to a private insurance burden. Quoting from the conclusion of the abstract (emphasis added):
"Ninety-six percent of the patients in this report had their costs of care covered by the government, because they had no primary insurance coverage. Primary prevention of firearm injuries, especially those caused by handguns, may be the most effective cost-control measure."
There is even potential here for lower taxes. If firearms injury rates come down, the taxes needed to pay the medical costs come down too. It could also be argued that the private insurance industry would be much more efficient at distributing money to victims than Medicaid and Medicare currently are.
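To make the cost-shifting idea concrete, here is a minimal sketch of how a flat liability premium could be computed. Every figure below is an invented placeholder, not actuarial or injury data; a real insurer would also adjust premiums for individual risk, the way auto insurers do.

```python
# Sketch: shifting an aggregate injury cost onto insurance premiums.
# All numbers are hypothetical placeholders, not real statistics.

gun_owners = 60_000_000        # insured owners (assumed)
injuries_per_year = 70_000     # compensable injuries per year (assumed)
avg_cost_per_injury = 20_000   # average medical/liability cost, dollars (assumed)

total_annual_cost = injuries_per_year * avg_cost_per_injury
flat_premium = total_annual_cost / gun_owners  # before any risk adjustment

print(f"total annual cost: ${total_annual_cost:,}")
print(f"flat premium per owner per year: ${flat_premium:,.2f}")
```

The interesting policy questions live in the risk adjustment: careful owners would pay less than the flat rate and careless ones more, which is exactly the incentive being argued for here.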

Almost done, but I think I need to add a word or two about Second Amendment Rights. Suppose I were to say ...
"I have a right to happiness. If I had more money I could do a lot more fun things, and that would make me happy. Somebody give me $100,000 please, it's my right."
Which is, of course, complete bullshit. No one is going to give me money just because I say it makes me happy. Likewise, I don't care to pay for the burden of irresponsible gun owners. I don't even want to remove anyone's rights. I want recognition that this right bears a heavy price tag, and a more equitable means of sharing that cost. If someone wants or needs a gun, that's OK by me, but along with the Right of gun ownership they should be willing to accept the costs that come with it, and be prepared to pay those costs themselves.

Update:
A follow-on, regarding my objection to the "One Good Person with a Gun" hypothesis.
Addendum: Is Gun Liability Insurance the Next Big Thing?

Sunday, November 4, 2012

The Future of Political Journalism

Nate Silver's predictions on FiveThirtyEight are starting to draw a lot of attention - and a lot of criticism. Almost all of this criticism comes from people interpreting statistical estimates as the blathering of a typical pundit, leading them to irrational claims or to elaborate speculations about why Silver cannot be right. They could not be more wrong, and in a few days I am not going to be shy about saying "I TOLD YOU SO".

(more after the break) 

Thursday, November 1, 2012

More Survey Analysis

I'm repeating my previous post, since it seems relevant to review the data again before the election. I accessed all these sites at about 9:00 PM CST.

Real Clear Politics
Obama 201
Romney 191
Toss Ups 146
Obama by +10 ECV

Intrade Presidential Election 2012
Obama ~67% win
Romney ~33% win

FiveThirtyEight
Obama 303.2 +/-56 ECV
Romney 234.8 -/+56 ECV
~81% Obama wins

HuffPost Pollster

Obama  277 ECV
Romney  206 ECV
Toss Ups 55 ECV
Obama has enough certain electoral votes to win.

Talking Points Memo

Obama 285 ECV
Romney 191 ECV
Toss Ups 62 ECV (but 44 of those lean strongly towards Romney)

Election Analytics 
Obama 296.7
Romney 241.3
~99.4% Obama wins

As before, I have arranged these in roughly increasing order of favor for Obama winning the 2012 election. The last three sites (HuffPost, TPM, EA) are making strong claims that Obama already has the electoral votes to win. 538 is not far behind that claim.
RCP seems to be sitting on the fence, not making strong claims about the toss-up states, and there is nothing wrong with that.

Other notes:
HuffPost Pollster has been added to the results. (but you saw that already.)

FiveThirtyEight: Last time I made the claim that Nate Silver's analysis is as close to neutral as can be found. Today though, I saw this: Nate Silver bets Joe Scarborough $1000 that Obama wins. It is not clear if this was intended as a partisan statement, or simply a good bet. It's not wrong to claim Obama is a good bet.

Should I update again tomorrow?

Tuesday, October 30, 2012

Survey Says ...

After discussing politics with a friend this morning, I was inspired to make the rounds of sites that accumulate political polling results and see what they all are saying. All results accessed on the web late afternoon on October 30th.

Real Clear Politics
Obama 201 +/-61
Romney 191 -/+61
Toss Ups 146
Obama by +10 ECV
(edit: those error bounds probably belong to the 538 analysis, not RCP).

Intrade Presidential Election 2012
Obama 64.6% win
Romney 35.4% win

FiveThirtyEight
Obama 294.6 ECV
Romney 243.3 ECV
~72.9% Obama wins

Talking Points Memo

Obama 265
Romney 206
(Comment: therefore 67 are not certain)

Election Analytics 
Obama 291.4
Romney 246.6
~95% Obama wins

I have arranged these in increasing order of favor for Obama winning the 2012 election. Real Clear Politics and Talking Points Memo are sites I do not regularly follow, so I have no clear opinion of their methods. I suspect these sites have conservative and liberal biases (respectively), but I have no direct evidence for that. Certainly if they are both presenting statistical results in an unbiased manner, they ought to reach about the same conclusion. All I can really say here is that RCP doesn't offer much of a direct election forecast.

Edit: Intrade is another site I don't follow, but it has been getting a lot of media attention too.

I personally favor the methods Nate Silver uses on FiveThirtyEight, as I consider his to be the most technically sophisticated analysis, and it takes economic data into account as well. Silver offers a lot of day-to-day commentary on daily polling and his predictive model. Since the Democrats have led in the elections since the blog started in 2008, many of those predictions have been that Democrats are going to win. Some people interpret this as a liberal bias, but Silver's Senate predictions have been very good, and it is not a bias if he is correct. I have never seen Silver make anything close to an endorsement, so on that basis I think FiveThirtyEight gives the most unbiased political analysis available.

Election Analytics is different from the others. It's a small academic page rather than a news site. Their statistical methods are sound, but they make some strong assumptions, which seems to be why they are able to make such a strong prediction for Obama winning. I haven't looked into just what these assumptions are, so I can't say if they are justified or not.

All this might not make my friend happy, but all the data is saying pretty much the same thing, with varying degrees of certainty.

Update: Added Intrade prediction at the suggestion of +Kevin Clift. Accessed the morning of 10/31/12.
Edit: next time I'll include this site too: http://electoral-vote.com/

Saturday, August 4, 2012

The Creationist 419 Scam

You would think that outrageous claims are so likely to be rejected that the person making the claim would just give up and go away. For an example of this you might check out this Sensuous Curmudgeon post "ICR: Plants Are Not Alive". The Institute for Creation Research claims that because plants do not move and do not have blood, they are not alive, and they justify this based on the Old Testament and some quadruple-backwards-spinning logical somersaults that would make Gabby Douglas gawk. There are plenty of other examples, but I won't belabor the point. As my buddies at The Sensuous Curmudgeon often note, the scammers* have to know they have no scientific standing, but they do it anyway. WHY?

Consider a known scam that everyone can agree is a scam; one that is no farther away than your email SPAM folder. Microsoft scientist Cormac Herley has a paper out:


Edit: Original link seems broken. Try this instead.

... dissecting the mathematics of the Nigerian 419 scam. The Wall Street Journal Online has a less technical summary, see "Why We Should Scam the Scammers".

Here is a brief quote from Herley, with my emphasis added:

"... Far-fetched tales of West African riches strike most as comical. Our analysis suggests that is an advantage to the attacker, not a disadvantage. Since his attack has a low density of victims the Nigerian scammer has an over-riding need to reduce false positives. By sending an email that repels all but the most gullible the scammer gets the most promising marks to self-select, and tilts the true to false positive ratio in his favor."

 Here is Herley again, later on:

"Since gullibility is unobservable, the best strategy is to get those who possess this quality to self-identify. An email with tales of fabulous amounts of money and West African corruption will strike all but the most gullible as bizarre. It will be recognized and ignored by anyone who has been using the Internet long enough to have seen it several times.  [ ... ]  It won’t be pursued by anyone who consults sensible family or friends, or who reads any of the advice banks and money transfer agencies make available. Those who remain are the scammer's ideal targets."
It's brilliant, actually. Finding people susceptible to a scam is hard, but weeding out those least susceptible is as easy as concocting a lame story. The more outrageous the tale, the less likely it is to attract those who can see through it, leaving behind those most likely to be successfully fleeced by the scammer.
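Herley's self-selection effect is easy to see as a base-rate calculation. Here is a small sketch; all the rates below are my own illustrative assumptions, not figures from the paper.

```python
# Herley's self-selection argument as a base-rate calculation.
# The rates below are illustrative assumptions, not figures from the paper.

population = 1_000_000
gullible_fraction = 0.0001        # fraction who are viable victims (assumed)

# A plausible-sounding pitch draws replies from many curious skeptics;
# an outrageous one repels almost everyone except the gullible.
p_reply_gullible = 0.5            # assumed the same for both pitches
p_reply_skeptic_plausible = 0.01
p_reply_skeptic_outrageous = 0.0001

def fraction_of_replies_that_are_viable(p_skeptic):
    gullible_replies = population * gullible_fraction * p_reply_gullible
    skeptic_replies = population * (1 - gullible_fraction) * p_skeptic
    return gullible_replies / (gullible_replies + skeptic_replies)

print(fraction_of_replies_that_are_viable(p_reply_skeptic_plausible))   # plausible pitch
print(fraction_of_replies_that_are_viable(p_reply_skeptic_outrageous))  # outrageous pitch
```

The outrageous pitch yields far fewer replies overall, but a far larger fraction of them come from viable targets, which is exactly the trade the scammer wants.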

There is a shorter, older summary; Abraham Lincoln put it like this: "You can fool some of the people all of the time, and all of the people some of the time, but you can not fool all of the people all of the time."



The scammers know that most people will apply some amount of logic and reason and reject the obviously incorrect, and this is what they want. They want to weed out the majority who will never buy into the scam, and speak to the few they might fool. When the scammer is the ICR and the mark falls for the false dichotomy that religious belief must overrule scientific knowledge, the scam is particularly insidious.

Herley suggests a response to the 419 scams, to counter-SPAM the scammers with automated responses, false positives that waste time and money and take the profit out of the scam. This would be harder to apply to Creationist scammers, requiring a large number of people (or automated facsimiles) to "Go Poe" and troll the Creationists where they live. That doesn't sound like fun, and it doesn't strike me as ethical. Still, the suggestion has been made before.

* I'd like to make a distinction between those who hold to Creationist belief and those making claims in support of Creation science. The former may hold a sincere belief, but the latter are deliberately lying in an attempt to undermine science and science education.

Sunday, July 1, 2012

Wrong and Wronger

A month ago I wrote about telling off a Nobel laureate (see Wrong). That is a conceit on my part, because I doubt Dr. Josephson will ever read it. Someone read it though, because it sparked a flurry of response from some Answers in Genesis Creationists (yes, I know that is redundant). One in particular, identifying himself only as Bonesiii Dromer, was moved to a truly spectacular display of sanctimonious bombast. After some thought and considerable delay (I was exceptionally busy) I finally posted a response to the nonsense.

--- More after the break ---

Blog changes

I just made a complete overhaul of my blog template, because I found that quoted text just wasn't legible at all, among other formatting annoyances. No doubt this will cause chaos and destruction throughout the blogosphere. Please let me know if you spot anything in need of fixing.

Friday, June 29, 2012

Tomatoes!

How can I not link to this Discovery Blog Post?


Enjoy.


Hey. Where did she come from?

Tuesday, June 12, 2012

Sing Along Time

Sing along to the tune of "Daisy" (Bicycle Built for Two):


Face hug space bug,
Implant me with Xenomorph
I'm just fine now,
chest burst - sent the monster forth.

It might be a huge disaster
split up, we'll find it faster
but you'll look best, 
cocooned in the nest
of an alien Xenomorph.


Whatever it was that possessed me to write this, it was gone when I woke up.


Tuesday, May 29, 2012

Wrong.

image source
Today I got to do something that just doesn't happen every day.

I told a Nobel laureate
they are wrong

It's really not terribly interesting, so here is the short version (after the break):

Click for more.

Thursday, May 24, 2012

To the Rescue!

I came home from work a bit late tonight, and on my drive noticed the wind was pushing my car to the side a bit. When I got home, I discovered that wasn't all the wind was knocking about.

There is a robin's nest on the corner of my house. This nesting pair likes to build on top of the rain gutter downspouts. This one is particularly attractive to them because the honeysuckle is starting to grow up onto the downspout.
The honeysuckle is incorporated into the nest, but it is not well attached to the downspout. When the wind blows, the honeysuckle tends to pull the nest to the side and tilt it sideways. The robins made two or three attempts to get a stable nest; each time one pulled over, they simply built a new nest on top. They finally got one to stay upright long enough to raise a chick, but the nest was tilted at a 45 degree angle by the time it fledged.
Then the robins built yet another nest, and I figured their problem was solved. That is, until I came home and saw that the wind had caused the honeysuckle to pull the nest entirely off the downspout.


The nest was now suspended by the honeysuckle at waist height and a precarious angle. I could see this spelled disaster for the robins; they would have to abandon the nest. Mother Robin hadn't given up yet, though; she was nearby, fussing at me for being too close to her nest.


THIS, I thought, was a job for Duct Tape!


Alas, I couldn't find my duct tape, and I thought it was important to hurry. I grabbed some old wire (Thanks, Dad!) and proceeded to strap the nest back up on the downspout. I cut it loose from the honeysuckle at the same time, so my shoddy repairs ought to hold until a chick can fledge.

In this last picture you can see two of the older nests, tilted nearly 90 degrees towards the camera (well, towards my iPad. I didn't have time to grab a real camera.). The current nest is on top, obscured in this photo, but back on the level.
I packed up my ladder and went in for my supper, and didn't get back outside before dark to check whether Mother Robin had returned to the nest; I didn't want to disturb her any more for one day. I will update in the morning when I can confirm she has returned.

*** Update: Mother is back, and all appears well.

Wednesday, April 4, 2012

Did some filing at work today

I did some filing at work today.

No, really. My keyboard tray has some wicked-sharp metal sliding brackets, cleverly positioned to extract a bit of flesh from my knuckles should I be the least bit careless. After getting bitten again last night, I finally got fed up and decided to do something about it.

I ask myself, "Why did I wait so long?"

Monday, April 2, 2012

Not Quite Right (but close enough?)

At the HuffingtonPost, meteorologist blogger Paul Douglas explains his position on climate change.




I think it's a well-written explanation of his position. However, I also think it's not complete. The part about the extraordinary number of weather records being a symptom of climate change is true, but it's not just the number of record temperatures that matters, it's the frequency at which new records are set.

Image: Huffington Post
Consider for a moment that you have two temperature records, over the same period, from two locations near each other. If a record temperature is set at one, it's not so surprising that a record might also be set at the other, simply because temperature is a regional phenomenon. In statistical terms, the temperatures at the two locations are correlated. Having many record temperatures at the same time in nearby locations is an indication of an overall higher temperature; they are not independent pieces of evidence that the overall temperature was higher twice.

Next, it's worth thinking about how often a single location should set a record high temperature if temperature only varies randomly. Consider how data is recorded and records are determined. If your sample is the high temperature measured on (for this example) July 4th, then the first year of data (N=1) sets your standard. It's a record by definition, but not a very interesting one, because there is no previous record to compare it to. The second year (N=2) has a 50% chance of a new "highest high temperature". In the third year and after, there is a new record only if the July 4th temperature is higher than in all the previous years.

Now staying with this assumption of random variation, as the record gets longer,  there is a simple formula for the expected probability that the next July 4th temperature will set a new record:

Probability of a new record on the Nth year = 1 / N

Where N > 1. Under this idealized assumption of purely random, independent variation the formula holds for any year (real data, with rounding and ties, will deviate a bit). For example, in the 100th year of collecting data, there is a 1% chance of a new July 4th record. This is sometimes called a "100-year event", because if we consider only the most recent 100 years of data for a particular event, we can expect a new 100-year high about once in 100 years.
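The 1/N rule is easy to check by simulation. Here is a quick sketch under the same random-variation assumption; the standard normal distribution for yearly temperatures is an arbitrary choice, since only the ranking of the years matters.

```python
# Monte Carlo check of the 1/N rule: with purely random year-to-year
# variation, year N sets a new record high with probability 1/N.
import random

def record_frequency(n_years, trials=100_000):
    """Fraction of trials in which the last of n_years is the record high."""
    hits = 0
    for _ in range(trials):
        temps = [random.gauss(0.0, 1.0) for _ in range(n_years)]
        if temps[-1] == max(temps):  # final year beats every previous year
            hits += 1
    return hits / trials

print(record_frequency(2))    # near 1/2
print(record_frequency(100))  # near 1/100
```

Any continuous distribution gives the same answer; all that matters is that each year is equally likely to be the largest of the N values.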

And finally we have the missing piece from Mr. Douglas's explanation. We shouldn't be surprised by lots of new record temperatures, but we should be surprised when long-standing temperature records are broken with much greater frequency than we expect. We might also be surprised if we see many more record highs than record lows, because if everything else stays the same, these should occur with roughly equal frequency.

I can't fault Paul Douglas for leaving this out of his article. I thought it was going to be a simple concept to explain, but it took me four paragraphs to get through it. If you read this far, I hope it makes a little more sense than it did before, and that it's clearer why scientists find this sort of data convincing evidence of climate change. Meteorologists and climate scientists (and occasionally statisticians) have more sophisticated ways of looking at this sort of data, which take advantage of the correlations and statistical dependencies to form a bigger picture of how the climate is changing.

Wednesday, February 8, 2012

Suddenly, Statistics in the Recall is in the News

I don't know that I can take credit for this, but this morning Phil Scarr at Blogging Blue points me to an article at the Milwaukee Journal Sentinel:
Analysis: Invalid signatures likely not enough to halt Walker recall

This seems to be just what I suggested in my previous post, two days ago. Whether or not I had the idea first, I applaud the effort.

[Edit: Fixed a link. I originally linked to a different but related post by Phil.]

Sunday, February 5, 2012

The Statistics of Verifying Recall Signatures

There is a huge political battle raging in Wisconsin, but you probably know this already. The drive to recall Governor Scott Walker has gotten plenty of media attention. Some 540,000 signatures are needed, and the Democrats turned in well over one million signatures to be verified. Walker supporters are hard at work trying to identify false signatures, to get as many petitions thrown out as they can. That means about half of all signatures could be bad and the Democrats would still have enough to force a recall election. Given this rather daunting situation, how hard should Walker's supporters try? Do they really need to check 500,000 signatures? A million? How many is enough to be confident of a reliable result?

Regardless of your opinion about Scott Walker and the recall, some simple statistical sampling can help answer the question, and it requires checking far fewer than 500,000 signatures. I'm going to assume that each petition form contains 20 signatures, making 50,000 petition forms in all for one million signatures. Essential to this process is a simple random sample**, where we select a sample of petitions so that each form has an equal chance of being selected. There are fancier schemes, but this is the easiest way to get an unbiased sample, and the easiest for me to explain.

Start with a generous assumption that half of all signatures are fake, and that this count varies with a standard deviation of 2.6 bad signatures per petition, meaning most petitions have between 5 and 15 bad signatures (also generous). For a sample of n petitions containing a total of x bad signatures, the estimated fraction bad is x/(20 * n) [x divided by (20 times n)], with a standard error of 2.6/sqrt(n) bad signatures per petition [2.6 divided by the square root of n], which we divide by 20 to express as a fraction. Based on the mathematical law of statistical averages from a random sample, we can say that the actual fraction of bad signatures is close to x/(20 * n), where "close" means within about 2 standard errors in 95% of all samples performed this way. For a sample of n = 1000 petitions, our assumptions and statistics say we should observe 50% plus or minus about 0.8%, or between 49.2% and 50.8% bad signatures - IF the assumptions are correct. We can say there is a confidence level of about 95% (19 times out of 20) that an interval generated this way will capture the true rate of false signatures.


Now the good news: if the actual percentage is more or less than 50%, the standard error will be a little smaller either way, meaning the estimate will be a little more accurate. More good news: this setting is what statisticians call a "finite population sample", which makes the estimate a little more accurate still, because the sample is drawn from a finite pool of 50,000 petitions rather than an unlimited population.


Long story short, if you want to verify recall petitions, take a sample of about 1000 petitions, check them carefully, and calculate the percentage of bad signatures. If that percentage is less than 45% or so, then it is time to stop counting and start campaigning.


** In practice random samples are not always "simple", but this is what statisticians call it.
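For anyone who wants to see the scheme in action, here is a small simulation under the same assumptions as above: 50,000 petitions of 20 signatures each, a true bad-signature rate of 50%, and a simple random sample of 1,000 petitions.

```python
# Simulated verification of recall petitions by simple random sampling.
# Assumptions follow the post: 50,000 petitions x 20 signatures, 50% bad.
import random

TRUE_P = 0.50            # true fraction of bad signatures (assumed)
PETITIONS = 50_000
SIGS_PER_PETITION = 20
SAMPLE = 1_000

# Bad-signature count per petition: Binomial(20, 0.5)
population = [sum(random.random() < TRUE_P for _ in range(SIGS_PER_PETITION))
              for _ in range(PETITIONS)]

sample = random.sample(population, SAMPLE)      # simple random sample of forms
x = sum(sample)                                 # total bad signatures observed
estimate = x / (SIGS_PER_PETITION * SAMPLE)     # x / (20 * n)
stderr = 2.6 / SAMPLE**0.5 / SIGS_PER_PETITION  # 2.6/sqrt(n), as a fraction

print(f"estimated bad fraction: {estimate:.3f} +/- {2 * stderr:.3f}")
```

Run it a few times: the estimate lands within two standard errors of 50% almost every time, even though only 2% of the petitions were checked.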