Tuesday, July 31, 2007

Pseudorandom Thoughts on Privacy, Security and Trust

Summary of Larry Korba's talk at the IFIPTM conference in Moncton.

Data breaches: we are concerned not just about the breaches, but also data quality. We need to make sure data is accurate.

An organization has clients, who give it data. As more clients are added, and through marketing and other activities, the organization accumulates a great deal of data. This is facilitated by cheap storage, and it leads to risks such as identity theft and fraud.

Contributing factors include expanding networks, growing e-commerce, complex software, greater pressure to reduce time to market, and the ubiquity of computers.

ID thieves are able to gain trust using various methods, including imitation, diversion to criminal sites, and stealing accounts and passwords. ATM fraud is an example. Interestingly, the people who perform the fraud network among themselves to become more effective - it's all very well organized.

What do we need to combat this?
- inexpensive, effective multifactor authentication
- biometrics - something that is privacy aware (an iris, for example, can tell a lot of health info about a person), low cost, harmless, easy to use, low error rates

Privacy without accountability doesn't make any sense at all, but there are many laws and agreements about the services provided. It's complicated. There are so many different systems in place; trying to integrate them is a challenge. Even finding the data can be a challenge - knowing who touched the data, when, and why. And how do you layer compliance information? People may not be authorized to see breach information.

How do you establish privacy and accountability, then? Audits? Automation? Try to use extant text logs, machine learning techniques, knowledge visualization and more. There is a real cacophony of data.
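The idea of mining extant text logs can be sketched very simply. Everything below is a hypothetical illustration - the log lines, user names and threshold are invented, and real audit systems are far more sophisticated - but it shows the shape of the approach: baseline normal access volume, then flag outliers.

```python
from collections import Counter
import statistics

# Hypothetical audit-log lines: "timestamp user action record".
log_lines = [
    "2007-07-31T09:01 alice read r-1001",
    "2007-07-31T09:02 alice read r-1002",
    "2007-07-31T09:03 bob read r-1001",
    "2007-07-31T09:04 carol read r-1003",
    "2007-07-31T09:05 carol read r-1004",
    "2007-07-31T09:06 carol read r-1005",
    "2007-07-31T09:07 carol read r-1006",
    "2007-07-31T09:08 carol read r-1007",
]

# Count record accesses per user.
counts = Counter(line.split()[1] for line in log_lines)

# Flag any user whose volume sits well above the population mean.
mean = statistics.mean(counts.values())
stdev = statistics.pstdev(counts.values())
flagged = [user for user, n in counts.items()
           if stdev and (n - mean) / stdev > 1.0]

print(flagged)  # carol touched far more records than her peers
```

A real system would replace the simple z-score with proper anomaly detection, but even this toy version captures the "who touched the data, when, why" questions from above.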

Commoditization of software: people can go out and buy computers very easily, software too. You can find what you need very quickly. For example, Google's code search.

The line between hardware and software is becoming blurred. You can have systems that simulate what hardware does - you can deal with hardware problems with a software patch. A system like Xen, which is a virtual machine monitor, can run faster than Windows itself. Or you might want to look at Amazon's Elastic Compute Cloud, an online computer emulation you can rent.

A system is only as secure as its weakest point. Example: Richard Feynman noticed construction workers entering a secure facility through a hole in the fence. Feynman also got involved in safe-cracking - he would try to find patterns in the security.

From the attacker's point of view, software is a commodity. Any software you can imagine can be found, legally and otherwise, including hacking and cracking software (eg. the Metasploit framework).

From a security implementor's point of view - stupid defenses only keep out stupid attackers.

Planning for security in the design stage is rarely done. But people should do things like: be explicit about programming language defaults, understand the internal representation of data types, know how memory is used, understand thread and object use, and do rigorous unit testing.
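Two of those points - being explicit rather than trusting language defaults, and testing boundaries rigorously - can be illustrated with a small sketch. The function name, the uint16 range, and the tests here are all invented for illustration:

```python
import struct

def pack_record_count(n: int) -> bytes:
    """Serialize a record count as a big-endian uint16, checking the
    range explicitly instead of trusting library defaults to catch it."""
    if not 0 <= n <= 0xFFFF:
        raise ValueError(f"count {n} out of range for uint16")
    return struct.pack(">H", n)

# Rigorous unit tests pin down the boundary behaviour.
assert pack_record_count(0) == b"\x00\x00"
assert pack_record_count(65535) == b"\xff\xff"
try:
    pack_record_count(65536)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for out-of-range count")
```

Knowing that the value travels as exactly two bytes, in a known byte order, is precisely the kind of internal-representation knowledge the talk says an attacker will have.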

You need to understand how the code is compiled - because that's what the attacker will do. Never assume your system is secure, and never assume there are no bugs - especially if you try to use 'home-brew' crypto.

Also - the users are the weakest link in the chain. You have to think of how the users will circumvent security you put into place. Security is not convenient. People in charge of security become complacent when, for example, they have a powerful firewall.

All good plans include the human element. User involvement in early research and development is vital to this. You need to assess protocols against what the user expects. What they need. What they understand. Passwords, for example, are often stored on post-it notes, or shared with other users.

Some work is being done analyzing the Enron email data set (several hundred thousand emails), including the distribution of how passwords were passed around the organization. The same passwords were given out, and weak passwords were used.
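A sketch of what such a password-distribution analysis might look like. The emails below are made-up stand-ins for the corpus, and the regex is deliberately naive:

```python
import re
from collections import defaultdict

# Made-up emails standing in for the corpus: (sender, recipient, body).
emails = [
    ("alice@corp", "bob@corp", "The server password is hunter2."),
    ("bob@corp", "carol@corp", "Use password: hunter2 for the share."),
    ("dan@corp", "erin@corp", "New password is s3cret!"),
]

# Naive pattern: the word 'password' followed by a token.
PASSWORD_RE = re.compile(r"password\s*(?:is|:)?\s*(\S+)", re.IGNORECASE)

shared_by = defaultdict(set)  # candidate password -> senders who sent it
for sender, _recipient, body in emails:
    for token in PASSWORD_RE.findall(body):
        shared_by[token.rstrip(".!,")].add(sender)

# Passwords circulated by more than one person are the red flags.
reused = {pw: senders for pw, senders in shared_by.items()
          if len(senders) > 1}
print(reused)  # 'hunter2' was passed around by two different people
```

Run over a few hundred thousand messages, the interesting output is exactly the distribution the talk mentions: which passwords circulated, and how widely.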

How to improve systems: combine techniques. Learn from other fields: eg. the game 'Spore' - knowledge display, visual programming, intelligent agent design; also eg. Digg, Kartoo, intelligent computing - an associative memory.

Data-mining techniques related to trust:

The problem of security is making sense of a lot of information all over the place. There are a lot of false negatives. The interesting part is when you combine data from all over. Visualizing complex data - how do you understand a network of hundreds of thousands of records?

It's like security intelligence. You may know you have a problem with a person because of past actions, but it's very difficult to know what's happening now. A lot of the results are based on the expertise of the analyst. The analyst is the filter. We need to be able to learn from the analyst, or have some community-based filtering tools. How an analyst uses tools effectively varies from one analyst to the next.


Are trusted computing platforms ready yet? Can they make DRM more effective, ensure software revenues, and so on? These can also make software applications more secure. You have an effective set of tools for new security techniques. But there are issues with the implementation - there could be backdoors, etc.

(At this point Korba's computer entered stand-by mode and the talk paused... )

The situation is, there is a lot of data all over - you need to determine what data is trustworthy and what is not, rapidly. You need to learn the trust value of individuals, etc.

Attackers never rest. The sophistication of the attacks gets more impressive. 'Kits' are more easily available, allowing anyone to break into computing systems without any real understanding. Attackers are now building temporal delays into attacks, and more.

Some questions: can security be measured? Computer systems are far too complex for this sort of analysis, making it difficult to assess security systems. And they rely on things going on in the operating system. One attacker can make a mockery of a 30 person-year project.

You need to think like an attacker - think evil. You need to keep things simple for the user, but effective. If you ask things of them, you need to think attainable. You need to be comprehensive.

Questions: I raised the question - what is the impact of the company's agenda on security? The company is not always benign, security is not always used for good, and sometimes the 'hackers' are not 'evil' - eg. DVD Jon. Wouldn't security be advanced if the security industry were deliberately neutral on such matters?

Response: I can't see companies being evil like that

(I'm sitting here thinking, he read through thousands of Enron emails, but doesn't see how companies can be evil?)

Follow-up: described a credit card company requiring people to enter their online banking password - shifting risk from themselves to customers, and increasing the risk of phishing.

Follow-up: we are working on tools to enhance privacy, but we are basically disregarding accountability.

Monday, July 30, 2007

Security Issues and Business Opportunities - Panel

Summary of panel discussion at the IFIPTM conference in Moncton.

David Townsend

New knowledge is best created in collaborative environments. The key question is whether government and industry can collaborate successfully in a complex regulatory environment.

Lois Scott - Clinidata

The biggest challenge for private sector companies in health care is to convince the public sector that we are trustworthy - the prevailing view is that we would do anything for the big buck. The reality is that this isn't true; we wouldn't survive if we were.

American owned (therefore subject to the Patriot Act), they handle health care records for 65 percent of Canadians (?) (or 1.5 million Canadians - this is unclear) and collect and transfer health care information.

Only 1 percent of people opt out of giving their name and information.

Our routine business practices are now under the microscope and so they should be - but right now 95 percent of physician communications are by fax - they shouldn't be.

We see the 'why do you need that information' question coming from young people - older people are more trusting.

People resent being asked to sign releases before seeing the information we have collected on them.

The challenge is how to minimize the privacy and security risk without compromising health care. As the keynote said, you can paralyze the good stuff by trying to secure everything.

In Canada there is no legislation requiring that people be informed about breaches of security - in the U.S. there is. Forty percent of Medicaid companies have experienced a security breach in the last year - we don't know that level in Canada.

In the U.S. there are few laws restricting the use of private information, and the industry is mostly self-regulated. In Canada, there is a clear expectation that we will respect each and every privacy law.

Canada has to be aware that we may become a pawn in the dispute between the strong privacy laws in Europe and the weaker laws in the U.S.

Patriot Act - the impact is on B.C. residents living in B.C., not abroad. In our case, all of our information resides in the province - it doesn't go out. We have to have it in our contracts that the information stays here. It can't be a superior-to-staff-member relationship.

Basically in our contracts we say that the government is the owner of the data.

When you think of all the American-owned businesses in this country, you have to think about what that means - not just customers, but also staff. On the other hand, to survive in business in this country, you need the U.S. - you can't just say we don't want to deal with that.

There is always the concern that the private sector will use this data for data mining. Some of this is really good and is not being done - things like detecting influenza outbreaks - but we are not formally doing that right now. Certainly in continuity of care we are having real issues. Crisis lines are transferring in, 911 calls...

Are we ready for what's coming? We've always been reactive, not proactive. Regulations and guidelines are often put into place after the fact. We have to speed up, or we have to do something different. Eg. some of the wireless technologies - you and I will be wearing a patch, which will relay information through the telephone. How do I know that my mother isn't wearing my patch today?

Patients are going to be custodians of their own health records. Right now, the record still belongs to the provider. There has been such poor uptake of the single health record that they are looking at the personal health record. Who will house it, and how do we protect it? What if patients disagree with what's being said? It is going to change the very nature of the physician-patient relationship. It's not bad - but we're not ready for it.

I think the business development possibilities are attractive because of globalization, telehealth, etc. - but we have to optimize delivery.

Parry Aftab
Internet privacy and Security Lawyer

CyberTrust project - it won't be called that when it goes live; it's just a working name.

There have been three major leaps in IT:
- development of the internet - sending data from one place to another
- the web - 1993 first browser - the web took us from needing to know a geek to being able to use it
- Web 2.0 - 2004-5 major growth to mainstream - until recently, you would go to websites for somebody else's content - but now we have Facebook, MySpace, etc. - user-generated content

This changes the internet - the challenge is, you no longer have the CNNs, etc., that you can sue and tell them, 'take this down' - it's not companies any more - 180 million profiles on MySpace - how are you going to comply with anything?

Teen Angels - teens being trained in privacy issues - trained by me (and others) - they are experts - they were comparing the privacy policies of eHarmony and other dating sites - one said, on my Zanga page - and the discussion about profile pages ensued. Why have a profile - well, if you're a little shy, this is a way to make friends. Now I had to find a way to keep them safe on their social networks (vs saying, no social networks).

We are the 'inside watchdog' on Facebook, MySpace, Zanga, etc...

The Web 2.0 industry doesn't know what it is doing. They are 24-year-olds running companies. They don't know how to hire lawyers, etc. They need help on security, compliance, risk management, etc., and policing.

(Story of moving to New Brunswick).

We are creating this program. It will be a service, consulting and compliance centre that will provide outsourcing of risk management directly to our centre. So if Facebook doesn't want to handle this themselves, they can hire us. And we can coordinate with the law enforcement agencies to do this. We need to advise them - that's where the certification comes from.

Sandy Bird

Security breaches - we know they happen, but it's not reported - if nothing is sold, did it happen?

Web 2.0 apps - get exploited - the security is very poor - the audit finds that the machine has been breached - we have to look at logs, etc., long after the fact to find out what has been taken from that machine.

It's difficult to write secure applications - there aren't courses that teach developers how to write secure apps. Hackers can use data input, eg. social networks - there's no secure way of communicating in any of them these days.

Someone was talking about a national identity database - that's what I need, you just need to exploit one system and you have everything!

What Are You, Who Are You, And How Do You Know?

Summary of a talk given by Jonathon Cave to the IFIPTM conference in Moncton.

Jonathon Cave
RAND Europe

What Are You, Who Are You, And How Do You Know?

Interesting to observe differing attitudes toward privacy and security. People have differing motives - some because they think they should, others who think there is something to be gained.

Many of the things we do protect us against risk - but this may be more efficiently managed at the individual level. It's not immediately obvious that the challenges we face today fit into the categories we drew in the past.

Businesses are the people who can most efficiently manage the risks of managing information. But they then become a target, because of the value of that information. They become part of a complex system, and such systems have failure modes at the boundaries.

Leads us to think that government's role needs to be 'rebalanced'. If we did not have deregulation, it would not be possible to have that conversation.

Different countries have different rules regarding privacy and security. We benefit from that - because of expenses created by compliance in other places.

That said...

1. Tangibles are changing business models

I am a game theorist. Game theory is the idea that you rank things and pick the best. But you can't rank things without knowing how other people look at things. So these rules describe what people do. Then you design systems based on these rules.

Privacy and security run right though economic theory, and this is where game theory comes in. The view of the individual runs right through this analysis. Eg. what people intend to do and what they can do are different things. There are things we cannot predict - risks. There are things we cannot even define - uncertainties.

Eg. CCTV cameras - they can help do things like catch criminals. But also - they push crime indoors. But even more - when I'm being watched, I'm not being trusted. When I'm not trusted, I am less inclined to be trustworthy. So CCTV may contribute to the sort of behaviour they are intended to reduce.

We want to draw out these intangibles, to touch them - this desire makes us very uncomfortable, the way IP did for the content industry.

What does it mean to steal someone's identity? It could mean stealing my stuff. It could mean creating a new identity entirely, without taking anything from me. But that may mean denying me access to my own identity.

A borderless world is very messy (the IT world). We leave traces all the time. There are traces of pretty much everything we did online. One of the things that compromises my identity - prevents me from doing things - is myself in the past. That ability to move off of where we were is an essential part of our identity.

Privacy - we need to have routes to know what is known about us. But it's not just the information - it's the judgments that are made with that information. It's not even just the accuracy of the information - what if judgments are made with only half of the information?

Businesses may be able to deliver better services when they have more information. But if a business has a program that creates a profile of me, the information belongs to the business. So the company will serve me just enough to get that information.

These intangibles become increasingly important to business models. It used to be that transactions were anonymous, but no longer. Now some organizations collect things like names - or even thumbprints - and erase them later. But the important thing is that a business model that used to be about selling fruits and vegetables is now a model about the collection of information.

A lot of this at the moment lives in the realm of corporate social responsibility - part of the halo effect, things that big businesses do because they can afford to do it, and that give them a subtle advantage.

Now we are getting things that we never anticipated. The new regulatory framework (eg. privacy protection) is a result of this. But also - a service is pretending to sell me identity protection. It used to be that I expected my bank to protect my identity. But now it may be more like dread disease insurance, where we pay for our own protection.

2. Privacy, security, trust, etc., are all good things.

But that doesn't mean 'more' is better.

Eg. security cameras on vacant lots - there's nothing to protect.

The fact that my actions are being observed changes responsibility - if you give me too much privacy, I don't worry about responsibility. We see that in the area of anonymity.

It only makes sense to trust some people if (a) they earn that trust, and (b) they can use it to do what they do. So there are only some cases where we trust, say, government.

There is another example of where you can have too much security - but you live closer to that example than I do.

Suppose I trust you - I give you access, etc. - but then if I put cameras on you - then I'm not trusting you. I'm just using you as spare parts. These monitoring things cut at the very heart of trust. If you control me, you're not trusting me.

These things are good things only if we all agree they're good things. If it's better for you than it is for me, then there is a question of whether I give consent. In some cases we are forced to 'give consent' - eg. 'you can refuse to give fingerprints, but can't cross the border'. Some consent. Also, there are cases where we don't know what we are consenting to. Or the conditions may change.

The growth of this public space, and its incursion into our private space, may be an incursion into our right to be left alone. If the state intrudes when you aren't doing anything, then you have become the property of the state (Burke).

3. The Atlantic Perspective

The two sides of the Atlantic have very different views of privacy and security - in a globalized world, this creates a lot of conflict.

There are also differences in the structure of markets. In America, small enterprise is considered the font of innovation. In Europe, they are anything but innovative.

Differences in security issues, regularity issues, roles, government legislation, etc. etc. etc. and also in public procurement rules. Whoever wins the market for a core technology - eg. biometrics - has won much more than that.

So there are many disagreements - but we can presume we'll get our acts together.


a. CCTV cameras - monitor everything - they have microphones to predict fights, and face recognition to catch gang members - but this results in people - all people - wearing hoodies. Automatic number plate recognition - they know where you've gone.

b. What hoodies and hijabs mean - they mean that my identity belongs to me. If people feel threatened, they withdraw. And if they withdraw, there will be a reaction to it. Is the withholding of identity reasonable grounds for denying people a public life - holding jobs, etc.

c. Biometrics - iris samples cannot be re-issued. And there are cultural barriers to using the iris. But the key point is - there are types of errors: false acceptance, false rejection, and the type 3 error, the right solution to the wrong problem. If we think biometrics protect us against identity theft, well, they don't - they identify only the physical person - but frequently the physical person isn't what's important.

d. DNA is another example of this. You go into the DNA database if you are drawn to the attention of the police. But it tells much more than just identity - it tells kinship, health issues, etc.

e. Data-mashing. Google maps is a benign example of this. If you mash data, you can violate privacy without even identifying someone.

f. Loyalty cards and commercial profiling.

g. Virtual worlds - to some extent, we are all public figures in virtual worlds. We have certain privacy rights - they may be very limited in some cases (children, criminals, politicians) - but on the internet we're all public. One compartment of our identity may compromise another compartment of our identity. The rules become very different. I don't know what the rules are in Second Life - but I do know they're making a lot of money, collecting a lot of information.

4. Intangibles

There is no necessary contradiction between privacy and security.

Networks amount to the links between people. Game theorists look at links as decisions we have taken. We have:
- people who are careful about privacy
- people who are careless
- people who are opportunistic

We can have
- a high degree of security, because the customer is careful
- but if the business, or the customer, is careless, then there is not a high degree of security

So what state does a network of such states settle down to? It doesn't necessarily settle to the most beneficial state.

Things - like insurance policies - don't force the outcome, but allow it to settle to the optimal state.

But also - we don't have just one system - we have a lot of small worlds. If I join eBay, eg., my behaviour changes.

The way in which I respond depends on the likelihood of an attack, and how much I care about the other people in the network. The system doesn't smoothly adjust - the threat varies depending on how careful people are, and how much the system is being used.
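The point that the same network rules can settle into very different resting states can be illustrated with a toy best-response simulation. The ring topology, the update rule, and the numbers here are all invented for illustration, not taken from the talk:

```python
# Toy ring network of 20 nodes; each node is careful or careless.
N, ROUNDS = 20, 30
neighbours = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}

def settle(careful):
    """Best-response dynamics under an invented rule: a node stays or
    becomes careful when it, or at least one neighbour, already is."""
    for _ in range(ROUNDS):
        careful = {i: careful[i] or any(careful[j] for j in neighbours[i])
                   for i in range(N)}
    return sum(careful.values())

seeded = settle({i: i == 0 for i in range(N)})  # one careful node
stuck = settle({i: False for i in range(N)})    # nobody careful

# The same rules reach two very different resting states.
print(seeded, stuck)  # -> 20 0
```

Carefulness spreads when anyone seeds it, but an all-careless network is also a stable equilibrium - which is the sense in which the system need not settle to the most beneficial state.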

5. Markets

Network effects and interoperability - we can get excess inertia - perfectly good ideas might never be adopted, while other ideas may be rapidly adopted.

We have a system with a soft centre and hard boundaries around the outside - breaking the boundaries becomes high value - this makes for a brittle system.

What we see in a lot of IT worlds - we know we have to give up information and incur risks - eg. people who download software - and others are scared of those risks, and create private domains. That might be OK - but the whole point is that we gain from being connected to each other.

This splitting apart - and what it means for things where we have to pay - like education and health care - is concerning.

6. A Warning from History

Business, government and civil society have very different perspectives.

Events have a disproportionate influence because of this. The different agendas are not necessarily consistent.

The challenge to business is to embrace these issues (not to describe them into nothingness).

We can see possibilities of:
- high security - high privacy (the academic publishing world, eg)
- high security - low privacy (the surveillance society)
- low security - high privacy (walled gardens)
- low security - low privacy

Security and privacy - are both states of mind. These choices are ours to make.

Sunday, July 29, 2007

Trust and Reliability

Responding in Weblogg-ed to a comment by David Weinberger.
David Weinberger: “Open up The Britannica at random and you’re far more likely to find reliable knowledge than if you were to open up the Web at random. That’s why we don’t open up the Web at random. Instead, we rely upon a wide range of trust mechanisms, appropriate to their domain, to guide us.”
The problem is, Weinberger's response is wrong.

The quote compares a particular product - Britannica - with an entire medium - the web.

The medium of which Britannica is a part - print media - is demonstrably as unreliable as the web, especially after you point out that print media includes tabloid journalism, press releases and political advertising.

The comparison should most properly be between Britannica and, say, Wikipedia. But the problem here is, if you open a random Wikipedia page, you are no less likely to find reliable knowledge.

Weinberger's response introduces a new topic that has nothing to do with the original comparison. He is talking about how we select media. This was never the issue.

But if we're going to talk about media selection, are 'trust mechanisms' the right way to characterize (a) what we actually do, and (b) what we should do?

I contend neither is the case. Certainly, trust mechanisms are not operating at the moment. Very little of my selection has anything to do with, say, the reviews on Amazon or eBay. Rather, I get deluged with content - most of it spam - and pick out content I recognize to be valuable.

How do I do this? This is a clue to how we will want to work in the future. I have mechanisms I use to select content for myself - I don't simply 'trust' external agencies - not even my friends or social networks.

My selection of reliable content is a matter of recognizing the types of content I find to be reliable. Good reviews, recommendations, etc. - these are only a part of it.

I am tempted to say, there is no trust. That trust is a lie.

Think about it. If you know me, you know that I am a trustworthy source - maybe as trustworthy as one gets. Suppose I am, just for the sake of argument.

Do you simply accept my argument? Do you simply agree with me? Of course not. Nor should you.

Reliability isn't - and never was - a matter of trust.

Indeed, I would say, the day we start relying on trust to confer reliability, is the day we start allowing ourselves to be led down the garden path (with the 'trustworthy' authorities leading the way).

Wednesday, July 25, 2007

The Handgun Ban

To be clear, I am in favour of a handgun ban.

In the news media I have been hearing over and over for the last few days that a handgun ban will not reduce crime in this country.

What this tells me is that someone has just been shot (or at least, shot at) with a handgun. I may have actually missed the news article but there's no missing the wails of defense for handguns.

The logic coming from the handgun defenders runs along these lines: criminals will ignore a handgun ban, and thus, the handgun ban will be ineffective.

Let me first point out that this is a gloriously stupid line of reasoning. By definition, criminals ignore the law. That's what makes them criminals. But this is generally not a good argument for doing away with the law.

Take murder, for example. Every person who murders someone else has completely ignored the law against murder. But does this mean that we should repeal the laws against murder? No! Does this mean the laws against murder do not work? Clearly not!

OK, having said that, let me add this to the mix: if handguns are banned, then only criminals will have handguns. Right? What this tells us then is that we have found a really good way to spot criminals. They're the ones carrying handguns.

Now, finally, let's look at this gun thing more rationally.

Again, we are told time and again that banning handguns won't reduce crime. But this misrepresents the reason we want handguns banned (and other firearms closely regulated).

People want to ban handguns because handguns are inherently dangerous. Banning dangerous things is just common sense.

We ban outright the ownership of things like rocket launchers, radioactive materials, lions and tigers. We regulate and license the use and ownership of some dangerous things, like cars and aircraft.

This is because there is a causal relationship between the widespread use of these things and the loss of life and injury caused by these things.

Let's go back to guns specifically. Instead of looking at trumped up statistics like 'gun crimes', correlate 'level of gun ownership' in a country and 'number of gun deaths' in that same country.

You get a straight line. The U.S., which has the highest number of guns owned per person, also has the highest number of gun deaths per person. Medium countries, like Switzerland, Canada and Australia, have medium levels of gun deaths. Countries with very low levels of gun ownership, like Britain and Japan, have very low levels of gun deaths.
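The 'straight line' claim is a claim about correlation, which is easy to compute. The figures below are deliberately made-up placeholders, not real statistics - the point is the computation, not the numbers:

```python
# Hypothetical placeholder figures, NOT real statistics:
# (guns per 100 people, gun deaths per 100,000 people) per country.
countries = {
    "A": (90, 10.0),
    "B": (30, 3.0),
    "C": (25, 2.5),
    "D": (6, 0.6),
    "E": (1, 0.1),
}

xs = [guns for guns, _ in countries.values()]
ys = [deaths for _, deaths in countries.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Pearson correlation coefficient: +1 means a perfect straight line.
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sx = sum((x - mx) ** 2 for x in xs) ** 0.5
sy = sum((y - my) ** 2 for y in ys) ** 0.5
r = cov / (sx * sy)
print(round(r, 3))  # very close to 1 for roughly proportional data
```

Whether real-world ownership and death figures actually line up this neatly is exactly what would need to be checked against published statistics.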

This is, of course, exactly what you would expect, which is why the gun lobby howls every time someone is killed. It's not true, they cry, over and over, knowing that if they say the big lie enough times, people will believe it.

But it's still a lie. Guns kill people. The more guns you have, the more people die. That's why they should be banned.

Saturday, July 21, 2007

Celebrities Travel

Responding to Sustainability: The Inconvenient Truth About Idolizing Green Celebs. My response is being 'held for review' by Fast Company.

I fail to see the point of this post.

If it is to suggest that many Americans do not yet take global warming seriously, yes, we knew that.

This does not discredit those who are campaigning against global warming, however. Even the slightest look will show that these are the people who are car-pooling, biking, and using public transit.

A post arguing that American politicians should support public transit or make cities bicycle and pedestrian friendly would have been on point, but that was perhaps too much for the author.

Regarding the celebrities:

> I can't say I remember any celebrities talking about driving less or driving to work with their costars instead of alone.

When is the last time you actually looked at how celebrities travel? My guess is that you're just making stuff up.

Celebrities *never* travel alone. They have an entourage - they have drivers and press agents and wardrobe assistants and bodyguards and more.

Finally - even if celebrities traveled alone - this does not discredit their message. The earth is still warming, whether or not celebrities travel alone. We need to reduce carbon emissions, whether or not celebrities travel alone.

How celebrities travel is *irrelevant* to whether or not what they say is true.

Which is why I say, I fail to see the point of this post.

Wednesday, July 04, 2007

Government Versus Free Market

Responding to Graham Glass:

Just for fun...

"Many (if not most) people believe that the Government can provide 'essential' services better than the free market. It's interesting to wonder why."

Because without any government services, you get an economy that resembles Somalia's. With limited government services, you get one that resembles Bolivia's. Most people, however, would prefer one that resembles a modern industrial nation - that is, one with significant levels of government services. Like Canada, say, or Norway or Germany.

"I think that the main reason is that people think that the Government is altruistic, and therefore will provide the best service because it has the people's best intentions at heart, whereas the free market is driven by profit and therefore will over-price services and cannot be trusted."

That is a bit of a misstatement of the right idea.

It's not simply that the government can be trusted and the private sector cannot be trusted. Rather, there are some things the private sector simply won't do because there's not enough profit in it. Like providing welfare to poor people. City-wide policing services. Epidemic prevention. Other sorts of infrastructure.

Also, there are numerous things that governments do more efficiently than the private sector. Imagine the chaos we would have if we had competing fire services (especially if people who didn't contract a service didn't get fire protection at all). If the government didn't create and run the army - who would? Microsoft?

The argument in favour of government involvement in the marketplace isn't simply one of trust. It is also one based on the organization and delivery of services. The model is different for government services, and this confers different strengths and different weaknesses.

Nobody, for example, would consider nationalizing 7-Eleven. Nobody wants the public service to run McDonalds. But it is equally absurd to think of these companies being responsible for, say, food safety and standards inspection.

"First: The government is not altruistic. Government officials want to be re-elected more than anything, so they will often make decisions based on pressure from lobbyists, unions, and other groups who have their sponsors rather than the ultimate consumer in mind."

Agreed. But the way this argument is stated is very misleading. "Lobbyists, unions, and other groups" indeed. In fact, by far the major contributors, the major lobbyists, are the businesses - exactly the people who would be running the show if the role of government were reduced.

If it's bad if the lobbyists have too much power over government now, then it's even worse when government is absent and these lobbyists are running the show.

"Second: Consumers are ultimately served best by goods and services that provide good value for money; a combination of cost and efficiency. Does a consumer really care why a organization provides a service, as long as it's good value for money?"

At certain extremes they do. Nobody wants the Klan serving breakfast at the town fair, no matter how good the price.

But of course the presumption here is that the private sector always provides better value. In fact, it is easily arguable that in many cases the private sector provides much lower value. This is why, for example, the U.S. has slower internet speeds than many nations in the world.

If this is the case, then the argument that the private sector should always provide services becomes one based on dogma rather than reason. And it becomes reasonable to argue that people won't mind services that are delivered by the government, if they are delivered efficiently. Most people, after all, have no problem with the government providing services.

"Third: Altruism is no guarantee of producing good value for money. An organization can have the best intentions in the world and still produce an expensive, poor product."

Quite true.

But it doesn't follow from this that altruistic organizations produce poor value for money.

And it is arguable that altruistic organizations are more likely to provide value for money than organizations that are designed solely to make money.

The 'no guarantee' type of argument offered here tempts the reader to ignore the other alternative. But in reality, the two alternatives need to be looked at side by side. Then it becomes apparent that *neither* provides a guarantee. Which makes the no-guarantee argument a bit empty.

"Fourth: Many, if not most, people want to be rewarded for their creativity and hard work."

Yes. That's why government services cost us money in the form of taxes.

"That's why entrepreneurs and investors look for marketplaces that are inefficiently served and provide an opportunity for profit. If a marketplace can't provide a profit, entrepreneurs and investors look elsewhere."

This story glosses over numerous points. The marketplace doesn't just simply offer up a profit. There are few, if any, 'inefficient' areas of our economy. Private enterprise has had its run for decades now; it has pretty much filled up its niches.

What this means is that profitable markets these days must be manufactured, not found. There are various ways to do this, including:
- invent something
- create artificial scarcity
- create artificial demand
- scare off or litigate against other players
- create better marketing

Some of these are legitimate - a company that invents something should have the opportunity to promote it in the marketplace. Others, though, are less legitimate. Misleading advertising can create profits, sure, but creates more harm than good.

More to the point, though, what this means is that if a marketplace does not provide a profit, entrepreneurs don't look elsewhere (the other niches are filled). Rather, they begin to lobby for legislation and, if necessary, litigation. They seek to undermine competition, create bottlenecks they can profit from, strike exclusive-service agreements, or require by law the use of their own products. None of this, of course, has anything to do with providing better service, just providing more profit.

"When the government grants itself a monopoly in a particular marketplace, it removes the incentive for entrepreneurs and investors to compete for profits in that area."

The supposition is that if the government is providing a service then there is no incentive for the people who are providing that service.

But there is of course an incentive.

For one thing, the people providing the service may be altruistic, as mentioned above. They may like helping people, or may want to help society in general.

For another, they may desire fame. You can become more famous providing public services than you can making money. That's one reason why people like, say, the Rockefellers started building libraries after they made their money.

Third, public servants are paid. There are performance benefits and other incentives. And though the government may have a monopoly, there is plenty of competition for jobs and positions. As well, government money granted to organizations is typically done so on a competitive basis (too much so, in fact).

"Fifth: A company in the free market only makes profits if it provides better value for money than its competitors."

This is demonstrably false. Companies can provide lower value, but if they spend more on advertising, they can make more money.

"In a truly free market, large profit margins for a long period of time are relatively rare since other competitors will enter the market and drive profit margins down."

This is a counterfactual: the conditions of a "truly free market" - if they can even be specified - can never be satisfied.

"If you think that the energy industry and the healthcare industries are free markets, look more closely; the government is hugely involved in these industries, with most examples of huge profit margins due to government interventions that are influenced by lobbyists from those industries."

There are huge profits in those areas - at least in the U.S. But whether this is due to government intervention is debatable.

Certainly, the lobbyists have played a role in securing profits for those companies. But that means the cause of the profits is the *companies* that hired the lobbyists.

Well, that's it for now. Back to you.

Monday, July 02, 2007

Google Embraces the Dark Side

This is why I don't run Google ads:
Does negative press make you Sicko?

Lights, camera, action: the healthcare industry is back in the spotlight. (Not that it ever left the stage.) Next week, Michael Moore’s documentary film, Sicko, will start playing in movie theaters across America...

While legislators, litigators, and patient groups are growing excited, others among us are growing anxious. And why wouldn’t they? Moore attacks health insurers, health providers, and pharmaceutical companies by connecting them to isolated and emotional stories of the system at its worst...

Many of our clients face these issues; companies come to us hoping we can help them better manage their reputations through “Get the Facts” or issue management campaigns. Your brand or corporate site may already have these informational assets, but can users easily find them?

We can place text ads, video ads, and rich media ads in paid search results or in relevant websites within our ever-expanding content network. Whatever the problem, Google can act as a platform for educating the public and promoting your message. We help you connect your company’s assets while helping users find the information they seek.
Some commentary:

Cary Byrd at eDrugsearch.com
How convenient! This ad rep just happens to take the side of U.S. healthcare companies against Moore and “negative press,” and just happens to be in a position to directly benefit from advertising dollars from these self-same companies. (See "Google ad rep tells U.S. healthcare companies how to beat Michael Moore").
Abe Olandres:
If it were Yahoo or Microsoft, we wouldn’t react that much (just read all the blog posts). But for a company who promises to do no evil, their position is totally a 180-degree turn away from it. (See "Some Googlers shouldn’t be allowed to blog").
The Daily Jive:
Google has a solution for the healthcare industry in the face of Michael Moore's amazing "Sicko" bombshell. Buy ads. Pay attention: This is what selling your soul sounds like. (See Untitled)
Amy Bellinger (responding to Google's assertion that "Whether the healthcare industry wants to rebut charges in Mr. Moore's movie, or whether Mr. Moore wants to challenge the healthcare industry, advertising is a very democratic and effective way to participate in a public dialogue"):
Google doesn't even allow individuals to purchase ads critical of large companies.... So, apparently HMOs criticizing Michael Moore is okay, but random-guy-with-a-website criticizing a large corporation is not okay. 'Democratic,' indeed. (See Not so democratic after all: chokes disintermediation)
Steven Hodson:
Regardless of how one feels about Michael Moore or his new movie Sicko the fact that Google has decided to throw its weight against him by actively promoting its service and influence to the health care industry is wrong and is setting a very dangerous precedent. (Google’s misuse of power - no surprise here)
Yeah. Nothing like the combined riches of a multinational industry to ensure that there is balance in the media about issues like health care. But they're at such a disadvantage, it sure is a good thing Google is there to help them when the going gets tough.

Poor disadvantaged health care leeches.

As for Google? Now officially evil.