Former President Gerald Ford once noted that any government big enough to accomplish everything you want will be big enough to take everything you have. A similar observation might be made of the combination of technology and business. Think about the history of how technology has become an integral part of business, especially large businesses.
I’m not that old, and I can remember when people traveling abroad actually arranged for letters of credit with foreign banks – a concept that is not only unnecessary today, but not even the faintest of memories in the minds of most people. I can also remember when there was essentially no interstate banking, and when “charge cards” – the forerunners of today’s credit and debit cards – were local or limited to accounts at a single business, such as an oil company. The first “national” credit card was the “Diners Club” card, launched in 1950, but a national credit card system didn’t develop until the mid-1960s, and it was a good two decades after that, if not longer, before credit cards were a feature on a world-wide basis. Today, you can use a debit or credit card for a cash withdrawal or advance in most large cities across the globe without having to carry hundreds or thousands of dollars in travelers’ checks.
Of course, none of this would have occurred without massively large banks, and massively large banks with nationwide and international outlets and connections aren’t feasible without technology and high-speed computers and networks.
But progress comes at a cost… and that cost is vulnerability. The same technology that allows you to withdraw cash from your New York or Denver or Charlotte bank from where you are, whether it be Amsterdam or Buenos Aires or Sydney, also makes it possible for a hacker in Ukraine or Bulgaria to tap into your account. The same technology that allows you to buy and sell stock in minutes from your home computer is the same technology that allows programmed trading systems to do so in milliseconds and crash the entire New York Stock Exchange in minutes when the slightest thing goes wrong. The Obama Administration is pushing for national centralized and computerized medical records, something that already exists in many states and hospital networks, in order to allow you to receive better treatment if you fall deathly ill or are injured away from your home… but that technology is far more susceptible to misuse than the “antiquated” paper files and charts that were once only located in your local hospital and your doctor’s office. With the growth of the new technology has also come a massive growth in medical records fraud, especially involving insurance and government medical programs.
The point is simple. Technology multiplies everything – both the benefits and the liabilities, the gains and the thefts – and because it does, unless a technologically “improved” system is designed to minimize abuse, abuse will multiply faster than benefits. But… all the abuse-prevention systems and passwords have the effect of making it harder to access the new technology – so that most of us who have any online presence or business needs either juggle password after password or court fraud and abuse by using simple passwords or employing only one or two for everything. And that, of course, increases vulnerability.
So it’s no wonder that the total cost of electronic-based fraud is skyrocketing. Not only that, but the “official” totals don’t even include the uncounted personal time lost in dealing with such problems as spam and would-be fraud… or forgotten or mistyped passwords.
Yes… we have progressed… but it’s been a great deal more costly than most of us realize, and it’s likely to get more so… not less.
Not sure what you mean by fraud – see this: http://www.fbi.gov/scams-safety/fraud/internet_fraud/internet_fraud
I don’t understand your spam issues either. I use free stuff – not the absolute best – and get maybe 2 spam emails a month past the filters.
I use Firefox 3.6 with Adblock Plus, NoScript, & Yahoo email. I also use a password keeper, which lets you use one password for access while it stores all your passwords and usernames. It will even auto-fill if you want.
The computer lets me do so much more than I could without it.
Sorry for the typos above – this site isn’t BlackBerry friendly.
Prior to the computer age, a con man or fraud artist had to contact each target on an essentially individual basis. Even mass mailings required addresses to be looked up and entered individually. Now, computerized programming can turn fraud schemes into mass marketing, vastly expanding the number of individuals targeted… and the number victimized.
Information technology is a series of trade-offs. In this case, the trade-off is between convenience and security. We consumers have consistently voted with our dollars in favor of convenience with security seemingly an afterthought, so this appears to be a case of getting what we asked for. Because most of us have absolutely no idea how our technology works, the user becomes the weakest link, resulting in the multitude of scams and frauds perpetrated.
On the surface, it appears to be an instance of rational ignorance. It is not necessary to understand how electronic banking works in order to swipe your card at a store. But there is a cost for that ignorance. By choosing to not know, we prevent ourselves from understanding many of the vulnerabilities of the systems which we use and exercising the appropriate cautions. One of the first things that popped into my head after reading Mr. M’s thoughts was a quote or paraphrase, I believe from Herman Wouk, about a plan “conceived by geniuses for execution by idiots”.
I like that phrase: “rational ignorance”. It describes a lot – and opens the door to the phrases “irrational ignorance” and “rationalized ignorance”, too. Realistically, in the trade-off between understanding how everything works and just using it, technology has probably reached the point where most people simply can’t keep up with how things work. Shade-tree mechanics are few and far between now, due to the complexity of a car engine, for example. We could learn the details of what happens when we swipe our card to pay for something – or we can trust that the banks will use sufficient protections to safeguard us, because we can’t keep up with the details. But is our trust, and thereby our ignorance, rational – based on a reasonable general understanding – or is it based on rationalization, or on simple irrational, unjustified blind trust?
Echoing the trade-off thoughts above – but a large share of today’s issues result from conscious choices made by the designers of the interconnects many years ago, when security wasn’t a consideration.
Designing security in from the start is not difficult: it basically involves determining the various levels of trust required and configuring everything to use the minimum necessary. Instead, most software of the last 15 years has been designed exactly in reverse – the designers consider everything they might want, and design for the maximum possible. Naturally, the person who loses out is the wider public.
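The “minimum necessary trust” approach the commenter describes – often called least privilege – can be sketched in a few lines. The roles and permission names below are purely illustrative, not drawn from any real system:

```python
# A minimal sketch of least-privilege design: each role is granted only
# the permissions it strictly needs, and every action is checked against
# that explicit grant. Role and permission names are invented examples.

ROLE_PERMISSIONS = {
    "teller":  {"read_balance", "post_deposit"},
    "auditor": {"read_balance", "read_audit_log"},
    "admin":   {"read_balance", "post_deposit", "read_audit_log", "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: an action is permitted only if explicitly granted."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Designing "in reverse" would start from the maximum set and try to
# subtract; starting from the empty set makes any omission fail safe.
```

The design choice that matters is the default: an unknown role or an ungranted action falls through to “deny”, so a forgotten configuration entry fails closed rather than open.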
Computerising records has made many older forms of the consumer ripping off the provider much harder to execute (e.g., claiming benefits in multiple localities, or for multiple people at the same address, and so on) – the providers are very willing to design to limit that.
On the other hand, computerised records have made detecting fraud by the provider much, much harder, as providers go out of their way to make the systems overly complicated, which makes regulation and auditing significantly more difficult. The only way around this is for governments to force through rules to protect their people – yet many of those making the rules instead force through more rules to protect the providers.
The thing I suspect about technology is that there has always been a limited understanding of its workings on the part of the layperson.
For example, how much did a carpenter in times gone by know about blacksmithing, and vice versa? How much did each rely on the other knowing his craft when purchasing the goods he needed?
There’s always been a limit to how much anyone can know about anything and a degree of necessary reliance/trust that others know more than you about the subjects you aren’t well versed in.
Those who create software rarely bear the costs of its vulnerabilities.
Banks and the like only spend on computer security to the extent that the reduced losses for them exceed the cost of the added security. That means they’ll tolerate some losses, and will be more tolerant of losses they’re not responsible for covering.
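That cost calculus can be made concrete with a toy expected-loss comparison – all the figures here are invented purely for illustration, not actual banking data:

```python
# Toy model of the bank's security calculus: invest in a control only
# when the expected reduction in losses the bank itself must cover
# exceeds the cost of the control. All numbers are illustrative.

def worth_investing(annual_fraud_loss: float,
                    reduction_rate: float,
                    bank_liability_share: float,
                    control_cost: float) -> bool:
    """True if the control pays for itself from the bank's point of view."""
    avoided_loss = annual_fraud_loss * reduction_rate
    # Losses that customers or merchants absorb don't enter the bank's math.
    bank_savings = avoided_loss * bank_liability_share
    return bank_savings > control_cost

# A control that prevents $400k of fraud justifies a $150k cost only if
# the bank would otherwise have covered enough of that fraud itself.
print(worth_investing(1_000_000, 0.4, 0.5, 150_000))   # → True
print(worth_investing(1_000_000, 0.4, 0.1, 150_000))   # → False
```

The second call shows the commenter’s point: the smaller the share of losses the bank must cover, the less security it rationally buys, even when the total fraud prevented would be identical.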
There are a number of programming mistakes I wouldn’t make because where they’re concerned, I follow good habits by default; but I very much doubt that all programmers do so, given some of the really ugly code I’ve seen.
Security is always a trade-off. It costs something to implement, may impact functionality, features, or ease-of-use, and the only secure system is one that’s not usable at all – cast in a block of concrete, maybe. That’s of no value of course. _Proofs_ of correctness are very difficult; they tend to become unmanageable above a small scale. And they only push the problem back a level: one might prove that a particular program correctly and accurately implements a particular formal specification, but proving that the _specification_ correctly describes human expectations is probably not possible at all. A very disciplined development process can improve quality, but it’s going to be much more expensive up front, and most likely slower, too. As things stand, the up-front costs probably wouldn’t be made back in terms of reduced maintenance costs later on.
Software would seem to be immensely profitable: almost all the cost is in developing the software, and the cost-per-copy is almost nothing. But support drives up the cost-per-copy considerably. That’s usually hidden for consumer software, where sheer volume drives it down – very few problems are unique; but for business software, the cost of support contracts can exceed the initial cost of both software and hardware in a very few years.
So it’s easy to say that the developer has to bear more of the costs of producing reasonably secure software. But the fact is, the developer likes to eat, and their investors like a return on their investment. Which means that, as always, the consumer pays. Even if it’s business software, the consumer pays, because costs get passed on until there’s nobody left to pass them on to.
Many of those who support systems or install software aren’t nearly as knowledgeable as they should be, and don’t really keep track of vulnerability warnings or updates; nor do they configure their systems to minimize actual exposure.
So for all the trade-offs, few have incentives to do more than they must, and many are less qualified than they should be to even understand the trade-offs and the nature of the choices that result from their lack of understanding.
I don’t particularly like government intervention in anything. But if those responsible for the vulnerabilities bear so little of their cost, and if the crooks that exploit them reside in countries that are all too eager to bring in any cash they can, as long as the crooks don’t create too much trouble in their own country, why would there be effective enforcement against them either? NIMBY applies, as in the Middle East, where most will only insist that someone not blow stuff up in the country in which they reside, but rather encourage those with such inclinations to practice them where there might be mutual advantage to be had.
Not saying there are no solutions. But any proposal to rearrange incentives for better behavior always has costs of its own (freedom not the least). So there are probably few _easy_ solutions, or they’d already have happened.