As we all know, different countries have different cultures. 'Culture is the collective programming of the mind which distinguishes the members of one category of people from another' (Hofstede, 1991). It is inevitable that cultural differences have an impact on business. For example, in a meeting the word "table" in British English means to put something on the agenda, but in American English it means to set something aside. This example shows how culture affects business. Hofstede's research defined four cultural dimensions: power distance, uncertainty avoidance, individualism, and masculinity; he later added a fifth, long-term versus short-term orientation. In my view, the most significant cultural difference lies in power distance. Hoecklin (1995: 28) explains that it 'would condition the extent to which employees accept that their boss has more power than they have and the extent to which they accept that their boss's opinions and decisions are right because he or she is the boss.' I understand it as the degree to which subordinates may agree or disagree with their bosses or managers: the distance between a manager and a subordinate. Most oriental corporate cultures are hierarchical and highly centralized, sometimes called 'power-oriented cultures', for historical reasons. These are high power distance cultures, in which managers make the decisions and superiors are entitled to more privileges. Close supervision is positively evaluated by subordinates, and disagreement with one's manager is frowned upon, especially in Malaysia, Japan, China, and India. In the Orient, power distance is also associated with 'the family culture' (Trompenaars, 1993: 139). In this kind of corporate culture the manager is like a 'caring father' who knows better than his subordinates what should be done and what is good for them. Subordinates always esteem their managers because of their age and experience, and that is usually how employees earn promotion. The family culture has both positive and negative sides. I feel it makes for an easy-to-run system, but the hierarchy sometimes makes it hard for young, creative employees to work well. As Trompenaars (1993: 142) tells us, 'family cultures at their least effective drain the energies and loyalties of subordinates to buoy up the leader.' So in the family culture, power distance can be seen as the subordinates' respect for their superiors. That is the corporate culture of the Orient.
Let us take a look at the Western way. It is not a complete reversal. International management recognizes 'the Eiffel Tower culture' (Trompenaars, 1997: 166). Trompenaars (1993: 148) tells us: 'Its hierarchy is very different from that of the family. Each higher level has a clear and demonstrable function of holding together the level beneath it.' Germany and Austria show the characteristics of the Eiffel Tower culture, which reflects low power distance. In low power distance cultures, 'higher-educated employees hold much less authoritarian values than lower-educated ones' (Hoecklin, 1995: 31). Subordinates show less obedience to superiors than in the oriental way; the leadership style combines hierarchy with consensus. An employee may hold a different opinion from his or her boss, and can take the idea all the way up and discuss it. This is usually good for the company, which can explore the full potential of its employees, because subordinates sometimes have better ideas about the business. I think that because power distance is perceived so differently, people behave completely differently in business, so conflict and misunderstanding are bound to emerge when two or more cultures meet. In this situation, international managers must pay attention to the clashes and stay aware of them. How to get subordinates working together efficiently and cooperatively is important too. There is also a large discrepancy in uncertainty avoidance. Hoecklin (1995: 31) defines uncertainty avoidance as 'the lack of tolerance for ambiguity and the need for formal rules', meaning that people set up rules to cope with uncertainty. Uncertainty avoidance is high in most oriental countries such as Japan and China. In these countries people prefer a stable job; they feel safe and proud when they keep working hard in one place.
Under these circumstances, a good manager should keep his employees away from unpredictable risk, and employees prefer to work in groups rather than independently because it involves less risk-taking. In most Western countries, by contrast, uncertainty avoidance is low and job mobility is high, for example in the USA, Denmark, and Singapore. Western people believe that changing jobs brings more experience, and they like a challenge. I believe this divergence in uncertainty avoidance stems from different basic social ideologies. A competent manager should pay attention to how rules are set under different levels of uncertainty avoidance; misreading them may damage the initiative and aspirations of subordinates. The third dimension Hofstede identified is individualism: concern for yourself as an individual as opposed to concern for the group. The priority given to self-concern or group-concern varies across cultures. For example, most Western employees like to work to their own plans in defence of their own interests, which reflects high individualism. Because of these differing attitudes to work, 'the incubator culture' (Trompenaars, 1997: 175) arises when cross-cultural individuals work together as a group. Trompenaars (1993: 158) tells us 'the incubator is both personal and egalitarian.' People hardly cooperate at all: they simply work in their own ways, follow their own rules, and pursue their own objectives, and they dislike interference from others. This is good for a company that wants to gather as many ideas as possible when starting a new programme, but managers must be aware of how to steer these individuals towards the group goal. Anyone good at this must be skilled at grouping, troubleshooting, and coordinating. Finally, Hofstede pointed out masculinity, which concerns sexual inequality.
According to Hofstede's definitions, masculine societies define gender roles more rigidly than feminine societies. In business, managers should pay close attention to how the sexes are treated under different cultural influences. In today's world, because of masculine values, men hold most senior management positions. But an experienced manager should know that men and women work well together, since women are sometimes more sensitive. Therefore, how to balance masculinity and femininity across different cultures and backgrounds, in order to maximize the power of the team, is worth considering. The above four dimensions illuminate the most important cultural differences that affect business. International managers should be aware not only of cultural difference but also of intercultural communication. Gudykunst and Kim (1992: 13-14) define intercultural communication as 'a transactional, symbolic process involving the attribution of meaning between people from different cultures'. Different nations use different languages, so meaning can be lost or misunderstood in interpretation, and in some cultures, such as China, people rely more on implicit expression than in others. Non-verbal communication is therefore important, especially scenic communication, which includes gestures, body language, and eye contact. The more scenic a communication is, the harder it is for people to transmit and receive information. Another aspect is the concept of time. Punctuality is valued everywhere but reflects different realities. We all know that time is money, but for a conference a German will typically arrive five minutes before the start, while a Spaniard will arrive fifteen minutes late; in their own minds they are both on time. That is something managers should understand. In my mind, there is another aspect of time, 'the use of time'. Americans and Northern Europeans have a linear concept of time.
These societies are referred to as time-bound societies. Southern Europeans and Arabs also regard time in a linear way, but they can do or handle more things at the same time; this can be called 'multi-active time'. Then there is the Asian view of time, cyclical time: Asians think that time comes around again as it passes, and so do opportunities and risks. Besides these three aspects of communication, there remains space. It is a big concern in intercultural communication. When you have a conversation with a foreign business partner, the space between you and him reflects the personal boundary of each culture; ignoring it can leave a very bad impression on the other side. Last but not least, I would like to say something about cross-cultural negotiation as I have researched it. Negotiation is a process in which at least two groups of people try to reach an agreement with each other for their own benefit. There are two elements in negotiation: the topic and the process. In cross-cultural negotiation, the process is the crucial obstruction, since different cultures produce different ways of negotiating. Under these circumstances there is a classic sequence of stages: non-task exploration, task-oriented exchange, persuasion, and signing the contract. International managers should be aware of every stage, and in each stage the strategy, technique, substance, timing, sequence, and focal point differ. In this essay I have written about cultural differences along four dimensions: power distance, uncertainty avoidance, individualism, and masculinity. After that, I discussed intercultural communication, which covers language, non-verbal communication, and the concepts of time and space. The conclusion is that different cultures do cause problems in business. We cannot change or eliminate cultural difference.
To avoid misunderstanding, clashes, and bias, international managers should recognize and understand different cultures and adapt themselves to the business environment in order to achieve the best results in business.

Bibliography

Gudykunst, W. B. & Kim, Y. Y. (1992). Communicating with Strangers: An Approach to Intercultural Communication. New York: McGraw-Hill.
Hoecklin, L. (1995). Managing Cultural Differences: Strategies for Competitive Advantage. Essex: Addison-Wesley.
Hofstede, G. (2001). Culture's Consequences. London: Sage Publications.
Trompenaars, F. (1993). Riding the Waves of Culture: Understanding Cultural Diversity in Business. London: The Economist Books.
Trompenaars, F. and Hampden-Turner, C. (1997). Riding the Waves of Culture: Understanding Cultural Diversity in Business. London: Nicholas Brealey.
Saturday, 5 May 2007
English Language: American or British?
A debate over how great the differences between the two kinds of English will become in the future has caused vehement argument; the following is my point of view.
As the parent varieties of the other native Englishes (Canadian English, Australian English, New Zealand English and South African English), British English and American English are today the two main varieties of the English-speaking world. Although much has already been said about the scope, the types, and the possible future effects of the inconsistencies between the two, the quarrel over the issue has by no means come to an end.
The cover of the journal Forum (XXVII, No. 3, July 1989) recalls the topic and surveys evidence of the difference between the two varieties over the centuries. Noah Webster (in Dissertations on the English Language) claimed that a further divergence of the American language from the English was necessary and inevitable. He also predicted that "North American English would eventually be as different from British as Dutch, Danish and Swedish are from German or from one another". Mark Twain (in The Stolen White Elephant) considered American and British English to be different languages and declared that the former, spoken "in its utmost purity", cannot be understood by an English person at all. This attitude had previously been expressed by Captain Thomas Hamilton (in Men and Manners in America), who said that "in another century, the dialect of the Americans will become utterly unintelligible to an Englishman."
Authors of the twentieth century hold an entirely different attitude from those of previous centuries: they tend to have a much stronger sense of sameness between American and British English. Thus Mitford M. Mathews (in Beginnings of American English) sees the two varieties as "so overwhelmingly alike." For Stephen Leacock (in How to Write), "There is not the faintest chance of there ever being an American language as apart from English." Randolph Quirk (in The New York Times Magazine) believes that "even in matters of pronunciation, it is difficult to find many absolute British and American distinctions"; Quirk claimed that even Noah Webster, after fruitless years of trying to create a "linguistic gulf", "came to realize that in all essentials Britons and Americans spoke the same language". Albert H. Marckwardt and Randolph Quirk later drew the same conclusion, treating British and American English as one language in their book A Common Language: British and American English, whose introduction reads: "The two varieties of English have never been so different as people have imagined, and the dominant tendency, for several decades now, has been that of convergence and even greater similarities."
This essay argues that the growing view of sameness between American and British English risks neglecting some significant differences whose impact in certain domains of life should not be overlooked. But before looking into the problems that arise from the differences between the two Englishes, I will give some background on the development of the status of American English in the world, its influence and expansion, and analyze the causes of its growth.
1. The development and popularity of American English
Long after its introduction into the New World, American English was still considered non-standard English. Kahane pointed out that to some people of the 1780s American English was the "underdog", a peasant's language that a "gentleman" would not speak. From a bilingual point of view, British English was the dominant language, linked to prestige and (linguistic) purism. The belief in the authority, or rather the superiority, of British English has persisted into the twentieth century, especially in the former British Empire and in fields of British influence. Thus it is reported that in China, teachers and school textbooks refer to and recommend Received Pronunciation as the model, along with standard British syntax, spelling and lexis. British English is also accepted as the criterion in major official examinations, for example the government-administered College English Test and Test for English Majors. Similar situations can be found in other countries; in Africa, for example, the West African Examination Council and the Joint Admission and Matriculation Board accept British English as the standard. It is also reported that in Cairo, as recently as 1984, some university students received lower grades if they used American spellings instead of British. Modiano wrote that in Europe, "we find teachers, British people as well as natives of the country in which they work, who follow the British English standard, and scorn the American English".
However, the above attitudes are nothing but the last traces of a long-gone period of British supremacy. According to Campbell and others, the beginning of a distinct lead for American English can be traced to the decades after World War II. This coincided with the simultaneous rise of the US as a military and technological power and the decline of the British Empire, which drew many to American English. From then on, American English has continuously extended its influence to every corner of the planet.
Britain made English an international language in the nineteenth century through its imperial power, but Americans have been the driving force behind its globalization in the twentieth. A great many examples of the influence of American English can be found in current books, magazines and films. According to Foster, the popularity of Americanisms among the young generation in Britain is "the hall-mark of the tough-guy and the he-man". After reviewing the presence of American English features in the British variety itself, Awonusi gives many examples of Americanized English in phonology and lexis that he has identified co-existing in his own Nigerian English. Modiano reports that, despite the influence of expert English teachers from Britain, Europeans "are subjected to a massive amount of American English", in which many students are much more interested. Campbell's examples of the influence of American English include the fact that young people in Europe, Asia and Russia use it in daily conversation, even when many of them have been taught British English. In Brazil, people demand courses in the American style rather than the British. American English is infiltrating territories formerly regarded as the preserve of British English, for example Nigeria, Egypt and Thailand, and penetrating Latin America, Japan and South Korea even more forcefully. Americanized words like guy, campus and movie, which did not originate in British English, are now widely used. Today even the BBC, which long used exclusively British English-speaking announcers, has added American announcers to its broadcasts, especially in programmes that go to countries like South Korea, where American English is favoured.
According to Campbell’s estimate, 70% of the roughly 350 million native English speakers speak the American version of English. In fact, the populations of the two leading mother tongue English countries are even more suggestive: The United States has a population of about 260 million while there is only about 55 million in Britain. This seemingly gives the American English much more advantage. The causes of the unprecedented expansion of American English include, as stated above, the post-World War II military and technological advancement. They are for demographic, political reasons, or have to do with the computer and the internet, the mass media, trade, the Peace Corps, and immigration policies:
The last few decades have witnessed the ever-increasing political dominance of America. This status was further reinforced in the late 1980s by the fall of communism, which allowed the US to penetrate and consolidate its position in formerly socialist territories. The lead of the US in the computer and Internet industry has long been established: Bill Gates and other computer pioneers are American, and they built the industry in the American way. As a consequence, the favoured language of the computer industry is American English, which leads people who use American hardware and software to absorb American English, consciously or not. American radio and television networks are spread all over the world. Campbell reports that, as recently as 1993, the United States controlled 75% of the world's television programming, "beaming 'Sesame Street' to Lagos, Nigeria, for example". The Voice of America and CNN have no competitors anywhere in the world. Trade with the US has steadily risen in volume over the past few years, even in territories formerly controlled by Britain and considered by many to be out of bounds to America; for example, the US is one of Nigeria's main partners in the crude oil business. The Peace Corps, founded by President J. F. Kennedy in 1961, has been a major channel of American emigration to various parts of the Third World. Peace Corps volunteers have worked in the medical sector, in agriculture, and very significantly in English language teaching, leaving considerable American English influence behind after returning home. The strict immigration laws of Britain, coupled with the alleged inhospitality of the British, have lately diverted students and others from various parts of the world to a substitute destination, the United States. The chain reaction of this factor has resulted in still more migration to the US.
For people tend to seek help from friends and relatives already living in the States, and the recent US visa programme "recruiting" 50,000 new immigrants each year has added to the incentive to migrate. The long-term effect of this large migration on the Americanization of English in the immigrants' native countries is obvious: the immigrants keep communicating with friends and relatives back home, and many eventually return and settle. Told above is the story of the offspring of the English language that has grown up and now threatens to shake the dominance of the mother variety. This phenomenon can hardly be seen elsewhere: it is not the case with Canadian, Belgian or Swiss French in relation to the French of France, nor with Latin American Spanish or Portuguese in relation to the Spanish of Spain or the Portuguese of Portugal. The speaker, and especially the learner, of English is now faced with the task of managing the co-existence of the two competing varieties. They are, however, not problem-free.
2. The problems
It is oversimplified to say, like Mathews cited in the introduction above, that American English and British English are "so overwhelmingly alike" or, like Quirk equally cited above, that "even in matters of pronunciation, it is difficult to find many British and American absolute distinctions". It all depends on what quantity Quirk considers to be many. Already, the list of pronunciation differences that he and Marckwardt themselves give affects hundreds of words, which can be considered major by any standard. Qualitatively, too, the differences are important. Learners all over the world will surely agree, for example, that the following differences are quite confusing: British English ant[i], mult[i], sem[i]; do[sail], fu[tail]; l[e]sure, fer[tail], [lef]tenant, g[o]t, p[o]tter vs American English ant[ai], mult[ai], sem[ai]; do[sl], fer[tl]; l[i:]sure, [lu]tenant, g[a:]t, p[a:]tter. And there are many other such contrasts. In lexis and grammar, too, we find many distinct contrasts with an obvious effect on communication, as will be shown later.
Differences between American and British English do not matter when the speaker or writer is familiar with both codes and can easily find in his/her own variety correspondences to features from the other. But confusion, embarrassment or sheer incomprehension will arise in many everyday situations when the listener or reader is not familiar with the other variety. Good illustrations come from your PC in this computer age: when your spelling checker, based on American English, flags colour, centre, dialogue, civilise, towards, defence, enclose and travelled in your text as incorrectly spelt, you need to be familiar with the two varieties to know that it expects the American spellings color, center, dialog, civilize, toward, defense, inclose and traveled. (If your text is in British English you will simply click "ignore" and move on.)
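The spell-checker scenario above amounts to a lookup from one variety's spelling to the other's. As a rough sketch (the word list here is just the handful of pairs from this paragraph, not a real checker's dictionary, and the function name is my own invention):

```python
# Toy British-to-American spelling lookup, illustrating the word pairs
# discussed above. The mapping is only an illustrative sample.
BRITISH_TO_AMERICAN = {
    "colour": "color",
    "centre": "center",
    "dialogue": "dialog",
    "civilise": "civilize",
    "towards": "toward",
    "defence": "defense",
    "enclose": "inclose",
    "travelled": "traveled",
}

def americanize(word):
    """Return the American spelling of a British word, if known;
    otherwise return the word unchanged."""
    return BRITISH_TO_AMERICAN.get(word.lower(), word)

print(americanize("centre"))   # -> center
print(americanize("Defence"))  # -> defense
print(americanize("table"))    # -> table (no known difference)
```

A real checker works against a full dictionary of the target variety rather than a pair list, but the principle, that each variety treats the other's spellings as errors, is the same.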
Knowledge of the two varieties is equally important in the classroom, where students and teachers often have to decide which form is correct. If teacher and students know that fiber and fibre, transportation and transport, proctor and invigilator, barrette and hairslide, faucet and tap, fall and autumn, five years back and five years ago, Monday through Friday and Monday to Friday, and a half meter and half a metre are features of American and British English respectively, the teaching and learning process can proceed unhampered, provided it is agreed that both varieties are accepted. But it is dramatic, especially in a testing situation, when features of one variety used by the student are unknown to the teacher or tester, who is familiar only with the other variety. The student will then be unjustly penalized.
He/she will be all the more penalized because some features of one variety may clearly violate the grammar of the other. Many American changes of category outright violate British English grammar. For example, accommodation becomes countable (e.g. Good accommodations are rare); some irregular verbs become regular (e.g. broadcasted, shined); some regular verbs become irregular (snuck out for British English sneaked out); some intransitive verbs become transitive (e.g. The plane departed New York; We protested the salary cuts); some transitive verbs become intransitive (e.g. I visited with my friends for British English I visited my friends); some adjectives may be used as adverbs (e.g. It's real nice). Other major departures from British English syntax appear in usages such as A is different than B, where than is used without the corresponding -er/more or less required for comparatives; Susan wants out, where a whole verb and its preceding particle (to go) are omitted; like I said, where like, instead of British English as, introduces a clause; I want for you to go, where a preposition intrudes; He looked out the window, where a preposition is deleted; and He just left, where, despite the clear fact that a past action has relevance to the present, the present perfect is not used. And so on.
An informal survey recently carried out among long-serving teachers of English showed that, although they declared that they readily accepted American English, they would consider the above American English usages, and many more, incorrect. All they know of American English is -or for -our (e.g. color), -ize for -ise, center for centre, and similar minor and common differences.
The problem of multiple standards is aggravated in countries like those of the former British Empire, where indigenized varieties of English have already established themselves authoritatively as local standards. There, the intrusion of American English adds to the already existing conflict and competition between British English and the local forms. Awonusi (1994) aptly describes this phenomenon in Nigeria. Other countries, like China, face a similar situation. In China English, for example, the most interesting manifestation of this triple scale occurs when a local form establishes itself that differs from both the established British and American forms. For example, you relax in a sitting room in Britain, a living room in America and a parlour in China; you fill in a form in Britain, fill out a form in the US and fill a form in China. Phonology offers many more such systematic contrasts.
In addition to the problems of correctness discussed above, the divergences between American and British English raise problems of intelligibility that cannot be altogether overlooked.
Studies specifically measuring the mutual intelligibility of American and British English are not available to me at the moment. But studies of the intelligibility of the two varieties from the point of view of the non-native speaker do exist, and they show that American and British English do not have the same degree of intelligibility. For example, in Smith's (1992) study conducted in America, a British English speaker (interacting with a Papua New Guinean) was 70% understandable to non-native speakers, while an American (interacting with an Indonesian) was 90% understandable. The rates of comprehensibility and interpretability in the same contexts were 90% and 60%, and 10% and 30%, respectively.
Differences between American and British English would have no major impact on intelligibility if they concerned only, for example, phonological features like American rhoticity, darkening of "l" across the board, the nasal twang, and some word-stress differences; spellings like -ize, -or and -er discussed above; or lexical items like vacation, movie, cab, and schedule (for British English timetable). But the various levels of analysis offer more serious and often less well-known divergences. In phonology, for example, a learner used to British English /dentist, kla:k, le3e/ (dentist, clerk, leisure) may not find American English /deni:st, kle:rk, li:3er/ intelligible unless the context is very supportive. And when one bears in mind that the processes yielding these differences affect a multitude of other words, one easily understands the risk of intelligibility failure.
Lexis also offers very interesting cases. A user of British English who listens to or reads American English will face intelligibility problems with words that do not exist in his/her own variety, like faucet (British English tap), janitor (caretaker), pitcher (jug), mortician (undertaker), realtor (estate agent), closet (cupboard), penitentiary [noun] (prison). He/she will also find words which exist in his/her variety but have a different meaning. The difference in meaning may be negligible and cause no communication problems, as with American English vacation vs British English holidays, call (by phone) vs ring, schedule vs timetable; both members of these pairs, and many others, are now used in Britain, which further reduces the risk of communication failure. But major semantic differences sometimes exist, such as between American English first floor, second floor and British English ground floor, first floor; pants and trousers; gas and petrol (American English, 12 February 1998). These extreme cases of divergence may cause communication problems or great embarrassment. Just imagine an American English speaker directing a British English speaker to the first or second floor, asking him/her for gas, or asking him/her to show his/her pants, and you will agree that American and British English are not "so overwhelmingly alike", as claimed above.
Cases of communication failure (or potential failure) due to such lexico-semantic problems are reported by Modiano. They include American English round-trip ticket vs British English return ticket, American English eraser vs British English rubber, and British English public school (vs its American English meaning).
Modiano requested a ticket to London but was asked whether he wanted a return ticket. Interpreting return ticket as meaning a ticket from London back to the place from which he was travelling, Modiano replied, "How do you expect me to get there?" As for British English rubber, the author found that in American English it is slang for condom, and its use in some contexts may cause embarrassment. Modiano goes on to note that the British English use of rubber rather than eraser is unknown in the US. Concerning public school, Modiano points out the contrast between British English, where it means a privately owned institution, and American English, where it means "schools owned and operated with public funds".
Efforts to be made
The best thing to deal with the situation of co-existence, or competition, of American English and British English would be some kind of harmonization. This solution, which suggests changing the natural course of a language or language variety, had hardly been succeeded. The other question is even if this solution were possible, the harmonization in the direction of American English are demographic, technological, political, commercial, and media-related, as analyzed above; in fact, most predictions, including that of David Crystal, are that English in future will be American-dominated. Arguments for harmonization in the direction of British English include the fact that the majority of dictionaries and English Language Teaching materials outside the US are British English–dominated. The other argument is of an emotional and symbolic feeling and mixed with the sense that British English is after all the mother variety.
Modiano’s solution to the exposure to, and mixing of, American English and British English is that Mid-Atlantic, spoken in increasing numbers by Europeans. Should replace British English as the educational standard in Europe. According to the author of “The Americanization of Euro-English”, Mid-Atlantic is “a variety that encourages neutral pronunciation and a vocabulary based on the interlocutor’s frame of reference’. The only problem in Modiano’s argument is that he carefully explains the reasons for Mid-Atlantic English to be used, and what does not constitute Mid-Atlantic, but fails to discuss in concrete terms some detailed characteristics of this variety.
For lacking in a guaranteed solution to the problem, I suggested that courses in contrastive analysis of American English and British English should be widely included in English Lessons to wipe out the confusion of those who learn English as a second language.
As the parent of the other native Englishes (Canadian English, Australian English, New Zealand English and South African English), British English, together with American English, constitutes one of the two main varieties of the English-speaking world today. Although much has already been said about the scope, the types, and the possible future effects of the divergence between the two varieties, the debate on the issue is far from settled.
The cover of the journal Forum XXVII, No 3, July 1989, recalls the topic and provides a survey of evidence of the difference between the two varieties of English over the centuries. Noah Webster (in Dissertations on the English Language) claimed that a further divergence of the American language from the English was necessary and inevitable. He also predicted that “North American English would eventually be as different from British as Dutch, Danish and Swedish are from German or from one another”. Mark Twain (in The Stolen White Elephant) held American and British English to be different languages and declared that the former, spoken “in its utmost purity”, cannot be understood by the English at all. This attitude had been expressed earlier by Captain Thomas Hamilton (in Men and Manners in America), who said that “in another century, the dialect of the Americans will become utterly unintelligible to an Englishman.”
Twentieth-century authors hold an entirely different attitude from those of the previous centuries: they tend to have a much stronger feeling of sameness between American and British English. Thus, Mitford M. Mathews (in Beginnings of American English) sees the two varieties as “so overwhelmingly alike.” For Stephen Leacock (in How to Write), “There is not the faintest chance of there ever being an American language as apart from English.” Randolph Quirk (in The New York Times Magazine) believes that “even in matters of pronunciation, it is difficult to find many absolute British and American distinctions”; Quirk claimed that even Noah Webster, after fruitless years of trying to create a “linguistic gulf”, “came to realize that in all essentials Britons and Americans spoke the same language”. Albert H. Marckwardt and Randolph Quirk later concluded that British and American English are essentially the same in their book A Common Language: British and American English. Its introduction, excerpted from the book, reads that “The two varieties of English have never been so different as people have imagined, and the dominant tendency, for several decades now, has been that of convergence and even greater similarities.”
The present paper argues that this growing view of sameness between American and British English carries the risk of neglecting some significant differences whose impact in certain domains of life should not be overlooked. But before looking into the problems which arise from the differences between the two Englishes, I will give some background on the development of the status of American English in the world, its influence and expansion, and analyze the causes of its growth.
1. The development and popularity of American English
Long after its introduction into the New World, American English was still considered non-standard English. Kahane pointed out that, in the view of some people of the 1780s, American English was the “underdog”, a peasant’s language that a “gentleman” would not speak. From a bilingual point of view, British English was the dominant variety, associated with prestige and (linguistic) purism. The belief in the authority, indeed the superiority, of British English has persisted into the twentieth century, especially in the former British Empire and in the spheres of British influence. Thus, it is reported that in China, teachers and school textbooks refer to and recommend Received Pronunciation as the model, as well as standard British syntax, spelling and lexis. British English is also encouraged and accepted as the standard in major official examinations, for example the College English Test and the Test for English Majors, which are conducted by the government. Similar situations can be found in other countries: in Africa, for example, the West African Examinations Council and the Joint Admissions and Matriculation Board accept British English as the standard. It is also reported that in Cairo, as recently as 1984, some university students received lower grades if they used American spellings instead of British. Modiano wrote that in Europe, “we find teachers, British people as well as natives of the country in which they work, who follow the British English standard, and scorn the American English”.
However, the above attitudes are nothing but the lingering influence of a long-gone period of British supremacy. According to Campbell and others, the beginning of a distinct lead of American English can be traced to the decades after World War II. This coincides with the simultaneous rise of the US as a military and technological power and the decline of the British Empire, which drove many towards American English. From then on, American English has continuously extended its influence to every corner of the planet.
Britain made English an international language in the nineteenth century through its imperial power, but Americans have been the driving force behind its globalization in the twentieth century. A great many examples of the influence of American English can be found in current books, magazines and movies. According to Foster, the popularity of Americanisms among the young generation in Britain is “the hall-mark of the tough-guy and the he-man”. After reviewing the presence of American English features in the British variety itself, Awonusi gives a great many examples of Americanized English in phonology and lexis that he has identified co-existing in his own Nigerian English. Modiano reports that, despite the influence of expert English teachers from Britain, Europeans “are subjected to a massive amount of American English”, in which many students are much more interested. Campbell’s examples of the influence of American English include the fact that young people in Europe, Asia and Russia use it in daily conversation, even when many of them have been taught British English. In Brazil, people demand courses in American rather than British English. American English is infiltrating territories formerly under British English influence, for example Nigeria, Egypt and Thailand, and is penetrating Latin America, Japan and South Korea even more forcefully. American words like guy, campus and movie, which did not exist in British English, are now widely used. Today even the BBC, which long used British English–speaking announcers exclusively, has added American announcers to its broadcasts, especially in programs directed at countries like South Korea, where American English is favored.
According to Campbell’s estimate, 70% of the roughly 350 million native English speakers speak the American version of English. In fact, the populations of the two leading mother-tongue English countries are even more suggestive: the United States has a population of about 260 million, while Britain has only about 55 million. This alone gives American English a considerable advantage. The causes of the unprecedented expansion of American English include, as stated above, the post-World War II military and technological advancement. They are demographic and political, or have to do with the computer and the Internet, the mass media, trade, the Peace Corps, and immigration policies:
The last few decades have witnessed an ever-increasing political domination of the planet by America. This status was further reinforced in the late 1980s by the fall of communism, which resulted in the US penetrating and consolidating its position in formerly socialist territories. The lead of the US in the computer and Internet industry has long been established. Bill Gates and the other computer pioneers are Americans, and their creations carry American usage. As a consequence of the US domination of the computer industry, the favored language of that industry is American English, which forces people who use American computer hardware and software to accept American English, consciously or unconsciously. American radio and television networks are spread all over the world. Campbell reports that, as recently as 1993, the United States controlled 75% of the world’s television programming, “beaming ‘Sesame Street’ to Lagos, Nigeria, for example”. The Voice of America and CNN have no competitors the world over. Trade with the US has steadily risen in volume over the past few years, even in territories formerly controlled by Britain and considered by many to be out of bounds to America; for example, the US is one of Nigeria’s main partners in the crude oil business. The Peace Corps, founded under President J. F. Kennedy in 1961, has been a major cause of emigration of Americans to various parts of the Third World. Peace Corps volunteers have been working in the medical sector, in agriculture and, very significantly, in English language teaching, leaving considerable influence of American English behind after their return. The strict immigration laws of Britain, coupled with the alleged inhospitality of the British, have of late diverted students and others from various parts of the world to a substitute destination, the United States. The chain reaction of this factor has resulted in even more migration to the US.
People tend to seek help from friends and relatives already living in the States. The recent US visa policy of “recruiting” 50,000 new immigrants each year has added to the incentive to migrate to the US. The long-term effect of this large migration on the Americanization of English in the immigrants’ native countries is obvious: the immigrants continue to communicate with their friends and relatives back home, and many eventually return and settle. Told above is the story of the offspring of an English language that has grown and now threatens to shake the domination of the mother variety. This phenomenon can hardly be seen elsewhere: it is not the case with Canadian, Belgian or Swiss French in relation to the French of France, nor with Latin American Spanish or Portuguese in relation to the Spanish of Spain or the Portuguese of Portugal. The speaker, and especially the learner, of English is now faced with the task of managing the co-existence of the two competing varieties, which is not problem-free.
The problems
It is oversimplified to say, like Mathews in the introduction above, that American English and British English are “so overwhelmingly alike” or, like Quirk equally cited above, that “even in matters of pronunciation, it is difficult to find many British and American absolute distinctions”. It really depends here on what quantity Quirk considers to be many. Already, the list of pronunciation differences that he and Marckwardt themselves give affects hundreds of words, which can be considered major by any standard. Qualitatively, too, the differences are important. Learners all over the world will surely agree, for example, that the following differences are quite confusing: British English ant[i], mult[i], sem[i]; do[sail], fer[tail], fu[tail]; l[e]sure; [lef]tenant; g[o]t, p[o]tter vs American English ant[ai], mult[ai], sem[ai]; do[sl], fer[tl], fu[tl]; l[i:]sure; [lu]tenant; g[a:]t, p[a:]tter. And there are many other such contrasts. In lexis and grammar, too, we can find many distinct contrasts with an obvious incidence on communication, as will be shown later.
Differences between American and British English do not matter when the speaker or writer is familiar with the two codes and can easily find in his/her own variety correspondences to features from the other variety. But confusion, embarrassment or sheer incomprehension will arise in many daily-life situations when the listener or reader is not familiar with the other variety. Good illustrations come from your PC in this computer age: when your spelling checker, based on American English, flags colour, centre, dialogue, civilise, towards, defence, enclose and travelled in your text as incorrectly spelt, you need to be familiar with the two varieties to know that it expects the American spellings color, center, dialog, civilize, toward, defense, inclose and traveled. (If your text is in British English you will simply click “ignore” and move on.)
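The spelling-checker scenario above can be sketched in a few lines. This is a toy illustration, not the behaviour of any real checker; the word pairs are the ones listed in the text:

```python
# British spellings paired with the American forms an American-English
# spelling checker would suggest (pairs taken from the text).
BRITISH_TO_AMERICAN = {
    "colour": "color",
    "centre": "center",
    "dialogue": "dialog",
    "civilise": "civilize",
    "towards": "toward",
    "defence": "defense",
    "enclose": "inclose",
    "travelled": "traveled",
}

def flag_british_spellings(text):
    """Return (flagged word, suggested American spelling) pairs,
    as a purely American-English checker would report them."""
    return [(w, BRITISH_TO_AMERICAN[w])
            for w in text.lower().split()
            if w in BRITISH_TO_AMERICAN]

print(flag_british_spellings("we travelled towards the centre"))
# [('travelled', 'traveled'), ('towards', 'toward'), ('centre', 'center')]
```

A real checker works from a full dictionary rather than a short mapping, but the effect on a British-English text is exactly this: every variety-specific spelling gets flagged.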
Knowledge of the two varieties is equally important in the classroom, where a decision often has to be made about what form is correct. If the teacher and students know that fiber and fibre, transportation and transport, proctor and invigilator, barrette and hairslide, faucet and tap, fall and autumn, five years back and five years ago, Monday through Friday and Monday to Friday, a half meter and half a metre are features of American and British English respectively, the teaching and learning process can proceed unhampered, provided it is agreed that both varieties are accepted. But it is dramatic, especially in a testing situation, when features from one variety used by the student are unknown to the teacher or tester, who is familiar only with the other variety. The student will then be unjustly penalized.
He/she will be all the more penalized as some features in one variety may clearly violate the grammar of the other. Many American changes of category are in outright violation of British English grammar. For example, accommodation becomes countable (e.g. Good accommodations are rare); some irregular verbs become regular (e.g. broadcasted, shined); some regular verbs become irregular (snuck out for British English sneaked out); some intransitive verbs become transitive (e.g. The plane departed New York; We protested the salary cuts); some transitive verbs become intransitive (e.g. I visited with my friends for British English I visited my friends); some adjectives may be used as adverbs (e.g. It’s real nice). Other major violations of British English syntax are seen in usages such as: A is different than B, where than is used without the corresponding –er/more or less required for comparatives; Susan wants out, where a whole verb and its preceding particle (to go) are omitted; like I said, where like, instead of British English as, introduces a clause; I want for you to go, where there is a major intrusion of a preposition; He looked out the window, where there is a major deletion of a preposition; and He just left, where, despite the clear fact that a past action has some relevance to the present, the present perfect is not used.
An informal survey recently carried out among experienced teachers of English showed that, although they declared that they readily accepted American English, they would consider the above American English usages, and many more, incorrect. All they know of American English is –or for –our (e.g. color), –ize for –ise, center for centre, and similar minor and common differences.
The problem of multiple standards is aggravated in countries like those of the former British Empire, where indigenized varieties of English have already established themselves authoritatively as local standards. There, the intrusion of American English adds to the already existing conflict and competition between British English and the local forms. Awonusi (1994) aptly describes this phenomenon in Nigeria. Other countries, like China, face a similar situation. In China English, for example, the most interesting manifestation of this triple scale is when a local form establishes itself and differs from both the established British English and American English forms. For example, you relax in a sitting room in Britain, a living room in America and a parlour in China; you fill in a form in Britain, fill out a form in the US and fill a form in China. Phonology offers many more such systematic contrasts.
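The three-way contrasts just mentioned can be tabulated as a small lookup, sketched here with only the two examples given in the text (the data structure and labels are purely illustrative):

```python
# Three-way variety contrasts from the text, keyed by concept.
VARIETY_FORMS = {
    "room for relaxing": {
        "British English": "sitting room",
        "American English": "living room",
        "China English": "parlour",
    },
    "completing a form": {
        "British English": "fill in a form",
        "American English": "fill out a form",
        "China English": "fill a form",
    },
}

# Print each concept with its three variety-specific forms.
for concept, forms in VARIETY_FORMS.items():
    variants = "; ".join(f"{variety}: {form}" for variety, form in forms.items())
    print(f"{concept} -> {variants}")
```

The point of the tabulation is that a learner in such a country must track three columns, not two, for every contrast of this kind.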
In addition to the problems of correctness discussed above, the divergences between American and British English raise problems of intelligibility that cannot be altogether overlooked.
Studies specifically measuring the mutual intelligibility of American English and British English are not available to me at the moment. But studies of the intelligibility of the two varieties from the point of view of the non-native speaker do exist, and they show that American English and British English do not have the same degree of intelligibility. For example, in Smith’s (1992) study conducted in America, a British English speaker (interacting with a Papua New Guinean) is 70% understandable to non-native speakers, while an American (interacting with an Indonesian) is 90% understandable. The rates of comprehensibility and interpretability in the same context are 90% and 60%, and 10% and 30%, respectively.
Differences between American English and British English would have no major impact on intelligibility if they only concerned, for example, features of phonology like American English rhoticity, darkening of “l” across the board, the nasal twang and some word-stress differences; of spelling like –ize, –or and –er discussed above; or lexical items like vacation, movie, cab, schedule (for British English timetable), etc. But the various levels of analysis offer more serious, and very often less known, divergences. In phonology, for example, a learner who is used to British English /dentist, kla:k, le3e/ (dentist, clerk, leisure) may not find American English /deni:st, kle:rk, li:3er/ intelligible unless the context is very supportive. And when one bears in mind that the processes yielding these differences affect a multitude of other words, one easily understands the risk of intelligibility failure.
Lexis also offers very interesting cases. A user of British English who listens to or reads American English will face problems of intelligibility with words that do not exist in his/her own variety, like faucet (British English tap), janitor (caretaker), pitcher (jug), mortician (undertaker), realtor (estate agent), closet (cupboard), penitentiary [noun] (prison). He/she will also find words which exist in his/her variety but have a different meaning. The difference in meaning may be negligible and cause no communication problems, as in American English vacation vs British English holidays, call (by phone) vs ring, schedule vs timetable; both members of these pairs, and of many others, are now used in Britain, which further reduces the risk of communication failure. But major semantic differences sometimes exist, such as between American English first floor, second floor and British English ground floor, first floor, pants and trousers, gas and petrol (from American English, 12th of February 1998). These extreme cases of divergence may cause communication problems or great embarrassment. Just imagine an American English speaker directing a British English speaker to the first or second floor, asking him/her for gas, or asking him/her to show his/her pants, and you will agree that American English and British English are not “so overwhelmingly alike”, as claimed above.
Cases of communication failure (or potential failure) due to such lexico-semantic problems are reported by Modiano. They include American English round trip ticket vs British English return ticket, American English eraser vs British English rubber, and British English public school (vs its American English meaning).
Modiano requested a ticket to London and was asked whether he wanted a return ticket. Interpreting return ticket as a ticket from London back to the place from which he was travelling, Modiano replied, “How do you expect me to get there?” As for British English rubber, the author found that in American English it is slang for condom, and its use in some contexts may cause embarrassment. Modiano goes on to note that the British English use of rubber rather than eraser is unknown in the US. Concerning public school, Modiano points out the contrast between British English, where it means privately owned institutions, and American English, where it means “schools owned and operated with public funds”.
Efforts to be made
The best way to deal with the situation of co-existence, or competition, of American English and British English would be some kind of harmonization. This solution, which implies changing the natural course of a language or language variety, has hardly ever succeeded. The other question is, even if harmonization were possible, in which direction it should go. The arguments for harmonization in the direction of American English are demographic, technological, political, commercial and media-related, as analyzed above; in fact, most predictions, including that of David Crystal, are that English in the future will be American-dominated. Arguments for harmonization in the direction of British English include the fact that the majority of dictionaries and English Language Teaching materials outside the US are British English–dominated. The other argument is emotional and symbolic, mixed with the sense that British English is, after all, the mother variety.
Modiano’s solution to the exposure to, and mixing of, American English and British English is that Mid-Atlantic English, spoken by increasing numbers of Europeans, should replace British English as the educational standard in Europe. According to the author of “The Americanization of Euro-English”, Mid-Atlantic is “a variety that encourages neutral pronunciation and a vocabulary based on the interlocutor’s frame of reference”. The only problem with Modiano’s argument is that he carefully explains the reasons for Mid-Atlantic English to be used, and what does not constitute Mid-Atlantic, but fails to discuss in concrete terms the detailed characteristics of this variety.
In the absence of a guaranteed solution to the problem, I suggest that courses in the contrastive analysis of American English and British English be widely included in English lessons, to dispel the confusion of those who learn English as a second language.
Otolaryngology
Acute Hearing Loss
1、General. Hearing loss may develop over days or acutely. It may be either conductive in nature (ossicle disruption from trauma; tympanic-membrane perforation from a cotton-tipped applicator or from noise; cerumen in the canal; otitis media; barotrauma; etc.) or sensorineural (CVA, infection, tumor, Ménière's disease, herpes zoster oticus (may see vesicles), syphilis, collagen-vascular disease, ototoxic drug exposure, etc.). An isolated vascular event causing unilateral hearing loss is not uncommon in young adults.
2、Presentation. Decrease in auditory acuity. May document using audiologic testing or by Rinne and Weber tests.
3、Approach. Treat cause if found. If no obvious cause is found and serious illness has been ruled out by a complete history and physical (especially neurologic exam), patient may be discharged with a follow-up appointment with ENT for further, specialized evaluation.
Nasal trauma.
1、Septal hematoma. Diagnosis requires a high index of suspicion and direct inspection of the septum after any nasal trauma. The main symptom is progressive posttraumatic nasal obstruction. The nostril may be obstructed by a large, soft, red, or bluish mass. Its appearance can be confused with a polyp, a deviated septum, or enlarged turbinates. Septal hematomas are easily missed unless the entire septum is observed visually and palpated with a blunt instrument.
1)、Evacuation of the hematoma within 48 hours is necessary to avoid avascular necrosis of the cartilage, abscess formation, or saddle deformity of the nose.
2)、Any finding of a boggy, fluctuant septum that is tender out of proportion to other findings warrants treatment.
3)、Treatment of septal hematoma.
1、Vasoconstrict and anesthetize the nasal mucosa with topical phenylephrine-tetracaine or cocaine.
2、Make a long vertical incision through the mucosa overlying the hematoma.
3、Use suction or normal-saline lavage to clean out all clots and place a sterile rubber band drain above the exposed cartilage.
4、Pack with a Merocel "rocket" or with petrolatum (Vaseline) gauze as described in the epistaxis section.
5、Place the patient on broad-spectrum antibiotic therapy. Reexamine, reaspirate, and repack daily while instructing the patient to avoid activities (nose-blowing, nasal sneezing) that increase nasal and sinus pressures.
6、If no recurrence of hematoma is seen, remove the drain and repack the nasal passage for final removal 24 hours later. Antibiotics may be stopped when the packing is discontinued.
7、Bilateral hematomas are handled in a similar manner, but ensure that the incisions are staggered over the septum so that no cartilage is underperfused on both sides.
2、Nasal fracture. Palpate dorsum of nose for deformity, instability, crepitus, and tenderness after any blunt injury causing bleeding from the nose. Diagnosis is confirmed by radiographs. However, treatment is based on presence of deformity when swelling is resolved, and so deferring radiographs until swelling is resolved is acceptable; this should be discussed with the patient. Initial bleeding should be controlled and septal hematoma ruled out. Early reduction is possible if the injury is acute and swelling insignificant. Closed reduction should occur within 3 to 7 days for children and 5 to 10 days for adults.
Otitis Media
A、General. Otalgia, fever, irritability, previous or coexisting URI, ear rubbing, and feeding problems are common presenting symptoms. However, any of the above symptoms, including ear pain and fever, may be absent. Many episodes are viral in origin. The most common bacterial pathogens are Pneumococcus, Haemophilus influenzae, and Moraxella catarrhalis.
B、Diagnosis. Diagnosis involves adequate observation of the tympanic membrane (TM), which may require cerumen removal. Hyperemia of the TM is an early sign of otitis media, but "red ear" alone does not establish the diagnosis. Other findings include bulging of the TM, indistinct landmarks, diminished light reflex, and limited mobility on pneumatic insufflation. Mastoiditis, meningitis, and abscess are possible complications. Of most concern, however, is impairment of hearing associated with middle ear effusion. Tympanometry may be used to establish the presence of fluid in the middle ear.
C、Treatment. Treatment with antibiotics is the standard of care in the United States, though this is not the case in many other developed countries. 81% of cases of OM will resolve spontaneously; it is necessary to treat 7 patients to affect the outcome in 1. It is difficult, if not impossible, to demonstrate the superiority of one antibiotic over another. Start with low-cost agents that are well tolerated. If there is no response in 48 to 72 hours, consider changing the antibiotic.
The most cost-effective agents.
Amoxicillin 40 mg/kg/day divided TID (125 mg/5 ml or 250 mg/5 ml suspensions) for 10 days [$14].
Trimethoprim-sulfamethoxazole oral suspension 1 ml/kg/day divided BID (8 mg/kg trimethoprim and 40 mg/kg sulfamethoxazole per day) for 10 days [$25]. Avoid in children less than 2 months of age.
Erythromycin-sulfisoxazole dosed as 50 mg of erythromycin per kilogram per day divided QID (suspension is 200 mg of erythromycin per 5 ml) for 10 days [$47].
The "second-line" drugs, which are more expensive.
Cefaclor 40 mg/kg/day divided BID for 10 days (suspensions dosed 125 mg/5 ml, 250 mg/5 ml) [$68].
Amoxicillin-clavulanate dosed as 40 mg amoxicillin/kg/day divided TID for 10 days [$66].
Cefixime 8 mg/kg single daily dose or divided BID (100 mg/5 ml suspension) for 10 days [$71].
Clarithromycin 500 mg PO BID or 7.5 mg/kg PO BID for children.
Recently, ceftriaxone 50 mg/kg IM has been shown to be almost as effective as a traditional 10-day course of antibiotics. However, it is expensive and, because of emerging resistant bacteria, should be reserved for cases in which compliance is questionable.
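The weight-based regimens above all follow the same arithmetic: mg/kg/day divided over the daily doses, then converted to a suspension volume. A minimal sketch for illustration only, not clinical guidance; the example numbers are the amoxicillin figures from the list, applied to a hypothetical 15 kg child:

```python
def dose_per_administration(weight_kg, mg_per_kg_per_day, doses_per_day):
    """Single-dose amount in mg for a weight-based daily regimen."""
    return weight_kg * mg_per_kg_per_day / doses_per_day

def volume_per_dose(dose_mg, suspension_mg, suspension_ml):
    """Millilitres of suspension that deliver dose_mg."""
    return dose_mg * suspension_ml / suspension_mg

# Example: 15 kg child, amoxicillin 40 mg/kg/day divided TID,
# using the 250 mg/5 ml suspension.
dose = dose_per_administration(15, 40, 3)   # 200.0 mg per dose
ml = volume_per_dose(dose, 250, 5)          # 4.0 ml per dose
print(f"{dose:.0f} mg = {ml:.1f} ml per dose, three times daily")
```

The same two functions cover the BID and QID regimens in the list by changing `doses_per_day` and the suspension strength.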
If there is evidence of TM rupture (purulent drainage from canal), add Cortisporin otic suspension QID. The solution is acidic and tends to sting when administered.
Although traditional, a follow-up exam is not necessary in the asymptomatic patient older than 15 months to 2 years of age. If, however, the patient is still symptomatic or the parent does not believe the otitis has resolved, a follow-up exam can be done at 2 weeks.
In adults, complete resolution of symptoms such as ear fullness may take 6 weeks.
Decongestants play no role in the resolution of acute otitis media though they may be needed for associated conditions.
Pain control with topical solutions (such as Auralgan) or systemic agents such as acetaminophen, ibuprofen, or acetaminophen with codeine or hydrocodone may be required for patient comfort.
D、Recurrent Acute Otitis Media. Antibiotic prophylaxis (such as a single dose of amoxicillin or TMP/SMX at bedtime) should be considered for recurrent disease. Avoiding exposure to cigarette smoke may be helpful. Referral for discussion of tympanostomy tube placement should be considered if there are chronic bilateral effusions of more than 3 months in duration, a unilateral effusion of more than 3 months in duration, language-development delay, hearing loss of >20 dB, or failure of antibiotic prophylaxis.
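The referral criteria above can be restated as simple boolean logic. A sketch only: the thresholds come from the text, while the function and parameter names are invented for illustration:

```python
def should_refer_for_tubes(bilateral_effusion_months=0,
                           unilateral_effusion_months=0,
                           language_delay=False,
                           hearing_loss_db=0,
                           prophylaxis_failed=False):
    """Tympanostomy-tube referral criteria from the text:
    effusion (bilateral or unilateral) > 3 months, language delay,
    hearing loss > 20 dB, or failed antibiotic prophylaxis."""
    return (bilateral_effusion_months > 3
            or unilateral_effusion_months > 3
            or language_delay
            or hearing_loss_db > 20
            or prophylaxis_failed)

print(should_refer_for_tubes(hearing_loss_db=25))  # True
print(should_refer_for_tubes())                    # False
```

Any single criterion suffices, which is why the conditions are joined with `or` rather than `and`.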
Epistaxis
Causes. Nose picking, external trauma, dry nasal mucosa with vascular fragility, foreign bodies, blood dyscrasias, neoplasms, infections, vitamin deficiencies, toxic metal exposures, septal deformities, telangiectasias, angiofibromas, and aneurysm ruptures.
Determining the source of bleeding is often the most difficult part of the examination.
The posterior area of the nose is supplied by the ethmoid arteries (from the superior internal carotids) and the sphenopalatine arteries (from the external carotids); bleeding from these vessels is often difficult to control.
Kiesselbachs arterial plexus supplies the more easily controlled anterior nasal mucosa.
If the bleeding has been prolonged, check the patients Hb and HCT. A PT/INR, PTT, and platelet count may also be indicated depending on the clinical situation.
Gather a nasal speculum, a "hands-free" head mirror or lamp, suction with a Frazier suction tip, cocaine or tetracaine-epinephrine solution spray applicator, an electrocautery pencil, silver nitrate sticks, nasal packing (Merocel sponge "nasal rocket" packs, Vaseline gauze), and bayonet forceps to examine and treat a comfortable sitting or supine patient. If bleeding is easily seen and is coming from the septum, direct pressure to the site after generously spraying of the area with the vasoconstrictor-analgesic solution may be sufficient (pinch nose for 10 to 15 minutes).
If this doesnt work, try silver nitrate for small bleeders or electrocautery for the larger vessels on a well-anesthetized septum. Although there is no clear advantage to electrocautery, it may be effective in a patient who fails chemical cautery.
If this is ineffective, or if the bleeding is from under the turbinates, insert the dry Merocel pack entirely into the nostril (using a lubricant such as K-Y Jelly) and moisten it with phenylephrine or saline until it has completely formed to the convoluted nasal passage, leaving it in as necessary for bleeding control for at least 48 hours. Alternatively, pack with Vaseline gauze soaked with phenylephrine.
Patients with COPD could suffer hypoxic distress because the nasopulmonary reflex produces a drop in the PO2 by 15 mm Hg in most people who have their noses packed!
Prescribe to all patients requiring nasal packing broad-spectrum antibiotics while they are packed; TMP/SMX, amoxicillin-clavulanate, clarithromycin, or cefadroxil are good choices.
Examine the uvula. If its still dripping blood, hemostasis is inadequate and posterior packing may be required. Temporizing measures include the use of one of several commercially available posterior nasal packs or the use of a Foley catheter inserted into the posterior nasal area and inflated. Anyone requiring posterior packing should also have an anterior pack placed. Obtain an otolaryngologic consultation and hospitalize any patient with a posterior nose bleed for observation or vascular intervention.
Consult with an otolaryngologist if posterior packing is required, if nose requires repacking several times during a single ED visit, or for any patients develop signs or symptoms of an infection.
Peritonsillar Abscess (Quinsy)
General. A localized area of abscess that is typically unilateral and occurs in patients with tonsillitis.
Cause. Depending on the series, the most common organism is Streptococcus followed by anaerobes.
Clinically. Symptoms include severe throat pain with radiation to the ear, drooling from inability to swallow saliva, trismus, and fever. Almost pathognomonic of a peritonsillar abscess is a muffled, "hot potato," voice. On exam there is unilateral swelling of the palate and anterior pillar with displacement of the tonsil downward and medially and movement of the uvula away from the involved side.
Treatment. IV or IM penicillin and tonsillectomy. Several series have documented good results using oral antibiotics and needle drainage, which may need to be done many times. The major concern is the possibility of airway obstruction though this is a very rare event. ENT consultation is recommended.
2、Presentation. Decrease in auditory acuity. May be documented by audiologic testing or by the Rinne and Weber tuning-fork tests.
3、Approach. Treat the cause if found. If no obvious cause is identified and serious illness has been ruled out by a complete history and physical (especially the neurologic exam), the patient may be discharged with a follow-up appointment with ENT for further, specialized evaluation.
Nasal trauma.
1、Septal hematoma. Diagnosis requires a high index of suspicion and direct inspection of the septum after any nasal trauma. The main symptom is progressive posttraumatic nasal obstruction. The nostril may be obstructed by a large, soft, red, or bluish mass. Its appearance can be confused with a polyp, a deviated septum, or enlarged turbinates. Septal hematomas are easily missed unless the entire septum is inspected visually and palpated with a blunt instrument.
1)、Evacuation of the hematoma within 48 hours is necessary to avoid avascular necrosis of the cartilage, abscess formation, or saddle deformity of the nose.
2)、Any finding of a boggy, fluctuant septum that is tender out of proportion to other findings warrants treatment.
3)、Treatment of septal hematoma.
1、Vasoconstrict and anesthetize the nasal mucosa with topical phenylephrine-tetracaine or cocaine.
2、Make a long vertical incision through the mucosa overlying the hematoma.
3、Use suction or normal-saline lavage to clean out all clots and place a sterile rubber band drain above the exposed cartilage.
4、Pack with a Merocel "rocket" or with petrolatum (Vaseline) gauze as described in the epistaxis section.
5、Place the patient on broad-spectrum antibiotic therapy. Reexamine, reaspirate, and repack daily while instructing the patient to avoid activities (nose-blowing, sneezing) that increase nasal and sinus pressures.
6、If no recurrence of hematoma is seen, remove the drain and repack the nasal passage for final removal 24 hours later. Antibiotics may be stopped when the packing is discontinued.
7、Bilateral hematomas are handled in a similar manner, but ensure that the incisions are staggered over the septum so that no area of cartilage is deprived of perfusion on both sides.
2、Nasal fracture. Palpate the dorsum of the nose for deformity, instability, crepitus, and tenderness after any blunt injury causing bleeding from the nose. Diagnosis is confirmed by radiographs. However, treatment is based on the presence of deformity once swelling has resolved, so deferring radiographs until the swelling resolves is acceptable; this should be discussed with the patient. Initial bleeding should be controlled and septal hematoma ruled out. Early reduction is possible if the injury is acute and swelling is insignificant. Closed reduction should occur within 3 to 7 days for children and 5 to 10 days for adults.
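The closed-reduction timing windows above can be expressed as a small helper; a minimal sketch in Python, using only the 3-to-7-day (children) and 5-to-10-day (adults) windows quoted above. The function names are illustrative, and the child/adult distinction is passed in rather than inferred, since the text gives no age cutoff.

```python
def reduction_window(is_child: bool) -> tuple:
    """Closed-reduction window in days after injury, per the text:
    3-7 days for children, 5-10 days for adults."""
    return (3, 7) if is_child else (5, 10)

def within_window(is_child: bool, days_since_injury: int) -> bool:
    """True if the patient is still inside the reduction window."""
    lo, hi = reduction_window(is_child)
    return lo <= days_since_injury <= hi

# e.g. an adult seen 6 days after injury falls inside the 5-10 day window
print(within_window(False, 6))
```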
Otitis Media
A、General. Otalgia, fever, irritability, previous or coexisting URI, ear rubbing, and feeding problems are common presenting symptoms. However, any of the above symptoms, including ear pain and fever, may be absent. Many episodes are viral in origin. The most common bacterial pathogens are Pneumococcus, Haemophilus influenzae, and Moraxella catarrhalis.
B、Diagnosis. Diagnosis involves adequate observation of the tympanic membrane (TM), which may require cerumen removal. Hyperemia of the TM is an early sign of otitis media, but "red ear" alone does not establish the diagnosis. Other findings include bulging of the TM, indistinct landmarks, diminished light reflex, and limited mobility on pneumatic insufflation. Mastoiditis, meningitis, and abscess are possible complications. Of most concern, however, is impairment of hearing associated with middle ear effusion. Tympanometry may be used to establish the presence of fluid in the middle ear.
C、Treatment. Treatment with antibiotics is the standard of care in the United States, though this is not the case in many other developed countries. About 81% of cases of OM resolve spontaneously; roughly 7 patients must be treated to change the outcome in 1 (a number needed to treat of 7). It is difficult, if not impossible, to demonstrate the superiority of one antibiotic over another. Start with low-cost agents that are well tolerated. If there is no response in 48 to 72 hours, consider changing the antibiotic.
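The number-needed-to-treat figure follows directly from the difference in resolution rates; a quick check in Python, where the 95% resolution rate with antibiotics is an assumption chosen to be consistent with the stated NNT of 7 (the text gives only the 81% spontaneous-resolution rate):

```python
# NNT = 1 / absolute risk reduction (ARR)
spontaneous_resolution = 0.81   # from the text
antibiotic_resolution = 0.95    # assumed; not stated in the text

arr = antibiotic_resolution - spontaneous_resolution
nnt = 1 / arr
print(round(nnt))  # about 7 patients treated per additional resolution
```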
The most cost-effective agents.
Amoxicillin 40 mg/kg/day divided TID (125 mg/5 ml or 250 mg/5 ml suspensions) for 10 days [$14].
Trimethoprim-sulfamethoxazole oral suspension 1 ml/kg/day divided BID (8 mg/kg trimethoprim and 40 mg/kg sulfamethoxazole per day) for 10 days [$25]. Avoid in children less than 2 months of age.
Erythromycin-sulfisoxazole dosed as 50 mg of erythromycin per kilogram per day divided QID (suspension is 200 mg of erythromycin per 5 ml) for 10 days [$47].
The "second-line" drugs, which are more expensive.
Cefaclor 40 mg/kg/day divided BID for 10 days (suspensions dosed 125 mg/5 ml, 250 mg/5 ml) [$68].
Amoxicillin-clavulanate dosed as 40 mg amoxicillin/kg/day divided TID for 10 days [$66].
Cefixime 8 mg/kg single daily dose or divided BID (100 mg/5 ml suspension) for 10 days [$71].
Clarithromycin 500 mg PO BID or 7.5 mg/kg PO BID for children.
Recently, ceftriaxone 50 mg/kg IM has been shown to be almost as effective as a traditional 10-day course of antibiotics. However, it is expensive and, because of emerging resistant bacteria, should be reserved for cases in which compliance is questionable.
If there is evidence of TM rupture (purulent drainage from canal), add Cortisporin otic suspension QID. The solution is acidic and tends to sting when administered.
Although traditional, a follow-up exam is not necessary in the asymptomatic patient older than roughly 15 months to 2 years of age. If, however, the patient is still symptomatic or the parent does not believe the otitis has resolved, a follow-up exam can be done at 2 weeks.
In adults, complete resolution of symptoms such as ear fullness may take 6 weeks.
Decongestants play no role in the resolution of acute otitis media though they may be needed for associated conditions.
Pain control with topical solutions (such as Auralgan) or systemic agents such as acetaminophen, ibuprofen, or acetaminophen with codeine or hydrocodone may be required for patient comfort.
D、For Recurrent Acute Otitis Media. Antibiotic prophylaxis (such as a single dose of amoxicillin or TMP/SMX at bedtime) should be considered for recurrent disease. Avoiding exposure to cigarette smoke may be helpful. Referral for discussion of tympanostomy tube placement should be considered if there are bilateral effusions of more than 3 months' duration, a unilateral effusion of more than 3 months' duration, language-development delay, hearing loss of >20 dB, or failure of antibiotic prophylaxis.
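The referral criteria above are a simple disjunction, which can be sketched as a checklist function; a minimal sketch in Python, with illustrative parameter names:

```python
def consider_tube_referral(bilateral_effusion_months=0.0,
                           unilateral_effusion_months=0.0,
                           language_delay=False,
                           hearing_loss_db=0.0,
                           prophylaxis_failed=False):
    """True if any of the text's criteria for discussing
    tympanostomy tube placement is met."""
    return (bilateral_effusion_months > 3
            or unilateral_effusion_months > 3
            or language_delay
            or hearing_loss_db > 20
            or prophylaxis_failed)

# e.g. a child with a 4-month unilateral effusion meets the criteria
print(consider_tube_referral(unilateral_effusion_months=4))
```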
Epistaxis
Causes. Nose picking, external trauma, dry nasal mucosa with vascular fragility, foreign bodies, blood dyscrasias, neoplasms, infections, vitamin deficiencies, toxic metal exposures, septal deformities, telangiectasias, angiofibromas, and aneurysm ruptures.
Determining the source of bleeding is often the most difficult part of the examination.
The posterior area of the nose is supplied by the ethmoid arteries (from the internal carotid system) and the sphenopalatine arteries (from the external carotid system); bleeding from these vessels is often difficult to control.
Kiesselbach's arterial plexus supplies the more easily controlled anterior nasal mucosa.
If the bleeding has been prolonged, check the patient's Hb and HCT. A PT/INR, PTT, and platelet count may also be indicated depending on the clinical situation.
Gather a nasal speculum, a "hands-free" head mirror or lamp, suction with a Frazier suction tip, a cocaine or tetracaine-epinephrine solution spray applicator, an electrocautery pencil, silver nitrate sticks, nasal packing (Merocel sponge "nasal rocket" packs, Vaseline gauze), and bayonet forceps to examine and treat a comfortably seated or supine patient. If bleeding is easily seen and is coming from the septum, direct pressure to the site after generous spraying of the area with the vasoconstrictor-anesthetic solution may be sufficient (pinch the nose for 10 to 15 minutes).
If this doesn't work, try silver nitrate for small bleeders or electrocautery for the larger vessels on a well-anesthetized septum. Although there is no clear advantage to electrocautery, it may be effective in a patient who fails chemical cautery.
If this is ineffective, or if the bleeding is from under the turbinates, insert the dry Merocel pack entirely into the nostril (using a lubricant such as K-Y Jelly) and moisten it with phenylephrine or saline until it has completely conformed to the convoluted nasal passage; leave it in place for bleeding control for at least 48 hours. Alternatively, pack with Vaseline gauze soaked with phenylephrine.
Patients with COPD can suffer hypoxic distress because the nasopulmonary reflex produces a drop in PO2 of about 15 mm Hg in most people whose noses are packed!
Prescribe broad-spectrum antibiotics to all patients requiring nasal packing for as long as the packing is in place; TMP/SMX, amoxicillin-clavulanate, clarithromycin, and cefadroxil are good choices.
Examine the uvula. If it is still dripping blood, hemostasis is inadequate and posterior packing may be required. Temporizing measures include one of several commercially available posterior nasal packs or a Foley catheter inserted into the posterior nasal area and inflated. Anyone requiring posterior packing should also have an anterior pack placed. Obtain an otolaryngologic consultation and hospitalize any patient with a posterior nosebleed for observation or vascular intervention.
Consult an otolaryngologist if posterior packing is required, if the nose requires repacking several times during a single ED visit, or for any patient who develops signs or symptoms of infection.
Peritonsillar Abscess (Quinsy)
General. A localized abscess, typically unilateral, occurring in patients with tonsillitis.
Cause. Depending on the series, the most common organism is Streptococcus followed by anaerobes.
Clinically. Symptoms include severe throat pain radiating to the ear, drooling from inability to swallow saliva, trismus, and fever. A muffled "hot potato" voice is almost pathognomonic of a peritonsillar abscess. On exam there is unilateral swelling of the palate and anterior pillar, with displacement of the tonsil downward and medially and movement of the uvula away from the involved side.
Treatment. IV or IM penicillin and tonsillectomy. Several series have documented good results using oral antibiotics and needle drainage, which may need to be repeated several times. The major concern is the possibility of airway obstruction, though this is a very rare event. ENT consultation is recommended.
Treatment of Pressure Ulcers
Various topical agents have been used in treating pressure ulcers. Some of these agents (e.g., astringents, alkaline soap products) have proven harmful. Beneficial agents include enzymes, antiseptics, oxidizing agents, and dry dextranomer beads. The agent of choice depends on the depth of the ulcer. Deeper ulcers may derive greater benefit from enzyme application. Local treatment of pressure ulcers also includes using various dressings. The occlusive dressings are a group of dressings that are widely marketed and are being used with increasing frequency to treat pressure ulcers. These dressings (including transparent dressings, hydrocolloid dressings, and hydrogels) may be used in combination with topical agents or by themselves.
Potential Nursing Diagnoses Impaired skin integrity
Equipment Wash basin, soap, water, cleansing agent or prescribed topical agents, ordered dressings, skin protectant, cotton-tipped applicators, hypoallergenic tape or adhesive dressing sheet (Hypofix), disposable and sterile gloves, measuring device
Steps and Rationale
1. Wash hands and don gloves. * Reduces transmission of blood-borne pathogens. Gloves should be worn when handling items soiled by body fluids.
2. Close room door or bedside curtains. * Maintains client's privacy.
3. Position client comfortably with the area of the decubitus ulcer and surrounding skin easily accessible. * Area should be accessible for cleansing of ulcer and surrounding skin.
4. Assess pressure ulcer and surrounding skin to determine ulcer stage (Table 3).
a. Note color, moisture, and appearance of skin around ulcer. * Skin condition may indicate progressive tissue damage. Retained moisture causes maceration.
b. Measure two perpendicular diameters. * Provides an objective measure of wound size. May determine type of dressing chosen. Surface area = length (L) x width (W).
c. Measure depth of pressure ulcer using a sterile cotton-tipped applicator or other device that allows measurement of wound depth. * Depth is important for determining wound volume. Although surface area adequately represents tissue loss in stage 1 and 2 ulcers, volume more adequately represents tissue loss in the deeper stage 3 and 4 wounds. Volume = 2(L x D) + 2(W x D) + (L x W).
d. Measure depth (D) of skin undermined by lateral tissue necrosis. Use a sterile cotton-tipped applicator and gently probe under skin edges. * Undermining represents loss of underlying tissues to a greater extent than that of the skin. Undermining may indicate progressive tissue necrosis.
5. Wash skin surrounding ulcer gently with warm water and soap. Rinse area thoroughly with water. * Cleansing of the skin surface reduces the number of resident bacteria. Soap can be irritating to skin.
6. Gently dry skin thoroughly by patting lightly with a towel. * Retained moisture causes maceration of skin layers.
7. Apply sterile gloves. * Aseptic technique must be maintained during cleansing, measuring, and application of dressings. (Check institutional policy regarding use of clean or sterile gloves.)
8. Cleanse ulcer thoroughly with normal saline or cleansing agent. * Removes debris of digested material from the wound. Previously applied enzymes may require soaking for removal.
a. Use an irrigating syringe for deep ulcers.
b. Cleansing may be accomplished in the shower with a hand-held shower head.
c. Whirlpool treatments may be used to assist with wound cleansing and debridement.
9. Apply topical agents, if prescribed (Table 4):
Enzymes
- Keeping gloves sterile, place a small amount of enzyme ointment in the palm of the hand. * It is not necessary to apply a thick layer of ointment; a thin layer absorbs and acts more effectively. Excess medication can irritate surrounding skin. Apply only to necrotic areas.
- Soften medication by rubbing briskly in the palm of the hand. * Makes ointment easier to apply to ulcer.
- Apply a thin, even layer of ointment over necrotic areas of the ulcer. Do not apply enzyme to surrounding skin. * Proper distribution of ointment ensures effective action. Enzyme can cause burning, paresthesia, and dermatitis on surrounding skin.
- Moisten gauze dressing in saline and apply directly over ulcer. * Protects wound. Keeping the ulcer surface moist reduces healing time; skin cells normally live in a moist environment.
- Cover moistened gauze with a single piece of dry gauze and tape securely in place. * Prevents bacteria from entering the moist dressing.
Antiseptics
- Deep ulcers: apply antiseptic ointment to the dominant gloved hand and spread the ointment in and around the ulcer. (Avoid spread of contamination if the area is infected.) * Antiseptic ointment causes minimal tissue irritation. All surfaces of the wound must be covered to control bacterial growth effectively.
- Apply a sterile gauze pad over the ulcer and tape securely in place. * Protects ulcer and prevents removal of ointment during turning or repositioning.
Dextranomer Beads
- Hold container of beads approximately 1 inch (2.5 cm) above the ulcer site and lightly sprinkle a 5 mm layer over the wound. * A layer of the insoluble powder is needed to absorb wound exudate.
- Apply gauze dressing over ulcer. * Holds beads in place and protects wound.
Hydrocolloid Beads/Paste
- Fill the ulcer defect to approximately half of its total depth with hydrocolloid beads or paste. * Hydrocolloid beads/paste assist in absorbing wound drainage. Highly draining wounds are best treated with hydrocolloid beads/granules.
- Cover with a hydrocolloid dressing, extending it 1 to 1 1/2 inches beyond the edges of the wound. * The dressing maintains wound humidity. May be left in place up to 7 days.
Hydrogel Agents
- Cover the surface of the ulcer with hydrogel using a sterile applicator or gloved hand. * Maintains wound humidity while absorbing excess drainage. May be used as a carrier for topical agents.
- Apply dry, fluffy gauze over the gel to completely cover the ulcer. * Holds the hydrogel against the wound surface and is absorbent.
Calcium Alginates
- Pack the wound with alginate using an applicator or gloved hand. * Maintains wound humidity while absorbing excess drainage.
- Apply dry gauze, foam, or hydrocolloid over the alginate. * Holds the alginate against the wound surface.
10. Reposition client comfortably off the pressure ulcer. * Avoids accidental removal of dressings.
11. Remove gloves and dispose of soiled supplies. Wash hands. * Prevents transmission of microorganisms.
12. Record appearance of ulcer and treatment (type of topical agent used, dressing applied, and client's response) in the nurse's notes. * Baseline observations and subsequent inspections reveal progress of healing. Documents care.
13. Report any deterioration in the ulcer's appearance to the nurse in charge or physician. * Deterioration of condition may indicate need for additional therapy.
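The measurement formulas in steps 4b-4c reduce to simple arithmetic; a minimal sketch in Python. Note the final term of the printed volume formula is read here as (L x W), so that all three terms are products like the others; that reading is an editorial assumption about a garbled original.

```python
def wound_surface_area(length_cm, width_cm):
    """Surface area from two perpendicular diameters (step 4b)."""
    return length_cm * width_cm

def wound_volume(length_cm, width_cm, depth_cm):
    """Wound-volume estimate per the text's formula (step 4c):
    2(L*D) + 2(W*D) + (L*W) -- final term read as L*W (assumption)."""
    L, W, D = length_cm, width_cm, depth_cm
    return 2 * L * D + 2 * W * D + L * W

# e.g. a 4 x 3 cm ulcer, 2 cm deep:
print(wound_surface_area(4, 3))  # 12
print(wound_volume(4, 3, 2))     # 40
```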
Nurse Alert Early ulcers tend to have irregular borders; with time, borders become smooth and rounded. If wound is large, irrigating with plain sterile water from an irrigating syringe may be helpful.
Teaching Considerations All individuals participating in the client's wound care should be taught the correct method to administer ulcer care.
Geriatric Considerations Medicare regulations limit reimbursement for some types of pressure relief equipment used for Stages 3, 4, and 5 pressure ulcers.
Description of Appearance
Stage I: Nonblanchable erythema of the intact skin; may be soft or indurated; edge is usually irregular.
Stage II: Partial-thickness skin loss involving the epidermis and/or dermis. The ulcer is superficial and presents clinically as an abrasion, blister, or shallow crater.
Stage III: Full-thickness skin loss involving damage or necrosis of subcutaneous tissue that may extend to the fascia. The ulcer presents clinically as a deep crater, with or without undermining of adjacent skin.
Stage IV: Full-thickness skin loss with extensive destruction or necrosis through subcutaneous layers into muscle and bone. The ulcer edge appears to "roll over" into the defect and is a tough fibrinous ring.
Stage V: Lesion is covered by a tough membranous layer that may be rigidly adherent to the ulcer base. The stage is difficult to determine until the eschar has sloughed or been surgically removed.
English for Nurses and Medical Professionals-2
Human Body
One of the first things you need to know when working in English is the parts of the body. You will need to learn the names of the internal (inside the skin) and external body parts. You will also need to learn the words for the functions of each of these body parts. Here are the basics to get you started.
Head
Inside the head is the brain, which is responsible for thinking. The top of a person's scalp is covered with hair. Beneath the hairline at the front of the face is the forehead. Underneath the forehead are the eyes for seeing, the nose for smelling, and the mouth for eating. On the outside of the mouth are the lips, and on the inside of the mouth are the teeth for biting and the tongue for tasting. Food is swallowed down the throat. At the sides of the face are the cheeks and at the sides of the head are the ears for hearing. At the bottom of a person's face is the chin. The jaw is located on the inside of the cheeks and chin. The neck is what attaches the head to the upper body.
Upper Body
At the top and front of the upper body, just below the neck is the collar bone. On the front side of the upper body is the chest, which in women includes the breasts. Babies suck on the nipples of their mother's breasts. Beneath the ribcage are the stomach and the waist. The navel, more commonly referred to as the belly button, is located here as well. On the inside of the upper body are the heart for pumping blood and the lungs for breathing. The rear side of the upper body is called the back, inside which the spine connects the upper body to the lower body.
Upper Limbs (Arms)
The arms are attached to the shoulders. The area beneath this is called the armpit or underarm. The upper arms have the muscles known as triceps and biceps. The joint halfway down the arm is called the elbow. Between the elbow and the next joint, the wrist, is the forearm. Below the wrist is the hand with four fingers and one thumb. Beside the thumb is the index finger. Beside the index finger is the middle finger, followed by the ring finger and the little finger. At the ends of the fingers are fingernails.
Lower Body
Below the waist, on left and right, are the hips. Between the hips are the reproductive organs, the penis (male) or the vagina (female). At the back of the lower body are the buttocks for sitting on. They are also commonly referred to as the rear end or the bum (especially with children). The internal organs in the lower body include the intestines for digesting food, the bladder for holding liquid waste, as well as the liver and the kidneys. This area also contains the woman's uterus, which holds a baby when a woman is pregnant.
Lower Limbs (legs)
The top of the leg is called the thigh, and the joint in the middle of the leg is the knee. The front of the lower leg is the shin and the back of the lower leg is the calf. The ankle connects the foot to the leg. Each foot has five toes. The smallest toe is often called the little toe while the large one is called the big toe. At the ends of the toes are toenails.
English for Nurses and Medical Professionals-1
Patients come in all different shapes and sizes. They also speak many different languages. Whether you are working abroad or at home, there will come a time when you will need to rely on English to communicate. These pages can help nurses, doctors, pharmacists, paramedics, receptionists, specialists or even those who volunteer. They will help you learn some basic English expressions and vocabulary related to the medical field. By studying and practising Medical English, you will be able to make your patients feel more comfortable, and have a better understanding of their needs. You will also learn how to talk to their loved ones and communicate with other medical staff who speak English. Do the exercises and take the quizzes to test your knowledge and understanding.
Screening and Diagnostic Tests
INTRODUCTION
Technical assessment of screening tests (used for persons who are asymptomatic but who may have early disease or disease precursors) differs from the assessment of diagnostic tests (used for persons who have a specific indication of possible illness). Most aspects of the assessment of diagnostic tests also apply to the assessment of screening tests, but the differences can be important.
The first difference is that, with screening tests, the proportion of affected persons is likely to be small. Therefore, many or most positive results are false positive. This finding is not necessarily serious if the screening test procedure is included in a broader program that involves further study of each initially positive finding; evaluation should focus on the whole process rather than on the initial results. In contrast, with diagnostic tests, many patients have medical problems that require investigation; thus, more weight may be given to factors such as diagnostic precision and accuracy, and less weight may be given to the acceptability of the test to patients.
A second difference is that, with screening tests, questions are likely to arise about how and how much long-term outcomes improve. Early detection of disease is helpful only if early intervention is helpful. Early intervention is sometimes helpful (eg, in hypertension), but testing for early asymptomatic glaucoma has been widely abandoned because early detection may not affect the outcome.
A third difference between screening tests and diagnostic tests is cost. A program that screens millions of people to identify the small percentage who have early disease or its precursors cannot justify the use of the financial resources that may be available to support diagnostic testing, especially when patients with conditions that require accurate diagnosis and relief already exist.
In addition, the sequence of steps in the medical investigation can vary substantially. Procedures for recruiting and scheduling subjects, as well as methods of quality control, record keeping, and follow-up, may also differ. These differences must be considered in the technical assessment of a test.
GLOSSARY OF TERMS
Sensitivity - Probability that a test or procedure result is positive when the disease is present; calculated as follows: number of true-positive results/(number of true-positive results + false-negative results)
Specificity - Probability that a test or procedure result is negative when disease is not present; calculated as follows: number of true-negative results/(number of true-negative results + false-positive results)
True-positive rate - Sensitivity (percentage)
False-positive rate - Probability that a test or procedure result is positive when the disease is not present; calculated as follows: 100% minus the specificity (percentage)
True-negative rate - Specificity (percentage)
False-negative rate - Probability that a test or procedure result is negative when the disease is present; calculated as follows: 100% minus the sensitivity (percentage)
Positive predictive value - Probability that the disease is present when the test or procedure result is positive; calculated as follows: number of true-positive results/(number of true-positive results + false-positive results)
Negative predictive value - Probability that the disease is not present when the test or procedure result is negative; calculated as follows: number of true-negative results/(number of true-negative results + false-negative results)
PURPOSES OF SCREENING AND DIAGNOSTIC TESTS
Screening
Laboratory tests for screening are used in people who are asymptomatic to classify their likelihood of having a particular disease. The screening procedure is not the only basis for the diagnosis of illness. Patients with positive test results are referred for subsequent testing or examination to provide the physician with more information to determine if they have the disease in question.
Numerous attempts have been made to establish clear guidelines for the selection of appropriate patients for testing in the early detection of disease. A disease should be serious enough to warrant large-scale screening for it, and treatment before symptoms develop or deteriorate should be of more benefit in reducing morbidity and mortality than treatment later. The estimated prevalence of preclinical disease should be high in the population being screened. Once these criteria have been met, the issue is examined from the standpoint of laboratory tests.
An acceptable test is one that is highly accurate, ie, results are positive for almost all individuals with the disease, and the physician can be confident that the patient is actually free of the disease when test results are negative. Specificity is especially important when screening for rare diseases, because when almost everyone tested is disease-free, even a small false-positive rate produces many false-positive results. The basic tenets of decision analysis indicate that a particular intervention is undertaken when benefits outweigh costs. Therefore, the ideal screening test is inexpensive, easy to administer, poses little risk, and causes minimal discomfort for the patient. In addition, results of the screening test must be valid, reliable, and reproducible.
Diagnosis of disease
Diagnosis requires 2 essential steps. First, diagnostic hypotheses are established. The establishment of these hypotheses is followed by attempts to reduce the number of possible differential diagnoses by successively ruling out specific diseases. This process requires very sensitive tests. With such tests, negative results permit the physician to exclude a disease with confidence.
Second, a strong clinical suspicion is pursued. This process requires very specific tests. With such tests, abnormal findings should essentially confirm the presence of the disease. Also, the test should accurately reflect the physician's estimate of the likelihood of disease, which is based on assessment of the available clinical information. Use of a test to exclude or confirm a diagnosis should indicate that the physician's best estimate, made after careful evaluation of the patient's condition, is that the diagnosis in question is either unlikely or probable.
CHARACTERISTICS OF DIAGNOSTIC TESTS AND PROCEDURES
Tests or procedures are performed when the information from review of findings from the history, physical examination, or previous testing is considered inadequate to address the question at hand. Intelligent use of new information collected requires the physician to be aware of uncertainties associated with the test used.
Every laboratory test or diagnostic procedure has a set of characteristics that reflect the information that clinicians expect in patients with and in those without a given disease. These test characteristics lead to the following fundamental questions:
If the disease is present, what is the probability that the test result will be positive?
If the disease is absent, what is the probability that the test result will be negative?
Sensitivity and specificity are the 2 measures of validity of a test and can be displayed with a simple binary 2-by-2 table, as shown in Table 1.
Sensitivity is determined by identifying the proportion of patients with disease in whom the test result is positive, as follows: a/(a + c), where a is the number of true-positive results, and c is the number of false-negative results. As the sensitivity of a test increases, the number of persons with disease who have incorrect negative (ie, false-negative) results decreases.
Similarly, the specificity of a test is determined by identifying the proportion of patients without disease in whom the test result is negative, as follows: d/(b + d), where d is the number of true-negative results, and b is the number of false-positive results. A highly specific test rarely yields positive results in the absence of disease, and therefore, only a small proportion of persons without disease have incorrect positive (ie, false-positive) test results.
The ideal screening test is both highly sensitive and highly specific. Usually, the achievement of both is not possible, and a trade-off must be made between the sensitivity and specificity with a given test. With many clinical tests, some people have clearly normal results, some have clearly abnormal results, and some have intermediate results. In these situations, the cutoff point between normal and abnormal findings is arbitrary. Therefore, the result of any screening test can cause a case of disease to be missed (a sensitivity issue), or it can cause false-positive results in individuals without the disease (a specificity issue).
Altering the criteria for positive, or abnormal, findings influences the sensitivity and specificity of the test. The establishment of these criteria involves weighing the consequences of not detecting disease (false-negative cases) against the consequences of erroneously diagnosing disease in healthy persons (false-positive cases). Sensitivity may be increased at the expense of specificity when the penalty associated with missing a case is high, such as when the disease is serious and definitive treatment exists. On the other hand, specificity should be increased relative to sensitivity when the cost or risks associated with further diagnostic evaluations or mislabeling are substantial.
The operating characteristics of a test or procedure cannot, per se, be used to determine the presence or absence of disease, unless the test result is always positive when disease is present (ie, 100% sensitivity) or always negative when the disease is absent (ie, 100% specificity). Few tests, if any, have these characteristics. The likelihood that the disease is present with a positive result, or the likelihood of its absence with a negative result, must be assessed and factored with the clinician's pretest estimate of the probability that the patient has the disease (ie, prior probability).
Since few tests are both highly sensitive and highly specific, 2 or more tests are often used to evaluate a possible diagnosis. If the result of one test is positive, the combined sensitivity is higher than that of the more sensitive test, but the specificity is lower. Conversely, when the criteria for a positive test are that both exams be positive, the combined specificity is higher than the more specific of the two, but the sensitivity is lower. Therefore, multiple tests are most useful when all results are within the normal range (a finding that tends to exclude the disease) and when all results are abnormal (a finding that tends to confirm the disease). Multiple tests are least helpful when the results of one are positive and the results of the other are negative.
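Under the (strong) simplifying assumption that the two tests are conditionally independent given disease status, the combined characteristics described above can be sketched as follows; the function names are illustrative:

```python
def combine_parallel(sens1, spec1, sens2, spec2):
    """'Either test positive' counts as positive:
    sensitivity rises, specificity falls."""
    sens = 1 - (1 - sens1) * (1 - sens2)
    spec = spec1 * spec2
    return sens, spec

def combine_serial(sens1, spec1, sens2, spec2):
    """'Both tests positive' is required:
    specificity rises, sensitivity falls."""
    sens = sens1 * sens2
    spec = 1 - (1 - spec1) * (1 - spec2)
    return sens, spec

# Example: tests with sensitivity/specificity 0.90/0.80 and 0.80/0.90.
# Parallel use gives sensitivity about 0.98 but specificity about 0.72;
# serial use gives the reverse.
parallel = combine_parallel(0.90, 0.80, 0.80, 0.90)
serial = combine_serial(0.90, 0.80, 0.80, 0.90)
```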
PREDICTIVE VALUE OF DIAGNOSTIC TESTS AND PROCEDURES
Knowledge of test characteristics alone does not permit accurate interpretation of test results. Test characteristics reveal only what proportion of patients with and what proportion without the disease in question have positive and negative results, respectively. Since the objective is to determine the presence or absence of the disease, the physician must address the following questions:
Given a positive test result, what is the probability that the disease is present?
Given a negative test result, what is the probability that the disease is not present?
The former probability reflects the predictive value of a positive result (ie, positive predictive value), and the latter reflects the predictive value of a negative result (ie, negative predictive value).
The estimation of post-test probabilities requires integration of the knowledge of test characteristics with the clinician's estimate of the likelihood of disease before the test is ordered (ie, the pretest probability or, with screening, the prevalence of disease). By referring to the binary table, the positive predictive value is determined by identifying the probability that the patient with a positive test result actually has the disease as follows: a/(a + b). Similarly, the negative predictive value is determined by identifying the probability that an individual with a negative test result is truly disease-free as follows: d/(c + d).
The more sensitive a test, the smaller the likelihood that an individual with a negative test has the disease; thus, the negative predictive value increases. Likewise, the more specific the test, the smaller the likelihood that an individual with a positive test is free of disease; thus, the positive predictive value increases. For rare diseases, however, the major determinant of the predictive value of the test is the prevalence of the preclinical disease in the population tested. No matter how specific the test is, if the population is at low risk of having the disease, positive results are likely to be false positive.
The Bayes theorem offers a more formal model for quantifying the influence of prevalence and/or pretest probability on predictive values.
Alternative expressions of the positive predictive value:
Likelihood of a true-positive result/(likelihood of a true-positive result + likelihood of a false-positive result)
(Prevalence × sensitivity)/[(prevalence × sensitivity) + (1 - prevalence) × (1 - specificity)]
Alternative expressions of the negative predictive value:
Likelihood of a true-negative result/(likelihood of a true-negative result + likelihood of a false-negative result)
[(1 - Prevalence) × specificity]/{[(1 - prevalence) × specificity] + [prevalence × (1 - sensitivity)]}
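The two Bayes expressions above translate directly into code. This sketch assumes prevalence, sensitivity, and specificity are given as proportions between 0 and 1:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    # Likelihood of a true positive over all positives.
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

def negative_predictive_value(prevalence, sensitivity, specificity):
    # Likelihood of a true negative over all negatives.
    true_neg = (1 - prevalence) * specificity
    false_neg = prevalence * (1 - sensitivity)
    return true_neg / (true_neg + false_neg)

# With 90% sensitivity and 95% specificity, prevalence dominates
# the positive predictive value (this reproduces the Table 2 column).
for prev in (0.001, 0.01, 0.05, 0.5):
    print(round(positive_predictive_value(prev, 0.90, 0.95) * 100, 1))
```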
Many authors suggest that the Bayes formula is cumbersome and unnecessary, because it simply extrapolates information gleaned from horizontal assessment of data in the binary table. Furthermore, in many cases, data used to determine the likelihood of disease before testing are only estimates. The effect of prevalence and/or pretest probability on the positive predictive values of a test with given sensitivity and specificity is illustrated in Table 2. When the prevalence of preclinical disease is low, the predictive value is low, even for a test with high sensitivity and specificity. Thus, for rare diseases or cases in which the probability of disease is low, a large proportion of those with positive screening test results are inevitably found, at further testing, not to have the disease.
HINTS FOR EVALUATING A STUDY ABOUT DIAGNOSTIC TESTS
Eight elements are involved in the proper clinical evaluation of a diagnostic test. These elements constitute guides for the clinical reader who evaluates a study of a diagnostic test. The following questions summarize these elements.
Was an independent blinded comparison performed with a criterion standard for diagnosis?
Did the patient sample include individuals with an appropriate spectrum of mild and severe disease, treated and untreated, and individuals with disorders commonly mistaken for the one in question?
Were the setting and the patient inclusion criteria for the study adequately described?
Was the reproducibility of the test results (precision) and of the interpretation of those results (observer variance) determined?
Was the term "normal" defined sensibly?
If the test is advocated for use as part of a cluster or sequence of tests, was its contribution to the overall validity of the cluster or sequence determined?
Was the performance of the test described in sufficient detail to permit exact replication?
Was the utility of the test determined?
SUMMARY
Confirming the presence of a disease requires a test with high specificity. When 2 or more tests are available, the one with the highest specificity is ordinarily preferred. When a test is used for screening or excluding a diagnostic possibility, it must be sensitive. When 2 or more such tests are available, the one with the highest sensitivity is ordinarily preferred.
The use of more than one test is most helpful when all the results are normal, allowing the clinician to safely exclude the disease. When all test results are abnormal, they tend to confirm disease. Multiple tests are least helpful when the results of one are positive and the results of the others are normal. If 2 or more highly sensitive tests are performed to exclude disease, the gain in sensitivity obtained by ordering more than one (if the results are marginal) may be offset by the increase in the number of false-positive results.
No tests are perfect. Usually, the results for patients with and those without a specific disease overlap. Each point along the overlapping distribution of results defines a set of operating characteristics for the test. As the point used to define an abnormal result (ie, the cutoff point) is moved in the direction of patients with disease, specificity increases but sensitivity decreases. As it is moved toward patients without disease, the reverse is true.
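The cutoff trade-off can be seen with a small invented data set of overlapping test scores (all values here are hypothetical):

```python
# Hypothetical test scores; the two groups overlap, as in most real tests.
diseased = [4, 5, 6, 6, 7, 8, 9, 9]
healthy = [1, 2, 3, 3, 4, 5, 5, 6]

def sens_spec(cutoff):
    """A score >= cutoff is called abnormal (positive)."""
    sens = sum(x >= cutoff for x in diseased) / len(diseased)
    spec = sum(x < cutoff for x in healthy) / len(healthy)
    return sens, spec

# Moving the cutoff toward the diseased group trades sensitivity
# for specificity, and vice versa.
low = sens_spec(4)   # sensitivity 1.0, specificity 0.5
high = sens_spec(7)  # sensitivity 0.5, specificity 1.0
```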
Finally, the result of a test or procedure cannot be interpreted properly without considering the estimated likelihood of disease before the results are obtained. When the pretest likelihood of disease is high, a positive result tends to confirm the diagnosis, but an unexpected negative result is not helpful in ruling out disease. When the pretest likelihood of disease is low, a normal result tends to exclude the diagnosis, but an unexpected positive result is not helpful in confirming disease.
TABLES
Table 1. Results of Screening and/or Diagnostic Testing*
Result      Disease Present    Disease Absent    Total
Positive    a                  b                 a + b
Negative    c                  d                 c + d
Total       a + c              b + d             a + b + c + d
* Variables are defined as follows: a = true-positive results, b = false-positive results, c = false-negative results, and d = true-negative results. Sensitivity is defined as a/(a + c), while specificity is defined as d/(b + d). The positive predictive value is defined as a/(a + b), and the negative predictive value is defined as d/(c + d).
Table 2. Effect of Prevalence on the Positive Predictive Value, with 90% Sensitivity and 95% Specificity
Prevalence, %    Positive Predictive Value, %
0.1              1.8
1.0              15.4
5.0              48.6
50.0             94.7
Technical assessment of screening tests (used for persons who are asymptomatic but who may have early disease or disease precursors) differs from the assessment of diagnostic tests (used for persons who have a specific indication of possible illness). Most aspects of the assessment of diagnostic tests also apply to the assessment of screening tests, but the differences can be important.
The first difference is that, with screening tests, the proportion of affected persons is likely to be small. Therefore, many or most positive results are false positive. This finding is not necessarily serious if the screening test procedure is included in a broader program that involves further study of each initially positive finding; evaluation should focus on the whole process rather than on the initial results. In contrast, with diagnostic tests, many patients have medical problems that require investigation; thus, more weight may be given to things such as diagnostic precision and accuracy, and less weight may be given to the acceptability of the test to patients.
A second difference is that, with screening tests, questions are likely to arise about how and how much long-term outcomes improve. Early detection of disease is helpful only if early intervention is helpful. Early intervention is sometimes helpful (eg, in hypertension), but testing for early asymptomatic glaucoma has been widely abandoned because early detection may not affect the outcome.
A third difference between screening tests and diagnostic tests is cost. A program to screen millions of people to identify a small percentage who have early disease or its precursors cannot justify use of the financial resources that may available to support diagnostic testing, especially when patients who have conditions that require accurate diagnosis and relief already exist.
In addition, the arrangement of the sequence of steps in the medical investigation can vary substantially. Also, procedures for recruiting and scheduling of subjects and methods of quality control, record keeping, and follow-up may differ. These differences may be used in the technical assessment of a test.
GLOSSARY OF TERMS
Sensitivity - Probability that a test or procedure result is positive when the disease is present; calculated as follows: number of true-positive results/(number of true-positive results + false-negative results)
Specificity - Probability that a test or procedure result is negative when disease is not present; calculated as follows: number of true-negative results/(number of true-negative results + false-positive results)
True-positive rate - Sensitivity (percentage)
False-positive rate - Probability that a test or procedure result is positive when the disease is not present; calculated as follows: 100% minus the specificity (percentage)
True-negative rate - Specificity (percentage)
False-negative rate - Probability that a test or procedure result is negative when the disease is present; calculated as follows: 100% minus the sensitivity (percentage)
Positive predictive value - Probability that the disease is present when the test or procedure result is positive; calculated as follows: number of true-positive results/(number of true-positive results + false-positive results)
Negative predictive value - Probability that the disease is not present when the test or procedure result is negative; calculated as follows: number of true-negative results/(number of true-negative results + false-negative results)
PURPOSES OF SCREENING AND DIAGNOSTIC TESTS
Screening
Laboratory tests for screening are used in people who are asymptomatic to classify their likelihood of having a particular disease. The screening procedure is not the only basis for the diagnosis of illness. Patients with positive test results are referred for subsequent testing or examination to provide the physician with more information to determine if they have the disease in question.
Numerous attempts have been made to establish clear guidelines for the selection of appropriate patients for testing in the early detection of disease. A disease should be serious to warrant large-scale screening for it, and treatment before symptoms develop or deteriorate should be of more benefit in reducing morbidity and mortality than treatment later. The estimated prevalence of preclinical disease should be high in the population being screened. Once these criteria have been met, the issue is examined from the standpoint of laboratory tests.
An acceptable test is one that is highly accurate, ie, results are positive for almost all individuals with the disease, and the physician can be confident that the patient is actually free of the disease when test results are negative. Specificity is important when one is screening for rare diseases because false-positive results are possible when the test is not specific. The basic tenets of decision analysis indicate that a particular intervention is undertaken when benefits outweigh costs. Therefore, the ideal screening test is inexpensive, easy to administer, and poses little risk and causes minimal discomfort for the patient. In addition, results of the screening test must be valid, reliable, and reproducible.
Diagnosis of disease
Diagnosis requires 2 essential steps. First, diagnostic hypotheses are established. The establishment of these hypotheses is followed by attempts to reduce the number of possible differential diagnoses by successively ruling out specific diseases. This process requires very sensitive tests. With such tests, negative results permit the physician to exclude a disease with confidence.
Second, a strong clinical suspicion is pursued. This process requires very specific tests. With such tests, abnormal findings should essentially confirm the presence of the disease. Also, the test should accurately reflect the physician's estimate of the likelihood of disease, which is based on assessment of the available clinical information. Use of a test to exclude or confirm a diagnosis should indicate that the physician's best estimate, made after careful evaluation of the patient's condition, is that the diagnosis in question is either unlikely or probable.
CHARACTERISTICS OF DIAGNOSTIC TESTS AND PROCEDURES
Tests or procedures are performed when the information from review of findings from the history, physical examination, or previous testing is considered inadequate to address the question at hand. Intelligent use of new information collected requires the physician to be aware of uncertainties associated with the test used.
Every laboratory test or diagnostic procedure has a set of characteristics that reflect the information that clinicians expect in patients with and in those without a given disease. These test characteristics lead to the following fundamental questions:
If the disease is present, what is the probability that the test result will be positive?
If the disease is absent, what is the probability that the test result will be negative?
Sensitivity and specificity are the 2 measures of validity of a test and can be displayed with a simple binary 2-by-2 table, as shown in Table 1.
Sensitivity is determined by identifying the proportion of patients with disease in whom the test result is positive, as follows: a/(a + c]), where a is the number of true-positive results, and c is the number of false-positive results. As the sensitivity of a test increases, the number of persons with disease who have incorrect negative (ie, false-negative) results decreases.
Similarly, the specificity of a test is determined by identifying the proportion of patients without disease in whom the test result is negative, as follows: d/(b + d), where b is the number of true-negative results, and d is the number of false-positive results. A highly specific test rarely yields positive results in the absence of disease, and therefore, only a small proportion of persons without disease have incorrect positive (ie, false-positive) test positive results
The ideal screening test is both highly sensitive and highly specific. Usually, the achievement of both is not possible, and a trade-off must be made between the sensitivity and specificity with a given test. With many clinical tests, some people have clearly normal results, some have clearly abnormal results, and some have intermediate results. In these situations, the cutoff point between normal and abnormal findings is arbitrary. Therefore, the result of any screening test can cause a case of disease to be missed (a sensitivity issue), or it can cause false-positive results in individuals without the disease (a specificity issue).
Altering the criteria for positive, or abnormal, findings influences the sensitivity and specificity of the test. The establishment of these criteria involves weighing the consequences of not detecting disease (false-negative cases) against the consequences of erroneously diagnosing disease in healthy persons (false false-positive cases). Sensitivity may be increased at the expense of specificity when the penalty associated with missing a case is high, such as when the disease is serious and definitive treatment exists. On the other hand, specificity should be increased relative to sensitivity when the cost or risks associated with further diagnostic evaluations or mislabeling are substantial.
The operating characteristics of a test or procedure cannot, per se, be used to determine the presence or absence of disease, unless the test result is always positive when disease is present (ie, 100% sensitivity) or always negative when the disease is absent (ie, 100% specificity). Few tests, if any, have these characteristics. The likelihood that the disease is present with a positive result, or the likelihood of its absence with a negative result, must be assessed and factored with the clinician's pretest estimate of the probability that the patient has the disease (ie, prior probability).
Since few tests are both highly sensitive and highly specific, 2 or more tests are often used to evaluate a possible diagnosis. If the result of one test is positive, the combined sensitivity is higher than that of the more sensitive test, but the specificity is lower. Conversely, when the criteria for a positive test are that both exams be positive, the combined specificity is higher than the more specific of the two, but the sensitivity is lower. Therefore, multiple tests are most useful when all results are within the normal range (a finding that tends to exclude the disease) and when all results are abnormal (a finding that tends to confirm the disease). Multiple tests are least helpful when the results of one are positive and the results of the other are negative.
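Under the simplifying assumption that the two tests err independently of one another, the arithmetic behind the two combination rules can be sketched as follows (the test characteristics are hypothetical):

```python
# Hypothetical characteristics of two tests, assumed to err independently
sens1, spec1 = 0.90, 0.80
sens2, spec2 = 0.80, 0.95

# "Either test positive" rule: combined sensitivity rises, specificity falls
sens_either = 1 - (1 - sens1) * (1 - sens2)
spec_either = spec1 * spec2

# "Both tests positive" rule: combined specificity rises, sensitivity falls
sens_both = sens1 * sens2
spec_both = 1 - (1 - spec1) * (1 - spec2)

print(sens_either, spec_either)  # sensitivity above 0.90, specificity below 0.80
print(sens_both, spec_both)      # sensitivity below 0.80, specificity above 0.95
```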
PREDICTIVE VALUE OF DIAGNOSTIC TESTS AND PROCEDURES
Knowledge of test characteristics alone does not permit accurate interpretation of test results. Test characteristics reveal only what proportion of patients with and what proportion without the disease in question have positive and negative results, respectively. Since the objective is to determine the presence or absence of the disease, the physician must address the following questions:
Given a positive test result, what is the probability that the disease is present?
Given a negative test result, what is the probability that the disease is not present?
The former probability reflects the predictive value of a positive result (ie, positive predictive value), and the latter reflects the predictive value of a negative result (ie, negative predictive value).
The estimation of post-test probabilities requires integration of the knowledge of test characteristics with the clinician's estimate of the likelihood of disease before the test is ordered (ie, the pretest probability or, with screening, the prevalence of disease). By referring to the binary table, the positive predictive value is determined by identifying the probability that the patient with a positive test result actually has the disease as follows: a/(a + b). Similarly, the negative predictive value is determined by identifying the probability that an individual with a negative test result is truly disease-free as follows: d/(c + d).
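Whereas sensitivity and specificity are the column-wise proportions of the binary table, the two predictive values are the row-wise (horizontal) proportions. A short sketch with illustrative counts:

```python
# Illustrative 2x2 counts (Table 1's a, b, c, d convention; not real data)
a, b, c, d = 90, 50, 10, 950

ppv = a / (a + b)  # probability of disease given a positive result
npv = d / (c + d)  # probability of no disease given a negative result

print(f"positive predictive value = {ppv:.3f}")
print(f"negative predictive value = {npv:.3f}")
```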
The more sensitive a test, the smaller the likelihood that the individual with a negative test has the disease; thus, the negative predictive value increases. The more specific the test, the smaller the likelihood that an individual with a positive test is free from disease, and the greater the positive predictive value. For rare diseases, however, the major determinant of the predictive value of the test is the prevalence of the preclinical disease in the population tested. No matter how specific the test is, if the population is at low risk of having the disease, positive results are likely to be false positive.
Compared with the simple binary-table calculations, the Bayes theorem is a more complex model used to quantify the influence of prevalence and/or pretest probability on predictive values.
Alternative expressions of the positive predictive value:
Likelihood of a true-positive result/(likelihood of a true-positive result + likelihood of a false-positive result)
(Prevalence X sensitivity)/[(prevalence X sensitivity) + (1 - prevalence) X (1 - specificity)]
Alternative expressions of the negative predictive value:
Likelihood of a true-negative result/(likelihood of a true-negative result + likelihood of a false-negative result)
[(1 - Prevalence) X specificity]/{[(1 - prevalence) X specificity] + [prevalence X (1 - sensitivity)]}
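These expressions translate directly into a short function. As a sanity check, with 90% sensitivity and 95% specificity the sketch below reproduces the predictive values shown in Table 2:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value from the Bayes expression above."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

def npv(prevalence, sensitivity, specificity):
    """Negative predictive value from the Bayes expression above."""
    true_neg = (1 - prevalence) * specificity
    false_neg = prevalence * (1 - sensitivity)
    return true_neg / (true_neg + false_neg)

# With 90% sensitivity and 95% specificity:
for prev in (0.001, 0.01, 0.05, 0.50):
    print(f"prevalence {prev:6.1%}  PPV {ppv(prev, 0.90, 0.95):.1%}")
```

Even with these favorable test characteristics, a prevalence of 0.1% yields a positive predictive value below 2%, which is the point Table 2 makes numerically.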
Many authors suggest that the Bayes formula is cumbersome and unnecessary, because it simply extrapolates information gleaned from horizontal assessment of data in the binary table. Furthermore, in many cases, data used to determine the likelihood of disease before testing are only estimates. The effect of prevalence and/or pretest probability on the positive predictive values of a test with given sensitivity and specificity is illustrated in Table 2. When the prevalence of preclinical disease is low, the predictive value is low, even for a test with high sensitivity and specificity. Thus, for rare diseases or cases in which the probability of disease is low, a large proportion of those with positive screening test results are inevitably found, at further testing, not to have the disease.
HINTS FOR EVALUATING A STUDY ABOUT DIAGNOSTIC TESTS
Eight elements are involved in the proper clinical evaluation of a diagnostic test. These elements constitute guides for the clinical reader who evaluates a study of a diagnostic test. The following questions summarize these elements.
Was an independent blinded comparison performed with a criterion standard for diagnosis?
Did the patient sample include individuals with an appropriate spectrum of mild and severe disease, treated and untreated, and individuals with disorders commonly mistaken for the one in question?
Were the setting and the patient inclusion criteria for the study adequately described?
Were the reproducibility of the test results (precision) and of the interpretation of those results (observer variance) determined?
Was the term "normal" defined sensibly?
If the test is advocated for use as part of a cluster or sequence of tests, was its contribution to the overall validity of the cluster or sequence determined?
Was the performance of the test described in sufficient detail to permit exact replication?
Was the utility of the test determined?
SUMMARY
Confirming the presence of a disease requires a test with high specificity. When 2 or more tests are available, the one with the highest specificity is ordinarily preferred. When a test is used for screening or excluding a diagnostic possibility, it must be sensitive. When 2 or more such tests are available, the one with the highest sensitivity is ordinarily preferred.
The use of more than one test is most helpful when all the results are normal, allowing the clinician to safely exclude the disease. When all test results are abnormal, they tend to confirm disease. Multiple tests are least helpful when the results of one are positive and the results of the others are normal. If 2 or more highly sensitive tests are performed to exclude disease, the gain in sensitivity obtained by ordering more than one (if the results are marginal) may be offset by the increase in the number of false-positive results.
No tests are perfect. Usually, the results for patients with and those without a specific disease overlap. Each point along the overlapping distribution of results defines a set of operating characteristics for the test. As the point used to define an abnormal result (ie, the cutoff point) is moved in the direction of patients with disease, specificity increases but sensitivity decreases. As it is moved toward patients without disease, the reverse is true.
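The effect of moving the cutoff point along the overlapping distributions can be illustrated with a toy example (the values below are made up):

```python
# Made-up test values for patients with and without the disease;
# the two distributions overlap, as they usually do in practice
diseased = [8, 9, 10, 11, 12, 13]
healthy = [4, 5, 6, 7, 8, 9]

def operating_point(cutoff):
    """Sensitivity and specificity when results >= cutoff are called abnormal."""
    sens = sum(x >= cutoff for x in diseased) / len(diseased)
    spec = sum(x < cutoff for x in healthy) / len(healthy)
    return sens, spec

# Moving the cutoff toward the healthy patients raises sensitivity at the
# cost of specificity; moving it toward the diseased patients does the reverse
print(operating_point(7))   # lenient cutoff
print(operating_point(10))  # strict cutoff
```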
Finally, the result of a test or procedure cannot be interpreted properly without considering the estimated likelihood of disease before the results are obtained. When the pretest likelihood of disease is high, a positive result tends to confirm the diagnosis, but an unexpected negative result is not helpful in ruling out disease. When the pretest likelihood of disease is low, a normal result tends to exclude the diagnosis, but an unexpected positive result is not helpful in confirming disease.
TABLES
Table 1. Results of Screening and/or Diagnostic Testing*
Result      Disease Present    Disease Absent    Total
Positive    a                  b                 a + b
Negative    c                  d                 c + d
Total       a + c              b + d             a + b + c + d
* Variables are defined as follows: a = true-positive results, b = false-positive results, c = false-negative results, and d = true-negative results. Sensitivity is defined as a/(a + c), while specificity is defined as d/(b + d). The positive predictive value is defined as a/(a + b), and the negative predictive value is defined as d/(c + d).
Table 2. Effect of Prevalence on the Positive Predictive Value, with 90% Sensitivity and 95% Specificity
Prevalence, %    Positive Predictive Value, %
0.1              1.80
1.0              15.4
5.0              48.6
50.0             94.7
Patient Safety and Medical Malpractice: A Case Study
Troyen A. Brennan, MD, JD, MPH, and Michelle M. Mello, JD, PhD, MPhil* 19 August 2003 Volume 139 Issue 4 Pages 267-273
The system of tort liability for medical malpractice is frequently criticized for poorly performing its theoretical functions of compensating injured patients, deterring negligence, and dispensing corrective justice. Working from an actual malpractice case involving serious injury but no apparent negligence, the authors explore these criticisms from the perspectives of both the plaintiff–patient and the defendant–physician. They then examine the tort system through the lens of patient safety and conclude that the tensions between the system and patient safety initiatives suggest a need to reexamine our attachment to adversarial dispute resolution in health care. They propose targeted reforms that could improve the functioning of the system and create incentives to improve safety and quality. For a list of questions and answers from the Quality Grand Rounds conference, see the Appendix.
"Quality Grand Rounds" is a series of articles and companion conferences designed to explore a range of quality issues and medical errors. Presenting actual cases drawn from institutions around the United States, the articles integrate traditional medical case histories with results of root-cause analyses and, where appropriate, anonymous interviews with the involved patients, physicians, nurses, and risk managers. Cases do not come from the discussants' home institutions. The physician, Dr. Harris, was interviewed by a Quality Grand Rounds editor on 16 August 2002. The physician's defense attorney, Mr. Dean, was interviewed by a Quality Grand Rounds editor on 14 August 2002. All names are pseudonyms.
Summary of Events
Mrs. Taylor (a pseudonym), a 52-year-old woman with severe pneumonia and impending respiratory failure, was evaluated on the medical ward of a community hospital by Dr. Harris, an internist. Dr. Harris chose to immediately transfer her to the intensive care unit (ICU) for urgent intubation by a critical care specialist. During intubation, Mrs. Taylor had a cardiac arrest, which resulted in permanent brain damage. Dr. Harris was sued for malpractice.
The Case
Mrs. Taylor had a 3-day history of progressive fevers, nausea, and vomiting. She presented to the emergency department at 2:30 a.m., where she appeared to be moderately ill and dyspneic. Her initial temperature was 38.3 °C, her blood pressure was 112/70 mm Hg, her heart rate was 118 beats/min, and her respiratory rate was 26 breaths/min. Her oxygen saturation was 92% on room air. The examination was remarkable for crackles at her right lung base. The examination of her cardiac, abdominal, and neurologic systems was unremarkable. Laboratory studies showed a leukocyte count of 14 x 10^9 cells/L with a left shift, a creatinine level of 1.3 mg/dL (114.9 µmol/L), and a sodium level of 129 mmol/L. A chest radiograph showed a dense right lower lobe infiltrate. Bacterial pneumonia was diagnosed. The patient began receiving levofloxacin, metronidazole, and oxygen and was admitted to the medical ward of the hospital. A pulmonologist was consulted by telephone about the initial treatment choices. At 7:45 a.m., a nurse found Mrs. Taylor profoundly dyspneic and diaphoretic. Her oxygen saturation had fallen to 69% on 2 L of oxygen. The patient was immediately placed on a nonrebreather mask at 15 L/min, which increased the oxygen saturation to 91%. Dr. Harris, who had assumed Mrs. Taylor's care that morning, was paged and arrived within minutes.
Dr. Harris found the patient in marked respiratory distress. She had a temperature of 37.6 °C, a blood pressure of 140/88 mm Hg, a heart rate of 140 beats/min, and a respiratory rate of 50 breaths/min. On examination, she had diffuse rhonchi, as well as crackles, throughout the right lung field. The rest of the examination was unremarkable. An arterial blood gas showed a pH of 7.41, a PCO2 of 29, and a PO2 of 63 (on the nonrebreather mask). Portable chest radiography showed a worsening of the right lung infiltrate.
Dr. Harris diagnosed progressing pneumonia and impending respiratory failure. She considered intubating the patient herself on the floor but opted to immediately transfer Mrs. Taylor to the care of a pulmonologist and intensivist who was standing by in the ICU, for probable intubation and mechanical ventilation.
Dr. Harris: In my mind, it was a matter of what would be safest. I really don't have a lot of experience with awake intubation, and I knew that a pulmonologist was already involved in the case, so it was a really easy decision from my standpoint to get ... the patient transferred to the ICU for intubation.
Dr. Harris first saw the patient at 7:57 a.m. and completed her evaluation by 8:20 a.m. It took a few minutes for the logistics to be organized and for Mrs. Taylor to be physically transported. She arrived in the ICU at 8:37 a.m. By this time, her respiratory distress was more pronounced and she had become delirious. Her blood pressure was 142/65 mm Hg, her heart rate was 145 beats/min, her respiratory rate was 38 breaths/min, and oxygen saturation on the nonrebreather mask was 64%.
The pulmonologist preoxygenated Mrs. Taylor with a bag-valve-mask apparatus, administered a dose of midazolam, and attempted intubation at 8:45 a.m. Unfortunately, the attempt was complicated by ventricular fibrillation and a cardiac arrest. The physicians and nurses resumed bag-valve-mask oxygenation, and the oxygen saturation, which had fallen to the mid-30s, rose to the 80s. Standard cardiopulmonary resuscitation was performed, including 2 to 3 minutes of chest compression, accompanied by boluses of atropine and epinephrine. The patient was defibrillated with 200 J and intubated successfully on the second attempt at 8:49 a.m. Arterial blood gas values after intubation were a pH of 7.09, a PCO2 of 72, and a PO2 of 39 on 100% FIO2.
The patient's oxygenation ultimately improved and her cardiopulmonary status stabilized, but she suffered profound and presumably irreversible brain damage. At the time of discharge, she could not recognize family members or independently perform any activities of daily living. Although the case was informally discussed among the providers involved, it was not forwarded to or reviewed by the hospital's risk management committee. The patient was discharged to a long-term care facility for total custodial care. Several months after discharge, the patient's family sought legal counsel and decided to pursue a malpractice claim. About 20 months later, Dr. Harris received notice that she had been named in Mrs. Taylor's malpractice case.
Dr. Harris: I was sitting in the ICU and my partner calls me up and says, "You're getting sued, and that's why I'm leaving medicine."
Anatomy of a Malpractice Claim
The lawsuit filed against Dr. Harris illustrates a conventional tort claim for medical malpractice against a physician. To recover damages, Mrs. Taylor must prove 1) that the relationship between Dr. Harris and her gave rise to a duty, 2) that Dr. Harris was negligent—her care fell below the standard expected of a reasonable medical practitioner, 3) that Mrs. Taylor suffered an injury that was 4) caused by Dr. Harris's negligence (1). The claim is seemingly that Dr. Harris did not move quickly enough to seek critical care attention for Mrs. Taylor and that the delay caused the cardiac arrest and subsequent brain damage. We use this case to plumb the broader policy perspectives of malpractice and its effect on patient safety and deterrence of errors. Because some aspects of the litigation are still pending, we could not obtain comments from the plaintiff's attorney; however, we contribute our own thoughts about the plaintiff's likely view of the case.
Why Sue Dr. Harris? The Perspective of the Plaintiff's Attorney
From the plaintiff's perspective, there are three reasons to sue the physician for malpractice. First, filing a lawsuit is a way to secure compensation for the injury (2). Mrs. Taylor no doubt has some uninsured costs associated with this injury; for example, it is highly unlikely that her health and disability insurance will provide coverage for years of rehabilitation or custodial care (3), compensate her family for the loss of her household services, and recompense Mrs. Taylor's and her family's suffering. Second, suing Dr. Harris may provide a sense of corrective justice (4). An injured party is "made whole" through restitution from the injurer. Provoking feelings of remorse, shame, and guilt in the defendant is an integral part of this corrective justice.
Finally, tort litigation is meant to have a deterrence function (5). By forcing the negligent party to pay a penalty, the system creates an economic incentive to take greater precautions in the future. Presumably, being sued will cause Dr. Harris to approach acutely dyspneic patients differently in the future.
Presented as such, the tort system has theoretical appeal. It should supplement other methods of quality regulation through its deterrence function (Mello MM, Brennan TA. Regulating health care quality: the case of patient safety. Commissioned paper for the Agency for Healthcare Research and Quality; 2002). It is essentially a cost-free form of regulation for taxpayers because the regulatory vigor is provided by market incentives that direct plaintiffs' attorneys to select and bring cases. Attorneys weigh the costs of bringing a case (investigating the claim, hiring experts, and going to trial) against their expected compensation (usually a percentage of the award made to the plaintiff, referred to as a "contingency fee") (6).
This attractive theoretical account of tort law's social role is challenged, however, by the available empirical evidence about how medical malpractice law actually operates. Tort law performs its compensation function relatively poorly because most patients injured by negligence do not bring malpractice claims (7-9). In addition, the system has very high administrative costs—up to 60%, as compared with 5% to 30% for most other social compensation schemes (10) (Table). For example, workers' compensation is estimated to have administrative costs of 20% to 30% and the Social Security Disability Insurance system has costs in the 5% range. The differences are stark: For a $400 000 malpractice award, another $200 000 is spent on administrative costs, primarily in attorneys' fees. In contrast, a Workers' Compensation award of $400 000 requires only about $100 000 in administrative costs. With respect to corrective justice, the malpractice system does induce negative emotions in sued physicians (11), but it rarely inspires genuine remorse or feelings that justice has been done. Rather, most defendants find little merit in the suits brought against them and feel that they are the victims of a random event (12, 13).
Table. Comparison of Tort and Administrative Compensation Schemes
The deterrence function of malpractice litigation also seems unavailing (14). Studies of the relationship between lawsuits and subsequent quality of care have largely centered on obstetrics. Most studies have failed to correlate variations in care patterns or birth outcomes with the obstetrician's history of malpractice claims (15-18). The single broad study of hospital adverse events reported limited evidence that a greater number and severity of malpractice claims were associated with improvement in medical injury rates (12). Even defensive-medicine effects, that is, the adoption of higher-than-optimal levels of precaution, have not been conclusively reported (14, 19). Anecdotal evidence suggests that in periods of "tort crisis," fear of being sued and the unaffordability or unavailability of liability insurance may have a different deterrent effect: It may deter physicians from remaining in practice or continuing to perform high-risk services (20, 21). Such effects, if they become widespread, affect patient access to care. Thus, much of the plaintiff's view of malpractice litigation is controversial.
Is the Lawsuit Fair? The Perspective of the Defense Attorney
From the facts of Mrs. Taylor's case, most readers probably have concluded that there is little evidence of negligence on the part of Dr. Harris. Within 40 minutes of the evaluation, Dr. Harris had moved Mrs. Taylor to the care of an expert in the ICU. Since this action plan was within the standard of care expected of a reasonable practitioner, the malpractice suit seems unfair. The sense of unfairness is compounded by the fact that the lawsuit blames the individual physician. This event clearly occurred in several layers of the system: the nursing monitoring of the patient's condition; the schedule of attending coverage, as Dr. Harris "picks up" the care in the morning from another physician; the emergency response and admission to the ICU; and the issue of emergency intubation "on the floor." It seems unreasonable to blame Dr. Harris, given the possible contributory role of these systemic factors.
Plaintiffs' attorneys routinely sue several individuals, as well as the hospital. They may not believe that all individuals are liable, but they hope that some will offer at least a small settlement to avoid the nuisance aspects of the suit and the risk for a larger jury award. These settlements enable the plaintiff's attorney to fund further litigation against other defendants in the suit.
Mr. Dean, the defense attorney, is savvy about the respective roles of patient injury and negligence in initiating and settling a malpractice claim:
Mr. Dean: In a case like this, involving a patient who was already in the hospital, who has an arrest and anoxic encephalopathy, one of the very significant perceptual issues we have to consider ... is the fact that there was a catastrophic outcome, and to some jurors, catastrophic outcomes may equate with "somebody must have messed up."
Mr. Dean makes the important point that the degree of injury is critical to the outcome of the case. His contention is supported by empirical evidence from the Harvard Medical Practice Study, which examined rates of hospital adverse events, negligence, and malpractice claims in New York (7, 22). Negligence was determined by physician reviewers unaffiliated with the sued providers' insurance companies. The investigators followed the malpractice claims for 10 years and determined that the only statistically significant predictor of a payout to the plaintiff was the plaintiff's degree of disability—not the presence of negligence (23). Other studies have suggested that negligence does influence the size of settlements (24, 25), but these analyses have been based on insurance claims adjusters' determinations of negligence rather than independent judgments. If the main factor determining compensation is injury severity or disability even in a system that ostensibly revolves around a negligence determination, then one must ask why we cling to the tort model of compensation for medical injury.
The Case, Continued
After a long pretrial period of fact-finding ("discovery"), expert witness reviews, and depositions, Mr. Dean felt that his client's case was very strong. However, Mrs. Taylor's horrendous adverse outcome and concerns (unrelated to Dr. Harris's care) about her care by the hospital and other providers led Mr. Dean to recommend that Dr. Harris offer to settle the case for a relatively small amount of money. In explaining the decision to settle, Mr. Dean weighed three factors. First, if a jury found his client negligent, what would the plaintiff's damages probably amount to, both in economic and noneconomic ("pain and suffering") terms? Second, how likely is a jury to find in favor of the physician? Third, what is his gut instinct about the case's worth? His judgment incorporates subjective factors, such as the likely composition and liberality of the jury in a given venue and sympathetic or unsympathetic characteristics of the plaintiff, her injury, and her circumstances.
Mr. Dean: The concern was that the jury could be so overwhelmed with sympathy for what occurred to the patient and the patient's family that they would feel it would be impossible to say no ... Even if you are assessed a very small percentage of responsibility by the jury, given the huge potential damage exposure ... it could potentially represent a judgment ... in excess of your malpractice coverage. Mrs. Taylor and her family could come after the physician and force her into bankruptcy, resulting in financial ruin for Dr. Harris.
One would not blame Dr. Harris for feeling that the outcome of her case is unfair. Yet it is perfectly in accord with empirical research on litigation outcomes and with attorneys' strategic decisions as they function within an imperfect tort system. For Dr. Harris, settlement is the most rational choice in a system that could produce an utterly calamitous outcome.
The Perspective of Patient Safety Reformers
Those persons directly involved in this litigation—Dr. Harris, Mrs. Taylor, and their families and attorneys—feel the greatest effect of the malpractice system's shortcomings. However, these failings also have strong implications for the nascent patient safety movement. The traditional rule in the common law is that all available probative evidence (evidence that proves a fact) should be admitted to the court for consideration (26). But legislators have long recognized that peer reviewers would be chilled if they knew that their review would be available to a plaintiff and to his or her attorney; thus, they have granted a privilege of nondiscoverability to peer review information, which courts generally have enforced (27).
The breadth of the privilege varies from state to state (28), but generally, hospitals must confine discussions about adverse events to small committees of insiders to retain the privilege. The need to minimize legal exposure leads them to eschew more public debate about quality issues. In the Harris case, it seems that it would have been beneficial for the hospital and staff to have openly evaluated issues of seamless cross-coverage, protocols for emergent intubation on the floor, and timely transfer to the ICU. Unfortunately, it appears that nothing of this sort occurred.
Dr. Harris: From a hospital standpoint, to my knowledge, it was never discussed with any of the physicians. It never came up. I guess the things that come to mind are ... intensive care unit transfers and code blue situations ... but if they changed things in regards to this case, that would be news to me ... I don't really know the risk management people ... I know they exist, but who they are and their role and function in a situation like this or day to day, despite the fact that I spend up to 120 hours in the hospital, is just not discussed and I've never met them face to face.
The hospital cannot necessarily be blamed for failing to follow up. Perhaps the hospital concluded after an initial evaluation that there were few grounds for quality improvement. More likely, the hospital realized the extent of the resources necessary to complete a formal peer review process and decided it was not worth the effort. But Dr. Harris's ignorance of the formal mechanics of peer review at her hospital, and its essentially hidden nature, demonstrate the tension between error prevention or quality improvement and medical malpractice. Fear of litigation either stifles injury reduction efforts or drives efforts underground.
Malpractice and Patient Safety Trends
The Institute of Medicine's report on medical errors (29) has fomented a critical change in attitude about patient safety activism. Many risk management offices (a euphemism that obscures whether the "risk" is for a medical injury or for a successful malpractice claim) are now becoming patient safety offices or are partnering with newly created, separate patient safety offices. The use of careful root-cause analysis is becoming prevalent at the departmental level in many institutions (30). Yet malpractice fears continue to retard these salutary efforts, and many hospitals still approach error-related injuries the way Dr. Harris's hospital did. These apprehensions not only chill educational discussion but also exert profound pressure against initiatives to disclose adverse events to both patients and governmental reporting systems. We (31) and others (32) have long advocated greater transparency about medical errors. Codes of professional ethics, as well as the new patient safety standards promulgated by the Joint Commission on Accreditation of Healthcare Organizations (33), support an obligation of disclosure to patients. The enormous potential for learning about errors through epidemiologic analysis argues persuasively for reporting to centralized data collection systems.
However, providers reasonably fear that greater transparency will tremendously increase the number of successful malpractice claims, with concomitant increases in malpractice premiums and decreases in the availability of insurance. Advocates of reporting counter that honesty may actually decrease physicians' malpractice risk (34): Physicians who have poor relationships with patients are the ones who get sued, and what patients really want is to be dealt with forthrightly (35, 36). The sole piece of published evidence on this issue is methodologically weak and comes from the Veterans Administration system, in which the physicians cannot be sued and institutional liability is limited (37). Researchers have yet to disprove providers' suppositions that greater disclosure will lead to more requests for compensation.
Legislation to protect centralized error reporting from legal discovery can help, but not all states have adopted such protections (38). Even in states that guarantee confidentiality, the continued public and media attention to medical errors—which provides valuable impetus and momentum for patient safety initiatives—may make injured patients more disposed to file claims.
A New Paradigm
The tensions between the tort system and patient safety demand that we reexamine our attachment to adversarial dispute resolution in health care. The options boil down to three paths. First, we can maintain the status quo and simultaneously push the safety agenda harder. It is possible that appeals to physicians' ethical commitments to patient welfare (39) and the demonstrated successes of industry-based models of systemic quality improvement may gradually yield buy-in to safety initiatives. We have our doubts, however. The conflicts between the tort system and error reduction programs are fundamental and severe, and physicians' concerns about being sued and losing their liability insurance have reached a fever pitch. Appeals to professionalism may ring hollow with physicians operating under a siege mentality. A second option is to take legislative steps to curb the frequency and economic effect of malpractice litigation. During past "tort crises," providers successfully lobbied state legislatures to change litigation rules to make them less favorable to plaintiffs (40). Tort reform aims to decrease the expected value of a case for plaintiffs' attorneys, changing the calculus about when it is worthwhile to bring a claim. Among the most efficacious reforms are caps on noneconomic damages; changes in the amount that attorneys may take as contingency fees; reductions in the length of time that injured patients have to bring a claim; and elimination of the "collateral source rule," which allows plaintiffs to recover medical expenses and other costs even if these have been covered by insurance (41-44).
Today we are in the throes of a new tort crisis, with claims rates and average payouts rising in many states, especially those that did not institute tort reform in previous crises (45). The concurrence of the tort crisis and the attention to medical errors has not gone unnoticed by insurers. Lobbying for tort reform at both the state and federal levels is under way (47).
The tort reform strategy is problematic, not least because of its contentiousness. Many state legislatures cannot pass meaningful reform because of the competitive gridlock interposed by health care providers and trial lawyers. Moreover, traditional tort reforms aim to reduce providers' economic exposure, not create a more efficient system. The system's fundamental flaw is not simply that it costs health care providers too much but that it tends to overcompensate some patients while undercompensating others (8, 47). Reform should strive to do more, and we believe a no-fault approach is the answer.
In a no-fault system, the injured patient would only have to demonstrate that a disability was caused by medical management as opposed to the disease process: There is no need to prove negligence. This approach comports better with the patient safety movement. Modern notions of error prevention, emphasizing evidence-based analysis of systems of care (29) and application of technological and structural methods to foster prevention (50), find little value in assessing individual moral blame. No-fault compensation for avoidable injuries is far better suited to support error prevention than a system that revolves around culpability determinations.
We believe that such an approach could produce important incentives for prevention, the so-called deterrent effect, if risk were aggregated in institutions and medical groups. Experience-rating individual physicians' insurance premiums has not been actuarially feasible because physicians are sued too infrequently and their claims experience fluctuates too radically from year to year (14). However, hospitals and integrated medical groups have a more consistent risk profile and their premiums can be experience-rated.
An even better approach may be to set up so-called channeling programs, in which hospitals and their medical staffs are insured by the same entity and all efforts to prevent medical errors are undertaken jointly. Some medical schools and academic medical centers already use a channeling approach, and, as links grow between hospitals and integrated medical groups, the potential for a substantial amount of the health care system to operate under channeling approaches increases. In a channeled program, the foundation for greater safety is established by integrating physicians with hospitals or health care centers. The enterprise bears the liability for injury and has incentives to address prevention of errors in both inpatient and ambulatory settings.
We have also noted that in practice, compensation in the current tort system turns on severity of injury more than negligence—so why maintain a system focused on determining negligence? It is expensive and administratively cumbersome to make these determinations, as they involve an adversarial "battle of the experts." Moreover, even negligence judgments by financially disinterested expert reviewers are notoriously unreliable (48). In the context of a vigorously adversarial system, the focus on negligence also incites emotion-provoking behavior by litigants. Not only does this leave lasting psychological scars on the persons involved, it pollutes what otherwise might be a useful exercise in root-cause analysis leading to quality improvements (49).
Finally, good data suggest that the no-fault approach would be less costly administratively. Similar no-fault programs in Workers' Compensation and vaccine liability operate at less than half of the costs of tort litigation, largely by minimizing the role of the lawyers. This is where politics will play an important role: Lawyers will fight to maintain the present system.
Elsewhere we have described a limited no-fault approach to medical injury compensation that could work on an elective basis (14). We believe that no-fault compensation can 1) promote greater transparency about adverse events, 2) work in tandem with a hospital-based, experience-rated insurance system that does not remove incentives for error prevention, and 3) lead to more equitable and efficient compensation (Table).
Skeptics of no-fault proposals highlight the historical absence of effective self-policing, the possibility that the present malpractice system has improved safety by promoting vigilance and better documentation, and the uninspiring example of other no-fault systems, such as Workers' Compensation (51). Mr. Dean's view of the matter reflects the prevailing uncertainty about a no-fault system's probable outcomes:
Mr. Dean: If we reinvent the system and take lawyers completely out of the equation ... is that going to result in safer medical care? One argument is that if physicians know that their care is not going to be subject to scrutiny ... that can actually decrease patient safety. On the other hand, I think that a reasonable argument can be made that if a physician or health care provider knows that every judgment is not going to be subjected to intense microscopic scrutiny under the "retrospectoscope," they are going to be more liberated and free to practice what they see as good medicine, and not be subject to second-guessing at every turn, and that can improve patient safety. It seems to me that until we have some hard data comparing safety in a pure no-fault system, we are not going to know the answer.
We acknowledge this uncertainty, but believe the proposal is worthy of experimentation.
The Harris case illustrates how difficult it is to move forward with an error prevention agenda in a heated malpractice environment. It is not surprising that providers are reluctant to buy in. Patients deserve innovative approaches that will reduce their chances of being injured by errors and lead to fair compensation if an avoidable injury occurs; providers deserve an environment in which participating in patient safety and compensation initiatives does not put them at risk for financial and professional ruin.
Appendix
Questions and Answers from the Conference
Dr. Robert M. Wachter, Quality Grand Rounds Editor: Where do you think the locus of action for improving patient safety should be? How would the malpractice system or the no-fault system play into creating incentives for institutions to improve safety?
Dr. Brennan: The only place where we find any real evidence of the deterrent effect of malpractice on errors is at the level of the institution. That makes sense because it is very difficult for individual practitioners to institute systematic approaches to reducing the number of medical injuries. In our most recent proposal for a no-fault system, we suggested that individual hospitals could choose to check out of the tort system and into a voluntary, no-fault program. The only places that can do that are those with integrated medical groups, which you find mostly in so-called channeling institutions. That's an insurance company term for a place where a single insurer covers both the doctors and the hospitals. Doctors who see patients in a primary care setting could have them sign a waiver saying that they understand they can't sue because the organization is in a no-fault compensation scheme. What I find attractive about this is that it could afford a competitive advantage in today's environment. We can tell patients that we can compensate them through the administrative system and that the compensation is going to be fair. We also have very strong incentives to report any injury to patients and to the administrative system. The average community hospital is going to have a harder time because physicians are separately insured and separate entities from the point of view of patient safety. From our point of view, the no-fault system creates an environment that encourages reporting, analyzing these reports, and publicizing the results. Many patients are going to find that attractive.
A physician: In a no-fault system that has no negligence, who decides what an adverse event is?
Dr. Brennan: An adverse event is defined as something that results in a prolongation of hospitalization or disability at the time of discharge, as a result of medical management as opposed to the disease process. That is actually a lot easier to define reliably than is the negligence judgment. What people are being compensated for today is their injury, not the negligence. Trying to identify the negligence is eating up a lot of administrative cost and poisoning the system with the morality play. Determining if an avoidable adverse event occurred would be easier in an administrative compensation scheme and would run similarly to the way things are adjudicated by insurance companies today, with expert testimony and decision-making along those lines. I am fairly confident that the system would work.
Dr. Mark Smith, President and Chief Executive Officer, California HealthCare Foundation, and Quality Grand Rounds Editor: Perhaps as a result of the rise of managed care, much of the most heavily publicized litigation in California has been at the health plan; not targeting physicians or hospitals, but, for instance, about coverage for bone marrow transplantation for breast cancer. Are there implications in a no-fault approach for liability when a health plan declines to cover treatment?
Dr. Brennan: Probably not. These cases occur infrequently, and the protections afforded insurance companies, because of the Employee Retirement Income Security Act (ERISA), make them relatively difficult cases to bring. These two factors tend to overwhelm a need for a no-fault approach there.
A physician: Under the no-fault program, the physician has a strong incentive to report adverse events to the patients and the hospital. Hopefully we all do that, but in a busy physician's schedule, I would think that they would find it easier not to report.
Dr. Brennan: You can build in some penalties for failure to report. Some insurance companies already charge an extra malpractice premium if a claim comes in and you haven't forewarned the insurance company. We would do the same thing in a no-fault program. Although we're trying to avoid a sense of penalty, there nonetheless have to be inducements to report.
Dr. Wachter: Informing patients of errors in their care is ethically the right thing to do. Increasingly, people cite evidence that full disclosure also will not increase the risk of a lawsuit. Is this correct?
Dr. Brennan: There are no good studies on that point, unfortunately. There are seasoned risk managers who will tell you that a lot of what people get upset about, and bring suits about, is the feeling that someone lied to them. Nonetheless, those same seasoned risk managers are not necessarily in favor of full reporting. The literature that people cite is a 1999 article in the Annals of Internal Medicine (37), which observed that at a couple of hospitals in a VA [Veterans Administration] system that promoted reporting errors to patients, claim rates were no higher than in other hospitals. However, there was absolutely no case-mix adjustment, and the VA system is a lot different from other hospital systems. First of all you have the Federal Tort Claims Act, which provides protection from suit, and second, you can't sue the individual doctors. So there is really no evidence right now.
A physician: Can you comment from the charts that you've reviewed about the quality of documentation and the role that it plays in the merits of the suit or on the outcome?
Dr. Brennan: In general, the quality of documentation is helpful in terms of nailing down whether or not a medical injury occurred or whether or not there was negligence. A few might take from this that if you don't document well, it's going to be more difficult to bring a case against the doctor, but crummy documentation actually plays very poorly in litigation. From the point of view of preventing medical injury, it is probably best to do the documentation.
Dr. Wachter: I can't let you leave without talking about the estimate of 44 000 to 98 000 yearly deaths due to medical errors derived from the Harvard Medical Practice Study, which you led. These numbers, more than anything, captured the public's attention when they were touted in the 1999 IOM [Institute of Medicine] report. Yet, you have been circumspect about their accuracy. Could you comment?
Dr. Brennan: These are statistical analyses and I think we did them about as well as they can be done. But the reliability of these judgments from a statistical point of view is fairly poor, with a kappa statistic of 0.4 to 0.5 for adverse events and even lower for negligence. What that means is that one person may say an event is a negligent adverse event, while another would say it's not. The other issue is that the IOM took our state-level data on adverse events and upweighted them to generate national mortality estimates. Whenever you extrapolate from relatively small samples, you have concerns about the statistical precision of the estimates. We always tried to point out the sponginess of these numbers in our public statements, but the IOM made a specific decision to go with them. The IOM performed a very important service in terms of putting patient safety back into the common vernacular of the American medical system and for that we owe them a debt of gratitude. Although we don't know exactly how many people die from medical errors, there is no doubt that it is at least 50 000 per year in hospitals and many additional outpatients. In the end, the actual number doesn't make much difference. Whatever the numbers, we have a tremendous burden of morbidity and mortality caused by errors and relatively little attention being paid to trying to prevent them.
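The kappa statistic Dr. Brennan cites is Cohen's kappa, which discounts the agreement two reviewers would reach by chance alone. A minimal sketch of the computation follows; the reviewer judgments are hypothetical, not data from the study (1 = adverse event present, 0 = absent):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    # Observed agreement: fraction of cases on which the raters concur.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater judged independently at their own base rates.
    categories = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: two physician reviewers screen 10 charts for an adverse event.
reviewer_1 = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
reviewer_2 = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
kappa = cohens_kappa(reviewer_1, reviewer_2)  # 80% raw agreement, kappa ≈ 0.58
```

The sketch shows why a kappa of 0.4 to 0.5 is sobering: raw agreement can look high while the chance-corrected statistic reveals that reviewers disagree on a substantial share of the non-obvious cases.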
The system of tort liability for medical malpractice is frequently criticized for poorly performing its theoretical functions of compensating injured patients, deterring negligence, and dispensing corrective justice. Working from an actual malpractice case involving serious injury but no apparent negligence, the authors explore these criticisms from the perspectives of both the plaintiff–patient and the defendant–physician. They then examine the tort system through the lens of patient safety and conclude that the tensions between the system and patient safety initiatives suggest a need to reexamine our attachment to adversarial dispute resolution in health care. They propose targeted reforms that could improve the functioning of the system and create incentives to improve safety and quality. For a list of questions and answers from the Quality Grand Rounds conference, see the Appendix.
"Quality Grand Rounds" is a series of articles and companion conferences designed to explore a range of quality issues and medical errors. Presenting actual cases drawn from institutions around the United States, the articles integrate traditional medical case histories with results of root-cause analyses and, where appropriate, anonymous interviews with the involved patients, physicians, nurses, and risk managers. Cases do not come from the discussants' home institutions. The physician, Dr. Harris, was interviewed by a Quality Grand Rounds editor on 16 August 2002. The physician's defense attorney, Mr. Dean, was interviewed by a Quality Grand Rounds editor on 14 August 2002. All names are pseudonyms.
Summary of Events
Mrs. Taylor (a pseudonym), a 52-year-old woman with severe pneumonia and impending respiratory failure, was evaluated on the medical ward of a community hospital by Dr. Harris, an internist. Dr. Harris chose to immediately transfer her to the intensive care unit (ICU) for urgent intubation by a critical care specialist. During intubation, Mrs. Taylor had a cardiac arrest, which resulted in permanent brain damage. Dr. Harris was sued for malpractice.
The Case
Mrs. Taylor had a 3-day history of progressive fevers, nausea, and vomiting. She presented to the emergency department at 2:30 a.m., where she appeared to be moderately ill and dyspneic. Her initial temperature was 38.3 °C, her blood pressure was 112/70 mm Hg, her heart rate was 118 beats/min, and her respiratory rate was 26 breaths/min. Her oxygen saturation was 92% on room air. The examination was remarkable for crackles at her right lung base. The examination of her cardiac, abdominal, and neurologic systems was unremarkable. Laboratory studies showed a leukocyte count of 14 × 10⁹ cells/L with a left shift, a creatinine level of 1.3 mg/dL (114.9 µmol/L), and a sodium level of 129 mmol/L. A chest radiograph showed a dense right lower lobe infiltrate. Bacterial pneumonia was diagnosed. The patient began receiving levofloxacin, metronidazole, and oxygen and was admitted to the medical ward of the hospital. A pulmonologist was consulted by telephone about the initial treatment choices. At 7:45 a.m., a nurse found Mrs. Taylor profoundly dyspneic and diaphoretic. Her oxygen saturation had fallen to 69% on 2 L/min of oxygen. The patient was immediately placed on a nonrebreather mask at 15 L/min, which increased the oxygen saturation to 91%. Dr. Harris, who had assumed Mrs. Taylor's care that morning, was paged and arrived within minutes.
Dr. Harris found the patient in marked respiratory distress. She had a temperature of 37.6 °C, a blood pressure of 140/88 mm Hg, a heart rate of 140 beats/min, and a respiratory rate of 50 breaths/min. On examination, she had diffuse rhonchi, as well as crackles, throughout the right lung field. The rest of the examination was unremarkable. An arterial blood gas showed a pH of 7.41, a PCO₂ of 29, and a PO₂ of 63 (on the nonrebreather mask). Portable chest radiography showed a worsening of the right lung infiltrate.
Dr. Harris diagnosed progressing pneumonia and impending respiratory failure. She considered intubating the patient herself on the floor but opted to immediately transfer Mrs. Taylor to the care of a pulmonologist and intensivist who was standing by in the ICU, for probable intubation and mechanical ventilation.
Dr. Harris: In my mind, it was a matter of what would be safest. I really don't have a lot of experience with awake intubation, and I knew that a pulmonologist was already involved in the case, so it was a really easy decision from my standpoint to get ... the patient transferred to the ICU for intubation.
Dr. Harris first saw the patient at 7:57 a.m. and completed her evaluation by 8:20 a.m. It took a few minutes for the logistics to be organized and for Mrs. Taylor to be physically transported. She arrived in the ICU at 8:37 a.m. By this time, her respiratory distress was more pronounced and she had become delirious. Her blood pressure was 142/65 mm Hg, her heart rate was 145 beats/min, her respiratory rate was 38 breaths/min, and oxygen saturation on the nonrebreather mask was 64%.
The pulmonologist preoxygenated Mrs. Taylor with a bag-valve-mask apparatus, administered a dose of midazolam, and attempted intubation at 8:45 a.m. Unfortunately, the attempt was complicated by ventricular fibrillation and a cardiac arrest. The physicians and nurses resumed bag-valve-mask oxygenation, and the oxygen saturation, which had fallen to the mid-30s, rose to the 80s. Standard cardiopulmonary resuscitation was performed, including 2 to 3 minutes of chest compression, accompanied by boluses of atropine and epinephrine. The patient was defibrillated with 200 J and intubated successfully on the second attempt at 8:49 a.m. Arterial blood gas values after intubation were a pH of 7.09, a PCO₂ of 72, and a PO₂ of 39 on 100% FiO₂.
The patient's oxygenation ultimately improved and her cardiopulmonary status stabilized, but she suffered profound and presumably irreversible brain damage. At the time of discharge, she could not recognize family members or independently perform any activities of daily living. Although the case was informally discussed among the providers involved, it was not forwarded to or reviewed by the hospital's risk management committee. The patient was discharged to a long-term care facility for total custodial care. Several months after discharge, the patient's family sought legal counsel and decided to pursue a malpractice claim. About 20 months later, Dr. Harris received notice that she had been named in Mrs. Taylor's malpractice case.
Dr. Harris: I was sitting in the ICU and my partner calls me up and says, "You're getting sued, and that's why I'm leaving medicine."
Anatomy of a Malpractice Claim
The lawsuit filed against Dr. Harris illustrates a conventional tort claim for medical malpractice against a physician. To recover damages, Mrs. Taylor must prove 1) that the relationship between Dr. Harris and her gave rise to a duty, 2) that Dr. Harris was negligent—her care fell below the standard expected of a reasonable medical practitioner, 3) that Mrs. Taylor suffered an injury that was 4) caused by Dr. Harris's negligence (1). The claim is seemingly that Dr. Harris did not move quickly enough to seek critical care attention for Mrs. Taylor and that the delay caused the cardiac arrest and subsequent brain damage. We use this case to plumb the broader policy perspectives of malpractice and its effect on patient safety and deterrence of errors. Because some aspects of the litigation are still pending, we could not obtain comments from the plaintiff's attorney; however, we contribute our own thoughts about the plaintiff's likely view of the case.
Why Sue Dr. Harris? The Perspective of the Plaintiff's Attorney
From the plaintiff's perspective, there are three reasons to sue the physician for malpractice. First, filing a lawsuit is a way to secure compensation for the injury (2). Mrs. Taylor no doubt has some uninsured costs associated with this injury; for example, it is highly unlikely that her health and disability insurance will provide coverage for years of rehabilitation or custodial care (3), compensate her family for the loss of her household services, and recompense Mrs. Taylor's and her family's suffering. Second, suing Dr. Harris may provide a sense of corrective justice (4). An injured party is "made whole" through restitution from the injurer. Provoking feelings of remorse, shame, and guilt in the defendant is an integral part of this corrective justice.
Finally, tort litigation is meant to have a deterrence function (5). By forcing the negligent party to pay a penalty, the system creates an economic incentive to take greater precautions in the future. Presumably, being sued will cause Dr. Harris to approach acutely dyspneic patients differently in the future.
Presented as such, the tort system has theoretical appeal. It should supplement other methods of quality regulation through its deterrence function (Mello MM, Brennan TA. Regulating health care quality: the case of patient safety. Commissioned paper for the Agency for Healthcare Research and Quality; 2002). It is essentially a cost-free form of regulation for taxpayers because the regulatory vigor is provided by market incentives that direct plaintiffs' attorneys to select and bring cases. Attorneys weigh the costs of bringing a case (investigating the claim, hiring experts, and going to trial) against their expected compensation (usually a percentage of the award made to the plaintiff, referred to as a "contingency fee") (6).
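The attorney's screening calculus described above can be made concrete. The figures below are hypothetical illustrations, not data from the article: a rational contingency-fee attorney takes a case only when the expected fee exceeds the expected outlay.

```python
def worth_bringing(p_win, expected_award, contingency_rate, litigation_costs):
    """Compare the expected contingency fee against the attorney's expected costs."""
    expected_fee = p_win * expected_award * contingency_rate
    return expected_fee > litigation_costs

# Hypothetical figures: a 35% contingency fee and $50,000 to investigate,
# hire experts, and try the case.
take = worth_bringing(0.5, 400_000, 0.35, 50_000)     # expected fee ~$70,000: take it
decline = worth_bringing(0.5, 250_000, 0.35, 50_000)  # expected fee ~$43,750: decline
```

This arithmetic also illustrates why the tort reforms discussed earlier reduce claim frequency: caps on damages and limits on contingency fees shrink the expected fee without changing the attorney's costs, flipping marginal cases from "take" to "decline."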
This attractive theoretical account of tort law's social role is challenged, however, by the available empirical evidence about how medical malpractice law actually operates. Tort law performs its compensation function relatively poorly because most patients injured by negligence do not bring malpractice claims (7-9). In addition, the system has very high administrative costs—up to 60%, as compared with 5% to 30% for most other social compensation schemes (10) (Table). For example, Workers' Compensation is estimated to have administrative costs of 20% to 30%, and the Social Security Disability Insurance system has costs in the 5% range. The differences are stark: For a $400 000 malpractice award, another $200 000 is spent on administrative costs, primarily in attorneys' fees. In contrast, a Workers' Compensation award of $400 000 requires only about $100 000 in administrative costs. With respect to corrective justice, the malpractice system does induce negative emotions in sued physicians (11), but it rarely inspires genuine remorse or feelings that justice has been done. Rather, most defendants find little merit in the suits brought against them and feel that they are the victims of a random event (12, 13).
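The worked figures in the paragraph above can be restated as overhead shares of total dollars flowing through each system; this small sketch simply recomputes the article's example numbers.

```python
def overhead_share(award, admin_costs):
    """Administrative costs as a fraction of total dollars spent (award + overhead)."""
    return admin_costs / (award + admin_costs)

tort = overhead_share(400_000, 200_000)          # $200k of overhead on $600k total
workers_comp = overhead_share(400_000, 100_000)  # $100k of overhead on $500k total
```

On these example figures, roughly a third of every dollar spent resolving the malpractice claim goes to administration, versus a fifth under Workers' Compensation.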
Table. Comparison of Tort and Administrative Compensation Schemes
The deterrence function of malpractice litigation also seems unavailing (14). Studies of the relationship between lawsuits and subsequent quality of care have largely centered on obstetrics. Most studies have failed to correlate variations in care patterns or birth outcomes with the obstetrician's history of malpractice claims (15-18). The single broad study of hospital adverse events reported limited evidence that a greater number and severity of malpractice claims were associated with improvement in medical injury rates (12). Even defensive-medicine effects, that is, the inducement of higher-than-optimal levels of precaution, have not been conclusively reported (14, 19). Anecdotal evidence suggests that in periods of "tort crisis," fear of being sued and the unaffordability or unavailability of liability insurance may have a different deterrent effect: It may deter physicians from remaining in practice or continuing to perform high-risk services (20, 21). Such effects, if they become widespread, affect patient access to care. Thus, much of the plaintiff's view of malpractice litigation is controversial.
Is the Lawsuit Fair? The Perspective of the Defense Attorney
From the facts of Mrs. Taylor's case, most readers probably have concluded that there is little evidence of negligence on the part of Dr. Harris. Within 40 minutes of the evaluation, Dr. Harris had moved Mrs. Taylor to the care of an expert in the ICU. Since this action plan was within the standard of care expected of a reasonable practitioner, the malpractice suit seems unfair. The sense of unfairness is compounded by the fact that the lawsuit blames the individual physician. This event clearly implicated several layers of the system: the nursing monitoring of the patient's condition; the schedule of attending coverage, as Dr. Harris "picks up" the care in the morning from another physician; the emergency response and admission to the ICU; and the issue of emergency intubation "on the floor." It seems unreasonable to blame Dr. Harris, given the possible contributory role of these systemic factors.
Plaintiffs' attorneys routinely sue several individuals, as well as the hospital. They may not believe that all individuals are liable, but they hope that some will offer at least a small settlement to avoid the nuisance aspects of the suit and the risk for a larger jury award. These settlements enable the plaintiff's attorney to fund further litigation against other defendants in the suit.
Mr. Dean, the defense attorney, is savvy about the respective roles of patient injury and negligence in initiating and settling a malpractice claim:
Mr. Dean: In a case like this, involving a patient who was already in the hospital, who has an arrest and anoxic encephalopathy, one of the very significant perceptual issues we have to consider ... is the fact that there was a catastrophic outcome, and to some jurors, catastrophic outcomes may equate with "somebody must have messed up."
Mr. Dean makes the important point that the degree of injury is critical to the outcome of the case. His contention is supported by empirical evidence from the Harvard Medical Practice Study, which examined rates of hospital adverse events, negligence, and malpractice claims in New York (7, 22). Negligence was determined by physician reviewers unaffiliated with the sued providers' insurance companies. The investigators followed the malpractice claims for 10 years and determined that the only statistically significant predictor of a payout to the plaintiff was the plaintiff's degree of disability—not the presence of negligence (23). Other studies have suggested that negligence does influence the size of settlements (24, 25), but these analyses have been based on insurance claims adjusters' determinations of negligence rather than independent judgments. If the main factor determining compensation is injury severity or disability even in a system that ostensibly revolves around a negligence determination, then one must ask why we cling to the tort model of compensation for medical injury.
The Case, Continued
After a long pretrial period of fact-finding ("discovery"), expert witness reviews, and depositions, Mr. Dean felt that his client's case was very strong. However, Mrs. Taylor's horrendous adverse outcome and concerns (unrelated to Dr. Harris's care) about her care by the hospital and other providers led Mr. Dean to recommend that Dr. Harris offer to settle the case for a relatively small amount of money. In explaining the decision to settle, Mr. Dean weighed three factors. First, if a jury found his client negligent, what would the plaintiff's damages probably amount to, both in economic and noneconomic ("pain and suffering") terms? Second, how likely is a jury to find in favor of the physician? Third, what is his gut instinct about the case's worth? His judgment incorporates subjective factors, such as the likely composition and liberality of the jury in a given venue and sympathetic or unsympathetic characteristics of the plaintiff, her injury, and her circumstances.
Mr. Dean: The concern was that the jury could be so overwhelmed with sympathy for what occurred to the patient and the patient's family that they would feel it would be impossible to say no ... Even if you are assessed a very small percentage of responsibility by the jury, given the huge potential damage exposure ... it could potentially represent a judgment ... in excess of your malpractice coverage. Mrs. Taylor and her family could come after the physician and force her into bankruptcy, resulting in financial ruin for Dr. Harris.
One would not blame Dr. Harris for feeling that the outcome of her case is unfair. Yet it is perfectly in accord with empirical research on litigation outcomes and with attorneys' strategic decisions as they function within an imperfect tort system. For Dr. Harris, settlement is the most rational choice in a system that could produce an utterly calamitous outcome.
The Perspective of Patient Safety Reformers
Those persons directly involved in this litigation—Dr. Harris, Mrs. Taylor, and their families and attorneys—feel the greatest effect of the malpractice system's shortcomings. However, these failings also have strong implications for the nascent patient safety movement. The traditional rule in the common law is that all available probative evidence (evidence that proves a fact) should be admitted to the court for consideration (26). But legislators have long recognized that peer reviewers would be chilled if they knew that their review would be available to a plaintiff and to his or her attorney; thus, they have granted a privilege of nondiscoverability to peer review information, which courts generally have enforced (27).
The breadth of the privilege varies from state to state (28), but generally, hospitals must confine discussions about adverse events to small committees of insiders to retain the privilege. The need to minimize legal exposure leads them to eschew more public debate about quality issues. In the Harris case, it seems that it would have been beneficial for the hospital and staff to have openly evaluated issues of seamless cross-coverage, protocols for emergent intubation on the floor, and timely transfer to the ICU. Unfortunately, it appears that nothing of this sort occurred.
Dr. Harris: From a hospital standpoint, to my knowledge, it was never discussed with any of the physicians. It never came up. I guess the things that come to mind are ... intensive care unit transfers and code blue situations ... but if they changed things in regards to this case, that would be news to me ... I don't really know the risk management people ... I know they exist, but who they are and their role and function in a situation like this or day to day, despite the fact that I spend up to 120 hours in the hospital, is just not discussed and I've never met them face to face.
The hospital cannot necessarily be blamed for failing to follow up. Perhaps the hospital concluded after an initial evaluation that there were few grounds for quality improvement. More likely, the hospital realized the extent of the resources necessary to complete a formal peer review process and decided it was not worth the effort. But Dr. Harris's ignorance of the formal mechanics of peer review at her hospital, and its essentially hidden nature, demonstrate the tension between error prevention or quality improvement and medical malpractice. Fear of litigation either stifles injury reduction efforts or drives efforts underground.
Malpractice and Patient Safety Trends
The Institute of Medicine's report on medical errors (29) has fomented a critical change in attitude about patient safety activism. Many risk management offices (a euphemism that obscures whether the "risk" is for a medical injury or for a successful malpractice claim) are now becoming patient safety offices or are partnering with newly created, separate patient safety offices. The use of careful root-cause analysis is becoming prevalent at the departmental level in many institutions (30). Yet malpractice fears continue to retard these salutary efforts, and many hospitals still approach error-related injuries the way Dr. Harris's hospital did. These apprehensions not only chill educational discussion but also exert profound pressure against initiatives to disclose adverse events to both patients and governmental reporting systems. We (31) and others (32) have long advocated greater transparency about medical errors. Codes of professional ethics, as well as the new patient safety standards promulgated by the Joint Commission on Accreditation of Healthcare Organizations (33), support an obligation of disclosure to patients. The enormous potential for learning about errors through epidemiologic analysis argues persuasively for reporting to centralized data collection systems.
However, providers reasonably fear that greater transparency will tremendously increase the number of successful malpractice claims, with concomitant increases in malpractice premiums and decreases in the availability of insurance. Advocates of reporting counter that honesty may actually decrease physicians' malpractice risk (34): Physicians who have poor relationships with patients are the ones who get sued, and what patients really want is to be dealt with forthrightly (35, 36). The sole piece of published evidence on this issue is methodologically weak and comes from the Veterans Administration system, in which the physicians cannot be sued and institutional liability is limited (37). Researchers have yet to disprove providers' suppositions that greater disclosure will lead to more requests for compensation.
Legislation to protect centralized error reporting from legal discovery can help, but not all states have adopted such protections (38). Even in states that guarantee confidentiality, the continued public and media attention to medical errors—which provides valuable impetus and momentum for patient safety initiatives—may make injured patients more disposed to file claims.
A New Paradigm
The tensions between the tort system and patient safety demand that we reexamine our attachment to adversarial dispute resolution in health care. The options boil down to three paths. First, we can maintain the status quo and simultaneously push the safety agenda harder. It is possible that appeals to physicians' ethical commitments to patient welfare (39) and the demonstrated successes of industry-based models of systemic quality improvement may gradually yield buy-in to safety initiatives. We have our doubts, however. The conflicts between the tort system and error reduction programs are fundamental and severe, and physicians' concerns about being sued and losing their liability insurance have reached a fever pitch. Appeals to professionalism may ring hollow with physicians operating under a siege mentality. A second option is to take legislative steps to curb the frequency and economic effect of malpractice litigation. During past "tort crises," providers successfully lobbied state legislatures to change litigation rules to make them less favorable to plaintiffs (40). Tort reform aims to decrease the expected value of a case for plaintiffs' attorneys, changing the calculus about when it is worthwhile to bring a claim. Among the most efficacious reforms are caps on noneconomic damages; changes in the amount that attorneys may take as contingency fees; reductions in the length of time that injured patients have to bring a claim; and elimination of the "collateral source rule," which allows plaintiffs to recover medical expenses and other costs even if these have been covered by insurance (41-44).
Today we are in the throes of a new tort crisis, with claim rates and average payouts rising in many states, especially those that did not institute tort reform in previous crises (45). The concurrence of the tort crisis and the attention to medical errors has not gone unnoticed by insurers. Lobbying for tort reform at both the state and federal levels is under way (47).
The tort reform strategy is problematic, not least because of its contentiousness. Many state legislatures cannot pass meaningful reform because of the competitive gridlock interposed by health care providers and trial lawyers. Moreover, traditional tort reforms aim to reduce providers' economic exposure, not create a more efficient system. The system's fundamental flaw is not simply that it costs health care providers too much but that it tends to overcompensate some patients while undercompensating others (8, 47). Reform should strive to do more, and we believe a no-fault approach is the answer.
In a no-fault system, the injured patient would only have to demonstrate that a disability was caused by medical management as opposed to the disease process: There is no need to prove negligence. This approach comports better with the patient safety movement. Modern notions of error prevention, emphasizing evidence-based analysis of systems of care (29) and application of technological and structural methods to foster prevention (50), find little value in assessing individual moral blame. No-fault compensation for avoidable injuries is far better suited to support error prevention than a system that revolves around culpability determinations.
We believe that such an approach could produce important incentives for prevention, the so-called deterrent effect, if risk were aggregated in institutions and medical groups. Experience-rating individual physicians' insurance premiums has not been actuarially feasible because physicians are sued too infrequently and their claims experience fluctuates too radically from year to year (14). However, hospitals and integrated medical groups have a more consistent risk profile and their premiums can be experience-rated.
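The actuarial point above can be sketched with a simple illustration. If claims against a single physician arrive roughly as a Poisson process, the year-to-year relative fluctuation of the claim count scales as one over the square root of the expected rate, so a rarely sued individual has a wildly unstable claims record while an institution's pooled experience is comparatively steady. The rates below are hypothetical round numbers chosen for illustration, not figures from the article.

```python
import math

# Coefficient of variation (sd / mean) of a Poisson claim count.
# For a Poisson process, mean = rate and sd = sqrt(rate), so cv = 1/sqrt(rate).
def relative_fluctuation(expected_claims_per_year):
    return 1 / math.sqrt(expected_claims_per_year)

# Hypothetical rates: an individual physician sued about once a decade,
# versus a hospital whose pooled staff generates ~50 claims per year.
physician = relative_fluctuation(0.1)   # ~3.16: swings dwarf the mean
hospital = relative_fluctuation(50.0)   # ~0.14: a stable risk profile
print(round(physician, 2), round(hospital, 2))  # → 3.16 0.14
```

On these assumed rates, the individual physician's annual claim count fluctuates by roughly three times its mean, which is why experience-rating individual premiums has not been feasible, while the institution's count varies by only about 14 percent.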
An even better approach may be to set up so-called channeling programs, in which hospitals and their medical staffs are insured by the same entity and all efforts to prevent medical errors are undertaken jointly. Some medical schools and academic medical centers already use a channeling approach, and, as links grow between hospitals and integrated medical groups, the potential for a substantial amount of the health care system to operate under channeling approaches increases. In a channeled program, the foundation for greater safety is established by integrating physicians and hospitals or health care centers. The enterprise bears the liability for injury and has incentives to address prevention of errors in both inpatient and ambulatory settings.
We have also noted that in practice, compensation in the current tort system turns on severity of injury more than negligence—so why maintain a system focused on determining negligence? It is expensive and administratively cumbersome to make these determinations, as it involves an adversarial "battle of the experts." Moreover, even negligence judgments by financially disinterested expert reviewers are notoriously unreliable (48). In the context of a vigorously adversarial system, the focus on negligence also incites emotion-provoking behavior by litigants. Not only does this leave lasting psychological scars on persons involved, it pollutes what otherwise might be a useful exercise in root-cause analysis leading to quality improvements (49).
Finally, good data suggest that the no-fault approach would be less costly administratively. Similar no-fault programs in Workers' Compensation and vaccine liability operate at less than half of the costs of tort litigation, largely by minimizing the role of the lawyers. This is where politics will play an important role: Lawyers will fight to maintain the present system.
Elsewhere we have described a limited no-fault approach to medical injury compensation that could work on an elective basis (14). We believe that no-fault compensation can 1) promote greater transparency about adverse events, 2) partner with a hospital-based, experience-rated insurance system that does not remove incentives for error prevention, and 3) lead to more equitable and efficient compensation (Table).
Skeptics of no-fault proposals highlight the historical absence of effective self-policing, the possibility that the present malpractice system has improved safety by promoting vigilance and better documentation, and the uninspiring example of other no-fault systems, such as Workers' Compensation (51). Mr. Dean's view of the matter reflects the prevailing uncertainty about its probable outcomes:
Mr. Dean: If we reinvent the system and take lawyers completely out of the equation ... is that going to result in safer medical care? One argument is that if physicians know that their care is not going to be subject to scrutiny ... that can actually decrease patient safety. On the other hand, I think that a reasonable argument can be made that if a physician or health care provider knows that every judgment is not going to be subjected to intense microscopic scrutiny under the "retrospectoscope," they are going to be more liberated and free to practice what they see as good medicine, and not be subject to second-guessing at every turn, and that can improve patient safety. It seems to me that until we have some hard data comparing safety in a pure no-fault system, we are not going to know the answer.
We acknowledge this uncertainty, but believe the proposal is worthy of experimentation.
The Harris case illustrates how difficult it is to move forward with an error prevention agenda in a heated malpractice environment. It is not surprising that providers are reluctant to buy in. Patients deserve innovative approaches that will reduce their chances of being injured by errors and lead to fair compensation if an avoidable injury occurs; providers deserve an environment in which participating in patient safety and compensation initiatives does not put them at risk for financial and professional ruin.
Appendix
Questions and Answers from the Conference Dr. Robert M. Wachter, Quality Grand Rounds Editor: Where do you think the locus of action for improving patient safety should be? How would the malpractice system or the no-fault system play into creating incentives for institutions to improve safety?
Dr. Brennan: The only place where we find any real evidence of the deterrent effect of malpractice on errors is at the level of the institution. That makes sense because it is very difficult for individual practitioners to institute systematic approaches to reducing the number of medical injuries. In our most recent proposal for a no-fault system, we suggested that individual hospitals could choose to check out of the tort system and into a voluntary, no-fault program. The only places that can do that are those with integrated medical groups, which you find mostly in so-called channeling institutions. That's an insurance company term for a place where a single insurer covers both the doctors and the hospitals. Doctors who see patients in a primary care setting could have them sign a waiver saying that they understand they can't sue because the organization is in a no-fault compensation scheme. What I find attractive about this is that it could afford a competitive advantage in today's environment. We can tell patients that we can compensate them through the administrative system and that the compensation is going to be fair. We also have very strong incentives to report any injury to patients and to the administrative system. The average community hospital is going to have a harder time because physicians are separately insured and separate entities from the point of view of patient safety. From our point of view, the no-fault system creates an environment that encourages reporting, analyzing these reports, and publicizing the results. Many patients are going to find that attractive.
A physician: In a no-fault system that has no negligence, who decides what an adverse event is?
Dr. Brennan: An adverse event is defined as something that results in a prolongation of hospitalization or disability at the time of discharge, as a result of medical management as opposed to the disease process. That is actually a lot easier to define reliably than is the negligence judgment. What people are being compensated for today is their injury, not the negligence. Trying to identify the negligence is eating up a lot of administrative cost and poisoning the system with the morality play. Determining if an avoidable adverse event occurred would be easier in an administrative compensation scheme and would run similarly to the way things are adjudicated by insurance companies today, with expert testimony and decision-making along those lines. I am fairly confident that the system would work.
Dr. Mark Smith, President and Chief Executive Officer, California HealthCare Foundation, and Quality Grand Rounds Editor: Perhaps as a result of the rise of managed care, much of the most heavily publicized litigation in California has been at the health plan level, not targeting physicians or hospitals but concerning, for instance, coverage for bone marrow transplantation for breast cancer. Are there implications in a no-fault approach for liability when a health plan declines to cover treatment?
Dr. Brennan: Probably not. These cases occur infrequently, and the protections afforded insurance companies, because of the Employee Retirement Income Security Act (ERISA), make them relatively difficult cases to bring. These two factors tend to overwhelm a need for a no-fault approach there.
A physician: Under the no-fault program, the physician has a strong incentive to report adverse events to the patients and the hospital. Hopefully we all do that, but in a busy physician's schedule, I would think that they would find it easier not to report.
Dr. Brennan: You can build in some penalties for failure to report. Some insurance companies already charge an extra malpractice premium if a claim comes in and you haven't forewarned the insurance company. We would do the same thing in a no-fault program. Although we're trying to avoid a sense of penalty, there nonetheless have to be inducements to report.
Dr. Wachter: Informing patients of errors in their care is ethically the right thing to do. Increasingly, people cite evidence that full disclosure also will not increase the risk of a lawsuit. Is this correct?
Dr. Brennan: There are no good studies on that point, unfortunately. There are seasoned risk managers who will tell you that a lot of what people get upset about, and bring suits about, is the feeling that someone lied to them. Nonetheless, those same seasoned risk managers are not necessarily in favor of full reporting. The literature that people cite is a 1999 article in the Annals of Internal Medicine (37), which observed that at a couple of hospitals in a VA [Veterans Administration] system that promoted reporting errors to patients, claim rates were no higher than in other hospitals. However, there was absolutely no case-mix adjustment, and the VA system is a lot different from other hospital systems. First of all you have the Federal Tort Claims Act, which provides protection from suit, and second, you can't sue the individual doctors. So there is really no evidence right now.
A physician: Can you comment from the charts that you've reviewed about the quality of documentation and the role that it plays in the merits of the suit or on the outcome?
Dr. Brennan: In general, the quality of documentation is helpful in terms of nailing down whether or not a medical injury occurred or whether or not there was negligence. A few might take from this that if you don't document well, it's going to be more difficult to bring a case against the doctor, but crummy documentation actually plays very poorly in litigation. From the point of view of preventing medical injury, it is probably best to do the documentation.
Dr. Wachter: I can't let you leave without talking about the estimate of 44 000 to 98 000 yearly deaths due to medical errors in the Harvard Medical Practice Study, which you led. These numbers, more than anything, captured the public's attention when they were touted in the 1999 IOM [Institute of Medicine] report. Yet, you have been circumspect about their accuracy. Could you comment?
Dr. Brennan: These are statistical analyses and I think we did them about as well as they can be done. But the reliability of these judgments from a statistical point of view is fairly poor, with a kappa statistic of 0.4 to 0.5 for adverse events and even lower for negligence. What that means is that one person may say an event is a negligent adverse event, while another would say it's not. The other issue is that the IOM took our state-level data on adverse events and upweighted them to generate national mortality estimates. Whenever you extrapolate from relatively small samples, you have concerns about the statistical precision of the estimates. We always tried to point out the sponginess of these numbers in our public statements, but the IOM made a specific decision to go with them. The IOM performed a very important service in terms of putting patient safety back into the common vernacular of the American medical system and for that we owe them a debt of gratitude. Although we don't know exactly how many people die from medical errors, there is no doubt that it is at least 50 000 per year in hospitals and many additional outpatients. In the end, the actual number doesn't make much difference. Whatever the numbers, we have a tremendous burden of morbidity and mortality caused by errors and relatively little attention being paid to trying to prevent them.
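[Editor's note: Cohen's kappa measures agreement between two reviewers after discounting the agreement expected by chance. The sketch below is purely illustrative; the agreement table is hypothetical, not data from the study, but it shows how two reviewers can agree on 80 of 100 charts yet yield a kappa of only about 0.5, the upper end of the range Dr. Brennan cites.]

```python
# Cohen's kappa from an agreement table: table[i][j] counts charts rated
# category i by reviewer A and category j by reviewer B.
def cohens_kappa(table):
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of charts on which the reviewers concur.
    observed = sum(table[i][i] for i in range(len(table))) / n
    # Chance agreement: expected concurrence from the marginal rates alone.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    expected = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical table: rows = reviewer A, columns = reviewer B,
# categories = ["adverse event", "no adverse event"].
table = [[18, 8],
         [12, 62]]
print(round(cohens_kappa(table), 2))  # → 0.5
```

Because most charts involve no adverse event, much of the raw 80 percent agreement is what two reviewers would reach by chance, which is why kappa is so much lower than the crude agreement rate.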