The Report ‘Thailand Pharmaceuticals & Healthcare Report Q1 2008’ Forecasts Modest Growth Over the Period Researched, With the Value Reaching US$2.62bn By 2012

Research and Markets (http://www.researchandmarkets.com/reports/c87671) has announced the addition of Thailand Pharmaceuticals & Healthcare Report Q1 2008 to their offering.

The Thailand Pharmaceuticals and Healthcare Report provides independent forecasts and competitive intelligence on Thailand’s pharmaceuticals and healthcare industry.

Thailand’s pharmaceutical market, one of the most challenging in Asia, is likely to become more so over the coming years. One of the factors that threaten to worsen operating conditions in the country is the increasing use of compulsory licensing by the government. While legal under World Trade Organization (WTO) rules, compulsory licensing in Thailand seems to be getting out of hand, with the multinational research-based industry quick to criticise the government’s stance on the issue.

The Thai pharmaceutical market was worth an estimated US$2.16bn. The report forecasts modest growth, with the value reaching US$2.62bn by 2012. Thailand has one of the fastest-growing populations in the region, and the government’s implementation of basic universal health coverage in 2001 has served to boost demand for pharmaceuticals. However, government finances appear to be insufficient to support the new health coverage system, with authorities needing to resort to making decisions unpopular with drug-makers.
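
For scale, a back-of-the-envelope calculation (not from the report, and assuming the US$2.16bn estimate refers to 2007, which the release does not state) gives the implied average annual growth rate to 2012:

\[ \left( \frac{2.62}{2.16} \right)^{1/5} - 1 \approx 0.039 \]

That is roughly 3.9% a year, consistent with the "modest growth" the report describes.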

In BMI’s revised Q108 Business Environment Ranking table for the 14 key markets in Asia, Thailand occupies one of the bottom places. Ranked joint eleventh with Indonesia, Thailand poses considerable risks and limits to investment in its pharmaceutical industry. Despite modernisation of the healthcare sector, which is attracting a considerable number of medical tourists, the military-appointed government’s regulatory decisions continue to knock multinationals’ confidence in the market, stall the launch of new products, and deter foreign direct investment (FDI) in the country. Nevertheless, foreign drug-makers continue to dominate the market, led by US-based major Pfizer.

Domestic manufacturers remain limited to basic generics due to a lack of available capital to fund research and development (R&D). The state-owned Government Pharmaceutical Organization (GPO) is in the privileged position of being the largest supplier of drugs to the public sector, and its range of antiretrovirals (ARVs) is a growth area.

Content Outline:

Executive Summary
Thailand Pharmaceutical and Healthcare Industry SWOT
Thailand Political SWOT
Thailand Economic SWOT
Thailand Business Environment SWOT
Business Environment Ranking
Pharmaceutical Ratings: Revised Methodology

Ratings Overview
Table: Pharmaceutical Business Environment Indicators
Weighting
Table: Weighting Of Components
Thailand – Business Environment Ranking
Table: Business Environment Rankings
Limits of Potential Returns
Limits to Realisation of Returns
Thailand – Market Summary
Regulatory Regime
Intellectual Property Regime
IP Shortcomings
Counterfeit Drugs
Compulsory Licensing
Regulatory Developments
Recent IP Developments
Pricing and Reimbursement
Regional and International Developments
Industry Trends and Developments
Epidemiological Trends
Table: Prevalence of Mental Disorders Per 100,000 Population
Medical Tourism
Pharmaceutical Sector
Indigenous Producers
Foreign Companies
Recent Industry Developments
Traditional Medicines
Pharmaceutical Retail
Vaccine Sector

Research and Development
Table: Clinical Trial Considerations in Asia
Recent Research and Development Activities
Industry Forecast Scenario
Overall Market Forecast
Table: Thailand Pharmaceuticals Market Trends
Key Growth Factors – Industry
Table: Thailand Health Expenditure Forecasts
Key Growth Factors – Macroeconomic
Table: Thailand – Economic Activity
Prescription Market Forecast
Table: Thailand Prescription Drug Market Indicators (US$mn unless otherwise stated)
Over-the-Counter Market Forecast
Table: Thailand OTC Drugs Market Indicators (US$mn unless otherwise stated)
Generics and Patented Market Forecasts
Table: Thailand Generics and Patented Market Forecasts
Export/Import Forecasts
Table: Thailand’s Pharmaceuticals Exports and Imports Forecasts (US$mn)
Other Healthcare Data Forecasts
Table: Thailand Other Health Indicators

Key Risks to BMI’s Forecast Scenario
Competitive Landscape
Leading Companies in Thailand, 2007
Company Profiles

Leading Multinational Manufacturers
Pfizer
GlaxoSmithKline
Novartis
Merck & Co
Sanofi-Aventis

Domestic Manufacturer Profiles
Siam Pharmaceutical
Government Pharmaceutical Organisation (GPO)
Contacts
Biolab Co
Thai Meiji Pharmaceutical Co

BMI Forecast Modelling
How We Generate Our Industry Forecasts
Pharmaceutical Industry Sources

Companies Mentioned:

Pfizer

GlaxoSmithKline

Novartis

Merck & Co

Sanofi-Aventis

Siam Pharmaceutical

Government Pharmaceutical Organisation (GPO)

Biolab Co

Thai Meiji Pharmaceutical Co

For more information visit http://www.researchandmarkets.com/reports/c87671

Hepatocellular Carcinoma: Misdiagnosis or Spontaneous Remission?

By Stefanczyk-Sapieha, Lilianna; Fainsinger, Robin L

INTRODUCTION

Hepatocellular carcinoma (HCC) is an aggressive malignancy frequently associated with chronic liver disease or cirrhosis (1). The incidence and burden of this disease is increasing worldwide, including in Western countries and the United States (2,3). Median survival from the time of diagnosis is reported to range from six to 20 months (1). Surgical and nonsurgical treatment options exist, but are frequently limited by the extent of the underlying liver disease (4). Rare cases of spontaneous remission have been reported in the literature (5-9); however, many of these reports have been challenged by physicians caring for HCC patients. This case illustrates some of the challenges of establishing a diagnosis and prognosticating survival in HCC, and the negative effect this may have when a palliative diagnosis is assumed.

CASE PRESENTATION

A 56-year-old man was admitted to our palliative care unit (PCU) with a diagnosis of agitated delirium and terminal HCC.

Initial diagnosis of advanced HCC had been made 32 months earlier, when he presented to a different hospital with complaints of vague abdominal pain and a history of weight loss (22 kg) over the previous four months.

His past medical history included chronic alcohol abuse, hepatitis C (possibly linked to previous blood transfusions received for GI bleeds), type 2 diabetes, heavy smoking, depression, and antisocial patterns of behaviour.

During investigation, he was found to have a significantly elevated alpha-fetoprotein (AFP) of 2,905 µg/L. He underwent an MRI of the abdomen and was found to have lesions highly suggestive of HCC infiltration, predominantly in the left lateral lobe of the liver. Biopsy of the right lobe was performed to assess the extent of the background liver disease and it revealed cirrhosis (grade 3). Given the severity of underlying liver disease, he was deemed not to be a surgical candidate. Tissue biopsy from the left lobe lesions was not obtained. Diagnosis of terminal HCC was communicated to the patient and family, survival was estimated at approximately six months, and palliative management was proposed. He was discharged home on opioids to control his abdominal pain.

Following this admission, he was managed by his family physician, but had several hospital admissions related to poorly controlled abdominal pain, and problems with opioid toxicity and episodes of delirium.

Admission to the PCU was at the request of the palliative care consult service in another hospital, for management of presumed terminal agitated delirium. The delirium was refractory to treatment; the patient was extremely combative, and required physical restraints and constant supervision. He failed trials of haloperidol, olanzapine, zuclopenthixol, trazodone, quetiapine, and methotrimeprazine, and was then started on a subcutaneous infusion of midazolam. His opioid was rotated with a reduced dose, and he was given parenteral hydration.

Careful review of the available documentation from previous hospitalizations and reassessment of the patient and his blood work, as well as his extended survival without surgery or other treatments, led us to question the accuracy of the terminal cancer diagnosis. At this point, the midazolam was discontinued and the agitated behaviour was managed with close patient supervision. He became more alert over the next few days without recurrence of the previous agitated behaviour.

A family conference was called to revise goals of care, given the uncertain circumstances. The family wished to maintain DNR code status as per the patient’s previously expressed wishes. They agreed to pursue active treatments, including intravenous antibiotics and hydration, and further investigations as needed to search for potentially reversible causes of delirium and to re-evaluate the cancer diagnosis.

In the course of investigations, there was no evidence of brain metastases on MRI of the brain. Minimal small vessel disease and mild atrophy were noted. He was found to have right lower lobe pneumonia and left lower lobe atelectasis on CXR, but no evidence of pulmonary metastases. The pneumonia subsequently resolved.

The MRI of the abdomen was repeated and results were compared with previous findings. A shrunken cirrhotic liver was found, with evidence of portal venous hypertension, including splenomegaly and ascites. The previously identified enhancing areas of the left lobe of the liver, suspicious for HCC, were no longer present. There was an interval decrease in the size of the left lobe.

Blood work revealed relatively normal liver function, with normal INR and transaminases, mild bilirubin and alkaline phosphatase elevation, and a low albumin (23 g/L). The AFP had decreased to 17 µg/L.

A family conference was arranged to discuss these results. The possibility of an initial erroneous diagnosis or a spontaneous remission was discussed. Follow-up imaging was proposed for further cancer surveillance, and discharge planning was initiated. Unfortunately, following resolution of the delirium, the patient was noted to have cognitive difficulties especially with regard to executive functioning. The patient went on pass with the family’s assistance and subsequently did not return to the hospital.

DISCUSSION

Diagnosis of HCC

HCC is the most serious complication of chronic liver disease and is frequently diagnosed late, when the disease is already untreatable (3,10). Major risk factors for HCC are chronic alcohol abuse and hepatitis B and C (9-11). Increasing incidence of HCC in Western countries seems to be linked to an increase in hepatitis B and C infections (9,11,12). Chronic alcohol abuse continues to be the most common risk factor in Western countries and has an additive effect with hepatitis C infection, where it doubles the risk of HCC and the cancer tends to develop at an earlier age (13). Some of the other risk factors include diabetes, which also has a synergistic effect with viral hepatitis and alcohol abuse, dietary aflatoxins in mouldy grains, and use of oral contraceptives (14).

Diagnosis of HCC is challenging, as patients usually present with manifestations of the underlying chronic liver disease, and only sudden decompensation of the liver disease may heighten the suspicion of HCC (11). The American Association for the Study of Liver Diseases recommends that patients with underlying chronic liver disease and rising AFP levels should have a contrast CT or MRI. If found to have large (>2 cm) or multifocal liver lesions with arterial hypervascularity and increased T2 signal intensity, patients should be diagnosed with HCC (15).

A percutaneous biopsy may not be essential to make the diagnosis of HCC in these cases (15). The biopsy should be performed if the appearance of the lesion on the diagnostic imaging is not consistent with HCC, or if surgical resection is contemplated. The biopsy is associated with increased risk of bleeding and seeding of the tumour cells along the needle track (11). According to the European Association for the Study of the Liver (2), diagnosis of HCC can be made if a focal liver lesion is greater than 2 cm in diameter, can be identified in two contrast-enhanced modalities (contrast CT, MRI, or angiography), and displays arterial hypervascularization in at least one of the modalities. If such a lesion is identified by only one contrast-enhanced imaging modality, then an AFP of more than 400 ng/mL is used to confirm diagnosis (2).
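
As a rough illustration only (not clinical guidance), the non-invasive criteria summarised above can be read as a simple decision rule. The function and parameter names below are mine, not the association's, and the example values are hypothetical:

```python
def meets_noninvasive_hcc_criteria(lesion_diameter_cm: float,
                                   modalities_showing_lesion: int,
                                   arterial_hypervascular: bool,
                                   afp_ng_ml: float) -> bool:
    """Paraphrase of the criteria summarised in the text (reference 2):
    a focal lesion > 2 cm showing arterial hypervascularization, seen on two
    contrast-enhanced modalities, or seen on one modality with AFP > 400 ng/mL.
    Illustrative sketch only."""
    if lesion_diameter_cm <= 2 or not arterial_hypervascular:
        return False
    if modalities_showing_lesion >= 2:
        return True
    return modalities_showing_lesion == 1 and afp_ng_ml > 400


# e.g. a hypothetical 3 cm hypervascular lesion seen on contrast CT only, AFP 500 ng/mL
print(meets_noninvasive_hcc_criteria(3.0, 1, True, 500.0))  # True
```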

Computed tomographic (CT) imaging has been used to identify liver lesions for the past 20 years and is well established as a reliable modality for detection of larger lesions. However, CT has some limitations: detection of early and small lesions is difficult; subcapsular lesions showing transient focal enhancement may be caused by arterial-portal shunting rather than represent HCC, and may disappear on followup CT; dysplastic nodules show substantial arterial phase enhancement and can simulate HCC (16).

Magnetic resonance imaging (MRI) is the examination of choice for detecting and differentiating liver nodules in cirrhotic liver. It is superior to CT and ultrasound (US), but also has low sensitivity for detecting small lesions (17). Colli et al. (18), in a recent systematic review, compared the accuracy of US, AFP, spiral CT, and MRI, and concluded that MRI was likely the superior modality with pooled sensitivity of 81% and specificity of 85%, but, given the wide range of results, definitive diagnosis may still need to rely on CT- or US-guided tissue biopsy.

An elevated AFP has been used as a serum marker for HCC for years, both in screening and as a diagnostic test (19). An AFP of greater than 200 µg/L has been the frequently used cutoff, considered to be highly specific but not very sensitive (20). An elevated AFP has been used to confirm diagnosis in high-risk patients, and also as a screening strategy together with ultrasound (19), although the utility of AFP as a screening test for HCC appears to be limited (19,20).

Our patient had chronic liver disease with significant weight loss, multiple risk factors for HCC, was found to have an elevated AFP, and had focal lesions suggestive of HCC on the MRI. A biopsy of the right lobe of the liver confirmed advanced cirrhosis; he was not, therefore, deemed to be a surgical candidate. All the above findings were highly suggestive of HCC, but there was no evidence of metastatic disease at the time of diagnosis and there was no histopathological confirmation of HCC.

Survival Prognostication in Advanced Cancer and HCC

Survival prognostication remains difficult, even in terminal illness and advanced cancer. It is, however, very important for patients living with life-threatening diseases, their families, clinicians, and health and social services. Illness trajectories tend to be different depending on the nature of the terminal disease. The typical steady decline in the terminal phase of cancer is quite different from a pattern of prolonged slow decline with intermittent deterioration and recovery episodes, as usually observed in chronic organ failure (21).

The literature suggests that, although physicians generally tend to overestimate survival and most of the estimates are incorrect when it comes to a particular patient, clinician prognostication still correlates with actual survival (22,23). Vigano et al. (24) conducted a systematic review of the literature on survival prediction and suggest using performance status, presence of cognitive failure, weight loss, anorexia, and dyspnea as independent survival predictors in addition to clinical estimation of survival by the treating physician (24).

Delirium is well recognized to be associated with increased mortality rates and shorter survival in patients with advanced cancer (25,26). Delirium is a common reason for admission to palliative care units, with prevalence approaching 80% to 90% in the hours or days before death (25,26). Although it has a strong association with the dying phase, up to 50% of cases of delirium can be reversible with treatment of the cause (25,26).

Consequences of a Diagnosis of Terminal Cancer

Misdiagnosis of terminal illness appears to be rare. It is difficult to establish the frequency of the misdiagnosis of terminal cancer, since cited estimates range widely, from four in 1,635 admissions (27) to two of 330 referrals (28).

Consequences of misdiagnosis of terminal illness can be detrimental when choices about care are made based on this incorrect information, e.g., DNR status, palliative sedation, or refusal of potentially life-prolonging treatments such as antibiotics for infections.

A terminal diagnosis can cause psychological distress and suffering in terminally ill patients. A sense of hopelessness, profound sadness, depressed mood, feelings of worthlessness and helplessness, as well as suicidal ideation and social withdrawal, are common in this population (29). Preferences for life-sustaining therapies can also be influenced by depression (29).

Patients with a history of drug and/or alcohol abuse are at greater risk for delirium because of somatization and tendency toward chemical coping. Alcohol withdrawal, Korsakoff’s psychosis, or dementia should also be considered in the differential diagnosis. A continuous subcutaneous midazolam drip may be used to induce sedation in patients with refractory agitated delirium (25,26).

The effectiveness of the midazolam in this case may suggest alcohol withdrawal as a possible contributing factor. It seems likely that this patient had an underlying dementia with a superimposed delirium due to reversible causes such as the opioid management, psychoactive drugs, alcohol withdrawal, infection, and dehydration. The review of the underlying presumption that the patient had a terminal diagnosis was key in the decision to discontinue the midazolam and avoid a self-fulfilling prophecy.

Is Spontaneous Remission of HCC Possible?

A literature search of Medline and PubMed, using key words “regression and spontaneous and hepatocellular”, revealed 115 and 119 titles, respectively. Most citations overlapped; 62 were published case reports of spontaneous regression of HCC. Some were published in languages other than English and, therefore, could not be reviewed.

These case reports describe spontaneous regression of HCC, both partial and complete, with significant shrinkage of the lesions on repeat imaging, and decreasing or normalizing levels of AFP. This includes diagnoses confirmed by histopathological examination of either biopsy samples or surgically excised specimens. In a literature review, Lin et al. (6) cited 27 cases of such spontaneous regression. Meza-Junco et al. (30), in a more recent case report with a literature review, cited 61 case reports published between 1982 and September 2006. Given the prevalence of HCC, spontaneous tumour regression represents an extremely rare phenomenon (5-8,30,31). Some of the proposed explanations for this rare phenomenon include spontaneous necrosis of the tumour (6,32) or portal vein thrombosis (5,33). Quaglia et al. (9) refer to cases of large regressing HCC and propose disruption of blood supply, withdrawal of hormonal stimulation, or abstinence from alcohol as possible explanations for the spontaneous regression, in the absence of any medical or surgical interventions (9).

Is it possible that this case represents a rare instance of spontaneous regression of HCC? The lack of tissue diagnosis makes this explanation less likely but not impossible. Histopathological reports can also be incorrect, as illustrated by Rees et al. (27), and may need to be challenged if strong doubts arise on clinical grounds. Histological confirmation of diagnosis can be challenging, as it may be difficult to differentiate HCC from dysplastic nodules and large regenerative nodules (9). Nevertheless, our case illustrates the value of biopsy confirmation of malignant disease, which could have prevented some of the subsequent unfortunate events for this patient.

CONCLUSION

This case report illustrates the importance of having confidence that diagnosis of a terminal cancer is correct. If any doubts exist, an appropriate review should be conducted. Aggressive therapy may be appropriate until the diagnosis is confirmed and goals of care are again discussed with the patient and/or family.

This case also illustrates the consequences of a possibly erroneous diagnosis. The burden of suffering imposed on the patient and family included somatization with chemical coping and misuse of prescription opioids, ongoing abuse of alcohol, and aggravation of underlying psychiatric problems. During recurrent hospitalizations for a suicide attempt and delirium, there was an ongoing assumption that the patient was deteriorating from HCC which remained unchallenged once the patient was labelled with terminal cancer. The decision on the PCU to review the diagnosis and discontinue the midazolam when the agitation subsided was central in the positive outcome.

Date received, July 4, 2007; date accepted, January 16, 2008.

REFERENCES

1. Curley SA, Barnett CC, Abdalla EK. Staging and prognostic factors in hepatocellular carcinoma. In: Rose BD (ed). UpToDate (version 15.1). Waltham, Massachusetts: UpToDate, 2007.

2. Talwalkar JA, Gores GJ. Diagnosis and staging of hepatocellular carcinoma. Gastroenterol 2004; 127: S126-S132.

3. Seeff LB. Introduction: burden of hepatocellular carcinoma. Gastroenterol 2004; 127: S1-S4.

4. Hoofnagle JH. Hepatocellular carcinoma: summary and recommendations. Gastroenterol 2004; 127: S319-S323.

5. Iiai T, Sato Y, Nabatame N, Makino S, Hatakeyama K. Spontaneous complete regression of hepatocellular carcinoma with portal vein tumour thrombus. Hepatogastroenterol 2003; 50(53): 1628-1630.

6. Lin TJ, Liao LY, Lin CL, Shin LS, Chang TA, Tu Hy, et al. Spontaneous regression of hepatocellular carcinoma: a case report and literature review. Hepatogastroenterol 2004; 51(56): 579-582.

7. Ohtani H, Yamazaki O, Matsuyama M, Horii K, Shimizu S, Oka H, et al. Spontaneous regression of hepatocellular carcinoma: report of a case. Surg Today. 2005; 35(12): 1081-1086.

8. Van Halteren HK, Salemans JM, Peters H, Vreugdenhil G, Driessen WM. Spontaneous regression of hepatocellular carcinoma. J Hepatol 1997; 27(1): 211-215.

9. Quaglia A, Bhattacharjya S, Dhillon AP. Limitations of the histological diagnosis and prognostic assessment of hepatocellular carcinoma. Histopathol 2001; 38: 167-174.

10. Schwartz JM, Carithers RL. Clinical features, diagnosis, and screening for primary hepatocellular carcinoma. In: Rose BD (ed). UpToDate (version 15.1). Waltham, Massachusetts: UpToDate, 2007.

11. Davila JA, Morgan RO, Shaib Y, McGlynn KA, El-Serag HB. Hepatitis C infection and the increasing incidence of hepatocellular carcinoma: a population-based study. Gastroenterol 2004; 127: 1372-1380.

12. Liang TJ, Heller T. Pathogenesis of hepatitis C-associated hepatocellular carcinoma. Gastroenterol 2004; 127: S62-S71.

13. Morgan TR, Mandayam S, Jamal MM. Alcohol and hepatocellular carcinoma. Gastroenterol 2004; 127: S87-S96.

14. Yu MC, Yuan J-M. Environmental factors and risk for hepatocellular carcinoma. Gastroenterol 2004; 127: S72-S78.

15. Bruix J, Sherman M. Management of hepatocellular carcinoma. AASLD practice guideline. Hepatol 2005; 42(5): 1208-1236.

16. Baron RL, Brancatelli G. Computed tomographic imaging of hepatocellular carcinoma. Gastroenterol 2004; 127: S133-S143.

17. Taouli B, Losada M, Holland A, Krinsky G. Magnetic resonance of hepatocellular carcinoma. Gastroenterol 2004; 127: S144-S152.

18. Colli A, Fraquelli M, Casazza G, Massironi S, Colucci A, Conte D, Duca P. Accuracy of ultrasonography, spiral CT, magnetic resonance, and alpha-fetoprotein in diagnosing hepatocellular carcinoma: a systematic review. Am J Gastroenterol 2006; 101: 513-523.

19. Daniele B, Bencivenga A, Megna AS, Tinessa V. Alpha-fetoprotein and ultrasonography screening for hepatocellular carcinoma. Gastroenterol 2004; 127: S108-S112.

20. Gupta S, Bent S, Kohlwes J. Test characteristics of alpha- fetoprotein for detecting hepatocellular carcinoma in patients with hepatitis C. Ann Intern Med 2003; 139: 46-49.

21. Murray SA, Kendall M, Boyd K, Sheikh A. Illness trajectories and palliative care. BMJ 2005; 330: 1007-1011.

22. Christakis NA, Lamont EB. Extent and determinants of error in doctors’ prognoses in terminally ill patients: prospective cohort study. BMJ 2000; 320: 469-472.

23. Glare P, Virik K, Jones M, Eychmuller, Simes J, Christakis C. A systematic review of physicians’ survival predictions in terminally ill cancer patients. BMJ 2003; 327: 195-198.

24. Vigano A, Dorgan M, Buckingham J, Bruera E, Suarez-Almazor ME. Survival prediction in terminal cancer patients: a systematic review of the medical literature. Palliat Med 2000; 14: 363-374.

25. Centeno C, Sanz A, Bruera E. Delirium in advanced cancer patients. Palliat Med 2004; 18(3): 184-194.

26. Del Fabbro E, Dalal S, Bruera E. Symptom control in palliative care. Part III: dyspnea and delirium. J Palliat Med 2006; 2: 422-436.

27. Rees WD, Dover SB, Low-Beer TS, et al. “Patients with terminal cancer” who have neither terminal illness nor cancer. BMJ 1987; 295: 318-319.

28. Taube AW, Jenkins C, Bruera E. Is a “palliative” patient always a palliative patient? Two case reports. J Pain Symptom Manage 1997; 13: 347-351.

29. Block SD. Psychological issues in end-of-life care. J Palliat Med 2006; 9(3): 751-772.

30. Meza-Junco J, Montano-Loza AJ, Martinez-Benitez B, Cabrera-Aleksandrova T. Spontaneous partial regression of hepatocellular carcinoma in a cirrhotic patient. Ann Hepatol 2007; 6(1): 66-69.

31. Vardhana HG, Panda M. Spontaneous regression of hepatocellular carcinoma: potential promise for the future. South Med J 2007; 100(2): 223-224.

32. Ohta H, Sakamoto Y, Ojima H, Yamada Y, Hibi T, Takahashi Y, et al. Spontaneous regression of hepatocellular carcinoma with complete necrosis: case report. Abdom Imaging 2005; 30(6): 734-737.

33. Sakata H, Konishi M, Kinoshita T, Satake M, Moriyama N, Ochiai T. Prognostic factors for hepatocellular carcinoma presenting with macroscopic portal vein tumor thrombus. Hepatogastroenterol 2004; 51(60): 1575-1580.

LILIANNA STEFANCZYK-SAPIEHA, Division of Palliative Care Medicine, University of Alberta, Edmonton, Alberta; ROBIN L. FAINSINGER, Division of Palliative Care Medicine, Department of Oncology, University of Alberta, Edmonton, Alberta, Canada

Stefanczyk-Sapieha, Lilianna, MD

Palliative Care Medicine Resident

University of Alberta

Grey Nuns Hospital

217 – Health Services Centre

1090 Youville Drive West

Edmonton, Alberta

Canada T6L 5X8

Fainsinger, Robin L., MD

Division of Palliative Care Medicine

Grey Nuns Hospital

217 – Health Services Centre

1090 Youville Drive West

Edmonton, Alberta

Canada T6L 5X8

Copyright Center for Bioethics, Clinical Research Institute of Montreal Spring 2008

(c) 2008 Journal of Palliative Care. Provided by ProQuest Information and Learning. All rights Reserved.

Vitreoretinal Technologies, Inc. Announces the Enrollment of First Patients for Its Phase III Clinical Trial for Vitreosolve for Diabetic Retinopathy Patients in the U.S. And India

IRVINE, Calif., April 3, 2008 (PRIME NEWSWIRE) — Vitreoretinal Technologies, Inc. (VRT) today announced the enrollment of the first patients in a multinational Phase III clinical trial for evaluation of the safety and efficacy of VRT’s investigational drug Vitreosolve(r) for diabetic retinopathy patients. The first patients were enrolled and treated at clinical sites both in the U.S. and India.

Vitreoretinal Technologies (VRT) is a specialty pharmaceutical company with a specific focus on ophthalmology. The company’s drug candidates are focused on diabetic retinopathy, glaucoma and retinitis pigmentosa. This study is the first of two pivotal studies required by the FDA for the submission of a New Drug Application (NDA).

“The start of this trial has generated excitement in the ophthalmic community,” said Vicken Karageozian, MD, Co-Founder and Chief Technical Officer at VRT. “Diabetes is one of the fastest growing diseases and there are no FDA-approved pharmaceutical treatments for diabetic retinopathy. Retinal specialists recognize the need for new therapies to arrest the progression of this disease.”

“We at De Novo Ventures are pleased that VRT has attained this important milestone and we continue to be optimistic about successful completion of these pivotal studies,” said Fred Dotzler, Managing Director of De Novo Ventures and Board Member. De Novo is the sole institutional investor in Vitreoretinal Technologies, Inc.

Hampar Karageozian, Founder and Chief Executive Officer of VRT added, “This is a validation of our unique strategy for drug development and management of the regulatory process. Our team should be congratulated for their focus and execution.”

Ramgopal Rao, VRT Chief Operating Officer, adds, “We placed emphasis from the beginning on assembling a dedicated management team and experienced group of external consultants, and we look forward to their continued contribution in successful completion of these clinical trials.”

It is estimated that there are 8 million diabetic retinopathy patients in the U.S. There is no effective treatment available for arresting or reversing the progression of nonproliferative diabetic eye disease today. Similarly, there are 2 million patients in the U.S. with glaucoma with no neurorescue treatments currently available. VRT drugs address these unmet needs.

VRT has two drug candidates; Vitreosolve(r) for diabetic retinopathy and Neurosolve(r), a neuronal rescue agent for glaucoma and retinitis pigmentosa. Both drugs are small molecules with a long history of safety for systemic use in humans that have been optimized for safety and efficacy in these ophthalmic applications through a series of Phase II trials in humans.

This news release was distributed by PrimeNewswire, www.primenewswire.com

CONTACT:

Vitreoretinal Technologies, Inc.
Ramgopal Rao, Chief Operating Officer
949-753-1008
Cell: 714 299 9986
[email protected]

De Novo Venture Partners
Frederick Dotzler, Managing Director
650-329-1999
[email protected]

Can Animals Grasp the Concept of Time?

Are animals stuck in time?

Dog owners who have noticed that their four-legged friends seem equally delighted to see them after five minutes away as after five hours may wonder whether animals can tell when time passes. Newly published research from The University of Western Ontario may bring us closer to answering that very question.

The results of the research, entitled “Episodic-Like Memory in Rats: Is it Based on When or How Long Ago,” appear in the current issue of the journal Science, which was released today.  

William Roberts and his colleagues in Western’s Psychology Department found that rats are able to keep track of how much time has passed since they discovered a piece of cheese, be it a little or a lot, but they don’t actually form memories of when the discovery occurred. That is, the rats can’t place the memories in time.

The research team, led by Roberts, designed an experiment in which rats visited the ‘arms’ of a maze at different times of day. Some arms contained moderately desirable food pellets, and one arm contained a highly desirable piece of cheese. Rats were later returned to the maze with the cheese removed on certain trials and with the cheese replaced with a pellet on others.

All told, three groups of rats were tested in the research using three varying cues: when, how long ago or when plus how long ago. 

Only the cue of how long ago food was encountered was used successfully by the rats.  

These results, the researchers say, suggest that episodic-like memory in rats is qualitatively different from human episodic memory, which involves retention of the point in past time when an event occurred.

“The rats remember whether they did something, such as hoarded food a few hours or five days ago,” explained Roberts. “The more time that has passed, the weaker the memory may be. Rats may learn to follow different courses of action using weak and strong memory traces as cues, thus responding differently depending on how long ago an event occurred. However, they do not remember that the event occurred at a specific point in past time.”  

Previous studies have suggested that rats and scrub jays (a relative of the crow and the blue jay) appear to remember storing or discovering various foods, but it hasn’t been clear whether the animals were remembering exactly when these events happened or how much time had elapsed. 

“This research,” said Roberts, “supports the theory I introduced that animals are stuck in time, with no sense of time extending into the past or future.”

On the Net:

University of Western Ontario

New Planet Search Methods Push Tally Near 300

Astronomers have struggled for centuries to find our solar system’s planets, let alone any outside of our relatively puny cosmic neighborhood.

Yet during only the past 13 years, observers have tracked down nearly 300 distant bodies beyond our system thanks to rapid advances in ground-based telescope technology and methods.

Ten worlds alone were identified by a group of astronomers in the past six months using earthbound instruments, and another team of scientists just announced they have found the youngest-ever planetary infant. The hunt for the first Earth-like planet, however, is still on.

Observers discussed the state of their search for extrasolar planets, as worlds beyond the solar system are known, during the Royal Astronomical Society’s National Astronomy Meeting in Belfast, U.K., this week.

Sting operation

Extrasolar planets are tough for telescopes to detect unless the objects are about the size of Jupiter, which is why astronomers rely on unique methods to find the elusive bodies.

Periodic wiggles in stellar movement can signal an orbiting world’s gravitational tug on its star. Another method looks for dips in stellar brightness called transits — when planets pass directly in front of a star and block out some of the light.
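
To give a sense of why Jupiter-sized worlds are the easiest transit targets (a standard rule of thumb, not stated in the article): the fractional dip in starlight is roughly the square of the planet-to-star radius ratio. The sketch below uses assumed textbook radii and a function name of my own choosing, purely for illustration:

```python
# Rough transit-depth estimate, ignoring limb darkening and grazing geometries.
R_JUPITER_KM = 71_492.0   # Jupiter's equatorial radius (assumed reference value)
R_SUN_KM = 695_700.0      # the Sun's radius (assumed reference value)

def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    """Fraction of the star's light blocked when the planet crosses its disc."""
    return (planet_radius_km / star_radius_km) ** 2

print(f"{transit_depth(R_JUPITER_KM, R_SUN_KM):.2%}")  # ~1% dip for a Jupiter-Sun pair
```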

Instead of spending weeks babysitting single stars to seek out gravitational wiggles, as many planet-hunters do, some European astronomers are monitoring millions of stars with inventive camera setups such as one called SuperWASP.

“SuperWASP is now a planet-finding production line,” said Don Pollacco, a SuperWASP project member and a Queen’s University Belfast (QUB) astronomer.

In the past six months alone, Pollacco said, the project’s two batteries of cameras in South Africa and the Canary Islands have pinpointed 10 new planets, for which SuperWASP has also estimated size and mass.

“[It] will revolutionize the detection of large planets and our understanding of how they were formed,” Pollacco said of the new planet-hunting program. “It’s a great triumph for European astronomers.”

Fetal planet

In addition to the 10 new extrasolar recruits — the comprehensive exoplanet count now totals 277 — another group of astronomers said they’ve located an embryonic planet younger than any seen before with the Very Large Array of radio telescopes in New Mexico.

The group, led by Jane Greaves of the University of St. Andrews in Scotland, found the 100,000-year-old fetal planet about 520 light-years away in the constellation Taurus.

“The new object, designated HL Tau b, is the youngest planetary object ever seen,” said Anita Richards, an astronomer at the U.K. Jodrell Bank Centre for Astrophysics.

Richards, who worked with Greaves’ team to describe the infant planet, said it’s just 1 percent as old as the young planet found in orbit around the star TW Hydrae last year.

“We see a distinct orbiting ball of gas and dust, which is exactly how a very young protoplanet should look,” Greaves said, noting the far-younger planet should take on a Jupiter-like essence in millions of years.

Another Earth?

Although astronomers are developing large, space-based projects to hunt for Earth-like planets — such as the Jet Propulsion Lab’s proposed Terrestrial Planet Finder — ground-based observers aren’t sitting idly by.

In hopes of finding small rocky planets, some U.K. astronomers are using a special camera known as “RISE” that is mounted onto the Liverpool Telescope on La Palma in the Canary Islands. The device rapidly photographs a portion of the sky and compares the brightness of stars and large extrasolar planets from image to image.

If there’s any dimming, said Neale Gibson, also a QUB astronomer, the instrument will find it and reveal if small rocky planets are disturbing the orbits of hot, gassy planets.

“RISE will allow us to observe and time the transits of extrasolar planets very accurately,” Gibson said. “If Earth-mass planets are present in nearby orbits … we will see their effect on the orbit of the larger transiting planets.”

Caffeine May Help Prevent Alzheimer’s

A cup of coffee a day keeps dementia away? Research conducted by a U.S. team at the University of North Dakota and published in the Journal of Neuroinflammation suggests that coffee may block the damage cholesterol can inflict on the body and brain.

Scientists studied rabbits fed a high-fat, high-cholesterol diet for 12 weeks. Some of the subjects were given caffeine supplements, and others were not. In those given the caffeine supplement, the “blood brain barrier” between the main blood supply and the brain of the rabbits was protected.

This barrier prevents chemicals carried in the bloodstream from entering the central nervous system and potentially harming it. Prior studies showed that cholesterol in high levels in the blood stream can cause this barrier to deteriorate or “leak”.

Previous Alzheimer’s research supports this theory and suggests that the “leaks” in the barrier can make the brain susceptible to damage which can contribute to Alzheimer’s.

During the study, some of the rabbits were given a caffeine supplement equivalent to one cup of coffee daily. At the conclusion of the study, the “blood brain barrier” in the rabbits with no caffeine had weathered much more severe damage than in those that had caffeine in their diet.

Dr. Jonathan Geiger, the study leader, said, “Caffeine appears to block several of the disruptive effects of cholesterol that make the blood-brain barrier leaky.” He went on to explain that high cholesterol can increase the risk for Alzheimer’s due to the fact that it compromises the strength of the barrier. According to Geiger, “Caffeine is a safe and readily available drug and its ability to stabilize the blood brain barrier means it could have an important part to play in therapies against neurological disorders.”

The Alzheimer’s Disease Society and other UK experts regard this as important evidence of the benefits of coffee. A spokeswoman for the Society said, “This is the best evidence yet that caffeine equivalent to one cup of coffee a day can help protect the brain against cholesterol. In addition to its effect on the vascular system, elevated cholesterol levels also cause problems with the blood brain barrier.” Prior to the brain damage caused by strokes or Alzheimer’s, the barrier is less efficient.

More research is necessary to determine the true effect of caffeine in humans.

On the Net:

Journal of Neuroinflammation

University of North Dakota

Alzheimer’s Disease Society

Shire Strives to Increase Adherence Among Chronic Kidney Disease Stage 5 Patients

PHILADELPHIA, April 3, 2008 /PRNewswire/ — Shire Pharmaceuticals, the global biopharmaceutical company, today announced the launch of On Track, an innovative program designed to improve adherence to disease management regimens among chronic kidney disease (CKD) Stage 5 patients with hyperphosphatemia (elevated levels of phosphorus in the blood). In addition to offering the highly effective phosphate binder, FOSRENOL (lanthanum carbonate), Shire has created On Track, a program designed to provide resources to patients, health care professionals (HCPs) and renal care teams that can help them collectively identify “real world” solutions to adherence- related challenges.

Approximately half of all CKD Stage 5 patients fail to attain serum phosphorus control, and poor adherence is a primary barrier preventing them from achieving success. In fact, up to 73 percent of patients are repeatedly non-compliant to phosphate binder therapy. Specifically, 30 percent of patients have decided not to fill one or more prescriptions due to medication costs and/or lack of transportation to the pharmacy, while 21 percent of CKD Stage 5 patients admit to not taking medications as prescribed due to side effects, cost, and the belief that they already take “too many” medications. This is not surprising, since CKD Stage 5 patients are prescribed an average of 12 different medications for concurrent medical conditions.

“Nonadherence to phosphate binder therapy is a serious problem and has become a growing issue among CKD Stage 5 patients due to the complexity of managing the condition,” said William Finn, MD, professor of Medicine, University of North Carolina School of Medicine at Chapel Hill. “It is vital for patients to work with their renal care teams to identify successful adherence strategies, as well as to utilize programs and resources available, to address the myriad of reasons that contribute to nonadherence, in order to realize the full benefit of their medications.”

Through On Track, Shire is providing resources and services developed with insights from experts in the nephrology and behavior modification fields to help renal care teams and patients take a multifaceted approach to hyperphosphatemia control. Shire also is offering additional support specifically for FOSRENOL patients through its comprehensive offerings in FOSRENOL On Track – a first of its kind program for the renal community that offers an array of unique services to help renal care teams and patients improve adherence.

-- Shire developed tools for HCPs and renal care teams to help enhance their communication with patients in clinical settings. These tools will facilitate the development of individualized plans that address patients' specific barriers to managing their hyperphosphatemia through diet, dialysis, and phosphate binder therapy.

-- Shire established the FOSRENOL On Track hotline to provide a simple, single point of access to a comprehensive range of support services and tools for all FOSRENOL patients and their renal care teams.
   - FOSRENOL patients also can elect to receive regular lifestyle and medication "reminders" through FOSRENOL On Track.

-- Knowing that high and rising out-of-pocket expenses for medications often decrease patient adherence, Shire also is providing several resources for FOSRENOL patients who face financial burdens to adherence.
   - Shire will provide a discount card for patients with private insurance that will help them with financial assistance for their medication. This is the first offering of its kind for hyperphosphatemia patients.
   - Qualified dialysis patients can receive grants to help with phosphate binder costs through Shire's partnerships with the American Kidney Fund(R) and HealthWell Foundation(R).
   - Eligible patients who fall into the Medicare Part D coverage gap and who are in need of Medicaid assistance may receive FOSRENOL free of charge.

As an additional component of the On Track initiative, Shire assembled an Adherence Task Force composed of renal dietitians and other HCPs with expertise in improving patient adherence. Recognizing the complexities of effectively managing hyperphosphatemia over the long term, Shire is working with these experts to evaluate the many factors impacting adherence to hyperphosphatemia management and develop effective, “real world” solutions.

“Nonadherence is an everyday issue that hinders our collective efforts to help patients remain on track,” said Marianne Hutton, RD, CDE, instructor, Northern California Center for Well-Being. “By examining the wealth of techniques currently used to promote adherence among patients, the Adherence Task Force will develop resources and establish best practices to reduce our patients’ difficulties following hyperphosphatemia treatment regimens and help them achieve success more easily.”

One tool the Adherence Task Force believes can be useful to renal care teams as they work with their patients to develop individualized plans is the phosphate binding ratio (PBR). PBR signifies the number of grams of phosphorus bound by each gram of a particular medication. By knowing the PBR of the binder a patient is taking, HCPs can help patients make informed decisions about what they eat and the extent to which they can balance their diet and binder to achieve good phosphate balance. There are a variety of phosphate binders available for patients with hyperphosphatemia, and these medications have a range of PBRs based on their active ingredients. FOSRENOL is powered by lanthanum, which has a high PBR.
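
As a purely hypothetical illustration of how a PBR might be applied (the release gives no numerical PBR values, so the figures and function name below are invented for the example only):

```python
def phosphorus_bound_per_day(pbr: float, binder_grams_per_dose: float, doses_per_day: int) -> float:
    """Grams of dietary phosphorus bound per day, using the PBR definition above
    (grams of phosphorus bound per gram of binder). Hypothetical inputs only."""
    return pbr * binder_grams_per_dose * doses_per_day

# e.g. an assumed PBR of 0.10 for a 1 g tablet taken with each of three meals
print(round(phosphorus_bound_per_day(pbr=0.10, binder_grams_per_dose=1.0, doses_per_day=3), 2))  # 0.3 g/day
```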

Phosphate binders are a major source of pill burden for CKD Stage 5 patients. With treatments such as FOSRENOL, patients with hyperphosphatemia may be able to reduce their pill burden to as few as one tablet with each meal.* In fact, FOSRENOL has the lowest pill burden of all phosphate binders, which may aid in adherence and reduce treatment costs.

“Shire is committed to offering its effective, noncalcium, nonresin phosphate binder, FOSRENOL, to CKD Stage 5 patients who need assistance with the complications of elevated serum phosphorus,” said Joseph Schlitz, vice president, US Renal Business, Shire Pharmaceuticals. “Knowing the evolving needs of the CKD community, we are continuing to improve the product through new formulations and more user-friendly packaging. We hope the combination of these efforts and the On Track program will provide patients with the resources they require to help them adhere to their hyperphosphatemia management regimens.”

For More Information, Contact:

Brenna Terry
Porter Novelli
Phone: 212-601-8236
Cell: 814-574-8966
E-mail: [email protected]

Sarah Stearns
Porter Novelli
Phone: 212-601-8413
Cell: 617-447-8878
E-mail: [email protected]

Managing Hyperphosphatemia

Phosphorus, an element found in nearly all foods, is absorbed from the gastrointestinal tract into the bloodstream. When the kidneys fail, they no longer effectively remove phosphorus. While the normal adult range for phosphorus is 2.5 to 4.5 mg/dL, the blood phosphorus levels of many patients on dialysis often exceed 6.5 mg/dL. Such levels have been linked to a significantly higher morbidity and mortality risk for patients who have undergone at least one year of dialysis. Research has shown that for each mg/dL increase in mean serum phosphorus, the relative risk of death increases by 6 percent.
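
The release does not say whether those 6 percent increments are meant to compound; if they are treated as multiplicative per mg/dL (an assumption made here purely for illustration), then moving from the top of the normal range to 6.5 mg/dL would imply:

\[ \mathrm{RR} = 1.06^{\,6.5 - 4.5} = 1.06^{2} \approx 1.12 \]

That is, roughly a 12 percent higher relative risk of death under that reading.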

Hyperphosphatemia is managed with a combination of dialysis, diet restriction, and phosphorus-binding agents, because diet and dialysis alone generally cannot adequately control phosphorus levels. Such binders “soak up” phosphorus in the gastrointestinal tract, before it can be absorbed into the blood, and aid patients in maintaining acceptable levels of mean serum phosphorus.

FOSRENOL

FOSRENOL is indicated to reduce serum phosphate in patients with end-stage renal disease (ESRD).

FOSRENOL is an effective, noncalcium, nonresin phosphate binder that reduces high phosphorus levels in CKD Stage 5 patients. FOSRENOL is formulated as an easy-to-use, unflavored, chewable tablet that can be taken without water, an important consideration for CKD Stage 5 patients who must restrict their fluid intake.

FOSRENOL is available in a broad range of dosage strengths, including 500-mg, 750-mg and 1-g tablets. Patients taking FOSRENOL can achieve serum phosphorus target levels with as few as three tablets per day.*

FOSRENOL has the lowest pill burden of all phosphate binders, which may aid in adherence and reduce treatment costs.

The active ingredient in FOSRENOL is lanthanum, which has a high PBR. In addition, the binding power of FOSRENOL, in vitro, was not compromised by pH variations throughout the gastrointestinal tract.

FOSRENOL has a high affinity for phosphate and works by binding to dietary phosphorus in the gastrointestinal tract. Once bound, the FOSRENOL/phosphorus complex cannot pass into the bloodstream and is eliminated from the body, thereby decreasing mean serum phosphorus levels.

The safety of FOSRENOL has been studied in over 5,500 patients. Although the high mortality associated with CKD Stage 5 makes long-term data difficult to collect, a number of patients (N=22) taking FOSRENOL have been followed for more than 5 years. In addition, more than 87,000 patients have been prescribed FOSRENOL in the US alone.

FOSRENOL has the most extensive long-term safety data package of any phosphate binder and is generally well tolerated. Trials involving patients treated with FOSRENOL showed sustained serum phosphorus reduction in a majority of patients, with some patients being followed over a six-year duration.

FOSRENOL is now available in 25 countries, including Spain, Canada, France, Germany, Italy, and the UK, and continues to be launched in new markets around the world.

Important Safety Information

-- The most common adverse events were gastrointestinal, such as nausea and vomiting, and generally abated over time with continued dosing.
-- The most common side effects leading to discontinuation in clinical trials were gastrointestinal events (nausea, vomiting, and diarrhea).
-- Other side effects reported in trials included dialysis graft complications, headache, abdominal pain, and hypotension.
-- Although studies were not designed to detect differences in risk of fracture and mortality, there were no differences demonstrated in patients treated with FOSRENOL compared to alternative therapy for up to 3 years.
-- The duration of treatment exposure and time of observation in the clinical program were too short to conclude that FOSRENOL does not affect the risk of fracture or mortality beyond 3 years.
-- While lanthanum has been shown to accumulate in the GI tract, liver and bone in animals, the clinical significance in humans is unknown.
-- Patients with acute peptic ulcer, ulcerative colitis, Crohn's disease, or bowel obstruction were not included in FOSRENOL clinical studies. Caution should be used in patients with these conditions.
-- FOSRENOL should not be taken by patients who are nursing or pregnant.
-- FOSRENOL should not be taken by patients who are under 18 years of age.

For Full Prescribing Information on FOSRENOL, please visit http://www.fosrenol.com/.

SHIRE PLC

Shire’s strategic goal is to become the leading specialty biopharmaceutical company that focuses on meeting the needs of the specialist physician. Shire focuses its business on attention deficit and hyperactivity disorder (ADHD), human genetic therapies (HGT), gastrointestinal (GI), and renal diseases. The structure is sufficiently flexible to allow Shire to target new therapeutic areas to the extent opportunities arise through acquisitions. Shire’s in-licensing, merger, and acquisition efforts are focused on products in niche markets with strong intellectual property protection either in the US or Europe. Shire believes that a carefully selected portfolio of products with strategically aligned and relatively small-scale sales forces will deliver strong results.

For further information on Shire, please visit the Company’s Web site: http://www.shire.com/.

“SAFE HARBOR” STATEMENT UNDER THE PRIVATE SECURITIES LITIGATION REFORM ACT OF 1995

Statements included herein that are not historical facts are forward-looking statements. Such forward-looking statements involve a number of risks and uncertainties and are subject to change at any time. In the event such risks or uncertainties materialize, Shire’s results could be materially affected. The risks and uncertainties include, but are not limited to, risks associated with: the inherent uncertainty of pharmaceutical research, product development including, but not limited to the successful development of JUVISTA(R) (human TGFβ3) and veleglucerase alfa (GA-GCB); manufacturing and commercialization including, but not limited to, the establishment in the market of VYVANSE(TM) (lisdexamfetamine dimesylate) (Attention Deficit and Hyperactivity Disorder (“ADHD”)); the impact of competitive products, including, but not limited to, the impact of those on Shire’s ADHD franchise; patents, including but not limited to, legal challenges relating to Shire’s ADHD franchise; government regulation and approval, including but not limited to the expected product approval date of INTUNIV(TM) (guanfacine extended release) (ADHD); Shire’s ability to secure new products for commercialization and/or development; and other risks and uncertainties detailed from time to time in Shire plc’s filings with the Securities and Exchange Commission, particularly Shire plc’s Annual Report on Form 10-K for the year ended December 31, 2007.

*Dosing based on as few as three tablets per day. Number of meals per day may vary. To achieve certain doses, additional tablets may be required.

Shire Pharmaceuticals

CONTACT: Brenna Terry, +1-212-601-8236, Cell, +1-814-574-8966, [email protected], or Sarah Stearns, +1-212-601-8413, Cell, +1-617-447-8878, [email protected], both of Porter Novelli, for Shire Pharmaceuticals

Web site: http://www.shire.com/ and http://www.fosrenol.com/

Woman’s Hospital in Mesquite to Close: Mesquite Employees Given 60 Days’ Notice; Continued Losses Cited

By Jason Roberson, The Dallas Morning News

Apr. 3–Workers at Woman’s Hospital in Mesquite were told Wednesday that it will close in 60 days because it’s losing money.

Health Management Associates Inc. of Naples, Fla., has owned Woman’s Hospital and its sister hospital, Dallas Regional Medical Center in Mesquite, since 2002.

The two campuses are 5 miles apart. As many as 200 people will no longer work for the 176-bed Woman’s Hospital, which is at 3500 Interstate 30. The other campus, a 172-bed hospital at 1011 N. Galloway, will stay open.

Health Management Associates paid $32 million last year to a minority shareholder to acquire the 20 percent equity interests it did not already own in both hospitals, according to its annual report.

During the fourth quarter, Woman’s Hospital, formerly Mesquite Community Hospital, was converted from a general acute care hospital to a specialty women’s hospital. Mesquite Community Hospital opened in 1978.

“Unfortunately, our Woman’s Hospital continues to operate at a significant financial loss that necessitates its closure,” said Roy Vinson, chief executive of the Dallas Regional Medical Center, adding later in a statement that it “was a very difficult decision to make.”

The company wouldn’t say how much money Woman’s Hospital was losing.

Paula Reisdorfer, director of business development for Dallas Regional Medical Center, could not point to specific causes of the financial losses.

“We just can’t continue operating in the red,” she said.

Mr. Vinson said he plans to invest the money that otherwise would have been spent trying to keep Woman’s Hospital open into the remaining campus.

An additional 26-bed unit of private rooms with a “brighter and more inviting atmosphere” is scheduled to open next year, he said.

Shares of Health Management, a publicly traded hospital system with $4.4 billion in 2007 revenue, closed Wednesday at $5.76, down 9 cents.

-----

To see more of The Dallas Morning News, or to subscribe to the newspaper, go to http://www.dallasnews.com.

Copyright (c) 2008, The Dallas Morning News

Distributed by McClatchy-Tribune Information Services.

For reprints, email [email protected], call 800-374-7985 or 847-635-6550, send a fax to 847-635-6968, or write to The Permissions Group Inc., 1247 Milwaukee Ave., Suite 303, Glenview, IL 60025, USA.

NYSE:HMA, Singapore:588,

Stem Cell Research Offers Hope For Diabetes Treatment

Scientists have discovered a new technique for turning embryonic stem cells into insulin-producing pancreatic tissue in what could prove a significant breakthrough in the quest to find new treatments for diabetes.

The University of Manchester team, working with colleagues at the University of Sheffield, were able to genetically manipulate the stem cells so that they produced an important protein known as a ‘transcription factor’.

Stem cells have the ability to become any type of cell, so scientists believe they may hold the key to treating a number of diseases including Alzheimer’s, Parkinson’s and diabetes.

However, a major stumbling block to developing new treatments has been the difficulty scientists have faced ensuring the stem cells turn into the type of cell required for any particular condition – in the case of diabetes, pancreatic cells.

“Unprompted, the majority of stem cells turn into simple nerve cells called neurons,” explained Dr Karen Cosgrove, who led the team in Manchester’s Faculty of Life Sciences.

“Less than one per cent of embryonic stem cells would normally become insulin-producing pancreatic cells, so the challenge has been to find a way of producing much greater quantities of these cells.”

The pancreas contains different types of specialized cells – exocrine cells, which produce enzymes to aid digestion, and endocrine cells, including beta cells, which produce the hormone insulin to regulate blood glucose levels. Diabetes results when there is not enough insulin to meet the body’s demands.

There are two forms of the disease: type-1 diabetes is due to not enough insulin being produced by the pancreas, while type-2 or adult-onset diabetes occurs when the body fails to respond properly to the insulin that is produced.

The team found that the transcription factor PAX4 encouraged high numbers of embryonic stem cells – about 20% – to become pancreatic beta cells with the potential to produce insulin when transplanted into the body.

Furthermore, the scientists were for the first time able to separate the new beta cells from the other types of cell produced, using a technique called ‘fluorescence-activated cell sorting’, which uses a special dye to color the pancreatic cells green.

“Research in the United States has shown that transplanting a mixture of differentiated cells and stem cells can cause cancer, so the ability to isolate the pancreatic cells in the lab is a major boost in our bid to develop a successful therapy,” said Dr Cosgrove.

“Scientists have had some success increasing the number of pancreatic cells produced by altering the environment in which the stem cells develop, so the next stage of our research will be to combine both methods to see what proportions we can achieve.”

Scientists believe that transplanting functional beta cells into patients, most likely into their liver where there is a strong blood supply, offers the best hope for finding a cure for type-1 diabetes. It could also offer hope to those with type-2 diabetes whose condition requires insulin injections.

But the more immediate benefit of the team’s research is likely to be in providing researchers with a ready-made supply of human pancreatic cells on which to study the disease process of diabetes and test new drugs.

The research, which was funded by the Juvenile Diabetes Research Foundation and the Medical Research Council, is published in the journal Public Library of Science (PLoS) One.

On the Net:

University of Manchester

University of Sheffield

Public Library of Science (PLoS)

The Relationship Between Obesity and the Age at Which Hip and Knee Replacement is Undertaken

By Changulani, M; Kalairajah, Y; Peel, T; Field, R E

We audited the relationship between obesity and the age at which hip and knee replacement was undertaken at our centre. The database was analysed for age, the Oxford hip or knee score and the body mass index (BMI) at the time of surgery. In total, 1369 patients were studied, 1025 treated by hip replacement and 344 by knee replacement. The patients were divided into five groups based on their BMI (normal, overweight, moderately obese, severely obese and morbidly obese). The difference in the mean Oxford score at surgery was not statistically significant between the groups (p > 0.05). For those undergoing hip replacement, the mean age of the morbidly obese patients was ten years less than that of those with a normal BMI. For those treated by knee replacement, the difference was 13 years. The age at surgery fell significantly for those with a BMI > 35 kg/m² for both hip and knee replacement (p < 0.05). This association was stronger for patients treated by knee than by hip replacement.

In the Health Survey for England 2005(1) it was identified that the proportion of men classified as obese (body mass index (BMI) > 30 kg/m²) had increased from 13.2% in 1993 to 23.1% in 2005. For women the increase was from 16.4% to 24.8% over the same time period. The International Obesity Task Force has documented a varying prevalence of obesity around the world, with many countries having a prevalence which exceeds that of the United Kingdom.2 Obesity is associated with degenerative arthritis, coronary artery disease, hypertension and type-2 diabetes.3 It affects men and women of all age groups, of all socio-economic strata and of all ethnic groups. Its prevalence increases with advancing age1 and peaks in the age group of 55 to 64 years. In this subgroup, 21% of men and 29% of women are obese.4

Degenerative joint disease can affect any synovial joint, most commonly the hip, knee, hands, feet and spine.5 Risk factors associated with its development include age, obesity, gender, previous joint trauma, joint dysplasia and excessive work or sports activity.3,6 Of these, age appears to be the most influential.7 This is attributed to decreasing chondrocyte function with advancing age and a reduced ability to synthesise appropriate proteoglycan aggregates.8,9 These proteoglycan aggregates are less responsive to cytokines and mechanical forces and thus predispose to the degeneration of cartilage.8,9

Several models have also sought to explain the association between obesity and the development of degenerative joint disease. It has been suggested that obesity leads to the repetitive application of excessive axial loading forces on the surface of a joint, resulting in degeneration of articular cartilage.6,9 Another study argues that excessive fat leads to the irregular growth of articular cartilage and inhibits its repair.6

Although it has been shown that obesity predisposes to premature osteoarthritis (OA) in large weight-bearing joints,10-12 it has not been ascertained whether this is associated with a requirement for hip and knee replacement at a younger age. We have, therefore, examined the relationship between obesity and the age at which hip and knee replacements are undertaken at our centre.

Patients and Methods

Our data on patients who had undergone primary hip or knee replacement were gathered from three sources. The first was the database established for the Joint Replacement Review Programme at St Helier Hospital, Carshalton, United Kingdom. This programme monitored patients whose hip or knee replacement had been performed between 1995 and 2004. The second source was the parallel database at St Anthony’s Hospital, Cheam, United Kingdom, which monitored patients whose hip or knee replacement had been performed since 1995, and the third was the database used to track patients who had undergone hip or knee replacement at the South West London Elective Orthopaedic centre since January 2004. Unfortunately, the transcription of the patients’ pre-operative height and weight from the hospital records to our databases was inconsistent and we have only been able to extract a complete dataset for 1369 patients.

Table I. Mean values for the Oxford hip score13 (OHS) (95% confidence interval (CI)) and age (range) at surgery, for the different body mass index (BMI) groups, in 1025 patients treated by hip replacement

We subdivided our patients according to whether they underwent hip or knee replacement. In cases in which the same patient underwent both types of replacement, we included the pre-operative data for each, as available. For patients undergoing bilateral replacement of the hip or knee, the data have been recorded as a single procedure.

Our dataset comprised the following information for each patient: the age, height and weight at the time of surgery, the pre-operative Oxford hip score (OHS)13 or Oxford knee score (OKS)14 and the BMI calculated by the formula:

BMI = weight in kg / (height in metres)²

The patients were divided into five subgroups according to their BMI: normal (< 25 kg/m²), overweight (25 to 30 kg/m²), moderately obese (30 to 35 kg/m²), severely obese (35 to 40 kg/m²) and morbidly obese (≥ 40 kg/m²). In order to analyse the results, we designated the group of patients with a normal BMI as the control group.
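To make the grouping concrete, the following short Python sketch computes a BMI from the formula above and assigns it to one of the five study subgroups. It is a minimal illustration only: the cut-off values are the standard bands implied by the text (25, 30, 35 and 40 kg/m²), the boundary handling is assumed, and the function names are invented for this example rather than taken from the study.

def bmi(weight_kg, height_m):
    # BMI = weight in kg / (height in metres) squared
    return weight_kg / (height_m ** 2)

def bmi_group(value):
    # Assign a BMI value to one of the five study subgroups.
    # Cut-offs and boundary handling are assumptions for illustration.
    if value < 25:
        return "normal"
    elif value < 30:
        return "overweight"
    elif value < 35:
        return "moderately obese"
    elif value < 40:
        return "severely obese"
    return "morbidly obese"

# Example: a 100 kg patient who is 1.70 m tall
print(round(bmi(100, 1.70), 1))   # 34.6
print(bmi_group(bmi(100, 1.70)))  # moderately obese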

Statistical analysis. This was undertaken using an independent sample t-test on SPSS software version 15 (SPSS Inc., Chicago, Illinois). The level of statistical significance was set at p = 0.05. Pearson’s correlation coefficient (r) was determined between BMI and age at surgery.
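As a rough illustration of this kind of analysis, the Python sketch below runs an independent-samples t-test and computes Pearson’s correlation coefficient with SciPy. The numbers are invented purely for demonstration and are not taken from the study’s dataset; only the statistical procedures mirror those described above.

from scipy import stats

# Illustrative (invented) ages at surgery for a control group
# (normal BMI) and a comparison group (e.g. severely obese).
age_control = [68, 71, 65, 74, 70, 69, 72, 66]
age_obese = [58, 62, 55, 60, 63, 57, 61, 59]

# Independent-samples t-test, significance threshold p = 0.05
t_stat, p_value = stats.ttest_ind(age_control, age_obese)
print("t = %.2f, p = %.4f" % (t_stat, p_value))

# Pearson's correlation between BMI and age at surgery
bmi_values = [22, 24, 27, 29, 31, 33, 36, 38, 41, 43]
age_values = [74, 72, 71, 69, 68, 66, 64, 62, 60, 58]
r, p_corr = stats.pearsonr(bmi_values, age_values)
print("r = %.2f, p = %.4f" % (r, p_corr))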

Results

Hip replacement. We analysed 1025 patients. Their mean age at surgery was 65 years (25 to 93). The mean age and OHS at surgery for each subgroup is shown in Table I and Figure 1. No statistical difference was identified between the mean OHS of the subgroups compared with the control group (p > 0.05). A statistically significant correlation was found between the age at surgery and BMI for those undergoing hip replacement (r = 0.1; p < 0.05). The mean age at surgery of the morbidly obese patients (BMI > 40 kg/m²) was ten years less than that of those with normal weight for their height.

Fig. 1

Bar chart showing the mean Oxford hip score13 (OHS) and the mean age at surgery for each subgroup in patients treated by hip replacement.

Knee replacement. We analysed 344 patients. Their mean age at surgery was 72 years (44 to 90). The mean OKS and the mean age at surgery for each subgroup is shown in Table II and Figure 2. As was shown for hip replacement, there was no statistical difference between the mean OKS of the different subgroups compared with the control group (p > 0.05). A statistically significant correlation was found between the age at surgery and the BMI for those undergoing knee replacement (r = 0.3; p < 0.05).

Table II. Mean values for the Oxford knee score14 (OKS) (95% confidence interval (CI)) and age (range) at surgery for the different body mass index (BMI) groups in 344 patients treated by knee replacement.

Fig. 2

Bar chart showing the mean Oxford knee score14 (OKS) and the mean age at surgery for each subgroup in patients treated by knee replacement.

Discussion

We were only able to extract a complete dataset from just over 10% of the patients in our three databases. This is a clear example of a poorly structured process that ensured completion of questionnaires by patients but failed to ensure that the data on BMI were transferred from the hospital notes to the databases. This problem has now been addressed.

Our use of patients with a normal BMI as a control group was an arbitrary decision which is justified on the basis that these individuals are of normal weight for their height. The absence of any difference in the pre-operative OHS and OKS between the different subgroups reduces any potential selection bias or inconsistency in the selection of patients for joint replacement.

Previously published studies in the literature have shown an association between an increasing BMI and the development of OA. Manek et al,12 reported a strong association between a high BMI and the presence of OA of the knee in female twins with a mean age of 54.5 years. A moderate association of OA of the hip with obesity has also been described in previous studies.5,11

We have observed that both hip and knee replacements were undertaken at a younger age as the BMI increased above normal. However, statistical significance was not reached until the BMI reached 35 kg/m^sup 2^.

Our study shows that severe obesity is associated with a premature requirement for hip and knee replacement. The association is stronger for patients requiring knee replacement than for hip replacement and the age difference is greater. This finding correlates with the weak to moderate relationship described in the literature between the BMI and OA of the hip.5,11 Relatively few of our patients were morbidly obese. In part this was a result of the relatively small percentage of the population with a BMI > 40 kg/m². Also, it has been shown that a proportion of morbidly obese patients will be unfit for hip or knee replacement because of comorbidities.15

Previous studies comparing the clinical and radiological results of hip16 and knee17 replacement in obese compared with non-obese patients have reported a lower age at surgery for obese patients. However, these studies did not explore the effect of increasing obesity or identify the magnitude of the reduction in the mean age at surgery in relation to the BMI subgroups.

The rising incidence of obesity in the developed world is a source of constant media attention. From an orthopaedic point of view we should expect a growing number of patients to present for hip and knee replacement surgery at a younger age than has been hitherto accepted. It remains to be seen whether these interventions will provide such patients with a satisfactory lifetime solution for their degenerative joint disease.

We acknowledge the valuable contribution to this study of Mr P. Moonot (MS, MRCS) and other members of the research team.

No benefits in any form have been received or will be received from a commercial party related directly or indirectly to the subject of this article.

References

1. No authors listed. The Information Centre. http://www.ic.nhs.uk/pubs/hlthsvyeng2004upd/04TrendTabs.xls/file (date last accessed 24 October 2007).

2. No authors listed. International Obesity Task Force Global Prevalence of Obesity. www.iotf.org/medi3/globalprev him (date last accessed 24 October 2007).

3. Hill JO, Catenacci V, Wyatt HR. Obesity: overview of an epidemic. Psychiatr Clin North Am 2005;28:1-23.

4. No authors listed. www.statistics.gov.uk. Obesity among people aged 16 and over: by social class of head of household and gender, 1998: Social Trends 32 (date last accessed 24 October 2007).

5. Flugsrud GB, Nordsletten L, Espehaug B, Havelin LI, Meyer HE. Risk factors for total hip replacement due to primary osteoarthritis: a cohort study in 50,034 persons. Arthritis Rheum 2002;46:675-82.

6. Sowers M. Epidemiology of risk factors for osteoarthritis: systemic factors. Curr Opin Rheumatol 2001;13:447-51.

7. Buckwalter JA, Saltzman C, Brown T. The impact of osteoarthritis: implications for research. Clin Orthop 2004;427:6-15.

8. Martin JA, Ellerbroek SM, Buckwalter JA. Age-related decline in chondrocyte response to insulin-like growth factor-1: the role of growth factor binding proteins. J Orthop Res 1997;15:491-8.

9. Martin JA, Buckwalter JA. The role of chondrocyte senescence in the pathogenesis of osteoarthritis and in limiting cartilage repair. J Bone Joint Surg [Am] 2003;85-A(Suppl 2):106-10.

10. Buckwalter JA. Osteoarthritis and articular cartilage use, disuse and abuse: experimental studies. J Rheumatol 1995;22(Suppl 43):13-15.

11. Marks R, Allegrante JP. Body mass indices in patients with disabling hip osteoarthritis. Arthritis Res 2002;4:112-16.

12. Manek NJ, Hart D, Spector TD, MacGregor AJ. The association of body mass index and osteoarthritis of the knee joint. Arthritis Rheum 2003;48:1024-9.

13. Dawson J, Fitzpatrick R, Murray D, Carr A. Questionnaire on the perceptions of patients about total hip replacement. J Bone Joint Surg [Br] 1996;78-B:185-90.

14. Dawson J, Fitzpatrick R, Murray D, Carr A. Questionnaire on the perceptions of patients about total knee replacement. J Bone Joint Surg [Br] 1998;80-B:63-9.

15. Karlson EW, Mandl LA, Aweh GN, et al. Total hip replacement due to osteoarthritis: the importance of age, obesity, and other modifiable risk factors. Am J Med 2003;114:93-8.

16. McLaughlin JR, Lee KR. The outcome of total hip replacement in obese and non-obese patients at 10 to 18 years. J Bone Joint Surg [Br] 2006;88-B:1286-92.

17. Amin AK, Patton JT, Cook RE, Brenkel IJ. Does obesity influence the clinical outcome at five years following total knee replacement for osteoarthritis? J Bone Joint Surg [Br] 2006;88-B:335-40.

M. Changulani, Y. Kalairajah, T. Peel, R. E. Field

From South West London Elective Orthopaedic Centre, Epsom, England

* M. Changulani, MS, MRCS, Specialist Registrar, William Harvey Hospital, Kennington Road, Willesborough, Ashford, Kent TN24 0LZ, UK.

* Y. Kalairajah, MPhil, FRCS(Orth), Consultant Orthopaedic Surgeon, Luton and Dunstable Hospital NHS Trust, Lewsey Road, Luton LU4 0DZ, UK.

* T. Peel, BSc(Hons), Research Assistant

* R. E. Field, PhD, FRCS(Orth), Consultant Orthopaedic Surgeon, Director of Research, Orthopaedic Research Unit, South West London Elective Orthopaedic Centre, Dorking Road, Epsom KT18 7EG, UK.

Correspondence should be sent to Dr M. Changulani; e-mail: [email protected]

(c)2008 British Editorial Society of Bone and Joint Surgery

doi: 10.1302/0301-620X.90B3.19782 $2.00

J Bone Joint Surg [Br] 2008;90-B:360-3.

Received 4 June 2007; Accepted after revision 23 October 2007

Copyright British Editorial Society of Bone & Joint Surgery Mar 2008

(c) 2008 Journal of Bone and Joint Surgery; British volume. Provided by ProQuest Information and Learning. All rights Reserved.

The Teacher-Librarian As Literacy Leader

By Braxton, Barbara

THERE WERE TWO PICTURES-ONE THAT WOULD MAKE A PRINCIPAL SHOUT WITH DELIGHT AND ONE THAT WOULD MAKE A TEACHER-LIBRARIAN WEEP WITH FRUSTRATION. The first was that of a large group of students, predominantly sixth-grade boys, waiting impatiently for the library to open at lunchtime to go in and continue reading the Deltora Quest series (a series of extremely popular books by Australian author Emily Rodda) so that they could earn the next gem for their belt in the challenge. The second was that of a group of sixth-grade students squirming and squiggling in their seats as they struggled to understand the nuances of To Kill a Mockingbird, being read aloud by their teacher, even though the teacher had already said, “Most kids would have missed the relevance of the trial, anyway.”

If we look behind the canvas to find out what was really the difference between the two pictures, it soon becomes clear-the first has the input of an experienced, qualified teacher-librarian; the other does not.

THE MANY ROLES OF THE TEACHER-LIBRARIAN

Once upon a time, the teacher-librarian was regarded as the literature expert in the school. Staff and students alike sought suggestions for books to read alone or aloud, and the teacher- librarian matched titles to interests, abilities, developmental levels, curriculum needs, and a host of other factors that put the right book into the right hands.

The literacy-through-literature role was the prime responsibility of the teacher-librarian.

But time and technology have marched on. Despite continued stereotyping of teacher-librarians as little old ladies in sensible shoes reading stories to children in the autumn of their careers, the role is now much more diversified. For instance, Information Power: Building Partnerships for Learning (American Library Association & Association for Educational Communications and Technology, 1998) identifies four major roles of the teacher- librarian:

Teacher: works with students and other learning community members to develop information literacy skills within the framework of best practice pedagogy based on up-to-date knowledge of learning and teaching practices

Instructional partner: collaborates with teachers to design and assess authentic learning tasks that are founded on using old information to create new

Information specialist: acquires and evaluates information in multiple formats and multiple sources and shares this information with the staff and students

Program administrator: develops policies, programs, and practices that enrich and enhance student learning; also manages budgets, people, equipment, and facilities

The Canadian Association of School Libraries also emphasizes the collaborative and information literacy instruction roles of the teacher-librarian, in its 2003 publication Achieving Information Literacy: Standards for School Library Programs in Canada. Furthermore, the Australian School Library Association (2001) identifies three distinct roles for the teacher-librarian:

Curriculum leader: focuses on embedding information literacy skills across the curriculum

Information specialist: provides access to a variety of information resources in equally various formats

Information services manager: develops a collection that reflects the ethos of the school and supports the curriculum

Perhaps it is significant that none of these works by teacher- librarian associations focuses on that “old” role of fostering a love of literature, although the Australian document (in its curriculum leader profile) does include maintaining “literacy as a high priority, engaging students in reading, viewing, and listening for understanding and enjoyment.”

WHAT RESEARCH SHOWS

In 2004, Nancy A. S. Miller conducted the “Oklahoma Association of School Library Media Specialists Time Task Study” in Oklahoma schools to clarify the various roles of teacher-librarians and to identify the proportion of time spent on each role in libraries that were staffed by one or more professionals and one or more library assistants. Miller’s concern was that despite the findings of research in a number of states, school libraries remained “so understaffed that they fail to achieve their potential to impact student learning and that staff reductions and program elimination were often due to a lack of understanding of what professional librarians do” (p. 1). Miller’s research shows that teacher-librarians in all categories (elementary, middle school, and high school) spend most of their day involved in the task that Information Power describes as program administrator (American Library Association & Association for Educational Communications and Technology, 1998)-particularly collection development-and that when the library professional was actually teaching, less than 20% of that time was devoted to fostering reading. As expected, this task was highest in the early years of elementary school but fell away sharply as students got older and more independent.

EMPHASIZE TEACHER IN TEACHER-LIBRARIAN

Although the reasons for the variations in duties are diverse and dependent on the school situation, it nevertheless appears that there is a need to review and renew our role and highlight the teacher part of teacher-librarian, especially those responsibilities that relate to fostering reading.

This task is even more important if we consider the findings of recent Organization for Economic Cooperation and Development studies (2004, 2007). In 2003 and 2006, OECD conducted its Program for International Student Assessment, including 15-year-olds in more than 40 industrialized countries. The 2003 study concluded that the most important predictor of academic success was the amount of time that students spent reading and that this indicator was more accurate than economic or social status; that the time spent reading was highly correlated to success in math and science; and that the keys to success lay in teaching students how to read and then having them read as much as they can.

Even if the umbrella organizations for our profession have changed the emphasis, we as individuals can put ourselves into our school and district policies and procedures through their literacy plans. First, however, we need to ask and answer two critical questions: How does the library contribute to literacy and learning in this school? How do we ensure that the library is the curriculum center of the school? If we have these questions answered, with evidence, before we attend planning meetings, we can argue our position from a strong perspective.

CHILDREN WANT TO READ

For 5-year-olds, reading is what school is all about. It is why they are there, and such is their expectation-if they do not learn to read on the first day, they go home disappointed and disillusioned.

It is not the role of the teacher-librarian to be the children’s primary reading instructor-that is the privilege of their classroom teacher-but there is much that the teacher-librarian can do to support children’s reading development through the library. Central to knowing how we can accomplish this effectively is having an understanding of how humans learn. Through research by people such as Marian Diamond, Jesse Conel, Peter Huttenlocher, and Geoffrey Caine and Renate Caine, as well as through technological advances in brain-scanning equipment, we know the following:

* The brain constantly grows, and it changes from conception to death.

* The brain develops over three decades, with the sensory sectors being the most active in the first 10 years and with those enabling deep and independent thinking developing over the second 10 years.

* Different ages have different needs and conditions for learning.

* We build new concepts on old understandings, and new information must be connected to a prior experience to make sense.

* Intelligence is not fixed-it is a combination of nature and nurture.

* An enriched environment, with multisensory challenges and opportunities to explore it, has a significant impact on learning.

* Learning is unique and dependent on many factors, many of them internal and intrinsic to the individual.

* There are many ways to learn the same thing, and we each have our own preferences and predilections to ensure success.

* There are two types of learning: first, experience-expectant learning, which comprises basic survival skills and speech and which occurs in a well-described order and in a well-defined time frame, provided one has the opportunities to learn them; second, experience-dependent learning, which is the learning of nonessential skills, including reading, requiring explicit instruction, repetition, motivation, and mental effort, which develop at different times and different rates for each person.

With this knowledge, it is possible to create an environment where students establish partnerships and choose programs in which to participate, all of which will extend, enrich, and enhance their reading experiences.

CREATE A READING-FRIENDLY ENVIRONMENT

The first priority for us as teacher-librarians is to get staff and students into the library. It does not matter what books might be on our shelves or what challenges and incentives we offer the students if they and teachers do not come into the library in the first place. Therefore, the most effective place to start is that of examining our library landscape. Is it like the one ruled over by Madam Pince at Hogwarts School of Witchcraft and Wizardry, in J. K. Rowling’s Harry Potter series-“tens of thousands of books, thousands of shelves, hundreds of narrow rows”? Or is it a place of wonder and promise, with books begging to be taken and read?

With a little imagination, it is easy to compromise between the austerity of Hogwarts and the flamboyance of the bookstore that thinks that it is a coffee shop. Your library can be a place where students choose to be (even when they do not have to be there), and they can learn while they are there without borrowing a book.

Creating an inviting environment does more than attract and pique the imagination of visitors-it encourages learning. We know this because

* the brain simultaneously functions at many levels as one’s thoughts, emotions, imagination, predispositions, and physiology interact and exchange information with the environment;

* the brain absorbs information directly and indirectly, continuously aware of what is beyond the immediate focus of attention, to the extent that 70% of what is learned is not directly taught;

* learning involves conscious and unconscious processes, including experience, emotion, and sensory input, and much of our learning occurs and is processed below the level of immediate awareness so that understanding may not happen until much later, after there has been time for reflection and assimilation;

* the brain is elastic because its structure is changed, or rewired, by exposure to new experiences so that the more we use it, the better it gets;

* the brain is stimulated by challenge and inhibited by threat so that students who are in safe, secure environments, mental and physical, can allow the cognitive part of their brains to dominate the emotional parts and explore, investigate, take risks, and learn;

* 30% to 60% of the brain’s wiring comes from our genetic makeup (nature), and 40% to 70% comes from environmental influences and effect (nurture); and

* the two critical factors in learning are novelty and interactive feedback (National Research Council, 2000).

So even the most disinterested, disconnected, or disabled student learns in an osmotic way in stimulating surroundings. Other research has demonstrated that our initial attitude toward something is a key determinant in the success of our learning, so if we can persuade each student the library holds something enjoyable for him or her and is therefore an okay place to be, then we are on the way.

CREATE THE WOW! FACTOR

Giving your library the Wow! factor does not mean having to change any of your core business practices. It means examining what you do and considering how you can do it with more flair. For example, ask yourself the following questions.

Is the space overwhelming, with stacks and stacks of shelves that could be rearranged so that spaces within the space become child- size? Despite such issues as fire safety regulations, duty of care, light and power sources, and maintaining Dewey order, which determine much of what we put where, it is still possible to create a physical layout that is in touch with the child’s needs and senses.

Are things within easy reach? Young children are hobbit-size, and so shelves and other storage units need to be low. Books need to have their colorful covers displayed, rather than look like soldiers on parade; popular authors and series can be grouped; and regularly changing displays offer opportunities for the child to see and be stimulated by the resources.

Are there displays that exploit the brain’s capacity to learn at many levels? Children absorb everything, and artifacts, colors, labels, and subjects can help them build the concept of the library as a fun, alive place-where their imaginations can be entertained and educated. Displays can be large-for example, by creating Santa’s Village at the North Pole-or they can be modest, by putting a single volume on a stand in a prominent place. They can cover curriculum topics, local and international issues and events, genres, slogans and sayings and maybe a line from a lyric, and they can introduce new subjects to the students. They can be short- or long-term, depending on their priority and purpose. They can be static or interactive, offering students opportunities to get involved. They can be created by staff or students, or they can be a joint venture. But they have to be there, and they have to change often.

Do your signs allow even the youngest student to independently find books? If kindergarten children can go straight to the dinosaur books without asking, those children will have the power to become regular library users.

MAKE PRACTICE MEET PROMISE

Once you draw the student into the library, the practice has to meet the promise. Specifically, what can we do that will foster reading as a lifelong activity, especially in this instant- information age, dominated by test results, grade levels, and other quantitative data? How can we offer the page as an alternative to the screen?

We know that demonstrations and models are critical teachers- imagine the impact made when students see their teachers valuing and using the library’s staff and services. Therefore, carefully examine and expand the range of services that the library might offer our teaching colleagues, beyond the regular collaborative planning and resourcing of teaching units. Consider the following:

* Suggesting appropriate fiction, read aloud or alone, to support teaching units and enhance understanding and constructing a display on a current curriculum theme that includes related fiction and suggestions for further reading

* Taking a sample of these themerelated titles to a faculty meeting so that they can be previewed and discussed

* Creating bibliographies of related subjects that support common curriculum themes as well as specific titles, ready to be handed to teachers as soon as they ask, “What have you got about . . .”

* Teaching teachers how to use OPAC, particularly if your school has recently upgraded its system

* Suggesting new read-alouds per age group (rather than the same old, same old), including suggestions for how the books might be used within the curriculum

* Offering to gather a selection of books to be used in class for 6 weeks or so and then swapped for another selection-the rotating classroom library-so that students are continually exposed to new reading materials at a range of reading levels, a variety of genres, a variety of formats, fiction that supports the curriculum theme but offers titles for leisure and pleasure (unrelated to teaching or testing), various authors, and new subject areas that pique imagination

* Visiting the class and book-talking a few of the titles in the rotating classroom library to build up interest and enthusiasm before handing it over to the teacher

* Setting aside a time when you can visit the class and read aloud to the students

* Providing books in digital formats to be listened to or viewed by the class so that nonreaders and non-English speakers can share the story

* Asking a class to create a display for the library based on the subject, theme, or genre that it is studying and then incorporating the class’s work as well as appropriate resources

* Suggesting that the teacher be involved when students are borrowing books because the teacher will always know the students’ abilities, interests, and needs better than the best teacher- librarian will

* Asking the teachers, individually and collectively, for suggestions for purchases and ensuring that they have first access to these

* Establishing an early-alert system (perhaps via e-mail) that informs teachers of new books that you purchased that they might like to preview and read

* Offering opportunities for teachers to come and browse the new books at their leisure before they disappear onto the shelves, such as at a brunch-and-browse or coffee-and-chat session

* Having regular lunchtime story sessions when staff members come to share their favorite read-alouds with students

* Creating a display of the staff’s favorites from childhood and their current favorites and including photos of the staff reading the books

* Asking classroom teachers to suggest services that would make their lives easier without compromising the teacher-librarian’s professional judgment

One of the easiest traps for teacher-librarians to fall into is that of creating the perception that the library is theirs. One of the hardest things to do is to hand over ownership to the staff and students and become just its custodian. But in the interest of student learning, it is a critical transition to make.

Students will value the staff and services of the school library much more if they feel that it is their place, a place where they are welcome, rather than a place that is an austere shrine to the printed word. Students, especially those in high school, too often get the message that the contents of the library are more important than the clients.

As such, the first rule that should be abandoned is that of silence. No one lives and learns in isolation, and we should be encouraging students to talk about what they have found and discuss their learning. Even those who are seriously studying need to ask a question or two at times to clarify their understanding. When your library is humming, take a tour and discreetly listen to conversations-most students will be on task, and it takes but a look or a word to refocus those who are not. During this time, you will be helping to establish a positive attitude toward the library.

Look for ways that the students can feel ownership of the place and space. There is little that you can do that they cannot, with a little guidance. Consider the following:

* Asking them to actively participate in organizing an author visit by having them provide input into selecting the author, do fund-raising to cover the costs, create advertising posters and displays of the author’s work, prepare a literary luncheon so that they can talk with the author in an informal setting, introduce and thank the speaker, and purchase and present a gift

* Developing a library assistants program that has structure and a career path (for a description of how this can be done, see the June 2005 issue of Teacher Librarian)

* Asking students for suggestions for titles to purchase and allowing senior students to spend part of the library budget choosing from a collection of preselected titles. When each student has made a choice, note the title and the student, and ask what attracted him or her to it. As soon as the book is processed, give it to the student to read and have him or her report on whether it lived up to expectations.

* Establishing a regular storytime, perhaps during lunch break, when older students can read to the younger ones, individually or as a small group

* Promoting a principal’s reading challenge that allows students to set and meet their own targets and be acknowledged for their efforts with a certificate from the principal. If you use a prescribed list of must-reads, ensure that the books are available from the library, that the students have input into the list, and that they are not restricted within it by arbitrary levels or lexiles.

Handing over the power in this way does not diminish your role, as some fear, but rather enhances it because you will have so many more users of the facility.

Once the students are in the library, there are countless ways that you can enrich their reading experiences. Through the development and promotion of some schoolwide literature-based activities that excite staff and students, you can reclaim the role of the literature expert and demonstrate that when it comes to literacy through literature in the school, you are the leader. Consider the following:

* Developing an interactive competition based on a book or a series, such as Deltora Quest, which requires the students to read and reread, which demands higher-order thinking skills, and which can be offered one clue at a time (an outline of the Deltora Quest competition is published in my column in the June 2006 issue of Teacher Librarian).

* Creating an across-grade challenge by having classes not only predict the ending to a book that you are reading to all of them but also make a display around their ideas

* Setting up hypothetical dilemmas that favorite characters might face and asking students to solve them. These might involve giving advice, constructing something, or being a helping hand.

* Creating a literature-based quiz (with prizes)

* Publishing “If you liked . . . , then try ….” lists and having these prominently on display. Ask the students for their input or have them produce their own lists.

* Creating a display with the theme “You’ve seen the movie, now read the book!” It is amazing how many do not realize that the book came first.

* Establishing a web page or blog from your library site where student reviews of the library’s latest purchases can be published. Students love to see their work in print and accessible to many.

* Constructing a list of activities that are not book reports that teachers might use if they need proof that a student has read a title

* Advertising the dates and details of your state’s or province’s readers’ choice awards, purchasing the books from those awards, and setting up activities that support participation in these. Almost every state or province has an award for elementary readers, and many have young adult categories as well. A list of these awards in the United States is available at www.mcelmeel.com/curriculum/bookawards.html, whereas links to provincial readers’ choice programs in Canada can be found at www.clatoolbox.ca/slip/english/School_Library_Programs/Reading/.

* Connecting with local sports teams to build partnerships between athletes and students. Many major league baseball teams have reading promotion programs, and the National Baseball Hall of Fame and the American Library Association run the annual program Step Up to the Plate @ Your Library, www.ala.org/ala/pio/campaign/sponsorship/stepuptotheplateyourlibrary/stepup2007.htm. The National Basketball Association has a program called Read to Achieve, www.nba.com/features/rta_index.html, and the National Football League offers Join the Team, www.jointheteam.com/. A less formal arrangement with the local sports heroes could work well, too.

* Cooperating with your local public library to produce and promote a summer reading program, perhaps constructing it so that participants can take advantage of some of the reading rewards offered through commercial enterprises such as Target, Pizza Hut, and Six Flags Amusement Parks (Read to Succeed, www.sixflags.com/parks/greatamerica/ParkPress/article120203a.html).

MORE IDEAS TO USE SCHOOLWIDE

Once you start thinking about ideas, they just flow, and each one is evidence that you do have a role beyond that of the program administrator (Miller, 2004). By making that role explicit throughout your entire school community, you will show that the development of literacy is the heart and soul of what you do and that the budget and time cutters should take their scissors elsewhere.

We all need to make time to demonstrate that what we are doing is based on best practice pedagogy grounded in the latest learning research and will therefore have an effect on those ubiquitous test scores. Then schools might realize that they can have a curriculum leader, an information specialist, an information services manager, a teacher, an instructional partner, a program administrator, and a reading specialist in one person. Seven hats for the price of one salary. How excellent is that!

REFERENCES

American Library Association & Association for Educational Communications and Technology. (1998). Information power: Building partnerships for learning. Chicago: Author.

Australian School Library Association. (2001). Learning for the future: Developing information services in schools (2nd ed.). Melbourne, Australia: Curriculum Corporation.

Canadian Association of School Libraries. (2003). Achieving information literacy: Standards for school library programs in Canada. Ottawa, Ontario, Canada: Author.

Miller, N. A. S. (2004). Oklahoma Association of School Library Media Specialists Time Task Study Report. Retrieved December 15, 2007, from www.oklibs.org/~oaslms/Links_OKTimeTaskReport.pdf

National Research Council. (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.

Organization for Economic Cooperation and Development. (2004). Learning for tomorrow’s world: First results from PISA 2003. Retrieved December 15, 2007, from www.pisa.oecd.org/document/55/0,2340,en_32252351_32236173_33917303_1_1_1_1,00.html

Organization for Economic Cooperation and Development. (2007). PISA 2006 results. Retrieved December 15, 2007, from www.oecd.org/document/2/0,3343,en_32252351_32236191_39718850_1_1_1_1,00.html

Barbara Braxton retired as teacher-librarian for the Palmerston District Primary School in the Australian Capital Territory. She writes the “Strategy” column for TL. She can be reached at [email protected].

Copyright Ken Haycock & Associates Feb 2008

(c) 2008 Teacher Librarian. Provided by ProQuest Information and Learning. All rights Reserved.

Drinking Lots of Water Has Little Health Benefit

Two kidney experts have taken it upon themselves to dispel a handful of myths regarding water. Dr. Stanley Goldfarb and Dr. Dan Negoianu of the Renal, Electrolyte, and Hypertension Division at the University of Pennsylvania, Philadelphia, reviewed several published clinical studies on the benefits of drinking a large amount of water daily, and found that there was little proof to back up the theories.

There are plenty of myths floating around about the benefits of drinking a lot of water, and some of these myths seem almost factual due to their popularity. The standard U.S. recommendation of 8 glasses of 8 oz of water per day is widely known, but there is little if any evidence to prove this beneficial.

Goldfarb and Negoianu agreed that some people, such as athletes or those who live in very dry, hot climates, may require this much water to keep from being dehydrated, but for the average person, there is absolutely no evidence that drinking this quantity of water is beneficial.

Their study, which will be published in the June 2008 issue of the Journal of the American Society of Nephrology, may dispel many of the water-related myths.

Goldfarb insists that the sources of the “water myth” are the worlds of complementary and alternative medicine, and that the spread of the myth was propelled even further by the internet.

Goldfarb’s specialization in kidneys prompted him and Negoianu to review literature covering the benefits of drinking water, due to a common interest in the way the kidney handles fluids.

The pair deflated at least four popular myths during their research.

One myth, that drinking water reduces headaches, had only one study to back it up, and the results of the study lacked statistical significance.

A second myth, that increased water intake improves skin tone, is also not proven. While there is evidence that dehydration can affect the skin, Goldfarb said, “There are no data to suggest that it actually improves the content of the skin.”

It has also been claimed that filling up on water will assist in suppressing appetite. Many claim that if you drink enough water it can help fight obesity, or at least maintain weight more easily. “Many people drink water before and during the meal to try to suppress their appetite,” Goldfarb stated, yet there is “no consistent evidence” that water suppresses appetite. “Because you absorb water so quickly and it moves through the GI tract so quickly, it probably doesn’t fill you up the way people have proposed, nor does it lead to the release of hormones which suppress appetite as far as we know,” Goldfarb affirmed.

A fourth falsehood is that drinking a large quantity of water flushes toxins from the body and improves kidney functions. The kidney experts were quick to dispel this myth. No clinical evidence exists to back this up. Some have even claimed that water intake can benefit the function of organs; however no studies have documented this type of benefit either. Goldfarb plainly stated that this is not how the kidney works. “When you drink a lot of water you end up having a larger volume of urine but don’t necessarily increase the excretion of various constituents of the urine.” Sodium and urea might be expelled, but there is nothing to back up any clinical benefit to this.

Goldfarb says there is absolutely no rational basis for the “8×8 rule” and it is very unclear where this recommendation even began. Not only is there no evidence to back up extra water’s benefit, there are some circumstances where over-consumption may be unhealthy. “In long-distance runners, for example, more harm is done by long distance runners over-drinking during races than by long distance runners who under-drink,” Goldfarb explicated.

Goldfarb further clarified by citing the case of a woman who recently died when she developed brain swelling from continuously and rapidly drinking water for several minutes as part of a radio contest.

Despite a lack of evidence, the Food Standards Agency and others are sticking to their recommendations of drinking 6 to 8 glasses a day.

On the Net:

University of Pennsylvania, Philadelphia

Tripler Army Medical Center’s Implementation of CliniComp’s Essentris Improves Productivity and Increases Revenue By 47 Percent

CliniComp Intl., whose clinical documentation system is used for more than 45 percent of all Department of Defense (DoD) inpatient beds, today announced that revenue at Tripler Army Medical Center (TAMC) rose 47 percent after enterprise-wide deployment of CliniComp’s Essentris clinical documentation system. As a result, TAMC recouped its initial IT investment in eighteen months.

Currently, TAMC Essentris users include 1,300 clinicians as well as administrative staff across all 17 TAMC sites. TAMC, the headquarters of the Pacific Regional Medical Command and the largest military hospital in the Asian and Pacific Rim region, initially implemented Essentris at four sites between 1997 and 2003. System-wide installation was completed in October 2007.

“Patient care is our top priority and we knew we wanted to implement an EMR solution that would standardize charting, provide instant access to patient information and maximize productivity,” said Debbie Arai, R.N., TAMC’s chief of inpatient clinical services. “Previously, our nursing staff worked in three different modalities — paper, an older EMR system and printouts — creating inefficient workflow processes and a lack of flexibility. Now, all nurses chart in the same system, using standardized forms, allowing them to transition easily between departments. The quick ROI was a nice bonus.”

TAMC raised revenue by 47 percent by creating key coding phrases for every service line and embedding them into Essentris. “When clinicians enter these phrases, this allows coders to match the appropriate reimbursement to the level of service patients receive, minimizing lost charges and undercoding of claims that resulted from the previously fragmented documentation environment,” stated Arai.

Arai added, “We have also seen a significant surge in productivity in our medical records department. For example, before Essentris, staff had to track down providers to sign an average of 20 incomplete charts every month, a very time-consuming and frustrating activity. We’re now down to about one chart a month that requires administrative closure.”

According to CliniComp’s President, Technology & Operations J.F. Lancelot, “It’s gratifying that our clinical documentation system supports Tripler Army Medical Center’s mission. The ability to provide our clients with almost immediate measurable improvements in patient care, clinician efficiency and revenue management underlies our commitment to providing the best and most user-friendly in-patient EMR solution.”

About Tripler

Tripler Army Medical Center (TAMC) is the headquarters of the Pacific Regional Medical Command of the armed forces administered by the United States Army in Honolulu. It is the largest military hospital in the Asian and Pacific Rim region, serving an area that includes Hawaii, Japan, Johnston Atoll, Guam, Eniwetok, Kwajalein, and a number of Pacific island nations along with American Samoa. Thousands of people are eligible to receive care at the Pacific Regional Medical Command’s premier teaching medical center. This includes active-duty and retired service members of all branches of service; their eligible family members; veterans; and many Pacific Island Nation residents. For more information about TAMC, please visit http://www.tamc.amedd.army.mil/.

About CliniComp

CliniComp is a leading provider of an intelligent, reliable inpatient electronic health record and surveillance solution suite to leading hospitals and health systems. In its nearly two-decade history, CliniComp has been a pioneer in bridging the gap between the back-office and the bedside by extending existing HIS and ancillary systems to the point of care. The Essentris enterprise solution maximizes IT investment and enforces consistent workflow across all aspects of patient care. Essentris’ use of a single, interdisciplinary, electronic chart enables caregivers to easily access and view the total clinical experience and provides a new generation of real-time clinical surveillance and analysis tools. The result is a dramatic improvement in quality and safety of patient care, outcomes and clinical decision-making. Essentris is deployed at approximately 70 client sites, including UCLA Healthcare, Baptist Health South Florida and Tenet Healthcare Corporation. CliniComp also has a sizable deployment across the Military Health System, covering more than 45 percent of all Department of Defense beds. For more information, visit www.clinicomp.com or call 800.350.8202.

ED. NOTE:

“The appearance of name-brand products in this article does not constitute endorsement by Tripler Army Medical Center, Pacific Regional Medical Command, the Department of the Army, Department of Defense or the U.S. Government of the information, products or services contained therein.”

Georgia’s First Holistic Spa Focused On Sustaining a Healthy Lifestyle Opens in Blue Ridge Mountains

DAHLONEGA, Ga., April 2, 2008 (PRIME NEWSWIRE) — Forrest Hills Resort & Conference Center, one of the country’s top 100 privately-owned resorts, is celebrating its 30th anniversary by opening Georgia’s first holistic healing spa focused on sustainability: Anidawehi Plantation Wellness Center.

Set within 140 acres of wilderness near Dahlonega, Ga., north of Atlanta, the independent wellness center focuses on holistic healing through several unique programs that incorporate balanced nutrition through raw living foods, and encourage sustainability through follow-up with guests. The name, Anidawehi, is inspired by the Native American belief that individuals have the power within themselves to self-heal.

“Our mission is to help our clients discover how to achieve optimal wellness naturally — and to sustain it over their lifetime,” says Denise Roberson, MBA, Anidawehi Plantation director, and a 30-year veteran of the resort-hospitality industry.

The center will offer five programs in internal cleansing, healthy eating, weight management, age defiance and psychological well-being. Programs include a two-week natural cleansing and lifestyle-coaching program beginning May 4, 2008, as well as three- and four-day wellness center packages.

Leading Anidawehi Plantation’s wellness programs is Jackie Graff, RN, BSN, a nationally recognized raw and living foods chef and nutrition consultant.

“By eating raw foods, the body is able to efficiently assimilate the nutrients it needs and eliminate the build up of toxins in the body. Through hands-on food preparation instruction and delicious raw food recipes, we empower guests with the knowledge they need to stay healthy over a lifetime,” Graff says.

The Lifestyle Sustainability program further reinforces this learning to help guests adapt to a healthier lifestyle. Other programs focus on weight management and a healthy-aging curriculum customized to help guests meet their specific health objectives and re-align their lifestyles. The Rest and Rejuvenation program provides a supportive setting, including medical consultations and life coaching, to help guests recover from significant life events such as the loss of loved ones, surgery or chemotherapy.

Anidawehi Plantation’s packaged programs include life coaching, spa therapy, yoga classes, and colon hydrotherapy. Other amenities include:

* a Native American style sweat lodge,
* an on-site organic garden supplying fresh produce for guest meals, and
* a follow-up support program to help guests sustain their lifestyle goals.

All Anidawehi Plantation visitors stay within the Forrest Hills Resort, choosing from a diverse range of luxury cottages and hotel suites. Corporate or retreat-style lodges also are available for larger groups.

About Anidawehi Plantation Wellness Center

Anidawehi Plantation Wellness Center, located inside the Forrest Hills Resort near the ancient site of a Cherokee village, opened in spring 2008 as a place for self-healing and spiritual renewal. Perched within the Blue Ridge Mountains, Anidawehi is surrounded by 140 acres of wilderness. The name Anidawehi is inspired by the Native American belief that our mind, body and spirit have the ability to heal themselves. Anidawehi Plantation has embraced these traditions in its unique wellness programs.

Five programs are offered in internal cleansing, healthy eating, weight management, age defiance and psychological well-being. Integrated into these programs are life-coaching, spa therapy and yoga classes. Guests also have full access to Forrest Hills’ amenities, such as horseback riding, a three-mile jogging/biking trail, and hiking. For more information, visit www.anidawehi.com or call (706) 864-3818.

Editor’s Note: Access high-res images of Anidawehi Plantation here: http://anidawehi.com/index.php?go=gallery.

This news release was distributed by PrimeNewswire, www.primenewswire.com

 CONTACT: Anidawehi Plantation Wellness Center
          For Anidawehi
          Anne Wainscott-Sargent
          678.352.0009
          [email protected]

Botox May Affect the Brain

A study now shows that Allergan Incorporated’s Botox, or botulinum neurotoxin type A, a wrinkle remedy, may move from the site of the injection to the brain. On April 2nd a study published in the Journal of Neuroscience reported that botulinum toxin was found in the brain stems of test rats. Scientists had previously injected the rats’ whisker muscles with the toxin, and tests of their brain tissue revealed these surprising results.

The authors of the study wrote that this neurotoxin may change the circuitry of the spinal cord as well as interrupt communication via nerve cells. Matthew Avram, the director of Massachusetts General Hospital’s Dermatology, Laser and Cosmetic Center, says the study may not predict what happens in people, because human physiology differs from rat and mouse physiology, but he does think the idea needs focused follow-up. If the toxin is, in fact, being transmitted to the central nervous system, there may be big problems, but as Avram says, “this treatment has been used on millions of people for years, and we’re not seeing major central nervous system issues with it.”

This could affect millions, however, due to the popularity of the treatment. With $1.21 billion in sales last year, Botox is the company’s most popular and biggest selling product. It was approved in 1989 and originally was fashionable for celebrities. Since then, it has expanded into the middle class market. Currently Botox and Myobloc, a product from Solstice Neurosciences Incorporated, are being investigated as causes of botulism, an illness with symptoms of weakened muscles.

A spokeswoman from Allergan, Caroline Van Hove says that more work is necessary because the study contradicts previous findings and it lacks a conclusion. In a statement, Van Hove said, “The authors used a laboratory preparation of botulinum toxin and did not use Botox, and data suggest that different preparations of botulinum toxin react differently in both the laboratory and in clinical practice.”

Edgar Salazar-Grueso, chief medical officer of Solstice Neurosciences, claims that Myobloc is a type B neurotoxin, a different type of botulinum toxin than the one studied. According to Salazar-Grueso, studies on the migratory behavior of toxin A have already been published; in monkeys, toxin A migrates more than type B. Because monkeys are more human-like than rodents, he argues, the new findings are consistent with what is already known.

In the study, botulinum toxin was injected into one side of each rodent’s hippocampus and into its visual center, the superior colliculus, as well. The toxin then migrated either from one side of the hippocampus to the other or to the animal’s eyes. Effects of the injection were still evident six months after the fact, according to scientists.

The prescribing literature for both Myobloc and Botox carries warnings about possible breathing and swallowing difficulties. The FDA is concerned that the new data may require these warnings to be strengthened. The drugs might be life-threatening when used in patients with neuromuscular disorders.

Many of the cases already reviewed by the FDA that prompted the warnings involved children who received injections to treat spasms associated with cerebral palsy. This use of the drug is not currently FDA approved, and the dosage is generally about 10 times the usual cosmetic dose.

Matthew Avram is skeptical of linking this study to the previous problems with the FDA. He stated, “Those tend to be very young children with massive doses. I don’t know that this study relates to that.”

On the Net:

Allergan Incorporated

Journal of Neuroscience

Massachusetts General Hospital

CEC Code of Ethics and Standards for Professional Practice for Special Educators

By Anonymous

One of the central characteristics of a mature profession is its willingness to abide by a set of ethical principles. As professionals serving individuals with exceptionalities, special educators possess a special trust endowed by the community and recognized by professional licensure. As such, special educators have a responsibility to be guided by their professional principles and practice standards. This section delineates the CEC Code of Ethics and Standards for Professional Practice. They are intended to provide the kind of leadership and guidance that makes each of us proud to be special educators and provides us with the principles by which our practice is guided. The Code of Ethics is made up of eight fundamental principles to which all special educators are bound. The Standards for Professional Practice describe the guidelines special educators use in carrying out day-to-day responsibilities. The Professional Practice Standards are how special educators measure their own and their colleagues’ professional excellence. It is incumbent on all special educators to use these standards.

CEC CODE OF ETHICS FOR SPECIAL EDUCATORS

We declare the following principles to be the Code of Ethics for educators of persons with exceptionalities. Members of the special education profession are responsible for upholding and advancing these principles. Members of the Council for Exceptional Children agree to judge and be judged by them in accordance with the spirit and provisions of this Code.

A. Special education professionals are committed to developing the highest educational and quality of life potential of individuals with exceptionalities.

B. Special education professionals promote and maintain a high level of competence and integrity in practicing their profession.

C. Special education professionals engage in professional activities which benefit individuals with exceptionalities, their families, other colleagues, students, or research subjects.

D. Special education professionals exercise objective professional judgment in the practice of their profession.

E. Special education professionals strive to advance their knowledge and skills regarding the education of individuals with exceptionalities.

F. Special education professionals work within the standards and policies of their profession.

G. Special education professionals seek to uphold and improve where necessary the laws, regulations, and policies governing the delivery of special education and related services and the practice of their profession.

H. Special education professionals do not condone or participate in unethical or illegal acts, nor violate professional standards adopted by the Delegate Assembly of CEC.

SPECIAL EDUCATION PROFESSIONAL PRACTICE STANDARDS

1. PROFESSIONALS IN RELATION TO PERSONS WITH EXCEPTIONALITIES AND THEIR FAMILIES

Instructional Responsibilities

Special education personnel are committed to the application of professional expertise to ensure the provision of quality education for all individuals with exceptionalities. Professionals strive to

1. Identify and use instructional methods and curricula that are appropriate to their area of professional practice and effective in meeting the individual needs of persons with exceptionalities.

2. Participate in the selection and use of appropriate instructional materials, equipment, supplies, and other resources needed in the effective practice of their profession.

3. Create safe and effective learning environments which contribute to fulfillment of needs, stimulation of learning, and self-concept.

4. Maintain class size and caseloads that are conducive to meeting the individual instructional needs of individuals with exceptionalities.

5. Use assessment instruments and procedures that do not discriminate against persons with exceptionalities on the basis of race, color, creed, sex, national origin, age, political practices, family or social background, sexual orientation, or exceptionality.

6. Base grading, promotion, graduation, and/or movement out of the program on the individual goals and objectives for individuals with exceptionalities.

7. Provide accurate program data to administrators, colleagues, and parents, based on efficient and objective record keeping practices, for the purpose of decision making.

8. Maintain confidentiality of information except when information is released under specific conditions of written consent and statutory confidentiality requirements.

Management of Behavior

Special education professionals participate with other professionals and with parents in an interdisciplinary effort in the management of behavior. Professionals

9. Apply only those disciplinary methods and behavioral procedures which they have been instructed to use and which do not undermine the dignity of the individual or the basic human rights of persons with exceptionalities, such as corporal punishment.

10. Clearly specify the goals and objectives for behavior management practices in the individualized education program of persons with exceptionalities.

11. Conform to policies, statutes, and rules established by state/provincial and local agencies relating to the judicious application of disciplinary methods and behavioral procedures.

12. Take adequate measures to discourage, prevent, and intervene when a colleague’s behavior is perceived as being detrimental to exceptional students.

13. Refrain from aversive techniques unless repeated trials of other methods have failed and only after consultation with parents and appropriate agency officials.

Support Procedures

Professionals

1. Seek adequate instruction and supervision before they are required to perform support services for which they have not been prepared previously.

2. May administer medication, where state/provincial policies do not preclude such action, if qualified to do so or if written instructions are on file which state the purpose of the medication, the conditions under which it may be administered, possible side effects, the physician’s name and phone number, and the professional liability if a mistake is made. The professional will not be required to administer medication.

3. Note and report to those concerned whenever changes in behavior occur in conjunction with the administration of medication or at any other time.

Parent Relationships

Professionals seek to develop relationships with parents based on mutual respect for their roles in achieving benefits for the exceptional person. Special education professionals

1. Develop effective communication with parents, avoiding technical terminology, using the primary language of the home, and other modes of communication when appropriate.

2. Seek and use parents’ knowledge and expertise in planning, conducting, and evaluating special education and related services for persons with exceptionalities.

3. Maintain communications between parents and professionals with appropriate respect for privacy and confidentiality.

4. Extend opportunities for parent education utilizing accurate information and professional methods.

5. Inform parents of the educational rights of their children and of any proposed or actual practices, which violate those rights.

6. Recognize and respect cultural diversities which exist in some families with persons with exceptionalities.

7. Recognize that the relationship of home and community environmental conditions affects the behavior and outlook of the exceptional person.

Advocacy

Special education professionals serve as advocates for exceptional students by speaking, writing, and acting in a variety of situations on their behalf. They

1. Continually seek to improve government provisions for the education of persons with exceptionalities while ensuring that public statements by professionals as individuals are not construed to represent official policy statements of the agency that employs them.

2. Work cooperatively with and encourage other professionals to improve the provision of special education and related services to persons with exceptionalities.

3. Document and objectively report to one’s supervisors or administrators inadequacies in resources and promote appropriate corrective action.

4. Monitor for inappropriate placements in special education and intervene at appropriate levels to correct the condition when such inappropriate placements exist.

5. Follow local, state/provincial, and federal laws and regulations which mandate a free appropriate public education to exceptional students and the protection of the rights of persons with exceptionalities to equal opportunities in our society.

2. PROFESSIONALS IN RELATION TO EMPLOYMENT

Certification and Qualification

Professionals ensure that only persons deemed qualified by having met state/provincial minimum standards are employed as teachers, administrators, and related service providers for individuals with exceptionalities.

Employment

1. Professionals do not discriminate in hiring on the basis of race, color, creed, sex, national origin, age, political practices, family or social background, sexual orientation, or exceptionality.

2. Professionals represent themselves in an ethical and legal manner in regard to their training and experience when seeking new employment.

3. Professionals give notice consistent with local education agency policies when intending to leave employment.

4. Professionals adhere to the conditions of a contract or terms of an appointment in the setting where they practice.

5. Professionals released from employment are entitled to a written explanation of the reasons for termination and to fair and impartial due process procedures.

6. Special education professionals share equitably the opportunities and benefits (salary, working conditions, facilities, and other resources) of other professionals in the school system.

7. Professionals seek assistance, including the services of other professionals, in instances where personal problems threaten to interfere with their job performance.

8. Professionals respond objectively when requested to evaluate applicants seeking employment.

9. Professionals have the right and responsibility to resolve professional problems by utilizing established procedures, including grievance procedures, when appropriate.

Assignment and Role

1. Professionals should receive clear written communication of all duties and responsibilities, including those which are prescribed as conditions of their employment.

2. Professionals promote educational quality and intra- and inter- professional cooperation through active participation in the planning, policy development, management, and evaluation of the special education program and the education program at large so that programs remain responsive to the changing needs of persons with exceptionalities.

3. Professionals practice only in areas of exceptionality, at age levels, and in program models for which they are prepared by their training and/or experience.

4. Adequate supervision of and support for special education professionals is provided by other professionals qualified by their training and experience in the area of concern.

5. The administration and supervision of special education professionals provides for clear lines of accountability.

6. The unavailability of substitute teachers or support personnel, including aides, does not result in the denial of special education services to a greater degree than to that of other educational programs.

Professional Development

1. Special education professionals systematically advance their knowledge and skills in order to maintain a high level of competence and response to the changing needs of persons with exceptionalities by pursuing a program of continuing education including but not limited to participation in such activities as inservice training, professional conferences/ workshops, professional meetings, continuing education courses, and the reading of professional literature.

2. Professionals participate in the objective and systematic evaluation of themselves, colleagues, services, and programs for the purpose of continuous improvement of professional performance.

3. Professionals in administrative positions support and facilitate professional development.

3. PROFESSIONALS IN RELATION TO THE PROFESSION AND TO OTHER PROFESSIONALS

The Profession

1. Special education professionals assume responsibility for participating in professional organizations and adherence to the standards and codes of ethics of those organizations.

2. Special education professionals have a responsibility to provide varied and exemplary supervised field experiences for persons in undergraduate and graduate preparation programs.

3. Special education professionals refrain from using professional relationships with students and parents for personal advantage.

4. Special education professionals take an active position in the regulation of the profession through use of appropriate procedures for bringing about changes.

5. Special education professionals initiate, support, and/or participate in research related to the education of persons with exceptionalities with the aim of improving the quality of educational services, increasing the accountability of programs, and generally benefiting persons with exceptionalities. They:

* Adopt procedures that protect the rights and welfare of subjects participating in the research.

* Interpret and publish research results with accuracy and a high quality of scholarship.

* Support a cessation of the use of any research procedure that may result in undesirable consequences for the participant.

* Exercise all possible precautions to prevent misapplication or misuse of a research effort, by self or others.

Other Professionals

Special education professionals function as members of interdisciplinary teams, and the reputation of the profession resides with them. They

1. Recognize and acknowledge the competencies and expertise of members representing other disciplines as well as those of members in their own disciplines.

2. Strive to develop positive attitudes among other professionals toward persons with exceptionalities, representing them with an objective regard for their possibilities and their limitations as persons in a democratic society.

3. Cooperate with other agencies involved in serving persons with exceptionalities through such activities as the planning and coordination of information exchanges, service delivery, evaluation, and training, so that duplication or loss in quality of services may not occur.

4. Provide consultation and assistance, where appropriate, to both general and special educators as well as other school personnel serving persons with exceptionalities.

5. Provide consultation and assistance, where appropriate, to professionals in non-school settings serving persons with exceptionalities.

6. Maintain effective interpersonal relations with colleagues and other professionals, helping them to develop and maintain positive and accurate perceptions about the special education profession.

Copyright Council for Exceptional Children Spring 2008

(c) 2008 Exceptional Children. Provided by ProQuest Information and Learning. All rights Reserved.

The Challenge of Measuring Epistemic Beliefs: An Analysis of Three Self-Report Instruments

By DeBacker, Teresa K; Crowson, H Michael; Beesley, Andrea D; Thoma, Stephen J; Hestevold, Nita L

ABSTRACT. Epistemic beliefs are notoriously difficult to measure with self-report instruments. In this study, the authors used large samples to assess the factor structure and internal consistency of 3 self-report measures of domain-general epistemic beliefs to draw conclusions about the trustworthiness of findings reported in the literature. College students completed the Epistemological Questionnaire (EQ; M. Schommer, 1990; N = 935); the Epistemic Beliefs Inventory (EBI; G. Schraw, L. D. Bendixen, & M. E. Dunkle, 2002; N = 795); and the Epistemological Beliefs Survey (EBS; P. Wood & C. Kardash, 2002; N = 795). Exploratory factor analyses, confirmatory factor analyses, and internal consistency estimates indicated psychometric problems with each of the 3 instruments. The authors discuss challenges in conceptualizing and measuring personal epistemology. Keywords: beliefs, epistemological beliefs, measurement, motivation, personal epistemology

FOR SOME TIME NOW, educational researchers have been interested in the role of epistemic beliefs in learning and academic achievement. Epistemic beliefs refer to beliefs about knowledge (including its structure and certainty) and knowing (including sources and justification of knowledge; Buehl & Alexander, 2001; Duell & Schommer-Aikins, 2001; Hofer, 2000; Hofer & Pintrich, 1997). In particular, these can include beliefs about “the definition of knowledge, how knowledge is constructed, how knowledge is evaluated, where knowledge resides, and how knowing occurs” (Hofer, 2001, p. 355).

Researchers have differed in how they conceptualize epistemic beliefs. In much of early theorizing, researchers conceived of epistemic beliefs as broad and general (Baxter Magolda, 1992; Belenky, Clinchy, Goldberger, & Tarule, 1986; Kitchener & King, 1981, 1990; Kuhn, 1991; Perry, 1970). Thus, they were thought to influence treatment of knowledge across contexts or domains in a fairly uniform fashion, although researchers working within these frameworks conducted studies largely in academic settings and in regard to academic knowledge. These theorists all described developmental changes in epistemic beliefs as stage-like, although there was a great deal of variability in how many stages the various theorists described (e.g., as few as four [Baxter Magolda] or five [Belenky et al.] to as many as nine [Perry]) and how they characterized the stages (e.g., as intellectual and ethical development [Perry], as epistemological reflection [Baxter Magolda], as reflective judgment [Kitchener & King, 1981], or as argumentative reasoning [Kuhn]). Theorists working in this tradition used interviews and laboratory tasks to reveal the nature of epistemic beliefs and their development.

Other theorists have conceived of epistemic beliefs as a set of related beliefs about knowledge and knowing that are more narrowly defined. Each of these beliefs has its own developmental trajectory, and developmental change may vary across the range of individual epistemic beliefs (Schommer, 1990; Schraw et al., 2002; Wood & Kardash, 2002). In addition, some researchers suggest that epistemic beliefs may be domain- or discipline-specific rather than general (Buehl, Alexander, & Murphy, 2002; Hofer, 2000; Jehng, Johnson, & Anderson, 1993; Paulsen & Wells, 1998; Schommer & Walker, 1995). Theorists working from this multidimensional conception of epistemic beliefs have developed paper-and-pencil self-report measures that tap a variety of proposed epistemic beliefs.

Because of the convenience and efficiency of the self-report measures of epistemic beliefs, they are widely used and form the basis for much of the current research on the role of epistemic beliefs in learning. Evidence that epistemic beliefs are related to learners’ achievement motivation (Braten & Olaussen, 2005; Braten & Stromso, 2004; Buehl & Alexander, 2005; DeBacker & Crowson, 2006; Muis, 2004; Ravindran, Greene, & DeBacker, 2005), cognitive engagement and strategy use (Braten & Olaussen; DeBacker & Crowson; Kardash & Howell, 2000; Ravindran et al.; Ryan, 1984; Schommer, Crouse, & Rhodes, 1992; Tsai, 1998), text comprehension (Schommer, 1990; Schommer et al., 1992; Schommer-Aikins & Easter, 2006), and achievement (Buehl & Alexander; Muis; Schommer, 1993; Schommer, Calvert, Gariglietti, & Bajaj, 1997; Schommer et al., 1992; Schommer-Aikins & Easter) is accumulating.

Although these findings suggest that the study of epistemic beliefs may prove fruitful in advancing understanding of learning and instruction, progress has been undermined by concerns about the available measurement tools (Clarebout, Elen, Luyten, & Bamps, 2001; Hofer & Pintrich, 1997; Duell & Schommer-Aikins, 2001). One concern is theoretical and relates to the categories of beliefs included in these multidimensional measures of epistemic beliefs. Although some of the proposed beliefs are clearly epistemic (beliefs about the structure and certainty of knowledge), others are not epistemic themselves but are related to epistemic beliefs (beliefs about the speed or ease of learning or the fixed nature of ability). Another concern is empirical and relates to instability of the factor structures underlying the self-report measures and the low internal consistency coefficients typically reported for the subscales in the various instruments.

Recent reviews have provided general overviews of the available measures of epistemic beliefs, including self-report measures (Buehl & Alexander, 2001; Duell & Schommer-Aikins, 2001). These reviews catalog the variety of measures available to researchers in a way that highlights their theoretical and procedural distinctions (Duell & Schommer-Aikins) and provides a critical discussion of issues relevant to the study of personal epistemology (Buehl & Alexander). In both instances, the researchers included psychometric issues in the reviews, but they were not the primary focus. However, careful reading of research on epistemic beliefs reveals a number of troubling indications that the research does not rest on a strong psychometric foundation. In the present study, measurement issues were the primary focus of inquiry. Our purpose was to assess the psychometric properties of three commonly used measures of epistemic beliefs, using larger samples and more rigorous analyses than those that have appeared in the literature, as a general gauge of the trustworthiness of knowledge about personal epistemology and researchers’ ability to make progress in this line of investigation.

The majority of researchers in the field have used the Epistemological Questionnaire (EQ; Schommer, 1990), although they have used other measures as well, including the Epistemic Beliefs Inventory (EBI; Schraw et al., 2002), and the Epistemological Beliefs Survey (EBS; Wood & Kardash, 2002). We describe each of these measures and associated findings in the following sections.

Epistemological Questionnaire

The measure of epistemic beliefs most commonly encountered in the literature is Schommer’s (1990) EQ (see Appendix A). With the introduction of this instrument, Schommer brought both conceptual and methodological changes to the study of epistemological understanding. Breaking with the developmental-structural tradition (King & Kitchener, 1994; Kuhn, 1991; Perry, 1970), Schommer conceptualized personal epistemology as a belief system composed of several “more or less independent dimensions” of beliefs about knowledge and knowing (Schommer, 1990, p. 498). This system included beliefs about the structure of knowledge (simple vs. complex), the certainty of knowledge (certain vs. tentative), and the source of knowledge (omniscient authorities vs. personal construction) as well as beliefs about the nature of ability (fixed vs. malleable) and learning (e.g., learning happens quickly or not at all; Schommer, 1990, 1994). Schommer (1998) further proposed that, over time, individuals move from naive beliefs to more sophisticated beliefs in these areas.

Schommer (1990) created the 63-item EQ by developing 2 or more subsets of items to capture each of the five proposed dimensions of beliefs, for a total of 12 subsets. Scoring of the instrument often involves conducting a second-order factor analysis, whereby the 12 subsets are treated as items by using principal axis factoring. Item subsets are then combined in the manner indicated by the factor analysis to produce belief scores. In Schommer’s 1990 study, in which she introduced the EQ, factor analysis indicated the presence of four orthogonal factors: Simple Knowledge, Certain Knowledge, Innate Ability, and Quick Learning. Researchers would probably accept two of these factors (Simple Knowledge, Certain Knowledge) as constituting epistemic beliefs, whereas the other two (Innate Ability, Quick Learning) are better considered beliefs about learning (Hofer & Pintrich, 1997). The anticipated factor capturing beliefs about the source of knowledge did not emerge.
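
To make this scoring procedure concrete, the short sketch below (Python with pandas; the subset names and item assignments are hypothetical placeholders, not Schommer's actual scoring key) shows the first step: collapsing item responses into the subset scores that the second-order factor analysis then treats as "items."

    import pandas as pd

    # Hypothetical subset-to-item key. The real EQ assigns all 63 items to
    # 12 subsets; only two placeholder subsets are shown here.
    SUBSET_KEY = {
        "seek_single_answers": ["item01", "item07", "item23"],
        "learning_is_quick": ["item04", "item15", "item38"],
        # ...remaining subsets omitted
    }

    def subset_scores(responses: pd.DataFrame) -> pd.DataFrame:
        """Average the Likert responses within each item subset.

        `responses` has one row per participant and one column per item.
        The resulting subset means are what the EQ procedure submits to a
        second-order (principal axis) factor analysis.
        """
        return pd.DataFrame(
            {name: responses[cols].mean(axis=1) for name, cols in SUBSET_KEY.items()}
        )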

Since its introduction, Schommer and her colleagues (e.g., Schommer, 1993; Schommer et al., 1992; Schommer & Dunnell, 1994) and other researchers (e.g., Clarebout et al., 2001; Kardash & Howell, 2000; Paulsen & Wells, 1998) have used the EQ in numerous studies. However, the sample-specific procedure for scoring makes it somewhat difficult to compare findings across studies. Because scoring of the instrument is typically based on a factor analysis of subset scores in each new sample, individual studies may in essence be using different instruments. Across the various samples,1 unique combinations of subsets emerge, often creating novel factors. Although Schommer (1998) and Schommer and Dunnell (1994) factor analyzed the subscale scores to successfully reproduce the four factors that emerged in Schommer’s (1990) original study, in other cases this did not occur. For example, Schommer (1993), studying high school students, factor analyzed subset scores to produce four factors that she gave the original four factor names but that were composed of somewhat different groupings of subsets. Likewise, Schommer et al. (1992) factor analyzed the subset scores and found that a four-factor solution fit the data, but again the resulting four factors did not completely replicate the Schommer (1990) findings. The researchers identified factors measuring Simple Knowledge, Certain Knowledge, and Quick Learning, although the item subsets that composed these factors were not identical to those in previous studies. The fourth factor, Externally Controlled Learning, was unique to this sample.

Kardash and Howell (2000), using a 42-item version of Schommer’s (1990) instrument, factor-analyzed subset scores to extract four factors from the data. Kardash and Howell identified the factors as Nature of Learning, Speed of Learning, Certain Knowledge, and Avoid Integration. Schommer-Aikins, Duell, and Barker (2002) reported four factors named Stability of Knowledge, Structure of Knowledge, Control of Learning, and Speed of Learning. The researchers did not provide details regarding how they arrived at the four factors. Last, Clarebout et al. (2001), using a Dutch translation of the EQ, factor analyzed subset scores in two different samples of college students. Although both samples yielded four-factor solutions, the solutions resembled neither each other nor the factors that Schommer (1990) identified.

In some studies, a three-factor solution better represented the data. Schommer et al. (1997) analyzed subset scores to produce factors named Malleability of Learning Ability, Structure of Knowledge, and Speed of Learning. Schommer-Aikins, Mau, Brookhart, and Hutter (2000), using confirmatory factor analysis (CFA) on a 30-item version of the questionnaire that Schommer (1993) developed for middle school students, found that the four-factor solution fit the data poorly. A three-factor solution yielded stronger fit indexes. The three factors, which included only 11 items, were Speed of Learning, Ability to Learn, and Stability of Knowledge. Note that in each of these samples, two of the three resulting factors addressed learning beliefs rather than epistemic beliefs.

Across these studies, factors concerning speed of learning (similar to Schommer’s [1990] original factor Learning Happens Quickly or Not At All) and structure of knowledge (similar to Schommer’s [1990] original factor Simple vs. Complex Structure) appeared with the greatest regularity. Other factors, including many related to learning (Externally Controlled Learning, Nature of Learning, and Avoid Integration) were unique to one or two studies.

We suspect that, aside from the circumstance that some investigators did not use the full 63-item version of the EQ, at least two circumstances contribute to the inconsistency of factors that emerge across studies. These have to do with the internal consistency of the factors identified through factor analysis and of the item subsets on which the factors are based.

Sometimes internal consistency statistics have not been reported for the beliefs scales that emerge in individual studies (see Kardash & Howell, 2000; Paulsen & Wells, 1998; Schommer, 1990, 1993, 1998; Schommer et al., 1992; Schommer & Dunnell, 1994, 1997; Schommer & Walker, 1995, 1997). When they have been reported, they have tended to be low. For example, Schommer et al. (1997) reported reliabilities ranging from .63 to .85, and Schommer (1993) reported a range of .51 to .78. Schommer-Aikins et al. (2002) reported reliabilities ranging from .58 to .73 for their domain-specific versions of the EQ. In their review of measures of epistemic beliefs, Duell and Schommer-Aikins (2001) stated that reliability coefficients for the EQ ranged from .55 to .70 for middle school students, from .51 to .78 for high school students, and from .63 to .85 for college students. Poor internal consistency of scales is indicative of large proportions of measurement error and is related to difficulty in replicating findings across samples. This may contribute to the inconsistency seen across studies using the EQ.

It is also possible that the subsets of items used to produce the scores for factor analysis suffer from low internal consistency. Evidence of this is in Neber and Schommer-Aikins’ (2002) study, in which they analyzed subset scores directly (rather than using them as the basis of exploratory factor analysis [EFA]) and reported internal consistency coefficients for the six subsets in their study as ranging from .40 to .52. This raises questions about the appropriateness of including item subsets with high levels of unreliability as indicator variables in empirically based scoring procedures.
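
Because so much of this discussion turns on internal consistency coefficients, a generic sketch of the calculation may help. The function below computes Cronbach's alpha from a participants-by-items response matrix using the standard formula; it is an illustration with made-up data, not the analysis code used in any of the studies cited.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)      # variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Illustration only: random Likert-style responses for 100 people on 8 items.
    # Random, uncorrelated items will yield an alpha near zero.
    rng = np.random.default_rng(0)
    fake_responses = rng.integers(1, 6, size=(100, 8)).astype(float)
    print(round(cronbach_alpha(fake_responses), 2))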

Taking a different approach, Qian and Alvermann (1995) factor analyzed items (not subset scores) from a pool of 53 items2 drawn from the high school version of the Schommer instrument (Schommer & Dunnell, 1994). Three factors emerged, which were reminiscent of Schommer’s (1990) original factors and which had greater internal consistencies than have sometimes been reported. The factors were Learning Is Quick (alpha = .79), Knowledge Is Simple and Certain (alpha = .68), and Ability Is Innate (alpha = .62). This finding suggested the possible utility of analyzing items rather than subset scores. However, when Hofer (2000) used Qian and Alvermann’s 32-item adaptation of Schommer’s (1990) scale, the result was much different. Like Qian and Alvermann, Hofer factor analyzed items rather than subscale scores. Unlike Qian and Alvermann, Hofer’s data failed to yield factors that resembled those that Schommer (1990) identified. Hofer reported that “the overall four-factor solution that emerged from an item-based factor analysis had no single factor that replicated those factors reported by Schommer and others when a factor analysis [was] conducted using subscales” (2000, p. 392).

In sum, several potential sources of difficulty are associated with using the EQ in the prescribed manner. Conceptually, the reliance on a sample-specific scoring procedure seems inconsistent with the goal of replicating results across samples. Empirically, research reports indicate inconsistency of factors across samples and persistently low internal consistency of scales. On a related matter, many researchers have used samples that have been fairly small relative to what is desirable for conducting factor analyses (see Russell, 2002), leaving open the question of whether and how sample size has contributed to consistency of findings. Regarding factor analysis of EQ items, rather than subscale scores, the picture is still unclear.

In the wake of findings suggesting potential problems with the EQ, researchers have created other measures of epistemic beliefs. In the present study, we have included two. The developers of the EBI (Schraw et al., 2002) retained the conceptual structure that Schommer proposed, and they created new items to try to better capture that structure. The developers of the EBS (Wood & Kardash, 2002) retained Schommer’s (1990) items and tried to find a cleaner and more stable structure among them.

Epistemic Beliefs Inventory

Bendixen, Schraw, and Dunkle (1998) and Schraw et al. (2002) have noted that one of the main problems researchers wishing to study epistemic beliefs have encountered has been the lack of valid and reliable self-report instruments. In response, Schraw et al. developed the EBI (see Appendix B), which was composed of new items created to better capture the five dimensions of epistemic beliefs that Schommer (1990) described. One of their objectives was to “construct an instrument in which all of the items fit unambiguously into one of five categories that corresponded to [Schommer’s] five hypothesized epistemic dimensions” (Schraw et al., p. 263). In particular, these researchers hoped to preserve the Source of Knowledge factor (called Omniscient Authority in the EBI), which Schommer hypothesized but was not empirically confirmed. The EBI contains five subscales: Simple Knowledge (seven items), Certain Knowledge (eight items), Omniscient Authority (five items), Quick Learning (five items) and Fixed Ability (seven items).

The EBI has been used in a number of studies. Bendixen et al. (1998), analyzing a 32-item version of the EBI, reported five clean factors that measured the anticipated categories of beliefs. Internal consistency coefficients for these factors ranged from .67 to .87. Schraw et al. (2002) factor analyzed a 28-item version of the EBI, reported reliability estimates ranging from .58 to .68, and reported the same five factors.

However, Nussbaum and Bendixen (2002, 2003) were unable to reproduce the five-factor structure of the EBI. In Nussbaum and Bendixen’s 2002 study, factor analysis produced two factors. The Complexity factor contained items intended to measure simple knowledge, quick learning, and innate ability. The Uncertainty factor contained items intended to measure certain knowledge and omniscient authority. Internal consistency estimates were not reported for these factors. In Nussbaum and Bendixen’s 2003 study, factor analysis produced three factors: Simple Knowledge (alpha = .69), Certain Knowledge (alpha = .69), and Innate Ability (alpha = .77). In several studies, researchers scored the EBI as recommended and did not subject it to factor analysis. Nonetheless, these studies provide information on the internal consistency of the proposed subscales. Ravindran et al. (2005) reported reliabilities for the five subscales ranging from .54 to .78. Hardre, Crowson, Ly, and Xie (2007) included the EBI in a study that compared internal consistency estimates of various instruments across three types of administration (paper and pencil, computer based, or Web based). For the five EBI scales across the three conditions, they reported Cronbach’s alpha coefficients ranging from .50 to .76 for Sample 1 and from .42 to .79 for Sample 2.

In sum, empirical support for the five a priori dimensions of epistemic beliefs has been mixed in published factor analyses. Internal consistency coefficients for the proposed subscales are higher than those seen with the EQ, but still lower than is desirable for some subscales. Moreover, we note that the sample sizes in studies using the EBI have been generally modest. Schraw et al. (2002) surveyed 160 participants, Ravindran et al. (2005) surveyed 101 participants, and Hardre et al.’s (2007) samples included 67 and 160 respondents. Nussbaum and Bendixen (2002, 2003) included 101 and 238 participants, respectively. Again, this raises the question of whether and how sample size may have affected findings.

Epistemological Beliefs Survey

In their discussion of measurement issues related to epistemic beliefs, Wood and Kardash (2002) reported that they had repeatedly been unable to satisfactorily reproduce the expected factor structure of Schommer’s (1990) EQ when conducting factor analyses at the item level and noted that the same was true of a related measure of epistemic beliefs that Jehng et al. (1993) developed. Jehng et al. designed their instrument to capture the factors that Schommer described by using some, but not all, of Schommer’s (1990) original items plus new items that they created. To find a factor structure that fit Schommer’s items better than did the five-factor structure that she originally proposed, Wood and Kardash combined items from Schommer’s (1990) and Jehng et al.’s instruments to create an 80-item survey of epistemic beliefs.3

After subjecting these items to a test of internal consistency and several different exploratory factor analyses, Wood and Kardash (2002) retained 38 items that they argued represented five independent dimensions of epistemic beliefs (see Appendix C). The resulting EBS included five subscales: Speed of Knowledge Acquisition (8 items), Structure of Knowledge (11 items), Knowledge Construction and Modification (11 items), Characteristics of Successful Students (5 items), and Attainability of Objective Truth (3 items). Although the new scales Speed of Knowledge Acquisition and Structure of Knowledge seem to correspond closely to Schommer’s (1990) original dimensions, the other three dimensions seem fairly novel.

There is little information on the EBS in the literature, so its psychometric properties remain largely unknown. Wood and Kardash (2002) reported the internal consistency of the five subscales as .74 for Speed of Knowledge Acquisition, .72 for Structure of Knowledge, .66 for Knowledge Construction and Modification, .58 for Characteristics of Successful Students, and .54 for Attainability of Objective Truth. Sinatra and Kardash (2004) used two subscales of the EBS in their study, reporting alphas of .59 for the Speed of Knowledge Acquisition scale and .54 for the Knowledge Construction and Modification scale. Schommer-Aikins and Easter (2006) reported alphas for the five EBS scales ranging from .54 to .74.

Summary

The EQ, EBI, and EBS are similar in several ways. Each uses a Likert scale format in which respondents indicate their degree of agreement with each item on the instruments. More important, each conceptualizes epistemic beliefs as domain general. That is, the context tapped by individual items is assumed to be relatively unimportant, because beliefs about knowledge are thought to apply uniformly across knowledge domains (e.g., social knowledge vs. academic knowledge, or knowledge about math vs. knowledge about history vs. knowledge about psychology). Therefore, items included in the measures may tap a particular context (most often school knowledge and learning) or may refer to beliefs about knowledge in general. Each of the instruments includes items that vary in context and scope from the specific (e.g., “Most words have one clear meaning”; “When I study I look for specific facts”) to the broad (e.g., “The only thing that is certain is uncertainty itself”; “The best ideas are often the most simple”). Although recent thinking calls into question the wisdom of assuming that epistemic beliefs are domain general (Buehl & Alexander, 2001), such a conceptualization was common at the time these measures were created.

Each instrument conceptualizes epistemic development as consisting of changes in a set of beliefs about the nature of knowledge and knowing. Each instrument specifies five dimensions of epistemic beliefs; however, the specific dimensions constituting the various instruments, and the items constituting those dimensions, are similar but not identical.

Purpose

The purpose of the current investigation was to examine the factor structure of the EQ, the EBI, and the EBS by using CFA.4 CFA provides a more stringent test of the hypothesized model implied by these measures of epistemic beliefs than does EFA (the technique most commonly reported in the literature; Byrne, 2005; Kline, 2005). Our use of multiple samples sheds further light on the stability of the factor structures by allowing for replication on independent samples. In Studies 1 and 2, we report our analyses of the EBI and the EBS. We present the EQ, which required additional analyses because of its scoring procedure, in Study 3.

STUDY 1: THE EPISTEMIC BELIEFS INVENTORY

Method

Participants

We drew on two different samples of participants to assess the psychometric properties of the EBI. In each case, participants completed the EBI as part of a larger study. In Sample 1, we aggregated data across several student samples that took the EBI (Bendixen et al., 1998) during the period from fall 2002 to spring 2004 while enrolled in an introductory or developmental psychology course. College students (N = 417) from a midsized Southwestern university with a mean age of approximately 22 years composed this composite sample. Participants were largely female (94%) and White (67%).

Undergraduate students (N = 378) at a midsized Southeastern university enrolled in educational psychology classes in 2004 composed Sample 2. Their mean age was approximately 20 years. Participants were largely female (78%) and White (80%).

Procedure

We recruited students at their universities through educational psychology or human development courses. Volunteers received an informed consent form and packet of surveys, including the EBI, to be completed at home and returned at the next class meeting. Volunteers received course credit for their participation. We conducted CFA by using LISREL 8.52 (Joreskog & Sorbom, 2002) on two separate samples to assess the extent to which the prescribed five- factor model fit the data.

Results

Confirmatory Factor Analysis

For each sample, we loaded items from the EBI onto the five latent factors hypothesized to account for the variance in the items: belief in simple knowledge, belief in certain knowledge, belief in quick learning, belief that ability is fixed, and belief that knowledge is derived from omniscient authorities. The five latent factors in our CFA were allowed to covary.

We assessed the overall fit of the model by using several fit indexes reported in the LISREL output. The Goodness of Fit Index (GFI) reflects the proportion of variance explained in the variance-covariance matrix by the model, whereas the Adjusted Goodness of Fit Index (AGFI) reflects the same information by taking model complexity into account (Kline, 2005). Other things being equal, AGFI values are lower for more complex models as opposed to more parsimonious models. The Comparative Fit Index (CFI) is an incremental fit index that reports degree of improvement in fit of the research model over a null model. Of the common fit indexes, the CFI is most robust to sample size differences (Tanguma, 2001). Last, the root mean square error of approximation (RMSEA) is a badness-of-fit index that indicates degree of discrepancy between the model-implied and sample correlation matrixes and that includes a correction for model complexity (Kline). According to Schumacker and Lomax (2004) and Kline, GFI and AGFI values near or greater than .95 and CFI values greater than .95 are indicative of optimal model fit. RMSEA values less than .06 also indicate good fit (Hu & Bentler, 1999; Tabachnick & Fidell, 2001).
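
As a rough aid to interpreting the RMSEA and CFI values reported below, the helpers in this sketch compute both indices from a model's chi-square statistic, its degrees of freedom, the sample size, and the chi-square and degrees of freedom of the baseline (null) model. The formulas are the standard ones; the numbers in the example are hypothetical and are not taken from the study's LISREL output, and GFI/AGFI are omitted because they require the fitted covariance matrices.

    import math

    def rmsea(chi2: float, df: int, n: int) -> float:
        """Root mean square error of approximation from chi-square, df, and sample size."""
        return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

    def cfi(chi2_model: float, df_model: int, chi2_null: float, df_null: int) -> float:
        """Comparative fit index: improvement of the target model over the null model."""
        d_model = max(chi2_model - df_model, 0.0)
        d_null = max(chi2_null - df_null, d_model, 0.0)
        return 1.0 if d_null == 0 else 1.0 - d_model / d_null

    # Hypothetical values, chosen only to show the arithmetic:
    print(round(rmsea(chi2=820.0, df=454, n=417), 3))             # ~0.044
    print(round(cfi(chi2_model=820.0, df_model=454,
                    chi2_null=2400.0, df_null=496), 3))           # ~0.808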

For Samples 1 and 2, the RMSEA values were .069 and .060, respectively, indicating marginally good model fit. However, in both samples, the CFI, GFI, and AGFI values all fell well below optimal levels at .79, .83, and .80, respectively, for Sample 1 and .83, .85, and .83, respectively, for Sample 2. These values provide evidence that the theoretical model did not fit the data well.

The R² values (see Table 1) for the items in each sample suggested that the five hypothesized factors explained a fair amount of variance in the items. However, across Samples 1 and 2, 11 of the 32 items had standardized factor loadings less than |.35|, suggesting they are not strong indicators of the hypothesized latent factors. This poses a particular problem for the scale of belief in simple knowledge.

Correlations Among Latent Constructs

Table 2 provides the correlations among the latent epistemic belief factors in our CFA model. Most correlations were of moderate magnitude. We found higher correlations among constructs capturing beliefs about knowledge (e.g., Knowledge is simple; Knowledge is certain; and Knowledge is gained from omniscient authorities) and among constructs capturing beliefs about learning (Learning is quick; Ability is fixed). This finding is not unexpected because of the conceptual similarity in the clusters of constructs.

Subscale Reliabilities

To further examine the measurement properties of the EBI, we obtained internal consistency estimates for its five subscales for the two samples. Across the samples, subscale reliabilities were lower than was desirable (see Table 3).

Ancillary Analyses

To determine the extent to which the 11 weaker items identified in the aforementioned CFAs were hurting overall model fit, we excluded them and ran an additional CFA on each sample.5 Fit statistics improved, with CFI, GFI, and AGFI values at .89, .91, and .88, respectively, for Sample 1; and .91, .91, and .89, respectively, for Sample 2. RMSEA values were .060 in Sample 1 and .053 in Sample 2. These findings indicate that dropping the items increased the fit of the model to the data in our samples.

Summary

Our analyses uncovered a variety of problems with the EBI. CFA suggested that the five dimensions proposed to constitute the EBI were not a good fit to the data in either sample. Moreover, the magnitude of correlations among some of the latent variables calls into question the assumption that the dimensions of epistemic beliefs are orthogonal. Tests of internal consistency produced coefficients that were uniformly below .70, suggesting that there is a fair amount of measurement error associated with the EBI subscale scores. We note that the Cronbach’s alpha coefficients found in our large samples are smaller than those in the literature. Removal of 11 items improved the fit of the model to the data.

STUDY 2: THE EPISTEMOLOGICAL BELIEFS SURVEY

Method

Participants

We used two different samples of participants to assess the psychometric properties of the EBS. In each case, participants completed the EBS as part of a larger study. In Sample 1, we aggregated data across several student samples who took the EBS (Wood & Kardash, 2002) in spring 2005. College students (N = 380) from two neighboring midsized Southwestern universities composed this composite sample. The participants’ mean age was approximately 24 years. Participants were largely female (75%) and White (71%). In Sample 2, we again aggregated data across several student samples who took the EBS (Wood & Kardash, 2002) in fall 2005. College students (N = 415) from two neighboring midsized Southwestern universities composed this composite sample. Their mean age was approximately 25 years. Participants were largely female (73%) and White (72%).

Procedure

We recruited students at their universities through educational psychology courses. Volunteers received an informed consent form and packet of surveys, including the EBS, to be completed at home and returned at the next class meeting. Volunteers received course credit for their participation. We again conducted two separate CFAs to assess the extent to which the prescribed five-factor model fit the data from two different samples.

Results

Confirmatory Factor Analysis

For each sample separately, we loaded items from the EBS onto the five latent factors hypothesized to account for the variance in the items: Speed of Knowledge Acquisition, Structure of Knowledge, Knowledge Construction and Modification, Characteristics of Successful Students, and Attainability of Objective Truth. Again, the five latent factors in our CFA were allowed to covary. In our assessment of the EBS, we used the same fit statistics used to assess the fit of our hypothesized model with the EBI.

Results for the EBS indicated somewhat better fit than for the EBI. For Samples 1 and 2, the RMSEA values were .050 and .052, respectively, which are indicative of fairly good model fit. However, the CFI, GFI, and AGFI values fell below optimal levels in both Sample 1 (.90, .85, and .83, respectively) and Sample 2 (.88, .85, and .83, respectively).

Inspection of the R² values (see Table 4) by sample suggested that the five hypothesized factors explained a fair amount of variance in the items. The majority of items on the EBS appear to be fairly good indicators of the factors that Wood and Kardash (2002) described. However, there were nine items with standardized factor loadings less than |.35| in one or both samples, suggesting they were not strong indicators of the hypothesized latent factors.

Correlations Among Latent Constructs

Correlations among the latent epistemic belief factors in our CFA model are shown in Table 5. Constructs in the EBS demonstrated a higher degree of interrelatedness than those on the EBI, with many correlations being moderate to strong in magnitude.

Subscale Reliabilities

Internal consistency estimates for the five subscales of the EBS are shown in Table 6. Across the samples, subscale reliabilities were better than those associated with the EBI but still lower than is desirable.

Ancillary Analyses

To determine how the nine weaker items influenced the CFAs, we excluded them and ran an additional CFA on each sample. Fit statistics improved slightly, with CFI, GFI, and AGFI values at .92, .88, and .86, respectively, for Sample 1; and .89, .87, and .85, respectively, for Sample 2. RMSEA values actually increased slightly to .052 in Sample 1 and .059 in Sample 2.

Summary

Our analyses of the EBS revealed psychometric problems. CFA suggested that the five dimensions proposed to constitute the EBS fit the data only marginally well in either sample, although fit indexes were better for the EBS than those reported for the EBI. Internal consistency coefficients were also somewhat stronger than those seen in the EBI, but many still fell below .70, and all fell below .80.

STUDY 3: THE EPISTEMOLOGICAL QUESTIONNAIRE

Method

Participants

We aggregated data across several undergraduate student samples who took the EQ (Schommer, 1990) between summer 2000 and spring 2002 while enrolled in introductory or developmental psychology classes in conjunction with various larger studies. College students (N = 935) from a midsized Southeastern university composed this composite sample. Participants’ ages ranged from 18 to 45 years, with a mean age of approximately 20 years. Approximately 84% of the sample was 18-21 years of age. Participants were largely female (75%) and White (68%).

Procedure

We recruited students through educational psychology or human development courses. Volunteers received an informed consent form and packet of surveys, including the EQ, to be completed at home and returned at the next class meeting. Volunteers received course credit for their participation.

We examined the psychometric properties of the EQ from three different angles. Having split the composite sample into random halves (creating Sample 1 and Sample 2), we used Sample 1 to conduct an EFA on the subscale scores. This allowed us to assess the factor structure of the EQ when analyzed in the intended manner. We then conducted a second EFA on individual items rather than on item subsets. This allowed us to assess whether a reasonable factor structure could be found in the data when side-stepping the controversial practice of analyzing item subsets. Having identified the more efficacious of the two factor structures emerging from EFA, we used the second sample to conduct a CFA to assess how well the model fit a new data set.

Results

Exploratory Factor Analysis

Using Sample 1, we subjected the 12 item-subset scores composing the EQ to principal axis factoring with Varimax rotation, maintaining only those factors with eigenvalues greater than 1.0 (as Schommer [1990, 1993] recommended). Using this criterion, three factors emerged in the data that accounted for a combined 26.74% of the variance.
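
The retention rule applied here can be checked directly from the correlation matrix of the subset scores: keep as many factors as there are eigenvalues above 1.0. The sketch below illustrates that step with NumPy on made-up data; it reproduces only the eigenvalue count, not the principal axis extraction or Varimax rotation used in the analysis itself.

    import numpy as np

    def kaiser_factor_count(subset_scores: np.ndarray) -> int:
        """Number of factors retained under the eigenvalue-greater-than-1.0 rule.

        `subset_scores` is a participants-by-subsets matrix (12 columns for the
        EQ). Eigenvalues come from the correlation matrix of those scores.
        """
        corr = np.corrcoef(subset_scores, rowvar=False)
        eigenvalues = np.linalg.eigvalsh(corr)
        return int((eigenvalues > 1.0).sum())

    # Made-up data standing in for 12 EQ subset scores from one half-sample.
    rng = np.random.default_rng(1)
    fake_subsets = rng.normal(size=(468, 12))
    print(kaiser_factor_count(fake_subsets))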

We retained subsets with loadings at or above |.35| on a factor (see Table 7). Six item subsets met the criterion for inclusion in Factor 1. Three of these subsets (Learn the first time, Success is unrelated to hard work, and Cannot learn how to learn) have consistently loaded together in previous research (see Schommer, 1993, 1998). These subsets are often interpreted as suggesting a belief in innate ability, although they might also be indicative of a belief in quick learning. Concentrated effort is a waste of time and Learning is quick have demonstrated inconsistencies in their loadings with the aforementioned subsets in the literature but nevertheless also reflect the innate ability or quick learning theme. Do not criticize authority has not typically loaded with the other item subsets and has little, if any, conceptual similarity to them. We note that in past research, subsets pertaining to authority have shown the greatest inconsistency in loadings across studies. We named the first factor Quick and Fixed Learning on the basis of the pattern of loadings.

Four item subsets met the criteria for inclusion on the second factor: Seek single answers, Avoid ambiguity, Avoid integration, and Ability to learn is innate. Of these four subsets, Ability to learn is innate has not typically loaded with the remaining three in the literature and is the only subset not related to a belief in simple knowledge. Because the other subsets had higher loadings than the Ability to learn is innate subset, we named the second factor Belief in Simple Knowledge.

With respect to the third factor, only two item subsets met our criteria for inclusion: Success is unrelated to hard work and Knowledge is certain. We note that the success subset also loaded heavily and positively onto the first factor. Because of its conceptual coherence with the first factor, it made sense to treat it primarily as an indicator of Quick and Fixed Learning. The third factor, then, was only (theoretically) definable in terms of the item subset Knowledge is certain. Because of the overall pattern of loadings on this factor, we hesitated to interpret it.

As planned, we also conducted an EFA on the individual items composing the EQ. We subjected the 63 items of the EQ to principal axis factoring with Varimax rotation, retaining those factors with eigenvalues greater than 1.0. Using this criterion, we extracted 22 factors, with those accounting for the most variance being largely uninterpretable. Comparison of the two EFA results indicated that the factor structure resulting from analysis of the subset scores, despite its weakness, held greater meaning than did the factor structure resulting from an item-level analysis. Therefore, that is the factor structure we attempted to confirm in Sample 2.

Confirmatory Factor Analysis

Based on our exploratory findings, it was clear that only six item subsets loaded as they typically have in previous research and in a meaningful pattern: Avoid ambiguity, Avoid integration, Seek single answers, Can't learn how to learn, Learn the first time, and Success is unrelated to hard work. The pattern of loadings for the remaining subsets (Learning is quick, Knowledge is certain, Do not criticize authority, Depend on authority, Ability to learn is innate, and Concentrated effort is a waste of time) did not represent a clean conceptual scheme and provided additional evidence of their instability as indicators of latent factors associated with the EQ. Rather than include these unstable indicators in our confirmatory model, we chose to confirm the presence of only two factors by using only the six subsets that have demonstrated the most stability in their patterns of loadings in EFA in the literature. We performed this analysis using the second half of our composite sample.

We loaded the item subsets Seek single answers, Avoid ambiguity, and Avoid integration onto one latent construct, Belief in Simple Knowledge. We loaded Learn the first time, Can't learn how to learn, and Success is unrelated to hard work onto the second latent construct. Because the three item subsets that compose the second construct have generally been thought to measure beliefs about ability, we called the factor Ability to Learn Is Fixed. The two latent factors were allowed to covary in the CFA model.

In our assessment of the EQ, we used the same fit statistics used to assess the fit of our hypothesized models for the EBI and EBS. For the model, the RMSEA was .045, whereas values for CFI, GFI, and AGFI were .97, .99, and .97, respectively. All of these indexes indicated that the model fit the data well. In addition, the R² values (see Table 8) associated with all six subsets suggested that they were functioning well as indicators of their respective latent constructs. For these subsets, R² values ranged from .15 to .48.
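
For readers who want to see how such a two-factor model can be specified, the sketch below uses the Python semopy package with hypothetical variable names for the six subset scores; it is not the authors' code, and fit values will of course differ on other data.

import pandas as pd
import semopy

# Hypothetical file holding the six subset scores for Sample 2, one row per participant.
data = pd.read_csv("eq_subset_scores_sample2.csv")

# Two correlated latent factors, each measured by three item subsets (lavaan-style syntax).
model_desc = """
SimpleKnowledge =~ seek_single_answers + avoid_ambiguity + avoid_integration
FixedAbility    =~ learn_first_time + cannot_learn_how_to_learn + success_unrelated_to_work
SimpleKnowledge ~~ FixedAbility
"""

model = semopy.Model(model_desc)
model.fit(data)
print(semopy.calc_stats(model))  # chi-square, RMSEA, CFI, GFI, AGFI, and related indexes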

Subset Reliabilities

One possible reason for the tendency of item subsets to group differently across studies may be unreliability in the subsets themselves. Indeed, if first-order factors are not measured well by their respective indicators (e.g., items), then including those factors in a second-order factor analysis becomes dubious, because the correlations among the first-order factors may be unstable as a result of measurement error.

To assess the possibility that measurement unreliability at the item subset level may be a source of instability in the EQ, we examined the internal consistencies (using Cronbach’s alpha) of each subset in our sample (see Table 6). Across the board, we found that the internal consistencies for the EQ item subsets were poor. Although we recognize that having a low number of items in a subset may attenuate internal consistency estimates somewhat, it is important to note that even among subsets with more items, the estimates were low. For example, the 11-item Seek single answers subset exhibited one of the lowest internal consistencies, falling at .22, and the 8-item Avoid integration subset had a reliability estimate of .25.
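
Because Cronbach's alpha carries this part of the argument, a short sketch of the computation may be useful; the data below are randomly generated (so alpha will be near zero) and serve only to make the formula explicit.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    # alpha = k / (k - 1) * (1 - sum of item variances / variance of the total score)
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Illustrative only: 200 simulated respondents answering an 11-item subset on a 1-5 scale.
rng = np.random.default_rng(0)
simulated = rng.integers(1, 6, size=(200, 11))
print(round(cronbach_alpha(simulated), 2))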

Summary

Our results indicated a variety of problems with the EQ. First, neither the EFA of item subsets nor the EFA of individual items produced a factor solution that resembled Schommer's (1990). The CFA fit statistics were good, but that analysis was based on only six item subsets representing only two dimensions of beliefs. Earlier, we proposed that one potential reason for the instability of some of the EQ subsets might be unreliability in the measurement of the subsets themselves. Given the low subset reliabilities reported in Table 6, it is clear that the item subsets generally lacked internal consistency, supporting our assertion that this may contribute to low correlations among them.

DISCUSSION

From a theoretical perspective, our investigations failed to support the view of epistemic beliefs as a domain-general and multidimensional collection of related beliefs about knowledge and knowing. This is seen most clearly in the consistent failure of factor analyses (exploratory and confirmatory) to support the hypothesized factor structures of the instruments investigated herein and is underscored by the low internal consistency estimates seen for subscales in the target instruments. Lack of support is further demonstrated in the extant research literature, which has failed to yield a consistent picture of the number or nature of dimensions that constitute epistemic beliefs.

Regarding use of the target instruments, caution must be advised. The EQ presented the most serious psychometric problems because of its sample-specific scoring procedure and our EFA results. The CFA model we ultimately tested showed good fit to the data; however, it included only two scales: Belief in simple knowledge and Ability to learn is fixed. Use of just these two scales to capture epistemic beliefs cannot be recommended because of concerns about undersampling the construct; there are certainly more facets of epistemic beliefs than these two scales capture.

The EBI and EBS each fared better than the EQ, although the fit indexes from our CFAs and the estimates of internal consistency were lower than desirable. Looking for sources of relative stability, we found that Belief that ability is fixed had the most desirable psychometric properties of the EBI scales and that Speed of knowledge acquisition had the most desirable psychometric properties of the EBS scales.

Epistemic Nature of Beliefs

There have been a number of recent developments regarding how epistemic beliefs are conceptualized. For instance, there is growing consensus that some of the beliefs originally included in measures of epistemic beliefs are not, themselves, epistemic in nature (Bendixen & Rule, 2004; Hofer, 2000; Hofer & Pintrich, 1997). Hofer (2000) and Pintrich (2002) have suggested that epistemic beliefs include beliefs about knowledge (the simplicity and certainty of knowledge) and beliefs about knowing (source and justification of knowledge) but not beliefs about learning or the nature of ability. Schommer-Aikins (2004) recently made a similar distinction, separating beliefs about knowing (e.g., fixed ability, quick learning) from beliefs about knowledge (e.g., knowledge is simple and certain). We note that three of the four subscales we identified as sources of relative stability are, in fact, beliefs about learning (a la Hofer) or knowing (a la Schommer-Aikins): Belief that ability is fixed from the EBI, Speed of knowledge acquisition from the EBS, and Ability to learn is fixed from the EQ.

Of the remaining EBI subscales, three are clearly epistemic in nature (Belief in simple knowledge, Belief in certain knowledge, and Belief that knowledge is derived from omniscient authorities) but cannot be recommended because internal consistency estimates were low for the three scales, ranging from .47 to .63. The Structure of knowledge and Knowledge construction and modification subscales of the EBS also address epistemic beliefs, and had internal consistency estimates that were strong relative to other epistemic beliefs scales assessed herein but that still only ranged from .65 to .76. These findings underscore the challenge of conceptualizing and operationalizing the more abstract and possibly tacit elements of epistemic beliefs in contrast to more concrete beliefs about learning.

Levels of Specificity

Consensus is growing in the research community that epistemic beliefs are both domain specific and domain general in nature. Buehl and Alexander (2001) proposed a nested model featuring three levels of beliefs: domain-specific epistemic beliefs (e.g., beliefs about mathematical knowledge or psychological knowledge), which are embedded within more general beliefs about academic knowledge, which are embedded within general epistemic beliefs at a broad level. Each of the three instruments examined here included a mix of items representing each of these levels of specificity, which may contribute to a lack of stability.

It is intuitively appealing to believe that measures of epistemic beliefs that are more domain- or context-specific would yield higher internal consistency. However, several researchers have tried to develop domain-specific measures of epistemic beliefs, and these attempts have not fared well either. Hofer (2000) designed an instrument to assess four dimensions of epistemic beliefs (Knowledge is certain, Knowledge is simple, Source of knowledge, Justification of knowledge) relative to the domains of science and psychology. Cronbach’s alphas for the four scales on the two versions of the instrument ranged from .51 to .81, with five of the eight alphas being less than .70.

Buehl et al. (2002) measured two dimensions of epistemic beliefs (Need for effort [see Note 6] and Integration of information and problem solving) situated in the domains of history and mathematics. Cronbach's alphas for these four scales ranged from .61 to .75 in their Study 1 and from .58 to .72 in their Study 2. Likewise, Buehl and Alexander (2005) derived factors capturing three epistemic beliefs (Isolation of knowledge, Certainty of knowledge, Authority as source of knowledge) in two academic domains (history, mathematics) and reported alphas ranging from .64 to .77 for the resulting six subscales, with three of the six alphas being less than or equal to .70. Last, Mansell, DeBacker, and Crowson (2005) introduced a measure of beliefs about school learning (Buehl & Alexander's [2001] middle level of specificity) that included two scales addressing beliefs about academic knowledge. The internal consistency estimates were alpha = .77 for Knowledge is constructed and alpha = .63 for Knowledge is a commodity. It appears, therefore, that the challenges of measuring epistemic beliefs are not due solely to a lack of domain specificity.

Undue Empirical Influence

In reviewing how measures of epistemic beliefs have been developed and used, we found that empirical approaches are more in evidence than theoretical ones. Although she did not provide a fully explicated theoretical grounding, Schommer (1990) did articulate in her pioneering work a multidimensional model of epistemic beliefs that she then sought to operationalize through development of the EQ. The dimensions of epistemic beliefs proposed in the theory were not, however, reliably captured by the items or item subsets constituting the EQ. As a result, routine use of the EQ involved factor analyses performed on specific samples, which led to findings regarding belief factors that were grounded empirically rather than theoretically.

In other cases, instrument development efforts have been essentially empirical from the start. For example, the EBS emerged through a series of analyses performed on a large pool of items adopted from other researchers. To the extent that identified dimensions of epistemic beliefs lack a firm theoretical grounding, they will be more strongly influenced by the particular characteristics of the development sample. Careful theoretical grounding will help ensure that any successful instruments that emerge in the future yield explanatory, and not merely descriptive, information about epistemic beliefs.

Limitations

A limitation of the present study is the preponderance of White female participants in our samples, an artifact of the geographic locations in which we conducted the study. There is little evidence in the epistemic-beliefs literature regarding gender differences, and what evidence exists is mixed. Hofer (2000) reported gender differences, with males being more likely than females to view knowledge as simple and certain and to view authorities as the source of knowledge. However, Buehl et al. (2002) failed to find gender differences in beliefs about the integration of knowledge and the need for effort when learning. In both cases, the researchers investigated gender differences by comparing the belief scores of the two groups. We are not aware of any studies that have investigated gender differences in the factor structure of epistemic beliefs.

Conclusion

Theoretical arguments suggest that epistemic beliefs are related to classroom learning and achievement. Although research to date has generally supported this assertion, studies linking epistemic beliefs to motivation and academic achievement that use the EQ, EBI, or EBS should be interpreted cautiously. Because our findings indicate that these measurement instruments contain large amounts of error variation and offer dubious operationalizations of the constructs they purportedly measure, researchers should seriously reconsider the state of knowledge in the area of epistemic beliefs and their relationships with learning processes and outcomes. As work in this area continues, researchers invested in exploring epistemic beliefs from a multiple-beliefs perspective need to clearly define dimensions of beliefs that are explicitly epistemic in nature and to refrain from making decisions about dimensionality that rest on empirical rather than theoretical foundations. Theoretical grounding will not ensure greater psychometric success in measuring epistemic beliefs with self-report instruments, but its absence will surely obscure understanding of the role of epistemic beliefs in the classroom.

NOTES

1. In some studies, researchers did not perform factor analyses on the study sample. Rather, they calculated belief scores by using factor coefficients from previous samples and z scores from their current sample, as in Paulsen and Wells (1998), Schommer and Walker (1995, 1997), and Schommer-Aikins and Hutter (2002).

2. The researchers removed 10 items on the EQ related to omniscient authority because they failed to emerge as a separate factor in previous studies.

3. The original 80-item survey was composed of 29 items that were unique to Schommer’s (1990) instrument, 22 items that were unique to Jehng et al.’s (1993) instrument, and 29 items that appeared on both instruments.

4. For the EQ only, we also conducted EFA because it is an element in the scoring procedure that researchers typically use.

5. We provide this information for completeness. We note that these modifications necessarily capitalized on the unique characteristics of our samples, producing fit statistics that may not generalize to other samples.

6. Some researchers may question whether this is an epistemic belief or a belief about learning.

REFERENCES

Baxter Magolda, M. B. (1992). Knowing and reasoning in college: Gender-related patterns in students’ intellectual development. San Francisco: Jossey-Bass.

Belenky, M. F., Clinchy, B. M., Goldberger, N. R., & Tarule, J. M. (1986). Women’s ways of knowing: The development of the self, voice, and mind. New York: Basic Books.

Bendixen, L. D., & Rule, D. C. (2004). An integrative approach to personal epistemology: A guiding model. Educational Psychologist, 39(1), 69-80.

Bendixen, L. D., Schraw, G., & Dunkle, M. E. (1998). Epistemic beliefs and moral reasoning. The Journal of Psychology, 132, 187-200.

Braten, I., & Olaussen, B. S. (2005). Profiling individual differences in student motivation: A longitudinal cluster-analytic study in different academic contexts. Contemporary Educational Psychology, 30, 359-396.

Braten, I., & Stromso, H. I. (2004). Epistemological beliefs and implicit theories of intelligence as predictors of achievement goals. Contemporary Educational Psychology, 29, 371-388.

Buehl, M. M., & Alexander, P. A. (2001). Beliefs about academic knowledge. Educational Psychology Review, 13, 385-418.

Buehl, M. M., & Alexander, P. A. (2005). Motivation and performance differences in students’ domain-specific epistemological belief profiles. American Educational Research Journal, 42, 697-726.

Buehl, M. M., Alexander, P. A., & Murphy, P. K. (2002). Beliefs about schooled knowledge: Domain specific or domain general? Contemporary Educational Psychology, 27, 415-449.

Byrne, B. M. (2005). Factor analytic models: Viewing the structure of an assessment instrument from three perspectives. Journal of Personality Assessment, 85, 17-32.

Clarebout, G., Elen, J., Luyten, L., & Bamps, H. (2001). Assessing epistemological beliefs: Schommer’s questionnaire revisited. Educational Research and Evaluation, 7(1), 53-77.

DeBacker, T. K., & Crowson, H. M. (2006). Influences on cognitive engagement and achievement: Personal epistemology and achievement motives. British Journal of Educational Psychology, 76, 535-551.

Duell, O. K., & Schommer-Aikins, M. (2001). Measures of people’s beliefs about knowledge and learning. Educational Psychology Review, 13, 419-449.

Hardre, P. L., Crowson, H. M., Ly, C., & Xie, K. (2007). Testing differential effects of computer-based, Web-based, and paper-based administration of questionnaire research instruments. British Journal of Educational Technology, 38(1), 5-22.

Hofer, B. K. (2000). Dimensionality and disciplinary differences in personal epistemology. Contemporary Educational Psychology, 25, 378-405.

Hofer, B. K. (2001). Personal epistemology research: Implications for learning and teaching. Educational Psychology Review, 13, 353-383.

Hofer, B. K., & Pintrich, P. R. (1997). The development of epistemological theories: Beliefs about knowledge and knowing and their relation to learning. Review of Educational Research, 67(1), 88-140.

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1-55.

Jehng, J. J., Johnson, S. D., & Anderson, R. C. (1993). Schooling and students’ epistemological beliefs about learning. Contemporary Educational Psychology, 18, 23-35.

Joreskog, K., & Sorbom, D. (2002). LISREL (Version 8.52) [Computer software]. Lincolnwood, IL: Scientific Software International.

Kardash, C. M., & Howell, K. L. (2000). Effects of epistemological beliefs and topic-specific beliefs on undergraduates’ cognitive and strategic processing of dual-positional text. Journal of Educational Psychology, 92, 524-535.

King, P. M., & Kitchener, K. S. (1994). Developing reflective judgment: Understanding and promoting intellectual growth and critical thinking in adolescents and adults. San Francisco: Jossey-Bass.

Kitchener, K. S., & King, P. M. (1981). Reflective judgment: Concepts of justification and their relationship to age and education. Journal of Applied Developmental Psychology, 2, 89-116.

Kitchener, K. S., & King, P. M. (1990). The reflective judgment model: Ten years of research. In M. L. Commons, C. Armon, L. Kohlberg, F. A. Richards, & T. A. Grotzer (Eds.), Adult development: Vol. 2. Models and methods in the study of adolescent and adult thought (pp. 63-78). New York: Praeger.

Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.). New York: Guilford Press.

Kuhn, D. (1991). The skills of argument. Cambridge, England: Cambridge University Press.

Mansell, R., DeBacker, T. K., & Crowson, H. M. (2005, November). Further validation of the BASLQ, a measure of epistemology grounded in educational context. Poster presented at the first meeting of the Southwest Consortium for Innovations in Psychology in Education, Las Vegas, NV.

Muis, K. R. (2004). Personal epistemology and mathematics: A critical review and synthesis of research. Review of Educational Research, 74, 317-377.

Neber, H., & Schommer-Aikins, M. (2002). Self-regulated science learning with highly gifted students: The role of cognitive, motivational, epistemological, and environmental variables. High Ability Studies, 13(1), 59-74.

Nussbaum, E. M., & Bendixen, L. D. (2002, April). The effect of personality, ability, and epistemological beliefs on students’ argumentation behavior. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Nussbaum, E. M., & Bendixen, L. D. (2003). Approaching and avoiding arguments: The role of epistemological beliefs, need for cognition, and extraverted personality traits. Contemporary Educational Psychology, 28, 573-595.

Paulsen, M. B., & Wells, C. T. (1998). Domain differences in the epistemological beliefs of college students. Research in Higher Education, 39, 365-384.

Perry, W. G. (1970). Forms of intellectual and ethical development in the college years: A scheme. New York: Holt, Rinehart & Winston.

Pintrich, P. R. (2002). Future challenges and directions for theory and research on personal epistemology. In B. K. Hofer & P. R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 103-118). Mahwah, NJ: Erlbaum.

Qian, G., & Alvermann, D. (1995). Role of epistemological beliefs and learned helplessness in secondary school students’ learning science concepts from text. Journal of Educational Psychology, 87, 282-292.

Ravindran, B., Greene, B. A., & DeBacker, T. K. (2005). The role of achievement goals and epistemological beliefs in the prediction of pre-service teachers’ cognitive engagement and learning. Journal of Educational Research, 98, 222-233.

Russell, D. W. (2002). In search of underlying dimensions: The use (and abuse) of factor analysis. Personality and Social Psychology Bulletin, 28, 1629-1646.

Ryan, M. P. (1984). Monitoring text comprehension: Individual differences in epistemological standards. Journal of Educational Psychology, 76, 248-254.

Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension. Journal of Educational Psychology, 82, 498-504.

Schommer, M. (1993). Epistemological development and academic performance among secondary students. Journal of Educational Psychology, 85, 406-411.

Schommer, M. (1994). An emerging conceptualization of epistemological beliefs and their role in learning. In R. Garner & P. Alexander (Eds.), Beliefs about text and about text instruction (pp. 25-40). Hillsdale, NJ: Erlbaum.

Schommer, M. (1998). The influence of age and education on epistemological beliefs. British Journal of Educational Psychology, 68, 551-562.

Schommer, M., Calvert, C., Gariglietti, G., & Bajaj, A. (1997). The development of epistemological beliefs among secondary students: A longitudinal study. Journal of Educational Psychology, 89, 37-40.

Schommer, M., Crouse, A., & Rhodes, N. (1992). Epistemological beliefs and mathematical text comprehension: Believing it is simple does not make it so. Journal of Educational Psychology, 84, 435-443.

Schommer, M., & Dunnell, P. A. (1994). A comparison of epistemological beliefs between gifted and non-gifted high school students. Roeper Review, 16, 207-210.

Schommer, M., & Dunnell, P. A. (1997). Epistemological beliefs of gifted high school students. Roeper Review, 19, 153-156.

Schommer, M., & Walker, K. (1995). Are epistemological beliefs similar across domains? Journal of Educational Psychology, 87, 424-432.

Schommer, M., & Walker, K. (1997). Epistemological beliefs and valuing school: Considerations for college admissions and retention. Research in Higher Education, 38, 173-186.

Schommer-Aikins, M. (2004). Explaining the epistemological belief system: Introducing the embedded systemic model and coordinated research approach. Educational Psychologist, 39(1), 19-29.

Schommer-Aikins, M., Duell, O. K., & Barker, S. (2002). Epistemological beliefs across domains using Biglan’s classification of academic disciplines. Research in Higher Education, 44, 347-366.

Schommer-Aikins, M., & Easter, M. (2006). Ways of knowing and epistemological beliefs: Combined effect on academic performance. Educational Psychology, 26, 411-423.

Schommer-Aikins, M., & Hutter, R. (2002). Epistemological beliefs and thinking about everyday controversial issues. The Journal of Psychology: Interdisciplinary and Applied, 136, 5-20.

Schommer-Aikins, M., Mau, W., Brookhart, S., & Hutter, R. (2000). Understanding middle students’ beliefs about knowledge and learning using a multidimensional paradigm. Journal of Educational Research, 94, 120-127.

Schraw, G., Bendixen, L. D., & Dunkle, M. E. (2002). Development and validation of the Epistemic Belief Inventory. In B. K. Hofer & P. R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 103-118). Mahwah, NJ: Erlbaum.

Schumacker, R. E., & Lomax, R. G. (2004). A beginner’s guide to structural equation modeling (2nd ed.). Mahwah, NJ: Erlbaum.

Sinatra, G. M., & Kardash, C. K. (2004). Teacher candidates’ epistemological beliefs, dispositions, and views on teaching as persuasion. Contemporary Educational Psychology, 29, 483-498.

Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4th ed.). Boston: Allyn & Bacon.

Tanguma, J. (2001). Effects of sample size on the distribution of select

Culturally and Linguistically Diverse Students in Gifted Education

By Ford, Donna Y.; Grantham, Tarek C.; Whiting, Gilman W.

ABSTRACT: The field of gifted education has faced criticism about the underrepresentation of African American, Hispanic/Latino, and American Indian students who are culturally and linguistically diverse (CLD) in its programs. This article proposes that efforts targeting both recruitment and retention barriers are essential to remedying this disparity. Educators’ deficit thinking about CLD students underlies both areas (recruitment and retention) and contributes to underrepresentation in significant, meaningful ways. The authors examine factors hindering the recruitment and retention of CLD students in gifted education, attending in particular to definitions and theories, testing, and referral issues, and offer recommendations for improving the representation of CLD students in gifted education.

A persistent dilemma at all levels of education is the underrepresentation of African American, American Indian, and Hispanic/Latino students in gifted education and advanced placement (AP) classes. Research on the topic of underrepresentation has tended to focus on African American students, starting with Jenkins’s (1936) study, which found that despite high intelligence test scores, African American students were not formally identified as gifted. For over 70 years, then, educators have been concerned about the paucity of Black students being identified as gifted. During this timeframe, little progress has been made in reversing underrepresentation. This lack of progress may be due in part to the scant database on gifted students who are culturally and linguistically diverse (CLD). In 1998, Ford reviewed trends in reports on underrepresentation spanning 2 decades and found that African American, Hispanic/Latino American, and American Indian students have always been underrepresented in gifted education, with underrepresentation increasing over the years for African American students. (Unlike African American, Hispanic/Latino, and American Indian students, Asian American students are well represented in gifted education and AP classes. For example, as of 2002, Asian American students represented 4.42% of students in U.S. schools but 7.64% of those in gifted education; see Table 1). Regardless of the formula used to calculate underrepresentation (see Skiba et al., 2008), the aforementioned three groups of CLD students are always underrepresented, and the percentage of underrepresentation is always greater than 40%. Also, as noted by Ford (1998), less than 2% of publications at that time focused on CLD gifted groups, resulting in a limited pool of theories and studies from which to draw.
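
To make the representation arithmetic concrete, the short sketch below computes a representation index and a percentage of underrepresentation under one commonly used discrepancy formula (a group's share of gifted placements relative to its share of enrollment); the Asian American figures come from the passage above, and the underrepresentation example uses hypothetical enrollment numbers rather than any reported statistic.

def representation_index(pct_of_gifted: float, pct_of_enrollment: float) -> float:
    # Ratio of a group's share of gifted placements to its share of school enrollment.
    # Values below 1.0 indicate underrepresentation; values above 1.0, overrepresentation.
    return pct_of_gifted / pct_of_enrollment

def underrepresentation_pct(pct_of_gifted: float, pct_of_enrollment: float) -> float:
    # Percentage shortfall relative to proportional representation.
    return (1.0 - pct_of_gifted / pct_of_enrollment) * 100.0

# Asian American figures cited in the text: 4.42% of U.S. enrollment, 7.64% of gifted placements.
print(round(representation_index(7.64, 4.42), 2))   # about 1.73, i.e., overrepresented

# Hypothetical group: 17% of enrollment but only 8% of gifted placements.
print(round(underrepresentation_pct(8.0, 17.0), 1))  # about 52.9% underrepresented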

The most recent data from the U.S. Department of Education’s Office for Civil Rights (OCR; see Table 1) indicate that as of 2002, African American, Hispanic/Latino, and American Indian students remain poorly represented in gifted education, especially CLD males. Further, CLD students seldom enroll in AP classes (The College Board, 2002), the main venue for gifted education at the high school level. In both programs, underrepresentation is at least 50%, well beyond statistical chance and above OCR’s 20% discrepancy formula stipulation (Ford & Frazier-Trotman, 2000). Several OCR Annual Reports to Congress (2000, 2004, 2005) and publications by Karnes, Troxclair, and Marquardt (1997) and Marquardt and Karnes (1994) indicated that discrimination against CLD students continues in school settings and in gifted education. Karnes et al. examined 38 complaints or letters of findings in gifted education, which fell into four categories: (a) admission to gifted programs; (b) identification of gifted students; (c) placement in gifted programs; and (d) procedures involving notification, communication, and testing of gifted students. Of these 38 complaints or letters, almost half (n = 17) pertained to discrimination against CLD students. Likewise, Marquardt and Karnes reported that most of the 48 letters of findings they reviewed related to discrimination against CLD students, mainly involving lack of access to gifted programs. They concluded that “unless a school district is constantly vigilant in monitoring its procedures for minority students identification and admission to gifted programs, minorities report underrepresentation” (p. 164).

Compared to special education, gifted education is a small field; fewer publications are devoted to this area of study. And unlike special education, gifted education is not federally mandated, leaving much room for differences in definitions, identification, and programming across districts and states. Only 6 states fully mandate gifted education, and 10 states have neither funding nor a mandate (Davidson Institute, 2006). Proponents of gifted education argue that gifted students have exceptional or special needs, as do children in special education classes; without appropriate services, gifts and talents may be lost or not fully developed. Accordingly, the Javits Act of 1994 recognized this potential loss of talent, specifically among economically disadvantaged and CLD students. The major goal of the Javits Act is to support efforts to identify and serve CLD students and low socioeconomic status (SES) students.

This article first focuses on recruitment and retention issues (acknowledging that most of the scholarship has concentrated on recruitment) and then offers specific recommendations to guide educators in eliminating barriers and opening doors to gifted education for CLD students. We examine the education literature regarding the various conditions that hinder the representation of CLD students in gifted programs nationally, relying heavily on publications and studies that address the impact of perceptions on behavior, such as teacher expectancy theory and student achievement and outcomes (Merton, 1948; Rosenthal & Jacobson, 1968). We suggest that deficit thinking, the use of traditional tests (especially IQ tests), and lack of teacher referral of CLD students for gifted education screening and placement are the primary contributing factors to underrepresentation. In the process of reviewing the literature, we attend to the larger question of the impact of testing instruments and of policies and procedures (particularly teacher referrals) on underrepresentation. Further, we consider what school personnel (teachers, school counselors, and administrators) can do to both recruit and retain CLD students in gifted education.

TABLE 1

Racial and Gender Composition of Gifted Students in 2002

UNDERREPRESENTATION: RECRUITMENT AND RETENTION ISSUES

A lack of incentive and opportunity limits the possibility of high achievement, however superior one’s gifts may be. Follow-up studies of highly gifted young African Americans, for instance, reveal a shocking waste of talent-a waste that adds an incalculable amount to the price of prejudice in this country (Educational Policies Commission, 1950).

To date, a disproportionate amount of the literature focuses on the recruitment aspect of underrepresentation, and particularly on intelligence tests and lack of teacher referral (Ford, 1994, 2004). The preponderance of research and scholarship indicates that poor IQ test performance by CLD students and low teacher expectations for these youngsters are the most salient reasons African American, Hispanic/Latino, and American Indian students are underrepresented in gifted education (Baldwin, 2005; Castellano & Diaz, 2001; Elhoweris, Kagendo, Negmeldin, & Holloway, 2005; Ford, 2004; Ford & Grantham, 2003; Frasier, Garcia, & Passow, 1995; Whiting & Ford, 2006).

Over a decade ago, Ford (1994) proposed that to improve the representation of African American and other CLD students in gifted education, educational professionals (i.e., teachers, school counselors, administrators, policy makers, etc.) needed to focus on retention as well as recruitment. She advocated following initiatives in higher education that went beyond the concept of “recruitment” (finding and placing students in gifted education) to focus on getting and then keeping CLD students in gifted education. Specifically, educators should (a) find effective measures, strategies, policies and procedures to better recruit CLD students; (b) find more effective and inclusive ways of retaining these students in gifted programs once recruited; and (c) collect data on factors affecting both the recruitment and retention of CLD students in gifted education in order to more completely understand and redress the issue. Karnes et al. (1997) and Marquardt and Karnes (1994) offered similar recommendations after reviewing OCR letters of findings.

In 2004, Ford reported that the notion of retention continued to be neglected when considering underrepresentation. This lack of attention to keeping CLD students in gifted programs and AP classes contributes to underrepresentation (Ford, 1996). Retention issues often fall into three categories: (a) social-emotional needs expressed by students, including relationships among CLD students and with their classmates and teachers (Harmon, 2002; Louie, 2005); (b) concerns expressed by CLD families regarding their children’s happiness and sense of belonging (Boutte, 1992; Huff, Houskamp, Watkins, Stanton, & Tavegia, 2005); and (c) CLD students performing at acceptable achievement levels (Ford, 1996). For example, a Latino/a student may withdraw from an AP class for any number of reasons, including feelings of isolation from educators and/or classmates, the majority of whom are likely to be White. Similarly, African American parents may feel forced to withdraw their child from such classes because their child complains of being treated unfairly and not fitting in with other students. Another possible case would be one in which a teacher requests removal of an American Indian student from gifted education or AP classes, attributing the student’s low grades to misidentification and error in placement. Resolving the underrepresentation problem is not easy; there are no quick fixes. To begin this process, however, educators-teachers, school counselors, and administrators-must consider the following question: “How can we improve access to gifted education for CLD students, and once we successfully recruit them, how can we successfully retain them?”

Intentionally or unintentionally, gifted education and AP classes remain culturally, linguistically, and economically segregated (U.S. Department of Education, 1993, 2002; see also Table 1), still largely populated by White students in general and White middle- class students in particular. Recommendations regarding how to “desegregate” gifted education vary (Ford & Webb, 1994), but they share the goal of finding alternative ways-more valid and reliable instruments, processes and procedures-to equitably recruit and retain CLD gifted students. These options include culturally sensitive instruments (e.g., nonverbal tests), multidimensional assessment strategies, and broader philosophies, definitions, and theories of giftedness (Baldwin, 2005; Ford, 2005; Frasier et al., 1995; Milner & Ford, 2007; Naglieri & Ford, 2003, 2005; Sternberg, 2007).

Although most of the available literature focuses on recruitment, pointing to testing and assessment issues as primarily contributing to underrepresentation, we believe that underrepresentation is a symptom of a larger social problem, as discussed by Harry (2008). More directly, the main obstacle to the recruitment and retention of CLD students in gifted education appears to be a deficit orientation that persists in society and seeps into its educational institutions and programs (Ford & Grantham, 2003; Ford, Moore, & Milner, 2005; Moore et al., 2006).

DEFICIT THINKING: DENYING ACCESS AND OPPORTUNITY

The United States has a long history of fraudulent research, works, theories, paradigms, and conjecture that promote deficit thinking about CLD groups, especially African Americans. Early in our history, African Americans and Latinos/as were deemed “genetically inferior”; later, they were viewed as “culturally deprived” or “culturally disadvantaged” (Gould, 1995; Valencia, 1997). The more recent and neutral nomenclature is that CLD groups are “culturally different.” Unfortunately, the arguments have gone full circle, with some recent literature reverting to genetic inferiority and cultural deprivation (e.g., Herrnstein & Murray, 1994) as the primary or sole explanation for the achievement gap and lower test scores of CLD students. (For a detailed examination of this issue, see Gould, 1995; Valencia, 1997.)

Deficit thinking is negative, stereotypical, and prejudicial beliefs about CLD groups that result in discriminatory policies and behaviors or actions. Deficit thinking and resignation are reflected in the statements of two participants interviewed by Garcia and Guerra (2004) who believed that the success of some children is set early and is irrevocable: “Some children are already so harmed by their lives that they cannot perform at the same level as other children,” and “[i]f those neurons don’t start firing at 8 or 9 months, it’s never going to happen. So, we’ve got some connections that weren’t made and they can’t be made up” (p. 160).

According to Valencia (1997), “the deficit thinking paradigm posits that students who fail in school do so because of alleged internal deficiencies, such as cognitive and/or motivational limitations, or shortcomings socially linked to the youngster-such as familial deficits and dysfunctions” (p. xi). Such thinking keeps individuals from seeing strengths in people who are different from them; instead, attention centers on what is “wrong” with the “different” individual or group, accompanied by low expectations for them, little or no sense of obligation to assist them, and a feeling of superiority over them. Deficit thinking consequently hinders meaningful educational change and reform because educators are unwilling to assume or share any responsibility for CLD students’ poor school performance and outcomes (Berman & Chambliss, 2000; Garcia & Guerra, 2004).

Like other types of thinking, deficit thinking affects behavior: People act on their thoughts and beliefs. Consequent behaviors include (but are not limited to) a heavy reliance on tests with little consideration of their biases, low referral rates of CLD students for gifted education services, and the adoption of policies and procedures that have a disparate impact on CLD students.

As Harry (2008) notes, deficit orientations go beyond thoughts, attitudes, and values; deficit-based orientations are evident in behaviors and actions. Specifically, ideas about group differences in capacity and potential influence the development of definitions, policies, and practices and how they are implemented. Gould (1981, 1995) and Menchaca (1997) noted that deficit thinking contributed to past (and current) beliefs about race, culture, achievement, and intelligence. Gould’s work helped to establish the reality that researchers or scientists are not objective, bias-free persons, and that preconceptions and fears about CLD groups (particularly African Americans) have led to polemical and prejudicial research methods, deliberate miscalculations, convenient omissions, and data misinterpretation among scientists studying intelligence. These prejudgments and related practices paved the way for the prevalent belief that human races could be ranked on a linear scale of mental worth (Gould, 1981, 1995).

Menchaca (1997) traced the evolution of deficit thinking and demonstrated how it influenced segregation in schools (e.g., Plessy v. Ferguson, 1896) and resistance to desegregation during the Civil Rights era and today. Some scholars have concluded that educators continue to resist desegregation, and use tracking and ability grouping to racially segregate students (e.g., Ford & Webb, 1994; Losen & Orfield, 2002; Oakes, 1985; Orfield & Lee, 2006). Accordingly, it seems reasonable to argue that much of the underrepresentation problem in gifted education stems from deficit thinking orientations. The impact of deficit thinking on gifted education underrepresentation should be clear when one considers how the terms giftedness and intelligence are used interchangeably, how both are subjective or social constructs (e.g., Sternberg, 2007), and how highly the educational elite and middle class prize gifted programs (e.g., Sapon-Shevin, 1994).

In this article we address four major symptoms or resultant behaviors of deficit thinking: (a) the reliance on traditional IQ- based definitions, philosophies, and theories of giftedness; (b) the dependence on identification practices and policies that have a disproportionately negative impact on diverse students (e.g., a reliance on teacher referral for initial screening); (c) the lack of commitment to helping educators become better prepared in gifted education; and (d) the lack of commitment among administrators to preparing educators to work competently with CLD students, which results in the inadequate training of teachers and other school personnel in multicultural education.

DEFINITIONS, TESTING, AND ASSESSMENT

IQ-BASED DEFINITIONS AND THEORIES

Debates are pervasive in education regarding how best to define the terms intelligent, gifted, and talented. A 1998 national survey of state definitions of gifted and talented students (Stephens & Karnes, 2000) revealed great differences and inconsistencies among the 50 states in their definitions. Most used the 1978 federal definition, which includes intellectual, creative, academic, leadership, and artistic categories. Other states have adopted definitions derived from the Javits Act (1994), a definition created by Renzulli (1978), or the most recent federal definition (U.S. Department of Education, 1993). Some states do not have a definition (see Davidson Institute, 2006). Further, most states continue-despite recognizing more than one type of giftedness-to assess giftedness unidimensionally, that is, as a function of high IQ or achievement test scores. Such test-driven definitions may be effective at identifying middle-class White students (Sternberg, 2007), but they too infrequently capture giftedness among students who (a) perform poorly on paper-and-pencil tasks conducted in artificial or lab-like settings (Helms, 1992; Miller-Jones, 1989); (b) do not perform well on culturally loaded tests (e.g., Fagan & Holland, 2002; Flanagan & Ortiz, 2001; Kaufman, 1994; Sternberg, 2007); (c) have learning and/or cognitive styles that differ from those of White students (e.g., Hale, 2001; Helms, 1992; Hilliard, 1992; Shade, Kelly, & Oberg, 1997); (d) have test anxiety or suffer from stereotype threat (Aronson, Fried, & Good, 2002; Aronson & Steele, 2005; Steele, 1997; Steele & Aronson, 1995); or (e) have low academic motivation or engagement while being assessed (e.g., Wechsler, 1991).

TESTING AND ASSESSMENT ISSUES

The use of tests to identify and assess students is a pervasive educational practice that has increased with recent federal legislation such as the No Child Left Behind Act of 2001. Test scores play the dominant role in identification and placement decisions. The majority of school districts use intelligence or achievement test scores for recruitment to gifted education (Davidson Institute, 2006; Davis & Rimm, 2003). This almost exclusive dependence on test scores for recruitment disparately impacts the demographics of gifted programs, keeping them disproportionately White and middle class. Although traditional intelligence tests more or less effectively identify and assess middle-class White students, they have been less effective for African American, Hispanic/Latino, and American Indian students (e.g., Helms, 1992; Miller-Jones, 1989; Naglieri & Ford, 2005; Skiba, Knesting, & Bush, 2002), including those at higher SES levels. This issue raises a fundamental question based on the Griggs Principle and the notion of disparate impact (see Griggs v. Duke Power Co., 1971). In Griggs v. Duke Power Co. (1971), African American employees at Duke Power’s generating plant brought action pursuant to Title VII of the Civil Rights Act of 1964, challenging the company’s requirement of a high school diploma or passing of intelligence tests as a condition for employment or transfer to jobs at the plant. African American applicants, less likely to hold a high school diploma and averaging lower scores on the aptitude tests, were selected at a much lower rate for these positions when compared to White candidates. This case called into question the validity and utility of using tests for employment decisions. Duke Power had not attempted to demonstrate that the requirements were related to job performance. The lower court ruled that because no evidence of intent to discriminate existed, Duke Power did not discriminate. On appeal, however, a unanimous Supreme Court sided with Griggs, concluding that if a test adversely impacts a protected class, then the company must demonstrate the job-relatedness of the test used. The Court’s ruling led to this question: “If certain groups do not perform well on a test, why do we continue to use the test so exclusively and extensively?”

There are at least three explanations for the poor test performance of CLD students: (a) the burden rests within the test (e.g., test bias); (b) the burden rests with the educational environment (e.g., poor instruction and lack of access to high-quality education contribute to poor test scores); or (c) the burden rests with (or within) the student (e.g., he or she is cognitively inferior or “culturally deprived”).

The first two explanations recognize the influence of the environment (including schools) on test performance and suggest that we need to make changes in assessment and educational practices that pose barriers to the participation of CLD students in gifted education, eliminating tests, policies, and procedures that have a disparate impact on CLD students (Karnes et al., 1997; Marquardt & Karnes, 1994; OCR, 2000, 2004, 2005). The third explanation, however, is rooted in deficit thinking. Those who support this view relinquish any accountability for CLD students’ underrepresentation and lower test scores because of the belief that genetics or heredity largely determines intelligence, that intelligence is static, and that some groups are simply more intelligent than others (see Herrnstein & Murray, 1994; Jensen, 1981; Rushton, 2003).

Decision makers must appreciate the impact of culture on test scores in order to use the scores in educationally meaningful and equitable ways (Ford, 2004; Ford & Frazier-Trotman, 2000; Helms, 1992; Miller-Jones, 1989; Sternberg, 2007). Educators need to understand how culturally loaded tests can lower CLD students’ test scores (Fagan & Holland, 2002; Flanagan & Ortiz, 2001; Skiba et al., 2002). We must be conscientious in seeking to interpret and use test scores sensibly, to explore various explanations for the differential test scores, and to consider alternative instruments and assessment practices (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999).

INEFFECTIVE POLICIES AND PRACTICES

Procedural and policy issues also contribute to underrepresentation; of these, teacher referral is particularly worthy of attention. The teacher referral process contributes significantly to the underrepresentation of culturally and linguistically diverse students in gifted education. Specifically, educators systematically under-refer CLD students for gifted education services (e.g., Saccuzzo, Johnson, & Guertin, 1994). Teacher referral (and its rating checklists and forms), intentionally or unintentionally, serves as a gatekeeper, closing doors to gifted education classrooms for CLD students. Addressing teacher referral as a gatekeeper is no small matter, as most states rely on teacher referrals or completed checklists and forms for selecting students for gifted education placement (Davidson Institute, 2006; National Association for Gifted Children and State Directors of Gifted Education, 2005). Likewise, according to the College Board (2002), access to AP classes depends primarily on faculty recommendations, which account for almost 60% of eventual placements.

The topic of teachers as referral sources for gifted education assessment and placement falls under the larger umbrella of teacher expectations or perceptions and subsequent student achievement and outcomes (Merton, 1948; Rosenthal & Jacobson, 1968). This body of work addresses the extent to which a teacher’s a priori judgment of a student’s achievement corresponds to the student’s achievement (e.g., grades) or performance on some formal and objective measure, such as a standardized or achievement-related instrument (Rist, 1996; Zucker & Prieto, 1977).

Since at least the 1920s, researchers have examined the efficacy of teacher judgment when making referrals for gifted education screening, identification, and placement (e.g., Cox & Daniel, 1983; Gagne, 1994; Gear, 1976; Hoge & Coladarci, 1989; Pegnato & Birch, 1959; Terman, 1925). Not surprisingly, results have been mixed; some studies find teachers to be accurate in their referrals, whereas others find them to be inaccurate. For example, Terman found that teachers overlooked up to 25% of students eventually identified as highly gifted on an intelligence test; Gagne, however, argued that teachers are effective and that some of the previous studies were methodologically and conceptually flawed. At least four factors appear to contribute to the differential findings: (a) the different instruments used to validate teachers’ judgments; (b) the different referral forms and checklists used by teachers; (c) the different populations of gifted students being judged (e.g., gifted vs. highly gifted; male vs. female; younger vs. older students; high vs. low SES); and (d) the different methodologies used (e.g., vignettes vs. actual student cases).

TEACHER REFERRAL AND CLD STUDENTS

Few studies or literature reviews have focused on teacher referral and identification of gifted students who are culturally and linguistically diverse. As previously noted, a body of scholarship has shown that some teachers hold negative stereotypes and inaccurate perceptions about the abilities of CLD students-and their families (e.g., Boutte, 1992; Harmon, 2002; Huff et al., 2005; Louie, 2005; Rist, 1996; Shumow, 1997). Specifically, it is possible that teachers (the vast majority of whom are White) are more effective at identifying giftedness among White students but less effective with CLD students. On this note, Beady and Hansell (1981) found that African American teachers held higher expectations of African American students than did White teachers (also see Ladson-Billings, 1994, and Irvine, 2002, on this issue).

In 1974, Fitz-Gibbons studied different components of identification for intellectually gifted low-income minority students in California, including tests and teacher referral. Relative to teacher referral, she concluded:

One might hazard the generalization that when teacher judgments are relied upon for placement or identification it is likely to be the child who does not relate to the teacher who gets overlooked, despite the fact that his achievements and ability are equal to or higher than those of the students recognized as bright. (pp. 61-62)

When CLD students were immature, taciturn, less comfortable with adults, or viewed as affable in some way, they were more likely to be overlooked by teachers.

Ford (1996) found that most of the African American students in one of her studies had test scores high enough to meet district criteria for identification and placement, but they were underrepresented in gifted education because teachers did not refer them for screening. For example, Dawn, an African American eighth grader, not only had high achievement scores (95th to 99th percentile) every year she was tested but also had a perfect 4.0 cumulative GPA and an IQ score of 143. Although Dawn had exceeded the identification and placement criteria (93rd percentile or higher on any subscale) since the third grade, she was not identified as intellectually or academically gifted, and she had not been referred for screening.

In a study of Hispanic and White students, Plata and Masten (1998) reported that White students were significantly more likely to be referred than Hispanic students and that White students were rated higher on a rating scale across four areas of giftedness: intelligence, leadership, achievement, and creativity (also see Pfeiffer, Petscher, & Jarosewich, 2007). Forsbach and Pierce (1999), in their sample of students in 199 middle schools in New York, found teacher referral ineffective as an identification tool for African American, Hispanic/Latino American, and Asian American students. After formal training, however, teachers were more effective at identifying gifted African American students only. Two recent studies have continued this line of research on teacher referral and culturally diverse students. Elhoweris, Mutua, Alsheikh, and Holloway (2005) examined the effects of students’ ethnicity on teachers’ decision making using three vignettes of gifted students. Only the ethnicity of the student in the vignette changed. This manipulation affected teacher referrals; specifically, “elementary school teachers treated identical information contained in the vignettes differently and made different recommendations despite the fact that the student information was identical in all ways except for ethnicity” (p. 29). Finally, in a study of referral sources involving all elementary students in the state of Georgia, McBee (2006) reported that teacher referrals were more effective (accurate) for White and Asian students than for African American and Hispanic/Latino students. McBee concluded: “The results suggest inequalities in nomination, rather than assessment, may be the primary source of the underrepresentation of minority . . . students in gifted programs” (p. 103). Further, he noted that the findings could be interpreted in several ways, one being that “the low rate of teacher nomination could indicate racism, classism, or cultural ignorance on the part of teachers” (p. 109).

Shaunessy, McHatton, Hughes, Brice, and Ratliff (2007) focused on the experiences of bilingual Latino/a students in gifted and general education. Several students in their study believed that being gifted was special, and being culturally diverse and bilingual added to that specialness. One of the students in their study stated:

You’re already special enough [because you are bilingual], but you are extra special because you are also gifted. . . . Latinos/as are not supposed to do well in school, and that’s the expectation. So if you are gifted and Latino/a, then you’ve exceeded expectations. You feel a sense of pride, because you are doing better than even Americans are doing and you aren’t even from here. (p. 177)

These Hispanic/Latino students appeared to believe, as proposed by Milner and Ford (2007) and Sternberg (2007), that cultural diversity cannot be ignored in our ideas, theories, and measures of giftedness, or in eventual placement. Despite the pride expressed by many of the students in the study by Shaunessy et al. (2007) about being gifted and culturally and linguistically diverse, all of these CLD youngsters had faced some form of discrimination; some students mentioned discriminatory school policies, and some did not feel accepted by White teachers and White students, both of whom made disparaging comments to them about their ethnicity (p. 179). When CLD students feel isolated or socially rejected, they and their parents may wish to withdraw from gifted education classes (Ford & Milner, 2006).

INADEQUATE TEACHER PREPARATION IN GIFTED EDUCATION AND MULTICULTURAL EDUCATION

VanTassel-Baska and Stambaugh (2006) recently reported that only 3% of colleges and universities offer courses in gifted education. With so few opportunities for formal preparation in gifted education, how can we expect teachers to effectively identify, refer, and teach gifted students? This problem is compounded by the lack of teacher training in multicultural education or cultural diversity. Too few educators, even at the time of this writing, receive formal and meaningful exposure to multicultural educational experiences, multicultural curriculum and instruction, and internships and practica in urban settings (see Banks, 1999, 2006; Banks & Banks, 2006). Frequently, such preparation is limited to one course on diversity (Banks & Banks, 2006). This is a “double whammy” when students are gifted and culturally and linguistically diverse.

Essentially, future professionals, including education majors at both the undergraduate and graduate levels, frequently matriculate with a monocultural or ethnocentric curriculum that does not prepare them to understand, appreciate, and work with students who are culturally and linguistically diverse (Banks, 2006). They consequently misunderstand cultural differences among CLD students relative to learning, communication, and behavioral styles. This cultural mismatch or clash between educators and students contributes to low teacher expectations of students, poor student-teacher relationships, mislabeling, and misinterpretation of behaviors (along with other outcomes), as previously noted.

In the Spring 2007 issue of Roeper Review, five of the nine articles focused on CLD gifted students (Chan, 2007; Milner & Ford, 2007; Pfeiffer et al., 2007; Shaunessy et al., 2007; Sternberg, 2007). Sternberg (2007) called for educators to be more proactive in understanding and making identification and placement decisions, placing culture at the forefront of our thinking and decisions. His article presents a forceful depiction of how culture affects what is valued as giftedness and intelligence, how gifts and talents manifest themselves differently across cultures (also see Chan regarding leadership and emotional intelligence among Chinese students), and how our assessment instruments and the referral process should be culturally sensitive so that they do not hinder the recruitment and retention of CLD students in gifted education (Flanagan & Ortiz, 2001; Skiba et al., 2002; Whiting & Ford, 2006). Similarly, Milner and Ford shared cultural scenarios and models, and urged educators to assertively and proactively seek extensive training in cultural and linguistic diversity in order to become more culturally competent.

RECOMMENDATIONS FOR CHANGE

To recruit and retain more CLD students in gifted education and AP classes, school personnel and leaders must address low expectations and deficit thinking orientations, and the impact of such thinking on decisions, behaviors, and practice. This proactive attitudinal or philosophical shift increases the probability that educators will address all barriers to gifted education for CLD students. Figure 1 presents one model for reconceptualizing how educators can acquire the necessary dispositions, knowledge, skills, and competencies to work with students who are gifted and culturally and linguistically diverse. The Venn diagram suggests that teachers combine the best of research, policy, and theory in gifted education with the best of research, policy, and theory in multicultural education in order to meet the needs of gifted CLD students. Thus, we must study issues surrounding teacher referral of gifted students in general, as well as referral issues specific to culturally and linguistically diverse students. In other words, a cultural lens or frame of reference must always be used to examine the status of gifted education for students who are gifted as well as culturally and linguistically diverse. Figure 2 presents an overview of recruitment and retention barriers, along with suggested recommendations for addressing them.

FIGURE 1

Meeting the Needs of CLD Gifted Students

ADOPT CULTURALLY RESPONSIVE THEORIES AND DEFINITIONS OF GIFTEDNESS

Although the federal government does not mandate gifted education services, it does propose definitions. In 1993, the U.S. Department of Education offered its most culturally responsive definition of gifted to date:

Children and youth with outstanding talent perform or show the potential for performing at remarkably high levels of accomplishment when compared with others of their age, experience, or environment. These children and youth exhibit high performance capability in intellectual, creative, and/or artistic areas, possess an unusual leadership capacity, or excel in specific academic fields. They require services or activities not ordinarily provided by the schools. Outstanding talents are present in children and youth from all cultural groups, across all economic strata, and in all areas of human endeavor. (p. 19, emphases added)

This definition should appeal to those who are responsible for recruiting and retaining students into gifted education. First, the concept of talent development is a major focus of the definition. It recognizes that many students have had inadequate opportunities to develop and perform at high academic levels. For example, many students, especially those who live in poverty, lack exposure to books and other literature; they may not visit libraries or bookstores, and they often miss out on other meaningful educational experiences (Hart & Risley, 1995). Accordingly, the federal definition recognizes that students coming from high-SES homes are likely to have such opportunities, which are likely to contribute to the fruition of their giftedness.

Further, the federal definition recognizes that some students face more barriers in life than others (including racial discrimination and prejudice). Discrimination and prejudice weigh heavily on the motivation, aspirations, and mental health (i.e., self-esteem, self-concept, and racial identity) of CLD students and adults (e.g., Cross & Vandiver, 2001; Sue et al., 2007). Stated another way, discrimination places these students, at all levels of intelligence, at greater risk for low achievement, academic disengagement, school failure, and other social ills that have been described elsewhere (Allport, 1954; Constantine, 2007; Ford, Moore, & Whiting, 2006; Merton, 1948; Sue et al.). Two theories of intelligence show potential for recruiting and retaining CLD students in gifted education; both theories assert that “gifted” is a social construct, that definitions and views of giftedness vary from culture to culture, and that giftedness is not easily quantified or measured by tests (see Sternberg, 2007; Whiting & Ford, 2006). What is viewed as gifted in one culture may not be viewed and valued as gifted in another culture, and how giftedness is measured among different cultural groups varies as well. Our point here is to suggest that alternative theories and models of giftedness are needed that are sensitive to cultural differences.

FIGURE 2

Underrepresentation Barriers and Recommendations

Sternberg’s (1985) Triarchic Theory of Intelligence proposes that intelligence is multidimensional and dynamic, and that no one type of intelligence or talent is superior to another. The theory holds that intelligence manifests itself in at least three ways: (a) componentially, (b) experientially, and (c) contextually. Componential learners are analytical and abstract thinkers who do well academically and on achievement and standardized tests. Experiential learners value creativity and enjoy novelty. They often dislike rules and follow few of their own; they see rules as inconveniences meant to be broken. Contextual learners readily adapt to their environments (one of many skills that IQ tests fail to measure). They are street-smart survivors, socially competent and practical, but they may not be high achievers in school. Gardner (1983) defined intelligence as the ability to solve problems or to fashion products valued in one or more cultural settings, a stipulation that does not get much attention in other definitions. In his Theory of Multiple Intelligences, Gardner differentiated seven types of intelligences: linguistic, logical-mathematical, interpersonal, intrapersonal, bodily kinesthetic, spatial, and musical. Each type of intelligence comprises distinct forms of perception, memory, and other psychological processes.

Both of these theories are inclusive, comprehensive, and culturally sensitive; they are flexible and dynamic theories which contend that giftedness is a sociocultural construct that manifests itself in many ways and means different things to different cultural and linguistic groups. The theorists recognize the many-sided and complex nature of intelligence and how current tests (which are too simplistic and static) fail to do justice to this construct (Ford, 2004; Gould, 1995; Sternberg, 2007).

IDENTIFY AND SERVE GIFTED UNDERACHIEVERS

Related to this notion of talent development, it is important to consider gifted underachievers when discussing underrepresentation. Some perspectives specify that gifted students must be high achievers, equating giftedness with achievement or demonstrated performance. In schools that follow this philosophy, gifted students must demonstrate high achievement; otherwise, they are unlikely to be identified or to be kept in gifted programs if their grades or test scores fall below a certain level. When one makes giftedness synonymous with achievement, gifted underachievers will be neither recruited nor retained. This has key implications for CLD students, too many of whom have lower grades and achievement scores than their White classmates. A wealth of reports under the topic of the achievement gap suggests that this problem cannot be ignored.

ADOPT CULTURALLY SENSITIVE INSTRUMENTS

The most promising instruments for assessing the strengths of CLD students are nonverbal tests of intelligence, such as the Naglieri Nonverbal Ability Test (NNAT; Naglieri, 1997), the Universal Nonverbal Intelligence Test (Bracken & McCallum, 1998), and Raven’s Progressive Matrices (Raven, Raven, & Court, 2003). These tests are considered less culturally loaded than traditional tests (see Flanagan & Ortiz, 2001; Kaufman, 1994; Naglieri & Ford, 2003, 2005; Saccuzzo et al., 1994) and thus hold promise for more effectively assessing the cognitive strengths of CLD students. Saccuzzo et al., for instance, identified substantially more Black and Hispanic students using Raven’s than using a traditional test, and reported that “50% of the non-White children who had failed to qualify based on a WISC-R qualified with the Raven” (p. 10), concluding that “the Raven is a far better measure of pure potential than tests such as the WISC-R, whose scores depend heavily on acquired knowledge” (p. 10). More recently, Naglieri and Ford (2003) reported that CLD students had comparable scores to White students on the NNAT, with IQs ranging from 96 to 99. This three-point difference is markedly less than the frequently reported 15-point gap that exists on traditional IQ tests between Black and White students. These nonverbal tests give students opportunities to demonstrate their intelligence without the confounding influence of language, vocabulary, and academic exposure. Fagan and Holland (2002) conducted several studies showing that CLD students get comparable scores to White students when there is an equal opportunity to learn the material, specifically vocabulary and language skills.
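
To put these gaps in perspective, the following is a minimal back-of-the-envelope sketch, assuming the conventional standard-score scale on which IQ-type measures have a standard deviation of 15 points (an assumption about the scale used for illustration, not a figure reported in the studies cited above):

# Rough effect-size comparison of the score gaps cited above.
# Assumes the conventional IQ scale (standard deviation = 15); illustrative only.
IQ_SD = 15.0

nnat_gap = 99 - 96        # gap reported by Naglieri and Ford (2003) on the NNAT
traditional_gap = 15      # gap frequently reported on traditional IQ tests

print(f"NNAT gap: {nnat_gap / IQ_SD:.2f} standard deviations")                      # 0.20
print(f"Traditional-test gap: {traditional_gap / IQ_SD:.2f} standard deviations")   # 1.00

On this assumed scale, the three-point NNAT difference amounts to roughly one fifth of a standard deviation, compared with a full standard deviation on traditional tests.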

PROVIDE GIFTED EDUCATION PREPARATION FOR EDUCATORS

Few teachers have formal preparation in gifted education, leading us to question the extent to which teachers understand giftedness, are familiar with the characteristics and needs of gifted students, are effective at referring students for gifted education screening and placement, and are able to teach and challenge such students once placed.

We recommend that teachers take advantage of opportunities to become more competent in gifted education, by enrolling in any relevant courses at local colleges and by attending professional development workshops and conferences in gifted education, such as the National Association for Gifted Children, Council for Exceptional Children (Talented and Gifted Summer Institute for the Gifted, SIG), and state and regional gifted conferences. Potential topics include definitions and theories of giftedness; identification and assessment; policies and practices; cross-cultural assessment; characteristics and needs of gifted students (e.g., intellectual, academic, social/emotional); curriculum and instruction; programming options; gifted underachievers; talent development; working with families; and underrepresentation.

PROVIDE MULTICULTURAL PREPARATION FOR EDUCATORS

With forecasts projecting a growing CLD student population (Hochschild, 2005), teachers and other educators (e.g., school counselors and administrators) will have to bear greater responsibility for demonstrating multicultural competence (Banks & Banks, 2006; Ford & Milner, 2006). Multicultural education preparation among all school personnel (teachers, school counselors, psychologists, administrators, and support staff) must focus on knowledge, dispositions, and skills. Comprehensive preparation should help school personnel become culturally competent so that deficit orientations no longer impede diverse students’ access to gifted education. This preparation can increase the recruitment and retention of CLD students in gifted education, if it permeates educational and professional development experiences.

Banks and Banks (2006) offer one model for infusing multicultural content into the curriculum. At the contributions and additive levels, diversity is addressed superficially: Students are exposed to safe topics and issues; diversity permeates only a few courses; and alternative perspectives, paradigms, and theories are avoided. These two lower levels tend to promote or reinforce stereotypes about diverse groups. However, these shortcomings are rectified at the higher levels of transformation and social action. A transformational curriculum shares multiple perspectives; teachers are encouraged to be empathetic and to infuse multicultural teaching strategies, materials, and resources into all subject areas and topics as often as possible. Finally, teachers can be catalysts and agents of social change; if they are taught to be empowered, social justice is at the heart of their teaching. To become more culturally aware, sensitive, and competent, educators must

1. Engage in critical self-examination that explores their attitudes and perceptions concerning cultural and linguistic diversity, and the influence of these attitudes and perceptions on CLD students’ achievement and educational opportunities.

2. Acquire accurate information about CLD groups (e.g., histories, cultural styles, values, customs and traditions, child rearing practices, etc.) and use this information to support and guide students as they matriculate through school.

3. Acquire formal and ongoing multicultural preparation in order to maximize their understanding of and skills at addressing the academic, cognitive, social, psychological, and cultural needs and development of CLD students.

ONGOING EVALUATION OF UNDERREPRESENTATION

Along with OCR (2000, 2004, 2005), we recommend that educators design racial equity plans to monitor gifted education data, including demographics, referrals, and instruments, all with the notion of disparate impact and eventual underrepresentation in mind. These data should be disaggregated by race, gender, and income level (Black males on free or reduced lunch vs. White males paying full price, teacher referral of American Indian males vs. all other males, patterns of referral by teacher demographics, patterns of representation across grade levels and school buildings, etc.) and should focus on both recruitment and retention barriers (e.g., What percentage of CLD students compared to White students leave gifted education and AP classes, and for what reasons? How many complaints are received about inequities in gifted education and what is the nature of these complaints?). Other recommendations include

* Changing or eliminating any policies and practices that have a disparate impact on CLD students relative to their representation in gifted education (e.g., teacher referral, family referral, peer referral, tests, definitions, checklists, nomination forms, views about underachievement).

* Setting concrete and measurable goals for changing the demographics of gifted education, and otherwise improving the experiences and outcomes of CLD students.

* Reviewing these goals, plans, policies, and practices annually, and making changes where necessary (i.e., retraining teachers and other school personnel who do not refer CLD students for gifted education screening, adopting alternative assessments, modifying screening and placement criteria, providing different or additional support to CLD students and families, increasing or modifying professional development in gifted education and multicultural education).

SUMMARY

Since its inception, gifted education has failed to provide adequate access to gifted programs and AP classes for students who are culturally and linguistically diverse. African American, Hispanic/Latino, and American Indian students have always been poorly represented in gifted education. We believe that the problem is complex, but not insoluble. Educators, particularly those in positions of authority, must explore this complex and pervasive problem, and then become proactive in eliminating all barriers that prevent CLD students from being recruited and retained in gifted education. Attitudinal changes are essential, as are changes in instruments, policies, and practices.

The underrepresentation problem is a result of both recruitment barriers and retention barriers; recruitment often receives greater attention because there is more data and information on this issue. A lack of preparation in and sensitivity to the characteristics of gifted students, a lack of understanding of needs and development of gifted CLD students, and a lack of attention to multicultural preparation all undermine educators’ competency at making fair and equitable referrals and decisions. All educators-teachers, school counselors, and administrators-should seriously and honestly examine their respective school context to make changes, and seek the preparation and knowledge necessary to work with gifted students, CLD students, and gifted CLD students. The time to open doors to gifted education and AP classes is long overdue.

REFERENCES

Allport, G. (1954). The nature of prejudice. Boston: Beacon Press.

American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (1999). Standards for educational and psychological testing. Washington, DC: American Psychological Association.

Aronson, J., Fried, C., & Good, C. (2002). Reducing the effects of stereotype threat on African American college students by shaping theories of intelligence. Journal of Experimental Social Psychology, 38, 113-125.

Aronson, J., & Steele, C. M. (2005). Stereotypes and the fragility of human competence, motivation, and self-concept. In C. Dweck & E. Elliot (Eds.), Handbook of competence and motivation. New York: Guilford.

Baldwin, A. Y. (2005). Identification concerns and promises for gifted students of diverse populations. Theory Into Practice, 44, 105-114.

Banks, J. A. (1999). Introduction to multicultural education (2nd ed.). Boston: Allyn & Bacon.

Banks, J. A. (2006). Diversity in American education: Foundations, curriculum and teaching. Boston: Allyn & Bacon.

Banks, J. A., & Banks, C. M. (Eds.). (2006). Multicultural education: Issues and perspectives (6th ed.). Hoboken, NJ: John Wiley & Sons.

Beady, C., & Hansell, S. (1981). Teacher race and expectations for student achievement. American Educational Research Journal, 18, 191-206.

Berman, P., & Chambliss, D. (2000). Readiness of low-performing schools for comprehensive reform. Emeryville, CA: RPP International, High Performance Learning Community Project.

Boutte, G. S. (1992). Frustrations of an African-American parent: A personal and professional account. Phi Delta Kappan, 73, 786-788.

Bracken, B. A., & McCallum, R. S. (1998). Universal Nonverbal Intelligence Test (UNIT). Chicago: Riverside.

Castellano, J. A., & Diaz, E. (2001). Reaching new horizons: Gifted and talented education for culturally and linguistically diverse students. Boston: Allyn & Bacon.

Chan, D. W. (2007). Leadership competencies among Chinese gifted students in Hong Kong: The connection with emotional intelligence and successful intelligence. Roeper Review, 29, 183-189.

The College Board. (2002). Opening classroom doors: Strategies for expanding access to AP; AP teacher survey results. Washington, DC: Author.

Constantine, M. G. (2007). Racial microaggressions against African American clients in cross-racial counseling relationships. Journal of Counseling Psychology, 54, 1-16.

Cox, J., & Daniel, N. (1983). Identification: Special problems and special populations. Gifted Child Today, 30, 54-61.

Cross, W. E., Jr., & Vandiver, B. J. (2001). Nigrescence theory and measurement: Introducing the Cross Racial Identity Scale (CRIS). In J. G. Ponterotto, J. M. Casas, L. A. Suzuki, & C. M. Alexander (Eds.), Handbook of multicultural counseling (2nd ed., pp. 371-393). Thousand Oaks, CA: Sage.

Davidson Institute. (2006). State mandates for gifted programs as of 2006. Retrieved August 4, 2006, from http://www.gt-cybersource.org/StatePolicy.aspx?NavID=4_0

Davis, G. A., & Rimm, S. B. (2003). Education of the gifted and talented. Boston: Allyn & Bacon.

Educational Policies Commission. (1950). Education of the gifted. Washington, DC: National Education Association and American Association of School Administrators.

Elhoweris, H., Mutua, K., Alsheikh, N., & Holloway, P. (2005). The effects of the child’s ethnicity on teachers’ referral and recommendations decisions in the gifted/talented programs. Remedial and Special Education, 26, 25-31.

Fagan, J. F., & Holland, C. R. (2002). Equal opportunity and racial differences in IQ. Intelligence, 30, 361-387.

Fitz-Gibbons, C. T. (1974). The identification of mentally gifted, “disadvantaged” students at the eighth grade level. Journal of Negro Education, 43, 53-33.

Flanagan, D. P., & Ortiz, S. (2001). Essentials of cross-battery assessment. New York: Wiley.

Ford, D. Y. (1994). The recruitment and retention of African- American students in gifted programs. Storrs, CT: University of Connecticut, National Research Center on the Gifted and Talented.

Ford, D. Y. (1996). Reversing underachievement among gifted Black students: Promising practices and programs. New York: Teachers College Press.

Ford, D. Y. (1998). The under-representation of minority students in gifted education: Problems and promises in recruitment and retention. The Journal of Special Education, 32, 4-14.

Ford, D. Y. (2004). Recruiting and retaining culturally diverse gifted students from diverse ethnic, cultural, and language groups. In J. Banks & C. A. Banks (Eds.), Multicultural education: Issues and perspectives (5th ed., pp. 379-397). Hoboken, NJ: John Wiley & Sons.

Ford, D. Y. (2005). Ten strategies for increasing diversity in gifted education. Gifted Education Press Quarterly, 19(4), 2-4.

Ford, D. Y., & Frazier-Trotman, M. (2000). The Office for Civil Rights and non-discriminatory testing, policies, and procedures: Implications for gifted education. Roeper Review, 23, 109-112.

Ford, D. Y., & Grantham, T. C. (2003). Providing access for culturally diverse gifted students: From deficit to dynamic thinking. Theory Into Practice, 42, 217-225.

Ford, D. Y., & Milner, H. R. (2006). Counseling high achieving African Americans. In C. C. Lee (Ed.), Multicultural issues in counseling: New approaches to diversity (pp. 63-78). Alexandria, VA: American Counseling Association.

Ford, D. Y., Moore, J. L., III, & Milner, H. R. (2005). Beyond culture blindness: A model of culture with implications for gifted education. Roeper Review, 27, 97-103.

Ford, D. Y., Moore, J. L., III, & Whiting, G. W. (2006). Eliminating deficit orientations: Creating classrooms and curriculums for gifted students from diverse cultural backgrounds. In M. G. Constantine & D. W. Sue (Eds.), Addressing racism: Facilitating cultural competence in mental health and educational settings (pp. 173-193). Hoboken, NJ: John Wiley & Sons.

Ford, D. Y., & Webb, K. S. (1994). Desegregation of gifted educational programs: The impact of Brown on underachieving children of color. Journal of Negro Education, 63, 358-375.

Forsbach, T., & Pierce, N. (1999, April). Factors related to the identification of minority gifted students. Paper presented at the Annual Conference of the American Educational Research Association, Montreal, Canada. (ERIC Document Reproduction Service No. 430 372)

Frasier, M. M., Garcia, J. H., & Passow, A. H. (1995). A review of assessment issues in gifted education and their implications for identifying gifted minority students. Storrs, CT: University of Connecticut, National Research Center on the Gifted and Talented.

Gagné, F. (1994). Are teachers really poor talent detectors? Comments on Pegnato and Birch's (1959) study of the effectiveness and efficiency of various identification techniques. Gifted Child Quarterly, 38, 124-126.

Garcia, S. B., & Guerra, P. L. (2004). Deconstructing deficit thinking: Working with educators to create more equitable learning environments. Education and Urban Society, 36, 150-168.

Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.

Gear, G. H. (1976). Accuracy of teacher judgment in identifying intellectually gifted children: A review of the literature. Gifted Child Quarterly, 20, 478-489.

Gould, S. J. (1981). The mismeasure of man. New York: Norton.

Gould, S. J. (1995). The mismeasure of man (Rev. ed.). New York: Norton.

Griggs v. Duke Power Co., 401 U.S. 424 (1971).

Hale, J. (2001). Learning while Black: Creating educational excellence for African American children. Baltimore: The Johns Hopkins University Press.

Harmon, D. (2002). They won't teach me: The voices of gifted African American inner-city students. Roeper Review, 24, 68-75.

Harry, B. (2008). Collaboration with culturally and linguistically diverse families: Ideal versus reality. Exceptional Children, 74, 372-388.

Hart, B. H., & Risley, T. R. (1995). Meaningful differences in the everyday experience of young American children. Baltimore: Paul H. Brookes.

Helms, J. E. (1992). Why is there no study of cultural equivalence in cognitive ability testing? American Psychologist, 47, 1083-1101.

Herrnstein, R. J., & Murray, C. (1994). The bell curve: Intelligence and class structure in American life. New York: Free Press.

Hilliard, A. G., III. (1992). Why we must pluralize the curriculum. Educational Leadership, 49(4), 12-16.

Hochschild, J. L. (2005). Looking ahead: Racial trends in the U.S. Daedalus, 134(1), 7-81.

Hoge, R. D., & Coladarci, T. (1989). Teacher-based judgments of academic achievement: A review of literature. Review of Educational Research, 59, 297-313.

Huff, R. E., Houskamp, B. M., Watkins, A. V., Stanton, M., & Tavegia, B. (2005). The experiences of parents of gifted African American children: A phenomenological study. Roeper Review, 27(4), 215-221.

Irvine, J. J. (2002). In search of wholeness: African American teachers and their culturally specific classroom practices. New York: Palgrave/St. Martins Press.

Jacob K. Javits Gifted and Talented Students Education Act of 1994, 20 U.S.C. § 8031 et seq. (1994).

Jenkins, M. D. (1936). A socio-psychological study of Negro children of superior intelligence. Journal of Negro Education, 5, 175-190.

Jensen, A. R. (1981). Straight talk about mental tests. New York: Free Press.

Karnes, F. A., Troxclair, D. A., & Marquardt, R. G. (1997). The Office of Civil Rights and the gifted: An update. Roeper Review, 19, 162-165.

Kaufman, A. S. (1994). Intelligent testing with the WISC-III. New York: John Wiley & Sons.

Ladson-Billings, G. (1994). The dreamkeepers: Successful teachers for African-American children. San Francisco: Jossey-Bass.

Losen, D., & Orfield, G. (Eds.). (2002). Racial inequality in special education. Boston: Harvard Education Publishing Group.

Louie, J. (2005). We don’t feel welcome here: African Americans and Hispanics in metro Boston. Cambridge, MA: The Civil Rights Project at Harvard University.

Marquardt, R. G., & Karnes, F. A. (1994). Gifted education and discrimination: The role of the Office of Civil Rights. Journal for the Education of the Gifted, 18, 87-94.

McBee, M. T. (2006). A descriptive analysis of referral sources for gifted identification screening by race and socioeconomic status. Journal of Secondary Gifted Education, 17, 103-111.

Menchaca, M. (1997). Early racist discourses: The roots of deficit thinking. In R. R. Valencia (Ed.), The evolution of deficit thinking: Educational thought and practice (pp. 13-40). New York: Falmer.

Merton, R. K. (1948). The self-fulfilling prophecy. Antioch Review, 8, 193-210.

Miller-Jones, D. (1989). Culture and testing. American Psychologist, 44, 360-366.

Milner, H. R., & Ford, D. Y. (2007). Cultural considerations in the under-representation of culturally diverse elementary students in gifted education. Roeper Review, 29, 166-173.

Moore, J. L., III, Ford, D. Y., Owens, D., Hall, T., Byrd, M., Henfield, M., et al. (2006). Recruitment of African Americans in gifted education: Lessons learned from higher education. Mid-Western Educational Research Journal, 19, 3-12.

Naglieri, J. A. (1997). Naglieri Nonverbal Ability Test: Multilevel technical manual. San Antonio, TX: Harcourt Brace.

Naglieri, J. A., & Ford, D. Y. (2003). Addressing under-representation of gifted minority children using the Naglieri Nonverbal Ability Test (NNAT). Gifted Child Quarterly, 47, 155-160.

Naglieri, J. A., & Ford, D. Y. (2005). Increasing minority children's participation in gifted classes using the NNAT: A response to Lohman. Gifted Child Quarterly, 49, 29-36.

National Association for Gifted Children and Council of State Directors of Programs for the Gifted. (2005). State of the states 2004-2005. Washington, DC: Author.

Oakes, J. (1985). Keeping track: How schools structure inequality. New Haven, CT: Yale University Press.

Orfield, G., & Lee, C. (2006). Racial transformation and the changing nature of segregation. Cambridge, MA: The Civil Rights Project at Harvard University.

Pegnato, C. W., & Birch, J. W. (1959). Locating gifted children in junior high schools: A comparison of methods. Exceptional Children, 48, 300-304.

Pfeiffer, S. I., Petscher, Y., & Jarosewich, T. (2007). The Gifted Rating Scales-Preschool/Kindergarten Form: An analysis of the standardization sample based on age, gender, and race. Roeper Review, 29, 206-210.

Plata, M., & Masten, W. G. (1998). Teacher ratings of Hispanic and Anglo students on a behavior rating scale. Roeper Review, 21, 139-144.

Plessy v. Ferguson, 163 U.S. 537 (1896).

Raven, J., Raven, J. C., & Court, J. H. (2003). Manual for Raven's Progressive Matrices and Vocabulary Scales: Section 1, General overview. San Antonio, TX: Harcourt Assessment.

Renzulli, J. S. (1978). What makes giftedness? Reexamining a definition. Phi Delta Kappan, 60, 180-184, 261.

Rist, R. C. (1996). Color, class, and the realities of inequality. Society, 33, 2-36.

Rosenthal, R., & Jacobson, L. (1968). Pygmalion in the classroom: Teacher expectation and pupils’ intellectual development. New York: Rinehart and Winston.

Rushton, J. P. (2003). Brain size, IQ and racial-group differences: Evidence from musculoskeletal traits. Intelligence, 31, 139-155.

Saccuzzo, D. P., Johnson, N. E., & Guertin, T. L. (1994). Identifying underrepresented disadvantaged gifted and talented children: A multifaceted approach (Vols. 1 & 2). San Diego, CA: San Diego State University.

Sapon-Shevin, M. (1994). Playing favorites: Gifted education and the disruption of community. Albany: State University of New York Press.

Cow-Human Hybrid Survives for Three Days

Combining human DNA with cow eggs to produce an embryo may be well-intentioned according to the scientific community, but those in opposition view the concept as monstrous.

In England, at Newcastle University, a team has grown hybrid human-cow embryos for the first time in order to provide research tools for stem-cell based solutions. The embryos were produced after human DNA was injected into eggs from cows’ ovaries.

The embryos survived for three days before dying. According to Dr. Teija Peura, director of human embryonic stem cell laboratories at the Australian Stem Cell Centre, these “99 percent human” embryos could boost research and help develop therapies for Parkinson’s, Alzheimer’s, and spinal cord injuries in ways that human or animal embryos alone could not.

This method of embryo creation, somatic cell nuclear transfer (SCNT), has previously been performed between animal species, but this attempt is different. Dr. Peura stated, “If successful, they would provide an important additional research tool to help realization of stem cell-based therapies for human diseases.” According to her, encouraging all avenues of research, including SCNT, is incredibly important, especially “if we want to fulfill the promises of stem cell technologies.”

This is not the first time hybrid embryos have been produced. Dr. Hui Zhen Sheng of the Shanghai Second Medical University previously fused rabbit eggs with human cells to produce embryos which eventually yielded stem cells.

Peura’s colleague, Dr. Andrew Laslett, warned that the cow/human hybrids had not yet produced stem cells; it was still only an academic possibility.

Under the license granted to Newcastle University by the Human Fertilization and Embryology Authority, the embryos may not be developed beyond two weeks. Next month, the British Parliament will debate the long-term future of similar embryonic research.

Until further debate and legislation occurs, the controversy surrounding the embryos will continue. Professor John Burn, head of the Institute of Human Genetics at the university, said in defense of the process, “If the team can produce cells which will survive in culture, it will open the door to a better understanding of disease processes without having to use precious human eggs.” He added that, for safety reasons, such cells may not be used to treat patients, but they may still further the development of new stem cell therapies.

Kevin McGovern, the director of the Caroline Chisholm Centre for Health Ethics, supports the way that Britain’s Catholics view the creations; he too thinks the practice is “monstrous.” “An almost-human embryo is being created and then it’s being destroyed,” he said. “I cannot see that that respects human life or the dignity of human life.”

McGovern goes on to claim that almost-human embryos are the equivalent of human beings and should not be used in lab experiments. He is correct in stating that the possibilities are endless and that no one can predict what might grow from these embryos. “What is being created is life,” he said. He raises concerns similar to those of others in opposition, stating, “If this is approved in the UK, there will be renewed pressure to permit it here, and we will travel further down the slippery slope of allowing just about anything.”

On the Net:

Newcastle University

“Animal-Human Hybrids Created in the UK”

Australian Stem Cell Centre

“Rabbit Eggs Used to Grow Human Stem Cells”

Human Fertilization and Embryology Authority

Virginia Hospital Center Reports HCAHPS Scores

ARLINGTON, Va., April 1, 2008 /PRNewswire-USNewswire/ — The Centers for Medicare & Medicaid Services (CMS) and the national Hospital Quality Alliance (HQA) have released HCAHPS scores that cover patient satisfaction, from an overall hospital rating to nine other dimensions of a patient’s hospital experience. Virginia Hospital Center scored above the state and national averages in categories such as the overall rating of the hospital, the likelihood that a patient would recommend the hospital, and cleanliness, which ties directly to stopping infection before it starts.

“We are very proud of the high scores Virginia Hospital Center received in survey areas such as the overall rating and the likelihood to recommend,” said Jim Cole, President and CEO of Virginia Hospital Center. “Our Hospital views HCAHPS as a viable benchmarking tool, and we are always looking for ways to improve our services so that our patients are served with the highest quality of compassionate care the community so deserves.”

HCAHPS scores assisted Virginia Hospital Center in identifying our strengths and where we have opportunity to improve. The Hospital will continue to provide change agents and internal education to our nurses and physicians on items such as communication and timely response to the needs of our patients. Initiatives supporting this change include the creation of new patient discharge materials, nursing education in-services on communication and response, and the upcoming development of a Patient Advisory Board.

Virginia Hospital Center appreciates and supports CMS and HQA for creating a set of clear, standardized quality and satisfaction measures. Our top priority is being proactive in providing the highest quality of care to all patients, and with the transparency of our healthcare system, we can better serve and satisfy our community.

For more information on Virginia Hospital Center’s HCAHPS scores, or for more detail about the HCAHPS survey, visit http://www.hospitalcompare.hhs.gov/.

About Virginia Hospital Center:

For over 60 years, Virginia Hospital Center has provided exceptional medical services to the Washington metropolitan area. Virginia Hospital Center’s new $150 million state-of-the-art facility offers comprehensive healthcare and multiple Centers of Excellence, including Cardiology & Cardiovascular Surgery, Neuroscience, Oncology, Women & Infant Health, and Urology. Growing service lines include Executive Health and the only Lung Cancer Center in Northern Virginia. Virginia Hospital Center is a teaching hospital, long associated with Georgetown University’s School of Medicine, and is accredited by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) and licensed by the Commonwealth of Virginia Department of Health. For additional information, please visit http://www.virginiahospitalcenter.com/.

Virginia Hospital Center

CONTACT: Kristen Peifer of Virginia Hospital Center, +1-703-558-6595, [email protected]

Web Site: http://www.hospitalcompare.hhs.gov/ and http://www.virginiahospitalcenter.com/

Startup Seeks to Tap the Mississippi River for Power

For more than a century, the Mississippi River has been one of the nation’s most important transportation corridors, a muddy, winding pathway for moving bulk commodities such as grain and coal, along with other goods.

Now, a New England startup company wants to harness the mighty river for a secondary purpose — generating electricity.

The company, Free Flow Power Corp., is pursuing a $3 billion plan to install thousands of small electric turbines in the river bed, stretching from St. Louis to the Gulf of Mexico, that would collectively generate 1,600 megawatts of electricity — enough to power 1.5 million homes.

Gloucester, Mass.-based Free Flow Power is one of a number of developers of so-called hydrokinetic projects, defined as those that produce electricity from river currents or ocean waves and tides — not dams.

Like the dozens of young companies building wind farms across the Great Plains or putting solar panels on roofs in California, hydrokinetic developers are responding to a growing appetite for renewable energy as the nation tries to wean itself off crude oil and natural gas and reduce emissions of carbon dioxide, a heat-trapping gas linked to global warming.

“Necessity is the mother of invention, and what’s really driving this is the need to develop alternatives to fossil fuels,” said Daniel R. Irvin, 48, the company’s chief executive.

Free Flow Power already has preliminary permits from the Federal Energy Regulatory Commission to study 59 sites on the river. Each site would consist of hundreds or thousands of turbines installed over a stretch of several miles, according to permit applications filed with FERC, which regulates most of the nation’s hydropower projects.

The turbines, which would be attached to pilings in the river bed, are about two feet in diameter and probably would be made of carbon fiber or another lightweight composite material, Irvin said. The river’s natural flow would spin the turbines to generate electricity, which would be transmitted to the power grid.

Free Flow Power chose the Mississippi River following a nationwide search in which it reviewed government data for 80,000 potential sites, looking for minimum average river flows of about 6.5 miles per hour. The sites between St. Louis and New Orleans were among the best they found and also are near electricity markets in the Midwest and Southeast, Irvin said.
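
The scale of the plan can be sanity-checked from the figures quoted above. The sketch below is a rough, illustrative calculation, not company data: it estimates the kinetic power available to a single two-foot rotor in a 6.5 mph current and the average load per home implied by the 1,600-megawatt, 1.5-million-home figures. The 35 percent water-to-wire efficiency is an assumed value.

import math

RHO_WATER = 1000.0              # kg/m^3, fresh water
MPH_TO_MS = 0.44704
FT_TO_M = 0.3048

rotor_diameter_m = 2 * FT_TO_M          # "about two feet in diameter"
flow_speed_ms = 6.5 * MPH_TO_MS         # minimum average flow Free Flow Power looked for
efficiency = 0.35                       # assumed water-to-wire efficiency (illustrative)

swept_area_m2 = math.pi * (rotor_diameter_m / 2) ** 2
kinetic_power_w = 0.5 * RHO_WATER * swept_area_m2 * flow_speed_ms ** 3
electric_power_w = efficiency * kinetic_power_w

print(f"Kinetic power in the flow: {kinetic_power_w / 1000:.1f} kW")        # ~3.6 kW
print(f"Assumed output per turbine: {electric_power_w / 1000:.1f} kW")      # ~1.3 kW

# Cross-check of the headline figures: 1,600 MW serving 1.5 million homes
avg_load_per_home_kw = 1_600_000 / 1_500_000
print(f"Implied average load per home: {avg_load_per_home_kw:.2f} kW")      # ~1.07 kW

On these assumptions, each rotor contributes on the order of a kilowatt, which is why the plan calls for hundreds or thousands of turbines at each site, and the roughly 1.1 kilowatts per home implied by the headline figures is broadly in line with average U.S. household electricity use.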

The FERC permits give Free Flow Power three years to complete detailed environmental and technical studies as well as the first right to seek operating licenses for projects at those locations.

Irvin, a former investment banker, is hopeful the company can begin producing electricity as soon as 2012, though it may take five additional years to complete the build out.

To be sure, getting there won’t be easy. The projects are likely to face close inspection by state and federal environmental regulators and the Army Corps of Engineers, which must assure the turbines don’t interfere with river navigation. There’s also the potential impact to fish and wildlife habitat, including the pallid sturgeon, an endangered species native to the Missouri and Mississippi river basins.

In January, an official with the Missouri Department of Conservation urged FERC to require an environmental impact statement on the “cumulative effects of proposed hydrokinetic power projects.”

“The department has serious reservations regarding the installation and operation of hydrokinetic power within the Mississippi River,” Janet Sternberg, a policy coordinator for the department, said in a letter to FERC. “Little information is available regarding the environmental impacts of a single project, or the cumulative environmental impacts from 14 projects that may affect more than 74 miles of the river.”

Irvin said he expects scrutiny and has reached out to state and federal regulators. “With a public resource like a river, there are a lot of concerns that need to be addressed,” he said.

Another hurdle is cost. Free Flow Power believes its projects can produce electricity at a price that’s competitive with the output from natural-gas-fired plants.

“This is not as cheap as conventional hydro,” Irvin said. “But we’re working hard to make it comparable with fossil fuels.”

Free Flow Power and other developers of hydrokinetic projects hope to be included in legislation approved by the U.S. House of Representatives that makes production tax credits available to wind power developers. The measure would provide a tax credit per kilowatt-hour that helps level the playing field with more-established generating technologies.

Free Flow Power sees scale as a key to its business plan to help hold down per-unit costs and improve economics.

“To try to do a one- to five-megawatt site or a bunch of smaller ones is pretty much setting yourself up for failure,” Irvin said.

The so-called “in stream” hydrokinetic projects being pursued by Free Flow Power represent just one of several new technologies being developed to take advantage of untapped energy sources.

Virginia-based Verdant Power began testing underwater turbines, resembling the large wind turbines sprouting up across the Midwest, in New York City’s East River in late 2006 as part of the Roosevelt Island Tidal Energy Project. A larger commercial project is planned for Canada’s St. Lawrence River.

Another company, Houston-based Hydro Green Energy LLC, also is pursuing projects in the Mississippi. Still others are developing technologies that would generate electricity from ocean waves and tides.

“There are some new technologies now that are being developed that we certainly haven’t thought about before,” said Linda Church Ciocci, executive director of the National Hydropower Association in Washington.

These new technologies are the product of rising prices for conventional fuels such as oil and natural gas, which make emerging technologies more viable, as well as state and federal policies that encourage the use of cleaner renewable energy.

Proposed hydrokinetic projects are just part of the untapped potential for hydropower, according to a 2007 study by the Electric Power Research Institute.

According to the study, the United States has the resource potential to develop an additional 23,000 megawatts of hydropower by 2025, including 3,000 megawatts from new hydrokinetic technologies and 10,000 megawatts from ocean wave energy devices.

Confusing Sexual Interest With Friendliness

New research from Indiana University and Yale suggests that college-age men confuse friendly non-verbal cues with cues for sexual interest because the men have a less discerning eye than women — but their female peers aren’t far behind.

In the study, appearing in the April issue of the journal Psychological Science, men who viewed images of friendly women misidentified 12 percent of the images as sexually interested. Women mistook 8.7 percent of the friendly images for sexual interest.

Both men and women were even more likely to do the opposite — when viewing images of sexually interested women, men mistakenly called 37.8 percent of the images “friendly.” Women mistook 31.9 percent of the sexual interest cues for friendliness.
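
Laid side by side, the four misidentification rates reported above make the pattern easier to see. The short sketch below simply tabulates the article's numbers; the grouping and labels are illustrative only, not taken from the study.

# Misidentification rates reported in the study (percent of images misread).
error_rates = {
    ("friendly cue read as sexual interest", "men"): 12.0,
    ("friendly cue read as sexual interest", "women"): 8.7,
    ("sexual-interest cue read as friendly", "men"): 37.8,
    ("sexual-interest cue read as friendly", "women"): 31.9,
}

for cue in ("friendly cue read as sexual interest",
            "sexual-interest cue read as friendly"):
    men, women = error_rates[(cue, "men")], error_rates[(cue, "women")]
    print(f"{cue}: men {men}%, women {women}%, gap {men - women:.1f} points")

Both sexes err far more often in the direction of reading sexual interest as friendliness, and the male-female gap is only a few percentage points in each direction (3.3 and 5.9 points, respectively).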

Scientists have long known that young men are more likely than women to confuse friendly non-verbal cues with cues for sexual interest, but the explanation for the gender difference has been less clear. The more popular of two competing theories attributes this to a tendency by young men to over-sexualize their social environment. The less popular theory — and the one supported by this new study — claims that women have an advantage when it comes to interpreting facial expressions and body language expressing a variety of emotions, and thus are more likely to accurately identify cues for sexual interest. Young men are simply less literate when it comes to non-verbal cues.

“Relative to women, men did not oversexualize the image set in our study,” said lead author Coreen Farris, a doctoral student in the Department of Psychological and Brain Sciences at IU Bloomington. “Both men and women were reluctant to state that ambiguous cues were ‘sexual interest.’ In fact, men and women utilized nearly identical thresholds for the degree of sexual interest that must be perceived before they were willing to go out on a limb and state that the nonverbal cues were sexual in nature.”

Farris said it is interesting that their study found no evidence to support the first theory.

“In many ways, the results point to a more general explanation for why young men make the decisions they make,” she said. “The observed advantage among women in ability to discriminate between friendliness and sexual interest extends to processing of sad and rejecting cues. This suggests that the increased tendency among young men to incorrectly read sexual interest rather than friendliness may simply be an extension of a general disadvantage in reading nonverbal cues, rather than a process unique to sexual signaling.”

The study involved 280 heterosexual college-age men and women, average age of 19.6. Seated in a private computer room, the men and women each categorized 280 photo images of women (full body, fully clothed) into one of four categories — friendly, sexually interested, sad or rejecting. Images were selected for each of the categories based on an extensive validation process.

The study found that both men and women were least accurate at correctly identifying the photos indicating sexual interest. Farris, whose research focuses on sexual aggression in men, noted that the results reflect average differences.

“The data don’t support the idea that all men are bad at this or that all women are great at this,” she said. “It’s a small difference.”

The authors wrote in Psychological Science that in most cases, the “negative consequences of sexual misperception will not extend beyond minor social discomfort.” However, among a small group of men, sexual misperception is linked to sexual coercion, and thus, is an important process to understand in order to improve rape prevention efforts on university campuses. Farris said studies such as this should help establish a better understanding and a baseline for young men’s perceptions of sexual intent and contribute to efforts aimed at preventing sexual aggression.

Coauthors are Teresa A. Treat, associate professor in the Department of Psychology at Yale University; Richard J. Viken, professor in the Department of Psychological and Brain Sciences at IU Bloomington; and Richard M. McFall, professor emeritus in the Department of Psychological and Brain Sciences at IU Bloomington.

On the Net:

Indiana University

Yale University

Psychological Science

Ohio Association of Health Plans Announces New Officers

COLUMBUS, Ohio, April 1, 2008 /PRNewswire/ — The Ohio Association of Health Plans (OAHP), the statewide trade association of companies providing health care benefits to more than six million Ohioans, announced the election of new officers this week.

Kathleen Crampton of New Albany, Ohio, will serve as chair of OAHP. Crampton is the Chair of the Board of Buckeye Community Health Plan, a 127,000-member managed care plan serving the Ohio CFC Medicaid program in the Northwest and East Central regions of Ohio, and the ABD Medicaid program in the Northwest, East Central, Southwest, and Northeast regions of Ohio. Kathleen has more than 20 years of experience in health care administration and managed care, ranging from hospital administration to sales and marketing.

Crampton takes the reins as chair of the association following the leadership of Bill Epling, Chief Executive Officer, WellCare, who served as the chair of OAHP for 11 years.

“OAHP is grateful for Bill’s dedication to the health plan industry during his time as chair,” said Crampton. “His vision and dedication to ensuring that Ohio’s health plans have a seat at the table in critical discussions about health care will have a positive impact for many years to come.”

Other newly elected OAHP officers include:

Fred Hyser of Westerville, Ohio, will serve as OAHP’s vice chair. Hyser is currently Director, National Accounts for Aetna, where he has served in six different offices since joining the organization more than 35 years ago. Aetna is one of the nation’s leaders in health care, dental, pharmacy, group life, and disability insurance, and employee benefits, with nearly 17 million medical health plan participants nationwide.

Lisa Bateson of Dublin, Ohio, will serve as treasurer for OAHP. Bateson is Staff Vice President of State Affairs for Anthem Blue Cross and Blue Shield, a subsidiary of WellPoint, Inc., the largest health benefits company in terms of commercial membership in the United States. Bateson has been with Anthem for seventeen years.

“Health plans are a vital contributor to Ohio’s health care industry,” said Kelly McGivern, OAHP President and CEO. “These companies are the source of more than 13,000 Ohio jobs, they create and implement numerous initiatives to improve the health of Ohioans and the efficiency of providing health care services, and they are valuable participants in the debate about health care reform. Our new officers will be leading this organization at a time when critical decisions will be made about the future of health care in Ohio – including working with the state to find solutions for the uninsured – and there is no doubt that these leaders are up to the task.”

The mission of the OAHP is to promote and advocate quality health care and access to a variety of affordable health care benefits for all Ohioans. http://www.oahp.org/

Ohio Association of Health Plans

CONTACT: Kelly McGivern, President and CEO, Ohio Association of HealthPlans, +1-614-228-4662, [email protected]

Web site: http://www.oahp.org/

Nanomachine Kills Cancer Cells

‘Nanoimpeller’ releases anticancer drugs inside of cancer cells

Researchers from the Nano Machine Center at the California NanoSystems Institute at UCLA have developed a novel type of nanomachine that can capture and store anticancer drugs inside tiny pores and release them into cancer cells in response to light.

Known as a “nanoimpeller,” the device is the first light-powered nanomachine that operates inside a living cell, a development that has strong implications for cancer treatment.

UCLA researchers reported the synthesis and operation of nanoparticles containing nanoimpellers that can deliver anticancer drugs March 31 in the online edition of the nanoscience journal Small.

The study was conducted jointly by Jeffrey Zink, UCLA professor of chemistry and biochemistry, and Fuyu Tamanoi, UCLA professor of microbiology, immunology and molecular genetics and director of the signal transduction and therapeutics program at UCLA’s Jonsson Comprehensive Cancer Center. Tamanoi and Zink are two of the co-directors for the Nano Machine Center for Targeted Delivery and On-Demand Release at the California NanoSystems Institute.

Nanomechanical systems designed to trap and release molecules from pores in response to a stimulus have been the subject of intensive investigation, in large part for their potential applications in precise drug delivery. Nanomaterials suitable for this type of operation must consist of both an appropriate container and a photo-activated moving component.

To achieve this, the UCLA researchers used mesoporous silica nanoparticles and coated the interiors of the pores with azobenzene, a chemical that can oscillate between two different conformations upon light exposure. Operation of the nanoimpeller was demonstrated using a variety of human cancer cells, including colon and pancreatic cancer cells. The nanoparticles were given to human cancer cells in vitro and taken up in the dark. When light was directed at the particles, the nanoimpeller mechanism took effect and released the contents.

The pores of the particles can be loaded with cargo molecules, such as dyes or anticancer drugs. In response to light exposure, a wagging motion occurs, causing the cargo molecules to escape from the pores and attack the cell. Confocal microscopic images showed that the impeller operation can be regulated precisely by the intensity of the light, the excitation time and the specific wavelength.

“We developed a mechanism that releases small molecules in aqueous and biological environments during exposure to light,” Zink said. “The nanomachines are positioned in molecular-sized pores inside of spherical particles and function in aqueous and biological environments.”

“The achievement here is gaining precise control of the amount of drugs that are released by controlling the light exposure,” Tamanoi said. “Controlled release to a specific location is the key issue. And the release is only activated by where the light is shining.”

“We were extremely excited to discover that the machines were taken up by the cancer cells and that they responded to the light. We observed cell killing as a result of programmed cell death,” Tamanoi and Zink said.

This nanoimpeller system may open a new avenue for drug delivery under external control at specific times and locations for phototherapy. Remote-control manipulation of the machine is achieved by varying both the light intensity and the time that the particles are irradiated at the specific wavelengths at which the azobenzene impellers absorb.
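The release does not give a quantitative dose-response model, but the behavior it describes — more cargo released as light intensity or irradiation time increases — can be illustrated with a toy first-order photoactivation sketch. The rate constant, intensities, and exposure times below are hypothetical and for illustration only; they are not values from the UCLA study.

import math

# Toy first-order photoactivation model: cumulative release as a function of
# delivered light dose (intensity x exposure time). All numbers are
# hypothetical illustrations, not data from the study.
def release_fraction(intensity_mw_cm2: float, exposure_s: float,
                     k_per_mj_cm2: float = 0.001) -> float:
    dose_mj_cm2 = intensity_mw_cm2 * exposure_s  # mW/cm2 x s = mJ/cm2
    return 1.0 - math.exp(-k_per_mj_cm2 * dose_mj_cm2)

if __name__ == "__main__":
    # Same intensity, different exposure times: longer irradiation releases more cargo.
    for label, seconds in [("60 s exposure", 60.0), ("600 s exposure", 600.0)]:
        print(f"{label}: ~{release_fraction(10.0, seconds):.0%} released (toy model)")

Under these made-up parameters, a short pulse releases roughly half the cargo while a tenfold longer exposure releases nearly all of it, which is one way to picture the "repeated small-dosage releases" the researchers describe.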

“This system has potential applications for precise drug delivery and might be the next generation of novel platforms for the treatment of cancers such as colon and stomach cancer,” Zink and Tamanoi said. “The fact that one can operate the mechanism by remote control means that one can administer repeated small-dosage releases to achieve greater control of the drug’s effect.”

Tamanoi and Zink say the research represents an exciting first step in developing nanomachines for cancer therapy and that further steps are required to demonstrate actual inhibition of tumor growth.

The research team also includes Eunshil Choi, a graduate student in Zink’s lab, and Jie Lu, a postdoctoral researcher in Tamanoi’s lab.

On the Net:

University of California – Los Angeles

Paper Abstract

California NanoSystems Institute

Digging For Absolute Answers at Stonehenge

Researchers began their two-week excavation at Stonehenge on Monday, the first dig at the site since 1964.

Throughout history, Stonehenge’s great mystery has bred an abundance of theories about its origin. Some claim the site served ceremonial purposes, while other, more eccentric beliefs link the monument to extraterrestrial life.

Stonehenge experts Tim Darvill, archaeology professor at Bournemouth University, and Geoff Wainwright, president of the Society of Antiquaries, intend to set the record straight. By using carbon dating techniques and analysis of soil pollen, they hope to find definitive answers.

“If you want to find out why Stonehenge was built, you need to look 250 kilometers away to the Preseli Hills in north Pembrokeshire, where the first bluestones that built Stonehenge come from,” Wainwright said.

The ambiguous arrangement is composed of large sandstone blocks surrounded by smaller bluestones. Darvill and Wainwright hope that their research will affirm their thesis that Stonehenge served as a place of healing.

“This was a place of healing, for the soul and the body,” said Darvill. “The Preseli Hills is a magical place. The stones from there are regarded as having healing properties.”

Recently uncovered evidence may support their claim. Many of the Neolithic remains found near the site show signs of skeletal trauma such as broken bones and evidence of operations to the skull.

The remains have been linked to people who had traveled long distances, possibly in order to seek supernatural cures from Stonehenge.

The original site consisted of about 80 bluestones, weighing between one and four tons each. They were transported from South Wales to the Salisbury Plain about 5,000 years ago. Almost two-thirds of the stones were either stolen or weathered over time.

“In the early 1900s there were signs in Amesbury (the nearest town to the site) offering the hire of a hammer so that people could come up here to chip off their own bit of bluestone,” Darvill said.

In the 1990s, archaeologists attempted to date the first circle and judged it to have been erected in 2550 BC.

The current excavation will give researchers a more precise dating of the Double Bluestone Circle, which was the first circle to be erected at the site. They plan to dig a 3.5m by 2.5m trench around the area in order to collect pieces of the original bluestone circle.

“This small excavation of a bluestone is the culmination of six years of research which Tim and I have conducted in the Preseli Hills of North Pembrokeshire and which has shed new light on the eternal question as to why Stonehenge was built,” Wainwright said.

“We will be able to say not only why but when the first stone monument was built.”

On the Net:

Society of Antiquaries

Bournemouth University

Stonehenge

Stroke Patients Re-Learn to Walk Correctly Again Using Special Treadmill

DALLAS, April 1, 2008 /PRNewswire-USNewswire/ — Of the more than 700,000 people who experience a stroke each year, many never regain the ability to walk as they did before their stroke. But physical therapists, using a specialized treadmill, have discovered a new way to help stroke patients walk again — correctly.

The results of their study, conducted at Baylor Institute for Rehabilitation (BIR), appear in the April 2008 issue of the Archives of Physical Medicine and Rehabilitation.

Often, during rehabilitation, stroke patients develop an abnormal gait pattern that can be difficult and sometimes impossible to correct.

“Gait impairment is common after a stroke with many survivors living with a walking-related disability, despite extensive rehabilitation,” says Karen McCain, D.P.T., lead investigator of the study at BIR. “Walking incorrectly not only creates a stigma for these patients, but it also makes them more susceptible to injury and directly affects their quality of life.”

After completing the pilot study, all seven of the enrolled patients were able to walk with an essentially normal gait pattern, without the use of even a cane.

“In my 14 years as a physical therapist, I have not treated seven stroke patients total who walk this well. We are definitely on to something,” McCain adds.

Lisa Day, a 44-year-old with no family history of stroke, was completely paralyzed on her left side after experiencing a stroke in September 2007. Within three weeks of sessions on the treadmill she was walking again the way she did prior to her stroke.

“I brought a wheelchair home from the hospital in case I needed it, but I only used it twice. To see me walk, you would never know that I had a stroke,” says Day.

Lisa is now back on the treadmill again — this time for exercise — and walks over a mile several days a week.

The approach, known as locomotor treadmill training with partial body weight support, consists of a treadmill outfitted with a harness. The patient is secured to the harness to support a portion of their body weight while walking on the treadmill. In this reduced weight environment, the patient can relearn how to walk in a safe and controlled manner. Once the patient becomes stronger, more body weight is added until they can comfortably walk on their own without the need for assistance.

“The key to the success of our method is early intervention. All of the patients started on the treadmill as soon as possible during the acute period of recovery after their stroke,” explains McCain. “We wanted to keep these abnormal gait patterns from developing in the first place.”

Currently, there is no consensus regarding the optimal treatment for reestablishing a normal gait pattern in stroke patients — most are rehabilitated using walkers and other assistive devices.

“Our ultimate goal for this study is to one day change the clinical practice in physical therapy,” adds McCain.

Baylor Institute for Rehabilitation is a not-for-profit, 92-bed hospital that offers intense, specialized rehabilitation services for traumatic brain injuries, spinal cord injuries, strokes, and other orthopaedic and neurological disorders. Physicians specializing in physical medicine and rehabilitation, known as physiatrists, lead interdisciplinary clinical teams, which work with patients to design and implement a treatment program to achieve the patient’s goals. In 2007, Baylor Institute for Rehabilitation was named among the top rehabilitation hospitals in U.S. News & World Report’s “America’s Best Hospitals” guide, an honor it has received for 10 years.

For more information about Baylor Institute for Rehabilitation, visit http://www.baylorhealth.com/.

Baylor Health Care System

CONTACT: Ashley Howland, Marketing & Public Relations Consultant of Baylor Health Care System, +1-214-820-7540, [email protected]

Web Site: http://www.baylorhealth.com/

A Cancer Patient’s Evolution From Surrender to Survivor

By Jason Sheeler, The Dallas Morning News

Apr. 1–“Loose change is so annoying,” I thought, focusing on anything but the present.

I lay inert, terrified to move, my body unnaturally reclined at a 45-degree angle in the hospital bed. But my mind raced, keeping pace with the soft-footed ICU nurses attaching sticky pulse-keeping things all over my body. I clamped my eyes shut, every light, noise and heartbeat feeling like a slap to the nickel-size hematoma I woke up with that morning — intra-cranial blah-blah-blah. The rancid smell of hospital hygiene, like expired chardonnay, and my mother’s muffled and panicked voice would not let me forget where I was. Today would be rough, I knew, with or without the lovely IV painkillers.

No one was saying what everyone was thinking: My 29-year-old body was not producing blood platelets. My body rejected platelet transfusions. I was bleeding in my brain.

My bone marrow was 4 months old, transplanted from an unknown donor in September to battle myelodysplastic syndrome, a rare blood disorder sometimes known as “pre-leukemia.” Four rounds of high-intensity chemotherapy had left me 30 pounds lighter. I swallowed 42 pills a day. And I had spent every minute of the last 11 months fearing this part of the hospital. I was filled with hate.

“Pompous know-it-alls,” I thought, watching the doctor furiously scribbling something important, robotic nurses fluttering around him. “All the lab coats and Land Rovers and mandated sincerity. Just drop it and tell me what’s going to happen. Tell me I am going to die.”

I tried to think of pleasant things, such as what my funeral would be like. I had told my friends I wanted an a cappella choir to perform Leonard Cohen’s “Hallelujah,” followed by Jennifer Lopez’s “Love Don’t Cost a Thing.” I smiled at the thought of my mother’s face when, per my explicit stage directions, the singers in the chorus fling off their breakaway robes to reveal a pastel palette of Juicy Couture sweat suits as they sing J.Lo’s gospel: “Think I wanna drive your Benz, I don’t.”

The hollow screech of a flat-lining patient elevated my blood pressure and sent the flock of silent scrubs away in unison, like startled pigeons.

I exhaled slowly, my cheeks full of dry, recycled air. “Well, that sucks.”

The first signs

Exactly 11 months before, I was living in Little Rock, Ark., and celebrating Valentine’s Day with friends. After a raucous night, I woke up with a hangover I didn’t recognize. And I had met them all.

I went to the doctor Monday, after sleeping 40 of 48 hours. It was bizarre. I was almost gelatinous. After being sent home with antibiotics and an unnecessary prescription for heartburn that I still can’t explain, I went back to bed. Later, I received a call: Get to the hospital.

The doctor may have been a Nexium pusher, but he did a blood test. The results were startling. My blood count was at 6, meaning I had less than half the amount of blood found in healthy men — not that I was ever one of those.

Anyone who knew me in the ’90s will be curious to see my name under the banner “Healthy Living.” My barometer for health was always my waistline, with its number in direct proportion to my well-being. If it was 31 inches or less, I was doing something right, I surmised.

In the summer of 2002, while living in New York, my energy level plummeted. My gums started to bleed spontaneously. Illness was taking hold. But I had a 30-inch waist.

On Feb. 19, 2003, at the same Little Rock hospital where I was born, I was told I had MDS. My bone marrow was not producing enough blood — red blood cells, white blood cells and platelets. The treatment would be chemotherapy, and the only hope for a cure would be a bone marrow transplant. What I took away from that day, other than the punched looks on my parents’ faces and the doctor’s confusing illustrations on the dry-erase board, was that about one in three patients with MDS develops acute myelogenous leukemia. Nice.

When I found out about my cancer, my first reaction was embarrassment. “Cool people don’t have cancer,” I thought. I was ashamed, humiliated and felt immediately left out. My friends were still living, I thought, and I, suddenly and irrevocably, was not. I had this disease that seems to occur in people older than 60. I begged my parents not to tell anyone, fearing my picture would end up on coffee cans at service stations. I left the hospital, made reservations at my favorite restaurant, picked up two bottles of Veuve Clicquot and toasted my fate with friends who didn’t know why we were toasting. I smoked, I drank and I gave up.

Plea for help

But my parents weren’t ready to quit. Two things happened for which I credit my life.

First, ignoring my pleas for isolation and anonymity, my father asked for help. While I received crocheted caps and creepy drawings from Sunday school children (including a Matisse-looking graveyard scene with angels flying above), I also found a great doctor. A family friend connected us with Dr. Kari Nadeau at Stanford University Medical Center. She became a mentor of sorts, advising us on doctors and procedures. It was her advice that I must get the very best treatment from the start. And that meant going to the University of Texas MD Anderson Cancer Center in Houston.

Second, my mother got organized. She tackled the situation with her Type-A tenacity; the house my parents rented in Houston during the two years we spent there was wallpapered with charts, its bookshelves stuffed with notebooks chronicling every medication and procedure. Cancer is a full-time job for both the patient and the caregiver.

After three rounds of chemotherapy, my cancer was in remission, but with the caveat that it probably would return. The race was on to find a bone marrow donor.

Waiting is the worst side effect of cancer treatment, and it hurt. My tolerance for physical pain grew. The stomach problems, the headaches, even the biopsies, with a 16-gauge needle hammering into my pelvis, extracting marrow and bone; it all became routine. I learned to deal with physical pain. I could understand it. It was finite.

The abstraction of emotional pain kept me suffocated. And the lag time left me with more opportunities to ask, “What if?”

Hospitals are bureaucratic dinosaurs, with office procedures that can rival a post office’s expediency. The outer rooms are filled with patients one-upping each other with “you’ll never believe this” stories, each educating the other on newer and more insane things that may happen.

Due to my increasingly terminal uniqueness, finding a bone marrow donor was a struggle. I had “difficult typing,” they told me, with one antigen not found commonly in the Caucasian population. Most whites will find 40 to 50 matches from the National Bone Marrow registry, with those with the most common genotyping finding up to 7,000. My brother was not a match, either. And, after not finding a match in the U.S. registry of more than 7 million donors, and a private search of my friends, family and anyone meeting my mother, the search was expanded internationally.

Good news. A match was found. Great news. It was a perfect match.

Risky business

Even a perfect match is not without risks. For seven days, my bone marrow was decimated by high-intensity chemotherapy, after which my donor’s marrow would be transfused intravenously. On Sept. 25, 2003, the bone marrow and blood mixture entered my veins to, we hoped, settle into my bones. Then, another wait: My body would decide whether to accept or decline my one shot at life. Each day, we hoped that the donor marrow would begin to produce cells.

I spent 27 days in the hospital, celebrating both the birth of my new DNA and my 29th birthday. A month later, my platelets disappeared. And, for the first time, the cold percentages with which doctors speak entered the “D” range.

Idiopathic thrombocytopenic purpura is a condition of low platelets with no known cause. The condition was unrelated to my transplant, just an annoyingly uncommon situation. I knew things were bad when my hospital room became the first stop for visiting oncologists.

Back in the ICU in January 2004, the hematoma disappeared after eight days. My intestines continued to slowly bleed, my platelets stayed away, and the doctors continued to answer my yes or no questions with statistics and words with too many consonants. Experimental medicines, nuclear imaging and daily plasma exchanges were my life for the next 15 months.

My health insurance ran out, leaving me on Medicare and tripling my family’s paperwork. Steroids added more than 50 pounds to my frame. One of the experimental medicines destroyed the nerves in my feet. I began to fall — at the hospital, at home and, embarrassingly, twice into a sale rack at the Gap — and so did my percentages of surviving.

I was ready to die. The idea seemed like a welcome change of pace.

In early 2005, I sought the advice of a platelet specialist in New York. It was her opinion that my spleen was the cause of the ITP. In March, with no cells to clot my blood, I underwent a splenectomy. Almost immediately, my platelets returned. But my desire to live did not.

When I woke up in the recovery room, I learned that everything was great, normal — I was officially a survivor. Whether I liked it or not.

Moving on

The sleek architecture of the Texas Medical Center complex in Houston slid past my cab window last month, the buildings and streets neatly labeled with the names of the rich and the dead. It’s almost perverse, the gorgeous facades hiding gruesome acts happening 24 hours a day.

As I entered the outpatient clinic at MD Anderson, I moved from present to past. The hurried lobby has the feel of an airport, everyone going in different directions with different connections, with different fates awaiting at their gate.

I think about the first time I was here. The resentment I felt, not wanting to be one of those people, masked and hairless with gray skin, frail and contorted bodies hiding behind tubes and blankets. The misery of the intravenous catheter stitched to my chest, a constant reminder of my illness, a source of infection and an obstacle to showering. That bitterness grew each day, with my entire fate hanging on black Courier-font numbers. The terrifying platelet count of zero adjacent to the boldface “normal averages.”

Walking through the front doors, five years after diagnosis, I saw many ghosts: the nurses I disrespected with my anger and bitterness while living as only the dying can; the transplantation coordinator who worked tirelessly to find me a donor while I didn’t even bother to show up for appointments; my doctor, who saved the life that I didn’t even want.

“You have been through war, and, along with that, can come post-traumatic stress,” Dr. Borje Andersson told me as we discussed the complications of ITP. “No one can recall seeing a patient like you. Everyone just shakes their head. This was fairly unique, to put it mildly.”

“Yes, it was bad,” he said. “In more than 20 years, I have only seen one other person survive a bleed of the brain in your condition.” But the recovery “was not my doing. It was divine.”

I was gob-stopped when hearing his words, that a man of science, with a handful of degrees and the almighty lab coat, would tell me that my fate was ordained by a power greater than him.

It rolled over me. The uncommon cancer, the rare genotype, the one donor, the successful match, the hematoma, the dicey splenectomy.

And then, it happened: I was grateful. Leonard Cohen’s words rolled through my head: And every breath we drew was Hallelujah.

“Your prognosis now should be as good as any 33-year-old man who is basically otherwise healthy, who has not had a transplant,” Dr. Andersson said, asking to see me again in September, which will be my fifth bone marrow birthday.

Before I left MD Anderson, I filled out a form asking to meet my donor. A cold, impersonal form with boxes for how much information to disclose to this person who gave me not only his DNA, but life.

The process is a little like Internet dating, with both parties having to agree to meet, a third party facilitating the communication. Once again, I’m waiting, but not worried. I think it will be a good match.

Jason Sheeler is a staff writer who covers style.

JASON’S TIMELINE

February 2003: Diagnosed with myelodysplastic syndrome at 28, began receiving regular blood and platelet transfusions

March-May 2003: Began chemotherapy at MD Anderson Cancer Center in Houston. The first round was not successful, but, by the third round, the cancer was in remission. The search for a donor began.

August 2003: Bone marrow donor found through the National Bone Marrow Donor Program’s Bone Marrow Donors Worldwide

September 2003: Bone marrow transplant

November 2003: Idiopathic thrombocytopenic purpura discovered, platelet level dropped

January 2004: Intracranial hematoma found

February 2004-March 2005: Daily visits to clinic for plasma exchanges and blood transfusions

January 2005: Met with platelet specialist (Dr. Henny H. Billett, director, Clinical Hematology, Albert Einstein College of Medicine in New York City, who specializes in platelet disorders). She recommended a splenectomy.

March 2005: Spleen removed and platelet level returned to normal

Today: Continuing biannual checkups at MD Anderson. The doctors are focusing on complications in transplant patients that can affect the eyes, skin and gut.

How to help

The National Bone Marrow Donor Program has access to nearly 7 million donors who may be able to help the more than 10,000 Americans who need bone marrow transplants each year. With only 30 percent of patients finding a match in their family, most will need an unrelated donor.

Joining the registry is easy and relatively painless. Most people ages 18 to 60 are eligible. The program will send a tissue-typing kit to your home for a tax-deductible fee of $52. After you submit a cheek swab, your tissue type will be determined and added to the registry. Keeping your contact information updated is crucial.

Minority donors are in demand. Because genetic traits are inherited, a patient’s most likely match is someone of the same heritage. Recruitment is focused on American Indian, black, Latino, Asian and Pacific Islander donors.

If you are matched with a patient, you will be asked to become a donor. Under anesthesia, a small portion of your bone marrow will be extracted, leaving your back sore for a few days. Your body replaces the lost bone marrow in four to six weeks.

For details or to order a tissue-typing kit, call 1-800-627-7692 or see www.marrow.org.

—–

To see more of The Dallas Morning News, or to subscribe to the newspaper, go to http://www.dallasnews.com.

Copyright (c) 2008, The Dallas Morning News

Distributed by McClatchy-Tribune Information Services.

For reprints, email [email protected], call 800-374-7985 or 847-635-6550, send a fax to 847-635-6968, or write to The Permissions Group Inc., 1247 Milwaukee Ave., Suite 303, Glenview, IL 60025, USA.

When the Donor Says Yes and the Family Says No

By Howard, Richard J; Cornell, Danielle L; Koval, Charles B

Signed donor cards clearly demonstrate the donor’s intention to donate organs after death. In many states, this donation cannot be rescinded by the next of kin, and organs can be recovered from the donor even if the family objects. The family usually does not object if the donor has signed an organ donor card, especially if the donor had discussed the issue with the family. In some situations, however, the family objects to donation despite the signed organ donor card. If the organ procurement organization pursues donation, adverse publicity and even legal action are possible. It can be a challenge for organ procurement personnel to deal with families who object to donation in the face of a signed organ donor card in a manner that will lead to successful organ recovery without adverse consequences. This article describes 4 cases where the donor had a signed organ donor card but the family initially objected to donation. Ultimately organs were recovered from 3 of these donors. (Progress in Transplantation. 2008;18:13-16)

The recent and ongoing efforts to increase the number of people on organ donor registries throughout the United States will challenge organ procurement organizations (OPOs) to deal with what may be new circumstances. The shortage of donor organs for transplantation is the main problem confronting transplant centers worldwide. In the past few years, the federal government has undertaken a few initiatives to get OPOs to recover organs from donors that they would not have considered previously and to recover as many organs as possible.

In October 2006, the Health Resources and Services Administration (HRSA) launched a program, the Donate Life America Donor Decision Collaborative, with the objective of adding 40 million people to donor registries throughout the United States. Currently 60 million donors are registered, and the HRSA goal is to bring that number to 100 million. In many states, people can indicate they want to be a donor on their driver’s license at the time of renewal.1,2 Currently 48 states have donor registries, and 32 of these registries are web based (Kathy Giery, Donor Designation Collaborative, personal communication). The registry lists persons who have signed organ donor cards, allowing OPOs to identify people who wish to donate.

The laws in many states provide that if a person has indicated that he or she wants to be an organ donor, the family cannot override that decision at the time of death. Some states have a statutory provision that permits a donor to recant a written intent to donate by making a written or verbal statement to family members, in which case the donation is not pursued. Otherwise, OPOs can recover organs from such donors even if the family objects. Efforts should be made to have potential donors tell their families that they have signed an organ donor card and want to donate. In July 2003, Florida enacted the Nick Oelrich Gift of Life Act, which states that the family cannot override the wishes of the donor, and that a signed donor card or indication of desire to donate on a person’s driver’s license creates a presumption of the donor’s intent.3 In Florida, however, if the donor tells 2 people that he or she no longer wants to donate, the effect of the signed donor card or indication on the driver’s license is rescinded. Consent from the donor’s survivors is not required.

Usually families do not object to donation if they are shown that the individual has signed an organ donor card or otherwise indicated his or her wish to be an organ donor. In some cases, however, the donor’s family may still object. This situation can pose unique challenges for OPO personnel.

Retrospective Case Reviews

Between January 1, 2004, and December 31, 2006, LifeQuest Organ Recovery Services approached 815 potential organ donors, of whom 159 (19.5%) had signed donor cards for first-person consent. In 4 cases, the potential donors had previously signed an organ donor card but the families objected to donation. The institutional review board at the University of Florida approved this review.

Case 1

A 20-year-old man was declared brain dead after a self-inflicted gunshot wound to the head. He had signed an organ donor card. His parents were divorced. His mother lived out of state and happened to be a lawyer. She initially objected to organ donation and threatened to go to the news media if the OPO persisted in wanting to recover his organs. OPO personnel spent considerable time with her and explained the Florida law on respecting a person’s directive regarding organ donation. With time, she did change her mind and agreed with her son’s choice to be an organ donor. Further discussions revealed that she wanted to make sure her son’s death was not a homicide, and she was concerned that organ recovery might impede the ability to determine the cause of death. His organs were recovered and transplanted.

Case 2

An 18-year-old man was brain dead as a result of a gunshot wound to the head. The family was angry and felt that the physicians and hospital were giving up on their efforts to save their son. His driver’s license indicated he wanted to be an organ donor and his name appeared on the state of Florida donor registry. The OPO coordinator approached the family and discussed organ donation with them. Initially the family was not ready to discuss donation. Later, the OPO coordinator again discussed organ donation with them and pointed out that their loved one’s driver’s license indicated he wanted to donate his organs for transplantation. Nevertheless, the family strongly objected to organ donation despite their son’s expressed desire to do so.

The OPO coordinator spent a considerable amount of time explaining organ donation and the applicable state law to the family, but they still opposed recovery of their son’s organs. Finally, a copy of the actual statute was brought to the family so they could read it for themselves. They read the law and found a provision of the law that states that if the individual verbally tells 2 family members he has changed his mind and no longer wants to be an organ donor, the signed donor card is rescinded. The family then told the coordinator that the donor had so indicated while he was alive and therefore the family’s decision opposing organ donation was final. We do not know, of course, whether the donor actually had indicated to his family that he no longer wanted to be an organ donor, but in that situation the OPO chose to no longer pursue organ recovery.

Case 3

A 28-year-old male sheriff’s deputy was severely injured in a motor vehicle accident. He had signed an organ donor card and had told his family of his desire to donate. He was ultimately declared brain dead, and the OPO coordinator approached his family about donation. Family members, led by his father, indicated they knew about his desire to be a donor but they were adamantly opposed to organ donation. They said it was against their religion. They had initially been told about organ donation by their neurosurgeon, who was not trained in best practices for approaching families regarding donation. Two hours later, the OPO coordinator, together with a more experienced senior OPO supervisor, approached the family again and told them about the patient’s wishes and state law regarding respecting the wishes of a donor. The donor’s father eventually consented to organ donation. The OPO no longer asks the next of kin for consent when appropriate documentation of intent to donate is available. After the donation, the donor’s father thanked the OPO coordinator for helping them respect their son’s wishes.

Case 4

A 54-year-old woman underwent carotid endarterectomy. She proceeded to brain death because of an unfortunate outcome from the operation. Her family was totally unprepared for her death. They were notified that she was brain dead by the intensive care unit (ICU) physician. Shortly thereafter, the OPO coordinator approached the family, and they were told that their mother had signed an organ donor card. This was the first time the family had heard about their mother’s desire to be an organ donor. The family initially opposed organ donation. Furthermore, the social worker in the ICU strongly supported the family and urged the OPO coordinator not to pursue organ donation in this case. The family left the hospital to arrange for their mother’s funeral. A senior OPO staff member called the daughter and discussed her mother’s hospital course and her indication of her desire to be an organ donor. The daughter decided to return to the hospital for further discussions with an OPO senior staff member and ultimately agreed to organ donation. The donor’s daughter is now a strong advocate for organ donation. She has signed an organ donor card and has shared this decision with her adult children.

Discussion

Since the landmark book discussing family relationships in organ donation written by Simmons et al,4 many articles about family interactions in consent and other aspects of organ and tissue donation have been published.5-9 Use of first-person consent is relatively new in organ and tissue donation. Honoring the wishes that the donor expressed while alive now has legal backing in Florida and several other states.

These cases all presented a challenge for the OPO because of the initial opposition to donation from the families. In the end, only 1 of the 4 families persisted. These cases were useful to OPO personnel in learning how to confront similar situations in the future. The Uniform Anatomical Gift Act (UAGA) of 1968 provided for first-person consent.10 The UAGA did not address, however, whether the donor’s desire to become an organ donor could be overridden by the surviving family members. The UAGA was passed into law in all 50 states. The UAGA was revised in 1987.11 The procedures associated with first-person consent were clarified, but the family’s ability to override the wishes of the decedent was not clearly addressed. Only 26 states, however, have passed this revision. The UAGA was revised again in 2006, and the first-person consent portion (section 8) was strengthened further.12 This revision clearly states that family members or other survivors cannot alter or revise the wishes of the donor. It is too early to know how many states will eventually pass this latest revision. Those states that have passed the 2006 revision have the legal backing to recover organs from donors with first-person consent even if the family objects. Because this version of the law was not in effect at the time, the OPOs in Florida worked together to obtain passage of the Nick Oelrich Gift of Life Act in 2003, which greatly strengthened first-person consent in Florida.

In most states, a properly executed organ donor card or indication on a person’s driver’s license of the desire to donate is sufficient in and of itself to permit organ recovery. Just as a will detailing how a person wants to dispose of his or her assets after death cannot be changed by surviving family members, so a donor’s directive cannot be changed after death by the survivors. Many OPOs are reluctant to rely solely on a donor’s directive and still seek approval of the family because of concern about unfavorable publicity and the lack of widespread knowledge and awareness of the legal effect of donor directives. We historically sought approval from the donor’s family for organ donation, as occurred in the first 2 cases presented here. As we gained experience with Florida law, we ceased seeking written permission from the family. The signed donor card is sufficient. We do notify the family or next of kin about the directive, as a properly conveyed change in the donor’s desire would supersede the donor’s directive according to the Florida statute, as exercised by the family in case 2. We try to cultivate a relationship with the family to ensure a positive donation experience, for follow-up care of the donor family, and to obtain a reliable medical and social history.

The death of virtually all organ donors is unexpected and tragic. It is not surprising that their families are distraught and in a state of shock. The decision they make shortly after being told of the untimely death of a loved one may well not be the one they would make if they had more time for reflection. The first, third, and fourth cases showed us the value of approaching families again if their initial decision is against organ donation. Although we no longer seek to obtain the consent of survivors, we make every attempt to have them understand and accept the wishes of the decedent. In fact, almost all families do. In only a small number of families that object to donation are more time and a second or third discussion required. How the discussion is framed is also important. We no longer ask family members to sign any document when we have documentation of a donor’s intent. We adopt an approach that says to the family that their loved one has agreed to organ donation and we intend to honor his wish. We inform them how the process will take place and how long it will take, and we provide other information that families have a right to know.

The second case was the only one where we failed to get consent from the family to recover organs. We thought if we showed them applicable Florida law, they would better understand the situation and withdraw their objection. We suspect, but cannot prove, that the family read the section of the law describing how an individual can reverse a previously expressed intent to donate. After reading the law, the family told the OPO that in fact the patient had verbally rescinded his desire to be an organ donor. There was little we could do in the face of a united family under these circumstances, and no organ recovery was made.

The fourth case was particularly difficult because not only did the family initially object to donation, but a social worker involved in the case was upset by the manner in which the family was approached regarding donation. The social worker’s support of the OPO’s mission and confidence in their processes was significantly damaged. This case again points out that positive effects can come about as a result of a second discussion with the family. It also demonstrates that it can be helpful to separate notification of the family that their loved one has died from the notification that the donor had signed an organ donor card.

This fourth case also stresses the importance of educational programs for the ICU care team. The nurses, physicians, and social worker became intimately involved with the family and did not want the family to have any more anguish by having the OPO recover the organs against the family’s wishes. Furthermore, many of the ICU, medical, and nursing staff indicated that they did not think most of the public understands that saying yes to donation at the driver’s license bureau means organs can be recovered despite their loved ones’ grief. The importance of public and professional education related to donor directives cannot be overstated. OPO personnel must be sensitive not only to the family’s feelings but also to those of the ICU providers. They must take care to explain to them why the OPO is doing what it does; above all else, the OPO speaks for the donor. This education of the ICU staff is better done in advance of any actual donation so that it is separated from what can be an emotionally charged situation.

The OPO’s pursuit of organ donation despite the refusal of the family can result in negative publicity. One can only imagine what a creative reporter could do with a story depicting the OPO as recovering organs despite the objection of the grieving family. Then relations between the OPO and the ICU staff can be threatened, as the fourth case so clearly illustrates. The ICU staff may have bonded with the family and not the donor, who most likely was always unconscious and noncommunicative. The OPO must always try to have the best relations with the ICU staff in these cases, because a negative relationship can adversely affect future organ donation. It is important that the OPO educate the ICU staff on a regular basis about signed organ donor cards and the implications they have for donation despite the family’s objection. We always attempt to have a team huddle between the OPO and the healthcare team caring for the patient before approaching the family. The team huddle should include reference to the presence of a signed donor card and review of the implications of the document. This gives the OPO another chance to educate the ICU staff about the implications of a signed organ donor card.

The public should also be better educated about the implications of signing an organ donor card. Probably few are told that organs can be recovered despite any objections of their family. Some who sign donor cards may believe that the family still has to give consent. Knowing that their intention to donate overrides the family’s wishes may cause some individuals to refrain from signing donor cards. It may also spur them to inform their families that they want to be donors in order that their wishes are respected when they die.

There are also potentially negative consequences for the OPO if it fails to recover organs from a donor who has a signed organ donor card. First is an ethical issue. No one but the OPO may be speaking for the donor who has indicated his or her desires. The donor clearly indicated that he wished to be an organ donor upon his death. Just as courts do not allow a family to change a will disposing of a dead person’s assets, we should not allow the family to alter the individual’s specified desire to be an organ donor. A donor card is similar to a living will and other end-of-life documents that define the care that someone wants to receive. Although unlikely, does the OPO expose itself to later legal action if it does not recover the organs of someone with an organ donor card? Could the estate file a lawsuit claiming that the wishes of the donor were not respected by the OPO? Could a patient waiting for an organ sue the OPO when he learns that organs were not recovered, and he was high on the list to receive the organ from that donor? And what if he died because he did not receive a lifesaving organ and his family brought suit against the OPO? There is no case law addressing these issues. The Florida statutes do not provide any clear penalties for failure to recover organs. It is possible, however, that some regulatory or civil liability could result.

So how does an OPO avoid these potential negative outcomes if it insists on recovering organs from someone with a signed organ donor card? The OPO should be ready to recover organs from these donors despite the family’s wishes; otherwise the organ donor card becomes meaningless. The OPO must be prepared in advance for any negative publicity that might occur. The hospital’s administration or legal department should be contacted to ensure they understand the issues involved and raise no objections. The OPO should be aware of possible long-term effects on the family, although those effects can be virtually impossible to identify. OPO staff must also consider the impact on future relations with the hospital and ICU staff, and how many organs are likely to be transplanted from the donor. We believe that the ICU staff must fully understand the law and that the primary concern should be carrying out the patient’s wishes. We believe donor advocacy is one of the important missions of an OPO. Keeping this principle in the forefront when making difficult administrative decisions will assist in making those decisions and will be in keeping with an OPO’s ultimate mission: to recover organs for transplantation.

References

1. HRSA Web site, http://www.hrsa.gov. Accessed January 21, 2008.

2. OrganDonor.gov: access to US government information on organ and tissue donation and transplantation. http://www .organdonor.gov. Accessed January 21, 2008.

3. The Nick Oelrich Gift of Life Act, Laws of Florida, Chapter 2003-046, amending sections 765.512 and 765.514, Florida Statutes. http://www.myfloridahouse.gov/sections/Bills/billsdetail.aspx?BillId=9501. Accessed December 18, 2007.

4. Simmons RG, Klein SD, Simmons RL. Gift of Life: The Effect of Organ Transplantation on Individual, Family, and Societal Dynamics. New York, NY: Wiley; 1977.

5. Atkins L, Davis K, Holtzman SM, Durand R, Decker PJ. Family discussion about organ donation among African Americans. Prog Transplant. 2003;13(1):28-32.

6. Morgan SE, Harrison TR, Long SD, Afifi WA, Stephenson MT, Reichert T. Family discussions about organ donation: how the media influences opinions about donation decisions. Clin Transplant. 2005;19(5):674-682.

7. Thompson T1, Robinson JD, Kenny RW. Family conversations about organ donation. Prog Transplant. 2004;14(1):49-55.

8. Siminoff L, Mercer MB, Graham G, Burant C. The reasons families donate organs for transplantation: implications for policy and practice. J Trauma. 2007;62(4):969-978.

9. Beard J, Ireland L, Davis N, Barr J. Tissue donation: what does it mean to families? Prog Transplant. 2002;12(1):42-48.

10. Uniform Anatomical Gift Act (1968). http://www2.sunysuffolk.edu/pecorip/SCCCWEB/ETEXTS/DeathandDying_TEXT/UAGA.htm. Accessed December 18, 2007.

11. Uniform Anatomical Gift Act (1987). http://www.law.upenn.edu/bll/archives/ulc/fnact99/uaga87.htm. Accessed December 18, 2007.

12. Revised Uniform Anatomical Gift Act (2006). http://www.anatomicalgiftact.org/DesktopDefault.aspx?tabindex=1&tabid=63. Accessed December 18, 2007.

Richard J. Howard, MD, Danielle L. Cornell, RN, BSN, Charles B. Koval, JD

LifeQuest Organ Recovery Services (RJH, DLC) and Shands Legal Services (CBK), Shands Hospital at the University of Florida, Gainesville

To purchase electronic or print reprints, contact:

The InnoVision Group

101 Columbia, Aliso Viejo, CA 92656

Phone (800) 809-2273 (ext 532) or

(949) 448-7370 (ext 532)

Fax (949) 362-2049

E-mail [email protected]

Copyright North American Transplant Coordinators Organization Mar 2008

(c) 2008 Progress in Transplantation. Provided by ProQuest Information and Learning. All rights Reserved.

Subcapsular Hematoma Evacuation As a Method of Evaluating Injured Kidneys for Transplant

By Yushkov, Yuriy; Hoffman, Allison; Giudice, Anthony

Background-Approximately one-third of organ donors in the United States are trauma victims. In general, kidneys with large subcapsular hematomas are not used for transplant because of the possibility of significant parenchymal injury. A large subcapsular renal hematoma may cause scarring resulting in renal parenchymal compression and development of Page syndrome.

Objective-To elucidate a successful method of evaluating kidneys subject to trauma, while also possibly preventing further damage and improving their function.

Design-Data were collected from the donor kidney pool of the New York Organ Donor Network from January 2006 through July 2007. Four kidneys during that period were determined to have significant subcapsular hematomas. Surgical intervention was undertaken and outcomes after transplantation were reviewed.

Main Outcome Measures-Four of the kidneys underwent a surgical procedure to drain the subcapsular hematoma allowing assessment of the underlying renal parenchyma. All 4 of these kidneys were deemed transplantable. After transplantation, 3 of the 4 kidneys had immediate function and did not require dialysis. The remaining kidney was removed as a result of primary nonfunction.

Conclusion-The described surgical intervention allows the transplant surgeon to accurately assess the extent of damage to a traumatized kidney while possibly preventing further damage to the kidney. (Progress in Transplantation. 2008;18:6-9)

According to the 2006 Annual Report of the Organ Procurement and Transplantation Network and the Scientific Registry of Transplant Recipients, the number of kidney transplants is growing at an annual rate of 5.9%.1 This is a substantial increase, except when compared with the 8.9% growth rate of the patient waiting list.1 The mounting gap between the organs available for transplantation and the rising number of people on the waiting list prompts the transplant community to reevaluate the existing criteria for kidney acceptance in order to increase the number of transplants and shorten the waiting list.2,3

Studies4-6 have shown that a large subcapsular hematoma may lead to scarring that results in renal parenchyma compression, which in turn may result in arterial hypertension and then, possibly, renal failure. This phenomenon was demonstrated in the animal model by Irwin Page in 1939 and has been supported by subsequent clinical findings, becoming known as Page syndrome.4-7 Page syndrome is defined as the compression of the transplanted kidney by a subcapsular hematoma leading to renal hypertension.4,8,9 These subcapsular hematomas have resulted from renal allograft trauma or posttransplant biopsy.10-14 The evacuation of a subcapsular hematoma has been used as the method of treatment to prevent renal failure.10-13

A significant percentage of organ donors in the United States are trauma fatalities (35.47% in 2006; 34.70% in 2005).1 In general, kidneys with large subcapsular hematomas are not used for transplantation because of the possibility of significant parenchymal injury. In most cases, these kidneys are discarded. From January 1, 2006, to July 31, 2007, the New York Organ Donor Network handled and evaluated 1722 kidneys, of which 20.4% (352) were from trauma victims. Of these kidneys, 17 (4.8%) showed physical signs of injury evidenced by hematomas in perirenal adipose tissue, and 9 of those had subcapsular hematomas. Five of the 9 had hematomas that were not clinically significant (we defined significant subcapsular hematomas as encompassing at least 20% of the kidney’s surface).

Evacuation of the subcapsular hematomas was performed to allow better assessment of the renal parenchymal injury and as a prophylactic measure against scarring of the subcapsular hematoma and the development of Page syndrome.

Methods

In order to better assess possible damage to the graft kidney and to prevent the formation of a Page kidney, the Preservation Unit of the New York Organ Donor Network began opening the renal capsule above the hematoma and evacuating the blood clots in June 2006.

From June 2006 to July 31, 2007, 4 kidneys were identified as having significant subcapsular hematomas, covering 33% to 50% of the kidney surface area. Two senior renal preservation staff members and 1 surgeon performed 4 capsular decompressions to better evaluate the tissue beneath the capsule. In each case, a subcapsular hematoma that was close to the hilum or covered an entire pole of the kidney was opened and relieved. In order to assess a kidney with a subcapsular hematoma and possible parenchymal trauma, it had to be completely cleaned of perirenal adipose tissue and blood clots. Occasionally, trauma affects the surrounding perirenal tissue, giving the appearance that the kidney may not be transplantable; however, after complete kidney dissection, the kidney itself may still be determined to be viable (Figures 1 and 2).

The subcapsular hematoma evacuation technique was performed as follows: the renal capsule was opened with an incision of 1 to 2 cm (Figures 3 and 4) and the hematoma was “massaged out” (Figure 5). The hematoma’s remnants were removed using a mosquito clamp (Figure 6) and rinsed out using a syringe with preservation solution. After the hematoma was removed, the kidney’s parenchyma was inspected for possible rupture (Figure 7).

The renal artery with its aortic patch was dissected and cannulated using a Seal Ring. The kidney was placed on the renal pulsatile preservation machine to better preserve the kidney and to evaluate vascular resistance and flow. The continuous preservation also allowed us to assess whether parenchymal leakage existed at the location where the hematoma had been evacuated. In each of the 4 cases, no leakage was observed by the preservation technician during the several hours that the kidney was on the pulsatile perfusion pump.
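The article does not spell out how the pump parameters are derived; on pulsatile perfusion machines the vascular (renal) resistance reported to the evaluating surgeon is conventionally the mean perfusion pressure divided by the flow rate. A minimal sketch of that calculation follows; the formula is standard machine-perfusion practice rather than something stated in this report, and the example numbers are hypothetical.

# Conventional machine-perfusion resistance index: mean perfusion pressure
# (mmHg) divided by flow (mL/min). Example values are hypothetical, not data
# from this report.
def renal_resistance(mean_pressure_mmhg: float, flow_ml_min: float) -> float:
    if flow_ml_min <= 0:
        raise ValueError("flow must be positive")
    return mean_pressure_mmhg / flow_ml_min

# e.g. 30 mmHg mean pressure at 100 mL/min -> resistance of 0.30 mmHg per mL/min
print(round(renal_resistance(30.0, 100.0), 2))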

Pictures were taken before and after decompression and sent to a local transplant center for evaluation. All 4 kidneys were placed on the pulsatile perfusion pump after decompression and parameters were within acceptable ranges. Without this intervention it is likely that these kidneys would have been discarded rather than being used for transplantation.

In one case, the kidney was placed on a pulsatile perfusion pump before decompression. The flows were in a range that gave a poor impression of the kidney’s potential for transplantation. In the other 3 cases, the hematoma was evacuated from the kidney before it was placed on the pulsatile perfusion machine.

Three of 4 kidneys had immediate function after transplantation, and the recipients were discharged, free from dialysis. One kidney was a primary nonfunctioning kidney and was subsequently removed.

Conclusion

Kidneys from trauma donors should be completely dissected from adipose tissue and blood clots before clinicians make a decision regarding their suitability for transplantation. The dissection should be performed by staff trained in evaluating trauma kidneys.

The evacuation technique permits a better assessment of the underlying renal parenchyma. It also allows the renal capsule to adhere back to kidney tissue instead of developing scar tissue or a cyst, possibly preventing the development of Page syndrome. The accumulation of blood and fluid in the hematoma may lead to venous outflow obstruction, kidney edema, and eventual kidney discard. The evacuations of subcapsular clots were performed on 4 kidneys. Each of these kidneys had hematomas covering more than 20% of their surface.

Evacuation of subcapsular hematomas appears to allow successful transplantation of renal allografts. In this small cohort, 3 of the 4 kidneys have performed well after transplantation.

References

1. US Organ Procurement and Transplantation Network and the Scientific Registry of Transplant Recipients. Trends in Organ Donation and Transplantation in the United States, 1996-2005. http://www.optn.org/AR2006/Chapter_I_AR_CD.htm?cp=2. Accessed November 28, 2007.

2. Cadillo-Chavez R, Santiago-Delpin EA, Gonzalez-Caraballo Z, et al. The fate of organs refused locally and transplanted elsewhere. Transplant Proc. 2006;38(3):892-894.

3. Stratta RJ, Rohr MS, Sundberg AK, et al. Intermediate-term outcomes with expanded criteria deceased donors in kidney transplantation. A spectrum or specter of quality? Ann Surg. 2006;243(5):594-603

4. Engel WJ, Page IH. Hypertension due to renal compression resulting from subcapsular hematoma. J Urol. 1955;73(5):735-739.

5. Moriarty KP, Lipkowitz GS, Germain MJ. Capsulectomy: a cure for the Page kidney. J Pediatr Surg. 1997;32(6):831-833.

6. Haydar A, Bakri RS, Prime M, Goldsmith DJ. Page kidney-a review of the literature. J Nephrol. 2003;16(3):329-333.

7. Page IH. The production of persistent arterial hypertension by cellophane perinephritis. JAMA. 1939;113:2046-2048.

8. Cromie WJ, Jordan MH, Leapman SB. Pseudorejection: the Page kidney phenomenon in renal allografts. J Urol. 1976;116(5):658-659.

9. Nguyen BD, Nghiem DD, Adatepe MH. Page kidney phenomenon in allograft transplant. Clin Nucl Med. 1994;19(4):361-363.

10. Martinez-Mier G, Garcia-Almazan E, Esselente-Zetina N, Tlatelpa-Mastranso MA, Mendez-Lopez MT, Estrada-Oros J. Blunt trauma in kidney transplant with preservation of renal function. Cir Cir. 2006;74(3):205-208.

11. Mohammed EP, Venkat-Raman G, Marley N. Is trauma associated with acute resection of a renal transplant? Case report. Nephrol Dial Transplant. 2002;17(2):283-284.

12. Rea R, Anderson K, Mitchell D, Harper R, Williams T. Subcapsular hematoma: a cause of post biopsy oliguria in renal allografts. Nephrol Dial Transplant. 2000;15(1):1104-1105.

13. Abutaleb N, Obaideen A. Renal tamponade secondary to subcapsular hematoma. Saudi J Kidney Dis Transplant. 2007;18(3):426-429.

14. Wanic-Kossowska M, Kobelski M, Oko A, Czekalski S. Arterial hypertension due to perirenal and subcapsular hematoma induced by renal percutaneous biopsy. Int Urol Nephrol. 2005;37(1):141-143.

Yuriy Yushkov, PhD, CTBS, MBA, Allison Hoffman, BS, Anthony Giudice, BS

New York Organ Donor Network,

New York

To purchase electronic or print reprints, contact:

The InnoVision Group

101 Columbia, Aliso Viejo, CA 92656

Phone (800) 809-2273 (ext 532) or

(949) 448-7370 (ext 532)

Fax (949) 362-2049

E-mail [email protected]

Copyright North American Transplant Coordinators Organization Mar 2008

(c) 2008 Progress in Transplantation. Provided by ProQuest Information and Learning. All rights Reserved.

Torsades De Pointes Related to Transient Marked QT Prolongation Following Successful Emergent Percutaneous Coronary Intervention for Acute Coronary Syndrome

By Kawabata, Mihoko; Hirao, Kenzo; Sasaki, Takeshi; Sakurai, Kaoru; Inagaki, Hiroshi; Hachiya, Hitoshi; Isobe, Mitsuaki

Abstract We report 2 patients in whom transient marked QT prolongation occurred after successful emergent percutaneous coronary intervention (PCI) for acute coronary syndrome. One patient developed torsades de pointes. In both cases, the QT interval became markedly prolonged within 24 hours after PCI, and this prolongation persisted for 4 days. The T waves had a giant and bizarre negative shape with a prolonged T-wave peak to T-wave end interval. No new-onset ischemia or congenital long QT syndrome was related to the episodes. The patients had not taken any drugs that could have prolonged the QT interval, and their serum potassium levels were within normal limits. Torsades de pointes following successful PCI for acute coronary syndrome is uncommon, but acquired long QT syndrome should be considered and treated in patients in whom giant and bizarre negative T waves and QT prolongation develop after PCI. (c) 2008 Elsevier Inc. All rights reserved.

Keywords: Torsades de Pointes; Acute coronary syndrome; Long QT syndrome; T-wave peak to T-wave end interval; Transmural dispersion of repolarization

Introduction

Torsades de Pointes (TdP) is a typical form of potentially life-threatening polymorphic ventricular tachycardia that is associated with a long QT interval. Long QT syndrome (LQTS) is a disorder that involves delayed ventricular repolarization and is classified as either the congenital or acquired form. There are several possible causes of acquired LQTS: medications, electrolyte abnormalities, heart disease (such as bradycardia, congestive heart failure, or myocardial ischemia), and other conditions such as cerebrovascular accidents.

Recent studies have shown that a long QT interval is not sufficient to provoke TdP.1-3 Heterogeneity of repolarization throughout the ventricle results in QT dispersion (QTD), a marker of dispersion of ventricular repolarization and electrical instability. This spatial inhomogeneity of repolarization develops in transmural regions as well and is known as transmural dispersion of repolarization (TDR). The interval from QRS onset to T-wave offset and that to T-wave apex correspond to the action potential durations of the midmyocardial M cells (the longest action potentials) and epicardial cells (the shortest action potentials), respectively. Therefore, TDR is reflected in the duration of the interval from T-wave peak to T-wave end (TPE).1,2,4,5 Enhanced TDR allows for the propagation of multiple waves of reentry, which is responsible for TdP by serving as a functional underlying reentrant substrate.6
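To make these two indices concrete, here is a minimal sketch in Python of how QT dispersion and the TPE interval are derived from routine interval measurements; the numbers are invented for illustration (chosen to be of the same order as those reported for patient 1 on day 2) and are not taken from the article's data.

```python
def qt_dispersion(qt_by_lead_ms):
    """QT dispersion (QTD): longest minus shortest QT interval across the measured leads."""
    return max(qt_by_lead_ms.values()) - min(qt_by_lead_ms.values())

def tpe_interval(qrs_to_t_end_ms, qrs_to_t_peak_ms):
    """T-wave peak to T-wave end (TPE) interval, used as a surrogate for
    transmural dispersion of repolarization (TDR)."""
    return qrs_to_t_end_ms - qrs_to_t_peak_ms

# Invented example values in milliseconds, illustrative only.
qt_by_lead = {"V1": 620, "V2": 660, "V3": 640, "V5": 600}
print(qt_dispersion(qt_by_lead))   # 60 ms of dispersion across the leads
print(tpe_interval(660, 460))      # a TPE interval of 200 ms
```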

We describe 2 patients in whom transient marked QT prolongation developed after successful emergent percutaneous coronary intervention (PCI) for acute coronary syndrome (ACS). In both patients, the T waves had a giant and bizarre negative shape with a prolonged TPE interval. TdP occurred in 1 patient.

Case reports

Case 1

A 71-year-old woman was referred to our hospital for worsening dyspnea and prolonged chest pain lasting 20 minutes. For 3 days before her admission, she had experienced dyspnea and chest pain on effort. She had diabetes for 15 years, hypertension for 6 years, hyperlipidemia for 6 years, and obesity as coronary risk factors.

Upon admission, her consciousness was clouded, and she struggled to breathe and wheezed markedly. She was cyanotic and drenched in sweat. Her systemic blood pressure was 209/111 mm Hg, and her pulse rate was 140 beats per minute. Cardiopulmonary arrest ensued. After successful cardiopulmonary resuscitation, acute myocardial infarction (AMI) was diagnosed (Fig. 1, left). One hour later, she underwent successful PCI of the proximal left anterior descending artery (LAD), which had 95% stenosis (Fig. 2). Her left ventricular ejection fraction was 60%, and peak creatine kinase level was 2764 IU/L.

The QT interval, defined as the interval between QRS onset and end of the T wave (defined as return of the terminal T wave to the isoelectric baseline), increased after PCI (Fig. 3, left). When U waves were present, the QT interval was measured to the nadir of the curve between the T wave and the U wave. It was corrected for heart rate according to Bazett's formula (QTc = QT/square root of the R-R interval). On day 2, a deep inverted T wave, remarkable QT prolongation, and QTD (Fig. 1, middle) were accompanied by development of incessant TdP (Fig. 4). The serum potassium level was 3.8 mEq/L. The patient had not been given any drug that would have prolonged the QT interval; however, nicorandil, an adenosine triphosphate-sensitive potassium channel opener, had been continuously infused. The medications prescribed after PCI were aspirin, heparin, and ticlopidine. There was no family history of sudden cardiac death, syncope, or LQTS. Chest pain did not recur. Although intravenous magnesium and lidocaine were ineffective, beta-blockers terminated the electrical storm. The patient was reintubated and sedated, and the QT interval normalized on day 5 (Fig. 1, right). Electrocardiograms (ECGs) showed no signs of new-onset ischemic events. Reevaluation of the coronary artery on day 19 revealed no changes.
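As a quick illustration of the rate correction described above, here is a minimal sketch of Bazett's formula in code; the interval values are hypothetical and not measurements from the patients.

```python
from math import sqrt

def qtc_bazett(qt_ms, rr_ms):
    """Heart-rate-corrected QT interval (Bazett): QTc = QT / sqrt(RR),
    with the R-R interval expressed in seconds."""
    return qt_ms / sqrt(rr_ms / 1000.0)

# Hypothetical example: a measured QT of 520 ms at a cycle length of 860 ms
# gives a QTc of about 561 ms, i.e. markedly prolonged.
print(round(qtc_bazett(520, 860)))  # -> 561
```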

Case 2

A 54-year-old woman was referred to our hospital because of chest pain on exertion that had appeared 1 week earlier. Hypertension had been present for 1 year. She had previously undergone surgery for ovarian cancer.

Fig. 1. The 12-lead ECG series in patient 1 showing dynamic changes in the QT and T-wave peak to T-wave end (TPE) intervals. Left, After successful cardiopulmonary resuscitation, anteroseptal acute myocardial infarction was diagnosed. The QTc interval was 485 milliseconds. Middle, The QTc interval increased maximally on day 2 (QTc = 708 milliseconds) and exhibited a deep inverted T-wave and remarkable prolongation of the TPE interval (200 milliseconds). Right, Both the QT and TPE intervals normalized on day 5 (QTc = 447 milliseconds, TPE = 80 milliseconds). Electrocardiograms over time showed no signs of any new-onset ischemic event.

Fig. 2. Emergent coronary angiography in patient 1 shows 95% stenosis of the proximal LAD. After successful PCI, no other lesions were found.

Upon admission, she had no chest pain; however, ST-T changes were observed on the ECG (Fig. 5, left). Emergent coronary angiography revealed 90% stenosis of the proximal LAD; thus, PCI was performed (Fig. 6).

Percutaneous coronary intervention was successful, and the QT interval increased maximally on day 2 (Figs. 3, 5, middle). The T wave had a deep negative morphology with a prolonged TPE interval. The serum potassium level was 4.0 mEq/L. The patient had not been given any drug that could have prolonged the QT interval. After PCI, nicorandil was infused continuously. Moreover, aspirin, heparin, ticlopidine, angiotensin-converting enzyme inhibitors, and amlodipine were also administered. There had never been any prior evidence of LQTS, and there was no family history of LQTS, syncope, or sudden cardiac death. The patient underwent repeat angiography immediately, which showed no restenosis or new lesion. The maximum creatine kinase level was 305 IU/L, and left ventricular function was preserved. Mexiletine (300 mg daily) was prescribed, and the QTc interval spontaneously shortened to within normal range by day 5 without any episode of malignant arrhythmia (Fig. 5, right).

Fig. 3. Dynamic changes in the QTc interval, QT dispersion, and TPE interval in our patients. In patient 1, the QTc and TPE intervals were measured in lead V2, and in patient 2, both intervals were measured in lead V5. The intervals reached maximum length on day 2 and returned to near normal on day 5. Incessant TdP developed in patient 1 on day 2.

Discussion

We report 2 cases, one of non-Q-wave myocardial infarction (NQMI) and one of unstable angina, exhibiting marked prolongation of both the QT and TPE intervals even though the myocardial ischemia was relieved in the early phase by successful emergent revascularization. In both cases, both the QT and TPE intervals became maximally prolonged on day 2 and returned to near normal on day 5. Incessant TdP developed in the NQMI patient. There were no other factors that might have prolonged the QT interval.

Ischemic heart disease and LQTS

In previous reports, both the QT interval and QTD became prolonged during the early postinfarction period,7,8 reaching a transient peak on day 2 or 3 before returning to baseline on day 4.9 Both were significantly more prolonged in patients with NQMI than in patients with Q-wave myocardial infarction and were most evident in patients in whom the NQMI was most extensive, suggesting that the transmural distribution of necrosis might influence the repolarization. The clinical courses in our cases were similar to those in these reports; however, the QT prolongation was much greater than was reported previously: 491 milliseconds for NQMI patients and 465 milliseconds for Q-wave myocardial infarction patients. In addition, no TdP was observed in the reported series.

Fig. 4. In patient 1, the incessant TdP that developed on day 2 constituted an electrical storm.

Halkin et al.10 reported similar cases with TdP following AMI. They documented pause-dependent TdP during transient obvious QT prolongation in 1.8% of their patients with AMI. In the absence of identifiable causes, they called the LQTS "infarct-related LQTS." The prominent feature of the ECGs was also deep and inverted T waves. In their patients, the QTc interval increased from 445 ± 58 milliseconds to 558 ± 84 milliseconds by day 2, and the maximum QT prolongation and TdP occurred 3 to 11 days after infarction. Myocardial ischemia itself prolongs the QT interval; however, a study in rats showed that myocardial ischemia and even reperfusion increased QTD and that increased QTD was associated with cardiac arrhythmias.11

Injection of ionic contrast agents into the coronary arteries is known to prolong the QT interval; however, we used nonionic contrast, which has not been shown to affect the QT interval.12 Moreover, although both of our patients underwent repeat coronary angiography for reevaluation with the same nonionic contrast medium, QT prolongation was not observed after the repeat procedures.

Women are 2-3 times more likely to develop TdP than men, and female sex is considered a risk factor for TdP. Both of our patients were women. Chauhan et al. reported that infarct-related QT prolongation was independent of sex.9 In the study by Halkin et al.,10 4 of the 8 patients in whom "infarct-related LQTS" developed were women, revealing no sex predilection.

Mechanism of QT prolongation and TdP

The mechanism responsible for the transient changes that occur in the QT and TPE intervals after coronary events despite relief from ongoing myocardial ischemia is not yet clear. However, marked TPE prolongation suggests that the increase in TDR plays an important role. A disproportionate prolongation of the M-cell action potential, which was also seen in models of subendocardial myocardial infarction, contributes to the development of long QT intervals and augmented TDR.5,6,13

Although it is beyond the scope of this report to uncover the mechanism underlying TPE prolongation and TdP, our cases may provide some clues. Substantially injured endocardium might result in increased TDR. Both of our patients were continuously infused with nicorandil, which is reported to be capable of abbreviating a long QT interval, reducing TDR, and preventing TdP when LQTS is secondary to reduced IKr or IKs but less so when it is due to augmented late INa.14 In addition, mexiletine is reported to be very effective in shortening the QT interval and decreasing TDR in LQT3 models and to be valuable in reducing the incidence of arrhythmogenesis in LQT2 models.1,2 Our second patient took mexiletine and had no TdP attacks despite having longer QTc intervals than patient 1. The response to medication in our cases suggests that augmented late INa may have been more involved than reduced IKr or IKs in the infarct-related LQTS and TdP.

Fig. 5. The 12-lead ECG series in patient 2. Left, Upon admission, ST-T change was associated with a normal QT interval. Middle, The QTc interval became prolonged and peaked on day 2 at 756 milliseconds. Of note, the TPE interval also increased markedly (340 milliseconds) and exhibited a giant and bizarre negative T-wave. Right, On day 5, the QT and TPE intervals returned to normal (QTc = 442 milliseconds, TPE = 120 milliseconds).

Fig. 6. Emergent coronary angiogram in patient 2 revealing 90% stenosis of the proximal LAD. Percutaneous coronary intervention was successfully performed.

Clinical implications

Patients with ACS should undergo careful ECG monitoring even after successful PCI because transient QT prolongation with TdP may occur even in the absence of any other QT interval-prolonging factors. Although this phenomenon is transient, clinicians must be alert to the appearance of any QT interval prolongation and pleomorphic ventricular tachyarrhythmia.

Conclusion

Torsades de pointes after successful PCI for ACS is uncommon; however, a variant of acquired LQTS should be considered and treated in patients in whom giant and bizarre negative T waves and QT prolongation develop after PCI despite relief of any ongoing myocardial ischemia.

References

1. Shimizu W, Antzelevitch C. Sodium channel block with mexiletine is effective in reducing dispersion of repolarization and preventing torsade de pointes in LQT2 and LQT3 models of the long-QT syndrome. Circulation 1997;96:2038.

2. Shimizu W, Antzelevitch C. Cellular basis for the ECG features of the LQT1 form of the long-QT syndrome: effects of beta-adrenergic agonists and antagonists and sodium channel blockers on transmural dispersion of repolarization and torsade de pointes. Circulation 1998;98:2314.

3. Antzelevitch C, Shimizu W. Cellular mechanisms underlying the long QT syndrome. Curr Opin Cardiol 2002;17:43.

4. Shimizu W, Antzelevitch C. Differential effects of beta-adrenergic agonists and antagonists in LQT1, LQT2 and LQT3 models of the long QT syndrome. J Am Coll Cardiol 2000;35:778.

5. Yan GX, Antzelevitch C. Cellular basis for the electrocardiographic J wave. Circulation 1996;93:372.

6. El-Sherif N, Caref EB, Yin H, Restivo M. The electrophysiological mechanism of ventricular arrhythmias in the long QT syndrome: tridimensional mapping of activation and recovery patterns. Circ Res 1996;79:474.

7. Moreno FL, Villanueva T, Karagounis LA, Anderson JL. Reduction in QT interval dispersion by successful thrombolytic therapy in acute myocardial infarction. Circulation 1994;90:94.

8. Doroghazi RM, Childers R. Time-related changes in the Q-T interval in acute myocardial infarction: possible relation to local hypocalcemia. Am J Cardiol 1978;41:684.

9. Chauhan VS, Tang AS. Dynamic changes of QT interval and QT dispersion in non-Q-wave and Q-wave myocardial infarction. J Electrocardiol 2001;34:109.

10. Halkin A, Roth A, Lurie I, Fish R, Belhassen B, Viskin S. Pause-dependent torsade de pointes following acute myocardial infarction: a variant of the acquired long QT syndrome. J Am Coll Cardiol 2001;38:1168.

11. Lu HR, Yu F, Dai DZ, Remeysen P, De Clerck F. Reduction in QT dispersion and ventricular arrhythmias by ischaemic preconditioning in anesthetized, normotensive and spontaneously hypertensive rats. Fundam Clin Pharmacol 1999;13:445.

12. Kenigsberg DN, Khanal S, Kowalski M, Krishnan SC. Prolongation of the QTc interval is seen uniformly during early transmural ischemia. J Am Coll Cardiol 2007;49:1299.

13. Yan GX, Antzelevitch C. Cellular basis for the normal T wave and the electrocardiographic manifestation of the long-QT syndrome. Circulation 1998;98:1928.

14. Shimizu W, Antzelevitch C. Effects of a K+ channel opener to reduce transmural dispersion of repolarization and prevent torsade de pointes in LQT1, LQT2, and LQT3 models of the long-QT syndrome. Circulation 2000;102:706.

Mihoko Kawabata, MD, PhD,* Kenzo Hirao, MD, PhD, Takeshi Sasaki, MD, Kaoru Sakurai, MD, Hiroshi Inagaki, MD, Hitoshi Hachiya, MD, PhD, Mitsuaki Isobe, MD, PhD

Department of Cardiovascular Medicine, Tokyo Medical and Dental University, Tokyo, Japan

Received 23 May 2007; accepted 21 September 2007

None of the authors have a conflict of interest or financial relationship related to this manuscript.

* Corresponding author. Department of Cardiovascular Medicine, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8519, Japan. Tel.: +81 358035231; fax: +81 358030131.

E-mail address: [email protected]

Copyright Churchill Livingstone Inc., Medical Publishers Mar/Apr 2008

(c) 2008 Journal of Electrocardiology. Provided by ProQuest Information and Learning. All rights Reserved.

Mouth-to-Mouth CPR No Longer Recommended

The American Heart Association announced Monday that hands-only CPR is just as effective as standard CPR for sudden cardiac arrests in adults.

Experts hope the announcement will make bystanders more likely to assist someone who suddenly collapses, since hands-only CPR (quick, deep presses on the victim’s chest until help arrives) is simple and easier to remember. It also removes a major barrier for some who may be skittish about mouth-to-mouth breathing.

“You only have to do two things. Call 911 and push hard and fast on the middle of the person’s chest,” Dr. Michael Sayre, an emergency medicine professor at Ohio State University who headed the committee that made the recommendation, told the Associated Press.

The recommendation calls for continuous chest presses at a rate of 100 per minute, done until either paramedics can take over or an automated external defibrillator is available. 

It is estimated that 310,000 Americans die each year of cardiac arrest outside hospitals or in emergency rooms. Although data varies by location, only about six percent of people who suffer cardiac arrest outside a hospital survive. But those who can quickly get CPR before receiving medical treatment up to triple their chance of survival. 

Unfortunately, less than a third of victims receive the essential CPR that may save their lives.

Hands-only CPR should only be performed on adults who unexpectedly collapse, stop breathing and are unresponsive, symptoms that would likely indicate a person is experiencing cardiac arrest.  In such a case, the victim has an adequate amount of air in the lungs and blood, and CPR compressions would keep blood flowing to the brain, heart and other organs.

In the case of a child who collapses, the underlying problem is more likely to be a respiratory issue, and mouth-to-mouth breathing should be used. This would also apply to adults who experience near-drowning, drug overdose, carbon monoxide poisoning or other situations where the victim suffers a lack of oxygen. For these people, mouth-to-mouth is needed to get air into the lungs and bloodstream.

But Sayre added that in either case, doing “something is better than nothing.”

The CPR guidelines had been trending toward compression-only for some time. The previous update, in 2005, emphasized chest pushes by alternating 30 presses with two quick breaths in between. For those “unable or unwilling” to do the breaths, chest presses alone were advised. The new recommendations, however, now give equal weighting to hands-only CPR, adding that those who have been trained can still choose to perform traditional CPR.

Sayre said the heart association decided to update their recommendations now, rather than in 2010 when the next update was due, after three studies last year found that hands-only was as good as traditional CPR.  Hands-only will be added to the association’s CPR training.

Dr. Gordon Ewy, director of the University of Arizona Sarver Heart Center in Tucson, where the compression-only technique was pioneered, has been advocating hands-only CPR for 15 years.  He said the new recommendations had him “dancing in the streets”.

Ewy said it was pointless to give cardiac arrest victims early breaths because the 16 seconds required to stop compressions and give the two breaths is too long. He explained that victims often gasp periodically anyway, drawing in small amounts of air on their own.

Ewy cited anonymous surveys that found some people are reluctant to perform mouth-to-mouth, in part because of fear of infections.

“When people are honest, they’re not going to do it,” he said. “It’s not only the yuck factor.”

In recent years, emergency service dispatchers have been advising and coaching callers to perform the hands-only CPR, instead of the traditional CPR technique.

“They love it. It’s less complicated and the outcomes are better,” Dallas emergency medical services chief Dr. Paul Pepe told the Associated Press.  Pepe chairs emergency medicine at the University of Texas Southwestern Medical Center.

Jared Hjelmstad, 40, a chiropractor in Temecula, Calif., has been spreading the word about hands-only CPR after he helped save the life of a fellow health club member in Southern California. Hjelmstad had read about it in a medical journal, and performed it on Garth Goodall, a 49-year-old construction contractor, after he collapsed while working out at the gym in February. Hjelmstad’s 15-year-old son Josh called 911 in the meantime.

Hjelmstad said he performed the hands-only CPR on Goodall for more than 12 minutes until paramedics arrived, and was overjoyed the next day to learn that Goodall had indeed survived.

On Sunday, he visited Goodall in the hospital where he is recovering from triple bypass surgery.

“After this whole thing happened, I was on cloud nine,” Hjelmstad told the AP. “I was just fortunate enough to be there.”

“It’s a second lease on life,” said Goodall, adding that he had been fit and healthy prior to the collapse, and there had been no hint that he had clogged heart arteries.

“I was lucky,” he said.  If the situation were reversed, “I wouldn’t have known what to do.”

On the Net:

Heart Association’s Hands-Only CPR Home Page

Sarver Heart Center

Ohio State University

University of Texas Southwestern Medical Center

Blood-Clotting Bandage a Good Addition to First Aid Kit

By Jeanine Kendle

A new high-performance bandage made from the same blood-stopping technology the U.S. Army uses to save lives on the battlefields of Afghanistan and Iraq is now available to the public.

When it is in direct contact with blood and pressure is applied, the KytoStat Bandage does what traditional bandages do not — it stops stubborn bleeding while sealing and protecting the wound.

The new bandage offers peace of mind and a new choice in wound care to millions on blood-thinning medications and those who work and play in extreme environments.

Made by HemCon Medical Technologies, KytoStat is a convenient, effective first response to control stubborn and persistent bleeding. In some cases, KytoStat may be the only first aid needed.

For patients with bleeding disorders, like hemophilia or von Willebrand disease, KytoStat provides the ability to control bleeding caused by cuts or scrapes, potentially saving a costly trip for professional help. If a wound requires expert care, KytoStat helps buy valuable time to seek medical attention.

Uncontrolled bleeding is the second leading cause of death after trauma in the United States, and more than 6 million bleeding wounds are treated annually in trauma centers and emergency rooms in the U.S.

By directly controlling blood loss, even in the case of severed arterial bleeds, HemCon Bandages reduce the need for transfusions and provide critical time to administer care (up to 48 hours from the point of placement).

KytoStat, the public version, is distributed as a 1-inch-by-4-inch, latex-free bandage engineered with chitosan, a compound that occurs naturally in shrimp shells. The proprietary manufacturing process used to create the bandage makes KytoStat 30 times more effective than leading competitors.

There have been no known allergic reactions as a result of using the HemCon Bandage since distribution began in 2003 and there have been no adverse effects reported in more than 1 million bandages shipped.

HemCon Medical Technologies has results from a shellfish allergy study conducted by its chitosan supplier that demonstrates that, out of 221 individuals with suspected hypersensitivity, including eight individuals with known shellfish allergies, none demonstrated any dermal sensitivity when pricked with a chitosan test solution.

However, since chitosan is extracted from the shells of shrimp and other shellfish, individuals with known shellfish allergies should exercise caution in the use of products containing chitosan.

According to the Journal of the American Medical Association, more than 50 million adults in the U.S. take aspirin regularly for long-term prevention of cardiovascular disease. Low-dose aspirin regimens interfere with the body’s blood clotting ability.

In addition, more than 34 million prescriptions were written in the U.S. for oral anticoagulants in 2007, according to IMS Health. KytoStat gives these millions of people control when conditions or medications interfere with an individual’s ability to stop external bleeding from cuts, and when medical care isn’t close at hand.

The KytoStat Bandage can be used anywhere and anytime. This take-along is a great addition to a home care or vacation first aid kit. It provides the assurance that, if there is a nasty bleeding cut, the KytoStat Bandage is ready to stop the bleeding and allow you to clot and heal on your own.

In comparison to traditional bandages, the high-performance KytoStat Bandage provides superior control of bleeding — 30 times more effective. The KytoStat Bandage quickly seals the wound and protects it, allowing you to get back to what you want to do.

New Blood Pressure Med Has Fewer Side Effects

Telmisartan is as effective as ramipril in reducing cardiovascular death

A major Canadian-led global study has found that a new blood pressure medication is effective in reducing cardiovascular death, with fewer side effects than the current standard of care.

The study found that the new drug telmisartan is as effective as the popular drug ramipril in reducing cardiovascular death in high-risk patients, and that it has fewer side effects.

Dr. Salim Yusuf, director of the Population Health Research Institute at McMaster University and Hamilton Health Sciences and principal investigator of the study, presented the results of ONTARGET today at the American College of Cardiology conference. The paper has also been published on-line by the New England Journal of Medicine.

Previous studies such as the Heart Outcomes Prevention Evaluation Trial (HOPE) demonstrated that angiotensin converting enzyme (ACE) inhibitors such as ramipril reduce cardiovascular death, myocardial infarction, strokes and heart failure in high risk individuals; however, a significant proportion (about 20 percent) of patients are unable to tolerate an ACE inhibitor due to side effects such as coughing, hypotension or swelling.

An alternate therapy, telmisartan, which is an angiotensin II receptor blocker, proved to be at least as effective and better tolerated, offering clinicians and patients an important alternative.

“This study is of clinical importance because it demonstrates that telmisartan is an effective and safe alternative to ramipril. This means both patients and physicians have choices and can use telmisartan where appropriate with a high degree of confidence,” said Yusuf, a professor of the Michael G. DeGroote School of Medicine at McMaster. Dr. Yusuf is also vice-president of research and chief scientific officer at Hamilton Health Sciences.

Investigators from 733 centers in 40 countries collaborated in conducting the ONTARGET study, which enrolled 25,620 patients who had coronary heart disease or diabetes plus additional risk factors, were over the age of 55, and did not have evidence of heart failure. Patients were randomized to receive ramipril 10 mg a day, telmisartan 80 mg a day, or the combination of the two. The mean duration of follow-up was 55 months.

Telmisartan and ramipril were found to be equally effective but telmisartan was better tolerated than ramipril with the chief differences being lower rates of coughing and lower rates of angioneurotic edema (a life-threatening swelling of the throat and airways). There was a small excess of minor symptoms related to hypotension such as dizziness with telmisartan.

“All people who have cardiovascular disease or diabetes with target organ damage and physicians managing these diseases should be interested in the results of this important trial,” said Dr. Gilles Dagenais, cardiologist at the Laval University Heart and Lung Institute, Quebec City, and one of the Canadian national co-coordinators of the ONTARGET trial. “If it’s possible to have access to a medication that can prevent serious cardiovascular events but with fewer side effects and better compliance than what’s currently available, it will also have a great impact on their quality of life.”

Dr. Koon Teo, professor of medicine at McMaster University and head of clinical trials in the Population Health Research Institute at Hamilton Health Sciences, said: “The ONTARGET trial is very important because it addresses the question of how we can best prevent heart attack, stroke, heart failure, cardiovascular death and other outcomes such as diabetes. These conditions affect millions of people around the world and if we can find a better treatment that improves these outcomes we’re doing a lot of good.”

Surprisingly, combination therapy did not offer any additional benefit but was associated with a higher rate of hypotension related side effects including fainting. There was also an increase in discontinuations for hyperkalemia (high potassium levels).

Photo Caption: Dr. Salim Yusuf, Director, Population Health Research Institute, McMaster University

On the Net:

McMaster University

Population Health Research Institute

New England Journal of Medicine

American College of Cardiology

Scientists Discover Molecules That Reverse Liver Cirrhosis

Japanese scientists have designed artificial molecules that reversed liver cirrhosis in rats.

Cirrhosis is a hardening or scarring of the liver that occurs when liver cells begin producing collagen, a fibrous material found in skin and tendons.  

The researchers said they designed molecules that block collagen production by the liver’s “stellate cells,” which are also known to absorb vitamin A.  The scientists loaded these molecules into vitamin-A coated carriers, which tricked the stellate cells into absorbing the molecules.

“By packaging the (molecules) in carriers coated with vitamin A, they tricked the stellate cells into letting in the inhibitor, which shut down collagen secretion,” the researchers wrote in a report about their work.

During the study, the researchers injected the vitamin A-laced molecules in rats with induced liver cirrhosis.

“We were able to completely eradicate the fibrosis by injecting this agent … we cured them of the cirrhosis,” said Yoshiro Niitsu of Sapporo Medical University School of Medicine, during a telephone interview with Reuters.

“The liver is such an important organ, after you remove the fibrosis, the liver by itself starts to regenerate tissues. So liver damage is reversible,” he added.

“Liver is itself responsible for the production and deposition of collagen, it also secretes certain enzymes that dissolve collagen … dissolve the fibrosis which has already been deposited in the tissues,” Niitsu said, explaining how the damage reversal came about.

Cirrhosis is often caused by Hepatitis B and C, or by heavy drinking, and is especially serious in parts of Asia. Until now, liver damage from cirrhosis was thought to be irreversible, and the disease cured only with transplants.

But Niitsu is optimistic that in time the molecules would provide a cure.

“We hope it (a drug) will be ready for humans in a few years,” he said.

On the Net:

Sapporo Medical University School of Medicine

The research was published in the journal Nature Biotechnology.  A summary can be viewed at http://www.nature.com/nbt/journal/vaop/ncurrent/abs/nbt1396.html

Nuance’s Vocada Veriphy(TM) Solution Links Hospital IT Systems for Enterprise-Wide Critical Test Result Management

Nuance Communications, Inc. (NASDAQ: NUAN), a leading supplier of speech solutions, is today announcing the release of Nuance’s Dictaphone® Healthcare Division’s Veriphy-Ready HL7 Integration Server (VIS). VIS will automate the closed-loop communication process for critical patient findings by allowing Vocada Veriphy, Dictaphone’s market-leading solution for critical test result management (CTRM), to communicate directly with internal diagnostic systems and enterprise wide clinical information systems.

Vocada Veriphy is currently the only end-to-end enterprise solution for communicating critical test results from hospital diagnostic departments to ordering clinicians, and has been proven in deployments at more than 150 hospitals nationwide. In the wake of the recently announced Veriphy 2.0, Dictaphone built VIS as a Windows-based application integration platform, based on the most widely deployed HL7 core engine on the market. Features include:

Seamless integration with internal systems, such as picture archiving and communication systems (PACS), radiology information systems (RIS), laboratory information systems (LIS), electronic medical records (EMR) and hospital information systems (HIS)

Automated communication of critical test results from any internal diagnostic system to the ordering physician

Automated documentation of the critical test result’s distribution, receipt and verification within any internal diagnostic system

The ability to automatically populate the patient record in internal systems such as LIS, RIS and EMRs with the Veriphy communications audit trail
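The closed-loop workflow described above rides on standard HL7 v2 messaging between the diagnostic systems and the CTRM application. As a rough illustration only (not Nuance's documented interface), the sketch below shows the general shape of an HL7 ORU result message that a diagnostic system might hand off; every application name, identifier, code, and segment value in it is hypothetical.

```python
# Hypothetical sketch of an HL7 v2 ORU^R01 (observation result) message of the kind
# an interface server could route from a radiology system to a CTRM application.
# Names and values are illustrative only, not the Veriphy/VIS message format.

SEGMENT_SEPARATOR = "\r"  # HL7 v2 segments are carriage-return delimited

def build_critical_result_oru(patient_id, patient_name, study, impression_text):
    """Assemble a minimal ORU^R01 message with a single OBX result segment."""
    msh = "MSH|^~\\&|RIS|HOSPITAL|CTRM|HOSPITAL|20080301120000||ORU^R01|MSG0001|P|2.3"
    pid = f"PID|1||{patient_id}^^^HOSPITAL^MR||{patient_name}"
    obr = f"OBR|1||RAD0001^RIS|{study}|||20080301113000"
    obx = f"OBX|1|TX|IMP^Impression^L||{impression_text}||||||F"
    return SEGMENT_SEPARATOR.join([msh, pid, obr, obx])

# Example: a critical imaging finding flagged for closed-loop communication.
message = build_critical_result_oru(
    patient_id="123456",
    patient_name="DOE^JANE",
    study="71020^Chest x-ray^C4",
    impression_text="CRITICAL: suspected tension pneumothorax; notify ordering physician",
)
print(message.replace("\r", "\n"))  # one segment per line for readability
```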

“Veriphy facilitates a process for not just sending out critical patient finding alerts, but for coding messages based on content, priority and appropriate follow up,” said Steven Defossez M.D., medical director of North Shore Magnetic Imaging Center in Peabody, Massachusetts. “The value of having an effective, consistent way of communicating patient information between providers is essential for patient safety and provider productivity. CTRM has allowed us to automate an otherwise unstructured, time-intensive process, improve patient care and reduce malpractice risk with its auditable trail of what happened, when. The ability to share Veriphy’s audit trail across multiple systems through the VIS product is an extension that will help more hospitals get on board with best practices in clinical communications.”

VIS will make it easier for healthcare organizations to achieve best practices in clinical communication–a task that has challenged many. In 2006, nearly two-thirds of hospitals surveyed by The Joint Commission failed to meet the accreditation requirement for communication of critical test results. With VIS, hospitals can improve the effectiveness of communication among caregivers to advance patient safety, meet the guidelines of The Joint Commission and overcome integration hurdles associated with legacy information systems.

“The communication of critical test results amongst caregivers can mean the difference between a patient’s life or death; and as recognized by The Joint Commission, a manual process is inefficient and prone to errors. Veriphy provides a consistent and accountable way to share, track and document high-priority patient information,” said Peter White, general manager, Veriphy, in the Dictaphone Healthcare Division at Nuance. “With our recent announcement of Veriphy 2.0, we expanded the solution’s capability to better meet the needs of hospital administrators and clinicians. With VIS, we’re removing barriers so all hospitals and labs can effectively manage their critical test results for improved patient safety and increased provider productivity without worrying about integration roadblocks.”

Dictaphone Healthcare Solutions Division

Dictaphone Healthcare Solutions is a division of Nuance Communications, Inc., a leading provider of speech and imaging solutions. Dictaphone provides the most comprehensive family of speech-driven clinical documentation and communications solutions. Dictaphone solutions orchestrate and optimize clinical workflow, reduce transcription expense, raise standards of care via more thorough documentation, deliver results rapidly to meet patient safety guidelines, and heighten clinician satisfaction by making EMR systems easy to use. Dictaphone’s solutions accelerate the adoption of clinical information systems, so provider organizations can maximize the return from their IT investments. For more information, please visit www.nuance.com/dictaphone/.

Nuance Communications, Inc.

Nuance (NASDAQ: NUAN) is a leading provider of speech and imaging solutions for businesses and consumers around the world. Its technologies, applications and services make the user experience more compelling by transforming the way people interact with information and how they create, share and use documents. Every day, millions of users and thousands of businesses experience Nuance’s proven applications and professional services. For more information, please visit www.nuance.com.

Nuance, the Nuance logo and Veriphy are trademarks or registered trademarks of Nuance Communications, Inc. or its affiliates in the United States and/or other countries. All other company names or product names may be the trademarks of their respective owners.

Diana Crawley Appointed Executive Director Of New Publicis Clinical Health Partners Division

Publicis Selling Solutions Group, a leading provider of sales and marketing solutions for biopharma, has announced the appointment of Diana Crawley to Executive Director of Publicis Clinical Health Partners, a new division which provides integrated, behaviorally-based, education programs for patients and healthcare professionals to optimize patient compliance and treatment outcomes. Publicis Selling Solutions Group is a Publicis Healthcare Communications Group company.

Diana Crawley has leadership responsibility for the operations of Publicis Clinical Health Partners. Ms. Crawley has 18 years of leadership and healthcare experience and 6 years of industry experience in leading clinical health educator teams. Prior to joining Publicis Selling Solutions Group, she was a Project Leader at a company that also provides health educator teams. During that time, Ms. Crawley led a variety of teams in multiple disease states including oncology, diabetes, depression, and kidney disease. She also served as a resource to other health educator teams in multiple sclerosis, asthma, Parkinson’s disease, and osteoarthritis.

“Diana’s diverse experience and her broad professional and educational background ensure that Publicis Clinical Health Partners’ programs have a positive impact on patient compliance and outcomes while positively impacting sponsors’ return-on-education (ROE),” said Rick Keefer, Chief Operating Officer of Publicis Selling Solutions Group. “Through Diana’s leadership, Publicis Clinical Health Partners uniquely offers fully integrated, behavioral-based educational programs that provide a ‘win-win’ to all the key stakeholders–patients, healthcare professionals, managed markets, and sponsors.”

Ms. Crawley earned a Bachelor of Science degree in Nursing from the University of Florida, as well as a Master’s degree in Business Administration from Indiana Wesleyan University. In addition, she has two years of post-graduate studies in pre-medicine from the University of Northern Colorado, and holds a certificate in project management from Indiana University-Purdue University.

For more information and/or a free white paper that answers many frequently asked questions addressing some of the key logistical, regulatory, and legal issues about implementing a clinical health education program, contact Diana Crawley at (609) 896-4717 or visit the Publicis Clinical Health Partners website at www.pclinicalhealthpartners.com.

About Publicis Clinical Health Partners

Publicis Clinical Health Partners provides integrated, behaviorally-based education programs for patients and healthcare professionals to optimize patient compliance and treatment outcomes. Publicis Clinical Health Partners’ proprietary Behavioral Wellness Optimization™ methodology leverages the science of behavior change by incorporating the key best practices based on the latest scientific research for wellness behavior change. Website: www.pclinicalhealthpartners.com

About Publicis Selling Solutions Group

Publicis Selling Solutions Group offers a comprehensive range of sales services for pharmaceutical and biotech companies. Through its divisions–which include Publicis Selling Solutions, Publicis Clinical Health Partners, Publicis Managed Markets, Total Learning Concepts, and Publicis Healthcare Recruiting–the organization delivers messages to all touch-points on the healthcare continuum from healthcare professionals and managed markets to patients and caregivers. The group’s range of services includes field sales teams and support services, recruiting, sales training and content development, clinical health educators, and managed markets account teams. Publicis Selling Solutions Group is a Publicis Healthcare Communications Group company. Website: www.psellingsolutions.com

About Publicis Healthcare Communications Group

Publicis Healthcare Communications Group (PHCG), a member of Publicis Groupe SA, is one of the largest healthcare communications groups in the world with over 2,700 employees located in 10 countries. Worldwide healthcare services include advertising, medical education, sales and marketing, and medical and scientific affairs. PHCG offers its clients a strategic partnership, a strong focus on ensuring value for their marketing spend, and exceptional performance on their assignments. Website: www.publicishealthcare.com

Publicis Groupe (Euronext Paris: FR0000130577) is the world’s fourth largest communications group. In addition, it is ranked as the world’s second largest media counsel and buying group, and is a global leader in digital and healthcare communications. With activities spanning 104 countries on five continents, the Groupe employs approximately 44,000 professionals. The Groupe offers local and international clients a complete range of communication services, from advertising, through three autonomous global advertising networks, Leo Burnett, Publicis, Saatchi & Saatchi and two multi-hub networks, Fallon and 49%-owned Bartle Bogle Hegarty; to media consultancy and buying, through two worldwide networks, Starcom MediaVest Group and ZenithOptimedia; interactive and digital marketing led by Digitas; Specialized Agencies and Marketing Services (SAMS) offering healthcare communications, corporate and financial communications, public relations, CRM and direct marketing, event communications, sports marketing and multicultural communications. Website: www.publicisgroupe.com

Research on Bakken Formation’s Oil Reserves Nearly Completed

By Schuster, Ryan

The U.S. Geological Survey is nearing completion of a research project that will attempt to quantify how much oil is contained in the Bakken shales formation and how much of it is recoverable.

The study is expected to be completed by late April, according to Sen. Byron Dorgan, D-N.D., who, along with other state officials, pushed the federal agency to finish the research started by scientist Leigh Price.

Billion barrels

Price estimated the Bakken formation may hold as many as 900 billion barrels of oil. But Price died in 2000 before the study could be published or peer reviewed.

Other estimates of the Bakken formation’s oil reserves have pegged the number at closer to 200 billion or 300 billion barrels.

Dorgan said Thursday during a stop in Grand Forks that completing the survey is important to North Dakota. The Bakken formation stretches across western and central North Dakota, eastern Montana, southern Saskatchewan and part of northwestern South Dakota.

“I think it’s going to show a very substantial recoverable reserve of oil,” Dorgan said. “It will be important as a signal to the rest of the world what we have here.”

Dorgan optimistic

Dorgan said the U.S. Geological Survey began work on finishing Price’s work about a year and a half ago.

He said he is optimistic that improvements in technology will lead to a substantial increase in how much of the oil in the formation will be able to be recovered.

Dorgan said the study’s findings will only increase the oil boom that the western part of the state currently is experiencing.

“The oil boom is real and it’s going to be real significant,” Dorgan said.

During a stop at the International Crop Expo on Thursday in the Alerus Center, Dorgan questioned why the Department of Energy continues to put aside 50,000 to 60,000 barrels of crude oil in the nearly full strategic petroleum reserve every day in light of high oil and gas prices.

“When oil is $90 to $100 a barrel, we shouldn’t be taking it out of the supply pipeline and sticking it underground,” Dorgan said. “It is increasing prices. I think it’s absolutely nuts to be doing this with the current price of oil.”

Dorgan said it is important to maintain the strategic petroleum reserve, but he said the reserve is about 97 percent full and there is no reason to continue filling it when oil and gas prices are so high.

Dorgan also said that high oil and gas prices affect farmers and North Dakotans more than some, citing research that showed North Dakota residents use twice as much gas per person as New York residents.

Copyright Grand Forks Herald Inc. Feb 22, 2008

(c) 2008 Grand Forks Herald. Provided by ProQuest Information and Learning. All rights Reserved.

Researchers Find Six More Genes Linked to Diabetes

Six more genes that make people more susceptible to developing type 2 diabetes have been discovered by U.S. and European scientists, a discovery that may help prevent and treat the chronic condition.

The researchers said this particular finding extends the total number of genes linked to the disease to 16 and offers clues as to how the biological mechanisms that control blood sugar levels go off-course when people get type 2 diabetes.

Mark McCarthy, a diabetes researcher at the University of Oxford, who co-led the study, said none of the genes they found were previously on the radar screen of diabetes researchers.

“Each of these genes therefore provides new clues to the processes that go wrong when diabetes develops, and each provides an opportunity for the generation of new approaches for treating or preventing this condition,” said McCarthy.

Diabetes causes blood glucose levels to rise too high resulting in damage to the eyes, kidneys and nerves, and can also lead to heart disease, stroke and limb amputations.

Researchers from over 40 centers analyzed the genetic data of more than 70,000 people. The team found six individual genetic differences that slightly raise a person’s risk of diabetes.

But McCarthy said the risk for the few people unlucky enough to inherit all six variations is two to three times higher than the average risk.
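For a rough sense of how several individually small effects can compound into a two- to threefold difference, the back-of-the-envelope arithmetic below assumes, purely for illustration (the figure is not from the study), that each variant independently multiplies risk by about 1.15.

```python
# Illustrative arithmetic only: an assumed per-variant risk multiplier of ~1.15,
# compounded across six variants, lands in the "two to three times higher" range
# quoted above. The 1.15 figure is an assumption, not a result from the study.
per_variant_multiplier = 1.15
combined = per_variant_multiplier ** 6
print(round(combined, 2))  # -> 2.31
```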

“By getting a handle on the mechanisms involved in disease we can start to tackle them in a more systemic and scientific way,” he said.

One surprising find was the link between type 2 diabetes and a gene called JAZF1, which researchers recently showed plays a role in prostate cancer.

McCarthy said the researchers believe the genes, which also include the CDC123-CAMK1D, TSPAN8-LGR5, THADA, ADAMTS9 and NOTCH2 genes, are involved in regulating the number of insulin-producing cells in the pancreas.

Type 2 diabetes accounts for about 90 percent of all diabetes cases and is closely linked to obesity and physical inactivity.

The World Health Organization estimates that more than 180 million people worldwide have diabetes, a number likely to more than double by 2030.

On the Net:

American Diabetes Association: www.diabetes.org

Study: Mobile Phones More Dangerous Than Smoking

A new study led by award-winning cancer expert Dr. Vini Khurana has found mobile phone use could be even more dangerous than smoking or asbestos exposure. 

Dr. Khurana’s study presents the most disturbing evidence to date about the health risks of mobile phones, and supports mounting data that mobile phone use over a 10 year time period can more than double a person’s risk for brain cancer. 

Dr. Khurana, a renowned neurosurgeon who has published over three dozen scientific papers and received over 14 awards in the last 16 years, said people should avoid using cell phones whenever possible, and called on governments and industry to take “immediate steps” to reduce radiation exposure through the devices.  

Many previous studies included very little examination of people who had used mobile phones for an extended period of 10 years or more.  Since cancer can take at least a decade to develop, the lack of data for this group invalidates many of the official safety assurances based on those studies.

Earlier this year, the French government warned against mobile phone use, particularly by children. Germany and the European Environment Agency have also urged people to minimize their exposure to mobile handsets.

But not everyone agrees with Professor Khurana’s conclusions. Last week the Mobile Operators Association rejected Khurana’s study as “a selective discussion of scientific literature by one individual”. The group said the study “does not present a balanced analysis” of the published science, and “reaches opposite conclusions to the WHO and more than 30 other independent expert scientific reviews”.

In conducting his study of mobile phone use, Professor Khurana reviewed more than 100 previous studies on the effects of mobile handsets. He has posted his analysis on a neurosurgery Web site, and a paper about his research is currently under peer review for publication in a leading scientific journal.

Although he acknowledges that mobile phones can be lifesavers in times of emergencies, he nevertheless concludes “there is a significant and increasing body of evidence for a link between mobile phone usage and certain brain tumors”. Khurana told the UK newspaper The Independent that he believes his conclusion will be “definitively proven” within the next decade.

“We are currently experiencing a reactively unchecked and dangerous situation,” he told the Independent, adding that malignant brain tumors are “a life-ending diagnosis”.  

Khurana worries about a sharp increase in malignant brain tumors worldwide over the coming decade, “unless the industry and governments take immediate and decisive steps”.

“It is anticipated that this danger has far broader public health ramifications than asbestos and smoking,” Khurana said, basing his conclusion on the fact that there are currently three billion mobile phone users worldwide, three times the number of smokers.  

According to a report by The Independent, smoking is responsible for five million deaths globally each year, and in Britain asbestos exposure kills as many people as traffic accidents.

Sugar: Your Sweet Tooth Craves It, but Your Body Doesn’t Need It

By Julie Deardorff

Sugar, sorry to say, can make us sick. The most popular alternative, artificial sweeteners, have long posed health concerns and may lead to weight gain.

Enter stevia, a calorie-free herb said to be up to 300 times sweeter than sugar.

In what will surely spice up the decadeslong debate over sugar substitutes, companies as large as Coca-Cola and as obscure as Seattle-based Zevia say stevia’s time has come. But the U.S. Food and Drug Administration isn’t about to make things easy for consumers worried about sugar intake and often confused by the options.

Stevia has been used as a sweetener for hundreds of years in Paraguay and Brazil and has been added to soft drinks, ice cream, pickles, candies and breads in Japan since the 1970s.

But the FDA has not approved it as a food additive, citing safety concerns. The European Union and Canada also don’t allow food companies to add stevia to products.

“Reports have raised concerns about control of blood sugar and the effects on the reproductive, cardiovascular and renal systems,” the FDA wrote in a warning letter to Hain Celestial, which included stevia as an ingredient in one of its teas.

But stevia, also called stevioside, is widely available – and perfectly legal – in the United States when it’s purchased as a dietary supplement. It often can be found just a few aisles away from Equal, tucked among the vitamins, minerals and herbs. The sweet-leafed herb, derived from the bushy South American stevia rebaudiana plant, also is easily obtained via the Internet.

Stevia proponents believe this nonsensical situation – stevia is acceptable as a dietary supplement but not as an ingredient – has kept Americans in the dark about the herb’s candylike leaves, which can have a menthol-like bitter aftertaste. When used in low amounts for sweetening, stevia has zero calories, is not carcinogenic – on the contrary, it has been shown to reduce breast cancer in rats – and does not accumulate in the body, proponents say.

The lethal dose is very high, according to Belgian researcher Jan Geuns, author of “Stevioside: A safe sweetener and possible new drug for treatment of the metabolic syndrome,” a paper he presented at the 2006 American Chemical Society national meeting.

“Stevia is completely safe,” he said.

What worries stevia critics is that Americans tend to have a problem with moderation. Stevia might be fine if it’s used twice a day in a cup of tea. But “if stevia were marketed widely and used in diet sodas, it would be consumed by millions of people and that might pose a public health threat,” said the consumer watchdog group Center for Science in the Public Interest.

Regardless, Americans want a natural alternative. Nearly 7 of 10 U.S. adults say they want to cut down or avoid sugar completely, according to the market research firm The NPD Group, a concern that has driven up the use of artificial sweeteners. But two-thirds are concerned about the safety of sweeteners, according to another report.

The two leading chemical sweeteners, aspartame (NutraSweet, Equal) and sucralose (Splenda), have been approved by the FDA, but are still highly controversial.

Whole Foods says it won’t carry products containing sucralose, which is made by chlorinating sugar, because it believes many of the safety studies were commissioned by those who had a financial interest in its approval. And the granddaddy of the group, saccharin (Sweet’n Low), is a petroleum derivative that has been banned in Germany and France for almost a century.

“I’ve seen a shift in consciousness” about sugar substitutes, said Ann Louise Gittleman, author of “Get the Sugar Out” (Random House, $13.95). Gittleman recently updated her 1996 book to include more information on high-fructose corn syrup as well as sugar’s effect on aging and cancer.

“It’s part of people becoming more aware of toxins in the environment on all levels,” she said. “Try as we might, you can’t trick the body or Mother Nature.”

When we do try, by using no- or low-calorie artificial sweeteners, for example, it often backfires. A recent study by Purdue University researchers showed that artificial sweeteners can make you fat because the body is programmed to associate sweet tastes with calories consumed. When the natural connection is broken – false sweetness isn’t followed by lots of calories – the metabolic system is confused and people may eat more, or expend less energy than they normally would, said study co-author Susan Swithers.

Cue stevia. For Jessica Newman, 37, the intensely sweet leaf that can be dropped in tea, coffee or oatmeal was exactly what she needed to break her daily habit of five Diet Cokes.

An attorney, mother of three and marathon runner in Seattle, she fueled herself on diet soda and Powerbars, but longed for a healthy alternative to artificial sweeteners.

When she found stevia, she became such a proponent that she, along with her husband, Derek, and their friend Ian Eisenberg, developed a stevia-based dietary supplement called Zevia. The five-calorie sugar-free beverage, which is essentially a soft drink but can’t be labeled as such, has no artificial flavors, food dyes or phosphoric acid.

Demand has been brisk; Zevia is in a dozen states and within a month is expected to be available at Sunset Foods stores in Chicago’s north and northwest suburbs. Newman says they’ve received e-mail orders from every state and currently are offering a free six-pack to those willing to pay the shipping charges.

“Many of the people who are responding to Zevia already know about stevia and the dangers of artificial sweeteners,” Newman said. “We think we’re offering a choice to kick the diet soda habit. We call it ‘nature’s answer to diet soda.’”

Coke, meanwhile, has filed several dozen patent applications for the ingredient and teamed up with Cargill to develop its own stevia product called Rebiana. It plans to introduce Rebiana in countries where the ingredient is already approved and petition the FDA to allow stevia to be used as a food additive.

“Stevia is wonderful; it has no glycemic properties, actually enhances blood sugar balance, is high in soluble fiber, and full of antioxidants,” said Chicago nutritionist Bonnie Minsky of Nutritional Concepts.

But not everyone wants to give up an occasional Diet Coke. Fifteen-year-old Christine Elizabeth Cauthen started a Facebook group called “I Drink Artificial Sweeteners and I’m Proud of It” after a friend planned to swear them off because studies have linked them to cancer.

“If you think about it, a lot of things in life cause cancer,” Cauthen said in an e-mail. “I don’t see anything wrong with having (Diet Coke) every once in a while.”

Stop! Don’t reach for that diet soda!

Although we all would be healthier if we cut sugar and sweeteners out of our diet, it’s a tall order. Humans are hard-wired for sweetness.

But since 1985, the annual per-person consumption of all added sugars – everything from beet sugar to high-fructose corn syrup – has climbed 30 pounds, from 128 pounds to 158 pounds. The result of this national sugar rush is an epidemic of inflammatory-related disorders, obesity and Type 2 diabetes.

“Most Americans’ taste buds are so completely out of whack that we don’t know what tastes sweet,” said Connie Bennett, author of “Sugar Shock” (Penguin, $14.95). “When you kick artificial sweeteners or sugar, your taste buds begin to change. Vegetables such as celery, jicama and sweet potatoes taste much better and more interesting.”

Tapering down is your best bet, because stopping “cold turkey” may cause withdrawal symptoms, sometimes severe. Here are a few ways to get started.

-“Unless there’s an overwhelming reason (such as diabetes) to cut sugar consumption quickly, begin by avoiding sugary snacks, foods and drinks until dinner,” said nutritionist Bonnie Minsky of Nutritional Concepts in Chicago. “Eating protein three times daily and substituting sugary snacks with nuts/seeds/dried fruits will prevent blood-sugar lows. Look forward to one sugary treat (dark chocolate) after a balanced dinner. Keep cookies, cakes, and candies out of the house.”

-To wean yourself off diet soda, stick to two a day and don’t drink it between meals to satisfy thirst, said Ann Louise Gittleman, whose book “Get the Sugar Out” (Random House, $13.95) contains 501 ways to reduce sugar consumption. “If you drink it with food, you might be tempted to have something more nutritious. But don’t use (soda) as a stimulant to keep you going.”

-Drink half your body weight in ounces of water; when you crave something sweet, eat something sour, such as a pickle. Also, suck on cinnamon sticks or cloves, Gittleman said.

-If you’re a real sugarholic, substitute two pieces of dried fruit, a fig or date. “Eat a little of everything and a lot of nothing,” Gittleman said. “And eat it after a full meal where you have fat and protein to prevent your blood sugar from dipping.”

-Delay, distance and decode your craving, Bennett advised. “If you want diet soda, first get a glass of water. Then distance yourself from the tempting soda machine.”

-Find an acceptable alternative. Gittleman recommends Celestial Seasonings Bengal Spice tea and carbonated or regular water with a slice of lemon or orange.

More choices to help you move away from sugar

Although sugar is still sugar, the following can be used in small amounts in place of artificial sweeteners until you’re ready to give it up altogether. The products below are available at most health food stores and gourmet or specialty food stores. Online, visit localharvest.org. Check Asian or Mediterranean grocery stores for ground date sugar. Prices listed are approximate.

Brown rice syrup

Amber colored, with a mild butterscotch or caramel-like flavor; it’s about half as sweet as sugar and is gluten free, according to Connie Bennett, author of “Sugar Shock.” The syrup is made by fermenting cooked brown rice with enzymes; once the liquid is strained off, the rice starches have been converted into about 50 percent soluble complex carbohydrates, 45 percent maltose and 3 percent glucose.

Cost: $5 to $6 for 16 ounces.

Real maple syrup

A little drop goes a long way. It’s made by boiling down maple sap, contains a full complement of minerals and is particularly rich in potassium and calcium, said Ann Louise Gittleman, author of “Get the Sugar Out.”

Cost: $7 to $10 per pint.

Honey

Although it has more calories and raises the blood sugar even more than white sugar, Jonny Bowden lists raw, unfiltered honey in his book “The 150 Healthiest Foods on Earth” (Fair Winds Press, $24.99) because it contains enzymes and phytonutrients and has some reported medicinal benefits. But it could cause allergic reactions in pollen-sensitive individuals.

Cost: $3.50 and up – way up – for 16 ounces.

Blackstrap molasses

Another Bowden favorite, molasses is the thick syrup that’s left after sugar beets or cane are processed for table sugar. Blackstrap has the lowest sugar content of the molasses varieties and a bitter-tart flavor. It has good-for-you ingredients, but few consume enough of the strong-flavored syrup to benefit.

Cost: $5 to $6 for 16 ounces.

Sorghum syrup

The National Sweet Sorghum Producers and Processors Association makes this very clear: Sorghum syrup is not the same as molasses, a byproduct of the sugar-making process. Sorghum syrup comes from sorghum cane: Juices are extracted and then concentrated through evaporation. Genuine sorghum contains nutrients such as iron, calcium and potassium. The association recommends substituting sorghum cup for cup in any recipe or dish that calls for molasses, honey, corn syrup or maple syrup.

Cost: $8 to $12 for 16 ounces.

Date sugar

If you simply can’t do without sugar, this is Gittleman’s favorite stand-in. It’s made from pulverized dried dates; although it has the consistency of sugar, it isn’t refined like sugar. It also contains fiber and is high in many minerals. One tablespoon of date “sugar” is counted as one fruit exchange in the diabetic exchange system. Because it has an intense flavor, you might be inclined to use less.

Cost: $6 to $8 for 12 ounces.

Artificial sweeteners have been hailed as an effective way to cut calories and control weight, help manage chronic conditions such as diabetes and potentially prevent cavities.

But some contend that the ubiquitous pink, blue and yellow packets can be just as harmful as sugar.

Mounting research, meanwhile, shows they can actually trigger carbohydrate cravings and lead to weight gain.

Here’s a quick look at three common sweeteners approved by the Food and Drug Administration.

Aspartame (NutraSweet and Equal)

Aspartame, a general all-purpose sweetener in foods and drinks, is 200 times sweeter than sugar. Despite concerns that aspartame is linked to a host of ailments, including cancer, autoimmune disorders, digestive distress, mood swings and joint pain – and efforts by two states to ban it – the FDA says the sweetener is safe unless you have a genetic disorder of metabolism known as phenylketonuria.

Saccharin (Sweet’N Low, Sweet Twin)

Saccharin is 200 to 700 times sweeter than sugar. A petroleum derivative, it is found in gum, cosmetics, baked goods, tabletop sweeteners, soft drinks and jams.

In 1977, the FDA proposed a ban on saccharin because of concerns about rats that developed bladder cancer after receiving high doses of it.

The National Cancer Institute cleared saccharin of the charge, but it is banned in foods in Germany and France.

Sucralose (Splenda)

Sucralose is 600 times sweeter than sugar on average and is marketed as a “no-calorie sweetener” even though it contains 96 calories a cup, said Ann Louise Gittleman, author of “Get the Sugar Out” (Random House, $13.95). Made from table sugar, sucralose adds no calories because it isn’t digested in the body.

Although some report digestive distress, especially constipation and headaches, concerns also have surfaced over long-term safety. Whole Foods won’t carry products containing sucralose because the company doesn’t believe there’s enough balanced information. But in 1999, the FDA allowed sucralose as a general-purpose sweetener in all foods.

Suspect Charged in Fatal Ga. Hospital Shootings

COLUMBUS, Ga. – Charles Johnston, the suspect in the Doctors Hospital shooting, has been released from The Medical Center and charged with multiple offenses, including three counts of murder and four counts of aggravated assault on a peace officer.

Police said Johnston entered the hospital Thursday afternoon, armed with three guns, and shot and killed two hospital employees and a truck driver in the parking lot, before a Columbus police officer shot him in the upper shoulder. Johnston was treated for his injuries, then released into police custody on Friday.

He is scheduled to appear in Columbus Recorder’s Court at 2 p.m. Monday for the following charges:

-Murder for the shooting of James David Baker in the head with an unknown type of handgun.

-Murder for the shooting of Pete Wright in the chest and back with an unknown type of handgun.

-Murder for the shooting of Leslie Harris in the head and chest with an unknown type of handgun.

-Four counts of aggravated assault on a peace officer for shooting at Columbus police Cpl. Michael Dahnke, Officer Jonathan Goodrich, Officer Gregory Anderson and Muscogee County Deputy Marshal Alicia Davenport.

-Aggravated assault for pointing a handgun at Sherry Wilkerson.

-Possession of a firearm during the commission of a crime for having three guns in his possession during the crime.

During a Friday press conference, Columbus Police Chief Ricky Boren said Johnston entered the hospital earlier that day, looking for someone named “Pete,” who had treated his mother at the hospital before she died in 2004.

He could not find him that morning, but returned around 2 p.m., armed with three handguns and extra ammunition, Boren said. One weapon was concealed in a jacket pocket, one in a pants pocket, and another tucked into his waistband.

Wright, a critical care nurse from Fortson, Ga., who had worked at the hospital for 11 years, was the first victim.

Boren said when Johnston came to the hospital the second time, he exchanged words with Wright, asking if he remembered him. Johnston followed Wright into an empty room on the hospital’s fifth floor. When Wright attempted to leave the room, Johnston shot him, Boren said.

Wright was taken to surgery where he later died. He was 44.

Harris, an administrative assistant at the hospital, was shot after he tried to stop Johnston from leaving at the elevator. The 44-year-old from LaGrange, Ga., died at the hospital at 2:30 p.m. Thursday, Muscogee County Coroner Bill Thrower said.

Johnston left the hospital through the emergency room doors and entered the parking lot, where he shot Baker, Boren said. Baker, a truck driver from Columbus, died in surgery at The Medical Center. He was 76.

Boren said officers received a 911 call in reference to the shooting a little after 2 p.m., requesting that all available units respond. Davenport, a deputy with the marshal’s office, was the first to respond at 2:13 p.m.

Johnston, now in a tan Ford station wagon, was stopped by Davenport, who commanded him to show his hands, Boren said. The suspect pointed a gun at her and fired. Davenport returned fire, and at least two bullets hit the station wagon’s driver’s side door.

Marshal Greg Countryman said Davenport fired three shots, none of which hit Johnston. Davenport was also unscathed.

Boren said Johnston was then blocked in place by an unmarked Columbus police unit. Johnston fired shots at three Columbus officers in the parking lot – Dahnke, Goodrich and Anderson.

Dahnke returned fire twice, wounding Johnston in the shoulder, Boren said. Goodrich and Anderson did not fire shots.

Dahnke has been at the Columbus Police Department since Jan. 10, 2000; Goodrich since Dec. 6, 2004; and Anderson since July 23, 2007.

The police officers and the deputy marshal have all been placed on administrative assignment with pay and will receive counseling.

Countryman credited Davenport’s quick response to the scene for keeping the gunman isolated to the hospital grounds. He said Davenport seemed to be coping well when she left the marshal’s office Thursday night.

“This is a major incident for her and we are going to recommend based on our policy that she seeks counseling,” Countryman said. “This is something that she’s going to have to deal with for the rest of her life.”

Based on his own experience, he guessed the trauma hadn’t yet kicked in.

“A man unloaded a gun on me,” Countryman recalled. “Everything seemed to happen in slow motion. Two or three days later, your body aches. It was a bad situation, but it could have been worse.”

Marion Scott, a spokesperson for Columbus Regional, said all employees affected by the incident will undergo counseling and a memorial service will be planned at a later date.

___

(c) 2008, Columbus Ledger-Enquirer (Columbus, Ga.).

Visit the Ledger-Enquirer Online at http://www.ledger-enquirer.com/


Experts Uncover World’s Oldest Recording

A group of audio historians has discovered what may be the oldest recording of the human voice.

The 10-second clip is of a woman singing part of a French song called “Au Clair de la Lune,” and it was recorded in 1860 – making it 17 years older than Thomas Edison’s “Mary Had a Little Lamb.”

The song was recorded using a phonautograph, a device created by Parisian inventor Edouard-Leon Scott de Martinville. The device used a needle to scratch sound waves onto paper blackened by the soot of an oil lamp.

Audio historian David Giovannoni discovered the phonautograph in France’s patent office after learning of its existence in some Parisian archives; he traveled to the French capital a week later.

Using high resolution optical scanning equipment, Giovannoni collected images of the phonautograms that he brought back to the United States. He employed the help of First Sounds, a group of audio historians, recording engineers and sound archivists dedicated to preserving the world’s earliest sound recordings.

“We found that Scott’s technique wasn’t very developed,” Giovannoni said. “There were squiggles on paper, but it was not recording sound.”

The U.S. experts made high-resolution digital scans of the paper. According to First Sounds, scientists at the Lawrence Berkeley National Laboratory in California converted the scans into sound waves using technology developed to preserve and create early recordings.

“It was magical, so ethereal,” said Giovannoni. “It’s like a ghost singing to you. The fact is it’s recorded in smoke. The voice is coming out from behind this screen of aural smoke.”

Thomas Edison is generally considered to be the first person to have recorded sound; he had his phonograph patented in 1878.

“It doesn’t take anything away from Edison, in my opinion,” said Giovannoni.

“But actually the truth is he was the first person to have recorded (sound) and played it back. There were several people working along the lines of Scott, including Alexander Graham Bell, in experimenting — trying to write the visual representation of sound before Edison invented the idea of playing it back,” Giovannoni said.

Scott never intended for anyone to listen to his phonautograms.

“What Scott was trying to do was to write down some sort of image of the sound so that he could study it visually. That was his only intent,” Giovannoni said.

The results of Scott’s experiments will be presented publicly on Friday at the annual conference of the Association for Recorded Sound Collections at Stanford University in California.

Photo Caption: Thomas Edison and his early phonograph

On the Net:

Sound Files

Lawrence Berkeley National Laboratory

Association for Recorded Sound Collections

Stanford University

Replacing Destroyed Rainforests May Be Possible

Half a century after most of Costa Rica’s rainforests were cut down, researchers from the Boyce Thompson Institute took on a project that many thought was impossible – restoring a tropical rainforest ecosystem.

When the researchers planted worn-out cattle fields in Costa Rica with a sampling of local trees, native species began to move in and flourish, raising the hope that destroyed rainforests can one day be replaced.

Carl Leopold and his partners in the Tropical Forestry Initiative began planting trees on worn-out pasture land in Costa Rica in 1992. For 50 years the soil was compacted under countless hooves, and its nutrients washed away. When it rained, Leopold says, red soil appeared to bleed from the hillsides.

The group chose local rainforest trees, collecting seeds from native trees in the community. “You can’t buy seeds,” Leopold says. “So we passed the word around among the neighbors.” When a farmer would notice a tree producing seeds, Leopold and his wife would ride out on horses to find the tree before hungry monkeys beat them to it.

The group planted mixtures of local species, trimming away the pasture grasses until the trees could take care of themselves. This was the opposite of what commercial companies have done for decades, planting entire fields of a single type of tree to harvest for wood or paper pulp.

The trees the group planted were fast-growing, sun-loving species. After just five years those first trees formed a canopy of leaves, shading out the grasses underneath.

“One of the really amazing things is that our fast-growing tree species are averaging two meters of growth per year,” Leopold says. How could soil so long removed from a fertile rainforest support that much growth?

Leopold says that may be because of mycorrhizae, microscopic fungi that form a symbiosis with tree roots. Research at Cornell and BTI shows that without them, many plants can’t grow as well. After 50 years, the fungi seem to still be alive in the soil, able to help new trees grow.

Another success came when Cornell student Jackeline Salazar did a survey of the plants that moved into the planted areas. She counted understory species, plants that took up residence in the shade of the new trees. Most plots had over a hundred of these species, and many of the new species are ones that also live in nearby remnants of the original forests.

Together, these results mean that mixed-species plantings can help to jump-start a rainforest. Local farmers who use the same approach will control erosion of their land while creating a forest that can be harvested sustainably, a few trees at a time.

“By restoring forests we’re helping to control erosion, restore quality forests that belong there, and help the quality of life of the local people,” says Leopold.

One quality-of-life issue is drinking water, which is in scarce supply where forests have been destroyed: without tree roots to act as a sort of sponge, rainwater runs off the hillsides and drains away.

Erosion is also out of control. “You might drive on a dirt road one year, and then come back the next to find it’s a gully over six feet deep,” says Leopold. “It’s a very serious problem.”

Does the experiment’s success mean that rainforests will one day flourish again? Fully rescuing a rainforest may take hundreds of years, if it can be done at all.

“The potential for the forest being able to come back is debatable,” Leopold says, but the results are promising.

“I’m surprised,” he said. “We’re getting an impressive growth of new forest species.” After only ten years, plots that began with a few species are now lush forests of hundreds. Who knows what the next few decades – or centuries – might bring?

On the Net:

Boyce Thompson Institute for Plant Research

Recommendation: Infant Formula Should Include Omegas

New recommendations published by international experts in the Journal of Perinatal Medicine state that infant formula should include DHA omega-3 and AA omega-6 to guarantee correct eye and brain development.

These recommendations for DHA and AA intake have been developed by a panel of child health experts from 11 countries with endorsements from organizations such as the World Association of Perinatal Medicine, Child Health Foundation and the Early Nutrition Foundation.

The expert team emphasizes that breastfeeding is the preferred method of feeding, as DHA and AA are available in breast milk. However, when the mother is unable or chooses not to breastfeed, infant formula should include DHA at the recommended levels of between 0.2% and 0.5% of fatty acids and the amount of AA should be at least equal to the DHA level. The experts also note that the addition of at least 0.2% DHA plus AA is necessary to achieve functional developmental benefits.
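To make those thresholds concrete, the minimal sketch below expresses them as a simple check, assuming DHA and AA are given as percentages of total fatty acids; the function name and the example values are hypothetical and are not part of the published recommendations.

    # Minimal sketch: check a formula's DHA and AA levels against the
    # recommended ranges described above (percentages of total fatty acids).
    # Function name and example values are hypothetical.
    def check_formula(dha_pct, aa_pct):
        issues = []
        if not 0.2 <= dha_pct <= 0.5:
            issues.append("DHA should be between 0.2% and 0.5% of fatty acids")
        if aa_pct < dha_pct:
            issues.append("AA should be at least equal to the DHA level")
        return issues or ["meets the recommended levels"]

    print(check_formula(dha_pct=0.3, aa_pct=0.4))  # ['meets the recommended levels']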

“Over the past decade, many research studies have highlighted the importance of DHA omega-3 and AA omega-6 in infant development,” said Cristina Campoy, of the Department of Paediatrics of the University of Granada (CIBM). “It is therefore vital that pregnant and nursing mothers consume adequate amounts of DHA in their own diet and, if using an infant formula, provide their infants with a formula containing DHA and AA at recommended levels.”

DHA omega-3 and AA omega-6

Docosahexaenoic acid, or DHA, is a long-chain polyunsaturated omega-3 fatty acid, or ‘good’ fat, found throughout the body. It is a major structural fat in the brain and retina of the eye, accounting for up to 97 percent of the omega-3 fats in the brain and up to 93 percent of the omega-3 fats in the retina. It is also a key component of the heart.

Studies have shown that DHA omega-3 is important for infant brain, eye and nervous system development and has been shown to support long-term heart health. It is important throughout pregnancy, but particularly in the third trimester when significant brain growth occurs.

Arachidonic acid, AA, is a long-chain omega-6 fatty acid, another ‘good’ fat. It is the principal omega-6 in the brain, representing about 48 percent of the omega-6 fats. Like DHA, AA omega-6 is important for proper brain development in infants. It is also a precursor to a group of hormone-like substances called eicosanoids that play a role in immunity, blood clotting and other vital functions in the body.

Infants whose mothers supplement with DHA during pregnancy and nursing or who are fed formula milk supplemented with DHA and AA have significantly enhanced levels of these nutrients available to them. Major infant brain growth occurs during pregnancy and throughout the first two years of life. During these times, infants have the greatest need for DHA omega-3 and AA omega-6.

DHA and AA in the diet

The main dietary source of DHA is oily fish. AA is found in foods such as meat, eggs and milk. While most women typically consume enough AA in their diets, those who consume a typical Western diet are at risk for low stores of DHA. This may be because oily fish is not a staple of the typical Western diet. Additionally, expert bodies have advised pregnant and nursing women to limit their fish consumption due to the potentially high levels of toxins such as mercury.

The amount of essential fatty acids provided to infants through maternal intake during pregnancy and/or breastfeeding and through supplemented formula milks is important. Babies cannot make these essential fats themselves, which is why it is vital that they are made available via the mother’s diet during pregnancy and breastfeeding or through supplemented infant formula.

About the recommendations

The Recommendations and Guidelines for Perinatal Medicine were developed by a team of 19 experts from 11 countries who reviewed the current research and recommendations on DHA and AA and evaluated the body of research exploring how DHA and AA affect infant brain and eye development. The team, which included experts from Italy, France, Germany, Spain and the UK, concluded that both DHA and AA should be added to infant formula in order to provide formula-fed infants these important nutrients at a rate comparable to their breastfed counterparts. The guidelines also recommend that pregnant or breastfeeding women include enough DHA in their diets to support the brain and eye development of their babies. The Recommendations and Guidelines for Perinatal Medicine were supported by the World Association of Perinatal Medicine, the Early Nutrition Academy, and the Child Health Foundation.

Summary of the recommendations

  • The authors emphasize the importance of a balanced diet for breastfeeding women, including a regular supply of DHA
  • Pregnant women should aim for a DHA intake of at least 200mg a day (equivalent to two portions of oily sea fish per week)
  • If breast milk is not available to the baby, current evidence supports the addition of DHA and AA to infant formula
  • The DHA added should make up between 0.2% and 0.5% of fatty acids [noting that 0.2% is the minimum level necessary to see functional developmental benefits]
  • Infant formula should be supplemented with AA in amounts at least equal to the amount of DHA
  • EPA, another omega-3 fatty acid, should be less than the amount of DHA
  • Dietary supply of DHA and AA should continue during the second six months of life, but experts do not have enough information to recommend exact amounts

On the Net:

Journal of Perinatal Medicine

Universidad de Granada

The World Association of Perinatal Medicine

Early Nutrition Academy

Child Health Foundation

Ethics Guidelines Needed for Human-Genome Research

A global team of legal, scientific and ethics experts has put forward eight key recommendations to establish much-needed guidelines for conducting human-genome sequencing research.

Timothy Caulfield, professor and research director of the Health Law Institute at the University of Alberta in Canada, led a consensus workshop to develop rigorous guideline recommendations for research ethics boards. The results appear in the current issue of PLOS Biology (March 2008).

Researchers met to develop these recommendations because national and international funding initiatives have substantially increased whole-genome research activities, and media coverage of both the science and the emerging commercial offerings related to human-genome research has heightened public awareness of and interest in personal genomics, says Caulfield.

“Yes, these are early days in the field of human-genome research, but research ethics guidance is needed immediately,” said Caulfield. “With how fast this research is growing, it is necessary that we develop carefully considered consensus guidelines to ensure ethical research practices are defined for all.”

Some key recommendations of the paper include the right for participants to withdraw consent (which includes the destruction of tissue samples and written information); the issues associated with participants’ family members and relevant groups; and the means of obtaining clear consent from participants for possible future use of their genes.

“As technology continues to advance, whole-genome research activities seem likely to increase and expand,” says Caulfield. “As the pace of this research intensifies, we need to continue to explore the ethical, legal and social implications of this rapidly evolving field.”

The researchers note that the policy recommendations covered in the report are not the only issues that need to be considered. Commercialization, patenting, benefit sharing and the possibility of genetic discrimination are among other topics that warrant discussion in the future.

On the Net:

University of Alberta

PLOS Biology

Medicare to Now Cover INR Self Testing for Patients on Anticoagulants for Chronic Atrial Fibrillation and Venous Thromboembolism

The announcement this past week from the Centers for Medicare & Medicaid Services (CMS) that Medicare coverage for at-home blood testing of prothrombin time (PT)/International Normalized Ratio (INR) will be expanded is welcome news to the millions of patients who take anticoagulants daily; they will now have easier access to proactive, higher-quality health care. In addition to mechanical heart valve patients (for whom CMS approved weekly self-testing in 2002), the new decision expands coverage of home testing to patients who take warfarin, an anticoagulant medication, for chronic atrial fibrillation or venous thromboembolism.

“Warfarin — a blood thinner used chronically by a large population of patients for a variety of medical conditions — is a very critical, black-box drug whose dosage needs to be managed closely in order to minimize serious complications from continual use, including blood clots, stroke and hemorrhage,” says Jack Ansell, M.D., an internationally recognized expert in hemostasis and thrombosis, and Chairman of Medicine at Lenox Hill Hospital in New York City. “The scientific research and success of mechanical heart valve patients with INR home testing for the past six years serve as proof that weekly patient home self-testing helps patients remain in their therapeutic range, thereby reducing the risk of costly — and deadly — complications.”

Dr. Ansell, whose main area of research focuses on the application of new modes of delivering and monitoring anticoagulants, is the founder and immediate past Chair of the Anticoagulation Forum, a network of anticoagulation clinics throughout North America. “This new CMS decision removes a substantial barrier to the wider use of patient self-testing for INR that we are hopeful will happen in the near future,” he says.

International Normalized Ratio (INR) is the standard unit for reporting the clotting time of blood, which must be tested on a regular basis in order for a person on anticoagulants to remain in his or her prescribed “range.” For those with chronic atrial fibrillation or venous thromboembolism, it can mean monthly trips to the physician, clinic or lab for blood work, which imposes a tremendous burden on both the patient and the physician.

“Now that CMS has expanded coverage for these two additional patient groups on anticoagulation therapy, more Medicare beneficiaries will be eligible to self-test their INR more regularly in the comfort and convenience of their own homes,” says David Phillips, vice president of marketing at HemoSense®, manufacturer of the INRatio® PT/INR Monitoring System, an easy-to-use, portable device designed for home INR testing that is already being used successfully by thousands of mechanical heart valve patients on daily anticoagulants. “This decision helps bring the patient into the healthcare team to be more proactive in maintaining their own INR levels, which is a major step in preventing complications, and improving compliance.”

At-home INR testing, done with a finger prick and a specialized, FDA-cleared meter (similar to the process utilized by diabetics for glucose monitoring), includes a simple blood test that measures the time it takes for the liquid portion (plasma) of the blood to clot. “The optimal therapeutic range for patients with atrial fibrillation or venous thrombosis is an INR between two and three,” explains Dr. Ansell. “Should the patient’s level fall out of therapeutic range, it will be more readily detected with the weekly self-testing, allowing the doctor to make timely dosage changes in response.”
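As a rough illustration of how weekly self-test results relate to that range, the sketch below simply flags readings that fall outside 2.0 to 3.0; the function and the sample values are hypothetical and are not drawn from HemoSense or CMS materials.

    # Minimal sketch: flag weekly INR readings outside the 2.0-3.0
    # therapeutic range cited above. Sample readings are hypothetical.
    THERAPEUTIC_RANGE = (2.0, 3.0)

    def out_of_range(readings, rng=THERAPEUTIC_RANGE):
        low, high = rng
        return [(week, inr) for week, inr in enumerate(readings, start=1)
                if not low <= inr <= high]

    weekly = [2.4, 2.8, 3.4, 1.9, 2.6]  # hypothetical weekly self-test values
    for week, inr in out_of_range(weekly):
        print(f"Week {week}: INR {inr} is outside the therapeutic range")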

Atrial fibrillation is the most frequently encountered and sustained cardiac arrhythmia in clinical practice, affecting millions of patients nationwide. Venous thromboembolism, which includes deep vein thrombosis (DVT) and pulmonary embolism (PE), affects upwards of 500,000 people. “There is now the potential for these patients to take an active role in monitoring their own health via the self-testing of their INR levels,” concludes Dr. Ansell.

For more information about INR self testing, visit www.hemosense.com.

Note to Media:

Dr. Ansell will be the keynote speaker of a media Web cast, addressing this new CMS decision and the positive effects it can have on these new patients and their doctors, today, Thursday, March 27th, 10 a.m. ET (program time approximately 30 minutes).

Dr. Ansell will present for 15 minutes, with additional time allotted for media Q&A.

To register for this informative Web cast in order to receive the URL to access this online event,

e-mail: Laura Giardina: [email protected] or call (914) 241-0086 ext 20

HemoSense® will also be exhibiting at the American College of Cardiology 57th Annual Scientific Session, beginning March 29 in Chicago; Booth # 9063.

About HemoSense

HemoSense Inc., a subsidiary of Inverness Medical Innovations, Inc. (AMEX: IMA), is a point-of-care diagnostic healthcare company that manufactures and commercializes easy-to-use, handheld blood coagulation systems for monitoring patients taking warfarin. The HemoSense INRatio® system, used by healthcare professionals and patients themselves, consists of a small monitor and disposable test strips. It provides accurate and convenient measurement of blood clotting time, or PT/INR values. Routine measurements of PT/INR are necessary for the safe and effective management of the patient’s warfarin dosing. INRatio is sold in the United States and internationally.

About Inverness

By developing new capabilities in near-patient diagnosis, monitoring and health management, Inverness Medical Innovations enables individuals to take charge of improving their health and quality of life. A global leader in rapid point-of-care diagnostics, Inverness’ products, as well as its new product development efforts, focus on infectious disease, cardiology, oncology, drugs of abuse and women’s health. Inverness is headquartered in Waltham, Massachusetts.

For more information about HemoSense and Inverness Medical Innovations, please visit www.hemosense.com and www.invernessmedical.com.

This press release contains forward-looking statements within the meaning of the U.S. Private Securities Litigation Reform Act of 1995. Statements in this press release regarding HemoSense’s business that are not historical facts may be “forward-looking statements” that involve risks and uncertainties. Specifically, the statements regarding the potential for wider use of patient self-testing for INR are forward looking statements within the meaning of the Safe Harbor. Forward-looking statements are based on management’s current, preliminary, expectations and are subject to risks and uncertainties which may cause the actual results to differ materially from the statements contained herein. Further information regarding the business of HemoSense and Inverness and risk factors relating to those businesses are detailed in Inverness’ filings with the Securities and Exchange Commission, including its 2007 Form 10-K. Undue reliance should not be placed on these forward-looking statements, which speak only as of the date they are made. HemoSense and Inverness undertake no obligation to update publicly any forward-looking statements to reflect new information, events or circumstances after the date they were made, or to reflect the occurrence of unanticipated events.

HemoSense(R) and INRatio(R) are registered trademarks of HemoSense, Inc.

More Teens Getting Breast Augmentation Surgery

Before she underwent breast augmentation surgery last summer, Melissa Wohl said she felt self-conscious about her body — especially at the beach.

“I wasn’t as developed as some of my friends, who were filling out their bathing suits,” said Wohl, 19, of Wantagh, who now attends Binghamton University. “I guess I felt like I didn’t fit in.”

Wohl is among an increasing number of young women looking to plastic surgery for a boost in confidence as well as cup size.

Last week’s death of Stephanie Kuleba, 18, of South Florida, during breast augmentation surgery has drawn attention to what some describe as a growing trend. Kuleba, whose parents say she sought the surgery to correct an inverted nipple and asymmetrical breasts, died Saturday of what may have been a rare genetic reaction to general anesthesia.

According to the American Society for Aesthetic Plastic Surgery, the number of women 18 and younger who have had breast enlargements has risen nearly 500 percent over the past decade — a sharper climb than the 300 percent increase in breast augmentations among all age groups.

Dr. Stephen Greenberg, a plastic surgeon in Woodbury, estimated he has seen a 20 percent to 30 percent rise in cosmetic procedures among young people. Often, he said, a girl will come in with her parents, who are buying her a breast augmentation as a birthday or high school graduation gift.

“There are girls and women who are devastated by the fact that they don’t have breasts and their friends do,” Greenberg said. “They don’t play gymnastics and they don’t go on dates or they can’t wear certain clothing, and I hear these things every day.”

Greenberg attributed the trend in part to young women who see their parents undergoing cosmetic procedures, or relate closely to the celebrities who have them.

Not everyone agrees.

Dr. Alan Gold, a Great Neck plastic surgeon, noted that those 18 and under accounted for only 2 percent of the nearly 400,000 breast augmentation surgeries performed nationwide in 2007. He said that while many might choose to have such a surgery in a “transitional period” like summer vacation when they won’t see their friends every day, he has not seen a trend in graduation and birthday gift breast jobs among his patients.

Breast augmentation is the most popular plastic surgery, accounting for about 20 percent of all surgical procedures. It costs on average about $4,000, according to the plastic surgeons’ society.

Traci Levy, an assistant professor who teaches courses in feminism and gender studies at Adelphi University, said that the growing perception that it’s a common procedure, along with the bombardment of women with advertisements for plastic surgery, may be contributing to its popularity.

“To say that you need to have a very expensive surgical procedure with real health risks in order to be considered beautiful, I think, is a problematic image,” she said.

The Food and Drug Administration recommends against performing breast augmentations on girls younger than 18, and both Gold and Greenberg said that if younger teenagers request the surgery, they counsel them to wait. But Gold said he sees nothing necessarily wrong with performing the surgeries on women who are 18 or older.

“Eighteen is certainly an age where we’re putting men and women in uniform on a battlefield,” he said. “I think they can decide if they want larger breasts.”

What to do before surgery

Long Island doctors offer advice to parents whose teens are considering cosmetic surgery:

WAIT A YEAR. Teens should wait a year before going ahead with breast correction or rhinoplasty, said Dr. David Graham, chief deputy health commissioner for Suffolk County. Another year of maturity and thought might cause them to be more accepting of minor bodily flaws.

KNOW THE COMPLICATIONS. Parents should make the child aware of the possible complications, including the rare chance of death, Graham said. “You think you know everything when you’re 17 or 18, but you don’t. You don’t realize the downside of anesthesia.”

DO YOUR RESEARCH. Research not only the surgeon, but just as importantly, the anesthesiologist who will participate in the operation, said Dr. Mark Shikowitz, vice chairman of the ear, nose and throat department at North Shore University Hospital and Long Island Jewish Medical Center. Shikowitz does three to five teenage nose jobs a week, he said. While cosmetic surgery has risks such as bleeding and infection, the most dangerous part can be the general anesthesia, he said.

CONSIDER A HOSPITAL. Consider doing the surgery in a hospital or a hospital-affiliated ambulatory care center instead of a doctor’s office, Shikowitz said. In a crisis, more specialists and fellow anesthesiologists are nearby to lend a hand and offer solutions.

– BETH WHITEHOUSE

Surgical numbers

Breast augmentation operations for those 18 and younger have risen since 1997.

Year   Total cases   18 and younger
1997   101,176       1,326
2000   203,310       2,123
2007   394,440       7,882

Source: American Society for Aesthetic Plastic Surgery
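The growth figures cited earlier in the article (nearly 500 percent for patients 18 and younger, about 300 percent across all age groups) follow directly from this table; the short sketch below is included only as a worked check of that arithmetic.

    # Worked check of the percentage increases implied by the table above.
    data = {
        "total cases": (101_176, 394_440),
        "18 and younger": (1_326, 7_882),
    }
    for group, (count_1997, count_2007) in data.items():
        pct = (count_2007 - count_1997) / count_1997 * 100
        print(f"{group}: up {pct:.0f}% from 1997 to 2007")
    # Prints roughly 290% for all cases and 494% for those 18 and younger.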

Underwater Plastic Waste Threatens World’s Food Chain

Marine scientists from the University of Plymouth say plastic waste accumulating in the oceans is becoming a devastating, toxic threat to the world’s food chain.

Although the spotlight has traditionally been on the dangers that visible items of plastic waste pose to seabirds and other wildlife, studies suggest billions of microscopic underwater plastic fragments are concentrating pollutants like DDT.  And researchers warn the risk of these hidden contaminants could be even more serious.

University of Plymouth’s Richard Thompson investigated the way plastic degrades in water, and the corresponding effect on tiny marine organisms, such as barnacles and sand-hoppers.

“We know that plastics in the marine environment will accumulate and concentrate toxic chemicals from the surrounding seawater and you can get concentrations several thousand times greater than in the surrounding water on the surface of the plastic,” he told BBC News.

“Now there’s the potential for those chemicals to be released to those marine organisms if they then eat the plastic.”

Once the plastic resides inside the gut, the toxins may then be transferred into the organism itself.

“There are different conditions in the gut environment compared to surrounding sea water and so the conditions that cause those chemicals to accumulate on the surface of the plastic may well be reversed – leading to a release of those chemicals when the plastic is eaten.”

Thompson said the plastic particles “act as magnets for poisons in the ocean”.

Thompson conducted his experiment using plastic carrier bags immersed in the water off a jetty in Plymouth harbor. He is now assessing the time it takes for them to fragment. In similar studies, he and colleagues have also added plastic powder to aquarium sediment to determine the amount ingested by various marine organisms.

Previous research on areas along the shoreline has shown that plastic pollution at the microscopic level is far worse than feared. Such studies have identified traces of plastic on every continent on the planet.

On the tiny Pacific island of Midway, Matt Brown of the US Fish and Wildlife Service reiterates the threat from plastic waste.

“The thing that’s most worrisome about the plastic is its tenaciousness, its durability. It’s not going to go away in my lifetime or my children’s lifetimes,” he told BBC News.

“The plastic washing up on the beach today … if people don’t take it away it’ll still be here when my grandchildren walk these beaches.”

On the Net:

University of Plymouth

Salmon Farming Tactics Produce Unhealthy Fish

Looking out over the low green mountains jutting through miles of placid waterways here in southern Chile, it is hard to imagine that anything could be amiss. But beneath the rows of neatly laid netting around the fish farms just off the shore, the salmon are dying.

A virus called infectious salmon anemia, or ISA, is killing millions of salmon destined for export to Japan, Europe and the United States. The spreading plague has sent shivers through Chile’s third-largest industry, which has laid off more than 1,000 workers, embittering local people.

It has also opened the companies to fresh charges from biologists and environmentalists who say that the breeding of salmon in crowded underwater pens is contaminating once-pristine waters and producing potentially unhealthy fish.

Some say the industry is raising its fish in ways that court disaster, and producers are coming under new pressure to change their methods to preserve southern Chile’s cobalt blue waters for tourists and other marine life.

“All these problems are related to an underlying lack of sanitary controls,” said Felipe Cabello, a microbiologist at New York Medical College in Valhalla who has studied Chile’s fishing industry. “Parasitic infections, viral infections, fungal infections are all disseminated when the fish are stressed and the centers are too close together.”

Industry executives acknowledge some of the problems, but they reject the notion that their practices are unsafe for consumers. American officials also say the new virus is not harmful to humans.

But the latest outbreak comes on top of a rash of non-viral illnesses in recent years that the companies acknowledge have led them to use high levels of antibiotics. Researchers say the practice is widespread in the Chilean industry, which is a mix of international and Chilean producers. Some of those antibiotics, they said, are not allowed for use on animals in the United States.

Many of those salmon end up in American grocery stores anyway; the United States is the destination for about 29 percent of Chilean salmon exports. While fish from China have come under special scrutiny in recent months, regulators here in Chile have yet to form a registry that even tracks the use of the drugs, researchers said.

The new virus is spreading, but it has primarily affected the fish of Marine Harvest, a Norwegian company that is the world’s biggest producer of farm-raised salmon and the exporter of about 20 percent of the salmon that come from Chile.

Salmon produced in Chile by Marine Harvest end up in Costco and Safeway stores, among other major U.S. grocery retailers, said Torben Petersen, managing director of Marine Harvest here.

Arne Hjeltnes, the head spokesman in Oslo for Marine Harvest, said his company recognizes that antibiotic use is too high in Chile and that fish pens located too close together have contributed to the problems. He said Marine Harvest welcomes tougher environmental regulations.

“Some people have advocated that this industry is too good to be true,” Hjeltnes said. “But as long as everybody has been making lots of money and it has been going very well there has been no reason to take tough measures.” He called the current crisis “eye-opening” to the different measures that are needed.

On a recent visit to a port south of Puerto Montt, a warehouse contained hundreds of bags, some as large as 1,250 kilograms, or 2,750 pounds, filled with salmon food and medication. The bags – many of which were labeled “Marine Harvest” and “medicated food” for the fish – contained antibiotics and pigment as well as hormones to make the fish grow faster, said Adolfo Flores, the port director.

Environmentalists say the salmon are being farmed for export at the expense of almost everything else around. The equivalent of some three to five kilograms of fresh fish are required to produce one kilogram of farmed salmon, according to estimates.

Salmon feces and food pellets are stripping the water of oxygen, killing off other marine life and spreading disease, biologists and environmentalists say. Escaped salmon are eating other fish species and have begun invading rivers and lakes as far away as neighboring Argentina, researchers say.

“It is simply not possible to produce fish on an industrial scale in a sustainable way,” said Wolfram Heise, director of the marine conservation program at the Pumalin Project, a private conservation initiative in Chile. “You will never get it into ecological balance.”

When companies began breeding non-native Atlantic salmon here some two decades ago, salmon farming was seen as a godsend for this sparsely populated area of sleepy fishing towns and campgrounds.

The industry has grown eight-fold since 1990. Today it employs some 53,000 people either directly or indirectly. Marine Harvest currently operates the world’s largest “closed system” fish-farming facility at Rio Blanco, near Puerto Montt, where 35 million fish a year are raised until they weigh about 10 grams.

As the industry now abandons the region in search of uncontaminated waters elsewhere, local people are angry and worried about their future.

The salmon companies “are robbing us of our wealth,” said Victor Gutierrez, a fisherman from a town on the Gulf of Reloncavi, which is dotted with salmon farms. “They bring illnesses and then leave us with the problems.”

Since discovering the virus in Chile last July, Marine Harvest has closed 14 of its 60 centers and announced it would lay off 1,200 workers, or one-quarter of its Chilean operation. Since the company announced last month that it would move to a region farther south, the government has said the virus had spread there as well, in two separate outbreaks not involving Marine Harvest.

Industry officials say Chile is suffering similar growing pains to salmon farming operations in Norway, Scotland and the Faroe Islands, where the ISA virus, in a different form, struck previously.

Norway, the world’s leading salmon producer, eventually decided to spread salmon farms farther apart, reducing the stresses on the fish, and responded to criticism of high antibiotic use with stronger regulations and the development of vaccines.

Researchers in Chile say salmon farming’s problems go well beyond the latest virus. Their concerns mirror those of the Organization for Economic Cooperation and Development in Paris, which heavily criticized Chile’s farm-fishing industry in a 2005 report.

The OECD said the industry needed to limit the escape of about one million salmon a year; control the use of fungicides like green malachite, a carcinogen that was prohibited in 2002; and better regulate the colorant used to make salmon more rosy, which has been associated with retina problems in humans. It also noted that Chile’s use of antibiotics was “excessive.”

Officials at Sernapesca, Chile’s national fish agency, declined repeated interview requests for this article and did not respond to written questions submitted more than a week before publication.

But Cesar Barros, president of SalmonChile, the industry association, said, “We are working with the government to improve the situation.” He dismissed the broader criticism of sanitary conditions, saying there was no scientific evidence to support the claims. But researchers charge that the industry has been reluctant to fund scientific studies, which Chile sorely needs.

Residual antibiotics have been detected in Chilean salmon that have been exported to the United States, Canada and Europe, Cabello, the microbiologist, said. He estimated that some 70 to 300 times more antibiotics are used by salmon producers in Chile to produce a ton of salmon than in Norway.

California Association of Health Plans & California Medical Association Foundation Team Up to Combat Obesity

SACRAMENTO, Calif., March 26 /PRNewswire/ — As Californians get physically fit for their summer activities, the California Association of Health Plans (CAHP) and the California Medical Association Foundation (CMA Foundation) have teamed up to encourage physicians to discuss healthier lifestyles with their patients by distributing the first-ever comprehensive toolkits to address overweight and obesity.

At a press conference at Sutter General Hospital in Sacramento, the two organizations unveiled the three toolkits they’ve developed to address overweight and obesity and to improve care and outcomes for adults, children and adolescents, and pre- and post-bariatric surgery patients.

This set of easy-to-use guides includes the first obesity toolkit ever produced for adults here in California and the first time that all three toolkits have been published together.

“As we get closer to summer, many Californians will look to fashion magazines or TV infomercials for the latest fad diets, rather than turning to their doctors who know best how to prevent and combat obesity,” said Chris Ohman, CAHP president and CEO. “These kits will encourage and foster discussion between doctors and their patients about achieving and maintaining a healthy weight.”

The toolkits will be distributed to physicians across the state so they can better assist their patients in weight management and obesity prevention. The toolkits will also be available to the public on the CAHP and CMA Foundation websites as well as participating health plans’ websites.

According to the former United States Surgeon General, obesity is “the fastest-growing, most threatening disease in America today,” and California is experiencing the fastest increase in adult obesity of any state in the nation. The direct and indirect cost of obesity is $100 billion per year nationally. In California alone, it is $28.5 billion.

“Obesity is second only to tobacco as a preventable cause of death, and overweight adults have a significantly higher risk of disease,” said Dr. Frank Staggers, Chair, CMA Foundation Board of Directors. “Less than one-third of overweight patients report being counseled by their physicians regarding obesity, yet in a recent survey nine out of 10 Californians said they want their doctors to be their primary source of information about nutrition, physical activity and other issues associated with weight management. That is why the CMA Foundation and CAHP are working together to reverse the obesity trend – starting in the doctors’ office.”

The result of this unique collaboration is the “ultimate” package of obesity toolkits, which includes:

   -- Guidelines and policy statements on obesity prevention, weight management, diet, physical activity counseling, body mass index (BMI) screening and other measurements
   -- Effective communication techniques to help patients make decisions
   -- Culturally appropriate, ready-to-copy materials and handouts
   -- Identification of internet tools and information
   -- Strategies for managing overweight patients
   -- Patient education resources

These three toolkits were developed in response to Gov. Arnold Schwarzenegger’s challenge for the state’s community organizations and companies to take action to address obesity and weight management issues during his 2005 Summit on Health, Nutrition and Obesity. CAHP accepted that challenge by agreeing to create clinical education toolkits. At the same time, the CMA Foundation was hearing from physicians throughout the state the need for a uniform set of resources to help them address the prevention and management of overweight and obesity in their practices. The CMA Foundation partnered with CAHP to meet the challenge.

Kim Belshe, Secretary of the California Health and Human Services Agency, said that the toolkits are an important resource that will help address obesity in California.

“Governor Schwarzenegger challenged leaders outside government to help make California the nation’s model for health, nutrition and fitness,” Belshe said. “I’m pleased the California Medical Association Foundation and the California Association of Health Plans have stepped up to meet this challenge, providing resources to help people understand the serious health implications of obesity and encourage them as they work to lead healthier lives.”

This unique collaboration of health plans and doctors brought together an expert panel of 53 practicing physicians, health plan medical directors and other healthcare professionals to develop these guides to assist doctors and their patients in the fight against obesity.

To view the toolkits, please visit http://www.calhealthplans.org/ or http://www.calmedfoundation.org/projects/obesityProject.aspx.

CAHP is a statewide trade association representing 40 full-service health plans. Through legislative advocacy, education and collaboration with other member organizations, CAHP works to sustain a strong environment in which our member plans can provide access to products that offer choice and flexibility to the more than 21 million members they serve. For more information, please visit http://www.calhealthplans.org/ or call (916) 552-2910.

The CMA Foundation is a nonprofit organization that serves as a link between physicians and their communities. The Foundation champions improved individual and community health through a partnership of leaders in medicine, related health professions and the community. For more information, please visit http://www.calmedfoundation.org/ or call (916) 551-2550.

California Association of Health Plans; California Medical Association

CONTACT: Nicole Kasabian Evans, +1-916-502-2756 (cell), Elissa Maas, +1-916-712-7547 (cell), Allyn Davis, +1-916-337-6517 (cell), all for California Association of Health Plans and California Medical Association Foundation

Web sites: http://www.calhealthplans.org/ and http://www.calmedfoundation.org/