Pregnancy Interrupted: Loss of a Desired Pregnancy After Diagnosis of Fetal Anomaly

By McCoyd, Judith L M

Abstract Prenatal diagnostic techniques both enable and force women and couples to make decisions about whether to continue a pregnancy where the fetus has an anomaly. Few studies have explored the decision-making and bereavement processes of women who terminate a desired pregnancy after the discovery of a fetal anomaly. This article reports the qualitative results of a study designed to explore these processes while placing them within the context of the societal milieu. Findings are reported as themes that emerged from the 30 intensive interviews conducted with women at varying stages after this experience. These include mythic expectations based on denial that anomaly could occur, misconceptions about the nature of prenatal testing, and inaccurate expectations about the experience and duration of grief. Further, the contradictory norms in society are shown to create additional dilemmas for women as they attempt to gain support and understanding following their loss. Suggestions for how providers may assist women with their grief are incorporated.

Keywords: Prenatal diagnosis, abortion, pregnancy termination for anomaly (TFA), perinatal bereavement, psychosocial response to abortion, feeling rules

Introduction

Prenatal diagnostic techniques both enable and force women and couples to make decisions about whether to continue a pregnancy where the fetus has an anomaly. These decisions are considered individual, yet they are made in a medical culture that pushes for the most sophisticated diagnosis and treatment [1], often within a culture such as the United States that stigmatizes abortion. Kersting and colleagues [2] comment on the paucity of studies of psychosocial response to termination of pregnancy due to fetal anomaly (TFA/TOP) and add to the knowledge base via their survey. Most current research reports quantitative methods which use psychological scales to deduce mental health effects of TFA [3-6] or utilize demographics to assess likelihood of termination given a certain severity of anomaly [7-11]. Few have explored the processes of decision-making and/or mourning [12-16]. This article reports a qualitative study of women after TFA. Qualitative studies allow for exploration of the medical decision-making process and emotional responses in more depth for each individual respondent. Although quantitative studies are generalizable and capture trends, qualitative studies capture the rich, nuanced, detailed aspects of the experience and elaborate on the process/es involved. They provide more information for understanding clinical aspects of the experience of patients as they traverse the medical system from prenatal care to diagnosis to termination.

Women typically struggle to make decisions about whether to continue an affected pregnancy while isolated, frightened, and confused about the "feeling rules" [17,18]. Women desire these pregnancies and bond with the offspring they hope to birth, but then decide to terminate the pregnancy when they discover the anomaly. This dilemma is exacerbated by the fact that implicit and contradictory societal "rules" tell women that they must love their fetus from conception, but that they should not deliver a disabled child. There are misconceptions on the part of family members and medical staff that those who terminate a pregnancy will not experience grief [19], yet research has consistently shown that grief and trauma occur [2-6,12,15,16,20,21].

This study was designed to explore how women, with little normative guidance, experience and make sense of their pregnancy loss within the framework of society, medical culture, and relationship with family and friends. The woman’s psychological processes as she bonds to, and then separates from, the fetus are explored and analyzed within the context of prevailing societal expectations. This article describes the themes that emerged from intensive interviews with women after TFA.

Methods

Intensive interviews about decision-making, grief and supports were conducted with 30 women [22]. Each woman also completed the short form of the Perinatal Grief Scale [23,24] for purposes of triangulating quantitative data with the qualitative data to enhance rigor. Qualitative interviews with four physicians and a focus group with perinatal social workers provided additional opportunities for triangulation of data. Twenty of the interviews were conducted by e-mail, a new method [25] which allows more longitudinal qualitative data to be collected. All interviews yielded rich data about the way women across the United States have experienced TFA.

The data collection took 10 months, extending from November 2001 to August 2002. Women were recruited via letters of invitation distributed by obstetricians and perinatologists in the researcher’s geographical area. The same letter of invitation was posted on a web site devoted to supporting women who have “interrupted a pregnancy due to fetal anomaly”.

Aside from two Hispanic women and one Asian woman, the study group was white. Ages ranged from 21 to 45, with most respondents between 31 and 35. One quarter were high school graduates, half had Bachelor's degrees, several had Master's degrees, and five had J.D., M.D., or Ph.D. degrees. This may reflect the fact that most women in the US who have access to prenatal diagnostic testing and subsequent TFA tend to be from the upper middle and upper classes and have insurance that will cover these expenses. Anyone covered by federal funds (Medicaid, military coverage, state and federal employees) may not receive coverage for terminating a pregnancy. Nearly all subjects identified spiritually in some way, including 10 Protestants, eight Roman Catholics, and five Jews. Other faiths were also represented.

The theoretical sampling strategy originally was limited by inclusion criteria of:

(1) having a desired pregnancy within a committed relationship;

(2) where the fetus was found to be anomalous;

(3) and the decision was made to terminate the pregnancy between the 16th and 24th weeks of estimated gestational age (EGA);

(4) and this experience occurred within the prior year.

Modifications of the theoretical sampling strategy were made after saturation of the themes occurred. It became clear that sampling women who had terminated both earlier and later than the 16-24 week window would add further data of interest. Additionally, after several women whose terminations had occurred more than one year earlier responded to recruitment letters, the researcher interviewed them to gather further data about the way the process of grieving occurs over a more extended time after a TFA. One theoretical category that could not be "filled" [26] was that of someone who experienced very little to no grief after TFA.

The interviews were all transcribed verbatim and coded for emergent themes using grounded theory methods [27]. The themes emerged in ways that were framed by ecological systems theory [28] via analytic induction and the theoretical sampling strategy [29-31]. As in all qualitative research, the goal is not randomization and generalization in the traditional quantitative sense, but a rich description via narrative data and analysis of the themes that characterize the experience of a population.

Results

The following section reports the themes that emerged from analysis of the data and fall into two broad categories of results. The results from the quantitative scale, the focus group and physician interviews were consistent with the findings reported here. The first category, “mythic expectations”, relates to the set of expectations that individual women have as they become pregnant and soon after the termination. The second category, “exquisite dilemmas”, relates to the dilemmas that emerge from the women’s narratives as they consider their experience within the societal and medical context of their lives. These expectations and dilemmas are exemplified with quotes that capture the themes that emanated from the study group more generally.

Mythic expectations

Women do not arrive at ultrasonographers' or perinatologists' offices with a "blank slate", but with a set of expectations derived primarily from societal messages. These create a context in which the actual diagnosis of a fetal anomaly is not only traumatic in and of itself, but also in direct conflict with the myths and expectations with which they arrive. These mythic expectations fell into seven categories.

Our baby would be fine. Women who cope well with pregnancy must use a certain degree of denial about the possibility that something could go wrong with the pregnancy. For the vast majority of women, this allows them to cede control of their lives to the momentous changes occurring in their bodies and social relationships. For women who do experience something going wrong, however, their beliefs about the way the world works are called into question. Often, this is the first "bad" thing that has ever directly affected the woman, and she must expend energy overcoming the denial in order to begin to process the fact that a diagnosis has occurred and requires that she actively make decisions. Zelia [32] shows how decisions previously made on the basis of denial are quickly revised:

After our country was attacked on 9/11, I felt ready to try again. My husband is active duty Air Force, and I knew he was about to "get busy". So I went off the pill on Oct. 1 and by the end of the month I was pregnant. We were so happy. Because of my age, there was a 1/130 chance of a chromosome problem. And after a healthy daughter, I was sure we would have another healthy child. The result of the AFP was devastating: 1 in 10 chance of Down's Syndrome. I always thought I would not have an amnio – we are Catholic – but I changed my mind in an instant. I had to know if my baby was OK.

A respondent who has a Ph.D. in a biomedical area said:

I decided to put the thought [that something could go wrong] out of my mind given the odds were overwhelmingly in our favor for a healthy baby. I was still in my early thirties and was the picture of good health. I was taking every step I could to ensure an ideal pregnancy. Besides, we had no history of abnormality in either of our families. These things happened to other people.

This quote and the academic credentials of its speaker reveal the powerful nature of mythic expectations to negate cognitive knowledge about the nature and purpose of prenatal testing.

Many expressed a similar feeling that by following the rules of good prenatal care, they have “contracted” to receive a healthy baby. Lorrie said:

I think Matt and I had the feeling that most parents do – no history of birth defects in our family, we are young and healthy, I took prenatals and got good care, ate right, etc., so that meant that our baby would be fine.

The combination of generally healthy denial and a sense of contractual entitlement combine to make the diagnosis of an anomaly even more traumatic as expectations are violated.

I thought I was home free – no miscarriage in the first trimester. Many women are aware that early miscarriage is a possibility [33-35] and nearly all in this study asserted some version of this statement. In fact, if they consider a potential problem at all, it is that they expect that a miscarriage could occur "if there is something wrong with the baby". They often breathe a sigh of relief when they pass the 12th week as they believe they have dodged the potential for miscarriage. Ricki had a miscarriage previously and had little idea that her next pregnancy would be affected by spina bifida:

We had had a miscarriage a year earlier, so we were, um, I think holding off on getting attached to the baby in the first three months because we thought that our baby's problem would be miscarriage, that would be the thing to be afraid of, so when three months passed, we were overjoyed.

There is an embedded myth that if there is a genetic or other anomaly, the pregnancy will be spontaneously aborted. Women often believe that the passing of the first trimester means that the fetus must be healthy. Tracy commented:

I think women are trained to be at least a little hesitant about their pregnancies until the first trimester is over – most everyone I know breathes a sigh of relief when they make it past that hurdle. I bet most women feel that if they don’t miscarry (by then), the baby must be fine because otherwise their bodies “would take care of it”. I know I felt that way. So second trimester findings come like a two-by-four to the head!

Once again, the violation of the mythic expectations and the assumptions embedded within adds to the level of distress as medical beliefs come into question.

I wouldn’t terminate anyway. Women who are still operating under the denial mentioned above also have the luxury of believing that they would never join the ranks of the women who would terminate a pregnancy. Indeed, many respondents were quite vociferously “pro- life” prior to diagnosis and struggled not only with their grief, but with the fact that they had asserted “I wouldn’t terminate” and then go on to do precisely that in the face of an actual diagnosis. Tracy (who terminated for Down’s Syndrome and cardiac anomalies) reported:

… my husband and I thought we would keep the baby if it was 'just Downs'. I even decided against the triple screen test because I knew I would keep it… But there is nothing like hearing those words to your face, and I am no longer confident what I would do in that situation. I never speak for myself anymore unless I'm actually in the situation because it's so, so hard to know how you would actually react. Reality of the fetal anomaly news hits so much harder than conceptual thinking takes into account.

Indeed, one respondent had been the chair of her state’s Right to Life chapter. Olina expresses a belief that one third of the women had – that only unwanted pregnancies were terminated and that, by extension of the logic, they would never terminate a pregnancy because these were desired pregnancies. She says:

When the doctor suggested I consider terminating the pregnancy, I grew furious. As a married woman who desperately wanted a child, and as a Christian, I had never given much thought to the abortion issue. I assumed that only women with unwanted pregnancies had the surgery. I wanted my twins to live. I kept thinking of how this was just not happening to me. There was such pain in my stomach, my soul really just throbbed and ached. I can remember how that felt, even right now. “We’re not going to do that”, I thought. “There’s no way”.

Even women who assert that they are pro-choice often made the above assertion believing that they upheld other women’s rights, but would never utilize that option themselves. A woman is confronted with the fact that she is, indeed, someone who would (and did) terminate a pregnancy.

Testing is nothing to worry about. A corollary to the idea that one would never terminate is the expectation that testing is nothing to worry about. Indeed, medical providers often reassure women, despite the fact that testing is for the purpose of identifying poor outcomes. Nearly all the women in the sample (27) reported doctors and nurses who told them there was “nothing to worry about” or that everything would be fine. Women then felt betrayed by the medical establishment and felt that their expressed fears had been unheard and dismissed when they had validity. Adele had expressed multiple fears to her obstetrician and been reassured constantly until she arrived in the hospital emergency room with vaginal bleeding:

They drew blood and apparently my ph (I hope I have it right) was not good. I was told by a nurse that I was probably going to lose the baby that night and to collect what came out for testing… The previous week, my OB/GYN told me the chances of a miscarriage was so small because of how far along I was, now we were hearing this… The ultrasound report was not good or normal, as I was told… So, Monday came and I called my doctor. I did not tell him I had the test results, and I really don't know why, but probably because he delivered my second child and I really liked him and didn't want to have a confrontation with him. He told me no, I was not going to have a miscarriage, that the hospital nurse was wrong.

The reassurances continued for another week until she was finally told that there were signs of “a syndrome – I can’t remember the name” and that it was unlikely the fetus would survive. Breena was pregnant for the first time and had expressed fears about her pregnancy because she had been in Manhattan on September 11, 2001.

Another doctor saw me and said everything was fine, and I took the AFP test then because it just seemed like I should. I mean, everyone takes it and she said 'nothing to worry about' and I didn't give it a second thought. It was just OK. So then the doctor called me on Monday night and said my AFP was a little high and she also said there's a high rate of false positives so there's no need to worry. And I said, 'NO NEED?' And I was supposed to have the ultrasound that same week… the fun one where you find out the sex and all. So it took a few minutes to register that it could mean a problem.

When medical providers acknowledge that there could be something wrong and that the testing is designed to explore such a possibility, women can begin to use anticipatory coping strategies and to form more realistic expectations. Women who enter the testing expecting "fun" and information about the sex of the fetus often had the most difficulty with feelings of betrayal.

The right decision couldn't possibly hurt this much. Once women decide to terminate, they move into a period of shock and numbness that is protective during the days or weeks until the termination procedure is done. This protective numbness often extends into the first days and even up to 2-3 weeks after the procedure. The vast majority experience intense grief and guilt in the following days and weeks, peaking at 3-6 weeks after the procedure. They describe "just want[ing] to die" or "feeling crazy". This is a normal aspect of intense grief inspired by loss [36]; however, many women interpret these feelings to mean they must have made a poor decision. They implicitly believe that the "right" decision would not have caused such pain. Felicia recognized the fallacy in this only after meeting with other couples who had TFA'd:

When you feel that bad, you think you made the wrong decision. And to look around the group and see other people feel that bad, and that first night you're thinking, 'Wow! We all made the wrong decision – look what a bunch we are!' And then you go back and you start questioning other people's lives in addition to your own decisions. But it breaks in – just because we all feel really bad DOES NOT MEAN that we've all made the wrong decision.

Frances recognized this dynamic, and its implications, for herself, but not until two years after her loss. She wrote:

In the days that followed, I could not eat or sleep. I was obsessed by the idea that he was out there somewhere, cold, hungry, frightened, and looking for me. I cried endlessly over his photos [ultrasounds]. In my more hysterical periods, I would scream over and over that I wanted my baby. I was convinced we had made a mistake. The right decision could not possibly hurt this much… It took a long time before it occurred to me that had we continued with the pregnancy, I still would be crying and in agony. I would be mourning the healthy child we would not be having, mourning the sort of life my son would have to face. Whether we had continued the pregnancy or not, I would be crying over my son for the rest of my life. I now realize the tragedy of the situation isn't so much that my son's life is over. It is that he had a severe anomaly.

It is a critical finding that more than three quarters (22) of the women in this study "just wanted to die". Sarah says "I felt I had no purpose in life anymore. Life had truly lost its meaning. My body was screaming for motherhood and my baby was gone". Although they eventually believe they made the "wise" decision, shortly after implementation of their decision, they often seem to cope with varying levels of passive suicidal ideation.

By the time of the first doctor visit, "the healing would be all done by then". In a society of quick fixes and the popularity of simple approaches to complex tasks (see the How-to-do-it-for-Dummies books [37]), women often recognize that loss will entail grief, but they also expect that the grief will follow a set of stages and be done quickly. Many believe they will feel better by the time of the first physician visit after the TFA. Since most find grief intensifying for the first 3-6 weeks and lasting until the due date [16], this expectation leads women to believe there is something wrong with their grief experience. Felicia was able to describe this well:

Daniel [her husband] had said to me through this process, 'just wait until we go see Dr. N for the check-up' – kind of like it was a touchstone and I would be all better. Kind of like the healing would be all done then – and I bought that, because I wanted to believe it. So after I wasn't better by then, I got back in touch with the genetic counselor and said 'could you give me those names… I need those names' [of counselors and support groups]…. And I talked to my friend Diana last night and she sent me this email that says something like 'Felicia, I was operating on all my cylinders today – physically, emotionally, spiritually, intellectually'. And I thought, OK, if it took her from March to July to work on all her cylinders, it's all right that I'm not there yet. Knowing her own situation and what she had to go through gives me the validation… it's going to take awhile. It doesn't go away when you go for the follow-up check-up.

The work of Kübler-Ross [38] was ground-breaking: most in this study were aware of her work and the stages of grief she theorized, and they expected to "work through them" in a relatively brief period of time. Grief is seldom informed by accurate expectations about the length and depth of the experience [36,39], and chosen losses are often dismissed and expected to entail less active grief. Yet all in this study experienced some grief, with the majority exhibiting grief reactions quite similar to other types of perinatal bereavement [19,40-43].

I was very afraid of running into someone who would pass judgment. This statement reflects a mythic expectation that defines the milieu in which the woman sees herself. It leads her to be secretive about the experience she has been through, reluctant to seek support from friends and family as she processes her grief, and isolated at the time she is most in need of empathy [12]. The women who took the chance to talk to others about their experience most frequently met less criticism than expected or were able to utilize defenses if judgment did occur. Ricki comments:

I was looking for things to read at first, because you're not quite ready to expose yourself to people yet: you're still afraid of judgment – you're just afraid of everyone, because you don't want anyone to know. And telling people is hard and each time they don't reject you, it feels great, but you're still very much in your shell – so reading is good.

The stigma of abortion, particularly in the US, heavily influences women’s grief process as they harbor a mythic expectation that they will be judged negatively. Yet, without risking this judgment, women are left with few supports and little ability to mobilize their own defenses against the internalized sense of stigma they often develop. Marilyn talks about refusing to divulge her story to anyone:

I do think that bearing the secret was an additional stress on me. No question about it, but telling them the truth would have, or so I imagine[d], be additional stress as well, just a different kind of stress.

Women who take the risk of pushing past their reticence frequently find the support they need from others. Ironically, in the face of harsh judgment, three women were able to mobilize the righteous anger they needed to reject internalized stigma.

Down's syndrome was the worst that it could get. A final mythic expectation is the idea that Down's syndrome is the only serious anomaly that is diagnosed and is the worst-case scenario. Respondents commented that they assumed that since Down's syndrome is associated with older maternal age, they must be safe when younger than 35. Yael was 34 when her fetus was diagnosed with a variant of Ivemark syndrome (missing pancreas, spleen, gonads):

It’s just the whole – everything you hear and read and see, it’s like when you’re 35, something happens. Something happens at 35 and you’re going to have a baby with Down’s syndrome, or whatever. And honestly, before all this happened, I thought Down’s syndrome was the worst it could get.

Breena was 31 years old when her female fetus was diagnosed with Triploidy:

I said to the doctor, like I knew trisomy 21 and I'd learned about 13 and 18 and I didn't know if you could have like 22, but I never heard of all of them. And I asked her and she said "Oh, that's the worst. I don't like to compare genetic conditions, but this is really bad. The average lifespan of a triploidy baby is two hours. And to tell you the truth, with your baby's condition, we're shocked that she's still hanging on". … Now I know all the things that can go wrong. Six months ago, I thought Triploidy was the worst thing that could happen to anyone, but now I realize that they're all horrible in different ways.

Some women in this study had the unfortunate education of finding out about diagnoses such as trisomy 18, triploidy, hypoplastic left heart syndrome, and Ivemark syndrome. Others found that the less serious anomalies (such as Klinefelter’s syndrome and others that are not incompatible with life) create a further difficulty in making decisions that feel ethical in the face of uncertainty about the levels of impairment. When women have considered the possibility of fetal anomaly, they typically consider Down’s syndrome; when they discover another anomaly, they are in shock that other anomalies occur.

Exquisite dilemmas

The mythic expectations above relate to a woman’s individual beliefs. These intersect with a set of dilemmas that are inherent in the contradictory norms and feeling rules that society prescribes. These then exacerbate the basic dilemma of having to make a decision about whether to end the existence of a loved entity in order to spare it from a perceived worse existence on earth.

The dilemma of conception and bonding

Dilemmas of pregnancy and the experience of TFA begin before an anomaly is even diagnosed. Before a woman becomes physically pregnant, she is often “a little bit pregnant” [13] or has already started her reproductive story [44] and begun to bond and fantasize about an entity that is purely in mental form at that point. Carole describes this:

For me, I started planning for that baby long before I was even pregnant – I stopped drinking, cut down on coffee – made lifestyle changes in anticipation. Then, from the beginning of the pregnancy, we are encouraged to prepare – buy this, buy that, hear the "horror stories", accept the life inside you – don't do this or that – the focus of my life changed as I became the vessel (for lack of a better metaphor). Then to find your child is not healthy – it is unimaginable.

Feeling rule prescriptions encourage early prenatal bonding that focuses on the “life” of the fetus. This active “conceiving” process of creating a view of the fetus as one’s child, even prior to its physical conception, is underway as women follow typical pregnancy behavior in US culture [45]. Finding out sex and creating an identity is part of this as Erin shows:

We also wanted to see the lab results, just to know for sure. And we hadn't wanted to know the sex, so Ellen copied the lab results and put a sticky note over the sex, but the sex showed through the sticky note and as soon as she walked out, I said, "It's a girl, it's a girl" and he said "yeah, I know" [crying and pause].

J – Why had you decided not to find out the sex?

E – I don't know – trying to stay detached and trying not to put an identity on the baby. Which is, I guess, leading to the naming part, because then it felt like we should give her a name. But it feels sort of dual; first, once I name her, then I know she's really gone [crying openly] and the other thing is, it makes her more of a personality or a person who's gone.

This same behavior opens the women to higher levels of grief and guilt when a decision must be made. Sarah talks about how this bonding impacted her grieving:

When we found out we were having a son at 19 weeks, he became part of our family. He was given a name and we bonded with him immediately. We had plans and dreams… Jared was a living, breathing little baby to us, in life and in death… We thought of him as our son, our first child, and that he'll always be. I remember some of the cruelest comments were regarding his name. "Why don't you save the name Jared in case you have a living child". Jared was the name we gave him and it was his and nobody else's.

The dilemma of conception and bonding means one must either breach the rules of early bonding in order to remain emotionally distant until the fetal health assessment, or one must be vulnerable to high levels of emotional pain if a decision to TFA must be made after becoming attached to the fetus.

The dilemma of testing

Medical anthropologists in the US and around the world have found that ultrasound screening is typically presented to pregnant women as a “fun” experience, a way to “see the baby” and possibly learn its sex [46-48]. They find that ultrasound technologists frequently view “promoting bonding” as one aspect of this form of prenatal screening [49-52]. Victoria describes this:

You see, I saw the baby at 7 weeks and it was just a little heart inside a blob. Then the day of the amnio and level II, he was a baby. I saw the baby and I forgot why I was there. I just saw my baby moving around. He looked normal to me. Then she asked us if we wanted to know what we were having. As she said this, I could see for myself it was a boy. I had wanted a healthy baby first and foremost, but deep down, I wanted a boy for Byron. Then as they did the scan and saw the indicators for Downs, I just started sinking… I am glad I got to go and see the level II. I guess I knew it would be the last time I would see him. I have to stop here a minute – I am a little bit overwhelmed all of the sudden – OK, so I guess I feel that I was brought closer to him, but then it also made it more difficult because I knew what was going to happen to him.

Indeed, Gray [53] comments “the ultrasound machine not only represents the child, it helps to make a child of the fetus, something that used to happen at birth” (p. 92). Prenatal testing is viewed as a “way of getting to know the baby”, rather than acknowledged as a fetal health assessment tool. As obstetrical care providers, we often take the view that promoting bonding is always good. Even so, this exposes the woman to even greater emotional stress in cases where decisions must be made about whether to continue the pregnancy. Zelia shows how she experienced this:

Ever since my first pregnancy ended in miscarriage at 10 weeks, I have tried not to bond with my babies prenatally. I know that sounds awful, but I am a pessimist and it is hard for me to be positive after all of this… I really did not worry much until the AFP came back – then I stopped touching my belly and tried to ignore every movement of the baby… I do not recall wanting to look at the ultrasound screen during the amnio, and my doctor kept saying “the baby looks great” and I wished he would just shut up.

Consenting to prenatal screening or diagnosis is assumed to be a sign of taking good care of the pregnancy, and is hoped to guarantee a good outcome. Women enter the procedures believing they are getting the best prenatal care and expecting to enjoy a bonding experience. When they are told of an anomaly, they go from anticipating fun and bonding to having to consider ending the pregnancy. Ricki describes this:

I guess I took it for granted. I mean, everyone gets an ultrasound now. It doesn’t seem like something you do to look for health things, it seems like a fun thing you do to get to see your baby, so – even though I know it’s not, it really is for looking, but I guess I just took it for granted.

The only available medical response to most diagnosed fetal anomalies is the socially stigmatized procedure of abortion. Barring the option of fetal surgery (not useful in genetic disorders), women are confronted with a choice: they can either deliver a baby with significant health problems, or terminate the pregnancy in a society that stigmatizes such an option. Their consent to testing has led them to a decision where there are no good options.

The dilemma of choice

Women are told at the time of diagnosis that they must make a choice about whether to continue the pregnancy. Choice implies that there is a good option, yet neither choice will allow the mother’s goal of a healthy child. Women expect that there must be a good or ‘right’ choice and instead are confronted with a true dilemma – “a choice between two evenly balanced alternatives, both of which are usually unpleasant” [54].

Additionally, the word “choice” has connotations connected to the abortion debate in the US. Yet, the polarized “camps” of abortion have no room for women in this situation. The pro-choice group cannot accept the love the woman feels for the entity she calls her “baby”; the pro-life group cannot condone the woman’s willingness to terminate the pregnancy. After complaining about “pro-lifers”, Nanci says:

The pro-choicers aren’t much better. It’s hard to use people like us as political propaganda when we freely admit that we aborted our BABIES. When so much of the debate is based on when does life begin and when we proclaim that our babies were alive and are our children. That makes it messy.

Neither group will embrace a woman who is emotionally attached to her “baby”, but who is also willing to consider (and indeed follow through on) having an abortion. The women are left with little support and no advocacy group to embrace the sort of “choice” they must make. Even so, for those who receive support from friends and family, the assurance often fails to comfort. Frances says:

I also am bewildered when people assure me “I also would have ended the pregnancy”. A decision like this is ultimately based on a thousand unanticipated and very personal factors unique to your exact circumstances.

The simplified poles of the abortion debate in the US leave no room for the complexity and uniqueness of each woman’s decision.

The dilemma of identity

Once an anomaly is diagnosed, a dilemma of self-perception arises. The woman must choose between the identity of being the mother of a disabled child or of becoming a bereaved mother. Many friends and family send mixed messages about whether a pregnant woman is viewed as a mother at all, often giving Mother’s Day cards and gifts prior to a birth, but withdrawing recognition of mother-status if the woman does not go on to deliver a baby, as Wendy describes:

That first Mother’s Day was so hard. The second Mother’s Day, after I was pregnant with Walt [her healthy second son], my mother said, “Maybe next year, you or your sister will be a mother”. And that set me off. And I said “Well, what am I now?” and she said “Oh stop, you’re just a woman who lost a baby”. And I said, “Well, I was a mother last year when you all gave me gifts”. My mom just gave me this look and she said, “Well, at that time we thought you’d be a mother” – so it was a big huge fight. So, OK, now I’m recognized as a mother.

There is an additional quality to this dilemma. “There is a feeling within US culture that ‘good’ mothers do not have ‘bad’ babies” [55,56]. This means that women who have children with disabilities, especially disabilities diagnosed prior to birth, are seen as breaking the rules by giving birth to those children [57,58]. Yet women who elect to end the pregnancy are not recognized as mothers and are frequently judged negatively for having a stigmatized medical procedure. Their dilemma is whether to become the mother of a disabled child (a “bad” mother) or to go unrecognized as a mother and be branded with the stigma of having an abortion. Victoria says:

It is so easy for them [anti choice people] to say that what I did was wrong, but they are not the ones who have to watch the baby and my other children suffer. Codey would never have left the hospital. He would have been connected to tubes and wires for as long as he was alive. It doesn’t seem humane to me to do that to a baby. They don’t think terminating is humane. I guess everyone is entitled to their own opinion. It is one of those issues people have strong beliefs about… I guess it comes down to “unless you have walked a mile in my shoes”.

The dilemma of disability

US culture claims acceptance of all people. Laws protect people with disabilities, yet little practical support is available for those whose disabilities limit their opportunities to earn a viable income. Women are once again faced with a dilemma: they experience love for the fetus, but they also struggle with the meaning of not accepting a potential child with a disability. Frances describes her struggle:

My husband and I were faced with a moral dilemma no person should ever have to face. Do we sentence our son to death, or do we sentence him to a lifetime with a severe abnormality? These were our only options. Either choice would lead to tremendous suffering and regret. I have heard this situation described as choosing between having your left hand or your right hand cut off. Having this knowledge about our son forced us into making a decision one way or the other.

Women feel that if they were more capable, good, or loving, they would be more accepting of a disabled child. Even if rejection of disability is not a factor, women realize that they and their families will be required to handle the burden of care, economically, physically and emotionally. They are unwilling to knowingly take on this set of challenges. Beatrice epitomizes this:

I have guilt for not being the kind of person who could parent this particular type of special need. There is a lot about this [chromosomal] deletion that looks like a mental illness. I grew up with a sister who is schizoaffective. I have a lot of fear about that… that was my biggest fear – Being utterly exhausted, sleep deprived and depressed and having a mentally ill or out of control, non-sleeping, bizarrely energetic boy. And lo and behold, those very characteristics are part of the behavioral phenotype that has been identified for the chromosome deletion: it was terrifying.

A sizable minority (12/30) of the study group discussed the stigma that attaches to disability in the US. They understand that the lack of economic support is only one part of a culture that claims acceptance but ultimately ostracizes people with visible disabilities. The lack of financial support was a critical factor for Urika:

I should backtrack here. It wasn’t so much about the quality of life she could have – it was also about our immediate situation. Neither of us have living parents or immediate family that we’re close to. The relatives that we do have are nice on a superficial level, but not real helpful and we’re not very close. And we’re basically working schmoes – we have no savings. And I do know someone whose baby was diagnosed with cerebral palsy and seizures at 4 months old and they told the parents he probably wouldn’t live past his 12th birthday. He died at 25 and they went bankrupt twice. These are working people. And I said, we just can’t go through what they went through. We just don’t have the resources. It’s all about resources that we didn’t have. That poor woman who had that child 28 years ago didn’t have any choices, but I do and I think I have to put some foresight into this.

Six women spoke of having experienced, at some stage of life, the teasing most people endure for being different. They were not willing to give birth to children likely to be teased for much of their existence. They talked about the pain of feeling different and not belonging, and did not want that for their child. Women want to protect themselves and their living and future children from the stigma and the financial and emotional demands of caring for a child with disabilities, despite feeling love toward the “baby” they have conceived in their hearts, minds and bodies.

The dilemma of the whole story

The events of testing, decision-making and termination are cloaked in secrecy. The various stigmas attached keep women silenced about their experience. Women long for support, yet fear telling “the whole story” about the circumstances of their loss for fear of negative judgment. Marilyn explains:

I do think that bearing the secret was an additional stress on me. No question about it, but telling them the truth would have, or so I imagine[d], be additional stress as well, just a different kind of stress.

They titrate the story to the level of support they believe each listener can provide. When support is granted (i.e., expressions of sympathy are extended) in cases where the woman did not tell “the whole story”, she feels the support is ill-gotten and is unable to use it effectively. Sarah talks about this clearly:

I think I only told the people I knew would support me. To have anyone condemn me would have hurt too badly. I felt fragile as glass. The people I worked with (except two close friends) I simply told them “I lost the baby”. Then some people who I told we simply lost the baby due to a genetic condition commented on how brave we were to continue the pregnancy. This just added to the guilt… There were some people I wished I could share the whole story with, but in the end felt I just couldn’t risk my heart. Everyone I told was 100 percent supportive. I told all of the people closest to me. I wish I could tell everyone… Sometimes there is that guilt though of not telling the whole story.

Thus women are confronted with the dilemma of sharing the whole story and risking rejection or titrating the story and feeling any support they receive is ill-gotten.

Partners’ dilemmas of support

These dilemmas affect women’s partners as well. Although the study group comprised only women, they reported their perceptions of their partners’ responses. When the partner is male (as were all partners in this study group), there is a mismatch between gender roles and expectations. Deirdre found this problematic during the diagnosis and decision-making phase, when her husband kept telling her not to worry:

How bad was it? What kind of Down’s? Me being a teacher, I know the kind of life we’re talking. So he pretty much left it in my arms… The hardest part was, he didn’t really give me any feedback, nothing like what we should do. He wasn’t saying anything. And that really hurt me for a long time. That he didn’t say, “I agree with what you’re doing”. Of course, now that I look back at it, months after, I know what he was doing. If I would have said “we have to continue this pregnancy” and he’s thinking, “we can’t do this”, he knows what will happen. But at the time, I was very, very angry that he wouldn’t.

Male gender roles do not incorporate prenatal bonding, verbalization of emotion, and tolerance for tearful emotional expression in most situations. Although men experience these things, they may stifle their own emotional expression in order to be “strong” for their grieving partners. Women from the study group report this as a source of extreme stress within the relationship as they tend to interpret the lack of emotional expression as indicating a lack of attachment on the part of the partners to both themselves and the pregnancy they shared. Nanci describes how this extends to grieving:

About 3-4 weeks after Maddy’s burial, I really felt him not only pulling away from me emotionally, but also felt as though he was pushing me to “be okay”. I resented it. And I was terribly hurt and felt completely alone. I would come to him needing to talk and though I felt he would listen, he would also try to say things to make me feel better which I didn’t want to hear. I wanted someone to say, “Yes, you’re right. You should be sad. You should be angry. It’s okay to feel this way”. And he really couldn’t do it. I think he believed that if he gave me permission to really go deep into my pain that I would never come out.

Partners thus face their own dilemma: share their own attachment and grief, and thereby violate gender roles; or follow the gender-role prescription of minimal emotional expression and being “strong” so their wife would not “really go deep into [her] pain”, at the risk of alienating her and failing to meet her needs at a time of distress.

Conclusions

Women who decided to terminate a desired pregnancy after the discovery of a fetal anomaly experienced grief that mirrors that of spontaneous pregnancy loss; the responsibility for the decision complicates and appears to intensify that grief. Chosen loss is a concept yet to be explored in bereavement studies. When people experience a loss as a direct result of their own choice, such as TFA, clinical experience shows they often feel they are not entitled to their grief [59]. Yet these chosen losses are often the most challenging to grieve with adequate support and time. Many of these challenges relate directly to ambivalent societal messages about bonding, prenatal screening and diagnosis, abortion and disability. That ambivalence manifests itself in a lack of formal support resources.

The stigma of abortion and disability in the USA compounds the isolation, making it a major challenge to elicit and receive effective support for one’s grief. Support is further compromised by the titrated nature of the stories women tell about the circumstances of their loss. Fearing judgment, women refrain from telling “the whole story”, forgoing the ventilation and reconstruction that narrative therapies have shown to be so valuable [60], and leaving them unable to ask for and receive the empathy, validation and support they desperately need. Frequently, few formal supports are available beyond a post-delivery or post-surgical medical exam. The finding that nearly all the women in this study reported feelings of wanting to die should raise provider awareness of the need for such services to be available to women who have a TFA.

Women benefit from support in which the whole story can be told, particularly in support groups designed for couples who have had this experience. They benefit from a more nuanced understanding of the abortion debate, and from health care providers who ensure adequate formal and informal support resources and who allow them to process their feelings, expectations and dilemmas in an empathic manner. Successful interventions help women recognize that the anomaly itself is the source of the emotional pain: pain would exist whether they terminated or gave birth to a baby who then had to cope with medical and social challenges. Further, it is important to explore how this finding contrasts with the experience of women who give birth to children with anomalies.

Assistance in coping with the grief cannot be separated from an understanding of the way advances in technology affect the decision- making and grief process for these women and their families.

Acknowledgments

Dr. McCoyd would like to thank the American Association of University Women for an American Dissertation Fellowship award in 2002-2003, which helped to support this research. The comments of two anonymous reviewers are also appreciated.

Current knowledge on this subject

* Greater use of fetal screening and diagnostic testing leads to expanded use of termination of desired pregnancies.

* Women experience complicated grief and trauma in response to these losses, though studies of mental health sequelae are variously interpreted and primarily quantitative.

* Earlier gestational age, good social support and an anomaly that is incompatible with life all seem to make coping with this loss somewhat easier.

What this study adds

* Qualitative studies can explore the ongoing process of grieving and adjustment over time – this is one of the very few studies to do that.

* Discussion of the social support context is often mentioned, yet seldom the focus of the study as it was in this research.

* The ambivalent societal messages in the USA strongly influence women’s emotional responses and their coping abilities, as the women in this study reveal.

References

1. Fuchs VR, editor. Essays in the economics of health and medical care. New York: Columbia University Press; 1972. p 239.

2. Kersting A, Dorsch M, Kreulich C, Reutemann M, Ohrmann P, Baez E, Arolt V. Trauma and grief 2-7 years after termination of pregnancy because of fetal anomalies- a pilot study. Journal of Psychosomatic Obstetrics and Gynecology 2005;26(1):9-14.

3. Iles S, Gath D. Psychiatric outcome of termination of pregnancy for foetal abnormality. Psychological Medicine 1993;23:407-413.

4. Korenromp MJ, Page-Christiaens GCML, van den Bout J, Mulder EJH, Hunfeld JAM, Bilardo CM, Offermans JPM, Visser GHA. Long-term psychological consequences of pregnancy termination for fetal abnormality: a cross-sectional study. Prenatal Diagnosis 2005;25:253-260.

5. Korenromp MJ, Page-Christiaens GCML, van den Bout J, Mulder EJH, Hunfeld JAM, Bilardo CM, Offermans JPM, Visser GHA. Psychological consequences of termination of pregnancy for fetal anomaly: similarities and differences between partners. Prenatal Diagnosis 2005;25:1226-1233.

6. White-van Mourik MC, Connor JM, Ferguson-Smith MA. The psychosocial sequelae of a second trimester termination for fetal abnormality. Prenatal Diagnosis 1992;12(3):189-204.

7. Dormandy E, Michie S, Hooper R, Marteau TM. Low uptake of prenatal screening for Down syndrome in minority ethnic groups and socially deprived groups: a reflection of women’s attitudes or a failure to facilitate informed choices? International Journal of Epidemiology 2005;34:346-352.

8. Khoshnood B, De Vigan C, Vodovar V, Goujard J, Lhomme A, Bonnet D, Goffinet F. Trends in prenatal diagnosis, pregnancy termination, and perinatal mortality of newborns with congenital heart disease in France, 1983-2000: A populationbased evaluation. Pediatrics 2005;115(1):95-101.

9. Mansfield C, Hopfer S, Marteau TM. Termination rates after prenatal diagnosis of Down syndrome, spina bifida, anencephaly, and Turner and Klinefelter syndromes: a systematic literature review: European Concerted Action DADA (Decision-making After the Diagnosis of a fetal Anomaly). Prenatal Diagnosis 1999;19:808-812.

10. Rauch ER, Smulian JC, DePrince K, Ananth CV, Marcella SW. Pregnancy interruption after second trimester diagnosis of fetal structural anomalies: The New Jersey Fetal Abnormalities Registry. American Journal of Obstetrics and Gynecology 2005;193:1492-1497.

11. Schechtman KB, Gray DL, Baty JD, Rothman SM. Decisionmaking for termination of pregnancies with fetal anomalies: Analysis of 53,000 pregnancies. Obstetrics and Gynecology 2002;99(2):216-222.

12. Geerinck-Vercammen CR, Kanhai HHH. Coping with termination of pregnancy for fetal anomaly in a supportive environment. Prenatal Diagnosis 2003;23:543-548.

13. Gregg R. Explorations of pregnancy and choice in a high-tech age. In: Reissman CK, editor. Qualitative studies in social work research. Thousand Oaks, CA: Sage Publications; 1993. pp 49-66.

14. Liamputtong P, Halliday JL, Warren R, Watson LF, Bell RJ. Why do women decline prenatal screening and diagnosis? Australian women’s perspective. Women & Health 2003;37(2):89-108.

15. Rapp R. Testing women, testing the fetus: the social impact of amniocentesis in America. New York: Routledge; 1999. p 361.

16. McCoyd JLM. Pregnancy interrupted: Non-normative loss of a desired pregnancy after termination for fetal anomaly [dissertation]. Bryn Mawr (PA): Bryn Mawr College; 2003. p 281. Available from: Proquest, Ann Arbor, MI; 3088602.

17. Hochschild AR. Emotion work, feeling rules, and social structure. American Journal of Sociology 1979;85(3): 551-575.

18. Hochschild AR. The sociology of emotion as a way of seeing. In: Bendelow G, Williams S, editors. Emotions in Social Life. New York: Routledge; 1998. pp 3-15.

19. Zeanah CH, Dailey J, Rosenblatt MJ, Sailer N. Do women grieve following termination of pregnancy for fetal anomalies? A controlled investigation. Obstetrics & Gynecology 1993; 82:270-275.

20. De Puy C, Dovitch D. The healing choice. New York: Simon & Schuster; 1997. p 237.

21. Minnick M, Delp K, Ciotti M. A time to decide, a time to heal. 4th ed. St John’s, MI: Pineapple Press; 2000. p 110.

22. All research was approved by The Bryn Mawr College Institutional Review Board after a Full Review.

23. Toedter LJ, Lasker JN, Alhadeff J. The Perinatal Grief Scale: Development and initial validation. American Journal of Orthopsychiatry 1988;58:435-449.

24. Potvin L, Lasker JN, Toedter LJ. Measuring grief: A short version of the perinatal grief scale. Journal of Psychopathology and Behavioral Assessment 1989;11:29-45.

25. McCoyd JLM, Kerson TS. Conducting Intensive Interviews Using E-mail: A Serendipitous Comparative Opportunity. Qualitative Social Work Research and Practice 2006;5(3): 389-406.

26. Miles MB, Huberman AM. Qualitative data analysis. 2nd ed. Thousand Oaks, CA: Sage Publications; 1994. p 338.

27. Glaser BG, Strauss AL. The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine; 1967. p 264.

28. Germain C, Gitterman A. The life model of social work practice. 4th ed. New York: Columbia University Press; 1980/1996. p 490.

29. Denzin NK, Lincoln YS. Handbook of qualitative research. Thousand Oaks, CA: Sage Publications; 2000. p 1065.

30. Padgett DK. Qualitative methods in social work research: Challenges and rewards. Thousand Oaks, CA: Sage Publications; 1998. p 177.

31. Padgett DK, editor. The qualitative research experiment. Belmont, CA: Wadsworth; 2004. p 328.

32. All names are pseudonyms and all italicized text reflects study group quotes.

33. Davis DL. Empty cradle, Broken heart. Golden, CO: Fulcrum Publishing; 1997.

34. Ilse S. Empty Arms. Maple Plain, MN: Wintergreen Press; 1982/2002. p 80.

35. Layne LL. Motherhood lost. New York: Routledge; 2003. p 352.

36. Rando TA. Treatment of complicated mourning. Champaign, IL: Research Press; 1993. p 750.

37. For Dummies(TM) Book Series. Foster City, CA: IDG Books Worldwide.

38. Kubler-Ross E. On death and dying. New York: Macmillan Publishing; 1969. p 289.

39. Thompson N, editor. Loss and grief. New York: Palgrave; 2002. p 265.

40. Herman MR. Parenthood lost. Westport, CT: Bergin & Garvey; 2001. p 247.

41. Kluger-Bell K. Unspeakable losses. New York: Quill; 2000. p 180.

42. Kohn I, Moffitt PL. A silent sorrow. New York: Routledge; 2000. p 299.

43. Peppers LG, Knapp RJ. Motherhood and mourning: Perinatal death. New York: Praeger; 1980. p 165.

44. Diamond M, Diamond DJ, Jaffe J. Reproductive trauma: Treatment implications of recent theory and research. Presented: National Association of Perinatal Social Workers annual conference; San Diego, CA; May 11, 2001.

45. Eisenberg A, Murkoff HE, Hathaway SE. What to expect when you’re expecting. New York: Workman Publishing; 1991. p 454.

46. Davis-Floyd RE, Sargent CF, editors. Childbirth and authoritative knowledge: Cross-cultural perspectives. Berkeley: University of California Press; 1997. p 510.

47. Ginsburg FD, Rapp R, editors. Conceiving the new world order. Berkeley: University of California Press; 1995. p 450.

48. Mitchell LM, Georges E. The cyborg fetus of ultrasound imaging. In: Davis-Floyd R, Dumit J, editors. Cyborg babies: From techno-sex to techno-tots. New York: Routledge; 1998. pp 105-124.

49. Browner CH, Press NA. The normalization of prenatal diagnostic testing. In: Ginsburg F, Rapp R, editors. Conceiving the new world order: The global politics of reproduction. Berkeley: University of California Press; 1995. pp 307-322.

50. Browner CH, Press NA. The production of authoritative knowledge in American prenatal care. In: Davis-Floyd RE, Sargent CF, editors. Childbirth and authoritative knowledge: Cross-cultural perspectives. Berkeley: University of California Press; 1997. pp 113-131.

51. Georges E. Fetal ultrasound imaging and the production of authoritative knowledge in Greece. In: Davis-Floyd RE, Sargent CF, editors. Childbirth and authoritative knowledge: Cross-cultural perspectives. Berkeley: University of California Press; 1997. pp 91-112.

52. Morgan LM, Michaels MW, editors. Fetal subjects, feminist positions. Philadelphia: University of Pennsylvania Press; 1999. p 345.

53. Gray CH. Cyborg citizen: Politics in the posthuman age. New York: Routledge; 2002. p 241.

54. Webster’s II New College Dictionary. New York: Houghton Mifflin; 1999.

55. Ladd-Taylor M, Umansky L, editors. “Bad” mothers: The politics of blame in twentieth century America. New York: New York University Press; 1998.

56. McDonnell JT. On being the “Bad” mother of an autistic child. In: Ladd-Taylor M, Umansky L, editors. “Bad” mothers: The politics of blame in twentieth century America. New York: New York University Press; 1998. pp 220-229.

57. Beck M. Expecting Adam. New York: Berkley Books, 2000. p 336.

58. Zuckoff M. Choosing Naia: A family’s journey. Boston: Beacon Press; 2002. p 240.

59. Doka KJ, editor. Disenfranchised grief: Recognizing hidden sorrow. Lexington, MA: Lexington Press, 1989. p 368.

60. Pennebaker JW. Telling stories: The health benefits of narrative. Literature and Medicine 2000;19(1):3-18.

JUDITH L. M. MCCOYD

Rutgers University, The State University of New Jersey, School of Social Work, NJ, USA

(Received 27 December 2005; accepted 31 October 2006)

Correspondence: Judith McCoyd, 327 Cooper Street, RU-SSW, Camden, NJ 08102-1519. Tel: (856) 225-2657. E-mail: [email protected]

Copyright Taylor & Francis Ltd. Mar 2007

(c) 2007 Journal of Psychosomatic Obstetrics and Gynecology. Provided by ProQuest Information and Learning. All rights Reserved.


Renzulli points out the slender squiggle of the dorsal vein. This vessel must be cut to remove the prostate, and in open surgery it can be the source of massive bleeding. But in robot-assisted surgery the abdomen is pumped full of gas to make room for the instruments to move about, and the pressure from the gas limits the bleeding. Remotely controlling the hooked cautery tool, Renzulli severs the vein, activates another instrument to suction off the little bit of blood that spurts out, and stitches it shut.

Now Renzulli painstakingly separates the netlike membrane of nerves that covers the prostate. It’s like peeling the skin off a grape. He wants to preserve as much of the nerves as possible, because they control urinary continence and erections.

But on the left side, where the cancer resides, his main focus is making sure not to leave any cancer behind. This man’s tumor is particularly worrisome because he is young, with many years ahead in which it can grow, and because tests indicated it was an aggressive cancer. (A recent study comparing 50 robotic prostatectomies with 147 laparoscopic ones indicates that the robotic procedure, with its magnification and precision, may improve doctors’ ability to get all the cancer.)

Renzulli says his first priority is cancer control. Second, he strives for continence, and third, to preserve sexual function. He recalls a mentor during his training at Yale observing, “You can’t get an erection in the cemetery.”

A tiny plastic bag, like a snack bag with a long blue drawstring, is brought inside the patient’s abdomen. With a grasping tool, Renzulli maneuvers the detached prostate gland into the bag and pulls the drawstring tight. It will be removed and sent to a pathologist to examine later.

He completes the operation in a little over three hours. It’s June 1, and this was the 71st robotic surgery at the Miriam.

The push to get the da Vinci machine originated with Dr. Henry C. Sax, who became Miriam’s surgeon-in-chief two years ago. He had previously worked at the University of Rochester, which purchased the first da Vinci system in upstate New York and used it to treat hundreds of patients a year.

When Sax came to Rhode Island with the mission of beefing up Miriam’s surgery program, especially minimally invasive surgery, he knew he had to have that machine.

The clincher came in December 2005, when Newsweek published a story about da Vinci surgeries. The featured patient was a Rhode Islander. But he had to go to New York for the procedure. Sax also identified a “pipeline” to Hartford, where 50 to 100 Rhode Islanders had gone for da Vinci prostatectomies.

He wanted people to stay here to get their medical care. Also, he wanted to be able to train – and retain – new surgeons.

“It’s really important for this to be available to help us train the next generation of doctors,” said Sax, who is professor of surgery at the Warren Alpert Medical School of Brown University. “And having it may encourage more to stay here.”

After an arduous review before the Health Department, whose approval is needed for expensive new technologies, Miriam won permission to acquire the robot last fall. The state set strict standards for surgeons’ training and monitoring.

Bill Bradley, 62, of Providence, was the very first da Vinci patient at Miriam, on Nov. 16, 2006. He says he wasn’t worried about the surgeons’ inexperience because his sister-in-law, an oncologist, supported the idea, because he trusted his doctor’s recommendation and because he knew an outside surgeon experienced in robotics would be overseeing the first 10 procedures.

Bradley, a retired schoolteacher who now runs a group home, says he was playing tennis eight days after the surgery. And seven months after the procedure, he has no sign of cancer.

Paul Pecchia, 68, a former Stop & Shop manager who lives in Cranston, had his prostate removed with the da Vinci on March 22, by Dr. Gyan Pareek.

He has two brothers who’d undergone open prostate surgery, 6 and 10 years ago. One lost a lot of blood during surgery and spent five days in intensive care. The other suffered scarring that continues to make it difficult to urinate, requiring a catheter.

In contrast, Paul Pecchia left the hospital after three days. He leaked urine for a couple of days after the catheter was removed and then had no other problems; he says he never even needed a pad. Pecchia volunteered that he’d had no sexual problems either. “If you have to have it done, God forbid,” he says, “I would say that’s the best way to go.”

“The main thing is,” he adds, “the cancer’s gone.”

But despite the glowing testimonials, and all the gee-whiz wonders of the da Vinci machine, do most patients actually fare better with it? Miriam’s advertisements – with the headline, “Love the rest of your life” – emphasize “dramatically” lower risks of incontinence and impotence.

But it will take many years of follow-up to know if that’s true.

“There is no good evidence to suggest that the da Vinci robot produces any better result than standard laparoscopic surgery,” said Dr. Scott MacDougal, chief of surgery at Massachusetts General Hospital in Boston, which has held off on buying a da Vinci robot. Although many studies indicate better results with robot-assisted surgery, MacDougal does not believe those studies were well done.

“To add to the cost of health care through these sorts of mechanisms when it’s not proven to be helpful,” MacDougal said, “it doesn’t seem like a responsible thing to do.”

MacDougal described da Vinci as “a first step” in the inevitable new world of robotic surgery. “This is an early entry into that area. As technology improves, it will be much more versatile, much less costly.”

But MacDougal acknowledged that Mass. General may be losing some prospective patients to hospitals that have the robot. Many patients read about the da Vinci online and decide they want it. Da Vinci’s manufacturer, Intuitive Surgical, estimates that by the end of this year, more than half of all prostatectomies will be done with a da Vinci machine – up from 15 percent in 2004.

Renzulli acknowledges that the studies done so far involved individual surgeons reporting on their own results, not the gold standard for clinical research. But he says the surgeons are reporting similar findings.

Renzulli is especially impressed by what he has read and experienced regarding incontinence.

“You really see the difference when you talk to patients,” Renzulli said. “Our patients are coming back – they’re dry much quicker than they were with open surgery.”

As for preserving potency, that’s harder to measure. Half of all men over 50 already have some erectile dysfunction before the surgery, Renzulli said. Discerning whether the surgery made matters worse can be subjective.

But, says Renzulli, “If they have no preoperative erectile dysfunction, three out of four guys get that potency back in a year.”

“I think this thing is worth it,” said Pareek, the other Miriam urologist trained in using the da Vinci machine. “When you’re operating deep down in the pelvis, it’s very difficult.”

Sandra L. Coletta, Miriam’s chief operating officer, acknowledged that at $1.7 million, the da Vinci machine required a big upfront investment, which Miriam was able to pay without borrowing. The time needed for training and the slower, more carefully monitored procedures in the beginning also cost money. Additionally, disposable surgical equipment costs $1,500 to $2,000 per procedure, according to the manufacturer.

But because patients spend less time in the hospital, the overall cost is comparable to that of open surgery, Coletta said. Shorter stays also open up beds for new patients.

The robot also attracts more patients. In 2006, Miriam surgeons did about 50 prostate removals. Since November, when the machine came on line, the hospital has done more than 70.

Among those patients, Renzulli and Pareek said, there were few complications. No one needed a blood transfusion. One patient had surgical adhesions from a previous operation, forcing doctors to switch to an open procedure. No other patients had complications during surgery, and postoperative complications were few and not unusual.

One patient died, however. Megan Martin, hospital spokeswoman, said the man died in the days after the procedure from a common complication of any surgery and that a Health Department investigation did not find anything that the hospital should have done differently.

One aspect of that investigation is still under way, as the state Board of Medical Licensure and Discipline examines the conduct of the physicians involved. Dr. Robert S. Crausman, the board’s chief administrative officer, said he cannot comment on any incomplete investigation. But he added: “The medical board was very impressed with the quality of work being done [in the da Vinci surgery program] and the efforts they’ve put into bringing this technology to the state.”

So far, Miriam has been using the robot exclusively for prostate removal. But surgeons hope to soon employ it for kidney and bladder cancer, and also have state approval to use it for cardiac and gastrointestinal procedures.

As for the man in the surgery that Renzulli performed on June 1, everything seemed to have gone smoothly. Renzulli said a few days afterward that the man had stayed in the hospital for two nights and seemed to be recovering well. The pathologist was still examining his tissue, so the status of his cancer wasn’t yet known.

As with so many things in medicine, only time will tell.

[email protected] / (401) 277-7397

* * *

ROBOTIC SURGERY

Advantages:

Superior 3-dimensional vision

Magnification of surgical field

Greater dexterity, precision and control of instruments

Filtration of physiological tremors

Disadvantages:

Costs of system, instruments, disposable supplies

Increased time in learning, connecting and maintaining equipment

Additional operative time

Lack of tactile feedback

Source: “The Advantages and Disadvantages of Robot-Assisted Surgery,” prepared for the R.I. Department of Health by Harvey Zimmerman.

* * *

Dr. Joseph Renzulli demonstrates the da Vinci robot, used to remove cancerous prostate glands. Most da Vinci patients spend less time in the hospital than traditional prostate cancer patients.

The Providence Journal / Steve Szydlowski

* * *

As Dr. Joseph F. Renzulli II works at the robotic surgery console in the background, Dr. Harry Iannotti, center, monitors the procedure during the operation at Miriam Hospital on June 1.

The Providence Journal / Mary Murphy

(c) 2007 Providence Journal. Provided by ProQuest Information and Learning. All Rights Reserved.

Promise Hospital of Phoenix Relocates to Phoenix Memorial Healthcare Center Campus

Promise Hospital of Phoenix, a long-term acute care (LTAC) hospital, relocated to the Phoenix Memorial Healthcare Center Campus at 1201 South 7th Avenue, Phoenix, Arizona. The announcement was made jointly by Peter R. Baronoff, Chairman and Chief Executive Officer, and Howard B. Koslow, President and Chief Operating Officer of Promise Healthcare, Inc., one of the largest LTAC hospital companies in the country. Employing more than 2,000 staff members nationwide, Promise Healthcare owns and manages 14 facilities (including Promise Hospital of Phoenix) in six states with new locations under construction or development.

According to Baronoff, the 22,428-square-foot, 40-bed Promise Hospital of Phoenix now occupies the fourth floor of the former Phoenix Memorial Hospital building, and plans to expand to 57 beds in the near future. With the move, the hospital increased its bed capacity with 28 general acute beds (primarily private rooms), and an Intensive Care Unit (ICU) that includes an additional 12 beds (all private rooms), the latter a level of service not previously offered by Promise at its prior location.

Opened in May 2003 and operating as a 36-bed “hospital within a hospital” in St. Luke’s Medical Center until today’s relocation, the Medicare-certified Promise Hospital of Phoenix provides acute medical management and treatment specifically designed for patients who require a longer acute care hospital-based recovery period for unresolved, chronic illnesses, and multi-system disease processes due to severe illness or injury. Utilizing an interdisciplinary approach to treatment, the LTAC hospital provides aggressive therapy integrated with acute medical care to achieve optimum outcomes for patients requiring critical medical care for ventilator dependency and weaning, cardiomyopathy and congestive heart failure, head trauma, complications resulting from surgery, infectious disease, comprehensive and complex or aggressive wound care management, etc. On-site services include hemodialysis; ICU services; diabetes management; physical, occupational and speech therapy; pharmacy services, and chaplaincy.

Baronoff noted that the relocation provides Promise with a tremendous opportunity to fill a significant healthcare void in downtown Phoenix, and at the same time permits Promise to operate where many thought the hospital would have to vacate the market due to federal restrictions imposed by Centers for Medicare & Medicaid Services (CMS) on co-located LTAC hospitals within facilities like St. Luke’s Medical Center. He added that, “Promise greatly appreciates its relationship with St. Luke’s Medical Center, the years of support it has given Promise, and its continued assistance and understanding throughout this transition.”

“At our new location we have increased our bed capacity and expanded LTAC services based on the burgeoning demand in Greater Phoenix for increased quality care for high acuity LTAC hospital patients requiring extended recovery periods for the treatment of life’s most serious illnesses and injuries,” said Baronoff. “Additionally, our new facility is more conveniently located to better accommodate physicians and encourage frequent visiting by families and friends.”

Promise Healthcare, Inc. President and Chief Operating Officer Howard B. Koslow noted that, “In addition to responding to the location limitations resulting from the new CMS rules, our decision to relocate Promise Hospital to the Phoenix Memorial campus was also in large part due to Promise’s vision to be a stronger part of the Phoenix healthcare community.”

“Our entire team of physicians and allied healthcare professionals has joined us at our new location as we welcome the affiliation of additional local physicians, ancillary professionals, and staff members to assist in delivering the highest level of personalized, compassionate, high acuity care in the most effective, efficient, and appropriate manner,” added Promise Hospital of Phoenix Senior Vice President and Chief Executive Officer Richard Luna. “At Promise, our patients are visited daily by their physicians who develop and direct individualized treatment plans to achieve recovery goals.”

St. Luke’s Medical Center CEO Paul Jenson said that, “Promise Hospital of Phoenix has brought to our community quality long-term acute care, and we look forward to continuing our relationship with Promise, understanding that its business decision to relocate from our facility was a direct result of CMS patient access limitations placed upon co-located LTAC hospitals.”

Promise Hospital admits patients with a physician referral from most settings, including short-term acute care (STAC) hospitals, intensive care units, med/surg units, skilled nursing facilities, assisted living facilities, home, and physician offices. Knowing that time is valuable and every minute counts when treatment is needed, Promise’s team of area clinical liaisons provides timely evaluations of patients to determine if they meet criteria for its LTAC hospital services.

“We are pleased to have Promise Hospital of Phoenix join in our transformation of the Phoenix Memorial campus as a significant part of a collaborative community effort to bring enhanced comprehensive health care services to downtown Phoenix,” said Dan F. Ausman, President and Chief Executive Officer of Abrazo Health Care. “Promise Healthcare is a respected and proven industry leader in acute long-term care, and having Promise Hospital on our Phoenix Memorial campus means greater community access to its exemplary extended-stay hospital expertise and care.”

Promise Healthcare, Inc. Executive Vice President of Hospital Operations Richard Gold reinforced that, “The Promise name reflects a commitment to patients, physicians, hospital staff, and the Greater Phoenix area and surrounding counties. Accordingly, we support our local hospital team with the vast and diversified national business and healthcare industry resources, leadership, financial resources, and management acumen of Promise Healthcare, Inc.”

Promise Hospital of Phoenix, Inc. is located at Phoenix Memorial Healthcare Center at 1201 South 7th Avenue, Phoenix, AZ 85007. For more information, contact (602) 716-5000 or visit www.promise-phoenix.com.


A Child’s Legacy: Amazing Jacob’s Family Keeps His Memory Alive

By Michelle Bearden, Tampa Tribune, Fla.

Jun. 12–LITHIA — Heather Duckworth is thinking pink. Yes, that’s the perfect nursery color for baby Allie Patricia, coming from Guatemala to join the family in the next few months.

Heather’s not used to pink rooms. She’s the mother of four boys. She has to catch herself when she says that. Three are with her; one’s in heaven.

“Jacob would have liked the idea of a little sister,” she says wistfully. “We talked about adoption even before he got sick. If we could have a house full of children, we would.”

A year ago today, Heather and her husband, Don, were in such a different place, far from talk of diapers and babyproofing their home in FishHawk Ranch.

They were keeping a bedside vigil at St. Joseph’s Hospital, where their 6-year-old son — the middle triplet — was slipping away. He died at 12:30 a.m., succumbing to adrenocortical carcinoma, a brutal cancer that had plagued him since February 2004.

Some parents never get past the grief of losing a child. Such devastation breaks up marriages and destroys spirits. The Duckworths, both 35, say they’re not immune to moments of despair. But their steady faith in God and determination to keep Jacob’s memory alive in positive ways have helped them cope.

The towheaded boy with aquamarine eyes smiles from framed photos throughout the house. All of them show an impish, grinning Jacob in good health. That’s deliberate, Don says.

“We want to remember him as he really was — a boy with incredible spirit and a beautiful smile. He’s the one who usually lifted us up,” he says. “We don’t need to be reminded of how he looked and felt when he was feeling poorly and going through the treatments.”

They battled the Christmas blues with a donation drive called Jacob Stocking Project, collecting more than 600 children’s books and gift cards for hospitalized kids. They got involved with the Tampa-based Pediatric Cancer Foundation, raising more than $50,000 for research at a benefit golf tournament sponsored by Dimmitt Cadillac and Don’s employer, Enterprise Rent-A-Car. And in April, Heather joined the foundation board, where she’s working with other parents on an advocacy project they hope will benefit the 40,000 children fighting cancer.

An estimated 3,000 children die of the disease each year in the United States. To Heather, that’s unacceptable.

“Everything I do, I do with him in mind,” she says. That means getting over her shyness and speaking at fundraising events. “Jacob suffered a lot in his short life. We don’t want any other child to have to go through that. So we’ll work as hard as we can to fund research and find a cure.”

Heather has a special weapon. It’s a venue with unlimited potential to deliver her message. She has Jacob’s Web site.

Strangers all over the world felt Jacob’s death.

They were connected through the Web site his mom started through CaringBridge a few months after his diagnosis. The nonprofit organization provides space and technical support to connect patients with families and friends. Jacob’s site includes his mother’s heartfelt journal of his difficult journey.

She wrote of doctor visits, hospital stays, endless treatments and hopeful months of remission. She posted family photos, Bible verses, personal reflections.

Keyboard therapy, she called it.

Thousands responded. They wrote notes of encouragement, offered prayers and shared challenges of their own in the site’s guestbook. Even after Jacob died, the posts continued. The site had become a meeting place of hope, then grief, then healing.

The Tampa Tribune told the story of the boy known as “Amazing Jacob” last August, two months after his death. At that time, Jacob’s site had drawn a record 3.1 million visits. It’s now at 4.3 million, one of the three most-visited sites in CaringBridge’s 10-year history.

“It’s one of the wonderful things about our sites,” says spokesman Chris Moquist. “They can be used long after the health care crisis is over. People form a strong connection and give each other support in difficult times.”

It’s obvious a lot of work goes into Jacob’s site, he says. That it continues to draw viewers doesn’t surprise him at all. “It’s helping her [his mom], and it’s helping others,” Moquist says.

People urged Heather to take the best of the journal entries and guestbook comments and write a book. She wanted to do it not for profit, but for her surviving children — remaining triplets Devin and Brandon, now 7; Kyle, who’s 23 months older; and the little daughter-to-be waiting in Guatemala.

Heather is stubborn when it comes to her vision. She wants family photos, lots of them, in the book. Publishers said that would be too expensive, so she’s going to self-publish.

“First, this is for my kids,” she says. “They’ll always remember him, especially Brandon. A day doesn’t go by without him mentioning Jacob in some way. He was such a special guy, and we want to honor his legacy.”

The way to do that, the Duckworths believe, is through raising money for pediatric cancer research.

It doesn’t get the attention or the funding it deserves because there aren’t as many patients as, say, adults with lung cancer, says Barbara Rebold, executive director of the Pediatric Cancer Foundation.

“But if your child is one of the some 13,000 new cases diagnosed a year, then it’s a big deal,” she says. “When there aren’t any treatment regimens out there for your child’s cancer because not enough research has gone into it, then it makes a big difference.”

She got to know the couple in Jacob’s final months. Even though he was quite ill toward the end, she sensed how special he was. She felt the same about his parents.

“Their faith is very real. It gives a glow about them, almost like an aura,” Rebold says. “They’ve taken the lessons they learned from this horrible experience and they’re using that knowledge to help others. They’re putting a personal face on this disease.”

Heather says she doesn’t know how a parent survives such an ordeal without faith. The family collects frogs — stuffed, plastic, whimsical pictures — as constant reminders to “Fully Rely on God.” She never downplays the pain of grief, but as a Christian, “we are able to grieve with the hope of seeing him again.”

That also sustains Don. On a recent trip to Walt Disney World, he remembers feeling perfectly happy as he watched his excited sons dashing through the theme park. Then, out of the blue, came the realization: Something’s missing. Jacob’s missing.

“We feel life has gone on for us, but it’s also gone on for him. Just in a different capacity,” he says. “We look forward to that time when we can all be together.”

Allie Patricia should be joining the Duckworths in the next three to six months.

She’s a fat-cheeked baby with a head full of dark hair, born at 7:38 p.m. Feb. 20. That’s the day in 2004 when Jacob was diagnosed with a cancer so rare, it affects about 20 children a year in the United States.

Heather learned of the Orlando-based Celebrate Children International, a Christian adoption agency, through one of the visitors to Jacob’s Web site. So far, it has been a smooth process. But she won’t relax entirely until the baby is here, cuddled in her arms.

Heather says Allie is not a replacement for Jacob. Her triplet son will always be in her heart and on her mind.

Nearly every day since his passing, someone in the family has spotted a cardinal. Brandon believes the sightings are a sign from his brother. He’s telling them he’s doing fine and watching over them.

Reporter Michelle Bearden can be reached at (813) 259-7612 or at [email protected].

Benefits for Pediatric Cancer Foundation

LE CASINO ROYALE WHAT: a sit-down dinner and casino event WHEN: 5:30 to 10:30 p.m. Aug. 9 WHERE: Quorum Hotel, 700 N. Westshore Blvd., Tampa HOW MUCH: $75 for nonmembers of SMPS Tampa Bay; go to www.acteva.com.

CADILLAC INVITATIONAL GOLF TOURNAMENT SPONSORED BY: Dimmitt Cadillac, in memory of Jacob Duckworth WHEN: Noon shotgun start Oct. 15 WHERE: Feather Sound Country Club, 2201 Feather Sound Drive, Clearwater HOW MUCH: TBA; for foursomes and sponsorships, call (813) 269-0955


Copyright (c) 2007, Tampa Tribune, Fla.

Distributed by McClatchy-Tribune Information Services.


Alli(TM) – The OTC Weight Loss Product – On Shelves Nationwide Later This Week

PITTSBURGH, June 12 /PRNewswire-FirstCall/ — GlaxoSmithKline Consumer Healthcare announces that alli(TM), the only FDA-approved over-the-counter weight loss product, will be on shelves of pharmacies, grocery stores and mass merchandisers nationwide later this week. alli is designed to be a partner for overweight adults willing to change how they eat and lose weight gradually. Adding alli to diet and exercise can help people lose 50 percent more weight than with dieting alone. alli is being supported by an unprecedented multicultural and bilingual educational effort, making it more than just a pill but a comprehensive weight loss program. With alli, people will have access to an individualized online action plan with a year’s worth of lessons included in each alli purchase, at myalli.com.


“alli is not for people looking for miracle pills and overnight results. It offers consumers a proven, safe, over-the-counter option to help them lose weight gradually,” says Steven L. Burton, vice president, Weight Control, GlaxoSmithKline Consumer Healthcare. “One of the most rewarding parts of being involved with the alli revolution is the opportunity to have a real impact on the health and lives of people all across this country. We recognize people need help to lose weight and that is why we are so excited that alli is now available to provide adults with the support they need.”

alli First Team

In April 2007, GlaxoSmithKline Consumer Healthcare selected 400 overweight women and men to be part of the alli First Team. Members of the alli First Team were invited to participate in an online community focused on following the alli program, sharing personal experiences, and supporting other community members to be successful. For participating in the alli First Team, people received educational materials and tracking tools, access to the online community of alli users and a six-month supply of alli.

These early users are providing honest and candid feedback on their experiences with alli in an effort to help others succeed on the program. “In the beginning of the alli program, I learned that there are treatment effects, so if you cheat (by eating foods that are high in fat), you will pay the price,” says Caryn Eyring, an alli First Team Community member. “Luckily, I’ve stayed within the plan so I have not had any treatment effects. However, I pushed the envelope once.”

Team members are adults from all across the United States who represent a broad mix of ages, backgrounds and life experiences and are committed to a gradual and healthy approach to losing 10 pounds or more. According to alli First Team Community member Maryann Gatti, “I am so happy that alli is available. It is giving me the advantage to succeed at losing weight. I needed that extra little something to help me, because for those of us who aren’t as young as we once were, and don’t burn calories as well as we used to, dieting can seem hopeless. alli is working for me.”

The Program

GlaxoSmithKline Consumer Healthcare has developed innovative tools to help consumers achieve success with the alli program. People can find educational materials where alli is sold that can help overweight adults decide if they are ready for alli.

alli comes in a variety of sizes – the 60-capsule and 90-capsule educational starter packs or a 120-capsule refill pack – and is priced at approximately 60 cents a capsule, or $1.80 per day. The recommended dose is one 60mg orlistat (the active ingredient in alli) capsule, three times a day, with meals containing fat. alli starter packs include:

- Read Me First Guide
- Welcome and Companion Guides
- Guide to Healthy Eating
- Daily Journal
- Calorie and Fat Counter
- Quick Facts Cards
- Free access to an individualized online action plan at myalli.com
- Convenient capsule carrying case called the Shuttle(TM) that easily fits into a purse or pocket

The centerpiece of the alli program – myalliplan — is an individually tailored online action plan. The alli plan was developed by nutritional and weight management experts who understand the struggle to lose weight. After purchasing alli, people can access this innovative support program 24 hours a day. myalliplan includes:

   - Customized online action plan
   - Recipes, meal plans and shopping lists
   - Online tools to record food and lifestyle information
   - Connection to a network of other alli users
   - Personalized e-mails that deliver lessons about meal planning, managing hunger, dealing with setbacks, and making the food and lifestyle changes that help people succeed

alli works by blocking about 25 percent of the fat in the foods eaten, which reduces the amount of fat and calories absorbed. alli must be used in conjunction with a reduced-calorie, low-fat diet. People in studies who adhered to the diet reported satisfaction with the product and weight loss results. By sticking to a low-fat diet, many were able to manage bowel changes or “treatment effects” (such as having an urgent need to go to the bathroom). Many did not experience treatment effects at all. The program teaches adults which foods to eat and which foods to avoid for weight loss success and a healthier lifestyle overall.
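The fat-blocking figure translates directly into unabsorbed calories. A minimal sketch (the 15 g meal is a made-up example, not a figure from the release):

```python
# Illustrative: how blocking ~25% of dietary fat reduces absorbed calories.
FAT_BLOCKED_FRACTION = 0.25  # approximate share of fat alli blocks
KCAL_PER_GRAM_FAT = 9        # energy density of dietary fat

def unabsorbed_kcal(fat_grams):
    """Calories from fat estimated to pass through unabsorbed."""
    return fat_grams * FAT_BLOCKED_FRACTION * KCAL_PER_GRAM_FAT

# A hypothetical meal containing 15 g of fat:
print(unabsorbed_kcal(15))  # 33.75 kcal not absorbed
```

The same arithmetic also shows why high-fat meals produce stronger "treatment effects": the more fat eaten, the more unabsorbed fat passes through.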

Gary Foster, Ph.D., director of the Center for Obesity Research and professor at the Temple University School of Medicine, and one of the consulting experts who helped develop the online plan, says, “We welcome the addition of the alli program as an easily accessible, safe and effective option in the war against overweight and obesity. It’s time we set the record straight – losing weight requires lifestyle changes. To get people started, we must provide them with the tools to be successful, and the alli program does that.”

Safety Profile

Orlistat (the active ingredient in alli) has been used by millions of people worldwide. Its safety and efficacy are well established. According to Vidhu Bansal, Pharm.D., director, Medical Affairs, GlaxoSmithKline Consumer Healthcare, “With a raging overweight and obesity epidemic, alli offers consumers a proven, safe option to help them lose weight without a prescription. Unlike the herbal and dietary supplements for weight loss on the market, alli is an FDA-approved over the counter weight loss product.” Bansal adds, “alli works in the gut; it is non-systemic, which means it is not absorbed into the bloodstream. That means alli does not affect the heart or brain or cause sleeplessness, jitters or an increased heart rate.”

For more information about alli and the alli program, go to http://www.myalli.com/. People can also find a new book “are you losing it? losing weight without losing your mind,” wherever alli is sold. The book delivers straight talk about healthy weight loss strategies and offers a sensible approach to losing weight. Support materials and the online action plan are also available in Spanish at http://www.mialli.com/.

About alli(TM)

alli is the only FDA-approved weight-loss product available to overweight adults, 18 years or older, without a prescription. It combines a clinically proven product with a comprehensive, individualized action plan. The alli program encourages modest, gradual weight loss, known by experts as the best way to lose weight. alli (60 mg orlistat capsules) is safe and effective when used as directed.

About GlaxoSmithKline Consumer Healthcare

GlaxoSmithKline Consumer Healthcare is one of the world’s largest over-the-counter consumer healthcare products companies. Its more than 30 well-known brands include the leading smoking cessation products, Nicorette(R), NicoDerm(R) CQ and Commit(R), as well as many medicine cabinet staples, including Abreva(R), Aquafresh(R), Sensodyne(R), Tums(R) and Breathe Right(R).

About GlaxoSmithKline

GlaxoSmithKline — one of the world’s leading research-based pharmaceutical and healthcare companies — is committed to improving the quality of human life by enabling people to do more, feel better and live longer. For company information visit: http://www.gsk.com/.


GlaxoSmithKline Consumer Healthcare

CONTACT: GlaxoSmithKline Consumer Healthcare: Brian Jones, +1-215-751-3415, [email protected], or Malesia Dunn, +1-412-200-3544, [email protected]; HealthSTAR Public Relations: David Schemelia, +1-646-722-8819, [email protected]

Web sites: http://www.gsk.com/ and http://www.myalli.com/

Sanofi-Pasteur: Approval Delays Add to Pentacel Problems

Although on the market in various countries, the US launch of Pentacel is proving to be a problematic process for Sanofi-Pasteur. The drug is already likely to be hindered by failing to match US vaccination recommendations, unlike competing products such as GSK’s Pediarix. The FDA’s decision to delay approval, citing technical reasons, will further limit Pentacel’s overall commercial potential.

Pentacel, Sanofi-Pasteur’s combination vaccine against diphtheria, tetanus and pertussis (DTP), polio, and Haemophilus influenzae type B (Hib), is currently marketed in nine countries including the UK and Canada, where it was first registered in 1997. By April 2006, over 13.5 million doses had been distributed in a four-dose injection schedule vaccinating children at two, four, six and 18 months of age.

However, gaining approval for Pentacel in the US has proved difficult for Sanofi, the market leader in the DTP vaccines sector with sales totaling $1.1 billion in 2006. The first submission for Pentacel was made in 2005, followed by an FDA recommendation for pediatric approval in January 2007. The recent FDA announcement to delay a decision on the vaccine until November comes as a result of Sanofi’s decision to shift the control assays for Pentacel’s pertussis component from the company’s Canadian to its US facilities.

The delay in approval is not the only issue associated with the US launch of Pentacel. Although Sanofi’s vaccine would be the first DTP-based combination product for infants in the US that includes both polio and Hib components, its commercial potential will be limited by the national immunization schedules: US authorities do not recommend a concomitant vaccination of children under the age of 12 months against DTP and Hib.

Therefore, the use of Pentacel would currently be limited to the fourth DTP booster shot administered at 18 months of age, significantly restricting its market opportunity. GSK, Sanofi’s strongest competitor in the DTP sector, has avoided this issue by omitting the Hib component from its own DTP combination Pediarix (DTP/polio/HepB), which has become firmly established in the US market following its launch in 2003.

Thanks to the FDA’s decision, GSK will continue to enjoy a leading position at least until the end of the year. Even when approved, Pentacel will only become a significant competitor if Sanofi-Pasteur manages to change the CDC’s current guidelines in its favor.

Stroke Victims Regain Focus Through Working Memory Training

A new study finds that victims of acquired brain injury, such as stroke, can improve their attention by using a software-based program to train working memory, a key cognitive function that allows individuals to hold information “online” for short periods of time. Eighty-nine percent of stroke victims who participated in the training reported that afterwards they were less easily distracted, less likely to daydream and less likely to lose focus when reading. The study is the first of its kind to demonstrate that working memory training among stroke victims leads to improvements in daily life.

Results of the research effort appeared in the January 2007 issue of the journal Brain Injury under the title “Computerized working memory training after stroke — A pilot study.” Conducted in 2005, the study was led by Helena Westerberg, Ph.D., a researcher at the Karolinska Institute’s Aging Research Centre in Stockholm, Sweden.

The study included eighteen subjects suffering from stroke and exhibiting severe problems with working memory, which is critical for reading, focusing attention and remembering what to do next. Participants were randomly assigned to either a control group, or a treatment group that performed Cogmed Working Memory Training, a software-based training program originally designed for children with attention deficits.

Participants in the treatment group demonstrated strong improvements in all tasks related to working memory after the training, based on a neuropsychological test battery and a self-reported rating scale regarding the symptoms of cognitive failure in daily life. Eight of the nine participants (89 percent) in the treatment group reported a significant reduction in cognitive failure according to a 25-question assessment.

“These results are especially encouraging because there is a high correlation between working memory capacity and the outcome of physical rehabilitation,” said Westerberg. “This study is an indication of the broad potential of working memory training. In many ways, we are only beginning to understand the tremendous impact that this kind of focused training can have on individuals suffering from various cognitive limitations.”

Severe working memory deficits commonly result from acquired brain injuries such as stroke and impair executive functioning and social interaction. Working memory capacity is a fundamental cognitive ability necessary for the rehabilitation of other mental functions.

About Cogmed

Cogmed has made a breakthrough discovery that individuals can train and improve their working memory, a key function of the brain that allows individuals to store information for brief periods of time. Cogmed Working Memory Training helps people with attention deficits improve focus, impulse control and complex problem solving. Through a combination of software-based memory exercises and personal coaching, participants engage in a challenging five-week program using an Internet-connected computer at home. More than 80 percent of those who have completed Cogmed’s rigorous and rewarding training have demonstrated dramatic and lasting improvements. Cogmed’s proprietary and patented program has been validated by high-impact research in controlled scientific studies at the Karolinska Institute, a world-renowned medical university based in Stockholm, Sweden. A leader in the emerging field of neurotechnology, Cogmed was founded in 2001 and is headquartered in Naperville, Ill. Cogmed’s services are provided by a growing network of more than 25 specialist practices around the U.S.

DARIA GIUCHICI: Caring for Others is Her Calling

By Howard Buck, The Columbian, Vancouver, Wash.

Jun. 10–Battle Ground parents couldn’t do better for a hired baby sitter than Daria Giuchici.

The honors student shuttled between Battle Ground High School and the Clark County Skills Center’s applied medical sciences program. She’s earned certified health occupations and caregiver credentials, plus first-aid and CPR papers.

She’s logged 400 hours of volunteer work at Southwest Washington Medical Center, escorting patients, stripping and setting up rooms, and running errands of all types.

For paid, part-time work this year, she cared for residents at the Mallard Landing assisted-living center. She still baby-sits on the side.

When finally at home, she watches over five younger siblings — the youngest of whom, Amy, 10, has Down syndrome.

“At times it is hard, but it’s worth it,” Daria says. She savors every one of Amy’s hugs. “I don’t even know what our lives would be without her, she brings so much joy.”

There are five older siblings, too. That puts Daria smack in the middle of 11 children born to a couple who fled communist Romania and arrived in Vancouver when she was 2.

The load grew when cancer claimed her father two years ago. But close ties to the large Philadelphia Romanian Pentecostal Church in east Portland, where she worships each week, help anchor the family.

Now, Daria is Seattle-bound, the recipient of a full, four-year nursing scholarship to the University of Washington. She weighed physician training but would much rather have close, daily personal contact with patients.

“I’ve always wanted to be a nurse,” she says.

– Howard Buck

—–

To see more of The Columbian, or to subscribe to the newspaper, go to http://www.columbian.com.

Copyright (c) 2007, The Columbian, Vancouver, Wash.

Distributed by McClatchy-Tribune Information Services.


Family Intervention and Therapy for Overweight and Obese Kids

By Lake, Karen

Follow the progress of the pilot FIT-OK project in Peterborough as it attempts to tackle the growing problem of childhood obesity

Background

Childhood obesity is associated with adverse health outcomes, both physical and psychosocial. Studies have associated child obesity with cardiovascular risk factors,1,2,3 as well as insulin resistance syndrome.4 Obese adolescents are more likely to become obese adults.5,6,7,8 Due to the increasing number of obese and overweight children,9,10 tackling child obesity is a public health priority and is now a Public Service Agreement (PSA) target.11

This paper outlines a pilot project undertaken in Peterborough, to treat obese children.

In 2004, a multi-agency ‘Child Obesity Action Group’ was set up. The group’s remit was to devise a strategy to help tackle child obesity. The strategy was to include three main areas: prevention, monitoring and treatment of child obesity. In relation to treatment, the group identified a lack of specialist services to manage obese children.

A review of the literature highlighted that targeting parents and children together, and involving one parent in a programme, is effective in treating obese and overweight children.12,13

With this in mind, a proposal was written for a specialist team and programme, named FIT-OK (family intervention and therapy for overweight and obese kids). The project lead applied for and won a £15,000 award from the Queen’s Nursing Institute (QNI). The FIT-OK pilot was launched in December 2005 and finished in October 2006.

Aims and objectives of FIT-OK

The project primarily aimed to promote healthier lifestyles among families with children who are obese or overweight by increasing the family’s awareness of obesity and boosting the child’s self-esteem.

The FIT-OK team

The team, headed by a project lead, consisted of four FIT-OK ‘trainers’: two health visitors, one school nurse and one dietitian technician. Specialist team members included one paediatric dietitian, two sports specialists and one children’s learning disabilities support worker. Advisory specialists, including a consultant dietitian for obesity and a paediatrician, were available if needed.

FIT-OK programme

Once a family was accepted on to the project, a FIT-OK ‘trainer’ undertook an initial assessment that included recording a history of the family lifestyle and food intake, the child’s and parents’ expectations, and the child’s level of self esteem and baseline anthropometric measurements. The child and parent then attended the local specialist children’s gym at Arthur Mellow Village College (AMVC) for their fitness test.

The FIT-OK team devised goals like healthy eating and physical activities, which were agreed with the family. The trainer’s role was to motivate and support the family in reaching their weekly goals, with the child and participating parent attending the gym once or twice a week. The physical activity plan was designed to be sustainable and fit in with family life. Examples included walking the dog for 15 minutes a day and walking to and from school.

Methods

In December 2005, 17 children who met the following criteria were accepted on to the project:

* Aged 4-16 years old

* 91st or 98th BMI centile14

* One parent willing to participate

* Family must be willing to change.

Children were accepted on to the project as they were referred between December 2005 and May 2006.

Sample

Of the 17 children originally accepted on to the project, 76% (n=13) were referred by professionals and 24% (n=4) were self-referrals via parents. In the first few weeks, one child withdrew from the project and another dropped out after two months, leaving 15 children. The mean BMI z score at baseline was 3.1 (range 2.2-4.7 SDS).
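The BMI z (SD) scores quoted here come from age- and sex-specific reference data. A minimal sketch of the standard LMS calculation used with such references (the L, M, S values in the example are illustrative placeholders, not actual UK 1990 reference data):

```python
import math

def bmi_z_score(bmi, L, M, S):
    """Cole's LMS method: z = ((bmi/M)**L - 1) / (L*S), or
    ln(bmi/M)/S when L == 0. L (skewness), M (median) and S
    (coefficient of variation) come from the age/sex-specific
    growth reference."""
    if L == 0:
        return math.log(bmi / M) / S
    return ((bmi / M) ** L - 1) / (L * S)

# e.g. a BMI of 28 against illustrative reference values L=-1.5, M=18, S=0.12:
print(round(bmi_z_score(28.0, -1.5, 18.0, 0.12), 2))
```

A child whose BMI equals the reference median scores z = 0; a z score of 3.1, as reported at baseline, sits far above the 98th centile used here to define obesity.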

The majority of the children were white British, with one child from a white and Afro-Caribbean ethnic background. The mean age of the children on the project was 12.4 years, and their weights ranged from 58kg to 177kg. Five children had low self-esteem at their baseline measurement. Few children took part in any physical activity and none ate the recommended five-a-day portions of fruit and vegetables.

Health and co-morbidities

Several children had co-morbidities, including slipped capital femoral epiphysis, joint pains, exacerbation of asthma, hypothyroidism, learning disabilities, dyspraxia and autistic disorder.

Data collection

Children’s measurements were taken at baseline (0), three and six months. These included:

* Weight

* Height

* BMI centile (age and gender specific)

* Waist circumference

* Arm and thigh circumference

* Fitness test

* Fruit and vegetable intake

* Self-esteem.

To minimise intra- and inter-observer bias, the same trainer measured the same child using the same technique and equipment.

The fitness test was multi-stage: the child had to run 20 metre ‘shuttle runs’ in time with an audio ‘bleep’, until the bleeps became too quick for the child.

This test was used to estimate the child’s maximum endurance and to test their aerobic capacity. At the baseline fitness test, 86% (n=13 out of 15) of children were classified in the ‘very poor’ category, one child scored ‘poor’ and one child did not attend.
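For readers unfamiliar with how shuttle-run results are scored, a common approach converts the final level reached into running speed and then into an aerobic-capacity estimate. This sketch uses the widely cited Léger et al. equation for children and adolescents; the FIT-OK team's exact scoring categories are not specified in the paper, so treat the constants as illustrative:

```python
def shuttle_speed_kmh(level):
    """Speed at a given level of the 20 m multi-stage shuttle test:
    8.5 km/h at level 1, increasing by 0.5 km/h per level."""
    return 8.5 + 0.5 * (level - 1)

def vo2max_estimate(level, age_years):
    """Estimated VO2max (ml/kg/min) for youths, after Leger et al."""
    v = shuttle_speed_kmh(level)
    return 31.025 + 3.238 * v - 3.248 * age_years + 0.1536 * v * age_years

# e.g. a 12-year-old reaching level 4:
print(round(vo2max_estimate(4, 12), 1))
```

The resulting estimate is then compared against age- and sex-specific norm tables to give categories such as 'very poor' or 'poor'.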

Results

Fifteen children completed three months and of those, 13 completed six months. From the 15 children, 60% (n=9) had slightly decreased their BMI z score. These children remained on or above the 98th centile (obese category). Weight losses were relatively small, the greatest weight loss being 5kg. Overall, the children did make changes to their lifestyle and continued to attend the gym regularly.

40% (n=6) slightly increased their BMI z score. Of the children who gained weight, three had learning disabilities and one had hypothyroidism. The children with learning disabilities did attend the gym and worked reasonably hard. The remaining two children had no medical problems but had lower attendance at the gym.

Self-esteem

At final measurements, 84% (n=11) out of 13 had increased their self-esteem. One child dropped out and did not repeat the questionnaire, one child remained the same and one child had lower self-esteem; this same child also gained weight.

Some children were unable to complete the Rosenberg questionnaire.15 To address this, the team designed a pictorial questionnaire, asking the children to indicate which ‘face’ represented how they felt today and how they felt about their bodies. Children ticked a ‘happier’ face at the end of the project.

Fruit and vegetable intake/physical activity

Children were asked to complete a three-day food diary at baseline. On the basis of this, the paediatric dietitian provided a dietary plan. A limitation of food diaries, however, is the potential under-reporting of food intake.16 In the final parent/child survey, 80% (n=12 out of 15) reported that they had increased their fruit and vegetable consumption. It was difficult to ascertain by how much, as diaries were sometimes incomplete. 80% (n=12 out of 15) of children reported that they were ‘more active’ in their lifestyles.

Since the project ended, two families have continued to use the children’s gym regularly. Of the children who repeated their fitness test, 87.5% (n=7 out of 8) improved their fitness by an average of 33%. Seven children did not attend their final fitness test.

Anthropometric measurements

69% (n=9 out of 13) children had decreased thigh circumference by an average of 3.6cm. 15% (n=2) increased by 2.9cm. Two children stayed the same and two did not repeat measurements.

46% (n=6 out of 13) had decreased arm circumference by an average 2.7cm, and 46% (n=6) increased by 1.8cm; one child stayed the same.

Waist circumference is a useful marker for central body fat accumulation and is linked to an increased risk of metabolic complications.17 Measurements were undertaken using the umbilicus as a marker,18 as this was easier with the more overweight children. 53% (n=7) of children decreased their waist by an average of 2.4cm; 46% (n=6) increased by an average of 3.5cm. Two children did not have repeat measurements.

Qualitative data: what the parents say

Over half the parents changed their own lifestyle and acted as role models to their child. Some parents had not attended a gym before and really enjoyed it.

Most parents said they felt better, and one father said: ‘I can switch off from work’.

Taking part in the project influenced other areas of families’ lives and even changed parents’ outlook. In one case, a mother reduced her smoking from 60 to 10 cigarettes a day. She admitted she had a long way to go but had made a start.

What the children say

At the end of the project, children reported positive comments about going to the gym.

‘I don’t get so puffed out.’ [boy aged 13]

‘I can run about more with my friends.’ [boy aged 12]

‘I feel better.’ [girl aged 12]

Key findings

* 60% (n=9) decreased their BMI z score

* 40% (n=6) increased their BMI z score

* 80% (n=12) increased fruit and vegetable intake

* 80% (n=12) increased physical activity

* 86% (n=13 out of 15) scored very poor on their baseline fitness test, one child scored poor, one child did not attend the baseline test

* 87.5% (n=7 out of 8) of those who attended a repeat fitness test improved their fitness level by an average of 33%; one child decreased, and seven did not attend

* 84% (n=11) increased self-esteem; two did not repeat the test, one child stayed the same and one child reported lower self-esteem

Scope and limitations

The pilot only included a small number of children over a short period of time. Children from ethnic minority groups were also under- represented. Data such as the food diaries were a self-reported measure and therefore contained potential bias: families might over or under-report.

The wide inclusion criteria proved to be both a strength and weakness. It meant that most children could be included. Children with complex needs and medical problems were accepted. However, these children seemed least successful in terms of reduction of their BMI centile. This probably affected the overall results.

Plans for the future

FIT-OK was the first family-based project for obese children in Peterborough. We hope to mainstream the project; discussions are underway with all the agencies. We would recommend some minor amendments, such as reviewing the criteria for acceptance (including whether children with severe obesity and complex needs are suitable) and adopting a cohort approach, with several families starting at the same time. This would provide more opportunities for peer support. It would also lend itself to delivering group nutritional sessions throughout the programme, in addition to the individual advice.

Tips for setting up similar schemes

This project could be adapted to other areas. The success of the project is due to the multi-agency team, particularly sports trainers and dietitians. Although we used a specialist children’s gym, any sports hall or community venue could be used to set up a circuit class or other activities. The time of the session is important, as it needs to be after school and work, when both parents and children can attend. Team members need to be flexible in their working hours.

Team members collecting the Queen’s Nursing Institute award, at the Cafe Royal, London, June 2006. Christine Hancock, third from right, presented the award

Steven, aged 11 years old, using the equipment in the gym

References

1 Freedman DS, Srinivasan SR, Burke GL, Shear CL, Smoak CG, Harsha DW, Webber LS, Berenson GS. Relation of body fat distribution to hyperinsulinaemia in children and adolescents: the Bogalusa Heart Study. American Journal of Clinical Nutrition 1987; 46:403-10.

2 Freedman DS, Dietz WH, Srinivasan SR. The relation of overweight to cardiovascular risk factors among children and adolescents: the Bogalusa Heart Study. Pediatrics 1999; 103:1175-82.

3 Gunnell DJ. Childhood obesity and adult cardiovascular mortality: a 57-year follow-up study based on the Boyd Orr cohort. American Journal of Clinical Nutrition 1998; 67:1111-8.

4 Viner RM, Segal TY, Lichtarowicz-Krynska E, Hindmarsh P. Prevalence of the insulin resistance syndrome in obesity. Archives of Disease in Childhood 2005; 90:10-14.

5 Guo SS, Chumlea WC. Tracking of body mass index in children in relation to overweight in adulthood. American Journal of Clinical Nutrition 1999; 70:145S-148S.

6 Rolland-Cachera MF, Deheeger M, Guilloud-Bataille M, Avons P, Patois E, Sempe M. Tracking the development of adiposity from one month of age to adulthood. Annals of Human Biology 1987; 14:219-29.

7 Whitaker RC, Pepe M, Wright IA, Seidel K, Dietz WH. Early adiposity rebound and the risk of adult obesity 1998. Available at: www.pediatrics.org/cgi/content/full/101/3/e5. Accessed 3 May 07.

8 Freedman DS, Khan LK, Serdula MK, Dietz WH, Srinivasan SR, Berenson GS. The relation of childhood BMI to adult adiposity: the Bogalusa Heart Study. Pediatrics 2005; 115:22-7.

9 Hughes JM, Chinn S, Rona RJ. Trends in growth in England and Scotland 1972 to 1994. Archives of Disease in Childhood 1997; 76:182-9.

10 Chinn S, Rona RJ. Prevalence and trends in overweight and obesity in three cross-sectional studies of British children 1974-94. British Medical Journal 2001; 322:24-6.

11 Department of Health. Choosing Health: making healthier choices easier. London: DoH, 2004.

12 Edwards C, Nicholls D, Croker H, Van Zyl S, Viner R, Wardle J. Family-based behavioural treatment of obesity: acceptability and effectiveness in the UK. European Journal of Clinical Nutrition 2006; 60:587-92.

13 Mulvihill C, Quigley R. The management of obesity and overweight; An analysis of reviews of diet, physical activity and behavioural approaches. Evidence briefing 1st edition. London: Health Development Agency, 2003.

14 Cole TJ, Freeman JV, Preece MA. Body mass index reference curves for the UK, 1990. Archives of Disease in Childhood 1995; 73:25-9.

15 Rosenberg M. Society and the adolescent self-image. New Jersey: Princeton University Press, 1965.

16 Bandini LG, Schoeller DA, Cyr H, Dietz WH. Validity of reported energy intake in obese and non-obese adolescents. American Journal of Clinical Nutrition 1990; 52:421-5.

17 McCarthy HD, Ellis S, Cole TJ. Central overweight and obesity in British youths aged 11-16 years: cross-sectional surveys of waist circumference. British Medical Journal 2003; 326:624-6.

18 Zannolli R, Morgese G. Waist percentiles: a simple test for atherogenic disease. Acta Paediatrica 1996; 85:1368-9.

Karen Lake

Principal public health practitioner, Project lead FIT-OK

Copyright TG Scott & Son Ltd. Jun 2007

(c) 2007 Community Practitioner. Provided by ProQuest Information and Learning. All rights Reserved.

Effect of the Novel Oral Dipeptidyl Peptidase IV Inhibitor Vildagliptin on the Pharmacokinetics and Pharmacodynamics of Warfarin in Healthy Subjects*

By He, Yan-Ling Sabo, Ron; Riviere, Gilles-Jacques; Sunkara, Gangadhar; Et al

Key words: Drug interaction – International normalized ratio – Pharmacokinetics – Prothrombin time – Vildagliptin – Warfarin

ABSTRACT

Objective: Vildagliptin is a potent and selective dipeptidyl peptidase-IV (DPP-4) inhibitor that improves glycemic control in patients with type 2 diabetes by increasing alpha- and beta-cell responsiveness to glucose. This study assessed the effect of multiple doses of vildagliptin 100 mg once daily on warfarin pharmacokinetics and pharmacodynamics following a single 25 mg oral dose of warfarin sodium.

Research design and methods: Open-label, randomized, two-period, two-treatment crossover study in 16 healthy subjects.

Results: The geometric mean ratios (co-administration vs. administration alone) and 90% confidence intervals (CIs) for the area under the plasma concentration-time curve (AUC) of vildagliptin, R-warfarin and S-warfarin were 1.04 (0.98, 1.11), 1.00 (0.95, 1.04) and 0.97 (0.93, 1.01), respectively. The 90% CIs of the ratios for the maximum plasma concentration (Cmax) of vildagliptin, R-warfarin and S-warfarin were also within the equivalence range 0.80-1.25. Geometric mean ratios (co-administration vs. warfarin alone) of the maximum value and AUC for prothrombin time (PTmax, 1.00 [90% CI 0.97, 1.04]; AUC(PT), 0.99 [0.97, 1.01]) and international normalized ratio (INRmax, 1.01 [0.98, 1.05]; AUC(INR), 0.99 [0.97, 1.01]) were near unity, with the 90% CIs within the range 0.80-1.25. Vildagliptin was well tolerated alone or co-administered with warfarin; only one adverse event (upper respiratory tract infection in a subject receiving warfarin alone) was reported, and it was judged not to be related to study medication.

Conclusions: Co-administration of warfarin with vildagliptin did not alter the pharmacokinetics or pharmacodynamics of R- or S-warfarin. The pharmacokinetics of vildagliptin were not affected by warfarin. No dosage adjustment of either drug is necessary when they are co-administered.
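The no-interaction criterion used in the abstract (geometric mean ratios whose 90% CIs fall inside 0.80-1.25) can be sketched as follows. The AUC values are made-up illustrative data, and the normal-quantile approximation stands in for the t-quantile that a formal analysis would use:

```python
import math
from statistics import mean, stdev, NormalDist

def gmr_90ci(test_auc, ref_auc):
    """Geometric mean ratio and approximate 90% CI from paired AUCs
    (test = co-administration, ref = drug alone), computed on the log
    scale; normal-quantile approximation for simplicity."""
    diffs = [math.log(t) - math.log(r) for t, r in zip(test_auc, ref_auc)]
    m, s, n = mean(diffs), stdev(diffs), len(diffs)
    half_width = NormalDist().inv_cdf(0.95) * s / math.sqrt(n)
    return math.exp(m), (math.exp(m - half_width), math.exp(m + half_width))

def no_interaction(ci, lo=0.80, hi=1.25):
    """Standard equivalence check: the whole 90% CI lies in [0.80, 1.25]."""
    return lo <= ci[0] and ci[1] <= hi

# Illustrative paired AUC values for four subjects:
gmr, ci = gmr_90ci([98, 105, 110, 102], [100, 100, 108, 104])
print(round(gmr, 3), no_interaction(ci))
```

Ratios near 1.00 with tight CIs, as reported in the study, are exactly the pattern this check is designed to certify.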

Introduction

Vildagliptin is an orally active, potent and selective inhibitor of dipeptidyl peptidase IV (DPP-4), the enzyme responsible for the degradation and inactivation of the incretin hormones glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic peptide (GIP).1 Vildagliptin increases post-prandial levels of intact, biologically active GLP-1 and GIP by selectively and reversibly inhibiting DPP-4. Clinical studies in patients with type 2 diabetes mellitus have shown that vildagliptin treatment improves glycemic control2,3 and pancreatic beta-cell function.4,5

Vildagliptin is rapidly and almost completely absorbed following oral administration in humans. It exhibits linear pharmacokinetics, with an elimination half-life of about 3 hours after oral administration. The pharmacodynamic half-life, reflected by the mean residence time of DPP-4 inhibition after administration of a single 100 mg oral dose, was 9.6 hours, allowing once-daily administration. Metabolism is the primary elimination pathway for vildagliptin in humans, accounting for approximately two-thirds of the elimination of an oral dose. The predominant metabolic pathway of vildagliptin is hydrolysis at the cyano moiety to form a carboxylic acid (LAY151), which is pharmacologically inactive. Renal clearance of unchanged vildagliptin accounts for about one-third of total body clearance of the drug. In vitro experiments assessing the activity of a range of cytochrome P450 (CYP450) isoenzymes in human liver microsomes showed no inhibitory effects (IC50 values in µmol/L) and no quantifiable metabolism of vildagliptin by CYP450 (Novartis, data on file). These results indicate that pharmacokinetic interactions with vildagliptin due to effects on CYP450 isoenzymes are unlikely.
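The linear (first-order) kinetics described above imply simple exponential elimination; a short sketch using the approximately 3 hour half-life quoted in the text:

```python
import math

def fraction_remaining(t_hours, half_life_hours=3.0):
    """First-order elimination: the drug remaining falls by half every
    half-life, i.e. exp(-ln(2) * t / t_half)."""
    return math.exp(-math.log(2) * t_hours / half_life_hours)

print(round(fraction_remaining(3), 2))    # 0.5 after one half-life
print(round(fraction_remaining(9.6), 2))  # ~0.11 by the 9.6 h pharmacodynamic window
```

This also illustrates why the pharmacodynamic half-life of 9.6 hours, rather than the 3 hour plasma half-life, is what supports once-daily dosing: DPP-4 inhibition persists well after most of the drug has been eliminated.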

Warfarin is a vitamin K antagonist widely used for the long-term prevention of thrombosis.6 Prothrombin time (PT) is the most commonly used parameter to measure the anticoagulant effect of warfarin. An increase in PT results from a reduction of three of the four vitamin K-dependent procoagulant clotting factors (factors II, VII and X) by warfarin, which occurs gradually.7 Standardization of PT across studies is achieved by calculation of the international normalized ratio (INR),8 which provides a more reliable measure of blood coagulation than the non-standardized PT ratio.9 Warfarin has a narrow therapeutic index with large inter- and intra-individual variations in dose response, necessitating regular determinations of PT to monitor the anticoagulant effect and reduce the risk of bleeding.10 Warfarin is a racemic compound; the S-enantiomer has approximately five times greater anticoagulant activity than the R-enantiomer.11 Metabolism of R-warfarin is mediated by the P450 isoenzyme 3A4,12 and that of S-warfarin occurs primarily through CYP2C9-mediated hydroxylation.12,13 The most common cause of pharmacokinetic interaction with warfarin is inhibition of CYP2C9, leading to an increased concentration of S-warfarin and an increased anticoagulant effect and risk of hemorrhage.14
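The INR standardization mentioned above has a simple closed form; a minimal sketch (the PT values and ISI in the example are illustrative, not from the study):

```python
def inr(pt_patient_sec, pt_normal_mean_sec, isi):
    """International normalized ratio: the patient's PT divided by the
    lab's mean normal PT, raised to the thromboplastin reagent's
    International Sensitivity Index (ISI)."""
    return (pt_patient_sec / pt_normal_mean_sec) ** isi

# Example: patient PT 18 s, lab mean normal PT 12 s, reagent ISI 1.0
print(inr(18.0, 12.0, 1.0))  # 1.5
```

Raising the ratio to the ISI corrects for reagent sensitivity, which is what makes INR values comparable across laboratories where raw PT ratios are not.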

Based on the known pharmacokinetics and mechanism of action of vildagliptin, pharmacokinetic or pharmacodynamic interactions with warfarin would not be anticipated. The objective of this study was to confirm the lack of any potential drug-drug interaction when warfarin is co-administered with vildagliptin.

Patients and methods

Study population

This study enrolled male and female subjects aged 18-45 years in good health, as determined from medical history, physical examination, vital signs, electrocardiogram (ECG), and laboratory tests, including a normal PT. Subjects had a body weight of at least 50 kg and were within ±20% of normal for their height and frame size according to the Metropolitan Life Insurance Tables.

Subjects were excluded if they had any condition that would represent a contra-indication for the use of an anticoagulant or history of abnormal bleeding. Other exclusion criteria included smoking (use of tobacco product in the previous 3 months and/or urine cotinine > 500 ng/mL), clinically significant ECG abnormalities or abnormal laboratory values, and any condition that might significantly alter the absorption, distribution, metabolism or excretion of study drugs. Subjects were also excluded if they were vegetarian or ate large amounts of leafy green vegetables (as a high dietary intake of vitamin K could influence the pharmacodynamics of vitamin K antagonists such as warfarin), or if they had used any prescription drugs in the 4 weeks prior to dosing or any over-the-counter medication (except acetaminophen) during the 2 weeks prior to dosing.

Study participants were not permitted to engage in strenuous physical exercise for 7 days before dosing, or to consume alcohol from 72 hours before dosing until after the study completion evaluation. Intake of xanthine-containing food or beverages was discontinued 48 hours before dosing and was not permitted while subjects were admitted to the study center.

The study was performed in compliance with the Guidelines for Good Clinical Practice and the Declaration of Helsinki of the World Medical Association and received approval by the Western Institutional Review Board (Olympia, WA, USA). All participants provided written informed consent prior to study participation.

Study design

This was a single-center, open-label, randomized, two-period, two- treatment crossover study. Subject eligibility for the study was assessed based on inclusion and exclusion criteria, safety evaluations and normal PT during a 21-day screening period.

All subjects who met inclusion criteria at screening were admitted to the study center. Following Period 1 baseline evaluations (day -1), subjects were randomized to one of two treatment sequences. Randomization was performed by Novartis Drug Supply Management using a validated system. Subjects assigned to the first treatment sequence received open-label vildagliptin 100 mg once daily for 6 days, co-administered with a single 25 mg oral dose of warfarin sodium on day 2 (Period 1). After a washout period of at least 14 days, subjects then received placebo (matching vildagliptin) once daily for 6 days, with co-administration of warfarin on day 2 (Period 2). Subjects assigned to the second treatment sequence received treatments in the opposite order.

Subjects were admitted to the study center for 9 days and were discharged following the final pharmacokinetic and safety assessments of Period 1. Following the washout period, subjects were readmitted to the study center for Period 2, which was performed in an identical fashion to Period 1; end-of-study evaluations were performed on the last day of Period 2, after which subjects were discharged.

Study treatments were administered as tablets with 240 mL of water between 7:00am and 8:30am, following an overnight fast of at least 10 hours. On days 1 and 2 of each treatment period, subjects continued to fast until 4 hours post-dose; on days 3-6, subjects were provided with breakfast 30 minutes after dosing.

Blood samples for determination of plasma vildagliptin concentrations were taken pre-dose and at 0.5, 1, 1.5, 2, 3, 4, 5, 6, 8, 10, 12, 16 and 24 hours post-dose on days 1 and 2 of the appropriate treatment period for each subject. Samples for determination of plasma R- and S-warfarin concentrations were taken pre-dose and at 0.5, 1, 2, 4, 6, 8, 10, 12, 16, 24, 36, 48, 60, 72, 96, 120, 144, and 168 hours post-dose on day 2 of both treatment periods. Blood samples (1 mL for vildagliptin and 5 mL for warfarin) were taken by either direct venipuncture or an indwelling cannula inserted in a forearm vein, and collected into a sodium heparin tube. Within 15 minutes, samples for analysis of vildagliptin were centrifuged at 3-5°C for 15 minutes at approximately 2500 rpm, and plasma samples were frozen at -70°C or below until analysis was performed.

Determination of plasma drug concentrations

Plasma concentrations of vildagliptin were measured by a high-performance liquid chromatography (HPLC)-tandem mass spectrometry (MS/MS) method. The assay consisted of a liquid-solid extraction on Oasis HLB 96-well extraction plates (Waters Corporation, Milford, MA, USA) using an automated system, followed by HPLC on an XTerra MS C18 5 µm column (Waters Corporation, Milford, MA, USA) with isocratic elution using 40% mobile phase A (10 mmol/L ammonium acetate [adjusted to pH 8 with ammonia solution]-methanol [95:5, v/v]) and 60% mobile phase B (acetonitrile-methanol [10:90, v/v]). Detection was performed by MS/MS with electrospray ionization (ESI) using an API 3000 mass spectrometer (Applied Biosystems, Foster City, CA, USA) in positive ion mode. The masses monitored for vildagliptin were precursor ion m/z 304 and product ion m/z 154. The lower limit of quantification for the assay was 2.0 ng/mL. The internal standard for this assay was [13C5,15N]vildagliptin. Within-study assay validation at nominal vildagliptin concentrations of 5.25, 400 and 900 ng/mL showed an assay precision (coefficient of variation) of 5.8-14.5% and a bias of 2.5-4.0%.

Plasma concentrations of R- and S-warfarin were measured by HPLC with fluorescence detection by CEPHAC EUROPE (Saint Benoit, France). The assay consisted of liquid-liquid extraction with hexane/methylene chloride followed by reverse-phase HPLC on a Chiral-AGP column (Chromtech AB, Sollentuna, Sweden) with isocratic elution using phosphate buffer (pH 7.3)/2-propanol. Detection was performed by fluorescence using a Fluorimeter 474 (Waters Corporation, Milford, MA, USA) with λexc 292 nm and λem 380 nm. The internal standard for the assay was diclofenac sodium. The lower limit of quantification for the assay was 10 ng/mL. Within-study assay validation at nominal concentrations of 20, 1000, and 1600 ng/mL showed an assay precision (coefficient of variation) of 3.1-9.9% for R-warfarin and 2.8-5.5% for S-warfarin, and a bias of 0.0-9.0% for R-warfarin and 0.6-10.0% for S-warfarin.

Statistical analyses

In previous drug-drug interaction studies conducted with warfarin (administered as a single 25 or 30 mg oral dose) and employing a crossover design, the ranges of intra-subject coefficients of variation (CV) for the area under the plasma concentration-time curve (AUC) and the maximum observed plasma concentration (Cmax) of R-warfarin were 0.06-0.15 and 0.07-0.19, respectively, and for S-warfarin AUC and Cmax, 0.09-0.19 and 0.08-0.16, respectively15-17. The criterion used to evaluate the significance of a drug interaction was based on the 90% confidence intervals (CIs) for the ratios of geometric means of AUC and Cmax; CIs contained within the equivalence range of 0.80-1.25 for both warfarin enantiomers were taken as evidence of no significant interaction. Based on an intra-subject CV of 0.1, a sample size of 16 subjects provided at least 90% power to establish bioequivalence of warfarin administered alone or in combination with vildagliptin.
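The acceptance criterion can be sketched numerically. The following Python fragment is an illustrative simplification, not the authors' SAS analysis: it uses a paired analysis of log-transformed values with a normal-approximation critical value in place of the full crossover ANOVA, and the AUC values are hypothetical.

```python
# Illustrative sketch of the equivalence criterion (not the authors' SAS
# analysis): the 90% CI for the ratio of geometric means must lie entirely
# within 0.80-1.25. A paired analysis of log-transformed values with a
# normal-approximation critical value stands in for the crossover ANOVA.
import math
from statistics import NormalDist, mean, stdev

def geomean_ratio_ci(test, ref, level=0.90):
    """Geometric mean ratio test/ref and its CI from paired observations."""
    diffs = [math.log(t) - math.log(r) for t, r in zip(test, ref)]
    se = stdev(diffs) / math.sqrt(len(diffs))
    z = NormalDist().inv_cdf(0.5 + level / 2)  # normal approx. to the t quantile
    m = mean(diffs)
    return math.exp(m), (math.exp(m - z * se), math.exp(m + z * se))

def bioequivalent(ci, lower=0.80, upper=1.25):
    """True when the whole CI sits inside the equivalence range."""
    return lower <= ci[0] and ci[1] <= upper

# Hypothetical AUC values (ng*h/mL): warfarin alone (ref) vs. with vildagliptin
ref = [100, 120, 95, 110, 105, 98, 115, 102]
test = [102, 118, 97, 108, 107, 99, 113, 104]
ratio, ci = geomean_ratio_ci(test, ref)
print(round(ratio, 3), bioequivalent(ci))
```

Because equivalence testing requires the whole interval, not just the point estimate, to fall inside 0.80-1.25, a small intra-subject CV (here assumed to be about 0.1, as in the power calculation) is what makes a sample of 16 subjects sufficient.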

Pharmacokinetic assessments

Pharmacokinetic parameters determined for vildagliptin, R-warfarin and S-warfarin were the AUC extrapolated to infinity (AUC(0-∞)), Cmax, the time at which Cmax occurred (tmax), the apparent total plasma clearance (CL/F), and the terminal elimination half-life (t1/2). The AUC was also calculated between 0 and 24 hours for vildagliptin (AUC(0-24h)) and between 0 and 168 hours for warfarin (AUC(0-168h)). Pharmacokinetic parameters were calculated by non-compartmental methods using WinNonlin Pro (Version 4.0, Pharsight Corp, Mountain View, CA, USA).
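For readers unfamiliar with non-compartmental analysis, the core calculations can be sketched as follows. This is an assumed minimal implementation, not the WinNonlin algorithm: linear trapezoidal AUC over the sampled interval, a terminal rate constant from a log-linear least-squares fit, and extrapolation of the AUC to infinity; the concentration data are synthetic.

```python
# Minimal non-compartmental sketch (assumed, not the WinNonlin implementation):
# linear trapezoidal AUC over the sampled interval, a terminal elimination
# rate constant (lambda_z) from a log-linear least-squares fit, and
# extrapolation of the AUC to infinity. Concentration data are synthetic.
import math

def auc_trapezoid(times, conc):
    """AUC over the sampled interval by the linear trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))

def terminal_lambda_z(times, conc, n_points=3):
    """Terminal rate constant from a least-squares fit to log-concentration."""
    ts = times[-n_points:]
    logs = [math.log(c) for c in conc[-n_points:]]
    tbar = sum(ts) / n_points
    lbar = sum(logs) / n_points
    slope = (sum((t - tbar) * (l - lbar) for t, l in zip(ts, logs))
             / sum((t - tbar) ** 2 for t in ts))
    return -slope

# Synthetic mono-exponential profile C(t) = 100 * exp(-0.23 * t)
# (t1/2 ~3 h, roughly matching the vildagliptin half-life quoted earlier)
times = [0.5, 1, 1.5, 2, 3, 4, 5, 6, 8, 10, 12]
conc = [100 * math.exp(-0.23 * t) for t in times]

auc_last = auc_trapezoid(times, conc)
lambda_z = terminal_lambda_z(times, conc)
auc_inf = auc_last + conc[-1] / lambda_z   # log-linear tail beyond the last sample
t_half = math.log(2) / lambda_z            # ~3 h for this synthetic profile
```

The tail term conc[-1] / lambda_z is the standard closed-form integral of a mono-exponential decline from the last measured concentration to infinity.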

Statistical comparisons of log-transformed AUC and Cmax values were performed using an analysis of variance (ANOVA) model (PROC MIXED SAS procedure, SAS Institute Inc., Cary, NC, USA), with sequence, period and treatment as sources of variation and subject (sequence) as a random effect. Ratios of geometric mean AUC and Cmax, with the associated 90% CIs and p-values, for co-administration of vildagliptin and warfarin compared with vildagliptin or warfarin administered alone were calculated using the above model.

Pharmacodynamic assessments

The anticoagulant effect of warfarin was assessed by measurement of the PT from blood samples taken at intervals up to 168 hours post-dose. Prothrombin times were measured on a Sysmex CA-1500 analyzer using Dade Behring Innovin reagent (Lab Corporation of America, Hollywood, FL, USA).

Pharmacodynamic variables were obtained from the collected PT values, expressed in seconds, and the INR, in INR units. Pharmacodynamic parameters were determined from both the original PT/INR data and the change from pre-dose (calculated as the PT/INR obtained at each time point minus the pre-dose value). The following pharmacodynamic parameters were determined using non-compartmental methods: the area under the PT-time curve from 0 to 168 hours (AUC(PT)), the peak PT reached (PTmax), the area under the INR-time curve from 0 to 168 hours in international normalized ratio units (AUC(INR)), and the peak INR (INRmax).
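The INR standardization used above follows the relationship INR = (PT_patient / PT_normal)^ISI, where the International Sensitivity Index (ISI) is specific to the thromboplastin reagent. A minimal sketch, in which the mean normal PT and ISI values are illustrative assumptions rather than the calibration of the reagent used in the study:

```python
# Sketch of the INR standardization: INR = (PT / PT_normal) ** ISI.
# The mean normal PT (12 s) and ISI (1.0) here are illustrative assumptions,
# not the calibration of the reagent used in the study.
def inr(pt_patient_s, pt_normal_s=12.0, isi=1.0):
    """International normalized ratio from a measured prothrombin time (seconds)."""
    return (pt_patient_s / pt_normal_s) ** isi

# With a 12 s mean normal PT and ISI = 1.0, a PT of 24 s corresponds to INR 2.0
print(inr(24.0))  # -> 2.0
```

Raising the PT ratio to the reagent-specific ISI is what makes INR values comparable across laboratories using different thromboplastins, which is why the study reports INR alongside raw PT.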

Statistical comparisons of pharmacodynamic parameters following administration of a single dose of warfarin alone or in combination with vildagliptin were performed using an ANOVA model with sequence, period and treatment as fixed effects and subject (sequence) as a random effect. Ratios of geometric means for the pharmacodynamic parameters of warfarin (PTmax, AUC(PT), INRmax and AUC(INR)), with the associated 90% CIs and p-values, for co-administration of vildagliptin and warfarin compared with warfarin administered alone were calculated using the SAS PROC MIXED procedure.

Safety and tolerability assessments

Safety and tolerability assessments were conducted in all subjects, and included the monitoring and recording of all adverse events (AEs) as well as details of concomitant medications or significant non-drug therapies.

Evaluation of routine blood chemistries, hematologic profile and urine analysis, as well as a physical examination, ECG recordings, and monitoring of vital signs, were performed at screening, baseline, and after the completion of the study.

Results

Patient characteristics

A total of 16 subjects were enrolled and 15 completed the study; one subject withdrew consent after completion of Period 1 and was not replaced. Subjects had a mean age of 31.9 ± 9.6 years, a mean body weight of 70.2 ± 11.7 kg and a mean height of 166 ± 11 cm; there were 10 men and six women. Most subjects (15/16) were of white Hispanic or black Hispanic origin.

Pharmacokinetics of vildagliptin

The plasma concentration-time profiles of vildagliptin were similar, and the pharmacokinetic parameters unaffected, by co-administration of a single 25 mg oral dose of warfarin (Figure 1, Table 1). Ratios of geometric mean values were near unity, with the 90% CIs contained within the bioequivalence range of 0.80-1.25 for vildagliptin Cmax (geometric mean ratio 1.01 [90% CI 0.89, 1.14]) and AUC(0-24h) (mean ratio 1.04 [90% CI 0.98, 1.11]).

Figure 1. Plasma concentration-time profiles for vildagliptin following administration of a 100 mg oral dose alone or in combination with a single 25 mg dose of warfarin in healthy subjects. Symbols denote plasma concentrations of vildagliptin following administration alone (filled circles) or in combination with a single dose of warfarin (open triangles). Data are presented as mean ± SEM (standard error of the mean)

Table 1. Pharmacokinetic parameters of vildagliptin following oral administration of vildagliptin 100 mg alone or in combination with warfarin 25 mg in healthy subjects

Figure 2. Plasma concentration-time profiles for (A) R-warfarin and (B) S-warfarin following administration of a single 25 mg oral dose of warfarin alone or in combination with vildagliptin 100 mg in healthy subjects. Symbols denote plasma concentrations of R-warfarin or S-warfarin following administration of warfarin alone (filled circles) or in combination with vildagliptin (open triangles). Data are presented as mean ± SEM (standard error of the mean)

Pharmacokinetics of R- and S-warfarin

Co-administration of vildagliptin 100 mg with a single 25 mg oral dose of warfarin had no major effect on the plasma concentration-time profile or pharmacokinetic parameters of either R-warfarin or S-warfarin (Figure 2, Table 2). Ratios of geometric mean values were near unity, with the 90% CIs contained within the bioequivalence range of 0.80-1.25 for R-warfarin Cmax (geometric mean ratio 1.02 [90% CI 0.95, 1.11]) and AUC(0-∞) (mean ratio 1.00 [90% CI 0.95, 1.04]), and for S-warfarin Cmax (mean ratio 1.02 [90% CI 0.94, 1.10]) and AUC(0-∞) (mean ratio 0.97 [90% CI 0.93, 1.01]).

Pharmacodynamics of warfarin

Mean PT values (raw data and international normalized ratios [INR]) over 168 hours following administration of warfarin were essentially identical whether warfarin was administered alone or in combination with vildagliptin (Figure 3). Maximum PT and INR values and the areas under the PT-time and INR-time curves were not altered by co-administration with vildagliptin (Table 3); ratios of geometric means were near unity, with 90% CIs contained within the equivalence range of 0.80-1.25 for absolute PT or INR values and for changes from pre-dose values (Table 4).

Safety and tolerability

Administration of vildagliptin and warfarin alone or in combination was well tolerated. The only adverse event reported during the study was a mild upper respiratory tract infection reported by one subject 12 days after receiving treatment with warfarin and placebo, which was judged not to be related to study medication. This subject did not discontinue the study due to the adverse event but later withdrew consent.

Table 2. Pharmacokinetic parameters of R- and S-warfarin following oral administration of a single dose of warfarin 25 mg alone or in combination with vildagliptin 100 mg once daily in healthy subjects

Figure 3. (A) Mean prothrombin time (PT) and (B) international normalized ratio (INR) following administration of a single 25 mg oral dose of warfarin alone or in combination with vildagliptin 100 mg in healthy subjects. Symbols denote PT or INR following administration of warfarin alone (filled circles) or in combination with vildagliptin (open triangles). Data are presented as mean ± SEM (standard error of the mean)

There were no clinically significant changes in blood or urine chemistry, vital signs or ECG findings during the course of the study. There were multiple instances of prolonged PT (range 9-13 seconds) or increased INR (range 2-3.5), which would be anticipated following warfarin administration, but no other clinically relevant changes in hematology parameters were observed.

Discussion

The vitamin K antagonist warfarin has been used for several decades for the long-term prevention of thrombosis and thromboembolism. Because warfarin has a narrow therapeutic index, pharmacokinetic or pharmacodynamic interactions with warfarin can have potentially serious consequences14,18. The aim of the present study was to assess the effects of co-administration of warfarin with the potent and selective DPP-4 inhibitor vildagliptin, a novel antihyperglycemic agent, on the pharmacokinetics and pharmacodynamics of warfarin in healthy subjects. The results showed that co-administration of vildagliptin with warfarin was well tolerated and had no effect on the single-dose pharmacokinetics of either the R- or S-enantiomer of warfarin, or on the anticoagulant effects of warfarin in healthy subjects.

Co-administration of oral vildagliptin 100 mg once daily with a single 25 mg oral dose of warfarin had no effect on the pharmacokinetics of either drug. The ratios of geometric means and associated 90% CIs for the AUC(0-∞) and Cmax of vildagliptin, R-warfarin and S-warfarin were all contained entirely within the equivalence range of 0.80-1.25. Most pharmacokinetic interactions with warfarin occur as a result of either inhibition (e.g., by miconazole)19 or induction (e.g., by phenytoin)20 of CYP2C9-mediated metabolism of S-warfarin, the more active of the two enantiomers. The lack of effect of vildagliptin on the pharmacokinetics of warfarin is consistent with the fact that vildagliptin does not inhibit or induce the activity of CYP450 isoenzymes in vitro, and exhibits negligible metabolism by CYP450 in vivo (Novartis, data on file). Warfarin is highly protein bound (> 99%). Several drugs such as clofibrate, ibuprofen and cotrimoxazole displace warfarin from plasma proteins18, but the importance of protein binding displacement in drug-drug interactions is controversial14. Effects of vildagliptin on the protein binding of other drugs are unlikely, as plasma protein binding of vildagliptin is very low (9.3%).

Table 3. Pharmacodynamic parameters following oral administration of a single dose of warfarin 25 mg alone or in combination with vildagliptin 100 mg once daily in healthy subjects

Table 4. Ratio of geometric means for pharmacodynamic parameters following oral administration of a single dose of warfarin 25 mg alone or in combination with vildagliptin 100 mg once daily in healthy subjects

Co-administration of multiple doses of vildagliptin with warfarin in the present study had no effect on PT or INR (ratios of geometric means for peak PT or INR values and areas under the PT-time or INR-time curves were all entirely contained within the equivalence range of 0.80-1.25). Moreover, the 90% CIs for the area under the PT-time curve were 0.97-1.01, and therefore within the more stringent 0.95-1.05 equivalence range proposed by Schall et al.21 to take account of the narrow therapeutic index of warfarin. These results indicate that vildagliptin has no effect on the pharmacodynamics of a single dose of warfarin.

A limitation of the present study could be that the pharmacokinetics and pharmacodynamics of warfarin were assessed following administration of only a single dose. The terminal elimination half-lives of R-warfarin and S-warfarin are 35-58 hours and 24-33 hours, respectively, and so the anticoagulant effect of warfarin is likely to be more pronounced following multiple-dose administration10. However, unlike the dosing regimen typically used in clinical practice (a loading dose of 5-10 mg warfarin for the first 1 or 2 days and then titration of subsequent doses based on the INR response7), a single 25 mg or 30 mg oral dose of warfarin has been used by many investigators to study pharmacokinetic and pharmacodynamic interactions with warfarin15,17,22. This provides peak plasma levels and anticoagulant effects of warfarin similar to those observed at steady state in a clinical dosing regimen, such that the potential for pharmacokinetic and pharmacodynamic interaction can be assessed in healthy volunteers without compromising safety. Consistent with the results of previous studies that have employed a single 25 mg oral dose of warfarin15,22, warfarin administration in the present study resulted in an increase in INR, reaching a peak of approximately 2.0 at around 36-48 hours post-dose. Other studies that have shown no drug-drug interaction with single-dose warfarin15,17,22 have also confirmed the lack of drug-drug interaction following chronic administration in clinical practice.
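The expectation that anticoagulant exposure is more pronounced at steady state can be illustrated with the standard accumulation factor R = 1/(1 - exp(-λz·τ)) for repeated dosing at interval τ. This back-of-envelope sketch is an illustrative assumption, not an analysis from the study; it only applies the S-warfarin half-life range quoted above:

```python
# Back-of-envelope sketch (illustrative assumption, not an analysis from the
# study): steady-state accumulation under once-daily dosing, using the
# standard one-compartment accumulation factor R = 1 / (1 - exp(-lambda_z * tau)).
import math

def accumulation_factor(t_half_h, tau_h=24.0):
    """Steady-state to single-dose exposure ratio for dosing interval tau_h (hours)."""
    lambda_z = math.log(2) / t_half_h
    return 1.0 / (1.0 - math.exp(-lambda_z * tau_h))

# For the S-warfarin half-life range quoted above (24-33 h), once-daily dosing
# accumulates roughly 2.0- to 2.5-fold relative to a single dose.
for t_half in (24, 33):
    print(t_half, round(accumulation_factor(t_half), 2))
```

This roughly 2-fold accumulation is consistent with the rationale in the text: a single 25-30 mg dose can stand in for steady-state exposure of a lower clinical maintenance regimen without compromising volunteer safety.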

Vildagliptin treatment was well tolerated when administered alone or in combination with a single dose of warfarin in healthy subjects in the present study. Only one adverse event (a case of upper respiratory tract infection judged to be mild in severity) was reported; this event was transient, resolved spontaneously, and was not judged to be related to study treatment. With the exception of multiple instances of prolonged PT or increased INR, which would be anticipated following administration of warfarin in healthy subjects, no other clinically relevant changes in hematology parameters, or in blood or urine chemistry, vital signs or ECG findings, were observed during the course of the study. The safety and tolerability profile of vildagliptin in healthy subjects in the present study is consistent with the findings of clinical studies in patients with type 2 diabetes. The 100 mg once-daily dosage of vildagliptin is the highest anticipated clinical dose of vildagliptin, and has previously been shown to be effective in improving glycemic control and associated with a rate of adverse events similar to that of placebo in a 12-week dose-finding study3. Phase 3 clinical trials have shown that vildagliptin 50 mg once daily was well tolerated over up to 1 year of treatment2,4.

Conclusion

In summary, the results of the present study show that the orally active, potent and selective DPP-4 inhibitor vildagliptin does not affect the pharmacokinetics of R-warfarin or S-warfarin, or the anticoagulant effect of warfarin. The pharmacokinetics of vildagliptin were also not altered by administration of a single dose of warfarin. Co-administration of multiple doses of vildagliptin with a single oral dose of warfarin was well tolerated. These results suggest that no dosage adjustment of either vildagliptin or warfarin should be required when these drugs are co- prescribed.

Acknowledgements

Declaration of interest: This study was supported by Novartis Pharmaceuticals. With the exception of Mitchell Rosenberg, all authors are employees of Novartis Pharmaceuticals Corporation and are eligible for Novartis stock and stock options.

* The results of this study were presented in abstract form at the 35th meeting of the American College of Clinical Pharmacology, Cambridge, MA, USA, September 17-19, 2006

References

1. Brandt I, Joossens J, Chen X, et al. Inhibition of dipeptidyl-peptidase IV catalyzed peptide truncation by vildagliptin ((2S)-([(3-hydroxyadamantan-1-yl)amino]acetyl)-pyrrolidine-2-carbonitrile). Biochem Pharmacol 2005;70:134-43

2. Ahren B, Gomis R, Standl E, et al. Twelve- and 52-week efficacy of the dipeptidyl peptidase IV inhibitor LAF237 in metformin-treated patients with type 2 diabetes. Diabetes Care 2004;27:2874-80

3. Ristic S, Byiers S, Foley J, et al. Improved glycaemic control with dipeptidyl peptidase-4 inhibition in patients with type 2 diabetes: vildagliptin (LAF237) dose response. Diabetes Obes Metab 2005;7:692-8

4. Ahren B, Pacini G, Foley JE, et al. Improved meal-related beta-cell function and insulin sensitivity by the dipeptidyl peptidase-IV inhibitor vildagliptin in metformin-treated patients with type 2 diabetes over 1 year. Diabetes Care 2005;28:1936-40

5. Mari A, Sallas WM, He YL, et al. Vildagliptin, a dipeptidyl peptidase-IV inhibitor, improves model-assessed beta-cell function in patients with type 2 diabetes. J Clin Endocrinol Metab 2005;90:4888-94

6. Hirsh J. Oral anticoagulant drugs. N Engl J Med 1991;324: 1865- 75

7. Ansell J, Hirsh J, Poller L, et al. The pharmacology and management of the vitamin K antagonists: the Seventh ACCP Conference on Antithrombotic and Thrombolytic Therapy. Chest 2004;126:204S-233S

8. Kirkwood TB. Calibration of reference thromboplastins and standardisation of the prothrombin time ratio. Thromb Haemost 1983;49:238-44

9. Johnston M, Harris L, Moffat K, et al. Reliability of the international normalized ratio for monitoring the induction phase of warfarin: comparison with the prothrombin time ratio. J Lab Clin Med 1996;128:214-7

10. Ufer M. Comparative pharmacokinetics of vitamin K antagonists: warfarin, phenprocoumon and acenocoumarol. Clin Pharmacokinet 2005;44:1227-46

11. Takahashi H, Echizen H. Pharmacogenetics of warfarin elimination and its clinical implications. Clin Pharmacokinet 2001;40:587-603

12. Black DJ, Kunze KL, Wienkers LC, et al. Warfarin- fluconazole. II. A metabolically based drug interaction: in vivo studies. Drug Metab Dispos 1996;24:422-8

13. Rettie AE, Korzekwa KR, Kunze KL, et al. Hydroxylation of warfarin by human cDNA-expressed cytochrome P-450: a role for P- 4502C9 in the etiology of (S)-warfarin-drug interactions. Chem Res Toxicol 1992; 5:54-9

14. Berlin MJ, Breckenridge AM. Drug interactions with warfarin. Drugs 1983;25:610-20

15. Van Hecken A, Depre M, Verbesselt R, et al. Effect of montelukast on the pharmacokinetics and pharmacodynamics of warfarin in healthy volunteers. J Clin Pharmacol 1999;39:495-500

16. Jiang X, Williams KM, Liauw WS, et al. Effect of St John’s wort and ginseng on the pharmacokinetics and pharmacodynamics of warfarin in healthy subjects. Br J Clin Pharmacol 2004;57:592-9

17. Anderson DM, Shelley S, Crick N, et al. No effect of the novel antidiabetic agent nateglinide on the pharmacokinetics and anticoagulant properties of warfarin in healthy volunteers. J Clin Pharmacol 2002;42:1358-65

18. Freedman MD, Olatidoye AG. Clinically significant drug interactions with the oral anticoagulants. Drug Saf 1994;10: 381-94

19. O’Reilly RA, Goulart DA, Kunze KL, et al. Mechanisms of the stereoselective interaction between miconazole and racemic warfarin in human subjects. Clin Pharmacol Ther 1992;51: 656-67

20. Cropp JS, Bussey HI. A review of enzyme induction of warfarin metabolism with recommendations for patient management. Pharmacotherapy 1997; 17:917-28

21. Schall R, Muller FO, Hundt HK, et al. No pharmacokinetic or pharmacodynamic interaction between rivastatin and warfarin. J Clin Pharmacol 1995;35:306-13

22. Zhou H, Patat A, Parks V, et al. Absence of a pharmacokinetic interaction between etanercept and warfarin. J Clin Pharmacol 2004;44:543-50

Paper CMRO-3895_3, Accepted for publication: 20 March 2007

Published Online: 13 April 2007

doi: 10.1185/030079907X188008

Yan-Ling He^a, Ron Sabo^b, Gilles-Jacques Riviere^c, Gangadhar Sunkara^b, Selene Leon^b, Monica Ligueros-Saylan^b, Mitchell Rosenberg^d, William P. Dole^a and Dan Howard

^a Exploratory Development, Novartis Institutes for Biomedical Research, Cambridge, MA, USA

^b Novartis Pharmaceuticals Corporation, East Hanover, NJ, USA

^c Novartis Pharma S.A., Rueil-Malmaison, France

^d Parkway Research Center Inc., North Miami Beach, FL, USA

Address for correspondence: Dr Yan-Ling He, Exploratory Development-DMPK, Novartis Institutes for Biomedical Research, 400 Technology Square, Building 605, Cambridge, MA 02139-3584, USA. Tel.: +1-617-871-3065; Fax: +1-617-871-3331; email: [email protected]

Copyright Librapharm May 2007


Perindopril Arginine: Benefits of a New Salt of the ACE Inhibitor Perindopril

By Telejko, Elwira

Key words: Acceptability - L-Arginine - Bioequivalence - Perindopril - Shelf life - Stability

ABSTRACT

Background: The efficacy of the angiotensin-converting enzyme (ACE) inhibitor perindopril in the treatment of hypertension, stable coronary artery disease, and heart failure is well established. The reduced stability of the current salt, perindopril-tert-butylamine, in extreme climatic conditions has prompted research into more stable compounds. This article presents stability and bioequivalence results for a new L-arginine salt of perindopril.

Methods: Drug stability studies were performed on nonsalified perindopril, perindopril-tert-butylamine, and perindopril arginine in closed and open containers. The bioequivalence of perindopril arginine was tested in 36 healthy male volunteers in an open-label, randomized, two-period, crossover pharmacokinetic study. A consumer study was carried out in 120 patients to assess preference for a simplified packaging using a high-density polyethylene canister designed for distribution to all climatic zones.

Results and discussion: Perindopril arginine is 50% more stable than perindopril-tert-butylamine, which increases the shelf life from 2 to 3 years. At the revised dosage (perindopril arginine 5-10 mg/day corresponds to perindopril-tert-butylamine 4-8 mg/day), the new salt is equivalent in terms of pharmacokinetics, efficacy, safety, and acceptability. The consumer studies indicate a preference for the new packaging, with 62% of patients nominating the canister as better than the blister packs.

Conclusion: The new perindopril arginine salt is equivalent to perindopril-tert-butylamine and more stable, and can be distributed to climatic zones III and IV without the need for specific packaging. The patient preference for the new packaging could have positive implications for compliance.

Introduction

The angiotensin-converting enzyme (ACE) inhibitor perindopril is one of the most effective drug treatments for cardiovascular disease1-5. Perindopril is indicated in essential hypertension, in stable coronary artery disease to reduce the risk of cardiac events in patients with a history of myocardial infarction (MI) and/or revascularization, and in the treatment of symptomatic heart failure. Because of this extensive range of indications for perindopril at all stages of the cardiovascular disease continuum, and its efficacy and acceptability, this agent has become extremely popular in many countries. The currently available salt of perindopril is a salt of tert-butylamine. It has a shelf life of about 2 years in countries with a temperate climate, and requires special packaging in countries with high temperatures and relative humidity (RH). This has led Servier, the research-based company that discovered and developed perindopril, to sponsor research to further improve the product by increasing its shelf life and stability. This is important because of the varying, and often difficult, conditions of transport, storage, and delivery in different parts of the world, with large variations in temperature and RH. This article describes investigations into the bioequivalence and stability of the new L-arginine salt of perindopril.

ACE inhibition with perindopril

ACE inhibitors act via blockade of the conversion of angiotensin I to angiotensin II, thereby reducing the complex and widespread effects of the latter. Since ACE also catalyzes the breakdown of bradykinin, ACE inhibitors effectively increase bradykinin levels, which promotes the formation of vasodilators, including nitric oxide (NO). Improvement in the angiotensin II-bradykinin balance has a number of beneficial actions on the cardiovascular system, including antihypertensive and antiatherosclerotic effects6-8.

Perindopril is a prodrug that is metabolized in the liver to its active diacid metabolite perindoprilat. Perindopril is rapidly and extensively absorbed9, and perindoprilat has one of the highest tissue ACE affinities among the ACE inhibitors10. Tissue ACE affinity is related to its lipophilicity, which determines the extent of its penetration into endothelial and adventitial tissue and atherosclerotic plaque. Specific inhibition of tissue ACE increases the antiatherosclerotic action of an ACE inhibitor, as well as its effects on left ventricular hypertrophy and remodeling3.

The onset of activity of perindopril is slower than that of other members of the class, and maximal inhibition of ACE occurs 8 h after a single oral dose of perindopril tert-butylamine 8 mg. Inhibition is still 70% effective 24 h after intake10. The clinical consequence of this is that perindopril has the highest trough-to-peak ratio of all the ACE inhibitors11, between 75% and 100%. At a once-daily dosage, this translates into effective 24-h blood pressure control.

The clinical efficacy of perindopril is well established in hypertension12,13, heart failure14, and diabetic nephropathy15. Moreover, the cardiovascular effects of ACE inhibition with perindopril go well beyond this, as shown by four recent trials2-5. First, the Perindopril pROtection aGainst REcurrent Stroke Study (PROGRESS) showed that 4 years of a perindopril-based regimen caused a significant 28% decrease in the risk of recurrent stroke and a 26% reduction in the risk of major vascular events1. Second, the EURopean trial On reduction of cardiac events with Perindopril in stable coronary Artery disease (EUROPA)2 demonstrated that 4 years' treatment with perindopril on top of other standard preventive therapies produced a 20% reduction in the relative risk of cardiovascular death, nonfatal MI, and resuscitated cardiac arrest. These effects have been linked to the action of the ACE inhibitor in correcting endothelial dysfunction2,10,16. Third, the Perindopril and Remodeling in Elderly with Acute Myocardial Infarction (PREAMI) study showed that treatment of patients with preserved left ventricular function with perindopril for 1 year produced a significant 22% reduction in the absolute risk of death, hospitalization for heart failure, or cardiac remodeling3. Finally, the Anglo-Scandinavian Cardiac Outcomes Trial-Blood Pressure Lowering Arm (ASCOT-BPLA), the largest European study ever performed in hypertension, including 20,000 patients4, demonstrated a significant advantage of the amlodipine/perindopril combination over a more traditional atenolol/bendroflumethiazide combination. The results of ASCOT-BPLA are already having an impact on clinical practice17.

Perindopril has a good acceptability profile. As expected for an ACE inhibitor, the most commonly reported adverse event is dry cough, which is generally mild and disappears upon cessation of treatment. The rates of adverse events for perindopril compare favorably with other members of the ACE inhibitor class, with low rates of cough, hypotension, and withdrawals, as demonstrated in long-term clinical trials10,18.

Drug stability and climatic zones

Perindopril thus has a major role to play in one of the most important classes in the cardiovascular field. Any change in salt should therefore be considered carefully, with appropriate investigations of stability, bioequivalence, and therapeutic efficacy. Before we describe these studies for perindopril, we should review the factors affecting drug stability and bioavailability.

The stability of a therapeutic agent is defined as its resistance to unfavorable chemical, physical, biological, and microbiological processes occurring during production and storage. The shelf life of a drug is the period of time during which its pharmacological activity does not decrease below a certain value. A drug is deemed suitable for use if no more than about 5% of the active substance has undergone degradation and the degradation products are not toxic19. The chemical stability of a therapeutic agent can be assessed by investigating the chemical processes occurring during storage. Climatic factors, such as humidity, oxygen, light, and temperature, also have a major influence on drug stability.
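
The ~5% degradation criterion maps directly onto a shelf-life estimate once a degradation model is assumed. A minimal sketch under first-order kinetics (the rate constant below is purely illustrative, not a measured value for perindopril or any other drug):

```python
import math

def shelf_life_years(k_per_year: float, potency_limit: float = 0.95) -> float:
    """Time for first-order degradation C(t) = C0 * exp(-k*t) to fall to
    the potency limit (95% remaining, per the ~5% criterion above)."""
    return math.log(1.0 / potency_limit) / k_per_year

# Illustrative rate constant only, chosen to give a ~2-year shelf life:
print(round(shelf_life_years(0.0256), 1))  # ≈ 2.0 years
```

Higher temperature and RH increase the effective rate constant, which is why the same product can have a markedly shorter shelf life in climatic zones III and IV.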

Assessment of the stability of a drug is one of the necessary requirements for marketing authorization by the drug regulatory agencies. The International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) was set up to unify the quality, safety, and efficacy requirements for therapeutic agents entering the pharmaceutical market. The ICH provides recommendations that are accepted by the major regulatory agencies. ICH guidelines20 for studies on quality of pharmaceutical raw materials (ICH Q3A) and ready-to-use therapeutic preparations (ICH Q3B) include the obligation to monitor the impurities arising during the manufacturing process and identify any degradation products that may be produced during storage. The toxicity of these compounds should also be determined.

Separate ICH guidelines on stability20,21 set out reference values for temperature and RH for each of four defined climatic zones, from climatic zone I (temperate; 24°C/35% RH for long-term studies) to climatic zone IV (hot and humid; >24°C/75% RH). These climatic zones are summarized in Table 1, together with examples of countries assigned to each zone for the purposes of drug stability testing. Conditions equivalent to climatic zone IV are used for accelerated aging studies. Even if the pure pharmacologically active substance is stable under given conditions, it may undergo degradation when stored under extreme conditions of increased temperature and RH (climatic zones III and IV)22-26. This degradation may involve interaction of the pharmacologically active substance with excipients, decreasing the content of active substance and modifying parameters such as the rate of release of the active substance and its solubility27. Moreover, degradation products may also cause adverse events: for example, degradation of tetracycline under storage conditions of high temperature and RH not only changes the physical and chemical parameters of the drug (color change from light yellow to dark brown), but also increases its nephrotoxicity28,29.

The use of a prodrug like perindopril increases the bioavailability of an agent, because the prodrug is more lipophilic than its active metabolite (perindoprilat). However, degradation of a prodrug during storage may significantly reduce the bioavailability of the active metabolite in vivo and decrease therapeutic efficacy. When a new salt is sought, the aim is to improve the balance between bioavailability and drug stability.

Table 1. Definitions of climatic zones and examples of countries21. RH, relative humidity

Stability of perindopril arginine

When perindopril was first marketed in 1988, the tert-butylamine salt of perindopril was selected for distribution because impurities could be easily isolated during the crystallization phase. The limited stability of this salt in conditions corresponding to climatic zones III and IV meant that different packaging was necessary for distribution to countries in those zones. In climatic zones I and II, perindopril-tert-butylamine is currently marketed in PVC/aluminum blister packs, while a multicomponent type of packaging is used in climatic zones III and IV, involving PVC/aluminum blister packs overwrapped with a watertight bag containing a desiccant. Stability testing has proved this type of packaging to be efficient, but it is complicated and difficult to implement on a large scale.

Stability studies conducted according to the ICH guidelines have shown that perindopril may undergo degradation via two mechanisms: (i) at high RH, ester hydrolysis results in the formation of the poorly absorbed diacid; and (ii) at relatively high temperatures, cyclization and formation of lactam-type compounds lead to the degradation product known as Y31.

Studies on powdered perindopril indicate that the cyclization step can be almost entirely prevented by salification. When nonsalified perindopril and its tert-butylamine salt were stored in separate closed containers for 2 days at 100°C, the nonsalified perindopril completely degraded to Y31, whereas the perindopril-tert-butylamine salt remained almost intact. However, when they were stored under the same conditions but in open containers, the tert-butylamine salt also completely degraded to Y31 (Table 2). This can be explained by the volatility of tert-butylamine at high temperatures, leaving the unbound form of perindopril, which then undergoes degradation to Y31. Thus, salification with a volatile substance entails a potential risk for the stability of the product once the container has been opened.

A number of nonvolatile alternatives have been tested for the salification of perindopril. The arginine salt (Figure 1) was selected because it had the best balance between stability and hygroscopicity. The stability of the perindopril arginine salt was tested in closed and open containers for 2 days at 100°C, as described above. The arginine salt performed better in these tests than the tert-butylamine salt, remaining 100% stable even in the open container (Table 2).

Substitution of the tert-butylamine salt with perindopril arginine is also an attractive solution to the complex packaging of perindopril in climatic zones III and IV. The stability of the perindopril arginine formulation was tested for 6 months in a simple high-density polyethylene (HDPE) canister (Figure 2) with a desiccant under stressed conditions predictive of climatic zone IV (40°C/75% RH). The results show that the stability of the arginine salt in this simplified packaging is better than that of the tert-butylamine salt in the blister packs (Table 3). These changes also have consequences for the shelf life of the product: changing the salt to perindopril arginine increases the shelf life of perindopril by 50%, i.e., from 2 years to 3 years, irrespective of the storage temperature.

Table 2. Comparative stability of nonsalified perindopril, perindopril arginine, and perindopril-tert-butylamine stored in open or closed bottles at 100°C for 2 days

Figure 1. Molecular structure of the new angiotensin-converting enzyme (ACE) inhibitor perindopril as its L-arginine salt

Figure 2. Simplified packaging for perindopril arginine in the form of a new high-density polyethylene (HDPE) canister

Table 3. Stability results obtained with perindopril arginine 5 mg packaged in canisters with desiccant compared with perindopril-tert-butylamine 4 mg tablets packaged in aluminum/PVC blister packs, in terms of sum of degradation products (perindoprilat and Y31) remaining after 12 months at 25°C/60% relative humidity (RH) or 30°C/60% RH, or after 6 months at 40°C/75% RH

Bioequivalence of perindopril arginine and perindopril-tert- butylamine

A change in the salification of a compound necessarily leads to a change in the quantity of substance per dosage unit, because of the change in molecular weight. The molecular weight of perindopril arginine (542.680) is about 23% greater than that of perindopril-tert-butylamine (441.615), and the dosage changes accordingly. To achieve equimolar quantities and equivalent plasma concentrations of perindoprilat, a dosage of perindopril arginine 5 mg replaces perindopril-tert-butylamine 4 mg, and a dosage of perindopril arginine 10 mg replaces perindopril-tert-butylamine 8 mg.
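
The dose conversion follows directly from the ratio of the two molecular weights quoted above; a quick sketch (the function name is ours, for illustration):

```python
MW_ARGININE_SALT = 542.680  # perindopril arginine (value from the text)
MW_TBA_SALT = 441.615       # perindopril tert-butylamine (value from the text)

def equimolar_dose(tba_dose_mg: float) -> float:
    """Dose of the arginine salt delivering the same moles of perindopril
    as a given dose of the tert-butylamine salt."""
    return tba_dose_mg * MW_ARGININE_SALT / MW_TBA_SALT

for d in (4, 8):
    print(d, "mg ->", round(equimolar_dose(d), 2), "mg")
# 4 mg -> 4.92 mg and 8 mg -> 9.83 mg, i.e. the marketed 5 mg and 10 mg strengths
```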

The pharmacokinetic properties of the two salts have been compared after single oral and intravenous administration to Wistar rats and repeated administration to beagle dogs for 7 days. Perindopril, perindoprilat, and their respective glucuronides were assayed using liquid chromatography with tandem mass spectrometry detection (LC-MS-MS). These preclinical experiments found perindopril arginine and perindopril-tert-butylamine to be comparable in terms of their absorption, distribution, metabolism, and elimination characteristics (Perindopril Arginine, Common Technical Document, Module 2). Their bioavailability was also found to be similar.

The bioequivalence of the two perindopril salts in humans has been examined in an open-label, randomized, two-period, crossover pharmacokinetic study involving 36 healthy male volunteers (age 19-52 years, mean 31.3 ± 9.6 years; body mass index 20.1-27.4 kg/m^sup 2^, mean 23.3 ± 1.7 kg/m^sup 2^). These subjects were randomly assigned to two groups. Each group received a single oral dose of immediate-release perindopril as either the arginine salt (10 mg) or the tert-butylamine salt (2 x 4 mg). After an 8-day washout period, each group received a single oral dose of the salt that they had not received in the first test period. They were monitored for pharmacokinetic parameters (maximum plasma concentration (C^sub max^), time at maximum plasma concentration (t^sub max^), area under the plasma concentration-time curve (AUC^sub t^), and half-life (t^sub 1/2^)), cardiovascular parameters (blood pressure (BP) and heart rate), and safety, before and for 120 h after administration. A follow-up examination was performed 3-5 days after the last blood sample had been taken, including physical examination, electrocardiogram (ECG), BP, heart rate, and laboratory parameters (Perindopril Arginine, Common Technical Document, Module 3).

The bioequivalence of the two salts was demonstrated with perindopril arginine/perindopril-tert-butylamine AUC^sub t^ ratios of 96.3% (95% confidence interval (CI), 92-100%) for perindopril and 100% (95% CI, 92-108%) for perindoprilat (Table 4). These CIs are well within the recommended range of 80-125%. Similar ratios were found for the other pharmacokinetic parameters. The two salts had the same antihypertensive efficacy, with no clinically significant changes in BP, laboratory parameters, vital signs, or ECG parameters.
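
The acceptance rule applied here can be sketched as a simple check that the whole confidence interval for the test/reference ratio lies within the 80-125% limits (a minimal illustration; the function name is ours):

```python
def bioequivalent(ci_low_pct: float, ci_high_pct: float,
                  lower: float = 80.0, upper: float = 125.0) -> bool:
    """Bioequivalence is accepted when the entire CI for the
    test/reference ratio falls within the acceptance limits."""
    return lower <= ci_low_pct and ci_high_pct <= upper

# CIs reported in the text:
print(bioequivalent(92, 100))  # perindopril AUC ratio -> True
print(bioequivalent(92, 108))  # perindoprilat AUC ratio -> True
```

Note that it is the confidence interval, not the point estimate, that must sit inside the limits: a ratio of 100% with a CI of 78-122% would fail the test.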

Acceptability of perindopril arginine

The bioequivalence trial described above also recorded the acceptability profile of the perindopril arginine salt. Treatment-related adverse events were three times less frequent with perindopril arginine (5.56% of participants) than with perindopril-tert-butylamine (16.67%). The most common adverse event in this study was headache (Table 5).

Packaging of perindopril arginine

Consumer preference for the simplified HDPE canister described above (Figure 2) versus the blister pack has been evaluated in a sample of 120 patients. The sample included men and women aged 60-70 years (50%) and consumers aged over 70 years (50%), accessed through three large pharmacies in Australia. All were receiving one or more chronic prescription medications. Of the sample, 69% expressed an overall preference for the canister (perindopril arginine 5 and 10 mg) compared with 31% (p

Table 4. Ratio of pharmacokinetic parameters for perindopril and perindoprilat (baseline corrected) following a single oral dose of perindopril arginine (10 mg) or perindopril-tert-butylamine (8 mg) to 36 healthy male volunteers, including confidence intervals (CI). AUC^sub t^, area under the plasma concentration-time curve

Table 5. Emergent adverse events with perindopril arginine 10 mg compared with perindopril-tert-butylamine 8 mg: 2/36 (5.56%) participants experienced two treatment-related emergent adverse events during the perindopril arginine treatment period; 6/36 (16.67%) participants experienced nine treatment-related emergent adverse events during the perindopril-tert-butylamine treatment period. NOS, not otherwise specified; NAE, number of adverse events; n, number of patients reporting adverse event

Discussion

The main features of the new and old salts of perindopril are compared in Table 6. The new perindopril arginine salt has been demonstrated to be safe and to have similar antihypertensive efficacy to the previous salt, provided the dosage is revised to allow for the difference in molecular weight. The rates of treatment-related adverse events with the two perindopril salts indicate that the acceptability profile of the new perindopril arginine salt is at least as good as that of perindopril-tert-butylamine. The rates of emergent adverse events (Table 5) are in line with those observed in the large-scale trials with perindopril1,2. The low rate of cough is in agreement with previous reports of lower rates with perindopril than with other ACE inhibitors18.

The significance of drug stability studies has increased considerably with the globalization of the pharmaceutical market, in which widely used medicinal products, such as perindopril, are manufactured in one or more countries and then distributed worldwide and stored under extremely varied climatic conditions. The World Health Organization (WHO) has recognized the importance of this issue by recommending that stability studies should be carried out for drugs intended for 'the global market' using the conditions for climatic zone IV30.

Figure 3. Preference for packaging: canister versus blister pack. Percentage of subjects (n = 120) nominating the packaging they feel best suits each dimension

Table 6. Comparison of the main features of two perindopril salts, perindopril arginine and perindopril-tert-butylamine

In line with these recommendations, the increased stability and shelf life of the new perindopril arginine salt will permit the use of simplified packaging in the form of an HDPE canister all over the world, even in climatic zone IV. This will considerably facilitate production and distribution. Patients' approval of the packaging is a further outcome of the change, and may be expected to influence compliance, especially in these often elderly patients. Compliance with perindopril treatment is already very good2 and could increase further with optimized packaging and improved acceptability.

Conclusion

The new arginine salt of the ACE inhibitor perindopril improves the stability of the product and increases its shelf life. Pharmacokinetic studies indicate that perindopril arginine can be expected to have at least equivalent antihypertensive efficacy to the previous salt, with a revised dosage to allow for the difference in molecular weight of the two salts: perindopril arginine 5-10 mg replaces perindopril-tert-butylamine 4-8 mg. Therefore, the benefits demonstrated in large-scale trials performed with perindopril-tert-butylamine also apply to perindopril arginine. As a consequence, in the countries where it has been registered, perindopril arginine has been granted the same indications, i.e., hypertension, heart failure, and stable coronary artery disease. The improved stability of the new salt means that the same simplified packaging can be used in all climatic zones. Patients' preference for the new canister can be expected to have a further positive impact on compliance.

Acknowledgments

E. Telejko has received speaker fees and honoraria, as well as editorial assistance in the preparation of this article, from Servier.

References

1. PROGRESS Collaborative Group. Randomised trial of a perindopril-based blood-pressure-lowering regimen among 6,105 individuals with previous stroke or transient ischaemic attack. Lancet 2001;358:1033-41

2. Fox KM. Efficacy of perindopril in reduction of cardiovascular events among patients with stable coronary artery disease: randomised, double-blind, placebo-controlled, multicentre trial (the EUROPA study). Lancet 2003;362:782-8

3. Ferrari R. Effects of angiotensin-converting enzyme inhibition with perindopril on left ventricular remodeling and clinical outcome: results of the randomized Perindopril and Remodeling in Elderly with Acute Myocardial Infarction (PREAMI) Study. Arch Intern Med 2006; 166:659-66

4. Dahlof B, Sever PS, Poulter NR, et al. Prevention of cardiovascular events with an antihypertensive regimen of amlodipine adding perindopril as required versus atenolol adding bendroflumethiazide as required, in the Anglo-Scandinavian Cardiac Outcomes Trial-Blood Pressure Lowering Arm (ASCOT-BPLA): a multicentre randomised controlled trial. Lancet 2005;366:895-906

5. Cleland JG, Tendera M, Adamus J, et al. The perindopril in elderly people with chronic heart failure (PEP-CHF) study. Eur Heart J 2006;27:2338-45

6. Bertrand ME. Provision of cardiovascular protection by ACE inhibitors: a review of recent trials. Curr Med Res Opin 2004;20:1559-69

7. Morishita T, Tsutsui M, Shimokawa H, et al. Long-term treatment with perindopril ameliorates dobutamine-induced myocardial ischemia in patients with coronary artery disease. Jpn J Pharmacol 2002;88:100-7

8. Su JB, Barbe F, Crozatier B, et al. Increased bradykinin levels accompany the hemodynamic response to acute inhibition of angiotensin-converting enzyme in dogs with heart failure. J Cardiovasc Pharmacol 1999;34:700-10

9. Devissaguet JP, Ammoury N, Devissaguet M, Perret L. Pharmacokinetics of perindopril and its metabolites in healthy volunteers. Fundam Clin Pharmacol 1990;4:175-89

10. Ferrari R. Angiotensin-converting enzyme inhibition in cardiovascular disease: evidence with perindopril. Expert Rev Cardiovasc Ther 2005;3:15-29

11. Physicians’ Desk Reference, 55th edn. Montvale, NJ: Medical Economics Company, 2001

12. Julius S, Cohn JN, Neutel J, et al. Antihypertensive utility of perindopril in a large, general practice-based clinical trial. J CHn Hypertens (Greenwich) 2004;6:10-17

13. Poggi L, Renucci JF, Denolle T. Treatment of essential hypertension in general practice: an open-label study of 47,351 French hypertensive patients treated for one year with perindopril. Can J Cardiol 1994;10(Suppl D):21-24D

14. Lechat P, Garnham SP, Desche P, Bounhoure JP. Efficacy and acceptability of perindopril in mild to moderate chronic congestive heart failure. Am Heart J 1993;126:798-806

15. Comparison between perindopril and nifedipine in hypertensive and normotensive diabetic patients with microalbuminuria. Melbourne Diabetic Nephropathy Study Group. BMJ 1991;302:210-16

16. Ceconi C, Fox K, Remme WJ, et al. ACE inhibition with perindopril and endothelial dysfunction. Results of a substudy of the EUROPA study: PERTINENT. Cardiovasc Res 2006;doi:10.1016/ j.cardiores.2006.10.021

17. National Institute for Health and Clinical Excellence (NICE) – British Hypertension Society (BHS). Hypertension: management of hypertension in adults in primary care. Available from www. nice.org.uk/CG034guidance. [Accessed 12 July 2006]

18. Yoshinaga K, Saruta T, Abe K, et al. Clinical evaluation of monotherapy with perindopril, an ACE inhibitor, in the treatment of essential hypertension: double-blind parallel comparison with enalapril. J Clin Ther Medicines 1997; 13:4259-97

19. Telejko E. Stability of tablets and efficiency and safety of drug use in various situations [in Polish]. Farmacja Polska 2006;62:381-88

20. ICH Steering Committee. Quality guidelines. Available from www.ich.org/cache/compo/363-272-l.html. [Accessed 9 May 2006]

21. Dietz R, Feilner K, Gerst F, Grimm W. Drug stability testing. Classification of countries according to climatic zone. Drugs made in Germany 1993;36:99-103

22. Al Omari MM, Abdelah MK, Badwan AA, Jaber AM. Effect of the drug-matrix on the stability of enalapril maleate in tablet formulations. J Pharm Biomed Anal 2001;25:893-902

23. al Zein H, Riad LE, Abd-Elbary A. Effect of packaging and storage on the stability of carbamazepine tablets. Drug Dev Ind Pharm 1999;25:223-7

24. Reynolds JM, Rogers DH. Adjusting dissolution specifications for the variability induced by storage conditions. J Biopharm Stat 2000; 10:425-31

25. Stanisz B. Kinetics of degradation of quinapril hydrochloride in tablets. Pharmazie 2003;58:249-51

26. Stanisz B. Kinetics of degradation of enalapril maleate in dosage forms. Acta Pol Pharm 2004;61:415-18

27. Risha PG, Vervaet C, Vergote G, et al. Drug formulations intended for the global market should be tested for stability under tropical climatic conditions. Eur J Clin Pharmacol 2003;59:135-41

28. Wu Y, Fassihi R. Stability of metronidazole, tetracycline HCl and famotidine alone and in combination. Int J Pharm 2005;290:1-13

29. Mohamad H, Aiache JM, Renoux R, et al. Study on the biopharmaceutical stability of medicines IV. Application to tetracycline hydrochloride capsules. In vivo study. STP Pharm 1987;3:407-11

30. Nazerali H, Muchemwa T, Hogerzeil HV. Stability of Essential Medicines in Tropical Climates: Zimbabwe. WHO/DAP/94.16. Geneva, Switzerland: WHO 1996


Paper CMRO-3839_6, Accepted for publication: 26 February 2007

Published Online: 29 March 2007

doi:10.1185/030079907X182158

Elwira Telejko

Studium Ksztalcenia Podyplomowego, Bialystok, Poland

Address for correspondence: Dr Elwira Telejko, Department of Postgraduate Education, Faculty of Pharmacy, Medical University of Bialystok, Kilinskiego Street 1, 15-222 Bialystok, Poland. Tel./ Fax: +48 85 748 55 02; [email protected]

Copyright Librapharm May 2007


Cost-Effectiveness of Salmeterol Xinafoate/Fluticasone Propionate Combination Inhaler in Chronic Asthma

By Doull, Iolo; Price, David; Thomas, Mike; Hawkins, Neil; et al.

Key words: Asthma – Cost-effectiveness – Fluticasone/salmeterol – Quality of life

ABSTRACT

Objective: To determine where in the treatment steps recommended by the British Thoracic Society and Scottish Intercollegiate Guidelines Network (BTS/SIGN) Asthma Guideline it is cost-effective to use salmeterol xinafoate/fluticasone propionate combination inhaler (SFC) (Seretide*) compared with other inhaled corticosteroid (ICS) containing regimens (with and without a long acting beta-2 agonist (LABA)) for chronic asthma in adults and children.

Research design and methods: Meta-analyses of percentage symptom- free days (%SFD) were used within a cost-effectiveness model. Time spent in two asthma control health states, ‘symptom-free’ and ‘with- symptoms’ was used as the measure of differential treatment effectiveness. SFC was compared with varying doses of fluticasone propionate (FP) and beclometasone dipropionate (BDP) with or without a separate salmeterol inhaler, and with the budesonide/formoterol combination inhaler (BUD/FORM) (Symbicort[dagger]). Drug costs, nondrug costs and quality adjusted life years (QALY) were incorporated into the analyses. Results are presented as cost per QALY ratios and uncertainty explored using probabilistic sensitivity analysis.

Results: Compared with an increased dose of FP in adults, SFC either 'dominates' (i.e. is cheaper and more effective than) FP or the cost per QALY is Pounds 6852. The cost per QALYs estimated in sensitivity analyses using BDP costs range from Pounds 5679 to Pounds 15997. For children the cost per QALY for SFC 50 Evohaler* compared with an increased dose of FP is Pounds 15739. SFC is similarly clinically effective in improving %SFDs compared with FP plus salmeterol delivered in separate inhalers (mean differences for each dose comparison of -3.9 (low) (95% confidence interval (CI): -12.96; 5.16); 4.10 (medium) (95% CI: -3.01; 11.21); -0.4 (high) (95% CI: -8.88; 8.08)) and BUD/FORM (mean difference of 0.40 (95% CI: -3.69; 4.49)) in adults, and a cheaper SFC option is available at all doses (annual cost savings range from Pounds 18 to Pounds 427 per patient). SFC was similarly effective compared with FP plus salmeterol in separate inhalers in children under 12 and also resulted in annual cost savings of between Pounds 47 and Pounds 77. A number of other comparisons were also made and the results are available as electronic supplementary data.

Conclusions: This is the first analysis to estimate the cost-effectiveness of SFC in chronic asthma compared with multiple comparators and based on a systematic identification of relevant trials and data on %SFDs. The findings suggest that for adults and children uncontrolled on BDP 400 µg/day or equivalent, it is a cost-effective option to switch to SFC (at an equivalent ICS dose) compared with increasing the dose of ICS. For adults and children aged 12 years and over who have passed this point and are uncontrolled on BDP 800 µg/day or equivalent, switching to SFC remains a cost-effective approach. Where an adult or child requires an ICS and a LABA to be co-prescribed, SFC is a cost-effective option compared with FP or BDP plus salmeterol delivered in separate inhalers. In adults who require combination therapy, SFC is a cost-effective option compared with BUD/FORM.

Introduction

Asthma is characterised by inflammation of the airways and by bronchoconstriction causing wheezing and shortness of breath. It is one of the most common chronic diseases, affecting an estimated 300 million adults and children throughout the world1. In England and Wales it is estimated that 3.7 million adults and 1 million children (under the age of 16 years) have asthma, causing considerable impact on health care resources2. Overall, health care costs for asthma patients have been found to be twice as high as for patients without asthma3. Survey and prospective observational data estimate that poorly controlled asthma patients cost at least three times as much as well-controlled patients4-7.

The British Thoracic Society and Scottish Intercollegiate Guidelines Network (BTS/SIGN) Guideline8 describes the treatment of asthma as a series of steps chosen dependent on disease severity and response to current treatment. In adults and children aged 5 and over, step 1 is the intermittent use of an inhaled short-acting beta-2 agonist (SABA) as necessary, step 2 is the addition of an inhaled corticosteroid (ICS) at 200-800 µg/day beclometasone dipropionate (BDP) or equivalent (200-400 µg/day for children 5-12), and step 3 is the addition of an inhaled long-acting beta-2 agonist (LABA).

A combination inhaler containing salmeterol xinafoate (salmeterol), a LABA, and fluticasone propionate (FP), a corticosteroid, (SFC) (Seretide*) is licensed in the UK for the regular treatment of asthma where use of a combination product is appropriate, i.e. in patients not adequately controlled with ICSs and ‘as needed’ inhaled SABA or in those already adequately controlled on both ICS and LABA administered as separate devices.

Two important questions regarding the use of LABAs are left unanswered within the BTS/SIGN Guideline. Specifically, at what dose of ICS should a LABA be added to therapy and should it be added as a separate inhaler device or as a component of a combination device? This paper addresses these questions through a series of cost- effectiveness analyses for both adults and adolescents (aged 12 and over) and children (under 12). The analyses presented here differ from previously published cost-effectiveness analyses developed by Paltiel et al.9 and Price and Briggs10, as they are based on a systematic identification of a range of clinical trial data to address the cost-effectiveness of treatment options within a stepwise treatment pathway.

Methods

The question of when to add a LABA to ICS therapy was addressed by a series of cost-effectiveness analyses of SFC introduced at different points in the dose escalation of inhaled steroids. Four comparisons were evaluated using dose bands and ICS dose equivalences in line with current guidelines1,8 and reviews of the literature11 (see Table 1):

1. For patients failing to achieve adequate disease control on their current dose of ICS alone, SFC is compared with the current dose of FP. As patients might improve spontaneously due to the episodic nature of asthma, or adhere better to background therapy, it is important to compare SFC with optimising current treatment.

2. For patients failing to achieve adequate disease control on their current dose of ICS alone, SFC is compared with an alternative treatment strategy of increasing the dose of FP.

3. For those patients requiring a LABA, SFC is compared with FP and salmeterol delivered in separate inhalers.

4. For those patients requiring a combination inhaler, SFC is compared with a combination inhaler containing budesonide and formoterol (BUD/FORM) (Symbicort, AstraZeneca).

Model overview

To estimate cost-effectiveness, a model with two asthma control health states was developed, where patients were considered to be in either a 'symptom-free' or 'with-symptoms' state. Differential treatment effectiveness was incorporated based on a series of meta-analyses of the percentage of symptom-free days (%SFD) endpoint following a systematic review of clinical trials. This endpoint was selected as it directly reflects the patient's experience of the condition and was widely reported. It was not possible to abstract comparable information on exacerbations across trials; therefore a distinct exacerbation state was not included.

Table 1. Comparisons for cost-effectiveness analyses of SFC introduced at different points in the dose escalation of inhaled steroids (ICS)

A number of key assumptions were made in building the model. As there is no curative treatment, ‘the aims of pharmacological management of asthma are the control of symptoms…, prevention of exacerbations and the achievement of best possible pulmonary function, with minimal side-effects’8. It was also assumed that treatments have no differential effects on mortality or toxicity12 and that the trial-based estimates are applicable to wider patient populations, such that the differential proportion of time patients spend in the ‘symptom-free’ state over their treatment lifetime would be the same as that observed during the trial period. There are two SFC devices, Accuhaler (GlaxoSmithKline) and Evohaler (GlaxoSmithKline); both were included in the meta-analysis, on the assumption of no difference in effectiveness between them13,14.

The estimated mean difference in %SFD between the treatments from the meta-analysis was taken to represent the difference in the proportion of time spent in the ‘symptom-free’ versus the ‘with-symptoms’ state in the model. The definitions of ‘symptom-free’ used to estimate %SFD were therefore assumed to be consistent with the ‘symptom-free’ state defined using the health states in the Gaining Optimal Asthma controL (GOAL) trial15.

The difference between treatment arms in ‘Other health service’ costs (i.e., unscheduled health service costs other than the study therapies themselves) was estimated by multiplying the mean difference in %SFD by the difference between the two model states (‘symptom-free’ and ‘with-symptoms’) in annual ‘Other health service’ costs. The incremental QALYs were estimated in the same way, multiplying the mean difference in %SFD by the difference in utility between the two states. The GOAL trial15 was used to estimate costs and utilities. This 1-year, stratified, randomised, double-blind, parallel-group study of 3416 patients with uncontrolled asthma compared SFC with three doses of FP alone (200, 500 and 1000 μg/day) in achieving two guideline-based measures of asthma control: totally- and well-controlled. These two measures are defined in electronic supplementary data (see Appendix). The modelled time horizon was nominally 1 year, corresponding to the duration of the GOAL trial. The 1-year duration made the GOAL trial particularly appropriate for this task, as it ensured that cost and utility estimates were not affected by any seasonal variation in the observed data. Within the model the proportion of time spent in the two model states was assumed to be constant over time, so costs and benefits accrue in a constant proportion. The estimated cost-effectiveness ratio is therefore independent of the model time horizon, and discounting was not required. All costs are reported as 2006 Great Britain pounds (£). The R 2.2.1 statistical package16 was used initially to perform the cost-effectiveness analysis and, as a validity test, the analysis was fully replicated in a model built in Microsoft Excel.
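The model arithmetic described above can be sketched as follows. This is an illustrative Python sketch, not the authors' R code: the state differences (£78.26 in annual ‘Other’ costs, 0.12 in utility) echo the values reported in Table 4, while the %SFD difference and drug acquisition costs are hypothetical placeholders.

```python
def incremental_results(delta_sfd, cost_diff_state, utility_diff_state,
                        drug_cost_a, drug_cost_b):
    """Annual incremental cost and QALYs of treatment A versus B.

    delta_sfd          -- mean difference (A - B) in the proportion of
                          symptom-free days, from the meta-analysis
    cost_diff_state    -- annual 'Other health service' cost difference,
                          'with-symptoms' minus 'symptom-free'
    utility_diff_state -- utility difference, 'symptom-free' minus 'with-symptoms'
    drug_cost_a/b      -- annual treatment acquisition costs
    """
    # Extra symptom-free time avoids the higher 'with-symptoms' costs
    other_cost_saving = delta_sfd * cost_diff_state
    inc_cost = (drug_cost_a - drug_cost_b) - other_cost_saving
    inc_qaly = delta_sfd * utility_diff_state
    return inc_cost, inc_qaly

# State differences echo Table 4 (GBP 78.26, 0.12); other inputs are made up.
inc_cost, inc_qaly = incremental_results(
    delta_sfd=0.05, cost_diff_state=78.26, utility_diff_state=0.12,
    drug_cost_a=420.0, drug_cost_b=380.0)
icer = inc_cost / inc_qaly  # pounds per QALY gained
```

Because both incremental costs and QALYs scale linearly with the time horizon, the ratio is horizon-independent, which is why no discounting was needed.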

Meta-analysis of clinical data

An extended literature search was undertaken on 22 February 2006 with the aim of identifying all relevant studies addressing the research questions. The databases searched and the search strategy used can be found as electronic supplementary data (see Appendix). A total of 740 records were retrieved from this process. An additional search was conducted in the GSK clinical trial register17 and other internal records to identify any trials for which no publications currently exist and to ensure identification of all relevant publications for each trial.

Records were hand searched to fulfil the following inclusion criteria for the meta-analysis:

* Randomised controlled trials (RCTs)

and

* Patients aged 4 and over with chronic asthma (in line with SFC licence)

* %SFD data with standard deviation, standard error or percentiles reported

* Comparisons of SFC (in line with doses listed in Table 1) with:

- FP alone (same and increased dose)

- FP in combination with salmeterol, or

- BUD/FORM

* Studies published in English language

In addition to estimating the cost-effectiveness of SFC against FP, estimates were also generated for BDP (in ICS-alone and separate-inhaler comparisons) and, where possible, budesonide (within BUD/FORM) comparisons, assuming equal efficacy when using BDP or budesonide at double the FP dose. As BDP and FP are not licensed above 400 μg/day for children, only a low-dose comparison was made for this age group. Analyses against BUD/FORM were conducted using 100/6, 200/6 and 400/12 inhalers. The comparison of SFC with BUD/FORM was made on a fixed and equivalent dose basis, to allow comparison of efficacy rather than dosing strategy (Table 2). Therefore, trials comparing SFC with BUD/FORM delivered using different dosing strategies were excluded18-20.

Table 2. Included studies in the meta-analysis of trial data

The meta-analysis of %SFD was conducted using the R 2.2.1 meta package. Both random and fixed treatment effect estimates were obtained.
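The pooling step is a standard inverse-variance calculation of a mean difference. Below is a minimal sketch in Python (the study itself used the R ‘meta’ package; the two trial values are hypothetical):

```python
import math

def fixed_effect(diffs, ses):
    """Inverse-variance fixed-effect pooling of mean differences (e.g. %SFD)."""
    weights = [1.0 / se ** 2 for se in ses]  # precision weights
    pooled = sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

# Two hypothetical trials reporting %SFD differences of 4 and 6 points
pooled, se, ci = fixed_effect([4.0, 6.0], [2.0, 2.0])
```

A random-effects estimate would additionally inflate each trial's variance by a between-trial heterogeneity component (e.g. DerSimonian-Laird) before weighting.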

Treatment acquisition costs

Annual costs for SFC were based on the unit prices for each inhaler device (Accuhaler and Evohaler). For FP and BDP, costs were input as average prices, calculated by weighting all licensed preparations capable of delivering the appropriate dose. For all steroids a reasonable number of puffs per day was assumed (two puffs twice daily (b.d.) for aerosols, one puff b.d. for dry powder inhalers). This was not possible for BDP 2000 μg/day, so the cost of BDP 1000 μg/day was doubled. For children, licensed non-breath-actuated metered-dose inhalers (MDIs) were assumed to be administered via a spacer device, as recommended21,22. All calculations were based on July 2006 unit prices obtained from three sections of the online Drug Tariff23, with Part VIII having priority over the ‘generics’ list, which had priority over the ‘prescribing costs’ list. Prices unavailable from the Drug Tariff were taken from eMIMS24.
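The acquisition-cost rule described above (a unit price scaled by an assumed number of puffs per day) can be sketched as follows; the price and pack size are hypothetical, not Drug Tariff values.

```python
def annual_cost(pack_price, doses_per_pack, device):
    """Annual acquisition cost under the stated puffs-per-day assumptions:
    two puffs b.d. for aerosols, one puff b.d. for dry powder inhalers."""
    puffs_per_day = {"aerosol": 4, "dpi": 2}[device]
    cost_per_puff = pack_price / doses_per_pack
    return cost_per_puff * puffs_per_day * 365

# e.g. a hypothetical 120-dose dry powder inhaler priced at GBP 18
cost = annual_cost(18.0, 120, "dpi")
```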

Estimation of utilities and other health service costs

The mean utilities and annual non-drug costs associated with the ‘symptom-free’ and ‘with-symptoms’ states were estimated using regression analyses of individual patient data from the GOAL15 clinical trial relating control state to non-drug costs and utilities. Compared with other potential data sources, the GOAL trial could provide the non-drug cost and utility data best suited to the model design. In the economic analysis of the GOAL study25, patients’ asthma control during each week of the study was described as ‘totally-controlled’, ‘well-controlled’, ‘not well-controlled’ or ‘exacerbation’. For this analysis of the costs and utilities associated with the model states, patients in the ‘totally-controlled’ state, as defined in the GOAL study, were regarded as being in the ‘symptom-free’ state, and patients in the other states were grouped together as the ‘with-symptoms’ state. Costs and utilities estimated for the ‘with-symptoms’ state therefore represent a patient-week weighted average across the ‘well-controlled’, ‘not well-controlled’ and ‘exacerbation’ states from the GOAL analysis.

Utility data were not collected directly in the GOAL study. A mapping algorithm, derived from an external study in which both EQ-5D utility scores and Asthma Quality of Life Questionnaire (AQLQ) data were collected, was used to calculate a utility score for patients in the GOAL study from their AQLQ scores25. Similar mapping exercises have been carried out, although insufficient detail was available to allow alternative utility estimates to be used in the model26. It was assumed that the utility scores mapped from the AQLQ are representative of those that would have been obtained directly from patients in the two asthma control states in GOAL using the EQ-5D.

Unscheduled non-drug ‘Other health service’ costs were estimated from individual patient data on resource use collected in the GOAL study under three main categories:

* Secondary care visits: visits to emergency departments, length of stay (number of days) in intensive care, inpatient days, and outpatient visits

* Primary care visits: day and night general practitioner home visits, surgery visits, and telephone calls to primary care clinic

* Rescue medication use (per occasion cost)

Unit costs relating to each of these resources were taken from published sources for England and Wales, updating those used in the original GOAL analysis25 (Table 3).

Linear regression analyses were undertaken with weekly ‘Other health service’ costs and utilities as the dependent variables and a dummy variable for the ‘symptom-free’ versus ‘with-symptoms’ asthma control states as the independent variable. A UK indicator variable was also included in the utility regression to adjust for differences in utility between the UK and other countries found in the GOAL analysis25; the corresponding parameter was not statistically significant in the cost regression and was therefore not included there. Eicker-White robust standard errors were estimated, which take account of the repeated (weekly) measures at the patient level. The regressions were conducted using the reg procedure with the cluster option in Stata.
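The regression design can be illustrated with synthetic data. The sketch below (Python/NumPy rather than Stata) fits the state dummy by OLS and computes cluster-robust sandwich standard errors over patients, analogous to Stata's cluster option; all numbers are simulated, not GOAL data.

```python
import numpy as np

def ols_cluster(y, X, clusters):
    """OLS with cluster-robust (sandwich) standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):  # sum score outer-products by patient
        s = X[clusters == g].T @ resid[clusters == g]
        meat += np.outer(s, s)
    cov = bread @ meat @ bread
    return beta, np.sqrt(np.diag(cov))

# Simulated weekly costs: 50 patients x 10 weeks, true state effect = 1.5
rng = np.random.default_rng(0)
patient = np.repeat(np.arange(50), 10)
with_symptoms = rng.integers(0, 2, size=patient.size)  # state dummy
cost = 1.0 + 1.5 * with_symptoms + rng.normal(0, 0.5, patient.size)
X = np.column_stack([np.ones(patient.size), with_symptoms])
beta, se = ols_cluster(cost, X, patient)
# beta[1] estimates the extra weekly cost of the 'with-symptoms' state
```

Clustering by patient matters because each patient contributes many weekly observations, which are not independent.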

Model evaluation, decision rules and uncertainty

Standard cost-effectiveness decision rules were employed27. Uncertainty in the cost-effectiveness estimates was analysed probabilistically using Monte Carlo simulation. The mean treatment effects were sampled from a normal distribution; the ‘Other health service’ cost and utility regression coefficients were sampled from a multivariate normal distribution. The results presented are based on the mean values from the distribution of ‘Other’ costs and QALY results. Treatment costs were regarded as fixed. The uncertainty surrounding each of the cost-effectiveness estimates is expressed in terms of the probability of SFC being more cost-effective than its comparator against cost-effectiveness thresholds28. Three incremental cost-effectiveness ratio (ICER) thresholds were used: £0 (equivalent to presenting the probability that SFC is less costly), £20 000 and £30 000 per QALY gained. The latter two thresholds are indicated by the National Institute for Health and Clinical Excellence (NICE)29. Above £20 000/QALY, factors such as decision uncertainty, innovation and the nature of the condition are considered alongside cost-effectiveness; above an ICER of £30 000/QALY the case for these factors needs to be increasingly strong.
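A minimal sketch of this Monte Carlo procedure follows: sample the treatment effect, recompute incremental costs and QALYs, and count the proportion of draws in which SFC has positive net monetary benefit at each threshold. Only the state differences echo Table 4; every other input is a hypothetical placeholder.

```python
import random

random.seed(1)
THRESHOLDS = [0, 20_000, 30_000]  # GBP per QALY (the NICE range cited)
d_mean, d_se = 0.05, 0.02         # %SFD difference (as a proportion); hypothetical
cost_state_diff, utility_state_diff = 78.26, 0.12  # echo Table 4
drug_cost_diff = 40.0             # treatment costs are regarded as fixed

N = 10_000
wins = {t: 0 for t in THRESHOLDS}
for _ in range(N):
    d = random.gauss(d_mean, d_se)  # sample the treatment effect
    inc_cost = drug_cost_diff - d * cost_state_diff
    inc_qaly = d * utility_state_diff
    for t in THRESHOLDS:
        if t * inc_qaly - inc_cost > 0:  # positive net monetary benefit
            wins[t] += 1

prob_ce = {t: wins[t] / N for t in THRESHOLDS}  # acceptability-curve points
```

At a threshold of £0 the statistic reduces to the probability that SFC is the cheaper option, exactly as described in the text.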

Results

State-specific utilities and ‘Other health service’ costs

The results of the regression analyses are shown in Table 4. The ‘with-symptoms’ state is associated with £78.26 higher annual health service costs and 0.12 lower utility per patient compared with the ‘symptom-free’ state.

Table 3. Unit costs of resources

Table 4. ‘Other health service’ costs and utilities by model state (£)

Meta-analysis of trial data

Fourteen studies retrieved by the search fulfilled the criteria for inclusion in the meta-analysis. A detailed description of these studies is presented in Table 2 (information on the trials included for SFC versus the same dose of FP is available as electronic supplementary data (see Appendix)). A large number of trials were excluded because they did not include the %SFD endpoint or the information required for the meta-analysis, or did not study the treatment comparisons of interest. Fixed-effects estimates are presented and were used in the cost-effectiveness models. Using the criteria described in the Cochrane Handbook30, there was no evidence of significant heterogeneity: in analyses where three or more trials were combined, I² values were below 50% and the χ² tests for heterogeneity did not reach significance at the p = 0.1 level. Table 5 shows that in adults SFC results in a statistically significantly higher %SFD compared with an increased dose of FP. In paediatric trials the differences are numerically higher with SFC (see Table 6) but not statistically significant at the 5% level. The differences between SFC and either FP plus salmeterol delivered in separate inhalers or BUD/FORM are small, variable and not statistically significant, suggesting that these options are clinically equivalent.

Table 5. Cost-effectiveness results for SFC in adults and children aged 12 and over

Table 6. Cost-effectiveness results for SFC in children under 12

Cost-effectiveness of SFC vs. increased dose FP alone in patients aged 12 and over

Table 5 shows the cost-effectiveness of switching uncontrolled patients to SFC compared with the alternative strategy of increasing the dose of FP. Because of the higher cost and lower effectiveness of the increased doses of FP, SFC is cost-effective at both dose options: it is either cheaper and more effective than FP alone (referred to in health economics as ‘dominant’) or has an ICER of £6852, well below a £30 000 threshold. Results for SFC versus the same dose of FP and other ICS doses can be found as electronic supplementary data (see Appendix).
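The decision rules applied here (‘dominant’ versus comparing an ICER with a threshold) can be sketched as a small classifier. This illustrates the standard rules27 in Python; it is not code from the study.

```python
def assess(inc_cost, inc_qaly, threshold=30_000):
    """Classify treatment A vs B from incremental cost (GBP) and QALYs."""
    if inc_cost <= 0 and inc_qaly >= 0:
        return "dominant"       # cheaper and at least as effective
    if inc_cost > 0 and inc_qaly <= 0:
        return "dominated"      # dearer and no more effective
    icer = inc_cost / inc_qaly  # GBP per QALY gained (or forgone)
    if inc_qaly > 0:            # dearer but more effective
        return "cost-effective" if icer <= threshold else "not cost-effective"
    # cheaper but less effective: acceptable only if the savings per QALY
    # forgone exceed the threshold
    return "cost-effective" if icer >= threshold else "not cost-effective"
```

Note that when the comparator is the (marginally) more effective and more costly option, the ICER is reported for the comparator instead, which is why the ratios in Table 5 "switch around".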

Cost-effectiveness of SFC vs. separate FP plus salmeterol in patients aged 12 and over

The cost-effectiveness of SFC compared with the use of separate FP plus salmeterol is shown in Table 5. SFC is generally cheaper, with all six cost differences representing a saving (ranging from £40 to £303 per year); however, even though the differences in clinical effectiveness are small and uncertain, they were treated as ‘real’ differences in the analysis. As a result, the direction of the QALY differences varies between options, which determines whether the ICER relates to SFC or to its comparator. Since the ICER calculation is highly sensitive to the magnitude of the denominator, two of the comparisons (low and high dose) result in an ICER for FP plus salmeterol in separate inhalers, as this is the (marginally) more effective and more costly therapy. However, the resultant ICERs are appreciably higher than a £30 000 cost-effectiveness threshold, with the exception of the comparison with the Accuhaler at the low dose. At the medium dose SFC is cheaper (annual saving of £175 per patient) and more clinically effective.

Cost-effectiveness of SFC vs. BUD/FORM in patients aged 12 and over

For the comparison of SFC and BUD/FORM, only one analysis could be informed by appropriate trial evidence (see Table 5): medium-dose SFC compared with BUD/FORM 200/6, for which SFC is dominant in cost-effectiveness terms. However, although SFC is £18 cheaper per patient per year, the difference in %SFD is small and not statistically significant (0.40; 95% CI: -3.69 to 4.49). Where no trial evidence existed to conduct a full analysis, the costs of BUD/FORM and SFC were compared as a partial economic evaluation (see Table 7). In all but one of the eleven comparisons SFC is cheaper than BUD/FORM, with annual cost savings ranging from £18 to £427 per patient.

Table 7. Cost differences: SFC compared with BUD/FORM

Cost-effectiveness of SFC in children under 12

In children under 12, appropriate effectiveness data were available for all comparisons except that versus BUD/FORM (data for SFC versus the same dose of FP are available as electronic supplementary data (see Appendix)). The trial data in Table 6 show that SFC has similar efficacy to both increased-dose FP and FP plus salmeterol. For the comparison of low-dose SFC with an increased dose of FP (400 μg/day) shown in Table 6, the cost per QALY for SFC Evohaler is well below the £30 000 threshold (£15 739), but the Accuhaler ICER is above it (£63 736). For the comparison of low-dose SFC against its components delivered in separate inhalers, SFC is cheaper and is the dominant option (Table 6).

Cost-effectiveness of SFC vs. BDP

Alternative model scenarios are also presented in Tables 5 and 6, showing the implications of using the acquisition cost of BDP rather than FP (at a clinically equivalent dose). These results show that SFC remains cost-effective for most comparisons. Exceptions for adults are the comparisons of SFC with BDP plus salmeterol delivered in separate inhalers at the low and high doses, where the trials suggest little clinical difference between the options and, at the low dose, both SFC devices are cheaper than the BDP option. However, in each case a more cost-effective SFC device is available. In children under 12, SFC is still dominant compared with BDP and salmeterol delivered in separate inhalers. There is no comparison of SFC with an increased dose of BDP (800 μg/day), as this dose is not licensed in children.

Discussion

This study was conducted to determine where in the BTS/SIGN Asthma Guideline it is cost-effective to use the SFC combination inhaler in the treatment of chronic asthma in adults and children. This analysis is the first to undertake a systematic identification of the relevant trials and %SFD data and to build a decision model addressing the cost-effectiveness of inhaled LABA and ICS combinations in the treatment of chronic asthma. The results indicate that in adults SFC is a cost-effective alternative to increasing the dose of either FP or BDP. For children, the cost per QALY for SFC 50 Evohaler compared with the increased dose of FP is well below the £30 000 threshold. SFC is similarly clinically effective in improving %SFD compared with FP plus salmeterol delivered in separate inhalers and with BUD/FORM in adults, and a cheaper SFC option is available at all doses.

The main weakness of the analysis is that, for a few of the comparisons examined, there was a lack of clinical trials meeting the inclusion criteria for the meta-analysis and therefore able to support a cost-effectiveness analysis. There are also few data on the long-term consequences of poor asthma control and treatment outcomes, and a lack of consistent data allowing differences in exacerbation rates to be compared across the treatments. Whilst randomised controlled trials are the gold standard for demonstrating clinical efficacy, the limited generalisability of these study designs could also be a weakness of this analysis31,32. A further limitation is that EQ-5D utility scores were not available directly from patients in each of the two asthma control states in the GOAL trial; instead, a mapping algorithm was used to translate AQLQ scores to EQ-5D utility scores.

In the paediatric analyses few studies were available to make comparisons. This could be due to the difficulties of recruiting children into clinical trials. In addition, asthma is more variable and heterogeneous in children than in adults33, which may result in less severe patients being recruited into the trials than expected, and subsequently a reduced likelihood of finding clinically meaningful differences between the therapies being tested. An absence of specific data in children under 12 also required the use of the same utility and resource use data as in the adult population. This may underestimate treatment costs associated with more severe asthma states in children, as children may be more likely than adults to use NHS resources for the same level of symptoms. In the model the effect would be to underestimate the cost offsets for the more effective treatment in a comparison.

The model is also conservative with respect to SFC in a number of other ways. In comparing SFC with FP plus salmeterol in separate inhalers, there is evidence that combination inhalers are associated with improved adherence34-38, which may have outcome benefits35,37,39. These benefits may not be seen in double-blind, double-dummy study designs, where all patients use two inhalers. Other benefits of combination inhalers, such as ensuring patients take both their ICS and LABA (in line with Medicines and Healthcare products Regulatory Agency and Commission on Human Medicines guidance40) and the potential synergy of the two drugs when taken in one inhaler41, are also relevant to this analysis. The analysis also excludes any longer-term benefits of avoiding routine use of higher-dose ICS1.

Another potential limitation is the grouping of GOAL study health states which may over-estimate the average utility and under- estimate the average costs of being in the ‘with-symptoms’ state because this state includes patients who are ‘well-controlled’, and are therefore likely to have reported symptom-free days in the trial. This leads to a more conservative model with respect to the more active treatment.

Conclusions

The BTS/SIGN Asthma Guideline8 does not specify the ICS dose at which to add a LABA, and therefore at which to use SFC. The findings presented here show that, for adults and children uncontrolled on BDP 400 μg/day or equivalent, switching to SFC is cost-effective compared with increasing the dose of ICS and should therefore be the initial preferred therapeutic approach. For adults and children aged 12 and over who have passed this point and are uncontrolled on BDP 800 μg/day or equivalent, switching to SFC remains cost-effective. For children under 12, SFC Evohaler may have benefits by delivering similar efficacy at a lower steroid dose than an ICS alone, and is a cost-effective approach.

Compared with FP plus salmeterol delivered in separate inhalers, SFC is at least as clinically effective and has other benefits such as improved adherence. In addition, SFC is generally cheaper where an adult requires an ICS and a LABA to be co-prescribed. The most appropriate way to compare SFC and BUD/FORM on a like-for-like basis is to review head-to-head studies comparing equivalent dosing regimens. Only one such study was available, but its data suggest that SFC and BUD/FORM achieve similar levels of efficacy. There is, however, a cheaper SFC option at all doses, and only SFC is available as an MDI, the device recommended by NICE as the preferred option in the management of asthma in children21,22.

Acknowledgements

Declaration of interest: The authors would like to acknowledge Dr Mark Sculpher for his valuable contribution in advising on the design and analysis of this study. Funding for the study was provided by GlaxoSmithKline.

Authorship and contributorship: All authors were involved in the conception, design and data interpretation as well as drafting the article and final approval for publication. In addition, NH was responsible for the statistical analysis of the data, with assistance from ES. TG and MT had lead responsibility for editing and finalising the manuscript.


Paper CMRO-3813_4, Accepted for publication: 20 March 2007

Published Online: 18 April 2007

doi: 10.1185/030079907X187982

* Seretide, Accuhaler and Evohaler are registered trade marks of the GlaxoSmithKline group of companies

[dagger] Symbicort is a registered trade mark of AstraZeneca AB


References

1. Global Initiative for Asthma (GINA). Global strategy for asthma management and prevention: NHLBI/WHO Workshop Report. Bethesda: National Institutes of Health. National Heart, Lung and Blood Institute 2002; Publication No. 02-3659

2. Asthma UK. Where do we stand? Asthma in the UK today 2004

3. Lyseng-Williamson KA, Plosker GL. Inhaled salmeterol/fluticasone propionate combination. Pharmacoeconomics 2003;21:951-89

4. National Asthma Campaign. Out in the open. A picture of asthma in the UK today 2002

5. Hoskins G, McCowan C, Neville R, et al. Risk factors and costs associated with an asthma attack. PharmacoEconomics and Outcomes News 2000;253:8

6. Vervloet D, Williams A, Lloyd A, Clark T. Costs of managing asthma as defined by a derived Asthma Control TestTM score in seven European countries. Eur Respir Rev 2006; 15:17-23

7. Lloyd A, Price D, Brown R. The impact of asthma exacerbations on health-related quality of life in moderate to severe asthma patients in the UK. Prim Care Respir J 2007;16:22-27

8. British Thoracic Society. British Guideline on the Management of Asthma. Revised edn. November, 2005

9. Paltiel AD, Fuhlbrigge AL, Kitch BT, et al. Cost-effectiveness of inhaled corticosteroids in adults with mild-to-moderate asthma: results from the Asthma Policy Model. J Allergy Clin Immunol 2001;108:39-46

10. Price MJ, Briggs AH. Development of an economic model to assess the cost effectiveness of asthma management strategies. Pharmacoeconomics 2002;20:183-94

11. Adams N, Bestall JM, Lasserson TJ, Jones PW. Fluticasone versus beclomethasone or budesonide for chronic asthma in adults and children. Cochrane Database Syst Rev 2005; Issue 3: CD002310.pub3. DOI: 10.1002/14651858.CD002310.pub3

12. Ni Chroinin M, Greenstone H, Danish A, et al. Long-acting beta2-agonists versus placebo in addition to inhaled corticosteroids in children and adults with chronic asthma. Cochrane Database Syst Rev (Online: Update Software) 2007;1

13. Bateman ED, Silins V, Bogolubov M. Clinical equivalence of salmeterol/fluticasone propionate in combination (50/100 μg twice daily) when administered via a chlorofluorocarbon-free metered-dose inhaler or dry powder inhaler to patients with mild-to-moderate asthma. Respir Med 2001;95:136-46 (SFCB3022)

14. van Noord JA, Lill H, Carillo Diaz T, et al. Clinical equivalence of a salmeterol/fluticasone propionate combination product (50/500 μg) delivered via a chlorofluorocarbon-free metered-dose inhaler with the Diskus in patients with moderate to severe asthma. Clin Drug Invest 2001;21:243-55 (SFCB3023)

15. Bateman ED, Boushey HA, Bousquet J, et al. Can guideline-defined asthma control be achieved? The Gaining Optimal Asthma ControL study. Am J Respir Crit Care Med 2004;170:836-44 (SAM40027)

16. Hornik K. The R FAQ. http://CRAN.R-project.org/doc/FAQ/R-FAQ.html (accessed July 2006). ISBN 3-900051-08-9, 2006

17. GSK. Clinical Trials Register. http://ctr.gsk.co.uk/Summary/fluticasone_salmeterol/studylist.asp (accessed 30 April 2006) 2006

18. Aalbers R, Backer V, Kava T, et al. Adjustable maintenance dosing with budesonide/formoterol compared with fixed-dose salmeterol/fluticasone in moderate to severe asthma. Curr Med Res Opin 2004; 20:225-40

19. Fitzgerald J, Boulet L, Follows R. The CONCEPT trial: A 1- year, multicenter, randomized, double-blind, double-dummy comparison of a stable dosing regimen of salmeterol/fluticasone propionate with an adjustable maintenance dosing regimen of formoterol/budesonide in adults with persistent asthma. Clin Ther 2005;27:393-406

20. Vogelmeier C, D’Urzo A, Pauwels R, et al. Budesonide/ formoterol maintenance and reliever therapy: an effective asthma treatment option? Eur Respir J 2005;26:819-28

21. National Institute for Health and Clinical Excellence. Technology Appraisal Guidance No. 10. Guidance on the use of inhaler systems (devices) in children under the age of 5 years with chronic asthma. http://www.nice.org.uk/download.aspx?o=TA010guidance&template=download.aspx (accessed July 2006) 2000

22. National Institute for Health and Clinical Excellence. Technology Appraisal Guidance No. 38. Inhaler devices for routine treatment of asthma in older children (aged 5-15 years). http://www.nice.org.uk/page.aspx?o=TA038guidance (accessed July 2006) 2002

23. Drug Tariff, http://www.drugtariff.com (accessed July 2006)

24. eMIMS. http://www.emims.net (accessed July 2006) 2006

25. Briggs AH, Bousquet J, Wallace MV, et al. Cost-effectiveness of asthma control: an economic appraisal of the GOAL study. Allergy 2006;61:531-6

26. Tsuchiya A, Brazier J, McColl E, Parkin D. Deriving preference-based single indices from non-preference-based condition-specific instruments: converting AQLQ into EQ-5D indices. Discussion paper 02/1. http://www.shef.ac.uk/content/1/c6/01/87/47/DP0201.pdf (accessed July 2006) 2002

27. Karlsson G, Johannesson M. The decision rules of cost-effectiveness analysis. Pharmacoeconomics 1996;9:120

28. Fenwick E, Claxton K, Sculpher M. Representing uncertainty: the role of cost-effectiveness acceptability curves. Health Econ 2001; 10:779-87

29. National Institute for Health and Clinical Excellence. Guide to the methods of technology appraisal. April 2004

30. Higgins J, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions 4.2.6 [updated September 2006]. In: The Cochrane Library, Issue 4, 2006. Chichester, UK: John Wiley & Sons Ltd

31. Bjermer L. Evidence-based recommendations or ‘show me the patients selected and I will tell you the results’. Respir Med 2006;100 Suppl. A:S17-21. Epub 2006 May 23

32. Herland K, Akselsen JP, Skjønsberg OH, Bjermer L. How representative are clinical study patients with asthma or COPD for a larger ‘real life’ population of patients with obstructive lung disease? Respir Med 2005;99:11-19

33. Chipps J, Spahn C, Sorkness L, et al. Variability in asthma severity in pediatric subjects with asthma previously receiving short-acting beta2-agonists. J Pediatr 2006;148:517-21

34. Stempel DA, Stoloff SW, Carranza R, Jr., et al. Adherence to asthma controller medication regimens. Respir Med 2005;99:1263-7

35. Stoloff SW, Stempel DA, Meyer J, et al. Improved refill persistence with fluticasone propionate and salmeterol in a single inhaler compared with other controller therapies. J Allergy Clin Immunol 2004; 113:245-51

36. Tews JT, Volmer T. Differences in compliance between combined salmeterol/fluticasone propionate in the Diskus device and fluticasone + salmeterol given via separate Diskus inhalers. Am J Respir Crit Care Med 2002;165:A188

37. Marceau C, Lemiere C, Berbiche D, et al. Persistence, adherence, and effectiveness of combination therapy among adult patients with asthma. J Allergy Clin Immunol 2006;118:574-81

38. O’Connor RD, Carranza Rosenzweig J, Stanford R, et al. Asthma- related exacerbations, therapy switching, and therapy discontinuation: a comparison of 3 commonly used controller regimens. Ann Allergy Asthma Immunol 2005;95: 535-40

39. Delea TE, Hagiwara M, Stanford R, Stempel DA. Utilization and costs of asthma-related care in patients initiating fluticasone propionate/salmeterol combination, salmeterol, or montelukast as add- on therapy to inhaled corticosteroids. Data in preparation 2006

40. MHRA, Commission on Human Medicines. Salmeterol (Serevent) and formoterol (Oxis, Foradil) in asthma management. Curr Probl Pharmacovigilance 2006;31:6

41. Nelson HS, Chapman KR, Pyke SD, et al. Enhanced synergy between fluticasone propionate and salmeterol inhaled from a single inhaler versus separate inhalers. J Allergy Clin Immunol 2003; 112:29-36

42. Bergmann KC, Lindemann L, Braun R, Steinkamp G. Salmeterol/fluticasone propionate (50/250 microg) combination is superior to double dose fluticasone (500 microg) for the treatment of symptomatic moderate asthma. Swiss Med Wkly 2004;134:50-8 (SAS40009)

43. Bateman ED, Britton M, Carillo J, et al. Salmeterol/ fluticasone propionate (50/100 mg) combination inhaler (Seretide). A new effective and well tolerated treatment for asthma. CUn Drug Invest 1998;16:193-201 (SFCB3017)

44. Chapman KR, Ringdal N, Backer V, et al. Salmeterol and fluticasone propionate (50/250 μg) administered via combination Diskus inhaler: as effective as when given via separate Diskus inhalers. Can Respir J 1999;6:45-51 (SFCB3018)

45. Aubier M, Pieters WT, Schlosser NJJ, et al. Salmeterol/fluticasone propionate (50/500 μg) in combination in a Diskus inhaler (Seretide) is effective and safe in the treatment of steroid-dependent asthma. Respir Med 1999;93:876-84 (SFCB3019)

46. Dahl R, Chuchalin A, Gor D, et al. EXCEL: a randomised trial comparing salmeterol/fluticasone propionate and formoterol/budesonide combinations in adults with persistent asthma. Respir Med 2006;100:1152-62 (SAM40040)

47. Van den Berg NJ, Ossip MS, Hederos CA, et al. Salmeterol/fluticasone propionate (50/100 [mu]g) in combination in a Diskus inhaler (Seretide) is effective and safe in children with asthma. Pediatr Pulmonol 2000;30:97-105 (SFCB3020)

48. Curtis L, Netten A. Unit Costs of Health and Social Care 2005. Personal and Social Services Research Unit, University of Kent. Available at http://pssru.ac.uk/publications.htm (Accessed July 2006)

49. NHS Reference Costs 2004. http://www.dh.gov.uk/assetRoot/04/10/55/61/04105561.xls (accessed July 2006)

Iolo Doull(a), David Price(b), Mike Thomas(c), Neil Hawkins(d), Eugena Stamuli(d), Maggie Jabberer(d), Toby Gosder(e) and Helen Rudge(e)

a Consultant Respiratory Paediatrician, Children’s Hospital for Wales, Cardiff, UK

b GPIAG Professor of Primary Care Respiratory Medicine, Department of General Practice and Primary Care, University of Aberdeen, UK

c Asthma UK Senior Research Fellow, Department of General Practice and Primary Care, University of Aberdeen, UK

d Oxford Outcomes Ltd, Oxford, UK

e GlaxoSmithKline Ltd, Uxbridge, UK

Address for correspondence: Neil Hawkins, Oxford Outcomes Ltd, Seacourt Tower, West Way, Botley, Oxford, OX2 0JJ, UK. Tel.: +44 1865 324930; Fax: +44 1865 324931; [email protected]

Appendix: Supplementary tables

The following tables are available as electronic supplementary data to the online version of this article (doi:10.1185/030079907X187991):

Table A1: Definitions of well-controlled and totally-controlled asthma based on global initiative for asthma/national institutes of health guideline aims of treatment

Table A2: Search strategy

Table A3: Included studies (SFC vs. same dose FP only)

Table A4: Meta-analysis results (SFC vs. same dose FP only)

Table A5: Cost-effectiveness of SFC compared with same dose ICS and additional doses of FP and BDP for other comparisons (adults & children aged 12 and over)

Table A6: Cost-effectiveness of SFC compared with same dose ICS (Children under 12)

Copyright Librapharm May 2007

(c) 2007 Current Medical Research and Opinion. Provided by ProQuest Information and Learning. All rights Reserved.

Teacher Dispositions Affecting Self-Esteem and Student Performance

By Helm, Carroll

Keywords: academic achievement, social status, teacher qualities

We have all seen her. She is the little girl in the pink frilly dress, big blond curls and a large pink bow in her hair, lace socks, and black, patent-leather shoes. She carries a designer purse over her shoulder; she gently holds a box with her Crayola-brand sharpened crayons, scissors, pencils, and all the other supplies requested by the teacher. Her name is Samantha, and she is perfectly prepared for school. Behind her enters a skinny little boy wearing a torn t-shirt and large brown shorts that are held up by an oversized black belt. The belt is pulled tight so that the oversized shorts will not fall down. His face is dirty; his knees are scraped; his hands are black; and his nose is snotty and dirt crusted. Looking down, I notice that his shoes are at least two sizes too big. He is smiling, but his big smile reveals several cavities in his front teeth. His name is Joey. He looks around the room, and many of the other students are smiling at him, but others seem to take no notice of the fact that he has entered the room. His parents were sent the same letter as Samantha's parents, but they could not, or would not, get the supplies the teacher requested.

This is the typical first day of school in classrooms across the country. I like to refer to what you read above as the haves and the have-nots. This day is the beginning of what many students experience: for some, constant praise and encouragement; for others, a feeling that they do not belong. It will not take Joey long to discover that he does not have many of the things that the other children have and to realize that he is different. On the first day of school, Joey's self-esteem was lowered a little more. Joey's parents have always told him that he is dumb and cannot read; they want the school system to label this illiteracy so that they can get an extra $540 a month from welfare. Has Joey's fate been sealed, or can the school system do something to reverse this cycle and make him believe that he is somebody?

In fact, something can be done by the school system. Many options are available to school systems, and it is important that they use them, or the little Joeys of the world will just become additional at-risk statistics. Wealth and social status are major factors in determining who learns in our schools (Cole 1990), but they are not the only factors. Dedicated teachers, who possess the right dispositions, can be the key to reaching students who do not come from wealth or privilege. Bridget Hamre and Robert Pianta (2001) found that students with significant behavior problems in their early years are less likely to have problems later in school if their teachers are sensitive to their needs and provide frequent, consistent, and positive feedback.

Hamre and Pianta (2001) followed 179 students in a small school district who entered kindergarten the same year and continued in the school district through eighth grade. Even when the researchers accounted for a student's gender, ethnicity, cognitive ability, and behavior ratings, the student's relationship with his or her teacher still predicted aspects of school success. Sanders and Rivers (1996) found that having several effective teachers, in consecutive years, could affect standardized scores by as much as 50 percentile points.

Darling-Hammond (2000) indicated that the quality of teachers, as measured by whether the teachers were fully certified and had a major in their teaching field, was related to student performance. Measures of teacher preparation and certification were the strongest predictors of student achievement in reading and mathematics, both before and after controlling for student poverty and English language proficiency. Proper training and certification, matched with the identification and assessment of proper teacher dispositions, have a significant impact on student learning. Ready, as reported in Woolfolk (2004, 21), listed the following indicators of excellent teaching:

1. Love children.

2. Respect all children and parents in all circumstances.

3. See potential in all children.

4. Motivate children to reach their highest potential.

5. Be a spontaneous and creative educator who is able to see teachable moments and seizes them.

6. Have a sense of humor.

In a study, Davies and Brember (1999) found that feelings of worth or unworthiness could affect the mathematics and reading performance of individuals forming their self-image while receiving feedback from others. According to Dole and McMahan (2005), many students with learning and behavior problems have poor social skills and low self-esteem in addition to low academic achievement. El-Anzi (2005) stated that self-esteem relates to academic achievement as well as to physical, emotional, and social areas. In another study, Legum and Hoare (2004) linked academic performance and self-esteem by giving students counseling and support to make better educational and career preparation choices. Their nine-week intervention program for at-risk middle school students produced a significant increase in the grade point averages of all participants (Legum and Hoare 2004). Keverne (2004) suggested that being aware of sensitive times in the brain's development that affect emotional development may help guide teachers to set high self-esteem levels in children at a young age.

What does this mean for the Joeys of the world? Children who begin their educational journey without wealth or privilege and without positive family support can succeed at higher academic levels if teachers are willing to invest themselves in those young children. Teachers must possess and exhibit the disposition of caring, have a positive work ethic, and be able to think critically to begin to deal with most of the have-nots. After all, the individual teacher is the most important element in whether the Joeys of the world develop positive self-esteem and make positive academic gains.

Research supports several factors related to student success. This article has broached a few. I have not explored pupil-teacher ratios and higher funding for education here, but evidence exists that these affect student performance. If a model could be developed to guarantee student success, it would most assuredly include a teacher who: (a) is highly qualified, (b) possesses the proper teaching license for his or her area, (c) possesses the dispositions of caring and empathy, (d) has a strong work ethic and critical thinking ability, (e) has supportive classroom parents, (f) has an eighteen-to-one pupil-teacher ratio, and (g) has adequate funding. If all of those qualities and support systems cannot be present at the same time, I would take strong, positive teacher dispositions every time. We all expect the teacher to be able to teach his or her subject and have good content knowledge, but more important, those teachers should be willing to teach the child first. That child, Joey, will thank us for this when he grows up and becomes successful.

REFERENCES

Cole, R. 1990. Teachers who make a difference. Instructor 110:58- 59.

Darling-Hammond, L. 2000. Teacher quality and student achievement: A review of state policy evidence. Educational Policy Analysis Archives 8:1-48. http://epaa.asu.edu/epaa/v8n1/ (accessed January 20, 2002).

Davies, J., and I. Brember. 1999. Self-esteem and national tests in years two and six. Educational Psychology 19 (3): 337-45.

Dole, S., and J. McMahan. 2005. Using video therapy to help adolescents cope with social and emotional problems. Intervention in School and Clinic 40 (3): 151-55.

El-Anzi, F. 2005. Academic achievement and its relationship with anxiety, self-esteem, optimism, and pessimism in Kuwaiti students. Social Behavior and Personality 33 (1): 95-104.

Hamre, B. K., and R. C. Pianta. 2001. Early teacher-child relationships and the trajectory of children's school outcomes through eighth grade. Child Development 72:625-38.

Keverne, E. 2004. Understanding well-being in evolutionary context of brain development. Philosophical Transactions: Biological Sciences 359 (1449): 1349-59.

Legum, H., and C. Hoare. 2004. Impact of a career intervention on at-risk middle school students' career maturity levels, academic achievement, and self-esteem. Professional School Counseling (110): 148-56.

Sanders, W. L., and J. C. Rivers. 1996. Cumulative and residual effects of teachers on student academic achievement. Knoxville: University of Tennessee Value-Added Research and Assessment Center.

Woolfolk, A. 2004. Educational Psychology. 9th ed. Boston: Pearson Education.

Carroll Helm is an associate professor and director of undergraduate education at the University of the Cumberlands in Williamsburg, Kentucky. Copyright (c) 2007 Heldref Publications

Copyright Heldref Publications Jan/Feb 2007

(c) 2007 Clearing House, The. Provided by ProQuest Information and Learning. All rights Reserved.

Mid-Columbia Pools Make a Splash

By Kaylani Evans, Tri-City Herald, Kennewick, Wash.

Jun. 8–Schools closing and temperatures rising signal the unofficial start of summer.

And so does the opening of the area’s outdoor swimming pools.

Richland, Pasco, Kennewick and Sunnyside open their pools Saturday. Prosser, Grandview and Connell’s pools open today.

“We always look forward to getting the pools open and getting people active and swimming,” said Doug Hagedorn, recreation coordinator in Richland.

And the weekend’s weather should be good for swimming.

Temperatures are expected to reach 75 to 80 degrees this weekend, said meteorologist Vincent Papol of the National Weather Service in Pendleton.

And they’ll only continue to climb as we head into the dog days of summer.

Shayna Collins, a lifeguard and water safety instructor at the Kenneth E. Serier Memorial Pool in Kennewick for three years, said it doesn’t matter what the temperature is — kids will show up to swim.

“We have kids who have been coming every year since they were little. They know all the lifeguards’ names and all the rules and will come no matter what,” she said.

Here is a schedule of public pool openings, public swim times and available swim lessons:

Kennewick

- Kenneth E. Serier Memorial Pool, 315 W. Sixth Ave.

Opens: Saturday

Hours: 1:15 to 2:45 p.m. and 3 to 4:30 p.m. every day; 6:30 to 8 p.m. Mondays, Wednesdays, and Fridays; 5:30 to 7:30 p.m. Saturdays; 5:30 to 7 p.m. and 7 to 8:30 p.m. Sundays.

Cost: 75 cents for seniors and kids under 17, $2 for adults.

Swim lessons: Starting June 18

Cost: $13 to $18 for Kennewick residents, $19 to $27 for others, depending on age and skill level.

Kennewick residents can register June 14 at 304 W. Sixth Ave. Non-Kennewick residents can register June 15.

- Tri-City Court Club

1350 N. Grant St., Kennewick

Swim lessons: starting Monday. Classes are available to swimmers of all skill levels.

Cost: $28 to $112 for members, $32 to $128 for others, depending on skill level and length of session. Private, 30-minute lessons also are available; $20 for members, $25 for others.

- World Gym

2008 N. Pittsburgh St., Kennewick

Swim lessons: starting Monday. Classes are available in morning and evening sessions for every skill level.

Cost: $50 for members, $60 for others, per two-week session.

Richland

- George Prout Pool, 1005 Swift Blvd.

Opens: Saturday

Hours: 1:15 to 2:45 p.m., 3 to 4:30 p.m., and 7 to 8:30 p.m. weekdays; 1 to 2:30 p.m., 2:45 to 4:15 p.m., and 4:30 to 6 p.m. Saturdays and Sundays; 6:15 to 7:45 p.m. Saturdays only.

Cost: 50 cents for kids under 17, $1.50 for adults, and $3.50 for families who live in Richland. For others, cost is 75 cents for kids, $2 for adults and $4.25 for families.

Swim lessons: starting Monday

Cost: $17 for Richland residents, $21.25 for others per two-week session.

- Columbia Basin Racquet Club

1776 Terminal Drive, Richland

Swim lessons: starting Tuesday. Classes are offered for a day, a week, three weeks and 10 weeks. Different levels are available for beginning through advanced swimmers.

Cost: $7.75 a day to $195 for 10 weeks for members, $9.25 a day to $240 for 10 weeks for others.

Pasco

Memorial Pool, 14th Avenue and Shoshone Street; Kurtzman Pool, 207 S. Wehe Ave.; Richardson Pool, 19th Avenue and Pearl Street.

Opens: Saturday

Hours: 1 to 2:30 p.m., 2:45 to 4:15 p.m., and 7 to 8:30 p.m., Mondays to Fridays; 1 to 2:30 p.m., 2:45 to 4:15 p.m., and 6 to 7:30 p.m. Saturdays and Sundays.

Cost: 50 cents for kids and $1 for adults at Richardson and Kurtzman pools; $1 for kids and $2 for adults at Memorial Pool.

Swim lessons: starting June 18

Cost: $12 to $16 for Pasco residents, $18 to $24 for others, depending on age.

Swim lesson registration will be accepted Fridays, starting June 15, at Pasco City Hall, 525 N. Third Ave. Pasco and Franklin County residents can register from 8 a.m. to 12:45 p.m.; Benton and Walla Walla County residents can register from 1 to 3 p.m.

Prosser

- Prosser city pool, 920 S. Kinney Way

Opens: Today

Hours: Open swim is 1:30 to 5 p.m. daily; family swim is 5 to 7 p.m., except Sundays.

Cost: $2 daily admission, all ages

Swim lessons: starting Monday

Cost: $25 per two-week session

Register at the city of Prosser building, 601 Seventh St. Sessions are limited to the first 30 swimmers registered.

- Lower Valley Athletic Club, 1419 Sheridan Ave.

Swim lessons: starting June 18

Connell

Pioneer Park Pool, 431 E. Birch St.

Opens: Today

Hours: 1 to 5 p.m., 7 to 9 p.m. Mondays to Fridays; 2 to 7 p.m. Saturdays.

Cost: $1.50 per day or $1 per session; children 4 and under are free.

Grandview

Grandview city pool, 601 W. Second St.

Opens: Today

Hours: Open swim is 1 to 4 p.m., adult lap swim is 5 to 6:15 p.m., and family swim is 6:30 to 8 p.m. Mondays to Fridays. Open swim is 1 to 4 p.m., adult lap swim is 4 to 5 p.m., and family swim is 5 to 7 p.m. Saturdays.

Cost: $2 for everyone

Swim lessons: starting Monday

Cost: $20 for city residents, $25 for others.

Sunnyside

Sunnyside city pool, Central Park and Fourth Street

Opens: Saturday

Hours: Open swim is 1 to 3 p.m. and 3:15 to 5:15 p.m. daily; family swim is 5:30 to 7:30 p.m. daily.

Cost: $1.50 for kids 3-7; $2.50 for kids 8 and up.

Swim lessons: Starting June 18

Cost: $20 for city residents, $26 for others.

Register during open swim hours.

Moses Lake

- Moses Lake Family Aquatic Center, 401 W. Fourth Ave.

Open: Now

Hours: 11 a.m. to 7 p.m. weekends, 4 to 8 p.m. during the week. Hours change in mid-June to 11 a.m. to 6:30 p.m. Mondays to Thursdays, 11 a.m. to 7 p.m. Fridays to Sundays.

Cost: Children 4 and under are free; $4.50 for kids 5-17; $5.50 for adults.

Swim lessons: starting June 18

Cost: $20 per two-week session.

- Moses Lake High School Pool, 803 E. Sharon Ave.

Open: Now

Hours: 5:45 to 7:15 a.m. Mondays, Wednesdays and Fridays; 6 to 8 p.m. Mondays to Fridays; 3 to 6 p.m. Saturdays and Sundays.

Cost: Children under 4 are free; $2.50 for students 5-17; $3.50 for adults.

Hermiston

Hermiston Family Aquatic Center, 879 W. Elm.

Open: Now

Hours: Noon to 7 p.m. Saturday and Sunday; Noon to 7 p.m. June 8-17; 6:30 a.m. to 9 p.m. after June 17.

Cost: $3 for kids up to 12; $4 for kids 13-17; $5 for adults; $4 for seniors; $15 per family of 5 or less.

Swim lessons: Starting June 18

Cost: $28 for city residents and $32 for others per two-week session; $90 for city residents and $100 for others for the entire summer.

Kaylani Evans: 582-1515; [email protected]

—–

To see more of the Tri-City Herald, or to subscribe to the newspaper, go to http://www.tri-cityherald.com.

Copyright (c) 2007, Tri-City Herald, Kennewick, Wash.

Distributed by McClatchy-Tribune Information Services.

For reprints, email [email protected], call 800-374-7985 or 847-635-6550, send a fax to 847-635-6968, or write to The Permissions Group Inc., 1247 Milwaukee Ave., Suite 303, Glenview, IL 60025, USA.

Cigarette Butts Prove A Lasting Drag

TAMPA — The Tampa Bay region is much like the rest of the world when it comes to the origin and content of its shoreline litter, according to a report issued Thursday.

It starts as roadside trash, is flushed into drains by rainfall and ends up in the water. Topping the list of the most commonly found piece of trash: cigarette butts.

The Ocean Conservancy, a nonprofit environmental organization based in Washington, D.C., compiled the results of a shoreline cleanup day conducted by volunteers last year in 68 countries. It covered a combined 34,000 miles of shoreline and collected 7 million pounds of litter, 80 percent of which had been washed from land into the water.

In all, 1.9 million cigarette filters were gathered.

“People think they are biodegradable,” said Kathryn Novak, coordinator for the Florida branch of the Ocean Conservancy.

They’re not, so think before flicking that cigarette butt out the car window.

“It’s going to be in Tampa Bay the next rain,” said Bill Sanders, executive director of Keep Pinellas Beautiful and organizer of the cleanup in Pinellas County. “All road litter goes straight into the bayous.”

The region’s flat terrain, unscreened storm drains and miles of waterways combine to lead roadside litter to bays, bayous and beaches. Currents can even carry debris out the mouth of the Bay to the Gulf, where it comes ashore on the beach, Sanders said.

“Once a current catches something, it can go anywhere,” he said.

The area’s shorelines, both the beach and along the Bay, have been getting cleaner over the years. Sanders said the Adopt a Road programs, with groups or businesses cleaning litter from roadsides several times a year, has reduced the amount of trash reaching the water.

Aside from cigarette filters, the most common type of litter found in Florida were caps and lids, followed by food wrappers.

The cleanup is more than just a cosmetic effort to remove litter, Novak said.

Plastic bags were a frequent find in Florida and can be deadly to sea turtles. Floating plastic bags look like jellyfish, a favorite food for turtles. The bags can clog a turtle's stomach, making it think it is full so that it stops eating, or can jam in its digestive tract.

Either can be fatal.

Monofilament fishing line can be a tangling trap for birds and manatees.

“It’s really dangerous. It’s strong and doesn’t degrade,” Novak said.

Other types of litter found on Florida shores included 5,377 toys, 1,069 shotgun shells, 135 appliances and 78 syringes.

Though common items dominate the cleanups, some unusual things do pop up, such as the headless goat found last year along Courtney Campbell Parkway.

Sanders said volunteers have found a safe, an envelope with an ultrasound image of a baby, false teeth, and a barnacle-crusted, antique, child’s potty chair. Wallets and credit cards are common finds.

“At first places may seem pristine, but once you start looking it’s surprising what you will find,” Novak said.

Reporter Neil Johnson can be reached at (352) 544-5214 or [email protected].

Student Skits Teach Energy-Saving Tips

By Joe Nelson, San Bernardino County Sun, Calif.

Jun. 7–HIGHLAND — You can bet that students at St. Adelaide School can tell you how wind, water and the sun produce energy in ways more environmentally friendly than those nasty fossil fuels.

They proved it Wednesday during the school’s first-ever Energy Follies, a one-hour performance of skits and songs reflecting what they’ve learned this year about energy conservation and alternative energy sources.

Windmills crafted from PVC pipe and balsa wood spun on stage as kindergartners sang “Blowin’ in the Wind.”

Several skits centered around the Peanuts gang waxing pragmatic on a variety of energy-saving practices, like how a fluorescent light bulb is four times more energy efficient than an incandescent light bulb and how consumers can save money on their electric bills for every degree they lower their thermostat during the winter.

“I liked the windmills,” said 10-year-old fourth-grader Denny Nguyen, who played Charlie Brown in one skit, referring to how much he enjoyed learning about wind energy. “They can make a lot of voltage.”

Ten-year-old fourth-grader George Sanchez also played Charlie Brown in a skit. He was fascinated by how far one aluminum can can go in the world.

“One soda can can power a television for three hours,” he said.

The performances were made possible with a $10,000 grant from the BP Group and the Manassas, Va.-based National Energy Education and Development Project, a nonprofit promoting the teaching of energy sources and their impacts on science, the economy and the environment, said second-grade teacher Inez Smith, who coordinated the Energy Follies.

She said teaching science has never been so easy. Using materials and curriculum purchased from The NEED Project enabled Smith’s students to learn about solar cells by making solar-powered boats out of water bottles, small electric motors and solar panels.

“It made it an absolute piece of cake,” Smith said of the curriculum.

Students in science teacher Mary Pettitt’s classes learned how geothermal energy is still used to heat buildings in San Bernardino, and how steaming water from the Arrowhead Hot Springs is a provider of that energy. The billows of white steam frequently seen ascending from storm drains across San Bernardino serve as a reminder, she said.

“San Bernardino is one of the earliest cities to have used that power,” Pettitt said.

Such energy sources are worth the additional cost, she said.

“It’s expensive to start with, but the savings in the long run are just incredible,” she said.

—–

To see more of the San Bernardino County Sun, or to subscribe to the newspaper, go to http://www.sbsun.com.

Copyright (c) 2007, San Bernardino County Sun, Calif.

Distributed by McClatchy-Tribune Information Services.


Mallinckrodt Launches Magnacet(TM) Tablets

ST. LOUIS, June 7 /PRNewswire/ — Mallinckrodt Brand Pharmaceuticals today announced that it is providing a new pain management drug, called Magnacet(TM) (oxycodone HCl/acetaminophen tablets CII). Magnacet is the only available oxycodone product coupled with 400 mg of acetaminophen, a unique dosage that gives physicians flexibility in treating patients with moderate to moderately severe pain.

Magnacet is indicated for the relief of moderate to moderately severe pain. The most frequently observed adverse reactions include lightheadedness, dizziness, sedation, nausea and vomiting. Oxycodone can produce drug dependence of the morphine-type and, therefore, has the potential for being abused.

“Mallinckrodt Brands is committed to offering additional treatment options that help healthcare providers improve patient care,” says Marco Polizzi, executive business director for Mallinckrodt Brand Pharmaceuticals, a unit of Tyco Healthcare/Mallinckrodt. “Magnacet is an example of our efforts to provide proven molecules in unique combinations and strengths in an area such as pain, where it is important that medical professionals are able to tailor treatments for specific needs of patients.”

According to Wolters Kluwer Health data, more than 200 million prescriptions were written in the pain market in 2006, representing a 9 percent increase from the previous year. The U.S. Food and Drug Administration (FDA) approved Magnacet in May 2006. The 400 mg dose of acetaminophen provides another option for physicians seeking additional dosing flexibility in treating patients with moderate to moderately severe pain.

Magnacet is available in 2.5/400, 5/400, 7.5/400, and 10/400 mg tablets through wholesalers and pharmacies in the United States. For more information, including the full prescribing information, please call Mallinckrodt at 1-888-744-1414.

About Mallinckrodt Pharmaceuticals Brands

Mallinckrodt Pharmaceuticals Brands division is dedicated to providing quality pharmaceutical products to help healthcare professionals enhance patient care. The organization consists of over 120 sales and marketing personnel in the continental United States and is engaged in the promotion of a broad range of medications focused on sleep, depression, palliative care, pain management and other areas of the Central Nervous System. For more information, visit http://www.pharmaceuticals.mallinckrodt.com/brand or http://www.tycohealthcare.com/.

About Tyco Healthcare/Mallinckrodt

Tyco Healthcare’s Mallinckrodt division offers a diverse line of imaging and pharmaceutical products that diagnose disease and relieve pain. As a major business segment of Tyco International Ltd., Tyco Healthcare manufactures, distributes and services an extensive product line including disposable medical supplies, monitoring equipment, innovative wound closure products, advanced surgical devices, medical instruments and bulk analgesic pharmaceuticals. With industry-leading brand names such as Autosuture, Kendall, Mallinckrodt, Nellcor, Puritan Bennett, Syneture and Valleylab, Tyco Healthcare products are found in virtually every healthcare setting.

Trademarks are owned by Mallinckrodt Inc.

Mallinckrodt Inc.

CONTACT: JoAnna Schooler, Media Relations, Tyco Healthcare Mallinckrodt,+1-314-654-3543, [email protected]

Web site: http://www.mallinckrodt.com/

Integrating Technology into K-12 Teaching and Learning: Current Knowledge Gaps and Recommendations for Future Research

By Hew, Khe Foon; Brush, Thomas

Abstract Although research studies in education show that use of technology can help student learning, its use is generally affected by certain barriers. In this paper, we first identify the general barriers typically faced by K-12 schools, both in the United States as well as other countries, when integrating technology into the curriculum for instructional purposes, namely: (a) resources, (b) institution, (c) subject culture, (d) attitudes and beliefs, (e) knowledge and skills, and (f) assessment. We then describe the strategies to overcome such barriers: (a) having a shared vision and technology integration plan, (b) overcoming the scarcity of resources, (c) changing attitudes and beliefs, (d) conducting professional development, and (e) reconsidering assessments. Finally, we identify several current knowledge gaps pertaining to the barriers and strategies of technology integration, and offer pertinent recommendations for future research.

Keywords Technology integration * Barriers * Strategies * K-12 * Curriculum * Future research

Introduction

From the birth of the motion picture in 1922, to the advent of the computer in the mid-1970s, educators have been intrigued with the potential of technology to help transform education and improve student learning. Research studies in education demonstrate that the use of technology (e.g., computers) can help improve students’ scores on standardized tests (Bain & Ross, 1999), improve students’ inventive thinking (e.g., problem solving) (Chief Executive Officer (CEO) Forum on Education and Technology, 2001), and improve students’ self-concept and motivation (Sivin-Kachala & Bialo, 2000). Moreover, technology is also seen as being able to provide a number of opportunities that would otherwise be difficult to attain. The use of computer-mediated communication tools, for example, can help students from various geographical locations “talk” to one another and experts conveniently. The increased ability to communicate with experts enhances students’ learning process (Bransford, Brown, & Cocking, 2000).

The belief that technology can positively impact student learning has led many governments to create programs for the integration of technology in their schools. In the United States, school districts reportedly spent $7.87 billion on technology equipment during the 2003-2004 school year (Quality Education Data, 2004). The student-per-instructional-computer ratio dropped to 3.8:1 in 2004, whereas the student-per-Internet-connected-computer ratio dropped to 4.1:1 (Education Week, 2005).

In Singapore, the first Master plan for Information Technology in Education was launched in April 1997. This program cost approximately $1.2 billion. As part of this plan, all Singapore schools are expected to acquire and integrate technology in their curriculum in order to develop in students a culture of thinking, lifelong learning, and social responsibility. More recently, the Singapore government unveiled the second Master plan for Information Technology in July 2002 to continue to provide overall direction on how schools can harness the possibilities offered by information technology for teaching and learning.

Although research studies in education show that use of technology can help student learning, its use is generally affected by certain barriers. These barriers are all too prevalent-even among exemplary users of technology in schools (Becker, 2000). The purpose of this paper is to examine the current barriers related to the integration of technology into the curriculum that are currently faced by K-12 schools both in the United States and in other countries, and to identify strategies to overcome those barriers. In addition, we identify current knowledge gaps in the literature and provide recommendations for future research.

What is technology integration?

There is no clear standard definition of technology integration in K-12 schools (Bebell, Russell, & O’Dwyer, 2004). For some scholars, technology integration was understood and examined in terms of types of teachers’ computer use in the classrooms: low- level (e.g., students doing Internet searches) or high-level use (e.g., students doing multimedia presentations, collecting and interpreting data for projects) (Cuban, Kirkpatrick, & Peck, 2001). For other scholars, technology integration was understood and examined in terms of how teachers used technology to carry out familiar activities more reliably and productively, and how such use may be re-shaping these activities (Hennessy, Ruthven, & Brindley, 2005). Still others consider technology integration in terms of teachers using technology to develop students’ thinking skills (Lim et al., 2003). Despite the lack of a clear standard definition, certain prevailing elements appear to cut across the many different current discussions about technology integration in K-12 schools. These elements typically include the use of computing devices for instruction. In this paper, technology integration is thus viewed as the use of computing devices such as desktop computers, laptops, handheld computers, software, or Internet in K-12 schools for instructional purposes.

Analysis of previous research studies

To examine the current barriers and strategies, we analyzed existing studies from 1995 to spring 2006 that reported empirical research findings. The focus of our technology integration literature search and discussion in this paper is on the general barriers affecting the use of computing devices in K-12 schools for instructional purposes, and the strategies to overcome those barriers. We looked for a mixture of empirical studies that were conducted in the United States and countries abroad. Using databases such as Academic Search Premier, ERIC, PsycARTICLES, and Professional Development Collection, we searched using several combinations of keywords including: “technology,” “computer,” “Internet,” “teacher,” and “K-12 school.” We also employed the “snowball” method and reviewed the references in the selected articles for additional empirical studies. We eliminated those that pertained only to (a) pre-service teachers, (b) non-empirical descriptions of technology integration programs, (c) literature reviews, and (d) opinion papers. We also excluded studies that discussed the non-instructional purposes of technology such as use of technology for administrative support work (e.g., keeping students’ attendance records), and other forms of technology such as instructional radio. Consequently, we examined 48 studies that reported empirical findings. Of these 48 studies, 43 came from peer-reviewed journals (e.g., American Educational Research Journal), two came from research reports (e.g., the U.S.A. exemplary technology-supported case studies project), two came from conference presentations (e.g., the American Educational Research Association annual meeting), and one came from a book reporting the results of a 10-year empirical study on technology integration.

We then used the constant comparative method (Lincoln & Guba, 1985) on these studies to derive the barrier and strategy categories. Each empirical study was analyzed to identify the types of research studies being conducted, the barriers, and the strategies (if any) used to address the barriers. These barriers and strategies were then grouped into a number of tentative categories. Every subsequent new barrier or strategy identified was compared to the existing categories, with specific barriers and strategies being recoded as the definitions and properties of each category became better developed. Data analysis continued until the barrier and strategy categories were saturated, meaning that additional data began to confirm the categories rather than identify new categories.

Barriers to technology integration

A total of 123 barriers were found from the review of past empirical studies. In order to provide a coherent and parsimonious description of the various technology integration barriers, we classified them into six main categories: (a) resources, (b) knowledge and skills, (c) institution, (d) attitudes and beliefs, (e) assessment, and (f) subject culture. These barriers are listed in order of the relative frequency with which they were mentioned in the studies reviewed (see Fig. 1).

Resources

The lack of resources may include one or more of the following: (a) technology, (b) access to available technology, (c) time, and (d) technical support. Lack of technology includes insufficient computers, peripherals, and software (e.g., Karagiorgi, 2005; O’Mahony, 2003; Pelgrum, 2001; Sandholtz, Ringstaff, & Dwyer, 1997). Without adequate hardware and software, there is little opportunity for teachers to integrate technology into the curriculum. Even in cases where technology is abundant, there is no guarantee that teachers have easy access to those resources. Access to technology is more than merely the availability of technology in a school; it involves providing the proper amount and right types of technology in locations where teachers and students can use them (Fabry & Higgs, 1997). For example, Selwyn (1999) found that the best resources tended to be dominated by technology classes (e.g., computer studies), resulting in a “pecking order” of subjects where use of computer laboratories is concerned and putting teachers of non-technological subjects (e.g., art, humanities) at a disadvantage. Zhao, Pugh, Sheldon, and Byers (2002) similarly found that although schools have computers housed in laboratories, teachers might not have easy access to them if they needed to compete with other teachers for laboratory time.

Lack of time is another resource-type barrier (Butzin, 2001; Cuban et al., 2001; Karagiorgi, 2005; O’Mahony, 2003). Teachers needed hours to preview web sites, to locate the photos they required for the multimedia projects they assigned to students, or to scan those photos into the computers. Teachers who were willing to work longer hours paid a personal price in “burn out” and an eventual exit from the school. The lack of technical support is yet another resource-type barrier (Lai, Trewen, & Pratt, 2002; Rogers, 2000). Teachers need adequate technical support to assist them in using different technologies. Employing a limited number of technical support personnel in a school severely hinders teachers’ technology use. More often than not, these support personnel were overwhelmed by teacher requests and could not respond swiftly or adequately (Cuban et al., 2001).

Knowledge and skills

The lack of specific technology knowledge and skills, technology-supported-pedagogical knowledge and skills, and technology-related classroom management knowledge and skills has been identified as a major barrier to technology integration. Lack of specific technology knowledge and skills is one of the common reasons given by teachers for not using technology (Snoeyink & Ertmer, 2001/2002; Williams, Coles, Wilson, Richardson, & Tuson, 2000). For example, in a study of Scottish schools, Williams et al. (2000) found that lack of skills in the use of databases and spreadsheets was seen as an inhibiting factor by more than 10% of elementary school teachers. Snoeyink and Ertmer (2001/2002), in their study of one middle-class school in the United States, also found that limited computer knowledge or skills contributed to the lack of technology integration by teachers. The teachers in their study did not attempt any technology-related activities with their students until they had developed basic skills such as logging onto the network, opening and closing files and applications, and basic word processing.

In addition to the lack of technology knowledge and skills, some teachers are unfamiliar with the pedagogy of using technology. According to Hughes (2005), teachers need to have a technology-supported-pedagogy knowledge and skills base, which they can draw upon when planning to integrate technology into their teaching. Technology-supported-pedagogy may be classified into three categories in which technology functions as: (a) replacement, (b) amplification, or (c) transformation (Hughes, 2005). Technology as replacement involves technology serving as a different means to the same instructional goal. For example, a teacher could type a poem on a PowerPoint slide and project it on the wall. This activity replaces the writing of the poem on a poster and taping it on the wall with the unchanged instructional goal for students to read the poem. Technology as amplification involves the use of technology to accomplish tasks more efficiently and effectively without altering the task (Pea, 1985). For example, a teacher may ask students to edit peers’ stories typed in a word processor. As opposed to hand-written stories, the author’s ability to easily revise the story based on peers’ comments is amplified because the student does not have to rewrite the story each time to accommodate the peers’ feedback. Finally, use of technology as transformation has the potential to provide innovative educational opportunities (Hughes, 2005) by reorganizing students’ cognitive processes and problem-solving activities (Pea, 1985). For example, students can use computer databases and graphing software as tools for exploratory data analysis, data organization, and for framing and testing hypotheses related to the data. Many teachers have not been exposed to transformative technology-supported-pedagogy because professional development activities have focused primarily on how to merely operate the technology.

The lack of technology-related-classroom management knowledge and skills is another barrier to technology integration into the curriculum. Traditionally, classroom management includes “the provisions and procedures necessary to establish and maintain an environment in which instruction and learning can occur and the preparation of the classroom as an effective learning environment” (Fraser, 1983, p. 68). Classroom management has been identified as the most important factor influencing student learning (Wang, Haertel, & Walberg, 1993).

Typically, traditional classroom management involves a set of guidelines for appropriate student behaviors (Lim et al., 2003). Although the rules and procedures established in a non-technology integrated classroom can apply in a technology-integrated one, there are additional rules and procedures to be established in the latter due to the inclusion of computers, printers, monitors, CD-ROMs, and other technology resources (Lim et al., 2003). Thus, in a technology-integrated classroom, teachers need to be equipped with technology-related classroom management skills such as how to organize the class effectively so that students have equal opportunities to use computers, or what to do if students run into technical problems when working on computers. Examples of empirical evidence indicating that the lack of technology-related classroom management skills inhibits technology integration can be found in studies conducted by Lim et al. (2003) and Newhouse (2001).

Institution

Institutional barriers may include: (a) leadership, (b) school time-tabling structure, and (c) school planning. Research has shown that school leadership can hinder the integration of technology by teachers. Fox and Henri (2005) found that the majority of Hong Kong teachers felt that their principals did not understand technology and its relevance to the government’s proposed shift to more learner- centered activities. Consequently, the impact of technology on the teachers’ practices in the classroom was restricted. An inflexible timetable can also act as a barrier. In a survey of more than 4,000 teachers in over 1,100 schools in the United States, Becker (2000) found that most secondary students have a continuous block of less than one hour’s duration to do work in any one class. Such a time limit constrains the variety of learning modalities their teachers can design. Consequently, fewer teachers plan computer activities on a regular basis. The lack of school planning with regard to technology use is another barrier. Lawson and Comber (1999) found that in one United Kingdom school that made minimal use of technology, the administrators had decided to enter a technology integration project as a way of getting free Internet access for a year. There had been no planning regarding what to do with the technology once it was installed, and the administrators left the information technology department to its own devices during the project. Consequently, the use of technology did not extend beyond that department.

Attitudes and beliefs

Teacher attitudes and beliefs towards technology can be another major barrier to technology integration (Hermans, Tondeur, Valcke, & Van Braak, 2006). According to Simpson, Koballa, Oliver, and Crawley (1994), attitudes can be defined as specific feelings that indicate whether a person likes or dislikes something. In the context of technology integration, teacher attitudes toward technology may be conceptualized as teachers liking or disliking the use of technology. Beliefs can be defined as premises or suppositions about something that are felt to be true (Calderhead, 1996; Richardson, 1996). Specifically, teachers’ beliefs may include their educational beliefs about teaching and learning (i.e., pedagogical beliefs), and their beliefs about technology (Ertmer, 2005; Windschitl & Sahl, 2002). Researchers have found that beliefs determine a person’s attitude (Bodur, Brinberg, & Coupey, 2000).

Ertmer (2005) argued that the decision of whether and how to use technology for instruction ultimately depends on the teachers themselves and the beliefs they hold about technology. For example, in an investigation of one elementary school in the United States, Ertmer, Addison, Lane, Ross, and Woods (1999) found that teachers’ beliefs about technology in the curriculum shaped their goals for technology use. Teachers who viewed technology as merely “a way to keep kids busy” did not see the relevance of technology to the designated curriculum. Computer time was commonly granted after regular classroom work was done and as a reward for the completion of assigned tasks. To these teachers, other skills and content knowledge were more important. Similarly, other researchers have found teacher beliefs about technology to be a major barrier to technology integration. For example, a study in Australia that investigated the perceptions of students and teachers towards the use of portable computers at a secondary school revealed that the majority of teachers believed that computers would not lead to better understanding or faster learning (Newhouse, 2001). Similarly, teachers in Cyprus who participated in a program focusing on information and communication technologies in schools failed to see the value of such technology for their students. Although they had seen the power of the computer in other areas, they were unconvinced that it could help in education (Karagiorgi, 2005).

Assessment

Assessment can be defined as the activity of measuring student learning (Reeves, 2000). It can be formative or summative in nature, although traditionally, it is typically summative in the form of school and national high-stakes testing. High-stakes testing can be defined as assessment with serious attached consequences such as promotion or graduation for students (CEO Forum on Education and Technology, 2001) or rewards versus sanctions for schools. The pressures of such testing can be a major barrier to technology integration. For example, Fox and Henri (2005) explored the use of technology in Hong Kong elementary and secondary school classrooms and found that pressures related to high-stakes testing gave teachers little time to attempt new instructional methods involving technology. This view was corroborated by Butzin (2004), who noted that the pressure to meet higher standards and score high on standardized tests, along with the need to cover a vast scope of material within a limited amount of time, creates a daunting challenge for any teacher. Consequently, teachers feel they can cover more material when they are in front of the class talking with every student doing the same thing at the same time, rather than using technology, because of the additional planning time required to identify and select appropriate software to match lesson objectives (Butzin, 2004).

In addition, high-stakes testing can result in the shift of using technology from teaching and learning to using it to facilitate assessment (Bichelmeyer, 2005). The “No Child Left Behind” act has placed great emphasis on testing and has accordingly drawn more attention to comparative test scores (Brantley-Dias, Calandra, Harmon, & Shoffner, 2006). Such emphasis on testing, argued Schneiderman (2004), undercuts the potential promise of technology as a teaching and learning tool. As a result, the focus of technology use in K-12 education has not been on the use of computers for teaching and learning, but rather on the financial benefits of computer-based testing and the warehousing of assessment results (Bichelmeyer & Molenda, 2006; Education Week, May 8, 2003).

Finally, Hennessy et al. (2005) found that there was a perceived tension between using technology and the need to conform to the external requirements of traditional examinations. Requirements to use technology to enhance learning without recognition through assessment were deemed problematic. For example, there was concern that the use of graphic calculators was disadvantageous to students because such calculators are prohibited in national examinations. Such concerns led to decreased enthusiasm among teachers for using technology.

Subject culture

Subject culture refers to the “general set of institutionalized practices and expectations which have grown up around a particular school subject, and shapes the definition of that subject as a distinct area of study” (Goodson & Mangan, 1995, p. 614). Subject cultures have long-standing histories, reinforced by generations of school practice (Goodson & Mangan, 1995), and are typically shaped by the subject content, subject pedagogy, and subject assessment (Selwyn, 1999). Teachers are reluctant to adopt a technology that seems incompatible with the norms of a subject culture (Hennessy, Ruthven, & Brindley, 2005). For example, Selwyn (1999) found an art teacher who justified her avoidance of using computers by saying that when painting, one would be more in tune with it if one did it physically with one’s own hand; the art teacher believed that using a mouse makes one’s mind and hand disjointed. Another art teacher argued that from an aesthetic point of view, accessing art galleries through a computer can never equal experiencing an actual painting in person.

Identifying the relationships among the barriers

Although each type of barrier was described separately, in reality the barriers are related to one another. In this section, we construct a tentative model based on the findings of past studies to describe such relationships (see Fig. 2). The linkages shown in Fig. 2 denote claims made by the studies that certain barriers can influence others. For example, Selwyn (1999) and Hennessy et al. (2005) claim that assessment influences subject cultures. It can be seen from Fig. 2 that technology integration is thought to be directly influenced by the following four barriers: (a) the teacher’s attitudes and beliefs towards using technology, (b) the teacher’s knowledge and skills, (c) the institution, and (d) resources. Teachers’ attitudes and beliefs toward using technology are also thought to be affected by their knowledge and skills, and vice-versa. In addition, the institution appears to directly affect the adequacy of resources provided for technology integration, the adequacy of teachers’ knowledge and skills (via provision of professional development), and teachers’ attitudes toward using technology. For example, Hennessy et al. (2005) found that an institution’s top-down internal policies to use technology within subject teaching could cause a feeling of disempowerment in teachers. Teachers interviewed felt that they had to include technology into schemes of work, regardless of whether technology was particularly useful for that aspect of the curriculum.

Technology integration is also thought to be indirectly influenced by the subject culture and assessment. Subject culture indirectly affects technology integration via teachers’ attitudes and beliefs, and the institution. The latter is affected because an institution is made up of various subject departments that are inexorably linked with their respective subject cultures (e.g., arts department with the arts subject culture). Although technology may be integrated more routinely in certain subjects such as geography and business studies (Selwyn, 1999), its use is still affected by the mode of assessment. Assessment indirectly affects technology integration because the form of assessment typically dictates both how a subject should be taught and assessed and thus how technology should be used (e.g., the use of graphing calculators is not encouraged because they are prohibited in high-stakes testing).

Having described the general barriers typically faced by K-12 schools when integrating technology into the curriculum for instructional purposes, we now describe the strategies to overcome the barriers in the following section.

Strategies to overcome barriers

In order to provide a coherent description of various strategies to overcome barriers, we have classified them into five main categories: (a) having a shared vision and technology integration plan, (b) overcoming the scarcity of resources, (c) changing attitudes and beliefs, (d) providing professional development, and (e) reconsidering assessments. These strategies are not listed in order of priority or importance. Table 1 summarizes all five categories of strategies.

Having a shared vision and technology integration plan

Having a shared vision of learning and teaching can serve as a driving force for overcoming leadership barriers to technology use (Sandholtz et al., 1997; Tearle, 2004). Lim and Khine (2006), for example, found in their study of four schools that a shared vision and technology integration plan gave school leaders and teachers an avenue to coherently communicate how technology can be used, as well as a place to begin, a goal to achieve, and a guide along the way. Without such a vision, it is likely that teachers and administrators will limit their thinking about technology to “boxes and wires” or isolated computer skills (Fishman & Pinkard, 2001, p. 70). Probably the most important issue to consider when formulating a shared vision regarding technology integration is to address the specific relationship between technology and particular curriculum content areas because a commitment to the curriculum is a critical scaffold for technology integration (Staples, Pugach, & Himes, 2005). In other words, the vision for technology integration should be to enhance student learning of the curriculum (Staples et al., 2005). It is also important to note that the vision should not be created by just the school leaders; teachers, in particular, should be involved in the decision-making because teacher participation has been found to be one of the ingredients for successful wide-scale integration of technology in a school district (Bowman, Newman, & Masterson, 2001; Eshet, Klemes, Henderson, & Jalali, 2000).

After a vision has been successfully created and accepted, the next step is to articulate a technology integration plan, which provides a detailed blueprint of the steps needed to translate the school technology vision into reality. Fishman and Pinkard (2001) offered some practical advice on how to facilitate the development of a technology integration plan: establish a “planning for technology” committee that consists of teachers, administrators, and outside facilitators (e.g., educational technology experts) who are willing to help facilitate change. The outside facilitators can help to address any questions that teachers and administrators may have.

In a study of one school in Turkey, Gulbahar (in press) found several issues that were deemed necessary to be considered during the actual development of a technology integration plan. These issues relate to the maintenance and regular upgrade of the technology resources, equity of access to technology for teachers and students, a reward or recognition system that encourages teachers’ use of technology, and professional development opportunities for teachers. Another issue that needs to be considered is the expectations of technology use for instructional purposes, such as the stipulated number of technology-mediated lessons to be conducted per week (Lim & Khine, 2006). Stipulating the number of technology-integrated lessons can serve as a tool to exert pressure on teachers to use technology and thereby to increase usage (O’Dwyer, Russell, & Bebell, 2004). Other forms of pressure that had been found useful for technology integration involve the expectation for teachers to participate in team meetings regarding use of technology, and requiring the scope for technology use to be developed for all grade and skill levels (Schiller, 2002). Another issue to be considered in the technology plan is the formulation of monitoring activities to ensure that technology integration is taking place. Examples of monitoring activities used by principals that were found to be significant in ensuring teachers’ use of technology include: one-on-one discussions with teachers, observation visits to classrooms, and scrutiny of lesson and program plans (Schiller, 2002).

Overcoming the scarcity of resources

Three strategies to overcome the lack of technology barrier were reported in previous studies. First, create a hybrid technology setup in classrooms that involves cheaper computer systems, such as “thin client computers.” Thin client computers consist of only a monitor and a device that provides access to a network, with no hard or floppy drive. These computers can be purchased at one third the cost of a traditional personal computer. In their study of a U.S. K-8 public school district, Sandholtz and Reilly (2004) found that the use of thin client computers provided three distinct advantages: (a) their lower cost enabled schools to stretch their purchasing capacity, (b) the thin clients presented few maintenance or technical problems for teachers to address, and (c) thin clients reduced space management issues due to their small size. Second, introduce technology into one or two subject areas at a time to ensure that teachers and students in those areas have adequate technology (Tearle, 2004). Third, instead of building expensive computer laboratories and equipping them with desktop computers, use laptops with wireless connections to achieve a one-to-one student-to-computer ratio (Lowther, Ross, & Morrison, 2003). Using laptops can save building and maintenance costs of the computer laboratories. Furthermore, there is evidence that laptops can provide potentially optimal contexts for integrating technology use into teaching practices (Lowther et al., 2003). Laptops can either be provided to students on a permanent or temporary one-to-one basis. One possible way to achieve a temporary one-to-one student-to-laptop ratio is to use mobile laptop carts (Grant, Ross, Wang, & Potter, 2005; Russell, Bebell, & Higgins, 2004). The mobile laptop carts can be brought from one classroom to another on an as-needed basis.

Overcoming the lack of access to technology barrier can involve two strategies. First, several computers could be placed in the classroom, rather than in centralized locations. For example, Becker (2000) found that secondary subject teachers who have five to eight computers in their classroom were twice as likely to give students frequent computer experience during class as their counterparts whose classes used computers in a shared location. Explaining this finding, Becker said that the need for scheduling whole classes to use computers, as in the case of centralized or shared locations, makes it nearly impossible for technology to be integrated as research, analytic, and communicative tools in the context of the work of an academic class. The use of laptops or mobile laptop carts can also eliminate the inconvenience of scheduling class time, since the laptops can be brought to class to achieve a one-to-one student-to-computer ratio (Lowther et al., 2003). The second strategy for overcoming the lack of access to technology is to rotate students in groups (e.g., cooperative learning) (Johnson & Johnson, 1992) through the small number of computers in the classrooms. In such classrooms, the teachers employ a station approach using various learning activities (e.g., reading centers, computer centers, etc.). Groups of students then take turns rotating through each learning center, thus ensuring that each one has an opportunity to use the computers (Sandholtz et al., 1997).

To overcome the lack of time barrier, three strategies were identified from our review of empirical studies. First, schools can change their time-tabling schedule to increase class time to double period sessions (Bowman et al., 2001). Becker (2000) found that secondary school teachers who work in schools with schedules involving longer blocks of time (e.g., 90-120 min classes) were more likely to report frequent use of technology during class compared to teachers who taught in traditional 50-minute periods. Second, class loads for teachers can be reduced in order to free up some school time for teachers to familiarize themselves with technology and develop appropriate technology-integrated curricula activities (Snoeyink & Ertmer, 2001/2002). One way to decrease class loads is to reduce the overall curriculum content. For example, since 1998 the Ministry of Education in Singapore has achieved a 10-30% content reduction in almost all curriculum subjects at the secondary school level without compromising on basic foundation knowledge that students need to master to proceed to higher levels of education (MOE Singapore, 1998). Third, teachers should be encouraged to collaborate to create technology-integrated lesson plans and materials (Dexter & Anderson, 2002; Lim & Khine, 2006). By working together, teachers are able to shorten the time needed to produce technology-integrated lessons as compared to producing the lessons alone.

To overcome the lack of technical support, students can be trained to handle simple hardware and software problems. Paying technicians would then be necessary only when the hardware or software problems are beyond the students’ abilities to remedy, which can be more cost-effective than employing many full-time professional technicians. Lim et al. (2003) found the use of student helpers an effective way to relieve some of the technical problems that may occur in a technology-integrated lesson, so that the teacher could focus more attention on conducting and managing instructional activities.

Changing attitudes and beliefs

To facilitate change in attitudes and beliefs, the current review has suggested that four factors need to be taken into consideration: teachers’ knowledge and skills, subject culture, assessment, and institution support. Institution support typically comes in four major ways: (a) having a vision and plan of where the school wishes to go with technology (e.g., Lawson & Comber, 1999); (b) providing necessary resources for teachers (e.g., Sandholtz & Reilly, 2004); (c) providing ongoing professional development for teachers (e.g., Schiller, 2002; Teo & Wei, 2001); and (d) providing encouragement for teachers (e.g., Granger, Morbey, Lotherington, Owston, & Wideman, 2002; Mouza, 2002-2003).

Granger et al. (2002), in their study of four schools in Canada, found that teachers stressed the importance of principals providing encouragement for teachers by acting as advocates in a period of fiscal restraint and ever-increasing demands on educators. As one teacher said, “[The] atmosphere is very relaxed with administrators who give you an opportunity to basically experiment and explore and you don’t have to be perfect…[it] allows us to be risk takers, to make mistakes…” (p. 485). Another teacher noted that good leadership is “being allowed to do your own thing with encouragement to improve” (p. 486). These findings support the notion that school leaders should not take teachers immediately to task for any mistakes that teachers may make, especially when they are new to technology.

Given that teachers need encouragement when integrating technology, how then can principals’ support be increased? One possibility is to help principals develop an appreciation for technology so that they can be more understanding of what teachers experience when they integrate technology in their lessons (e.g., teachers’ anxieties and struggles). Such understanding is likely to be fulfilled by providing principals with technology training, particularly exposure to methods and procedures of integrating technology into the curriculum (Dawson & Rakes, 2003).

Providing professional development

Professional development can influence a teacher’s attitudes and beliefs towards technology (Shaunessy, 2005; Teo & Wei, 2001), as well as provide teachers with the knowledge and skills to employ technology in classroom practice (Fishman & Pinkard, 2001). In an empirical study of the effects of different characteristics of professional development on a national sample of over 1,000 teachers, Garet, Porter, Desimone, Birman, and Yoon (2001) found that both traditional and innovative types of professional development of the same duration tend to have the same effects on reported outcomes. They concluded on this basis that it is more important to focus on the features of professional development rather than its type (i.e., innovative types such as study groups or mentoring versus traditional types such as formal training workshops or conferences). Following this recommendation, we focused specifically on features that make professional development effective.

A review of relevant literature shows that effective professional development related to technology integration: (a) focuses on content (e.g., technology knowledge and skills, technology-supported pedagogy knowledge and skills, and technology-related classroom management knowledge and skills), (b) gives teachers opportunities for “hands-on” work, and (c) is highly consistent with teachers’ needs. First, focusing on technology knowledge and skills is clearly important because technology integration cannot occur if the teacher lacks the knowledge or skills to operate computers and software. Snoeyink and Ertmer (2001-2002) found that teachers did not see the value of technology integration until they had developed basic skills such as logging onto the network and basic word processing.

Teachers also need to have the necessary technology-supported pedagogy knowledge and skills in order to integrate technology for instructional purposes (Dexter & Anderson, 2002; Mulkeen, 2003). In her study of four English language arts teachers, Hughes (2005) found that the power to develop technology-supported pedagogy lies in the teacher’s interpretation of the technology’s value for instruction and learning in the classroom. The most effective method toward this end, claimed Hughes, is helping teachers to see a clear connection between the technology being used and the subject content being taught, what Hughes referred to as “learning experiences grounded in content-based technology examples” (p. 277). As Hughes put it, “It accords that the more content-specific the example, the more likely the teacher will see the value [of technology] and learn it” (p. 296). For example, a novice teacher can observe a more knowledgeable colleague using technology in a content-specific area (e.g., use of PowerPoint to teach the structure of English language and composition). Teachers also need to understand the unique aspects of preparing lessons that use technology, for example, having a tight definition of tasks involving the use of the Internet. Such teacher actions were found to contribute toward successful lessons with technology (Rogers & Finlayson, 2004). Teachers, for example, need to recognize the balance between the advantages of giving students responsibility and the potential unproductiveness of random surfing on the Internet. Successful solutions employed by the teachers in Rogers and Finlayson’s (2004) study involved use of limited ranges of website addresses, clear deadlines, and encouragement of students to develop their critical skills about the nature and quality of information obtained.

Effective professional development also focuses on technology- related classroom management knowledge and skills. Sandholtz et al. (1997) noted that in every classroom, events typically take unexpected directions. The changes in a classroom environment caused by the addition of technology often lead to an even higher level of unpredictability. One way to help manage unpredictability is to establish clear rules and procedures for technology usage (Lim et al., 2003). Some of these rules included the following: (a) no unauthorized installation of programs and (b) no unauthorized change to the features of the computer control panel. Some of the procedures included: (a) indexing the computers with the index number of the student to facilitate student seat assignment and enable the teacher to track down the student who abused the computer, and (b) pairing students with stronger technology skills with those who needed more support using technology to reduce the need for students to frequently interrupt the teacher for help.

Classroom layout redesign is another strategy to help teachers manage technology-integrated classrooms. For example, Zandvliet and Fraser (2004) found that room layouts could either promote or restrict the technology-integrated activities performed in those settings. The researchers found that teachers consistently preferred peripheral-type layouts (characterized by computer workstations positioned along the walls of a room) because such layouts allowed teachers to monitor student work to ensure that the students were constantly engaged in the learning tasks while using the computers. Students also preferred this type of layout as it allowed easy movement and interaction among them as they worked on their projects or assignments.

Second, effective professional development provides teachers with opportunities for active learning. Active learning can take a number of forms, including the opportunity to observe expert teachers in action (Garet et al., 2001). One possible method for novice teachers to observe expert teachers in action is through the use of a “buddy system” strategy where novice teachers work together with expert teachers in a classroom using technology (Lim & Khine, 2006). For example, a novice teacher can observe a more knowledgeable colleague using technology in a content-specific area, a strategy that Ertmer (2005) referred to as vicarious experiences.

Third, effective professional development is situated in teachers’ needs (Dexter & Anderson, 2002; Keller, Bonk, & Hew, 2005). Granger et al. (2002) found that “just-in-time” professional development is the most influential factor contributing to teachers’ integration of technology into their classrooms. “Just-in-time” professional development, rather than “just-in-case” development (Schrum, 1999), may gain more teacher acceptance because it addresses teachers’ immediate concerns and is thus consistent with their needs (Granger et al., 2002). This need-to-know approach to constructing technology knowledge and skills can transform teachers into active knowledge builders possessing substantial autonomy regarding the specific skills required (Granger et al., 2002). An example of how professional development for in-service K-12 teachers can build upon the tenets of situative learning perspectives has been provided by Keller et al. (2005).

Reconsidering assessment

Because curriculum and assessment are closely intertwined, there is a need to either completely reconsider the assessment approaches when technology is integrated into the school curriculum, or consider more carefully how the use of technology can meet the demands of standards-based accountability. To address the former, alternative modes of assessment strategies may be formulated. For example, Bowman et al. (2001) found that one teacher created a contract with students detailing what they were expected to submit as part of their final grade. The contract indicated how many PowerPoint slides would be produced and evidence of how the information was obtained. Other teachers developed protocols for creating electronic portfolios of student work that would be evaluated and assessed during the school year.

Although the use of alternative modes of assessment is a possible strategy, there is still a need to consider how technology can be used to meet the current demands of standards-based accountability. Dexter and Anderson (2002) provided some examples of how schools can achieve this, mainly by closely aligning the technology to their state’s curriculum standards. Newsome Park Elementary School, for instance, had received a warning from its state department of education concerning its students’ low scores related to the Standards of Learning (SOL). The school then made it a major priority to align the district’s curricular content and requirements and its use of technology to the state’s SOLs. Specifically, the school decided to implement technology-supported project-based learning using wireless laptops through three distinct phases: planning, fieldwork, and celebration of learning. For example, in the planning phase, students brainstormed, under the teachers’ guidance, the specific questions they wanted to answer. The teachers then planned how they could address the SOLs through the students’ project work. Anderson and Dexter (2003) reported that teachers were pleased to find that they could let the students set the direction (hence increasing students’ motivation toward learning) and still make significant gains on the state’s SOL examinations, indicating that technology-supported project-based learning might have played a key role in the improvement of student outcomes.

Current knowledge gaps and recommendations for future research

Based on the analysis of related research, we now discuss several current knowledge gaps and provide recommendations for future research related to barriers and strategies of integrating technology for instructional purposes. In discussing these knowledge gaps, it is useful to adopt Ertmer et al.’s (1999) notion of first- and second-order barriers to achieve a more parsimonious classification of the barriers. First-order barriers are obstacles that are external to teachers, while second-order barriers are intrinsic to teachers (Ertmer et al., 1999). This notion can also be extrapolated to strategies (Table 2).

Barriers

The first knowledge gap is associated with the relationships between the first- and second-order barriers: How much do we exactly know about how first- and second-order barriers interact and influence each other in hindering the integration of technology for instructional purposes? In the present literature review, the study by Ertmer et al. (1999) was unique in that it examined the relationship between the two classifications of barriers in more detail rather than merely highlighting that the barriers are related to one another. Many researchers have thought that second-order barriers cause more difficulties than the first-order ones (e.g., Ertmer, 1999; Ertmer et al., 1999). The danger of this assumption is that educators and administrators may be led to assume that overcoming second-order barriers is enough. As noted by Zhao et al. (2002), there are “serious problems with the current effort to prepare teachers to use technology. Most of the current efforts take a very narrow view of what teachers need to use technology-some technical skills and a good attitude” (p. 511). Having technical skills and a good attitude might help to overcome second-order barriers. However, Fig. 1 suggests that second- and first-order barriers are so inextricably linked together that it is very difficult to address them separately. For example, trying to change teachers’ attitudes and beliefs (a second-order barrier) toward using technology is likely to be futile in the long run if one does not seriously consider changing the way students are currently assessed through high-stakes national examinations (a first-order barrier) that discourage using technology during the assessment. Future research should therefore examine the relationships between the first- and second-order barriers in greater detail. For example, how valid are the relationships among the various barriers shown in Fig. 1? How do these relationships change over time?
Future research should also investigate other barriers that may need to be considered, especially when a one-to-one student-to-computer ratio is achieved.

It would also be useful to compare and contrast our model shown in Fig. 1 with other existing models. For example, in Rogers’ (2000) model, six main barriers are shown: (a) stakeholder attitudes and perceptions, (b) stakeholder development, (c) availability and accessibility of technology, (d) technical support, (e) funding, and (f) time. All of Rogers’ (2000) barriers are represented in our model, with the exception of “funding.” The lack of funding was not highlighted in our model because it was not explicitly mentioned in the studies we reviewed. Perhaps this is because lack of funding is implicitly expressed in the barriers already mentioned (e.g., lack of technology, lack of technical support, or lack of professional development).

There is also a need for research to examine specific barriers of technology integration in greater detail. We highlight the barrier of teacher beliefs in our discussion. As previously mentioned, teachers’ beliefs may include their educational beliefs about teaching and learning (i.e., pedagogical beliefs), and their beliefs about technology. Making the distinction between beliefs and knowledge, Ertmer (2005) considers teacher pedagogical beliefs as the final frontier in our quest for technology integration because of the assumption that beliefs are far more influential than knowledge in predicting teacher behavior due to the stronger affective components often associated with beliefs (Nespor, 1987). Other scholars, however, disagree. Baker, Herman, and Gerhart (1996), for example, suggested that teachers’ content knowledge and pedagogical knowledge are the prime influence on whether and how teachers use technology. Perhaps the appropriate question to address with regard to this disagreement is under what conditions beliefs and knowledge will exert the main influence on teachers’ use of technology. Research conducted in other settings showed that knowledge can be a better predictor than beliefs with regard to certain tasks (e.g., predicting the studying behavior of undergraduate students) (Trafimow & Sheeran, 1998).

With regard to teachers’ beliefs about technology, there is a need to develop clear operational definitions of such beliefs. Currently, different researchers view teacher beliefs about technology differently, thus complicating efforts by researchers and educators to interpret findings across studies. For example, Ertmer et al. (1999) view teacher beliefs about technology primarily in relation to the curriculum: Is technology used to reinforce skills, enrich current topics, or extend topics beyond current levels? O’Dwyer, Russell, and Bebell (2004), on the other hand, consider teacher beliefs about technology in terms of whether it harms students (e.g., computers have weakened students’ research skills) or benefits students (e.g., computers help students grasp difficult concepts).

Integration strategies

The second knowledge gap is related to the relationships between the strategies. Research has shown that successful technology integration requires a holistic approach that addresses both first- and second-order strategies (Dexter & Anderson, 2002; Eshet et al., 2000). Zhao et al.’s (2002) study, for example, investigated factors needed for classroom technology integration, revealing that factors or strategies related to the teacher, the technology project, and the school context were interrelated. Interestingly, the researchers found that second-order factors associated with the teacher (e.g., teachers’ knowledge and skills of the broader computing system requirements associated with the use of a specific technology), appeared to play a more significant role in contributing to classroom technology integration efforts than other factors such as having access to technological infrastructure, or support from peers. Future research should be conducted to examine this claim.

There is also a crucial need to learn more about certain strategies. We highlight two in our discussion: subject culture and assessment, and technology integration plan. We concur with Hennessy et al. (2005) that hitherto little research has been conducted to examine how and why subject cultures affect the use of technology. Studies by Goodson and Mangan (1995), Hennessy et al. (2005), and Selwyn (1999) were the three exceptions that attempted to provide more detailed analysis and discussion of the reasons underlying why technology use appears to be more biased toward subjects such as business, and design and technology, rather than simply highlighting subject matter differences in technology applications. In short, these studies corroborate the notion that subject cultures can be an important barrier that hinders teachers’ use of technology in their teaching. However, none of these studies investigated specific strategies that can be used to overcome subject culture barriers. There is therefore a need for further research to investigate how teachers could use technology specifically in the case that technology is incongruous with a particular subject culture. Interestingly, there is evidence showing that use of technology is not widespread even in subject cultures that appear to be congruous with technology. For example, Williams et al. (2000) found that mathematics and science teachers used technology relatively less frequently than teachers of social and aesthetic subjects. However, no explanation was provided by Williams et al. (2000) for the discrepancies found.

In addition, because subject cultures are closely influenced by how students are assessed, future research is needed to examine the use of alternative modes of assessment that can accommodate students’ use of technology. Probably the most pressing need is for more research to investigate how the use of technology can fit with the current demands of standards-based accountability.

With regard to technology integration planning, Mulkeen (2003) found that Irish schools that regularly updated their technology plans had significantly more use of technology in subject areas than those that did not. However, nothing was mentioned about the nature and actual frequency of such updates. Further research should be conducted to verify Mulkeen’s (2003) findings, as well as address in greater depth the nature of the updates that lead to certain schools having significantly greater uses of technology for instructional purposes.

It is also important to examine the potential drawbacks of each integration strategy. For example, although the strategy of encouraging teachers to collaborate to create technology-integrated lesson plans and materials could help teachers save time (Lim & Khine, 2006), collaboration in itself can be difficult to achieve given that teachers have many other responsibilities to which they need to attend in a school day. Zhao et al. (2002) reported that teachers who were less dependent on other teachers (i.e., less reliant on the cooperation, participation, or support of other people) tended to have greater success in integrating technology in their classrooms. Similarly, the strategy of having students work cooperatively in groups and rotating them through the small number of classroom computers can itself be difficult to design and deliver effectively (Nath & Ross, 2001). For example, studies indicate caution about the conditions that favor success in cooperative group work (Rogers & Finlayson, 2004). In particular, groups must have the ability to organize themselves in ways that integrate the contributions of all members. How a teacher structures the tasks and organizes and manages productive cooperative group work in relation to technology use is an area that needs further study. Acknowledging the drawbacks is essential for teachers and school administrators to make informed decisions about the strategies they are considering implementing. Future efforts should therefore be expended in examining the efficacy and feasibility of these strategies (especially over a long period of time), leading perhaps to some empirically based guidelines as to how these strategies can be optimally employed.

Another point regarding strategies is that none of the previous studies we examined included discussion of findings in relation to past evidence about the integration of a prior technology (e.g., instructional television). Findings from the integration of past technologies may help today’s researchers and educators better understand the factors that can facilitate the integration of current computing devices for instructional purposes. In an attempt to determine if there are any differences between the integration of computing devices and the integration of a past technology into teaching and learning, we examined Chu and Schramm’s (1967) work that summarizes the findings of research on instructional television. We found that much of what had been written about strategies (and barriers) for integrating instructional television for instructional purposes was similar to the current strategies (and barriers) for integrating computing devices. For example, strategies such as providing adequate technology planning and time, and training for the classroom teacher were considered important for the integration of instructional television into the curriculum. However, there is one key issue that appears to suggest why, despite the barriers (e.g., teacher attitudes and beliefs), instructional television was used widely and effectively in certain quarters. This difference is related to the size and urgency of an educational problem, rather than integration strategy. As Chu and Schramm (1967) stated: “If the objective is obviously important… it is easier for the classroom teacher to put aside his objections, make his schedule fit, learn the new role. If the objective is not urgent… it is easier for a classroom teacher to drag heels” (p. 18).
Examples of sizeable and urgent problems included the need to teach large numbers of students in remote areas (e.g., in certain sections of Italy and Japan) where instructional television was the only technology that could be used efficiently. Similarly, perhaps the way that barriers to integrating computing devices for instructional purposes can be overcome is not by examining more strategies but through the occurrence of events that exclude or discourage usage of other media.

Stages of technology integration

The third knowledge gap is related to the barriers and strategies associated with the different stages of technology integration by teachers. Some researchers see technology integration by teachers as an evolutionary process rather than a revolutionary one (Hokanson & Hooper, 2004; Rogers, 2000; Zhao et al., 2002). Hokanson and Hooper (2004), for example, postulated that technology integration occurs along different stages: (a) familiarization, (b) utilization, (c) integration, (d) reorientation, and (e) evolution. A survey conducted by Rogers (2000) with 507 art teachers found that certain barriers were more prevalent at certain stages. For example, first-order barriers such as availability and accessibility of technology were most likely to be encountered by teachers at the beginning stages (e.g., familiarization and utilization). Additional research is needed to validate Rogers’ findings and conclusions about the barriers in other schools and subject areas to determine if the findings are typical of all teachers at the beginning stages or strongly dependent on the specific subject areas. Other knowledge gaps related to the stage theory of technology integration include the following: (a) it is unclear whether the stages were derived from long-term observations of individual teachers or represent levels that different teachers occupied at a certain point in time, and (b) it is unclear how individual teachers make leaps of progress from one stage to another and the strategies used to help them do so (Windschitl & Sahl, 2002).

One-to-one computing learning environments

The fourth knowledge gap is associated with barriers and strategies in K-12 contexts where every student is provided with a computer for use in the classroom or school (i.e., one-to-one computing learning environments). One-to-one learning environments are typically made possible in a number of ways, including the use of laptops for every student (e.g., Sclater, Sicoly, Abrami, & Wade, 2006; Windschitl & Sahl, 2002), mobile laptop carts (Grant et al., 2005; Russell et al., 2004), or handheld devices (van ‘t Hooft, Diaz, & Swan, 2004). Since a growing body of literature suggests that a high ratio of computers to students (e.g., laptops for every student) may change the teaching and learning dynamics in the classroom (Garthwait & Weller, 2005), it is possible that one-to-one computing learning environments also introduce new barriers. Hence, new strategies may need to be formulated to overcome these new barriers.

Current studies on laptop integration have largely focused on comparing student achievement scores (e.g., reading scores), student writing and problem solving skills, frequency of technology use, types of activities for which the technology was used (e.g., searching the Internet), motivation, or classroom structure between classrooms that had laptops (1:1 student:computer ratio) and classrooms that had several students per computer (e.g., Lowther et al., 2003; Sclater et al., 2006). Other studies compared classrooms that had 1:1 laptops on a permanent basis with classrooms that shared a mobile cart of laptops on a temporary basis (Russell et al., 2004). Strategies to overcome the barriers to using laptops or handheld devices were typically not the main focus. One exception is the study by Garthwait and Weller (2005), which sought to examine the factors that facilitate as well as hinder teachers in using laptops in a Maine classroom. However, there were limitations to Garthwait and Weller’s study: convenience sampling of only two teachers, and a study context limited to science-math content areas. Future research should examine in greater breadth and depth the barriers and strategies for using laptops and handheld computing devices (e.g., Palm(TM)), using a larger sample and other subject content areas.

Types and quality of previous studies

Finally, we discuss the types and quality of past research studies that have been conducted on technology integration. Using the types of research study categorization frameworks of Ross and Morrison (1995), as well as Knupfer and McLellan (1996), the 48 studies may be categorized as follows: (a) 38 were descriptive studies, (b) three were correlational studies, (c) four were a mixture of descriptive and correlational studies, and (d) three were quasi-experiments.

The quality of past research studies on technology integration appeared to have one or more of the following four main limitations: (a) incomplete description of methodology, (b) reliance on self-reported data, (c) short-term duration, and (d) a primary focus on the teacher and what went on in the classroom. First, regarding the incomplete description of methodology, 12 of 48 studies did not report the research duration. Reporting the duration is important because it informs the reader whether the study is short-term or long-term. We suggest that there are benefits to conducting longitudinal studies on technology integration. In addition, 7 of 48 studies did not report the number of participants involved, and 21 of 22 studies that used observations as a means to gather data did not

Maternal Plasma Interleukin-6, Interleukin-1[Beta] and C-Reactive Protein As Indicators of Tocolysis Failure and Neonatal Outcome After Preterm Delivery

By Skrablin, Snjezana; Lovric, Helena; Banovic, Vladimir; Kralik, Saska; et al.

Abstract

Objective. To investigate whether maternal serum interleukin-6 (IL-6), interleukin-1β (IL-1β) and high-sensitivity C-reactive protein (CRP) could be used as markers of tocolysis failure and adverse neonatal outcome in pregnancies with preterm labor (PL).

Methods. Forty-seven maternal blood samples taken because of PL at admission and delivery were analyzed. Control samples were taken from 20 gravidas with normal pregnancies. Differences in interleukins and CRP levels with or without chorioamnionitis, connatal infection or periventricular leukomalacia (PVL) were analyzed. Cut-off values were estimated for prediction of tocolysis failure and adverse neonatal outcome.

Results. All three parameters were significantly higher in patients delivering prematurely than in patients delivering at term. All three parameters were also significantly higher with than without histologic chorioamnionitis.

Conclusions. Maternal blood IL-6 and CRP could become useful in predicting tocolysis failure and intrauterine threat to the fetus.

Keywords: Prematurity, interleukin-6, interleukin-1β, C-reactive protein, connatal infection, periventricular leukomalacia

Introduction

Intrauterine infection is thought to play a key role in the pathogenesis of about one third of preterm labors [1,2]. If present it is frequently followed by serious neonatal complications, periventricular leukomalacia (PVL) being the most deleterious [1- 4]. Overt infection is infrequent, but subclinical infection has been demonstrated in about 25% of patients with preterm labor and intact membranes and even more frequently with preterm premature rupture of membranes (PPROM). The diagnosis is sometimes very difficult and often hampered by the absence of an accurate diagnostic test.

It has recently been suggested that cytokines could be promising markers, since they have been shown to be mediators of the early phase of the local inflammatory response [1,2]. They activate prostaglandin production, an important path in the onset and propagation of myometrial contractility [5-7]. In pregnancies complicated by overt intrauterine infection, elevated amniotic fluid [8,9], maternal [7,10-12] and fetal [13-15] inflammatory cytokine levels have been found. It may be speculated that silent intrauterine infection could also be diagnosed by use of maternal blood inflammatory cytokine levels [6,7].

In the present investigation we aimed to determine whether maternal serum interleukin-6 (IL-6), interleukin-1β (IL-1β) and high-sensitivity C-reactive protein (CRP) estimated at diagnosis of preterm labor (PL) without clinically overt intrauterine infection could be used as markers of chorioamnionitis, connatal infection or PVL. We attempted to determine whether maternal plasma interleukin levels could aid in recognizing the risk of connatal infection and brain damage. We also attempted to estimate the cut-off interleukin levels for predicting tocolysis failure and unfavorable neonatal outcome.
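Cut-off estimation of this kind is commonly performed with receiver-operating-characteristic (ROC) analysis. The paper does not state its exact procedure, so the sketch below is only a hedged illustration of one standard approach, choosing the threshold that maximizes Youden's J (sensitivity + specificity − 1); all marker values are invented, not the study's data.

```python
# Hypothetical sketch of biomarker cut-off selection via Youden's J.
# All values below are invented for illustration; they are NOT study data.

def youden_cutoff(positives, negatives):
    """Return (threshold, J) maximizing Youden's J over observed values.

    positives: marker values in the outcome group (e.g., tocolysis failure)
    negatives: marker values in the comparison group
    A test result is called positive when the marker value is >= threshold.
    """
    best_t, best_j = None, -1.0
    for t in sorted(set(positives) | set(negatives)):
        sens = sum(x >= t for x in positives) / len(positives)
        spec = sum(x < t for x in negatives) / len(negatives)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Invented IL-6 values (pg/mL)
failure = [8.2, 11.5, 14.0, 9.7, 20.3]   # delivered despite tocolysis
success = [2.1, 3.4, 5.0, 4.2, 2.8]      # tocolysis succeeded
cutoff, j = youden_cutoff(failure, success)
print(cutoff, j)
```

With these invented, perfectly separated groups the chosen cut-off is the lowest value in the outcome group; real data would yield J well below 1 and a trade-off between sensitivity and specificity.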

Materials and methods

Patients

A prospective trial was performed at the Department of Obstetrics and Gynecology, Medical School in Zagreb, a tertiary referral center for perinatal medicine in Croatia, during the time period August 2003 to August 2004. Out of 332 preterm labors in that period we randomly selected 47 pregnant women with preterm contractions (PC) or PPROM but without any sign of overt clinical intra-amniotic infection. All patients were admitted between 27 and 33 weeks of gestation.

Preterm labor was diagnosed when 10 or more uterine contractions per hour were documented by uterine activity monitoring, when cervical effacement followed by dilatation was identified, or when rupture of fetal membranes was diagnosed by sterile speculum examination showing amniotic fluid leakage or by fern testing. Gestational age was determined from the date of the last menstrual period and confirmed by ultrasound measurement of crown-rump length in the first trimester.
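Menstrual dating as described above is simple date arithmetic: gestational age is the elapsed time since the last menstrual period (LMP), conventionally reported as completed weeks plus days. A minimal sketch (dates invented for illustration):

```python
# Minimal sketch of menstrual dating: gestational age = time since the last
# menstrual period (LMP). Both dates below are invented for illustration.
from datetime import date

def gestational_age(lmp, on):
    """Return (completed weeks, remainder days) from LMP to a given date."""
    days = (on - lmp).days
    return days // 7, days % 7

weeks, days = gestational_age(date(2004, 1, 5), date(2004, 8, 2))
print(f"{weeks}+{days}")  # -> 30+0, i.e., within the study's 27-33 week window
```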

All patients were hospitalized and 5 mL of blood from the cubital vein was collected immediately following admission for cytokine and CRP analysis. Urinalysis and cervical smears were also carried out for all the patients for microbiologic evaluation, including aerobic and anaerobic bacteria, genital mycoplasmas and Chlamydia trachomatis, according to methods described previously [16,17]. Gravidas with signs of overt intrauterine infection at admission (fever, uterine tenderness, bloody discharge, leukocytosis and elevated band count), those with positive urinalysis and cervical smears, suspected or proved viral infection, immunodeficiency, as well as those with known immunologic or other inflammatory diseases were not included in the study. The control group consisted of 20 healthy pregnant women who delivered at term. All their children were healthy. Maternal plasma samples for IL-1β, IL-6 and high-sensitivity CRP were taken at routine laboratory blood sampling at 27-34 weeks of gestation, and again immediately before term delivery.

Parenteral tocolysis with ritodrine was started immediately after admission, and dexamethasone was administered for fetal lung maturation at a dose of 12 mg daily for three consecutive days in all patients. All the patients delivered prematurely: 12 deliveries were completed by cesarean section because of abnormal CTG tracings, and the remaining 35 delivered vaginally because of intractable contractions.

Two of the 47 women developed overt clinical intrauterine infection by the time of delivery. Again, 5 mL of maternal blood from the cubital vein was collected immediately before preterm birth for IL-1β, IL-6 and high-sensitivity CRP determination. All placentas were analyzed pathohistologically for the presence of chorioamnionitis. Acute chorioamnionitis was defined as the presence of acute inflammatory changes on pathohistological examination of the placenta, membranes and umbilical cord, according to previously published criteria [16].

Neonatal blood was also analyzed microbiologically. Newborns underwent ultrasound brain examination (Kretz SA 6000C) within three weeks of birth. White matter lesions were diagnosed by the standard transfontanellar approach according to the following criteria: presence of cystic lesions within the periventricular white matter, or persistent, abnormally increased white matter echogenicity with definitive periventricular tissue loss. Neonates showing a lesion or a suspected lesion were followed up with serial scans every seven days until the lesion was stable [18]. All ultrasound examinations were performed by an experienced neonatologist. Connatal infection was diagnosed in newborns with either bacteriologically proven sepsis or early-onset clinical sepsis requiring antibiotic therapy irrespective of bacteriological data, and with pneumonia, as previously described [19,20].

Sample preparation and IL-6, IL-1β and high-sensitivity CRP measurements

A 5-mL sample of blood from the maternal cubital vein was collected at the time of admission (at a regular visit in the control group) and during the active phase of labor. Samples were drawn into a dry vacutainer and, after 30 minutes, centrifuged at 3000 rpm for 15 minutes. Supernatants were removed and stored at −20°C until assayed.

Immunoassays: IL-1β and IL-6 were measured using enzyme-linked immunosorbent assay kits (R&D Systems, UK). The assays do not cross-react with any other known cytokine. The sensitivity of the test used for IL-1β was 1 pg/mL, and for IL-6 it was 0.7 pg/mL. CRP was determined by particle-enhanced turbidimetric immunoassay (PETIA): latex particles coated with antibody to CRP aggregate in the presence of CRP in the sample, the increase in turbidity that accompanies aggregation is proportional to the CRP concentration, and the concentration is determined by means of a mathematical function. The sensitivity of the assay is 0.5 mg/L.
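The turbidity-to-concentration mapping described above can be sketched as a calibration-curve lookup: readings for known standards define the curve, and an unknown sample is interpolated on it. This is an illustrative reconstruction, not the analyzer's actual algorithm; the standard points and the reading below are invented.

```python
# Hypothetical sketch of a turbidimetric readout mapped to a CRP concentration
# via linear interpolation between calibration standards (invented values).

def interpolate_concentration(turbidity, standards):
    """standards: list of (turbidity, concentration) pairs."""
    pts = sorted(standards)
    if turbidity <= pts[0][0]:
        return pts[0][1]
    if turbidity >= pts[-1][0]:
        return pts[-1][1]
    for (t0, c0), (t1, c1) in zip(pts, pts[1:]):
        if t0 <= turbidity <= t1:
            # Linear interpolation between the two bracketing standards.
            return c0 + (c1 - c0) * (turbidity - t0) / (t1 - t0)

# Invented calibration points (turbidity units, CRP mg/L):
standards = [(0.0, 0.0), (0.10, 5.0), (0.25, 15.0), (0.50, 40.0)]
print(interpolate_concentration(0.175, standards))  # midway between 5 and 15 mg/L
```

Real analyzers typically fit a nonlinear calibration function rather than piecewise-linear segments; interpolation is used here only to make the idea concrete.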

Statistical analysis

The t-test was used to evaluate the differences in IL-1β, IL-6 and high-sensitivity CRP between the study and control populations. A value of p < 0.05 was considered statistically significant.
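A minimal sketch of the two-sample comparison the analysis describes, here as Welch's unequal-variance t statistic built from the standard library. The paper does not specify which t-test variant or software was used, and the data values below are invented.

```python
# Welch's two-sample t statistic (unequal variances), standard library only.
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """t statistic comparing the means of two independent samples."""
    va, vb = variance(a), variance(b)        # sample variances (n-1 denominator)
    se = sqrt(va / len(a) + vb / len(b))     # standard error of the mean difference
    return (mean(a) - mean(b)) / se

# Invented IL-6 levels (pg/mL) in preterm-labor vs. control gravidas:
pl = [50.0, 42.0, 61.0, 38.0, 55.0]
ctrl = [8.0, 11.0, 6.0, 9.0, 12.0]
print(welch_t(pl, ctrl) > 0)  # positive t: higher mean in the preterm-labor group
```

In practice the statistic is compared against a t distribution (with Welch-Satterthwaite degrees of freedom) to obtain the p value; that step is omitted here for brevity.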

Receiver operating characteristic (ROC) curves were constructed to estimate the diagnostic indices (sensitivity and specificity) for IL-6, IL-1β and CRP cut-off values in the prediction of tocolysis failure, preterm delivery, histologic chorioamnionitis, neonatal infection and periventricular leukomalacia. Cut-off values were approximated from the optimal sensitivity and specificity level defined by the ROC curves.
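One common way to read an "optimal" cut-off from a ROC curve is to maximize Youden's J (sensitivity + specificity - 1) over candidate thresholds. The paper does not state its exact criterion, so the sketch below, with invented marker values, shows only one plausible reading of that procedure.

```python
# Choose a cut-off from ROC-style analysis by maximizing Youden's J.
def roc_cutoff(values, labels):
    """values: marker levels; labels: 1 = outcome present, 0 = absent.
    Returns (cut-off, sensitivity, specificity) at the maximal J."""
    best = None
    for cut in sorted(set(values)):          # each observed level as a candidate
        tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= cut)
        fn = sum(1 for v, y in zip(values, labels) if y == 1 and v < cut)
        tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < cut)
        fp = sum(1 for v, y in zip(values, labels) if y == 0 and v >= cut)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1                  # Youden's J for this threshold
        if best is None or j > best[0]:
            best = (j, cut, sens, spec)
    return best[1], best[2], best[3]

# Invented IL-6 levels (pg/mL) and chorioamnionitis status (1 = present):
il6 = [12, 18, 25, 31, 40, 55, 8, 15, 22, 60]
chorio = [0, 0, 0, 1, 1, 1, 0, 0, 0, 1]
print(roc_cutoff(il6, chorio))
```

With these toy data the positives and negatives separate perfectly, so the chosen threshold yields 100% sensitivity and specificity; real marker distributions overlap, which is why the paper reports trade-offs such as 60% sensitivity at 100% specificity.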

Results

Forty-seven pregnant women were admitted to our institution because of symptoms of preterm labor (PL) not accompanied by any sign of intra-amniotic or other infection. Rupture of the fetal membranes (PPROM) was diagnosed in 27 patients (57%) and uterine contractions (PC) in the remaining 20 gravidas (43%). By the time of delivery, clinically overt intrauterine infection had occurred in only two of the patients with PPROM, and they were treated with antibiotics. In both, pathohistological analysis of the placenta revealed acute chorioamnionitis, and the children suffered connatal group B streptococcal sepsis and later developed periventricular leukomalacia (PVL). Only three of the 27 pregnancies with PPROM and 13 of the 20 with PC could be prolonged for more than 48 hours. Histological chorioamnionitis was diagnosed in 23 (48.9%) placentas. Twenty-five infants suffered connatal infection, and 11 (23%) developed PVL. No child died. All children who developed PVL came from PPROM pregnancies, all suffered connatal infection, and all their placentas revealed chorioamnionitis. Tocolysis failed within 48 hours in all pregnancies with proven chorioamnionitis or later neonatal PVL. All women in the control group delivered at term and all their children remained healthy. There were no differences in age or parity between the study and control groups.

At admission, mean maternal plasma IL-6, IL-1β and high-sensitivity CRP levels in the patients admitted because of PL were significantly higher than in the control group of healthy gravidas. Likewise, the respective cytokine and high-sensitivity CRP plasma levels were significantly higher in PL gravidas immediately before preterm delivery than in healthy gravidas immediately before term delivery. At admission with PPROM, mean plasma IL-6 was 50.0 pg/mL, IL-1β 3.5 pg/mL and high-sensitivity CRP 20.4 mg/L; the respective values at admission with PC were 21.9 pg/mL, 2.8 pg/mL and 6.0 mg/L. The differences in IL-6, IL-1β and high-sensitivity CRP plasma levels between PC and PPROM patients at both admission and delivery are highly significant. In PPROM gestations that could be prolonged for more than 48 hours, IL-6, IL-1β and high-sensitivity CRP levels were significantly lower than in those that failed tocolysis and delivered within 48 hours. In the PC group, levels of all three parameters were higher in patients who failed tocolysis and gave birth within 48 hours of admission, but the difference compared with those whose pregnancy could be prolonged was not significant. There was no difference in admission levels of any of the three markers between the PPROM and PC groups when the pregnancy could be prolonged for more than 48 hours.

Chorioamnionitis was found in 23 of the 47 placentas. Maternal plasma IL-6, IL-1β and high-sensitivity CRP levels, both at admission and at preterm delivery, were significantly higher in the presence than in the absence of chorioamnionitis (p < 0.05).

Significant increases in mean IL-6 and high-sensitivity CRP levels at delivery, in comparison with admission, were observed in both the PL and control groups. A significant increase in IL-1β at term delivery, in comparison with levels at 27–33 weeks of gestation, was also observed in the control group of healthy gravidas. The increase in IL-1β levels at delivery, in comparison with admission values, was not significant in the PL group (Figure 1).

Cut-off values for all three markers from the ROC curve analysis were used for the prediction of chorioamnionitis, connatal infection and PVL, as well as of tocolysis failure, in the PPROM group. An IL-6 level of 27.5 pg/mL has 60% sensitivity and 100% specificity, an IL-1β level of 2.9 pg/mL has 83% sensitivity and 64% specificity, and a high-sensitivity CRP level of 10.8 mg/L has 87% sensitivity and 100% specificity for the prediction of tocolysis failure in PPROM patients. A maternal admission IL-6 value of 29.1 pg/mL has 91% sensitivity and 84% specificity in predicting chorioamnionitis, a value of 27.8 pg/mL has 80% sensitivity and 69% specificity in predicting connatal infection, and a value of 50.9 pg/mL has 81% sensitivity and 91% specificity in predicting PVL. The respective cut-off high-sensitivity CRP levels show exceptionally high specificity and sensitivity in predicting chorioamnionitis, connatal infection and PVL (Table II).

Table I. Maternal blood IL-6, IL-1β and CRP with preterm premature rupture of membranes, preterm contractions and complications (mean plasma levels at admission and range).

Discussion

Preterm birth is still the leading perinatal problem in the world. In most cases, especially in pregnancies up to 34 weeks of gestation, physicians are left with a difficult decision: to deliver promptly, or to manage expectantly with hospitalization, bed rest, and uncertain, only marginally effective attempts to delay delivery by arresting contractions, until there is unequivocal evidence of infection, fetal jeopardy or proof of fetal lung maturation [1,21].

Nowadays we are faced with increasing evidence of a connection between intrauterine infection and brain damage [9,13,22], most frequently in preterm [23] but also in term infants [12]. Infection can start early in gestation, remain clinically silent, and still be very dangerous for fetal neurological outcome [22]. Inflammatory cytokines, e.g., tumor necrosis factor (TNF), IL-6 and IL-1β, have been implicated as mediators in the development of PVL, a major risk factor for cerebral palsy [13]. The decision on how to manage pregnancy with PC or PPROM, where the possibility of hidden, silent infection is very high [13,21-24], is therefore even more complicated. Diagnosis of silent intrauterine infection is difficult, but it is a prerequisite for proper clinical management.

There are a number of methods for the detection of intrauterine infection, but they often take several days and show low sensitivity and specificity [1,25]. After Romero et al. showed elevated levels of certain cytokines in the amniotic fluid of women with microbial invasion of the intra-amniotic cavity [26,27], many investigators tried to estimate discriminative levels of amniotic fluid IL-6, IL-1β, interferons, TNF [8,16] and IL-8 [12] in patients in preterm labor with and without concomitant intra-amniotic infection or chorioamnionitis. More recently, maternal serum inflammatory cytokine concentrations in patients with premature rupture of membranes [14,21] and preterm contractions [10,12,28], as well as in term labors [7], have also been analyzed.

We decided to measure the inflammatory cytokines IL-1β and IL-6, together with high-sensitivity CRP, in a group of patients with symptoms of PL but without any clinical sign of infection. The levels were correlated with the presence or absence of chorioamnionitis and connatal infection, as well as with PVL as the final outcome. We tried to determine whether any or all of the analyzed markers could aid in the diagnosis of silent intrauterine infection and hence improve the direction of clinical decisions. Our results point strongly towards significant differences in plasma levels of IL-6, IL-1β and CRP between patients with PL and healthy gravidas, and between preterm and term labor, and the correlation of elevated levels of IL-6, IL-1β and high-sensitivity CRP with chorioamnionitis, connatal infection and final PVL is striking. The reliability of each marker is high, but that of IL-6 appears to be the best. The levels also show an interesting correlation with tocolysis failure, especially in PPROM patients; namely, there was no difference in the admission level of any marker between PPROM and PC patients if the pregnancy could be prolonged for more than 48 hours. Also, in pregnancies with similarly low mean marker levels, tocolysis success could be expected, together with a low risk of neonatal infectious complications, especially PVL. This finding could prove very important.

Figure 1. Differences between mean ± SD maternal plasma IL-6, IL-1β and CRP at admission (A, open bars) and delivery (D, closed bars) in patients (N) with preterm labor and the control group.

According to the results presented, as long as inflammatory markers are low it should be possible to administer tocolysis with a reasonable expectation of pregnancy prolongation for at least the three-day corticosteroid treatment, without fear of newborn infectious complications. Maternal blood cut-off concentrations at admission of 50.9 pg/mL for IL-6, 3.3 pg/mL for IL-1β and 19.7 mg/L for CRP offer an opportunity to predict newborn PVL with great sensitivity and specificity. Similarly, Hatzidaki et al. [21] showed that a maternal blood IL-6 cut-off level of 81 pg/mL at delivery was highly accurate in detecting early neonatal sepsis. However, our finding of elevated plasma IL-6, IL-1β and high-sensitivity CRP in patients with symptoms of preterm labor is only partially consistent with some other reports [7,11,22,29]. Lencki et al. found no significant differences in maternal IL-6 and IL-1β levels between patients delivering prematurely with and without clinical chorioamnionitis [10]. Also, in the study of Salafia et al. [15] of patients failing tocolysis, maternal serum cytokine levels were not associated with the presence or severity of histologic evidence of acute placental inflammation. Shimoya et al. could not find elevated IL-6, IL-1β or IL-8 in maternal serum with either term or preterm histological chorioamnionitis [12]. Bahar et al. observed no significant difference in the concentrations of IL-6, IL-8, TNF-alpha and interferon-gamma between PL patients and healthy gravidas, or between women in PL with ruptured and with intact membranes [28]. However, in the study of Lencki et al. [10] a certain proportion of patients with subclinical infection could have been placed in the group without clinical infection, and in the study of Salafia et al. [15] only patients who failed tocolysis were analyzed.

The decidua is thought to be the primary source of inflammatory cytokines in human reproductive tissue, as it contains the largest pool of immunocompetent cells in this area [8,30]. The finding of elevated IL-6 and IL-1β not only in amniotic fluid but also in maternal serum with intrauterine infection could therefore be expected. Moreover, increases in CRP [30] and in IL-6 were previously suspected to precede overt clinical infection [11]. Although small in number of patients, our group represents a uniquely homogeneous group of gravidas with PL but without clinical signs of intrauterine infection. Further studies are obviously required to clarify the differences among the literature reports. According to our results, and some previous results [15,31], at least a portion of the patients who failed tocolysis could have suffered a subclinical infection that contributed to the failure of tocolytic therapy, and tocolysis failure could be predicted by a high inflammatory cytokine level.

Table II. Sensitivity and specificity of maternal plasma IL-6, IL-1β and CRP at admission in the prediction of tocolysis failure, chorioamnionitis, connatal infection and periventricular leukomalacia.

During pregnancy there is an alteration of maternal immunity within the uterus, where innate, proinflammatory immune responses are tightly regulated to prevent immunological rejection of the fetal allograft. Disruption of this delicate cytokine balance by bacteria or other factors increases the production of proinflammatory cytokines at the maternal-fetal interface and activates the parturition mechanism prematurely [32]. Hence, a parallel increase in IL-1β, IL-6 and CRP could be expected not only with inflammation. IL-1β is released from gestational tissues in response to bacterial products or other stimuli. Together with TNF-alpha, it stimulates amnion and decidua cells to produce prostaglandin E2. Simultaneously, decidual cells respond to IL-1β with increased IL-6 production. IL-6, in turn, stimulates hepatic C-reactive protein production and at the same time augments prostaglandin E2 production. Thus IL-1β, IL-6 and CRP all play a role in the host response to infection-induced [33], but also immunologically induced, preterm labor [34-36]. According to our results, if significantly elevated they may discriminate between infection-induced PL and PL precipitated by other factors. Beyond critical levels, IL-6 and CRP are highly sensitive and specific for the prenatal detection of chorioamnionitis and connatal infection and, most importantly, for estimating the risk of PVL when no other clinical sign of intrauterine infection exists.

These data show a direct connection between preterm labor with high cytokine levels and placental and connatal infection with final PVL. Not all preterm labors are associated with infection, but maternal IL-6, IL-1β and CRP levels are still significantly higher than in normal term labors. A maternal type 1 cytokine bias as the cause of preterm labor in a certain proportion of patients [34] delivering prematurely could explain the occurrence of such high inflammatory cytokine levels. With our results, the differences in IL-6, IL-1β and CRP levels can be used to discriminate preterm labor complicated by infection from that precipitated by numerous other causes [35,36], thus enabling us to undertake optimal therapeutic modalities. Caution is needed, however, because activation of the cytokine network may cause white matter damage even when bacteriological data are negative and placental inflammation does not exist [1,9,20,23].

Maternal plasma cytokine levels increase in both term and preterm labor, but the levels, and the magnitude of the increase, are significantly greater in preterm labor. Especially high levels are registered in patients with PPROM [21]. Although some authors suggest and report different molecular pathways for preterm labor and PPROM [1], the connection of high cytokine levels with proven neonatal infection points to an association of the majority of PPROM cases with intrauterine inflammation.

In conclusion, as previously shown for amniotic fluid concentrations [8,9,37,38] and umbilical blood levels [13-15], our results demonstrate that maternal IL-6, IL-1β and high-sensitivity CRP levels provide information about the risk of fetal infection and PVL in preterm labor. The process responsible for at least some cases of PVL and cerebral palsy evidently begins during intrauterine life without clinical signs of overt infection, implying that effective strategies for the prevention of cerebral palsy associated with PVL must begin in utero.

References

1. Gibbs RS, Romero R, Hillier SL. A review of premature birth and subclinical infection. Am J Obstet Gynecol 1992;166:1515-1528.

2. Kim CJ, Yoon BH, Park SS. Acute funisitis of preterm but not term placentas is associated with severe fetal inflammatory response. Hum Pathol 2001;32:623-629.

3. Romero R, Chaiworapongsa T, Espinoza J. Fetal plasma MMP-9 concentrations are elevated in preterm premature rupture of the membranes. Am J Obstet Gynecol 2002;187:1125-1130.

4. Friebe-Hoffmann U, Chiao JP, Rauk PN. Effect of IL-1beta and IL-6 on oxytocin secretion in human uterine smooth muscle cells. Am J Reprod Immunol 2001;46:226-331.

5. Fortunato S, Menon R. Distinct molecular events suggest different pathways for preterm labor and preterm rupture of membranes. Am J Obstet Gynecol 2001;184:1399-1406.

6. Alvarez-de-la-Rosa M, Rebollo FJ. Maternal serum interleukin 1, 2, 6, 8 and interleukin-2 receptor levels in preterm labor and delivery. Eur J Obstet Gynecol Reprod Biol 2000;88:57-60.

7. Greig PC, Murtha A, Jimmerson CJ, Herbert WN, Roitman-Johnson B, Allen J. Maternal serum interleukin-6 during pregnancy and during term and preterm labor. Obstet Gynecol 1997;90:465-469.

8. Fortunato SJ, Menon RP, Swan KF, Menon R. Inflammatory cytokine (interleukin 1, 6, and 8 and tumor necrosis factor-alpha) release from cultured human fetal membranes in response to endotoxic lipopolysaccharide mirrors amniotic fluid concentrations. Am J Obstet Gynecol 1996;174:1855-1862.

9. Yoon BH, Jun JK, Romero R, Park KH, Gomez R, Choi JH, Kim IO. Amniotic fluid inflammatory cytokines (interleukin-6, interleukin-1beta and tumor necrosis factor), neonatal brain white matter lesions and cerebral palsy. Am J Obstet Gynecol 1997;177:19-26.

10. Lencki SG, Maciulla MB, Eglinton GS. Maternal and umbilical cord serum interleukin levels in preterm labor with clinical chorioamnionitis. Am J Obstet Gynecol 1994;170:1345-1351.

11. Murtha AP, Greig PC, Jimmerson CE, Roitman-Johnson B, Allen J, Herbert WN. Maternal serum interleukin-6 concentrations in patients with preterm premature rupture of membranes and evidence of infection. Am J Obstet Gynecol 1996;175:966-969.

12. Shimoya K, Matsuzaki N, Taniguchi T, Okada T, Saji F, Murata Y. Interleukin-8 level in maternal serum as a marker for screening of histological chorioamnionitis at term. Int J Obstet Gynecol 1997;57:153-159.

13. Yoon BH, Romero R, Yang SH, Jun JK, Kim IO, Choi JH, Syn HC. Interleukin-6 concentrations in umbilical cord plasma are elevated in neonates with white matter lesions associated with periventricular leukomalacia. Am J Obstet Gynecol 1996;145:1433-1440.

14. Buscher U, Chen CK, Pitzen A. IL-1beta, IL-6, IL-8 and G-CSF in the diagnosis of early-onset neonatal infections. J Perinat Med 2000;28:383-388.

15. Salafia CM, Sherer DM, Spong CY. Fetal but not maternal serum cytokine levels correlate with histologic acute placental inflammation. Am J Perinatol 1997;14:419-422.

16. Yoon BH, Romero R, Kim CJ, Jun JK, Gomez R, Choi JH, Syn HC. Amniotic fluid interleukin-6: A sensitive test for antenatal diagnosis of acute inflammatory lesions of preterm placenta and prediction of perinatal morbidity. Am J Obstet Gynecol 1995;172:960-970.

17. Skrablin S, Goluza T, Kuvacic I, Zagar L, Banovic V. First trimester microbiology of the cervix and the outcome of pregnancies at high risk for prematurity. Gynecol Perinatol 2002;11:143-149.

18. Yoon BH, Jun JK, Romero R, Park KH, Gomez R, Choi JH, Kim IO. Amniotic fluid inflammatory cytokines (interleukin-6, interleukin-1beta, and tumor necrosis factor-alpha), neonatal brain white matter lesions, and cerebral palsy. Am J Obstet Gynecol 1997;177:19-26.

19. Wu YW, Escobar GJ, Grether JK, Croen LA, Green JD. Chorioamnionitis and cerebral palsy in term and near term infants. JAMA 2003;290:2677-2684.

20. Miller LC, Sana I, LoPeste G. Neonatal interleukin-1beta, interleukin-6 and TNF: Cord blood levels and cellular production. J Pediatr 1990;117:961-965.

21. Hatzidaki E, Gourgiotis D, Manoura A, Korakaki E, Bossios A, Galanakis E, Giannakopoulou C. Interleukin-6 in preterm premature rupture of membranes as an indicator of neonatal outcome. Acta Obstet Gynecol Scand 2005;84:632-638.

22. Leviton A, Paneth N. White matter damage in preterm newborns: an epidemiologic perspective. Early Hum Dev 1990;24:1-22.

23. Iida K, Takashima S, Takeuchi Y. Etiologies and distribution of neonatal leukomalacia. Pediatr Neurol 1992;8:205-209.

24. Verma U, Tejani N, Klein S, Reale MR, Beneck D, Figueroa R, Visintainer P. Obstetric antecedents of intraventricular hemorrhage and periventricular leukomalacia in the low-birthweight neonate. Am J Obstet Gynecol 1997;176:275-281.

25. Ohlsson A, Wang E. An analysis of antenatal tests to detect infection in preterm premature rupture of membranes. Am J Obstet Gynecol 1990;162:809-818.

26. Romero R, Manogue KR, Mitchell MD, Wu YK, Oyarzun E, Hobbins JC, Cerami A. Infection and labor. IV. Cachectin-tumor necrosis factor in the amniotic fluid of women with intraamniotic infection and preterm labor. Am J Obstet Gynecol 1989;161:336-341.

27. Romero R, Yoon BH, Mazor M, Gomez R, Gonzalez R, Diamond MP, Baumann P, Araneda H, Kenney JS, Cotton DB, et al. A comparative study of the diagnostic performance of amniotic fluid glucose, white blood cell count, interleukin-6 and Gram stain in the detection of microbial invasion in patients with preterm premature rupture of membranes. Am J Obstet Gynecol 1993;169:839-850.

28. Bahar AM, Ghalib HW, Moosa RA, Zaki ZM, Thomas C, Nabri OA. Maternal serum interleukin-6, interleukin-8, tumor necrosis factor-alpha and interferon-gamma in preterm labor. Acta Obstet Gynecol Scand 2003;82:543-549.

29. Laham N, Rice GE, Bishop GJ, Hansen MB, Bendtzen K, Brennecke SP. Elevated plasma interleukin-6: A biochemical marker of human preterm labor. Gynecol Obstet Invest 1993;36:145-147.

30. Dudley DJ, Trautman MS, Araneo B, et al. Decidual cell biosynthesis of interleukin-6: Regulation by inflammatory cytokines. J Clin Endocrinol Metab 1992;74:884-889.

31. Kuvacic I, Skrablin S, Fuduric I, et al. Common laboratory tests in predicting infectious morbidity in patients with preterm labor. Gynecol Perinatol 1992;1(Suppl):3-9.

32. Cherouny PH, Pankuch GA, Botti JJ, Applebaum PC. The presence of amniotic fluid leukoattractants accurately identifies histologic chorioamnionitis and predicts tocolytic efficacy in patients with idiopathic preterm labor. Am J Obstet Gynecol 1992;167:683-688.

33. Peltier MR. Immunology of term and preterm labor. Reprod Biol Endocrinol 2003;1:122-129.

34. Makhseed M, Raghupathy R, El-Shazly S, et al. Proinflammatory maternal cytokine profile in preterm delivery. Am J Reprod Immunol 2003;49:308-318.

35. Gibb W, Challis JR. Mechanisms of term and preterm birth. J Obstet Gynaecol Can 2002;24:874-883.

36. Ruiz RJ, Fullerton J, Dudley DJ. The interrelationship of maternal stress, endocrine factors and inflammation on gestational length. Obstet Gynecol Surv 2003;58:415-428.

37. Lamont RF. Looking to the future. BJOG 2003;110(Suppl 20):131-135.

38. Jacobsson B, Mattsby-Baltzer I, Andersch B, et al. Microbial invasion and cytokine response in amniotic fluid in a Swedish population of women with preterm prelabor rupture of membranes. Acta Obstet Gynecol Scand 2003;82:423-431.

SNJEZANA SKRABLIN1, HELENA LOVRIC1, VLADIMIR BANOVIC1, SASKA KRALIK2, ALEKSANDAR DIJAKOVIC1, & DRZISLAV KALAFATIC1

1Department of Perinatal Medicine, Medical School, Zagreb, Croatia and 2Department of Biochemistry, Medical School, Zagreb, Croatia

(Received 10 November 2006; revised 18 December 2006; accepted 18 December 2006)

Correspondence: Snjezana Skrablin, MD, PhD, University of Zagreb Medical School, Department of Perinatal Medicine, Petrova 13, 10000 Zagreb, Croatia. Tel: +385 1 4810866. Fax: +385 1 4633512. E-mail: ivan.kuvacic@zg.htnet.hr

Copyright Taylor & Francis Ltd. Apr 2007

(c) 2007 Journal of Maternal – Fetal & Neonatal Medicine. Provided by ProQuest Information and Learning. All rights Reserved.

Conjoined Twins: Historical Perspective and Report of a Case

By Kokcu, Arif; Cetinkaya, Mehmet B; Aydin, Oguz; Tosun, Migraci

Abstract

In this article we review the historically important cases of conjoined twins (Biddenden Maids, Siamese twins, Blazek sisters) and contemporary knowledge regarding incidence, etiopathogenesis, antenatal diagnosis, antenatal management, and outcome of conjoined twins. We also present a case of male cephalothoracoomphalopagus, which is extremely rare.

Keywords: Biddenden maids, Siamese twins, Blazek sisters, cephalothoracoomphalopagus

A historical review of conjoined twins

In classical times, gross anomalies were considered a warning from the gods. St. Augustine took the view that they were a reminder from God of man's imperfection and original sin. Scholars, including Aristotle, Hippocrates, Empedocles, and Pliny the Elder, thought that the unborn child was susceptible to external stimuli. Such extrinsic factors were blamed when a case of craniopagus twins was born in sixteenth-century Germany: apparently, the pregnant mother had clashed heads with a neighbor [1].

Artistic representations of the human body date back 15 000 years. From this earliest period of art, the ill and the deformed were portrayed almost as often as the healthy and vigorous. Given the superstition and fear that must have accompanied conjoined births, and their rarity, it would come as no surprise if such births had never been portrayed. Nevertheless, excavations at Tlatilco, a small Mexican village that existed about 3000 years ago, have revealed remarkably accurate clay sculptures of a wide range of facial and cranial duplications. Many of these artifacts are small female figurines with small waists and breasts, short phocomelic arms, and bulging thighs. Although most of the figurines have normal faces, some have double faces with a shared, central, cyclopic eye and normal lateral eyes. Others have separate faces, and a few are fully dicephalic (double-headed) with separate necks on a single body [2]. Tlatilco was part of the Olmec cultural world, sharing its maize agriculture, iconography, and much else from that widespread society. However, these small diprosopus (partial facial duplication) and dicephalic statues appear only at Tlatilco and nowhere else in Olmec art. Although representations of 'monstrous' beings are common in all traditional iconography, the faces and heads from Tlatilco are interesting because they are developmentally and proportionately correct: they are not just impossible hybrids, such as centaurs. The reports of unexplained clusters of conjoined-twin births around the world make the biologic accuracy of these Tlatilco figures particularly tantalizing [2].

Since ancient times, the entity of conjoined twins has fascinated both lay and medical people alike. Anecdotal reports of viable conjoined twins in European medical history date back more than 1000 years. According to tradition, the Biddenden Maids, Mary and Eliza Chulkhurst, were born to fairly wealthy parents in the year AD 1100. Their bodies were joined at the hips and shoulders. They were naturally very close friends, although they sometimes disagreed in minor matters, and had “frequent quarrels, which sometimes terminated in blows”. In AD 1134, when the Maids had lived joined together for 34 years, Mary was suddenly taken ill and died. It was proposed that Eliza should be separated from her sister’s corpse by means of a surgical operation, but she refused with the words “As we came together we will also go together”, and herself died six hours later. In their will, the Maids left certain parcels of land in Biddenden, containing in all about 20 acres, to the church wardens of that parish; the annual rent from the fields, which is stated to have been 6 guineas at the time of the Maids’ death, was to provide an annual dole for the poor. In 1808, the income from the Maids’ lands had increased to 31 guineas and 11 shillings per annum. While the Biddenden Maids have been extensively cited in the teratological literature as one of the earliest genuine cases of conjoined twins upon record, some antiquaries have considered the tradition to be wholly fabulous [3].

There is a remarkable accumulation of reports of English conjoined twins at the beginning of the 12th century. According to Lycosthenes' Prodigiorum ac Ostentorum Chronicon from 1557, conjoined twin brothers had been born in England in 1112, and their bodies were joined at the hips and "ad superiores partes", as in the popular descriptions of the Biddenden Maids. Even more interesting is that a medieval historical chronicle, the Chronicon Scotorum, tells us that in AD 1099 a woman gave birth to "two children together, in this year, and they had but one body from the breast to the navel, and they were two girls". The Irish chronicle Annals of the Four Masters gives an almost identical description, although the conjoined twin girls are stated as having been born in 1103; in the Annals of Clonmacnoise their year of birth is given as AD 1100. These ancient descriptions are unreliable in detail and probably dependent on each other, but in spite of this, they add some credibility to the old tradition that the Biddenden Maids were really born in AD 1100 [3].

The first well-known case was not documented until 1811, when two boys, Chang and Eng, were born in Bangkok, Thailand, attached to each other at the sternum. P.T. Barnum named them the "Siamese twins". As they traveled the world with Barnum's circus, they consulted a multitude of physicians. All, including Rudolf Virchow, concluded that separation would be fatal to both. This prognosis may have been welcomed by the twins, because their wealth and fame depended on their conjoined state. At the age of 31, these xiphopagus twins married two sisters, who bore them a total of 21 children; the twins died within hours of each other at the age of 61. An autopsy found that they shared no major organs, only a small amount of liver tissue, peritoneum, and the hypogastric artery and vein. Death probably came to the surviving twin not from fright, as initially stated, but from slow exsanguination, as blood flowing into the already dead twin was not returned [2,4-6].

A young couple prepared for the birth of their second child: the Blazeks were a middle-class Bohemian farming family who lived in the tiny village of Skrejsov in the central part of the present-day Czech Republic. The expectant mother was 22 years old, and the father-to-be was 38. The delivery was vaginal and began uneventfully. Rosa and Josepha Blazek were born in Bohemia on January 20, 1878, as pygopagus twins, Rosa being delivered first. Word quickly spread throughout Bohemia of these unusual sisters. Even as toddlers, the Blazek sisters were exhibited in local village fairs “doing their duty on a single potty”. They began to walk in their second year and could speak by age 4 years. Their schooling was considered normal instruction by contemporary standards and was overseen by home-teachers. They learned to play the violin at a young age. Menarche occurred simultaneously for both girls, one month before their 14th birthday [4,5,7].

The Blazek case exemplifies an extreme manifestation of the terminal monogenital pygopagus type of conjoined twins. Their pelves and sacrum were united at the lower posterior part, although the vertebral columns were not parallel. In the genital region, the sisters’ degree of fusion was maximal. The sisters shared an anus that was situated in a common anal groove formed by the two buttocks on which the sisters lay. A common urethral ostium existed in the midline immediately dorsal to the single clitoris and labia minora. Beneath the shared urethral ostium was the vagina, which appeared to develop as an asymmetric fusion of two separate vaginas separated by a nearly continuous longitudinal membrane 0.75 cm thick. These two distinct vaginas communicated proximally with two normal-sized but separate uteri. Each uterus had two fallopian tubes, and the ovarian anatomy was normal [7].

The sisters claimed to have had sexual intercourse only once, on July 20, 1909. Both sisters agreed to coitus, and both achieved orgasm. Intercourse involved the vagina of Rosa only, and her last menses before pregnancy occurred in mid-July 1909. Interestingly, Josepha continued to menstruate from her side of their shared vagina throughout Rosa’s pregnancy and until approximately gestational week 32. A healthy male infant was delivered vaginally on April 16, 1910 [7]. From 1910 through 1922, the Blazek sisters continued their worldwide circuit of exhibition travel, and no records exist to describe their medical care during this period. In early 1922, however, their touring was unexpectedly interrupted in Chicago when Rosa became ill with a serious cough. This soon developed into influenza, and Rosa’s convalescence required three weeks. Josepha then experienced abdominal pain and jaundice that was initially thought to be appendicitis. With a differential diagnosis including cholecystitis, the sisters were admitted to the West End Hospital of Chicago on March 25, 1922, under the care of Dr Benjamin Breakstone [7]. The sisters’ temperatures and pulse rates rose, although Josepha’s were consistently higher than Rosa’s. As Josepha’s condition steadily deteriorated, surgery to separate the twins was considered in order to save Rosa’s life. However, before an operative plan could be developed, Josepha became comatose. Within 24 hours, Rosa was also unresponsive. Pulmonary compromise worsened in both and quickly advanced to “terminal bronchopneumonia”. On March 30, 1922, five days after hospitalization, Josepha died. The death of Rosa quietly followed 12 minutes later. Their age at death was 43 years [7].

Autopsy of the Blazek sisters was performed on April 2, 1922. Dissection revealed situs inversus involving Rosa’s liver (a condition not suspected before death). Furthermore, the forensic investigations verified independent abdominal cavities for the sisters; no vascular communication between the aortas or inferior venae cavae could be found. Any vascular anastomoses that existed between the sisters must have involved only the iliac vessels or their branches. This limited vascular connection may help explain why only selected endocrine mediators or toxins of sufficient bioactivity or half-life could manifest their effects in both sisters [7].

A further difficulty in accepting the tradition of the Biddenden Maids as entirely authentic is the nature of their malformation: in the available plates and drawings, they are depicted as being conjoined at both the shoulders and the hips. It is most uncommon for conjoined twins to have two separate sites of conjunction, and very few such cases have been reported; most teratologists would not accept the possibility of a fusion at both the hips and the shoulders. In 1895, the teratologist J.W. Ballantyne was the first to consider the Biddenden Maids from a teratological point of view. He suggested that they were in fact conjoined only at the hips and thus belonged to the teratological type pygopagus. In such conjoined twins, each twin has two arms and two legs, and it has often been noted that, in order to walk without difficulty, they put their arms around each other’s shoulders. About 18% of conjoined twins are of the pygopagus teratological type, in which the twins are joined at the sacrum. The twins have more or less complete fusion of the rectum and other perineal structures, but the spinal cords are usually separate. The first successful surgical separation of pygopagus twins was performed in 1950 [3].

The Hungarian sisters, Helena and Judith, were a celebrated pair of pygopagus conjoined twins in the 18th century; they traveled extensively through Europe, and were examined by many eminent naturalists. In later life, the Hungarian sisters entered a convent, where they died aged 22 years in 1723 [3].

Incidence

Conjoined twins are regarded as an ‘extraordinary accident of nature’. The exact frequency of conjoined twins is not established, and the estimated incidence varies in the literature. All conjoined twins are monoamniotic, monochorionic, monozygotic twins. Spontaneous twinning occurs in 1.6% of all human pregnancies, of which 1.2% are dizygotic and 0.4% are monozygotic. Monochorionic-monoamniotic twins account for less than 1% of monozygotic twins, and conjoined twins are even less common, occurring in approximately 1 in 50 000 to 100 000 births and 1 in 600 twin births. Conjoined twinning is three times more common in female fetuses than in males. It is believed that 1 in 40 monozygotic twins fails to separate completely, yielding united or conjoined twins. Throughout the years, there have been more than 1000 reports concerning conjoined twins. It has been reported that the incidence of conjoined twins does not vary with maternal age, parity, or race, while the recurrence risk seems to be negligible. On the other hand, the frequency has been reported as 1 in 14 000 births in India and Africa and 1 in 250 000 live births in Europe and the USA, suggesting an increased incidence in black populations [4,7-14].
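Because the incidence figures above are quoted in mixed formats (percentages of pregnancies, 1-in-N ratios for different populations), it can help to place them on a common per-100,000-births scale. The following is a purely illustrative arithmetic sketch, not part of the original report; the labels and helper function are our own:

```python
# Convert the incidence figures quoted above to a common
# per-100,000-births scale for side-by-side comparison.

def per_100k(one_in_n):
    """Convert a '1 in N' ratio to events per 100,000 births."""
    return 100_000 / one_in_n

# Spontaneous twinning rates quoted in the text (% of pregnancies):
# total twinning = dizygotic + monozygotic
total_twinning, dizygotic, monozygotic = 1.6, 1.2, 0.4
assert abs(total_twinning - (dizygotic + monozygotic)) < 1e-9

# Reported conjoined-twin incidences in different populations
rates = {
    "overall, low estimate (1 in 100 000)": per_100k(100_000),
    "overall, high estimate (1 in 50 000)": per_100k(50_000),
    "India/Africa (1 in 14 000)": per_100k(14_000),
    "Europe/USA (1 in 250 000)": per_100k(250_000),
}
for label, rate in rates.items():
    print(f"{label}: {rate:.2f} per 100 000 births")
```

On this scale the spread of the reported estimates, from 0.4 to roughly 7 per 100,000 births, makes the population differences noted in the text easy to see.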

Classification

Conjoined twins are classified on the basis of the most prominent site of union together with the suffix pagus, which means fixed. Ventral unions occur 87% of the time and are classified as: cephalopagus (11%), thoracopagus (19%), omphalopagus (18%), ischiopagus (11%), and parapagus (28%). Dorsal unions occur in 13% of conjoined twins and are classified as: craniopagus (5%), rachiopagus (2%), and pygopagus (6%).
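The subtype frequencies listed above can be tabulated and checked for internal consistency (the ventral types summing to 87% and the dorsal types to 13%). This is an illustrative sketch of that bookkeeping; the dictionary names are our own:

```python
# Tabulate the conjoined-twin subtype frequencies quoted in the text
# and verify that they sum to the stated ventral/dorsal totals.

ventral = {"cephalopagus": 11, "thoracopagus": 19, "omphalopagus": 18,
           "ischiopagus": 11, "parapagus": 28}
dorsal = {"craniopagus": 5, "rachiopagus": 2, "pygopagus": 6}

assert sum(ventral.values()) == 87   # ventral unions: 87% of cases
assert sum(dorsal.values()) == 13    # dorsal unions: 13% of cases
assert sum(ventral.values()) + sum(dorsal.values()) == 100

# The single most frequent subtype in this classification
all_types = {**ventral, **dorsal}
most_common = max(all_types, key=all_types.get)
print(most_common)  # parapagus
```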

Cephalopagus

Fused from the top of the head down to the umbilicus. There are two faces on opposite sides of the conjoined head. The lower abdomen and pelvis are not united, and there are four arms and four legs.

Thoracopagus

United face to face from the upper thorax down to the umbilicus. The union always involves the heart. The pelvises are not conjoined, and there are four arms and four legs.

Omphalopagus

Fetuses are joined face to face, primarily in the area of the umbilicus. The union often includes the lower thorax but never the heart or a single patent intercardiac vessel. The pelvises are not united, and there are four arms and four legs.

Ischiopagus

United from the umbilicus down to a large conjoined pelvis with two sacrums and two symphyses pubis. There are four arms and four legs, and the external genitalia and anus are always involved.

Parapagus

Parallel duplication of two notochords in close proximity will produce twins conjoined laterally. They are always joined anterolaterally, pulled together ventrally by closure of the umbilicus, and are always single at the caudal end but double at the cranial end [6,8,10,12,15].

Craniopagus

United on any portion of the skull except the face and foramen magnum. The trunks are not united, and there are four arms and four legs.

Rachiopagus

Fused above the sacrum; the union may involve the occiput as well as the vertebral column.

Pygopagus

Share the sacrococcygeal and perineal regions, usually one anus with two rectums, four arms, and four legs.

Etiopathogenesis

Conjoined twinning is an extremely rare complication of monozygotic twins that results from incomplete embryonic division occurring between 13 and 15 days after conception. Conjoined twins occur if twinning is initiated after the embryonic disc and rudimentary amniotic sac have been formed. Because conjoined twins develop after differentiation of the chorion and amnion, all conjoined twins are monochorionic-monoamniotic [10,12].

The precise etiology of conjoined twinning is unknown. In the 18th century, it was believed that conjoined twins were produced when one ovum was fertilized by two sperms (collision theory). Currently, two main theories have been proposed to account for the formation of conjoined twins. The most common explanation is incomplete fission of a single zygote; the alternative is fusion of two dizygotic or monozygotic embryos very early in embryonic development. On the basis of the fusion explanation, routine cytogenetic analysis of all conjoined twins, including teratomas, has been recommended. On the basis of molecular evidence, however, the fusion theory has been critically challenged: molecular analysis with informative genetic markers found no genetic difference between the autosite and the parasitic fetus, data consistent with a monozygotic origin of (parasitic) conjoined twins. Other authors have hypothesized that the proximity of the ovum and its first polar body after dispermic fertilization may cause a parasitic conjoined twin of dissimilar sex [4,8,9,13,16,17].

Assisted reproductive technologies (ART) are extensively used worldwide, producing an increasing number of high-order multiple gestations, which continue to pose a considerable management challenge to clinicians. The phenomenon of increased rates of monozygotic twinning after ART has been found in all types of ART, including conventional in vitro fertilization (IVF), intracytoplasmic sperm injection (ICSI), blastocyst culture, and assisted hatching. The incidence has been recorded as 1.35% of all ART pregnancies, or two to eight times more common than in the general population. The rarity of conjoined twins precludes any statistical analysis of the relative risk of its happening after ART, but it seems reasonable to consider that this type of twinning would be more likely to occur after ART than in the general population. The combination of ICSI and conjoined twins has also been reported [13].

Monozygotic twinning may coincide with X chromosome inactivation, implying that lyonization may be a driving force for female monozygotic twinning. For this reason, the excess of females among monozygotic twins is most marked among conjoined twins: more than 70% of all conjoined twins are female. The nine sets in Rudolph et al. [18] were all female, as were the cephalothoracopagus conjoined twins presented by Wedberg et al. [5] in 1979 and Turgut et al. [9] in 1998. Zeng et al. [19] reported that genetic or environmental factors other than abnormal X-inactivation must also be involved in causing monozygotic multiple gestation or conjoined twins. Steinman [20] reported that a significant number of the women who delivered conjoined twins had been subjected to environmental triggers; of particular interest is increased conjoined twinning following the use of oral contraceptives. The incidence of uniovular twinning is inversely related to a woman’s prepregnancy weight, which has been hypothesized to result from prolonged ovulatory dysfunction in underweight women. It has been reported that factors that induce calcium depression and delayed implantation encourage uniovular duplication in general and conjoined twinning in particular [20].

Diagnosis

Ultrasound has become the safest and most reliable way to make this diagnosis in utero. Prenatal diagnosis has increasingly been reported for conjoined twins since the first report in the late 1970s. The diagnosis of conjoined twins can be made by ultrasound in the first trimester. At this early gestational age, the diagnosis should be suspected if the embryonic pole appears bifid. However, the diagnosis should be made with caution in the first trimester and, when suspected, follow-up imaging should be performed to confirm the diagnosis. Additional sonographic features of conjoined twins that may be apparent in the first and second trimesters include an inability to separate the fetal bodies and skin contours, lack of a separating membrane between the twins, the presence of more than three vessels in the umbilical cord, heads remaining at the same level and body plane, extremities in unusual proximity, and failure of the fetuses to change their relative positions over time. Polyhydramnios is found in approximately 50-76% of cases, but usually not in the first trimester. Duplication of any anatomical parts and persistence of the inseparable parts on repeated scans confirms the diagnosis. The evaluation of conjoined twins should include a detailed ultrasound examination, including a fetal echocardiogram, at 18-20 weeks to determine the extent of shared organs and to exclude additional anomalies. Associated anomalies, even in organs unrelated to the conjoining, are not uncommon. Further information may be gained by magnetic resonance imaging (MRI) evaluation. A diagnosis of the anatomy of shared organs and the presence of additional malformations is essential for counseling families regarding outcome and planning postnatal surgical separation. Because chromosomal abnormalities are rare in conjoined twins, karyotyping is generally not indicated [4,6,8,10,21].

Although two-dimensional (2D) ultrasound is instrumental in prenatally diagnosing conjoined twins, precise classification is difficult because of the complex three-dimensional (3D) structures. 3D ultrasound has shown promise in improving the visualization of complex anatomic spatial relationships. It may therefore be beneficial in defining the complex fetal anatomy of conjoined twins, whereby early diagnosis facilitates selective termination [11]. 3D ultrasound has the potential to add anatomic information and is valuable for improving classification accuracy. The category of conjoined twin is usually already suspected on 2D scanning and only confirmed by 3D ultrasound. In cases of conjoined twins, the majority of parents opt for a termination irrespective of the site of twin attachment. Within this context, the additional value of 3D scanning seems limited to those cases in which the parents opt for conservative prenatal management and postnatal surgery. In these cases, the 3D ultrasound examination should be postponed until after 14 weeks of gestation, when better views of the anatomy can be obtained. However, it is likely that other imaging techniques such as MRI will provide more accurate and valuable information for prenatal counseling and the planning of postnatal surgery than second trimester 3D ultrasound [14].

Management

A thorough targeted ultrasound examination, including a careful evaluation of the point of connection and the organs involved, is essential before counseling is provided. Surgical separation of nearly complete conjoined twins may be successful when organs essential for life are not shared. Color Doppler and 3D ultrasound can be used effectively to complement 2D imaging, to confirm the diagnosis, and to determine the extent of organ sharing and the definitive classification of conjoined twinning. 3D sonography also provides images that are easier for parents to understand, which can help them with decision-making. Since fetal soft tissues are so well visualized with today’s sonographic equipment, invasive imaging procedures such as amniography should no longer be necessary. Consultation with a pediatric surgeon often facilitates parental decision-making [4,6,22-25].

Outcome

Survival of conjoined twins depends largely on the site of conjoining and the organs involved. In a series of 14 cases of prenatally diagnosed conjoined twins, 28% of cases died in utero, 54% died immediately after birth, and only 18% survived. Emergent surgical separation is performed in the event that one twin dies, or if a life-threatening condition in one twin threatens the life of the co-twin; in these cases, the survival rate is reported as 30-50%. Elective separation, which usually occurs at 2-4 months of age, allows for stabilization of the twins, confirmation of anatomic relationships, diagnosis of previously unrecognized anomalies, and adequate planning of surgery. Most series report a survival rate of 80-90% for elective separation [10].

Overall, the prognosis depends on the type of fusion and presence of associated structural defects. In early pregnancy, the parents can opt for a medical or surgical termination. After 18-20 weeks, transvaginal termination will become difficult and the delivery could require a major surgical procedure, that is, hysterotomy or classical cesarean section [14].

If the twins are considered to have a poor chance of survival and are small enough to pass through the birth canal without damaging the mother, vaginal delivery may be the preferable option. With near-term-sized twins, however, cesarean delivery seems indicated even if the fetuses are dead [4,26,27].

Our case: A very rare type of conjoined twins, cephalothoracoomphalopagus

A 27-year-old multigravid woman (gravida 4, para 3, 2 living children) was referred to Ondokuz Mayis University Hospital at the 34th gestational week. There was no history of exposure to teratogenic agents during the pregnancy, and she had not consulted any gynecologist before presenting to our clinic. Abdominal ultrasonography showed a fetus with four legs, four arms, a single head, a single thorax, two hearts (one of which was atretic), and a double vertebral column. The biparietal diameter measurement was consistent with 32 gestational weeks, and the amount of amniotic fluid was normal. On the basis of these ultrasound findings, conjoined (cephalothoracopagus) twins were diagnosed. From the patient’s history, we learned that in her first pregnancy fetal ventriculomegaly had been diagnosed and the pregnancy had resulted in fetal death at 24 weeks of gestation; her second and third pregnancies resulted in the births of normal live children. There was no family history of twinning. The couple was informed that this was a surgically inseparable condition and opted for pregnancy termination. Since the pregnancy was advanced, and considering that conjoined twins may cause genital trauma, the delivery was performed by cesarean section. Conjoined twins with one head, one body, four arms, and four legs, with an Apgar score of 1 and weighing 3520 g, were delivered (Figures 1 and 2). The conjoined twins died twenty minutes after delivery.

Figure 1. Postmortem photograph demonstrating anterior aspects of cephalothoracoomphalopagus conjoined twins.

After delivery, external examination of the conjoined twins showed that the twins were joined from the head down to the umbilicus, with one head, one neck, a single thorax, a single upper abdomen, and a single umbilical cord. This appearance was consistent with cephalothoracoomphalopagus. The head and face were exactly in the middle of the conjoined thorax. They had two eyes, one mouth, one nose, and two ears. On the thorax, at the front and at the back, there was a total of four nipples, two on either side. The upper and lower extremities were of normal morphology and in appropriate locations at the front and at the back. They had male external genitalia (Figure 1).

In the postmortem autopsy examination, it was found that the upper part of the body above the level of the umbilicus was single, excluding the extremities. The neck was single, but the brain stem and spinal cord were double. At the base of the skull there were two foramina magna with normal spinal cords. There were two cerebral hemispheres with normal lateral ventricles divided by a falx cerebri in the supratentorial compartment, but there were two posterior cranial fossae containing four cerebellar hemispheres and two brain stems. Each twin had a normal cervical, thoracic, and lumbar spine. There was one thymus, and there were two hearts, one of which was atretic, with left and right lungs. The larynx, trachea, esophagus, stomach, and duodenum were all present and single. There was a single liver, a single pancreas, and two spleens, but the small intestine and colon, found in both abdomens, were conjoined. There was a single umbilical cord containing four vessels (two arteries and two veins). There were two kidneys, two ureters, and two bladders. The karyotype of the conjoined twins was found to be 46,XY.

Figure 2. Postmortem photograph demonstrating posterior aspects of cephalothoracoomphalopagus conjoined twins.

As far as we are aware, this case is the first of cephalothoracoomphalopagus twins of male sex. In the autopsy examination of the conjoined twins, a normal cerebrum consisting of two hemispheres and two cerebral peduncles was seen, and two brain stems joined at the cranial end of the midbrain. Neuropathologic features have shown a resemblance to the findings of the cephalothoracopagus case presented by Turgut et al. [9]. To the best of our knowledge, this is the second case in which such neuropathologic features in this type of conjoined twins have been determined.

Cases of conjoined twins occur so rarely that it is important to learn as much as possible from each case.

References

1. Anderson T. Documentary and artistic evidence for conjoined twins from sixteenth century England. Am J Med Genet 2002;109:155- 159.

2. Kennedy GE. The 3000-year history of conjoined twins. West J Med 2001; 175:176-177.

3. Bondeson J. The Biddenden Maids: A curious chapter in the history of conjoined twins. J R Soc Med 1992;85:217-221.

4. Chitkara U, Berkowitz RL. Multiple gestations. In: Gabbe SG, Niebyl JR, Simpson JL, editors. Obstetrics: Normal and problem pregnancies. 3rd ed. New York: Churchill Livingstone; 1996. pp 821-863.

5. Wedberg R, Kaplan C, Leopold G, Porreco R, Resnik R, Benirschke K. Cephalothoracopagus (Janiceps) twinning. Obstet Gynecol 1979;54:392-396.

6. Cunningham FG, Gant NF, Leveno KJ, Gilstrap LC, Hauth JC, Wenstrom KD. Williams obstetrics. 21st ed. New York: McGraw-Hill; 2001. pp 765-810.

7. Sills ES, Vrbikova J, Kastratovic-Kotlica B. Conjoined twins, conception, pregnancy, and delivery: A reproductive history of the pygopagus Blazek sisters. Am J Obstet Gynecol 2001; 185: 1396-1402.

8. Daskalakis G, Pilalis A, Tourikis I, Moulopoulos G, Karamoutzos I, Antsaklis A. Case report: First trimester diagnosis of dicephalus conjoined twins. Eur J Obstet Gynecol Reprod Biol 2004;112:110-113.

9. Turgut F, Turgut M, Basaloglu H, Basaloglu HK, Haberal A. Extremely rare type of conjoined twins: Cephalothoracopagus deradelphus. Eur J Obstet Gynecol Reprod Biol 1998;80:191-194.

10. Graham GM 3rd, Gaddipati S. Diagnosis and management of obstetrical complications unique to multiple gestations. Semin Perinatol 2005;29:282-295.

11. Suzumori N, Nakanishi T, Kaneko S, Yamamoto T, Tanemura M, Suzuki Y, Suzumori K. Three-dimensional ultrasound of dicephalus conjoined twins at 9 weeks of gestation. Prenat Diagn 2005;25:1063-1064.

12. Vural F, Vural B. First trimester diagnosis of dicephalic parapagus conjoined twins via transvaginal ultrasonography. J Clin Ultrasound 2005;33:364-366.

13. Maymon R, Mendelovic S, Schachter M, Ron-El R, Weinraub Z, Herman A. Diagnosis of conjoined twins before 16 weeks’ gestation: The 4-year experience of one medical center. Prenat Diagn 2005;25:839-843.

14. Pajkrt E, Jauniaux E. First-trimester diagnosis of conjoined twins. Prenat Diagn 2005;25:820-826.

15. Durin L, Hors Y, Jeanne-Pasquier C, Barjot P, Herlicovicz M, Dreyfus M. Prenatal diagnosis of an extremely rare type of conjoined twins: Cranio-rachi-pygopagus twins. Fetal Diagn Ther 2005;20:158-160.

16. Logrono R, Garcia-Lithgow C, Harris C, Kent M, Meisner L. Heteropagus conjoined twins due to fusion of two embryos: Report and review. Am J Med Genet 1997;73:239-243.

17. Fujimori K, Shiroto T, Kuretake S, Gunji H, Sato A. An omphalopagus parasitic twin after intracytoplasmic sperm injection. Fertil Steril 2004;82:1430-1432.

18. Rudolph AJ, Michaels JP, Nichols BL. Obstetric management of conjoined twins. Birth Defects Orig Artic Ser 1967;3:28-37.

19. Zeng SM, Yankowitz J, Murray JC. Conjoined twins in a monozygotic triplet pregnancy: Prenatal diagnosis and X-inactivation. Teratology 2002;66:278-281.

20. Steinman G. Mechanisms of twinning. V. Conjoined twins, stem cells and the calcium model. J Reprod Med 2002;47: 313-321.

21. Van den Brand SF, Nijhuis JG, van Dongen PW. Prenatal ultrasound diagnosis of conjoined twins. Obstet Gynecol Surv 1994;49:656-662.

22. Kim JA, Cho JY, Lee YH, Song MJ, Min JY, Lee HJ, Han BH, Lee KS, Cho BJ, Chun YK. Complications arising in twin pregnancy: Findings of prenatal ultrasonography. Korean J Radiol 2003;4:54-60.

23. Sen C, Celik E, Vural A, Kepkep K. Antenatal diagnosis and prognosis of conjoined twins-a case report. J Perinat Med 2003;31:427-430.

24. Chen CP, Shih JC, Shih SL, Huang JK, Huang JP, Lin YH, Wang W. Prenatal diagnosis of cephalothoracopagus janiceps disymmetros using three-dimensional power Doppler ultrasound and magnetic resonance imaging. Ultrasound Obstet Gynecol 2003;22:299-304.

25. Bonilla-Musoles F, Machado LE, Osborne NG, Blanes J, Bonilla F Jr, Raga F, Machado F. Two-dimensional and three-dimensional sonography of conjoined twins. J Clin Ultrasound 2002;30:68-75.

26. Compton HL. Conjoined twins. Obstet Gynecol 1971;37:27-33.

27. Vaughn TC, Powell LC. The obstetrical management of conjoined twins. Obstet Gynecol 1979;53(3 Suppl):67-70.

ARIF KOKCU1, MEHMET B. CETINKAYA1, OGUZ AYDIN2, & MIGRACI TOSUN1

1 Department of Obstetrics and Gynecology, School of Medicine, University of Ondokuz Mayis, Samsun, Turkey and

2 Department of Pathology, School of Medicine, University of Ondokuz Mayis, Samsun, Turkey

(Received 14 April 2006; revised 7 June 2006; accepted 29 November 2006)

Correspondence: Arif Kokcu, Bahcelievler mah, Abdülhakhamid Cad, Onursal Ap. No. 19/4, 55070 Samsun, Turkey. Tel: +90 0362 4576000 ext. 2452. Fax: +90 0362 4576029. E-mail: arifkokcu

Copyright Taylor & Francis Ltd. Apr 2007

(c) 2007 Journal of Maternal – Fetal & Neonatal Medicine. Provided by ProQuest Information and Learning. All rights Reserved.

LabCorp Announces Contract Extension With CIGNA HealthCare

Laboratory Corporation of America® Holdings (LabCorp®) (NYSE: LH) today announced that it has executed a multi-year clinical laboratory services contract renewal with CIGNA HealthCare. LabCorp will continue to be a contracted laboratory provider in all CIGNA HealthCare markets. Additionally, effective January 1, 2008, LabCorp will no longer be contractually restricted from marketing that the Company is a fully participating, in-network provider to CIGNA HealthCare for all services in all major markets. Additional terms were not disclosed.

“This agreement is important because we will no longer be prohibited from marketing to doctors and patients that we are a participating, in-network provider to all CIGNA HealthCare members and plans in all major markets,” said David P. King, President and Chief Executive Officer of LabCorp. “We welcome the opportunity to compete for CIGNA HealthCare business on a level playing field with all other contracted laboratories. Of course, CIGNA HealthCare’s participating physicians may continue to send all of their work to LabCorp, giving choice to those physicians who prefer using a single high-quality, full-service laboratory. We look forward to continue offering CIGNA HealthCare, and their participating doctors and members, the convenience, innovation and quality that help improve patient care and lead to better health outcomes.”

About LabCorp®

Laboratory Corporation of America® Holdings, an S&P 500 company, is a pioneer in commercializing new diagnostic technologies and the first in its industry to embrace genomic testing. With annual revenues of $3.6 billion in 2006, over 25,000 employees nationwide, and more than 220,000 clients, LabCorp offers clinical assays ranging from routine blood analyses to HIV and genomic testing. LabCorp combines its expertise in innovative clinical testing technology with its Centers of Excellence: The Center for Molecular Biology and Pathology, National Genetics Institute, Inc., ViroMed Laboratories, Inc., The Center for Esoteric Testing, DIANON Systems, Inc., US LABS, and Esoterix and its Colorado Coagulation, Endocrine Sciences, and Cytometry Associates laboratories. LabCorp clients include physicians, government agencies, managed care organizations, hospitals, clinical labs, and pharmaceutical companies. To learn more about our growing organization, visit our Web site at: www.labcorp.com.

Each of the above forward-looking statements is subject to change based on various important factors, including without limitation, competitive actions in the marketplace and adverse actions of governmental and other third-party payors. Actual results could differ materially from those suggested by these forward-looking statements. Further information on potential factors that could affect LabCorp’s financial results is included in the Company’s Form 10-K for the year ended December 31, 2006, and subsequent SEC filings.

Iron Will: 17-Year-Old Bodybuilder Prepares for Contest

By David Schulte, Tulsa World, Okla.

Jun. 6–His bulging biceps, thick chest and sculptured thighs scream that David Taylor is a bodybuilder.

The 17-year-old senior at Broken Arrow High School has been competing for three years, but his passion for the sport goes beyond achieving a chiseled physique.

He believes that working out and eating a healthy diet have shaped his character as much as his body.

“If you have a hard physique, it shows you that you are a hard worker,” Taylor said. “I just like the self-satisfaction that it gives me.

“I feel I can do anything if I put my mind to it, and I can achieve any goal that I attempt.”

His immediate goal is winning the teenage and middle weight divisions in the Oklahoma Bodybuilding & Fitness Championships, which will be held at the Performing Arts Center at Union High School, 6636 S. Mingo Road.

The annual event is a national qualifying event for bodybuilding and fitness competitors.

Taylor began pumping iron for contests when he was 15.

He found bodybuilding appealing because it was an individual sport, and success would be determined by how much effort he put into it.

“I could gauge my progress independently,” he said.

“In a team sport, you may be the all star or standout, but your team may not be victorious.”

In 2005, he entered his first bodybuilding competition in Edmond and placed first in the teenage division.

Last year, he placed second in the teenage division and third in the lightweight division at the contest held at Union.

Taylor lifts weights five days a week and does aerobic exercise to burn body fat seven days a week.

His workout schedule is similar to that of most adult bodybuilders, but he admits that occasionally he falls victim to his youth.

“I tend to overtrain,” Taylor said.

“I always train heavy and hard, pushing every set to (muscle) failure almost, but I need to quit that, because I am going to get injured.”

His training and strict diet require self-discipline, but they have not hurt him academically.

Taylor’s grade point average at Broken Arrow is above 3.5, and he believes that his self-discipline in bodybuilding has carried over to his schoolwork.

“It helps me with school in the fact that I built a regime in my day where I dedicate a certain amount of time for training and for studying,” he said.

Taking classes for college-bound students, he is preparing for a career in sports medicine.

This year, Taylor also took health science technology courses at Tulsa Technology Center Broken Arrow, 4600 S. Olive Ave.

Taylor’s goals in sports medicine are as ambitious as in bodybuilding, where he aspires to be Mr. Olympia — the most prestigious title in the sport.

“I would really like to help in the development in hormone replacement therapy and anti-aging clinics,” Taylor said.

“As people get older, their body slows down a little bit, and you can help them.”

Eileen Luis, a personal trainer in south Tulsa and chairwoman of the Oklahoma National Physical Committee, sponsor of the bodybuilding and fitness competition at Union, described Taylor as a role model for all people.

“David is proof positive that most bodybuilding and fitness competitors are highly successful overachievers that balance busy professional lives with family, school, volunteer work, and tons of other responsibilities,” Luis said.

Luis is not the only one who thinks that Taylor is highly focused for someone so young.

His academic counselor, Judy Grass, described Taylor as a polite, considerate student who has never been a discipline problem.

“He’s a well-mannered young man,” Grass said.

“He takes care of business and is responsible — he is working toward his future.”

For the remainder of the summer, Taylor is studying for a national diet and nutrition test in Orlando, Fla., that could earn him a college scholarship should he achieve a high score, he said.

He is also training for a teenage national bodybuilding championship that will be held in Pittsburgh, Pa., in July.

He believes both activities will help him achieve his goals in bodybuilding and sports medicine.

“I want to be known as a great competitor and a great overall person — someone that you can look up to,” Taylor said.

——

Contest

Oklahoma Bodybuilding & Fitness Championships

When: Saturday. Prejudging starts at 9 a.m.; evening finals competition begins at 6 p.m.

Where: Performing Arts Center at Union High School, 6636 S. Mingo Road

Cost: $15 for prejudging and $30 for finals competition

For more: Call 492-7006

—–

To see more of the Tulsa World, or to subscribe to the newspaper, go to http://www.tulsaworld.com.

Copyright (c) 2007, Tulsa World, Okla.

Distributed by McClatchy-Tribune Information Services.

For reprints, email [email protected], call 800-374-7985 or 847-635-6550, send a fax to 847-635-6968, or write to The Permissions Group Inc., 1247 Milwaukee Ave., Suite 303, Glenview, IL 60025, USA.

AdCare Health Systems, Inc. Announces Definitive Merger Agreement With Family Home Health Services, Inc.

SPRINGFIELD, Ohio, June 6 /PRNewswire-FirstCall/ — AdCare Health Systems, Inc., an Ohio corporation, and Family Home Health Services, Inc., a Nevada corporation, today announced the signing of a definitive merger agreement. The transaction was unanimously approved by the Boards of Directors of both companies and is anticipated to close in the third quarter of 2007, subject to the approval of AdCare’s and Family’s respective stockholders and other customary closing conditions. Family provides home health care services to residents in Florida, Illinois and Michigan and generated over $20,000,000 in home health care revenue in 2006.

When this transaction closes, Family will merge into AdCare and the Family stockholders will receive, in exchange for their stock, AdCare common stock, resulting in the Family stockholders owning 65% of the outstanding common stock of AdCare, or 7,036,953 common shares. Approximately 90% of this stock will be deposited in a three-year voting trust, to be voted upon the recommendations of current management.

After the merger is completed, it is anticipated that: Family’s main operations will be integrated with AdCare’s existing operations; current members of AdCare’s Board will occupy seven (7) of the nine (9) Board positions (with the remaining two positions to be occupied by representatives of Family); and AdCare’s executive officers will remain the same. Kevin Ruark, currently President and CEO of Family, will become President of AdCare’s home health care subsidiary, Assured Home Health, Inc. and its name will be changed to Family Home Health Services, Inc.

David A. Tenwick, Chairman of AdCare, stated, “The combination of AdCare and Family will create a much larger company with operations in Ohio, Florida, Illinois and Michigan. We will focus on facility-management based operations as well as providing a wide variety of ancillary health care services to the residents of those facilities. In addition, we will continue to provide quality health care to the elderly residing in their homes. This will be our first acquisition since our initial public offering (IPO) in November of 2006, and puts us on track to deliver on our short term growth strategy of increasing our revenues, turning profitable and expanding our asset base.”

Kevin Ruark, President of Family, said, “We were preparing to start home health care operations in Ohio, and by joining with AdCare, the combination should produce significant synergies. We, at Family, share the same goals as AdCare, and look forward to participating in the growing business opportunities within the senior living sector.”

Dominick and Dominick, LLC and City Capital Partners, LLC are the investment bankers advising on the transaction.

Where to find additional information about the Merger.

AdCare intends to file with the Securities and Exchange Commission, or the Commission, a Registration Statement on Form S-4 which will include a joint proxy statement/prospectus of AdCare and Family and other relevant materials related to the proposed transaction. The joint proxy statement/prospectus will be mailed to all stockholders of AdCare and Family. Investors and security holders of AdCare and Family are urged to read the joint proxy statement/prospectus and other relevant materials when they become available because they will contain important information about AdCare, Family and the proposed merger. A joint proxy statement/prospectus and other relevant materials (when they become available), and any other documents filed by AdCare with the Commission, may be obtained free of charge at the Commission’s website at http://www.sec.gov/. In addition, investors and security holders may obtain free copies of the documents filed with the Commission by AdCare by contacting:

Ms. Carol Groeber
AdCare Health Systems Inc.
5057 Troy Road
Springfield, Ohio 45502
(937) 964-8974, ext. 27
[email protected]

AdCare and its respective directors and executive officers may be deemed to be participants in the solicitation of proxies from the stockholders of AdCare and Family in favor of the proposed merger. Information about the directors and executive officers of AdCare and their respective interests in the proposed merger will be available in the joint proxy statement/prospectus.

Family and its respective directors and executive officers may be deemed to be participants in the solicitation of proxies from the stockholders of AdCare and Family in favor of the proposed merger. Information about the directors and executive officers of Family and their respective interests in the proposed merger will be available in the joint proxy statement/prospectus.

Investors and security holders are urged to read the proxy statement/prospectus and other relevant materials carefully when they become available before making any voting or investment decisions with respect to the proposed transaction. This Press Release does not constitute an offer of any securities for sale or solicitation of any proxy.

About AdCare Health Systems, Inc.

AdCare Health Systems, Inc. develops, owns and manages assisted living facilities, nursing homes and retirement communities and provides home health care services. Prior to becoming a publicly traded company in November of 2006, AdCare operated as a private company for 18 years. AdCare’s 850 employees provide high-quality care for patients and residents residing in the 15 facilities that they manage: seven assisted living facilities, six skilled nursing centers and two independent senior living communities. The Company has ownership interests in seven of those facilities. In the ever-expanding marketplace of long term care, AdCare’s mission is to provide quality healthcare services to the elderly.

About Family Home Health Services, Inc.

Family Home Health Services, Inc. is a provider of home health care services in the United States with a broad service offering in home health services and home medical care. Family provides a variety of clinical services and related products and supplies to patients in the states of Florida, Illinois and Michigan. Family has a strategic plan focused on the provision of Medicare home health services to the senior population within its operating areas.

Safe Harbor Statement

Statements contained in this press release that are not historical facts may be forward-looking statements within the meaning of federal law. Such forward-looking statements reflect management’s beliefs and assumptions and are based on information currently available to management. The forward-looking statements involve known and unknown risks and uncertainties that may cause the actual results, performance or achievements of the Company to differ materially from those expressed or implied in such statements. Such factors are identified in the public filings made by the Company with the Securities and Exchange Commission and include the Company’s ability to secure lines of credit and/or an acquisition credit facility, find suitable acquisition properties at favorable terms, changes in the health care industry because of political and economic influences, changes in regulations governing the industry, changes in reimbursement levels including those under the Medicare and Medicaid programs and changes in the competitive marketplace. There can be no assurance that such factors or other factors will not affect the accuracy of such forward-looking statements.

AdCare Health Systems, Inc.

CONTACT: David Tenwick, Chairman of the Board of AdCare Health Systems, Inc., +1-937-964-8974, [email protected]

Web site: http://www.adcarehealth.com/

BCP Veterinary Pharmacy’s Transdermal Gels Are Easy Way to Medicate Pets

HOUSTON, June 6 /PRNewswire/ — Studies show that owning a pet reduces our stress level, lowers our blood pressure, and can increase our life expectancy. But what happens when a beloved pet is diagnosed with an illness that requires ongoing medication?

(Photo: http://www.newscom.com/cgi-bin/prnh/20070606/NYFNSN04 )

To alleviate the stress and difficulty associated with medicating animals, BCP Veterinary Pharmacy offers a number of medications in the form of a transdermal gel, which is simply rubbed into the pet’s inner ear where it is absorbed into the bloodstream. Transdermal gels come in prefilled syringes to make it easy to dispense the proper dosage.

“Transdermal gels are an important alternative to oral therapy,” said Jennifer Gimon, registered pharmacist and founder of BCP Veterinary Pharmacy. “They can make a major difference by empowering people to keep pace with their pets’ daily medicine regimen. Our mission is to make life easier for pet owners and to keep their pets living longer, healthier lives.”

“We make transdermal gels for pets that are suffering from chronic ailments such as heart disease, high blood pressure, inflammatory bowel disease, urinary incontinence and hyperthyroidism,” Gimon said. “We fill a tremendous number of prescriptions for transdermal versions of antibiotics, steroids, antihistamines and appetite stimulants.”

For more information, visit the website at http://www.bcpvetpharm.com/ (or call 1-800-481-1729). Pet owners can ask their veterinarians to fax prescriptions to the pharmacy’s toll-free fax (1-866-PET-CHEW).

BCP Veterinary Pharmacy’s most popular transdermal gels include cisapride, amitriptyline, prednisone, prednisolone, methimazole, enrofloxacin, clomipramine, diltiazem, chlorpheniramine, diphenhydramine, metronidazole, amlodipine and phenylpropanolamine.

The effectiveness of the transdermal gel form of methimazole for treatment of 16 cats with hyperthyroidism was evaluated in a study reported in the International Journal of Pharmaceutical Compounding (Vol. 6, No. 5, September/October 2002). The results indicate that transdermal methimazole is effective in reducing the serum T4 level in hyperthyroid cats. A lack of adverse side effects, an increase in satisfaction among pet owners and an improved ability to administer the medication were among the benefits cited.

A Full Fellow with the American College of Apothecaries and cofounder of the American College of Veterinary Pharmacies, Gimon founded BCP Veterinary Pharmacy in 1994 in Houston, Texas.

For information, call 1-800-481-1729 or fax 1-866-738-2439. In Houston call 713-771-1144 or fax 713-771-1131.

Photo: NewsCom: http://www.newscom.com/cgi-bin/prnh/20070606/NYFNSN04; AP Archive: http://photoarchive.ap.org/; PRN Photo Desk, [email protected]

BCP Veterinary Pharmacy

CONTACT: Tamera Herrod for BCP Veterinary Pharmacy, +1-561-753-2933, [email protected]

Web site: http://www.bcpvetpharm.com/

Cancer Fighters Have New Weapon: Black Raspberries

Most widely known for being a key ingredient in cobblers and pies, black raspberries may soon gain a new reputation as the most promising form of cancer treatment.

In the 1990s, Gary Stoner, professor of internal medicine at Ohio State University, began studying the effects of black raspberries on cancer, specifically colon and esophageal cancer.

“There are a large number of compounds in berries that inhibit cancer in animals,” said Stoner.

Upon close chemical examination of the raspberries, Stoner and other scientists found that anthocyanins, the compounds that give the berries their color, played a crucial role in preventing the development of cancer. Scavenging free radicals (molecules that alter and destroy DNA) and inhibiting the inflammatory process are among the ways scientists believe anthocyanins help prevent and treat cancer.

“The inflammatory process produces cytokines,” said Stoner. “Cytokines stimulate cell growth and inhibit cell death — this drives the cancer process.”

After seeing Stoner’s success, other Ohio State doctors became eager to try the fruit compounds on other types of cancer, including oral and non-melanoma skin cancer. Dr. Susan Mallery, professor in the College of Dentistry at OSU, teamed up with Dr. Russell Mumper, associate director of the Center for Pharmaceutical Science & Technology at the University of Kentucky, who created a mucoadhesive gel from black raspberry extracts for the treatment of oral cancer.

Mallery said that while oral cancer is not the most common type of cancer, it does have major side effects.

“Treatment is usually to cut the cancer out; it can be very disfiguring,” said Mallery. “Even with the cancerous tissue out, many have recurrences.”

During the first clinical trial at OSU earlier this year, 20 patients with pre-cancerous lesions, called dysplasia, were brought in to try out the gel before surgery to remove the lesions was performed.

Mumper said that the patients were instructed to apply the black raspberry gel to their lesions four times a day over a period of 42 days. At the end of the trial, the patients’ lesions were evaluated and biopsies were used to determine what effect the gel had had at the molecular level.

According to Mumper, half of the patients showed a clinical downgrade in their dysplasia. In some cases, the lesions went away completely.

“The berry formulation is having a positive effect at the molecular level, changing enzymes and proteins,” said Mumper. “It’s very tantalizing. OSU is moving forward with phase two of the trial at several different cancer centers.”

Hearing about the research of Stoner and others, Dr. Anne VanBuskirk, assistant professor of surgery at Ohio State University, decided to try the gel on animals’ skin after UV exposure.

While the results were successful, VanBuskirk said it is still not known exactly how the berries slow cancer growth.

“We think the anti-oxidant activity of the extract helps to reduce inflammation,” said VanBuskirk. “Chronic inflammation sets the stage for cancer in a number of ways.”

Following the Ohio State doctors’ lead is Dr. Ramesh Gupta, professor of oncological research at the University of Louisville. Using a mixture of berries including blackberries, blueberries, strawberries and black raspberries, Gupta has been testing their impact on lung and breast cancer in rats.

Gupta explained that for the lung cancer trials, the rats were exposed to cigarette smoke 5 days a week for 9 months. During this time, some of the rats were fed one pound of berries a day.

At the end of the trial, Gupta observed that 30 percent fewer of the rats that were fed berries developed tumors than of the rats that were not.

A similar trial was conducted with rats given chemicals that cause breast cancer, with corresponding results: the rats that ate berries produced fewer of the cancer-causing enzymes. Next up for Gupta are clinical trials, in which the objective will be to see if compounds from the berries are absorbed into tissue and visible in the bloodstream.

Gupta says that eventually the researchers want to test people who already have tumors to see if the berries can effectively combat the cancer.

With the rapid success of the berry trials, it is looking more likely to doctors that the black raspberry gel could receive FDA approval and hit drug store shelves within the next few years.

Mumper predicts that the berry gel could be available through prescription as early as 2009, adding that the gel might have other applications as a treatment for other topical diseases of the skin.

“Maybe they could incorporate it into sunscreens,” said Dr. Harry Carloss of Paducah, an oncologist. “It’s always a big if, but if you don’t look at the big ifs, you won’t find what’s important.”

Misonix Announces FDA Clinical Trial Enrollment Acceleration for Sonablate 500 HIFU Treatment of Prostate Cancer

Misonix, Inc. (NASDAQ: MSON), a developer of ultrasonic medical device technology for the treatment of cancer and other chronic health conditions, today announced the acceleration of patient enrollment relating to the United States Food and Drug Administration (“FDA”) clinical trials for the Sonablate® 500 (“SB500”), a medical device using high intensity focused ultrasound (“HIFU”) for non-invasive treatment of prostate cancer. The Sonablate® 500 is approved by the FDA as an investigational device for clinical trials in the United States. The increased enrollment pace pertains to the ongoing FDA approved pivotal study for the treatment of prostate cancer. Over 16 patients in the pivotal study have been treated using the SB500 device at two clinical study sites. A third clinical study site will start treatments in June 2007.

The SB500 is a medical device developed by Focus Surgery, Inc. (www.focus-surgery.com) and manufactured by Misonix. Misonix also has the exclusive European distribution rights for the product. Misonix is an investor in privately-held Focus Surgery, one of the most prominent developers of HIFU in the world. Other investors in Focus include Takai Hospital Supply, Inc. (www.thsinternational.com), which has the exclusive distribution rights to market the SB500 in Asia, Australia, Japan and part of the Middle East, and US HIFU, LLC, the exclusive distributor in the Americas region and South Africa. US HIFU and Focus Surgery are leading the FDA clinical trials and approval process.

With the SB500 having been used in 6,000 treatments at over 100 clinics over six years, interest in and usage of the HIFU medical device for prostate cancer treatment is gaining momentum. Drs. Michael Alabaster and Walter Rayford recently released preliminary data from the first U.S. clinical trials for the treatment of de novo localized prostate cancer using the SB500. Based on their positive results, the doctors report that interest in participating in the FDA clinical trials has been extremely heavy, with calls received from all over the country requesting patient entrance into the trials.

Dr. Alabaster, managing partner with Southeast Urology Network (“SUN”) of Memphis, TN, had been performing HIFU procedures outside of the United States for approximately two years prior to his participation in the FDA clinical studies using the SB500. He stated that the demand for this procedure is great, with scores of American citizens going for HIFU treatment every month in Europe, Canada, Mexico, and the Dominican Republic where the SB500 is presently being used. HIFU treatment with the SB500 only became available in the U.S. as part of the next phase of FDA approval process which commenced in March 2007. With the approval of the trials by the FDA earlier this year, Dr. Alabaster stated, “I have been inundated with calls almost daily for three months.”

Dr. Rayford, a urologic oncologist with SUN, stated that the PSA data for their patients following treatment with the SB500 has been “astounding at six weeks status post-surgery.” He added, “These patients have probably not even reached their nadir yet, and at six weeks we are finding their PSAs typically almost undetectable…[Furthermore,] the side effect profile to date within our clinical trials, as well as previous offshore experience, definitely offers an advantage over the standard treatments presently approved in the United States.”

“There is long-term data out there to support the technology as being as effective, if not more so, than many forms of treatment that presently are being deployed in the United States,” noted Dr. Alabaster. “There is no doubt that this will be a heavily demanded procedure by patients, as they seek other treatments that are less invasive than presently available procedures and maintain a better quality of life.”

The Memphis clinical study site is currently enrolling patients with prostate gland sizes of 40 grams or less, Gleason scores of 6 or less, and PSA scores of 10 or less. Meeting these criteria, Stuart Boyd, a professional pilot from Florida, enrolled for treatment in May. Immediately following the procedure, he stated, “I was feeling so good the next day, I went out to dinner that night. I found out about this technology from a fellow pilot who had gone to Canada for the same treatment. His result and quality of life was excellent and that led me to search the Internet. I found the trials being offered by Focus Surgery and SUN. I have returned to my normal baseline in several days and was using all bodily functions as they were designed to be used!”

Mr. Boyd offered Dr. Alabaster his thanks for performing the surgery and changing his life. Then, after a short pause, he added, “No, I mean thank you for NOT changing my life in terms of incontinence and impotence.”

About Misonix:

Misonix, Inc. (NASDAQ: MSON) designs, develops, manufactures, and markets medical, scientific, and industrial ultrasonic equipment, laboratory safety equipment, and air pollution control products. Misonix’s ultrasonic platform is the basis for several innovative medical technologies. Misonix has a minority equity position in Focus Surgery, Inc. which uses high intensity focused ultrasound technology to destroy deep-seated cancerous tissues without affecting surrounding healthy tissue. Addressing a combined market estimated to be in excess of $3 billion annually, Misonix’s proprietary ultrasonic medical devices are used for wound debridement, cosmetic surgery, neurosurgery, laparoscopic surgery, and other surgical and medical applications. Additional information is available on the Company’s Web site at www.misonix.com.

With the exception of historical information contained in this press release, content herein may contain “forward looking statements” that are made pursuant to the Safe Harbor Provisions of the Private Securities Litigation Reform Act of 1995. These statements are based on management’s current expectations and are subject to uncertainty and changes in circumstances. Investors are cautioned that forward-looking statements involve risks and uncertainties that could cause actual results to differ materially from the statements made. These factors include general economic conditions, delays and risks associated with the performance of contracts, uncertainties as a result of research and development, acceptable results from clinical studies, including publication of results and patient/procedure data with varying levels of statistical relevancy, potential acquisitions, consumer and industry acceptance, litigation and/or court proceedings, including the timing and monetary requirements of such activities, regulatory risks including approval of pending and/or contemplated 510(k) filings, the ability to achieve and maintain profitability in the Company’s business lines, and other factors discussed in the Company’s Annual Report on Form 10-K, subsequent Quarterly Reports on Form 10-Q and Current Reports on Form 8-K.

Acoustic Neuroma Association Announces New Medical Website for Newly Diagnosed Patients, Their Families and Health Care Professionals

ATLANTA, June 4 /PRNewswire/ — Acoustic Neuroma Association (ANA) has launched a new medical website listing (http://www.anausa.org/) for newly diagnosed and current acoustic neuroma patients, it was announced today by Executive Director Judy Vitucci. ANA is a non-profit organization with the mission to inform, educate and provide national and local support networks for those affected by acoustic neuromas, and to be an essential resource for health care professionals who treat the condition.

An acoustic neuroma (sometimes termed a vestibular schwannoma) is a benign brain tumor on the eighth cranial nerve, which leads from the brain to the inner ear. The most common forms of treatment are surgery, radiation or “watch and wait.”

ANA, founded by a treated patient, Virginia Fickel Ehr, has been a source of information and support for acoustic neuroma patients for over 25 years, and oversees over 50 local support groups around the country. ANA’s 18th National Symposium in Philadelphia is slated for July 13-15, 2007 at The Doubletree Hotel Philadelphia.

According to Vitucci, “The new medical website listing is designed to provide up-to-date information regarding the most important question for a new patient — where do I find a qualified physician?” She adds, “Although this is a rare type of tumor, recent studies show that acoustic neuroma diagnoses are increasing, and most patients are between the ages of 30 and 60.”

Typical symptoms include hearing loss, balance issues, tinnitus and a feeling of fullness in the ear.

The new website medical listing at http://www.anausa.org/ will provide patients with a tool to help them find qualified medical professionals across the country. The website also provides information on the various types of treatment. Additionally, website users can fill in the “contact us” information, and ANA will send them a packet of information with referrals of former patients who can provide support.

ANA strongly urges patients, families or anyone seeking information or treatment for an acoustic neuroma to consider consultation with physicians who have had substantial experience in treating this condition. The physicians or organizations listed have self-reported data to meet criteria established by the ANA for having substantial experience in treating acoustic neuromas. The physicians have paid an administrative fee to be listed, and the listings should NOT in any way be construed as an endorsement or recommendation by the ANA. It is every individual’s responsibility to verify the qualifications, education and experience of any healthcare professional. ANA advises that all treatment choices, including “watch and wait,” have consequences. ANA recommends that patients, families and anyone seeking treatment carefully weigh treatment options and make a well-informed decision after careful consideration of risks, consequences, complications, and potential outcomes.

ANA is a 501(c)(3) organization serving a membership of over 5,000 acoustic neuroma patients, family members and health care professionals, and providing information regarding all treatment options.

Contact:

Judy Vitucci
Executive Director, ANA
(877) 200-8211
[email protected]

Or

Pamela Golum (ANA board member)
The Lippin Group
(323) 965-1990 x325
[email protected]

Acoustic Neuroma Association

CONTACT: Judy Vitucci, Executive Director ANA, 1-877-200-8211, [email protected]; or Pamela Golum (ANA board member) of The Lippin Group, +1-323-965-1990 x325, [email protected]

Web site: http://www.anausa.org/

Teacher Charged in Student Sex; Alleged Trysts Were in Teaneck School

By BRIAN ABERBACK and KIBRET MARKOS , STAFF WRITERS

An award-winning Teaneck teacher was arrested Friday and charged with having sex with an underage student, including frequent encounters in his classroom, the men’s faculty bathroom and the dean’s office.

James Darden, a 36-year-old English teacher at Thomas Jefferson Middle School, was being held on $250,000 bail at the Bergen County Jail after turning himself in Friday morning. He was charged with aggravated sexual assault, aggravated sexual contact, child endangerment and official misconduct. If convicted at trial, he could face up to 20 years in prison for the most serious offense.

The former student, now 21, told authorities that she and Darden had a sexual relationship from the time she was a 13-year-old eighth- grader until she was a 15-year-old high school sophomore, Bergen County Prosecutor John L. Molinelli said, adding that the case developed only recently.

The prosecutor also raised questions about student safety in the Teaneck school district.

“I have to wonder about security in the building and how something like this can take place as frequently as it did over such a long period of time the way it did,” Molinelli said at a news conference. “It concerns me, and I hope tonight it concerns some school board members in Teaneck, at least I hope so.”

Schools Superintendent John Czeterko said he didn’t immediately consider security an issue.

“We value teachers who stay after school,” the superintendent said. “He was a valued and trusted employee. If he violated that trust, we’re stunned.”

Czeterko said letters were being sent Friday to Thomas Jefferson parents explaining what happened. Counselors would be available at the school on Monday to talk with students, he said. A meeting with parents is scheduled for 7 p.m. Tuesday.

Darden, of Cliffside Park, an eighth-grade teacher who has worked at Thomas Jefferson for 10 years, was suspended with pay from his $62,242-a-year job Friday.

Parents and former students expressed shock, disbelief and horror.

By all accounts, Darden is revered as a demanding but inspirational educator who holds students to high standards and makes their education his top priority.

“He’s the best teacher we’ve ever had,” said Andrea Thompkins, whose son is in Darden’s class this year. “I can’t give the man enough accolades. I’m shocked.”

“He just put his students first before other things,” said high school sophomore Josh Levinsky, a student of Darden’s two years ago. “He was really tough, but a good tough. He made people want to succeed.”

In June 2005, a group of parents proclaimed Darden their unofficial teacher of the year. Four months later, he received a prestigious Milken Family Foundation National Educator Award. Up to 100 teachers each year are recognized by the foundation for their efforts in furthering excellence in public education.

“You know that I constantly push you,” Darden told students the day he received the $25,000 award. “Understand that I have high expectations for myself as your teacher, and I have the same high expectations for you, just like all of your other teachers.”

The Milken Foundation doesn’t keep track of how teachers use their award money, said Bonnie Sommers, the foundation’s vice president of communications. Teaneck school officials said they also didn’t know.

Molinelli said the trysts were conducted in various places, including in Darden’s car and at his former home in Plainfield. The woman said Darden also instructed her to masturbate via a Web camera that he gave her.

Molinelli described her as a “bright, energetic person who did well in school and was an honor student.”

“This is obviously something that she lived with at least since she was 13,” he said. “At this point in time, for whatever reason, she chose that this was the best thing for her life and for her welfare to come forward.”

Over the past month, detectives obtained additional evidence to corroborate the alleged victim’s statement, the prosecutor said. He declined to elaborate, however.

A source with knowledge of the case said Darden made incriminating statements online to investigators who were impersonating the woman.

Wearing a jail-issued uniform and shackled at the waist and ankles, Darden said little and showed little emotion at a hearing Friday in Superior Court in Hackensack – other than to shake his head briefly when the charges were read.

Darden came to court without an attorney, telling Judge Harry G. Carroll that he couldn’t afford one. The judge advised him to apply for a public defender.

The arrest comes nearly one year after authorities arrested former Teaneck High School Principal Joe White for allegedly engaging in a graphic sexual conversation with a student. White, who was charged in June, was recently offered a plea bargain through which he would serve one year in jail.

He has not decided whether to accept it. White was acquitted of molestation charges against a different student in 2003.

***

E-mail: [email protected] and [email protected]

(SIDEBAR, page A01)

Defendant: James Darden, 36, Cliffside Park

Occupation: Eighth-grade English teacher, Thomas Jefferson Middle School, Teaneck

Charges: Allegedly had sex with an underage female student over a three-year period in the school and elsewhere. Faces up to 20 years in prison if convicted of the most serious count.

Honors: Milken Family Foundation National Educator Award, 2005. Revered by parents and students.

Status: Held on $250,000 bail; suspended with pay.

(c) 2007 The Record, Bergen County, N.J. Provided by ProQuest Information and Learning. All Rights Reserved.

Cleveland Clinic Enters Partnership To Manage And Operate Sheikh Khalifa Medical City In Abu Dhabi

CLEVELAND, June 4 /PRNewswire/ — Cleveland Clinic announced today it has signed an agreement with the Health Authority of Abu Dhabi to manage Sheikh Khalifa Medical City (SKMC), a network of healthcare facilities in Abu Dhabi. The agreement is designed to transform health services in Abu Dhabi, the capital city of the United Arab Emirates.

SKMC consists of 700-bed Sheikh Khalifa Hospital, a 150-bed Behavior Sciences Pavilion and the 100-bed Abu Dhabi Rehabilitation Center, in addition to more than 12 specialized outpatient clinics and nine primary healthcare centers around the city of Abu Dhabi.

“As a global healthcare institution, Cleveland Clinic has sought to cultivate opportunities to further expand our presence abroad, sharing state-of-the-art medical practices, procedures and administrative capabilities and raising healthcare standards worldwide,” said Delos M. “Toby” Cosgrove, CEO and President of Cleveland Clinic. “In partnering with the Health Authority of Abu Dhabi, we have committed to integrating our medical expertise and Best in Class practices with SKMC to achieve the highest clinical outcomes possible and enhance research and training. This partnership stands to redefine what is possible in healthcare worldwide.”

Today’s announcement follows the agreement Cleveland Clinic and Mubadala Development signed in September of last year to design and build a preeminent, first-class specialty hospital on Al-Suwwa Island within the next three years.

Cleveland Clinic, located in Cleveland, Ohio, is a not-for-profit multispecialty academic medical center that integrates clinical and hospital care with research and education. Cleveland Clinic was founded in 1921 by four renowned physicians with a vision of providing outstanding patient care based upon the principles of cooperation, compassion and innovation. U.S. News & World Report consistently names Cleveland Clinic as one of the nation’s best hospitals in its annual “America’s Best Hospitals” survey. Approximately 1,500 full-time salaried physicians at Cleveland Clinic and Cleveland Clinic Florida represent more than 100 medical specialties and subspecialties. In 2005, there were 2.9 million outpatient visits to Cleveland Clinic. Patients came for treatment from every state and from more than 80 countries. There were nearly 54,000 hospital admissions to Cleveland Clinic in 2005. Cleveland Clinic’s Web site address is http://www.clevelandclinic.org/.

Cleveland Clinic

CONTACT: Christina Thompson of Cleveland Clinic, +1-216-444-0899,[email protected]

Web site: http://www.clevelandclinic.org/

EKR Therapeutics Expands Its Management Team in Support of Growth Momentum

EKR Therapeutics, Inc., a specialty pharmaceutical company focused on identifying, acquiring, and commercializing supportive care prescription products to enhance the quality-of-life for cancer patients, today announced the appointments of Susan C. Bacso as Vice President of Operations and Supply and William P. Zadinski as National Sales Director.

Ms. Bacso and Mr. Zadinski report directly to Howard Weisman, co-founder, Chairman and CEO of EKR Therapeutics, who said, “Sue and Bill possess outstanding qualifications that are the characteristic hallmarks of the EKR management team. They are highly experienced pharmaceutical professionals with proven abilities to both support and fuel EKR’s growth momentum, including the commercialization of our first product Gelclair(R) which we launched in the fourth quarter of 2006.” Gelclair is an FDA cleared product indicated for the management of pain associated with oral lesions of various etiologies, including chemotherapy and radiation induced oral mucositis/stomatitis.

“In the comparatively short time period since we initiated operations in June 2006, EKR has successfully met several key milestones,” noted Howard Weisman. “During the second-half of 2006 we not only acquired exclusive North American rights to Gelclair from Swiss based Helsinn Healthcare SA, but we also recruited and deployed our own sales force of specialty representatives.”

He further noted, “Gelclair has already been accepted on the formularies of major cancer centers and we are highly encouraged by the strength of the product’s market traction to date. Moreover, we are actively pursuing opportunities to leverage our sales force and further bolster our growth prospects through the potential acquisition of other specialty therapeutics in our market space.”

Mr. Weisman concluded saying, “Sue and Bill have the experience, expertise, and entrepreneurial spirit that have proven to be key elements in the successful execution of our core business strategy. We are delighted to have them onboard as we position EKR for what we foresee as the next phase in our Company’s growth cycle.”

Susan C. Bacso

As Vice President of Operations and Supply, Susan Bacso is responsible for all aspects of manufacturing, technology transfer, supply chain, and quality assurance and regulatory affairs related to the production and movement of product. She brings to EKR over twenty years of broad-based and international pharmaceutical experience ranging in scope from contract manufacturing management to project management to research and science technology functions. She has demonstrated a wide range of abilities, including spearheading high-performance teams, conducting due diligence assessments, quickly identifying and assessing key issues and risks, and establishing cGMP-compliant quality management systems.

Earlier in her career, Susan worked at Merck & Co. (NYSE:MRK) where she advanced to positions of increasing responsibility including Manufacturing Area Head and Divisional Capital Project Leader. She then served as Senior Director of Quality Assurance and Technical Operations for ESP Pharma, a specialty pharmaceutical company co-founded by Howard Weisman to pursue an acquisition strategy similar to EKR. Within three years of initiating operations, ESP was acquired by PDL BioPharma, Inc. (NASDAQ:PDLI) for over $500 million. Prior to joining EKR Therapeutics, Sue was a Senior Operations Executive for ESP Equity Partners, LLC, a private investment company engaged in acquiring pharmaceutical and medical assets in targeted therapeutic areas.

Sue received a B.S. degree in Chemical Engineering from Northeastern University and also holds a B.S. degree in Physical Geography, Environmental Science from McGill University.

William P. Zadinski

Joining EKR Therapeutics as National Sales Director, Bill Zadinski brings to this position impressive credentials and extensive, pertinent experience in sales management. His professional career encompasses over fourteen years of sales and sales management experience in the pharmaceutical and biotechnology industries with an extensive background in hematology and oncology.

After first serving as a Sales Representative for Solvay Pharmaceuticals and then Eli Lilly & Co. (NYSE:LLY), Bill joined Immunex Corporation, a company involved with chemotherapeutic agents as well as a supportive care oncology product, Leukine. Bill initially joined Immunex as an Oncology Specialty Representative and was promoted to the position of Regional Manager responsible for overseeing a hematology/oncology specialist sales team. Following the acquisition of Immunex by Amgen (NASDAQ:AMGN) and the sale of Leukine to Berlex Laboratories due to antitrust concerns, Bill transitioned his specialty sales team to Berlex. More recently, he served as Regional Manager for OSI Pharmaceuticals (NASDAQ:OSIP) where he played an integral role in hiring, training, and managing OSI’s start-up oncology sales team and where he gained invaluable experience and knowledge of Gelclair as OSI held the U.S. license to this product prior to the product’s acquisition by EKR.

Bill, who has been honored with several sales awards, is a graduate of Yale University where he received his B.A. degree in Political Science.

About EKR Therapeutics

Founded by experienced healthcare and business executives, EKR Therapeutics is a privately held specialty pharmaceutical company that has brought together a highly seasoned team of industry professionals committed to providing prescription products and therapeutic solutions to support and enhance the quality-of-life of cancer patients. As new and increasingly effective oncology therapies emerge, more patients survive cancer. The EKR management team recognizes that oncology supportive products are becoming increasingly critical in the overall care of cancer patients, because the side-effects of oncology treatments can be severe and debilitating. EKR is committed to addressing this largely unmet medical need by collaborating with the full scope of healthcare practitioners in helping patients adhere to primary treatments through the use of supportive care products. EKR intends to identify, acquire, market, distribute and eventually develop therapeutics for patients in need of supportive care products during cancer treatment. For additional information about EKR visit the Company’s website at http://www.ekrtx.com. Full prescribing information for Gelclair may be obtained at http://www.gelclair.com

TMC to Open Hospice, Calls It Peppi’s House

By Jane Erikson, The Arizona Daily Star, Tucson

Jun. 4–Tucson Medical Center is preparing to open its new hospice for terminally ill patients, many of whom may live a bit longer just by being in hospice care.

Peppi’s House, on the northwest corner of the Tucson Medical Center campus at 2715 N. Wyatt Drive, is awaiting final inspections before opening to patients.

The $5 million facility, built entirely with donations, includes 16 private patient rooms that open onto secluded courtyards landscaped with flowering desert plants and fountains — a serene environment that might make anyone feel better.

But the benefit may be much greater, according to a new national study of about 4,500 patients who died of heart disease or cancer. The study revealed that those receiving hospice care before they died lived, on average, 29 days longer than those who were not in a hospice program.

The study, based on patient data from the Centers for Medicare and Medicaid Services, shakes a finger at the common misconception that people in hospice die sooner than those who continue with aggressive treatment for their illness.

“The reason we did this research is we’ve seen a lot of patients who come to us as train wrecks and they look like they’re going to die soon, then they get better for a while. I’ve been in this field for 31 years, and I’ve seen it a lot,” said Stephen Connor, a vice president of the National Hospice and Palliative Care Organization and lead author of the study.

“I don’t want to over-reach and say we’ve proven that all hospice patients live longer,” Connor said, “but what we can say clearly is that people in hospice don’t just give up on life and die sooner, as many people believe.”

The study — conducted by the hospice organization and Milliman Inc., a leading actuarial firm — does not show why people in hospice gained an extra month of life.

One possible explanation, Connor said, is that people in hospice avoid what he called “over-exposure to the health-care system.”

Terminally ill patients not in hospice are more likely to be hospitalized, which puts them at increased risk of infections from other patients, getting the wrong medication and receiving aggressive treatment when they are too ill to tolerate it, Connor said.

The 4,493 patients enrolled in the study included 540 who were dying of congestive heart failure. The rest had cancer of the breast, colon, lung, pancreas or prostate.

The greatest survival benefit was seen in heart-failure patients in hospice care, who survived an average of 81 days longer than their nonhospice counterparts.

Hospice patients with colon, lung or pancreatic cancer lived up to 39 days longer, on average, than their counterparts who did not choose hospice care. The survival benefit for patients with breast or prostate cancer was not statistically significant. That’s probably because it is more difficult to alter the course of those cancers, Connor said.

Dr. Evan Kligman, medical director of Casa de la Luz Hospice in Tucson, called the study’s findings interesting “because we tend to think we are not going to make a major change in life expectancy.”

He agreed with the theory that hospice patients face fewer risks than terminally ill patients being treated in hospitals.

For example, Kligman said, hospice patients are given medications to control pain, help them breathe and otherwise keep them comfortable, but they generally receive less medication than hospitalized patients. That puts them at lower risk for medication errors, estimated by the Institute of Medicine to total 400,000 in this country each year.

“It may also be that if someone is in a hospice setting where they are getting good pain control, and their other symptoms are being managed, maybe the will to live lasts a little bit longer,” Kligman said.

Mary Steele, a registered nurse and director of Tucson Medical Center’s hospice program, said she often sees patients improve for a while once they enter a hospice program.

“The nurse is helping them with their medications, the aides are helping them with bathing and hygiene, and I think the socialization with the staff all add to a patient’s quality of life,” Steele said. “I think that extends life.”

Often when patients choose hospice, “a lot of tension leaves when we stop poking them and just let them rest,” said Dr. Larry Lincoln, medical director of TMC’s hospice program. “The stress just melts away, and that has a physiological effect.”

Peppi’s House is designed to help terminally ill patients let go of their stress in a homelike setting. The beds are all topped with quilts, some of them hand-made. The landscaped courtyards are abloom with star jasmine and blue plumbago.

The “house” is named for Rose “Peppi” Grosse, a philanthropist who volunteered at TMC before developing cancer and enrolling in TMC’s hospice program. One of her final decisions was to donate $1 million for a new hospice building. She was 73 when she died on May 22, 2000.

About 25 percent of TMC hospice patients will be admitted to Peppi’s House; the rest will be cared for in their own homes.

“I love it. I absolutely love it,” Pam Varner, the youngest of Grosse’s three adult children, said of the hospice building that bears her mother’s name. Varner had a role in designing the building and insisted on the wide French doors that will allow patients’ beds to be rolled out into their courtyards.

“My Mom died at home, and the sliding glass doors to the outside were open all the time,” Varner said. “She really wanted that fresh air. I think people in that stage of their lives feel better when there’s fresh air.”

Hospice at a glance

–In 2005, there were more than 4,100 hospice programs nationwide, according to the National Hospice and Palliative Care Organization.

–The Arizona Department of Health Services licenses 10 hospice programs in Tucson.

–More than 75 percent of hospice patients receive care in their homes or residential care facilities. Nurses and other caregivers visit them as frequently as their conditions warrant, helping them remain comfortable in familiar surroundings in the company of family and friends.

–An inpatient hospice facility, like the soon-to-open Peppi’s House at Tucson Medical Center, is for patients who need closer monitoring to keep their pain and other symptoms under control. Inpatient hospice also admits patients whose caregivers need a break.

–People with terminal cancer account for 46 percent of hospice patients nationally, according to the hospice organization. Heart disease, dementia, debility and lung disease are the other most common illnesses of hospice patients.

–Approximately 1.2 million Americans received hospice care in 2005, the hospice organization reports.


The first hospice program in the United States started in 1974 in New Haven, Conn., and just three years later, Tucson had one of the first three comprehensive hospice programs in the country, with both inpatient and at-home services. It was Hillhaven Hospice, which opened in 1977 at 5504 E. Pima St. The National Cancer Institute funded Hillhaven for four years. St. Mary’s Hospital took over Hillhaven in 1981.

–Contact reporter Jane Erikson at 573-4118 or at [email protected].

—–

To see more of The Arizona Daily Star, or to subscribe to the newspaper, go to http://www.azstarnet.com.

Copyright (c) 2007, The Arizona Daily Star, Tucson

Distributed by McClatchy-Tribune Information Services.

For reprints, email [email protected], call 800-374-7985 or 847-635-6550, send a fax to 847-635-6968, or write to The Permissions Group Inc., 1247 Milwaukee Ave., Suite 303, Glenview, IL 60025, USA.

Envisioning Abstraction: The Simultaneity of Robert Delaunay’s First Disk

By Hughes, Gordon

Of all of the movements lucky enough to have been caught in the tangles of Alfred Barr’s spider-web chart of modern art (Fig. 1), only one, Orphism (Guillaume Apollinaire’s misbegotten term and attempt to unite the post-Cubist abstraction of Robert Delaunay, Marcel Duchamp, Fernand Léger, and Francis Picabia), goes nowhere.1 That the diagrammatic arrow leading from Cubism to Orphism ends in what Barr sees as the sole cul-de-sac in twentieth-century art is surprising given that the relation between Cubism and abstract art was the ostensible motive for Barr’s flow chart. Though Orphism is important enough to warrant mapping, the exact nature of its significance is unclear within the logic of Barr’s schema. For importance, as conceived by Barr, is clearly determined by flow; from movement to movement, one into the next, twentieth-century modernism progresses smoothly and logically down into the twin funnels of “geometrical” and “nongeometrical” abstract art. Everything leads to something else. Everything, that is, but Orphism, which just sits there, an apparent clog in the pipes. Yet this is a clog that cannot be cleanly removed or ignored. Delaunay’s abstraction in particular is too much of a modernist milestone, too much of a first, to be left off the chart. Despite its lack of flow, despite the fact that it goes nowhere, Orphism is important. It’s just not clear why.

Part of the problem, of course, is the lack of fit among the artists assembled under the Orphic umbrella. All had undergone their artistic formation within Cubism, and all shared a tendency toward abstraction as they in turn broke with Cubism in 1912. But otherwise these artists had little of substance in common. Indeed, the divergent nature of those who were muscled into Orphism is apparent by the very fact that most of these artists did slide their way into the channels of modernist flow: arrows can be drawn from Duchamp and Picabia to Dada and Surrealism, and from Léger to the postwar machine aesthetic of Purism. In part, it is these different tensions in direction that prevent Orphism from flowing as it should. The real problem, however, is Delaunay. While the art historical tracks of influence have long been laid for the majority of those who passed through Orphism, Delaunay and his First Disk (Fig. 2) present twentieth-century modernism with a stubborn radicality that it doesn’t know what to do with. While others moved on, Delaunay was left, more ebb than flow, to clog things up.

If Barr’s chart stands out as the first art historical account to find itself at a loss in how to deal with Delaunay’s abstraction, it was not to be the last. More recently Yve-Alain Bois has acknowledged the radicality of Delaunay’s First Disk, while simultaneously characterizing it as a “fluke.”2 Thierry de Duve has similarly described The First Disk as “a moment of surprise that was without epistemological consequences.”3 Pierre Francastel, in his introduction to Delaunay’s writings, Du cubisme à l’art abstrait, cites The First Disk (along with the Window series) as both a “historical landmark” and “an isolated study.”4 The difficulty of The First Disk for art historians is likewise reflected in the scant attention it receives in Sherry A. Buckberrough’s 1982 book Robert Delaunay: The Discovery of Simultaneity, which devotes only 3 of its 243 pages to The First Disk, most of which are purely descriptive. Especially surprising is the near-total absence of Delaunay from Clement Greenberg’s writings. In one of his few references to Delaunay, Greenberg writes that “abstract art itself may have been born amid the painterliness of Analytical Cubism, Léger, Delaunay, and Kandinsky.”5 Unlike the other names on the list, though, Delaunay is discussed only once in Greenberg’s collected writings (for a total of two paragraphs), in a 1949 exhibition review. Describing Delaunay in this review as “an enterprising painter whose influence is perhaps more important than his art, fine as it is,” the influence ascribed to Delaunay, for Greenberg as for art history in general, is duly noted but never substantiated.6

In large part it is the obdurate singularity of The First Disk that frustrates interpretation, a singularity foregrounded by the fact that the paintings leading up to it are neatly bundled into discrete series: the Saint-Séverin series (1909-10); the City series (1909-11); the Eiffel Tower series (1909-12); the City of Paris series (1911-12); the Window series (1912-14); the Cardiff Team series (1913); the Circular Forms series (1913); and then, suddenly and seemingly out of nowhere, The First Disk (1913). Against the backdrop of its preceding work, The First Disk appears as a one-off that, as de Duve describes it, “bursts out violently as something without real precedent in Delaunay’s work.”7

Far from being what Bois calls “a unicum in his oeuvre,” coming out of (and going) nowhere, The First Disk must be situated within Delaunay’s larger project to ground painting in a new optical model. This model was first put forward in the 1912 Window series, developed further in the 1912 Circular Forms series, and culminated in 1913 with the singular statement of The First Disk. It is only through establishing this developmental logic that The First Disk can properly resist its current characterization as an art historically significant, if otherwise inexplicable, “fluke.”

1912: Delaunay contra Cubism

This happened in 1912. Cubism was in full force. I made paintings that seemed like prisms compared to the Cubism my fellow artists were producing. I was the heretic of Cubism. I had great arguments with my comrades who banned color from their palette, depriving it of all elemental mobility. I was accused of returning to Impressionism, of making decorative paintings, etc…. I felt I had almost reached my goal.-Robert Delaunay, “First Notebook,” 1939(8)

Nineteen twelve was a watershed year for Robert Delaunay. On March 13 his first major exhibition in Paris closed to great acclaim after two weeks at the Galerie Barbazanges.9 Comprising forty-six works, the exhibition spanned his career to date: his early, self-taught Impressionist works;10 his 1905-6 Neo-Impressionist period; a single painting from his 1909-10 Saint-Séverin series (Saint-Séverin No. 1); a large number of Parisian cityscapes produced between 1909 and 1911; and the series of Cubist Eiffel Tower paintings from 1909-11. Apollinaire, who was to live briefly with Delaunay and his wife Sonia from November to mid-December of 1912, praised these works in his review of the exhibition, portraying Delaunay as “an artist who has a monumental vision of the world. . . . Robert Delaunay has already come to occupy an important place among the artists of his generation.”11 Two weeks later, he singled out Delaunay’s La ville de Paris in his review of the Salon des Indépendants. “Decidedly, the picture by Robert Delaunay is the most important of this salon,” Apollinaire effused. “La ville de Paris is more than an artistic manifestation. … He sums up, without any pomp, the entire effort of modern painting.”12

Flagged by Apollinaire as a new force on the Parisian artistic landscape, Delaunay was also making steady strides elsewhere in Europe, most notably in Germany, Switzerland, and Russia. Participating in the first Blaue Reiter exhibition in Munich, Delaunay sold four of the five works on view, including the now-lost La ville No. 1, to the painter Alexei von Jawlensky.13 More significant than sales, Delaunay’s paintings prompted an enthusiastic response within the Blaue Reiter, leading to active correspondence with Wassily Kandinsky, August Macke, and Franz Marc.14 Delaunay’s Blaue Reiter connections in turn led to Erwin Ritter von Busse’s article “Robert Delaunay’s Methods of Composition,” which appeared in the 1912 Blaue Reiter Almanac, alongside Roger Allard’s description of Delaunay, in his essay “The Signs of Renewal in Painting,” as a painter “who has conquered the arabesques of the picture plane and who shows the rhythm of great, indefinite depths.”15 Delaunay went on to exhibit that February in the second Blaue Reiter exhibition in Munich and in the Valet de Carreau exhibition in Moscow. In March he exhibited in the first Der Sturm exhibition in Berlin and, at the invitation of Hans Arp, in July at the Moderner Bund exhibition Zweite Ausstellung in Zurich. Among the many painters in Germany to come under the sway of Delaunay’s influence was the Swiss artist Paul Klee. After visiting Delaunay at his Paris studio in 1912, Klee translated the Frenchman’s 1912 essay “Light” into German, and it appeared in the January 1913 issue of Der Sturm.16

Delaunay’s critical triumphs in 1912 would have been little more than art historical footnotes, however, were it not for his Window series. Begun in all likelihood in La Madelaine in the Chevreuse Valley, where the Delaunays were vacationing for the summer, the twenty-two-painting Window series marks Delaunay’s self-described moment of artistic maturity and break with Cubism. “They were my true aesthetic departure for modern art in reaction to the academicism and confusion of early Cubism,” Delaunay writes of the Windows in an undated letter to his friend the Cubist painter Albert Gleizes.17 Delaunay relates the substance of his break with Cubism, that “which truly began my life as an artist,”18 in a 1939 notebook: “At this moment, about 1912-1913, I had the idea for a kind of painting that would depend only on color and its contrast but would develop over time, simultaneously perceived at a single moment. I used Chevreul’s scientific words: simultaneous contrast…. I called them Windows.”19 It was this series of remarkable paintings, unabashed, flaunting even, in their use of color, that ended Delaunay’s otherwise unremarkable apprenticeship within Cubism.20 It was evident to all who saw his paintings that Delaunay had broken ranks; he quickly became, as he stated in his notebook, “the heretic of Cubism [l’hérésiarque du cubisme].”

To reference Delaunay’s self-proclaimed heresy, or to characterize him as “breaking ranks,” is not, I should make clear, to ascribe an overall aesthetic or conceptual coherence to the Cubism that Delaunay came to oppose. As many scholars correctly insist, the Cubism of Pablo Picasso and Georges Braque is wholly distinct from the so-called Salon Cubism of Gleizes, Jean Metzinger, Henri Le Fauconnier, and others. Differences likewise abound within the critical reception of Cubism. Yet despite the myriad internal tensions and contradictions gathered under the rubric “Cubism,” Delaunay and others understood his break in opposition to a very specific group of Cubist painters, and in opposition to a very specific set of ideas associated with those painters. For all their various formal and intellectual differences, the Salon Cubists strategically represented themselves as a more-or-less cohesive movement, downplaying disparity in favor of a unifying common ground. Indeed, prior to his break with Cubism, Delaunay himself played a central role in the decision, made with Gleizes, Metzinger, and Le Fauconnier, to display their work collectively in what would come to be the first public group manifestation of Cubism: the famous Salle 41 at the 1911 Salon des Indépendants (hence “Salon Cubism”). Gleizes notes in his memoirs that it was important for these Cubist painters to appear unified: “Metzinger, Le Fauconnier, Delaunay, and I decided to send work to the next Salon des Indépendants. . . . But we must be grouped, that was the opinion of all.”21 Salon Cubist Georges Ribemont-Dessaignes similarly recalls how Gleizes and Metzinger aimed “to establish a kind of legislation of the Cubist movement.”22

The first published suggestion that Delaunay had broken with this group of Cubists appeared in the March 23, 1912, issue of L’Assiette au Beurre, in James Burkley’s review of that year’s Salon des Indépendants. Commenting on entry number 868, Delaunay’s La ville de Paris, Burkley wrote, “The Cubists, who occupy only a room, have multiplied. Their leaders, Picasso and Braque, have not participated in their grouping, and Delaunay, commonly labeled a Cubist, has wished to isolate himself and declare that he has nothing in common with Metzinger or Le Fauconnier.”23 In an open letter to the Cubist critic Louis Vauxcelles, published in an editorial in Gil Blas on October 28, 1912, Delaunay further differentiated himself from the Salon Cubists. Responding to Olivier-Hourcade’s claim, printed in Paris-Journal eight days earlier, that “it was, however, these four painters [Metzinger, Gleizes, Le Fauconnier, and Léger] who, with Delaunay, in 1910 and above all at the [Salon des] Indépendants in 1911, created-and truly are-Cubism,”24 Delaunay retorted:

I don’t support the opinion, inaccurately put forward by Mr. Hourcade, that proclaims me the creator of Cubism with four colleagues and friends. Unbeknownst to me several young painters have made use of my early studies. Lately they have exhibited canvases they call Cubist. I don’t exhibit. Only some friends, artists, and critics know the direction that my art has taken…. It is necessary to set the record straight.25

Given this explicit and public disavowal of Cubism-a disavowal already picked up on by critics-what was it exactly, vis-à-vis Cubism, that Delaunay was disassociating himself from? In part, Delaunay’s break with Cubism marked a refusal to have his increasingly evident individualism suppressed through Salon Cubism’s aspirations for unity. More to the point, however, it was the conceptual means by which the Salon Cubists sought to establish this unity that Delaunay rejected. As the Salon Cubists would have it, Cubism’s cohesion as a group was determined by a set of aesthetic and theoretical concerns common to all forms of Cubism-a set of unifying concerns whose discursive power overrode individual formal and stylistic differences. Pascal Rousseau describes the conceptual stakes underlying the Salon Cubists’ efforts toward unity:

It was a matter of uniting the modern movement around a solid critical discourse that would at once validate the inscription of Cubism within a classical tradition “à la française” (the refusal of Impressionist sensuality in favor of a more cerebral art), privilege structure through essentialist decisions (the permanent harmony of line versus the too-loose and fleeting character of light), and, more generally, translate visually the subjective character of representation through a neo-Kantian interpretation of optical synthesis (multiple views of the object and the “Cubism of conception”).26

In short, the substance of Delaunay’s break with Cubism centered on rejection of its central orthodoxy: the suppression or elimination of superficial visual sensation-color being the worst offender in this regard-in favor of the invariable and essential ground of conception. This position is voiced in the criticism of Olivier-Hourcade, among others, who claimed that Cubism depicts what we know of the represented object-what we know of its physical form-as opposed to what we see. As Hourcade writes:

The painter, when he has to draw a round cup, knows very well that the opening of the cup is a circle. When he draws an ellipse, therefore, he is making a concession to the lies of optics and perspective, he is telling a deliberate lie. Gleizes, on the contrary, will try to show things in their sensible truth.27

While Salon Cubism and its advocates turned to conception as the basis for a new realism-what Gleizes and Metzinger termed, in their 1912 book Du Cubisme, the “profound realism” of the mind, in opposition to the “superficial realism” of the eye28-Delaunay stood alone in his attempt to develop a new, nonsuperficial model of vision for painting. He first posed this new optical model in the Windows, where he sought to combine Cubism’s emphasis on conception with Impressionism’s emphasis on optical sensation. In so doing Delaunay not only reconciled the seemingly irreconcilable-Cubism and Impressionism-he also posited a pictorial model of vision that was fully informed by modern optical theory.

Explicit in his understanding of the shift from a premodern to a modern conception of perception, Delaunay remarked on the consequences of this shift for painting in an entry to his 1939-40 notebook: “Historically there really was a change in understanding in modes of seeing, and thus in [pictorial] technique.”29 Virginia Spate, who begins her 1979 book on Orphism with this quote, takes this shift in visual understanding not as a literal change enacted by modern optical theory but as a general response to the perceptual conditions of modernity. For Spate, in other words, Delaunay’s “Perceptual Orphism” stands in relation to historical changes external to the viewing subject, which in turn affect the overall mental conditions of perception. Delaunay’s engagement with issues of perception is thus conceived through the “profound aspects of modern life and of a new form of consciousness: Simultaneous consciousness.”30 I want to argue something quite different. Delaunay, I believe, should be taken at his word: that his work stands in response not simply to an altered mode of seeing wrought by modernity but to a historical change in the actual understanding of perception-a change, that is, in the understanding of the internal, psychophysiological mechanics of perception. How, then, to formulate this change in understanding, and with it Delaunay’s attempt to salvage vision as a viable ground for painting? In order to grasp Delaunay’s reformed visual realism, it is first necessary to comprehend the structure of vision as reformed by modern optical theory. It is necessary, that is, to comprehend what Delaunay describes as the “change in understanding in modes of seeing.”

Physiological Optics

In Techniques of the Observer: On Vision and Modernity in the Nineteenth Century, Jonathan Crary posits the beginning of the nineteenth century as a fundamental break from what he terms “classical models of vision.” For Crary, one of the primary models that founds and supports the idea of classical vision is the monocular paradigm of the camera obscura. The enclosed, darkened space of the camera obscura creates an inverted image of the external world as light passes through a small opening. While the effects of this simple imaging device have been likened to human vision since antiquity, it was only in the period from the late 1500s to the end of the 1700s that it became the dominant model for visual perception. The ascendance of the camera obscura model effectively brought to an end the prior debates over extramission- the theory that the eye emits as well as receives light.31 Stripped of its active function of emission, the eye became instead a totally passive and transparent receptor of light and the optical information it carried.

In addition to positing a stable visual field, the camera obscura serves as a model of the subject in several important ways. On a structural level, it separates the observer from others, which has the effect of individuating the viewer. This simultaneously supports the viewer as a free and sovereign individual and universalizes the observer as an interchangeable position openly available to anyone. At the same time that the mechanism of the camera obscura separates the user from others, however, it also separates the user from the external world. The camera obscura thus became a kind of technological analog to the Cartesian separation of the viewing subject (res cogitans) from the world (res extensa). Constitutive of this interior-exterior divide is a fundamental stability. The camera obscura asserts a coherent and consistently unified visual field “from which any inconsistencies and irregularities are banished to insure the formation of a homogenous, unified and fully legible space.”32 And, finally, the model of the camera obscura severs the eye from the body of the viewer. This decisive separation functions to “sunder the act of seeing from the physical body of the observer, to decorporealize vision…. The body then is a problem the camera could never solve except by marginalizing it into a phantom to establish the space of reason.”33

The ascendance of this model of vision, and the subject position that it supports, came to an abrupt end in the early nineteenth century. In its place developed a modern and heterogeneous regime of vision, one that is grounded, above all else, by the insertion of the body into optical discourses. Within this newly formed optical paradigm, a split emerges between the study of light and color as independent phenomena within the physical world (prismatic light) and of light and color as they are experienced subjectively through the physiological and cognitive processes of the body. The early nineteenth century thus marks a break between optics as a branch of physics (the study of light and its constituent properties) and physiological optics as a branch of perceptual and cognitive theory.

The insertion of the body into various theories of perceptual physiology is perhaps nowhere more evident than in the sudden centrality assumed by physiologically produced chromatic effects in the early nineteenth century. These include such phenomena as visual afterimages; colors that mix in the retina of the viewer; and the experience of light and color from causes such as pressure on the optic nerve, certain narcotics, and so on. As Crary points out, the centrality of internally produced chromatic effects cuts the supposed bond between optical sensation and real-world referent, and in so doing breaks radically from the camera obscura model of classical vision. The experience of light or color is thus no longer dependent on any external light or color. Centralizing the physiology of the body in the optical process creates an epistemological rupture at the heart of visual perception. As Hermann von Helmholtz stated in 1867 in “On the Recent Progress of the Theory of Vision”:

We have already seen enough to answer the question whether it is possible to maintain the natural and innate conviction that the quality of our sensations, and especially our sensations of sight, give us a true impression of corresponding qualities of the outer world. It is clear they do not…. Pressure upon the eyeball, a feeble current of electricity passing through it, a narcotic drug carried to the retina by the blood, are capable of exciting the sensation of light just as well as sunbeams. The most complete difference offered by our various sensations … does not, as we now see, at all depend upon the nature of the external object, but solely upon the central connections of the nerves which are affected.34

No longer grounded in a unified, stable field, visual perception becomes irrefutably conditioned by the body. This break with classical models of vision actively participates in the construction of a new visual subject. Once the physiological intervention of the body is foregrounded in the perceptual process, the previous stability of a clearly demarcated “inside” (the projected image inside the camera obscura, the res cogitans) and “outside” (the world, res extensa) becomes untenable.35 As a result, color and light lose their prior bond to an externally stable and unified visual field.

It is vital not to mistake the significance of physiological optics within modern optical theory as simply a new form of epistemological skepticism. Throughout the seventeenth and eighteenth centuries the observation of internally experienced physiological effects was frequently cited as evidence for the fallibility of the senses.36 Modern optical psychology does not tell us anything new, or even anything less than obvious, in stating that pressure on the eye produces the internal experience of light. What modern optical theory provides is a conception of “pure” sensory information that is internal to the body and actively produced by the senses. By way of contrast, a pre-nineteenth-century thinker such as David Hume (who relied on the camera obscura model of vision) understood the perceptual image to exist externally to the body and independently of the senses. Hume is typical of seventeenth- and eighteenth-century thinkers in his conception of the perceptual image as a unified, external entity that flows through the senses much as water flows through an inlet: “Nothing can ever be present to the mind but an image or perception, and the senses are only the inlets through which these images are conveyed.”37 While a modern optical theorist such as Helmholtz understands the senses as actively producing sensory information, for Hume the senses function as mere conduits for an external perceptual image.

The significance of internally produced chromatic effects for modern optical theory went beyond providing new evidence for the fallibility of the senses: it served to demonstrate that sensory information is produced actively by the senses. This was the crucial step that was excluded from the camera obscura model of vision-that between the eye and the brain, optical information exists in a “pure” state, wholly distinct from the external stimuli that generate it and the final perceptual image that is registered in the brain. As Crary states, it was this middle step-the moment of pure, internally produced sensory information-that was unthinkable prior to the advent of modern optical psychology: “In the seventeenth and eighteenth centuries this kind of ‘primordial’ vision could not be thought, even as a hypothetical possibility.”38 Svetlana Alpers similarly describes how the seventeenth-century optical theorist Johannes Kepler made a revolutionary distinction between the world outside of the eye (idola, or visual species) and the image formed on the retinal surface (pictura). Despite this distinction, Kepler was unable to conceive of an intermediary step between the retina and perception. As Alpers writes, “The study of optics, so defined, starts with the eye receiving light and ceases with the formation of the picture on the retina. What happens before and after-how the picture so formed, upside down and reversed, was perceived by the viewer-troubled Kepler but was of no concern to him.”39

How, then, can “pure optical information”-the optical sensory data that is transmitted from the optic nerve to the brain-be characterized? First, as is commonly known, pure optical information when registered on the retina is inverted in relation to the external world, both upside down and reversed left to right. Second, this inverted image on the surface of the retina is not simply a straightforward mirror image of the world turned on its head, for the neatly bounded distinctions of figure and ground that we see within everyday visual perception do not exist within pure optical data. Yve-Alain Bois has pointed out that one of the crucial distinctions between visual perception and pure optical information is precisely the absence of a figure-ground distinction in the latter: “To perceive is first of all to perceive a figure against a ground (this is the basic definition of perception). But the ground is not always given: it is indeed what we must preconsciously construct differently each time we are solicited to perceive.”40 Third, optical information is binocular, and this binocularity creates a slight discrepancy in the information that is registered in the two retinas. This discrepancy in the retinal image is crucial to the visual process, as the brain compares the slight differences between the eyes as a means to generate depth perception.41 Finally, pure optical information as it is registered on the concave surface of the retina is completely two-dimensional. The knowledge of optical flatness was indeed so widespread within the critical literature of the mid-nineteenth and early twentieth centuries that Adolf Hildebrand felt only a footnote was necessary to remind the reader of this well-established fact: “The reader need hardly be reminded that our actual impression is two-dimensional, a flat picture on the retina.”42 In sum, optical information is understood by modern optical theory to be fundamentally unlike what we actually see.

Granting the specificity of discrete sensory operations, modern optical theory faced the central problem of how dissimilar forms of sensory information combine to generate a coherent, cohesive perceptual array.43 Helmholtz in particular was intensely concerned with the problem of how two-dimensional, inverted, binocular optical data, devoid of all spatial perception, is experienced as visual perception with depth and clearly bounded figure-ground distinctions. The problem for Helmholtz and others was how the distinct nerve functions of touch and vision come together to construct a unified and functionally seamless field of vision. Helmholtz thus identified two basic components fundamental to the process of visual cohesion. The first is that the deficiencies of optic and haptic sense functions compensate for each other as they merge cognitively in the mind. As Helmholtz states: “The two senses which really have the same task, though with very different means of accomplishing it, happily supply each other’s deficiencies. Touch is a trustworthy and experienced servant, but enjoys only a limited range, while sight rivals the boldest flights of fancy in penetrating unlimited distances.”44 For Helmholtz, therefore, touch and vision are made to cohere within cognition in order to produce normative visual perception. “Ordinary vision,” as Helmholtz asserts, “is not produced by any anatomical mechanism of sensation, but by a mental act.”45

Given the conclusion that touch and sight supplement one another, how then can touch be mobilized when an object is only seen? In answering this problem, Helmholtz proposed that the second component fundamental to visual perception is memory. No longer an innate, fully formed condition present from birth, perception is reconfigured as a process that is learned. Accordingly, Helmholtz analyzes the means by which an infant develops visual perception as it learns to connect the tactile senses of its body with what it sees in order to develop perceptual unity:

A child seizes whatever is presented to it, turns it over and over again, looks at it, touches it, and puts it in its mouth…. After he has looked at such a toy every day for weeks together, he learns at last all the perspective images which it presents…. By this means the child learns to recognize the different views that the same object can afford in connection with the movements which he is constantly giving it. The conception of the shape of any given object, gained in this manner, is the result of associating all these visual images…. All these different views are combined in the judgment we form as to the dimensions and shape of an object.46

We learn, over a period of time, to make sense of what we see and what we know. In so doing our memory brings past sensory experience and knowledge into the present.47 When we see an object we are remembering how our body has interacted with it-how it feels, how it recedes in space, how tall it is, how hard, cold, or sticky it is.48 As Henri Bergson explains in Matter and Memory: “our senses require education. Neither sight nor touch is able at the outset to localize impressions. A series of comparisons and inductions is necessary, whereby we gradually coordinate one impression with another.”49 Beginning in the mid-nineteenth century, seeing ceases to be a given- it ceases to be a simple fact of being that we are born into, fully formed and already present. From the mid-nineteenth century on, seeing becomes instead a physiological and cognitive process that we must learn.

Cubism and Antivisuality

Delaunay’s attempt to rehabilitate vision as the basis for a new pictorial realism must be placed squarely in the context of the modern optical theory explained above. But just as important, it must also be seen in opposition to the Cubist reception of that same optical theory. For as far as the Cubists were concerned, modern optical theory demonstrated conclusively that vision-now understood to be fundamentally different from pure optical information and subject to the vagaries of the body in which it is enmeshed-is not to be trusted. According to the Cubist reception of modern optical theory, vision had been proven to be wildly erratic and prone to a range of corruptions. This stood in contrast to the mind, which filters out impurities, giving us a true and stable image-giving us perception as an act of cognition. References to modern optical science abound in Cubist criticism. Gleizes and Metzinger refer to the modern optical theory of binocular accommodation in Du Cubisme: “As for visual space we know that it results from the agreement of the sensations of convergence and ‘accommodation’ in the eye.”50 Maurice Raynal likewise invokes Helmholtz in an appeal to Cubist antivisuality: “‘The truth is not in the senses,’ said Malebranche, ‘but in the mind,’ and Helmholtz-and indeed Bossuet before him-showed that the senses tell us nothing but our own sensations.”51

The most sustained and thoroughgoing use of modern optical theory within Cubist-era criticism is found in Daniel-Henry Kahnweiler’s The Rise of Cubism. In his understanding of the relations between sensation, memory, and cognition, Kahnweiler demonstrates a clear debt to modern optical science. Kahnweiler contends that the process of “seeing” a Cubist painting (Kahnweiler uses the quotation marks) almost identically mirrors the process of visual perception as theorized by Helmholtz. In both cases, a visual stimulus is charged with a specific memory image. The ensemble of visual stimuli and memory images combine within the viewer’s mind to produce a final image. “Seeing” a Cubist painting, for Kahnweiler, constitutes more an act of conception-of cognitive assembly-than an act of vision: “When ‘real’ details are thus introduced the result is a stimulus that carries with it memory images. Combining the ‘real’ stimulus and the scheme of forms, these images construct the finished object in the mind. Thus, the desired representation comes into being in the spectator’s mind.”52 For Kahnweiler, as for Helmholtz, the viewer combines the known with the seen, such that, as Kahnweiler describes Cubism, we mediate “the two dimensional ‘seen’ with the three dimensional ‘known.’”53

Not only does Cubist criticism use modern theories of vision against vision, it does so in the name of an essentialist realism. Cubist painting, according to early advocates, provides the ontological blueprints-the essential truth-of its objects, such that, as Olivier-Hourcade claims, “The ruling preoccupation of the [Cubist] artists is with cutting into the essential TRUTH of the thing they wish to represent, and not merely the external and passing aspect of this truth.”54 Likewise, for Jacques Rivière, “The true purpose of painting is to represent objects as they really are; that is to say, differently from the way we see them. It [Cubism] tends to give us objects in their sensible essence, their presence; this is why the image it forms does not reveal their appearance.”55 Stripping away “contingent visual and anecdotal elements” from its represented object, “Scientific Cubism,” as Apollinaire terms it, “resulted from the fact that the essential reality was depicted there.”56

Beginning with the Window series and culminating with The First Disk, Delaunay broke decisively with Cubism’s attempted end run around vision into the essentialism of conception. Understanding that modern optical theory does not invalidate vision, as the Cubists argued, but rather reformulates it into a wholly new and modern conception, Delaunay sought to do the same for painting. Engaging rigorously with the structure of perception as described by modern optical theory-a structure premised on the double role of cognition and sensation-Delaunay’s paintings emphasize the role of the mind in the act of vision. And in so doing, Delaunay’s paintings develop a fundamentally new model of visual realism-a visual realism in which painting serves to bridge the body of the viewer with its ground in the world.

The Windows

The immediate optical impact of the Windows announces Delaunay’s commitment to opticality-and thus his break from Cubism-in no uncertain terms. Revisiting Neo-Impressionism’s involvement with chromatic retinal mixing, Delaunay abandoned the Divisionist technique of colored dots, or taches, in favor of larger colored planes of “simultaneous contrast,” a form of optical mixing first theorized by the nineteenth-century optical theorist Michel-Eugène Chevreul. Chevreul explains this process in the introduction to his 1839 De la loi du contraste simultané des couleurs:

A ray of solar light is composed of an indeterminate number of differently colored rays … [these] have been distributed into groups which have been given the names red rays, orange rays, yellow rays, green rays, blue rays, indigo rays, and violet rays; but it must not be supposed that all the rays comprising the same group, red for instance, are identical in color. On the contrary, they are generally considered as differing, more or less among themselves, although we recognize the impression they separately produce as comprising that which we ascribe to red.57

Looking onto a field of contiguous colored planes, we perceive what appear to be discrete colors, “we recognize the impression they separately produce as comprising that which we ascribe to red.” But as Chevreul points out, these apparently unified colors are in fact composed of “an indeterminate number of differently colored rays,” which intermingle and vary according to the proximity, hue, brightness, and surface texture of adjacent colors. Rays of colored light mix with neighboring rays of light to produce the optical mixing of simultaneous contrast. As Blaise Cendrars describes the effect of simultaneous contrast within Delaunay’s paintings, “A color isn’t a color unto itself. It is only a color in contrast with one or more colors. A blue is only blue in contrast with a red, a green, an orange, a grey and all the other colors.”58 No longer is vision construed as a classical piercing of space-a progressive (or diachronic) succession into depth, from the eye through lines of perspective to an ever-receding vanishing point. Rather, vision is reconfigured as a simultaneous (or synchronic) field, distributed across the visual plane.

The emphasis on optical phenomena in the Windows has led art historians to characterize Delaunay as a “retinalist,” a term of disparagement coined by the Cubists to denigrate the “superficial realism” of Impressionism. Rosalind Krauss maintains that Delaunay’s paintings establish a visual homology between the surface of the canvas and the surface of the retina. For Krauss, this “retinalism” eliminates the role of the mind, stripping vision of its conceptual depth: “the ‘arrêt à la rétine,’ the stopping of the analytic process at the retina … [became] a kind of self-sufficient or autonomous realm of activity…. This is the logic we hear, for example, in Delaunay’s assertions that the laws of simultaneous contrast within the eye and the laws of painting are one and the same….”60 More than just a superficial model of vision lacking “the analytic process,” however, retinalism is also premised on speed. Deprived of cognitive function, visuality according to the retinalist model indulges the pure optical stimuli of rapid and kaleidoscopic retinal sensations. Rousseau, for example, argues that Delaunay’s paintings attempt to translate the frenzied pace of modern life as it darts across the surface of the eye: “The paintings of Robert Delaunay are entirely motivated by the avid retinalism of the ‘painting of modern life’: ‘Looking to see,’ to see more and more quickly, to see too much, sometimes to the point of risking a hypnotic vertigo as the eye is carried away by the gyrating movement of colors.”61 Rousseau sees the speed of Delaunay’s retinalism as part and parcel of the artist’s larger efforts to recover a primitive vision, cleaved from knowledge and experiential memory. In a 1997 essay, Rousseau claims that Delaunay “adopts the ‘innocent eye’ thesis defended by Ruskin” and quotes a passage from Paul Valéry’s 1895 Introduction à la méthode de Léonard de Vinci:

Most people see with the intellect much more frequently than with the eye. Instead of colored spaces they become aware of concepts. A tall, whitish cube with holes filled with the reflections of glass is immediately a house: the House! A complex concurrence of abstract qualities. When they move they miss the movement of the rows of windows, the transformation of the surfaces continually changing their aspect-for the concept does not change. They see through a dictionary rather than through the retina….62

It is this innocent eye engagement with a world of pure opticality that Rousseau assigns to Delaunay: “The project of simultanism adds to this refutation of a priori knowledge within representation through a claim to a primitive sense of color.”63 Along similar lines, Spate, too, asserts that Delaunay attempted to locate “a consciousness found in perceptual experience unclouded by conceptual, learnt experience.”64

The view of Delaunay as a “retinalist” who seeks to derive knowledge from the domain of the visual not only is at odds with the experience of his paintings, it also underestimates (or misunderstands) the depth of Delaunay’s engagement with modern optical theory.65 Far from expressing a naive retinalism that is optically immersed in the speed of modern life, Delaunay’s paintings, beginning with the Window series, are at pains to slow the gaze of the viewer, while at the same time coupling vision with experiential knowledge. Consistent with modern optical theory, Delaunay’s concern is to move beyond the two-dimensional surface of the retina and into the depth of visual perception. Knowledge works in tandem with the senses to create spatial perception such that, as Delaunay writes, “We live in depth, we travel in depth. I am there. The senses are there. And the mind!!”66 This equilibrium between the senses and the mind forms the basis of Delaunay’s reconstituted visual realism. And, true to modern optical theory, this is a model of vision in which we must literally learn to see.

Seeing in Time

Delaunay’s Windows perform the slow process of learning to see described by Helmholtz. As Delaunay suggests, it is only over time that we learn to see his paintings: “I had the idea for a kind of painting that would depend on color and contrast, but would develop over time.”67 As discussed above, Helmholtz theorized that young children must learn to cognitively merge two-dimensional optical information with three-dimensional spatial knowledge in order to develop visual perception. Remarkably, this complex process of learning to make sense of what we see is performed experientially in the act of looking at Delaunay’s Windows. We can see this if we begin with what is now generally considered to be the second (though first finished) painting in the series, Simultaneous Windows (1st Part, 2nd Motif, 1st Replica) (Fig. 3).68 Our initial experience of the painting, much like that of an infant, with its undeveloped perceptual acuity, is purely optical; we see a loosely articulated grid of apparently abstract colors with no clear figure-ground distinction, no evident orientation-top to bottom, left to right-and a two-dimensional array of rough, ill-defined chromatic shapes that bleed one into the other. What we see, in other words, replicates the conditions of pure optical information as it is registered on the retina of children prior to perceptual development. Color, in this initial view, both painterly and perceptual, takes precedence over form.

The granting of chromatic priority in the Windows places Delaunay in direct opposition to Cubism’s privileging of form (determined through conception) over effects of color (determined through vision). Kahnweiler, for example, claims Cubism’s suppression of color serves to separate the “primary qualities” of geometric form and spatial position from the “secondary qualities” of color and tactility:

[The Cubists] distinguish between primary and secondary qualities. They endeavor to represent the primary, or most important qualities, as exactly as possible. In painting these are: the object’s form, and its position in space. They merely suggest the secondary qualities such as color and tactile quality, leaving their incorporation into the object in the mind of the spectator.69

Kahnweiler’s reliance on optical physiology notwithstanding, the nineteenth-century realization that infantile vision is initially experienced as pure optical sensation prior to the learned perception of form and space supports Delaunay’s visual prioritization of color. No longer subservient to preexistent three-dimensional objects, color, in Delaunay’s Windows, refuses to function as a coating, applied as a secondary characteristic to a priori forms in space. As Apollinaire maintains, “color is no longer used for just coloring … color is now itself the form…. Color no longer depends on the three dimensions, for it is color that creates them.”70 Walter Benjamin expresses a similar view, which he relates directly to infantile perception, in his 1914 essay “A Child’s View of Color.” Emphasizing that for young children, “their eyes are not concerned with three-dimensionality, this they perceive through their sense of touch,” Benjamin, like Apollinaire, claims the priority of color over form: “for the person who sees with a child’s eyes . . . [color] is not something superimposed on matter, as it is for adults. The latter abstract from color, regarding it as a deceptive cloak for individual objects existing in time and space.”71

Prior to the perception of form, touch and color exist within separate sensory registers before the mind learns to combine them into a cohesive, three-dimensional view of space. This double separation and cohesion of touch and color is also performed in the experience of viewing the Windows. Avoiding a view of color that is “superimposed on matter,” Delaunay strategically engages the retinal mixing of simultaneous contrast to separate color from its material ground. Our experience of color, in looking at the Windows, is necessarily determined through the physiological mixing of color in the eye. This physiological experience of color, determined through the effects of simultaneous contrast, is thus literally stripped of form. At the same time, however, we see the contrasting textures of paint application and the tactile differences in paint created by the wooden ground of the frame and the rough weave of the canvas. The colors we see in Delaunay’s Windows are thus located simultaneously and indeterminately between the pure, physiologically produced colors of the eye (devoid of form and internal to the body) and the material colors of the paint, inextricably bound to its tactile form and ground, external to the body. The effect of simultaneous contrast, in other words, allows for a seemingly impossible and paradoxical expression of color, in which it is simultaneously both separated from and bound to tactile form.

This simultaneous separating and reassembling of sight from touch is further articulated in a curious detail in Delaunay’s painting. If we look at the rough, conspicuous miters of the painted frame, we see how Delaunay has carefully delimited each miter with paint so that two or more colors meet along the exact seam of the wooden joint. This holds true in all but the upper-right corner of the frame, where we see the light blue cross over the forty-five-degree angle of the miter with an exacting deliberation. It is as if Delaunay were at pains to show how the coarse frame that we can feel, and the color that we see, have been cleaved from one another (separated as distinct forms of sensory information) only to be reassembled again in the rest of the frame. In prying the tactile form of the frame from its corresponding color, it seems that Delaunay is intent on showing not simply the structure of painting but also how it relates to the structure of the viewer’s vision.

It is crucial to Delaunay’s project that our perceptual experience of the Windows not rest with this initial sensory view of color and tactile surface but that it extend into an acquired perception of form determined through experiential knowledge. This move beyond the sensory is vital, for in the Windows, Delaunay seeks to foreground the perceptual priority of color over form by replicating in the viewer the process of infantile perceptual development. Accordingly, in the process of learning to see, as in the process of learning to see Delaunay’s Windows, we combine sensory data with cognitive information in order to determine spatial depth and decipher images. This occurs with a series of images that are gradually perceived within the chromatic grid. The first, and most easily discerned, is the green elongated triangle of the Eiffel Tower in the center of the canvas, with its lighter, more difficult to see supporting columns below (Fig. 4). The two small windows on a building front follow in the lower part of the frame. Most difficult to determine are two images that have gone unnoticed in past accounts of the Windows and have taken an especially long time to learn to see. The first is a face in the yellow field of the viewer’s right-hand side of the painting (Fig. 5). The dark green patch of paint two-thirds of the way down the right-hand side functions as lips, while the quarter-circle of yellow beneath forms the chin. The ear nestles in the right-hand corner of the base of the tower. The jaw and then the neck extend the sloping, fragmented line of the tower a fraction away from the corner of the canvas. This line then continues into the outer edge of the picture through the rough, prominent miter of the frame. The second is a rectangular form that cuts diagonally across the surface of each painting in the series, which represents an aerial view of the Champ de Mars, the field on which the Eiffel Tower sits.
Delaunay took the aerial image from a photograph published in 1909 in the journal Comoedia (Fig. 6) and later reworked it in his 1922 lithograph Tour Eiffel et Jardins du Champ de Mars (Fig. 7).72

All of these images make sense. Setting the stage for our seeing, Delaunay tells us his painting is a window. Given Delaunay’s renown for cityscapes of Paris, and the Eiffel Tower especially, it stands to reason that we should come to see the tower. The surrounding sky blue pushes back into space, and the two small dashes of green on the bottom of the frame ground the tower in neighboring buildings. By the same token, we know from experience that facing a window, we can see our mirror image reflected back into our line of vision. Drawing from experience, each individual viewer can learn to see his or her face (or is it Delaunay’s face, the first viewer of this window?). We also know that Delaunay’s past Cubist representations of the tower contained simultaneous and aerial views. Accordingly, there is a visual logic to seeing the Champ de Mars in the diagonal rectangular form. Similarly, we also know from experience how aerial views activate memory in the act of seeing. Looking at the all but unrecognizable sprawl of Paris from the top of the actual tower, we perceive isolated fragments that orient us to the overall structure of the city. Memory works to relate part to whole such that, over time, we begin to form a comprehensible image. As Roland Barthes describes this process of aerial vision from the top of the Eiffel Tower:

Take some view of Paris taken from the Eiffel Tower; here you make out the hill sloping down from Chaillot, there the Bois de Boulogne; but where is the Arc de Triomphe? You don’t see it and its absence compels you to inspect the panorama once again, to look for this point which is missing in your structure; your knowledge struggles with your perception, and in a sense, that is what intelligence is: to reconstitute, to make memory and sensation cooperate so as to produce in your mind a simulacrum of Paris.73

In laying bare the structure of vision, the aerial view demands that the mind cooperate with the eye in an act of visual interpretation. Separating and grouping, moving between recognized fragments and their position within the larger structure, the mind must negotiate between what is known and what is seen. Visual perception, as it is experienced and seen from the top of the Eiffel Tower, is thus a process that takes place over time as the mind pieces together all the parts of the visual puzzle, moving between the memory of past experience and what is given in sight.

Structuring Vision

Duplicating a forty-five-degree rotation of the central canvas almost exactly, the rectangular form of the Champ de Mars twists the structure of painting (the canvas support) into a metaphor for the structure of vision (the aerial view). Indeed, in drawing a connection between visual and pictorial structure, Delaunay radically reformulated the correlation between painting and vision, both of which were classically understood to provide transparent, windowlike views onto the world. Delaunay’s window thickens space so that its visual opacity gives way to depth only after negotiation with experiential knowledge. The orthogonal grid of the Windows exemplifies this imbrication of pictorial and visual structure. Simultaneously marking both the flat, literal surface of the painting and the spatial vectors that have traditionally served as the markers of perspective, the grid fluctuates, back and forth, between the literal form of the painting’s surface and the grid of perspectival vision. Krauss foregrounds this distinction between the literal material flatness of the grid, on the one hand, and its simultaneous relation to vision and the perception of depth, on the other. “The grid,” Krauss writes, “is flattened, geometricized, ordered. It is antinatural, antimimetic, antireal. It is what art looks like when it turns its back on nature.”74 At the same time, Krauss stresses that prior to its inscription on the pictorial surface in the twentieth century, the grid was first articulated in the nineteenth century through the Symbolist fascination with the window: “The grid appears in symbolist art in the form of windows, the material presence of their panes expressed by the geometrical intervention of the windows’ mullions.” Yet what Delaunay’s Windows make explicit is that vision, despite appearances to the contrary, is precisely not like a window. 
In Delaunay’s series, the image of the transparent window vies with the literal flatness of the nontransparent screen of the painting’s surface. In order to see through this particular window, out onto the world of objects, we must first learn to see through its resolutely two-dimensional and opaque grid.

Given Delaunay’s concern with structure, it does not take long for us to realize that in addition to the frontal and aerial view of the Eiffel Tower, there is another view of the tower that we eventually learn to see. Interwoven within the cohesive solidity of this orthogonal structure, we come to find another geometric pattern: a kind of prismatic shattering of the window whereby the rectangular grid splinters into a subset of intersecting triangles. As with the view from the tower, we can also learn to make sense of these interlocking rectangular and triangular forms. If we look, for example, at one of the pen-and-ink studies of the tower done between 1910 and 1912, we see about a third of the way up from the bottom a conspicuous and familiar-looking square that is subdivided into intersecting triangles (Fig. 8). And once we have noticed it, we see it throughout the remainder of the ink drawings in this series as well as in the Cubist series of red towers that Delaunay painted prior to the Windows between 1910 and 1911 (Fig. 9).

The orthogonal and diagonal lattice of the grid presents us with another view of the Eiffel Tower, a view not from straight on or from above but from within: an image of the interlocking grids that form the skeletal structure of the tower itself (Fig. 10). What we see in this close-up image of structure are the pressures and strains (the nuts and bolts) of a structural framework that, once assembled and looked at from a remove, piece together to form another image of the tower.

The First Disk

Delaunay’s Eiffel Tower serves metaphorically to map visual structure onto pictorial structure. In so doing, the metaphoric image emphasizes, as metaphors do, the role of the mind. It is this emphasis on the visual role of the mind and its reliance on metaphor that The First Disk attempts to balance in relation to the body. Without in any way downplaying how we are drawn into the intimate surfaces of the Windows (we feel and literally see ourselves thrust into the heart of the tower), the accent, both metaphoric and experiential, is placed on cognition. The First Disk, by contrast, foregrounds not only the impact of embodied vision as it strikes the painted surface but also the impact of the surface as it strikes embodied vision, an impact made explicit by Delaunay in his multiple references to The First Disk as “a punch.”75

Like the Windows, The First Disk folds the structure of vision into the structure of painting. The First Disk, like the retinal image, presents no clear sense of orientation, no obvious left or right, top or bottom. And, like the retinal image, The First Disk is resolutely two dimensional. The merging of visual and pictorial structure is most immediately evident, however, in the shape of The First Disk. As in Frank Stella’s stripe paintings of the early 1960s (Fig. 11), the depicted shape of the circular bands echoes the literal shape of the canvas. The shape of Delaunay’s First Disk, unlike that of Stella’s stripe paintings, is additionally structured according to the viewer’s vision: to the circular radius of the viewer’s optical cone, whose circumference delimits vision prior to its peripheral distortion. Vision is thus inserted as an active term in the play between the painted concentric bands and the material structure of the circular support. Indeed, it is crucial to the viewing of The First Disk that vision can be aligned with the shape of its outer edge and also with the shape of each of the seven painted bands. For, as painted and visual radii coincide in their respective diminishment, The First Disk positions not just the orientation of the viewer’s vision but also the physical position of his or her body. With each step closer, the viewer’s visual radius realigns with a new painted radius. The First Disk pulls us in, nearer and nearer toward its central blue and red bull’s-eye, only to push us out again, back toward the outer frame of the support.

The diminishing circumference of the painted bands can be seen equally in terms of the body’s movement toward The First Disk and The First Disk’s movement toward the body. The successively smaller circles can be visualized as increasingly proximate cross-sections within a static visual cone extending from The First Disk to the viewer. Accordingly, the outer edge of the painting corresponds to the most distant point of vision, with the absolute center of the painting being the point at which the tip of the cone touches the viewer’s eye.

If the painted bands do function as distinct positions within the viewer’s optical cone, collapsed into the flatness and circular shape of the painting, then the central crossing of the horizontal and vertical lines that bisect The First Disk designates the contact point between the eye and the painting. At the same time, these lines also serve to entwine visual and pictorial structure. As vision is aligned along the cross-hairs of the painting, the intersecting lines reflect the structural cross bracing used to reinforce the stretcher of the tondo.

Delaunay further blurs the boundaries between vision and its pictorial object through his strategic use of simultaneous contrast. In the process of viewing The First Disk, the colors that are perceived exist both physiologically within the eye as a result of retinal mixing and as a material property of the pigment, immanent and bound to the painted surface.76 Color butts against color in The First Disk to produce retinal effects of simultaneous contrast that seem to belong at once to the interiority of the eye and to the exteriority of the painted surface. Neither quite inside the eye nor quite outside on the surface of the painting, the location of color is perceived indeterminately between The First Disk and the viewer. It is this visual indeterminacy (an indeterminacy that blurs the distinction between the viewing subject and its painted object) that relates The First Disk back to the optical model first established in the Windows. Yet, while the emphasis in the Windows is on a metaphoric thrust into and from above the Eiffel Tower, The First Disk performs this visual thrust on an experiential level, as the boundaries are broken between the eye and the painting.

The retinal effects of simultaneous contrast do more, though, than confound the neatly prescribed location of color in either the eye or the painting. The chromatic vibrations of retinal mixing also produce the effect of movement: a slow, optical illusion of gentle rotation and pulse. As Delaunay described the physiologically produced perception of movement in The First Disk: “The experience was convulsive. No more fruit dish, no more Eiffel Tower, no more street. . . . I tried to cry out to them: I have found it! It turns! But they avoided me.”77

Along with the effect of chromatic retinal mixing, the rhythmic movement of simultaneous contrast further obscures the boundary between the eye and the painting: between the physiological pulse of movement and the representation of movement as a static image. We see, therefore, the quiet turns of The First Disk as a perceptual effect, produced through the chromatic vibrations of simultaneous contrast. Delaunay characterized this effect of movement as being distinct from The First Disk’s surface in a 1926 interview: “[Simultaneous] Colors offer the depth of a penetrating rhythm . . . a movement . . . insolently outside of the noticeable surface.”78 But The First Disk also provides a graphic suggestion of movement that is bound to its surface: a representation of movement resembling a spinning propeller or wheel. Hidden in plain sight like Poe’s purloined letter, the representational function of The First Disk sits in open view, even while it is concealed within the rings of its own abstraction. Like the colors of simultaneous contrast, the exact location of The First Disk’s movement is again indeterminate: both inside the eye of the viewing subject and outside, bound to the form of the painted image. It is unclear in viewing The First Disk whether we perceive movement as a physiological response to its circular color patterns or as a representational form independent of the eye. Unable to clearly separate physiological effect from representational form, the viewer is folded into an indeterminate space that combines the perception of movement with the stasis of the image. As Delaunay claimed: “That’s what I tried to realize with my simultanism, that which can properly be called static movement.”79

Delaunay’s first attempt to represent movement in relation to the effects of simultaneous contrast occurs in the later paintings of the Window series. This is seen, for instance, in Delaunay’s first shaped canvas, perhaps the most abstracted work in the series, Les fenêtres (Fenêtres ouvertes simultanément, 1re partie, 3e motif), or Windows (Windows Open Simultaneously, 1st Part, 3rd Motif) (Fig. 12). Here, more than in any other painting in the series, the triangular motif of the Eiffel Tower is obscured in favor of the interchromatic reactions of the orthogonal and triangular grid patterns. All but gone are the iconic images of the Champ de Mars, the tower, and the reflected face whose barest residue can be seen in the yellow forms just to the right of center. If the small green square that sits in the middle of the white broken triangle on the left side of the painting is to be taken as a window, it is by no means obviously so. The retinal force of the chromatic grid and its material assertion of the picture plane have all but subsumed the composition. Within the rectilinear and diagonal grid pattern, however, we see a new form that appears for the first time in the series: the S-shaped patch of white on the right side of the painting.

The appearance of this serpentine form in the otherwise angular network marks Delaunay’s first attempt to represent circular movement. Just as the oval shape of the canvas carries the eye around the edge

Hospital Branches Out: Akron General to Open New Wellness Facility in Stow That Will Feature Fitness Center and ER

By Cheryl Powell, The Akron Beacon Journal, Ohio

Jun. 1–Whether they’re trying to stay healthy or they need quick treatment when medical problems strike, residents of northern Summit County now have a new option closer to home.

Akron General Medical Center is unveiling its Health & Wellness Center-North, located in a highly visible spot off state Route 8 near Steels Corners Road in Stow.

The medical-fitness-center portion of the new, $32 million facility will open to members next week following a private, invitation-only ribbon-cutting event on Wednesday night.

The fitness and wellness programs in the 97,000-square-foot building are modeled after those at Akron General’s popular medical fitness center in Bath Township, known as Health & Wellness Center-West.

But unlike the Montrose-area facility, the Stow location also will offer a 24-hour branch of Akron General’s emergency department, set to open in early July.

The project will bring quicker and better access to care for people in northern Summit County, said Alan Bleyer, Akron General’s president and chief executive.

“That’s part of the plan — to make it as convenient as possible,” Bleyer said.

Stow Mayor Karen Fritschel called the project “a great plus” for the community.

Akron General already has worked with the city to launch a communitywide exercise program and has offered one of the three pools at the new center for use by the Stow-Munroe Falls High School swim team.

“I’m looking forward to so many people participating in a healthy lifestyle,” Fritschel said.

The new facility is targeting customers in Stow and surrounding communities, including Hudson, Cuyahoga Falls, Boston Heights, Tallmadge, Munroe Falls, Peninsula and Twinsburg.

The hospital is the latest health care provider to invest in Summit County’s growing, affluent northern tier, where residents often split their allegiance between hospitals in Akron and the Cleveland area.

University Hospitals and the Cleveland Clinic both are moving forward with outpatient facilities in nearby Twinsburg.

And two years ago, Akron General’s rival, Summa Health System, teamed up with Akron Children’s Hospital to open an outpatient facility in Hudson.

The portion of the facility run by Children’s includes diagnostic testing, doctors’ offices and an after-hours branch of its emergency department that’s staffed from 11 a.m. until 11 p.m. on weekends and from 4 p.m. until 11 p.m. on weekdays.

Summa is scheduled to open a 65,000-square-foot medical fitness center — called Summa Wellness Institute — in September as part of the Hudson facility.

Memberships available

Akron General Health & Wellness Center-North already has sold about 1,300 memberships, said Doug Ribley, Akron General’s vice president for health and wellness services.

The Akron General Health & Wellness Center-West in Bath Township opened in 1996 and has 9,000 members.

The Stow facility will be able to accommodate as many as 8,000 members, who each will receive an initial medical assessment and access to all-new equipment and programs developed by medical professionals, Ribley said.

“There really is medical supervision, and that’s the key difference,” he said.

Members pay a one-time fee ranging from $275 for individuals to $350 for families and monthly fees of $53 to $107.

Memberships include access to both locations.

The new facility also is marketing sports performance enhancement programs, a service that isn’t offered yet at the Bath Township location.

Emergency facility

The other new feature — the branch emergency room — will be staffed around the clock by emergency medicine doctors.

The 13-bed ER will be able to handle medical emergencies for children and adults, with the exception of serious trauma injuries requiring surgery, said Dr. Scott Felten, medical director for the new emergency department.

The Stow branch will accept walk-in patients as well as those transported by ambulances, he said. People who need inpatient care or surgery will be transferred by a private ambulance company to Akron.

Felten expects the Stow location to treat about 18,000 patients annually.

“Some of those 18,000 will be people who never used Akron General before who are choosing to come to this facility because it’s closer to them,” he said.

The new facility also will have an MRI, CT scanner, X-ray machines and laboratory services that are relocating from a nearby center in Hudson.

In the future, the Stow location could offer after-hours appointments for diagnostic tests, said Bleyer, the Akron General president.

The memberships and services are expected to bring in enough money to pay for the $32 million investment within nine years, Bleyer said.

Cheryl Powell can be reached at 330-996-3902 or [email protected].


Copyright (c) 2007, The Akron Beacon Journal, Ohio

Distributed by McClatchy-Tribune Information Services.

Parental Influence on Eating Behavior: Conception to Adolescence

By Savage, Jennifer S; Fisher, Jennifer Orlet; Birch, Leann L

Introduction

Eating behaviors evolve during the first years of life as biological and behavioral processes directed towards meeting requirements for health and growth. For the vast majority of human history, food scarcity has constituted a major threat to survival, and human eating behavior and child feeding practices have evolved in response to this threat. Because infants are born into a wide variety of cultures and cuisines, they come equipped as young omnivores with a set of behavioral predispositions that allow them to learn to accept the foods made available to them. During historical conditions of scarcity, family life and resources were devoted to the procurement and preparation of foods, which were often low in energy, nutrients, and palatability. In sharp contrast, today in non-Third World countries children’s eating habits develop under unprecedented conditions of dietary abundance, where palatable, inexpensive, ready-to-eat foods are readily available.

In this review, we describe factors shaping the development of children’s food preferences and eating behaviors during the first years of life, in order to provide insight into how growing up in current conditions of dietary abundance can promote patterns of food intake which contribute to accelerated weight gain and overweight. In particular, we focus on describing children’s predispositions and parents’ child feeding practices. We will see that the feeding practices that evolved across human history as effective parental responses to the threat of food scarcity, can, when combined with infants’ unlearned preferences and predispositions, actually promote overeating and overweight in our current eating environments. In addition to the relatively recent changes in our eating environments, concurrent reductions in opportunities for physical activity undoubtedly also contribute to positive energy balance and obesity, but are outside the scope of this article.

The first five years of life are a time of rapid physical growth and change, and are the years when eating behaviors that can serve as a foundation for future eating patterns develop. During these early years, children are learning what, when, and how much to eat based on the transmission of cultural and familial beliefs, attitudes, and practices surrounding food and eating. Throughout, we focus on the vital role parents and caregivers play in structuring children’s early experiences with food and eating, and describe how these experiences are linked to children’s eating behavior and their weight status.

The Current Eating Environment

These days, food and drink are available in most venues of everyday life. As of 2002, there were 514,085 food-service establishments in the United States and an additional 152,582 stores where food and beverages could be purchased.1 In addition, a growing variety of inexpensive and energy-dense foods have become available in increasingly larger portions. A typical American supermarket carries 45,000 items2 and consumer portions served by restaurants and fast-food establishments are often double current USDA recommended serving sizes.3

In most families, women still have primary responsibility for feeding children.4 Changes in employment patterns and family structure, however, leave women with less time to devote to this activity. From 1975 to 2004, labor force participation among mothers with children under eighteen years of age increased from forty-seven to seventy-one percent.5 Moreover, both parents work in sixty-one percent of two-parent families with children under eighteen years of age.6 Among single mothers, seventy-two percent are employed. Additionally, more women than men parent and feed their children without the assistance of a spouse: twenty-three percent of children under eighteen years of age live with their mother only.7

One consequence of these trends is that young children are routinely fed by someone other than a parent. In fact, thirty-one percent of preschool-age children receive out-of-home childcare which includes mealtime care from a grandparent or other relative, and forty-one percent participate in organized childcare.8 In addition, families spend less time eating meals together. Only fifty-five percent of married parents and forty-seven percent of single parents eat breakfast daily with their preschool-age child.9 Finally, an increasing proportion of food that children eat is prepared and consumed away from home.10 About forty percent of family food dollars are now spent on food away from the home.11 In these contexts children may be served particularly large portions12 and consume more energy and fat than when eating at home.13 Collectively, these trends suggest that today’s young children spend less time eating at the family table and have more routine exposure to large portions of palatable, energy-dense foods than previous generations did.

Early Taste and Experience with Food Flavors in Amniotic Fluid

A growing body of evidence suggests that the food choices a mother makes during her pregnancy may set the stage for an infant’s later acceptance of solid foods. Amniotic fluid surrounds the fetus, maintaining fetal temperature, and is a rich source of sensory exposure for infants. Many flavors in the maternal diet appear to be present in amniotic fluid. Adult sensory panels have detected odors and compounds of garlic,14 cumin, and curry15 in the amniotic fluid of pregnant women ingesting oil of garlic capsules and spicy foods, respectively. Because taste and smell are already functional during fetal life, and because the fetus regularly swallows amniotic fluid, the first experiences with flavor occur prior to birth. Exposure to these “transmittable” flavors influences the acceptance of these flavors by the infant postnatally.16 Julie Mennella and colleagues examined the influence of repeated prenatal exposure to carrot juice and found that women who consumed carrot juice for three consecutive weeks during their third trimester of pregnancy had infants who exhibited fewer negative facial expressions when first introduced to carrot-flavored cereal as compared to plain cereal.17 These findings reveal that experience with dietary flavors begins as the fetus is exposed to flavors from the maternal diet in utero, and that this early experience can provide a “flavor bridge” that can begin to familiarize the infant with flavors of the maternal diet. As we will see, familiarity plays a key role in the acquisition of food and flavor preferences.

The Impact of Breast Milk Feeding

Breastfeeding is recommended as the optimal feeding method for the first six months of life, followed by the introduction of solids and continued breastfeeding for a minimum of one year.18 These recommendations are largely based on evidence that breast milk supports normal growth, has immunological properties that provide some early protection from infection, and is associated with lower risk of infant morbidity and mortality.19 A growing body of literature also suggests that breastfeeding affords a small, yet consistent, protective effect against obesity. Specifically, Christopher Owen and colleagues conducted a systematic review of sixty-one studies, of which twenty-eight provided odds ratios to examine the influence of breastfeeding on obesity from infancy to adulthood. They found that breastfeeding was associated with a reduced risk of obesity among infants, young children, older children, and adults, with unadjusted odds ratios of 0.50, 0.90, 0.66, and 0.80, respectively.20 Moreover, Stephan Arenz and colleagues reviewed twenty-eight studies investigating the association between breastfeeding and childhood obesity that met the following inclusion criteria: relative risk had to be reported, age at last follow-up had to be between five and eighteen years, feeding mode had to be reported, and obesity had to be defined using BMI. Of these twenty-eight studies, nine studies comprising more than 69,000 children were eligible for the meta-analysis. They found a significant adjusted odds ratio (AOR) for “ever breastfed” of 0.78 (95% CI 0.71-0.85) in the fixed model.21 These odds ratios, which are significantly lower than 1.0, indicate a significantly lower risk for subsequent obesity among those who were breastfed, even when adjusting for other factors.
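The fixed-model estimate reported by Arenz and colleagues comes from inverse-variance weighting of each study's log odds ratio, so studies with narrower confidence intervals get heavier weight. A minimal sketch of that pooling step, using invented study values (not the actual data from the meta-analysis) and recovering each standard error from the reported 95% CI:

```python
import math

def pooled_or_fixed(studies):
    """Fixed-effect (inverse-variance) pooling of odds ratios.

    Each entry is (OR, CI_low, CI_high). The standard error of log(OR)
    is recovered from the CI width: (ln(hi) - ln(lo)) / (2 * 1.96).
    """
    num = den = 0.0
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2          # inverse-variance weight
        num += w * math.log(or_)
        den += w
    log_or = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(log_or),
            math.exp(log_or - 1.96 * se_pooled),
            math.exp(log_or + 1.96 * se_pooled))

# Invented per-study odds ratios for "ever breastfed" vs. later obesity:
pooled, lo, hi = pooled_or_fixed([(0.75, 0.60, 0.94),
                                  (0.82, 0.70, 0.96),
                                  (0.71, 0.55, 0.92)])
print(f"pooled OR = {pooled:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")
```

A pooled OR below 1.0 whose confidence interval also excludes 1.0 is what licenses the article's phrase "significantly lower risk."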

In one review of twenty-two high-quality studies, fifteen found protective effects, which were more consistently noted among school-aged children and adolescents than among younger children.22 One possible explanation is that the impact of breastfeeding on subsequent weight status may be an example of metabolic or behavioral programming, in which the effect only emerges later in development and may not be clearly manifested until adolescence or adulthood. However, at this point, the mechanism(s) by which breastfeeding exerts protective effects are not understood. Breastfeeding influences the developing anatomy and physiology of the gastrointestinal tract in ways that differ from formula feeding, such that breastfed and formula-fed individuals may differ in the absorption and utilization of nutrients later in life.23 In addition, there is some evidence for two complementary behavioral mechanisms that may explain the protective effects of breastfeeding. The first involves the effects of breastfeeding on food acceptance and the second involves the developing controls of energy intake.

The sensory properties of breast milk may facilitate the transition to the modified adult diet. Many flavors of the maternal diet appear in breast milk. For example, adult sensory panels can detect odors of garlic,24 alcohol,25 and vanilla26 in milk samples of lactating women who ingested those flavors prior to providing milk samples. Flavors in human milk influence infant consumption. For example, breast milk flavored with garlic27 and vanilla28 increased infant sucking time at the breast compared to breast milk without garlic or vanilla flavor. Mennella and colleagues also tested the hypothesis that experience with flavor in breast milk modifies the infants’ acceptance and enjoyment of those foods in a sample of forty-five mothers and their babies who were randomly assigned to one of three groups. The first group drank carrot juice during pregnancy and water during lactation; the second group drank water during pregnancy and carrot juice during lactation; and the control group drank water during both periods.29 Results revealed that repeated postnatal exposure to carrot flavors increased acceptance and enjoyment of carrot flavor in infant cereal. These findings indicate that flavors in breast milk, which vary with the maternal diet, provide the infant with a changing flavor environment. This early flavor experience appears to facilitate the infant’s acceptance of foods of the modified adult diet, especially those foods consumed by the mother during lactation.30 In contrast to the varied flavor experience provided by breast milk, formula provides the infant with the same consistent flavor experience.

There is limited evidence that these early differences in flavor experience provided by breast and formula feeding also influence infants’ subsequent acceptance of solid foods, especially those foods that might not otherwise be readily accepted, such as vegetables. For example, Susan Sullivan and Leann Birch conducted a short-term longitudinal study of nineteen breastfed and seventeen exclusively formula-fed four- to six-month-old infants and their mothers to examine the influence of milk feeding regimen and repeated exposure on acceptance of their first pureed vegetable. Participants were randomly assigned to be repeatedly fed one vegetable, either pureed peas or green beans. Results revealed that infant feeding regimen moderated the effects of repeated exposure; the initial intake of vegetables did not differ between breastfed and formula-fed infants, but breastfed infants increased their intake more rapidly over days than formula-fed infants, and continued to consume significantly more vegetables after ten exposures.31 These findings are consistent with the view that breastfeeding can more easily facilitate the acceptance of solid foods compared to formula feeding.

A second hypothesis regarding the protective effect of breastfeeding on later risk of overweight is that breastfeeding provides the infant with greater opportunity for self-regulation of intake. A limited body of evidence suggests that infants have some ability to self-regulate caloric intake by adjusting the volume of milk consumed,32 although this can be influenced by maternal feeding practices. In bottle feeding, the infant can obtain milk with less effort than from the breast, so the formula-fed infant is more passive in the feeding process and has fewer opportunities to control the amount consumed, making it easy to overfeed the infant. In contrast, the breastfed infant must take an active role in order to transfer milk from the breast. The higher levels of maternal control that are possible with bottle feeding reduce infants’ opportunities to control the amount consumed at a feeding.33 Limited evidence indicates that bottle-fed infants consume more milk and gain weight more rapidly than breastfed infants, increasing their risk for childhood obesity.34 Moreover, research suggests the difference in milk intake between breastfed and formula-fed infants becomes greater with age.35 In short, while evidence is limited, breastfeeding and formula feeding provide very different opportunities for early self-regulation of energy intake, and additional research is needed to assess how these differing feeding methods influence the developing controls of energy intake, weight gain, and risk for childhood obesity.

Whether and how infants exert control during feeding to regulate energy intake are not new questions. Clara Davis conducted seminal research in the late 1920s and 1930s, providing the first evidence of an unlearned ability to self-regulate energy intake in infancy. In Davis’ studies, infants and toddlers grew well and had few illnesses when given the opportunity to select and consume a variety of simply prepared foods at each meal.36 As previously mentioned, Samuel Fomon and colleagues revisited the issue of self-regulation of energy intake by systematically varying the energy density of infant formula.37 By six weeks of age, full-term infants who were fed a concentrated formula (100 kcal/100 mL) consumed smaller volumes than did those infants who were fed a diluted formula (54 kcal/100 mL), such that total daily energy intake did not differ between the two groups. In 1977, observational data from Sharon Pearcey and John De Castro complemented these experimental findings, revealing that individual variability in energy consumed at meals among twelve-month-old infants was close to forty-seven percent, while variability in daily energy intake was seventeen percent.38 Similarly, Roberta Cohen and colleagues39 found no difference in daily energy intake among infants four to six months of age who were fed only breast milk versus those who were fed breast milk along with complementary foods, suggesting that infants were adjusting their intake of breast milk in response to the addition of solid foods.
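Fomon's dilution design rests on simple energy accounting: an infant defending a fixed daily energy intake must consume inversely more volume as the formula is diluted. A sketch with an illustrative (invented) daily energy target; the two densities follow the study's formulas, read here as kcal per 100 mL:

```python
TARGET_KCAL = 500.0  # illustrative daily energy target, not a measured value

def volume_ml_per_day(density_kcal_per_100ml, target_kcal=TARGET_KCAL):
    """Daily volume (mL) needed to reach the energy target at a given density."""
    return target_kcal / density_kcal_per_100ml * 100.0

concentrated = volume_ml_per_day(100.0)  # concentrated formula
diluted = volume_ml_per_day(54.0)        # diluted formula

# Perfect self-regulation predicts a volume ratio of 100/54, about 1.85,
# with equal daily energy intake in both groups: the pattern Fomon observed.
print(f"{concentrated:.0f} mL/day vs {diluted:.0f} mL/day")
```

The observed result (smaller volumes of concentrated formula, equal daily energy) is exactly what this accounting predicts if infants regulate on energy rather than volume.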

The ability to regulate energy intake has also been described in preschool-age children. Children responded to covert manipulations in the energy content of foods served as first courses by adjusting their subsequent intake, such that their total energy intake for the meal and energy consumed over a thirty-hour period40 was maintained across conditions in which low- or high-energy foods were provided as a first course. Differences among preschool-age children in their ability to self-regulate energy intake have been associated with differences in weight status. For example, Susan Johnson and Leann Birch examined the influence of weight status on regulation of energy intake in seventy-seven three- to five-year-old children. Each child participated in two treatments, differing only in whether children received a low- or high-calorie preload of fruit-flavored drinks of equal volume before lunch. After twenty minutes, children self-selected intake from a familiar lunch menu (i.e., turkey hot dogs, American cheese, unsweetened applesauce, carrot sticks, fruit bars, and 2% milk) to assess their ability to adjust food intake in response to changes in energy density of the preload drinks. They found that children who showed little evidence of adjusting their lunch intake in response to the energy differences in the preloads were significantly heavier.41 Leann Birch and Jennifer Fisher used a similar protocol to investigate the association between weight status and children’s caloric compensation in a sample of 197 non-Hispanic white five-year-old girls. Data were used from two separate lunches which differed in whether a low- or high-energy preload drink was consumed prior to lunch. Again, after a brief delay, participants ate a self-selected lunch (i.e., sandwich, carrots, applesauce, cookies, and milk) ad libitum.
Results indicated substantial individual differences in the extent to which girls adjusted their energy intake at lunch in response to the differences in preload energy content. On average, the girls compensated for only about half of the energy in the preloads. In this case, greater maternal restriction in feeding was associated with poorer compensation and higher weight status in daughters.42 While infants show a predisposition to respond to differences in energy density early in life, the child’s early experience, including child feeding practices, shapes the development of individual differences in self-regulation abilities.43 That infants and young children are capable of self-regulating energy intake under laboratory conditions, in the absence of adult intervention, and in the presence of only simply prepared healthy foods, does not speak to the extent to which this ability can be exercised in current family environments.
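The degree of adjustment in these preload protocols is conventionally summarized as a percent caloric compensation score (often called COMPX): the change in test-meal intake divided by the change in preload energy. A sketch with invented values, not the study's data:

```python
def compensation_pct(meal_after_low, meal_after_high,
                     preload_low, preload_high):
    """Percent caloric compensation across low- and high-energy preloads.

    100 means the child cut meal intake by exactly the extra preload
    energy; 0 means no adjustment at all. All arguments are in kcal.
    """
    return 100.0 * (meal_after_low - meal_after_high) / (preload_high - preload_low)

# Invented example: preloads of 50 vs. 150 kcal; the child eats 400 kcal
# of lunch after the low preload and 350 kcal after the high preload.
score = compensation_pct(400, 350, 50, 150)
print(score)  # 50.0 - the child compensates for half the preload difference
```

A score near 50, as in this example, corresponds to the roughly half compensation reported for the girls in the study; heavier children in these protocols tended to score lower.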

The Influence of Genetic Predispositions and Repeated Exposure on Food Acceptance during Infancy and Childhood

Infants do not have to learn preferences for the basic tastes (sweet, salty, sour, bitter, and umami). Rather, they are predisposed to prefer some tastes and reject others. Shortly after birth infants express preferences for sweet tastes and reject those that are sour and bitter.44 Preferences for salt are apparent at approximately four months.45 These predispositions are thought to have evolved to serve a protective function, by encouraging the consumption of energy-rich foods (often signaled by the sweet taste) and discouraging ingestion of toxins (signaled by bitter and sour tastes).46 These taste preferences are unlearned, and become very apparent to parents once infants begin the transition from exclusive milk feeding to a modified adult diet. In general, sweet foods such as fruits, flavored yogurts, and juices are readily accepted by infants, while foods such as vegetables, which are not sweet, and may contain bitter components, are initially rejected. Laboratory studies have confirmed that young children readily form preferences for flavors associated with energy-rich foods.47 Even the fruits and vegetables most preferred by children (e.g., bananas, apples, potatoes, and peas) tend to be those that contain the most energy.48 Innate preferences for energy-dense foods may be one catalyst acting to promote energy intake among children in abundant dietary environments.

Alternatively, children’s acceptance of foods that have less intrinsic hedonic appeal to children (such as vegetables) is shaped by their experience with those foods. Children develop their food likes and dislikes by eating, and by associating food flavors with the social contexts and the physiological consequences of consumption. The tendency for children to initially reject novel foods is often just a case of neophobia. Several studies have demonstrated that children’s preferences for and acceptance of new foods are enhanced with repeated exposure to those foods in a non-coercive setting. New foods may need to be offered to preschool-age children ten to sixteen times before acceptance occurs. At the same time, simply offering new foods will not necessarily produce liking; having children taste new foods is a necessary part of the process.49 Awareness of this normal course of food acceptance is important because approximately one quarter of parents with infants and toddlers prematurely drew conclusions about their child’s preference for foods after two or fewer exposures.50

Transition to the Modified Adult Diet: Food and Beverage Consumption

During the first year of life, eating patterns undergo rapid evolution. Initially, infants obtain all nutrition from a single fluid source (i.e., milk) consumed approximately every two to four hours. By the end of the first year, however, the infant has moved to a modified meal and snack pattern, consuming many foods found in their culture’s adult diet. The American Academy of Pediatrics (AAP) recommends breastfeeding for the first four to six months of life, followed by the introduction of complementary foods once the child is developmentally ready.51 At this point, the evidence regarding the impact of early complementary feeding on the development of obesity is inconsistent. Only four longitudinal studies were located that examined the association between the timing of complementary food introduction and weight gain. Two of these studies linked the early introduction of solid foods and obesity at twelve months52 and eighteen months53 of age, independent of breastfeeding. However, the other studies, using similar designs, failed to note associations between the early introduction of solid foods and childhood obesity at twenty-four months54 and seven years of age.55 Therefore, there is a need for well-designed prospective, longitudinal studies examining this association in order to better understand the influence of the early introduction of solids on the development of childhood obesity.

Results from a recent survey, the Feeding Infants and Toddlers Study, which provides data on the dietary patterns of 3,022 infants and toddlers four to twenty-four months of age, have also raised concerns regarding excessive energy intake as well as the quality of young children’s diets.56 Barbara Devaney and colleagues found that mean reported energy intakes exceeded estimated energy requirements by ten percent for infants four to six months, twenty-three percent for infants seven to twelve months, and thirty-one percent for toddlers twelve to twenty-four months.57 Analyses also revealed that children consumed significant amounts of energy-dense but nutrient-poor foods.58 For example, french fries were the most common vegetable consumed among fifteen- to eighteen-month-olds, and approximately fifty percent of seven- to eight-month-olds consumed some type of dessert, sweet, or sweetened beverage. Results also revealed that eighteen to thirty-three percent of infants and toddlers consumed no servings of vegetables, and twenty-three to thirty-three percent consumed no fruits. Moreover, fewer than ten percent of infants and toddlers consumed dark green, leafy vegetables.59 Thus, it appears that parents and caregivers need encouragement to repeatedly offer nutrient-dense, age-appropriate foods (e.g., fruits, dark green and yellow vegetables, and yogurt) as opposed to convenient energy-dense foods and snacks.

The large amounts of fruit juice and sweetened beverages that begin to appear in young children’s diets have also been cause for concern. The AAP recommends no more than four to six ounces a day of fruit juice for one- to six-year-old children. By nineteen to twenty-four months of age, however, mean intake among children who consumed 100% fruit juice was 9.5 ounces a day, with ten percent of toddlers consuming over fourteen ounces a day.60 One cross-sectional study of two- and five-year-old children found that consumption of twelve fluid ounces per day of fruit juice was associated with obesity and short stature.61 The findings of this study have not been replicated, however, and several longitudinal studies report no association between fruit juice consumption and overweight.62 In fact, Melanie Smith and Fima Lifshitz reported an association of excess juice consumption with nonorganic failure to thrive, suggesting that large intakes of fruit juices may displace more calorie- and nutrient-dense foods.63

Alternatively, Jean Welsch and colleagues used a retrospective longitudinal design to evaluate juice intake and the persistence of overweight among two- to three-year-old children.64 Children identified as at risk for overweight who consumed sweet drinks (e.g., vitamin C containing juices, other juices, fruit drinks, and soda) as infrequently as one to two times daily increased their odds of becoming overweight by sixty percent. Consumption of sweetened beverages (i.e., fruit drinks, soda) has also been associated with excessive weight and weight gain among eleven- to twelve-year-old and nine- to fourteen-year-old children.65

Parents as Providers and Models

Parents powerfully shape children’s early experiences with food and eating, providing both genes and environments for children. Children’s eating patterns develop in the early social interactions surrounding feeding. As young omnivores, they are ready to learn to eat the foods of their culture’s adult diet, and their ability to learn to accept a wide range of foods is remarkable, especially given the diversity of dietary patterns across cultural groups. Several decades of research inside and outside of the laboratory have revealed that, as in other areas of children’s development, caregivers act as powerful socialization agents.66 Parents select the foods of the family diet, serve as models of eating that children learn to emulate, and use feeding practices to encourage the development of culturally appropriate eating patterns and behaviors in children.

Caregivers as Providers

Studies conducted outside the laboratory support the notion that children’s preference and intake patterns are largely a reflection of the foods that become familiar to them. Research indicates that the extent to which fruits and vegetables are present, readily available, and accessible in the home correlates positively with the level of consumption in school-age children.67 For example, Karen Cullen and colleagues examined the relationships among availability, accessibility, and preferences for fruit, 100% fruit juice, and vegetables in a sample of eighty-eight fourth through sixth graders and their parents. Results revealed that availability was the only significant predictor of intake for children reporting high preferences, whereas for children reporting low preferences, availability and accessibility were significantly related to consumption of fruits, vegetables, and 100% fruit juice. Thus, accessibility appears to be particularly important for children with low preferences for fruit, 100% fruit juice, and vegetables.68 Similarly, Polly Kratt, Kim Reynolds, and Richard Shewchuk examined the role of availability of fruits and vegetables in the home and found that homes with greater availability had a stronger set of motivational factors (e.g., self-efficacy and behavioral capability/knowledge of parents) for fruit and vegetable consumption compared to homes with low fruit and vegetable availability. Furthermore, the availability of fruits and vegetables was a moderating variable for intake by both parents and children.69 The findings are much the same for milk drinking. In a study of beverage intake among girls during middle childhood, milk consumption among girls who were almost always or always served milk at meals and snacks was two times higher than it was for girls rarely or never served milk. Similarities in milk intake between mothers and daughters were also attributable to the extent that milk was served at meals.70

Children’s intake of particular foods is influenced not only by the types of foods present in the home but also by the amount of those foods available to them. Recent laboratory studies provide causal evidence that large food portions promote greater energy intake by children as young as two years of age. When age-appropriate portions of an entrée were doubled in size, preschool-age children ate approximately twenty-five to twenty-nine percent more than the age-appropriate portions of those foods, even though they consumed only two thirds of the smaller portions of the entrée and were not aware of increases in the portion size.71 These effects were attributable principally to increases in the average size of children’s bites. Children did not adequately reduce their intake of other foods to compensate for their intake of larger portions of the entrée. As a result, energy intake was nine to fifteen percent higher at meals during which larger portions were served.
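The arithmetic connecting the entrée-level effect (roughly 25-29% more entrée eaten) to the smaller meal-level effect (9-15% more total energy) is simple accounting, since intake of the other foods on the tray stays roughly flat. A sketch with invented meal values, not the studies' data:

```python
# Invented tray composition (kcal consumed), not the studies' measurements.
entree_reference = 150.0   # entree intake at the age-appropriate portion
other_foods = 250.0        # everything else on the tray, assumed unchanged

entree_large = entree_reference * 1.27  # ~27% more entree with doubled portion

meal_reference = entree_reference + other_foods
meal_large = entree_large + other_foods
pct_increase = 100.0 * (meal_large - meal_reference) / meal_reference

print(f"meal energy up {pct_increase:.1f}%")  # lands inside the reported 9-15% range
```

Because the entrée is only part of the meal, a ~27% entrée increase dilutes to a meal-level increase of around ten percent, which is why the two reported figures are consistent rather than contradictory.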

Adults, like children, eat more when served large portions.72 However, for both children and adults, the intake response to large portions is not consistently associated with weight status, suggesting that the relevance of large portions to weight gain is not merely a function of exposure; rather, it reflects a particular susceptibility of some individuals, especially overweight adults, to overeat when large portions are available. Evidence further suggests that the larger portions served to consumers at restaurants, in convenience and grocery stores, and in other retail settings are driving increases in the average size of portions consumed both at home and away from home,73 as well as increasing the daily energy intake of children.74

Caregivers as Models

Children learn about food through the direct experience of eating and by observing the eating behavior of others. Leann Birch found that the selection and consumption of vegetables by preschool-age children were influenced by the choices of their peers.75 When preschool-age children observed the eating behavior of adults, it had a similar effect. For example, Helen Hendy and Bryan Raudenbush found that children’s intake of a novel food increased at those meals during which they observed a teacher enthusiastically consuming the food. Interestingly, enthusiastic modeling by a teacher was not as effective when children were seated with peers who exhibited different food preferences than did their teachers.76 While one might expect modeling by parents to have a similar if not stronger influence on children’s preference and choices, experimental evidence is lacking.

Studies conducted outside the laboratory also provide indirect evidence for the role of social modeling. For example, low-income adolescent girls who reported seeing their fathers consume milk had higher calcium intakes than did those girls who did not see their fathers drink milk.77 Parental modeling has also been associated with greater fruit juice and vegetable intake among school-age children.78

Parenting Styles and Children’s Eating Behavior

Parenting, by definition, involves the task of caring for and feeding one’s children. Accordingly, child feeding practices have evolved as parental responses to perceived environmental threats to children’s well-being.79 For nearly all of human history, the major threats to child health have been food scarcity and infectious disease. Feeding practices developed to address these threats have been passed from one generation to the next, and have become traditional practices routinely used by parents without question. However, in today’s environment, we must ask, “Are these child feeding practices, evolved to address the threats posed by food scarcity and infectious disease, effective in dealing with the current threats to child health posed by too much food, obesity, and its comorbidities?” The simple answer to this question is “no.”

Traditional feeding practices used with infants and young children include feeding children frequently and quickly in response to distress, offering foods designed especially for infants and young children, offering preferred foods if possible, and encouraging children to eat as much as possible when food is available, often involving the use of coercion and force feeding. There are, of course, differences across cultures in the specifics of these practices and in the particular foods offered to children. There are also differences within cultures among parents’ feeding practices, reflecting both cultural variation among parents and parents’ differing goals for their children. In addition, parents’ feeding practices are influenced by children’s individual characteristics, including age, sex, weight status, and eating behavior.

Parenting practices and parent-child interaction during feeding vary in the degree to which children are allowed some degree of autonomy in eating.80 These interactions can have a powerful influence on children’s developing food preferences, intake patterns, diet quality, growth, and weight status. However, it is important to note that child feeding practices may have unintended effects on children. For example, parents’ feeding practices often include attempts to increase children’s intake of nutrient-dense foods (e.g., “eat your vegetables”) or restrict children’s access to and intake of “unhealthy” or “junk” foods (e.g., “no, you can’t have any cookies now”). Parents using these practices may intend to promote healthier diets in children, and perhaps even prevent obesity, but research reveals that such attempts can have negative effects on children’s food preferences and their self-regulation of energy intake.81

In general, parental control of feeding practices, especially restrictive feeding practices, tends to be associated with overeating and poorer self-regulation of energy intake in preschool-age children.82 The manner in which eating behavior is affected depends on the nature of the directive. For example, using food as reward for good behavior increased preschool-age children’s preferences for those foods,83 and because sweet, palatable foods are often used as rewards, this practice can have the unintended consequence of promoting children’s preferences for energy-dense palatable foods that are often unhealthy. Parents may also reward children for consuming healthy foods in hopes of increasing children’s intake of foods such as vegetables; but research has demonstrated that this practice can actually result in children learning to dislike and avoid those foods.84

Restricting children’s access to “forbidden” foods also has a paradoxical effect on food preference and energy intake. Research reveals that placing a preferred food in sight, but out of reach, decreases children’s ability to exhibit self-control over obtaining the food.85 As a result, when restriction is lifted, and “forbidden” foods are present, children often have difficulty controlling the amount of food eaten, resulting in overeating and eating in the absence of hunger. For example, Fisher and Birch investigated the effects of restricting three- to five-year-old children’s physical access to foods (i.e., apple or peach bar cookies) within their environment. Each child was observed on ten occasions over five weeks. During the restricted-access procedure, children had free access to a control food throughout the twenty-minute procedure. In contrast, the restricted food was kept in a large transparent jar in the center of the table. After ten minutes, children were granted access to the restricted food for two minutes, followed by the removal of the restricted food from the table. Results revealed that the restricted food elicited more positive comments and more requests, and when it was made available, children took larger portions and ate more, compared to the freely accessible control food.86 These findings indicate that restricting access to palatable foods may be counterproductive in that it may promote their intake. Research with animal models produced a similar pattern when access to a preferred food source was given daily to some rats and on alternating days to others.87 Furthermore, longitudinal research reveals that at least among middle-class white families with daughters, maternal use of restrictive feeding practices predicts uninhibited overeating and greater weight gain.88

Excessive parental control and pressure to eat may also influence dietary intake and disrupt children’s short-term behavioral control of food intake. For example, longitudinal studies have reported that higher levels of parental control and pressure to eat were associated with lower fruit and vegetable intakes89 and higher intake of dietary fat90 among young girls. Moreover, in a study of children’s feeding practices, encouraging children to eat by focusing their attention on the amount of food on the plate promotes greater consumption and makes children less sensitive to the caloric content of the foods consumed.91 Thus, pressuring children to eat their vegetables in order to leave the table or as a contingency to receiving dessert may ultimately lead to the dislike of those vegetables.

Controlling feeding practices are not likely used in isolation, but rather represent the caregiver’s broader approach to child feeding. Indeed, parents of preschoolers who reported placing greater restrictions on their children’s eating also reported using higher levels of pressure or coercion in feeding.92 These practices are thought to typify an authoritarian style of feeding in which eating demands placed on the child are relatively high, but responsiveness to the child’s needs or behavior is relatively low.93 Unlike specific practices, feeding styles are believed to be stable over time and characterize parent-child interactions across a wide range of situations. Several studies have observed that authoritarian parents have fewer fruits and vegetables available in the home and their children consume smaller amounts of those foods.94 Specifically, Heather Patrick and colleagues examined the association between parent feeding styles and children’s food consumption patterns among Head Start preschoolers and their parents. Results indicated that authoritative feeding was positively associated with the availability of fruits and vegetables and with parents’ attempts to get their children to eat those foods. In contrast, authoritarian feeding was associated with lower availability of fruits and vegetables. In addition, actual consumption of these foods varied by feeding style: authoritative feeding was positively associated with the consumption of dairy and vegetables, whereas authoritarian feeding was negatively associated with vegetable intake.95

Authoritative styles of feeding are also characterized by high demands or expectations placed on the child while eating. Unlike authoritarian parents, however, those with authoritative styles tend to be highly responsive to the child’s eating cues and behaviors. Authoritative parenting has been associated with greater home availability of fruits and vegetables as well as greater child consumption of dairy, fruits, and vegetables and lower consumption of junk foods.96 The balance of setting limits and clear expectations with consideration of the child’s needs is thought to promote appropriate nutrition and growth. Empirical data on feeding styles and their influence on weight and weight gain, however, are quite limited. Recent findings from the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development revealed a protective association of authoritative parenting style with risk of overweight among five-year-old children. Among a national sample of 872 socio-economically and ethnically diverse families with young children, authoritarian parents were almost five times as likely to have an overweight child as authoritative parents, after statistically adjusting for the potentially confounding effects of race and income.97 These findings suggest that the consistent use of authoritative feeding practices, which set clear expectations for children’s eating behavior and are responsive to children’s needs, can reduce the risk of obesity.
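The "statistically adjusting for confounders" step behind an adjusted odds ratio like the one reported above can be made concrete. The sketch below is illustrative only: all counts are synthetic, not taken from the cited study, and income stands in for the race-and-income adjustment. It contrasts a crude odds ratio for child overweight by parenting style with a Mantel-Haenszel odds ratio pooled across income strata.

```python
# Hedged sketch of confounder adjustment via 2x2 tables.
# All numbers are made up for illustration; they are NOT study data.

def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

def mantel_haenszel_or(strata):
    """Confounder-adjusted OR: pool 2x2 tables across strata
    (e.g., income levels) with Mantel-Haenszel weights."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += (a * d) / n
        den += (b * c) / n
    return num / den

# Synthetic tables (authoritarian = exposed, overweight = case),
# stratified by a hypothetical low/high income confounder.
strata = [
    (20, 30, 5, 45),   # low-income families
    (8, 42, 4, 96),    # high-income families
]
crude = odds_ratio(*[sum(col) for col in zip(*strata)])
adjusted = mantel_haenszel_or(strata)
```

Here the adjusted estimate is somewhat smaller than the crude one because income is associated with both feeding style and child weight in the synthetic data; the published figure of roughly five reflects the same kind of adjustment, typically done with regression models rather than stratified tables.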

Finally, feeding styles involving low demand and low responsiveness to the child are considered neglectful whereas those with low demand and high responsiveness to the child are indulgent. These permissive styles of feeding would logically appear to engender overnutrition and overweight among those children exposed to the current dietary environment of abundance. However, this assertion remains unproven. In one study, neglected children, possibly reflecting permissive parenting styles, had a greater risk of adult obesity.98 However, feeding styles and their effects on dietary intake were not considered. In a more recent study of low- income African-American and Hispanic families, children of parents using indulgent feeding styles had higher weight status scores compared to children with authoritarian parents.99

Differing Perceptions of Healthy Weight: Socioeconomic and Cultural Contexts

Parents’ approach to feeding their children reflects their goals for their children’s eating and health,100 and these goals are influenced by culture and socioeconomic status. For example, among middle-income, non-Hispanic white families, mothers who employed greater restrictions in feeding their daughters had greater concerns about their daughters becoming overweight.101 Overweight, however, is not universally perceived as a detriment to health, especially for infants and very young children. For example, low-income mothers have reported that a heavy infant is viewed as a sign of a healthy child and successful parenting. Parents who view a child large for their age as healthy are unlikely to be concerned about child overweight, or to use restrictive feeding practices to prevent overweight. Given such values, caregivers may interpret infant behavior in terms of potential hunger and take specific care to prevent that state. Indeed, low-income mothers often interpret nonspecific behaviors such as frequent crying as signs of hunger. Consequently, feeding practices that are at odds with current recommendations, including concentrating formula or adding cereal to formula, or introducing solid foods before four months of age may be adopted by mothers who value having bigger babies.102

Cultural, socioeconomic and psychological factors also may shape parents’ perceptions of a healthy weight for their children. Data from the Third National Health and Nutrition Examination Survey (1988-1994) indicate that nearly one third of mothers with overweight children do not perceive their children as being overweight.103 Among low-income populations, seventy to eighty percent of mothers perceive their overweight child to be of normal weight or even underweight.104 In addition, low-income mothers of young children have reported that social stigmatization, physical limitations, and lack of a healthy diet are more relevant indicators of problematic weight than are objective measurements.105 These findings indicate that low-income parents desire their children to be at a healthy weight, but differ from health care professionals in their view of just what constitutes a “healthy” weight.

Summary and Suggestion for Intervention

Experiences with food flavors begin very early; the fetus becomes familiar with the flavors of the maternal diet during pregnancy, and the breastfed infant experiences the flavors of the maternal diet in breast milk. This early experience provides a “flavor bridge,” which can promote the infant’s acceptance of the foods from the maternal diet. As children make the transition to the modified adult diet of their culture, children’s food preferences and their diets reflect the foods that are available and accessible to them; parental modeling and familiarity play an important role in their developing food preferences.

These findings suggest a number of potential early intervention approaches that could be used during infancy and very early childhood to promote healthier intake patterns. The implication of the research findings is that if we want children to learn to like and eat healthy foods such as vegetables, they need early, positive, and repeated experiences with those foods, as well as opportunities to observe others consuming those foods. The natural tendency of children to prefer sweet or salty, calorie-rich foods over energy-poor but micronutrient-rich alternatives highlights the need for adult intervention to provide a varied and healthful diet. As such, caregivers play a critical role in determining which kinds of foods will become familiar to their children – from the foods kept routinely in the cupboard to those served regularly at the family table and even those consumed away from home. Caregivers also act as important gatekeepers to the social influences surrounding children’s eating, including access to media and modeling. Because observing the eating behavior of others influences children’s acceptance of foods, decisions about how often families eat together, who is present during family meals, and what is served will dictate what is consumed and what children learn to like and eat. Evidence regarding the poor nutritional quality of the table foods infants and toddlers consume as they transition to the adult diet reveals a need for parental guidance regarding the importance of offering healthy foods, avoiding restrictive and coercive feeding practices, and serving as positive models of eating behavior for infants and young children.

Although children possess an innate ability to self-regulate their energy intake, the extent to which they exercise this ability is determined by environmental conditions. Offering large portions of calorically rich, sweet, or salty palatable foods; using controlling feeding practices that pressure or restrict eating; and modeling excessive consumption can all undermine self-regulation of energy intake in children. As indicated previously, these current manifestations of traditional child feeding practices, which evolved to promote children’s intake, can be maladaptive in the current food environment, where food surfeit, obesity, and chronic disease have replaced food scarcity and infectious disease as major threats to children’s health.
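The self-regulation ability described above is commonly quantified in the feeding literature as a caloric compensation index (often abbreviated COMPX): the percentage of the energy difference between a low- and a high-energy preload that a child offsets by eating less at a subsequent meal. A minimal sketch follows; the kcal values are illustrative, not taken from any study.

```python
# Hedged sketch of a caloric compensation index; example values
# are invented for illustration, not study data.

def compensation_index(intake_after_low, intake_after_high,
                       preload_high_kcal, preload_low_kcal):
    """COMPX%: share of the preload energy difference that the child
    offsets at the next meal. 100 = perfect compensation, 0 = none,
    negative = eats *more* after the richer preload."""
    return 100.0 * (intake_after_low - intake_after_high) / (
        preload_high_kcal - preload_low_kcal)

# Child eats 350 kcal after a 50-kcal drink but only 250 kcal after a
# 150-kcal drink: the extra 100 preload kcal are fully offset.
perfect = compensation_index(350, 250, 150, 50)   # 100.0
# Only 20 kcal less after the richer preload: weak compensation,
# the pattern associated with controlling feeding practices.
partial = compensation_index(300, 280, 150, 50)   # 20.0
```

A score well below 100 after restrictive or pressuring feeding is the kind of blunted compensation the studies cited here describe.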

A major theme of this review is that the strategies parents use to feed their children, and the effects of those strategies on children’s eating and health, are influenced by the broader context in which feeding is embedded. Culture, tradition, and context reveal what is valued and what actions are taken to achieve feeding goals. As a part of culture, these feeding practices are, by definition, not readily subject to change. However, because the threats posed by current eating environments have changed, changes in traditional feeding practices are needed. A first step in initiating such change is to provide parents with information that alters their perceptions of, and concerns regarding, the threat that obesity poses to their children’s growth and health.

In the current context, feeding strategies that are responsive to children’s hunger and satiety cues, and that encourage children’s attention to hunger and fullness, are needed to support self-regulation. However, these approaches to child feeding are a clear departure from traditional feeding practices, which have evolved to promote children’s intake whether or not they are hungry. To change parenting practices, we must first alter parents’ beliefs regarding current threats to children’s health. In this instance, parents need to learn that a large, rapidly growing child who is crossing percentiles on the growth chart is not a sign of successful parenting but a cause for concern, and that guidance may be needed regarding alternative approaches to feeding. The challenge will be providing parents with information that alters their concerns and perceptions regarding overweight as a threat to child health, along with guidance on alternative feeding strategies that can be effective in promoting healthy weight in an environment that encourages excessive consumption.


The first years of life mark a time of rapid development and dietary change, as children transition from an exclusive milk diet to a modified adult diet. During these early years, children’s learning about food and eating plays a central role in shaping subsequent food choices, diet quality, and weight status. Parents play a powerful role in children’s eating behavior, providing both genes and environment for children. For example, they influence children’s developing preferences and eating behaviors by making some foods available rather than others, and by acting as models of eating behavior. In addition, parents use feeding practices, which have evolved over thousands of years, to promote patterns of food intake necessary for children’s growth and health. However, in current eating environments, characterized by an abundance of inexpensive, palatable, energy-dense food, these traditional feeding practices can promote overeating and weight gain. To meet the challenge of promoting healthy weight in children in the current eating environment, parents need guidance regarding alternatives to traditional feeding practices.

References

1. U.S. Census Bureau, County Business Patterns for the United States, 2003.

2. Food Marketing Institute, Supermarket Facts (2004), available at <> (last visited November 29, 2006).

3. L. R. Young and M. Nestle, “Expanding Portion Sizes in the U.S. Marketplace: Implications for Nutrition Counseling,” Journal of the American Dietetic Association 103, no. 2 (2003): 231-234.

4. Agricultural Research Service Community Nutrition Research Group, Results from USDA’s 1994-96 Diet and Health Knowledge Survey: Table Set 19 (U.S. Department of Agriculture, 2000).

5. Bureau of Labor Statistics, Women in the Labor Force: A Databook (U.S. Department of Labor, 2004).

6. Bureau of Labor Statistics, Employment Characteristics of Families (U.S. Department of Labor, 2005).

7. U.S. Census Bureau, Current Population Survey Reports, America’s Families and Living Arrangements (2004).

8. U.S. Census Bureau, Survey of Income and Program Participation, Who’s Minding the Kids? Child Care Arrangements (Spring 1999).

9. T. Lugaila, “A Child’s Day: 2000 (Selected Indicators of Child Well-Being),” in Current Population Reports: U.S. Census Bureau (Washington, D.C.: 2003): 70-89.

10. S. J. Nielsen, A. M. Siega-Riz, and B. M. Popkin, “Trends in Energy Intake in U.S. between 1977 and 1996: Similar Shifts seen across Age Groups,” Obesity Research 10, no. 5 (2002): 370-378.

11. U.S. Bureau of Labor Statistics, Consumer Expenditures in 2003 (U.S. Department of Labor, 2003): at Table 6: Composition of Consumer Unit: Average Annual Expenditures and Characteristics, Consumer Expenditure Survey, 2003.

12. S. J. Nielsen and B. M. Popkin, “Patterns and Trends in Food Portion Sizes, 1977-1998,” JAMA 289, no. 4 (2003): 450-453.

13. S. A. Bowman, S. L. Gortmaker, C. B. Ebbeling, M. A. Pereira, and D. S. Ludwig, “Effects of Fast-Food Consumption on Energy Intake and Diet Quality among Children in a National Household Survey,” Pediatrics 113 (2004): 112-118.

14. J. A. Mennella, A. Johnson, and G. K. Beauchamp, “Garlic Ingestion by Pregnant Women Alters the Odor of Amniotic Fluid,” Chemical Senses 20, no. 2 (1995): 207-209.

15. G. J. Hauser, D. Chitayat, L. Berns, D. Braver, and B. Muhlbauer, “Peculiar Odours in Newborns and Maternal Prenatal Ingestion of Spicy Foods,” European Journal of Pediatrics 144, no. 4 (1985): 403.

16. B. Schaal, L. Marlier, and R. Soussignan, “Human Foetuses Learn Odours from their Pregnant Mother’s Diet,” Chemical Senses 25 (2000): 729-737.

17. J. A. Mennella, P. Coren, M. S. Jagnow, and G. K. Beauchamp, “Prenatal and Postnatal Flavor Learning by Human Infants,” Pediatrics 107, no. 6 (2001): 88-94.

18. L. M. Gartner, J. Morton, R. A. Lawrence, A. J. Naylor, D. O’Hare, R. J. Schanler, and A. I. Eidelman, “Breastfeeding and the Use of Human Milk,” Pediatrics 115, no. 2 (2005): 496-506; American Academy of Pediatrics, “Breastfeeding and the Use of Human Milk. American Academy of Pediatrics. Work Group on Breastfeeding,” Pediatrics 100, no. 6 (1997): 1035-1039.

19. M. S. Kramer and R. Kakuma, “The Optimal Duration of Exclusive Breastfeeding: A Systematic Review,” Advances in Experimental Medicine and Biology 554 (2004): 63-77.

20. K. G. Dewey, “Is Breastfeeding Protective against Child Obesity?” Journal of Human Lactation 19, no. 1 (2003): 9-18; C. Owen, R. Martin, P. Whincup, G. D. Smith, and D. G. Cook, “Effect of Infant Feeding on the Risk of Obesity across the Life Course: A Quantitative Review of Published Evidence,” Pediatrics 115, no. 5 (2005): 1367-1377; S. Arenz, R. Ruckerl, B. Koletzko, and R. von Kries, “Breast-Feeding and Childhood Obesity-A Systematic Review,” International Journal Obesity Related Metabolic Disorders 28, no. 10 (2004): 1247-1256.

21. See Arenz, supra note 20.

22. See Dewey, supra note 20.

23. J. Riordan and B. A. Countryman, “Basics of Breastfeeding. Part I: Infant Feeding Patterns Past and Present,” Journal of Obstetric, Gynecologic & Neonatal Nursing 9, no. 4 (1980): 207-210; see Dewey, supra note 20.

24. J. A. Mennella and G. K. Beauchamp, “Maternal Diet Alters the Sensory Qualities of Human Milk and Nursling’s Behavior,” Pediatrics 88 (1991): 737-744.

25. J. A. Mennella and G. K. Beauchamp, “The Transfer of Alcohol to Human Milk: Effects on Flavor and the Infant’s Behavior,” New England Journal of Medicine 325 (1991): 981-985.

26. J. A. Mennella and G. K. Beauchamp, “The Infant’s Response to Vanilla Flavors in Mother’s Milk and Formula,” Infant Behavior and Development (1996): 13-19.

27. J. A. Mennella and G. K. Beauchamp, “The Effects of Repeated Exposure to Garlic-flavored Milk on the Nursling’s Behavior,” Pediatric Research 34 (1993): 805-808.

28. J. A. Mennella, C. P. Jagnow, and G. K. Beauchamp, “Prenatal and Postnatal Flavor Learning by Human Infants,” Pediatrics 107, no. 6 (2001): E88.

29. See Mennella, supra note 17.

30. See Mennella, supra note 28.

31. S. A. Sullivan and L. L. Birch, “Infant Dietary Experience and Acceptance of Solid Foods” Pediatrics 93, no. 2 (1994): 271- 277.

32. L. S. Adair, “The Infant’s Ability to Self-Regulate Caloric Intake: A Case Study,” Journal of the American Dietetic Association 84, no. 5 (1984): 543-546; S. J. Fomon, L. J. Filer, L. N. Thomas, T. A. Anderson, and S. E. Nelson, “Influence of Formula Concentration on Caloric Intake and Growth of Normal Infants,” Acta Paediatrica Scandinavica 64 (1975): 172-181; M. K. Fox, B. Devaney, K. Reidy, C. Razafindrakoto, and P. Ziegler, “Relationship between Portion Size and Energy Intake among Infants and Toddlers: Evidence of Self Regulation,” Journal of the American Dietetic Association 106 (2006): S77-S83.

33. J. O. Fisher, L. L. Birch, H. Smiciklas-Wright, and M. F. Picciano, “Breast-Feeding through the First Year Predicts Maternal Control in Feeding and Subsequent Toddler Energy Intakes,” Journal of the American Dietetic Association 100, no. 6 (2000): 641-646.

34. K. G. Dewey, “Growth Characteristics of Breast-fed Compared to Formula-fed Infants,” Biology of the Neonate 74, no. 2 (1998): 94-105.

35. K. G. Dewey, L. Nommsen-Rivers, and B. Lonnerdal, “Plasma Insulin and Insulin-releasing Amino Acids (IRAA) Concentrations are Higher in Formula-fed than in Breastfed Infants at 5 Months of Age,” in Experimental Biology (2004): abstract #1124.

36. C. M. Davis, “Results of the Self-Selection of Diets by Young Children,” The Canadian Medical Association Journal 41 (1939): 257- 261; C. M. Davis, “Self-Selection of Diet by Newly Weaned Infants,” American Journal of Diseases of Children 36 (1928): 651-679.

37. S. J. Fomon, L. J. Filer, L. N. Thomas, R. R. Rogers, and A. M. Proksch, “Relationship between Formula Concentration and Rate of Growth of Normal Infants,” Journal of Nutrition 98, no. 2 (1969): 241-254; S. J. Fomon, L. J. Filer, L. N. Thomas, T. A. Anderson, and S. E. Nelson, “Influence of Formula Concentration on Caloric Intake and Growth of Normal Infants,” Acta Paediatrica Scandinavica 64, no. 2 (1975): 172-181.

38. S. M. Pearcey and J. M. De Castro, “Food Intake and Meal Patterns of One Year Old Infants,” Appetite 29, no. 2 (1997): 201- 212.

39. R. J. Cohen, K. H. Brown, J. Canahuati, L. L. Rivera, and K. G. Dewey, “Effects of Age of Introduction of Complementary Foods on Infant Breast Milk Intake, Total Energy Intake, and Growth: A Randomised Intervention Study in Honduras,” Lancet 344 (1994): 288- 293.

40. L. Birch and M. Deysher, “Conditioned and Unconditioned Caloric Compensation: Evidence for Self-Regulation of Food Intake by Young Children,” Learning and Motivation 16 (1985): 341-355; L. L. Birch and M. Deysher, “Caloric Compensation and Sensory Specific Satiety: Evidence for Self Regulation of Food Intake by Young Children,” Appetite 7 (1986): 323-331; L. L. Birch, S. L. Johnson, M. B. Jones, and J. C. Peters, “Effects of a Nonenergy Fat Substitute on Children’s Energy and Macronutrient Intake,” American Journal of Clinical Nutrition 58 (1993): 326-333.

41. S. L. Johnson and L. L. Birch, “Parents’ and Children’s Adiposity and Eating Style,” Pediatrics 94, no. 5 (1994): 653-661.

42. L. L. Birch and J. O. Fisher, “Mothers’ Child-feeding Practices Influence Daughters’ Eating and Weight,” American Journal of Clinical Nutrition 71 (2000): 1054-1061.

43. P. Wright, “Learning Experiences in Feeding Behaviour during Infancy,” Journal of Psychosomatic Research 32, no. 6 (1988): 613- 619.

44. L. L. Birch, “Preschool Children’s Preferences and Consumption Patterns,” Journal of Nutrition Education 11 (1979): 189-192; L. K. Bartoshuk and G. K. Beauchamp, “Chemical Senses,” Annual Review of Psychology 45 (1994): 414-449; L. L. Birch, “Children’s Preference for High-fat Foods,” Nutrition Reviews 50 (1992): 259-255; L. L. Birch, “Development of Food Preferences,” Annual Review of Nutrition 19 (1999): 41-62.

45. G. K. Beauchamp, B. J. Cowart, J. A. Mennella, and R. R. Marsh, “Infant Salt Taste: Developmental, Methodological, and Contextual Factors,” Developmental Psychobiology 27, no. 6 (1994): 353-365.

46. B. J. Cowart, “Development of Taste Perception in Humans: Sensitivity and Preference throughout the Life Span,” Psychological Bulletin 90, no. 1 (1981): 43-73; L. L. Birch, L. McPhee, B. C. Shoba, E. Pirok, and L. Steinberg, “What Kind of Exposure Reduces Children’s Food Neophobia?” Appetite 9 (1987): 171-178; see Sullivan, supra note 31.

47. D. L. Kern, L. McPhee, J. Fisher, S. Johnson, and L. L. Birch, “The Post-ingestive Consequences of Fat Condition Preferences for Flavors Associated with High Dietary Fat,” Physiology and Behavior 54, no. 1 (1993): 71-76.

48. E. L. Gibson and J. Wardle, “Energy Density Predicts Preferences for Fruit and Vegetables in 4-Year-Old Children,” Appetite 41 (2003): 97-98.

49. S. A. Sullivan and L. L. Birch, “Pass the Sugar, Pass the Salt: Experience Dictates Preference,” Developmental Psychology 26 (1990): 546-551; L. L. Birch and D. W. Marlin, “I Don’t Like It; I Never Tried It: Effects of Exposure on Two-Year-Old Children’s Food Preferences,” Appetite 3 (1982): 353-360; see Birch, supra note 46.

50. B. R. Carruth, P. Ziegler, A. Gordon, and S. I. Barr, “Prevalence of Picky Eaters among Infants and Toddlers and their Caregiver’s Decisions about Offering a Food,” Journal of the American Dietetic Association 104 (2004): S57-S64.

51. See Gartner, supra note 18.

52. M. Kramer, R. Barr, D. Leduc, C. Boisjoly, L. McVey-White, and I. Pless, “Determinants of Weight and Adiposity in the First Year of Life,” Journal of Pediatrics 106 (1985): 10-14.

53. J. L. Baker, K. F. Michaelsen, K. M. Rasmussen, and T. I. Sorensen, “Maternal Prepregnant Body Mass Index, Duration of Breastfeeding, and Timing of Complementary Food Introduction are Associated with Infant Weight Gain,” American Journal of Clinical Nutrition 80, no. 6 (2004): 1579-1588.

54. B. Carruth, J. Skinner, K. Houck, and J. Moran, “Addition of Supplementary Foods and Infant Growth (2 to 24 Months),” Journal of the American College Nutrition 19 (2000): 405-412.

55. J. J. Reilly, J. Armstrong, A. R. Dorosty, P. M. Emmett, A. Ness, I. Rogers, C. Steer, and A. Sherriff, “Early Life Risk Factors for Obesity in Childhood: Cohort Study,” British Medical Journal 330, no. 7504 (2005): 1357.

56. M. K. Fox, S. Pac, B. Devaney, and L. Jankowski, “Feeding Infants and Toddlers Study: What Foods are Infants and Toddlers Eating?” Journal of the American Dietetic Association 104, Supplement 1 (2004): S22-S30.

57. B. Devaney, P. Ziegler, S. Pac, V. Karwe, and S. I. Barr, “Nutrient Intakes of Infants and Toddlers,” Journal of the American Dietetic Association 104, no. 1, Supplement 1 (2004): S14-S21.

58. See Fox, supra note 32.

59. See Fox, supra note 56.

60. J. D. Skinner, P. Ziegler, and M. Ponza, “Transitions in Infants’ and Toddlers’ Beverage Patterns,” Journal of the American Dietetic Association 104, no. 1 (2004): S45-50.

61. B. A. Dennison, H. L. Rockwell, and S. L. Baker, “Excess Fruit Juice Consumption by Preschool-aged Children is Associated with Short Stature and Obesity,” Pediatrics 99 (1997): 15-22.

62. J. D. Skinner and B. R. Carruth, “A Longitudinal Study of Children’s Juice Intake and Growth: The Juice Controversy Revisited,” Journal of the American Dietetic Association 101 (2001): 432-437; U. Alexy, W. Sichert-Hellert, M. Kersting, F. Manz, and G. Schoch, “Fruit Juice Consumption and Prevalence of Obesity and Short Stature in German Preschool Children: Results of the DONALD Study,” Journal of Pediatric Gastroenterol Nutrition 29 (1999): 343-349; R. A. Forshee and M. L. Storey, “Total Beverage Consumption and Beverage Choices among Children and Adolescents,” International Journal of Food Science and Nutrition 54, no. 4 (2003): 297-307.

63. M. M. Smith and F. Lifshitz, “Excess Fruit Juice Consumption as a Contributing Factor in Nonorganic Failure to Thrive,” Pediatrics 93, no. 3 (1994): 438-443.

64. J. A. Welsh, M. E. Cogswell, S. Rogers, H. Rockett, Z. Mei, and L. M. Grummer-Strawn, “Overweight among Low-income Preschool Children Associated with the Consumption of Sweet Drinks: Missouri, 1999-2002,” Pediatrics 115, no. 2 (2005): e223-229.

65. D. S. Ludwig, K. E. Peterson, and S. L. Gortmaker, “Relation between Consumption of Sugar-sweetened Drinks and Childhood Obesity: A Prospective, Observational Analysis,” Lancet 357, no. 9255 (2001): 505-508; C. S. Berkey, H. R. Rockett, A. E. Field, M. W. Gillman, and G. A. Colditz, “Sugar-Added Beverages and Adolescent Weight Change,” Obesity Research 12, no. 5 (2004): 778-788.

66. R. Hardy, M. Wadsworth, and D. Kuh, “The Influence of Childhood Weight and Socioeconomic Status on Change in Adult Body Mass Index in a British National Birth Cohort,” International Journal of Obesity 24 (2000): 725-734; H. M. Hendy, “Effectiveness of Trained Peer Models to Encourage Food Acceptance in Preschool Children,” Appetite 39, no. 3 (2002): 217-225; S. Lee and M. Reicks, “Environmental and Behavioral Factors are Associated with the Calcium Intake of Low-income Adolescent Girls,” Journal of the American Dietetic Association 103, no. 11 (2003): 1526-1529; E. M. Young, S. W. Fors, and D. M. Hayes, “Associations between Perceived Parent Behaviors and Middle School Student Fruit and Vegetable Consumption,” Journal of Nutrition Education and Behavior 36, no. 1 (2004): 2-8; K. W. Cullen, T. Baranowski, L. Rittenberry, C. Cosart, D. Hebert, and C. de Moor, “Child-reported Family and Peer Influence

Reframing the Obesity Debate: McDonald’s Role May Surprise You

By Adams, Catherine

Introduction

McDonald’s is a popular destination for fifty million customers every day. We highly value that level of trust, just as we value the feedback we get from our unique, face-to-face relationship with local customers the world over.

Our business realizes our corporate responsibilities and stands behind our Golden Arches. Today, consumers expect more from corporations. They are relying less on governments to lead, and instead, they’re looking to companies for positive change. Retail is not just about revenue – it is about responsibility.

We welcome this new dynamic because it is in line with our heritage to be leaders on issues that are important to our customers and to society. We know we are not perfect, and we don’t have all the answers. However, we look for continuous improvement. We listen to our customers and work with the experts in science, health, and agriculture.

That’s why our leadership position in the foodservice industry demands that we take a seat at the table for solutions to nutrition and obesity challenges.

Amidst all the studies, data, and news coverage on these issues, it is sometimes hard to separate science from sensationalism.

A leader of McDonald’s once said, “I don’t know what we will be serving in 50 years, but we will be serving more of it than anyone else.” Although these words were shared more than two decades ago, they still ring true.

I wonder if anyone ever believed that McDonald’s would one day serve more salads than anyone else in the world. Or that McDonald’s would be the number one buyer of apples in the United States. Or that McDonald’s would serve more than 3.7 billion servings of fruit and vegetables in just one year.

These numbers say that McDonald’s is listening and evolving.

Sure, McDonald’s sells lots of chicken, hamburgers, French fries, Egg McMuffins, drinks, and desserts. However, customers tell us that they want choice and variety, and we have responded by adding more choice and variety to our menu than ever before.

Take a look.

We invite you to learn the real story about our food through the nutrition information found on our packaging, on tray liners in our restaurants, and on our innovative website (www.mcdonalds.com).

Our Food

McDonald’s has always served safe and high quality food. Why wouldn’t we? Ultimately, customers decide what brands stay and what brands disappear, and the starting point for them has to be “Can we trust the safety and quality of the food we are buying and serving to our families?” That’s a pretty powerful incentive for a restaurant business to get it right every time.

One of McDonald’s real advantages to getting it right is the purchasing power of our substantial global supply chain. It helps us to have cost-effective access to the highest quality products available anywhere. Our beef is 100% ground beef, free of additives or fillers. Our chicken is primarily white breast meat. Our eggs are from laying hens. Our salads and yogurt are from the same suppliers who provide food to your grocery store.

In addition, our position within the food industry allows us to raise the bar on industry practices regarding animal welfare and antibiotic use in poultry. We dictate that suppliers optimize energy and water use, and we routinely score our suppliers on their stewardship of the environment and the well-being of animals that are part of the food chain. We have a “Code of Conduct” for our suppliers that demands their commitment to fair labor practices, and we monitor their compliance. We demonstrate our commitment to the environment, animal well-being, and social responsibility every day. Are we perfect? No, but we are working on it and remain dedicated to a path of continuous improvement and industry leadership.

The food McDonald’s serves provides essential dietary nutrients, and the truth is, it tastes good. Our beef is an excellent source of protein and iron. Salads at McDonald’s have become a popular menu choice, and our numbers prove that we provide good sources of nutrients to our customers. McDonald’s served 2.5 billion servings of vegetables in 2005, including 1.7 billion servings of mixed greens, 580 million servings of tomatoes, and 51 million servings of carrots. We became the largest purchaser of apples in the restaurant business in 2005, buying more than 34 million pounds of apples in the United States. McDonald’s purchased a total of one billion servings of fruit in 2005. Our cut apples are preserved for color and freshness with ascorbic acid, providing an excellent source of vitamin C to our customers in addition to the natural nutrients in apples.

McDonald’s routinely employs the advice of third-party experts to help direct our food programs, including our Global Advisory Council for Nutrition and Balanced Active Lifestyles. This global group of clinicians and academics has an open view to our business and freely directs our priorities for food and nutrition. We are serious about our commitment to this group of experts. We routinely seek their guidance, and they have been impressed with our receptivity for their ideas and adoption of their recommendations.

Menu Choice

We stand behind our menus around the world, and work continuously to provide food choices that meet every customer’s desire and dietary needs. We have introduced a range of premium salads and chicken sandwiches, yogurt parfaits, and fruit salads; and in some countries, yogurt and fruit smoothies that appeal to a range of individual tastes and nutrition preferences. Worldwide, we continue to add more choice to our menus. People still think of McDonald’s as a “burger place,” but in many countries, we sell more chicken than beef. We sell salmon and couscous in salads in Europe. We sell dinner entrees with chicken, vegetables, rice, or pasta in Australia.

Our menu selections are also oriented to regional or local tastes. For example, we offer coconut water in Brazil, rice burgers in Taiwan, and porridge in the U.K.

Today, our focus is on the foods that experts around the world generally agree people should eat more often – fruits and vegetables. We are measuring our progress with these foods by presenting key performance indicators (KPIs) developed by an international panel of experts as part of the development of the World Health Organization (WHO)/Food and Agriculture Organization (FAO) Global Strategy on Diet, Physical Activity and Health. We have begun to report, on our website and in our bi-annual Corporate Social Responsibility Report, the number of menu items that provide at least one serving, or at least one-half serving, of fruits and vegetables. In 2005, the first year this metric was reported, our nine major markets offered 58 menu items providing at least one serving of fruits or vegetables and 88 menu items providing at least one-half serving.

McDonald’s is actively promoting more choices in our meals for children. Our Happy Meals were originally designed to provide portion sizes suitable for young children. This continues to be the case today. They also provide essential nutrients for growing children, including protein, iron, calcium, vitamin E and B vitamins.

There is a strong focus today on providing an increasing number of choices for Happy Meals. In many of our major markets, customers can mix and match their selections for the entrée, side, beverage and dessert. These customized Happy Meals better meet the nutritional needs and preferences of children and their parents. In France, for example, we offer a choice of five sandwiches, three side dishes including carrots, ten beverages, and two desserts. Beverage choices include bottled water (flavored and unflavored), two fruit juices, and soft drinks without added sugar. The dessert options include a yogurt drink. Many countries, including Brazil and the United States, offer a fruit bag, semi-skimmed milk, or an apple or fresh fruit salad as Happy Meal sides. These options enjoy a prominent place in our menus, and we will continue to provide more choices for children and parents in the future.

Nutrition Information

McDonald’s has provided nutrition information since the early 1990s through a variety of mechanisms, including printed brochures and, more recently, websites in all of our major markets. In 2005, we started printing nutrition information for the more popular menu items on the back of tray liners. We have worked diligently to provide nutrition education for children and adults through multiple channels. Knowing that not everyone orders menu items with all of their standard components, we created a unique customizable website tool in the United States, called “Bag A Meal,” enabling individuals to learn the nutrition content of their meals as they order them – for example, without the sauce on a Big Mac or the pickle on a hamburger.

Building on the leadership role McDonald’s took over a decade ago, we began printing nutrition information on our packaging in 2006. McDonald’s is the first major restaurant business to voluntarily place nutrition information on food labels, making it easier than ever for our customers to know what they are eating and to make personal food choices as well as choices for their families.

In order to meet our global standards, we developed a novel form of labeling that conveys nutrient information without depending on language. Using nutrition experts and consumer research as our guides, we created icons as symbols for calories, protein, fat, carbohydrates, and sodium. We present nutrient content for each of these elements as a bar chart based on the percentage of the recommended daily intake in the respective country or region of the world. As a responsible global citizen, we elected to present nutrition information in the local government’s format wherever applicable. In Europe, we are the first restaurant company to use the new pan-European nutrition reference values – the Guideline Daily Amounts (GDAs). In the United States, we include on our packaging the same nutrition fact panel that is required on retail packages so that consumers can build on the nutrition education tools with which they have already become familiar.

We are already using the latest technology to convey nutrition information in countries where such tools are available and popular. For example, in Japan nutrition information is accessed via the customer’s preferred method. Food packages carry special bar codes which are read by web-enabled cell phones. Nutrition information from Japan’s McDonald’s web site is then displayed on the customer’s cell phone.

McDonald’s is committed to informing our customers about food choices and what is in our food. We believe that those already interested in and familiar with nutrition information will appreciate this transparency from McDonald’s. We encourage those who have not yet elected to learn about nutrition to become motivated by the access to information that we provide.

We also believe that we have a responsibility to our customers to remind them that health is the result of food intake and exercise. It is not possible to “eat your way” to good health. Health and an appropriate weight may only be achieved through a balance of energy intake and expenditure. Therefore, McDonald’s voluntarily elected to include a public education campaign as part of our global advertising strategy. We bring this concept to life in a brief and personally engaging message – “It’s What I Eat and What I Do.” The campaign began in March 2005 and has been seen in television, print and outdoor advertising, tray liners, packaging, and a variety of other communication vehicles around the world.

Conclusion

McDonald’s is a restaurant business dedicated to feeding people today and tomorrow. We make deliberate choices concerning our food quality, available menu choices, and visibility of nutrition information and educational messages on energy balance. We believe that these choices demonstrate our sincere commitment to our customers’ health and well-being. We do not offer “fast food”; rather, we provide “good food fast.”

Do these facts surprise you about McDonald’s? We hope that as much as you have always “known” about McDonald’s, you will want to know more about us, including our food and our values. You are invited to explore our business through the multiple channels through which we open our doors to customers and the public every day.

McDonald’s has taken a seat at the table of the obesity discussion, but our role is not apologetic – it is as a partner equally dedicated to sensible, responsible, and sustainable solutions.

Catherine Adams,

Corporate Vice-President, McDonald’s Corporation

Catherine Adams, Ph.D., R.D. is Corporate Vice President, Worldwide Quality, Food Safety and Nutrition for the McDonald’s Corporation. She is responsible for corporate strategies and policies relating to the quality, safety and nutrition for the global business.

Copyright American Society of Law and Medicine, Incorporated Spring 2007

IN THE KNOW Nile Sandeen HIV Has Changed His Life, but the Disease is Far From Ruining It

By RASHAE OPHUS JOHNSON

Summer “break” in Milwaukee is a misnomer for Valparaiso University senior Nile Sandeen. Spare time consists of precious moments spent wedding planning with his fiancee in Illinois, a traveling job repairing church organs and preparations for the discernment process to become a Lutheran pastor.

Given a genuine break, Sandeen could idle away weeks just worrying. There’s his older brother (and only sibling) deployed in Afghanistan, the complications of becoming a clergyman in an interfaith marriage and his mother’s fragile health. Plus, he’s living with AIDS.

Are the utmost precautions enough to protect his HIV-negative fiancee? Will they attempt the medical procedure to conceive without transmitting the virus, or will the excessive cost stifle hope for biological children? If they do start a family, will AIDS leave his wife a widow and his children without a father?

Fortunately, Sandeen is too busy with life to worry much about death. “We all die. I might die tomorrow from anything, but it’s highly unlikely I’ll die tomorrow from AIDS,” he said. “Even if I do, I believe in a higher world. I believe in being saved, and death is part of that journey.”

Sandeen was 4 when he was diagnosed HIV-positive along with his mother, Dawn Wolff, who contracted it from her then-husband. She unknowingly passed it to Nile through birth.

Wolff, a nurse, shared her family’s story to raise awareness of the disease, but rumors of contagiousness provoked hysteria when Nile entered kindergarten in Mequon. Between bouts of hospitalization, Wolff did her best to counter the stigma and shield her boys, including Sean, born HIV-negative two years before Nile.

“She always just told me, ‘You’re a healthy boy. You can live. You can keep going and not have to worry,’ ” he said. “Somewhere along the way I started believing it.”

Coverage of his confrontational entry into kindergarten introduced him to a lifelong friend, Milwaukee native Neil Willenson, a then-college student whom Sandeen inspired to start the nationally renowned Camp Heartland to provide respite for children with AIDS.

After considering careers in baseball or the FBI, growing up in public scrutiny ultimately revealed Sandeen’s pastoral calling. As he distanced himself from his upbringing in the Evangelical Lutheran Church of America (ELCA), a Camp Heartland chum invited him to a lock-in at a vibrant Sherman Park church of another Lutheran synod. There, Sandeen discovered his passion for the ministry.

“Things happen for a reason. I had to learn to deal with hard things a lot sooner in life. It prepared me and engaged me (for pastoral life). Because I was HIV-positive, it engaged me in public speaking. Because I was HIV-positive, I met someone who invited me back to church. God touched me in many ways in my life because I was HIV-positive,” he said. “Sometimes I still feel, ‘Why me?’ You certainly wonder, couldn’t there have been another way? At the same time, I know how fortunate I’ve been spiritually.”

Though AIDS created dating obstacles, it was less of an issue with his fiancee, Nicole Evers, who knew that Sandeen was HIV-positive before they started dating. Sandeen said his religious differences initially presented greater misgivings than his AIDS status for his future in-laws, who are devout members of the United Reformed Church. But since resolving their differences, they happily obliged when Sandeen requested their daughter’s hand in marriage.

Sandeen often attends Sunday services at both churches with Evers and strives to focus on shared beliefs. As for AIDS, “They’ve known for years. They’ve never really verbally expressed their concern, although I’m sure it’s there. They trust that I love her and wouldn’t try to endanger her.”

Today, Sandeen maintains good health with a regimen of six pills per day. His wedding is scheduled for May 31, 2008. His mom, who barely survived the early years of AIDS, just celebrated her 49th birthday. Their future looks promising.

Meanwhile, the AIDS population in the United States is approaching 1 million, with about 40,000 new cases annually. The fastest-growing segment is heterosexual young adults, and Sandeen fears for his peers gambling with their health.

“They need to get educated and protect themselves. They can make the numbers smaller faster than any medicine can,” he said. “It’s turned from a stereotype of ‘you get AIDS and then you die,’ to you don’t need to worry about it anymore.

“AIDS will not ruin your life. It will not end your life. But it will totally change your life.”

bio

Age 21

Education Senior theology major at Valparaiso University

Summer job Church organ repairman

Career aspiration Evangelical Lutheran pastor

Hometown Mequon

Wedding date May 31, 2008

they say

“He is the definition of a survivor. He beat the odds. He wasn’t supposed to survive into adolescence. Not only is he surviving, he’s thriving – he’s engaged, and he’s going to be a minister in the Lutheran church! These were all things that weren’t supposed to happen.”

— Neil Willenson, friend and founder of Camp Heartland for children with HIV

Copyright 2007, Journal Sentinel Inc. All rights reserved. (Note: This notice does not apply to those news items already copyrighted and received through wire services or other media.)

(c) 2007 Milwaukee Journal Sentinel. Provided by ProQuest Information and Learning. All rights Reserved.

Anywhere MD, Inc. Announces the Initiation of Audit for Fiscal Years 2005 and 2006 As First Phase to Achieving OTC Bulletin Board Status

Anywhere MD, Inc. (PINKSHEETS: ANWM) announced today that, pursuant to MedLink International, Inc.’s (OTCBB: MLKNA) acquisition of Anywhere MD, Inc., the Company has initiated an audit for fiscal years ending 2005 and 2006. Completion of the audit will be the first phase of achieving OTC Bulletin Board status for Anywhere MD.

In its aim to create the most transparency for investors of all categories (individual, international, and institutional), the Company is moving forward and taking the first step toward becoming a “reporting issuer.” The Company is initiating the first phase with an audit of its 2005 and 2006 financials, expected to be complete in the next three to four weeks.

Steve Hixson, Chief Executive Officer of Anywhere MD, Inc., stated, “Recent events such as the purchase of a controlling interest in Anywhere MD by MedLink International allows for both companies to benefit from economies of scale and increased growth capabilities with the ability to grow and prosper as separate entities only illustrates the need for total transparency for ANWM for the investment community to make a more informed decision.” Mr. Hixson went on to say, “This is a turning point for ANWM, once we achieve ‘full reporting status’ on the OTCBB we will then have increased access to available investment dollars available through various sources such as Financial Institutions both here in the United States as well as in Europe and Asia who in the past would not be able to participate in the ANWM opportunity. We will continue to keep our valuable shareholders and the general public informed as to our progress throughout the process and hope to continue to set high standards and achieve our goals not only for the Company, but for our stakeholders as well by increasing shareholder value.”

About Anywhere MD, Inc.

Anywhere MD, Inc. (PINKSHEETS: ANWM) provides state-of-the-art HealthCare Technologies that are shaping a new generation of patient care. Anywhere MD’s expertise in clinical documentation for physicians offers a broad range of technology products to improve productivity for healthcare providers and enable them to diagnose, treat and manage patient information at the highest level.

Anywhere MD, Inc. develops, markets, sells and supports proprietary software applications for mobile handheld devices. These mobile applications provide the physician with the most recent and accurate healthcare information at the “Point Of Care.” This technology eliminates a confusing and tedious ‘paper trail’ that can lead to inaccurate and inadequate patient charting, resulting in malpractice suits and poor patient care.

ANWM is headquartered on the central coast of California and is committed to serving thousands of healthcare professionals across the USA, Canada, Europe, Asia and Australia. Company web site is www.anywheremd.com

About MedLink International, Inc.

MedLink International is a publicly held NASDAQ Bulletin Board and Frankfurt Stock Exchange company (OTCBB: MLKNA) (FRANKFURT: WM6B), which supplies its proprietary MedLink EHR software via a Virtual Private Network (VPN) to a network of physicians, radiology clinics and other types of medical offices.

The MedLink VPN allows subscribing doctors to securely communicate with other physicians and remotely access and retrieve patient records, lab results, X-Rays, CAT Scans and other patient health information. Through its VPN, MedLink offers member institutions and physicians other products and services, such as MedLink TV, MedLink Scheduler, MedLink Billing, Secure Health Mail, Remote PACS, Health IT infrastructure and networking, document management, and video conferencing. The MedLink VPN delivers pertinent drug information from pharmaceutical companies to physicians. In addition to the physician Virtual Private Network, MedLink is also providing a consumer network displaying medical education, information and advertising on a network of digital screens installed in the waiting rooms of radiology clinics, medical laboratories, and physician offices. Please visit www.medlinkus.com for more information.

This news release may include comments that do not refer strictly to historical results or actions and may be deemed to be forward-looking within the meaning of the safe harbor provisions of the U.S. federal securities laws. These include, among other things, statements about expectations of future business, revenues, cash flows and capital requirements. Forward-looking statements are subject to risks and uncertainties that may cause the company’s results to differ materially from expectations. These risks include the company’s ability to further develop its business, the company’s ability to generate revenues, develop appropriate strategic alliances and successful development and implementation of technology, acceptance of the company’s services, competitive factors, new products and technological changes, and other such risks as the company may identify and discuss from time to time.

Investor Relations Contacts:

Anywhere MD, Inc.: Jay Smith, 775-851-7397

MedLink International, Inc.: Jameson Rose, [email protected], 631-342-8800

SOURCE: Anywhere MD, Inc.

Bradley Pharmaceuticals Solaraze(R) and TX Systems(R) B-Lift(R) Featured at European Dermatology Conference

FAIRFIELD, N.J., May 30 /PRNewswire-FirstCall/ — Bradley Pharmaceuticals, Inc. announced today that two brands marketed by the Company’s Doak Dermatologics subsidiary were presented recently at the 16th European Academy of Dermatology and Venereology (EADV) Conference in Vienna, Austria.

TX Systems(R) B-Lift(R), licensed from Drs. Albert and Douglas Kligman, is a cosmetic formulation of salicylic acid that is used for epidermal skin peeling to improve appearance. The product was presented in a scientific session conducted by Dr. Aleksandar Krunic, Clinical Assistant Professor of Dermatology, University of Illinois College of Medicine. His study entitled, “Salicylic Acid Peels for Dyschromia, Photoaging and Acne-Related Conditions — Our Experience,” assessed the effect of salicylic acid peels in patients with darker skin types and conditions including acne, rosacea, postinflammatory hyperpigmentation, melasma, enlarged pores and photoaging.

Thirty-six adult patients participated voluntarily in the study; all other systemic and topical treatments were prohibited. After a two-week pre-treatment with hydroquinone 4%, the patients underwent a series of five facial peels with 20-30% salicylic acid in a hydroethanolic solution, administered bi-weekly over three months. Two investigators independently evaluated patients by physical examination and comparison of pre-treatment and post-treatment photos. Paired comparisons revealed moderate to significant improvement in 80.6% of the patients, and it was concluded that salicylic acid peels are beneficial in improving the appearance of acne, dyschromia, rosacea and photoaging in patients with darker skin tones.

Solaraze(R) Gel (diclofenac sodium – 3%), indicated for the topical treatment of actinic keratosis, was featured in a scientific poster exhibit. Dr. Boni E. Elewski, professor and Director of Clinical Research, Department of Dermatology, at the University of Alabama at Birmingham and past president of the American Academy of Dermatology, presented a poster entitled, “Disseminated Superficial Actinic Porokeratosis (DSAP) Treated with Diclofenac Sodium 3% Gel.” DSAP is a hereditary disorder of keratinization causing numerous dry patches on sun-exposed areas of the arms and legs.

An open-label, multi-center pilot study was initiated with 15 adult patients enrolled across four sites. Diclofenac sodium 3% gel was applied to the left or right forearm twice daily, and patients were followed monthly for 12 weeks. If lesions were still present, patients continued medication for an additional 12 weeks. Target area lesion counts were performed at each visit to objectively measure efficacy. The primary endpoint was a decrease in the number of DSAP lesions from baseline. Preliminary data from the study indicate that treatment with diclofenac sodium 3% gel led to a stable or reduced number of DSAP lesions in 5 of 9 patients at 12 weeks. The study is still ongoing, and not all patients have completed 24 weeks.

“We at Bradley are fully committed to supporting our brands with data from clinical studies,” stated Daniel Glassman, President and CEO of Bradley Pharmaceuticals. “Currently there are four additional dermatology studies in progress and we look forward to the results because such studies further validate the benefits of our products and support the importance of the specialty markets we serve.”

Important Product Safety Information About Solaraze(R) Gel:

SUN AVOIDANCE IS INDICATED DURING SOLARAZE(R) GEL THERAPY. As with other NSAIDs, anaphylactoid reactions may occur in patients without prior exposure to diclofenac. Diclofenac sodium should be given with caution to patients with the aspirin triad. In clinical trials, the most common adverse reactions involved the skin and included contact dermatitis, rash, dry skin and exfoliation. The majority of these reactions were mild to moderate, and resolved upon discontinuation of therapy. SOLARAZE(R) Gel should not be applied to open skin wounds, infections, or exfoliative dermatitis.

For additional important information about Solaraze(R) Gel, please view full prescribing information at http://www.bradpharm.com/ or request full prescribing information by contacting Bradley Pharmaceuticals.

Please visit Bradley Pharmaceuticals web site at: http://www.bradpharm.com/.

Bradley Pharmaceuticals common stock is listed on the NYSE under the symbol BDY.

Bradley Pharmaceuticals, Inc. was founded in 1985 as a specialty pharmaceutical company and markets to niche physician specialties in the U.S. and international markets. Bradley’s success is based upon its core strengths in marketing and sales, which enable the company to: commercialize brands that fill unmet patient and physician needs; develop new products through life cycle management; and in-license Phase II and Phase III drugs with long-term intellectual property protection that, upon approval, leverage Bradley’s marketing and sales expertise to increase shareholder value. Bradley Pharmaceuticals is comprised of Doak Dermatologics, specializing in therapies for dermatology and podiatry; Kenwood Therapeutics, providing gastroenterology, OB/GYN, respiratory and other internal medicine brands; and A. Aarons, which markets authorized generic versions of Doak and Kenwood therapies.

Safe Harbor for Forward-Looking Statements:

This release contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements include statements that address activities, events or developments that Bradley expects, believes or anticipates will or may occur in the future, such as Bradley’s plans to in-license, develop and launch new and enhanced products with long-term intellectual property protection or other significant barriers to market entry, sales and earnings estimates, other predictions of financial performance, timing of payments on indebtedness, launches by Bradley of new products, market acceptance of Bradley’s products, and the achievement of initiatives to enhance corporate governance and long-term shareholder value. Forward-looking statements are based on Bradley’s experience and perception of current conditions, trends, expected future developments and other factors it believes are appropriate under the circumstances and are subject to numerous risks and uncertainties, many of which are beyond Bradley’s control. 
These risks and uncertainties include Bradley’s ability to: launch VEREGEN(TM) at the end of 2007 and ELESTRIN(TM) at the end of the second quarter 2007; predict the safety and efficacy of these products in a commercial setting; estimate sales; maintain adequate inventory levels; implement the returns and inventory optimization plan timely, if at all; reduce product returns; comply with the restrictive covenants under its credit facility; refinance its credit facility; access the capital markets on attractive terms or at all; favorably resolve the pending SEC informal inquiry; maintain or increase sales of its products; or effectively react to other risks and uncertainties described from time to time in Bradley’s SEC filings, such as fluctuation of quarterly financial results, estimation of product returns, chargebacks, rebates and allowances, concentration of customers, reliance on third party manufacturers and suppliers, litigation or other proceedings (including the pending class action and shareholder derivative lawsuits), government regulation and stock price volatility. Further, Bradley cannot accurately predict the impact on its business of the approval, introduction, or expansion by competitors of generic or therapeutically equivalent or comparable versions of Bradley’s products or of any other competing products. In addition, actual results may differ materially from those projected. Bradley undertakes no obligation to publicly update any forward-looking statement, whether as a result of new information, future events or otherwise.

Bradley Pharmaceuticals, Inc.

CONTACT: Cecelia C. Heer, Investor-Public Relations of Bradley Pharmaceuticals, Inc., +1-973-882-1505, ext. 252

Web site: http://www.bradpharm.com/

Quetiapine in the Successful Combination Treatment of an Assaultive Patient With Treatment-Resistant Schizoaffective Disorder

By Post, David Edward

Abstract. The author relates the case study of a 38-year-old man with schizoaffective disorder (bipolar type) refractory to multiple medications. This patient had a long history of psychiatric illness and had been hospitalized at least 15 times because of aggression, hostility, and psychosis. Two separate trials of risperidone and olanzapine resulted in adverse effects, including possible neuroleptic malignant syndrome. Quetiapine, as part of combination therapy, led to substantial reductions in the patient’s schizoaffective disorder symptoms and problematic behaviors. The patient tolerated quetiapine and did not experience any adverse effects. Quetiapine may be a suitable treatment option in patients with schizoaffective disorder.

Index terms: adverse effects, atypical antipsychotic, neuroleptic malignant syndrome, quetiapine, schizoaffective disorder

Schizoaffective disorder, a common, chronic, and frequently disabling psychiatric disorder, manifests prominent symptoms of both schizophrenia and mood disorder.1 Over the past decade, atypical antipsychotic drugs (eg, risperidone, quetiapine, olanzapine) have replaced typical antipsychotic drugs (eg, haloperidol, chlorpromazine) as the treatment of choice for schizophrenia and now are indicated for a number of psychiatric conditions, with target symptoms including aggression, hostility, suicidality, and substance abuse.2 Because atypical antipsychotic drugs have a better adverse effect profile, produce fewer extrapyramidal symptoms, carry a lower risk of tardive dyskinesia, and improve negative symptoms and certain parameters of cognition, researchers have found these drugs to be beneficial in the treatment of patients with schizoaffective disorder.2,3 In this article, I describe the case of an assaultive patient with schizoaffective disorder (bipolar type) refractory to multiple medications who improved significantly when high-dose quetiapine was added to his treatment regimen.

CASE REPORT

In May 1997, a 38-year-old muscular man of impressive size and strength (6 feet 1 inch tall and 240 lbs) was admitted for the third time to the tertiary state hospital where I practiced general and forensic psychiatry. Physicians at an intermediate state hospital where he had been an inpatient 6 times referred him to us because of his hostility, extreme mania, and demanding behavior. The referring hospital staff frequently used locked seclusion and 4-point restraints as part of the patient’s treatment protocol. The physicians at the intermediate state hospital diagnosed him with schizoaffective disorder and alcohol dependence in remission and recommended long-term hospitalization in a closed environment. During his stay at our tertiary state hospital, I also added a diagnosis of personality disorder not otherwise specified (with prominent antisocial traits).4 The admitting physician’s emergency certificate stated that the patient threatened to kill his family members and police officers, had destroyed his home, exhibited pressured speech, and was noncompliant with his medication regimen. He faced criminal charges for trespassing in his grandmother’s house, and his family experienced difficulty in restraining him from being a public nuisance (eg, confronting and assaulting people without provocation) and vandalizing public and private property with his fists (eg, parked cars). His family members were finding it increasingly difficult to live with and care for him.

His family history was notable for psychotic disorder in his grandmother, who required several hospitalizations for exacerbations of psychosis. School records and family statements show that he demonstrated behavioral problems during his early adolescence. He began smoking cigarettes at age 11 years.

Soon after, at age 12 years, he began drinking alcohol excessively. In his late 20s, he experimented with phenylcyclohexyl piperidine (PCP) and marijuana. He had never been gainfully employed, and Social Security payments were his only source of income. Over a 6-year period, he had been admitted approximately 15 times to the local acute care hospital and twice to our hospital for behavioral problems. During his most recent hospital admission, he exhibited symptoms of paranoia without evidence of substance abuse.

Previous physicians had administered a number of psychotropic medications to the patient in an attempt to control his hostile and aggressive behavior without causing excessive sedation or cognitive decline. These agents included haloperidol 30 mg/d, valproic acid 2500 mg/d, mesoridazine 100 mg/d, risperidone 4 mg/d, olanzapine 10 mg/d, bupropion 200 mg/d, gabapentin 1200 mg/d, lithium 1500 mg/d, and fluphenazine 10 mg/d. During trials of the atypical antipsychotic drugs risperidone and olanzapine, he developed some symptoms that raised concern about possible neuroleptic malignant syndrome (NMS). Despite trials with various psychotropic medications, the patient remained intimidating, boisterous, and difficult to redirect. At one point, he became severely agitated, and in a fit of rage, he self-inflicted a compound fracture of his left humerus.

His state hospital chart listed numerous physical, verbal, and psychological altercations, including repeatedly throwing furniture, threatening peers, screaming obscenities, slamming himself into inanimate objects, beating on walls and windows, swinging his fists wildly, and harassing female hospital staff members. He experienced auditory hallucinations, during which he had unintelligible conversations with imaginary children he called “the kids.” He also acknowledged excessive energy, decreased need for sleep, sadness (especially over the death of his mother), and guilt (from not listening to his parents). In contrast to his agitation, however, he would often be euthymic and apologetic, and he would express a genuine sense of regret for his uncontrolled behaviors.

On his third admission to our hospital, physicians initially administered risperidone 2 mg/d and titrated him to 4 mg/d over a period of a few days; however, he developed a body temperature of 102.6°F, unsteady gait, substantial lethargy, and urinary incontinence. During physical examination, the physician found an abscess on the patient’s buttock, which may have exacerbated development of symptoms. Because of the acute symptoms, the physician admitted him to the intensive care unit, and risperidone was discontinued as a precaution because its administration was temporally correlated with the onset of symptoms. On return to my ward, physicians switched him to olanzapine and increased his dosage to 10 mg/d within 7 days; however, the patient redeveloped disturbing symptoms, including a body temperature of 102.0°F, pulse rate of 120 bpm, lethargy, and confusion. Physicians discontinued olanzapine as a precaution because of the close correlation of its administration with the subsequent acute symptoms.

Two months later, my colleagues and I initiated quetiapine 50 mg/d and titrated to 250 mg/d. We used quetiapine as part of a combination therapy that included chlorpromazine 700 mg/d, thiamine 300 mg/d, and benztropine 4 mg/d. Over the next few months, because of incomplete symptom resolution, we increased the dosage of quetiapine several times, eventually reaching 750 mg/d. At a dosage of quetiapine 750 mg/d, he appeared more cooperative and attentive, and he was less hostile and aggressive. During the 3-month quetiapine titration, we also added propranolol hydrochloride 40 mg/d and clonazepam 4 mg/d to the treatment regimen because of anxiety, nervous tension, and residual aggressive behavior. We increased propranolol hydrochloride to 100 mg/d (60 mg in the morning and 40 mg at night), and he no longer exhibited unprovoked aggression. He became actively involved in recreational therapy activities and anger management classes and had meaningful visits with his sister and father.

Although the patient exhibited periodic anger and irritability, he recognized that he had more control over his behavior than he had before receiving quetiapine. It was clinically notable that he responded well to a medication regimen that included quetiapine 750 mg/d and that he no longer experienced NMS-like symptoms. His treatment regimen resulted in a substantial calming effect, in addition to improved attention, concentration, and sociability. As a result of his continuing improvement, he was discharged to a less restrictive inpatient hospital. He continued to improve substantially at the other facility, and after a 4-month period he was discharged home. His medication regimen on discharge included quetiapine 750 mg/d, chlorpromazine 900 mg/d, propranolol hydrochloride 100 mg/d, thiamine 300 mg/d, clonazepam 4 mg/d, and benztropine 4 mg/d.

COMMENT

Because individuals with schizoaffective disorder exhibit symptoms of both schizophrenia and mood disorder, their illness has a prominent affective component, in addition to chronic psychosis, and they usually require maintenance antipsychotic medication and psychotropic medication for affective symptoms.5 In the present case, the patient had been prescribed numerous medications with indications ranging from treatment of psychosis and control of extrapyramidal symptoms (induced by neuroleptic drugs) to treatment of depression and management of manic episodes of bipolar disorder. Several of these medications, including risperidone and olanzapine, caused symptoms of NMS, an unpredictable and rare, but potentially fatal, complication of antipsychotic medications.6

Although the patient did not respond positively to a number of medications, he did improve significantly while taking quetiapine, which has been proven in clinical studies to treat both the positive and negative symptoms of schizophrenia effectively.7-9 Quetiapine is well tolerated and has a low propensity for causing adverse events during acute and long-term treatment in adult populations.10

The success of this patient is in line with a randomized, placebo- controlled trial in which Goldstein11 found that quetiapine reduced hostility and aggression markedly in patients with schizophrenia. Our patient’s improvement is also consistent with the results of an open-label trial5 involving 10 patients with bipolar disorder and 10 patients with schizoaffective disorder who received quetiapine therapy. Overall, these patients with serious mood disorders tolerated quetiapine well, suggesting that this atypical antipsychotic may be beneficial when treating individuals with serious mood disorders who have suboptimal responses to mood stabilizers alone.

Our patient improved greatly with high-dose quetiapine, reinforcing the notion that patients who have been resistant to other agents may need a higher dose of the drug, which is typically well tolerated across the dose range.12 Although the present case suggests that physicians should use quetiapine in the treatment of patients with schizoaffective disorder, further studies involving atypical antipsychotic drugs in combination with other medications or as monotherapy are needed.

ACKNOWLEDGMENT

The author independently identified this case to report and received no financial compensation for this article. The author has no financial relationship with any medical or pharmaceutical company. AstraZeneca provided editorial assistance.

REFERENCES

1. McElroy SL, Keck PE Jr, Strakowski SM. An overview of the treatment of schizoaffective disorder. J Clin Psychiatry. 1999;60:16-22.

2. Glick ID, Murray SR, Vasudevan P, Marder SR, Hu RJ. Treatment with atypical antipsychotics: new indications and new populations. J Psychiatr Res. 2001;35:187-191.

3. Grieger TA, Benedek DM, Flynn J. Pharmacologic treatment of patients hospitalized with the diagnosis of schizoaffective disorder. J Clin Psychiatry. 2001;62:59-60.

4. American Psychiatric Association (APA). Diagnostic and Statistical Manual of Mental Disorders: DSM-IV, 4th ed. Washington, DC: APA;1994.

5. Sajatovic M, Brescan DW, Perez et al. Quetiapine alone and added to a mood stabilizer for serious mood disorders. J Clin Psychiatry. 2001;62:728-732.

6. Susman VL. Clinical management of neuroleptic malignant syndrome. Psychiatr Q. 2001;72:325-336.

7. Arvanitis LA, Miller BG. Multiple fixed doses of “Seroquel” (quetiapine) in patients with acute exacerbation of schizophrenia: a comparison with haloperidol and placebo. The Seroquel Trial 13 Study Group. Biol Psychiatry. 1997;42:233-246.

8. Borison RL, Arvanitis LA, Miller BG. ICI 204,636, an atypical antipsychotic: efficacy and safety in a multicenter, placebo-controlled trial in patients with schizophrenia. US SEROQUEL Study Group. J Clin Psychopharmacol. 1996;16:158-169.

9. Meats P. Quetiapine (‘Seroquel’): an effective and well-tolerated atypical antipsychotic. Int J Psychiatry Clin Pract. 1997;1:231-239.

10. Dev V, Raniwalla J. Quetiapine: a review of its safety in the management of schizophrenia. Drug Saf. 2000;23:295-307.

11. Goldstein JM. “Seroquel” (quetiapine fumarate) reduces hostility and aggression in patients with acute schizophrenia. Paper presented at: American Psychiatric Association 151st Annual Meeting; May 30-June 4, 1998; Toronto, Canada.

12. Brooks JO III. Successful outcome using quetiapine in a case of treatment-resistant schizophrenia with assaultive behavior. Schizophr Res. 2001;50:133-134.

David Edward Post, MD

Dr. Post is a general and forensic psychiatrist with Capital Area Human Services District in Baton Rouge, LA.

Copyright 2007 Heldref Publications

NOTE

For comments and further information, address correspondence to Dr. David Edward Post, Capital Area Human Services District, 4615 Government St / Bldg #2, Baton Rouge, LA 70806 (e-mail: [email protected]).


(c) 2007 Behavioral Medicine. Provided by ProQuest Information and Learning. All rights Reserved.

Babies Steal the Spotlight During Discovery Health’s BABY WEEK

SILVER SPRING, Md., May 29 /PRNewswire/ — A multimedia event like no other, BABY WEEK is Discovery Health’s annual tribute to adorable infants everywhere and the hardworking parents who love and care for them. This year, BABY WEEK kicks off on Sunday, June 10, at 8 PM (ET/PT) and continues through Friday, June 15, 2007. Viewers can tune in all week to watch baby-themed stories about remarkable families, inspiring pregnancies and baffling medical mysteries. In addition, Discovery Health will show support for its mission partner, the March of Dimes, by airing public service announcements addressing the growing problem of premature birth.

Discovery Health is known for introducing audiences to amazing and unforgettable families during BABY WEEK. While this year is no exception, viewers also will have the opportunity to become reacquainted with some memorable parents and little ones from years past. First out of the playpen in this year’s BABY WEEK lineup is the premiere of QUINTUPLETS! REVISITED, Sunday, June 10, at 8 PM (ET/PT), which checks in with the Derk family three years after the memorable birth of their quintuplets. The one-of-a-kind Gosselin family also makes a return appearance in 2007 with SEXTUPLETS AND TWINS: ONE YEAR LATER on Sunday, June 10, at 9 PM (ET/PT).

In addition to the wall-to-wall television programming, BABY WEEK will include a full offering of parenting advice and online baby-themed games, puzzles and video at DiscoveryHealth.com, starting June 10, 2007. Gamers can compete in the Olympics-inspired Parenting Games, which include five virtual events: feeding, diapering, bathing, cleaning up and bedtime. In addition, users will have the chance to put their short-term memory to the test with the Gosselin Concentration Game. Special March of Dimes content will also be featured on DiscoveryHealth.com detailing the organization’s mission to give every baby a healthy start.

Another unique feature of BABY WEEK is the robust video-on-demand (VOD) content on Discovery Health On-Call, including short-form video clips with tips and advice on babies and parenting as well as interviews with Discovery Health’s experts. This year, Discovery Health On-Call will be featuring a special sneak peek at a full-length episode of MYSTERY DIAGNOSIS focusing on the stories of three babies born with mysterious medical conditions.

2007 BABY WEEK Programming Preview:

QUINTUPLETS! REVISITED
Premieres Sunday, June 10, at 8 PM

Revisit the Derk family three years after the birth of their quintuplets. With the support of doctors and their community, remarkable parents Brenda and Jim were able to handle some incredible emotional ups and downs. Now, revisit the quintuplets’ first year of life and see how these frolicking 4-year-olds are thriving. Catch up with Brenda and Jim and the quintuplets as they celebrate their fourth birthday.

SEXTUPLETS AND TWINS: ONE YEAR LATER
Airs Sunday, June 10, at 9 PM

Like many young married couples, Jon and Kate Gosselin planned on having children, but eight children, all under the age of five, was not exactly part of their plan. After finding out that Kate suffered from polycystic ovarian syndrome, which prevents normal ovulation, the couple turned to fertility treatment and quickly became pregnant with twin girls. Then, wanting to add one more child to their family, Jon and Kate went through another round of fertility treatment, resulting in sextuplets! Now Kate’s stretched-out post-pregnancy belly is going to get a tummy tuck, courtesy of one very generous viewer.

BABIES: SPECIAL DELIVERY REUNION
Airs Monday, June 11, at 8 PM

Follow five of the most fascinating cases from past seasons of BABIES: SPECIAL DELIVERY, and follow up with the parents and babies one and two years later. Learn how the babies and mothers are doing since those eventful births.

QUINT-ESSENTIAL
Airs Tuesday, June 12, at 8 PM

Meet Pete and Jenny of small-town Illinois. After losing two previous babies, Jenny manages to get pregnant again… this time with quintuplets! Experience the drama of a high-risk pregnancy, then meet the five miracle babies.

BABIES, BABIES EVERYWHERE
Airs Wednesday, June 13, at 8 PM

In this one-hour reunion special, a trio of memorable families previously featured in the Discovery Health specials TRIPLE THE TRIPLETS and DOUBLE IDENTICAL TWINS are faced with medical and emotional challenges as they struggle to combine multiples and marriage.

PARENTING A BAKER'S DOZEN
Premieres Thursday, June 14, at 8 PM

With six kids from a previous marriage, the Healys added one set of identical twins, then a singleton, then a set of fraternal twins, and now, without any fertility drugs, they have added two more identical twins, defying odds of one in five million.

MYSTERY DIAGNOSIS: BABIES
Premieres Friday, June 15, at 8 PM

From the moment her youngest child, Eamon, was born, Lisa Murphy knew that something was terribly wrong. His unresponsiveness and general lack of development worried Lisa and her husband, Bob, but no doctor could find anything wrong with their son.

About the March of Dimes

The March of Dimes is a national voluntary health agency whose mission is to improve the health of babies by preventing birth defects, premature birth, and infant mortality. Founded in 1938, the March of Dimes funds programs of research, community services, education, and advocacy to save babies and in 2003 launched a campaign to address the increasing rate of premature birth. For more information, visit the March of Dimes Web site at marchofdimes.com or its Spanish language Web site at nacersano.org.

About Discovery Health Media Enterprises

Discovery Health Media Enterprises includes the Discovery Health and FitTV television networks and online assets including http://www.discoveryhealth.com/, as well as its Continuing Medical Education (CME) business and Discovery’s first stand-alone VOD service, Discovery Health On-Call. Discovery Health Media Enterprises is part of Discovery Communications, the number-one nonfiction media company reaching more than 1.5 billion cumulative subscribers in over 170 countries. Through TV and digital media, Discovery’s 100-plus worldwide networks include Discovery Channel, TLC, Animal Planet, The Science Channel, Discovery Health and Discovery HD Theater. Discovery Communications is owned by Discovery Holding Co., Advance/Newhouse Communications and John S. Hendricks, Discovery’s founder and chairman. For more information please visit http://www.discovery.com/.

Discovery Health

CONTACT: Anna Reinhart-Marean for Discovery Health, +1-240-662-6502, [email protected]

Web sites: http://www.discovery.com/, http://www.marchofdimes.com/, http://www.nacersano.org/

Organic Movement Faces Split Over Air-Freighted Food

By Martin Hickman

For the conscientious, food shopping poses many ethical dilemmas: are organic bananas better than Fairtrade ones? Are English tomatoes preferable to imports?

Britain’s booming organic movement has been wrestling with one such dilemma for years and the debate has become so heated it can no longer be ignored. From today, the country’s organic farmers, suppliers and shoppers are being asked for an answer to an awkward question: is it acceptable to air-freight organic food?

On this one question could hinge the prosperity of thousands of African farmers, fruit and vegetable importers, the integrity of the organic movement and, to some extent, the health of the planet itself.

If the body which certifies three-quarters of organic food, the Soil Association, rules that the climate change pollution cannot be justified, it may ban all flown-in food.

A ban might split the organic movement: one side with strict environmental standards and another with looser standards that factor in the development of the Third World. The argument arises from the rapid rise of the UK organic movement, which has burgeoned into a £1.6bn-a-year business.

Farmers have struggled to grow enough food and in 2005 supermarkets imported one-third of their organic range, mostly by air.

Nationally “food miles” are at a record high, with air-freighting up 136 per cent between 1992 and 2002. Yet flying food thousands of miles from poor farmers to wealthy Westerners generates substantial amounts of CO2 just as climate change is being recognised as an emergency. Shoppers find the dissonance uncomfortable: a Soil Association survey found that eight out of 10 would prefer to buy conventional local food rather than an organic import.

At Britain’s biggest vegetable box supplier, Riverford Farm in Devon, airfreighted food is banned. The self-imposed ban is sometimes difficult but Guy Watson, its founder, believes the environment must take priority. He tells customers: “Most out-of-season veg imported to the UK is flown in from Africa and South America causing horrendous emissions, or trucked from southern Europe with less, but still substantial, environmental impact.” About 80 per cent of the company’s 35,000 customers’ food comes from the UK, with the rest arriving by road or ship.

By contrast, the importer Blue Skies in Northamptonshire buys fresh pineapple, mango and coconuts from Ghana, where it employs 1,500 people. “We would see any change to the rules as unfair to us and unfair to Africa,” said the founder, Anthony Pile. “The carbon emissions for air freighted food is something like 1 per cent of the total emissions. Why hit farmers who have a tiny carbon footprint and often live without electricity?” he asked.

In its consultation, which ends on 28 September this year, the Soil Association is setting out the case for five options. Maintaining the status quo would help faraway producers but might damage the organisation’s credibility. A gradual or total ban would damage exporters but help tackle climate change and encourage more sustainable agriculture. Warning stickers or offsetting flights would be a compromise.

Anna Bradley, of the Soil Association’s standards committee, explained that the rules had to evolve over time and the time had come for a definitive answer on aviation. “It’s quite clear right now that these issues of climate change and CO2 are much more important than they were 10 years ago and it feels much more pertinent to talk about them,” she said. But Britain’s organic trailblazer could lose business by raising its standards, just as it did when it tightened its rules on poultry farms. “That cost us licensees but it has … retained the integrity of the standard,” she said.

Flown in from abroad

PORK FROM DENMARK

When it opens its massive new 75,000 sq ft store in London’s Kensington High Street next month, the US organic giant Whole Foods Market will stock many imports from all over the world because UK supply of organic produce is overstretched. Among products likely to be brought in are pork from Denmark and beef from France and Germany.

PINEAPPLES FROM GHANA

Workers in west Africa grow and pack tropical fruit such as pineapples, mangoes and papaya which is then flown to the UK. Countries like Ghana say the foreign income is vital to development.

MILK FROM THE NETHERLANDS

Supermarkets are struggling to find enough organic milk because of the number of dairy farmers going out of business and the time taken to convert to new methods. Organic milk is bought from the Netherlands.

(c) 2007 Independent, The; London (UK). Provided by ProQuest Information and Learning. All rights Reserved.

New Pediatrician Hired to Staff Clinic

By Steve Jones, The Sun News, Myrtle Beach, S.C.

May 28–SHALLOTTE, N.C. — The pediatric clinic at the Brunswick County Health Department has resumed a regular, three-day-a-week schedule after contracting with a Myrtle Beach pediatrician to staff the facility.

The clinic has been a Brunswick County staple for decades, but its schedule suffered when Laura Pope, the nurse practitioner who staffed it for 28 years, contracted cancer last year and died earlier this year, said Fred Michael, the health department’s deputy director.

Before her death in February, Michael said, “She’d been out at least six months.”

The county was using a contract pediatrician while it looked for another full-time nurse practitioner when Duke medical school graduate Dr. Francesca Brown called. Michael said Brown learned of the clinic’s need through Denise Mihal, chief executive officer of Brunswick Community Hospital.

A native of Myrtle Beach, Brown practiced pediatrics for nine years in the U.S. Air Force before returning to the area. Michael said she wants to help underserved children and believes public health is the way to go.

She and health department officials hope that the demand for pediatric care at the health department will build so that a doctor is needed full time.

The clinic will see anyone from infants to 21-year-olds on Mondays through Wednesdays. Michael said there are about 17 appointments on each of the three days now and that additional, walk-in patients can add to that total.

The clinic sees all patients, whether they qualify for Medicaid, have no insurance or are privately insured. Fees are charged based on a patient’s ability to pay.

Ideally, Michael hopes the clinic will see 30 patients a day to help fund Brown’s $80-per-hour contract.

It’s a number he believes is possible.

“I think it’s going to take getting the word out,” he said of achieving the higher number of patients. “Most of our business is from word of mouth.”

——

If you go

What — Brunswick County Pediatric Clinic

When — 8:30 a.m.-4:30 p.m., Monday, Tuesday and Wednesday

Where — Brunswick County Health Department, County Government Complex, Bolivia, N.C.

Information — 910-253-2250

Contact STEVE JONES at 910-754-9855 or [email protected].

—–

Copyright (c) 2007, The Sun News, Myrtle Beach, S.C.

Distributed by McClatchy-Tribune Information Services.

For reprints, email [email protected], call 800-374-7985 or 847-635-6550, send a fax to 847-635-6968, or write to The Permissions Group Inc., 1247 Milwaukee Ave., Suite 303, Glenview, IL 60025, USA.

Troubles Run Deep on Gulf Oil Platform

THUNDER HORSE PLATFORM, Gulf of Mexico — The day after massive Hurricane Dennis churned through the Gulf of Mexico in July 2005, a commercial vessel traveling past BP PLC’s hulking Thunder Horse oil platform radioed the bad news to its owner: The platform’s top deck was listing into the water.

When a landing party from BP arrived at the platform two days later, they had to tie onto rails near the control tower to haul themselves up the platform’s 30-degree incline.

“It looked like a ship that had been sunk,” recalled Stan Bond, BP’s head of subsea operations for the Gulf of Mexico.

The workers were surprised to learn that the platform, evacuated before Dennis hit, had not taken on water from a leak through its hull. Rather, an incorrectly plumbed, 6-inch length of pipe had allowed water to flow freely among several ballast tanks. That began a chain of events that caused the platform to tip into the drink.

Now BP is attempting to do what no oil company has done before: essentially rebuild the entire architecture of an oil field on the sea floor some 6,000 feet beneath the waves.

At $250 million, the job is costlier, and riskier, than putting the equipment on the gulf floor in the first place. On the frontier of oil exploration, the margin between riches and disaster can be as small as a 6-inch piece of pipe. Yet for BP, rebuilding the platform is critically important because the company desperately needs the oil flowing as reserves in formerly rich fields such as Prudhoe Bay in Alaska dwindle.

Politics has made oil from the Middle East, Africa, Russia and South America increasingly out of reach. And new discoveries around the world are more rare and continue to shrink in size.

“We have passed the peak for world discoveries,” said Robert Gillon, an analyst at oil-industry research firm John S. Herold Inc. “It’s hard to see how the industry can do anything whatsoever to materially increase its oil reserves or production.”

Against this backdrop, Thunder Horse, sitting atop a reserve that possibly holds 1.5 billion barrels, promises to deliver up to 250,000 barrels of oil a day, making it one of the gulf’s biggest producers. For U.S. consumers now paying an average of $3.10 a gallon for gas, Thunder Horse would relieve some of the price pressure: Fully operational, it would boost total U.S. production by 5 percent.

For BP, the troubles at Thunder Horse have turned the oil platform into a dual symbol. Like Janus, the two-faced Roman god that glimpses both the past and the future, Thunder Horse stands as a reminder of BP’s mistake-prone recent track record. Looking forward, though, it holds out the prospect of a lucrative, rewarding future.

The Thunder Horse mishap followed by nearly four months BP’s worst-ever accident on U.S. soil, a refinery explosion in Texas City, Texas, that killed 15 people. Then, last spring, BP spilled 200,000 gallons of oil onto the Arctic tundra, the first of several pipe leaks that ultimately led BP to temporarily shut down half of North America’s largest oil field.

Yet getting Thunder Horse on line will not be easy. The deep Gulf of Mexico is challenging in its own right. Removing and reassembling an oil field at such depths has never been attempted.

One daunting challenge: delicately lifting miles-long strings of steel pipe from the sea floor. If the pipes stress or twist too much, they might weaken and perhaps spring a leak one day, resulting in disaster.

“It will not be easy to pull off,” Bond said. “You’re trying to change things to make something good. You’ve got to make sure you don’t change things and make them worse.”

Exploring the ‘Dead Sea’

BP’s history in the deep waters of the gulf began inauspiciously. Though the company had drilled in the shallows since the 1980s, in 1991 it began focusing on finding new reserves under water greater than 2,500 feet deep.

The early effort did not go well. At the outset, BP drilled a series of dry holes. At a cost well beyond $100 million, BP was learning why others in the industry had given the gulf a derisive nickname: “The Dead Sea.”

“Drilling a succession of dry holes, it was almost the definition of insanity,” said Cindy Yeilding, a leading geologist for BP’s Gulf of Mexico effort. Executives at BP’s London headquarters agreed. For two years, they would not approve any additional deep-water gulf drilling.

But that thinking changed. After a reassessment, BP’s oil explorers decided on a new strategy that focused all the company’s energy on seeking big reserves, dubbed “elephants.” And the company put big resources behind the new approach: as much as $2.5 billion annually in recent years on gulf exploration.

That’s nearly double the amount spent in BP’s next-largest target, Azerbaijan, and roughly 20 percent of BP’s total exploration and production budget.

The allocation makes sense for BP, because the gulf’s deep waters today float above one of the hottest oil prospects on the planet, matching up with Angola and a small handful of lesser places at a time when new huge prospects are not on anyone’s maps.

BP is able to move so aggressively in large part because of its $55 billion purchase of Chicago-based Amoco in 1999. Until then, the efforts of BP, like those of other oil giants, had been stymied in the deep Gulf of Mexico by thick layers of liquefied salt that sit like opaque blankets over much of the gulf’s oil deposits.

But Amoco had world-class imaging technology, as well as data-mining capability and mathematical algorithms that could interpret the data it collected.

Combining Amoco’s tools with some of its own, BP developed a unique exploration approach: It began placing sensing nodes on the floor of the gulf. Combining the sea-floor node data with information collected by the conventional method of towing sensors behind a large boat, BP could look through salt from several angles. Suddenly, the opaque blanket was lifted.

Exploration was not cheap, though. These days, it costs about $50 million to fully map a potential oil reserve. To drill an exploratory well costs an additional $100 million. Both those figures are huge jumps from what they would have been a decade ago, when shallower, less complex oil reserves were still available to tap. Back then, roughly $10 million would cover the cost of an exploratory hole.

Such costs and technology hurdles are what drove BP to adopt its “elephant hunt” strategy: Focusing only on the potentially biggest and most lucrative prospects, and ignoring the rest.

An aggressive lease-acquisition strategy, paying $300,000 and more a pop for rights to explore and pump oil from a 9-square-mile plot of the ocean floor, backed the effort. Taking advantage of a controversial Clinton administration program that drastically reduced royalties on deep-water gulf leases sold after 1995, BP stocked up.

And for good reason. Like most oil companies, BP has seen its exploration opportunities diminish over time. Its reserve replacement ratio, which measures whether a company adds new oil reserves at the same rate it depletes its existing resources, has fallen steadily in recent years.

BP’s replacement ratio had a modern-day peak of 191 percent in 2001, meaning BP added almost twice as much in reserves as it produced. But that number dropped below full replacement in 2004 and 2005 before climbing above the break-even line again last year, to 113 percent.
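The ratio described above is simple arithmetic: reserves booked in a period divided by reserves depleted by production in that period, expressed as a percentage. A minimal sketch of the calculation (the function name and the illustrative volumes are ours, not BP's reported figures):

```python
def replacement_ratio(reserves_added, production):
    """Reserve replacement ratio, as a percentage.

    reserves_added: new reserves booked over a period (e.g. millions of barrels)
    production: reserves depleted by production over the same period, same units.
    Above 100 means the reserve base grew; below 100 means it shrank.
    """
    return 100.0 * reserves_added / production

# Hypothetical volumes chosen to mirror the figures in the text:
# booking 1,910 while producing 1,000 gives a 191% ratio (the 2001-style peak);
# booking 1,130 against 1,000 gives 113% (the break-even-plus year).
print(replacement_ratio(1910, 1000))  # 191.0
print(replacement_ratio(1130, 1000))  # 113.0
```

A ratio computed this way says nothing about the quality or cost of the barrels added, which is why the article pairs it with exploration spending.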

By 2006, BP held leases on 650 tracts in Gulf of Mexico water deeper than 1,250 feet. After 15 years of effort, BP was vying with longtime deep-water player Chevron to become the largest leaseholder in the deep gulf.

A host of productive exploratory wells followed. Going by names like Atlantis, Neptune, Mad Dog and Holstein, they are among the gulf’s richest finds.

One, at first called Crazy Horse, got a name change after descendants of the Native American warrior protested. Today it’s called Thunder Horse.

The $250 million pipe

At a cost of $1 billion to build, and physically imposing with a top deck that rises 15 stories above the water’s surface, the Thunder Horse platform appears to be invulnerable to the forces of nature and a wonder of technology. After all, more than 18 major parts on the platform have Serial No. 001 — meaning they were invented just for this job.

It turns out Thunder Horse is vulnerable to both the power of nature and the shortcomings of modern technology.

The platform was designed to handle hurricanes as strong as Dennis. But the evacuation for the hurricane, combined with just the slightest shifting in Dennis’ strong winds, set in motion an unlikely chain of events that caused the platform to tilt. That, in turn, has led to the delay that is costing BP billions in lost revenue — and serving for the industry as an example of what can go wrong at the outer limits of technology.

The platform rests on four hollow, airtight legs that are as wide across as a two-bedroom apartment. Normally, the legs give the platform buoyancy, and horizontal connecting sections add stability.

After workers evacuated in advance of Hurricane Dennis, though, the misplumbed pipe allowed water to cascade through ballast and bilge tanks. The force of the flow forced open valves that in turn allowed the water to gather in the two port-side legs of the platform.

As Thunder Horse’s top deck tilted toward the water, ballast pipes that normally pump water out began taking water in.

The support legs filled with water, and all manner of calamity set in. Some 30 car-size pumps and motors were ruined. A corroding process started that ran through the platform’s 25 miles of electric cable and wiring like oil being sucked up by a wick.

“There’s the $250 million pipe,” said Sammy McDaniel, BP’s head of Gulf of Mexico operations, a wry smile on his thin face as he showed a visitor the cleaned-up inside of one of Thunder Horse’s large, hollow legs.

Neither McDaniel nor Bond had set foot on Thunder Horse before the mishap. On the first helicopter flight in, they agreed to work together, with McDaniel focusing mainly on the platform’s operations and Bond zeroing in on the bottom of the ocean.

“We knew this one was going to be a bear,” McDaniel said.

In the weeks after the landing party first boarded Thunder Horse, three days after the storm, the platform became a hive of frantic activity. With 150 workers living on a ship anchored nearby, working with lamps on their hard hats until electricity could be restored, McDaniel and Bond led a frantic cleanup and restart effort.

Work stopped only for hurricanes. After the devastating successive storms, Katrina and Rita, came through, the workers stayed off the platforms while trying to help their colleagues piece their lives back together.

BP’s corporate brass told the public that it believed Thunder Horse could restart by late August 2006. Privately, Bond and McDaniel thought they could get the platform back in operation before the end of 2005. Rushing to meet the deadline, workers piled up nearly 4 million man-hours on the cleanup alone.

With start-up approaching, the recovery team in May of 2006 used water to pressure-test the subsea system of pumps, wellheads, piping and gathering centers that sprawl over an oil field on the ocean floor that covers an area nearly as wide across as the North Side of Chicago.

Then the unthinkable happened: The system leaked.

“We were this close,” said McDaniel, holding a thumb and forefinger close together. “Then, ‘Damn! What went wrong?'”

Sleuthing at 6,000 feet

Perhaps a valve was left open. Perhaps a coupling on a pumping station wasn’t properly tightened. “We figured we would find out in our spare time,” McDaniel said.

Two weeks passed, then a month. One pressure test held for eight hours, and then failed. The team injected ink into the piping network and sent a remotely operated, unmanned submarine 6,000 feet down into the water to photograph what was going on.

Outfitted with cameras and high-precision robotic arms, the sub was capable of spotting any problems, and fixing many potential mishaps.

As the robot’s operators watched on a black-and-white video monitor inside a cramped control room, the camera focused on an image of a sea-floor metal structure, a manifold. The size of the container on the back of a semi-truck, the manifold is a key piece of equipment that ties together the lines from dozens of sea-floor wells, and then helps transfer oil up toward the platform.

Most of the huge manifold looked fine. But on the side, on one of the large pipes that snaked through the frame that formed a sort of exoskeleton for the structure, was a shocking sight: an inch-wide gash slashed through a weld. The leak was found.

Cause of the cracks

Perhaps it was just one bad weld, but McDaniel and Bond had to determine if there were any more. They directed the submarine to another manifold and found a second ruptured weld. Inspection of other welds in the subsea equipment turned up even more cracks.

Thunder Horse’s oil reservoir is nearly 5 miles below the water’s surface. At that depth, oil will gush from the drill pipes at a temperature of 275 degrees Fahrenheit, under a metal-crunching 17,400 pounds per square inch of pressure.

Those conditions can stress even the mixture of high-strength steel and alloy that makes up the half-inch welds on the manifolds and pipes of the Thunder Horse oil fields. But the equipment had gone through severe tests — at 125 percent of the worst stresses that the Thunder Horse field might exert.

There had to be something else.

“We’re operating at the edge of what is known,” said Kenny Lang, BP’s head of Gulf of Mexico operations. “When you’re at the edge, you’re creating knowledge. And when you create knowledge, you sometimes stub your toe.”

Now the hunt was on for a new spot of knowledge: What caused the problem?

Lang flew in a team of experts in subsea oil production, welding and metallurgy from around the world to Houston to determine the cause of the weld failures.

Meanwhile, he directed others to touch base with the manufacturers of every component built into the sea-floor manifolds. He asked for testing of the anti-corrosion materials and insulation that enshrouded the subsea pipes. He wanted no clue missed.

“Ultimately you say, ‘What if I’m wrong about what caused this? We put our equipment back on the seabed, and it fails?’ ” Lang said. “You can’t risk that.”

Lang also wanted other oil companies to be aware of the dangers. Learning that Shell Oil Co. was due to submerge manifolds at depths similar to Thunder Horse in the fourth quarter of last year, he made certain Shell was notified of the possible risks.

Even as the investigation started, though, pressure mounted onboard Thunder Horse.

BP had commissioned the Balder, one of only two ships in the world capable of lifting the manifolds and other heavy equipment from the sea floor, to visit the platform in December. After that, the Balder wouldn’t be available again for almost a year.

By late September 2006, the manifold investigation team delivered its verdict. The welds, indeed, were the problem thanks to an unforeseen chemical reaction.

While the manifolds sat idle for a year after the platform tilted, the crushing pressure at the bottom of the sea forced hydrogen atoms into the mix of steel and high-strength alloy that made up the welds. The hydrogen caused the metal to become brittle, and when water was forced through the piping during the restart testing, the welds failed.

Drilling toward Mardi Gras

In the meantime, Bond hadn’t been waiting for a verdict. He knew he only had until the end of 2006 to have all the sea-floor equipment ready to be lifted. That meant sealing wellheads, cutting pipes and planning logistics. It also meant working around the schedules of the 280 people onboard Thunder Horse, some of whom continued drilling new holes even as the rest of the sea-floor operation stood idle.

Drilling, after all, is what Thunder Horse was built to do.

On a recent spring day, a team of workers operated the ship’s drill rig, pulling up a drill bit that had gone more than 20,000 feet below the seabed floor. Nearly 3 million pounds of pipe stretched from the drill rig to the bottom of the hole.

Spinning furiously, with “mud” that is used to lubricate them spitting out of the hole, the drilling pipes came out in 95-foot sections. As each joint emerged, workers stood by as a huge, mechanized clamp twisted off the coupling that separated it from the long line still stuck in the ground.

Directed by operators using joysticks in an air-conditioned control studio, an overhead winch grabbed the newly freed section of pipe and hung it on a rack. The pipes knocked together, sounding like a supersize wind chime.

Nearby, in the main Thunder Horse control room, BP workers monitored huge computer displays that showed the pressure, temperature and fluid volumes in all of the oil platform’s piping systems.

There was something eerily missing on the screens, though: Not an ounce of oil was anywhere to be found.

The rebuilding process

Today, Thunder Horse’s crews have removed about three-quarters of the equipment that once nestled on the seabed. They are putting new insulation and anti-corrosion coatings on some, replacing other pieces entirely.

The most delicate operation — pulling the pipe up from the seabed without bending it — is necessary, Bond said, because it’s the only way he can reassemble the equipment that’s needed on the oil field. The deep-sea robots can cut the pipes at the point they connect to the equipment 6,000 feet below the surface. But robots can’t weld.

So Bond must oversee an operation that pulls up the freed pipe and brings it within reach of the Thunder Horse deck. There workers can weld it back to the huge, heavy pieces of equipment. Then BP workers must carefully lower the joined pieces back down, all without causing any new problems.

No one says it will be easy. But everyone onboard says it must happen on time. They will need the Balder for some of the work, and demand for that ship is so high that it only comes by every 18 months or so.

“We’ve just been going full speed for a long time, and there’s no letting up,” said McDaniel, the operations chief.

“What we want to do is prove to ourselves and the world that we’re ready,” he said. “We just need to get all this stuff under us, and begin operation.”

Leading a reporter on a tour of the complex onboard systems that separate oil, water and gas, McDaniel pointed to a pipe from the platform that plunges deep into the ocean. By the time Thunder Horse goes into production, the pipe will connect to Mardi Gras — a $1 billion pipeline BP is building that one day will carry half of all the oil pumped from the deep-water gulf.

“This is the top end of the Mardi Gras pipeline,” McDaniel said. “When the oil leaves here, it’s gone.”

For BP, and for gas-hungry consumers across the U.S., it can’t happen soon enough.

– – –

About this story

Chief Business Correspondent David Greising spent six months reporting on BP PLC’s attempts to fix three major problem sites within its huge North American unit. Greising, along with Tribune photographer Bob Fila, reported from Deadhorse, Alaska, site of the Prudhoe Bay oil spill last summer; Texas City, Texas, the location of a deadly refinery explosion that killed 15 in 2005; and atop BP’s Thunder Horse oil platform in the Gulf of Mexico, which nearly toppled into the sea during a hurricane in 2005. The Tribune is the first news organization to visit all three sites since the disasters and the only one ever to set foot on Thunder Horse in the gulf.

[email protected]

IN THE WEB EDITION: Read Part 1 of the series, plus the Tribune’s David Greising describes his visit to BP’s facilities and how the oil giant is struggling to rebuild in a video at chicagotribune.com/bp

UK’s Doctor of Development: Michael Karpf Praised for Setting ‘New Tone’ at Facility

By Karla Ward, The Lexington Herald-Leader, Ky.

May 28–From the conference room adjoining his office at the University of Kentucky, Dr. Michael Karpf can see what is now only an expanse of red dirt but will one day be a new UK Chandler Hospital.

“I will watch this hospital go up brick by brick,” Karpf said as he gazed out his window on a recent morning.

It’s one of many transformations that have begun in the medical enterprise at UK since Karpf came to Lexington 3½ years ago:

— The number of faculty in the College of Medicine has increased from 610 to 712.

— Operating revenues at the hospital are up 49 percent, and research funding is up 45 percent.

— UK has plans for a new academic medical campus across Limestone from the existing hospital and Kentucky Clinic.

— And this week, UK officials will break ground for the new hospital, which could cost $700 million by the time its two phases are finished.

“He was really able to … set a whole new tone,” said Dr. Richard Lofgren, the hospital’s chief medical officer, who also worked with Karpf at the University of Pittsburgh.

Karpf was brought in to the newly created position of executive vice president for health affairs at a time when the medical center had begun to lose key clinical faculty, and the hospital’s patient numbers were on the decline.

A primary problem was a disconnect between the hospital administration and the medical faculty, said Dr. Frederick de Beer, chairman of UK’s Department of Internal Medicine.

“It was a model that wasn’t working,” he said. “It was a divided house.”

Karpf’s job was to put in place a new governance structure that would unite the hospital, College of Medicine and physician practices under a common vision and set UK on the path to becoming a top 20 academic medical center.

High praise from peers

So far, his supporters say, Karpf has succeeded.

“He has the ability to read the topography and understand where you need to go,” then make it happen, Lofgren said.

Lofgren noted that when he came to UK 2½ years ago, the medical center seemed “organizationally down,” but that has since turned around and then some, as UK has become a “magnet” for faculty recruits and medical school applicants.

UK President Lee Todd attributes the growth to “the trust level” that Karpf has built with physicians, both within the UK medical center and in rural communities where the university is working to strengthen relationships.

“He’s a very good communicator. He cares about the health care of Kentuckians,” Todd said.

Karpf also helped build support by opening the hospital’s financial books, a change that allowed employees to share in a “single financial vision,” said de Beer, who has worked at UK for 18 years and served on the committee that recruited Karpf.

He praised Karpf for his accessibility and integrity.

“He’s already ensuring that what was shaped will exist beyond him,” de Beer said. “That is great leadership.”

Karpf said he came to Kentucky because he liked the idea of being able to generate large-scale changes in a health system that affects many people’s lives.

“I get up in the morning to try to figure out how to do things right for health care,” he said. “I enjoy doing stuff that has a public good to it. … It’s a privilege to have that kind of job.”

Difficult times in L.A.

If the hospital system Karpf previously worked for in Los Angeles failed, he said, the rest of the health care market could quickly pick up the slack. But if UK fails, he said, “it’s a big problem for 2 million people.”

During his eight years at the University of California-Los Angeles, Karpf served as vice provost for hospital systems and was responsible for integrating the UCLA Medical Center, Santa Monica/UCLA Medical Center and Neuropsychiatric Hospital into a single corporate entity.

But when he left, the health system’s income was dropping, and a private consulting group brought in to assess the situation said the organization could face major losses if changes were not made. At the same time, the university was in the midst of building two hospitals to replace existing facilities.

Sergio Melgar, chief financial officer for UK HealthCare who also served as chief financial officer for the medical enterprise at UCLA before coming to UK in 2004, said the dour circumstances at UCLA were a result not of Karpf’s leadership, but of outside economic pressures.

“The health care market was an issue. The economy in California was an issue,” he said.

In response to a request for an interview, Dr. Gerald S. Levey, vice chancellor of UCLA Medical Sciences and dean of the David Geffen School of Medicine, issued a statement describing Karpf as “a knowledgeable, hard working and compassionate leader.”

“Running a major academic medical center is no easy task, especially in California, but Dr. Karpf worked diligently to meet the many challenges,” Levey said.

Karpf said he decided to leave UCLA because he grew frustrated that there was not a good structure in place for making difficult decisions.

“It was hard to get the faculty and the hospital administration on the same page,” he said.

“What I like here is that my job is to be the final referee.”

Melgar said Karpf was well-liked among the faculty at UCLA, and “the rank and file were absolutely crushed” when he decided to leave.

In addition to Lofgren and Melgar, some other former associates have followed Karpf to the Bluegrass.

Karpf lured Karen Riggs from UCLA to start UK’s Physician Liaison Program, which works with doctors and their staffs to resolve concerns regarding referrals to UK.

Like UCLA, UK has had “communications issues” between the health care enterprise and the doctors who support it. Riggs said her job is to work with doctors to find out what they need and help fill those needs.

“I like what Dr. Karpf is doing here,” she said. “It just feels good to be helping with all this.”

Karpf said he has promised Todd that he will remain at UK for at least five more years to see the first phase of the hospital building project through.

After that, even if he decides to do something else, he plans to stay in the Bluegrass, because he loves the lifestyle.

“I’ll always have a place in Kentucky,” he said.

Reach Karla Ward at (859) 231-3314 or 1-800-950-6397, Ext. 3314.

—–

Copyright (c) 2007, The Lexington Herald-Leader, Ky.

Distributed by McClatchy-Tribune Information Services.

For reprints, email [email protected], call 800-374-7985 or 847-635-6550, send a fax to 847-635-6968, or write to The Permissions Group Inc., 1247 Milwaukee Ave., Suite 303, Glenview, IL 60025, USA.

CLSI Publishes Guideline for Toxicology and Drug Testing in the Clinical Laboratory

Clinical and Laboratory Standards Institute (CLSI, formerly NCCLS) recently published a new edition of its document, Toxicology and Drug Testing in the Clinical Laboratory; Approved Guideline–Second Edition (C52-A2). It is designed to aid the clinical laboratorian in developing procedures for the efficient and reliable analysis of clinical and forensic specimens to qualitatively and/or quantitatively determine the presence of drugs of abuse and other commonly encountered drugs. This guideline addresses forensic drug testing applications, such as workplace, criminal justice system, and rehabilitation settings, and clinical drug testing as typically undertaken for the diagnosis and treatment of emergency room patients. Its primary objective is to ensure that high quality standards are maintained within both of these important areas of clinical laboratory analysis.

This guideline addresses the most common specimen types used for toxicology testing, including serum and plasma, whole blood, and urine. Topics addressed in the guideline include:

specimen collection and processing;

methods of analysis;

quality assurance; and

the reporting and interpretation of results.

Because the results of forensic analyses have obvious potential for use as evidence in legal proceedings, information is provided relating to forensic procedures used to safeguard the identity of the specimen, document the chain of custody, and ensure proper use of analytical results.

The previous editions of this guideline focused extensively on forensic testing conducted to detect drug abuse. The guideline has been expanded to include drug testing for clinical purposes. This document replaces Urine Drug Testing in the Clinical Laboratory; Approved Guideline (T/DM8-A).

For additional information on CLSI or for further information regarding this release, visit our website at http://www.clsi.org or call 610.688.0100.

CLSI is a global, nonprofit, membership-based organization dedicated to developing standards and guidelines for the healthcare and medical testing community. CLSI’s unique consensus process facilitates the creation of standards and guidelines that are reliable, practical, and achievable for an effective quality system.

University of Colorado Hospital Completes Move to Anschutz Medical Campus

DENVER, May 24 /PRNewswire/ — In mid-June, University of Colorado Hospital will complete the move of all inpatients, staff and equipment from its Denver campus at East Ninth Avenue and Colorado Boulevard to its new facilities at the Anschutz Medical Campus in Aurora. Located near I-225 and Colfax — just six miles east of the old campus — the hospital’s new facilities offer the most advanced medical treatment and finest amenities available in the Rocky Mountain Region.

While University of Colorado Hospital has been caring for patients at both campuses for several years, the old hospital, built in 1964, will officially close its doors for inpatients at Ninth and Colorado after the final leg of the move takes place June 11-17.

“University of Colorado Hospital at the Anschutz Medical Campus combines state-of-the-art research and technology, world-class facilities, innovative patient care and guest services to set a new standard in the total health care experience,” said Bruce Schroffel, president and CEO of University of Colorado Hospital.

“As the only academic medical center in the region, we have long offered one of the highest concentrations of health care experts in the nation. Our new facilities at the Anschutz Medical Campus are designed to enhance the possibilities of collaborative medicine in the context of a new patient — and family centered care model. We’re truly looking forward to giving the people of Colorado and this region unparalleled levels of care and service.”

As the hospital’s move nears completion, the community should be aware of important dates regarding the closing of emergency, surgical and obstetrics services at the Ninth and Colorado facility:

   *  June 11, 7 a.m. - Obstetrics (OB) services at Ninth and Colorado will close permanently, and be available only at the Anschutz Medical Campus. OB and delivery services are already available at the Anschutz Inpatient Pavilion.
   *  June 14, 7 a.m. - The Emergency Department at Ninth and Colorado will close permanently, and move all its services to the Anschutz Medical Campus. An Emergency Department currently exists at the Anschutz Medical Campus.
   *  June 18, 7 a.m. - The Operating Room (OR) at Ninth and Colorado will close permanently and move to the Anschutz Medical Campus. Beginning June 14, the Ninth and Colorado OR will handle only emergency procedures as needed through June 18, when it will close forever.

“Although we’ve been informing our patients and health partners about the move, we want to ensure that patients who have been coming to us at Ninth and Colorado remember that we will be relocating soon,” said Schroffel. “We’re especially concerned that those who need us quickly, including women in labor or patients requiring emergency room attention, come to our new location after those services move.”

While University of Colorado Hospital is affiliated with the University of Colorado at Denver and Health Sciences Center, it is legally and financially separate and receives no general fund support from the state.

The hospital’s new home in Aurora — which it shares with the University of Colorado at Denver and Health Sciences Center’s research and educational programs — was renamed the Anschutz Medical Campus in November in recognition of $91 million of philanthropic support to UCDHSC and the University of Colorado Hospital from The Anschutz Foundation.

The University of Colorado Hospital is the Rocky Mountain region’s leading academic medical center, and has been recognized as one of the United States’ best hospitals, according to U.S. News & World Report. It is best known as an innovator in patient care and often as one of the first hospitals to bring new medicine to patients’ bedsides. Located in Denver and at the Anschutz Medical Campus in Aurora, Colo., the hospital is affiliated with the University of Colorado at Denver and Health Sciences Center, one of three universities in the University of Colorado system. For more information, visit the Web site at http://www.uch.edu/ or the UCDHSC Newsroom at http://www.uchsc.edu/news.

Contact: Sarah Ellis, (303) 724-1520, [email protected]

Kerry Dixon, (303) 672-4337, [email protected]

University of Colorado Hospital

CONTACT: Sarah Ellis, +1-303-724-1520, [email protected], or Kerry Dixon, +1-303-672-4337, [email protected], both for University of Colorado Hospital

Web site: http://www.uch.edu/

New York City Health and Hospitals Corporation Selects Canopy Care Management From Allscripts

CHICAGO, May 24 /PRNewswire-FirstCall/ — Allscripts, the leading provider of clinical software, connectivity and information solutions that physicians use to improve healthcare, today announced that New York City Health and Hospitals Corporation (HHC) has selected the Canopy(R) Care Management solution from Allscripts to help manage and coordinate care for patients throughout their stay at the hospital.

(Logo: http://www.newscom.com/cgi-bin/prnh/20061005/ALLSCRIPTSLOGO-b)

The largest municipal hospital and health care system in the country, HHC is a $5.4 billion public benefit corporation that serves 1.3 million New Yorkers and nearly 400,000 who are uninsured. HHC provides medical, mental health and substance abuse services through its 11 acute care hospitals, four skilled nursing facilities, six large diagnostic and treatment centers and more than 80 community-based clinics. Additionally, HHC Health and Home Care provides health services at home for New Yorkers.

Following a competitive selection process, HHC will deploy Canopy initially at four New York metropolitan sites, with an option to extend the solution to all 11 of its hospitals. The four sites, which together account for nearly 2 million annual patient visits, are: Elmhurst Hospital Center, a 556-bed facility in Elmhurst, NY; Queens Hospital Center, a 260-bed facility in Jamaica, NY; Woodhull Medical Center, a 371-bed facility in Brooklyn, NY; and Kings County Hospital Center, a 627-bed facility, also in Brooklyn.

Canopy streamlines and automates the care management process — utilization, case management, quality management and discharge — and monitors each patient’s length of stay and health status from admission to discharge. A web-based solution that leverages the Application Service Provider (ASP) model to minimize hardware and support requirements, Canopy can be installed and implemented in under 12 weeks and is available to end-users from any location with Internet access. By standardizing the patient care management process and enhancing the productivity of the care management team, Canopy has generated an immediate return-on-investment for more than 200 hospitals across the nation, including some of the nation’s most prestigious medical institutions.

“We’re pleased that Allscripts is the choice of the nation’s largest municipal hospital and healthcare system, and we look forward to helping HHC enhance its operations for the benefit of all New Yorkers,” said Glen Tullman, Chief Executive Officer of Allscripts. “Hospitals today need accurate, real-time information to provide the best possible care for their patients while more effectively managing their costs. Allscripts’ Canopy offering delivers that and more.”

HHC will interface Canopy with its other inpatient information systems, providing a real-time feed of essential patient clinical, financial and demographic information to inform the care management team, and will also have the option of providing information feeds to the Allscripts Electronic Health Records being used by ambulatory physicians across the state.

About Allscripts

Allscripts is the leading provider of clinical software, connectivity and information solutions that physicians use to improve healthcare. The Company’s business units provide unique solutions that inform, connect and transform healthcare. Allscripts award-winning software applications include Electronic Health Records, practice management, e-prescribing, document imaging, emergency department, and care management solutions, all offered through the Company’s Clinical Solutions units. Additionally, Allscripts provides clinical product education and connectivity solutions for physicians and patients through its Physicians Interactive(TM) unit, and medication fulfillment services through its Medication Services unit. To learn more, visit Allscripts on the Web at http://www.allscripts.com/.

This announcement may contain forward-looking statements about Allscripts Healthcare Solutions that involve risks and uncertainties. These statements are developed by combining currently available information with Allscripts beliefs and assumptions. Forward-looking statements do not guarantee future performance. Because Allscripts cannot predict all of the risks and uncertainties that may affect it, or control the ones it does predict, Allscripts’ actual results may be materially different from the results expressed in its forward-looking statements. For a more complete discussion of the risks, uncertainties and assumptions that may affect Allscripts, see the Company’s 2006 Annual Report on Form 10-K, available through the Web site maintained by the Securities and Exchange Commission at http://www.sec.gov/.

Photo: http://www.newscom.com/cgi-bin/prnh/20061005/ALLSCRIPTSLOGO-b
AP Archive: http://photoarchive.ap.org/
PRN Photo Desk, [email protected]

Allscripts

CONTACT: Dan Michelson, Chief Marketing Officer, +1-312-506-1217, [email protected], or Todd Stein, Senior Manager, Public Relations, +1-312-506-1216, [email protected], both of Allscripts

Web site: http://www.allscripts.com/

SmartCare Family Medical Centers Now Accepting Multiple Insurance Plans

ENGLEWOOD, Colo., May 23 /PRNewswire/ — SmartCare Family Medical Centers, a rapidly growing operator of retail-based, walk-in medical clinics, now has preferred provider agreements with more than 30 health insurance carriers, allowing more SmartCare guests to receive treatment for the cost of their insurance co-pay. SmartCare continues to ink agreements with additional insurance providers.

Insurance carriers that currently have agreements in place with SmartCare include:

   *  Aetna
   *  Ameriplan
   *  Anthem Blue Cross and Blue Shield
   *  Beech Street
   *  Casualty Management Network
   *  Choice Care
   *  First Health
   *  Great-West
   *  HealthStar
   *  HMS
   *  Humana
   *  MasterCare
   *  MedAvant HCS
   *  Medical Control
   *  Medicare
   *  MMA
   *  MULTIPLAN
   *  National Provider Network
   *  NextCare Management
   *  NPPN
   *  PacifiCare
   *  PHCS
   *  Plan Vista Solutions
   *  PPO NEXT
   *  Preferred HealthNetwork
   *  Secure Horizons
   *  Sloans Lake
   *  Sterling
   *  Three Rivers Provider Network
   *  UnitedHealthcare (UHC)
   *  US HealthCare

“These agreements — coupled with our extended hours during evenings and weekends, with no appointment necessary — further underscore SmartCare’s commitment to providing consumers convenient access to quality healthcare for everyday needs,” said Lawrence W. Hay, chief executive officer of SmartCare. “They represent more than 90 percent of covered lives on the Colorado Front Range that now have even more affordable and convenient access to minor and preventive care at SmartCare Centers.”

SmartCare Centers offer convenient, quality care for common ailments such as sore throats, ear infections and seasonal allergies, as well as basic health services including flu shots and other vaccines, school and employment physicals and cholesterol screenings. Each SmartCare Center is staffed by nurse practitioners and certified medical assistants, with an on-call physician.

About SmartCare Family Medical Centers

Denver-based SmartCare Family Medical Centers operates retail-based, walk-in medical clinics for everyday family healthcare needs. Formed in 2004, the company aims to revolutionize the experience and delivery of healthcare by being the premier provider of specific-scope healthcare offering convenient, quality care from dedicated health professionals. SmartCare’s practitioners provide treatment for minor acute illnesses, blood tests, physicals and screenings, while emphasizing wellness and guest education. No appointment is necessary; centers are open extended hours — including evenings and weekends. SmartCare accepts most health insurance plans and self-pay. For more information, visit http://www.smartcarecenters.com/

SmartCare Family Medical Centers

CONTACT: Amy Hudson, +1-303-417-6303, [email protected], for SmartCareFamily Medical Centers

Web site: http://www.smartcarecenters.com/

A Life Remembered: Pediatrician Remembered As an Advocate for Abused Children

By Terry Rindfleisch, La Crosse Tribune, Wis.

May 23 – Dr. Kenneth Kolb Jr. looked out for and cared for abused and neglected children.

Kolb, who died Monday at the age of 49, was a Gundersen Lutheran pediatrician who was a passionate child advocate.

“He really cared very, very deeply about children,” said Dr. Ann Budzak, a Gundersen Lutheran pediatrician. “He believed they deserved to be safe, and we as adults should be advocates for them because they can’t do it for themselves.”

Kolb was instrumental in developing Stepping Stones, La Crosse’s first child advocacy center, which opened in 2005 at Family and Children’s Center. He served on the advocacy center’s board and was a member of Gundersen Lutheran’s child maltreatment team.

“He always made himself available at the drop of a hat to see these kids,” Budzak said.

Budzak said she appreciated Kolb as a colleague who was willing to share his ideas and thoughts about a patient. “He also was a funny guy, fun to be around,” she said.

Kolb, who with his wife, Darci, had four children and two stepchildren, died at Gundersen Lutheran after a three-year battle with melanoma, the most serious form of skin cancer. In the summer of 2004, a spot appeared on his cheek, and he had it checked by a dermatologist, who diagnosed the melanoma.

Each time the cancer was surgically removed, it returned. He participated in an experimental vaccine trial, but the cancer had spread to his lungs and spine. Kolb also tried an experimental gene therapy treatment at the National Institutes of Health, but the cancer spread to the brain.

Kolb was compassionate, honest and committed to his work, said Dr. Steve Manson, a Gundersen Lutheran pediatrician.

“He dedicated himself to every patient, but he especially wanted to help kids who were victims of physical and sexual abuse,” Manson said. “He had a big heart.”

Dr. Rajiv Naik, chairman of Gundersen Lutheran’s pediatrics department, said Kolb always saw extra patients in his day.

“I’ll remember his positive attitude, his willingness to help his colleagues and put the patient first,” Naik said.

Kolb, who joined Gundersen Lutheran in 1997, was a gentle, kind and quiet man who cared about his work and his patients, said Dr. Richard Strauss, another Gundersen pediatrician.

Dr. Jeff Thompson, Gundersen Lutheran CEO, issued a statement: “We feel extremely fortunate to have had Dr. Kolb on our staff, although his stay with us was far too brief. His dedication to bettering the health and lives of children was exemplary.”

The Rev. Dave Bersagel, pastor at Our Savior’s Lutheran Church in West Salem, Wis., said helping abused children was Kolb’s passion.

“I’ll remember him as a deeply caring person with humility,” Bersagel said. “He packed a lot of living in his life.”

Terry Rindfleisch can be reached at [email protected], or (608) 791-8227.


Copyright (c) 2007, La Crosse Tribune, Wis.

Distributed by McClatchy-Tribune Information Services.
