Algorithms, Inertia, and Race as a Residual Category
FDA approval for a particular diagnostic test for warfarin response affirms the test's clinical validity. Such technical ability to test for a genetic variation is one thing, but establishing a genetic test's clinical utility (i.e., its effectiveness in real-life clinical settings) is another. For the tests to be worthwhile, a clinician needs to figure out how to use the information they produce; this is where algorithms come in. A dosing algorithm for warfarin (or any drug) compiles diverse data relating to factors influencing drug response, assigns values to different factors, and applies them to compute an estimated optimal dosage for any given patient. Weight and age are common factors. For example, a dose of two 200mg tablets of ibuprofen might be recommended for an average adult, whereas the recommendation for a 2-3-year-old child weighing between 24 and 35 pounds would be one 100mg dose. In between, the recommendation for an 11-year-old weighing between 72 and 95 pounds would be 300mg.
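The logic of such a table-driven dosing rule can be sketched in a few lines of code. The weight bands and doses below come from the ibuprofen example above; the function name and structure are illustrative conveniences for showing how an algorithm maps patient factors to a dose, not a clinical tool.

```python
# Minimal sketch of a table-driven dosing rule like the ibuprofen
# example in the text. Bands and doses mirror the text; everything
# else is an illustrative assumption, not clinical guidance.

# (min_lbs, max_lbs, dose_mg) -- weight bands from the text
IBUPROFEN_BANDS = [
    (24, 35, 100),   # 2-3-year-old child
    (72, 95, 300),   # 11-year-old
]

ADULT_DOSE_MG = 400  # two 200 mg tablets

def ibuprofen_dose_mg(weight_lbs, is_adult=False):
    """Return a recommended single dose in mg, or None if no band applies."""
    if is_adult:
        return ADULT_DOSE_MG
    for lo, hi, dose in IBUPROFEN_BANDS:
        if lo <= weight_lbs <= hi:
            return dose
    return None  # outside the tabulated bands

print(ibuprofen_dose_mg(30))                # 24-35 lb band -> 100
print(ibuprofen_dose_mg(80))                # 72-95 lb band -> 300
print(ibuprofen_dose_mg(0, is_adult=True))  # adult -> 400
```

Warfarin algorithms follow the same basic pattern of mapping patient characteristics to a dose, but with many more interacting inputs.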
Warfarin dosing is far more complex because of the high inter-individual variability of response and the severe consequences of over- or under-dosing. Since the FDA announced its labeling change, much attention has been devoted to developing dosing algorithms that incorporate new genetic information regarding the CYP2C9 and VKORC1 polymorphisms. Such algorithms are designed to tell doctors what they should do with the data produced by diagnostic genetic tests.
Prominent among recent genetic dosing algorithms is one constructed by a group called the International Warfarin Pharmacogenetics Consortium (IWPC), which was published in February 2009 in the New England Journal of Medicine. The IWPC comprises eminent biomedical researchers and institutions from around the world and is based in Stanford, California, under the auspices of the PharmGKB. To develop a genetic dosing algorithm, it gathered warfarin dose-response information from 21 sites in nine countries for inclusion in the study. The study was not the first to show the advantage of incorporating genetic information into prescribing patterns, but it was by far the largest and most inclusive to date, with information on more than 5,000 patients.
The study was based on the perceived need to develop a dosing algorithm using both genetic and clinical data gathered from a “diverse and large population.” In this context, “diverse” included the concepts of race or ethnicity. From the outset, the study's use of these concepts was problematic. As stated in the “Methods” section, “Information on race or ethnic group was reported by the patient or determined by the local investigator.” On the one hand, this reflects the simple reality of using data collected from numerous sites across the globe. On the other hand, both self-report and external ascription of race raise significant issues in a biomedical context. As bioethicists Mildred Cho and Pamela Sankar note, “Individual self-classification is not stable; for example, one US study found that one-third of people change their own self-identified race or ethnicity in two consecutive years.” Complicating matters still further, a study by Condit et al. found people often have very incomplete knowledge of their biological ancestry. Of a sample of 224 subjects interviewed for a study on attitudes toward race-based pharmacogenomics (the tailoring of drugs to genetic profiles), Condit et al. found that 39.6% did not know all four of their biological grandparents. In such situations, self-declared race may fail to capture significant variation in biological ancestry.
Such problems are only compounded in situations where race is externally ascribed by a local investigator. Robert Hahn, for example, has conducted a series of studies in the field of public health showing how external ascriptions of race are often unreliable and may also change over time. Despite these issues, the one sentence quoted above constitutes the entirety of the study's consideration of how it used race. This contrasts starkly with the detailed elaboration of “Genotype Quality Controls” and “Statistical Analysis” in the remaining two pages of the same “Methods” section.
To compose its algorithm, the IWPC researchers calculated warfarin dosing three ways: (1) based on standard clinical data; (2) based on clinical data and genetic variations; and (3) using fixed daily doses. In developing dosing algorithms for each patient, researchers genotyped three VKORC1 alleles and six CYP2C9 alleles. They also collected clinical data on such characteristics as age, height, weight, use of certain other drugs, and, of course, race. Then they compared how closely their computational predictions matched the actual, clinically derived stable warfarin dosage for each patient. This was, therefore, a retrospective study and did not measure prospectively whether the use of a genetic dosing algorithm actually reduced adverse events. The study concluded,
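The retrospective comparison just described can be sketched as follows: each method produces a predicted dose per patient, which is scored against that patient's actual, clinically derived stable dose. The scoring rule below (mean absolute error and the fraction of predictions within 20% of the actual dose) and all the numbers are assumptions for illustration, not figures from the study.

```python
# Sketch of a retrospective dose-prediction comparison. The scoring
# metrics and toy data are illustrative assumptions.

def score_predictions(predicted, actual):
    """Compare predicted weekly doses (mg) against observed stable doses (mg).

    Returns (mean absolute error, fraction within 20% of actual dose).
    """
    n = len(actual)
    mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / n
    within_20pct = sum(abs(p - a) <= 0.2 * a
                       for p, a in zip(predicted, actual)) / n
    return mae, within_20pct

# Toy data: actual stable weekly doses and two sets of predictions
actual   = [21.0, 35.0, 49.0, 28.0]
clinical = [30.0, 34.0, 38.0, 29.0]   # clinical-only algorithm
genetic  = [23.0, 34.0, 46.0, 27.5]   # clinical + genotype algorithm

print(score_predictions(clinical, actual))  # (5.5, 0.5)
print(score_predictions(genetic, actual))   # (1.625, 1.0)
```

In a sketch like this, the pharmacogenetic predictions score better precisely for the low- and high-dose patients at the ends of the range, which is where the IWPC reported the greatest benefit.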
The use of a pharmacogenetic algorithm for estimating the appropriate initial dose of warfarin produces recommendations that are significantly closer to the required stable therapeutic dose than those derived from a clinical algorithm or a fixed-dose approach. The greatest benefits were observed in the 46.2% of the population that required 21 mg or less of warfarin per week or 49 mg or more per week for therapeutic anticoagulation.
Thus, while successful as proof of concept showing the possible utility of the genetic dosing algorithm, the “greatest benefits” accrued only to the roughly half of the patient population who were outliers at the high and low ends of warfarin dosing.
As for race, at one point the study presented a figure comparing “the predicted doses according to representative clinical or demographic characteristics, genotype combinations, race, and use or nonuse of amiodarone (an important interacting drug).”
It concluded that the data “suggest that most of the racial differences in dose requirements are explained by genotype.” The text accompanying the figure itself states that “racial differences in the estimated doses become insignificant when genetic information is added to the model.” On page 15 of the supplemental materials, the study calculated the percentage of variance in dose explained by race (R²). In the study's own terms, race accounted for 14.2% of variation when it was the only variable in the model. This would comport with the notion that race serves as a potentially useful surrogate when specific genetic variables are unknown. When pharmacogenetic data were added to the model, however, the contribution of race dropped from 14.2% down to 0.3%, i.e., almost nothing.
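The variance-explained comparison can be illustrated with a toy simulation: when a race indicator is correlated with genotype, race alone appears to explain a substantial share of dose variance (R²), but its incremental contribution collapses once genotype enters the model. The data, variable names, and effect sizes below are invented for illustration and do not reproduce the study's figures.

```python
# Toy illustration of race as a surrogate: large R^2 alone,
# negligible incremental R^2 once genotype is in the model.
# All data are simulated; nothing here is from the IWPC study.
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 500
genotype = rng.integers(0, 3, n)         # variant allele count, 0-2
race = (genotype >= 1).astype(float)     # toy: race correlates with genotype
dose = 40 - 8 * genotype + rng.normal(0, 4, n)  # dose driven by genotype only

r2_race_only = r_squared(race.reshape(-1, 1), dose)
r2_genotype = r_squared(genotype.reshape(-1, 1), dose)
r2_both = r_squared(np.column_stack([genotype, race]), dose)

print(f"race alone: {r2_race_only:.3f}")
print(f"incremental contribution of race: {r2_both - r2_genotype:.3f}")
```

Because dose here is driven entirely by genotype, race "explains" variance only insofar as it tracks genotype; once genotype is measured directly, the race coefficient mops up little beyond noise, mirroring the 14.2%-to-0.3% collapse reported in the study's supplement.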
It would seem, then, that with this algorithm warfarin dosing had reached the promised land of truly individualized pharmacogenomic practice. In the study, specific genetic information was cast as rendering race “insignificant,” just as advocates of using race as an interim proxy said it would. Yet, when we turn to the supplemental material provided with the study and examine the actual dosing algorithm used (and recommended for further use), there we find race, still a prominent factor to be used by every doctor in every dosing calculation, apparently regardless of the fact that it is “insignificant”: [Table 1 Omitted]
* * * The output of this algorithm must be squared to compute weekly dose in mg.
All references to VKORC1 refer to genotype for rs9923231. The accompanying “legend for the use of algorithm” further specifies: “Asian Race = 1 if self-reported race is Asian, otherwise zero; Black/African American = 1 if self-reported race is Black or African American, otherwise zero; Missing or Mixed race = 1 if self-reported race is unspecified or mixed, otherwise zero.”
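The structure of such a dosing equation, though not its actual numbers, can be sketched as follows: a linear predictor with 0/1 indicator variables for self-reported race (as in the quoted legend), whose output is squared to yield the weekly dose in mg (per the table footnote). Because Table 1 is omitted here, every coefficient below is a hypothetical placeholder; only the structure is meant to reflect the published algorithm.

```python
# Structural sketch of an IWPC-style dosing equation. The
# coefficients are invented placeholders (Table 1 is omitted from
# the text); only the shape of the calculation is illustrated.

COEF = {  # hypothetical values, NOT the published coefficients
    "intercept": 5.6,
    "age_decades": -0.26,
    "height_cm": 0.009,
    "weight_kg": 0.013,
    "vkorc1_ag": -0.87,        # heterozygous for rs9923231 variant
    "vkorc1_aa": -1.70,        # homozygous for rs9923231 variant
    "asian": -0.11,            # 1 if self-reported race is Asian, else 0
    "black": -0.28,            # 1 if Black/African American, else 0
    "mixed_or_missing": -0.10, # 1 if unspecified or mixed, else 0
}

def weekly_dose_mg(age_decades, height_cm, weight_kg,
                   vkorc1_ag=0, vkorc1_aa=0,
                   asian=0, black=0, mixed_or_missing=0):
    """Linear predictor, squared per the table footnote, -> weekly mg."""
    lin = (COEF["intercept"]
           + COEF["age_decades"] * age_decades
           + COEF["height_cm"] * height_cm
           + COEF["weight_kg"] * weight_kg
           + COEF["vkorc1_ag"] * vkorc1_ag
           + COEF["vkorc1_aa"] * vkorc1_aa
           + COEF["asian"] * asian
           + COEF["black"] * black
           + COEF["mixed_or_missing"] * mixed_or_missing)
    return lin ** 2

# Two patients identical in every input except self-reported race:
dose_white = weekly_dose_mg(5, 170, 70, vkorc1_ag=1)          # all indicators 0
dose_black = weekly_dose_mg(5, 170, 70, vkorc1_ag=1, black=1)
```

Note the asymmetry the structure builds in: a patient for whom all three race indicators are zero is implicitly treated as white, and the race terms shift every other patient's dose away from that default.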
Most immediately striking here is the straightforward use of whiteness as an unmarked norm in biomedical research. The IWPC algorithm is thus typical, and yet stunning in that it appeared at the end of a decade of heated discussion concerning the proper use of race in biomedical contexts--discussion that directly critiqued the use of “white” as the unmarked standard from which all other races are cast as deviating. The algorithm makes a person present as a racialized subject only if he or she is not white. Here white people do not explicitly possess race--that is, their race is tacit and does not formally come into play in calculating dosage. They are the norm against which the race of other subjects is made to matter. Thus, “black” and “Asian” dosages are calculated as deviations from the unstated white norm.
It is also curious to note that the algorithm lumps together “mixed race” and “missing race.” Having mixed race is here made the equivalent of absent race. In both cases a sort of statistical guess is being made to encompass the unknown. Through this association, the algorithm renders mixed race as mysterious and uncertain, a category without clear boundaries that is treated as the equivalent of absence in order to contain any challenge it might pose to the model. Yet how can this characterization of mixed race as a distinct category deal with estimates that anywhere between 30 and 70% of self-identified African Americans have some white relatives in their ancestral history, or that a significant proportion of white-identified people have some multi-racial background? Many of these people would likely register as either “black” or “white.” Thus, a self-identified black person and a self-identified white person, each with “mixed ancestry,” would likely register as black or white, respectively, even though both could also register under the same “mixed race” category. Moreover, once a value is assigned, the algorithm multiplies it by different amounts depending again on racial identification. Thus, a person of mixed African and European ancestry could be counted three different ways depending on whether they self-identified as “White,” “Black,” or “Mixed.” This would be the case for everybody's mixed-race person of the moment, President Barack Obama. Alternatively, three siblings, each with the same proportion of “mixed” ancestry, could be counted differently by this algorithm, again depending on how they self-identified. Perhaps more significantly, this model assumes some sort of “pure” notion of races as biologically bounded and distinct, perhaps capable of “mixture” but with such mixtures ultimately separable into preexisting, purportedly pure kinds, thus reinforcing genetically essentialist conceptions of races as biologically distinct entities.
A similar dynamic is evident in a separate dosing algorithm published a year earlier by a team of researchers led by Brian Gage, of Vanderbilt University, a prominent warfarin researcher and member of the IWPC (as were several of the other researchers on the study). Declaring warfarin to be the “ideal drug to test the hypothesis that pharmacogenetics can reduce drug toxicity,” this study contained a single race variable, specified as “African-American race,” which ultimately accounted for 0.4% of variation in response to warfarin--almost the exact same (negligible) amount as in the IWPC algorithm. With these dosing algorithms, we have a formal institutionalization of the imperative to render ambiguous racial identifiers as genetically bounded and fixed. In this particular promised land of genetically guided dosing, race did not wither away; to the contrary, it persisted--prominently and structurally.
In both the Gage and the IWPC algorithms, race appears to be operating as a sort of residual category that has persisted through inertia. Race had long been a part of considering how individuals respond to warfarin. It was generally understood to be a surrogate for underlying genetic variation and was therefore included as a variable in numerous pre-genomic studies. The use of race thus became a standardized norm in warfarin research. Once instantiated as normative practice, race simply appears to have persisted, almost reflexively, even in the face of data showing its minimal impact on dose response variation. The inertial power of race was so strong that it persisted in the IWPC algorithm even alongside explicit assertions of its insignificance. To the extent that race captured any substantive information (that is, to the extent that the 0.3-0.4% variation could be deemed significant), race must be understood as a sort of catch-all category encompassing a residuum of variation for which the researchers had no specific explanation. This brings us back to the observation by Yen-Revello, Aumen, and McCleod that race remains an appealing surrogate where “drug-genotype interactions remain unknown.” Apparently, however, race remains appealing even after drug-genotype interactions are discovered. This is because genotype will never explain all variation in drug response. There will always be some aspects of inter-individual variation in drug response that remain “unknown.” As long as the inertial force of race maintains it as a normative category in biomedical research, it may be possible for researchers to associate race with whatever residuum of unexplained variation they find in their studies.