y=mx or y=mx+b
Discussions about HPLC, CE, TLC, SFC, and other "liquid phase" separation techniques.
- mtnshawn
- Posts: 30
- Joined: Fri Oct 01, 2004 1:44 pm
by mtnshawn » Wed May 25, 2005 5:08 pm
Fellow chromatographers-
When evaluating recoveries (off of a standard curve) at or near the LOQ for a method, when I use y=mx recoveries are approximately 88-90%. When I use y=mx+b recoveries are approximately 97-99%.
This is a pharma method and I have never had to use y=mx+b before.
The method analyzes an API degradant (small molecule) @ 210 nm. I assume that the better recoveries with y=mx+b are a function of noise due to the low wavelength.
Question(s):
Is the use of y=mx+b an acceptable practice?
Are there literature references (I have looked on the 'net') to support the use of y=mx+b?
If I had to justify the use of y=mx+b in the method to the FDA would they laugh at me?
Thanks in advance for any/all comments.
Shawn
- Uwe Neue
- Posts: 2916
- Joined: Mon Aug 30, 2004 10:19 pm
by Uwe Neue » Thu May 26, 2005 3:06 am
I am not an expert on this, so take this only as my opinion.
It appears to me that the issue is related to the peak start and peak end parameters being set too high above the noise. This cuts off part of your integration, and the effect gets stronger as peak heights approach the noise level.
I am sure that your colleagues deal with this all the time and will give you good advice on how to fix it.
- Alex Buske
- Posts: 239
- Joined: Tue Nov 09, 2004 3:06 pm
by Alex Buske » Thu May 26, 2005 7:07 am
What does the method say? If it utilises a one point calibration or y=mx type curve, then a y=mx calibration has to be used in validation.
As was said, check the integration and look at the linearity. Is there an offset in the linearity curve?
While it doesn't make much sense, people often compare peak areas from injections with and without matrix/placebo.
Alex
- adam
- Posts: 366
- Joined: Wed Feb 02, 2005 9:34 pm
by adam » Thu May 26, 2005 4:53 pm
First of all nobody will laugh at you. Both of these equations are perfectly valid. It is a matter of which one better models your curve.
This is really a validation issue. One would normally validate linearity by running a 4- or 5-point curve and, at the same time, evaluating the intercept. If the intercept is small you can either use a single-point calibration or a multipoint calibration with y = mx. You could also use the y = mx + b form, but the y = mx form (or single point) is preferable if you can justify it.
The reason is a long story, but basically the 0,0 point is known with perfect confidence, and by fixing it you drastically reduce the well-known effect of the high-level points throwing off the curve at the low end. This issue is especially problematic with related substances assays, where a component at a high level is used to quantitate peaks at low levels.
Whatever you choose I agree that it should be specified in the method.
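[Editor's note: the effect Adam describes — forcing the fit through the origin letting high-level points bias recovery at the low end — can be sketched numerically. The concentrations span the working range mentioned later in the thread, but the slope, offset, and areas below are made-up illustrative numbers, not Shawn's data; assumes numpy.]

```python
import numpy as np

# Hypothetical 5-point curve with a small negative offset (b = -4),
# loosely mimicking the situation in this thread; NOT Shawn's numbers.
x = np.array([0.004, 0.008, 0.016, 0.024, 0.040])   # conc, mg/mL
y = 10000.0 * x - 4.0                                # peak areas

# y = mx: least squares forced through the origin
m0 = np.sum(x * y) / np.sum(x * x)

# y = mx + b: ordinary least squares
m1, b1 = np.polyfit(x, y, 1)

# Back-calculate the lowest standard with each model
rec_forced = 100 * (y[0] / m0) / x[0]
rec_free = 100 * ((y[0] - b1) / m1) / x[0]
print(f"y=mx:   {rec_forced:.1f}% recovery at the low standard")
print(f"y=mx+b: {rec_free:.1f}% recovery at the low standard")
```

With these made-up numbers the forced-through-origin fit under-recovers the lowest standard by roughly 10% while the intercept model recovers it exactly — the same pattern Shawn reports.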
- mtnshawn
- Posts: 30
- Joined: Fri Oct 01, 2004 1:44 pm
by mtnshawn » Thu May 26, 2005 5:45 pm
Thanks for the input thus far. Some answers to your questions and some more information.
Uwe- The peak itself is well resolved and has nice symmetry (>0.95). When I overintegrate (try to pull a little more area out from the peak) the result is negligible.
Alex- the method is currently under development. Linearity is >0.999 (expressed as R2) when regressed with or without incorporating the origin (0,0). There isn't an inflection ("+" or "-") in the placebo at the elution time for this component. I don't quite understand what you mean by an offset in the curve.
Adam-thanks for not laughing too hard at me. I am preparing to send this method to a GMP laboratory for validation. So I am performing a method qualification. I am running a 5 point curve. The y-intercept is -4.
More information
1. y=mx provides ~ 90% recoveries throughout the working range and not just at the LOQ.
2. y = mx+b provides >97% recoveries throughout the working range of the method (0.004 mg/mL to 0.04 mg/mL).
Another Q.
How do I justify the use of y=mx+b?
Thanks again
Shawn
- DR
- Posts: 2082
- Joined: Tue Aug 17, 2004 7:59 pm
- Location: 38.590967, -90.213236
by DR » Thu May 26, 2005 7:05 pm
Practically speaking, to use y=mx you would have to demonstrate linearity down to nearly 0. In real linearity data, it is typical to see that r² for y=mx+b > r² for y=mx (this is your justification for using the +b). This is a function of the trend line skewing away from the origin, which is typical of LC assays (or any other technique that exploits Beer's law, as far as I know). I'm not sure why this is, but I would guess that it has to do with noise interfering with low-level responses in a significant way and adding to the response throughout the linear portion of the response curve (albeit in an insignificant manner).
Thanks,
DR
- adam
- Posts: 366
- Joined: Wed Feb 02, 2005 9:34 pm
by adam » Thu May 26, 2005 7:11 pm
Your original post says that you're analyzing a degradation product. It seems clear, given everything you've said, that you are using an external standard.
Is the standard concentration much higher than the concentration of your degradant? I am only guessing this, since there is such a big difference between the two methods of calibration.
If there is a big difference between the standard concentration and the analyte concentration, one way to minimize these problems is to dilute the standard down.
At any rate, you don't need any justification to use the y = mx + b approach. Just pick what works best, then prove that you can pass all the validation criteria using that approach.
- Daren
- Posts: 17
- Joined: Tue Apr 26, 2005 8:10 pm
by Daren » Thu May 26, 2005 8:42 pm
I have always found the best way to justify using y=mx+b is to calculate a 95% confidence interval for your 5-point curve. If 0,0 does not fall within the y-intercept confidence interval for your curve, then you have justified the need to use y=mx+b for accurate quantitation at those low levels.
- Daren
- Posts: 17
- Joined: Tue Apr 26, 2005 8:10 pm
by Daren » Thu May 26, 2005 9:51 pm
Just to add to/clarify my previous post: what you're really doing is trying to justify the use of y=mx. So you start out using y=mx+b to create your curve, take a 95% confidence interval on that curve, and then if 0,0 falls within the y-intercept C.I. you have justified being able to use y=mx. So I always approach it the other way around: start out using y=mx+b and then see if I can justify a single-point fit forced through zero.
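[Editor's note: the intercept confidence interval Daren describes can also be computed directly from the standard OLS formulas, without a stats package. A minimal sketch with made-up data (not Shawn's), assuming numpy; the t critical value is hard-coded for df = 3.]

```python
import numpy as np

# Hypothetical 5-point curve; illustrative numbers only.
x = np.array([0.004, 0.008, 0.016, 0.024, 0.040])   # conc, mg/mL
y = np.array([36.2, 76.1, 155.8, 236.3, 396.1])     # peak areas

n = len(x)
m, b = np.polyfit(x, y, 1)                  # ordinary least squares
resid = y - (m * x + b)
s = np.sqrt(np.sum(resid**2) / (n - 2))     # residual standard error
sxx = np.sum((x - x.mean())**2)
se_b = s * np.sqrt(np.sum(x**2) / (n * sxx))  # std. error of the intercept

t = 3.182                                    # two-sided 95% t, df = n - 2 = 3
lo, hi = b - t * se_b, b + t * se_b
print(f"intercept = {b:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
print("origin inside CI -> y=mx defensible" if lo <= 0 <= hi
      else "origin outside CI -> keep y=mx+b")
```

If the CI excludes zero (as it does for these made-up numbers, where the intercept sits near -4), the y=mx+b model is justified; if it brackets zero, y=mx is defensible — exactly the decision rule Daren outlines.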
- mtnshawn
- Posts: 30
- Joined: Fri Oct 01, 2004 1:44 pm
by mtnshawn » Thu May 26, 2005 10:45 pm
Thanks again to all who have responded.
Daren-You have hit the nail on the head for what I was fishing for. A mathematical/scientific means by which I could justify determining my concentration using y=mx+b instead of y=mx!
I have never calc'd CI for a standard curve. Could you lend some insight on that?
If you wish, my email is
scook@rxkinetix.com
- Daren
- Posts: 17
- Joined: Tue Apr 26, 2005 8:10 pm
by Daren » Fri May 27, 2005 1:02 am
Hi Shawn,
I'm glad I could help. There are various software packages that can calculate the confidence interval for you. You can plot your curve in Excel, do the linear regression analysis, and then use the Data Analysis tool (I believe that is its name), which has confidence interval as an option; it should automatically set it to 0.05, which is for 95%. Your version of Excel will have to have the statistical package, though; most of my employers and schools have had it, but none of my personal PCs have. In addition to Excel, any statistics software can do it for you; I have used JMP for this purpose as well.
- JI2002
- Posts: 366
- Joined: Fri Oct 22, 2004 7:14 pm
- Location: MA
by JI2002 » Fri May 27, 2005 2:33 am
Shawn,
From a statistical point of view, looking at the data you presented, you don't need to justify why you use y = mx + b; you need to justify why you want to use y = mx, because the recoveries are better across the working range with the y = mx + b model. I agree with the other members that you should use the model that works best. Also, although I haven't done this before, you need replicate data for at least some of the concentrations in order to calculate a confidence interval for the curve.
From a chemistry standpoint, it's interesting to see a set of data like this. Usually with the y = mx model you get a curve biased high or low either at the high concentrations or at the low concentrations, but not across the whole linear range. I'm curious to know what the x-intercept is for the y = mx + b model, and if you inject a std at that concentration, what is the response? No response at all?
- HW Mueller
- Posts: 2846
- Joined: Mon Aug 30, 2004 7:17 am
by HW Mueller » Fri May 27, 2005 6:47 am
What's the fuss here? As students we learned that y = mx+b is the "slope-intercept" form of a straight line: m is the slope, b the intercept (with the y-axis, i.e., the point where the line crosses the y-axis, or the value at x = 0). Now, if the intercept is zero (the line goes through the origin of the plot), b = 0 and you have y = mx. If the curve does not go through the origin, then b does not equal 0. If you then use y = mx you are fudging (in that case with the connotation of cheating).
One should always inject a blank. If that comes out as 0 (we used to say "within the error...") but your curve does not pass through the origin, you do indeed have a problem.
- DR
- Posts: 2082
- Joined: Tue Aug 17, 2004 7:59 pm
- Location: 38.590967, -90.213236
by DR » Fri May 27, 2005 2:18 pm
re:
Daren wrote:
Hi Shawn, Your version of Excel will have to have the statistical package, though; most of my employers and schools have had it, but none of my personal PCs have. In addition to Excel, any statistics software can do it for you; I have used JMP for this purpose as well.
They pretty well all have it; you just may have to hit Tools > Add-Ins and make sure the Analysis ToolPak is among the selected items (by default, it is not).
Thanks,
DR
- Ron
- Posts: 586
- Joined: Mon Oct 04, 2004 3:00 am
by Ron » Wed Jun 01, 2005 11:57 pm
If your data system allows it, you might want to try weighting the regression points. With a weighted calibration you will in many cases get more accurate concentration values for data points near the origin, especially if the upper calibration points are significantly higher in concentration. Weighting helps to minimize the problem pointed out by Adam in his posting.
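[Editor's note: Ron's weighting suggestion can be sketched with numpy's polyfit, which accepts a weight vector applied to the unsquared residuals — so w = 1/x yields the common 1/x² weighting. The data are made-up illustrative numbers, not from this thread.]

```python
import numpy as np

# Hypothetical curve whose high points dominate an unweighted fit.
x = np.array([0.004, 0.008, 0.016, 0.024, 0.040])   # conc, mg/mL
y = np.array([36.2, 76.1, 155.8, 236.3, 396.1])     # peak areas

# Unweighted fit: the sum of squares is dominated by the high standards
m_u, b_u = np.polyfit(x, y, 1)

# 1/x^2 weighted fit: polyfit minimizes sum(w_i^2 * resid_i^2),
# so passing w = 1/x gives the usual 1/x^2 weighting
m_w, b_w = np.polyfit(x, y, 1, w=1.0 / x)

# Back-calculate the lowest standard with each fit
rec_u = 100 * ((y[0] - b_u) / m_u) / x[0]
rec_w = 100 * ((y[0] - b_w) / m_w) / x[0]
print(f"unweighted: {rec_u:.1f}%   1/x^2 weighted: {rec_w:.1f}%")
```

The weighted fit pulls the line toward the low-concentration standards, which is why weighting often improves accuracy near the LOQ when the top of the curve is ten times the bottom, as in Shawn's 0.004-0.04 mg/mL range.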