
Monday, June 2, 2014

Differential Equations In A Nutshell

A derivative is the rate of change of one quantity with respect to another; for example, the rate at which an object's velocity changes with respect to time (compare the slope of a line). Such rates of change show up frequently in everyday life. For example, the compound interest law states that the rate of interest accumulation is proportional to the current account value, given by dV(t)/dt = rV(t) with V(0) = P, where P is the initial (principal) account value, V(t), a function of time, is the current account value (on which interest is continuously assessed), and r is the interest rate. (Here dt is an instantaneous time interval, dV(t) is the infinitesimal amount by which V(t) changes in this time, and their quotient is the accumulation rate.) Credit card interest is typically compounded daily and described by the APR, or annual percentage rate, but this differential equation can be solved to give the continuous solution V(t) = Pe^(rt), which closely approximates daily compounding.
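As a quick numerical check, here is a minimal Python sketch (the principal, rate, and time span are made-up illustrative values) comparing daily compounding with the continuous solution V(t) = Pe^(rt):

    import math

    P = 1000.0  # principal (illustrative value)
    r = 0.05    # annual interest rate (illustrative value)
    t = 3.0     # time in years

    # Daily compounding, as a credit card might apply it
    daily = P * (1 + r / 365) ** (365 * t)

    # Continuous compounding: the solution of dV/dt = r*V with V(0) = P
    continuous = P * math.exp(r * t)

    print(f"daily: {daily:.2f}  continuous: {continuous:.2f}")
    # daily: 1161.82  continuous: 1161.83 -- the continuous model is a
    # very close approximation of daily compounding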




1) Define derivative. Derivative (also called the differential coefficient, especially in British usage) - the limit of the ratio of the increment of a function (generally y) to the increment of a variable (generally x) in that function, as the latter tends to 0; the instantaneous rate of change of one quantity with respect to another, as velocity, which is the instantaneous rate of change of distance with respect to time. Compare first derivative and second derivative (a short code illustration follows this list):[1]
  • First derivative – the derivative of a function, example: "Velocity is the first derivative of distance with respect to time."
  • Second derivative – the derivative of the derivative of a function, example: "Acceleration is the second derivative of distance with respect to time."  
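To make the distance/velocity/acceleration example concrete, here is a small SymPy sketch (the position function s(t) = 5t² is an arbitrary choice for illustration):

    import sympy as sp

    t = sp.symbols('t')
    s = 5 * t**2  # position as a function of time (illustrative choice)

    velocity = sp.diff(s, t)         # first derivative of distance: 10*t
    acceleration = sp.diff(s, t, 2)  # second derivative of distance: 10

    print(velocity, acceleration)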

2) Know the order and degree of the differential equation. The order of a differential equation is the order of the highest derivative it contains; the degree is the power to which that highest-order derivative is raised. For example, the differential equation (d²y/dx²)³ + dy/dx = x is of second order, third degree.

3) Know the difference between a general, or complete, solution and a particular solution. A complete solution contains a number of arbitrary constants equal to the order of the equation. (To solve an nth-order differential equation, you have to perform n integrations, and each time you integrate, you have to introduce an arbitrary constant.) For example, in the compound interest law, the differential equation dy/dt = ky is of order 1, and its complete solution y = ce^(kt) has exactly 1 arbitrary constant. A particular solution is obtained by assigning particular values to the constants in the general solution.
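A short SymPy sketch makes the distinction concrete (the initial condition y(0) = 5 is an arbitrary choice for illustration):

    import sympy as sp

    t, k = sp.symbols('t k')
    y = sp.Function('y')
    ode = sp.Eq(y(t).diff(t), k * y(t))  # dy/dt = k*y

    # General solution: one arbitrary constant (C1) for a first-order equation
    print(sp.dsolve(ode, y(t)))                 # y(t) = C1*exp(k*t)

    # Particular solution: an initial condition fixes the constant
    print(sp.dsolve(ode, y(t), ics={y(0): 5}))  # y(t) = 5*exp(k*t)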

Sunday, May 29, 2011

A Quick Guide to the Least Squares Regression Method

Carl Friedrich Gauss's method of least squares is a standard approach to solving sets of equations in which there are more equations than unknowns. Least squares problems fall into two main categories: linear least squares (also known as ordinary least squares) and non-linear least squares. The linear least-squares problem arises in statistical regression analysis and has a closed-form solution. The non-linear problem has no closed-form solution and is usually solved by iterative refinement, where each iteration approximates the system by a linear one, so the core calculation is similar in both cases.
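For instance, here is a minimal NumPy sketch (with made-up numbers) solving an overdetermined linear system, three equations in two unknowns, in the least-squares sense:

    import numpy as np

    # Three equations, two unknowns: an overdetermined system A @ beta = b
    A = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0]])
    b = np.array([1.1, 1.9, 3.1])

    # Closed-form least-squares solution: minimizes ||A @ beta - b||^2
    beta, residual_ss, rank, sv = np.linalg.lstsq(A, b, rcond=None)
    print(beta)  # best-fit values of the two unknowns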

In a least squares calculation with unit weights, or in linear regression, the variance of the estimate of the jth parameter, denoted var(β̂_j), is usually estimated with

$$\operatorname{var}(\hat\beta_j) = \sigma^2\big([X^{\mathsf T}X]^{-1}\big)_{jj} \approx \frac{S}{m-n}\big([X^{\mathsf T}X]^{-1}\big)_{jj},$$

where S is the minimum value of the sum of squared residuals, m is the number of observations, and n is the number of fitted parameters.

Weighted least squares

When the errors are uncorrelated with each other and with the independent variables and have equal variance, the ordinary least-squares estimate β̂ is the best linear unbiased estimator (BLUE). If, however, the measurements are uncorrelated but have different uncertainties, a modified approach may be adopted: when a weighted sum of squared residuals, S = Σᵢ Wᵢᵢ rᵢ² with rᵢ = yᵢ − f(xᵢ, β), is minimized, the estimate β̂ is BLUE if each weight Wᵢᵢ is equal to the reciprocal of the variance of the ith measurement.


The gradient equations for this sum of squares are

$$-2\sum_i W_{ii}\, r_i\, \frac{\partial f(x_i, \boldsymbol\beta)}{\partial \beta_j} = 0, \qquad j = 1, \ldots, n,$$

which, in a linear least squares system, give the modified normal equations

$$\sum_{i=1}^{m}\sum_{k=1}^{n} X_{ij}\, W_{ii}\, X_{ik}\, \hat\beta_k = \sum_{i=1}^{m} X_{ij}\, W_{ii}\, y_i, \qquad j = 1, \ldots, n.$$

When the observational errors are uncorrelated and the weight matrix, W, is diagonal, these may be written in matrix form as

$$(X^{\mathsf T} W X)\,\hat{\boldsymbol\beta} = X^{\mathsf T} W \mathbf{y},$$

where X is the m × n design matrix and y is the vector of observed values.

For non-linear least squares systems a similar argument shows that the normal equations should be modified as follows:

$$(J^{\mathsf T} W J)\,\Delta\boldsymbol\beta = J^{\mathsf T} W\, \Delta\mathbf{y},$$

where J is the Jacobian matrix with elements J_ij = ∂f(xᵢ, β)/∂β_j, and the equations are solved iteratively for the parameter shift Δβ (as in the Gauss–Newton method).
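As an illustration, here is a minimal NumPy sketch (the data, model, and weights are all made up) that solves the weighted normal equations (XᵀWX)β̂ = XᵀWy for a straight-line fit:

    import numpy as np

    # Made-up data: y is roughly 2 + 3x, with a noisier last measurement
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([2.1, 5.0, 7.9, 11.2, 15.0])

    # Design matrix for the model y = a + b*x: a column of ones, a column of x
    X = np.column_stack([np.ones_like(x), x])

    # Diagonal weight matrix: each weight is the reciprocal of the
    # (assumed known) variance of that measurement
    variances = np.array([0.1, 0.1, 0.1, 0.1, 1.0])
    W = np.diag(1.0 / variances)

    # Solve the weighted normal equations (X^T W X) beta = X^T W y
    beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    print(beta_hat)  # approximately [2, 3], i.e. intercept a and slope b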

Least-squares regression is a method for finding a line that summarizes the relationship between two variables.

Regression Line: A straight line that describes how a response variable y changes as an explanatory variable x changes.

A residual is the difference between an observed y and the corresponding predicted y (that is, y − ŷ).

Important facts about the least squares regression line.

  • The point (x̄, ȳ) is on the line, where x̄ is the mean of the x values and ȳ is the mean of the y values.
  • Its form is ŷ = a + bx. (Note that b is the slope and a is the y-intercept.)
  • a = ȳ − bx̄.
  • The slope b is the change in the predicted value ŷ when x increases by 1.
  • The y-intercept a is the predicted value of y when x = 0.

r² in regression: The coefficient of determination, r², is the fraction of the variation in the values of y that is explained by the least-squares regression of y on x.

Calculation of r² for a simple example (see the sketch below):

r² = (SSM − SSE)/SSM, where

SSM = Σ(y − ȳ)² (sum of squares about the mean of y)
SSE = Σ(y − ŷ)² (sum of squares of the residuals)
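Here is a minimal Python sketch (with made-up, roughly linear data) computing r² from these two sums of squares:

    import numpy as np

    # Made-up data, roughly linear
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])

    b, a = np.polyfit(x, y, 1)  # least-squares slope and intercept
    y_hat = a + b * x           # predicted values

    ssm = np.sum((y - y.mean()) ** 2)  # sum of squares about the mean
    sse = np.sum((y - y_hat) ** 2)     # sum of squares of the residuals
    r_squared = (ssm - sse) / ssm
    print(r_squared)  # close to 1, since the data are nearly linear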

Calculating the regression slope and intercept

The sums of squares and cross-products of the data are used to derive the straight-line formula for regression, ŷ = bx + a, also called the regression equation. The slope b is calculated from the y's associated with particular x's in the data. The slope coefficient (written b_y/x, the slope of y on x) equals

$$b_{y/x} = \frac{\sum (x - \bar x)(y - \bar y)}{\sum (x - \bar x)^2},$$

and the intercept is then a = ȳ − bx̄, as noted above.
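A short NumPy sketch (same made-up data as above) applying these formulas directly:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])

    # Slope and intercept from the least-squares formulas
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    a = y.mean() - b * x.mean()

    residuals = y - (a + b * x)
    print(b, a)              # slope and intercept
    print(residuals.sum())   # essentially 0, as noted below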
THINGS TO NOTE:

  • Sum of deviations from the mean = 0.
  • Sum of residuals = 0.
  • r² > 0 does not mean r > 0. If x and y are negatively associated, then r < 0.