I am just starting to learn the theory of electronics.
Coming from a probability and statistics background, I have a very deeply ingrained idea of the term "average".
To me, the average is the arithmetic mean. In other words, if I were to take a sample from whatever data I calculated my mean from, I have a pretty good chance of seeing a value near the mean.
It seems this definition is not the same in electrical engineering. The average current is defined as the change in charge over the change in time, much like velocity in physics. But no averaging is actually being done here, is it? You aren't summing anything in the numerator; instead you are dividing the change in charge by the change in time, which gives the amount of charge transferred per unit time.
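For reference, here is the definition I am working from, written out (this is just the change-in-charge-over-change-in-time form described above, taken between two times t_1 and t_2):

$$I_{\text{avg}} = \frac{\Delta Q}{\Delta t} = \frac{Q(t_2) - Q(t_1)}{t_2 - t_1}$$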
Imagine I were to draw a charge value from a list of charge values measured from a circuit. If I naively assumed "average" meant the above, I would expect any given charge measurement to land somewhere near that value. In reality, however, the average current defines the slope of a line (charge versus time).
How can I reconcile these two things so I can really understand what average means here?
EDIT:
After sitting with pen and paper for a while to justify it to myself, there is a single case that confuses me:
Let's use this data set:
Q = [0, 1, 3, 2, 4]
t = [0, 1, 2, 3, 4]
which gives the following (t, Q) data points:
(0, 0), (1, 1), (2, 3), (3, 2), (4, 4)
Finding the average current between t = 0 and t = 4 gives:
(4 C - 0 C) / (4 s - 0 s) = 1 C/s = 1 A
However, taking the arithmetic mean of the charge values:
1/5 * (0 + 1 + 3 + 2 + 4) = 2 C
The final value at time t = 4, where Q = 4, is throwing the calculation off.
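To make the comparison concrete, here is a small Python sketch of the two calculations above, plus (as an extra check of my own) the arithmetic mean of the per-interval currents rather than of the charge samples. The data and the first two results are exactly the ones above; the last part is not in the original working.

# Quick numerical check of the two notions of "average" on the data above.
Q = [0, 1, 3, 2, 4]   # charge samples, in coulombs
t = [0, 1, 2, 3, 4]   # sample times, in seconds

# Average current as defined in circuits: total change in charge over total time.
avg_current = (Q[-1] - Q[0]) / (t[-1] - t[0])
print(avg_current)    # 1.0  -> 1 A

# Arithmetic mean of the charge samples themselves (what I was computing).
mean_charge = sum(Q) / len(Q)
print(mean_charge)    # 2.0  -> 2 C (a charge, not a current)

# Arithmetic mean of the per-interval currents, delta-Q / delta-t on each step.
interval_currents = [(Q[k + 1] - Q[k]) / (t[k + 1] - t[k]) for k in range(len(Q) - 1)]
print(interval_currents)                                  # [1.0, 2.0, -1.0, 2.0]
print(sum(interval_currents) / len(interval_currents))    # 1.0  -> matches the 1 A above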