Math/Algorithm brain teaser - re-distribution of numbers

This is something I’ve been working on, but I’m not quite there yet - other priorities.
It’s also very difficult to explain, but I’ll try.


  • I have a set of numbers ranging from 0 to 1000
  • The set contains 600 values in that range
  • I have a variable called “compensationNumber” (cN) with a value of 210
  • Also another variable called sumOfRandomNumbers (sORN)

In other words, I have 600 random numbers ranging from 0 to 1000.

I need to subtract cN from ALL of those numbers evenly (except for the zeros). That is, the relevant samples (rS) might number 595. So I divide cN by rS: 210/595 ≈ 0.35.
I then subtract 0.35 from every relevant sample (rS). Now I’ve successfully subtracted cN EVENLY across all the numbers. I believe this is correct.

FINALLY (what this is all about), I want to re-distribute cN over the range according to a weighting, i.e. 10 gets less, 450 gets more, 900 even more, and so on.

What I’m thinking is to sum all the numbers (sORN), then divide cN by sORN. Whatever that number is, I then multiply it by each individual number (except zero) and add the result to the original number. Compensation should then be complete???
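To make the idea concrete, here is a small sketch in Python of the two steps described above - the function name, variable names and sample values are just made up for illustration:

```python
def compensate(samples, cN):
    """Subtract cN evenly from the non-zero samples, then add it
    back weighted by each sample's share of the total (sORN)."""
    relevant = [s for s in samples if s != 0]   # rS: skip the zeros
    even_cut = cN / len(relevant)               # e.g. 210/595 = 0.35...
    reduced = [s - even_cut if s != 0 else 0 for s in samples]
    sorn = sum(reduced)                         # sumOfRandomNumbers
    factor = cN / sorn
    # each non-zero sample gets back a share proportional to its size
    return [s + s * factor if s != 0 else 0 for s in reduced]

vals = compensate([100.0, 450.0, 900.0, 0.0], cN=210.0)
# the total is unchanged: the even cut removes 210, the weighted add returns 210
```

Note the overall sum is preserved, but larger values end up carrying more of the compensation, which is what the weighting is meant to do.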

This is hard to explain, hopefully the example is enough to be understood. If not then I’ll expand further.

Cheers :slight_smile:

I follow this part of the algorithm but I can’t say if it’s correct because I don’t know what it’s supposed to accomplish (other than the algo described :slight_smile: )

Wait, you said the set of numbers contains 600 values then later say the relevant samples (rS) might be 595. So is this set just around 600 values?

This needs clarification. What do you mean specifically by ‘re-distribute’. Linear, polynomial, logarithmic, exponential… there’s many ways to skew 10 less and others more.

Sounds simple, so I have obviously misunderstood.

But here goes…
A cN of 20 is as good as any.

Let’s consider those 20 pennies as an analogy.
You ‘pull’ those 20 pennies from your rS evenly, and stick them in a jar.
Now you have ‘20 pennies to hand out’, unevenly.

Sum your current samples to get a new sORN.
Then add to each: (actual sample value, the new rS) / (new sORN) * cN

if there were 2 samples, 10 and 90
And you wanted to hand out 20 (cN)

Then the first would get 10/100* 20 = 2
and the second would get 90/100 * 20 = 18
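Run as a quick check (Python here just to verify the arithmetic; the names are mine):

```python
samples = [10.0, 90.0]
cN = 20.0                      # the 20 'pennies' to hand out
sorn = sum(samples)            # 100
handout = [s / sorn * cN for s in samples]
print(handout)                 # handout[0] is 2.0, handout[1] is 18.0
```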

If I understand correctly, your goal is to eliminate all of the zero values, and then adjust the remaining values so that the total is still the same?

Thinking about this a bit more…
…of course the total isn’t going to change by deleting zero values, but I’m guessing that the intention is something along these lines.

Ok thanks people.

My fault - we can forget about zero. I’ll explain it a bit more clearly this time with some images. The numbers refer to kilograms of force as shown in the example graph:

My software records the thrust of solid fuel model rocket motors, then calculates the results. The hardware device and load cell are mounted similar to a standard set of electronic kitchen scales and the test motor is mounted on top vertically with the nozzle pointed upwards then tared to zero. The motor is mounted into the vertical ‘V’ shaped holder as shown (ex-motor):

The problem is that after the motor is fired, naturally the fuel has been consumed and therefore we end up with a ‘negative’ weight at the end of the graph (not shown in the example). This is useful because then we know what the fuel weight was for other calculations.

What I need to do is to adjust the graph to account for the loss of fuel, and do it over the whole time period. This is a two step process.

Step one was to divide the weight lost by the number of samples, then distribute that number (n) in a linear fashion over the time period, i.e. sample one + (1 x n), sample two + (2 x n), sample 100 + (100 x n), and so on. This worked well and the graph is adjusted so that the end of the graph now shows zero (as shown in the graph).
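For what it’s worth, Step one as described could be sketched like this in Python (function name and sample values invented for illustration):

```python
def linear_correction(samples, weight_lost):
    """Add the lost fuel weight back as a linear ramp so the
    final sample reads zero again."""
    n = weight_lost / len(samples)
    # sample 1 gets +1*n, sample 2 gets +2*n, ..., last gets +weight_lost
    return [s + (i + 1) * n for i, s in enumerate(samples)]

# a toy trace that ends at -1.2 (the consumed fuel weight)
corrected = linear_correction([5.0, 8.0, 3.0, -1.2], weight_lost=1.2)
# the last sample, which read -1.2 after burnout, now reads 0.0
```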

I could leave it at that, but it’s not entirely correct. Although the mean/average would be correct, the peak force is not, nor are any of the other readings across the timeline. It’s important to note that there are two main types of model rocket motors. There are ‘end burners’, where the solid fuel burns in a linear way (like a cigarette) and maintains an even amount of thrust, and there are ‘core burners’, where the fuel has an internal hollow core and burns from the inside out, with ever-increasing thrust (as in the graph).

If there were only ‘end burners’ then only Step one would be required. For example, you would see a graph showing a straight line going across the timeline, but at an angle, dipping down to the right. I want my software to work with both. The whole problem would not exist if the recording device was mounted sideways - but that is much more complicated and not very portable.

This graph shows the thrust curve of an off the shelf Estes D-12:

You’ll notice an initial peak thrust at the beginning - this is a combination of end and internal burner. There will be various thrust curves that the software needs to evaluate. If I just used the process described in step one, then it would unfairly assign less at the beginning than the end.

Therefore we need step two.

Looking at both graphs you’ll see the peak force. It’s fair to say that the amount of fuel consumed is directly proportional to the thrust produced. Therefore I want to redistribute the lost fuel weight correctly. I’ve already added it back in step one, so now what?

In other words, some of the areas of least thrust got more and vice versa. I could remove that initial weight - let’s say it was 120 grams of fuel - but this time remove the same amount from each sample, then redistribute it to each sample so that where there is more force it gets more, etc. Sorry, but I don’t know if the distribution is “linear, polynomial, logarithmic, exponential”??

I hope this makes more sense than my first post. It’s been very difficult to describe. I’ve probably left something out so please ask. It’s also possible that there only needs to be one single process but I can’t work it out.


I think step one causes the problem (artificially applying a linear function) and what you want to do is just what you described for “re-distribute cN over the range according to a weighting”.

Since weight loss is proportional to thrust, it will have the same graph shape as thrust. You just need to scale the graph’s y values so its integral sums to the negative weight at the end - well, sums to the absolute value of the negative weight, so they cancel. Once this scaling is found, apply it and add the integrated pieces to each value. This is basically what you described for re-distributing.

[code]// Sum up the thrust values into thrustSum first
scale = Abs( negativeWeight ) / thrustSum

sum = 0
for i = 0 to lastValue
  sum = sum + value(i) * scale
  value(i) = value(i) + sum
next[/code]
By the end, if what I’m saying makes sense, sum should equal Abs( negativeWeight ) and its effect has been proportionately/functionally applied throughout.

It might be more accurate to collect the sum after updating the value, or to split the sum somehow, since the weight changes with the thrust, not before or after it.

To clarify… the derivative of the weight loss graph will have the same shape as the thrust graph. That’s why it needs to be integrated.
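If it helps, the same idea as a small runnable sketch in Python (the function and variable names are mine, not from the posted pseudocode):

```python
def integral_correction(values, negative_weight):
    """Scale the thrust curve so its running sum (a discrete
    integral) totals the lost weight, then add that running sum
    back to each sample."""
    scale = abs(negative_weight) / sum(values)
    out, running = [], 0.0
    for v in values:
        running += v * scale   # cumulative weight lost so far
        out.append(v + running)
    # by the last sample, `running` equals abs(negative_weight)
    return out

out = integral_correction([1.0, 3.0, 2.0, 4.0], -2.0)
```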

Thanks Will, much appreciated. This is a lot to take in and understand, but that’s what I asked for :slight_smile:

That is the key.
That’s also why I still believe it’s a 2 step process. Step 1 is to “normalise” the data, Step 2 is to re-distribute the data in a fair and correct manner as described above.

I apologise for my lack of knowledge of the correct technical terms. This does make it harder to communicate on the same level. Nevertheless, I know in my “mind” what has to happen.

What I’m going to do is go back to basics. There’s no need to introduce this into my program yet. I’ll do some simple graphs - even if it has to be done on graph paper using a calculator. If the solution is correct, then 20-25 samples or integrals should also hold true.


I agree with Will’s comments. This is straightforward integral/differential calculus. Unfortunately, if you haven’t studied calculus, it won’t be obvious. The good thing is that, while the theory may not be obvious, the actual implementation should be very simple.

Ok, thanks Robert, but I find your comments rather terse and a bit condescending.

Obviously I lack understanding of “straightforward integral/differential calculus” otherwise the question would not have been asked. I don’t think the solution is as “obvious” or simple as you suggest.

I certainly don’t want a “spoon-fed” answer otherwise that would be boring. I’ll do my tests as described above. Sometimes I prefer to rely more on empirical evidence.

We don’t know what you don’t know. I would recommend a look at Khan academy for the topics of integrals and differentials. I promise it will help with your software.

Well, I had assumed you were familiar with calculus. I mean, I associate it with scientific research involving load cells and thrust plots :slight_smile:

Looking back at the original post…

…I’m not clear on the purpose of Step 1. The description is that “compensationNumber” (cN) is subtracted evenly across all the values

[code]piece = cN / totalNumberOfValues
for i = 0 to lastValue
  value(i) = value(i) - piece
next[/code]

But where does “compensationNumber” come from - what is it compensating for/accomplishing? I was under the impression it had to do with the negative weight you were talking about: that Step 1 is just adding the negative weight back into the graph so it ends at 0 instead of negative. Now I’m thinking that can’t be right.

OK, I think I’m too tired. In the long post that was how you describe the intent of Step 1: [quote]Step one was to divide the amount of samples by the weight lost, then distribute that number (n) in a linear fashion over the time period… the graph is adjusted so that the end of the graph now shows at zero[/quote]

This is what I’m picturing. A load cell measures force. So you place a motor on top, let the system settle, and mark the load cell reading as 0. Fire the motor, let it settle, and now the reading is a negative force. The challenge is really to separate out 2 graphs, one of just the thrust and one of the weight, because the load cell recording is a combination of both forces.

Is this what you’re after?
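To illustrate that picture with a toy simulation (Python, all numbers invented): the load cell reading is thrust minus the weight of fuel already burned, so the trace ends at minus the total fuel weight.

```python
thrust = [0.0, 4.0, 10.0, 6.0, 0.0]   # true thrust per sample (toy data)
# assume fuel burned per sample is proportional to thrust; 1.2 kg total
fuel_burned = [t / sum(thrust) * 1.2 for t in thrust]
measured, lost = [], 0.0
for t, burn in zip(thrust, fuel_burned):
    lost += burn
    measured.append(t - lost)
# measured[-1] is -1.2: the motor is done thrusting but 1.2 kg lighter
```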

The whole range has been reduced by 210, with higher numbers having a larger reduction?

If it is, let me know and I’ll send you the sheet.

Julian, the ratio between actual thrust and measured thrust is not constant. The two are equal initially and deviate as weight is lost.

Steve, can you share one of your measurements, the data? The entire curve, including the negative part. How many data points do you have per measurement?

Out of curiosity, why is there a peak in the thrust measured for the systems in which the rate of combustion is constant? Some initial effect/collision? That will induce an error if we are assuming that thrust and weight loss rate are proportional.


Here you go:

Apparently it’s the initial impulse of the thing lighting up /shrug (I’m no rocket scientist) :slight_smile:

Here you go, “Corrected curve” is what you’re after, which is “Orig” - “Mass Reduction”

The correction should increase the thrust value, not reduce it. At the end the measurement of thrust is negative, but the actual thrust is zero.