I have a new idea about measuring the distance. (Forget for a moment about water :) )
Assume I have a receiver whose sensitivity, and therefore its range, is constant and stable under most conditions.
The equation for the range (the free-space Friis equation, solved for distance) is:
D = (lambda / (4*pi)) * sqrt((Ptx * Gtx * Grx) / Prx)
D = Distance ("range")
Ptx = Transmit Power
Prx = Receiver Input Power (at the limit, the receiver's sensitivity)
Grx = RX Antenna Gain
Gtx = TX Antenna Gain
lambda = Wavelength
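To make the equation concrete, here is a minimal Python sketch (my own illustration, not from any library) that solves it for D. I'm assuming linear units throughout: powers in watts, antenna gains as plain ratios, wavelength in meters; the 2.4 GHz / -90 dBm numbers in the example are just placeholders.

```python
import math

def friis_distance(ptx_w, gtx, grx, prx_w, wavelength_m):
    """Free-space distance at which the receiver sees prx_w watts.

    Powers in watts, gains as linear ratios (not dB), wavelength in meters.
    """
    return (wavelength_m / (4 * math.pi)) * math.sqrt((ptx_w * gtx * grx) / prx_w)

# Example (assumed values): 2.4 GHz (lambda ~ 0.125 m), Ptx = 1 dBm (~1.26 mW),
# unity-gain antennas, receiver sensitivity -90 dBm (1e-12 W):
print(friis_distance(1.26e-3, 1.0, 1.0, 1e-12, 0.125))  # ~353 m
```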
Now, assume that we have a transceiver with programmable output power (up to 1 dBm).
In other words, we can change Ptx as we wish.
According to the equation, D is proportional to sqrt(Ptx), so if we change Ptx quadratically (Ptx = x^2) we get a linear change in "range"!
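A quick numeric check of that claim (again with assumed 2.4 GHz, unity-gain, -90 dBm numbers): stepping Ptx through x^2 milliwatts sweeps the range in equal distance increments.

```python
import math

WAVELENGTH_M = 0.125   # ~2.4 GHz, assumed for illustration
PRX_W = 1e-12          # -90 dBm receiver sensitivity, assumed
GAIN = 1.0             # unity-gain antennas on both ends

for x in (1, 2, 3, 4):
    ptx_w = (x ** 2) * 1e-3                        # quadratic steps: 1, 4, 9, 16 mW
    d = (WAVELENGTH_M / (4 * math.pi)) * math.sqrt(ptx_w * GAIN * GAIN / PRX_W)
    print(f"x={x}: Ptx={ptx_w * 1e3:4.0f} mW -> D={d:6.0f} m")  # D grows by ~315 m per step
```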
So if we want to discover the distance between transceiver A and transceiver B, we do the following:
1. Transceiver A sends a message to transceiver B at Ptx = 1 dBm (max). A writes into the message which Ptx was used to send it, plus the message's sequence number.
2. If transceiver B receives the message, it replies to A: "I was able to receive your message, try again."
Now we know that the distance is less than the range assumed before.
3. Transceiver A decreases Ptx (to (x - deltaGain)^2) and sends again to B.
4. If A hears nothing back from B within a reasonable time (meaning B never received the message), A knows that the Ptx of its previous, acknowledged message was the limit. Putting that Ptx into the equation
for "distance" gives the distance from A to B! (A sketch of this loop follows this list.)
The idea could be optimized...
What do you think?
BeNoM