Do sentinels repeat?


This is probably a simple question that I'm going to overcomplicate. Do sentinels repeat off of each other? For example, the max range on a sentinel is 100 meters. Let's say I live 400 meters from a major road and conditions are perfect, like so:

[Home] --100m-- [A] --100m-- [B] --100m-- [C] --100m-- [Bridge]

As it is, sentinel A would make the majority of the handshakes. B can communicate with A, but would A transfer its witnesses to B? Would B transfer the information from A and B to C, and would C finally transfer all of the information from A, B, and C to the bridge?

In this scenario, I would give each of my three neighbors a sentinel to put on, say, their spare keys, which stay home 99.9% of the time. That would be beneficial if sentinels do repeat. If they do not, all three neighbors would have to pass within range of the bridge to upload, which would be inefficient. In that case I would instead have to give A a sentinel to put on something they travel with, such as their main keys, and hope they pass by the bridge, which they likely would not, since they travel toward the main road rather than deeper into the neighborhood. I would only collect information when I travel along the road with one or two of my other devices, and only for the short time I'm on the road. My bridge would likely see very little in the way of uploads. Yet if sentinels do repeat, the bridge would consistently get the information from all devices traveling along the main road.

As I said, I likely made this more complicated than it needs to be. I am just curious how the transfers travel before they reach a bridge device.
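The scenario above can be sketched as a toy simulation. This assumes, hypothetically, that sentinels do relay witness data to any neighbor within range; the device names, positions, and relay behavior are all illustrative assumptions, not confirmed XYO behavior.

```python
RANGE_M = 100  # stated max sentinel range in the scenario

# Positions along the road, in meters from the home (hypothetical layout)
positions = {"A": 100, "B": 200, "C": 300, "Bridge": 400}

def can_reach(x, y):
    """Two devices can exchange data if they are within RANGE_M of each other."""
    return abs(positions[x] - positions[y]) <= RANGE_M

def relay_to_bridge(start):
    """Hop witness data device-to-device down the line until it reaches the Bridge.

    Returns the set of devices the data passed through, or None if the
    chain is broken anywhere along the way.
    """
    order = ["A", "B", "C", "Bridge"]
    carried = {start}
    current = start
    for nxt in order[order.index(start) + 1:]:
        if not can_reach(current, nxt):
            return None  # chain is broken, data never arrives
        carried.add(nxt)
        current = nxt
    return carried
```

Under these assumptions, a witness originating at A would hop A to B to C to the bridge, which is exactly the question: whether each sentinel forwards what the previous one carried.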

Thank You,
Luis Medina

p.s. The graphic looked correct during writing and editing, but it doesn't post correctly. I think it still conveys what I'm saying.


And a follow-up question: does the connection method (wired vs. wireless) make a difference for the bridge? I can put it at the front of my place, by the road, on wireless, or at the back of my place on Ethernet. If speed and signal strength matter, I would hardwire it; if they don't play a factor, I would place it closer to the higher-traffic area on a Wi-Fi connection.


I'll make a semi-intelligent guess at this one and say the sentinels do not repeat. Imagine a very busy block in New York City with residences and businesses all within, say, 500 ft of one another, and say one day in the future 10% of the residences in that block participate in geomining. Any device working its way through town would trigger one or more sentinels as it approached one of the intersections near this city block. At that point, at least one sentinel nearest to the intersection would trigger at least one sentinel farthest from the intersection (<300 ft away), and if that sentinel repeated itself, it would trigger at least one sentinel farthest from itself (<300 ft). You now have three example sentinels in the verification chain, of which only one (the closest) is of any value when searched by an archivist. The other sentinels would only be repeating signals ad nauseam and wasting battery life. By having no signal repeats in the network, archivists are guaranteed that the furthest verification of location is <300 ft from the device being tracked, without having to "sort" a query of repeated signals, which in a highly dynamic world would be horrendously painful to maintain from a data perspective.
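The hop-distance point above can be reduced to simple arithmetic: each relay hop can add up to the radio range to the distance between the tracked device and the sentinel whose record ultimately reaches an archivist. The 300 ft figure and the linear worst case are assumptions carried over from the example, not XYO specifications.

```python
RANGE_FT = 300  # assumed sentinel radio range from the example above

def max_distance(hops):
    """Worst-case distance (ft) of the recording sentinel from the tracked
    device after the signal has traveled the given number of hops."""
    return RANGE_FT * hops

print(max_distance(1))  # no repeats: the record is within 300 ft
print(max_distance(3))  # two relays: the record could be up to 900 ft away
```

This is the guarantee being argued for: with no repeats, the worst case stays at one hop, so an archivist never has to reason about how far a record drifted through the relay chain.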
Of course, I could be wrong too, but this just seems most logical from an IT infrastructure perspective.


That makes sense; it depends on how much data is transferred per witness and how much sentinels can hold. On the archivist side, "duplicate" information can be merged or deleted: say the copy closest to the original is kept and the rest are purged, reducing the diviner load. It would be a good way to ensure data makes it to the network.

Say that of the hundreds of sentinels you're talking about, only 10 percent make it to a bridge device; all the rest of that data is lost. Yet if they repeat, the majority of the witnesses would make it to the network. Say there are sentinels A, B, C, and D, all in a line: A reached B, B reached C, and C reached D. A and D never performed a witness with each other, yet the A-B, B-C, and C-D witnesses would be transferred in their unaltered entirety through D, and three witnesses would reach the bridge and the network.

From your perspective, if all the copies eventually reach the bridge (network) it would be an overload of data, but once the witnesses are reported, the duplicate data can be purged from the system: either the earlier "repeated" information can be deleted in favor of data from the original source, or vice versa. It would be a way to ensure the most information reaches the network, which for this network is critical, and it falls in line with the "mesh" theory.

It would also be very beneficial for geominers. Say 100 people in my area have sentinels but only 2 have bridges, which is pretty likely in most areas. 95 percent of those witnesses would never reach the (bridge) network; only those who made contact with a bridge owner or happened to pass very near the bridge itself would get through. If data repeated, one witness could transfer the data of many, and when it did reach that bridge, it would all reach the network.

Just my two cents; both arguments are valid, and I guess it depends on the vision and resources of the company. Sorry if that was hard to follow; my thoughts were a little scattered, but I think I got my point across, lol.
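The merge-or-purge idea above could look something like this on the archivist side: if the same witness arrives via several relay paths, keep only the copy with the fewest hops from the original sentinel and drop the rest. The record format here (witness id plus hop count) is invented purely for illustration; it is not the actual XYO data model.

```python
def dedupe_witnesses(records):
    """records: list of (witness_id, hop_count) pairs as they arrive at
    the archivist. Keep only the lowest hop_count per witness_id."""
    best = {}
    for wid, hops in records:
        if wid not in best or hops < best[wid]:
            best[wid] = hops
    return best

# The same two witnesses arriving via several relay paths
incoming = [("w1", 3), ("w1", 1), ("w2", 2), ("w1", 2), ("w2", 5)]
print(dedupe_witnesses(incoming))  # keeps w1 at 1 hop, w2 at 2 hops
```

A single pass like this is cheap per record, which is the core of the argument that purging duplicates may be a manageable price for much better delivery.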


Ok, I can see your point. I think the guarantee comes down to the heuristic you wish to pursue and what it costs financially to pursue it. Here is what is being considered:
A. Have at least one sentinel at a designated location with at least one witness. This requires no purging of data, so there is no overhead for managing the purge.
B. Have at least one sentinel at a designated location with at least one witness that casts the signal down the line to the next sentinel, and the next, and the next. This is a linear progression of "extra" data (extra because the guarantee is already in place in the system with the first witness), so paying for further guarantees drastically diminishes the value of casting the signal further down the line; it also increases the work the algorithm must do to clean up the extra data.
C. Have at least one sentinel at a designated location with a minimum of two or more witnesses, each of which casts the signal down the line to two or more witnesses of its own. Now you begin to have an exponential increase in the number of guarantees in your system, which are nothing more than extra repeated data. This becomes an IT infrastructure nightmare from a single package; imagine a busy city in the future where everything seems to be tracked. In addition, how do you pay out on all those guarantees? After all, they are providing the service of a guarantee for that location.
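The difference between options B and C above is just the fan-out, and a back-of-envelope count makes the gap concrete. This is pure arithmetic under the assumptions stated in the list, with no XYO specifics: fan-out 1 grows linearly with hop count, fan-out 2 or more grows exponentially.

```python
def copies_after_hops(fan_out, hops):
    """Total witness copies in the system after `hops` relay rounds,
    where each copy spawns `fan_out` new copies per round."""
    return sum(fan_out ** h for h in range(hops + 1))

print(copies_after_hops(1, 5))   # option B: 6 copies after 5 hops
print(copies_after_hops(2, 5))   # option C: 63 copies after 5 hops
print(copies_after_hops(3, 5))   # fan-out 3: 364 copies after 5 hops
```

Each extra copy is a redundant guarantee that something downstream has to store, sort, and potentially pay out on, which is the cost side of the trade-off being argued here.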

This is why I think sentinels just validate location, a second sentinel validates the same, and that's it. The sentinels are not smart enough to hold all that data or to control who is providing the guarantee, so the network and the algorithms running it need to be highly efficient, use as little information as possible, and not get bogged down with "data maintenance" of extra guarantees. In the future the system probably won't have time for that; well-designed systems never do.

As for mesh networks, each node needs to know about the other nodes in order to control radio interference and provide the highest degree of data throughput. Sentinels are not designed to do anything like that, so mesh theory does not apply to their operation: they do not provide high data throughput and do not know about one another in the network. They don't need to. The algorithms running XYO can figure out where things are with as little as a single guarantee of ground-truth data (time and location).


I see what you're saying; it makes sense, as there is the possibility of a large amount of extra data. While I still think repeating would be beneficial, maybe it should be optional. For example, people in non-urban areas could set sentinels to repeat, or sentinels could repeat early in the rollout, and as the network grows, areas where the "extra" data is problematic could have repeating blocked through firmware updates. A large part of your assumption is that most or all witnesses make it to the network. Currently sentinels have a very small realistic range, 20-30 feet, and the same goes for bridges, so I believe the majority of witnesses will never make it into the system. Allowing sentinels to repeat would drastically increase the number of witnesses that arrive (although there would eventually be some repetition). I believe this provides a large value compared to the potential cost of processing repeated data. That being said, I do see how this may present a burden, so you're likely correct.
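The coverage claim above can be rough-checked with a small simulation: sentinels scattered along a line, a couple of bridges, and a short radio range, comparing how many sentinels can deliver a witness directly against how many could deliver via neighbor-to-neighbor relaying. All the numbers here are made up to illustrate the trade-off, not measured from real devices.

```python
import random

random.seed(1)
RANGE = 25                                      # ~20-30 ft realistic range
sentinels = sorted(random.uniform(0, 1000) for _ in range(100))
bridges = [200, 800]                            # only 2 bridges in the area

def direct(pos):
    """True if a sentinel at `pos` is within radio range of some bridge."""
    return any(abs(pos - b) <= RANGE for b in bridges)

def relayed(positions):
    """Sentinels that could deliver if data hopped neighbor-to-neighbor:
    start from those in direct range, then repeatedly absorb any sentinel
    within RANGE of an already-reachable one."""
    reachable = set(p for p in positions if direct(p))
    changed = True
    while changed:
        changed = False
        for p in positions:
            if p not in reachable and any(abs(p - q) <= RANGE for q in reachable):
                reachable.add(p)
                changed = True
    return reachable

direct_count = sum(direct(p) for p in sentinels)
relay_count = len(relayed(sentinels))
print(direct_count, relay_count)  # relaying can only widen coverage
```

How much relaying actually helps depends entirely on sentinel density: in a sparse layout the chains break quickly, while in a dense one nearly everything becomes reachable, which matches the urban-vs-rural distinction drawn above.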