downshift - μ/acc
@downshift.eth
lately, i’ve been quietly working on data infra for dealing with flaky data sources (like Farcaster data and blockchain data…this isn’t a dig at Farcaster by any means; it’s just the nature of consensus-based protocols in general)
this stuff is non-trivial to get right because intermittent and complete failures of data providers are guaranteed, especially in times of massive usage and network congestion we are and will continue experiencing
robust data access patterns and thorough fallback strategies are absolutely essential to have any chance of creating reliable systems in these sorts of environments, especially if your goal is to minimize latencies
6 replies
18 recasts
53 reactions
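a minimal sketch, in TypeScript, of the kind of fallback pattern described in the cast above (the Provider shape and function names are assumptions, not anything specific from the thread): try each data provider in priority order with a per-provider timeout so a single slow node doesn’t blow up overall latency, and only fail if every provider fails.

```ts
// hypothetical provider shape: anything that can fetch a value of type T
type Provider<T> = {
  name: string;
  fetch: () => Promise<T>;
};

// reject if a single provider takes longer than `ms`, so one slow node
// doesn't determine the overall latency
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
    p.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

// walk the provider list in priority order and return the first success;
// collect per-provider errors so complete failures are still debuggable
async function fetchWithFallback<T>(providers: Provider<T>[], timeoutMs = 2000): Promise<T> {
  const errors: string[] = [];
  for (const provider of providers) {
    try {
      return await withTimeout(provider.fetch(), timeoutMs);
    } catch (err) {
      errors.push(`${provider.name}: ${(err as Error).message}`);
    }
  }
  throw new Error(`all providers failed:\n${errors.join("\n")}`);
}
```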

0xygendebt
@0xygendebt
What causes the data to lose integrity when it comes to consensus-based protocols? Is it based on the type? PoS vs PoH?
1 reply
1 recast
1 reaction

downshift - μ/acc
@downshift.eth
since there isn’t a single source of truth (like a centralized database in common web apps) we must rely on data replicating across many nodes and indexing through many bespoke and opaque means…data sources aren’t all guaranteed to be in a consistent state
this means some nodes or indexes will not be fully up to date, or might even have stale data due to their own implementations or network errors, caching, etc.
1 reply
0 recasts
2 reactions
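a rough sketch of one way to handle the staleness problem described above, assuming each node exposes some notion of a latest block height or sync cursor: poll every node in parallel and only read from the ones within a small lag of the freshest response. getLatestHeight is a hypothetical per-node call, not a specific API.

```ts
// hypothetical node shape: a URL plus a way to ask how far it has synced
type Node = {
  url: string;
  getLatestHeight: () => Promise<number>;
};

// how many blocks (or events) behind the freshest node we tolerate
const MAX_LAG = 3;

async function pickFreshNodes(nodes: Node[]): Promise<Node[]> {
  // query every node in parallel; failures simply drop that node from the pool
  const results = await Promise.allSettled(
    nodes.map(async (node) => ({ node, height: await node.getLatestHeight() })),
  );
  const healthy = results
    .filter(
      (r): r is PromiseFulfilledResult<{ node: Node; height: number }> =>
        r.status === "fulfilled",
    )
    .map((r) => r.value);

  if (healthy.length === 0) throw new Error("no nodes responded");

  // treat the highest reported height as the best-known tip,
  // and keep only nodes close enough to it to be considered consistent
  const best = Math.max(...healthy.map((h) => h.height));
  return healthy.filter((h) => best - h.height <= MAX_LAG).map((h) => h.node);
}
```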

0xygendebt
@0xygendebt
Is there then a way to determine precision based on multifactorial analysis? Using multiple nodes to provide a higher level of data redundancy, or a way to pre-validate at a certain resolution?
1 reply
0 recasts
1 reaction

downshift - μ/acc
@downshift.eth
getting blockchain data is generally not the challenge, but the raw data is akin to reading machine code…hard to interpret yourself, especially for complex contracts ($moxie is a good example of this in action)
so you either have to rely on centralized indexing services to get ergonomic data, or do it yourself (one is flaky, one is hard)
so using redundant data providers is the only practical option unless you have massive resources
0 replies
0 recasts
1 reaction
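as one concrete, hedged illustration of the redundant-providers approach from the cast above: viem ships a fallback transport that moves on to the next RPC endpoint when one errors, which covers the redundant data provider case without building the plumbing yourself. this is a sketch, and the URLs below are placeholders rather than recommendations of any particular provider.

```ts
import { createPublicClient, fallback, http } from "viem";
import { mainnet } from "viem/chains";

const client = createPublicClient({
  chain: mainnet,
  // if the first transport fails, viem falls through to the next one in the list
  transport: fallback([
    http("https://rpc.provider-one.example"),
    http("https://rpc.provider-two.example"),
    http("https://rpc.provider-three.example"),
  ]),
});

// reads go through whichever endpoint is currently responding
const blockNumber = await client.getBlockNumber();
console.log(`latest block: ${blockNumber}`);
```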